According to a classic 1993 New Yorker cartoon, “on the Internet, nobody knows you’re a dog.” But that cartoon is taking on a whole new meaning in the era of increasingly powerful artificial intelligence (AI) technology. Case in point: Google has just debuted a virtual assistant that takes human impersonation to a whole new level. Google Duplex can make appointments and book reservations over the phone with impersonation so accurate that it will be difficult to know whether you are talking to a person or to a Google bot.
It seems like it is only a matter of time before AI systems engage with us in a broad range of higher-stakes conversations. In other words, on the internet and on the phone, nobody will know whether you’re a person or an AI.
From a technology perspective this is impressive, but from an ethical and regulatory perspective it’s more than a little concerning. In response, we need to insist on mandatory full disclosure. Any AI should declare itself to be one, and any AI-generated content should be clearly labeled as such. We have a right to know whether we are sharing our thoughts, feelings, time, and energy with another person or with a bot.
We deserve to know when our conversations are with other humans, or with machines masquerading as humans. When AI hides itself, the results can seem harmless or even humorous, as in the case of Lenny, a simple bot designed to waste telemarketers’ time.
But the stakes go up quickly. A more eyebrow-raising example is “Jill Watson,” a Georgia Tech teaching assistant that was actually an AI. Students interacted with Jill in an online class forum, and her true nature was hidden from the class until after the final exam. Over the course of the class, one astute student noted “..if there is anything this class has taught me, is that i should always question if someone ive met online is an AI or not [sic]”.
While Jill was intended to inform and educate, we’ve already seen AI that is directly intended to influence human behavior or decision-making. Social media marketing companies sell access to Twitter bots — false accounts copied from real users — that, for a price, automatically share a client’s content to boost its popularity.
These bots are also used across networks to spread disinformation (“fake news”) intended to sway and amplify the opinions of voters. Twitter ultimately identified tens of thousands of automated accounts linked to Russia that were active in the months preceding the 2016 election, collectively responsible for hundreds of millions of tweets.
Similar bots on Facebook are deployed to disseminate false news articles or images with bombastic claims. Facebook estimates that fake news spread by Russian-backed bots from January 2015 to August 2017 reached potentially half of the 250 million Americans who are eligible to vote.
Recent AI technologies take impersonation and intentional misrepresentation even further. We’ve long been familiar with doctored images and the magic of Photoshop, but advances in machine learning and image processing now allow the creation of incredibly realistic fake video. Researchers dramatically demonstrated this new capability with AI-generated video of President Barack Obama speaking phrases that were previously only audio clips.
Then came the “deepfakes,” AI-generated videos of entirely new facial expressions of a target person, created by stitching together two faces in an eerily convincing way. This face-swapping technology is already inexpensive and accessible enough that it has started appearing in pornography, with the faces of several high-profile celebrities inserted into compromising videos, leaving them with little legal recourse. A recent viral video of Obama issuing a warning about deepfakes was, itself, a fake.
Which brings us back to the seemingly harmless Google Assistant. When AI is used to covertly impersonate real people, real damage can happen. On a more personal scale, malicious AI posing as trusted institutions or friends could scam unwitting or vulnerable people out of their money or their personal information. To believe that your Google Assistant can’t be hacked or manipulated is to be dangerously naïve.
My call for labeling AI echoes an earlier call by professor and leading AI researcher Toby Walsh, and a broader set of ideas on regulating AI. It is not a coincidence that many of the people calling for regulation are also the people helping to develop this technology. Because who better to understand the risks than those of us working hard to maximize AI’s rewards?
AI is evolving rapidly, and our rules, conventions, and norms have to keep up. To listen to Google Duplex is to hear a Pandora’s box creaking open. It is a canary in the proverbial coal mine, warning us that private, corporate, and government actors can and assuredly will use AI for nefarious ends. We have mandatory labeling for genetically modified organisms (GMOs), but we don’t yet have mandatory labeling for AI-generated voices, pictures, and videos. Let’s change that before it’s too late.
Oren Etzioni is the CEO of the Allen Institute for AI and a professor of computer science at the University of Washington.
Carissa Schoenick is the senior program manager and communications director at the Allen Institute for AI.