Fake news is fueled in part by advances in technology: bots that automatically fabricate headlines and entire stories, computer software that synthesizes Donald Trump’s voice and makes him appear to read tweets aloud, and a new video editing app that stitches one person’s face onto another person’s body in authentic-looking video.
But technology, in the form of artificial intelligence, may also be the key to solving the fake news problem — which has rocked the American political system and led some to doubt the veracity even of reports from long-trusted media outlets.
Experts say AI systems could help fill the gaps left by Snopes, Truth or Fiction, and other online fact-checking outlets, whose human fact-checkers lack the bandwidth to evaluate every article that appears online. These systems could also work alongside the various fake news alert plugins available from the Chrome Web Store, such as the browser extension This is Fake, which uses a red banner to flag debunked news stories in your Facebook news feed.
“All of the current systems for tracking fake news are manual, and this is something we need to change as the earlier you can highlight that a story is fake, the easier it is to prevent it going viral,” says Delip Rao, founder of the San Francisco-based AI research company Joostware and organizer of the Fake News Challenge, a competition set up within the AI community to foster development of tools that can reliably spot fake content.
Fighting the fakers
At last month’s World Economic Forum in Davos, Switzerland, Google and Facebook announced plans to develop AI systems that would notify users about dubious content. Google has floated the idea of a “misinformation detector” browser extension that would alert users if they land on a link deemed untrustworthy.
But while these plans have yet to be put into action, an Israeli startup called AdVerif.ai has already begun fighting back against the fakers.
“There are reports which are predicting that within three to four years, people in advanced economies will consume more false content than true content, which is really mind-blowing,” says company founder Or Levi. “But because a lot of this content is recycled and repeated in different ways, we believe we can use AI to pinpoint trends which detect it as being fake.”
In November, AdVerif.ai launched an AI-based algorithm that the company claims can identify fraudulent stories with an accuracy approaching 90 percent. The algorithm’s development has been bankrolled by advertising networks across the U.S. and Europe, with major brands like Adidas and Nike keen to avoid being associated with fake news.
The company intends to launch a browser plug-in that would display a pop-up warning if you landed on a suspect story.
To train its algorithm, AdVerif.ai fed it thousands of news stories, both legitimate and fraudulent. Fraudulent stories tend to differ in subtle ways: heavy use of adverbs, adjectives, and slang; simple sentence structures; and relatively few commas and quotations. The algorithm learned to spot these psycholinguistic cues and render a judgment: fake or real.
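AdVerif.ai hasn’t published its model, but the kind of stylistic scoring it describes can be illustrated with a toy sketch. The feature set (adverb and comma rates, quotation marks, sentence length) follows the cues named above; the weights and the scoring rule are invented for illustration, not the company’s actual method.

```python
import re

def style_features(text):
    """Extract a few psycholinguistic cues of the kind described above.

    These are crude proxies: words ending in 'ly' stand in for adverbs,
    and punctuation counts are normalized by word count."""
    words = re.findall(r"[A-Za-z']+", text)
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    n_words = max(len(words), 1)
    return {
        "adverb_rate": sum(w.lower().endswith("ly") for w in words) / n_words,
        "comma_rate": text.count(",") / n_words,
        "quote_rate": text.count('"') / n_words,
        "avg_sentence_len": n_words / max(len(sentences), 1),
    }

def suspicion_score(text):
    """Toy linear score over the cues: heavy adverb use pushes the score
    up; commas, quotations, and longer sentences pull it down.
    The weights are assumptions, not AdVerif.ai's."""
    f = style_features(text)
    return (5.0 * f["adverb_rate"]
            - 10.0 * f["comma_rate"]
            - 5.0 * f["quote_rate"]
            - 0.02 * f["avg_sentence_len"]
            + 0.5)
```

A real system would learn the weights from labeled examples rather than hand-set them, but the principle is the same: the classifier never reads the claims, only the style.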
But Levi says the algorithm isn’t foolproof because it lacks the ability to assess the accuracy of purported facts within articles.
“Right now, a story could say that New York is the capital of Uganda and the algorithm may not flag it because it doesn’t have a database of common facts,” Levi says. “Current forms of AI can look at the style of the language, and the topic that the text is discussing, but it can’t figure out the meaning behind statements.”
This could change soon. The next version of AdVerif.ai will use natural language processing to verify assertions made in articles against trusted online content, like that published by Wikipedia and the World Bank eLibrary.
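The gap Levi describes, and the fix he plans, can be sketched in miniature. In the toy example below, a small hand-built fact table stands in for a trusted corpus like Wikipedia; the pattern matching, the table, and the function name are all hypothetical simplifications of what a natural language processing pipeline would do.

```python
import re

# Stand-in for a trusted reference corpus such as Wikipedia.
TRUSTED_FACTS = {
    ("capital", "Uganda"): "Kampala",
    ("capital", "United States"): "Washington, D.C.",
}

def check_capital_claim(sentence):
    """Check claims of the form 'X is the capital of Y'.

    Returns True or False for a claim the fact table can settle,
    and None for anything the system cannot parse or verify."""
    m = re.match(r"(.+?) is the capital of (.+?)\.?$", sentence.strip())
    if not m:
        return None  # not a recognizable factual claim
    city, country = m.group(1).strip(), m.group(2).strip()
    known = TRUSTED_FACTS.get(("capital", country))
    if known is None:
        return None  # claim is outside the trusted reference
    return known.lower() == city.lower()
```

With such a check in place, Levi’s example sentence about New York and Uganda would be flagged as false, while a statement of opinion simply falls through to None, which is exactly the limitation Rao describes next.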
Levi acknowledges that Wikipedia isn’t 100 percent reliable but says it’s “accurate enough that it can have practical applications.”
Even with AI systems that can check purported facts, fake news stories could slip by without being flagged. That’s true in particular for stories that include opinions and other statements that defy easy assessment.
“Right now machines cannot evaluate more complicated statements, ones which you cannot quantify,” Rao says. “Statements like ‘Trump is the best U.S. president’ can’t easily be measured, so it’s very hard for AI to compute whether they’re true or false.”
The latest breed of image and video manipulation tools further complicates the task facing AI researchers.
“The problem we have is that the same AI tools which are allowing us to fight fake news are also allowing the fakers to create content which is ever more difficult to separate from reality,” Rao says.
Levi thinks cyberspace will soon become a battleground of competing intelligent systems — some creating fake media and others searching for the subtle cues that mark it as such.
AI experts are grappling with ways to identify fake photos, and identifying fake videos is even more challenging. “These latest apps have left the AI community playing catch-up,” Levi says of the tools used by the creators of fraudulent images and videos.
Ultimately, Levi believes artificial intelligence may effectively neutralize the threat posed by fake news. But he’s unsure when that day will come.
“It’s an information arms race, and AI will definitely provide us with some tools to help,” he says. “But at the end of the day, the onus will probably always be on humans to use their own intuition to decide whether something is true or not.”