
Inside the government agency designing tech to fight fake news

The team that brought you driverless cars and Siri is now fighting a new kind of war on information.

In the world of internet memes, Instagram filters, and Photoshop, seeing is no longer believing. Anyone with a smartphone is a photographer or videographer, and those phones often come with editing tools preinstalled. But manipulations that started with cropping and teeth-whitening have escalated into engines of misinformation.

Parkland shooting survivor Emma Gonzalez is pictured ripping up a shooting range target, left. At right, a doctored photo that circulated on the Internet depicts Gonzalez tearing apart the U.S. Constitution.

Take this photo of Parkland shooting survivor Emma Gonzalez, seemingly ripping up the U.S. Constitution. The image went viral following last month's March for Our Lives, where Gonzalez attracted widespread media attention after addressing hundreds of thousands of gun control supporters. The photo sparked outrage on social media, with users attacking Gonzalez's motives and her patriotism.

But the image had been doctored. In the original photo, Gonzalez was tearing up a shooting range target; the shot was part of a Teen Vogue cover story on the #NeverAgain movement.

It was an example of what’s known as a “deep fake”: a digital impersonation in photo, video or audio form, created easily with machine-learning algorithms. With manipulated media popping up and circulating on social media in recent years, experts worry that deep fakes will increasingly dupe the public.

The U.S. Department of Defense is fighting back. After testing projects like driverless cars and early iterations of Siri, Apple's virtual assistant, years ahead of their release to the public, the Defense Advanced Research Projects Agency is now taking on fake news.

The agency was originally formed in 1958 in response to the successful — and unexpected — Soviet launch of the Sputnik satellite into space.

“We have a mission within DARPA to invest in breakthrough technologies that prevent strategic surprise,” said David Doermann, program manager of DARPA’s new media forensics project called MediFor, “and essentially guarantee national security.”

Doermann’s team of researchers is working to create an automated tool to detect manipulations and then provide detailed information about how a photo, video or audio file was altered.

“If our adversaries are able to generate material that can spread quickly, they can generate all of this in a disinformation campaign,” Doermann warned, emphasizing that people should approach anything they see online with skepticism, since its authenticity could be challenged in one way or another.

NBC News saw the technology firsthand as it flagged differences that the naked eye wouldn’t notice.

Image: DARPA

For example: This photo of race cars looks legitimate, with five cars speeding along the track. But the orange and white car wasn’t originally in the photo. MediFor’s tech can run a heat map to identify where an image’s compression statistics, an artifact known as a “JPEG dimple,” differ from the rest of the photo. Here, the heat map highlights the car in question in red; its pixelation and image statistics differ from those of the other cars in the photo, showing that the car was added.
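To give a rough sense of how a check like this might work, here is a minimal sketch in Python. It is not DARPA’s MediFor code: the 8x8 block size, the DCT-energy statistic and the z-score threshold are illustrative assumptions standing in for the JPEG-dimple analysis described above.

```python
# Minimal sketch of a block-statistics heat map for splice detection.
# NOT DARPA's MediFor code; block size, statistic and threshold are
# illustrative assumptions.
import numpy as np
from scipy.fftpack import dct

def block_dct_energy(gray, block=8):
    """Mean absolute high-frequency DCT energy for each block of a grayscale image."""
    h, w = gray.shape
    h, w = h - h % block, w - w % block
    stats = np.zeros((h // block, w // block))
    for i in range(0, h, block):
        for j in range(0, w, block):
            patch = gray[i:i + block, j:j + block].astype(float)
            coeffs = dct(dct(patch, axis=0, norm='ortho'), axis=1, norm='ortho')
            coeffs[0, 0] = 0.0  # drop the DC term; keep AC (detail) energy
            stats[i // block, j // block] = np.abs(coeffs).mean()
    return stats

def anomaly_heatmap(gray, z_thresh=2.5):
    """Flag blocks whose statistics deviate strongly from the image-wide norm."""
    stats = block_dct_energy(gray)
    z = (stats - stats.mean()) / (stats.std() + 1e-9)
    return np.abs(z) > z_thresh  # True marks a suspicious block
```

The intuition is the same as in DARPA’s demo: a region pasted in from another image tends to carry a different compression history than its surroundings, so its blocks stand out in the map.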

In this next sample, two men sit side by side on a couch, reading from separate sheets of paper. In reality, each man was filmed sitting on the couch alone. The two videos were then merged to create the final product shown in this still image.

Image: DARPA

The University of Maryland’s team of researchers working with DARPA developed this indicator tool to flag problem spots in the video. The tool detects light levels and the direction the light is coming from, using arrows to point out the differences and show that the original videos were shot at different times. The indicator turns red when the content is suspicious, a literal red flag.
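Below is a rough sketch of a lighting-consistency check in the spirit of that indicator. It is not the University of Maryland tool: estimating light direction from average image gradients is a crude stand-in for real illumination analysis, and the 30-degree mismatch threshold is an assumption made up for illustration.

```python
# Crude sketch of a lighting-consistency check. NOT the University of
# Maryland tool; the gradient-based direction estimate and the 30-degree
# threshold are illustrative assumptions.
import numpy as np

def light_direction(gray_region):
    """Estimate a dominant 2-D illumination direction from intensity gradients."""
    gy, gx = np.gradient(gray_region.astype(float))
    # The average gradient points from dark toward light, a rough proxy
    # for where the light falls in the image plane.
    v = np.array([gx.mean(), gy.mean()])
    return v / (np.linalg.norm(v) + 1e-9)

def lighting_mismatch(region_a, region_b, max_angle_deg=30.0):
    """Red-flag two regions whose estimated light directions disagree."""
    a, b = light_direction(region_a), light_direction(region_b)
    angle = np.degrees(np.arccos(np.clip(np.dot(a, b), -1.0, 1.0)))
    return angle > max_angle_deg  # True = suspicious (raise the red flag)
```

Applied to crops of the two men’s faces, a check like this would report a mismatch when the footage was lit from different directions, which is what the arrows in DARPA’s demo visualize.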

Automated tools like these have been in development for about 20 months and will continue to be refined through 2020, Doermann said. DARPA then hopes to work directly with Silicon Valley tech companies to implement the analytics on their platforms, identifying questionable online content in a matter of seconds.

“Really, today, nothing [online] is authentic,” said Doermann, noting that further uses for the automation could include the FBI and the military, as well as court evidence and insurance fraud investigations.