
Facebook Will Use A.I. to Flag Offensive Live Streams

Facebook is working on automatically flagging offensive material in live video streams, building on a growing effort to use artificial intelligence.
Facebook CEO Mark Zuckerberg speaks at the company's headquarters in Menlo Park, California, in 2013. (Marcio Jose Sanchez / AP file)
Source: Reuters

Facebook is working on automatically flagging offensive material in live video streams, building on a growing effort to use artificial intelligence to monitor content, said Joaquin Candela, the company's director of applied machine learning.

Facebook has historically relied mostly on users to report offensive posts, which Facebook employees then check against the company's "community standards."

Related: Police Shootings Test New Era of Live Media

Facebook says it is increasingly using artificial intelligence to find offensive material. (Peter DaSilva / EPA)

Candela told reporters that Facebook was increasingly using artificial intelligence to find offensive material. It is "an algorithm that detects nudity, violence, or any of the things that are not according to our policies," he said.
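Facebook has not published how this system works, but the basic idea Candela describes, scoring content against policy categories and flagging anything over a cutoff, can be sketched roughly as follows. Every name, label, and threshold below is invented for illustration and is not Facebook's actual code.

```python
# Illustrative sketch only: the labels come from Candela's quote, but the
# structure, names, and FLAG_THRESHOLD are assumptions for this example.
from dataclasses import dataclass

POLICY_LABELS = ("nudity", "violence")  # categories named in the article
FLAG_THRESHOLD = 0.8                    # assumed cutoff for flagging

@dataclass
class Flag:
    label: str
    score: float

def flag_content(scores: dict[str, float]) -> list[Flag]:
    """Return the policy categories whose score exceeds the cutoff.

    `scores` stands in for the per-label output of a vision model;
    how Facebook actually computes those scores is not public.
    """
    return [Flag(label, s) for label, s in scores.items()
            if label in POLICY_LABELS and s >= FLAG_THRESHOLD]

# Example: one piece of content scored by a (hypothetical) vision model
print(flag_content({"nudity": 0.05, "violence": 0.93}))
# -> [Flag(label='violence', score=0.93)]
```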

Related: Facebook Slammed for Censoring Iconic 'Napalm Girl' Photo

The automated system is also being tested on Facebook Live, the service that lets users broadcast live video.

Using artificial intelligence to flag live video is still at the research stage and poses two challenges, Candela said. "One, your computer vision algorithm has to be fast, and I think we can push there, and the other one is you need to prioritize things in the right way so that a human looks at it, an expert who understands our policies, and takes it down."
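Candela's second challenge, prioritization, amounts to ordering flagged streams so that human policy experts see the riskiest ones first. A minimal sketch of that idea using a priority queue, with all names and scores assumed for illustration rather than taken from Facebook's system:

```python
# Hedged sketch of review prioritization: score each flagged stream,
# then surface the highest-risk items to human reviewers first.
import heapq

review_queue: list[tuple[float, str]] = []  # max-heap via negated scores

def enqueue_for_review(stream_id: str, risk_score: float) -> None:
    """Add a flagged stream; a higher risk_score is reviewed sooner."""
    heapq.heappush(review_queue, (-risk_score, stream_id))

def next_for_review() -> str:
    """Pop the stream a human policy expert should look at next."""
    neg_score, stream_id = heapq.heappop(review_queue)
    return stream_id

enqueue_for_review("stream-a", 0.42)
enqueue_for_review("stream-b", 0.97)  # most urgent
print(next_for_review())  # -> "stream-b"
```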

Related: It Won't Be Easy to Fix Facebook's Fake News Problem

Facebook said it also uses automation to process the tens of millions of reports it receives each week, recognizing duplicate reports and routing flagged content to reviewers with the appropriate subject-matter expertise.
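The two automation steps mentioned here, collapsing duplicate reports and routing each item by subject area, can be sketched roughly as below. The report shape and routing table are assumptions for this example, not Facebook's real schema.

```python
# Illustrative sketch: deduplicate reports about the same post, then
# route each unique post to a reviewer queue matching its category.
from collections import defaultdict

ROUTING = {"nudity": "graphic-content-team", "bullying": "safety-team"}

def process_reports(reports: list[dict]) -> dict[str, set[str]]:
    """Group reports by target post, then route each post once."""
    seen: dict[str, str] = {}  # post_id -> first reported category
    for r in reports:
        seen.setdefault(r["post_id"], r["category"])  # drop duplicates
    queues: dict[str, set[str]] = defaultdict(set)
    for post_id, category in seen.items():
        queues[ROUTING.get(category, "general-review")].add(post_id)
    return queues

reports = [
    {"post_id": "p1", "category": "nudity"},
    {"post_id": "p1", "category": "nudity"},   # duplicate report
    {"post_id": "p2", "category": "bullying"},
]
print(process_reports(reports))
# -> {'graphic-content-team': {'p1'}, 'safety-team': {'p2'}}
```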

Chief Executive Mark Zuckerberg said in November that Facebook would turn to automation as part of a plan to identify fake news. However, determining whether a particular comment is hateful or bullying, for example, requires context, the company said.