
Whose Responsibility Is It to Police Content on Facebook?

Facebook is under fire to do something to stop the spate of abusive content.
Facebook CEO Mark Zuckerberg announced Wednesday that the social network giant would be adding 3,000 workers to the 4,500 people worldwide who currently monitor posts. Getty Images

Chewbacca Mom, who achieved viral fame on Facebook Live, may be one of the few rays of light on a platform that has been used to live stream suicides and murders — including the recent killing of a baby in Thailand and a grandfather who was seemingly shot at random on Easter Sunday in Cleveland.

Facebook is under mounting pressure to stem the abuse. Last week, CEO Mark Zuckerberg revealed he is hiring 3,000 additional human moderators, adding to a review team that is already 4,500 strong.

However, is it fair to place the onus solely on Facebook to tame the live video beast it has created?

"I think the community definitely has some responsibility in making it a better place and flagging this stuff so it can get reviewed by Facebook," Jen Golbeck, a professor at the University of Maryland's College of Information Studies, told NBC News.

"At the same time, Facebook is a company and they want to think about what there is to support," Golbeck said. "Do they want those sorts of communities existing on their platform and doing these things? You are responsible for monitoring what you host."

A Facebook representative declined NBC News' request for comment on where the moderators will be based and whether they will be Facebook employees or contractors.

Related: Cleveland Shooting Highlights Facebook’s Responsibility in Policing Depraved Videos

Zuckerberg, however, has framed the hiring push as vital to helping Facebook "get better at removing things we don't allow on Facebook, like hate speech and child exploitation."

"It gets to a larger question in life. When people misuse something, do we take it down for everyone else?" Dr. Nadine Kaslow, a professor at Emory University's department of psychiatry, told NBC News.
"I actually don't feel like they have to do that, to shut it down. That, to me, feels really extreme," she said.

Humans Helping Bots

Golbeck said that while it would be impossible for 3,000 additional people to help monitor every single live stream, she believes it's a "step in the right direction."

"It's not totally going to solve the problem," she said. Facebook will rely on a combination of reporting and algorithms to "help flag videos that need more human attention," she said.

Just last week, police in Georgia stopped a Facebook Live suicide as it unfolded. After a teenage girl began live streaming her plans to take her own life, authorities received a call from a "concerned adult" and were also alerted by Facebook as the incident developed. Officers reached the girl in time to save her.

But getting people to report an unsavory piece of content is a crucial part of the battle. Even when they do, the sheer volume of reports can slow the response.

Perhaps one of the most staggering examples: For one hour and 45 minutes last month, a video showing Robert Godwin's death on a Cleveland street remained on Facebook before it was reported, the company said in a blog post.

Related: When Seeing the Most Depraved Side of the Internet is Your Job

But the company has also seen its technology do its job.

Facebook's algorithms are getting smarter at identifying "posts as very likely to include thoughts of suicide," the company said in February. Once a post is flagged, a human member of Facebook's team reviews it and, if appropriate, reaches out to the person with resources.

"This is important," Zuckerberg said of Facebook's investment in safety tools. "Just last week, we got a report that someone on Live was considering suicide. We immediately reached out to law enforcement, and they were able to prevent him from hurting himself."

He added, "In other cases, we weren't so fortunate."

Twitter Still Struggling to Control Content

Facebook Live has been available to the masses for only about a year, and Golbeck said she believes it can weather its current difficulties, as long as Facebook shows it is "motivated enough to dedicate resources to solve this problem."

"If we look at other examples, Twitter has literally seen the value of its company suffer because they haven't dedicated themselves to solving the abuse and terribly offensive content people put up there — to the point where no one wanted to buy them even when they were ready to be bought," she said.

Twitter has stepped up its efforts in recent months, rolling out new tools such as notification filters and algorithms that can identify trolls and put them in a Twitter timeout of sorts, temporarily limiting their tweets to their followers only.

However, those improved tools arrive as Twitter enters its 11th year. Facebook Live, by contrast, is just a year old, and Golbeck said Facebook is in a key position to take control now, before things get worse.

"There's a lot of brain power and ingenuity at Facebook," she said. "I think right now they are feeling a lot of pressure and are getting a lot of bad press and feel like they need to solve it. I am totally confident they can."