After the recent spate of terrorist attacks inspired by the so-called Islamic State, lawmakers on both sides of the aisle have called for greater cooperation from social media companies like Facebook, YouTube and Twitter in combating hate propaganda.
To be sure, these companies already invest huge resources in preventing their platforms from being used as terrorists' loudspeakers. But with billions of users across the globe posting content daily, it's an uphill and expensive battle.
Susan Etlinger, an analyst at Altimeter Group who covers data intelligence, analytics and strategy, suggests a tighter partnership between the social platforms and the government could be the answer.
"The challenge is that the social platforms, they're not experts in homeland security. They do not necessarily know what they're looking for. For them to infer what they're looking for, it could exclude really important signals and include unimportant signals, and that would be overwhelming," she said.
"There is some very sophisticated technology for the government to use to try to understand what credible threats look like," she said.
Of course, any increased cooperation between social media companies and the authorities could drive bad actors further underground and violate user trust — a social network's No. 1 concern.
"We are talking about major issues that the public is wary about in terms of how the government, the NSA and other enforcement agencies are making use of user data," said Forrester analyst Nick Hayes. "From a resources perspective, if they really wanted to focus more on monitoring they could."
Facebook, YouTube and Twitter are not the only social platforms utilized by terrorists, but they are the most prominent such tools in the U.S. Each has long-standing policies in place to handle terror-related content and accounts, and all work with authorities and the community at large to thwart attacks.
Facebook has teams around the globe so it can review reported content 24 hours a day. Employees receive specialized training in identifying content that could be tied to terrorism; they speak several dozen local languages and also work with police. Experts say the language component is particularly important.
"The need to be able to translate in real time and determine if that indicates a real threat is very difficult," said Etlinger. "If Twitter incorrectly identifies you as someone who is going to buy a hybrid car, that is very different from them identifying you as someone who's in contact with ISIS."
Facebook closed an alias account set up by Tashfeen Malik, the woman linked to the San Bernardino, California, terror attack, the day after the shooting for violating its terms of service. The company also relies heavily on its 1.5 billion global users to flag anything that could be terror related. That content is fast-tracked to the front of the queue for review.
A spokesperson for Facebook told CNBC the company shares the government's goal of keeping terrorist content off its site and issued this statement:
"Facebook has zero tolerance for terrorists, terror propaganda, or the praising of terror activity and we work aggressively to remove it as soon as we become aware of it. If we become aware of a threat of imminent harm or a planned terror attack, our terms permit us to provide that information to law enforcement and we do."
In the first half of 2015, the company said it responded to 17,577 U.S. law enforcement requests for data on 26,579 users or accounts across all of its properties (Facebook, Messenger, WhatsApp and Instagram). Government requests for information have increased 53 percent since 2013, the year the company started publishing these figures (after the Edward Snowden revelations). The vast majority of those requests — 80 percent — produced some data. Facebook also receives government requests to remove content that violates the law, but does not make those numbers public.
Like Facebook, YouTube has teams in place around the globe, working around the clock to examine and remove content that could be terror related. It relies on its community of more than a billion users to flag content that violates its terms of service. In March 2014, the company launched the "Trusted Flagger" program to empower preapproved users, including individuals, organizations and the authorities, to do just that.
Given the sheer volume of content on the site, YouTube says it is impossible to monitor everything. A YouTube Trusted Flagger video explains the importance of its community when it comes to surfacing content that violates its terms of service: "Every minute, users upload hours of video to YouTube, and watch hundreds of millions of videos. With that many videos popping up on YouTube, we couldn't possibly watch them all. That's why we rely on our community of hundreds of millions of users to flag content that they believe is inappropriate."
Here's what a YouTube spokesperson said in a statement to CNBC:
"YouTube rejects terrorism and has a strong track record of taking swift action against terrorist content. We have clear policies prohibiting terrorist recruitment and content intending to incite violence and quickly remove videos violating these policies when flagged by our users. We also terminate accounts run by terrorist organizations or those that repeatedly violate our policies. We allow videos posted with a clear news or documentary purpose to remain on YouTube, applying warnings and age-restrictions as appropriate."
According to the most recent numbers posted by Alphabet, YouTube's parent company, it received 9,981 U.S. government requests for user data in the first half of 2014, an 8.6 percent decrease year over year. YouTube also fields U.S. government requests to remove illegal content from its platform — in the second half of 2014 (the most recent period for which the company makes data available) it received 3,523 U.S. government content removal requests.
For Twitter, the number of U.S. government requests for user data has skyrocketed since 2013 — the company has seen a 170 percent increase in just the past two years. In the first half of 2015, it fielded 2,436 U.S. government requests, with 80 percent of those requests producing some information. (Like Facebook and YouTube, Twitter receives such requests from foreign governments as well.)
Twitter also fields content removal requests; it received 25 from U.S. authorities covering 71 accounts and complied with all of them.
"Violent threats and the promotion of terrorism deserve no place on Twitter and our rules make that clear. We have teams around the world actively investigating reports of rule violations, and they work with law enforcement entities around the world when appropriate," the company said in a statement. It also provides detailed guidelines for law enforcement on how to request information about Twitter accounts through a valid legal process.
Although Twitter has been criticized for allowing the proliferation of ISIS-related accounts on its platform, closing down accounts linked to terrorists may not be particularly effective — with new accounts cropping up as soon as others are shut down — or even very desirable.
"It becomes this whack-a-mole issue that you have to deal with," said Hayes.
"When you do enable these accounts to remain online, you in some sense are able to gain more intelligence. You start collecting more information about the individuals and groups if you keep them more active," he said.
Ultimately, said Hayes, it comes down to privacy versus national security. "There really is no easy answer to this," he said.