Social media firms including Facebook, Twitter and YouTube are "consciously failing" to stop their sites from being used to promote terrorism and recruit extremists, U.K. lawmakers claimed in a report released on Thursday.
The Commons home affairs select committee, which is made up of British members of parliament (MPs), said that U.S. platforms have become the "vehicle of choice in spreading propaganda" and urged the technology giants to do more to remove extremist content.
"These companies are hiding behind their supranational legal status to pass the parcel of responsibility and refusing to act responsibly in case they damage their brands," the report said.
"If they continue to fail to tackle this issue and allow their platforms to become the 'Wild West' of the internet, then it will erode their reputation as responsible operators."
The lawmakers' accusations come after British authorities made a number of attempts to get Twitter posts and YouTube videos by radical Muslim preacher Anjem Choudary taken offline. Choudary was found guilty by a U.K. court last week of supporting Islamic State.
Social media companies have been taking steps to fight extremist material. A Twitter spokesperson noted that the company had suspended 235,000 accounts related to the promotion of terrorism since February.
Google told MPs that it has a "trusted flagger" program that lets approved users highlight content they have concerns about, which is then reviewed by YouTube staff. The report said that Google claimed an accuracy rate of 90 percent for trusted flaggers. Facebook and Twitter told MPs that they did not have similar schemes but "did have arrangements with government agencies", according to the report.
"We take our role in combatting the spread of extremist material very seriously. We remove content that incites violence, terminate accounts run by terrorist organisations and respond to legal requests to remove content that breaks UK law. We'll continue to work with Government and law enforcement authorities to explore what more can be done to tackle radicalisation," a YouTube spokesperson told CNBC in an email.
Simon Milner, director of policy for Facebook in the U.K., said the social network deals "swiftly and robustly" with reports of terrorism-related content.
"In the rare instances that we identify accounts or material as terrorist, we'll also look for and remove relevant associated accounts and content," Milner said.
"Online extremism can only be tackled with a strong partnership between policymakers, civil society, academia and companies. For years we have been working closely with experts to support counter speech initiatives, encouraging people to use Facebook and other online platforms to condemn terrorist activity and to offer moderate voices in response to extremist ones."
Still, lawmakers said that the companies' methods of rooting out extremist content are insufficient.
"It is therefore alarming that these companies have teams of only a few hundred employees to monitor networks of billions of accounts and that Twitter does not even proactively report extremist content to law enforcement agencies," MPs said.
To address this, the report recommends that social media firms be required to publish quarterly statistics showing how many sites and accounts they have taken down and for what reason. Facebook and Twitter should implement a trusted flagger system like YouTube's, and these companies must be willing to extend it to smaller community organizations to help highlight terrorist material, the MPs said. The lawmakers also called for closer co-operation between tech firms and law enforcement agencies.