
Facebook Is Cracking Down on Those 'Too Good to Be True' Posts

You should soon see fewer posts in your news feed touting miracle cures or supposed shortcuts to magically getting washboard abs in time for summer.
FILE PHOTO: The Facebook logo is displayed on the company's website in an illustration photo taken in Bordeaux, France, February 1, 2017. REUTERS/Regis Duvignau/File Photo

If Facebook's newest plan works, you should soon see fewer posts in your news feed touting miracle cures or supposed shortcuts to magically getting washboard abs in time for summer.

Facebook is tackling the "spammy" post problem with artificial intelligence, similar to the way the social network is taking on the fight against fake news.


"With this update, we reviewed hundreds of thousands of web pages linked to from Facebook to identify those that contain little substantive content and have a large number of disruptive, shocking or malicious ads. We then used artificial intelligence to understand whether new web pages shared on Facebook have similar characteristics," the company explained in a blog post on Wednesday.
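The two-step approach the blog post describes — hand-review a set of pages, then flag new pages with similar characteristics — amounts, in spirit, to scoring pages on content and ad features. The sketch below is purely illustrative: the feature names and thresholds are made-up assumptions, not Facebook's actual model or signals.

```python
# Hypothetical sketch of scoring a web page as "low quality" using
# features like those Facebook describes: little substantive content
# and a large number of disruptive ads. All thresholds are invented
# for illustration only.

def spam_score(word_count: int, ad_count: int, popup_ads: int) -> float:
    """Return a score in [0, 1]; higher means more likely low quality."""
    score = 0.0
    if word_count < 300:   # little substantive content
        score += 0.4
    if ad_count > 10:      # a large number of ads
        score += 0.4
    if popup_ads > 0:      # disruptive or shocking ad formats
        score += 0.2
    return score

def is_low_quality(word_count: int, ad_count: int, popup_ads: int,
                   threshold: float = 0.6) -> bool:
    """Decide whether a page crosses the (assumed) spam threshold."""
    return spam_score(word_count, ad_count, popup_ads) >= threshold
```

In practice a real system would learn such weights from the hand-labeled pages rather than hard-coding them; this sketch only shows the shape of the classification step.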

Related: Facebook Just Rolled Out Its Fake News Tool

The idea is to disrupt the economic incentives for posting such low-quality web pages, since spammers often make money when you click on that "too good to be true" link.

Facebook said the analysis will let the social media giant rank such posts much lower in your news feed, or make them ineligible to run as ads.

"This way, people can see fewer misleading posts and more informative posts," the company said.

Related: Social Media Companies Join Forces to Take Down Terror Content

Facebook, Twitter, and YouTube have all been criticized in recent months for their lack of control over the dissemination of false, inappropriate, or violent content on their platforms. In March, Google was forced to review its ad strategy after several major advertisers pulled their YouTube ads when it was revealed that some brands' ads were appearing alongside hate speech and extremist content. Facebook has been fighting a very public battle against fake news and violent live videos, and announced last week that it would hire an additional 3,000 people to monitor online content.