
Online product reviews are becoming a battlefield for modern AI

Just what is or isn’t a fake review is now less clear, and the technology to detect fraudulent reviews is still a work in progress.
[Image: Four and a half stars, gradually pixelating from left to right. AI technologies are squaring off in the realm of online product reviews. Owen Berg / NBC News]

On the battlefield of online reviews, it’s AI vs. AI.

Generative artificial intelligence that can spit out human-sounding reviews is being met by AI trained to detect fake reviews. It’s the kind of clash that has implications for consumers as well as the future of content online.

Saoud Khalifah, founder and CEO of Fakespot, a startup that uses AI to detect fraudulent reviews, said his company has seen an influx of AI-generated fake reviews. Fakespot is working on a way to detect content written by AI platforms like ChatGPT.

“The thing that is very different today is that the models are knowledgeable to a point where they can write about anything,” he said.

Fake online reviews have been around about as long as real online reviews, but the issue has taken on new urgency thanks to broader concerns about advanced AI technology that is now widely available on the internet.

After years of policing the issue through case-by-case enforcement, the Federal Trade Commission last month proposed a new rule to crack down on fraudulent reviews. If approved, the rule would ban writing fake reviews, paying for reviews, concealing honest reviews and other deceptive practices — and deliver hefty fines to those who break it.

But just what is or isn’t a fake review is now less clear, and the technology to detect fraudulent reviews is still a work in progress.

“We don’t know — really have no way to know — the extent to which bad actors are actually using any of these tools, and how much may be bot-generated versus human-generated,” Michael Atleson, an attorney in the FTC’s Division of Advertising Practices, said. “It’s really more of a serious concern, and it’s just a microcosm of the concerns that these chatbots are going to be used to create all kinds of fake content online.”

There are some indications that AI-generated reviews are already common. CNBC reported in April that some reviews on Amazon had clear indications of AI involvement, with many starting off with the phrase, “As an AI language model ...”
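The crudest detection relies on exactly that kind of telltale boilerplate. As a minimal illustrative sketch (not Fakespot's or Amazon's actual method, and the phrase list here is an assumption beyond the one phrase CNBC reported), a naive filter might simply flag reviews containing known chatbot disclaimers:

```python
# Illustrative phrase-based filter for obvious chatbot boilerplate.
# The phrase list is hypothetical except for the one noted in the CNBC report.
TELLTALE_PHRASES = [
    "as an ai language model",
    "i cannot form personal opinions",
]

def flag_obvious_ai_review(text: str) -> bool:
    """Return True if the review contains an obvious chatbot disclaimer."""
    lowered = text.lower()
    return any(phrase in lowered for phrase in TELLTALE_PHRASES)

reviews = [
    "As an AI language model, I cannot try this product, but it seems great!",
    "Bought this for my dad and he loves it. Battery lasts all week.",
]
print([flag_obvious_ai_review(r) for r in reviews])  # [True, False]
```

A filter this simple catches only careless copy-pasting; a fake-review farm that trims the disclaimer defeats it instantly, which is why researchers quoted below are skeptical of text-only detection.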

Amazon is among the many online sellers that have battled fake reviews for years. A spokesperson said the company receives millions of reviews each week and that it proactively blocked 200 million suspected fake reviews in 2022. The company uses a combination of human investigators and AI to spot fake reviews, employing machine learning models that analyze factors like a user’s review history, sign-in activity and relationship to other accounts.
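Behavioral signals like those can be combined even without analyzing the review text itself. The sketch below is purely hypothetical; the feature names, thresholds, and weights are invented for illustration and do not describe Amazon's actual models:

```python
# Hypothetical sketch of combining account-level fraud signals into a risk
# score. All features and weights are invented for illustration only.
from dataclasses import dataclass

@dataclass
class ReviewerSignals:
    reviews_last_day: int         # a burst of reviews can suggest a farm account
    account_age_days: int         # very new accounts carry higher risk
    linked_flagged_accounts: int  # ties to previously flagged accounts

def suspicion_score(s: ReviewerSignals) -> float:
    """Combine signals into a 0..1 risk score (illustrative weights)."""
    score = 0.0
    if s.reviews_last_day > 10:
        score += 0.4
    if s.account_age_days < 7:
        score += 0.3
    score += min(0.3, 0.1 * s.linked_flagged_accounts)
    return min(score, 1.0)

# A brand-new account posting in bulk, linked to flagged accounts, maxes out.
print(suspicion_score(ReviewerSignals(25, 2, 3)))  # 1.0
```

The advantage of account-level signals is that they sidestep the text-matching problem entirely: even a perfectly human-sounding review looks suspicious if the account behind it behaves like a bot.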

Further complicating the issue is the fact that AI-generated reviews aren't entirely against Amazon's rules. An Amazon spokesperson said the company allows customers to post AI-generated reviews as long as they are authentic and don’t violate policy guidelines.

The e-commerce giant has also indicated that it could use some help. In June, Dharmesh Mehta, Amazon’s vice president of worldwide selling partner services, called in a company blog post for more collaboration between “the private sector, consumer groups, and governments” to address the growing problem of fake reviews.

The crucial question is whether AI detection will be able to outfox the AI that creates fake reviews. The first AI-generated fake reviews detected by Fakespot came from India a few months ago, Khalifah said, produced by what he calls “fake review farms” — businesses that sell fraudulent reviews en masse. Generative AI has the potential to make their work much easier.

“It’s definitely a hard test to pass for these detection tools,” said Bhuwan Dhingra, an assistant professor of computer science at Duke University. “Because if the models are exactly matching the way humans write something, then you really can’t distinguish between the two. I wouldn’t expect to see any detector passing the test with flying colors any time soon.”

Several studies have found that humans aren’t particularly good at detecting reviews written by AI. Many technologists and companies are working on systems to detect AI-generated content, with some, such as OpenAI, the company behind ChatGPT, even working on AI to detect their own AI.

Ben Zhao, a professor of computer science at the University of Chicago, said it’s “almost impossible” for AI to rise to the challenge of snuffing out AI-generated reviews, because bot-created reviews are often indistinguishable from human ones.

“It’s an ongoing cat-and-mouse chase, but there is nothing fundamental at the end of the day that distinguishes an AI-created piece of content,” he said. “You’ll find systems that claim that they can distinguish between texts written by humans versus ChatGPT text. But the techniques underlying them are all fairly simple compared to the thing that they’re trying to catch up to.”

With 90% of consumers saying they read reviews while shopping online, that’s a prospect that has some consumer advocates worried.

“It’s terrifying for consumers,” said Teresa Murray, who directs the consumer watchdog office for the U.S. Public Interest Research Group. “Already, AI is helping dishonest businesses spit out real-sounding reviews with a conversational tone by the thousands in a matter of seconds.”