
The web is filled with fraudulent reviews. Could AI exacerbate the problem?

The rise of generative artificial intelligence tools has changed how online reviews are created, leaving businesses, service providers, and consumers in largely uncharted territory, according to watchdog organizations and researchers.

Fake reviews have long plagued popular consumer platforms such as Amazon and Yelp. They are typically traded in private social media groups between fraudsters and businesses willing to pay for better ratings. Sometimes businesses entice customers with incentives such as gift cards in exchange for favorable reviews.

However, the advent of AI-powered text generation tools, best known through OpenAI’s ChatGPT, has made it easier for scam artists to produce reviews faster and in far larger volumes, according to experts in the technology sector. The illegal practice is especially problematic during the holiday shopping season, when consumers rely heavily on reviews to make purchasing decisions.

The presence of AI-generated reviews spans multiple industries, from e-commerce and hospitality to services like home repairs, healthcare, and even piano lessons. The Transparency Company, a tech company and watchdog group, reported a notable increase in AI-generated reviews beginning in mid-2023, with numbers that have only climbed since. Its analysis of 73 million reviews in the home, legal, and medical sectors found that nearly 14% were likely fake, and that 2.3 million of those were believed to be partly or entirely generated by AI.

Maury Blackman, an investor and advisor to tech startups who has reviewed the work of The Transparency Company, remarked, “It’s just a really, really good tool for these review scammers.” In August, the software company DoubleVerify documented a marked uptick in apps for mobile phones and smart TVs carrying reviews created by generative AI. These deceptive reviews were often designed to trick users into installing applications that could hijack their devices or bombard them with ads, according to the company.

The Federal Trade Commission has also filed a lawsuit against the maker of an AI writing tool called Rytr, claiming the service let users generate misleading reviews. The FTC, which banned the buying and selling of fake reviews this year, found that some Rytr subscribers produced hundreds, and in some cases thousands, of fraudulent reviews for businesses including garage door repair companies and sellers of imitation designer goods.

AI-generated reviews have also surfaced prominently on major online platforms. Max Spero, CEO of Pangram Labs, which specializes in AI detection, noted that some AI-crafted reviews on Amazon have managed to rise to the top of search results due to their detailed and seemingly thoughtful nature. Nonetheless, distinguishing between legitimate and fake reviews can be complicated, as external evaluators often lack access to key data that reveals patterns of abuse, as highlighted by Amazon.

Pangram Labs has conducted detection efforts for well-known websites, although Spero opted not to disclose specific names due to confidentiality agreements. In his independent assessments of Amazon and Yelp, he observed that many AI-generated reviews on Yelp appeared to be submitted by users attempting to achieve an “Elite” badge, a marker meant to signify trustworthy content. Kay Dean, a former federal criminal investigator leading a group called Fake Review Watch, explained that scammers pursue this badge to create a more credible online presence.

It’s important to note that not all AI-generated reviews are fraudulent. Some consumers, for instance, may use AI tools to articulate their genuine opinions or to polish their language. Marketing professor Sherry He of Michigan State University said AI can improve reviews when it is used with good intentions, and argued that tech platforms should target the behavior of bad actors rather than discourage legitimate users from turning to AI tools.

To combat the rise of AI-generated reviews, companies are setting policies on how such content fits into their broader systems for removing fake and abusive reviews. Many already use algorithms and investigative teams to detect and take down fraudulent reviews, while giving users some latitude to employ AI. Amazon and Trustpilot, for example, said they would permit AI-assisted reviews as long as they accurately reflect users’ experiences, while Yelp said its policy requires reviewers to write their own original content.

The Coalition for Trusted Reviews, which includes Amazon, Trustpilot, and several travel sites, remarked that while some may misuse AI technology, it also offers an opportunity to combat review fraud. This coalition aims to raise standards and develop sophisticated AI detection systems to protect consumers and enhance the credibility of online reviews.

The FTC’s rule prohibiting fake reviews took effect in October, giving the agency authority to fine individuals and businesses engaged in the practice. Tech platforms such as Amazon, Yelp, and Google are shielded from those penalties, however, because they are not legally liable for user-generated content posted on their sites. The companies have sued fake review brokers accused of selling fabricated reviews, and say their technology has removed a large number of suspect reviews and accounts. Still, some experts argue that more needs to be done.

Consumers can watch for specific warning signs of fake reviews, researchers suggest. Overly positive or negative language, repeated use of a product’s full name or model number, and certain hallmarks of AI writing can all serve as red flags. Research by Yale professor Balázs Kovács indicates that people often cannot tell AI-generated reviews from human-written ones, and even some AI detection tools struggle with the short texts typical of online reviews. Still, a few indicators can help shoppers spot AI-written evaluations, such as an unusually long and structured format or clichéd expressions like “the first thing that struck me” and “game-changer.”
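As a rough illustration of the phrase-based red flag described above, the following Python sketch flags reviews containing the clichéd expressions quoted in this article. The phrase list and threshold are assumptions chosen for demonstration; real detection systems, such as those built by Pangram Labs, rely on far more sophisticated statistical models.

```python
# Illustrative sketch only: flag reviews containing stock phrases the
# article cites as possible red flags for AI-written text. The phrase
# list and threshold are demonstration assumptions, not a validated
# detection method.

CLICHE_PHRASES = [
    "the first thing that struck me",  # cited in the article
    "game-changer",                    # cited in the article
]

def flag_review(text: str, threshold: int = 1) -> bool:
    """Return True if the review contains at least `threshold` clichés."""
    lowered = text.lower()
    hits = sum(phrase in lowered for phrase in CLICHE_PHRASES)
    return hits >= threshold

if __name__ == "__main__":
    reviews = [
        "The first thing that struck me was the build quality. A game-changer!",
        "Arrived late and the box was dented, but the blender itself works fine.",
    ]
    for review in reviews:
        print(flag_review(review), "-", review)
```

Run on the two sample reviews, the sketch flags the first and passes the second; as the Yale research above suggests, simple cues like these are unreliable on their own.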
