
Why you can’t always trust ratings on sites like Amazon and TripAdvisor

Many websites and apps use black box algorithms to aggregate how many stars an item or experience receives—sometimes in a misleading way.


A while back, I bought a mouthwash that was advertised on Amazon. It was from a well-known brand, and it had a high consumer-rating score. The mouthwash turned out to be greasy, the bottle was poorly designed, and it tasted awful: I would have given the item one star. The high rating was baffling. Surely, others agreed with me, so I returned to the item’s page to see if there were other low ratings. It turned out there were many, and nearly all those reviews reiterated my criticisms. Despite so many 1-star reviews, how did the product get such a high overall rating?


I found an answer several links deep on Amazon.com: The site says that when calculating a product’s rating, it uses “machine-learned models instead of a simple average.” As Amazon further explains, “These models take into account factors, such as how recent the rating or review is and verified purchase status.” Whatever the company says goes into the calculation, it appeared that many of the 1-star reviews simply weren’t counted toward the final rating, despite being valid assessments of the product.
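To see how a weighted model can diverge sharply from a simple average, consider this minimal sketch. The specific weights here (a boost for verified purchases, a decay for older reviews) are purely illustrative assumptions; Amazon’s actual model is not public.

```python
# Illustrative sketch only: these weights are assumptions, NOT Amazon's
# actual (undisclosed) model. The point is that down-weighting older,
# unverified reviews can pull the displayed score far from the raw mean.

def simple_average(reviews):
    return sum(r["stars"] for r in reviews) / len(reviews)

def weighted_average(reviews):
    def weight(r):
        w = 2.0 if r["verified"] else 1.0      # assumed verified-purchase boost
        w *= 0.5 ** (r["age_days"] / 180)      # assumed ~6-month half-life
        return w
    total = sum(weight(r) for r in reviews)
    return sum(weight(r) * r["stars"] for r in reviews) / total

reviews = [
    {"stars": 5, "verified": True,  "age_days": 10},
    {"stars": 5, "verified": True,  "age_days": 30},
    {"stars": 1, "verified": False, "age_days": 400},
    {"stars": 1, "verified": False, "age_days": 500},
]

print(simple_average(reviews))              # 3.0
print(round(weighted_average(reviews), 2))  # well above 4: old 1-star
                                            # reviews barely register
```

Half the reviews are 1-star, yet the weighted score lands near the top of the scale, which matches the pattern described above: low ratings that are visible on the page but contribute little to the headline number.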

Amazon isn’t the only company that uses algorithms like this to determine ratings. TripAdvisor has a Popularity Ranking algorithm, which it says factors in review quality, recency, and quantity. Of its delivery ratings, Uber notes, “Your overall delivery rating is based on the average of your 100 latest ratings.” Even that isn’t entirely true: in my own experience, Uber told me it wouldn’t count one of my recent ratings. The bottom line is that it’s extremely difficult for consumers to know what goes into the ratings they see and rely on.
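Even the simplest stated policy, an average of the latest N ratings, has a notable side effect: every new rating silently pushes an old one out of the calculation. A small sketch of that mechanic (this is just the stated policy taken at face value, not Uber’s actual implementation or its exclusion rules, which are not public):

```python
# Sketch of a "latest N ratings" rolling average, taking the stated
# policy at face value. A tiny window is used here for illustration;
# Uber says it uses the latest 100.
from collections import deque

class RollingRating:
    def __init__(self, window=100):
        self.ratings = deque(maxlen=window)  # oldest rating falls off when full

    def add(self, stars):
        self.ratings.append(stars)

    def average(self):
        return sum(self.ratings) / len(self.ratings) if self.ratings else None

rr = RollingRating(window=3)
for stars in [5, 5, 1, 5]:  # the fourth rating evicts the first
    rr.add(stars)
print(rr.average())  # averages [5, 1, 5], about 3.67
```

Note that the earliest 5-star rating no longer counts at all, so a driver’s or product’s displayed score can shift even when nothing about recent performance has changed.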

Ratings and reviews are among the most important drivers of conversion in just about every product category, so the lack of transparency in any ratings system is both perplexing and alarming. According to a recent survey from my employer, the market research firm Forrester, more than a third of U.S. shoppers say that ratings and reviews “strongly influence” their final purchase decisions. More than half of consumers say that if they see lots of good reviews, they feel confident in their purchase choices. Unfortunately, ratings are often subject to deception and manipulation, as I saw when buying mouthwash on Amazon.


There are some very easy fixes to this problem. All companies that publish reviews should be required to provide more transparency around them, or face hefty fines. They should disclose which ratings were collected within the past year or past few months, and they should show both overall scores and separate scores from verified buyers and confirmed members. They should reveal whether ratings and reviews were paid for or otherwise incentivized, and what the scores look like once any paid reviews are removed.

Marketplaces should also be required to disclose what percentage of buyers returned the items or received a refund. Consumers should know how long after the buyer received the item the review was written. The sortability of ratings, particularly the ability to sort by lowest ratings first, should also be table stakes. These aren’t radical suggestions; the Organisation for Economic Co-operation and Development has recognized the same issues that I did and has already made some of these recommendations. However, the only U.S. legislation around ratings and reviews dates back years and was created to protect a consumer’s right to post negative reviews.

Because of the lack of transparency around ratings and reviews, they are subject to manipulation. It’s time to disallow companies that can shape shopper demand from publishing “black box” ratings. More recently, H.R. 3816, introduced in the U.S. Congress this June, says that big tech companies shouldn’t give preference to their own products. Calling out features like inflated ratings and reviews, particularly for products and services that are also advertised on a platform, should be part of that as well.


But until that happens, we may just be stuck buying greasy mouthwash.


Sucharita Kodali is a vice president and principal analyst at Forrester. She is an expert on e-commerce, omnichannel retail, consumer behavior, and trends in the online shopping space.