Manipulation of the Crowd: How Trustworthy Are Online Ratings? (Scientific American)
Such negativity exposes another, more pernicious bias: people tend not to review things they find merely satisfactory. They evangelize what they love and trash things they hate. These feelings lead to a lot of one- and five-star reviews of the same product.
A controlled offline survey of some of these supposedly polarizing products revealed that individuals’ true opinions fit a bell-shaped curve—ratings cluster around three or four, with fewer scores of two and almost no ones and fives. Self-selected online voting creates an artificial judgment gap; as in modern politics, only the loudest voices at the furthest ends of the spectrum seem to get heard.
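The self-selection effect described above can be sketched with a toy simulation. Every number here is an illustrative assumption, not data from the survey: true opinions are drawn from a bell-shaped curve around three to four stars, and the chance of actually posting a review rises sharply at the extremes.

```python
import random

random.seed(0)

def true_opinion():
    # Hypothetical bell-shaped model: opinions cluster around 3-4 stars.
    return min(5, max(1, round(random.gauss(3.5, 0.8))))

def will_review(stars):
    # Self-selection: strong feelings are far more likely to be posted.
    # These probabilities are made up for illustration.
    prob = {1: 0.9, 2: 0.3, 3: 0.1, 4: 0.15, 5: 0.9}[stars]
    return random.random() < prob

population = [true_opinion() for _ in range(100_000)]
posted = [s for s in population if will_review(s)]

def share(ratings, stars):
    return sum(1 for s in ratings if s == stars) / len(ratings)

for k in range(1, 6):
    print(f"{k} stars: true {share(population, k):.0%}, posted {share(posted, k):.0%}")
```

Running this, the posted reviews show inflated one- and five-star shares even though almost no one in the simulated population actually holds those opinions, mirroring the artificial judgment gap the survey exposed.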
This self-selection process manifests itself in other ways. In a 2009 study of more than 20,000 items on Amazon, Vassilis Kostakos, a computer scientist at the University of Madeira in Portugal, found that a small percentage of users accounted for a huge majority of the reviews. These super-reviewers—often celebrated with “Top Reviewer” badges and ranked against one another to encourage their participation—each contribute thousands of reviews, ultimately drowning out the voices of more typical users (95 percent of Amazon reviewers have rated fewer than eight products). “There is nothing to say that these people are good at what they do,” Kostakos says. “They just do a lot of it.” What appears to be a wise crowd is just an oligarchy of the enthusiastic.