By Shraman Banerjee & Swagata Bhattacharjee
Imagine a tunnel. At the end, there is either treasure or a trap. One by one, people enter this tunnel, each carrying a candle. Some candles are dim; others, by chance, shine brightly enough to reveal what’s ahead.
But those waiting outside the tunnel cannot see the path that others have taken. They only hear voices echoing back from inside. And here is the twist: people shout only when they are extremely delighted or bitterly disappointed. If their experience was merely average—if the candlelight wasn’t bright enough to warrant excitement or alarm—they remain silent.
As more and more people enter the tunnel, the crowd at the entrance listens:
“Amazing!”
(silence)
“Terrible!”
(more silence)
But silence, in this world, is ambiguous. Did the person choose not to enter the tunnel? Or did they enter and experience something unremarkable—neither good nor bad enough to shout about? This uncertainty distorts what the crowd hears. Even if someone saw the treasure with a brilliant torch, we may never know.
This metaphor captures the essence of recent research on online ratings and reviews. In today’s marketplace, the digital revolution has profoundly altered the epistemic foundations of consumer behavior. Where information about product quality was once rationed, indirectly via price signals or selectively via seller advertising, a potential consumer today gathers most of it from the large volume of freely accessible, decentralised, user-generated content.
Online reviews, a dominant form of this content, purport to democratise knowledge, enabling buyers to make informed decisions based on the aggregated experience of peers. For goods whose quality unfolds only after purchase, the so-called experience goods (books, movies, restaurants, and the like), such post-purchase testimonies are particularly salient. Unsurprisingly, consumers are increasingly guided by peer evaluations on platforms such as Amazon, TripAdvisor, Yelp, or IMDb.
According to a 2021 Forrester Survey, 71 percent of online shoppers in the United States consult reviews prior to making a purchase, making it the most widely read form of consumer-generated content. In India, only 3 percent of consumers reported that they never look at ratings (Statista Survey, 2022).
This empirical prominence invites a theoretical interrogation: Does the “wisdom of crowds” (Surowiecki, 2004) manifest in online reviews? Can decentralised and voluntary feedback approximate an objective signal of quality? One might conjecture that ratings, which capture a buyer’s satisfaction (or the lack of it) after purchase, form a rich source of information and hence should lead to effective aggregation of information about a product’s quality.
Yet this optimism overlooks a key institutional feature of online reviews: their voluntary nature. People choose whether or not to leave feedback, and the evidence consistently shows that they are far more likely to write reviews when they are very happy or very unhappy. The vast majority of users remain silent.
This self-selection creates a subtle but powerful form of informational bias. A product that evokes mostly average reactions might appear, based on reviews alone, to be deeply polarising—or worse, might get ignored altogether. What is more troubling, this distortion does not go away over time. Even if individuals occasionally receive arbitrarily strong private signals (what economic theory calls “unbounded beliefs”), the crowd might still fail to learn the truth. The problem is not the intelligence of the crowd but the filtering of the voices it hears.
Our research builds on the well-known “social learning” models in economics, where individuals learn about the world not just through their own experiences but by observing others. In idealised versions of these models, even small, rare insights can guide society towards the truth, as long as those insights are visible. But if only extreme reactions are visible while moderate ones are invisible, then the truth itself gets lost in translation.
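The selection effect is easy to see in a toy simulation. The sketch below is our own illustration, not the formal model in the underlying research: it draws buyer experiences on a one-to-five scale (an arbitrary normal distribution) and lets only the delighted or the disappointed post, with the thresholds chosen purely for illustration.

```python
import random

def simulate_reviews(true_mean, n_buyers=10_000, low=2.0, high=4.5, seed=0):
    """Draw buyer experiences on a 1-5 scale; only extreme experiences
    (at or below `low`, at or above `high`) are posted as reviews."""
    rng = random.Random(seed)
    experiences, posted = [], []
    for _ in range(n_buyers):
        x = min(5.0, max(1.0, rng.gauss(true_mean, 1.0)))  # clamp to the rating scale
        experiences.append(x)
        if x <= low or x >= high:  # only the delighted or the disappointed "shout"
            posted.append(x)
    mean = lambda xs: sum(xs) / len(xs)
    return mean(experiences), mean(posted), len(posted) / n_buyers

true_avg, review_avg, posting_share = simulate_reviews(true_mean=3.5)
print(f"true average experience: {true_avg:.2f}")
print(f"average posted rating:   {review_avg:.2f}")
print(f"share of buyers posting: {posting_share:.0%}")
```

For a product that most buyers find merely decent, only a minority posts at all, and the posted ratings sit at the two extremes, so the visible record is both sparse and polarised relative to the underlying experience.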
And here lies a disturbing implication: even in a world of rational agents, fake narratives can go viral. Not because someone is lying, but because those who are silent—those with mild experiences or reasonable doubts—are systematically unheard.
The public belief may settle somewhere in the middle, neither confirming nor rejecting the truth, simply because the informational system is biased toward emotional extremes.
This is not just a theoretical concern. It plays out in real-world phenomena every day: when a restaurant appears polarising online because only angry diners and elated foodies post reviews; or when a product seems too good to be true—or suspiciously bad—based on a handful of exaggerated reviews. Consumers browsing these reviews cannot tell if a lack of feedback means “no one bought it” or “many people bought it and didn’t care enough to say anything.”
Interestingly, our research also shows that this learning failure can be fixed, in a paradoxical way. If sellers were to plant a fake positive review every time a buyer does not leave feedback, learning would ironically be restored. Why? Because silence would no longer be ambiguous; it would always be filled with something, even if artificial.
Why does this work? Because bad-quality products already attract disproportionately more negative reviews—buyers are more likely to voice dissatisfaction. In such cases, sellers cannot suppress these negative signals. But for high-quality products, the problem is the opposite: too many satisfied customers remain silent. By filling these gaps with fake, but positive, feedback, the seller can restore balance. In other words, fake reviews don’t distort learning where the product is bad (since the negativity still shows up), but they can help reveal quality when it’s good but quietly received.
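A stylised version of this fill-the-silence logic can again be sketched in simulation (our own illustration; the distributions and thresholds are arbitrary assumptions, not taken from the research). Once every silent buyer’s slot is filled with a fake five-star review, each buyer contributes exactly one observation, so the share of negative reviews becomes a clean statistic: low for a good product, high for a bad one, because a bad product’s dissatisfied buyers still speak up.

```python
import random

def negative_share(true_mean, fill_silence, n_buyers=10_000, seed=1):
    """Each buyer posts only if the experience is extreme; optionally the
    seller fills every silent slot with a fake 5-star review."""
    rng = random.Random(seed)
    reviews = []
    for _ in range(n_buyers):
        x = min(5.0, max(1.0, rng.gauss(true_mean, 1.0)))
        if x <= 2.0 or x >= 4.5:   # extreme experience: the buyer posts
            reviews.append(x)
        elif fill_silence:         # silence replaced by a fake positive
            reviews.append(5.0)
    return sum(1 for r in reviews if r <= 2.0) / len(reviews)

# With silence filled in, every buyer leaves a trace: fakes cannot drown
# out a bad product's unhappy buyers, while a good product's quiet
# majority now shows up in the count.
good = negative_share(true_mean=4.0, fill_silence=True)
bad = negative_share(true_mean=2.5, fill_silence=True)
print(f"negative share, good product: {good:.0%}")
print(f"negative share, bad product:  {bad:.0%}")
```

The contrast mirrors the argument above: the fakes add nothing where the product is bad, since the negatives still surface in force, but they make the good product’s satisfied silence legible.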
These findings offer a cautionary note for regulators, consumers, and platform designers. Reviews do matter. But so does the process by which they are generated. In an age where algorithms reward engagement and extreme opinions dominate digital attention, we must rethink what “public opinion” really means.
The real-world implications are immediate and far-reaching. Consumers must approach online reviews with healthy skepticism, particularly when feedback is overwhelmingly extreme or eerily sparse. Platforms must reconsider how they present ratings, perhaps by nudging more users to leave reviews, not just the ecstatic or the enraged. Sellers, for their part, must reflect on how transparency can be engineered without manipulation.
Often, the crowd is wise, but sometimes, it’s just loud.
Shraman Banerjee teaches economics at Shiv Nadar University.
Swagata Bhattacharjee is a faculty member in economics at O.P. Jindal Global University.