In the world of ecommerce, it now feels like online customer reviews are everything – they’re a seemingly unbiased, democratized, and honest look at how good or bad a product actually is, all thanks to the power of the globally connected internet.
Just about everyone looks at customer reviews, including me, even if it’s just a glance for a quick sanity check.
… but are they actually accurate? Or even useful at all?
And that’s assuming they’re truthful to begin with – companies that sell via ecommerce definitely know how important a product’s online review profile is, and you’d better believe they work very hard to make it look as good as possible, honestly or otherwise. That’s a whole separate issue though.
To date I’ve conducted and published well over 100 audio product reviews, and have used and evaluated many more products beyond that. And one of the most interesting things about doing reviews, especially as I’ve done more of them and gained more experience, is seeing how my conclusions compare to the customer reviews – and other professionally published reviews for that matter – and answering the above questions for myself.
Article Sections Navigation
- Acknowledging Bias
- Being Comparatively Objective in Audio, Even if You’re Sincerely Trying, Is Actually Really Hard
- Speaking of Comparative Objectivity – This is The Glaring Problem With Customer Reviews:
- Furthermore: The Fundamental Problem With Absolute, Scale Based Reviews
- Let’s Talk About The Indeterminate “Good Zone”
- The Actually Really Simple Takeaway From All This
Acknowledging Bias
There’s positive bias with published online product reviews. In-Ear Fidelity did a great job extensively showing this with audio reviews specifically. More positive skew leads to more sales. Monetary incentive is economics 101 – there’s just no way around it.
Something I’d like to mention now – and this is a really big pet peeve of mine – is when other reviewers act like their s#!t doesn’t stink and sanctimoniously declare that they are *unbiased*. Mmmmmm – cue pompously loud breath through the nose with closed eyes.

What about online customer reviews on Amazon and the like though? One reason they’re inherently so attractive is that they’re ostensibly unbiased. After all, what does some customer who’s unaffiliated with the selling company have to gain by giving a product an overly positive review?
But here’s the point I’m making: everybody’s s#!t stinks, and everyone has bias because everyone is human. In fact, people who think they’re somehow above having human psychology and being biased are almost always more biased because they don’t check themselves.
Maybe you’re biased because you have affiliate links or native banner ads or some other quid pro quo arrangement on your platform. Or maybe you’re biased because you want to look cool on head-fi.org for liking the obscure offboard products and wouldn’t be caught dead wearing a pair of Beats. Or maybe you’re biased because you want to feel like your hard earned money was well spent (people do intrinsically feel like more expensive things are inherently better). There are probably a million other reasons why people are biased that you could find by digging through Google Scholar, but I digress. Point made: all people are biased, for some reason or another.
Being Comparatively Objective in Audio, Even if You’re Sincerely Trying, Is Actually Really Hard
I’ve done a lot of a/b testing. I’d like to think I’ve got my process down pretty solid at this point. And I’m always looking for ways to make it better.
And it often goes like this: I’ll listen to headphone A, then headphone B immediately after, and they both sound decent, and I decide B sounds better.
…But did it actually? I already kind of forgot how A sounds. So I’ll listen to A again, and now it sounds different than I remember because I got used to the sound signature of B in the meantime. But what did B really sound like again? Now all of a sudden I’m not so sure anymore.
This problem increases by an order of magnitude when there’s any significant amount of time between listenings – it could be minutes, hours, or maybe even weeks. I will totally admit that I’ve subsequently listened to something again, having already given it a glowing review, only to think to myself: wait a minute… this does not sound as good as I remember. All of a sudden there’s noticeable bass bleed, or sibilance, or boxiness, or whatever else.
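For what it’s worth, one common way to take fuzzy memory out of the equation is a blind ABX-style trial: something (or someone) secretly picks A or B as the mystery sample X, you guess which one it is, and you repeat enough times that lucky guessing becomes statistically unlikely. Here’s a minimal sketch of the bookkeeping in Python – just an illustration of the general technique, not a description of my own process:

```python
import random
from math import comb

def abx_session(num_trials: int = 16) -> None:
    """Run a toy ABX session: each trial secretly assigns X to A or B,
    the listener types a guess, and correct identifications are tallied."""
    correct = 0
    for trial in range(1, num_trials + 1):
        x_is_a = random.choice([True, False])  # hidden assignment for this trial
        guess = input(f"Trial {trial}: is X the same as A or B? [a/b] ").strip().lower()
        if (guess == "a") == x_is_a:
            correct += 1

    # Exact one-sided binomial p-value: the chance of doing this well or better
    # by pure coin-flipping (i.e. with no real ability to tell A and B apart).
    p_value = sum(comb(num_trials, k) for k in range(correct, num_trials + 1)) / 2**num_trials
    print(f"{correct}/{num_trials} correct, p = {p_value:.3f}")
    print("Probably a real, audible difference." if p_value < 0.05
          else "Could easily just be guessing.")

if __name__ == "__main__":
    abx_session()
```

With 16 trials you’d need 12 or more correct before the result stops looking like coin-flipping at the usual 5% threshold – which says a lot about how hard hearing a “real” difference actually is.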
I would also bet that there’s at least one logical contradiction somewhere amongst my comparison articles where I said A is better than B, and B is better than C, but also that C is better than A, or something to that effect.
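If I ever wanted to actually audit my own verdicts for that kind of contradiction, it’s easy enough to script: treat every “X sounded better than Y” conclusion as a directed edge and look for a cycle. A quick sketch in Python – the verdicts here are made-up placeholders, not actual conclusions from my reviews:

```python
# Detect contradictions (cycles) in a set of pairwise "X beats Y" verdicts.
# The verdicts below are hypothetical placeholders, purely for illustration.
verdicts = [
    ("Headphone A", "Headphone B"),   # "A sounded better than B"
    ("Headphone B", "Headphone C"),
    ("Headphone C", "Headphone A"),   # ...which makes the three a contradiction
]

def find_cycle(edges):
    """Depth-first search over the preference graph; returns one cycle if any."""
    graph = {}
    for winner, loser in edges:
        graph.setdefault(winner, []).append(loser)

    def dfs(node, path):
        if node in path:
            return path[path.index(node):] + [node]
        for nxt in graph.get(node, []):
            cycle = dfs(nxt, path + [node])
            if cycle:
                return cycle
        return None

    for start in graph:
        cycle = dfs(start, [])
        if cycle:
            return cycle
    return None

print(find_cycle(verdicts))
# ['Headphone A', 'Headphone B', 'Headphone C', 'Headphone A']
```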
There’s also the issue that any past evaluation was done when I had less overall experience than I have now, which – in theory at least – makes those older assessments less and less accurate as time goes on.
This is where the measurement Poindexters like to chime in and reiterate that graphs and numbers are the only way to objectively compare headphones, speakers, or whatever else. The thing is, though, that we don’t buy headphones because they produce beautiful-looking (the irony) frequency response graphs – we buy them because we want to listen to them, and for them to make our music sound good.
Measurements can sometimes do a rudimentary job of sniffing out a blatant issue with a given product, but beyond that? They honestly don’t mean all that much, especially once you factor in how differently each person perceives sound.
Then you get some Reddit nerd telling someone else they’re “wrong” for liking how a pair of popular earbuds sounds more than how a pair of cans running through a DAC and amp (that isn’t actually doing anything) sounds. Nauseating, right?
It really is very hard to be comparatively objective in audio. It’s essentially impossible honestly – especially as the number of choices rises much above two.
Speaking of Comparative Objectivity – This is The Glaring Problem With Customer Reviews:
Almost everyone who writes a customer review of a pair of headphones hasn’t actually listened to many, if any, of the competitors. If Joe Audiophile buys a pair of AirPods and thinks “wow! these sound amazing! 5 stars!”, how would he actually know, if he’s never listened to his Steely Dan on a $10,000 rig? Or any of the other competition for that matter?
He can’t. Obviously. The one thing “professional” reviewers do have over customer reviewers is an at least somewhat significant frame of reference from having listened to a lot of different headphones, whereas most customers have maybe owned four or five pairs of headphones in their entire lives.
Furthermore: The Fundamental Problem With Absolute, Scale Based Reviews
5/5 stars, 100%, ten out of ten, S-tier, or any other numerically definite rankings are what I’m talking about here. They’re inherently flawed and honestly mostly worthless in my opinion.
Let’s say we’ve got a solid and decidedly 8/10 pair of headphones by present-day standards (whatever that even means). First of all, what does a perfect 10/10 headphone sound like, for reference? We don’t know, because perfection can’t be engineered in the real world. Secondly, even if it could be, what happens when a 10/10 by “today’s standards” becomes a 9/10 by the improved technology and standards of a year from now, or whenever from now? That would essentially mean that all review scores would have to be constantly reevaluated and reranked into perpetuity.
That is not at all feasible, and numerically objective scores, stars, tiers, or whatever don’t really mean anything in the first place if there’s no objective reference of perfection that exists to compare to.
Let’s Talk About The Indeterminate “Good Zone”
What is the so-called “good zone”? It’s what I call that small window of variability between like 4 and 5 stars. And this is where most audio products score in customer reviews, if we’re being honest.
Is there any correlation between noticeable quality and “good zone” score?
I’ll be honest, in the 100 plus headphones I’ve listened to? No. None at all.
I’ve listened to headphones that I thought sounded best-in-class yet amassed a mere 4.2 stars on Amazon. I’ve also seen products that reviewers (including me) widely consider among the best of the lot, like the inexpensive yet excellent Moondrop Chu, sit at similarly low “good zone” scores. And conversely, I’ve seen many 4.5-plus star products that I thought were pretty meh.
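If you wanted to actually put a number on that (lack of) relationship, the standard tool would be a rank correlation between the storefront scores and your own ordering of the same products. A rough sketch in Python, with invented numbers purely to show the mechanics – not real data from my reviews:

```python
# Spearman rank correlation between storefront star ratings and my own
# ordering of the same headphones. All numbers are invented for illustration.
store_stars = [4.1, 4.3, 4.5, 4.6, 4.8]  # hypothetical "good zone" scores
my_ranking  = [2,   5,   3,   1,   4]    # 5 = the one I thought sounded best

def ranks(values):
    """Rank values from smallest to largest, 1-based (ties broken by position)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    result = [0] * len(values)
    for rank, idx in enumerate(order, start=1):
        result[idx] = rank
    return result

def spearman(xs, ys):
    """Spearman's rho via the classic sum-of-squared-rank-differences formula."""
    n = len(xs)
    rx, ry = ranks(xs), ranks(ys)
    d_squared = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d_squared / (n * (n ** 2 - 1))

print(f"rho = {spearman(store_stars, my_ranking):+.2f}")
# rho = +0.00 for these toy numbers: the star scores say nothing about my ordering
```

A rho near +1 would mean the star ratings track my ordering closely, while a value near 0 means they tell you essentially nothing – which is about what my experience suggests.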
The Actually Really Simple Takeaway From All This
Online customer reviews, and yes maybe even “professional” reviews, have one very basic use: seeing if there are any blatant red flags for a given product. But beyond passing a basic sniff test? They really don’t matter that much. Also: percentages, stars, fractions out of 10, tier lists, or whatever other quantitative review scales are basically meaningless.
If something looks good and a review or two helps you seal the deal? Go ahead and buy it. It will probably be just fine. And if not? Audio gear almost always has a 30-day return policy, so you can just get your money back anyhow. Also, choosing an established brand is almost always the best way to go, especially in the e-retail world where no-name knockoffs now abound.
