Review scores have become meaningless, and if you don't believe me, try to find a AAA release from the last five years that scored below 7/10 on aggregate. The 10-point scale has collapsed into a 3-point scale where 7 is bad, 8 is acceptable, 9 is good, and 10 is reserved for games with sufficient marketing budgets.
This isn't a conspiracy; it's systemic. The incentives that shape game reviews make meaningful scoring nearly impossible. Publications need access to review copies, which requires maintaining relationships with publishers. Writers need to keep their jobs, which means avoiding scores that generate controversy. The entire ecosystem pushes toward middle-of-the-road evaluations that offend no one and inform no one.
The Scale Problem
Mathematically, a 10-point scale suggests distribution across the range. Some games should be 3/10. Some should be 10/10. Most should cluster in the middle. Instead, we see scores compressed at the top, with the vast majority of major releases scoring 7 or above.
This compression makes the scale useless for differentiation. If every game is "good," how do players choose? The answer is that they don't use scores for decision-making anymore; they use aggregate percentages, specific critic opinions, or Let's Play videos. The score itself has become decorative.
Some outlets have abandoned scores entirely, which solves the problem by eliminating it. But scores persist because they're convenient—easy to display on Metacritic, simple to reference in marketing, digestible for readers who want quick answers.
Why Scores Inflate
Score inflation has structural causes. Reviewers play far more games than typical consumers, which paradoxically pushes their scores up rather than down: constant exposure to genuinely bad games makes merely competent ones seem impressive by comparison. What feels like an 8/10 to a critic who plays everything might feel like a 6/10 to a player with limited time.
There's also the access problem. Publications that give low scores to major releases face consequences: delayed review copies, exclusion from preview events, loss of advertising relationships. Individual reviewers might want to give honest scores, but institutional pressure pushes upward.
And there's audience capture. Readers who buy major releases want validation for their purchase, not honest criticism. An honest 6/10 for a hyped game generates far more complaints than an inflated 9/10 for the same game would. Over time, outlets learn which scores produce which responses.
The Alternative Models
Some outlets use different scales—5 stars, buy/rent/skip, recommendation without numbers. These avoid some problems but create others. Binary recommend/don't recommend loses nuance. Verdict categories require interpretation. No system perfectly captures critical evaluation.
The most honest approach might be abandoning scores entirely, letting the text speak for itself. But readers want shortcuts, and scores serve that demand even when they're misleading.
What Would Help
If we're stuck with scores, transparency would help. Publications should explain their scales, acknowledge their limitations, and be clear about what scores actually represent. They should publish score distributions so readers can calibrate expectations.
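To make the distribution idea concrete, here is a minimal sketch of what publishing one could look like. The score values below are hypothetical, invented only to illustrate the compression described earlier; the function simply tallies what percentage of an outlet's reviews landed on each point of the scale.

```python
from collections import Counter

def score_distribution(scores):
    """Return the percentage of reviews at each integer score (1-10),
    so readers can see how an outlet actually uses its scale."""
    counts = Counter(scores)
    total = len(scores)
    return {s: round(100 * counts.get(s, 0) / total, 1) for s in range(1, 11)}

# Hypothetical review scores for one outlet, clustered the way the
# article describes: almost everything lands between 7 and 9.
scores = [7, 8, 8, 9, 7, 8, 9, 10, 7, 8, 8, 9, 7, 8, 6, 8, 9, 8, 7, 8]

dist = score_distribution(scores)
for score in range(10, 0, -1):
    # Print a simple text histogram, one '#' per 5 percentage points.
    print(f"{score:>2}/10: {dist[score]:>5.1f}%  {'#' * int(dist[score] // 5)}")
```

A chart like this next to every review would let readers see at a glance that an outlet's "8" is its median verdict, not high praise.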
More importantly, we need cultural change around how scores are used. Publishers should stop weaponizing them in marketing. Players should stop treating them as objective measures. Reviewers should feel free to use the full scale without fear of reprisal.
None of this is likely. The current system serves powerful interests—publishers, platforms, publications that benefit from the traffic scores generate. Meaningful reform would require those interests to prioritize accuracy over convenience.
Conclusion
The 10/10 review problem isn't really about scores; it's about the relationship between the games press and the games industry. Until that relationship changes—until publications can give honest criticism without fear of losing access—scores will remain inflated and meaningless.
If you're using review scores to make purchasing decisions, you're using broken tools. Read the text, watch gameplay, trust your own judgment. The numbers don't mean what they pretend to mean.