Did that awful game score a 7.8, while your favourite game only scored a 7.5? Nick D. explains why it doesn’t really matter.
Review scores are an odd thing. In restaurants, people tend to gravitate towards the middle, since five-star restaurants are often an indicator of exorbitant prices as well as good food. For movies, many people are perfectly content with low scores, and films like Transformers and Resident Evil still do well despite their rock-bottom ratings. Video games are different still. As a general rule, professional video game review scores are top-heavy: if you look across the various review sites and various games, you’ll find that most games fall in a 7.0-9.0 range (halved for systems that only go up to 5). It is uncommon for a game to dip below 7.0 or rise above 9.0, and relatively rare for any score below 6.0 to be given at all. Today, I want to look at this top-heavy system and examine the advantages and disadvantages of the industry adopting it.
Firstly, let’s be clear: this is an informal adoption at best, and I’m speaking in generalities. Some professional video game publications or sites use the “5.0 is average” scale, while others do often dip below a 6.0. What I’m referring to are general trends in gaming review scores; an exception here or there does not change the fact that the majority fits into one such trend. This is also why Metacritic aggregates tend to skew low by one to five points: a few extremely low scores pull the average down more powerfully than the few that breach the 9.0 threshold pull it up. By informal adoption, I mean that this system has grown organically. Nobody has sat down with every game reviewer and outlined the procedure, so there is no culprit for why we have this system other than the market and tradition.
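The asymmetric pull of a single low outlier is easy to see with a bit of arithmetic. Here is a minimal sketch using made-up scores rather than real Metacritic data (Metacritic also applies undisclosed per-outlet weights, which this ignores):

```python
# Hypothetical review scores for one game on a 0-100 scale.
# Most outlets cluster in the "standard" 70-90 band; one outlet pans it.
scores = [85, 82, 80, 78, 75, 88, 30]

mean_with_outlier = sum(scores) / len(scores)
mean_without = sum(scores[:-1]) / len(scores[:-1])

print(round(mean_without, 1))       # cluster average: 81.3
print(round(mean_with_outlier, 1))  # one pan drags it to: 74.0
```

Because the standard range sits near the top of the scale, a panning review has far more room to fall below the cluster than a rave has to rise above it, so the aggregate drifts downward.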
If you ask gamers, they will tell you that a score of 7.0 is average (some will place the bar at 8.0, but for the duration of this article we will treat 7.0 as the average). Whether it should be or not isn’t really the point. As it currently stands, 7.0 is something of a threshold of acceptability: anything in the 6.0 range is worrisome, while anything that reaches into the 8.0 zone is deemed a good game. Thus 7.0 is something of a magic number of mediocrity. Games that fall under it haven’t successfully grasped that star, and many gamers will be unwilling to buy them because of that fact.
That last statement might come across as strange. After all, millions are willing to see average movies or read average books. A problem inherent in the gaming industry is the cost of games. At sixty dollars new, if you’re lucky and live in the United States, games are expensive. This makes gamers far more discerning purchasers. In a market full of good and great games, average games don’t cut it for many, even if one of those games, though flawed, would be more enjoyable than another with an arbitrary 8.0 slapped on it.
This price argument is bolstered by the fact that removing the steep price tends to lower gamer standards. Firstly, there are the lower-priced indie games: many score in the 6.0-7.0 range, and many gamers are willing to take the plunge because of the lower cost of admission. Similarly, during major sales such as the now-infamous Steam sales, gamers are perfectly willing to try out all sorts of low-scoring games, since doing so doesn’t set them back much.
What this goes to show is that gamers aren’t inherently pickier than consumers of other media, nor are they valuing their time particularly heavily. Rather, there is a clear correlation between the price of games and what gamers are willing to place their bets on. As you may suspect, this underlies quite a bit of how business is done.
The way the current system works, 7.0 is the threshold. The reason most games fit in the 7.0-9.0 range is that the current review climate is more of a red-flag system. In other words, games within the threshold are perfectly fine for consumption, while games that score low, though not excluded from purchase, raise red flags. The system is designed more to highlight the low scores than the high ones, since there is such a small window for standing out above the standard range.
This helps consumers because people, on average, are perfectly able to find a game they may be interested in, whether from gameplay, art style, or what have you. Reviews are almost the last checkpoint: they let us know whether a game is in the standard range (safe), below the threshold (red flags), or above it (excitement). Practically, there is little difference between a 7.0 game and an 8.0 game, and, truthfully, the way the system works now, it is impossible to effectively compare games of similar but different scores. A game that scores an 85 on IGN isn’t necessarily better than a game that scored 80, which can be chalked up both to personal bias and to the fact that the review range is so small that numbers that close become especially arbitrary.
This is the biggest disadvantage of the current system: it simply doesn’t work as a comparative tool. You cannot take two games, even of the same genre and series, and objectively weigh their scores against each other. When you only have two points to work with in the standard range, there really isn’t any way to set games apart. A “5.0 is average” system has more points to work with, so a much larger distinction can be drawn between divergent review scores. As gamers, we love to compare review scores and trumpet what we consider our biggest review injustices, but the way the system works now, pretty much everything in that standard range is more or less the same, even if our gut says an 8.5 game is wildly superior to a 7.0 game.
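One way to see how little resolution the standard range actually carries is to linearly stretch the 7.0-9.0 band back out onto a full 0-10 scale. This transform is purely my own illustration, not anything reviewers actually do:

```python
def rescale(score, lo=7.0, hi=9.0):
    """Linearly stretch the 'standard range' [lo, hi] onto a 0-10 scale.
    Purely illustrative; review numbers carry no such linear meaning."""
    clipped = min(max(score, lo), hi)  # clamp into the standard range
    return (clipped - lo) / (hi - lo) * 10

print(rescale(7.5))  # 2.5
print(rescale(8.0))  # 5.0
print(rescale(8.5))  # 7.5
```

A half-point gap in the compressed band corresponds to a 2.5-point gap on a full scale, which is exactly why half-point differences between reviews feel so much weightier than they deserve to.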
Despite this shortcoming, I’m not convinced that the current system is bad in any way. Even without the narrow standard range, game reviews are not a great tool for comparison. You bring in every bias the reviewer has, and there’s the rather important fact that different reviewers with different standards review different games. As such, even adopting a system with a greater range of acceptable scores would not really benefit comparison. It might allow gamers to set their personal bar a bit lower on the range, but the effect would be minimal.
The reason the red-flag system works so well is the steep cost discussed earlier. Gamers know what they want and just want to make sure they aren’t picking up a negative surprise. The lack of an effective way to distinguish between closely scoring games is irrelevant for that purpose, and, since reviewers generally lump the majority of games into the standard range, the system doesn’t prejudice most developers, only punishing those who turned out a product seen as truly inferior.
Of course, as mentioned above, Steam sales have changed the market somewhat, closing the gap between the video game industry’s approach and the movie industry’s. Many people don’t care about thresholds or how games compare to each other when the games are cheap enough to be written off. Thus, if the Steam sale model is adopted widely by the industry and not just by Steam, the way the review system is set up becomes less relevant for all but the gamiest gamers. Until then, we’re stuck with high costs and cautious purchasers, and reviews remain very serious business in the video game industry, for better or for worse.