I don’t think anyone was suggesting otherwise. The impression I got was that some people wanted to be more Web 2.0-ish rather than to replace the judges.
The solution to the issue of users not rating many games is fairly simple: scale each user’s ratings by a function of the number of games they’ve played, and normalise the result. The function should probably take the number of games played on the site as a whole rather than in a given year. I suggest something along the lines of
    // Weight applied to a user's ratings, where x = games they've played
    const f = (x) => x > 20 ? 1 : x < 4 ? 0 : Math.pow(0.9, 20 - x);
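To put numbers on that: ratings from someone who has played fewer than four games count for nothing, at four games they carry a weight of about 0.19, at ten games about 0.35, and from twenty games onwards they count in full.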
For normalisation, don’t display any scores (show an “Insufficient ratings” placeholder) until you have N people with more than 20 ratings each. Then take the mean of those users’ mean ratings as the mean of the normalised scale, and the mean of their rating variances as its variance. Possibly re-evaluate these once a month.
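A minimal sketch of that normalisation, assuming each user’s ratings are held as a plain array (mean, variance, targetDistribution, normalise, and the minTrusted parameter standing in for N are all names I’ve made up for illustration):

    function mean(xs) {
        return xs.reduce((a, b) => a + b, 0) / xs.length;
    }

    function variance(xs) {
        const m = mean(xs);
        return xs.reduce((a, x) => a + (x - m) ** 2, 0) / xs.length;
    }

    // Target distribution taken from "trusted" raters (more than 20
    // ratings each). Returns null until at least minTrusted such users
    // exist - the "Insufficient ratings" case.
    function targetDistribution(ratingsByUser, minTrusted) {
        const trusted = ratingsByUser.filter((rs) => rs.length > 20);
        if (trusted.length < minTrusted) return null;
        return {
            mean: mean(trusted.map(mean)),
            variance: mean(trusted.map(variance)),
        };
    }

    // Map one of a user's ratings onto the target scale.
    function normalise(rating, userRatings, target) {
        const sd = Math.sqrt(variance(userRatings)) || 1; // guard: user rates everything the same
        const z = (rating - mean(userRatings)) / sd;
        return target.mean + z * Math.sqrt(target.variance);
    }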
Do a YouTube: if someone has rated a game, show them their own rating rather than the overall one. That way they won’t notice that their first rating had weight 0. If a game has fewer than 10 ratings, show the placeholder.
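That display decision is just a couple of branches; a rough sketch (ratingBy, ratingCount, and normalisedScore are an invented shape, not anyone’s actual API):

    // Which score to show a given viewer for a given game.
    function displayedScore(game, viewer) {
        const own = viewer && game.ratingBy(viewer); // the viewer's own rating, if any
        if (own != null) return own;                 // YouTube-style: show them theirs
        if (game.ratingCount < 10 || game.normalisedScore == null) {
            return "Insufficient ratings";           // placeholder
        }
        return game.normalisedScore;
    }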
Note: I’m not a statistician - I have an AS level in statistics and nothing more. Someone who really knows what they’re talking about can probably suggest improvements to this scheme. In particular, it needs a workaround for the fact that each person’s distribution will probably be non-normal: if ratings are 1 to 5, I’d expect 2 to be used less than 1 or 3.