Fewer games due to...

Bear in mind that the judges aren’t paid to do this, have limited free time, and may not be in the same timezone.

I don’t know if you’re paying attention. From what I have seen of the system, doing this elimination would make very little difference. It’s just a weighted average. If I give every single game a score between 90 and 100 (giving out nothing below 90), then effectively 90 is a 0 for me and 100 is still a 100 - I’m just reducing my effective range so I only have 10 options. Or, if I only give out a 0 through 10, then all 10s are basically 100s. You might see a 10 on my entry and a 100 on Appel’s, but in terms of finding the winner nothing changes.
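The point about effective ranges can be sketched with a toy example (this is not the contest's actual scoring code, just an illustration): rescaling each judge's raw scores to their own min/max is a monotonic transform, so a stingy judge who only hands out 90-100 and a generous judge who uses the full 0-100 end up contributing the same relative ordering.

```java
import java.util.Arrays;

// Toy illustration (not the contest's real code): two judges with very
// different scoring habits produce identical values once each judge's
// scores are rescaled to that judge's own effective range.
public class ScoreRange {
    // Min-max rescale one judge's raw scores to 0..100.
    static double[] normalize(double[] raw) {
        double min = Arrays.stream(raw).min().getAsDouble();
        double max = Arrays.stream(raw).max().getAsDouble();
        double[] out = new double[raw.length];
        for (int i = 0; i < raw.length; i++) {
            out[i] = (raw[i] - min) / (max - min) * 100.0;
        }
        return out;
    }

    public static void main(String[] args) {
        // Same three games, two judging styles.
        double[] stingy   = { 90, 95, 100 }; // never gives below 90
        double[] generous = {  0, 50, 100 }; // uses the whole range

        System.out.println(Arrays.toString(normalize(stingy)));   // [0.0, 50.0, 100.0]
        System.out.println(Arrays.toString(normalize(generous))); // [0.0, 50.0, 100.0]
    }
}
```

Either way the same game wins, which is why eliminating per-judge range quirks would make very little difference to the final standings.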

More importantly, I don’t know if you actually read all the reviews we painstakingly wrote (for free).

Here is a small sampling of reviews I did:
4bidden Fruit - Overall: 96%, Technical: 86%

F-Zero 4K - Overall: 66%, Technical: 84%

Frequent Flier - Overall: 68%, Technical: 91%

Ultimate Tic-Tac-Toe - Overall: 50%, Presentation: 85%

4bsolution - Overall: 80%, Presentation: 99%

Don’t those examples suggest that your claim that “There were many games that had very little game play value, but were technically very difficult to implement. I never saw that reflected in the judge’s scoring.” is probably the result of you not actually reading all the reviews? From what I saw of every judge, if there was a game they thought wasn’t very good overall but was technically amazing, they gave it a low overall score and a high technical score. Likewise, when games looked great, they got high presentation scores. So… what’s your point, other than complaining about something that isn’t actually missing?

One thing I will agree with you about is that a chat log or something like that would be very cool. If I were you I’d be very interested in seeing the thoughts of my judges laid out on paper. Unfortunately, this is not feasible for the reasons pjt33 already mentioned. We’re all over the world and we’ve all got limited time. Usually I play a few games when I’m on the train or something like that, every day to and from work. There is no internet when I’m on the train, so chats are not an option. Plus, last year there were so many games that Appel had to extend the deadline so many of us (including me) could finish up our reviews. Pushing it even further so that we could then discuss our favorites seems like it would be in no way worth it. Had we discussed last year’s entrants, we would have all overwhelmingly said that Left 4k Dead was the clear winner. And guess what won? We may have had a bit of an argument about the next 4 places (my tie for first ended up in 4th place), but really the changes are so insignificant that it’s not going to make a big difference. And, once again, getting all the judges together at the same time would be borderline impossible.

PS - Last year’s reviews are here, if you want to look over them again. http://www.java4k.com/index.php?action=games&method=reviews&cid=5

I greatly appreciate the long written reviews that the judges made and I actually did read the majority of them. I never understood the categories. Just off the top of my head, some things I look for in these games: was it fun to play, does it have replay value, how much depth does the game have (level complexity and variations between levels), graphics quality, music/sound quality, how faithfully it captures the spirit of a game it is imitating, technical complexity of the algorithms used, game difficulty, and how well the controls work. I do not think that Overall, Presentation and Technical capture enough and I noticed that the percentage of all 3 seemed closely related. I would also like to know how far the judges explored a game. In some games, the depth of the game is not revealed until you get to higher levels. If the judges give up early, they never get to see significant details of the game. That is part of the reason I feel the collaboration between the judges would help. Some will venture further and share their findings.

Okay, that makes some sense. Yeah, we’re going to eliminate the 3 categories this year, which should help things. Without the categories, I would give a crappy game with cool technical achievements a higher score than I would otherwise, etc. I typically try to play through as much of the game as there appears to be - that is, until I win or I get so fed up I want to quit. For games that don’t have a way of winning I usually play until I feel like I’ve gotten the gist. Obviously I’m just using my judgement here - I can certainly be wrong and miss something. But if someone’s game doesn’t have a way to hook me before I get to that point, then does it really deserve the points?

That requires you to know it. I did feel that there was a bias in favour of clones of games the judges knew last year - games which I thought were better got worse marks than games which got comments about nostalgia.

That’s very true. Being the youngest judge (I think) by a decade or more, I noticed a lot of judges say “awesome remake, reminds me of being a kid!” whereas I had said “feels retro, but it’s kinda boring,” etc. Their scores would often be mountains higher than mine for these entries. Nothing we can do about this, though, and I think it’s fine actually. Think of the Olympic judges for figure skating - if someone does a piece that reminds them of their childhood they will probably give an uncharacteristically high score. It’s just the fact of it.

Which is still biased.

Hum… maybe I should remake pong then ;D
My nostalgic favourites: Asteroids, Pacman, Defender, Missile Command, Zaxxon.

hehe yeah… My 2nd entry this year will be based on a 1987 Acorn Electron game… Would be surprised if anyone on JGO has heard of it, never mind played it… So no unfair advantage for me on this one :frowning:

Well yeah, but that’s the nature of asking one person’s opinion of something. You can never have objective judging.

Sorry, but this all sounds like whining to me. When I judged in 2006(?) I specifically scored on a few criteria and gave bonus points for things like technical difficulty and innovative game play. There are clearly better ways to judge, but you can never make everyone happy (i.e. the ‘losers’). Seems to me that the best games have always won or been at the top anyway, so the point is moot (imho).

Edit: I even documented my process in a thread: http://www.java-gaming.org/index.php/topic,12824.0.html