GC Lag Question

[quote]Exactly.
Nobody can guarantee that high resolution timers will work or will really be high resolution at all.
[/quote]
This is not a unique fault of time-based updating. Tick-based updating depends on having a high resolution timer also; how else can you keep a constant time between frames…

The multiplayer aspect is the only thing I see as a disadvantage. I’m running into that problem now. I plan to overcome it by taking advantage of the fact that updates are very quick compared with frame draws (2500 updates per second for my game). This is the solution I’ll soon implement:

Update by the time from the last update time to the next millisecond value divisible by 5. Repeat this until the sum of those update times brings you to within 5 milliseconds of the current time. Then update by the time that remains until the current time, draw and display the frame, and start over.
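A rough sketch of what I mean (update() and renderFrame() here are just placeholders for my real game code, and I’m using the plain millisecond clock):

// Sketch of the 5ms-aligned update loop described above.
public class QuantisedLoop {

	static boolean running = true;

	public static void main(String[] args) {
		long lastUpdateTime = System.currentTimeMillis();
		while (running) {
			long now = System.currentTimeMillis();
			// Update in steps that each end on the next millisecond value divisible by 5,
			// until we are within 5 ms of the current time.
			while (now - lastUpdateTime >= 5) {
				long nextBoundary = ((lastUpdateTime / 5) + 1) * 5;
				update(nextBoundary - lastUpdateTime);
				lastUpdateTime = nextBoundary;
			}
			// Update by whatever time remains (if any), then draw & display the frame.
			if (now > lastUpdateTime) {
				update(now - lastUpdateTime);
				lastUpdateTime = now;
			}
			renderFrame();
		}
	}

	static void update(long dtMillis) { /* advance the game world by dtMillis milliseconds */ }
	static void renderFrame()         { /* draw & display one frame */ }
}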

For rotation, the server and client will get slightly out of sync because of the sub-5-millisecond updates required before and after frame draws. However, with my particular game design the server resends its whole game world to clients, who replace theirs, so the small out-of-sync errors will be corrected.

You may say that this design depends on being able to replace all clients’ game worlds with that of the server, but even with your tick-based games, this will be needed because of floating-point rounding differences between server and clients.

[quote]This is not a unique fault of time-based updating. Tick-based updating depends on having a high resolution timer also; how else can you keep a constant time between frames…
[/quote]
With time-based updating, the accuracy of timing is much more critical.
The difference is that with time-based updating, any timing errors lead to inconsistent behaviour, whereas with tick-based updating the worst that can happen is that the frame rate is not 100% stable, but the game’s behaviour always is. Those frame rate variations due to timing granularity are hard to spot, even if you use a 1ms timer. And most of the time, VSync works, which makes timing almost a non-issue.

[quote]The multiplayer aspect is the only thing I see as a disadvantage.
[/quote]
Also, collision detection can become more difficult if you cannot fully predict game behaviour due to timing.

Even with FPS games, it’s not unheard of to update the game at a fixed 60 frames per second (which makes the gameplay tick based), but render as fast as you can (interpolating the display frames between the game update frames). I think the reason behind it is also consistency and predictability.
Correct me if I’m wrong, but IIRC Doom3 uses this technique because of the inconsistencies due to timing errors in Quake3.
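Just to make the idea concrete, such a loop could look roughly like this (only a sketch, not actual Doom3 code; updateGame() and render() are made-up names):

// Fixed-rate game updates with rendering as fast as possible,
// interpolating the displayed frame between the last two game states.
public class FixedStepLoop {

	static final long STEP = 1000000000L / 60;	// one game tick = 1/60th of a second, in nanoseconds
	static boolean running = true;

	public static void main(String[] args) {
		long previous = System.nanoTime();
		long accumulator = 0;
		while (running) {
			long now = System.nanoTime();
			accumulator += now - previous;
			previous = now;
			// Run as many fixed ticks as fit into the elapsed time.
			while (accumulator >= STEP) {
				updateGame();			// always advances by exactly one tick
				accumulator -= STEP;
			}
			// Render every pass, blending between the previous and current
			// game state by how far we are into the next tick.
			float alpha = (float) accumulator / STEP;
			render(alpha);
		}
	}

	static void updateGame()        { /* advance the game state by one fixed tick */ }
	static void render(float alpha) { /* draw, interpolating old and new state by alpha */ }
}

Timing errors then only affect how smoothly the frames are spread out, not what the game actually does.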

Floating-point inaccuracies are hardly relevant in this context. The differences are so extremely small that things will at least look the same everywhere.

[quote]Floating-point inaccuracies are hardly relevant in this context. The differences are so extremely small that things will at least look the same everywhere.
[/quote]

You’re right. But mightn’t these errors compound over time? AffineTransforms were needed in Java2D to mitigate this problem, were they not?

Nevertheless, tick-based updating

  1. doesn’t take advantage of the full processor power and
  2. can skip frames while time-based rendering won’t.

Also, erikd, System.nanoTime() is for all intensive purposes adequate as a high-res timer & both approaches depend on it anyway. I can’t see where ‘predictability’ comes into it except from a networking perspective.

I don’t have anything to add to the discussion, and I apologise for nitpicking, but this:

[quote=“CommanderKeith,post:44,topic:27121”]
really grinds my gears.
I hope that one day, whoever’s sloppy diction spawned this malapropism will get their comeuppance.

[quote]Nevertheless, tick-based updating

  1. doesn’t take advantage of the full processor power and
  2. can skip frames while time-based rendering won’t.
[/quote]
Not necessarily. If you do it like Doom3, as I mentioned in my last post, game updating is done at a fixed speed but rendering is performed as fast as possible. I’m guessing this approach has its own share of problems, so in the end it probably depends on the type of game you’re creating which approach will be preferable.
My point was not to dismiss time-based updating, but to show some good reasons why tick-based updating can be preferred.

[quote]Also, erikd, System.nanoTime() is for all intensive purposes adequate as a high-res timer & both approaches depend on it anyway. I can’t see where ‘predictability’ comes into it except from a networking perspective.
[/quote]
Like I mentioned, in time-based updating accurate timing is essential for correct and predictable behaviour of the game. In tick-based updating this is not an issue; timing is only used for throttling and optionally for frame skipping, so 1ms precision will do. I believe the timer in LWJGL has at least 1ms precision everywhere (although it doesn’t guarantee that either).
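Just to illustrate (a rough sketch, not LWJGL code; the method names are made up), with tick-based updating the timer only has to decide whether to sleep or to skip rendering a frame, so millisecond precision is plenty:

// Tick-based loop where the timer is only used for throttling and frame skipping.
public class ThrottledLoop {

	static final long TICK_MILLIS = 1000 / 60;	// one game tick is ~16 ms
	static final int MAX_FRAMESKIP = 5;		// never skip rendering more than this many ticks in a row
	static boolean running = true;

	public static void main(String[] args) throws InterruptedException {
		long nextTick = System.currentTimeMillis();
		while (running) {
			int loops = 0;
			// Catch up on game ticks; if we fall behind, rendering gets skipped.
			while (System.currentTimeMillis() >= nextTick && loops < MAX_FRAMESKIP) {
				updateGame();			// always exactly one fixed tick, so behaviour never changes
				nextTick += TICK_MILLIS;
				loops++;
			}
			render();
			// Throttle: sleep off whatever time is left until the next tick is due.
			long sleepMillis = nextTick - System.currentTimeMillis();
			if (sleepMillis > 0) {
				Thread.sleep(sleepMillis);
			}
		}
	}

	static void updateGame() { /* advance the game world by one fixed tick */ }
	static void render()     { /* draw the current game state */ }
}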

I actually avoid System.nanoTime() at the moment, because it seems that it doesn’t work reliably in some configurations (which I believe is why it’s turned off by default in LWJGL). I don’t know if it even has sub-millisecond accuracy on Mac, for example.

I think nanoTime has problems on machines that have HT or multiple cores…

yet another reason for fixed timestep timing :slight_smile:

DP

HT was a red herring but multicore might well cause issues. The biggest problem is SpeedStep.

Cas :slight_smile:

[quote=“erikd,post:46,topic:27121”]
It appears to have microsecond precision on the Mac.

/*
 * Main.java
 *
 * Created on May 12, 2006, 1:57 PM
 */

package nanotimetest;

/**
 * Prints the gap between successive distinct System.nanoTime() readings
 * for one second, which shows the effective resolution of the timer.
 *
 * @author scottpalmer
 */
public class Main {
	
	public static void main(String[] args) {
		long a = System.nanoTime();	// start of the one-second test window
		long c = a;			// previous reading to compare against
		long b = System.nanoTime();	// most recent reading
		while ((b - a) < 1000000000L) {	// run for one second
			if (b != c) {
				// The timer value changed: print the smallest observable step.
				System.out.println(b - c);
				c = System.nanoTime();
			}
			b = System.nanoTime();
		}
	}
}

prints:

...
1000
1000
1000
1000
2000
2000
1000
1000
2000
1000
1000
2000
1000
1000
1000
1000
1000
1000

[quote]The biggest problem is SpeedStep.
[/quote]

And well, it (QPC) is simply broken on some older chipsets. It worked fine with win98se and an nvidia card. Now with 2k and an ati card (not sure which one is to blame) it doesn’t work anymore. The problem I’m seeing is so-called QPC leaping… that is… it randomly jumps a few seconds into the future if there is a high bus load (say, a non-command-line game heh). As you can imagine this is really annoying, because you get warped to death in lots of time-based games, which rely on a working/accurate QPC.

Elegant little test. On Windows XP, the resolution appears to be well under a microsecond:


610
455
519
459
476
461
461
473
473
462
454
464
466
468
476
453
447
459
453
470
465
453
458
469
476
466
468
464
477
440
458
472
469
453
478
577
460
470
462
463
500
578
476
460
470
467
483
456
465

I just got the latest drivers for my nVidia card so I can finally play all of your OGL games… They’re great! I’ve been playing Cas’s Titan Attacks & Ultratron, OrangyTangy’s Quix, AnalogKid’s last drops & Mojang’s Wurm… they’re very polished, I take my hat off to all.

The thing I noticed about OGL, which I suspect most of you have written these games in (since none of them worked before I got the driver, except for Last Drops), is that it locks the draws to the screen refresh.

It does it to my own game too now when I’m in OGL pipeline mode. It is kind of annoying for me, because the time-based approach to rendering isn’t working as well since the frame has to wait for so long until the screen refresh is ready. Now I see another reason why you like to use tick-based rendering: it locks you into the pattern of screen refreshes.

Does that mean that you have to always try to have your tick-based FPS set at the refresh rate (or a factor of it)?

It’s probably because your drivers are set to sync to video refresh by default :slight_smile:

Cas :slight_smile: