Is this the best performance I can get from the game loop?

Updated question.

I have a situation where my time_pre_frame in milliseconds is the following:

17
17
17
17
63 <-- huge jump due to loop taking too long
10
17

What is the correct course of action in a situation where my time_pre_frame varies greatly for one or two frames? Ideally I need each frame to take the same amount of time; if they don’t, what can be done to compensate?

I find this to be one of my problems; I am thinking it is the GC kicking in.

You can’t compensate for the visual loss - the frame time is gone. You can compensate for gameplay, though, with a lag timer for your moving entities. In general: a lag timer is simply a multiplier that is adjusted based on how fast the frame was rendered. If your target speed is 15ms then the multiplier is 1; if a frame took 30ms, your multiplier would be 2. All movement in the game is multiplied by the multiplier. So if a character was supposed to move 10 pixels, but the last frame took 30ms to render, once the multiplier is applied the character would move 20 pixels. This complicates things like collision detection and animation, though; taking it into account for those is tricky, but it’s more a matter of experimentation depending on how your code goes. A good test is to introduce lag and see if the game still plays the same as it would at a high FPS. That should be the net effect.
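A minimal sketch of that lag multiplier, assuming a 15ms target frame time; running, player, and render() are illustrative placeholders, not names from the poster’s code:

// Sketch of the lag-multiplier idea above; all names are illustrative.
final float TARGET_FRAME_MS = 15f;
long lastFrameTime = System.currentTimeMillis();

while (running) {
    long now = System.currentTimeMillis();
    // How many "ideal" frames of time actually elapsed.
    float multiplier = (now - lastFrameTime) / TARGET_FRAME_MS;
    lastFrameTime = now;

    // A character tuned to move 10 pixels per 15ms frame:
    // a 30ms frame gives multiplier 2, so it moves 20 pixels instead.
    player.x += 10f * multiplier;

    render();
}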

So, as Vorax says, you need to compensate in your motion code, because you will never get exactly the same time for each frame.

There are a couple of things people do:

(1) Introduce a max frame rate. By limiting your rate to the low end of your normal spread, you hide minor variations. This is done just by introducing a sleep() at the end of the frame calculation to take up the leftover time for frames that finish faster (a combined sketch of (1) and (2) follows this list).

(2) “Skip frames” when there is an unusually long delay. You could do this simply by multiplying your movement by the extra time, BUT if you are doing per-frame collision detection you will risk missing collisions that way. A preferable way is to loop over the frame calculations and actually do multiple “move and react” calculations, but no render, until you are caught up, then render the final result.

(3) You really shouldn’t be seeing significant GC pauses on JDK 1.5 or later in a properly architected game. If I were you, I’d profile that puppy and take a good look both at where you are using a lot of CPU and at what’s happening to your heap.
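A rough sketch of (1) and (2) together, assuming hypothetical update() and render() methods and a 16ms fixed step; the constants and names are illustrative, not from the demo:

// Sketch combining a frame cap (1) with fixed-step catch-up updates (2).
final long FRAME_MS = 16;                  // ~60fps target
long next = System.currentTimeMillis();

while (running) {
    long now = System.currentTimeMillis();

    // (2) If we fell behind, run several fixed-step "move and react"
    // updates before rendering once; the fixed step keeps per-frame
    // collision detection safe.
    int updates = 0;
    while (now >= next && updates < 5) {   // cap catch-up to avoid a death spiral
        update(FRAME_MS);
        next += FRAME_MS;
        updates++;
    }
    render();

    // (1) If we finished early, sleep off the leftover frame time.
    long sleep = next - System.currentTimeMillis();
    if (sleep > 0) {
        try { Thread.sleep(sleep); } catch (InterruptedException e) {}
    }
}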

I don’t mean to sound desperate, but can someone please help me with this? The demo is here:

(cut and paste please) http://www.angelfire.com/clone/wolfbane/gameLoop_KevGlass.jar

The file is mainClass.java, and the method is called public void gameLoop_KevGlass(). If you run the jar file it will run the loop for 10 seconds and then exit. A txt file is created with the deltas written in. For me, after the 4th second I usually see the jumps start to occur. I am new to profiling code and will try my best, but any help will be greatly appreciated. I have never been able to resolve this problem.

If you are using either Eclipse or NetBeans, get their profiler plug-ins. They are free, and I know that the NetBeans one at least is pretty easy to use.

Frankly, if you are writing data to a file, it’s possible that just the flush of the output cache is causing your pause. You really need a less intrusive measuring tool, and that’s what profilers are all about.

I honestly don’t know if I’ll have time to hit your code with the profiler myself, but I’ll see…

Thanks Jeff, and no, I know that it is not the write operation, since without it I get the same effect.

Hum, now that I look at it closer, I can see that it only jumps in the beginning and the rest of the time it remains pretty constant. So I guess the cause was something loading.

But still, one thing really bothers me: why is it that when I artificially set a frame cap at, say, 65 fps, the movement still appears jerky, while if I let it run as fast as possible but change the screen refresh to 60Hz, the movement is a lot smoother? Can anyone run a similar test and tell me if they see the same results?

Just go to the gameLoop_KevGlass() method, set the sec variable to something like 20, and then either leave the sleep call at the bottom of the while loop for a frame cap, or comment it out for an as-fast-as-possible run. Any feedback will be much appreciated, thanks.

I just ran the test myself and noticed that there is no difference in the time deltas that are passed to the update method. The movement is updated by x += dx * delta. So the deltas are the same for both the frame-capped version and the non-capped 60Hz refresh version. Is there something I am missing here, maybe having to do with rendering and the refresh? I see no reason why it is not smooth in both versions.

Ahhhh.

Another possibility, if you are running on the server VM, is that you are hitting a pause for compilation.

A lot of games will “warm up” the VM or force it to compile up front (I think the flag for that is -Xcomp, but I might be wrong. One of the guys here should know.) You might want to try that and see if it changes anything.
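As an illustration of the hand-rolled warm-up approach: exercise the hot path before the real loop starts so HotSpot compiles it up front instead of pausing mid-game. The method names and iteration count below are placeholders, not from the demo:

// Sketch: warm up the hot methods before entering the real game loop.
// update() and render() stand in for whatever your loop actually calls.
for (int i = 0; i < 1000; i++) {
    update(16);   // typical per-frame delta in ms
    render();     // ideally to an offscreen buffer during warm-up
}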

I’d need to run it and play with it a bit. If I get the time, I’ll let you know.

I think the reason that it’s smooth when you set the monitor refresh rate to 60Hz is that you throttle to ~67fps. It is probably being nicely vsync’ed, so you really get exactly 60fps. When you set your refresh rate to 85Hz, your game loop can’t keep up, because you’re aiming at ~67fps. What happens is that one ‘game frame’ takes more than one ‘monitor frame’, so it will miss vsync at least 1 out of 2 frames, leaving you with only 50% of 85fps and maybe even an irregular framerate.
Try to set your target framerate in your throttling to something slightly higher than the monitor refresh rate, but don’t remove your throttling code (in case vsync is disabled).
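A sketch of that suggestion, assuming the AWT display mode is available and reports a usable refresh rate (it can return REFRESH_RATE_UNKNOWN, hence the fallback; the 67fps default and +5 margin are just illustrative):

import java.awt.DisplayMode;
import java.awt.GraphicsEnvironment;

// Aim slightly above the monitor refresh so vsync (when enabled)
// paces the loop, but keep the throttle for when vsync is off.
DisplayMode mode = GraphicsEnvironment.getLocalGraphicsEnvironment()
        .getDefaultScreenDevice().getDisplayMode();
int refresh = mode.getRefreshRate();
int targetFps = (refresh == DisplayMode.REFRESH_RATE_UNKNOWN) ? 67 : refresh + 5;
long targetFrameMs = 1000L / targetFps;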

Thanks Erikd, that sounds like the most logical explanation; I will give it a try when I get home. Is this a common problem/solution? If my demo were written in C++, would the issues still exist? Is having the frame time take longer than the refresh time a Java issue?

This is true of all languages. A rule of thumb is that animation is always smoothest when it’s synced with the hardware. However, you can get away with reasonably smooth animation as long as your framerate is far in excess of the hardware. (e.g. 130 FPS looks just fine on a 60Hz monitor)

BTW, look at the bottom of the GAGE Homepage for an algorithm that automatically adjusts to variances in the time to generate a frame. It should help smooth out your animation when VSync is disabled.

The new and improved throttling code (for bad timers) looks like this:


private long lastFrame = 0;
private float yield = 10000f;
private float frameAverage = 16f; // start with the desired msec per frame

[...]

long timeNow = System.currentTimeMillis();
// weighted rolling average over roughly the last 10 frame times
frameAverage = (frameAverage * 10 + (timeNow - lastFrame)) / 11;
lastFrame = timeNow;

// 16f    = target of 16 msec per frame
// 0.1f   = damping value
// +0.05f = ensures yield can grow back quickly after running flat out for a while
yield += yield * ((16f / frameAverage) - 1) * 0.1f + 0.05f;

// burn the leftover frame time in yields
for (int i = 0; i < yield; i++) {
    Thread.yield();
}

It uses a rolling average and a proportionally adjusted yield count (with some damping).

It seems to work very well. Adjustment happens more gently and sensibly than before. Currently fuzetsu and bad sector are using this kind of throttling.
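For context, a sketch of where that snippet might sit in a loop; running, update(), and render() are placeholders, not from the original post:

while (running) {
    update(frameAverage);   // the averaged frame time doubles as the movement delta
    render();

    // ...the rolling-average and yield-adjustment block from above
    // runs here, burning the leftover frame time in Thread.yield() calls.
}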

Cool man, thanks a lot.