deWitters 4th game loop HELP!

Hello everyone!

I am attempting to implement the deWitters game loop (the one where the game update rate is constant but the FPS is independent), but I have had no luck!
For one reason or another I have a tiny stutter in my image as it moves across the screen, even with vsync enabled.

If you guys could take a look at it and tell me what I'm doing wrong, that would be great.

The code below is in C++, but it is still OpenGL based!
Thanks!



//Set up the ticks and such
double ticksPerSec = 25.0;
double skipTicks = 1000.0 / ticksPerSec;
int maxFrameSkip = 10;

//SDL_GetTicks returns the number of milliseconds since SDL was initialized

//Set the clock and the loop count
Uint32 gameClock = SDL_GetTicks();
int loops = 0;

while(gameRunning)
{
	//Poll for events such as window close
	while(SDL_PollEvent(&event))
	{
		if(event.type == SDL_QUIT)
			gameRunning = false;
	}

	//Reset the loop count
	loops = 0;

	//Run game updates until we have caught up with the current time (or hit the frame-skip limit)
	while(SDL_GetTicks() > gameClock && loops < maxFrameSkip)
	{
		player.prevX = player.x;
		player.prevY = player.y;

		player.x += 1;

		gameClock += skipTicks;
		loops++;
	}

	//Fraction of the current tick that has elapsed; used below to extrapolate the draw position
	double alpha = (double)(SDL_GetTicks() + skipTicks - gameClock) / skipTicks;

	player.viewX = player.x + alpha * (player.x - player.prevX);
	player.viewY = player.y;

	drawGame();

	SDL_GL_SwapBuffers();
}


The draw function



//OpenGL code to render the things that need to be drawn

glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

//Bind the texture to the quad to draw
glBindTexture(GL_TEXTURE_2D, playerImage);

//Begin the simple render
//All of the player values (player.x, player.y, player.viewX, etc.) are doubles
//player.width / player.height is the size of the image in pixels (in this case a 32 x 32 image)
glPushMatrix();
glBegin(GL_QUADS);

//Upper left
glTexCoord2d(0, 0);
glVertex2d(player.viewX, player.viewY);

//Upper right
glTexCoord2d(1, 0);
glVertex2d(player.viewX + player.width, player.viewY);

//Bottom right
glTexCoord2d(1, 1);
glVertex2d(player.viewX + player.width, player.viewY + player.height);

//Bottom left
glTexCoord2d(0, 1);
glVertex2d(player.viewX, player.viewY + player.height);

glEnd();
glPopMatrix();


EDIT : Posted draw function code

This is a Java forum…

Anyway, could it be a problem with SDL_GetTicks()? A similar function in Java had very bad precision on Windows, which could lead to heavy stuttering. Other than that I have no idea…

I know this is a Java forum, but code is still code. There is one fatal thing I forgot, though, and that was to post my draw function!

Also, what kind of timing precision should I be aiming for? Nanoseconds (is that even really needed)?

Well, at least the interpolation needs millisecond precision to be smooth. The reason why I mentioned the timing problem is that the standard way of measuring time had very bad precision on some OSes, more to the tune of over 10ms or so. Since many important functions like sleep() and basic FPS calculation rely on being able to accurately measure time, you could get very bad stuttering. If that's not the problem, it's most likely some logic problem, but be sure to measure the FPS to see that you're actually getting exactly 60 FPS (assuming you have a 60Hz screen). Also, you didn't post your draw function?
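
For example, a rough FPS counter based on SDL_GetTicks() (just a sketch to illustrate the measurement, not code from this thread; the include path may differ depending on your SDL setup):

#include <SDL/SDL.h>
#include <cstdio>

//Rough FPS counter: call this once per rendered frame.
//It prints how many frames were completed during the last full second.
void countFPS()
{
	static Uint32 fpsTimer = SDL_GetTicks();
	static int frameCount = 0;

	frameCount++;
	if(SDL_GetTicks() - fpsTimer >= 1000)
	{
		printf("FPS: %d\n", frameCount);
		frameCount = 0;
		fpsTimer += 1000;
	}
}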

I edited my original post to show the draw function

My monitor is at 60Hz, and my FPS counter with vsync on fluctuates between 59.98XX and 60.03XX FPS.
Is that too much fluctuation?

You mixed that up a little. In Java [icode]System.currentTimeMillis();[/icode] returned good values. The problem was [icode]Thread.sleep(…);[/icode], as that method didn't sleep exactly as long as specified. Sometimes it was off by more than +/-4 ms.

Oh no! :slight_smile: that fluctuation is really negligible :slight_smile:

Something I just noticed: you should probably output "loops". I'd guess the inner while-loop sometimes runs several times per frame and sometimes not at all.
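
For example, right after the inner update loop from the first post (just a quick sketch using the existing loops variable):

//Log how many game updates ran during this frame
printf("updates this frame: %d\n", loops);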

Oh, okay. Must have missed that edit. And that kind of FPS fluctuation is normal. It’s caused by the CPU working out of sync with the GPU. The monitor is even further “away”, so a little variation is expected.

It's a bit weird to extrapolate the player position. If the "player" is a ball and bounces off a wall, it'll continue through the wall until the next update discovers the collision and snaps it to its new position. Rather, I'd interpolate between the previous and current position. The line that computes the view position would then become:


player.viewX = player.prevX + alpha * (player.x - player.prevX);

I don’t see how that would help though…

You could also try to run the game fullscreen. If you’re running in windowed mode VSync might not work (though it seems to be working considering your FPS…). Frankly I’m running out of ideas here…

If I am using this type of game loop, should I be converting my times into seconds? I don't think it would matter as long as everything uses matching units (milliseconds or seconds), but at this point I'm trying to think of anything.

I’m not a mind reader. Did you try any of my suggestions?

I don’t like this game loop at all.

See, what this game loop is doing is "render as many frames as needed to fit an X FPS scenario". The problem is that there is no consistency in frame rate, and because the loop is dependent on the render loop for motion, it will never be smooth. Also, you cannot have true 60 FPS because you cannot divide 1000 milliseconds into 60 frames (you get 16.6 repeating), so it will flip around a bit. I don't care how many tutorials try to do otherwise - as far as I'm concerned, the most accurate way to process motion is to do it based on time.

Sure, you can use this loop if you want, but NEVER simply + or - a value. Always consider motion as a vector that is applied to a unit of time.

PlayerPos += MoveVec * TimeDiff

That way your code will attempt to skip frames and multi-update without drawing if rendering is too slow, but motion will still be based on time, which will work regardless of framerate. OR you can just have a typical draw/update loop, and do time based motion that way and call it a day :smiley:

To give you an idea of those "delta time" loops LunaticEdit talked about, see this code; although it's Java, it should be understandable:
(line 100: "runDelta()"):



This might be crucial for understanding the source:
It’s the main part of using a delta-time based game loop:
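In rough C++ terms, a sketch of the same idea (this is not the linked Java source; it reuses the names from the snippets earlier in the thread and assumes SDL_GetTicks() for timing, with a hypothetical moveSpeed in pixels per second):

//Variable delta-time loop: motion is scaled by the time elapsed since the last frame
double moveSpeed = 60.0; //pixels per second (hypothetical value)
Uint32 lastTime = SDL_GetTicks();

while(gameRunning)
{
	Uint32 nowTime = SDL_GetTicks();
	double delta = (nowTime - lastTime) / 1000.0; //seconds since the last frame
	lastTime = nowTime;

	//"PlayerPos += MoveVec * TimeDiff" from the post above
	player.x += moveSpeed * delta;

	drawGame();
	SDL_GL_SwapBuffers();
}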

@LunaticEdit
Wow! How did you solve the determinism problem with floating point numbers? And the problem of physics simulations exploding after a half-second freeze due to anti-virus scans or just plain old GC pauses? Amazing!

Nowhere does it say that we’re trying to achieve 60 FPS. We’re trying to achieve as high FPS as possible. If we want 60 FPS, we’ll use the sync() method or VSync. Neither of them suffers from the “cannot divide 1000 by 60” problem you made up.

Processing motion based on delta time sucks because the game isn’t deterministic. The exact values and timings of collisions will depend on the computer it’s running on and also what other programs are competing for CPU time. If the game freezes up for a few milliseconds you’ll get a huge time skip which might cause things to tunnel through each other or bullets to miss. If we instead have an extremely high FPS, we’ll get floating point precision problems. The Call of Duty games are a prime example of this. When you had above a certain FPS you suddenly started jumping a lot higher. The other problem is that you won’t be able to exactly repeat the calculations since the delta-times will be completely different on each run. That makes it impossible to do lockstep synchronization over network (useful for strategy games) or record a replay and play it back later, though those features are usually not needed.

Oh boy here we go…

It’s called having a tolerance. If the delta time is smaller than the smallest time slice, you skip processing for that frame. If it’s bigger than a certain granularity, you THEN can loop to chew it back down (each loop taking out MAX_GRANULARITY time from the difference). Honestly… it’s 3 lines of code.
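
Roughly like this (a sketch of the tolerance/granularity idea; MIN_TIME_SLICE, MAX_GRANULARITY and updateGame(dt) are all hypothetical names, with delta in seconds):

const double MIN_TIME_SLICE = 0.001;  //deltas smaller than this are skipped this frame
const double MAX_GRANULARITY = 0.05;  //never feed more than this much time into one update

if(delta >= MIN_TIME_SLICE)
{
	//Chew an oversized delta back down in fixed-size chunks
	while(delta > MAX_GRANULARITY)
	{
		updateGame(MAX_GRANULARITY);
		delta -= MAX_GRANULARITY;
	}
	updateGame(delta);
}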

You are mixing two different thoughts together and getting confused. What I’m saying is that your ‘update’ loop will not be hit in any even sort of way, which raw addition and subtraction would require.

Now you’re just ranting, see one of your previous quotes where you say the exact same thing in a slightly different way.

[quote=“theagentd,post:12,topic:40584”]
More ranting, see previous quote. Easy fix. EA is well known for writing solid and stable games :cranky:

[quote=“theagentd,post:12,topic:40584”]
This is why you don't lockstep synchronize over a network - this implies that your client actually has control over what's going on. It shouldn't. In a network scenario, you send commands complete with timestamp, and the client will go ahead and start acting on the command, but the server is going to process and respond as it pleases (maybe at a lagged interval) with the true state of the game – again, with timestamps to synchronize properly. If it's client-to-client, there will still be one client that is acting as the 'server' for the session. The same goes for recording. If AI and interactions occur over time, then it is worry-free. Simply record when the game started up, and then what actions were taken, and at what offset from the game start time. Then it will always be replayable. I've done this before, and you'll see it again on the game I'm doing now (I'll have a demo screen) – it works. every time.
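
For example, the recording part could look something like this (a sketch only; RecordedCommand, recordCommand and commandId are hypothetical names):

#include <vector>

//Store each command with its offset from the game's start time, so a replay can
//re-issue the same commands at the same offsets later
struct RecordedCommand
{
	Uint32 offsetMs;  //milliseconds since the game started
	int commandId;    //which action was taken
};

Uint32 gameStartTime = 0; //set with SDL_GetTicks() right before entering the game loop
std::vector<RecordedCommand> replayLog;

void recordCommand(int commandId)
{
	RecordedCommand cmd;
	cmd.offsetMs = SDL_GetTicks() - gameStartTime;
	cmd.commandId = commandId;
	replayLog.push_back(cmd);
}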

@Granularity: Not entirely sure how that solves anything. If a collision isn’t detected, it would ruin determinism anyway. At least I would not want my collision detection to work better for players with better computers.

This does not make sense. The whole point of the fixed-delta-with-interpolation game loop is to completely separate rendering from logic. Using a variable delta value implies that the updating is in some way related to the render loop, specifically the performance of the render loop. Besides, without interpolation the render speed cannot go over the updating speed, but I won't "rant" about that anymore. -_-'

@Lockstep: I'm pretty sure lockstepping is standard for strategy games, in which case the client rarely acts without the server's consent since they're much less reliant on input responsiveness compared to, for example, first person shooter games. I assume your "it works. every time." means that you aren't using a variable delta for that game. If not, please explain how you're doing that, since to me that means you are using magic floating point numbers. =P

I have tried changing from extrapolating to interpolating the character position, but like you said it seems to do nothing for me.
I am still working on finding a more accurate timer to use.

But in the meantime

In order for my game loop to give me perfect smoothness, I need to be "stuck" in the update loop for 16.666667 milliseconds (because my monitor is 60Hz and vsync is on).
Let's also assume I never want to skip frames, there is no interpolation going on, and my computer can always keep up.

So the following code :



//Milliseconds per game logic update
double updates = 1000.0 / 60.0; //16.66667 milliseconds (perfect update rate for a 60Hz monitor)

//Set up the start time of the game
double gameClock = GetTicks(); //returns milliseconds

//Enter the game loop
while(runningGame)
{
	//Get the current time in milliseconds
	double nowTime = GetTicks();

	//Update the game until it has caught up with the current time
	while(nowTime >= gameClock)
	{
		player.x += 3;
		gameClock += updates;
	}

	//Render the game and swap buffers (with vsync on, the wait happens at the swap)
	drawGame();
	SDL_GL_SwapBuffers();
}


Should give me perfect smoothness and no stutter? Correct?

I'm not sure why you need to divide 1000 by 60. VSync should do that for you. Unless it is different in C++?

[edit]
There is no way to ensure perfect smoothness.
[/edit]

[edit2]
That was what @LunaticEdit was talking about
[/edit2]

[edit3]
This is a JAVA forum. http://lmgtfy.com/?q=c%2B%2B+forum
[/edit3]

What, this is a Java forum? I thought this was a C++ forum all this time… That would explain all of the Java threads and the name of the site…
I know this is not a C++ forum! I'm using OpenGL, and coding is coding! Do I really need to explain more?

Vsync is vsync no matter what. I say it's on because I mention that there is no interpolation going on. Without vsync on, I stutter horribly.

Yes, I know I cannot get perfect smoothness, but I don't believe my player sprite should hiccup / stutter every other second or so (no matter whether vsync is on or off).

http://lmgtfy.com/?q=deWitters+4th+game+loop#

[quote=“GiraffeTap222,post:15,topic:40584”]
Yes, but running the game logic at 60Hz might not be possible if you have heavy physics or simply lots of objects, etc. That's why you may want to run the logic at a slower rate and use interpolation. Also keep in mind that not all screens update at 60Hz. Some refresh at 59Hz, others at 120Hz. Since we can't really assume anything about the screen's refresh rate, it's better to just pick an update rate that's as high as possible but still has good performance. The interpolation then allows you to actually render the game at any FPS. Vsync in turn can provide stutter-free synchronization (well, in theory, I guess, since it's not helping here…?).

Okay, here’s my last card: For now, keep the update rate at 60Hz and enable VSync. Now, for each frame, output the interpolation value to the console. If the game really is stutter-free the interpolation value should be relatively constant. It will undoubtedly drift around a bit, and if something disturbs it (other programs hogging CPU time or so) it might jump randomly, but when it’s not stuttering it should be constant. If your interpolation value is jumping around a lot, it could indicate a problem with the timing method. If it however actually is smooth even though it visually stutters (and you’re not imagining it =P) I’d just conclude that it’s unfixable and that it’s probably a problem with your graphics drivers or something like that.
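
For example, right after alpha is computed in the loop from the first post (a one-line sketch):

//Print the interpolation value once per rendered frame; it should stay roughly constant
printf("alpha: %f\n", alpha);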

[quote=“theagentd,post:19,topic:40584”]

I did this, and I am pretty good on the interpolation values! I really only fluctuate between .034 and .036.
My interpolation calculation (I tried basing it on player.prevX instead of player.x, but then I seem to jump forward) is:



//updateTime : 1000.0 / 60.0, or 16.66667 milliseconds
//gameClock : the value that updateTime is added onto during the update loop; it is initially set to GetTicks() (which returns a clock value in milliseconds) before the start of the main game loop
double interpolation = (GetTicks() - gameClock) / updateTime;
player.viewX = player.x + interpolation * (player.x - player.prevX);


Now since this is based on deWitters 4th game loop like the title implies

He has his interpolation calculation as follows (copied directly from the article I was basing this post off of):



/*
SKIP_TICKS (the time per game logic update) : 1000.0 / 60.0

next_game_tick : the value that SKIP_TICKS is added onto during the update loop; it is initially set to GetTicks() (which returns a clock value in milliseconds) before the start of the main game loop
*/
double interpolation = (double)(GetTicks() + SKIP_TICKS - next_game_tick) / (double)SKIP_TICKS;


Now my question is: what is the point of adding SKIP_TICKS (the time per game update) in there? When I do this, I seem to jump forward whenever my player stops moving.