Solving Stuttering With Fixed Timesteps, Once and For All

Current solution: LWJGL + extra thread fix (downloads have not been updated with this fix)

Hello JGO! This is my first time posting here; I apologize in advance for any mistakes I make.

I’ve been using Game Maker for quite a while, but recently I learned some Java in college, so I’ve been trying to make a game engine in Java (relying further on tutorials and various other sources). I want my engine to use fixed timesteps or some similar method (with graphical interpolation), and I want it to be capable of running in either fullscreen or windowed mode, as opposed to just one or the other, if possible. But on some computers the graphics never render as smoothly as, say, a GM game: moving images sometimes stutter. I realize that much has already been said on this topic, for example in the following links:

[quote]1) http://www.java-gaming.org/topics/game-loops/24220/view.html

2) http://www.java-gaming.org/topics/slight-jerkiness/24311/view.html

3) http://www.java-gaming.org/topics/player-sprite-just-doesnt-move-smooth/22762/view.html

4) http://www.java-gaming.org/topics/why-java-games-look-choppy-vertical-retrace/14696/view.html

5) http://www.java-gaming.org/index.php/topic,21086.

6) http://forums.create.msdn.com/forums/t/30892.aspx
[/quote]
But I feel like no conclusive solution has yet been discovered, so I’d like to try to consolidate all of the known information on this topic and start another, more language-specific discussion. I’ve been discussing this with some people over at TIGForums in this topic:

[quote]http://forums.tigsource.com/index.php?topic=22054.0

(Thank you to the many who helped, including Bryant Drew Jones, Chromanoid, J-Snake and Paul Eres!)
[/quote]
We’ve discussed many things and tried various methods to solve the issue. However, at least on the computers I’ve tested my programs on, I haven’t been able to get smoothness on the level of GM. The only computers that have gotten really close to GM’s smoothness are the ones at my university which, if I understand correctly, are really fast.

Note: I’ve mostly tested on Windows computers.

Potential causes:

1) Hardware. It could just be that the machines I test on are slow, but I doubt it: as far as I know there shouldn’t be much reason for them to be slow, and in any case that wouldn’t explain why GM games still run smoothly on them. True, GM games also display some stuttering on the same computers, but it’s usually not as noticeable as the stuttering with Java.

2) My code. Of course I may have just written faulty code, but if that were the problem, there probably wouldn’t be so many existing discussions about the issue, would there? (People would only need one or two resources of this kind to solve their problem, and these discussions wouldn’t have gone on for so long.)

EDIT: I am kind of a novice, though. I’ve been building my program from portions of various examples and tutorials; if there is an error in my code, it quite likely comes from a faulty implementation of the code from these various sources. Maybe I made an error in fitting all the different pieces into my target framework?

EDIT: I prepared several versions of the LWJGL example Cero posted that incorporate different examples of fixed timesteps from various sources. I’ve bundled together those examples in a single download, along with some other example programs (like a similar Game Maker example) and the source for all the different examples. You can download it here: box.com download

Note: example_simple.jar and example_variabletimestep.jar are examples without fixed timesteps; they’re there for comparison.

There are various settings you can toggle within the JARs, like vsync and the usage of Thread.yield and Thread.sleep. More details are contained in the included readme file.

3) Sleeping and timers. This is the kind of problem discussed in this thread. The idea is that sleep() is unreliable, and/or that the precision of available timers is insufficient. Wasn’t this latter problem solved by System.nanoTime(), though?

[b]a) Potential solution: Running a thread in the background the whole time.[/b] I've seen this work on at least one computer, but I think I've also seen it do little on another, and in any case it seems a bit hacky to me...
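For reference, the usual shape of that fix looks something like this (a minimal sketch; the same trick appears verbatim in the example loop further down this thread):

public class TimerResolutionFix {
    /** Call once at startup. A daemon thread parked in Thread.sleep()
     *  forces Windows to keep its timer at a higher resolution, which
     *  makes Thread.sleep(1) in a game loop much more accurate. */
    public static void install() {
        Thread t = new Thread(new Runnable() {
            public void run() {
                while (true) {
                    try {
                        Thread.sleep(Long.MAX_VALUE);
                    } catch (InterruptedException ignored) {
                        // just go back to sleep
                    }
                }
            }
        }, "timer-resolution-fix");
        t.setDaemon(true); // don't keep the JVM alive because of this thread
        t.start();
    }
}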

4) Something to do with waiting for vertical retrace. Here are some links related to this topic:

[quote]http://www.compuphase.com/vretrace.htm

http://weblogs.java.net/blog/chet/archive/2006/02/make_your_anima.html

http://today.java.net/pub/a/today/2006/02/23/smooth-moves-solutions.html
[/quote]
[b]a) Potential solution: A request for enhancement.[/b] I think some of us have heard that there are or were people trying to incorporate vsyncing capabilities (or something similar) into Java itself:

[quote]http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=6378181
[/quote]
But as far as I know this hasn’t been achieved yet. Do any of you know where this effort currently stands?

[b]b) Potential solutions with native code.[/b] The following links, one of them a JGO thread by Kevin Glass, claim to provide native methods to solve this:

[quote]http://today.java.net/pub/a/today/2006/02/23/smooth-moves-solutions.html

http://www.java-gaming.org/index.php/topic,4127
[/quote]
But I haven’t tried the first solution (I don’t really know how to make it work…), and I suspect that the second one might be outdated. Does anyone know how to properly implement these solutions, or know if any current, equivalent solutions have been made, or could be made?

[b]c) Potential solution provided over in the XNA forums.[/b] Someone has apparently experienced similar problems with XNA, and has provided a supposed solution:

[quote]http://forums.create.msdn.com/forums/t/30892.aspx
[/quote]
This may be a solution that can get around not having access to vsync, but I’ve had trouble understanding some of the XNA-specific code provided in the link. Does anyone know enough XNA to be able to try an equivalent solution in Java?

A Potential General Non-Solution: Maybe we’re just being too sensitive to stuttering? Someone in the aforementioned XNA-related link commented:

[quote]Ignore the GPU clock, and let the CPU clock control the update frequency. Sure, this means you will occasionally have dropped or doubled frames, but as long as the clocks are roughly close, this will be rare, and the results are fine for many games. Especially, people tend to notice the time drift in very simple test apps where they are doing things like just moving a box across the screen one pixel per tick, so they freak out about this, but the artifacts tend to become less obvious as the game becomes more complex, so they are often no problem at all in the finished product. The nice thing about this mode is that it keeps your update logic nice and simple, which is why this is the XNA Framework default (we call it fixed timestep mode).
[/quote]
This sounds kind of plausible, but I think there’s one thing this doesn’t account for: GM running more smoothly than Java.

Sorry for the long post! So, where does the future of this issue lie?

I have seen screen tearing at its worst on crappy computers at my school. What removes the screen tearing is running the game at or above 60 FPS. This is done by using a busy while loop that calls Thread.yield().

Response to causes:

1) Single-core computers (and dual cores to some extent) are more susceptible to stuttering than quad+ cores, because other programs can hog their resources.

2) (also touching on 4) I’ve never had any real problems with stuttering unless it’s my own fault, usually when I only do some special updating every nth frame or something like that. In those cases, VSync did not fix the problem: if you go over 16 ms of frame time, the frame has to wait for the next sync, ruining performance while doing nothing to cure the stuttering. I believe the universal solution is to keep the amount of computation you do as constant as possible between frames, in which case you are not the source of the stuttering. Not to mention the mouse lag caused by VSync.

3) The "leep" fix solves the problem in my experience.

4) See number 2. TIP: VSync can be used to "prove" stuttering and check that you’re not just imagining things. Adjust the workload (create objects, increase the number of particles, whatever) so that you get an FPS slightly higher than your screen’s refresh rate (often 60 Hz) and then enable VSync. If you get a huge FPS drop (to 30 or even lower), you have so much frame-time jitter that with VSync some frames are too slow to make the refresh interval (see number 2 again). If I get my threaded particle test to run at 65 FPS (about 900k particles) and enable VSync, performance is mostly constant at 60 FPS but occasionally drops a few frames, with very noticeable stuttering in those cases. The reason it’s so noticeable for me is that my laptop’s screen is extremely bad and slow, and fast-moving things never manage to fully "light" the pixels they cover. If a frame is dropped, all particles remain at their old position for another frame, which shows up as almost twice-as-bright particles on my screen, easily visible as blinking. I wondered what the hell was happening before I figured it out. And what’s wrong with calling LWJGL’s Display.setVSyncEnabled(true); after creating your display (see the snippet below)? It works for windowed games too, as long as you aren’t on 5-year-old Intel cards, in which case it doesn’t work at all (not even in fullscreen).
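For reference, here is what that looks like in a minimal LWJGL program (a sketch; the only VSync-specific line is the setVSyncEnabled call):

import org.lwjgl.LWJGLException;
import org.lwjgl.opengl.Display;
import org.lwjgl.opengl.DisplayMode;

public class VSyncExample {
    public static void main(String[] args) throws LWJGLException {
        Display.setDisplayMode(new DisplayMode(800, 600));
        Display.create();
        Display.setVSyncEnabled(true); // enable VSync after creating the display

        while (!Display.isCloseRequested()) {
            // rendering would go here
            Display.update(); // swaps buffers; with VSync on, this waits for the retrace
        }
        Display.destroy();
    }
}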

I might try to investigate this more as people seem to seriously have a problem with this.

Completely wrong: screen tearing is most visible in games running at higher than 60 FPS. It’s visible at any FPS, but less noticeable below about 40 FPS (the stuttering hides it a little). Even syncing the game to 60 FPS (or whatever your screen has) using sleep or Display.sync() in LWJGL will NOT remove tearing. The only cure for screen tearing is VSync, and usually at the horrible cost of mouse lag and reduced FPS (unless you also enable triple buffering, in which case the mouse lag is even worse).

I thought the big issue with Windows was its choice to use a clock interrupt at something like once every 15 ms. Thread.sleep() and System.currentTimeMillis() rely on this signal for their accuracy. However, as theagentd points out, there is what he calls the "leep" solution. I thought this was deemed to be sufficient? (At least for bare boxes moving across the screen with little GUI involvement.)
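A quick way to see those granularities on your own machine (a rough sketch; the numbers vary a lot by OS, hardware and JVM):

public class TimerGranularity {
    public static void main(String[] args) throws InterruptedException {
        // How big is one visible "tick" of currentTimeMillis()?
        long t0 = System.currentTimeMillis();
        long t1;
        while ((t1 = System.currentTimeMillis()) == t0) { /* spin */ }
        System.out.println("currentTimeMillis() step: " + (t1 - t0) + " ms");

        // nanoTime() usually advances on every call, so its step is tiny.
        long n0 = System.nanoTime();
        long n1;
        while ((n1 = System.nanoTime()) == n0) { /* spin */ }
        System.out.println("nanoTime() step: " + (n1 - n0) + " ns");

        // Thread.sleep(1) often oversleeps to the next clock interrupt,
        // which is the ~15 ms problem described above.
        long before = System.nanoTime();
        Thread.sleep(1);
        System.out.println("sleep(1) actually took: "
                + (System.nanoTime() - before) / 1000000.0 + " ms");
    }
}

Run it once normally and once with the background sleeping thread active; the difference in the sleep(1) figure is the whole point of the "leep" fix.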

I’ve been so focused on audio and its interaction with the GUI that I’ve kind of lost touch with all this. Am looking forward to reading more on this thread.

One thing I’ll say, though, is that there sure are an effing lot of ways to write less than optimal code, and the type of performance we are looking for doesn’t leave a lot of slack.

Does the following concept apply at all? In audio, because of the way the JVM switches back and forth between tasks, the audio signal actually gets assembled a bit ahead of the game, in bunches (and the bunches DON’T necessarily relate to the chosen buffer size). Because of this, real-time events such as GUI events don’t "line up" with the audio signal very well. I first brought this up here: http://www.java-gaming.org/topics/an-audio-control-helper-tool-using-a-fifo-buffer/24605/view.html There is a diagram there that helps explain.

So, I am wondering if there are analogous issues on the graphics end of things.

I only quickly looked through the thread you mentioned. OpenGL buffers commands and then actually issues them later, if that’s something similar. It obviously shouldn’t cause lag if the driver handles everything well, but in a well threaded program on a quad-core or something, it might be optimal to leave a single core for the driver to work with.

yes Saucer, you are right.

I have tried so many things and I don’t have the perfect solution yet.

Cas wrote a gameloop, but when I tried it, it lagged every 3 seconds or so.

Also it depends on your game: I’m making a 2D sidescroller in which case this issue is most noticeable…

Like I said, I’ve used so many different loops, but right now I just enable VSync (so if VSync works on the machine, it’s fine) and then sync using LWJGL’s Display.sync at 60.
This simple way is still one of the more stable ones. I also always use the dead background sleeping thread / Windows fix.
Don’t know about Linux yet; I still get screen tearing there, but I’m focused on the game itself for now.

I try my game on a lot of machines, because I want to support a lot of machines - and yeah, not easy to get right, especially if there is no vsync

But also: even though you want to make it perfect, I don’t think it’s necessary. Most of the time the stutters are only like 1 frame every 2-3 seconds, and I doubt a "normal" player will really notice - or so I hope =D

I don’t think that’s the issue here, because I don’t think it’s caused by cross-thread communication in that way. If cross-thread communication is an issue, then (as I mentioned in your thread), adding a small but constant time lag between control signal and output can help - a constant delay is actually less noticeable than a shifting one. Incidentally, you shouldn’t be getting bunching in that way …

Is this just with composite window managers (compiz, etc)? There’s a range of problems with these that means vsync rarely (if ever) seems to work, whether you try and switch it on or not. Using metacity or similar brings back vsync but loses desktop effects.

Just "marked compiz for complete removal" to be sure - no screen tearing in windowed mode and "only" a line at the top of the screen (like 40 pixels from the top) when scrolling in fullscreen mode.
Still not a fix obviously.

Interesting to note: the stutter is the same as on Windows, 1-2 frames of stutter every 2-3 seconds, on average.
I also know that one of the other programmers working on this game doesn’t have this problem on his desktop PC at all, while on his laptop the counter says 60 FPS but the rendering itself noticeably skips frames (pretty sure that’s some kind of Windows 7 / Aero problem).
But this stutter, I experience it on my desktop quad-core machine, while there can be no stutter at all on my 2 GHz single-core laptop.
It’s not predictable =P

[quote]http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=6378181
[/quote]
Ah yes, and this is still not being “fixed” =/

Out of interest, does the ExtendedBufferCapabilities class mentioned in this thread - http://www.java-gaming.org/index.php/topic,21086. - actually work? I presume you’d need to check for it and use it through reflection to be safe.

@Cero - do you get stutter with any of our games?

Cas :slight_smile:

I think I only tried ROTT, and no, I didn’t - but there isn’t as much scrolling there. Well, when I played it I wasn’t really looking for it back then.
But in a sidescroller you have almost constant scrolling of course

Since I still use Slick, using your gameloop wasn’t as easy - so I might have made some mistakes (with the LWJGL timer or something)

But obviously it would be incredible if you could make a simple ball-bouncing example using your game loop, for us to use as "the solution"

Ok. I actually used the same code for that Android test too! I’ll cobble together a quickie - after Hallowe’en party and lots of cider :slight_smile: Should be funny.

Cas :slight_smile:

Bonk. Stupidly long post again.

VSync does NOT fix stuttering! It can even worsen it when stuttering does appear! The only time it will improve things is if your screen has a better-precision timer than your computer, in which case rendering will KIND OF sync to the screen’s timer instead, so VSync could improve the timing. However, it’s so unreliable (Linux, Intel graphics cards) that you should in no way use it without some other kind of syncing method (or a variable timestep).

For the last time:

Vertical Synchronization is a setting that forces the graphics card to synchronize its updates to screen refreshes. When your 60 Hz screen decides that it is time to renew its content, it reads the current frame from the graphics card’s front frame buffer (you’re drawing to the back buffer, since you’re double buffering). That means that if the front buffer is updated with a new frame while the screen is reading it, the screen gets part of the old frame (the top) and part of the new frame (the bottom). This is what’s called tearing, as you can clearly see the "tear", or discontinuity, between the two images. By enabling VSync you force the graphics card to only update its front framebuffer when it is not being read by your monitor, eliminating the possibility of tearing. Obviously your graphics card can’t overwrite the front buffer while your monitor is reading from it, so it has to wait until the monitor is done.

It’s like a train station, where your update must catch its train, scheduled every 1/60th of a second. If you miss it, you’ll have to wait for the next train. If you enable VSync and you’re able to render 50 FPS with your hardware, VSync will limit this to 30 FPS. This is because you only have a passenger for every other train, forcing you to wait for the next one. Even if you’re only 1 ms late for your morning train, you still have to wait for the next one (true both in real life and when rendering with VSync xD). If you take even more time you’ll have to wait for every 3rd train and get 20 FPS; every 4th train = 15 FPS. Your FPS gets rounded down to the nearest (refresh_rate / n) FPS, where n is an integer above 0 (1, 2, 3, …). If you’re able to render at 1000 FPS without VSync, it will still only go as fast as your screen can refresh (in this case 60 FPS), as you only have 60 trains running.
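That train-schedule arithmetic can be written down directly (a sketch that assumes perfectly constant frame times):

public class VSyncQuantization {
    /** FPS you actually get with VSync on: the frame time is rounded UP
     *  to a whole number of refresh intervals, so FPS snaps to rate/n. */
    static double vsyncedFps(double refreshRate, double rawFps) {
        int intervalsPerFrame = (int) Math.ceil(refreshRate / rawFps);
        return refreshRate / intervalsPerFrame;
    }

    public static void main(String[] args) {
        System.out.println(vsyncedFps(60, 1000)); // 60.0 (capped by the refresh rate)
        System.out.println(vsyncedFps(60, 59));   // 30.0 (missed the train by 1 FPS)
        System.out.println(vsyncedFps(60, 50));   // 30.0 (every other train)
        System.out.println(vsyncedFps(60, 25));   // 20.0 (every 3rd train)
    }
}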

“Sure, I know that, but how does that worsen stuttering?”
Well, if your average frame time is close to 1/60th of a second (about 16.6667 ms), there is a big risk that a frame will occasionally take more than 1/60th of a second to render. This could be due to another program using a shared resource (CPU, GPU, whatever), which can push the rendering time of just a single frame over 1/60th of a second, causing you to drop a frame. Without VSync, at least half of the frame might make it to the screen (which isn’t much of an improvement anyway), and you won’t be wasting the time until the screen refresh is finished.

“So when it works, it gives me perfect timing, right?”
NO. The timing might be better than the 15 ms precision that Windows manages, but it’s far from perfect. Why? OpenGL buffers commands. While it cannot buffer more than one complete rendered frame at a time (for double buffering), it can buffer rendering commands for several frames. By buffering more frames the driver can ensure that the graphics card always has something to do, increasing efficiency, similar to how we keep buffers when playing and mixing audio, etc. Your OpenGL commands only block when the command queue is full. As long as your game is GPU limited (or VSync limited, which has the same effect), your CPU will fill the command buffer and then wait until the GPU consumes some of it; this obviously syncs up with the VSync rate, and your game will run pretty well synchronized to the refresh rate, even though the command buffer causes some jitter. When your CPU is faster than your GPU, VSync doesn’t do anything for the timing, but in such a case there is no need to sleep, so there won’t be any timing problems in the first place.
Because of the command buffer, it doesn’t actually make much sense to measure the rendering time of a frame (the “update delta”): the time you measure is just how long the last frame took, or the frame before that, or maybe even the one before that. It will not move objects relative to the time it took to actually render them, but by the time it took to submit the frame’s commands, which also depends on the state of the buffer at the time (i.e. on the commands of previous frames). Again, this is only the case if you’re GPU limited.
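One way to see the effect of the command buffer is to force the queue to drain before you stop the clock (a diagnostic-only sketch; GL11.glFinish() stalls the pipeline, so never leave it in a shipping loop):

import org.lwjgl.LWJGLException;
import org.lwjgl.opengl.Display;
import org.lwjgl.opengl.DisplayMode;
import org.lwjgl.opengl.GL11;

public class TrueFrameTime {
    public static void main(String[] args) throws LWJGLException {
        Display.setDisplayMode(new DisplayMode(800, 600));
        Display.create();
        while (!Display.isCloseRequested()) {
            long start = System.nanoTime();
            GL11.glClear(GL11.GL_COLOR_BUFFER_BIT); // stand-in for real rendering
            GL11.glFinish(); // block until the GPU has actually executed everything
            long trueFrameNanos = System.nanoTime() - start;
            Display.setTitle("true frame time: " + trueFrameNanos / 1000000.0 + " ms");
            Display.update();
        }
        Display.destroy();
    }
}

Without the glFinish(), the measured time is just the submission time, which is exactly the misleading “update delta” described above.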

“It solves my stuttering!”
Most likely not.

“What about micro stuttering?”
Micro stuttering is constant stuttering caused by varying frame times: the game runs at a certain FPS but looks like it’s running at a much lower one. I know of two possible causes: SLI/Crossfire (two or more graphics cards alternately rendering frames) and workload differing between frames. I’ve seen very bad micro stuttering due to the nature of alternate frame rendering with SLI, where two frames are completed at almost the same time, causing one of them to be displayed for a very short time or not at all because the next frame was ready before a screen refresh; it effectively looks more like half to two thirds of the FPS you’re rendering at. I have seen this myself on my GTX 295, and it’s especially bad in Bad Company 2, where it looks like 30-40 FPS when it is running at about 60. In this case, VSync DOES indeed fix the problem, as it gives both graphics cards something to synchronize their updates with, so that they actually produce frames at a roughly constant interval. At the moment, VSync is the only cure for micro stuttering caused by SLI or Crossfire.
The other cause is doing heavy computations only in some frames. Such a frame obviously takes more time to render, and the game stutters during those frames. As an example, I once generated my fog of war only every 10th frame, but it took about 70 ms each time (I had 2000 units >_>). The game was running at up towards 180 FPS, but it still looked like it was stuttering due to the uneven distribution of load between frames. In this case, enabling VSync limited the FPS of the frames that just sped by (the ones where I didn’t generate the fog of war) while obviously not speeding up the fog of war generation. With VSync on I got close to 30-40 FPS for the same scene. VSync gave me an FPS closer to what I was actually seeing in the first place, and it looked very similar to not having VSync on.

“So VSync reduces tearing at least. Why not always use it?”
Two words: input lag. VSync is infamous in shooters and other games that depend on fast input response. The reason is simple. When a game is running at 60 FPS, the game buffers rendering commands several frames ahead. This can be controlled on NVIDIA cards in the NVIDIA Control Panel (the confusing “Maximum Pre-rendered Frames” setting). The default value for this setting is 3. Yes, 3 full frames. Check it yourself.
As long as VSync is on, the command buffer will almost always be full. Frames are consumed at a constant interval, so the input delay can be calculated pretty accurately:
delay = (pre_rendered_frames + 1) * frame_time
The +1 is because you actually have to render that frame too, after buffering it in the command buffer. Example: you have a 60 Hz screen and VSync enabled. The frame time is a constant 1/60th of a second = 16.666667 ms. The delay is approximately
(3 + 1) * 16.666667 = 66.66667 ms delay
Your cheap USB keyboard and mouse are polled at perhaps 100 Hz, giving you 10 ms of delay just there. It takes a frame before your game reads the buffered input (actually doing stuff based on the input in the game loop), so that’s another frame_time of delay. After rendering you also have to transmit the frame to the monitor. An optimal dual-link DVI cable can transmit data at approximately 8 gigabits = 1 gigabyte per second. Ignoring all encoding overhead etc., we still have to transmit the 32-bit color (it’s encoded as 10 bits per channel according to Wikipedia, so probably 32 bits per pixel). A 1920x1080 screen is 7.91 MB of data, which takes about 1 ms to transmit (0.966 ms, but hey, I thought it was more xD). Finally the screen has to process it. My laptop represents a worst case scenario with its 17 ms of delay there, but there are better 2 ms screens (like the one I have at home -_-). This is all really simplified, and there are probably lots of other sources of delay, but these should be the worst ones.
Total: 10 + 16.66667 + 66.66667 + 1 + 17 = 111.333334 ms delay
That’s pretty insane. A majority of those numbers are based on frame time, so having an FPS higher than 60 is actually a benefit in fast-paced games. For example, disabling VSync on a really good graphics card so the game runs at, say, 120 FPS, and using a 1000 Hz gaming keyboard and mouse (1 ms delay) and a 2 ms monitor, you can get a much lower delay. In this case, frame_time = 1/120 s = 8.333333 ms:
1 + frame_time + frame_time * 4 + 1 + 2 = 4 + 5 * 8.333333 = 45.66666667 ms delay
This isn’t even exactly accurate, as the command buffer isn’t guaranteed to fill up completely with VSync off. The real figure is most likely slightly less than the 45.667 ms I get by calculating it, maybe 10 ms less at best, but that’s a guesstimate.
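For what it’s worth, here is that back-of-the-envelope arithmetic as a little calculator (every constant is a rough assumption, exactly as above):

public class InputLagEstimate {
    static double estimateMs(double pollingHz, double fps,
                             int preRenderedFrames, double transmitMs,
                             double screenMs) {
        double frameTime = 1000.0 / fps;
        double inputPoll = 1000.0 / pollingHz;               // waiting for the next device poll
        double inputRead = frameTime;                        // input is acted on next loop iteration
        double render = (preRenderedFrames + 1) * frameTime; // buffered frames + the frame itself
        return inputPoll + inputRead + render + transmitMs + screenMs;
    }

    public static void main(String[] args) {
        // Cheap 100 Hz peripherals, VSynced 60 FPS, slow 17 ms panel:
        System.out.println(estimateMs(100, 60, 3, 1, 17));  // ~111.3 ms
        // 1000 Hz gaming peripherals, 120 FPS, fast 2 ms panel:
        System.out.println(estimateMs(1000, 120, 3, 1, 2)); // ~45.7 ms
    }
}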

“I’ve heard triple buffering solves everything.”
Triple buffering simply adds another fully rendered buffered frame. The point is that the graphics card can then always render into one of the two back buffers, so the actual rendering FPS is no longer limited by the screen’s refresh rate. The cost is even more input delay.

Due to all this, the only time I enable it is in games that don’t require fast input response, like strategy games (they usually have a sync time of several hundred milliseconds due to determinism anyway), and when the artifacts of not having it on are too visible. I enable it in Bad Company 2 for more fluid rendering (easier to see things moving > input delay) and in games with extremely visible tearing. For example, in Bioshock there are often blinking lights that look like shit when you have a clear line between the almost-black last frame and the fully lit current frame.

Yeah I don’t have the mad low level understanding

but with my game, it never stutters with VSync (I don’t experience any real input lag, and I’m not using the mouse anyway), while no VSync results in occasional stutters, like I said

I wanted to record it - but then the stutters without vsync behave differently - you have like a big stutter (~800ms) every 3-12 seconds

and Cas uses VSync as well, so if you have THE solution, then pray write THE game loop / ball-bouncing example =D

That’s it. I’m making a test program for this crap tomorrow. Look forward to it.

just to show that it’s not trivial

This is an example from the LWJGL wiki; same problem - it stutters:


import org.lwjgl.LWJGLException;
import org.lwjgl.Sys;
import org.lwjgl.opengl.Display;
import org.lwjgl.opengl.DisplayMode;
import org.lwjgl.opengl.GL11;
 
public class LWJGLExample {
 
	/** position of quad */
	float x = 400, y = 300;
 
	/** time at last frame */
	long lastFrame;
 
	/** frames per second */
	int fps;
	/** last fps time */
	long lastFPS;
	
	float value = 0.15f;
 
	public void start()
	{
		try
		{
			Display.setDisplayMode(new DisplayMode(800, 600));
			Display.create();
		} catch (LWJGLException e)
		{
			e.printStackTrace();
			System.exit(0);
		}
 
		initGL(); // init OpenGL
		getDelta(); // call once before loop to initialise lastFrame
		lastFPS = getTime(); // call before loop to initialise fps timer
 
		while (!Display.isCloseRequested())
		{
			int delta = getDelta();
 
			update(delta);
			renderGL();
 
			Display.update();
			Display.sync(60); // cap fps to 60fps
		}
 
		Display.destroy();
	}
 
	public void update(int delta)
	{
		x += value * delta;
		
		// keep quad on the screen
		if (x < 100) {x = 100; value = -value; }
		if (x > 700) {x = 700; value = -value; }
 
		updateFPS(); // update FPS Counter
	}
	
	/** 
	 * Calculate how many milliseconds have passed 
	 * since last frame.
	 * 
	 * @return milliseconds passed since last frame 
	 */
	public int getDelta() {
	    long time = getTime();
	    int delta = (int) (time - lastFrame);
	    lastFrame = time;
 
	    return delta;
	}
 
	/**
	 * Get the accurate system time
	 * 
	 * @return The system time in milliseconds
	 */
	public long getTime() {
	    return (Sys.getTime() * 1000) / Sys.getTimerResolution();
	}
 
	/**
	 * Calculate the FPS and set it in the title bar
	 */
	public void updateFPS() {
		if (getTime() - lastFPS > 1000) {
			Display.setTitle("FPS: " + fps);
			fps = 0;
			lastFPS += 1000;
		}
		fps++;
	}
 
	public void initGL() {
		GL11.glMatrixMode(GL11.GL_PROJECTION);
		GL11.glLoadIdentity();
		GL11.glOrtho(0, 800, 600, 0, 1, -1);
		GL11.glMatrixMode(GL11.GL_MODELVIEW);
	}
 
	public void renderGL() {
		// Clear The Screen And The Depth Buffer
		GL11.glClear(GL11.GL_COLOR_BUFFER_BIT | GL11.GL_DEPTH_BUFFER_BIT);
 
		// R,G,B,A Set The Color To Blue One Time Only
		GL11.glColor3f(0.5f, 0.5f, 1.0f);
 
		// draw quad
		GL11.glPushMatrix();
 
			GL11.glBegin(GL11.GL_QUADS);
				GL11.glVertex2f(x - 50, y - 50);
				GL11.glVertex2f(x + 50, y - 50);
				GL11.glVertex2f(x + 50, y + 50);
				GL11.glVertex2f(x - 50, y + 50);
			GL11.glEnd();
		GL11.glPopMatrix();
	}
 
	public static void main(String[] argv) {
		LWJGLExample fullscreenExample = new LWJGLExample();
		fullscreenExample.start();
	}
}

Wow, so much feedback! I’m having difficulty understanding some of the things being discussed, though— as you can probably tell, I’m kind of a novice. Note that I’ve been building my program from portions of various examples and tutorials; if there is an error in my code, it may be pretty likely that it comes from faulty implementation of the code from these various sources. Maybe I made an error in fitting all the different pieces into my target framework?

I don’t know if this is what you’re referring to, but we have tried rendering as fast as possible. I got it to work on one computer with what is reported as “120 FPS” (that’s probably not the actual number of frames rendered per second, but it still seemed to help), but it still wasn’t perfectly smooth, and I think it was hard on the computer; the fan started going really easily whenever I ran the program like that. Furthermore, on at least two other computers it didn’t do much. Another method suggested was to run the program at high update rates, but personally I’ve already settled on having my updates be performed at a certain rate (60 updates per second).

I think Slick2D manages to achieve this, actually, but, again, I haven’t yet been able to solve stuttering with it.

:slight_smile:

Alright, here are all the various versions of the basic loop that currently seem promising (note that I’m going for fixed timesteps with graphical interpolation):

Chromanoid’s latest loop:

EDIT: Oops… What’s with all the commented code…?

import java.awt.BasicStroke;
import java.awt.Color;
import java.awt.Graphics2D;
import java.awt.Toolkit;
import java.awt.event.MouseAdapter;
import java.awt.event.MouseEvent;
import java.awt.image.BufferStrategy;

public class Game extends javax.swing.JFrame {

    private static final long serialVersionUID = 1L;
    /* difference between time of update and world step time */
    double localTime = 0f;

    /** Creates new form Game */
    public Game() {
        setDefaultCloseOperation(javax.swing.WindowConstants.EXIT_ON_CLOSE);
        setIgnoreRepaint(true);
        this.setSize(800, 600);
    }

    /**
     * Starts the game loop in a new Thread.
     * @param fixedTimeStep
     * @param maxSubSteps maximum steps that should be processed to catch up with real time.
     */
    public final void start(final double fixedTimeStep, final int maxSubSteps) {

        this.createBufferStrategy(2);
        init();
        long start = System.nanoTime();
        long step = (long) Math.floor(1000000000d * fixedTimeStep);
        while (true) {
            long now = System.nanoTime();
            double elapsed = (now - start) / 1000000000d;
            start = now;
            internalUpdateWithFixedTimeStep(elapsed, maxSubSteps, fixedTimeStep);
            internalUpdateGraphicsInterpolated();
            while (true) {         
                Thread.yield();
                long delta = start + step - System.nanoTime();
                if (delta <= 0) {
                    break;
                }                
                try {
                    Thread.sleep(1);
                } catch (InterruptedException ex) {
                }
            }
        }
    }

    /**
     * Updates game state if possible and sets localTime for interpolation.
     * @param elapsedSeconds
     * @param maxSubSteps
     * @param fixedTimeStep 
     * @return count of processed fixed timesteps
     */
    private int internalUpdateWithFixedTimeStep(double elapsedSeconds, int maxSubSteps, double fixedTimeStep) {
        int numSubSteps = 0;
        if (maxSubSteps != 0) {
            // fixed timestep with interpolation
            localTime += elapsedSeconds;
            if (localTime >= fixedTimeStep) {
                numSubSteps = (int) (localTime / fixedTimeStep);
                localTime -= numSubSteps * fixedTimeStep;
            }
        }
        if (numSubSteps != 0) {
            // clamp the number of substeps, to prevent simulation grinding spiralling down to a halt
            int clampedSubSteps = (numSubSteps > maxSubSteps) ? maxSubSteps : numSubSteps;
            for (int i = 0; i < clampedSubSteps; i++) {
                update(fixedTimeStep);
            }
            return clampedSubSteps;
        }
        return 0;
    }

    /**
     * Calls render with Graphics2D context and takes care of double buffering.
     */
    private void internalUpdateGraphicsInterpolated() {
        BufferStrategy bf = this.getBufferStrategy();
        
        Graphics2D g = (Graphics2D) bf.getDrawGraphics();
        try {
            render(g, localTime);
        } finally {
            // dispose even if render() throws; getting the graphics outside
            // the try block means g can never be null here
            g.dispose();
        }
        // Shows the contents of the backbuffer on the screen.
        bf.show();        
        //Tell the System to do the Drawing now, otherwise it can take a few extra ms until 
        //Drawing is done which looks very jerky
        Toolkit.getDefaultToolkit().sync();
    }
    Ball[] balls;
    BasicStroke ballStroke;
    int showMode = 0;

    /**
     * init Game (override/replace)
     */
    protected void init() {
        balls = new Ball[20];
        int r = 20;
        for (int i = 0; i < balls.length; i++) {
            Ball ball = new Ball(getWidth() / 2, (getHeight() - 120) / (balls.length + 1f) * (i + 1f) + 60f, 10f + i * 300f / balls.length, 0, r);
            balls[i] = ball;
        }
        ballStroke = new BasicStroke(3);
        this.addMouseListener(new MouseAdapter() {

            @Override
            public void mouseClicked(MouseEvent e) {
                showMode = ((showMode + 1) % 3);
            }
        });
    }

    /**
     * update game. elapsedTime is fixed.
     * @param elapsedTime 
     */
    protected void update(double elapsedTime) {
        for (Ball ball : balls) {
            ball.x += ball.vX * elapsedTime;
            ball.y += ball.vY * elapsedTime;
            if (ball.x > getWidth() - ball.r) {
                ball.vX *= -1;
            }
            if (ball.x < ball.r) {
                ball.vX *= -1;
            }

            if (ball.y > getHeight() - ball.r) {
                ball.vY *= -1;
            }
            if (ball.y < ball.r) {
                ball.vY *= -1;
            }
        }
    }

    /**
     * render the game
     * @param g
     * @param interpolationTime time of the rendering within a fixed timestep (in seconds)
     */
    protected void render(Graphics2D g, double interpolationTime) {
        g.clearRect(0, 0, getWidth(), getHeight());
        if (showMode == 0) {
            g.drawString("red: raw, black: interpolated (click to switch modes)", 20, 50);
        }
        if (showMode == 1) {
            g.drawString("red: raw (click to switch modes)", 20, 50);
        }
        if (showMode == 2) {
            g.drawString("black: interpolated (click to switch modes)", 20, 50);
        }
        for (Ball ball : balls) {
            g.setStroke(ballStroke);
            if (showMode == 0 || showMode == 1) {
                //w/o interpolation
                g.setColor(Color.RED);
                g.drawOval((int) (ball.x - ball.r), (int) (ball.y - ball.r), (int) ball.r * 2, (int) ball.r * 2);
            }
            if (showMode == 0 || showMode == 2) {
                //with interpolation
                g.setColor(Color.BLACK);
                g.drawOval((int) (ball.x - ball.r + interpolationTime * ball.vX), (int) (ball.y - ball.r + interpolationTime * ball.vY), (int) ball.r * 2, (int) ball.r * 2);
            }
        }
    }

    public static class Ball {

        public double x, y, vX, vY, r;

        public Ball(double x, double y, double vX, double vY, double r) {
            this.x = x;
            this.y = y;
            this.vX = vX;
            this.vY = vY;
            this.r = r;
        }
    }

    /**
     * @param args the command line arguments
     */
    public static void main(String args[]) {
        /* Create and display the form */
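        // The anonymous thread below is the "sleeping thread / Windows fix"
        // mentioned earlier in this thread: a daemon thread parked in
        // Thread.sleep() keeps the Windows timer at a higher resolution,
        // which makes the Thread.sleep(1) calls in start() far more accurate.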
        new Thread() {

            {
                setDaemon(true);
                start();
            }

            public void run() {
                while (true) {
                    try {
                        Thread.sleep(Integer.MAX_VALUE);
                    } catch (Throwable t) {
                    }
                }
            }
        };
        Game game = new Game();
        game.setVisible(true);
        game.start(1 / 120d, 5);
    }
}

The relevant (?) code in the most promising version of my target framework, based most heavily (I think) on the “Game loops!” tutorial here on JGO, by Eli Delventhal:

public void run() {
		if (myGame != null) myGame.initGame();
		
		//This value would probably be stored elsewhere.
		//final double GAME_HERTZ = 30.0;
		//Calculate how many ns each frame should take for our target game hertz.
		final double TIME_BETWEEN_UPDATES = 1000000000.0 / gameSpeed;//GAME_HERTZ;
		//At the very most we will update the game this many times before a new render.
		final int MAX_UPDATES_BEFORE_RENDER = 5;
		//We will need the last update time.
		double lastUpdateTime = System.nanoTime();
		//Store the last time we rendered.
		double lastRenderTime = System.nanoTime();
		
		//If we are able to get as high as this FPS, don't render again.
		final double TARGET_FPS = 60;
		final double TARGET_TIME_BETWEEN_RENDERS = 1000000000 / TARGET_FPS;
		
		//Simple way of finding FPS.
		int lastSecondTime = (int) (lastUpdateTime / 1000000000);
		
		running = true;
		
		while (running) {
			double now = System.nanoTime();//*/
			int updateCount = 0;
			
			//Do as many game updates as we need to, potentially playing catchup.
			while(now - lastUpdateTime > TIME_BETWEEN_UPDATES && updateCount < MAX_UPDATES_BEFORE_RENDER) {
				update();
				lastUpdateTime += TIME_BETWEEN_UPDATES;
				updateCount++;
				updatesPerSecondCount++;//
			}
			
			//If for some reason an update takes forever, we don't want to do an insane number of catchups.
			//If you were doing some sort of game that needed to keep EXACT time, you would get rid of this.
			if (now - lastUpdateTime > TIME_BETWEEN_UPDATES) {
				lastUpdateTime = now - TIME_BETWEEN_UPDATES;
			}
			
			debugDisplay = "" + updateCount;//
			
			//Render. To do so, we need to calculate interpolation for a smooth render.
			interpolation = Math.min(1.0f, (float) ((now - lastUpdateTime) / TIME_BETWEEN_UPDATES));
			render();
			lastRenderTime = now;
			
			//Update the frames we got.
			int thisSecond = (int) (lastUpdateTime / 1000000000);
			if (thisSecond > lastSecondTime) {
				fps = frameCount;
				frameCount = 0;
				updatesPerSecond = updatesPerSecondCount;//
				updatesPerSecondCount = 0;//
				lastSecondTime = thisSecond;
			}
			
			//Yield until it has been at least the target time between renders. This saves the CPU from hogging.
			while (now - lastRenderTime < TARGET_TIME_BETWEEN_RENDERS && now - lastUpdateTime < TIME_BETWEEN_UPDATES) {
				Thread.yield();
				
				//This stops the app from consuming all your CPU. It makes this slightly less accurate, but is worth it.
				//You can remove this line and it will still work (better), your CPU just climbs on certain OSes.
				//FYI on some OS's this can cause pretty bad stuttering.
				try {Thread.sleep(1);} catch(Exception e) {}
				
				now = System.nanoTime();
			}//*/
		}
	}
	
	public void update() {
		myInput.updateKeys();
		if (myGame != null) myGame.update();
	}
	
	public void render() {
		g = (Graphics2D) strategy.getDrawGraphics();
		g.scale(scale, scale);
		/*g.setColor(Color.LIGHT_GRAY);
		g.fillRect(0, 0, width, height);//*/
		if (myGame != null) myGame.render();
		if (debugOn) {
			g.setColor(Color.BLACK);
			g.drawString("Interpolation: " + interpolation, 0, 16);
			g.drawString("FPS: " + fps, 0, 32);
			g.drawString("Updates per second: " + updatesPerSecond, 0, 48);
			g.drawString("updateCount: " + debugDisplay, 0, 64);
		}
		frameCount++;
		g.dispose();
		strategy.show();
	}

The relevant (?) code from an attempt to incorporate fixed timesteps into Slick2D (try with and without VSync):

@Override
    public void update(GameContainer container, int delta)
            throws SlickException {
		double now = Sys.getTime() * 1000000000.0 / Sys.getTimerResolution();//*/
		int updateCount = 0;
		
		//Do as many game updates as we need to, potentially playing catchup.
		while(now - lastUpdateTime > timeBetweenUpdates && updateCount < maxUpdatesBeforeRender) {
			innerUpdate();
			lastUpdateTime += timeBetweenUpdates;
			updateCount++;
			updatesPerSecondCount++;//
		}
		
		//If for some reason an update takes forever, we don't want to do an insane number of catchups.
		//If you were doing some sort of game that needed to keep EXACT time, you would get rid of this.
		if (now - lastUpdateTime > timeBetweenUpdates) {
			lastUpdateTime = now - timeBetweenUpdates;
		}

		interpolation = Math.min(1.0f, (float) ((now - lastUpdateTime) / timeBetweenUpdates));		
		
    	//Update the frames we got.
		int thisSecond = (int) (Sys.getTime() / Sys.getTimerResolution());
		if (thisSecond > lastSecondTime) {
			updatesPerSecond = updatesPerSecondCount;//
			updatesPerSecondCount = 0;//
			lastSecondTime = thisSecond;
		}
    	
    	deltaDisplay = delta;
    }
    
    public void innerUpdate() {
    	// Remember the previous positions *before* stepping, so render()
    	// can interpolate between the last state and the current one.
    	// (Copying them after the update would make lastPositions equal
    	// positions, disabling the interpolation entirely.)
    	for (int i = 0; i < positions.length; i++) {
			lastPositions[i] = positions[i];
		}

    	for (int i = 0; i < positions.length; i++) {
			positions[i] += directions[i] * (SPEED * i + SPEED);
			if (positions[i] <= 0) directions[i] = 1;
			else if (positions[i] >= DEFAULT_WIDTH) directions[i] = -1;
		}
    }

    @Override
    public void render(GameContainer container, Graphics g)
            throws SlickException {
    	//double now = Sys.getTime() * 1000000000.0 / Sys.getTimerResolution();//*/
    	g.scale(DEFAULT_SCALE, DEFAULT_SCALE);
        g.setColor(Color.lightGray);
		g.fillRect(0, 0, DEFAULT_WIDTH, DEFAULT_HEIGHT);//*/
		g.setColor(Color.red);
		for (int i = 0; i < positions.length; i++) {
			int x;
			if (drawInterpolated) {
				x = (int) ((positions[i] - lastPositions[i]) * interpolation + lastPositions[i]);
			} else {
				x = (int) positions[i];
			}
			for (int j = 0; j < 2; j++) {
				//g.fillRect(x, HEIGHTS[j] + 16 * i, 16, 16);
				sprite.draw(x, HEIGHTS[j] + 16 * i);
				//fonts[j].draw(":)", x, HEIGHTS[j] + 16 * i - 16);
			}
		}
		g.setColor(Color.white);
        g.drawString("Hello, Slick world!", 16, 100);
        g.drawString("Delta: " + deltaDisplay, 16, 116);
        g.drawString("Updates per second: " + updatesPerSecond, 16, 132);
        g.drawString("Sys.getTime(): " + Sys.getTime(), 16, 148);
        g.drawString("Sys.getTimerResolution(): " + Sys.getTimerResolution(), 16, 164);
        g.drawString("" + Sys.getTime() * 1000000000.0 / Sys.getTimerResolution(), 16, 180);
        g.drawString("Interpolation: " + interpolation, 16, 196);
        /*interpolation = Math.min(1.0f, (float) ((now - lastRenderTime) / timeBetweenUpdates));
        lastRenderTime = now;//*/
        /*for (int i = 0; i < positions.length; i++) {
			lastPositions[i] = positions[i];
		}//*/
    }

Hrrrm wait a minute. Who here is running with desktop window composition turned on? Hands up!

Cas :slight_smile:

Is this the oh-so-dreaded problem of stuttering? You mean updating the game at 63 FPS and expecting it to look smooth on a 60 Hz display? Yeah, good example. The stuttering is your own fault, because you’re rendering 3 more frames than you can display each second, so every 1/3rd of a second you get a small jump where one frame is never shown. If you actually limited it to 60 FPS properly (see the sketch below), you would not have gotten it. Wow, that was hard.
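For reference, “limiting it properly” with LWJGL can look like this (a minimal sketch; your own update and render calls would go where the comments are):

import org.lwjgl.LWJGLException;
import org.lwjgl.opengl.Display;
import org.lwjgl.opengl.DisplayMode;

public class ProperlyLimited {
    public static void main(String[] args) throws LWJGLException {
        Display.setDisplayMode(new DisplayMode(800, 600));
        Display.create();
        while (!Display.isCloseRequested()) {
            // update(1f / 60f); // fixed 60 Hz step, matching the sync rate below
            // render();
            Display.update();  // swap buffers
            Display.sync(60);  // sleep/yield until the next 60 Hz frame is due
        }
        Display.destroy();
    }
}

The point is simply that the update rate and the display rate are the same 60 Hz, so no rendered frame ever has to be dropped.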

…Sorry if this is a silly question, but… what’s that?

Although, whatever it is, could it have no effect on a Game Maker program while still having an effect on our Java programs? One of our issues is that Game Maker games run just fine on the computers where our Java programs don’t run smoothly.

I seem to remember encountering this problem when I tried LWJGL.

[quote]If you actually limited it to 60 FPS properly, you would not have gotten it. Wow, that was hard.
[/quote]
Alright, how would we go about doing that?