Thread.Sleep(#) in gameloop

I’m using the NinjaCave LWJGL 2 tutorials for this, and I’m very new to Java and to game programming in general.

I used this example from NinjaCave:


import org.lwjgl.LWJGLException;
import org.lwjgl.opengl.Display;
import org.lwjgl.opengl.DisplayMode;

public class DisplayExample {

    public void start() {
        try {
            Display.setDisplayMode(new DisplayMode(800, 600));
            Display.create();
        } catch (LWJGLException e) {
            e.printStackTrace();
            System.exit(0);
        }

        // init OpenGL here

        while (!Display.isCloseRequested()) {

            // render OpenGL here

            Display.update();
        }

        Display.destroy();
    }

    public static void main(String[] argv) {
        DisplayExample displayExample = new DisplayExample();
        displayExample.start();
    }
}

The example is very easy to understand and I got it running without a problem. The trouble started once the application was running: my CPU fan spun up from idle to very high almost immediately, which was my first clue that CPU utilization had spiked. I figured it had to be the game loop, which is distressing since there is nothing in the loop yet. So I added the following inside the loop:


try {
    Thread.sleep(50);
} catch (Exception e) {
    e.printStackTrace();
}

This immediately solved the problem. Excellent, but is it correct coding? Java has a lot of libraries, such as LWJGL 2 which I’m using here, plus the standard API, threading practices, and so on. Since I know very little about all of this, can someone tell me whether calling Thread.sleep() like this conflicts with how LWJGL is meant to be used? Is this the right way, or is there a better way to stabilize the initial game loop before I start adding drawing and updates to it?

OK, so your CPU usage went straight up because the first loop you showed us has no pause: it spins through the loop body as fast as it possibly can, which is really bad. You fixed it, which is good, but a fixed Thread.sleep() is not commonly used because the right amount of sleep changes depending on how long the work between calls takes. Calculating the difference between two points in time is much more viable: take the time at the start, take the time again into a second variable, and if the difference is large enough call the update function; if not, just keep checking the time. (A rough sketch of that idea is below.)
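To make that concrete, here is a minimal sketch of the polling idea in plain Java (not LWJGL-specific; the interval constant and the update() method are placeholder names, not from the tutorial):

public class PollingLoopSketch {

    // Roughly 60 updates per second (placeholder value)
    private static final long UPDATE_INTERVAL_NANOS = 16_666_667L;

    public static void main(String[] args) {
        long lastUpdate = System.nanoTime();

        while (true) {
            long now = System.nanoTime();
            // Only update once enough time has passed; otherwise keep checking the clock.
            if (now - lastUpdate >= UPDATE_INTERVAL_NANOS) {
                lastUpdate = now;
                update();
            }
        }
    }

    private static void update() {
        // game logic and rendering would go here
    }
}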

Just checking the time over and over yields the same problem: now the CPU is putting all of its processing power into that check. Shouldn’t there be a better way?

What I currently do is call Thread.sleep(1) and then calculate the time difference. Since Thread.sleep(1) relies on the JVM, which isn’t super accurate (it polls the OS timer only about every 15 ms by default), switching to Thread.yield() once you get close to the next update time seems to give better accuracy while still giving your CPU a break. Roughly like the sketch below.
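Something along these lines (a rough sketch with made-up names, not the actual LWJGL sync code):

    /** Waits until System.nanoTime() reaches targetNanos: sleep while far away, yield when close. */
    private static void waitUntil(long targetNanos) {
        while (System.nanoTime() < targetNanos) {
            long remaining = targetNanos - System.nanoTime();
            try {
                if (remaining > 2_000_000L) {
                    Thread.sleep(1);   // plenty of time left: give the CPU a break
                } else {
                    Thread.yield();    // close to the target: finer-grained waiting
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return;
            }
        }
    }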

Edit: reworded

[quote=“thedanisaur,post:3,topic:55063”]
What I currently do is call Thread.sleep(1) and then calculate the time difference.
[/quote]

You mean you copied LWJGL’s Sync.java code :slight_smile:

Seriously, there is no ~15ms variation in the sleep() timer on most (including all recent) OSs. You can also force a high precision timer by creating an immortal thread in your init code.


    /** Starts a daemon thread that sleeps forever, which forces the high-precision timer on some Windows versions. */
    public static void forcePrecisionTimer() {
        new Thread() {
            {
                setDaemon(true);
                start();
            }

            @Override
            public void run() {
                //noinspection InfiniteLoopStatement
                while (true) {
                    try {
                        Thread.sleep(Long.MAX_VALUE);
                    } catch (Exception ex) {
                        // plug in your own error handling here; the sleep is simply retried
                        ex.printStackTrace();
                    }
                }
            }
        };
    }

@ags1 I worded that poorly, but the JVM used to only poll the OS for the time every ~15 ms (or so I’ve been led to believe).

@Riven :-X

If I recall correctly, the real issue was that some versions of Windows had a system timer with roughly 16 ms (about 60 Hz) resolution. You could, however, force Java to use the high-precision timer with the forever-sleep code I posted above. I’ve seen this 16 ms imprecision on older versions of Windows, but not on Windows 7 or 8.

OK, great, that’s what I suspected. Now, about all the answers: maybe a little more help, please.

For two different types of games:

  1. an RPG like Zelda

  2. a point-and-click adventure like Space Quest

Would there be a different way I’d want to do the game loop timing for each? Or is it generic, with both timed the same way based on updates and draws? I might sound like I know more than I really do, so please keep it simple.

So what would a properly timed loop using LWJGL look like? Can someone please show a code snippet, something that on average works for most game types that need animated sprites? I’m leaning toward Zelda-like games.

In other words - I’m trying to make sure I understand all the posts so far.

Hey everyone

An apology: the NinjaCave tutorials include a timing tutorial that I think explains frame-independent movement or something similar. Since I haven’t done that one yet, I don’t know whether it will help the CPU problem, so I will look over it as well.

I do know to avoid a bare Thread.sleep() now for sure, at least with LWJGL.

If anyone has more input on this I’m definitely listening.

…and sure enough, a rotating cube with no performance hit. So the tutorial does cover it.
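For anyone who lands on this thread later, my loop now looks roughly like this, based on the timing tutorial as I understood it (typed from memory, so treat it as a sketch rather than the exact tutorial code). Display.sync(60) caps the loop at about 60 FPS and sleeps the rest of the time, and the delta value is what you would use for frame-independent movement:

import org.lwjgl.LWJGLException;
import org.lwjgl.Sys;
import org.lwjgl.opengl.Display;
import org.lwjgl.opengl.DisplayMode;

public class TimedDisplayExample {

    private long lastFrame;

    public void start() {
        try {
            Display.setDisplayMode(new DisplayMode(800, 600));
            Display.create();
        } catch (LWJGLException e) {
            e.printStackTrace();
            System.exit(0);
        }

        lastFrame = getTime();

        while (!Display.isCloseRequested()) {
            int delta = getDelta();   // milliseconds since the last frame

            // update and render with delta here

            Display.update();
            Display.sync(60);         // limits the loop to ~60 FPS and lets the CPU rest
        }

        Display.destroy();
    }

    /** Current time in milliseconds using LWJGL's timer. */
    private long getTime() {
        return (Sys.getTime() * 1000) / Sys.getTimerResolution();
    }

    /** Milliseconds elapsed since the previous call. */
    private int getDelta() {
        long time = getTime();
        int delta = (int) (time - lastFrame);
        lastFrame = time;
        return delta;
    }

    public static void main(String[] args) {
        new TimedDisplayExample().start();
    }
}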

Coming from .NET (I don’t want to knock .NET), it’s much, much easier to find working code examples in Java.

LWJGL really is cool!