[Solved] LWJGL fullscreen CPU load problem

Hello all!
I spent some time learning basic Java stuff since my last visit (as you people suggested) and ended up with LWJGL.
I have a little problem which stops me from moving on to learning OpenGL and actual game making.

Here is the code first:

import org.lwjgl.LWJGLException;
import org.lwjgl.Sys;
import org.lwjgl.input.Keyboard;
import org.lwjgl.opengl.Display;


public class Game {
	long lastFrame;    // time of the previous frame, used for the delta calculation
	int fps;           // frames counted during the current second
	long lastFPS;      // last time the FPS counter was updated
	boolean fullscreen = true;
	boolean vsync = true;
	int refreshRate = Display.getDesktopDisplayMode().getFrequency();
	int currentWidth = Display.getDesktopDisplayMode().getWidth();
	int currentHeight = Display.getDesktopDisplayMode().getHeight();
	boolean running = true;    // main loop keeps going while this is true

	public void start() {
		try {
			Display.setVSyncEnabled(vsync);
			Display.setFullscreen(fullscreen);
			Display.create();
		} catch (LWJGLException e) {
			e.printStackTrace();
			System.exit(0);
		}

		getDelta();          // prime lastFrame so the first delta isn't huge
		lastFPS = getTime(); // start the FPS counter

		while (!Display.isCloseRequested() && running) {
			int delta = getDelta();

			update(delta);

			Display.update();          // swap buffers and process window/input events
			Display.sync(refreshRate); // cap the frame rate at the monitor's refresh rate
		}

		Display.destroy();
	}

	public void update(int delta) {
		while (Keyboard.next()) {
			if (Keyboard.getEventKeyState()) {
				if (Keyboard.getEventKey() == Keyboard.KEY_ESCAPE) {
					running = false;
				}
				if (Keyboard.getEventKey() == Keyboard.KEY_F) {
					fullscreen = false;
					try {
						Display.setFullscreen(false);
					} catch (LWJGLException e) {
						e.printStackTrace();
						System.exit(0);
					}
				}
			}
		}

		updateFPS();
	}

	/** Returns the number of milliseconds since the last call. */
	public int getDelta() {
		long time = getTime();
		int delta = (int) (time - lastFrame);
		lastFrame = time;

		return delta;
	}

	/** Returns the current time in milliseconds, using LWJGL's high-resolution timer. */
	public long getTime() {
		return (Sys.getTime() * 1000) / Sys.getTimerResolution();
	}

	/** Updates the FPS counter in the window title once per second. */
	public void updateFPS() {
		if (getTime() - lastFPS > 1000) {
			Display.setTitle("FPS: " + fps);
			fps = 0;
			lastFPS += 1000;
		}
		fps++;
	}

	public static void main(String[] argv) {
		Game game = new Game();
		game.start();
	}
}

It’s just a fullscreen window. If I add some graphics, calculations, etc., everything still runs perfectly smooth. The problem is the CPU load. The moment the program is launched, one of the CPU cores goes to 100% load and stays there, and my CPU cooler starts to spin faster. What is the problem here, and what should I do to solve this issue?

I have no idea why you need those deltas, but here is what I can see is missing:

* Initialize the projection matrix. The easiest way is:


glMatrixMode(GL_PROJECTION);
glOrtho(0, Display.getWidth(), 0, Display.getHeight(), -1, 1);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();

* If you’re rendering something, you need to clear the color buffer. If you’re not using the depth buffer, it is this line:


glClear(GL_COLOR_BUFFER_BIT);

If you’re using the depth buffer for something:


glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

I have no idea what kind of problem you’re running into, but this code doesn’t seem to cause any serious CPU problems…
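
For what it’s worth, here is roughly where those calls would go in your start() method. This is just a sketch assuming the fixed-function pipeline and a 2D pixel-coordinate projection (the GL calls are static imports from org.lwjgl.opengl.GL11), so adjust to taste:

// once, right after Display.create():
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0, Display.getWidth(), 0, Display.getHeight(), -1, 1);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();

// then at the top of every iteration of the render loop:
glClear(GL_COLOR_BUFFER_BIT);
// ... draw your 2D stuff here ...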

Thanks for your response.
But I know that, and I’m making a 2D game, so I don’t use the depth buffer. I deleted everything irrelevant just to show you that the CPU load is not related to my methods/other classes/etc. It’s just a window, and it heavily loads one of my CPU cores… By the way, this load does not affect performance. No stutters/lags. It just fries my CPU core. :slight_smile:
I have an i7-3930K running at 4.4 GHz and an Nvidia GTX 680, and a 100% stable system, if that matters. Hope you can help me.

P.S. I need the deltas to make game logic that won’t depend on the FPS.
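
For example, something roughly like this, so movement is measured in pixels per second instead of pixels per frame (xPosition and speed are just made-up names to show the idea):

float speed = 200f;                    // pixels per second
xPosition += speed * (delta / 1000f);  // delta is in milliseconds, so movement is frame-rate independent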

I don’t know about other people’s opinions, but if you’re just starting out, making a perfect game shouldn’t be your goal.

I tried running this code. I don’t know how accurate Windows Task Manager is, but when I run this code, I get almost no CPU usage (I’m looking at that percent thingy; it shows 0-3% when running your code). Either you deleted the code that was causing this, or your computer is set up in a certain way.

Nooo! That’s not my way. Perfect or nothing. :smiley:

Now that is super strange. What IDE do you use? Maybe it’s something related to Eclipse?

Eclipse here too bro :stuck_out_tongue:
You should press Ctrl+Shift+F to format your code. It looks cleaner that way.

I use Ctrl+Shift+F all the time…
What version of LWJGL do you have? Mine is 2.9.1.

It shouldn’t matter which version you are using; they are tested and made sure to be stable. It sounds like a driver issue, IMHO. I also experience next to no load when testing your code. It’s just not something the LWJGL team could get away with, or miss for that matter.

Try this:
Verify that Display.getDesktopDisplayMode().getFrequency() returns a positive number, like 60; otherwise set refreshRate to 60 manually.
Also make sure vsync actually gets enabled, and check that you haven’t forced vsync to stay off in the Nvidia control panel.
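
Something along these lines (just a sketch; 60 is an assumed fallback value):

int refreshRate = Display.getDesktopDisplayMode().getFrequency();
System.out.println("Reported refresh rate: " + refreshRate);
if (refreshRate <= 0) {
    refreshRate = 60; // fall back to 60 if the driver doesn't report a frequency
}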

Nope, nothing. :frowning:

Could someone with 331.82 drivers try to run this code?

UPD: I commented out Display.setVSyncEnabled(true); and everything is fine. 0% usage. :frowning:

I have the 331.82 driver and tried out your code, and I was able to reproduce the issue.
After a bit of messing around, I discovered that enabling vsync seems to be what’s causing the issue, at least for me.

Yep, enabling vsync causes this problem. I discovered it just a second before you posted your suggestion.
So… what should I do now that I know it’s driver related? Just use vsync, continue coding and forget about it, hoping that people won’t have the 331.82 driver when my game is released?
Or is there any way to track down this leak?

P.S. Enabling adaptive vsync from the Nvidia control panel and running my code also brings up this problem.

V-sync effectively does the same thing as Display.sync(refreshRate) (though it’s unreliable and may be disabled in the user’s driver, so don’t rely on it), but how this is done is up to the driver. Most likely the driver is doing some stupid looping to keep the game’s rendering rate at the refresh rate, hence the CPU load you’re seeing. This shouldn’t be a performance problem, since it only kicks in when your game could be rendered at a faster rate than 60 FPS (or whatever refresh rate you have). I agree that it’s probably not exactly optimal for battery life. Try using Display.sync(refreshRate - 1) and the problem should go away, since the game is now rendering at 59 FPS, so the driver doesn’t have to do any synchronization itself.
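
In other words, something like this in the loop (just a sketch of the idea, reusing the refreshRate field from your code):

Display.setVSyncEnabled(true);     // still no tearing
// ...
while (!Display.isCloseRequested()) {
    // update and render here
    Display.update();
    Display.sync(refreshRate - 1); // e.g. 59 on a 60 Hz screen, so the driver never has to busy-wait
}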

I am sorry but I think I didn’t really get what you just said.

Display.sync(int) is just an FPS cap. It doesn’t do what vsync does. The main goal of vsync is to eliminate screen tearing, and screen tearing can only be eliminated by turning on vsync. Well, at least for now, until G-sync comes to the market. It also helps with some stuttering.
Yes, that’s right, vsync can be turned off by the user, and I believe it shouldn’t be if he is looking for the smoothest game experience.

Why do you think it’s not a performance problem? It actually kicks in at exactly 60 FPS (when you turn vsync on). It just uses 100% of one core because of this bug. Yeah, I don’t see performance issues because I have 12 cores, but if I had a dual-core CPU I believe I would get a 50% performance reduction.

Did you mean to leave vsync on and cap the FPS at 59? If so, there is still a huge CPU drain (not 100% actually, maybe ~80%), but some pretty bad stuttering also appears.

Sorry again if I misunderstood you.

Just because your driver uses one core at 100% doesn’t mean it will affect your game’s performance. The driver is basically filling the idle time with a busy loop, waiting for the signal from the GPU that the next frame can be rendered. The more you render, the fewer cycles will be spent in this loop, to the point where it’s marginal and can be ignored.

I know what V-sync does. I’m just saying that as a side effect it effectively does the same thing as Display.sync(refreshRate), i.e. it limits the FPS to the refresh rate. However, how this works is left completely to the driver. For all we know, it may be a simple loop like this:


while(!monitorIsReady()); //Loop until monitor is ready for a new frame

How this is done is completely up to the driver. I even get 100% CPU load on one core if my game is GPU limited. In that case the CPU is waiting for the GPU command queue to finish so it can submit more OpenGL commands. When rendering a few huge HDR particles covering the whole screen I get around 40 FPS, but if I simply make the particles 1/100 as large I get 2500 FPS. Both use 100% CPU on one core, but the first one is GPU limited.

Like Riven said, it doesn’t matter when it comes to performance. With V-sync on, you can’t go over the refresh rate anyway. When GPU limited you can’t submit any more OpenGL commands until the queued-up commands have been processed by the GPU. Any time the CPU has left over is effectively wasted anyway. Sure, the driver spinning and using 100% CPU on one core is a waste of electricity, but it won’t affect your game’s performance, since it only does that when the CPU can’t continue rendering for some other reason. It may however slow down other processes running at the same time (possibly severely if you’re on a single core), but on a dual core you’ll still have another core left over, and most people don’t run heavy programs in the background when gaming. It’s bad practice of course, but there’s nothing you can do about it.

Here are some examples (I have a 90 Hz monitor):

CPU limited to 2500 FPS: 100% CPU load. The driver is not wasting any CPU time here, since the CPU never has to wait for anything.
GPU limited to any FPS: 100% CPU load. The driver IS doing some mindless spinning to fill in the time until it can submit more OpenGL commands, but everything is running as fast as it can run.
V-synced to 90 FPS: 100% CPU load. The driver is mindlessly spinning to fill in the time until the monitor is ready for a new frame.
Display.sync(90): 0-1% CPU load. LWJGL’s sync() function does a better job at synchronizing to 90 FPS than the driver, so it simply sleeps instead of spinning (a sketch of this sleep-based approach follows after this list).
V-sync on + Display.sync(89): 0-1% CPU load. No tearing, but one frame missed per second (that stutter you talked about). The driver believes the CPU is doing its best to submit OpenGL commands but that it isn’t fast enough to render at 90 FPS, so it won’t do any spinning at all since there’s no command queue or monitor to wait for.

This was on a GTX 770 with the latest drivers.
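
For reference, a sleep-based frame cap in the spirit of Display.sync() could look roughly like this. This is a simplified sketch, not LWJGL’s actual implementation (which, as far as I know, is more careful about timer accuracy); the point is only that sleeping leaves the core idle where the driver’s spin-loop does not:

// Simplified sketch of a sleep-based frame cap (NOT LWJGL's actual Display.sync()).
public class SleepSync {
    private static long lastTime = System.nanoTime();

    public static void sync(int fps) {
        long frameTime = 1000000000L / fps;    // target frame duration in nanoseconds
        long target = lastTime + frameTime;
        long now;
        while ((now = System.nanoTime()) < target) {
            try {
                Thread.sleep(1);               // give the core back to the OS instead of spinning
            } catch (InterruptedException e) {
                break;
            }
        }
        lastTime = now;
    }
}

You’d call SleepSync.sync(refreshRate) at the end of each frame, in the same place you’d call Display.sync().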

Also, V-sync sucks in any fast-paced game. The driver may buffer OpenGL commands for any number of frames (defaults to 3, I think) before it starts blocking the program (= using 100% CPU load) from issuing any more OpenGL commands, so you’re effectively getting a much larger delay between issuing OpenGL commands and the result of them appearing on the screen. This is the cause of the infamous V-sync “input lag”, since the time it takes for your keyboard and mouse input to take effect is very noticeably higher. Frame limiting (= Display.sync()) to an FPS slightly under the monitor’s refresh rate is a way of reducing the input lag, since it prevents the OpenGL command queue from becoming filled, but again this introduces stuttering.

You mentioned G-sync, but in this case G-sync won’t help as far as I know. G-sync is only superior to V-sync when rendering at an FPS LOWER than the screen’s refresh rate. If your game is rendered faster than 60 FPS on a 60 Hz screen, G-sync will work exactly like V-sync. The key difference is what happens when you’re CPU or GPU limited and can’t render at 60 FPS. With V-sync, if you’re rendering at 59 FPS then once a second you won’t have a new frame ready in time to show, so the monitor simply shows the previously rendered frame one more time. The fact that one out of 60 frames is shown for twice as long is easily recognizable as stuttering, and this is where G-sync shines. G-sync effectively does the opposite of V-sync: instead of synchronizing the game’s FPS to the refresh rate, it synchronizes the refresh rate to the FPS, so the monitor is now dynamically changing its refresh rate. Suddenly you have a 59 Hz monitor, and 59 FPS with perfect V-sync on a 59 Hz screen has no stuttering. I’d dare say that 40 FPS will look acceptable, if not great, with G-sync, simply because each frame is displayed for exactly the same amount of time. But what happens when the game is rendering at 60+ FPS? Your monitor can’t refresh that fast, so it’ll simply fall back to limiting the game to 60 FPS just like V-sync does.

Thanks a lot! I get it now.
This actually is very sad. :frowning:
And my last question related to this problem is… if it’s a driver-related issue, then why doesn’t every game that uses vsync suffer from it? I mean… when I play Terraria (just another 2D game example) or some AAA titles, everything is fine. For example, Terraria never loads any of my CPU cores at 100%.

I think you might be partially wrong. First of all, G-sync will be implemented in 144 Hz monitors. Even if it limits the FPS to 144 with vsync, it will significantly decrease input lag (if there is any input lag at 144 Hz at all). Nvidia mentioned that G-sync completely eliminates input lag, so it might mean that monitors with G-sync modules will never use vsync, because vsync implies input lag… More likely it’ll use something like an FPS cap (refresh rate - 1) at 144 Hz to actually use G-sync. This is just my guess, but I believe G-sync is a much greater achievement than we can imagine.

I don’t think I said anything that contradicts what you said. How would G-sync be any better than V-sync when your game’s FPS is capped in the exact same way? I’m not buying that G-sync alone would fix that; rather, you’d manually frame limit to 140 FPS or so and then let G-sync eliminate the stuttering. But any monitor-related stuttering will be massively reduced by a 144 Hz refresh rate in the first place, so if you have that kind of refresh rate then G-sync won’t do much anyway, except remove tearing. A single repeated frame is much less significant on a 144 Hz monitor, so only adding the technology to those screens sounds pretty stupid to me.

Examples with V-sync (the arithmetic behind these numbers is sketched after the list):

24 FPS video on a 60 Hz monitor: 2.50 refreshes per frame, meaning that each video frame is shown for 2 or 3 refreshes, alternating. That’s 33.3 ms and 50.0 ms respectively, a pretty big variation.
24 FPS video on a 144 Hz monitor: 6.00 refreshes per frame. Since 144 is a multiple of 24, it’s a lot better suited for 24 FPS video. Each frame is displayed for exactly 6 refreshes (about 41.7 ms), which is optimal.
25 FPS video on a 144 Hz monitor: 5.76 refreshes per frame. Each video frame is displayed for either 34.7 ms or 41.7 ms, a much smaller variation than you had on the 60 Hz monitor.

55 FPS game on a 60 Hz monitor: 1.09 refreshes per frame. Each game frame is displayed for either 16.7 or 33.3 ms. Roughly every 12th refresh shows a duplicate frame, resulting in easily visible stuttering.
55 FPS game on a 144 Hz monitor: 2.62 refreshes per frame. Each game frame is displayed for either 13.9 or 20.8 ms. Visible stutter? Probably not. Maybe if you’re looking for it.
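
A quick sketch to reproduce the numbers above from nothing but the frame rate and the refresh rate:

// Refreshes per game/video frame, and the resulting short/long display times per frame.
public class RefreshMath {
    static void show(double fps, double hz) {
        double refreshesPerFrame = hz / fps;                            // e.g. 60 / 24 = 2.50
        double shortMs = Math.floor(refreshesPerFrame) * 1000.0 / hz;   // frame shown for floor(n) refreshes
        double longMs = Math.ceil(refreshesPerFrame) * 1000.0 / hz;     // or for ceil(n) refreshes
        System.out.printf("%.0f FPS on a %.0f Hz monitor: %.2f refreshes per frame (%.1f-%.1f ms)%n",
                fps, hz, refreshesPerFrame, shortMs, longMs);
    }

    public static void main(String[] args) {
        show(24, 60);   // 2.50 refreshes per frame, 33.3-50.0 ms
        show(24, 144);  // 6.00 refreshes per frame, 41.7-41.7 ms (every frame identical)
        show(25, 144);  // 5.76 refreshes per frame, 34.7-41.7 ms
        show(55, 60);   // 1.09 refreshes per frame, 16.7-33.3 ms
        show(55, 144);  // 2.62 refreshes per frame, 13.9-20.8 ms
    }
}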

Using G-sync, 24/25 FPS videos would look perfectly fine on monitors with any refresh rate, since they’d be able to adapt their refresh rate to 24/25 FPS, probably by running at 48/50 Hz and displaying each frame twice. However, the impact of that is smaller the higher the monitor’s refresh rate is. It’d make much more sense to put G-sync on low-end 60 Hz monitors.