C++ developers generally don’t use threads; and when they rely on timers, they have yet to find out that they don’t actually work. Valve, for example, don’t know about the problem, as HL2 runs like shite on my laptop.
Cas
And I suspect a large number just run tick-based and use vsync. Which raises the question - just why can’t we get a guaranteed vsync?
Because many shitty graphics chipsets simply don’t support it. None of the Intel ones work, for example. And of the ones that do, there is a daft setting that actually allows the user to completely override what the program specifies. Duh. And ingeniously there’s no API to detect this except on Nvidia.
Cas
Crappy Intel hardware aside (anyone who buys that deserves whatever they get), why on earth aren’t ATi at least offering the same as nVidia? Sheer silliness. >:(
Just a note, this is not a bug.
Thread.sleep() gives no promise of any particular level of accuracy.
In general it’s going to be up to the underlying OS.
[quote]Just hit this problem on jdk1.5.0, and I’m extremely disappointed.
Thread.sleep( 20 ) now consistently sleeps for 31ms on my Sempron machine. I remember seeing similar behaviour on much earlier JVM versions.
[/quote]
What operating system (including version, if Windows) are you using? In any case, try Thread.sleep(19), Thread.sleep(21) and Thread.sleep(9) and report how long the sleep period is (measured using System.nanoTime()).
Here’s the test code, in case I happen to be doing something not-recommended. I’m running XP SP2 on a 1500mhz Sempron.
public class Test implements Runnable {
    public void run() {
        // Repeatedly measure how long Thread.sleep( 20 ) actually takes, in nanoseconds
        while( true ) {
            long timer = System.nanoTime();
            try { Thread.sleep( 20 ); } catch( InterruptedException e ) {}
            System.out.println( System.nanoTime() - timer );
        }
    }

    public static void main( String[] args ) {
        new Thread( new Test() ).start();
    }
}
The output is quite amusing!
9ms :~ 9600000
19ms :~ 19300000
20ms :~ 31000000
21ms :~ 21300000
Looks like 20ms triggers the scheduler into doing something weird on my machine.
I know I got a bit too emotional in my rant, but in these days of gigahertz and nanoseconds it’s not unreasonable to expect to be able to make something update once every 20ms, is it?
That output is much as I expected. The JVM seems to assume that the natural resolution is 10ms and passes multiples of 10ms directly to the normal OS methods. However, for values which are not multiples of 10ms, it makes some additional calls to alter the timing resolution.
It would be better if that 10ms value wasn’t hard coded, but measured either at JVM install time or each time the JVM starts.
The normal clock period on this machine is 15.625ms (which is typical for the multiprocessor kernel).
Perhaps you should try 10ms as well; I would predict a 15.6ms result.
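If you want to check all the suspect values in one go, here’s a small variation of the test code above (the particular set of values and the number of runs are just my picks):

public class SleepProbe {
    public static void main( String[] args ) throws InterruptedException {
        long[] requests = { 9, 10, 19, 20, 21 };
        for( long request : requests ) {
            long total = 0;
            int runs = 20;
            // Average over several runs to smooth out scheduler jitter
            for( int i = 0; i < runs; i++ ) {
                long start = System.nanoTime();
                Thread.sleep( request );
                total += System.nanoTime() - start;
            }
            System.out.println( request + "ms :~ " + ( total / runs ) );
        }
    }
}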
Just ran the test program again:
10ms :~ 10600000
20ms :~ 20200000
So it looks like my kernel has decided to use a 10ms timeslice again. I have no idea what could have caused this to change.
I guess that means there is a problem with the JVM, as it assumes the default timeslice on Windows will always be 10ms?
HotSpot just passes the given time through to the OS. Nothing magical about 10msec, 15.6msec or any other number, at least not to HotSpot.
Before you can trust any such timing thing to be reliable you gotta run it “under load” - get a bunch of other threads running doing variable amounts of work+sleep (your AI wakes up, works for a quanta, sleeps, you decide to background load some music for the next scene, etc). All those other threads running around will also mess with your timer’s response.
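For example, something along these lines (the number of threads and the work/sleep split are arbitrary):

public class LoadedSleepTest {
    public static void main( String[] args ) throws InterruptedException {
        // A few background threads doing variable amounts of work + sleep,
        // standing in for AI, music streaming and so on
        for( int t = 0; t < 4; t++ ) {
            Thread worker = new Thread( new Runnable() {
                public void run() {
                    long junk = 0;
                    while( true ) {
                        for( int n = 0; n < 500000; n++ ) junk += n; // burn some CPU
                        try { Thread.sleep( 5 ); } catch( InterruptedException e ) { return; }
                    }
                }
            } );
            worker.setDaemon( true );
            worker.start();
        }
        // Now measure the timer's response while all that is going on
        while( true ) {
            long start = System.nanoTime();
            Thread.sleep( 20 );
            System.out.println( System.nanoTime() - start );
        }
    }
}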
Hi
I got a workaround with System.currentTimeMillis(), which seems to work fine: I simply count the frames and then divide the number of frames that passed by the time difference.
long timeBefore = System.currentTimeMillis();
long timeNow;
long elapsedFrames = 0;
while( true ) {
    timeNow = System.currentTimeMillis();
    if( timeNow == timeBefore ) elapsedFrames++;
    else {
        long diffTime = timeNow - timeBefore; // millis since the last tick
        long fps = elapsedFrames * 1000 / diffTime; // frames per second, without dividing by zero seconds
        elapsedFrames = 1;
        timeBefore = timeNow;
    }
}
Is this a good workaround?
Arne
[quote]HotSpot just passes the given time through to through to the OS. Nothing magical about 10msec, 15.6msec or any other number, at least not to HotSpot.
[/quote]
The value affects whether those calls to timeBeginPeriod/timeEndPeriod occur.
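That’s also the basis of the well-known workaround: park a daemon thread on a sleep value that isn’t a multiple of 10ms, so the JVM keeps the Windows timer resolution raised for the life of the process. A minimal sketch:

public class TimerHack {
    // Call once at startup; the sleeping daemon keeps the raised resolution in effect
    public static void install() {
        Thread t = new Thread( new Runnable() {
            public void run() {
                try {
                    Thread.sleep( Long.MAX_VALUE ); // not a multiple of 10ms
                } catch( InterruptedException e ) {
                }
            }
        }, "timer-hack" );
        t.setDaemon( true );
        t.start();
    }
}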
Well, obviously, I designed my code to be able to handle a delay every once in a while, but when the timer is consistently late by a massive margin (in computer terms) it makes it difficult to write reliable code.
Instead of drawing a frame every 20ms and sleeping to avoid wasting CPU time, you have to max out the CPU drawing frames one after another, additionally having to guess how long each one took to draw!
Meh, I hate computers
[quote]I got a workaround with System.currentTimeMillis(), which seems to work fine … Is this a good workaround?
[/quote]
I think so, as my solution does almost exactly the same thing. It can also provide an estimate of nanosecond timing (edit - well, it does now):
public class TimerTest implements Runnable {
    public void run() {
        long tstart, tend;
        long lcount = 1, nanos = 0;
        tstart = System.currentTimeMillis();
        while( true ) {
            // Do stuff, use nanos to estimate loop time
            for( int n = 0; n < 100000; n++ );
            System.out.print( nanos + ", " );
            // Stop doing stuff
            tend = System.currentTimeMillis();
            if( tend != tstart ) {
                // Average nanoseconds per loop iteration since the clock last ticked
                nanos = ( tend - tstart ) * 1000000 / lcount;
                tstart = tend;
                lcount = 0;
            }
            lcount++;
        }
    }

    public static void main( String[] args ) {
        new Thread( new TimerTest() ).start();
    }
}
Right now I’ve created a class that uses System.nanoTime() to measure time elapsed in milliseconds like so…
In my primary loop I have the following…
Now, I don’t have sleepTimer being used here because I haven’t gotten that far yet, since I’ve been spending most of my time on the renderer. Even without measuring the exact time the sleep actually took, this still keeps my engine within 2 FPS of the target 30. If you add another long above the while() loop that stores the previous pass’s sleep time, you can subtract that from the next sleep to compensate for it taking longer than expected. My engine has a thread for playing music, and when you Thread.sleep() your main thread on purpose, it’s going to lose context to the music thread and there’s really no telling just how long it will run. It might take 30ms because there’s a lot of disk access going on and it had to wait to read the next bit of music, or some such.
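The compensation idea ends up looking roughly like this (just a sketch with made-up names like TARGET_FRAME_TIME and work(); it uses plain Thread.sleep() and the real loop differs):

public class GameLoop {
    static final long TARGET_FRAME_TIME = 1000 / 30; // ~33ms for 30 FPS

    public static void main( String[] args ) {
        long oversleep = 0; // how late the previous sleep turned out to be
        while( true ) {
            long frameStart = System.nanoTime();
            work(); // stand-in for update + render
            long elapsed = ( System.nanoTime() - frameStart ) / 1000000;
            long sleepTime = TARGET_FRAME_TIME - elapsed - oversleep;
            long sleepStart = System.nanoTime();
            if( sleepTime > 0 ) {
                try { Thread.sleep( sleepTime ); } catch( InterruptedException e ) {}
            }
            long slept = ( System.nanoTime() - sleepStart ) / 1000000;
            // Carry the overshoot into the next frame's budget
            oversleep = Math.max( 0, slept - Math.max( 0, sleepTime ) );
        }
    }

    static void work() {
        for( int n = 0; n < 100000; n++ ); // pretend to update and render
    }
}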
Additionally, I don’t use Thread.sleep() inside the main loop; I use Util.sleep (Util being my own abstract class), which just wraps Thread.sleep() with a few extras, such as detecting that the math above could quite possibly ask Thread.sleep() to sleep for 0ms, or even a negative time, which would cause an exception. Since I’m already within 2 FPS of the target, I don’t really need to do this.
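Something along these lines (a sketch of the shape of such a wrapper; the actual Util class may differ):

public abstract class Util {
    // Sleep wrapper that refuses zero or negative requests and swallows interrupts
    public static void sleep( long millis ) {
        if( millis <= 0 ) return; // the frame math can go to 0 or negative; just skip
        try {
            Thread.sleep( millis );
        } catch( InterruptedException e ) {
        }
    }
}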
Hope this helps.