Too much sleep(xx)

Hi,

I recently discovered a strange behaviour regarding sleep on my machines that I wanted to share. Maybe somebody has an idea what causes this. Here we go:

Consider an endless loop doing nothing but a Thread.sleep(20). How many times can it do this in one second?
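The measurement loop I'm describing is essentially this (a minimal sketch; the class and method names are just for illustration):

```java
public class SleepCounter {

    // Count how many Thread.sleep(millis) calls complete in roughly one second.
    static int sleepsPerSecond(long millis) throws InterruptedException {
        long end = System.currentTimeMillis() + 1000;
        int count = 0;
        while (System.currentTimeMillis() < end) {
            Thread.sleep(millis);
            count++;
        }
        return count;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(sleepsPerSecond(20) + " sleeps of 20 ms in one second");
    }
}
```

With a perfectly accurate 20 ms sleep you would expect the printed count to be 50; anything lower means each sleep is overshooting the requested interval.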

On my old Celeron 1Ghz/XP/Java 1.4.2, the answer is 50 (as expected).
On my P4HT 3.2Ghz/XP/Java 1.4.2/Java 1.5, the answer is 33 (for both VMs)
On my PII-400/Debian Linux/Java 1.4.2, the answer is 33.

So far, so good. I changed the value to 19 instead and got 52 on the Celeron and 36 on Linux… but I got 52 on the P4, too. After changing it to 21, I got 48 on the Celeron, 48 on the P4 and 28 on Linux.

I could understand that if the Celeron were always correct and Linux and the P4 weren't, for whatever reason… but why does the P4 behave like the Celeron for 19 and 21, and like the Linux machine for 20?

Any ideas?

Well, by calling sleep you give away control (the OS's scheduler takes over), and if you are lucky you get the CPU back in time… but there is no guarantee.

The shortest time you can sleep (on most OSes) is about 5 ms. For that reason most games don't sleep at all… they call yield() instead.

Since System.currentTimeMillis() only gives you an accuracy of about 10 ms, it is likely that Thread.sleep() behaves just as poorly.
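You can actually observe the clock granularity yourself by spinning until System.currentTimeMillis() changes value and recording the smallest jump (a quick sketch; class and method names are made up):

```java
public class ClockGranularity {

    // Spin until System.currentTimeMillis() changes value several times and
    // return the smallest jump observed, i.e. the apparent clock tick in ms.
    static long smallestTick() {
        long prev = System.currentTimeMillis();
        long min = Long.MAX_VALUE;
        int changes = 0;
        while (changes < 10) {
            long now = System.currentTimeMillis();
            if (now != prev) {
                min = Math.min(min, now - prev);
                prev = now;
                changes++;
            }
        }
        return min;
    }

    public static void main(String[] args) {
        System.out.println("smallest observed tick: " + smallestTick() + " ms");
    }
}
```

On machines with a coarse timer this typically prints 10 or 15/16 ms; on others it prints 1 ms.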

It may behave poorly and be inaccurate… I understand all that. But I really don't understand why 19 and 21 work on this machine as expected, while 20 acts as if I had chosen 30… that makes no sense to me.

Firstly, your P4 is probably hyperthreaded. This means that the normal clock period is 15.625 ms instead of the 10 ms usually found on single-processor machines (under Windows, anyway). This explains the approximately 30 ms intervals you are seeing. The Celeron is not HT, so it uses the normal 10 ms period, which fits nicely with a 20 ms request.

So what happens with the 'odd' requests? To support multimedia use, Windows allows the clock period to be changed. Recent JVMs do this when the requested interval is short and not a multiple of what they think the clock period is (i.e. they appear to assume a 10 ms clock period even on HT/dual-processor machines). I haven't seen any documentation on the exact logic used. The effect is that a request for either 19 or 21 ms causes the clock period to be reduced to perhaps 1 ms, which then enables the OS thread scheduler to sleep for close to the requested period.

Even more exciting is that the clock period is a global property of the machine. Thus, if a separate process is granted a short clock period, your process may suddenly find sleep intervals becoming more (or even less) accurate!
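Because the clock period is global, a workaround that is often mentioned in this context is to start a daemon thread that sleeps on an interval that is not a multiple of 10 ms, so the JVM keeps the short clock period for the whole process. This is an assumption based on the behaviour described above, not documented behaviour, and the names here are made up:

```java
public class HighResTimerHack {

    // Assumption: while any thread sleeps on a non-multiple-of-10-ms interval,
    // the JVM keeps the Windows clock period short for the entire process.
    public static Thread startTimerHack() {
        Thread t = new Thread(new Runnable() {
            public void run() {
                try {
                    Thread.sleep(Long.MAX_VALUE); // effectively sleeps forever
                } catch (InterruptedException e) {
                    // let the thread exit if interrupted
                }
            }
        });
        t.setDaemon(true); // don't keep the JVM alive because of this thread
        t.start();
        return t;
    }

    public static void main(String[] args) {
        startTimerHack();
        // ... rest of the application; sleeps should now be closer to the
        // requested interval on affected Windows machines
    }
}
```

Since the thread is a daemon, it does not prevent the JVM from exiting normally.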

http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=4500388
http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=4717583

You could also try running with this JVM flag:
-XX:+ForceTimeHighResolution

Thank you for your answer. It really explains the behaviour I'm seeing. I already suspected it had something to do with HT, but I wasn't aware that the clock period is actually different. I'll try that flag later and see if it helps.