How reliable is System.currentTimeMillis()? Is it possible that currentTimeMillis() starts falling behind if the computer is under heavy load? I'm using it to time frames per second and it seems to behave oddly sometimes…
currentTimeMillis() may have a crap timer resolution, but it shouldn't suffer from drift.
How are you calculating your FPS?
Here’s the code. Let me know if it doesn’t make sense.
public class Engine extends Thread { // wrapped in a Thread subclass so it compiles standalone
    private static boolean engineRunning = true;
    private static long startTime;
    private static long sleepTime = 0;           // long instead of int, so the math can't overflow
    private static final int INTERVAL = 25;      // target frame length in ms
    public static int currFrame = 0;

    public void run() {
        startTime = System.currentTimeMillis();
        while (engineRunning) {
            // time remaining until the next frame is due
            sleepTime = startTime + ((long) currFrame * INTERVAL) - System.currentTimeMillis();
            if (sleepTime < 0) {
                // scheduler is behind! This will cause huge problems!
                System.out.println("Scheduler is behind: " + sleepTime + " ms. ("
                        + (System.currentTimeMillis() - startTime) + ")");
            } else {
                try { sleep(sleepTime); }
                catch (InterruptedException e) {}
            }
            // new frame
            currFrame++;
            // process events of frame etc. etc.
        }
    }
}
Different platforms have different resolutions for System.currentTimeMillis(). Your INTERVAL of 25 milliseconds may be small enough that you're getting the same value twice.
Also, Thread.sleep() is not guaranteed to return the instant its time is up. The operating system scheduler and/or the Java scheduler may decide to do other things before getting around to waking your thread once it's ready, and under heavy load this becomes more likely. Your frame may also simply involve too much work to finish in one interval.
You can: a) not worry about it, b) skip frames (see the sketch below), or c) reduce the amount of work by lowering the quality when you're behind.
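For option b, a minimal frame-skipping sketch under the same fixed-interval scheme as your code (MAX_SKIP and the update/render split are illustrative, not from your original code):

    public class SkippingEngine extends Thread {
        private static final int INTERVAL = 25; // target frame length in ms
        private static final int MAX_SKIP = 5;  // always render at least every 6th frame
        private boolean running = true;

        public void run() {
            long next = System.currentTimeMillis();
            int skipped = 0;
            while (running) {
                updateGameState();          // fixed-rate logic update
                next += INTERVAL;
                if (System.currentTimeMillis() <= next || skipped >= MAX_SKIP) {
                    render();               // on schedule (or skipped too long): draw
                    skipped = 0;
                    long sleepTime = next - System.currentTimeMillis();
                    if (sleepTime > 0) {
                        try { sleep(sleepTime); } catch (InterruptedException e) {}
                    }
                } else {
                    skipped++;              // behind schedule: drop this render to catch up
                }
            }
        }

        private void updateGameState() { /* advance the simulation by one INTERVAL */ }
        private void render()          { /* draw the current state */ }
    }

Game logic keeps ticking at a fixed rate; only the drawing is dropped when the machine can't keep up.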
Thanks for the input. I've also noticed that Thread.sleep() can be off by ±5 ms. Is this the conventional way to code a scheduler, or are there better alternatives for keeping track of frames?
I guess so. ;D
What works best (it seems) is to just generate as many frames per second as possible and let VSync smooth everything out. Of course, this is a problem if you can't push that many FPS for some reason. I just wrote a timer to help with this problem. Check out the discussions at:
http://www.JavaGaming.org/cgi-bin/JGOForums/YaBB.cgi?board=share;action=display;num=1048008212
and
http://forum.java.sun.com/thread.jsp?forum=406&thread=373032&tstart=0&trange=15
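For what it's worth, the uncapped-loop idea looks roughly like this (a sketch only; the update/render methods are placeholders, and VSync is assumed to be enabled in your rendering setup):

    public class UncappedLoop {
        public static void main(String[] args) {
            long last = System.currentTimeMillis();
            while (true) {
                long now = System.currentTimeMillis();
                double dt = (now - last) / 1000.0; // elapsed seconds since last frame
                last = now;                        // (a hi-res timer is better; dt can read 0 here)
                update(dt);   // advance the simulation by the measured elapsed time
                render();     // with VSync on, this blocks until the next display refresh
            }
        }
        static void update(double dt) { /* scale all movement by dt */ }
        static void render()          { /* draw; VSync paces the loop */ }
    }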
System.currentTimeMillis() most likely uses the Real Time Clock (RTC), which is often handled not by the CPU itself but by a daughter component. This is NOT the same as the system's internal clock cycles, which are susceptible to skew based on load.
Intel platforms typically use an RTC that has a 19 ms interval, so anything that uses it will not show any difference between System.currentTimeMillis() calls within 19 ms. That is to say, if you time two events that take less than 19 ms, it is likely you will measure zero time used.
Partially correct. The Intel timer chip beats about once every 19.2 ms (as you said) and is used to keep the system clock running. The one trick here is that the chip can be reprogrammed: every DOS video game I ever knew of reset it to align with the VSync, and modern-day OSes reset the clock to whatever the heck they feel like. Under Windows 98 it's something like 50 ms, while Windows 2000 & XP set it to 10 ms. The *nixes don't screw around; they tick every millisecond. Try downloading the timer code I wrote and you'll be able to measure the timer resolution for various systems.
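If you just want a rough read on the resolution yourself, a spin loop like this (a minimal sketch, nothing clever) shows the step size of System.currentTimeMillis() on your box:

    public class TimerResolution {
        public static void main(String[] args) {
            long last = System.currentTimeMillis();
            for (int samples = 0; samples < 10; ) {
                long now = System.currentTimeMillis();
                if (now != last) {              // the clock just ticked over
                    System.out.println("step: " + (now - last) + " ms");
                    last = now;
                    samples++;
                }
            }
        }
    }

On a 10 ms system you'll see steps of about 10; on a 1 ms *nix box, steps of 1.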
If you want reliable timing, use LWJGL's Sys.getTime(), which has a really nice resolution.
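Something like this, assuming LWJGL is on your classpath (in the LWJGL API, Sys.getTime() returns ticks and Sys.getTimerResolution() returns ticks per second; check your version's docs):

    import org.lwjgl.Sys;

    public class HiResClock {
        // Current time in milliseconds, derived from the native hi-res timer.
        public static long timeMillis() {
            return (Sys.getTime() * 1000L) / Sys.getTimerResolution();
        }
    }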
Oh yeah… another thing… if you really need to use System.currentTimeMillis(), you should time a batch of frames, say 12, and divide the elapsed time by 12 to get an average frame time.
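A minimal sketch of that averaging (the batch size of 12 is just the number suggested above; the class and method names are illustrative):

    public class FrameAverager {
        private static final int BATCH = 12;
        private long batchStart = System.currentTimeMillis();
        private int frames = 0;
        private double avgFrameMillis = 0;

        // Call once per rendered frame.
        public void frameDone() {
            if (++frames == BATCH) {
                long now = System.currentTimeMillis();
                avgFrameMillis = (now - batchStart) / (double) BATCH;
                batchStart = now;
                frames = 0;
            }
        }

        public double averageFrameMillis() { return avgFrameMillis; }
    }

Averaging over a batch hides the coarse steps of the underlying clock.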
Thanks again for all the input. I've been away for a while, but I'm now looking at this again.
I ran the code I pasted above on two machines sitting next to each other. Within minutes they are off from each other by seconds!!
I figure from some of the advice that some of you believed I was coding a client, which is not the case. This is a server, and things get terrible when the clients and server fall way out of sync, which is my problem.
Should I really be using System.currentTimeMillis() on the server? It behaves very differently between just two machines… I'm seeing differences well beyond 10-50 ms; over time they end up thousands of milliseconds apart.
Thanks for your input.
Btw, is there a way to analyze the system clock on a computer and get it tuned? I have no way of telling if the clock is healthy…
Now that J2SE 1.4.2 is officially released, I would suggest you use the new high resolution timer it provides. Read more about it here:
http://www.java-gaming.org/cgi-bin/JGNetForums/YaBB.cgi?board=Tuning;action=display;num=1053157119
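For reference, the undocumented counter looks roughly like this (sun.misc.* is Sun-VM-specific, so treat this as a sketch of the 1.4.2 class, not a supported API):

    import sun.misc.Perf;

    public class PerfClock {
        private static final Perf PERF = Perf.getPerf();
        private static final long TICKS_PER_SECOND = PERF.highResFrequency();

        // Milliseconds elapsed according to the VM's high resolution counter.
        public static long currentTimeMillis() {
            return (PERF.highResCounter() * 1000L) / TICKS_PER_SECOND;
        }
    }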
Is the “Perf” class actually part of the officially exposed API now, or is it still hidden?
Kev
[quote]Btw is there a way to analyze the system clock on a computer and get it tuned? I got no way of telling if the clock is healthy…
[/quote]
Yes, http://www.ntp.org/ . Seems they got a new site recently.
We run NTP on all our servers at work (Linux or AIX); that way we know our servers have the correct time and we can effectively compare logs. I don't know about Windows, but on Linux/Unix NTP figures out the clock drift and makes incremental corrections, such that if your clock is only a few minutes off it will effectively speed the clock up or slow it down until it is in sync again. That way you never get a jump forward or backward in time.
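If it helps, a bare-bones /etc/ntp.conf looks something like this (the server names are placeholders; pick servers close to you):

    # /etc/ntp.conf (minimal sketch)
    server 0.pool.ntp.org iburst
    server 1.pool.ntp.org iburst
    driftfile /var/lib/ntp/ntp.drift   # ntpd records the measured clock drift here

The driftfile is what lets ntpd keep slewing the clock correctly even between server polls.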
Anything in com.sun.* is, by definition, specific to the sun VM (and anyone who copies it) but not part of the standard.
I have every hope though that this functionality will move over to be part of the standard by 1.5
JK
[quote]Now that J2SE 1.4.2 is officially released, I would suggest you use the new high resolution timer it provides.
[/quote]
The original query appears to relate to synchronization between two machines. The high resolution timer is not guaranteed to be synchronized at all.
The NTP clients in Windows 2000 and XP work much the same as the *nix equivalents, although by default they reference a Microsoft time server which may not be very local (>100ms round trip delay from here).
aikarele, I tried the code you submitted that uses sun.misc.Perf
The hiResTimer.currentTimeMillis() method goes haywire on one of my computers. It ranges from -540000 to 946000, making huge leaps between ticks. I'm not sure it proves the computer is sick, but let's say it is… should I be replacing the whole motherboard, or is there a simple way to change the hardware clock…
The timers in Windows-based systems are not accurate, and other background processes will cause inaccuracies.
My experience of this is in profiling my Prolog program (using SCI Prolog, FYI). The docs say that you will never get super-accurate results because Windows does NOT count the actual CPU cycles devoted to a process, so the timings are estimates at best. (Making sure all your background processes are switched off will help.)
Linux, of course, will return very accurate results.
This explains why Jeff is getting good results; you won't/can't on your Windows systems.
This is very frustrating in my case because I will be using profiling to estimate execution times on a 600 machine cluster that has a mixture of XP and Linux boxes. Bugger.