System.nanoTime() is a great way to find performance problems when the resolution of System.currentTimeMillis() isn't sufficient.
But I discovered a drawback of System.nanoTime() when I compared the runtime of a benchmark on Windows and Linux.
To my surprise, Windows took something like 1.23 msec and Linux took 1.67 msec for the same algorithm (computing the silhouette of a 3D object) on the same JDK version (1.5.0_06). It turned out that the operating system does not actually influence the duration of the algorithm; rather, System.nanoTime() itself is that much slower on Linux than on Windows!
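For anyone who wants to reproduce this, here is roughly how I would estimate the per-call cost of System.nanoTime() (a minimal sketch; the class name and iteration count are arbitrary choices of mine):

    public class NanoTimeCost {
        public static void main(String[] args) {
            final int calls = 1000000;
            // Warm-up pass so the JIT has compiled the loop before measuring.
            for (int i = 0; i < calls; i++) {
                System.nanoTime();
            }
            // Time a large batch of calls and average, since a single
            // call is far too short to measure reliably on its own.
            long start = System.nanoTime();
            for (int i = 0; i < calls; i++) {
                System.nanoTime();
            }
            long elapsed = System.nanoTime() - start;
            System.out.println("approx. " + (elapsed / calls)
                    + " ns per System.nanoTime() call");
        }
    }

Running this on both operating systems should show directly whether the difference lies in the timer call itself.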
Is this a known effect (maybe it is notebook-specific, due to CPU power saving)? I find it particularly bad that the overhead of System.nanoTime() influences the measurement itself!
As a workaround: what high-precision, low-impact timer that runs on both Windows and Linux can you recommend (one that also works on notebooks, where simple RDTSC-based timers appear to fail)?
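In the meantime, one mitigation I can think of is to time a whole batch of runs with a single pair of nanoTime() calls, so the timer's own overhead is amortized over many iterations. A minimal sketch (runAlgorithm() is a hypothetical placeholder, not my actual silhouette code):

    public class BatchTiming {
        // Hypothetical placeholder for the real workload under test.
        static void runAlgorithm() {
            // ... silhouette computation would go here ...
        }

        public static void main(String[] args) {
            final int runs = 100;
            long start = System.nanoTime();   // one timer call before the batch
            for (int i = 0; i < runs; i++) {
                runAlgorithm();
            }
            long elapsed = System.nanoTime() - start;   // one timer call after
            System.out.println("average per run: " + (elapsed / runs) + " ns");
        }
    }

This keeps the timer out of the inner loop, but of course it only gives an average, so I would still prefer a genuinely cheap high-resolution timer.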
Yours,
Stefan