Ye, the old timing code is pure voodoo. It even works reasonably well with a timer resolution of 250msec if the time per frame is pretty consistent (i.e. it changes only slowly).
I think oNyx used the rolling average because Java 1.5 was not allowed at that time.
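The rolling average bit boils down to roughly this (just a sketch from memory to show the idea, not the actual code - the class name and the window size are made up):

    // Average the measured frame time over the last N frames, so a coarse
    // timer (raw deltas like 0, 0, 0, 250, 0, ... msec) still gives a usable
    // per-frame delta as long as the real frame time changes only slowly.
    public class RollingAverageTimer {
        private final long[] samples;
        private int index;
        private long lastTime = System.currentTimeMillis();

        public RollingAverageTimer(int windowSize) {
            samples = new long[windowSize];
        }

        /** Call once per frame; returns the smoothed frame duration in msec. */
        public float tick() {
            long now = System.currentTimeMillis();
            samples[index] = now - lastTime;     // raw delta, often 0 with a coarse timer
            index = (index + 1) % samples.length;
            lastTime = now;

            long sum = 0;
            for (int i = 0; i < samples.length; i++) sum += samples[i];
            return sum / (float) samples.length; // the averaged delta smooths out the jumps
        }
    }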
I invented it because nanoTime didn’t work on my old machine anymore after I installed an ATI graphics card. It caused a different bus load than the Nvidia card, which in turn triggered QPC leaping. I really wish there were some 1msec timer (TGT, i.e. timeGetTime) in Java. LWJGL for example uses TGT on Windows and currentTimeMillis everywhere else (which has 1msec resolution - except on Windows, of course).
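For the record, getting milliseconds out of LWJGL's timer looks roughly like this (assuming LWJGL 2's Sys class; the helper name is mine):

    import org.lwjgl.Sys;

    public class MilliTimer {
        /** Milliseconds from LWJGL's timer (TGT on Windows, currentTimeMillis elsewhere). */
        public static long getTimeMillis() {
            // getTime() returns ticks, getTimerResolution() returns ticks per second
            return (Sys.getTime() * 1000) / Sys.getTimerResolution();
        }
    }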
With 1.5+ I’d use (now that I’ve got a better machine, haha) a simple min-cap loop such as the one tom posted, just with sleep instead of yield. The RADYTT (rolling average damped yield throttling thingy ;D) only uses yield because there are so many of them. If 1% of them take 100 times longer than usual you won’t notice, because it gets averaged out in that gigantic pile of yield calls.
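Something along these lines (just a sketch of the idea, not tom's actual code - the 60 fps cap and the names are made up):

    // Min-cap loop for 1.5+: measure with nanoTime and sleep in 1 msec
    // slices until the minimum frame duration has passed.
    public class MinCapLoop {
        private static final long FRAME_NANOS = 1000000000L / 60; // cap at ~60 fps

        /** Call at the end of each frame, passing the nanoTime taken at its start. */
        static void capFrame(long frameStartNanos) {
            long elapsed = System.nanoTime() - frameStartNanos;
            while (elapsed < FRAME_NANOS) {
                try {
                    Thread.sleep(1); // sleep instead of yield
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    break;
                }
                elapsed = System.nanoTime() - frameStartNanos;
            }
        }
    }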
Sleep, however, is a different story. There are only a few calls, so if one takes a tad too long you’d have to quit the loop right away - but with imprecise timing you wouldn’t know, and the number of iterations is fixed. So that wasn’t an option.
It would be great if sleeps longer than 0 or 1 msec were as accurate as calling 0 or 1 msec sleeps in a loop, but even that isn’t the case. Otherwise, sleeping the full amount in one go would have been a good solution.
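Just to illustrate the two options (made-up names, nothing more than a sketch):

    public class SleepVariants {
        // (a) one big sleep: simplest, but a single overshoot of several msec
        //     goes completely unnoticed until the frame is already late
        static void sleepOnce(long remainingMillis) throws InterruptedException {
            Thread.sleep(remainingMillis);
        }

        // (b) many 1 msec sleeps, re-checking the clock after every slice, so
        //     one slice that overshoots only costs you that single slice
        static void sleepSliced(long deadlineNanos) throws InterruptedException {
            while (System.nanoTime() < deadlineNanos) {
                Thread.sleep(1);
            }
        }
    }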
edit: What’s really bad about the RADYTT is that the over-/under-steering effects get pretty big if two instances of it are running at the same time (e.g. two games running at once that both use this timing method).