getTimerResolution only returned the actual CPU ticks/sec on computers with multiple CPUs/cores; otherwise it used another, independent clock. Because of this bug it had a tendency to speed up/slow down on laptops with SpeedStep, which is why Cas switched to timeGetTime() in the latest versions of LWJGL. That usually has a resolution of 1000, meaning it can return 1000 unique numbers a second, but it is also buggered on some computers: the resolution can degrade to 10-15 ms, which makes it just as bad as System.currentTimeMillis(). This could explain what you are seeing. I suggest you measure the real resolution on your computer by spinning in a loop and capturing the difference in the timer value whenever it changes. Also, timeGetTime() performs badly when you yield in a loop; use Thread.sleep(1) instead. That might improve the resolution and fix the problem.
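If you want to try that, here is a minimal sketch of the measurement, assuming the Sys.getTime()/Sys.getTimerResolution() calls from org.lwjgl.Sys (spin until the timer value changes and record the smallest step you ever see):

import org.lwjgl.Sys;

public class TimerResolutionTest {
    public static void main(String[] args) {
        long resolution = Sys.getTimerResolution(); // reported ticks per second
        long last = Sys.getTime();
        long smallestStep = Long.MAX_VALUE;

        // Spin and record the smallest change ever observed in the timer value.
        for (int samples = 0; samples < 1000; ) {
            long now = Sys.getTime();
            if (now != last) {
                smallestStep = Math.min(smallestStep, now - last);
                last = now;
                samples++;
            }
        }

        System.out.println("Reported resolution: " + resolution + " ticks/sec");
        System.out.println("Smallest observed step: " + smallestStep + " ticks ("
                + (smallestStep * 1000.0 / resolution) + " ms)");
    }
}

If the smallest step works out to 10-15 ms rather than ~1 ms, you are hitting exactly the degraded case described above.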
Yesterday I temporarily reimplemented my rendering loop with the lwjgl.util.Timer class to see if it gives better performance, and it’s exactly the same behavior. I also rolled back my LWJGL libs to 0.95 and replaced Thread.yield() with Thread.sleep(1), and that resulted in a loss of 10 fps, so I don’t understand why you are saying that sleep produces better results???
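For what it's worth, the Timer-based loop looked roughly like this (a simplified sketch, not my exact code; update() and render() stand in for my game logic and drawing):

import org.lwjgl.util.Timer;

public class TimerLoopSketch {
    public static void main(String[] args) {
        Timer timer = new Timer();
        float lastFrame = timer.getTime();

        while (true) {
            Timer.tick();                 // updates all Timer instances from the Sys timer
            float now = timer.getTime();  // seconds since this timer was created/reset
            float delta = now - lastFrame;
            lastFrame = now;

            update(delta);                // placeholder for game logic
            render();                     // placeholder for rendering

            Thread.yield();               // this is what I had with 0.95
        }
    }

    private static void update(float delta) { /* game logic */ }
    private static void render() { /* rendering */ }
}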
I’m not surprised, as it uses Sys.getTime() and Sys.getTimerResolution() internally, just as your code did.
When using timeGetTime() (LWJGL version 0.96 and above) you need to use Thread.sleep(1). With earlier versions (0.95 and below) yielding worked fine. It seems that timeGetTime() needs there to be some free time available or it will choke, reducing its resolution.
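So the tail of the frame loop should look something like this (just a sketch; sleep(1) throws a checked InterruptedException, hence the try/catch):

public class SleepLoopSketch {
    private static volatile boolean running = true;

    public static void main(String[] args) {
        while (running) {
            update();
            render();
            try {
                // Sleeping ~1 ms per frame leaves the OS some free time, which keeps
                // the timeGetTime()-backed timer at its best (~1 ms) resolution;
                // a bare Thread.yield() here can starve it on some systems.
                Thread.sleep(1);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt(); // restore the interrupt flag
                running = false;
            }
        }
    }

    private static void update() { /* game logic placeholder */ }
    private static void render() { /* rendering placeholder */ }
}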
OK thanks for the useful info! 
I’ll try sleep(1) with 0.98.
Stay tuned.
P.S. Just checked Unded arena. Have you made any progress recently, since the last screenshot is from May 2005?