Java becoming a non-deterministic runtime?

Hi,

I don’t know if you feel the same way, but I think Sun is turning Java into a non-deterministic language. I’ll try to explain…

I’m used to languages where one could say that a loop control structure is O(N). In the ol’ BASIC days,

FOR i = 1 TO N

took a time proportional to the value of N (at least when the code in the loop is constant and the compiler does not do too much aggressive code removal). There are other parameters, like the platform, but at least you could do an algorithmic analysis and optimize your source code on paper without even running it.
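
In Java the same loop reads as the little sketch below (the class and method names are just for illustration); on paper the analysis is unchanged: the body runs N times, so the cost is O(N).

// The Java equivalent of the BASIC loop above. On paper the body executes
// N times, so a back-of-the-envelope analysis says the cost is O(N).
public class LoopOnPaper {
    static long sumTo(int n) {
        long sum = 0;
        for (int i = 1; i <= n; i++) {
            sum += i;
        }
        return sum;
    }

    public static void main(String[] args) {
        System.out.println(sumTo(1000000));
    }
}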

Now, with the Java runtime, the performance of some code depends on so many runtime parameters:

  • The JVM type (Sun, IBM, BEA…)
  • The JVM compiler type: HotSpot -server or -client. Has the loop run enough times to trigger the HotSpot compiler? (See the timing sketch after this list.)
  • The JVM parameters: -Xincgc and other such non-portable -X options.
  • And of course the platform: you don’t write the same code for J2ME as for J2SE. On J2ME, you MUST manage your object allocations.
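
To make the second bullet concrete, here is a minimal, hypothetical timing sketch: the same loop is measured once “cold” and again after it has run enough times for the JIT to kick in. The class name, the loop body and the iteration counts are invented for illustration, and the numbers it prints will differ between vendors, between -client and -server, and even between runs.

// Hypothetical micro-benchmark sketch: time the same loop before and
// after the JIT has had a chance to compile it.
public class WarmupDemo {
    static long sink;                       // keep results "used" so the JIT cannot discard the work

    static long work(int n) {
        long sum = 0;
        for (int i = 0; i < n; i++) {
            sum += i % 7;
        }
        return sum;
    }

    static long time(int n) {
        long start = System.nanoTime();
        sink += work(n);
        return System.nanoTime() - start;
    }

    public static void main(String[] args) {
        int n = 5000000;
        System.out.println("cold run:   " + time(n) + " ns");
        for (int i = 0; i < 20; i++) {
            sink += work(n);                // give the HotSpot compiler a reason to compile work()
        }
        System.out.println("warmed run: " + time(n) + " ns");
        System.out.println("checksum:   " + sink);
    }
}

The point is not the absolute numbers but that the same source line no longer has a single, predictable cost.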

The programmer has fewer options for writing optimized code, and the performance depends on the runtime environment.
As a consequence, when you make a change in your code, you can’t predict the impact it’ll have on performance.
Remember when code was cluttered with object pools to compensate for the garbage collector’s poor performance?
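
For anyone who never saw that style, here is a rough sketch of the kind of pooling code I mean; the Particle class, its fields and the pool itself are made up for illustration.

// Rough sketch of the "old school" pooling style: recycle objects by hand
// instead of letting the garbage collector deal with short-lived instances.
import java.util.ArrayList;
import java.util.List;

class Particle {
    double x, y;
}

class ParticlePool {
    private final List<Particle> free = new ArrayList<Particle>();

    Particle acquire() {
        int last = free.size() - 1;
        return (last >= 0) ? free.remove(last) : new Particle();
    }

    void release(Particle p) {
        free.add(p);                        // hand it back for reuse instead of leaving it to the GC
    }
}

On an old VM, hand-recycling like this could beat a plain new; on a modern generational collector, the short-lived allocation is often cheaper than the pooling bookkeeping.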

Now, I’m afraid to write clean code with correct performance on today’s VM (and its tuned runtime parameters), only to discover in a few years that the code and tuning parameters are no longer valid on the then-current VM… Nowadays you take care not to create objects in a time-critical loop, but how will that code run when JVMs implement escape analysis or other techniques?
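
As a concrete (made-up) illustration of that worry: the temporary Point below is exactly the kind of object we hoist out of loops by hand today, yet a JIT with escape analysis may notice that it never escapes the loop body and avoid the heap allocation entirely, so the “clean” version could end up costing nothing.

// Hypothetical illustration: one small temporary object per iteration.
// An older VM heap-allocates ten million Points and stresses the collector;
// a VM with escape analysis may stack-allocate or eliminate them.
class Point {
    final int x, y;
    Point(int x, int y) { this.x = x; this.y = y; }
    long lengthSquared() { return (long) x * x + (long) y * y; }
}

public class EscapeDemo {
    public static void main(String[] args) {
        long total = 0;
        for (int i = 0; i < 10000000; i++) {
            Point p = new Point(i, i + 1);  // "clean" code: a fresh object every iteration
            total += p.lengthSquared();
        }
        System.out.println(total);
    }
}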

Nothing really new in this. The performance of malloc in C/C++ (and related functions) has often depended on external factors. The extent of this dependence used to change with each successive release of Microsoft’s C compiler/libraries; it would become more or less tied to the performance of the platform’s allocator (which was outside your exe).
The problem isn’t quite as bad as you suggest, because in most cases the performance simply improves with later versions. The only catch is that had you used simpler code (such as not moving allocations out of loops), it might have improved even more.
If you restrict your more devious performance tricks to those small areas where they are really needed, then you don’t have to change too much code when the JVM improves.

I can understand your fear/doubts, but as I see it, it’s not really a problem but merely a sign of improvement.
For example, my 1.1 MSVM-targeted applets run as fast or faster on my current PC with a current JVM than they did on the target machine I had for those applets at the time. Even though some typical 1.1 functions perform a bit worse on today’s JVMs, and I used some typical ‘old school’ optimization techniques that might affect performance negatively on today’s JVMs, the PCs got faster to compensate.

You’ve missed the point, I’m afraid.

EVERY step we’ve taken with Java performance has been to improve the performance of exactly what you are talking about writing: clear, clean, well-encapsulated code.

Today Java can do FAR better than C++, for instance, where you still have to sacrifice clarity and clean OOP in favor of the kind of hand-tuning you are talking about.

So don’t worry, be happy. Write clear, clean, well-encapsulated code that makes it as clear as possible to the VM what your intentions are, and you will get the best performance.

It’s a shift of mindset, but a good one!