Basic Java code optimisation

Want to try something fun? Compile your code, then use a decompiler, and take a look at the decompiled code.

The compiler will try to optimize on its own, so this will give you an idea of what is being done under the hood to improve performance… but it will also make you realize how horribly unreadable such code is.

It is a fun experience though. I particularly like decompiling stuff to try and figure out how it works.

Not very useful with Java, though, unless you can get at the JIT output: javac leaves most of the optimisation to HotSpot.

https://wikis.oracle.com/display/HotSpotInternals/PrintAssembly
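For reference, a minimal invocation looks like this (`MyClass` is a stand-in for your own entry point). Note that `-XX:+PrintAssembly` requires the hsdis disassembler plugin on the JVM’s library path, while `-XX:+PrintCompilation` works out of the box and at least shows you what got JIT-compiled:

```
java -XX:+PrintCompilation MyClass
java -XX:+UnlockDiagnosticVMOptions -XX:+PrintAssembly MyClass
```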

  1. Bad idea if you think it’s applicable to human optimization. It can’t help you and will often hurt you.
  2. If “optimized” Java code is unreadable, then the person optimizing it either did not write it for a computer made in the multicore era or does not know what constitutes an optimization. There is no more excuse for optimized source code to be unreadable than for unoptimized code to be unreadable. Are these fixes unreadable: using Trove/Colt primitive collections instead of wrapper classes? Reducing the number of operations in a calculation? Replacing naively selected data structures and algorithms with good big-O behavior, like linked lists and quad trees, with faster, shorter, simpler brute-force methods? (The first of these is sketched below.)
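To make that concrete, here is what the primitive-collections fix looks like with Trove’s `TIntArrayList` (the class name is Trove’s; everything else is an illustrative sketch):

```java
import gnu.trove.list.array.TIntArrayList;
import java.util.ArrayList;
import java.util.List;

public class PrimitiveVsBoxed {
    public static void main(String[] args) {
        // Boxed: every add() creates (or fetches) an Integer object,
        // and the backing array holds pointers, not values.
        List<Integer> boxed = new ArrayList<>();
        for (int i = 0; i < 1_000_000; i++) {
            boxed.add(i);
        }

        // Primitive: values live contiguously in an int[] -- no boxing,
        // better cache behavior, and a fraction of the memory.
        TIntArrayList primitive = new TIntArrayList();
        for (int i = 0; i < 1_000_000; i++) {
            primitive.add(i);
        }

        System.out.println(boxed.size() + " " + primitive.size());
    }
}
```

Both loops read identically; the primitive version just skips a million Integer boxes.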

I think you missed my point.

I’m not suggesting decompilation is a valid optimization technique; it is just a fun thing I like to do sometimes, and one that in some cases can give you ideas.

The most educational part of such an exercise, in my opinion, is that it teaches you to value readable code. I agree that optimized code shouldn’t be unreadable, but in practice it often is, especially when code is over-optimized.

I think you missed my point. :frowning: It’s educational, but it’s in no way applicable to optimizing at the source-code level.

If a compiler does something weird with your source code, it is because it did static analysis on it and determined that the two operations were equivalent. It only does it if it is faster and if it is equivalent. If you use it in your source code, it’s either a) slower, b) not what you intended, or c) identical in effect and speed to more straightforward, cross-platform, and future-proof code. It can only hurt you to take inspiration from it for your high-level source code.

I know it is counterintuitive that optimization would not be unintuitive. It should be complicated, right? It should be extra work, right? Here is a shocking truth: if a compiler performs optimizations on your code, it does not mean your code is deficient and needs to be mucked around with until you confuse the compiler. It means your code is optimal.

Optimization and readability are also not a tradeoff. If they are for you, then you are using optimization techniques from the ’80s and ’90s that don’t work on modern computers (back when several kilobytes was a lot of memory and every instruction took approximately the same, long, amount of time). Desktop and mobile computers today are literal supercomputers. Supercomputers are absolutely commonplace now, so a different programming style is required: they have a different architecture; they are not just faster versions of the old machines. I was not kidding that certain brute-force methods are better than things like linked lists and quad trees (see the sketch below).
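The usual reason the brute force wins is cache locality: a pointer-chasing structure pays a potential cache miss per element, while a flat array is scanned sequentially and prefetched almost for free. An illustrative sketch (the class and names are mine, not from any library):

```java
import java.util.LinkedList;

public class BruteForceScan {
    public static void main(String[] args) {
        int n = 100_000;

        // "Clever" structure: each step through a LinkedList chases a
        // pointer to a separately allocated node -- a potential cache miss.
        LinkedList<Integer> linked = new LinkedList<>();
        for (int i = 0; i < n; i++) linked.add(i);

        // "Dumb" structure: a flat array scanned in order, which the
        // CPU's hardware prefetcher handles almost for free.
        int[] flat = new int[n];
        for (int i = 0; i < n; i++) flat[i] = i;

        long sumLinked = 0;
        for (int v : linked) sumLinked += v;

        long sumFlat = 0;
        for (int v : flat) sumFlat += v;

        System.out.println(sumLinked + " " + sumFlat);
    }
}
```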

Optimal code nowadays is code that can run on fast hardware (either in series or in parallel) without being interrupted. Complicated code uses complicated features, which usually stall the CPU or GPU; so complicated high-level code is the opposite of optimal. (The reverse does not hold, though: uncomplicated code is not automatically optimal.) Conflating optimized/unoptimized with unreadable/readable is long outdated. That’s pretty great, because you can use another programmer’s optimized code without even noticing. It’s also why a small change like iterating over an array backwards may hurt performance, even though it doesn’t seem more complex and it used to be a recommended optimization when C was still young.
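Here is that backwards-iteration example spelled out. The countdown version was once recommended because the loop test compares against zero; on a modern JIT and CPU it buys nothing, and it can interact worse with the optimizer, so measure rather than assume:

```java
public class LoopDirection {
    public static void main(String[] args) {
        int[] a = new int[1_000_000];
        for (int i = 0; i < a.length; i++) a[i] = i;
        System.out.println(forward(a) + " " + backward(a));
    }

    // The straightforward idiom the JIT is tuned for.
    static long forward(int[] a) {
        long sum = 0;
        for (int i = 0; i < a.length; i++) sum += a[i];
        return sum;
    }

    // The old advice: count down so the loop test compares against zero.
    // Same result, but no faster today, and potentially slower.
    static long backward(int[] a) {
        long sum = 0;
        for (int i = a.length - 1; i >= 0; i--) sum += a[i];
        return sum;
    }
}
```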

I’ve found that avoiding the use of loops within the main game loop boosts performance quite a lot.

Did I miss your sarcasm? :slight_smile: The less you do, the faster it goes, so… yes.

Profile real code with real data. The vast majority of people can’t write a micro-benchmark that doesn’t lie to them.
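If you do need a micro-benchmark, use a harness built for the job rather than a hand-rolled timing loop. A minimal sketch with JMH (the annotations and Blackhole are JMH’s; the benchmark body is just an example):

```java
import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.State;
import org.openjdk.jmh.infra.Blackhole;

@State(Scope.Thread)
public class MyBenchmark {
    int x = 12345;

    @Benchmark
    public void measureDivide(Blackhole bh) {
        // The Blackhole stops the JIT from deleting the computation
        // as dead code, which is one common way hand-rolled
        // benchmarks end up lying.
        bh.consume(x / 3);
    }
}
```

Without the Blackhole, HotSpot may notice the result is never used and remove the whole computation, and you end up timing nothing.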

Division by 2 isn’t the same as a right shift by one; consider an input of -1. The compiler can only transform the division into a shift if the input is ensured to be non-negative, or negative and even. Multiplication by 2 and left shift by 1 are always the same, so the compiler can brainlessly perform that transform for you.
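You can see the asymmetry directly:

```java
public class DivVsShift {
    public static void main(String[] args) {
        System.out.println(-1 / 2);   // 0  (Java division truncates toward zero)
        System.out.println(-1 >> 1);  // -1 (arithmetic shift rounds toward negative infinity)
        System.out.println(-3 / 2);   // -1
        System.out.println(-3 >> 1);  // -2
        System.out.println(-4 / 2);   // -2 (negative AND even: the two agree)
        System.out.println(-4 >> 1);  // -2
        System.out.println((5 * 2) == (5 << 1));   // true: multiply by 2 is always a shift
        System.out.println((-5 * 2) == (-5 << 1)); // true, even for negatives
    }
}
```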

If you want sin/cos fast… think minimax. It’s excessively rare that a table lookup will be a win (sound synthesis might be one). LUTs in general are very slow, but people believe their broken micro-benchmark that’s telling them lies.
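By minimax I mean polynomial approximation with coefficients fitted for minimal worst-case error (e.g. via the Remez algorithm). As a sketch of the shape of such code, here is a Horner-evaluated odd polynomial for sin on [-π/2, π/2]; these are plain Taylor coefficients for illustration, and a real minimax fit would give slightly different constants with a tighter error bound:

```java
public class FastSin {
    // Valid only on [-PI/2, PI/2]; range-reduce first for other inputs.
    // Taylor coefficients shown for illustration (error ~1.6e-4 at the
    // interval edge); swap in minimax coefficients for a better bound.
    static float sin(float x) {
        float x2 = x * x;
        return x * (1f + x2 * (-1f / 6f + x2 * (1f / 120f + x2 * (-1f / 5040f))));
    }

    public static void main(String[] args) {
        System.out.println(sin(1f) + " vs " + Math.sin(1.0));
    }
}
```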

Skip clearing the array if you can.
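One common form of that: instead of Arrays.fill()-ing a reusable buffer every frame, track how much of it is valid. A hypothetical sketch:

```java
public class ReusableBuffer {
    private final int[] data = new int[4096];
    private int size = 0; // only data[0..size) is meaningful

    void add(int value) {
        data[size++] = value;
    }

    // "Clearing" is O(1): stale contents past size are simply never
    // read again, so there is no fill pass over all 4096 slots.
    void reset() {
        size = 0;
    }

    int size() {
        return size;
    }
}
```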

Integer divisions by a constant are always transformable into a multiplication (worst case… I don’t know if HotSpot does this or not). Floating point generally isn’t.
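The integer version of that transform looks like this: division by a constant becomes a multiply by a precomputed “magic number” plus a shift. The constant below is the standard reciprocal-of-3 value; this sketch is only valid for non-negative int inputs:

```java
public class MagicDivide {
    // x / 3 for 0 <= x < 2^31, as a multiply and a shift.
    // 0xAAAAAAABL / 2^33 is just over 1/3, and the error term is
    // small enough that the floor still comes out exactly right.
    static int divideBy3(int x) {
        return (int) ((x * 0xAAAAAAABL) >>> 33);
    }

    public static void main(String[] args) {
        for (int x = 0; x < 1_000_000; x++) {
            if (divideBy3(x) != x / 3) {
                throw new AssertionError("mismatch at " + x);
            }
        }
        System.out.println("ok");
    }
}
```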

Java ahead-of-time compilers (javac, the Eclipse compiler, etc.) don’t do anything. They just transform source into bytecode; no optimizations occur (well, no interesting ones). You have to have HotSpot dump out the native assembly to see real optimizations.
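You can check the first claim yourself with javap, which ships with the JDK; the bytecode it prints is a nearly line-for-line transliteration of the source (`MyClass` again being a stand-in):

```
javac MyClass.java
javap -c MyClass
```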

No sarcasm… just a really, really basic fact. Sometimes I personally tend to forget these things; that’s why I ‘contributed’ my simple yet useful knowledge to the community. :slight_smile:

What I meant to say is: there are a lot of different algorithms that can give you the same result. What I find challenging in Java (and what makes it more fun for me) is trying to write algorithms that don’t make excessive use of loops and at the same time aren’t recursive or lengthy.
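A trivial example of the kind of rewrite I mean, replacing a loop with a closed form (the names are just for illustration):

```java
public class NoLoop {
    // Loop version: O(n) additions.
    static long sumLoop(long n) {
        long sum = 0;
        for (long i = 1; i <= n; i++) {
            sum += i;
        }
        return sum;
    }

    // Same result with no loop at all: Gauss's closed form.
    static long sumClosedForm(long n) {
        return n * (n + 1) / 2;
    }

    public static void main(String[] args) {
        System.out.println(sumLoop(1000) + " == " + sumClosedForm(1000));
    }
}
```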