Great article here http://www-106.ibm.com/developerworks/java/library/j-jtp04223.html?ca=dgr-lnxw01JavaUrbanLegends
Very good article. There are far too many myths around that make people tweak things before they ever have a problem…
The article says these 3 things are legends:
- Synchronization is really slow
- Declaring classes or methods final makes them faster
- Immutable objects are bad for performance
Yet it doesn’t present any measurements showing that they really are legends.
There is a lot of discussion about this article on Slashdot:
http://developers.slashdot.org/developers/03/05/17/175255.shtml?tid=126&tid=156&tid=108
For example “jdennett” says that these 3 things were the main performance problems in his project (using J2SE 1.3):
- Synchronization
- Object creation
- Immutable objects
He also says: “Funny that the article “debunks” these myths without figures, when our thorough measurements showed that the problems are real, and in our case would have killed our chances of meeting performance targets had we not found them and dealt with them.”
I haven’t done any measurements related to these things myself, so I’m not saying what is right and what is wrong. I’m just saying that I only trust performance measurements (well, not even all of them), not words.
Hm, the advice in the article is rather like giving fireworks to children.
It doesn’t matter how slow synchronization is, for example: you use it because you have to use it, not because it’s fun to type ‘synchronized’.
And whereas I don’t think declaring a method final makes it faster, I have a feeling it speeds up HotSpot’s compilation because it doesn’t have to think quite so hard about whether a method is or isn’t final.
And as for immutable objects - well, the example in the article is just plain daft. The guy should try writing a game, and then he’d know about performance and optimisation and the perils of excessive object creation.
Different strokes, innit? A J2EE programmer wouldn’t know performance if it poked him up the arse with an immutable instance of a final class.
Cas
[quote]I have a feeling it speeds up…
[/quote]
Now you’re doing it…
In terms of making methods final… while it is possible that such a thing would make it easier for HotSpot to optimize, my own measurements have shown no effect on performance from adding the final keyword.
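For what it’s worth, here’s a rough sketch of the kind of measurement I mean (class and method names are made up, and the numbers are specific to your JVM, hardware and warm-up, so treat it as a starting point rather than an answer):
[code]
// Rough sketch of a final-vs-non-final microbenchmark. Names are illustrative;
// run it several times, on your own VM, before believing anything it prints.
public class FinalCallBench {

    static class Plain     { int add(int x)       { return x + 1; } } // ordinary virtual method
    static class WithFinal { final int add(int x) { return x + 1; } } // same method, declared final

    static volatile int sink; // written to so the loops can't be optimized away

    static long timePlain(Plain p, int n) {
        long start = System.currentTimeMillis();
        int sum = 0;
        for (int i = 0; i < n; i++) sum = p.add(sum);
        sink = sum;
        return System.currentTimeMillis() - start;
    }

    static long timeFinal(WithFinal f, int n) {
        long start = System.currentTimeMillis();
        int sum = 0;
        for (int i = 0; i < n; i++) sum = f.add(sum);
        sink = sum;
        return System.currentTimeMillis() - start;
    }

    public static void main(String[] args) {
        int n = 100000000;
        Plain plain = new Plain();
        WithFinal withFinal = new WithFinal();
        timePlain(plain, n);      // warm-up pass so HotSpot gets a chance to compile both
        timeFinal(withFinal, n);
        System.out.println("non-final: " + timePlain(plain, n) + " ms");
        System.out.println("final:     " + timeFinal(withFinal, n) + " ms");
    }
}
[/code]
Run it a few times and on more than one VM before drawing any conclusions.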
True, enterprise apps have different performance goals, but the main thing is not to base decisions on guesswork when you have tools like profilers to give you real information.
Indeed; I spoke to the author specifically about the object-creation situation, and cited a few examples (in the java.* libraries) where the overhead for object-creation is so mind-bogglingly huge that it can single-handedly destroy your app (i.e. make performance go from great to so completely dog slow that you can’t actually use the darn thing).
His view was that these examples show why the main thrust of his article (don’t trust myths, write it properly, then optimize only what your profilers tell you to) is so important.
I think, in light of that response, that the article could more accurately be described as “don’t assume you actually have a clue what is fast and slow in the esoteric details of compiled code” - but the actual title is a lot more attention grabbing :).
For me, this happens to relate to when we were talking about articles we’d like to see on JGO, and I said I’d like someone to do a regularly-updated column on the evolution of the JVMs. Stuff like “Hey, 1.4.5 is out, and the following optimizations aren’t worth doing any more (they don’t work) and bugfixes mean you can get rid of the following workarounds”. Well, that’s probably way too hopeful - but even just a regular article (in the same format each time) for each and every point release would make life sooooo much easier.
e.g. I recently realised I no longer had a clue what the threading model was for java - what percentage of it is handled by the OS scheduler, and what percent by java etc. Current API docs and tutorials at java.sun.com suggest there is a crap java scheduler by design in every JVM (one of the tutorials in particular actually states various things along the lines of “no java thread does/can do this” which is only true if they all use an old scheduler); my memory of how it used to work was that you were scheduled by the OS, UNLESS your OS was too rubbish to have a decent pre-emptive scheduler, and hence your java implementation simulated one for you.
I made a deliberate choice to omit numbers from the article. Why? Because that would probably just create tomorrow’s urban performance legends. Any numbers I posted would be specific to my hardware configuration, JVM, whether I have enough memory to avoid garbage collection during the benchmark run, and all the other factors that bias microbenchmarks. It would be out of date tomorrow.
Instead of just believing “X is slow” and “Y is fast”, I was hoping to encourage people to take responsibility for their own applications, through measurement, not guesswork. You’re right – you shouldn’t believe me. Try it for yourself.
I don’t claim that my article authoritatively “debunks” any of these myths. But if I’ve motivated just a few readers to try measuring something for themselves, then this article has succeeded.
[quote]Hm, the advice in the article is rather like giving fireworks to children.
It doesn’t matter how slow synchronization is, for example: you use it because you have to use it, not because it’s fun to type ‘synchronized’.
[/quote]
And I thought I was trying to take the fireworks away from the children. Did you misread the article? I thought I stated pretty clearly that (a) performance is a bad reason to compromise the thread-safety of your code, and (b) said compromises probably won’t have the performance benefit you think they will anyway, so don’t bother.
You are correct, synchronization (where needed) is not optional. And knowing where it’s needed is often hard to figure out. The performance impact, real or imagined, should not enter into that determination.
However, many developers are far too willing to compromise the thread safety of their programs because they think that doing it right will be too slow. By offering some balance to the “synchronization is so slow that we can’t even think about using it” myth, maybe people will be less likely to give in to their tendency towards premature optimization.
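To make the kind of compromise I mean concrete, here’s a contrived sketch (the names are made up, it’s not from the article): drop the lock to “save time” and the counter quietly loses updates.
[code]
// Contrived sketch: trading thread safety for "speed" trades a right answer for a wrong one.
public class RacyCounter {
    private int unsafeCount = 0; // incremented without synchronization
    private int safeCount = 0;   // incremented while holding the lock

    void unsafeIncrement() { unsafeCount++; }          // read-modify-write race
    synchronized void safeIncrement() { safeCount++; } // correct under contention

    public static void main(String[] args) throws InterruptedException {
        final RacyCounter c = new RacyCounter();
        Thread[] threads = new Thread[4];
        for (int t = 0; t < threads.length; t++) {
            threads[t] = new Thread(new Runnable() {
                public void run() {
                    for (int i = 0; i < 1000000; i++) {
                        c.unsafeIncrement();
                        c.safeIncrement();
                    }
                }
            });
            threads[t].start();
        }
        for (int t = 0; t < threads.length; t++) threads[t].join();
        // Both counts should be 4,000,000; the unsafe one will usually come up short.
        System.out.println("unsafe: " + c.unsafeCount + "  safe: " + c.safeCount);
    }
}
[/code]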
That would have been a good title, had I thought of it, but of course the editors wouldn’t have let me do that.
But thank you for stating my point more clearly than I did
I did think it was a good article generally! But the object creation example doesn’t really work out with the current state of play in Java games development, which brings me back to kids and fireworks. What’ll happen is someone will try writing a game and create tons of objects and then complain that the garbage collector is crap (which is exactly what’s happened) when in fact it’s not crap at all! There’s just different and better ways to do things when writing games. (Perhaps you could add a footnote to the article and put your comments you’ve posted in here in it? Because they shed a subtly different light on a good article which is perhaps easily misinterpreted)
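Just to illustrate what I mean by “different and better ways” – a throwaway sketch, names made up: allocate your per-frame objects once up front and reuse them, instead of newing them up every shot and leaving the garbage collector to mop up mid-frame.
[code]
// Throwaway sketch of per-frame object reuse instead of per-frame allocation.
// Everything here is illustrative; a real game tunes pool sizes to its needs.
public class BulletPool {

    static class Bullet {
        float x, y, dx, dy;
        boolean active;

        void reset(float x, float y, float dx, float dy) {
            this.x = x; this.y = y; this.dx = dx; this.dy = dy;
            this.active = true;
        }
    }

    private final Bullet[] pool;

    BulletPool(int size) {
        pool = new Bullet[size];
        for (int i = 0; i < size; i++) pool[i] = new Bullet(); // allocate once, up front
    }

    /** Hands back an inactive bullet instead of allocating a new one per shot. */
    Bullet obtain(float x, float y, float dx, float dy) {
        for (int i = 0; i < pool.length; i++) {
            if (!pool[i].active) {
                pool[i].reset(x, y, dx, dy);
                return pool[i];
            }
        }
        return null; // pool exhausted: cap the effect rather than trigger the GC
    }

    /** Called every frame: moves live bullets and retires the ones off-screen. */
    void update() {
        for (int i = 0; i < pool.length; i++) {
            Bullet b = pool[i];
            if (!b.active) continue;
            b.x += b.dx;
            b.y += b.dy;
            if (b.x < 0 || b.x > 800 || b.y < 0 || b.y > 600) b.active = false;
        }
    }

    public static void main(String[] args) {
        BulletPool pool = new BulletPool(256);
        pool.obtain(400, 300, 2.5f, 0);  // "fire" a couple of bullets...
        pool.obtain(400, 300, -2.5f, 0);
        for (int frame = 0; frame < 500; frame++) {
            pool.update();               // ...then run the per-frame update with zero allocation
        }
    }
}
[/code]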
Cas
[quote](Perhaps you could add a footnote to the article and put your comments you’ve posted in here in it? Because they shed a subtly different light on a good article which is perhaps easily misinterpreted)
Cas
[/quote]
After the brutal coal-raking I got on slashdot, I’ve been thinking about doing a follow-up where I can clarify some of these issues. From what I saw on /., it’s a lot easier to misinterpret than I thought!
I do focus almost entirely on server-side issues – I often conduct performance audits for consulting customers. So what I say in the article is drawn from mistakes I’ve seen in the field, which means server applications.
The point you make about game development, which is pretty different in character from server-side apps, fits pretty well into the point of the previous article (“Performance management – do you have a plan?”), which is: before you try to tune, know what your performance objectives and requirements are. Game development is one of the areas where these requirements are easiest to quantify (that doesn’t make them easy to achieve, though).
Cheers,
-Brian
PS Thanks to all on this board for engaging in a much more intelligent and respectful debate than the folks on Slashdot. Makes me want to lurk here more often.
hehe, u can blame me for sending you the link to here ;D
I’ve preached the same points myself, having learnt both from experience and some good teachers. I did wonder for a moment whether I was describing a summary of your article, or a summary of the article I would have written myself
On a related note, there was a thread a while back on compiler optimizations:
Exactly! Performance goals can be very different depending on what kind of game you are writing. There’s vastly different performance requirements for:
- Single-player video games
- Mobile games
- Persistent worlds
And also different genres: action vs strategy vs FPS vs puzzle.
For example, for the MMO strategy game we’re working on, transactional reliability for certain types of network messages is more important than latency or per-message throughput. And the 3D portion of the client has different performance requirements than the 2D portion. And the server-side AI has different requirements than the client-side AI, etc, etc, etc.
And all of those requirements are very different than for a standalone shooter game.
Make it run, make it run correctly, make it run fast – in that order!
From my last large project I noticed one difference: methods declared final did speed things up on 1.1.8. Maybe it was the particular implementation (Lotus Notes server and servlet engine with 1.1.8)…
With 1.3 I couldn’t see a real difference.
That makes sense.
You shouldn’t use the final keyword to speed things up, but only for design reasons.
Recent HotSpot versions are smart enough to do clever inlining (to optimize) without the ‘final’ keyword, whereas older implementations needed the final keyword to do such optimizations.
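If you’d rather see it than take my word for it, here’s a little sketch (names made up; the output format differs between JVM versions): run it under -XX:+PrintCompilation with and without ‘final’ on the callee and compare what gets compiled.
[code]
// Tiny sketch for watching HotSpot at work. Run as:
//   java -XX:+PrintCompilation InlineWatch
// and compare the compilation log with and without 'final' on scale().
// This is a way to look, not a benchmark.
public class InlineWatch {

    static class Scaler {
        int scale(int x) { return x * 3; } // deliberately NOT final
    }

    public static void main(String[] args) {
        Scaler s = new Scaler();
        long sum = 0;
        for (int i = 0; i < 50000000; i++) {
            sum += s.scale(i);             // hot, monomorphic call site
        }
        System.out.println(sum);           // use the result so the loop isn't dead code
    }
}
[/code]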
Erik
FWIW
I’m going to jump in here and defend the author. In general I agree with the statements listed at the top, assuming you are in a modern VM environment (IBM or Sun JDK 1.3 or better).
I just “love” guys who post “I tried to write a Java program once. It was slow, java sucks.” Which is what your slashdot poster sounds like. Lest we forget, the first program ANY of us wrote in ANY new environment sucked. The problem wasn’t the environment but rather our own knowledge and/or expectations.
Its a whole lot like saying “I tried to cook once. It tasted awful. Fire sucks.”
In particular it’s pretty obvious he did NOT know what he was talking about with regard to synchronization. Since 1.3, uncontended synchronization itself has been pretty much free; the expensive path only kicks in when there is actual contention. Which means he had contention, and THAT was what was slowing his code down.
That means either synchronization was necessary for the code to work right at all OR he was over-synchronizing and making his code needlessly serial. In either case it’s not synchronization that’s the issue but how the code is written.
Or he is just totally wrong about where his bottleneck was. Unless he carefully profiled, that’s a strong possibility.
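A quick sketch of what I mean (names made up; the absolute numbers are meaningless, only the uncontended-vs-contended gap is interesting):
[code]
// Rough sketch: the same synchronized method, timed with one thread (uncontended)
// and with several threads fighting over a single lock (contended).
public class ContentionSketch {
    private int count = 0;

    synchronized void bump() { count++; }

    static long run(final ContentionSketch target, int threads, final int perThread)
            throws InterruptedException {
        Thread[] workers = new Thread[threads];
        long start = System.currentTimeMillis();
        for (int t = 0; t < threads; t++) {
            workers[t] = new Thread(new Runnable() {
                public void run() {
                    for (int i = 0; i < perThread; i++) target.bump();
                }
            });
            workers[t].start();
        }
        for (int t = 0; t < threads; t++) workers[t].join();
        return System.currentTimeMillis() - start;
    }

    public static void main(String[] args) throws InterruptedException {
        int total = 20000000;
        System.out.println("uncontended: " + run(new ContentionSketch(), 1, total) + " ms");
        System.out.println("contended:   " + run(new ContentionSketch(), 4, total / 4) + " ms");
    }
}
[/code]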
The most intelligent thing on this subject I’ve seen is the post that “trying to determine bottlenecks in a dynamically compiled environment is difficult at run-time and nearly impossible at code-design time.” (Paraphrased, pardon me.)
The best advice for fast Java code is to write clean, clear, well-encapsulated code that can be tuned easily, then take a profiler to it and tune it.
Do that and you can get C/C++ level performance.
Another common myth:
“Calling through interfaces is slower than calling through the class itself.”
Totally wrong. Based, as many of these are, on a mistaken belief that Java VMs work like C++ compilers.
The problem, it seems to me, is not that java is slow, but that the belief that it is slow is so popular.
At work I am forever having to argue that java is not the performance bottleneck people assume it is, whenever a client complains ‘Help, the server is overloaded! Isn’t that because of the java stuff running on it?’, or ‘Oh no, is it a java program? It will tax my server too much, because it’s so slow!’, or, just recently, ‘I want it done in C++ because it will be too slow in java’.
Until now I have always been able to convince them with hard evidence like profiler output and such, but the very fact that the belief in java’s bad performance is so popular is quite annoying, really.
Unfortunately there are still enough cases where Java is slow to sustain the belief that it is slow in general. It doesn’t help that products like NetBeans are rather optimistic with their minimum machine specification (notably with regard to available memory).