I think this case is a failure in management. He might have been the project lead, but that doesn’t mean he could do whatever he wanted. At some point he should have been told that his intended actions were not part of his assigned tasks. That it actually went so far that he rewrote your codebase is shocking, imho.
Quite often programmers can zoom off on a tangent and do stuff so quickly nobody even has time to question what they’re doing let alone stop them. That’s programmers for you.
This sort of “reaching through objects” is something you don’t want to do all the time. Obviously just one example of it doesn’t mean anything one way or the other, but when you find yourself doing it frequently, it goes against an OO design principle called the Law of Demeter. Usually it means you wanted a method on the class of the thingy rather than computing it “from the outside”, as it were.
IDEA has an inspection called “feature envy” that detects this sort of thing. Ironically, refactoring it into a temporary reference will defeat this inspection though.
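To make the inspection concrete, here’s a minimal made-up sketch (Order/Customer/Address are purely illustrative names, not from anyone’s actual code):

class Address  { final String zip;      Address(String zip) { this.zip = zip; } }
class Customer { final Address address; Customer(Address a) { this.address = a; } }
class Order {
    final Customer customer;
    Order(Customer c) { this.customer = c; }
    // Demeter-friendly: the chain lives in one place, behind a method on the owning class.
    String shippingZip() { return customer.address.zip; }
}

class Shipping {
    static String label(Order order) {
        // "Feature envy" / reaching-through style: the caller knows Order -> Customer -> Address.
        String dug = order.customer.address.zip;
        // Demeter style: one question to the immediate collaborator.
        return order.shippingZip();
    }
}

The difference isn’t the character count, it’s that only Order needs to know how it stores its customer.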
Well, I guess the Law of Demeter is just another myth successfully promoted by computer science professors.
Imho pragmatism is fine, but if you start defending goto it really gets ridiculous.
My definitions are attempting to use a little-known literary style known as Satire. BTW: my father and step-father are professors, a fair number of friends and family members are as well, and I spent a number of years performing university research. The core of my definition is a paraphrase of what the chair of computer science said to me during a meeting when I was describing my plan of attack. I described my plan and ended with “or I can apply the principle of K.I.S.S. and defer the optimizations”.
I included them in an attempt to show what he actually said, rather than a twisted version or completely opposite in meaning. Knuth’s statements are tangential as they are only concerned with localized micro-optimizations, which are the least interesting kind. Generally the best you can hope for is some small linear speed increase in the specific code in question. It can go higher if you find reducible mathematical formulations or computational fast exits but still the gains will tend to be relatively minor. Also note that it seems that you think I’m diss’in Don K. No way. I 100% agree with what he actually said…in the context that he said it. And part of that context is when it was said: 1974, which is B-4 the personal computer revolution when the kinds of programs being written changed pretty radically. Today what he said is still good advice, say about 97% of the time.
Recall what I’ve said: optimization is attempting to meet some set of goals against some metric of success, and your time is your most valuable resource. Your time should be measured in opportunity cost. Every hour you spend doing A has an opportunity cost of two hours: the hour you spend on A, plus the hour it will take you to catch up on B if B would have been the better thing to be doing at the time. And of course this further explodes if the lack of ‘B’ has an impact on the effectiveness of others involved. Worse still if ‘A’ ends up having no value and must be replaced.
He’s wrong.
[quote]Any LTE or GTE checks he decided required two checks
[/quote]
Again wrong.
WRT: dereferencing chains. Personally I tend to pull them out for cosmetic purposes. But it is a useful and easy micro-optimization in the case where the compiler cannot statically tell that one or more of the members couldn’t have changed since the last dereference. A non-inlined call in between, or the potential for an alias, are examples.
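A hedged sketch of what I mean (World/Settings/Entity are invented names): if update() isn’t inlined, the JIT can’t always prove that world.settings.gravity is unchanged between reads, so hoisting it into a local guarantees a single read…and mostly it just reads better.

import java.util.List;

class Settings { double gravity = -9.81; }
class World    { final Settings settings = new Settings(); }
class Entity   { void update(double gravity) { /* integrate, etc. */ } }

class Stepper {
    static void stepChained(World world, List<Entity> entities) {
        // Chain re-dereferenced on every iteration.
        for (Entity e : entities) {
            e.update(world.settings.gravity);
        }
    }

    static void stepHoisted(World world, List<Entity> entities) {
        // Pulled out once: a single read, and (to my eye) clearer code.
        double gravity = world.settings.gravity;
        for (Entity e : entities) {
            e.update(gravity);
        }
    }
}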
[b]~/src/SecretProject:[/b] svn up
“Huh, why have ALL these files changed?”
I’m happy up on my horse. I can see further than the heathens at my feet.
Hopefully you mean unstructured gotos. There’s never been a question about the structured kind, which sadly many people don’t get.
Now back to the subject at hand:
I’m not really talking about micro-optimizations. These can, and almost always should, be deferred, as they have little or no external impact. This is what Knuth is talking about. They might be needed to meet the performance requirements, but they offer little return compared to the development-time cost. And Knuth is absolutely correct that programmers frequently fail to properly identify what will end up being a hotspot, so there is no reason not to wait until you’re 100% sure the hotspots have been properly located. This lowers your risk.
Similarly for small, local-only optimizations. By this I mean routines or sub-systems whose external interactions can easily be abstracted away, or corrected by calling on Cas’ refactoring fairy when a bad choice has been made. The opportunity cost is a real drag but hopefully manageable. The trick here is that “small” is context dependent. If the project has to be done in a couple of weeks, then nothing is small, and if it’s a ‘my lifelong tinkering project’ then pretty much everything is.
So what AM I talking about then? Widespread decisions which pretty much must be made in advance if you don’t want your opportunity cost to skyrocket. And if you move far enough in the wrong direction, you’ve painted yourself into a corner and you’re SOL if you need to change. I’ll give a couple of examples and stick to things that have popped up recently on these forums.
Scenegraphs vs. spatial partitioning. These two styles of world management are pretty much mutually exclusive when used as the world database representation. I won’t go into pros & cons as I’m far too biased.
Not storing explicit angles in 2D for orientation/rotational information. Logically, using complex numbers instead of storing the angle makes it possible to hardly ever need trig and inverse-trig functions. This rotational information is numerically equivalent to a unit vector in the “facing” direction. It’s also possible to drop a fair number of matrix operations, as complex numbers trivially handle composition of rotations and rotation of vectors/points, as well as reflections (although via a different formula, whereas matrices unify them). See the rough sketch below.
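A rough sketch of that representation (the class and method names are mine, not any library’s): keep (cos θ, sin θ) instead of θ, compose rotations with complex multiplication, and rotate points without any per-use trig.

// 2D rotation stored as a unit complex number; (c, s) is also the facing vector.
class Rot2 {
    final double c, s;                        // cos(theta), sin(theta)
    Rot2(double c, double s) { this.c = c; this.s = s; }

    static Rot2 fromAngle(double theta) {     // trig only when you really need it
        return new Rot2(Math.cos(theta), Math.sin(theta));
    }

    Rot2 compose(Rot2 r) {                    // composition = complex multiplication, no trig
        return new Rot2(c * r.c - s * r.s, c * r.s + s * r.c);
    }

    double[] rotate(double x, double y) {     // rotate a point/vector, returns {x', y'}
        return new double[] { c * x - s * y, s * x + c * y };
    }
}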
In theagentd’s thread “Random thoughts: Extreme speed 2D physics”, one concern was having enough precision in the coordinate information to handle the scale he desired. One possibility would be to move to a higher-precision (non-natively supported) format. Doing so would have an enormous impact on the performance of every simple calculation involving a coordinate. Additionally, to be practical, you’d have to create a class to support this non-native format, and HotSpot isn’t great for small objects. If my quick math is correct, (excluding the pointer) a 3D vector of doubles is 48 bytes and a 3D vector of a non-primitive class using 128 bits/component is 120 bytes (2.5x more memory). Toss in the lack of operator overloading to complicate the implementation, and the fact that (for something like double-doubles) each simple operation would cost about 10x the cycles of a plain double. Of course, all of this would take much longer to implement. So simply forget about all of that and just break the world up into some collection of local coordinate frames…problem solved, you’re back to using plain old doubles. And even if you do the most naive collision detection possible (n²), the fact that entities are scattered across multiple coordinate frames will lead to dramatically faster execution. No downsides here. Faster, smaller, easier and quicker to implement…move on to the next task.
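To sketch the “collection of local coordinate frames” idea (entirely made-up names, not anyone’s actual code): each region carries its own origin, bodies store plain doubles relative to it, and you only re-base a position when something crosses a region boundary.

import java.util.ArrayList;
import java.util.List;

class Body { double x, y; }                    // local coordinates, plain doubles

class Region {
    final double originX, originY;             // this region's offset in world space
    final List<Body> bodies = new ArrayList<>();
    Region(double ox, double oy) { originX = ox; originY = oy; }

    // Re-base a body into another region so its world position stays the same.
    void transferTo(Body b, Region to) {
        b.x += originX - to.originX;
        b.y += originY - to.originY;
        bodies.remove(b);
        to.bodies.add(b);
    }
}

Collision checks then only run within a region (plus perhaps its neighbours), which is where the big win over a single global n² pass comes from.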
Of course, but that doesn’t change the fact that it’s already been done. Let us not forget the extreme speed at which coders can do their magic when on a roll. I wouldn’t necessarily call it a management failure at this stage: but letting him get away with it more than once would be.
@Eli – Having some fellow making changes in the code base without agreement from the project lead seems quite out of line!
This “optimization” of his caught my eye. Having i suddenly heading in the opposite direction seems like a dangerous change unless it is only being used for counting.
[quote]Change all:
for (int i = 0; i < arrayList.length; i++)
to:
for (int i = arrayList.length-1; i > -1; i--)
[/quote]
But what I am curious about is that I was reading the proper way to do this sort of thing was as follows:
for (int i = 0, n = arrayList.length; i < n; i++)
This way, i still counts upward for the looped code, and arrayList.length is no longer being re-read needlessly.
Are compilers getting smart enough to automatically fix this sort of thing now? Does this optimization matter very much (maybe only with large arrays)? Is it worth the bother? It seems to me a readable way to write loops. I can’t recall where I first read about it.
The thing is, I trust a professional programmer to get on with doing what he thinks is best, and I find that having to agree pointless micro bullshit like this with a so-called project lead to be insulting. I have a general rule for people who work with me these days, which is, you don’t tell me what to do, and I won’t tell you how to do it. Whoever writes it and makes it work first is right. After a while of working with people you get to know who goes off on a tangent doing pointless work, and yes, sometimes it’s even me, because occasionally I like to do pointless work while I’m thinking about something else or just for a change.
One unfortunate aspect of programming is the very wide disparity in understanding and ability of programmers on all levels. This creates astounding friction with the other aspect of programmers which is that they all think they know more than everyone else on their team (you can just see how aspect #2 mysteriously gives rise to aspect #1). Yes, me included. It is a remarkable achievement sometimes that software ever gets made in a collaborative manner given these two invariant truths on programming teams. It is also therefore remarkably unsurprising that lone programmers usually produce their best work, and very small teams are vastly, vastly more productive than very large teams.
[quote]It is also therefore remarkably unsurprising that lone programmers usually produce their best work, and very small teams are vastly, vastly more productive than very large teams.
[/quote]
I don’t think it’s limited to programming. Smaller teams just work better for us humans. It’s sort of the way we are wired. Also there is just less communication overhead.
As the old management saying goes: you can’t get a baby in one month by getting nine women pregnant.
@philfrei: The only reason to pull out length is if the compiler can’t tell whether the array might have been changed behind its back. So if the reference is a local variable and is never reassigned within the loop, then it will be read exactly once.
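Roughly what that boils down to (the field and method names are just illustrative):

class Sums {
    int[] data;                                // instance field

    int sumField() {
        int sum = 0;
        // 'data' is a field; if the JIT can't prove it isn't reassigned behind
        // the loop's back, the reference (and its length) may be re-read.
        for (int i = 0; i < data.length; i++) {
            sum += data[i];
        }
        return sum;
    }

    int sumLocal() {
        int sum = 0;
        int[] a = data;                        // local, never reassigned
        // Now the reference is read exactly once, which is all the manual
        // "hoist the length" trick buys you in the simple cases.
        for (int i = 0; i < a.length; i++) {
            sum += a[i];
        }
        return sum;
    }
}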
Micro-management is madness. Massive time cost for everyone involved, and it creates the exact opposite atmosphere from what you really want. The team is “us” and the project is our baby that we all want to be proud of.
On the flip side it’s fantastic when the levels of knowledge are scattered across different areas.
Unless those areas are conflicting in nature. Consider the design problems that can occur when you put an Oracle PL/SQL developer on a team with any kind of web-app developer, for example. You’ll have one person wanting to put everything in the database and keep the application layer as thin as possible, and one person treating the database as something to put data in and nothing more.
Memoizing the loop condition after proving it never changes during the loop is a trivial optimization for most cases; hell, I bet even Dalvik manages that one.
Yes, the guy needed to be better managed. Unfortunately we were a small team, I was professionally inexperienced so didn’t feel comfortable telling him straight up that he was wrong, and I didn’t have time to deal with it. But he had instructions, which were to do something completely different from these micro- or anti- or whatever-you-want-to-call-them optimizations. That’s why we had him leave.
He also made class names that were massively long and made ASCII graphs in the code, but hey, that’s just preference.
Roquen - I know all his optimizations were wrong, that was the point in posting them. I would also continue to disagree with you on your examples for optimizations (not storing angles and storing vectors instead, scenegraphs, etc.). Write it quickly and intelligently the first time, but don’t worry about stuff like that. Chances are in 99% of situations, calculating a square root 5,000 times per frame is going to do nothing to your FPS. If you make your game and you’ve only got 20 FPS, figure out specifically what is taking the most time. If it’s square roots (probably won’t be), then make that change.
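For what it’s worth, you don’t even need a real profiler to get started; something as crude as this (the calls in the commented usage are placeholders) will usually point at the big offender, and VisualVM or similar can take it from there.

class FrameTimer {
    // Time one section of the frame and print how long it took.
    static void time(String label, Runnable section) {
        long t0 = System.nanoTime();
        section.run();
        System.out.printf("%s: %.2f ms%n", label, (System.nanoTime() - t0) / 1e6);
    }
}

// Usage, with placeholder calls:
// FrameTimer.time("physics", () -> physics.update(dt));
// FrameTimer.time("render",  () -> renderer.draw(scene));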
But whatever. You can do what you want to, my man.
Remember I’m talking about attempting to make reasonable (not necessarily best) design decisions based on the problem at hand and making forward progress. If the design “tells you” that there are no big potential performance bottlenecks, space issues, etc., then you spend zero time thinking about them. On the other hand, ignoring what your design is telling you about the problem is a recipe for failure. I’d say that 99% of failed or in-trouble projects are due to a combination of over- and under-design and a lack of reasonable time estimates. No amount of duct tape and super glue at the tail end will address the problem (at least not in a reasonable amount of time).
[quote]But what I am curious about is that I was reading the proper way to do this sort of thing was as follows:
for (int i = 0, n = arrayList.length; i < n; i++)
[/quote]
Understanding assembly helps when optimizing simple loops. Here are a couple of optimizations that should work in Java.
Unrolling a loop. Each iteration of a loop requires a comparison. If you can get rid of that comparison then it will go faster. For example if you have a fixed array of 256 elements then 256 consecutive lines will be faster than a loop.
for (int i = 0; i < 256; i++) {
array[i] = foo(i);
}
is slower than
array[0] = foo(0);
array[1] = foo(1);
array[2] = foo(2);
...
array[255] = foo(255);
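If 256 copy-pasted lines feels like too much, a middle ground is partial unrolling (a sketch, keeping the same 256-element example): you still cut the compare/branch count to a quarter without exploding the code size.

// Partial unroll by 4: one compare/branch per four elements instead of one per element.
for (int i = 0; i < 256; i += 4) {
    array[i]     = foo(i);
    array[i + 1] = foo(i + 1);
    array[i + 2] = foo(i + 2);
    array[i + 3] = foo(i + 3);
}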
Have the loop condition compare against 0 if possible. The reason is that checking whether a register is 0 comes essentially for free (the decrement itself sets the flags), whereas comparing against a non-zero limit requires an explicit compare against that value every iteration.
for (int i = 0; i < array.length; i++)
is slower than
for (int i = array.length - 1; i >= 0; i--)
Good compilers can optimize these. I haven’t checked recently but in 1.4 the JDK did not.