Generalized Rant Thread

Adam Martin (T-machine, blahblahblahh of these parts) once told me how authority is structured at his consultancy: he who codes it is right. That is, you can argue till you’re blue in the face, but whoever got it working has moved on and is making money doing something else.

Cas :slight_smile:

NOTE: My rants (unless I’m poking a bear) can all be backed up by technical merit and many years of experience. If I’m making a subjective statement I’ll almost always point it out. And they are always intended to be helpful and to make people think rather than parrot.

Luckily in computer science a fair amount of things actually are black & white. What’s tricky is that today’s black & white might have flipped vs. X number of years ago. As an example: gotos. The original problem was that all mainstream languages were unstructured and unstructured gotos were the only flow-control mechanism. That situation has been effectively dead for about 40 years and is moot. This is basically the rant of “Gotos considered harmful”. Dead issue, move on.

The remaining problem was that unstructured gotos caused havoc for compilers. They complicated the notion of a basic block and prevented a fair number of optimizations. This issue is also dead with the introduction of SSA (and other modern representations; in SSA a PHI node kills the problem). So also dead for 20-30 years. But all of this is about unstructured gotos.

There has never ever been an issue with structured gotos. And yet there are a large number of people that avoid them because someone (without a clue) told them they were bad. Actually the opposite is true. Avoiding a structured goto will almost always require the introduction of otherwise pointless variables, which increases the pressure on the register allocator. It certainly increases the size and complexity of the code. Now if in a given situation the programmer in question decides that a version without a structured goto looks cleaner than one with…fine, there’s nothing wrong with making stylistic choices. The thing to keep in mind is that if you think that structured gotos are “bad”, then you can’t use any flow control mechanism. They are all structured gotos (for, while, if blocks, etc.).
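Since this is a Java forum: Java has no goto keyword, but labeled break/continue are exactly the kind of structured jump I mean. A minimal sketch (hypothetical grid-search methods, purely for illustration):

static int[] find(int[][] grid, int target) {
    int foundX = -1, foundY = -1;
    search:
    for (int y = 0; y < grid.length; y++) {
        for (int x = 0; x < grid[y].length; x++) {
            if (grid[y][x] == target) {
                foundX = x; foundY = y;
                break search; // one structured jump out of both loops
            }
        }
    }
    return new int[] { foundX, foundY };
}

// The same search with the jump "avoided": an otherwise pointless flag
// variable appears and every loop condition gets more complex.
static int[] findNoJump(int[][] grid, int target) {
    int foundX = -1, foundY = -1;
    boolean found = false;
    for (int y = 0; y < grid.length && !found; y++) {
        for (int x = 0; x < grid[y].length && !found; x++) {
            if (grid[y][x] == target) {
                foundX = x; foundY = y;
                found = true;
            }
        }
    }
    return new int[] { foundX, foundY };
}

Which version reads better is a stylistic call; the point is that the flag buys you nothing.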

In which case I’d ask them to point out these “situations”…ya know, so I could show them “the errors of their ways”. :slight_smile:

The only generalization that doesn’t fall apart is: All generalizations are false. (Think about that one for a minute).

Oh the paradox… my mind = blown

All generalizations (models/abstractions) are incorrect, but some are useful.

Ah, but over-generalization is never useful; whether something is over-generalized, though, is sometimes subjective.

Ok I must find this “Roquen’s Dictionary of Computer Science” ;D It seems to have a plethora of useful information.

Please master…teach me. :o

This thought model in your brain is induced by your relationship with computers… bitwise thinking: something is either true or false!

You could “double” your thinking by having “floating” boundaries!

(I’m not kidding) ;D

True, but then you’re only retaining the most significant information. Which may or may not be the desired goal.

Oops accidental medal for you :stuck_out_tongue:

Not accidental, that was actually a pretty good reply. :slight_smile:

Optimization is the root of all evil (phrase): A myth successfully promoted by computer science professors and teaching assistants. The goal is to minimize (optimize) their time actually spent with students and student-related activities, such as grading assignments and tests, so that they have more free time for their real reasons for being an academic. Examples: performing research, getting grants, scoring with undergraduates and/or drinking at the pub to drown their sorrows about not being able to get a real job.

Seriously. Let me see a show of hands of people that think “gotos” are evil. Now let me see a show of hands of people that think “optimizations” are evil. If you held up your hand both times, you’re being a parrot without even knowing it and there is no way that you’ve read the two principal papers on which these notions are based.

DONALD E. KNUTH, “Structured Programming with go to Statements”, Computing Surveys, Vol. 6, No. 4, December 1974. It is a defense of the goto statement. The paper is available online.

And another from the same paper:

Copyright © 1974, Association for Computing Machinery, Inc. General permission to republish, but not for profit, all or part of this material is granted, provided that ACM’s copyright notice is given and that reference is made to this publication, to its date of issue, and to the fact that reprinting privileges were granted by permission of the Association for Computing Machinery.

True story.

A guy (who shall remain nameless) was working on some code that needed to run on a cluster for many months. Since the cluster was still getting built, he decided to ultra-optimize the core part of the code. He busted it down to assembler and after about 6 months he managed to make it almost 5x faster than the original C code. Then he gave a talk about the ultra-cool optimizations, like instruction ordering and other such stuff. Someone else in the crowd (who shall also remain nameless) said that compilers are for the most part just better than humans at that stuff*. To prove it, this person took the original C and spent a day tuning the compiler flags for gcc. After just a day, he also had almost a 5x speed increase. When he switched to the Intel compiler, it was more than 6x faster than the original. Mr Ultra-optimizer cried and went MIA for about 2 months before coming back to finish his PhD.

Sure, don’t write crap code off the bat. There is no point doing a bunch of O(n^2) stuff when it could just as easily be O(n log n); see the sketch below. But for the most part, the people I know that want to optimize, optimize, optimize are the root of all evil on the projects I have had the displeasure of working on with them.

  • Of course there are exceptions, like basic vector stuff for SSE etc., but they are just that: exceptions.
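To make the O(n^2) point concrete, here’s a throwaway sketch (hypothetical example, not from any actual project): checking an array for duplicates. Both versions are equally easy to write; one just doesn’t fall over when n stops being tiny.

import java.util.Arrays;

// O(n^2): compare every pair. n(n-1)/2 comparisons in the worst case.
static boolean hasDuplicatesQuadratic(int[] a) {
    for (int i = 0; i < a.length; i++)
        for (int j = i + 1; j < a.length; j++)
            if (a[i] == a[j]) return true;
    return false;
}

// O(n log n): sort a copy, after which duplicates must be adjacent.
static boolean hasDuplicatesSorted(int[] a) {
    int[] copy = a.clone();
    Arrays.sort(copy);
    for (int i = 1; i < copy.length; i++)
        if (copy[i] == copy[i - 1]) return true;
    return false;
}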

I’d say that the problem here wasn’t optimization. The wasted time and effort came from a lack of understanding, in this case of the tools. In other cases it will be the language in question, the mathematics, the algorithms or the actual problem itself. I bet the nameless person will forevermore pay attention to the choice of compilers and their associated options. Lesson learned. And they needed to be burned…dropping to assembly should virtually never be done, and doing it for large chunks of code is pure foolishness.

The only reason you need to drop to hand-rolled ASM for SSE in C is that the language lacks a construct for expressing vector operations, so the compiler has to analyze loops, which is essentially an impossible problem for arbitrary loops. C++ can express vector ops by defining vector types, but it still has no vector primitives for the compiler’s benefit, so the “hand-rolling” would still have to take place in the class body, which is the wrong place to be making architecture-specific implementation decisions.

If Java wants high-performance vectorized operations, it could do worse than to lift them from Fortress.

I believe what we programmers collectively agree on is that premature optimizations are the root of all evil.

Let’s not get bogged down in “dropping to assembly”. This is the least interesting kind of optimization in terms of usefulness. Generally a reasonable expectation is some small linear improvement with a very short shelf life. It’s the last line of defense, squeezing water from a rock, (fill in some other cliche phrases). If you go there without seriously considering all other options you’re officially doing it wrong.

@sproingie: While what you’re saying is true for code written for scalar (SISD) execution, it should be noted that MSVC, Intel’s compiler & GCC all support extensions which expose SIMD (et al.) instructions. So dropping to assembly isn’t really needed in that case, except to manually schedule, register allocate, etc.

Unless you’re using the royal “we” in this sentence, it seems to me that programmers can agree on very little. There certainly is a large group of programmers that believe that optimization is evil.

Personally I think that “premature optimization” is an oxymoron. You can “prematurely code” but it’s impossible to “prematurely optimize”. Optimization != make this faster. Optimization is an attempt to meet a collection of goals, with some measures of the relative success of the attempt. Making some piece of code go faster without having any real impact on the final product is an anti-optimization. It wasted time (almost always the most important resource) and doesn’t measurably move you toward any goal, but it did move you away from the ultimate goal of getting the project done.

But even if you take the narrow view that optimization is only about speed, I still consider the notion of “premature optimization” to be harmful, because so many people take it to mean not worrying about performance until “the end”. On the whole that simply doesn’t work. The largest speed improvements will come from design and from understanding the problem. Waiting until the end will tend to limit your options and cost you additional coding time.

I’d also like to push back on the notion that optimizing for speed makes code harder to read, write and debug. On the whole I find that to be more the exception than the rule. Most of the time it should be a wash, followed by actually easier, and finally harder (a very small percentage).

In summary: Your most important resource is your time…don’t freaking waste it.

[quote]In summary: Your most important resource is your time…don’t freaking waste it.
[/quote]
Unless you’re a hobbyist, and your time is spent trying to outperform your last project… even though none of your games will ever need to render nearly that many sprites at once. :slight_smile:

Wasting time in this context is only in terms of goal meeting. It’s only wasted if you’re making zero or negative progress. So if the goal is a learning experience, it’s not really expected that the produced code is useful, fast, bug-free, well-designed, etc., unless any of these criteria are part of the goal set.

Ha! I can do one like that: my first proper coding job was on a massive C++ hardware control system / UI, written in the bad old days of VS6 and MFC. It ran like an absolute dog - because they only did debug builds. In release builds it was so crash-tastic it wouldn’t even boot. No-one seemed to know (or care) what the release-build-only bugs were (I think people assumed it was something wrong in the MS compiler and not their problem, but I’m certain it was just the usual uninitialised memory stuff).

Of course, performance was still an issue, and they actually attempted to optimise their debug-only code.

To make matters even weirder, they actually shipped debug builds. And because debug builds contain debug libraries from VS which you’re not supposed to ship, they actually had to buy a site license of VS for every client they shipped to ($$$).

I would love to go back, knowing what I know now, and fix the release build up. It probably wasn’t even any huge problems, just lots of little ones…




It seems like you’re making your point invalid with your own quotes. Also, your mindless judgmental insult of professors (both my parents are professors, by the way) further dilutes any valid point you may have had in your statement.

Nobody has ever told me “optimization is the root of all evil,” in school or otherwise. I have always heard “premature optimization is the root of all evil,” which, lo and behold, is exactly what Knuth said. Guess what I was often reading in school and taught had good philosophies to think about? Knuth. And who put me down that path? Professors.

Reading other people’s follow-up posts, I see they are all providing examples of exactly the same point. Don’t optimize prematurely. The proper development path is to write as good code as you can without stressing over performance, focusing instead on good design, readability, and modularity. Then when you find something is too slow, you find out why and fix it. End of story. So get off your high horse, because nobody is going to raise their hand saying that optimization in general is a bad thing.

Gotos, maybe, but I find that’s personal opinion, because there are a lot of ways to do the same thing you’d use gotos for.

/////////////////////////////////////
Now my true story:

I worked on an iPhone game that was very much in the prototype stage and had been in development for a couple of months. We were playing around with different ways of doing things and making it fun. Because we were short on people, we hired another engineer. To get familiar with the code, he was supposed to write a level editor. Instead, he went through the entire codebase (tens of thousands of lines long) and did these optimizations:

Change all:


for (int i = 0; i < arrayList.length; i++)

to:


for (int i = arrayList.length-1; i > -1; i--)

Why? Because it avoids reading .length more than once and is therefore faster. Let’s forget about any potential compiler optimizations and assume he’s right. The most we’d be looping through is maybe 100 things. So he saved maybe a fraction of a nanosecond every once in a while.
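For what it’s worth, even if the repeated bound lookup did matter, there’s a middle ground that keeps forward iteration readable. A sketch (hypothetical summing method, not our actual code):

static int sum(int[] arrayList) {
    int total = 0;
    // Hoist the bound once; iteration order and readability are preserved.
    for (int i = 0, n = arrayList.length; i < n; i++) {
        total += arrayList[i];
    }
    return total;
}

No reversed loop required.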

Change all:


if (i >= 0 || x >= array.length || y >= 0 || z <= n)

to:


if (i > -1 || x > array.length-1 || y > -1 || z < n+1)

Why? Also apparently faster. Any LTE or GTE check, he decided, required two comparisons (> and ==) and was therefore slower. This sounded totally insane to me, but I’ve never prided myself on knowing exactly what goes on under the hood, so I let him have his declaration for the moment. A few minutes of Googling later, I had several links telling him he was very wrong. His response was “okay”, but behind his eyes it seemed like he didn’t trust the links I gave him. Also, his changes were insanely difficult to read.
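As a sanity check (granted, the project was iPhone, so this is the JVM version of the argument), here is roughly what javac emits for each form. If anything, the “clever” version is one bytecode longer, and the JIT compiles both to a single machine-level compare-and-branch:

// if (i >= 0) { ... }
//     iload_1
//     iflt ELSE          // one branch: compare against zero
//
// if (i > -1) { ... }
//     iload_1
//     iconst_m1          // extra constant load
//     if_icmple ELSE     // then the compare-and-branch

There is no “two checks” penalty for >= anywhere in sight.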

Change all:


float f = obj.thingy.x + obj.thingy.y;

to:


Thingy thingy = obj.thingy;
float f = thingy.x + thingy.y;

Yes, this one can actually be fractionally faster, although with compilers these days I’d question even that. And once again you’re sacrificing readability with massive line bloat.

I know I’ve made some pretty stupid assumptions and voiced some misunderstandings on these forums (yes, nobody needs to point them out to me), so I’m not saying I’m spotless. But I would never go into an active codebase and start rewriting everything completely pointlessly instead of doing my job.

Did I mention I was the lead on that project? He didn’t work there much longer.