I recently spent some time profiling a procedural FM synth program I’ve been working on, trying to find areas of improvement. This is what I came up with:
Replacing “/ 2” (integer division by 2) with “>> 1” really did perform better. However, “* 2” and “<< 1” gave me identical times. Is this compiler dependent?
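For what it’s worth, I think the asymmetry comes from signed-int semantics rather than the compiler alone: for negative values, “/ 2” and “>> 1” don’t even produce the same answer (division rounds toward zero, the shift rounds toward negative infinity), so the JIT can’t simply swap one for the other, whereas “* 2” and “<< 1” really are interchangeable. A minimal sketch:

```java
public class ShiftVsDivide {
    public static void main(String[] args) {
        int n = -7;
        // For non-negative values these agree, but for negatives they do not,
        // which is why "/ 2" can't be compiled down to a plain shift:
        System.out.println(n / 2);   // -3 (rounds toward zero)
        System.out.println(n >> 1);  // -4 (rounds toward negative infinity)

        // Multiplication has no such wrinkle, so "* 2" and "<< 1"
        // end up as the same thing and time identically.
        System.out.println(n * 2);   // -14
        System.out.println(n << 1);  // -14
    }
}
```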
Instead of creating a new double array on each iteration of a crucial loop, clearing an existing array and reusing it was significantly faster. It probably saves on garbage collection, too.
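Roughly what I mean, as a sketch (the buffer size and the per-sample work are made up here, not my actual synth code):

```java
import java.util.Arrays;

public class BufferReuse {
    // Hypothetical block size; the real value depends on the synth's buffer length.
    private static final int BLOCK_SIZE = 512;

    // Allocated once, reused on every pass through the inner loop.
    private final double[] scratch = new double[BLOCK_SIZE];

    double renderBlock() {
        // Clear the existing array instead of doing "new double[BLOCK_SIZE]"
        // every iteration; no per-pass allocation, no extra GC pressure.
        Arrays.fill(scratch, 0.0);

        double sum = 0.0;
        for (int i = 0; i < BLOCK_SIZE; i++) {
            scratch[i] += Math.random();  // stand-in for the real per-sample work
            sum += scratch[i];
        }
        return sum;
    }

    public static void main(String[] args) {
        System.out.println(new BufferReuse().renderBlock());
    }
}
```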
Instead of using Math.sin(), a lookup table of 1024 entries covering a single sine-wave cycle, combined with linear interpolation, performed significantly faster. That surprised me a bit, because with sin() you can just plug in a double, whereas with a lookup table, for decent accuracy, you have to do two lookups and compute the linear interpolation, i.e., a lot more fussing. In spite of this, the lookup method still won.
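Here is a minimal sketch of the kind of table I mean; the phase convention (phase measured in cycles) and the details are just one reasonable way to do it, not necessarily how everyone sets it up:

```java
public class SineTable {
    private static final int SIZE = 1024;
    private static final double[] TABLE = new double[SIZE];

    static {
        // One full cycle of a sine wave, sampled at SIZE points.
        for (int i = 0; i < SIZE; i++) {
            TABLE[i] = Math.sin(2.0 * Math.PI * i / SIZE);
        }
    }

    /** phase is in cycles, e.g. 0.25 corresponds to sin(PI/2). */
    static double sinLookup(double phase) {
        double idx = (phase - Math.floor(phase)) * SIZE; // wrap into [0, SIZE)
        int i0 = (int) idx;
        int i1 = (i0 + 1) & (SIZE - 1);                  // wrap; SIZE is a power of two
        double frac = idx - i0;
        // Two lookups plus linear interpolation between neighboring entries.
        return TABLE[i0] + frac * (TABLE[i1] - TABLE[i0]);
    }

    public static void main(String[] args) {
        System.out.println(sinLookup(0.25));                  // ~1.0
        System.out.println(Math.sin(2.0 * Math.PI * 0.25));   // 1.0, for comparison
    }
}
```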
The really stupid one (and what caused me to have to look for the source of the performance drop-off in the first place): I had a totally inefficient way of doing error checking, in that the call had a String concatenation (to identify the location in the code) in its parameter list. It wasn’t obvious to me that the concatenation was being executed on every call, since it was buried within the line of code. (And I hadn’t made this error before. I seem to need to make every possible error at LEAST once, usually more, before I learn better.) That turned out to be the biggest culprit. If there is any String activity needed at all, keep it out of the sections that require any degree of performance.
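The names here are hypothetical, but the pattern is the same shape as the one that bit me: the message String gets built on every call because arguments are evaluated before the method is even entered, so the cost is paid whether or not anything is wrong.

```java
public class ErrorCheckCost {

    // Hypothetical checker along the lines of the one I was using.
    static void check(boolean condition, String message) {
        if (!condition) {
            throw new IllegalStateException(message);
        }
    }

    // BAD: the concatenation runs on every call, even when the check passes.
    static void processSampleSlow(double sample, int frame) {
        check(sample <= 1.0, "clipped sample at frame " + frame);
    }

    // BETTER: build the String only on the failure path.
    static void processSampleFast(double sample, int frame) {
        if (sample > 1.0) {
            throw new IllegalStateException("clipped sample at frame " + frame);
        }
    }

    public static void main(String[] args) {
        processSampleSlow(0.8, 42);
        processSampleFast(0.8, 42);
        System.out.println("both checks passed");
    }
}
```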
The nice thing was that this error forced me to profile, and I found the other stuff in the process.
Maybe these suggestions are obvious or basic for most of you. I’m admittedly a self-taught, intermediate-level Java programmer with a LOT still to learn.