So now we're on a microbenchmarking spree

Of course I don’t mind :slight_smile: but you have to test it, cos I didn’t really check the results (it might well be totally wrong), so some bugfixing may be involved ;D
I just did it to see if this would be a workaround for Math’s slowness.

EDIT: I just did some checks. Precision is good to ~4 places after the decimal point. You can of course make the precision higher at the cost of memory use, but hey, it’s 17.6 times faster than Math :slight_smile:
Interpolation is indeed an option, although it will surely cost.
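To make the interpolation idea concrete, here’s a minimal sketch of a table-based sin with linear interpolation between entries. The class name, table size, and the restriction to `[0, 2*PI)` are my assumptions, not FMath’s actual code; the table size is the precision-vs-memory knob mentioned above.

```java
// Hypothetical FMath-style lookup table with linear interpolation.
// SIZE trades memory for precision; names here are made up for illustration.
final class TableSin {
    private static final int SIZE = 4096;
    private static final float STEP = (float) (2 * Math.PI / SIZE);
    // SIZE + 1 entries so interpolation at the top end never reads past the table.
    private static final float[] TABLE = new float[SIZE + 1];

    static {
        for (int i = 0; i <= SIZE; i++) {
            TABLE[i] = (float) Math.sin(i * STEP);
        }
    }

    /** Assumes x is already in [0, 2*PI); interpolates between adjacent entries. */
    static float sin(float x) {
        float pos = x / STEP;
        int i = (int) pos;          // index of the table entry below x
        float frac = pos - i;       // fractional distance to the next entry
        return TABLE[i] + frac * (TABLE[i + 1] - TABLE[i]);
    }
}
```

With 4096 entries the interpolation error is far below the ~1e-4 discussed above, so the table size could be shrunk considerably if memory matters more.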

BTW, it seems to me that Math.sqrt is pretty fast already, although I haven’t checked against C. Anyway, I doubt it could be made much faster using Java alone.


            double result = 0;
            for (int i = 0; i < 1000000; i++) {
                  float r = (i / 1000000f) * FMath.PI_2;
                  result += Math.sqrt(r);
            }

is done in 31 ms on my machine. Well, on the client that is. On the server it’s 47ms strangely enough :-/

Strange thing is that if I redirect FMath.sqrt() to StrictMath.sqrt() (just like Math does), the FMath version is way slower than Math (Math being ~30x faster) ???
Something to do with strictfp perhaps?

Not strange at all. The StrictMath version really does do a JNI call to some complex C code, while the Math version will use the Intel sqrt instruction inlined.

Ah, that explains the comment in Math.sqrt() :slight_smile:
So the implementation in math is really like ‘overridden’ by HotSpot if I understand correctly?

Just out of curiosity I implemented a simple polynomial approximation to sin, accurate to about 2e-4. With code to reduce arguments to PI/2 it was about 5 times faster than Math.sin. If it could assume the arguments were in the range 0 … 2*PI, then it was about 9 times faster.
The range of arguments used in the test was 0 … 2*PI. For arguments restricted to the range 0 … PI/2 the advantage is less, because the Math.sin code doesn’t need to use its expensive argument reduction in this interval.
Note that the argument reduction used by Math.sin becomes more expensive with larger arguments, as it has to use ever-higher-precision values of PI (up to ~1024 bits, as I recall). This means that the benchmark which used arguments up to 1e6 was particularly cruel to the Java implementation.
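For reference, a sketch of such a polynomial approximation (my own reconstruction, not the poster’s actual code): a degree-7 odd (Taylor) polynomial is accurate to roughly 2e-4 on [-PI/2, PI/2], matching the figure quoted above, once the argument is folded into that range.

```java
// Sketch of a polynomial sin approximation -- a reconstruction for
// illustration, not the poster's code.
final class FastSin {
    private static final double TWO_PI = 2 * Math.PI;

    static double sin(double x) {
        // Cheap reduction to [-PI, PI).  This '%' reduction loses accuracy for
        // huge arguments -- exactly the case where Math.sin pays for its
        // ~1024-bit PI, and where this shortcut quietly gives up precision.
        x = x % TWO_PI;
        if (x < -Math.PI)      x += TWO_PI;
        else if (x >= Math.PI) x -= TWO_PI;
        // Fold into [-PI/2, PI/2] using sin(PI - x) == sin(x).
        if (x > Math.PI / 2)       x = Math.PI - x;
        else if (x < -Math.PI / 2) x = -Math.PI - x;
        // Horner form of x - x^3/6 + x^5/120 - x^7/5040;
        // worst-case error ~2e-4 at the interval ends.
        double x2 = x * x;
        return x * (1 - x2 / 6 * (1 - x2 / 20 * (1 - x2 / 42)));
    }
}
```

The speedup comes from skipping the high-precision argument reduction; the polynomial itself is only a handful of multiplies and adds.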

[quote]Ah, that explains the comment in Math.sqrt() :slight_smile:
So the implementation in math is really like ‘overridden’ by HotSpot if I understand correctly?
[/quote]
Yes, the VM knows how to do math primitives directly in the code, as opposed to treating them as method calls.

nifty :slight_smile:
Would you happen to know a possible explanation of the lower performance of Math using the server VM in my benchmark by any chance?
All mysteries would be solved then :wink:
Or should I report a bug and see what happens?

[quote]nifty :slight_smile:
Would you happen to know a possible explanation of the lower performance of Math using the server VM in my benchmark by any chance?
All mysteries would be solved then :wink:
Or should I report a bug and see what happens?
[/quote]
I know some folks I can ask. I’ll try to get to it. I’m kinda bogged down right now with getting the Big Secret Surprise ready for GDC…

Ok, thanks, and no hurry. Good luck with the preparations for GDC.

Here’s a bug from BugParade I found in a discussion at TheServerSide.com about the same benchmark study: http://developer.java.sun.com/developer/bugParade/bugs/4857011.html
I don’t know if this particular bug was posted in this forum before. The evaluation makes the reasoning behind Java 1.4 trig functions’ implementation pretty clear. A combination of using narrow ranges and table lookups seems to be the way.

Uh uh, digging up a dead thread :slight_smile:

There is a little glitch in the Test class. public void math() uses a long for the result (instead of a double). The massive amount of casting makes it ~25% slower than it has to be. Before it was 1:6 and now it’s about 1:4.5.
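A corrected harness might look like the following. The class and method names are mine (the original Test class isn’t shown in the thread); the point is that the accumulator is a double, so the loop no longer pays for a long↔double cast on every addition.

```java
// Hypothetical corrected benchmark: 'result' is a double, avoiding the
// repeated casts that made the Math.sqrt loop ~25% slower than necessary.
final class SqrtBench {
    static double run() {
        double result = 0;   // was 'long result' -- the glitch described above
        for (int i = 0; i < 1000000; i++) {
            double r = (i / 1000000.0) * (Math.PI / 2);
            result += Math.sqrt(r);
        }
        return result;       // returned so HotSpot can't discard the loop
    }

    public static void main(String[] args) {
        long t0 = System.currentTimeMillis();
        double sum = run();
        long t1 = System.currentTimeMillis();
        System.out.println("Math.sqrt: " + (t1 - t0) + " ms (sum=" + sum + ")");
    }
}
```

Printing the accumulated sum also guards against the JIT optimizing the whole loop away, which would make the timing meaningless.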

Well, Math got faster and FMath isn’t “17.6 times faster than Math” anymore, but it’s still faster and might be a measurable difference in your application.

Hmmm, I’m quite sure I fixed that when I measured it. I print the results and they were approx. the same. Lemme run it again :slight_smile:

I checked and re-ran the benchmark.
On the client, I’m getting FMath being ~6.2 times as fast, on the server about ~22.8 times as fast.

This is on 1.4.2_03

[EDIT:]
BTW, it’s of course not really a comparative test, since Math obviously has far greater precision, but it might indicate what kind of performance you could gain by settling for slightly inaccurate results.
That said, I suggest to always use Math, and switch to something like FMath if (after profiling) it’s clear that Math causes a major bottleneck and that FMath’s inaccuracies are within bounds of acceptability.

[quote][…]
That said, I suggest to always use Math, and switch to something like FMath if (after profiling) it’s clear that Math causes a major bottleneck and that FMath’s inaccuracies are within bounds of acceptability.
[/quote]
Yeah, of course :slight_smile:

In my current case I’ll only have one class which uses some basic math relatively often. Therefore I can just try it out without too much hassle.

One thing which is rather odd: in C/C++, lookup tables are slower than doing it the proper way, because lookup tables need to execute more instructions. Well, it’s also less accurate than the über-accurate Java way in double precision. I wish there were a float math lib, and a HotSpot compiler which knows about it, too.

That would be pretty awesome :slight_smile:

I think many things are slower to do using look-up tables these days (CPUs being much faster than memory), unless they replace very processor-intensive tasks. Especially in Java, because of its bounds checks.
In the old days I used look-up tables everywhere. Now only in some circumstances.
And yes, a fast, less accurate Math lib like you described (or even a switch) would be awesome.
I wish nobody had noticed Java’s slight ‘inaccuracy’ with sin/cos in 1.3 ;D