Square and then square root?
Bit-twiddling?
Ooh!
I had never heard about that one before, so I tried figuring it out:
int val = random.nextInt();
int sign = ((val&0x80000000)>>>31);        // 1 if val is negative, 0 otherwise
int val2 = (val^(0xFFFFFFFF*sign))+sign;   // if negative: flip all bits and add 1 (two's complement negate)
Probably not optimal, but fun.
edit:
Cleaner version with less bit twiddling.
int val = random.nextInt();
int sign = ((val&0x80000000)>>>31);   // 1 if val is negative, 0 otherwise
int val2 = val*(1-sign*2);            // 1-sign*2 is +1 or -1
edit again:
Heh. Even easier.
int val = random.nextInt();
int val2 = val*(1-(val>>>31)*2);
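Something like this rough sketch (assuming a plain java.util.Random and an arbitrary sample count) can be used to check it against Math.abs():

Random random = new Random();
for (int i = 0; i < 1000000; i++)
{
    int val = random.nextInt();
    int val2 = val*(1-(val>>>31)*2);
    if (val2 != Math.abs(val)) System.out.println("mismatch for " + val);
}
// Note: Integer.MIN_VALUE overflows to itself in both versions, so even that case matches.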
Never come across that one before.
I think this should work?
Dunno if it's the optimal solution though.
int a = random.nextInt();
final int sign = a>>31;        // 0 if a >= 0, -1 (all bits set) if a < 0
a = (a-(sign&1))^sign;         // if negative: ~(a-1) == -a; otherwise unchanged
:edit:
Heh, beaten to it.
Slightly different algorithm though =)
No multiplies either
Yes, and yours is faster!
Math.abs(): 17.61743 ns
My way: 6.5100017 ns
Your way: 5.472977 ns
This is fun, in a geeky way!
lol, is that with the code inside a method?
Even faster:
a = (a^(a>>31))-(a>>31);
4.3739724 ns
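In case it's not obvious why that works: the arithmetic shift turns the sign bit into a 0-or-all-ones mask (the variable names below are just for illustration):

int mask = a >> 31;           // 0 if a >= 0, -1 (all bits set) if a < 0
int abs  = (a ^ mask) - mask; // a ^ -1 is ~a, and subtracting -1 adds 1, so ~a + 1 == -a; non-negative a passes through unchanged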
Storing the sign in a final int is actually SLOWER on my jvm (1.5.0_06, client).
And, no, it’s not including a method call. =)
I set up an int[] full of random integers, then do four for loops like this:
int res = 0;
long pre = System.nanoTime();
for (int i = 0; i < vals.length; i++)
{
    int a = vals[i];
    a = (a^(a>>31))-(a>>31);
    res += a;
}
long post = System.nanoTime();
System.out.println((post - pre)/(float)vals.length + " ns");
The res values are then compared to each other to make sure they all get the same result.
Of course, this is all done several times to make sure the code has a chance to warm up.
Edit:
Just to clarify, the ONLY thing that changes between the loops is the middle row (i.e. the one that says “a = (a^(a>>31))-(a>>31);” above).
This makes sure the loop overhead is the same.
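Put together, the whole harness looks roughly like this (only a sketch; the class name, array size and repeat count are made up, and only the marked line differs between the four loops):

import java.util.Random;

public class AbsBench
{
    public static void main(String[] args)
    {
        int[] vals = new int[1000000];          // arbitrary size
        Random random = new Random();
        for (int i = 0; i < vals.length; i++) vals[i] = random.nextInt();

        for (int run = 0; run < 10; run++)      // repeated runs so the JIT gets a chance to warm up
        {
            int res = 0;
            long pre = System.nanoTime();
            for (int i = 0; i < vals.length; i++)
            {
                int a = vals[i];
                a = (a^(a>>31))-(a>>31);        // the only line that differs between the four loops
                res += a;
            }
            long post = System.nanoTime();
            System.out.println((post - pre)/(float)vals.length + " ns (res=" + res + ")");
        }
    }
}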
You should stick these in a final method and compare with Math.abs()… Then, if it is still faster, submit a performance enhancement to Mustang.
Math.abs: 17.01518 ns
public static int abs(int a) {
    return (a < 0) ? -a : a;
}
Foo.abs: 4.990932 ns
public static int abs(int a)
{
    return (a^(a>>31))-(a>>31);
}
That’s… quite a lot faster.
Sweet! I like your further optimised version!
I presume that is the optimal solution.
It may not be so efficient on processors that lack a fast shifter. While many processors will do a 31-bit shift in 1 or 2 clock cycles, some require 31 clock cycles.
Yup, gotta agree with that. Markus, you gonna submit it?
Is Math.abs() really ever even close to being a bottleneck?
If someone else wants to submit it, go for it.
Doubtful, though Java is renowned for having slow trig.
I realize this is in part due to the accuracy specifications,
but any unnecessary slowdowns in core methods (such as abs()) are always going to have a knock-on effect when they are relied upon in higher-level functionality.
That is true.
Now we just need to figure out a fast abs for floats and doubles.
See the gems section of my FAQ for a way to improve this.
Speaking of which, has anyone timed this against Math.abs(int)?
If it's faster, I'll add it to the Gems section 8)

That is true.
Now we just need to figure out a fast abs for floats and doubles.
That’s easy; actually, it’s much easier than for integers. Just use one AND (to clear the sign bit) and you are done.
Of course, I assume you are using SSE2 registers for all floating-point work.
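Pure Java doesn't expose the SSE2 registers directly, but the same idea, masking off the IEEE 754 sign bit, looks something like this sketch (the method names are just illustrative, and whether the round trip through the raw bits actually beats Math.abs(float) on a given JVM is another question):

// Clear the sign bit; exponent and mantissa are left untouched.
public static float fastAbs(float f)
{
    return Float.intBitsToFloat(Float.floatToRawIntBits(f) & 0x7FFFFFFF);
}

public static double fastAbs(double d)
{
    return Double.longBitsToDouble(Double.doubleToRawLongBits(d) & 0x7FFFFFFFFFFFFFFFL);
}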

Speaking of which, has anyone timed this against Math.abs(int)?
If it's faster, I'll add it to the Gems section 8)
Yes, see my post a few posts back.
Even when in a method, the hack runs in about 30% of the time of Math.abs(int).
That’s a speedup of ~250%!
Wow this thread actually became useful again. What a surprise

Wow this thread actually became useful again. What a surprise
But it's awfully dull, isn't it? The coolest accessory you can wear on this forum is probably blinkers. ;D
Good luck with Math.abs! Once you've got that optimized, everybody will start using Java, I'm sure.