Java improvements 1

Hi. I think Java needs 128-bit precision, and some other nice thingies.

That sentence above looks horrible. Basically I’d like to see quad-precision floating-point numbers, and i128/u128 integer types, in Java. Once those were added, I’d also like to see unsigned multiplication and unsigned comparison (if) operations. The idea behind this is: why add unsigned numbers when you could have unsigned operations instead? This approach is used in assembly and it works well.

So we would have (where #* is the proposed unsigned-multiply operator):

int a = 0;      // then 1, then 2
int b = 0xff;   // 0xff each time

int c = a #* b; // giving 0x0, 0xff, 0x1fe

and so on.
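Java 8 eventually took exactly this route, with library methods on the existing signed types instead of new operators or types. A small sketch (the values are arbitrary):

[code]
public class UnsignedOps {
    public static void main(String[] args) {
        int a = 0xFFFFFFFF; // -1 as a signed int, 4294967295 unsigned
        int b = 2;

        // Signed comparison says a < b; unsigned comparison says a > b.
        System.out.println(a < b);                         // true
        System.out.println(Integer.compareUnsigned(a, b)); // positive, i.e. a > b

        // Unsigned division and widening without sign extension (Java 8+).
        System.out.println(Integer.divideUnsigned(a, b));  // 2147483647
        System.out.println(Integer.toUnsignedLong(a));     // 4294967295
    }
}
[/code]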

Basically I posted a list of improvements on the Sun forums, back when the “Advanced Programming Topics” forum still existed.
Actually, I found that discussion. It’s at
http://onesearch.sun.com/ClickThru?qt=raghar+128&url=http%3A%2F%2Fforum.java.sun.com%2Fthread.jsp%3Fforum%3D4%26thread%3D488568&pathInfo=%2Fsearch%2Fdevelopers%2Findex.jsp&hitNum=1&col=devforums

Basically, unsigned operations weren’t disputed there, so they passed by default. Operator overloading for a few highly specific mathematical operations, defined in the language specification in a similar way to the existing String operator overloading, was also found reasonable.

[quote]Hi. I think Java needs 128-bit precision
[/quote]
It might help if you said why?

I think Java is much better off without unsigned numbers. Sure, it can be a pain when converting an algorithm from C/C++ which uses unsigned values extensively. On the other hand, improperly mixing signed and unsigned values in C/C++ has been the source of many bugs.

[quote]I think Java is much better off without unsigned numbers.
[/quote]
The exception to the above, I find, is the byte type, which I wish were unsigned, much like char is.

At the very least it would be nice if you could initialize a byte with a hexadecimal number in the 0x80-0xFF range without having to stick in an ugly cast.

Edit: I should elaborate…

Rarely does anyone operate on signed bytes. Usually where sign matters you are using an int, short, or long. Bytes are usually operated on with boolean logic operators, and when they are promoted to int in a mathematical expression I find that, more often than not, the sign extension must be suppressed with (b & 0xff). The problem, I suppose, is finding a way to sign-extend an unsigned byte type for those rare occasions when it is desired.
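Concretely, both annoyances look like this (a small sketch, values arbitrary):

[code]
public class SignedByteDemo {
    public static void main(String[] args) {
        // 0xFF is an int literal (255), outside byte's range of -128..127,
        // so a plain assignment won't compile without the ugly cast.
        byte b = (byte) 0xFF;       // stored as -1

        // Promotion to int sign-extends, so arithmetic sees -1, not 255.
        System.out.println(b + 1);  // prints 0

        // The usual workaround: mask off the sign-extended bits.
        System.out.println((b & 0xFF) + 1);  // prints 256
    }
}
[/code]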

For the same reason Java has 64-bit numbers: sometimes 64-bit precision isn’t enough.

That’s not an actual reason :(. Give an example of why you actually need 128-bit…

In particular, do you have any game-related situations where you need it?

It’s fairly easy. You have a galaxy. It’s approximately 100 000 pc in diameter. You have a fleet. It’s moving from point A to point B. You have another one. It’s moving from point C to point D. Now you need to know if one fleet detects the other, and where that will happen… with a precision of 0.1 km.
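To put numbers on that (a rough sketch of the arithmetic, taking 1 pc ≈ 3.0857e13 km):

[code]
public class GalaxyPrecision {
    public static void main(String[] args) {
        double parsecKm = 3.0857e13;             // 1 parsec in km
        double galaxyKm = 100_000 * parsecKm;    // ~3.09e18 km diameter
        double positions = galaxyKm / 0.1;       // distinct 0.1 km steps per axis
        double bits = Math.log(positions) / Math.log(2.0);
        // ~3.1e19 positions per axis, ~64.7 bits: beyond a long (64 bits)
        // and far beyond a double's 53-bit significand.
        System.out.printf("positions = %.2e, bits = %.1f%n", positions, bits);
    }
}
[/code]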

Other examples could be AIs that thrive on a 128-bit data type, or a 256-bit one.

Not to mention it would put a little pressure on the lazy hardware developers. ATI really annoyed me: 24-bit precision, yuck. Who would be happy with that for longer than two years? AMD wasn’t nice either, with the Barton core’s inability to use SSE2.

What’s wrong with BigInteger?

And if it’s for games/realtime, then the usual approach is a series of nested coordinate systems. Would you want to use the same coordinate to distinguish between the flight deck and the toilet in a spaceship that could travel right across a galaxy? Of course you wouldn’t; you’d have one for the position in space, and another relative to the inside of the ship.

The ships-moving-through-space example is the same thing on a different scale. It even lends itself to splitting up nicely based on individual solar systems and their relative placement.
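A minimal sketch of the idea, one axis only (the sector size and names are arbitrary):

[code]
public class NestedCoord {
    static final double SECTOR_KM = 1.0e12; // size of one coarse grid cell, in km

    long sector;   // coarse position: which cell of the galaxy-level grid
    double local;  // fine position within that cell, in km

    // Move along this axis, then renormalize so local stays in [0, SECTOR_KM).
    void move(double dKm) {
        local += dKm;
        long carry = (long) Math.floor(local / SECTOR_KM);
        sector += carry;
        local -= carry * SECTOR_KM;
    }

    public static void main(String[] args) {
        NestedCoord x = new NestedCoord();
        x.move(2.5e12); // crosses two sector boundaries
        System.out.println(x.sector + " + " + x.local + " km"); // 2 + 5.0E11 km
    }
}
[/code]

Each cell then only ever sees small local numbers, so plain doubles (or even floats) stay accurate within it.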

[quote]It’s fairly easy. You have a galaxy. It’s approximately 100 000 pc in diameter. You have a fleet. It’s moving from point A to point B. You have another one. It’s moving from point C to point D. Now you need to know if one fleet detects the other, and where that will happen… with a precision of 0.1 km.
[/quote]
Totally the wrong solution to the problem. You are much better off with multi-coordinate-space solutions to this (as outlined by OT).

It’s a bit like OOP: using multiple spaces at different resolutions has benefits similar to the encapsulation of code in objects. It also makes debugging a heck of a lot easier (you don’t have to keep mentally adding and subtracting numbers with terrifyingly many digits, and can instead work with small numbers appropriate to the space).

EDIT: you also get to do very handy stuff, like having some of your resolutions be integers. You still have float-level precision via the resolution below, yet you can work with integers and get much more accurate maths, etc. Note that 128-bit floating point (just like 32-bit and 64-bit) will STILL let me prove that 0 == 1, whereas integers won’t allow that to happen…

As it happens, we do this in Survivor: there are two levels of floats, giving a total of 64 bits of accuracy, and a level above that of ints that is only used by the level editor, for snap-to-grid etc.
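To see the 0 == 1 problem concretely (a toy example, magnitudes arbitrary):

[code]
public class Absorption {
    public static void main(String[] args) {
        double x = 1.0e20;
        // At this magnitude the spacing between adjacent doubles is ~16384,
        // so the 1.0 is absorbed and (x + 1) - x comes out as 0, not 1.
        System.out.println((x + 1.0) - x);  // 0.0

        // Integers are uniformly spaced, so (short of overflow) the
        // identity always holds.
        long y = 1_000_000_000_000L;
        System.out.println((y + 1) - y);    // 1
    }
}
[/code]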

It can also be considerably faster in processing to use multiple small spaces, and it can be extended to arbitrary levels of precision quite easily.

So I still don’t see a reason for 128 bits here :(. You didn’t give another actual reason, just mentioned the word “AI” without explaining why it might need 128 bits.

to Orangy Tang
Slow. I used it to calculate the square root of 2 by my own method to around 2496 decimal places. It took 78974 ms on a Celeron overclocked to 458 MHz.
The Babylonian method took around 71613 ms. I think it could be around 20000 ms after optimization.
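For reference, the Babylonian iteration with BigInteger looks roughly like this (a sketch of the method, not the code I timed):

[code]
import java.math.BigInteger;

public class BabylonianSqrt {
    // Integer square root by the Babylonian iteration x' = (x + n/x) / 2,
    // starting from a guess known to be >= sqrt(n).
    static BigInteger isqrt(BigInteger n) {
        BigInteger x = BigInteger.ONE.shiftLeft(n.bitLength() / 2 + 1);
        while (true) {
            BigInteger next = x.add(n.divide(x)).shiftRight(1);
            if (next.compareTo(x) >= 0) return x; // sequence stopped decreasing
            x = next;
        }
    }

    public static void main(String[] args) {
        int places = 2496;
        // sqrt(2) to `places` decimals == isqrt(2 * 10^(2 * places)).
        BigInteger scaled = BigInteger.valueOf(2).multiply(BigInteger.TEN.pow(2 * places));
        String digits = isqrt(scaled).toString();
        System.out.println(digits.charAt(0) + "." + digits.substring(1, 41) + "...");
    }
}
[/code]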

to blah^3
Err, what debugging? I already debugged that. I had two lines in 3D space, and another line connecting them. It worked well. No big numbers needed.
Also, I think a simple walk over a single array is much better than crazily dereferencing through several octree nodes, especially with low locality.
Of course, I’m currently unemployed and my math skills went who knows where.

Level editor? Where? This is a map of the galaxy.

Well, a decision for an AI could be important even if the difference is down at bit 79.
Imagine a function that feeds another one, which feeds another one: the errors would compound by a fairly large amount.
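A toy illustration of how such errors compound (the logistic map at r = 4 is chaotic, so differences in the lowest bits blow up through repeated application):

[code]
public class Compounding {
    public static void main(String[] args) {
        // Iterate f(x) = 4x(1 - x) in both float and double. They start from
        // nominally the same value, but rounding differences in the lowest
        // bits compound until the trajectories disagree completely.
        float xf = 0.3f;
        double xd = 0.3;
        for (int i = 0; i < 60; i++) {
            xf = 4f * xf * (1f - xf);
            xd = 4.0 * xd * (1.0 - xd);
        }
        System.out.println(xf + " vs " + xd); // wildly different after 60 steps
    }
}
[/code]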

OK, so you have a calculator. I thought you wanted to write games, though, which need to do a lot more than calculate a single basic item of 3D geometric algebra.

Sounds like premature optimization to me, i.e. until it appears in your profiler as a bottleneck, it’s probably not relevant.

You still can’t think of an actual reason, can you? Other than to say that if someone uses an algorithm that requires more than 64 bits of accuracy with 64-bit data, then it potentially won’t work.

What every single games programmer does is either use algorithms that do not require more accuracy, or make trivial adaptations to multi-coordinate spaces so that the algorithm no longer needs the additional accuracy.

This is not a reason! This is merely saying “if you can’t be bothered to do what everyone else does, then your code will break”.

It doesn’t matter that you use 128 bits - your game will still probably break if you insist on not using the appropriate algorithms; it’s just that it will take longer for you to notice :(.

Why stop at 128? Why not ask for 1024 bits?