I’m working on a memory-intensive program which uses lots of values that fit in an unsigned byte.
Because Java doesn’t support unsigned bytes (I never understood why…), I have to figure out which datatype suits best.
The next choice would be the next bigger datatype, char or short. But since I am working on 32-bit machines,
I am wondering if there is any memory advantage to using 16-bit datatypes instead of 32-bit ones (which I don’t need).
Does Java do some kind of optimization for 16-bit types?
Only if it is an array.
You can store unsigned bytes just fine in a regular byte[]; just make sure that whenever you use them, you first mask with & 0xFF to prevent the automatic widening cast from sign-extending the high bit.
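For example, a minimal sketch of that pattern (the class name and values are just for illustration):

    // Store "unsigned" values in a plain byte[] and mask with & 0xFF on read
    // so the high bit is not sign-extended when widening to int.
    public class UnsignedByteDemo {
        public static void main(String[] args) {
            byte[] values = new byte[2];
            values[0] = (byte) 200;   // stored internally as -56
            values[1] = (byte) 0xFF;  // stored internally as -1

            int first  = values[0] & 0xFF;  // 200, as intended
            int second = values[1] & 0xFF;  // 255, as intended
            System.out.println(first + " " + second);  // prints "200 255"
        }
    }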
As for the why: think of the tons of errors in C programs that can be traced back to conversions between signed and unsigned types.
Having said that, I do wish we had a “bitfield” type: an unsigned type that could not be assigned to signed types, or vice versa, except by very deliberate action.
But then Sun decided to be inconsistent and make char equivalent to an unsigned short. Presumably comparing Unicode code points with signed chars would have been a pain. The thing about the byte situation is that signed bytes are extremely unpopular: you almost never want a byte to be signed. Even when working with video data, where the chroma components are signed values, the bytes are typically stored as “offset binary” (signed values with 128 added to them so they are always positive)… and of course initializing bytes to values between 0x80 and 0xFF becomes a royal pain.
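To illustrate that last point, a small sketch of why the initialization gets painful (the variable names are just illustrative):

    byte ok = 0x7F;            // fine: 127 fits in a signed byte
    // byte bad = 0x80;        // does not compile: 128 is out of range for byte
    byte chroma = (byte) 0x80; // needs an explicit cast, and is stored as -128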
So now we have bugs caused by forgetting to do & 0xFF when converting bytes back to ints.
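For instance (again, names chosen just for illustration):

    byte b = (byte) 200;  // meant to represent 200
    int wrong = b;        // -56: sign-extended, the bug described above
    int right = b & 0xFF; // 200: masked, as intended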