That’s a bit of a misleading way to look at it, though. It’s not 24-bit vs 32-bit numbers you should be thinking about, but data types. And unless there is some dark corner of Java I’m oblivious to, there are no 24-bit data types to worry about. If the color values weren’t stored in 32-bit integers, they’d have to be split into three 8-bit values, or one 16-bit and one 8-bit value (which would just be weird). A 24-bit RGB value in a 32-bit data type (int) is just an RGBA value whose alpha component is ignored.
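To make that concrete, here’s a quick sketch of packing a 24-bit RGB triplet into an int using the same byte layout as `BufferedImage.TYPE_INT_RGB` (the method names here are hypothetical, just for illustration). The top 8 bits, where alpha would live in an ARGB value, are simply left at zero:

```java
public class Rgb {
    // Pack three 8-bit components into the low 24 bits of an int.
    static int pack(int r, int g, int b) {
        return (r & 0xFF) << 16 | (g & 0xFF) << 8 | (b & 0xFF);
    }

    // Extract individual components by shifting and masking.
    static int red(int rgb)   { return (rgb >> 16) & 0xFF; }
    static int green(int rgb) { return (rgb >> 8) & 0xFF; }
    static int blue(int rgb)  { return rgb & 0xFF; }

    public static void main(String[] args) {
        int c = pack(0x12, 0x34, 0x56);
        System.out.println(Integer.toHexString(c)); // prints "123456"
    }
}
```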
The argument does hold true to some extent for lower bit depths, though. A 16-bit color value is more efficiently manipulated in the native word size (32-bit or 64-bit) than as a short.
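This matters in Java in particular because arithmetic on `short` is promoted to `int` anyway, so keeping the value in an int avoids constant casts. A sketch, assuming RGB565 as the 16-bit format (the method names are mine, not from any library):

```java
public class Rgb565 {
    // RGB565 layout: 5 bits red, 6 bits green, 5 bits blue.
    // All arithmetic stays in int; no casting back to short needed.
    static int red(int rgb565)   { return (rgb565 >> 11) & 0x1F; }
    static int green(int rgb565) { return (rgb565 >> 5) & 0x3F; }
    static int blue(int rgb565)  { return rgb565 & 0x1F; }

    public static void main(String[] args) {
        // White in RGB565 is 0xFFFF: all component fields maxed out.
        System.out.println(red(0xFFFF));   // prints "31"
        System.out.println(green(0xFFFF)); // prints "63"
    }
}
```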
[quote]and also I read that not using the 8 bits at the end of the 24 bits (because modern registers are 32 bit) is a waste of memory.
[/quote]
Which in practice is really no concern. It’s not like you have much of a choice for 24-bit color values, anyway. Either you use a tightly-packed array of bytes where each byte represents one component of an RGB triplet, or you use an array of ints where each int represents a full 24-bit color value with 8 bits unused. There are performance vs. memory trade-offs for both approaches, but I don’t see that either would be practically relevant in any but the most extreme circumstances (such as a heavily resource-constrained system – in which case, what the hell are you running a JVM for in the first place?). If I were implementing a graphics package myself, I’d certainly use int to represent 24-bit color values unless I knew the more common use case would be manipulating individual color components rather than full color values.
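For anyone curious what the two layouts actually look like in code, here’s a sketch of both (method and variable names are mine, purely illustrative):

```java
public class PixelStorage {
    // Layout 1: tightly-packed bytes, 3 per pixel. No wasted memory,
    // but reading a full color needs three loads plus sign-extension masks.
    static int getPixelFromBytes(byte[] data, int i) {
        int base = i * 3;
        return (data[base] & 0xFF) << 16
             | (data[base + 1] & 0xFF) << 8
             | (data[base + 2] & 0xFF);
    }

    // Layout 2: one int per pixel. 8 unused bits per pixel, but a full
    // color value is a single aligned load -- simpler and usually faster.
    static int getPixelFromInts(int[] data, int i) {
        return data[i] & 0x00FFFFFF; // mask off the ignored "alpha" byte
    }

    public static void main(String[] args) {
        byte[] packed = { 0x12, 0x34, 0x56 };
        int[] ints = { 0x7F123456 }; // junk in the top byte gets masked
        System.out.println(Integer.toHexString(getPixelFromBytes(packed, 0))); // prints "123456"
        System.out.println(Integer.toHexString(getPixelFromInts(ints, 0)));    // prints "123456"
    }
}
```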