It seems that if I use anything other than 8 bits per colour channel, I get much worse performance with DirectColorModel and MemoryImageSource in my applet. For example, if I wanted more than 8 bits per channel in a 32-bit mode, I'd do something like this:
new DirectColorModel(32, 0x7fe00000, 0x001ffc00, 0x000003ff)
This gives 10.11.10, which uses 31 of the 32 bits. Displaying through that model is much slower than the standard 8 bits per colour. If I were to guess, I'd say it has something to do with the fact that when the producer requests the RGB values, the colour model has to scale everything back to the 0-255 range first if you don't use the standard 8-bit masks. Funny thing is, it doesn't seem to happen in 1.1, only in later versions, especially 1.4. Can anybody back me up on that? I also wonder whether you could overcome this by writing your own colour model?
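For reference, here's roughly the setup I mean; the class name, size, and fill pattern are just placeholders, but it pushes frames through MemoryImageSource in animated mode the way my applet does:

import java.applet.Applet;
import java.awt.Graphics;
import java.awt.Image;
import java.awt.image.DirectColorModel;
import java.awt.image.MemoryImageSource;

public class ModelTest extends Applet {
    static final int W = 320, H = 240;   // placeholder size

    private final int[] pixels = new int[W * H];
    private MemoryImageSource source;
    private Image image;
    private int frame;

    public void init() {
        // 10.11.10: red in bits 21-30, green in bits 10-20, blue in bits 0-9
        DirectColorModel cm =
            new DirectColorModel(32, 0x7FE00000, 0x001FFC00, 0x000003FF);
        // Swap in the standard 8.8.8 masks to compare:
        // new DirectColorModel(32, 0x00FF0000, 0x0000FF00, 0x000000FF);

        source = new MemoryImageSource(W, H, cm, pixels, 0, W);
        source.setAnimated(true);        // we'll be pushing frames
        image = createImage(source);
    }

    public void update(Graphics g) {
        paint(g);                        // skip the background clear
    }

    public void paint(Graphics g) {
        // Any pattern will do, written in the model's own packed layout.
        for (int i = 0; i < pixels.length; i++) {
            pixels[i] = (i + frame) & 0x7FFFFFFF;
        }
        frame++;
        source.newPixels();              // push the updated frame
        g.drawImage(image, 0, 0, this);
        repaint();                       // crude animation loop
    }
}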
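On that last question: DirectColorModel marks its per-component getters final, so as far as I can tell you'd have to extend ColorModel itself. Something like the sketch below (the class name is made up) swaps the per-pixel scaling for table lookups. Whether the toolkit actually goes through getRGB() on the slow path is exactly what I'm unsure about, so treat this as an experiment rather than a verified fix:

import java.awt.image.ColorModel;

// Hand-rolled 10.11.10 colour model backed by precomputed
// component tables. ColorModel's getRed/getGreen/getBlue/getAlpha
// are abstract, so we can implement them however we like and
// override getRGB() as well.
public class Packed101110ColorModel extends ColorModel {
    private final int[] red10   = new int[1024]; // 10-bit -> 8-bit
    private final int[] green11 = new int[2048]; // 11-bit -> 8-bit
    private final int[] blue10  = new int[1024]; // 10-bit -> 8-bit

    public Packed101110ColorModel() {
        super(32);
        for (int i = 0; i < 1024; i++) {
            red10[i] = (i * 255) / 1023;
            blue10[i] = red10[i];
        }
        for (int i = 0; i < 2048; i++) {
            green11[i] = (i * 255) / 2047;
        }
    }

    public int getRed(int pixel)   { return red10[(pixel >> 21) & 0x3FF]; }
    public int getGreen(int pixel) { return green11[(pixel >> 10) & 0x7FF]; }
    public int getBlue(int pixel)  { return blue10[pixel & 0x3FF]; }
    public int getAlpha(int pixel) { return 255; } // opaque

    // Assemble the default sRGB value from the tables,
    // with no per-call division.
    public int getRGB(int pixel) {
        return 0xFF000000
            | (getRed(pixel) << 16)
            | (getGreen(pixel) << 8)
            | getBlue(pixel);
    }
}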
Abs