In JCD’s WaterCubeMap demo he loads the images with a class called ImageLoader to create BufferedImage objects, but he manually converts the raw image data before creating the ImageComponent2D object he passes along. Why? I tried just passing back the BufferedImage unaltered, and it came out looking dark bluish.
Looking through some of his other examples, I noticed that he has two different versions of the ImageLoader class: one that does the conversion, and one that doesn’t. The version without the conversion appears in his TextureCubeMap example. I’m beginning to suspect this has to do with image formats, and the two programs do in fact use different formats: WaterCubeMap uses JPGs, and TextureCubeMap uses PNGs.
Is this the reason the byte-wise manipulation was necessary (because they’re JPG files)? If so, why is it necessary? Or is the actual reason he’s converting them something else entirely?
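For what it’s worth, here’s a small standalone sketch I wrote while poking at this (my own code, not JCD’s; the class and method names are mine). I believe the standard JPEG reader tends to decode into `BufferedImage.TYPE_3BYTE_BGR`, whose backing raster stores each pixel blue-first. If those raw bytes were then read as if they were RGB, red and blue would swap, which would explain a bluish cast:

```java
import java.awt.image.BufferedImage;
import java.awt.image.DataBufferByte;

public class BgrDemo {
    // Returns the raw data-buffer bytes of a single pure-red pixel stored
    // in a TYPE_3BYTE_BGR image (the layout JPEG decoding typically yields).
    static byte[] redPixelBytes() {
        BufferedImage img = new BufferedImage(1, 1, BufferedImage.TYPE_3BYTE_BGR);
        img.setRGB(0, 0, 0xFF0000); // pure red
        return ((DataBufferByte) img.getRaster().getDataBuffer()).getData();
    }

    public static void main(String[] args) {
        byte[] p = redPixelBytes();
        // The bytes come out blue-first: [B, G, R] = [0, 0, 0xFF].
        // Interpreted naively as [R, G, B], pure red would become pure blue.
        System.out.println(p[0] + " " + p[1] + " " + (p[2] & 0xFF));
    }
}
```

So my guess is that the conversion is swapping the channel order into whatever layout ImageComponent2D expects, but I’d like confirmation from someone who knows the code.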
Any help understanding this would be greatly appreciated.
