ImageIO reading problem: Always get DataBufferByte and not DataBufferInt


  1. Create any PNG image file of size 8x8. Name it whatever you want. (This example will use “ship.png” as the file name.) The first row must be entirely filled with transparent color.
  2. Create an empty Java project.
  3. Create a class folder in your project.
  4. Put your PNG image in the class folder, and set the build path of your project to reference the class folder.
  5. Copy the following code:

	public static void main(String[] arg) {
		int[] pixels = new int[64];
		try {
			BufferedImage img = ImageIO.read(Main.class.getResourceAsStream("/ship.png"));
			pixels = img.getRGB(0, 0, 8, 8, pixels, 0, 0);
		} catch (IOException e) {
			e.printStackTrace();
		}
	}
  6. Check the pixels array. You can see that it fills up only the first 8 elements. The rest of the array is all zeros (an int[] cannot hold null).

I don’t get exactly why this line of code:
[icode]pixels = img.getRGB(0, 0, 8, 8, pixels, 0, 0);[/icode]
gives me a BufferedImage.TYPE_3BYTE_BGR image, and not BufferedImage.TYPE_INT_ARGB like the documentation says. If I were to cast the data buffer that I obtain from:
[icode]pixels = ((DataBufferInt) img.getRaster().getDataBuffer()).getData();[/icode]
it would always be a DataBufferByte and not a DataBufferInt, because of the above problem.

Can anyone solve this mystery?
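For reference, the same behaviour shows up without any external file if you round-trip a blank PNG through ImageIO in memory (a sketch; the class name is made up, and ImageIO makes no promises about which raster type the decoder picks):

```java
import java.awt.image.BufferedImage;
import java.awt.image.DataBuffer;
import java.awt.image.DataBufferInt;
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import javax.imageio.ImageIO;

public class ImageIoCheck {
    public static void main(String[] args) throws IOException {
        // Start from an INT_ARGB image, write it out as PNG...
        BufferedImage src = new BufferedImage(8, 8, BufferedImage.TYPE_INT_ARGB);
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        ImageIO.write(src, "png", out);

        // ...and read it back. The decoded image is NOT INT_ARGB;
        // the stock PNG reader typically gives a byte-backed raster,
        // hence the ClassCastException when casting to DataBufferInt.
        BufferedImage decoded =
                ImageIO.read(new ByteArrayInputStream(out.toByteArray()));
        DataBuffer buf = decoded.getRaster().getDataBuffer();
        System.out.println(buf instanceof DataBufferInt);
    }
}
```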


Image proof:

The ship.png:

Not certain, but was the image saved as a 24-bit PNG? Try re-saving it as 32-bit; then it should load up as INT_ARGB.

Well I think you have confused the issue somewhat:

pixels = img.getRGB(0, 0, 8, 8, pixels, 0, 0);

Obviously does not return a DataBuffer… It returns exactly what it states in the API:

“Returns an array of integer pixels in the default RGB color model (TYPE_INT_ARGB) and default sRGB color space, from a portion of the image data.”

Obviously a translation from the image instance’s underlying ColourModel to the default ColourModel is performed automatically. Note that the default model is TYPE_INT_ARGB… i.e. each returned int is packed ARGB, alpha included.

When you use

[icode]img.getRaster().getDataBuffer()[/icode]

you get whatever DataBuffer ImageIO chose when reading the image file. There are no guarantees on which model it uses. In this case it uses a DataBufferByte.

To get ARGB pixel data you need to create a separate BufferedImage of BufferedImage.TYPE_INT_ARGB, paint the original loaded image into this new image and then you can do:

pixels = ((DataBufferInt) newImg.getRaster().getDataBuffer()).getData();
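Something like this (a sketch; the helper name toIntArgb and the class wrapper are just for illustration):

```java
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;
import java.awt.image.DataBufferInt;

public class ArgbConvert {
    // Copy any BufferedImage into a TYPE_INT_ARGB image so that its
    // backing buffer is guaranteed to be a DataBufferInt.
    static BufferedImage toIntArgb(BufferedImage src) {
        BufferedImage dst = new BufferedImage(
                src.getWidth(), src.getHeight(), BufferedImage.TYPE_INT_ARGB);
        Graphics2D g = dst.createGraphics();
        g.drawImage(src, 0, 0, null);  // paint the original into the ARGB copy
        g.dispose();
        return dst;
    }

    public static void main(String[] args) {
        // Stand-in for whatever ImageIO loaded (here a byte-backed image).
        BufferedImage loaded = new BufferedImage(8, 8, BufferedImage.TYPE_3BYTE_BGR);
        BufferedImage newImg = toIntArgb(loaded);
        // Now the cast is safe: the raster is int-backed.
        int[] pixels = ((DataBufferInt) newImg.getRaster().getDataBuffer()).getData();
        System.out.println(pixels.length); // 64
    }
}
```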

If you’re trying to put image data into the int[] pixels array, then you should pass image.getWidth() as the scansize. Since you’re passing 0 as scansize, every row is written starting at the same array offset, so the rows overwrite each other and only the first 8 elements ever get filled.

		try {
			BufferedImage image = ImageIO.read(Main.class.getResourceAsStream("/image.png"));
			int[] pixels = image.getRGB(0, 0, image.getWidth(), image.getHeight(), null, 0, image.getWidth());
		} catch (IOException e) {
			e.printStackTrace();
		}

That might fix it… Idk for sure.
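A quick self-contained check of the scansize point, using an in-memory image instead of the file (the class name here is illustrative):

```java
import java.awt.image.BufferedImage;

public class ScansizeDemo {
    public static void main(String[] args) {
        BufferedImage img = new BufferedImage(8, 8, BufferedImage.TYPE_INT_ARGB);
        // Fill every pixel with opaque white so we can see which
        // array slots getRGB actually writes to.
        for (int y = 0; y < 8; y++)
            for (int x = 0; x < 8; x++)
                img.setRGB(x, y, 0xFFFFFFFF);

        // Broken call from the original post: scansize == 0 means every
        // row starts at the same array offset, so rows overwrite each other.
        int[] broken = img.getRGB(0, 0, 8, 8, new int[64], 0, 0);
        // Correct call: scansize == image width, one row per 8 entries.
        int[] fixed = img.getRGB(0, 0, 8, 8, new int[64], 0, 8);

        System.out.println(broken[8]); // 0  -- slot never written
        System.out.println(fixed[8]);  // -1 -- 0xFFFFFFFF as a signed int
    }
}
```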

Oh God, why…? That’s the problem. Thanks for pointing that out for me. Programming alone does have its “quirks.”

It’s not a quirk, it’s the way the function works. Read the docs.

I was referring to myself not being able to spot the obvious error I had, even when I’m reading the documentation and checking the code numerous times. :stuck_out_tongue: