BufferedImage and DataBuffer: Questions

I’ve got another set of questions that you can help to answer - this is regarding BufferedImages in particular:

  1. How are the pixel samples organized in a DataBuffer that was retrieved from a BufferedImage object created using the constructor BufferedImage(int width, int height, int type)? For example, how are the pixels organized for a BufferedImage created with TYPE_INT_ARGB? If I wanted to access the pixel at the 5th row and 10th column in a rectangle of width x height pixels, is it simply like this:
    pixelAt[4 * width + 9] ?

  2. Are there documents that describe what implementations of ColorModel, SampleModel and DataBuffer objects are automatically created for the BufferedImage object when a particular type (ie, BufferedImage.TYPE_INT_ARGB) is passed in to the constructor?

  3. If I want to create a BufferedImage with only BITMASK transparency using the constructor BufferedImage(int width, int height, int type), what should I pass in as the type? If I should use TYPE_INT_ARGB to simulate BITMASK transparency, would that be considered as a full-alpha image (like images created with Transparency.TRANSLUCENT) in terms of performance during rendering?

  4. Is getting the data array from the DataBuffer of a BufferedImage the quickest way to manipulate and render bitmaps using Java 1.4.2?

Also, I forgot to thank Abuse, Onyx and Trembovetski for helping me out with answers for my last post. So, here you go guys - thanks.

[quote]I’ve got another set of questions that you can help to answer - this is regarding BufferedImages in particular:
[/quote]
I’ll have a go at the ones I think I can answer.

Yes - TYPE_INT_ARGB uses a DataBufferInt, which organises the pixels in row-major order (row-major is indeed the right term), so your 4 * width + 9 indexing is correct.
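A quick sketch to sanity-check that indexing (the width/height and colour value here are arbitrary):

```java
import java.awt.image.BufferedImage;
import java.awt.image.DataBufferInt;

public class PixelIndexDemo {
    public static void main(String[] args) {
        int width = 20, height = 10;
        BufferedImage img =
            new BufferedImage(width, height, BufferedImage.TYPE_INT_ARGB);

        // write the pixel at row 5, column 10 (zero-based: y = 4, x = 9)
        img.setRGB(9, 4, 0xFFABCDEF);

        // grab the backing int array - one int per pixel, row after row
        int[] pixels = ((DataBufferInt) img.getRaster().getDataBuffer()).getData();

        // index = y * width + x
        System.out.println(pixels[4 * width + 9] == 0xFFABCDEF); // prints true
    }
}
```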

[quote]2. Are there documents that describe what implementations of ColorModel, SampleModel and DataBuffer objects are automatically created for the BufferedImage object when a particular type (ie, BufferedImage.TYPE_INT_ARGB) is passed in to the constructor?
[/quote]
Dunno about documentation, but :-

System.out.println(bufferedImage.getColorModel());

will give you a description of the various masks for a given ColorModel, and bufferedImage.getColorModel().getPixelSize() gives you the BPP (note it’s the ColorModel, not the BufferedImage, that has getPixelSize()).
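If you want the concrete classes too, a small probe like this prints them all at once (for TYPE_INT_ARGB you should see DirectColorModel, SinglePixelPackedSampleModel and DataBufferInt):

```java
import java.awt.image.BufferedImage;

public class ModelProbe {
    public static void main(String[] args) {
        BufferedImage img = new BufferedImage(32, 32, BufferedImage.TYPE_INT_ARGB);

        // the concrete implementations picked for this image type
        System.out.println(img.getColorModel().getClass().getName());
        System.out.println(img.getSampleModel().getClass().getName());
        System.out.println(img.getRaster().getDataBuffer().getClass().getName());
        System.out.println(img.getColorModel().getPixelSize()); // bits per pixel
    }
}
```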

I don’t think BITMASK transparency is covered by any of the predefined types, so you would have to use

BufferedImage(ColorModel cm, WritableRaster raster, boolean isRasterPremultiplied, Hashtable properties)

where cm would be something like :-

new DirectColorModel(24+1, 0xFF0000, 0xFF00, 0xFF, 0x1000000)
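Putting that together into a full image looks something like the sketch below. Whether Java actually treats (or accelerates) this as Transparency.BITMASK is exactly the question discussed further down, so the sketch only checks the colormodel itself:

```java
import java.awt.image.BufferedImage;
import java.awt.image.DirectColorModel;
import java.awt.image.WritableRaster;

public class BitmaskSketch {
    public static void main(String[] args) {
        // 24 colour bits + 1 alpha bit, packed into ints
        DirectColorModel cm =
            new DirectColorModel(25, 0xFF0000, 0xFF00, 0xFF, 0x1000000);

        WritableRaster raster = cm.createCompatibleWritableRaster(64, 64);
        BufferedImage img =
            new BufferedImage(cm, raster, cm.isAlphaPremultiplied(), null);

        System.out.println(cm.getPixelSize()); // 25
        System.out.println(cm.hasAlpha());     // true
        // whether img.getTransparency() reports BITMASK here is the open
        // question, so it isn't checked
        System.out.println(img.getWidth());    // 64
    }
}
```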

[quote]If I should use TYPE_INT_ARGB to simulate BITMASK transparency, would that be considered as a full-alpha image (like images created with Transparency.TRANSLUCENT) in terms of performance during rendering?
[/quote]
I would expect that yes, it would. The BITMASK colormodel above uses 25BPP, not 32BPP.

[quote]4. Is getting the data array from the DataBuffer of a BufferedImage the quickest way to manipulate and render bitmaps using Java 1.4.2?
[/quote]
It depends what you are doing, but in the general case I would say no, as grabbing a BufferedImage’s DataBuffer prevents the JVM from caching it in vram (i.e. creating a managed image).

If you are doing per-pixel rendering, then writing to a BufferedImage’s DataBuffer is the fastest way of manipulating individual pixels.

[quote]Also, I forgot to thank Abuse, Onyx and Trembovetski for helping me out with answers for my last post. So, here you go guys - thanks.
[/quote]
Thanks for saying thanks, though it’s quite unnecessary :smiley:
I enjoy answering questions simply because most of the time it expands my own knowledge as well (and of course corrects me when I get it wrong) :wink:

Okay. But we are assuming here that the ColorModel is an instance of DirectColorModel/IndexColorModel (which probably always implies a SinglePixelPackedSampleModel) for all BufferedImage types.

Are DirectColorModels/IndexColorModels always used for ALL BufferedImage types?

Would the underlying graphics renderer in Java2D know how to render transparent pixels given a BufferedImage using that DirectColorModel?

For example, if the 25th bit in a pixel sample is a 1, would the renderer be able to render that pixel as opaque, instead of mistaking it for an almost-transparent pixel (ie, an alpha value of 1/255)?

How would the renderer draw a BufferedImage with this DirectColorModel(24+2, 0xFF0000, 0xFF00, 0xFF,0x2000000)?

The javadoc gives a short description of what each of the pre-defined image types is.
TYPE_USHORT_GRAY uses a ComponentColorModel, as does TYPE_BYTE_GRAY.

[quote]Would the underlying graphics renderer in Java2D know how to render transparent pixels given a BufferedImage using that DirectColorModel?
[/quote]
I don’t see why not - it knows how many bits you are using for each channel (r, g, b & a), and knows which bits within each sample correspond to which channel.

As to whether it does it efficiently, I don’t know - it would have to recognise the specific arrangement of bit masks, and know that it is compatible with a supported hardware rendering path.
1.4.x doesn’t do this, as new BufferedImages are never accelerated. 1.5.x does, so I would assume they’ve added the code for associating ColorModel mask settings with specific rendering paths.

[quote]For example, if the 25th bit in a pixel sample is a 1, would the renderer be able to render that pixel as opaque instead of mistaking it for almost transparent pixel (ie alpha value of 1/255)?
[/quote]
In the ColorModel constructor you are not only telling it which bits correspond to which colors, you are also telling it how many bits are used to represent the color.

So, if you tell it the alpha mask is 0x1000000, it knows the only possible alpha values are 0 and 1, and that the 25th bit alone contains the value.
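You can check that claim directly on the colormodel: getAlpha() scales the 1-bit component up to the usual 0-255 range, so a raw alpha of 1 reads back as fully opaque, not as 1/255:

```java
import java.awt.image.DirectColorModel;

public class OneBitAlphaDemo {
    public static void main(String[] args) {
        // the 25bpp colormodel from earlier: 1-bit alpha in bit 24
        DirectColorModel cm =
            new DirectColorModel(25, 0xFF0000, 0xFF00, 0xFF, 0x1000000);

        // getAlpha() scales the component to 0-255,
        // so raw alpha 1 comes back as 255 (fully opaque), not 1
        System.out.println(cm.getAlpha(0x1FFFFFF)); // 255
        System.out.println(cm.getAlpha(0x0FFFFFF)); // 0
    }
}
```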

[quote]How would the renderer draw a BufferedImage with this DirectColorModel(24+2, 0xFF0000, 0xFF00, 0xFF,0x2000000)?
[/quote]
I’m not sure what it would do in that instance, since the number of bits set in the 4 masks (8+8+8+1 = 25) does not match the number of bpp you are specifying (24+2 = 26).

I think you meant :-

DirectColorModel(24+2, 0xFF0000, 0xFF00, 0xFF, 0x3000000)

This would work, and would describe a colormodel with 2-bit transparency, giving 4 possible alpha values: 0%, 33%, 66% and 100%.
I doubt very much this would be accelerated, even in 1.5, as it isn’t a common format, so is unlikely to have a specific rendering path defined for it.
It would probably be converted to full ARGB at render time. (whether it would then be accelerated the way a TRANSLUCENT image is, I don’t know)
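For what it’s worth, those four alpha levels can be read straight off the colormodel; the raw 2-bit values 0..3 scale to 0, 85, 170 and 255:

```java
import java.awt.image.DirectColorModel;

public class TwoBitAlphaDemo {
    public static void main(String[] args) {
        // 24 colour bits + 2 alpha bits (mask 0x3000000), 26bpp total
        DirectColorModel cm =
            new DirectColorModel(26, 0xFF0000, 0xFF00, 0xFF, 0x3000000);

        // getAlpha() scales each raw value 0..3 into the 0-255 range
        for (int a = 0; a < 4; a++) {
            System.out.println(cm.getAlpha(a << 24));
        }
    }
}
```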

Funny. I could have sworn that it wasn’t there when I looked just now ;D.

So, knowing the ColorModel and the SampleModel used by the Raster, we can work out how the pixels are organized in the DataBuffer. I suppose that to find the SampleModel used, we’d have to get the Raster to hand it over, type-check it, cast it to the appropriate class, and then get it to cough up its details.
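That check-and-cast dance looks something like this (sketch for TYPE_INT_ARGB, where the SampleModel should be a SinglePixelPackedSampleModel):

```java
import java.awt.image.BufferedImage;
import java.awt.image.SampleModel;
import java.awt.image.SinglePixelPackedSampleModel;

public class SampleModelProbe {
    public static void main(String[] args) {
        BufferedImage img = new BufferedImage(64, 48, BufferedImage.TYPE_INT_ARGB);

        SampleModel sm = img.getRaster().getSampleModel();
        if (sm instanceof SinglePixelPackedSampleModel) {
            SinglePixelPackedSampleModel sp = (SinglePixelPackedSampleModel) sm;
            // array elements per row (equals the width here, but needn't in general)
            System.out.println(sp.getScanlineStride()); // 64
            // per-band masks, R,G,B,A order for TYPE_INT_ARGB
            System.out.println(Integer.toHexString(sp.getBitMasks()[3])); // ff000000
        }
    }
}
```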

Hmm. So you’re saying the java2d renderer would:

  1. Given the number of bits in the alpha channel, and the masks, calculate the range of alpha values to render. So the value 1 in a 1-bit alpha channel, for example, would be rendered at 100% alpha, and the value 0 at 0%.

  2. And it would recognise so-called “standard” alpha channel formats to take advantage of optimizations. For example, a 1-bit alpha would be recognised as a BITMASK-type image and an 8-bit alpha would be recognised as TRANSLUCENT type.

I really hope #2 is correct. Specifically, a 1-bit transparent BufferedImage created directly from the constructor should be rendered as fast as a software-rendered BITMASK-type BufferedImage created using createCompatibleImage(), compatibility issues aside.

Yes, this was what I meant actually.

I’m crossing my fingers that 1-bit transparent BufferedImages get recognized as BITMASK type. I’ll be screaming if they render as slowly as full ARGB.

To points #1 and #2, basically, yes.

The optimisation is normally to cache it in vram, and use hardware operations to blit it to wherever you want it.

[quote]as fast as a software rendered BITMASK type BufferedImage created using createCompatibleImage()
[/quote]
hardware rendered

Note that all I’ve said is applicable only to what we’ve been told about 1.5 (that images created from the BufferedImage constructor will be eligible for hardware acceleration).
1.4.x does not accelerate ANY images created with the BufferedImage constructor.

What I am unsure of, however, is whether images using strange ColorModels (such as the 2-bit alpha discussed above) are expanded to ARGB and hardware accelerated through the same code path as TRANSLUCENT images, or whether they will simply not be accelerated at all.
I would hope for the former case - but obviously the latter would involve less work for the coders :slight_smile:

There’s a lot in this thread for trembovetski (damn it, I have to check the spelling every time!) to [comment on|correct] :smiley:

Nah, I’m manipulating my images per-pixel through the use of a DataBuffer, and since I’ve inevitably tripped the “rasterStolen” boolean that punts my image from VRAM, I’m just hoping for performance similar to the software-rendered BITMASK image speed. Assuming, of course, that the image created with the constructor was even “managed” into VRAM in the first place.

Yes.

I’d be happy if the renderer interpolated the color range (for example, instead of 8888 or 4444 color formats, I could use 1234 or something really wacky) and recognised commonly used ones, so 1888 would be recognised as a Transparency.BITMASK-type image and use the same optimized pipeline.

What I (and everybody else) would really want are TRANSLUCENT VolatileImages with pixels that can be tweaked directly in VRAM. Woohoo! :wink:

Ah right, then yes, I would definitely expect the two to render at identical speeds, since from the perspective of the software rendering code they are identical.

[quote] What I (and everybody else) would really want are TRANSLUCENT VolatileImages with pixels that can be tweaked directly in VRAM. Woohoo!
[/quote]
I don’t think that would give any speed gain at all; in fact the bus bandwidth between main memory and the cpu is far greater than between the gfx card’s memory and the cpu.

So if you were manipulating the raster a lot (i.e. thousands of times a second, as you do when per-pixel rendering), it would be better to have it in main memory, not vram.

The major culprit for slow per-pixel rendering is the array bounds checking that Java does; use a VM that eliminates this and the speed increase will be dramatic.

Hmmm. Is it at all possible to do per-pixel manipulation entirely using the graphics hardware?

A bit OT but:

I’ve read somewhere that array-bounds checking is removed when the array is accessed using an index that is incremented sequentially in certain loops in the current JVM.

Mostly, I’ll try to avoid random access in arrays to take advantage of this optimization.
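i.e. the pattern to aim for is a single forward pass over the backing array (a sketch; the image size is arbitrary):

```java
import java.awt.image.BufferedImage;
import java.awt.image.DataBufferInt;

public class SequentialFill {
    public static void main(String[] args) {
        BufferedImage img = new BufferedImage(320, 240, BufferedImage.TYPE_INT_ARGB);
        int[] pixels = ((DataBufferInt) img.getRaster().getDataBuffer()).getData();

        // one forward pass from 0 to length-1: the access pattern the JIT
        // can prove in-bounds, so the per-element bounds check can go away
        for (int i = 0; i < pixels.length; i++) {
            pixels[i] = 0xFF000000; // opaque black
        }

        System.out.println(pixels[pixels.length - 1] == 0xFF000000); // prints true
    }
}
```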

You are calculating the pixel color in your code, and your code is executing on the cpu. The operation to change the pixel color has to be communicated from the cpu (where it is calculated) to wherever the pixel is stored.
In terms of bandwidth, main memory is a lot closer to the cpu than vram.

If you could upload code to the graphics card’s processor to perform arbitrary operations on a per-pixel basis, then yes, you could do it in vram. (think along the lines of pixel shaders ;))

[quote]A bit OT but:

I’ve read somewhere that array-bounds checking is removed when the array is accessed using an index that is incremented sequentially in certain loops in the current JVM.

Mostly, I’ll try to avoid random access in arrays to take advantage of this optimization.
[/quote]
Yeah, I’m unsure of whether it is done as part of jit/hotspot or if it’s a serverVM-only thing. That’s why my original comment on it was so vague :wink:
I think it’s a -server optimization, but don’t quote me on that :stuck_out_tongue: