Well no, it would just perform the AlphaComposite operation incredibly slowly.
There are 3 possible scenarios:
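For reference, the operation being discussed is a per-pixel alpha blend of one image onto another. A minimal sketch of what that looks like in Java2D (the image names here are just placeholders):

```java
import java.awt.AlphaComposite;
import java.awt.Color;
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;

public class CompositeSketch {
    public static void main(String[] args) {
        // Two images standing in for the back buffer and a sprite.
        BufferedImage backBuffer = new BufferedImage(64, 64, BufferedImage.TYPE_INT_RGB);
        BufferedImage sprite = new BufferedImage(16, 16, BufferedImage.TYPE_INT_ARGB);

        Graphics2D sg = sprite.createGraphics();
        sg.setColor(Color.WHITE);
        sg.fillRect(0, 0, 16, 16);
        sg.dispose();

        Graphics2D g = backBuffer.createGraphics();
        g.setColor(Color.BLACK);
        g.fillRect(0, 0, 64, 64);
        // SRC_OVER with a non-opaque extra alpha forces a true per-pixel
        // blend - the (normally unaccelerated) operation discussed here.
        g.setComposite(AlphaComposite.getInstance(AlphaComposite.SRC_OVER, 0.5f));
        g.drawImage(sprite, 10, 10, null);
        g.dispose();

        // White blended at 50% over black lands at roughly half intensity.
        int red = (backBuffer.getRGB(12, 12) >> 16) & 0xFF;
        System.out.println("blended red component = " + red);
    }
}
```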
- The back buffer is in vram, and the image you are drawing exists ONLY in vram (i.e. both images are VolatileImages).
This is the worst scenario, because Java has to read back both a portion of the back buffer AND the image you are drawing into main memory.
It can then perform the software composite operation.
Finally, it copies the modified portion of the back buffer back into vram.
As you can see, you’ve got 3 EXTRA copies on top of the AlphaComposite operation.
- The back buffer is in vram, and the image you are drawing exists in main memory [and possibly vram] (i.e. the back buffer is a VolatileImage, and the image is either an automatic image or a regular unacceleratable image).
This scenario is not so bad.
All it has to do is read back a portion of the back buffer, use the version of the image that is in main memory, perform the composite operation, and copy the result back to vram.
That’s 2 extra copies on top of the AlphaComposite operation.
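For clarity, the three image flavours these scenarios keep referring to are created roughly like this (a sketch; the GraphicsConfiguration plumbing is assumed, and the vram-caching behaviour described in the comments is the 1.4.x pipeline's):

```java
import java.awt.GraphicsConfiguration;
import java.awt.GraphicsEnvironment;
import java.awt.Transparency;
import java.awt.image.BufferedImage;
import java.awt.image.VolatileImage;

public class ImageFlavours {
    public static void main(String[] args) {
        // A plain BufferedImage: lives in main memory only, and the 1.4.x
        // pipeline never accelerates it ("regular unacceleratable image").
        BufferedImage plain = new BufferedImage(64, 64, BufferedImage.TYPE_INT_ARGB);
        System.out.println("plain image: " + plain.getWidth() + "x" + plain.getHeight());

        // The accelerated flavours need a screen device, so guard for headless.
        if (!GraphicsEnvironment.isHeadless()) {
            GraphicsConfiguration gc = GraphicsEnvironment
                    .getLocalGraphicsEnvironment()
                    .getDefaultScreenDevice()
                    .getDefaultConfiguration();

            // An "automatic image": a main-memory image the runtime may
            // also cache in vram behind your back.
            BufferedImage auto = gc.createCompatibleImage(64, 64, Transparency.TRANSLUCENT);

            // A VolatileImage: lives ONLY in vram (its contents can be lost).
            VolatileImage vol = gc.createCompatibleVolatileImage(64, 64);
            System.out.println("created " + auto + " and " + vol);
        }
    }
}
```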
- Both the back buffer and image are in main memory.
(i.e. you are NOT using BufferStrategy for the back buffer, and the image is either an automatic image or a normal unacceleratable image)
In this scenario no read-back from vram needs to be performed; the composite operation is done, and then at the end of each frame the back buffer is copied into vram.
You could count that as either ZERO copies or 1 copy.
But the 1 copy is only done once per frame, so it’s a fixed overhead (whereas the copies done in the previous scenarios had to be performed per drawing operation).
So as you can see, the 3rd scenario is going to be a lot quicker when performing unaccelerated drawing operations (specifically compositions).
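The 3rd scenario sketched in code: the back buffer is a plain BufferedImage, every composite happens in software on it, and the single drawImage in paint() is the one per-frame copy towards the screen. (This is an illustrative skeleton, not code from Balls.jar.)

```java
import java.awt.AlphaComposite;
import java.awt.Canvas;
import java.awt.Color;
import java.awt.Graphics;
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;

public class SoftwareBackBuffer extends Canvas {
    // The back buffer lives purely in main memory - no BufferStrategy.
    private final BufferedImage back =
            new BufferedImage(640, 480, BufferedImage.TYPE_INT_RGB);

    /** All per-sprite compositing happens here, entirely in software. */
    public void renderFrame() {
        Graphics2D g = back.createGraphics();
        g.setColor(Color.BLACK);
        g.fillRect(0, 0, 640, 480);
        // Stand-in for drawing translucent sprites: a 50% SRC_OVER fill.
        g.setComposite(AlphaComposite.getInstance(AlphaComposite.SRC_OVER, 0.5f));
        g.setColor(Color.WHITE);
        g.fillRect(100, 100, 32, 32);
        g.dispose();
    }

    /** The ONE per-frame copy of the finished frame to the screen. */
    @Override
    public void paint(Graphics g) {
        g.drawImage(back, 0, 0, null);
    }

    public BufferedImage getBackBuffer() {
        return back;
    }
}
```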
Oh, and… there is a 4th scenario which, when 1.4.2 is complete, will hopefully be a reality.
- Both the back buffer and image are in vram, BUT the AlphaComposite operation is done in hardware!
This scenario is insanely quick in comparison with all the others.
There are no copies required at all, AND the composite operation is done by the graphics card’s processor, relieving the CPU of the responsibility (hence it can be off doing something else).
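If memory serves, the 1.4.2 betas expose this experimental acceleration through an undocumented system property on the Windows/DirectDraw pipeline. Treat the exact flag as an assumption here, and subject to change before release:

```shell
# Experimental hardware acceleration of translucency/AlphaComposite
# operations (undocumented, 1.4.2 betas, DirectDraw pipeline only).
java -Dsun.java2d.translaccel=true -jar Balls.jar
```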
Oh, and you don’t have to take my word for it; all these scenarios are demonstrable with this application I wrote a while ago.
http://www.pkl.net/~rsc/downloads/Balls.jar
:edit:
Hmm, seems I have fiddled around with Balls.jar since then, and in fact it won’t let you do scenario 3 or 4.
You’ll have to take my word for it, at least until I un-modify it back to how it used to be.
:edit:
OK, I’ve made the changes, so when you change to a software back buffer, it actually uses main memory now.
(Before, it was using vram even though I was telling it not to; another bug for Sun to fix >:( )
Also, I’ve added a new feature!
I’ve added support for the experimental hardware acceleration for AlphaCompositing.
However, because the hardware acceleration cannot be changed once the AWT system has been initialised, I’ve done a bit of a hack.
I’ve made 1 automatic image that will be accelerated (if it can be); this image is called ‘Accelerated FullAlpha’.
And I’ve made a 2nd automatic image and intentionally made it unacceleratable (I call getRaster().getDataBuffer() after creating it); this image is called ‘Full Alpha AutoImage’.
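The trick on that 2nd image works because grabbing the raster’s DataBuffer hands you a direct reference to the pixel array, after which Java2D can no longer manage or cache the image in vram. A minimal sketch (headless-friendly; on a real screen the image would come from createCompatibleImage):

```java
import java.awt.GraphicsConfiguration;
import java.awt.GraphicsEnvironment;
import java.awt.Transparency;
import java.awt.image.BufferedImage;
import java.awt.image.DataBuffer;

public class UnaccelerateTrick {
    public static void main(String[] args) {
        BufferedImage img;
        if (!GraphicsEnvironment.isHeadless()) {
            // The "automatic image" path: an image the runtime is free
            // to mirror/cache in vram.
            GraphicsConfiguration gc = GraphicsEnvironment
                    .getLocalGraphicsEnvironment()
                    .getDefaultScreenDevice()
                    .getDefaultConfiguration();
            img = gc.createCompatibleImage(64, 64, Transparency.TRANSLUCENT);
        } else {
            // Headless fallback so the sketch still runs anywhere.
            img = new BufferedImage(64, 64, BufferedImage.TYPE_INT_ARGB);
        }

        // Touching the DataBuffer pins the pixels in main memory:
        // the image is now "unacceleratable".
        DataBuffer pixels = img.getRaster().getDataBuffer();
        System.out.println("got direct pixel buffer, size = " + pixels.getSize());
    }
}
```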