Where is the best place to put this bit of code?

On Sun’s site, the following code appears:
while (!done) {
    Graphics g = myStrategy.getDrawGraphics();
    try {
        render(g);
    } finally {
        g.dispose();
    }
    myStrategy.show();
}

The g = myStrategy.getDrawGraphics();
and then
g.dispose()
are within the animation loop. Is it better to have them here, or to have them in the render method?

I’d keep it outside the render method.

In future, you may want to stop using BufferStrategy;
keeping the render method reliant only on a Graphics object makes it far more flexible.
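
For example (just a sketch; width and height are whatever your component uses), keep render() along these lines so it never mentions BufferStrategy at all:

private void render(Graphics g) {
    // all drawing goes through the Graphics passed in;
    // render() neither knows nor cares where it came from
    g.setColor(Color.black);
    g.fillRect(0, 0, width, height);
    // ... draw the rest of the frame ...
}

Then you can hand it a Graphics from a BufferStrategy, from a BufferedImage, or from anywhere else, without changing it.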

Ok, question: in future what would I use instead of BufferStrategy? Also, what is the exact benefit of having it in the animation loop instead of the render method?

[quote]Ok, question: in future what would I use instead of BufferStrategy? Also, what is the exact benefit of having it in the animation loop instead of the render method?
[/quote]
well…
if you are doing lots of unaccelerated drawing operations, it can be faster (at the moment) to render to a backBuffer in system memory than one stored in vram.

Hence, in some circumstances, you may want to do this instead :-

render(backBuffer.getGraphics());

I think it’s simply neater to keep the source of the Graphics object distinctly separate from the manipulation of the Graphics object.
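
A rough sketch of what that looks like (assuming the usual java.awt imports, and that width, height and render() already exist in your component):

// the back buffer is an ordinary BufferedImage in system memory, not vram
BufferedImage backBuffer =
        getGraphicsConfiguration().createCompatibleImage(width, height);

// inside the animation loop:
Graphics g = backBuffer.getGraphics();
try {
    render(g);    // render() just sees a Graphics object, as before
} finally {
    g.dispose();
}

// copy the finished frame to the screen once per frame
Graphics screen = getGraphics();
screen.drawImage(backBuffer, 0, 0, null);
screen.dispose();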

I thought I was doing accelerated drawing!?

this.createBufferStrategy(2);
bs = this.getBufferStrategy();
.
.
.
public void render()
{
    if( !bs.contentsLost() )
    {
        Graphics g = bs.getDrawGraphics();
        g.setColor(Color.black);
        g.fillRect(0, 0, width, height);
        g.setColor(Color.blue);
        g.fillRect(100, 600, 70, 70);
        .
        .
        bs.show();
        g.dispose();
    }
}

So this piece of code in my render method means that I am not doing accelerated drawing?

sure you are.

I’m just saying, if in your render method you did some draw operation that used an unaccelerated feature (such as AlphaCompositing), it would in fact be a lot quicker if the backBuffer was NOT in vram.
(i.e. you weren’t using bufferStrategy)

As long as you stick with accelerated features, you won’t ever have to worry about that…

I only brought it up because it is a reason for keeping the rendering system abstracted away from the actual render code.

Cool. Quick question: if I did do alpha compositing in my draw method, then the back buffer would automatically become unaccelerated, and everything I blit to the screen would be unaccelerated because of that one part, right?

well no, it would just do the AlphaComposite operation incredibly slowly.

There are 3 possible scenarios :-

  1. The back Buffer is in vram, and the image you are drawing exists ONLY in vram. (i.e. both images are VolatileImages)

This is the worst scenario, because Java has to read back both a portion of the back buffer AND the image you are drawing into main memory.
It can then perform the software composite operation.
Finally, it copies the modified portion of the back buffer back into vram.
As you can see, you’ve got 3 EXTRA copies on top of the AlphaComposite operation.

  2. The back Buffer is in vram, and the image you are drawing exists in main memory [and possibly vram]. (i.e. the back buffer is a VolatileImage, and the image is either an automatic image, or a regular unacceleratable image)

This scenario is not so bad.
All it has to do is read back a portion of the back buffer, use the version of the image that is in main memory, perform the composite operation, and copy the result back to vram.
That’s 2 extra copies on top of the AlphaComposite operation.

  3. Both the back buffer and image are in main memory.
    (i.e. you are NOT using bufferStrategy for the back buffer, and the image is either an automatic image, or a normal unacceleratable image)

In this scenario no read-back from vram needs to be performed; the composite operation is done, and then at the end of each frame the back buffer is copied into vram.
You could count that as either ZERO copies, or 1 copy.
But the 1 copy is only done once per frame, so it’s a fixed overhead (whereas the copies in the previous scenarios had to be performed per drawing operation).

So as you can see, the 3rd scenario is going to be a lot quicker when performing unaccelerated drawing operations (specifically compositions).
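
Just to make the 3rd scenario concrete, here’s a quick sketch (gc, width, height, spriteX, spriteY and screenGraphics are all placeholders):

// scenario 3: back buffer and sprite both live in main memory,
// so the composite is done in software with no read-back from vram
BufferedImage backBuffer = gc.createCompatibleImage(width, height);
BufferedImage sprite =
        gc.createCompatibleImage(64, 64, Transparency.TRANSLUCENT);

Graphics2D g2 = backBuffer.createGraphics();
g2.setComposite(AlphaComposite.getInstance(AlphaComposite.SRC_OVER, 0.5f));
g2.drawImage(sprite, spriteX, spriteY, null);
g2.dispose();

// the only vram traffic is this single copy at the end of the frame
screenGraphics.drawImage(backBuffer, 0, 0, null);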

Oh, and… there is a 4th scenario which, when 1.4.2 is complete, will hopefully be a reality.

  4. Both the back buffer and image are in vram, BUT the AlphaComposite operation is done in hardware!

This scenario is insanely quick in comparison with all the others.
There are no copies required at all, AND the composite operation is done by the graphics card’s processor, relieving the cpu of the responsibility (hence it can be off doing something else).

Oh, and you don’t have to take my word for it; all these scenarios are demonstrable with this application I wrote a while ago.

http://www.pkl.net/~rsc/downloads/Balls.jar

:edit:

hmm, seems I have fiddled around with Balls.jar since then, and in fact it won’t let you do scenario 3 or 4 :confused:

You’ll have to take my word for it :slight_smile: at least until I un-modify it back to how it used to be.

:edit:

ok, I’ve made the changes, so when you change to a software backBuffer, it actually uses main memory now
(before, it was using vram - even though I was telling it not to; another bug for Sun to fix >:().

Also, I’ve added a new feature!

I’ve added support for the experimental hardware acceleration for AlphaCompositing.

However, because the hardware acceleration cannot be changed once the AWT system has been initialised, I’ve done a bit of a hack.

I’ve made one automatic image that will be accelerated (if it can be); this image is called ‘Accelerated FullAlpha’.

And I’ve made a 2nd automatic image and intentionally made it unacceleratable (I call getRaster().getDataBuffer() after creating it); this image is called ‘Full Alpha AutoImage’.
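
In code the trick is basically this (gc, w and h are just placeholders):

// ‘Accelerated FullAlpha’ - left alone, so Java2D is free to accelerate it
BufferedImage acceleratedImage =
        gc.createCompatibleImage(w, h, Transparency.TRANSLUCENT);

// ‘Full Alpha AutoImage’ - the same kind of image, but grabbing the raster's
// DataBuffer makes it unacceleratable from then on
BufferedImage unacceleratedImage =
        gc.createCompatibleImage(w, h, Transparency.TRANSLUCENT);
unacceleratedImage.getRaster().getDataBuffer();   // intentionally kills acceleration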

Wow, nice!!! Thanks a lot Abuse. I didn’t know that hardware-accelerated alpha compositing was promised in the 1.4.2 release. Also thanks a lot for your reply and the code you posted. Hey, what image format are you using, png? Also, what exactly is a masked auto image? Is that where the image’s magenta color is set to transparent through an ImageProducer and ImageConsumer?

  1. Is it better to use a png image, which supports transparency, or an image with a magenta color that can easily be picked out and set to transparent during loading?

A bitmask autoimage is an automatic image that has bitmask transparency (i.e. each pixel is either transparent (alpha of 0) or opaque (alpha of 1)).

They can be obtained from the method

graphicsConfiguration.createCompatibleImage(width,height,Transparency.BITMASK);

As for the format of the images:

I’ve used a gif (with bitmask transparency).
For the other formats (the Full Alpha versions) I simply copied the bitmask image onto an automatic image returned from

graphicsConfiguration.createCompatibleImage(width,height,Transparency.TRANSLUCENT);

So the full alpha images aren’t really taking advantage of the full alpha channel available… but that doesn’t really matter… it’s only a test program :smiley:
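
Roughly, the Full Alpha copies were made like this (the file name and gc are just placeholders; ImageIO is from javax.imageio):

// load the gif, which already has bitmask transparency
BufferedImage gif = ImageIO.read(new File("sprite.gif"));

// create a TRANSLUCENT automatic image and copy the gif onto it
BufferedImage fullAlpha =
        gc.createCompatibleImage(gif.getWidth(), gif.getHeight(),
                                 Transparency.TRANSLUCENT);
Graphics2D g2 = fullAlpha.createGraphics();
g2.drawImage(gif, 0, 0, null);
g2.dispose();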

The only reason I used a gif is cos I know PSP creates gifs with bitmask transparency correctly :slight_smile:
Dunno how it handles pngs.

So in PSP you created an image with a transparent background, or with that magenta background? How did you set up the bitmask? I am not sure how this is done. Do you create the bitmask yourself? If not, where do you get it from?

In PSP, you just create a regular image,
then set the transparency index to the color you want to be transparent (Shift+Ctrl+V in PSP7).

For png it’s a bit more complex: you have to create a mask and add it to the image.

And I’m not sure how it interprets the mask, i.e. whether it can tell the difference between an alpha mask and a bitmask :smiley:

Cool. I have been using Photoshop for a while but have always had to work around the anti-aliasing issues, among other things. I guess it’s a good time to learn Paint Shop Pro since I want to create sprites for my game, and PSP sounds better than PS for making sprites.

dunno about that… but PSP is free :slight_smile: kinda…