Scaling Slowness

Hi,

OK, firstly, I did search the forums and tried some stuff out from the information I found, as there seems to be a fair bit about scaling.

Well, I still have a problem… ???

What I’ve basically written is a class that renders a 2D effects overlay in my game through int arrays.

If I render it 80x60 onto a 800x600 window it looks and runs perfectly in a little square.

If I call g.drawImage so it scales the 80x60 BufferedImage to 800x600, the framerate completely collapses, so obviously there is some serious slowdown coming from the scaling of the image. The image looks OK scaled up, by the way (slightly pixellated, kind of an old-school effect), which was the look I ideally wanted anyway :)

Anyone got any idea how I can improve the speed of the scaling operation in this code? I’ve tried turning the DD flags on and it didn’t seem to have any effect.


public FXLayer()
{
    bi = new BufferedImage(80, 60, BufferedImage.TYPE_INT_ARGB);
    DataBufferInt data = (DataBufferInt) bi.getRaster().getDataBuffer();
    PixelIndex = data.getData();
}

public void paint(Graphics g)
{
    g.drawImage(bi, 0, 0, 800, 600, null);
}

public void update()
{
    // Only updating my int array here
}


Ok

I just made the BufferedImage copy into a VolatileImage, then I scale the VolatileImage where I was scaling the BufferedImage before. Wow, the difference that just made.

I think, anyway; at the moment it’s not alpha blending with the stuff below it, so I can’t actually see the gameworld right now, but it definitely seems to be much, much faster :)

I have the same performance problem, though in my case the performance is acceptable even if it is not very smooth.

Have you tried creating the image you’re referring to with GraphicsConfiguration.createCompatibleImage? That way it creates a hardware-accelerated image for sure. But in Java 1.4 BufferedImage wasn’t accelerated, so that might be the cause of your initial problem?
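A minimal sketch of the suggestion above (the class and method names are mine, not from the thread; the headless fallback is just so it runs anywhere):

```java
import java.awt.GraphicsConfiguration;
import java.awt.GraphicsEnvironment;
import java.awt.Transparency;
import java.awt.image.BufferedImage;

public class CompatibleImageSketch {
    // Creates an overlay image whose layout matches the screen, so blits
    // to it can be accelerated. Falls back to a plain ARGB image when
    // there is no screen (headless).
    public static BufferedImage createOverlay(int w, int h) {
        if (GraphicsEnvironment.isHeadless()) {
            return new BufferedImage(w, h, BufferedImage.TYPE_INT_ARGB);
        }
        GraphicsConfiguration gc = GraphicsEnvironment.getLocalGraphicsEnvironment()
                .getDefaultScreenDevice().getDefaultConfiguration();
        // The third argument is a Transparency constant, not an image type
        return gc.createCompatibleImage(w, h, Transparency.BITMASK);
    }

    public static void main(String[] args) {
        BufferedImage img = createOverlay(80, 60);
        System.out.println(img.getWidth() + "x" + img.getHeight());
    }
}
```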

Well, it’s now running fine (speed-wise) when I scale the new VolatileImage, which contains a copy of the BufferedImage. I copy the BufferedImage into the VolatileImage every time I call the update method, so it’s not being done in the paint method. Performance honestly seems excellent; I’m quite surprised at the huge difference it made. It’s not perfect, I’ve definitely lost a few fps, but I expected that.

Anyway, I now seem to be of the understanding that I can’t render volatile images over my previously rendered game layers with transparency? If so, this is a major problem :(

Here’s my code now


public FXLayer(GraphicsConfiguration gc)
{
    this.gc = gc; // keep a reference; update() needs it for validate()
    vImg = gc.createCompatibleVolatileImage(80, 60);

    bi = new BufferedImage(80, 60, BufferedImage.TYPE_INT_ARGB);

    DataBufferInt data = (DataBufferInt) bi.getRaster().getDataBuffer();
    PixelIndex = data.getData();
}

public void update()
{
    // Modify my int[] here
    do
    {
        if (vImg.validate(gc) == VolatileImage.IMAGE_INCOMPATIBLE)
        {
            // old vImg doesn't work with new GraphicsConfig; re-create it
            vImg = gc.createCompatibleVolatileImage(80, 60);
        }
        Graphics2D g = vImg.createGraphics();

        g.drawImage(bi, 0, 0, null);

        g.dispose();
    }
    while (vImg.contentsLost());
}

public void paint(Graphics g)
{
    g.drawImage(vImg, 0, 0, 800, 600, null);
}

2 things:

  1. When you create your VolatileImage, you don’t allow transparency or translucency. Simply append the parameter Transparency.BITMASK or Transparency.TRANSLUCENT and it should allow what you want.
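For what it’s worth, a sketch of that call (this is the three-argument overload, which only exists from 1.5 on; the class name is mine, and the headless guard is just so the sketch runs anywhere):

```java
import java.awt.GraphicsConfiguration;
import java.awt.GraphicsEnvironment;
import java.awt.Transparency;
import java.awt.image.VolatileImage;

public class TranslucentVolatileSketch {
    // Returns a translucent VolatileImage, or null when no screen is available.
    public static VolatileImage create(int w, int h) {
        if (GraphicsEnvironment.isHeadless()) {
            return null;
        }
        GraphicsConfiguration gc = GraphicsEnvironment.getLocalGraphicsEnvironment()
                .getDefaultScreenDevice().getDefaultConfiguration();
        // 1.5+ overload: the third argument is a Transparency constant
        return gc.createCompatibleVolatileImage(w, h, Transparency.TRANSLUCENT);
    }

    public static void main(String[] args) {
        VolatileImage v = create(80, 60);
        System.out.println(v == null ? "headless" : v.getWidth() + "x" + v.getHeight());
    }
}
```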

  2. Now I think I understand why it’s so much faster than with the createCompatibleImage call: it’s because your VolatileImage is unmanaged, obviously. If a managed image is used for the scaling, then each time you scale it the image has to be copied into VRAM again, which defeats the image-management optimization. Sound logical?

"Simply append the parameter Transparency.BITMASK or Transparency.TRANSLUCENT and it should allow what you want."

I’m using Java 1.4.2, so that’s not available I don’t think :frowning:

So it looks like I might not be able to use VolatileImage after all.

I think I understand what you mean. Does this mean I can keep the performance of VolatileImage while using a managed image somehow?

The data to be drawn to the screen is constantly changing, by the way, regardless of the scaling.

Thanks,

[quote] I’m using Java 1.4.2, so that’s not available I don’t think
[/quote]
At least you can use Transparency.BITMASK; that’s accelerated for sure. Also, you can use the sun.java2d.translaccel=true flag on Windows to speed things up, but don’t forget to set sun.java2d.ddforcevram=true at the same time. And why not set sun.java2d.ddscale=true as well? :)
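These flags are normally passed on the command line as -Dsun.java2d.translaccel=true and so on; as a sketch, they can also be set from code, but only if it happens before any Java2D/AWT class gets loaded (class name is mine):

```java
public class AccelFlags {
    public static void main(String[] args) {
        // Equivalent to -D options on the command line. This only takes
        // effect if it runs before Java2D initializes, so do it first
        // thing in main(), before touching any AWT/Swing class.
        System.setProperty("sun.java2d.translaccel", "true");
        System.setProperty("sun.java2d.ddforcevram", "true");
        System.setProperty("sun.java2d.ddscale", "true");

        System.out.println("ddscale = " + System.getProperty("sun.java2d.ddscale"));
        // ... then start the game ...
    }
}
```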

Yeah, it’s accelerated, but it doesn’t exist in 1.4.2 :-[

OK, I’ve dumped VolatileImage due to the translucency issues, only to discover problems with BufferedImage in this area.

If my code runs as follows, the layer is drawn over all the other layers, obscuring them, but it’s accelerated :(


bi = gc.createCompatibleImage(80,60, BufferedImage.TYPE_INT_ARGB);

DataBufferInt data = (DataBufferInt)bi.getRaster().getDataBuffer();
PixelIndex = data.getData() ;

If run like this, it’s layered over perfectly, but it grinds the framerate to a halt :(


bi = gc.createCompatibleImage(80,60, BufferedImage.TYPE_INT_ARGB_PRE);

DataBufferInt data = (DataBufferInt)bi.getRaster().getDataBuffer();
PixelIndex = data.getData() ;

I’ve got all those flags on by the way.

Thanks,

Any chance you could give me an example of its use?

The only useful example of it I could find in the context of this problem was in regard to Java 1.5.

Your confusion is from this code:


i = gc.createCompatibleImage(80,60, BufferedImage.TYPE_INT_ARGB_PRE); 

That actually doesn’t do what you think it does. “createCompatibleImage” creates an image that matches the current screen mode. So passing a color model would be silly. If you look at the JavaDocs you will find:


public abstract BufferedImage createCompatibleImage(int width,
                                                    int height,
                                                    int transparency)  <--- HERE!


Returns a BufferedImage that supports the specified transparency and has a data layout and color model compatible with this GraphicsConfiguration. This method has nothing to do with memory-mapping a device. The returned BufferedImage has a layout and color model that can be optimally blitted to a device with this GraphicsConfiguration.

Parameters:
width - the width of the returned BufferedImage
height - the height of the returned BufferedImage
transparency - the specified transparency mode
Returns:
a BufferedImage whose data layout and color model is compatible with this GraphicsConfiguration and also supports the specified transparency.

So the correct call is actually:


i = gc.createCompatibleImage(80,60, Transparency.BITMASK); 

Ok, so I guess I didn’t do quite enough reading on that part, or maybe I got confused between the 1.4.2 and 1.5 docs ???

I just tried it out that way and it doesn’t work either; it’s obviously accelerated, but it’s not blended with the layers underneath at all either ???

Do you want blended regions or transparent regions? BITMASK gives you transparent regions. TRANSLUCENT gives Alpha blending.
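The distinction can be seen without a screen at all: a TRANSLUCENT image carries a full 8-bit alpha channel per pixel, while BITMASK pixels are simply on or off. A small sketch (class name is mine) writing a half-transparent pixel into an ARGB image:

```java
import java.awt.Color;
import java.awt.Graphics2D;
import java.awt.Transparency;
import java.awt.image.BufferedImage;

public class TransparencyDemo {
    // Fills a small ARGB image with half-transparent red and reads back
    // the alpha of one pixel. With a TRANSLUCENT image the partial alpha
    // survives; a BITMASK image could only store fully on/off pixels.
    public static int halfRedAlpha() {
        BufferedImage img = new BufferedImage(2, 2, BufferedImage.TYPE_INT_ARGB);
        Graphics2D g = img.createGraphics();
        g.setColor(new Color(255, 0, 0, 128)); // half-transparent red
        g.fillRect(0, 0, 2, 2);
        g.dispose();
        return img.getRGB(0, 0) >>> 24; // extract the alpha byte
    }

    public static void main(String[] args) {
        System.out.println("TYPE_INT_ARGB is TRANSLUCENT: "
                + (new BufferedImage(1, 1, BufferedImage.TYPE_INT_ARGB)
                        .getColorModel().getTransparency() == Transparency.TRANSLUCENT));
        System.out.println("alpha of half-transparent pixel = " + halfRedAlpha());
    }
}
```

This per-pixel blending is exactly the work that made the TRANSLUCENT case so slow in software.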

Sorry, I should have said in my post that I tried both BITMASK (no blending, perfect speed) and TRANSLUCENT (perfect blending, awful speed).

That’s with the translaccel, ddscale, and ddforcevram flags set.

Thanks,

One question:
Have you tried with different RenderingHints keys like KEY_INTERPOLATION and KEY_RENDERING to see if it improves the performance?

No, but i’ll be sure to give that a go now though.

Cheers,

g2d.setRenderingHint(RenderingHints.KEY_INTERPOLATION, RenderingHints.VALUE_INTERPOLATION_BILINEAR);

That one speeds up my scaling a LOT
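Since the poster wants the chunky old-school look anyway, NEAREST_NEIGHBOR might also be worth a try; it's usually the cheapest filter in software. A headless-runnable sketch of setting the hints before a scaled drawImage (class name is mine):

```java
import java.awt.Graphics2D;
import java.awt.RenderingHints;
import java.awt.image.BufferedImage;

public class HintsDemo {
    // Scales an 80x60 source up to 800x600 with speed-oriented hints.
    public static BufferedImage scaleUp(BufferedImage src) {
        BufferedImage dst = new BufferedImage(800, 600, BufferedImage.TYPE_INT_ARGB);
        Graphics2D g2d = dst.createGraphics();
        // NEAREST_NEIGHBOR keeps the pixelated look and avoids the cost of
        // filtering; BILINEAR smooths but does more work per pixel in software.
        g2d.setRenderingHint(RenderingHints.KEY_INTERPOLATION,
                RenderingHints.VALUE_INTERPOLATION_NEAREST_NEIGHBOR);
        g2d.setRenderingHint(RenderingHints.KEY_RENDERING,
                RenderingHints.VALUE_RENDER_SPEED);
        g2d.drawImage(src, 0, 0, 800, 600, null);
        g2d.dispose();
        return dst;
    }

    public static void main(String[] args) {
        BufferedImage src = new BufferedImage(80, 60, BufferedImage.TYPE_INT_ARGB);
        BufferedImage dst = scaleUp(src);
        System.out.println(dst.getWidth() + "x" + dst.getHeight());
    }
}
```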

No joy with that either :(

Here’s a pic of it in its blended glory at about 2fps, in case anyone is interested.

http://img60.exs.cx/img60/3663/2fpsfogging5ez.jpg

Sprite images are from Ari Feldman’s awesome sprite library.

I’m thinking about starting a big rewrite with LWJGL (something I’ve been considering for quite a while), either instead of or as well as my current Java2D rendering, because I’m starting to lose hope on this one.

I’m 99% sure the slowdown is all coming from the blending, as the scaling definitely seems to be fully accelerated now.

Unfortunately, I don’t think there’s a way to scale translucent images with h/w acceleration on 1.4.2 or 1.5.

Here’s what you can do with hw acceleration:

  • scale opaque managed or volatile images (with ddscale flag on)
  • copy opaque/1-bit transparent managed or opaque volatile images
  • copy translucent managed images (with translaccel property on)

Note that in 1.4.2, managed images are only those created with createCompatibleImage(w, h, OPAQUE/BITMASK) or createImage(w, h).

In 1.5 all opaque or 1-bit transparent images are potentially managed. And with the translaccel flag, translucent images could also be managed.

With the opengl pipeline in 1.5 you can do all you want =)
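On 1.5 you can even ask Java2D whether it is currently holding an accelerated copy of an image via Image.getCapabilities; a sketch (class name is mine, with a headless guard so it runs anywhere):

```java
import java.awt.GraphicsConfiguration;
import java.awt.GraphicsEnvironment;
import java.awt.image.BufferedImage;

public class AccelCheck {
    // Reports whether Java2D currently holds an accelerated (VRAM) surface
    // for the given image on the default screen (1.5+ API). A freshly
    // created image has no cached copy yet, so this starts out false and
    // may flip to true after the image has been blitted a few times.
    public static boolean isAccelerated(BufferedImage img) {
        if (GraphicsEnvironment.isHeadless()) {
            return false;
        }
        GraphicsConfiguration gc = GraphicsEnvironment.getLocalGraphicsEnvironment()
                .getDefaultScreenDevice().getDefaultConfiguration();
        return img.getCapabilities(gc).isAccelerated();
    }

    public static void main(String[] args) {
        BufferedImage img = new BufferedImage(80, 60, BufferedImage.TYPE_INT_ARGB);
        System.out.println("accelerated: " + isAccelerated(img));
    }
}
```

Handy for confirming which of the cases in the list above actually got the fast path on your machine.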