Awesome Speeds With VolatileImage

Managed BIs are accelerated when used as a source; when used as a destination ("drawing on BufferedImages"), they are strictly CPU-only.
I wrote the new XRender Java2D pipeline that's in JDK 7, I know it's the same for the OpenGL pipeline, and thinking about the Java2D internals I really doubt the D3D pipeline accelerates rendering TO BIs.

So the code posted compares software-only rendering with hw-accelerated rendering, but by using a BufferedImage as buffer (=destination) it doesn’t give “image management” even a chance.

Basically Kerai summed it up quite nicely:

I have no real proof other than the fact that when I try drawing onto a BufferedImage I get 1-2 fps.

You can draw onto a VolatileImage and then call getSnapshot(), which will return a BufferedImage copy of that VolatileImage so you can do stuff to the pixels.
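For reference, the usual render-then-snapshot pattern looks something like this (a minimal sketch; the GraphicsConfiguration and the sizes are placeholders for your own setup):

import java.awt.Graphics2D;
import java.awt.GraphicsConfiguration;
import java.awt.image.BufferedImage;
import java.awt.image.VolatileImage;

// Render into a VolatileImage (VRAM), then pull the pixels back out as a
// BufferedImage via getSnapshot().
static BufferedImage renderAndSnapshot(GraphicsConfiguration gc, int width, int height) {
    VolatileImage vi = gc.createCompatibleVolatileImage(width, height);
    do {
        // Recreate the surface if its VRAM copy became unusable
        if (vi.validate(gc) == VolatileImage.IMAGE_INCOMPATIBLE) {
            vi = gc.createCompatibleVolatileImage(width, height);
        }
        Graphics2D g = vi.createGraphics();
        // ... hardware-accelerated drawing here ...
        g.dispose();
    } while (vi.contentsLost());
    return vi.getSnapshot(); // a plain BufferedImage you can edit per-pixel
}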

I should write a detailed tut on how to get Java2D to run fast and exactly what you can expect to get out of it performance-wise. Also, go into detail on what the typical bottlenecks are.

For example, you will almost never have a bottleneck with fill/blending/alpha/scaling; it is almost always the draw call count. And almost all images you create start out managed.

Yes, yes you should. I’d be reading it, for sure! :slight_smile:

Ah, ’tis the audacity of youth to be so sure about being wrong! :stuck_out_tongue:

Thanks Linuxhippy for backing up Kerai and my points about this earlier. I think this is now the third or fourth thread recently I’ve picked up ra4king on this. :slight_smile:

As Riven correctly pointed out (pity about the second sentence! :wink: ) Kerai’s information is somewhat old. There is no need to use a compatible image for it to be managed since Java 5. In fact, I’ve generally found createCompatibleImage(…) to be more trouble than it’s worth, particularly on Linux in the past - YMMV.

Sounds like a good idea. See if you can bug Linuxhippy into proofing it - he’s probably the person with the most knowledge of the Java2D back-end around here!

Whoa really?

:emo: :’( :emo: :’( :emo: :’( :emo: :’( :emo: :’( :emo: :’(

What’s the score?

NO thread, you’re not dying just yet…

I’m working on my rendering classes now, and from what I’ve gathered from the Java API, one thing that forces an image into software mode is exposing the underlying data structure as a Java type (for example, an int array of RGBA values).

This is a problem for me, as I intended to expose said data structure to be able to operate directly on a per-pixel basis. This is how I get said structure:



java.awt.image.BufferedImage bufferImage = null;
int[] exposedDataBuffer = null;

bufferImage = new java.awt.image.BufferedImage(
        renderResolutionWidth,
        renderResolutionHeight,
        java.awt.image.BufferedImage.TYPE_INT_RGB );

// Grabbing the raster's backing array - this is the call that drops the image out of managed mode
exposedDataBuffer = ((java.awt.image.DataBufferInt) bufferImage.getRaster().getDataBuffer()).getData();


And when drawing a frame:


// Get the buffer strategy's graphics context
java.awt.Graphics strategyGraphicsContext = bufferStrategy.getDrawGraphics();

// Clear
strategyGraphicsContext.fillRect(0, 0, frameWidth, frameHeight);

// Draw, scaling the buffer up to the frame
strategyGraphicsContext.drawImage(
        bufferImage,  // image source
        0,            // destination rectangle, 1st corner X coord
        0,            // destination rectangle, 1st corner Y coord
        frameWidth,   // destination rectangle, 2nd corner X coord
        frameHeight,  // destination rectangle, 2nd corner Y coord
        0,            // source rectangle, 1st corner X coord
        0,            // source rectangle, 1st corner Y coord
        bufferWidth,  // source rectangle, 2nd corner X coord
        bufferHeight, // source rectangle, 2nd corner Y coord
        null );       // observer

// Dispose of the context
strategyGraphicsContext.dispose();

// Flip buffers
bufferStrategy.show();

The question then is… How to do this (per-pixel manipulation) while still retaining the capability to use accelerated graphics?

And yes, using AWT, still not looking into better libraries. I’m really trying to get a better understanding of the basic operation of Java graphics before delegating. :slight_smile:

Are you updating the array every frame and only drawing once? If you update a BufferedImage every time before you draw it, through the array or Graphics2D, then it won’t be accelerated anyway.

If you update the image then draw it many times, either draw it into a second BufferedImage or a VolatileImage and draw from that.
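Something like this, say (a rough sketch; 'source' stands for the image whose array you've been editing):

import java.awt.Graphics2D;
import java.awt.image.BufferedImage;

// Copies a pixel-edited image into a fresh BufferedImage whose raster is
// never grabbed, so Java2D remains free to manage (cache) the copy in VRAM.
static BufferedImage remanage(BufferedImage source) {
    BufferedImage cache = new BufferedImage(
            source.getWidth(), source.getHeight(), BufferedImage.TYPE_INT_RGB);
    Graphics2D g = cache.createGraphics();
    g.drawImage(source, 0, 0, null);
    g.dispose();
    return cache;
}

Then draw 'cache' as many times as you like, and only rebuild it when the pixels actually change.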

I’m not sure I follow.

Right now yes, it is a 1:1 process, each frame is drawn before showing, but that’s just because it’s still under development.

My first idea is to decouple the rendering process from the drawing process, so frames are only sent to the drawing surface once they are complete (not sure if this would help).

Then, to further save on processing, split the frame into layers to be assembled, reusing those that need not be updated as often (for example, backgrounds).

Thing is, in the test I have now, I’m applying a full-screen static filter (made by randomizing the lightness value on a per-pixel basis), so I don’t see how I can break such a thing down so as not to need to update the pixel buffer each time.

Keep in mind I’m a graphics pipeline noob here, bashing my head in an effort to learn! :slight_smile:

OK. Though I’m not sure I can explain it any simpler, apart from to just say not to worry about software mode / acceleration. From what I read you’re doing now, it won’t make any difference! :slight_smile:

If you’re manipulating the whole image array each time before you draw it, it will never be accelerated. You’re processing it in software!

This is the only scenario where it may be worth worrying that an image is forced to unmanaged (software only) mode. If you manipulate the pixel array of a background image, then draw it to the screen 10, 100, 1000 … times, consider drawing it to another BufferedImage (that you haven’t grabbed the array of!) first. This will mean the image can be cached on the graphics card and so faster to draw to the screen.

A BufferedImage is always stored and manipulated in software - the pixels are stored in Java memory. What “managed image” means is that when you draw the image more than once to the screen, Java2D will try to cache a copy of the image on the graphics card rather than uploading the pixels each time. Each time you draw on a BufferedImage, the cached copy is invalidated, and drawing the image will not be accelerated again until at least the second time you draw it to the screen. Grabbing the pixel array means the image is never cached on the graphics card.

Therefore, to put it simply, unless you draw an image more than once to the screen before you modify it again, it is pointless to worry about grabbing the pixel array.

Let’s just sum this up and make things easy. If you want any form of performance in Java2D, you can never grab the pixel array of a BufferedImage every frame.

If you want to do some sort of lighting, then there are other ways of doing it that are much faster (look at the Toxic Bunny dev blog for details). If you want to do post-processing effects, move to OpenGL. The best I got with a basic bloom filter was really hacky and very slow.

If you were attempting to sum up what I was saying, you got it opposite! :stuck_out_tongue:

Every time you modify a BufferedImage it happens in software. One of the fastest and most flexible ways to modify a BufferedImage is to grab the pixel array. Grabbing it every frame has no performance penalty over other methods of modifying it, and can often be faster! A pity the PulpCore project is discontinued, because this was a good example of pure software rendering in Java, and how direct array manipulation was often faster than the Java2D software pipeline (search Bubblemark on here!).

What I was saying is that if you want to grab the array of an image (sprite) and change it, then draw it lots of times unchanged, you should draw it into another BufferedImage so it can be re-managed.

Thanks for the replies.

Now, I do understand that if I’m meddling with the internal structure, it is being handled by the CPU.

From what you’ve explained, I’m guessing the usual way to draw graphics is to keep the graphic elements in VRAM (say, sprites and backgrounds) and composite (copy into frame) from said copies.

I also understand that certain operations (rotation, scaling, shearing…) need to be supported by the GPU or else the images will be dumped back into software mode.

Now, as far as sprites, tiles or GUI is concerned, that seems easy to handle.

What I am somewhat concerned about is full-screen effects.

One example is the static filter I’m using for testing, other would be screen glows, or using masks to apply lighting effects…

On one hand I’ve read that blitting of alpha-enabled images isn’t always supported by the GPU, so creating masks as BufferedImages and then applying them might not be a good way to do it.

On the other hand, some effects might need per-pixel control (static being an example, although I can think of a few ways to simulate static with a set of pre-calculated alpha masks).

So how would you suggest these are handled? (And AWT not being able to do these things reliably is a valid answer)

For the record, I’m inquiring now because I’m building the rendering pipeline as I learn, better to decide now on this than backtrack later.

Also, I’ve noticed slowdowns when experimenting with large resolutions. My target game resolution is around 320x200, so the CPU will probably be able to handle it. This VolatileImage discussion is mostly educational for me.

And again, thanks for your time.

Scaling has very little performance penalty, if any. Rotating will drop performance but will be fine as long as you are not rotating 500-1000 sprites every frame. I think just about all GPUs from the last 5 years should support rotation, scaling, and shearing.
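For reference, a rotate/scale in Java2D is just a transform on the Graphics2D - something like this sketch (all parameter names made up):

import java.awt.Graphics2D;
import java.awt.geom.AffineTransform;
import java.awt.image.BufferedImage;

// Draws a sprite rotated and scaled about its centre; with the OpenGL/D3D
// pipelines enabled, transforms like this can be run on the GPU.
static void drawRotated(Graphics2D g, BufferedImage sprite,
                        double x, double y, double angleRadians, double scale) {
    AffineTransform at = new AffineTransform();
    at.translate(x, y);
    at.rotate(angleRadians);
    at.scale(scale, scale);
    // Centre the sprite on (x, y) so it rotates around its middle
    at.translate(-sprite.getWidth() / 2.0, -sprite.getHeight() / 2.0);
    g.drawImage(sprite, at, null);
}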

I meant modifying the pixels. You can grab them and be OK, but if you want to edit them and do something then you are SOL, unless there are some tricks that I do not know of.

The idea of drawing to another BufferedImage/VolatileImage is good if you only do it on startup or maybe one time when a level is loaded. Doing it every frame will kill performance. I used this technique for my bloom filter and it gave a nice speed boost, but not enough to get higher than 35 fps.

What do you mean by blitting? Are you talking about alpha blending? If so, you should be fine. Most GPUs and even integrated chips support alpha blending, so you should not need any form of mask.

I am not sure what you mean by fullscreen effects. What is the static filter? Adding what looks like static to the game, like an old TV? Screen glow would be bloom? Lighting?

Java2D can draw images very well. You can simulate a lot of cool effects, such as lighting, just by drawing an image and using clipping.

I would suggest a low resolution (which you have) and making the target 24-30 fps. You might be able to manage doing some of these things at that low a frame rate. If you really want to do all of this in Java2D, look at the Toxic Bunny dev blog. They show what they did to get all of the effects they have (lighting being the big one).

Blitting is the composition of several bitmaps by means of a raster operation (as defined by Wikipedia).

I was referring to hardware blitting as explained here, meaning how the GPU combines volatile images without making software copies.

As for “fullscreen effects”, the proper term is probably “post-processing filters”, meaning stuff you do to arbitrary portions of the screen independently of content.

An example is a simple “glow” effect made by increasing the lightness component of all pixels on the screen to make it flash.

The static effect, yes, simulates the typical white-noise static you get on TVs. It is done by randomly altering the lightness of every pixel, something like this:


for (int i = 0; i < bufferPixelArray.length; i++)
{
  int pixel = bufferPixelArray[i];

  int randomNoise = random.nextInt(16) * 5; // 0, 5, ..., 75
  int lightLevel = 100;

  // Compute pixel illumination + static filter
  // (no clamping needed here: 255 * 100 / 255 + 75 = 175, safely below 256)
  int valueRed   = ((pixel >> 16) & 0xFF) * lightLevel / 255 + randomNoise;
  int valueGreen = ((pixel >> 8)  & 0xFF) * lightLevel / 255 + randomNoise;
  int valueBlue  = ( pixel        & 0xFF) * lightLevel / 255 + randomNoise;

  int finalColor = valueRed << 16 | valueGreen << 8 | valueBlue;

  bufferPixelArray[i] = finalColor; // write the RGB value back
}

I doubt this can really be accelerated, unless the code can somehow be run on the GPU.

On the other hand, maybe this effect or rudimentary lighting can be achieved by rendering the light mask into a low-resolution image (say at half resolution) and then attempting to have the GPU blit the image into the main frame, combining alpha values.

If it worked, it’d make for pixelated effects, though.
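Roughly what I have in mind (an untested sketch; in practice the mask image would be created once and reused):

import java.awt.Graphics2D;
import java.awt.image.BufferedImage;

// Render the effect mask at half resolution, then let drawImage() stretch it
// over the whole frame, alpha-blending it on top.
static void applyHalfResMask(Graphics2D frameGraphics, int frameWidth, int frameHeight) {
    BufferedImage mask = new BufferedImage(
            frameWidth / 2, frameHeight / 2, BufferedImage.TYPE_INT_ARGB);
    Graphics2D mg = mask.createGraphics();
    // ... draw the semi-transparent lighting/static effect here ...
    mg.dispose();
    // The scale-up is the part the pipeline can accelerate; the default
    // nearest-neighbour interpolation is what would give the blocky look
    frameGraphics.drawImage(mask, 0, 0, frameWidth, frameHeight, null);
}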

Create a big texture with noise, and sample it with your shader, with random offsets each frame.

Riven, this is Java2D; there are no shaders.

Oskuro, that is what I thought you meant, but there really is no major performance gain from blitting vs just drawing. If there is some, it is not the major bottleneck. The pixel manipulation is your bottleneck.

One idea for the static effect is to have 3-4 images that have bits of static in them, with the rest transparent, and composite them randomly on the screen. This will give you a similar effect and be very, very fast.
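A quick sketch of that idea (the frame count and alpha range are arbitrary):

import java.awt.image.BufferedImage;
import java.util.Random;

// Pre-render a few mostly-transparent noise frames up front. Per frame you
// then stamp a random one over the scene and never touch its pixels again,
// so they can all stay managed.
static BufferedImage[] makeNoiseFrames(int count, int width, int height) {
    Random random = new Random();
    BufferedImage[] frames = new BufferedImage[count];
    for (int f = 0; f < count; f++) {
        frames[f] = new BufferedImage(width, height, BufferedImage.TYPE_INT_ARGB);
        for (int y = 0; y < height; y++) {
            for (int x = 0; x < width; x++) {
                int grey  = random.nextInt(256);
                int alpha = random.nextInt(96); // mostly transparent speckles
                frames[f].setRGB(x, y, alpha << 24 | grey << 16 | grey << 8 | grey);
            }
        }
    }
    return frames;
}

Per frame it is then just g.drawImage(frames[random.nextInt(frames.length)], 0, 0, null);.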

You can do the glow effect by rendering a white semi-transparent square over the whole window.
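E.g. something like this (sketch; 'strength' in 0..1 controls how strong the flash is):

import java.awt.AlphaComposite;
import java.awt.Color;
import java.awt.Composite;
import java.awt.Graphics2D;

// Full-screen flash/glow: fill a translucent white rectangle over everything
static void flash(Graphics2D g, int width, int height, float strength) {
    Composite old = g.getComposite();
    g.setComposite(AlphaComposite.getInstance(AlphaComposite.SRC_OVER, strength));
    g.setColor(Color.WHITE);
    g.fillRect(0, 0, width, height);
    g.setComposite(old); // restore the previous composite
}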

Java2D does not do additive blending, and if you want to do that then you will have to un-accelerate your drawing surface.

Most effects you will have to fake because Java2D was not designed to do them. Drawing the lighting onto a smaller map is what I did for some things, and it can greatly improve performance. If you use bilinear filtering you can go to 1/3 the resolution and still get fairly good-looking lights.
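The hint in question looks like this (sketch):

import java.awt.Graphics2D;
import java.awt.Image;
import java.awt.RenderingHints;

// Upscale a low-resolution light map with bilinear filtering
static void drawLightMap(Graphics2D g, Image lightMap, int frameWidth, int frameHeight) {
    g.setRenderingHint(RenderingHints.KEY_INTERPOLATION,
            RenderingHints.VALUE_INTERPOLATION_BILINEAR);
    g.drawImage(lightMap, 0, 0, frameWidth, frameHeight, null);
}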

Using a VolatileImage as an FBO or texture (not quite sure which one is more accurate) is great, and I am not sure if you know about the getSnapshot() method, which will give you a BufferedImage of the volatile. This is useful as you can then edit the pixels you just drew and render them onto your main drawing surface. Think of light maps.

If you really want to do all sorts of cool post-processing effects, Java2D is not the way to go. At this point, with how well you understand everything, you should have no problem getting into OpenGL or libGDX. I do not know how much you have put into this, but it sounds like you want to get going with shaders (something I am struggling with myself).

So did I! Kind of getting the feeling we’re speaking different languages here?! :stuck_out_tongue:

Directly updating / editing the pixel array of a BufferedImage every frame is actually pretty fast, and it’s possible to do this faster (a lot in some cases) than the software Java2D pipeline. If you do this, you should just ignore everything about managed images because they never will or can be managed.

Personally, I’d ignore Java2D as much as possible for anything requiring consistent framerates, performance, etc. It’s great for UI’s, but it’s too unpredictable for many uses, and favours accuracy over performance too much.

If you want consistent(ish) hardware rendering, use libGDX or similar. If you want to work with software rendering and pixel operations in Java, just grab the pixel array of a BufferedImage, play with it, and draw it to a BufferStrategy every frame. It’s a usable solution for low / medium resolution stuff as long as you don’t go overboard, and performance is far more predictable.
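The skeleton of that route looks something like this (a sketch following the BufferStrategy javadoc pattern; game logic and timing omitted):

import java.awt.Canvas;
import java.awt.Graphics;
import java.awt.image.BufferStrategy;
import java.awt.image.BufferedImage;
import java.awt.image.DataBufferInt;

// Pure-software route: one BufferedImage as the frame buffer, its raw int
// array for all pixel work, BufferStrategy only for the final blit.
static void softwareLoop(Canvas canvas, int width, int height) {
    BufferedImage frame = new BufferedImage(width, height, BufferedImage.TYPE_INT_RGB);
    int[] pixels = ((DataBufferInt) frame.getRaster().getDataBuffer()).getData();
    canvas.createBufferStrategy(2);
    BufferStrategy strategy = canvas.getBufferStrategy();

    while (true) {
        // ... write packed RGB values straight into 'pixels' here ...
        do {
            do {
                Graphics g = strategy.getDrawGraphics();
                g.drawImage(frame, 0, 0, null);
                g.dispose();
            } while (strategy.contentsRestored()); // buffer restored: redraw
            strategy.show();
        } while (strategy.contentsLost()); // buffer lost: redo from scratch
    }
}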

Doing it every frame is precisely what I said not to do (at least if you’re only drawing it once)! You’re just needlessly copying an entire pixel array in Java before it gets streamed to the graphics card. It is potentially worth doing if you need to transform the image when you draw it, but only if you use a VolatileImage.

If you’re going to stick with the software rendering route, and are just doing plain blitting (no scaling), I’d also recommend looking at this thread - lots of fast blend modes (Add, Multiply, Difference, etc.) for combining the pixel arrays from BufferedImages (or similar). The code derives from Praxis’ software pipeline, but was extended and developed further by Dx4 (he also changed it to work with single source pixels, which I disagree with). Using these is faster than using drawImage() on a BufferedImage.
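To give a flavour, an additive blend over packed RGB arrays boils down to something like this (a simplified sketch of the idea, not the actual RGBMath code, which uses cleverer bit tricks):

// Per-channel saturating add over packed RGB int arrays
static void blendAdd(int[] src, int[] dst) {
    for (int i = 0; i < dst.length; i++) {
        int s = src[i], d = dst[i];
        int r = Math.min(255, ((s >> 16) & 0xFF) + ((d >> 16) & 0xFF));
        int g = Math.min(255, ((s >> 8)  & 0xFF) + ((d >> 8)  & 0xFF));
        int b = Math.min(255, ( s        & 0xFF) + ( d        & 0xFF));
        dst[i] = r << 16 | g << 8 | b;
    }
}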

Looks interesting. There are a few ways you could speed that up, though. Primarily, replacing the use of Random with a much faster-performing (though less random :slight_smile: ) alternative.

I’m using this, which outputs between 0 and 255 - can’t remember where the algorithm came from originally:


private static int rngseed = 0;

// Simple linear congruential generator; returns a pseudo-random value in 0-255
public static int random() {
    rngseed = rngseed * 1103515245 + 12345;
    return (rngseed >> 16) & 0xFF;
}

You could also replace divisions by 255 with >>8.
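That is, reusing the names from your loop (sketch; >>8 divides by 256, so it's off by at most one from /255 - fine for an effect like this):

// Same computation as before, with * lightLevel / 255 replaced by >> 8
int valueRed = (((pixel >> 16) & 0xFF) * lightLevel >> 8) + randomNoise;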

Check out the mult() and multRGB() methods here - http://code.google.com/p/praxis/source/browse/ripl/src/net/neilcsmith/ripl/rgbmath/RGBMath.java NB. This will move with my next code commit.

Well, one day we’ll see Java code running on the GPU - http://openjdk.java.net/projects/sumatra/. :wink: In fact, it’s doable now with Aparapi. But seriously, if you want to get it running on the GPU, it’s time to dive into GLSL.

Thanks for all the replies.

As I’ve said, I’m mostly learning the basics of graphics rendering. The use of Java is merely incidental, as I often strip out parts of the standard Java classes I have no interest in using. My point is to try and develop a graphics pipeline that would work equally well (with minor library changes) if I were to port the code to another language, or to a different Java implementation, hence the interest in manipulating data buffers as directly as possible.

My main concern is what strategies to use when approaching rendering of different elements. From this thread I gather the following:

  • When dealing with concrete resources (sprites, backgrounds, etc.) it is best to load them as BufferedImages and compose them together without modification, so the system can accelerate them if possible
  • When dealing with direct manipulation, minimize it by reducing resolution or framerate, and whenever an effect is going to remain static for a while (say, the glow around the player when holding a lamp, which will always be the same for as long as the player has that specific lamp), it is best to render it into a texture to be handled as a concrete resource (see the sketch below)
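For the second point, a minimal sketch of the render-once-then-reuse idea (all names hypothetical):

import java.awt.Graphics2D;
import java.awt.image.BufferedImage;

// Bake the lamp glow once when the lamp changes, then treat the result as a
// normal sprite; its raster is never grabbed, so it can stay managed.
static BufferedImage bakeGlow(int size) {
    BufferedImage glow = new BufferedImage(size, size, BufferedImage.TYPE_INT_ARGB);
    Graphics2D g = glow.createGraphics();
    // ... draw the radial glow here, e.g. with a RadialGradientPaint ...
    g.dispose();
    return glow;
}

Per frame it is then just g.drawImage(glow, playerX - size / 2, playerY - size / 2, null);.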

I will eventually move into better rendering solutions, right now I just want to try doing things by hand to wrap my mind around how things work under the hood, so to speak.

Also, my first project is meant to be retro-limited, as in low resolution, lack of fancy effects and the like.

I may develop possible follow-up projects to run on my BlackBerry (with no OpenGL support), so it is important that I learn to handle things at as low a level as possible.

In any case, thanks for all the information! It is hard enough to find out information on the net that doesn’t directly tell you to use this or that library, so these threads are gold to me! :smiley: