Thanks for the replies.
Now, I do understand that if I'm meddling with an image's internal pixel data, the work is handled by the CPU.
From what you've explained, I'm guessing the usual way to draw graphics is to keep the graphic elements (say, sprites and backgrounds) in VRAM and composite each frame by copying from those cached copies.
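Just to check my mental model, here's a minimal sketch of what I mean, assuming createCompatibleImage is the right way to get cacheable images (the sizes and names are mine):

```java
import java.awt.*;
import java.awt.image.BufferedImage;

public class ComposeSketch {
    public static void main(String[] args) {
        GraphicsConfiguration gc = GraphicsEnvironment.getLocalGraphicsEnvironment()
                .getDefaultScreenDevice().getDefaultConfiguration();

        // Compatible images match the screen's pixel format, which gives
        // Java2D its best chance of caching them in VRAM as "managed" images.
        BufferedImage background = gc.createCompatibleImage(320, 200, Transparency.OPAQUE);
        BufferedImage sprite = gc.createCompatibleImage(32, 32, Transparency.BITMASK);

        // Per frame: only draw FROM the cached images into the frame;
        // never touch their rasters, or the work drops back to the CPU.
        BufferedImage frame = gc.createCompatibleImage(320, 200, Transparency.OPAQUE);
        Graphics2D g = frame.createGraphics();
        g.drawImage(background, 0, 0, null);
        g.drawImage(sprite, 100, 80, null);
        g.dispose();
    }
}
```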
I also understand that certain operations (rotation, scaling, shearing…) need to be supported by the GPU, or else the rendering falls back to software.
Now, as far as sprites, tiles, or GUI elements are concerned, that seems easy to handle.
What I'm somewhat concerned about is full-screen effects.
One example is the static filter I'm using for testing; others would be screen glows, or using masks to apply lighting effects…
On one hand, I've read that blitting of alpha-enabled images isn't always accelerated by the GPU, so creating masks as BufferedImages and then applying them might not be a good way to do it.
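To at least see what my own platform does, I figured I could probe an image's capabilities with something like this sketch (my assumption; and since caching is lazy, isAccelerated() may only turn true after the image has actually been drawn a few times):

```java
import java.awt.*;
import java.awt.image.BufferedImage;

public class AlphaAccelCheck {
    public static void main(String[] args) {
        GraphicsConfiguration gc = GraphicsEnvironment.getLocalGraphicsEnvironment()
                .getDefaultScreenDevice().getDefaultConfiguration();

        // A full-screen translucent mask, the kind I'd use for lighting.
        BufferedImage mask = gc.createCompatibleImage(320, 200, Transparency.TRANSLUCENT);

        // Ask whether the pipeline currently keeps an accelerated copy of it.
        System.out.println("Translucent mask accelerated: "
                + mask.getCapabilities(gc).isAccelerated());
    }
}
```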
On the other hand, some effects might need per-pixel control (static being an example, although I can think of a few ways to simulate static with a set of pre-calculated alpha masks, sketched below).
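For example, the pre-calculated masks I have in mind would look roughly like this (all names and the alpha value are made up; the masks are built once on the CPU, then just blitted each frame instead of writing pixels every frame):

```java
import java.awt.*;
import java.awt.image.BufferedImage;
import java.util.Random;

public class StaticMasks {
    // Build a handful of noise frames once at startup.
    static BufferedImage[] buildMasks(GraphicsConfiguration gc, int count, int w, int h) {
        Random rng = new Random();
        BufferedImage[] masks = new BufferedImage[count];
        for (int i = 0; i < count; i++) {
            BufferedImage m = gc.createCompatibleImage(w, h, Transparency.TRANSLUCENT);
            for (int y = 0; y < h; y++) {
                for (int x = 0; x < w; x++) {
                    int gray = rng.nextInt(256);
                    int alpha = 64; // faint overlay, tweak to taste
                    m.setRGB(x, y, (alpha << 24) | (gray << 16) | (gray << 8) | gray);
                }
            }
            masks[i] = m;
        }
        return masks;
    }
    // Per frame, cycle through them:
    //   g.drawImage(masks[frame % masks.length], 0, 0, null);
}
```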
So how would you suggest these be handled? (And "AWT can't do these things reliably" is a valid answer.)
For the record, I'm asking now because I'm building the rendering pipeline as I learn; better to decide this now than to backtrack later.
Also, I’ve noticed slowdowns when experimenting with large resolutions. My target game resolution is around 320x200, so the CPU will probably be able to handle it. This VolatileImage discussion is mostly educational for me.
And again, thanks for your time.