Doing 2D with 3D (OpenGL/Xith/Java3D)

I’d like to use OpenGL (in my case in the form of Xith, which is based on JOGL) to draw 2D sprites for a fast game, so that it’s all HW accelerated and I can use 8-bit alpha blending, blended zooming and rotation, etc.
For example, every object contains a planar rectangle (or 2 triangles) in a Shape3D with a texture. Let’s call this a sprite. The sprite’s Z value is zero (usually).

A Xith user suggested not to use perspective projection because then it’s difficult to control the zoom factor, but to use PARALLEL_PROJECTION. Then use scaling for zoom effects. This works.
However, I couldn’t figure out how to choose the rectangle’s/triangles’ vertex coordinates relative to the texture and screen resolution so that one pixel of the texture maps to exactly one pixel on screen. Is this possible somehow?

Is there maybe a beginner’s guide on how to do 2D (with or without PARALLEL_PROJECTION) with JOGL/Xith/Java3D? Thanks!

Part b) of the question: many textures. I intend to use many textures because most of the sprites will be animated. What would be the fastest and most VRAM-efficient method to swap textures in OpenGL (for animated 2D sprites)?

PS: Since this question probably applies to JOGL, Java3D and Xith alike, I posted it here.

You’ll find buried in the source code of Alien Flux that perspective projection is really really easy to set up such that it’s correct when z=0. Altering z is a billion times easier than scaling it yourself.

In fact to save you the bother of searching…


            // glFrustum(left, right, bottom, top, near, far)
            GL11.glFrustum(
                  - Game.WIDTH / 64.0,    // left
                  Game.WIDTH / 64.0,      // right
                  - Game.HEIGHT / 64.0,   // bottom
                  Game.HEIGHT / 64.0,     // top
                  8,                      // near clip plane
                  65536);                 // far clip plane

initialises the projection matrix and


            // shifts the origin to the corner and puts z=0 at 256 units in front of the eye
            GL11.glTranslatef(-Game.WIDTH / 2.0f, -Game.HEIGHT / 2.0f, -256);

initialises the modelview matrix so that 0,0,0 is in the bottom left of the screen.
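
Both calls go in your init code, inside the usual matrix mode selection (a sketch; the glMatrixMode/glLoadIdentity wrapping is assumed, so adapt it to however you manage your matrices):


            GL11.glMatrixMode(GL11.GL_PROJECTION);
            GL11.glLoadIdentity();
            // ...the glFrustum() call above...

            GL11.glMatrixMode(GL11.GL_MODELVIEW);
            GL11.glLoadIdentity();
            // ...the glTranslatef() call above...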

Cas :slight_smile:

[quote]Part b) of the question: many textures. I intend to use many textures because most of the sprites will be animated. What would be the fastest and most VRAM-efficient method to swap textures in OpenGL (for animated 2D sprites)?
[/quote]
Put all animation frames in one texture and only change texture coordinates for each animation frame instead of having lots of textures and binding a texture for each animation frame.
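
For example, if the frames sit in a regular grid inside the atlas, picking the texture coordinates for frame n is just a bit of arithmetic (a sketch; the class and field names are made up):


    // A texture atlas holding equally sized animation frames in a grid.
    public class SpriteAtlas {
        private final int frameW, frameH;   // frame size in pixels
        private final int atlasW, atlasH;   // texture size in pixels
        private final int columns;          // frames per row

        public SpriteAtlas(int frameW, int frameH, int atlasW, int atlasH) {
            this.frameW = frameW;
            this.frameH = frameH;
            this.atlasW = atlasW;
            this.atlasH = atlasH;
            this.columns = atlasW / frameW;
        }

        // Returns {u0, v0, u1, v1} for frame n. Bind the atlas texture once,
        // then just feed different coordinates to the quad each animation step.
        public float[] uv(int n) {
            int col = n % columns;
            int row = n / columns;
            float u0 = col * frameW / (float) atlasW;
            float v0 = row * frameH / (float) atlasH;
            return new float[] { u0, v0,
                                 u0 + frameW / (float) atlasW,
                                 v0 + frameH / (float) atlasH };
        }
    }

The win is that glBindTexture happens once per atlas (or even once for many different sprites) instead of once per animation frame.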

Thanks Cas and Erik.
Putting the animation frames into one texture is a good idea. :slight_smile: In case they won’t fit into 1024x1024, I’ll have to use a few textures, but that’s still better than hundreds.

Using a frustum with perspective projection also sounds nice. Still, I don’t get why you (Cas) divide by 64.0, and why you chose 8 and 65536 as the near and far values of the frustum?
Also, how do you ensure your “sprite” polygon’s vertex coordinates are such that the mapped texture has a 1:1 zoom ratio? For example, if your texture is 1024x1024 pixels: which polygon vertex coordinates do you have to choose - (1024, 1024)?

OpenGL Redbook, Perspective Viewing Volume Specified by glFrustum()

http://www.parallab.uib.no/SGI_bookshelves/SGI_Developer/books/OpenGL_PG/sgi_html/figures/chap3-5.gif

Just use normal 2D coordinates like you would anywhere else. Just try it and see.

As for those values… I worked them out in my head at the time and can’t quite remember the sums, but It Just Works. (Working it backwards: the frustum window is WIDTH/32 units wide at the near plane z=8, and the translate puts z=0 at 256 units from the eye, i.e. 32 times further away, so the visible area at z=0 is exactly WIDTH x HEIGHT units - one unit per pixel. The 8 and 65536 are just the near and far clip planes, chosen so z=0 sits comfortably between them.)

Cas :slight_smile:

Doing 2D with Xith is a bit of overkill. It’s a scenegraph API - meant for neat 3D environments. In a 2D game the “scene” is just way too simplistic.

So… use either LWJGL or JOGL directly. Take a look at Kev’s OpenGL Space Invaders tutorial game (JOGL) and you’ll see it isn’t that difficult. Set up the projection, put some GL stuff into methods, et voila - pure 2D stuff.
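
For the record, the ortho setup that gives you the 1:1 texel-to-pixel mapping you asked about is tiny (a sketch, assuming a JOGL GL object, a window of width x height pixels, a 64x64 texture already bound, and x/y being wherever you want the sprite):


    // Parallel (orthographic) projection in pixel units:
    // one GL unit == one screen pixel, origin at the bottom left.
    gl.glMatrixMode(GL.GL_PROJECTION);
    gl.glLoadIdentity();
    gl.glOrtho(0, width, 0, height, -1, 1);
    gl.glMatrixMode(GL.GL_MODELVIEW);
    gl.glLoadIdentity();

    // A 64x64 texture drawn on a 64x64 quad then maps its texels 1:1 to pixels.
    gl.glBegin(GL.GL_QUADS);
    gl.glTexCoord2f(0f, 0f); gl.glVertex2f(x, y);
    gl.glTexCoord2f(1f, 0f); gl.glVertex2f(x + 64, y);
    gl.glTexCoord2f(1f, 1f); gl.glVertex2f(x + 64, y + 64);
    gl.glTexCoord2f(0f, 1f); gl.glVertex2f(x, y + 64);
    gl.glEnd();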

Thanks again for your hints.
So Cas suggests using perspective projection, while Onyx’s hint to Kevin’s tutorial uses parallel (orthographic) projection, as the OpenGL FAQ also suggests: OpenGL Technical FAQ, 9 Transformations, section “9.030 How do I draw 2D controls over my 3D rendering?”.

With an OpenGL binding this should work. However, I’d still like to use Xith if possible, because I’m more familiar with it than with pure OpenGL. Well, let’s see. :slight_smile:

Heh… you asked for 2d - so ortho :wink:

Oh, and don’t panic, you won’t need much OpenGL, and as I already pointed out, after a bit of encapsulation it’s just like using a neat 2D API. Kev’s tutorial should give you a good starting point :slight_smile:

[quote]Heh… you asked for 2d - so ortho :wink:
[/quote]
Yes, that was my first thought too, and since the OpenGL FAQ suggests it as well, I’ll probably use it.
However, Cas suggests using perspective projection, and well, he’s the man who wrote a commercial 2D-with-3D-hardware Java game. Btw, it was mentioned on the OpenGL.org portal a few days ago - really nice!

[quote]Oh, and don’t panic, you won’t need much OpenGL, and as I already pointed out, after a bit of encapsulation it’s just like using a neat 2D API. Kev’s tutorial should give you a good starting point :slight_smile:
[/quote]
Yes, it’s nice, I’ve taken a look at it (Kevin should publish books, hehe).

OK, so now with the nice JOGL I draw those planar quad “sprites” on screen, usually with Z=0 and all in parallel projection.
However, there will be several layers of sprites, like background, normal, front, on-top, etc.

I could probably “sort the sprites” before drawing; since it’s not a real 3D game, drawing them from back to front isn’t too difficult (much simpler than with isometric 3D-2D, hehe).
Wait… there’s this nice depth buffer thing in OpenGL we all know and love (and missed a lot back in the retro gaming days). Should I use it for a 2D game? Of course the 3D card then has to do extra work, like clearing the buffer every frame, and for every pixel to be written it needs to compare it against the buffer, etc.

So… is it overkill to use the z-buffer? Or does it “really just” take load off my valuable CPU?

No - do NOT use the depth buffer.

Cas :slight_smile:

It would waste processing power, and besides that, it will lead to a major headache. Z is always 0 -> z-fighting massacre ;D

In most 2D games you don’t need any sorting. The order is always the same: background, background tiles, items, enemies, player, bullets/particles, HUD - just build your drawing loop in that order and everything is OK :slight_smile:

Also if you draw everything (the whole screen) all the time, you can disable clearing (that will result in about +3% speed on older hardware).

[quote]Also if you draw everything (the whole screen) all the time, you can disable clearing (that will result in about +3% speed on older hardware).
[/quote]
John Carmack used a neat z-buffer trick in Quake 2. As he knew he was redrawing the whole screen every frame, he never cleared the depth buffer; he only used half the precision and flipped the depth-test direction every frame. Nice!
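
Roughly, in GL terms the idea looks like this (a sketch of the trick, not his actual code; frame is your frame counter):


    // Never clear the depth buffer: alternate halves of the depth range
    // and flip the test, so each frame's values always beat the previous frame's.
    if ((frame & 1) == 0) {
        gl.glDepthRange(0.0, 0.5);   // even frames write depths in [0.0, 0.5]
        gl.glDepthFunc(GL.GL_LEQUAL);
    } else {
        gl.glDepthRange(1.0, 0.5);   // odd frames write depths in [0.5, 1.0], reversed
        gl.glDepthFunc(GL.GL_GEQUAL);
    }
    // ...render the frame; no glClear(GL.GL_DEPTH_BUFFER_BIT) anywhere...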

A bit convoluted, but I guess if he thought it was worth doing it must have been - on machines of that era, at least.

But yes, for a 2D game the z-buffer will likely cause you more grief than help, unless you’ve got very good reason for using it. Your sprites are likely to be so small compared to the background that you wouldn’t save much overdraw even if you did take full advantage of the z-buffer.

The real reason for not using the depth buffer is that you will then no longer be able to blend antialiased sprites (and fonts) with the background, as you’ll have to use the alpha test. Result == ugly as hell.

Cas :slight_smile:

[quote]The real reason for not using the depth buffer is that you will then no longer be able to blend antialiased sprites (and fonts) with the background, as you’ll have to use the alpha test. Result == ugly as hell.
[/quote]
Oh… Now I think this confuses me a bit.

My sprites are in RGBA format. The alpha channel is 8 bits because in the future they will be anti-aliased, like in your example. Currently I use alpha 255 for visible sprite pixels and alpha 0 for invisible sprite pixels.
To make this work with the full alpha range (in the future) I have to enable blending and use a blend function, don’t I? So I do:[quote]
gl.glEnable(GL.GL_BLEND);
gl.glBlendFunc(GL.GL_SRC_ALPHA, GL.GL_ONE_MINUS_SRC_ALPHA);
[/quote]
Well, it looks like it works; however, I can’t see a difference with or without the depth buffer…

Maybe it’s a different story when I don’t use blending, but just alpha testing? Like with this:[quote]
gl.glEnable(GL.GL_ALPHA_TEST);
gl.glAlphaFunc(GL.GL_GREATER, 0);
[/quote]
Then the sprite’s edges are sharp (not blended) but I don’t think I like the result…

That’s because you’re not using any intermediate alpha values.

Anyway, it’s a bit slower and needs more video RAM. So don’t :stuck_out_tongue:

Cas :slight_smile:

[quote]That’s because you’re not using any intermediate alpha values.
[/quote]
Ah, I see, it’s because currently my alpha channel consists only of 0s and 255s.

[quote]Anyway, it’s a bit slower and needs more video RAM. So don’t :stuck_out_tongue:
[/quote]
So… do you suggest using a 2-bit alpha mask for the sprites, and not using blending but glAlphaFunc instead? Or maybe I’m getting you wrong - I’m afraid I’ve lost the… context. :slight_smile:

Dunno if it’s possible to use an RGBA image with only two bits of alpha (though I think one of the texture compression modes allows it). But alpha testing rather than blending will be much faster for the same results - a single comparison instead of two multiplies, and that’s before you start worrying about the framebuffer reads needed.

[quote]But alpha testing rather than blending will be much faster for the same results - a single comparison instead of two multiplies, and that’s before you start worrying about the framebuffer reads needed.
[/quote]
I see. :slight_smile:

[quote]Put all animation frames in one texture and only change texture coordinates for each animation frame instead of having lots of textures and binding a texture for each animation frame.
[/quote]
Well then, say for each frame of a sprite you’ve got a bitmap file on your hard disk.
Is there maybe a nice editor/tool that would paste the bitmap files into one big texture (or several, in case 512x512 or 1024x1024 can’t hold all the animation frames) and produce the needed UV float values?
For a start this tool could place the frames on a regular grid, but if it’s really clever it could pack the differently sized frames as tightly as possible so that you save (3D card) memory at runtime…

Since there are many 2D games using 3D hardware out there, I wonder if such a cool tool already exists… :slight_smile:
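
If nothing like that turns up, the “regular placement” version is only a few lines with ImageIO. A rough sketch (it just places the frames left to right, top to bottom, prints the UVs, and doesn’t attempt any clever tight packing; it also doesn’t check whether everything fits vertically - split into a second atlas if it doesn’t):


    import java.awt.Graphics2D;
    import java.awt.image.BufferedImage;
    import java.io.File;
    import javax.imageio.ImageIO;

    // Pastes the frame bitmaps given on the command line into one atlas texture
    // and prints each frame's UV rectangle.
    public class AtlasPacker {
        public static void main(String[] args) throws Exception {
            int size = 1024;   // target texture size
            BufferedImage atlas = new BufferedImage(size, size, BufferedImage.TYPE_INT_ARGB);
            Graphics2D g = atlas.createGraphics();

            int x = 0, y = 0, rowHeight = 0;
            for (int i = 0; i < args.length; i++) {
                BufferedImage frame = ImageIO.read(new File(args[i]));
                if (x + frame.getWidth() > size) {   // start a new row
                    x = 0;
                    y += rowHeight;
                    rowHeight = 0;
                }
                g.drawImage(frame, x, y, null);
                System.out.println(args[i]
                        + " u0=" + x / (float) size + " v0=" + y / (float) size
                        + " u1=" + (x + frame.getWidth()) / (float) size
                        + " v1=" + (y + frame.getHeight()) / (float) size);
                x += frame.getWidth();
                rowHeight = Math.max(rowHeight, frame.getHeight());
            }
            g.dispose();
            ImageIO.write(atlas, "png", new File("atlas.png"));
        }
    }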