Cas’ image packer in the SPGL libs will do that for you, and spit out an image + XML describing sprite positions. However it’s pretty closely tied to the whole SPGL sprite system, so it might not be useful straight away. At the very least the source might give you a few hints for writing your own.
Yeah, it’s easy to chop out the guts and make it do what you want.
Cas
Thanks Orangy Tang & Cas. I’ll have a look at it.
However, when I go to http://www.lwjgl.org/links.php and click through to SPGL on SourceForge, there’s nothing to download…?
'srite, you need to use CVS to browse the source. There is no SPGL build - it’s a window onto my real working code. So it breaks frequently.
Cas
Say you intend to use many animations, so that (texture) memory on the 3D card could become a problem…
Is it advisable to use 8bpp textures (each with its own palette) in the game? (I would still need 8bpp for the image and 8bpp for the alpha.)
Can OpenGL handle such 8bpp textures + palettes smartly? When I read the OpenGL Red Book there’s a distinction between RGB(A) mode and a so-called color-indexed 8bpp mode… but I guess that refers to the bpp on screen, not just the textures…?
What do you OpenGL experts say…?
A friend tells me that several commercial games today use a kind of 8bpp texture (each with its own color index), and that at resolutions of >= 1024x768 it’s hard for the human eye to distinguish 8bpp from 24bpp textures.
There was an extension for using 8-bit palettised images as textures, but it’s been deprecated, unfortunately. I doubt many current games use anything less than 16-bit textures.
Memory usage isn’t too much of a concern, since OpenGL will automatically swap textures out to system memory. It might slow things down, but it’s not a catastrophe. You might find better results with one of the texture compression extensions if you’ve got loads of huge textures.
[quote]There was an extension for using 8-bit palettised images as textures, but it’s been deprecated, unfortunately.
[/quote]
A pity, but I think the compression extensions you mention can compensate for that.
[quote]Memory usage isn’t too much of a concern, since OpenGL will automatically swap textures out to system memory. It might slow things down, but it’s not a catastrophe. You might find better results with one of the texture compression extensions if you’ve got loads of huge textures.
[/quote]
Yes, I’ll have to use these, because memory is a concern. Since it’s a 2D game (using 3D OpenGL) with hopefully smooth movement, I should avoid letting OpenGL swap texture memory out to system memory where possible.
When using S3TC compression, for example, my question actually becomes moot, because the texture images are always lossily compressed to an S3TC RGB or RGBA format (with various compression ratios), either on the fly by OpenGL or ahead of time by an external tool.
There’s a nice short document on the NVIDIA site giving an overview of how to use GL_ARB_texture_compression and GL_EXT_texture_compression_s3tc: http://developer.nvidia.com/attach/6585
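Checking for those extensions at startup should be simple; here’s a rough Java sketch (the GL11 class and glGetString binding are from a later LWJGL version than the one in this thread, so treat the exact names as an assumption):
[code]
import org.lwjgl.opengl.GL11;

/** Rough sketch: call after the GL context has been created. */
public final class CompressionCheck {
    public static boolean hasExtension(String name) {
        // GL_EXTENSIONS is a space-separated list of supported extension names.
        String extensions = GL11.glGetString(GL11.GL_EXTENSIONS);
        return extensions != null && extensions.indexOf(name) >= 0;
    }

    public static boolean canUseS3tc() {
        return hasExtension("GL_ARB_texture_compression")
            && hasExtension("GL_EXT_texture_compression_s3tc");
    }
}
[/code]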
Unfortunately with a 2D game you will seriously notice compressed textures. I tried it with Alien Flux - looks awful!
You won’t run out of texture RAM, don’t worry.
Cas
[quote]Unfortunately with a 2D game you will seriously notice compressed textures. I tried it with Alien Flux - looks awful!
[/quote]
You mean because of the lossy nature of S3TC compression (like JPEG artefacts)?
That would be bad, indeed. Since the 2D game mainly uses artificial images rather than photographic ones, the artefacts would be noticeable… Well, then maybe compression just as a user option.
[quote]You won’t run out of texture RAM, don’t worry.
[/quote]
Well, I’ll try to quote some numbers soon.
On the compression topic: do all modern 3D cards support S3TC texture compression? Because when, with my ATI 9600, I use glTexImage2D(…) with the internal format parameter set to GL_COMPRESSED_RGB_ARB (or GL_COMPRESSED_RGBA_ARB for textures with alpha), the result is this:
Querying glGetTexLevelParameteriv with GL_TEXTURE_COMPRESSED_ARB reports “is compressed” (> 0).
Asking for GL_TEXTURE_INTERNAL_FORMAT, the returned value is GL_COMPRESSED_RGB_S3TC_DXT1_EXT when the input image was RGB (24bpp), and GL_COMPRESSED_RGBA_S3TC_DXT5_EXT when the input image was RGBA (32bpp).
So maybe I could do this: when the GL_ARB_texture_compression extension is available and the user has enabled the “use compressed textures” game option, I use GL_COMPRESSED_RGBA_ARB in the glTexImage2D() call. This would mean there’s no need to store the textures lossily S3TC-compressed on disc; I can use normal PNGs for both cases, uncompressed and compressed textures, at runtime.
For 3D cards with no texture compression extension, I’d feed the uncompressed textures directly to the card, with the result that OpenGL will regularly swap texture memory to system memory, and unavoidable jerkiness would follow.
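Roughly, the idea in code (an untested sketch; LWJGL-style GL11/ARBTextureCompression bindings are assumed, and error handling and mipmaps are left out):
[code]
import java.nio.ByteBuffer;
import java.nio.IntBuffer;

import org.lwjgl.BufferUtils;
import org.lwjgl.opengl.ARBTextureCompression;
import org.lwjgl.opengl.GL11;

public final class TextureUpload {

    /** Upload RGBA pixels from a PNG, letting the driver compress them if requested. */
    public static void upload(int width, int height, ByteBuffer rgbaPixels,
                              boolean compressionAvailable, boolean userWantsCompression) {
        int internalFormat = GL11.GL_RGBA;
        if (compressionAvailable && userWantsCompression) {
            internalFormat = ARBTextureCompression.GL_COMPRESSED_RGBA_ARB;
        }

        GL11.glTexImage2D(GL11.GL_TEXTURE_2D, 0, internalFormat, width, height, 0,
                GL11.GL_RGBA, GL11.GL_UNSIGNED_BYTE, rgbaPixels);

        if (internalFormat != GL11.GL_RGBA) {
            // Ask the driver what it actually did with the texture.
            IntBuffer result = BufferUtils.createIntBuffer(16);
            GL11.glGetTexLevelParameter(GL11.GL_TEXTURE_2D, 0,
                    ARBTextureCompression.GL_TEXTURE_COMPRESSED_ARB, result);
            boolean compressed = result.get(0) > 0;

            GL11.glGetTexLevelParameter(GL11.GL_TEXTURE_2D, 0,
                    GL11.GL_TEXTURE_INTERNAL_FORMAT, result);
            int actualFormat = result.get(0); // e.g. GL_COMPRESSED_RGBA_S3TC_DXT5_EXT on my ATI 9600

            System.out.println("compressed=" + compressed
                    + " internalFormat=0x" + Integer.toHexString(actualFormat));
        }
    }
}
[/code]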
So much for the theory. Is it going to clash with practice?
Your theory is probably way out compared to the practice. Do you really expect to be able to get 16MB of textures on screen at once in a 2D game? Answer: no, not in a thousand years! The reason we have so much texture ram these days is because in a 3D scene we can easily find ourselves looking at vastly more stuff than a 2D scene. In a 2D scene you’re going to have maybe 3-4 layers of textures covering the whole screen - very unlikely to thrash the cache.
Cas
[quote]Your theory is probably way out compared to the practice. Do you really expect to be able to get 16MB of textures on screen at once in a 2D game?
[/quote]
I should have given more details of how the 2D game is meant to work: three or four background layers scroll constantly and consist of large blocks (256x256 or even 512x512 pixels) so that they can look very interesting. Needless to say, the front layers will need an (8bpp) alpha channel.
Then there are many sprites, again with 8bpp alpha. They are animated, with ~50 frames per animation on average. I pack them onto large texture pages, but even if a sprite is only ~100x100 pixels on average, that comes to ~2 MB for just one animated RGBA sprite (50 frames × 100×100 pixels × 4 bytes ≈ 2 MB). Sigh.
So… a dozen such sprites and several background tiles on screen at once, if they’re RGBA: voilà, you exceed 16 MB of video RAM within a few seconds.
Since it’s constantly scrolling, I’d like to avoid binding and unbinding new textures during a level. On older 3D cards that’s not possible, but oh well. With newer cards compression could help: the NVIDIA document explains that compression ratios from 4:1 up to 8:1 are possible.
Well, that’s the plan.
Y’know, if you sort all your drawing by texture (which you should be doing anyway) I doubt you’d actually notice that you couldn’t fit all your textures into video ram at the same time.
[quote]Y’know, if you sort all your drawing by texture (which you should be doing anyway) I doubt you’d actually notice that you couldn’t fit all your textures into video ram at the same time.
[/quote]
What exactly do you mean by “sort your drawing by texture”? … I pack all animation frames of one sprite into one (or 2-3) large textures, for example 512x512 pixels in size. Is that the kind of sorting you mean?
It just means minimizing calls to glBindTexture() by ordering your sprite drawing to use textures in sequence.
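As a rough illustration (the Sprite type and its methods here are made up for the example):
[code]
import java.util.Collections;
import java.util.Comparator;
import java.util.Iterator;
import java.util.List;

import org.lwjgl.opengl.GL11;

// Minimal made-up Sprite type, just enough to show the idea.
interface Sprite {
    int getTextureID();
    void draw();   // issues the actual textured quad for this sprite
}

class SpriteRenderer {
    /** Draw all sprites, binding each texture at most once per frame. */
    void render(List sprites) {
        // Sort so sprites that share a texture are drawn consecutively.
        Collections.sort(sprites, new Comparator() {
            public int compare(Object a, Object b) {
                return ((Sprite) a).getTextureID() - ((Sprite) b).getTextureID();
            }
        });

        int bound = -1;
        for (Iterator it = sprites.iterator(); it.hasNext();) {
            Sprite s = (Sprite) it.next();
            if (s.getTextureID() != bound) {   // only rebind when the texture changes
                GL11.glBindTexture(GL11.GL_TEXTURE_2D, s.getTextureID());
                bound = s.getTextureID();
            }
            s.draw();
        }
    }
}
[/code]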
Technically glBindTexture can be a very expensive operation, as it can involve uploading a texture from system RAM to VRAM if it’s not cached. Typically this happens at most once per frame, of course; it’s in the cache then and tends to get reused. But as the cache is an LRU cache, if you’ve got 17MB of textures in a single frame and there’s only 16MB of free VRAM, you’ll overwrite them all every single frame and end up uploading 17MB of texture data every single frame. You know when this happens because the frame rate drops to about 10fps.
Cas
[quote] cache is an LRU cache, if you’ve got 17MB of textures in a single frame and there’s only 16MB of free VRAM, you’ll overwrite them all every single frame and end up uploading 17MB of texture data every single frame. You know when this happens because the frame rate drops to about 10fps.
Cas
[/quote]
Do you know of any good benchmarking apps that would let me try and work out for myself why this GF2Go is so incredibly slow for current AF and some JOGL games, but fine for others?
(For instance, Survivor runs considerably faster on this than on a WinXP machine with a CPU twice as fast, twice as much RAM, and more graphics RAM but a puny GF2MX - and, as previously mentioned, it runs Quake3 fine.)
Obviously there are a wide range of “my [card] is bigger than yours” style things that call themselves “benchmarks” but are really just a series of not-very-smart micro-benchmarks only used by crap lazy “hardware reviewers” who’ve never written a single line of high performance code in their life and make statements such as “This card gets 7500 winmarks, whereas that one gets 7511, so the second card is obviously much better in this area” (which means: “I don’t have the faintest clue what this benchmark really does, but it gives me some numbers to post on my website!”).
…but I was kind of hoping there might be some more useful apps - something akin to the Sony PS2 devkits (although much less powerful) which give detailed graphs and stats on how you’re using the graphics pipeline etc so you can see if your code is (ab)using the hw. Perhaps something started by Carmack, as a stick to beat hw providers over the head with? ;D
I’m guessing that if there are any such things, Cas (or someone else here) would probably know of them…?
Or…alternatively, if someone has / wants to write a java-based OGL tool to do stuff like this, perhaps as a “perf-test for [JOGL, LWJGL, etc]” I could do lots of testing for you :).
You don’t have a performance problem; you’ve got a driver problem. Try opening a 16 bit fullscreen window with no alpha, depth, or stencil requirements, at 800x600. If it doesn’t run like the clappers then something in your x config is preventing that mode from getting hardware acceleration.
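Something along these lines will open that kind of mode (a sketch against the later LWJGL 2 Display API; the API around the time of this thread differed, so adapt as needed):
[code]
import org.lwjgl.opengl.Display;
import org.lwjgl.opengl.DisplayMode;
import org.lwjgl.opengl.PixelFormat;

public class FullscreenTest {
    public static void main(String[] args) throws Exception {
        Display.setDisplayMode(new DisplayMode(800, 600));
        Display.setFullscreen(true);
        // 16 bpp colour, no alpha, depth or stencil bits requested, no multisampling.
        Display.create(new PixelFormat(16, 0, 0, 0, 0));

        while (!Display.isCloseRequested()) {
            Display.update();   // just swaps buffers; watch how fast this loop runs
        }
        Display.destroy();
    }
}
[/code]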
Cas
[quote]You don’t have a performance problem; you’ve got a driver problem. Try opening a 16 bit fullscreen window with no alpha, depth, or stencil requirements, at 800x600. If it doesn’t run like the clappers then something in your x config is preventing that mode from getting hardware acceleration.
[/quote]
As I said, Survivor (using Xith + JOGL) works fine. It’s doing an 800x600 window faster than a considerably faster Win32 PC (GFMX). I don’t have a handy fullscreen app that runs at 800x600 (lots of LWJGL games posted here have half-broken screen-mode selection, and the JOGL games can’t do fullscreen, so…).
There isn’t ANY config for screen modes on this machine. It’s all autosensed by the nv driver, except for the refresh-rate override I mentioned earlier (the refresh rate is locked at 60 Hz in the X config; LWJGL games report this as 0 Hz). If you have a hand-crafted X config that does anything better, send me a copy and I’ll analyse it, but I’m just doing everything nv tells you to, and seeing this sometimes-on, sometimes-off behaviour (well, in fact, worse than that - in the Zoltar game I get less than 1 frame every 3 seconds, which I would have thought I could beat in software!)
Also, IIRC, games always report the current mode as hw-accelerated (e.g. I wrote a tiny JOGL app to check which renderer was being used, and it reported the hardware renderer rather than the software one, as expected).
???
[quote]You don’t have a performance problem; you’ve got a driver problem. Try opening a 16 bit fullscreen window with no alpha,
[/quote]
cf. the post I just made here
It seems that just going out of “maximized window mode” is enough to bring performance back to roughly where it should be.
Is there a way I could start one of your games (AF, HG) in windowed mode (perhaps a cmd-line switch if I manually download JARs) so I could test this out with LWJGL too? See if the same action has the same effect?
About your texture-animation problem: why not put all the frames for one animation set on a single texture? I.e. you’d have a run texture and a shooting texture, etc., so that you just change texture coordinates within an animation, and only swap textures when you switch animations. Don’t know if that would work, but I thought I’d suggest it.
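For what it’s worth, picking a frame out of such a sheet is just a texture-coordinate calculation; a rough sketch (the grid layout and all names here are invented for the example):
[code]
import org.lwjgl.opengl.GL11;

class SpriteSheet {
    /** Draw frame 'index' from a sheet laid out as a grid of equal-sized square frames. */
    static void drawFrame(int textureID, int index, int framesPerRow,
                          float frameSize, float sheetSize, float x, float y) {
        float step = frameSize / sheetSize;          // one frame's size in texture coordinates
        float u = (index % framesPerRow) * step;
        float v = (index / framesPerRow) * step;

        GL11.glBindTexture(GL11.GL_TEXTURE_2D, textureID);
        GL11.glBegin(GL11.GL_QUADS);
        GL11.glTexCoord2f(u,        v);        GL11.glVertex2f(x,             y);
        GL11.glTexCoord2f(u + step, v);        GL11.glVertex2f(x + frameSize, y);
        GL11.glTexCoord2f(u + step, v + step); GL11.glVertex2f(x + frameSize, y + frameSize);
        GL11.glTexCoord2f(u,        v + step); GL11.glVertex2f(x,             y + frameSize);
        GL11.glEnd();
    }
}
[/code]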
BTW, I have a related question, since I’m also planning a 2D game with LWJGL. According to the OpenGL tutorials I’ve read, you’re not supposed to use textures above something like 512x512. If that’s the case, then how does one go about doing big objects, like a background, or a huge sprite sheet?