LWJGL: how would I draw an image?

Hee hee, I’m the same as Cero ;D “way too low level for me”.

libgdx uses stb_image by Sean Barrett (for most but not all backends; for GWT, e.g., different shenanigans are involved). It’s fast, it’s C (so it works on the desktop and Android), and it decodes into a format suitable for OpenGL. There is a thin native layer over the stb_image stuff called gdx2d for working with the decoded image data, drawing primitives, blending, etc. This is all hidden by the Pixmap class, which has a Java API for manipulating the data and provides a ByteBuffer containing the pixels. Textures can be created from this ByteBuffer, though most often a Texture is created from a file and a Pixmap is used under the covers.
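
For reference, here’s a minimal libGDX sketch of that whole path (the file name is made up): decode an image into a Pixmap, upload it as a Texture, and draw it with a SpriteBatch. Usually you’d just construct the Texture straight from the file and let the Pixmap stay under the covers.

Pixmap pixmap = new Pixmap(Gdx.files.internal("image.png")); // decoded by gdx2d/stb_image
Texture texture = new Texture(pixmap);                       // uploads the pixel ByteBuffer to OpenGL
pixmap.dispose();                                            // the pixels now live on the GPU

// in render(), assuming a SpriteBatch called 'batch':
batch.begin();
batch.draw(texture, 100, 100);
batch.end();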

But yeah, with only LWJGL you need to decode an image to bytes OpenGL can use, which can be nontrivial depending on the image format. Often, on the desktop, AWT is used to do the decoding. I believe this is what slick-util does.
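
Here’s a hedged sketch of that approach, assuming LWJGL 2 (static import of org.lwjgl.opengl.GL11.*) and a PNG on disk: ImageIO does the decoding, the ARGB pixels get repacked into an RGBA ByteBuffer, and glTexImage2D uploads it. Note that ImageIO hands you rows top-to-bottom, so the texture comes out flipped vertically unless you compensate.

BufferedImage image = ImageIO.read(new File("image.png")); // AWT does the decoding
int width = image.getWidth(), height = image.getHeight();

int[] argb = image.getRGB(0, 0, width, height, null, 0, width);
ByteBuffer buffer = BufferUtils.createByteBuffer(width * height * 4);
for (int pixel : argb) {
    buffer.put((byte) ((pixel >> 16) & 0xFF)); // R
    buffer.put((byte) ((pixel >> 8) & 0xFF));  // G
    buffer.put((byte) (pixel & 0xFF));         // B
    buffer.put((byte) ((pixel >> 24) & 0xFF)); // A
}
buffer.flip();

int texture = glGenTextures();
glBindTexture(GL_TEXTURE_2D, texture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, buffer);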

Heh, it’s so hard to understand why others hate your own hobby. ;D

I don’t hate it, it’s just…
If I didn’t actually want to make games, then fiddling around with it would be fun, I guess.
Well, the fact that it’s horribly designed still stands, of course.
Maybe OpenGL 3+ is better, but of course I won’t use it until it’s available on 90%+ of PCs…

Slick-Util uses Matthias’ PNGDecoder, Kev’s TGA decoder, and AWT for other formats. The decoders are acceptable – it’s the texture loader that makes Slick-Util a “bad” and outdated library. It doesn’t support or include things you’d want to use in a modern OpenGL game – like automatic mipmapping, non-power-of-two sizes, wrap modes, compressed textures, custom internal formats, multiple texture targets (cube maps, 1D/3D textures, arrays), etc. As far as texture loaders go, it’s really just the bare minimum, and not something any serious OpenGL user should rely on. :)
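
For context, most of those features are just a few raw GL calls once you have the decoded data; a sketch (plain GL11/GL30, not Slick-Util API):

glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);                   // wrap modes
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR); // mipmapped filtering
GL30.glGenerateMipmap(GL_TEXTURE_2D); // automatic mipmaps on GL3-class hardware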

A better texture library would support the features I mentioned earlier, as well as decoding without the need for AWT. LibGDX is pretty close, although because of its Android focus it doesn’t seem too serious about DXTn texture compression or 1D/3D/array textures, etc. I have my own WIP texture library (using Matthias’ decoders) that tries to include many of these features, as well as a simple DDS decoder. It might serve as inspiration for your own texture loaders:

I know we have OpenGL bindings, but do we have any more abstraction than that? :clue:
There’s obviously a need for some standard tools.

http://store.steampowered.com/hwsurvey/

Oh, look, it almost is, since OpenGL 3 works on Windows XP too: 40.94 + 37.59 + 12.95 = 91.48%. Add that you also have Linux and Mac (or not, because of drivers?) and that should compensate for it.

(EDIT: I thought you wanted 95%+, therefore my “almost”.)

[quote]texture compression
[/quote]
This is relevant to my interests.

[quote]This is relevant to my interests.
[/quote]
It’s mostly used to lower the amount of VRAM needed when lots of textures have to be in memory at the same time. It can also improve performance if your program is heavily limited by memory bandwidth, since it trades processing power for less memory traffic.

It’s really easy to use, I think; you just change the internal texture format when allocating the texture:


glTexImage2D(GL_TEXTURE_2D, 0, compressedFormat, width, height, 0, dataFormat, dataType, data);
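
For example (a sketch assuming LWJGL 2 and plain RGBA input), asking the driver to compress to DXT5 at upload time:

glTexImage2D(GL_TEXTURE_2D, 0,
        EXTTextureCompressionS3TC.GL_COMPRESSED_RGBA_S3TC_DXT5_EXT, // compressed internal format
        width, height, 0,
        GL_RGBA, GL_UNSIGNED_BYTE, data);                           // the input data is still uncompressed RGBA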

Basic compression formats can be found in the EXT_texture_compression_s3tc extension, which is supported by literally everything.

If you already have compressed texture data on disk that uses one of the compression algorithms OpenGL supports, you can load the already compressed data with glCompressedTexImage2D(), which works identically to glTexImage2D() but expects compressed data.
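
A sketch of that, assuming the DXT5 blocks for the mip level are already sitting in a ByteBuffer (for example read straight out of a DDS file):

glBindTexture(GL_TEXTURE_2D, texture);
GL13.glCompressedTexImage2D(GL_TEXTURE_2D, 0,
        EXTTextureCompressionS3TC.GL_COMPRESSED_RGBA_S3TC_DXT5_EXT,
        width, height, 0, compressedData); // no format/type arguments: the data is already compressed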

The different S3TC algorithms compress 4x4 blocks of pixels down to either 64 or 128 bits. I don’t know much about the quality, but expect to take a pretty big hit there considering the huge compression ratio. Since the algorithms work on 4x4 blocks of pixels, it might be a good idea to at least keep texture dimensions at multiples of 4. The spec says that the padding pixels are undefined, which might screw up linear filtering even with clamping since it technically isn’t the texture edge, but this is just speculation.
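
To put rough numbers on that (just arithmetic, for a hypothetical 1024x1024 RGBA texture):

int w = 1024, h = 1024;
int rgba8 = w * h * 4;              // 4 bytes per pixel      -> 4 MB
int dxt1  = (w / 4) * (h / 4) * 8;  // 64 bits per 4x4 block  -> 512 KB (8:1)
int dxt5  = (w / 4) * (h / 4) * 16; // 128 bits per 4x4 block -> 1 MB (4:1)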

BPTC is a newer compression mode which uses slightly more memory but has much better quality. However, it’s only implemented by OpenGL 3+ GPUs. Since those GPUs generally have more memory than the GPUs that don’t support it, it can be a good idea to choose the most advanced compression algorithm available at runtime, since it’s literally only a single changed parameter to glTexImage2D().
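
A hedged sketch of picking the best format the driver exposes at runtime (class and field names are from LWJGL 2; adjust for your binding):

ContextCapabilities caps = GLContext.getCapabilities();
int internalFormat;
if (caps.GL_ARB_texture_compression_bptc) {
    internalFormat = ARBTextureCompressionBPTC.GL_COMPRESSED_RGBA_BPTC_UNORM_ARB;
} else if (caps.GL_EXT_texture_compression_s3tc) {
    internalFormat = EXTTextureCompressionS3TC.GL_COMPRESSED_RGBA_S3TC_DXT5_EXT;
} else {
    internalFormat = GL_RGBA8; // no compression available, fall back to uncompressed
}
glTexImage2D(GL_TEXTURE_2D, 0, internalFormat, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, data);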

Any questions?

EDIT: Comparison: http://www.g-truc.net/post-0340.html