Painless way of loading textures

Is there a really painless way of loading textures, one that just works without a lot of decisions and messing about?

So far, I’ve looked at org.newdawn.slick.opengl.TextureLoader, and that seems very complicated. The class pretty much relies on org.newdawn.slick.opengl.InternalTextureLoader, which in turn relies on a lot more Slick-specific classes, and in the end I don’t feel I have the same customizability I would have without it.

Then I looked at org.newdawn.spaceinvaders.lwjgl.TextureLoader, and that requires me to make decisions about things I don’t understand, such as magnification filters and different pixel formats.

While I understand how to use these classes, I don’t really want to, because I don’t understand how they work.

How do you guys handle this? How do textures in OpenGL even work?

The best way to learn is to dive into the raw API, not some high-level library.

There are many tutorials that explain OpenGL texture loading in an easy-to-follow manner. Yes, there will be a mention of magnification filters, but it’s not like that needs more than 3 lines of text to explain the gist of it. It’s all pretty simple, don’t let the abstraction layers of libraries fool you.

Seemed like a great idea. Then I went to http://www.opengl.org/sdk/docs/man/, and I have no clue how to use any of the texture-related methods.
Afterwards I headed over to http://www.opengl.org/sdk/docs/tutorials/, which is even less helpful; most of the articles there are dead.

Where is the good basic documentation that shows you how to use these things, at a basic level?

Matthias Mann wrote a very good texture loading library. However, if you want to learn OpenGL and understand exactly what’s going on, like Riven said, you should dive into OpenGL more directly.

Texture creation looks like this:

  • Decode your texture into a readable format (e.g. RGBA)
  • Generate an ID with glGenTextures
  • Bind the texture, set up any filtering/wrap modes with glTexParameteri
  • Upload the texture data from step one using glTexImage2D

It’s not very painful compared to many other aspects of OpenGL, but for somebody who is used to Java2D it may seem very long-winded.

Here’s some code from my own library, utilizing Matthias’ PNGDecoder. It also corrects NPOT images if necessary.

	//there are other targets, but this is what you'll generally be using
	private final int TEXTURE_TARGET = GL11.GL_TEXTURE_2D;
	//we can set this to true to ensure that ALL textures are created with POT sizes
	private final boolean FORCE_POWER_OF_TWO = true;

	private int imageWidth, imageHeight;
	private int texWidth, texHeight;
	private float normalizedWidth=1f, normalizedHeight=1f;
	private int texture;

	public void initTexture() throws IOException {
		ContextCapabilities caps = GLContext.getCapabilities();
		ByteBuffer buf = null;
		int numComponents;

		//Decode the image using Matthias Mann's PNGDecoder
		try {
			URL url = TextureTest.class.getClassLoader().getResource("res/1.png");
			InputStream in = url.openStream();
			PNGDecoder decoder = new PNGDecoder(in);
			imageWidth = texWidth = decoder.getWidth();
			imageHeight = texHeight = decoder.getHeight();
			if (imageWidth==0||imageHeight==0)
				throw new IOException("image is zero sized");

			//this is the format we will tell PNGDecoder to decode to
			PNGDecoder.Format format = PNGDecoder.Format.RGBA;

			//"num components" is how many color components exist in this format
			//GL_RGBA - 4 components (red, green, blue, alpha)
			//GL_RGB - 3 components, no alpha
			numComponents = format.getNumComponents();

			try {
				//we create a ByteBuffer to hold our array of pixels (i.e. image)
				buf = BufferUtils.createByteBuffer(imageWidth * imageHeight * numComponents);
				decoder.decode(buf, imageWidth*numComponents, format);
			} finally {
				try { in.close(); }
				catch (IOException e) {}
			}
			buf.flip();
		} catch (IOException e) {
			//wrap the cause so the original stack trace isn't lost
			throw new RuntimeException("error decoding the image", e);
		}

		//whether NPOT textures size is supported
		boolean npotSupported = caps.GL_ARB_texture_non_power_of_two; 

		//on some systems NPOT might not be supported, also we may want to force POT for efficiency
		boolean usePOT = !npotSupported || FORCE_POWER_OF_TWO;
		if (usePOT) {
			texWidth = toPowerOfTwo(imageWidth);
			texHeight = toPowerOfTwo(imageHeight);
		}

		//get an ID handle for the OpenGL texture
		texture = GL11.glGenTextures();

		//bind it 
		GL11.glBindTexture(TEXTURE_TARGET, texture);

		//to be safe, reset our unpack values 
		GL11.glPixelStorei(GL11.GL_UNPACK_ROW_LENGTH, 0);
		GL11.glPixelStorei(GL11.GL_UNPACK_ALIGNMENT, 1);

		//no mipmapping; just regular LINEAR filtering
		GL11.glTexParameteri(TEXTURE_TARGET, GL11.GL_TEXTURE_MAG_FILTER, GL11.GL_LINEAR);
		GL11.glTexParameteri(TEXTURE_TARGET, GL11.GL_TEXTURE_MIN_FILTER, GL11.GL_LINEAR);

		//this is the "internal format" of the texture; could also be RGB, LUMINANCE or RED, for example
		int internalFormat = GL11.GL_RGBA;
		//this is the "data format", i.e. how the ByteBuffer is organized (we told PNGDecoder to use RGBA)
		int dataFmt = GL11.GL_RGBA;
		int dataType = GL11.GL_UNSIGNED_BYTE;

		//if we need to correct the texture to a power-of-two size...
		if (texWidth!=imageWidth || texHeight!=imageHeight) {
			//OpenGL allows you to pass a "null" ByteBuffer, however it seems unreliable on my system
			//here's a workaround:
			ByteBuffer emptyData = BufferUtils.createByteBuffer(texWidth * texHeight * numComponents);

			//the full texture
			GL11.glTexImage2D(TEXTURE_TARGET, 0, internalFormat, texWidth, texHeight, 0, dataFmt, dataType, emptyData);

			//now upload the non-power-of-two image using the decoded PNG data
			GL11.glTexSubImage2D(TEXTURE_TARGET, 0, 0, 0, imageWidth, imageHeight, dataFmt, dataType, buf);

			//and we adjust the texcoord values to match the new ratio
			normalizedWidth = imageWidth / (float)texWidth;
			normalizedHeight = imageHeight / (float)texHeight;
		} else { //the image is POT or NPOT is supported... no worries
			GL11.glTexImage2D(TEXTURE_TARGET, 0, internalFormat, texWidth, texHeight, 0, dataFmt, dataType, buf);
		}
	}
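The toPowerOfTwo helper referenced above isn’t shown in the post; a minimal sketch (my own, not from the original library) that rounds up to the next power of two might look like this:

```java
//rounds n up to the nearest power of two, e.g. 100 -> 128, 64 -> 64
//sketch of the toPowerOfTwo helper used in initTexture()
static int toPowerOfTwo(int n) {
	int pot = 1;
	while (pot < n)
		pot <<= 1;
	return pot;
}
```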

Then your textures should be ready to go. If you want to render them as 2D sprites in ortho view, the code might look like this:

		float x = 50, y = 50;

		GL11.glEnable(TEXTURE_TARGET);
		GL11.glBindTexture(TEXTURE_TARGET, texture);
		GL11.glBegin(GL11.GL_QUADS);
			GL11.glTexCoord2f(0f, 0f); //TOP LEFT
			GL11.glVertex2f(x, y); 
			GL11.glTexCoord2f(0f, normalizedHeight); //BOTTOM LEFT
			GL11.glVertex2f(x, y+imageHeight);
			GL11.glTexCoord2f(normalizedWidth, normalizedHeight); //BOTTOM RIGHT 
			GL11.glVertex2f(x+imageWidth, y+imageHeight); 
			GL11.glTexCoord2f(normalizedWidth, 0f); //TOP RIGHT
			GL11.glVertex2f(x+imageWidth, y); 
		GL11.glEnd();
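To see what the texcoord correction actually does, here is a small standalone example with hypothetical sizes (not taken from the code above): a 100×60 image padded into a 128×64 POT texture.

```java
//hypothetical sizes: a 100x60 image padded into a 128x64 POT texture
int imageWidth = 100, imageHeight = 60;
int texWidth = 128, texHeight = 64;

//same math as in initTexture(): the fraction of the texture the image occupies
float normalizedWidth = imageWidth / (float) texWidth;    // 0.78125
float normalizedHeight = imageHeight / (float) texHeight; // 0.9375
```

So the quad would sample texcoords from (0, 0) to (0.78125, 0.9375), skipping the empty padding on the right and bottom.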

	Texture texture = new Texture("something.png");

That looks pretty simple. You guess the lib. :stuck_out_tongue:

If you want to dig into how it works, there are two parts to the problem. First, decode the image from PNG or JPG or whatever into bytes that represent the unencoded bitmap. Second, upload that data to the GPU.

For libgdx, there is a Pixmap class that is used to load and decode the image, and then Texture just binds and makes the GL call to upload the data. Texture is a bit more complex than that, because it allows customizing texture loading and reloading textures if the GL context is lost. The real guts of Pixmap are actually in Gdx2DPixmap, which is a native wrapper over gdx2d, a tiny native lib that uses stb_image (a small lib for loading images) for image decoding and adds things like converting between GL formats and drawing lines and circles.
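Those two steps can also be written out explicitly; a sketch using the real libgdx Pixmap and Texture classes (it needs a running libgdx application so a GL context exists):

```java
//decode step: Pixmap loads and decodes the file to raw pixels (via gdx2d/stb_image)
Pixmap pixmap = new Pixmap(Gdx.files.internal("something.png"));

//upload step: Texture generates a GL handle, binds it and uploads the pixmap data
Texture texture = new Texture(pixmap);

//once the data is on the GPU the CPU-side copy can be freed
pixmap.dispose();
```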

The native stuff is easy to read, inline with the Java source, due to Mario’s super fancy “gdx-jnigen” build tool:
https://code.google.com/p/libgdx/source/browse/trunk/gdx/src/com/badlogic/gdx/graphics/g2d/Gdx2DPixmap.java#235
Crazy, but cool! 8)