Non-Power-Of-Two Texture Utility?

I’ve been reading through JOGL’s Texture and TextureData classes trying to handle mipmapping and NPOT textures, and I was wondering if someone has a nice utility class I could look at as an example, since Texture and TextureData seem overly complicated.

I have several textures of various sizes that I’m trying to display, and I’m trying to understand how to handle them properly. At the moment the content displays, but it looks strange (sort of stretched).

If you have NPOT texture support and use the GL_TEXTURE_2D target, mipmapping should work basically as it does for power-of-two textures: each mip level halves each dimension (rounding down), clamping at 1 pixel.
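
For what it’s worth, here’s a minimal sketch of what I mean, assuming an OpenGL 1.4+ context (for GL_GENERATE_MIPMAP) with NPOT support and the same kind of JOGL GL handle used elsewhere in this thread; the class and method names are made up:

import java.nio.ByteBuffer;
import javax.media.opengl.GL;

/** Sketch only: upload an NPOT image to GL_TEXTURE_2D and let the driver build mipmaps. */
final class MipmapSketch {
	static void uploadWithMipmaps(GL gl, int texId, int width, int height, ByteBuffer pixels) {
		gl.glBindTexture(GL.GL_TEXTURE_2D, texId);
		// Rebuild the whole mip chain whenever level 0 changes; each level halves
		// each dimension (rounding down) and clamps at 1 pixel, NPOT or not.
		gl.glTexParameteri(GL.GL_TEXTURE_2D, GL.GL_GENERATE_MIPMAP, GL.GL_TRUE);
		gl.glTexParameteri(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_MIN_FILTER, GL.GL_LINEAR_MIPMAP_LINEAR);
		gl.glTexParameteri(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_MAG_FILTER, GL.GL_LINEAR);
		gl.glTexImage2D(GL.GL_TEXTURE_2D, 0, GL.GL_RGBA, width, height, 0,
				GL.GL_RGBA, GL.GL_UNSIGNED_BYTE, pixels);
	}
}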

If you are using GL_TEXTURE_RECTANGLE_ARB, that is a whole different texture target with different rules: it isn’t allowed to have mipmaps, its texture coordinates aren’t normalized, and some wrap modes aren’t supported.
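
To make the difference concrete, here’s a rough sketch (not one of my utility classes) of drawing with a rectangle texture; note the pixel-space texture coordinates and the non-mipmapped filter:

import javax.media.opengl.GL;

/** Sketch only: GL_TEXTURE_RECTANGLE_ARB uses pixel coordinates and cannot be mipmapped. */
final class RectTextureSketch {
	static void drawFullQuad(GL gl, int texId, int texWidth, int texHeight) {
		gl.glEnable(GL.GL_TEXTURE_RECTANGLE_ARB);
		gl.glBindTexture(GL.GL_TEXTURE_RECTANGLE_ARB, texId);
		// Only GL_NEAREST/GL_LINEAR are valid here; mipmapped min filters are not.
		gl.glTexParameteri(GL.GL_TEXTURE_RECTANGLE_ARB, GL.GL_TEXTURE_MIN_FILTER, GL.GL_LINEAR);
		gl.glBegin(GL.GL_QUADS);
		// Coordinates run from 0..texWidth and 0..texHeight in pixels, not 0..1.
		gl.glTexCoord2f(0, 0);                gl.glVertex2f(-1, -1);
		gl.glTexCoord2f(texWidth, 0);         gl.glVertex2f(1, -1);
		gl.glTexCoord2f(texWidth, texHeight); gl.glVertex2f(1, 1);
		gl.glTexCoord2f(0, texHeight);        gl.glVertex2f(-1, 1);
		gl.glEnd();
	}
}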

Unfortunately, all of my utility classes are fairly engine-specific and abstract, so they wouldn’t help much, but if you could post some example code I’d be glad to go through it.

I’m a bit of a noob when it comes to OpenGL, but what I did was just convert all my textures to power-of-two sizes so I didn’t have to worry about whether they would work or not. Not sure if this is a good idea?

Here’s the code to convert them:


int aimWidth = bitmap.getWidth();
int aimHeight = bitmap.getHeight();

// Work out the correct width: if log2(width) isn't a whole number,
// the width isn't a power of two, so round it up to the next one.
double log2Width = Math.log(bitmap.getWidth()) / Math.log(2.0);
if (log2Width != Math.floor(log2Width)) {
	aimWidth = (int) Math.pow(2, Math.ceil(log2Width));
}

// Work out the correct height the same way.
double log2Height = Math.log(bitmap.getHeight()) / Math.log(2.0);
if (log2Height != Math.floor(log2Height)) {
	aimHeight = (int) Math.pow(2, Math.ceil(log2Height));
}

// Resize (always do this, as it also ensures that the bitmap is compatible with the OpenGL chip)
Bitmap bitmapOld = bitmap;
bitmap = Bitmap.createScaledBitmap(bitmap, aimWidth, aimHeight, false);
bitmapOld.recycle();
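
As an aside, the same round-up can be done with plain integer math, which avoids comparing floating-point log values; this is just a sketch of an alternative, not what I actually run:

final class PowerOfTwo {
	/** Rounds n up to the next power of two (returns n unchanged if it already is one). */
	static int nextPowerOfTwo(int n) {
		int p = 1;
		while (p < n) {
			p <<= 1;
		}
		return p;
	}
}

Then aimWidth would just be PowerOfTwo.nextPowerOfTwo(bitmap.getWidth()), and likewise for the height.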

Ranger, I’ve thought about that, but since I have so many non-standard textures I fear the memory increase would be drastic and I want to avoid that if possible.

lhkbob, I’ve been using TextureRenderer and am trying to migrate to my own texture handling using VBOs and GLSL. I’ve been using GL_TEXTURE_2D, and for my simple tests everything looks fine, but in my more complex ones the text looks “scrunched” and some of the colors look slightly odd.

I’m using a BufferedImage.TYPE_4BYTE_ABGR and doing the following with it to create a texture:

if ((image.getType() != BufferedImage.TYPE_4BYTE_ABGR)
		&& (image.getType() != BufferedImage.TYPE_3BYTE_BGR)
		&& (image.getType() != BufferedImage.TYPE_CUSTOM)) {
	throw new RuntimeException("Unhandled BufferedImage format. Use 4BYTE_ABGR or 3BYTE_BGR. Type: " + image.getType());
}

// Copy the raster row by row into a direct byte buffer
// (x, y, width, height and textureId come from the enclosing method).
boolean alpha = image.getColorModel().hasAlpha();
int byteCount = alpha ? 4 : 3;
byte[] data = new byte[width * byteCount];
WritableRaster raster = image.getRaster();
ByteBuffer pixels = ByteBuffer.allocateDirect((width * height) * byteCount);
pixels.order(ByteOrder.nativeOrder());
for (int i = 0; i < height; i++) {
	raster.getDataElements(x, y + i, width, 1, data);
	pixels.put(data);
}
pixels.flip();

// Generate a texture id the first time through, otherwise reuse the existing one.
int[] id = new int[] { textureId };
if (textureId == -1) {
	gl.glGenTextures(1, id, 0);
}

gl.glBindTexture(GL.GL_TEXTURE_2D, id[0]);

gl.glTexParameteri(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_MAG_FILTER, GL.GL_LINEAR);
gl.glTexParameteri(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_MIN_FILTER, GL.GL_LINEAR);

gl.glTexImage2D(GL.GL_TEXTURE_2D, 0, alpha ? GL.GL_RGBA : GL.GL_RGB, width, height, 0,
		alpha ? GL.GL_RGBA : GL.GL_RGB, GL.GL_UNSIGNED_BYTE, pixels);

I’m trying to make this as efficient as possible by not doing any data conversion before pushing the data to the texture and this seems to work…for the most part. :o

For further clarification on what I’m seeing see the following images.

Using TextureRenderer:

http://captiveimagination.com/download/gearloop01.jpg

Using my own rendering using VBO and GLSL:

http://captiveimagination.com/download/gearloop02.jpg

They look very similar, but you can see some striking differences in the pixelation and proportions. Any ideas on what I’m doing wrong are greatly appreciated. You can view the images directly to see them larger (they’re pretty high resolution and you may need to zoom in to see the differences).

I have two thoughts:

  1. The texture you create in case 2 is slightly smaller than the one the TextureRenderer creates.
  2. You’re computing the texture coordinates incorrectly in your GLSL shader.

These would both be bugs in your code, but I’m only suggesting them because I’ve never seen problems like yours come from anything else (but I could be wrong :slight_smile: )
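
Regarding (2), just to make it concrete: with GL_TEXTURE_2D the coordinates are always normalized, so a quad that shows the whole image should use 0..1 no matter what the image’s pixel size is. A rough sketch of the kind of texcoord data I’d expect in your VBO (names made up):

import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.FloatBuffer;

/** Sketch only: texcoords for a quad that samples an entire GL_TEXTURE_2D texture. */
final class QuadTexCoords {
	static FloatBuffer fullQuad() {
		float[] tc = {
			0f, 0f,   // bottom-left
			1f, 0f,   // bottom-right
			1f, 1f,   // top-right
			0f, 1f    // top-left
		};
		FloatBuffer buf = ByteBuffer.allocateDirect(tc.length * 4)
				.order(ByteOrder.nativeOrder())
				.asFloatBuffer();
		buf.put(tc);
		buf.flip();
		return buf;
	}
}

If the image’s pixel dimensions are leaking into those values anywhere, that could produce exactly the kind of squashing you’re describing.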

Is there anything else I need to call if my BufferedImage is something like 361x413 pixels to make it display properly?

Everything looked okay from what I saw. Is it possible to reduce this to a simple demo so I could test it out and experiment?

Yeah, I’ll see what I can do and get back to you.

Thanks again.

For anyone who cares, I finally solved my problem. It turns out it was the blend function causing this.

gl.glBlendFunc(GL.GL_SRC_ALPHA, GL.GL_ONE);

Changing it to:

gl.glBlendFunc(GL.GL_SRC_ALPHA, GL.GL_ONE_MINUS_SRC_ALPHA);

That resolved the problem entirely. Thanks for trying to help, lhkbob; I very much appreciate it.

Ah, I remember this problem now. I think if you have PNG files that use non-premultiplied alpha (which is apparently what the PNG spec says to do), then you need this; however, some PNG files don’t follow the spec and use premultiplied alpha, and in that case you don’t need it. I read about it here: http://groups.google.com/group/android-developers/browse_thread/thread/2cb496c5da3b6955/b3776b600542640e?lnk=raot
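
For reference, if you ever need to premultiply the alpha yourself on the Java side before uploading, something like this should work; it’s just a sketch using getRGB/setRGB (so it’s slow), and the class name is made up:

import java.awt.image.BufferedImage;

final class AlphaUtil {
	/** Multiplies each pixel's RGB by its alpha, in place (sketch only, not optimized). */
	static void premultiply(BufferedImage img) {
		for (int y = 0; y < img.getHeight(); y++) {
			for (int x = 0; x < img.getWidth(); x++) {
				int argb = img.getRGB(x, y);
				int a = (argb >>> 24) & 0xFF;
				int r = ((argb >> 16) & 0xFF) * a / 255;
				int g = ((argb >> 8) & 0xFF) * a / 255;
				int b = (argb & 0xFF) * a / 255;
				img.setRGB(x, y, (a << 24) | (r << 16) | (g << 8) | b);
			}
		}
	}
}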

Actually no, nearly everything from that screenshot is vector-based and gets drawn to a BufferedImage and then pushed into the texture buffer.