Generating Textures

I have a ByteBuffer that I pulled from a BufferedImage that I need to convert into a texture. I have seen the code to do that somewhere, but for the life of me I can’t seem to find it now! Does anyone know where I can look for this?

Ok, I have it working … sort of. I am still pretty clueless when it comes to textures and I have never used mipmaps before, so please bear with me! The texture I created is a simulated weather pattern that I want to be able to zoom in on, but it takes a really long time (~5 sec.) to create the mipmap, and when I zoom in it gets really fuzzy. So I have three questions: 1) Is there a better way to create a mipmap? 2) Is there a setting that will make it look better at higher zoom levels? 3) There isn’t any transparency; how do I add that?

Here’s the code I have so far and some screenshots:


GL11.glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
GL11.glClearDepth(1.0);

GL11.glEnable(GL11.GL_DEPTH_TEST);
GL11.glDepthFunc(GL11.GL_LEQUAL);

GL11.glEnable(GL11.GL_LINE_SMOOTH);
GL11.glEnable(GL11.GL_BLEND);
GL11.glBlendFunc(GL11.GL_SRC_ALPHA, GL11.GL_ONE_MINUS_SRC_ALPHA);

GL11.glMatrixMode(GL11.GL_PROJECTION);
GL11.glLoadIdentity();
GLU.gluPerspective(45f, (float)mode.getWidth()/(float)mode.getHeight(), 0.1f, 100f);

GL11.glMatrixMode(GL11.GL_MODELVIEW);
GL11.glLoadIdentity();
GL11.glViewport(0, 0, mode.getWidth(), mode.getHeight());

...

byte[] data = ((DataBufferByte) weatherImage.getRaster().getDataBuffer()).getData();
imageBuffer = ByteBuffer.allocateDirect(data.length); 
imageBuffer.order(ByteOrder.nativeOrder()); 
imageBuffer.put(data, 0, data.length); 
imageBuffer.flip();

GL11.glGenTextures(weatherTexture);
GL11.glBindTexture(GL11.GL_TEXTURE_2D, weatherTexture.get(0));
GL11.glTexEnvf(GL11.GL_TEXTURE_ENV, GL11.GL_TEXTURE_ENV_MODE, GL11.GL_MODULATE);

// when texture area is small, bilinear filter the closest mipmap
GL11.glTexParameterf(GL11.GL_TEXTURE_2D, GL11.GL_TEXTURE_MIN_FILTER, GL11.GL_LINEAR_MIPMAP_NEAREST);
// when texture area is large, bilinear filter the original
GL11.glTexParameterf(GL11.GL_TEXTURE_2D, GL11.GL_TEXTURE_MAG_FILTER, GL11.GL_LINEAR);
// the texture wraps over at the edges (repeat)
GL11.glTexParameterf(GL11.GL_TEXTURE_2D, GL11.GL_TEXTURE_WRAP_S, GL11.GL_REPEAT);
GL11.glTexParameterf(GL11.GL_TEXTURE_2D, GL11.GL_TEXTURE_WRAP_T, GL11.GL_REPEAT);

// build our texture mipmaps
GLU.gluBuild2DMipmaps(GL11.GL_TEXTURE_2D, 3, width, height, GL11.GL_RGBA, GL11.GL_UNSIGNED_BYTE, imageBuffer);

  1. You can specify the mipmaps yourself. Look at the GLU code or the red book for examples. GLU.gluBuild2DMipmaps is a bit slow since it allocates big buffers and arrays every time. You can make it faster by caching the buffers. Don’t expect huge improvements though.

But it doesn’t sound like you need mipmaps. They are only used when you zoom out, not in.

  2. No. You could use nearest instead of linear as the magnification filter; then you’ll get big pixels instead of the smear. Otherwise you’ll have to create a more detailed texture, or possibly add a detail texture layer to make it look better.

  3. First make sure your image has the correct data in the alpha channel and that you get it out correctly. Then you’ve got to set up OpenGL right (there’s a small sketch of what I mean after this list). I would remove the glTexEnvf call in your source.
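Roughly what I mean, as a sketch (the GL_NEAREST filters give the big-pixel look; the blend setup only matters once the texture actually carries alpha):

// Sketch: nearest-neighbour filtering gives big pixels when you zoom in.
GL11.glTexParameterf(GL11.GL_TEXTURE_2D, GL11.GL_TEXTURE_MIN_FILTER, GL11.GL_NEAREST);
GL11.glTexParameterf(GL11.GL_TEXTURE_2D, GL11.GL_TEXTURE_MAG_FILTER, GL11.GL_NEAREST);

// For transparency: upload the texture with an alpha channel (GL_RGBA data)
// and make sure blending is enabled when you draw the textured quad.
GL11.glEnable(GL11.GL_BLEND);
GL11.glBlendFunc(GL11.GL_SRC_ALPHA, GL11.GL_ONE_MINUS_SRC_ALPHA);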

;D Big pixels would actually be perfect for what we’re trying to do!! I’ll have to wait until tomorrow to try out the rest of it! Thank you!!!

Using GL_NEAREST looks awesome! And you were right, I don’t need mipmaps, I just need a texture, but when I change this line

GLU.gluBuild2DMipmaps(GL11.GL_TEXTURE_2D, 3, width, height, GL11.GL_RGBA, GL11.GL_UNSIGNED_BYTE, imageBuffer);

to this

GL11.glTexImage2D(GL11.GL_TEXTURE_2D, 0, 3, width, height, 0, GL11.GL_RGBA, GL11.GL_UNSIGNED_BYTE, imageBuffer);

it throws this exception

Exception in thread "main" org.lwjgl.opengl.OpenGLException: Invalid value (1281)
	at org.lwjgl.opengl.Util.checkGLError(Util.java:56)
	at org.lwjgl.opengl.Display.update(Display.java:567)
	at Main.render(Main.java:326)
	at Main.run(Main.java:282)
	at Main.<init>(Main.java:57)
	at Main.main(Main.java:49)

What does that mean?! ???
The only difference between the calls is that I tell it that this is the base image and that it shouldn’t have a border, right?

The line should be:
GL11.glTexImage2D(GL11.GL_TEXTURE_2D, 0, GL11.GL_RGBA, width, height, 0, GL11.GL_RGBA, GL11.GL_UNSIGNED_BYTE, imageBuffer);

There is no components parameter (the 3 in gluBuild2DMipmaps) in the call to glTexImage2D; in glTexImage2D, that parameter represents the internal format of the image (how OpenGL should interpret it).

IIRC, ‘3’ is a perfectly valid parameter and means a three-component image (i.e. RGB), though if you want a texture with alpha you’ll want 4 (or the more readable GL_RGBA).

However, textures need to have power-of-two dimensions. gluBuild2DMipmaps will automatically resize your image if you pass it non-power-of-two image data (which may be part of the reason for the slowness), but you don’t get this with glTexImage2D. Check that you’re not passing in oddly sized texture data.
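If you need to round a dimension up, a helper along these lines works (just a sketch; the method name is made up):

// Sketch: round a dimension up to the next power of two.
private static int nextPowerOfTwo(int n) {
	int p = 1;
	while (p < n) {
		p <<= 1;
	}
	return p;
}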

:o The image is a power of two, but it’s huge!!! I printed out the size and it’s 4096x2048!!! The issue is probably that I need to bring it down to a reasonable size first! The mipmap was probably buying me that as well!

You may be able to get away with that, check what glGetInteger(GL_MAX_TEXTURE_SIZE) returns.
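Something like this prints it (assuming LWJGL’s convenience overload of glGetInteger that returns a single int; otherwise pass an IntBuffer):

// Sketch: ask the driver for the largest supported texture dimension.
int maxSize = GL11.glGetInteger(GL11.GL_MAX_TEXTURE_SIZE);
System.out.println("GL_MAX_TEXTURE_SIZE = " + maxSize);
// If this is >= 4096 the 4096x2048 texture should be accepted,
// though uploading something that big may still be slow.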

;D Thanks to you guys, I’ve got that part done!!! Here is what I did:


/* Find the closest Power-of-Two */
while(!done)
{
	if(width < pattern[0].length) width *= 2;
	if(height < pattern.length) height *= 2;
	if(width >= pattern[0].length && height >= pattern.length) done = true;			
}

/* Create a BufferedImage to draw to */
glAlphaColorModel = new ComponentColorModel(ColorSpace.getInstance(ColorSpace.CS_sRGB),
				new int[] {8,8,8,8},
				true,
				false,
				ComponentColorModel.TRANSLUCENT,
				DataBuffer.TYPE_BYTE);
raster = Raster.createInterleavedRaster(DataBuffer.TYPE_BYTE,width,height,4,null);
weatherImage = new BufferedImage(glAlphaColorModel,raster,false,new Hashtable());
Graphics g = weatherImage.getGraphics();

// Draw to BufferedImage stuff here

/* Scale the image to a reasonable size */
Image image = weatherImage.getScaledInstance(width/4, height/4, Image.SCALE_REPLICATE);
raster = Raster.createInterleavedRaster(DataBuffer.TYPE_BYTE,width/4,height/4,4,null);
weatherImage = new BufferedImage(glAlphaColorModel,raster,false,new Hashtable());
g = weatherImage.getGraphics();
g.drawImage(image, 0, 0, null);

/* Turn that into a Texture */
byte[] data = ((DataBufferByte) weatherImage.getRaster().getDataBuffer()).getData();
imageBuffer = ByteBuffer.allocateDirect(data.length); 
imageBuffer.order(ByteOrder.nativeOrder()); 
imageBuffer.put(data, 0, data.length); 
imageBuffer.flip();

GL11.glGenTextures(weatherTexture);
GL11.glBindTexture(GL11.GL_TEXTURE_2D, weatherTexture.get(0));
GL11.glTexEnvf(GL11.GL_TEXTURE_ENV, GL11.GL_TEXTURE_ENV_MODE, GL11.GL_DECAL);

GL11.glTexParameterf(GL11.GL_TEXTURE_2D, GL11.GL_TEXTURE_MIN_FILTER, GL11.GL_NEAREST);
GL11.glTexParameterf(GL11.GL_TEXTURE_2D, GL11.GL_TEXTURE_MAG_FILTER, GL11.GL_NEAREST);

GL11.glTexImage2D(GL11.GL_TEXTURE_2D, 0, GL11.GL_RGBA, width/4, height/4, 0, GL11.GL_RGBA, GL11.GL_UNSIGNED_BYTE, imageBuffer);

The whole thing (reading from the file, creating the BufferedImage, drawing to it, scaling it, and turning that into a texture) went from taking ~5.5 sec. to ~2.5 sec! Most of that is eaten up by the buffer manipulation; I’ll look into that and see if there is anything else I can do! Now if I can figure out the alpha bit I am all set!
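For reference, what I mean by trimming the buffer work is roughly this (a sketch, following the earlier suggestion about caching buffers; it assumes the image size doesn’t change between updates, so the direct buffer can be reused):

// Sketch: reuse one direct buffer instead of allocating a new one each update.
if (imageBuffer == null || imageBuffer.capacity() < data.length) {
	imageBuffer = ByteBuffer.allocateDirect(data.length);
	imageBuffer.order(ByteOrder.nativeOrder());
}
imageBuffer.clear();
imageBuffer.put(data, 0, data.length);
imageBuffer.flip();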

-=EDIT=-
It seems that the call to drawImage() is taking up ~1.5 sec. all by itself! :frowning: Does anyone know of a better way to do this?

Is there a difference when it comes to generating textures under Windows vs. under Linux? When I run the program on my Windows box I get this.

When I run it on the Linux target box I get this.

I changed the color of the quad that I am texturing to red so I could see the extent of it, but the “weather” is the same!! It looks to me like some kind of a buffer issue. What do you guys think? ???

-=EDIT=-
I ran my proof-of-concept app on the Linux box and it ran like a champ, so now I believe that it has something to do with how I read the file. In the proof-of-concept app I read it from the file system; in the real app I read it from the network. Well, at least I know I am not crazy! (well, not totally anyway :wink: )

-=EDIT, AGAIN=-
::slight_smile: Ok, you can completely ignore this post! I did a stupid thing with reusing variable names! :-[