Howdy chaps,
I’m possibly opening myself up to some “LMGTFY” replies here, but I’m having some major trouble understanding the upload of 2d textures in LWJGL/OpenGL. I’ve been focusing on it all afternoon, but don’t seem to be getting anywhere.
Assuming I’ve got an int[] filled with image data in the form of ARGB, how can I:
a) Prepare the data for OpenGL. Can I simply cram the data into an IntBuffer? (It appears I’ll have to change ARGB to RGBA, but that should be trivial). I see one form of glTexImage2D() takes an IntBuffer, so I’m assuming that’s all good?
b) Describe the data to OpenGL. Given I’m using the aforementioned RGBA IntBuffer, what should my call to glTexImage2D() look like? I’m struggling to understand the parameters, in particular internalformat, format and type. I’ve looked at the description in the OpenGL reference, but can’t make head nor tail of it. Is anyone able to provide a more in-depth description of the parameters and their use?
Just for reference, the arguments to glTexImage2D are as follows (with internalformat, format and type being the params giving me grief):
public static void glTexImage2D(int target, int level, int internalformat, int width, int height, int border, int format, int type, java.nio.IntBuffer pixels)
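[Editor's note: not part of the original post, but a sketch of step (a) may help. This is a hedged, pure-java.nio illustration of repacking ARGB ints into an RGBA byte buffer, one byte per channel, which is the layout OpenGL expects for format = GL_RGBA, type = GL_UNSIGNED_BYTE. The class and method names are made up for the example; no LWJGL is needed for this part.]

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class ArgbToRgba {
    /**
     * Repack 0xAARRGGBB ints into a tightly packed RGBA byte buffer,
     * one unsigned byte per channel, in the order OpenGL reads when
     * told format = GL_RGBA, type = GL_UNSIGNED_BYTE.
     */
    public static ByteBuffer toRgbaBuffer(int[] argb, int width, int height) {
        // Direct, native-order buffer: LWJGL requires direct buffers for uploads.
        ByteBuffer buf = ByteBuffer.allocateDirect(width * height * 4)
                                   .order(ByteOrder.nativeOrder());
        for (int p : argb) {
            buf.put((byte) ((p >> 16) & 0xFF)); // R
            buf.put((byte) ((p >> 8)  & 0xFF)); // G
            buf.put((byte) ( p        & 0xFF)); // B
            buf.put((byte) ((p >> 24) & 0xFF)); // A
        }
        buf.flip(); // rewind so OpenGL reads from the start
        return buf;
    }

    public static void main(String[] args) {
        // One opaque red pixel: A=FF, R=FF, G=00, B=00
        ByteBuffer buf = toRgbaBuffer(new int[] { 0xFFFF0000 }, 1, 1);
        System.out.println(Integer.toHexString(buf.get(0) & 0xFF)); // prints "ff" (R)
        System.out.println(Integer.toHexString(buf.get(3) & 0xFF)); // prints "ff" (A)
    }
}
```

A ByteBuffer built this way sidesteps the int-endianness question entirely, since GL_UNSIGNED_BYTE makes OpenGL walk the buffer byte by byte.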
Much appreciated,
nerb.
LMGTFM:
- So internalformat is how OpenGL should store the texture data internally, on the driver side. Since all I want to do is display a sprite directly, as is, GL_RGBA (or the sized equivalent, GL_RGBA8) should suit my needs.
- format is the channel order of the pixel data I’m uploading, so if my data really is in R, G, B, A order, I want GL_RGBA again. (format and type together describe my client-side data; internalformat describes what the driver keeps.)
- type is the data type of my pixel data. Now this one confused me a little. The data is packed into standard Java signed 32bit ints… so my first guess was GL_INT, but that turns out to mean one signed 32-bit int per channel (128 bits per pixel!), not per pixel. What I actually want is either to unpack to one byte per channel and use GL_UNSIGNED_BYTE, or to use one of the GL 1.2 packed-pixel types (GL_UNSIGNED_INT_8_8_8_8 / GL_UNSIGNED_INT_8_8_8_8_REV) that describe a whole pixel in a single int.
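[Editor's note: since the type question hinges on byte order, here is a hedged pure-Java sketch (no GL context needed) of what OpenGL would actually see if you handed it packed ints with type = GL_UNSIGNED_BYTE. The helper name is made up for the example; the glTexImage2D call in the comment is the classic upload path for ARGB-packed ints in LWJGL's GL11/GL12 bindings, shown as an assumption rather than tested code.]

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class PackedPixelDemo {
    /**
     * Returns the bytes of one packed int as they sit in memory in the
     * given byte order -- i.e. what OpenGL sees when type is
     * GL_UNSIGNED_BYTE and it walks the buffer byte by byte.
     */
    public static byte[] memoryBytes(int packed, ByteOrder order) {
        ByteBuffer buf = ByteBuffer.allocate(4).order(order);
        buf.putInt(packed);
        return buf.array(); // backing array is in memory order here
    }

    public static void main(String[] args) {
        // RGBA packed into an int: R=11, G=22, B=33, A=44.
        byte[] mem = memoryBytes(0x11223344, ByteOrder.LITTLE_ENDIAN);
        // On a little-endian machine the bytes land as 44 33 22 11,
        // i.e. A, B, G, R -- so GL_RGBA + GL_UNSIGNED_BYTE would read
        // the channels backwards. This is exactly why the GL 1.2
        // packed types exist: GL_UNSIGNED_INT_8_8_8_8 puts the first
        // component in the most significant byte of the int, so
        // RGBA-packed ints work regardless of host endianness.
        //
        // For ARGB ints (0xAARRGGBB) specifically, a common upload is:
        //   GL11.glTexImage2D(GL11.GL_TEXTURE_2D, 0, GL11.GL_RGBA,
        //       width, height, 0,
        //       GL12.GL_BGRA, GL12.GL_UNSIGNED_INT_8_8_8_8_REV, pixels);
        // which consumes the ints as-is, no repacking needed.
        System.out.println(Integer.toHexString(mem[0] & 0xFF)); // prints "44"
        System.out.println(Integer.toHexString(mem[3] & 0xFF)); // prints "11"
    }
}
```

The takeaway: type describes units in memory, so GL_UNSIGNED_BYTE plus an explicit byte buffer is the endian-proof route, while the packed GL_UNSIGNED_INT_8_8_8_8* types are the route that lets whole Java ints through untouched.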