LWJGL using multiple textures

How can I use multiple textures on a model? I have the .OBJ loader and .MTL loader working fine, and I can use a single texture. But I have a model that uses 2 textures. How can I use both of them?

You’ll need to use a spritesheet. If you haven’t used one before, get comfortable with them, because they can greatly boost your performance by letting you bind fewer textures, and binding a texture is an expensive operation.

A spritesheet is just a large texture that contains other, smaller textures inside it. In your program you figure out where each sub-texture sits in the spritesheet, expressed in the [0, 1] texture-coordinate range, and then tell your renderer to use those coordinates on the specific faces you want to render. This way you can pack a bunch of textures into one larger texture, bind less often, and maybe even have a smaller memory footprint.
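To make the coordinate remapping concrete, here is a minimal sketch of the math described above. The class and parameter names are made up for illustration: it converts a UV pair that is local to one sub-texture into the atlas’s own [0, 1] space, given the sub-texture’s pixel rectangle inside the atlas.

```java
// Sketch: remapping a sub-texture's local [0,1] UVs into spritesheet space.
// (x, y, w, h) is the sub-texture's pixel rectangle inside an atlas of
// atlasW x atlasH pixels; all names here are illustrative assumptions.
public class AtlasUV {
    /** Convert a coordinate pair local to one sub-texture into
     *  atlas-space coordinates, also in [0, 1]. */
    public static float[] remap(float u, float v,
                                int x, int y, int w, int h,
                                int atlasW, int atlasH) {
        float au = (x + u * w) / (float) atlasW;
        float av = (y + v * h) / (float) atlasH;
        return new float[] { au, av };
    }
}
```

For example, a 512x512 texture placed at pixel (1024, 0) inside a 1536x1024 atlas would map its center (0.5, 0.5) to ((1024 + 256) / 1536, 256 / 1024).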

If you’re confused, just ask, or Google is always your best friend :slight_smile:

The problem is, I’m using a Blender model I downloaded from the internet. It’s a Superman :D, and it has a different texture for its cape. The textures are in 2 separate files, so I don’t know whether I can combine those two into a spritesheet and then successfully use them.
That said, this model is only there to help me finish my model class so it fully supports textures and such.

Of course you can; just make sure the renderer knows which texture file to use, and see whether Blender can export both onto the same texture.

As far as I know (and could find), Blender doesn’t support exporting the textures into 1 file. And I can’t put them together manually either, because then I’d have to change the texture coordinates, and my mind is blown. One texture is 1024x1024, the other is 512x512. So if there is any Blender pro who could tell me how to export a model with the textures in one file, so that the coordinates in the .OBJ and the file name in the .MTL would correspond to it as well, I’d appreciate it.

And if I make a spritesheet, how can I make sure the coordinates are taken from the right sprite if a model uses multiple sprites?

Textures always come as a square or rectangle, right? I mean width = height, or width = x * height. No irregular shapes.

No, they don’t have to. It’s just easier to calculate coordinates if the width and height are equal.

That’s what I was thinking: if width = height, then I can easily build a spritesheet and work out the coordinate system to multitexture it.

I think the spritesheet advice is misleading in this context. Yes, texture changes can be expensive, but you can’t avoid them in a reasonably complex 3D scene. You can’t stuff everything into one texture and adjust the texture coordinates; handling that would be a complete nightmare.
In your case, you have to split the rendering into at least as many batches as you have textures (or shaders, or whatever else causes a state change). When loading the model, you put everything that uses the first texture into the first batch and everything else into the second. Then bind texture 1, render batch 1, bind texture 2, render batch 2.
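The load-time half of that advice can be sketched as plain data bookkeeping: group the model’s faces by the material (texture) they use, so that at render time you can bind each texture once and draw its whole batch. The `Face`-as-`int[]` representation and the material names are assumptions for illustration, not from any particular OBJ loader.

```java
import java.util.*;

// Sketch: sorting a loaded model's faces into one batch per material,
// so each texture only needs to be bound once per frame.
public class Batcher {
    /** faceMaterials.get(i) names the material used by faces.get(i). */
    public static Map<String, List<int[]>> batchByMaterial(
            List<int[]> faces, List<String> faceMaterials) {
        Map<String, List<int[]>> batches = new LinkedHashMap<>();
        for (int i = 0; i < faces.size(); i++) {
            batches.computeIfAbsent(faceMaterials.get(i),
                                    k -> new ArrayList<>())
                   .add(faces.get(i));
        }
        return batches;
    }
}
// Render-time idea (pseudo-GL, one pass per batch):
//   for each (material, batch): bind that material's texture,
//   then issue the draw call for just that batch's faces.
```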

Can’t the texture be bound during the VBO generation? My brain is breaking down over this multitexture problem.

Could this work: I make my main model consist of submodels, each rendered separately, with just 1 function that calls all the renders?

Yes, but I don’t think in the way you’re imagining. You specify the texture coordinates, but you can’t actually store the texture binding in the VBO. Well, you could keep the texture bound, but only if you never use any other texture in the entire program, because then you’d never need to unbind the first one. A VBO can’t store an entire texture; it can only store coordinates.

I would prefer using a spritesheet because, in my opinion, all textures of one “map object” should be on one “texture”. As always, there are exceptions: for example, if you don’t have to care about performance, or if you want to use GL_REPEAT, a spritesheet isn’t useful :slight_smile:
You might want to have a look at vertex attributes -> https://github.com/mattdesl/lwjgl-basics/wiki/ShaderLesson1

Think of it this way: the VBO, or vertex array, or whatever you are using, defines what is being rendered, but it doesn’t define how. If the ‘how’ changes for a part of the scene or a single model, you have to split your draw calls. You can abstract this away at a higher level, but down in GL, you have to deal with it.

An easy way to do this is certainly, as EgonOlsen said, to split your model into batches per texture, using one VBO for the whole model and as many rendering passes as there are textures.
The spritesheet solution needs a lot of work on the texture coordinates (even more if your model uses 2, 3 or more textures), but it allows you to render the model in one pass.
There are other solutions, like using texture units if only a few textures are needed, but to understand and test the mechanics I would start with batches…

So basically I need to create 6 buffers to house 2 textures: normals1Buffer, normals2Buffer, vertex1Buffer, vertex2Buffer, texture1Buffer, texture2Buffer. And with each new texture I need 3 more buffers?

Either that, or you could interleave the data in one buffer and render parts of it. But to keep things simple in the beginning, I would go with multiple buffers.
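For the interleaved option, here is a small sketch of the layout work involved. It packs position (3 floats), normal (3 floats) and texcoord (2 floats) per vertex into one array, giving a stride of 8 floats; that particular layout is an assumption, and it has to match whatever stride and offsets you later pass to your `gl*Pointer` / vertex-attribute setup.

```java
// Sketch: interleaving separate position/normal/uv arrays into one vertex
// stream with layout [px py pz nx ny nz u v] per vertex (stride = 8 floats).
public class Interleave {
    public static final int STRIDE = 3 + 3 + 2; // floats per vertex

    public static float[] interleave(float[] pos, float[] nrm, float[] uv) {
        int count = pos.length / 3;
        float[] out = new float[count * STRIDE];
        for (int i = 0; i < count; i++) {
            int o = i * STRIDE;
            System.arraycopy(pos, i * 3, out, o,     3); // position
            System.arraycopy(nrm, i * 3, out, o + 3, 3); // normal
            System.arraycopy(uv,  i * 2, out, o + 6, 2); // texcoord
        }
        return out;
    }
}
```

The resulting array would then be uploaded into a single VBO, and each batch drawn as a sub-range of it.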

OK… it was easier for me to understand the interleaved buffer than to get the idea of how to split things into batches. So could you give me an example of how to draw the batches? Is each batch in a different buffer, or are they all in one buffer and I just have to know how big a jump I have to make to get to the next batch?

Either way will work.

2 problems:

  1. How do I jump to the next batch within a buffer?
  2. How do I actually bind the 2 textures?

There was something with GL_TEXTURE0, GL_TEXTURE1, etc., but I can’t find it anymore.
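The GL_TEXTURE0 / GL_TEXTURE1 constants belong to texture units: `glActiveTexture` selects a unit, and each unit holds its own binding. A minimal sketch, assuming LWJGL’s static GL bindings and a shader program with two sampler uniforms; the uniform names `tex0`/`tex1` and the variable names are assumptions, and this needs a live GL context to run:

```java
import static org.lwjgl.opengl.GL11.*;
import static org.lwjgl.opengl.GL13.*;
import static org.lwjgl.opengl.GL20.*;

public class MultiTexBind {
    /** Bind one texture to each of two texture units and point the
     *  shader's samplers at them. */
    public static void bindTwoTextures(int program, int bodyTexId, int capeTexId) {
        // select a texture unit, then bind into it
        glActiveTexture(GL_TEXTURE0);
        glBindTexture(GL_TEXTURE_2D, bodyTexId);
        glActiveTexture(GL_TEXTURE1);
        glBindTexture(GL_TEXTURE_2D, capeTexId);
        // samplers are set to the unit *index* (0 and 1),
        // matching GL_TEXTURE0 and GL_TEXTURE1
        glUniform1i(glGetUniformLocation(program, "tex0"), 0);
        glUniform1i(glGetUniformLocation(program, "tex1"), 1);
    }
}
```

Note this is only needed if one draw call samples both textures at once; with the batch approach above, a plain `glBindTexture` before each batch’s draw call is enough.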