LWJGL3 newbie texturing problem

Hi all,

I'm trying to learn OpenGL using LWJGL3 and got stuck texturing a triangle: nothing is displayed. I'm decoding the PNG texture into a ByteBuffer using Matthias Mann's PNGDecoder.

The whole thing seems straightforward to me, but all I get is a black screen. As far as I can tell, the texture should be bound to the sampler automatically; I shouldn't need to set the uniform myself in code.

Decoding the texture and uploading to GPU:


    public static Texture getPNGTexture(String fileName)
    {
        int id;
        ByteBuffer buf = null;
        int imageWidth = 0;
        int imageHeight = 0;

        try {
            InputStream is = new FileInputStream(new File("res/textures/" + fileName + ".png"));
            PNGDecoder decoder = new PNGDecoder(is);
            imageWidth = decoder.getWidth();
            imageHeight = decoder.getHeight();
            if (imageWidth == 0 || imageHeight == 0)
            {
                throw new IOException("Image is zero sized!");
            }

            PNGDecoder.Format format = PNGDecoder.Format.RGBA;

            try{
                buf = BufferUtils.createByteBuffer(imageHeight*imageWidth*format.getNumComponents());
                decoder.decode(buf, imageWidth*format.getNumComponents(), format);
            }catch(IOException e){
                e.printStackTrace();
            }finally {
                is.close();
            }

            buf.flip();
        } catch (IOException e) {
            e.printStackTrace();
        }

        id = glGenTextures();

        glBindTexture(GL_TEXTURE_2D, id);

        glTexStorage2D(GL_TEXTURE_2D, 1, GL_RGBA, imageWidth, imageHeight);
        glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, imageWidth, imageHeight, GL_RGBA, GL_UNSIGNED_BYTE, buf);

        return new Texture(id);
    }

Getting texture and mesh data:


        texture = Texture.getPNGTexture("box");

        test = new Mesh();
        float[] vertices = {
                -0.5f, 0.5f, 0f,
                -0.5f, -0.5f, 0f,
                0.5f, -0.5f, 0f,
        };
        int[] indices = {
                0,1,2
        };
        float[] texCoords = {
                0,0,
                0,1,
                1,1
        };
        test.addVertices(vertices, texCoords, indices);


    public void addVertices(float[] vertices, float[] texCoords, int[] indices)
    {
        size = indices.length;

        glBindVertexArray(vao);                                                             //Bind VAO

        int vbo = glGenBuffers();                                                           //Create new VBO
        int ibo = glGenBuffers();
        int tbo = glGenBuffers();
        buffers.add(vbo);
        buffers.add(ibo);
        buffers.add(tbo);

        glBindBuffer(GL_ARRAY_BUFFER, vbo);                                                 //Bind vbo
        glBufferData(GL_ARRAY_BUFFER, Util.createFlippedBuffer(vertices), GL_STATIC_DRAW);  //Put data in it
        glVertexAttribPointer(0, 3, GL_FLOAT, false, 0, 0);                                 //Put VBO in VAO on pos 0

        glBindBuffer(GL_ARRAY_BUFFER, tbo);
        glBufferData(GL_ARRAY_BUFFER, Util.createFlippedBuffer(texCoords), GL_STATIC_DRAW);
        glVertexAttribPointer(1, 2, GL_FLOAT, false, 0, 0);

        glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);                                                 //Bind ibo
        glBufferData(GL_ELEMENT_ARRAY_BUFFER, Util.createFlippedBuffer(indices), GL_STATIC_DRAW);   //Put data in it
    }
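
(Util.createFlippedBuffer isn't shown; it's assumed to be the usual LWJGL helper that wraps the array in a direct buffer and flips it, roughly:)

    // uses org.lwjgl.BufferUtils and java.nio.FloatBuffer / java.nio.IntBuffer
    public static FloatBuffer createFlippedBuffer(float[] values)
    {
        FloatBuffer buffer = BufferUtils.createFloatBuffer(values.length); // direct buffer, as OpenGL requires
        buffer.put(values);
        buffer.flip();
        return buffer;
    }

    public static IntBuffer createFlippedBuffer(int[] values)
    {
        IntBuffer buffer = BufferUtils.createIntBuffer(values.length);
        buffer.put(values);
        buffer.flip();
        return buffer;
    }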

Binding the texture and drawing the mesh:


    public void render()
    {
        texture.bind();
        shader1.bind();
        shader1.SetUniform("transform", transform1.getProjectedTransformation());

        test.draw();
    }

    public void bind()
    {
        glActiveTexture(GL_TEXTURE0);
        glBindTexture(GL_TEXTURE_2D, id);
    }

    public void draw()
    {
        glBindVertexArray(vao);
        glEnableVertexAttribArray(0);
        glEnableVertexAttribArray(1);
        glDrawElements(GL_TRIANGLES, size, GL_UNSIGNED_INT, 0);
        glDisableVertexAttribArray(0);
        glDisableVertexAttribArray(1);
    }

Shaders:


#version 430 core

layout (location=0) in vec3 position;
layout (location=1) in vec2 texCoords;

out vec2 pass_texCoords;

uniform mat4 transform;

void main()
{
    pass_texCoords = texCoords;
    gl_Position = transform * vec4(position, 1.0);
}




#version 430 core

in vec2 pass_texCoords;
layout (binding=0) uniform sampler2D sampler;

out vec4 color;

void main()
{
    color = texture(sampler, pass_texCoords);
}

First, make sure that your quad or whatever you're drawing pops up.

#version 400 core

in vec3 pass_position;

out vec4 out_Color;

void main(void) {
    out_Color = vec4(1, 1, 1, 1);
}

Then make sure that it pops up with color, and change the 1s to fractional values to make sure that the color changes.

This confirms that you are actually drawing something.

Adding uniform sampler2D u_texture; will let you read the texel at each fragment and shade with it. So you can add that variable:

#version 400 core

in vec2 pass_texcoord;

uniform sampler2D u_texture0;

out vec4 out_Color;

void main(void) {
    out_Color = vec4(1, 1, 1, 1);
}

As you can see, I just plopped it in there. I added a 0 at the end because that's how I name things when multitexturing (u_texture1, u_texture2, etc.). Now we have to set the texture in the shader.

First, you need to bind the shader, then get the variable.
// binds shader
int shader = shader.getUniform("u_texture" + i); // get the texture variable
GL13.glActiveTexture(GL13.GL_TEXTURE0 + i); // select texture unit 0 + i (let's say i = 0, so unit 0)
GL11.glBindTexture(GL11.GL_TEXTURE_2D, material.getTextureData().getId()); // bind the texture to that unit
shader.bindUniformi(shader, i); // upload to texture unit 0
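
Spelled out with plain LWJGL calls (program, textureId and i stand in for whatever your own wrapper holds), that sequence is roughly:

GL20.glUseProgram(program); // bind the shader program first
int location = GL20.glGetUniformLocation(program, "u_texture" + i); // look up the sampler uniform
GL13.glActiveTexture(GL13.GL_TEXTURE0 + i); // select texture unit i
GL11.glBindTexture(GL11.GL_TEXTURE_2D, textureId); // bind the texture to that unit
GL20.glUniform1i(location, i); // point the sampler at unit i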

Then you need to use it in your shader.

#version 400 core

in vec2 pass_texcoord;

uniform sampler2D u_texture0;

out vec4 out_Color;

void main(void) {
    out_Color = texture(u_texture0, pass_texcoord); // and here we are
}

If your texture is upside down, you have a texcoord/indices/vertex problem.

The triangle is visible and I can change colors.

What exactly does this bit here do for you?


int shader = shader.getUniform("u_texture" + i); // get the texture variable
shader.bindUniformi(shader, i); // upload to texture unit 0

I'm assuming it gets the location of the uniform u_texture0 and then sets its value to 0?

Why would that be necessary if I activate texture bank 0 with:


glActiveTexture(GL_TEXTURE0);

and then bind my texture to that bank with:


glBindTexture(GL_TEXTURE_2D, id);

shouldn't the sampler in the fragment shader then automatically look at the texture in bank 0 if I say:


layout (binding=0) uniform sampler2D sampler;

All the tutorials I've seen say that this sampler automatically takes whatever is in texture bank 0.

According to that, there's no reason to get the location of my sampler uniform and set it in code. Is that what you're doing, or did I misunderstand?

More info:

If I do this in frag shader:


#version 430 core

in vec2 pass_texCoords;
layout (binding=0) uniform sampler2D sampler;

out vec4 color;

void main()
{
    color = texture(sampler, pass_texCoords)+vec4(0.1, 0.1, 0.1, 1);
}

I get this:

If I do this:


#version 430 core

in vec2 pass_texCoords;
layout (binding=0) uniform sampler2D sampler;

out vec4 color;

void main()
{
    color = vec4(pass_texCoords, 1, 1);
}

I get this:

So the texture() call obviously outputs a vec4 of 0s, but the texture coordinates seem to be passed to the shader fine. So I guess it's safe to assume something is wrong with the sampler: either it's not bound properly (which I don't see how), or something is wrong with the actual texture upload?

That won’t even compile, but what you are doing is getting the uniform location from the program so you can bind a texture to it:

int unit = 0;
gl.glActiveTexture(GL4.GL_TEXTURE0 + unit);
gl.glBindTexture(GL4.GL_TEXTURE_2D, m_material.m_diffuse_map.m_id);
gl.setUniform1i("diffuse_map", unit);

yes.

First: You need to enable the shader program BEFORE binding the textures.

Also, are you sure your graphics card supports GLSL 430? It could still compile, but it might not like the layout (binding=0). Edit: I don't think it would compile; I would check to make sure you can use this version.

If those don't check out, I would look at the buffer you create for the texture and make sure the expected values are there.

I have changed my code so that glUseProgram now comes before the texture bind. My graphics card driver supports OpenGL 4.4.0, so I'm good there. Still the same, however.

Let me get one thing straight first: if I'm using this layout(binding=0) thing, there's no need to set that uniform from code like you did with setUniform("diffuse_map", unit)?

Using setUniform to bind a uniform to a texture bank seems kind of weird to me; shouldn't that be more something like bindUniform?

I'm not sure about layout(binding=0); my dev computer's graphics card does not support it, so I haven't used it.

That being said, setUniformi() doesn't do any "binding"; it tells the sampler uniform which texture unit to read from.

When you activate a texture slot:

gl.glActiveTexture(GL4.GL_TEXTURE0 + unit);

you are ready to bind a texture:

gl.glBindTexture(GL4.GL_TEXTURE_2D, m_material.m_diffuse_map.m_id);

With your texture(s) bound you now need to tell the program which uniform references the slot:

gl.setUniform1i("diffuse_map", unit); 

Did you check your image buffer to make sure that it’s not full of garbage?

Edit: to be clear, you do not have to call glActiveTexture() when only using one texture at a time (unit 0 is active by default); the others, however, are a must.
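
In raw LWJGL terms the whole per-draw sequence (program and textureId standing in for your own objects) boils down to something like:

glUseProgram(program); // enable the shader program first
glActiveTexture(GL_TEXTURE0); // select texture unit 0 (the default)
glBindTexture(GL_TEXTURE_2D, textureId); // bind the texture to unit 0
glUniform1i(glGetUniformLocation(program, "sampler"), 0); // sampler reads from unit 0
// ... glBindVertexArray / glDrawElements ...

If your driver really does honor layout(binding=0), the glUniform1i line becomes redundant; and since an unset sampler uniform defaults to 0 anyway, unit 0 should work either way.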

What happens when you do…

#version 430 core

in vec2 pass_texCoords;
layout (binding=0) uniform sampler2D sampler;

out vec4 color;

void main()
{
    color = texture(sampler, pass_texCoords);
}

@Hydroque I get a black screen.

In my ByteBuffer I have something like this after it's flipped (read out with the ByteBuffer.get(i) method). It doesn't look right: why is this signed, why would there be negative values in RGBA format? Or does this whole thing get cast to unsigned byte when it goes to the GPU? It's not the whole output, but you get the picture. The image is 256x256.


26
20
12
-1
103
83
48
-1
-98
126
73
-1
-98
126
73
-1
-98
126
73
-1
-99
125
71
-1
-99
125
71
-1
-99


Java doesn't support unsigned integer types, so byte values from 128 to 255 print as negative; that's not a problem when moving over to OpenGL, since GL_UNSIGNED_BYTE reinterprets the same bits. Anyway, the values are non-zero, which means you should at least have color.
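
(If you want the dump to show the real 0-255 values, mask off the sign extension when printing:)

for (int i = 0; i < buf.limit(); i++) {
    System.out.println(buf.get(i) & 0xFF); // -1 prints as 255, -98 as 158, etc.
}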

I'm at a loss though; it looks fine to me. Another thing you can try is checking each channel individually in your shader:

vec4 color = texture(sampler, pass_texCoords);
out_color = vec4(color.r, color.r, color.r, 1.0); // could be that the alpha channel is messed up, but I'm pretty sure 255 == 1.0

I didn't read the entire thread, but I see no glTexParameteri() calls to set up filtering. The default minification filter enables mipmapping, so if you don't have mipmaps defined, the texture is considered incomplete and won't be readable from shaders.

@thedanisaur tried it, no help.

@theagentd
I added these two lines just before uploading the buffer to OpenGL; is this what you meant?


        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);

Didn't help though.

Instead of this:


glTexStorage2D(GL_TEXTURE_2D, 1, GL_RGBA, imageWidth, imageHeight);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, imageWidth, imageHeight, GL_RGBA, GL_UNSIGNED_BYTE, buf);

Try this:


glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, imageWidth, imageHeight, 0, GL_RGBA, GL_UNSIGNED_BYTE, buf);

If it still isn’t working, try doing the following after all of your texture creation code:


System.out.println(Util.translateGLErrorString(GL11.glGetError()));

The above code will print the error to the console, so you’ll know exactly what’s wrong with your code, which makes it a lot easier for you to fix the issue.
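
Note that glGetError() returns one error flag at a time and clears it, so if you want every pending error with a readable name, a small hand-rolled helper (nothing library-specific) also works:

// import static org.lwjgl.opengl.GL11.*;
public static void checkGLErrors(String where) {
    int err;
    while ((err = glGetError()) != GL_NO_ERROR) {
        String name;
        switch (err) {
            case GL_INVALID_ENUM:      name = "GL_INVALID_ENUM";      break; // 1280
            case GL_INVALID_VALUE:     name = "GL_INVALID_VALUE";     break; // 1281
            case GL_INVALID_OPERATION: name = "GL_INVALID_OPERATION"; break; // 1282
            case GL_OUT_OF_MEMORY:     name = "GL_OUT_OF_MEMORY";     break; // 1285
            default:                   name = "0x" + Integer.toHexString(err);
        }
        System.err.println(where + ": " + name);
    }
}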

well damn!

That worked! But only in combination with the previously mentioned glTexParameteri calls; without them it doesn't work. I know I tried what you suggested before, but without the glTexParameteri.

Any ideas as to why the new OpenGL approach of uploading textures (glTexStorage2D and glTexSubImage2D) wouldn't work?

I modified my post to include a snippet of code that helps you figure out what the error is, so you can try that out with your code :)

You did not use a sized texture format for the internalformat parameter of glTexStorage2D(), but just the unsized GL_RGBA. That will not work; you have to use a sized internal format, such as GL_RGBA8.
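
That is, your original immutable-storage path should work once the format is sized:

glTexStorage2D(GL_TEXTURE_2D, 1, GL_RGBA8, imageWidth, imageHeight); // sized internal format
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, imageWidth, imageHeight, GL_RGBA, GL_UNSIGNED_BYTE, buf);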

According to the OpenGL wiki, glTexStorage2D() is the better choice if you’re doing mipmapping, as it (supposedly) improves performance.

The error was 1280.

According to the internet that's GL_INVALID_ENUM: an unsupported enum value passed to a GL function.

And that fits what KaiHH says: if I replace GL_RGBA with GL_RGBA8, the whole thing works.

Thanks guys for all the answers, learned a few things. This thread can be marked solved.

To explain why glTexParameteri() wasn’t necessary with glTexStorage2D():

When glTexImage2D() is used, the default values for BASE_LEVEL and MAX_LEVEL are these:
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_BASE_LEVEL, 0);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAX_LEVEL, 1000);

When glTexStorage2D() is used, the values are effectively:
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_BASE_LEVEL, 0);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAX_LEVEL, mipmaps - 1);

In other words, when glTexStorage2D() is used, all mipmap levels are allocated up front and MAX_LEVEL is effectively clamped to the allocated range. With only 1 level in the texture, the texture is always complete regardless of the filtering used, as all mipmaps up to and including MAX_LEVEL are defined and ready.
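
The flip side: if you stay on glTexImage2D() with a single level, you can also make the texture complete without touching the filters by clamping MAX_LEVEL yourself:

glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, imageWidth, imageHeight, 0, GL_RGBA, GL_UNSIGNED_BYTE, buf);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAX_LEVEL, 0); // only level 0 exists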

Good to know, and it works: I removed the glTexParameteri calls completely now and only have the new texture upload approach. With the sized internal format fixed, the whole thing looks solid.

Yep, in early OpenGL versions GL_RGBA was a valid internal format, which indicated that the driver was free to choose the precision of the internal format. Theoretically the driver could choose GL_RGBA4, for example, but in practice GL_RGBA is just an alias for GL_RGBA8. For glTexStorage2D() they made the API more explicit and disallowed unsized formats like GL_RGBA, it seems.

You really should use a debug context to avoid missing OpenGL errors. You may even have gotten an exact error message detailing what you did wrong in this case.
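
With LWJGL3 that's a GLFW window hint plus the built-in callback helper (org.lwjgl.opengl.GLUtil):

glfwWindowHint(GLFW_OPENGL_DEBUG_CONTEXT, GLFW_TRUE); // request a debug context before glfwCreateWindow()
// ... create the window, make the context current, GL.createCapabilities() ...
Callback debugProc = GLUtil.setupDebugMessageCallback(); // prints driver debug messages to stderr; free() it on shutdown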
