GLSL: Force attribute to be used

So I’ve got a Sprite class, which uses a simple vertex format of position (x,y), colour (r, g, b, a) and texCoords (u, v).

There’s a corresponding ‘textured’ shader which has position, colour and texCoord attributes in its vertex shader. So far so normal, everything works fine.

The problem is when I want to swap in an alternate shader. That one only uses position and texCoord, and ignores colour. For compatibility, the vertex shader still has a colour attribute; it’s just unused. However some drivers (hello nVidia!) will decide the ‘colour’ attribute is unused and optimise it out. That means my code now crashes, because the shader doesn’t actually expose the interface its source code suggests it does.

Does anyone know how to force a shader attribute to be included (i.e. not optimised out)? Or is there some other way around this?

You can always use

#pragma optimize(off)

in your shader, but I strongly advise you to properly support these different kinds of shaders in your code instead.

I’d use the same vertex/fragment shader combination in both cases. In the second case I’d set the colour vertex attribute to a constant no-op value (all 0.0 or all 1.0 depending on how you use it in the fragment shader). This is easily achieved with a single call to glColor4f or glVertexAttrib4f before making the draw call, assuming the bound VBO has no colour data.
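
A minimal sketch of that approach, assuming a generic ‘colour’ attribute and a draw call where the bound VAO has no colour array (variable names are illustrative):

// 'program' is the linked shader; the sprite's VAO has position and
// texCoord arrays set up, but no colour array bound.
GLint colourLoc = glGetAttribLocation(program, "colour");
if (colourLoc >= 0) {
    glDisableVertexAttribArray(colourLoc);                // no per-vertex colours
    glVertexAttrib4f(colourLoc, 1.0f, 1.0f, 1.0f, 1.0f);  // constant value used instead
}
glDrawArrays(GL_TRIANGLES, 0, vertexCount);

With the array disabled, the current generic attribute value set by glVertexAttrib4f is what the shader sees for every vertex.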

Btw, the optimization behavior you’re seeing is expected and in accordance with the GLSL spec.

I think you’ve got it backwards - my sprite is always sending position+colour+uvs, but a shader with an unused colour attrib has the colour attrib optimised out, so the sprite errors as it can’t look up the ‘colour’ attrib from the shader.

Just don’t enable the vertex attribute if it doesn’t exist?
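
I.e. check glGetAttribLocation and skip the attribute when it returns -1; something like this when binding the sprite’s vertex format (sketch only, names are made up):

GLint loc = glGetAttribLocation(program, "colour");
if (loc >= 0) {
    // attribute is active in this program: set up the array as usual
    glEnableVertexAttribArray(loc);
    glVertexAttribPointer(loc, 4, GL_FLOAT, GL_FALSE, stride, (const void*)colourOffset);
} else {
    // attribute was optimised out (or doesn't exist): skip it
}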

Indeed, this should be handled by the engine; there shouldn’t be a colour attrib in the shader at all. But if you don’t want to change the engine or the model data (or whatever you use to describe the model data to the engine), there’s one stupid trick I use when I want to quickly test something and bump into unused attribute/uniform optimizations:

in vec4 dummyColor; // unused input
out vec4 fragOut;

void main(void) {
    ...
    fragOut = ... // normal fragment shader output
    fragOut += dummyColor * 0.00001; // dummyColor is used now and won't be optimized away
}

Hmm. The problem with skipping it if not found is that the engine can then no longer catch cases where you write a shader expecting to handle all attributes but don’t (say, you misspell colour), and your shader silently produces incorrect results.

The analogy would be to an interface function - the specs say you have shade(position, colour, uvs), so you accept those arguments even if you ignore one of them.

First of all, you can do everything. Of course you can somehow manage all the different cases (I do it somehow but don’t have time to look into my stuff to figure out how^^)

Have you tried my pragma? You can use it until you’ve figured out how to handle this case with your engine.

Right now, I’m not sure that is the case - I simply don’t get enough info back from the shader to distinguish between the two cases. Either missing attributes are ok, in which case misspelt variables won’t be caught. Or missing attributes are not ok, which catches misspelt variables but doesn’t allow shaders which only use a subset of their inputs.
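
To illustrate: all I can query is the list of active attributes, i.e. the ones that survived optimisation, so a deliberately unused attribute and a misspelt one look identical from my side. Roughly:

GLint count = 0;
glGetProgramiv(program, GL_ACTIVE_ATTRIBUTES, &count);
for (GLint i = 0; i < count; ++i) {
    char name[256];
    GLint size;
    GLenum type;
    glGetActiveAttrib(program, i, sizeof(name), NULL, &size, &type, name);
    // 'colour' never appears here if the compiler removed it, whether that
    // removal was intentional (unused input) or accidental (misspelt name)
}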

I don’t think disabling optimisations is a reasonable solution I’m afraid.

You’re correct, the shader source and geometry data alone are not enough to describe both situations. In Marathon I used to have configuration files (custom format + scripting support) that described what every shader expected from the engine: vertex attributes, texture maps, preprocessor #defines, dependencies on other shader fragments, etc. All that served as the “specs” you mentioned and any incompatibilities could easily be detected.
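
As a rough illustration of that idea (the format and names here are purely hypothetical), the configuration just lists what the shader must consume, and the engine validates it after linking:

#include <string>
#include <vector>

// Hypothetical per-shader metadata, loaded alongside the shader source.
struct ShaderSpec {
    std::vector<std::string> requiredAttributes;  // e.g. {"position", "colour", "texCoord"}
};

// After linking, verify that every attribute the spec demands is still there.
// Anything not listed in the spec may legitimately be missing and is simply skipped.
bool validateProgram(GLuint program, const ShaderSpec& spec) {
    for (const std::string& attr : spec.requiredAttributes) {
        if (glGetAttribLocation(program, attr.c_str()) < 0) {
            return false;  // misspelt in the shader or optimised out - either way an error
        }
    }
    return true;
}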