[solved] GLSL link fails with no error on Intel graphics

Hi, I'm using code similar to http://lwjgl.org/wiki/index.php?title=GLSL_Shaders_with_LWJGL. When I run my program on my Intel HD 4000 graphics card, the link stage fails:

ARBShaderObjects.glGetObjectParameteriARB(programID, ARBShaderObjects.GL_OBJECT_LINK_STATUS_ARB)

returns GL_FALSE, but when using

ARBShaderObjects.glGetInfoLogARB(programID, ARBShaderObjects.glGetObjectParameteriARB(programID, ARBShaderObjects.GL_OBJECT_INFO_LOG_LENGTH_ARB));

an empty string is returned.
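
For reference, here's roughly what my checking code looks like (a trimmed sketch based on the wiki code, not my exact source; getInfoLog is just my own helper):

// Sketch (LWJGL 2, ARB extension path); uses org.lwjgl.opengl.ARBShaderObjects, ARBVertexShader and GL11.
int shaderID = ARBShaderObjects.glCreateShaderObjectARB(ARBVertexShader.GL_VERTEX_SHADER_ARB);
ARBShaderObjects.glShaderSourceARB(shaderID, source);
ARBShaderObjects.glCompileShaderARB(shaderID);
if (ARBShaderObjects.glGetObjectParameteriARB(shaderID, ARBShaderObjects.GL_OBJECT_COMPILE_STATUS_ARB) == GL11.GL_FALSE) {
    System.err.println("compile failed: " + getInfoLog(shaderID));
}

int programID = ARBShaderObjects.glCreateProgramObjectARB();
ARBShaderObjects.glAttachObjectARB(programID, shaderID);
ARBShaderObjects.glLinkProgramARB(programID);
if (ARBShaderObjects.glGetObjectParameteriARB(programID, ARBShaderObjects.GL_OBJECT_LINK_STATUS_ARB) == GL11.GL_FALSE) {
    System.err.println("link failed: " + getInfoLog(programID)); // empty on the Intel card
}

static String getInfoLog(int obj) {
    // query the log length, then fetch the log itself
    int length = ARBShaderObjects.glGetObjectParameteriARB(obj, ARBShaderObjects.GL_OBJECT_INFO_LOG_LENGTH_ARB);
    return ARBShaderObjects.glGetInfoLogARB(obj, length);
}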

This only happens on my Intel card; my ATI one runs fine.
Also, a couple of simple shaders did link fine on the Intel card but just drew a black screen (they worked fine on the ATI card too).
Is there anything I can do, or does the Intel card just suck and can't handle the shaders?

Thanks,
roland

Sometimes shader compilers fail so hard they don't even tell you why. It depends on the code.

Try using https://www.opengl.org/registry/specs/ARB/debug_output.txt for more info.
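
Something along these lines should get the driver talking. This is an LWJGL 2 sketch, assuming your driver actually exposes ARB_debug_output and that you create the context with the debug flag; the no-arg ARBDebugOutputCallback just dumps incoming messages to stderr:

// create a debug context, then hook ARB_debug_output
Display.create(new PixelFormat(), new ContextAttribs().withDebug(true));
if (GLContext.getCapabilities().GL_ARB_debug_output) {
    // default callback prints each driver message to System.err
    ARBDebugOutput.glDebugMessageCallbackARB(new ARBDebugOutputCallback());
}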

What is “similar”? One thing that I've found fails on Intel but not on other cards is using integers where you mean floats (e.g. 1 rather than 1.0). I think the Intel behaviour is actually correct unless you specify a #version, but it's the one that always seems to bite me. :persecutioncomplex:
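
For example (hypothetical one-liner): without a #version directive the source is treated as GLSL 1.10, which has no implicit int-to-float conversion, so a strict compiler rejects it:

float bad  = 1;   // strict GLSL 1.10 compilers (Intel) reject this: int, not float
float good = 1.0; // fine everywhere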

Thanks :slight_smile: You're right about shader compilers failing so hard they don't even tell you why. I commented everything out and added it back one piece at a time, which turned up some errors. I finally got it linking, but it still crashed when I ran it.

It turns out an array of uniform sampler2Ds was the issue (as I said, it worked fine on ATI), so I just replaced it with four separate sampler2D uniforms instead:


#define NUM_MAP_TEXTURES 4
uniform sampler2D mapTextures[NUM_MAP_TEXTURES]; // fails on Intel
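
That is, I now declare them separately, something like this (names approximate, not my exact source):

uniform sampler2D mapTexture0; // links fine on Intel
uniform sampler2D mapTexture1;
uniform sampler2D mapTexture2;
uniform sampler2D mapTexture3;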

Thanks, I got some of those errors too before; however, those printed out fine in the compile stage. Thinking about why the simpler shaders linked fine and this one didn't helped, though. :slight_smile:

It could even be the same reason; I should have been more generic in my answer! :wink: Looking at something like this ( http://stackoverflow.com/a/12031821 ), arrays of samplers are not valid below GLSL 1.30. Are you using a #version pragma?

According to the OpenGL wiki's GLSL page ( https://www.opengl.org/wiki/Core_Language_(GLSL)#Version ):

[quote]If a #version directive does not appear at the top, then it assumes 1.10, which is almost certainly not what you want.
[/quote]
What I've found is that AMD and nVidia cards seem to let features above the declared version work, whereas Intel cards seem to follow the spec and fail. The int vs. float thing is not the only time I've fallen foul of this, just the one I keep repeating! :slight_smile:
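
The directive has to be the very first line of the shader, before anything else; 130 below is just an example value:

#version 130
// with 1.30 declared, a sampler array is at least legal to declare
uniform sampler2D mapTextures[4];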

Thanks, I wasn’t using a #version directive. However, I don’t want to go right up to GLSL 4.0 just so that I can loop over the sampler array :-\

[quote]In GLSL 1.30 to 3.30, you can have sampler arrays, but with severe restrictions on the index. The index must be an integral constant expression. Thus, while you can declare a sampler array, you can’t loop over it.
[/quote]
Oh well, it seems to be fine how I currently have it, apart from a bit of duplicated code. I’ll get over it :slight_smile:
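
If the duplicated code gets annoying, I could always wrap the four samplers in a little helper like this (a sketch with a hypothetical sampleMap name; every branch uses a fixed sampler, so there's no array indexing at all):

vec4 sampleMap(int index, vec2 uv) {
    if (index == 0) return texture2D(mapTexture0, uv);
    if (index == 1) return texture2D(mapTexture1, uv);
    if (index == 2) return texture2D(mapTexture2, uv);
    return texture2D(mapTexture3, uv);
}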

Thanks again,
roland