speed of glsl

Recently I posted a topic with a similar question on the JOGL forum, so pardon my double-posting. However, no one was able to come up with a good answer to explain what I was seeing.

Anyway, here it goes:
When I’d draw a scene on either an Intel MacBook Pro with a mobile GeForce 8800 or an AMD PC with an ATI X700 using a custom GLSL shader (even when the shader did only one line of work to assign the fragment color), it would perform significantly slower than the fixed-function pipeline.
This is contrary to everything that I’ve heard, so I was wondering if anyone has experienced something similar, or if there are ways to incorrectly set up a shader program that cause it to run slower. (And FYI, I only called glUseProgram() once, so I know there was no additional call overhead slowing it down.)

Thanks,

I will get my ATI Radeon X1950 in a few days and I would like to test your shader. I already have a small program written in JOGL which allows me to test shaders. I would like to know if you’re right, because I plan to implement the Phong model in a shader and I don’t want to lose a lot of performance.
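For the record, what I have in mind is a per-vertex sketch along these lines (assuming the legacy gl_* built-ins from GLSL 1.10, so it stays comparable to the shaders in this thread; the light direction and colors come from the fixed-function light 0 state):

```
// Vertex shader sketch: per-vertex diffuse (Phong-style) lighting,
// reading light 0 and the front material from the legacy built-in state.
void main() {
    vec3 n = normalize(gl_NormalMatrix * gl_Normal);
    vec3 l = normalize(vec3(gl_LightSource[0].position)); // directional light
    float diffuse = max(dot(n, l), 0.0);
    gl_FrontColor = gl_FrontMaterial.diffuse
                  * gl_LightSource[0].diffuse * diffuse;
    gl_Position = ftransform();
}

// Fragment shader sketch: just pass the interpolated color through.
void main() {
    gl_FragColor = gl_Color;
}
```

(The two main() functions above are of course separate shader sources, compiled and linked into one program.)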

If memory serves correctly (and correct me if I am wrong), you were not replacing fixed functionality; you were using a shader on top of it (i.e., using all of the gl.glDoSomething() commands for your rendering and then just changing the fragment color with your shader). Take a look at http://www.gamedev.net/reference/programming/features/glsllib/
for an example of how to replace fixed functionality with shaders. The example shows how to replace texture and lighting functionality with shaders. It also shows how to build a shader library (actually the main purpose of the article).
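To give the flavor of the article’s approach: rather than layering a shader on top of the fixed-function setup, the shader takes over that stage entirely. A minimal sketch that replaces fixed-function single texturing (the uniform name `texture0` is my own invention; you would set it to 0 with glUniform1i so it reads from texture unit 0):

```
// Fragment shader sketch: replaces fixed-function single texturing.
// `texture0` is a hypothetical uniform name, bound to texture unit 0
// from the Java side after linking the program.
uniform sampler2D texture0;

void main() {
    gl_FragColor = texture2D(texture0, gl_TexCoord[0].st);
}
```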

Shaders only replace a portion of the existing API; you still need to use the same methods for submitting geometry data.

I did some searching through gamedev and found other people who have had similar problems. One suggested cause was a small constant overhead when drawing with a shader vs. the FFP. If this is true, it would explain the behavior of my test cases, since increasing the complexity of the shader (from a solid color to a directional lighting model) didn’t cause any further performance decrease.

[quote]Shaders only replace a portion of the existing API, you still need to use the same methods for submitting geometry data.
[/quote]

  • This I understand…

the original post asked…

[quote]it seems that the shader programs give me a worse framerate. With the fixed-function pipeline, I could get about 18 fps drawing 10,000 cubes, while using the shader gave me only 12.5. The shader consisted of this:

vertex shader:
void main() {
    gl_FrontColor = gl_Color;
    gl_Position = ftransform();
}
fragment shader:
void main() {
    gl_FragColor = vec4(.2, .4, .5, 1);
}

[/quote]
My point went along with what ezmic added in the original post… The shader that was shown doesn’t seem to replace any fixed functionality; it looks like it is just setting a fragment color in addition to the original rendering (original rendering: 18 fps; original rendering + shader changing the fragment color = more work, so this should be slightly slower). Am I missing something? I am very sorry if I am just confused on this; I wouldn’t want to send anyone off in the wrong direction…

Well, a basic no-lighting, no-textures FF setup will still have to output the surface colour. So the original rendering would use the FF pipeline to output a solid colour, whereas the shader version disables that FF part and replaces it with an identical shader implementation.

OK, so let’s say I render something large where I specify a color for all vertices, as case 1:


gl.glInterleavedArrays(GL.GL_C4F_N3F_V3F, 0, meshVertices);
gl.glDrawArrays(GL.GL_TRIANGLES, 0, MeshVertCount);

 

For case 2 I do the same thing with a simple shader, but I use GL.GL_N3F_V3F and let the shader handle color:


       gl.glUseProgram(shaderProgram);   // enable the custom shader
       gl.glInterleavedArrays(GL.GL_N3F_V3F, 0, meshVertices);
       gl.glDrawArrays(GL.GL_TRIANGLES, 0, MeshVertCount);
       gl.glUseProgram(0);               // back to fixed function

Case 2 should be faster, right?

And for case 3 I do what case 1 did, with the shader as well. Aren’t I doing the color work with both the FF pipeline and the shader, basically duplicating effort and therefore adding overhead?

           gl.glUseProgram(shaderProgram);   // enable the custom shader
           gl.glInterleavedArrays(GL.GL_C4F_N3F_V3F, 0, meshVertices);
           gl.glDrawArrays(GL.GL_TRIANGLES, 0, MeshVertCount);
           gl.glUseProgram(0);               // back to fixed function

If interleaved arrays are a bad example, replace them with gl.glVertex3f(), gl.glColor4f(), and gl.glNormal3f() calls…

I would test this now but my work computer is horrible and doesn’t support shaders :frowning:

According to the OpenGL spec, shaders are supposed to replace a portion of the FF pipeline: mainly vertex processing for a vertex shader and fragment color computation for a fragment shader. Thus, if a shader is enabled, the FF portion won’t be run along with the shader portion; it’s one or the other. However, shaders have so many extra attributes that are supposed to be available to them from the GL state that maybe making all of that state available to the custom shader has some overhead.
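As an illustration of how much fixed-function state a 1.10-era GLSL shader can read through built-ins even though the FF stage itself is replaced (the driver presumably has to keep all of it in sync whether or not the shader uses it) — a sketch:

```
// Fragment shader sketch: reads fixed-function state through the
// legacy built-in uniforms, even though the FF stage is not running.
void main() {
    vec4 light    = gl_LightSource[0].diffuse;  // FF light 0 state
    vec4 material = gl_FrontMaterial.diffuse;   // FF material state
    vec4 fog      = gl_Fog.color;               // FF fog state
    gl_FragColor  = light * material + 0.0 * fog;
}
```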

Can you please post your entire test case for us? I think it would help if we could see the whole thing…

DP :slight_smile:

I would if it still gave the results I described. However, I had set it aside for a while, and coming back to it, it no longer performs slower in the test case; performance is now identical. I don’t think I changed anything, except that the Mac got a new software update, so that might have had an effect. Anyway, I found an answer that worked and I’m no longer sure the problem really exists, so thanks all for the help.