No. A shader does not do any rendering. It manipulates the vertex/fragment data.
If you are using fixed function rendering, you aren’t using shaders. That’s what the ‘fixed’ means – it can’t be changed. To render with a shader, you pass your vertex data to the graphics card via vertex arrays or VBOs and set up the projection matrices, blending modes and any other state you need, exactly as you would for fixed function. The only difference is that when you want a particular effect, you activate a particular shader to replace the per-vertex processing stage of the pipeline.
Here’s the complete vertex stage of the pipeline:
per vertex operations -> primitive assembly -> clip/project/viewport/cull -> rasterize
In the fixed function pipeline, all of this happens in the driver. With shaders, you can completely replace the ‘per vertex operations’ stage with your own algorithm, as often as you like. Vertex data includes position, color, normal and texture coordinates and, for advanced uses, any custom data you want to associate with each vertex. This lets you manipulate the data in any way you need before it is actually drawn to the screen. Look at this shader, for example:
#version 330

layout(location = 0) in vec4 position;

void main()
{
    gl_Position = position;
}
The graphics card gives the position of each vertex to the shader in the variable ‘position’. The shader does no manipulation of the position at all. It simply passes it off to the next stage of the vertex pipeline untouched. Now look at this vertex shader:
#version 330

layout(location = 0) in vec4 position;

uniform float loopDuration;
uniform float time;

void main()
{
    float timeScale = 3.14159f * 2.0f / loopDuration;
    float currTime = mod(time, loopDuration);
    vec4 totalOffset = vec4(
        cos(currTime * timeScale) * 0.5f,
        sin(currTime * timeScale) * 0.5f,
        0.0f,
        0.0f);
    gl_Position = position + totalOffset;
}
It takes the same information, the position of the vertex, and modifies it based on elapsed time before passing the modified position on to the next stage of the pipeline. While this shader is active, every object that is drawn will travel in a circle around its original position, completing one revolution every loopDuration seconds.
This is what vertex shaders do. The basic functionality is for transformations and lighting, but you can do a variety of tricks by manipulating any of the data associated with the vertex. That’s the key point: you manipulate vertex data, then pass it back to the pipeline for final rendering. In the fragment shader, the concept is the same, but you are now manipulating the final color that is output to the screen rather than vertices.
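For comparison, here is a minimal fragment shader (the output variable name outputColor is just my own choice; any variable declared as an out vec4 works in GLSL 330):

```glsl
#version 330

out vec4 outputColor;

void main()
{
    // Ignore lighting, textures, everything: shade every fragment solid red.
    outputColor = vec4(1.0f, 0.0f, 0.0f, 1.0f);
}
```

While this shader is active, every object drawn will come out flat red, regardless of its vertex colors or textures – the fragment shader has the final say on the color.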
[quote] If so, how can you tell if the shader is actually working?
[/quote]
By what you see on the screen. It’s usually fairly obvious if the shader is working or not. What is sometimes less obvious is whether or not your shader is buggy.
As for where the variables in a shader come from: some are built into the shading language (like gl_Position above), some are fed from the data you configure in your vertex arrays/VBOs (like position above), some you have to pass in from your program via the shader API as uniforms (like time and loopDuration in the second shader above), and some are plain local variables (like timeScale and currTime in the second shader above).
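To tie those categories back to the syntax, here is an annotated sketch (the variable names are purely illustrative):

```glsl
#version 330

layout(location = 0) in vec4 position; // per-vertex input: fed from your vertex arrays/VBOs
uniform float time;                    // uniform: set from your program via the shader API

void main()
{
    float wobble = sin(time) * 0.1f;   // local: exists only inside this shader
    gl_Position = position + vec4(0.0f, wobble, 0.0f, 0.0f); // gl_Position: built into GLSL
}
```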
I highly recommend the tutorial Learning Modern 3D Graphics Programming if you’ve not seen it yet. The Orange Book (OpenGL Shading Language by Rost et al.) is also a good reference to keep handy.