Why do we need glNormal3d when using lighting?

At first, it seems self-explanatory: how else would OpenGL know how the light falls on the different polygons, right? But then it got me thinking (seldom a good idea :wink: ) and I realized that the algebraic normal vector is already defined implicitly by the geometry anyway. Why on Earth should I set the normal vector to something that can already be computed from the vertices I had to specify in the first place?
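For reference, by the "algebraic" normal I mean the normalized cross product of two edge vectors of a triangle. Here is a minimal C sketch of that computation fed into the fixed-function pipeline; the helper names are just for illustration:

```c
#include <GL/gl.h>
#include <math.h>

/* Illustrative helper: face normal of triangle (v0, v1, v2) as the
   normalized cross product of two edge vectors. */
static void faceNormal(const double v0[3], const double v1[3],
                       const double v2[3], double n[3])
{
    double e1[3] = { v1[0] - v0[0], v1[1] - v0[1], v1[2] - v0[2] };
    double e2[3] = { v2[0] - v0[0], v2[1] - v0[1], v2[2] - v0[2] };

    n[0] = e1[1] * e2[2] - e1[2] * e2[1];
    n[1] = e1[2] * e2[0] - e1[0] * e2[2];
    n[2] = e1[0] * e2[1] - e1[1] * e2[0];

    double len = sqrt(n[0] * n[0] + n[1] * n[1] + n[2] * n[2]);
    if (len > 0.0) {
        n[0] /= len; n[1] /= len; n[2] /= len;
    }
}

/* One normal for the whole triangle: this is the "flat-shaded" case. */
static void drawFlatTriangle(const double v0[3], const double v1[3],
                             const double v2[3])
{
    double n[3];
    faceNormal(v0, v1, v2, n);

    glBegin(GL_TRIANGLES);
    glNormal3dv(n);
    glVertex3dv(v0);
    glVertex3dv(v1);
    glVertex3dv(v2);
    glEnd();
}
```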

I have a suspicion that one can produce some cool graphical effects by setting the OpenGL normal to something different from the purely algebraic one. Is that correct? What effects would those be?

1/ The normals differ from one face to another, so computing them on the fly for each face would cost time, whereas precomputing them saves it.
2/ You want a nice smooth shading effect, so you want a normal per vertex rather than per face. These are a bit more complicated to compute, since you have to know all the faces that share a given vertex. That can't be done without knowing the whole mesh topology, and it would be very expensive to recompute each frame (see the sketch after this list).
3/ Sometimes you want the same vertex to have different normals, for the effect called "smoothing groups" (google it). That would be even more costly to compute.
4/ And beyond that, some CAD software lets artists define their own normals, independently of the underlying geometry. You'll want to use those instead of automatically generated ones (assuming you can read them from the file or have a custom exporter).
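As a rough illustration of point 2/, here is a minimal sketch of how per-vertex normals can be built by accumulating and averaging the face normals of all triangles that share a vertex. It assumes a simple indexed triangle mesh; the data layout and function name are hypothetical:

```c
#include <math.h>

/* Hypothetical mesh layout: vertexCount vertices (x,y,z interleaved) and
   triCount triangles of three vertex indices each. The output buffer
   normals must hold 3 * vertexCount doubles. */
void computeVertexNormals(const double *vertices, int vertexCount,
                          const unsigned int *indices, int triCount,
                          double *normals)
{
    for (int i = 0; i < 3 * vertexCount; ++i)
        normals[i] = 0.0;

    /* Accumulate each (unnormalized) face normal onto its three vertices. */
    for (int t = 0; t < triCount; ++t) {
        const double *v0 = &vertices[3 * indices[3 * t + 0]];
        const double *v1 = &vertices[3 * indices[3 * t + 1]];
        const double *v2 = &vertices[3 * indices[3 * t + 2]];

        double e1[3] = { v1[0] - v0[0], v1[1] - v0[1], v1[2] - v0[2] };
        double e2[3] = { v2[0] - v0[0], v2[1] - v0[1], v2[2] - v0[2] };
        double fn[3] = {
            e1[1] * e2[2] - e1[2] * e2[1],
            e1[2] * e2[0] - e1[0] * e2[2],
            e1[0] * e2[1] - e1[1] * e2[0]
        };

        for (int k = 0; k < 3; ++k) {
            double *vn = &normals[3 * indices[3 * t + k]];
            vn[0] += fn[0]; vn[1] += fn[1]; vn[2] += fn[2];
        }
    }

    /* Normalize: each result is the averaged direction of the adjacent
       face normals, which is what gives the smooth-shaded look. */
    for (int v = 0; v < vertexCount; ++v) {
        double *n = &normals[3 * v];
        double len = sqrt(n[0] * n[0] + n[1] * n[1] + n[2] * n[2]);
        if (len > 0.0) {
            n[0] /= len; n[1] /= len; n[2] /= len;
        }
    }
}
```

Once you have such an array, you pass the matching entry to glNormal3dv before each vertex (or supply it as a normal array). And as points 3/ and 4/ say, this automatic averaging is exactly what you don't want when you need hard edges or artist-authored normals.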

SeskaPeel.