Can anyone explain why lighting calculations must occur in tangent space rather than model space? I keep hearing that object-space (model-space) normal maps won't remain accurate if the model is rotated, but if lightPos and eyePos are multiplied by the inverse model-view matrix they are brought into the same coordinate frame as the vertex and, by interpolation, the fragment. The other thing I hear is that animation which deforms a mesh's triangles invalidates object-space lighting calculations, but every fragment normal is tightly defined by the triangle's vertices, since those pin the normal map at the UV coordinates.
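To make the model-space approach I mean concrete, here is a rough C++ sketch (all the names are just illustrative; I've written it with a world-space light and the inverse model transform decomposed into rotation + translation, but the inverse model-view version is the same idea):

[code]
#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3 sub(Vec3 a, Vec3 b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static Vec3 normalize(Vec3 v) {
    float len = std::sqrt(dot(v, v));
    return { v.x / len, v.y / len, v.z / len };
}

// Apply the inverse model transform (decomposed into a row-major 3x3
// rotation part plus a translation) to a world-space point.
static Vec3 toModelSpace(const float rot[9], Vec3 trans, Vec3 p) {
    Vec3 r = { rot[0]*p.x + rot[1]*p.y + rot[2]*p.z,
               rot[3]*p.x + rot[4]*p.y + rot[5]*p.z,
               rot[6]*p.x + rot[7]*p.y + rot[8]*p.z };
    return { r.x + trans.x, r.y + trans.y, r.z + trans.z };
}

// Diffuse term computed entirely in model space: the light position is
// brought into model space once, and the normal comes straight from an
// object-space normal map, so no per-fragment basis change is needed.
float diffuseModelSpace(const float invModelRot[9], Vec3 invModelTrans,
                        Vec3 lightPosWorld,   // light position, world space
                        Vec3 fragPosModel,    // interpolated vertex position
                        Vec3 normalFromMap)   // sampled object-space normal
{
    Vec3 lightPosModel = toModelSpace(invModelRot, invModelTrans, lightPosWorld);
    Vec3 L = normalize(sub(lightPosModel, fragPosModel));
    float d = dot(normalize(normalFromMap), L);
    return d > 0.0f ? d : 0.0f;
}
[/code]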
Tangent space is nice for textures (normal maps) because it decouples the texture bumps from the geometry, so you can easily edit the map in a graphics editor. It's also better for texture compression, since you only need to store two channels, which lets you use DXT5 (G+A) or 3Dc compression.
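For reference, here is a rough sketch of the reconstruction those two-channel formats rely on: z can be rebuilt because a unit-length tangent-space normal always points out of the surface, so z = sqrt(1 - x^2 - y^2).

[code]
#include <cmath>
#include <algorithm>

struct Vec3 { float x, y, z; };

// Rebuild a tangent-space normal from a two-channel texture sample.
// x is stored in green and y in alpha (the DXT5 "G+A" trick) or in the
// two channels of a 3Dc texture; z is implied since the normal is unit
// length and points out of the surface (z >= 0).
Vec3 decodeTwoChannelNormal(float g, float a)   // raw samples in [0, 1]
{
    float x = g * 2.0f - 1.0f;                  // expand to [-1, 1]
    float y = a * 2.0f - 1.0f;
    float z = std::sqrt(std::max(0.0f, 1.0f - x*x - y*y));
    return { x, y, z };
}
[/code]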
As for mesh skinning (deforming), it's more efficient to use tangent-space normal maps, but you can use object-space normal maps too; e.g. the game Overgrowth uses them for characters. You can then share one map across all LODs of a model, whereas a tangent-space normal map has graphical issues there because each LOD has different geometry.
Personally, I've written a converter from tangent space to object space and back, and I sometimes use it to fix up problems by painting in 3D in a modelling package.
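The conversion itself is just a change of basis. A rough sketch of both directions, assuming an orthonormal TBN (tangent, bitangent, normal) basis is available at each texel:

[code]
struct Vec3 { float x, y, z; };

static float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Tangent -> object: treat T, B, N as the columns of the TBN matrix,
// i.e. n_obj = T*n.x + B*n.y + N*n.z.
Vec3 tangentToObject(Vec3 n, Vec3 T, Vec3 B, Vec3 N) {
    return { T.x*n.x + B.x*n.y + N.x*n.z,
             T.y*n.x + B.y*n.y + N.y*n.z,
             T.z*n.x + B.z*n.y + N.z*n.z };
}

// Object -> tangent: the transpose of TBN (its inverse, since the basis
// is orthonormal), which is just a projection onto each basis vector.
Vec3 objectToTangent(Vec3 n, Vec3 T, Vec3 B, Vec3 N) {
    return { dot(n, T), dot(n, B), dot(n, N) };
}
[/code]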
[quote]Can anyone explain why lighting calculations must occur in tangent space
[/quote]
This is the easiest frame of reference in which to compute lighting: tangent space is aligned to the current “triangle frame”, so the unperturbed surface normal is simply (0, 0, 1).
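For example, the typical vertex-stage step looks something like this rough sketch (assuming per-vertex tangent/bitangent/normal vectors given in the same space as the light position):

[code]
struct Vec3 { float x, y, z; };

static Vec3 sub(Vec3 a, Vec3 b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Move the per-vertex light vector into the triangle's own frame so the
// fragment stage can use the sampled tangent-space normal directly.
// Multiplying by the transposed TBN matrix is the inverse transform
// because the (T, B, N) basis is orthonormal.
Vec3 lightVectorToTangentSpace(Vec3 lightPos, Vec3 vertexPos,
                               Vec3 T, Vec3 B, Vec3 N)
{
    Vec3 L = sub(lightPos, vertexPos);
    return { dot(L, T), dot(L, B), dot(L, N) };  // transpose(TBN) * L
}
[/code]

After this, the fragment stage just normalizes the interpolated vector and dots it with the sampled normal-map value.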
Some other advantages of tangent-space maps are that in object space, if the mesh is deformed, the map would need to be recomputed; also, in tangent space you can use tiling textures or reuse the same texture on different objects.