Low poly vs. high-res normal maps

I’m working my way through various books on 3D graphics processing. Recently I started wondering whether the advantage of lower-polygon models vanishes at some point once I increase the resolution of the normal maps to compensate for the lower detail of the mesh.

From my experience with performance optimisation on CPUs, I know that memory access has significantly high latency (up to 100 clock cycles), especially when frequently accessing lots of different addresses (which results in cache misses). Applying this intuition to 3D processing, vertex fetching should be considered quite fast, because it can be streamed straightforwardly, reading block by block from memory into the pipeline. In comparison, reading texels from textures is more like randomly accessing different locations in a large memory block. So, even though vertices have to be processed through the whole pipeline and induce significant processing effort, reading from multiple large textures in the fragment shader should have a similar or even higher impact on performance.
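To illustrate what I mean, here is a naive CPU-side sketch (my own toy code, not a rigorous benchmark - no JIT warm-up or anything) comparing streaming access against chasing random indices through the same array. Both loops do the same amount of arithmetic and differ only in their access pattern:

```java
import java.util.Random;

public class AccessPatterns {
    public static void main(String[] args) {
        int n = 1 << 24;              // 16M ints - far larger than any CPU cache
        int[] data = new int[n];
        int[] order = new int[n];
        for (int i = 0; i < n; i++) {
            data[i] = i;
            order[i] = i;
        }
        // Fisher-Yates shuffle to build a random visiting order
        Random rng = new Random(42);
        for (int i = n - 1; i > 0; i--) {
            int j = rng.nextInt(i + 1);
            int tmp = order[i]; order[i] = order[j]; order[j] = tmp;
        }

        long t0 = System.nanoTime();
        long sumSeq = 0;
        for (int i = 0; i < n; i++) sumSeq += data[i];        // streaming, cache-friendly
        long t1 = System.nanoTime();
        long sumRnd = 0;
        for (int i = 0; i < n; i++) sumRnd += data[order[i]]; // scattered, cache-hostile
        long t2 = System.nanoTime();

        System.out.printf("sequential: %d ms, random: %d ms (checksums %d / %d)%n",
                (t1 - t0) / 1_000_000, (t2 - t1) / 1_000_000, sumSeq, sumRnd);
    }
}
```

The random pass will typically be several times slower purely because of cache misses - that is the effect I am worried about for texel fetches.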

So, from a performance point of view, is there a threshold beyond which it doesn’t make sense to increase the size of a normal map, and it’s better to keep a certain number of polygons instead? Or is my view of GPU architecture wrong?

nah, it’s pretty much as you describe. texture memory is a bit different though - it’s cached in a texture-fetch-friendly way (tiled layouts, dedicated texture caches), so even the scattered texel access isn’t really an issue.

but i think you could approach the whole thing from another side.

consider the fact that you cannot really pinpoint a threshold value at all. it’s too dependent on what’s going on - on the big picture. let alone the different hardware (gpu+cpu+ram+hd) combinations possible.

if you have high ALU pressure, you can afford to spend memory; if memory is the limit, you might want to run more shader work instead.

i’m assuming you’re doing something like https://knowledge.autodesk.com/support/3ds-max/learn-explore/caas/CloudHelp/cloudhelp/2015/ENU/3DSMax/files/GUID-9A503FA1-E2B1-4E20-984B-DAC9AD8AB7A0-htm.html

  • model high-res
  • reduce the polygon count
  • bake a normal map from the high-res mesh, matching the low-res mesh

now we can just create assets as high-res as possible, then sample down - and select the LOD mesh+textures later, e.g. in the render loop.
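something like this - a minimal sketch of distance-based LOD selection, with all type and method names made up:

```java
// minimal sketch of distance-based LOD selection in the render loop;
// Mesh/Texture are stand-ins for whatever handle types your engine uses.
final class Mesh {}
final class Texture {}

final class Lod {
    final Mesh mesh;
    final Texture normalMap;   // baked against this LOD's geometry
    final float maxDistance;   // use this LOD while the camera is closer than this
    Lod(Mesh mesh, Texture normalMap, float maxDistance) {
        this.mesh = mesh;
        this.normalMap = normalMap;
        this.maxDistance = maxDistance;
    }
}

final class LodChain {
    private final Lod[] lods;  // sorted from most to least detailed

    LodChain(Lod... lods) { this.lods = lods; }

    // called once per object per frame; then bind lod.mesh + lod.normalMap
    Lod select(float cameraDistance) {
        for (Lod lod : lods) {
            if (cameraDistance < lod.maxDistance) return lod;
        }
        return lods[lods.length - 1]; // past the last threshold: coarsest LOD
    }
}
```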

o/

A normal map is not a full replacement for a high-poly mesh. You could use a cube as a low-poly “sphere”. It’d still look like an obvious cube, but the lighting would be computed as if it were a sphere. No matter how high-res your normal map is, it will not be able to compensate for the shortcomings of the geometry.
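To make that concrete, here’s a tiny sketch (my own illustration): baking the “sphere” normal for a point on a unit cube is just normalizing its position, so the shading acts spherical while the silhouette stays a cube:

```java
// illustration only: for a cube centered at the origin, the normal baked
// from a sphere with the same center is just the normalized position.
final class SphereNormalBake {
    static float[] bakedSphereNormal(float x, float y, float z) {
        float len = (float) Math.sqrt(x * x + y * y + z * z);
        return new float[] { x / len, y / len, z / len };
    }

    public static void main(String[] args) {
        // a point on the cube face z = +0.5: the geometric normal is (0, 0, 1),
        // but the baked "sphere" normal points away from the center instead.
        float[] n = bakedSphereNormal(0.3f, -0.2f, 0.5f);
        System.out.printf("baked normal: (%.3f, %.3f, %.3f)%n", n[0], n[1], n[2]);
        // the silhouette is still decided by the cube's 8 vertices, though -
        // no amount of normal map resolution changes that.
    }
}
```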

Normal maps are still awesome for filling in detailed stuff that would both be too expensive to keep in the model and look bad due to containing details smaller than a pixel, requiring a lot of anti-aliasing/multisampling. They just don’t help with improving the silhouette of an object. Tessellation, on the other hand, could be used to turn that cube into a sphere, and normal mapping could then be used to fix up the normals.

right, the silhouette (also shadows) can be improved by tessellation + displacement mapping using a height-map texture (matching the normal map), by finding the edges that contribute to the shape.

sounds a bit overkill. isn’t just drawing a damn high-res mesh about as slow anyway?

Tessellation can actually be faster and have less popping. It’s faster because you can draw the exact same mesh/LOD level as many times as you want and have the GPU dynamically tessellate it, leading to less CPU overhead and the right number of triangles for a given view distance of the mesh (an optimal quality/performance trade-off on the GPU). In addition, skinning is done only for the base vertices and interpolated for the vertices added by tessellation, so it can be cheaper on the GPU in that sense too. It has quite a few advantages, in other words.
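For illustration, here’s roughly the kind of per-edge math a tessellation control shader would do to pick a level from view distance - a sketch with made-up constants and names, not a drop-in implementation:

```java
// made-up sketch of distance-based tessellation level selection: subdivide
// an edge so the generated triangles are roughly a target size on screen.
final class TessLevel {
    static float levelForEdge(float edgeWorldLength, float distanceToCamera,
                              float screenHeightPixels, float fovYRadians,
                              float targetPixelsPerTriangle) {
        // approximate projected size of the edge in pixels
        float pixels = edgeWorldLength * screenHeightPixels
                / (2f * distanceToCamera * (float) Math.tan(fovYRadians * 0.5f));
        float level = pixels / targetPixelsPerTriangle;
        return Math.max(1f, Math.min(level, 64f)); // clamp to typical hardware limits
    }

    public static void main(String[] args) {
        // same mesh, two distances: the near one gets many more subdivisions
        System.out.println(levelForEdge(1f, 2f, 1080f, (float) Math.toRadians(60), 8f));
        System.out.println(levelForEdge(1f, 50f, 1080f, (float) Math.toRadians(60), 8f));
    }
}
```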

That being said, it’s probably completely overkill for OP. I was simply mentioning that there’s a reason why tessellation exists, because it solves a problem that normal mapping doesn’t solve.

Thank you for your answers.

basil_, theagentd: So, apart from limitations (concerning available memory size and processing power), you’d say the latency of texel fetching is pretty low in comparison to vertex fetching and/or processing. And yes, baking normal maps from high-res meshes is the concept I am referring to. It is just an example, though - from my point of view, the same question applies to displacement mapping, as mentioned by theagentd. You remove information from your mesh (i.e. vertices) and provide it to another stage in the pipeline via textures if you want to keep the level of detail of your original mesh.

But, what I understand from your answers is basically this: You have to analyse your application and available hardware to determine the best tradeoff between mesh complexity and texture sizes. Am I correct?

Texture size has pretty much no impact at all on performance; it really just uses more memory. The actual bandwidth usage doesn’t really go up with texture size. This is assuming you’re using mipmaps and coherent sampling patterns though, which is essentially what always happens when you do normal mapping or any kind of texturing of triangles. There is usually no real reason to not use a normal map if you have one, since they’re so cheap and can improve the lighting a lot.
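The intuition behind that: with mipmaps, the hardware picks the level where one texel covers roughly one pixel, so the number of texels touched is bounded by the screen footprint, not by the stored resolution. A sketch of the standard LOD selection (my own simplified code - real hardware also handles anisotropy etc.):

```java
// sketch of standard mip LOD selection: pick the level where the per-pixel
// UV footprint covers about one texel, so bandwidth tracks the screen-space
// footprint rather than the stored texture size.
final class MipSelect {
    // duDx etc. are the screen-space derivatives of the texel coordinates
    static float mipLevel(float duDx, float dvDx, float duDy, float dvDy) {
        float lenX = (float) Math.sqrt(duDx * duDx + dvDx * dvDx);
        float lenY = (float) Math.sqrt(duDy * duDy + dvDy * dvDy);
        float footprint = Math.max(lenX, lenY); // texels per pixel
        return Math.max(0f, (float) (Math.log(footprint) / Math.log(2)));
    }

    public static void main(String[] args) {
        // a big texture minified so 8 texels pass under each pixel:
        // the sampler just reads mip 3, same cost as a small texture.
        System.out.println(mipLevel(8f, 0f, 0f, 8f)); // -> 3.0
    }
}
```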

The real trade-off is between mesh complexity and performance. More vertices = more work for the GPU = lower FPS. No matter whether you’re using a normal map or not, you usually want more triangles anyway to get better silhouettes. In addition, tessellation doesn’t replace normal mapping. Calculating new vertex normals for the generated vertices doesn’t improve things much, so simply sampling a normal map matching the heightmap used will give by far the best result with tessellation.
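For reference, the normal map matching a heightmap can be derived with central differences - a sketch with assumed helper names:

```java
// sketch: deriving a tangent-space normal from a heightmap by central
// differences - this is the normal you'd sample instead of recomputing
// vertex normals for the tessellation-generated vertices.
final class HeightToNormal {
    // height: grayscale heights in [0,1]; scale: world height per texel step
    static float[] normalAt(float[][] height, int x, int y, float scale) {
        int w = height.length, h = height[0].length;
        float hl = height[Math.max(x - 1, 0)][y];
        float hr = height[Math.min(x + 1, w - 1)][y];
        float hd = height[x][Math.max(y - 1, 0)];
        float hu = height[x][Math.min(y + 1, h - 1)];
        // negated gradient of the height field; z points out of the surface
        float nx = (hl - hr) * scale * 0.5f;
        float ny = (hd - hu) * scale * 0.5f;
        float nz = 1f;
        float len = (float) Math.sqrt(nx * nx + ny * ny + nz * nz);
        return new float[] { nx / len, ny / len, nz / len };
    }
}
```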

Finally, there really are cases where a normal map is actually better than real geometry. If you have tiny seams, edges, etc. that you want to show strong specular reflections, not even high amounts of MSAA can fully anti-alias them if you’re rendering with HDR. In this case, a normal map can do the job better and faster, as you can filter the normal map in realtime to anti-alias them much more effectively, all while using fewer vertices. You can even prefilter the normal map to anti-alias it ahead of time as well, for no realtime cost.
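One concrete prefiltering approach in this vein is Toksvig’s method (the code below is my own sketch of it): averaging disagreeing normals while building the mips shortens the average vector, and that shortened length can be used to widen the specular lobe ahead of time:

```java
// sketch of Toksvig-style normal map prefiltering: mipmapping averages
// disagreeing normals, the average gets shorter, and that length encodes
// the "roughness" of the lost detail, shrinking the specular exponent.
final class ToksvigPrefilter {
    // len: length of the averaged (un-normalized) mip normal, in (0, 1]
    // specPower: the material's Blinn-Phong exponent
    static float adjustedSpecPower(float len, float specPower) {
        float ft = len / (len + specPower * (1f - len));
        return ft * specPower;
    }

    public static void main(String[] args) {
        // perfectly agreeing normals: exponent unchanged
        System.out.println(adjustedSpecPower(1.0f, 64f)); // 64.0
        // strongly disagreeing normals: lobe widened a lot
        System.out.println(adjustedSpecPower(0.8f, 64f)); // ~3.8
    }
}
```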

nice write-up theagentd. thanks.

i’d put it like this - this stuff is very interesting, so use it all together, but don’t overdo it :)