Low-poly meshes vs. high-res normal maps

I’m working my way through various books on 3D graphics processing. Recently I started wondering whether the advantage of low-polygon models vanishes at some point once I keep increasing the resolution of the normal maps that compensate for the missing mesh detail.

From my experience with performance optimisation for CPUs, I know that memory access has significantly high latency (up to ~100 clock cycles), especially when frequently accessing lots of different addresses (which results in cache misses). Applying this intuition to 3D processing, vertex fetching should be considered quite fast, because the vertex buffer can be streamed sequentially, block by block, from memory into the pipeline. In comparison, reading texels from textures is more like randomly accessing different locations in a large block of memory. So even though vertices have to be processed through the whole pipeline and incur significant processing cost, reading from multiple large textures in the fragment shader should have a similar or even higher impact on performance.
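To make the raw memory footprint part of that comparison concrete, here is a small back-of-the-envelope sketch. The vertex layout, texel format, and resolutions are assumptions I picked purely for illustration (not measured values), and it only compares data size, not access patterns or bandwidth:

```cpp
#include <cstddef>
#include <cstdio>

int main() {
    // Assumed sizes, for illustration only.
    // Interleaved vertex: position (12 B) + normal (12 B) + UV (8 B) = 32 B.
    const std::size_t bytesPerVertex = 32;
    // Uncompressed RGB8 normal-map texel.
    const std::size_t bytesPerTexel  = 3;

    // Hypothetical extra geometric detail: 100k additional vertices.
    const std::size_t extraVertices  = 100000;
    // Compensating normal-map detail: going from 1024^2 to 2048^2 texels.
    const std::size_t texels1k = 1024u * 1024u;
    const std::size_t texels2k = 2048u * 2048u;

    std::printf("extra vertex data : %zu KiB\n", extraVertices * bytesPerVertex / 1024);
    std::printf("1k normal map     : %zu KiB\n", texels1k * bytesPerTexel / 1024);
    std::printf("2k normal map     : %zu KiB\n", texels2k * bytesPerTexel / 1024);
    return 0;
}
```

Under these assumptions the 2k normal map (~12 MiB) is already several times larger than 100k extra vertices (~3 MiB), which is part of what makes me suspect the texture fetches could dominate at some point.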

So, from a performance point of view, is there a threshold beyond which it no longer makes sense to increase the size of the normal map, and it is better to keep a higher polygon count instead? Or is my view of GPU architecture simply wrong?