"I read in the Red Book that when quads become twisted, visual oddities can arise."

Yep. To quote the OpenGL FAQ: "An OpenGL implementation may or may not break up your quad into two triangles for rendering (…) Many OpenGL applications avoid quads altogether because of their inherent rasterization problems (…) Wise programmers use this (triangle) primitive in place of quads."

Basically, when the corners of a quad differ by certain amounts, you get unnatural interpolation across the quad, resulting in strange-looking lighting effects. Once upon a time I did a quad-based terrain (ERacer, DirectX/VB … okay, I know, I was young and needed the money: http://wolf.ascendancy.at, bottom of page), and if the terrain heightmap includes radical height changes, the effect shows.
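A small sketch of why this happens (the corner values and split choices here are my own illustration, not from the FAQ): take a quad with one raised corner and compare what the quad's centre evaluates to depending on which diagonal the implementation splits along, versus true bilinear interpolation.

```python
# Corner values of a unit quad, e.g. per-vertex lighting intensity.
# One corner is "twisted" up relative to the others.
z00, z10, z01, z11 = 0.0, 0.0, 0.0, 1.0

# Split along the 00-11 diagonal: the quad centre lies on that edge,
# so it linearly interpolates between z00 and z11.
center_a = (z00 + z11) / 2           # 0.5

# Split along the 10-01 diagonal instead: the centre now lies
# between the two flat corners.
center_b = (z10 + z01) / 2           # 0.0

# True bilinear interpolation, i.e. what "a quad" intuitively suggests:
center_bilinear = (z00 + z10 + z01 + z11) / 4   # 0.25

print(center_a, center_b, center_bilinear)   # 0.5 0.0 0.25
```

Neither triangle split reproduces the bilinear result, and the two splits disagree with each other, which is exactly the "unnatural interpolation" you see on steep terrain.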
"On the plus side, it seems that since texture patches are rectangular, using quads would be a natural fit."

That only applies if you use one texture per quad. What you get is a tile-based engine, which is simple to implement but introduces several problems, such as getting the seams between differing surface textures right.
"Also, the polygon count for any brute-force patch would be cut in half."

Without knowing the details of every graphics card, I'm pretty sure this is not true. Your quad will be split into two triangles internally before rasterization on the hardware side, and sending triangles in a strip or fan structure introduces no overhead compared to sending a quad on the transfer side.
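To make the "no overhead" point concrete, here is a sketch (my own helper, assuming row-major vertex layout) that builds the index order for one row of a heightmap grid as a single triangle strip. Each additional quad column costs only two indices, the same amortized cost as submitting quads:

```python
def strip_indices(width):
    """Index order for one grid row (width columns of vertices, two rows)
    rendered as a single triangle strip: alternate top/bottom vertices.
    Each quad after the first adds just two indices."""
    idx = []
    for x in range(width):
        idx.append(x)           # vertex in the top row
        idx.append(x + width)   # vertex directly below, in the bottom row
    return idx

# A row of 3 vertex columns = 2 quads = 4 triangles:
print(strip_indices(3))   # [0, 3, 1, 4, 2, 5]
```

You would feed such an index list to something like glDrawElements with GL_TRIANGLE_STRIP, one strip per heightmap row.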
Bottom line: if you want something flexible and state-of-the-art, go with triangles. For example, this link http://home.planet.nl/~monstrous/terrain.html will take you straight from zero to geomorphing LOD in a day or two, complete with source code.
HTH
Wolfgang