Hi all,
I've got nowhere else to post this, so it might as well be here. I kinda thought of a terrain level of detail algorithm that might just spank the living daylights out of any other algorithm (emphasis on might : ). Imagine the camera is looking down onto the xz plane like in an RTS game…
The outline of the “simple” algorithm is:
1. Create an n*m grid that is orthogonal to the near plane of the camera (i.e. in post-perspective space)
2. Project this grid onto a plane in world space (probably the xz plane)
3. Displace the vertices by a heightmap
4. Render the grid
If you can predict the outcome of this algorithm, you'll see that the vertices of the grid after step 2 are no longer evenly spaced; they are more compacted towards the camera's location on that plane.
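To make that a bit more concrete, here's a rough CPU-side sketch of steps 1-4. Vec3, Ray, unprojectToRay() and sampleHeight() are made-up placeholders standing in for whatever math and heightmap code you already have, and the plane intersection assumes every grid ray actually hits the xz plane:

// A minimal sketch only, not a drop-in implementation.
#include <vector>

struct Vec3 { float x, y, z; };
struct Ray  { Vec3 origin, dir; };

Ray   unprojectToRay(float u, float v);   // near-plane point (u,v in [0,1]) -> world-space ray (assumed helper)
float sampleHeight(float x, float z);     // bilinear heightmap lookup (assumed helper)

std::vector<Vec3> buildProjectedGrid(int n, int m)
{
    std::vector<Vec3> verts;
    verts.reserve(n * m);
    for (int j = 0; j < m; ++j)
    {
        for (int i = 0; i < n; ++i)
        {
            // Step 1: evenly spaced point on the near plane (post-perspective space).
            float u = i / float(n - 1);
            float v = j / float(m - 1);

            // Step 2: intersect the camera ray through (u,v) with the world-space
            // xz plane (y = 0). Assumes the ray actually hits the plane; the
            // "back-firing" case mentioned further down isn't handled here.
            Ray   r = unprojectToRay(u, v);
            float t = -r.origin.y / r.dir.y;
            Vec3  p = { r.origin.x + t * r.dir.x, 0.0f, r.origin.z + t * r.dir.z };

            // Step 3: displace by the heightmap.
            p.y = sampleHeight(p.x, p.z);
            verts.push_back(p);
        }
    }
    return verts;   // Step 4: render with a fixed n*m index buffer.
}

Since the grid dimensions never change, the index buffer can be built once at startup and reused every frame, which is where the "indices are preserved" point below comes from.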
The algorithm gives you these advantages:
- Vertex popping is no longer a problem: frame-to-frame coherency is good, features are introduced slowly from the horizon towards the viewer, and geomorphing is done implicitly.
- No polys/vertices exist outside the camera's frustum, as opposed to SOAR, where each DAG node needs to be frustum-culled to reduce the stress on the GPU. I don't know much detail about ROAM, but it seems that polys do exist outside the frustum there too, they're just big polys, like in SOAR.
- Indices are preserved, so that's one less thing to calculate per frame; all other implementations need to recalculate them based on the new tessellation.
- You can restrict the mesh to an exact polygon budget, which I believe is impossible to do directly in SOAR or ROAM. In SOAR you can do it indirectly by specifying a higher screen-space tau value, but no direct control is given.
- No data paging issues at all, since the only data that needs to be stored is the actual height map, not any other auxiliary data (in SOAR, that would be the DAG stored in a special layout to avoid the need for manual paging and memory control). Your terrain heightfield data can therefore be as big as you want, as long as it fits into RAM.
As you can imagine, doing the calculations needed for step 2 per vertex per frame is a lot of work on the CPU (you can offload the work to a vertex shader, but you need to be able to do a texture lookup in the vertex shader, which is an NV-only operation at the moment). I'm pretty sure I can work out a way to do this using a heuristic based on the dot product between the camera and each of the x and z axes; that is then used along with the distance between the camera and the vertex to calculate its x/z position. To avoid the projection stage, the mesh can be built every frame in world-space coordinates as a uniform grid, and the heuristic is then used to turn it into a non-uniform grid. This method also avoids the nastiness of the projection “back-firing” if the camera isn't pointing at the xz plane (if you don't know why back-firing appears, google for “projective textures”).
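Just to illustrate that second approach, here's one way a non-uniform world-space grid could be built directly; this is only a placeholder heuristic (rows marching away from the camera with geometrically growing spacing), not the dot-product-based one I mentioned above:

// Purely illustrative: row spacing grows with distance from the camera, so
// detail is packed near the viewer. A better heuristic would replace this.
#include <vector>

struct Vec2 { float x, z; };

std::vector<Vec2> buildNonUniformGrid(Vec2 camPos, Vec2 viewDir,  // viewDir normalised on the xz plane
                                      int rows, int cols,
                                      float nearSpacing, float growth, float halfWidth)
{
    std::vector<Vec2> verts;
    verts.reserve(rows * cols);

    Vec2 side = { -viewDir.z, viewDir.x };   // perpendicular to the view direction on xz

    float dist = 0.0f;
    float step = nearSpacing;
    for (int r = 0; r < rows; ++r)
    {
        Vec2 centre = { camPos.x + viewDir.x * dist, camPos.z + viewDir.z * dist };

        // The width could also be scaled with dist so the grid roughly tracks the frustum.
        for (int c = 0; c < cols; ++c)
        {
            float s = (c / float(cols - 1) - 0.5f) * 2.0f * halfWidth;
            verts.push_back({ centre.x + side.x * s, centre.z + side.z * s });
        }

        dist += step;
        step *= growth;   // e.g. growth = 1.1 gives roughly exponential row spacing
    }
    return verts;        // displace by the heightmap and render as before
}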
I really wish I could throw everything down and just work on this; unfortunately, it's only getting my free time at the moment, which isn't a lot. I'll try and implement something in the next few weeks, and then possibly write a paper about it, we'll see. If anyone implements something before then, let me know, I'm very interested.
DP