Problems viewing vast landscapes

I want to do a small demo of a vast landscape, something like 30,000 km by 30,000 km, and still be able to walk in it and interact with objects as small as 1 cm. I solved the problem of loading such a vast region, so it should render as if it were a normal map with average content. The solution was random, dynamic terrain generation based on an overall map representation of the land, plus heavy use of terrain tiles, one for each LOD level around the player.

The problem is that the clipping planes have some limitations. There is a formula for setting the clipping planes that prevents me from having such a range of view (min 1 cm, max 30,000 km).

I have some questions about this.

What ways are there in Java3D to avoid this?

Is there any way to distort the biggest land tiles (some weird perspective projection) so that they render in the smaller region allowed by the clipping planes?

If I use the latest version of OpenGL (with the bindings), do I still have this limitation?

The Z-range limitations have nothing to do with Java3D and are the same in OpenGL. It is just a problem of the accuracy of the Z-buffer. So if you need the near clip at 1 cm, the far clip should be somewhere around 30 m.

You could consider depth-sorting the terrain yourself and switching off z-buffering entirely (I don't know off the top of my head how to do that in Java3D).
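To make the depth-sorting idea concrete, here is a minimal sketch of the painter's-algorithm ordering it implies. The `Tile` class and its `distanceToCamera` field are hypothetical placeholders for whatever scene structure you actually use, not Java3D classes; with z-buffering switched off, you would draw the tiles in exactly this farthest-first order.

```java
import java.util.Arrays;
import java.util.Comparator;

public class DepthSort {
    // Hypothetical stand-in for a terrain tile in the scene graph.
    static class Tile {
        final String name;
        final double distanceToCamera; // metres from the eye point
        Tile(String name, double d) { this.name = name; this.distanceToCamera = d; }
    }

    // Returns the tiles ordered farthest-first: the order in which they
    // must be drawn when the depth test is disabled.
    static Tile[] backToFront(Tile[] tiles) {
        Tile[] sorted = tiles.clone();
        Arrays.sort(sorted,
            Comparator.comparingDouble((Tile t) -> t.distanceToCamera).reversed());
        return sorted;
    }

    public static void main(String[] args) {
        Tile[] tiles = {
            new Tile("near", 10), new Tile("far", 25_000_000), new Tile("mid", 3_000)
        };
        for (Tile t : backToFront(tiles)) System.out.println(t.name);
    }
}
```

Note this only works cleanly for non-interpenetrating tiles; overlapping geometry is why the z-buffer exists in the first place.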

Here is the most general solution I can think of. It should work in any version of OpenGL.

Divide your minz..maxz interval into acceptable ranges. In your example you might use the following ranges: [0.03, 3], [3, 3000], [3000, 30000], [30000, 30000000]. Then render the whole scene once for each range, using that range's values as the frustum's minz and maxz (drawing the farthest range first and clearing the depth buffer between passes).

How many ranges you need depends on the resolution of the depth buffer. You need fewer passes with a 32-bit depth buffer than with a 16-bit one.

This is of course a bit slow :), but it's easy to do in OpenGL. I don't know if it's possible in Java3D; you would need a way to change the frustum while rendering the graph, or a way to do multipass rendering. Anyone got any ideas?
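The range-splitting step above can be sketched as a small helper. This is an illustration only: it partitions [near, far] into consecutive sub-ranges whose far/near ratio never exceeds a chosen limit (the limit of 1000 used below is an assumption, taken from the numbers discussed in this thread). Each resulting range would get its own frustum setup and a depth-buffer clear.

```java
import java.util.ArrayList;
import java.util.List;

public class DepthRanges {
    // Splits [near, far] into consecutive {lo, hi} ranges with
    // hi/lo <= maxRatio, for multipass rendering.
    static List<double[]> split(double near, double far, double maxRatio) {
        List<double[]> ranges = new ArrayList<>();
        double lo = near;
        while (lo < far) {
            double hi = Math.min(lo * maxRatio, far);
            ranges.add(new double[] { lo, hi });
            lo = hi;
        }
        return ranges;
    }

    public static void main(String[] args) {
        // 1 cm to 30,000 km, in metres, with an assumed ratio limit of 1000:
        // yields four passes, matching the four ranges suggested above.
        for (double[] r : split(0.01, 30_000_000, 1000)) {
            System.out.printf("[%g, %g]%n", r[0], r[1]);
        }
    }
}
```

With these numbers you get [0.01, 10], [10, 10000], [10000, 1e7] and [1e7, 3e7], i.e. four render passes.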

You used values like 3000/3, which says that maxz/minz must be at most 1000. That's for 16 bits, right? What about a 32-bit z-buffer? Most modern graphics cards have a 32-bit z-buffer.

And by the way, how do you get to the 3000/3 value? For a 16-bit z-buffer, if you work in cm units, multiplying 1 cm by 2^16 ~ 65,000 gives a maxz of 65,000 cm, or 650 m.

Now, working with a 32-bit z-buffer and units of 1 cm, it gives 2^32 cm ~ 4x10^9 cm, or 40,000 km, which I believe is more than enough to represent our entire planet with 1 cm precision. There is something fishy about this. :wink:

From the javadoc of javax.media.j3d.View:
[quote]The ratio of the back distance divided by the front distance, in physical eye coordinates, affects Z-buffer precision. This ratio should be less than about 3000 to accommodate 16-bit Z-buffers. Values of 100 to less than 1000 will produce better results.
[/quote]
So I used 1000 (although I made some mistakes in my last post :-[ ).
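As a quick sanity check of that javadoc rule, the ratio limit translates directly into a maximum back clip distance for a given front clip. The sketch below just applies the ratio; the "square it for a deeper buffer" case is the guess discussed later in this thread, not anything the javadoc promises.

```java
public class ClipRatio {
    // Farthest usable back clip distance for a given front clip distance
    // and a maximum allowed back/front ratio.
    static double maxBackClip(double frontClip, double maxRatio) {
        return frontClip * maxRatio;
    }

    public static void main(String[] args) {
        // front clip = 1 cm = 0.01 m, 16-bit ratio limit of 3000 -> about 30 m,
        // matching the "farclip somewhere around 30m" figure earlier in the thread.
        System.out.println(maxBackClip(0.01, 3000));
        // Squaring the ratio (a guess for a deeper buffer) -> about 90 km,
        // still nowhere near 30,000 km.
        System.out.println(maxBackClip(0.01, 3000.0 * 3000.0));
    }
}
```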

If you know you have got a 32-bit z-buffer, squaring the 16-bit ratio would be OK, I guess (1,000 * 1,000 = 1,000,000). But how do you set or get the depth-buffer resolution in Java3D? If you can't find a way to do this, you must assume you've got a 16-bit z-buffer.

I think I would rather inform anyone who wants to see the demo that a 32-bit z-buffer is required. If Java3D can take advantage of it, that is.

The number 3000 doesn't make much sense even for a 16-bit z-buffer. It probably means that of the 16 bits in the z-buffer value, only 12 or 11 bits are used for the actual z value (2^12 ~ 4000, 2^11 ~ 2000). The other 4 bits must be internal data.

Maybe they guessed at the time that every graphics card would need to cut some bits from the z value, and made a rough calculation.

Anyway, if the number of bits to cut out is 4, then a 32-bit z-buffer gives 32 - 4 = 28, thus 2^28 ~ 2.5e8 cm, or 2.5e3 km.

[quote]The number 3000 doesn't make much sense even for a 16-bit z-buffer. It probably means that of the 16 bits in the z-buffer value, only 12 or 11 bits are used for the actual z value (2^12 ~ 4000, 2^11 ~ 2000). The other 4 bits must be internal data.
[/quote]
Eh? Why the hell would 4 bits just be discarded? A 16-bit z-buffer means exactly that: 16 bits of precision in the depth buffer.

AFAIK, graphics cards can use the 16 bits however they want; however, we're talking floating point here, not integer values as you seem to think judging by your 11/12-bit guess. And they're likely to be in screen space, so they'll be in the range 0 to 1 anyway.

Are you sure about z-buffer values being represented as floating-point numbers in the range [0, 1]?

But that doesn't make any difference, since you still have as many numbers as you can fit in a 16-bit value. I just said they must have cut some bits, because if they used the full amount of bits, the ratio between the front and back clipping planes would be much bigger than 3000.

Let me try to explain this, because it's a little hard. If you have 16 bits to store an integer value, you can store 2^16 different numbers. If you have a floating-point format, then unless the format itself wastes some bits, you can store exactly the same count of numbers in those 16 bits. The only difference is that in a floating-point format the dynamic range is 0 to 1, while with an integer it is 0 to 2^16. In the end, no matter what number format is used, the precision, which is what matters here, is exactly the same.

The z values are normalized into a [0, 1] range, where 0 is the near clipping plane and 1 is the far clipping plane. But they are probably stored as integers, kind of like this: "unsigned short z = (int)(z_as_float_in_range_0_1 * 2^16)".

Anyway, all the bits in the z-buffer are used, because 0 is mapped to the near clipping plane and the maximum value of the z-buffer is mapped to the far clipping plane.

The additional problem is that Z-buffers are linear in Z, whereas the perceivable world is linear in 1/Z.

Hi
A long time ago I found this article, which has the information on why, on what z-buffer ratios to use, and on how to work them out.
Also note that it's high-powered 3D workstation cards that have a 32-bit z-buffer; most gaming cards have a 24-bit z-buffer, still better than 16-bit though :slight_smile:

HTH

Endolf

If the problem is the non-linearity of the z values in the way OpenGL handles the z-buffer, it is easy to solve. I could just divide my landscape into tiles and scale down the faraway tiles until they fit inside the front/back plane range, without touching the inner tiles where the camera is.
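The tile-scaling trick works because perspective projection is invariant under uniform scaling about the eye point: shrinking a tile's distance and its size by the same factor leaves its on-screen size unchanged. A minimal sketch, using the simple pinhole ratio size/distance for projected size (the numbers and the 0.9 margin are illustrative assumptions):

```java
public class TileScale {
    // Pinhole-camera projected size: on-screen extent is proportional
    // to world size divided by eye distance.
    static double projectedSize(double size, double distance) {
        return size / distance;
    }

    // Scale factor that pulls a tile at `distance` to just inside `maxFar`
    // (with a 10% safety margin); tiles already inside are left untouched.
    static double scaleFactor(double distance, double maxFar) {
        return distance <= maxFar ? 1.0 : (0.9 * maxFar) / distance;
    }

    public static void main(String[] args) {
        double size = 100_000, dist = 25_000_000, maxFar = 30_000; // metres
        double s = scaleFactor(dist, maxFar);
        // Both prints give the same projected size: the scaled-down far tile
        // looks identical on screen but now sits inside the far clip plane.
        System.out.println(projectedSize(size, dist));
        System.out.println(projectedSize(size * s, dist * s));
    }
}
```

One caveat: fog, lighting attenuation and anything else that depends on true eye distance has to be computed from the unscaled distance, or the cheat becomes visible.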