Setting the frustum for a space simulation

I am working on a 3-D view of the Galaxy and have a question about how to set the frustum. Right now I use gluPerspective(), setting the near plane to something close to 0, the far plane to something very far away, and an FOV of 20°. I then zoom in and out of the scene by changing the viewing distance. I need a very deep viewing volume because space is pretty much infinite. I've been told that deep frustums can cause problems, and I know that setting near to 0 does.

What I'm not clear on is how zooming in and out should be handled. I think of it as similar to changing the magnification of a telescope. Is modifying the viewing distance the right way? It seems like this might not give the correct perspective. Another thought I had was changing the frustum (using glFrustum() rather than gluPerspective()) every time I zoomed in or out.

Anyone have any thoughts on how I should be thinking about this?

Playing with the clipping planes is probably the worst way of dealing with this issue, as it affects all the rendering by causing horrible Z-buffer issues. Instead, you want to keep a fixed ratio of front to rear (~3000 if using 16-bit depth buffers, ~10,000 if using 32-bit) and scale the content to fit within that ratio. Zooming should then just be part of your navigation, as the first matrix pushed onto the modelview stack.

To handle the large numerical values needed for a space simulation, you probably want to divide your world into zones and treat the system as, effectively, a multi-pass renderer: place the viewpoint in the right place for that zone, render, clear the depth buffer, and start the next zone (working from deepest to shallowest). This lets you handle the physically huge numerical values and still get visually pleasing results.
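
A minimal sketch of the numeric side of this zone idea (the class and method names here are mine, not from any library): keep object positions in doubles, and re-express them relative to a per-zone origin before narrowing them to the floats the renderer works with, so the values the GPU sees stay small, where float precision is best.

```java
// Hypothetical sketch (names are mine): keep world positions in doubles,
// and re-express them relative to a per-zone origin before narrowing to
// the floats OpenGL works with.
public class ZoneLocal {

    /** World position (double) converted to a position relative to the zone origin (float). */
    public static float[] toZoneLocal(double[] world, double[] zoneOrigin) {
        float[] local = new float[3];
        for (int i = 0; i < 3; i++) {
            // subtract in full double precision first, then narrow
            local[i] = (float) (world[i] - zoneOrigin[i]);
        }
        return local;
    }

    public static void main(String[] args) {
        double[] star   = { 1.0e8 + 0.5, 0, 0 }; // fine detail 100 million units out
        double[] origin = { 1.0e8, 0, 0 };       // this zone's reference point
        System.out.println((float) star[0] == 1.0e8f);    // true: the 0.5 is lost in raw float
        System.out.println(toZoneLocal(star, origin)[0]); // 0.5: the detail survives
    }
}
```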

About the zooming: this is done by narrowing your FOV. The narrower the angle, the more you zoom in. In fact, 20° is already pretty narrow IMHO; I think you can start out from 45° or so. As far as I can tell, your perspective will also be correct this way. Depth perception is pretty bad when you're zoomed in far.
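
For what it's worth, the FOV-to-zoom relationship can be computed directly. A small sketch (the helper name is made up): to exactly frame a region of height h at distance d, you need a vertical FOV of 2·atan((h/2)/d).

```java
// Hypothetical helper (the name is mine): the FOV needed to frame a region.
public class FovZoom {

    /** Vertical FOV in degrees that exactly frames a region of height h at distance d. */
    public static double fovToFrame(double h, double d) {
        return Math.toDegrees(2.0 * Math.atan((h / 2.0) / d));
    }

    public static void main(String[] args) {
        System.out.println(fovToFrame(2.0, 1.0)); // ~90: a 2-unit object 1 unit away fills the view
        System.out.println(fovToFrame(0.1, 50));  // ~0.11: tiny FOV = strong zoom
    }
}
```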

On the frustum: before I start giving "advice", let me say first that I have zero experience with this, but I've lurked a similar discussion on another forum, so I'm just repeating what I heard there ;D

The position of the near plane has a lot more influence than the position of the far plane. Especially if you're zoomed in far, you can probably push your near plane out quite a bit without visual artifacts, increasing the depth-buffer precision a lot.
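
This is easy to check numerically. A quick sketch (my own helper names) using the standard perspective depth mapping d(z) = (f/(f-n)) * (1 - n/z): it shows what fraction of the [0,1] depth range is left for geometry beyond a given distance, and how pushing the near plane out frees up precision.

```java
// Sketch with my own helper names, using the standard perspective depth
// mapping d(z) = (f/(f-n)) * (1 - n/z), which maps eye distance z in [n, f]
// to window depth in [0, 1].
public class DepthPrecision {

    public static double windowDepth(double z, double n, double f) {
        return (f / (f - n)) * (1.0 - n / z);
    }

    /** Fraction of the [0,1] depth range left for everything farther than z. */
    public static double rangeBeyond(double z, double n, double f) {
        return 1.0 - windowDepth(z, n, f);
    }

    public static void main(String[] args) {
        // near = 0.1: only ~0.1% of the depth range covers everything past z = 100
        System.out.println(rangeBeyond(100, 0.1, 10_000));
        // near = 10: ~10% covers the same region, roughly 100x the precision
        System.out.println(rangeBeyond(100, 10, 10_000));
    }
}
```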

Based on another discussion, I propose a slightly easier variant of Mithrandir’s multipass rendering.

  • Your camera is in a certain position in space.
  • First, set your near plane to 10 miles and the far plane to 10,000 miles. Your depth-buffer precision is not good, but that doesn't matter for objects this far away.
  • Render everything (you can perform frustum culling first, of course).
  • Now, clear the depth buffer.
  • Set your near plane to 5 miles and the far plane to 10 miles. Render everything again.
  • near = 2 miles, far = 5 miles. Render. Clear the depth buffer.
  • Wash, rinse, repeat.
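
If you like, the bracket sequence above can be generated automatically by capping the far/near ratio of each pass. A sketch (names are mine), walking from the deepest slice back toward the camera:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch (names are mine): split a deep view volume into near/far brackets
// whose far/near ratio never exceeds maxRatio, ordered deepest-first so each
// pass can render, then clear the depth buffer for the next, shallower pass.
public class ZonePlanner {

    public static List<double[]> planZones(double near, double far, double maxRatio) {
        List<double[]> zones = new ArrayList<>();
        double zoneFar = far;
        while (zoneFar / near > maxRatio) {
            double zoneNear = zoneFar / maxRatio;
            zones.add(new double[] { zoneNear, zoneFar });
            zoneFar = zoneNear; // next bracket starts where this one ends: no gaps
        }
        zones.add(new double[] { near, zoneFar });
        return zones;
    }

    public static void main(String[] args) {
        // e.g. near 2 miles, far 10,000 miles, ratio capped at 1000 per pass
        for (double[] z : planZones(2, 10_000, 1000)) {
            System.out.printf("render pass: near=%.2f far=%.2f%n", z[0], z[1]);
            // per pass you would set this frustum, clear the depth buffer
            // (glClear(GL_DEPTH_BUFFER_BIT)), and draw the geometry in range
        }
    }
}
```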

Mithrandir seems to suggest you need to change your camera position before rendering each zone; I'm not sure I understood him correctly, but this is not necessary. All your transformations remain the same; only the frustum's near and far planes are different.

@Mithrandir: Just out of curiosity, can you give me the rationale for those numbers (3000 / 10,000)?

http://www.starfireresearch.com/services/java3d/supplementalDocumentation.html#zbuffer

That's where I got those same values from.

HTH

Endolf

I don't remember the original source of the research into those ratios; they're one of those "commonly accepted truths" in 3D graphics. Note that the rule is about ratios, not exact values. That's the important bit, and it's why a small movement of the near clip plane has a much bigger influence than the same movement of the far plane.
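
One way to see why only the ratio matters: from the standard window-depth mapping, the worst-case eye-space size of one depth-buffer step at distance z is z²(f-n)/(f·n·2^bits). Scaling both planes by 10 scales that step by exactly 10, so the relative precision depends only on f/n and the buffer's bit depth. A sketch (helper name is mine):

```java
// Sketch (helper name is mine). One depth-buffer step near distance z spans
// z^2 * (f - n) / (f * n * 2^bits) eye-space units, derived from the
// window-depth mapping d(z) = (f/(f-n)) * (1 - n/z).
public class DepthResolution {

    public static double stepAt(double z, double n, double f, int bits) {
        double steps = Math.pow(2, bits);
        return (z * z * (f - n)) / (f * n * steps);
    }

    public static void main(String[] args) {
        // ratio 3000 with a 16-bit buffer, at two absolute scales:
        System.out.println(stepAt(3_000, 1, 3_000, 16));    // step at the far plane
        System.out.println(stepAt(30_000, 10, 30_000, 16)); // 10x the scale, exactly 10x the step
    }
}
```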

Bahuman's suggestion of moving the clip planes can also work. The difference is one of accuracy tolerance: as you move away from the value 1.0, floating-point numbers become progressively less accurate. Moving the clip planes leaves all the data in those inaccurate regions, whereas rendering spatial zones does not. However, one takes more work than the other, so it's a tradeoff of speed versus accuracy.
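
That point about floats degrading away from 1.0 is easy to demonstrate in Java with Math.ulp, which gives the spacing between adjacent representable float values:

```java
// Math.ulp gives the gap between adjacent representable floats at a value.
public class FloatSpacing {
    public static void main(String[] args) {
        System.out.println(Math.ulp(1.0f));   // ~1.19e-7: very fine spacing near 1.0
        System.out.println(Math.ulp(1.0e8f)); // 8.0: sub-unit detail is unrepresentable
        // a position 100,000,000 + 0.5 units out collapses when narrowed to float:
        System.out.println((float) (1.0e8 + 0.5) == 1.0e8f); // true
    }
}
```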

Thank you all for your thoughts. Let me put some numbers on this. Please note that this is my first experience with 3D programming in general and JOGL in particular.

My units are 1 unit = 1000 light years.

The viewing volume is around 100,000 units deep and centered on our Galaxy (yes, 100 million light years). There are other galaxies within ±50,000 units of the Galaxy in all directions. When I zoom fully out I need to render pretty much this entire volume. I might have a toggle for using either glFrustum or glOrtho; the latter would be nice so that galaxies at far distances would still be visible.

When zoomed fully in I need a field of view of around 0.1 unit (yes, 100 light years) at the location of the Galaxy, so I can show things in the immediate vicinity of the Sun.

Our Galaxy is the only one that shows any structure and is the only one you can zoom in on. Other galaxies and objects within our Galaxy will be drawn simply as icons.

So if I set the near plane to 30 and the far plane to 100,000, that gives a ratio of about 3,300, which seems in line with what others have suggested. If I move the camera back 50,000 units from the Galaxy (which sits at 0,0,0), that would seem to encompass everything. However, to zoom way in I will have to set the width and height of the front frustum plane to something pretty small.
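
Plugging those numbers into the similar-triangles relationship glFrustum uses (near-plane window width = target width × near / distance; the helper name is mine):

```java
// Sketch (helper name is mine): similar triangles give the near-plane window
// width that makes a region of width w, at distance d, exactly fill the view.
public class FrustumWindow {

    public static double nearWindowWidth(double w, double d, double near) {
        return w * near / d;
    }

    public static void main(String[] args) {
        // camera 50,000 units out, near plane at 30:
        System.out.println(nearWindowWidth(100_000, 50_000, 30)); // 60.0  (fully zoomed out)
        System.out.println(nearWindowWidth(0.1, 50_000, 30));     // ~6e-5 (fully zoomed in)
    }
}
```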

So is this feasible, or do I need to get more complicated for this to work?

I was looking back over the previous responses and I’m confused by something.

Mithrandir said:
Playing with the clipping planes is probably the worst way of dealing with this issue, as it affects all the rendering by causing horrible Z-buffer issues. Instead, you want to keep a fixed ratio of front to rear (~3000 if using 16-bit depth buffers, ~10,000 if using 32-bit) and scale the content to fit within that ratio. Zooming should then just be part of your navigation, as the first matrix pushed onto the modelview stack.

This sounds like what I was originally doing by changing the viewing distance using a translation along the z axis.

But bahuman said:
About the zooming: this is done by narrowing your FOV. The narrower the angle, the more you zoom in. In fact, 20° is already pretty narrow IMHO; I think you can start out from 45° or so. As far as I can tell, your perspective will also be correct this way. Depth perception is pretty bad when you're zoomed in far.

This sounds like zooming using the projection matrix rather than the model/view matrix.

Am I correct in my understanding that you two are saying different things? Any idea who's right? Both? Somehow it makes more sense to me to think about changing the viewing distance; I assume that's the "navigation" approach. However, I also see the point of changing the FOV, although there is a pretty large range I have to accommodate. Help??

I don't see how it could possibly work using the projection matrix… setting the near and far distances doesn't change the apparent distance or size of what's rendered; it only determines what gets clipped as "outside" the viewing volume. The approach I would try (although I'm not certain it will look right) would be to both translate along the camera axis AND change the perspective FOV, but leave the near and far clipping planes as you originally had them (0.1 and 1000, or something like that).

Hmm, I think bahuman is right. Narrowing the FOV stretches, or zooms, the objects, since the "available space on screen" decreases; I don't know how to explain this better in English. Translating the camera has nothing to do with zooming. Someone I know once summarized it as: "the ModelView matrix defines the view of your model, and the Projection matrix how drunk you are" :wink:
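
The practical difference between the two kinds of zoom can be shown numerically: an object's projected size goes roughly as s / (d·tan(fov/2)), so narrowing the FOV scales every object by the same factor (magnification without any perspective change), while moving the camera scales near objects more than far ones (a real perspective change). A sketch with made-up helper names:

```java
// Sketch (names are mine): projected size of an object of size s at distance d
// is proportional to s / (d * tan(fov/2)) under a perspective projection.
public class ZoomCompare {

    public static double projectedSize(double s, double d, double fovDeg) {
        return s / (d * Math.tan(Math.toRadians(fovDeg) / 2.0));
    }

    public static void main(String[] args) {
        // FOV zoom from 40 to 20 degrees: near and far objects grow by the SAME factor
        System.out.println(projectedSize(1, 100, 20) / projectedSize(1, 100, 40));   // ~2.06
        System.out.println(projectedSize(1, 1000, 20) / projectedSize(1, 1000, 40)); // ~2.06
        // Moving the camera 50 units closer: the near object doubles, the far one barely grows
        System.out.println(projectedSize(1, 50, 40) / projectedSize(1, 100, 40));    // 2.0
        System.out.println(projectedSize(1, 950, 40) / projectedSize(1, 1000, 40));  // ~1.05
    }
}
```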