I have a scene graph with a root “world” TreeObject and possibly several child TreeObjects. Each TreeObject comprises a BranchGroup and a TransformGroup, and each TreeObject can have its own translation, rotation, and scale. Note: a TreeObject is my own construct.
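For reference, a stripped-down sketch of my TreeObject construct (the real class does more; the method names here are just illustrative):

    import javax.media.j3d.BranchGroup;
    import javax.media.j3d.Transform3D;
    import javax.media.j3d.TransformGroup;
    import javax.vecmath.Vector3d;

    // Simplified TreeObject: a BranchGroup holding a TransformGroup.
    public class TreeObject {
        private final BranchGroup branchGroup = new BranchGroup();
        private final TransformGroup transformGroup = new TransformGroup();

        public TreeObject() {
            transformGroup.setCapability(TransformGroup.ALLOW_TRANSFORM_READ);
            transformGroup.setCapability(TransformGroup.ALLOW_TRANSFORM_WRITE);
            branchGroup.addChild(transformGroup);
        }

        public BranchGroup getBranchGroup()       { return branchGroup; }
        public TransformGroup getTransformGroup() { return transformGroup; }

        // Each TreeObject carries its own translation/rotation/scale.
        public void setPose(Vector3d translation, double uniformScale) {
            Transform3D t3d = new Transform3D();
            t3d.setTranslation(translation);
            t3d.setScale(uniformScale);
            transformGroup.setTransform(t3d);
        }
    }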
I have to support transparent switching between parallel and perspective views. So, to get the effect of zooming, I scale instead of translating in z; this behaves the same under both projections and keeps the code simpler. (Comments?)
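For context, zooming currently looks roughly like this (assuming a uniform scale on the world TreeObject's TransformGroup):

    // Zoom by scaling the world transform instead of moving the eye in z,
    // so the behavior is identical under View.PARALLEL_PROJECTION and
    // View.PERSPECTIVE_PROJECTION.
    void zoom(TransformGroup worldTG, double zoomFactor) {
        Transform3D t3d = new Transform3D();
        worldTG.getTransform(t3d);                  // needs ALLOW_TRANSFORM_READ
        t3d.setScale(t3d.getScale() * zoomFactor);  // uniform scale only
        worldTG.setTransform(t3d);                  // needs ALLOW_TRANSFORM_WRITE
    }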
I need to add depth cueing, and as far as I know fog is a common way of doing it; other suggestions are welcome. I have tried a PointLight and it seems to have the same issues as fog, and a SpotLight likewise.
I prefer LinearFog; ExponentialFog is less desirable but acceptable. The contrast for each TreeObject should be optimal: the front of each TreeObject must be as bright (least fogged), and the back as dark (most fogged), as possible. In my opinion this forces us to have multiple fog nodes, one per TreeObject, because the TreeObjects have different sizes and shapes; see the sketch below.
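What I mean by one fog node per TreeObject, roughly (the fog color and bounds here are placeholders):

    import javax.media.j3d.BoundingSphere;
    import javax.media.j3d.LinearFog;
    import javax.vecmath.Color3f;
    import javax.vecmath.Point3d;

    // One LinearFog per TreeObject, scoped so it fogs only that object's subgraph.
    LinearFog makeFogFor(TreeObject tree, double front, double back) {
        LinearFog fog = new LinearFog(new Color3f(0f, 0f, 0f), front, back);
        fog.setCapability(LinearFog.ALLOW_DISTANCE_WRITE); // so I can retune it later
        fog.setInfluencingBounds(
                new BoundingSphere(new Point3d(), Double.MAX_VALUE)); // placeholder
        fog.addScope(tree.getBranchGroup()); // restrict to this TreeObject
        return fog;
    }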
The other requirement is that each point on a TreeObject should have the same fog-based visibility whether the TreeObject is scaled or not.
As far as I know, the fog equations work in physical eye coordinates, not in the local coordinates of the TreeObject. If the fog equations worked in the TreeObject's local coordinates, fog would be applied exactly the way I want.
In my application, zooming is implemented by scaling. This has the side effect that a point behind the center moves even further away in eye coordinates, so the fog on it gets thicker. Visually the point seems to be getting closer, because everything gets bigger when you scale up, yet it gets murkier because of the fog. The effect is visually weird.
Because fog is calculated in physical eye coordinates, I now need compensating logic that undoes the conversion into the physical coordinate world, such that the end effect is as if the fog equations used TreeObject-local coordinates.
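Concretely, here is the kind of per-frame compensation I have in mind. I am glossing over which coordinate system Java 3D actually expects the fog distances in (that is part of my confusion), and the vworldToEye argument is exactly the transform I don't know how to obtain (see my question below):

    import javax.media.j3d.LinearFog;
    import javax.media.j3d.Transform3D;
    import javax.vecmath.Point3d;

    // Retune the fog distances each frame so the eye-space fog equation
    // reproduces what a TreeObject-local fog equation would have produced.
    // fogLocalFront/fogLocalBack are the desired distances in TreeObject-local
    // coordinates, measured from the object's center.
    void compensateFog(LinearFog fog, Point3d centerLocal,
                       double fogLocalFront, double fogLocalBack,
                       Transform3D localToVworld, Transform3D vworldToEye) {
        // Object center in eye coordinates.
        Point3d center = new Point3d(centerLocal);
        localToVworld.transform(center);
        vworldToEye.transform(center);
        double eyeDistance = Math.sqrt(center.x * center.x
                + center.y * center.y + center.z * center.z);

        // Accumulated uniform scale from local to eye coordinates.
        double scale = localToVworld.getScale() * vworldToEye.getScale();

        // Along the view direction, d_eye ~= eyeDistance + scale * d_local,
        // so push the local fog distances through the same relation.
        fog.setFrontDistance(eyeDistance + scale * fogLocalFront);
        fog.setBackDistance(eyeDistance + scale * fogLocalBack);
    }

I can get localToVworld from Node.getLocalToVworld(Transform3D) (with ALLOW_LOCAL_TO_VWORLD_READ set); it is vworldToEye that I am missing.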
Do you know how to go about doing this compensation? Specifically, what transform(s) (scale and/or translation) convert from the virtual-world coordinate system to the physical eye coordinate system? I know how to get from my local coordinate system to the virtual world, but how do I cross over into the physical coordinate system?
Do you know of any helper classes that return these transformations? I have heard of a ViewInfo object, but I am not sure how to use it or where to get it from. Also, while we are at it, do you know what the co-existence coordinate system is? It seems to be an intermediate coordinate system between the physical world and the virtual world.
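For what it's worth, this is my current guess at using ViewInfo (from com.sun.j3d.utils.universe); I am not sure whether it needs explicit update calls, or whether the third argument really is the inverse, so corrections are welcome:

    import javax.media.j3d.Canvas3D;
    import javax.media.j3d.Transform3D;
    import javax.media.j3d.View;
    import com.sun.j3d.utils.universe.ViewInfo;

    // Guess: ViewInfo can hand back eye <-> vworld for a given canvas.
    Transform3D getVworldToEye(View view, Canvas3D canvas) {
        ViewInfo viewInfo = new ViewInfo(view);
        Transform3D eyeToVworld = new Transform3D();
        Transform3D vworldToEye = new Transform3D();
        // Second argument gets eye-to-vworld; third (I believe) its inverse.
        viewInfo.getEyeToVworld(canvas, eyeToVworld, vworldToEye);
        return vworldToEye;
    }

Canvas3D.getVworldToImagePlate(Transform3D) together with Canvas3D.getCenterEyeInImagePlate(Point3d) looks like another possible route, but I am not sure how to combine them correctly.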
Links to good sites/books that cover these things in detail would also be useful. I suppose these concepts are similar (or identical) in OpenGL (DirectX?), so any book that describes those would also do?