3D application design question regarding cubic and quartic surfaces.

I'm working on an application used to prototype interactive and non-interactive 3D animations. Currently the app is focused on OpenGL; however, I have designed it from the beginning to output Maya ASCII files. While designing the scenegraph and geometry, I ran into a little design issue and would love some commentary:

Given that my app will ultimately render to OpenGL in real time, and also export scene data to ray tracers or other non-real-time renderers, how would you store your internal representation of shapes? For example, I have been wavering between something like this:

An abstract shape (such as ellipse, plane, cube, etc.) holds the basic information used to derive renderable geometry; for example, a plane might have width and height. Each shape can then contain a reference to a hi-res polyMesh, a lowResPolyMesh (for wireframes), and possibly cubic or parametric representation data such as uspan, vspan, tessellation preference, etc.
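Roughly what I'm imagining for this first option, as a C++ sketch (all class and member names here are placeholders, not from any real API):

```cpp
#include <memory>
#include <vector>

// Raw triangle data that the OpenGL path consumes.
struct PolyMesh {
    std::vector<float>    vertices;
    std::vector<unsigned> indices;
};

// Exact surface description kept for the non-real-time exporters.
struct ParametricData {
    int uSpan = 1, vSpan = 1;
    int tessellationPreference = 16;  // hint used when (re)building meshes
};

// One generic Shape owning several interchangeable representations.
class Shape {
public:
    virtual ~Shape() = default;
    virtual void rebuildMeshes() = 0;            // derive meshes from parameters

    std::shared_ptr<PolyMesh>       hiResMesh;   // final real-time rendering
    std::shared_ptr<PolyMesh>       lowResMesh;  // wireframe / interactive use
    std::shared_ptr<ParametricData> parametric;  // for RenderMan/Maya export
};

// A concrete shape only stores its defining parameters.
class Plane : public Shape {
public:
    float width = 1.0f, height = 1.0f;
    void rebuildMeshes() override {
        // tessellate a width x height quad into hiResMesh / lowResMesh here
    }
};
```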

or

A traditional class hierarchy where each shape and type is explicit. Anyone familiar with Maya will recognize this style: instead of general forms attached to different representations (as above), you have very specific ones. For example, a polySphere and a nurbsSphere would be completely independent objects subclassed from something like polyPrimitive and nurbsPrimitive, etc.
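And a sketch of this second, Maya-style option (the names deliberately echo Maya's node names, but again everything here is just a placeholder):

```cpp
// Every shape/representation pair is its own independent class.
class Primitive {
public:
    virtual ~Primitive() = default;   // common scenegraph-node behavior
};

class PolyPrimitive : public Primitive {
    // polygonal data (vertex/face lists) lives at this level
};

class NurbsPrimitive : public Primitive {
    // NURBS data (control points, knots, degree) lives at this level
};

class PolySphere : public PolyPrimitive {
public:
    float radius = 1.0f;
    int   subdivisionsU = 16, subdivisionsV = 16;
};

class NurbsSphere : public NurbsPrimitive {
public:
    float radius = 1.0f;
    int   uSpans = 4, vSpans = 8;
};
```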

In both cases, objects would of course be organized and managed in a scenegraph.

I hope this makes some sense. In summary: when both real-time and non-real-time rendering capabilities are required, what is a well-designed approach to organizing geometry?

Here's some info that might make the question more concrete:

If I have an abstract shape, say, Rectangle, which of these would make more sense in a scenegraph? (A rough sketch of option 3 follows the list.)

  1. root > transform > shape, polyMesh (shape and polyMesh independent children of the transform node)
  2. root > transform > shape > polyMesh (polyMesh child of shape)
  3. root > transform > shape.polyMesh (polyMesh is not referenced in the scenegraph, it is internally represented in shape object)
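To make the difference concrete, here's how I picture option 3, where the polyMesh never appears as a scenegraph node and is purely an internal cache of the shape (all names hypothetical):

```cpp
#include <memory>
#include <vector>

struct PolyMesh {
    std::vector<float>    vertices;
    std::vector<unsigned> indices;
};

// The scenegraph only ever contains Transform and shape nodes.
class Node {
public:
    virtual ~Node() = default;
    std::vector<std::shared_ptr<Node>> children;
};

class Transform : public Node {
public:
    float matrix[16] = {};   // local transform
};

class Rectangle : public Node {
public:
    float width = 1.0f, height = 1.0f;

    // The mesh is an internal cache, never a scenegraph child; exporters
    // simply ignore it and write out width/height instead.
    const PolyMesh& polyMesh() {
        if (!cachedMesh) {
            cachedMesh = std::make_unique<PolyMesh>();
            // tessellate the width x height quad into *cachedMesh here
        }
        return *cachedMesh;
    }

private:
    std::unique_ptr<PolyMesh> cachedMesh;
};
```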

I ran into this issue while designing an app that needs real-time interactivity, but must also output the results of that interactivity to RenderMan or MentalRay. This means I can't assume that each shape will be represented as a polyMesh. I'm trying to separate the shape from its representation, yet make the two work together in a real-time system.
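One way I've been picturing that separation, again with hypothetical names: the shape hands exporters its exact description, and only produces a polyMesh when the real-time path asks for one.

```cpp
// Keeping the two render paths apart at the interface level.
struct PolyMesh;   // triangle data for OpenGL, as in the sketches above
class  Exporter;   // hypothetical writer for RenderMan RIB / Maya ASCII

class Shape {
public:
    virtual ~Shape() = default;

    // Non-real-time path: emit the exact parametric description
    // ("sphere, radius r", a NURBS patch, ...), never a tessellation.
    virtual void describeTo(Exporter& out) const = 0;

    // Real-time path: build (and cache) a polyMesh only when asked.
    virtual const PolyMesh& realtimeMesh() = 0;
};
```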

Hope that clarifies things a bit.