Pros and cons of a Polygon Class?

After making my way through the first several chapters of the red book, and reading posts on this board related to depth sorting and state management, it seems each individual polygon needs to be an object so that its fields can be used to determine which texture it currently uses, its z depth, etc. This seemed especially important after reading a post saying that all objects in the scene need to be broken down into their individual polygons so those polygons can be sorted on various features.

I think the idea of a Polygon class sounds good from an OO standpoint, but what about memory consumption? At the very least there would be the vertex data, and once many thousands of polys were on the screen it doesn’t seem feasible.

So, could someone give me the pros and cons of this idea? If the concept is viable, and used in the real world, what type of info would be stored at the instance level? At the class level (that is, if static members would make any sense here)? Are there any ‘tricks’ that a relative noob could grasp that might get around the memory consumption?

I have read a bit about old Quake 1 style rendering being done in Java, and the book proposed this type of Polygon class, so it’s clear it CAN be done, but SHOULD it be?

Thanks,

K M

I don’t think it would be a good idea, for memory reasons and, more importantly, because you will want to send data to the video card via nio buffers.

If you want to wrap it, I would suggest creating a proxy to the data: a VertexManager type of class with methods that manage looking up and storing data in a nio buffer.

For example (quick and dirty):


import java.nio.FloatBuffer;

class VertexManager {
  // number of floats stored per vertex (x, y, z)
  public static final int BLOCK_SIZE = 3;

  FloatBuffer buffer;

  public VertexManager(FloatBuffer b) {
    buffer = b;
  }

  // write all three components of a vertex in one call
  public void set(int index, float x, float y, float z) {
    int p = index * BLOCK_SIZE;
    buffer.put(p, x);
    buffer.put(p + 1, y);
    buffer.put(p + 2, z);
  }

  public float getX(int index) {
    return buffer.get(index * BLOCK_SIZE);
  }

  public float getY(int index) {
    return buffer.get(index * BLOCK_SIZE + 1);
  }

  public float getZ(int index) {
    return buffer.get(index * BLOCK_SIZE + 2);
  }

  public void setX(int index, float x) {
    buffer.put(index * BLOCK_SIZE, x);
  }

  // ... etc ...

}

By doing it this way all vertex data can be stored in one giant FloatBuffer and sent to the video card as such. You save creating millions of instances of a Vertex class, and you still get some (most) of the management such a class would give you. You could expand this class with pointers to other buffers (or use the same buffer with interleaving) for texture coordinates, colors, normals, etc. You could also easily modify it all with a relative offset so that multiple VertexManagers could manage a single nio buffer.
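The relative offset part might look something like this (again quick and dirty; the baseOffset business is just one way to do it):

import java.nio.FloatBuffer;

class OffsetVertexManager {
  public static final int BLOCK_SIZE = 3;

  FloatBuffer buffer;
  int base; // start of this manager's region in the shared buffer, in floats

  public OffsetVertexManager(FloatBuffer b, int baseOffset) {
    buffer = b;
    base = baseOffset;
  }

  public float getX(int index) {
    return buffer.get(base + index * BLOCK_SIZE);
  }

  // ... same idea for all the other getters/setters ...
}

// two managers carving up one big buffer, e.g. the first 500 vertices
// and the next 500:
// OffsetVertexManager first = new OffsetVertexManager(buf, 0);
// OffsetVertexManager second = new OffsetVertexManager(buf, 500 * OffsetVertexManager.BLOCK_SIZE);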

In this example, accessing the 1000th vertex would just be:


 VertexManager vm = new VertexManager( vertexbuffer );
 float x = vm.getX(1000);

 vm.setX(1000, x / 2f);

BLOCK_SIZE would be 3 for this example, but if your data was interleaved (or whatever else), it would be however many floats the VertexManager is managing for a single vertex.
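So for, say, interleaved positions, normals and texture coordinates, the constants and accessors inside the VertexManager might become something like this (just one possible layout):

// hypothetical interleaved layout: x,y,z | nx,ny,nz | u,v = 8 floats per vertex
public static final int BLOCK_SIZE = 8;
public static final int POS_OFFSET = 0;
public static final int NORMAL_OFFSET = 3;
public static final int TEXCOORD_OFFSET = 6;

public float getU(int index) {
  return buffer.get(index * BLOCK_SIZE + TEXCOORD_OFFSET);
}

public float getNormalX(int index) {
  return buffer.get(index * BLOCK_SIZE + NORMAL_OFFSET);
}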

Now…with all that, you can create an EntityManager that accesses VertexManagers. Then you can abstract out entire entities with just a couple of objects…so you wouldn’t create a Polygon class, but you would create an Entity type of object that manages all the vertices for you. Doing it this way, all the actual data can be stored in a single nio buffer, and you only create a few actual objects (one for each entity and one or more for each set of vertices it’s managing…arms, legs, etc.). This is also where you can get your sorting, by creating new VertexManagers within entities for the various data types you want to sort on (texture, color, etc.).
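A bare-bones sketch of the entity idea (the names here are made up, it just shows the shape of it):

class Entity {
  VertexManager vertices; // proxy into the shared nio buffer
  int firstVertex;        // where this entity's vertices start
  int vertexCount;        // how many it owns

  public Entity(VertexManager vm, int first, int count) {
    vertices = vm;
    firstVertex = first;
    vertexCount = count;
  }

  // example operation: move the whole entity without ever creating
  // a vertex or polygon object
  public void translate(float dx, float dy, float dz) {
    for (int i = firstVertex; i < firstVertex + vertexCount; i++) {
      vertices.set(i,
          vertices.getX(i) + dx,
          vertices.getY(i) + dy,
          vertices.getZ(i) + dz);
    }
  }
}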

This works pretty well for me and is how I do things in my engine.

My two cents.

Thanks for the response Vorak… I’ll look at this over the next few days and see if I can get something off the ground. If not, I’ll ask another question.

The first question I have regards sorting: with the proposed Polygon class it’s easy for me to see that the objects would be sorted based on some field (say z depth) and then rendered. Without it, how will I carry this out? All I can picture now is a bunch of numbers in the buffer without any direct tie to which polygon they came from. My next thought would be that they need to be sorted before being placed in the buffer, but, again, without the Polygon class I don’t have an idea there either.

If you know a book that would answer all these questions, please let me know.

You do not want to have a polygon class, and you do not want to sort polygons. When using OpenGL you let the hardware do the depth sorting with the depth buffer. It sounds like you’ve been reading about software renderers; that is outdated information, forget about it. Back in the day they sometimes sorted polygons instead of using the depth buffer in order to improve speed. Q1 used a BSP tree to do perfect back-to-front sorting.

You do want to group geometry that shares the same state. Make a Mesh class that contains a list of polygons stored in nio buffers. Try to group as much geometry as possible and find the fastest way to render.
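For example (very rough; the exact fields depend on your renderer):

import java.nio.FloatBuffer;

class Mesh {
  int textureID;        // state shared by every polygon in this mesh
  boolean transparent;
  FloatBuffer vertices; // the geometry, ready to hand to OpenGL
  int vertexCount;

  // the renderer reads these to decide drawing order
  public int getTextureID() { return textureID; }
  public boolean isTransparent() { return transparent; }
}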

What the above does is give you the granularity to do “enough” sorting. You group vertices by characteristics that may affect state: rendering type (triangle, strip, etc.), texture, transparency, color, etc.

They don’t need to be sorted before being placed in the buffer, but you will need to maintain separate index list(s) into them. You sort those index lists based on the vertex groups that will be rendered. You add methods on the VertexManager to give you what you need for sorting: getZDepth (the average or greatest z value among its vertices; PS you only need to worry about this for transparent polys, not every poly), getTextureID (the texture that is applied to them), isTransparent(), etc. With that data you create your sorted index list.
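For example, sorting the transparent groups back to front might look roughly like this (getZDepth() and render() are the hypothetical methods just described):

// sort an index list over vertex groups, farthest first, then render
void renderTransparent(final VertexManager[] groups) {
  Integer[] order = new Integer[groups.length];
  for (int i = 0; i < order.length; i++) order[i] = i;

  java.util.Arrays.sort(order, new java.util.Comparator<Integer>() {
    public int compare(Integer a, Integer b) {
      // biggest z depth first, so the groups draw back to front
      return Float.compare(groups[b].getZDepth(), groups[a].getZDepth());
    }
  });

  for (int i = 0; i < order.length; i++) {
    render(groups[order[i]]); // render() = whatever submits a group to GL
  }
}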

PS: Tom said it more simply (while I was writing this response): you don’t need to sort every poly.

Based on the two replies, I believe the whole way I was thinking about this is wrong. Let’s use this scenario: an empty scene except for 100 cubes. Each of the cubes is identical, having red, green, blue, yellow, orange, and purple sides.

I have been thinking that the current color would be set to red, render all red, set current color to blue, render all blue…

But the Mesh class sounds as if each Entity attempts to optimize rendering within itself, without regard to the rest of the scene. Is this the case?

Thanks,

Keith

Now you are trying to make a scene graph, which is a very complex thing. A box with different colored sides is not a single mesh, since the sides have different states. Each side of each box is a mesh. Now you can either draw one box after the other, since the sides of the same box share the transform state, or you can render all the same-colored sides one after another, because they share the color state. The mesh doesn’t render itself. Instead, create a data structure containing all the geometry. This data structure, also called a scene graph, is then rendered by the renderer. It is the renderer’s job to order the meshes and draw the geometry.
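Very roughly, the renderer side could look like this (Mesh, bindTexture() and drawMesh() are placeholders for whatever your GL binding provides):

// sort the visible meshes by texture so each state change happens once,
// then draw them in order
void renderScene(java.util.List<Mesh> meshes) {
  java.util.Collections.sort(meshes, new java.util.Comparator<Mesh>() {
    public int compare(Mesh a, Mesh b) {
      return a.getTextureID() - b.getTextureID();
    }
  });

  int boundTexture = -1;
  for (int i = 0; i < meshes.size(); i++) {
    Mesh m = meshes.get(i);
    if (m.getTextureID() != boundTexture) {
      boundTexture = m.getTextureID();
      bindTexture(boundTexture); // e.g. glBindTexture through your binding
    }
    drawMesh(m); // hands the mesh's nio buffer to the card
  }
}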

Your box example may not be how a typical scene looks. When creating your renderer you have to handle your typical scene in the best way possible. Static/dynamic, few/many triangles, etc. It will all influence the best way to implement your renderer.