Separating R, LP, A and I

Rendering.
Logic Processing (heart of the game).
Audio.
Input.

What I do is:

Audio is usually triggered by switches (boolean values indicating that something has occurred, like moving into a new area), so that is pretty easy to do.
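
For example (just a sketch; the names are mine, and the SoundPlayer interface is a stand-in for whatever sound API is actually used):

interface SoundPlayer { void play(String cueName); }   // stand-in for the real sound API

public class AudioSwitches {
  // switches flipped by the game logic during the update
  public boolean enteredNewArea;
  public boolean tookDamage;

  // called once per frame, after the logic update
  public void process(SoundPlayer player) {
    if (enteredNewArea) {
      player.play("area_theme");
      enteredNewArea = false;   // reset so the cue only fires once
    }
    if (tookDamage) {
      player.play("hit");
      tookDamage = false;
    }
  }
}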

For rendering, I just have each class contain its own rendering definitions; after calling the game processing function, I pass the game object into the Render class, which calls the game class's render method.
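
Roughly like this (a sketch with my own names, using Java2D's Graphics2D as the drawing surface):

import java.awt.Graphics2D;

class Game {
  void update() { /* logic processing */ }
  void render(Graphics2D g) { /* each game class draws its own stuff here */ }
}

class Renderer {
  void draw(Game game, Graphics2D g) {
    game.render(g);   // the Render class just calls the game class's render method
  }
}

// main loop: game.update(); renderer.draw(game, g);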

For input, I just use a couple of boolean switches, which then end up calling the appropriate methods.
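
Something along these lines maybe (a sketch using AWT key events; the names are mine):

import java.awt.event.KeyAdapter;
import java.awt.event.KeyEvent;

class Input extends KeyAdapter {
  // switches flipped by the listener, polled by the game loop
  volatile boolean left, right, fire;

  public void keyPressed(KeyEvent e)  { set(e.getKeyCode(), true); }
  public void keyReleased(KeyEvent e) { set(e.getKeyCode(), false); }

  private void set(int code, boolean down) {
    switch (code) {
      case KeyEvent.VK_LEFT:  left  = down; break;
      case KeyEvent.VK_RIGHT: right = down; break;
      case KeyEvent.VK_SPACE: fire  = down; break;
    }
  }
}

// in the logic update (player and its methods are made up):
// if (input.left) player.moveLeft();
// if (input.fire) player.shoot();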

An illustration of what I’m talking about.
http://members.optusnet.com.au/ksaho/show/class.gif

Is there a better way of doing these things?

I really hate how a lot of people squash as much as possible into a single class.

Shouldn’t audio and rendering be more coupled? If they are too separated, wouldn’t the audio get out of sync with the video? Or am I worrying too much over nothing?

On the rendering side, one approach is to have a Render Queue. Your loop is basically doLogic, then render(renderQueue). As you go through the game logic, things can decide if they are visible and place themselves on the queue.

When you render, you can sort the objects by render state and transparency and call them back to render themselves.

Have a look at the architecture of Ogre for a good example of this.

What am I going to stick into a render queue?

Something like this maybe?


public class GameObject {
  Mesh mesh;
  ...
  void render() {
    if (inFrustum(MyRenderer.getFrustum()))
          MyRenderer.addRenderQueue(this);
  }

  boolean inFrustum(Frustum f) {...}
  ...
}

public class MyRenderer { 
  private static Queue renderQueue;
  ...
  private MyRenderer() {}
  public static MyRenderer getMyRenderer() {...}
  public static void addRenderQueue(GameObject obj) {...}
  public static Frustum getFrustum() {...}
  public  static void drawAll() {
    //for each item in the queue
    drawMesh(renderQueue.getNext().getMesh());
  }
  private static void drawMesh(Mesh mesh) {...}
  ...
}
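
And I guess drawAll() could sort the queue by render state and transparency first, like you said. Something like this maybe, assuming GameObject gets isTransparent() and getStateKey() accessors (both made up here):

import java.util.Comparator;
import java.util.List;

public class RenderQueueSorter {
  // opaque objects first, grouped by render state (e.g. texture/shader id),
  // transparent ones last; isTransparent() and getStateKey() are assumed
  // additions to GameObject, not shown above
  static void sort(List<GameObject> queue) {
    queue.sort(Comparator
        .comparing(GameObject::isTransparent)
        .thenComparingInt(GameObject::getStateKey));
  }
}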

I think I’ve ranted on about separating out the data model and rendering before. Another way to organise things is to keep a model of the actual game world in just pure data. Then run through your Renderable (interface) objects asking them to render themselves. They update themselves based on the data object they represent and decide whether or not to render themselves.
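
A rough sketch of what that might look like (my own names, and Java2D's Graphics2D standing in for whatever actually does the drawing):

import java.awt.Graphics2D;

class ShipData {                 // pure game world data, no rendering code at all
  double x, y, heading;
  boolean alive;
}

interface Renderable {
  void render(Graphics2D g);     // each renderable decides itself whether to draw
}

class ShipView implements Renderable {
  private final ShipData data;   // the data object this renderable represents

  ShipView(ShipData data) { this.data = data; }

  public void render(Graphics2D g) {
    if (!data.alive) return;                          // decide not to render
    g.fillRect((int) data.x, (int) data.y, 10, 10);   // placeholder drawing
  }
}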

It’s worked quite nicely for me in the past, the benefit being the ability to change rendering details without worrying about game logic.

Incidentally, I’ve always considered audio part of the rendering layer, since you’re still rendering stuff, it’s just not visual. Input is always tricky; there’s this whole thing about using a controller interface, but it’s never really worked out well for me. I tend to stick it in the main loop, although I normally abstract away how the controls are actually being delivered, i.e. keyboard/joypad/etc…
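
By abstracting the delivery I mean something roughly like this (just a sketch, the names are made up): the main loop still polls the input itself, it just doesn’t care which device the presses come from.

interface Controls {
  boolean left();
  boolean right();
  boolean fire();
}

class KeyboardControls implements Controls {
  // flags set by a KeyListener somewhere else
  volatile boolean leftDown, rightDown, fireDown;
  public boolean left()  { return leftDown; }
  public boolean right() { return rightDown; }
  public boolean fire()  { return fireDown; }
}

// a JoypadControls could implement the same interface, and the main loop
// just polls whichever one is plugged in: if (controls.left()) ...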

Kev

I have used something like the GAGE sprite interface. I have various sprite classes for all my game objects. My parent sprite class is a composite of a java.awt.geom.Area instance (object geometry, for stuff like collision detection) and a renderer instance, which the sprite delegates to for rendering. Game object behaviour is determined by the methods in the sprite sub-classes.

I use renderer factories to construct the renderer delegates. At the moment I have two factories, the normal game one and a geometric one. The geometric one renders sprites according to their java.awt.geom.Shape and I use it to debug collision detection.
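
The composite looks roughly like this (a sketch with made-up names; the Area intersection is just one way to do the overlap test):

import java.awt.Graphics2D;
import java.awt.geom.Area;

interface SpriteRenderer {
  void render(Graphics2D g, Sprite sprite);
}

class Sprite {
  Area geometry;             // object geometry, used for collision detection
  SpriteRenderer renderer;   // delegate built by a renderer factory

  Sprite(Area geometry, SpriteRenderer renderer) {
    this.geometry = geometry;
    this.renderer = renderer;
  }

  boolean collidesWith(Sprite other) {
    Area overlap = new Area(geometry);   // copy so the original isn't modified
    overlap.intersect(other.geometry);
    return !overlap.isEmpty();
  }

  void draw(Graphics2D g) {
    renderer.render(g, this);   // swap in the geometric renderer to draw the Area itself
  }
}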

I have no sound currently, and I handle input in the game frame class using boolean switches. Oh, I also went as far as to abstract the rendering loop, so I can easily swap in different frame scheduling algorithms. ;D
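
In case it’s useful, the loop abstraction looks roughly like this (a sketch, names made up): the loop hands the frame start time to a scheduler, and each scheduling algorithm decides how to wait for the next frame.

interface FrameScheduler {
  void frameDone(long frameStartNanos);   // sleep / yield / busy-wait as it sees fit
}

class FixedRateScheduler implements FrameScheduler {
  private final long frameNanos;

  FixedRateScheduler(int fps) { frameNanos = 1000000000L / fps; }

  public void frameDone(long frameStartNanos) {
    long remaining = frameNanos - (System.nanoTime() - frameStartNanos);
    if (remaining > 0) {
      try { Thread.sleep(remaining / 1000000L); }
      catch (InterruptedException e) { Thread.currentThread().interrupt(); }
    }
  }
}

// loop: long start = System.nanoTime(); update(); render(); scheduler.frameDone(start);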