How should rendering be done?

Hey guys, it’s me again.

This time I’m thinking about how rendering should be done.

In the past, I always used


class Moveable implements Renderable, Updateable
{
  List<Face> faces;

  public void render()
  {
    for (Face f : faces)
    {
      f.render();
    }
  }

  public void update(double delta)
  {
    //DO A BARREL ROLL
  }
}

But I often read that objects rendering themselves is a bad idea, even though I have no idea how to do it any other way.

You could have a getter in the class that returns a texture object, and then have a RenderEngine that renders the returned texture object. That way you know every texture is being rendered exactly the same way, so they are all either correct or incorrect together.
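A minimal sketch of that getter-plus-RenderEngine idea; all the names here (Texture, RenderEngine, Renderable) are illustrative, not a fixed API:

```java
// Hypothetical sketch: the entity only exposes its texture,
// and one RenderEngine does all the actual drawing.
interface Renderable {
    Texture getTexture(); // the getter the post describes
}

class Texture {
    final String name;
    Texture(String name) { this.name = name; }
}

class RenderEngine {
    int drawCalls = 0;

    // Every texture goes through this one method, so all of them
    // are rendered the same way -- all correct or all incorrect.
    void render(Renderable r) {
        Texture t = r.getTexture();
        // ...bind t and issue the actual draw here...
        drawCalls++;
    }
}
```

The entity never touches the graphics API itself; it just hands over its texture.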

It depends on what system you are using. If you are using a BufferedImage, you should only have one draw call for the entire game. If you are using OpenGL, it depends on the technique: with element-style rendering and VAOs you should limit yourself to a few calls, linking as many faces into one as possible. If you are using VBOs, you render per VBO (one per transform group).

BufferedImage:
graphics.paint() called once.
VAO:
glDrawElements() should only be called a few times; link as many objects as possible into each one.
VBO:
glDrawArrays(); again, link as many objects as possible into one VBO. Rendering lots of VBOs, one per face, wastes memory and is inefficient; binding as many vertices as possible to each VBO is much more efficient.

Generally I use VBOs because they are easy to create and manipulate, with fairly little understanding required. When doing VBO-style rendering, link as many pieces of similar data together as possible; this makes rendering much more efficient and much easier to handle, lowering the number of glDrawArrays() calls.

An easy way to think about this is the transformation groups I mentioned earlier. By this I mean objects that you know are “connected” together and that can be transformed with a uniform variable instead of attributes (one variable per call instead of an entire float buffer with a transformation for every vertex). Say, for instance, you have a tilemap (this can be 2D or 3D, implemented as you usually would). If this tilemap doesn’t have any special properties, then you can join every tile into the same VBO, because you know that as you move right the entire tilemap will move left, as you move left the entire tilemap will move right, and if you rotate, the entire tilemap will rotate.

A slightly different VBO would be used for effects and particles. Particles are generally considered unique, however they can actually be generated within the shader, with a third attribute array to tell them what to do.
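A CPU-side sketch of that tilemap transform group, batching every tile into one vertex buffer so the whole map can be drawn with a single glDrawArrays() and one uniform transform. The actual glBufferData/glDrawArrays upload is omitted, and the class/field names are my own invention:

```java
import java.util.List;

// Batch every tile of one "transform group" into a single vertex array.
// The per-group transform is applied later as one shader uniform, so
// this buffer only holds untransformed tile-local positions.
class TileBatch {
    // 2 triangles * 3 vertices * 2 floats per tile
    static final int FLOATS_PER_TILE = 12;

    // tiles holds {column, row} grid coordinates
    static float[] build(List<int[]> tiles, float tileSize) {
        float[] verts = new float[tiles.size() * FLOATS_PER_TILE];
        int i = 0;
        for (int[] t : tiles) {
            float x = t[0] * tileSize, y = t[1] * tileSize;
            // triangle 1
            verts[i++] = x;            verts[i++] = y;
            verts[i++] = x + tileSize; verts[i++] = y;
            verts[i++] = x + tileSize; verts[i++] = y + tileSize;
            // triangle 2
            verts[i++] = x;            verts[i++] = y;
            verts[i++] = x + tileSize; verts[i++] = y + tileSize;
            verts[i++] = x;            verts[i++] = y + tileSize;
        }
        return verts; // upload once, then one glDrawArrays for the whole map
    }
}
```

Moving the camera then only changes the uniform, never this buffer.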

It really does depend on what you are doing, but doing it on a case-by-case basis is not efficient unless it is absolutely required.

If you want to know how to do OpenGL rendering efficiently with VBOs, then either Google it or ask me and I’ll write out a brief tutorial on doing it.

Hope this helps.
Thanks.

I guess I have to rephrase my concern.

In the past, objects rendered and did their logic themselves.
Now I often read that this is not good and that I should use a render manager.

How exactly is that done?

Regarding VBOs:

So basically I group the entities in my game that are somehow connected to each other? Like static objects (trees, rocks…) and the map itself?
So the statics and the map go in one VBO, while the player and the enemies each have separate VBOs?

I think by that you mean that all objects handle all logic internally and individually. The major issue with this is memory and CPU consumption: the CPU must fetch the object in order to perform commands on it, and if the object is smaller, that’s faster; it also uses less memory. In a tile-based game you can have up to a million tiles being ticked at a time; if each tile takes up 20 KB, that’s a lot of memory you don’t need. By handling rendering separately and batching data together, you massively reduce the amount of memory used. To get around this, think of a tile as an information pack: it contains all the information that a single tile-processing class can use to simulate that tile and modify its data, instead of the tile having a built-in tick function.
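A sketch of that information-pack idea: the tile is plain data, and one shared processor simulates every tile instead of each tile carrying its own tick() method. The field names and the moisture rule are made-up examples:

```java
// A tile is just data -- no behaviour of its own.
class TileData {
    byte type;      // what kind of tile this is
    short moisture; // example simulation state

    TileData(byte type, short moisture) {
        this.type = type;
        this.moisture = moisture;
    }
}

// One processor ticks any tile; no per-tile tick methods.
class TileProcessor {
    void tick(TileData t) {
        if (t.moisture > 0) t.moisture--; // e.g. soil slowly dries out
    }

    void tickAll(TileData[] tiles) {
        for (TileData t : tiles) tick(t);
    }
}
```

With a million tiles, each one is a handful of bytes of state rather than a full object with its own logic.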

By connected to each other I mean they would all move together or can be simulated together. I don’t just mean statics; the objects can move, they just have to move together.

Honestly, if you would like some help with this, send me a PM. I’ll gladly help you and run you through it entirely with code examples; it’s a bit easier than trying to explain it over a forum post.

One of the main reasons you would render “externally” is for batching. Basically, you group together similar objects and render them in one draw call.

Have your texture/sprite/animation class implement a common interface, let’s say called ‘Drawable’.

Have a renderer class that holds a batch: it extends List with the generic type Drawable.

Add and remove things from it.

The way I do it is have my drawable entities have Draw(Renderer renderer) and UnDraw(Renderer renderer) methods.

Upon creation or destruction of an entity, simply pass them the renderer with the attitude of “here is the renderer, add or remove your drawable components”.

Obviously the renderer needs to draw at a position, origin, angle etc. Just make a transform class for that.
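A minimal Transform of the kind mentioned above; the exact field set (position, origin, scale, angle) is an assumption here, not a fixed API:

```java
// Holds everything the renderer needs to place a drawable:
// position, origin, scale, and rotation.
class Transform {
    float x, y;             // position
    float originX, originY; // rotation/scale origin
    float scaleX = 1, scaleY = 1;
    float angle;            // rotation in degrees

    Transform(float x, float y) {
        this.x = x;
        this.y = y;
    }

    void translate(float dx, float dy) {
        x += dx;
        y += dy;
    }
}
```

Drawable components can then share the transform of the entity that owns them.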

so basically:


class GameObject implements Renderable
{
  float x, y, z;
  float yaw, roll, pitch;

  public void render(Renderer renderer)
  {
    renderer.addToRender(this);
  }
}

class Renderer
{
  List<Renderable> renderables = new ArrayList<>();

  void addToRender(Renderable r)
  {
    renderables.add(r);
  }
  void render()
  {
    for (Renderable r : renderables)
    {
      //Render if r is on screen
    }
  }
  void removeFromRender(Renderable r)
  {
    renderables.remove(r);
  }
}

?

Basically

how does the Renderer retrieve the Mesh from every Object? ???

Mesh being Face, correct? Have Face implement Drawable.

Basically all drawable objects KNOW how they should be drawn, the entity should never care. So in this sense, textures/sprites/animations/meshes should all have a render method that is inherited from the interface Drawable.

I have never done it for 3D, but really, it should not make a difference. For a rendering technique like this you almost always have to roll your own class that extends an existing one.

For instance, I use LibGDX for 2D game development. I have my own Sprite class, even though LibGDX already has one, because I need one that works with my renderer implementation.

EDIT: For clarification.



class GameObject implements Renderable
{
    float x,y,z;
    float yaw,roll,pitch;
    Mesh mesh;

    // Inherited from Renderable
    @Override
    public void render(Renderer renderer)
    {
         renderer.addToRender(mesh);
    }
}

class Mesh implements Drawable{

    // inherited from Drawable
    @Override
    public void draw(){
        // In here the mesh draws itself
    }

}

class Renderer
{
    List<Drawable> drawables = new ArrayList<>();

    void addToRender(Drawable d)
    {
        drawables.add(d);
    }
    void render()
    {
        for(Drawable d : drawables)
        {
             //Render if d is on screen
        }
    }

    void removeFromRender(Drawable d)
    {
        drawables.remove(d);
    }
}

^Isn’t that approach exactly what I described as “bad”?
The meshes draw themselves,
and an external renderer is useless, because you simply iterate through all meshes and they render themselves.
(Look at my first post, it’s the exact same result) ???

Not exactly the same: the drawing is abstracted away from the Moveable object. The mesh is a drawable component; the Moveable object is not, it simply has drawable components.

Drawable components have to draw themselves; it cannot be avoided. What can be avoided is having the objects that own those components draw themselves.

Take this for example.



	@Override
	public void draw(SpriteBatch batch) {
		/* Flip the animations depending on which way the mob is facing */
		if (isFacingLeft && !animator.isFlipX()) {
			animator.flipFrames(true, false);
		} else if (!isFacingLeft && animator.isFlipX()) {
			animator.flipFrames(true, false);
		}

		if (physics.getBody() != null)
			/* Check if the mob can be drawn before trying to draw it */
			if (canDraw) {
				shadow.setPosition(0);
				shadow.draw(batch);

				animator.setPosition(
						(getPhysics().getBody().getPosition().x - (animator
								.getWidth() / 2)) + drawOffset.x,
						(getPhysics().getBody().getPosition().y - (animator
								.getHeight() / 2)) + drawOffset.y);
				animator.setOrigin(origin.x, origin.y);
				animator.setRotation(getPhysics().getBody().getAngle()
						* MathUtils.radDeg);
				animator.draw(batch);
			}

	}


This is a piece of code from a game of mine; it is in the player class (and a lot of other classes). What is happening here is that the player is coupled with the rendering, so technically everything on the screen (the player, enemies, effects) is drawing itself with a method similar to this.

In this example, animator and shadow are both drawable components. One is a simple animation class and the other is a basic sprite. So instead of having every object carry a method like this, I instead give the Sprite and Animation classes an interface called Drawable which has the method above.

Therefore in the Sprite class we have something like this:


        
        // This is actually converted C# code, so anyone looking at this and thinking it is LibGDX it is not
        @Override
        public void Render(SpriteBatch batch){
            batch.draw(texture, transform.position, frame, color, transform.angle, transform.origin, transform.scale,
                effect, layer);
        }

So the renderer now looks like this, with the rendering code (almost) completely decoupled from the objects that have drawable components. The renderer extends from List, hence the use of this.


        public void Render(){
            batch.Begin();
            for (Drawable child : this){
                child.Render(batch);
            }
            batch.End();
        }
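Here is a self-contained sketch of that List-based renderer, runnable as plain Java. The SpriteBatch here is a stand-in that just counts calls (the real one would be LibGDX’s, with proper draw parameters); everything else follows the pattern above:

```java
import java.util.ArrayList;

// The one method every drawable component implements.
interface Drawable {
    void render(SpriteBatch batch);
}

// Stand-in batch: the real version would take texture, transform, etc.
class SpriteBatch {
    int begun, ended, draws;
    void begin() { begun++; }
    void end()   { ended++; }
    void draw()  { draws++; }
}

// The renderer IS the list of drawables; nothing else issues draws.
class Renderer extends ArrayList<Drawable> {
    final SpriteBatch batch = new SpriteBatch();

    void render() {
        batch.begin();
        for (Drawable child : this) {
            child.render(batch);
        }
        batch.end();
    }
}
```

Entities add or remove their drawable components; the renderer loops over whatever it currently holds.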

So now I can literally just snip that entire draw code out of every single object in my game that is not a drawable component.

It is not completely decoupled: when you, say, give an enemy a sprite component, you still need to create the sprite and give it a transform to handle position, scale, rotation, etc. In most cases you just give it a reference to the enemy’s own transform, so the code is usually something like this:


public class Enemy{
     
    // Create new transform at pos 5, 5 with a scale of 1 on x and y, with the origin at 0 on x and y and facing right (angle 0)
    Transform transform = new Transform(5, 5, 1, 1, 0, 0, 0);
    // The drawable component
    Sprite sprite;

    public Enemy(){
        sprite = new Sprite("someDir/thetexture.png", transform);
    }

}

I know I just repeated my last post in more detail, but tl;dr: some things HAVE to draw themselves, but those things should be components that are given to things that require a texture or whatever. So in the case of a Texture, Animation, etc. drawing itself, deal with it lol.

Did the message I sent you help a bit?

I’ve never worked on something like this before, so it’s quite confusing…