Some assistance required with 3D room rendering style, visualizations included!

Hey all!

I’m working on a kind of 3D room rendering data structure, but I’m having trouble with a couple of points.

First and foremost, here’s a couple of screenshots from the actual engine running:

An example room:

From inside the room:

A room is made up of segments. A segment can be defined as a 1 x 1 tile that extends upwards by an arbitrary amount to form a hollow 3D cuboid.

Every time you add a segment next to another, their two adjoining faces are removed (not rendered) to form a larger interior space, until you create an entire room, as explained in the following diagram:

Up to this point, my data structure would define each segment as simply having an X, Y, Z position, a floor height and a ceiling height. The inner wall height is naturally the difference between the upper ceiling Y and lower floor Y.

Here’s where things get a little more complicated :S

Adjacent floors can be different heights (as can ceilings). This raises the problem where you need to fill in the ‘gap’ between floors. This can be neatly explained in the following diagram:

The data structure gets a little more complex now. I’ve reduced the need to do runtime processing of these ‘step’ values by pre-computing them. For each segment in the entire map, I calculate the difference in floor height between it and each of its neighbours, once for each face (north, east, south and west). This means each edge of a segment can drop down to its own height, independently of the other edges.

If the adjacent floor is higher, it gets ignored since that higher floor will do the processing for that edge anyway.
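
In rough Java form, that pre-computation pass looks something like the sketch below - the getters/setters and the getSegment(x, z) lookup are just illustrative names rather than the exact code:

// For every segment, work out how far each neighbouring floor drops below ours.
// Higher (or missing) neighbours are ignored - they'll generate their own step.
void computeStepDrops(Segment s) {
    Segment north = getSegment(s.getX(), s.getZ() - 1);   // illustrative map lookup
    Segment south = getSegment(s.getX(), s.getZ() + 1);
    Segment east  = getSegment(s.getX() + 1, s.getZ());
    Segment west  = getSegment(s.getX() - 1, s.getZ());

    s.setLowerDropNorth(north == null ? 0 : Math.max(0, s.getFloorY() - north.getFloorY()));
    s.setLowerDropSouth(south == null ? 0 : Math.max(0, s.getFloorY() - south.getFloorY()));
    s.setLowerDropEast (east  == null ? 0 : Math.max(0, s.getFloorY() - east.getFloorY()));
    s.setLowerDropWest (west  == null ? 0 : Math.max(0, s.getFloorY() - west.getFloorY()));

    // only draw a step edge where the neighbouring floor really is lower
    s.setDrawStepNorth(s.getLowerDropNorth() > 0);
    s.setDrawStepSouth(s.getLowerDropSouth() > 0);
    s.setDrawStepEast (s.getLowerDropEast()  > 0);
    s.setDrawStepWest (s.getLowerDropWest()  > 0);
}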

So, here’s where I’m having trouble.

I’m doing all of the tile rendering in immediate mode - so I have a Segment object that looks a bit like this in code:


public class Segment {

private float x, y, z;   // the segment's position, where Y is the floor's position
private float height;   // the height of the segment
private float ceilingY;   // for convenience, set as y + height

// the values that define how much lower this segment's neighbouring floors are
private float lowerDropNorth, lowerDropEast, lowerDropSouth, lowerDropWest;

// flags to enable or disable drawing of each side wall
private boolean drawNorth, drawEast, drawSouth, drawWest;

// flags to enable or disable drawing of the lower step edges
private boolean drawStepNorth, drawStepEast, drawStepSouth, drawStepWest;

public Segment() { }

public void render()
{
    // draw inner walls
    if (drawNorth)
    {
        glBegin(GL_QUADS);
        // vertex definitions...
        glEnd();
    }

    if (drawSouth)
    {
        glBegin(GL_QUADS);
        // vertex definitions...
        glEnd();
    }

    if (drawEast)
    {
        glBegin(GL_QUADS);
        // vertex definitions...
        glEnd();
    }

    if (drawWest)
    {
        glBegin(GL_QUADS);
        // vertex definitions...
        glEnd();
    }

    // draw step edges
    if (drawStepNorth)
    {
        glBegin(GL_QUADS);
        // vertex definitions...
        glEnd();
    }

    if (drawStepSouth)
    {
        glBegin(GL_QUADS);
        // vertex definitions...
        glEnd();
    }

    if (drawStepEast)
    {
        glBegin(GL_QUADS);
        // vertex definitions...
        glEnd();
    }

    if (drawStepWest)
    {
        glBegin(GL_QUADS);
        // vertex definitions...
        glEnd();
    }
}

}

This seems massively inefficient to me. Even with frustum and manual distance culling, this doesn’t really seem to be an efficient way of rendering each segment’s required faces. Or is it?

I figured that having each Segment’s data pre-processed (position, step size etc.) would be beneficial since nothing is ever going to move once it’s created (static geometry), but I can’t seem to think of a more appropriate way of rendering it.

I’d greatly appreciate feedback from anyone else who might have experience with these things!

Thanks!

Well I would say three things.

  1. All of what I’m saying really depends on the usage of the program. The title is “World Builder” apparently which would suggest to me a level editor type thing (you would not believe the number of programs I’ve used called simply “World Builder”). But if that were the case then I wouldn’t be too bothered about performance (also I’ve never seen a level editor work like that). You could elaborate on the purpose a bit for some more “personalized” advice.

  2. If 'twere me, I would get rid of all the fields after “ceilingY” and the render method, and instead have Segment as a “pure” data structure. Then you have a Quad class which is just a quad you can draw etc. Every time something gets changed you go through all these Segments and generate a list of all the Quads you need to draw. Then each render, all you have to do is iterate through the list of Quads and call draw() (or whatever), and you still have the original data for whatever else you need to do with it (rough sketch after this list).

  3. If you are really concerned about rendering performance, then (as you yourself said) immediate mode is not the best idea (certainly it isn’t in this kind of application) - vertex arrays or VBOs would be the way to go. I’m not telling you to learn how to use them (I’m assuming you don’t know since you haven’t), I’m just saying that your application’s performance will benefit if you do.
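
As a rough sketch of what I mean in point 2 (the Quad fields and the createFaceQuads() helper are only illustrative, and this still draws in immediate mode for simplicity):

// Segment becomes pure data; Quad is the only thing that knows how to draw.
public class Quad {

    private final float[] vertices;    // 4 corners, 12 floats
    private final float[] texCoords;   // 4 corners, 8 floats
    private final int textureId;

    public Quad(float[] vertices, float[] texCoords, int textureId) {
        this.vertices = vertices;
        this.texCoords = texCoords;
        this.textureId = textureId;
    }

    public void draw() {
        glBindTexture(GL_TEXTURE_2D, textureId);
        glBegin(GL_QUADS);
        for (int i = 0; i < 4; i++) {
            glTexCoord2f(texCoords[i * 2], texCoords[i * 2 + 1]);
            glVertex3f(vertices[i * 3], vertices[i * 3 + 1], vertices[i * 3 + 2]);
        }
        glEnd();
    }
}

// rebuilt whenever a segment changes; the render loop just iterates it
List<Quad> quads = new ArrayList<Quad>();

void rebuildQuads(List<Segment> segments) {
    quads.clear();
    for (Segment s : segments) {
        quads.addAll(s.createFaceQuads());   // hypothetical helper: one Quad per visible face
    }
}

void render() {
    for (Quad q : quads) {
        q.draw();
    }
}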

Hope I have been of some help.

Thanks for your reply, there’s some helpful ideas in there!

But you’re right, of course, in that some context might help to further ground the idea behind what I’m trying to implement.

Yes, this is a level-editor, but I’m curious where you say you’ve never seen an editor work like this - could you elaborate on what you mean? :)

In creating levels for a game I want to work on, I wanted to be able to visualise the layout of rooms and how they’re connected and accessible to the player. I created an environment where I can move around in all dimensions and move a cursor around the grid to add segment tiles to one another, raise and lower floors / ceilings and so on. This seemed to offer the best of both worlds: being able to build the level in 3D, sculpting it almost, felt like a natural fit.

Anyway, I digress. The segments’ edge data is re-calculated every time you add or remove (or change) a segment in the editor. During a recalculation, every segment is looped through and its edge data is calculated (the required dimensions of the edge quads and so forth).

The Segment object in the game would differ from the editor version in that it wouldn’t necessarily need the additional pre-calculation variables - as you rightly point out. Rather than storing the drop values and visibility flags, I could simply store an array of objects that define each of the segment’s faces (vertices, colours, tex coords and texture ID). Is this what you mean?

I would ask whether storing this ‘quad’ structure would be better served by putting the quad inside a display list (or VBO?), since all quads in the engine need to be equally sized (I’m using a texture atlas, so I can’t leverage texture repeating on one long surface - such as a tall inside wall - instead it needs to be comprised of several same-sized quads).
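
(To illustrate what I mean by equally sized quads: every face just samples one cell of the atlas, roughly like the snippet below, where the atlas size is purely an example value.)

// Each face samples one cell of the texture atlas, so every quad is the same
// size and no texture repeating is needed.
// atlasSize is the number of cells per row/column (e.g. 8 for an 8 x 8 atlas).
float[] atlasTexCoords(int cellX, int cellY, int atlasSize) {
    float cell = 1.0f / atlasSize;
    float u = cellX * cell;
    float v = cellY * cell;
    return new float[] {
        u,        v,          // bottom-left
        u + cell, v,          // bottom-right
        u + cell, v + cell,   // top-right
        u,        v + cell    // top-left
    };
}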

This is where I’m most concerned - the continued use of immediate-mode drawing (glBegin), where I could possibly improve my Segment render method into something more streamlined (such as a simple list of ‘quad’ objects with the above list of properties).

Thanks for your input, I hope the above all makes sense!

Yeah you’re right on all fronts. And VBOs I think are the way forward especially for you with (I presume) very static geometry.

All I meant with the “never seen a level editor like it” was that I’ve never seen a level editor employ that means of constructing geometry. The ones I have worked with involve creating primitives (as in cuboids, spheres, cylinders etc., rather than OpenGL primitives), then you scale, rotate, translate and split them. Or just directly transform the vertices. But I’m not saying your way is bad. It seems very simple to use and more than versatile enough to make some very complex rooms.

Thanks - I thought it was a case of ‘you’re doing it wrong!’ :)

I’ve just boiled the game down to, essentially, a 3D interpretation of 2D data (x,z position plus height data).

Just one last question, then: if you set up one VBO, can you reuse that VBO with different textures? Just bind the one you want before you invoke the draw method of the VBO?

Thanks for your input, I’m glad to know I’m on the right road!

There is no wrong way in terms of how your application works, only in implementation - and even then, there are only suggestions.

Yes, it is perfectly possible. A VBO is just a buffer holding vertex data - positions, texture coords, normals, colours etc. Pointers define how you interpret that data. Shaders define what you do with it. And it’s at the shader level where the texture sampling happens. That was a long-winded way of saying yes, just bind a different texture before drawing the VBO - but I wanted to give you an overview.
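
A quick sketch of what that looks like in practice (assuming LWJGL-style static imports and an interleaved position + tex-coord layout - adjust the strides and offsets to whatever your buffer actually holds):

// one static VBO holding interleaved data: x, y, z, u, v per vertex
glBindBuffer(GL_ARRAY_BUFFER, quadVboId);

glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);

int stride = 5 * 4;                                // 5 floats * 4 bytes each
glVertexPointer(3, GL_FLOAT, stride, 0L);          // positions first
glTexCoordPointer(2, GL_FLOAT, stride, 3L * 4L);   // tex coords after the position

// same buffer, different textures - just bind before each draw call
glBindTexture(GL_TEXTURE_2D, wallTextureId);
glDrawArrays(GL_QUADS, 0, 4);

glBindTexture(GL_TEXTURE_2D, floorTextureId);
glDrawArrays(GL_QUADS, 0, 4);

glDisableClientState(GL_TEXTURE_COORD_ARRAY);
glDisableClientState(GL_VERTEX_ARRAY);
glBindBuffer(GL_ARRAY_BUFFER, 0);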

I managed to solve this conundrum with your assistance, so thank you. VBOs work really well for speeding up the rendering.

However, I made another addition that has significantly increased draw speed across the board.

Before this change, I had been calculating a ‘boxed’ region around the player (camera’s) position which was the render bounds - no segments outside of this region would even be checked for rendering.

Then, within this region, anything inside the camera view frustum was being rendered.

However, there was a lot of occluded (hidden) geometry being rendered needlessly. Since these levels will be mostly rooms and connected corridors, there’s going to be plenty of cases where only partial geometry visibility is required, and to this end I wrote a line-of-sight algorithm.

I based it loosely on Bresenham’s algorithm, in that it works from a top-down perspective against the (for all intents and purposes) 2D map of Segments. From the player’s position, several rays are fired in a fan that spans the camera’s entire FOV. These rays are then stepped forward incrementally until they encounter a wall (or, more specifically, a tile with no segment, which indicates ‘nothing’ there, i.e. a wall). As soon as a ray hits a wall, its tracing is stopped and processing moves on to the next ray.

By tracing out the vectors using actual GL lines it’s quite cool to see how it’s calculating the ray vs. segment visibility.

By simply flagging all segments as ‘hidden’ to begin with, the ray tracer does a stellar job of only showing geometry that isn’t occluded, and is within the player’s view cone. It works even better than simple frustum culling, which required all of my geometry to be cross-referenced.
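
In rough outline it boils down to something like this (the grid lookup, visibility flags and constants here are illustrative rather than the exact code):

// Fan of rays across the camera's horizontal FOV, stepped in small increments
// over the 2D segment grid. Everything starts hidden; a segment becomes
// visible the moment any ray passes through its tile.
void updateLineOfSight(float camX, float camZ, float camYaw, float fov) {
    for (Segment s : segments) {
        s.setVisible(false);              // start with everything hidden
    }

    int rayCount = 64;                    // example value
    float step = 0.5f;                    // half a tile per increment
    float maxDistance = 40.0f;            // example render bound

    for (int i = 0; i < rayCount; i++) {
        float angle = camYaw - fov / 2 + fov * i / (rayCount - 1);
        float dx = (float) Math.sin(Math.toRadians(angle)) * step;
        float dz = (float) Math.cos(Math.toRadians(angle)) * step;

        float x = camX, z = camZ;
        for (float d = 0; d < maxDistance; d += step) {
            Segment s = getSegmentAt((int) Math.floor(x), (int) Math.floor(z));
            if (s == null) {
                break;                    // no segment here: a wall, stop this ray
            }
            s.setVisible(true);
            x += dx;
            z += dz;
        }
    }
}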

If anybody reading this is interested in my method for doing the ray-traced line of sight calculation, let me know and I’ll post it up in a separate thread. I’m sure someone else might find it useful too :slight_smile:

Good job. You picked those VBOs up pretty damn quickly. Most people take a lot longer (I know I did)

Thanks… it took a little while for me to get my head around how the float buffers get filled (and flipped), but it made a lot more sense once I started adding line breaks between each vertex’s data when building the buffer array - it made it feel a little bit like writing out the old-style glVertex3f(…) commands!
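
In case it helps anyone else, this is the general shape of the buffer-building code I mean (LWJGL’s BufferUtils, with one floor quad’s positions as the example):

// one line per vertex - reads a lot like the old glVertex3f calls
FloatBuffer buildFloorQuad(float x, float y, float z) {
    FloatBuffer vertexData = BufferUtils.createFloatBuffer(4 * 3);
    vertexData.put(new float[] { x,     y, z     });
    vertexData.put(new float[] { x + 1, y, z     });
    vertexData.put(new float[] { x + 1, y, z + 1 });
    vertexData.put(new float[] { x,     y, z + 1 });
    vertexData.flip();   // rewind before handing it to OpenGL
    return vertexData;
}

// upload once - the geometry never changes
int vboId = glGenBuffers();
glBindBuffer(GL_ARRAY_BUFFER, vboId);
glBufferData(GL_ARRAY_BUFFER, buildFloorQuad(x, y, z), GL_STATIC_DRAW);
glBindBuffer(GL_ARRAY_BUFFER, 0);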

For what it’s worth, here’s the final implementation I’ve gone for (with some handy visualisations).

All quads are drawn as VBOs now, where I simply have one VBO for each face orientation (horizontal, vertical) which are flipped as required depending on where they need to be drawn.

From a speed perspective, the line-of-sight algorithm does the leg work in calculating which segments need to be visible - the segments will draw their quad VBO lists if they’re in the renderable list.
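
The draw loop itself is then little more than binding a texture and moving the shared unit-quad VBO into place for each face (sketched here with the fixed-function matrix stack; the Face accessors are just illustrative):

// one shared 1 x 1 quad VBO per orientation, positioned per face
for (Segment s : renderableSegments) {
    for (Face f : s.getFaces()) {             // hypothetical per-face data
        glBindTexture(GL_TEXTURE_2D, f.getTextureId());
        glPushMatrix();
        glTranslatef(f.getX(), f.getY(), f.getZ());
        glRotatef(f.getRotation(), 0, 1, 0);  // orient the unit quad
        drawUnitQuadVbo(f.isHorizontal() ? horizontalVbo : verticalVbo);   // hypothetical helper
        glPopMatrix();
    }
}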

Here’s how I visualised it.

The whole map is tiled in 2D. So every X/Z tile can store 1 segment. This segment can have a floor and a ceiling height, and up to 4 walls. This means that I can calculate a line-of-sight algorithm in 2D as well.

The following image shows the level editor with placeholder tiles on the ground to represent where the level’s floors would be (just as an example, the actual segments are full 3D extrusions). The dark tiles aren’t rendered at runtime (invisible to the line-of-sight), and the light ones are seen by the player.

The green line represents the left-most edge of the camera’s viewpoint, and the red line the right. The purple lines indicate each ray that is being fired to test for wall collisions (in increments of 0.5 units, where one tile is 1 x 1 unit in width and depth).

You can see how, out of all of the level geometry, only 9 segments are actually visible. Massive speed boost!

The way the levels are designed will help in preventing the player from seeing too much distant geometry, but still allow them to see far down long corridors without having to render the entirety of room areas they’re ‘looking’ in to:

And even on larger spaces, there’s still a relatively low number of line of sight calculations happening:

I’ve restricted the LOS calculations to only happen once every 10 ticks or so, which should reduce the burden. Although, to be fair, 146 calculations per tick wouldn’t be a massive problem in any case. It’s unlikely there will ever be more than a thousand or so at worst - and since this all happens between render cycles, it won’t chew up even a single millisecond of CPU time.

Quite pleased with these results - and I really appreciate your time in helping to answer my questions and be patient with me. I really appreciate it!

If anybody wants the LOS code, let me know and I’ll post, thanks!

Yeah, it’s funny how making your code look pretty - like putting line breaks in logical sections - can make it so much easier to understand.

You might consider posting your LOS code in the shared code section, or even putting the whole editor in the WIP\toys section if you’d be willing. It certainly looks very impressive, although I’m not sure how portable it could be made to be.

It’s a nice application, and if you can do this with a level editor in so short a space of time, I’m looking forward to the game you actually make with it.

Thanks, I appreciate the sentiment, and yes - I’m only too happy to post the editor framework once it’s in a decent state. But as you say, it’ll depend mostly on people’s needs for something like this - mapping out a 2D tile environment (for later extrusion into 3D) might have its uses for some people :)