[Solved] Stop geometry shader from auto-billboarding shapes

Hello, I have a voxel engine that uses geometry shaders to render chunks of 16^3 cubes. The code generates one point per voxel and packs all the points into a VBO. My geometry shader then generates the cube faces from each point.
The problem is that the cubes seem to follow my 3D camera billboard-style, so I can never see any face other than the front one.
I am new to shaders in general, so I took an existing one and edited it, trying to learn something :slight_smile:

My question is: is there a way to stop this behavior? And, if yes, how?

http://pastebin.java-gaming.org/7d90626362a1b Geometry shader
http://pastebin.java-gaming.org/d9066363a2b10 Render Class
http://pastebin.java-gaming.org/0666a5b302f18 Chunk Class
http://pastebin.java-gaming.org/666ab603f281f World Class

http://s23.postimg.org/5u28mu3o7/bug.jpg

I can only see the front face…

Thanks in advance :slight_smile:

Also, English isn’t my first language, so I apologize for any mistakes…

Not sure what the issue is, but try rendering without culling: [icode]GL11.glDisable(GL11.GL_CULL_FACE);[/icode]

In the fragment shader you can visualize "wrong" faces with https://www.opengl.org/sdk/docs/man/html/gl_FrontFacing.xhtml
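A minimal fragment-shader sketch of that gl_FrontFacing visualization (the two colors are arbitrary choices, just for debugging):

```glsl
#version 120

void main() {
    // gl_FrontFacing is true for fragments of front-facing primitives:
    // front faces render green, back ("wrong") faces render red.
    gl_FragColor = gl_FrontFacing ? vec4(0.0, 1.0, 0.0, 1.0)
                                  : vec4(1.0, 0.0, 0.0, 1.0);
}
```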

If that’s the case, you can fix those faces by flipping the face vertex order from face(a,b,c) to face(c,b,a) - I can’t give you a better example for the triangle strips you use in your geometry shader :wink:

It also looks like the modelview matrix is not applied to the vertices (vertex shader).

Thanks for your reply!

I tried disabling face culling and, as expected, I now see the front and back faces "blended":

http://s12.postimg.org/8nfkq5qjt/bug.jpg

(You can see the four triangles’ hypotenuses)

I also don’t think my problem is in the other shaders, but I’ll post them here:

http://pastebin.java-gaming.org/6ab0f883f2f1f Fragment shader
http://pastebin.java-gaming.org/ab0f89f3f2f18 Vertex shader

I mean, they are just pass-through… how can they cause problems?

Yeah, pretty much pass-through :slight_smile: … they should be fine if the GL_PROJECTION and GL_MODELVIEW matrices are set up correctly (things like glOrtho/gluLookAt/glViewport).

Before actually using any geometry shader - did you get something simpler to fly? Like a ground rectangle with one or more plain cubes on it, with an orbit/FPS camera.

Well… here is my fps camera
http://pastebin.java-gaming.org/b0f8f0f4f2810

And yes, before using geometry shaders I was using lists with all the vertices of a cube, and it was working very well (except for the bad FPS)!

I also tried removing the "gl_Position = ftransform();" line from my vertex shader, but without it my translations don’t seem to be applied…

We can help you better if you tell us what you are attempting here.
Obviously you are trying to render voxels as cubes, but for what exactly?

I am just trying to learn OpenGL and shaders, and I think cubes are the easiest way to begin :slight_smile:

The only thing I want to get from this project is a good amount of chunks rendered without huge performance drops :wink:
If you need more info, just tell me!

I could be wrong with this approach, but where do you pass the camera rotation to the geometry shader? I’m not an expert on mixing functionality introduced in 3.2 with the fixed pipeline, so I don’t know the impact of a glRotate on vertices generated in the geometry shader; my initial assumption is that the geometry shader ignores everything from the fixed pipeline.

I also thought this, and I think you are right. In fact, when I create my points, I give them 0 pitch, 0 roll and 0 yaw. Maybe I could apply the rotation to the points myself and skip the fixed-pipeline part… But how? Can you tell me more about this? Thank you very much, btw :slight_smile:

You will need to create two (optionally three) 4x4 matrices:

  1. a perspective matrix (created once, when you create a new window)
  2. a view matrix, which is the result of your camera moving
  3. an optional model matrix

You pass these to your shader via uniforms.

In your vertex shader, do [icode]gl_Position = perspectiveMatrix * viewMatrix * modelMatrix * pointPosition;[/icode]. This will transform everything to the correct positions, and the geometry shader should then work.

There are other threads on how to generate the perspective matrix which can explain things more on this topic
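A minimal vertex-shader sketch of that advice - the uniform names (perspectiveMatrix, viewMatrix, modelMatrix) and the in_position attribute are placeholders, not names from the poster's code:

```glsl
#version 120

// Placeholder uniforms; upload these from Java with glUniformMatrix4.
uniform mat4 perspectiveMatrix; // built once per window (or on resize)
uniform mat4 viewMatrix;        // rebuilt whenever the camera moves
uniform mat4 modelMatrix;       // optional per-object transform

attribute vec3 in_position; // one point per voxel

void main() {
    // Transform the voxel point all the way to clip space.
    gl_Position = perspectiveMatrix * viewMatrix * modelMatrix
                * vec4(in_position, 1.0);
}
```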

Recent updates:

I found that the vertex shader was passing the rotation/translation data with

gl_Position = ftransform();

I changed it to

gl_Position = gl_ProjectionMatrix * gl_ModelViewMatrix * vec4(gl_Vertex.xyz, 1);

but, since ftransform does the same thing, nothing changed…

I then tried to apply my gl_ProjectionMatrix by replacing [icode]in_matrix[/icode] with [icode]gl_ProjectionMatrix[/icode] in

vec4 corner0 = corner[0] * in_scale * in_matrix + in_pos;

and I found that this method somehow allows me to see some more faces, here’s the screenshot:

http://s14.postimg.org/j2l3o4431/bug.jpg

As you can see, some side faces are visible, but the cubes ARE. STILL. BILLBOARDED. :clue:

Any more advice? I am going crazy with this :frowning:

Move the

gl_ProjectionMatrix * gl_ModelViewMatrix *

code to your geometry shader and transform each generated vertex, for example:

gl_Position = gl_ProjectionMatrix * gl_ModelViewMatrix * corner0;
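A rough sketch of what the geometry shader's emit loop could look like after this change - the corner offsets, the cube half-size, and the single-face layout are assumptions for illustration, not the poster's actual shader:

```glsl
#version 150 compatibility

layout(points) in;
layout(triangle_strip, max_vertices = 4) out;

void main() {
    // The vertex shader no longer calls ftransform(), so this is the
    // raw, untransformed voxel center.
    vec4 center = gl_in[0].gl_Position;
    float s = 0.5; // assumed half-size of the cube

    // One face of the cube as a triangle strip; a real shader would
    // emit all six faces the same way.
    vec4 corners[4] = vec4[4](
        vec4(-s, -s, s, 0.0),
        vec4( s, -s, s, 0.0),
        vec4(-s,  s, s, 0.0),
        vec4( s,  s, s, 0.0)
    );

    for (int i = 0; i < 4; i++) {
        // Offset in model space FIRST, then project. Because the
        // model-view/projection transform is applied after the corner
        // offset, the faces are no longer screen-aligned.
        gl_Position = gl_ProjectionMatrix * gl_ModelViewMatrix
                    * (center + corners[i]);
        EmitVertex();
    }
    EndPrimitive();
}
```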

Oh GOD! IT WORKED! Thank you SO much! Gonna add [SOLVED] to the title.

You guys are the absolute best!

As for ‘why’:

Presumably you assumed (gasp) that the shaders were executed in this order:

Geometry shader → vertex shader → fragment shader

This would mean that any points you turn into cubes (in the geometry shader), are then transformed by the vertex shader. The order in which shaders are executed, however, is:

Vertex shader → geometry shader → fragment shader

which means that if you add a vector to a vertex (received from the vertex shader) in the geometry shader, the offset is applied after the vertex was already transformed (rotated, translated, etc.) by the model-view/projection matrices in the vertex shader. Your vertices end up being offset in screen space, which causes the ‘screen aligned’ effect (or, as you put it: auto-billboarding).

Moving the vertex-transformation from the vertex shader to the geometry shader will allow you to apply the transformations in the right order.
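In shader terms, the two orderings look like this (corner is a hypothetical per-face offset inside the geometry shader):

```glsl
// WRONG: the vertex shader already projected the point, so the offset
// is added in clip space -> screen-aligned ("billboarded") faces.
gl_Position = gl_in[0].gl_Position + corner; // gl_Position was ftransform()ed

// RIGHT: offset the raw point in model space first,
// then apply the model-view and projection matrices.
gl_Position = gl_ProjectionMatrix * gl_ModelViewMatrix
            * (gl_in[0].gl_Position + corner);
```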

As for the opening post: this optimisation will most likely only hurt performance in the long run, because it prevents more advanced culling strategies like greedy meshing, bulk-backface-culling*, etc. The fastest way to perform an operation is not to do it at all; reducing the overhead of the operation will only get you so far.

* splitting chunk geometry into 6 segments, 1 for each side of all the cubes, allowing you to not send any TOP faces if you know the chunk you’re rendering is entirely above the camera.

First of all: I knew the correct execution order of the shaders from the first post, but all the logic behind it (which you explained) was unclear to me.

As I said some posts ago, I made this little "engine" only to learn shaders, and I know that by doing this I wouldn’t be able to implement other culling strategies, but that’s ok. :slight_smile:

Again, thanks very much!