Normals very blocky

Hello,

I'm trying to render some terrain. This works very well, but I'm having trouble using normals.
I just figured out how normals work, and managed to calculate normals for my terrain.
However, because a triangle only has 3 points = 3 normals, the lighting appears really blocky, with rough edges:

I don't know if this is normal or caused by an error in my code.
I'm calculating a normal for each vertex and drawing everything with a VBO (using a triangle strip).

So, is this normal behaviour, and how can I smooth these edges?
I have googled a lot, but the solution seems to be some shader magic (with a lot of choices about which algorithm to use).
Thank you for any help.

Even renormalizing normals in the fragment shader won’t fix this.

The underlying problem is that your terrain is blocky. Why did you use discrete height levels?

Well, it was just my first try. I created a heightmap using Perlin noise and created some vertices to render it.
What would be the best way of rendering some random terrain then?
Using more rectangles or something?

My Init:


        float scale = 0.5f;
        int x = 0;
        // walk the rows in a zigzag so one continuous triangle strip covers the grid
        for(int zpos = 0; zpos < size - 1; zpos += 1){ // zpos+1 is sampled below, so stop one row early
            for(int xpos = 0; xpos < size; xpos += 1){
                // even rows run left-to-right, odd rows right-to-left
                x = zpos % 2 == 0 ? xpos : size - (xpos + 1);
                int tx = x % 2 == 0 ? 0 : 1;
                
                vbo.AddVertex(x*scale, depth[x][zpos]*scale, zpos*scale);
                vbo.AddNormal(getNormal(depth, x, zpos));
                vbo.AddText(tx, 0);
                
                vbo.AddVertex(x*scale, depth[x][zpos+1]*scale, (zpos+1)*scale);
                vbo.AddNormal(getNormal(depth, x, zpos+1));
                vbo.AddText(tx, 1);
                
                // repeat the last vertex of the row as a degenerate triangle
                // so the strip can jump cleanly to the next row
                if(xpos == size - 1){
                    vbo.AddVertex(x*scale, depth[x][zpos+1]*scale, (zpos+1)*scale);
                    vbo.AddNormal(getNormal(depth, x, zpos+1));
                    vbo.AddText(tx, 1);
                }

            }
        }

I think Riven meant that the height-map values seem to be rounded to discrete values. If the terrain wasn't made out of discrete height values, it wouldn't look that blocky. Increasing the resolution won't really fix the problem that easily. You're basically getting aliasing when the height goes
x --> x --> x+1 --> x+1 --> x+1
and vice versa. You need to make it more smoothly go from one value to another:
x --> x+0.25 --> x+0.5 --> x+0.75 --> x+1
or something. In short, don’t round the values to ints. =S
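
For example, keeping the sampled values as floats (a hypothetical sketch; noise() stands in for whatever Perlin generator you're using):

float[][] depth = new float[size][size];
for (int z = 0; z < size; z++) {
    for (int x = 0; x < size; x++) {
        // blocky: depth[x][z] = (int) noise(x, z); snaps every height to a whole step
        depth[x][z] = noise(x, z); // smooth: keep the fractional part
    }
}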

The vertex normals need to be computed from the surface normals of all the shared tris. It'd be helpful if you showed a top-down wireframe view, as some patterns work better than others. It should look like squares with inscribed X's or diamonds at all levels.
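
On a regular grid that averaging collapses to central differences on the neighbouring heights. A sketch (hypothetical getNormal signature, assuming your depth array and a grid spacing of one unit):

// Hypothetical getNormal: on a regular grid, averaging the face normals
// of the shared triangles reduces to central differences on the heights.
float[] getNormal(float[][] depth, int x, int z) {
    int w = depth.length, h = depth[0].length;
    // clamp neighbour lookups at the borders
    float left  = depth[Math.max(x - 1, 0)][z];
    float right = depth[Math.min(x + 1, w - 1)][z];
    float back  = depth[x][Math.max(z - 1, 0)];
    float front = depth[x][Math.min(z + 1, h - 1)];

    // slope along x and z; the 2.0f matches one grid unit on each side
    float nx = left - right;
    float ny = 2.0f;
    float nz = back - front;

    // return it already normalized so the VBO gets unit normals
    float len = (float) Math.sqrt(nx * nx + ny * ny + nz * nz);
    return new float[] { nx / len, ny / len, nz / len };
}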

Omg! Now I noticed I'm rounding the values to ints (the depth map is a float array, so I didn't get what the problem was).
That's what I get for reusing my code, lol.

When removing the (int) cast, the terrain looks like this:

Thanks for the help so far =)
Much better, but there are still "squares with inscribed X's", like Roquen says.
What would be the best way to fix these (like creating some kind of circle, or blurring it until there are no X's)?

Renormalize the normals in the fragment shader.

Thanks.
Do you recommend any algorithms, or do you know what terms I could search for?

The algorithm in GLSL is:

normal = normalize(normal)

Basically search on the words I used, and maybe ‘per pixel lighting’.

http://www.lighthouse3d.com/tutorials/glsl-tutorial/directional-light-per-pixel/

That's a lot easier than expected, thank you.
I was looking at stuff like Gouraud shading and Phong shading.

It turned out pretty nice:

I only had to modify the shader from the example a bit:


varying vec4 diffuse,ambient;
varying vec3 normal,halfVector;
uniform sampler2D Texture0;

vec3 halfV;
float NdotHV;
vec4 color, c;

void main()
{
    halfV = normalize(halfVector);
    NdotHV = max(dot(normal,halfV),0.0);
    color = ambient + clamp(normal.r * (diffuse - ambient), 0.0, 1.0);
    color += gl_LightSource[0].specular * NdotHV;
    
    c = ...; /* texture color */
    gl_FragColor = vec4(c.xyz * color.xyz, 1.0);
}

You’re still not normalizing the normal? >_>

Does it have any effect when you normalize the normal in the fragment shader instead of the vertex shader?
I just tried both, but I don't see much difference; I don't even see a difference if I comment the normalizing out.
Normalizing is just making sure the vector is between 0 and 1, am I right?
Since I'm normalizing the vectors before passing them to the shader, I guess it has no use anyway.

A normal always has a length of 1. That's an assumption that must hold for the dot products to return correct values. Even if you normalize the normal in the vertex shader, after linear interpolation over the surface of a triangle it won't have a length of one. You can visualize the "valid" normal values as a sphere with a radius of 1. When we linearly interpolate between two normals, we won't be following the surface of the sphere (that's called slerp and is a lot more expensive); we'll be taking a shortcut straight through the inside of the sphere between the two normals. That's why you need to normalize it again. Granted, the difference isn't very big when the normal changes slowly over the surface, but there is a big difference when the normal changes quickly from vertex to vertex, since the interpolation takes a shortcut almost straight through the sphere.
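
A quick sketch of that shortcut in GLSL, with two unit normals 90 degrees apart:

// two unit normals, 90 degrees apart
vec3 a = vec3(1.0, 0.0, 0.0);
vec3 b = vec3(0.0, 1.0, 0.0);

vec3 m = mix(a, b, 0.5);  // the rasterizer's halfway value: (0.5, 0.5, 0.0)
float l = length(m);      // ~0.707, not 1.0 -- the lighting comes out too dark
vec3 n = normalize(m);    // back on the unit sphere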

I hope that explains it. And also, normalize() is blazingly fast since it’s optimized to be called for every pixel. You also shouldn’t have to normalize your vertices’ normals since they should already be normalized when you upload them to your VBO.

Thank you, I get it now.
"You also shouldn't have to normalize your vertices' normals since they should already be normalized when you upload them to your VBO."
That was the part about renormalizing I didn't get, because I already send them normalized, but the explanation about interpolation made it very clear.

Even if your vertex normals are similar, it often helps to re-normalize in the fragment shader. The difference isn't that much when you're doing nice smooth diffuse lighting, but for something like specular highlights you'll only get the nice, tight, sharp highlights if you re-normalize.
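
A sketch of what that looks like with the varyings from the shader above (gl_FrontMaterial.shininess as the exponent):

vec3 n = normalize(normal);     // per-fragment re-normalization
vec3 h = normalize(halfVector);
// the specular exponent amplifies any error in n's length,
// so a too-short interpolated normal visibly dulls the highlight
float spec = pow(max(dot(n, h), 0.0), gl_FrontMaterial.shininess);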

Hehe, no problem. You got some really nice results there. I wonder about the color though, that slime green makes it look like a chemical spill! xD

It was the nicest seamless texture I could find; the terrain has gone through many different shades of green in my journey to create the shader :slight_smile:
Already fixed it, I had diffuse set to 2.0 instead of 1.0.

Now I need to get multitexturing to work and add water; what should I do first…

You might be interested in techniques like texture splatting and soft water edges then? =S
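
Splatting basically means blending several tiling textures with per-texel weights. A minimal sketch, assuming four layers and an RGBA blend map (all names hypothetical):

uniform sampler2D grass, rock, sand, snow; // the terrain layer textures
uniform sampler2D blendMap;                // RGBA weights, one channel per layer

void main()
{
    // the weights are assumed to sum to 1 per texel
    vec4 w = texture2D(blendMap, gl_TexCoord[1].st);
    gl_FragColor = texture2D(grass, gl_TexCoord[0].st) * w.r
                 + texture2D(rock,  gl_TexCoord[0].st) * w.g
                 + texture2D(sand,  gl_TexCoord[0].st) * w.b
                 + texture2D(snow,  gl_TexCoord[0].st) * w.a;
}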

Indeed.
However, it seems most engines use huge alpha maps to determine each texture for multi-textured terrain.
As a first try I'm going to pass a byte attribute with each vertex containing the terrain type.
I hope it's possible to interpolate between the types some way; otherwise I'll try the alpha map (a 128*128 image for each chunk does not seem too bad).
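
Something like this is what I have in mind (hypothetical names), expanding the single byte to one weight per layer so the rasterizer can interpolate it:

// vertex shader sketch: one weight per layer instead of a single type id,
// because an interpolated id (halfway between grass=0 and rock=2) is meaningless
attribute vec4 terrainWeights;  // hypothetical per-vertex attribute
varying vec4 vWeights;

void main()
{
    vWeights = terrainWeights;  // interpolated across each triangle for free
    gl_TexCoord[0] = gl_MultiTexCoord0;
    gl_Position = ftransform();
}

The fragment shader could then blend the layer textures with vWeights exactly like the blend-map version above.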