unit normal math problem

Probably a stupidly simple math problem, but could anyone tell me how to calculate the z length of a unit vector if I already have the x and y lengths? The unit length would normally be calculated by:

unitNorm = sqrt(x^2 + y^2 + z^2)


Since I know unitNorm is 1, and I know x and y, z should be discoverable, but I do not know how to handle the math. Thanks for any help.

1^2 = sqrt(x*x + y*y + z*z)^2
1 = x*x + y*y + z*z
1 - (x*x + y*y) = z*z
±sqrt(1 - (x*x + y*y)) = z :smiley:

EDIT: Oops, I didn’t see the squared on the variables :smiley:

It’s either sqrt(1 - x^2 - y^2) or -sqrt(1 - x^2 - y^2), but it matters which.

True. z could either be +ve or -ve, so you’d have to account for both possibilities.
Out of interest, why do you need to do this?

Well, technically the length of the z component is the first one of those :slight_smile: The value of the z component is another matter. /pedantic

As to the math, here’s how you solve for it, starting with:
1 = sqrt(x^2 + y^2 + z^2)
Square both sides, you get:
1 = x^2 + y^2 + z^2
Which gets you:
1 - x^2 - y^2 = z^2

And take the square root of both sides to get
z = ±sqrt(1 - x^2 - y^2)

The + or - part comes in because squaring loses the sign: both 2 and -2 square to 4, so z^2 = 4 has two roots. The Math.sqrt() function always returns the non-negative value.

z = [plus/minus] sqrt( max(0, 1 - x*x - y*y) )
if you want to guard against numerical noise. :wink:
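
For reference, a minimal GLSL sketch of that guarded version (the sign still has to come from whatever you know about the surface, so here it is just a parameter):

float reconstructZ(vec2 n, float zSign)
{
    // clamp at zero to guard against numerical noise pushing x*x + y*y above 1.0
    return zSign * sqrt(max(0.0, 1.0 - n.x * n.x - n.y * n.y));
}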

[SnarkMode]
For you kids that strive for accuracy (which 99.9% of the time you shouldn’t), this is a classic example of a short code sequence that can have huge errors.
[/SnarkMode]

OP: Is this just for your knowledge or are you using it for something?

I am playing around with storing normals in maps. As a trade off, it might be worth encoding object space normals as x, y components of the unit normal vector and reconstructing the z component in the shader.
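
A rough sketch of how that reconstruction might look in a fragment shader, assuming the x and y components are stored in the R and G channels in the usual 0..1 encoding and that z is known to be non-negative (the names here are made up for illustration):

uniform sampler2D normalMap; // hypothetical map holding the packed x, y components

vec3 unpackNormal(vec2 uv)
{
    // remap from the 0..1 storage range back to -1..1
    vec2 xy = texture2D(normalMap, uv).rg * 2.0 - 1.0;
    // rebuild z from the unit-length constraint; max() guards against noise
    float z = sqrt(max(0.0, 1.0 - dot(xy, xy)));
    return vec3(xy, z);
}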

If you’re going to be storing normals in maps (by which I assume you mean texture maps), you should look into tangent space normals. They work better for animated meshes.

Some additional solutions are covered here:
http://aras-p.info/texts/CompactNormalStorage.html

Note: This is for projective (screen) space problems instead of surface space ones, but the techniques still work. The main difference is that surface space normals are much more likely to be near 1 in Z.

Just thought I’d drop this in here for comment, and because it helps me think it through. Basically it is a cheaper version of tangent space useful for floors, walls, and ceilings, all of which, if they are flat, do not need a tangent matrix per vertex. Unfortunately, I cannot find an image program that creates world space normal maps; normals are defined in tangent space by the Gimp and Photoshop normal plugins etc., so a tangent matrix is still needed.

Let's say we have a normalized normal N=(0.2, 0.3, 0.87), generated in Gimp from a height map and stored in a normal map. As you can see, the greater part of the normal (0.87) points out of the screen toward the viewer, who is sitting up the positive Z axis. Were this normal map to be applied to a south facing wall (one that faces the viewer), then we could read the normal from the map and use it directly in lighting calculations because, for all intents and purposes, it would be the same as a world/object space normal map.

As a quick aside, it is important to understand the difference between object and world space. While it is said that models are defined in object space, they are not really: they are defined in world space, only they have not been moved… object space is world space that has had no movement applied to it. As soon as the object is moved, a transformation matrix is applied to it (glTranslate/glRotate etc. do this); the object now consists of its object space coordinates plus a transformation of those coordinates that changes the location/orientation of the model in world space. If you reversed the transformation, the model would return to its object space coordinates.

The normal map situation becomes problematic if we want to use this normal map on the floor, for there we want the greatest extent of the normals to point up the y axis. Intuitively, however, it seems that all we need do is bend the coordinate frame in which the original vertex normal and the perturbed normal (N`) sit through 90 deg about the X axis… that is, we want to glue the perturbed normal to the Z axis and rotate the Z axis till it points up; this will drag the perturbed normal to where we want it, pointing up the y axis, but correctly offset from it.

We can do that by providing a base frame and multiplying the perturbed normal by the inverse of that frame. What is a base frame? It is a set of vectors that define the x, y, and z axes of a 3D coordinate frame. World space defines the standard base frame:

x=(1.0, 0.0, 0.0)
y=(0.0, 1.0, 0.0)
z=(0.0, 0.0, 1.0)

Any vector is only meaningful if it is specified in a coordinate frame. The vector vec=(0.2, 0.3, 0.87) is actually:

vec dot standard base frame, or:
vec.x=(1.0, 0.0, 0.0).( 0.2, 0.3, 0.87)=0.2
vec.y=(0.0, 1.0, 0.0).( 0.2, 0.3, 0.87)=0.3
vec.z=(0.0, 0.0, 1.0).( 0.2, 0.3, 0.87)=0.87

which is to say, 0.2 units along the x axis, 0.3 units up the y axis, and 0.87 units out along the z axis of the base frame.
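
In GLSL terms this is trivial, but it makes the notation concrete (a sketch; these lines would sit inside any shader function):

vec3 v = vec3(0.2, 0.3, 0.87);
// dotting v with each axis of the standard base frame simply reads back its components
float vx = dot(vec3(1.0, 0.0, 0.0), v); // 0.2
float vy = dot(vec3(0.0, 1.0, 0.0), v); // 0.3
float vz = dot(vec3(0.0, 0.0, 1.0), v); // 0.87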

If we create a new base frame and do the dot products using the same vector but with the inverse of the base frame, that vector will be rotated as if it were glued to the standard base frame, and the standard base frame was rotated to align with the new base frame.

The base frame we need takes the world space y axis and uses it as the zbase of the new frame, the x axis remains unchanged, and the y axis now points down the negative Z axis. The new base frame is:

xbase=(1.0, 0.0, 0.0)
ybase=(0.0, 0.0, -1.0)
zbase=(0.0, 1.0, 0.0)

We need the inverse of this base frame, which, because the frame is orthonormal (the axes are perpendicular unit vectors), we can get by transposition (basically swap the rows and columns):

xbase`=(1.0, 0.0, 0.0)
ybase`=(0.0, 0.0, 1.0)
zbase`=(0.0, -1.0, 0.0) 

Using dot products (as opposed to matrix multiplication but with identical effect) we can convert the perturbed normal to point up the y axis:

x`=xbase`.N`=(1.0, 0.0, 0.0).(0.2, 0.3, 0.87)=(1.0*0.2+0.0+0.0)=0.2
y`=ybase`.N`=(0.0, 0.0, 1.0).(0.2, 0.3, 0.87)=(0.0+0.0+1.0*0.87)=0.87
z`=zbase`.N`=(0.0, -1.0, 0.0).(0.2, 0.3, 0.87)=(0.0+(-1*0.3)+0.0)=-0.3

Which gives us the normal we are looking for:

N``=(0.2, 0.87, -0.3)

For the hell of it, and because it is extremely important for what follows, let's dot product the new vector with the non-inverted new base matrix:

x=xbase.N``=(1.0, 0.0, 0.0).(0.2, 0.87, -0.3)=(1.0*0.2+0.0+0.0)=0.2
y=ybase.N``=(0.0, 0.0, -1.0).(0.2, 0.87, -0.3)=(0.0+0.0+(-1.0*-0.3))=0.3
z=zbase.N``=(0.0, 1.0, 0.0).(0.2, 0.87, -0.3)=(0.0+1.0*0.87+0.0)=0.87

leaving us with the original perturbed normal (0.2, 0.3, 0.87). The inverse of the matrix moves a vector into the new base; the non-inverse moves a vector out of the new base and into world space or the standard base frame.
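
Here is the working above as a GLSL sketch. I use explicit dot products rather than a mat3 so the row/column conventions stay out of the way; the constants are the base frame and its transpose from the post:

const vec3 xbase = vec3(1.0, 0.0, 0.0);
const vec3 ybase = vec3(0.0, 0.0, -1.0);
const vec3 zbase = vec3(0.0, 1.0, 0.0);

// the inverse (transposed) frame
const vec3 xbaseInv = vec3(1.0, 0.0, 0.0);
const vec3 ybaseInv = vec3(0.0, 0.0, 1.0);
const vec3 zbaseInv = vec3(0.0, -1.0, 0.0);

// moves a vector into the new base (dot with the inverse frame)
vec3 intoFrame(vec3 v)
{
    return vec3(dot(xbaseInv, v), dot(ybaseInv, v), dot(zbaseInv, v));
}

// moves a vector out of the new base, back to the standard frame (dot with the non-inverse)
vec3 outOfFrame(vec3 v)
{
    return vec3(dot(xbase, v), dot(ybase, v), dot(zbase, v));
}

// intoFrame(vec3(0.2, 0.3, 0.87)) == vec3(0.2, 0.87, -0.3)
// outOfFrame(vec3(0.2, 0.87, -0.3)) == vec3(0.2, 0.3, 0.87)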

Now let's consider the light vector. The light has a position in world space, e.g. lightPos=(10.0, 10.0, 5.0). Let's say the pixel the fragment shader is currently working on is at pos=(2.0, 0.0, 1.5), i.e. it is part of the floor we have been considering… we assume pos is the vertex position transformed into world space and interpolated into the fragment shader. lightDir would be:

lightDir=normalize(pos-lightPos)=normalize(2.0-10.0, 0.0-10.0, 1.5-5.0)=normalize(-8.0, -10.0, -3.5)=(-0.6, -0.75, -0.26)

Note that lightDir points from the light toward the pixel. Note also that the perturbed normal we calculated above would work correctly with lightDir, that is, the normal would be pointing in the general direction of the light. The lightDir vector can be envisaged as a vector in the same new base frame: its tail attaches to the point defined by pos (the pixel being worked on), and it heads in a direction somewhat opposite the normal vector.

But if we could return N`` to the perturbed normal N` using the non-inverted new base matrix, and if the light vector can be said to be defined in the same space as N``, then we can also turn lightDir the same way:

l.x=xbase.lightDir=(1.0, 0.0, 0.0).(-0.6, -0.75, -0.26)=-0.6
l.y=ybase.lightDir=(0.0, 0.0, -1.0).(-0.6, -0.75, -0.26)=0.26
l.z=zbase.lightDir=(0.0, 1.0, 0.0).(-0.6, -0.75, -0.26)=-0.75
Giving us lightDir`=(-0.6, 0.26, -0.75)
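
Reusing the outOfFrame() helper from the sketch above, that is just:

// same dot products as the working above
vec3 lightDirPrime = outOfFrame(vec3(-0.6, -0.75, -0.26)); // == vec3(-0.6, 0.26, -0.75)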

This is kind of difficult to envisage, so grab a pen and hold it up representing the y axis (which is the z axis of the new base frame). Now grab another pen and stick its tail at the bottom of the y pen, representing lightDir; it will point to the left, down, and away from you. Now rotate both pens as if welded together so that the y axis pen points directly at you; the lightDir pen will now point to the left, up, and away. Especially note that the y component moves from negative to positive.

OK, so what does all this mean? It means that the base frame can be used to convert lightDir, and by extension eyeDir, into the correct position relative to the perturbed normal read from the normal map… ready to perform the required light calculations. Essentially, it acts like a texture space matrix, but we have had no need of uv coords to construct the tangent space, nor do we need to attach the T vector (and possibly the B vector) as attributes to each vertex; we need only pass one T vector as a uniform variable for the entire floor, wall, ceiling, etc. Furthermore, because z (b) values in normal maps are mapped differently to x (r) and y (g) values, we might be better off recalculating the z value from the x and y and using the z channel to store some other goodie.
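
As a concrete (untested) sketch of how this could come together in a fragment shader, with the base frame's axes passed as uniforms for the whole floor or wall; all the uniform and varying names here are made up for illustration:

uniform sampler2D normalMap;  // tangent-space style map from Gimp/Photoshop
uniform vec3 frameX;          // xbase of the surface's base frame
uniform vec3 frameY;          // ybase
uniform vec3 frameZ;          // zbase
uniform vec3 lightPos;        // light position in world space

varying vec2 uv;              // interpolated texture coordinates
varying vec3 worldPos;        // interpolated world space position of the fragment

void main()
{
    // normal as stored in the map, remapped from 0..1 to -1..1, left in the map's own frame
    vec3 n = normalize(texture2D(normalMap, uv).rgb * 2.0 - 1.0);

    // light-to-fragment direction in world space
    vec3 lightDir = normalize(worldPos - lightPos);

    // rotate lightDir into the same frame as the map normal (dot with the non-inverted base frame)
    vec3 l = vec3(dot(frameX, lightDir), dot(frameY, lightDir), dot(frameZ, lightDir));

    // lightDir points from the light toward the fragment, so negate it for the diffuse term
    float diffuse = max(dot(n, -l), 0.0);

    gl_FragColor = vec4(vec3(diffuse), 1.0);
}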

As far as I can see, what I have said is right, but before I continue I just want to open what has been said to any mathemagicians who might be lurking, ready with mathemagical spells that would undo all my intuitions on this matter. I have not extensively tested this hypothesis, and quite frankly getting this far finds me several fishes short of a bicycle, so criticism of the truth of these intuitions is appreciated.

If the mathemagicians have all been purged (“Huzzah” to quote Hiro), I would just like to end by considering some of the shortfalls that strike me about texture space and expand on how base frames as I have suggested above might be implemented over complex models.

Texture space tangent mapping seems to face several problems:

  1. one often submitted method for generating the texture space matrix is to calculate T, use N from the model, and cross(T, N) to get B. The problem here is that N is most often exported from the modeling program as a smoothed vertex normal, not an unsmoothed face normal, therefore it mostly will not be perpendicular to T, therefore the matrix will not be orthonormal; and without a pet mathemagician I do not understand how far out a non-orthonormal frame would throw a vector during rotation

  2. producing an orthonormal frame using the face normal and calculating both T and B is also mostly problematic, because any stretch introduced to the texture during the unwrapping of the model equates to non-perpendicular T and B vectors, and many parts of complex models suffer thus

  3. another problem, which I suspect explains the difficulty getting texture spaced models to behave coherently around seams, is that different models will have different texture spaces. Thus if you have a model of a head that you want to place on the model of a shoulder, then even if the normals in the maps cohere at the pixels, I do not know if they should be expected to return the same lightDir values if the texture spaces are different.

  4. maybe the seam problem is due to a mixture of the above, and this. Another problem can be seen by exposing the misnomer that is calling texture space tangent space. A tangent of a 2D circle is a line perpendicular to a normal on the circumference of the circle. A tangent of a 3D sphere is a plane perpendicular to a normal on the surface of the sphere. The normals being talked about here are smoothed vertex normals. Textures mostly do not lie on tangents to models for exactly the same reason as there is a difference between smoothed and unsmoothed normals. Now, if the lightDir is considered a vector in texture space, and if texture space from one model to the next differs, including what is taken to be the tangent and therefore the light vector relative to the pixel, then I’ll leave the rest for Socrates.

This brings me to the final section; how could the above be exploited on complex models. I haven’t tried it, but intuitively I should think every vertex could have an orthonormal base frame constructed about it. Take the smoothed normal as the zbase. Cross product the zbase with world y to get the xbase. Cross product zbase with xbase to get the ybase. If zbase == world y, then create ybase first by crossing zbase and world x. This would give a true orthonormal frame universal to all models.
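
A sketch of that construction in GLSL, as I read it (untested):

// Build an orthonormal base frame about a smoothed vertex normal.
mat3 buildFrame(vec3 smoothedNormal)
{
    vec3 zb = normalize(smoothedNormal);
    vec3 xb;
    vec3 yb;
    if (abs(zb.y) > 0.999)
    {
        // zbase is (nearly) world y, so build ybase first from world x, as described above
        yb = normalize(cross(zb, vec3(1.0, 0.0, 0.0)));
        xb = cross(yb, zb);
    }
    else
    {
        xb = normalize(cross(zb, vec3(0.0, 1.0, 0.0)));
        yb = cross(zb, xb);
    }
    // note that GLSL's mat3 constructor takes the axes as columns
    return mat3(xb, yb, zb);
}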

Author: Stephen Jones

Tangent space is different from texture space: for a smoothed triangle you get three TBN frames (one per vertex). The TBN used at a given vertex of a mesh is the same when that vertex is shared by two triangles that should be fully smoothed; this ensures correct seams. Three TBN frames indeed imply that, when rendering, the TBN used has to be computed per pixel (=> in your case the inverse matrix would have to be too).

Definitely, if we are talking about a complex model. But if we are talking of a flat floor where all the model's vertex normals point up the y axis (not talking about the mapped perturbed normals), then my system only needs one tangent space (base frame) for all vertices that make up the floor. A texture-based system still requires a tangent/bitangent per vertex.

Every object can be considered to be in its own (unique) Euclidean sub-space. The fact that world space and the various object spaces are all Euclidean doesn't mean that they aren't logically independent. In fact it isn't uncommon for the total world space to be divided into different subspaces for precision and manageability purposes. Working in 'object space' is needed if (for example) the ULP of the outer edges of a world subspace is sufficient for placement but not rendering. Another common example is if it is cheaper to transform world information (such as lights) to object space than it is to transform all of the needed object information into world (or really rendering, which is different) space.


Any vector is only meaningful if it is specified in a coordinate frame. The vector vec=(0.2, 0.3, 0.87) is actually:
(code omitted)

vec = .2i + .3j + .87k (replace i, j, k with whatever basis names you prefer). The paren notation is just shorthand for the previous. You're basically defining extraction functions for the magnitude of each basis. (stating the obvious for a wider target audience) So I don't really understand what you're saying here. I'm guessing that you mean that vectors are positionless and that we're using geometric reasoning to define a position relative to an assumed coordinate frame.

Moving lightDir into object space is to multiply the vector by the same transformation matrix which is applied to the model: object space is world space with transformation matrices applied. That you wish to consider the space as a logical entity in its own right after this fact is fine, but irrelevant to the discussion except insofar as both use orthogonal base frames (rotation matrices) to perform the switch between spaces.

The reason I belaboured the relationship between a vector and its dot products with a base frame is that this is the fundamental working aspect of tangent space. Tangent space, be it the base frames I have talked about or texture space, is a matter of moving vectors between orthogonal base frames so that the vectors come to share the same base frame; only then can the lighting calculations work.

Here's a ponderer: Given that Photoshop does not read model data, it can have no idea of the tangent space associated with each vertex, therefore the normal map filter cannot be encoding normals into the model's tangent space… it must be encoding the normals into world space based upon the assumption that all of the model's smoothed vertex normals point out the z axis. Tangent space is used to bend lightDir, etc., from how the light direction actually falls upon the transformed model into the same world space where all normals point out the z axis.