projective texturing

I am attempting to project a texture onto a quad. As I understand it, I have to create a texture matrix:


  tmatrix = lightview * lightprojection * cameraview

and then the texcoord is

texcoord = tmatrix * vertexPosition

am i getting this right?

Not entirely sure…


gl_Position = projectionMatrix * modelViewMatrix * inVertex;
texCoord = lightProjectionMatrix * lightViewMatrix * inVertex;
//Change texCoord from [-1, 1] to [0, 1]
texCoord = texCoord * 0.5 + 0.5;

is what I did for my shadow mapper.
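A common variant (just a sketch for comparison, not what the code above does) is to fold the remap into a bias matrix that gets pre-multiplied on the CPU; the textureMatrix uniform name here is made up:

// CPU side (conceptually): textureMatrix = bias * lightProjection * lightView,
// where bias scales and offsets x, y, z from [-w, w] to [0, w]:
//   0.5  0    0    0.5
//   0    0.5  0    0.5
//   0    0    0.5  0.5
//   0    0    0    1

uniform mat4 projectionMatrix;
uniform mat4 modelViewMatrix;
uniform mat4 textureMatrix;   // assumed pre-multiplied as above

attribute vec4 inVertex;
varying vec4 texCoord;

void main()
{
    gl_Position = projectionMatrix * modelViewMatrix * inVertex;
    // Still homogeneous: the divide by w happens when sampling (e.g. texture2DProj)
    texCoord = textureMatrix * inVertex;
}

The result stays homogeneous, so you still divide by w when sampling, which is what the rest of this thread gets into.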

I do shadow projection like this:


attribute vec3 a_position;
uniform mat4 modelViewProjection;
uniform mat4 shadowViewProjection;

varying vec4 shadowCoord;
void main()
{
    vec4 pos = vec4(a_position, 1.0);

    // Project into light clip space and remap xy from [-w, w] to [0, w];
    // the divide by w happens later via texture2DProj
    shadowCoord = shadowViewProjection * pos;
    shadowCoord.xy += shadowCoord.w;
    shadowCoord.xy *= 0.5;

    gl_Position = modelViewProjection * pos;
}

Then I can read from the projected texture with

texture2DProj(u_texture, coord);

I have pretty much tested every way and concluded that this is the fastest way to do it, at least on mobile GPUs, because touching texcoords in the fragment shader causes dependent texture reads on some hardware. A notable example is PowerVR, which is pretty popular (all iOS + some Android devices).
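For completeness, a minimal fragment shader to go with the vertex shader above might look something like this (a sketch only; u_texture is assumed to be the projected color texture, and the depth comparison you would do for actual shadowing is left out):

precision mediump float;

uniform sampler2D u_texture;
varying vec4 shadowCoord;

void main()
{
    // texture2DProj divides shadowCoord.xy by shadowCoord.w before the lookup,
    // completing the [0, 1] remap prepared in the vertex shader
    gl_FragColor = texture2DProj(u_texture, shadowCoord);
}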

P.S. Just noticed that some GPUs can pre-multiply 3-4 matrices outside of the vertex shader for free. Beyond that the cost can skyrocket really fast, so I won't rely on it and keep doing the multiplication on the CPU.


    gl_Position = A * B * C * pos;
// is the same as
    gl_Position = ABC * pos;

shadowCoord.xy += shadowCoord.w;

What is this? Some kind of hack to add 1 to x and y to go from [-1, 1] to [0, 1]?

Short version for theagent:
It’s not a “hack”, but yes, that’s exactly what it’s for.

Long version with explanation:
Basically we want to shift from unit cube coordinates to texture coordinates, [-1, 1] -> [0, 1], so (uv + 1) / 2 is what we want to do.
But that can only be done after we are back from homogeneous coordinates, which means dividing the coordinate by w. That divide is not a linear operation, so it can't be done in the vertex shader because of interpolation; it has to happen at the fragment level. That's exactly what texture2DProj is for: it's simply texture2D with a cheap w divide done by the hardware.

So instead of doing this at the fragment level:

shadowCoord.xy /= shadowCoord.w;
shadowCoord.xy = shadowCoord.xy * 0.5 + 0.5;

it can be split like this:


//vertex
shadowCoord.xy += shadowCoord.w;
shadowCoord.xy *= 0.5;

//fragment
shadowCoord.xy /= shadowCoord.w; 
//which can then be replaced with texture2DProj
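To spell out why that split is equivalent (same names as above):

// vertex:    shadowCoord.xy = (xy + w) * 0.5
// fragment:  ((xy + w) * 0.5) / w  =  (xy / w) * 0.5 + 0.5

So the divide by w in the fragment (or inside texture2DProj) lands you exactly on the (uv + 1) / 2 remap.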

Edit: Just thought it through and noticed that the simple method always works when w is 1, but can that be guaranteed for shadow mapping? This method also works for a deferred renderer where you want to map a world position to screen coordinates without additional fragment load.
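A sketch of how I read that deferred case (made-up names; u_viewProjection is assumed to be the camera's projection * view, and a_worldPosition a position already in world space, e.g. a light volume whose fragments need to sample a screen-space buffer):

uniform mat4 u_viewProjection;
attribute vec3 a_worldPosition;
varying vec4 screenCoord;

void main()
{
    vec4 clipPos = u_viewProjection * vec4(a_worldPosition, 1.0);
    gl_Position = clipPos;

    // same trick as above: remap to [0, w] while still homogeneous,
    // then texture2DProj(u_sceneTexture, screenCoord) in the fragment shader
    screenCoord = clipPos;
    screenCoord.xy += screenCoord.w;
    screenCoord.xy *= 0.5;
}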

Everything in rendering is a “hack” except for ray tracing :wink:

It’s awesome to see practical shader discussions going on though :slight_smile:

Why would I need to use texture2DProj? I thought the vertex shader gave the correct texcoords.

Even ray tracing is a hack. You need to start doing photon mapping to stop being a hack, but current implementations of that only map a few thousand photons so it’s still hacky at the end of the day.

What constitutes a “hack”?

Maybe something “hard-coded”? I often see it as non-OOP code, but who knows! :slight_smile:

I suppose any simulation could be called a hack, but ray-tracing is at least based somewhat on the real physical model, whereas other approaches are basically stagecraft. But as long as the result is believable, there’s nothing wrong with it.

Speaking of hacks, I have an idea to make ray tracing faster, but can we please get back on topic?

Why you need to use texture2DProj is explained above already. You “need” to divide by w, but it's cheaper to use the hardware for that than shader code.
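In other words, these two should give the same result; the first just pays for the divide in shader code while the second lets the texture unit do it (same names as the earlier post):

// manual divide in the fragment shader
gl_FragColor = texture2D(u_texture, coord.xy / coord.w);

// hardware divide
gl_FragColor = texture2DProj(u_texture, coord);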