3D perspective correct texture mapping

Hello. I’m looking for help with an algorithm for 3D affine perspective correct texture mapping. This is basically rasterizing a 2D image in 3D space orthogonally.

You have either affine or perspective correct texture mapping. What is “affine perspective correct” supposed to be?

Ooop sorry I meant perspective correct :smiley:

Use hardware. :slight_smile:

It’s not much different from affine texture mapping, where you interpolate u/v linearly along the edges and for each scan line. The problem here is that u and v are linear in 3D, but not in screen space. By doing it that way, you’ll get that PlayStation 1 wobble effect in the textures.
However, what is linear in screen space is “anything” divided by z. So instead of interpolating u and v, you interpolate u/z and v/z. To get the proper texture coordinates from this, you have to know the current z. z itself is again not linear in screen space, but 1/z is, so you interpolate three values: u/z, v/z and 1/z. To get u and v for each pixel, you simply divide u/z and v/z by 1/z… which is rather costly when done per pixel. You can optimize a little here by doing that divide only every 8/16/32/whatever pixels and interpolating linearly between these correct values.
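Roughly like this per scanline, as a sketch (assuming the edge interpolation has already given you u/z, v/z and 1/z at both ends of the span; putTexel is a made-up stand-in for the texture fetch and pixel write):

// Perspective-correct texturing across one scanline (sketch, not drop-in code).
// u/z, v/z and 1/z are linear in screen space, so they are stepped with
// constant deltas and u, v are recovered with one divide per pixel.
class PerspectiveSpan {

    void drawSpan(int y, int xStart, int xEnd,
                  float uOverZ0, float vOverZ0, float oneOverZ0,
                  float uOverZ1, float vOverZ1, float oneOverZ1) {
        int width = xEnd - xStart;
        if (width <= 0) return;

        float duOverZ   = (uOverZ1   - uOverZ0)   / width;
        float dvOverZ   = (vOverZ1   - vOverZ0)   / width;
        float dOneOverZ = (oneOverZ1 - oneOverZ0) / width;

        float uOverZ = uOverZ0, vOverZ = vOverZ0, oneOverZ = oneOverZ0;

        for (int x = xStart; x < xEnd; x++) {
            float z = 1.0f / oneOverZ;   // the per-pixel divide
            float u = uOverZ * z;
            float v = vOverZ * z;

            putTexel(x, y, u, v);        // made-up stand-in for texel lookup + plot

            uOverZ   += duOverZ;
            vOverZ   += dvOverZ;
            oneOverZ += dOneOverZ;
        }
    }

    void putTexel(int x, int y, float u, float v) { /* look up texel, write pixel */ }
}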

If done well, it’s pretty fast: http://www.jpct.net/quapplet/

There’s always the quadratic approximation, which works in a fair number of situations, and the error is easy to calculate, so you can decide to subdivide or to use a different routine. But I wouldn’t bother: CPUs are much faster now than they were when software rendering was required, and latency (in cycles) has also decreased. Then again, I wouldn’t write a software renderer today either.
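One way to sketch that quadratic idea (the three-sample fit and the names are my own illustration, not any particular engine’s routine): take exact, divided values of u at the start, middle and end of a span, fit a quadratic through them, and evaluate that across the span instead of dividing per pixel; the same goes for v.

// Quadratic approximation of perspective-correct u over one span (sketch).
// u0, uMid, u1 are exact values at t = 0, 0.5 and 1 (each computed with a divide);
// the quadratic a*t*t + b*t + c reproduces all three and is cheap to evaluate.
class QuadraticApprox {

    float[] fitQuadratic(float u0, float uMid, float u1) {
        float c = u0;
        float a = 2.0f * u0 + 2.0f * u1 - 4.0f * uMid;
        float b = u1 - u0 - a;
        return new float[] { a, b, c };
    }

    float evalQuadratic(float[] q, float t) {
        return (q[0] * t + q[1]) * t + q[2];   // Horner form
    }
}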

Oh, and an easy and reasonable thing to do is to be correct every Nth pixel (do the divide) and lerp the coordinates for the pixels in between. Quake did this at every 4th pixel, if memory serves.
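In sketch form (SPAN, putTexel and the parameter layout are just illustrative assumptions):

// "Correct every Nth pixel" variant (sketch): do the 1/z divide only at span
// boundaries and linearly interpolate u and v for the pixels in between.
class SubdividedSpan {

    static final int SPAN = 16;   // divide only every SPAN pixels (pick 4/8/16/...)

    void drawSpan(int y, int xStart, int xEnd,
                  float uOverZ, float vOverZ, float oneOverZ,
                  float duOverZ, float dvOverZ, float dOneOverZ) {
        // Exact u, v at the left end of the scanline.
        float z = 1.0f / oneOverZ;
        float u = uOverZ * z;
        float v = vOverZ * z;

        for (int x = xStart; x < xEnd; ) {
            int step = Math.min(SPAN, xEnd - x);

            // Exact values at the end of this sub-span (one divide per SPAN pixels).
            float uOverZEnd   = uOverZ   + duOverZ   * step;
            float vOverZEnd   = vOverZ   + dvOverZ   * step;
            float oneOverZEnd = oneOverZ + dOneOverZ * step;
            float zEnd = 1.0f / oneOverZEnd;
            float uEnd = uOverZEnd * zEnd;
            float vEnd = vOverZEnd * zEnd;

            // Plain linear interpolation inside the sub-span.
            float du = (uEnd - u) / step;
            float dv = (vEnd - v) / step;
            for (int i = 0; i < step; i++, x++) {
                putTexel(x, y, u, v);   // made-up texel lookup + plot
                u += du;
                v += dv;
            }

            u = uEnd;
            v = vEnd;
            uOverZ = uOverZEnd;
            vOverZ = vOverZEnd;
            oneOverZ = oneOverZEnd;
        }
    }

    void putTexel(int x, int y, float u, float v) { /* look up texel, write pixel */ }
}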

That’s what I wrote. Quake did it every 16 pixels, btw.

You did indeed.

Are u and v the coordinates of the pixel on the screen, or are they the vector coordinates? An example would really help.

Neither. u and v are the texture coordinates. Each vertex has x,y,z to define the point in space and u,v to define the texture coordinates.
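A minimal sketch of that layout (field names are just for illustration):

// One vertex of a textured triangle: its position in space plus the point
// in the texture image that should end up at this corner.
class Vertex {
    float x, y, z;   // position in 3D space
    float u, v;      // texture coordinates, typically 0..1 across the texture
}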

Sorry, I don’t quite understand what u and v are then. Are they just temporary coordinates?

Source: Texture Mapping Mania

Ahh, I understand u and v now.

As an aside I’d suggest not wasting your time writing a software rasterizer. It’s too big a time commitment vs. the useful learning (approaching none) you’ll get from it. Do something else.