Hi everyone, the current task I’ve set myself is to take an array of 4 three-dimensional points and project them onto a two-dimensional canvas. I’m having a bit of trouble understanding what I’ve read and tried to put into practice, and was wondering whether anyone could tell me where I’ve gone wrong.
Say for example I have 4 three-dimensional points representing the corners of a quadrilateral, with two points at Z coordinate 20 and the other two at Z coordinate 40, making up a square whose face points down the X axis. If I then translate this set of points 100 units down the negative X axis, I should now be able to see the square, rendered with perspective projection, meaning that the two furthest points appear closer together. This is the perspective projection I’ve used: http://msdn.microsoft.com/en-us/library/bb205351(v=vs.85).aspx
In the “Remarks” section.
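For reference, this is my reading of the matrix in that Remarks section (D3DXMatrixPerspectiveFovLH), written out as a little Java helper. The class and parameter names are mine; `fovY` is the vertical field of view in radians, and `zn`/`zf` are the near/far plane distances:

```java
// Sketch of the perspective projection matrix from the MSDN Remarks
// (D3DXMatrixPerspectiveFovLH). Row-major, intended for row-vector * matrix.
public class Perspective {
    /** fovY in radians; aspect = width / height; zn, zf = near/far planes. */
    public static double[][] fovLH(double fovY, double aspect, double zn, double zf) {
        double yScale = 1.0 / Math.tan(fovY / 2.0); // cot(fovY / 2)
        double xScale = yScale / aspect;
        return new double[][] {
            { xScale, 0,      0,                    0 },
            { 0,      yScale, 0,                    0 },
            { 0,      0,      zf / (zf - zn),       1 },
            { 0,      0,      -zn * zf / (zf - zn), 0 }
        };
    }
}
```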
Now my understanding breaks down when I come to transform the points into their perspective view. From what I’ve gathered, the point, which can be represented as a 3-dimensional vector, is multiplied by the view matrix, which I’ve initially set to the identity matrix. From there you should presumably have a transformed point (another 3-dimensional vector), which you then multiply by the projection matrix, which in my case is a perspective projection matrix.
From that I should end up with a new transformed vector whose coordinates I can simply render to the screen as ordinary pixel positions. So, in a nutshell:
TransformedPoint = Point x ViewMatrix x ProjectionMatrix
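Concretely, the multiply I’m doing looks roughly like this (names are mine; the 3D point gets padded with a fourth component of 1 so it can be multiplied by the 4×4 matrix, and the result comes back as a 4-vector):

```java
public class Transform {
    /** Multiplies the row vector (x, y, z, 1) by a 4x4 row-major matrix. */
    public static double[] mul(double[] p, double[][] m) {
        double[] in = { p[0], p[1], p[2], 1.0 }; // pad to homogeneous coords
        double[] out = new double[4];
        for (int col = 0; col < 4; col++) {
            for (int row = 0; row < 4; row++) {
                out[col] += in[row] * m[row][col];
            }
        }
        return out; // (x', y', z', w')
    }
}
```

With the view matrix set to the identity, the first multiply leaves the point unchanged, so only the projection matrix should be altering anything; at the moment I then just take the x and y of the result as screen coordinates.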
Is this right? At the moment it appears to just produce the same point as the original. I know I’ll have to post my actual code up here, but could you tell me whether the logic itself is sound anyway?
Also, I’m rendering all this into an AWT Canvas.
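The rendering side is nothing fancy; a stripped-down version of what I have looks like this (class and helper names are mine), mapping each transformed point’s x/y into pixel space and plotting a dot on the Canvas:

```java
import java.awt.Canvas;
import java.awt.Graphics;

public class PointCanvas extends Canvas {
    private final double[][] points; // already-transformed (x, y) pairs

    public PointCanvas(double[][] points) {
        this.points = points;
    }

    /** Maps a coordinate in [-1, 1] to a pixel index in [0, size). */
    public static int toPixel(double coord, int size) {
        return (int) ((coord + 1.0) * 0.5 * (size - 1));
    }

    @Override
    public void paint(Graphics g) {
        for (double[] p : points) {
            int px = toPixel(p[0], getWidth());
            int py = toPixel(-p[1], getHeight()); // flip y: screen y grows downward
            g.fillOval(px - 2, py - 2, 4, 4);
        }
    }
}
```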
Thanks in advance for any help!
Paul