Converting Screen Coordinates to World Coordinates

When I talked about the camera transformation I did not think of the projection matrix. The projection matrix transforms eye coordinates into a unit cube that is used for clipping.

I've not used Xith so I don't know how it works. But the modelview matrix that I'm referring to will transform coordinates in object space into eye (or camera) space.

This matrix includes the camera (view) transform. It might be that you've not included the view matrix in the modelview you send in to the unProject function, but I'm only guessing. If so, you have to multiply the view matrix with the matrix that transforms object coordinates into world coordinates.
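
To make the chain concrete, here is a minimal sketch of how the matrices compose (illustrative javax.vecmath code, not a Xith API):

    import javax.vecmath.Matrix4f;

    class MatrixChain {
        // The full vertex pipeline, outermost first:
        //   clip = projection * view * model * objectPoint
        static Matrix4f modelView(Matrix4f view, Matrix4f model) {
            // The "modelview" that unProject expects is view * model
            // (object -> eye); passing only the model part leaves the
            // camera out.
            Matrix4f mv = new Matrix4f();
            mv.mul(view, model);  // mv = view * model
            return mv;
        }
    }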

Does this make sense?

Yup, that makes sense. I hate to keep pestering you with questions, but here comes another one. (A little more concrete this time.)

I have an object on the screen located at (50,0,50) in world coordinates. It is currently at the center of the screen.

I click on the object.

I then call gluUnProject using the projection matrix and a modelview matrix which only contains the camera's transformation (because I want world coordinates back). In addition I pass in the viewport, the mouseX and mouseY coordinates, and the value 1.0 for winz, which I believe is for the far clipping plane.

For the viewport I am using:

    viewPort[] = {0, canvasHeight, canvasWidth, canvasHeight}

The first two values should be the location of the lower left of the screen. I know the default is 0,0, but I think in Xith3D 0,0 is the top left, so I adjusted it accordingly.
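
For comparison, the conventional GL viewport is {x, y, width, height} with the origin at the bottom left, normally {0, 0, canvasWidth, canvasHeight}; the top-left window origin is usually handled by flipping the mouse y instead (a hedged sketch, not Xith-specific; canvasWidth, canvasHeight and mouseY are the obvious illustrative names):

    // Conventional viewport: origin (0,0), full canvas size
    int[] viewport = { 0, 0, canvasWidth, canvasHeight };

    // Window systems put (0,0) at the top left, GL at the bottom left,
    // so flip the mouse y before unprojecting:
    double winY = canvasHeight - mouseY;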

I get back the following values from the gluUnProject call:

targetx: 49.99645
targety: -40.57691
targetz: 15.948797

Then I call gluUnProject again with the exact same parameters, except with winz of 0 for the near clipping plane, and I get:

eyex: 50.00118
eyey: -43.640434
eyez: 13.434351

I'm a little concerned because I thought that at least one of the values in target and eye would have to be the same, since the only parameter that changed was the winz. Is that assumption in fact correct?

No, it's not correct. I don't know exactly how to explain it. For one of the values to stay the same, the ray would have to lie exactly in one of the axis planes in 3D (for example, a ray with constant x would have to lie in a plane parallel to the yz plane), which is almost impossible to do even if you try :slight_smile:

Btw, your eye and target look correct. Does it work?
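
As a sanity check, those two points define the pick ray. Using the numbers posted above (a minimal javax.vecmath sketch):

    import javax.vecmath.Point3f;
    import javax.vecmath.Vector3f;

    public class RayCheck {
        public static void main(String[] args) {
            // winz = 0.0 unprojects to the near plane, winz = 1.0 to the far plane
            Point3f eye = new Point3f(50.00118f, -43.640434f, 13.434351f);
            Point3f target = new Point3f(49.99645f, -40.57691f, 15.948797f);

            Vector3f dir = new Vector3f(target);
            dir.sub(eye);     // far minus near: points away from the camera
            dir.normalize();  // the pick ray is eye + t * dir, t >= 0
            System.out.println("ray direction: " + dir);
        }
    }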

Did you ever get this sorted out? I’d like to do a similar thing…

Kev

Oh well, if anyone's interested, I found a way of getting it to work… but it's not pretty or super accurate :wink:

Kev

I’m interested! Can you post source? Did you use gluUnproject or did you come up with another way to do it?

Ok… this is what I use (well, roughly). I'd be happy if someone would tell me it's wrong :slight_smile:



    // Convert a canvas coordinate to world coordinates at the near plane
    public Point3f toWorld(int x, int y) {
        return toWorld(x, y, view.getFrontClipDistance());
    }

    // Convert a canvas coordinate to world coordinates at the given
    // depth in front of the camera
    public Point3f toWorld(int x, int y, float depth) {
        float fov = view.getFieldOfView();
        float width = canvas.getWidth();
        float height = canvas.getHeight();

        // Half-extents of the visible "panel" at the given depth.
        // (If getFieldOfView() returns the full angle rather than the
        // half angle, this should arguably be Math.tan(fov / 2) * depth,
        // which may be why this is "not super accurate".)
        float panelY = (float) (Math.tan(fov) * depth);
        float panelX = panelY * (width / height);

        // Canvas position scaled to [0..2] on each axis
        float xp = x / (width / 2);
        float yp = y / (height / 2);

        // Map into [-1..1], flipping y (canvas origin is top left),
        // and scale to the panel; the camera looks down negative z
        Point3f pt = new Point3f((-1 + xp) * panelX,
                                 (-yp + 1) * panelY,
                                 -depth);

        // Transform from view (camera) space into world space
        Transform3D v = new Transform3D();
        view.getTransform(v);

        v.transform(pt);
        return pt;
    }


    public void castRay(int x, int y, float targetHeight) {
        Point3f center = toWorld(x, y);
        Point3f d = toWorld(x, y, view.getFrontClipDistance() + 0.1f);
        // Direction from the nearer point towards the deeper one
        Vector3f forward = new Vector3f(d);
        forward.sub(center);

        // at this point the ray is defined by
        // point "center" and vector "forward"
    }

So all I currently do is convert the screen coordinates to world coordinates based on the view transform and a given depth, and then again based on a second depth. The vector between these two points gives the direction of the ray…
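
Note that the castRay fragment never uses its targetHeight parameter; presumably the omitted part intersects the ray with a horizontal plane at that height. A minimal sketch of that step, using the names from the fragment (my assumption, not necessarily how it was finished):

    // Intersect the ray (center, forward) with the plane y = targetHeight;
    // only valid when the ray is not parallel to the plane
    if (forward.y != 0.0f) {
        float t = (targetHeight - center.y) / forward.y;
        if (t >= 0.0f) {
            Point3f hit = new Point3f(forward);
            hit.scale(t);     // hit = t * forward
            hit.add(center);  // hit = center + t * forward
            // "hit" is the world position under the mouse at that height
        }
    }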

It seems to work ok, there are some new screenshots at:

http://www.newdawnsoftware.com/martian

of what I’m trying to do… which I think might match up with what you’re doing.

Incidentally, I also use this code to generate the geometry for the “rubber band” used to select a group of aliens :slight_smile:

EDIT: Please excuse typos, a little christmas “spirit” has been ingested at this point :wink:

Kev

When I get more than one result from the View.pick() function, I’d like to be able to tell which object is “closest” to the screen. The easiest way I figured to do this is to convert the mouse click point to world coordinates and then compare distances between the converted world position and the node positions to find the shortest one.

But to be completely honest, I didn't quite understand the discussion that took place in this thread :-[ (I'm new to 3D). Could shochu or anyone else please explain what the parameters of the “xithUnProject” function listed above are, and where I would get them from? (I understand what winX, winY and results are, but not the rest.)

To find which picked object is closest to the screen, you can use PickRenderResult.getZMin and getZMax. They return the smaller and larger distances between the eye and the object (they can differ if the object's geometry intersects the ray more than once).
At least, that's how I understand it.
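
For example, a minimal sketch of choosing the closest result this way (how you obtain the results array depends on your pick call, e.g. View.pick, so treat the surrounding code as assumed):

    // Keep the pick result whose nearest intersection is closest to the eye
    static PickRenderResult closest(PickRenderResult[] results) {
        PickRenderResult best = null;
        for (int i = 0; i < results.length; i++) {
            if (best == null || results[i].getZMin() < best.getZMin()) {
                best = results[i];
            }
        }
        return best;
    }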

Here are two methods based on the previously posted xithUnmap: one that creates a ray in vworld coordinates from a canvas (mouse) coordinate, and another that converts a vworld ray into a given node's coordinate system.
It is working all right for me, but I cannot say I have tested it thoroughly yet.
How about including this set of methods in a Xith utility class?

It should be tested, and there is no problem including it in, say, PickRenderResults, or somewhere around there. Or View itself…

Please submit a patch along with a test case that tests/demonstrates the functionality.

Yuri

I created issue 69 (https://xith3d.dev.java.net/issues/show_bug.cgi?id=69)

It seems that the source code of my methods somehow vanished from my previous post, so those interested can download the attachment of issue 69.

This method is based on the Java3D source:

    public void createPickRay(int x, int y, Point3f o, Vector3f d) {
        // Canvas coordinate mapped to [-0.5..0.5] on each axis, y flipped
        float rx = (float) x / (float) getWidth() - 0.5f;
        float ry = 1.0f - (float) y / (float) getHeight() - 0.5f;
        float px = rx * physicalWidth;
        float py = ry * physicalHeight;
        float vpd = camera.getViewPlaneDistance();
        if (camera.getPerspective())
            o.set(0.0f, 0.0f, vpd);   // perspective: ray starts at the eye
        else
            o.set(px, py, vpd);       // orthographic: ray starts above the point
        d.set(px, py, 0.0f);          // towards the point on the view plane
        d.sub(o);
        d.normalize();

        // t1 is a scratch Transform3D field; transform ray into world space
        view.getTransform(t1);
        t1.transform(o);
        t1.transform(d);
    }

I maintain physicalWidth and physicalHeight as the actual width and height of the canvas on the screen for accurate image scale. Any values for physicalWidth and physicalHeight should work, provided the aspect ratio is preserved.

View plane distance can be calculated as:

    0.25f * physicalWidth / ((float) Math.tan(0.5f * fieldOfView))

It works!

A better way to calculate view plane distance is:
0.5f*physicalHeight/(float)Math.tan(fieldOfView);
:slight_smile:
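
Both formulas are instances of the same pinhole relation, tan(halfAngle) = halfExtent / viewPlaneDistance; which constant and extent apply depends on whether fieldOfView stores the full or the half angle, and whether it is measured across the width or the height. A hedged sketch:

    // View plane distance from the pinhole relation:
    //   tan(halfAngle) = (physicalExtent / 2) / viewPlaneDistance
    // halfAngle is half the view angle across the chosen extent.
    static float viewPlaneDistance(float physicalExtent, float halfAngle) {
        return 0.5f * physicalExtent / (float) Math.tan(halfAngle);
    }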

I know this is a long time after, but if anyone is still interested, I've done exactly what the initial poster wanted, but in C++. I don't know enough about Xith3D to know if this is even doable here, but here is the code:

void XField::locate(int clickX, int clickY, double *x, double *z){
    double pos[3];

    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    // Redraw just the ground quad so the depth buffer holds its depth values
    glPushMatrix();
    glTranslated(0.0, 0.0, 0.0);
    glBegin(GL_TRIANGLES);
        glVertex3f(-virWidth/2, 0.0, virDepth/2);
        glVertex3f(virWidth/2, 0.0, virDepth/2);
        glVertex3f(virWidth/2, 0.0, -virDepth/2);
    glEnd();
    glBegin(GL_TRIANGLES);
        glVertex3f(-virWidth/2, 0.0, virDepth/2);
        glVertex3f(-virWidth/2, 0.0, -virDepth/2);
        glVertex3f(virWidth/2, 0.0, -virDepth/2);
    glEnd();
    glPopMatrix();

    GLfloat winX, winY, winZ;
    GLdouble modelMatrix[16];
    GLdouble projMatrix[16];
    GLint viewport[4];

    glGetDoublev(GL_MODELVIEW_MATRIX, modelMatrix);
    glGetDoublev(GL_PROJECTION_MATRIX, projMatrix);
    glGetIntegerv(GL_VIEWPORT, viewport);

    // Flip y (window origin is top left, GL origin is bottom left),
    // then read the depth under the cursor to use as winZ
    winX = (float)clickX;
    winY = (float)viewport[3] - (float)clickY;
    glReadPixels(int(winX), int(winY), 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, &winZ);

    // Unproject the window coordinate (with the read-back depth)
    // into world coordinates
    gluUnProject(winX, winY, winZ,
                 modelMatrix, projMatrix, viewport,
                 &pos[0], &pos[1], &pos[2]);

    *x = pos[0];
    *z = pos[2];
}

So what is the easiest way to implement this… I mean, do we have to do:

    gl.glGetDoublev(GL.GL_MODELVIEW_MATRIX, mvmatrix);
    gl.glGetDoublev(GL.GL_PROJECTION_MATRIX, projmatrix);
    gl.glGetIntegerv(GL.GL_VIEWPORT, viewport);
    glu.gluUnProject(…)

Also, if I call GL functions then I am getting:

Exception in thread “main” net.java.games.jogl.GLException: glGetError() returned the following error codes after a call to glGetDoublev(): GL_INVALID_OPERATION GL_INVALID_OPERATION GL_INVALID_OPERATION GL_INVALID_OPERATION GL_INVALID_OPERATION GL_INVALID_OPERATION GL_INVALID_OPERATION GL_INVALID_OPERATION GL_INVALID_OPERATION GL_INVALID_OPERATION GL_INVALID_OPERATION
at net.java.games.jogl.DebugGL.checkGLGetError(DebugGL.java:13910)
at net.java.games.jogl.DebugGL.glGetDoublev(DebugGL.java:4218)

This may be because I am not getting the correct GL context. To get the GL context I am using:

    ((GLCanvas) canvasPeer.getComponent()).setRenderingThread(Thread.currentThread());
    gl = ((CanvasPeerImpl) canvasPeer).getGl();
    glu = ((CanvasPeerImpl) canvasPeer).getGlu();

shochu, you must have done this… how did you solve this problem?
So what is the right way to go about this? It would be of great help if somebody has a code snippet for this. Thanks in advance.

If I have to query the state machine, how do I go about it in Xith?
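
Presumably the queries have to run on the rendering thread while the context is current, i.e. inside the GLEventListener callbacks; calling them from main() produces exactly the GL_INVALID_OPERATION errors above. A hedged sketch using the old net.java.games.jogl API (class and field names are mine):

    import net.java.games.jogl.GL;
    import net.java.games.jogl.GLDrawable;
    import net.java.games.jogl.GLEventListener;

    public class UnprojectProbe implements GLEventListener {
        private final double[] modelview = new double[16];
        private final double[] projection = new double[16];
        private final int[] viewport = new int[4];

        public void display(GLDrawable drawable) {
            // The context is current here, so state queries are legal
            GL gl = drawable.getGL();
            gl.glGetDoublev(GL.GL_MODELVIEW_MATRIX, modelview);
            gl.glGetDoublev(GL.GL_PROJECTION_MATRIX, projection);
            gl.glGetIntegerv(GL.GL_VIEWPORT, viewport);
            // ... gluUnProject can now be called with these values
        }

        public void init(GLDrawable drawable) {}
        public void reshape(GLDrawable drawable, int x, int y, int w, int h) {}
        public void displayChanged(GLDrawable drawable, boolean modeChanged,
                                   boolean deviceChanged) {}
    }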

Think about it: a point on your screen converts to a ray in world coordinates.

So simply use

Canvas3D.createPickRay
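
A hedged usage sketch, assuming the signature shown earlier in the thread, createPickRay(int x, int y, Point3f origin, Vector3f dir), and illustrative names for the canvas and mouse position:

    Point3f origin = new Point3f();
    Vector3f dir = new Vector3f();
    canvas.createPickRay(mouseX, mouseY, origin, dir);
    // origin and dir now describe the pick ray in world coordinates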

I am able to convert screen coordinates to world coordinates by using glu.gluUnProject in the display method of CanvasPeerImpl.java; to get the modelview matrix I am using shape.getLocalToVworld().
Currently, if I translate my objects it gives me correct coordinates. For example, if I draw a cube (vertex 1 = 0,0,0; vertex 2 = 1,0,0; vertex 3 = 1,1,0; vertex 4 = 0,1,0) and click on vertex 2, it returns the coordinates (1,0,0). Now if I translate this cube on screen and again click on vertex 2, I still get the correct coordinates, i.e. (1,0,0).
The problem comes if I rotate this cube about the X or Y axis and then click vertex 2: the coordinates I get are junk.
Strangely, if I rotate about the Z axis this problem does not occur.
So only on rotation about the X and Y axes am I unable to map my screen coordinates to world coordinates.

Arne, can you suggest what I may be doing wrong?

That's not a cube, that's a square.

What are those coordinates?
How do you translate the cube?

I don't know what glu.gluUnProject does. Does it perform a pick, or what?

Well, sorry for the small typo… basically in my project I was using a cube, but for simplicity I used a square example. Let me explain what I am doing:

  1. First I create a square using a QuadArray, setting the specified coordinates:
     vertex 1 = 0,0,0; vertex 2 = 1,0,0; vertex 3 = 1,1,0; vertex 4 = 0,1,0.

  2. To map the coordinates I am using:



    double projection[] = new double[16];
    int viewport[] = new int[4];

    gl.glGetDoublev(GL.GL_PROJECTION_MATRIX, projection);  // get the projection matrix
    gl.glGetIntegerv(GL.GL_VIEWPORT, viewport);            // get the viewport

    // For the modelview matrix of the selected node I am using
    float model_view[] = new float[16];
    node.getLocalToVworld().getOpengl(model_view);

    glu.gluUnProject(mouseX_Pos, viewport[3] - mouseY_Pos, 0,
                     model_view, projection, viewport,
                     worldX, worldY, worldZ);

Hence I get the world coordinates of the mouse in worldX, worldY, worldZ.

  3. I am not using createPickRay, as my requirement is that even if I translate or rotate the square, clicking on its vertices should return the same values; that is, for vertex 2 it should always be (1,0,0).
     This means I have to take into account the modelview matrix of the shape that has been moved or rotated, so I have used gluUnProject.

  4. To translate this square I translate the TransformGroup which contains it, and to rotate it I apply a rotation to the TransformGroup.

  5. If I rotate my square about the Z axis, say by 30 degrees clockwise, and then click any vertex, it gives me correct values: when the rotation is applied, the modelview matrix gets updated, so gluUnProject is able to map to the correct values.

  6. But when I apply a rotation about the X or Y axis I get wrong values.

Hopefully this time I have been clearer about my requirement.