I am attempting to render some textured quads in front of everything else on the canvas (a HUD overlay). I was able to do this using plain JOGL with the following:
gl.glMatrixMode(GL.GL_PROJECTION);
gl.glPushMatrix();
glu.gluOrtho2D(0, Constants.WINDOW_WIDTH, 0, Constants.WINDOW_HEIGHT);
// Render the radar quad
gl.glBindTexture(GL.GL_TEXTURE_2D, TextureTools.getHudRadar());
gl.glBegin(GL.GL_QUADS);
    gl.glTexCoord2f(0, 1);
    gl.glVertex2f(Constants.HUD_POS_RADAR_REAR_X, Constants.HUD_POS_RADAR_REAR_Y);
    gl.glTexCoord2f(0, 0);
    gl.glVertex2f(Constants.HUD_POS_RADAR_REAR_X, Constants.HUD_POS_RADAR_REAR_Y - Constants.HUD_SIZE_RADAR);
    gl.glTexCoord2f(1, 0);
    gl.glVertex2f(Constants.HUD_POS_RADAR_REAR_X + Constants.HUD_SIZE_RADAR, Constants.HUD_POS_RADAR_REAR_Y - Constants.HUD_SIZE_RADAR);
    gl.glTexCoord2f(1, 1);
    gl.glVertex2f(Constants.HUD_POS_RADAR_REAR_X + Constants.HUD_SIZE_RADAR, Constants.HUD_POS_RADAR_REAR_Y);
gl.glEnd();
gl.glPopMatrix();
gl.glMatrixMode(GL.GL_MODELVIEW);
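For completeness, the way I wrap that pass (resetting both matrix stacks and disabling the depth test so the HUD always draws on top) looks roughly like the sketch below; the exact state toggles are approximate, not the literal code:

private void drawHud(GL gl, GLU glu) {
    // Switch to a pixel-space orthographic projection for the HUD pass.
    gl.glMatrixMode(GL.GL_PROJECTION);
    gl.glPushMatrix();
    gl.glLoadIdentity();
    glu.gluOrtho2D(0, Constants.WINDOW_WIDTH, 0, Constants.WINDOW_HEIGHT);

    gl.glMatrixMode(GL.GL_MODELVIEW);
    gl.glPushMatrix();
    gl.glLoadIdentity();
    gl.glDisable(GL.GL_DEPTH_TEST); // draw the HUD over the 3D scene

    // ... textured quads exactly as above ...

    gl.glEnable(GL.GL_DEPTH_TEST);
    gl.glPopMatrix();               // restore the modelview matrix
    gl.glMatrixMode(GL.GL_PROJECTION);
    gl.glPopMatrix();               // restore the perspective projection
    gl.glMatrixMode(GL.GL_MODELVIEW);
}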
How would I go about doing this with Xith? I’ve been trying variations of the following:
BranchGroup hud = new BranchGroup();

Transform3D t = new Transform3D();
t.ortho(0, 1024, 0, 768, 0, 1);
TransformGroup tg = new TransformGroup(t);
hud.addChild(tg);

Point3f[] coords = new Point3f[] {
    new Point3f(0, 0, -1f),
    new Point3f(0, 128, -1f),
    new Point3f(128, 128, -1f),
    new Point3f(128, 0, -1f)
};

QuadArray qA = new QuadArray(coords.length, GeometryArray.COORDINATES | GeometryArray.TEXTURE_COORDINATE_2);
qA.setCoordinates(0, coords);
// (texture coordinate and texture setup omitted from this snippet)

Appearance a = new Appearance();
a.setColoringAttributes(new ColoringAttributes(new Color3f(1, 1, 1), ColoringAttributes.SHADE_FLAT));

Shape3D s = new Shape3D(qA, a);
tg.addChild(s);
The quad renders at a clearly incorrect size. I suspect it has something to do with the distance of the quad from the camera (-1f here), since the original JOGL code used glVertex2f and so had no Z component at all. Any suggestions?
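For reference, this is the back-of-the-envelope math I have been using to reason about the size at z = -1. It is a generic calculation for a symmetric perspective frustum, not anything Xith-specific, and the field of view and window height are just example values:

// Sketch of the scaling I think is involved: under a perspective projection
// with vertical field of view fovY, the slice of the world visible at depth d
// is 2 * d * tan(fovY / 2) units tall, so a quad meant to cover 128 px on
// screen has to be scaled by (visible height / window height) at z = -d.
public final class HudSizing {
    public static float worldUnitsPerPixel(float fovYRadians, int windowHeightPx, float distance) {
        float visibleHeight = 2f * distance * (float) Math.tan(fovYRadians / 2.0);
        return visibleHeight / windowHeightPx;
    }

    public static void main(String[] args) {
        // Example: 45 degree vertical FOV, 768 px tall window, quad at z = -1
        float scale = worldUnitsPerPixel((float) Math.toRadians(45.0), 768, 1f);
        System.out.println("128 px would be about " + (128 * scale) + " world units");
    }
}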