Interactive JOGL

Excuse me for asking a stupid question:

Can components be drawn using JOGL that are interactive? For example, clicking on one and dragging it to a new location.

If not, can it be simulated using the mouse events from a JPanel that is being used as a container for the JOGL component, and moving them accordingly? (This assumes that, given a coordinate, I can get the appropriate JOGL coordinates.)

Also, is JOGL good for drawing single 2D graphic objects?

[quote]Excuse me for asking a stupid question:

Can components be drawn using JOGL that are interactive? For example, clicking on one and dragging it to a new location.

If not, can it be simulated using the mouse events from a JPanel that is being used as a container for the JOGL component, and moving them accordingly? (This assumes that, given a coordinate, I can get the appropriate JOGL coordinates.)

Also, is JOGL good for drawing single 2D graphic objects?
[/quote]
In short, yes, although JOGL doesn't actually have any of this functionality itself; it is merely a drawing library. While I hate to pimp my current project, it provides an example of what you seem to be talking about. Look for the thread 'not a game … yet …' in the 'Your Games Here' forum and try out the JNLP link on the page. The GUI elements can be dragged around the screen, and the button and scrollbar elements are all functional. This is done by capturing the mouse events from the GLComponent and passing them down through my own GUI event stack, in much the same way Swing or AWT does it. Given a particular orthographic transform, it's relatively easy to establish the position in 3D space of a click on the viewport itself.
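For a concrete starting point, here is a minimal sketch (my own, not taken from the project above) of wiring AWT mouse events on a JOGL canvas into your own widget layer. GuiManager, handlePress and handleDrag are hypothetical names standing in for whatever event stack you build yourself:

import java.awt.Component;
import java.awt.event.MouseAdapter;
import java.awt.event.MouseEvent;
import java.awt.event.MouseMotionAdapter;

public final class CanvasInput
{
    // Hypothetical interface for whatever GUI/event stack you roll yourself.
    public interface GuiManager
    {
        boolean handlePress(int x, int y);   // true if a widget consumed the click
        void handleDrag(int x, int y);
    }

    // Hook the AWT listeners up to a JOGL canvas; it is an ordinary AWT component,
    // so the usual listener API applies.
    public static void attach(Component glCanvas, final GuiManager gui)
    {
        glCanvas.addMouseListener(new MouseAdapter()
        {
            public void mousePressed(MouseEvent e)
            {
                // Let the in-scene GUI try to consume the click first; only if no
                // widget handled it would you go on to pick into the 3D scene.
                if (!gui.handlePress(e.getX(), e.getY()))
                {
                    // scene picking would go here (see the gluUnProject code further down)
                }
            }
        });

        glCanvas.addMouseMotionListener(new MouseMotionAdapter()
        {
            public void mouseDragged(MouseEvent e)
            {
                gui.handleDrag(e.getX(), e.getY());  // e.g. move the widget that grabbed the mouse
            }
        });
    }
}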

D.

I understand JOGL is meant for 3D objects. But does it give good performance for drawing 2D objects?

[quote]I understand JOGL is meant for 3D objects. But does it give good performance for drawing 2D objects?
[/quote]
Sure. Apparently considerably better than Java2D, if you read the posts on these forums. I've never actually used Java2D, however, so I can't confirm or deny :slight_smile: It also has the advantage that it's easier to include common effects like alpha/additive blending, and of course it'll be completely hardware accelerated to boot. Look up the series of posts by Malohkan about his game Rimscape, in which he details the performance improvements gained from converting from Java2D to LWJGL. (LWJGL is another OpenGL binding, amongst other things.)
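If it helps to picture it, here is a rough sketch (my own, not from Rimscape) of what a single 2D sprite looks like in fixed-function OpenGL: a textured quad with blending enabled, using the JOGL 1.x-style GL calls that appear later in this thread. It assumes an orthographic projection is already active and that textureId was uploaded elsewhere:

// Draw one alpha-blended 2D sprite as a textured quad.
// Assumes: ortho projection already set up, texture already uploaded.
void drawSprite(GL gl, int textureId, float x, float y, float w, float h)
{
    gl.glEnable(GL.GL_TEXTURE_2D);
    gl.glBindTexture(GL.GL_TEXTURE_2D, textureId);
    gl.glEnable(GL.GL_BLEND);
    gl.glBlendFunc(GL.GL_SRC_ALPHA, GL.GL_ONE_MINUS_SRC_ALPHA);   // standard alpha blending
    // gl.glBlendFunc(GL.GL_SRC_ALPHA, GL.GL_ONE);                // additive blending instead

    gl.glBegin(GL.GL_QUADS);
    gl.glTexCoord2f(0f, 0f); gl.glVertex2f(x,     y);
    gl.glTexCoord2f(1f, 0f); gl.glVertex2f(x + w, y);
    gl.glTexCoord2f(1f, 1f); gl.glVertex2f(x + w, y + h);
    gl.glTexCoord2f(0f, 1f); gl.glVertex2f(x,     y + h);
    gl.glEnd();
}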

D.

[quote]I understand JOGL is meant for 3D objects. But does it give good performance for drawing 2D objects?
[/quote]
Well, if you keep your z coordinate at 0, your 3D objects are 2D objects, so…

OK, I will try it. I have a Swing-based, hex-based board game, and I was wondering whether I could get better performance with large numbers of pieces (i.e. 300 to 1000 2D images drawn on a board),

and whether it was worth the effort to create my own component-type objects.

Paul Franz

[quote]OK, I will try it. I have a Swing-based, hex-based board game, and I was wondering whether I could get better performance with large numbers of pieces (i.e. 300 to 1000 2D images drawn on a board),

and whether it was worth the effort to create my own component-type objects.

Paul Franz
[/quote]
You will definitely get a performance boost using JOGL for that. You will probably be able to load them all onto the video card and get several hundred FPS.
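One way to get that kind of frame rate, sketched here as an illustration rather than a recipe: upload the piece images as textures once, then compile the static parts of the board into a display list so the whole thing is redrawn with a single call. Hex and hexes are hypothetical names for your own board data, and drawSprite is a quad-drawing helper like the sketch earlier in the thread:

int boardList;   // display list handle

// Build once (or whenever the board layout changes).
void buildBoardList(GL gl)
{
    boardList = gl.glGenLists(1);
    gl.glNewList(boardList, GL.GL_COMPILE);
    for (int i = 0; i < hexes.length; i++)
    {
        Hex hex = hexes[i];   // hypothetical per-piece record: texture id, position, size
        drawSprite(gl, hex.textureId, hex.x, hex.y, hex.width, hex.height);
    }
    gl.glEndList();
}

// Each frame: one call replays the whole compiled board.
void drawBoard(GL gl)
{
    gl.glCallList(boardList);
}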

Sorry for being lazy here. Can you print out a JOGL canvas?

Daire Quinlan wrote:

"In short, Yes. Although JOGL doesn’t actually have any of this functionality itself, It is merely a drawing library. While i hate to pimp my current project, It provides an example of what you seem to be talking about. Look for the thread ‘not a game … yet …’ in the ‘your games here forum’ and try out the JNLP link on the page. The gui elements can be dragged around the screen, the buttons and scrollbar gui elements are all functional etc etc. This is done by capturing the mouse events from the GLComponent and passing them down through my own GUI event stack in much the same way Swing or AWT does it. Given a particular orthographic transform its relatively easy to establish the position in 3d space of a click on the actual viewport itself. "

I'm interested in seeing some code that does this. Can you post a link to some, or the relevant snippet from your display method?

Hokay, this'll be a bit long, so bear with me …

First up is the MouseListener method which intercepts the actual mouse clicks on the AWT window (irrelevant bits snipped):


public void mousePressed(MouseEvent e)
{
    // Pass this to the GUI manager first.
    // If it returns false then don't pass it on to the scene or geosphere.
    if (guiManager.dispatchMouseEvent(new GluiMouseEvent(null, GluiMouseEvent.MOUSE_PRESSED, 0, e.getX(), e.getY(), 0)))
    {
        if (e.getButton() == MouseEvent.BUTTON1) // left click
        {
            Vector3f screenVec = new Vector3f(), inVec = new Vector3f();
            translateClickToRay(e, screenVec, inVec);
            stateManager.activeState().stateEvent(SlayState.SELECT_EVENT, sphere.clickPoint(screenVec, inVec));
        }
    }
}


This first passes the input into my GUI library by creating a new GluiMouseEvent, which for all intents and purposes is identical to the AWT MouseEvent class. If dispatchMouseEvent returns true, meaning none of the components has swallowed the event, then the click goes through translateClickToRay and the resulting ray is passed to whatever the active state is as a 'select' message. Depending on the active state, it does something with it. In the game-playing state, for example, the 3D 'level' will receive a 'select' message with that ray. My current game uses a GeoSphere as a level, so it's responsible for discovering which of the GameDataPoints (or hexes) is closest to the click point and marking it as selected.
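The GluiMouseEvent class itself isn't posted in this thread, so purely as a guess at its shape, something along these lines would be enough for the dispatch code further down: an event type, window coordinates, and a consumed flag a widget can set to stop propagation.

// Rough guess at the shape of an event class like GluiMouseEvent (the real one
// isn't shown here): type, coordinates, and a consumed flag.
public class GuiMouseEvent
{
    public static final int MOUSE_PRESSED = 0, MOUSE_RELEASED = 1, MOUSE_MOVED = 2;

    private final int type;
    private int x, y;              // mutable so they can be shifted into a widget's local frame
    private boolean consumed;

    public GuiMouseEvent(int type, int x, int y)
    {
        this.type = type;
        this.x = x;
        this.y = y;
    }

    public int getType() { return type; }
    public int getX()    { return x; }
    public int getY()    { return y; }

    public void consume()       { consumed = true; }
    public boolean isConsumed() { return consumed; }

    // Shift the event by (dx, dy), e.g. into or out of a widget's frame of reference.
    public void translate(int dx, int dy) { x += dx; y += dy; }
}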

I'll post some more stuff in a bit; I have to go off and photograph my flatmate's dinosaur equipment (don't ask :-))

D.

Here's the 2D setup and GUI stuff…

I use glOrtho for the projection setup, with the dimensions the same as my window dimensions. This has pros and cons.

Pros:
Really easy GUI management. The mouse-click coordinates passed in by AWT are the same as the glOrtho 'surface', so no translation or scaling is required (see the sketch after this list).
Much, much easier to get fonts and GUI elements to look good without any dodgy aliasing or filtering.

Cons:
GUI elements are fixed in size, so a large dialog at 640x480 will look like a smalllll dialog at 1280x1024. Frankly, I'm willing to live with this.
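To make the "no translation or scaling" point concrete: if the ortho surface were instead a fixed virtual resolution (say 800x600, an arbitrary example), every AWT mouse coordinate would need a conversion like the one below before hit-testing, which matching glOrtho to the window size avoids entirely.

// Only needed if the ortho dimensions differ from the window dimensions.
// Here 800x600 is an assumed virtual GUI resolution.
int toOrthoX(int mouseX, int windowWidth)  { return mouseX * 800 / windowWidth; }
int toOrthoY(int mouseY, int windowHeight) { return mouseY * 600 / windowHeight; }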

Here's the ortho setup for each draw call in my GluiManager class:



gl.glPushAttrib(GL.GL_ALL_ATTRIB_BITS);
gl.glEnable(GL.GL_TEXTURE_2D);
gl.glBindTexture(GL.GL_TEXTURE_2D, lafTexture.getTextureID());
gl.glTexEnvi(GL.GL_TEXTURE_ENV, GL.GL_TEXTURE_ENV_MODE, GL.GL_MODULATE);
gl.glDisable(GL.GL_LIGHTING);
gl.glEnable(GL.GL_BLEND);
gl.glBlendFunc(GL.GL_SRC_ALPHA, GL.GL_ONE_MINUS_SRC_ALPHA);

gl.glColor4fv(DEFAULT_COLOR);

gl.glMatrixMode(GL.GL_PROJECTION);        // select the projection matrix
gl.glPushMatrix();                        // push it
gl.glLoadIdentity();                      // reset it
gl.glOrtho(0, width, height, 0, -1, 1);   // ortho mode, matching the window size
gl.glMatrixMode(GL.GL_MODELVIEW);         // select the modelview matrix
gl.glPushMatrix();                        // push it
gl.glLoadIdentity();                      // reset it

All this does is set up the correct state and projection matrix for the GUI drawing. The glOrtho call is the important bit. The width and height are set by the containing class (normally the one that implements GLEventListener) at startup and whenever it gets a reshape event. The rest of the stuff is just gumph: it sets up the blend function, the color, and whatever look-and-feel texture I'm using to draw the windows.
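For completeness, the reshape hook mentioned above looks roughly like this. The exact GLEventListener signature differs between JOGL releases, and guiManager.setSize is a hypothetical setter, so treat this as a sketch of the idea rather than a drop-in method:

// Keep the viewport and the GUI manager's width/height in sync with the window.
public void reshape(GLAutoDrawable drawable, int x, int y, int width, int height)
{
    GL gl = drawable.getGL();
    gl.glViewport(0, 0, width, height);
    guiManager.setSize(width, height);   // hypothetical setter feeding the glOrtho call above
}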



GluiWidget gluiw;
Object[] drawArray = children.toArray();

// Draw the top-level widgets back to front.
for (int c1 = drawArray.length - 1; c1 >= 0; c1--)
{
    gluiw = (GluiWidget) drawArray[c1];
    if (gluiw.isVisible() && gluiw.isActive()) gluiw.draw(gl, glu);
}

gl.glPopMatrix();
gl.glMatrixMode(GL.GL_PROJECTION);
gl.glPopMatrix();
gl.glPopAttrib();


This draws all the top-level component objects in the GluiManager component list in reverse order, then pops all the matrices and reverts to the previous state with regard to blending, color and so on.
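The loop above implies that every widget exposes roughly the following interface. The real GluiWidget class isn't posted, so this is only a minimal guess at its shape, reused in the sketches further down:

// Minimal guess at the widget contract the GluiManager code relies on.
public interface Widget
{
    boolean isVisible();
    boolean isActive();
    Widget getParent();                      // null for a top-level widget
    int getX();                              // position relative to the parent
    int getY();
    void draw(GL gl, GLU glu);
    void processMouseEvent(GuiMouseEvent e); // may call e.consume()
}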

This next bit is the initial dispatch of the mouse events:


public boolean dispatchMouseEvent(GluiMouseEvent e)
{
    if (mouseFocus == null)
    {
        // No widget holds the mouse, so offer the event to each child in turn.
        Iterator it = children.iterator();
        while (it.hasNext())
        {
            ((GluiWidget) it.next()).processGluiMouseEvent(e);
            if (e.isConsumed()) return false;
        }
        return true;
    }
    else
    {
        // A widget has grabbed the mouse: hand the event straight to it,
        // translated into its parent's frame of reference.
        e.toLocalCoordinates(mouseFocus.getParent());
        mouseFocus.processGluiMouseEvent(e);
        e.toGlobalCoordinates();
        return !e.isConsumed();
    }
}

This passes the GluiMouseEvents received from the AWT component on to the children. One important thing to note is the mouse-focus handling. A left click on a component gives it mouse focus, so it grabs the mouse events directly, as opposed to receiving them through the normal chain of parents. The mouse events are transformed into the local frame of reference of that component's parent. This ensures there's no discernible difference between getting them through the component stack and having them passed directly to the component.
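The toLocalCoordinates/toGlobalCoordinates calls aren't shown, but the idea described above amounts to walking the parent chain and offsetting the event by each ancestor's position, roughly like this (using the hypothetical Widget and GuiMouseEvent sketches from earlier):

// Shift an event into the frame of reference of the given widget...
void toLocalCoordinates(GuiMouseEvent e, Widget frame)
{
    for (Widget w = frame; w != null; w = w.getParent())
    {
        e.translate(-w.getX(), -w.getY());
    }
}

// ...and back out to window (global) coordinates afterwards.
void toGlobalCoordinates(GuiMouseEvent e, Widget frame)
{
    for (Widget w = frame; w != null; w = w.getParent())
    {
        e.translate(w.getX(), w.getY());
    }
}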

MouseMove events are handled similarly, and also fire off MouseExit and MouseEnter events to the components to handle things like rollovers.

Then in GluiFrame, which is my top-level 'window' type component, there's added functionality in the MouseMotion listener to handle being dragged:


if (hasMouseFocus)
{
    // Move the frame by however far the mouse has travelled since the last event.
    position.x += e.getX() - oldMouseX;
    position.y += e.getY() - oldMouseY;
    oldMouseX = e.getX();
    oldMouseY = e.getY();
}

The oldMouseX and oldMouseY values are stored from frame to frame.
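For that delta to make sense, oldMouseX and oldMouseY presumably get seeded when the frame first gains mouse focus; that part isn't shown in the post, but it would be something like:

// On the press that gives this frame mouse focus, remember where the drag started.
void onMousePressed(GuiMouseEvent e)
{
    hasMouseFocus = true;
    oldMouseX = e.getX();
    oldMouseY = e.getY();
}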

D.

3D picking is a little more complicated…
My camera class updates the following arrays every time it is transformed:


gl.glGetFloatv(GL.GL_PROJECTION_MATRIX, projArray);
gl.glGetFloatv(GL.GL_MODELVIEW_MATRIX, modlArray);
gl.glGetIntegerv(GL.GL_VIEWPORT, viewArray);


These arrays are then used as inputs to the gluUnProject function to get a ray that passes from the near clip plane to the far clip plane through the mouse-click position on the 'screen':



float x = (float) e.getX();
float y = (float) (canvas.getHeight() - e.getY());   // flip y: AWT is top-down, OpenGL is bottom-up

Camera activeCamera = Scene.getInstance().activeCamera;

double[] modelArray = new double[16];
double[] projectionArray = new double[16];
int[] viewArray = new int[4];
for (int c1 = 0; c1 < 16; c1++)
{
    modelArray[c1] = activeCamera.modlArray[c1];
    projectionArray[c1] = activeCamera.projArray[c1];
}

viewArray[0] = activeCamera.viewArray[0];
viewArray[1] = activeCamera.viewArray[1];
viewArray[2] = activeCamera.viewArray[2];
viewArray[3] = activeCamera.viewArray[3];

double[] objx = new double[1];
double[] objy = new double[1];
double[] objz = new double[1];

GLU glu = canvas.getGLU();

// Unproject at the near clip plane (window z = 0)...
glu.gluUnProject(x, y, 0, modelArray, projectionArray, viewArray, objx, objy, objz);
screenVec.set((float) objx[0], (float) objy[0], (float) objz[0]);

// ...and at the far clip plane (window z = 1).
glu.gluUnProject(x, y, 1, modelArray, projectionArray, viewArray, objx, objy, objz);
inVec.set((float) objx[0], (float) objy[0], (float) objz[0]);


x and y are the click position on the window, with y flipped because AWT and OpenGL use opposite vertical directions. The model, projection and view arrays come from the camera, and the two resulting vectors are the endpoints of a ray stretching from the near clip plane (first gluUnProject call) to the far clip plane (second gluUnProject call). These are then passed to the current state as described in the post above.
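As one illustration of what a state might then do with that ray (this is not the GeoSphere code itself): for a flat 2D board sitting in the z = 0 plane, you can intersect the near-to-far segment with that plane and get the board coordinates of the click. It assumes a vecmath-style Vector3f (with sub, scale and add) like the one used above:

// Intersect the pick ray (nearPoint -> farPoint) with the z = 0 plane.
// Returns null if the ray is parallel to the plane.
Vector3f intersectWithBoardPlane(Vector3f nearPoint, Vector3f farPoint)
{
    Vector3f dir = new Vector3f();
    dir.sub(farPoint, nearPoint);               // ray direction = far - near
    if (Math.abs(dir.z) < 1e-6f) return null;   // no single intersection point

    float t = -nearPoint.z / dir.z;             // solve nearPoint.z + t * dir.z == 0
    Vector3f hit = new Vector3f(dir);
    hit.scale(t);
    hit.add(nearPoint);                         // the click point on the board plane
    return hit;
}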

D.

-edit- cleaned up code. -edit-