NIO Buffers in JOGL and LWJGL

Today I successfully converted a complete application from JOGL to LWJGL. Of course the performance difference was zero, but I prefer the LWJGL API (no AWT, no more passing GL references around, a proper input API, damn straightforward, and everything else I forgot).

One thing I noticed is that JOGL and LWJGL handle NIO buffers in a slightly different way: you have to rewind() your buffers in LWJGL, while JOGL always starts reading at position 0, regardless of the buffer's current position.

It took quite a lot of time to find that, as the JVM just crashes in an ugly way, without giving any hints.
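For reference, a minimal sketch of the difference (uses LWJGL's BufferUtils and GL11; the vertex data is made up):

// Fill a direct FloatBuffer with one triangle's vertex data.
FloatBuffer verts = BufferUtils.createFloatBuffer(9);
verts.put(new float[] { 0f, 0f, 0f,  1f, 0f, 0f,  0f, 1f, 0f });
// The position is now 9. JOGL (at the time) always read from index 0,
// but LWJGL reads from the buffer's current position, so without this
// call the driver sees no usable data and can crash later in glDrawArrays:
verts.rewind();                    // or verts.flip()
GL11.glEnableClientState(GL11.GL_VERTEX_ARRAY);
GL11.glVertexPointer(3, 0, verts);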

Ooh, it shouldn’t be able to crash it, even with totally b0rked buffer positions. The GL drivers are supposed to be resilient to this.

Cas :slight_smile:


#
# An unexpected error has been detected by HotSpot Virtual Machine:
#
#  EXCEPTION_ACCESS_VIOLATION (0xc0000005) at pc=0x6915739c, pid=3992, tid=2564
#
# Java VM: Java HotSpot(TM) Client VM (1.5.0-b64 mixed mode)
# Problematic frame:
# C  [atioglxx.dll+0x15739c]
#

...
<snip>
...

Stack: [0x00030000,0x00070000),  sp=0x0006f7fc,  free space=253k
Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native code)
C  [atioglxx.dll+0x15739c]

Java frames: (J=compiled Java code, j=interpreted, Vv=VM code)
j  org.lwjgl.opengl.GL11.glDrawArrays(III)V+0
j  com.skips.engine.client.world.Terrain.renderSurface()V+284
j  com.skips.engine.client.world.Terrain.process()V+1
j  com.skips.engine.client.world.World.render()V+3

... etc etc etc

That always happened on the 2nd frame for some obscure reason. In theory the 1st and 2nd frames should be identical.

Hope this helps?

Specs:
Card: ATi Radeon 9700pro 128MB
Driver: 4.10 (not too old)

LOL! ATI can’t write drivers for toffee can they, the bastards! There are no arguments to glDrawArrays that should cause it to crash except invalid pointers - and LWJGL pointers are never invalid.

Cas :slight_smile:

I’ve had similar crashes reported in feedback for this project of mine under JOGL. When specifying a TexCoordPointer and then calling DrawArrays without enabling TEXTURE_COORD_ARRAY, all hell broke loose (OS crashes).

And end-users normally blame me (they were stupid bugs anyway, so they were right :))

ATI… Update your drivers! OpenGL is broken on ATI cards with old drivers. I know what I’m talking about, I’m running an ATI card :stuck_out_tongue:

Download and install the latest beta drivers (don’t worry about the word “beta”, the driver is very stable) ASAP!

It runs great here.

Chman

[quote]I’ve had similar crashes reported in feedback for this project of mine under JOGL. When specifying a TexCoordPointer and then calling DrawArrays without enabling TEXTURE_COORD_ARRAY, all hell broke loose (OS crashes).

And end-users normally blame me (they were stupid bugs anyway, so they were right :))
[/quote]
Bit of a necro here, sorry about that, but I wanted to comment and also ask a few questions. First off, as regards the problem above, I’ve found in my experience that EVERY OGL driver crashes out if you either do as Skippy did above, or worse still, enable *_COORD_ARRAY and then forget to set your *CoordPointer. I’ve had this in C++ stuff as well, so I suspect it’s a driver thing. It’s just a stupid coding error for the most part anyway; I’ve done it many a time. Nothing like a core dump to focus your attention on a bug!
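To illustrate (buffer and count names made up): every client array you enable needs a matching *Pointer call before the draw, e.g.

// Correct pairing: each enabled array also has its pointer set.
GL11.glEnableClientState(GL11.GL_VERTEX_ARRAY);
GL11.glVertexPointer(3, 0, vertexBuffer);

GL11.glEnableClientState(GL11.GL_TEXTURE_COORD_ARRAY);
GL11.glTexCoordPointer(2, 0, texCoordBuffer); // forget this line (or enable the
                                              // array without it) and most drivers
                                              // crash inside the draw call
GL11.glDrawArrays(GL11.GL_TRIANGLES, 0, vertexCount);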

Question-wise, I’ve got a (relatively) complicated app done up using JOGL at the moment for my OpenGL bindings. I haven’t looked much at LWJGL, so a quick question: how much work would porting over to LWJGL involve? Is it simply a matter of changing a weensy bit of initialization code and some minor variable renaming, or would it involve a coding ream-job? Particularly as regards the input stuff: I grab all the mouse events from my GLCanvas at the moment and pass them up through my own GUI stack. Would this be affected to a great extent?

Thanks,

D.

Changing the input stuff will be pretty trivial, as we’ve got the easiest input library in the entire universe.

The GL code is trickier, especially if you’re using plain arrays, as you’ll have to convert all the calls over to using direct ByteBuffers/FloatBuffers/IntBuffers - and then you’ve also got to make sure the position() and limit() are correct. But it’s not really that much of a chore at all.
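As a sketch of what the conversion looks like (indexArray here is a made-up int[] of element indices; uses org.lwjgl.BufferUtils and GL11), a call that took a plain array under JOGL becomes a direct buffer under LWJGL, with position() and limit() bracketing exactly the data you want drawn:

IntBuffer indices = BufferUtils.createIntBuffer(indexArray.length);
indices.put(indexArray);
indices.flip(); // position = 0, limit = indexArray.length
GL11.glDrawElements(GL11.GL_QUADS, indices);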

The init code for a LWJGL app looks like this:

Display.create();

and you can get more complicated from there on :slight_smile:
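For a rough idea of how little scaffolding there is (exact calls depend on the LWJGL version), a minimal windowed app is something like:

import org.lwjgl.LWJGLException;
import org.lwjgl.opengl.Display;
import org.lwjgl.opengl.DisplayMode;
import org.lwjgl.opengl.GL11;

public class Minimal {
    public static void main(String[] args) throws LWJGLException {
        Display.setDisplayMode(new DisplayMode(640, 480));
        Display.create();                 // window + GL context in one call
        while (!Display.isCloseRequested()) {
            GL11.glClear(GL11.GL_COLOR_BUFFER_BIT | GL11.GL_DEPTH_BUFFER_BIT);
            // render here
            Display.update();             // swap buffers and pump input
        }
        Display.destroy();
    }
}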

Cas :slight_smile:

I’m having a rather strange crash with glDrawElements. It seems that my application (which is a really simple OpenGL test app that only draws a textured quad on the screen) crashes with EXCEPTION_ACCESS_VIOLATION (0xc0000005) after it has run for some time.

I did a little experimentation and it crashes somewhere around frame 12900 with vsync enabled, around frame 13700 without vsync, and around frame 2500 if I don’t set the vertex array pointer with glVertexPointer every frame.

The rendering is done in the render method:


public void render()
{
        GL11.glClear(GL11.GL_COLOR_BUFFER_BIT | GL11.GL_DEPTH_BUFFER_BIT); 
        GL11.glLoadIdentity();
        createVertexArray(vQuads);
        for(Quad q: quads) drawQuads(q);
        update();
}

Setting the vertices:



public void createVertexArray(Vector<Vertex> vList)
{
    if(vArraySet) return; // for testing: skip if there is nothing to update in the GL's vertex list

    // only allocate new arrays if the number of vertices in the Vector has changed
    if(v == null || v.length != vList.size() * 3)
    {
        v = new float[vList.size() * 3];
        n = new float[vList.size() * 3];
        t = new float[vList.size() * 2];
    }

    for(int i = 0, vCount = 0; vCount < vList.size(); vCount++)
    {
        // fill the array holding the vertex coordinates
        v[i++] = vList.get(vCount).getX();
        v[i++] = vList.get(vCount).getY();
        v[i++] = vList.get(vCount).getZ();
        if(useTexture)
        {
            // fill the array holding the texture coordinates
            float[] texture = vList.get(vCount).textureArray();
            t[vCount*2]     = texture[0];
            t[vCount*2 + 1] = texture[1];
        }
    }

    FloatBuffer vb = filledFloatBuffer(v); // creates and fills the FloatBuffer from the given array

    GL11.glEnableClientState(GL11.GL_VERTEX_ARRAY);
    GL11.glVertexPointer(3, 0, vb);

    if(useTexture)
    {
        FloatBuffer tb = filledFloatBuffer(t); // as above

        GL11.glEnableClientState(GL11.GL_TEXTURE_COORD_ARRAY);
        GL11.glTexCoordPointer(2, 0, tb);
    }

    vArraySet = true;
}


And the method that does the actual rendering:


public void drawQuads(Quad q)
{
    if(useTexture)
    {
        if(q.hasTexture())
        {
            GL11.glEnable(GL11.GL_TEXTURE_2D);
            GL11.glBindTexture(GL11.GL_TEXTURE_2D, q.getTexture());
        }
        else GL11.glDisable(GL11.GL_TEXTURE_2D);
    }

    // for testing that the buffer actually holds the right information
    // (to see whether the crash happens because the buffer wasn't filled right).
    // The Quad class just holds the vertex indices the quad is made of and the texture index.
    int[] a = q.asArray();
    IntBuffer ib = filledIntBuffer(q.asArray());
    if(ib.get(0) == a[0] && ib.get(a.length - 1) == a[a.length - 1])
    {
        GL11.glDrawElements(GL11.GL_QUADS, ib);
    }
}

You are rewinding/flipping as needed?

[quote]You are rewinding/flipping as needed?
[/quote]
Yes, filledFloatBuffer flips the buffer before returning it. If it didn’t, the app wouldn’t work fine for the first few thousand frames :wink:

Yeah :confused:
Sounds odd though. What method does it fail in? nglDrawElements?

[quote]Sounds odd though. What method does it fail in? nglDrawElements?
[/quote]
Yeah,

Java frames: (J=compiled Java code, j=interpreted, Vv=VM code)
j org.lwjgl.opengl.GL11.nglDrawElements(IIILjava/nio/Buffer;I)V+0
j org.lwjgl.opengl.GL11.glDrawElements(ILjava/nio/IntBuffer;)V+22

I have downloaded the sources for LWJGL and looked at the code for nglDrawElements, but didn’t get much wiser. I suspect it has something to do with the vertex pointers, because bad data in them seems to lead to similar crashes. What bugs me is that the app works fine for some time before the crash. I don’t think it’s the drivers either, because another LWJGL app I’ve made (with pretty much similar code) seems to work fine.

I’m leaning towards a driver issue or a “mis-managed” buffer.
What graphics card?
If you could ship it all in a zip file and send it to info@lwjgl.org, I’ll check it with a Radeon 9700 when I get home.

[quote]LOL! ATI can’t write drivers for toffee can they, the bastards! There are no arguments to glDrawArrays that should cause it to crash except invalid pointers - and LWJGL pointers are never invalid.

Cas :slight_smile:
[/quote]
This is not really ATI’s fault, since the vertex array API is to blame. When setting up the vertex array pointers with (for example) glVertexPointer you don’t specify the array size, since that’s not generally checked in C anyway. Because of that, the GL driver is not (easily) able to range-check the vertex array accesses made by glDrawArrays, glDrawElements and friends. In theory, LWJGL could check them since we have the buffer sizes, but for anything other than glDrawArrays it would involve iterating over the entire index buffer.

Note that this only applies to client-side vertex arrays. For VBOs, the driver knows the sizes since the buffers are created through OpenGL.
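For comparison, a rough sketch of the VBO path (assuming the GL15 entry points are available; vertexBuffer and vertexCount are made up) - the buffer's size is handed to the driver up front by glBufferData, so it can range-check:

// uses org.lwjgl.opengl.GL11 and GL15
int vboId = GL15.glGenBuffers();
GL15.glBindBuffer(GL15.GL_ARRAY_BUFFER, vboId);
GL15.glBufferData(GL15.GL_ARRAY_BUFFER, vertexBuffer, GL15.GL_STATIC_DRAW);

// With a VBO bound, the "pointer" is just a byte offset into the buffer object.
GL11.glEnableClientState(GL11.GL_VERTEX_ARRAY);
GL11.glVertexPointer(3, GL11.GL_FLOAT, 0, 0L);
GL11.glDrawArrays(GL11.GL_TRIANGLES, 0, vertexCount);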

  • elias

[quote]I’m leaning towards a driver issue or a “mis-managed” buffer.
What graphics card?
[/quote]
I’m thinking a mis-managed buffer, because the other app seems to work. The graphics card is an Nvidia GeForce FX 5200.

I tried removing the vertex pointer setup and it seems to work fine (obviously showing only a blank screen), so the bug seems to be there. I’ll send the code if I don’t get it to work before 18:00 (EET).

EDIT:

I got it working now (at least it runs fine after 50,000 frames and I really don’t have the nerves to watch a white box any longer), but I’m still quite mystified. What I did was move the glDrawElements call from the drawQuads method to the createVertexArray method. Somehow something gets lost between these two methods, but I have no idea what.

Adding the texture still seems to make the app crash after a while.

I think I know why: one function is creating the float buffers, filling them and giving them to OpenGL via glVertexPointer/glTexCoordPointer. Then the function returns, the garbage collector reclaims the buffers, but OpenGL still holds the pointers. Then you draw from the now-reclaimed buffers with glDrawElements and BOOM, crash.

That’s the reason doing it all in one function works - the buffers are not generally garbage collected before the function returns.
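A minimal sketch of the fix (the field and method names are made up; filledFloatBuffer is your own helper): keep the direct buffer reachable, e.g. in a field, for as long as OpenGL may read from it:

// Strong reference: the direct buffer can't be garbage collected while
// the driver still holds its address.
private FloatBuffer vertexBuffer;

public void setVertexArray(float[] v)
{
    vertexBuffer = filledFloatBuffer(v);   // stored in a field, not a local
    GL11.glEnableClientState(GL11.GL_VERTEX_ARRAY);
    GL11.glVertexPointer(3, 0, vertexBuffer);
}
// vertexBuffer stays reachable, so later glDrawElements calls read valid memory.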

  • elias

[quote]I think I know why: one function is creating the float buffers, filling them and giving them to OpenGL via glVertexPointer/glTexCoordPointer. Then the function returns, the garbage collector reclaims the buffers, but OpenGL still holds the pointers. Then you draw from the now-reclaimed buffers with glDrawElements and BOOM, crash.
[/quote]
Ah, of course, you’re right. I assumed that glVertexPointer would copy the array to the graphics card’s memory, which it seems is not the case. I think you should mention things like this somewhere, because they are kind of unique to Java and therefore not covered in general OpenGL tutorials/books. Or if there is some kind of “things to remember about Java and OpenGL” article somewhere, please point me to it :slight_smile:

Hey, WOW!!
I think you just explained my strange bug from an older topic:
http://www.java-gaming.org/cgi-bin/JGNetForums/YaBB.cgi?board=LWJGL;action=display;num=1110649575
:slight_smile:

For textures, this would mean that it is NOT allowed to lose the reference to the pixel buffer which makes up a texture…
-maybe it’s loaded onto the gfx card… then all is fine
-maybe it doesn’t fit in gfx card memory, so it’s dumped and reloaded later from main memory… and if the buffer has been GC-collected and overwritten then… BANG.

I will check this out in detail.

Thanks for this clue :slight_smile:

[quote]Hey, WOW!!
I think you just explained my strange bug from an older topic:
http://www.java-gaming.org/cgi-bin/JGNetForums/YaBB.cgi?board=LWJGL;action=display;num=1110649575
:slight_smile:

For textures, this would mean that it is NOT allowed to lose the reference to the pixel buffer which makes up a texture…
-maybe it’s loaded onto the gfx card… then all is fine
-maybe it doesn’t fit in gfx card memory, so it’s dumped and reloaded later from main memory… and if the buffer has been GC-collected and overwritten then… BANG.

I will check this out in detail.

Thanks for this clue :slight_smile:
[/quote]
Not related. glVertexPointer gives the OpenGL driver a pointer to your data. The pointer needs to stay valid for as long as you are drawing from it (with glDrawArrays or similar). glTexImage2D, on the other hand, copies the data; after that you are free to release the memory where you stored the image. Whether the driver stores the image on the card or in main memory is irrelevant - it is all handled by the driver and it does not use your memory.

That is at least how I understand it.
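As a rough sketch (the loader, dimensions and buffer names are made up):

// glTexImage2D copies the pixels into driver-managed storage...
ByteBuffer pixels = loadPixels("grass.png");   // hypothetical loader
GL11.glTexImage2D(GL11.GL_TEXTURE_2D, 0, GL11.GL_RGBA, width, height, 0,
                  GL11.GL_RGBA, GL11.GL_UNSIGNED_BYTE, pixels);
pixels = null;   // fine: the image data has already been copied

// ...but glVertexPointer only stores the address, so this buffer has to stay
// reachable (and unchanged) until you stop drawing from it.
GL11.glVertexPointer(3, 0, vertexBuffer);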