Byte buffer crashing

I have been trying to limit how often I post questions, but this one is just stumping me.

I have a ByteBuffer that I use to send data to OpenGL. No matter how I create the buffer, the JVM always crashes.

The crash only occurs after 24 bytes (6 floats) have been added, and only on one of the three buffers.

Portion of log:

Java frames: (J=compiled Java code, j=interpreted, Vv=VM code)
j  sun.misc.Unsafe.putInt(JI)V+0
j  java.nio.DirectByteBuffer.putFloat(JF)Ljava/nio/ByteBuffer;+33
j  java.nio.DirectByteBuffer.putFloat(F)Ljava/nio/ByteBuffer;+11
j  com.digiturtle.library.opengl.GLRenderer.addTriangle(Lcom/digiturtle/library/opengl/Renderer$Vertex;Lcom/digiturtle/library/opengl/Renderer$Vertex;Lcom/digiturtle/library/opengl/Renderer$Vertex;Lcom/digiturtle/library/opengl/GLTexture;)V+82
j  com.digiturtle.library.opengl.GLGlyph.render(Lcom/digiturtle/library/opengl/Renderer;Lcom/digiturtle/library/util/Color;Lcom/digiturtle/library/opengl/GLTexture;)V+303
j  com.digiturtle.library.opengl.GLString.render(Lcom/digiturtle/library/opengl/Renderer;)V+115
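(Side note on reading that trace: a crash inside Unsafe.putInt cannot come from merely overrunning a healthy buffer, because DirectByteBuffer bounds-checks in Java first and throws BufferOverflowException before any native write happens. A native crash there points at a bad base address rather than a bad position. A plain-JDK sketch, no LWJGL, class and method names are mine:)

```java
import java.nio.BufferOverflowException;
import java.nio.ByteBuffer;

public class OverflowDemo {
    // Writing past the limit of a properly constructed buffer never
    // reaches native memory: the bounds check fires first and a
    // BufferOverflowException is thrown instead of a VM crash.
    public static boolean overflows(ByteBuffer buffer, int floats) {
        try {
            for (int i = 0; i < floats; i++) buffer.putFloat(1.0f);
            return false;
        } catch (BufferOverflowException e) {
            return true;
        }
    }

    public static void main(String[] args) {
        ByteBuffer direct = ByteBuffer.allocateDirect(24); // room for exactly 6 floats
        System.out.println(overflows(direct, 6)); // false: six floats fit exactly
        System.out.println(overflows(ByteBuffer.allocateDirect(24), 7)); // true
    }
}
```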

Here are the ways I have tried to create the buffer:

// Method one: direct buffer, explicitly rewound
ByteBuffer buffer = ByteBuffer.allocateDirect(size).order(ByteOrder.nativeOrder());
buffer.rewind();
return buffer;
// Method two: heap buffer (backed by a Java array)
return ByteBuffer.allocate(size).order(ByteOrder.nativeOrder());
// Method three: direct buffer
return ByteBuffer.allocateDirect(size).order(ByteOrder.nativeOrder());
// Method four: LWJGL helper (direct, native byte order)
return org.lwjgl.BufferUtils.createByteBuffer(size);
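For reference, these methods differ in kind: allocate() builds a heap buffer that never touches Unsafe, while allocateDirect() (and LWJGL's BufferUtils) hand back direct buffers whose reads and writes go against a raw native address. A plain-JDK sketch of the distinction (class and method names are mine):

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class BufferKinds {
    // allocate() gives a heap buffer backed by a Java array; Unsafe is
    // never involved, so it can only fail with Java exceptions.
    public static ByteBuffer heap(int size) {
        return ByteBuffer.allocate(size).order(ByteOrder.nativeOrder());
    }

    // allocateDirect() gives a DirectByteBuffer whose puts/gets go
    // through native memory at a raw address.
    public static ByteBuffer direct(int size) {
        return ByteBuffer.allocateDirect(size).order(ByteOrder.nativeOrder());
    }

    public static void main(String[] args) {
        System.out.println(heap(16).isDirect());   // false
        System.out.println(direct(16).isDirect()); // true
        // New buffers default to BIG_ENDIAN, hence the explicit nativeOrder().
        System.out.println(ByteBuffer.allocate(16).order()); // BIG_ENDIAN
    }
}
```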

It was working two days ago, and no updates have happened since then, so I can’t seem to find out what’s wrong. I have googled a lot and have tried to find a solution, with no luck.

Thanks,
CopyableCougar4

Make a standalone, compilable example and post it here. Method two cannot crash in Unsafe, as allocate() does not create a direct buffer.

I couldn’t reproduce it with a standalone example (which confuses me more and makes it harder to get community help).

However, I think I know why it’s crashing after the first time. Every frame (based on some code from SHC that I added in the hopes of getting my renderer working) the buffer gets remapped with [icode]glMapBuffer(target, access, pointer.capacity(), pointer);[/icode]

I will try and do some debugging and see why the mapped buffer doesn’t work the next time data is uploaded.

You are handed memory from the driver; you’re not creating the ByteBuffer yourself, so those ‘four methods that crash’ are irrelevant. Besides, there are no valid use cases left for glMapBuffer; nobody should use it. Switch to glMapBufferRange.

Okay so I switched it to:

pointer = glMapBufferRange(target, offset, size, access, null);
// target is the buffer type, offset is passed as 0, size is in bytes (like it should be), access is GL_READ_WRITE

However, I still get crashes.

Probably something silly like a missing [icode]rewind()[/icode] and a funky pointer/position you let the float drop at. As long as it just crashes your VM :slight_smile: … try to find a position which destroys your OS graphics output, much more psychedelic fun.

What does toString() on the buffer say after you mapped it?

I added a rewind and it still crashes.

When I print the buffer’s toString(), here’s what is printed:

java.nio.DirectByteBuffer[pos=0 lim=4194304 cap=4194304]

After turning on the debug flag, it throws an exception because the address given is 0. However, I still can’t wrap my mind around why the address is 0.
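Worth noting: Buffer.toString() reports only pos/lim/cap, never the native address, so a mapped buffer can print perfectly healthy numbers while actually wrapping address 0. A plain-JDK sketch (helper name is mine):

```java
import java.nio.ByteBuffer;

public class ToStringDemo {
    // toString() is inherited formatting over position/limit/capacity;
    // the underlying native address is not part of the output.
    public static String describe(ByteBuffer b) {
        return b.toString();
    }

    public static void main(String[] args) {
        // Prints something like:
        // java.nio.DirectByteBuffer[pos=0 lim=4194304 cap=4194304]
        System.out.println(describe(ByteBuffer.allocateDirect(4194304)));
    }
}
```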

Is the buffer already mapped when you map it … maybe? … It should generate a GL_INVALID_OPERATION, though.

Just for the sake of it … here’s one way to get the address of a ByteBuffer (another, faster way is using Unsafe):

// needs: import java.lang.reflect.Field; import java.nio.Buffer;
public static class ReflectiveAccessor
{
  private static final Field addressField;

  static
  {
    Field f = null;

    try
    {
      // Buffer.address holds the native base address of a direct buffer
      f = Buffer.class.getDeclaredField("address");
      if (!f.isAccessible()) f.setAccessible(true);
    }
    catch (final NoSuchFieldException e)
    {
      // deal with the error
    }

    addressField = f;
  }

  public static long getAddress(Buffer buffer)
  {
    try
    {
      return addressField.getLong(buffer);
    }
    catch (final IllegalAccessException e)
    {
      return -1;
    }
  }
}

In case you want to double-check.

I do

glUnmapBuffer(target);

after I upload all the data.

Also, I have some code that throws a runtime exception every frame if GL11.glGetError() isn’t 0, so there isn’t an error there.

Then I’m as clueless as you :wink:

So I have the issue traced back to native code.

nglMapBufferRange(target, offset, length, access);

The above method, when invoked, returns 0. I set target to GL_ARRAY_BUFFER, offset to 0, length to the buffer size in bytes, and access to GL_READ_WRITE.

I think it might be the access flags. I first map the buffer (with GL_WRITE_ONLY) before calling glDrawArrays with GL_ARRAY_BUFFER, 0, buffer.capacity(), and I unmap the buffers right after rendering. I don’t get why it is not working.

According to the OpenGL specification, [icode]glMapBufferRange[/icode] takes a bitfield of access flags, so the equivalent of [icode]GL_READ_WRITE[/icode] is [icode]GL_MAP_READ_BIT | GL_MAP_WRITE_BIT[/icode]. Hope this helps.
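To make the mismatch concrete: GL_READ_WRITE is the enum value 0x88BA, not a combination of the GL_MAP_* bits, so passing it as the access bitfield sets bits outside the valid set, and per the spec glMapBufferRange then generates GL_INVALID_VALUE and returns NULL. A sketch using the spec’s constant values ([icode]isValidAccess[/icode] is my illustrative helper mirroring the spec’s check, not a GL call):

```java
public class MapAccessFlags {
    // Constant values as defined by the OpenGL specification headers.
    public static final int GL_READ_WRITE                = 0x88BA; // glMapBuffer enum
    public static final int GL_MAP_READ_BIT              = 0x0001;
    public static final int GL_MAP_WRITE_BIT             = 0x0002;
    public static final int GL_MAP_INVALIDATE_RANGE_BIT  = 0x0004;
    public static final int GL_MAP_INVALIDATE_BUFFER_BIT = 0x0008;
    public static final int GL_MAP_FLUSH_EXPLICIT_BIT    = 0x0010;
    public static final int GL_MAP_UNSYNCHRONIZED_BIT    = 0x0020;

    private static final int ALL_MAP_BITS =
        GL_MAP_READ_BIT | GL_MAP_WRITE_BIT
        | GL_MAP_INVALIDATE_RANGE_BIT | GL_MAP_INVALIDATE_BUFFER_BIT
        | GL_MAP_FLUSH_EXPLICIT_BIT | GL_MAP_UNSYNCHRONIZED_BIT;

    // glMapBufferRange rejects (GL_INVALID_VALUE, NULL return) any access
    // value that is zero or contains bits outside the GL_MAP_* set.
    public static boolean isValidAccess(int access) {
        return access != 0 && (access & ~ALL_MAP_BITS) == 0;
    }

    public static void main(String[] args) {
        System.out.println(isValidAccess(GL_READ_WRITE));                      // false
        System.out.println(isValidAccess(GL_MAP_READ_BIT | GL_MAP_WRITE_BIT)); // true
    }
}
```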

Are you using LWJGL 2 or 3?

Spasi: IIRC LWJGL 2 [icode]glMapBuffer[Range][/icode] returned [icode]null[/icode] instead of a ByteBuffer pointing to 0L

I’m using LWJGL 3.

LWJGL 3 indeed by default returns a ByteBuffer pointing at 0L. If you enable debug mode (-Dorg.lwjgl.util.Debug=true) you will get an exception instead.

You could argue that it’s a problem with LWJGL, but really this is an application bug. You’re using the result of MapBuffer without checking for OpenGL errors, which I’m sure is what is happening here; otherwise you wouldn’t be getting a null pointer back.

I highly recommend setting up a debug message callback while developing your application. It’s particularly easy in LWJGL 3:

debugProc = GLContext.createFromCurrent().setupDebugMessageCallback();

Don’t forget to store the returned Closure somewhere, just like with GLFW callbacks. You may also have to create a debug context (see the GLFW_OPENGL_DEBUG_CONTEXT window hint).

Checking for errors using glGetError() can ruin performance since it causes a driver thread sync. Since there is no way (?) to figure out the address of the returned buffer, I’d strongly prefer it returning null. That makes it much clearer what’s going on.

Sorry for saying this, but posting that right after spasi pointed out that you should check for a GL error is just stupid.

As long as there is evidence of your code having a programming error in it, you should not give a damn about performance at all, but instead should focus on getting rid of the source of the error. Therefore you most certainly want to insert an error check after each and every single GL call…

If I wanted my program to crash without a decent stack trace, in a way that’s hard to debug, I wouldn’t use Java in the first place. I can check for null every time I map a buffer, but having to call glGetError() when debug callbacks aren’t supported is a nightmare.