better off without ATI single-threaded workaround?

This is loosely related to my previous postings about trouble with the ATI V5200, but I’ve investigated some more and have some new information.

Summary: I see HotSpot crashes after a fairly consistent length of time when running one of my applications. The thread dump shows that it is crashing in code that looks like this:

double x[] = new double[6], y[] = new double[6], z[] = new double[6];
// fill x,y,z with interesting stuff
glBegin(GL_LINES);
glVertex3f(x,y,z);
glEnd();

The HotSpot crash log shows a segfault in glVertex3f (it looks like a null-pointer dereference).

If I run my application with -Djogl.1thread=false, the problem does not occur (in fact, everything seems happy!)
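For reference, that just means setting the system property on the launch command, something like this (jar and class names here are placeholders):

java -Djogl.1thread=false -cp myapp.jar com.example.MyApp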

I find it interesting that the crashes only seem to occur when I’m creating transitory double[]s, even though my code spends far more time pushing data that I keep around. (Garbage-collector interaction?)

I believe I only do GL operations from the GLEventListener methods.
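In case it matters, my listener is shaped roughly like this (class name and drawing calls are simplified placeholders, not my real code):

import javax.media.opengl.GL;
import javax.media.opengl.GLAutoDrawable;
import javax.media.opengl.GLEventListener;

// All GL work happens inside these callbacks; no other thread touches GL.
public class LineRenderer implements GLEventListener {
    public void init(GLAutoDrawable drawable) {
        drawable.getGL().glClearColor(0f, 0f, 0f, 1f);
    }

    public void display(GLAutoDrawable drawable) {
        GL gl = drawable.getGL();
        gl.glClear(GL.GL_COLOR_BUFFER_BIT);
        gl.glBegin(GL.GL_LINES);
        gl.glVertex3f(0f, 0f, 0f);
        gl.glVertex3f(1f, 1f, 0f);
        gl.glEnd();
    }

    public void reshape(GLAutoDrawable drawable, int x, int y, int width, int height) {
        // nothing special here
    }

    public void displayChanged(GLAutoDrawable drawable, boolean modeChanged, boolean deviceChanged) {
        // not used
    }
}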

Thoughts?

I don’t know much about JOGL’s internals, but I remember a problem like this with LWJGL a long time ago.

If JOGL uses Buffers under the hood to convert the double[] to a DoubleBuffer, and no reference to the DoubleBuffer is kept around, the GC can free the allocated space while the native code (or the GPU) is still reading from it. That could cause exactly these nasty segfaults.
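To illustrate the failure mode I mean, here’s a minimal sketch using glVertexPointer as an example of a call where the driver holds on to the buffer’s native address after the call returns. I have no idea whether JOGL’s array handling (if any) looks like this internally, so treat it purely as an illustration (signatures taken from the JOGL javadocs; I’m an LWJGL user):

import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.FloatBuffer;
import javax.media.opengl.GL;

class VertexUpload {
    // Hazardous: the direct buffer is only referenced by a local variable.
    // Once this method returns, the GC may reclaim its native memory even
    // though the driver still reads from that address during a later
    // glDrawArrays call.
    static void uploadTransient(GL gl, float[] verts) {
        FloatBuffer buf = ByteBuffer.allocateDirect(verts.length * 4)
                                    .order(ByteOrder.nativeOrder())
                                    .asFloatBuffer();
        buf.put(verts).rewind();
        gl.glVertexPointer(3, GL.GL_FLOAT, 0, buf);
    }

    // Safer: keep a strong reference alive for as long as the driver
    // might dereference the pointer.
    private FloatBuffer retained;

    void uploadRetained(GL gl, float[] verts) {
        retained = ByteBuffer.allocateDirect(verts.length * 4)
                             .order(ByteOrder.nativeOrder())
                             .asFloatBuffer();
        retained.put(verts).rewind();
        gl.glVertexPointer(3, GL.GL_FLOAT, 0, retained);
    }
}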

Then again, I can’t recall a glVertex(double[], double[], double[]); but hey, I’m not using JOGL.

Anyway, the “fairly consistent length of time” makes me suspect a Buffer being freed while the native code is still accessing it (garbage collection of non-heap, i.e. direct, buffers?).

My two cents.

Hmm, I botched my pseudo code, which makes this even more perplexing.

I have a method that acts like glVertex3f(float[], float[], float[]), which in turn does the inefficient thing of calling glVertex3f(float, float, float) a gazillion times. But this means there are no buffers involved at all, which makes it rather perplexing where this segfault could be coming from: the arguments are plain primitive floats, so there is no buffer for the GC to free out from under the native code.

(Indeed, the stack trace shows the signature of glVertex3f as (FFF); I was being unobservant in addition to forgetful!)
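In other words, the helper is shaped like this sketch (simplified):

// No NIO buffers anywhere: just a loop of immediate-mode calls with
// primitive float arguments.
void vertex3f(GL gl, float[] x, float[] y, float[] z) {
    for (int i = 0; i < x.length; i++) {
        gl.glVertex3f(x[i], y[i], z[i]);
    }
}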

So the GL context must somehow be invalid, then, or is there another explanation?

Are you using either JSR-231 beta 4 or the most recent nightly build? The threading behavior changed in beta 4 relative to earlier nightly builds and is now consistent with earlier releases. It seems to be more robust than the nightly builds between beta 3 and beta 4.

It sounds to me like you are running into a considerable number of issues with ATI’s proprietary drivers. I would strongly suggest you get into their beta program and send them feedback. Again, I may be able to help you get into their program if you contact me via email.