Newbie alert :).
I’ve got JOGL working OK (the most recent stable Linux binary, not the CVS version, AFAIAA) with JDK 1.4.2 under Linux, and have started experimenting. Following the advice in the quick-start guide, I’ve got a GLCanvas up (that’s the hardware-accelerated one, right?) and am rendering fine from my GLEventListener.
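For context, my setup is essentially the tutorial’s; roughly this sketch (the class and listener names are mine, not from the tutorial):

```java
import java.awt.Frame;
import net.java.games.jogl.*;

public class HeightfieldDemo {
    public static void main(String[] args) {
        Frame frame = new Frame("Heightfield");
        // Ask the factory for a heavyweight, hardware-accelerated canvas
        GLCanvas canvas = GLDrawableFactory.getFactory()
                .createGLCanvas(new GLCapabilities());
        canvas.addGLEventListener(new HeightfieldRenderer()); // my GLEventListener
        frame.add(canvas);
        frame.setSize(800, 600);
        frame.setVisible(true);
        // Animator drives the repeated display() calls
        Animator animator = new Animator(canvas);
        animator.start();
    }
}
```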
However, when I gave it the simple task of rendering a heightfield of 350k triangles, it renders fewer than 100k tris per second - on a GeForce2 Go (essentially a GeForce2 with 16MB RAM) + P3 1GHz, with no light sources and nothing fancy: just smooth-shaded tris in tri-strips. The heightfield is almost square, and I’ve got a separate tri-strip per row.
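The rendering itself is plain immediate mode, something like this (field names like rows, cols, height and shade are placeholders; the actual values come from my one-time heightfield generator):

```java
public void display(GLDrawable drawable) {
    GL gl = drawable.getGL();
    gl.glClear(GL.GL_COLOR_BUFFER_BIT | GL.GL_DEPTH_BUFFER_BIT);
    gl.glShadeModel(GL.GL_SMOOTH);
    // One triangle strip per heightfield row: each column emits a pair of
    // vertices, zig-zagging between row z and row z+1.
    for (int z = 0; z < rows - 1; z++) {
        gl.glBegin(GL.GL_TRIANGLE_STRIP);
        for (int x = 0; x < cols; x++) {
            gl.glColor3f(shade[z][x], shade[z][x], shade[z][x]);
            gl.glVertex3f(x, height[z][x], z);
            gl.glColor3f(shade[z + 1][x], shade[z + 1][x], shade[z + 1][x]);
            gl.glVertex3f(x, height[z + 1][x], z + 1);
        }
        gl.glEnd();
    }
}
```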
Surely performance should be better than that?
Curiously, reducing the size of the output window makes a big difference in performance - roughly linear in the number of pixels in the window - which suggests I’m fill-rate limited rather than geometry limited. (I’ve got no code in the GL resize methods of my GLEventListener, so I’m just letting it do whatever comes naturally.)
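That is, my resize callbacks are literally empty:

```java
// Both left as no-ops -- just letting JOGL do whatever comes naturally
public void reshape(GLDrawable drawable, int x, int y,
                    int width, int height) { }
public void displayChanged(GLDrawable drawable,
                           boolean modeChanged, boolean deviceChanged) { }
```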
FWIW, I started from Greg Pierce’s “Getting Started with JOGL” tutorial and have only made minor modifications since, e.g. adding a one-time method to generate the heightfield.