Anyway, I think I fixed a bug in the win32 implementation. My test case now works on all platforms. Grab it here:
http://odense.kollegienet.dk/~naur/lwjgl_14012005.zip
- elias
I’m going to send you the sources of my implementation. It’s pretty straightforward. Send me an email address at mik@classx.it
I’m very new to OpenGL, sorry. Anyway, are you suggesting that I enable render-to-texture on my pbuffers?
Here’s what I’ve got:
Mmmh, some nice things:
The bad: I have to rewrite everything.
Q: if the design above is right, can you tell me how glGetTexImage() performs compared to glReadPixels()?
Mik
PS your http://odense.kollegienet.dk/~naur/lwjgl_14012005.zip WORKS very well! MANY THANKS!
GetTexImage = very very fast, copies from VRAM to VRAM.
ReadPixels = very very slow, reads the wrong way across the AGP buffer, aaaagh!
Cas 
That’s more or less right, I think. To clarify, I’d probably make SGLBitmap and SGLImage the same class, say, SGLGraphics. It would work something like this:
To actually get at the pixel data from an SGLGraphics you’ll need to use glGetTexImage.
Hope that makes it clear.
[quote]GetTexImage = very very fast, copies from VRAM to VRAM.
ReadPixels = very very slow, reads the wrong way across the AGP buffer, aaaagh!
Cas 
[/quote]
Cas is slightly wrong here. GetTexImage and ReadPixels are probably equally fast, depending on whether the texture is cached in system RAM or not. It’s CopyTexImage2D that is (potentially) a VRAM-to-VRAM copy and therefore fast.
Thanks for the hints.
What happens if several SGLGraphics have different sizes? I would be forced to reallocate the pbuffer, right?
I’d like to add that render-to-texture is not an option in such a system, because:
A) When pbuffer contents get lost, the texture contents will be lost too. Render-to-texture is more appropriate for dynamic textures that get refreshed every frame.
B) How can multiple textures be created from a single render-to-texture pbuffer? Only by making it huge and packing multiple images. Not very nice.
So, CopyTexImage should be the best option and more than fast enough too.
Thanks for the correction, smartass
Cas 
Elias, your last change broke getPbufferCaps, it always returns 0. Pbuffers work great if you ignore it though.
Elias,
I implemented the SGLGraphics as suggested. There was a little design problem in your description. No problem, I fixed it today.
I benchmarked both implementations with 20000 loops of misc graphics:
2 bitmap/graphics switches per loop
SGLGraphics gives:
Time s. 11.46202
Fps.    1744.8932
SGLBitmap:
Time s. 9.126782
Fps.    2191.3528
4 bitmap/graphics switches per loop
SGLGraphics gives:
Time s. 22.803085
Fps.    877.0743
SGLBitmap:
Time s. 14.629756
Fps.    1367.0768
From the numbers, SGLBitmap seems clearly better. The reason is the number of graphics copies from/to the pbuffer that SGLGraphics needs.
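As a sanity check on the numbers above: the reported Fps figures are simply loops/time for the 20000-loop run. A minimal check (class name is mine, not from the thread):

```java
public class FpsCheck {
    public static void main(String[] args) {
        int loops = 20000;
        // {time in seconds, reported fps}, taken from the benchmark above
        double[][] runs = {
            {11.46202,  1744.8932}, // SGLGraphics, 2 switches per loop
            {9.126782,  2191.3528}, // SGLBitmap,   2 switches per loop
            {22.803085, 877.0743},  // SGLGraphics, 4 switches per loop
            {14.629756, 1367.0768}, // SGLBitmap,   4 switches per loop
        };
        for (double[] run : runs) {
            double fps = loops / run[0]; // fps = loops / elapsed time
            System.out.printf("computed %.4f vs reported %.4f%n", fps, run[1]);
        }
    }
}
```

All four computed values match the reported ones, so the two implementations were measured consistently.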
What’s more, with SGLGraphics, when the pbuffer loses its contents, even the textures in its context go away. This forces me to reallocate every texture in the “new” pbuffer (including the SGLGraphics backbuffer).
On the SGLBitmap side, when the pbuffer gets lost I can recreate it and ask the SGLImages to re-create their textures at drawImage() time.
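The recovery path described here can be sketched as a small lazy re-creation pattern. All names below are hypothetical, and the actual GL upload (glGenTextures + glTexImage2D) is stubbed out so the pattern itself is visible without a GL context:

```java
// Hypothetical sketch of the SGLBitmap recovery path: each image keeps a
// system-RAM copy of its pixels and rebuilds its texture on demand.
class SGLImage {
    private final byte[] pixels;  // system-RAM copy; survives pbuffer loss
    private int textureId = 0;    // 0 = no texture currently allocated

    SGLImage(byte[] pixels) { this.pixels = pixels; }

    // Called after the pbuffer (and its context) had to be recreated.
    void invalidate() { textureId = 0; }

    // drawImage() transparently re-creates the texture if it was lost.
    int drawImage() {
        if (textureId == 0) {
            textureId = uploadTexture(pixels); // real code: glGenTextures + glTexImage2D
        }
        return textureId;
    }

    private static int nextId = 1;
    private static int uploadTexture(byte[] data) { return nextId++; } // GL stand-in
}

public class Demo {
    public static void main(String[] args) {
        SGLImage img = new SGLImage(new byte[16]);
        int first = img.drawImage();
        img.invalidate();             // simulate pbuffer loss
        int second = img.drawImage(); // texture rebuilt lazily
        System.out.println(first != second && second != 0); // prints true
    }
}
```

The point is that no caller has to know the pbuffer was lost: the next drawImage() repairs the texture from the retained pixel data.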
Anyway, the Pbuffers shared context works as expected and this is a great feature.
However, pbuffer flushing is still a problem. For this reason I would suggest what I call a “missing feature”: create an invisible Display, e.g. by adding a boolean to the constructor and a pair of hide()/show() methods.
This would be useful in order to share the Display context between subsequent Pbuffer allocations in order to keep the textures when the Pbuffers get lost. Nice, isn’t it ?
[quote]This would be useful in order to share the Display context between subsequent Pbuffer allocations in order to keep the textures when the Pbuffers get lost. Nice, isn’t it ?
[/quote]
Unless I’m misunderstanding what you’re doing, this won’t be necessary. If the pbuffer contents get lost, only the contents of the buffers (color, depth, etc.) associated with it are lost, not the textures that live in its context. It’s a problem only when you’re using render-to-texture, since the buffer contents are the texture(s).
Could you please confirm that getPbufferCaps doesn’t work?
Pbuffer.getPbufferCaps() should work again now.
I’ll check the missing textures problem again.
Anyway, what about the invisible Display feature ?
[quote]I’ll check the missing textures problem again.
Anyway, what about the invisible Display feature ?
[/quote]
If the textures are not lost, what do you need the invisible display for?
Suppose you have a Swing app that wants to hide/show a preview panel (the Display).
In that case I’d prefer a proper AWTGLCanvas (maybe even the one from JOGL) that you can embed.
Very nice! Will the above end up in an RFE?
It’s an often-requested feature that no-one’s quite had time to do yet. My gut feeling is that it won’t be too hard to do, but using it will involve learning about threading and AWT properly.
If it gets done it’ll be in another little subproject like the SWT canvas adapter.
This functionality is already available, by the way, if you use JOGL to create your canvases. You can subsequently do all your rendering using the LWJGL API. (Keyboard and Mouse won’t be available, of course; you’ll have to use AWT or JInput.)
Cas 