OpenGL GUI

I’m about to create a GUI in OpenGL (using LWJGL). I want functionality similar to the java.awt.Graphics class, i.e. I need fonts, lines, etc. What is the best way to implement this in OpenGL? I’m thinking something like this:

  1. The Graphics class renders to an OpenGL texture using OpenGL calls; only the GUI areas that have changed are re-rendered to the texture.

  2. The 3D scene is rendered.

  3. The GUI texture is rendered with alpha on top of the 3D scene using one full screen quad in ortho mode.

Is this a viable solution?

Searching a bit more I found this thread:

http://www.java-gaming.org/cgi-bin/JGNetForums/YaBB.cgi?board=jogl;action=display;num=1099179678

which is pretty much what I was looking for. So rendering to a pbuffer is probably the way to go.

[quote]Searching a bit more I found this thread:

http://www.java-gaming.org/cgi-bin/JGNetForums/YaBB.cgi?board=jogl;action=display;num=1099179678

which is pretty much what I was looking for. So rendering to a pbuffer is probably the way to go.
[/quote]
No, no, no! You surely don’t need to use pbuffers for GUI stuff. That thread you reference refers to a special case - how OpenGL is integrated with Java2D, which has to play nice with AWT and Swing. For more normal applications you don’t need to jump through hoops.

Lines, points, turtle graphics, filled 2D shapes… all of that can be rendered to a normal OpenGL context without rendering to a texture - on top of a 3D scene if you wish. Font bitmaps can be generated at runtime or as a preprocessing step. For windows and such, if you need more than filled polys or vertex coloring, just load in some predrawn textures and plop them on a quad in an orthographic projection.

Unless you’re talking about something completely different and I missed the boat here, then just forget about pbuffers.
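For example, lines and filled rectangles go straight into the normal GL context (a minimal LWJGL sketch, assuming an orthographic projection is already set up; the coordinates are just placeholders):

// Sketch: immediate-mode 2D primitives, drawn directly over the 3D scene.
GL11.glDisable(GL11.GL_TEXTURE_2D);
GL11.glColor3f(1.0f, 1.0f, 1.0f);
GL11.glBegin(GL11.GL_LINES);             // a line from (10,10) to (200,10)
GL11.glVertex2f(10, 10);
GL11.glVertex2f(200, 10);
GL11.glEnd();
GL11.glBegin(GL11.GL_QUADS);             // a filled rectangle
GL11.glVertex2f(10, 20);
GL11.glVertex2f(200, 20);
GL11.glVertex2f(200, 60);
GL11.glVertex2f(10, 60);
GL11.glEnd();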

Ok, but doesn’t that require that I repaint the entire UI every frame? Most of the GUI will be static from frame to frame (maybe just some text changes). I figured it would be faster to render the UI once to an offscreen buffer and then just apply changes to that. But maybe there is a way to clear the entire screen and zbuffer except for the GUI parts?

Yes, you repaint the UI every frame. OpenGL will use no time at all painting a standard UI.

I didn’t really catch the ‘full screen quad’ in there earlier. There’s no reason to do that for a GUI. You can have a quad for each component and you’ll be fine. Check out the Torque Engine Demo from GarageGames. From within the demo mission, press F10 to bring up the world editor. You will see a very complex GUI system with menu bars, menu items, list boxes, tree controls… all of this (even the cursor) is rendered in ortho mode on top of the 3D scene using a quad for each component (and generally when someone says ‘quad’ in game development you should interpret that as two triangles and not actually GL_QUADS). Also do a search for Crazy Eddie’s GUI at SourceForge. It’s a great example to work from, and a prime candidate for porting to Java, I would think!

Many GUI implementations are naive and render in immediate mode using glVertex calls. A more complex system would have a central window manager that maintains individual arrays for vertices, tex coords, and/or vertex colors. Then all of the visible components can be rendered at once via vertex arrays or vertex buffers. If you plan to have, or could potentially have, a large number of GUI components on screen at once, that would be the way to go.
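For instance, collecting all the component quads into one array and submitting them in a single call might look like this (a sketch only; Quad is a made-up class holding x, y, w, h, and the window-manager bookkeeping is left out):

// Sketch: batch every visible component quad into one vertex array.
// Uses org.lwjgl.BufferUtils and org.lwjgl.opengl.GL11.
FloatBuffer verts = BufferUtils.createFloatBuffer(quads.size() * 12); // 6 vertices * xy per quad
for (Quad q : quads) {
    verts.put(q.x).put(q.y).put(q.x + q.w).put(q.y).put(q.x + q.w).put(q.y + q.h); // triangle 1
    verts.put(q.x).put(q.y).put(q.x + q.w).put(q.y + q.h).put(q.x).put(q.y + q.h); // triangle 2
}
verts.flip();
GL11.glEnableClientState(GL11.GL_VERTEX_ARRAY);
GL11.glVertexPointer(2, 0, verts);                     // 2 floats per vertex, tightly packed
GL11.glDrawArrays(GL11.GL_TRIANGLES, 0, quads.size() * 6);
GL11.glDisableClientState(GL11.GL_VERTEX_ARRAY);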

Ok, thanks for your replies. A couple more questions:

  • If the GUI is not alpha transparent, I presume it would be fastest to first render the GUI and then the 3D scene, as this would allow for fast z-buffer fails when rendering stuff beneath UI components. Correct?

  • If the GUI is alpha transparent, I would presume it would be fastest to render the GUI last?

  • If the GUI is not alpha transparent, it wouldn’t be necessary to repaint it each frame if there were some way of clearing the 3D scene without touching the GUI components. For example, a clear that would remove everything beyond a certain z-depth. Any ideas?

-Don’t think there will be any difference in performance whether you draw the GUI first or last with regard to fast z fail. Only consider this if your GUI covers a large part of the screen and you use complex, slow pixel shaders.

-If the GUI is alpha transparent then it has to be rendered last to be correct. It has to be blended with the color already stored in the frame buffer.

-There might be a way to clear only part of the screen by either messing with the z-clear states or using the stencil buffer. Who knows.

Why are you even considering this kind of optimization? Repainting the whole GUI every frame is very fast, unless you’re doing something extraordinary.

The common way to do a GUI is to render the 3D scene first, then render the GUI with the z-buffer disabled.
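In LWJGL terms the frame then looks something like this (a sketch; render3DScene() and renderGUI() are placeholders):

// Sketch of the per-frame ordering: 3D scene first, GUI on top with no depth test.
GL11.glClear(GL11.GL_COLOR_BUFFER_BIT | GL11.GL_DEPTH_BUFFER_BIT);
render3DScene();                                    // hypothetical: perspective projection, depth test on
GL11.glDisable(GL11.GL_DEPTH_TEST);                 // GUI ignores scene depth
GL11.glEnable(GL11.GL_BLEND);
GL11.glBlendFunc(GL11.GL_SRC_ALPHA, GL11.GL_ONE_MINUS_SRC_ALPHA);
renderGUI();                                        // hypothetical: ortho quads, blended on top
GL11.glDisable(GL11.GL_BLEND);
GL11.glEnable(GL11.GL_DEPTH_TEST);
Display.update();                                   // LWJGL buffer swap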

I was hoping some of you guys knew.

I’m just trying to find a good method for implementing a generic OpenGL GUI without affecting overall application performance much. If there is a faster/better method that is relatively simple to implement, I will use it. A complex UI can contain thousands of triangles, lines, characters, etc. Rendering this every frame results in hundreds of thousands of OpenGL calls each second. This can affect performance.

A GUI is lucky if it contains more than a hundred elements, not a thousand.

Cas :)

Yup, but if he’s trying to make something re-usable, packageable as a library for doing this stuff, then it’s the kind of thing he needs to think about.

For the record, I’ve made a couple of GUIs which, if rendered in OGL, would easily hit a thousand components if there were no optimization (and negotiating with Swing until it gave decent performance on them wasn’t fun. “Optimized to minimize overdraw” my ass :()

Certainly, if I were considering using someone else’s GUI lib, one of my first questions would be “can you guarantee your lib will never be the source of performance problems in my app?”.

IIRC we got screwed by exactly this problem (although for different reasons) when doing Survivor: the frame rate was considerably reduced in some situations because of problems with Xith’s 2D GUI library. Eventually Kev sussed it might be the GUI lib (or else he found an embarrassing series of bugs in his code and quietly fixed them, meanwhile blaming it on the lib ;) :P ;D).

The point is, though, that we wasted a fair amount of time collectively, trying to find the problem in our code, hardware, and video drivers. Never expected the GUI to be the cause :(.

Oh yes… that problem ;)

Actually, IMO, stay the hell away from Swing/AWT/SWT for game GUIs. You’re pulling in too much baggage. Build a simple GUI framework, implement a few basic test components to ensure things work, then implement components on an as-you-need-them basis.

This will:

a) Guarantee you know what’s going on in the GUI layer
b) Ensure that your component interfaces are clean
c) Ensure that you have compatibility across platforms

For a really nice example of this, check out the JME UI package. Looks good, runs well.
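Something as small as this is enough to start from (an illustrative sketch only; all the names are made up):

// Sketch of a minimal component contract for a simple GUI framework.
public interface Component {
    void render();                               // issue the GL calls for this component
    boolean contains(int x, int y);              // hit test for routing mouse events
    void setBounds(int x, int y, int w, int h);  // layout
}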

Kev

Why not switch to ortho mode for the GUI?

As far as a global alpha channel goes…

pushState()          // e.g. glPushAttrib() to save the current GL state
switchToOrthoMode()  // set up an orthographic projection for 2D drawing
setAlpha()           // enable blending for the global alpha
setColor()           // tint that gets applied to every GUI texture
drawGUITextures()    // one quad per component
popState()           // restore the saved GL state

The “setColor()” method could be used to give the GUI a specific color saturation. For example, in your image editor, make all your HUD graphics white. Then, with setColor() you can dynamically change the color of the whole GUI using glColor3f(float, float, float). Just an example I suppose. In my mind, it’d be easier than messing with the Z axis.
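In concrete LWJGL calls, that pseudocode might map to something like this (a sketch; the 800x600 ortho size, the tint color, and drawGUITextures() are all placeholders):

// Sketch of the pseudocode above in LWJGL; values are examples only.
GL11.glPushAttrib(GL11.GL_ALL_ATTRIB_BITS);           // pushState()
GL11.glMatrixMode(GL11.GL_PROJECTION);
GL11.glPushMatrix();
GL11.glLoadIdentity();
GL11.glOrtho(0, 800, 600, 0, -1, 1);                  // switchToOrthoMode()
GL11.glMatrixMode(GL11.GL_MODELVIEW);
GL11.glPushMatrix();
GL11.glLoadIdentity();
GL11.glEnable(GL11.GL_BLEND);                         // setAlpha()
GL11.glBlendFunc(GL11.GL_SRC_ALPHA, GL11.GL_ONE_MINUS_SRC_ALPHA);
GL11.glColor3f(1.0f, 0.6f, 0.6f);                     // setColor(): tint the whole GUI
drawGUITextures();                                    // placeholder for the quad drawing
GL11.glPopMatrix();                                   // popState(): restore modelview...
GL11.glMatrixMode(GL11.GL_PROJECTION);
GL11.glPopMatrix();                                   // ...and projection
GL11.glMatrixMode(GL11.GL_MODELVIEW);
GL11.glPopAttrib();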

I am planning to start an OGL GUI (BSD based!) in the next couple of months as part of the Typhon framework… I already have a speedy (compared to Swing) Java2D/VolatileImage-based GUI that I’ll be shelving for the OGL-based one.

As mentioned in the implementation details in previous responses, I’m looking into making things efficient through buffered texture usage. I agree with B^3 on making the GUI low impact, and also with Kevin on implementing the components you need first. I’ll be saving the bitching table and tree components for the second run!

The way I plan to implement things is slightly modified from phazer’s 3 steps in the original post.

All GUI components have a static and dynamic drawing stage.

The static stage draws whatever elements don’t change into a static texture. For a fader this could be the fader track and any markings indicating position (0 dB, -6 dB, etc.).

The dynamic stage could be the position of a knob on the fader.

  1. Construct the layout and render static elements into a texture.

  2. Create a 2nd working texture as a copy of the static texture. Each frame, overwrite the dirty areas from the static texture and then draw the dynamic parts of each component on top. For a given window of the GUI, all the dynamic parts that are texture based (moving knobs, etc.) can be packed into a separate texture.

There will be twice the amount of texture memory used, but there is a lot these days.

Once I get the preview release of Auriga3D out, I’ll spend a couple of weeks on getting things going. I already have a lot of the functional code working from my current GUI, and the framework is the same…

Really can’t wait for the GL_EXT_render_target extension, but in the meantime I’ll get things rolling by rendering to the color buffer and using glCopyTexSubImage until greener pastures present themselves… :)
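The copy-back step is essentially one call per dirty rectangle (a sketch; guiTexture and the dirty rect variables are hypothetical, and the texture must already be allocated at the right size):

// Sketch: after redrawing a dirty GUI region in the back buffer, cache it in the texture.
GL11.glBindTexture(GL11.GL_TEXTURE_2D, guiTexture);
GL11.glCopyTexSubImage2D(GL11.GL_TEXTURE_2D, 0,
        dirtyX, dirtyY,   // destination offset inside the texture
        dirtyX, dirtyY,   // lower-left corner of the region in the color buffer
        dirtyW, dirtyH);  // size of the dirty rectangle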

Sounds good, Catharsis. I remember you mentioned your GUI lib in some messages you sent. :)

I don’t follow 100% how your algorithm works. Basically, what’s important is that only modified areas of the GUI buffer are repainted. I think your method ensures that, right? Are you rendering to textures? If so, doesn’t that limit the buffer to a square area?

I’ve read a little about pixel buffers, and it seems they could be used for an offscreen GUI buffer which could be copied to the screen using glCopyPixels. I don’t know how to mask the copied pixels or make them alpha transparent, though. A pixel buffer consumes a lot of memory, but there should be enough memory for one.

You’re really, really wasting your time with this caching scheme… don’t do it!!!

Cas :)

Did not read the whole thread:

But take a look at this attempt: www.gui3d.org

Made by a friend, it has basic LayoutManagers, Buttons, Labels, ScrollPane + Scrollbars, etc., ActionListeners, etc… the whole thing is very Swing-like to code.

- Jens

Yes. Only dirty areas are repainted. The nice thing about this system is that it breaks things up between static and dynamic areas of the GUI. A good portion of the GUI is prerendered into a static texture. This static texture is used to copy over the dirty areas, whereupon just the dynamic parts of the GUI are redrawn.

Yes on rendering to textures.

No, it’s not limited to a square area: the extension ARB_texture_non_power_of_two
http://oss.sgi.com/projects/ogl-sample/registry/ARB/texture_non_power_of_two.txt
was promoted to the core in GL 2.0.
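A runtime check from LWJGL might look like this (a sketch, assuming LWJGL’s GLContext/ContextCapabilities; nextPowerOfTwo() is a made-up helper):

// Sketch: use the GUI size directly when NPOT textures are supported, pad otherwise.
ContextCapabilities caps = GLContext.getCapabilities();
boolean npot = caps.OpenGL20 || caps.GL_ARB_texture_non_power_of_two;
int texWidth  = npot ? guiWidth  : nextPowerOfTwo(guiWidth);
int texHeight = npot ? guiHeight : nextPowerOfTwo(guiHeight);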

[quote]I’ve read a little about pixel buffers and it seems they could be used for an offscreen GUI buffer which could be copied to the screen using glCopyPixels. Don’t know how to mask the copied pixels or make them alpha transparent though. A pixel buffer consumes a lot of memory, but there should be enough memory for one.
[/quote]
I am waiting for the EXT_render_target extension:
http://www.opengl.org/resources/features/GL_EXT_render_target.txt

Until then I am skipping pbuffers, as there are too many downsides. While not as efficient, the preview release will render into the color buffer and copy that to a texture whenever a GUI window/component needs to be updated - prior to rendering a full frame if embedded in a 2D/3D engine, or just the GUI in a normal desktop window/frame. This is a temporary solution. I will be looking into abstracting this from the main GUI API end users manipulate, so that true render-to-texture functionality will be used when it becomes available.

I am focusing my efforts on using GL 2.0 as a baseline for my work, and also any special extensions like render_target when they appear. This will make the GUI not backwards compatible, but hey, call it next gen… :)

In regard to rendering the GUI texture transparently, glBlendFunc with one of the CONSTANT_ALPHA blending factors can be used.
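For example (a sketch using the GL 1.4 blend-color entry points; the 0.75f opacity is arbitrary):

// Sketch: blend the whole GUI with one global alpha value.
GL14.glBlendColor(0.0f, 0.0f, 0.0f, 0.75f);  // only the alpha component matters here
GL11.glEnable(GL11.GL_BLEND);
GL11.glBlendFunc(GL14.GL_CONSTANT_ALPHA, GL14.GL_ONE_MINUS_CONSTANT_ALPHA);
// ...draw the GUI quad(s), then restore the usual blend func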

[quote]You’re really, really wasting your time with this caching scheme… don’t do it!!!
[/quote]
Maybe, but I spent about 8 months developing the predecessor with the Java2D/VolatileImage API, and it performs quite well compared to Swing. The only thing I’m changing is moving the rendering to an even more efficient environment. This is an iteration and not building something from scratch; i.e. the concept is proven.

Can you provide a counterexample where the process I outlined is not efficient or makes things more difficult for highly detailed GUIs?

Here is a question, Cas. I am well aware of the one-window context that LWJGL is built for now, but could LWJGL handle multiple windows in the future? This would be for a traditional desktop environment where each window/frame has one GL context that is its entire content.

My examples are anecdotal, I’m afraid. Quix has an editor which repaints its entire screen every frame, and Alien Flux repaints itself every frame, on top of the game (it’s a fully transparent GUI). At one point Flux had a way of detecting whether something had “changed” from one frame to the next, but in the end it was a complete overcomplication.

The one thing Flux doesn’t do is check for totally obscured components (didn’t see the point at the time as all my components are transparent).

If you want to use multiple windows with LWJGL, either use JOGL to create your contexts so you can use AWT, or use SWT and use the LWJGL-SWT adaptor that’s kicking about somewhere.

Having said that - the whole purpose of LWJGL was to simply take over your computer and give you a blank canvas. Develop your own windowing system in OpenGL, like Mac OS X! Or something a bit simpler. But you can do it. You could even implement the entirety of AWT in an LWJGL screen, with frames and dialogs and everything.

Cas :)

[quote]My examples are anecdotal I’m afraid.
[/quote]
Most in-game GUIs don’t need to be very detailed.

Since one area of application that I am creating my framework for is real-time interfaces for audio software (it could be anything, though), I have to match the likes of what you are seeing elsewhere… stuff like this:
http://www.motu.com/products/software/machfive/body.html/images/machfive_full1.jpg

Having to redraw something of that complexity every frame would be high impact.

My Java2D-based GUI can approach this quality level; however, I artistically never went as far as the above picture… Here is a good snapshot of some audio software I have working now:
http://www.egrsoftware.com/picture/picture.cgi?pict=pictures/scream/modules/ambipan/ambipan-1

I’m not interested in actually creating a fully 3D, as in mesh-oriented, representation. A highly efficient 2D-based GUI in GL would be great. It would be neat to add in normal mapping and similar techniques, though, plus displacement mapping when things go there soon.

The idea of having say a super wrist watch on a game character and being able to look at it and have this crazy GUI for it is an application for game engines.

I also have this wild vision of enabling a window with a transparent, crystalline background that blends in the GUI, refracts light, and casts shadows in a game engine, etc. I.e. getting shaders involved… :)

Ok, that is bordering on crazy talk, but it would be kick-ass…

Yep… I support JOGL as well, but do prefer LWJGL, as I have side-by-side comparisons with my codebase. I haven’t looked at the SWT-LWJGL bridge. As long as there isn’t a big hit in performance when redirecting LWJGL between windows, this should do and get things rolling with the Eclipse folks… I’d like to have an AWT-free version of this…

The GUI I’m building can be used in normal windowed frames or in full blank canvas mode (or in a game engine).