"RADEON 9200" in jogl becomes "GDI Generic" in JSR-231?

Using JOGL from http://www.wurmonline.com/client/wurmclient.jnlp :

----------------
OpenGL information:
----------------
JOGL version: 1.1.0-b12
OpenGL vendor: ATI Technologies Inc.
OpenGL renderer: RADEON 9200 Series DDR x86/MMX/3DNow!/SSE
OpenGL version: 1.3.4582 WinXP Release
OpenGL extensions:
    GL_ARB_multitexture
    GL_EXT_texture_env_add
    GL_EXT_compiled_vertex_array 
    [... and many more ...]

Using JSR-231 from http://www.wurmonline.com/newclient/ :

----------------
OpenGL information:
----------------
JOGL version: 1.0.0-beta4
OpenGL vendor: Microsoft Corporation
OpenGL renderer: GDI Generic
OpenGL version: 1.1.0
OpenGL extensions:
    GL_WIN_swap_hint
    GL_EXT_bgra
    GL_EXT_paletted_texture 

The latter one loads JSR-231 from the JNLP using the following line:

Obviously, it’s kinda hard to play the game using the horrible software OpenGL implementation. :wink:

Is there anything I can do to fix this?

What are the GLCapabilities you’re requesting? What happens if you use the default GLCapabilities? Are you using the DefaultGLCapabilitiesChooser or your own?

Can you run the game standalone rather than via Java Web Start? Or specify -Djogl.debug.DefaultGLCapabilitiesChooser via your JNLP file?
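
For reference, a system property like that goes into the resources section of the JNLP; something along these lines (illustrative only, adjust it to fit your own JNLP file):

    <resources>
        <!-- turn on debug output from the default capabilities chooser -->
        <property name="jogl.debug.DefaultGLCapabilitiesChooser" value="true"/>
    </resources>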

Ah, interesting.

I was doing this:

        GLCapabilities glCaps = new GLCapabilities();
        glCaps.setRedBits(8);
        glCaps.setBlueBits(8);
        glCaps.setGreenBits(8);
        glCaps.setAlphaBits(8);
        glCaps.setDepthBits(16);

Would setting “glCaps.setHardwareAccelerated(true);” make it never ever return the software one?

No. The hardware-accelerated bit is only set (on some platforms) on the GLCapabilities objects handed to the chooser, as an advisory measure, and it isn’t robust enough to rely on. On the GLCapabilities you pass in, it’s ignored when the window system’s recommended choice is determined.

Where do you stand at this point? If you avoid touching the GLCapabilities do you get hardware acceleration? I suspect touching the number of depth bits may be causing your problem. The default there is 24 bits.
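
In other words, as a sanity check, try just the defaults with nothing overridden (a sketch; the exact GLCanvas constructors depend on which JSR-231 beta you’re on):

    // all defaults, including the 24-bit depth buffer mentioned above
    GLCapabilities glCaps = new GLCapabilities();
    GLCanvas canvas = new GLCanvas(glCaps);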

I’m surprised there’s a behavioral difference here. I suspect the JSR-231 code may be taking a different code path (ChoosePixelFormat) than the old JOGL 1.1 code (wglChoosePixelFormatARB), and that ATI’s drivers behave differently with wglChoosePixelFormatARB. Unfortunately, a memory leak in ATI’s drivers forced us to use ChoosePixelFormat by default when multisampling is not requested.

The behavioral difference most likely comes from the fact that I used my own GLCapabilitiesChooser in the old Wurm version. :wink:

I tried to get a hold of the guy who had the problems, but it was very difficult to run proper tests. :-\

Anyway, does the GLCapabilitiesChooser try to use exactly 16 bits if you specify that, and not just 16 bits and upwards? If so, how does one say “I require at least an 8-bit stencil buffer”, for example?
It’s pretty dangerous/bug-prone for the default GLCapabilities handling to fall back to the software implementation when better hardware-accelerated formats than the ones you specified are available…

Ah, well, I’ll roll my own chooser again. I’ll attach the code for your pleasure (it’s kinda Wurm-specific, but could probably be reused by others with some tweaks):

package com.wurmonline.client;

import javax.media.opengl.GLCapabilities;
import javax.media.opengl.GLCapabilitiesChooser;

public class WurmCapabilitiesChooser implements GLCapabilitiesChooser
{
    public int chooseCapabilities(GLCapabilities desired, GLCapabilities[] available, int systemChoice)
    {
        // The window system may not recommend anything (systemChoice == -1),
        // so start from the first entry in that case and let isBetter() take over.
        int chosen = (systemChoice >= 0 && systemChoice < available.length) ? systemChoice : 0;
        for (int i = 0; i < available.length; i++)
        {
            if (isBetter(available[chosen], available[i])) chosen = i;
        }
        return chosen;
    }

    private boolean isBetter(GLCapabilities oldCaps, GLCapabilities newCaps)
    {
        // Entries in the available array may be null
        if (newCaps == null) return false;
        if (oldCaps == null) return true;

        // ALWAYS trade up to hardware acceleration and double buffering if possible
        if (!oldCaps.getHardwareAccelerated() && newCaps.getHardwareAccelerated()) return true;
        if (!oldCaps.getDoubleBuffered() && newCaps.getDoubleBuffered()) return true;

        // NEVER trade away hardware acceleration or double buffering
        if (oldCaps.getHardwareAccelerated() && !newCaps.getHardwareAccelerated()) return false;
        if (oldCaps.getDoubleBuffered() && !newCaps.getDoubleBuffered()) return false;

        // Otherwise prefer the format with more bits
        if (newCaps.getDepthBits() > oldCaps.getDepthBits()) return true;
        if (newCaps.getRedBits() > oldCaps.getRedBits()) return true;
        if (newCaps.getGreenBits() > oldCaps.getGreenBits()) return true;
        if (newCaps.getBlueBits() > oldCaps.getBlueBits()) return true;
        if (newCaps.getAlphaBits() > oldCaps.getAlphaBits()) return true;
        if (newCaps.getStencilBits() > oldCaps.getStencilBits()) return true;
        return false;
    }
}
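
For completeness, this is roughly how I plug it in (a sketch; the exact GLCanvas constructor may differ between JSR-231 betas):

    GLCapabilities glCaps = new GLCapabilities();
    glCaps.setDepthBits(16); // still state the minimum Wurm needs
    // route the selection through the chooser above instead of the default one
    GLCanvas canvas = new GLCanvas(glCaps, new WurmCapabilitiesChooser(), null, null);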

I have a 9200. If you would like me to test the new configuration, I’d be more than happy to; just drop me a PM.

DP

In general the capabilities requested are used as a minimum specification. By default JOGL just uses whatever the window system-specific pixel format selection algorithm returns; on Windows this is ChoosePixelFormat, on X11 platforms glXChooseVisual, and on Mac OS X the NSOpenGLPixelFormat allocation routine.
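
So, to take the earlier stencil example, asking for “at least 8 stencil bits” is simply (a sketch):

    GLCapabilities caps = new GLCapabilities();
    // requested values act as a minimum, so the selected format should have 8 or more stencil bits
    caps.setStencilBits(8);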

Ah, I see. That makes sense. :slight_smile: