MULTISAMPLE support, cross-platform antialiasing

Hi all,
First, thanks for being interested in my question :slight_smile: I hope you'll be able to help me, and I think many other more or less advanced OpenGL graphics programmers trying to use JOGL :wink:

General discussion

Antialiasing is a major and recurring trade-off for game engine programmers who want to control the quality of their final frames.
I have read many OpenGL books, online tutorials and other papers, and so discovered the major techniques, but without detailed comparisons or statistics, so I'm still a bit unclear on how to let my end user click "enable antialiasing" and have it just work on a non-mammoth PC ;D

My first idea was to take advantage of the GL_MULTISAMPLE core extension, so I quickly tested whether it was available on my ATI Radeon 9700 / WinXP / latest Catalyst drivers. I created a context with multisample capabilities enabled (2 samples) and queried for GL_MULTISAMPLE (and _ARB) availability. To my amazement it was not available, even though ATI is one of the major OpenGL contributors. So once again it is the HELL of vendor specifics: they refuse to provide a standard driver interface and seem determined (I know they are lol) to drive programmers into an early grave >:( ;D

My goal is to provide an EFFICIENT and very portable full-scene antialiasing (FSAA) method for my game engine (especially for filled polygons).
I want it such that most recent home PCs (Windows, Apple, Linux) with at least a GeForce2-generation card (ATI or other vendor equivalent) and its latest drivers can run it! That is obviously why I chose JOGL!!!

So what I'm looking for is either a single efficient solution that works everywhere (the ideal, though I have a premonition there is none lol), or an enumeration of the major cases to deal with.

So there are three major constraints:

  • efficiency of the OpenGL rendering algorithm/method used
  • the hardware extension support each algorithm needs (and each major vendor's support for it on every graphics generation from the GeForce2 up); see the runtime check sketched just after this list
  • JOGL support for obtaining a GL context able to run the algorithm (in fact JOGL may not be able to provide a proper context even where system-dependent C code could get one; I'm not sure about that from all the JOGL docs and discussions I've read)
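For reference, a runtime check like that could look roughly like this; only glGetString is used, and the net.java.games.jogl package name plus the exact GLEventListener method set (including destroy, mirroring the listener code later in this thread) are assumptions that may differ between JOGL builds:

import net.java.games.jogl.*;

// Rough sketch of a runtime multisample-support check.
public class MultisampleSupportCheck implements GLEventListener {
    public void init(GLDrawable drawable) {
        GL gl = drawable.getGL();
        String version    = gl.glGetString(GL.GL_VERSION);
        String extensions = gl.glGetString(GL.GL_EXTENSIONS);
        // rough lexicographic test, good enough for 1.x version strings
        boolean coreMultisample = version != null && version.compareTo("1.3") >= 0;
        boolean arbMultisample  = extensions != null
                                  && extensions.indexOf("GL_ARB_multisample") >= 0;
        System.out.println("GL_VERSION = " + version);
        System.out.println("multisample: core (>= 1.3) = " + coreMultisample
                           + ", GL_ARB_multisample = " + arbMultisample);
    }
    public void display(GLDrawable drawable) {}
    public void reshape(GLDrawable drawable, int x, int y, int width, int height) {}
    public void destroy(GLDrawable drawable) {}
    public void displayChanged(GLDrawable drawable, boolean modeChanged, boolean deviceChanged) {}
}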

I hope this discussion will clearly lay out how to achieve this with OpenGL in general, and more precisely within the JOGL binding, with the help of some experienced programmers or professors ;D I guess there's a lot to talk about :o lol

Sebastien Wirth

Funny choice that, what with LWJGL having built-in multisampling with just a constructor argument. :stuck_out_tongue:

Multisampling seems to be really easy to enable with JOGL: there is only one GLCapabilities setting to turn on, and you pass that GLCapabilities object to the GLDrawableFactory when creating the GLCanvas.
I think the problem is that my ATI OpenGL drivers don't implement the GL_MULTISAMPLE extension, or could it be JOGL's fault???
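For reference, that setup looks roughly like this (a sketch only: the net.java.games.jogl package name, the setSampleBuffers/setNumSamples setters and the one-argument createGLCanvas overload are assumed from this JOGL release and may differ in others):

import java.awt.Frame;
import net.java.games.jogl.*;

// Sketch: request a multisample-capable canvas through GLCapabilities.
public class MultisampleRequestSketch {
    public static void main(String[] args) {
        GLCapabilities caps = new GLCapabilities();
        caps.setSampleBuffers(true);   // ask for a multisample pixel format
        caps.setNumSamples(4);         // 4x FSAA if the driver offers it

        // the factory picks a pixel format matching (or approximating) caps
        GLCanvas canvas = GLDrawableFactory.getFactory().createGLCanvas(caps);

        Frame frame = new Frame("FSAA request sketch");
        frame.add(canvas);
        frame.setSize(300, 300);
        frame.show();
    }
}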

This should work. There may be a problem in the default GLCapabilitiesChooser’s pixel format selection algorithm. Could you try writing a GLCapabilitiesChooser that prints out each GLCapabilities object it receives and see whether FSAA is supported for any of them? Also feel free to file a bug with the JOGL Issue Tracker on the JOGL home page – we should probably already have such a DebugGLCapabilitiesChooser for these situations.
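A chooser along those lines might look roughly like this (a sketch: the chooseCapabilities signature, the DefaultGLCapabilitiesChooser base class and the possibility of null entries in the array are assumptions about this JOGL release):

import net.java.games.jogl.*;

// Sketch of a debugging chooser: print every pixel format offered to us,
// then fall back to the default selection algorithm.
public class DebugGLCapabilitiesChooser extends DefaultGLCapabilitiesChooser {
    public int chooseCapabilities(GLCapabilities desired,
                                  GLCapabilities[] available,
                                  int windowSystemRecommendedChoice) {
        for (int i = 0; i < available.length; i++) {
            GLCapabilities caps = available[i];
            if (caps == null) continue; // unusable formats may show up as null
            System.out.println("format " + i
                               + ": sampleBuffers=" + caps.getSampleBuffers()
                               + ", numSamples="    + caps.getNumSamples()
                               + ", hwAccelerated=" + caps.getHardwareAccelerated());
        }
        return super.chooseCapabilities(desired, available, windowSystemRecommendedChoice);
    }
}

Such a chooser can be passed as the second argument of the four-argument createGLCanvas call that appears later in this thread.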

First, thank you Ken for your answer. I'm happy there are not only wild LWJGL preachers on the JOGL forum ;D but also some nice altruistic people like you. I had to say that :wink:

So I wrote a GLCapabilitiesChooser to test each available GLCapabilities for multisample support with getSampleBuffers(). Result: none of them report it.

I looked for my issue in the Issue Tracker and I found it (…almost lol). There was also a problem with the ATI WGLContext, but that was fixed back in May and my JOGL build is the latest August one. I checked the source: the patch has been applied and WGL_ARB_multisample is supported.

I'll report the bug, but if you have an idea I'll gladly take it :slight_smile:

tc

Would you try one of the earlier betas, like 1.1 b04? There is a chance that some of the new pixel format selection code on Windows may need some debugging.

I'll check them as soon as possible :slight_smile:

Hi Ken
I hope you'll get this mail. I'm sorry I couldn't answer sooner, as I have tons of work :stuck_out_tongue:

I saw my issue didn't get an answer, but it was surely filed very badly, as I am a bit confused by the Issue Tracker ;D

So here is a summary of my first round of work on the matter, in our new nested location lol

The problem came from the multisample demo (which demonstrates multisampling on lines), which could not create a multisample-capable context because it could not perform proper pixel format selection. For the record: ATI Radeon 9700, WinXP, JVM 1.4.2.

The problem occurred on my brother's GeForce2 GTS too.

An ATI bug had (almost) been fixed by creating a dummy context in order to perform the WGL pixel format selection.

The problem comes down to this:
when we have to perform WGL pixel format selection, we NEED access to a dummy GL.

The proposed solution was "wrong" because the dummy initialisation is invoked later on the AWT EventQueue, so a usable dummy GL (and therefore multisampling) only exists for the SECOND created canvas, not the first one!!

So this dummy GL needs to be created beforehand, or created on the thread within which we are creating our DESIRED (multisample-enabled) context.

The first option is surely bad, because the dummy GLContext should be tied to one particular GraphicsDevice, and a static init may be bad.

The second option seems problematic because of deadlocks, the usual ATI/context/AWT nightmares ;D
But I chose to try it that way, forcing things a bit, and it seems to work very well!!!

For the record, I chose to synchronize on the "device -> dummy under creation" map to prevent bad scenarios during dummy creation. The code is really close to the latest source; maybe you'll check it yourself or hand it to the expert(s) working on this.

HERE IS THE CODE, in WindowsContextFactory if I remember correctly:

public static GL getDummyGLContext(final GraphicsDevice device) {
    GL gl = (GL) dummyContextMap.get(device);
    if (gl != null) {
        return gl;
    }
    
    synchronized(pendingContextSet){
        // The dummy canvas's own pixel format selection would request a dummy
        // again; returning null here breaks that recursion.
        if(pendingContextSet.contains(device))
            return null;
        // mark this device as having a dummy context under construction
        pendingContextSet.add(device);
    }
    GraphicsConfiguration config = device.getDefaultConfiguration();
    final Dialog frame = new Dialog(new Frame(config), "", false, config);
    frame.setUndecorated(true);
    final GLCanvas canvas =
        GLDrawableFactory.getFactory().createGLCanvas(new GLCapabilities(), null, null, device);
    canvas.addGLEventListener(new GLEventListener() {
        public void init(GLDrawable drawable) {
            dummyContextMap.put(device, drawable.getGL());
            synchronized(pendingContextSet){
                // no longer pending: the dummy has just been put in the map
                pendingContextSet.remove(device);
            }
            String availableGLExtensions = "";
            String availableWGLExtensions = "";
            String availableEXTExtensions = "";
            try {
                availableWGLExtensions = drawable.getGL().wglGetExtensionsStringARB(WGL.wglGetCurrentDC());
            } catch (GLException e) {}
            try {
                availableEXTExtensions = drawable.getGL().wglGetExtensionsStringEXT();
            } catch (GLException e) {}
            availableGLExtensions = drawable.getGL().glGetString(GL.GL_EXTENSIONS);
            dummyExtensionsMap.put(device, availableGLExtensions + " " + availableEXTExtensions + " " + availableWGLExtensions);
        }
        
        public void display(GLDrawable drawable) {}            
        public void reshape(GLDrawable drawable, int x, int y, int width, int height) {}
        public void destroy(GLDrawable drawable) {}
        public void displayChanged(GLDrawable drawable, boolean modeChanged, boolean deviceChanged) {}
    });
    
    // We want the dummy created now

    canvas.setSize(0, 0);
    canvas.setNoAutoRedrawMode(true);
    canvas.setAutoSwapBufferMode(false);
    frame.add(canvas);
    frame.pack();
    frame.show(); // the dummy GLContext init has been "forced" for ATI at this step

    // Do NOT call canvas.display() on ATI or a deadlock is guaranteed!
    // BUT NVIDIA needs one; I don't know about other vendors' drivers :(
    if (!SingleThreadedWorkaround.doWorkaround())
        canvas.display();
    
    EventQueue.invokeLater(new Runnable() {
        public void run() {
            frame.dispose();
        }
    });
    
    return (GL) dummyContextMap.get(device);
}

I checked on my brother's NVIDIA GeForce2 GTS and it works as well there as on my ATI Radeon 9700 on WinXP. It is stable: nothing went wrong in at least 50 runs on each config.

I hope this is good and that a new release will follow after validation and/or correction by the experts :slight_smile:

There is something strange about the GeForce2 GTS:

  • I have OpenGL 1.5 driver support on XP
  • multisample has been a core feature since OpenGL 1.3, yet the ARB extension we have to use (core or ARB status ???) is reported as not available
  • the NVIDIA graphics control panel on XP lets me configure it (application-controlled / off / 2x / 4x), yet WGL_ARB_multisample is never available through JOGL ??? (see the little probe sketched just after this list)
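A little probe like the one mentioned in the last point could query the live context directly (a sketch: the ARB-suffixed constants and the array-based glGetIntegerv(int, int[]) signature are assumptions about this JOGL release, and the queries will raise GL_INVALID_ENUM on drivers that lack the extension):

import net.java.games.jogl.*;

// Sketch: ask the created context itself for its multisample state.
public class SampleBufferProbe implements GLEventListener {
    public void init(GLDrawable drawable) {
        GL gl = drawable.getGL();
        int[] sampleBuffers = new int[1];
        int[] samples       = new int[1];
        gl.glGetIntegerv(GL.GL_SAMPLE_BUFFERS_ARB, sampleBuffers);
        gl.glGetIntegerv(GL.GL_SAMPLES_ARB, samples);
        System.out.println("GL_SAMPLE_BUFFERS_ARB = " + sampleBuffers[0]
                           + ", GL_SAMPLES_ARB = " + samples[0]
                           + ", GL_MULTISAMPLE_ARB enabled: "
                           + gl.glIsEnabled(GL.GL_MULTISAMPLE_ARB));
    }
    public void display(GLDrawable drawable) {}
    public void reshape(GLDrawable drawable, int x, int y, int width, int height) {}
    public void destroy(GLDrawable drawable) {}
    public void displayChanged(GLDrawable drawable, boolean modeChanged, boolean deviceChanged) {}
}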

If anyone has any advice about this, it is welcome ;D
And remember, I posted my fix for the damn Windows pixel format selection just above ;D

ciao all, tc :slight_smile:

Interestingly, when I run the NVidia Pixel Format program on my machine (Quadro FX Go700 with Windows XP), all of the pixel formats that support FSAA can only render to pbuffers, not to an on-screen window. The same machine when using NVidia’s drivers on Linux does support FSAA to on-screen visuals. If you can find a small C program that shows that FSAA is possible where JOGL claims it is not, then please file a bug with the JOGL Issue Tracker and we’ll try to fix the problem in JOGL’s pixel format selection code.