Okay, I see. The CGL libs are vastly different from what you guys have to deal with. There is no resetDisplayMode() equivalent on OSX: when your application exits, the OS automatically reverts to the resolution it was in before - anything you do to the display is transient.
My Display core doesn’t do much other than change resolution, because on OSX I have to both capture the display:
CGDisplayCapture( kCGDirectMainDisplay );
and then release the display:
CGReleaseAllDisplays();
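For completeness, here’s a minimal sketch of that capture/release pairing with error checking added (the check is my own addition, not what the code currently does):

CGDisplayErr err = CGDisplayCapture( kCGDirectMainDisplay );
if ( err != CGDisplayNoErr )
{
    /* the display could not be captured; bail out before touching the mode */
}
/* ... switch modes, create the fullscreen context, run ... */
CGReleaseAllDisplays();   /* undoes the capture; the OS restores the original mode */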
Because of the way this works, I’ve put the capture in the OpenGL code as opposed to the Display code: if it lived in the Display code, it would lock the machine on exit, since there would be no opportunity to release the display by calling CGReleaseAllDisplays().
As such, Display_init is pretty much just a passthrough call, since the OS is going to handle the stashed resolution information itself. I’ll have it mimic what you guys are doing (getting the current display mode and storing it) for shits and grins, but for me it’s unnecessary.
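If I do mimic it, the stash would look something like this (savedDisplayMode is a hypothetical variable, just to illustrate):

static CFDictionaryRef savedDisplayMode;   /* hypothetical stash, never needed on OSX */

/* grab the mode the user was in before we start switching */
savedDisplayMode = CGDisplayCurrentMode( kCGDirectMainDisplay );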
Display_resetDisplayMode() for me is an empty-body function.
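Something like this, modulo whatever the actual native signature ends up being (I’m glossing over that here):

void Display_resetDisplayMode( void )
{
    /* intentionally empty: releasing the captured display via
       CGReleaseAllDisplays() restores the original mode on OSX */
}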
The way the API is oriented makes the way you want to deal with switching display modes extremely awkward on OSX under Core Graphics Direct Display. Setting the display mode for me means invoking:
CFDictionaryRef displayMode;

/* ask for the mode closest to the requested parameters */
displayMode = CGDisplayBestModeForParametersAndRefreshRate( kCGDirectMainDisplay,
                                                            bpp, width, height, freq,
                                                            NULL );
CGDisplaySwitchToMode( kCGDirectMainDisplay, displayMode );
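One wrinkle worth noting: that last parameter can take a pointer to a boolean_t that reports whether the match was exact, so a more defensive version (a sketch, not what I have now) would be:

boolean_t exactMatch;

displayMode = CGDisplayBestModeForParametersAndRefreshRate( kCGDirectMainDisplay,
                                                            bpp, width, height, freq,
                                                            &exactMatch );
if ( !exactMatch )
{
    /* the dictionary describes the closest available mode,
       not the one that was asked for */
}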
I can’t do this until I capture the display, which as stated before has to happen in the GL code, because otherwise I won’t be able to release it. Because of this, you won’t ever be able to swap the display mode properly on OSX under the current architecture: the state of the box is ambiguous at the point setDisplayMode is called. The paradigm of separating the Display stuff into one class and the GL stuff into another makes it very difficult for me to make things work properly, because CGL is so straightforward that you just don’t split things up that way. The flow for OSX is:
/* capture the display and take over the cursor */
CGDisplayCapture( kCGDirectMainDisplay );
CGDisplayHideCursor( kCGDirectMainDisplay );
CGDisplayMoveCursorToPoint( kCGDirectMainDisplay, CGPointZero );
CGAssociateMouseAndMouseCursorPosition( FALSE );

/* switch to the requested display mode */
CFDictionaryRef displayMode;
displayMode = CGDisplayBestModeForParametersAndRefreshRate( kCGDirectMainDisplay,
                                                            bpp, width, height, freq,
                                                            NULL );
CGDisplaySwitchToMode( kCGDirectMainDisplay, displayMode );

/* create a fullscreen OpenGL context on the captured display */
CGLPixelFormatAttribute attribs[2];
CGLPixelFormatObj pixelFormatObj;
CGLContextObj contextObj = NULL;
long numPixelFormats;

attribs[0] = kCGLPFAFullScreen;
attribs[1] = NULL;
CGLChoosePixelFormat( attribs, &pixelFormatObj, &numPixelFormats );
if ( pixelFormatObj != NULL )
{
    CGLCreateContext( pixelFormatObj, NULL, &contextObj );
    CGLDestroyPixelFormat( pixelFormatObj );
    CGLSetCurrentContext( contextObj );
    CGLSetFullScreen( contextObj );
}

// do LWJGL stuff

/* tear the context down */
if ( contextObj != NULL )
{
    CGLSetCurrentContext( NULL );
    CGLClearDrawable( contextObj );
    CGLDestroyContext( contextObj );
}

/* give the cursor and the display back */
CGAssociateMouseAndMouseCursorPosition( TRUE );
CGDisplayShowCursor( kCGDirectMainDisplay );
CGReleaseAllDisplays();
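The only clean grouping I can see is to treat everything between the capture and the release as one unit on the GL side, roughly like this (the function names are hypothetical, just to show the shape):

/* hypothetical pairing: both halves live in the GL code,
   so the release always gets a chance to run */
static CGLContextObj contextObj = NULL;

static void macosx_enterFullScreen( long bpp, long width, long height, long freq )
{
    CGDisplayCapture( kCGDirectMainDisplay );
    /* ... hide the cursor, switch modes, create the context, as above ... */
}

static void macosx_leaveFullScreen( void )
{
    /* ... destroy the context, show the cursor, as above ... */
    CGReleaseAllDisplays();
}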
And that’s pretty much it. Help me understand how this fits cleanly into the architecture now. It would have before, but now things are happening a lot differently, and that’s causing some issues. It may be that the OSX APIs are now ‘too’ clean, and all the extra steps the other platforms need are what’s causing me trouble.
