A couple of opengl + Linux related questions

Hi,
I recently began experimenting with 1.5’s opengl pipeline under linux and am confused about a couple of things.

First, (and this may be a bit of a noob question) does enabling the opengl pipeline differ between setting the property as a JVM parameter via

-Dsun.java2d.opengl=True

vs setting the system property in main() with

System.setProperty("sun.java2d.opengl", "True");

I was under the impression that calling System.setProperty as early as possible was the correct way to set Java2D properties. However, if I do it with System.setProperty, I don’t get the warm and fuzzy “OpenGL pipeline enabled for default config on screen 0” output.
Does the System.setProperty just not output the verbose messages, or can the opengl pipeline only be set by passing it as a JVM parameter?

Second, as I mentioned, I get an “OpenGL pipeline enabled for default config on screen 0” message. However, I also get a “Could not enable OpenGL pipeline for default config on screen 1”. So I am wondering what this implies as far as my hardware goes. I am using a dual monitor setup with an Nvidia card using Twinview. Does this mean that opengl is disabled for one of the displays? If I want hardware acceleration, should I be checking that my window remains completely within the bounds of screen 0? My guess is that Nvidia’s Twinview abstracts away the second monitor and treats screen 0 as one large, opengl-accelerated display, rendering screen 1 as reported by Java irrelevant, but this is just a guess. Does anyone know for certain?
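
(In case it helps, here is a rough sketch of how I imagine such a bounds check might look; I haven’t confirmed it is actually necessary, and the class name is just for illustration:)

import java.awt.GraphicsEnvironment;
import java.awt.Rectangle;
import java.awt.Window;

public class ScreenCheck {
    // Returns true if the window lies entirely within the bounds of the
    // default (primary) screen device.
    static boolean isOnPrimaryScreen(Window w) {
        Rectangle primary = GraphicsEnvironment.getLocalGraphicsEnvironment()
                .getDefaultScreenDevice().getDefaultConfiguration().getBounds();
        return primary.contains(w.getBounds());
    }
}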

Third, when watching the “sun.java2d.trace” output in Windows I know to look for “D3DBlitLoops” to know that it’s accelerated, but I’m not sure what to look for in Linux. I am seeing a lot of [quote]sun.awt.X11PMBlitLoops::Blit(“Integer RGB Pixmap”, SrcNoEa, “Integer RGB Pixmap”)
[/quote]
which I’m guessing are not hardware accelerated, based on the fact that they have “X11” in there. Is this a correct assumption? What should I be seeing as far as OpenGL-accelerated messages go from the java2d.trace feature?

Anyway, just trying to get a grasp on a few basics to know which way I should go next. Thanks for any insight.

First, setProperty vs -D: depending on where you put the setProperty, it may be too late for us to initialize the opengl pipeline.

It should be set before any of the awt/2d code is executed,
which in some cases may not be as easy to determine.
Consider this example:


import java.awt.GraphicsEnvironment;

public class Example {
    // This static initializer runs before main() and already pulls in AWT/2D.
    static GraphicsEnvironment ge = GraphicsEnvironment.getLocalGraphicsEnvironment();

    public static void main(String[] args) {
        System.setProperty("sun.java2d.opengl", "True");
        // do stuff
    }
}

In this case, even though we set the property as the first thing in main(), it’s already too late: the toolkit has been initialized by the getLocalGraphicsEnvironment() call, because static class members are initialized before main() is executed.

One thing to consider is that you really shouldn’t be setting the opengl property unconditionally. There are many cases where it’ll make your application unusable for the user - what if they have old/bad drivers? The artifacts could be severe: from nothing being rendered to system lockups.
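
For example, a minimal sketch that sets the property only when the user asks for it, and before any AWT/2D class gets loaded (the --opengl flag name is just for illustration):

import java.awt.GraphicsEnvironment;

public class Main {
    public static void main(String[] args) {
        // Decide on the property before touching any AWT/Java2D classes,
        // and only when the user explicitly opted in.
        for (String arg : args) {
            if (arg.equals("--opengl")) {
                System.setProperty("sun.java2d.opengl", "True");
            }
        }
        // Only now initialize AWT/Swing.
        GraphicsEnvironment ge =
            GraphicsEnvironment.getLocalGraphicsEnvironment();
        // do stuff
    }
}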

Regarding the multiscreen question: depending on how your multiscreen setup is configured, it may or may not work on non-primary screens. If you’re using xinerama, it should work on both screens (even if it reports that it can’t enable opengl for the secondary screen). But if I’m not mistaken, currently for non-xinerama cases the opengl pipeline can only be enabled on a primary screen.

About the tracing. If the opengl pipeline is enabled, you’ll see a bunch of OGL* primitives, like OGLDrawLine, OGLBlitLoop, etc - pretty easy to spot.

Thanks,
Dmitri
Java2D Team

Thank you, Dmitri, for your help. Taking into consideration what you described, I tried a few more things out. Turns out, my assumption about which display device was the primary display (i.e. Display 0) in my Linux setup was incorrect. I have a digital flat panel and an analog CRT connected, and it turns out that the analog CRT is actually the primary display; upon dragging my opengl-enabled window to that physical display, I begin to see OGL calls in my trace output. If the window is on the second display (i.e. Display 1), it appears that OpenGL acceleration stops and the X11 pipeline takes over. So, when it says it didn’t enable opengl on screen 1, I guess it really means it!

However, I still don’t think everything is behaving properly. According to the NVidia Linux driver documentation, TwinView should behave as follows:

[quote]- The NVIDIA driver conceals all information about multiple display devices from the X server; as far as X is concerned, there is only one screen.

- Both display devices share one frame buffer. Thus, all the functionality present on a single display (e.g. accelerated OpenGL) is available on TwinView.
[/quote]
It seems to me the exact opposite is happening with Java, i.e. it is seeing two distinct displays when it should only be seeing one.

There is an option called “NoTwinViewXineramaInfo” that disables Twinview’s xinerama information. [quote]When in TwinView, the NVIDIA X driver normally provides a Xinerama extension that X clients (such as window managers) can use to discover the current TwinView configuration. Some window managers can get confused by this information, so this option is provided to disable this behavior.
[/quote]
So when I enable the NoTwinViewXineramaInfo option, I effectively get only one very large, opengl-accelerated screen called “screen 0”, and my opengl window runs via the OpenGL pipeline on both physical displays. Unfortunately, though, this cripples the Linux window manager, which no longer honors the bounds of the physical displays.

As such, it seems to me that either Java is asking the Nvidia drivers the wrong question when querying the opengl capabilities of the physical device or the NVidia drivers are providing back the wrong answer.

As far as enabling opengl with System.setProperty, you were right. I hadn’t realized static 2D calls were being made elsewhere in advance of the static main method. So I’ll have to rethink how I allow the user to adjust that property. I realize that OpenGL isn’t enabled by default for a reason, but unlike Windows, which has multiple fallback methods, there seems to be no good, accelerated alternative to X, so I intend to provide the user with the ability to run in some sort of safe mode using the X11 pipeline if they notice display problems.

Thanks for a bunch of useful information and the investigation.

So, it looks like by default the X server runs in xinerama mode (and they provide an option to turn it off and just run in ‘true twinview’ mode, where even X isn’t aware that there are two screens).

To a ‘normal’ X application, xinerama looks like a single large display, but there are specific APIs which allow getting xinerama-related info.

A couple of releases ago AWT implemented a feature so that java applications get access to the separate graphics devices in xinerama mode. I don’t have the bug id handy, but you can try to look it up. It doesn’t have anything to do with opengl, though.

Just print out the GraphicsDevice array that you can get from the GraphicsEnvironment; there should be two in the xinerama case, with or without the opengl pipeline.
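
For example, a minimal sketch:

import java.awt.GraphicsDevice;
import java.awt.GraphicsEnvironment;

public class ListScreens {
    public static void main(String[] args) {
        GraphicsEnvironment ge = GraphicsEnvironment.getLocalGraphicsEnvironment();
        for (GraphicsDevice gd : ge.getScreenDevices()) {
            // Prints each screen's id string and its bounds in the
            // virtual device coordinate system.
            System.out.println(gd.getIDstring() + " " +
                    gd.getDefaultConfiguration().getBounds());
        }
    }
}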

The way you’re thinking about letting the user choose the pipeline is a good approach.

One way to do it is to present the user with a dialog that allows them to choose the pipeline, and then exec another Java process, passing the properties on the command line (and quit the ‘preferences dialog’ process).
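
A rough sketch of that approach (the main class name below is just a placeholder):

import java.io.IOException;

public class PipelineLauncher {
    // Relaunch the application in a new JVM with the chosen pipeline property,
    // then quit the 'preferences dialog' process.
    static void relaunch(boolean useOpenGL) throws IOException {
        String javaBin = System.getProperty("java.home") + "/bin/java";
        ProcessBuilder pb = new ProcessBuilder(
                javaBin,
                "-Dsun.java2d.opengl=" + (useOpenGL ? "True" : "false"),
                "-cp", System.getProperty("java.class.path"),
                "com.example.MainApp");   // placeholder for the real main class
        pb.start();
        System.exit(0);
    }
}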

Thanks,
Dmitri

Ok, I think I found the AWT bug you were describing here: http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=4356756

However, I don’t think AWT being able to detect the physical displays is really the problem. If my understanding of NVidia’s documentation is correct, there are two distinct ways to enable a multiple-display desktop with X (as well as OpenGL-accelerated 2D). The first way, which is a legacy or deprecated mode as far as NVidia is concerned, is to run two instances of the NVidia driver, which creates two distinct desktops controlled by the X11 server. In this mode, NVidia can provide opengl acceleration to only one physical display, while the second display will not have opengl acceleration.

The second method, which I’m trying to take advantage of, and the one which does not seem to work properly with Java, involves NVidia’s twinview. In this mode, NVidia is able to provide opengl acceleration to two physical display devices at the same time while still providing xinerama extensions. They do this by providing one large, opengl-accelerated desktop behind the scenes and then adding a twinview/xinerama layer in between that enables applications (including the window manager) to “see” two distinct displays and act accordingly, while still being able to take advantage of opengl acceleration on those two displays.

It seems to me that when the Java 2D opengl pipeline initializes, it sees the NVidia twinview/xinerama layer and correctly detects two physical displays, but that it is not correctly determining that both physical devices can be opengl accelerated. By enabling the “NoTwinViewXineramaInfo”, the Nvidia twinview/xinerama layer no longer provides physical device information and only at this time does the Java 2D opengl pipeline code work correctly on both physical devices.

So, am I off my rocker in my understanding? Or should this be something I should consider filing a bug report for?

[quote]It seems to me that when the Java 2D opengl pipeline initializes, it sees the NVidia twinview/xinerama layer and correctly detects two physical displays, but that it is not correctly determining that both physical devices can be opengl accelerated. By enabling the “NoTwinViewXineramaInfo”, the Nvidia twinview/xinerama layer no longer provides physical device information and only at this time does the Java 2D opengl pipeline code work correctly on both physical devices.

So, am I off my rocker in my understanding? Or should this be something I should consider filing a bug report for?
[/quote]
Hi there,

As you’ve discovered, there is a whole slew of ways to enable and tweak Xinerama mode on Linux depending on drivers and so on. We haven’t covered all these configurations in our testing, and are aware of some other issues related to OGL+Xinerama. So it looks like you’ve stumbled into a real issue worth investigating. Could you please file a bug report and include your findings (along with details about which Xserver and driver flags are used in your configuration)? I could try to reproduce your configuration in house and it should be easy to see why OGL can’t be enabled on your second screen.

Thanks,
Chris