How to reduce video "tearing" (video texturing) / V-Sync ??

Hi,

I have written an OpenGL-based video renderer as a plugin for the Java Media Framework (JMF) which is working great, apart from one aspect: when the video scene changes dramatically (e.g. camera pans or sweeps in a movie), the video exhibits serious tearing (http://en.wikipedia.org/wiki/Page_tearing).

My GL Drawable component is repainted upon each media frame, but obviously I need V-Sync (opengl screen redraws must be synchronised with the video monitor refresh rate).

I don’t know if I’ve been searching ineffectively, but I could find no information anywhere on doing something like this in JOGL. I can understand that such a feature must depend on native code, but so does OpenGL (JOGL) anyway.

Can anybody point me to information on achieving V-Sync in JOGL?

P.S. More information on my renderer: I use a texture-mapped quad and update the texture data as frames come in (typically 24 fps, as packed byte arrays from a JMF processor). I must commend the JOGL team on the fantastic performance - texture updates are incredibly fast. I render full-resolution DVD video on an older (1.8 GHz P4) Windows machine using around 10% CPU, most of which is consumed by the codec (XviD / MP4 decoding). This allows us to implement a true cross-platform, Java-based media solution with great performance.

thanks,
Dawid Loubser
lancer@ibi.co.za

How did you get XviD/MP4 to work with JMF? I’ve written something similar to what you’ve done, but I never get sound with those :< I only get sound with MPGs.

Call GL.setSwapInterval(1). Unfortunately we don’t have a platform-independent feedback mechanism to tell you whether this call has actually succeeded.
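For reference, the usual place to make that call is in GLEventListener.init(), once the GL context exists. A minimal sketch against the JOGL 1.x (javax.media.opengl) API follows; the class name and the drawing code are hypothetical, and since this needs a live GL context and display it can’t run headless:

```java
import javax.media.opengl.GL;
import javax.media.opengl.GLAutoDrawable;
import javax.media.opengl.GLEventListener;

// Sketch: request vsync as soon as the context is created.
public class VSyncRenderer implements GLEventListener {
    public void init(GLAutoDrawable drawable) {
        GL gl = drawable.getGL();
        // Swap interval 1: buffer swaps wait for the vertical retrace.
        // Note there is no portable way to confirm the driver honoured this.
        gl.setSwapInterval(1);
    }

    public void display(GLAutoDrawable drawable) {
        // draw the texture-mapped quad here
    }

    public void reshape(GLAutoDrawable drawable, int x, int y, int w, int h) { }

    public void displayChanged(GLAutoDrawable drawable, boolean modeChanged, boolean deviceChanged) { }
}
```

Some drivers let the user force the swap interval on or off in the control panel, which is another reason the call can silently have no effect.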

I know this is rather hack-ish… but if nothing better can be done…
Couldn’t you do something along the lines of a double query? One call tells you whether the status of the operation is knowable, and the operation itself simply returns “maybe succeeded” or “definitely failed”. “Maybe” would mean “it worked” whenever the other query says the return code is reliable; on systems where you can’t tell whether the operation succeeded, you would always return the “maybe”.

Hi,

I couldn’t launch your webstart app (I am on Mac OS X, not sure if this is the cause) but to answer your question: for decoding, we use the FOBS (http://fobs.sourceforge.net/) JMF plugin, which basically plugs in a neatly packaged version of FFMPEG (http://ffmpeg.sourceforge.net/index.php) as a codec, so we can play pretty much anything under the sun in JMF.

The other end of the problem was the renderers that ship with JMF, especially the AWT / Java2D ones (terrible performance / quality), so I wrote a new renderer using JOGL. I still have timing issues, though (especially with VSync, which mostly seems to work now, BTW - thanks).

Remember, JMF is just an abstract framework, and a nice one at that. It’s the plugins it ships with that are terrible, so replace the codecs / renderers (with FOBS and OpenGL, or whatever you want) and you have a fantastic cross-platform media framework. Our media app (not just playback, but does real-time compositing) has to work on Windows/Linux/Apple, so it’s ideal for us.

Just a word of warning: FOBS is still immature (version 0.4, though currently I can only get version 0.3 to play on Windows with Java 5), so it’s not perfect, but it really revolutionises JMF. Also, it’s very important to get your configuration (jmf.properties) set up correctly, otherwise it won’t work. The FOBS page has instructions :wink:

Hi,

Would you please bear with me as I try to understand what exactly

GL.setSwapInterval(1)

does? You say there is no feedback mechanism which tells me whether it has succeeded - does this mean I also cannot calculate or deduce when the drawing actually occurred? For example, I need to time exactly how long it takes to draw my frame, in order to sleep for the remainder of the “frame time” and achieve smooth video playback.

If, for example, I have VSync correctly enabled as per your instructions (and it’s supported), and I do:


long start = System.nanoTime();
myGLCanvas.repaint();
long duration = System.nanoTime() - start;

will my duration include the delay waiting for the next screen redraw (i.e. does repaint() block until the screen is ready to be redrawn), or is this an asynchronous service, in that my canvas repaint does not include the additional delay? (i.e. something done in the OpenGL / video card layer)

thanks for any information.
My renderer will be available under an open source license once it performs satisfactorily, so please help me achieve this for the common good!

[quote=“dawidl,post:5,topic:26586”]
A few years ago I tried with considerable effort to use JMF to stream A/V media from one machine to another on a LAN, and I could not get the audio and video to be in sync. I wrote a capture filter for some custom hardware we made and that seemed to be working fine, but when I tried to view the RTSP-controlled media stream, something inside JMF would delay one of the audio or video streams (I forget which) until it was many seconds (30 or so) out of sync, and THEN it would keep everything going smoothly in step, so that the fixed 30-second lag remained but did not increase or decrease. It was such a frustrating experience, and sucked up so much time, that I gave up on JMF despite seeing that it had the potential to be the excellent cross-platform media framework you allude to.

In the end I manually streamed raw audio samples and JPEG frames with much better control and predictability than I got from JMF. And I used a very simple loop to do my streaming.

I can only assume that it is perhaps the networking layer (the stuff that should have been Sun’s area of expertise) that was hosed, and that for local playback of media files what you say is true: with the right codec and render filter, things aren’t so bad. I just thought I would point out my experience since you mentioned that you are still having timing issues.

If the swap interval is set to 1 then the low-level swapBuffers call which is performed automatically after your GLEventListener.display() is called will block until the next vertical retrace. Effectively, your frame time will be rounded up to the time it takes the monitor/LCD to draw one frame.
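That rounding is what matters for the “sleep for the remainder of the frame time” scheme asked about above. A small sketch of the arithmetic, using hypothetical numbers (a 24 fps video on a 60 Hz display; neither figure comes from the actual setup in this thread):

```java
// Sketch: frame pacing under vsync. With swap interval 1, the measured draw
// time is rounded up to a whole number of refresh intervals, and the player
// sleeps for whatever is left of the video frame period.
public class FramePacing {

    // Round a measured render time up to a whole number of refresh intervals,
    // mimicking what the blocking swap does.
    static long effectiveDrawNs(long renderNs, long refreshNs) {
        long retraces = (renderNs + refreshNs - 1) / refreshNs;
        return retraces * refreshNs;
    }

    // How long to sleep so render + sleep fills one video-frame period.
    static long sleepNs(long frameTimeNs, long renderNs, long refreshNs) {
        return Math.max(0, frameTimeNs - effectiveDrawNs(renderNs, refreshNs));
    }

    public static void main(String[] args) {
        long frameTimeNs = 1_000_000_000L / 24; // ~41.67 ms per 24 fps video frame
        long refreshNs   = 1_000_000_000L / 60; // ~16.67 ms per 60 Hz retrace
        long renderNs    = 5_000_000L;          // assume rendering itself took 5 ms
        System.out.println("sleep ns = " + sleepNs(frameTimeNs, renderNs, refreshNs));
    }
}
```

Note that 24 does not divide 60, so even with perfect pacing some video frames will span two retraces and others three; a real player would resynchronise against the media clock each frame rather than sleeping a fixed amount.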

This won’t work – repaint() is asynchronous. However GLCanvas.display() is synchronous, so if you change repaint() to display(), the code above will measure how long it takes to render one frame.