Is video playback in Java really as complicated as it seems?

Right off the bat, I wouldn’t assume that it would be fast and/or efficient enough.

[quote]If neither bandwidth/filesize, CPU-usage and image-quality are a major concern… why not try MJPEG?
[/quote]
We have to run on low-spec systems, i.e. netbooks.
Quality is also an issue.

By video playback I mean 1-5 minute, 720p videos, obviously without artifacts.

haven’t really looked into MJPEG

I only used some JPEG format with JMF back then.

We need someone to write a multithreaded, standard compliant H264 decoder in pure Java. Any volunteers?

Pretty sure using H264 also has licensing problems.
IIRC it’s only free for internet video and end users.

WebM / VP8 might be a better choice in that regard. There seem to be a couple of attempts at pure-Java decoders out there (and I mean attempts from what I’ve seen so far! :wink: )

On the GStreamer linking thing - I installed the Windows OSSBuild version and it all just worked first time with Praxis, though I haven’t given it a huge amount of testing yet. Have you actually tried with Processing to see if their bundling of the native libs works?

You seem to have a better handle on GStreamer - try putting together an LWJGL package with everything included that just works.
I didn’t use the Processing libs and such, because I wasn’t sure about the license in this regard (distributing a part of Processing in a commercial product?)

My FFMPEG player can run any codec that ffmpeg supports. You can simply compile FFMPEG on any platform that you want to run the player on.

There are ways to get around licensing issues with FFMPEG, even if that means requiring a simple external installer not packaged with your program.

http://fmj-sf.net/ffmpeg-java/getting_started.php ?

EDIT: or http://jffmpeg.sourceforge.net/formats.html ? It even has MPEG 4 implemented in Java…

I have had a chat with a lawyer about the licensing thing. First off, outside the US and the odd other country that has software patents, using a software implementation of something like H.264 is, to the letter of the law, possibly OK. That’s right - the lawyer would claim it’s OK even if you are making money. The only thing that is free with H.264 is streaming over the web; the encoders and decoders need a license if you ask MPEG-LA.

As for quality and space, well, it’s no secret that MJPEG sucks on this front, but if you don’t have a lot of cut scenes it may be a good option.

But don’t buy the “H264 is the best by far” crap. I was on the doom forums and I pointed out that at the kind of bit rates I care about (high quality), MPEG2, H264 and Theora are about the same for 99.9% of the people out there. The answer I got was that we need to use really low and unrealistic bit rates so we can tell which is better! IMO that is crazy talk.

Theora does not have hardware decoding, but recent versions do 1080p on my old machine with less than 30% of one core.

So the best option IMO is a JNI binding to the standard Theora playback libs, with playback via LWJGL to solve or mitigate sync issues. Perhaps some thought as to how to get the data out… i.e. perhaps decode directly to a texture, and use GLSL for colour space conversion, since this is often the bottleneck. Then basically use a build stack like LWJGL uses. Some work, but the least work IMO…

I have a GLSL codec of my own which didn’t do so badly in tests, but I doubt I will finish it on a human time scale :smiley:

Firstly, I don’t see the point of LWJGL in there at all. What we need is a pure-Java equivalent / fork of GSVideo from Processing. Once you can get to the stage of having access to the native video buffer, you can do what you want with it.

I’m on the GStreamer-Java mailing list, as is the author of GSVideo, so I’ll see if there’s any possibility of him doing it. If not, then I might look at forking it myself, but it’s not going to happen in the short term - too much other stuff on right now.

Processing core is LGPL, same as GStreamer. If you can ship with one you can ship with the other!

I say LWJGL because anything Java2D is not easy to sync, if at all. To sync you’re going to need to frame-drop sometimes, which looks horrible on PC monitors, where you really need to run at a native frame rate or have a very consistent pull-down. Now, Java Sound? Hell no, just use OpenAL. Java2D is just not animation-friendly.

BTW, you should always sync to sound rather than a clock if possible. Perhaps not so important with short clips, but for long ones the PC timers are not accurate, so sync on what we humans actually notice: sound out of sync with video.
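A minimal sketch of that audio-master-clock idea in plain Java. The class and method names (`AudioClock`, `shouldDropFrame`) are illustrative, not from any library in this thread, and the wiring to an actual sound device is left out:

```java
// Sketch of an audio-master clock: derive the current playback time from
// how much audio has been handed to the sound device, then schedule (or
// drop) video frames against that time instead of a system timer.
public class AudioClock {
    private final float sampleRate; // e.g. 44100 Hz
    private final int frameSize;    // bytes per sample frame (channels * bytes/sample)
    private long bytesWritten;

    public AudioClock(float sampleRate, int frameSize) {
        this.sampleRate = sampleRate;
        this.frameSize = frameSize;
    }

    /** Call with the size of every audio chunk written to the sound device. */
    public void onChunkWritten(int bytes) {
        bytesWritten += bytes;
    }

    /** Current position of the audio stream in seconds. */
    public double seconds() {
        return bytesWritten / (double) (sampleRate * frameSize);
    }

    /** Drop the video frame if its timestamp is already behind the audio clock. */
    public boolean shouldDropFrame(double framePtsSeconds, double toleranceSeconds) {
        return framePtsSeconds < seconds() - toleranceSeconds;
    }

    public static void main(String[] args) {
        AudioClock clock = new AudioClock(44100f, 4); // 16-bit stereo
        clock.onChunkWritten(44100 * 4);              // one second of audio written
        System.out.println(clock.seconds());          // 1.0
        System.out.println(clock.shouldDropFrame(0.9, 0.05)); // true: frame is late
    }
}
```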

Is that a response to my reply to @cero about LWJGL? If so, you’ve completely missed the point - basically that the output target is immaterial to how we get hold of the video data, which, if using GStreamer-Java, will be a native ByteBuffer. With that you can send it to whatever target you want. And for some people, Java2D and JavaSound will be fine.

Your idea of using GLSL to offload some of the work (like colour space conversion) is pretty cool - as long as whatever video library is giving you the native ByteBuffer allows you to specify raw vs. RGB colour space, that could still be done.

[quote=“nsigma,post:29,topic:37747”]
[/quote]
This. ByteBuffer --> glTexImage2D() --> Draw quad. I have so many filters and things I want to try on videos, so someone better get to it! ;D ;D ;D
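To illustrate that pipeline, here is a sketch of the staging step: packing decoded ARGB pixels into a direct ByteBuffer of the kind glTexImage2D accepts. The `int[]` input and class name are made up for the example, and the LWJGL call itself is left as a comment since it needs a live GL context:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

// Stage decoded ARGB pixels into a direct, RGBA-ordered ByteBuffer - the
// form glTexImage2D expects. The int[] stands in for whatever the video
// decoder hands back.
public class FrameUpload {
    public static ByteBuffer toDirectRGBA(int[] argbPixels, int width, int height) {
        ByteBuffer buf = ByteBuffer.allocateDirect(width * height * 4)
                                   .order(ByteOrder.nativeOrder());
        for (int argb : argbPixels) {
            buf.put((byte) ((argb >> 16) & 0xFF)); // R
            buf.put((byte) ((argb >> 8) & 0xFF));  // G
            buf.put((byte) (argb & 0xFF));         // B
            buf.put((byte) ((argb >> 24) & 0xFF)); // A
        }
        buf.flip();
        // With LWJGL and a bound texture, the upload would then be:
        // GL11.glTexImage2D(GL11.GL_TEXTURE_2D, 0, GL11.GL_RGBA, width, height,
        //                   0, GL11.GL_RGBA, GL11.GL_UNSIGNED_BYTE, buf);
        return buf;
    }
}
```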

Color conversion can be a very significant portion of the total cost, so it’s a very good candidate. Along the same lines, the freq/space transform can be performed on the GPU. Nvidia has a block-DCT example in one of their SDKs.

Talking to the Theora devs, they thought that for most cases the colour transform is the lion’s share of the cost. For a lot of predicted frames the DCT coefficients are zero, and there are optimized versions of the iDCT for small coefficient counts. Having said that, iDCT with SSE is very fast.
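For reference, this is the per-pixel transform being discussed - a plain-Java sketch of the BT.601 full-range YCbCr → RGB conversion (coefficients are the standard ones; the class name is just for the example). A GLSL version is the same arithmetic run per fragment instead of per pixel in a CPU loop, which is exactly the work being offloaded:

```java
// YCbCr -> RGB (BT.601, full range): the colour transform that dominates
// CPU decode time when run per pixel, and the prime candidate for GLSL.
public class ColorSpace {
    static int clamp(double v) {
        return (int) Math.max(0, Math.min(255, Math.round(v)));
    }

    /** Returns {r, g, b} for one pixel; y, u, v and the results are 0-255. */
    public static int[] yuvToRgb(int y, int u, int v) {
        double r = y + 1.402 * (v - 128);
        double g = y - 0.344136 * (u - 128) - 0.714136 * (v - 128);
        double b = y + 1.772 * (u - 128);
        return new int[] { clamp(r), clamp(g), clamp(b) };
    }
}
```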

My codec uses 4x4 transforms for easy GLSL implementation. I have a trick or two to get good compression with such a small block size. One day I should really finish it. The idea of doing a lot on the card is that you can send the mostly-compressed video stream to the card, since something like 1080p60 raw is a lot of bandwidth.

Obviously, having decode-to-texture does not preclude decode-to-ByteBuffer.

My experience is the same (assuming that the freq/space transform has short-cuts): color conversion is the big win. A problem with the Nvidia example is that they don’t reorder the output from the entropy decoder, and instead reorder the data on the GPU.

[quote=“theagentd,post:32,topic:37747”]
[/quote]

You could do that now using GStreamer-Java! The problem above is more about making sure it’s easy to package and “just works” across all OSes.

The Processing video code already does exactly what you want using JOGL. Should be easy to fork it to work with LWJGL.

In Praxis I’ve got video playback, webcam, etc. going to an LWJGL texture. However, at the moment it’s not a direct upload to the texture - it’s coming into an int[] first - a legacy of Praxis’ need to support a software renderer too. This will be fixed in the next month or so, along with addition of live-coding GLSL filters - I want to do exactly the thing you’re interested in! ;D

Just to check: did anyone work on any of this video stuff since this thread?

FWIW, I just did a test on MJPEG using a video file from ‘30 Rock’:

[table]
[tr][td]MPEG1[/td][td]219MB[/td][/tr]
[tr][td]MJPEG (90%)[/td][td]960MB[/td][/tr]
[tr][td]MJPEG (75%)[/td][td]561MB[/td][/tr]
[/table]

Below 75% JPEG compression quality it’s no longer comparable to the original.

So it’s not really useful, unless you have short videos :-\

How long was that clip? I guess not a full episode?

edit: well, I need audio too, so it doesn’t matter anyway I guess

You can solve the audio problem by playing that independently and just ensuring you sync the video to the audio. As streaming audio comes in chunks of known size (and hence, duration) this is pretty easy.
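A sketch of that chunk-duration arithmetic (the class name and format values are just examples - 16-bit stereo at 44.1 kHz):

```java
// A chunk's duration follows directly from the audio format, so summing
// chunk durations gives a running clock, and the video frame due at any
// moment is just a floor of (audio time * fps).
public class ChunkTiming {
    public static double chunkSeconds(int chunkBytes, float sampleRate,
                                      int channels, int bytesPerSample) {
        return chunkBytes / (double) (sampleRate * channels * bytesPerSample);
    }

    /** Index of the video frame that should be showing at a given audio time. */
    public static int frameAt(double audioSeconds, double fps) {
        return (int) Math.floor(audioSeconds * fps);
    }
}
```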

Cas :slight_smile: