Video frame grabbing to BufferedImage

Hi all. I am trying to grab frames from a video and then convert them to a BufferedImage. I will always be grabbing frames in order (so from frame 21 I’ll either go to 20 or 22, never off to 100 or anything) and pausing occasionally.

I have achieved this using JCodec, but it is slow (between 3 and 15 frames per second):

BufferedImage frame = FrameGrab.getFrame(filmFile, frameNumber);

I’m guessing this is because it’s working its way from the last … keyframe or whatever the term is, whereas it would be quicker if I was simply going from frame 1 to frame 2 and so on.

I’m giving up on JCodec - can anyone else recommend what I should use to achieve this? I have read about Xuggle, JMF and a few others but I keep reading about how they are abandoned.
(Video is mp4 if that’s of any use)


If I’m reading the JCodec source correctly, each call to FrameGrab.getFrame() seems to decode the video all over again. I think that instead of looking for a new library you should see if you can find some way to not re-decode the file multiple times.

If you’re using the JavaSE AWT backend, then instead of calling [icode]FrameGrab.getFrame(filmFile, frameNumber);[/icode]

add the methods and static variables:

private static FrameGrab grab;

public static void initialize(File file) throws Exception { // propagates JCodec's checked exceptions
	FileChannelWrapper ch = NIOUtils.readableFileChannel(file);
	grab = new FrameGrab(ch);
}

public static BufferedImage getFrame(double second) throws Exception {
	return ((FrameGrab) grab.seekToSecondPrecise(second)).getFrame();
}

and then call

initialize(filmFile); // call once
BufferedImage frame = getFrame(frameNumber / fps); // call each time - note the argument is seconds, so divide the frame index by your video's frame rate

You would have to try it out but that may give you an FPS boost.

Hey, thanks for the reply.

I was really hopeful but unfortunately it gave no speed boost. I tried it with both seekToSecondPrecise and seekToFramePrecise but it’s still exactly the same speed.

I have used Xuggle for this, so it does work, but I cannot remember how fast it was, sorry. I don’t recall it being noticeably slow though.

I wouldn’t be surprised if nsigma had a good solution to this built in to Praxis Live.

Yes, that’s true - I think @nsigma uses a Java wrapper around GStreamer, perhaps this:

Also, @Riven’s YUNPM does similar stuff:

Yes, that’s right, Praxis LIVE uses those GStreamer bindings (and I’m one of the repository admins). I also use the same native GStreamer libs as Processing is using. It’s great and will do everything I need. However, it’s big too! It might be overkill for what you need. The video I posted yesterday in the “What I did today” thread is actually using Praxis LIVE’s software pipeline rather than its OpenGL one, so is pretty much doing what you’re trying to do.

@Riven’s YUNPM might well be worth looking at. IIRC it’s quite a bit smaller, and though there are potential issues in the approach (around timing, frame skipping, etc.) that might actually suit your purposes better (which are?)

The other obvious link missing is the Java bindings to VLC - however, they are GPL licensed so may not suit.

And there’s obviously JavaFX, which actually uses GStreamer under the hood, but I have no idea how low level you can get with it.

The odd thing that stands out about that API you’re using is that it implies it’s creating a new BufferedImage every frame. You ideally want something that will give you direct access to the decoded pixel data. There is a Swing player in the GStreamer examples which does something similar to what you want - note this code fills the pixels array of the BufferedImage from the passed in native pointer (IntBuffer).
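To illustrate that idea in isolation (the class and method names here are my own, not from the GStreamer example): allocate the BufferedImage once, then overwrite its backing int[] from the decoder's buffer on every frame, rather than constructing a new image each time.

```java
import java.awt.image.BufferedImage;
import java.awt.image.DataBufferInt;
import java.nio.IntBuffer;

public class FrameCopy {

    // One reusable image; its backing array is overwritten every frame.
    static BufferedImage image;

    static void copyFrame(IntBuffer rgb, int width, int height) {
        if (image == null || image.getWidth() != width || image.getHeight() != height) {
            image = new BufferedImage(width, height, BufferedImage.TYPE_INT_RGB);
        }
        // Grab the image's backing int[] directly - no per-frame allocation.
        int[] pixels = ((DataBufferInt) image.getRaster().getDataBuffer()).getData();
        rgb.rewind();
        rgb.get(pixels, 0, width * height);
    }

    public static void main(String[] args) {
        // Fake a 2x2 decoded frame: red, green, blue, white.
        IntBuffer buf = IntBuffer.allocate(4);
        buf.put(new int[] { 0xFF0000, 0x00FF00, 0x0000FF, 0xFFFFFF });
        copyFrame(buf, 2, 2);
        System.out.println(Integer.toHexString(image.getRGB(0, 0))); // ffff0000
    }
}
```

The win is that once the image exists, each new frame is just a bulk array copy, which is exactly what the Swing player example is doing with the native pointer.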

My intention is to have a video I can play forwards frame by frame, either at normal x1 speed or slowed down. I wanted to convert to BufferedImage so I can draw these in a variety of ways (other libs I use make use of them). Ideally I’d like to be able to go backwards frame by frame, but that’s not a priority; just being able to go forward frame by frame at, say, 60fps is ideal.
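For what it’s worth, the variable-speed part doesn’t need anything from the decoder - something like this (naming is purely illustrative) maps elapsed wall-clock time to the frame that should currently be showing, so slow motion is just a multiplier:

```java
public class FramePacer {

    /**
     * Which frame should be on screen after elapsedNanos of playback,
     * given the video's frame rate and a speed multiplier (0.5 = half speed).
     */
    static int frameAt(long elapsedNanos, double fps, double speed) {
        return (int) (elapsedNanos / 1_000_000_000.0 * fps * speed);
    }

    public static void main(String[] args) {
        // One second of wall-clock time at 60fps, normal speed -> frame 60.
        System.out.println(frameAt(1_000_000_000L, 60, 1.0)); // 60
        // Same second at half speed -> frame 30.
        System.out.println(frameAt(1_000_000_000L, 60, 0.5)); // 30
    }
}
```

Your render loop then only ever asks the decoder for `frameAt(...)`, which stays sequential as long as time moves forward.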

I actually have my images all saved as PNG files that I converted into a movie purely for the compression. If there is any alternative (I looked into APNG but found nothing useful) that would be fine, though I think this is the right way.

I’m currently using Xuggle but, without meaning to sound lazy, it’s proving a pain in the ass to work with, and the whole “GPL, or build it yourself for LGPL” situation scares me.

I found several mentions of pure Java decoders for MPEG which would be ideal; some were simply publications talking about it with no public release, and for the others I have found literally nothing documentation-wise and cannot work them out.

With YUNPM you can read/display frames one at a time, at any speed. You have access to an input stream that simply feeds you a byte[width * height * 3] for every frame. Plain RGB8.

The audio sync, frame skipping and other features merely feed off this stream; that logic is simply not active if you’re only reading from the raw video stream. The same applies to audio: whatever media file you play, you get access to a plain InputStream at whatever sample rate you prefer.
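A minimal sketch of consuming a stream like that - assuming packed, row-major RGB with 3 bytes per pixel as described above (the class and method names are mine, not YUNPM’s):

```java
import java.awt.image.BufferedImage;

public class RawRgbFrame {

    /** Turn one frame of packed RGB8 bytes (3 per pixel, row-major) into a BufferedImage. */
    static BufferedImage toImage(byte[] rgb, int w, int h) {
        BufferedImage img = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                int i = (y * w + x) * 3;
                int pixel = ((rgb[i] & 0xFF) << 16)      // red
                          | ((rgb[i + 1] & 0xFF) << 8)   // green
                          |  (rgb[i + 2] & 0xFF);        // blue
                img.setRGB(x, y, pixel);
            }
        }
        return img;
    }

    public static void main(String[] args) {
        // A 2x1 frame: one red pixel, one green pixel.
        byte[] frame = { (byte) 0xFF, 0, 0,   0, (byte) 0xFF, 0 };
        BufferedImage img = toImage(frame, 2, 1);
        System.out.println(Integer.toHexString(img.getRGB(0, 0))); // ffff0000
    }
}
```

In a real loop you’d read exactly `width * height * 3` bytes from the stream per frame and reuse a single image (as in the earlier reply) rather than allocating one per call.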

Well, I’m going to use YUNPM - I’ll post in its own thread rather than this one from now on. Thanks for the help!