Motion Blur Woes

After re-finding this article (http://john-chapman-graphics.blogspot.com/2013/01/what-is-motion-blur-motion-pictures-are.html), I decided to try tackling motion blur. It’s really not that complicated an effect, and I thought I could implement it rather easily. Of course, I was wrong.
I’m sending in the inverse of the model view matrix as well as the previous model view projection matrix, and I believe I’m doing it correctly. I’m multiplying the camera’s view matrix (rotation matrix * translation matrix) by the transformation matrix (world/model matrix).
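In GLSL-style matrix math, the composition I mean looks roughly like this (the T_* names match my shader uniforms; everything else is just illustrative, since in the engine this happens on the CPU):

// Illustrative only: in the engine this is done on the CPU each frame.
mat4 view        = cameraRotation * cameraTranslation; // rotation matrix * translation matrix
mat4 modelView   = view * modelMatrix;                 // then the world/model matrix
mat4 mvInverse   = inverse(modelView);                 // what I upload as T_MVInverse
mat4 previousMVP = projection * prevModelView;         // saved from the last frame, uploaded as T_previousMVP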

I’m inverting it using this code, taken from the jMonkeyEngine but converted to work with my matrices:

http://pastebin.java-gaming.org/6f68e1a7f90

I believe this works just fine. If you have a better/simpler way to invert a matrix, do tell.
I also have a linearized depth texture.

After that, I get the position of the current pixel in view space and then do all the other fancy stuff that gets me my blur vector. Then I blur… along that vector. Surprising, right?
GLSL Code:
http://pastebin.java-gaming.org/f68ea2f7099
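The blur pass itself is just the sampling loop from the article, roughly like this sketch (R_texture and nSamples are placeholder names for my scene texture and sample count):

const int nSamples = 8;
vec4 result = texture2D(R_texture, texCoord0);
for (int i = 1; i < nSamples; ++i) {
    // Distribute samples along the blur vector, centred on this pixel.
    vec2 offset = blurVec * (float(i) / float(nSamples - 1) - 0.5);
    result += texture2D(R_texture, texCoord0 + offset);
}
gl_FragColor = result / float(nSamples);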

Well, everything runs fine; however, when I move, nothing happens. Nada.
But check this out: if I multiply blurVec by 100, I get a constant blur along a vector that seems to be about (1, 1). It looks like this:

I am not moving in that shot at all, just standing still. When I’m not moving, the blur vector is a very small value, around 0.001 on the x and y, and it stays that way when I’m moving too.

Is there anything you can think of off the top of your head why this may be happening?
Thanks

P.S. This is what the position looks like, used as color:

You’re calculating the world position wrongly. OpenGL also expects your depth value to go from -1 to +1, not 0 to 1. I am also pretty sure that you should not invert Y when doing that. Finally, you can do a small optimization by not dividing by W for the current position.


float z = texture2D(R_depthBuffer, texCoord0).r;

// Reconstruct the current pixel's clip-space position from texcoord + depth,
// remapping from [0, 1] to [-1, 1].
vec3 currentPosition = vec3(texCoord0, z) * 2.0 - 1.0;

// Transform back to world space, then into the previous frame's clip space.
vec4 previous = T_previousMVP * (T_MVInverse * vec4(currentPosition, 1.0));
vec3 previousPosition = previous.xyz / previous.w;

// The screen-space vector the pixel moved along since the last frame.
vec2 blurVec = currentPosition.xy - previousPosition.xy;

...


If you still have a problem with this code, then you most likely have a problem with your matrices. Also note that this motion blur technique only takes into consideration camera movement and rotation, not object movement.

I plugged that code in and still no dice. It seems I’m sending in the matrices wrong :emo:
I’m aware that this only takes into account the camera motion, I’ll be moving on to the next article of his about motion blur after I figure this out.

To get the view matrix, I’m doing this:


cameraRotation.mul(cameraTranslation);

cameraRotation is the conjugated rotation quaternion turned into a rotation matrix.
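The conversion itself is the standard quaternion-to-rotation-matrix formula; as a GLSL-style sketch of what the engine does on the CPU (q packed as (x, y, z, w) and assumed normalized):

mat3 quatToMat3(vec4 q) {
    float x = q.x, y = q.y, z = q.z, w = q.w;
    // Each group of three values below is one column (GLSL matrices are column-major).
    return mat3(
        1.0 - 2.0 * (y * y + z * z),  2.0 * (x * y + w * z),        2.0 * (x * z - w * y),
        2.0 * (x * y - w * z),        1.0 - 2.0 * (x * x + z * z),  2.0 * (y * z + w * x),
        2.0 * (x * z + w * y),        2.0 * (y * z - w * x),        1.0 - 2.0 * (x * x + y * y)
    );
}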

Then I do view.mul(worldMatrix).invert(). worldMatrix is the translation matrix of the post-process quad.
I have no clue what could be going wrong, other than my invert method; however, I think that’s working fine. I’m going to go on to his next article on motion blur and see what happens from there…

Start by figuring out which matrix isn’t working. Stop when you find a problem, fix it, and only then continue to the next step.

  1. Make the fragment shader output currentPosition.xyz to gl_FragColor, to ensure that you’re basing the calculation on the correct coordinates. This is a bit hard to read, with most of the screen having negative X and/or Y (which displays as 0, of course), but the depth should be somewhat easy to read in the blue channel. Now that I think about it, if your texture coordinates have (0, 0) in the top left corner, you DO need to invert Y:

vec3 currentPosition = vec3(texCoord0, z) * 2.0 - 1.0;
currentPosition.y = -currentPosition.y; // flip Y for a top-left texture coordinate origin

If you find an error here: The error occurs before the matrices are even used.

  2. Make it output the following to check if the inverted matrix is correct:

vec4 worldPos = T_MVInverse * vec4(currentPosition, 1.0);
gl_FragColor = worldPos / worldPos.w; // perspective divide gives the actual world position

This outputs the WORLD position of each pixel, which will most likely be far over 1.0, so you may want to divide it by 10 or 100 or something to get reasonable values. Make sure that the world position values are stable under camera movement and rotation for static objects.
If you find an error here: The problem lies in the T_MVInverse matrix.

  3. Make it output previousPosition, which should look the same as currentPosition when the camera is not moving (a minimal sketch follows below).
    If you find an error here: The problem lies in the T_previousMVP matrix.
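A minimal sketch of that last debug output, reusing the snippets above (the 0.5 remap just makes negative values visible):

vec4 previous = T_previousMVP * (T_MVInverse * vec4(currentPosition, 1.0));
vec3 previousPosition = previous.xyz / previous.w;

// Remap [-1, 1] to [0, 1] so negative components don't clamp to black.
gl_FragColor = vec4(previousPosition * 0.5 + 0.5, 1.0);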

If you feel like going all out and implementing something much more complicated, you can go for this motion blur algorithm:
http://graphics.cs.williams.edu/papers/MotionBlurI3D12/McGuire12Blur.pdf
I’ve implemented it myself, and it works great. It relies on per-pixel motion vectors, so it can handle any kind of motion; I calculate accurate motion vectors which take into account camera movement, object movement and skeletal animation. The cool thing about this algorithm is that it can actually blur over edges. To achieve that, it relies on a second low-resolution texture which keeps track of the dominant motion vector of the pixels it covers, although it does have trouble when motion in different directions causes the blur vectors to “overlap”… I get a feeling that this is a bit too advanced, but it’s a cool paper nonetheless.
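As a rough idea of the motion vector part, the velocity pass can look something like this sketch (this is not the paper’s code; the varying names are made up, and you need a float render target to store signed values):

varying vec4 v_clipPos;     // current MVP * position, from the vertex shader
varying vec4 v_prevClipPos; // previous frame's MVP * position

void main() {
    vec2 current  = v_clipPos.xy / v_clipPos.w;         // NDC, this frame
    vec2 previous = v_prevClipPos.xy / v_prevClipPos.w; // NDC, last frame
    gl_FragColor  = vec4((current - previous) * 0.5, 0.0, 1.0); // halved to map NDC units to texture units
}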

Great explanation, thank you very much for the reply.
I’ve used the positions to debug the matrices before, but I didn’t really know what to look for so I missed what was going wrong.
The problem lies within the inverted model view matrix, because the output changes as I move when using worldPos / worldPos.w as the color, like you suggested.

There’s not much you guys can do to help me without knowing the whole damned engine, so I’m on my own for this one. Thanks for the help; I’m taking a break for now because I’d rather not break my computer in anger. This hasn’t been working for days, so a small break will be nice.

Ah, bugs. :slight_smile:

Did you try inverting the Y channel? It may just be that the depth is wrong, in which case the result would look very wrong.

Since GLSL shaders are so hard to debug, it’s very important to learn how to output and read information as colors. You need to take into consideration the scale of the value, whether it’s negative, and how it’s supposed to behave when you move the camera.
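For example, something like this (the 100.0 is just a guessed scale for typical world units):

// Scale the value down into roughly [-1, 1], then remap to [0, 1]
// so negative components don't just clamp to black.
vec3 debugValue = worldPos.xyz / 100.0;
gl_FragColor = vec4(debugValue * 0.5 + 0.5, 1.0);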