Finally got rid of my “jitter compensation”. The idea was flawed in the first place. When doing temporal supersampling, you jitter each frame a tiny bit so you get slightly different results. When you average the current and previous frame together, you get something very similar to 2x supersampling with good sample offsets ((+0.5, +0.5) and (-0.5, -0.5)). The problem is that this messes up the motion vector of each pixel, since it’s calculated as (currentFramePosition - previousFramePosition). Although the offset is tiny, it’s enough to trigger a tiny amount of motion blur on every pixel, which both reduces performance (the early exit of the shader never kicks in) and fuzzes up edges in a flickering manner. In other words, I needed to correct the motion vectors somehow. I used to render the incorrect motion vectors and then fix them in a fullscreen pass (batched into the shader that calculates motion vectors for the sky, so it didn’t affect performance at all), but this was inaccurate due to the 16-bit float precision of the texture holding the motion vectors. I finally got rid of the need to correct them by simply using the unjittered matrices of the current and previous frame when calculating the motion vectors. That’s an additional 12 instructions in the vertex shader for each vertex, but I can’t see any performance difference at all.
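To illustrate the idea (a minimal, self-contained sketch with made-up numbers, not the actual WSW shader code — the real motion vectors come from full jittered/unjittered view-projection matrices), here the jitter is modeled as a simple clip-space offset:

```java
public class MotionVectorSketch {
    // Hypothetical stand-in for "transform by the (un)jittered matrix":
    // jitter is modeled as a clip-space offset added after projection.
    static float[] project(float x, float y, float jitterX, float jitterY) {
        return new float[] { x + jitterX, y + jitterY };
    }

    public static void main(String[] args) {
        float px = 0.25f, py = -0.5f; // a static point, identical in both frames

        // Motion vector from the jittered matrices: the jitter delta leaks in.
        float[] currJ = project(px, py, +0.0005f, +0.0005f); // current-frame jitter
        float[] prevJ = project(px, py, -0.0005f, -0.0005f); // previous-frame jitter
        System.out.println("jittered mv.x   = " + (currJ[0] - prevJ[0])); // nonzero

        // Motion vector from the unjittered matrices: exactly zero for a static point.
        float[] curr = project(px, py, 0f, 0f);
        float[] prev = project(px, py, 0f, 0f);
        System.out.println("unjittered mv.x = " + (curr[0] - prev[0])); // 0.0
    }
}
```

The point is that the jittered difference is never zero even for a perfectly static scene, while the unjittered one is, so the motion-blur early exit works again.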
Also, I added a vignette effect to be used in WSW. It will be toggleable for people who hate it. =P
Tried to get depth of field working. Didn’t go very well, but by carefully dodging all the artifacts I could take the following screenshot.
Looks good in this picture, but it’s slow as hell and has a shitload of artifacts in a lot of common cases. I can’t get the damn circle-of-confusion weighting working correctly, and even the depth weighting somehow messes up sometimes because (x - x != 0) for some reason. Too tired to focus (pun intended), so it’s time to call it a night. Hopefully it’ll all make sense tomorrow.
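Side note on the (x - x != 0) mystery: in IEEE 754 floating point this genuinely happens whenever x is NaN or ±infinity — both subtractions produce NaN, and NaN compares unequal to everything — which can sneak in from e.g. an uncleared depth buffer. A minimal Java check (hypothetical helper name):

```java
public class NanCheck {
    // True exactly when (x - x) fails to compare equal to zero.
    static boolean xMinusXIsNonzero(float x) {
        return (x - x) != 0.0f;
    }

    public static void main(String[] args) {
        System.out.println(xMinusXIsNonzero(1.5f));                    // false
        System.out.println(xMinusXIsNonzero(Float.NaN));               // true: NaN - NaN = NaN
        System.out.println(xMinusXIsNonzero(Float.POSITIVE_INFINITY)); // true: Inf - Inf = NaN
    }
}
```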
Try this technique. It’s a lot simpler and there are tons of optimization opportunities.
Brute-forcing it is too slow for gameplay scenes, but you usually want to use crazy DoF/bokeh only for non-action parts anyway (cutscenes etc.).
Hardland actually ships with a naive variation of this, but it’s only used when the player inspects something or chats with an NPC.
Oh, god. I have a feeling that rendering 7,372,800 triangles (2 per pixel at 2560x1440) is going to be slow, and that’s before considering the insane number of pixels they’ll cover… You’re right that it’ll only be used for cutscenes though, but the cutscenes in WSW are quite fast and action-packed, so decent performance is required.
After looking at the code a bit, I realized that the background wasn’t being reconstructed correctly. The problems I’m trying to solve for depth of field are very similar to those of motion blur, and simply porting many of the ideas from it allowed me to correct the background. So far this technique still has promise, so I’ll stick with it for now. I might try the point-sprite-based one out of pure curiosity though. The article you posted is very interesting.
Yes! Managed to get the depth of field as good as physically possible when generating it from a sharp image.
The biggest problem with applying depth of field (and motion blur) to a sharp image is that you lack information that you need to accurately calculate it. Consider a small object right in front of the camera while the focus is on a far-away object. In reality, the small object becomes blurred and transparent, but we don’t know what’s behind it. The best we can do is try to reconstruct it by looking at the pixels nearby and seeing if any others have the right depth, but if they don’t we’re out of luck. There’s no real way of correctly retrieving that information, as that part was never rendered, lit, shaded etc. Luckily, the reconstruction can in most cases work well enough to give the illusion of correct DoF and motion blur.
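The nearby-pixel search boils down to something like this (a hedged sketch with invented names, using a single color channel and a flat neighbor list for brevity — the real shader works on RGB samples in a 2D neighborhood):

```java
public class BackgroundReconstruction {
    // Average only the neighbors whose depth is close to the wanted
    // (background) depth; returns 0 when no neighbor qualifies.
    static float reconstructBackground(float[] colors, float[] depths,
                                       float wantedDepth, float tolerance) {
        float sum = 0f, count = 0f;
        for (int i = 0; i < colors.length; i++) {
            if (Math.abs(depths[i] - wantedDepth) < tolerance) {
                sum += colors[i];
                count += 1f;
            }
        }
        return count > 0f ? sum / count : 0f; // no usable neighbors: out of luck
    }

    public static void main(String[] args) {
        // Two background neighbors (depth 10) and one foreground pixel (depth 1):
        float c = reconstructBackground(new float[] { 1f, 2f, 9f },
                                        new float[] { 10f, 10f, 1f },
                                        10f, 0.5f);
        System.out.println(c); // averages only the two background neighbors
    }
}
```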
Focus on character:
Focus on closest bright object to the left of the character:
Focus on one of the objects furthest away (kind of):
This scene has very simple colors, so luckily no visible reconstruction errors occur. I have yet to test it more thoroughly, but it looks very promising.
Here’s an image showing the reconstruction of the background when blurring a foreground object:
As you can see in the sharp left image, there’s no background data for the pixels in the marked area, so the background pixels around it are used instead, which luckily didn’t cause any problems at all.
Now, all that’s left is the most fun part: Optimization!
Today I worked on some more Android-based stuff, learning more about SQLite and SQL in general, which I actually enjoyed… I’m starting to love databases :point:
Also covered some more C#, which I’m loving as well. A great day!
Looks technically very impressive, but depth of field…blergh.
Fine in a cutscene where you might want to guide the user’s focus.
However in an interactive game it just doesn’t make sense; the player’s eyes are never fixed on a single focal point.
The effect will only be enabled during cutscenes, don’t worry. It’s meant to give a cinematic effect to the cinematic parts.
EDIT:
One last update today. =P
I reduced the sample count to 32 samples per pixel. That really helped get performance to a reasonable level, but the noise is a bit annoying. I’ll see if I can get rid of it by blurring the picture a bit extra or so. I also added an early exit for tiles that are 100% in focus, and a fast path for tiles where the circle of confusion is uniform. The fast path basically skips all the depth testing and just does a simple sum of all colors, so it’s a lot faster. It kicks in for a large number of pixels, so it improves performance quite a bit. With 32 samples, the early exit, and the fast path enabled, the shader only takes around 2ms at 1920x1080 on my GTX 770.
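The per-tile path selection amounts to something like this (a sketch with hypothetical names and thresholds — the actual shader derives the min/max circle of confusion per tile in a separate pass):

```java
public class TileClassifier {
    enum TilePath { EARLY_EXIT, FAST_SUM, FULL_BLEND }

    // Pick a shading path from the tile's min/max circle of confusion (CoC).
    static TilePath classifyTile(float minCoc, float maxCoc, float eps) {
        if (maxCoc < eps) return TilePath.EARLY_EXIT;        // whole tile in focus
        if (maxCoc - minCoc < eps) return TilePath.FAST_SUM; // uniform CoC: plain color sum
        return TilePath.FULL_BLEND;                          // depth-aware fg/bg blending
    }

    public static void main(String[] args) {
        System.out.println(classifyTile(0f, 0.0001f, 0.001f)); // EARLY_EXIT
        System.out.println(classifyTile(5f, 5.0005f, 0.001f)); // FAST_SUM
        System.out.println(classifyTile(0f, 8f, 0.001f));      // FULL_BLEND
    }
}
```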
Red = full depth-aware foreground/background blending path
Green = fast color sum path
Blue = in focus, early exit
EDIT: At 64 samples the shader uses 130 registers… o__O
Working on a new game! My father is going back to college for graphic design and said that as soon as he gets a better grip on computer animation/graphics he’ll help me with my game assets! My uncle is also a programmer and interested in game development, so my father hinted at maybe a little coalition between us later.
That is pretty cool: a family that codes together! It is great when people help each other out.
Today I finished debugging the Java side of a redo of my audio library. All the functionality written to date still works, including the “audio event handling system”, despite the audio mixing class being broken into two: a “core” part that uses core Java, and a wrapper part that handles the javax.sound.sampled output. Tomorrow, I hope to start testing the library on Android, using an Android-specific wrapper for the output. A couple of days ago I got an audio thread on Android reading and playing back a simple sine wave through my audio mixer for the first time. Now I can test stuff like the FM synthesis, effects like echoes and flangers, and the somewhat primitive but working sequencer, all written in core Java.
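For reference, the sine-wave test signal amounts to something like this (a simplified sketch with a hypothetical helper name, not the actual mixer code — 16-bit mono PCM at half amplitude, which either javax.sound.sampled or Android’s AudioTrack can play back):

```java
public class SineWaveSketch {
    // Generate a 16-bit mono PCM sine wave at half of full amplitude.
    static short[] sineWave(double freqHz, double sampleRate, int numSamples) {
        short[] out = new short[numSamples];
        for (int i = 0; i < numSamples; i++) {
            double t = i / sampleRate;
            out[i] = (short) (Math.sin(2.0 * Math.PI * freqHz * t)
                              * Short.MAX_VALUE * 0.5);
        }
        return out;
    }

    public static void main(String[] args) {
        short[] wave = sineWave(440.0, 44100.0, 4410); // 0.1 s of A4 at 44.1 kHz
        System.out.println(wave.length + " samples, first = " + wave[0]);
    }
}
```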
My uncle is actually the one who convinced my boss to give me an interview. Technically my work didn’t need any more interns, but my uncle convinced them to take a look at me. I also inspired my uncle to get into games a couple of years ago when I told him about all the projects I was doing. He hadn’t ever touched a line of game code; I feel proud for being the instigator in this one!
And my father is definitely the one who opened up my love for the arts. I’m incredibly fortunate to have a family that took me out to plays and to see bands, that showed me how to paint mini-figures (Warhammer FTW), and that helped find me a violin and a drum set when I wanted to learn how to play music. If it weren’t for my father and my artistic family, there’s no way I would have gotten into programming, and consequently I wouldn’t have such a great job and a great life.
I’m kind of ranting now, but I left home a year and a half ago, when I was 17, because of some major issues we had as a family. I’ve had a strange year and I miss my family. I want to tell them how much I appreciate them, even though we didn’t have the best relationship in my teenage years. Maybe it’s time to tell them.