What I did today

aye. not any time soon tho’, my mesh-handling class doesn’t have any UV attributes yet and my texture loader doesn’t read a damn file off the hard drive yet. i’m storing the occlusion in plain vertex colors at the moment, which is useful for artistic purposes but not for proper lighting (it’s content dependent)…

i hope one day i understand how to get from triangle+UV coords to world-pos/normal and bake the rays properly into a texel. i’m not near that at all :slight_smile:
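for reference, the usual trick is to rasterize the mesh in UV space, so every texel receives an interpolated world position and normal to shoot rays from. a minimal vertex-shader sketch, assuming unique (non-overlapping) UVs; attribute and uniform names are made up:

[code]
#version 330 core
// bake-pass vertex shader: output the UV as the clip-space position,
// so each texel's fragment carries the surface point it belongs to
in vec3 inPosition;   // model-space position
in vec3 inNormal;
in vec2 inUV;         // unique UVs in [0,1]

uniform mat4 modelMatrix;
uniform mat3 normalMatrix;

out vec3 worldPos;    // interpolated per texel in the fragment shader
out vec3 worldNormal;

void main() {
    worldPos    = (modelMatrix * vec4(inPosition, 1.0)).xyz;
    worldNormal = normalize(normalMatrix * inNormal);
    // map UV [0,1] to clip space [-1,1]; depth is irrelevant here
    gl_Position = vec4(inUV * 2.0 - 1.0, 0.0, 1.0);
}
[/code]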

You could probably optimize the plain color blur by using bilinear or anisotropic filtering and get away with a lower number of actual samples.
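The standard linear-sampling trick: place each fetch between two texels so one bilinear read averages both, collapsing a 9-tap Gaussian into 5 fetches per pass. A minimal sketch (GL_LINEAR filtering assumed on the sampler; names made up):

[code]
uniform sampler2D source;
uniform vec2 direction;   // (1.0/width, 0.0) or (0.0, 1.0/height)

// offsets/weights from merging adjacent taps of a 9-tap Gaussian
const float offsets[3] = float[](0.0, 1.3846153846, 3.2307692308);
const float weights[3] = float[](0.2270270270, 0.3162162162, 0.0702702703);

vec3 blur(vec2 uv) {
    vec3 color = texture(source, uv).rgb * weights[0];
    for (int i = 1; i < 3; i++) {
        // each fetch lands between two texels, averaging both
        color += texture(source, uv + direction * offsets[i]).rgb * weights[i];
        color += texture(source, uv - direction * offsets[i]).rgb * weights[i];
    }
    return color;
}
[/code]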

It’s probably nothing compared to Cas’, but here’s mine: https://github.com/CopyableCougar4/Font-Rendering/blob/master/src/com/digiturtle/ui/GLFont.java

Every character is stored as a VBO, and a string of text can be cached in a Display List :slight_smile:

CopyableCougar4

I doubt that this would be a good idea. I’ve already tried this with other applications, and it’s surprisingly slow and limited.

I noticed that anisotropic filtering could be used when sampling depth textures for shadow mapping with GL_COMPARE_REF_TO_TEXTURE, which made it do up to 16 bilinear depth tests (64 taps in total) in a line. Even with just a small amount of anisotropic filtering, textureGrad() turned out to be much slower than texture() (~4-5 textureGrad() calls cost more than 16 texture() calls), so I scrapped the idea for PCF.
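The experiment looked roughly like this; a sketch with assumed names, and note that whether hardware actually applies anisotropy to comparison samplers is implementation dependent:

[code]
uniform sampler2DShadow shadowMap; // GL_COMPARE_REF_TO_TEXTURE enabled
uniform vec2 filterDir;            // desired filter direction, in texels
uniform vec2 texelSize;

float shadowLine(vec3 shadowCoord) {
    // widening the gradient along one axis makes the anisotropic
    // filter take a line of bilinear probes (up to the max anisotropy),
    // each probe being a 4-tap depth comparison
    vec2 grad = filterDir * texelSize * 16.0;
    return textureGrad(shadowMap, shadowCoord, grad, vec2(0.0));
}
[/code]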

On the other hand, I do use anisotropic filtering for particle motion blur. I modify the gradients using textureGrad() and do a single texture lookup for each pixel. Since particles use texture filtering and mipmaps, this looks decent and automatically clamps the maximum blur kernel size to the amount of anisotropic filtering enabled, but it has some limitations. The biggest one is the limited precision of the anisotropic filtering direction: it seems like the anisotropic filtering only works for a limited number of angles, my guess is 32 or 64 discrete angles. For particles this works pretty well, but I would not want to use it for full-screen motion blur, as this could cause weird seams in the blur.
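The core of it, as a sketch (assumed names; requires mipmaps plus anisotropic filtering enabled on the sampler):

[code]
uniform sampler2D particleTex;

in vec2 uv;
in vec2 velocity;   // particle's screen-space motion in UV units

out vec4 fragColor;

void main() {
    // keep the normal screen-space gradients for base filtering quality...
    vec2 dx = dFdx(uv);
    vec2 dy = dFdy(uv);
    // ...and stretch one of them along the motion vector, so the
    // anisotropic filter smears the sprite in that direction; the
    // smear length is clamped by the max anisotropy automatically
    fragColor = textureGrad(particleTex, uv, dx + velocity, dy);
}
[/code]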

For the main algorithm, I can’t use any filtering at all, as that would blur over depth discontinuities. For the optimized blur, I also don’t want to mix in the center sample (I want to use my anti-aliased center sample), so bilinear filtering is kind of bad there too.

How stable is the shading (in model space) when the camera position/angle/fov changes arbitrarily?

What is your framerate? (on which hardware?)

It’s a shame ‘depth darkening’ adds this dark halo around objects; IMHO it’s the (very last?) trigger for my brain to scream ‘fake!’ (see lamp, armrests, gong).

Nice idea, I need to test it out. Have you tried extruding particle geometry along the motion vector? I use this for sparks and other fast-moving/emissive particles, but it doesn’t work at all if particle sprites can have arbitrary rotation.

occlusion is baked into vertex colors, so once computed it’s as fast as any vertex coloring (or any other single-value vertex attribute). it depends purely on the geometry; there’s no view dependence.

that scene contains ~500k triangles, ~250k vertices, ~350k unique vertex-normal pairs (normal auto-smoothing). precalculating 64 rays per pair on a hemisphere (~20 mil. rays) takes ~13 sec. on my nvidia 560 ti (448 cores/14 compute units) using openCL (40 sec. on the pure-cpu reference implementation using 4 cores of an intel i7 @ 3.4 ghz). nothing near realtime. it is very sensitive to the quality of the triangle-BVH and the maximum ray distance.
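generating one of those hemisphere directions looks roughly like this (a GLSL-style sketch, cosine-weighted here; the actual kernel may use a different distribution):

[code]
// build a cosine-weighted ray direction on the hemisphere around
// normal n, from two uniform random numbers u1, u2 in [0,1)
vec3 hemisphereRay(vec3 n, float u1, float u2) {
    // cosine-weighted direction in tangent space
    float r   = sqrt(u1);
    float phi = 6.2831853 * u2;
    vec3 local = vec3(r * cos(phi), r * sin(phi), sqrt(1.0 - u1));

    // orthonormal basis around the normal
    vec3 up = abs(n.z) < 0.999 ? vec3(0.0, 0.0, 1.0) : vec3(1.0, 0.0, 0.0);
    vec3 t  = normalize(cross(up, n));
    vec3 b  = cross(n, t);
    return t * local.x + b * local.y + n * local.z;
}
[/code]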

i still want to try tracing shadow rays per pixel instead of comparing against a shadow map, just for fun. looks like that can run pretty ok in realtime.

frametime: at around 1080p (16:9) with depth darkening, 4x msaa, 16-bit hdr and per-sample tonemapping, it’s stable around 12 ms (~80 fps). tho’ that is really variable, since all the nifty things can be switched off or reduced. i use 32 rays per pixel twice (high and low frequency) for ssao, which is way too much (~4 ms of frame-time alone). rendering it at half resolution and upsampling depth-weighted gets that down to ~2.5 ms.
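the depth-weighted upsampling is basically a bilateral filter; a sketch with assumed names:

[code]
uniform sampler2D aoHalf;     // half-resolution ssao
uniform sampler2D depthHalf;  // half-resolution linear depth
uniform sampler2D depthFull;  // full-resolution linear depth

float upsampleAO(vec2 uv) {
    float refDepth = texture(depthFull, uv).r;
    vec2 texel = 1.0 / vec2(textureSize(aoHalf, 0));

    float sum = 0.0, weightSum = 0.0;
    for (int y = -1; y <= 1; y++)
    for (int x = -1; x <= 1; x++) {
        vec2 offs = vec2(x, y) * texel;
        float d = texture(depthHalf, uv + offs).r;
        // samples from across a depth edge barely contribute
        float w = 1.0 / (abs(refDepth - d) + 1e-4);
        sum += texture(aoHalf, uv + offs).r * w;
        weightSum += w;
    }
    return sum / weightSum;
}
[/code]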

you’re right. the depth darkening is, as expected, very view dependent. lots of tweaking is required to get rid of the halos while keeping a nice depth perception. it’s all fake :slight_smile:

Ah, mystery solved. I was already wondering how you could possibly end up with this result:

purely with SSAO, given that the desk behind the chair is not in the depth buffer.

speaking of mystery … i’m still banging my head against LibStruct trying to get the raytracer to fly with it. any chance you’ll share some tutorials?

You’re asking me to make a dozen tutorials, hoping one of them matches your needs? :slight_smile:

If you show me some code, I most likely can spot the misconception in a few minutes.

Random prototype/concept picture for a mobile game:

https://dl.dropboxusercontent.com/u/1668516/randomconcept.png

Cheers,

Kev

no no, just the basics, for a better feeling for the general behaviour. atm it’s more like trial and error and comparing the source code to what happened. i’ll put up some questions in a new thread. o/


Tested my memory skills; I haven’t done any graphics programming in about 5 months! I wanted to see if I could remember how to write a phong lighting shader, and to my surprise I got one working without any Google searches. For some reason I couldn’t get any ambient stuff working, which is weird. But here it is, specular equation included: http://goo.gl/9p16Po

I probably messed something up (the specular highlight is crazy, I couldn’t make it look even semi-realistic), and I guessed at what the view vector for the site was. But it looks fine…

Edit: I was being stupid about the specular thing. http://goo.gl/vVA3SZ

You forgot to normalize the interpolated normal. Your light vector isn’t normalized either, and the view vector is wrong (it should go from the camera to the vertex).
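In GLSL terms, the fixes look something like this (a minimal sketch with assumed variable names, using the camera-to-vertex convention above):

[code]
in vec3 vNormal;     // interpolation denormalizes this
in vec3 vWorldPos;

uniform vec3 lightPos;
uniform vec3 cameraPos;

out vec4 fragColor;

void main() {
    vec3 N = normalize(vNormal);               // re-normalize after interpolation
    vec3 L = normalize(lightPos - vWorldPos);  // normalized light vector
    vec3 V = normalize(vWorldPos - cameraPos); // camera -> vertex
    vec3 R = reflect(L, N);                    // mirrored light direction

    float diff = max(dot(N, L), 0.0);
    float spec = pow(max(dot(R, V), 0.0), 32.0); // shininess of 32 assumed
    fragColor = vec4(vec3(diff + spec), 1.0);    // combine with colors as needed
}
[/code]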

Better?
http://goo.gl/G7CL90

Yeah, now it looks plausible and the code seems fine.

Implemented temporal AA. I do the resolve before transparency and post-processing, so it’s fully artifact free. This also helps the temporally smoothed screen-space global illumination, because now its input and output are temporally stable.
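A generic sketch of this kind of resolve (buffer names assumed): reproject into the history buffer, clamp the history to the current neighborhood, then blend.

[code]
uniform sampler2D currentColor;
uniform sampler2D historyColor;
uniform sampler2D velocityTex;  // per-pixel motion vectors in UV units

vec3 resolveTAA(vec2 uv) {
    vec3 current = texture(currentColor, uv).rgb;
    vec2 prevUV  = uv - texture(velocityTex, uv).xy;
    vec3 history = texture(historyColor, prevUV).rgb;

    // clamp history against the 3x3 neighborhood to reject stale samples
    vec2 texel = 1.0 / vec2(textureSize(currentColor, 0));
    vec3 minC = current, maxC = current;
    for (int y = -1; y <= 1; y++)
    for (int x = -1; x <= 1; x++) {
        vec3 c = texture(currentColor, uv + vec2(x, y) * texel).rgb;
        minC = min(minC, c);
        maxC = max(maxC, c);
    }
    history = clamp(history, minC, maxC);

    return mix(current, history, 0.9); // keep ~90% history per frame
}
[/code]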

I have a couple of things so far. I may add to this list if I can plow through everything I planned.

  • Implemented a hovering effects system for components
  • Draggable panels
  • Parsing renderable interface objects from XML/JavaScript (with the ability to add Java classes to call stuff from)
  • Iterating through a node’s children with the help of my nifty (but simple) [icode]ChildIterator[/icode]
  • Caching multiple font faces in my GLFont structure :slight_smile:

Not much progress so far, but more to come :slight_smile:

CopyableCougar4

Did you see my post on temporal SRAA? http://www.java-gaming.org/topics/temporal-subpixel-reconstruction-anti-aliasing/34555/msg/326409/view.html#msg326409