Unlimited Gfx Detail

I came across the following post (video) on RPS. Basically it's promising to render unlimited detail without requiring a supercomputer.

Still skeptical that it'll work as promised, but if it does it could be pretty interesting.

What do you people think, is it possible that it could work?

I don’t see why not, so long as they have some fancy indexing. Of course it means that nothing dynamic could happen, since that would require re-indexing (and tellingly there was no sign of animation in their videos).
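
To get a feel for the kind of indexing involved, here's a minimal sketch of an octree point index in Python. This is purely a toy assumption on my part (capacity, split rule, everything), not their actual data structure:

```python
# Minimal octree point index: each node covers a cube and splits into
# eight children once it holds more than LEAF_CAPACITY points.
LEAF_CAPACITY = 4

class OctreeNode:
    def __init__(self, center, half):
        self.center = center      # (x, y, z) center of this cube
        self.half = half          # half the cube's side length
        self.points = []          # points stored here while a leaf
        self.children = None      # 8 children once split

    def _child_index(self, p):
        # Which octant of this cube contains point p?
        cx, cy, cz = self.center
        return (p[0] >= cx) | ((p[1] >= cy) << 1) | ((p[2] >= cz) << 2)

    def _split(self):
        cx, cy, cz = self.center
        q = self.half / 2
        self.children = [
            OctreeNode((cx + (q if i & 1 else -q),
                        cy + (q if i & 2 else -q),
                        cz + (q if i & 4 else -q)), q)
            for i in range(8)
        ]
        for p in self.points:
            self.children[self._child_index(p)].insert(p)
        self.points = []

    def insert(self, p):
        if self.children is not None:
            self.children[self._child_index(p)].insert(p)
        elif len(self.points) < LEAF_CAPACITY:
            self.points.append(p)
        else:
            self._split()
            self.insert(p)

    def leaf_for(self, p):
        # Descend to the leaf cube that would contain p.
        node = self
        while node.children is not None:
            node = node.children[node._child_index(p)]
        return node

root = OctreeNode((0.0, 0.0, 0.0), 8.0)
for p in [(1, 1, 1), (1.5, 1, 1), (1, 1.5, 1), (-3, 2, 0), (1.2, 1.2, 1.2)]:
    root.insert(p)
leaf = root.leaf_for((1.1, 1.1, 1.1))
```

The point being: lookups descend the tree in log time, but any change to the geometry means rebuilding or patching nodes, which is exactly the re-indexing problem for animation.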

Yeah, there’s nothing particularly new or interesting going on there really, sparse voxel tracing has been around a while now. See this much more interesting variant from a couple of years ago that actually runs on a GPU rather than their CPU implementation (and so seems to run a whole lot faster). The latest batch of 3D mandelbrot renderers have all been using this technique (again, on a gpu) as well.
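
For anyone curious what "tracing" through voxels means in practice, here's a toy ray-march in Python: cast a ray per pixel and report the first occupied cell. Real sparse-voxel tracers descend an octree and skip empty space rather than stepping a dense grid at a fixed increment, so this shows only the idea, not the performance:

```python
# Toy ray-march through a dense boolean voxel grid: step the ray in
# small increments and return the first solid cell it passes through.
GRID = 16
solid = [[[False] * GRID for _ in range(GRID)] for _ in range(GRID)]
solid[8][8][8] = True  # one solid voxel in the middle of the grid

def trace(origin, direction, step=0.05, max_t=40.0):
    t = 0.0
    while t < max_t:
        x = origin[0] + direction[0] * t
        y = origin[1] + direction[1] * t
        z = origin[2] + direction[2] * t
        i, j, k = int(x), int(y), int(z)
        if 0 <= i < GRID and 0 <= j < GRID and 0 <= k < GRID \
                and solid[i][j][k]:
            return (i, j, k)
        t += step
    return None  # ray escaped the scene

print(trace((0.0, 8.5, 8.5), (1.0, 0.0, 0.0)))  # (8, 8, 8)
```

Note that nothing here cares how much geometry exists beyond the first hit, which is where the "unlimited detail" framing comes from: cost scales with pixels traced, not with scene polygon count.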

It’s a bit of a dead end for games IMHO. It’s completely static so it’s not really practical to do animation and things like dynamic lighting and shadowing are probably impractical too. And games have been trending towards less precalculation and more dynamic environments and effects (eg. Quake’s static lightmaps vs. Doom3’s dynamic lights vs. Crysis’ realtime radiosity) so this would be a real step backwards for a questionable gain in detail.

Hehe, this is the return of the "zero pixel overfill" Grail!

I remember before 3D cards came out, this was basically the thing every 3D engine programmer was looking for. I don't know if this one is the solution, but I really think the solution/future of 3D lies more in these new kinds of algorithms than in brute-forcing more and more billions of polygons every time a new GPU comes out.

Brute-forcing polygons is just a dumb commercial approach. It's like searching for the shortest path in a graph by trying every possible path and throwing more CPU at it to get the answer faster, rather than using/searching for a better algorithm.

[quote]It’s a bit of a dead end for games IMHO. It’s completely static so it’s not really practical to do animation and things like dynamic lighting and shadowing are probably impractical too. And games have been trending towards less precalculation and more dynamic environments and effects (eg. Quake’s static lightmaps vs. Doom3’s dynamic lights vs. Crysis’ realtime radiosity) so this would be a real step backwards for a questionable gain in detail.
[/quote]
I really like this idea of new directions in 3D. Even though I know that's not what you said, since you mentioned Quake / Doom3 / Crysis I wanted to point something out:

All of them are a kind of linear progression; none of them is a truly revolutionary invention. As said in the video, it's just more power => more polygons & more shaders. In the end there's nothing really new or interesting in those engines: they're the sum of thousands of little improvements, but no single one is revolutionary, and they all build on existing technology without real fundamental research in 3D. It's like adding more and more cores to a CPU versus trying to build a quantum computer: one is nothing new, just linear progression; the other is genuinely something new, offering exponentially more power.

All that just to say that I really love their work, even if it's still not perfect :slight_smile:

Seems they're not alone in looking for something like a 3D point-cloud-based technique.

Planned for the id Tech 6 engine under the name Sparse Voxel Octree: http://my.mmoabc.com/article/VGenforcer/4186/A-first-look-at-id-tech-6-engine-Infinite-geometry.html?login=no

id Tech 6, short explanation on Wikipedia: http://en.wikipedia.org/wiki/Id_Tech_6

A must-read article with some of John Carmack's ideas: http://www.pcper.com/article.php?aid=532

I'd love to see a hybrid system that seamlessly blended these techniques together, and was intuitive enough to build worlds for.

Personally, I found their commentary a bit unrealistic. They claimed ray tracing was horribly slow and outdated, but it is in essence what they're doing (casting out into the scene to find a color for a single pixel). Why not just offer up their apparently amazing search indexing to the ray-tracing community? The images provided were certainly detailed, but you can definitely notice sampling aliasing in the stonework, etc., when they zoom out.

Additionally, as has been mentioned before, animation and dynamic scenes are an as-yet unsolved problem for voxel and point renderers. Even the video provided by OrangyTang was static.

Yeah, that’s the same tech as the video I linked to earlier. There’s a pdf linked from the info page which has all sorts of interesting details (and IMHO is much more advanced than this “unlimited detail” video).

Edit: direct paper link: http://s08.idav.ucdavis.edu/olick-current-and-next-generation-parallelism-in-games.pdf

Their current feeling seems to be to trace the static voxels for your static environment, and have it spit out regular depth buffer values so that the dynamic stuff (characters, objects, etc.) could be rendered on top correctly. That’s pretty cool and probably quite workable, but I’m not sure if having an almost entirely static environment is a good trade off for the increase in detail.
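
That compositing idea can be sketched in a few lines of Python. The values here are made up; in a real engine this per-pixel comparison is just the GPU's own depth test, fed by the depth values the voxel tracer writes out:

```python
# Merge a voxel-traced static layer with a rasterized dynamic layer by
# keeping, per pixel, whichever fragment is closer to the camera.
def composite(static_color, static_depth, dynamic_color, dynamic_depth):
    out = []
    for sc, sd, dc, dd in zip(static_color, static_depth,
                              dynamic_color, dynamic_depth):
        out.append(dc if dd < sd else sc)  # smaller depth wins
    return out

# Four pixels: a dynamic character (depth 2.0) is in front of the static
# environment on two of them; 99.0 marks "nothing rendered here".
static_color  = ["rock", "rock", "rock", "sky"]
static_depth  = [1.0,    5.0,    5.0,    9.0]
dynamic_color = [None,  "char", "char",  None]
dynamic_depth = [99.0,   2.0,    2.0,   99.0]

print(composite(static_color, static_depth, dynamic_color, dynamic_depth))
# ['rock', 'char', 'char', 'sky']
```

The nice property is that the two renderers never need to know about each other; the depth buffer is the whole interface between them.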

Plus I suspect you’d have a hard time getting your static voxels and dynamic models to visually mesh together. You’d end up with something like an old WB cartoon (like this) where the dynamic stuff (the rock in this case) looks completely different from the static rocks in the environment.

The one thing I find interesting about this is that it could potentially work on mobile systems like the iPhone. If I had this rendered in my game, people would sh!t their pants. There’s no need for dynamics in a mobile game.

I smell hoax.

It is still a technique under research. If you remember, polygon 3D was not that dynamic when it started: no morphing / no 3D characters! Just sprites & billboarding for trees and such / no shadows / not even per-pixel lighting. So no doubt there will be algorithms enabling animation and more; it's just a question of time to find the best one.

Really? I can't remember seeing any purely static 3D polygonal environments, but I've seen literally dozens of static voxel landscapes (dating back to at least 1992 with Comanche, up through Outcast, and now this garbage).

Heck, Zarch had dynamic filled 3D polygons back in 1987.

It looks to me like they have solved the animation/lighting issues which is the whole point of them applying for funding etc etc. I look forward to seeing what they finally come up with.

Cas :slight_smile:

I'm a little bamboozled that Derek Smart is 47 years old?
(see the last comment on that page)

You can't remember Doom & Wolfenstein?

But if you find those too dynamic to compare, I could have pointed to some of the very first 3D games, built only from empty/unfilled wireframe polygons, that I was playing on my CPC464 :stuck_out_tongue: Everything has a start and everything has an end :). Back then, even the thought of filling those polygons with colored pixels was impossible/unrealistic, so dynamic lighting and the like was just science fiction.

Also, animating such a world is indeed possible. (A silly idea, but… just create 25 of those worlds and you've got one second of animation; find a way to export it as a Z-buffer and you can merge it with traditional polygonal 3D or apply shaders to it. Or maybe some small branches of the tree could be animated/updated in realtime?) But that would require some expensive computation, nearly as impossible-sounding an idea as talking 20 years ago about rendering 1 billion polygons at 100 FPS. (You just reminded me of a project someone posted in the showcase which I really liked, fully voxel-based and animated: http://www.helicopter-fun.com/heli.php?cores=2&size=800&quality=2 Add some antialiasing and it's an amazing little voxel software 3D renderer.)

Why not think of replacing GPUs capable of rendering billions of polygons with GPUs capable of rendering billions of these kinds of 3D octree objects? The base primitive of the GPU would then become a 3D "octree/free object" rather than 2D "flat faces (triangles/polygons)". It could be cool.

A modern GPU doesn't care what it renders. All the nice things you can read in the GL spec are actually programs executed on the GPU. You could implement this right now using OpenCL, but as already mentioned, this tech isn't yet ready to replace polygons. Interoperability is the new mid-term goal.

Neither is polygon based, and both have moving world geometry.

A detailed animated character in a polygon-based game may have somewhere around 10,000 polygons. All the rest of the detail comes from textures and normal mapping. So when someone needs to animate an arm or limb using skinning, they only have to do a maximum of 10,000 vertex blends (which can easily be accelerated on the GPU).

For a voxel-based renderer, every single cell that makes up the same limb would have to be animated (or you'd need some weird screen-to-object-space transformation that depends on the animation bones and the part of the voxel octree being accessed). Either way, it seems like far more operations would be needed to animate a character with the same amount of visual detail as a 10,000-polygon model.
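
For comparison, here's roughly what one of those per-vertex blends looks like in linear blend skinning, sketched in Python with toy matrices. A real engine runs this once per vertex on the GPU; the point of the post above is that a voxel limb of similar visual detail would need the equivalent update per cell, of which there could be millions:

```python
# Linear blend skinning: each vertex is a weighted blend of the vertex
# transformed by each influencing bone's affine matrix.
def transform(matrix, v):
    # Apply a 3x4 affine matrix (3x3 rotation/scale + translation column).
    return tuple(sum(matrix[r][c] * v[c] for c in range(3)) + matrix[r][3]
                 for r in range(3))

def skin(vertex, influences):
    # influences: list of (weight, bone_matrix); weights sum to 1.
    x = y = z = 0.0
    for w, bone in influences:
        px, py, pz = transform(bone, vertex)
        x += w * px
        y += w * py
        z += w * pz
    return (x, y, z)

identity  = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0]]
translate = [[1, 0, 0, 2], [0, 1, 0, 0], [0, 0, 1, 0]]  # move +2 on x

# A vertex weighted 50/50 between a fixed bone and a moving bone ends
# up halfway along the moving bone's translation.
print(skin((1.0, 0.0, 0.0), [(0.5, identity), (0.5, translate)]))
# (2.0, 0.0, 0.0)
```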

:'( It's crazy how you're limiting your imagination to existing things…