Oh stop being so jealous
It looks awesome.
I thought it was interesting but BS last year, and I think the same this year. The number of questions they are avoiding (especially about animation, as many have already pointed out) leads me to believe that they haven't solved those issues.
And in this most recent video, what really got me was the "ugly" polygon grass they showed, which was just three crossed faces shaped like a *, but which moved and swayed in the wind, giving the world a very alive feeling. Then he shows his grass, which has plenty of detail but just sits there and looks dead.
The big, important fact: I play a game to feel enveloped in the world, whether it's ugly blocks like in Minecraft, simple retro 2D graphics, or super pretty polygon Valve graphics. And his demo world is static and dead; it feels like a giant clay model.
Wow, this tech is pretty impressive, even without animation.
I think everyone's being overly critical because that tech guy isn't modest enough, the way he says "unlimited" all the time. But it's pretty clear that he's made something quite incredible.
And he's from Queensland, Australia! That's amazing in itself, since Queenslanders are bogans who can't do anything but get drunk and be crass. These guys at Euclideon seem organised and have brains.
I honestly think they are avoiding questions, because the technology is relatively simple.
It's just an advanced culling algorithm done on each pixel (instead of on the frustum). It's the logical next step. Geometry is no longer the bottleneck.
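If that's roughly what's going on, a per-pixel culling pass over a sparse voxel octree might look something like the sketch below. This is purely my own guess at the approach, not anything Euclideon has described; the node layout, the pinhole projection and the one-pixel threshold are all assumptions.

```cpp
#include <cstdint>
#include <memory>
#include <vector>

// Hypothetical sparse-octree node: a cube of "atoms" with a pre-filtered colour.
struct Node {
    float cx, cy, cz, half;            // cube centre and half-size (world units)
    uint32_t colour;                    // averaged colour for the whole cube
    std::unique_ptr<Node> child[8];     // null where the cube is empty
    bool leaf() const { for (auto& c : child) if (c) return false; return true; }
};

struct Camera { float x, y, z; float focalPx; };  // pinhole camera looking along +z

// Recursively descend the octree; cull anything off-screen and stop subdividing
// as soon as a cube projects to less than one pixel, splatting its colour through
// a plain depth test. The effect is roughly one "atom" per screen pixel, however
// much geometry the tree holds. (Projection and ordering are heavily simplified.)
void renderNode(const Node& n, const Camera& cam,
                std::vector<float>& depth, std::vector<uint32_t>& frame,
                int width, int height)
{
    float dx = n.cx - cam.x, dy = n.cy - cam.y, dz = n.cz - cam.z;
    if (dz <= 0.0f) return;                                   // behind the camera
    float px = cam.focalPx * dx / dz + width  * 0.5f;
    float py = cam.focalPx * dy / dz + height * 0.5f;
    float sizePx = cam.focalPx * (2.0f * n.half) / dz;        // projected cube size

    if (px < -sizePx || py < -sizePx || px >= width + sizePx || py >= height + sizePx)
        return;                                               // outside the view: culled

    if (sizePx <= 1.0f || n.leaf()) {                         // small enough: splat one pixel
        int ix = (int)px, iy = (int)py;
        if (ix < 0 || iy < 0 || ix >= width || iy >= height) return;
        int i = iy * width + ix;
        if (dz < depth[i]) { depth[i] = dz; frame[i] = n.colour; }
        return;
    }
    for (auto& c : n.child)                                   // otherwise keep subdividing
        if (c) renderNode(*c, cam, depth, frame, width, height);
}
```

The point is that the cost ends up bounded by the number of pixels rather than by the amount of geometry, which is what would make the "unlimited" claim plausible.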
The problem is that when you are shown only one demo, you have to assume the limits of the demo are the limits of the engine, and the demo is pretty limited. Any one of those limitations could be a deal breaker for doing anything useful.
It looks like a cube world made up of only about eight or so different cube types, which makes sense because it's hard to imagine how the memory requirements for each cube are not massive. The only reflective surface shown is a perfectly flat mirror, and I can't see any shadows, so it is quite probable they aren't doing any dynamic lighting at all and it is all baked in: no shadows, no shiny objects, no moving lights.
If they don't have any memory issues (as they claim), they should forget about making a voxel engine, as the compression tech would be worth much more.
If the main problem is memory consumption, then they should be able to implement non-flat reflections. Maybe not full ray tracing, but seriously, how many polygon games have reflective surfaces at all? Sure, it's maybe not as overly fantastic as they say, but if they can release a downloadable demo in a few years with animation, lighting, etc., I don't see why you're judging them so hard. Sure, it's maybe not unlimited, but you would still be able to make an AAA game with this if it works. If it has the performance they say it has, it would be amazing. A software renderer running at 25 FPS on a quad-core laptop is pretty f*cking amazing considering the output. He also said the demo could easily be optimized to 3x performance (questionable, but whatever). The real question is how well it will scale with multiple processors. That will decide how well a version running on a GPU would work. A Radeon HD6970 has 1600 stream processors running at 880MHz. Compared to a quad-core CPU, in this case a laptop i7-2630QM, a graphics card has much more theoretical processing power. For a quick (maybe really inaccurate) comparison, look at Bitcoin mining: https://en.bitcoin.it/wiki/Mining_hardware_comparison
i7-2635QM (closest match): 2.93 million hashes/sec
Radeon HD6950: 272 million hashes/sec
NVidia GTX560 Ti: 67.7 million hashes/sec
Even considering very bad scaling compared to theoretical performance (30-100x), we can still assume a 10x speedup over the CPU version is achievable. Multiply that by the promised optimizations, let's say a 2x increase as a worst-case scenario, and we still have a 20x increase in performance. 20 times the 15-25 FPS achieved in the demo would be 300-500 FPS. For the content in that demo. Which would be insane geometry performance compared to current AAA games no matter how you look at it, even without advanced shaders, lighting, etc.
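Spelling out that back-of-envelope math (the 10x and 2x factors are my assumptions, not measurements):

```cpp
#include <cstdio>

// Rough estimate of the hypothetical GPU-port framerate discussed above.
// Every factor here is an assumption from this post, not a benchmark.
int main() {
    const double demo_fps_low = 15.0, demo_fps_high = 25.0; // quoted demo framerate
    const double gpu_scaling  = 10.0;  // assumed CPU->GPU speedup, well below the 30-100x theoretical gap
    const double optimisation = 2.0;   // worst-case take on the promised 3x software optimisation
    const double total = gpu_scaling * optimisation;          // 20x overall
    std::printf("estimated GPU FPS: %.0f - %.0f\n",
                demo_fps_low * total, demo_fps_high * total); // prints 300 - 500
    return 0;
}
```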
Now before you start flaming me:
I know I compared a laptop CPU to a high-end desktop graphics card, but really, isn't that the target hardware for most AAA games? If a game runs badly on cheap hardware, people can't really complain if it looks that good. xd
Memory problems would be even more insane. Graphics cards usually don't have 8GB of memory… more like 1-2GB. And consider that each "atom" would require more data than in that demo if they had lighting (do they need normals? I think they do…), shader data, etc.
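To put a rough number on it, here is a hypothetical layout for one lit "atom"; the fields and byte counts are my own guesses, not anything they have stated:

```cpp
#include <cstdint>
#include <cstdio>

// Guessed layout of one lit "atom". Field sizes are illustrative only.
struct Atom {
    int16_t  x, y, z;      // quantised position inside a block: 6 bytes
    uint8_t  nx, ny, nz;   // packed normal for lighting: 3 bytes
    uint8_t  r, g, b;      // albedo colour: 3 bytes
    uint16_t material;     // index into shader/material data: 2 bytes
};                         // ~14 bytes, before padding or any tree overhead

int main() {
    const double atoms = 1e9;  // a billion atoms is modest for "unlimited" detail
    std::printf("raw size: %.1f GB for %.0e atoms of %zu bytes each\n",
                atoms * sizeof(Atom) / 1e9, atoms, sizeof(Atom));
    return 0;
}
```

Even at roughly 14 bytes per atom, a billion atoms already blows past the 1-2GB of VRAM on a typical card, before any octree overhead or compression.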
Great. Now I have even more questions I want them to answer. Can they extract motion vectors for motion blur? How fast can they calculate the distance of a point to the screen (for SSAO/HBAO/shadow mapping; remember, they did have some buggy shadow mapping in the first demo)? What resolution was the demo running at? Considering the clearly aliased edges when he happened to move into something in the demo, I think it was quite low, actually… What about antialiasing? Can it support anything faster than supersampling? Will supersampling even be slower compared to today's game engines? Since the lighting will effectively be performed just like deferred rendering in the engines we use today, and considering that a good MSAA implementation will do the lighting for each sample, would we even see a big performance cost from jittered supersampling with this kind of geometry performance, compared to deferred shading? How do they not have huge aliasing problems now? Can their solution be used for antialiasing? Memory usage? How much data needs to always be on the graphics card in a GPU implementation? If they have geometry "mipmaps", they could keep the whole world in RAM and only send the needed "mipmaps" to the GPU, problem solved (a crude sketch of what I mean is below). Same principle for RAM and a hard drive, if the hard drive/SSD is fast enough (streamed, of course, but I couldn't see any "pops" or anything, which they even bragged about having eliminated… Geh, I dunno!!!)
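The "geometry mipmaps" idea is basically level-of-detail selection on pre-filtered versions of each block: keep everything in system RAM and upload only the levels the current view can actually resolve. A crude sketch of that selection, with all names and thresholds invented for illustration:

```cpp
// Purely illustrative: pick which pre-filtered LOD level ("geometry mipmap") of a
// world block must be resident in VRAM for the current view. Level 0 is the finest;
// each coarser level halves the resolution, like a texture mip chain.
int requiredLodLevel(double blockSizeWorld,   // edge length of the block (world units)
                     double distanceToCamera, // distance from the camera (world units)
                     double focalLengthPx,    // pinhole focal length in pixels
                     int    coarsestLevel)    // number of levels available
{
    // How many pixels does the whole block cover on screen?
    double pixels = focalLengthPx * blockSizeWorld / distanceToCamera;

    // Walk up the mip chain until one atom of that level covers about one pixel.
    int level = 0;
    double atomsAcross = 256.0;               // assumed atoms per block edge at level 0
    while (level < coarsestLevel && atomsAcross > pixels) {
        atomsAcross /= 2.0;                   // coarser level: half the atoms per edge
        ++level;
    }
    return level;                             // only this level and coarser need streaming to the GPU
}
```

With something like that, distant blocks only ever need their coarse levels on the card, which would also go some way to explaining how they could stream without visible pops.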
I understand some people are skeptical of the demo. So much is left to speculation, and not much is actually proven. But don't look at what it isn't, look at what it is! It is a different approach to rendering that could actually rival polygons for realtime applications. The fact that they got this far with about 10 people is insane, considering polygon rendering has evolved over many, many years. Think about what this could become with the funding and research that polygon rendering has had. I WANT this to work, because I want to see/use the end result. I WANT them to deliver a working demo when they are done with it in a year or two. If it turns out they couldn't, then shit happens, but it has potential. For the moment I'm gonna assume they can do this. Maybe it won't work out.
I often work with researchers (mainly medical, though). They are usually great and enthusiastic. They often manage to get something working, but when they want to sell it… it's a big failure (when they work alone).
I'm not against this engine, nor this technique. But if you think about it, there is so much working against them:
- new technique: a lot of unknowns
- habits: it is always difficult to change people's habits. They will have to change engines, design, workflows, … (that is a lot of work)
- prejudice: just see how many people are far too enthusiastic (bad thing) or pessimistic (bad thing too), and all the technical assumptions that have been made
- API competitors: it is not like 10 years ago… OpenGL and DirectX are really advanced APIs.
- hardware competitors: some people have said that they want to build their own GPU or CPU. A full graphics card is a no-go; they would have to build an OpenGL-compatible card that can compete with NVidia and AMD. An additional add-in card is not really viable either (very few people would buy it). And in both cases, if there is no software equivalent, no company will use the engine.
- small company: limited human and financial resources.
I wish them a lot of courage and luck.
By the way, I have heard a lot about optimization, optimization and optimization. I don't know about you, but my teachers always said: "Don't optimize too early; make something almost complete and then start optimizing. Otherwise, you will lose a lot of time redoing most of your code."
With this in mind, for me, the best approach is to stay on the CPU (or maybe with OpenCL… it could be interesting that way; multi-threading is the way to go at least) at a low resolution (640x480) and a low FPS (even 5 fps is fine), but with maximum functionality and quality. There is very little risk at the optimization stage (and by then, the hardware will be even more powerful).
You're right, and I'm not encouraging them to optimize it now. I just mentioned that in the video they said they could optimize it to 3x performance.
When it comes to hardware, I don't see why they shouldn't be able to implement it in OpenCL or something similar. If they can't, I don't think they can really solve it even with dedicated hardware. Maybe their official point cloud graphics card will be a modded NVidia or ATI card with 16GB VRAM… ;D