Unlimited Gfx Detail

I just realized that most “voxel” based games out there aren’t voxel based at all, but rather just heightmaps rendered as if they were voxels (true for your game, Outcast, Comanche and Delta Force). Blood and other late Build engine games had 3D voxel sprites that looked really nice, but the levels weren’t based on them.

In fact, the only true voxel engine I can think of right now is VOXLAP by Ken Silverman, who also wrote the Build engine.

I will, however, admit that combining voxels and raytracing is a much more accurate simulation of the real world than polygons will ever be. The “only” minor problems are that voxels require staggering amounts of RAM and that raycasting in dynamic voxels is still a very, very slow process.

GPU/CPU power has grown so incredibly in the last 20 years that maybe in another 20 years we will build 3D worlds by modifying “3D pixels” the way we already modify 2D pixels on a texture, à la Minecraft :stuck_out_tongue:

As you argue about the lack of dynamics/interactivity, consider that this would be the only way to get proper destruction models; that is simply impossible to achieve with a “standard” polygonal approach (the results are very poor even in the Crysis engines…). You could imagine a bomb falling on the ground and making a big hole, and everything would still appear perfectly rendered and textured. Think about fluid finding its way through such a model, also impossible with the current polygonal approach. Everything would become possible, which is not the case right now.

This is, IMO, far better suited to animation and interactivity than the current polygonal model, but it is the wrong approach to adapt already existing techniques to it; better to find new ones. Just imagine a world where bitmap textures did not exist and you had the equivalent of our current 3D world.

But for sure it won’t come in the next few years…

The problem is scalability:

The framebuffer is our current ‘2D’ world, in which everything happens.
Filling a triangle takes O(n^2), and can be easily optimized by frustum culling and depth culling.

A voxel grid would be our future ‘3D’ world, in which everything happens.
Filling a volume takes O(n^3), and can’t be optimized in any way.

The problem here is that for 2D grids to improve graphics, you need constant evolution.
For 3D grids to improve graphics, you need constant revolution, or we’d have to wait decades or centuries, with the current evolution.
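To put rough numbers on the argument above, here is a small illustrative sketch (the class and method names are made up) comparing how cell counts grow when resolution doubles in 2D versus in 3D:

```java
// Illustration of the scaling argument: doubling resolution quadruples
// the pixel count of a 2D framebuffer, but multiplies the cell count
// of a dense 3D voxel grid by eight.
public class GridGrowth {
    // number of cells in an n x n framebuffer
    static long cells2d(long n) { return n * n; }

    // number of cells in a dense n x n x n voxel grid
    static long cells3d(long n) { return n * n * n; }

    public static void main(String[] args) {
        for (long n = 256; n <= 2048; n *= 2) {
            System.out.println(n + ": 2D=" + cells2d(n) + " 3D=" + cells3d(n));
        }
    }
}
```

At 2048 per axis the 3D grid already holds over eight billion cells, which is why the post argues that 3D grids need a revolution rather than an evolution.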

As I noted on the jME forums, there has already been talk about this (sparse voxel octrees) for some time. There is a prediction that within a couple of years we will be playing games that use this technology. And whatever objections you raise against it, like animation being impossible, they will be solved eventually. There is nothing telling us that there could be nothing better than polygon graphics.

I may seem to be limiting myself, but that is not the case. I was merely trying to point out that the tech demo shown was unlikely to be feasible for games. If they had the large amount of innovations necessary to animate in realtime voxel/point clouds of that size they should have no trouble getting funding (or at least academic recognition, you’d think).

They won’t do a lot without any money, that is the problem. They probably need funding to perform/continue research, but they need to get a little further before they get money (hehe, it just makes me think: which came first, the chicken or the egg?)

I watched the video again and again and really think it is a promising technology (they already have water reflections; faked for sure, but they do the job).

PS: I may not have understood correctly, but didn’t he say “the rendering is done in software”? That would be logical, as hardware is probably not well adapted to this kind of rendering, but if they can render that 3D world in software at a decent FPS, imagine the power of parallelization on a powerful GPU!

Anyone got some other videos about this?

EDIT: they have some explanation of the lighting and nice screenshots on their website: http://unlimiteddetailtechnology.com/pictures.html

Wow those screenshots are extraordinary. It’s rare to see 3D trees looking so good. 3D graphics can easily look worse than 2D because they lack detail, but those screenshots are rich with details.

It’s strange that they’re working on this tech for games rather than big-budget movies; I would have thought that movies would be an easier market.

It says you can “import objects from the real world using a laser scanner”. That’s amazing!

Bring it on, I can’t wait for the demos

Here is another video: http://www.youtube.com/watch?v=l3Sw3dnu8q8 (I don’t really like how this man presents/talks about his technology, but it is still impressive work).

I found it odd that in some videos the man suggests that his system can be used in mobile phones and other limited devices, and then in another video suggests that scenes can hold trillions and trillions of points. At some point the memory and disk space will become an issue, maybe not for desktop computers, but he seems to make a lot of really large claims without providing runnable demos, test computer statistics or anything like that.

Also, personally I hate the way he says “unlimited point cloud data” :persecutioncomplex:

There is no such thing as “unlimited” in computers.

If you have a world consisting of quadrillion trillion billion points, you need to store that data somewhere. You need to store the data of the world. Where???

Yes, I really hate that too. Unnnnnnnnnnnnnnnnnlimmited!

Rather than the lack of animation/dynamics and such, which I am sure can be improved, memory size is the most important problem I can see for this technique to be useful, because even if you can remove “plain areas” and save a huge amount of bytes, you will still store millions of useless details that nobody will ever look at.

Some more thinking about this technique…

For a project I am working on, I have used a huge map image (more than 350 billion points) that is seamlessly streamed to an applet and can be moved/zoomed at constant speed => not depending on the final texture size (nearly Unnnnnnnnnnnlimited! :P),

and… finally it makes me think that this technique is maybe not that complex if you apply the same tricks you would use for a 2D grid (texture) to a 3D grid (octree):

For a texture you would recursively create mipmaps until you get a 1 * 1 texture.
Mine is 864000 * 432000; starting from this texture you get:
864000 * 432000
432000 * 216000
216000 * 108000
etc… until you get
2 * 1
Then, using a quadtree (which is in a certain manner the 2D equivalent of an octree) and the user’s point of view, you can easily determine which part of the picture is visible and which mipmap you have to load. This enables the user to navigate over a theoretically infinite image (and maybe even draw on it…).
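A minimal sketch of that level-selection idea (hypothetical code, not the poster’s actual applet): pick the mipmap level whose resolution roughly matches what ends up on screen, assuming we know how many source texels one screen pixel covers at the current zoom:

```java
// Sketch of mipmap level selection for streamed viewing.
// texelsPerPixel: how many source texels fall into one screen pixel at
// the current zoom; level 0 is the full-resolution image.
public class MipSelect {
    static int mipLevel(double texelsPerPixel, int maxLevel) {
        int level = 0;
        // each mip level halves the resolution, i.e. doubles the
        // number of source texels covered by one screen pixel
        while (texelsPerPixel > 1.0 && level < maxLevel) {
            texelsPerPixel /= 2.0;
            level++;
        }
        return level;
    }

    public static void main(String[] args) {
        // zoomed out so that 8 texels fall into one pixel -> level 3
        System.out.println(mipLevel(8.0, 12));
    }
}
```

Because only the tiles of the chosen level that intersect the view have to be loaded, the work per frame stays roughly constant no matter how large the source image is.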

So, doing exactly the same in 3D enables navigating an octree at constant speed:

the quadtree becomes an octree,
the 2D grid of pixels (texture) becomes a 3D grid of points,
and the mipmap (which merges 4 pixels into 1 pixel) becomes a 3D mipmap that merges 8 points into 1 point.

For a 3D grid of 256 * 256 * 256 the mipmapped grids will be something like:
128 * 128 * 128
64 * 64 * 64
etc… until you get
1 * 1 * 1

Finally, displaying a 3D Unnnnnnlimited grid at constant speed is EXACTLY the same as displaying a 2D Unnnnnlimited grid at constant speed; both are possible and not that hard.
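The 8-points-to-1 merge described above could be sketched like this (a dense-array illustration with made-up names; a real implementation would work on the sparse octree):

```java
// Minimal sketch of the "3D mipmap" step: each level merges 2x2x2
// cells of the finer level into one averaged cell, so a 256^3 grid
// shrinks to 128^3, then 64^3, ... down to 1^3.
public class Mip3d {
    // downsample a dense cubic grid of grayscale values by one level
    static float[][][] downsample(float[][][] src) {
        int n = src.length / 2;
        float[][][] dst = new float[n][n][n];
        for (int x = 0; x < n; x++)
            for (int y = 0; y < n; y++)
                for (int z = 0; z < n; z++) {
                    float sum = 0;
                    for (int i = 0; i < 2; i++)
                        for (int j = 0; j < 2; j++)
                            for (int k = 0; k < 2; k++)
                                sum += src[2*x+i][2*y+j][2*z+k];
                    dst[x][y][z] = sum / 8f; // 8 points merged into 1
                }
        return dst;
    }
}
```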

Those who wonder how you can possibly store trillions of points in a scene - you’re thinking polygons again. Were you to render it with polygons you would indeed need to render trillions of polygons. However this is clearly a very neat instancing solution. Imagine a scene with a million bricks in it, and each brick was made up of 10,000 points. You wouldn’t be storing 10,000,000,000 points - you’d be storing 10,000 points and 1,000,000 bricks, and the crucial part is, the rendering complexity is identical to the same scene with just 50 bricks in it (or near as dammit identical). Using warping and blending and other techniques applied dynamically to each instance would be a mean feat indeed but that’s how they’d get animation. They could animate tens of thousands of soldiers in an army in this way if they’ve figured out how to efficiently store and raycast into these mutated instances. And it’d all be in full detail, all the way to the back of the scene. Wowsers.
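A back-of-the-envelope check of that storage argument, using the numbers from the post (the class and method names here are made up):

```java
// Storage arithmetic for instancing: one shared 10,000-point brick
// plus 1,000,000 lightweight instance records is far cheaper than
// storing 10,000,000,000 individual points.
public class InstancingMath {
    static long naivePoints(long instances, long pointsPerModel) {
        return instances * pointsPerModel;
    }

    static long instancedRecords(long instances, long pointsPerModel) {
        // one shared point cloud + one transform record per instance
        return pointsPerModel + instances;
    }

    public static void main(String[] args) {
        System.out.println(naivePoints(1_000_000, 10_000));      // 10,000,000,000
        System.out.println(instancedRecords(1_000_000, 10_000)); // 1,010,000
    }
}
```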

Cas :slight_smile:

It is meant as limitless level of detail. If you want, and have the amount of memory needed, you can subdivide the mesh into pieces as tiny as you want. And if the mesh is subdivided into an octree, I can imagine that it is possible to subdivide different parts of the mesh with differently sized pieces.

The sparse voxel octree can be viewed as a specially encoded 3D texture, which removes the need for polygon geometry.

Also, unneeded parts (plain/underground) do not need to exist in the 3D data structure, and parts that don’t need a lot of detail can be left undetailed. That’s the strength of this technique: you are free to sculpt whatever you want, or modify existing objects by adding details or removing areas, and the structure will always adapt to fit.

This is definitely a promising technology…

Some other nice related videos:

GigaBroccoli : http://www.youtube.com/watch?v=PFr-cEEb8y0&feature=related
3D coat : http://www.youtube.com/watch?v=tx9LRvNATRg&feature=related

It’s fairly interesting, and I would like to see a more in depth, mature demo.
Generally I don’t find hype, sarcasm and lack of substance to be a great combination when it comes to new technology.

The video is awfully cringeworthy, isn’t it? He sounds rather like the nerd from The Inbetweeners :slight_smile:

Cas :slight_smile:

I’m trying to understand how all this works. A quick-and-dirty Java version would be something like:

The octree :

public class Octree
{
  protected OctreeCell root;

  // world-space size of the volume covered by the octree
  public float sx;
  public float sy;
  public float sz;

  // world-space origin of the volume
  public float x;
  public float y;
  public float z;

  // subdivision depth: 9 levels give 2^9 = 512 leaf cells per axis
  public int depth;

  public Octree()
  {
    root = new OctreeCell();

    x = 0;
    y = 0;
    z = 0;

    sx = 1024;
    sy = 1024;
    sz = 128;

    depth = 9;
  }

  // x, y, z are world coordinates; normalize them to [0,1) for the cells
  // (the ray caster below queries the root with normalized coordinates)
  public void set(float x,float y,float z,byte r,byte g,byte b)
  {
    root.set(depth,(x-this.x)/sx,(y-this.y)/sy,(z-this.z)/sz,r,g,b);
  }

  // propagate averaged colors from the leaves up to the root
  public void validate()
  {
    root.validate(depth);
  }

  public OctreeCell getRoot()
  {
    return root;
  }
}

The octree cell :


public class OctreeCell
{
  // up to 8 children, indexed by octant along x, y and z
  protected OctreeCell childs[][][];
  // color of this cell (averaged from the children by validate)
  protected byte       red,green,blue;

  public OctreeCell()
  {
    childs = new OctreeCell[2][2][2];
  }

  // look up the cell containing a point; x, y, z normalized to [0,1)
  public OctreeCell get(int iteration,float x,float y,float z)
  {
    if (iteration == 0)
    {
      return this;
    }

    // pick the octant containing the point
    int ix = 0; if (x>=0.5) { ix = 1; }
    int iy = 0; if (y>=0.5) { iy = 1; }
    int iz = 0; if (z>=0.5) { iz = 1; }

    if (childs[ix][iy][iz] == null) { return null; }

    // recurse with coordinates remapped to the child's [0,1) range
    return childs[ix][iy][iz].get(iteration-1, x*2-ix, y*2-iy, z*2-iz);
  }

  // store a colored point (x, y, z normalized to [0,1)), creating cells on the way down
  public void set(int iteration,float x,float y,float z,byte r,byte g,byte b)
  {
    if (iteration == 0)
    {
      red = r;
      green = g;
      blue = b;

      return;
    }

    int ix = 0; if (x>=0.5) { ix =1; }
    int iy = 0; if (y>=0.5) { iy =1; }
    int iz = 0; if (z>=0.5) { iz =1; }

    if (childs[ix][iy][iz] == null) { childs[ix][iy][iz] = new OctreeCell(); }

    childs[ix][iy][iz].set(iteration-1, x*2-ix, y*2-iy, z*2-iz,r,g,b);
  }

  // pack the color as 0x00BBGGRR
  public int getColor()
  {
    return (red&0xFF)+((green&0xFF)<<8)+((blue&0xFF)<<16);
  }

  // average the children's colors into this cell (the "3D mipmap" step)
  public void validate(int iteration)
  {
    if (iteration == 0) { return; }

    int compt = 0;
    int tr = 0;
    int tg = 0;
    int tb = 0;

    for(int i=0;i<2;i++)
    {
      for(int j=0;j<2;j++)
      {
        for(int k=0;k<2;k++)
        {
          OctreeCell c = childs[i][j][k];

          if(c!=null)
          {
            c.validate(iteration-1);

            compt++;
            tr += ((int)c.red)&0xFF;
            tg += ((int)c.green)&0xFF;
            tb += ((int)c.blue)&0xFF;
          }
        }
      }
    }

    // guard against cells with no children to avoid a division by zero
    if (compt > 0)
    {
      red = (byte)((tr/compt)&0xFF);
      green = (byte)((tg/compt)&0xFF);
      blue = (byte)((tb/compt)&0xFF);
    }
  }
}

Ray casting :


// Assumes an `octree` field plus tuning constants MIN_DIST, MAX_DIST and
// SCREEN_DIST (near distance, far distance, projection distance);
// INV_POW2[n] is a lookup table returning floor(log2(n)), used to drop
// octree levels for distant, coarser samples.
protected int castRay(float cameraX,float cameraY,float cameraZ,
                      float viewX,float viewY,float viewZ)
{
  float dist = MIN_DIST;

  while(dist<MAX_DIST)
  {
    // current sample point along the ray
    float cx = cameraX+viewX*dist;
    float cy = cameraY+viewY*dist;
    float cz = cameraZ+viewZ*dist;

    // step size grows with distance: far samples can be coarser
    int pixelSize = (int)(dist/SCREEN_DIST);
    if(pixelSize<1) { pixelSize = 1; }
    else if(pixelSize>16) { pixelSize = 16; }

    // move into the octree's local coordinate system
    cx -= octree.x;
    cy -= octree.y;
    cz -= octree.z;

    if ((cx>=0)&&(cx<octree.sx)&&
        (cy>=0)&&(cy<octree.sy)&&
        (cz>=0)&&(cz<octree.sz))
    {
      // stop the descent earlier for coarse (distant) samples
      int ite = octree.depth-INV_POW2[pixelSize];

      OctreeCell c = octree.getRoot().get(ite, cx/octree.sx, cy/octree.sy, cz/octree.sz);

      if (c != null)
      {
        return c.getColor();
      }
    }

    dist += pixelSize*0.25f;
  }

  // the ray hit nothing: return black
  return 0;
}

Between the octree search and the stepped ray marching, I don’t really like this approach much.

What I also don’t like is that we simply replace a bunch of triangles with a bunch of voxels. When zooming in, the same problem will appear. I would prefer ray tracing based on NURBS with procedural/fractal textures, but that is not realistic… an artist will never deal with such abstract concepts. Though I would prefer NURBS with displacement maps (but the calculations are not that simple).