Trying to store vertex data of all objects in a scene

Hi everyone.
I’m currently working on a little project of mine which is similar to ray tracing in that I want to calculate ray intersections with objects in the scene, but my light rays do crazy things (although that’s not really relevant to what I’m stuck on).

I’m quite new to JOGL and have only been using OpenGL for a few months now, so I may have just missed something.

What I would like to do is fill a scene with hundreds / thousands of spheres (at the moment I’ve been using glutSolidSphere), and then store the vertices for all polygons so that I can iterate through all the vertices in the scene and test for intersections with a given ray.
What is the best way to go about this?

Thanks for the help.

You might want to try Google for this. Optimizing this problem is pretty old (1970s) and there is a lot of literature and tutorials on it.

Also, shaders are really the way to go for this kind of computation.

I guess I shouldn’t have mentioned the words ray tracing :wink:
I already have a lot of literature on ray tracing algorithms.

What I am interested in is the various ways of storing the vertex data of objects I have in a scene (e.g. in this case hundreds / thousands of spheres) so that I can pull individual vertices for polygons when required.

Yes I know, and that’s what a lot of that literature talks about. You can do it brute force and it will be slow as hell. Or you can use a very specific solution that works with your particular vertex access patterns.

But these things are very particular to how and when you access the data, so I can’t really even suggest a direction to go in. Perhaps bounding volume trees, octrees, or some other spatial structure is needed. Perhaps not.
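For reference, the building block shared by most of those spatial structures (BVHs, octrees, grids) is a ray vs. axis-aligned bounding box test. Here is a minimal sketch of the standard slab method; the class and parameter names are just illustrative:

```java
// Slab-method ray/AABB intersection: clip the ray against the three pairs
// of axis-aligned planes and check the resulting interval is non-empty.
class Aabb {
    final double[] min, max; // box corners

    Aabb(double[] min, double[] max) { this.min = min; this.max = max; }

    // True if the ray (origin o, direction d) hits the box.
    boolean intersects(double[] o, double[] d) {
        double tMin = Double.NEGATIVE_INFINITY, tMax = Double.POSITIVE_INFINITY;
        for (int i = 0; i < 3; i++) {
            double inv = 1.0 / d[i]; // IEEE infinities handle axis-parallel rays
            double t0 = (min[i] - o[i]) * inv;
            double t1 = (max[i] - o[i]) * inv;
            if (t0 > t1) { double tmp = t0; t0 = t1; t1 = tmp; }
            tMin = Math.max(tMin, t0);
            tMax = Math.min(tMax, t1);
            if (tMin > tMax) return false; // interval is empty: miss
        }
        return tMax >= 0; // reject boxes entirely behind the ray
    }
}
```

In a BVH or octree you run this test on a node’s box first and only descend to the children (or the contained spheres) when it passes, which is what turns the brute-force scan into a logarithmic-ish traversal.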

Following up on delt0r’s comments: what are the dimensions of a query, projected 2D or 3D? (As well as: what’s the access pattern?)

Additionally, it’s often much faster to use the mathematical model of a sphere or cylinder, etc. Storing all the vertex data for 1000s of spheres is a lot of waste. Also consider using instancing for geometry.
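To put rough numbers on that waste (these are illustrative figures, not measurements): a tessellated sphere stores hundreds of vertices, while the analytic form needs only a centre and a radius.

```java
// Back-of-envelope storage comparison between a tessellated sphere
// (e.g. glutSolidSphere) and its analytic description.
class SphereStorage {
    // Roughly slices * stacks quads, 4 vertices each, position + normal
    // (6 floats per vertex). Exact counts vary by tessellator.
    static long tessellatedFloats(int slices, int stacks) {
        return (long) slices * stacks * 4 * 6;
    }

    // Analytic sphere: centre (x, y, z) plus radius.
    static long analyticFloats() {
        return 4;
    }
}
```

At 32 slices and 32 stacks that is tens of thousands of floats per sphere versus four, so for thousands of spheres the analytic representation is the only sane choice.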

Thanks for the replies, much appreciated.

I’ll be using bounding volume trees, as the warping effect I’m trying to model only exists within a certain distance of (0,0,0).

It will be projected 2D, and the access pattern will at first be static, but eventually I want to try to animate my result (e.g. rotating around the origin, but not in real time), so I assume for that I will use stream?

Thanks for this information. As all the objects in my scene are spheres, I will use ray-sphere intersections instead of ray-plane intersections; this is much more efficient, and I wish I had realised this earlier.
Regarding instancing, do you mean the use of display lists?

No need for instancing here: make a float4 texture and store X, Y, Z, RAD. Read it in your shader and do your magic. Use more textures for information like color, shininess, etc.
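A sketch of how that packing could look on the Java side, one RGBA texel per sphere (R=X, G=Y, B=Z, A=radius); the class name is made up, and the upload itself would go through the usual `glTexImage2D` call with a `GL_RGBA32F` internal format:

```java
import java.nio.FloatBuffer;

// Pack sphere data into a buffer suitable for an RGBA float texture,
// one texel per sphere. The shader then fetches texel i to get sphere i.
class SphereTexture {
    // Each row of `spheres` is {x, y, z, rad}.
    static FloatBuffer pack(float[][] spheres) {
        FloatBuffer buf = FloatBuffer.allocate(spheres.length * 4);
        for (float[] s : spheres) buf.put(s, 0, 4);
        buf.flip(); // rewind for reading / upload
        return buf;
    }
}
```

The nice property is that updating sphere positions for animation is just re-filling and re-uploading this one small buffer.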

Right, if he’s just using spheres, but I figured it would be useful to mention the idea in case he starts wanting to use 100s of dragon meshes.

Instancing refers to the idea that the local geometry will be the same, and there are multiple “instances” of the shape within the scene at different transforms. In your example, each sphere would share the same geometry but have its own transform.
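The core of the idea can be sketched in a few lines (illustrative names; with display lists you would call the shared list once per instance inside a glPushMatrix/glTranslatef pair, while modern GL uses glDrawArraysInstanced):

```java
// Instancing as a data layout: one shared copy of the geometry,
// plus a small per-instance record of where and how big each copy is.
class InstancedSpheres {
    final float[] sharedVertices;   // tessellated unit sphere, built once
    final float[][] instanceXforms; // per-instance {x, y, z, scale}

    InstancedSpheres(float[] sharedVertices, float[][] instanceXforms) {
        this.sharedVertices = sharedVertices;
        this.instanceXforms = instanceXforms;
    }

    // One copy of the geometry plus 4 floats per instance, instead of
    // instance-count full copies of the vertex data.
    long floatsStored() {
        return sharedVertices.length + 4L * instanceXforms.length;
    }
}
```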

Display lists can be used to achieve this effect when rendering, but instancing is a more general technique than that.