[quote=“t_larkworthy,post:60,topic:32838”]
Most Likely! I know you can code so I'm not worried, but I haven't got the 'mental model' straight in my head yet.
I guess you’re in the UK? If so we can talk by phone? If we can get this right now everything else should follow easily (getting other contributors to add content &c)
hehe,
Yeah I’m UK. God save The Queen etc.
Yeah, the code is pretty messy. That stems from several factors.
JOODE is tied to its collision model: you add joode.Geoms to Bodies. Obviously JBullet classes don't implement Geom, nor would they suit doing so (because JBullet doesn't have a hierarchical Space model).
So JBulletBodyInterface wraps a Body and allows CollisionObjects to be associated (and at the same time instantiates various listeners to track body changes that occur after a world step). I might just subclass Body to create a JBullet body, which will probably be simpler. (…yeah, I will)
JBullet's CollisionWorld doesn't have an event model, so it is adapted in with (I can't remember the exact name) JBulletCollisionWorldAdapter or some such. I am using the adapter pattern heavily at the moment until I can integrate deeper.
So everything JBullet related is adapted to try and make it behave like JOODE does. Which results in huge names and messy code. Further complications arise because high level JOODE functionality is event driven which is fine when you understand the events but terrible to read.
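For illustration, the adapter idea described above might look roughly like this in Java. These class names are stand-ins, not the actual JOODE/JBullet API: the point is just that an event-less world gets wrapped so listeners can be fired JOODE-style after each step.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical stand-in for a JOODE-style step/event listener.
interface StepListener {
    void onStep(double dt);
}

// Hypothetical stand-in for a JBullet-style world with no event model.
class CollisionWorld {
    void stepSimulation(double dt) { /* ... integrate physics ... */ }
}

// Adapter: wraps the event-less world and re-exposes stepping through
// listeners, mirroring the "JBulletCollisionWorldAdapter" idea above.
class CollisionWorldAdapter {
    private final CollisionWorld world;
    private final List<StepListener> listeners = new ArrayList<>();

    CollisionWorldAdapter(CollisionWorld world) { this.world = world; }

    void addStepListener(StepListener l) { listeners.add(l); }

    void step(double dt) {
        world.stepSimulation(dt);
        for (StepListener l : listeners) l.onStep(dt); // fire JOODE-style events
    }
}
```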
Anyway, yeah. We can exchange phone numbers. I only work on this project on the weekend though.
Tom
K, I’ll PM you later in the week.
Sorry, I should have said this a couple of days ago: I have to get a piece of work together by Wednesday, so I have had to work this weekend. I'll do what I was going to do this Sunday on Wednesday, after I have done that. Then perhaps I will be able to work again next Sunday (but perhaps not, because it's all related to a tight deadline on March the 1st — though that is my target).
Tom
Progress update:
I have been converting my existing OO raytracer engine to a more procedural and potentially faster engine. It has been slower than I anticipated, as I am having to make sure that in the conversion I am not introducing errors and inefficiencies…
No “show stoppers” have yet emerged so I am still plugging away
Great moogie, I am interested to see where you get with this.
[quote]It has been slower than I anticipated, as I am having to make sure that in the conversion I am not introducing errors and inefficiencies
[/quote]
It's better to do something slowly and get it done than to produce broken code in a few hours. Right… back to work.
Tom
True, however doing it slow and steady means that I do not yet have a result to show for my effort… and that makes it hard to keep motivated.
The RTRT implemented in Java will not be able to ray trace all pixels plus secondary rays and achieve a decent frame rate on average hardware. This is because we cannot use SIMD instructions and cannot optimise for a particular platform (nor should we want to; we want as many people as possible to play).
However, all is not lost, as I have a couple of "cheats" to get a better frame rate with little sacrifice to the image.
-
As a first pass, only render the corner pixels of 4x4 blocks. If the rays of all four corners hit the same primitive, we assume that the rest of the pixels of the block hit the same object. Using this assumption we can interpolate the points of intersection, saving much computation! If the corner pixels do not hit the same object, then ray trace the remaining pixels of the block as normal. This effectively reduces the number of pixels to ray trace by up to 16 times. Other attributes, such as texture coordinates, whether in shadow, surface normals, etc. can also be interpolated.
-
Using a Bounding Volume Hierarchy will accelerate ray/object intersections by only testing against a subset of the objects. It also gives us the ability to start "looking" for the intersecting object from the current pixel's ray's last intersected object, which again reduces the number of objects to test against.
-
(to be tested) In the case that the four corner pixels of a 4x4 block do not match, for the remaining pixels only test for intersection with the objects that the corner pixels intersect with.
With all these optimisations I am hopeful decent frame rates can be achieved.
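The 4x4 corner trick above can be sketched as below. This is illustrative, not the engine's code: `trace` is a hypothetical per-pixel trace returning the id of the primitive hit, and a real engine would interpolate hit points, normals, and UVs rather than just reusing the id.

```java
import java.util.function.IntBinaryOperator;

class BlockRenderer {
    // trace.applyAsInt(x, y) -> id of primitive hit by the pixel's ray
    // (hypothetical; -1 could mean "background").
    static int[][] renderBlocks(int w, int h, IntBinaryOperator trace) {
        int[][] hit = new int[h][w];
        for (int by = 0; by + 3 < h; by += 4) {
            for (int bx = 0; bx + 3 < w; bx += 4) {
                // Trace only the four corner pixels of the 4x4 block.
                int c0 = trace.applyAsInt(bx, by);
                int c1 = trace.applyAsInt(bx + 3, by);
                int c2 = trace.applyAsInt(bx, by + 3);
                int c3 = trace.applyAsInt(bx + 3, by + 3);
                boolean same = c0 == c1 && c1 == c2 && c2 == c3;
                for (int y = by; y < by + 4; y++)
                    for (int x = bx; x < bx + 4; x++)
                        // All corners agree: assume the whole block hits the
                        // same object (interpolate in a real engine).
                        // Otherwise fall back to tracing every pixel.
                        hit[y][x] = same ? c0 : trace.applyAsInt(x, y);
            }
        }
        return hit;
    }
}
```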
Sounds good, and given that the environment is fully known, would it be worth focusing ray scans around the locations of known objects? Sort of “point more rays at known objects”? Hmmm… Could get tricky when an object fills the view though…
Not sure I really follow your idea… if we knew where the objects are, then only the pixels whose rays hit these objects would need to be ray traced… Unfortunately we do not know this without testing each pixel's ray, or narrowing down the number of objects to test by assumptions and heuristics.
If the space is not cluttered, then you can work out where the AABBs are roughly located in screen coordinates, and only raytrace in those regions. E.g. work out the line between each of the 8 AABB corners and the camera's pinhole, determine where each corner intersects the screen plane (eight 2D coordinates), and ray trace only inside the minimum rectangle necessary to enclose those 8 2D points.
It would save loads of time when other ships are not occluding much of the screen, but will not save much when you are coming in to dock/colliding with another ship, or viewing your ship in third person. On the other hand, by interpolating your 4x4 box, near polygons are optimised in a different way, so who can guess the performance of the combined approach?
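A minimal sketch of the projection idea, under an assumed camera model (pinhole at the origin looking down +z with focal length f); a real engine would use its own camera transform and clip corners behind the camera properly:

```java
class AabbScreenBounds {
    // Project the 8 corners of an AABB (min/max as {x,y,z}) and return the
    // enclosing screen rectangle {x0, y0, x1, y1}. Rays only need to be shot
    // inside this rectangle for this object.
    static int[] screenBounds(double[] min, double[] max, double f, int w, int h) {
        int x0 = w, y0 = h, x1 = -1, y1 = -1;
        for (int i = 0; i < 8; i++) {
            double cx = ((i & 1) == 0) ? min[0] : max[0];
            double cy = ((i & 2) == 0) ? min[1] : max[1];
            double cz = ((i & 4) == 0) ? min[2] : max[2];
            // Corner behind the camera: give up and trace the whole screen.
            if (cz <= 0) return new int[] {0, 0, w - 1, h - 1};
            int sx = (int) (w / 2.0 + f * cx / cz); // perspective divide
            int sy = (int) (h / 2.0 + f * cy / cz);
            x0 = Math.min(x0, sx); y0 = Math.min(y0, sy);
            x1 = Math.max(x1, sx); y1 = Math.max(y1, sy);
        }
        // Clamp the rectangle to the screen.
        return new int[] {Math.max(x0, 0), Math.max(y0, 0),
                          Math.min(x1, w - 1), Math.min(y1, h - 1)};
    }
}
```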
What's your frame rate at the moment, moogie?
(back to work)
Ah, do you mean view frustum culling? I have gone down that path in an earlier ray tracer; in fact I went one further and performed view frustum subdivision based on whether any objects were contained in (or overlapped) a frustum.
This worked quite well if you do not have an object acceleration structure such as a BVH… Due to the nature of a BVH, you cannot just "iterate" over the objects in the scene and test whether they are in the view frustum. A BVH actually performs a similar function to view frustum culling, in that as you traverse down the hierarchy you continue to narrow down to the objects which are in view. And since the objects are grouped together, you are effectively performing the frustum culling test on many objects at once instead of individually.
I suppose performing view frustum culling on the BVH to determine the optimum starting node could help, but I am not sure any benefit would be achieved due to the heuristic I will be implementing: "the start node into the BVH for a given pixel is assumed to be the same as the neighbouring pixel's start node". This is based on the fact that pixels in proximity to each other are likely to be affected by similar objects, which cuts down the number of BVH traversals needed.
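The neighbour-start heuristic could be sketched roughly as below, with hypothetical stand-ins for the real tests: `hitsCached` checks the current ray against the primitive the neighbouring pixel hit, and `fullTraverse` does the complete BVH walk. (A real tracer would still need to guard against a closer object hiding the cached one; this only shows the cheap early-out.)

```java
import java.util.function.IntPredicate;
import java.util.function.IntSupplier;

class NeighbourStart {
    // neighbourHit: primitive id hit by the neighbouring pixel (-1 = none).
    // hitsCached.test(id): does the current ray hit primitive id? (stand-in)
    // fullTraverse.getAsInt(): full BVH traversal, returns hit id. (stand-in)
    static int tracePixel(int neighbourHit,
                          IntPredicate hitsCached,
                          IntSupplier fullTraverse) {
        if (neighbourHit >= 0 && hitsCached.test(neighbourHit))
            return neighbourHit;            // cheap early-out: same object as neighbour
        return fullTraverse.getAsInt();     // fall back to the full BVH walk
    }
}
```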
So far the frame rate is 0, but that is because I am still converting. There were over 8000 lines of code in my other ray tracer…
But here is an example raytracer which I was hoping to make for one of the 4k competitions… this one implemented the frustum subdivision: http://javaunlimited.net/hosted/moogie/jrtrt_specular_fixed.jar
No, not frustum culling (although that is clearly necessary). I was trying to flesh out what Simon meant by:-
[quote]Sounds good, and given that the environment is fully known, would it be worth focusing ray scans around the locations of known objects?
[/quote]
In space what you mostly view is background. You only need to shoot rays for the areas of the screen where the rays may hit something. I dunno if you take this into account or shoot a ray for every pixel.
[quote]Not sure I really follow your idea… if we knew where the objects are, then only the pixels whose rays hit these objects would need to be ray traced… Unfortunately we do not know this without testing each pixel's ray.
[/quote]
We do know the objects' AABBs. So you reverse-shoot rays from the corners of the AABBs in order to find approximate bounds of the objects in screen coordinates, and shoot rays only in those regions.
Ah, ok
I am currently going to use Bounding Spheres instead of Axis Aligned Bounding Boxes. The simplicity of the sphere shape means that transforming the ray into a local axis can be avoided per bounding volume… hopefully speeding up the ray/bounding-volume intersection test.
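The point about avoiding a local-frame transform: a ray/sphere test is just a quadratic in world space. A sketch of the standard test, assuming the ray direction is normalised:

```java
class RaySphere {
    // Returns the distance t along the ray o + t*d to the nearest
    // intersection with the sphere (centre c, radius r), or +infinity on a
    // miss. Assumes d is a unit vector, so no transform into the sphere's
    // local frame is needed.
    static double raySphere(double[] o, double[] d, double[] c, double r) {
        double ox = o[0] - c[0], oy = o[1] - c[1], oz = o[2] - c[2];
        double b = ox * d[0] + oy * d[1] + oz * d[2];      // d . (o - c)
        double disc = b * b - (ox * ox + oy * oy + oz * oz - r * r);
        if (disc < 0) return Double.POSITIVE_INFINITY;     // ray misses the sphere
        double t = -b - Math.sqrt(disc);                   // nearer of the two roots
        return t >= 0 ? t : Double.POSITIVE_INFINITY;      // sphere behind the ray origin
    }
}
```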
I wonder whether the construction and use of this ‘pixel raytrace’ boolean map from reverse rays from corners of bounding volume would give any net gain.
Update: JStackalloc dependency removed. I have not really replaced like functionality with like, so most (but not all) of the previous stack calls are falling back to garbage collection, which I definitely don't want to happen. But the system now runs without having to go via an annoying ant script. I'll finish the job off on the weekend at some point, which will give me time to think of a clever way of doing it :).
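One common way to avoid the GC fallback mentioned above is a simple free-list object pool, so hot loops reuse instances instead of producing garbage. This is just a sketch of that idea, not JOODE's actual replacement for JStackalloc:

```java
import java.util.ArrayDeque;

class Vec3Pool {
    static final class Vec3 { double x, y, z; }

    private final ArrayDeque<Vec3> free = new ArrayDeque<>();

    // Hand out a pooled vector, allocating only when the pool is empty.
    Vec3 acquire() {
        Vec3 v = free.poll();
        return v != null ? v : new Vec3();
    }

    // Return a vector to the pool, reset so stale state can't leak.
    void release(Vec3 v) {
        v.x = v.y = v.z = 0;
        free.push(v);
    }
}
```

Unlike true stack allocation this needs explicit release calls, which is the usual trade-off of pooling in Java.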
If sending rays out from pixels that ultimately hit the background costs next to nothing to compute, then there will be little gain in predicting a large set of rays that will miss.
Is this project still going? I'll be continuing my Sci-Fi RTS, so here are some models compatible with the space trade.
A fixed version of the space ship with a proper texture map.
http://users.on.net/~bobjob/stingrayb03.zip
and space station screenshot (unfinished).
Hey, what about this project? Is it still running? If yes, how can I participate? Have you created a webpage? Mailing list? IRC channel? Or even a public repository? I've been looking through the forum and I haven't found anything, but if it is still running I would like to participate. I have worked as a full-time Java developer for some years now, so I think I can deal with this. Hope you answer soon, because I'm very motivated to work on something like this.
Best regards, Makz.
Sorry makz, I think it's dead.
Maybe you could start a new thread to request a community project.