Community Space Trader technical detail

I know next to nothing about Physics Engines, hence the following question: What are the advantages of using a Physics Engine in a space game?

Naively I’d assume that in a space game everything’s so far apart and so fast moving that you could just use bounding spheres for collisions. What sort of cool stuff am I missing? Is there anything the Physics Engine allows you to do that could become the ‘unique selling point’ of this game?

Cheers,
Simon

[quote=“dishmoth,post:21,topic:32838”]
You’re in a 10m fighter attacking a 1km long transport ship - you could be inside the transport’s bounding sphere before you’re even in effective laser range!
[/quote]

Duh, the transport ship has energy shields of course, and those happen to be its bounding sphere. ;D

Ah… Star Destroyers ;D

[quote=“ShannonSmith,post:6,topic:32838”]
Too late to be first (that would be either GeoCraft or NASA’s Verve/Viz, I guess), but “one of the first”, as you say, is still available. hehe :) Sounds like jPCT is the choice already though, which should also be fun.

[quote]What are the advantages of using a Physics Engine in a space game?
[/quote]
If you think of a fighter like an X-wing, then unless you support non-convex polygon intersections, the open space between the wings becomes occupied and all lasers etc. hit it. As a player you would notice your lasers hitting empty space on near misses against combat opponents.
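To make that concrete (just a sketch, not JOODE code): approximate the fighter as a handful of convex parts - say, plain boxes in the ship's local frame - and test each part instead of one fused bounding box. A shot passing through the gap between the wings then correctly counts as a miss.

[code]
/** Sketch: a non-convex ship approximated as several convex parts (plain boxes here).
 *  Each part is {minX, minY, minZ, maxX, maxY, maxZ} in the ship's local frame. */
class CompoundHull {
    private final double[][] parts;

    CompoundHull(double[][] parts) { this.parts = parts; }

    /** True if the point (in ship-local coordinates) lies inside any convex part. */
    boolean contains(double x, double y, double z) {
        for (double[] p : parts) {
            if (x >= p[0] && y >= p[1] && z >= p[2]
                    && x <= p[3] && y <= p[4] && z <= p[5]) {
                return true;
            }
        }
        return false; // e.g. a laser bolt in the gap between the wings hits nothing
    }
}
[/code]

With a single bounding box around the whole fighter, that same point in the gap would register as a hit.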

[quote]Is there anything the Physics Engine allows you to do that could become the ‘unique selling point’ of this game?
[/quote]
There would be no unique selling point in supporting non-convex polygons, because every space sim supports them. (Actually I think the original Elite ships were all convex, including the space stations. Even so, Elite did proper convex polygon checks, nothing as crude as bounding boxes.)

Still, physics engines are not really about collision checks; that's an annoying feature you need in order to get them to work. It's more about forces and stuff, but that bit is very straightforward for space sims (i.e. the complicated situations of calculating non-penetration and friction forces when 10 bodies are all touching don't really occur).

We could add joints to our ships, but that's an unnecessary complication. We will have to have turrets, but I think that's going to be the only actuated component of a ship, and it does not need to be controlled at the dynamics level.

[quote]Naively I’d assume that in a space game everything is so far apart and so fast moving that you could just use bounding spheres for collisions.
[/quote]
In implementation you do exactly that. You first wrap all your polygons in loose-fitting spheres, so that you can quickly find out if two ships are definitely not touching (if they might be touching, go and do the expensive poly-poly collision check).
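Something like this for the cheap first pass (plain arrays, no engine types - just to show how little it costs):

[code]
/** Broad-phase sketch: reject pairs whose bounding spheres don't overlap
 *  before the expensive poly-poly test is even considered. */
class BroadPhase {
    static boolean spheresOverlap(double[] centreA, double radiusA,
                                  double[] centreB, double radiusB) {
        double dx = centreA[0] - centreB[0];
        double dy = centreA[1] - centreB[1];
        double dz = centreA[2] - centreB[2];
        double r  = radiusA + radiusB;
        // compare squared distances so there is no square root per pair
        return dx * dx + dy * dy + dz * dz <= r * r;
    }
}

// only when the cheap test says "maybe touching" do we run the narrow phase:
// if (BroadPhase.spheresOverlap(a.centre, a.radius, b.centre, b.radius)) polyPolyCheck(a, b);
[/code]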

Agreed, lasers need accurate collision detection. But that’s a simple special case (raycasting), not by itself motivation for a physics engine.
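(As I understand it, that really is just a ray-against-bounding-sphere test - something like the sketch below, not tied to any particular engine, with the laser direction assumed normalised.)

[code]
/** Laser hit test sketch: a ray against a target's bounding sphere (engine-agnostic). */
class LaserTest {
    static boolean rayHitsSphere(double[] origin, double[] dir,
                                 double[] centre, double radius, double maxRange) {
        double ox = centre[0] - origin[0];
        double oy = centre[1] - origin[1];
        double oz = centre[2] - origin[2];
        double t  = ox * dir[0] + oy * dir[1] + oz * dir[2]; // closest approach along the ray
        if (t < 0 || t > maxRange) return false;             // behind the gun or (roughly) out of range
        double dx = ox - t * dir[0];
        double dy = oy - t * dir[1];
        double dz = oz - t * dir[2];
        return dx * dx + dy * dy + dz * dz <= radius * radius;
    }
}
[/code]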

Hmm, I wonder if that's true… I'd imagine that collisions with the player's ship would be modelled as sphere-on-sphere, since the player has no real sense of how big their ship is. The only exception to that would be during docking with a space station, but that seems like a simple special case. Lasers would be handled as above. Missiles (targeting enemies, not the player) would be trickier, but there's a chance you could get away with just testing the tip of the missile against an approximating sphere for the enemy, or at worst the tip of the missile against the convex hull of the enemy. I can't think of any other kinds of collision in Elite, but then it's many years since I played it.

Anyway, sorry, that was a bit of a digression. ::) This project isn't required to use 1980s technology, so if the code's there for non-convex poly-on-poly it might as well be used.

What about explosions? Does the physics engine lend itself to making objects break apart in interesting ways?

Simon

[quote]What about explosions? Does the physics engine lend itself to making objects break apart in interesting ways?
[/quote]
Well, only if there is a predetermined substructure to break down into (e.g. configurable ships = deformable damage + less 3D graphics work ::)). Of course Havok can generate the substructure dynamically, but they can afford to spend loads of time getting people to build materials libraries in order to make polys break in a realistic fashion (though in my opinion most of the deformable materials in the tech demo look pretty sh*t).
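By a predetermined substructure I mean something along these lines (illustration only - the Fragment class and the radial-kick maths are made up, none of this is engine API): the artist authors the ship as a hull plus a list of fragment meshes, and on destruction each fragment gets the ship's velocity plus a kick directed away from the centre.

[code]
import java.util.ArrayList;
import java.util.List;

/** Sketch only: fragments are pre-authored, not generated at runtime.
 *  Vectors are plain double[3]; no engine types are assumed. */
class DebrisSpawner {
    /** One pre-authored fragment: its mesh name and its centre in the ship's local frame. */
    static class Fragment {
        final String meshName;
        final double[] localCentre;
        Fragment(String meshName, double[] localCentre) {
            this.meshName = meshName;
            this.localCentre = localCentre;
        }
    }

    /** Returns one velocity per fragment: the ship's velocity plus a radial kick. */
    static List<double[]> explode(List<Fragment> fragments, double[] shipVelocity, double kick) {
        List<double[]> velocities = new ArrayList<double[]>();
        for (Fragment f : fragments) {
            double[] c = f.localCentre;
            double len = Math.sqrt(c[0] * c[0] + c[1] * c[1] + c[2] * c[2]);
            double s = (len > 1e-9) ? kick / len : 0.0;   // normalise the radial direction
            velocities.add(new double[] {
                shipVelocity[0] + c[0] * s,
                shipVelocity[1] + c[1] * s,
                shipVelocity[2] + c[2] * s });
        }
        return velocities;
    }
}
[/code]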

EDIT//

[quote]Agreed, lasers need accurate collision detection. But that’s a simple special case (raycasting), not by itself motivation for a physics engine.
[/quote]
Myeah, I would agree with you there if and only if inter-ship collisions only ever happened in first person, and only between the player and the computer. Missile collisions could be implemented as (inaccurate) pre-impact explosions.

I don’t think we wanna go down that route though because you can’t fly around DS9.

Well I am working on getting JBullet to play with JOODE. I’ve put in my hours this week. I hope to get convex-convex tested and done by the end of Sunday next week. Oh wait, it might take a little longer because I’ll have to write a new renderer binding. As the new JBullet stuff won’t play with Xith anyway, I might as well write this one for JPCT.

I am doing this inside JOODE's svn repository, in a separate subdirectory. (I have not committed anything yet so you can't see it; there is no point until it is working, unless anyone states otherwise.)

Err, when we are ready to actually start the core code, might I suggest we make a convention that everybody sets up a root folder (named anything) and has separate svn trunk checkouts, with specific names, of all the open-source projects we are likely to integrate heavily with. That way the main build.xml of the main project can operate knowing the other source folders are in sibling directories, i.e.
root folder SPACE_GAME_DEV (specific name irrelevant) has the following subdirectories:
1. directory SPACE_GAME, which is the svn trunk checkout of our project
2. directory JOODE, which is the svn trunk checkout of JOODE
3. directory openMali, which is the svn checkout of openMali
4. directory JPCT, which is the svn checkout of JPCT

That way we can keep in sync with the other projects and even contribute bug fixes relatively easily. It does mean all devs will have to set up their root directory properly (but we do that anyway, surely?) as they won't be able to check out the root directory itself. (PS: this is just a workaround for not being able to nest svn repositories - correct me if there is a better way. This workaround will make branching the repositories difficult.)

Tom

[quote=“t_larkworthy,post:29,topic:32838”]
Cool, but don’t lose sight of the objective, which is ease of use - a big problem is assets: If we encourage devs to use textured 3DS models we’ll get massive asset issues: ‘I’ve done a 2000 poly fully textured deathstar!’ means: ‘I’ve just raised the load time by 10 mins & made the renderer drop to 1 fps’. In software mode JPCT can’t really go over 2000 textured polys total. Plus we’d need to manage the assets (versioning, server space &c). This is why I favour a procedural approach (eg, that JPCT test applet had no art assets).
A 2000 poly/frame world is very different from a 40000 poly/frame world - the physics and renderer need to be tight.

OK, loud and clear. The binding for JPCT that I will add to JOODE will not be the same as the one we use in this project anyway.
How will people make art? Will they use Blender -> 3DS just for designing the meshes?

Tom

Good question! What format’s best for you on the physics side? How can we keep the overall poly count down to playable levels?

Well, it's fairly standard practice to use a lower polygon count for the collision mesh, to help prevent the sporadic collision checks becoming a bottleneck. I have no idea whether they will be or not; I suspect the bottleneck will be rendering rather than collision detection. If collision detection is an issue we can stop AI ships from colliding with anything other than missiles, with springs or something. Then really the only true collider becomes the player, and the missiles can be simplified to a point or something (i.e. the physics can be tuned pretty easily).
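One way to express that sort of tuning is collision groups/masks, which most physics engines support in some form (the constants and the method below are purely illustrative, not JOODE or JBullet API): each body gets a category bit plus a mask of what it is allowed to hit, and pairs whose masks disagree never reach the narrow phase.

[code]
/** Sketch of collision filtering with category/mask bits (names are made up). */
final class CollisionFilter {
    static final int PLAYER  = 1;      // 0001
    static final int AI_SHIP = 1 << 1; // 0010
    static final int MISSILE = 1 << 2; // 0100
    static final int STATION = 1 << 3; // 1000

    // what each category is allowed to hit
    static final int PLAYER_MASK  = AI_SHIP | MISSILE | STATION;
    static final int AI_SHIP_MASK = PLAYER | MISSILE;   // AI ships ignore each other and stations
    static final int MISSILE_MASK = PLAYER | AI_SHIP;

    /** Only pairs that agree to collide in both directions reach the narrow phase. */
    static boolean shouldCollide(int categoryA, int maskA, int categoryB, int maskB) {
        return (categoryA & maskB) != 0 && (categoryB & maskA) != 0;
    }
}
[/code]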

I think our major time saver will be rendering far-away ships as very low-poly models, eventually simplifying them to a single triangle of roughly the right color when miles away from the camera.

So now the question is whether the artists have to be responsible for producing different-resolution models or whether we try to do that automatically (option 1 is hard on the artists, option 2 is hard on the coders).
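Picking the level of detail would be something like this (rough sketch; the distance thresholds are invented and would have to be tuned against the renderer):

[code]
/** Rough LOD sketch; thresholds are placeholders. */
class LodPicker {
    enum Level { FULL_MESH, LOW_POLY, SINGLE_TRIANGLE, NOT_DRAWN }

    static Level pick(double[] shipPos, double[] cameraPos) {
        double dx = shipPos[0] - cameraPos[0];
        double dy = shipPos[1] - cameraPos[1];
        double dz = shipPos[2] - cameraPos[2];
        double d2 = dx * dx + dy * dy + dz * dz;            // squared distance, no sqrt needed
        if (d2 < 1000.0 * 1000.0)     return Level.FULL_MESH;       // close: full artist mesh
        if (d2 < 10000.0 * 10000.0)   return Level.LOW_POLY;        // mid: collapsed model
        if (d2 < 100000.0 * 100000.0) return Level.SINGLE_TRIANGLE; // far: one triangle, dominant color
        return Level.NOT_DRAWN;                                      // beyond visible range
    }
}
[/code]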

[quote]What format’s best for you on the physics side?
[/quote]
Oh, it doesn't really matter. As long as I can get at the mesh vertices to build trimeshes it doesn't make much odds.
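For example, whatever the format, once the loader hands over the raw vertex positions the collision data can be derived from them - e.g. a loose bounding sphere for the broad phase (sketch, assuming a flat x,y,z vertex array):

[code]
/** Sketch: derive a loose bounding sphere from a flat vertex array
 *  (x0,y0,z0, x1,y1,z1, ...), regardless of which loader/format produced it. */
class MeshBounds {
    static double[] boundingSphere(float[] verts) {
        int n = verts.length / 3;
        double cx = 0, cy = 0, cz = 0;
        for (int i = 0; i < verts.length; i += 3) {          // centroid as the sphere centre
            cx += verts[i]; cy += verts[i + 1]; cz += verts[i + 2];
        }
        cx /= n; cy /= n; cz /= n;
        double r2 = 0;
        for (int i = 0; i < verts.length; i += 3) {          // radius = farthest vertex from centre
            double dx = verts[i] - cx, dy = verts[i + 1] - cy, dz = verts[i + 2] - cz;
            r2 = Math.max(r2, dx * dx + dy * dy + dz * dz);
        }
        return new double[] { cx, cy, cz, Math.sqrt(r2) };   // {centreX, centreY, centreZ, radius}
    }
}
[/code]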

I think making the artists' job as easy as possible is the best direction. I dunno how people will use procedural texture generation and still be able to model in a graphics program at the same time (unless we have a Blender plugin or something).

Tom

Cool, rendering is definitely the biggest problem!

[quote]I think our major time saver will be rendering far away ships as really low dimensional poly’s eventually simplifying them to be a single triangle of roughly the right color when miles away from the camera.
So how the question is whether the artists have to be responsible for producing different resolution polys or whether we try to do that automatically. (option 1 is hard on the artists, option 2 is hard on the coders)
[/quote]
Not such a problem (hopefully): I'd already thought of some form of impostors (i.e. pre-rendered images) for more distant objects, so I guess a single low-poly model will do from the artists. I wonder about ships breaking up though (especially large ones) - we'll need some way of splitting the mesh when a ship explodes… I looked into voxels, but there's no way we've got the CPU beef to do it!

[quote]I think making the artists job the easiest is the best direction. I dunno how people will use procedural texture generation and be able to model in a gfx program at the same time (unless we have a blender plugin or something)
[/quote]
I’ve a feeling you’re right, but we’ll have to heavily stress the low poly/small texture requirements!

All the models for the RTS are going to be very low poly (as high-poly models are not needed in any way). Also I will be making collapsed versions of the models (these probably won't be UV mapped as they're going to be so small, so texturing will be disabled and they will be rendered in the dominant color).

A primary mesh will have a poly count like the model update I posted.

I still really feel that sharing space ships between each project will be a good idea for productivity.

Sure thing - the more common resources the better!
BTW could you post the 3DS file for the Manta so’s I can check the JPCT loader? All being well I’ll put it in the feasibility test applet.

Here is a link for the manta model.
manta.zip
The zip file includes:
primary .obj: the main mesh
low-poly .obj: extremely low poly (I use this for long-distance objects); it could also be used for collision checking.
manta.jpg: primary texture map
manta_h.jpg: height map used with shaders
manta_n.jpg: normal map used with shaders

edit: Sorry it's not in 3DS format; I use .obj in my game. Blender can convert from .obj to 3DS, in case you need to.

Cheers bobjob! There seem to be some problems with the JPCT loaders though (even with jpct-supplied 3ds files) :(

BTW Must-read article on procedural textures for planets!

yeah cheers bobjob

Whew! It was a struggle but I finally persuaded JPCT to play along - new feasibility test with the Manta in wobbly orbit.
NB: JPCT can only handle 256x256 textures. I haven’t done any texture-alignment tests yet…

EDIT: someone just pointed out that the thread title is spelled wrong! Hmm… how do you amend a thread title?