the PhysX processor

Have you guys seen the PhysX processor? I saw an article about it at enthusiast.hardocp.com and it looks really cool. They actually showed the difference between a regular physics engine and the PhysX card, and it's unbelievable! I don't know if it would be worth buying, but it looks amazing! Check out this link and tell me what you think:

http://enthusiast.hardocp.com/article.html?art=MTAyMCwxLCxoZW50aHVzaWFzdA

PhysX looks cool, but that's about it. Personally I haven't seen how this will help gameplay much, apart from making games look a bit better. I guess we'll just have to wait and see how it turns out.

You would actually be able to do real ballistic bullets rather than the current bullets-as-lasers. That has been my dream for a while, and this would provide enough horsepower to do it. Drop over distance, windage, sweet.
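Something like this, just done per-frame on the PPU instead of an instant ray cast. A minimal sketch with made-up constants (nothing to do with Ageia's actual API): gravity drop plus a crude crosswind term, integrated in small steps.

```cpp
#include <cstdio>

// Minimal sketch (not Ageia's API): integrate one bullet with gravity drop
// and a crude crosswind drag term, instead of an instant hitscan ray.
struct Vec3 { double x, y, z; };

int main() {
    Vec3 pos{0, 1.5, 0};              // muzzle ~1.5 m above the ground
    Vec3 vel{850, 0, 0};              // 850 m/s muzzle velocity along +x
    const Vec3 wind{0, 0, 4};         // 4 m/s crosswind along +z (assumed)
    const double g = 9.81;            // gravity, m/s^2
    const double dragCoef = 0.05;     // made-up drag constant, 1/s
    const double dt = 1.0 / 600.0;    // sub-steps finer than the frame rate

    // Step until the bullet hits the ground plane (y == 0).
    while (pos.y > 0.0) {
        // Drag pulls the bullet's velocity toward the wind's velocity.
        vel.x += dragCoef * (wind.x - vel.x) * dt;
        vel.z += dragCoef * (wind.z - vel.z) * dt;
        vel.y -= g * dt;              // gravity drop
        pos.x += vel.x * dt;
        pos.y += vel.y * dt;
        pos.z += vel.z * dt;
    }
    std::printf("impact at %.1f m downrange, %.2f m of wind drift\n", pos.x, pos.z);
}
```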

The company is Ageia.

They gave us (Full Sail) 25 of these cards (we have them installed in a student lab) and training over our Spring Break last week.
All in all, pretty cool, and the company has strong backing. They are focusing on the hardware AND the software. It will be an interesting shakedown in that space over the next couple of years.

BTW Jeff, I figure I lost our bet on these cards, so what is it I owe you? :slight_smile:

The downside is that such true physics can't really be used in multiplayer games. The physics here are part of the gameplay. There is so much data being modified due to the new processing power that it's impossible to synchronize all clients, even over a LAN. We're talking about lots of MBs per world-step. Think of a tower composed of bricks tumbling down. Due to the delay (you can only send the updates to the clients after the world has stepped) and the network lag, a tiny difference in location can have an enormous impact on what happens to an object later - whether a pile of crates has tiny gaps to shoot through might vary per client. This, and the sheer bandwidth/data traffic required, makes it literally impossible to sync. Furthermore, the server has to validate all changes ("is the client allowed to apply that force?") to prevent cheating, so you could start saving up some spare cash to get your hands on an 8-way Opteron.
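To put a rough number on the "lots of MBs per world-step" claim, here's a back-of-envelope estimate; every figure in it is an assumption, not a measurement:

```cpp
#include <cstdio>

// Back-of-envelope bandwidth estimate for naively syncing every rigid body
// each physics step. All numbers are assumptions, not measurements.
int main() {
    const int bodies       = 30000;    // e.g. debris from a collapsing building
    const int bytesPerBody = 13 * 4;   // pos(3) + quat(4) + lin vel(3) + ang vel(3) floats
    const int stepsPerSec  = 60;       // physics tick rate

    const double bytesPerStep = static_cast<double>(bodies) * bytesPerBody;
    const double mbitPerSec   = bytesPerStep * stepsPerSec * 8.0 / 1e6;

    std::printf("%.1f MB per step, %.0f Mbit/s per client\n",
                bytesPerStep / 1e6, mbitPerSec);
}
```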

In single-player games it will be extremely useful for enhancing gameplay and immersion.

For multiplayer it will probably be used for extreme ‘local’ detail, only syncing the ‘relevant’ items, but in the end each client will see a more or less different world, where even the slightest change might affect gameplay immensely.

Accurate server-side and client-side simulation, broadband, and progressive syncing (syncing subsets of the active objects per network message) all come together to make networked physics worlds possible and playable.
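A rough sketch of what that progressive syncing could look like: each network tick, only a budget's worth of the active objects goes on the wire, highest-priority first. The priority heuristic and the budget are illustrative assumptions, not anything Ageia actually ships.

```cpp
#include <algorithm>
#include <cstdio>
#include <vector>

// Sketch of "progressive syncing": per network tick, only a budget's worth of
// the active objects is sent, highest-priority first. Priority here is just
// how far the client's copy has drifted (an assumed, illustrative heuristic).
struct ObjState {
    int    id;
    double errorToClient;   // drift between server and client state
};

std::vector<int> pickSubsetToSync(std::vector<ObjState> active, int budget) {
    // Largest drift first, so the most "relevant" items get corrected soonest.
    std::sort(active.begin(), active.end(),
              [](const ObjState& a, const ObjState& b) {
                  return a.errorToClient > b.errorToClient;
              });
    std::vector<int> subset;
    for (int i = 0; i < budget && i < static_cast<int>(active.size()); ++i)
        subset.push_back(active[i].id);
    return subset;
}

int main() {
    std::vector<ObjState> crates = {{1, 0.02}, {2, 1.70}, {3, 0.40}, {4, 0.90}};
    for (int id : pickSubsetToSync(crates, 2))      // room for 2 objects this message
        std::printf("sync object %d this tick\n", id);   // prints 2, then 4
}
```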

With current CPU-based physics simulations, that certainly is true.

With these PPUs I expect that there is just too much data to be synced (over TCP or even UDP).

On the financial side, will players be willing to spend extra money on this card when recent video cards are already expensive?

And GPUs are beginning to take over the physics part of some games (the new Havok engine, IIRC).

It will be a while before anything actually requires this, I'm guessing, but it's a very interesting idea. It should be possible to offload more or less anything onto a separate card sooner or later, which leaves us with some very interesting possibilities for future cards. I'm trying to work out how it might work with AI, but it would need some very standard AI libraries that everyone uses, in the way that Havok or OpenGL work with physics and graphics…

I doubt they would need to send a lot of data for the physics; they could just use bounding boxes to batch large amounts of objects together, like a box for a dust cloud or a box for a building that is collapsing. Even if the effect isn't the same on all computers, the gameplay should be the same.
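Something like this is what I mean: the server sends one axis-aligned box covering the whole group of debris, and each client simulates the detail inside it locally. The struct names and values are made up for illustration.

```cpp
#include <algorithm>
#include <cstdio>
#include <vector>

// Sketch of the bounding-box idea: rather than replicating every piece of
// debris, one axis-aligned box for the whole group goes over the network and
// each client fills in the detailed (possibly different-looking) simulation.
struct Vec3 { float x, y, z; };
struct Box  { Vec3 min, max; };

Box boundingBoxOf(const std::vector<Vec3>& pieces) {
    Box b{pieces[0], pieces[0]};      // assumes at least one piece
    for (const Vec3& p : pieces) {
        b.min = {std::min(b.min.x, p.x), std::min(b.min.y, p.y), std::min(b.min.z, p.z)};
        b.max = {std::max(b.max.x, p.x), std::max(b.max.y, p.y), std::max(b.max.z, p.z)};
    }
    return b;
}

int main() {
    // Thousands of debris positions would live here; three stand in for them.
    std::vector<Vec3> debris = {{1, 0, 2}, {4, 3, -1}, {2, 5, 0}};
    Box box = boundingBoxOf(debris);
    // Only this one box (24 bytes) goes over the network instead of N bodies.
    std::printf("sync box min(%.0f,%.0f,%.0f) max(%.0f,%.0f,%.0f)\n",
                box.min.x, box.min.y, box.min.z, box.max.x, box.max.y, box.max.z);
}
```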

AI is something that fits perfectly well on an x86 architecture. It's so filled with branching, conditionals and random memory access that it's quite hard to make hardware that handles it better than x86, as x86 was designed for exactly this kind of work. A GPU will simply suck at it, no matter how smartly it's implemented.

My prediction is that this technology will not take off. GPUs and CPUs will improve such that combined they can easily cover this aspect of simulation for gaming. This processor may get used for scientific/academic work - gaming won’t need it.

+1, Agreed

My prediction is that this will only take off if a killer game uses it.

It's unlikely that the PhysX processor will be of much use other than in games and scientific apps, unless of course the next-gen OSes decide that icons with real-time physics would make good eye candy :slight_smile:

Interestingly, the Unreal Tournament guys mention the PhysX processor here:

I can see it now… when you move the mouse pointer across the screen it will bump all of your icons out of the way and spill the files out of any folders on your desktop :slight_smile:

Which bet was this again? The usual stakes are a buck, but you have to write what you were wrong about on it and sign it.

Actually, there is an Israeli company making a hardware graph-searching processor that's pretty damn cool.

I saw it at GDC. I have the name around here somewhere…