
PhysX chip

Started April 20, 2006 07:42 PM
The difference is that nVidia is working to do general-purpose computations on a GPU that is still focused on providing top-notch graphics performance, whereas Ageia has built a special-purpose processor to handle physics computations.

This means:
1) The Ageia processor has real memory, like a CPU. The GPU may have comparable bandwidth, but the access pattern a GPU is designed for is not the same access pattern seen in physics computations, so the physics work has to be broken up in "unnatural" ways to accommodate the limited access to memory.

2) The Ageia processor is not "too far" from the CPU. The CPU needs data such as position, velocity, direction, and rotation in order to make decisions for AI and graphics processing, but it no longer needs to deal with all the side-effect data that the physics requires to compute that object information. This is the same idea as HW T&L: the CPU doesn't need to know about all the side-effect geometry, it only has to know the position and orientation data for the mesh (see the sketch below).
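
To make that split concrete, here is a minimal C++ sketch of the idea. The struct layout and the readPosesFromDevice call are hypothetical illustrations, not Ageia's actual SDK:

Code:
#include <cstdint>
#include <vector>

// What the CPU actually needs back each frame for AI and rendering.
struct Pose {
    float position[3];
    float orientation[4]; // quaternion
    float velocity[3];
};

// The "side-effect" data the solver needs, which could stay on the PPU.
struct SolverState {
    float impulses[8][3];        // accumulated contact impulses
    std::uint32_t contactIds[8];
    float inertiaWorld[9];       // world-space inertia tensor
    // ...broad-phase bounds, sleep timers, joint scratch space, etc.
};

// Per frame, only the compact poses cross the bus.
void syncFrame(std::vector<Pose>& cpuPoses) {
    // readPosesFromDevice(cpuPoses); // hypothetical transfer call
    (void)cpuPoses; // AI and rendering consume these; SolverState never leaves the card
}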

There is a very significant amount of processing that the GPU and PPU are both able to do, and a significant amount of data gets cached on both that does not have to be transferred every frame. Anything you can offload from the CPU means more cycles for everything else. It still takes time to iterate through 1000 objects and do frame updates; if you can halve the processing time, you can either make the world more "rich and full" with another 1000 objects, or make each object "smarter" in some way.

Quote:
What's next, an AI card? A Scripting card? There's only so many areas you can split it down into.

There is already specific hardware in your computer to decode MPEG-4 video and MP3 audio. Your network card likely has RLE compression and hardware to help support the TCP/IP stack. RAID can be done in software, but there is hardware to deal with that too.

The point is that there are types of computation that can be done better with specific hardware. In 5 years we will likely have people using the PhysX API to do fast general-purpose computations on the PPU, just as we now have people using fragment shaders to do some general-purpose computation on the GPU.

If we didn't have hardware acceleration for many facets of computation today, the technology would never have made it to the mainstream. (Think of a 500MHz CPU trying to decode a DVD without GPU support, or trying to encode an MP3 in real time straight from the CD/line-in. Possible, but a 500MHz CPU plus a DVD/TV decoder card makes all the difference in the world for playback responsiveness.) The PhysX chip is a step in the right direction. The Cell probably is too.
The PhysX chip is a step in the right direction. The cell probably is too.

And even if the consumer market doesn't take up the PhysX chip, there is likely
a commercial market for it, just as SGI opened the market for
commercial high-speed/quality graphics processing.
Likely the chip will not "flop", but it will be something that only gamers will buy. The average computer user will never see the need.
Alienware, Voodoo PC, and several other companies focus on the gamer market, and seem to be doing fine.
The PhysX chip will likely become part of their full line of products, just as SLI slowly edged its way in.


P.S.
Personally? I'm waiting for real-time ray-tracing GPUs to come to market. When that day comes, all the pixel-shader hacks will be gone, and we can finally see graphics with real effects in full 3D, without the artifacts we have today.
What starts out as an add-on chip today may well become a built-in part of motherboard designs of the future. Games are not the only environments that require physics processing; indeed, there are many more research machines out there that require simulation capabilities than there are games machines. This will indeed be a solid marketplace for PPUs. Additionally, I think you'll find that if PhysX succeeds in penetrating software design (or at least the industry isn't averse to catering for it), then you may well see next- or later-generation consoles with dedicated PPUs.

My personal opinion is that this is actually well overdue. There is great capacity within both the gaming/entertainment and the scientific/engineering research markets for the uptake of physics chips, because what they bring to the table is increased performance, if you're prepared to take advantage of them. Since the market is always moving forward, there will always be people prepared to take that advantage and mark their place in the front line... and more power to them if it means my research life is made easier and my games look and feel more realistic!

Cheers,

Timkin
Quote: Original post by justo
...in the same way you don't want to be using a dual-core processor for rasterizing triangles and running shader code, dual cores will never be as efficient or be able to run as many operations per second as a dedicated card...

You can't compare the two. First of all, modern games absolutely can't run without a graphics card because of the vast amount of processing power required. For physics, you don't need that many GFLOPS. Only a fraction of the geometry is in motion for any realistic game, and that's vertex processing, not pixel processing.

Secondly, CPUs are bad at pixel processing (bilinear texture filtering in software is quite slow), but they are very good at vertex processing (Intel Integrated Graphics still uses software T&L in many cases).

Thirdly, don't underestimate the effective efficiency of a CPU. It has a comparatively huge and fast cache, highly efficient branch prediction, and out-of-order execution, and its execution units are fully pipelined and run at several gigahertz. A PPU uses a brute-force approach, but can't be that efficient. So even if it has a multiple of the GFLOPS of a CPU, a lot goes to waste on additional overhead (not to mention PCI bandwidth and synchronization).

Last but not least, the second core of a dual-core processor is practically unused at the moment. So while previously there was maybe a 10% 'budget' for physics processing, this has now become 100%. It's already going to be hard to use all of that (without wasting it, of course) for any realistic game. Plus, in a year dual-core CPUs will be widespread and affordable, while only over-enthusiastic gamers will buy a PPU.
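
To illustrate that last point, here is a minimal sketch of physics running on its own thread while the main thread renders the last published state. stepSimulation and publishPoses are hypothetical placeholders, and std::thread is used purely for brevity:

Code:
#include <atomic>
#include <thread>

struct World { /* bodies, constraints, contacts... */ };

std::atomic<bool> running{true};

// Runs on the second core: step the simulation as fast as the budget allows.
void physicsLoop(World& world) {
    while (running.load()) {
        // stepSimulation(world, 1.0f / 60.0f); // integrate, collide, solve (hypothetical)
        // publishPoses(world);                 // double-buffer results for the renderer
    }
    (void)world;
}

int main() {
    World world;
    std::thread physics(physicsLoop, std::ref(world));
    // Main thread: render from the most recently published physics state.
    running.store(false);
    physics.join();
}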
Quote: Original post by C0D1F1ED
You can't compare the two. First of all, modern games absolutely can't run without a graphics card because of the vast amount of processing power required. For physics, you don't need that many GFLOPS.


That entirely depends on what you're simulating!

Quote: Only a fraction of the geometry is in motion for any realistic game, and that's vertex processing, not pixel processing.


Physics is about more than just accommodating motion of a few models. Comparing it to vertex processing is doing it a huge disservice. Anyone who's looked at endless equations of Hamiltonian or Lagrangian derivations of rotational velocity and stuff like that will know this. To do a good job of physics, you have various iterative methods and approximations that, to be done properly, require a lot of power. A large part of the reason little of the geometry moves in current games is because it's expensive to make it do so. Make it cheaper, and more will move.
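
For a feel of where those cycles actually go, here is a toy, Jakobsen-style relaxation of distance constraints in C++; real engines run this kind of iterative loop over contacts and joints several times per frame (a simplified sketch, not any particular engine's code):

Code:
#include <cmath>
#include <vector>

struct Particle   { float x, y, z; };
struct Constraint { int a, b; float restLength; };

// Gauss-Seidel-style relaxation: repeatedly nudge each constrained pair
// toward its rest length. More iterations mean stiffer results and more FLOPs.
void relax(std::vector<Particle>& p,
           const std::vector<Constraint>& cs,
           int iterations)
{
    for (int it = 0; it < iterations; ++it) {
        for (const Constraint& c : cs) {
            Particle& pa = p[c.a];
            Particle& pb = p[c.b];
            float dx = pb.x - pa.x, dy = pb.y - pa.y, dz = pb.z - pa.z;
            float len = std::sqrt(dx*dx + dy*dy + dz*dz);
            if (len < 1e-6f) continue;          // avoid division by zero
            float corr = 0.5f * (len - c.restLength) / len;
            pa.x += corr * dx; pa.y += corr * dy; pa.z += corr * dz;
            pb.x -= corr * dx; pb.y -= corr * dy; pb.z -= corr * dz;
        }
    }
}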

Quote: Thirdly, don't underestimate the effective efficiency of a CPU. It has a comparatively huge and fast cache, highly efficient branch prediction, and out-of-order execution, and its execution units are fully pipelined and run at several gigahertz. A PPU uses a brute-force approach, but can't be that efficient.


I seriously doubt that a PPU is not 'fully pipelined'. Nor is branch prediction or out-of-order execution a big deal for such hardware. Those features are part of how CPUs attempt to catch up to dedicated hardware, not advantages over such hardware.

Quote: Last but not least, the second core of a dual-core processor is practically unused at the moment.


This may be true, but it's almost exactly what Intel said about MMX for graphics. It was nowhere near enough for what was needed.
Quote: Original post by gumpy macdrunken
Don't forget about server-side physics. Allowing clients to calculate physics leads to hacks/cheats. Dedicated physics hardware could be a cost-effective solution for online game servers.


This is exactly the use I had in mind for these cards: a setting where you can dictate the hardware specification and make your app take full advantage of it.

Quote: Original post by Kylotan
Quote: Original post by C0D1F1ED
You can't compare the two. First of all, modern games absolutely can't run without a graphics card because of the vast amount of processing power required. For physics, you don't need that many GFLOPS.

That entirely depends on what you're simulating!

Absolutely. I know of simulations being done with FEM and models of nuclear physics. These are highly complex, and dedicated hardware can be of great use.

But they are not used in real-time games and will never be. In games, only 'simple' physics really make sense. For slightly more complex things (e.g. fluid dynamics) hacks are very acceptable. Compare it with rasterization versus ray-tracing. Rasterization is an accepted hack in real-time games.
Quote: A large part of the reason little of the geometry moves in current games is because it's expensive to make it do so. Make it cheaper, and more will move.

Sure, but not 100x more. And there's a technical reason for that. Every game needs a visibility algorithm for solid geometry to get acceptable performance. A fully deformable world is not practical (and even if it were, not every vertex would need complex physics calculations at all times). So I definitely agree that extra processing power for physics is welcome, but in my opinion the 10x increase offered by dual-core is plenty.
Quote: I seriously doubt that a PPU is not 'fully pipelined'. Nor is branch prediction or out-of-order execution a big deal for such hardware. Those features are part of how CPUs attempt to catch up to dedicated hardware, not advantages over such hardware.

The CPU's attempt to catch up with dedicated hardware is multi-core, which is only in its infancy. Quad-core and octa-core are already on the roadmaps, and now that Intel has learned that clock frequency isn't everything, we're going to see some very powerful CPUs in the not-so-distant future. It's crazy to invest in PPU technology at this point.
Quote:
Quote: Last but not least, the second core of a dual-core processor is practically unused at the moment.

This may be true, but it's almost exactly what Intel said about MMX for graphics. It was nowhere near enough for what was needed.

That's hardly comparable. MMX only allowed roughly doubling the processing workload. Dual-core means a whole extra processor is available almost completely for physics. If on a single core the budget for physics calculations is 10%, then with dual-core you can do 10x more at the same framerate.
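
As a back-of-the-envelope check on that figure (all numbers illustrative):

Code:
#include <cstdio>

int main() {
    const double frameMs      = 1000.0 / 60.0;  // ~16.7 ms per frame at 60 fps
    const double singleBudget = 0.10 * frameMs; // ~1.7 ms if physics gets 10% of one core
    const double dualBudget   = 1.00 * frameMs; // ~16.7 ms if physics owns the second core
    std::printf("single-core physics budget: %.2f ms\n", singleBudget);
    std::printf("dual-core physics budget:   %.2f ms (%.0fx)\n",
                dualBudget, dualBudget / singleBudget);
}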
Quote: Original post by C0D1F1ED
Quote: Original post by Kylotan
Quote: Original post by C0D1F1ED
You can't compare the two. First of all, modern games absolutely can't run without a graphics card because of the vast amount of processing power required. For physics, you don't need that many GFLOPS.

That entirely depends on what you're simulating!

Absolutely. I know of simulations being done with FEM and models of nuclear physics. These are highly complex, and dedicated hardware can be of great use.

But they are not used in real-time games and will never be. In games, only 'simple' physics really make sense. For slightly more complex things (e.g. fluid dynamics) hacks are very acceptable. Compare it with rasterization versus ray-tracing. Rasterization is an accepted hack in real-time games.

But the only reason high-complexity physics isn't applied to games is the technological bottleneck. If dedicated hardware meant it didn't reduce framerates at all, why wouldn't you put elaborate simulations into games? Games like Oblivion would be that much more immersive if the physics were even more realistic (which, in some respects, it already is).
Quote:
Quote: A large part of the reason little of the geometry moves in current games is because it's expensive to make it do so. Make it cheaper, and more will move.

Sure, but not 100x more. And there's a technical reason for that. Every game needs a visibility algorithm for solid geometry to get acceptable performance. A fully deformable world is not practical (and even if it were, not every vertex would need complex physics calculations at all times). So I definitely agree that extra processing power for physics is welcome, but in my opinion the 10x increase offered by dual-core is plenty.

Again, this hardware removes the technical bottleneck from games, allowing more objects to move without hindering performance.

I think the PPU is a great idea. I know that in a few years, my next PC will have one. If you see what nVidia and Havok did on the GPU for physics, imagine what could be done on a card of that power that wasn't also trying to render the geometry. It would allow for smooth, precise physical representations of the world.

I personally can't wait until every object in a visible scene is actively a part of the physics. Nothing bugs me more in Oblivion than touching something on a table and having all the other objects become "active" and either rise up a hair or, worse, fall through the table because they were set interpenetrating. It's just unacceptable in my eyes. A dedicated card would allow game makers to just activate all visible objects on the PPU so that you don't have these glitches or bugs in the physics simulation.

Now, my main question: why is this in the AI forum?
C0D1F1ED: I don't really understand your point. Is it that it's better to dedicate an entire general-purpose core to physics rather than use a specialized chip?

If that is your point, and you think that because of this there will never be a market, again I have to disagree... there is a fairly substantial gamer market out there (not huge, but enough to sustain 512MB 7900 GTXs and the like) that will happily pay for a card to offload all physics calculations for any game based on Unreal Engine 3 (and others) and boost their framerate by whatever margin it can.

As a developer who is using GPGPU techniques to do somewhat massive simulations, I'd love to have a card to do the same stuff in real time (and no, a CPU could *never* handle it, even if dedicated) *and* be able to read back large amounts of data to influence the interactive parts of the program (something precluded by nVidia's approach). There is almost no way to do this with a general-purpose processor, even for relatively simple scenes (like tens of thousands of interacting particles), with any kind of iterative solver.
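
For a sense of the scale justo is describing, naive pairwise interaction grows quadratically, which is why iterative solvers on a general-purpose CPU drown at these particle counts (numbers illustrative):

Code:
#include <cstdio>

int main() {
    const long long n = 10000;                // interacting particles
    const long long pairs = n * (n - 1) / 2;  // ~50 million candidate pairs, naively
    std::printf("naive pair tests per frame: %lld\n", pairs);
    // A broad phase (grid or spatial hash) prunes most of these, but the
    // surviving contacts still have to be relaxed over several solver
    // iterations, every frame, at interactive rates.
}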

Quote: Original post by NickGravelyn
Now, my main question: why is this in the AI forum?

haha.
Quote: Original post by NickGravelyn
But the only reason high-complexity physics isn't applied to games is the technological bottleneck. If dedicated hardware meant it didn't reduce framerates at all, why wouldn't you put elaborate simulations into games?

Because for a long time graphics have been the main selling point for games. Physics, and gameplay in general, has been neglected for some years. You're right that there is a technological bottleneck of knowledge and software, but not so much a performance bottleneck. We could have had games with awesome physics years ago, if only they had put more effort into it.
Quote: Again, this hardware removes the technical bottleneck from games, allowing more objects to move without hindering performance.

My argument was that you can't make the whole game world deformable. The visibility algorithms only work for solid geometry, which is absolutely required to get adequate framerates. The PPU won't help with visibility, so the possibilities are really limited. 10x more physics processing power (offered by dual-core) can be put to good use; 100x more (offered by the PPU), I doubt it.
Quote: A dedicated card would allow game makers to just activate all visible objects on the PPU so that you don't have these glitches or bugs in the physics simulation.

I'm quite sure those are just bugs, not a fundamental limitation of the software physics engine. Like I said before, it doesn't take that many GFLOPS to make a dozen objects on a table behave correctly. It's a perfect example of how lots of effort is put into the graphics of a game, but not always the physics. Don't blame the CPU for that; blame the buggy software and the lack of knowledge to fix it. Investing in a robust physics engine would have made all the difference. And for future games, the power of a dual-core processor is really plenty.
Quote: Original post by justo
C0D1F1ED: I don't really understand your point. Is it that it's better to dedicate an entire general-purpose core to physics rather than use a specialized chip?

That depends on what you consider 'better'. The way I see it, a dual-core processor is the most cost-effective. All software will soon benefit from it, and it's great for running multiple applications. Besides, it's going mainstream, so in a couple of years everyone will have one. And from my experience, the extra processing power is plenty for impressive physics.

The alternative is to have a PPU, which in my eyes is a big waste of money. It will only be used in a fraction of games, won't (can't) offer that much extra, and it's very expensive compared to a dual-core processor that also helps other applications (and which you need to buy anyway).
Quote: If that is your point, and you think that because of this there will never be a market, again I have to disagree... there is a fairly substantial gamer market out there (not huge, but enough to sustain 512MB 7900 GTXs and the like) that will happily pay for a card to offload all physics calculations for any game based on Unreal Engine 3 (and others) and boost their framerate by whatever margin it can.

As a matter of fact, I'm very interested in buying a GeForce 7900 GT(X) myself. I know it delivers awesome graphics for the right price. And I already have an Athlon 64 X2 4400+. But a PPU doesn't interest me one bit. I am convinced that for any real game (not a specially fabricated tech demo), the immense processing power of the CPU is plenty, as long as it runs a robust and well-optimized multi-threaded physics engine.

Anyway, I certainly realize there is a market. Hardcore gamers will buy anything that 'might' improve their game experience. You could sell them a dead rat if you market it well enough (it gives Doom 3 the right smell, you know). So I'm pretty sure Ageia will make a profit. I'm just not convinced that a PPU on a separate card is what most consumers really need...

