top-notch graphics performance, whereas Ageia has built a special-purpose processor to deal with physics computations.
This means:
1) the Ageia processor has real memory, like a CPU. The GPU may have comparable bandwidth, but the access pattern a GPU is designed for is not the same access pattern seen in physics computations, so the physics work has to be broken up in "unnatural" ways to accommodate the limited access to memory (see the sketch after this list).
2) the Ageia processor is not "too far" from the CPU. The CPU needs data such as position, velocity, direction, and rotation in order to make decisions for AI and graphics processing, but it no longer has to deal with all the side-effect data the physics needs in order to compute that object information. This is the same idea as HW T&L: the CPU doesn't need to know about all the side-effect geometry, it only has to know the position and orientation data for the mesh.
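To make both points concrete, here is a minimal C++ sketch (every type and field name here is made up for illustration, not taken from any real SDK): the solver's internal working set is big and gets walked through indirect, scattered indices, while the only thing the game ever needs back is a compact pose per object.

    #include <cstdint>
    #include <vector>

    // Hypothetical internal solver data: the "side-effect" state the physics
    // needs every step (contact pairs, accumulated impulses, and so on).
    // With a PPU this can live entirely in the card's local memory.
    struct Contact {
        std::uint32_t bodyA, bodyB;   // indices into the body array (scattered access)
        float normal[3];
        float penetration;
        float accumulatedImpulse;     // warm-start data carried between frames
    };

    struct BodyInternal {
        float invMass;
        float invInertia[9];
        float velocity[3];
        float angularVelocity[3];
    };

    // The only thing the CPU actually needs back, per object, per frame:
    // position and orientation, so AI and rendering can make their decisions.
    struct PoseReadback {
        float position[3];
        float orientation[4];         // quaternion
    };

    // A constraint solver touches bodies in whatever order the contact list
    // dictates: two dependent, effectively random reads and writes per contact.
    // A GPU's pipeline is built for coherent, streaming reads, so this pattern
    // has to be contorted to fit it; a chip with ordinary random-access memory
    // does not care.
    void solveContacts(std::vector<Contact>& contacts,
                       std::vector<BodyInternal>& bodies)
    {
        for (Contact& c : contacts) {
            BodyInternal& a = bodies[c.bodyA];    // scattered read/write
            BodyInternal& b = bodies[c.bodyB];    // scattered read/write
            // ... compute an impulse along c.normal and apply it to a and b ...
            (void)a; (void)b;
        }
    }

The PoseReadback array is the only traffic that has to come back to the CPU each frame; everything in Contact and BodyInternal is exactly the "side-effect data" that, with a PPU, the CPU never has to touch. It is the same split HW T&L made for geometry.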
There is a very significant amount of processing that the GPU and PPU are both able to do, and a significant amount of data that gets cached on both and does not have to be transferred every frame. Anything you can offload from the CPU means more cycles for everything else.
It still takes time to iterate through 1000 objects and do frame updates; if you can halve the processing time, you can either make the world more "rich and full" with another 1000 objects, or make each object "smarter" in some way.
Quote:
What's next, an AI card? A Scripting card? There's only so many areas you can split it down into.
There is already specific hardware in your computer to decode MPEG-4 video and MP3 audio. Your network card likely has RLE compression and hardware to help support the TCP/IP stack. RAID can be done in software, but there is hardware to deal with that too.
The point is that there are types of computation that can be done better with specific hardware. In 5 years we will likely have people using the PhysX API to do fast general-purpose computation on the PPU, just like we have people using fragment shaders to do some general-purpose computation on the GPU today.
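The fragment-shader trick works because a float texture is just a 2D array and a fragment program is just a small kernel run once per output element; drawing a full-screen quad into a render target is how you make the GPU execute the loop. As a CPU-side analogy (no real graphics API here, purely an illustration):

    #include <cstddef>
    #include <vector>

    // What "general-purpose computation on a fragment shader" boils down to:
    // inputs bound as textures are read-only arrays, the shader body is a small
    // per-element kernel, and the render target is the output array. Drawing a
    // single full-screen quad makes the GPU run this "loop" once per pixel.
    std::vector<float> runKernel(const std::vector<float>& texA,
                                 const std::vector<float>& texB)
    {
        std::vector<float> target(texA.size());
        for (std::size_t i = 0; i < texA.size(); ++i) {
            // the "fragment shader": one multiply-add per element
            target[i] = texA[i] * texB[i] + 1.0f;
        }
        return target;
    }

A PPU opened up for general-purpose work would look much the same from the outside: hand it arrays of state, get arrays of results back, and treat it as a very wide data-parallel coprocessor.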
If we didn't have hardware acceleration for many facets of computation today, the technology would never have made it to the mainstream.
(Think of a 500MHz CPU trying to decode a DVD without GPU support, or trying to encode an MP3 in real time straight from the CD/line in.
Possible, but a 500MHz CPU plus a DVD/TV decoder card makes all the difference in the world as far as playback responsiveness goes.)
The PhysX chip is a step in the right direction. The Cell probably is too.
And even if the consumer market doesn't take up the PhysX chip, there is likely
a commercial market for it, just as SGI opened the market for
commercial high-speed, high-quality graphics processing.
Likely the chip will not "flop", but it will be something that only gamers will buy. The average computer user will never see the need.
Alienware, Voodoo PC, and several other companies focus on the gamer market, and seem to be doing fine.
The PhysX chip will likely become part of their full line of products, just as SLI slowly edged its way in.
P.S.
Personally? I'm waiting for real-time ray-tracing GPUs to come to market.
When that day comes, all the pixel-shader hacks will be gone, and we can finally
see graphics with the real effects in full 3D, without the artifacts we have today.