They're doing physics and even some A.I. on the GPU. A PPU would actually have trouble keeping up with that, and it's clearly a big waste of money. 'Nuff said.
So let me get this straight: you're implying that hardware specialized for graphics work, which is already being pushed to its limits in current games (see Oblivion), will run physics better than a piece of hardware specialized for physics that isn't doing anything else?
What'll kill the PPU is a lack of games, or a first version that isn't a big enough jump in speed. Since this is the first-generation PPU, I expect there to be a lot of optimization room in the drivers and in the chip's design. I'll probably wait for the 3rd gen before getting one, unless some killer apps that take advantage of it start coming out before then.
C0D1F1ED - I disagree with your conclusions in the long term, for two reasons I don't think I've seen aggressively mentioned in this thread.
1. Parallelism and Scale - while a CPU can add one extra core today, which you can use to do physics, or AI, or whatever ... super fast. Where do you go tomorrow? Where do you go when you want to use that one extra CPU for physics, and another for AI, and another for audio shaders and dynamic music creation? When someone puts an out-of-line processor on the market that works (such as an audio card, DSP, NIC, GPU, SCSI/RAID controller, PhysX (maybe), ...), they gain the ability to scale that separate component over time - with minimal impact on other system resources. Whether or not someone enables RAID level 5 on their computer has almost no impact on any other performance number if it's done via a properly designed controller card. If that RAID were being done in software, where would your great physics get to do its calculations? Likewise, if the physics CAN BE offloaded properly to an out-of-band device, then it will free that second CPU core to do yet another type of work. Every time we send computations away from the core, we reclaim that core for work we have not yet learned to perform well in dedicated hardware, or to parallelize. Hence we keep the Central Processing Unit busy doing things that only the CPU can effectively do, maximizing every watt and dollar invested in the system.
2. My utter, complete disappointment in CPU progress vs. GPU progress over the last 4 years or so. In the time frame that upper mid-range CPUs have gone from 1.8 GHz to 2.6 GHz x2 (AMD) or from 2.4 GHz to 3.4 GHz x2 (Intel), the upper mid-range GPUs have gone from 325 MHz x 8 pipelines to 650 MHz x 24. Notice that these GPUs have had a doubling in clock speed, where CPUs have had a 50% increase, and a tripling in pipelines, where CPU cores have merely doubled. It's approximately the same at other price points in the spectrum as well. The $325 spent the day the Athlon 64 came out bought nearly half the power that $325 buys today; compare that with the ATI Radeon 9800 Pro that $325 might have bought you six months later, against say an NVIDIA 7900 GT now. See the difference: the dedicated hardware has the ability to scale in multiple ways as they become cost effective (speed, bandwidth, parallelism, additional technologies like shaders, etc.). The general-purpose CPU focuses its changes on the areas that help the broadest range of uses, and therefore does not invest its efforts where they would dramatically help it accomplish specific tasks. CPUs haven't added things like 4-part crossbar memory controllers, or offered versions with 4 times the data-path width, because these ideas aren't economical for them (the lowest-end CPUs have half the data path of the highest-end available, and it's done in a way that only really benefits onboard video and certain special uses).
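The scaling comparison above can be put as back-of-the-envelope arithmetic, using the clock and pipeline figures quoted in this post (throughput is crudely approximated as clock times parallel units, ignoring IPC, memory bandwidth, and architectural changes):

```python
# Rough throughput scaling from the figures quoted above.
# clock_mhz * units is a crude proxy that ignores IPC and memory
# bandwidth; treat it as an illustration, not a benchmark.

def throughput(clock_mhz, units):
    return clock_mhz * units

# GPU: upper mid-range then vs. now (325 MHz x 8 pipes -> 650 MHz x 24)
gpu_gain = throughput(650, 24) / throughput(325, 8)

# CPU (AMD): 1.8 GHz single-core -> 2.6 GHz dual-core
cpu_gain = throughput(2600, 2) / throughput(1800, 1)

print(f"GPU gain: {gpu_gain:.1f}x")  # 6.0x
print(f"CPU gain: {cpu_gain:.1f}x")  # ~2.9x
```

Even by this crude measure the dedicated part scaled roughly twice as fast over the same window, which is the core of the argument.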
Quote:Original post by C0D1F1ED
Physics processing doesn't require highly specialized hardware like a texture sampler, just generic SIMD units.
gotta disagree. that is true for highly parallel physical systems, but for many situations involving multiple bodies you are going to need a lot of data as input that is being worked on simultaneously... aka the problem is often not fully parallel. as far as i have read, that is exactly the problem space Ageia's PPU is trying to address with a highly interconnected processor... not just some SIMD chip based on a limited view of what could constitute physics in games.
They're doing physics and even some A.I. on the GPU. A PPU would actually have trouble keeping up with that, and it's clearly a big waste of money. 'Nuff said.
if you look closer it is collision detection that *doesn't interact* with the game. you are never going to get the bandwidth to read back any appreciable amount of data off of the gpu. better to stick with the extra core approach.
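To put rough numbers on the readback concern (the object count, byte sizes, and bandwidth figure below are assumptions for illustration, not measurements; in practice the bigger cost is that each readback stalls the GPU pipeline):

```python
# Estimate the per-frame GPU->CPU readback a gameplay-affecting
# physics pipeline would need. All figures are assumed for illustration.

objects = 50_000               # rigid bodies simulated on the GPU
bytes_per_object = 28          # 3 floats position + 4 floats orientation
fps = 60

needed_mb_s = objects * bytes_per_object * fps / 1e6
effective_readback_mb_s = 200  # assumed effective readback rate of the era

print(f"needed: {needed_mb_s:.0f} MB/s")
print(f"fraction of readback budget: {needed_mb_s / effective_readback_mb_s:.0%}")
```

Under these assumptions nearly half the readback budget goes to physics state alone, before textures, occlusion queries, or anything else wants the bus.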
I guess I didn't get my core idea across properly the first time. The two things I wanted to point out were the problem of specialization vs. generalization, along with the "no free lunch" theorem. They go hand in hand. What "no free lunch" tells us is that you can't have something that is awesome at everything. If something works great on one thing, you cannot expect it to perform well on something else.
So, as GPUs are super specialized at doing graphics, forcing them to do physics too will produce one of two results. Either they will lose graphics performance as we try to get better physics, or they will just do very crappy physics in general and the end result won't be worthwhile considering the effort. The same goes for the CPU. It's a very generalized piece of hardware, so it's capable of doing everything, but it'll never do any one thing extremely well. It's like real life: you can never be an expert in everything. You might be fairly good at quite a few things, but you'll never beat someone who specializes in one of them. The same goes for hardware.
So, sure, we could offload everything to the CPU. Before specialized hardware, that was the best thing to do, but with specialized hardware available, why keep using the old brute-force method? There's a reason why game consoles have always had separate hardware for certain things. You always had a chip for graphics, one for audio, then an I/O controller (maybe), and a general processing unit. It's always going to be cheaper to make a 3-chip combo that outperforms a single chip, even if that single chip is multicore. There are also inherent stability and development advantages, like the fact that graphics can be optimized without worrying about eating resources that may be needed for other things.
Back in the old days, the CPU did everything, but you could say that's because CPUs weren't fast or multicore back then. Then specialized hardware made things faster, and CPUs got faster too, but does that mean we should go back to the old ways? The other question we should ask is whether software has stayed at the same complexity. Research suggests that, to a certain extent, software has gotten slower and more resource-demanding faster than CPUs have sped up.
Also, having separate specialized modules has much better resource-management properties than running everything on the CPU.
Xia, I agree that the CPU might not have the processing power required for the most demanding physics. At least not for a couple years.
But using the GPU for physics is very much a viable option. Think about the Radeon X1900 versus the X1800. It has three times the shader units, yet it's only about 25% faster in games. Why is that? Because for graphics work it's limited by the number of texture units, and they can't easily increase that (at least not by a factor of three). So what this means is that we have a vast amount of very generic and extremely powerful SIMD units dying to do some extra work. Physics processing fits perfectly. And with the DirectX 10 API, shaders can directly access memory, so it doesn't require extra texture units.
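The reason physics maps so well onto those generic SIMD units is that integration applies the same arithmetic to every body independently. A minimal data-parallel sketch in plain Python (on a GPU, each loop iteration would run on its own shader unit in lockstep):

```python
# Semi-implicit Euler integration applied uniformly across all bodies.
# Every body's update is independent of the others, which is exactly
# the data-parallel pattern SIMD shader units are built for.

def step(positions, velocities, gravity=-9.81, dt=1.0 / 60.0):
    new_vel = [v + gravity * dt for v in velocities]           # same op per body
    new_pos = [p + v * dt for p, v in zip(positions, new_vel)]
    return new_pos, new_vel

pos = [10.0, 20.0, 30.0]   # heights of three falling bodies
vel = [0.0, 0.0, 0.0]
pos, vel = step(pos, vel)
print(vel)  # each body gains the same downward velocity, ~ -0.1635
```

Collision resolution is where the independence breaks down, which is the part the post above and the PPU camp disagree about.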
Just imagine a PPU 'glued' next to a GPU. Well, that's exactly like going from the X1800 to the X1900. Even better, because the X1900 uses a 90 nm process, is clocked at 625 MHz, and has 32 extra shader units. The PhysX PPU doesn't even come near that. The X1600 has a comparable configuration at a more affordable price. But the real GPUs with a unified architecture ready for physics processing will arrive together with DirectX 10.
The PPU won't stand a chance. It's like adding a 250-dollar 486 next to your Pentium 4: it just can't keep up and isn't cost effective at all. Invest that money in the next-generation DirectX 10 graphics card and you'll have better physics and graphics than with a current-generation GPU plus a PPU.
To be fair, DX10 has the same problem the PPU has - it isn't likely to spread fast, so developers won't be able to rely on it for many years. As DX10 requires a new OS, it seems likely that a majority of the market will be stuck at DX9 for the next 3-5 years.
As I understand it, one of the big benefits of DX10 is the lower cost of state/shader changes (as it is all done in user mode). Taking advantage of this is tricky, though, as it's something that's difficult to scale between different hardware.
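One portable way to benefit regardless of how cheap state changes get is still to minimize them: sort the render queue by shader/state key so each state is bound once per frame. A toy sketch of the idea (the draw-call records and the notion of a "bind" here are invented stand-ins, not any real API):

```python
# Sort draw calls by shader so each shader is bound once per frame,
# instead of switching state on every draw call.
from itertools import groupby

draw_calls = [
    {"shader": "metal", "mesh": "crate"},
    {"shader": "skin",  "mesh": "npc"},
    {"shader": "metal", "mesh": "barrel"},
]

state_changes = 0
ordered = sorted(draw_calls, key=lambda d: d["shader"])
for shader, group in groupby(ordered, key=lambda d: d["shader"]):
    state_changes += 1          # one bind per shader group
    for call in group:
        pass                    # issue the draw for call["mesh"] here

print(state_changes)  # 2 binds instead of 3 naive switches
```

Batching like this pays off on DX9-class hardware too, which is why engines already do it; cheaper state changes just lower the penalty when you can't batch.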
They're doing physics and even some A.I. on the GPU. A PPU would actually have trouble keeping up with that, and it's clearly a big waste of money. 'Nuff said.
if you look closer it is collision detection that *doesn't interact* with the game. you are never going to get the bandwidth to read back any appreciable amount of data off of the gpu. better to stick with the extra core approach.
That's why I believe the PPU will work out to some extent. If the GPU can't read back enough data to make its physics interact with the game beyond looking cool, I don't see it killing the PPU. With PhysX you could make all the physics actually have an impact on the game, rather than just adding to the eye candy.
Wouldn't PhysX constrain you to a specific set of physical laws? What would happen when quantum physics is needed in games, or when relativity should take part in the simulation? Who told you that gamers WANT real physics in games?
At infinity, we will simulate everything on a hyper-parallel processing unit. We are still in the transient state of development. In a hundred years, they will look back and laugh at us!
Quote:Original post by arithma
Wouldn't PhysX constrain you to a specific set of physical laws? What would happen when quantum physics is needed in games, or when relativity should take part in the simulation? Who told you that gamers WANT real physics in games?
At infinity, we will simulate everything on a hyper-parallel processing unit. We are still in the transient state of development. In a hundred years, they will look back and laugh at us!
I don't think the hardware is hard-coded with how the "real world" works; it just has fast methods for calculating the math, collisions, and other things necessary for physics. I'm sure many aspects of the physics are programmed by the developer.
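That matches how physics middleware generally works: the hardware or library accelerates the math, while the "laws" are parameters and callbacks the developer supplies. A toy sketch of the idea (this API is invented for illustration, not PhysX's actual interface):

```python
# The integrator is generic machinery; the force law is whatever
# the developer plugs in. No "real-world" physics is baked in.

def simulate(pos, vel, force_law, mass=1.0, dt=1.0 / 60.0, steps=60):
    for _ in range(steps):
        accel = force_law(pos, vel) / mass
        vel += accel * dt
        pos += vel * dt
    return pos, vel

# Developer-chosen "law": moon gravity instead of Earth's.
moon_gravity = lambda pos, vel: -1.62

pos, vel = simulate(100.0, 0.0, moon_gravity)
print(round(pos, 2))  # height after one simulated second of lunar freefall
```

Swap in a different `force_law` and the same machinery simulates a different universe, which is why a constraint to one set of physical laws isn't really the risk.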