physx chip
May 17, 2006 12:02 AM
lord, how many rounds is this going to go? everything that needed to be said was said on the first page. most are excited about developing with something new, codified is happy with his extra cores. it's not religion and it's not abortion, and yet i don't think anyone is going to walk away with a changed mind.
Quote: Original post by Anonymous Poster
lord, how many rounds is this going to go? everything that needed to be said was said on the first page. most are excited about developing with something new, codified is happy with his extra cores. it's not religion and it's not abortion, and yet i don't think anyone is going to walk away with a changed mind.
All I want is to make people aware that it's not as fantastic as Ageia would like us to believe.
Here's another intriguing fact: In 'Ghost Recon: Advanced Warfighter' there are separate settings for software mode and PPU mode. So you can't select the same level of physics detail for a fair comparison. Now why would they do that?
Quote: Original post by Anonymous Poster
lord, how many rounds is this going to go? everything that needed to be said was said on the first page. most are excited about developing with something new, codified is happy with his extra cores. it's not religion and it's not abortion, and yet i don't think anyone is going to walk away with a changed mind.
If it's about cores, I would like to use 2 individual GPUs instead of 1 GPU and 1 "PPU".
Why is this not possible? NVIDIA bus chipset, NVIDIA GPU, no flexibility.
In my world, 512MB + 512MB != 512MB
Where is the other 512MB?
[Edited by - taby on May 17, 2006 11:19:07 AM]
Just to briefly give my opinion:
I think it is a waste of time, effort, and money to design application-specific hardware for the general home user PC. After all, it is too general a platform. You won't be spending all your time playing games (at least the great majority of users won't). My point is, why not bring those server-based universal FPGA or math co-processor cards to the general home (in a simplified version)? Let the developers choose how to use the MIMD cards. That's why I like the Cell BE: it has a general purpose processor and individual SPEs for task-specific code (although I think they are a little too pumped up). My point is, hardware should be more universal, or at least more flexible (when was the last time you could easily hack up a 3D graphics card?). Again, just IMHO.
JVFF
ThanQ, JVFF (Janito Vaqueiro Ferreira Filho)
Quote: Original post by jvff
I think it is a waste of time, effort, and money to design application-specific hardware for the general home user PC.
Luckily nobody told 3DFX that 10 years ago.
Quote: My point is, why not bring those server-based universal FPGA or math co-processor cards to the general home (in a simplified version)?
The more general the processor, the less powerful it is. That's something that many of us have been trying to demonstrate throughout this thread. There are many examples from history of simpler processors being much, much faster than more general purpose ones.
Quote: Original post by Kylotan
Luckily nobody told 3DFX that 10 years ago.
Maybe if someone had told them, they wouldn't be bankrupt now... Their biggest mistake was targeting only the high-end market while there were much more cost-effective solutions from NVIDIA and ATI. Guess which road AGEIA took...
Anyway, physics is a whole lot more application-specific than graphics. Even old 2D cards offer hardware acceleration. You need a graphics card either way. A physics card is much harder to justify. There is nothing truly unique about it that accelerates physics. It's basically all regular 32-bit floating-point arithmetic.
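To make that concrete, here's a minimal sketch of a typical integration step (made-up structs and names, not code from the PhysX SDK or any real engine). It's nothing but ordinary single-precision multiplies and adds:

// Minimal sketch of a semi-implicit Euler integration step.
// Hypothetical types; not taken from NovodeX/PhysX or any real engine.
struct Vec3 { float x, y, z; };

struct Particle {
    Vec3 position;
    Vec3 velocity;
};

void integrate(Particle* particles, int count, Vec3 gravity, float dt)
{
    for (int i = 0; i < count; ++i) {
        Particle& p = particles[i];
        // v += g * dt  (forces other than gravity omitted for brevity)
        p.velocity.x += gravity.x * dt;
        p.velocity.y += gravity.y * dt;
        p.velocity.z += gravity.z * dt;
        // x += v * dt
        p.position.x += p.velocity.x * dt;
        p.position.y += p.velocity.y * dt;
        p.position.z += p.velocity.z * dt;
    }
}

Collision detection and constraint solving add more of the same: dot products, cross products, the occasional divide and square root. All of it is single-precision arithmetic that a CPU's SSE units or a GPU's shader units can issue just as well.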
Quote: The more general the processor, the less powerful it is. That's something that many of us have been trying to demonstrate throughout this thread. There are many examples from history of simpler processors being much, much faster than more general purpose ones.
That's not completely correct. Graphics cards are getting more and more general purpose, while at the same time they get faster! And I wouldn't call them 'simpler' either.
I don't care much about a PPU card, but I would like to see some DirectPhy (and OpenPhy ;) ) low-level API for physics that would use existing hardware, no matter whether that is a CPU core, a GPU or something else...
Bulma
Quote: Original post by Bulma
I don't care much about a PPU card, but I would like to see some DirectPhy (and OpenPhy ;) ) low-level API for physics that would use existing hardware, no matter whether that is a CPU core, a GPU or something else...
According to The Inquirer, Microsoft is already working on this. That's no surprise, because they control pretty much the whole technical evolution of gaming on Windows systems. It also looks like they work most closely with GPU designers, realizing the potential of the next generation (after all, they specified Direct3D 10 as well). A (multi-core capable) software fallback seems obvious as well. For PhysX it will be up to AGEIA to create a DirectX-compatible driver. They might attempt to stick to the NovodeX API in the hope of gaining market dominance (and avoiding competition from CPU or GPU based solutions), but I predict that would fail just like 3Dfx's Glide API did. Their best chance is to work closely with Microsoft and try to survive against the competition by offering the biggest feature set and the highest performance at an affordable price. Either way it's a tough market and PhysX might not survive for long.
For non-Windows systems I'm afraid there won't be a dominant API any time soon. When 3D graphics cards became popular we already had OpenGL from the workstation market. There's no professional market for PhysX, so there won't be an organization stepping up to create an open API for it. NovodeX and Havok are almost equivalent options for CPU-based physics, of course. But for GPU- or PPU-based physics a game would have to support both, which is not going to happen.
Quote: Original post by C0D1F1ED
Quote: Original post by Kylotan
Luckily nobody told 3DFX that 10 years ago.
Maybe if someone did tell them they wouldn't be bankrupt now... Their biggest mistake was targetting only the high-end market while there were much more cost-effective solutions from NVIDIA and ATI. Guess which road AGEIA took...
That is entirely false. The 'high end market' would not have settled for 16-bit graphics or lack of transform and lighting, which is what they pushed for way too long. Their high costs were down to their slow development model, not some sort of desire to aim at a different market.
Quote: A physics card is much harder to justify. There is nothing truly unique about it that accelerates physics. It's basically all regular 32-bit floating-point arithmetic.
So is graphics. You're not making a valid point here. The key is in which operations you optimise for. Having written both software renderers and physics engines, I can tell you they have very distinctive operations.
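To give a rough idea of what I mean, here's a deliberately simplified, hypothetical contrast (not code from any real renderer or physics engine): a shading inner loop is dominated by independent multiply-adds per pixel, while a physics solver spends its time on branchy contact tests and impulses that feed straight back into the bodies involved.

#include <cmath>

struct Vec3 { float x, y, z; };

static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Renderer-style work: a multiply-add per pixel, no dependencies between
// pixels, so it parallelizes trivially and tolerates memory latency.
float shadePixel(Vec3 normal, Vec3 lightDir, float albedo)
{
    float nDotL = dot(normal, lightDir);
    return nDotL > 0.0f ? nDotL * albedo : 0.0f;
}

// Physics-style work: a branchy sphere-sphere contact test plus an impulse
// that immediately modifies both bodies (equal unit masses, elastic bounce).
void resolveSphereContact(Vec3& velA, Vec3& velB, Vec3 posA, Vec3 posB,
                          float radiusA, float radiusB)
{
    Vec3 d = { posB.x - posA.x, posB.y - posA.y, posB.z - posA.z };
    float dist2 = dot(d, d);
    float r = radiusA + radiusB;
    if (dist2 >= r * r || dist2 == 0.0f) return;   // no contact: early out

    float dist = std::sqrt(dist2);
    Vec3 n = { d.x / dist, d.y / dist, d.z / dist };
    Vec3 relVel = { velB.x - velA.x, velB.y - velA.y, velB.z - velA.z };
    float vn = dot(relVel, n);
    if (vn > 0.0f) return;                         // already separating

    float j = -vn;                                 // impulse magnitude
    velA.x -= j * n.x; velA.y -= j * n.y; velA.z -= j * n.z;
    velB.x += j * n.x; velB.y += j * n.y; velB.z += j * n.z;
}

Both are 32-bit float math, sure, but the mix of branches, gathers and data dependencies is very different, and that's what you design the hardware around.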
Quote: That's not completely correct. Graphics cards are getting more and more general purpose, while at the same time they get faster! And I wouldn't call them 'simpler' either.
They would be even faster than they are if they had less programmability. And yes, they are much simpler than CPUs, in terms of architecture, instruction set, etc.
Quote: Original post by Kylotan
That is entirely false. The 'high end market' would not have settled for 16-bit graphics or lack of transform and lighting, which is what they pushed for way too long. Their high costs were down to their slow development model, not some sort of desire to aim at a different market.
Please be careful using words like "entirely false", I'm trying to have a useful discussion here and I do my research. Wikipedia: "3Dfx's decline is a matter of debate. [...] Voodoo cards were typically highly expensive, and left the mid and low end of the market to ATI and NVidia."
Aiming only for the high-end market, 'desired' or not, was clearly a mistake. AGEIA is unmistakably going for the high-end market as well. And I'm sure they'll even have some success there. But once more cost-effective dual-core and GPU based solutions hit the market, the ground will be swept from under their feet. What exactly will happen after that is harder to predict, but they're going to have to take this competition very seriously. Affordable solutions offering adequate performance can gain popularity much faster and eventually beat AGEIA out of the high-end market as well, just like NVIDIA and ATI outlived 3Dfx.
Quote: So is graphics. You're not making a valid point here.
Yes, so is graphics. That is my point. GPUs can handle physics just as well. And you get more floating-point power per dollar than with a PPU. With Direct3D 10 there are practically no API limitations either on how this floating-point power is used.
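Just to sketch what that looks like (hypothetical code, not actual Direct3D or HLSL; shown on the CPU purely to illustrate the pattern): keep the particle state in floating-point textures and run the integration as an independent per-element kernel, which is exactly the kind of work a pixel shader streams through.

// Sketch of the integration step as an independent per-element kernel, the
// way it would run in a pixel shader with particle state kept in float
// textures. Hypothetical code, not actual Direct3D or HLSL.
struct Float4 { float x, y, z, w; };

// One invocation per particle: it reads its own element and writes its own
// element, with no communication between elements -- a natural fit for a
// fragment/stream processor.
Float4 integratePosition(Float4 position, Float4 velocity, Float4 gravity, float dt)
{
    // v = v + g * dt
    Float4 v = { velocity.x + gravity.x * dt,
                 velocity.y + gravity.y * dt,
                 velocity.z + gravity.z * dt,
                 velocity.w };
    // x = x + v * dt; the updated velocity would go to a second render
    // target (or be recomputed in a second pass).
    Float4 p = { position.x + v.x * dt,
                 position.y + v.y * dt,
                 position.z + v.z * dt,
                 position.w };
    return p;
}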
Quote: The key is in which operations you optimise for. Having written both software renderers and physics engines, I can tell you they have very distinctive operations.
Please explain. Just asking: did you write a software renderer with 32-bit floating-point programmable shaders? I should probably add that I'm the author of swShader. I also have deep insight into GPU and CPU architecture, if that wasn't clear yet. Anyway, that's hardly relevant to the discussion. I'd love to know what you think is so different between the floating-point operations of a graphics card versus a physics card. Or do you know of any other operation(s) the PhysX chip has (or could have) that would give it a definite benefit over physics on a next-generation GPU?
Quote: They would be even faster than they are if they had less programmability. And yes, they are much simpler than CPUs, in terms of architecture, instruction set, etc.
Yes, there's some truth to that. That's why I said your original argument is not completely correct.
The problem is that there's no useful way to make GPUs 'faster' by making them less programmable. Nowadays transistor budgets are around 300 million at 90 nm, and you have to do something useful with that. Just to illustrate this: if NVIDIA went back to an NV3 (RIVA 128) architecture, even if massively parallelized, they'd end up with a bandwidth-limited chip not capable of any interesting effects. Can we define that as being faster? I think not. OK, this is an extreme example, but it clearly shows that programmability is a necessary step to actually increase practical performance.
Besides, you're not making much of a point because PhysX is a highly programmable chip.