
physx chip

Started by April 20, 2006 07:42 PM
223 comments, last by GameDev.net 18 years, 5 months ago
I think Sun already has one with like 4-8 cores or something.
Quote: Original post by hymerman
I'm thinking it's a bad idea. Firstly, there's the problem that you have to cater to the lowest common denominator; games that use PhysX will only be able to use it for eye candy, not anything gameplay-changing. It's not like graphics cards at all; everybody has a graphics card, but they vary in performance, whereas with the PhysX card it's somewhat binary, you either have it or you don't.

Secondly, I just don't like the idea of it. I like parallelism, but splitting tasks up like this is just silly. What's next, an AI card? A scripting card? There are only so many areas you can split it down into. Really, more work should be done on multi-core processors; they may not be as optimal, but they are more useful and scalable since they are generic and not tied to any one task.



Actually, AI will probably be such a use, but the computations involved are more generic and irregular (can't be pipelined). Potentially AI would use an order of magnitude more computation than physics or graphics and will need something more like a cluster computer (CPUs + memory with high-speed channels between them).


Possibly they will add a matrix unit to generic CPUs, which would lessen the performance difference between GPUs and CPUs -- creating a more versatile building block.


(Sorry for AP label... just too lazy to sign in)

Codeified, seriously, man, read the arguments. With your eyes, not your fingers. Your colleagues on this forum have been far too charitable in dealing with your hostile and ignorant manner -- I'll be direct.


Look, the GPU is not meant to do physics. GPGPU is an academic exercise, and is just a case of people seeing how far they can push their hardware. Hell, I can load a C compiler or interpreter onto the TI-83 I used in school, but that doesn't mean I should.

All this talk about doing physics on the GPU, and how amazing it will be next year, and all that... look at what others have said: Physics will run, yes, and prettily, yes, but will not be able to interact with the game world past a visual level.

Let me repeat that: While your character is watching leaves fall from a tree, the fall and descent of the leaves is effectively unknown to your CPU.

If the importance of that statement escapes you, you have more business writing screensavers than games. If one cannot read back what is being done in the world, AI can't interact with it, and netcode can't interact with it. This might suit your goldfish tank simulation perfectly, when you merely want to accurately depict thousands of bubbles and fishfood flakes tumbling down, but for the rest of the gaming industry, it is damned annoying.
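(To make that concrete, a minimal C++ sketch; the PhysicsWorld type and its bodyPosition call are hypothetical, invented only to show why gameplay code needs simulation results back on the CPU.)

#include <cstdio>

struct Vec3 { float x, y, z; };

// Hypothetical physics interface: wherever the simulation actually runs
// (CPU, PPU, or GPU), gameplay code needs the results back on the CPU.
struct PhysicsWorld {
    Vec3 bodyPosition(int /*bodyId*/) const { return {0.0f, 2.5f, 0.0f}; } // stub result
};

int main() {
    PhysicsWorld world;
    Vec3 crate = world.bodyPosition(42); // read-back: simulation -> game logic
    // Without this read-back, AI and netcode can't react to the crate at all.
    if (crate.y < 1.0f)
        std::printf("crate has landed; AI can path over it\n");
    else
        std::printf("crate still falling; AI must route around it\n");
    return 0;
}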

~

You also make this odd assumption that, while graphics card performance increases, workload will stay the same. This has been pointed out several times to you, but mostly you've ignored it.

I'd much rather have a game with realtime raytracing and Doom 3 or even Half-Life 2 physics than a game that takes shortcuts and skimps on graphics simply so it can save me some CPU cycles and do a bit of physics. Why in hell would I want a graphics card that does graphics decently and physics decently instead of a graphics card/physics card combination that does graphics excellently and physics excellently?

~

Anyway, to just use some very simple logic... I've talked with Ageia reps and gotten word that, if not now, then soon, a PCI-Express card will be available. So, assuming the same setup as a graphics card, same bandwidth (which is unfair, since we know, and it has been pointed out, that memory access on the GPU is going to be much slower than on the PPU):

WHAT RATIONAL REASON DO YOU HAVE FOR CLAIMING THAT A CARD WHICH MUST DO TWO MATHEMATICALLY HEAVY TASKS WILL OUTPERFORM TWO CARDS SPLITTING THE TASKS EVENLY?!

(I'm not even mentioning that the latter case has both cards specialized for the task they were *designed* to do)

Here is a tip... fifty percent of one plus fifty percent of one is, in fact, less than one hundred percent of two.

~

Oh, and your other point about a next-gen GPU having a PPU onboard, by virtue of having extra vector units and SIMDs?

My computer has a lot of resistors and capacitors and coils in it... doesn't make it a radio. My modem has a phone jack, and the other telephonous crystals and DSP stuff in it... doesn't make it a telephone (though, in a pinch, I could get it to be one -- but that would be a waste, as would USING A GPU AS A PPU). Hell, my hard drive has at least three or four platters in it, and a motor... doesn't make it a racecar.

Note the fundamental difference between being comprised of the same materials and having the same function, Codeified.

~

I sincerely hope that the PPU takes off. If physics is covered by another card, and graphics are covered by yet another card, then perhaps, just perhaps, developers will get around to facing the reality that they can afford to code and write some genuinely intriguing AI into their games.

I'd much rather see A* running in real time for a thousand units on a map than ten thousand boxes sailing through the air in perfect physical form.
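(For the curious, here's a minimal sketch of that kind of A* in C++; the grid, 4-way movement, and Manhattan heuristic are illustrative choices, not anyone's engine code.)

#include <cstdio>
#include <cstdlib>
#include <queue>
#include <vector>

struct Node { int x, y; float g, f; };
struct Cmp  { bool operator()(const Node& a, const Node& b) const { return a.f > b.f; } };

// Manhattan distance: admissible for 4-way grid movement.
static float heuristic(int x, int y, int gx, int gy) {
    return float(std::abs(gx - x) + std::abs(gy - y));
}

// Returns true if a path exists from (sx,sy) to (gx,gy) on a W x H grid
// where blocked[y*W+x] marks impassable cells.
bool aStar(const std::vector<bool>& blocked, int W, int H,
           int sx, int sy, int gx, int gy)
{
    std::priority_queue<Node, std::vector<Node>, Cmp> open;
    std::vector<float> best(W * H, 1e30f);         // cheapest g found per cell

    open.push({sx, sy, 0.0f, heuristic(sx, sy, gx, gy)});
    best[sy * W + sx] = 0.0f;

    const int dx[4] = {1, -1, 0, 0}, dy[4] = {0, 0, 1, -1};
    while (!open.empty()) {
        Node n = open.top(); open.pop();
        if (n.x == gx && n.y == gy) return true;   // reached the goal
        if (n.g > best[n.y * W + n.x]) continue;   // stale queue entry
        for (int i = 0; i < 4; ++i) {
            int nx = n.x + dx[i], ny = n.y + dy[i];
            if (nx < 0 || ny < 0 || nx >= W || ny >= H || blocked[ny * W + nx]) continue;
            float g = n.g + 1.0f;                  // uniform step cost
            if (g < best[ny * W + nx]) {
                best[ny * W + nx] = g;
                open.push({nx, ny, g, g + heuristic(nx, ny, gx, gy)});
            }
        }
    }
    return false; // open set exhausted: no path
}

int main() {
    std::vector<bool> blocked(8 * 8, false);
    for (int y = 0; y < 7; ++y) blocked[y * 8 + 4] = true; // a wall with one gap
    std::printf("path found: %s\n", aStar(blocked, 8, 8, 0, 0, 7, 7) ? "yes" : "no");
    return 0;
}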

Mebbe I'm alone in this, but for the future of the industry, I hope that we get around to making some captivating worlds with the new tech, instead of bickering over whether or not it is cost-effective or even necessary. My old TI could do the math I use my computer for nowadays; that doesn't make my PC any worse for being overkill :> .

-AvengingBob
They now have a PhysX-based card on Newegg, made by BFG: "BFG Tech PhysX Processing Unit BFGRPHYSX128P Physics Card - Retail", $300 (at time of this post). Plus it looks like it needs a Molex connector for power.

"I can't believe I'm defending logic to a turing machine." - Kent Woolworth [Other Space]

Quote: Original post by C0D1F1ED
In a nutshell: GPU + PPU = next-generation (GP)GPU. DirectX 9 introduced programmable graphics shaders. DirectX 10 introduces programmable general-purpose (unified) shaders. With this in mind I can't see how PhysX can possibly survive long.

Wasn't DX8 the one to introduce programmable graphics shaders, or did I miss something?
Quote: I sincerely hope that the PPU takes off. If physics is covered by another card, and graphics are covered by yet another card, then perhaps, just perhaps, developers will get around to facing the reality that they can afford to code and write some genuinely intriguing AI into their games.

I'd much rather see A* running in real time for a thousand units on a map than ten thousand boxes sailing through the air in perfect physical form.

Mebbe I'm alone in this, but for the future of the industry, I hope that we get around to making some captivating worlds with the new tech, instead of bickering over whether or not it is cost-effective or even necessary. My old TI could do the math I use my computer for nowadays; that doesn't make my PC any worse for being overkill :> .
Well said.
Free Mac Mini (I know, I'm a tool)
Quote: Original post by Anonymous Poster
Let me repeat that: While your character is watching leaves fall from a tree, the fall and descent of the leaves is effectively unknown to your CPU.

If the importance of that statement escapes you, you have more business writing screensavers than games. If one cannot read back what is being done in the world, AI can't interact with it, and netcode can't interact with it.

The PCI-Express bus graphics cards use is full-duplex. This means you can read back data at very high bandwidth. No problem making things interactive.

The legacy PCI bus used by PhysX has far less bandwidth. It also has to share this bandwidth with other PCI devices. This means that if you have, for example, a PCI RAID controller, it can completely block access to the physics card during a burst.
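Some rough numbers make the difference concrete. A back-of-the-envelope sketch, assuming ~133 MB/s for shared legacy PCI (32-bit/33 MHz) and ~4 GB/s per direction for PCIe x16; the object count and per-object payload are illustrative:

#include <cstdio>

int main() {
    const double objects   = 10000.0;          // rigid bodies read back per frame
    const double bytesEach = 64.0;             // e.g. position, orientation, velocities
    const double payload   = objects * bytesEach;

    const double pciBps  = 133.0e6;            // legacy PCI peak, shared with other devices
    const double pcieBps = 4.0e9;              // PCIe x16, per direction

    std::printf("payload:  %.1f KB per frame\n", payload / 1024.0);
    std::printf("PCI:      %.3f ms per frame\n", payload / pciBps  * 1000.0);
    std::printf("PCIe x16: %.3f ms per frame\n", payload / pcieBps * 1000.0);
    return 0;
}

At these assumed rates the PCI read-back alone eats several milliseconds of a 16 ms frame, while PCIe barely notices it.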
Quote: You also make this odd assumption that, while graphics card performance increases, workload will stay the same. This has been pointed out several times to you, but mostly you've ignored it.

I never made that assumption, so don't put words in my mouth. In fact I'm convinced that future GPUs will be capable of vastly improved graphics processing as well as handling physics. Never did I ignore that graphics workload will increase.
Quote: Why in hell would I want a graphics card that does graphics decently and physics decently instead of a graphics card/physics card combination that does graphics excellently and physics excellently?

Because for the money that buys you a mediocre graphics card and a physics card, you can buy a state-of-the-art graphics card capable of superior graphics that can handle physics perfectly.
Quote: Anyway, to just use some very simple logic... I've talked with Ageia reps and gotten word that, if not now, then soon, a PCI-Express card will be available.

What do you mean by that first sentence, with respect to the second one?
Quote: So, assuming the same setup as a graphics card, same bandwidth (which is unfair, since we know, and it has been pointed out, that memory access on the GPU is going to be much slower than on the PPU).

I'd love to know where you got that data from. Do you have the R600 and/or G80 specifications?
Quote: WHAT RATIONAL REASON DO YOU HAVE FOR CLAIMING THAT A CARD WHICH MUST DO TWO MATHEMATICALLY HEAVY TASKS WILL OUTPERFORM TWO CARDS SPLITTING THE TASKS EVENLY?!

Several. First of all, we have to look at the combined cost again. For the price of a physics card you can get a second graphics card, or a much faster one. And it's safe to assume GPUs have a higher FLOPS/$ ratio: NVIDIA and ATI have already spent years optimizing their SIMD units, and Direct3D 10 chips have probably been in development since the introduction of Direct3D 9.

Secondly, synchronization and data transfer between three devices is more complex than between two. It's clear that the physics processor, CPU, and graphics processor each need (part of) the physics data. That becomes easier and faster when graphics and physics are combined on one device.
Quote: Oh, and your other point about a next-gen GPU having a PPU onboard, by virtue of having extra vector units and SIMDs?

My computer has a lot of resistors and capacitors and coils in it... doesn't make it a radio. My modem has a phone jack, and the other telephonous crystals and DSP stuff in it... doesn't make it a telephone (though, in a pinch, I could get it to be one -- but that would be a waste, as would USING A GPU AS A PPU). Hell, my hard drive has at least three or four platters in it, and a motor... doesn't make it a racecar.

Yet it has been shown that both the CPU and GPU are already capable of physics processing. All that's required is SIMD units and access to memory. And as I've said before, the Direct3D 10 specifications allow direct unfiltered memory access for the GPU. So there's no need for ridiculous examples of devices not being capable of performing multiple tasks. GPUs are getting more and more general-purpose, and that's a good thing.
Quote: I sincerely hope that the PPU takes off. If physics is covered by another card, and graphics are covered by yet another card, then perhaps, just perhaps, developers will get around to facing the reality that they can afford to code and write some genuinely intriguing AI into their games.

With physics being processed on the GPU you have an equal opportunity to free up CPU processing power for A.I.

Furthermore, I'm afraid you're underestimating the importance of software. Genuinely intriguing A.I. is possible today; people just have to put more effort into it. Hardware isn't going to make that any easier. Blaming CPU processing power limitations for the lack of better A.I. is a lame excuse. Processing physics on another processor isn't going to free up that much more capability. In fact, with dual-core you can already move physics to another core, and I have yet to see a demo using dual-core capabilities to the fullest, even though dual-core CPUs have been available for a while now. So with physics cards appearing in stores today, it's still going to require a big investment in software before we actually get a better game experience.
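A minimal sketch of that dual-core point (in modern C++ for brevity; stepPhysics and stepGame are stubs standing in for real engine work):

#include <atomic>
#include <cstdio>
#include <thread>

std::atomic<bool> running{true};

void stepPhysics() { /* integrate rigid bodies for one tick (stub) */ }
void stepGame()    { /* AI, input, rendering for one frame (stub) */ }

int main() {
    // Physics spins on the second core...
    std::thread physics([] { while (running.load()) stepPhysics(); });

    // ...while the main core stays free for AI and everything else.
    for (int frame = 0; frame < 1000; ++frame)
        stepGame();

    running.store(false);
    physics.join();
    std::printf("done\n");
    return 0;
}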

And last but not least, the PC market for physics cards is a shrinking one, not a growing one. They're not going to last.
Quote: Original post by Anonymous Poster
Wasn't DX8 the one to introduce programmable graphics shaders, or did I miss something?

Yes, but it was actually a fixed-function pipeline with more temporary registers (cf. the eight-instruction arithmetic shader limit matching the eight fixed-function pipeline stages). Shaders were more like a textual representation of a fixed-function configuration (and that would have been possible in earlier Direct3D versions as well). So it's debatable where we draw the line between fixed-function and programmable. In my view only ps 2.0 and up are really programmable, offering truly new possibilities. And that started with Direct3D 9.
As I predicted, the next-generation Intel CPUs are far more powerful than their predecessors: Anandtech: Intel Core versus AMD's K8 architecture.

The width of the SSE execution units doubled from 64-bit to 128-bit, and there are three of them instead of one. Furthermore, dual-core will become mainstream, and the number of SSE registers has doubled compared to '32-bit' processors. So that's a hell of a lot of vector processing power.
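A minimal sketch of what those 128-bit SSE units buy you, integrating four particle positions per instruction; the timestep and structure-of-arrays layout are illustrative:

#include <cstdio>
#include <xmmintrin.h>

int main() {
    alignas(16) float px[4] = {0.0f, 1.0f, 2.0f, 3.0f}; // x positions of 4 particles
    alignas(16) float vx[4] = {1.0f, 1.0f, 1.0f, 1.0f}; // x velocities
    const __m128 dt = _mm_set1_ps(0.016f);              // ~60 Hz timestep

    // p += v * dt for four particles in one multiply and one add.
    __m128 p = _mm_load_ps(px);
    __m128 v = _mm_load_ps(vx);
    p = _mm_add_ps(p, _mm_mul_ps(v, dt));
    _mm_store_ps(px, p);

    std::printf("%.3f %.3f %.3f %.3f\n", px[0], px[1], px[2], px[3]);
    return 0;
}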
Yeah, but the PPUs are gonna have X amount of registers and SIMD units specific to physics processing, and as another member posted, they'll have PCI-E bandwidth access. So in the end the PPU is gonna be just as useful as a GPU.

Personally, unless I was playing a game from 2003 or older, I would not want my GPU doing physics calculations while it was doing graphics processing.

Beginner in Game Development?  Read here. And read here.

 

This topic is closed to new replies.
