
physx chip

Quote: Original post by C0D1F1ED
I assumed you had the chance to profile the latest SDK with an actual game. You don't need the source to have a look at the code. In CodeAnalyst I simply double-clicked on the hotspots.

Ah yes, OK, that's what I meant. I was using the PhysX SDK to compile and profile an actual game, which is where I got most of my first-hand information.
I followed this discussion for a few pages and then lost track of it.
One question for C0D1F1ED though: rendering HDR in HL2 took quite a hit on my 6800 GT. So if HL2 did its physics calculations on the GPU as well, what would happen?
Do you really expect next-gen games, which need to look next-gen to sell, to downgrade how they look in order to process physics on the GPU, just so the game is actually playable on next-gen graphics cards?
Quote: Original post by Saruman
Ah yes, OK, that's what I meant. I was using the PhysX SDK to compile and profile an actual game, which is where I got most of my first-hand information.

Cool, but did you get a chance to see how well it is optimized with SSE?
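For context, that is exactly the kind of thing a profiler like CodeAnalyst exposes: whether a hot loop runs scalar x87 code or packed SSE, which handles four floats per instruction. Below is a minimal sketch of the difference for a particle integration loop (my own illustration, not PhysX's actual code), assuming a structure-of-arrays layout, 16-byte-aligned buffers, and a particle count that is a multiple of four:

#include <xmmintrin.h>  // SSE intrinsics

// Hypothetical structure-of-arrays particle storage, for illustration only.
struct Particles {
    float *x, *y, *z;     // positions, 16-byte aligned
    float *vx, *vy, *vz;  // velocities, 16-byte aligned
    int count;            // multiple of 4 for simplicity
};

// Scalar version: one particle per iteration.
void integrate_scalar(Particles &p, float dt) {
    for (int i = 0; i < p.count; ++i) {
        p.x[i] += p.vx[i] * dt;
        p.y[i] += p.vy[i] * dt;
        p.z[i] += p.vz[i] * dt;
    }
}

// SSE version: four particles per iteration, pos += vel * dt in packed form.
void integrate_sse(Particles &p, float dt) {
    const __m128 vdt = _mm_set1_ps(dt);
    for (int i = 0; i < p.count; i += 4) {
        _mm_store_ps(p.x + i, _mm_add_ps(_mm_load_ps(p.x + i),
                                         _mm_mul_ps(_mm_load_ps(p.vx + i), vdt)));
        _mm_store_ps(p.y + i, _mm_add_ps(_mm_load_ps(p.y + i),
                                         _mm_mul_ps(_mm_load_ps(p.vy + i), vdt)));
        _mm_store_ps(p.z + i, _mm_add_ps(_mm_load_ps(p.z + i),
                                         _mm_mul_ps(_mm_load_ps(p.vz + i), vdt)));
    }
}

On a loop like this, the packed version can approach four times the throughput of the scalar one, which is why how well the SDK uses SSE matters so much.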
Quote: Original post by Anonymous Poster
I followed this discussion for a few pages and then lost track of it.
One question for C0D1F1ED though: rendering HDR in HL2 took quite a hit on my 6800 GT. So if HL2 did its physics calculations on the GPU as well, what would happen?
Do you really expect next-gen games, which need to look next-gen to sell, to downgrade how they look in order to process physics on the GPU, just so the game is actually playable on next-gen graphics cards?



"Next-gen graphics cards" is such a blanket title. It is emperical that the market calls for a wide range of capabilities per generation. The PPU only adds more flux to this. Kind of annoying.

SLI does this too. Surprise... They're the same thing (for the 3rd time?).

What happened to single-slot, multi-core graphics boards? Too hot? I bet Number 9 would have something to say about this if they were still kickin'... not to sound old, or like an electronics engineer. :)

That would be true progress IMHO.

Personally, I would use the machine's SSE capabilities for my own CPU-side work.

[Edited by - taby on May 26, 2006 3:24:43 AM]
Taby: Alright, I should have avoided the term "next-gen". But let's look at UT2007 as an example then. Would you want it to look worse so that it could also process physics on the GPU? (I'm not saying it looks bad; I think it looks awesome.) And I assume they would need to downgrade its looks to support physics on the GPU as well; I just have a hard time seeing any developer justifying that to hardcore gamers and consumers.
Forgot to add that I really do think they would have to justify it when another developer steps up with an equally awesome-looking eye-candy game that runs faster on the same hardware because it doesn't calculate physics on the GPU. How would they explain that to customers and gamers?
If the demo box had 2+ GPUs installed, I wonder how it would compare, in performance versus cost, to a box with one GPU and one or more PPUs?

NVIDIA published a general-purpose GPU math toolkit for the Quadro line of cards. Where is the one for the 6800 GT? It magically dropped out of the realm of possibility as soon as Ageia came along. Hrm.
Quote: Original post by Anonymous Poster
Rendering HDR in HL2 took quite a hit on my 6800 GT. So if HL2 did its physics calculations on the GPU as well, what would happen?

Why would you do that in the first place? Half-Life 2 typically has no more than a dozen moving objects.

Anyway, if you did it on the GPU it would lower the framerate a bit, obviously. But the real question is: what's most cost-effective? If you sold your GeForce 6800 GT today for $150 and added $250 (the price of a PhysX card), you could buy a GeForce 7800 GTX, a GeForce 7900 GT, or a Radeon X1900 XT. Or, for the price of a PhysX card alone, you could buy a second GeForce 6800 GT for an SLI configuration. I'm sure it would be able to handle the physics processing and still get you much higher graphics performance.
Quote: Do you really expect next-gen games, which need to look next-gen to sell, to downgrade how they look in order to process physics on the GPU, just so the game is actually playable on next-gen graphics cards?

Next-generation games will have more advanced physics, no doubt, but they can't expect mainstream gamers to pay $250 just to enjoy it. GPU- and/or CPU-based solutions will be necessary to make those games popular. If Unreal Tournament 2007 demands a $250 PhysX card for advanced physics and Quake Wars runs fine without it, which do you think will be the more popular game? Actually... that's hard to answer, because physics alone doesn't sell games. And that's exactly why they won't take the risk.
Quote: Original post by Anonymous Poster
Forgot to add that I really do think they would have to justify it when another developer steps up with an equally awesome-looking eye-candy game that runs faster on the same hardware because it doesn't calculate physics on the GPU. How would they explain that to customers and gamers?

How would they tell gamers with a high-end Direct3D 10 card and/or a Core 2 Duo that the game can't run at high quality because the developers didn't bother using the capabilities of that GPU/CPU to the fullest? Rephrased: what would they tell their customers when other games do manage to display superb physics and graphics without PhysX?
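To make the Core 2 Duo point concrete, here is a minimal sketch of splitting a physics integration pass across two cores. The RigidBody type and step function are hypothetical stand-ins, and std::thread is used for brevity where a 2006 engine would have used Win32 threads directly:

#include <thread>
#include <vector>

// Hypothetical stand-ins for a real engine's body type and solver step.
struct RigidBody { float pos[3]; float vel[3]; };

void step_body(RigidBody &b, float dt) {
    for (int i = 0; i < 3; ++i)
        b.pos[i] += b.vel[i] * dt;  // trivially independent per body
}

// Split the per-body integration across two cores, as a dual-core CPU
// like the Core 2 Duo allows; each half touches disjoint data.
void step_world(std::vector<RigidBody> &bodies, float dt) {
    const size_t half = bodies.size() / 2;
    std::thread worker([&] {
        for (size_t i = 0; i < half; ++i) step_body(bodies[i], dt);
    });
    for (size_t i = half; i < bodies.size(); ++i) step_body(bodies[i], dt);
    worker.join();
}

Collision resolution is harder to parallelize than this, but the point stands: an idle second core is performance left on the table.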
Found my account; the three most recent AP posts are mine.

Quote: Original post by C0D1F1ED
Quote: Original post by Anonymous Poster
Rendering HDR in HL2 took quite a hit on my 6800 GT. So if HL2 did its physics calculations on the GPU as well, what would happen?

Why would you do that in the first place? Half-Life 2 typically has no more than a dozen moving objects.
Very true. I was just giving an example of something that already strains the graphics card, and how adding physics on top wouldn't exactly help.

Quote: Original post by C0D1F1ED
Anyway, if you did it on the GPU it would lower the framerate a bit, obviously. But the real question is: what's most cost-effective? If you sold your GeForce 6800 GT today for $150 and added $250 (the price of a PhysX card), you could buy a GeForce 7800 GTX, a GeForce 7900 GT, or a Radeon X1900 XT. Or, for the price of a PhysX card alone, you could buy a second GeForce 6800 GT for an SLI configuration. I'm sure it would be able to handle the physics processing and still get you much higher graphics performance.
Yeah, there's always the cost of the card... unless, of course, it picks up and gets really popular. I think you'd need to upgrade graphics cards more often than the PPU, but that remains to be seen, I guess.

Quote: Original post by C0D1F1ED
Quote: Do you really expect next-gen games, which need to look next-gen to sell, to downgrade how they look in order to process physics on the GPU, just so the game is actually playable on next-gen graphics cards?

Next-generation games will have more advanced physics, no doubt, but they can't expect mainstream gamers to pay $250 just to enjoy it. GPU- and/or CPU-based solutions will be necessary to make those games popular. If Unreal Tournament 2007 demands a $250 PhysX card for advanced physics and Quake Wars runs fine without it, which do you think will be the more popular game? Actually... that's hard to answer, because physics alone doesn't sell games. And that's exactly why they won't take the risk.

But they (probably) won't require the card in the first place, since Ageia has a software fallback. And as for the risk involved, do you mean they're risking something by supporting Ageia? Ghost Recon: Advanced Warfighter supports it, and I hadn't even heard of Ageia for long before I saw GR:AW. I think the developers just looked at the benefits: by using Ageia's physics they also support dedicated physics hardware, and if the user doesn't have the card, it falls back to software. Is there really that much risk involved in that?
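For reference, the fallback looked roughly like the following sketch under the PhysX 2.x API of that era: ask for a hardware scene and drop to the software solver if it can't be created. The names are from memory of that SDK generation, so check the headers of your version:

#include <NxPhysics.h>

// Create the SDK object, then try for a PPU-backed scene first.
NxPhysicsSDK *sdk = NxCreatePhysicsSDK(NX_PHYSICS_SDK_VERSION);

NxSceneDesc sceneDesc;
sceneDesc.gravity = NxVec3(0.0f, -9.81f, 0.0f);
sceneDesc.simType = NX_SIMULATION_HW;  // request the PhysX card
NxScene *scene = sdk->createScene(sceneDesc);
if (!scene) {
    // No PPU present: fall back to the CPU solver transparently.
    sceneDesc.simType = NX_SIMULATION_SW;
    scene = sdk->createScene(sceneDesc);
}

The game code above the scene doesn't change either way, which is exactly why supporting Ageia costs developers so little.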


Edit:
Quote: Original post by C0D1F1ED
How would they tell gamers with a high-end Direct3D 10 card and/or a Core 2 Duo that the game can't run at high quality because the developers didn't bother using the capabilities of that GPU/CPU to the fullest? Rephrased: what would they tell their customers when other games do manage to display superb physics and graphics without PhysX?
What I doubt, and I would love for anyone to tell me how this will turn out, is whether another game could display better physics and graphics than one using an equal graphics card paired with a PPU. That's really the only argument I have.

I've only seen one Ageia demo movie, the CellFactor one. The action displayed in that movie alone makes me want to shell out the cash for a card. It says it's multiplayer, and it has A LOT of debris and objects flying around that (at least seemingly) affect the other players. How they solved the synchronization issues, I have no idea (one plausible approach is sketched below). But if this trend continues, with lots of physics that matter even in multiplayer thanks to PhysX, I'm all for it.
If it never gets better than that movie, though, I'm not so sure I would buy it.
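One plausible approach (purely a guess on my part, not CellFactor's actual code) is a server-authoritative simulation that sends quantized snapshots of each debris object, with clients interpolating between them. A sketch of the per-object wire format:

#include <cmath>
#include <cstdint>

// Hypothetical snapshot of one physics object: 16 bytes on the wire.
struct DebrisSnapshot {
    uint16_t id;       // object index in the shared scene
    int16_t  pos[3];   // position quantized to 1 cm units
    int16_t  quat[4];  // unit quaternion scaled to [-32767, 32767]
};

DebrisSnapshot quantize(uint16_t id, const float pos[3], const float quat[4]) {
    DebrisSnapshot s;
    s.id = id;
    for (int i = 0; i < 3; ++i)
        s.pos[i] = (int16_t)std::lround(pos[i] * 100.0f);      // meters -> cm
    for (int i = 0; i < 4; ++i)
        s.quat[i] = (int16_t)std::lround(quat[i] * 32767.0f);  // [-1,1] -> int16
    return s;
}

At 16 bytes per object, a few hundred active objects fit in a handful of packets per tick, so a scene like that seems feasible, on a LAN at least.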

(Really struggling to update my post, getting server errors all the time :\)

[Edited by - Tunah on May 26, 2006 4:30:28 AM]

