
physx chip

Perhaps I didn't understand, but isn't there still a multi-threading issue regardless of whether you use a different core or a PPU?

--www.physicaluncertainty.com
--linkedin
--irc.freenode.net#gdnet

Now, I'm not saying that AGEIA will absolutely come out on top. But I see a lot of potential in M$'s XNA and managed code (managed code bogs down the CPU more, which a PPU could help offset) if it lives up to the hype.

Can you use the PhysX API with managed code? If you could combine the rapid development of managed code with a PPU to offset its performance cost, you could possibly have a winner. I need to do more research, I guess.
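From what I gather, the SDK itself is native C++, so at minimum you could wrap it behind a flat C interface and P/Invoke that from C#. Here's a minimal sketch of what I mean (the names below are invented for illustration; they are not part of AGEIA's actual API):

[code]
// Hypothetical flat-C wrapper around a native physics SDK, exported so
// managed (C#) code can P/Invoke it. None of these names are AGEIA's.
extern "C" __declspec(dllexport) void* Physics_CreateScene(float gravityY);
extern "C" __declspec(dllexport) void  Physics_Step(void* scene, float dt);
extern "C" __declspec(dllexport) void  Physics_ReleaseScene(void* scene);

// Each managed-to-native transition has a cost, so you'd keep the
// interface coarse-grained: step the whole scene once per frame rather
// than making per-body calls across the boundary.
[/code]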

What do you guys think about this?

Also, I think, if the PPU does take off over the next couple years, it will be only a matter of time before we see an AIPU, doncha think?
Quote: Original post by jjd
Perhaps I didn't understand, but isn't there still a multi-threading issue regardless of whether you use a different core or a PPU?


Yeah, that's what I was trying to get at.
I feel I should chime in again, now that some sites have put up benchmarks for different games that will be using the PPU.
[sarcasm] The votes are in, the new games have worse framerates with the PPU!
Nobody should buy one![/sarcasm]

Seems there are very few side-by-side comparisons of the computing power of the PPU versus a CPU running the PhysX SDK. The comparisons that do exist at the moment are based around games that greatly increase the number of physical entities in the scene, which likely makes the GPU more fill-rate limited because of the larger number of alpha-blended particles.

Personally, after studying different SPH techniques, running into an AMD X2 chugging at 4000 particles, and then seeing the SPH demo for the PPU, I still think there is a lot of potential that people aren't seeing in the PPU. Especially since it has better read-back to system RAM than a GPU offers at the moment, there is more potential right now to have the extra physics interact with the game mechanics (like the goo pushing stuff around in the SPH demo).
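To give an idea of where the CPU time goes: a naive SPH density pass is O(n^2), which is exactly what chokes at 4000 particles if you don't add a spatial grid. A minimal sketch of the standard poly6 density sum (illustrative, not my actual code):

[code]
// Naive SPH density pass, O(n^2): 4000 particles means 16 million pair
// tests per step. A uniform grid cuts this to near-linear, which is
// where most of the CPU-side optimization effort goes.
struct Particle { float x, y, z, density; };

void computeDensities(Particle* p, int n, float h, float mass)
{
    const float h2 = h * h;
    // poly6 kernel normalization: 315 / (64 * pi * h^9)
    const float poly6 = 315.0f / (64.0f * 3.14159265f * h2*h2*h2*h2 * h);
    for (int i = 0; i < n; ++i)
    {
        float d = 0.0f;
        for (int j = 0; j < n; ++j)
        {
            float dx = p[i].x - p[j].x;
            float dy = p[i].y - p[j].y;
            float dz = p[i].z - p[j].z;
            float r2 = dx*dx + dy*dy + dz*dz;
            if (r2 < h2)
            {
                float t = h2 - r2;
                d += mass * poly6 * t * t * t;  // W(r,h) = k * (h^2 - r^2)^3
            }
        }
        p[i].density = d;
    }
}
[/code]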
Perhaps someone could answer this question too:
How does the PhysX API make coding physics easier (or harder)??

I think ease of development is a factor that hasn't been addressed, but I may be speaking above my pay grade on this.
Quote: Original post by usurper
1) Having an extra dedicated PU is going to do nothing but allow devs to create more enriched games by freeing up the CPU for other tasks.

2) A PPU is pointless because the second core on our CPU is going to be “adequate” to do all that the PPU does.

Let's first get something straight. I absolutely agree with point 1. Having a PhysX card will open up new possibilities for enriched games. So I'm sorry, but you've divided the 'teams' wrongly.

And I also agree with point 2, except for the "pointless". See, the problem is that not everybody will have a PhysX card, because in the low-end market nobody spends $250 on gaming hardware, and in the mid-end market people want real value for their money. In this thread I've collected enough proof that a CPU will indeed be "adequate" for more advanced physics, and that $250 extra buys you equivalent or more physics processing power delivered by a Direct3D 10 GPU (you forgot to mention this one).
Quote: First of all, I remember when 640k of system mem was "adequate". Nothing is ever adequate for long. If you don't realize this... then you, sir, are a moron. I apologize for my childish language.

Oh, I realize this. But I also realize that even though we could create some unbelievable games on a Cray supercomputer with an array of Quadro or FireGL graphics cards, nobody's going to buy that hardware or the games I create for it.

So let's stop using silly, obvious extremes now, OK? Cost-effectiveness is of primary importance for the long-term success of any product, and unfortunately, at this moment, I don't see AGEIA's PhysX as a cost-effective solution.
Quote: Secondly, no matter how many cores you have, a dedicated PU will always beat out a gen-purpose CPU, sync and bandwidth issues aside. And you still have these issues with multi-threading. (I mean are you kidding me? Do you not see this?)

A Core 2 Duo can deliver 48 GFLOPS versus 20 GFLOPS for PhysX. For a quad-core, which is on schedule for 2007, that will be 96 GFLOPS and possibly more. How is that possible? Well, Intel has been building chips since 1968 and is nowadays producing them on the latest 65 nm technology (and then some), while AGEIA is still getting its feet wet and uses a 130 nm process, most likely with a standard-cell design.
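(Sanity-checking that 48 GFLOPS figure: each Core 2 core can issue one 4-wide SSE add and one 4-wide SSE multiply per cycle, so 2 cores x 3 GHz x 8 single-precision ops per cycle = 48 GFLOPS. The 3 GHz clock is my assumption; the per-cycle issue rates are Core 2's.)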
Quote: Finally, the success of the PPU is merely a factor of how it is used by developers, and nothing else. This I can assure you!! If devs create “really good” games that have solid performance-benefit when employing the PPU, then the PPU will most certainly sell.

All true, but you're missing one crucial point here. If any competitor manages to deliver the required level of physics processing power at a lower price, PhysX is not going to sell at all. Both CPU and GPU manufacturers have their eyes on this market, and this I can assure you: it's going to get really tough for AGEIA no matter what.
Quote: I think $250 is a tad expensive, but price should diminish over time. Drop it to $100 and you got a mass-market seller.

Well, looking at the GPU market, performance increases about 1.5x per year. So if we make the bold assumption that a PPU's price can drop at the same rate, we need 2 years and 3 months to get one for $100. So by late 2008 AGEIA is ready for the mass market.
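(The arithmetic, for anyone checking: 250 / 1.5^t = 100 gives t = ln(2.5) / ln(1.5), which is about 2.26 years, i.e. roughly 2 years and 3 months.)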

By that time, we'll have octa-core CPUs and second-generation Direct3D 10 graphics cards...
Quote: ...and Codified is an idiot.

It's not a sign of great intelligence to call someone an idiot. After all, we're trying to predict the future here, and an opinion is only as good as the technical arguments behind it. So please do convince us all of your superiority, or hold your peace. Thank you.

[Edited by - C0D1F1ED on May 26, 2006 9:19:00 PM]
Quote: Original post by KulSeran
...running into an AMD X2 chugging at 4000 particles...

I'm sorry but I have to ask: Was this fully SSE optimized?
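By "fully SSE optimized" I mean hand-written intrinsics processing four values per instruction, not just compiler switches. A minimal sketch, assuming a 16-byte-aligned structure-of-arrays layout (my assumption about how the data would be stored):

[code]
// Sketch: integrate 4 particle positions per iteration with SSE
// intrinsics. Assumes n is a multiple of 4 and the arrays are aligned.
#include <xmmintrin.h>

void integrate(float* pos, const float* vel, float dt, int n)
{
    __m128 vdt = _mm_set1_ps(dt);
    for (int i = 0; i < n; i += 4)
    {
        __m128 p = _mm_load_ps(pos + i);
        __m128 v = _mm_load_ps(vel + i);
        p = _mm_add_ps(p, _mm_mul_ps(v, vdt));  // p += v * dt, 4 lanes at once
        _mm_store_ps(pos + i, p);
    }
}
[/code]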
Quote: Especially since it has better read-back to system RAM than a GPU offers at the moment...

How's that? At the moment PhysX hogs legacy PCI bandwidth, while a GPU has full x16 PCI-Express bandwidth for accessing system RAM.
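(The raw numbers, for context: 32-bit/33 MHz PCI peaks at 133 MB/s, shared with everything else on the bus, while PCI-Express x16 offers about 4 GB/s in each direction. That's roughly a thirty-fold difference in favor of the GPU.)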
Quote: Original post by usurper
How does the PhysX API make coding physics easier (or harder)??

I think ease of development is a factor that hasn't been addressed, but I may be speaking above my pay grade on this.

Considering that it uses the same API across all hardware, it should make no difference for ease of development. Its features are roughly comparable to Havok's, so it doesn't have a significant advantage in this area over the competition either.
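To illustrate the point (going from the SDK docs from memory, so treat the exact identifiers as approximate), the simulation loop looks the same whether it runs in software or on the card:

[code]
// PhysX 2.x-era usage as I recall it from the docs; exact names
// approximate. The same calls run in software or on the PPU.
NxPhysicsSDK* sdk = NxCreatePhysicsSDK(NX_PHYSICS_SDK_VERSION);

NxSceneDesc sceneDesc;
sceneDesc.gravity = NxVec3(0.0f, -9.81f, 0.0f);
NxScene* scene = sdk->createScene(sceneDesc);

// Per frame: kick off the step, do other work, then synchronize.
scene->simulate(1.0f / 60.0f);
scene->flushStream();
scene->fetchResults(NX_RIGID_BODY_FINISHED, true);
[/code]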

It's really quite ironic that you first call me an idiot and then you can't even answer a basic question yourself...
I didn't go through the whole thread; I only read the first four pages.

Why hasn't anyone mentioned this? It doesn't matter if you do all this wonderful physics with PhysX or a next-gen GPU: any physics that matters will be done on commonly available hardware, not on the fastest GPU or PhysX card available. This means that while this physics war (GPU vs. PPU) goes on, we will see no real leap in actual interactivity in games, on the PC at least, because physics that really affects gameplay will not be left to chance and will be done on our CPUs. Any software engineer not working on a console exclusive, who wants his game played by more than a few thousand people, will make sure that any physics tied to GPU or PPU availability is relegated, at best, to the realm of novelty (can you say pretty sparks?).

As a gamer, I loathe the idea of another card.
As an engineer, I would rather have more generalized vector processors.
C0D1F1ED, yes, that was with full SSE instructions for the vector math. The code I was playing with may not have been set up properly, but I don't consider 8 fps all that "real time".

As for the GPU memory-access issues, I have yet to see a GPU respond quickly to something like glReadPixels. It appears to me that the GPU is meant to take data and process it, not take data, process it, and return it. I would appreciate information to the contrary, though, since it would be nice to know things have changed.
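The pattern I keep fighting is the synchronous one below. I've heard pixel buffer objects are supposed to help by making the copy asynchronous, but I haven't seen readback become truly cheap. A sketch, assuming a GL 2.1 context and a buffer object pbo created beforehand with glGenBuffers/glBufferData:

[code]
// Synchronous readback: the driver must drain the whole GPU pipeline
// before glReadPixels returns, so the CPU sits idle in the meantime.
glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels);

// With a pixel buffer object (core in GL 2.1) the copy is queued on the
// GPU instead; mapping the buffer a frame later hides most of the stall.
glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, 0);
// ...render or simulate something else, then later:
void* data = glMapBuffer(GL_PIXEL_PACK_BUFFER, GL_READ_ONLY);
// ...use data...
glUnmapBuffer(GL_PIXEL_PACK_BUFFER);
glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
[/code]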

