
physx chip

Started by April 20, 2006 07:42 PM
223 comments, last by GameDev.net 18 years, 8 months ago
Quote:
Original post by Kylotan
Where does it say that in the article?

Third paragraph:
Quote:
Also ATI says they are going to be simply faster than an Ageia PhysX card even with using a X1600 XT and that a X1900 XT should deliver 9 X the performance of an Ageia PhysX card

I can buy an X1600 XT for 2/3 the price of a PhysX card. Or, I can add the price of a PhysX card to that of a mid-end graphics card and get a high-end card that improves my graphics and has power to spare for physics.
Quote:
Original post by C0D1F1ED
Quote:
Original post by Kylotan
Where does it say that in the article?

Third paragraph:
Quote:
Also ATI says they are going to be simply faster than an Ageia PhysX card even with using a X1600 XT and that a X1900 XT should deliver 9 X the performance of an Ageia PhysX card

I can buy an X1600 XT for 2/3 the price of a PhysX card. Or, I can add the price of a PhysX card to that of a mid-end graphics card and get a high-end card that improves my graphics and has power to spare for physics.

I think he's looking for benchmarks not claims. I think.

Beginner in Game Development?  Read here. And read here.

 

Quote from that article: "ATI does not see their physics technology as an offload for the CPU but rather processes a new category of features known as “effects physics.”"

So I take it we are talking about bigger and more complex explosions like those seen in GRAW (more polygons per explosion that then disappear quickly afterwards).

In which case I'm personally not too bothered about it. Why spend an extra $100-$200 for something that is only going to be fun the first few times you see it?

I'd be far more interested in a card that helps simulate realistic physics within a game, like friction, collisions and inertia, while freeing up the processor for other jobs.

Malal
Quote:
Original post by Alpha_ProgDes
I think he's looking for benchmarks not claims. I think.

Fair enough. I'm still looking for extensive PhysX benchmarks...
Quote:
Original post by Anonymous Poster
Extra graphics cards are quite expensive and will rarely be used.

I don't think it really needs an extra graphics card. You can just use a fraction of the processing power of one graphics card. And it's still more cost-effective than buying a PhysX card.

Of course it's still not fully optimal on current GPUs; they have to deal with Direct3D 9 API limitations and it's not a unified architecture. But AGEIA definitely already has strong competition from all sides, and with next-generation GPUs and CPUs it's going to be a waste of money to buy a PhysX card. I feel sorry for AGEIA, but they should have seen this coming.
Quote:
Original post by Anonymous Poster
You need at least 2 cards, they want you to use 3. You're thinking of nvidia's solution maybe.

That's only with today's solution. I see no reason why they couldn't get it working with one card in the near future. Context switches are still inefficient with the Direct3D 9 API but that will change as well.
Quote:
Besides, don't underestimate the effective efficiency of a CPU.

I certainly don't. I have been advocating throughout this whole thread that a Core 2 Duo might very well beat PhysX thanks to its high floating-point performance and efficient architecture.
A thought just crossed my mind after reading so many advocates of the power of dual core. How do we know that current games aren't already multithreaded to begin with? It's common knowledge that to speed up certain processes you have to do operations asynchronously. So doesn't that usually involve a thread of some sort with a callback when it's done? Also, if current game engines are already somewhat multithreaded, then you run into the problem of knowing which core each thread is running on. Technically, I haven't seen any real way of specifying where you want a thread to be run. So, even if we were to use a dual core processor, there really wouldn't be a guarantee that physics will run on a core on its own. Optimally, we hope that's what happens.

Personally, I've only used a hyperthreaded processor. Under Windows' Task Manager, the two hardware threads show up as two processors, as we all know. When running single-threaded number-crunching programs, the load concentrates on one of the threads but never really maxes it out, while the other thread never stays idle either. So, if this is also what happens on a multi-core processor, then we may have to factor in that when the system does load balancing, it may not do so by assigning threads to specific processing units. I guess my main point is that although a second core "may" help physics run better on the CPU, we shouldn't expect a 1 + 1 deal but more like 0.75 + 0.75, and that saying you can have physics running on a second core may not be a technically sound argument, as there is no guarantee.
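
For reference, Windows does expose a way to control and inspect this. Below is a minimal sketch (my own illustration, assuming a Win32 target) that pins the calling thread to the second logical processor with SetThreadAffinityMask and checks where it is currently running with GetCurrentProcessorNumber:

```cpp
// Minimal Win32 sketch: pin the calling thread to logical processor 1
// and query which processor it is currently scheduled on.
#include <windows.h>
#include <cstdio>

int main()
{
    // Restrict this thread to logical processor 1 (bit 1 of the affinity mask).
    // Returns the previous mask, or 0 on failure.
    DWORD_PTR previousMask = SetThreadAffinityMask(GetCurrentThread(), ((DWORD_PTR)1) << 1);
    if (previousMask == 0)
        std::printf("SetThreadAffinityMask failed: %lu\n", GetLastError());

    // Ask which logical processor the thread is executing on right now.
    // (Only a snapshot: without an affinity mask the scheduler may move it.)
    DWORD core = GetCurrentProcessorNumber();
    std::printf("Running on logical processor %lu\n", core);
    return 0;
}
```

In practice, as the reply below notes, forcing affinity is rarely what you want; the scheduler usually keeps a busy thread on the same core by itself.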
Quote:
Original post by Anonymous Poster
and yet you want a gpu solution. granted, nvidia's and ati's next cards are apparently magical ones that have no latencies and infinite memory access, but they are still extra cards that no one is going to buy. why not just run it on the cpu?

you're advocating the same solution as aegia...an extra card. theres no reason to do the computations anywhere except the cpu.

No, just like people will upgrade to a powerful dual-core processor anyway, they will upgrade to a Direct3D 10 card in due time. And one such graphics card will be enough to do physics processing in between every frame.

I know the future dual-core CPUs will be (more than) adequate for physics processing and likely easier to work with, but I don't think we can rule out the GPU solution. I mean, it was easy to predict PhysX's doom, but whether CPU or GPU based physics will prevail would require a crystal ball to look years ahead. It's not unlikely that both solutions will be used for some time. It's clear that NVIDIA and ATI are very serious about GPGPU applications...
Quote:
Original post by WeirdoFu
Technically, I haven't seen any real way of specifying where you want a thread to be run. So, even if we were to use a dual core processor, there really wouldn't be a guarantee that physics will run on a core on its own. Optimally, we hope that's what happens.

It can be done using SetThreadAffinityMask (or SetProcessAffinityMask for the whole process). But you actually don't want to do this. The operating system schedules threads every few milliseconds; it balances things automatically and will typically keep assigning the same thread to the same core. The best thing to do is have two threads that constantly decide what task to do next, so they never go idle. The tricky part is determining the granularity of the tasks and how to synchronize as little and as efficiently as possible. But the CPU and OS are really quite good at thread scheduling; it's up to the software to make the best use of it.
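
Here is a rough sketch of that worker pattern (my own illustration, using modern C++ threads for brevity rather than whatever API an engine of that era would use): two workers keep pulling the next available task from a shared queue, so neither goes idle while there is work left.

```cpp
// Two worker threads that repeatedly grab the next task from a shared queue.
#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>

class TaskQueue {
public:
    // Add a task; wakes one idle worker.
    void push(std::function<void()> task) {
        {
            std::lock_guard<std::mutex> lock(mutex_);
            tasks_.push(std::move(task));
        }
        condition_.notify_one();
    }

    // Signal the workers to exit once the queue is drained.
    void shutdown() {
        {
            std::lock_guard<std::mutex> lock(mutex_);
            done_ = true;
        }
        condition_.notify_all();
    }

    // Worker loop: repeatedly take whatever task is available next.
    void workerLoop() {
        for (;;) {
            std::function<void()> task;
            {
                std::unique_lock<std::mutex> lock(mutex_);
                condition_.wait(lock, [this] { return done_ || !tasks_.empty(); });
                if (tasks_.empty())
                    return;                  // done_ is set and nothing left to do
                task = std::move(tasks_.front());
                tasks_.pop();
            }
            task();                          // run outside the lock
        }
    }

private:
    std::queue<std::function<void()>> tasks_;
    std::mutex mutex_;
    std::condition_variable condition_;
    bool done_ = false;
};

int main() {
    TaskQueue queue;

    // Two workers, matching two cores; the OS decides which core runs which.
    std::thread workerA(&TaskQueue::workerLoop, &queue);
    std::thread workerB(&TaskQueue::workerLoop, &queue);

    // Hypothetical per-frame jobs: physics islands, animation, AI, etc.
    for (int i = 0; i < 8; ++i)
        queue.push([i] { /* simulate physics island i */ (void)i; });

    queue.shutdown();
    workerA.join();
    workerB.join();
    return 0;
}
```

The design choice is exactly the tricky part mentioned above: tasks need to be coarse enough that locking the queue is negligible, yet fine enough that both workers stay busy.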
Quote:
I guess my main point is that although a second core "may" help physics run better on the CPU, we shouldn't expect a 1 + 1 deal but more like 0.75 + 0.75, and that saying you can have physics running on a second core may not be a technically sound argument, as there is no guarantee.

That's true, but it's in fact the best we can do. When tasking a PPU or GPU with physics calculations we also have to ensure the CPU can do something else in the meantime. It's not much different from having multiple cores. It requires the same task scheduling and synchronization. In fact it's worse because communication between separate devices is slower than between processor cores. So dual-core is really feasible for physics and I expect efficiencies between 170% and 190% to be possible.
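
To make that concrete, here is roughly what the per-frame overlap looks like (a sketch with hypothetical function names, again using modern C++ for brevity): the physics step runs on one core while the other keeps working, with a single synchronization point before the results are used.

```cpp
// Sketch of overlapping the physics step with other CPU work each frame.
#include <future>

// Hypothetical engine hooks -- placeholders, not a real API.
void stepPhysics(double dt) { /* integrate, collide, solve constraints */ }
void updateAI()             { /* pathfinding, decisions */ }
void submitRendering()      { /* build and submit draw calls */ }

void runFrame(double dt)
{
    // Launch the physics step on another thread (ideally the second core).
    std::future<void> physics = std::async(std::launch::async, stepPhysics, dt);

    // The first core stays busy with everything else in the meantime.
    updateAI();

    // Synchronization point: wait for physics before using its results.
    physics.get();

    submitRendering();
}

int main()
{
    for (int frame = 0; frame < 3; ++frame)
        runFrame(1.0 / 60.0);
    return 0;
}
```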

This topic is closed to new replies.
