The ATI thing sounds neat but it has no chance of survival. Dual-core processors have a whole lot of extra processing power that can be used for physics without the additional overhead (bandwidth, synchronization) of a separate physics card. Furthermore, investing in a dual-core processor benefits much more than just the physics in a few games. Extra graphics cards are quite expensive and will rarely be used.
Besides, no game developer in his right mind would create a game that only runs on a fraction of PCs, so there always has to be a fallback that doesn't affect gameplay (hence the "effects physics"). It took graphics cards about three years to become widespread, but by the time physics plays a key role in games, CPUs will be multi-core with highly improved architectures...
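To make the "effects physics" idea concrete: gameplay-relevant physics runs everywhere, and only cosmetic effects scale with the hardware. A minimal C++ sketch of that split; every name here is a hypothetical placeholder, not any real engine's API:

#include <cstdio>

// Which physics hardware was detected at startup (hypothetical enum).
enum PhysicsBackend { CPU_ONLY, CPU_PLUS_GPU, CPU_PLUS_PPU };

// Cosmetic effects scale with the backend; gameplay physics does not.
int debrisParticleBudget(PhysicsBackend backend) {
    switch (backend) {
        case CPU_PLUS_PPU: return 20000; // dedicated physics card
        case CPU_PLUS_GPU: return 8000;  // spare GPU cycles
        default:           return 1000;  // baseline every player gets
    }
}

int main() {
    PhysicsBackend backend = CPU_ONLY; // would be detected at startup
    printf("Spawning up to %d debris particles\n",
           debrisParticleBudget(backend));
    // Gameplay-affecting physics (collision with the level, hits that
    // deal damage, etc.) is identical on every backend, so nothing a
    // card owner sees can change the outcome of the game.
    return 0;
}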
Quote: Original post by Alpha_ProgDes
I think he's looking for benchmarks not claims. I think.
Fair enough. I'm still looking for extensive PhysX benchmarks...
Quote: Original post by Anonymous Poster
Extra graphics cards are quite expensive and will rarely be used.
I don't think it really needs an extra graphics card. You can just use a fraction of the processing power of one graphics card. And it's still more cost-effective than buying a PhysX card.
Of course it's still not fully optimal on current GPUs; they have to deal with Direct3D 9 API limitations and it's not a unified architecture. But AGEIA definitely already has strong competition from all sides, and with next-generation GPUs and CPUs it's going to be a waste of money to buy a PhysX card. I feel sorry for AGEIA, but they should have seen this coming.
June 07, 2006 03:41 PM
You need at least two cards; they want you to use three. You're thinking of NVIDIA's solution, maybe.
Besides, don't underestimate the effective efficiency of a CPU. It has a comparatively huge and fast cache, highly efficient branch prediction, out-of-order execution, and fully pipelined execution units running at several gigahertz. A GPU-based solution takes a brute-force approach and can't be that efficient, so even if it has a multiple of the GFLOPS of a CPU, a lot goes to waste on overhead. Moreover, the second core of a dual-core processor is practically unused at the moment. In a year dual-core CPUs will be widespread and affordable, while only over-enthusiastic gamers will buy a second (or third) graphics card.
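To illustrate the "second core is practically unused" point: a game could move its physics step onto a worker thread and let the scheduler place it on the otherwise-idle core. A minimal Win32 sketch under that assumption; the commented-out stepPhysics and the Sleep calls are hypothetical stand-ins for real work:

#include <windows.h>
#include <process.h>

volatile bool g_running = true;

// Worker thread: runs the physics simulation at a fixed timestep.
unsigned __stdcall physicsThread(void*) {
    while (g_running) {
        // stepPhysics(1.0f / 60.0f); // hypothetical simulation step
        Sleep(16);                    // stand-in for the real work
    }
    return 0;
}

int main() {
    // On a dual-core machine the scheduler can place this thread on the
    // idle second core, so physics costs the render thread almost nothing.
    HANDLE h = (HANDLE)_beginthreadex(NULL, 0, physicsThread, NULL, 0, NULL);
    // ... main thread: input, game logic, rendering ...
    Sleep(1000);                      // stand-in for the game running
    g_running = false;
    WaitForSingleObject(h, INFINITE);
    CloseHandle(h);
    return 0;
}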
June 07, 2006 03:42 PM
link for last post:
http://www.bit-tech.net/news/2006/06/06/ATI_touts_physics_on_GPU/
Quote: Original post by Anonymous Poster
You need at least two cards; they want you to use three. You're thinking of NVIDIA's solution, maybe.
That's only with today's solution. I see no reason why they couldn't get it working with one card in the near future. Context switches are still inefficient with the Direct3D 9 API but that will change as well.
Quote: Besides, don't underestimate the effective efficiency of a CPU.
I certainly don't. I have been arguing throughout this whole thread that a Core 2 Duo might very well beat PhysX, thanks to its high floating-point performance and efficient architecture.
June 07, 2006 08:24 PM
And yet you want a GPU solution. Granted, NVIDIA's and ATI's next cards are apparently magical ones that have no latencies and infinitely fast memory access, but they are still extra cards that nobody is going to buy. Why not just run it on the CPU?
You're advocating the same solution as AGEIA: an extra card. There's no reason to do the computations anywhere except on the CPU.
A thought just crossed my mind after reading so many advocates of the power of dual core: how do we know that current games aren't already multithreaded to begin with? It's common knowledge that to speed up certain processes you have to do operations asynchronously, and that usually involves a thread of some sort with a callback when it's done. And if current game engines are already somewhat multithreaded, you run into the problem of knowing which core each thread runs on. Technically, short of setting a thread affinity mask by hand (which I haven't seen games do), there's no real way of specifying where you want a thread to run. So even if we were to use a dual-core processor, there's no guarantee that physics will get a core to itself; optimally, we hope that's what happens.
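For reference, a minimal sketch of what setting that affinity mask looks like, assuming a Windows target. SetThreadAffinityMask is a real Win32 call; everything else here is just a demo:

#include <windows.h>
#include <stdio.h>

int main() {
    // Bit 0 set = "only run this thread on logical processor 0".
    DWORD_PTR previous = SetThreadAffinityMask(GetCurrentThread(), 1);
    if (previous == 0)
        printf("SetThreadAffinityMask failed: %lu\n", GetLastError());
    else
        printf("Thread now pinned to core 0 (old mask: %p)\n",
               (void*)previous);
    return 0;
}

Whether games should do this is another question; the scheduler's own load balancing is usually left alone, which is exactly why there's no guarantee.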
Personally, I've only used a hyperthreaded processor. Under Windows' Task Manager, the two hardware threads show up as two processors, as we all know. When running single-threaded number-crunching programs, the load concentrates on one of the threads but never really maxes it out, while the other thread never stays idle either. If the same happens on a multi-core processor, then we have to factor in that the system's load balancing may not assign threads to specific processing units. My main point is that while a second core may help physics run better on the CPU, we shouldn't expect a 1 + 1 deal but more like 0.75 + 0.75, and saying you can have physics running on a second core isn't a technically sound argument, because there's no guarantee.
Quote: Original post by Anonymous Poster
And yet you want a GPU solution. Granted, NVIDIA's and ATI's next cards are apparently magical ones that have no latencies and infinitely fast memory access, but they are still extra cards that nobody is going to buy. Why not just run it on the CPU?
You're advocating the same solution as AGEIA: an extra card. There's no reason to do the computations anywhere except on the CPU.
No. Just as people will upgrade to a powerful dual-core processor anyway, they will upgrade to a Direct3D 10 card in due time. And one such graphics card will be enough to do physics processing in between rendering every frame.
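As a rough picture of what "in between rendering every frame" means on a single card: the same GPU alternates between a physics pass and a render pass within each frame. Purely illustrative C++; none of these functions correspond to a real API:

#include <cstdio>

void dispatchPhysicsPass() { /* GPGPU pass: integrate particles, etc. */ }
void renderScene()         { /* the normal draw calls */ }

int main() {
    for (int frame = 0; frame < 3; ++frame) {
        dispatchPhysicsPass(); // GPU crunches effects physics first...
        renderScene();         // ...then renders, all on the same card
        printf("frame %d done\n", frame);
    }
    return 0;
}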
I know future dual-core CPUs will be (more than) adequate for physics processing, and likely easier to work with, but I don't think we can rule out the GPU solution. I mean, it was easy to predict PhysX's doom, but telling whether CPU- or GPU-based physics will prevail would require a crystal ball that looks years ahead. It's not unlikely that both solutions will be used for some time. It's clear that NVIDIA and ATI are very serious about GPGPU applications...