Quote:
Original post by taby
This would all be moot if we could use multiple local GPUs in non-SLI form. Dedicating each to a different field of physics would be straightforward. One for dynamics, and one for ray-tracing, and one for...
If what they're trying to do is compartmentalize things, why not just give the consumer the option to use their GPUs however they see fit, in any type of SLI/non-SLI setup they want.
The GPU was designed with physics in mind. NVIDIA should not cloud the entire issue by not hard-wiring these equation-accelerators into their own GPUs themselves.
Admittedly dynamics is harder than ray-tracing, but that's relative when you know both. The appreciation for the uniqueness of the PPU is lost immediately on me, simply because dynamics and ray-tracing are so tightly bound together by linear algebra that they are practically made of the same "stuff".
However, if the PPU price vs power ratio is significantly more favourable than that of a GPU, so be it.
I am only hard on NVIDIA because I expect the most from them every time.
I guess I'll give one last seemingly irrational outburst before I shut up completely.
Now if GPU were actually an abbreviation for General Processing Unit, I really wouldn't be feeling so out of whack about the things I'm seeing, but GPU actually stands for GRAPHICS Processing Unit. Why do we want a piece of hardware that does graphics to do anything else? Sure it can do other things, especially if you fool it into thinking it's working with graphics, which is what GPGPU is all about, but why don't we all just leave it alone, let it do what it does best, and stop trying to pound a round peg into a square hole?
Have we all forgotten why we have SLI or multicore processors in the first place? The answer is clean and simple: the industry is out of ideas as to how to make CPUs and GPUs faster at a decent pace, so they lump two slower cores together and sell it as something faster. From a technical standpoint, SLI actually makes more sense than multicore CPUs for the plain fact that the load balancing is done for us, since graphics itself is a very parallel process.
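Just to make the "very parallel" point concrete, here is a toy sketch of my own (nothing to do with any real driver): every pixel can be shaded without looking at any other pixel, so splitting the frame into bands of rows across workers needs essentially no load-balancing logic. It assumes a single-channel float framebuffer that the caller has sized to width * height.

#include <thread>
#include <vector>
#include <cstddef>

// Shade a band of rows [y0, y1). The per-pixel math is a stand-in for real shading;
// the point is that no pixel depends on any other pixel.
void shade_rows(std::vector<float>& framebuffer, int width, int y0, int y1)
{
    for (int y = y0; y < y1; ++y)
        for (int x = 0; x < width; ++x)
            framebuffer[static_cast<std::size_t>(y) * width + x] =
                static_cast<float>(x * y) / static_cast<float>(width * width);
}

// Hand each worker a band of rows and let them run independently.
void render_parallel(std::vector<float>& framebuffer, int width, int height, int workers)
{
    std::vector<std::thread> pool;
    const int band = height / workers;
    for (int i = 0; i < workers; ++i) {
        const int y0 = i * band;
        const int y1 = (i == workers - 1) ? height : y0 + band;
        pool.emplace_back([&framebuffer, width, y0, y1] {
            shade_rows(framebuffer, width, y0, y1);
        });
    }
    for (auto& t : pool)
        t.join();
}

That is roughly why SLI "just works" for graphics in a way that multicore CPUs don't for general code.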
But we need to realize that a GPU is called a GPU because it is excellent at pushing polygons. Even though they've added pixel and vertex shaders, the GPU is still king at pushing polygons and is optimized for exactly that. I'm starting to think what you guys want are those CPU expansion cards that servers used to use years ago, where each card had a processor and RAM and you just plugged it in and got another processor to work with. I think some of those are still around. In truth, you'd probably do better with those than with continuously adding GPUs and rewriting all your software to twist every process into a graphics-related one, just so the GPU can blindly operate on it thinking it's doing graphics.
Also, the whole thing with dynamics and ray-tracing eludes me. As far as I know, there are almost no cards on the market capable of real-time ray-tracing. And no, why would anyone design a graphics processing unit with physics in mind? It defies all common sense and naming conventions. If you were an engineer asked to design a sound card, would you be thinking about anything other than processing sound? If you were asked to build a graphing calculator, would you design it with game playing in mind? Of course not. It makes no economic sense whatsoever, and your design probably wouldn't be approved in the first place.
The point is, if you looked at the market, the marketing, and the press releases, you'd realize that the whole physics-on-GPU thing was almost a knee-jerk reaction from nVidia so they could sell more people on buying a second video card instead of any possible physics-dedicated hardware. If nVidia had had the whole physics thing in mind long ago, they would have marketed the heck out of it long ago and PhysX wouldn't exist in the first place. But the truth is, we never heard anything in the remote ballpark of physics on the GPU until Ageia announced the PhysX.

So, yes, physics is the next big thing. Is PhysX the solution? Well, it's a dedicated product that's on the market right now. Is physics on the GPU a solution? Well, yeah, but it robs you of the graphics performance you would have had if you didn't have physics turned on, so it's a hack and a marketing stunt aimed at making people throw more money into more graphics cards for reasons other than graphics. I wouldn't be surprised if another company came out with dedicated AI-acceleration hardware and nVidia countered with some AI-on-GPU deal. And yes, you can do AI on the GPU already, though it's relatively convoluted: you encode your data into a texture, send it to the GPU, render to an offscreen buffer, read the pixels back to rebuild the texture in main memory, and then translate it back into the proper form.
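For anyone who hasn't seen what that convoluted path actually looks like, here is a minimal sketch of the round trip, assuming GLEW, an existing GL context, and a driver that exposes ARB_texture_float and EXT_framebuffer_object. The function name and the four-floats-per-texel packing are my own, and the fragment shader that would do the actual non-graphics math is left out.

#include <GL/glew.h>   // extension loading; assumes a current GL context already exists
#include <vector>
#include <cstddef>

// Sketch of the GPGPU round trip described above. 'data' holds width*height*4 floats.
std::vector<float> gpgpu_round_trip(const std::vector<float>& data, int width, int height)
{
    // 1. Encode the data into a floating-point texture.
    GLuint inputTex = 0;
    glGenTextures(1, &inputTex);
    glBindTexture(GL_TEXTURE_2D, inputTex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F_ARB, width, height, 0,
                 GL_RGBA, GL_FLOAT, data.data());

    // 2. Create an output texture and attach it to an offscreen framebuffer.
    GLuint outputTex = 0;
    glGenTextures(1, &outputTex);
    glBindTexture(GL_TEXTURE_2D, outputTex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F_ARB, width, height, 0,
                 GL_RGBA, GL_FLOAT, 0);

    GLuint fbo = 0;
    glGenFramebuffersEXT(1, &fbo);
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
    glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                              GL_TEXTURE_2D, outputTex, 0);

    // 3. "Render" the computation: bind the fragment shader that does the real math
    //    (omitted here), then draw a full-screen quad so it runs once per texel.
    glViewport(0, 0, width, height);
    glBindTexture(GL_TEXTURE_2D, inputTex);
    glBegin(GL_QUADS);
        glTexCoord2f(0.0f, 0.0f); glVertex2f(-1.0f, -1.0f);
        glTexCoord2f(1.0f, 0.0f); glVertex2f( 1.0f, -1.0f);
        glTexCoord2f(1.0f, 1.0f); glVertex2f( 1.0f,  1.0f);
        glTexCoord2f(0.0f, 1.0f); glVertex2f(-1.0f,  1.0f);
    glEnd();

    // 4. Read the pixels back into main memory; the caller still has to translate
    //    them back into the proper form.
    std::vector<float> result(static_cast<std::size_t>(width) * height * 4);
    glReadBuffer(GL_COLOR_ATTACHMENT0_EXT);
    glReadPixels(0, 0, width, height, GL_RGBA, GL_FLOAT, result.data());

    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
    glDeleteFramebuffersEXT(1, &fbo);
    glDeleteTextures(1, &inputTex);
    glDeleteTextures(1, &outputTex);
    return result;
}

Even in this stripped-down form, most of the work is just convincing the card it's doing graphics, which is the round-peg-in-a-square-hole problem in a nutshell.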
Ok, I'm done. Sorry for taking so long.