
eating potatoe

Started by August 05, 2014 08:25 PM
5 comments, last by frob 10 years, 5 months ago


Such statements need a lot of understanding to grok. In general, yes, floats are faster on the GPU. Ints can be faster on the CPU, but the gap is a lot smaller today than it used to be. However, an operation that is naturally a floating-point operation is likely much, much faster to just do with floats on the CPU than it is to contort it into integer form and use a bunch of specialized math.
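As a rough sketch of what that kind of contortion looks like, here is a hypothetical 16.16 fixed-point example (not taken from any real engine) next to the plain float version:

#include <cstdint>
#include <cstdio>

// The natural floating-point version: just multiply.
float scale_float(float value, float factor) {
    return value * factor;
}

// The "contorted" 16.16 fixed-point version: convert, multiply in 64-bit
// to avoid overflow, then shift back down to 16.16.
std::int32_t scale_fixed(std::int32_t value_fx, std::int32_t factor_fx) {
    return static_cast<std::int32_t>(
        (static_cast<std::int64_t>(value_fx) * factor_fx) >> 16);
}

int main() {
    float        as_float = scale_float(2.5f, 0.75f);
    std::int32_t as_fixed = scale_fixed(static_cast<std::int32_t>(2.5f * 65536),
                                        static_cast<std::int32_t>(0.75f * 65536));
    std::printf("float: %f  fixed: %f\n", as_float, as_fixed / 65536.0f);
    return 0;
}

Both print 1.875, but the fixed-point path drags in conversions, a widening multiply, and a shift just to do one multiplication.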

The overwhelming rule of thumb with performance advice is that it's all lies. What people learned 5 years ago is _wrong_ today. What I just told you above will probably be _wrong_ soon (if not already). If you want to know which is faster _for your target platforms_ (you have no reason to care about other platforms), try both, profile them, and see which performs better.
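A minimal profiling sketch along those lines (the workload here is a made-up toy; the only numbers that mean anything come from timing your own code on your own target hardware):

#include <chrono>
#include <cstdint>
#include <cstdio>
#include <vector>

// Times one pass of a workload in milliseconds; swap in your real routine.
template <typename Fn>
double time_ms(Fn&& fn) {
    auto start = std::chrono::steady_clock::now();
    fn();
    auto end = std::chrono::steady_clock::now();
    return std::chrono::duration<double, std::milli>(end - start).count();
}

int main() {
    const int n = 10000000;
    std::vector<float>        f(n, 1.0001f);
    std::vector<std::int32_t> i(n, 3);

    // volatile sinks keep the compiler from optimizing the loops away entirely.
    volatile float        fsum = 0.0f;
    volatile std::int64_t isum = 0;

    double float_ms = time_ms([&] { float s = 0.0f;      for (float v : f)        s += v * 1.5f; fsum = s; });
    double int_ms   = time_ms([&] { std::int64_t s = 0;  for (std::int32_t v : i) s += v * 3;    isum = s; });

    std::printf("float pass: %.2f ms, int pass: %.2f ms\n", float_ms, int_ms);
    return 0;
}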

That said, given how close the performance is, and given that you're not exactly writing the next Crytek or in a position where squeezing out every last tenth of a percent of performance matters, just use floats where they seem appropriate and ints where they seem appropriate. The differences in speed between the two are so minor in most cases that it's a complete and absolute waste of time to try to contort naturally floating-point operations into integer math or vice versa.

Sean Middleditch – Game Systems Engineer – Join my team!


What is the reason behind the question? What problem are you trying to solve?

Yes, the GPU can be used for general-purpose processing. General-purpose GPU programming goes by the rather utilitarian acronym GPGPU.

Basically you are making a tradeoff. You give up the benefits of the CPU, which is very versatile, and replace it with a massively parallel processing system designed for a dense grid of data.

This is why things like bitcoin miners love running code on the GPU. They get away from a 6-processor or 8-processor system that has gigabytes of memory and other resources, and exchange it for thousands of tiny processors. Their processing task is small but needs to be repeated on an enormous collection of data. They can load all the data up into a texture and run a series of shaders to get the desired results.

Crossing the boundary between the systems is relatively expensive.
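A minimal CUDA sketch of that model (hypothetical kernel and sizes, just to show the shape): one tiny kernel is applied to a large array by thousands of threads, and the explicit copies across the CPU/GPU boundary are the expensive part for small workloads.

#include <cstdio>
#include <vector>
#include <cuda_runtime.h>

// One tiny "worker": each thread processes exactly one element.
__global__ void brighten(float* pixels, int count, float amount) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < count) pixels[i] *= amount;
}

int main() {
    const int n = 1 << 20;                       // ~1M elements
    std::vector<float> host(n, 0.5f);

    float* device = nullptr;
    cudaMalloc(&device, n * sizeof(float));

    // Crossing the CPU/GPU boundary: these copies are the relatively
    // expensive part, which is why tiny jobs don't belong on the GPU.
    cudaMemcpy(device, host.data(), n * sizeof(float), cudaMemcpyHostToDevice);

    // Launch thousands of threads, each doing one small piece of work.
    int threads = 256;
    int blocks  = (n + threads - 1) / threads;
    brighten<<<blocks, threads>>>(device, n, 1.2f);

    cudaMemcpy(host.data(), device, n * sizeof(float), cudaMemcpyDeviceToHost);
    cudaFree(device);

    std::printf("first element after GPU pass: %f\n", host[0]);
    return 0;
}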

So in order to help give you good information, why are you asking about integer and floating-point performance?

Short answer: Yes, under all circumstances I am aware of, shaders run on the GPU.

Longer answer: There may be some obscure debugging setups where shaders are run on the CPU, but I find this unlikely in practice, as it would take a long time to render a screenful of pixels if the pixel shader had to run on the CPU. The GPU supports massive parallelism that cannot be matched on the CPU; it can run the same shader hundreds of times at the same time.

I was under the impression that on modern NVidia and ATI cards, most ops (both float and int) that don't use the special function unit execute in a single cycle. 64-bit operations and divide, pow, trig, etc. take longer, though. As far as Intel chips, I'm not too sure.


I was under the impression that on modern NVidia and ATI cards, most ops (both float and int) that don't use the special function unit execute in a single cycle. 64-bit operations and divide, pow, trig, etc. take longer, though. As far as Intel chips, I'm not too sure.

It is somewhat difficult to guarantee your customers are running those specific cards.

Most broad-consumer games look for shader model 2.0 (2002) or 3.0 (2004). It is fairly rare for mainstream games to require more modern hardware than that.

This topic is closed to new replies.
