
Are flops a "worthless metric" in relation to 3D physics calculations?

Started March 15, 2017 09:27 PM
5 comments, last by Satharis 7 years, 8 months ago

Someone I know claims that floating-point operations per second have no bearing at all on the performance of real-time 3D physics calculations and are thus a "worthless metric".

How true is his claim? Do FLOPS matter at all?

There's a big difference between "no bearing at all" and "a worthless metric". Obviously they matter -- if you had a CPU with 1 FLOPS, it would take an eternity to run any kind of computation.

However, real-world performance is a lot more complicated than that and isn't captured by a single metric -- the main bottleneck in computing used to be FLOPS and now it is memory bandwidth. We also keep increasing FLOPS these days by adding more and more cores, so software has to be written in a style that can actually take advantage of all of those cores. Real-world performance is tangled up in all of these issues.
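As a back-of-the-envelope illustration of that bandwidth point, here's a minimal sketch (plain C++, with made-up numbers for the peak rating and the memory bandwidth -- not measurements of any real CPU). A streaming loop like a[i] = b[i] + c[i] performs one FLOP per 12 bytes of memory traffic, so bandwidth caps its FLOPS long before the chip's peak rating does:

```cpp
// Back-of-the-envelope roofline check (illustrative numbers, not measured).
// A streaming loop like a[i] = b[i] + c[i] does 1 FLOP per 12 bytes moved
// (two 4-byte loads plus one 4-byte store), so memory bandwidth caps it.
#include <cstdio>

int main() {
    const double peak_gflops    = 25.0;       // hypothetical CPU peak rating
    const double bandwidth_gbs  = 20.0;       // hypothetical RAM bandwidth, GB/s
    const double flops_per_byte = 1.0 / 12.0; // arithmetic intensity of the loop

    // The loop can't run faster than the compute peak or the bandwidth
    // ceiling, whichever is lower.
    const double memory_bound = bandwidth_gbs * flops_per_byte; // ~1.7 GFLOPS
    const double achievable   = memory_bound < peak_gflops ? memory_bound
                                                           : peak_gflops;
    std::printf("ceiling: %.2f of a %.0f GFLOPS peak\n", achievable, peak_gflops);
}
```

On those made-up numbers the loop tops out below 2 GFLOPS on a 25 GFLOPS-rated chip, which is why the peak figure alone tells you so little.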

So it has some bearing on performance, but is also mostly a worthless metric :wink:


So would ~25 GFLOPS of a CPU be kind of irrelevant these days in a sense?

Irrelevant in what regard? That is around the performance of many current CPUs. You're missing the point: it is totally dependent on the application. 10 FLOPS is perfectly fine for a calculator. 25 GFLOPS would be hopelessly slow for a supercomputer running a weather simulation. A 25 GFLOPS CPU would run any modern game without issue, even with tasks running in the background.

So would ~25 GFLOPS of a CPU be kind of irrelevant these days in a sense?

No, that's not what I said. You need to look at more than the FLOPS rating to guess performance, and better than guessing is measuring actual performance... What matters is how many GFLOPS your particular bit of code can actually achieve on the hardware, which depends on the hardware architecture, the software architecture, the theoretical FLOPS rating of the CPU, the memory bandwidth, the cache architecture, etc., etc...
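For example, here's a minimal sketch of what "measuring" can look like (plain C++; the kernel, array size, and 2-FLOPs-per-element count are illustrative assumptions, and the result will swing with compiler flags and hardware):

```cpp
// Minimal sketch: time a known-FLOP-count kernel and report achieved GFLOPS.
#include <chrono>
#include <cstdio>
#include <vector>

int main() {
    const size_t N = 1 << 24; // 16M elements, divisible by 4
    std::vector<float> a(N, 1.0f), b(N, 2.0f);

    // Four accumulators break the serial dependency chain so the CPU can
    // keep more floating-point units busy.
    auto t0 = std::chrono::steady_clock::now();
    float s0 = 0.0f, s1 = 0.0f, s2 = 0.0f, s3 = 0.0f;
    for (size_t i = 0; i < N; i += 4) { // 1 mul + 1 add = 2 FLOPs per element
        s0 += a[i]     * b[i];
        s1 += a[i + 1] * b[i + 1];
        s2 += a[i + 2] * b[i + 2];
        s3 += a[i + 3] * b[i + 3];
    }
    auto t1 = std::chrono::steady_clock::now();

    const double secs   = std::chrono::duration<double>(t1 - t0).count();
    const double gflops = 2.0 * N / secs / 1e9;
    std::printf("%.2f GFLOPS achieved (sum=%f)\n", gflops, s0 + s1 + s2 + s3);
}
```

Compare the printed number to the CPU's theoretical peak and you will usually see a big gap -- which is exactly the point.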

A 25 GFLOPS CPU would run any modern game without issue, even with tasks running in the background.

Well, that depends. If it completely changed up the architecture in order to achieve that FLOPS score, then no modern game will run on it.
e.g. there are GPUs now that easily hit the 500 GFLOPS range, but if you ran single-threaded code on one like an old video game, you'd be lucky to reach about 1 GFLOPS... Or the PS3's CELL CPU had well over 200 GFLOPS in theory, but in practice most games probably got closer to 50 GFLOPS out of it, simply because real-world code performs very differently from the theoretical maximum the CPU can do. Real programs have to spend time doing other boring things, like moving memory around.

...so the hardware's theoretical rating matters less than what your specific software can make that hardware do.
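To make that concrete, here's a minimal sketch (standard C++ threads, a made-up dot-product workload, not code from any actual game) of the same work done on one core and then split across all of them. The single-threaded version can only ever see one core's slice of the chip's rated FLOPS:

```cpp
// Minimal sketch: the same dot product single-threaded vs on every core.
#include <cstdio>
#include <numeric>
#include <thread>
#include <vector>

int main() {
    const size_t N = 1 << 24;
    std::vector<float> a(N, 1.0f), b(N, 2.0f);

    // Single-threaded baseline: uses at most one core's FLOPS.
    double single = 0.0;
    for (size_t i = 0; i < N; ++i) single += a[i] * b[i];

    // Split the same work across all hardware threads.
    unsigned T = std::thread::hardware_concurrency();
    if (T == 0) T = 1; // hardware_concurrency may be unknown
    std::vector<double> partial(T, 0.0);
    std::vector<std::thread> pool;
    for (unsigned t = 0; t < T; ++t) {
        pool.emplace_back([&, t] {
            double acc = 0.0;
            const size_t lo = N * t / T, hi = N * (t + 1) / T;
            for (size_t i = lo; i < hi; ++i) acc += a[i] * b[i];
            partial[t] = acc; // one write per thread, no shared accumulator
        });
    }
    for (auto& th : pool) th.join();
    const double parallel = std::accumulate(partial.begin(), partial.end(), 0.0);

    std::printf("single=%f parallel=%f\n", single, parallel);
}
```

Time the two halves and the multi-threaded one should finish several times faster on a multi-core chip -- but only because the code was structured to let it.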

Also, 25 GFLOPS isn't that much any more. A good laptop from two years ago will have a 25 GFLOPS CPU :D

I'm running an AMD FX-8150 (~21 GFLOPS) and can play most games with ease. The reason is that my GPU is far more powerful, and most modern games are not as CPU-bound as earlier games were. So what I posted was more to your point: it is a pointless metric by itself.

It's a pointless metric because it doesn't really give any useful information about performance. It's like trying to compare cars by the RPM their engines spin at: one might sound like it is better and yet be a much less useful car because of other bottlenecks or design problems.

This topic is closed to new replies.
