[quote name='SimonForsman' timestamp='1346076321' post='4973785']
Yes, one can split each simulation step into smaller chunks that can be processed in parallel (which many of us already do), but when each step in a simulation requires the previous step to be completed, you have an inherently serial system, and the kind of near-perfect horizontal scaling (>99%) we get fairly easily in some other types of applications is pretty much impossible to achieve. I wouldn't call that a myth: with games there will always be a need to synchronize the state, which prevents perfect scaling.
Games aren't just a linear sequence of steps where each one depends only on the one that immediately preceded it. You've got lots of different systems that need to be updated, and yes, points where data flows from one system to the next. You can take each step of updating each system and split it across many cores. You can then interleave those steps so that a dependent task doesn't immediately follow its dependency. You can pipeline all of these steps so that there are no stalls.
Yes, this is new ground for most, even though we were warned that we had to learn this stuff back in 2005, but PS3 developers have been working at it for years and are getting to the point where they can get half a dozen 3GHz CPU cores working flat out with no stalling...
[/quote]
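The pipelining idea in the quote can be sketched concretely. The following is a minimal illustration (plain Python, a simulated schedule rather than real threads; the stage names and scheduling function are made up for this example) of how three per-frame stages can be overlapped so that, in steady state, stage N of frame F runs alongside stage N-1 of frame F+1 and no core sits idle:

```python
# Hypothetical sketch: a 3-stage frame pipeline (input -> simulate -> render)
# laid out across time slots. Stage k of frame f can only start after
# stage k-1 of the same frame, but stages of *different* frames are
# independent and may run in the same slot on different cores.

STAGES = ["input", "simulate", "render"]

def pipeline_schedule(num_frames):
    """Return a list of time slots; each slot lists the (frame, stage)
    pairs that can run concurrently on separate cores in that slot."""
    slots = []
    for t in range(num_frames + len(STAGES) - 1):
        slot = []
        for k, stage in enumerate(STAGES):
            f = t - k  # frame whose stage k lands in this time slot
            if 0 <= f < num_frames:
                slot.append((f, stage))
        slots.append(slot)
    return slots

for t, slot in enumerate(pipeline_schedule(4)):
    print(t, slot)
```

After the two-slot fill phase, every slot carries three concurrent stages, which is the "no stalling" steady state the quote describes; the dependency chain within a single frame is still serial.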
There is still a difference between getting the CPUs working and getting work done: getting 6 cores to 100% doesn't mean your application has perfect horizontal scaling, it only means you're using all available resources to do something. CPU utilization is not a measurement of scalability; the amount of relevant work performed (relevant as defined by the application) is. The question here isn't whether it is possible to get X cores working at 100%. Anyone can write a game that pushes 100 cores to 100%; that's not hard. The hard part is making all those 100% contribute to overall performance rather than just being overhead. The question is whether we can write a game that gets almost the same performance if we cut the frequency to 10% and push in 10 times the number of cores instead, and this still hasn't been done.

(I don't doubt that PS3 developers are getting a lot out of the PS3 CPU, but that doesn't say anything about scalability; tuning a pipeline for a fixed number of CPU cores running a fixed architecture at a fixed frequency is not comparable to writing a horizontally scaling application.)
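The "10% frequency, 10x cores" question can be made precise with Amdahl's law. A sketch (plain Python; the parallel fraction `p` here is an invented parameter for illustration, not a measurement from any real game):

```python
# Amdahl's law: if a fraction p of the work parallelizes, then n cores
# at relative clock speed f deliver throughput f / ((1 - p) + p / n),
# normalized so that one full-speed core (f = 1, n = 1) delivers 1.0.

def throughput(f, n, p):
    return f / ((1 - p) + p / n)

baseline  = throughput(1.0, 1, 0.95)   # one fast core
many_slow = throughput(0.1, 10, 0.95)  # ten cores at 10% of the clock
print(baseline, many_slow)
```

Even at 95% parallel code, the ten slow cores reach only about 0.69 of the single fast core's throughput; matching it exactly would require p = 1, i.e. the perfect horizontal scaling being argued about.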
The entire point was that on the PC we have exactly that: expensive CPUs running more but slower cores than the cheaper models (and we will get far more cores in the future). I agree that well-written games should scale well enough to make the expensive CPUs better, but this is very rarely the case (pretty much never when it comes to AAA games), even though you don't have to get much better than 50-60% scaling efficiency for the expensive CPUs to dominate. Pretty much all games that take good advantage of multicore CPUs are optimized for a fixed number of cores (just like your PS3 example), and way too often for a fixed architecture.

(One fairly recent game, Skyrim, is a perfect example: the PC version uses 3 cores reasonably well, adding more cores does jack shit, and one of the cores is used almost exclusively for graphical effects like shadows even if the installed GPU has more than enough power to deal with them. I don't doubt that they got the most out of the Xbox 360 hardware, but the PC version still has close to 0% scaling with additional cores.)
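One way to put a number on the "scaling" used above is parallel efficiency: speedup divided by core count, i.e. T1 / (n * Tn) from frame timings at 1 and n cores. A sketch with invented timings (these numbers are made up for illustration, not real Skyrim measurements):

```python
# Parallel efficiency: 1.0 means perfect horizontal scaling; a game
# tuned for a fixed 3 cores shows no marginal gain beyond them, so its
# efficiency collapses as cores are added. Timings are hypothetical.

def efficiency(t1, tn, n):
    """t1: ms per frame on 1 core, tn: ms per frame on n cores."""
    return (t1 / tn) / n

frame_ms = {1: 60.0, 2: 33.0, 3: 24.0, 6: 24.0}  # flat beyond 3 cores
for n, tn in frame_ms.items():
    print(n, round(efficiency(60.0, tn, n), 2))
```

In this made-up profile, efficiency drops from ~0.91 at 2 cores to ~0.42 at 6, which is the "close to 0% scaling with additional cores" pattern described above.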
[size="1"]I don't suffer from insanity, I'm enjoying every minute of it.
The voices in my head may not be real, but they have some good ideas!