
These computers don't make sense

Started by August 26, 2012 09:37 PM
38 comments, last by szecs 12 years, 5 months ago
Servant of the Lord, you're a genius.

My guess: You didn't update the video card driver. Installing the latest driver for your video card will probably resolve your problem instantly. Yeah, it's that important.

I just did that through the NVIDIA site, and now Minecraft and every other 3D game run lag-free. I even tested it on the highest render settings, and it still makes no difference.

So yes, that does make me feel brain-dead, considering you guys were right all along and I wasn't listening.

As for the other stuff: yes, both are desktops, and yes, I did go through the process monitor almost daily.

Easiest way to make games, I love LÖVE && My dev blog/project

*Too lazy to renew domain, ignore above links


[quote name='Bacterius' timestamp='1346043485' post='4973665']
[quote name='Servant of the Lord' timestamp='1346042482' post='4973661']
Oh, did I mention it's a desktop? A 5 year old, 2 GB, Win 7 32-bit desktop... that, for games, runs better than the 2 year old, 4 GB, 64-bit Win 7 laptops my parents use. Integrated video cards are horrible for gaming.

Aw, dude, I know what you mean. I used to have a crappy laptop with a POS video chipset for several years; games were barely playable and it was such a pain! Fortunately, I eventually saw the light and built an actual desktop - possibly the best thing I ever did. The graphics card is the most important factor in how well a game plays - the processor comes in a distant second, immediately followed by memory (of course, there is a baseline requirement for processor and memory which must be met, but diminishing returns are hit much more quickly than with the graphics card).
[/quote]

Not just "much more quickly": games still scale like shit across multiple cores, and most of the CPU performance increase in the last few years has come in the form of more cores, more cores, and yet more cores (which most games simply aren't using all that well anyway). You can actually get worse game performance on the CPU side these days by buying an expensive newer 6-core CPU than you'd get with a cheap older 4-core CPU when all other parts are identical.
[/quote]

Isn't the idea of multi-core systems not so much for games to utilize all the cores, but for the OS to evenly distribute the load of several applications/games onto different processors?
Check out https://www.facebook.com/LiquidGames for some great games made by me on the Playstation Mobile market.

Isn't the idea of multi-core systems not so much for games to utilize all the cores, but for the OS to evenly distribute the load of several applications/games onto different processors?


That is the primary advantage, yes, but some types of applications do make almost full use of all your cores without much trouble (video encoding, for example). The point was that we really haven't had much of a CPU power increase that is useful for games in recent years, so from a games perspective a 2-3 year old CPU might be better/faster than a brand new CPU. Games are getting better at using multiple cores effectively but still have a long way to go. (Games are serial in nature and thus a fairly bad fit for parallel computation; the parts of a game that are suitable to run in parallel tend to be pushed onto the GPU these days anyway, physics for example.)
[size="1"]I don't suffer from insanity, I'm enjoying every minute of it.
The voices in my head may not be real, but they have some good ideas!

Games are serial in nature and thus a fairly bad fit for parallel computation

<offtopic> Developers are used to thinking of games as serial in nature. Games are no more intrinsically serial than anything else (video compression was a serial process for years too). Parallelism requires a shift in understanding and programming paradigm that many game developers are unwilling/unable to make. </offtopic>

Tristam MacDonald. Ex-BigTech Software Engineer. Future farmer. [https://trist.am]


isn't the idea of multi-core systems not so much for games to utilize all the cores, but for the OS to evenly distribute load of several applications/games onto different processors?
The idea is that CPUs used to get faster by boosting up the clock speed, but we've kind of hit a limit with that avenue of thinking, so these days we instead make them do more things at once. As usual, the idea of newer systems is to be faster than older systems.
It used to be that your programs simply got faster every time you bought a new CPU. This is no longer true for non-multi-threaded programs. If you want your game to run faster on new CPUs, you've got to write it in such a way that it scales to any number of CPU cores.
If you're writing your game for a specific minimum requirement (or fixed requirement, such as a games console), and you want to make 100% use of that minimum hardware, and that minimum hardware is multi-core, then you'll want to design your game to use the same number of threads as that hardware has cores. e.g. the Xbox 360 has 3 hyperthreaded CPU cores, so designs using 6 threads are optimal on it.
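As a rough sketch of that idea (in Python for brevity; `update_entities` is a made-up stand-in for real per-frame work): size a worker pool to the machine's core count and split the frame's workload across it. Note that CPython threads won't actually run CPU-bound work in parallel because of the GIL; the structure, not the speedup, is the point here.

```python
import os
from concurrent.futures import ThreadPoolExecutor

def update_entities(entities):
    """Update one chunk of entities; stands in for real game logic."""
    return [e + 1 for e in entities]  # hypothetical per-entity update

def parallel_update(entities):
    workers = os.cpu_count() or 1            # one worker per hardware core
    chunk = max(1, len(entities) // workers)
    chunks = [entities[i:i + chunk] for i in range(0, len(entities), chunk)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = pool.map(update_entities, chunks)  # chunks run concurrently
    return [e for part in results for e in part]     # reassemble in order
```

On a fixed console target you'd hard-code the worker count to the known core count instead of querying it.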
Games are serial in nature and thus a fairly bad fit for parallel computation
This is completely untrue. It's just a myth propagated by people who don't want to learn new skills.
As a simple counter-point -- all your typical serial OOP code could be implemented in terms of the Actor-model pretty easily, and while probably now inefficient due to most Actor-model frameworks sucking, it will scale to a huge number of CPU cores. Or just write your game in Erlang instead of C, etc...
Also - every console games programmer has been writing multi-threaded games since the 360/PS3 came out.
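A minimal sketch of the actor idea mentioned above (the message handling here is a made-up placeholder, not any particular framework): each actor owns its own state and drains its inbox serially, so many actors can run concurrently without locks on shared state.

```python
import threading
import queue

class Actor:
    """Toy actor: private state, one thread, messages processed in order."""
    def __init__(self, state=0):
        self.state = state
        self.inbox = queue.Queue()
        self._thread = threading.Thread(target=self._run, daemon=True)
        self._thread.start()

    def _run(self):
        while True:
            msg = self.inbox.get()
            if msg is None:        # poison pill shuts the actor down
                break
            self.state += msg      # hypothetical "handle message" step

    def send(self, msg):
        self.inbox.put(msg)        # non-blocking from the sender's side

    def stop(self):
        self.inbox.put(None)
        self._thread.join()
        return self.state
```

Because no one else ever touches an actor's state, adding more actors (and cores) needs no extra synchronization beyond the inboxes.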

[quote name='SimonForsman' timestamp='1346061256' post='4973719']Games are serial in nature and thus a fairly bad fit for parallel computation
This is completely untrue. It's just a myth propagated by people who don't want to learn new skills.
As a simple counter-point -- all your typical serial OOP code could be implemented in terms of the Actor-model pretty easily, and while probably now inefficient due to most Actor-model frameworks sucking, it will scale to a huge number of CPU cores.
[/quote]

Yes, one can split each simulation step into smaller chunks that can be processed in parallel (which many of us already do), but when each step in a simulation requires the previous step to be completed, you have an inherently serial system, and the kind of near-perfect horizontal scaling (>99%) we get fairly easily in some other types of applications is pretty much impossible to achieve. I wouldn't call that a myth: with games there will always be a need to synchronize the state, which prevents perfect scaling.
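The ceiling being described is Amdahl's law: if a fraction s of each frame is inherently serial (synchronization, state hand-off), the speedup on n cores is capped at 1 / (s + (1 - s)/n). A quick sketch of the arithmetic:

```python
def amdahl_speedup(serial_fraction, cores):
    """Upper bound on speedup when serial_fraction of the work
    cannot be parallelized (Amdahl's law)."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

# With just 10% serial work, 6 cores top out at 4x,
# and even infinitely many cores can never exceed 1/0.1 = 10x.
```

So even a modest synchronization cost puts >99% scaling out of reach, which is the point being made here.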

Yes, one can split each simulation step into smaller chunks that can be processed in parallel (which many of us already do), but when each step in a simulation requires the previous step to be completed, you have an inherently serial system, and the kind of near-perfect horizontal scaling (>99%) we get fairly easily in some other types of applications is pretty much impossible to achieve. I wouldn't call that a myth: with games there will always be a need to synchronize the state, which prevents perfect scaling.

Games aren't just a linear sequence of steps where each one depends only on the one that immediately preceded it. You've got lots of different systems that need to be updated, and points where data flows from one system to the next, yes. You can take each step of updating each system and split it across many cores. You can then interleave those steps so that a dependent task doesn't immediately follow its dependency. You can pipeline all of these steps in such a way that there are no stalls. And even if you can't remove some kind of stall-inducing pipeline bubble, you can spend some memory to add latency until you can remove it.
Yes, this is new ground for most, even though we were warned that we had to learn this stuff back in 2005, but PS3 developers have been working at it for years and getting to the point where they can get half a dozen 3GHz CPU cores working flat out with no stalling...
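One way to picture the pipelining described above (a toy Python sketch; `simulate` and `render` are hypothetical stand-ins for whole subsystems): render frame N while frame N+1 is being simulated, trading one frame of latency for overlap.

```python
from concurrent.futures import ThreadPoolExecutor

def simulate(state):
    return state + 1              # hypothetical simulation step

def render(state):
    return f"frame:{state}"      # hypothetical render step

def run_pipelined(frames, state=0):
    rendered = []
    with ThreadPoolExecutor(max_workers=2) as pool:
        for _ in range(frames):
            # render frame N while simulating frame N+1, concurrently;
            # the renderer reads last step's state, so there is no data race
            draw = pool.submit(render, state)
            step = pool.submit(simulate, state)
            rendered.append(draw.result())
            state = step.result()
    return rendered, state
```

This is the "spend some memory to add latency" trick in miniature: the renderer always consumes state that is one step stale, so the two stages never wait on each other within a frame.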

The statement that "Games are serial in nature and thus a fairly bad fit for parallel computation" is definitely a myth. It's complete hogwash. Ok, most older games, yes, are serial and do not scale -- that's true. But this is not true in general, or for modern games.
You've just got to look at some other paradigms besides only traditional OOD, such as Flow-based, Functional and DOD.

You can even take traditional serial code and automagically transform it into multi-core code without the game programmer knowing about it -- e.g. to determine the high-level flow of my game systems I execute a serial Lua script, buffer its calls using messages and futures, construct a DAG of these messages and their dependencies (inferred by which other messages the future-return-values are passed to) and then compile the DAG into a multi-core schedule of jobs to be executed.
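A hedged sketch of that DAG-scheduling idea (the job names and the wave-based scheduler are illustrative, not the poster's actual implementation): topologically group jobs into "waves" of mutually independent work, then run each wave across a thread pool.

```python
from concurrent.futures import ThreadPoolExecutor

def schedule_waves(deps):
    """Group a job DAG into waves; a job joins a wave once all its
    dependencies have completed in earlier waves."""
    done, waves = set(), []
    while len(done) < len(deps):
        wave = [j for j, d in deps.items() if j not in done and d <= done]
        if not wave:
            raise ValueError("cycle in job graph")
        waves.append(sorted(wave))
        done |= set(wave)
    return waves

def run_jobs(deps, jobs):
    results = {}
    with ThreadPoolExecutor() as pool:
        for wave in schedule_waves(deps):
            # every job in a wave is independent, so run them concurrently
            for job, out in zip(wave, pool.map(lambda j: jobs[j](), wave)):
                results[job] = out
    return results
```

The wave grouping is computed once (the "compile the DAG into a schedule" step); per frame you only execute it, so the scheduling cost doesn't recur.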

[quote name='SimonForsman' timestamp='1346076321' post='4973785']
Yes, one can split each simulation step into smaller chunks that can be processed in parallel (which many of us already do), but when each step in a simulation requires the previous step to be completed, you have an inherently serial system, and the kind of near-perfect horizontal scaling (>99%) we get fairly easily in some other types of applications is pretty much impossible to achieve. I wouldn't call that a myth: with games there will always be a need to synchronize the state, which prevents perfect scaling.
Games aren't just a linear sequence of steps where each one depends only on the one that immediately preceded it. You've got lots of different systems that need to be updated, and points where data flows from one system to the next, yes. You can take each step of updating each system and split it across many cores. You can then interleave those steps so that a dependent task doesn't immediately follow its dependency. You can pipeline all of these steps in such a way that there are no stalls.
Yes, this is new ground for most, even though we were warned that we had to learn this stuff back in 2005, but PS3 developers have been working at it for years and getting to the point where they can get half a dozen 3GHz CPU cores working flat out with no stalling...
[/quote]

There is still a difference between getting the CPUs working and getting work done; getting 6 cores working at 100% doesn't mean your application has perfect horizontal scaling (it only means that you're using all available resources to do something). CPU utilization is not a measurement of scalability; the amount of relevant work performed (relevant as defined by the application) is. The question here isn't whether it is possible to get X cores working at 100% (anyone can write a game that pushes 100 cores to 100%, that's not hard; the hard part is making all those 100% contribute to the overall performance and not just be overhead). The question is whether we can write a game that gets almost the same performance if we cut the frequency to 10% and push in 10 times the number of cores instead, and this still hasn't been done. (I don't doubt that PS3 developers are getting a lot out of the PS3 CPU, but that doesn't say anything about scalability; tuning a pipeline for a fixed number of CPU cores running a fixed architecture at a fixed frequency is not comparable to writing a horizontally scaling application.)

The entire point was that on the PC we have exactly that: expensive CPUs running more but slower cores than the cheaper models (and we will get far more cores in the future). I agree that well-written games should scale well enough to make the expensive CPUs better, but this is very rarely (pretty much never, when it comes to AAA games) the case, even though you don't have to get much better than 50-60% scalability for the expensive CPUs to dominate. Pretty much all games that take good advantage of multicore CPUs are optimized for a fixed number of cores (just like your PS3 example) and, way too often, for a fixed architecture. (One fairly recent game, Skyrim, is a perfect example: the PC version uses 3 cores reasonably well, adding more cores does jack shit, and one of the cores is used almost exclusively for graphical effects like shadows even if the installed GPU has more than enough power to deal with it. I don't doubt that they got the most out of the Xbox 360 hardware, but the game still has close to 0% scaling.)

[quote name='SimonForsman' timestamp='1346076321' post='4973785']
Yes, one can split each simulation step into smaller chunks that can be processed in parallel (which many of us already do), but when each step in a simulation requires the previous step to be completed, you have an inherently serial system, and the kind of near-perfect horizontal scaling (>99%) we get fairly easily in some other types of applications is pretty much impossible to achieve. I wouldn't call that a myth: with games there will always be a need to synchronize the state, which prevents perfect scaling.

Games aren't just a linear sequence of steps where each one depends only on the one that immediately preceded it. You've got lots of different systems that need to be updated, and points where data flows from one system to the next, yes. You can take each step of updating each system and split it across many cores. You can then interleave those steps so that a dependent task doesn't immediately follow its dependency. You can pipeline all of these steps in such a way that there are no stalls. And even if you can't remove some kind of stall-inducing pipeline bubble, you can spend some memory to add latency until you can remove it.
Yes, this is new ground for most, even though we were warned that we had to learn this stuff back in 2005, but PS3 developers have been working at it for years and getting to the point where they can get half a dozen 3GHz CPU cores working flat out with no stalling...
[/quote]

I think there's a really good video from GDC by some guys from Ubisoft on switching their engines to be more data-driven. One great benefit of switching to very data-driven code is that it tends to play nicer when you move stuff between platforms. One of the large benefits they talked about is that they could optimize for the PS3 really easily where needed without having to redo too much of the actual logic.
I once used OpenGL for a year and had 20 non-textured quads running around at 30 fps, and I said: "Crap. I will never make games. There's probably some hidden magic in programming that a mortal cheapshit like me can never get." Then, for some reason, I reinstalled my video card driver. 1000 fps, and I said, "Holy shit."

End of story.

This topic is closed to new replies.
