
How to control FPS

Started by December 05, 2016 05:50 AM
11 comments, last by TheStudent111 8 years ago

I'm confused about how developers have any control over the FPS of a particular game. What does it mean when a developer aims for their game to run at 60 FPS (as far as their code is concerned)? Isn't the FPS dependent on the hardware, especially the GPU? Since there are different PC configurations out there with different GPU speeds, how does a developer get their game to a specific frames-per-second target?

You pick a target hardware specification and a target framerate (which is equivalent to a target time-per-frame), and then make sure that your game runs within that time-per-frame budget on that hardware.

Most AAA games sell more copies on consoles than they do on PC, so the target hardware is the PS4/XbOne/etc...

If you're developing a game with a target frame-rate of 60Hz, that means that you have 16.667ms per frame. If you're making it for PS4, then you will constantly profile the game on the PS4 hardware and make sure that it can complete every frame in less than 16.667ms... If it fails to do so, then you optimize your code until it can do so.
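To make the budget concrete, here is a minimal sketch of checking each frame against a 16.667 ms budget. The update() and render() stubs are hypothetical stand-ins for a real engine's per-frame work:

```cpp
#include <chrono>
#include <cstdio>

// Hypothetical stand-ins for a real engine's per-frame work.
void update() { /* game logic would go here */ }
void render() { /* draw calls would go here */ }

int main() {
    using clock = std::chrono::steady_clock;
    const double budgetMs = 1000.0 / 60.0;  // 60 Hz target -> 16.667 ms per frame

    for (int frame = 0; frame < 1000; ++frame) {
        auto start = clock::now();
        update();
        render();
        double elapsedMs =
            std::chrono::duration<double, std::milli>(clock::now() - start).count();
        if (elapsedMs > budgetMs)  // over budget: this frame would miss vsync
            std::printf("frame %d over budget: %.3f ms\n", frame, elapsedMs);
    }
}
```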



It's dependent on how much work you are doing, so you can increase the FPS either by speeding up the hardware or by doing less work per unit of time. When games are initially written they usually perform awfully, because it is most efficient to go back and rewrite key parts of the code once you can profile it and figure out where the most inefficient spots are.

A target framerate of 60 frames per second is common in console games, where the TV has a refresh rate of about 60 Hz (actually 59.94 Hz).

60 frames per second, or about 16.6 milliseconds per frame, is a good goal but mostly arbitrary on modern computers.

Many PC monitors and gaming monitors have refresh rates of 75, 80, 120, even 150 frames per second. Some newer display technologies (G-Sync and FreeSync, for example) support variable refresh rates with a minimum number of milliseconds, so if a frame takes 19 ms or 23 ms or some other time above the minimum, the display will wait and update at that time. So the 60 FPS rate is historical and doesn't fit precisely, even though many cheap monitors still refresh at 60 frames per second.


Now, how can games control it? They cannot control it exactly, but we can do a lot to get it working well. First, there is the quality of the machine. PCs are not running a realtime operating system (meaning the PC does not allow precise scheduling) and are at the mercy of whatever other programs are running. If the person is running a CPU-intensive and memory-intensive task at the same time as the game, there isn't anything the game can do about it. So game developers provide a "recommended" configuration that should work.

The developers can monitor how much work the game is doing at any time, using tools called profilers. They can identify what is running slow and find ways to address it. If a search algorithm is taking too long, its work can be spread out over time. If an AI routine is too complex, it can be simplified or cut off when the results take too long. If processing a rendering batch is taking too long, the batch sizes can be reduced. The profilers help identify what is going on and provide accurate timings, and the team makes decisions based on that.
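As a rough illustration of "spreading the work out over time", here is a hedged sketch of a per-frame work queue with a time budget; the workQueue and its slice-of-work items are hypothetical stand-ins, not any particular engine's API:

```cpp
#include <chrono>
#include <deque>
#include <functional>

// Hypothetical work queue: each item is one small slice of a longer job
// (one step of a search, one pathfinding expansion, and so on).
std::deque<std::function<void()>> workQueue;

// Run queued slices until the per-frame budget is spent, then stop;
// whatever is left carries over to the next frame.
void runBudgetedWork(double budgetMs) {
    using clock = std::chrono::steady_clock;
    const auto start = clock::now();
    while (!workQueue.empty()) {
        workQueue.front()();   // do one slice of work
        workQueue.pop_front();
        const double spentMs =
            std::chrono::duration<double, std::milli>(clock::now() - start).count();
        if (spentMs >= budgetMs)
            break;             // out of time; resume next frame
    }
}
```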

When the issues are resolved, the game should run at a very consistent rate on the recommended hardware. Maybe a consistent 12 milliseconds per frame, which is plenty to hit the 16.6 ms needed for 60 frames per second. Games are monitored for performance as they are built, often starting out at under 1 millisecond per frame. Some beginners freak out when they see their framerate of 4000 frames per second suddenly drop to 2000 frames per second, but once they understand they went from 0.25 ms up to 0.5 ms it feels less frightening.
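Doing the arithmetic makes the point; milliseconds per frame is the measure that actually matters:

```cpp
#include <cstdio>

int main() {
    // The same information two ways: frames per second vs. milliseconds per frame.
    const double rates[] = {4000.0, 2000.0, 60.0, 30.0};
    for (double fps : rates)
        std::printf("%7.0f fps = %7.3f ms/frame\n", fps, 1000.0 / fps);
    // 4000 fps is 0.250 ms and 2000 fps is 0.500 ms: a scary-looking fps
    // drop, but only a quarter-millisecond change in frame time.
}
```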

Personally, we aim for about 8-10 milliseconds per frame, but every game and every team is different. Some games I've worked on had a target of 20 ms per frame so we could reliably hit 30 frames per second, and some were slow turn-based games with no soft targets at all: just render it once and be done.

So in terms of hitting a specific FPS target, it all boils down to optimizing your code in terms of AI, collision detection, and/or render code?

So in terms of PC games, do developers pick their target hardware specification based on the hardware they developed the game on?

Yes, to both.

The developers look at the code, all the code, and look at what is slowing it down. If AI is slowing it down, that is adjusted. If rendering is slowing it down, that is adjusted. If something else is slowing it down, that is adjusted. Exactly what changes are made, whether or not they would be called optimization, depends on the system. Sometimes it means swapping out algorithms for different algorithms, sometimes it means changing data structures, sometimes it means taking steps to better use the cache, or to use SIMD instructions, or something else entirely. But to hit the target, whatever systems are slow are addressed.

The target is often set by a mix of marketing, research, and whatever the developers have on hand. Sometimes the QA group is asked which old machines still run the game at a good level and which run it at a bad-but-playable level, and those become the recommended and minimum machines. Market research early on can tell whether the target demographic is likely to have such machines. If you're building a casual game for grandma, you cannot expect a high-end machine. If you're building an ultra-modern high-end game, you can require a substantial machine.

Why do some developers aim for 30 frames per second? What's the advantage of that? I would think a developer would want to aim for as high a framerate as possible, since if the target is 30 frames per second then the frame rate could drop below that (to the point where the perception of smooth motion is gone). Is it that the complexity of the game and the algorithms used will never allow the game to reach a frame rate over 30? Or is there something else at play?


The simple reason is that people who buy games (console games in particular) tend to be more swayed by the gameplay and the quality of the visuals than by the frame rate. At worst they complain about the FPS dropping a lot, but they already bought the game and are likely to buy a sequel if they like it. It's like arguing whether you should make a car drive a little faster or put more lights on it; if the lights sell more copies, then you'll always end up with the slower car.

Basically some marketing guy somewhere figured out they make more money aiming for 30 fps than 60 in many cases. Much to the dismay of PC gamers.

Why do some developers aim for 30 frames per second? What's the advantage of that?

Somewhat as Satharis mentioned.

The specific game I was talking about was for the Nintendo DS. That's a 66 MHz handheld. We were building a mixed 3D/2D game, which required a considerable amount of processing. We realized early on that getting the framerate to stick constantly to 60 Hz, the refresh rate of the screen, was going to be nearly impossible. Dropping frames appears as a stutter, so since we knew 16.6 ms was out as a performance target, we dropped to 30 ms as a worst-case target. Since some frames take longer than others as they do more processing, as we fine-tuned it most frames landed around 20-25 milliseconds, and we never dropped frames.

That is commonly why developers target a lower framerate: it is generally better to have a consistent framerate than to drop frames. Since 75 Hz is about the maximum on common displays, if you are well below 13 ms you should never miss a frame at 60 FPS or at 75 FPS. But if you would occasionally jump above that line, especially if the person has other programs running, you can cut the framerate in half and draw every other frame. If you stay below about 26.7 ms per frame you can display every other frame even on a 75 Hz monitor, and still have some wiggle room on a 60 Hz monitor rendering at 30 FPS.
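Here is a sketch of that "draw every other frame" decision. setSwapInterval() is a hypothetical wrapper, since the real call depends on the graphics API (DXGI's Present sync interval, OpenGL swap-interval extensions, and so on):

```cpp
#include <cstdio>

// Hypothetical wrapper around the platform's vsync control; the real call
// is API-specific (e.g. DXGI Present(syncInterval, ...) or a GL extension).
void setSwapInterval(int intervalInVblanks) {
    std::printf("presenting every %d vblank(s)\n", intervalInVblanks);
}

// Pick the presentation rate from a smoothed frame cost. Below budget:
// present every vblank (60 fps on a 60 Hz display). Above it: present
// every other vblank for a steady 30 fps, which looks smoother than
// irregularly dropped frames.
void chooseFrameRate(double smoothedFrameMs) {
    const double fullRateBudgetMs = 1000.0 / 60.0;   // ~16.7 ms
    if (smoothedFrameMs < fullRateBudgetMs * 0.8)    // comfortable headroom
        setSwapInterval(1);
    else
        setSwapInterval(2);
}
```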

As hardware gets faster and more powerful, it is generally easier to reach higher framerates. It still requires making smart choices, but it is easier to stay below 10 milliseconds when you have eight 4 GHz processors, 25 MB of cache, and a GPU speed measured in teraflops. Far easier than maintaining it on a 66 MHz processor with 4 MB total memory and a maximum of 2048 triangles every 60 Hz frame.

Basically what you need to do:

1. Check your graphics. They are performance hog #1!

The first thing to do is minimize your draw calls. Draw calls are basically instructions sent to the GPU to render a new object on the screen. They are one very big bottleneck, for multiple reasons. First, they have CPU overhead, and once your draw calls get into the multiple thousands, that work starts to fill the CPU cores, which are also needed for the scripts, AI, and physics logic running in your game. ESPECIALLY in engines that are not very multithreaded, where a main thread needs to do most of the work AND also prepares the draw calls for the GPU.

The second reason draw calls are such a bottleneck is that they often act as "context switches" for the GPU. Everything gets "reset": a new draw call might mean a new shader, new settings, and whatnot. That takes time.

Now, there are many things you can do to minimize draw calls, and with the newer APIs (DX12 and Vulkan) the per-call CPU overhead is much lower, and command submission can be spread across multiple threads.

But most important is to keep your scene under control: combine static objects, only render what is visible, and check your shaders (because complex multi-pass shaders can mean multiple draw calls per object).
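For example, one common way to cut the cost of those state changes is to sort the frame's draws so that draws sharing a shader and material run back to back. A minimal sketch, with a made-up DrawItem struct:

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Hypothetical draw item: which shader/material it needs, plus its mesh.
struct DrawItem {
    uint32_t shaderId;
    uint32_t materialId;
    uint32_t meshId;
};

// Sorting by shader, then material, means consecutive draws share state,
// so the renderer only switches state when the key actually changes --
// fewer expensive "context switches" between draw calls.
void sortForFewerStateChanges(std::vector<DrawItem>& items) {
    std::sort(items.begin(), items.end(), [](const DrawItem& a, const DrawItem& b) {
        if (a.shaderId != b.shaderId) return a.shaderId < b.shaderId;
        return a.materialId < b.materialId;
    });
}
```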

Then you need to check your post-processing effects. Some of them are REALLY expensive while not adding that much to the scene, or are no more effective than a cheaper method. That is why most PC games give you access to the settings, letting you turn post-processing effects on or off, and why post-processing on console games is CAREFULLY selected and tuned by the devs. On the last console generation, the reason AA was often missing completely was the weak performance of both the Xbox 360 and PS3, which were quite low-powered compared to PCs after just 2-3 years of their long cycle. Antialiasing tends to be REALLY expensive.

Another thing to keep in check is lighting. Realtime shadows especially can be extremely expensive... so if a game can get away with baked shadows for static objects and just a few realtime shadows for characters close to the camera, you save a TON of graphics OOMPH the GPU can spend on other things. Start adding realtime shadows to multiple lights, and you start to fry even the more powerful GPUs.

Then there are different types of renderers for different lighting scenarios. Forward renderers are usually faster, with less overhead than deferred renderers. But try to light a nighttime city scene with a forward renderer and you either need a TON of clever light faking, or the many lights will quickly slow the scene down.
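The reason is visible in the shape of the code: a classic forward pass shades each object once per light that touches it, so cost grows roughly as objects times lights. A toy sketch, where all the types are illustrative stand-ins:

```cpp
#include <vector>

// Illustrative stand-in types, not a real renderer.
struct Light { /* position, color, radius... */ };
struct Object { std::vector<Light> affectingLights; };

void shade(const Object&, const Light&) { /* per-object, per-light cost */ }

// Classic forward pass: each object is shaded once per light touching it,
// so total cost grows roughly as objects * lights. Deferred rendering
// decouples the two by shading lights against a G-buffer instead.
void forwardPass(const std::vector<Object>& visibleObjects) {
    for (const Object& obj : visibleObjects)
        for (const Light& light : obj.affectingLights)
            shade(obj, light);
}
```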

If you can get away WITHOUT lighting, do it! The cheapest lighting is no lights at all. That is why matcap shaders that fake lighting, or vertex colors, or textures with baked lighting are pretty common in mobile games. Depending on the game and the visual style, nobody might notice the missing lighting.

2. Physics. Can be expensive as hell:

The first question is: do you really NEED physics for that specific object or task? Or can you fake it without the player noticing? There are many things a physics engine can do to enhance a game, but overusing it means wasting CPU cycles (and physics runs 100% on the CPU, aside from the eye-candy GPU PhysX stuff exclusive to NVIDIA cards, which most developers simply ignore).

For example, you see many "developers" (I will not use less nice names) flogging their half-baked games on Steam using ragdoll physics for enemies being shot. It looks like crap (because that is not how people who are shot react), and everyone quickly sees it's just being used to save them the need to create animations for those events... just switching to ragdoll mode and imparting a force on the character is much simpler to do.

But it wastes precious CPU cycles for an effect that looks like crap. Playing animations is also not exactly free, especially for skinned meshes, but ragdoll physics adds physics calculations ON TOP of the skinning cost, so it's still a bad idea.

3. AI... it's also expensive.

Which is why most newer AI implementations use a fixed "budget" for their AI calculations. If the AI is not finished by the time the budget is used up, it goes with a simpler result. The exact algorithms vary, and I don't know too much about it. I guess you can reduce the number of times AI is calculated per second, which might still be enough for the AI to look convincing. Or maybe you have an iterative solver, just like with physics, that gets more accurate with every iteration... then you just take an earlier, less accurate result that might be "good enough".
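One plausible shape for such a budget, sketched with a hypothetical "anytime" planner that can be stopped after any iteration and still return its best result so far:

```cpp
#include <chrono>

// Hypothetical "anytime" planner: each iteration refines the current best
// plan, so stopping early still yields a usable (if rougher) result.
struct Plan { int quality = 0; };

Plan improveOnce(Plan p) { ++p.quality; return p; }  // stand-in refinement step

Plan planWithBudget(double budgetMs) {
    using clock = std::chrono::steady_clock;
    const auto start = clock::now();
    Plan best;  // a cheap default: the "good enough" fallback
    while (std::chrono::duration<double, std::milli>(clock::now() - start).count()
           < budgetMs)
        best = improveOnce(best);
    return best;  // whatever the budget allowed
}
```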

Different AI algorithms have very different runtime costs, sometimes for not-so-different results. A good AI programmer knows about these different ways to achieve the same result and might pick a faster algorithm because the game he is implementing the AI for works just as well with the cheaper one.

4. Lastly, game logic.

For most games, AFAIK, this is not so much of a problem, as game logic is often very simple. Some games, like simulators, still tend to have rather heavy game logic, which might be tightly coupled to physics and whatnot. In some engines (Unity, for example), game logic HAS to run on the main thread if it interacts with the engine's game objects. There, better programming might save a ton of CPU time for other tasks... some game logic can be bundled and pushed to another thread to make better use of multiple cores.
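As a rough sketch of "bundling game logic and pushing it to another thread", assuming the bundled work touches no engine-owned objects (simulateEconomy is a made-up example, not any engine's API):

```cpp
#include <future>
#include <vector>

// Made-up example of bundled game logic that touches no engine-owned
// objects, so it is safe to run off the main thread.
int simulateEconomy(std::vector<int> towns) {
    int total = 0;
    for (int t : towns) total += t;  // stand-in for heavier simulation work
    return total;
}

void frame(const std::vector<int>& towns) {
    // Kick the bundled work to another core...
    auto result = std::async(std::launch::async, simulateEconomy, towns);
    // ...the main thread keeps working with the engine's objects here...
    int economy = result.get();  // ...and joins before anything depends on it
    (void)economy;
}
```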

5. With all the CPU and GPU bottlenecks, you shouldn't forget memory.

There are still a ton of platforms with rather limited memory resources. And depending on the platform, if you run out of memory you get a crash... or the system slows to a crawl because of swapping.

Thus, keeping memory usage in check is also a way to ensure your game runs fast and without crashes.
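A very simple way to do that is to track usage per category against a fixed budget. This is a sketch with made-up numbers, not any engine's actual allocator:

```cpp
#include <cstddef>
#include <cstdio>

// Hypothetical per-category memory tracking against a fixed budget, the
// simplest way to notice you are drifting toward a crash or into swapping.
struct MemoryBudget {
    const char* name;
    size_t budgetBytes;
    size_t usedBytes = 0;

    void allocate(size_t bytes) {
        usedBytes += bytes;
        if (usedBytes > budgetBytes)
            std::printf("WARNING: %s over budget (%zu / %zu bytes)\n",
                        name, usedBytes, budgetBytes);
    }
    void release(size_t bytes) { usedBytes -= bytes; }
};

int main() {
    MemoryBudget textures{"textures", 256u * 1024 * 1024};  // e.g. a 256 MB cap
    textures.allocate(300u * 1024 * 1024);  // triggers the warning
}
```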

