
Fix Your Time Step - Need Help

Started by December 04, 2016 12:10 AM
17 comments, last by Norman Barrows 7 years, 11 months ago

Hello everyone! I'm new to SFML, but come from using XNA, Allegro, and some SDL back in the day.

I've always been used to having a function that kept my games locked at 60 FPS, with all gameplay logic and rendering done in the same loop, no independent timers, 100% based on frames rendered.

I'm extremely interested in duplicating the timestep from the article http://gafferongames.com/game-physics/fix-your-timestep/ but I'm running into some issues, as I've never used delta time or any similar method before; everything was handled for me.

NOTE: I turned on VSYNC in my Window Class to keep the rate at 60 FPS.

My current code for main.cpp:


#include "Game.h"
#include <iostream>

int main()
{
    // Time Test
    sf::Clock MainClock;
    double Time = 0.0;
    const double DeltaTime = 0.01;

    double CurrentTime = MainClock.getElapsedTime().asSeconds();
    double Accumulator = 0.0;

    Game GameTest;

    while (GameTest.IsWindowOpen())
    {
        double NewTime = MainClock.getElapsedTime().asSeconds();
        double FrameTime = NewTime - CurrentTime;
        CurrentTime = NewTime;

        Accumulator += FrameTime;

        while (Accumulator >= DeltaTime)
        {
            // Game Loop
            GameTest.Input(DeltaTime);

            GameTest.Logic(DeltaTime);

            // AI
            // Physics

            Accumulator -= DeltaTime;
            Time += DeltaTime;
        }
    
        // Render Graphics
        GameTest.Render();

        // FPS - Shows in Console Window
        std::cout << "FPS: " << 1.0 / FrameTime << std::endl;
    }

    return 0;
}

My game.cpp code for moving the sprite:


// Input
void Game::Input(double TempUpdatesPerSecond)
{
    // Keyboard Movement for guy1 --- TEST !!!
    if (sf::Keyboard::isKeyPressed(sf::Keyboard::Up))
    {
        guy1.move(0, -32 * TempUpdatesPerSecond);
    }

    if (sf::Keyboard::isKeyPressed(sf::Keyboard::Down))
    {
        guy1.move(0, 32 *TempUpdatesPerSecond);
    }

    if (sf::Keyboard::isKeyPressed(sf::Keyboard::Right))
    {
        guy1.move(32 * TempUpdatesPerSecond, 0);
    }

    if (sf::Keyboard::isKeyPressed(sf::Keyboard::Left))
    {
        guy1.move(-32 * TempUpdatesPerSecond, 0);
    }
}

From the article, I'm having trouble understanding what I'm supposed to do with Time. Why is it passed into my Update function along with the delta time? Do I have my variables set up properly?

From the Article - Second Last Code Snip:


double t = 0.0;
const double dt = 0.01;

double currentTime = hires_time_in_seconds();
double accumulator = 0.0;

while ( !quit )
{
    double newTime = hires_time_in_seconds();
    double frameTime = newTime - currentTime;
    currentTime = newTime;

    accumulator += frameTime;

    while ( accumulator >= dt )
    {
        integrate( state, t, dt );
        accumulator -= dt;
        t += dt;
    }

    render( state );
}

I also have a cout to show how many frames per second are being rendered, to make sure it matches vsync.

Since DeltaTime = 0.01; does this mean I'm moving at 0.01 * units per frame, at a maximum of 0.06 * units per second assuming steady 60 FPS with Vsync?

I would like to get my code working properly to match the second-to-last part of the article before learning how to do the final step. I also have no idea about interpolation.

I was also reading online that delta time is a very poor technique to use, and that you should only program based on a fixed number of ticks. My concern is making sure my games run well on 60 Hz or 200+ Hz monitors while keeping the logic at a fixed update rate and using any leftover time to render as many frames as possible. I just need some guidance along the way.

Thank you!

I also have no idea about interpolation.

The simulation is done at a fixed rate, 30fps/60fps/whatever. The rendering is done whenever you can. The only thing that is difficult to remember with interpolation is the name of the term. Basically it's pretty simple as a concept.

For example, imagine you update your simulation 2 times per second, so 0.5 sec per update, right? And the only thing you do in your game is to move a black dot on the screen with the W,A,S,D keys.

Imagine the dot has a speed of 4.0f per update. Now if you hold the W key for one second, the dot would have travelled 8.0f, right? (2 updates per second.)

So your update function is like this:


// If we have accumulated enough time to reach the update interval, advance the simulation.
// If not, the if-statement fails and we just execute the renderScene function below.
if (secondsPassed >= SEC_PER_UPDATE) // in this case, SEC_PER_UPDATE is 0.5
{
    updateSimulation();              // update dot position
    secondsPassed -= SEC_PER_UPDATE;
}

// Else just render (render whenever you can)
renderScene(secondsPassed / SEC_PER_UPDATE); // render dot

// The value passed to renderScene says where exactly we are
// (between the previous update and the next one).
// If the value is 1/2, then we are exactly in the middle between the two updates.

So, continuing with our dot. Normally what happens is that your computer executes all the stuff in updateSimulation() very fast; that's why most of the time it will just render.

Imagine that you render 10 times for every time that you update the simulation. That means that the dot position will be updated only 2 times per second, and rendered 20 times per second.

Now you probably say to yourself: OK, if the dot position was at x = 0.4f last update and it is at 0.8f this update, and I render 10 times between the two updates, how the hell would I come up with the proper in-between values every time I render?

Easy: 0.4f + (0.8f - 0.4f) * (a number from 1/10 to 10/10, depending on which of the in-between renders we are on)

and this simple operation we did now is called interpolation.

this is what we did: previousPosition + ( nextPosition - previousPosition )*interpolationRatio

So you need to store your previous and your next dot position, and you need to know the exact time at which the render function is called; that's why you pass it in as an argument. And you divide it by the update interval, because you need the ratio of where you are between the previous and the next update.

And then you just use that formula and use the value you passed as an argument to the renderScene function as the interpolationRatio. This way, you update at a constant frame rate, but you render based on your pc power. The faster your pc is, the more times you render, the more times you interpolate, and the smoother gameplay you get.


to convert a game loop with a framerate limiter (vsync in this case) to "fix your timestep":

1. your update code is currently designed to run at 60Hz, or 16 2/3 ms per update. so DT = 16 2/3 ms.

2. ET is simply how long it takes to render, or the time from the beginning of one update to the beginning of the next (to capture render, input, and update time for greater accuracy).

3. start a timer, call render, get ET

4. pass ET into update()

5. in update:

accumulator += ET;

while (accumulator >= DT)
{
    accumulator -= DT;
    run_original_update_code();
}

that's all there is to it.

note that if accumulator >= DT*2 you drop frames! a cap on ET is the usual workaround. an ET cap temporarily puts the game loop back into lockstep synchronization: one render, one input, and one update per loop iteration.

that's it for update. then you have to modify render.

in update, you'll need to add new variables for an entity: previous_position and previous_orientation. when you update an entity, you copy the current location and orientation to the previous location and orientation, before updating.

when it's time to render, you tween between the previous and current location and orientation of the entity to get the location and orientation for drawing. the amount of tween is accumulator / DT. so as accumulator goes from zero to DT, the tween goes from the previous to the current location and orientation.

me personally, i just use a framerate limiter and get on with life. other than bragging rights, there's really no need for a game to run at 120Hz vs 60Hz. fact is anything 15Hz or above is playable. movies are only 24Hz, and you don't hear people complaining that movies aren't smooth. for a long time the game world was perfectly content with 30fps. there comes a certain point where fast enough is fast enough, and you don't really get much by running even faster.

Norm Barrows

Rockland Software Productions

"Building PC games since 1989"

rocklandsoftware.net

PLAY CAVEMAN NOW!

http://rocklandsoftware.net/beta.php

fact is anything 15Hz or above is playable. movies are only 24Hz, and you don't hear people complaining that movies aren't smooth. for a long time the game world was perfectly content with 30fps. there comes a certain point where fast enough is fast enough, and you don't really get much by running even faster.

1) 15Hz being playable is not a fact*, it's extremely subjective. There is also a massive difference between playable and "does not greatly benefit from higher framerates". Many games would be radically different if they had to operate at 15Hz. There are lots of games where framerate matters a lot (anything fast-paced, twitch-based, etc.), and there are several games where frame rate doesn't matter all that much (e.g. chess). But even in the case of simple computer chess games, a framerate higher than 15Hz gives a better experience. Lowering input latency, responding quicker to your actions, etc. Actual fact is, a lot of people are turned away from games that have a low framerate.

*You could argue that since you can play it, it's playable... but then a game would also be playable at 0.01Hz.

2) Movie framerates and game framerates are not interchangeable, for a lot of reasons. Input, and having to respond to what the player is doing, is a major distinction. Additionally, even if we agree blindly with your statement that 24Hz is good enough for movies (and thus games), that does not mean 15Hz (an almost 40% drop in framerate!) is good enough. By that logic, where does it stop? If 15Hz is good enough, why isn't 9Hz (roughly the same percentage drop)? And if it were, why isn't ~5? Also, you do hear people complaining about the framerate in movies, wanting it increased.

3) For a long time, that was indeed true. That does not mean it is today, nor does it mean that going below what was accepted as OK before is a good idea moving forward. "Yeah, this was good enough before, now we'll give you only half". Most game developers seem to be pushing for 30Hz as their target, so I don't think the minimum framerate is increasing any time soon (except for high-end/niche consumers), but the flipside is they are not intentionally decreasing past that. Developers would rather mess with resolution and effects than go below 30Hz.

4) I agree that at some point, fast enough is fast enough. But I think it's silly to claim that 15Hz is all you need. For me, I would love if all games were 60Hz. I don't have any experience playing games with a higher framerate than that, so I can't tell if I would also love for them to be 144Hz or whatever. I do know that it would be wrong for me to say "this is good enough for me, so this is good enough for all games and all gamers".

More to the point, a game that is good in 15Hz, is most likely as good (or better) at higher framerates. A game designed for and good at 60Hz, is not necessarily any good at 15Hz. To me, it seems like implementing an artificial framerate like you're describing is only good for invoking a specific retro/nostalgic feel. There's nothing wrong with that, but it absolutely flies against the "fact" that 15Hz is good enough for all games.

Hello to all my stalkers.

I certainly wouldn't agree that a set update rate is "good enough" if it caps what people can see. Most of the reason for buying a 120hz or more monitor today and having a rig to play it is so you can enjoy the better experience it provides. Not much better way to make a bad impression than to lock people out of the option of benefiting from it.

Now on some stuff it might not matter that much, 2d games can certainly get away with it easier. 3d I would be much more concerned with.

I certainly wouldn't agree that a set update rate is "good enough" if it caps what people can see. Most of the reason for buying a 120hz or more monitor today and having a rig to play it is so you can enjoy the better experience it provides. Not much better way to make a bad impression than to lock people out of the option of benefiting from it. Now on some stuff it might not matter that much, 2d games can certainly get away with it easier. 3d I would be much more concerned with.

Huh, isn't that like the entire idea about fix your timestep? You lock that logic at a fixed rate, but render how often your monitor can display. Sure, if you don't do the last step regarding interpolation, then if you render at 120 hz, but your update rate is 60 hz, then you are rendering every frame twice. But if you use interpolation, you can still update just 60 times per frame, but interpolate between the frames to get 120 different images


I certainly wouldn't agree that a set update rate is "good enough" if it caps what people can see. Most of the reason for buying a 120hz or more monitor today and having a rig to play it is so you can enjoy the better experience it provides. Not much better way to make a bad impression than to lock people out of the option of benefiting from it. Now on some stuff it might not matter that much, 2d games can certainly get away with it easier. 3d I would be much more concerned with.

Huh, isn't that like the entire idea about fix your timestep? You lock that logic at a fixed rate, but render how often your monitor can display. Sure, if you don't do the last step regarding interpolation, then if you render at 120 hz, but your update rate is 60 hz, then you are rendering every frame twice. But if you use interpolation, you can still update just 60 times per frame, but interpolate between the frames to get 120 different images

True.

Satharis, you can theoretically update at a variable rate, but I don't recommend it, because you will get really weird bugs, like going through a closed door or falling through the ground when jumping, due to an incorrectly bounded deltaTime and a slow PC. And you need to be very careful when coding. I don't speak from experience, because I never tried it; it sounds too frustrating to want to try.

In conclusion: variable time step makes the game non-deterministic and physics and networking become harder and more prone to bugs. (not my experience)

In conclusion: variable time step makes the game non-deterministic and physics and networking become harder and more prone to bugs. (not my experience)

In my own experience, this is indeed the case. Had some nasty collision-detection bugs due to variable frametime, and even aside from that, debugging collision/physics code became much easier once update steps were actually deterministic. So I would recommend it for that alone, even if you do not go to the expense of actually using full interpolation.

Huh, isn't that like the entire idea about fix your timestep? You lock that logic at a fixed rate, but render how often your monitor can display. Sure, if you don't do the last step regarding interpolation, then if you render at 120 hz, but your update rate is 60 hz, then you are rendering every frame twice. But if you use interpolation, you can still update just 60 times per frame, but interpolate between the frames to get 120 different images

Yes, but that's the key part: if you aren't interpolating, you are quite literally capping the frame rate to the update rate. It might say it is rendering at 120 Hz, and technically it is, but everything that happens on screen is repeated twice, so essentially there is no practical difference from 60 Hz. As far as I know there aren't many methods to get around that besides interpolation, and there are a lot of negatives to using a variable delta time.

The two fundamental aspects of good visuals are how fast everything updates and how linear that update is. For instance, 30 fps is generally visibly worse than 60 fps, but 60 fps that randomly dips down to 40-45 fps every few moments often looks worse than a smooth and uninterrupted 30 fps. The problem with fixed updates is that the render rate is still variable, whether it's over 60 fps or not. Unless you have vsync on, the monitor will happily redraw itself whenever it pleases: even if you had been doing one update per frame, it can suddenly show the same frame twice, then two updates squashed into one frame (because the monitor and update rate are desynchronized). That's a fundamental problem, and why just using a fixed timestep doesn't magically fix motion. Vsync can help with that, but then issues develop if the frame rate ever drops below vsync.

So you're left with a few options: a variable or fixed update rate (technically you can use both, and I have seen games do that, but that's generally to do update work more often rather than to update anything visual in the game), and then interpolation or not. The problem with not using interpolation is that nothing on screen moves more often than the update rate; you are fundamentally capped at it.

Satharis, you can theoretically update at a variable rate, but I don't recommend it because you will get really weird bugs, like going through a closed door or falling off the ground when jumping due to incorrectly specified deltaTime range and a slow PC. And you need to be very careful when coding. I don't speak from experience because I never tried it, it sounds too frustrating, I don't want to try it out.

The main problem with using a variable update rate is that it is non-deterministic. That hurts for things like networking, or if you want to generate replays. Time in a computer is just an artificial construct, and you can process it however you want. For instance, fixed timesteps are generally done using an accumulator: you accumulate time and then run a number of updates to drain that time and get caught up to just before where time is in reality. You can do the same thing with a variable timestep if you wanted.

For instance, you can add every frame's elapsed time to the accumulator and then run updates at whatever the last delta time was. Why would you want to do that? It's one way to combat the issue you're talking about. If a variable time grows too large (say you hit a breakpoint in a debugger), time continues ticking on but the game does not. Get a big enough delta time and a player will walk through an entire map instantly without colliding. Of course there are ways to combat that too, like continuous collision detection, but I digress. The trick is to set a cap on the variable time. Say one second: if more than a second passes, you simply eat up a second at a time from the accumulator and then eat up the remainder in a final update call.

Of course a method like that has pitfalls too, like the spiral of death, but that can happen with fixed updates as well unless you account for it. The point I'm making is that there are multiple ways to handle the concept of time. Fixed timesteps are pretty good; if you set the update rate high (say 120 Hz or something), that solves the issue for most people. For a 2D game I'd recommend that method even if you don't want to bother with coding interpolation, but interpolation does help get rid of jitter. The point is not to assume people don't want to see a game moving at more than 60 fps, that's a silly assumption to make.

The point is not to assume people don't want to see a game moving at more than 60 fps, that's a silly assumption to make.

True, I know people that can't stand playing on 60 fps because they are used to 120.

This topic is closed to new replies.
