
Fixed timestep - finding out how often you can run update

Started by Juliean December 04, 2022 04:03 PM
7 comments, last by JoeJ 2 years ago

Hello,

this is a bit of a follow-up to the issue I had with VSync degrading my application's frame-rate. After thinking about it more, it does make sense, and I'm wondering if there is a way around it. But let me explain.

I'm using a fixed timestep (https://www.gamedev.net/forums/topic/713460-dxgi_swap_effect_flip_discard-forces-vsync/) for my entire game, with rendering using the alpha-interpolation described in the last paragraph. The whole engine/game is deterministic to the point where I can record inputs + rng-seeds and replay them to get a frame-perfect replay, which I use, for example, to validate that nothing broke in a build before I ship it to the players.

To run this validation, I increase the number of times the fixed update runs per frame by a given factor; depending on the build/platform, this can go > 100x as fast. How fast it can go is obviously a function of how long each individual update takes. However, since I continue rendering at a non-sped-up rate, herein lies a problem. I achieve the speedup by multiplying the "dt" that feeds the accumulator by a maximum speed-up value (which I set to x256), and adjusting the maximum update duration as well (to prevent deadlocking when the update time becomes too large, which it usually does, as x256 is not achievable in most scenarios). The whole calculation looks like this:

dt *= speed; // speed => speed-up factor, for our intents and purposes == 256.0f
{
	const auto maxDuration = std::min(1.0f, m_maxUpdateDuration * speed); // maxUpdateDuration == 0.2f
	if (dt > maxDuration)
		dt = maxDuration;
}

m_accumulator += dt;

while (m_accumulator >= m_timeStep)
{
	m_states.Tick(m_timeStep);
	
	m_accumulator -= m_timeStep;
}

The one problem I'm facing is that this does not allow me to run as many updates as I'd like in cases where the actual updates are fast enough to fit into the desired budget of 50*256 updates/second (50 is my fixed-update rate). This is because the basic calculation of this fixed-update loop factors in the time it takes to render, as it needs to make sure to run the fixed update exactly 50 times per second when there is no speedup present. But with VSync on, the measured frame-time will always be about 0.0166 s (16.6 ms) on a 60 Hz monitor, due to waiting on the vsync in Present. Since the calculation has no way to distinguish how much of that time was spent idling in Present, it assumes the entire frame took 0.0166 s including rendering, and thus runs into the deadlock-prevention code, even though it could technically run the update at a much higher frequency - all that would shrink is the idle time at the end of Present.

Sooo… I haven't been able to wrap my head around a good solution here. I don't think I can just entirely ignore the time it takes to Present in the calculation, because if the framerate were actually limited by rendering work, I would overshoot my budget. I'm thinking that maybe my approach for calculating the speedup itself is flawed, but I can't think of a better way to calculate it. Anyone with game-loop/physics-engine experience got some ideas?

TL;DR: I want to calculate how often I can run a fixed-update step while still rendering at the desired framerate, as closely as possible.

Juliean said:
I think I cannot just entirely ignore the time it takes to Present from the calculation

But you could assume some constant time the renderer takes? Would this somewhat help?

It's not clear to me what you want. Seemingly you want to update the game as often as possible to test things, but still get some visual feedback. Only for your own private purposes, or does this somehow affect other players?
But then you could just update N times, render once, and repeat. Probably with Vsync off.

I did something similar to debug physics. I can make the simulation faster or slower than realtime by a given scaling factor, and i can also limit the FPS to some constant like 30 or 60 to see how it feels to play at such rate.
But because i have added and extended this with time, i already did run out of meaningful variable names. So i do not understand my own code, and likely can not help. This topic feels just too confusing to discuss it with natural language at all, in my experience. Luckily it's not a hard problem. However, i did not mind to measure rendering time properly and just use a constant instead, and for my debugging needs this works well enough.


JoeJ said:
But you could assume some constant time the renderer takes? Would this somewhat help?

I guess I could assume some factor, but the rendering speed is (slightly) affected by what's going on in the scene. In my application it's mostly when there are more sprites on screen - boss fights, or certain enemies that spawn a lot of projectiles. Though since this is all part of the variable-update step, it is not affected by the speedup, and the time rendering takes should be minor compared to using essentially the whole available (single-core) CPU performance for blasting through as many updates as possible.

JoeJ said:
It's not clear to me what you want. Seemingly you want to update the game as often as possible to test things, but still get some visual feedback. Only for your own private purposes, or does this somehow affect other players? But then you could just update N times, render once, and repeat. Probably with Vsync off.

You summed up what I want pretty well. Optimally, I would like to update just as many times as possible without running into render slowdowns. I initially believed I was experiencing general slowdowns when running the uncapped update loop (before deadlock prevention), but I concede - after testing it again, the uncapped variant actually executed faster overall; only the rendering became really stuttery.

As to your question: it is only for internal testing purposes. I have two main features that rely on this:

  1. Turbo mode, akin to what an emulator can do. This uses the fixed-update speedup as well, in order to stay deterministic - there's no use in running turbo mode if the results are all messed up due to large timesteps; I had the pleasure of experiencing something like that in Unity once.
  2. Automated tests, which should run as fast as possible while still allowing me to view the results on the monitor

Case 1) obviously hard-requires a stable framerate. Though in fairness, my one-button turbo uses x16 so that I can still react to the gameplay, and that is a rate I can achieve reliably.
Case 2) could actually live without achieving 60 FPS for rendering. Though I don't like the slideshow-like appearance of what I saw after removing the frame cap, in all honesty it would be OK to just update at x256 and throw in as many frames as fit in between - at such speeds (realistically, after my system upgrade, I get 170x), I can't fully grasp what's going on by watching anymore, regardless.

JoeJ said:
But because i have added and extended this with time, i already did run out of meaningful variable names. So i do not understand my own code, and likely can not help. This topic feels just too confusing to discuss it with natural language at all, in my experience. Luckily it's not a hard problem. However, i did not mind to measure rendering time properly and just use a constant instead, and for my debugging needs this works well enough.

I mean, if I discount VSync (which I can, since it only affects me), then I could actually measure the time it takes to render (and to process the non-fixed-update stuff as well). I'm still struggling to put the formula together in my head, but I think I'm onto something. If I figure out how to do it this way, I should be able to set a mode where the game updates as often as possible while keeping the desired framerate (which I could probably even tune down to 30, since it doesn't matter too much). That seems a bit cleaner to me than hardcoding a magic number for how often it should update and sprinkling in the occasional render.

Thanks either way - you've already given me some good ideas. I'll update this post when I find the actual working solution.

Should be easy. You could subtract the rendering time of the last frame from your target frame time, then keep updating the game until the remaining time budget is used up.

Just, measuring render time isn't that easy after all. I was looking at my code closer, and found i actually try this by measuring the time difference at each step of the game loop (which also issues the rendering commands).
Subtracting physics time as well, i should get the rendering time. But it does not work precisely as expected. The constant i saw is not the assumed rendering time, but some error compensation. Actually 1,000,000 nanoseconds (1 ms).
I do not understand why that's needed. The CPU clock is super accurate nowadays. So where does this error come from? A mystery surrounded by ugly code. : )

JoeJ said:
Should be easy. You could subtract the rendering time of the last frame from your target frame time, then keep updating the game until the remaining time budget is used up.

That sounds like what I had in mind. Then I'll need a separate version of the fixed-update loop for this mode, but it sounds like that's needed anyway.

JoeJ said:
Just, measuring render time isn't that easy after all. I was looking at my code closer, and found i actually try this by measuring the time difference at each step of the game loop (which also issues the rendering commands). Subtracting physics time as well, i should get the rendering time. But it does not work precisely as expected. The constant i saw is not the assumed rendering time, but some error compensation. Actually 1,000,000 nanoseconds (1 ms). I do not understand why that's needed. The CPU clock is super accurate nowadays. So where does this error come from? A mystery surrounded by ugly code. : )

I'm interested to see if I'll run into similar issues. At least anything related to rendering is completely separated from the fixed-update game logic, so I'm hopeful that if I just measure everything around it, I'll get correct results. But we'll see how it goes.

First results look promising, and it's actually really as simple as you put it:

core::Timer timer; // abstraction for high_resolution_clock
const auto timeStep = m_timeStep;
const auto frameBudget = timeStep - lastRenderTime;
do
{
	m_state.Tick(timeStep);
} while (timer.Duration() < frameBudget); // TODO: subtract time taken to render

Now, the actual (render) framerate tends to be a bit stuttery. I attribute that to the fact that multiple seconds' worth of content are being executed per frame, which makes predicting the expected render time via the last frame's time a bit unreliable. Introducing a bias (lastRenderTime * X) makes it a bit more stable, but slows down actual execution time (which is what I'm trying to maximize). Also, in general, the render time I've seen is really small compared to the timeStep. I'll be testing the results in an actual build (the editor adds overhead too) for a final verdict. I could even leave out the calculation of lastRenderTime entirely for the game I'm currently making, but I do want the engine to be a bit future-proof for more render-heavy projects.


Ok, holy shit, for release player-builds this is even more awesome than I expected. I'm now getting ~500x execution speed (up from 168x); an entire 13h test actually finished in 88s. The difference in the debug editor was very minor, but for the most important part of this whole system, it's really working great. I can also still very roughly keep visual track of what's going on. So thanks again @joej for putting me on the right track =) I now feel kinda stupid for spending a few days implementing a pooling optimization for my ECS, and I'm definitely not going to optimize my shitty O(n^2) collision detection now that I'm getting those kinds of speeds lol.

Juliean said:
I'm definitely not going to optimize my shitty O(n^2) collision detection now that I'm getting those kinds of speeds lol.

Now i regret being of help, if so at all.
Awaiting Switch port… ;D

This topic is closed to new replies.
