Hello,
this is a bit of a follow-up question to the issue I had with VSync degrading my application's frame-rate. After thinking about it more, it does make sense, and I'm wondering if there is a way around it. But let me explain.
I'm using a fixed timestep (https://www.gamedev.net/forums/topic/713460-dxgi_swap_effect_flip_discard-forces-vsync/) for my entire game, with rendering using the alpha-interpolation described in the last paragraph. The whole engine/game is deterministic to the point where I can record inputs + RNG seeds and replay them to get a frame-perfect replay, which I use, for example, to validate that nothing broke in a build before I ship it to the players.
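For reference, the render side does the usual accumulator-based interpolation between the last two simulated states. A minimal sketch (placeholder names, not my actual code):

// alpha = how far the accumulator has progressed towards the next fixed step
const float alpha = m_accumulator / m_timeStep;
const State renderState = Lerp(m_previousState, m_currentState, alpha);
Render(renderState);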
To run this replay validation, I increase the number of times the fixed update runs within the same frame-time by a given factor; depending on the build/platform this can go > 100x as fast. How fast it can go obviously depends on how long each individual update takes. However, since I keep rendering at a non-sped-up rate, herein lies a problem. I achieve the speedup by multiplying the "dt" that feeds the accumulator by a maximum speed-up value (which I set to x256), and I scale the maximum update duration accordingly (to prevent deadlocking when the update time becomes too large, which it usually does, as x256 is not achievable in most scenarios). The whole calculation looks like this:
dt *= speed; // speed => speed-up factor, for our intents and purposes == 256.0f
{
    const auto maxDuration = std::min(1.0f, m_maxUpdateDuration * speed); // m_maxUpdateDuration == 0.2f
    if (dt > maxDuration)
        dt = maxDuration;
}
m_accumulator += dt;
while (m_accumulator >= m_timeStep)
{
    m_states.Tick(m_timeStep);
    m_accumulator -= m_timeStep;
}
The one problem I'm facing is that this does not allow me to run as many updates as I'd like in cases where the actual updates are fast enough to fit into the desired budget of 50*256 updates/second (50 being my update framerate). This is because the basic calculation of this fixed-update loop factors in the time it takes to render: it needs to make sure the fixed update runs exactly 50 times per second when there is no speedup present. But if we have, say, VSync on, the measured frame-time will always be about 0.0166s (16.6ms) on a 60Hz monitor, due to waiting on the vsync in Present. Since the calculation has no way to distinguish how much of that time was spent idling in Present, it assumes the entire frame took 16.6ms of actual work, including rendering, and thus runs into the deadlock-prevention clamp, even though it could technically run the update at a much higher frequency; all that would do is remove idle time at the end of Present.
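To put numbers on it (60Hz display, 50Hz update rate, and the constants from the snippet above):

dt = 1/60 ≈ 0.0166s, so dt * speed ≈ 4.27s
maxDuration = min(1.0, 0.2 * 256) = 1.0s, so dt gets clamped to 1.0s
updates per frame = 1.0 / 0.02 = 50
updates per second ≈ 50 * 60 = 3000, nowhere near the 50 * 256 = 12800 I'm budgeting for.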
Sooo… I haven't been able to wrap my head around a good solution here. I don't think I can just exclude the time spent in Present from the calculation entirely (see the sketch below), because if the framerate actually were limited by rendering taking place, I would overshoot my budget. Maybe my approach for calculating the speedup itself is flawed, but I can't think of a better way to calculate it. Anyone with experience with game loops/physics engines maybe got some idea?
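Just to make concrete what I mean by excluding the Present time: the naive version would be to time the Present call separately and subtract that from the measured frame time. A rough sketch (m_swapChain is just a placeholder for whatever owns the IDXGISwapChain):

const auto beforePresent = std::chrono::steady_clock::now();
m_swapChain->Present(1, 0); // SyncInterval of 1 => blocks until the next vblank with VSync on
const float presentTime = std::chrono::duration<float>(std::chrono::steady_clock::now() - beforePresent).count();
dt -= presentTime; // also subtracts time where Present was genuinely busy (render-bound), not just waiting on vsync

The comment on the last line is exactly my worry: when the frame really is render-bound, Present isn't idle, so subtracting it would overshoot the budget.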
TL;DR: I want to calculate, as closely as possible, how often I can run a fixed-update step while still rendering at the desired framerate.