Here's a fairly simple problem that I'm nonetheless struggling with, and I'm looking for some advice.
I have a game that uses a fixed timestep of 30 FPS and integer arithmetic, and I would like to upgrade it to 60 FPS.
In that game, there are constants that are ratios of 30 FPS (15, 60, etc.); many calculations are done against these, and everything works impeccably… as long as the frame rate is 30 FPS. (Note: I didn't write the game, I just inherited the code base.)
Example:
An input value X is an integer equal to 30, so at update time, X / 30 (the FPS) == 1.
Now if I raise the frame rate to 60 FPS, I have a problem, since X / 60 == 0.
I guess you get the point: some of the values are so low that dividing them by anything greater than 30 invariably yields zero, since it's all integer arithmetic.
Attempt:
I thought about upgrading the game to floats. I spent a good afternoon in a branch trying to convert whatever roughly made sense to floats, but that didn't work well at all: the physics went totally crazy!
This led me to think about two things:
First, upgrading to floats will obviously take significant time, potentially for a questionable outcome, and it risks introducing subtle bugs that will be hard to track down.
Second, I've been wondering whether it isn't simply "smarter" to keep using integers: the integer path is already well implemented and holds no surprises, as the game physics are very stable.
But for the latter option I'm facing the hurdle explained above, and that's where I'm seeking expert advice.
Thanks for your help!