When it comes to uptime, milliseconds stored in a 32-bit integer will overflow after a while.
For a signed value, that happens at 24.85 days; for unsigned, it happens after twice as long.
It's better to store it in a 64-bit integer (long long, or int64_t).
And, when you go 64-bit, you might as well use microseconds instead of milliseconds :-)
Note: None of the millisecond timers are particularly accurate. timeGetTime() is the least inaccurate, but it's still quantized to several milliseconds; GetTickCount() and similar calls are often quantized to dozens of milliseconds.
On UNIX, use clock_gettime(CLOCK_MONOTONIC_RAW).
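For example, a minimal microsecond clock on UNIX might look like this (just a sketch; the microseconds() name matches the helper used in the code further down, and CLOCK_MONOTONIC_RAW is Linux-specific, with CLOCK_MONOTONIC as the portable fallback):

#include <stdint.h>
#include <time.h>

// Monotonic time since some unspecified epoch (typically boot), in microseconds.
int64_t microseconds() {
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC_RAW, &ts);
    return (int64_t)ts.tv_sec * 1000000 + ts.tv_nsec / 1000;
}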
On Windows, use QueryPerformanceCounter() and divide by QueryPerformanceFrequency(). (Years ago, this counter would sometimes mysteriously "jump," but I know of no motherboard from the last 10 years that still has that bug.)
(There is also GetSystemTimePreciseAsFileTime(), but that has a lot more overhead, so you likely don't want to use that for game time progression.)
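The equivalent sketch on Windows, built from the two calls above (again an outline, not battle-tested code):

#include <stdint.h>
#include <windows.h>

// Monotonic time since boot, in microseconds, derived from the performance counter.
int64_t microseconds() {
    static LARGE_INTEGER freq;   // counts per second; fixed at boot
    LARGE_INTEGER now;
    if (freq.QuadPart == 0) {
        QueryPerformanceFrequency(&freq);
    }
    QueryPerformanceCounter(&now);
    // Split into seconds and remainder so the multiply by 1,000,000
    // can't overflow even after the counter has run for years.
    int64_t sec = now.QuadPart / freq.QuadPart;
    int64_t rem = now.QuadPart % freq.QuadPart;
    return sec * 1000000 + rem * 1000000 / freq.QuadPart;
}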
Using a "float" for time loses big, because it only has 24 bits of mantissa, and thus will suffer precision loss after less than a day.
Using a "double" for time theoretically has the same precision problem after a while, but that while is very long. If you "zero" the time when the server starts up, rather than dozens or hundreds of years in the past, "double" will probably work fine.
Best, however, is int64_t.
You figure out which step number your game is at by subtracting the start time from the current time and converting the elapsed microseconds to steps (multiply by STEPS_PER_SECOND, divide by 1,000,000):
int64_t game_start_time;

void start_game() {
    game_start_time = microseconds();
}

// Number of whole simulation steps since start_game().
// Returning int64_t avoids the 32-bit overflow this post warns about.
int64_t current_step_number() {
    return (microseconds() - game_start_time) * STEPS_PER_SECOND / 1000000;
}
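(If you're wondering what to do with that number: the usual pattern, assuming a simulate_one_step() function of your own, is to catch the simulation up to it each frame:

while (steps_simulated < current_step_number()) {
    simulate_one_step();
    steps_simulated++;
}

where steps_simulated is an int64_t you keep alongside game_start_time.)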