
A Time Problem

Started by January 09, 2003 09:59 AM
4 comments, last by etaylor27 22 years, 1 month ago
I suppose this is somewhat of a newbie question, so forgive me right away. It's one I've put much thought into, and there seems to be more than one answer; I'm looking for the better ones. What is the usual method of having games run at the same speed on every computer, independent of computer speed? (I understand this is a poorly stated question, because the game actually runs faster on faster computers, but I am talking about simulation time.) I would have thought to employ a variation of the Euler method for updating object positions and velocities, with the step size dependent on how much time has passed since the last iteration, but to be honest I just don't know. I was hoping someone could help clarify this and perhaps supply some crude pseudo-code to help me understand an implementation. Thanks, Elijah
--"The greatest pleasure in life is in doing what people say you cannot do." -- Walter Bagehot
Simply let faster processors render more often.

GameLoop:
- Calculate time since last loop
- update world for the passed time
- render world
- restart loop



Now, on a fast computer a loop might last 1/70 sec.
So you update your world for 1/70 sec, render it, and start over.

On a slow computer a loop might last 1/20 sec.
So update the world in larger steps (1/20 sec, what a surprise!) and render the world.

Now you have to ensure two things:
No matter what the worldUpdateStep is, whether you call your updateWorld(time _time) ten times with 0.1 sec or a hundred times with 0.01 sec, your world has to end up in the same state!

Also ensure your updateWorld function will not eat more and more time as your steps grow, or you get out of sync too easily.
Imagine you have to calculate a 0.1 sec step, but the calculation lasts 0.2 sec; next time you have to do an updateWorld for 0.2 sec, and that calculation lasts 0.3 sec...

I think you see it.
-----The scheduled downtime is omitted cause of technical problems.
The problem with using Euler integration, or any numerical integration technique, tied to the actual time between frames is that you don't get the repeatability you might need to make your game play properly on different computer systems.

One idea is to build a simulation that runs very quickly, requiring much less than a frame's time on the slowest computer. Then, on every computer, use a constant time step for the integration, say 0.001 seconds, and do multiple integration steps per frame to "catch up" with the actual game time. On computers that run fast, you may need to do only one or two integrations per frame, but on slower computers you'd need to do perhaps 10 or so integrations per frame. This is the reason you need lightning-fast calcs!

Graham Rhodes
Senior Scientist
Applied Research Associates, Inc.
Graham Rhodes Moderator, Math & Physics forum @ gamedev.net
Mr. Rhodes, I'm not quite sure I understand what you proposed. You said to design a simulation that requires much less time than a frame's time step; by that, did you mean the refresh rate that the frames are drawn at, or an arbitrarily picked time step? Also, you said that faster computers would only need to perform a few steps, and slower ones would need many more. I didn't understand why there would be a difference if the simulation requires less time than any step on even the slowest computers, not to mention that if the slower computers must perform more steps, it seems they might just lag behind.

I'd appreciate any input from anyone if they could explain this to me.

Elijah
--"The greatest pleasure in life is in doing what people say you cannot do." -- Walter Bagehot
On a slow computer it might take 1/20s to draw the screen, on a fast it might take 1/100s. If you update the world state at a fixed interval of 1/1000s you would then need to call the update routine 50 times per frame on the slow computer, but only 10 on the fast.
quote:
Original post by Anonymous Poster
On a slow computer it might take 1/20s to draw the screen, on a fast it might take 1/100s. If you update the world state at a fixed interval of 1/1000s you would then need to call the update routine 50 times per frame on the slow computer, but only 10 on the fast.



Yes, that's true, which is why this technique only works if the CPU time required to actually update the world state is very small indeed. Otherwise, the system wouldn't be able to keep up on the slow machine.

The only real advantage to this approach is that it ensures the calculations are perfectly repeatable on any computer, no matter what the speed. It wouldn't ensure that the calculations actually occur at exactly the same time.

There was an article in Game Developer Magazine a while back, in the February 2001 issue I think, describing a design pattern for dead reckoning. Could be useful.

Graham Rhodes
Senior Scientist
Applied Research Associates, Inc.
Graham Rhodes Moderator, Math & Physics forum @ gamedev.net

This topic is closed to new replies.
