
Timing in game programming

Started by January 29, 2001 08:43 AM
3 comments, last by sidorczuk 24 years ago
Hi everyone. I would like to know the concept behind timing in game programming; I'm just confused about it. Is it a method to make a game run at the same speed on different machines, or to lock the frame rate so that the game runs at the same FPS regardless of the computer's speed? I would like to know how to set up good timing so the game won't run too fast on a super-fast processor. How do you implement this? Please explain the concept, and correct me if I'm wrong. Thanks, everyone.
Hi,

I am not a game programming pro, but I think I know what the concept behind timing in games is. The goal of game timing is to make the game appear to run at the same speed on every computer, no matter what system the player has. If the game runs at 30 fps on a slow computer and at 100 fps on a fast one, you won't see much difference: the game runs smoother on the fast computer, but the objects on screen move just as fast as on the slower one. It is very easy to implement. First you measure the frame time (the time needed to draw the entire scene). Then you use the frame time to calculate the movement of each object: movement = frame time x speed. If an object should move 1 unit per second and the game needs 3 ms (0.003 s) to draw the scene, it moves 0.003 x 1 = 0.003 units that frame. If the computer is faster and needs only 1 ms, the movement is 0.001 x 1 = 0.001 units.
The object will always move at the same real-time speed. I can't explain this very well, so it is better that you take a look at this article:
http://www.gamedev.net/reference/articles/article753.asp

I don't know if that was your question, and I also don't know whether this was clear. If you want, you can take a look at my page (http://stonemaster.port5.com). I have a little Timer class there (in C++) to simplify using timers.
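Roughly, a frame timer plus the frametime x speed movement looks something like the sketch below. This is just a quick illustration using std::chrono, not the exact class from the page above; the names (FrameTimer, speed) are made up for the example:

```cpp
#include <chrono>
#include <cstdio>
#include <thread>

// Minimal frame timer: Tick() returns the time since the previous call,
// in seconds. (Illustration only, not the class from the page above.)
class FrameTimer
{
public:
    FrameTimer() : last_(std::chrono::steady_clock::now()) {}

    double Tick()
    {
        auto now = std::chrono::steady_clock::now();
        double dt = std::chrono::duration<double>(now - last_).count();
        last_ = now;
        return dt;
    }

private:
    std::chrono::steady_clock::time_point last_;
};

int main()
{
    FrameTimer timer;
    double position = 0.0;        // units
    const double speed = 1.0;     // units per second

    for (int frame = 0; frame < 100; ++frame)
    {
        // Pretend the scene takes roughly 10 ms to draw.
        std::this_thread::sleep_for(std::chrono::milliseconds(10));

        double frameTime = timer.Tick();   // seconds spent on this "frame"
        position += frameTime * speed;     // movement = frametime * speed
        std::printf("frame %d: position = %f\n", frame, position);
    }
}
```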

Bye Stonemaster.

-------
www.steinsoft.net
cout << "Happy Coding!" << endl;
Hmmm....

This is something I've been puzzling over. OK - so you time a frame, and can then say something like

newLoc += velocity*time;

But you could well have non-linear acceleration in there, which will produce inconsistencies when you compare the physics on a fast machine with a slow one. In a lot of cases this won't really matter. But say you want exact physics - e.g. Quake 3. People playing deathmatch Q3 would be fairly upset if the physics ran differently on another player's machine just because it had a faster processor, so Quake 3 obviously uses a different method to keep the physics consistent (ignoring the dll problems they've been having...). Having said that, Q3 mainly uses a single, constant acceleration (i.e. gravity), so things are substantially easier.
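To see what I mean about the inconsistency, compare the same second of motion integrated with two different frame times. The drag acceleration here is made up, and this has nothing to do with Quake 3's actual code - it's just a sketch:

```cpp
#include <cstdio>

// Integrate one second of motion with per-frame Euler steps of size dt.
// The acceleration is a made-up drag term a = -k * v, i.e. it depends on
// the current state instead of being a single constant like gravity.
double simulateOneSecond(double dt)
{
    const double k = 2.0;
    double pos = 0.0;
    double vel = 10.0;

    int steps = static_cast<int>(1.0 / dt + 0.5);
    for (int i = 0; i < steps; ++i)
    {
        double acc = -k * vel;
        vel += acc * dt;
        pos += vel * dt;
    }
    return pos;
}

int main()
{
    // A "slow" machine at 30 fps vs a "fast" one at 100 fps.
    std::printf("30 fps:  position after 1 s = %f\n", simulateOneSecond(1.0 / 30.0));
    std::printf("100 fps: position after 1 s = %f\n", simulateOneSecond(1.0 / 100.0));
    // The two end positions differ: per-frame integration of a
    // state-dependent acceleration depends on the frame rate.
}
```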

I think the only problem with this is for non-constant accelerations - e.g. long-distance gravitational attraction (solar system modelling), or some strange case where an acceleration switches on & off rapidly.

You could sample your physics more than once per frame, at a fixed rate - something like the sketch below - but still this won't be perfect. I assume there's a way to get perfect physics on everyone's machine, so how the hell do you do it??
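Here's roughly what I mean by sampling at a fixed rate (a rough sketch, not anything from an actual engine; the toy updatePhysics() just drops an object under gravity):

```cpp
#include <chrono>
#include <cstdio>

// Toy world state: one object falling under gravity.
static double height = 100.0;
static double velocity = 0.0;

// Advance the simulation by exactly dt seconds.
void updatePhysics(double dt)
{
    velocity += -9.81 * dt;
    height += velocity * dt;
}

int main()
{
    using Clock = std::chrono::steady_clock;

    const double step = 1.0 / 100.0;   // fixed physics step: 10 ms
    double accumulator = 0.0;
    Clock::time_point last = Clock::now();

    while (height > 0.0)
    {
        Clock::time_point now = Clock::now();
        accumulator += std::chrono::duration<double>(now - last).count();
        last = now;

        // Run as many fixed-size steps as the elapsed real time allows.
        // Every machine performs the same sequence of 10 ms steps, so the
        // physics works out the same regardless of frame rate.
        while (accumulator >= step)
        {
            updatePhysics(step);
            accumulator -= step;
        }

        std::printf("height = %f\n", height);   // "render" the current state
    }
}
```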

Catfish
There is, and it is called calculus.
Keys to success: Ability, ambition and opportunity.
Well, I wouldn't say calculus is required, but maybe I'm just mixing my calculus knowledge in with my algebra, heh. It's all math, who cares =P.

You need to make it so that instead of (for example) storing the number of pixels something moves per frame, you store the amount of time it takes to move x pixels (let's say 10 pixels for the examples below). Then you take the time that has passed since the last frame, divide it by that stored time, multiply by 10, and that's how far it has moved. Get it?
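In code it's just this (the 250 ms figure is made up for the example):

```cpp
#include <cstdio>

// The object's speed is stored as the time it takes to cover 10 pixels,
// e.g. 250 ms per 10 pixels (a made-up figure).
const double msPerTenPixels = 250.0;

// elapsedMs is the time since the last frame, measured by your timer.
double pixelsMoved(double elapsedMs)
{
    return elapsedMs / msPerTenPixels * 10.0;
}

int main()
{
    // A 16 ms frame (~60 fps) and a 33 ms frame (~30 fps) cover different
    // distances per frame but the same distance per second.
    std::printf("16 ms frame: %f pixels\n", pixelsMoved(16.0));
    std::printf("33 ms frame: %f pixels\n", pixelsMoved(33.0));
}
```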



http://www.gdarchive.net/druidgames/

This topic is closed to new replies.
