Smooth animation depends on time
One topic I would like to see a tutorial or some more detailed notes on is timing.
In particular, updating scenes depending on how much time has passed.
Sure, you may say: "check how much time has passed and delay the frame updates until the elapsed time matches what you're expecting". But by doing this you effectively lock your app at X frames per second. I suppose for most cases that is fine, but I would prefer my app to scale with hardware, so that as time, CPUs, and 3D cards progress, so would the fluidity of my app.
I have seen code from some SDKs. The Fly3D SDK, for example, has a function that seems (I'm not entirely sure if this is true) to require you to pass in the amount of time elapsed, and it updates all the scene's objects according to that time.
I suppose in your own engine you could give an object a velocity and a direction vector, then multiply the elapsed time (hopefully a fractional number of seconds) by that velocity so the engine animates the scene smoothly. The problem is, I don't know if that is what other engines do.
Does anyone have a URL or some inside knowledge on the above topic? Care to share some of your opinions?
Thanks ...
Will
I think the most correct thing to do, like you say, is to measure the time passed since the last frame, and give your object a speed, direction, acceleration and so on.
Ries
I don't suppose you have read any articles on this?
To me it seems like this is a 'holy grail' type question. Most tutorial or information sites avoid this topic like the plague. I don't know why; maybe because it depends on the type of project you're working on.
All the stuff I've ever coded, I just assumed "full frame rate" (as do NeHe's tutorials).
I was recently reminded how dumb coding without timing in mind is when I tried running an old DirectX (DirectDraw) app, written on a P2-233, on a new PC. On the P2 the app ran nicely and the animation was fluid. On the new Athlon system the app looked like hell (the sprites were on crack-cocaine), because the CPU/video was obviously able to throw out 10x more FPS.
Really, this topic spans more than just an OpenGL forum, but since I'm in love with OpenGL at the moment I thought I'd ask here.
I don't know if this will help... but it's the way I do timing...
#include <time.h>  // clock(), clock_t, CLOCKS_PER_SEC

clock_t start, finish;  // two time variables
double spf = 0;         // seconds per frame (running average)
int counter = 0;        // frame counter

int main()
{
    start = clock();  // initialise timing
    do {
        if (spf < 0.04)  // let the computer run as fast as it can...
            some_timing_critical_variable = spf;
        else             // ...but below 25 fps, process frames as if running at 25 fps
            some_timing_critical_variable = 0.04;
        DrawGLScene();
        SwapBuffers(hDC);
        counter++;
        finish = clock();
        spf = (double)(finish - start) / ((double)counter * CLOCKS_PER_SEC);  // average spf so far
    } while (something);
}
E.g. for velocity you could use:
velocity+=constant*spf;
This code makes the animation a bit jumpy at the start, but then it settles down.
alistai
Currently the timing I have is something like yours, I think. The problem with that is that your app is kind of stuck at a certain fps. That's fine if you're happy with your animations running at that rate, but what I'd like to create is a scene that, run on two PCs side by side, produces the same results even if one PC is considerably faster than the other.
By limiting the FPS (like your example) you will get the same result on both PCs. But it would be nice to have the fast PC showing 100+ fps (and of course super-fluid movement) while the objects still move at the same speed as on the slow machine running at 25 fps.
#define FRAME_INTERVAL  30
#define TRASLATION_STEP 2.0f

DWORD FPStPrec;    // timeGetTime() reading at the start of the interval
WORD  FPS;
WORD  FPSCounter;
float timeElapsed;

void InitTimer()
{
    FPSCounter = FPS = 0;
    FPStPrec = timeGetTime();
}

void TickTimer()
{
    if (FPSCounter == FRAME_INTERVAL)
    {
        timeElapsed = (timeGetTime() - FPStPrec) / 1000.0f;
        FPS = (WORD)(FRAME_INTERVAL / timeElapsed);
        FPStPrec = timeGetTime();
        FPSCounter = 0;
    }
    else
        FPSCounter++;
}
Call InitTimer in your WinMain function, and then call TickTimer at the beginning of EVERY call to DrawGLScene.
Doing that, you'll have a variable called timeElapsed that specifies (in seconds) the time elapsed from the previous call to DrawGLScene to the current one.
After the call to SwapBuffers(), just check the keys array state like this:
if (keys['W']) // move forward
    MoveForward(TRASLATION_STEP * timeElapsed);
Doing that, your application is NOT limited by any call to a Delay or Sleep function, so you could have 100 FPS on a Coppermine 800 MHz with a GeForce or 2 FPS on a P100 with an S3 ViRGE.
I'm sorry, the TickTimer function was wrong; just use this:
void TickTimer()
{
    // timePrec (a DWORD, replacing FPStPrec above) is now updated every frame,
    // so timeElapsed measures only the last frame, not the whole interval.
    timeElapsed = (timeGetTime() - timePrec) / 1000.0f;
    timePrec = timeGetTime();
    if (FPSCounter == FRAME_INTERVAL)
    {
        FPS = (WORD)(1 / timeElapsed);
        FPSCounter = 0;
    }
    else
        FPSCounter++;
}