SDL_GetTicks
SDL_GetTicks returns a Uint32 with millisecond precision, but I'd like something more precise than that (like a floating-point number). Is there a way to get a more precise timing value in SDL?
AP has it right.
HxRender | Cornerstone SDL Tutorials | Currently picking on: Hedos, Programmer One
Casting it to a float doesn't increase the precision. I don't think there is a cross-platform way to do this. There are several timing functions available for both Windows and *nix. There is an article on gamedev.net that describes some pitfalls and solutions for them.
Indeed, typecasting isn't what I need; I mean I need higher precision (for example, so I can measure whether something took 0.01 or 0.02 milliseconds).
It's for an FPS counter. It isn't very precise now: if my program is running very fast, the counter constantly switches between 333.33333 fps and 250.0000 fps, because it sees that the frame took either 3 or 4 milliseconds.
What a shame that there's no cross-platform way. I guess it's not that important though; I prefer to keep everything cross-platform.
A good way of implementing an FPS counter is to time more than one frame. Finding the duration of one frame isn't so useful anyway. If you, say, count the number of frames rendered in half a second, you avoid the fast FPS update completely.
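For illustration, here's a minimal sketch of that idea (the half-second window, the updateFPS name, and the call-once-per-frame convention are just assumptions for the example):

    #include "SDL.h"

    /* Sketch only: call once per rendered frame; returns the FPS averaged
       over roughly the last half second instead of a single frame time. */
    float updateFPS(void)
    {
        static Uint32 windowStart = 0;
        static int    frames      = 0;
        static float  fps         = 0.0f;

        ++frames;
        Uint32 now = SDL_GetTicks();
        if (now - windowStart >= 500) {              /* half-second window */
            fps         = frames * 1000.0f / (now - windowStart);
            frames      = 0;
            windowStart = now;
        }
        return fps;
    }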
Also note that SDL_GetTicks may only be accurate to 10 ms on some platforms. Under Linux it's 1 ms, but I don't know about others, and the docs say you shouldn't rely on it being better than 10.
-phil
The last time I checked the SDL source (version 1.2.6?) it was using GetTickCount() under Windows to get the time, which provides nowhere near sub-millisecond resolution.
On Windows, you can try high performance counters. On other platforms, no idea.
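For example (a Windows-only sketch; the function name is made up for illustration):

    #include <windows.h>

    /* Sketch: convert the high performance counter to seconds as a double. */
    double getPreciseSeconds(void)
    {
        LARGE_INTEGER freq, now;
        QueryPerformanceFrequency(&freq);   /* counts per second */
        QueryPerformanceCounter(&now);      /* current counter value */
        return (double)now.QuadPart / (double)freq.QuadPart;
    }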
SlimDX | Ventspace Blog | Twitter | Diverse teams make better games. I am currently hiring capable C++ engine developers in Baltimore, MD.
Most platforms have higher precision counters. However, SDL strives to be as cross-platform as possible, to the extent that it rounds the high-precision counters to the nearest millisecond, just so that the program can be more compatible with the virtually non-existent platforms which only support 1 ms precision. It's an oft-complained-about "feature" of SDL. There's nothing you can do about it, unless you want to "un-portabilize" your game by writing a separate timer function for each platform you think you will release on. For your case, I'd say that you should use JohnnyQuest's idea - look at glxgears for an example (it calculates the fps every 500 frames or something).
Zorx (a Puzzle Bobble clone) | Discontinuity (an animation system for POV-Ray)
I take that back - Here's the output I get from glxgears:
4309 frames in 5.0 seconds - 861.800 FPS
4475 frames in 5.0 seconds - 895.000 FPS
4401 frames in 5.0 seconds - 880.200 FPS
Zorx (a Puzzle Bobble clone) | Discontinuity (an animation system for POV-Ray)
Well... a couple of things. You may consider having your FPS counter average the FPS over a period of time. For example, count all the frames rendered within a 100 ms to 1000 ms window. This will give you a more realistic FPS regardless of your actual frame time.
As far as super precision goes... you have a couple of options. The only way to get any kind of sub-millisecond precision, to my knowledge, is to use something like rdtsc, which gives you your processor's cycle count.
There are a few really big issues with using this, of course. Compatibility, for example, can be a very large one. Some systems don't support rdtsc (or have an equivalent). On some systems the number actually jumps back and forth (or jitters), which will also give you weird results.
I personally use it in my code profiler with VERY good results.
Here is an example of how to query rdtsc.
Uint64 getCount() {
#ifdef WIN32
    Uint32 timehi, timelo;
    __asm {
        rdtsc               // read the time-stamp counter into EDX:EAX
        mov timehi, edx;
        mov timelo, eax;
    }
    return ((Uint64)timehi << 32) + (Uint64)timelo;
#else
    #error Not supported! Muhaha
#endif
}
Again, I wouldn't really recommend this for timing in a game.
I'm facing the same problem too. I want to stay cross-platform but need a high-precision timer. Surely someone out there must have created a simple function / "library" that can do this. I could probably write it myself using preprocessor #ifs, but that's a huge pain if someone's already done it for me =)
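Something along these lines might do as a starting point (just a sketch of the #ifdef approach, assuming QueryPerformanceCounter on Windows and gettimeofday everywhere else):

    #ifdef WIN32
    #include <windows.h>
    /* Sketch: seconds as a double from the high performance counter. */
    double getSeconds(void)
    {
        LARGE_INTEGER freq, now;
        QueryPerformanceFrequency(&freq);
        QueryPerformanceCounter(&now);
        return (double)now.QuadPart / (double)freq.QuadPart;
    }
    #else
    #include <stddef.h>
    #include <sys/time.h>
    /* Sketch: gettimeofday gives microsecond resolution on most *nix systems. */
    double getSeconds(void)
    {
        struct timeval tv;
        gettimeofday(&tv, NULL);
        return tv.tv_sec + tv.tv_usec / 1000000.0;
    }
    #endif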
FTA, my 2D futuristic action MMORPG