
clock() function

Started by SonicMouse2 May 10, 2002 09:53 AM
15 comments, last by SonicMouse2 22 years, 9 months ago
In the tutorials I see them teaching a "performance timer", which seems quite complicated to the new programmer. If you read up on the clock() function you will find that it is quite simple, and it works on any platform. Basically, clock() returns the time your program has been running. This is what MSDN says about it:

"The clock function's era begins (with a value of 0) when the C program starts to execute. It returns times measured in 1/CLOCKS_PER_SEC (which equals 1/1000 for Microsoft C)."

I looked it up on my Linux machine and CLOCKS_PER_SEC is equal to 1,000,000. Either way, this is pretty accurate. I added a very accurate FPS counter to my program with only a few lines of code... no confusing structs, no confusing init routines...
            
#include <time.h>

long getFPS(){
	static long frameCount = 0;
	static long totalFrames = 0;
	static unsigned long lastTime = 0;
	unsigned long curr = clock();

	if(curr - lastTime >= CLOCKS_PER_SEC){
		totalFrames = frameCount;
		frameCount = 0;
		lastTime = curr;
	}else
		++frameCount;

	return totalFrames;
}
            
In my scene rendering function, I just call getFPS() and that's it... no other magic tricks. I'm not saying that the tutorials are wrong at all, I am only pointing out another, simpler way to set up a very accurate timer. -andy [edited by - SonicMouse2 on May 10, 2002 11:11:34 AM]
i not am smart stupid no ok
clock() gives you process(thread) time, not total(real) time.
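For anyone curious, here is a quick test sketch that shows what that means in practice (assuming a POSIX system; it uses sleep() from unistd.h). The process sleeps for two seconds but burns almost no CPU, so clock() barely advances on Linux. Microsoft's CRT behaves differently, as the MSDN quote above suggests.

#include <stdio.h>
#include <time.h>
#include <unistd.h>

int main(void){
	clock_t start = clock();
	sleep(2);                       /* idle for 2 s, using almost no CPU */
	clock_t end = clock();
	/* On Linux this prints roughly 0.00, because clock() counts CPU time */
	printf("clock() elapsed: %.2f s\n",
	       (double)(end - start) / CLOCKS_PER_SEC);
	return 0;
}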

You should never let your fears become the boundaries of your dreams.
What a gyp. I have never used the clock function; I saw it in a source code example timing how long a decompression algorithm took... I thought it was the other way around..
I guess I'll go back to GetTickCount().
It works the same way, except it's a Windows thing.

thanks for the input

(If you want to know how one would use GetTickCount(), just take the above source example and replace clock() with GetTickCount() and CLOCKS_PER_SEC with 1000.)
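For reference, that substitution looks something like this (a Windows-only sketch; GetTickCount() returns milliseconds since the system started, so the one-second window becomes 1000):

#include <windows.h>

long getFPS(){
	static long frameCount = 0;
	static long totalFrames = 0;
	static DWORD lastTime = 0;
	DWORD curr = GetTickCount();    /* milliseconds since system start */

	if(curr - lastTime >= 1000){    /* one full second has passed */
		totalFrames = frameCount;
		frameCount = 0;
		lastTime = curr;
	}else
		++frameCount;

	return totalFrames;
}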

[edited by - SonicMouse2 on May 10, 2002 1:58:58 PM]
i not am smart stupid no ok
Can anyone tell me if we can probe the hardware directly for clock changes?
Yours truly.
QueryPerformanceCounter() is pretty close. The rdtsc assembly instruction is also pretty good. Be careful with QueryPerformanceCounter(): on some PCs with broken southbridge PCI support on the motherboard (most AMD-style and Intel boards) you will get time warps, causing jumps in time and thus apparent movement. I have switched to timeGetTime() because of this. The cause of the problem is that some motherboards handle PCI data flow wrong and Windows has to adjust the counter to compensate. This usually occurs under heavy loads (i.e. lots of hard drive activity, sending massive amounts of data to the sound card, some USB devices, maybe even the video card, etc.).
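For what it's worth, a minimal sketch of a timeGetTime()-based frame delta (the helper name is just for illustration; timeGetTime() lives in winmm.lib, and its resolution can be improved with timeBeginPeriod(1)/timeEndPeriod(1) around your game loop):

#include <windows.h>
#include <mmsystem.h>   /* timeGetTime(); link against winmm.lib */

float frameDeltaSeconds(){
	static DWORD last = 0;
	DWORD now = timeGetTime();          /* milliseconds since Windows started */
	DWORD delta = (last == 0) ? 0 : (now - last);
	last = now;
	return delta / 1000.0f;             /* convert ms to seconds */
}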
If you run the timing demo from developer.nvidia.com you will likely find that timeGetTime() takes less time to execute than QueryPerformanceCounter(), by about one order of magnitude, and it's easier to use too (because you don't have to deal with the 64-bit number and converting to seconds). Generally I would prefer timeGetTime(), except when you need the extra resolution of QueryPerformanceCounter().

download the demo here:
http://developer.nvidia.com/view.asp?IO=timer_function_performance
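For comparison, here is a minimal sketch (not the NVIDIA demo itself) of the bookkeeping QueryPerformanceCounter() requires: cache the 64-bit frequency once and divide to get seconds.

#include <windows.h>

double secondsNow(){
	static LARGE_INTEGER freq = {0};
	if(freq.QuadPart == 0)
		QueryPerformanceFrequency(&freq);   /* ticks per second */
	LARGE_INTEGER now;
	QueryPerformanceCounter(&now);
	return (double)now.QuadPart / (double)freq.QuadPart;
}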
SonicMouse: Are you sure that code is correct? I just tried it in a test program, and it does not work. Here is the code I am using:


GLint getFPS(){
	GLint frameCount = 0;
	GLint totalFrames = 0;
	GLint lastTime = 0;
	unsigned long curr = GetTickCount();

	if(curr - lastTime >= 1000){
		totalFrames = frameCount;
		frameCount = 0;
		lastTime = curr;
	}else
		++frameCount;

	return totalFrames;
}


And here's a portion of my DrawGLScene function:

	glLoadIdentity();
	fps = getFPS();
	glTranslatef(0.0f,0.0f,-2.0f);
	glColor3f(0.0f,0.0f,0.0f);
	glRasterPos2f(-1.07f,0.75f);
	glPrint("FPS: %5.0f",fps);

(fps is declared globally as a GLint)

All that is printed is "FPS: 0"
What's the problem?
Looks good to me..

The reason it's not working is that you're using a signed int instead of an unsigned int to hold the time, and you're forgetting to make the variables static.

you have this:
GLint frameCount = 0;
GLint totalFrames = 0;
GLint lastTime = 0;
unsigned long curr = GetTickCount();

but you should have this:
static GLint frameCount = 0;
static GLint totalFrames = 0;
static GLuint lastTime = 0;
GLuint curr = GetTickCount();

You will get screwy results if you subtract unsigned from signed... and vice versa.
Plus, you need the variables to be static, since their values have to be remembered every time you call this function (unless you store them within the class or globally).
i not am smart stupid no ok
Here is the updated function. Still does not work, though. I even tried declaring the static variables globally (I thought that they were resetting every time, but it looks like they are not).


GLint getFPS(){
	static GLint frameCount = 0;
	static GLint totalFrames = 0;
	static GLuint lastTime = 0;
	GLuint curr = GetTickCount();

	if(curr - lastTime >= 1000){
		totalFrames = frameCount;
		frameCount = 0;
		lastTime = curr;
	}else
		++frameCount;

	return totalFrames;
}

Bringing this back up... does anyone know the problem with that code?

This topic is closed to new replies.
