
FPS Counter

Started by OtakuCODE July 21, 2000 05:41 PM
13 comments, last by OtakuCODE 24 years, 3 months ago
I've implemented an FPS counter in my game using some of the code NeHe had in his Tutorial 23 (the timer junk; I made a timer class out of it so I wasn't just getting fat on global variables). This is basically the pseudocode of how I'm counting my framerate, and I'd appreciate it if anyone could point out any possible logical errors:

On program init, initialize the timer. In the message loop, where the Display() function is called (I'm not using a NeHe shell, I'm using the one from the glGameDeveloper() website, which has disappeared), I get the time passed since the timer was initialized. I also have a static variable keeping track of the last time Display() was called. Both of those are stored in milliseconds in floats. I subtract the last time from the current time and divide that into 1000 to get an immediate FPS estimate. I calculate the FPS before Display() is actually called because I don't simply want to test how many FPS the display routine is running at; I want to see how many frames are actually being done, including the Idle() routine, message handling, etc.

When I do this, my very simple game (36 cubes, 2 GLU spheres, all drawn with display lists) hovers around 70-80 fps at all times. Shouldn't this be running at 300 fps or something ridiculous like that? Should I only be timing the display routine? Or are the people claiming 300+ fps timing just the display and ignoring how many frames are actually getting drawn? (I know real games can't do it this way, otherwise their FPS reading would be totally bogus and adding 2 million lines of AI code wouldn't slow down the FPS at all.) Thanks in advance for any advice.

Oh, wait, I should mention some system specs... I'm running an Athlon 500 with a GeForce256 32MB DDR/DVI. When I send this to a friend of mine running an Athlon 750 with a GeForce2 GTS 64MB DDR, it runs 90-100 fps.

-----------------------
Go for it.
OtakuCODE
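The calculation described above boils down to something like this (a minimal sketch only; the actual timer class and names aren't shown in the thread, so everything here is a placeholder built on QueryPerformanceCounter, which is what NeHe's Tutorial 23 timer uses):

#include <windows.h>

// Call once per frame, before Display(), so the measurement covers the
// whole frame: Idle(), message handling, and the rendering itself.
float UpdateFPS()
{
    static LARGE_INTEGER freq = {0};        // counter ticks per second
    static LARGE_INTEGER last = {0};        // counter value at the previous frame
    LARGE_INTEGER now;

    if (freq.QuadPart == 0)                 // first call: initialise the timer
    {
        QueryPerformanceFrequency(&freq);
        QueryPerformanceCounter(&last);
        return 0.0f;
    }

    QueryPerformanceCounter(&now);
    float frameMs = (float)(now.QuadPart - last.QuadPart) * 1000.0f / (float)freq.QuadPart;
    last = now;

    // 1000 ms divided by how long the whole frame took since the last call.
    return (frameMs > 0.0f) ? 1000.0f / frameMs : 0.0f;
}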
Well, before getting mad about your actual source, you should first check whether you have actually disabled vsync, which ties the framerate to your monitor's refresh rate.

hope it helps
pi~
Jan Pieczkowski
Brainwave Studios
Well, where did NeHe calculate it?
I suppose that some people might just measure the display() part of things, since that's what they are concerned about - it doesn't matter that adding AI and physics is going to slow things down, because the only concern is seeing how fast the rendering portion of the code is.

Not sure though, check out other people's demos.

-Mezz
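As a rough illustration of that alternative (a sketch only; Idle() and Display() are stand-ins for the routines mentioned in the first post, and NowMs() is a made-up helper, not code from anyone's demo):

#include <windows.h>

// Made-up helper for this sketch: milliseconds from the high-resolution counter.
static float NowMs()
{
    LARGE_INTEGER f, t;
    QueryPerformanceFrequency(&f);
    QueryPerformanceCounter(&t);
    return (float)t.QuadPart * 1000.0f / (float)f.QuadPart;
}

static void Idle()    { /* game logic, input, physics... (stub) */ }
static void Display() { /* rendering (stub) */ }

void Frame()
{
    Idle();                                  // deliberately NOT included in the timing

    float start = NowMs();
    Display();                               // time only the rendering call
    float renderMs = NowMs() - start;

    float renderOnlyFPS = (renderMs > 0.0f) ? 1000.0f / renderMs : 0.0f;
    (void)renderOnlyFPS;                     // would be printed or overlaid somewhere
}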
Well, NeHe didn't calculate it... originally I thought that was what he was doing, but upon reading further I found out he was only using the timer to slow down the demo so that one keystroke didn't fly through the scene before it could be seen. I can imagine simply timing the display code would be helpful if you just wanted to see how much of a performance boost something like compiled vertex arrays gave you, but I would like to get a general feel for how fast everything is running, so I can get a rough estimate of things like "if I add something that does X, it's going to slow me down a lot, but if I do Y and Z together, they won't really make any difference"... plus, it gives me a feel for the awesome power of modern processors... looking at all my code and saying "Jesus, it's doing all of that stuff in 3 milliseconds?!".

As for the vsync, I am pretty sure that I have it turned off. At least I turned it off in my drivers the other day (and I checked, it is using the GeForce renderer, not software). I'll have to check and play with some settings and see the effect.



-----------------------
Go for it.
OtakuCODE
FPS means frames per second, so if you're only timing the drawing part you're not getting the actual FPS, are you?
Is vsync enabled?
And how large is your window, and what bit depth are you running at?

A glClear(...) is a very expensive operation; it probably takes longer to clear your screen than to draw the cubes.
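To put that in code (not from the thread; the depth-only clear is a common trick of the era that is only safe if the scene is guaranteed to overdraw every pixel anyway):

#include <windows.h>
#include <GL/gl.h>

// At 640x480x32 a full clear touches every pixel in both the colour and the
// depth buffer each frame, which is a fixed cost before any cubes are drawn.
void BeginFrame(bool sceneCoversWholeWindow)
{
    if (sceneCoversWholeWindow)
        glClear(GL_DEPTH_BUFFER_BIT);                          // skip the colour clear
    else
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);    // full clear every frame
}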
VSync is not enabled. Enabling it clamps my FPS to 60. My window is 640x480 at 32-bit color depth (of course). I'm going crazy trying to figure out why I'm only getting 80 fps... There doesn't happen to be a web page that discusses which OpenGL functions are expensive and which aren't so bad, is there?



-----------------------
Go for it.
OtakuCODE
Like I said, clear is expensive (depth + colour buffers), and 32 bit doesn't help either.
BTW, do you do glEnable(GL_POLYGON_SMOOTH)?

Have a look at http://www.opengl.org/About/FAQ/technical/performance.htm
Also, on the NVIDIA website there's an OpenGL performance FAQ for GeForces.
First off, thanks for the quick reply and the link; it's loading in the other window right now.

I have tried the game with GL_POLYGON_SMOOTH on and off and I got no difference in performance. As for glClear()... don't I pretty much have to do that every frame?

Also, something I noticed tonight... gluPerspective() is being called every single frame. I haven't had a chance to move it out of the display routine, but I'm thinking that the perspective matrix will remain stationary as long as I don't screw with it, right? And is gluPerspective() expensive?
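A sketch of what moving it out might look like (plain Win32/NeHe style for illustration only; the handler name is made up): set the projection once at init and on resize, and leave Display() alone in modelview.

#include <windows.h>
#include <GL/gl.h>
#include <GL/glu.h>

// Called from init and whenever the window is resized (e.g. on WM_SIZE),
// instead of every frame inside Display().
void SetupProjection(int width, int height)
{
    if (height == 0) height = 1;                  // avoid a divide by zero

    glViewport(0, 0, width, height);

    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluPerspective(45.0, (double)width / (double)height, 0.1, 100.0);

    glMatrixMode(GL_MODELVIEW);                   // Display() only touches modelview
    glLoadIdentity();
}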



-----------------------
Go for it.
OtakuCODE
How do you disable V-sync in your opengl proggie?
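For reference, one way to do it in code on Windows at the time was the WGL_EXT_swap_control extension (a sketch only, assuming a current WGL rendering context and a driver that actually exposes the extension; the posts above only mention the driver control panel):

#include <windows.h>

typedef BOOL (WINAPI *PFNWGLSWAPINTERVALEXTPROC)(int interval);

// Must be called after the OpenGL rendering context has been created and
// made current, otherwise wglGetProcAddress returns NULL.
void DisableVSync()
{
    PFNWGLSWAPINTERVALEXTPROC wglSwapIntervalEXT =
        (PFNWGLSWAPINTERVALEXTPROC)wglGetProcAddress("wglSwapIntervalEXT");

    if (wglSwapIntervalEXT)
        wglSwapIntervalEXT(0);   // 0 = swap immediately, 1 = wait for vertical retrace
}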
Man... I just _love_ the new vocabulary related to programming...:

proggie, puter, box, coding, progging, etc.

This topic is closed to new replies.
