Please give me a bit of advice...
Love the site.
I am using MSVC++ 6.0 Enterprise.
When I do around 6000 triangles on the screen, I get about 10 fps. Now, with languages such as Blitz Basic 3D and such, I can go up to over 100,000 polys before I get a performance hit. What am I doing wrong?
Oh, I have a GeForce4 Ti, btw.
Thanks for any advice you can give.
Let me expand on that question. You guys are very knowledgeable, so I hope you will be able to provide some good answers.
1. I read in earlier posts that you can optimize the code (with the compiler) in the Enterprise edition. How, exactly, do I go about doing that?
2. How do I disable vsync in the program? (I did it in the nVidia control panel, but it doesn't show a difference in the game.)
3. Why is it that, with languages like Blitz3D (which uses Direct3D and a very slow engine), I am able to update the vertices of an object (with over 7000 polys) with different coordinates (doing real-time bone animation) every program scan and still get 20 fps, yet just drawing 3000 polys in OpenGL I get 10 fps?
I tested the code, which is a combination of tutorials 5 and 21 (for the timer functions), and I get 80 ms on each scan of the program. You've got to tell me that the throughput on my 1.7 GHz P4 is better than that?
Please give me some advice, for I have done extensive searching and found very little detail on optimizing in MSVC++ 6.0, or even OpenGL. I am a firm believer that OpenGL is the best, but finding the information to use it efficiently is getting to be an overwhelming pain.
Thanks
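On question 1, the usual route in VC++ 6 is to build the Release configuration, which compiles with /O2 ("Maximize Speed") by default; the individual switches live under Project -> Settings -> C/C++ -> Optimizations. On question 2, vsync can be switched off from code through the WGL_EXT_swap_control extension, which overrides the control-panel setting. A minimal sketch, assuming a current WGL rendering context:

    // Sketch: disable vsync via WGL_EXT_swap_control, if the driver exports it.
    #include <windows.h>

    typedef BOOL (WINAPI *PFNWGLSWAPINTERVALEXTPROC)(int interval);

    void DisableVsync()
    {
        PFNWGLSWAPINTERVALEXTPROC wglSwapIntervalEXT =
            (PFNWGLSWAPINTERVALEXTPROC)wglGetProcAddress("wglSwapIntervalEXT");
        if (wglSwapIntervalEXT)      // NULL if the extension isn't available
            wglSwapIntervalEXT(0);   // 0 = swap immediately, don't wait for vsync
    }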
You're posting on the NeHe forum, so you must be using OpenGL.
Are you using vertex arrays? If you're calling a glVertex function per vertex you'll definitely get crummy framerates.
First, did you choose a color mode compatible with your card? Run in 32-bit mode and in 16-bit mode and see if it helps.
For static geometry, use a display list. It will take 5 minutes to implement; see glGenLists, glNewList and glCallList. The driver will store only the end result of all your calls (possibly in video memory), giving you a performance boost.
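A minimal sketch of that idea (DrawModel here is a hypothetical stand-in for whatever glBegin/glVertex code currently draws the mesh):

    GLuint list;

    void BuildList()
    {
        list = glGenLists(1);          // reserve one display list name
        glNewList(list, GL_COMPILE);   // record the calls instead of drawing
        DrawModel();                   // your existing immediate-mode code
        glEndList();
    }

    void Render()
    {
        glCallList(list);              // replay the precompiled geometry
    }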
For dynamic geometry, use glDrawElements to eliminate all those excessive glVertex, glNormal, glTexCoord, etc. calls.
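For example (a sketch only; the vertex, normal and index arrays are assumed to be filled in elsewhere, e.g. by the animation code each frame, and the array sizes are made up):

    #define MAX_VERTS 8192
    #define MAX_TRIS  16384

    GLfloat  verts[MAX_VERTS * 3];    // x,y,z per vertex, rewritten each frame
    GLfloat  norms[MAX_VERTS * 3];    // one normal per vertex
    GLushort indices[MAX_TRIS * 3];   // three vertex indices per triangle

    void RenderMesh(int numTris)
    {
        glEnableClientState(GL_VERTEX_ARRAY);
        glEnableClientState(GL_NORMAL_ARRAY);
        glVertexPointer(3, GL_FLOAT, 0, verts);
        glNormalPointer(GL_FLOAT, 0, norms);
        glDrawElements(GL_TRIANGLES, numTris * 3, GL_UNSIGNED_SHORT, indices);
        glDisableClientState(GL_NORMAL_ARRAY);
        glDisableClientState(GL_VERTEX_ARRAY);
    }

Compiled vertex arrays (EXT_compiled_vertex_array) just add glLockArraysEXT / glUnlockArraysEXT around the drawing calls, so the driver can cache transformed vertices when the same data is drawn more than once.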
Always prefer display lists over glDrawElements if you do not need to change/edit the geometry. Only ever use glBegin/glEnd to render if it's something really small. Also consider looking into compiled vertex arrays or the nVidia VAR extension.
Those are a few of the things I don't understand (as in, where to even start looking for information about them):
nVidia VAR?
As for whether it is static: no, it is going to be dynamic (real-time bone animation), so I need to modify the coordinates of all the vertices at will. I know that vertex arrays are good for this, but where do I find information on that? (NeHe doesn't seem to have tutorials dealing with it.)
Yes, the color mode is quite capable (it is a GeForce4 Ti).
Which is another thing that boggles me. In their Wolfman demo (very detailed), they use a 100,000-poly model with real-time bone animation (136 bones), which means it is modifying over 200,000 vertices during each scan. It is amazing, but when I start to look at the code, it seems so far away.
"how else am i supposed to define a vertex without using glvertex?"
Use vertex arrays, like I said. The first resource you should use is www.google.com.
Do a search for "opengl vertex arrays tutorial"; I'm sure that'll turn up the goods.
quote:
Original post by masterprompt
Which is another thing that boggles me. In their Wolfman demo (very detailed), they use a 100,000-poly model with real-time bone animation (136 bones), which means it is modifying over 200,000 vertices during each scan. It is amazing, but when I start to look at the code, it seems so far away.
You need nVidia-specific extensions. It is using vertex shaders to animate the bones; everything is occurring on the card. If you are interested in programming specifically for the GeForce4, download the nVidia SDK: http://developer.nvidia.com/view.asp?PAGE=opengl
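For a taste of what that means in code, here is about the smallest possible NV_vertex_program: it just transforms each vertex by the tracked modelview-projection matrix and passes the color through. This is a sketch built from the extension's standard conventions, not the demo's actual shader; real skinning would blend several tracked bone matrices per vertex, and on Windows the glGenProgramsNV etc. entry points first have to be fetched with wglGetProcAddress:

    // Hypothetical minimal vertex program (real skinning is more involved).
    const char *prog =
        "!!VP1.0\n"
        "DP4 o[HPOS].x, c[0], v[OPOS];\n"  // position dotted with MVP row 0
        "DP4 o[HPOS].y, c[1], v[OPOS];\n"
        "DP4 o[HPOS].z, c[2], v[OPOS];\n"
        "DP4 o[HPOS].w, c[3], v[OPOS];\n"
        "MOV o[COL0], v[COL0];\n"          // pass the vertex color through
        "END";

    GLuint id;
    glGenProgramsNV(1, &id);
    glBindProgramNV(GL_VERTEX_PROGRAM_NV, id);
    glLoadProgramNV(GL_VERTEX_PROGRAM_NV, id,
                    strlen(prog), (const GLubyte *)prog);
    glTrackMatrixNV(GL_VERTEX_PROGRAM_NV, 0,     // c[0]..c[3] track the
                    GL_MODELVIEW_PROJECTION_NV,  // modelview-projection matrix
                    GL_IDENTITY_NV);
    glEnable(GL_VERTEX_PROGRAM_NV);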
I think you should make sure you have a firm understanding of all the basic OpenGL functions, as well as vector and matrix math and the illumination and texturing equations, before you dive into vertex and pixel shaders.
Drawing 6000 tris with glVertex on a GF4 (probably even a GF1) will give you well over 10 fps, but yes, vertex arrays are where you want to go.
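If you want to see where the 80 ms is actually going, timing a frame with the high-resolution counter is simple enough. A sketch, where RenderFrame is a hypothetical stand-in for your draw-plus-SwapBuffers code:

    #include <windows.h>
    #include <stdio.h>

    void TimeOneFrame()
    {
        LARGE_INTEGER freq, t0, t1;
        QueryPerformanceFrequency(&freq);  // counter ticks per second
        QueryPerformanceCounter(&t0);
        RenderFrame();                     // hypothetical: draw + SwapBuffers
        glFinish();                        // wait until GL has really finished
        QueryPerformanceCounter(&t1);
        printf("frame: %.2f ms\n",
               (t1.QuadPart - t0.QuadPart) * 1000.0 / (double)freq.QuadPart);
    }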
Try downloading NeHe's apps and see how they run in terms of FPS. If they run fine, then it's probably something with your code. If they don't, then it's possibly a driver issue.
-=[ Megahertz ]=-