Why wouldn't you be able to use vertex arrays in display lists?
I believe the OpenGL performance paper in nVidia's developer section RECOMMENDS using VAs in display lists. The only problem is that if the VA is not locked, you still won't be able to change the data inside it once you compile the display list.
I have tried using VAs in display lists and have had no problems.
Nitzan
-------------------------
www.geocities.com/nitzanw
www.scorchedearth3d.net
-------------------------
Uhh, you DON'T want to use vertex arrays in display lists.
What you DO want to do is use the NV_vertex_array_range extension. This will let you put your data into AGP memory, and the GPU can DMA the data from there. This is the fastest way to render tons of triangles. ATI also has an extension which does the same thing, so using these extensions doesn't stop you from supporting both of the top-of-the-line cards.
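To be concrete, the basic VAR setup looks roughly like the sketch below. It assumes you fetch the extension entry points yourself with wglGetProcAddress; the name setup_var is just illustrative and all error checking is left out.

#include <windows.h>
#include <GL/gl.h>
#include <string.h>

#define GL_VERTEX_ARRAY_RANGE_NV 0x851D

typedef void * (APIENTRY *PFNWGLALLOCATEMEMORYNVPROC)(GLsizei size, GLfloat readFreq,
                                                      GLfloat writeFreq, GLfloat priority);
typedef void   (APIENTRY *PFNGLVERTEXARRAYRANGENVPROC)(GLsizei length, const GLvoid *pointer);

/* Illustrative helper: copy static geometry into AGP memory and bind it as a VAR. */
void setup_var(const GLfloat *verts, GLsizei bytes)
{
    PFNWGLALLOCATEMEMORYNVPROC wglAllocateMemoryNV =
        (PFNWGLALLOCATEMEMORYNVPROC)wglGetProcAddress("wglAllocateMemoryNV");
    PFNGLVERTEXARRAYRANGENVPROC glVertexArrayRangeNV =
        (PFNGLVERTEXARRAYRANGENVPROC)wglGetProcAddress("glVertexArrayRangeNV");

    /* readFreq 0, writeFreq 0, priority 0.5 conventionally requests AGP memory
       (priority 1.0 asks for video memory instead). */
    void *agp = wglAllocateMemoryNV(bytes, 0.0f, 0.0f, 0.5f);

    memcpy(agp, verts, bytes);                 /* upload the static data once */

    glVertexArrayRangeNV(bytes, agp);          /* GPU may now DMA from this range */
    glEnableClientState(GL_VERTEX_ARRAY_RANGE_NV);

    glEnableClientState(GL_VERTEX_ARRAY);      /* normal VA setup, pointing into AGP */
    glVertexPointer(3, GL_FLOAT, 0, agp);
}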
ioda: about fast normal calculation. I presume [512x512] is your terrain. If so, then you can use the finite difference method (5x or more faster than doing cross products). I'll post a direct link when I get home.
There are more worlds than the one that you hold in your hand...
You should never let your fears become the boundaries of your dreams.
ioda: http://www2.arnes.si/~ssdmtera/TUT_FastNormals/normals2.htm
(my first tutorial that never got published.. don't be too harsh)
There are more worlds than the one that you hold in your hand...
You should never let your fears become the boundaries of your dreams.
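In case the link ever dies, the core idea (my own rough sketch, not code taken from the tutorial) is to take central differences of the neighbouring height samples instead of averaging cross products, something like:

#include <math.h>

#define SIZE 512   /* assuming a 512x512 heightmap, as discussed above */

/* Normal of the surface y = h(x, z) from central differences of the heights.
   gridScale is the spacing between samples; indices are clamped at the borders. */
void terrain_normal(const float height[SIZE][SIZE], int x, int z,
                    float gridScale, float out[3])
{
    int x0 = (x > 0) ? x - 1 : x, x1 = (x < SIZE - 1) ? x + 1 : x;
    int z0 = (z > 0) ? z - 1 : z, z1 = (z < SIZE - 1) ? z + 1 : z;

    float dhdx = (height[z][x1] - height[z][x0]) / ((x1 - x0) * gridScale);
    float dhdz = (height[z1][x] - height[z0][x]) / ((z1 - z0) * gridScale);

    /* The unnormalised normal is (-dh/dx, 1, -dh/dz); normalise it. */
    float len = sqrtf(dhdx * dhdx + 1.0f + dhdz * dhdz);
    out[0] = -dhdx / len;
    out[1] =  1.0f / len;
    out[2] = -dhdz / len;
}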
thx
I haven't looked at it all yet, but it looks good.
Finally, what's the best way to render all the triangles (511*511*2 triangles)?
(I have a Voodoo3, but it should work on various other cards.)
Is it to put all the vertices (512*512) and all the normals (512*512 too) in a vertex array, then make one triangle strip for each line (so 512-1 = 511 triangle strips)?
If I understand correctly, I can't use a display list to render each triangle strip, because it's compiled and therefore can't use variables to move from one strip to the next?
quote:
Why wouldn't you be able to use vertex arrays in display lists?
Because it's stupid. Current hardware will simply cache the VA *call* in the display list, not the actual data. You will gain nothing at all; in fact it will most certainly even slow you down. Not much, but putting a VA in a DL is simply asking for a performance drop. The only positive thing you might get (and this is very implementation dependent) is that the driver will automatically lock the VA, since it assumes it contains static geometry. You can also do that by hand, and that's more flexible.
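For the record, "locking it by hand" just means wrapping your static data in EXT_compiled_vertex_array calls, roughly as in the sketch below; the entry points are assumed to be fetched elsewhere via wglGetProcAddress, and the data pointers are placeholders.

#include <windows.h>
#include <GL/gl.h>

typedef void (APIENTRY *PFNGLLOCKARRAYSEXTPROC)(GLint first, GLsizei count);
typedef void (APIENTRY *PFNGLUNLOCKARRAYSEXTPROC)(void);
extern PFNGLLOCKARRAYSEXTPROC   glLockArraysEXT;    /* loaded via wglGetProcAddress */
extern PFNGLUNLOCKARRAYSEXTPROC glUnlockArraysEXT;

/* Illustrative: draw a static mesh with the vertex array locked by hand. */
void draw_locked(const GLfloat *verts, GLsizei vertCount,
                 const GLuint *indices, GLsizei indexCount)
{
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(3, GL_FLOAT, 0, verts);

    glLockArraysEXT(0, vertCount);     /* promise GL the data won't change */
    glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT, indices);
    glUnlockArraysEXT();               /* lock released, data may change again */
}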
quote:
I believe the OpenGL performance paper in nVidia's developer section RECOMMENDS using VAs in display lists.
Show me where it says *that*...
The best thing, as the AP pointed out, is to use VAR. It's blazing fast, but nVidia proprietary, and not easy to get right (especially with NV_fence). I heard from a lot of people that they actually got a huge slowdown when using VAR, because they were using it the wrong way. Be sure to familiarize yourself with the papers from nVidia on this topic first.
I have no experience with ATI's vertex streaming extensions.
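Since NV_fence is the part most people get wrong, here is the rough shape of the synchronization, purely as an illustration: the two-chunk ping-pong is an assumption, fill_chunk and chunk_pointer are made-up helpers, and the extension entry points plus the VAR setup are assumed to exist already.

#include <windows.h>
#include <GL/gl.h>

#define GL_ALL_COMPLETED_NV 0x84F2

typedef void (APIENTRY *PFNGLGENFENCESNVPROC)(GLsizei n, GLuint *fences);
typedef void (APIENTRY *PFNGLSETFENCENVPROC)(GLuint fence, GLenum condition);
typedef void (APIENTRY *PFNGLFINISHFENCENVPROC)(GLuint fence);
extern PFNGLGENFENCESNVPROC   glGenFencesNV;     /* loaded via wglGetProcAddress */
extern PFNGLSETFENCENVPROC    glSetFenceNV;
extern PFNGLFINISHFENCENVPROC glFinishFenceNV;

extern GLfloat *chunk_pointer(int buf);          /* hypothetical: one half of the VAR memory */
extern GLsizei  fill_chunk(int buf);             /* hypothetical: writes verts, returns count */

void render_loop(void)
{
    GLuint fence[2];
    glGenFencesNV(2, fence);
    glSetFenceNV(fence[0], GL_ALL_COMPLETED_NV); /* mark both chunks as "already finished" */
    glSetFenceNV(fence[1], GL_ALL_COMPLETED_NV);

    for (int frame = 0; ; ++frame)
    {
        int buf = frame & 1;                     /* ping-pong between the two chunks */

        glFinishFenceNV(fence[buf]);             /* wait: GPU must be done reading this chunk */
        GLsizei count = fill_chunk(buf);         /* only now is it safe to overwrite it */

        glVertexPointer(3, GL_FLOAT, 0, chunk_pointer(buf));
        glDrawArrays(GL_TRIANGLE_STRIP, 0, count);

        glSetFenceNV(fence[buf], GL_ALL_COMPLETED_NV); /* fence placed after the draw that reads it */
    }
}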
> Is it to put all the vertices (512*512) and all the normals (512*512 too) in a vertex array, then make one triangle strip for each line (so 512-1 = 511 triangle strips)?
That would be a good solution.
> If I understand correctly, I can't use a display list to render each triangle strip, because it's compiled and therefore can't use variables to move from one strip to the next?
Well, you could also put each strip in a separate display list and then call them one after the other, but I think it's going to be less efficient than vertex arrays.
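A minimal sketch of the vertex array version, in plain OpenGL 1.1 with no vendor extensions (so it should be fine on a Voodoo3 too); the row-major vertex layout and the function name are assumptions:

#include <windows.h>
#include <GL/gl.h>

#define SIZE 512

/* verts[3*(z*SIZE + x)] = {x, height, z}; normals laid out the same way. */
void draw_terrain(const GLfloat *verts, const GLfloat *normals)
{
    static GLuint indices[SIZE * 2];    /* indices for one row strip at a time */

    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_NORMAL_ARRAY);
    glVertexPointer(3, GL_FLOAT, 0, verts);
    glNormalPointer(GL_FLOAT, 0, normals);

    /* 511 strips, each one stitching row z to row z+1. */
    for (int z = 0; z < SIZE - 1; ++z)
    {
        for (int x = 0; x < SIZE; ++x)
        {
            indices[x * 2 + 0] = (z + 0) * SIZE + x;
            indices[x * 2 + 1] = (z + 1) * SIZE + x;
        }
        glDrawElements(GL_TRIANGLE_STRIP, SIZE * 2, GL_UNSIGNED_INT, indices);
    }

    glDisableClientState(GL_VERTEX_ARRAY);
    glDisableClientState(GL_NORMAL_ARRAY);
}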
ok thx
I will drop display lists and use only vertex arrays and triangle strips.
The nVidia-specific vertex array calls are faster than display lists, but they only work on nVidia cards. Display lists ARE faster than LOCKED vertex array calls.
http://developer.nvidia.com/view.asp?IO=ogl_performance_faq
http://www.opengl.org/developers/faqs/technical/displaylist.htm
16.100 Will putting vertex arrays in a display list make them run faster?
It depends on the implementation. In most implementations, it might decrease performance because of the increased memory use. However, some implementations may cache display lists on the graphics hardware, so the benefits of this caching could easily offset the extra memory usage.
Anyone know how the nVidia implementation deals with this?
Nitzan
-------------------------
www.geocities.com/nitzanw
www.scorchedearth3d.net
-------------------------
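To make the question concrete, the pattern the FAQ entry is talking about is roughly the sketch below; compile_va_into_list is just an illustrative name and the data pointers are placeholders. The spec requires the array data dereferenced during compilation to be captured into the list, but where the driver stores that copy is exactly the implementation detail being asked about.

#include <windows.h>
#include <GL/gl.h>

/* verts, indices and indexCount stand in for your terrain data. */
GLuint compile_va_into_list(const GLfloat *verts, const GLuint *indices, GLsizei indexCount)
{
    GLuint list = glGenLists(1);

    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(3, GL_FLOAT, 0, verts);

    glNewList(list, GL_COMPILE);
    /* The array data dereferenced by this call is captured at compile time;
       how and where the driver stores the copy is implementation dependent. */
    glDrawElements(GL_TRIANGLE_STRIP, indexCount, GL_UNSIGNED_INT, indices);
    glEndList();

    return list;   /* later: glCallList(list) once per frame */
}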
> The nVidia-specific vertex array calls are faster than display lists, but they only work on nVidia cards. Display lists ARE faster than LOCKED vertex array calls.
Please keep in mind that this is an *nVidia* performance FAQ. DLs might not be faster than CVAs on other 3D cards. As a general rule of thumb, I try to stay away from both. The CVA specification is a load of crap; the nVidia people on opengl.org won't stop complaining about it, and they say the ARB is to blame. Well, I don't know, but I don't like CVAs, period. I don't like DLs either. I just can't control what happens in them, I can't fine-tune memory usage, etc. When dealing with nVidia boards, I always use VAR. It's just the best, and gives you maximum flexibility and maximum performance. No place for DLs and CVAs in there.
And as someone else pointed out, ATi has a similar extension (I think it's vertex_objects). Anyone tried it?
> Anyone know how the nvidia implementation deals with this
As your FAQ quote (from the other non-nvidia faq) already said: In most implementations, it might decrease performance because of the increased memory use.
From what I have heard on the opengl.org forums and from personal experience with several different chipsets under OpenGL, 'most implementations' == all implementations. I never heard of a case where it was faster to put a VA in a DL; the only reaction I almost always got from people was: "don't do that".