
Texture coordinates and vertex arrays

Started by February 26, 2003 03:58 PM
14 comments, last by DalTXColtsFan 22 years ago
Greetings again all, I'm trying to use a vertex array to render a texture-mapped cube. It has the 8 points in it and I use glDrawElements as follows:

GLubyte front[]  = {4, 5, 6, 7};
GLubyte back[]   = {0, 3, 2, 1};
GLubyte left[]   = {0, 4, 7, 3};
GLubyte right[]  = {1, 2, 6, 5};
GLubyte top[]    = {2, 3, 7, 6};
GLubyte bottom[] = {0, 1, 5, 4};

glDrawElements(GL_QUADS, 4, GL_UNSIGNED_BYTE, front);
glDrawElements(GL_QUADS, 4, GL_UNSIGNED_BYTE, back);
glDrawElements(GL_QUADS, 4, GL_UNSIGNED_BYTE, left);
glDrawElements(GL_QUADS, 4, GL_UNSIGNED_BYTE, right);
glDrawElements(GL_QUADS, 4, GL_UNSIGNED_BYTE, top);
glDrawElements(GL_QUADS, 4, GL_UNSIGNED_BYTE, bottom);

The problem is, I can't just give it 8 texture coordinates that correspond to the 8 vertices, because the texture coordinates vary with the side that's being drawn. The only thing I can think of to solve this would be to use the stride feature and define three texture coordinate pairs per vertex, then set glTexCoordPointer to the proper "offset" before drawing each side. That would take a bit of work, I'd think. Is there an easier/better solution?

Thanks
Joe
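[For concreteness, a minimal sketch of the stride idea Joe describes - this is an editorial illustration, not code from the thread. It assumes each of the 8 vertices stores three texcoord pairs: slot 0 for its x-axis face, slot 1 for its y-axis face, slot 2 for its z-axis face, an assignment that works on a cube because all four vertices of a face share that face's axis.]

/* Hypothetical layout: three texcoord pairs per vertex; contents omitted. */
GLfloat tex[8][3][2];
GLsizei stride = 3 * 2 * sizeof(GLfloat);  /* skip the other two pairs */

glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glTexCoordPointer(2, GL_FLOAT, stride, &tex[0][2][0]);  /* z-axis slot */
glDrawElements(GL_QUADS, 4, GL_UNSIGNED_BYTE, front);   /* front: a z face */
glTexCoordPointer(2, GL_FLOAT, stride, &tex[0][0][0]);  /* x-axis slot */
glDrawElements(GL_QUADS, 4, GL_UNSIGNED_BYTE, left);    /* left: an x face */
/* ...and likewise for the remaining four faces */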
I'm not that experienced, but I think you can set up the texture coordinates the same way you set up the vertices: make an array of texture coordinates per side of your cube and set it with glTexCoordPointer.

Marty
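[A sketch of what Marty suggests, assuming the 8-vertex layout from the question; the texFront name is made up. Only the four entries a face's indices actually reference matter - the rest are padding.]

/* One 8-entry texcoord array per side; entries 0-3 are unused padding
   for the front face, which only indexes vertices 4-7. */
GLfloat texFront[8][2] = {
    {0,0}, {0,0}, {0,0}, {0,0},
    {0,0}, {1,0}, {1,1}, {0,1}
};

glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glTexCoordPointer(2, GL_FLOAT, 0, texFront);
glDrawElements(GL_QUADS, 4, GL_UNSIGNED_BYTE, front);
/* switch to texBack, texLeft, ... before drawing each remaining face */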
Or use 24 vertices instead of 8. This will be faster than calling glTexCoordPointer many times, as you will be able to draw the entire cube in a single call to glDrawElements by merging your index arrays.
Also I guess you will want unique normals for each face too.

| - Project-X - my mega project.. yup, still cracking along - | - adDeath - an ad blocker I made - | - email me - |
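[A sketch of the 24-vertex approach. The positions and texcoords are illustrative, not from the thread; only the front and back faces are filled in, and the other four follow the same pattern (unlisted array entries default to zero in C).]

/* Four dedicated vertices per face, so texcoords (and normals) can
   differ per face even where positions coincide. */
static const GLfloat verts[24][3] = {
    {-1,-1, 1}, { 1,-1, 1}, { 1, 1, 1}, {-1, 1, 1},   /* front */
    {-1,-1,-1}, {-1, 1,-1}, { 1, 1,-1}, { 1,-1,-1},   /* back  */
    /* left, right, top, bottom elided */
};
static const GLfloat texs[24][2] = {
    {0,0}, {1,0}, {1,1}, {0,1},                       /* front */
    {0,0}, {1,0}, {1,1}, {0,1},                       /* back  */
    /* left, right, top, bottom elided */
};
static const GLubyte cube[24] = {
     0, 1, 2, 3,    4, 5, 6, 7,    8, 9,10,11,
    12,13,14,15,   16,17,18,19,   20,21,22,23
};

glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glVertexPointer(3, GL_FLOAT, 0, verts);
glTexCoordPointer(2, GL_FLOAT, 0, texs);
glDrawElements(GL_QUADS, 24, GL_UNSIGNED_BYTE, cube);  /* one call */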
Thanks Rip - it definitely makes sense that it'll be faster if I can make only one call to glDrawElements. I'll try that and see what kind of performance increase I get.

I guess I was just hoping to save SOMETHING by only declaring the 8 vertices - do you really save anything from specifying 8 instead of 24 besides memory?

This is probably a reasonably appropriate place to ask this question too: What is this concept of a normal to a VERTEX? I am a mathematician btw - I have an MS from U of Missouri-Rolla and I have TAUGHT multivariable calculus, so I'm very familiar with a normal to a SURFACE, or even the normal to the tangent plane to a surface, but this idea of being able to declare a normal vector for each VERTEX really baffles me - what is this new paradigm and what benefits do you get from it in OpenGL?

I would really appreciate help on this one because it's killing me!

Thanks
Joe
It's for the smooth shading, I guess...
Each poly can be shaded with normals interpolated between the vertex normals (via some sort of formula) instead of having just one normal and therefore just one (flat) shade.

Marty.
Triangle and quad strips are there to speed things up, so I guess fewer vertices is faster.

Marty.
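[On the strip remark: a quad drawn as a triangle strip reuses vertices implicitly. A tiny sketch using the question's front face, with the corners rearranged into the zig-zag order a strip expects.]

/* 4, 5, 6, 7 around the quad becomes 4, 5, 7, 6 as a strip:
   triangle (4,5,7) then triangle (5,6,7), winding preserved. */
GLubyte frontStrip[] = {4, 5, 7, 6};
glDrawElements(GL_TRIANGLE_STRIP, 4, GL_UNSIGNED_BYTE, frontStrip);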
Eh? Vertex normals are just the average of the surrounding face normals, nothing more than that. You HAVE to have the face normals first to get vertex normals (at least, every way I've seen it done does).
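[To make the averaging concrete, an editorial sketch for indexed quads - the function and type names are made up, and it assumes planar, consistently wound faces.]

#include <math.h>   /* sqrtf */

typedef struct { float x, y, z; } Vec3;

/* Average the face normals of indexed quads into per-vertex normals. */
void vertex_normals(const Vec3 *verts, const unsigned char *idx,
                    int numQuads, int numVerts, Vec3 *out)
{
    int f, k, v;
    for (v = 0; v < numVerts; ++v)                 /* start from zero */
        out[v].x = out[v].y = out[v].z = 0.0f;

    for (f = 0; f < numQuads; ++f) {
        const Vec3 a = verts[idx[4*f+0]];
        const Vec3 b = verts[idx[4*f+1]];
        const Vec3 c = verts[idx[4*f+2]];
        /* face normal = (b - a) x (c - a) */
        Vec3 e1 = { b.x-a.x, b.y-a.y, b.z-a.z };
        Vec3 e2 = { c.x-a.x, c.y-a.y, c.z-a.z };
        Vec3 n  = { e1.y*e2.z - e1.z*e2.y,
                    e1.z*e2.x - e1.x*e2.z,
                    e1.x*e2.y - e1.y*e2.x };
        for (k = 0; k < 4; ++k) {                  /* accumulate at each corner */
            Vec3 *o = &out[idx[4*f+k]];
            o->x += n.x; o->y += n.y; o->z += n.z;
        }
    }
    for (v = 0; v < numVerts; ++v) {               /* normalize the sums */
        float len = sqrtf(out[v].x*out[v].x + out[v].y*out[v].y + out[v].z*out[v].z);
        if (len > 0.0f) { out[v].x /= len; out[v].y /= len; out[v].z /= len; }
    }
}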
I know - that's how you calculate the vertex normals in a heightmap... I think the question was: why normals on vertices, when a point can't have a normal?

Marty
As they said, normal-per-vertex is to make up for the fact that you can't really render smooth surfaces; everything has to be made up from facets (unless we go into different rendering paradigms, but this is an OpenGL forum).

Lighting calculations are done at the vertices (and thus require a normal approximating the 'real' normal that would be at that point if your surface were smooth) and linearly interpolated across the face. More complex lighting can also interpolate the vertex normals and perform per-pixel lighting.
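[In fixed-function OpenGL terms, a sketch of how those per-vertex normals feed the pipeline; verts, normals and cube are assumed to follow the hypothetical 24-vertex layout sketched earlier.]

glEnable(GL_LIGHTING);
glEnable(GL_LIGHT0);
glShadeModel(GL_SMOOTH);                /* interpolate the per-vertex results */

glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_NORMAL_ARRAY);
glVertexPointer(3, GL_FLOAT, 0, verts);
glNormalPointer(GL_FLOAT, 0, normals);  /* one normal per vertex */
glDrawElements(GL_QUADS, 24, GL_UNSIGNED_BYTE, cube);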

This topic is closed to new replies.
