
OpenGL Texture Coordinates

Started by May 27, 2014 03:44 AM
8 comments, last by Misantes 10 years, 8 months ago

Hi all,

So, I've come again, hat in hand, to be pointed in the right direction.

I've scoured the forums (and quite a bit of the internet) but can only seem to find out-of-date ways of handling this. I apologize if this is a repeat question, but all of the threads I've found on it are quite old, or at least old enough that they don't seem to apply anymore.

I need to draw about 100 different objects using about fifteen different images. Instead of loading 15 separate textures, one for each image, I'm trying to load a single texture containing all the images and change the texture coordinates to pick out the right one when needed. Basically, I'm trying to create a sprite sheet.

My first question: has the method of changing the texture coordinates changed since the days of:

glBindTexture(GL_TEXTURE_2D, 0);

glBegin(GL_QUADS);
    // ...a whole bunch of glTexCoord2f(0.0, 0.1) / glVertex calls...
glEnd();

The above method has been deprecated (and removed from the core profile) for some time now, correct?

I have a feeling I need to be cropping the coordinates in one of the shaders, but the only image-related information being passed in is the texture sampler (I think), and I'm unsure how to change the coordinates at that point, or what to apply them to.

I've started using an SFML image instead of FreeImage for ease of use.

As of now, my loading function looks like this:


GLuint Object::LoadImage(const char* imageName)
{
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    GLuint TextureID;

    glGenTextures(1, &TextureID);
    glBindTexture(GL_TEXTURE_2D, TextureID);
    image.loadFromFile(imageName);

    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, image.getSize().x, image.getSize().y, 0, GL_RGBA, GL_UNSIGNED_BYTE, image.getPixelsPtr());

    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    return TextureID;
}

The previous FreeImage loading function:


glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

FREE_IMAGE_FORMAT formato = FreeImage_GetFileType(imageName, 0);
FIBITMAP* imagen = FreeImage_Load(formato, imageName);

int w = FreeImage_GetWidth(imagen);
int h = FreeImage_GetHeight(imagen);

textura = new GLubyte[4*w*h];
char* pixels = (char*)FreeImage_GetBits(imagen);

// FreeImage stores BGRA on little-endian machines; swizzle to RGBA
for(int j = 0; j < w*h; j++){
    textura[j*4+0] = pixels[j*4+2];
    textura[j*4+1] = pixels[j*4+1];
    textura[j*4+2] = pixels[j*4+0];
    textura[j*4+3] = pixels[j*4+3];
}

GLuint TextureID;
glGenTextures(1, &TextureID);
glBindTexture(GL_TEXTURE_2D, TextureID);

glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, (GLvoid*)textura);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

Apologies for the missing headers; they're rather bogged down with irrelevant things, so to save space I've left them out. The "image" in the first example is an sf::Image, and "textura" in the second is a GLubyte*.

I've tried the obvious things (at least, obvious to someone entirely new at this), such as trying to change the height and width of the image at load time.

This only distorts the image (which in hindsight should have been reeeeeaaally obvious).

I've also tried creating the coordinates and passing them into the vertex shader via "in" variables, but I'm really failing to understand when the texture is actually applied to the object, or how I could change the coordinates after the fact. I am definitely missing something in my understanding of textures and shaders here.

Here is some probably relevant code. For clarity, I've cut out quite a bit that I'm fairly certain isn't relevant to the texture, the object, or this problem:

In the object creation function:



Object::Object(const char* image,const char* image2, const char* objHere, glm::vec3 place,GLuint programID, int objtypes)
{
    glGenVertexArrays(1, &VertexArrayID);
    glBindVertexArray(VertexArrayID);

    MatrixID = glGetUniformLocation(programID, "MVP");
    ViewMatrixID = glGetUniformLocation(programID, "V");
    ModelMatrixID = glGetUniformLocation(programID, "M");
    PlaneMatrixID = glGetUniformLocation(programID, "P");

    Texture = LoadImage(image);

    loadOBJ(objHere, vertices, uvs, normals,place);

    glGenBuffers(1, &vertexbuffer);
    glBindBuffer(GL_ARRAY_BUFFER, vertexbuffer);
    glBufferData(GL_ARRAY_BUFFER, vertices.size() * sizeof(glm::vec3), &vertices[0], GL_STATIC_DRAW);

    glGenBuffers(1, &uvbuffer);
    glBindBuffer(GL_ARRAY_BUFFER, uvbuffer);
    glBufferData(GL_ARRAY_BUFFER, uvs.size() * sizeof(glm::vec2), &uvs[0], GL_STATIC_DRAW);

    glGenBuffers(1, &normalbuffer);
    glBindBuffer(GL_ARRAY_BUFFER, normalbuffer);
    glBufferData(GL_ARRAY_BUFFER, normals.size() * sizeof(glm::vec3), &normals[0], GL_STATIC_DRAW);

    glUseProgram(programID);
    LightID = glGetUniformLocation(programID, "LightPosition_worldspace");
}

Let me know if you'd like the render function or the shaders posted.

So, to clarify my question: if the first example is deprecated, through what method does one change the texture coordinates in OpenGL 4.4?

Feel free to point me to a resource instead of explaining everything to me (though that is always welcome if you're the patient type). While I've found several good (and, more importantly, current) resources, they all seem to skip over this little problem of mine. So much of the information on the internet relating to OpenGL is woefully outdated that I've struggled to find a solution.

And while I did search this forum pretty thoroughly, everything I found was from very old threads. If one still applies, or I failed to search for the right thing, feel free to call me an idiot and link to the correct thread.

Thanks in advance for any help,

Cheers,


Beginner here <- please take any opinions with grain of salt

Compute new texture coordinates.

Pass them to the shader as a VBO.

???

Profit.

Texture coordinates, like normals, are per-vertex attributes: i.e., each vertex has its own. So what do you do with per-vertex attributes? You pass them to the shader along with the vertex data. That data can be interleaved, or in different buffers, indexed, or even instanced. The simplest setup is glDrawArrays with one VBO per vertex attribute (i.e., one VBO with positions, another VBO with texture coordinates, another with normals, etc.).
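For illustration, here's a minimal sketch of the shader side, written as C++ string literals. The attribute locations (0 = position, 1 = uv, 2 = normal) and the sampler name are assumptions on my part, so adjust them to match your actual program:

const char* vertexShaderSrc = R"glsl(
#version 330 core
layout(location = 0) in vec3 position;
layout(location = 1) in vec2 uv;      // per-vertex texture coordinate
layout(location = 2) in vec3 normal;

uniform mat4 MVP;

out vec2 fragUV;                      // handed on, interpolated per fragment

void main()
{
    gl_Position = MVP * vec4(position, 1.0);
    // no cropping happens here; the VBO already holds the atlas sub-rect
    fragUV = uv;
}
)glsl";

const char* fragmentShaderSrc = R"glsl(
#version 330 core
in vec2 fragUV;
uniform sampler2D myTextureSampler;   // bound to a texture unit via glUniform1i
out vec4 color;

void main()
{
    color = texture(myTextureSampler, fragUV);
}
)glsl";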

"I AM ZE EMPRAH OPENGL 3.3 THE CORE, I DEMAND FROM THEE ZE SHADERZ AND MATRIXEZ"

My journals: dustArtemis ECS framework and Making a Terrain Generator


Hm, that doesn't entirely make sense to me (I'm very new to OpenGL), but I think it should be enough to go on and point me in the right direction.

Could you humor a beginner for one more question?

I'm already passing 3 VBOs into the shader (the ones in the Object constructor above), and I've tried passing the coordinates in as a VBO as well. I'm just a little fuzzy on exactly what I'd do with them in there to "crop" the texture. How does one go about that? Would I just add another attribute to my render function and pass that VBO along there as well?

Funnily enough (I think), it's the "???" in your answer above that is giving me the most headache.

Here is my render function as it stands now (with all the extra stuff cut out for clarity):




        glm::mat4 MVP = ProjectionMatrix * ViewMatrix * ModelMatrix;

        glUniformMatrix4fv(MatrixID, 1, GL_FALSE, &MVP[0][0]);
        glUniformMatrix4fv(ModelMatrixID, 1, GL_FALSE, &ModelMatrix[0][0]);
        glUniformMatrix4fv(ViewMatrixID, 1, GL_FALSE, &ViewMatrix[0][0]);

        glm::vec3 lightPos = glm::vec3(0,1,-2);
        glUniform3f(LightID, lightPos.x, lightPos.y, lightPos.z);


        glActiveTexture(GL_TEXTURE0);
        glBindTexture(GL_TEXTURE_2D, Texture);

        // TextureID here is the sampler uniform's location, not the GL
        // texture handle; this tells the sampler to read from unit 0
        glUniform1i(TextureID, 0);
         
        //vertices
        glEnableVertexAttribArray(0);
        glBindBuffer(GL_ARRAY_BUFFER, vertexbuffer);
        glVertexAttribPointer(0,3,GL_FLOAT,GL_FALSE,0,(void*)0);

        //  uvs
        glEnableVertexAttribArray(1);
        glBindBuffer(GL_ARRAY_BUFFER, uvbuffer);
        glVertexAttribPointer(1,2,GL_FLOAT,GL_FALSE,0,(void*)0);

        //normals
        glEnableVertexAttribArray(2);
        glBindBuffer(GL_ARRAY_BUFFER, normalbuffer);
        glVertexAttribPointer(2,3,GL_FLOAT, GL_FALSE,0,(void*)0);

        glDrawArrays(GL_TRIANGLES, 0, vertices.size() );

        glDisableVertexAttribArray(0);
        glDisableVertexAttribArray(1);
        glDisableVertexAttribArray(2);


Am I able to pass along another buffer in there, and if so, how does it get applied to the image rather than to the .obj?

And my genuine apologies if these questions are a little (or not so little) dense. This is just me working through how OpenGL operates. I tend to get ahead of myself, then spend a week figuring out what I'm doing wrong. I suppose I'd have fewer problems if I started at the beginning of a long and likely boring book, but I have the damndest time learning that way. So I tend to learn by doing and fiddling with things. It's not the most elegant learning process, but I find I learn quicker and stay interested and challenged. C'est la vie.

Anyhow, I appreciate your help, and thanks again.

Cheers,


Beginner here <- please take any opinions with grain of salt


I've scoured the forums (and quite a bit of the internet) but can only seem to find out-of-date ways of handling this. I apologize if this is a repeat question, but all of the threads I've found on it are quite old, or at least old enough that they don't seem to apply anymore.

Fire up the internet search engine of your choice and try "opengl 4 tutorial", and you'll get at least 3 hits on the first page already suitable for answering how to deal with textures in OpenGL nowadays.

Regarding sprite sheets (or texture atlases, to be precise): this is independent of the OpenGL version. Any good tutorial should tell you how texture co-ordinates are to be interpreted, and how vertex positions and texture co-ordinates correlate.


… the method of changing the texture coordinates ...

I don't understand what "change" means here, exactly. Compute the correct vertex positions and texture co-ordinates on the CPU side, store them in a VBO, pass it to a shader, and use them to draw.
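If the co-ordinates need to change at run time (e.g. to select a different atlas cell), one simple approach is to mark the uv buffer as dynamic and re-upload it before drawing. A minimal sketch, assuming the uvbuffer and the uvs vector (std::vector&lt;glm::vec2&gt;) from the code earlier in this thread:

// allocate once with GL_DYNAMIC_DRAW instead of GL_STATIC_DRAW
glBindBuffer(GL_ARRAY_BUFFER, uvbuffer);
glBufferData(GL_ARRAY_BUFFER, uvs.size() * sizeof(glm::vec2), &uvs[0], GL_DYNAMIC_DRAW);

// later, after recomputing uvs on the CPU, overwrite the existing storage
glBindBuffer(GL_ARRAY_BUFFER, uvbuffer);
glBufferSubData(GL_ARRAY_BUFFER, 0, uvs.size() * sizeof(glm::vec2), &uvs[0]);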


Fire up the internet search engine of your choice and try "opengl 4 tutorial", and you'll get at least 3 hits on the first page already suitable for answering how to deal with textures in OpenGL nowadays.

Well, it's the sprite sheet specifically that I'm struggling to find information on. As I mentioned, the information I've found has been for previous OpenGL versions. While many current tutorials cover how to load, bind, etc., in my searching they have passed over dealing with texture coordinates in OpenGL 4. By "changing the texture coordinates" I simply mean changing them from the default, which displays the entire image, to instead crop to a specific set of coordinates.

But simply telling me to search "opengl 4 tutorial" seems less than helpful. I was under the impression this was a beginners' forum. They could pin that advice in a sticky and do away with the forum entirely, I suppose. Or, let us ask our inane questions, because there is a lot of information out there, and sometimes asking in a forum to parse it down to something useful is helpful to someone who is in the process of learning.

Beginner here <- please take any opinions with grain of salt


Well, it's the sprite sheet specifically that I'm struggling to find information on. As I mentioned, the information I've found has been for previous OpenGL versions. While many current tutorials cover how to load, bind, etc., in my searching they have passed over dealing with texture coordinates in OpenGL 4. By "changing the texture coordinates" I simply mean changing them from the default, which displays the entire image, to instead crop to a specific set of coordinates.
There is no default in the sense of OpenGL. Each vertex needs an explicitly given pair of u,v co-ordinates. Assuming the sprite is rendered as a quad, you have 4 vertices, and each of them has its own u,v pair. To map the entire texture once, supposing it is used as a GL_TEXTURE_2D, you explicitly use the entire span of [0,1] for u and [0,1] for v, yielding the tuples (0,0), (0,1), (1,0), and (1,1) at the corners of the quad. Any subset of the texture has co-ordinates inside these spans. E.g. u in [0,0.5] and v in [0,1] denotes half of the texture as a vertical strip, addressed at the vertices as (0,0), (0,1), (0.5,1), and (0.5,0).

When computing such relative co-ordinates, one needs to consider a few things. OpenGL uses u co-ordinates running from left to right, and v co-ordinates from bottom to top. The co-ordinate 0 means the left / lower border of the leftmost / bottom texel, and the co-ordinate 1 means the right / upper border of the rightmost / top texel. With this in mind, if you want to address the texel with indices s in [0,w-1] and t in [0,h-1], where w and h are the dimensions of the texture measured in texels, you have to use

ul := s / w for the left border of the texel
ur := ul + 1 / w for the right border of the texel
um := ( ul + ur ) / 2 for the center of the texel

and analogously for v when using h instead of w (giving vb for the bottom border, vt for the top border, and vm for the center).

So, when the sprite's texels lie in a rect with the lower left corner (s1,t1) and the upper right corner (s2,t2), you compute

ul(s1), vb(t1), ur(s2), vt(t2)

and use them as u,v co-ordinate tuples for the four vertices:

ul(s1) and vb(t1)
ul(s1) and vt(t2)
ur(s2) and vt(t2)
ur(s2) and vb(t1)
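Putting the formulas above into code, a small helper might look like this (a sketch; the names are illustrative). w and h are the atlas dimensions in texels, and (s1,t1)-(s2,t2) is the sprite's texel rectangle with inclusive corner indices:

struct UVRect { float ul, vb, ur, vt; };

UVRect spriteUVs(int s1, int t1, int s2, int t2, int w, int h)
{
    UVRect r;
    r.ul = (float)s1 / w;             // left border of the leftmost texel
    r.vb = (float)t1 / h;             // bottom border of the bottom texel
    r.ur = (float)s2 / w + 1.0f / w;  // right border of the rightmost texel
    r.vt = (float)t2 / h + 1.0f / h;  // top border of the top texel
    return r;
}

// the four corner tuples for the quad are then:
//   (r.ul, r.vb), (r.ul, r.vt), (r.ur, r.vt), (r.ur, r.vb)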

But simply telling me to search "opengl 4 tutorial" seems less than helpful. I was under the impression this was a beginners' forum. They could pin that advice in a sticky and do away with the forum entirely, I suppose. Or, let us ask our inane questions, because there is a lot of information out there, and sometimes asking in a forum to parse it down to something useful is helpful to someone who is in the process of learning.
Excuse my rash answer above; but: computing texture co-ordinates is independent of the OpenGL version, while passing vertex data (with texture co-ordinates being part of it) does depend on the OpenGL version. The former topic, now that it's clear what "change" means in the OP, is covered above. The latter can be found in the tutorials (which you already have). Most of the OP deals with VBOs and vertex data passing, hiding the actual question. Just to my excuse.

All good, and my apologies for the snark as well (it's 4am for me).

That's actually quite helpful. And, in hindsight and embarrassingly, I believe your first answer may have been correct all along.

Your comment about there being no default for the uvs was an embarrassing epiphany for me.

I believe I understand the relative coordinates. My biggest problem, I think, was that I wasn't fully understanding how textures work (your original answer).

In the above code, I'm passing along the uvs in a VBO already, right?

Here:


    glGenBuffers(1, &uvbuffer);
    glBindBuffer(GL_ARRAY_BUFFER, uvbuffer);
    glBufferData(GL_ARRAY_BUFFER, uvs.size() * sizeof(glm::vec2), &uvs[0], GL_STATIC_DRAW);

(Since I didn't include the header: "uvs" is a std::vector&lt;glm::vec2&gt;.)

And I need to clear it and push_back the relative texture coordinates so the shader can read the new values?

I'm using this tutorial pretty heavily, and its section on textures is confusing me to some degree.

http://www.opengl-tutorial.org/beginners-tutorials/tutorial-5-a-textured-cube/

After typing the above, I re-read the code. The tutorial uses an object loader that generates the uvs, and I hadn't realized that (which is why I was thinking they were generated automatically). Which is great, unless you're trying to generate your own and idiotically didn't realize they were already generated. I was confusedly thinking the uvs were independent of the texture and mostly only relevant to the shape of the object (I now realize it's both).

Seeing as the loader is still handy, and I don't want to manually add coordinates for every object: for specific objects, should I just clear the vector after loading and push_back the new relative values (or, I suppose preferably, use an alternate vector and pass that along instead)?

If you're feeling patient, feel free to respond (or, if you're in a gloating mood, I kind of deserve it). And I'll definitely take your first advice, since it's clear I do need a better understanding of textures.

But, either way, you've both been a giant help, and thank you again.

Beginner here <- please take any opinions with grain of salt


I'm using this tutorial pretty heavily, and its section on textures is confusing me to some degree.

Don't know what exactly is confusing you, so let me try to shed some light on the entire thing. Some of the following is probably already known to you, but I need to mention it for completeness.

1.) You need a vertex data stream with the vertices' positions. Engines often batch sprites to reduce draw calls. This requires all the vertex positions, although coming from different sprites, to be specified w.r.t. the same space, e.g. world space or view space. Hence any motion applied to the sprites has to be applied on the CPU side, before the VBO is filled.

Okay, you can use instancing and handle things in another way, but that is an advanced topic.

2.) You need a vertex data stream with the vertices' uv co-ordinates. If you only deal with one texture, you need just one uv stream. If you have several textures, you may need more than a single uv stream. But it is also possible to use one and the same uv stream for several textures (e.g. when using a color map and a normal map with the same layout in texture space).

3.) For a sprite you usually don't use normals, because sprites are just flat (leaving some exotic variants aside). Otherwise, if normals are available, you need a vertex data stream for them, too.

4.) Whether you use one VBO per data stream, or put all of them into a single VBO, is usually a question of how dynamic the data in each stream is. For example, sprites are often computed frame by frame and transferred to the GPU in a batch. When the CPU computes both the vertex positions and the uv co-ordinates on the fly, both streams are dynamic and can easily be packed into a single VBO. On the other hand, if the CPU computes just the vertex positions but re-uses the uv co-ordinates again and again, then the position stream is dynamic but the uv stream is static; performance-wise, that would mean 2 different VBOs.

5.) However, it is crucial to success that there is a one-to-one relation between the sequences of vertex positions and uv co-ordinates. On the GPU, the position at index #n and the uv co-ordinate at index #n (as well as any other vertex data at that index) together define vertex #n. That means that if you load a model and drop its uv co-ordinates for the sake of computing them afterwards, you have to ensure that the order in which you push uv co-ordinates into the buffer is exactly the same as before. That said, if you have simple geometry like sprites, it is IMHO better to generate both the geometry and the uv co-ordinates on the fly. On the other hand, if you have complex model geometry with uv co-ordinates delivered alongside it, then don't drop the latter; apply calculations on top of them if needed.

6.) With glVertexAttribPointer you make each vertex data stream known to, and hence usable by, the vertex shader.

7.) Samplers are the things that allow a shader to access a texture. To get that right, you need to fill texture memory with texel data (e.g. using glTexImage2D, as you do) while a specific texture unit is active (the default is #0), and to tell the shader which texture unit to access with which sampler (the glUniform1i call, with the sampler's uniform location as the target).

8.) There are 2 ways to configure samplers. Historically, the settings live within the texture itself, using the glTexParameteri routine as you do. That is not totally right conceptually, because the parametrization belongs to the access of the texture, not to the texture itself. Hence there is a newer way where the parametrization is done on sampler objects themselves (see the sketch after this list). However, you do it the old way and it works fine so far, so let it be.

9.) In the shaders you have the uv co-ordinates and a sampler with a texture bound to it. The vertex shader receives the uv co-ordinates of the current vertex and typically passes them on to the fragment shader, where they arrive interpolated and ready to use. The fragment shader calls the sampler with the supplied uv tuple and gets back an RGBA value.

What you can see from the description above: there is always a unit on the GPU, be it a vertex input register, a texture sampler unit, or whatever (yes, there are more), and you have to tell both sides, OpenGL and the shader, which units you want to use.
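To make point 8 concrete, here is a minimal sketch of the newer way with sampler objects (core since OpenGL 3.3); Texture stands for the handle returned by your LoadImage:

GLuint sampler;
glGenSamplers(1, &sampler);
glSamplerParameteri(sampler, GL_TEXTURE_WRAP_S, GL_REPEAT);
glSamplerParameteri(sampler, GL_TEXTURE_WRAP_T, GL_REPEAT);
glSamplerParameteri(sampler, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glSamplerParameteri(sampler, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

// at draw time: bind the texture to unit 0 as before, then bind the
// sampler to the same unit; its settings override the per-texture ones
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, Texture);
glBindSampler(0, sampler);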

I'm off to work now, but I'll sit down this evening and digest this. Thank you again (and apologies for making you spell everything out). I'll post if I have any follow-up questions, but it seems you've covered anything I could have missed.

Cheers,

:)

Beginner here <- please take any opinions with grain of salt

Alright,

After a bit of headbanging, I believe I have this working. I wonder whether I've gone about it inefficiently, but I think I have the basic concepts down, at least.

What I've done is take the vector of uv coordinates that is created by the object loader.

For certain objects (the planets in the image below), I'm loading the object, which fills the uv vector with glm::vec2s of the uv coordinates. Since I need only a fraction of those values, in the object creation function I use a for loop to multiply the existing values by their offset. So, say I have a texture atlas of 4 images (two rows, two columns) and need the bottom-right image (unless I have the coords backward and it's the top-left image; I keep forgetting those, and trial and error will tell, I suppose). A simplified version of what I've used is:


for(size_t i = 0; i < uvs.size(); i++)
{
    uvs[i].x *= 0.5f;
    uvs[i].y *= 0.5f;
}

(I realize I could use a single statement and multiply by a vec2; I just did it this way for clarity.)
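For reference, that glm::vec2 version, plus a std::transform equivalent for good measure (both sketches, same effect as the loop above):

#include <algorithm> // std::transform

for(auto& uv : uvs)
    uv *= glm::vec2(0.5f, 0.5f);          // component-wise scale, one statement per element

// or, without an explicit loop:
std::transform(uvs.begin(), uvs.end(), uvs.begin(),
               [](glm::vec2 uv){ return uv * glm::vec2(0.5f, 0.5f); });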

Then, for the rest of the program, the correct coordinates are passed along to the shader.

This may be more of a C++ question, so feel free to disregard it, but is there a simpler way to offset the uv values in that vector than running a for loop? It seems... like there should be a simpler way I'm not seeing.

And then, secondly: for the ship in the image, I load the object normally, generating the uvs vector. Then, in my render function, before passing the uv coordinates to the VBO, I'm calling:


if(type == ship)
{
    frameCounter += frameSpeed * clock.restart().asSeconds();
    if(frameCounter >= switchFrame)
    {
        frameCounter = 0;
        offset += 1;
        if(offset >= 1.6)
            offset = .5;
        shipUp = true;
    }
    if(shipUp == true)
    {
        for(size_t i = 0; i < uvs.size(); i++)
        {
            uvs[i].x += offset;   // GL_REPEAT wraps the result back into [0,1]
        }
        shipUp = false;
    }
}

Please ignore the extremely messy formatting here; I kind of whipped it up really quickly. While I completely recognize that this is a very inelegant solution (ugly, even), it seems to be working for me. Ignore the frame counters and speed; they're just there to even out the animation. I haven't merged this texture atlas with the planet one yet, so this atlas is only a two-image sheet, laid out horizontally. These lines basically just shift the coordinates back and forth at an even rate.

Anyhow, that seems to give me a hobo-ish animation.
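(A tidier way to express the same flip, I think, would be to keep a copy of the loaded uvs, say baseUVs, plus a frame index, and derive the offset from the index instead of accumulating into uvs and relying on GL_REPEAT to wrap the result. Both baseUVs and frame are new, illustrative names here:

frameCounter += frameSpeed * clock.restart().asSeconds();
if(frameCounter >= switchFrame)
{
    frameCounter = 0;
    frame = (frame + 1) % 2;                 // two frames, side by side
}
for(std::size_t i = 0; i < uvs.size(); i++)
    uvs[i].x = baseUVs[i].x + frame * 0.5f;  // 0.5 = one frame width in uv space

)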

So, everything seems to be working. I guess the remaining question I have is whether this is a rather inefficient way of handling things (barring the use of instancing, which, after glancing it over, I don't think I'm ready for), or whether, despite being ugly, it's a fair enough method? (Provided I'm not still going about this entirely the wrong way and it just happens to be working.)

Here's an image of everything working. Ignore the crappy artwork; I just threw some stuff together really quickly to see if everything works. I think it's kind of charming in its hideousness. All the values are passed in pseudo-randomly for this example, so a random value picks various fractions for the x and y of the planet texture atlas.

Also, thanks for the heads-up about the new method for sampler parametrization. I'll do some research into that.

And lastly, provided I'm not doing everything horribly wrong, I just want to offer a genuine thanks for your help. You've been quite patient and thorough, and I'm infinitely grateful.

Cheers,

Beginner here <- please take any opinions with grain of salt

This topic is closed to new replies.