16-bit truncation... at least that's what it seems like
For some reason, when I load 32-bit TGAs and then make OpenGL textures out of them, the textures look like they have been truncated to 16-bit images. Even though the application is 32-bit and I have the mag/min filters set to GL_LINEAR, the result is a really stepped gradient with slightly fuzzy edges.
Here is the code that defines the OpenGL texture:
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glGenTextures(1, &texName[0]);
glBindTexture(GL_TEXTURE_2D, texName[0]);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 256, 256, 0, GL_RGBA, GL_UNSIGNED_BYTE, initial);
Note: initial is a 256x256x4 array of unsigned bytes.
Why does it look so bad?
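In case it matters: I'm wondering whether the unsized GL_RGBA internal format is letting the driver pick a 16-bit texture format on its own (I've read that some consumer drivers do this, especially when the desktop is running in 16-bit color). A sketch of what I mean, just swapping in the sized GL_RGBA8 constant to explicitly request 8 bits per channel:

// same call as above, but with a sized internal format so the driver
// shouldn't be able to quietly downgrade the texture to 16 bits per texel
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 256, 256, 0, GL_RGBA, GL_UNSIGNED_BYTE, initial);

Is that on the right track, or is something else going on?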
-menasius
"Quitters never win...winners never quit...but those who never win and never quit, are idiots"