Why sometimes float and sometimes GLfloat
I'm new to OpenGL and am wondering why sometimes you use float and sometimes GLfloat. What's the difference? Is there also a GLint? Thanks!
There are many C types redefined in OpenGL! Why? For fun! Or so that everyone sees immediately that you are using OpenGL in the code. But mainly they make it easier to understand the purpose of the variables in function definitions.
gl.h from OpenGL 1.1:
typedef unsigned int GLenum;
typedef unsigned char GLboolean;
typedef unsigned int GLbitfield;
typedef signed char GLbyte;
typedef short GLshort;
typedef int GLint;
typedef int GLsizei;
typedef unsigned char GLubyte;
typedef unsigned short GLushort;
typedef unsigned int GLuint;
typedef float GLfloat;
typedef float GLclampf; // clampf ????
typedef double GLdouble;
typedef double GLclampd; // and with d ??
typedef void GLvoid;
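As a quick illustration of that last point, here is a minimal sketch (the helper names are made up, and both calls assume a current GL context) showing how the typedefs document intent in a prototype; glClearColor and glViewport really do take GLclampf and GLint/GLsizei parameters in GL 1.1:

#include <GL/gl.h>

/* GLclampf says "a color component that will be clamped to [0,1]". */
static void clear_to_grey(GLclampf level)
{
    glClearColor(level, level, level, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT);
}

/* GLint says "a coordinate", GLsizei says "a non-negative count". */
static void set_viewport(GLint x, GLint y, GLsizei width, GLsizei height)
{
    glViewport(x, y, width, height);
}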
Right now there is no real difference: GLfloat is just a typedef'd alias for float. In theory, GLfloat is more portable. And yes, there is a GLint.
Many APIs have their own typedefs for the standard built-in types these days, like GLint or int32, etc. This is because the C/C++ standards don't set any hard rules about the size of each built-in data type. For now, on your standard PC, 'int' is 32 bits. In the near future it may be 64.
By using 'int32', for example, rather than 'int', you should in theory only need to change your typedef when you port to a 64-bit system. For most code, just using 'int' is fine. But if you do binary file I/O or a lot of bit twiddling on your int variables, size _does_ matter... as they say.
GLfloat, GLint, etc., are just this idea implemented within OpenGL.
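A minimal sketch of that typedef idea (the names int32, uint32 and the size-check array below are made up for illustration, not taken from any real header):

/* Project-wide typedefs in the spirit of GLint: pick the underlying
   type once, in one place. */
typedef int int32;
typedef unsigned int uint32;

/* Cheap compile-time check: the array gets a negative size (a compile
   error) if 'int' ever stops being 4 bytes on the target compiler. */
typedef char int32_is_4_bytes[(sizeof(int32) == 4) ? 1 : -1];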
I think GLclamp* means that the value shouldn't be smaller than 0 or bigger than 1.
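That is the idea: parameters declared as GLclampf or GLclampd are clamped to the range [0,1] before they are used. A small sketch, assuming a current GL context:

GLclampf red = 1.7f;                   /* deliberately out of range */
glClearColor(red, -0.5f, 0.25f, 1.0f); /* GL clamps each component to [0,1],
                                          so this acts like (1.0, 0.0, 0.25, 1.0) */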
Visit our homepage: www.rarebyte.de.st
GA
April 23, 2000 02:35 AM
quote: Original post by TheMummy
...
typedef float GLclampf; // clampf ????
...
typedef double GLclampd; // and with d ??
...
The difference is that one version (GLclampf) takes float values and the other (GLclampd) takes double values. Many OpenGL functions use these suffixed names instead of overloading because the suffix reminds the programmer what type of value the function expects. Another good example is glColor. It comes in all these flavors:
glColor*i  - integer values
glColor*ui - unsigned integer values
glColor*b  - byte values
glColor*ub - unsigned byte values
glColor*f  - float values
glColor*d  - double-precision float values
glColor*v  - pointer to an array of * components
Note there is no variant for a long type, because no hardware made today offers that many colors to choose from. The suffix simply states the type of data being used, which is a little more intuitive than overloading one function so many times that no one can figure it out.
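A few of those flavors side by side (all real GL 1.1 calls; each just sets the current color, assuming a current context):

GLfloat rgb[3] = { 0.0f, 0.5f, 1.0f };

glColor3f(0.0f, 0.5f, 1.0f);  /* f  - three GLfloat components in [0,1]  */
glColor3ub(0, 128, 255);      /* ub - three GLubyte components in 0..255 */
glColor3d(0.0, 0.5, 1.0);     /* d  - three GLdouble components          */
glColor3fv(rgb);              /* fv - pointer to an array of 3 GLfloats  */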