GLvoid and void GLint int difference
hello
I realized that in the NeHe OOP code some functions use GLint and others use int. Why? What's the difference between GLint and int?
Also, why do these two declarations use GLvoid and void? In which functions should we use GLvoid?
GLvoid KillGLWindow();
void SetFullScreen(bool fullscreen);
January 13, 2003 03:58 PM
OpenGL defines its own data types for portability.
If portability is an issue for you, you should make a habit of ALWAYS using the OpenGL-defined data types (the ones prefixed with GL)... otherwise the normal int, float, etc. are fine.
The typedefs are meant to insulate you from possible architecture differences. i.e. my int and your int may not be the same, but our GLint is guaranteed to be. Thus the API function signatures are invariant.
This is the same reason why W32 API functions use things like BOOL and DWORD... and why ideally you should use typedefs in your code: that way, if you need to change a function's signature (because you changed systems, or moved from floats to doubles...) you only have to change the typedef and not every function that uses it.
Oh, and give meaningful names such as 'size_type', 'difference_type', 'length_type' that reflect the purpose and use of the type, and not its underlying structure, so as not to be misled... Hungarian notation is an abomination.
As to why NeHe uses one or the other? I have no idea.
[ Start Here ! | How To Ask Smart Questions | Recommended C++ Books | C++ FAQ Lite | Function Ptrs | CppTips Archive ]
[ Header Files | File Format Docs | LNK2001 | C++ STL Doc | STLPort | Free C++ IDE | Boost C++ Lib | MSVC6 Lib Fixes ]
"Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it." — Brian W. Kernighan
As far as I know, GLint can't be negative. I figured that out the hard way: every time you make it negative, it gets set to a gigantic positive number.
quote:
The typedefs are meant to insulate you from possible architecture differences. i.e. my int and your int may not be the same, but our GLint is guaranteed to be. Thus the API function signatures are invariant.
Not true. The spec only defines a minimum length. See table 2.2 (page 9) in the 1.4 spec.
quote:
As far as I know, GLint can't be negative.
Yes, it can. GLint is required to be signed. See same table as above.
EDIT: snisarenko, maybe I should say that negative numbers are required to be treated with 2's complement. If you treat a signed negative value's bitpattern as unsigned, it will be very large. This is how 2's complement works, and is nothing special to OpenGL in any way.
[edited by - Brother Bob on January 13, 2003 5:20:37 PM]
Firstly, thanks for the answers.
I didn't understand the typedef part. In opengl.h there's the line
typedef int GLint;
How does this help us with portability?
If I run my code on Linux, the 'int of Linux' is GLint, and if I run the code on Windows, the 'int of Windows' is GLint.
Fruny wrote:
"
The typedefs are meant to insulate you from possible architecture differences. i.e. my int and your int may not be the same, but our GLint is guaranteed to be. Thus the API function signatures are invariant.
This is the same reason why W32 API functions use things like BOOL and DWORD... and why ideally you should use typedefs in your code: that way, if you need to change a function's signature (because you changed systems, or moved from floats to doubles...) you only have to change the typedef and not every function that uses it
"
How is it guaranteed to be?
This topic is closed to new replies.