
Question: PixelFormat in OpenGL

Started by March 10, 2000 08:35 AM
3 comments, last by voodoo 25 years ago
OK, what values should I set the color bits and the z-buffer bits to in the pixel format structure? Currently I set both to the current bit depth of my app. My app is set up to handle multiple resolutions, i.e. if I choose to start my app at 16 bits I set both to 16 bit, then while running I decide to change to full screen or another res, say 32 bit. I destroy the window and reset the pixel format structure to the new bit depth. Is this good, or should I set them both to a constant and never have them change?
Hardcore Until The End.
The pixel format stuff should always be the same for any mode your app might be running under (unless you'll allow switches from 8 bpp [indexed] to 16, 24, or 32 bpp [non-indexed]). This is what I have in my project right now (slightly modified):

PIXELFORMATDESCRIPTOR pfd;

// set the pixel format for the game window
memset(&pfd, 0, sizeof(PIXELFORMATDESCRIPTOR));
//---------------- ---- --- -- -
pfd.nSize = sizeof(PIXELFORMATDESCRIPTOR);
pfd.nVersion = 1;
pfd.dwFlags = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER | PFD_SWAP_EXCHANGE;
pfd.iPixelType = PFD_TYPE_RGBA;
pfd.cColorBits = 24;
pfd.cDepthBits = z_depth;
pfd.iLayerType = PFD_MAIN_PLANE;
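
For reference, the descriptor then gets handed to Windows roughly like this (hdc being the window's device context; this is just a sketch, not lifted straight from my project):

int format = ChoosePixelFormat(hdc, &pfd);   // ask Windows for the closest matching format
if (format == 0)
{
    MessageBox(NULL, "ChoosePixelFormat failed", "Error", MB_OK);  // no compatible format
    return FALSE;
}

if (!SetPixelFormat(hdc, format, &pfd))      // lock that format onto this DC
{
    MessageBox(NULL, "SetPixelFormat failed", "Error", MB_OK);
    return FALSE;
}

HGLRC hrc = wglCreateContext(hdc);           // create the GL rendering context
wglMakeCurrent(hdc, hrc);                    // and make it current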

Now that I'm thinking about it, I'm not entirely sure that cColorBits should always be 24 anymore. Does anyone know? I'm under the assumption that cColorBits only refers to the number of bits used to hold color information in the frame buffer--24 or 32-bpp modes will always use 8 bits per channel, and 16-bpp modes are just weird...so just use 8 bits per channel, too. =)

cDepthBits, I think, can be just about anything. I'd assign it either a value held in some user-modifiable variable or some constant. MSDN's example uses 32, and that may be just fine. Test your app to see what works--I need to do the same. =)
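
By the way, one way to see what you actually got (rather than what you asked for) is to read the format back after ChoosePixelFormat picks one--just a sketch, with 'format' being the index it returned:

PIXELFORMATDESCRIPTOR chosen;
DescribePixelFormat(hdc, format, sizeof(PIXELFORMATDESCRIPTOR), &chosen);
// chosen.cColorBits and chosen.cDepthBits now hold what the driver really gave you,
// which may be more (or less) than the 16/24/32 you requested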

I'd appreciate some other comments/agreements/corrections from you gurus out there. =)
Yeah, I do allow switching res. Basically the window is destroyed and then recreated, so I set the bits of the descriptor to the current bit depth.
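
Roughly the sequence is (just a sketch of what I mean, with hrc/hdc/hwnd being the usual handles and devmode holding the new mode):

// tear down the old context before destroying the window
wglMakeCurrent(NULL, NULL);      // deselect the rendering context
wglDeleteContext(hrc);           // free it
ReleaseDC(hwnd, hdc);            // give the DC back
DestroyWindow(hwnd);             // kill the old window

ChangeDisplaySettings(&devmode, CDS_FULLSCREEN);  // switch to the new res/bit depth

// ...then recreate the window, grab a fresh DC, refill pfd with the new
// cColorBits, and run ChoosePixelFormat/SetPixelFormat again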
Hardcore Until The End.
I use 16 bpp or 32 bpp for the color bits. 24 isn't supported on some modern cards (i.e., the one I have), and when set that way it sometimes behaves strangely.


*oof*
Now that I think about it, it was pretty silly of me to have set it to 24 all the time when the member cColorBits should contain the number of bits used for the frame buffer, which would be different for different color depths. =) Silly me. The thing I'm still left wondering is exactly what it means to have an x-bit Z-buffer. That is to say, let's assume that I'm using an 8-bit Z-buffer and I use glVertex3f...how does a 32-bit, floating-point value get shrunk down to 8 bits? Is it just that? That, of course, being a/the reason one wants a larger Z-buffer? =)
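
My rough understanding (sketch only): the float from glVertex3f first goes through the modelview/projection transforms and the perspective divide, ending up as a depth value in [0, 1], and only that gets squeezed into however many bits the Z-buffer has. Something like:

// window-space depth ends up in [0, 1]; an n-bit Z-buffer stores it as an integer
unsigned int depth_bits = 16;                        // e.g. a 16-bit Z-buffer
unsigned int max_value  = (1u << depth_bits) - 1;    // 65535 distinct depth levels
float z_window = 0.73f;                              // some depth after projection
unsigned int stored = (unsigned int)(z_window * (float)max_value + 0.5f);
// with an 8-bit buffer you'd only have 256 levels, so surfaces close together in
// depth collapse onto the same value and start Z-fighting--which is the reason
// you want a bigger Z-buffer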

This topic is closed to new replies.
