RGB Craziness
You can't use the RGB macro for 16-bit color. You gotta roll yer
own 16-bit macro.
// this builds a 16 bit color value in 5.5.5 format (1-bit alpha mode)
#define _RGB16BIT555(r,g,b) ((b%32) +((g%32) << 5) + ((r%32) << 10))
// this builds a 16 bit color value in 5.6.5 format
#define _RGB16BIT565(r,g,b) ((b%32) + ((g%64) << 5) + ((r%32) << 11))
Hope that helps.
/Alex
#define _RGB16BIT555(r,g,b) ((b >> 3) + ((g >> 3) << 5) + ((r >> 3) << 10))
#define _RGB16BIT565(r,g,b) ((b >> 3) + ((g >> 2) << 5) + ((r >> 3) << 11))
- Splat
You can just use a fast modulus: replace %32 with &31.
It only works with powers of 2. The formula is %n = &(n - 1),
so %32 = &(32 - 1) = &31.
Vader
The problem with "modulus"ing the color values is that we want the MOST significant bits of the source color, not the least. &31 and %32 cut off the top three bits, which carry the _most_ relevant data for a 5-bit / 6-bit color version.
With shifting - the correct way -, 10111011 (187) will become 10111 (23) in 5-bit, while using modulus will get you 11011 (27). If you were to convert these back to 8-bit, the shifted 10111 would become 10111000 (184) and the modulus one would become 11011000 (216). Which is closer to the original 187?
An even more ridiculous example is 00011111 (31) in 8-bit, is converted to 5-bit as 11111 (31), which is the brightest color in 5-bit, while in 8-bit it is quite dark. I can't believe I read this in LaMothe and didn't realize. I guess that book has more errors than I saw at first...
- Splat
[This message has been edited by Splat (edited November 26, 1999).]
What I should have explained is that the modulus is not a substitute for shifting.
I was just optimizing the use of %32.
To use the LaMothe 16-bit macro correctly, you have to pass in channels already
reduced to 5 bits each, i.e. you can't stick in 8-bit values like (255,255,255)
directly, even though this particular example happens to come out correctly as
(1F,1F,1F). You have to shift FIRST, then use the 16-bit macro.
Vader
Jim
#define _RGB16BIT(r,g,b) ((b%32) + ((g%32) << 5) + ((r%32) << 10))
typedef unsigned char UCHAR;
UCHAR *bitmap_buffer = NULL;
bitmap_buffer = (UCHAR *)ddsd.lpSurface;
bitmap_buffer[x+ddsd.lPitch*y]=_RGB16BIT(255,255,255);
Now the colors are better, but not perfect. (0,0,0) makes black, (255,255,255) makes white, but then (0,0,255) makes a really bright blue (it should be dark blue), and other combinations don't work either.
Now about that shifting my color values: I am fairly new to programming, and I am only beginning to grasp the whole shifting thing. Now Vader said that (255,255,255) is 24-bit values, so how EXACTLY do I convert those to 16-bit values?
(btw, I tried those other macros and they all worked the same for me; however, I didn't time them, so their speeds may vary.)
[This message has been edited by CoolMike (edited November 27, 1999).]