Need help switching from 8 bit to 16 bit
I am coding a game at the moment that is all done in 8-bit, but the artist working on the graphics wants 16-bit. I have read up on high-color modes and think my best option is to load 24-bit images and convert them to 16-bit. I can set the video mode to 16-bit, but that is pretty much as far as I have got in changing my code to load 24-bit images and convert them to 16-bit.
What I am asking for is help on how to do this, or pointers to where I can find this help.
Thanks very much for your time
Just my thoughts take them as you will.
"People spend too much time thinking about the past, whatever else it is, it's gone" - Mel Gibson, Man Without A Face
If I understand your question correctly then it's pretty easy. First, though, you need to determine whether you are in 5-6-5 or 5-5-5 mode. Look in the DX SDK for information on how to do that. In 5-6-5 mode, the top 5 bits are red, the next 6 are green, and the last 5 are blue. In 5-5-5 mode, the top bit is unused, then you have 5 for red, 5 for green, and 5 for blue. Here are two inline functions that I use to convert from 24-bit to 16-bit:
// Requires <cstdint> for the fixed-width integer types.
// Pack 8-bit R, G, B into a 16-bit 5-6-5 pixel (red in the high bits).
inline uint16_t Create_565_Color(uint8_t r, uint8_t g, uint8_t b)
{ return (b >> 3) | ((g >> 2) << 5) | ((r >> 3) << 11); }

// Pack 8-bit R, G, B into a 16-bit 5-5-5 pixel (top bit unused).
inline uint16_t Create_555_Color(uint8_t r, uint8_t g, uint8_t b)
{ return (b >> 3) | ((g >> 3) << 5) | ((r >> 3) << 10); }
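Putting the 5-6-5 function to work on a whole image: a minimal sketch, assuming the 24-bit source is tightly packed, 3 bytes per pixel in R, G, B order (your file format may store B, G, R instead, so check before using it). The Convert24To565 helper name is mine, not anything from the SDK:

```cpp
#include <cstdint>
#include <vector>

// Pack 8-bit R, G, B into a 16-bit 5-6-5 pixel (red in the high bits).
inline uint16_t Create_565_Color(uint8_t r, uint8_t g, uint8_t b)
{ return (b >> 3) | ((g >> 2) << 5) | ((r >> 3) << 11); }

// Convert a tightly packed 24-bit buffer (3 bytes per pixel, assumed
// R, G, B order) into a 16-bit 5-6-5 buffer, one pixel at a time.
std::vector<uint16_t> Convert24To565(const std::vector<uint8_t>& src)
{
    std::vector<uint16_t> dst;
    dst.reserve(src.size() / 3);
    for (std::size_t i = 0; i + 2 < src.size(); i += 3)
        dst.push_back(Create_565_Color(src[i], src[i + 1], src[i + 2]));
    return dst;
}
```

You would do this once at load time, then blit the converted 16-bit buffer as usual.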
Hope that helps.
*** Triality ***
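On the "determine 5-6-5 vs 5-5-5" point above: one common approach is to inspect the green channel bit mask the driver reports (in DirectDraw this comes from the dwGBitMask field of the surface's DDPIXELFORMAT). The helper below is just a sketch of that comparison, not SDK code:

```cpp
#include <cstdint>

enum class PixelMode { Mode565, Mode555, Unknown };

// Six green bits yield mask 0x07E0 (5-6-5); five yield 0x03E0 (5-5-5).
PixelMode DetectMode(uint32_t greenBitMask)
{
    if (greenBitMask == 0x07E0) return PixelMode::Mode565;
    if (greenBitMask == 0x03E0) return PixelMode::Mode555;
    return PixelMode::Unknown;
}
```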