bitvector worries...
hey all! I've got a bit of a question w/ bitvectors...
first of all, I'm coding for a MUD (multi-user dungeon/domain). for those of you who aren't familiar with it, it's an entirely text-based game (like the original Zork?). you access it via telnet and enter a world of rooms, monsters, and characters built out of text.
since the game is basically a telnet server, we're most likely working on a *nix system. my knowledge of programming hits a dead-end w/ anything outside of console/win32 -- but luckily a lot of it was written for me. I've managed to port it over to win32 just for my personal ease (w/ someone's help.. I can't remember his name). SO, I'm working w/ a VC6 compiler (but I DO have to get it working in gcc eventually)...
now, on to the problem. in the game, your character has any number of status "affects" -- we call them aff flags. for example, you could be 'affected' with... AFF_BLIND, or AFF_CURSE, or something of the sort -- I assume you get plenty of that in more advanced game programming as well. obviously, the most memory-saving way to deal w/ these is a bitvector! each bit is given a definition (w/ #define AFF_BLIND 0x1, for example) and later checked with something like:
IS_SET( ch->affbit, AFF_BLIND );
actually, the only time the affbits are ever accessed is with these 3 macros:
#define IS_SET(flag,bit)     ((flag) & (bit))
#define SET_BIT(var,bit)     ((var) |= (bit))
#define REMOVE_BIT(var,bit)  ((var) &= ~(bit))
...other than being saved to or loaded from a file...
ch->affbit is defined in a struct CHAR_DATA, as a long. Makes sense, since the &, |=, and &= operators work on integer data types. Now MY problem comes in when 32 bits (a long) isn't enough to cover the number of affects there are...
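to make that concrete, the current arrangement looks more or less like this (a trimmed-down sketch; the real flag list is far longer and the exact names may differ):

#define AFF_BLIND 0x1
#define AFF_CURSE 0x2
#define AFF_INVIS 0x4   /* ...one bit per affect, 32 max in a long */

struct CHAR_DATA {
    /* ...lots of other fields... */
    long affbit;
};

/* given a CHAR_DATA *ch, typical call sites scattered all over the source: */
SET_BIT( ch->affbit, AFF_CURSE );
if ( IS_SET( ch->affbit, AFF_BLIND ) )
    REMOVE_BIT( ch->affbit, AFF_BLIND );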
a little more on muds. since they run on a server where I have to rent cpu time -- I gotta keep my code somewhat efficient. 2nd, since we check the affbits (or tobits, actbits, chanbits, etc, etc -- there are lots of bitvectors) hundreds, if not thousands, of times every second -- I should try my best not to make this code too complicated & overhead-heavy. 3rd, since it's checked so often, the smallest change has me editing the source file in 2000+ places... =(
this was the temp fix I came up with...
#define aa 0x1
#define ab 0x2
...
#define a5 0x40000000
#define a6 0x80000000 /* first 32 bits */
#define ba 0x100000000
#define bb 0x200000000
...
#define b5 0x4000000000000000
#define b6 0x8000000000000000 /* 64 bits */
then, something like..
#define AFF_BLIND aa
#define AFF_CURSE ab
...and so on
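(one thing worth flagging: plain hex literals past 32 bits are shaky on both compilers. assuming VC6's __int64 and gcc's long long -- an assumption on my part, not anything from the stock code -- the 64-bit constants would probably need explicit suffixes, roughly like this:)

#ifdef _MSC_VER
#define ba 0x100000000ui64            /* VC6: unsigned __int64 literal */
#define b6 0x8000000000000000ui64
#else
#define ba 0x100000000ULL             /* gcc: unsigned long long literal */
#define b6 0x8000000000000000ULL
#endif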
I changed the data type of the bitvectors (ch->affbit) to something like:
struct CHAR_DATA {
    ...
    BITVECTOR affbit;
};
...and added a new struct
struct bitvector {
    long high;   /* bits 32..63 */
    long low;    /* bits  0..31 */
};
typedef struct bitvector BITVECTOR;
...finally, I rewrote the macros:
#define IS_SET(flag, bit)    ( ((bit) > a6) ? \
    ( ((flag).high) &  ((unsigned long)((bit) >> 32)) ) : \
    ( ((flag).low ) &  ((unsigned long) (bit)       ) ) )

#define SET_BIT(var, bit)    ( ((bit) > a6) ? \
    ( ((var ).high) |= ((unsigned long)((bit) >> 32)) ) : \
    ( ((var ).low ) |= ((unsigned long) (bit)       ) ) )

#define REMOVE_BIT(var, bit) ( ((bit) > a6) ? \
    ( ((var ).high) &= ~((unsigned long)((bit) >> 32)) ) : \
    ( ((var ).low ) &= ~((unsigned long) (bit)       ) ) )
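(the nice part is the call sites don't change at all -- a quick sketch of what I mean, with AFF_CURSE living in the low dword and ba in the high one, per the defines above:)

SET_BIT( ch->affbit, AFF_BLIND );    /* lands in .low  */
SET_BIT( ch->affbit, ba );           /* lands in .high */
if ( IS_SET( ch->affbit, AFF_CURSE ) )
    REMOVE_BIT( ch->affbit, AFF_CURSE );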
it checks the high dword against the high 32 bits of the "bit"... and the low dword against the low 32 bits of the "bit". when I extend the macros to 128 bits, however, my VC gives me a "constant too large" error. is there a compiler directive that'll enable larger constants? if so, is there one in gcc? or should I trash this macro idea & attempt to rewrite the whole damn thing (even though I'll lose speed & I'll have to change it in a million+1 places)? does anyone have any good solutions? thanks much!
---
~khal
Forget about 128-bit constants on a 32-bit system, it won't happen. Whatever it is that you are talking about, you are making it way more complicated than it really is. This could help:
//define a structure using bitfields and make it any size
struct affects {
    int affect1 : 1;    //define affect1 with 1 bit
    int affect2 : 1;    //define affect2 with 1 bit
    ...                 //as many as you need
};
struct affects affbit;  //define affbit as this structure instead of a long

#define IS_SET(var,affect)     (var.affect)
#define SET_BIT(var,affect)    (var.affect = 1)
#define REMOVE_BIT(var,affect) (var.affect = 0)
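(used roughly like this -- note the second argument is now a member name rather than a mask constant, so existing call sites would need that swap:)

SET_BIT(affbit, affect1);
if (IS_SET(affbit, affect2)) { /* ... */ }
REMOVE_BIT(affbit, affect1);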
cmaker- I do not make clones.
stupid smiley faces.
cmaker
- I do not make clones.
If one 32-bit word isn't enough, you might try to split up the afflictions into sub-categories that will each fit into 32 bits. This also helps execution speed, since checking for some affliction won't require checking through all sub-categories, only those that apply.
In all honesty, though, I think this breaks down identically to the bitfield structure clonemaker suggests (LOL on the smileys). The problem is that you can't really do masking that way, if you ever have blocks of flags you want to check; in order to do that, you'd probably have to throw in a union and overlay bit masks on top of your single-bit flags.
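Something along these lines, maybe (just a sketch -- the names are invented, and how the bitfields line up against the mask is compiler-dependent, so don't take it as gospel):

union aff_flags {
    struct {
        unsigned blind : 1;
        unsigned curse : 1;
        unsigned invis : 1;
        /* ...one field per affect... */
    } f;
    unsigned long mask;   /* overlay for block checks / file I/O */
};

/* single flags go through the bitfields, block tests through the mask: */
/*   aff.f.blind = 1;                         */
/*   if (aff.mask & (AFF_BLIND | AFF_CURSE))  */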
hmm... essentially, I kinda came up with that with the bitvector struct that I showed above...
struct BITVECTOR {
    long high;
    long low;
};
...and inside CHAR_DATA, I defined the affbit as a BITVECTOR instead of a long.
now, I need to create either macros or functions that can test the entire 64-bit (in this case) bitvector. there's not a datatype that can handle a 64-bit mask, and if I change it to a high:low pair, that means I've got to change the calls to IS_SET, SET_BIT, and REMOVE_BIT in a million different places. the other downside is the book-keeping involved in defining the affects. they're uniquely defined as an alphanumeric combination (aa, ab, b5, etc.), where each combination is really just a mask (0x1, 0x2, 0x4, 0x8, 0x10, etc...). unfortunately, these constants are limited to 32 bits. is there a way to #define anything similar to a struct? a multi-field constant, so to speak? (I know, I'm really stretching it)
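for what it's worth, the closest thing I can picture (purely a sketch of my own -- none of these names or sizes come from the stock code) is to stop #defining the affects as masks altogether and define them as plain bit numbers, letting the macros do the word/mask split internally:

#define BV_WORDS 4   /* 4 x 32 = 128 flags; grow as needed */

typedef struct { unsigned long bits[BV_WORDS]; } BITVECTOR;

#define AFF_BLIND    0
#define AFF_CURSE    1
/* ... */
#define AFF_WHATEVER 97   /* anything up to BV_WORDS*32 - 1 */

#define IS_SET(var, bit)     ((var).bits[(bit) / 32] &   (1UL << ((bit) % 32)))
#define SET_BIT(var, bit)    ((var).bits[(bit) / 32] |=  (1UL << ((bit) % 32)))
#define REMOVE_BIT(var, bit) ((var).bits[(bit) / 32] &= ~(1UL << ((bit) % 32)))

call sites keep the same three-macro shape, so the 2000+ existing uses stay untouched; the catch is that any spot that ORs two flags together into one call, plus the save/load code, would have to learn the new scheme.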
the other solution, reorganizing the data... is very appealing; however, it also involves a major overhaul of the structure of the system. I've been wanting to do it, but it's such a large task (and I'm always pressured to put out updates) that I've hesitated touching the thing. honestly, though, I suppose that's the best solution... thanks =)
---
~khal