After prototyping some other options for holding spatial data,[1] I've settled on using influence maps as a primary source of data for my UtilityAI system. I'm working on an implementation of influence maps in Unity C# based on Dave Mark's GDC talk, "Spatial Knowledge Representation through Modular Scalable Influence Maps." I'm using a grid system for pathfinding with 1m cells, and my first influence maps use the same dimensions and cell size. The AI units are modern-era riflemen and the like, with no melee units for now, so threat maps will need to be larger since units have long range (especially a sniper unit).
A number of the influence maps must exclude cell positions that contain obstacles. Right now I'm working on the threat map. It has the added complexity of needing to zero out tiles with obstacles and tiles not in the agent's line of sight. So here's what I'm thinking:
A grid of bits, where the center of the grid is the agent's position. Each bit corresponds to a tile position. A bit value of 1 means that tile is visible; a value of 0 means it's an obstacle or is outside the agent's line of sight. Then I can take the threat map with its decayed values and multiply it by this grid of bits, so tiles outside the line of sight (or on obstacles) are multiplied by 0, while tiles in the line of sight are multiplied by 1 and don't change.
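To make the multiply step concrete, here's a minimal sketch of what I mean (names like `MaskThreat` are just illustrative, not anything from a library):

```csharp
// Hypothetical sketch: apply a visibility mask to a decayed threat map.
// threat[x, y] holds the decayed threat value; visible[x, y] is 1 if the
// tile is in line of sight and not an obstacle, 0 otherwise.
float[,] MaskThreat(float[,] threat, byte[,] visible)
{
    int w = threat.GetLength(0), h = threat.GetLength(1);
    var masked = new float[w, h];
    for (int x = 0; x < w; x++)
        for (int y = 0; y < h; y++)
            masked[x, y] = threat[x, y] * visible[x, y]; // 0 zeroes out, 1 keeps
    return masked;
}
```

(In practice the multiply could also be a branch or a bitwise test; the multiply-by-0-or-1 formulation just keeps it branchless.)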
All units are humanoid and have roughly the same height for line of sight. I'm baking this data outside of runtime since it involves a lot of ray casting,[2] and I need to store this grid of bits for each tile in memory as efficiently as possible. Currently I'm using a line of sight distance of 32, so each tile has 64×64 bits representing line of sight.
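For scale, 64×64 bits is 4096 bits = 512 bytes per tile if packed tightly. One candidate layout I've been considering is packing each 64-bit row into a `ulong` (this is just a sketch of the idea, not settled code):

```csharp
// Hypothetical sketch: one 64x64 LOS grid packed into 64 ulongs (512 bytes).
// Row y lives in rows[y]; bit x of that ulong is the visibility of column x.
public struct LosMask
{
    public ulong[] rows; // length 64, one ulong per row

    public bool IsVisible(int x, int y)
        => ((rows[y] >> x) & 1UL) != 0;

    public void SetVisible(int x, int y, bool visible)
    {
        if (visible) rows[y] |= 1UL << x;
        else         rows[y] &= ~(1UL << x);
    }
}
```

At 512 bytes per tile, the total cost is 512 × (number of walkable tiles), which seems manageable for map sizes I care about, but I'd like a sanity check on that.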
So how should I store and access these line of sight bit values? What option is most memory efficient? BitArrays seem like they might be a good option, but I've read that they're slow. A collection of BitVector32s? One thing I've seen repeatedly in these types of gamedev operations is data stored in 2D textures. This is fascinating, but I don't understand why it's worth going from a data collection to a Texture2D that will have to be turned back into a data collection to be usable.
I'd greatly appreciate some advice on how I should store this data.
[1] It's worth mentioning that I implemented the Polar Visibility concept from the Killzone AI white paper, but the resultant data was too imprecise without doing a ton of ray cast samples.
[2] Baking this data outside of runtime works like this: for each tile, do a ray cast to every tile within the sight radius, which is 32. So every tile has a 64 by 64 grid of 1 or 0 bit values, where 1 means there was no obstacle and the tile is visible at the set height, and 0 means the ray cast hit a collider on or before the tile. I could speed this up by ray casting in the direction of each row/column of tiles, but since it's outside of runtime I don't care too much about the baking process taking a little longer.
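The bake loop I have in mind looks roughly like this, using `Physics.Linecast` between tile centers at a fixed eye height (the eye height constant and the `ulong`-row packing are my own assumptions, not from any reference):

```csharp
// Hypothetical bake sketch: for each source tile, linecast to every tile
// within the sight radius at a fixed eye height and record a visibility bit.
const int SightRadius = 32;
const float EyeHeight = 1.7f; // assumed humanoid eye height, in meters

ulong[] BakeTile(Vector3 tileCenter)
{
    var rows = new ulong[2 * SightRadius]; // 64 rows of 64 bits
    Vector3 eye = tileCenter + Vector3.up * EyeHeight;
    for (int dz = -SightRadius; dz < SightRadius; dz++)
    {
        for (int dx = -SightRadius; dx < SightRadius; dx++)
        {
            // 1m cells, so the tile offset in world space is just (dx, dz)
            Vector3 target = tileCenter + new Vector3(dx, EyeHeight, dz);
            // Physics.Linecast returns true when a collider blocks the line
            bool visible = !Physics.Linecast(eye, target);
            if (visible)
                rows[dz + SightRadius] |= 1UL << (dx + SightRadius);
        }
    }
    return rows;
}
```

This is the naive one-linecast-per-tile-pair version; as mentioned above, marching rays outward per row/column would cut the cast count, but the bake is offline so I haven't bothered.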