Virtual Entities
I've got a little issue with my AI entities. My game levels are large enough to hold a few hundred NPC entities, but CPU power will most likely only allow 60-80 active entities (empirical values).
I built a micro-threading FSM framework including time-slicing for each individual entity, a job queue for A* requests, a freezing state for idle times etc., all for better load balancing, but it still isn't enough. The cost of planning and movement (including physics) for a single entity is just too high; I think I need to come up with some solution other than optimizing the AI/physics code.
The basic idea is to have a limited pool of active entities which will be spawned on-the-fly in areas of activity. If you only controlled the player this would be quite easy, but you also control your own entities (up to 30). On the other hand this game is more like a simulation. Spawning all entities on-the-fly would sacrifice the simulation aspect for non-player-controlled entities, though I could live with this decision.
So far I have the following ideas:
I) Just let up to 40 entities roam the level and try to "attract" them to the player and player-controlled entities.
II) Divide the level into sections and give each section an activity state. Inactive sections will release all entity resources whereas active sections will spawn entities on-the-fly, much like L4D. The player will lead to a higher activity state whereas player-controlled entities will lead to lower activity states.
III) Serialize/deserialize entities on-the-fly. The original virtual entity population will still exist, but once an entity is out of range of any "active entity" (player or player-controlled entity), it will be serialized out of the simulation.
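Idea III could be sketched roughly like this. This is only an illustration under my own assumptions: the entity layout, the `ACTIVATION_RADIUS`, and the function name are all invented, and real serialization to storage is stubbed out with a flag.

```python
import math

ACTIVATION_RADIUS = 50.0  # hypothetical range around "active" entities

def update_activation(entities, focus_points, radius=ACTIVATION_RADIUS):
    """Serialize entities out of range of every focus point (player or
    player-controlled entity); reactivate the ones that come back in range.
    Each entity is a dict with 'pos' (x, y) and an 'active' flag."""
    for e in entities:
        near = any(math.dist(e["pos"], f) <= radius for f in focus_points)
        if e["active"] and not near:
            e["active"] = False   # real code would serialize state to storage here
        elif not e["active"] and near:
            e["active"] = True    # real code would deserialize / respawn here
```

The interesting design question is what happens to an entity while the flag is off: with pure serialization it is frozen in place, which is exactly the simulation sacrifice mentioned above.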
All this must be a known problem in game development, I just don't want to reinvent the wheel. Any help will be appreciated.
When dealing with huge numbers of entities, what can work is an LOD scheme for the distant AI.
This works basically like model LOD: you have a rough version of the AI.
For example, you don't need to smooth paths for pathfinding, or you can compute paths on a higher-level graph with a really small number of nodes.
This wouldn't be as fine-grained as the version used when the entity is close to the player, so pathfinding will be faster.
Don't update animations on them, etc.
You can think of equivalents like this for all the tasks an AI entity needs to do.
When the entity becomes relevant to the player (probably based on distance/line of sight), you switch to the "normal AI" with all the details.
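The distance/relevance switch above could look something like this; the tier names and thresholds are purely illustrative assumptions, not anything from a real engine:

```python
def pick_ai_lod(distance, near=20.0, far=80.0):
    """Pick an AI detail tier from distance to the nearest relevant
    observer. Thresholds are made up; line-of-sight checks would be
    layered on top in a real game."""
    if distance <= near:
        return "full"      # fine-grained pathfinding, smoothing, animation
    if distance <= far:
        return "coarse"    # high-level graph, no path smoothing, no animation
    return "dormant"       # no per-frame AI updates at all
```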
What about using a hierarchical approach? You may first work on groups of entities (making higher-level decisions) and then on single entities. You could then always update the groups, but only the individuals in active areas.
Or you can combine the two methods (simpler pathfinding, etc.) and only apply it to groups of units, if there are groups of units in your game.
It really depends on how your AI is currently structured.
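The "always update groups, individuals only in active areas" idea could be sketched like this; the group structure, the placeholder decision, and the area IDs are all assumptions for illustration:

```python
def update_simulation(groups, active_areas):
    """Group-level decisions run every tick; per-member AI runs only when
    the group's area is active. Each group: {'area': id, 'members': [...]}.
    Returns how many fine-grained member updates ran."""
    fine_updates = 0
    for g in groups:
        g["decision"] = "advance"       # placeholder for a strategic decision
        if g["area"] in active_areas:
            for m in g["members"]:
                m["updated"] = True     # full per-entity AI would run here
                fine_updates += 1
    return fine_updates
```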
Two levels of detail is a good solution. When searching a larger radius of terrain, ignore details. If the monster is close to the player, then search only a small radius around the monster.
Sharing expensive computations between monsters will also help. Group monsters that are close to each other and compute a path (or some other low-level computation, like predicting where the enemy will be) only once, then share the result between all monsters in the group.
Each individual monster can then decide to use the path, or make high-level decisions based on the low-level results.
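A minimal sketch of that sharing, assuming a simple distance-based clustering and an injected (expensive, hypothetical) pathfinder:

```python
import math

def shared_paths(monsters, goal, cluster_radius=10.0, find_path=None):
    """Compute one path per spatial cluster and share it among members.
    `monsters` is a list of (x, y) positions; `find_path` stands in for
    the expensive pathfinder and is called once per cluster."""
    clusters = []   # list of (leader_pos, path)
    result = {}
    for i, pos in enumerate(monsters):
        for leader, path in clusters:
            if math.dist(pos, leader) <= cluster_radius:
                result[i] = path            # reuse the cluster's path
                break
        else:
            path = find_path(pos, goal)     # one expensive call per cluster
            clusters.append((pos, path))
            result[i] = path
    return result
```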
Also remember, you don't have to do high-level AI processing every frame. For many decisions, you can get away with doing things once or twice a second.
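Throttling decision-making to once or twice a second can be as simple as an accumulator per entity; this is just a sketch, with the interval and "decision" stubbed out:

```python
class ThrottledBrain:
    """Run expensive decision-making at a fixed interval instead of
    every frame. Interval and decision logic are illustrative."""
    def __init__(self, interval=0.5):
        self.interval = interval   # seconds between decisions
        self.accum = 0.0
        self.decisions = 0

    def update(self, dt):
        """Called every frame with the frame's delta time in seconds."""
        self.accum += dt
        if self.accum >= self.interval:
            self.accum -= self.interval
            self.decisions += 1    # expensive planning would run here
```

Cheap things like steering still run per frame; only the heavy planning goes through the throttle.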
Dave Mark - President and Lead Designer of Intrinsic Algorithm LLC
Professional consultant on game AI, mathematical modeling, simulation modeling
Co-founder and 10 year advisor of the GDC AI Summit
Author of the book, Behavioral Mathematics for Game AI
When dealing with huge numbers of entities, what can work is a LOD level for the distant AI.
This works basically as a model LOD level. You have a rough version of the AI.
My problem is tightly coupled to the physics engine. My entities move in a 3D environment controlled by a physics engine. Improving only the AI would just shift the bottleneck to the physics engine.
What about using a hierarchical approach? You may first work on groups of entities (making higher level decisions) and then on single entities. You may then always update groups, but only the individuals in active areas.
This sounds more feasible for my requirements.
Also sharing expensive computations between the monsters will help. Group monsters who are close to each other and compute only once a path or other low level computation like predicting where the enemy will be, and then share the knowledge of that computation between all monsters.
Then each individual monster can decide to use the path or make any high level decisions based on the low level results.
Pathfinding isn't the problem (at least not yet :-) ). Sensory scanning of the surroundings and a scripted behaviour tree are quite expensive, but incredibly flexible. I think the behaviour tree, written in Lua, is currently my bottleneck. Although I could optimize it, I want to solve the problem on a higher level, like L4D.
Also remember, you don't have to do high-level AI processing every frame. For many decisions, you can get away with doing things once or twice a second.
Micro-threading and time-slicing already solve this problem. My entities have a "heartbeat" which forces an update interval between 100 and 2000 ms. The heartbeat increases when the entity gets in contact with other entities (feels threatened or has been attacked) and decreases when nothing special happens.
Still, the entity exists and moves through the world (physics engine + steering behaviour), which can't be optimized in a similar way.
So far I think that performance improvements in the AI (LOD etc.) will leave me with a bigger physics performance problem. Currently my only hope of solving this seems to be removing the entities from the (physics) world simulation entirely, to get rid of both the AI and the physics load.
First idea (LOD):
1. The entity spawns in the world with full sensory abilities, behaviour tree and physics representation.
2. When the entity leaves the area of focus defined by the player, it loses its physics representation and most of its sensory abilities; a new, more restricted behaviour tree takes over (low detail level).
3. The entity interacts on a meta-level; interaction with other meta-level entities is possible but restricted.
4. The entity recovers its sensory abilities, behaviour tree and physics representation once it enters the focus area again (it will be respawned at a plausible spot).
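The four steps above boil down to a two-state transition per entity. A minimal sketch, assuming a dict-based entity and stubbing the expensive swap of physics bodies and behaviour trees with a flag:

```python
def transition(entity, in_focus):
    """Swap an entity between its 'full' and 'meta' representations as it
    enters or leaves the player's focus area. The state names and the
    'has_physics' flag are illustrative stand-ins for the real swap."""
    if entity["state"] == "full" and not in_focus:
        entity["state"] = "meta"
        entity["has_physics"] = False   # drop physics body and most senses
    elif entity["state"] == "meta" and in_focus:
        entity["state"] = "full"
        entity["has_physics"] = True    # respawn at a plausible spot
    return entity
```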
Second idea (hierarchy):
1. The AI controls areas of the world on more of a strategy level (N units of X dwell in area Y, etc.).
2. The AI makes decisions on the strategy level.
3. Once an area gains focus, entities are spawned on-the-fly according to the area profile.
4. Entities are just representations of the strategy profile; they act according to it and give some feedback.
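The spawn step of the second idea could be as simple as materializing an area's profile when it gains focus; profile shape and names here are pure assumptions:

```python
def spawn_for_area(profile, focused):
    """Turn a strategic area profile (unit type -> count) into concrete
    entities only when the area has focus. While unfocused, only the
    abstract profile exists and is simulated at the strategy level."""
    if not focused:
        return []   # area stays abstract: nothing concrete to update
    return [{"type": t} for t, n in sorted(profile.items()) for _ in range(n)]
```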
Ashaman, how is it coming?
I was thinking about your 'heartbeat' idea, what exactly does the heartbeat control? Amount of time sliced to the entity? or does it indicate the next scheduled slice of time for it?
The spawning on-the-fly thing has been implemented and works quite well, though it will need more testing and tweaking.
The 'heartbeat' controls the frequency of calling an entity's update method, which handles the AI part of the entity (the movement part is always updated every frame). The heartbeat varies between 100 ms and 2000 ms. That means a passive entity will only be updated every 2 seconds.
This could lead to delayed reactions, but on the other hand a delayed reaction from a surprised creature feels right. The heartbeat is controlled by sensory and action events. When an entity enters combat the heartbeat will be very high; when an entity sees a threat or some interesting item the heartbeat will slowly increase. When an entity does not encounter anything interesting and is not in combat, it will slowly decrease, up to a 2000 ms interval.
PS: I use an earliest-deadline-first scheduling algorithm for heartbeat management.
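An earliest-deadline-first heartbeat queue like the one described maps naturally onto a min-heap; this is my own minimal sketch of that idea, not the actual implementation:

```python
import heapq

class HeartbeatScheduler:
    """Earliest-deadline-first scheduling of entity AI updates: each
    entity is queued at now + its current heartbeat interval, and the
    per-frame tick pops everything whose deadline has passed."""
    def __init__(self):
        self.queue = []   # min-heap of (deadline_ms, entity_id)

    def schedule(self, entity_id, now_ms, interval_ms):
        heapq.heappush(self.queue, (now_ms + interval_ms, entity_id))

    def due(self, now_ms):
        """Pop every entity whose deadline has passed, earliest first.
        The caller runs their AI update and re-schedules them."""
        ready = []
        while self.queue and self.queue[0][0] <= now_ms:
            ready.append(heapq.heappop(self.queue)[1])
        return ready
```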
Your number of active entities sounds painfully low... what exactly is the physics/steering stuff doing? What is your hardware target?
It seems to me like a simplification of the physics side would open you up to a lot of heuristic improvements on the AI side, and vice versa; I'd start with making the physics as simple as possible in the case where the player isn't actively observing anything, and ramping it up to full detail when a player-controlled entity is nearby. This is really just the LOD suggestion from earlier, except applied to your entire game simulation rather than just the AI side.
Of course, more details on the nature of your simulation would be incredibly helpful.
For comparison, X3: Terran Conflict simulates several thousand entities actively across the game universe in realtime, on a commodity PC.