Well, I'm currently trying to test and tune the performance of my game and I'm hitting a performance wall within my AI.
Some background:
- pathfinding is not part of the bottleneck and can be ignored
- entities are controlled by a behavior tree written in Lua, running on LuaJIT 2
- entities change their update intervals according to the activity of their surroundings (LOD)
My stress test is a cluster of about 50 entities in a combat situation with the player close by. Idle performance for the same entities is ~4 ms; in this stress test it climbs to 60-120 ms, i.e. 1-2 ms per entity. My goal is to get this down to something much lower and more stable (~20 ms).
Due to the non-linear behavior of the performance, I suspect the sensory scanning of the surroundings is the bottleneck, since it is effectively O(n^2). I can query the surroundings for entities that meet a list of criteria and get a basic rating of their importance (checking the criteria is quite expensive). I'm not sure what the best approach would be; some ideas so far:
- Asynchronous sensory scanning: an entity enqueues a request and gets the result a few frames later (see the rough sketch after this list).
  Issue: delayed reactions and refactoring of my BTs.
- Dynamic range adjustment: when performance drops, the scanning range shrinks too.
  Issue: missing important targets when inside a cluster.
- Caching of sensory results: partly done (the current target is locked and only re-evaluated).
  Issue: missing changes in the surroundings, delayed reactions?
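For the asynchronous idea, what I have in mind is roughly the sketch below (a sketch only; `ScanQueue`, `scans_per_frame`, and `world:query_neighbors` are placeholder names, not real code from my project): a shared queue that processes a fixed budget of scan requests per frame and hands each result back to the requesting entity a few frames later.

```lua
-- Sketch of the asynchronous sensory-scanning idea (hypothetical names).
-- Entities enqueue a scan request; the queue processes a fixed budget per
-- frame and delivers the result via a callback a few frames later.
local ScanQueue = { pending = {}, scans_per_frame = 8 }

function ScanQueue.request(entity, criteria, callback)
  table.insert(ScanQueue.pending, {
    entity = entity, criteria = criteria, callback = callback
  })
end

function ScanQueue.update(world)
  local budget = ScanQueue.scans_per_frame
  while budget > 0 and #ScanQueue.pending > 0 do
    local req = table.remove(ScanQueue.pending, 1)
    -- world:query_neighbors stands in for the existing (expensive) scan.
    local result = world:query_neighbors(req.entity, req.criteria)
    req.callback(req.entity, result)   -- entity reacts a frame or two later
    budget = budget - 1
  end
end
```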
Does anyone have experience, best practices, or ideas?
You may be able to prune your calculations using BSPs or quadtrees to designate the units that are allowed to update because something interesting is going on next to them (i.e., the player is there); the rest aren't even queried until they are close enough to become actively involved, at which point they are activated. Your best bet is to reduce, as much as practical, any "search" or "scanning" algorithms in favor of direct calculations against an entity's local neighbors in the BSP or quadtree, checking whether an action should take place based on some threshold (e.g., I can see the player and I want to hurt him, so I attack; the player made a noise I could hear, so I go find him; I see that food and I'm hungry, so I eat it; etc.). At least that is my approach and what I've seen from a lot of AIs that try to implement more complex behavior.

Each AI can easily know what is near it locally, since that is quickly accessible in the BSP or quadtree, and it just does a test to see whether something is close enough to care about and responds with some sort of action. Of course, if you get 50 AIs in the same location this can still slow things down, so you'll want to spread them out smartly, but it is still better than testing 50 AIs across the whole level when you only need to test the 2 or 3 that are close enough to respond to the player because they're all in the same quadrant or subquadrant. Either way, quadtrees make finding which AIs should be active logarithmic (and constant afterwards, until the quadrant is left), and make action determination per AI constant in the best case (1 AI and 1 player) or linear in the average case (more than one entity in a quadrant, with memory of past actions). Even if you allow testing of everything in that quadrant, you're still dealing with a handful of things, so the number of tests is low and things move quickly.
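To make that concrete, here's a rough sketch of what I mean (the quadtree API and field names like `quadtree:query`, `sense_radius`, `attack_radius` are made up; adapt them to whatever structure you actually have): instead of scanning every entity in the level, you grab the handful of neighbors from your node and run cheap threshold tests against just those.

```lua
-- Sketch only: assumes a quadtree with query(x, y, radius) returning the few
-- entities in the same or neighboring cells. All names here are invented.
local function dist2(a, b)
  local dx, dy = a.x - b.x, a.y - b.y
  return dx * dx + dy * dy
end

local function update_ai(ai, quadtree)
  -- Only look at local neighbors instead of scanning the whole level.
  local neighbors = quadtree:query(ai.x, ai.y, ai.sense_radius)
  for _, other in ipairs(neighbors) do
    if other ~= ai then
      local d2 = dist2(ai, other)
      -- Direct threshold tests instead of a generic "scan and rate" pass.
      if other.is_player and d2 < ai.attack_radius * ai.attack_radius then
        ai:attack(other)
        return
      elseif other.is_food and ai.hunger > 0.7 and d2 < ai.reach * ai.reach then
        ai:eat(other)
        return
      end
    end
  end
end
```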
This also helps a bit with giving AIs more freedom of action separate from the player, since you can run one quadrant (or subquadrant) per cycle to further limit things, and then update "uninvolved" AIs less often when the player-vicinity AIs get actively involved with the player. That allows more autonomy with less overhead and lets your AIs do other interesting things, like stumble upon the player, stalk other AIs, look for junk, or respond to a level-wide alarm. For finding stuff your AI doesn't directly know about but may have a "general" idea of, depending on your implementation you may be able to "roll up" information: if an AI needs to find something, it knows that quadrant X contains a subquadrant with item Y, so it proceeds to quadrant X and looks further when it gets there, eventually reaching the location of item Y. You can also use this for AIs to talk to each other and share knowledge of interest, like "I saw a player in quadrant X" or "there was a dead body in quadrant X, watch out!"
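The "roll up" bit could look something like this (again just a sketch with invented field names): each node keeps a summary of what its subtree contains, so an AI can descend one level at a time toward an item of interest without knowing its exact position up front.

```lua
-- Sketch of "rolled up" knowledge: each quadtree node keeps a set of tags
-- describing what its subtree contains, so an AI can head toward an item of
-- interest one quadrant at a time. Field names are invented.
local function node_contains(node, tag)
  return node.summary and node.summary[tag]
end

-- Returns the child quadrant an AI should head toward to find `tag`,
-- or nil if nothing in this subtree has it.
local function next_quadrant_toward(node, tag)
  if not node_contains(node, tag) then return nil end
  if node.children then
    for _, child in ipairs(node.children) do
      if node_contains(child, tag) then return child end
    end
  end
  return node   -- leaf node: the item is somewhere in here, search locally
end
```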
Basically, the best thing I can suggest is: if the player isn't experiencing it, it is ultimately wasted cycles, so don't update AI that isn't involved with the player unless it is important to your game. Keep AI dormant or unchecked until it becomes a concern to the player whenever possible. You can cheat this in many ways: dormancy can be AIs sleeping, "chatting" with other AIs, "goofing off", being "bored", watching a campfire, "cleaning" something, whatever makes them look more interesting when the player stumbles upon them, but ultimately requires no cycles to process in any way until the player shows up, at which point you can kick off animations or whatever to give variety until the player takes action or the local AIs notice the player in some way and react.
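In code terms, the dormancy idea is basically a cheap state check before you ever touch the behavior tree. A minimal sketch, with invented names (`play_idle_animation`, `update_behavior_tree`, `not_involved_with_player`) standing in for whatever your engine provides:

```lua
-- Sketch of the dormancy idea (invented names): AIs far from the player sit
-- in a zero-cost "dormant" state and only get a real BT tick once the player
-- (or something the player caused) is close enough to matter.
local ACTIVATE_RADIUS2 = 40 * 40   -- example threshold, tune to taste

local function tick_ai(ai, player, dt)
  local dx, dy = ai.x - player.x, ai.y - player.y
  local near_player = dx * dx + dy * dy < ACTIVATE_RADIUS2

  if ai.dormant then
    if near_player then
      ai.dormant = false
      ai:play_idle_animation()   -- "campfire", "chatting", etc. for variety
    end
    return                       -- no BT tick, no sensory scan, no cost
  end

  ai:update_behavior_tree(dt)    -- full update only for active AIs
  if not near_player and ai:not_involved_with_player() then
    ai.dormant = true            -- drop back to dormant when uninvolved
  end
end
```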
Anyway, hope that helps.