Flanking and avoiding threats
Did a quick search for this and didn't see much, so I figured I'd ask.
I'm currently working on a 2D, side scrolling shooter. Players in the world can run, jump, and shoot their way around the environment. It's always a single human player versus a group of AIs.
I recently managed to make the AIs very effective against aggressive players by having them lay down suppressing fire on the player and then keeping up the pressure if the player retreated. However, they are still vulnerable to cowardly/camping techniques from the player... they'll try to rush into a position to hit the player, only to run straight into a hail of weapons fire. Since they currently respawn randomly, they'll eventually flank the player and push him out by accident, but I want them to be able to avoid walking through a chokepoint where death is being delivered on a constant basis... preferably by taking another, less dangerous route.
I already have an A* or A*-like (I've yet to figure out which) implementation that allows the AIs to find their way around the level. What, then, is the BEST way to have them select a path around the player on an AI-by-AI basis? Do you just add extra movement cost for travelling to a node the player is watching, or is there a better way to go about it?
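Just to make the question concrete, here's roughly what I'm picturing: a plain A* where entering a node costs its base travel cost plus a threat penalty. The graph format, heuristic, and threat_cost function are all placeholders for whatever the engine actually provides.

```python
import heapq
import itertools

def find_path(graph, start, goal, heuristic, threat_cost):
    """graph: {node: [(neighbor, base_cost), ...]};
    threat_cost(node) is the extra penalty for entering a dangerous node."""
    tie = itertools.count()  # tie-breaker so the heap never compares nodes directly
    open_set = [(heuristic(start, goal), next(tie), start)]
    came_from, best_g = {start: None}, {start: 0.0}
    while open_set:
        _, _, node = heapq.heappop(open_set)
        if node == goal:  # walk the parent links back to the start
            path = [node]
            while came_from[path[-1]] is not None:
                path.append(came_from[path[-1]])
            return path[::-1]
        for neighbor, base in graph.get(node, ()):
            g = best_g[node] + base + threat_cost(neighbor)  # danger inflates the edge
            if g < best_g.get(neighbor, float("inf")):
                best_g[neighbor] = g
                came_from[neighbor] = node
                heapq.heappush(open_set, (g + heuristic(neighbor, goal), next(tie), neighbor))
    return None  # no route at all, dangerous or otherwise
```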
That might be a simple way to implement it... just increase the cost of the node that the AI dies in. It would be more effective on a sparse node graph than a dense one. Also, if your maps run for long periods of time, serious irregularities might occur. There are potential pitfalls there, but it also might work as a quick and dirty solution.
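As a rough sketch of what I mean, with a decay term to head off those long-run irregularities (the penalty and half-life numbers are pulled out of thin air):

```python
class DeathCostMap:
    """Extra movement cost where AIs have recently died; old deaths fade out."""
    def __init__(self, penalty=50.0, half_life=30.0):
        self.penalty = penalty        # cost added per death
        self.half_life = half_life    # seconds until a death counts half
        self.costs = {}               # node -> accumulated extra cost

    def record_death(self, node):
        self.costs[node] = self.costs.get(node, 0.0) + self.penalty

    def update(self, dt):
        decay = 0.5 ** (dt / self.half_life)
        # drop entries once they've decayed to almost nothing
        self.costs = {n: c * decay for n, c in self.costs.items() if c * decay > 0.1}

    def cost(self, node):
        return self.costs.get(node, 0.0)
```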
Those were my initial thoughts... the potential pitfalls you bring up can be avoided, though.
My AIs take in one kind of information: the AIInfo class, which has properties like position, how high a priority it is to target, whether the AIs should keep their distance, etc. All objects in the world, from boxes to players to projectiles, are described this way. The way the AIs handle this information, there are both reaction times and a very short memory.
Perhaps the AIs could place little AIInfo markers for themselves that do nothing but add movement costs to the node they're closest to... these would be placed wherever weapons fire passes.
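Roughly like this, say... the marker strength and lifetime are guesses, and all the names are invented:

```python
import time

class ThreatMarker:
    """A short-lived AIInfo-style note: 'fire passed near this node'."""
    def __init__(self, node, strength=25.0, lifetime=3.0):
        self.node, self.strength = node, strength
        self.expires = time.time() + lifetime

class ThreatMarkerField:
    def __init__(self):
        self.markers = []

    def fire_passed(self, nearest_node):
        self.markers.append(ThreatMarker(nearest_node))

    def cost(self, node):
        now = time.time()
        self.markers = [m for m in self.markers if m.expires > now]  # short memory
        return sum(m.strength for m in self.markers if m.node == node)
```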
Is there a more elegant solution, though?
Yeah, the only way I can figure a solution is the node cost. Implementing a version of influence mapping should at least get whatever sprite is moving into an attack to move either with caution (a sort of displacement algorithm for when the player begins to move out of his camped position) or along the most strategic path.
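A rough illustration of the influence-mapping idea, assuming a simple adjacency-list graph; the strength and falloff numbers are just guesses. The result can be folded straight into your node costs:

```python
from collections import deque

def influence_map(graph, source, strength=100.0, falloff=0.5, cutoff=1.0):
    """graph: {node: [neighbor, ...]}; returns {node: influence from source}."""
    influence = {source: strength}
    frontier = deque([source])
    while frontier:
        node = frontier.popleft()
        spread = influence[node] * falloff  # influence weakens with each hop
        if spread < cutoff:
            continue
        for neighbor in graph.get(node, ()):
            if spread > influence.get(neighbor, 0.0):
                influence[neighbor] = spread
                frontier.append(neighbor)
    return influence
```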
Luck to ya.
Thanks for the tactical pathfinding link. I swear I've seen that somewhere before...
Alright, then. I've got an idea. Since my implementation doesn't have portals/cells and nodes aren't placed on a per-tile basis, I can't use exactly the same technique demonstrated there. However, instead of precomputing sectors of threat from a specific cell, I could precompute all the nodes that a threat at a specific node has line of sight to within a reasonable distance. Add a threat to the equation, and all the nodes linked from its node will be weighted appropriately.
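In sketch form, the precomputation might look like this, where has_line_of_sight and distance stand in for whatever checks the engine exposes:

```python
def precompute_visibility(nodes, has_line_of_sight, distance, max_range=300.0):
    """For each node, the set of nodes a threat standing there could see."""
    visible = {n: set() for n in nodes}
    for a in nodes:                       # O(n^2), but it only runs offline
        for b in nodes:
            if a is not b and distance(a, b) <= max_range and has_line_of_sight(a, b):
                visible[a].add(b)
    return visible
```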
A second possible implementation, one that takes into account both aim location AND threat location: every perceived threat weights both the node it is on (or, for projectiles, originated from) and the node it is aiming at or heading towards. This might have the effect of forcing a flanking attack more often.
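And the second variant might build on that visibility table like so; the origin/aim weights are pure guesswork:

```python
def threat_costs(threats, visible, origin_weight=40.0, aim_weight=60.0):
    """threats: iterable of (origin_node, aim_node) pairs; returns {node: extra cost}."""
    costs = {}
    for origin, aim in threats:
        for node, weight in ((origin, origin_weight), (aim, aim_weight)):
            costs[node] = costs.get(node, 0.0) + weight
            for seen in visible.get(node, ()):  # splash onto everything it can see
                costs[seen] = costs.get(seen, 0.0) + weight * 0.5
    return costs
```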
Thoughts?
There is another way you can look at this problem that should help you to better understand the fundamental planning problem you are trying to solve.
Consider the set of possible paths from a start node to a goal node. In a combat situation, the probability of reaching the goal will depend on the exposure to enemy fire that the path provides and the probability of being killed/incapacitated while exposed. If you don't like reading about probabilities, just think in plain language terms like likelihood or chance.
Now you can consider movement in terms of the probability of successfully executing the whole movement plan and achieving the goal. You could simply choose the path with the highest probability of success (presuming you could compute this probability). However, a far better approach is to use Decision Theoretic Planning. Introduce a utility function that is inversely proportional to the movement cost of the path. Thus, paths with cheaper cost have higher utility. DTP now says that the best path to take is the one that maximises the expected utility, where the expected utility is the product of the probability of completing the path (to get the reward) and the utility of that path. This principle of maximum expected utility is a model of rational action.
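As a bare-bones illustration of the maximum-expected-utility rule (the per-node survival probabilities and the cost function are inputs you'd have to estimate from your threat model, and I'm assuming exposures at each node are independent):

```python
def best_path(candidate_paths, path_cost, survival_prob):
    """candidate_paths: list of node lists; survival_prob(node) in (0, 1]."""
    def expected_utility(path):
        p_success = 1.0
        for node in path:
            p_success *= survival_prob(node)  # independent exposure at each node
        utility = 1.0 / path_cost(path)       # cheaper path => higher utility
        return p_success * utility            # maximum expected utility criterion
    return max(candidate_paths, key=expected_utility)
```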
You can add complexity to this approach by considering Risk Strategies that your agents might employ and designing agents that are risk-seeking, risk-neutral or risk-averse. It's fairly easy to build this into the aforementioned DTP, although I won't elaborate here unless you're particularly interested in this approach.
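For instance, one simple way to encode a risk attitude is to warp the success probability before taking the expectation; this particular shaping is just an illustration, not the only option:

```python
def risk_adjusted(p_success, gamma=1.0):
    """gamma < 1: risk-seeking, gamma = 1: risk-neutral, gamma > 1: risk-averse."""
    return p_success ** gamma

# A cowardly agent might rank paths by risk_adjusted(p, gamma=3.0) * utility.
```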
If you want some more information on this kind of planning, let me know and I'll send you one of my papers. It's in a different pathfinding domain, but the essentials of how to tackle the problem are the same. Additionally, the key aspect of the paper is a decision theoretic replanning methodology that would allow you to trigger replanning based on the player's actions that alter the probability of achieving the goal.
The paper doesn't go into the detail you'd need for an implementation, but it does discuss the problem and its solution in more detail than I have done here.
Cheers,
Timkin
I'd love to read your paper about this. I'm not really working on anything like this right now, but I'm always curious to learn about new techniques. Do you have it published anywhere?
Am I understanding this correctly? Basically, you're saying that every part of the path (in my case, every node in my navigation net) should have both a utility value (distance and/or ease of reaching the node) and threat values, and then you let the AI select whether its priority is utility or safety?
Taking it further, giving a node a stealth value would give you another option: a sneaky AI?
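Something like this is what I have in mind for those per-AI priorities... all the names and weights are hypothetical:

```python
class AgentProfile:
    """Per-AI weights for blending path length, threat, and exposure."""
    def __init__(self, distance_weight=1.0, threat_weight=1.0, exposure_weight=0.0):
        self.distance_weight = distance_weight
        self.threat_weight = threat_weight
        self.exposure_weight = exposure_weight

    def node_cost(self, base_cost, threat, exposure):
        return (self.distance_weight * base_cost
                + self.threat_weight * threat
                + self.exposure_weight * exposure)

sneaky = AgentProfile(threat_weight=2.0, exposure_weight=3.0)   # hugs cover
reckless = AgentProfile(threat_weight=0.2)                      # charges in
```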
Am I to understand that this is merely an extension of other pathfinding techniques that uses varying priorities on the part of the AI?
Also, why would you add an inverse value of utility instead of merely finding the path with the least movement cost, as it's functionally the same? Is it so that it can be more easily related to chance of success? If that's the case, why not measure success as a chance to FAIL instead of switching around movement cost and utility?
You've caught my interest, Timkin! Where can I find your paper?