Trying to learn simple A* pathfinding
I've been reading and re-reading the beginner tutorials on how to use A*. My goal is a very simple project (no special graphics or anything): I want a circle to travel from one end of the screen to the other, with those as the start and goal positions, using A* to find the path. How would I move this circle (sprite) to the goal? I'm getting kind of confused about how you would do this in a game. Remember, I'm really new to AI and just trying to get that basic shape to move around walls. How is movement of sprites done using A*? For example, do you set a mini-goal at the next node (working backwards), and when your sprite has reached it, change the mini-goal to the following node, and so on until you hit the goal, playing frames of walking animation between nodes?
My second question, which would be my second program: how would I do it if the walls can be made at runtime (before you figure out the path), for example as the user clicks on the map to build walls?
Any help or advice would be greatly appreciated.
As for advice to myself, I'm going to reread the beginner tutorial again, and maybe again after that.
thanks
The A* solution is a list of nodes (the list of mini-goals you should follow).
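In game terms, that means steering toward the front node each frame and popping it when the sprite gets close enough. Here is a minimal sketch of that follow-the-waypoints loop; the Vec2 type, the FollowPath name, and the per-frame speed are my own placeholders, not from any particular engine:

```cpp
#include <cmath>
#include <deque>

struct Vec2 { float x, y; };

// Move 'pos' toward the next waypoint at 'speed' units per frame.
// When a waypoint is reached it is popped and the next one becomes
// the current mini-goal. Returns true while there is still path left.
bool FollowPath(Vec2& pos, std::deque<Vec2>& path, float speed)
{
    if (path.empty()) return false;      // already at the final goal
    Vec2 target = path.front();
    float dx = target.x - pos.x;
    float dy = target.y - pos.y;
    float dist = std::sqrt(dx * dx + dy * dy);
    if (dist <= speed) {                 // close enough: snap and advance
        pos = target;
        path.pop_front();
        return !path.empty();
    }
    pos.x += speed * dx / dist;          // step toward the waypoint
    pos.y += speed * dy / dist;
    return true;
}
```

Between waypoints you are free to play whatever walking animation you like; the pathfinder only cares about which mini-goal is current.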
And for the second question: you need to recompute the A* solution after any modification to the map (or, as an optimization, you can check just the next position and only recompute the A* solution if it has become invalid).
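For the click-to-build-walls version, the flow is simply: update the grid, invalidate the cached path, and let the next update rerun A*. A rough sketch, where Grid and OnWallClicked are hypothetical names standing in for your own map and input-handling code:

```cpp
#include <vector>

// Hypothetical grid; substitute whatever map representation you use.
struct Grid {
    int w, h;
    std::vector<bool> wall;                  // true = blocked cell
    Grid(int w_, int h_) : w(w_), h(h_), wall(w_ * h_, false) {}
    void SetWall(int x, int y, bool b) { wall[y * w + x] = b; }
    bool IsWall(int x, int y) const    { return wall[y * w + x]; }
};

// Called when the user clicks a tile: modify the map, then force a
// replan by clearing the cached path. The next update recomputes it
// with A* against the new walls.
void OnWallClicked(Grid& grid, int x, int y, std::vector<int>& cachedPath)
{
    grid.SetWall(x, y, !grid.IsWall(x, y));  // toggle the wall
    cachedPath.clear();                      // old path may now be invalid
}
```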
I presume that in your initial example walls exist between the sides of the screen and that you need to find a path between the start and goal states that avoids these walls.
A* works by considering the set of states reachable from the start state by sequences of actions, and it searches for the lowest-cost sequence whose outcome state is the goal. These sequences are built one action at a time, and the sequence to extend next is the one with the lowest estimated cost of the optimal sequence, assuming the current sequence is a prefix of the optimal one. The estimated cost is taken as the cost of executing the prefix plus a heuristic guess at the cost of the remainder of the sequence. Because of the way the algorithm works, it guarantees that the first sequence returned that connects the start and goal states is also the lowest-cost one.
You can also view this planning in terms of the states achieved after the application of each action in the sequence to the state achieved by the previous action. These states are the 'nodes' typically described in a state space search. From each node there is a set of nodes that can be reached by performing one of the actions from the action set. If you consider all possible actions you could perform in each state, you can lay the nodes out in a tree structure, where the start node is the root of the tree. The branching factor of the tree is equal to the size of the action set.
All tree-based search algorithms seek to expand the tree until one of the leaf nodes (nodes at the leading edge of the tree) is the goal node. A* achieves this by expanding the node with the lowest estimated cost at each iteration.
So, how does this help you get your implementation going?
1) You need a data structure to represent your tree. You can do this by building the tree from connected nodes. Each node will need a way of identifying its parent node (the node you applied the action in to arrive at the current node), the action used to get to this node from its parent, a list/array to store the connections to the next nodes that can be achieved from this node and finally the cost to get from the parent to this node and the heuristic cost to get from this node to the goal. The estimated sequence cost can then easily be computed by adding up the costs of each node back to the start node and adding to this the current heuristic cost.
2) You need two lists of nodes (often called the OPEN and CLOSED lists). Onto the closed list you place nodes that have been expanded to produce children (i.e., nodes from which you have considered the application of an action to extend the sequence between the start node and this node). Any new leaf node created (one that doesn't yet have children) goes onto the open list.
3) You want to make sure that you only keep track of good solutions. So, if you generate a new leaf node, you need to make sure that you haven't already visited this node via another sequence (so check that it isn't in the closed or open lists already). If it is, you need to check which sequence of actions to reach this node has the lower cost, and keep the copy of the node corresponding to that lower cost, since it represents the better sequence to get to this node.
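Putting steps 1 to 3 together, here is one way a minimal grid-based A* can look in C++. This is my own compressed sketch, not Timkin's exact design: it folds the OPEN list into a priority queue keyed on estimated cost and the CLOSED list into a best-cost-so-far map, and assumes 4-directional movement with unit step costs:

```cpp
#include <cmath>
#include <cstdlib>
#include <functional>
#include <map>
#include <queue>
#include <utility>
#include <vector>

struct Cell { int x, y; };
bool operator<(const Cell& a, const Cell& b)
{ return a.y != b.y ? a.y < b.y : a.x < b.x; }
bool operator==(const Cell& a, const Cell& b)
{ return a.x == b.x && a.y == b.y; }

// Returns the path from start to goal (inclusive), or empty if unreachable.
// 'blocked' answers "is this cell a wall or off the map?".
template <class BlockedFn>
std::vector<Cell> AStar(Cell start, Cell goal, BlockedFn blocked)
{
    auto h = [&](Cell c) {                        // Manhattan heuristic
        return std::abs(c.x - goal.x) + std::abs(c.y - goal.y);
    };
    using Entry = std::pair<int, Cell>;           // (estimated cost, cell)
    std::priority_queue<Entry, std::vector<Entry>, std::greater<Entry>> open;
    std::map<Cell, int> gCost;                    // best known cost-so-far
    std::map<Cell, Cell> parent;                  // to rebuild the path
    gCost[start] = 0;
    open.push({h(start), start});
    const int dx[4] = {1, -1, 0, 0}, dy[4] = {0, 0, 1, -1};
    while (!open.empty()) {
        Cell cur = open.top().second;
        open.pop();
        if (cur == goal) {                        // walk parents back to start
            std::vector<Cell> path{cur};
            while (!(path.back() == start)) path.push_back(parent[path.back()]);
            return {path.rbegin(), path.rend()};
        }
        for (int i = 0; i < 4; ++i) {
            Cell next{cur.x + dx[i], cur.y + dy[i]};
            if (blocked(next)) continue;
            int g = gCost[cur] + 1;               // every step costs 1
            auto it = gCost.find(next);
            if (it == gCost.end() || g < it->second) {  // better route found
                gCost[next] = g;
                parent[next] = cur;
                open.push({g + h(next), next});
            }
        }
    }
    return {};                                    // no path exists
}
```

The `g < it->second` test is step 3 above: a node already seen is only re-opened if the new sequence to it is cheaper.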
You should be able to find free software online for A*, although it would be useful to your AI education for you to implement it yourself. If you haven't already done so, read the articles on pathfinding here. If you have specific questions about implementing A* or understanding the algorithm, please post them and I'm sure there are plenty of people here that would be happy to help you, myself included.
Edit: The solution to your second problem relies on the D* algorithm (Dynamic A*). Once you get your A* working well, then we can help you with D*. ;)
Cheers,
Timkin
Could A* be optimised by grouping tiles into non-overlapping sets?
The set property being that every tile in a set can see every other tile in the set without an obstacle in between.
The tile sets are then joined together in a transitive graph.
Then something like Dijkstra's could be used to quickly work out a rough route...
Or am I completely wrong?
Quote: Original post by ROBERTREAD1
Then something like Dijkstra's could be used to quickly work out a rough route...
Then it wouldn't be A*. ;) There are certainly very fast search methods for small state spaces with low connectivity that require much simpler algorithms than A*. The Distance Transform (flood fill) is one such example. The reason A* is so popular is that it is optimally efficient: it promises that (given certain assumptions about the heuristic function) it will expand fewer nodes while searching for the goal than any other tree/graph-based algorithm. What isn't expressed here, though, is the computational cost of achieving this. For small state spaces, a simpler algorithm that can expand nodes more cheaply than A* may actually run faster, because it does less work per node even though it expands more nodes.
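For reference, that Distance Transform can be computed with a breadth-first flood fill outward from the goal; every cell then stores its step count to the goal, and an agent simply walks downhill. This is my own sketch of the idea (4-connected grid, unit step costs, row-major layout assumed):

```cpp
#include <queue>
#include <vector>

// Breadth-first flood fill: dist[cell] = number of steps to the goal,
// or -1 for unreachable or blocked cells.
std::vector<int> DistanceTransform(int w, int h,
                                   const std::vector<bool>& wall,
                                   int goalX, int goalY)
{
    std::vector<int> dist(w * h, -1);
    std::queue<int> frontier;
    dist[goalY * w + goalX] = 0;
    frontier.push(goalY * w + goalX);
    const int dx[4] = {1, -1, 0, 0}, dy[4] = {0, 0, 1, -1};
    while (!frontier.empty()) {
        int cur = frontier.front(); frontier.pop();
        int cx = cur % w, cy = cur / w;
        for (int i = 0; i < 4; ++i) {
            int nx = cx + dx[i], ny = cy + dy[i];
            if (nx < 0 || nx >= w || ny < 0 || ny >= h) continue;
            int n = ny * w + nx;
            if (wall[n] || dist[n] != -1) continue;  // blocked or visited
            dist[n] = dist[cur] + 1;
            frontier.push(n);
        }
    }
    return dist;
}
```

To move, an agent steps each frame to whichever neighbour has the smallest distance value. Since one fill serves every agent on the map, this can beat per-agent A* searches on small maps with many movers.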
Cheers,
Timkin
Timkin,
I was thinking that the node map could be used to "guide" the A* heuristic.
Could the heuristic return a value based on the cell's distance from the best line?
[Edited by - ROBERTREAD1 on November 23, 2005 8:57:58 AM]
Thank you for all your help, and thanks Timkin for explaining the implementation part as well. I guess where I'm stuck is trying to do this myself. Like you said, Timkin, I should create and code it myself, which is what I want to do. I've also set a reasonable goal: I'm not going to start by trying to make an RTS or something like that, just a simple "move the guy through the rooms until he gets to his spot".
With that said, I have some more questions. What do you use to make the nodes: vectors? lists? And do you allocate the memory statically or dynamically? After rereading it several times I think I pretty much get the theory behind it; it's actually creating the classes and such that has me stuck.
November 23, 2005 11:56 PM
EDIT: Yeah, this was a post by me, Timkin... with the site change, my auto-login had reset and I hadn't noticed I was posting as an AP. 8(
Quote: Original post by ROBERTREAD1
Could the heuristic return a value based on the cell's distance from the best line?
The heuristic must be an estimate of the cost of the remainder of the sequence needed to get from the node to the goal. We don't yet know what this sequence is, or what the 'optimal line' to follow would be (because we haven't considered the state space between the node and the goal yet).
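For example, on a 4-connected grid with unit step costs, the standard admissible choice is Manhattan distance: it never overestimates the remaining step count, which is the property A*'s optimality guarantee relies on.

```cpp
#include <cstdlib>

// Manhattan distance: admissible for 4-connected grids with unit step
// cost, because any real path needs at least this many steps.
int Heuristic(int x, int y, int goalX, int goalY)
{
    return std::abs(x - goalX) + std::abs(y - goalY);
}
```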
Quote: Original post by wizardpc
With that said, I have some more questions. What do you use to make the nodes?
Typically something like (this is quick and dirty... but you should get the point of it)
class Node
{
public:
    Node* parent;                  // pointer to parent node
    std::vector< Node* > subnode;  // array of pointers to next nodes
    CoordType s;                   // coordinate in state space of current node
    ActionType a;                  // action used to get to this node from parent
    double actionCost;             // cost of getting from parent to this node
    double heuristicCost;          // heuristic cost of getting to goal from here
    double estimatedCost;          // sum of all actionCosts back to start
                                   // + heuristic cost

    Node(Node* par, CoordType& state, ActionType& action, double c, double h)
        : parent(par), s(state), a(action), actionCost(c), heuristicCost(h) {}
};
Then, expanding a node would mean making a new node for each possible action that can be executed and filling in the outcome state of that node given the state the action was applied in. So, let's assume that my OPEN list is ordered with the lowest cost Node at the top...
while (OPEN.size() > 0)
{
    Node* bestNode = OPEN.pop();
    for (int i = 0; i < numActions; i++)
    {
        Node* child = new Node(bestNode, outcome_state, action,
                               action_cost, heuristic_cost);
        child->estimatedCost = ComputeCost(child) + heuristic_cost;
        bestNode->subnode.push_back(child);
        // now test child to see if it's in the CLOSED or OPEN lists already.
        // If not, insert it into the OPEN list in the correct position (the
        // OPEN list is ordered from lowest to highest estimatedCost).
    }
}

double ComputeCost(Node* n)
{
    double sum = 0;
    if (n->parent == NULL) return sum;  // the start node costs nothing
    while (n->parent != NULL)
    {
        sum += n->actionCost;
        n = n->parent;
    }
    return sum;
}
Quote: And do you allocate the memory statically or dynamically?
Dynamically, since you generally won't know how many nodes you'll need at compile time.
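One way to keep that dynamic allocation tidy (a pattern of my own, not anything from the posts above) is to let a single pool own every node allocated during a search, so they are all freed together when the search finishes:

```cpp
#include <memory>
#include <vector>

struct SearchNode {
    SearchNode* parent = nullptr;  // non-owning back-pointer
    double estimatedCost = 0.0;
};

// The pool owns all nodes created during one search; they are freed
// automatically when the pool goes out of scope.
class NodePool {
public:
    SearchNode* NewNode(SearchNode* parent, double cost) {
        pool.push_back(std::make_unique<SearchNode>());
        pool.back()->parent = parent;
        pool.back()->estimatedCost = cost;
        return pool.back().get();
    }
private:
    std::vector<std::unique_ptr<SearchNode>> pool;
};
```

This avoids having to walk the tree calling delete on every node by hand when the search is done.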
Cheers,
Timkin
[Edited by - Timkin on November 24, 2005 6:56:51 PM]
Thanks for all the help, guys, and if that AP was Timkin, then thanks again. I got a neat program up and running with your help. It starts with a grid of grass tiles; right-click lets you make walls (brown tiles), and left-click outlines a tile as the destination. Then, when I hold the spacebar down, my yellow box actually moves using the A* pathfinding technique. I am completely thrilled. It feels good even though it's just a crummy-looking demo. Now all I have to do is modify the code a bit to make it more reusable, and I'm off to start a simple game with it.