
Pseudo A* Help

Started by Gallivan August 17, 2007 09:44 PM
5 comments, last by Vorpy 17 years, 3 months ago
Here is my code: http://rafb.net/p/qhxzlt41.nln.html

I'm not asking you to fix it, because it is rather broken and error-prone. What I would like to ask is if you could look over its basic movement and tell me whether I am headed in the right direction. It's written in Python, and the reason I post this is because I have seen another Python A* example which is MUCH more complex than this, and it literally lost me in the first few seconds of looking it over.

While writing this pseudo-dribble I was using this as a guideline: http://www.gamedev.net/reference/articles/article2003.asp

It's for a basic 2D strategy game which stores coordinates as 21|32. If you need specifics, I am worried about my choice of dicts and lists for holding these variables, and about how I go about checking them.

[Edited by - Gallivan on August 18, 2007 10:50:04 AM]
You posted the same link twice.

And storing coordinates as "x|y" makes absolutely no sense in Python. It looks like the sort of trick used in a language where the only variable type is a string. In Python you can just use tuples, lists, dicts, or objects. Any of them would make more sense than storing values in a delimited string.
Sorry, I edited my above post's links.

I was worried that I would have to change how I store my terrain data, but if it's like you say, it would most likely make my pathfinding a lot easier?
It just doesn't make sense to store data as "x|y" in python. The tuple (x,y) is almost exactly the same, except that x and y are already integers instead of strings and you don't need to parse it to access the different elements. It avoids the conversion between ints and strings.

coords = (21, 32)          # the tile at x=21, y=32
x, y = coords              # unpack the tuple back into x and y
right = (x + 1, y)         # neighbouring tiles are just new tuples
topleft = (x - 1, y - 1)

Tuples and strings are both immutable, so as long as a tuple only contains immutable elements it can be used as a dictionary key.

If you have a function like getInfo(x,y), you could use it with a tuple by saying getInfo(*myTuple). The * indicates that the tuple should be unpacked into individual arguments.
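For example (terrain and getInfo here are just made-up names to show the syntax):

terrain = {(21, 32): "grass", (22, 32): "mountain"}

def getInfo(x, y):
    # stand-in for whatever terrain lookup you already have
    return terrain[(x, y)]

myTuple = (21, 32)
print(getInfo(*myTuple))   # unpacked into getInfo(21, 32), prints "grass"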

I'd recommend starting with simpler algorithms like iterative deepening depth first search and iterative deepening A*. Also look into the Fringe Search algorithm. It can function in a way similar to A* but is easier to implement efficiently.
Thanks Vorpy, I've changed my coordinate system so they are stored in tuples rather than strings now. I will be looking up the two methods you suggested, and as far as I'm concerned this topic is closed. :)
Sorry for bumping this but...

As I have been coding a depth-first (I think that's what it was called) algorithm, I have run into a problem with determining which child node (the surrounding 4 nodes of the current node) to choose.

My heuristics are formed by taking the x and y difference (target x minus the child's x coordinate, etc. for y), as well as the square's initial movement cost (grass is 10, mountains 20, etc.). The problem is that two squares in completely opposite directions can return the same number (15), despite only one of them making any logical sense.

Is there a better way to determine heuristics, or have I simply fouled up this Frankensteined "Manhattan method"?

EDIT: A second Google search turned up some interesting results that will work for me (Diagonal Shortcut?). I think I'll be alright.

[Edited by - Gallivan on August 24, 2007 7:23:56 PM]
The Manhattan distance heuristic should use the absolute values of the x difference and the y difference.
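Something like this, where node and target are (x, y) tuples and 10 is just an assumed base movement cost:

def manhattan(node, target):
    # absolute differences, so it doesn't matter which side of the target you are on
    dx = abs(target[0] - node[0])
    dy = abs(target[1] - node[1])
    return 10 * (dx + dy)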

A depth first search is a much simpler algorithm than A*: it doesn't use a heuristic at all, and it doesn't find an optimal path. It just recursively searches the neighbors of a node (and it's probably a good idea to mark nodes as visited so you don't search the same node twice).
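A rough sketch of that, where neighbors(node) is a placeholder for however you enumerate the four adjacent tiles:

def depth_first(node, goal, neighbors, visited=None):
    # returns a list of nodes from node to goal, or None if no path was found
    if visited is None:
        visited = set()
    visited.add(node)
    if node == goal:
        return [node]
    for n in neighbors(node):
        if n not in visited:
            path = depth_first(n, goal, neighbors, visited)
            if path is not None:
                return [node] + path
    return None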

A depth-limited depth first search is the same, except it only recurses to a limited depth. Even in a finite graph it won't find a path if every path to the goal is longer than the depth limit.
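For example (this version only avoids nodes already on the current path instead of keeping a global visited set, so the iterative deepening loop below still finds a shortest path):

def depth_limited(node, goal, neighbors, limit, path=None):
    # path holds the nodes along the current branch, which doubles as the result
    path = (path or []) + [node]
    if node == goal:
        return path
    if limit == 0:
        return None
    for n in neighbors(node):
        if n not in path:
            result = depth_limited(n, goal, neighbors, limit - 1, path)
            if result is not None:
                return result
    return None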

Iterative deepening just runs depth-limited searches of increasing depth until a path is found. It will find the path with the fewest edges, if the depth is only incremented one unit at a time.
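That makes the iterative deepening wrapper very short (max_depth is just an arbitrary cutoff so it can't loop forever on an unreachable goal):

def iterative_deepening(start, goal, neighbors, max_depth=200):
    for limit in range(max_depth + 1):
        path = depth_limited(start, goal, neighbors, limit)
        if path is not None:
            return path
    return None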

The previous three algorithms assume that all edge weights are the same.

Iterative deepening A* is like iterative deepening depth first search, except that instead of a limited recursion depth, you explore the tree up to a limited f value. This can be used to find optimal or near-optimal paths in a graph with weighted edges.
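A sketch of that, assuming cost(a, b) returns the weight of the edge (e.g. the terrain cost of b) and h(n) is an admissible heuristic like the Manhattan distance above:

def ida_star(start, goal, neighbors, cost, h):
    def search(path, g, threshold):
        node = path[-1]
        f = g + h(node)
        if f > threshold:
            return f, None             # report the smallest f that went over the limit
        if node == goal:
            return f, path
        lowest = float("inf")
        for n in neighbors(node):
            if n not in path:
                t, found = search(path + [n], g + cost(node, n), threshold)
                if found is not None:
                    return t, found
                lowest = min(lowest, t)
        return lowest, None

    threshold = h(start)
    while True:
        t, found = search([start], 0, threshold)
        if found is not None:
            return found
        if t == float("inf"):
            return None                # the goal is unreachable
        threshold = t                  # raise the f limit and start over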

All of these algorithms are simpler than A* because they don't need to maintain an open list.

Fringe search is somewhere between iterative deepening and A*. It is similar to A*, except that instead of maintaining an open list you maintain two lists called now and later. Each iteration you expand all the nodes on the now list. The new nodes that they connect to are placed either on the now list or on the later list, depending on their f value. The threshold for the f value is like the limited f value in iterative deepening. Newly discovered nodes that are under the f threshold will also be expanded in the same iteration. The now list is unordered, and you can add nodes to it anywhere you want. After the now list is empty, swap the now and later lists, raise the f threshold, and run the next iteration.

How much to raise the f threshold by is a performance-versus-optimality tradeoff. If you raise it to the lowest f value on the later list, you will find an optimal path. If you raise it higher, each iteration will explore further, reducing the total work, but the path found can be slightly worse than optimal. On a uniform grid this isn't a problem, since all or almost all edges have the same weight.
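Here is a rough, simplified sketch of that idea (not the exact algorithm from the Fringe Search paper); cost(a, b) and h(n) are the same placeholders as above, and the threshold is raised minimally, so it should return an optimal path:

def fringe_search(start, goal, neighbors, cost, h):
    g = {start: 0}                     # best known cost from the start
    parent = {start: None}             # for rebuilding the path at the end
    threshold = h(start)
    now, later = [start], []
    while now or later:
        while now:
            node = now.pop()
            f = g[node] + h(node)
            if f > threshold:
                later.append(node)     # over the limit, handle it in a later iteration
                continue
            if node == goal:
                path = []
                while node is not None:
                    path.append(node)  # walk the parent links back to the start
                    node = parent[node]
                return path[::-1]
            for n in neighbors(node):
                new_g = g[node] + cost(node, n)
                if n not in g or new_g < g[n]:
                    g[n] = new_g
                    parent[n] = node
                    now.append(n)
        if not later:
            return None
        # raising to the minimum deferred f keeps the result optimal;
        # raising it more trades optimality for fewer iterations
        threshold = min(g[n] + h(n) for n in later)
        now, later = later, []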

This topic is closed to new replies.
