
RTS AI that does not slow the game down to unplayable points?

Started May 22, 2022, 07:04 AM
24 comments, last by Kylotan 1 year, 9 months ago

I wrote some basic A* code without any effort to optimize performance. It should be fairly easy to read.

In the worst case (when the goal is surrounded by obstacles and there is no path, so the algorithm has to explore the entire map to conclude that) it takes about 6-9 milliseconds on my [fast] computer for a 256x256 map. Easy cases take about 0.6 milliseconds. This is quite a bit worse than I was hoping for; on an old computer it would be intolerable. If I have a bit of time I might try to optimize the code, for my own entertainment. I can think of at least one large performance improvement, but it's not straightforward to implement.

#include <iostream>
#include <vector>
#include <queue>
#include <algorithm>
#include <chrono>

struct Map {
  int height, width;
  std::vector<int> data;

  Map(int height, int width, int initial_value) : height(height), width(width), data(height * width, initial_value) {
  }

  Map(int height, int width) : height(height), width(width), data(height * width) {
  }

  int &operator[](int x) {
    return data[x];
  }

  int operator[] (int x) const {
    return data[x];
  }
  
  int &operator()(int x, int y) {
    return data[y * width + x];
  }
  
  int operator() (int x, int y) const {
    return data[y * width + x];
  }
};

// Octile distance: straight steps cost 10 and diagonal steps cost 14, so the
// distance is 10 * (max - min) + 14 * min = 10 * max + 4 * min.
int heuristic_cost(int a_x, int a_y, int b_x, int b_y) {
  int delta_x = std::abs(a_x - b_x);
  int delta_y = std::abs(a_y - b_y);

  int minimum = std::min(delta_x, delta_y);
  int maximum = std::max(delta_x, delta_y);

  return 10 * maximum + 4 * minimum;
}

std::vector<int> reconstructed_path(Map const &came_from, int node) {
  std::vector<int> result;
  
  while (1) {
    result.push_back(node);
    int previous = came_from[node];
    if (previous == node)
      break;
    node = previous;
  }

  std::reverse(result.begin(), result.end());
  return result;
}

std::vector<int> a_star(Map const &obstacles, int start_x, int start_y, int goal_x, int goal_y) {
  int height = obstacles.height;
  int width = obstacles.width;
  int start = width * start_y + start_x;
  int goal = width * goal_y + goal_x;
  
  // Initialize distance to every node to infinity except for the start
  Map distance_to_start(height, width, 1000000000);
  distance_to_start[start] = 0;
  
  Map came_from(height, width);
  came_from[start] = start;

  // In the open list, we use the lowest 32 bits for the position and the highest 32 bits for the cost, so the order works out
  std::priority_queue<long long, std::vector<long long>, std::greater<long long>> open_list;
  
  // Initialize open list to contain only the starting node
  open_list.push(start);
  
  while (!open_list.empty()) {
    int node = open_list.top() & 0x7fffffffll;
    int node_x = node % width;
    int node_y = node / width;
    open_list.pop();
    if (distance_to_start[node] < 0) // Ignore closed nodes
      continue;
    if (node == goal)
      return reconstructed_path(came_from, goal);

    // Loop over the neighbors
    for (int y_neighbor = std::max(0, node_y - 1), y_neighbor_last = std::min(height - 1, node_y + 1); y_neighbor <= y_neighbor_last; ++y_neighbor) {
      for (int x_neighbor = std::max(0, node_x - 1), x_neighbor_last = std::min(width - 1, node_x + 1); x_neighbor <= x_neighbor_last; ++x_neighbor) {
        if (y_neighbor == node_y && x_neighbor == node_x) // Skip the node itself
          continue;
        // Skip blocked cells, and don't cut corners diagonally around an obstacle
        if (obstacles(x_neighbor, y_neighbor) != 0 || obstacles(node_x, y_neighbor) != 0 || obstacles(x_neighbor, node_y) != 0)
          continue;
        int next = width * y_neighbor + x_neighbor;
        int cost = heuristic_cost(node_x, node_y, x_neighbor, y_neighbor); // for adjacent cells this equals the true step cost (10 straight, 14 diagonal)
        if (distance_to_start[next] > distance_to_start[node] + cost) {
          distance_to_start[next] = distance_to_start[node] + cost;
          came_from[next] = node;
          open_list.push((distance_to_start[next] + heuristic_cost(x_neighbor, y_neighbor, goal_x, goal_y)) * 0x100000000ll + next);
        }
      }
    }
    distance_to_start[node] = -1; // Mark node as closed
  }
  
  return std::vector<int>(); // Failed to find path
}

int main() {
  Map obstacles(256, 256);
  obstacles(3, 0) = 1;
  obstacles(3, 1) = 1;
  obstacles(3, 2) = 1;
  obstacles(3, 3) = 1;
  obstacles(3, 4) = 1;
  obstacles(3, 5) = 1;
  obstacles(7, 5) = 1;
  obstacles(7, 6) = 1;
  obstacles(7, 7) = 1;
  
  auto begin_time = std::chrono::system_clock::now();
  std::vector<int> path = a_star(obstacles, 0, 0, 10, 5);
  auto end_time = std::chrono::system_clock::now();
  
  for (int x : path)
    std::cout << '(' << (x%256) << ',' << x/256 << ')' << '\n';

  std::cout << "Computation took " << std::chrono::duration_cast<std::chrono::microseconds>(end_time-begin_time).count() << " microseconds" << std::endl;
}

alvaro said:
I wrote some basic A* code without any effort to optimize performance. It should be fairly easy to read.

Thanks, I've included it in my test stuff and visualized your result.

I will now try to add A* as another option besides Dijkstra to the pathfinder I use for geometry processing.
The question that bothers me is whether it still works. I guess it somehow does, but it can fail to find the shortest path.
This is because a heuristic based on Euclidean distance cannot approximate the geodesic distance on the surface of a non-planar mesh.
I wonder why this is never mentioned, since any navmesh pathfinding using A* should be affected.

Will report what i find…


Here is the result from ear to ear of a bunny.
Red is Dijkstra, green is A*. The darker nodes are the visited ones, so we can see A* still does less work, though maybe more than expected.
The lighter shades show the found path. It mostly agrees, but there is a subtle difference on the left ear.
Pretty good.

But I'm not sure I implemented A* correctly.
Unlike Alvaro's code, I calculate the heuristic only once for the current node, not for the chosen neighbor, which is suspicious. But I don't understand either code well enough and remain uncertain about the details.
Still, posting both my versions in case somebody has some insight.

Edit: Summing up the path lengths confirmed a bug, so the following code isn't right.

		int Next () // Dijkstra
		{
			int c = -1;
			for (;;)
			{
				if (queue.empty()) return -1;
				c = queue.top().second;
				queue.pop();
				if (mExtracted[c]) continue; // skip duplicate queue entries for already-extracted nodes (speed-up)
				mExtracted[c] = true;
				break;
			}
			
			int begin = (*mPrimAdjList)[c];
			int end = (*mPrimAdjList)[c+1];
			for (int i=begin; i<end; i++)
			{
				int n = (*mPrimAdjacency)[i][1];
				int e = (*mPrimAdjacency)[i][0];
				float w = (*mEdgeWeights)[e];

				if (!mExtracted[n] && mPathMinDistance[n] > mPathMinDistance[c] + w)
				{
					mPathMinDistance[n] = mPathMinDistance[c] + w;
					queue.push(std::make_pair(-mPathMinDistance[n], n));
					mPathPreviousEdge[n] = e;
				}
			}
			return c;		
		}

		int Next (std::vector<vec> &nodePositions, vec goal) // AStar
		{
			int c = -1;
			for (;;)
			{
				if (queue.empty()) return -1;
				c = queue.top().second;
				queue.pop();
				if (mExtracted[c]) continue; // skip duplicate queue entries for already-extracted nodes (speed-up)
				mExtracted[c] = true;
				break;
			}
			
			int begin = (*mPrimAdjList)[c];
			int end = (*mPrimAdjList)[c+1];
			for (int i=begin; i<end; i++)
			{
				int n = (*mPrimAdjacency)[i][1];
				int e = (*mPrimAdjacency)[i][0];
				float w = (*mEdgeWeights)[e];
				w += vec(nodePositions[n] - goal).Length(); // heuristic added into the edge weight and stored in mPathMinDistance below (the suspect part)

				if (!mExtracted[n] && mPathMinDistance[n] > mPathMinDistance[c] + w)
				{
					mPathMinDistance[n] = mPathMinDistance[c] + w;
					queue.push(std::make_pair(-mPathMinDistance[n], n));
					mPathPreviousEdge[n] = e;
				}
			}
			return c;		
		}

JoeJ said:
The question that bothers me is whether it still works. I guess it somehow does, but it can fail to find the shortest path. This is because a heuristic based on Euclidean distance cannot approximate the geodesic distance on the surface of a non-planar mesh.

The heuristic is just a lower bound for the distance to the goal; it doesn't necessarily have to be Euclidean distance and does not need to be accurate. Manhattan distance can be used, for example. The main requirement is that it is additive (“distance” from A to C must be ≤ A to B + B to C). The heuristic is used to prioritize which neighbors of expanded nodes to visit next, but doesn't affect the path that is found (if implemented correctly). A* is just Dijkstra's algorithm but with the heuristic to reduce the number of nodes that are visited. Both algorithms find the shortest path through the navigation graph, so the ability of the path to adapt to curved geometry depends on the spatial resolution of the graph.
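
To make the lower-bound point concrete for the mesh case, here is a minimal sketch (the Vec3 type and function names are placeholders, not taken from either code base in this thread): the straight-line distance between two vertices can never exceed the geodesic distance along the surface, so it is a valid lower bound, and a zero heuristic is the degenerate case that turns A* back into Dijkstra.

#include <cmath>

struct Vec3 { float x, y, z; };

// Straight-line distance: never larger than the geodesic (on-surface)
// distance between two mesh vertices, so it is an admissible heuristic
// when edge weights are geodesic edge lengths.
float euclidean_heuristic(const Vec3 &a, const Vec3 &b) {
  float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
  return std::sqrt(dx * dx + dy * dy + dz * dz);
}

// A zero heuristic is also admissible; with it, A* degenerates into
// Dijkstra's algorithm and simply visits more nodes.
float zero_heuristic(const Vec3 &, const Vec3 &) {
  return 0.0f;
}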

Aressera said:
The main requirement is that it is additive ("distance" from A to C must be ≤ A to B + B to C).

Ah yes. I'm unsure. But I could write some test to verify that's the case, I guess.

Aressera said:
The heuristic is used to prioritize which neighbors of expanded nodes to visit next, but doesn't affect the path that is found (if implemented correctly).

Then either both paths in my test have the same length, or I did it wrong. Someday I'll spend more time on it. Currently I don't need it and have already forgotten the details.

But the curious thing is: I never saw A* being used in other people's code for geometry processing. They all use (or mention) Dijkstra. (Well, maybe I saw this in only 2 or 3 projects, but they were made by real experts.)
That's where my assumption ‘A* on a 3D surface is inaccurate’ came from.

JoeJ said:
But the curious thing is: I never saw A* being used in other people's code for geometry processing. They all use (or mention) Dijkstra. (Well, maybe I saw this in only 2 or 3 projects, but they were made by real experts.) That's where my assumption ‘A* on a 3D surface is inaccurate’ came from.

I guess that's probably because A* is hard to implement correctly (and fast) and Dijkstra's is easier to grasp. I have used A* plenty for geometric tasks other than navigation, such as finding acoustic diffraction paths in a recent SIGGRAPH paper (see section 5.4).


JoeJ said:
Ah yes. I'm unsure. But I could write some test to verify that's the case, I guess.

No need for tests; you can easily reason that an overestimate on the unexplored part may lead to a bad solution.

What you basically do is try lots of different paths from the starting point at the same time. (In the algorithm you create new paths at each intermediate point, but you can also see that as having all these paths from the start; they are just on top of each other, so you cannot distinguish between them.)

All paths have an explored first part and an unexplored last part. Both parts have a cost. The explored path has exact costs, the unexplored part has estimated costs.

In each step of the algorithm, you pick the path with the lowest total cost and extend the explored part a bit. Then you put it back in the collection of paths, and find the next path for performing a step. Repeat until the unexplored length is zero.

Now imagine that the unexplored part has a higher-than-exact cost. That makes the path seem a bit longer than it really is. You make the explored part a bit longer. Since the real costs are lower, the total cost of that path decreases. In the next iteration you will thus pick the same path (it already had the minimum cost of all paths when you picked it the first time, and it just got better) and extend it again. Again the total cost decreases! So rather than exploring all paths and picking the most promising one every time, you expand exactly one path to the destination and never even consider trying any of the others.

If the one picked path has a detour near the destination, it has accumulated such a cost advantage from being so far ahead of the other paths that, by the time it arrives at the detour, the additional real cost no longer matters.
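
To put numbers on that, here is a tiny four-node example (the graph, weights and heuristic values are invented purely for illustration) where a single overestimate makes A* commit to the wrong branch and return a path of cost 11 even though the optimum costs 3:

#include <functional>
#include <iostream>
#include <queue>
#include <utility>
#include <vector>

// Four nodes: 0 = S, 1 = A, 2 = B, 3 = G, with invented weights:
// S->A = 1, A->G = 10, S->B = 2, B->G = 1, so the shortest path is
// S->B->G with cost 3.
int main() {
  std::vector<std::vector<std::pair<int, int>>> edges = {
      {{1, 1}, {2, 2}}, // S
      {{3, 10}},        // A
      {{3, 1}},         // B
      {}                // G
  };
  // Inadmissible heuristic: it claims B is 20 away from G (the truth is 1).
  std::vector<int> h = {3, 10, 20, 0};

  std::vector<int> g(4, 1000000000);
  g[0] = 0;
  std::priority_queue<std::pair<int, int>,
                      std::vector<std::pair<int, int>>,
                      std::greater<std::pair<int, int>>> open; // min-heap on f = g + h
  open.push({h[0], 0});

  while (!open.empty()) {
    int node = open.top().second;
    open.pop();
    if (node == 3) { // goal popped; because of the overestimate this prints 11, not 3
      std::cout << "found a path of cost " << g[3] << ", the optimum is 3\n";
      break;
    }
    for (auto [next, w] : edges[node])
      if (g[node] + w < g[next]) {
        g[next] = g[node] + w;
        open.push({g[next] + h[next], next});
      }
  }
}

The queue never gets around to the cheap S→B→G route: B sits in the open list with an inflated priority of 22, while the S→A→G route reaches the goal with priority 11 first.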

Alberth said:
The explored path has exact costs, the unexplored part has estimated costs.

I'm confused about some detail here. Looking at Wikipedia, they use two arrays of costs.
One seems to be the cost of the currently known path, but this contains the heuristic as well, which confuses me.
The second seems to be an estimate made from a guess about the rest of the path plus the heuristic, which makes sense.

I assume this is fine because the constraint on a valid heuristic guarantees correct results. But to truly understand it, I need to write a new pathfinder from scratch.

I'm also unsure about my ‘fix’ to deal with duplicates. I now think this even makes my Dijkstra implementation faulty in rare cases. Instead of just skipping already visited nodes, likely I should update their cost as well to guarantee the cost is minimized.

So I hoped to add A* to my stuff with little work, but instead I've learned all my stuff is broken. :)

JoeJ said:
One seems to be the cost of the currently known path, but this contains the heuristic as well, which confuses me.

It does? gScore seems to be the cost of the explored path to me. “tentative_gScore” is the “old gScore + distance just walked” as far as I can see, so still real costs.

I didn't find an explanation of the “tentative” notion there (searching "entative" at that page just hits the pseudo-code); however, https://en.wikipedia.org/wiki/Dijkstra%27s_algorithm has it too, and it has step 3 in the algorithm, where you compute tentative successors by computing the real distance of all next neighbors, and then step 4, which selects the next new neighbor with the smallest real distance.

A* doesn't do that, and instead you can have multiple explorations running on the same path. The Remark below the source code highlights that behavior.
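
For what it's worth, here is a minimal sketch of that two-array bookkeeping, using the gScore name from the Wikipedia pseudo-code and a placeholder adjacency-list graph (none of this is the code posted above): gScore only ever holds real costs, and the heuristic appears only in the priority key.

#include <functional>
#include <limits>
#include <queue>
#include <utility>
#include <vector>

// graph[u] lists (neighbor, edge weight) pairs; h[n] is an admissible
// heuristic (a lower bound on the remaining cost, 0 at the goal).
float a_star_cost(const std::vector<std::vector<std::pair<int, float>>> &graph,
                  const std::vector<float> &h, int start, int goal) {
  const float inf = std::numeric_limits<float>::infinity();
  std::vector<float> gScore(graph.size(), inf); // exact cost of the explored part, no heuristic in it
  gScore[start] = 0.0f;

  using Entry = std::pair<float, int>; // (fScore = gScore + h, node)
  std::priority_queue<Entry, std::vector<Entry>, std::greater<Entry>> open;
  open.push({h[start], start});

  while (!open.empty()) {
    int current = open.top().second;
    open.pop();
    if (current == goal)
      return gScore[goal]; // with an admissible h, the first pop of the goal is optimal
    for (auto [next, w] : graph[current]) {
      float tentative_gScore = gScore[current] + w; // real costs only
      if (tentative_gScore < gScore[next]) {
        gScore[next] = tentative_gScore;
        open.push({tentative_gScore + h[next], next}); // heuristic added only here
      }
    }
  }
  return inf; // goal not reachable
}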

JoeJ said:
I'm also unsure about my ‘fix’ to deal with duplicates. … Instead of just skipping already visited nodes, likely I should update their cost as well to guarantee the cost is minimized.

Indeed. I currently think that, under the assumption that estimated costs are less than or equal to real costs, this can happen if you have neighbors at varying distances. For example, for a single new node with two already explored neighbors, one neighbor can have real costs of 21 and a step of 7 to the new node, while the other neighbor can have costs 22 and a step of 5.

What I don't know is whether it can happen that a better minimum real cost for a node turns up after you have already expanded the path from that node. It would need a path through that node with a shorter real cost, and thus a higher estimated cost, than the path that reached the node you're expanding next. I guess it may happen if you have estimates that vary widely but are always lower than the real costs, but I have no idea what you need to ensure it never happens.

Note that this connects directly to termination of the search. For Dijkstra, the search terminates when you try to expand the destination node. That works because you are always expanding a node that has the smallest real distance from the starting point. If you want the same in A*, it should never happen that a node gets a lower real cost after you have tried expanding it.
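
A small test along the lines JoeJ mentioned earlier would check exactly this property, usually called consistency: for every edge (u, v) with weight w, h(u) ≤ w + h(v). If that holds (and h is 0 at the goal), a node can never get a lower real cost after it has been expanded. A sketch, with a placeholder adjacency representation rather than anyone's actual data structures:

#include <utility>
#include <vector>

// Returns true if the heuristic h is consistent on the given graph:
// h[u] <= w + h[v] for every edge (u, v) with weight w.
bool heuristic_is_consistent(const std::vector<std::vector<std::pair<int, float>>> &graph,
                             const std::vector<float> &h,
                             float tolerance = 1e-5f) {
  for (int u = 0; u < (int)graph.size(); ++u)
    for (auto [v, w] : graph[u])
      if (h[u] > w + h[v] + tolerance)
        return false; // this edge violates the triangle-inequality-like condition
  return true;
}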

A couple of things:

  • In my code, this line is confusing:
int cost = heuristic_cost(node_x, node_y, x_neighbor, y_neighbor);

In my particular implementation of the heuristic for this example, the true cost of moving to a neighbor is actually the same as what the heuristic returns, so I lazily used it. I hope that didn't confuse things too much.

  • As others have mentioned, as long as the heuristic never overestimates the cost, the computed path will be optimal. If your heuristic does overestimate the cost, the code still works (and it might actually be significantly faster!), but it might generate a suboptimal path.
  • Once a node is closed (popped from the open list and processed), you can be sure that the distance from the start to that node has been computed correctly. That's the key feature that makes the A* algorithm work.
  • You might have noticed that my code seems to ignore duplicates, and that works just fine. If a shorter path is found to a node that is already in the open list, it would be tricky to find it in the priority queue and update its cost. However, you can just add the node to the open list multiple times and make sure you don't process already closed nodes.
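
Distilled to its essentials, that "push duplicates, skip closed nodes on pop" pattern looks something like the sketch below (the names are invented here; std::priority_queue has no decrease-key operation, which is why the trick is needed):

#include <functional>
#include <queue>
#include <utility>
#include <vector>

struct OpenList {
  using Entry = std::pair<int, int>; // (priority, node), smallest priority first
  std::priority_queue<Entry, std::vector<Entry>, std::greater<Entry>> heap;

  // Whenever a node's cost improves, just push another entry for it; the
  // old, more expensive entry stays in the heap as garbage.
  void push(int priority, int node) { heap.push({priority, node}); }

  // Pop entries until one refers to a node that has not been closed yet.
  // Returns -1 when only stale entries (or nothing) remain.
  int pop_live(std::vector<bool> &closed) {
    while (!heap.empty()) {
      int node = heap.top().second;
      heap.pop();
      if (closed[node])
        continue; // stale duplicate: a cheaper entry for this node was already processed
      closed[node] = true;
      return node;
    }
    return -1;
  }
};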

This topic is closed to new replies.
