
A.I w/o pathfinding help

Started by November 23, 2002 04:02 PM
0 comments, last by x-bishop 21 years, 11 months ago
I'm trying to implement a simple 2D game where the "board state" functions have already been defined for me. For example, I have access to functions that tell me whether my goal is somewhere above/below/left/right of me, so I don't think I can use A* pathfinding: it's akin to having a metal detector showing me where the object of interest is relative to me, and I don't actually have the two end points needed to build a path.

The game is something like soccer, so there are multiple agents on each of the two teams, and I have access to where the net is, where the ball is (when dropped in the middle), and where the ball is when an opponent has it, all relative to an agent's instance.

What's the best way to approach something like this, i.e. a way to implement some kind of smarts for my agents without any pathfinding? I suppose I could use minimax, but I'm not sure how to do that since I don't have an exact board state like a chess/checkers board, so I'm not sure how I could expand the game tree. Mind you, I'm pretty new to even basic AI; I'm just familiar with A* pathfinding, game trees, heuristic searches, etc. Any ideas would be great.

[edited by - x-bishop on November 23, 2002 5:06:24 PM]
"a way to implement some kind of smarts to my agents w/o any pathfinding"

Not sure I understand the problem you describe. But a few thoughts spring to mind.

You could use a simple attractive force: The agent just accelerates in the direction of the goal, the ball, another agent it wants to tackle, whatever.

Obstacles could have a repellent force. So basically your agent is like a little magnet buzzing around a playing field of other magnets, some attracting, some repelling.
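A minimal sketch of that attract/repel idea, in Python. All of the names (`attract`, `repel`, `steer`) and the constants are illustrative, not from any particular engine; the point is just that each agent sums one pull toward its target with a push away from each obstacle and accelerates along the result.

```python
import math

def normalize(vx, vy):
    d = math.hypot(vx, vy)
    return (vx / d, vy / d) if d > 1e-9 else (0.0, 0.0)

def attract(pos, target, strength=1.0):
    # Constant-magnitude pull toward the target (net, ball, opponent...).
    nx, ny = normalize(target[0] - pos[0], target[1] - pos[1])
    return (strength * nx, strength * ny)

def repel(pos, obstacle, strength=1.0, radius=3.0):
    # Push away from an obstacle, fading linearly to zero at `radius`.
    dx, dy = pos[0] - obstacle[0], pos[1] - obstacle[1]
    d = math.hypot(dx, dy)
    if d > radius or d < 1e-9:
        return (0.0, 0.0)
    nx, ny = normalize(dx, dy)
    falloff = (radius - d) / radius
    return (strength * falloff * nx, strength * falloff * ny)

def steer(pos, target, obstacles):
    # Sum all forces, then return a unit direction to accelerate in.
    fx, fy = attract(pos, target)
    for ob in obstacles:
        rx, ry = repel(pos, ob)
        fx, fy = fx + rx, fy + ry
    return normalize(fx, fy)

# Agent at the origin heading for the ball at (10, 0),
# with an opponent at (2, 0.5) partly in the way:
direction = steer((0.0, 0.0), (10.0, 0.0), [(2.0, 0.5)])
```

This fits the poster's constraint nicely: it only needs relative directions to things, never a full path, so the "metal detector" style queries are all the information it requires.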

Look at "Seek", "Flee", and "Arrival" in Reynolds:

http://www.red3d.com/cwr/steer/gdc99/
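For a feel for what's on that page, here is a rough Python sketch of "seek" and "arrival" as Reynolds describes them: seek steers the desired velocity at full speed toward the target, while arrival ramps the desired speed down inside a slowing radius so the agent stops at the target instead of orbiting it. The constants are made up for illustration.

```python
import math

MAX_SPEED = 5.0
SLOWING_RADIUS = 4.0

def seek(pos, vel, target):
    # Steering force = desired velocity (full speed toward target) - current velocity.
    dx, dy = target[0] - pos[0], target[1] - pos[1]
    d = math.hypot(dx, dy)
    if d < 1e-9:
        return (0.0, 0.0)
    desired = (dx / d * MAX_SPEED, dy / d * MAX_SPEED)
    return (desired[0] - vel[0], desired[1] - vel[1])

def arrive(pos, vel, target):
    # Like seek, but desired speed ramps down linearly inside SLOWING_RADIUS.
    dx, dy = target[0] - pos[0], target[1] - pos[1]
    d = math.hypot(dx, dy)
    if d < 1e-9:
        return (-vel[0], -vel[1])  # at the target: steer toward a stop
    speed = MAX_SPEED * min(1.0, d / SLOWING_RADIUS)
    desired = (dx / d * speed, dy / d * speed)
    return (desired[0] - vel[0], desired[1] - vel[1])
```

"Flee" is just seek with the desired velocity pointing away from the target instead of toward it.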

Cheers,

Tim.

