First State

         player---->
          _
          /|
         /
        /
    enemy

Second State

    enemy-> player---->
        /
       /
      /
     /
    * Another enemy, in correct position to shoot.
Top-Down rotating bots.
In a top-down scrolling game with rotating entities (Solar Winds, Firefight, Asteroids, that sort of thing), what are some ways of implementing bot AI?
I've noticed a couple of different behaviour patterns. If you know of any more, please speak up; I'm sure there are more than these (and these are probably incorrect).
Alternating Passes
When the targeted player comes within a certain distance of the enemy, the enemy will first rotate and fly to intersect the path of the player. After the enemy has flown this way for a while, it will rotate and attempt to fly directly behind the targeted player, firing its weapon. Eventually, the player will dodge, and this will cause the enemy to change course, bisecting the player's path. During any of these stages, if a ray cast along the enemy's direction intersects the player, the enemy will attempt a shot. (However, the ray's direction should be altered slightly, so as to anticipate the player's acceleration.)
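The "anticipate the player" shot test can be approximated by leading the target. Here is a rough sketch, with all names invented for illustration; it uses a one-step extrapolation along the player's velocity (a full solution would also account for acceleration and iterate on the time-to-hit estimate):

```python
import math

def lead_target(shooter, player_pos, player_vel, bullet_speed):
    """Return the angle to aim at, leading the player by their velocity.

    Estimates time-to-hit from the current distance, then aims where the
    player will be at that time.
    """
    dx = player_pos[0] - shooter[0]
    dy = player_pos[1] - shooter[1]
    time_to_hit = math.hypot(dx, dy) / bullet_speed
    # Extrapolate the player's position along its current velocity.
    aim_x = player_pos[0] + player_vel[0] * time_to_hit
    aim_y = player_pos[1] + player_vel[1] * time_to_hit
    return math.atan2(aim_y - shooter[1], aim_x - shooter[0])
```

A stationary player reduces this to aiming straight at the player's current position.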
Charging
An enemy will move slowly, or completely stop while lining up to shoot the player. When in position, the enemy will accelerate very fast towards the player, shooting whilst charging. The enemy will then evade the player for a set period of time, then attempt to line up and charge again.
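The charging pattern above can be sketched as a small state machine. All state names, thresholds, and the command format here are invented for illustration, not a real implementation:

```python
import math

# States for the hypothetical charger behaviour:
# LINE_UP -> rotate until roughly facing the player
# CHARGE  -> accelerate at the player, shooting, until too close
# EVADE   -> break off for a fixed time, then line up again
LINE_UP, CHARGE, EVADE = range(3)

class ChargerBot:
    def __init__(self):
        self.state = LINE_UP
        self.evade_timer = 0.0

    def update(self, dt, angle_to_player, dist_to_player):
        """Return an (action, value) command for this frame."""
        if self.state == LINE_UP:
            if abs(angle_to_player) > 0.05:      # radians of misalignment
                return ("turn", angle_to_player)
            self.state = CHARGE
            return ("thrust", 1.0)
        elif self.state == CHARGE:
            if dist_to_player < 50.0:            # too close: break off
                self.state = EVADE
                self.evade_timer = 3.0           # seconds of evasion
                return ("turn", math.pi)
            return ("shoot", None)
        else:  # EVADE
            self.evade_timer -= dt
            if self.evade_timer <= 0.0:
                self.state = LINE_UP
            return ("thrust", 1.0)
```

Each frame the game would feed the bot its relative angle and distance to the player and execute the returned command.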
Any ideas?
Though I'm quite new to programming, and especially to these forums, I have a few ideas for you. These come from my playing Subspace. Note that I am improvising the names for these.
Strafe
This is where the player finds a relatively slow (or stationary) enemy, accelerates past its side, then turns, keeping the target in sight, and shoots, constantly adjusting rotation and thrust to keep circling the opponent. This depends on the player being able to change direction quickly, and on the game having physics similar to Asteroids.
Long-range
This would be for a player with better long-range firepower than the opposition, though it can still work if everyone is equal. The player fires its weapon from long range and tries to stay just under its maximum range. In this way, it's sniping the opposition while remaining safe from harm.
Harass
A variation on the Long-range tactic: the player gets into range, fires, then backs away again.
Backpedal
If the enemy is charging, the player points at the opposition and thrusts backwards, running away and fighting back at the same time.
If you implement cloaking or blind spots, you could add more to that list. The same goes if you add extra weapons, items, or devices.
Hope that helps!
Here's a thought: Genetic programming. (There's a Google keyword for you!)
In genetic programming, you have a population of programs, which interbreed and are subjected to random mutations. The best programs are the most likely to breed, so, over time, successful programs develop.
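As a toy sketch of that loop (illustrative only; fitness here is error against an arithmetic target rather than combat performance, since a full game fitness test would be far longer):

```python
import random

random.seed(1)  # deterministic for the example

def random_tree(depth=3):
    """Programs are nested tuples: ("add", a, b), ("mul", a, b), "x", or a constant."""
    if depth == 0 or random.random() < 0.3:
        return random.choice(["x", random.uniform(-2.0, 2.0)])
    op = random.choice(["add", "mul"])
    return (op, random_tree(depth - 1), random_tree(depth - 1))

def run(tree, x):
    """Evaluate a program tree for a given input x."""
    if tree == "x":
        return x
    if isinstance(tree, (int, float)):
        return tree
    op, a, b = tree
    va, vb = run(a, x), run(b, x)
    return va + vb if op == "add" else va * vb

def fitness(tree):
    # Lower is better: squared error against the target behaviour x*x.
    return sum((run(tree, x) - x * x) ** 2 for x in range(-3, 4))

def mutate(tree):
    # Replace one randomly chosen subtree with a fresh random one.
    if isinstance(tree, tuple) and random.random() < 0.7:
        op, a, b = tree
        if random.random() < 0.5:
            return (op, mutate(a), b)
        return (op, a, mutate(b))
    return random_tree(2)

# Evolve: keep the better half, refill with mutated copies of survivors.
pop = [random_tree() for _ in range(50)]
for gen in range(30):
    pop.sort(key=fitness)
    pop = pop[:25] + [mutate(random.choice(pop[:25])) for _ in range(25)]

best = min(pop, key=fitness)
```

A real GP system would add crossover (swapping subtrees between two parents), which is where the "interbreeding" comes from; this sketch uses mutation only to stay short.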
Now, imagine a bot program as a hierarchy of functions. Maybe you would have the following functions (I'm writing in a C style, but be aware that these functions are nodes in your tree, not C):
    action Move(s meters)        // Positive for forwards; negative for backwards
    action Turn(theta radians)   // Positive for right; negative for left
    action Shoot()
    bool isClear(x meters, y meters)  // Tells if there is anything at the coordinate (x, y), with the bot at (0, 0)
    radians angleEnemy()         // Angle that the bot needs to turn to face the enemy: positive for right; negative for left
    meters distanceEnemy()       // Gives distance to enemy
    healthUnit myHealth()        // Gives current health
    action conditional(bool condition, trueAction, falseAction)  // Does one of two things...
    action Both(action a, action b)   // Does two things...
    bool isGreater(a, b)
    bool isLess(a, b)
    bool isEqual(a, b)
    unit Random(a, b)            // Returns a random number between a and b
    unit Add(a, b)               // Adds a and b
    unit Divide(a, b)            // Divides a by b
These functions are nodes in your tree. Now add one more type of node: Constant, which is simply a number. Initially, it's generated randomly, and is subject to mutation.
Now, you could evolve a decent behavior this way. Each frame, you would simply run each bot's program. Here's an example program:
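A hypothetical program in this function set might look like the following (purely illustrative, not the author's original example): if the enemy is closer than 200 meters, turn toward it and shoot; otherwise turn partway toward it and advance.

```
conditional(isLess(distanceEnemy(), 200),
            Both(Turn(angleEnemy()), Shoot()),
            Both(Turn(Divide(angleEnemy(), 2)), Move(5)))
```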
You might also want to look into Lisp. It's a programming language much like this little one I put together here for use with Genetic Programming. Honestly, I've only seen Scheme (a beginner's "dialect" of Lisp) one time, so I don't know it, but it might be useful to you.
More importantly, just do some Google searches on Genetic Programming. You'll get some useful information.
Genetic Programming may not be the best solution to your problem, but it's one approach which you might want to explore.
I had all kinds of wonderful ASCII art trees here, but I edited my post to add more to the end, and the message board then blew them up. Sorry, but I won't redo them. I think my post is pretty clear without them anyway.
[edited by - TerranFury on December 18, 2002 6:17:58 PM]
GP would definitely be an interesting way to go. (I'm biased.) Be aware, though, that you will most likely have to fight bloat. GP trees have a tendency to grow at an alarming rate during evolution, so you'll need to control program size in some way.
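One common way to control size is "parsimony pressure": penalise a program's fitness by its node count so that, all else equal, smaller trees win. A minimal sketch, assuming the nested-tuple tree representation (the names are illustrative, not from any particular GP library):

```python
def size(tree):
    """Count the nodes in a nested-tuple program tree."""
    if isinstance(tree, tuple):
        # One node for the operator, plus all child subtrees.
        return 1 + sum(size(child) for child in tree[1:])
    return 1  # leaf: variable or constant

def penalized_fitness(raw_error, tree, weight=0.1):
    # Lower is better; each extra node costs `weight`.
    return raw_error + weight * size(tree)
```

Other standard options are a hard depth limit on generated subtrees and deleting oversized offspring outright.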
Let us know the results if you choose this path...
-Kirk
I have some experience with genetic algorithms. I basically used them to control the weights on a neural net, teaching it to do simple arithmetic. (Although I think there was something wrong with my implementation of the neural net, as I could only teach the net to multiply and divide...)
So basically: generate a random sequence of "move, turn, shoot, loop"-type commands, generate a whole bunch of them, then have the GA implementation select the most effective one?
What if you want to have more complicated situations (such as a "conserve energy" state, etc.)?
GA seems like a cool way to go, although I'm not sure how well it fits with my plan of having "formations" (think Homeworld).
quote: Original post by skjinedmjeet
So basically: generate a random sequence of "move, turn, shoot, loop"-type commands, generate a whole bunch of them, then have the GA implementation select the most effective one?
What if you want to have more complicated situations (such as a "conserve energy" state, etc.)?
That's essentially it, except don't think "sequence"; think "tree." Functions take multiple arguments; each argument is a "branch" in the tree. An expression like (2+7)/3, for example, can be expressed in a binary tree like this:
        (/)
       /   \
     (+)    3
     /  \
    2    7
Now hopefully you can sort of see the idea of a tree of functions. In this case, the tree evaluates to a number. In a game, you'd want your tree of functions to evaluate to an action.
You asked how you could get complicated behaviors with such a tree. That's why I included the conditional function in the set of functions from my previous post. It takes three arguments: the first is a boolean (which you could get from a function like isGreater), and the second two are actions. It also returns an action. If the boolean is TRUE, it returns the second argument; if it is FALSE, it returns the third.
You can probably see how complicated behaviors could evolve in this manner.
Next, you asked how you could incorporate formations into your GP programs. One thought would be to include some functions that could be used for flocking and see what evolves. Maybe add AngleNearestFriend and DistanceNearestFriend functions. You could develop all kinds of formations just with those.
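Those two suggested primitives could be implemented along these lines (a sketch with invented names, matching the suggested AngleNearestFriend / DistanceNearestFriend nodes):

```python
import math

def nearest_friend(pos, friends):
    """Return the position of the closest friendly bot."""
    return min(friends, key=lambda f: math.dist(pos, f))

def distance_nearest_friend(pos, friends):
    return math.dist(pos, nearest_friend(pos, friends))

def angle_nearest_friend(pos, heading, friends):
    """Signed angle (radians) the bot must turn to face its nearest friend."""
    fx, fy = nearest_friend(pos, friends)
    target = math.atan2(fy - pos[1], fx - pos[0])
    # Wrap the difference into [-pi, pi) so left/right turns stay symmetric.
    return (target - heading + math.pi) % (2 * math.pi) - math.pi
```

Evolution could then combine these with Turn and Move to keep bots at a preferred spacing and bearing from their neighbors, which is essentially a flocking rule.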
Just do some Google searches on "Genetic Programming." There are a lot of websites that can probably explain a lot of this stuff better than I can.
If you do decide to use GP be warned that it might take a while for halfway decent behaviors to evolve. Also understand that there are other ways of doing effective AI for a game like this (like hardcoded Steering Behaviors). I just wanted you to understand this one option.
Whatever you decide to do, good luck - and post back to let us know how you're doing!