
Ideas for intelligent StarFighter Behaviors

Started by April 19, 2002 12:47 PM
19 comments, last by Wicked Ewok 22 years, 6 months ago
I've been working on a star fighter AI class for a 2D Asteroids-type game. Right now I have it structured so that there are various pre-set behaviors that you can attach to a ship. But the bad thing is, during combat, all the ship does is try as hard as it can to point its nose directly at you and match your speed. There are varying degrees of how much they'll actually do this, but in the end, all the ships do is fly in circles to hit you. That's only for the Engage() behavior. So far I have a very limited list of predetermined behaviors, and it would be nice to see if anyone out there has ideas on how to make more realistic behaviors.

void ProcessTransport();   // transports from point to point
void ProcessPatrol();      // patrols any number of points
void ProcessGuardPoint();  // guards a point in space
void ProcessGuardArea();   // patrols an area
void ProcessEscort();      // escorts a ship
void ProcessHuntFett();    // hunts you ruthlessly
void ProcessEvasive10();   // evasive behavior 1, tries to fly away
void ProcessEvasive12();   // flies away, dodges, with varying speeds
void ProcessHunt10();      // hunts a ship ruthlessly
void ProcessAggressive();  // shoots you or any other ship it locks

Jumping from one behavior to another is done using conditions set for a given AI for a given ship. Jumping around is fine; it's the behaviors themselves I want to change/mess with - the evasive and aggressive ones get boring after a while. Thanks for any ideas on this.
-=~''^''~=-.,_Wicked_,.-=~''^''~=-
Steer toward a point ahead of the target ship. You can use Steering Behaviours (check the articles & resources link above).

The reason you might want to do this is that your NPC ship is then not reacting to the target's actions, but rather predicting them. This behaviour will emerge once you figure out how to predict a future position of the target ship. There are many ways to do this, from the trivial to the complex... I'll leave you to think on it for a while.
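To make the idea concrete, here is only my own rough sketch of a seek steering behaviour aimed at a linearly predicted point - not code from your game; Vec2, Ship and all the names are made up:

#include <cmath>

struct Vec2 { float x, y; };

Vec2 operator+(Vec2 a, Vec2 b) { return Vec2{a.x + b.x, a.y + b.y}; }
Vec2 operator-(Vec2 a, Vec2 b) { return Vec2{a.x - b.x, a.y - b.y}; }
Vec2 operator*(Vec2 a, float s) { return Vec2{a.x * s, a.y * s}; }

struct Ship { Vec2 pos, vel; float maxSpeed; };

// Steer toward where the target is *going* to be rather than where it is now.
// 'lookahead' is how many seconds into the future we predict.
Vec2 SeekPredicted(const Ship& self, const Ship& target, float lookahead)
{
    Vec2 predicted = target.pos + target.vel * lookahead;   // simple linear prediction
    Vec2 toPoint   = predicted - self.pos;
    float len = std::sqrt(toPoint.x * toPoint.x + toPoint.y * toPoint.y);
    Vec2 desiredVel = (len > 0.0f) ? toPoint * (self.maxSpeed / len) : Vec2{0.0f, 0.0f};
    return desiredVel - self.vel;   // Reynolds-style seek: steering = desired velocity - current velocity
}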

Cheers,

Timkin
I can't believe I had my ass chewed out on this a few months ago. It's simple vector math that says that in order to intercept a target, you must plot their future position and use that as your steer point. How FAR in the future is determined by your current range as a function of closing rate.
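In case it helps, here's roughly how I'd read that as code - my own sketch only; the names, the clamp and the 0.1 cut-off are arbitrary choices:

#include <algorithm>

// Lookahead (in seconds) used for the prediction: the farther away the target
// and the slower we are closing on it, the further ahead we look. Clamped so
// it doesn't blow up when the closing rate is near zero.
float LookaheadTime(float range, float closingRate, float maxLookahead)
{
    if (closingRate < 0.1f)                       // barely closing, or opening
        return maxLookahead;
    return std::min(range / closingRate, maxLookahead);
}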

Dave Mark
President and Lead Designer
Intrinsic Algorithm Development

"Reducing the world to mathematical equations!"


Innocuous, the math of steering towards a point is trivial, but the actual prediction of *where* that future point is can be a bit more difficult, which is what (I think) Timkin is referring to. As far as the actual AI goes, you might give some thought to making the different enemy AIs work in tandem. Firstly, this gives you some leeway in the "future point" prediction, in that it lets you define a series of points or a general region where the ship might be going, and send the various ships to their own individual positions. Secondly, if you assign jobs to the different enemies, you'll get more complex-looking behavior, even if the individual behaviors are basic.

For example: say there are only two behaviors, a kamikaze attack and an area patrol. You could assign one of the two to each fighter depending on its proximity to the player, and constantly update the assignments based on their new relative positions. So a fighter could attempt to crash into the player, miss, and then end up patrolling a given area since it's now fairly far away.
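A rough sketch of that re-assignment (the enum names and the threshold are just placeholders I made up):

enum Role { ROLE_KAMIKAZE, ROLE_AREA_PATROL };

// Pick a job based purely on distance to the player; call this every few
// frames so a fighter that misses its run drops back to patrolling once it
// has overshot and is far away again.
Role ChooseRole(float fighterX, float fighterY,
                float playerX,  float playerY,
                float engageRadius)
{
    float dx = playerX - fighterX;
    float dy = playerY - fighterY;
    return (dx * dx + dy * dy < engageRadius * engageRadius)
               ? ROLE_KAMIKAZE : ROLE_AREA_PATROL;
}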

I may misunderstand exactly what you are attempting to do, but if my solution is completely useless for your game, let me know and I'll give it another shot.


[edited by - Mordoch Bob on April 20, 2002 2:26:21 AM]
_________________________________________________________________________________
The wind shear alone from a pink golfball can take the head off a 90-pound midget from 300 yards. -Six String Samurai
quote: Original post by InnocuousFox
I can't believe I had my ass chewed out on this a few months ago.


In this forum??? I would hope not. What was the reason given for suggesting that steering to a predicted state was a bad thing???

The method you have suggested would be the simplest prediction (being linear extrapolation), based on complete domain knowledge.

At the other end would be a non-linear prediction from infrequent observations... a nasty problem, but actually quite solvable given recent algorithms (and not too computationally intensive either).

Cheers,

Timkin
To be perhaps too simplistic, and assuming that we have complete domain knowledge [this is terrible, imho], we can calculate the position ahead of a ship as:

ship.p + ship.v

or, to add some history to it, have a ship store [for AI purposes only] a set of velocity vectors relative to the current orientation, which we push new values into each frame [or AI frame]:

ship.p + ship.v + rel(ship.p, ship.v2) + ...

where rel(va, vb) takes a coordinate-space vector (ship.p being the current position, with, in classic D3DRM style, 6 values) and a vector to convert. Although we're in 2D here, so yeah.

The implementation of rel() uses matrices, or any other way of doing things you want to use.

It's similar for following a ship, except you subtract the vectors from the position (or just head for a position a certain distance behind the ship, which can be a problem when ships turn).
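For what it's worth, here's one way the "history" version might look cleaned up - my own sketch; it ignores the orientation-relative rel() business and just averages recent world-space velocities, which is a simplification:

const int HISTORY = 8;   // how many recent velocity samples to keep

struct Tracker
{
    float vx[HISTORY], vy[HISTORY];   // assume these start zeroed
    int   next;                       // ring-buffer write index

    void Record(float velX, float velY)
    {
        vx[next] = velX;
        vy[next] = velY;
        next = (next + 1) % HISTORY;
    }

    // Predicted position 'dt' seconds ahead of the current position (px, py),
    // using the average of the stored velocities to smooth out twitchy input.
    void Predict(float px, float py, float dt, float& outX, float& outY) const
    {
        float avgX = 0.0f, avgY = 0.0f;
        for (int i = 0; i < HISTORY; ++i) { avgX += vx[i]; avgY += vy[i]; }
        avgX /= HISTORY;
        avgY /= HISTORY;
        outX = px + avgX * dt;
        outY = py + avgY * dt;
    }
};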

Hope this helps. Someone chew my ass out for it if I'm talking out of it.
die or be died...i think
I think relying on just a linear extrapolation of where the ship is going can be problematic. If this game's control scheme is like that of the 2D Asteroids game, two things can happen at any time:
-The ship can propel itself in an arbitrary direction
-The ship can "teleport" (this is not necessarily within all Asteroids games, but in many that I've played)

So, even if you have a line that is consistent with the velocity, or a curve that takes into account the acceleration, it can change at any time. The long and short of it is that we can't actually say where the ship will be once the enemy AI shows up (unless they're right next to each other), and therefore we have to just guess. Now, an intelligently made linear scheme is probably sufficiently smart not to "orbit" the ship when trying to hit it, but you have to always keep in mind that it's not going to hit the ship 100% of the time.
_________________________________________________________________________________
The wind shear alone from a pink golfball can take the head off a 90-pound midget from 300 yards. -Six String Samurai
quote: Original post by Mordoch Bob
So, even if you have a line that is consistent with the velocity, or a curve that takes into account the acceleration, it can change at any time. The long and short of it is that we can't actually say where the ship will be once the enemy AI shows up


Not with any certainty, that is. Well, I can say with certainty that the ship will be somewhere within the boundaries of the game world. If I wanted to believe that the ship could move to any point, from any other point, at any time, then I would have to use a uniform probability distribution - over all possible world states - as my knowledge of the next state.

However, we can do better than that. We can use knowledge of our opponent's behaviour to better assess the likelihood of each possible action, and from this predict the probability distribution over future states of the opponent.

What would make this a fun problem to work on would be to restrict ALL agents in the game to purely positional data, which could be used to derive approximations for velocity (this is how we humans do it!).
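For the positional-data-only version, the velocity estimate can be as crude as a finite difference between the last two observed positions (sketch only, my own names):

// Estimate velocity from two position observations taken 'dt' seconds apart.
// A noisy feed would want smoothing (e.g. averaging several differences),
// but this is the basic idea.
void EstimateVelocity(float prevX, float prevY,
                      float currX, float currY,
                      float dt, float& outVx, float& outVy)
{
    outVx = (currX - prevX) / dt;
    outVy = (currY - prevY) / dt;
}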

Cheers,

Timkin
Rob Zubek wrote about parallel behaviorism in 2001 (look at www.cs.northwestern.edu/~rob/publications/). He was inspired by the work of Rodney Brooks at MIT (look at www.ai.mit.edu/people/brooks/publications.shtml for "How To Build Complete Creatures Rather Than Isolated Cognitive Simulators", for example).

In essence, he does not store a model of the world to build a cognitive system. He creates a chain of reactions driven by sensor readings: the world is its own best representation.

Let's say your enemy ship has 3 sensors:

1 - distance to player (100% - (dist_to_player / max_allowable_dist) * 100%)
2 - health level (100% - health level)
3 - has an order (35% if true / 0% if false)

Each of these sensors continuously evaluates and returns a percentage. You just need to set the timing of sensor evaluation (every 2 seconds, for example). The sensors compete for control of the ship: it is a need-driven AI.

Sensor 2 ---T70---+
Sensor 1 ----------S---+
Sensor 3 --------------S-----> Result

Each S is a subsume operator: if the higher-priority sensor has fired, it takes over control. The T is a threshold operator: it returns 0% while the sensor's reading is under the threshold. The diagram above means that at the lowest level, the ship will follow an order. If the player gets too close (the Sensor 1 reading is over 0%), the ship will follow a scripted reaction (let's say attack). If the ship has been hit too hard (the Sensor 2 reading is over 70%, because of the T operator), the ship will fall back to its last-ditch behaviour (let's say flee). Thus you have a continuous ship reaction.
Note that you can use any other kind of operator instead of subsume or threshold: maximum (returns the highest percentage), minimum (returns the lowest percentage), etc...
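To make the operators concrete, they can be as small as this - my own sketch of how I read the diagram, not Zubek's or anyone's actual code:

// A sensor reading: which sensor produced it, and how strongly it fired (0..100).
struct Reading { int sensor; float pct; };

// T: report 0% until the reading exceeds the threshold, then pass it through.
Reading Threshold(Reading in, float limit)
{
    if (in.pct <= limit)
        in.pct = 0.0f;
    return in;
}

// S: if the higher-priority input has fired at all, it takes control;
// otherwise the lower-priority input gets through.
Reading Subsume(Reading high, Reading low)
{
    return (high.pct > 0.0f) ? high : low;
}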

The result of a sensor evaluation is a pair: (sensor origin, percentage). This result is then mapped to an action table:
Sensor1 fired -> ProcessAggressive
Sensor2 fired -> ProcessEvasive10
Sensor3 fired -> ProcessOrder (it can be one of ProcessTransport, ProcessPatrol, ProcessGuardPoint, ProcessGuardArea, ProcessEscort, ProcessHuntFett)

example:
the sensor readings are:
Sensor1: the player is at 100 pixels and the maximum security distance is 120 pixels.
The sensor fires 100% - (100 / 120) * 100%, roughly 17%
Sensor2: the ship has 100% health. The sensor fires 0%
Sensor3: the ship has an escort order. The sensor fires 35%

When run through the evaluation, you get the result (Sensor1, 17%): the order is subsumed by the fact that the player ship is closing dangerously, and the ship is intact so it will not flee before taking 70% damage. The resulting action according to the map is ProcessAggressive: the ship fires at the threat.

You can have an admiral that assigns missions to your ships, and each ship will follow its orders unless the player gets too close to it. If the player gets away, the ships will resume their mission. If the player hits too hard, the ships will flee. The admiral can monitor the effectiveness of the squadron's mission and reassign orders: the behavior will change.

Now, let's change the behavior according to another sensor:
Sensor4: Hit by player (60% if true / 0% if false)

Let's add another operator: WU(time), for "wedge up". It passes the sensor's percentage through when the sensor fires and keeps sending that percentage until the given number of seconds has elapsed.

Let's change the action map:
Sensor1 fired -> ProcessEvasive12
Sensor2 fired -> ProcessEvasive10
Sensor3 fired -> ProcessOrder (it can be one of ProcessTransport, ProcessPatrol, ProcessGuardPoint, ProcessGuardArea, ProcessEscort, ProcessHuntFett)
Sensor4 fired -> ProcessAggressive

You get the new evaluation diagram:
Sensor 2 ---T70-------+
Sensor 4 --WU(5)------S--+
Sensor 1 -----------------S--+
Sensor 3 ---------------------S-----> Result

Thus the enemy ship will follow orders if left alone. If the player gets too close, the ship will begin evasive manoeuvres. If the ship is hit, it will get aggressive and fire back for 5 seconds. If the ship's health gets low, it will fly away.
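Putting that second diagram into code, reusing the Reading / Threshold / Subsume helpers sketched above and adding the wedge-up latch - again my own reading of it, with made-up names; the priorities follow the diagram (health, then hit-by-player, then proximity, then order):

// WU(time): once the sensor fires, keep reporting that value until
// 'holdSecs' seconds of game time have passed.
struct WedgeUp
{
    float  held;      // value being held (initialise to 0 before use)
    double until;     // game time (seconds) at which the hold expires (initialise to 0)

    Reading Apply(Reading in, float holdSecs, double now)
    {
        if (in.pct > 0.0f)      { held = in.pct; until = now + holdSecs; }
        else if (now < until)   { in.pct = held; }
        return in;
    }
};

enum { SENSOR_PROXIMITY = 1, SENSOR_HEALTH = 2, SENSOR_ORDER = 3, SENSOR_HIT = 4 };

// Returns the id of the sensor that won control this evaluation.
int Evaluate(Reading proximity, Reading health, Reading order, Reading hit,
             WedgeUp& hitLatch, double now)
{
    Reading r = Subsume(Threshold(health, 70.0f),           // Sensor 2 ---T70---+
                        hitLatch.Apply(hit, 5.0f, now));     // Sensor 4 --WU(5)--S
    r = Subsume(r, proximity);                               // ----------------- S (Sensor 1)
    r = Subsume(r, order);                                   // --------------------- S (Sensor 3)
    return r.sensor;
}

// Action map, e.g. as a switch on the winning sensor:
//   SENSOR_HEALTH    -> ProcessEvasive10();
//   SENSOR_HIT       -> ProcessAggressive();
//   SENSOR_PROXIMITY -> ProcessEvasive12();
//   SENSOR_ORDER     -> ProcessOrder();   // whichever order behaviour is assigned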

The big advantage is that the change of behaviour is continuous. It is not a state machine with every condition for switching states hardcoded: it is a continuous, need-driven machine.

Hope that helps.

Ghostly yours,
Red.

[edited by - Red Ghost on April 22, 2002 6:58:30 AM]

[edited by - Red Ghost on April 23, 2002 8:51:03 AM]
On a much more simplistic level, you could make the enemy ships have 2 primary goals:

1) Keep *behind* the target (ie. approach them from behind - this is classic dogfighting strategy).
2) Predict where the target ship will be by the time a laser (or projectile, or whatever you fire) reaches that distance and target on that spot - that way if the ship decides to attack, a laser will (hopefully) hit the target ship. Of course, this will never be perfect - the target ship may take evasive action.
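For goal 2, a simple lead calculation that assumes the target keeps its current velocity works reasonably well (sketch only, my own names; a couple of iterations refines the flight-time estimate without any fancy closed-form math):

#include <cmath>

// Returns an aim point such that a projectile fired now at 'projectileSpeed'
// arrives roughly where the target will be, assuming it keeps its velocity.
void LeadTarget(float shooterX, float shooterY,
                float targetX,  float targetY,
                float targetVx, float targetVy,
                float projectileSpeed,
                float& aimX, float& aimY)
{
    aimX = targetX;
    aimY = targetY;
    for (int i = 0; i < 3; ++i)                              // refine the flight-time estimate
    {
        float dx = aimX - shooterX;
        float dy = aimY - shooterY;
        float t  = std::sqrt(dx * dx + dy * dy) / projectileSpeed;
        aimX = targetX + targetVx * t;                       // where the target will be at time t
        aimY = targetY + targetVy * t;
    }
}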

Once you have the attacking ship's goals figured out, the attacking AI could periodically (once or twice a second?) assess all of its options (ie. steer left/right, accelerate, fire) and, if you write a heuristic function which can put a numerical "goodness" value on each of the 6-10 possible game states, just let the AI ship pick the best one and take that action, and voila, instant AI.

For the heuristic function, you would probably want to factor in where the attacking ship is in relation to the attackee ship and whether any lasers/projectiles would hit the ship in that time. Naturally, this could get very complex (in search time) very quickly, so it's probably best to keep it simple - maybe just factor in positional information first, rather than worry about projectiles hitting the ship. It would be nice if later you could factor in projectiles, so an enemy ship would dodge them. That would be cool. But figuring out the most effective heuristic is the most challenging (and fun) part of decent AI.
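The "score every option and pick the best" loop could be structured roughly like this - my own sketch; Simulate() is a toy stand-in for however you advance the game state one AI tick, and the heuristic only looks at distance, as suggested above:

#include <cfloat>
#include <cmath>

enum Action { STEER_LEFT, STEER_RIGHT, ACCELERATE, FIRE, NUM_ACTIONS };

struct State { float x, y, heading, speed; };   // minimal stand-in for the real game state

// Toy model of what one AI tick of each action does to the ship.
State Simulate(State s, Action a)
{
    const float DT = 0.5f, TURN = 0.3f, THRUST = 20.0f;
    if (a == STEER_LEFT)  s.heading -= TURN;
    if (a == STEER_RIGHT) s.heading += TURN;
    if (a == ACCELERATE)  s.speed   += THRUST * DT;
    s.x += std::cos(s.heading) * s.speed * DT;   // drift forward regardless
    s.y += std::sin(s.heading) * s.speed * DT;
    return s;                                    // FIRE doesn't move the ship in this toy model
}

// Toy heuristic: closer to the target is better. A real one would also score
// facing, projectile threats, and so on.
float Goodness(const State& self, const State& target)
{
    float dx = target.x - self.x, dy = target.y - self.y;
    return -std::sqrt(dx * dx + dy * dy);
}

Action ChooseAction(const State& self, const State& target)
{
    Action best = STEER_LEFT;
    float bestScore = -FLT_MAX;
    for (int a = 0; a < NUM_ACTIONS; ++a)
    {
        float score = Goodness(Simulate(self, (Action)a), target);
        if (score > bestScore) { bestScore = score; best = (Action)a; }
    }
    return best;
}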

I'm actually going through a very similar process on my 3D Asteroids game:

http://www.4bitterguys.com/michael/asteroids

The game is incredibly bland in 3D just shooting asteroids, and I thought it might be fun with a few enemy ships attacking you. My enemy ships will have AI based on the above 2 rules, which will hopefully make them smart enough to be a pain in the ass to defeat. I've only had experience with turn-based AI though, so I'll be interested to see how it translates to real-time.

I did some "AI" for a 2D car racing game a few years ago:

http://members.easyspace.com/mdol/maximumoverdrive.htm

There are 10 cars, and each one has to take care of:
- following the road (steering, braking, handbraking etc.)
- avoiding other cars
- avoiding projectiles
- targeting other cars

As you'll see if you download the game, some aspects work better than others. This was way before I studied AI at uni, though.

But this is a similar sort of thing to what you're looking to do, I imagine, just with spaceships. The AI in Maximum Overdrive is just a big bunch of if-then-else statements - ugly, but it got the job done (sort of). Hope all this helps.

--------------------------
www.4bitterguys.com

This topic is closed to new replies.
