
RTS AI

Started by phaelax September 10, 2003 10:50 PM
7 comments, last by phaelax 21 years ago
I'm currently working on an RTS game. Any ideas on how I might get the computer to form its own strategies? You can play any RTS, and after a few games you start to learn how the computer thinks; its strategic plans become predictable. This is what I want to avoid. Should I design several strategies and have the computer randomly pick one each game, or design some sort of self-learning system that improves through trial and error?
Yeah, make it self-learning. When it becomes self-aware we'll see you on the news.

Realistically, though, I think I'd go for option 1.

Break down possible strategies into dimensions of thought. One dimension might be aggression, where a positive value means attack and a negative value means run away.

List all of the dimensions you want, give each one a numeric value, and have the AI act according to those biases. If you then score each game result by how successful the AI was, you can make the computer learn in a crude sense by nudging the biases toward values that produced wins.
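A minimal sketch of that idea (the dimension names, ranges and update rule are my own assumptions, not anything from this thread): the strategy is just a vector of biases, and after each game the stored biases get pulled toward whatever was actually played if it won, and pushed away if it lost.

    import random

    # One bias per "dimension of thought", each in [-1.0, 1.0].
    # Positive aggression = attack, negative = run away, etc.
    strategy = {
        "aggression": 0.2,
        "expansion": -0.1,
        "tech_focus": 0.5,
    }

    LEARN_RATE = 0.1  # how hard one game result pushes the biases

    def mutate(strategy):
        """Small random tweak so the AI doesn't play identically every game."""
        return {k: max(-1.0, min(1.0, v + random.uniform(-0.2, 0.2)))
                for k, v in strategy.items()}

    def learn(strategy, played, result):
        """Crude learning: result is +1 for a win, -1 for a loss.
        Pull the stored biases toward the values actually played if we won,
        away from them if we lost."""
        for k in strategy:
            strategy[k] += LEARN_RATE * result * (played[k] - strategy[k])
            strategy[k] = max(-1.0, min(1.0, strategy[k]))

    # One game: play with a mutated copy, then learn from the outcome.
    played = mutate(strategy)
    result = +1  # pretend the AI won this game
    learn(strategy, played, result)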

Mark
This is a current research topic and hasn't been solved yet. The easier option is to go with option 1: develop several 'canned plans' that your game agent can select from, determine when each is most appropriate to employ, and switch between them. If this switching function is non-deterministic, then the player will have less chance of detecting the actual strategy... or the strategy sequence used.
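For what a non-deterministic switching function might look like, here is a rough sketch (the plan names and scoring functions are placeholders I made up): each canned plan is scored for how appropriate it currently looks, and the pick is a weighted dice roll rather than always taking the top score.

    import random

    # Canned plans and a rough score for how appropriate each one is
    # in the current game situation (the scoring rules are placeholders).
    def score_rush(state):    return 3.0 if state["enemy_defenses"] < 2 else 0.5
    def score_turtle(state):  return 2.0 if state["under_attack"] else 0.5
    def score_boom(state):    return 2.5 if state["minute"] < 10 else 1.0

    PLANS = {"rush": score_rush, "turtle": score_turtle, "boom": score_boom}

    def pick_plan(state):
        """Non-deterministic switching: weight each canned plan by how
        appropriate it looks right now, then roll the dice. The player
        sees sensible behaviour but can't predict the exact choice."""
        names = list(PLANS)
        weights = [PLANS[n](state) for n in names]
        return random.choices(names, weights=weights, k=1)[0]

    state = {"enemy_defenses": 1, "under_attack": False, "minute": 4}
    print(pick_plan(state))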

Option 2 is far more advanced and requires you to model the game player and their strategy in order to determine the optimal response to it. This is a very advanced topic in AI. If you feel you're up to it, give it a go... and if you succeed, write yourself an engine, license it, and sit back and watch the money roll in!

Cheers,

Timkin
quote: Original post by phaelax
after a few games you start to learn how the computer thinks; its strategic plans become predictable. This is what I want to avoid. Should I design several strategies and have the computer randomly pick one each game, or design some sort of self-learning system that improves through trial and error?


i'm coming to this party late, and don't even have much new to offer, but i wanted to post something to inflate my post count so i'll say something anyway!

i'm a big fan of sitting down and asking "how do i do it?". The neat thing about this sort of thing is humans do it all the time, so you know it's possible. Not necessarily how, but you know it can be done.

First question: can the differences be explained by little decisions, or do they absolutely require some large, high-level view? Mitchel Resnick wrote a nifty little book (Turtles, Termites, and Traffic Jams) on how small decisions can make big changes in how things work. If you know your opponent is building a given kind of troop, you might have a simple little routine that responds in some way.

The AI guy (one of?) for Halo did a presentation where he said you should never put randomness in a game. Instead, hook everything to the player. The player is plenty random and probably couldn't play the game the same way twice if he wanted to. Make some routine that is very input-sensitive, so it responds very differently based on small changes in how the opponent plays.
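A small sketch of that kind of input-sensitive routine (the unit names and counter table are made up for illustration): no dice rolls at all, just a response driven entirely by what the opponent has been building.

    from collections import Counter

    # Hypothetical counter table: what we build in response to what we've seen.
    COUNTERS = {"tank": "rocket_infantry", "aircraft": "flak", "infantry": "flamethrower"}

    def respond_to_scouting(enemy_units_seen):
        """Pick our next production purely from the opponent's observed mix.
        Small changes in what the player builds shift our response, so the
        'variety' comes from the player, not from a random number generator."""
        if not enemy_units_seen:
            return "scout"                        # nothing seen yet: go look
        most_common, _ = Counter(enemy_units_seen).most_common(1)[0]
        return COUNTERS.get(most_common, "tank")  # default if we have no counter

    print(respond_to_scouting(["tank", "tank", "infantry"]))  # -> rocket_infantry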

OK, that's all theoretical. i'll try to be more practical. Have you considered writing down all the strategies you've seen? Someone did that a decade or more ago for Civilization and came up with only a few strategies: ostrich (avoid combat, build up R&D), barbarian (the light-tank rush of Civ), and only one or two others.

Now, knowing there are only a few general strategies for a game (out-tech them, out-produce them, get in the first punch, command the sea, command the air, go for nukes, numerous feints/skirmishes, one large assault force, dig in... i can't think of many more), why are games so different? That has to do with more tactical decisions. Games happen at several layers of abstraction. The decision to move two groups to opposite sides of the enemy base is a higher-level decision than deciding which specific unit is going to attack the incoming enemy tank.

So you have a couple of basic decisions:
- How much effort to put into attacking vs. building defenses vs. exploring vs. collecting resources vs. building the tech tree in a given time step?
- Which units to build? This should be specific to the qualities of the map (amount of metal, water, mountains, whatever) and responsive to what the other guy is building.
- How to move units? Group size, movement rate (each at its own rate, or sticking together), avenues of approach, coordination/timing between groups, etc.
- Any intermediate goals you want to hit and, if so, in what order? These could be "control the center mountain pass" or "build the Hanging Gardens wonder" and are presumably optional. If they aren't, then you *have* to do this step.

The above isn't really that many options, and many can be expressed in some small, finite way (emphasis on aggression vs. defense vs. exploration could be percentages in 5% increments, or could just be High, Medium, Low). They are semi-linked. Consider unit building. Your units should already be divided into offensive vs. defensive, so the proportion of each category/type is set by the gameplay strategy. Which specific offensive units you build might be completely independent of the strategy, or they might not; you'll have to decide if it's worthwhile. But if they are independent, then you can vary that part of the strategy independently of the other parts you vary.

The end result? Something like 20 parameters that can each be set independently by a random number generator. And you don't have to define a handful of large, complex strategies; you just define a larger handful of traits.
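For illustration, the randomly set traits could look something like this (the trait names, ranges and increments are invented, not from the post):

    import random

    # Each trait is either a percentage bias or a choice from a small set.
    TRAIT_SPECS = {
        "aggression":     ("percent", 0, 100),      # % of effort spent attacking
        "defense":        ("percent", 0, 100),
        "exploration":    ("percent", 0, 100),
        "group_size":     ("choice", [3, 6, 12]),
        "preferred_unit": ("choice", ["tank", "artillery", "infantry"]),
        "rush_timing":    ("choice", ["early", "mid", "late"]),
    }

    def roll_strategy():
        """Set each trait independently; 5% increments for the percentages."""
        s = {}
        for name, spec in TRAIT_SPECS.items():
            if spec[0] == "percent":
                s[name] = random.randrange(spec[1], spec[2] + 1, 5)
            else:
                s[name] = random.choice(spec[1])
        return s

    print(roll_strategy())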

And defined at that level, you can record how well each combination of variable settings worked. You could even record the results by opponent (assuming he logs in with some ID you can track) and adapt to a specific person.

Does that work? Yeah, probably, a little. The big problem, which here is also an opportunity, is that sometimes the enemy won't cooperate. You try to do a quick fool's-gambit rush to end the game quickly and he turns out to be on an island. Doh! Or you want to focus on tech and resource collection and the enemy invades you immediately. A lot of game playing is reacting to the other guy. If you intend to play a game with resource allocation Defense=Low, Offense=Low, Tech=High and the other guy comes at you with a swarm of choppers, or your spy sees that they have 6 battleships under construction, you had better adjust your allocation strategy.
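A rough sketch of that kind of mid-game adjustment (the threat measure and thresholds are arbitrary assumptions): if scouting reports cross some "we're in trouble" line, effort gets shifted out of tech and into defense.

    def adjust_allocation(alloc, intel):
        """alloc: dict of effort shares that sum to 1.0.
        intel: what scouting has seen. If a serious threat shows up,
        shift effort out of tech and into defense."""
        threat = intel.get("enemy_military_units", 0) + 3 * intel.get("capital_ships_building", 0)
        if threat >= 10:                         # arbitrary "we're in trouble" threshold
            shift = min(0.3, alloc["tech"])      # take up to 30% from tech...
            alloc["tech"]    -= shift
            alloc["defense"] += shift            # ...and put it into defense
        return alloc

    alloc = {"offense": 0.1, "defense": 0.1, "tech": 0.5, "economy": 0.3}
    intel = {"enemy_military_units": 2, "capital_ships_building": 6}
    print(adjust_allocation(alloc, intel))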

Note that, by defining the game through a small set of measurable traits, you can also capture a profile of the player, either overall or over a sequence of time steps (in case they start pacifist and go conqueror halfway through the game). The time-step approach lets you do the whole Markov thing, where you determine the probability of his next action being something (in 10 games, he played pacifist for 5 turns then turned violent in 8 games and stayed pacifist in 2 games, so after the enemy has been a pacifist for 5 turns, he has an 80% chance of turning violent).
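A tiny sketch of that per-opponent transition counting (the state labels are made up); it reproduces the 80% figure from the example above:

    from collections import defaultdict

    # History of observed (state, next_state) transitions for one opponent,
    # e.g. ("pacifist_5_turns", "violent").
    transitions = defaultdict(lambda: defaultdict(int))

    def observe(state, next_state):
        transitions[state][next_state] += 1

    def predict(state):
        """Return the probability of each next state given what we've seen."""
        seen = transitions[state]
        total = sum(seen.values())
        return {s: n / total for s, n in seen.items()} if total else {}

    # The example from the post: 8 of 10 games he turned violent after
    # opening with 5 pacifist turns, 2 games he stayed pacifist.
    for _ in range(8): observe("pacifist_5_turns", "violent")
    for _ in range(2): observe("pacifist_5_turns", "pacifist")
    print(predict("pacifist_5_turns"))   # {'violent': 0.8, 'pacifist': 0.2}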

As for another option, have i shown you the little comp.ai.games post i wrote on the matching law? It might help give your game some random elements in the small moves while you're waiting to respond to whatever the enemy does.

-baylor
Use a neural net (Clicky). Those are cool.
You can use coevolution of two computer AIs to make them better. I can't find the site, but there was one about an organism and a parasite: as the organism gets better, so does the parasite, and so on...
Anyhow

_________________________________________
"Why, why, why does everyone ask 'why' when 'how' is so much more fun?"
-Spawn 1997
_________________________________________
"You're just jealous because the voices only talk to me"
I am actually working on the planning/implementation stage of using GP (genetic programming) to do this. I will hopefully use a tournament setup and lots of modules that perform each role as it would be performed in real life, i.e. ranks... a head program that worries about high-level stuff and basic strategy, which orders other programs to implement its high-level ideas; those interpret the ideas and give lower-level commands to sub-programs, until actual actions take place at the lowest level, in the manufacturing plants, battlefields, etc...

I really like the hierarchical control myself... it seems to work well in real life.
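As a bare-bones illustration of that kind of hierarchy (the class names and orders are invented, and a real GP setup would evolve the decision code inside each level rather than hand-writing it):

    # Top level decides strategy, middle level turns it into orders,
    # bottom level turns orders into unit-level actions.
    class General:
        def decide(self, world):
            # high-level call: where to commit the army this turn (placeholder logic)
            return "attack_east" if world["enemy_strength_east"] < 10 else "hold"

    class Captain:
        def interpret(self, order):
            # translate the strategic order into squad-level commands
            if order == "attack_east":
                return [("squad_1", "move", "east_bridge"), ("squad_2", "move", "east_hill")]
            return [("squad_1", "guard", "base"), ("squad_2", "guard", "base")]

    class Squad:
        def execute(self, command):
            # lowest level: issue a concrete unit action
            squad, verb, target = command
            print(f"{squad}: {verb} -> {target}")

    world = {"enemy_strength_east": 4}
    for cmd in Captain().interpret(General().decide(world)):
        Squad().execute(cmd)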

Dwiel
Lol... a good friend and I sat down about 5 years ago and designed a very similar hierarchical control/planning system. We didn't use GP though, but rather more classical planning and learning protocols. Funnily enough, we too thought it would be particularly suited to process control on a factory production line!

Of course, we never got around to implementing it... he moved to London shortly after we finished the design and I was heavily into my PhD at the time. We did design and implement our test bed though (although I've no idea where the code is now)... it was a variant on the game Pengi. Rather than having multiple bees chasing one penguin, we had multiple NPC penguins, ordered around by a King Penguin, trying to squish one bee (the player). The player's job was to sting all the penguins (including the King) before they got squished!

The worker penguins determined their individual actions based on local measures, local states and directives from the King, while the King penguin utilised observations from the worker penguins to determine larger scale (near globally optimal) partial plans (partial plans are plans that are not fully specified in terms of primitive actions).
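A tiny sketch of that worker/King split (everything here is invented for illustration): the King hands out a partial plan, just a target per worker rather than a sequence of primitive actions, and each worker fills in its own primitive action from local state.

    # The King sees everyone's reports and hands out a partial plan:
    # a target area per worker, not a sequence of primitive actions.
    def king_plan(reports, bee_pos):
        return {name: bee_pos for name in reports}   # "converge on the bee"

    # Each worker turns its directive into a primitive action using only
    # its own local state (its position and what it can see nearby).
    def worker_step(my_pos, directive):
        dx = directive[0] - my_pos[0]
        dy = directive[1] - my_pos[1]
        if abs(dx) >= abs(dy):
            return "move_east" if dx > 0 else "move_west"
        return "move_south" if dy > 0 else "move_north"

    reports = {"worker_1": (0, 0), "worker_2": (5, 5)}
    bee_pos = (3, 1)
    plan = king_plan(reports, bee_pos)
    for name, pos in reports.items():
        print(name, worker_step(pos, plan[name]))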

You're welcome to use this idea as a test bed for your system. If you need more details, just holler... I'm certain I still have all my notes from the project stored in one of my boxes!

Cheers,

Timkin
Ken Stanley and some guys at the Texas gamedev workshop are working on a game featuring neuroevolution. Basically it is the player's job to 'train' troops by evolving strategies.

It's a very novel idea and I hope they have some success.

http://gamedev.ic2.org/tiki-index.php?page=LegionsOfInfluence



My Website: ai-junkie.com | My Book: AI Techniques for Game Programming
In the RTS I finished last year, the AI was done by putting each computer unit into a group (possibly on its own, possibly with other units). Each group was given a specific purpose; these were:

1) Defense
2) Frontal attack
3) Flank

There were 3 defense groups, with one being very sensitive to anyone near the base and another only caring if people get really close.

The frontal attack group just continually attacks you from the front.

The flank group would go to a spot on the map, wait for all the group members to arrive, and then attack you in force.

It was a very simple AI, but it worked pretty well. I should also point out that the game plays fairly differently to most RTS games around (units never die, there are no ranged attacks, and "attacking" involves sacrificing a unit for another, so in a 3 vs 2 situation the group of 3 will always win, regardless... but after a few seconds, all units respawn in locations which aren't too brilliant).

The basic way I made the strategy change was just by changing the number of units in each group.
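A sketch of that grouping scheme (the role names, counts and radii are illustrative, not taken from the game): the "strategy" is just how many units get dealt to each role, plus a trigger radius for the defense groups.

    import math, random

    # Role -> (number of units, trigger radius for defense roles, None otherwise).
    # Changing these numbers is the whole strategy change.
    ROLES = {
        "defense_inner": (4, 10),   # reacts only when enemies get really close
        "defense_outer": (4, 30),   # very sensitive, reacts to anyone near the base
        "frontal":       (6, None), # keeps attacking from the front
        "flank":         (6, None), # gathers at a rally point, then attacks in force
    }

    def assign_groups(units):
        """Deal units out to roles until each role's quota is filled."""
        groups, pool = {role: [] for role in ROLES}, list(units)
        random.shuffle(pool)
        for role, (count, _radius) in ROLES.items():
            for _ in range(min(count, len(pool))):
                groups[role].append(pool.pop())
        return groups

    def defense_triggered(role, base_pos, enemy_pos):
        """A defense group reacts only when an enemy is inside its radius."""
        radius = ROLES[role][1]
        return math.dist(base_pos, enemy_pos) <= radius

    groups = assign_groups([f"unit_{i}" for i in range(20)])
    print({role: len(g) for role, g in groups.items()})
    print(defense_triggered("defense_inner", (0, 0), (5, 12)))  # False: too far away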

Anyway, if you want to see how it works in practice, you can download it from here; it is a fairly small download (<2 MB from memory).


For extending this system, I can think of a few things:

1) More diverse groups.
2) Changing the number of units in groups as the game progresses (for a conventional RTS, where unit numbers change constantly, this is a necessity).
3) Learning about the map as the game goes on.

It isn't too hard to build up a layer on top of the map which shows where most conflict happens, and where you have been closest to achieving the goal. If you can do this well, and end up with a nice smooth map which rates squares according to their previous successes, you can feed this into a pathfinding algorithm and you have a learning AI, which will be pretty smart.
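A minimal sketch of such a learned map layer (the grid size and weights are arbitrary): record conflict per square, smooth it, and feed it into the pathfinder's step cost.

    # 2D grid layer recording how much conflict has happened in each square.
    W, H = 16, 16
    conflict = [[0.0] * W for _ in range(H)]

    def record_battle(x, y, intensity=1.0):
        conflict[y][x] += intensity

    def smoothed(x, y):
        """Average a square with its neighbours so the layer isn't too spiky."""
        total, n = 0.0, 0
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                nx, ny = x + dx, y + dy
                if 0 <= nx < W and 0 <= ny < H:
                    total += conflict[ny][nx]
                    n += 1
        return total / n

    def step_cost(x, y, danger_weight=5.0):
        """Cost for a pathfinder (e.g. A*): base terrain cost plus a penalty
        for squares where previous games saw a lot of fighting."""
        return 1.0 + danger_weight * smoothed(x, y)

    record_battle(8, 8, 3.0)
    print(step_cost(8, 8), step_cost(0, 0))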

Trying is the first step towards failure.

