Any new fps games which have learning AI (like quake)?

Started by July 05, 2013 06:38 PM
6 comments, last by cr88192 11 years, 2 months ago

Hi, just saw this (http://i.imgur.com/dx7sVXj.jpg), and was just curious if there were any new arena-based games similar to Quake and UT that use this style of AI, where the bots actually store good/bad tactics for later use?

I'd just be interested to try running one for a while and actually see what happens when the bots are left to learn by themselves. I had a quick google and found that most games don't seem to go for this approach anymore, but I don't know much at all about this sorta stuff, so I thought it'd be worth asking here :)

I played the crap out of Quake 3 and the original Unreal Tournament - neither game had very good AI.

Quake 4 was the first game in the series that had enemies that actively took cover and attempted to flank the player.

Edit: 4chan is not a good source of information - never EVER quote from there again!

I cannot remember the books I've read any more than the meals I have eaten; even so, they have made me.

~ Ralph Waldo Emerson


Quake AI doesn't learn anything; Carmack himself tweeted just the other day that they don't use neural networks at all. The Reaper bots for Quake (1) did, though those were just a third-party mod.

Correct me if I'm wrong, but in Half-Life the deathmatch bots would leave nodes (they called them apples) as they ran around. These could then be referred to later to discover the quickest route to the player. Hardly what I would consider good "AI".

I could be wrong, but I believe that was part of the path finding operation.

It doesn't learn. The player drops cookies and the AI eats them, following the player. Eventually, after you've dropped enough cookies, the AI can then follow the cookies and do everything you do.

OM NOM NOM!!!

Learning AI! Yay!

OM NOM NOM!

Oh noes, pwned by the cookie monster!

Edit: Ninja'd by the apple man. The bots drop them too, nice to know.
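The cookie/crumb mechanic described above is easy to sketch. Here's a toy Python version (the names and structure are mine, not Half-Life's actual code): entities drop position markers as they move, and a follower walks toward the oldest crumb it hasn't eaten yet.

```python
# Toy breadcrumb trail: the player drops crumbs, the bot "eats" them in
# order and so retraces the player's route. Not real game code.
from collections import deque

class CrumbTrail:
    def __init__(self, max_crumbs=64):
        self.crumbs = deque(maxlen=max_crumbs)  # oldest crumbs fall off

    def drop(self, pos):
        self.crumbs.append(pos)

    def next_target(self):
        # the follower eats the oldest crumb and heads for its position
        return self.crumbs.popleft() if self.crumbs else None

trail = CrumbTrail()
for pos in [(0, 0), (1, 0), (1, 1)]:   # player runs around dropping crumbs
    trail.drop(pos)

path = []
while (target := trail.next_target()) is not None:
    path.append(target)                # follower retraces the player's route
```

When the trail runs dry, a real bot would fall back to whatever its normal wandering or pathfinding behavior is.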


Edit: 4chan is not a good source of information - never EVER quote from there again!

That was a nice story at least... gave me goosebumps.

It draws from that sci-fi theme "AI is slowly learning. Someone leaves it running in the background and forgets about it, later on it becomes super-smart."

Quake AI doesn't learn anything; Carmack himself tweeted just the other day that they don't use neural networks at all. The Reaper bots for Quake (1) did, though those were just a third-party mod.

yeah, at least the standard Quake 1/2 AIs were basically just finite state machines.

their AI logic was basically just (when angry):

turn in direction of player (if the timer allows);

head forwards;

if collided with something, head in a random direction for a certain random amount of time;

if we have line of sight with the player, fire their weapon.

then they have a few states: standing idle, walking idly, running after the player, attacking, ...

hacking on the AIs to add things like path-finding made them considerably more scary, in that if you got an enemy mad and ran off somewhere, it would catch up (rather than just get stuck in a corner somewhere).

there was logic for misc things though, like to prevent them from running off edges, basically by preventing movements which would cause their bounding-box to no longer be on the ground.

when idle, an AI could optionally follow a sequence of nodes (generally organized into a loop), otherwise it would just stand in one spot until it sees something.
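as a rough illustration (my own toy code, not the actual QuakeC), the angry-state loop described above maps onto a small FSM something like this:

```python
# Toy FSM monster, sketching the Quake-style logic described above:
# idle until the player is seen, chase by heading toward the player
# (wandering randomly for a while after a collision), attack on line
# of sight. Names and structure are illustrative assumptions.
import random

IDLE, CHASE, ATTACK = "idle", "chase", "attack"

class Monster:
    def __init__(self):
        self.state = IDLE
        self.wander_timer = 0    # ticks left of post-collision wandering
        self.heading = 0.0       # facing, in degrees

    def think(self, sees_player, has_line_of_sight, collided, angle_to_player):
        if self.state == IDLE:
            if sees_player:
                self.state = CHASE           # woke up: run after the player
        elif self.state == CHASE:
            if self.wander_timer > 0:
                self.wander_timer -= 1       # keep wandering after a collision
            elif collided:
                self.heading = random.uniform(0, 360)
                self.wander_timer = random.randint(5, 20)
            else:
                self.heading = angle_to_player   # turn toward the player
            if has_line_of_sight:
                self.state = ATTACK
        elif self.state == ATTACK:
            if not has_line_of_sight:
                self.state = CHASE           # lost sight: back to chasing
        return self.state
```

the real thing has more states (idle walking, pain, death, ...) and the edge/ledge checks mentioned above, but the per-tick dispatch-on-state shape is the same.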

IIRC, the Quake 3 AIs were fairly similar, mostly just working by randomly following waypoints, with some special nodes set up to indicate for the AI to visit them (with the AI having some logic to compel it to visit all the nodes).

if it sees an enemy along the way, it will shoot at them.
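that "wander the waypoints, but be compelled to cover all of them" behavior can be sketched like this (a toy of my own, not the actual Q3 bot code; the map graph is made up):

```python
# Toy waypoint wanderer: pick a random connected waypoint, preferring
# ones not yet visited, so the bot eventually covers the whole graph.
import random

def next_waypoint(current, links, visited):
    """Prefer unvisited neighbors; fall back to any neighbor."""
    neighbors = links[current]
    unvisited = [w for w in neighbors if w not in visited]
    choice = random.choice(unvisited or neighbors)
    visited.add(choice)
    return choice

# hypothetical little map graph
links = {"A": ["B", "C"], "B": ["A", "C"], "C": ["A", "B"]}
visited = {"A"}
pos = "A"
for _ in range(10):
    pos = next_waypoint(pos, links, visited)
# by now every node has been visited at least once
```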

FSMs can generally do a fair amount though, and don't really eat the CPU as badly.

beyond this, there is sometimes the "poor man's neural net", which is basically using matrix multiplies and autocorrelation and similar.

related to this is the use of Markov chains and Bayesian networks and similar.

these can generate interesting results without being too expensive (but are ultimately fairly limited, as-in, they tend not to exhibit any real "intelligence" and only learn within a fairly limited scope).
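as a small illustration of the Markov-chain idea (my own toy, with made-up action names): count transitions between observed actions, then sample plausible follow-ups. cheap, but it only "learns" within that narrow scope, as noted above.

```python
# Toy first-order Markov chain over observed bot/player actions:
# train() counts transitions, sample_next() draws a weighted successor.
import random
from collections import defaultdict

def train(sequence):
    counts = defaultdict(lambda: defaultdict(int))
    for a, b in zip(sequence, sequence[1:]):
        counts[a][b] += 1
    return counts

def sample_next(counts, state):
    options = counts[state]          # successor -> count
    total = sum(options.values())
    r = random.uniform(0, total)
    for nxt, c in options.items():   # weighted roulette-wheel pick
        r -= c
        if r <= 0:
            return nxt
    return nxt                       # float-rounding fallback: last option

chain = train(["strafe", "shoot", "strafe", "shoot", "reload", "strafe"])
# after "shoot" the chain can go to "strafe" or "reload", weighted by counts
```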

I have a few times experimented with using genetic programming for things (*1), but at least in my tests generally got not-very-compelling results: lots of CPU time for usually not a whole lot interesting going on. similarly, all they really seem to be good at is finding a way to cheat the test (such as by exploiting bugs in the test data or the interpreter), so a lot more time ends up mostly going into beating on the tests to try to prevent cheating, ...

another limiting factor (for using GP in game-AIs), is that there aren't really a lot of good situations to actually test it with (excepting something like an MMO where the mass of players "doing stuff" could be used to train the AIs, where the AIs try to get kills and avoid being killed).

basically, it would require something much more actively competitive, either actively competing against human players, or at least competing against other GP AIs.

but then (assuming no sort of barrier or similar preventing it), you would probably just end up with waves of monsters heading into newbie areas or spawn-camping and similar, or otherwise just being annoying (I suspect dumber monsters are preferable for most players, and if the monsters are too difficult or unpredictable many players would likely become frustrated).

*1: basically where one has sequences of specialized bytecode (or similar) in a specialized interpreter, with the GP system randomly mutating and recombining them. in my past tests, this "bytecode" has typically been ASCII-based (generally with each "program" as a collection of strings, with 1 or more entry points, and any other strings usable for internal purposes).
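a much-simplified sketch of that kind of setup (my own toy, using a 3-op character "bytecode" rather than the poster's ASCII string system): programs are strings of one-character ops, evolved by random mutation and crossover against a fitness function.

```python
# Toy genetic programming: evolve a 6-op program (over +1, -1, *2)
# whose output, starting from 1, lands near a target value.
import random

OPS = "+-*"                    # hypothetical instruction set

def run(program, x):
    """Tiny interpreter: apply each one-character op in order."""
    for op in program:
        if op == "+": x += 1
        elif op == "-": x -= 1
        elif op == "*": x *= 2
    return x

def mutate(program):
    i = random.randrange(len(program))
    return program[:i] + random.choice(OPS) + program[i + 1:]

def crossover(a, b):
    i = random.randrange(1, min(len(a), len(b)))
    return a[:i] + b[i:]       # splice a prefix of one onto a suffix of the other

def evolve(target, generations=200, pop_size=30, length=6):
    pop = ["".join(random.choice(OPS) for _ in range(length))
           for _ in range(pop_size)]
    fitness = lambda p: -abs(run(p, 1) - target)
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:10]   # keep the best, breed the rest from them
        pop = survivors + [mutate(crossover(random.choice(survivors),
                                            random.choice(survivors)))
                           for _ in range(pop_size - 10)]
    return max(pop, key=fitness)

best = evolve(target=10)
```

even on a toy like this you can see the failure mode described above: the population happily converges on whatever the fitness function actually rewards, which is not always what you meant to reward.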

usually, for most things, it is faster/easier to just code it up directly.

don't know what all has actually been used in game AIs though.

This topic is closed to new replies.
