Monster thinking in an action RPG

Started by August 23, 2012 05:30 PM
18 comments, last by AndreaTucci 12 years, 2 months ago
Hello everybody,
I'm developing an action RPG with some university colleagues. At the moment we're working on the monsters' AI design, and we would like to implement a sort of "utility-based AI": we have a "thinker" that assigns a numeric value to each of the monster's possible decisions, we choose the highest-scoring one (or the most appropriate one, depending on the monster's IQ), and we push it into the monster's collection of decisions (like a goal-driven design pattern).
One solution we found is to write a mathematical formula for each decision, using all the parameters that matter for its evaluation (so for a spell decision we might have MP, distance from the player, the player's HP, etc.). The formula also has coefficients representing aspects of the monster's behaviour (this way we can alter a formula by changing its coefficients).
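Just to make the idea concrete, here is a rough C++ sketch of what I mean (all names and numbers are invented, not our actual code):
[code]
// Minimal sketch of the "thinker" idea: each decision scores itself from
// the current situation, and per-monster coefficients let the same formula
// produce different personalities.
#include <algorithm>
#include <memory>
#include <vector>

struct Situation {
    float distanceToPlayer;
    float monsterMp;
    float monsterHp;
    float playerHp;
};

struct Coefficients {        // tuned per monster type / IQ
    float aggression = 1.0f;
    float caution    = 1.0f;
};

class Decision {
public:
    virtual ~Decision() = default;
    virtual float Score(const Situation& s, const Coefficients& c) const = 0;
    virtual void Execute() = 0;
};

class FireballDecision : public Decision {
public:
    float Score(const Situation& s, const Coefficients& c) const override {
        if (s.monsterMp < 20.0f) return 0.0f;               // can't afford it
        float range  = std::clamp(s.distanceToPlayer / 15.0f, 0.0f, 1.0f);
        float finish = 1.0f - std::clamp(s.playerHp / 100.0f, 0.0f, 1.0f);
        return c.aggression * (0.6f * range + 0.4f * finish);
    }
    void Execute() override { /* cast the spell */ }
};

// The "thinker": evaluate every decision and hand back the best-scoring one.
Decision* Think(const std::vector<std::unique_ptr<Decision>>& options,
                const Situation& s, const Coefficients& c) {
    Decision* best = nullptr;
    float bestScore = 0.0f;
    for (const auto& d : options) {
        float score = d->Score(s, c);
        if (score > bestScore) { bestScore = score; best = d.get(); }
    }
    return best;   // nullptr means "nothing worth doing"
}
[/code]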
Now...
I've also read about how fuzzy logic works; I was fascinated by it and by the many ways it can be extended. I was wondering how we could use this technique to make our AI simpler, i.e. create evaluations with fuzzy rules such as IF player_far AND mp_high AND hp_high THEN very_desirable (for a spell with a long casting time and a high MP cost) and then defuzzify the result. This way it's also simple to create a monster's behaviour, for example by writing ad-hoc rules for every monster IQ category. But is it sensible to use fuzzy logic in a game with as many parameters as an RPG? Is there a way of merging these two techniques? Are there better AI design techniques for evaluating a monster's choices? Thanks
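For example, this is the kind of fuzzy rule I have in mind (again just an invented sketch, taking AND as the minimum and using a trivial defuzzification):
[code]
// Minimal fuzzy-rule sketch, assuming standard min/max operators and a
// trivial weighted-output defuzzification.
#include <algorithm>

// Membership functions return a truth degree in [0, 1].
float PlayerFar(float distance) { return std::clamp((distance - 5.0f) / 10.0f, 0.0f, 1.0f); }
float MpHigh(float mp)          { return std::clamp(mp / 100.0f, 0.0f, 1.0f); }
float HpHigh(float hp)          { return std::clamp(hp / 100.0f, 0.0f, 1.0f); }

// IF player_far AND mp_high AND hp_high THEN very_desirable
float BigSpellDesirability(float distance, float mp, float hp) {
    float ruleStrength = std::min({PlayerFar(distance), MpHigh(mp), HpHigh(hp)});
    const float veryDesirable = 1.0f;      // peak value of the output set
    return ruleStrength * veryDesirable;   // crisp score the thinker can compare
}
[/code]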
AndreaTux~
I am no fan of fuzzy logic. It is the kind of thing that sounds fascinating when you first hear about it, but after some time you realize that the problem of dealing with uncertainty mathematically is completely captured by probability theory, which is way more powerful than fuzzy logic.

If you are thinking about writing a formula to evaluate which spell to cast, you are probably going about this the wrong way. Think of all the possible actions the monster might take in the current situation (including spells, attacks, taking cover, calling for reinforcements, running away, asking for mercy and crying for mommy), and then think of what makes each action desirable or not desirable. Put it down in numbers, and you are golden.
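To make "put it down in numbers" concrete, here is one possible sketch (hypothetical names and weights, not a prescription): every action declares how much it cares about a few normalized facts about the situation, and its desirability is just a weighted sum.
[code]
#include <cstdio>

struct Facts {           // all normalized to [0, 1] beforehand
    float myHealthLow;
    float targetHealthLow;
    float outnumbered;
};

struct ActionProfile {
    const char* name;
    float wMyHealthLow, wTargetHealthLow, wOutnumbered, bias;
};

float Desirability(const ActionProfile& a, const Facts& f) {
    return a.bias
         + a.wMyHealthLow     * f.myHealthLow
         + a.wTargetHealthLow * f.targetHealthLow
         + a.wOutnumbered     * f.outnumbered;
}

int main() {
    Facts f{0.8f, 0.2f, 0.9f};   // badly hurt, healthy hero, outnumbered
    ActionProfile actions[] = {
        {"attack",        -0.5f,  0.8f, -0.3f,  0.5f},
        {"run away",       0.9f, -0.4f,  0.7f,  0.0f},
        {"cry for mommy",  1.0f,  0.0f,  1.0f, -0.8f},
    };
    for (const auto& a : actions)
        std::printf("%-14s %.2f\n", a.name, Desirability(a, f));
}
[/code]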

[quote]
Hello everybody,
I'm developing an action RPG with some university colleagues. At the moment we're working on the monsters' AI design, and we would like to implement a sort of "utility-based AI": we have a "thinker" that assigns a numeric value to each of the monster's possible decisions, we choose the highest-scoring one (or the most appropriate one, depending on the monster's IQ), and we push it into the monster's collection of decisions (like a goal-driven design pattern).
One solution we found is to write a mathematical formula for each decision, using all the parameters that matter for its evaluation (so for a spell decision we might have MP, distance from the player, the player's HP, etc.). The formula also has coefficients representing aspects of the monster's behaviour (this way we can alter a formula by changing its coefficients).
[/quote]


This much all sounds great!


[quote]
Now...
I've also read about how fuzzy logic works; I was fascinated by it and by the many ways it can be extended. I was wondering how we could use this technique to make our AI simpler, i.e. create evaluations with fuzzy rules such as IF player_far AND mp_high AND hp_high THEN very_desirable (for a spell with a long casting time and a high MP cost) and then defuzzify the result. This way it's also simple to create a monster's behaviour, for example by writing ad-hoc rules for every monster IQ category. But is it sensible to use fuzzy logic in a game with as many parameters as an RPG? Is there a way of merging these two techniques? Are there better AI design techniques for evaluating a monster's choices? Thanks
[/quote]

This is where I think you've gotten off track a bit.

A utility system does not need fuzzy logic, and in fact fuzzy logic is going to make your system harder to figure out and fine-tune. The problem with FL is that it can act very unpredictably and unintuitively, which is not going to help you get your project done ;-)

The power of utility systems is that you can just write those simple formulae, and with a good graphing tool, you can analyze exactly how the agents will behave. You can also map the behavior directly back into numbers, and use that to find when and where you need to tweak your formulae a bit.

I would suggest going ahead with your initial idea. It sounds like you're very close to getting a good utility-based system conceptualized, and that's a great start. I don't think you'll find it hard to manage once you get into it, and my gut is that you'll benefit a lot by keeping it as simple and straightforward as possible.

Best of luck and keep us posted on your progress! :-)

Wielder of the Sacred Wands
[Work - ArenaNet] [Epoch Language] [Scribblings]

Thank you for your answer! But when you say "think of what makes each action desirable or not desirable", what exactly do you mean? I think a mathematical formula can give me a valuation, but I want to find a better solution (if one exists)! A good valuation formula is hard to write, and adding coefficients that represent monster-specific behaviour (so we can manipulate the formula through them) doesn't look very elegant to me! What do you suggest?
AndreaTux~
A simple example of what might make a spell desirable or undesirable would be damage and "mana" usage. Obviously these are on two different axes, and you want to increase one while decreasing the other. If that is all we are considering, this is fairly straightforward. However, once we start adding in things like the resistance of the target (e.g. a fireball vs. something that resists fire), other formulas get mixed in. What about damage that decreases over distance? Now distance to the target is an input. What about the time it takes to cast? Sometimes a shorter cast time is preferable to more damage.

What all of this can be combined to create is the concept of "expected utility" -- that is, what can we expect to be the total payoff of this action? By comparing the expected utility of all the possible actions, we can see which one is going to be the optimal one simply by picking the highest valued one.
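As a rough illustration (made-up names and numbers, not production code), folding those factors into a single expected-utility score per spell could look something like this:
[code]
struct SpellStats {
    float baseDamage;
    float manaCost;
    float castTime;        // seconds
    float falloffPerMeter; // damage lost per meter of distance
};

struct TargetInfo {
    float distance;
    float fireResistance;  // 0 = none, 1 = immune
};

float ExpectedUtility(const SpellStats& spell, const TargetInfo& target,
                      float manaWeight, float timeWeight) {
    float damage = spell.baseDamage - spell.falloffPerMeter * target.distance;
    if (damage < 0.0f) damage = 0.0f;
    damage *= (1.0f - target.fireResistance);   // expected, not guaranteed
    return damage
         - manaWeight * spell.manaCost          // cost axes pull the score down
         - timeWeight * spell.castTime;
}
[/code]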

If you want, I wrote an article for Game Developer Magazine in January 2011 that is reprinted here. That article covers some quick techniques for converting concrete data into conceptual values. For a lot more information on creating utility-based systems, there might be a link to a book around here that is pretty much specifically dedicated to doing mathematics for behaviors in game AI.
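Not a substitute for the article, but the general flavor of converting concrete data into conceptual values is to push every raw number through a response curve so that each consideration lands on the same 0..1 scale, for example:
[code]
#include <algorithm>
#include <cmath>

// Linear curve: 0 at `lo`, 1 at `hi`.
float Linear(float x, float lo, float hi) {
    return std::clamp((x - lo) / (hi - lo), 0.0f, 1.0f);
}

// Logistic curve: soft threshold around `mid`, steepness `k`.
float Logistic(float x, float mid, float k) {
    return 1.0f / (1.0f + std::exp(-k * (x - mid)));
}

// e.g. "low on mana" as a concept:  lowMana = 1.0f - Linear(currentMp, 0.0f, maxMp);
[/code]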

Dave Mark - President and Lead Designer of Intrinsic Algorithm LLC
Professional consultant on game AI, mathematical modeling, simulation modeling
Co-founder and 10 year advisor of the GDC AI Summit
Author of the book, Behavioral Mathematics for Game AI
Blogs I write:
IA News - What's happening at IA | IA on AI - AI news and notes | Post-Play'em - Observations on AI of games I play

"Reducing the world to mathematical equations!"

I think all the advice you've received so far is great, but just to give an alternative idea: what if you approached the problem not as a perfectly rational actor weighing the costs and benefits of perfectly measured variables, but instead as a monster trying to decide what the hell to do with this crazy hero? Putting myself in the place of said monster, I can think of a couple of likely cognitive strategies:
a) I am an instinctual sort of creature, like an alligator or maybe an ogre, and in the clutch I tend to rely upon particular strategies. These strategies will most likely be widely applicable and crudely effective (having gotten me this far in life) but due to their generality may be exploitable. So as an alligator if I smell a hero crossing my swamp I likely try and take a bite out of him. As an Ogre, I put my club to use. There is probably not going to be much decision-making involved for an instinctual sort of creature.
b) I am a sneaky, crafty, or cunning sort of monster, like a goblin shaman or a street urchin. I am not very intelligent, but I may come up with some unexpected way to approach a confrontation. In my view, the cunning monster isn't so much comparing a bunch of options and deciding on the cleverest one; it's just more likely to try something unorthodox instead of always relying on a standard approach. So, again, I don't need complex decision-making; what I really need is some mechanism for simulating creativity (a simple solution might be a pool of prescripted behaviors that are randomly selected from so as to appear emergent; see the sketch after this list). The key to my cleverness as a goblin shaman isn't that I pick the best possible move, but that I tend to try unexpected things, potentially gaining an advantage.
c) I am a normal, competent, human-equivalent intelligent being. This one is pretty complex, because there is such a range of cognitive behavior in humans, let alone fantasy races. However, I would say some good approximations are certainly possible. For one thing, depending on how your system works and whether it can handle this, you might consider the fact that the most important (and often the only) decision that a typical soldier makes in a brief conflict is whether to fight, and a lot of the time only one side makes even that decision. So, an orc hunter might put some effort into sizing up his opponent and deciding whether he feels lucky, but once he's charged in, he probably isn't spending a lot of time deciding who to swing his axe at. I would say, there are a few decisions (whether to fight, whether to run, maybe others?) that could benefit from a weighted statistical model *or* a fuzzy approach, but honestly for these kinds of monsters the choice of which action to take should probably be really simple.
d) I am a highly intelligent being, such as a wizard, a battlefield commander, an elder dragon, or whatever. For this category, I am somewhat divided. Traditionally, games tend to assign the least flexible, and least intelligent cognitive simulations (almost always a simple, scripted pattern) to ostensibly the most intelligent type of enemy. I understand why, as games have to maintain a certain level of fun, and often have to follow certain conventions to do so, but I still dislike it. If I am a seasoned, veteran troop commander, I am not entering a battle without a plan that stands a high chance of success (unless I'm in a desert badum-bum-tsh!). So, for these kinds of monsters, I could see employing a fairly elaborate cognitive model, perhaps even a perfectly rational algorithmic model. But, if you plan on keeping with RPG tradition, then actually you don't even need that for these guys, just some state machines and scripts will do it.
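As a tiny illustration of the "simulated creativity" idea in (b), a weighted random pick from a pool of prescripted tricks might be all you need (names invented):
[code]
#include <random>
#include <string>
#include <vector>

struct ScriptedBehavior {
    std::string name;   // e.g. "throw sand", "feign retreat", "summon rats"
    float weight;       // how often this trick tends to come up
};

const ScriptedBehavior& PickTrick(const std::vector<ScriptedBehavior>& tricks,
                                  std::mt19937& rng) {
    std::vector<float> weights;
    for (const auto& t : tricks) weights.push_back(t.weight);
    std::discrete_distribution<size_t> dist(weights.begin(), weights.end());
    return tricks[dist(rng)];
}
[/code]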

I hope this helps. I'm not disagreeing with anything else said, just offering my take on how certain monsters could think in battle. I probably over-simplified the human-types, because there is really a whole lot that you could do there. Good luck.
Agreed with the above. I hope this isn't going off topic, but I have some thoughts on applying monster IQ to a utility-based system. In my mind the 3 factors below can change monster behaviour significantly:

  1. Weight of tendency to keep doing the same thing.
  2. Weight of preferred actions.
  3. Depth of plan to explore.

The first one would control how predictable and short-sighted monsters are, e.g. keep trying to hit you even if they will die as a result. The second would control personal preferences and also perhaps predictability. For example preferred attack method, or just a preference to attack rather than defend. A high enough weight would make that option the only option unless it has zero utility (e.g. wouldn't work). The depth of plan would allow for real human-like activity.
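To sketch how the first two factors could plug into a plain utility score (the third is really a search depth, so it lives in the planner instead), something like this, with made-up names:
[code]
#include <string>
#include <unordered_map>

struct Personality {
    float stickiness;                               // factor 1: bonus for repeating the last action
    std::unordered_map<std::string, float> likes;   // factor 2: bonus for preferred actions
    int   planDepth;                                // factor 3: how far ahead to search
};

float AdjustedScore(float rawUtility, const std::string& action,
                    const std::string& lastAction, const Personality& p) {
    float score = rawUtility;
    if (action == lastAction) score += p.stickiness;
    auto it = p.likes.find(action);
    if (it != p.likes.end()) score += it->second;
    return score;
}
[/code]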
Thanks everybody. This is our first game AI, so it is a bit hard to choose the right way of representing things. I'm going to take a look at Mr. Dave Mark's book to get an idea of how to implement a utility-based AI.
AndreaTux~
And what about artificial neural networks? I've seen that I can teach them how to act by giving them inputs and the related outputs, but I'm wondering whether that's suitable for an RPG (we have a lot of information to look at), even though it would be nice to be able to say something like "in these situations I expect these outputs" and so on. I'm trying to explore all the usual AI techniques, so please forgive me if I say something wrong!
AndreaTux~
To quote what I believe to be the industry opinion on neural nets: there are niches for these things, but this is not one of them. Neural networks are inherently unpredictable, which is rarely desirable when writing a game. They're currently more useful for classifying situations than for choosing actions. If you want to define actions by example rather than with a neural net, you could use case-based reasoning instead.

