
On the usefulness of so called computational intelligence methods in game development

Started by October 21, 2007 01:44 PM
14 comments, last by Steadtler 17 years, 1 month ago
Continuing the discussion about CI in games, out of the thread about Togelius's work.
Quote: Original post by alexjc
Quote: Original post by Steadtler Any AI technique can be driven by dynamic data if properly designed to be so.
Oh, absolutely. But keep in mind CI offers tools to gather dynamic data in a useful form, so the "if properly designed to do so" bit is made much easier if you don't reinvent the wheel.
Useful form? Au contraire, CI puts the data in a form that is practically impossible to interpret!
Quote: Original post by alexjc If research doesn't make it easier for me to code my FPS bot then it's not a real-life problem? Why did Colin McRae use a NN? Novelty? Jeff Hannan knew what he was doing and picked the best solution available... If we can't understand such decisions, then it's probably because our skillset doesn't cover those fields. It's not a problem, but we shouldn't criticize techniques without practical experience with them.
Since you bring up this example so often, I assume Mr. Hannan must have written a paper about how, and to what extent, they used NNs in Colin McRae? Would you kindly point me to it? I'd like to read it before I comment on that. And please stop implying that I don't know what I am talking about; you only belittle yourself.
Quote: Original post by alexjc That's interesting. Care to elaborate? What in particular do they hate? Do they have any experience with CI? What kind of game is it?
I don't like to bring specifics of my work to these forums, so you'll forgive me if I don't elaborate on that side. But every game team has had experience with CI; these days you can't cross the street without meeting someone who wants to apply CI to games. What they hate is that it's impossible to tune and control explicitly. The core of the problem is that CI techniques put the behavior control in an implicit form: a black box that is impossible to interpret. It's just about impossible to look at a NN and explain why it outputs a certain behavior, and even harder to correct that behavior in a deterministic way. Because these techniques are, by their very nature, a search over the space of all behaviors made possible by the (very non-linear) model, the more complex the behavior you want, the more work it takes to rule out idiotic behaviors, when that's possible at all. And in the end, coming up with the proper cost (fitness) function is just as much work as writing your AI in an explicit way.
Quote: Original post by alexjc My belief is that any widespread technology has its uses, and if we disregard it by default for a problem then we probably don't know enough about it. That applies to CI in this case.
Yes, there seems to be a lot of faith implied by CI. I don't disregard it by default; I just don't see it solving the core issues of the problems it is being applied to. Take car racing. What makes the AI challenging? You need an AI with consistency over a medium-range temporal horizon, because the actions you take now have a big effect on the actions you'll be able to take next: how you exit a curve has a big effect on how you'll be able to take the next one. But since there are other cars on the circuit with unpredictable short-term behavior, you also need an AI that is 100% reactive and can revisit its short-term behavior on every frame. I don't see CI techniques addressing these issues in any specific way; we are expected to hope that a regression on a generic, implicit behavior model will come up with an acceptable solution.
There are interviews with Jeff Hannan about the Colin McRae AI here and more details here.

The neural networks were used specifically to decide which buttons to press, based on the state of the car and some information about the upcoming racing line. They were trained using supervised learning. I'm guessing that the resulting function may have been plotted along various axes to make sure that there weren't any areas where the output suddenly became unreasonable.
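To make that setup concrete, here is a minimal sketch of the idea: a tiny hand-weighted feedforward net mapping car state and lookahead information to a steering output, plus the kind of one-axis sweep described above to check that the output never becomes unreasonable. The architecture, weights, and input names are all illustrative assumptions, not Hannan's actual network.

```python
import math

def tanh_layer(inputs, weights, biases):
    """One fully-connected layer with tanh activation."""
    return [math.tanh(sum(w * x for w, x in zip(row, inputs)) + b)
            for row, b in zip(weights, biases)]

def steer_controller(speed, offset, curve_ahead):
    """Illustrative net: (car state, racing-line lookahead) -> steering in [-1, 1]."""
    hidden = tanh_layer([speed, offset, curve_ahead],
                        [[0.2, -1.0, 0.5], [-0.1, 0.8, 1.2]],  # made-up weights
                        [0.0, 0.0])
    (out,) = tanh_layer(hidden, [[1.0, -1.0]], [0.0])
    return out

# Sanity sweep along one input axis, looking for regions where the
# response suddenly becomes unreasonable (the "plotting" idea above).
for offset in [-1.0, -0.5, 0.0, 0.5, 1.0]:
    s = steer_controller(0.5, offset, 0.0)
    assert -1.0 <= s <= 1.0  # tanh output is always bounded
```

With real trained weights you would sweep every input axis (and pairs of axes) the same way, which is about as close as you get to "reading" a small network.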
Quote: Original post by alexjc
My belief is that any widespread technology has its uses, and if we disregard it by default for a problem then we probably don't know enough about it. That applies to CI in this case.

I guess I'm a bit more disillusioned than you. Just because a technology appears widespread doesn't mean that it is particularly suitable for the problem it is being applied to.
Quote: The core of the problem is that CI techniques put the behavior control in an implicit form: a black box that is impossible to interpret. It's just about impossible to look at a NN and explain why it outputs a certain behavior, and even harder to correct that behavior in a deterministic way. Because these techniques are, by their very nature, a search over the space of all behaviors made possible by the (very non-linear) model, the more complex the behavior you want, the more work it takes to rule out idiotic behaviors, when that's possible at all. And in the end, coming up with the proper cost (fitness) function is just as much work as writing your AI in an explicit way.


I never read the original thread, but this statement caught me because I'm faced with much the same situation. From a game designer's perspective, they want to tune the "ends," while implementing CI really only exposes the "means" to those "ends." So it really boils down to a conflict of methodology between engineers and designers. Most designers are looking for a specific result, or type of result, while engineers are left to fiddle with the means to get there. Though we may propose interesting means that generate really good ends, it's useless to the designers and producers if they can't control the ends the way they like. It also makes QA on the product especially hard.

Now I do think that CI is a very good tool for "development," but not really something you want to ship in full in a product. You can tune your NN or whatnot during production with CI, but in the end you want to ship something with a fixed, tuned behavior. Most game designers I talk to emphasize the need to deliver a consistent user experience, which means the AI is either equally dumb for everyone in all games, or it improves at exactly the same rate under identical circumstances. This seems to be the primary friction behind adopting more complex AI using CI techniques.
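The "tune during production, ship frozen" workflow can be sketched in a few lines. Here a stochastic search tunes one behavior parameter offline against a stand-in fitness function, and only the frozen result is baked into the runtime AI; every name (`lap_time`, `SHIPPED_AGGRESSION`) is a hypothetical placeholder, not a real pipeline.

```python
import random

def lap_time(aggression):
    """Stand-in simulator fitness: pretend the best lap is near aggression=0.7."""
    return (aggression - 0.7) ** 2

def tune_offline(seed=42, iters=200):
    """Development-time only: stochastic hill climbing over the parameter."""
    rng = random.Random(seed)
    best = rng.random()
    for _ in range(iters):
        cand = min(max(best + rng.gauss(0, 0.1), 0.0), 1.0)
        if lap_time(cand) < lap_time(best):
            best = cand
    return best

# Freeze the tuned value; this constant is what ships in the build.
SHIPPED_AGGRESSION = round(tune_offline(), 3)

def driver_ai(state):
    # At runtime the behavior is fixed and deterministic for every player.
    return {"throttle": SHIPPED_AGGRESSION, "state": state}
```

The search is seeded, so the tuning run itself is reproducible; the shipped build never runs the search at all, which gives QA a fixed target.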

Also, it is important to get back to basics and realize that CI is just a tool. And like any tool, it was built for a purpose. Implementing CI in game development should rest on the simple fact that it is the "proper" tool, or the only tool, available. In the end, specialized hacks and heuristics that are lighter weight may give you the same end result as CI.
Regarding the "black box" problem, yes it's a problem, but it's a problem that I think could be mitigated with a little creativity.

First, to be fair, hand-written code is far from being immune to the black box problem. It's a pretty common situation to have "write-only code"; it can be a result of bad programmers, a bad programming language, or whatever. And even if you *do* have a great language and great programmers; when an application becomes too big and complicated, it eventually becomes impossible to debug in any easy way.

So onto my main comment, there *are* ways to deal with the black box problem. We don't have to just give up, say that "CI algorithms are impossible to interpret", and close the book. We can invent new tricks to make these systems more transparent. For ANN systems, we can write visualizers that render the network, add a few text labels, and make nodes change color when they are activated. We can watch this visualization while the game is being played, or record it to disk and watch it later.

Or we can borrow tricks from the rest of the programming world. What's the cool thing to do when you want to reduce bugs in a highly complicated application? Write unit tests! And we can absolutely bring the concept of unit-testing to machine learning; it would probably be really successful.
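Behavioral unit tests for a learned model might look like the sketch below: pin down the outcomes the designer insists on in hand-picked scenarios, regardless of how the black box arrives at them. The policy here is a trivial stand-in for an exported trained model, and all the names are hypothetical.

```python
def trained_policy(enemy_distance, health):
    """Stand-in for a learned policy (e.g. a net exported after training)."""
    # Pretend the training produced roughly: flee when hurt, attack when healthy.
    score_attack = (1.0 - enemy_distance) * health
    score_flee = (1.0 - enemy_distance) * (1.0 - health)
    return "attack" if score_attack >= score_flee else "flee"

def test_policy_invariants():
    # Unit tests on behavior, not on weights: if retraining breaks one of
    # these, the build fails, just like any other regression.
    assert trained_policy(enemy_distance=0.1, health=0.9) == "attack"
    assert trained_policy(enemy_distance=0.1, health=0.1) == "flee"

test_policy_invariants()
```

The point is that the tests survive retraining: the weights can change completely as long as the required behaviors still hold.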

So there *are* options, many of which haven't been fully explored. Whether or not it's worth the man-hours to implement them, that's debatable.
Haven't you prejudiced the debate by referring to CI as "so called computational intelligence"?

They already call it CI so people don't have to say "so called AI"...

Also, there is no consensus on what exactly CI means. As far as I'm concerned, any computation designed to mimic intelligence is CI, so all game AI uses CI techniques... (IMO) CI isn't just NNs and evolutionary gizmos; it's also decision trees and FSMs.


Anyway, I've used some CI techniques in an FPS bot before so it could "learn" appropriate tactics for different levels. This produced a lot of data which didn't get used, but I had no trouble writing visualisations for the stuff I was interested in, such as how often a bot "chooses" a particular path, or what equipment it is "avoiding". These visualisations not only helped develop an AI with appropriate tactics for each level, but also exposed some game-play flaws! (such as tactics the bot "invented" which do work well, but which game-play reasons say shouldn't).
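A choice-frequency visualisation of that kind can be as simple as tallying decisions across many runs and printing a bar per option. The path names and counts below are made up for illustration.

```python
from collections import Counter

def path_choice_report(choices):
    """Summarize how often the bot picked each path across many runs."""
    counts = Counter(choices)
    total = len(choices)
    lines = []
    for path, n in counts.most_common():
        pct = 100.0 * n / total
        # Name, raw count, percentage, and a crude text bar.
        lines.append(f"{path:<10} {n:>4} ({pct:5.1f}%) " + "#" * int(pct // 5))
    return "\n".join(lines)

# Hypothetical decision log gathered over 100 matches on one level.
log = ["bridge"] * 60 + ["tunnel"] * 30 + ["cliff"] * 10
print(path_choice_report(log))
```

Skews in a report like this are exactly how "invented" tactics and level-design flaws tend to show up: one option dominating for no intended reason.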

[Edited by - Hodgman on October 22, 2007 1:03:34 AM]
In large part, the debate is hitting on two ends of a continuum:

"CI is difficult to get to do exactly what you want it to."
"Non-CI is difficult to get to do something other than the specific things you told it to."

"CI is difficult for the programmer to figure out exactly why the agent is doing something."
"Non-CI is too easy for the player to figure out what his/her/its opponent is going to do next."

"CI could lead to emergent behavior that is nifty."
"Non-CI avoids the potential of emergent behavior that looks really stupid."

I think the best point made above is that it is a tool. Depending on what you need to accomplish, you either select a CI-based method or you don't. If you are in an RPG with a character and scene where you want a specific set of actions, you hard-code those actions. If you are in an RTS or sandbox game with thousands of units that need to respond to millions of potential situations... it's foolish to try to hand-code contingencies for each one. At that point, it makes sense to break their little world down into the most granular inputs and outputs and try to build a decision model that accounts for everything. The RPG NPC didn't need this level of detail or flexibility, so that sort of contingency planning was not only overkill, it was dangerous from a game-design standpoint.

Another way of thinking of it is as the next stop down the line away from the "cut-scene". We all hate the overuse of cut-scenes because they are, by definition, scripted. They are inserts of exactly what the designer wants us to see at that moment, and they will always be the same given the same parameters. Many FSM or rule-based architectures are the same, with slightly more options that only thinly veil their rigidity. In the end, they are exactly what the designer wants us to see. CI-based architectures move even further away from that control and its resultant rigidity... involving a leap of faith in the process.

In the end, it comes down to how much time we are willing to put into testing our creations. I believe it is far more satisfactory as a designer AND as a player, however, to see something that is thinking for itself (so to speak) rather than blindly following a look-up table of orders.

Dave Mark - President and Lead Designer of Intrinsic Algorithm LLC
Professional consultant on game AI, mathematical modeling, simulation modeling
Co-founder and 10 year advisor of the GDC AI Summit
Author of the book, Behavioral Mathematics for Game AI
Blogs I write:
IA News - What's happening at IA | IA on AI - AI news and notes | Post-Play'em - Observations on AI of games I play

"Reducing the world to mathematical equations!"

It seems everyone is entrenched already, so I'll keep this short.


Looking at the problem a different way: what you consider directly tweaking parameters of an FSM so that certain behaviors emerge can in fact be less direct than training a machine-learning data structure on specific outcomes. It's just a matter of perspective.


Funny, the few designers who truly understand how to use CI in practice seem to have gone on to create genre-defining games. To each his own.

A.


Quote: Original post by alexjc
It seems everyone is entrenched already, so I'll keep this short. [...] Funny, the few designers who truly understand how to use CI in practice seem to have gone on to create genre-defining games. To each his own.

Do you find that flippant, passive-aggressive jabs at those you consider ignorant tend to make people more or less entrenched?
Quote: Original post by alexjc
It seems everyone is entrenched already, so I'll keep this short.
Looking at the problem a different way, what you consider directly tweaking parameter of a FSM for certain behaviors to emerge can in fact be less direct than training a machine learning data-structure on specific outcomes. It's just a matter of perspective.


You know that there is a world of AI beyond CI and FSMs, right? You seem to assume that not using CI techniques means using FSMs exclusively. Expert systems, Bayesian logic, blackboard architectures, partial-order planners, decision trees, goal-oriented reactive systems, countless others.

Each with a different temporal horizon, a different expressiveness, a different reactiveness, different interpretability.

But when you use CI to train an abstract model, you can never predict the result. Those methods seldom have any proof of convergence, or even likelihood of convergence, unlike other learning methods such as decision-tree learning or SVMs. Add to that the strong non-linearity of the search space...

Quote: Original post by alexjc
Funny, the few designers who truly understand how to use CI in practice seem to have gone on to create genre-defining games. To each his own.


"Genre-defining games" meaning games that use CI? Right...

Quote: Original post by Hodgman
Haven't you prejudiced the debate by referring to CI as "so called computational intelligence"?

They already call it CI so people don't have to say "so called AI"...


Debate? I thought it was more like a rant... I'm allowed one rant per year, right? This was to reflect another aspect of CI techniques I don't like: giving the techniques fancy names that have nothing to do with what they really do, and everything to do with what people wish they were doing. Then you get tons of people new to AI who think that ANNs really imitate the brain, and who don't realize that a GA is just a stochastic search over a landscape, with the "fitness" function playing the role of the cost function.
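To illustrate that point, here is a bare-bones GA on a toy problem (OneMax, maximizing the count of 1-bits): strip away the biological vocabulary and what remains is fitness-driven search with selection, crossover, and mutation as the move operators. All parameters are arbitrary illustrative choices.

```python
import random

def fitness(bits):
    """Toy 'fitness'/cost: number of 1s; stands in for behavior quality."""
    return sum(bits)

def genetic_search(n_bits=20, pop_size=30, generations=60, seed=1):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]           # selection on fitness
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n_bits)       # one-point crossover
            child = a[:cut] + b[cut:]
            i = rng.randrange(n_bits)            # single-bit mutation
            child[i] ^= 1
            children.append(child)
        pop = parents + children                 # elitist replacement
    return max(pop, key=fitness)

best = genetic_search()
```

Everything the algorithm "knows" about the problem lives in `fitness`, which is exactly the earlier point: writing a good fitness function is where the real design work goes.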

Quote: Original post by InnocuousFox
I believe it is far more satisfactory as a designer AND as a player, however, to see something that is thinking for itself (so to speak) rather than blindly following a look-up table of orders.


Well, it's a vague and tricky concept, but I don't think there is any more deliberation from the AI in any CI technique I know of than in an FSM, and there is less than in, say, a planner or an expert system, unless the training is done in real time, in-game. Any possible deliberation in a NN is done at training time; afterwards it just blindly follows the resulting input-output machine. The difference between the deliberation of the FSM and the NN is that in the FSM most of the deliberation is performed by the designer, while for the NN it is performed by interpolation/extrapolation over the examples provided by the designer.
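The "both are fixed input-output machines at runtime" point can be shown side by side: a transition-table FSM whose mapping was deliberated by the designer, and a one-neuron net whose mapping was deliberated at training time. The states, events, and weights are all illustrative.

```python
import math

# FSM: the designer deliberated the state -> action mapping directly.
FSM = {("idle", "enemy_seen"): "attack",
       ("attack", "low_health"): "flee",
       ("flee", "safe"): "idle"}

def fsm_step(state, event):
    """Look up the next state; stay put on unknown (state, event) pairs."""
    return FSM.get((state, event), state)

# NN: the deliberation happened at training time; at runtime this is
# just as fixed a mapping (weights here are made up, not trained).
def nn_step(inputs, weights=(1.5, -2.0), bias=0.1):
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-z))  # e.g. probability of attacking

assert fsm_step("idle", "enemy_seen") == "attack"
assert 0.0 < nn_step((0.8, 0.2)) < 1.0
```

Neither function deliberates when called; the difference is only where the mapping came from, a table written by hand versus weights fit to examples.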

[Edited by - Steadtler on October 22, 2007 7:45:21 PM]
Quote: Original post by Steadtler The difference between the deliberation of the FSM and the NN is that in the FSM most of the deliberation is performed by the designer, and for the NN it is performed by the interpolation/extrapolation of the examples provided by the designer.

And yet, as we all know, one thing computers do well is iterate over stuff really, really fast. So it is very possible to train a NN or similar structure to handle far more contingencies than is possible by hand-defining cause-effect pairs as a designer.

Dave Mark - President and Lead Designer of Intrinsic Algorithm LLC

This topic is closed to new replies.
