Quote:
Original post by ToohrVyk
...it would be enough to have three low-level monsters with perfect AI to kill the player.
Perfect AI is not good AI, either. That is another ridiculous misconception. One of the caveats of von Neumann's game theory was that it assumed the decision maker was entirely rational. That is where it came up short as a modeling tool. Humans (among others) are not completely rational.
When we design computer AI with perfect knowledge, perfect calculation, and perfect execution, we are creating an entity that is not "human-like" precisely because we are making it perfect.
The point is, for too long we have been focused on "solving" a problem with AI. To solve the problem from a scientific, mathematical, or engineering standpoint means to maximize it (i.e., to perfect it). However, in doing so, the result was no longer fun, since any given bot could head-shot us in an instant. So, in a way, we abandoned that quest because we were told "perfect != fun".
What if we were to construct our agents in such a way that we weren't striving for that very non-human, non-fun perfection and, instead, strove to model something that was realistic in the sense of being sub-optimal and even fallible? Not in the contrived sense of "exploits", but in a reasonable fashion.
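One way to read "reasonably sub-optimal" is to stop always taking the single best-scoring option and instead pick from the candidates with weighted randomness. Here is a minimal sketch of that idea, assuming the agent already produces a utility score per action; the action names and numbers are invented for illustration, not taken from any particular game:

```python
import math
import random

def choose_action(utilities, temperature=0.35, rng=random):
    """Pick an action from {name: utility} using softmax (Boltzmann) weighting.

    Temperature near zero approaches the 'perfect' argmax agent; higher values
    make the agent increasingly willing to take plausible-but-suboptimal
    actions, which reads as more human.
    """
    names = list(utilities)
    # Shift by the max utility for numerical stability before exponentiating.
    best = max(utilities.values())
    weights = [math.exp((utilities[n] - best) / temperature) for n in names]
    # Sample one action in proportion to its weight.
    r = rng.uniform(0, sum(weights))
    for name, w in zip(names, weights):
        r -= w
        if r <= 0:
            return name
    return names[-1]

# The 'perfect' agent always head-shots; this one usually makes a strong
# choice, but occasionally a merely decent one.
print(choose_action({"head_shot": 0.9, "body_shot": 0.7, "take_cover": 0.5}))
```

The agent still prefers good options, but it is no longer a machine that reads your position and executes the mathematically optimal response every single frame.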
For example, at a GDC many years ago, I shared a beer with someone who had worked on the AI for a particular NFL game I had played. I told him that my observation was that you could march down the field entirely on 10-yard hook patterns. The DB would always be just off the ball as the WR cut back. He looked baffled when I suggested that there could be a fix for this.
I went on to explain that, in real football, after a number of times the DB would tend to jump the route and try to cut off the hook. He argued that the human player could then "exploit" this feature by running a fake hook and going right past the DB after drawing him in. It was my turn to be baffled. It was obvious that he had not watched a lot of football.
The point was, the issue that I brought up was an "exploit": the player could take advantage of something that the designer/programmer had obviously not put into the DB logic. The behavior that he thought was an "exploit", however, is a truly legitimate play based on bluffing. You can't watch an NFL game without hearing things like "run to set up the pass", "drawing the defense in", "taking the underneath route that they are giving them", and, more relevantly, "setting the defender up by running the same route a few times."
In both cases, the DB made an error. In the first, he made the error of doing the same thing every time and not adjusting to the exploit that the player had found. In the second, the DB got drawn into a legitimate reaction to a legitimate strategy, just as DBs do every single week in the NFL.
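To make the "fix" concrete, here is a toy sketch of my own (not anything from that game's code): the DB only needs a memory of what it has been shown and a growing willingness to jump the route it expects. The route names, thresholds, and probabilities below are invented purely to illustrate the shape of the behavior:

```python
import random

class DefensiveBack:
    """Toy model of a DB that adapts to a repeated route.

    The defender's expectation shifts with what it has seen, so the same
    call stops working over time and a fake becomes a real option.
    """
    def __init__(self):
        self.seen = {}  # route name -> times the offense has run it

    def react(self, actual_route):
        # Guess the route the offense is most likely to run, based on history.
        expected = max(self.seen, key=self.seen.get, default=None)
        times = self.seen.get(expected, 0)
        # The more often a route has been shown, the more willing the DB is
        # to jump it (capped so the behavior stays fallible, not psychic).
        jump_chance = min(0.8, 0.2 * times)
        jumps = expected is not None and random.random() < jump_chance
        self.seen[actual_route] = self.seen.get(actual_route, 0) + 1

        if jumps and expected == actual_route:
            return "pass defended"      # the DB jumped the right route
        if jumps and expected != actual_route:
            return "beaten deep"        # drawn in by the fake; a legitimate play
        return "trailing coverage"      # plays it straight, just off the ball

db = DefensiveBack()
for _ in range(4):
    print(db.react("hook"))             # hooks work at first, then less so
print(db.react("hook_and_go"))          # now the fake has a real payoff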
For the player, the difference is best illustrated by the following sentences:
* I beat the game by finding something stupid in the AI that the designers didn't account for.
* I beat the game by finding a strategy that took advantage of the AI's realistic response.
The first is solving a puzzle. The second is playing football. If you are creating a puzzle game, have at it. If you are trying to create a football simulation, then "bad AI" is disappointing.
By the way, after all this I may as well pimp my book, "Behavioral Mathematics for Game AI". Hopefully it will be on the shelves in time for GDC. It addresses exactly the above issue... step out of the hyper-focus on algorithms and write behaviors instead.