
Using neural networks for optimization problems

Started by January 15, 2007 10:52 PM
19 comments, last by Timkin 17 years, 10 months ago
Quote: Original post by Gibberstein
Quote: Original post by YELLOWtrash
yeah but what if the ANN learns to do something BETTER?


You could equally say 'But what if a cat walks across the keyboard a few times and THE KEYS IT STEPPED ON PRODUCED THE GREATEST AI PROGRAM EVER!!! ' - sure, it theoretically could happen, but I wouldn't bet my time or money on it.


Not really.

The fact that Vorpal acknowledged that the NN may learn an effective strategy fills me with optimism. However, he assumed that it will learn gradient descent. I don't see why it can't learn a different, and perhaps more effective, strategy.
It won't learn a better strategy because the odds against it are phenomenal. For every useful strategy there are countless useless strategies and there's no way to guide the neural network to the strategies that are actually useful. The neural network learning algorithm has no knowledge of the specific optimization problem that it can use to judge how well it is doing or to try to improve itself. For every way the neural network can change to become better at solving the problem, there are infinitely many ways that it can become worse.

It might be possible to train the neural network to implicitly learn the gradient of the problem space as part of its strategy. I think that this is what a neural network trained using backpropagation and reinforcement learning would do, if it did anything at all. For this to happen the neural network must be complex enough to learn the gradient of the problem space, meaning the network must have lots of nodes for all but the simplest of problems. It would learn excruciatingly slowly and ultimately would do worse than a simulated annealing algorithm.
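For comparison, simulated annealing itself is only a handful of lines. Here's a minimal sketch in Python (the test function, step size, and cooling schedule are all arbitrary choices of mine, not from any particular source):

```python
import math
import random

def simulated_annealing(f, x0, steps=10_000, t0=1.0, cooling=0.999):
    """Minimise f: perturb randomly, accept uphill moves with
    probability exp(-delta / T), and let T decay geometrically."""
    x, fx = x0, f(x0)
    best, f_best = x, fx
    t = t0
    for _ in range(steps):
        cand = x + random.gauss(0, 0.5)        # random neighbour
        fc = f(cand)
        if fc < fx or random.random() < math.exp((fx - fc) / t):
            x, fx = cand, fc                   # accept the move
        if fx < f_best:
            best, f_best = x, fx               # track the best ever seen
        t *= cooling                           # cool down
    return best, f_best

# Bumpy 1-D test function; global minimum is f(0) = 0.
def bumpy(x):
    return x * x + 3 * math.sin(5 * x) ** 2

x_best, f_best = simulated_annealing(bumpy, x0=8.0)
print(x_best, f_best)
```

Twenty-odd lines, no training phase, and the gradient information is used directly rather than being rediscovered by a network.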

Another way to train the network would be with a genetic algorithm, but why not just use a genetic algorithm to solve the original problem?
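To put that in perspective, a genetic algorithm applied directly to the objective is about as short as it gets. A toy sketch (population size, mutation width, and the objective are all made up for illustration):

```python
import random

def genetic_minimise(f, pop_size=40, generations=200, sigma=0.3):
    """Toy real-valued GA: truncation selection plus Gaussian mutation."""
    pop = [random.uniform(-10.0, 10.0) for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(pop, key=f)                     # best first
        parents = ranked[: pop_size // 2]               # keep the top half
        children = [p + random.gauss(0, sigma) for p in parents]
        pop = parents + children                        # elitist: parents survive
    return min(pop, key=f)

best = genetic_minimise(lambda x: (x - 3.0) ** 2)
print(best)  # should land near 3
```

If the GA can evolve a solution this directly, evolving a whole network that then has to find the solution is a strictly harder search.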

Simpler problems are easier to solve. Any domain knowledge about a problem can be used to aid in finding a solution. Generic solutions like neural networks and genetic algorithms tend to perform much worse than solutions more tailored to the problem at hand. In this case you're taking a general problem (optimization) and turning it into an exponentially more general problem (finding a strategy for optimization) without using any domain knowledge about the problem at all.
Quote: Original post by YELLOWtrash
The fact that Vorpal acknowledged that the NN may learn an effective strategy fills me with optimism. However, he assumed that it will learn gradient descent. I don't see why it can't learn a different, and perhaps more effective, strategy.


You may have been misled into believing that the "neural" part of "neural network" means that the algorithm is somehow intelligent or "brainlike". As someone who has spent a considerable amount of time studying and implementing neural networks, I can assure you that it is not. There's nothing magical about it, and in these situations it cannot outstrip, or even remotely match, human intellect. It is a blind eunuch, stumbling downhill in the dark.
Not so sure about the eunuch part, but otherwise I would have to agree that NNs are really overrated as an AI solution. Part of the hard issue is what to choose as inputs anyway. If you can identify which metrics to use as inputs, you already have an idea of what parameters may well matter in the problem. All you are doing is asking a repetitive, Darwinian process to ferret out (no offense, Steve) what you may already know.

In the end, all you have really done is algorithmically determine what the decision thresholds should be for various state changes and rule triggers. It may save you the trouble of choosing them by hand... but at what cost?

Dave Mark - President and Lead Designer of Intrinsic Algorithm LLC
Professional consultant on game AI, mathematical modeling, simulation modeling
Co-founder and 10 year advisor of the GDC AI Summit
Author of the book, Behavioral Mathematics for Game AI
Blogs I write:
IA News - What's happening at IA | IA on AI - AI news and notes | Post-Play'em - Observations on AI of games I play

"Reducing the world to mathematical equations!"

As a naive young student, it was quite a shock to discover that neural networks weren't the miraculous universal solution that the hype suggests. The basic perceptron model (the foundation that other NN models extend) is simply a way to adjust the slope and intercept of a 2D line until it settles into the best position to separate the possible inputs into two categories. It fails on something as trivial as modelling an XOR gate.
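That XOR failure is easy to demonstrate. Below is a minimal perceptron in plain Python; the learning rate and epoch count are arbitrary picks of mine, but the outcome isn't: AND is linearly separable and gets learned, XOR is not and never will be.

```python
def train_perceptron(samples, epochs=100, lr=0.1):
    """Single-layer perceptron: learns one linear decision boundary."""
    w0 = w1 = b = 0.0
    for _ in range(epochs):
        for (x0, x1), target in samples:
            out = 1 if w0 * x0 + w1 * x1 + b > 0 else 0
            err = target - out                 # -1, 0, or +1
            w0 += lr * err * x0
            w1 += lr * err * x1
            b += lr * err
    return lambda x0, x1: 1 if w0 * x0 + w1 * x1 + b > 0 else 0

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

and_net = train_perceptron(AND)
xor_net = train_perceptron(XOR)

print(all(and_net(*x) == t for x, t in AND))   # True: AND is linearly separable
print(all(xor_net(*x) == t for x, t in XOR))   # False: no line separates XOR
```

No amount of extra training fixes the XOR case; you need hidden layers (i.e. more separating planes) before the network can even represent the function.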

More advanced NN models simply use more dimensions and multiple separating planes, but otherwise do nothing fancier than the basic perceptron. When you look at how they actually work, they really don't measure up to the hype: good at some very specific pattern-recognition tasks, but otherwise often a very CPU-heavy way to achieve substandard results.
There are problems where an NN is a good solution, but most game modeling problems are better handled with a state machine or fuzzy if-then logic.
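For the record, the state-machine alternative really is trivial to build. A toy table-driven FSM for a guard (the states and events are invented for the example):

```python
# Guard AI as a table-driven finite state machine.
TRANSITIONS = {
    ("patrol", "sees_player"): "chase",
    ("chase", "lost_player"): "search",
    ("chase", "low_health"): "flee",
    ("search", "sees_player"): "chase",
    ("search", "timeout"): "patrol",
    ("flee", "healed"): "patrol",
}

def step(state, event):
    """Next state; irrelevant events leave the state unchanged."""
    return TRANSITIONS.get((state, event), state)

state = "patrol"
for event in ("sees_player", "low_health", "healed"):
    state = step(state, event)
print(state)  # "patrol"
```

Every transition is explicit, inspectable, and tunable by hand, which is exactly what you give up with a trained network.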

The good NN solution that I've read about involved the medical outcomes of patients with numerous medical problems, and predicting whether or not they would develop diabetes in the future.

There are numerous variables (hundreds), and the NN did a better job of predicting than a nurse case-manager. But for a game you don't have so many unknowns, as YOU are designing the system. So YOU can design the game response better than any NN.
Quote: Original post by ID Merlin
The good NN solution that I've read about involved the medical outcomes of patients with numerous medical problems, and predicting whether or not they would develop diabetes in the future.

Which, of course, is pattern recognition.


Quote: Original post by InnocuousFox
Quote: Original post by ID Merlin
The good NN solution that I've read about involved the medical outcomes of patients with numerous medical problems, and predicting whether or not they would develop diabetes in the future.

Which, of course, is pattern recognition.


There are thousands of diagnoses, and recognizing the sets of those that predict diabetes with a simple algorithm would be extremely difficult. I wonder if the resultant NN could be used to derive the pattern recognition algorithm?
Quote: Original post by ID Merlin
There are thousands of diagnoses, and recognizing the sets of those that predict diabetes with a simple algorithm would be extremely difficult. I wonder if the resultant NN could be used to derive the pattern recognition algorithm?


Isn't the NN itself part of the pattern recognition algorithm? Or do you mean extracting the algorithm into a more explicit form?
Yes, it was quite redundant. An NN specifically is a pattern recognition algorithm. That was my point.


This topic is closed to new replies.
