
An AI idea

Started by daniel_i_l, October 04, 2007 11:28 AM
6 comments, last by Vorpy 17 years, 1 month ago
I have an idea for a board game AI (I'll use tic-tac-toe as an example). Each square acts like a node in an ANN: it has weights that it uses to get input from the squares next to it. It works like this:

1) On the first cycle, each node (square) gets its input according to the piece that's there (x, o, or empty).
2) It then passes its output to the squares next to it through the weights.
3) Each node uses those inputs to produce a new output, which it passes on again.
4) Steps 2 and 3 are repeated for a predetermined number of cycles.
5) On the last cycle, the outputs from all the squares are summed up, and that sum is the value of the position for a given side.

The weights for all the squares could be found using a GA or something. What do you think of this idea? I plan on checking whether it works with tic-tac-toe, then trying it with checkers, and finally with Go. Thanks.
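In rough Python, I'm picturing something like this for the 3x3 case (the piece encoding, the clamping, and the exact weight layout are just placeholder choices, not settled yet):

```python
import random

# Sketch of the propagation idea for a 3x3 board (tic-tac-toe).
PIECE_VALUE = {'x': 1.0, 'o': -1.0, ' ': 0.0}  # placeholder encoding for x / o / empty

def neighbors(r, c, size=3):
    """Squares orthogonally or diagonally adjacent to (r, c)."""
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if (dr, dc) != (0, 0) and 0 <= r + dr < size and 0 <= c + dc < size:
                yield r + dr, c + dc

def evaluate(board, weights, cycles=3, size=3):
    """board: size x size grid of 'x'/'o'/' '.
    weights[(r, c)][(nr, nc)]: weight node (r, c) applies to input from neighbour (nr, nc)."""
    # Step 1: each node's initial output comes from the piece on its square.
    out = {(r, c): PIECE_VALUE[board[r][c]] for r in range(size) for c in range(size)}
    # Steps 2-4: repeatedly pass outputs to neighbouring squares through the weights.
    for _ in range(cycles):
        new_out = {}
        for (r, c) in out:
            total = sum(weights[(r, c)][(nr, nc)] * out[(nr, nc)]
                        for nr, nc in neighbors(r, c, size))
            new_out[(r, c)] = max(-1.0, min(1.0, total))  # clamp to keep values bounded
        out = new_out
    # Step 5: the position value is the sum of all node outputs on the last cycle.
    return sum(out.values())

def random_weights(size=3):
    """One candidate individual for a GA: a weight for every (node, neighbour) pair."""
    return {(r, c): {(nr, nc): random.uniform(-1.0, 1.0) for nr, nc in neighbors(r, c, size)}
            for r in range(size) for c in range(size)}
```

A GA would then just evolve a population of `random_weights()` individuals, scoring each one by how well `evaluate` plays.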
"We've all heard that a million monkeys banging on a million typewriters will eventually reproduce the entire works of Shakespeare. Now, thanks to the internet, we know this is not true." -- Professor Robert Silensky
http://www.adit.co.uk/html/neural_networks.html

additionally:

http://www.google.com/search?q=tic+tac+toe+neural+network&ie=utf-8&oe=utf-8&aq=t&rls=org.mozilla:en-US:official

-me
It sounds a lot like a Hopfield network applied to a board game. These days you can train them efficiently without using a GA.

The only problem I can foresee is with 4). How do you know when to stop? The simulation might not be "stable" and you'd end up with an NN version of the Game of Life!
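If you do go the fixed-point route, one rough option (a sketch only, reusing the `PIECE_VALUE` and `neighbors` helpers from the sketch above; the tolerance and the cap are arbitrary) is to iterate until the outputs settle, with a hard limit so a non-converging net still terminates:

```python
def evaluate_until_stable(board, weights, tol=1e-3, max_cycles=50, size=3):
    """Like the fixed-cycle version, but stops once the node outputs settle.
    A net that never settles simply runs into the max_cycles cap."""
    out = {(r, c): PIECE_VALUE[board[r][c]] for r in range(size) for c in range(size)}
    for _ in range(max_cycles):
        new_out = {}
        for (r, c) in out:
            total = sum(weights[(r, c)][(nr, nc)] * out[(nr, nc)]
                        for nr, nc in neighbors(r, c, size))
            new_out[(r, c)] = max(-1.0, min(1.0, total))
        delta = max(abs(new_out[k] - out[k]) for k in out)  # biggest change this cycle
        out = new_out
        if delta < tol:
            break  # the grid has (approximately) reached a fixed point
    return sum(out.values())
```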


As far as usefulness goes, I think having these grid-level pattern recognizers is a good idea, but you'll need more layers of logic on top to make a useful AI.

Sounds like a fun project!


alexjc:
Thanks for your reply. Can you explain what you mean by additional layers of logic?
Thanks.
"We've all heard that a million monkeys banging on a million typewriters will eventually reproduce the entire works of Shakespeare. Now, thanks to the internet, we know this is not true." -- Professor Robert Silensky
To get a better idea of what you're actually proposing, try drawing this idea out as a feed-forward network, with certain edges having common weights. It's really a normal neural network with some special constraints on how the weights are being set. It's not at all clear that the proposed topology and constraints would be beneficial.
daniel_i_l,

By additional layers I mean more neurons in your network that are connected to the "low-level" neurons. So you'd have 16x16 low-level neurons, then maybe 8x8 neurons on top of that, each connected to 4 below, etc.
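Purely as a sketch of the wiring (the 2x2 grouping, the random weights, and the assumption of an even side length are arbitrary placeholders):

```python
import random

def random_layer_weights(lower_size, rng=random):
    """One set of four weights per higher-level neuron, covering the 2x2 block below it."""
    upper = lower_size // 2
    return [[[rng.uniform(-1.0, 1.0) for _ in range(4)] for _ in range(upper)]
            for _ in range(upper)]

def pool_up(grid, layer_weights):
    """grid: N x N activations (N even). Each higher-level neuron combines the
    2x2 block of lower-level neurons beneath it through its own four weights."""
    upper = len(grid) // 2
    out = []
    for r in range(upper):
        row = []
        for c in range(upper):
            block = [grid[2 * r][2 * c],     grid[2 * r][2 * c + 1],
                     grid[2 * r + 1][2 * c], grid[2 * r + 1][2 * c + 1]]
            w = layer_weights[r][c]
            row.append(sum(wi * bi for wi, bi in zip(w, block)))
        out.append(row)
    return out

# Stacking pool_up calls takes a 16x16 grid to 8x8, then 4x4, and so on.
```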


You won't find it easy to make a useful AI out of this; it's still an open area of research.

See Hawkins' stuff for references:
http://en.wikipedia.org/wiki/Hierarchical_Temporal_Memory


What I think is, this will be 100 times worse than minimaxing/alpha-beta pruning, at best. You're trying to apply a method with zero temporal horizon to a problem with a far-reaching temporal horizon, not to mention the other drawbacks of using NNs and GAs. Waste of time.
Quote: Original post by Steadtler
What I think is, this will be 100 times worse than minimaxing/alpha-beta pruning, at best. You're trying to apply a method with zero temporal horizon to a problem with a far-reaching temporal horizon, not to mention the other drawbacks of using NNs and GAs. Waste of time.


It would be easy to add minimax to this system by simply using the neural network as the evaluation function.
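For example, a rough sketch (the `game` helper with `legal_moves`, `apply` and `is_terminal` is a hypothetical stand-in for whatever move generation you already have, and `evaluate` is the grid network from the first sketch):

```python
def minimax(board, weights, depth, maximizing, game):
    """Plain fixed-depth minimax with the grid network as the leaf evaluator."""
    if depth == 0 or game.is_terminal(board):
        return evaluate(board, weights)  # the network scores the leaf position
    values = (minimax(game.apply(board, move), weights, depth - 1, not maximizing, game)
              for move in game.legal_moves(board))
    return max(values) if maximizing else min(values)
```

Alpha-beta pruning would slot in the same way; only the leaf evaluation changes.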

This topic is closed to new replies.
