Simple go programming
Okay, I know the general topic of go (wei'qi, paduk, whatever you wanna call it) has been discussed around here a bit but I've got questions, perhaps easier to answer than how to make computer go stand a chance against high-level players. I'm kinda new to the whole concept of AI (using proprietary script for Galactic Battlegrounds is the only AI-like thing I've ever done), so rather than trying to get a strong go playing program, I'd like to just get one that plays legally and actually works. So my question is, what specific knowledge do I need--aside from the rules of the game--to make a working go playing program? Getting it stronger can come in time, but it does me no good if I can't even write one that works :P.
Thanks.
~Shadowtrooper Out~
Quote: Original post by jpetrie
AI for Go is hard. It is a less-solved problem than chess AI.
Oh, I'm well aware of that. That's why I want to know what I need to learn (AI techniques, languages, etc.) so that I can learn them and eventually get around to implementing them in a go program.
~Shadowtrooper Out~
Well, I dug that link up by using Google, and there were plenty of others on the search results, so you could start there.
Yeah, now that I'm getting further into the site I do see what you mean, thanks!
~Shadowtrooper Out~
I don't agree with most of what that website says, although many of those things are believed almost religiously by computer-go practitioners.
Quote: The computers lack ability because Go is a much larger game than chess. In an average chess position, there are about 15 to 25 legal moves. In Go, that number is usually around 250, making it virtually impossible to create a program based entirely on the minimax game tree in the way that Chess programs are written. Brute force no longer works so well.
That is a mixed bag of truths and lies. The number of legal moves in chess is more like 35, and 9x9 go has a very similar branching factor, and yet programs are still not strong there. The real problem is that chess has a natural evaluation function (material count) that works pretty well, and it is easy to refine it with more knowledge (control the center, keep your pawns defending each other, keep your king safe, place your rooks on open files, etc.) and get a strong program. In go, no such evaluation function is known. The branching factor is a secondary problem: with a decent evaluation function, many moves can be pruned, or at least searched with very reduced depth.
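To make that concrete, here is a minimal sketch of the kind of search a chess program does: negamax with alpha-beta cut-offs driven by a crude material-count evaluation. Everything here (the position object, its legal_moves/make/unmake methods, the piece values) is hypothetical and made up for illustration, not taken from any real engine, but it shows where the evaluation function plugs in and why having a trustworthy one lets you prune or shallow-search most moves.

PIECE_VALUES = {'P': 1, 'N': 3, 'B': 3, 'R': 5, 'Q': 9}  # crude material count

def evaluate(position):
    # Positive means the side to move is ahead in material.
    score = 0
    for piece, owner in position.pieces():             # hypothetical accessor
        value = PIECE_VALUES.get(piece.upper(), 0)
        score += value if owner == position.side_to_move() else -value
    return score

def negamax(position, depth, alpha, beta):
    # Standard negamax with alpha-beta pruning; the cut-off below is only
    # useful because evaluate() gives a usable estimate at the leaves.
    if depth == 0 or position.is_terminal():
        return evaluate(position)
    best = float('-inf')
    for move in position.legal_moves():
        position.make(move)
        score = -negamax(position, depth - 1, -beta, -alpha)
        position.unmake(move)
        best = max(best, score)
        alpha = max(alpha, score)
        if alpha >= beta:
            break                                       # prune the remaining moves
    return best

The whole point of the go problem, as described above, is that nobody knows how to write an evaluate() for go that is anywhere near as reliable as counting material is for chess.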
Quote: Thus, Go programming requires a different route from chess programming. Instead of simulating the techniques human players use to play the game, the programmers must model human thought itself! Not only must a computer now act like a person, but it must also think like a person!!
I stopped reading right there. This is an obvious non sequitur. It is true that having the computer think like a person would give us a go-playing program. But it is also possible that we haven't tried minimax hard enough, or that a good flux capacitor coupled with a phaser beam generator would do the job; it's just that the person writing that never thought of it, or it hasn't been invented yet. We built flying machines that don't fly like birds do. We built chess-playing machines that don't play chess like humans do. And I don't see any convincing evidence that go is any different.
"Simple" and "Go Program" don't belong in the same sentence.
Many very smart people have spent years working on it, with very limited progress. It's widely accepted that no living human will ever see a computer defeat a professional Go player. No existing program can even give a decent game to a garden-variety amateur.
---visit my game site http://www.boardspace.net - free online strategy games
Here are my two cents on the topic as I'm sort of interested in the problem myself.
It is true that the biggest problem is not really the branching factor, but rather the evaluation function. The other problem is that there are various abstract elements in the game itself. For example, when does a program know it has no chance of winning? A pretty good human player can usually judge that before the game is over, unless it is a very close game. Even most intermediate-level players know when they have been beaten and have no chance of winning. Then comes the next question: how do you write a program that figures out that the game is over? Determining this is probably the first step. It has been done, it's not impossible, but the question is how soon a program can judge that it has reached the end and that there are no more meaningful moves that will make a difference to the outcome.
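For the narrower "is the game over" question, the bookkeeping itself is simpler than it sounds: a common convention (my own sketch here, not anything specific from this thread) is to treat two consecutive passes as the end of the game, after which the board is scored. Something like:

class GameState:
    def __init__(self):
        self.consecutive_passes = 0

    def play(self, move):
        if move == "pass":
            self.consecutive_passes += 1
        else:
            self.consecutive_passes = 0
            # ...place the stone, resolve captures, enforce ko, etc.

    def is_over(self):
        # Two passes in a row end the game; scoring happens afterwards.
        return self.consecutive_passes >= 2

Knowing when there are no more meaningful moves, as opposed to merely legal ones, is of course the hard part the paragraph above is really asking about.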
The problem with the evaluation function is the nature of the game. Instead of capturing pieces, you're trying to claim open areas. So it is nearly impossible to rank a move's importance to the outcome of the game, especially early in the game. This point alone makes many learning methods ineffective, as there is no real way to attribute a win or a loss to a single move, or sometimes even to a series of moves. This is why most successful go-playing programs use pattern look-ups from a fixed database to decide which moves to consider, and then do a localized search on those moves. In general, this is how most human players play, and how they were taught to play.
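As a toy illustration of that pattern-database idea (the board encoding and the single pattern here are invented for the example; real programs use far larger, hand-tuned libraries), you can map small local neighbourhoods of the board to suggested replies and only search around the points where a pattern fires:

# The board is a list of equal-length strings of '.', 'B' and 'W'.
# Patterns are 3x3 neighbourhoods flattened to 9-character strings,
# mapped to a reply given as an offset from the pattern's centre.
PATTERNS = {
    ".B." + "W.W" + "...": (0, 0),   # e.g. answer by playing the centre point
}

def neighbourhood(board, x, y):
    # Flatten the 3x3 area around (x, y); off-board points become '#'.
    cells = []
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            nx, ny = x + dx, y + dy
            if 0 <= nx < len(board[0]) and 0 <= ny < len(board):
                cells.append(board[ny][nx])
            else:
                cells.append('#')
    return ''.join(cells)

def pattern_moves(board):
    # Collect candidate moves wherever a known local pattern matches;
    # a real program would then run a localized search on just these points.
    candidates = []
    for y in range(len(board)):
        for x in range(len(board[0])):
            reply = PATTERNS.get(neighbourhood(board, x, y))
            if reply is not None:
                candidates.append((x + reply[0], y + reply[1]))
    return candidates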
Personally, I think one approach a program might benefit from is not to play against the opponent, but rather to "play" the opponent. Instead of being aggressive or defensive, you try to lure the opponent into situations that are beneficial to the program. This may not produce a program that is generally strong, but it will produce one that will eventually beat anyone if enough matches are played.