
A book to read if you're into AI

Started by July 18, 2005 07:30 AM
3 comments, last by NickGeorgia 19 years, 6 months ago
I've been reading this fantastic book by Jeff Hawkins (the founder of Palm Computing and the inventor of the PalmPilot). It's called 'On Intelligence' and you can buy it from Amazon and probably eBay. The ideas he expresses in the book are so different from the stuff most neuroscientists are caught up on that they actually end up making a lot more sense.

His theory revolves around two basic principles: patterns and prediction. He believes that every cell in the cortex (the part of the brain that separates mammals from the rest of the animals) has the exact same function, and that in large groups, joined together, they form the ability to recognise and repeat patterns of data. This means that every sense we have - vision, smell, touch, taste and hearing - is processed in the same way. They just seem different to us because of the different types of data pattern. Although this is just a theory, it's (apparently) been backed up by experiments in his very own neuroscience lab.

There's too much to go into great detail here, but if you enjoy reading about A.I., it's a nice breath of fresh air.

Edit: He has a website too, with a forum to ask questions and stuff on
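Edit 2: to make the patterns-and-prediction idea a bit more concrete, here's a little toy sketch in Python. To be clear, this is just my own hack, not anything from the book - it only remembers which input tends to follow which, predicts the next one, and gets "surprised" when the prediction misses.

# Toy illustration of "patterns and prediction" (not Hawkins' cortical model):
# remember which symbol tends to follow which, predict the next one,
# and flag anything that breaks the learned pattern.
from collections import defaultdict, Counter

class SequenceMemory:
    def __init__(self):
        self.transitions = defaultdict(Counter)  # previous symbol -> counts of what followed
        self.prev = None

    def predict(self, symbol):
        counts = self.transitions[symbol]
        return counts.most_common(1)[0][0] if counts else None

    def observe(self, symbol):
        surprised = False
        if self.prev is not None:
            expected = self.predict(self.prev)
            surprised = expected is not None and expected != symbol
            self.transitions[self.prev][symbol] += 1
        self.prev = symbol
        return surprised

mem = SequenceMemory()
for s in "abcabcabcabxabc":
    if mem.observe(s):
        print("surprise at", s)  # fires at 'x', which breaks the learned a-b-c pattern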
Hey I just finished that book too! I really liked it as well. It's nice to see someone getting back to the connectionist, human-inspired school of AI (i.e. neural networks), which seems to be less than popular these days. Personally I am a big fan of connectionism, and am working on an experimental NN project at home.

His theory does a great job of explaining how the brain is able to filter through all the data coming in, and to draw attention to new things. But I don't know if his theory really goes the whole 9 yards. He makes it sound like his theory is the whole story, but there's a lot of stuff to which he doesn't seem to give a satisfying answer.

One of those things is decision making. I think his book says something like "when you want to move your hand, you first form a prediction that your hand will move, and the prediction is what causes your hand muscles to activate". This seemed pretty weird to me. And it doesn't really say at which point the brain decided to move the right hand instead of, say, the left foot.

Another area that could use more details is memory formation. All he really talks about is Hebbian learning, which is something that's been around for years. But so far, models based solely on Hebbian learning don't get very far. Either a) no one has yet made an accurate-enough model of the neuron, or b) there are other mechanisms involved in learning that we don't know about.
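For anyone who hasn't seen it, the basic Hebbian rule really is tiny. Here's a rough Python sketch - the learning rate, decay term and layer sizes are made-up numbers just for illustration, and the decay is only there because the plain rule lets the weights grow without bound.

import numpy as np

def hebbian_step(w, pre, post, lr=0.01, decay=0.001):
    # w:    weight matrix (post-neurons x pre-neurons)
    # pre:  presynaptic activity vector
    # post: postsynaptic activity vector
    # "Cells that fire together wire together", plus a small decay term.
    return w + lr * np.outer(post, pre) - decay * w

rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.1, size=(3, 5))   # 5 inputs feeding 3 simple linear "neurons"
for _ in range(100):
    pre = rng.random(5)
    post = w @ pre
    w = hebbian_step(w, pre, post)

Writing it out like this maybe makes the point clearer: there's no error signal and no notion of sequence or timing anywhere in it.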
Haven't read the book myself, but I like NN's. One of the things I have on my mind is the switching of modes. Like you say, pina, when does the hand decide to move and such. I view that as a discrete decision, so that is a mode. Examples are moving a hand, moving a foot, walking, running, dancing, etc. In each of these modes we are doing something that a neural network can model. We might view this as switching between neural network models or other types of models. A higher decision-making process determines the sequence of modes. It's just an idea I have, but I think there is a way the brain models these discrete modes and the dynamic, time-driven behaviors within them. The interesting questions are how we learn new modes and how we relate them to the time-driven stuff. Well, when I get time I am trying to develop a framework for AI along these lines... too many discrete modes at work right now for me to think straight.
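Something like this is what I have in mind, as a rough Python sketch. The mode names and the selection rule are completely made up, just to show the structure - one learned model per mode plus a higher-level process that picks which one is active each tick.

# One model per discrete mode, plus a higher-level selector that decides
# which mode is active. The per-mode models are stubbed with lambdas here;
# in practice each could be its own neural network.

class ModeController:
    def __init__(self, modes):
        self.modes = modes                 # mode name -> callable model
        self.active = next(iter(modes))

    def select_mode(self, state):
        # Stand-in for the "higher decision making process": a threshold
        # rule here, but it could be another network, a state machine, etc.
        return "run" if state.get("threat", 0.0) > 0.5 else "walk"

    def step(self, state):
        self.active = self.select_mode(state)
        return self.modes[self.active](state)

controller = ModeController({
    "walk": lambda s: {"speed": 1.0},
    "run":  lambda s: {"speed": 3.0},
})
print(controller.step({"threat": 0.8}))    # -> {'speed': 3.0}, i.e. the 'run' mode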

In a game, we are modeling a world so the more detailed and open it is, the more interesting behaviors that can arise. Anyway, I'm so tired, I'm not sure any of this made sense. Talk to ya later.

[Edited by - NickGeorgia on July 20, 2005 5:50:58 PM]
Thanks, yeah I know of these models. I am considering all kinds. To be more general, I am looking at so-called "hybrid" systems, which mix both time- and event-driven dynamics. Right now, at the top level I am considering Petri nets, and at the low level some kind of AI framework that can learn, such as NN's, Fuzzy Neural Networks, Neuro-Fuzzy, etc. The hard part is making the Petri net system pick up new modes in a learning scheme. I'm still considering the others as possibilities, and I'm pretty sure there is a type of Petri net floating out there with something similar to Hidden Markov models. Hybrid Petri nets might be suitable since they could do something like switched linear systems, but I don't know if the framework has been developed enough. Also, I might try a Perceptron NN at the top level (levels of abstraction) and Multilayer Neural Networks at the low level (the predictions, etc.)
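As a very rough Python sketch of what I mean by the two layers - a toy Petri net on top for the discrete, event-driven part, and stubbed low-level models underneath. This is just structural scaffolding, not a real hybrid Petri net, and all the names are invented.

# Discrete layer: a minimal Petri net (places hold tokens, transitions move them).
class PetriNet:
    def __init__(self, marking, transitions):
        self.marking = dict(marking)       # place -> token count
        self.transitions = transitions     # name -> (input places, output places)

    def enabled(self, name):
        inputs, _ = self.transitions[name]
        return all(self.marking.get(p, 0) > 0 for p in inputs)

    def fire(self, name):
        if not self.enabled(name):
            return False
        inputs, outputs = self.transitions[name]
        for p in inputs:
            self.marking[p] -= 1
        for p in outputs:
            self.marking[p] = self.marking.get(p, 0) + 1
        return True

net = PetriNet(
    marking={"idle": 1},
    transitions={
        "spot_target":  (["idle"], ["tracking"]),
        "close_enough": (["tracking"], ["attacking"]),
    },
)

# Continuous, time-driven layer: one low-level model per place; stubs here,
# but each could be an NN, a fuzzy system, etc.
low_level = {
    "idle":      lambda dt: "wander",
    "tracking":  lambda dt: "steer toward target",
    "attacking": lambda dt: "swing",
}

net.fire("spot_target")                    # an event switches the discrete mode
for place, tokens in net.marking.items():
    if tokens:
        print(place, "->", low_level[place](0.016))

The part I don't have an answer for yet is how the net itself would learn new places and transitions, which is the learning-new-modes problem again.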

Edit: Also gotta make sure it's practical to use in a game, computation-wise.

[Edited by - NickGeorgia on July 20, 2005 11:04:13 PM]

