
Unsupervised Hebbian Learning

Started by June 21, 2010 11:40 PM
8 comments, last by alvaro 14 years, 5 months ago
I've been doing some reading about NNs and I've stumbled on the wonders of Hebbian learning. I love it so much because it seems to be the most accurate representation of neural plasticity. I'm looking to make an unsupervised, feed-forward Hebbian net to recognize commonly occurring patterns in noisy data.

I've spent a great deal of time scouring the web, yet there are still a few gaping holes in my understanding and in my decision about which algorithm to use.

Nearly everything I read seems to deal with Hebbian learning from a single-layer approach. Obviously, for an accurate model, multiple layers will be necessary. I've read many different articles describing completely different algorithms (Oja's rule, the generalized Hebbian algorithm, BCM theory, etc.). What are their pros and cons, and which is optimal in my situation (simplicity is key)? I have also found almost nothing describing the proper activation functions to use!
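For anyone landing on this thread with the same question: the simplest of the rules named above is Oja's rule, which is plain Hebbian learning plus a weight-decay term that keeps the weights bounded. Here is a minimal single-unit sketch in Python/NumPy, with toy data invented for illustration (the `direction` vector and all parameters are assumptions, not anything from a specific application):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data: noisy 2-D samples sharing one dominant direction.
direction = np.array([3.0, 1.0]) / np.sqrt(10.0)
samples = (rng.normal(size=(2000, 2)) * 0.2
           + rng.normal(size=(2000, 1)) * direction)

w = rng.normal(size=2)   # random initial weights
eta = 0.02               # learning rate (chosen arbitrarily for the demo)

for x in samples:
    y = w @ x            # linear activation (Oja's rule assumes a linear unit)
    # Plain Hebb would be  w += eta * y * x,  which grows without bound.
    # Oja's rule subtracts y*w, which normalizes ||w|| toward 1:
    w += eta * y * (x - y * w)

# w converges toward the first principal component of the data.
print(w, np.linalg.norm(w))
```

The `- y * w` decay term is what separates Oja's rule from raw Hebbian learning; without it the weight vector diverges. The generalized Hebbian algorithm (Sanger's rule) is the multi-unit extension of this same update, extracting successive principal components.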

You'd think there would be simple answers to these questions but despite how much I've read into this I'm still having a tough time getting past the technical jargon. The deeper I read the farther I get from understanding. I've read enough to know what to do yet I lack crucial details to actually do it. What I really need is someone experienced enough to answer some pointed questions I have and maybe bounce ideas off of. Any help is much appreciated.

You could link me to a tutorial/article but chances are I've already read it.

Yes, I read the thread about NN threads. Hebbian learning isn't your typical NN and it never stated there was an outright ban. So I'm making this thread anyway.

[Edited by - SystemsLock on June 22, 2010 12:11:34 PM]
How does this thread relate to Game AI?

"commonly occurring patterns in noisy data"

Think of a system which can discover the player's most-used strategies and determine which one he is using. That data can then be passed on to a simpler AI system that can make use of it to better the gaming experience.

I'll admit I won't fully understand its gaming practicality until I can actually build one, but gaming was my initial idea.
On a lark, I'll posit a plausible possibility.

Game AI is broadly defined. I've seen many people who use game engines to do, for example, population simulations or research for the sake of research. In these cases, you may be using an AI for purposes significantly different than what you'd see in a standard 'video game'. Places to go for such questions are few and far between - I've had a hard time finding professors to ask at my university (the Dean is into it, but his time is hard to get for an undergrad).

Someone who is interested in a subject but unable to learn more about it will exploit whatever avenues they have available to them. Game Dev is a *great* one, as evidenced by how helpful you were to me, Alvaro.

Also, fuzzy state machines can use NNs to make assessments about the probability of state change... though it's not a methodology I've seen implemented or discussed much. I'd imagine it might be because it has a tendency to combine the worst of both - manual work structuring the state machine, and the black-box nature of NNs.


@SystemsLock

I don't know. It's on my queue of things to research, but a lot of things just got pushed to the front - sorry. I do, however, have a book called "The Nonlinear Workbook", and it's been very handy for me. I am self-taught and love its dense but precise wording. Here's the TOC; check out ch. 11: http://www.worldscibooks.com/etextbook/5790/5790_toc.pdf
It's not like I am trying not to be helpful on purpose. I just don't see how the question is interesting. As was described in the "Warning on NN threads" thread, this sounds like a solution in search of a problem. This seems to be acceptable in academia, but I really don't understand why.

If you have a problem to solve and you believe Hebbian learning is a promising approach, that's one thing. But just being captivated by the sexiness of a method and then trying to find places to apply it is IMO the wrong way to go about things.

If you find a problem where Hebbian learning performs well, you'll probably have the usual problems with ANNs: You don't know why it's working, and you have no way of tweaking its behavior or fixing it when it stops working. This is as true in video games as it is in many other applications.

Besides, I know nothing about Hebbian learning. :)
Well, that is attributable to a difference in personalities.

When the laser was first created, it was called a solution in search of a problem. And now, the myriad of things that we use them for is staggering.

Given current computing power trends, full simulation of a human brain will be possible in just a couple decades. The human brain uses Hebbian learning.

From my point of view, NOT familiarizing yourself with Hebbian learning on the basis that ANNs suck right now is silly.

---

I checked my copy of the Nonlinear Workbook, and it's got a section on Hebbian learning in the neural-net chapter, along with some alternative approaches. I sent you a small excerpt on Hebbian learning via private message.

[Edited by - the_absent_king on June 22, 2010 5:08:44 PM]
Ha! Very witty.

Thanks for the link, it seems interesting...
Quote: Original post by the_absent_king
When the laser was first created, it was called a solution in search of a problem. And now, the myriad of things that we use them for is staggering.

The fact that someone was wrong when making that observation about the laser doesn't mean that everyone who makes similar observations will be wrong. ANNs have been around for a long time and they get a lot of people excited, but they haven't delivered a whole lot.

Quote: Given current computing power trends, full simulation of a human brain will be possible in just a couple decades.

This is a common misconception about the limitations of ANNs: It's not a matter of lack of power. ANNs are parametric functions often with gazillions of parameters to adjust, together with some optimization algorithms to adjust the parameters. The only connection with how a brain works is in the name. I don't have any evidence that an increase in complexity or computing power leads to qualitatively smarter behavior.

Quote: The human brain uses Hebbian learning.

You don't know that.

Quote: From my point of view, NOT familiarizing yourself with Hebbian learning on the basis that ANNs suck right now is silly.

I think ANNs with Hebbian learning are a shot in the dark. I think it's a bit like the people that were trying to fly by building machines that imitated birds, with flapping wings. Eventually the problem of heavier-than-air flight was solved, but the machines that solved it didn't work like birds do. Similarly, when we make machines that can do most of the things that humans can do, they will probably not work like humans at all.

I like the idea of setting up a challenge (in the 1950s chess and natural language processing were proposed) and then trying to engineer the best possible solution. Imitating nature doesn't seem like a very promising approach to me.

Quote: Original post by alvaro
Quote: From my point of view, NOT familiarizing yourself with Hebbian learning on the basis that ANNs suck right now is silly.

I think ANNs with Hebbian learning are a shot in the dark. I think it's a bit like the people that were trying to fly by building machines that imitated birds, with flapping wings. Eventually the problem of heavier-than-air flight was solved, but the machines that solved it didn't work like birds do. Similarly, when we make machines that can do most of the things that humans can do, they will probably not work like humans at all.


I think both of you will appreciate the humor in this.
Quote: Original post by Ezbez
I think both of you will appreciate the humor in this.


That was funny. :)

This topic is closed to new replies.
