Which Neural Network would be most suited?

Started by July 03, 2008 05:24 AM
16 comments, last by Timkin 16 years, 4 months ago
Ah I see. Well, I've delved deeper into Markov chains and it seems that this is the way forward. It's quite difficult to find practical information on Markov chains as a lot of it is academic. Does anyone have good resources (esp. introductory) which are more programming orientated? I can find some code snippets, but preferably an explanation so I can run with it and do my own thing.
Thanks.
Quote: Original post by phi
It's quite difficult to find practical information on Markov chains as a lot of it is academic.


What would you consider 'practical information'?
Something like this:
http://www.ai-junkie.com/ann/evolved/nnt1.html

but related to Markov chains. A lot of examples are very mathematical, which is good when it comes to understanding, but difficult to implement efficiently. I might be asking for too much in that regard, but if there's something more programming orientated than "proof" orientated, that would be good.

Thanks for the help.
This is kind of funny, because it looks like you asked a very similar question on NNs almost 2 years ago, and were redirected towards Markov methods back then too.

It's true that Markov Chains - as indeed with most AI methods, to be honest - are often described in complex mathematical terms with the emphasis on proving effectiveness at the expense of simplicity.

In the article on Markov chains, there's a load of mathematical wording which isn't that important to you. What is important is this line: "Markov chains are often described by a directed graph, where the edges are labeled by the probabilities of going from one state to the other states." When a user chooses one of your images, they are in a unique state, specific to having just chosen that image. When they choose the next image, they move to that state. The probability of moving from one state to the next is thus the probability of choosing one image after a given previous image. So, you need to implement one state per image, and each of those states must store the (relative) probability of moving to any of the other states.
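That "one state per image, with relative probabilities of moving to the others" idea can be sketched in a few lines of Python. This is a minimal illustration, not code from the thread: the image labels (CAR, ACTOR, etc.) and function names are hypothetical, and a real system would use whatever identifiers the app already has.

```python
import random
from collections import defaultdict

# counts[prev][next] = how many times 'next' was chosen right after 'prev'.
# Storing raw counts gives the relative transition probabilities for free.
counts = defaultdict(lambda: defaultdict(int))

def record_choice(prev_image, next_image):
    """Update the chain each time the user picks an image."""
    counts[prev_image][next_image] += 1

def predict(prev_image):
    """Return the most likely next image, or None if this state is unseen."""
    followers = counts[prev_image]
    if not followers:
        return None
    return max(followers, key=followers.get)

# Hypothetical training data
record_choice("CAR", "ACTOR")
record_choice("ACTOR", "ANIMAL")
record_choice("ACTOR", "ANIMAL")
record_choice("ACTOR", "FILM")

print(predict("ACTOR"))  # ANIMAL (followed ACTOR twice, vs. once for FILM)
```

Instead of always taking the most frequent follower, you could also sample from the counts (e.g. with `random.choices`) to get varied rather than deterministic predictions.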
Thanks Kylotan! I completely forgot that I had asked the question before. I had this idea at the beginning of uni, but work got in the way and it found its way into the ether. However, I now have some time and decided to resurrect it.
I noticed that the Markov chain only takes into account the previous image. If they were layered, would I be able to have it remember a previous chain of images rather than just the one? For example, a user clicks CAR ---> ACTOR ---> ANIMAL, but another time they click FILM ---> ACTOR, and the system would predict "ANIMAL" since it previously followed straight from ACTOR, completely ignoring the CAR/FILM. Is it possible to remember a chain of states rather than just the last one?
The Markov chain by definition only takes into account the previous 'state' - you'll see references to that in the technical description. However, you get to define what constitutes a state however you like. So, that state could in fact be the previous 2 (or more) images, should you so choose. Rather than looking at the 'actor' state and predicting 'animal', it would look at the 'film->actor' state and predict whatever else was there. Note that this increases the number of potential states, however, necessarily making your training data sparser, and so you may want to use some sort of optimisation to aggregate states together.
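The "state = previous two images" trick above amounts to keying the transition table on a pair instead of a single image. A minimal sketch, again with hypothetical labels and helper names:

```python
from collections import defaultdict

# Second-order chain: the state is the previous TWO images, so
# ("FILM", "ACTOR") and ("CAR", "ACTOR") are distinct states
# with separate statistics - exactly the distinction asked about.
counts = defaultdict(lambda: defaultdict(int))

def record(prev2, prev1, chosen):
    """Record that 'chosen' followed the pair (prev2, prev1)."""
    counts[(prev2, prev1)][chosen] += 1

def predict(prev2, prev1):
    """Most likely next image given the last two, or None if unseen."""
    followers = counts[(prev2, prev1)]
    return max(followers, key=followers.get) if followers else None

# Hypothetical training data
record("CAR", "ACTOR", "ANIMAL")
record("FILM", "ACTOR", "DIRECTOR")

print(predict("CAR", "ACTOR"))   # ANIMAL
print(predict("FILM", "ACTOR"))  # DIRECTOR
```

The cost is what the post warns about: with N images, a first-order chain has N states but a second-order one has up to N², so each state sees far less training data unless similar states are aggregated somehow.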

PS. What uni are you at?
It's finally clicked! :D Thanks for the help. I'm sure I'll be back lol, but I think I'm on some sort of path now to do some work of my own. I'm at Nottingham Uni, which I just realised is in your hometown!
Quote: Original post by Kylotan
What is important is this line: "Markov chains are often described by a directed graph, where the edges are labeled by the probabilities of going from one state to the other states."


More correctly, that should read: there is a one-to-one relationship between a first-order Markov chain and a finite-state automaton. As Kylotan noted, one cannot easily represent a k-th order chain with a finite-state automaton (and one probably shouldn't try for a non-trivial problem).

Cheers,

Timkin
