
Markov Chains

Started by April 26, 2001 12:43 PM
22 comments, last by Nelson 23 years, 6 months ago
Hi all, Recently I read an article by Eric Dybsand (Game Developers Conference 2001: An AI Perspective) on Gamasutra, and there is an AI topic I have never even heard of: "Markov chains". (BTW, I am a junior AI programmer!!) I wonder if someone can provide me some information or a website regarding Markov chains. Thank you very much!! Nelson
quote: Original post by Nelson

Hi all,

Recently I read an article by Eric Dybsand (Game Developers Conference 2001: An AI Perspective) on Gamasutra, and there is an AI topic I have never even heard of: "Markov chains". (BTW, I am a junior AI programmer!!) I wonder if someone can provide me some information or a website regarding Markov chains. Thank you very much!!

Nelson


A 5-second web search using AltaVista revealed 6,249 hits on "Markov chains". Some of the first of them were:

http://www.ms.uky.edu/~viele/sta281f97/markov/markov.html
http://forum.swarthmore.edu/workshops/sum96/discrete/markov.html
http://www.maths.cam.ac.uk/undergrad/courseinfo/coursesIIA/text/node1.html

(I tried to pick out some of the more introductory ones in tone)

Also, it's my opinion that all AI programmers (junior or otherwise) would be well served to learn how to use a web search when they have questions like this one.

Eric

"You can give a man a fish, and feed him for a day. Or you
can teach a man to fish, and feed him for a lifetime" - Multi-Cultural Saying
Sorry and Thanks!

I heavily rely on Yahoo, and there were only two results. I should have tried another search engine. Sorry for being so naive!! I take your message very seriously (as encouragement!!), and I will learn!! Thanks, and please teach me more!!

Nelson
quote: Original post by Geta
Also, it's my opinion that all AI programmers (junior or otherwise) would be well served to learn how to use a web search when they have questions like this one.


Agreed 100%; sometimes I feel *I* should write a FAQ for my fellow students who keep asking me for help.

BUT, there is quite a lot of material, especially once you reach a certain level of education, that you just can't really find on the Net. Introductory material, mainly.
For instance, I found tons of verbose material on Hidden Markov Models, but I still haven't a freaking clue what the damn thing is, because not once did I ever see a reference to introductory material...

So don't be too snappy with beginners

youpla :-P

-----------------------------Sancte Isidore ora pro nobis !
quote: Original post by ahw

BUT, there is quite a lot of material, especially once you reach a certain level of education, that you just can't really find on the Net. Introductory material, mainly.
For instance, I found tons of verbose material on Hidden Markov Models, but I still haven't a freaking clue what the damn thing is, because not once did I ever see a reference to introductory material...

So don't be too snappy with beginners

youpla :-P



There are 3 introductory level URLs in my response above.

As to my alleged snappiness, I noticed you snipped the quote where I pointed out the benefit of learning how to learn by one's own actions? Anyway, I'm sure all readers would appreciate if you would take comments like yours to email if you feel you must jump in with a personal opinion about the style anyone uses to answer questions.

Eric


Edited by - Geta on April 29, 2001 4:03:35 PM
hey Eric, didn't YOU just post a reply to the board that was a personal opinion about ahw's message to you?

I personally (yes that means this is an opinion, are you gonna bite my head off too?) think you were pretty tough on Nelson.

I have some experience with both MMs and HMMs, and I can tell you I would have really appreciated it if a message board like this could have pointed me in the right direction when I was starting.

Nelson, if you're having any problems understanding the stuff on MMs, post back here and I'll see if I can give you a hand.

kiev


Mmm, well, I've just started reading GameDev.net and already I can contribute something! Cool!

This topic is not something that can be covered completely in one post. I would be happy to answer questions and continue this thread for as long as I have something to contribute.

A brief introduction to Markov Chains
(also called Markov Models; closely related to Hidden Markov Models).

Markov Chains are models for the temporal evolution of a system. Let's start with the fundamental underlying assumption of a Markovian system:

"The future is independent of the past, given the present."

This means that any future state of the system can be predicted entirely from knowledge of the present state; no knowledge of the past is required.

Take for example the trajectory of an object. If you know its current position and velocity vector, then you can predict where it will be in the future. You don't need to know where it was before its current state.

A Markov Chain is therefore entirely determined by two things:
1) Its initial state; and,
2) Its transition function.

The transition function is merely an operator that, given a present state, transforms the system to another state. This might be represented as a finite state machine or perhaps an analytic mathematical model. To continue the trajectory example, we may first assume linear dynamics for our object and then hold that
x(t+dt) = x(t) + v(t)*dt

(x is 1-D position and v is 1-D velocity)

More formally, and perhaps a little more correctly, we would say that dx = v * dt, so that dx is the operator we apply to our current state. Because the system is linear we have:
x(t+dt) = x(t) + dx.
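As a sketch, this discrete update can be iterated in a few lines of Python. The names x, v and dt follow the post; the particular numbers are illustrative assumptions, not from the original:

```python
def step(x, v, dt):
    """One Markov transition: the next position depends only on the
    present state (x, v), never on any earlier history."""
    return x + v * dt

# Illustrative initial conditions (assumptions for the sketch):
x, v, dt = 0.0, 2.0, 0.1
for _ in range(10):      # run the chain for ten time steps
    x = step(x, v, dt)
print(x)                 # ~2.0 after one simulated second
```

Note that step() never looks at past positions; that is exactly the Markov property in code.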

The Markov chain arises because we run this system over many such time steps. Markov models are also often represented using graph models. A graph model for the trajectory problem would be:

--->[v(t-1)]--->[v(t)]--->[v(t+1)]--->
        \          \          \
         v          v          v
--->[x(t-1)]--->[x(t)]--->[x(t+1)]--->

Where a node in the graph [] represents a problem variable and an arrow indicates causal influence. So, for example, x(t) depends on x(t-1) and v(t-1) and on no variables at earlier time steps.

So, how could you use this concept in game development?

A common Markov Chain used in games is a finite state machine where state changes occur at discrete time intervals. FSMs are not particularly interesting when the transitions between states are deterministic: you could replace the FSM with a set of if-then rules and achieve the same thing (although perhaps less efficiently).

Markov Models become interesting when you represent the states of the system by probability distributions over the possible values of the state. Then the transition function is given as a conditional probability distribution. Such models are often called Belief Networks and when they represent systems that evolve over time, they are called Dynamic Belief Networks (DBNs).
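As a minimal sketch of such a probabilistic transition function, here is a hypothetical NPC behaviour chain in Python. The state names, probabilities and random seed are invented for illustration and are not from the original post:

```python
import random

# Each row is P(next state | current state) for a hypothetical NPC.
# Probabilities in each row sum to 1.
TRANSITIONS = {
    "patrol": {"patrol": 0.7, "chase": 0.2, "rest": 0.1},
    "chase":  {"chase": 0.6, "patrol": 0.3, "rest": 0.1},
    "rest":   {"rest": 0.5, "patrol": 0.5},
}

def next_state(state, rng):
    """Sample the next state from the conditional distribution
    P(next | current) -- the Markovian transition step."""
    states = list(TRANSITIONS[state])
    weights = [TRANSITIONS[state][s] for s in states]
    return rng.choices(states, weights=weights, k=1)[0]

rng = random.Random(42)          # seeded for reproducibility
state = "patrol"
history = [state]
for _ in range(5):               # run the chain for five steps
    state = next_state(state, rng)
    history.append(state)
print(history)
```

Running the chain many times from the same start state gives a distribution over behaviours rather than one fixed script, which is what makes the NPC feel less predictable than a deterministic FSM.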

DBNs are a very powerful tool for modelling the beliefs of agents and for modelling the evolution of a system where there is some uncertainty involved (certainly the case when your NPC is interacting with a human!).

You can find good coverage of the topic of Markov models and Belief Networks in "An Introduction to Artificial Intelligence", Russell & Norvig (it's referenced in the books section of GameDev.net).

I'd be happy to elaborate further on this topic or answer specific questions if they arise.

Regards,

Tim

CORRECTION:

Why is it that I always get this book confused with another I have! The correct title of Stuart Russell's book is

"Artificial Intelligence: A Modern Approach", S. Russell & P. Norvig., Prentice Hall.

There is a second edition of this book in the works, however, Stuart and his wife recently had an addition to the family, so word is that the revised edition is going slowly!

Tim
quote: Original post by kiev_class

hey Eric, didn't YOU just post a reply to the board that was a personal opinion about ahw's message to you?

I personally (yes that means this is an opinion, are you gonna bite my head off too?) think you were pretty tough on Nelson.

I have some experience with both MMs and HMMs, and I can tell you I would have really appreciated it if a message board like this could have pointed me in the right direction when I was starting.

Nelson, if you're having any problems understanding the stuff on MMs, post back here and I'll see if I can give you a hand.

kiev



Yeah kiev, that is the problem that arises when people (ahw first, then me, and now you) begin to make personal comments to other people in public. It destroys the information exchange of the board, is rude, and should be taken to email in the first place. If you want to continue swapping personal comments, then let's take it to email.

BTW, if providing an answer to Nelson's question (which I did) and offering a suggestion to the programmer community at large is being "tough on Nelson", then I'm sure he's capable of standing up and saying so, right? I'd suspect he doesn't really need your protection.

Anyway, I think the best thing to do is for the discussion of Markov Chains to take place here, and your (and my, and anyone else's) personal comments should go to email. Don't you?

Eric

This topic is closed to new replies.
