predator-prey simulation

Started by February 24, 2006 05:21 AM
3 comments, last by andyjwilliams 18 years, 9 months ago
Hello, I wonder if you could help me. I'm currently looking into developing a predator/prey behaviour simulation in which the behaviours of the entities evolve over time. I have no problem with this technically, but I wanted to know a little about what has already been done: mainly, what are the standard ways of representing or holding the decisions of the entities, and what types of environments are used?

I was planning on using a decision tree to process the entity's inputs and come to a decision on the next course of action, but I wanted to know what alternatives other people have used, as there may be better ways of doing this. Can you point me towards any resources, books, web pages or journals which might lead me to this information?

Many thanks,
Andy
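A hand-built decision tree along the lines you describe might look like the following minimal sketch. All of the names here (the `Node` class, the sense keys, the action strings) are illustrative assumptions, not a standard representation:

```python
class Node:
    """Internal nodes test a predicate on the senses; leaves hold an action."""
    def __init__(self, test=None, yes=None, no=None, action=None):
        self.test, self.yes, self.no, self.action = test, yes, no, action

    def decide(self, senses):
        if self.action is not None:          # leaf: return the chosen action
            return self.action
        branch = self.yes if self.test(senses) else self.no
        return branch.decide(senses)

# Example tree for a prey entity: flee if a predator is near,
# otherwise eat if food is close, otherwise wander.
prey_tree = Node(
    test=lambda s: s["predator_dist"] < 5.0,
    yes=Node(action="flee"),
    no=Node(
        test=lambda s: s["food_dist"] < 2.0,
        yes=Node(action="eat"),
        no=Node(action="wander"),
    ),
)

print(prey_tree.decide({"predator_dist": 3.0, "food_dist": 1.0}))  # flee
```

For evolution you would mutate the tests and actions at the nodes rather than hard-coding them as above.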
I started reading this book, and I love it.

Chapter 3 is about predator-prey simulation (take a look at the big_shoal examples), and you can download the binaries and the source code.
Cheers
StratBoy61
Cheers, I'll take a look.
Many years ago (during my honours year at Uni) I wrote a predator-prey simulation involving learning. The prey was dumb and bounced around within a finite domain; the predator had to learn how to move to catch it. The predator was given the angle to the target relative to its own heading, and the distance (which was only approximate if the target was in the rear hemisphere, i.e. not visible). Local reward was obtained whenever the predator decreased the angle and/or distance to the target, and evolution was based on the rate at which the predator could repeatedly catch the prey.
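The sensing and local-reward scheme described above could be sketched as follows. The function names and the exact reward values are assumptions for illustration, not the original implementation:

```python
import math

def relative_bearing(pred_pos, pred_heading, prey_pos):
    """Angle to the prey relative to the predator's heading, wrapped to (-pi, pi]."""
    dx = prey_pos[0] - pred_pos[0]
    dy = prey_pos[1] - pred_pos[1]
    angle = math.atan2(dy, dx) - pred_heading
    return math.atan2(math.sin(angle), math.cos(angle))  # normalise the wrap-around

def local_reward(old_angle, old_dist, new_angle, new_dist):
    """Reward the predator for reducing |bearing| and/or distance, as in the post."""
    r = 0.0
    if abs(new_angle) < abs(old_angle):
        r += 1.0
    if new_dist < old_dist:
        r += 1.0
    return r
```

Approximating the distance when the prey is in the rear hemisphere (|bearing| > pi/2) could be layered on top, e.g. by adding noise to the true distance in that case.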

I used a Holland-style Classifier System (CFS) for the predator and had good success with learning. You could use any manner of rule-encoding scheme though, and a decision tree sounds like a good idea to me. What I found important is the information you give the agent and how you deal with the delayed reward problem (paying off early actions that may not obtain immediate reward from the domain). I did this by setting the objective of getting closer to the prey, on the assumption that this also satisfied the 'catching the prey' objective. What is particularly important is how you encode the domain!
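A toy version of the rule-encoding idea in a Holland-style classifier system: conditions are strings over {'0', '1', '#'} ('#' is a wildcard) matched against a binary sensor string, and rule strengths are adjusted by reward. Every detail below is an illustrative assumption, not the actual system described above:

```python
def matches(condition, sensors):
    """A condition matches if every position is '#' or equals the sensor bit."""
    return all(c in ("#", s) for c, s in zip(condition, sensors))

# Population of rules: [condition, action, strength]
rules = [
    ["1#0", "turn_left", 1.0],
    ["1#1", "turn_right", 1.0],
    ["0##", "forward", 1.0],
]

def act(sensors):
    """Return the strongest rule matching the current sensor string, if any."""
    candidates = [r for r in rules if matches(r[0], sensors)]
    return max(candidates, key=lambda r: r[2]) if candidates else None

def reinforce(rule, reward, rate=0.1):
    """Nudge a rule's strength toward the received reward. A real CFS would
    use a bucket-brigade scheme to pay off earlier rules in the action chain,
    which is the delayed-reward problem mentioned above."""
    rule[2] += rate * (reward - rule[2])
```

A genetic algorithm over the condition strings (crossover plus '#' mutation) would then evolve the rule population between episodes.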

Cheers,

Timkin
Thank you Timkin, that is actually very encouraging.

This topic is closed to new replies.
