A-ha! I know what you mean now.
I'd been reading about vectors and 3D maths, so my head was full of equations and I thought you were talking about something more mathematical. For some reason all I could think of was the function part of "fitness function", and "minimising" took on a very mathsy vibe too. :P
D'oh!
Using a GA to control weights in a Neural Network - rating fitness
You are actually minimising a function... you're just representing it point-wise.
My suspicion is that your bot is moving in a straight line (either horizontal or diagonal) and the actual line between the start and the goal does not fit either of these cases. If so, then my intuition is that your objective function (the thing you are minimising) is too heavily weighted toward finding the path with the fewest moves, so your bots are favouring straight lines. One way to test this is to remove diagonal moves from your bots. The way to fix the problem (if this is indeed the cause) is to change your fitness evaluation (objective) function. I'd make the function invariant to the problem by normalising it with respect to the starting distance. Here's one possibility...
f(x, y, n) = \frac{(x - x_g)^2 + (y - y_g)^2}{(x_s - x_g)^2 + (y_s - y_g)^2} \cdot \log(a n)
where (x_s, y_s) is the starting coordinate for the problem and (x_g, y_g) is the goal coordinate, (x, y) is the position of the bot at the time of evaluation, n is the number of actions it has taken to get from the start to its position, and a is a positive constant you can alter to change the weighting between distance minimisation and action minimisation.
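In code, that might look something like this quick Python sketch (the function name and the default value of a are just placeholders; it assumes n >= 1, a*n > 1 so the log term stays positive, and that the bot doesn't start on the goal):

```python
import math

def fitness(x, y, n, xs, ys, xg, yg, a=2.0):
    """The objective above: normalised squared distance to the goal,
    scaled by a log penalty on the number of actions taken.
    Lower is better (this is the quantity being minimised)."""
    dist_sq  = (x - xg) ** 2 + (y - yg) ** 2     # squared distance from bot to goal
    start_sq = (xs - xg) ** 2 + (ys - yg) ** 2   # squared start-to-goal distance
    return (dist_sq / start_sq) * math.log(a * n)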
Obviously there are many other possibilities for an evaluation function and certainly many that would be better. This is just one off the top of my head that I think will work for you.
Cheers,
Timkin
Hurst and Bull applied a neural network to a learning classifier system to solve a simple Woods maze ("A Self-Adaptive Neural Learning Classifier System with Constructivism for Mobile Robot Control"). That may not be the original source, but I believe it references it.
Hope this helps in some way
Regards, Wolfe
I apologize in advance for bringing this thread back to life :)
This discussion about how to calculate fitness for evolving a FF Neural Net is fantastic. I've only recently gotten into Neural Nets and only have experience in training them with Genetic Algorithms. I really don't have any problems or difficulties to discuss; I would just like to talk about better ways of training NNs through Genetic Evolution.
I've created a modular system where sensors and output devices (directional sensors, altimeters, engines, things like that) can easily be attached to and removed from a net. It gave me an excellent playground for working with NNs.
But I have noticed it is difficult to train more "complex" behavior with the standard fitness functions. It seems to me the environment used to train an NN must get more difficult as the NNs perform better, kind of like always keeping their goal just out of reach. Does anyone have experience with this sort of training?
I'm working to create an environment where a user can actively modify the fitness rules. For example, say you think the ground-based vehicles are sufficient at locating and finding their target goals, so you start respawning goals at greater distances, then start putting obstructions along their paths, etc.
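As a sketch of the "greater distances" part (spawn_goal and the distance bounds here are made up, and success_rate is assumed to be the population's recent success fraction):

```python
import math
import random

def spawn_goal(bot_pos, success_rate, base_dist=10.0, max_dist=200.0):
    """Spawn the next goal farther from the bot as the population's recent
    success rate climbs toward 1.0, keeping the task just out of reach."""
    rate = min(max(success_rate, 0.0), 1.0)         # clamp to [0, 1]
    dist = base_dist + (max_dist - base_dist) * rate
    angle = random.uniform(0.0, 2.0 * math.pi)      # random direction
    return (bot_pos[0] + dist * math.cos(angle),
            bot_pos[1] + dist * math.sin(angle))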
I've started thinking about this "crawl, walk, run" method because my helicopters are always having a difficult time evolving. They are constantly crashing and exploding, haha. So I think for them you must start with a fitness that focuses on staying in the air first, then move on to more complex things like locating targets, landing, shooting, etc. What are your thoughts?
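Here's roughly what I mean by staged fitness (all names hypothetical; each stat is assumed normalised to [0, 1], higher = better):

```python
# "Crawl, walk, run": later stages only start scoring once earlier ones pass.
STAGES = [
    ("stay_airborne", 1.0),     # (stage name, weight)
    ("approach_target", 1.0),
    ("land_safely", 1.0),
]

def staged_fitness(stats):
    """stats example: {'stay_airborne': 0.95, 'approach_target': 0.4,
    'land_safely': 0.0}, each value in [0, 1]."""
    total = 0.0
    for name, weight in STAGES:
        score = stats[name]
        total += weight * score
        if score < 0.9:   # gate: don't score later stages until this one is mastered
            break
    return total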
The images are really there to break up the post a little and hopefully give an idea of what I'm working on. The IP changes sometimes (boo!) but I'll try to keep it updated. Oh, and since this is my first post here: hello everyone :)
I've been working on a better UI in order to change the fitness function in real time. I'm still very interested in anyone's ideas on training NNs better with Genetic Programming by scaling the fitness function. I promise you, I've read books and done my Google research; I'd like to hear what you guys think. What do you think about tackling a complex problem where the fitness function reflects many different goals?
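To make that concrete, here's the kind of thing I have in mind (hypothetical names; the weights dict is the part the UI would edit live between generations):

```python
# Fitness as a weighted sum of several goal scores.
goal_weights = {
    "reach_goal":  1.0,    # normalised progress toward the target
    "stay_intact": 0.5,    # 1.0 if the vehicle survived the episode
    "fuel_used":  -0.2,    # negative weight penalises consumption
}

def multi_goal_fitness(metrics, weights=goal_weights):
    """metrics maps each goal name to a score in [0, 1]; higher is better."""
    return sum(weights[name] * metrics[name] for name in weights)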
On a different note (and slightly off topic), why do NNs usually drive/fly/walk forward and not backward? My critters are perfectly capable of doing both (and sometimes do) but tend to choose forward 90% of the time. It's more philosophical than a math problem in my mind. With all random weights and thresholds between -1.0 and 1.0, and outputs squashed with tanh (i.e. between -1 and 1), why forward?