Impossible task for a neural network?
Hi all. It's been a long time since my last post here... but I guess that doesn't matter. ;)
Anyhow, I am working on a small soccer simulation where two teams are supposed to learn how to play soccer against each other. Right now they are learning low-level skills like passing, shooting, intercepting, etc. However, I am having _big_ problems with this.
What I do for the passing skill is train a neural network to output an angle and a force, both in the interval [0,1]. The angle is the direction in which to shoot the ball relative to the ball holder (used as -PI + nnAngle*2*PI). The force is a multiplier for the ball's maximum speed... quite simple really.
The input I use for this is a 4-tuple: the relative angle to the receiver, the distance to the receiver, the change in that angle over the next time step, and the change in that distance over the next time step. This way I get rotationally invariant data, and collecting training examples is simplified.
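In rough Python, the encoding and decoding look something like this (the names, coordinate conventions, and tuple layouts are made up for illustration, not my actual code):

```python
import math

def pass_features(holder, receiver, receiver_next):
    """Build the rotation-invariant input tuple described above.

    holder is an assumed (x, y, heading) tuple; receiver and
    receiver_next are assumed (x, y) positions now and one time
    step later.
    """
    dx, dy = receiver[0] - holder[0], receiver[1] - holder[1]
    angle = math.atan2(dy, dx) - holder[2]   # relative angle to receiver
    dist = math.hypot(dx, dy)                # distance to receiver
    ndx, ndy = receiver_next[0] - holder[0], receiver_next[1] - holder[1]
    next_angle = math.atan2(ndy, ndx) - holder[2]
    next_dist = math.hypot(ndx, ndy)
    return (angle, dist, next_angle - angle, next_dist - dist)

def decode_output(nn_angle, nn_force, max_ball_speed):
    """Map the two [0,1] network outputs to a shot, as in the post."""
    shoot_angle = -math.pi + nn_angle * 2 * math.pi
    speed = nn_force * max_ball_speed
    return shoot_angle, speed
```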
It seems to me there should be a strong correlation between the input and the output, and thus no problem for a NN to approximate such a function. But no, my results are really poor and the output is almost "random"...
I have tried various implementations of neural nets, so I don't think the problem is in the code either...
EDIT: I am using a feedforward network and training it with backprop.
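For reference, the setup is roughly this kind of plain feedforward/backprop loop. This is a self-contained toy version in Python/NumPy, not my actual code; the layer sizes, learning rate, and toy targets are all illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Tiny 4-8-2 feedforward net: 4 inputs (angle, dist, d-angle, d-dist),
# 2 sigmoid outputs (angle, force) in [0, 1]. Sizes are illustrative.
W1 = rng.normal(0, 0.5, (4, 8))
b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 2))
b2 = np.zeros(2)

def forward(X):
    h = np.tanh(X @ W1 + b1)
    return h, sigmoid(h @ W2 + b2)

# Toy training data: targets are a smooth function of the inputs,
# standing in for real (input, pass) examples.
X = rng.uniform(-1, 1, (200, 4))
Y = 0.5 + 0.25 * np.column_stack([np.sin(X[:, 0]), np.cos(X[:, 1])])

lr = 0.5
for epoch in range(5000):
    h, out = forward(X)
    err = out - Y                        # gradient of 0.5 * squared error
    d_out = err * out * (1 - out)        # through the sigmoid output
    dW2 = h.T @ d_out / len(X)
    db2 = d_out.mean(axis=0)
    d_h = (d_out @ W2.T) * (1 - h ** 2)  # through the tanh hidden layer
    dW1 = X.T @ d_h / len(X)
    db1 = d_h.mean(axis=0)
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

mse = float(((forward(X)[1] - Y) ** 2).mean())  # should end up well below the initial ~0.03
```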
Any and all suggestions, questions, and feedback are very appreciated :)
thank you
--Spencer
"All in accordance with the prophecy..."
It sounds like you have all the pieces in there, and that network should at least be able to give results that are close. Some thoughts:
What does your error value look like over time? (error value = difference between desired output and actual output). Does it go straight down and settle at a nice low number?
Maybe you're overtraining on the training set? A good way to check for this is to keep two sets: the real (training) set and a held-out test set. After training on the real set, you see how well the network does on the test set (you never actually train on the test set). If it does well on the real set but poorly on the test set, then it could be overtrained.
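In code, that check is just a random split before training (Python sketch; the data here is random filler and the model is whatever net you're training):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical dataset of (input, target) pairs; the 80/20 split logic
# is the point here, not the data itself.
X = rng.uniform(-1, 1, (1000, 4))
Y = rng.uniform(0, 1, (1000, 2))

idx = rng.permutation(len(X))          # shuffle so the split is unbiased
cut = int(0.8 * len(X))
train_idx, test_idx = idx[:cut], idx[cut:]

X_train, Y_train = X[train_idx], Y[train_idx]
X_test, Y_test = X[test_idx], Y[test_idx]

# Train only on (X_train, Y_train), then compare the error on both sets:
# low training error with high test error means the net is overtrained.
```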
I am doing a similar project. For what it's worth, the NN I am using (it works fairly decently, though nowhere close to realtime) uses hundreds to thousands of nodes, with five or six NNs feeding into each other. This is still an open research area in a way, because there is more than one right answer and results aren't always available.