learning by imitation?
I'm making a flight sim game, but with a very unconventional vehicle: a lunar lander.
Consequently, there is no textbook definition of the different flight maneuvers, and control is a rather complicated dance of balancing thrust, momentum, and gravity.
So, to achieve a given motion, there are a large number of variable inputs, and the control response needed is largely improvised from moment to moment. It took me a long time to learn to fly it myself, and what I do is purely intuitive/reflexive; I don't have a good description of just what makes it work.
I have no idea how I'd go about formalizing what I'm doing so that a computer-controlled version might also fly.
So the big question is, are there any AI methods that might help with this?
I was thinking... it'd be nice if there were an algorithm that could learn by imitating me. I'm envisioning something like an obstacle course that I fly through myself, then an algorithm that recognizes what I'm doing at each part of the course, breaks the curved path down into simpler motions and control data, and reuses those for flying arbitrary paths on its own.
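Concretely, here's a rough sketch of the kind of per-frame data I imagine recording while I fly (the `lander` and `controls` fields are just placeholders for whatever my engine actually exposes):

```python
# Log one (state, action) row per physics tick while flying manually.
# `lander` and `controls` are hypothetical stand-ins for engine objects.
import csv

def record_frame(writer, lander, controls):
    writer.writerow([
        lander.position.x, lander.position.y,  # where the ship is
        lander.velocity.x, lander.velocity.y,  # how it's moving
        lander.angle,                          # current orientation
        controls.thrust,                       # what I did: thrust 0/1
        controls.turn,                         # -1 left, 0 none, +1 right
    ])

with open("flight_log.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["px", "py", "vx", "vy", "angle", "thrust", "turn"])
    # call record_frame(writer, lander, controls) once per tick here
```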
I'm new to AI; does anyone know if what I describe is possible, and what direction I should go in to build something like that?
I guess this could be done with a neural network, using the vehicle's position, velocity, and maybe the nearest obstacle as inputs, and your response (i.e. thrust, turning angle, applied force) as the target output.
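Here's a rough sketch of that idea with scikit-learn, assuming you've already logged state/action rows to a CSV while flying the course yourself (the file name and column layout here are made up):

```python
# Behavioral cloning sketch: fit a small neural net that maps the
# lander's state to the controls you applied in that state.
import numpy as np
from sklearn.neural_network import MLPRegressor

data = np.loadtxt("flight_log.csv", delimiter=",", skiprows=1)
X = data[:, :5]   # state: position, velocity, angle
y = data[:, 5:]   # actions: thrust, turn

net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000)
net.fit(X, y)

# At runtime, feed in the current state each frame and apply the output.
state = X[0].reshape(1, -1)          # stand-in for the live state vector
thrust, turn = net.predict(state)[0]
```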
Hey! Just like the game that came with Windows 3.0!
You could train a few support vector machines (SVMs) in parallel, one for each control (thrust, left, right, etc.).
SVMs are much more reliable than neural networks, IMO.
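Untested sketch of what I mean, using scikit-learn and the same hypothetical CSV log as above; each control channel gets its own classifier that answers "pressed or not?" for the current state:

```python
# One SVM per control channel, each trained as a binary classifier.
import numpy as np
from sklearn.svm import SVC

data = np.loadtxt("flight_log.csv", delimiter=",", skiprows=1)
X = data[:, :5]                      # state features
channels = {                         # re-encode logged actions as 0/1 flags
    "thrust": (data[:, 5] > 0).astype(int),
    "left":   (data[:, 6] < 0).astype(int),
    "right":  (data[:, 6] > 0).astype(int),
}

svms = {}
for name, pressed in channels.items():
    clf = SVC(kernel="rbf")          # independent SVM for this channel
    clf.fit(X, pressed)
    svms[name] = clf

# In-game: query every SVM with the current state each frame.
state = X[0].reshape(1, -1)
actions = {name: bool(clf.predict(state)[0]) for name, clf in svms.items()}
```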