
Muscles?

Started by May 06, 2009 01:55 AM
7 comments, last by Predictor 15 years, 6 months ago
I was wondering if anyone could point me in the direction of any projects that exist where virtual characters must learn to coordinate their muscles via some sort of neural network or genetic algorithm, and their body has ragdoll type physics that they must learn to adapt to in order to become able to articulate their virtual selves in a meaningful and coherent fashion. Thanks!
Russell Smith's PhD thesis is similar to this, IIRC.
FramSticks might also fit the bill.

http://www.framsticks.com/
Quote: Original post by Hodgman
Russell Smith's PhD thesis is similar to this, IIRC.

That's amazing; I had no idea anyone had successfully done this! I've had a dream for about 10 years of seeing a game use such technology. I remember asking on this very forum whether such a thing was possible a few years back, and the answer was basically "no".
I wonder how much training and processing power it needed...

I actually came to this forum to ask about a very similar thing and saw this thread, so I'll post my question here:

If we want to use a neural net to allow simulated creatures to learn how to walk, how do we specify the target behaviour and/or the NN's 'correct' outputs? Thinking about real life, some animals (e.g. horses) learn to walk within a few hours of birth. Surely (without suggesting intelligent design) they don't have a target behaviour; they are simply trying to get from A to B and somehow figure this out. I don't have the first clue how that fits into an NN design of input/output neurons!

Quote: Original post by d000hg
how do we specify the target behaviour and/or the NN's 'correct' outputs?

Simply measure how far they got towards their goal. That's really the trivial part of it. Animals also learn through imitation, so a simpler goal would be to measure how similar the behaviour is to that of an animal that can already walk.
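A minimal sketch of that distance-based fitness idea in Python (all names here are hypothetical; `simulate` stands in for whatever physics step you actually run):

```python
import random

def fitness(genome, simulate, seconds=10.0):
    """Score a candidate controller by how far the creature travels.

    `simulate` is assumed to run the physics for the given time and
    return the creature's final x-position; farther is better.
    """
    return simulate(genome, seconds)

# Rank a random population purely by distance covered.
population = [[random.uniform(-1, 1) for _ in range(8)] for _ in range(20)]
fake_sim = lambda genome, seconds: sum(genome)  # stand-in for real physics
best = max(population, key=lambda g: fitness(g, fake_sim))
```

The point is that the fitness function never says *how* to walk; it only rewards ending up farther away, and the search does the rest.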

Quote: Thinking about real life, some animals (e.g. horses) learn to walk within a few hours of birth. Surely (without suggesting intelligent design) they don't have a target behaviour; they are simply trying to get from A to B and somehow figure this out. I don't have the first clue how that fits into an NN design of input/output neurons!

You're forgetting that life has two forms of learning: one performed during an individual's lifetime by its neurons, and one performed across generations through its genes. It's perfectly possible for a species to have an instinctive understanding of how to use its body to achieve certain goals, and that doesn't require intelligent design, just a lot of time...
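The across-generations form of learning can be sketched as a toy genetic algorithm (a sketch only; a real system would plug a physics-based fitness in place of the trivial one used here):

```python
import random

def evolve(fitness, genome_len=8, pop_size=20, generations=50):
    """Toy GA: the 'instinct' emerges across generations,
    not within one individual's lifetime."""
    pop = [[random.uniform(-1, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:pop_size // 2]       # keep the fitter half
        children = [[g + random.gauss(0, 0.1)  # mutate a copied parent
                     for g in random.choice(survivors)]
                    for _ in range(pop_size - len(survivors))]
        pop = survivors + children            # next generation
    return max(pop, key=fitness)

# With a trivial fitness (sum of genes), evolution drives the genes upward.
champion = evolve(fitness=sum)
```

No individual is ever told how to score well; selection plus mutation accumulates the "instinct" in the genome over many generations.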

Quote: Original post by Kylotan
Quote: Original post by d000hg
how do we specify the target behaviour and/or the NN's 'correct' outputs?

Simply measure how far they got towards their goal. That's really the trivial part of it. Animals also learn through imitation, so a simpler goal would be to measure how similar the behaviour is to that of an animal that can already walk.
You mean we set the desired output to be "1 mile" and judge by which gets closest?

Quote:
Quote: Thinking about real life, some animals (e.g. horses) learn to walk within a few hours of birth. Surely (without suggesting intelligent design) they don't have a target behaviour; they are simply trying to get from A to B and somehow figure this out. I don't have the first clue how that fits into an NN design of input/output neurons!

You're forgetting that life has two forms of learning: one performed during an individual's lifetime by its neurons, and one performed across generations through its genes. It's perfectly possible for a species to have an instinctive understanding of how to use its body to achieve certain goals, and that doesn't require intelligent design, just a lot of time...
You mean genetic memory? How is that expressed in an NN? The problem I see is that you can put together a dog torso with four legs and some muscles and say "you want this treat"... but how does it figure out that contracting/relaxing muscles is the way to go about that? And when at first all that happens is that it lies there spasming or moving at random, what makes it think "hey, this seems a productive idea"?

I guess that guy linked above did it, but my guess is he gave his robot a lot of help along the way? How it can happen in real life, and how we could emulate that in code, is just amazing to me...

I have some hope for the idea of imitating movement. If you have a motion-capture animation database, you can try to make the body move the way the real humans did, with some sort of "inverse mechanics" system.

Does anyone know of any prior work in this area?
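The mocap-imitation idea above can be sketched as a simple frame-by-frame error between the simulated trajectory and a reference clip (names and data layout are hypothetical; real joint data would come from your capture format):

```python
def imitation_error(candidate, reference):
    """Mean squared joint-angle error between a simulated trajectory
    and a motion-capture reference; both are lists of per-frame lists
    of joint angles. Lower is better, so a learner can minimise this
    instead of (or alongside) maximising distance travelled."""
    total, count = 0.0, 0
    for cand_frame, ref_frame in zip(candidate, reference):
        for c, r in zip(cand_frame, ref_frame):
            total += (c - r) ** 2
            count += 1
    return total / count

ref = [[0.0, 0.5], [0.1, 0.4]]       # pretend mocap frames (joint angles)
sim = [[0.0, 0.5], [0.1, 0.4]]
perfect = imitation_error(sim, ref)  # 0.0 for an exact match
```

This turns "move like the real humans did" into a concrete quantity a GA or NN trainer can optimise.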
Quote: Original post by d000hg
You mean we set the desired output to be "1 mile" and judge on which gets closest?

Yeah. I've seen a site that had a Java applet that taught some sort of creature to walk by measuring how far it got with each approach.

Quote: You mean genetic memory? How is that expressed in a NN? The problem I see is you can put together a dog torso with 4 legs and some muscles and say "you want this treat"... how does it figure that contracting/relaxing muscles is the way to go about that? And when at first all that happens is it lies there spasming or moving at random, what makes it think "hey, this seems a productive idea"?

I guess that guy linked above did it, but my guess is he gave his robot a lot of help along the way? How it can happen in real life, and how we could emulate that in code, is just amazing to me...

Obviously nobody's creating true artificial life here by any means. But creating machines that learn some of the processes involved in normal life, e.g. moving legs to produce motion, is relatively simple. The 'genetics' in an NN, i.e. the fixed information that provides a grounding for the learned information, can come from the developer's choice of inputs and outputs, from the arrangement of nodes, and perhaps most importantly from how the error is measured.
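The "fixed structure, learned weights" split described above might look like this in a tiny controller sketch (sensor and muscle counts are made up for illustration):

```python
import math
import random

class MuscleNet:
    """Tiny one-layer controller: the *structure* (which sensors feed
    in, which muscles come out) is fixed by the developer, while the
    weights are what gets learned or evolved."""

    def __init__(self, n_sensors, n_muscles):
        self.w = [[random.uniform(-1, 1) for _ in range(n_sensors)]
                  for _ in range(n_muscles)]

    def act(self, sensors):
        # tanh squashes each muscle activation into [-1, 1],
        # i.e. full relaxation to full contraction.
        return [math.tanh(sum(w * s for w, s in zip(row, sensors)))
                for row in self.w]

net = MuscleNet(n_sensors=4, n_muscles=2)  # e.g. 4 joint angles -> 2 muscles
activations = net.act([0.1, -0.2, 0.0, 0.5])
```

Everything the developer fixes here, the sensor set, the muscle set, the squashing function, plays the role of the "genetics"; the weight values are the part left for learning.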
Quote: Original post by d000hg
You mean genetic memory? How is that expressed in an NN? The problem I see is that you can put together a dog torso with four legs and some muscles and say "you want this treat"... but how does it figure out that contracting/relaxing muscles is the way to go about that? And when at first all that happens is that it lies there spasming or moving at random, what makes it think "hey, this seems a productive idea"?

I guess that guy linked above did it, but my guess is he gave his robot a lot of help along the way? How it can happen in real life, and how we could emulate that in code, is just amazing to me...


How well this works will, of course, depend on many factors, but it has been done before. You're right in suggesting that initial solutions in a situation like this flail quite a bit. In the cases I've read about, the "help" that seems to promote success is more along the lines of effective solution representations and GA operators than "helpful hints" or the provision of partial solutions.

