I don't know if this already exists, but I was thinking of developing an AI programming language, very assembly-like, that would over time be "consumed" by a Neural Net.
Let me elaborate. You have this very very simple robot. You use N++ to program the robot to avoid walls. Then, during the simulation, you allow pieces of the program you constructed to be slowly replaced by NNs.
It works slowly, but after x cycles, the whole program should have been translated into NNs, and then you can add some GA modifiers to make the program unstable, so that it is forced to evolve.
This would allow us to pre-program complex AI behaviour and allow it to be translated into NNs...
Does something like this already exist?
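Something like this Python sketch is roughly what I'm picturing (N++ doesn't exist, so a plain function stands in for the hand-coded part, and a tiny linear unit stands in for the NN; the blend factor is the fraction of the program that has been "consumed" so far):

import random

def handcoded_avoid_walls(left_dist, right_dist):
    # Hand-written rule: steer away from whichever wall is nearer.
    # +1.0 means steer right, -1.0 means steer left.
    return 1.0 if left_dist < right_dist else -1.0

class TinyNet:
    # Stand-in neural net: a single linear unit with two inputs.
    def __init__(self):
        self.w = [random.uniform(-1.0, 1.0), random.uniform(-1.0, 1.0)]

    def __call__(self, left_dist, right_dist):
        s = self.w[0] * left_dist + self.w[1] * right_dist
        return max(-1.0, min(1.0, s))  # clamp to the steering range

def blended_controller(net, blend, left_dist, right_dist):
    # blend = 0.0: pure hand-coded program; blend = 1.0: fully "consumed" by the net.
    return ((1.0 - blend) * handcoded_avoid_walls(left_dist, right_dist)
            + blend * net(left_dist, right_dist))

net = TinyNet()
for cycle in range(1000):
    blend = cycle / 999.0  # the net slowly takes over the program
    steering = blended_controller(net, blend, random.random(), random.random())
    # ...apply steering to the simulated robot, and train or mutate the net
    # against the hand-coded output here...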
but why would you want to do that?
My Website: ai-junkie.com | My Books: 'Programming Game AI by Example' & 'AI Techniques for Game Programming'
Well, because you can start with a bot that has very predictable behaviour, and then, as the NN eats away at the code, you start to see emergent behaviour born from the NN's complexity. The bot no longer just avoids collisions; it starts to exhibit much more intelligent behaviour.
Besides, we've all wanted at some point to be able to turn parts of a program into AI. With this specially designed programming language, one could.
To get behaviours to evolve, I would say.
Pieces could be trained into an ANN (it has to be an ANN, otherwise you'd have to have an actual brain, with the blood and the rest of the stuff). The downside would be training.
Now, ANNs are good at generalising, but in some situations a generalisation is not good enough.
Also, ANNs wouldn't be able to call other ANNs, so calls within the functions would be impossible.
All in all, I give this idea a rating of 75% for imagination and 25% for practicality.
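To make the training point concrete: for a pure input-to-output piece you can at least generate the training set by sampling the code itself. A rough Python sketch (the code_piece rule is just made up for illustration):

import random

def code_piece(x, y):
    # The routine we want to replace: a simple weighted rule.
    return 0.5 * x - 0.25 * y

# Sample the routine itself to build the training set - no hand-labelling needed.
data = [(x, y, code_piece(x, y))
        for x, y in ((random.random(), random.random()) for _ in range(500))]

# One linear neuron, trained by plain gradient descent on the squared error.
w0, w1, b = 0.0, 0.0, 0.0
lr = 0.1
for epoch in range(200):
    for x, y, target in data:
        pred = w0 * x + w1 * y + b
        err = pred - target
        w0 -= lr * err * x
        w1 -= lr * err * y
        b -= lr * err

print(w0, w1, b)  # should end up close to 0.5, -0.25, 0.0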
From,
Nice coder
It's possible to just _construct_ an ANN that will compute a desired maths function, with some error. We only need to implement addition, subtraction, multiplication and division in an ANN, then use those functions as the basic operators of the language, and connect blocks of cells together according to the expression.
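For addition and subtraction no training is even needed: a single linear unit with fixed weights and an identity activation computes them exactly. A minimal Python sketch (multiplication and division would need their own constructed or trained sub-blocks, which I'm skipping here):

class Neuron:
    # One linear unit with fixed weights and an identity activation.
    def __init__(self, weights, bias=0.0):
        self.weights, self.bias = weights, bias

    def fire(self, inputs):
        return sum(w * x for w, x in zip(self.weights, inputs)) + self.bias

add_unit = Neuron([1.0, 1.0])   # computes a + b exactly
sub_unit = Neuron([1.0, -1.0])  # computes a - b exactly

# Wire the blocks together according to the expression (a + b) - c:
a, b, c = 2.0, 3.0, 4.0
result = sub_unit.fire([add_unit.fire([a, b]), c])
print(result)  # 1.0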
Quote: Well, because you can start with a bot that has very predictable behaviour, and then, as the NN eats away at the code, you start to see emergent behaviour born from the NN's complexity
I don't get it. You create functionality that works perfectly, then you replace it with the same functionality. Why would you see emergent behavior? After all, if you replace the handle of a hammer with a different colored handle, it still does exactly the same thing. And if you mean you want to introduce an ANN to somehow degrade the functionality then that makes even less sense... all you end up with is a hammer with a poorly designed handle.
Quote: The bot no longer just avoids collisions; it starts to exhibit much more intelligent behaviour.
why would it start to do that?
Quote: Besides, we've all wanted at some point to be able to turn parts of a program into AI.
I've never wanted to do that. Why would I want to? If I make a hammer out of iron and wood specifically for the purpose of hammering in nails, and it does that job as well as it could be done, why would I throw it away and remake it using plastic and stainless steel?
Maybe I misunderstand you...
My Website: ai-junkie.com | My Books: 'Programming Game AI by Example' & 'AI Techniques for Game Programming'
Quote: Original post by fup
Maybe I misunderstand you...
Evidently. The idea is to take a program, turn it into an ANN that does the same thing as that program and then let the ANN evolve from there.
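Concretely, once the program has been distilled into a weight vector, "letting it evolve" could start out as simply as a mutate-and-keep-if-better loop. A hill-climbing sketch in Python (the fitness function here is a dummy placeholder for however you score the robot in the simulation):

import random

def mutate(weights, rate=0.1):
    # Return a copy of the weight vector with small Gaussian noise added.
    return [w + random.gauss(0.0, rate) for w in weights]

def fitness(weights):
    # Placeholder: score the robot controlled by these weights in the simulator.
    return -sum((w - 0.5) ** 2 for w in weights)  # dummy objective for the sketch

best = [0.0] * 8  # weights distilled from the hand-written program
best_score = fitness(best)
for generation in range(1000):
    candidate = mutate(best)
    score = fitness(candidate)
    if score > best_score:  # keep only the improvements
        best, best_score = candidate, score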
Evolve what and why?
I don't see the point of evolving a fully functional program... what is there to evolve in its internals if it already does the job?
Anyway, how would you evolve it? What would your "fitness" function be if you already start from "perfection" and then try to add new stuff... how do you decide what's better and what's worse?
Also, what's the point of translating an "exact" calculation written by hand into an approximation of it with a neural net? Where would your training set come from to ensure you're handling every case?
I'm pretty much on Mat's side here, I probably didn't get something...
Quote: Original post by xEricx
Evolve what and why?
I don't see the point of evolving a fully functional program... what is there to evolve in its internals if it already does the job?
Anyway, how would you evolve it? What would your "fitness" function be if you already start from "perfection" and then try to add new stuff... how do you decide what's better and what's worse?
Also, what's the point of translating an "exact" calculation written by hand into an approximation of it with a neural net? Where would your training set come from to ensure you're handling every case?
I'm pretty much on Mat's side here, I probably didn't get something...
The idea is to code a solution a human can code - that is, an *imperfect* solution - and then let it evolve, in the hope that it will evolve into a better solution. The idea is to make the evolution process shorter by orders of magnitude, by skipping the initial stage.
It's because, with any evolution process, most of the time is spent making a monkey from protobacteria, and then a thousand times less time making a human from the monkey.
And it probably took a long time to make protobacteria, too.
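In GA terms, "skipping the initial stage" just means seeding the first population with noisy copies of the distilled network instead of starting from random genomes. A quick Python sketch of the difference:

import random

def seeded_population(distilled_weights, size=50, noise=0.05):
    # Start every individual as a slightly perturbed copy of the hand-coded solution.
    return [[w + random.gauss(0.0, noise) for w in distilled_weights]
            for _ in range(size)]

def random_population(n_weights, size=50):
    # The usual cold start: completely random genomes.
    return [[random.uniform(-1.0, 1.0) for _ in range(n_weights)]
            for _ in range(size)]

population = seeded_population([0.5, -0.25, 1.0])  # weights from the distilled net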
Well, maybe the missing piece here is that I work with self-altering Neural Networks, which grow their own axons, etc, etc, etc...
So, as the NN takes over the program, it's fairly static, but then we let it grow and expand as it wants.
Some of the emergent behaviour we could see is the NN accomplishing complex tasks with far fewer neurons than we humans would have designed...
That's just an example. Neurons from different areas might also interlink, giving rise to emergent behaviour, like a light-capturing cell NN connecting to the motor drive NN, resulting in behaviour where the robot not only avoids collisions with the walls, but also predicts collisions by using lighting information...
Of course our N++ program starts static and fully predictable, but it sits upon a growing, self-altering NN design, so we have to give it enough cycles for us to watch it grow interesting behaviours.
We could even have a "Zoo" of sorts, where hundreds of virtual units would be simulated, all evolved from the same base creature, with nothing pre-programmed into them but basic food searching and self-replication. They would still have to evolve more complex tasks like group hunting, using hunting tools, etc, etc, etc...
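A toy Python sketch of the self-altering part: treat the sub-nets (light-sensing, collision-sensing, motor drive) as modules and occasionally grow a new weighted link between two of them, which is where cross-module behaviour like light-based collision prediction could come from:

import random

class Module:
    # A named sub-network; here reduced to a bag of incoming links.
    def __init__(self, name):
        self.name = name
        self.links = []  # list of (source module, weight) pairs

    def connect_from(self, other, weight):
        self.links.append((other, weight))

modules = [Module("light_sensor"), Module("collision_sensor"), Module("motor_drive")]

def grow_step(modules, grow_prob=0.01):
    # Each cycle, maybe sprout a new axon between two randomly chosen modules.
    if random.random() < grow_prob:
        src, dst = random.sample(modules, 2)
        dst.connect_from(src, random.gauss(0.0, 0.5))

for cycle in range(10000):
    grow_step(modules)
    # ...run the robot, reward or prune the links that turn out to help, etc...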