
Neural Networks

Started by January 14, 2010 07:09 PM
3 comments, last by djz 15 years, 1 month ago
I'm planning on building a simple life sim, where an agent moves around in a virtual world, facing challenges such as seeking out food, dealing with encounters with other agents, etc. I want to model the agent's behaviour with a simple neural network, and I have a couple of questions.

Should input neurons always be "hard facts"? For example, I'd have the distance to a food object as an input to a neuron (call it "A") designed to determine the agent's inclination to approach the food, weighted by a sort of "laziness" factor as an attribute of the agent.

Suppose I wanted, for the same neuron A, an input that represents the agent's feelings toward that particular food item, which is itself affected by some other factors. Would it be sensible to represent this as another neuron (B) that holds a value representing this feeling? If so, what would the weight be? Just an arbitrary number? "Fussiness"? How would I model a preference in terms of flavour? A dynamic bias to neuron B based on the identifier of the food type?

Finally, would functions of success/failure scenarios typically alter the weights directly? Would this limit the variety in the "personality" outcomes of different agents? What could be done to increase the autonomy of the learning process without causing the agent to make completely irrational decisions? Cheers for any tips :)
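To make that concrete, here's roughly the structure I have in mind for neuron A, as a minimal sketch in Python (attribute names like laziness and fussiness are just my placeholders, not an established design):

import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def approach_inclination(distance_to_food, feeling_for_food, laziness, fussiness):
    # Farther food counts against approaching, and a lazier agent
    # penalises distance more heavily
    distance_input = -distance_to_food * laziness
    # Neuron B's output (the feeling toward this food item), weighted
    # by a "fussiness" factor
    feeling_input = feeling_for_food * fussiness
    # Squash the sum into a 0..1 inclination to approach
    return sigmoid(distance_input + feeling_input)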
Two tasks were named: Seeking food, and dealing with encounters.

Seeking food is fairly clear, and could involve a search radius, line of sight, and terrain. The "laziness" factor can also be viewed as conservation of energy, where energy may be obtained from food. Add a movement cost, which could be as simple as distance covered, or more complicated with movement costs for different types of terrain. Then the value of food can be called worthwhile if it pays for the trip with a net gain in energy/movement points/time/whatever.
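A rough sketch of that "does the trip pay for itself?" test in Python -- the terrain cost table and energy numbers are made up, just for illustration:

TERRAIN_COST = {"grass": 1.0, "mud": 2.5, "rock": 1.8}  # energy per step

def trip_cost(path):
    # path is a list of terrain types, one entry per step
    return sum(TERRAIN_COST[t] for t in path)

def is_worthwhile(food_energy, path):
    # The food is worth the trip if it pays for the movement with a net gain
    return food_energy > trip_cost(path)

# is_worthwhile(10.0, ["grass"] * 4 + ["mud"] * 3) -> False (trip costs 11.5)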

Dealing with other agents needs more clarity. What constitutes an encounter, and what do agents do with each other?
--"I'm not at home right now, but" = lights on, but no ones home
Advertisement
Don't think about your solution as a magical neural network "brain" that will just solve all your problems! Instead, try to imagine the function that's being approximated and whether it's actually possible to approximate that function.

The function will be something simple:
outputs = f(inputs)


For it to work, there needs to be a clear, unambiguous mapping between the inputs and outputs. Try drawing a big table or hypercube and seeing what you expect the output to be for combinations of input parameters.
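For instance, something like this quick check in Python -- the inputs (hungry, food_near, threat) are made up just to show the idea; if two identical input rows would demand different outputs, no network can approximate the function:

from itertools import product

# Enumerate every combination of three boolean inputs and write down
# the output you expect for each one
for hungry, food_near, threat in product([0, 1], repeat=3):
    expected = "flee" if threat else ("approach" if hungry and food_near else "wander")
    print(f"hungry={hungry} food_near={food_near} threat={threat} -> {expected}")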

Most neural networks deal with noisy data as well, and do a pretty good job, but they still work deterministically under the hood (e.g. perceptrons, Hopfield networks).


If you don't understand what I mean, then just try out all the ideas you mentioned and you'll see pretty quickly which ones work and which don't -- assuming you're starting from something that works.


Cheers for the responses. As for not giving much info on the "encounters" side of things, that's because I really just wanted to find out if my structural ideas for the AI were sound. I think maybe all of the documentation on neural nets had me thinking of them as more than they actually are; upon reflection, my plan seems pretty straightforward and intuitive anyway.

I'm going to work on the decision-making algorithms first on a very simple interface, then expand the complexity of the world to accommodate things like pathfinding.
You can create separate NNs and view each NN as a discrete processing unit. Maybe you have a mode-selecting NN that weights the environment and decides what behaviour or state the entity is in, and you can throw in some fuzzy logic to make it blend states smoothly. A separate NN that steers towards some goal. Another NN that evades gunfire.

I have no idea if this is common or not, but from my limited experimentation it seems to be effective. It also allows you to specialize and train isolated parts of the entity's 'brain', and then see how effective it is when you put all the behaviours together. I can see it being tricky to balance for complex entities.
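As a sketch of what I mean -- treating each trained network as a black-box callable that returns a steering vector, and using a softmax blend to stand in for the fuzzy smoothing (all the names here are illustrative):

import math

def softmax(scores):
    # Turn the mode-selector's raw scores into blend weights that sum to 1
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def blended_steering(sensors, mode_selector, behaviour_nets):
    # mode_selector(sensors) returns one score per behaviour (seek, evade, ...)
    weights = softmax(mode_selector(sensors))
    steers = [net(sensors) for net in behaviour_nets]
    # Blend the steering vectors component-wise so mode transitions stay smooth
    return [sum(w * s[i] for w, s in zip(weights, steers))
            for i in range(len(steers[0]))]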

