
Perceiving sensor system

Started by June 30, 2006 04:53 AM
4 comments, last by gorogoro 18 years, 4 months ago
Hello again! I'm mounting the agents' sensors right now using physics. We create a physics shape, let's say a frustum for the vision sensor, a sphere for the hearing sensor, etc., and attach the shape to the agent. When something collides with the shape, we know what is colliding, so we know what we are seeing. This is implemented using listeners: when a collision happens, a function is called (onEnterSensor(Sensor* sen, SceneNode* node)), which tells me which sensor fired and which node entered the collision.

My main problem is that, done this way, I don't know how many items I am seeing at any one time, unless I keep track of them with lists. So my question is: what is better, both in terms of AI logic and CPU overhead?

Option A: Change how things are done now, so that the agent asks the physics: what objects are colliding with my sensor X?
Option B: Leave things as they are and create some lists to keep track of the objects of interest.
Option C: Any other suggestions? :)
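For reference, a minimal sketch of what option B could look like on top of the listener described above. The SceneNode/Sensor bodies and the onExitSensor counterpart are assumptions for illustration, not a confirmed engine API:

```cpp
#include <cstddef>
#include <unordered_set>

// Hypothetical engine types, standing in for whatever the physics layer provides.
struct SceneNode { /* ... */ };
struct Sensor    { /* ... */ };

class Agent {
public:
    // Called by the physics listener when a node enters the sensor shape.
    void onEnterSensor(Sensor* sen, SceneNode* node) {
        perceived_.insert(node);
    }
    // Assumed counterpart: called when the node leaves the shape again.
    void onExitSensor(Sensor* sen, SceneNode* node) {
        perceived_.erase(node);
    }
    // AI logic can now ask "how many / which objects am I seeing?" at any time.
    std::size_t visibleCount() const { return perceived_.size(); }

private:
    std::unordered_set<SceneNode*> perceived_;  // objects currently inside the sensor
};
```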
This sounds more like a data structures problem than an AI problem. Why would there be a problem using any kind of list? Are you running on a low-powered embedded system, perhaps? Do you use C++ or C?
Quote: Original post by Kylotan
This sounds more like a data structures problem than an AI problem. Why would there be a problem using any kind of list? Are you running on a low-powered embedded system, perhaps? Do you use C++ or C?


C++

The main point is that if I use option B (which is the one I think I'm going to use), I'm going to be doing a lot of inserts into and removes from the list of objects that I'm seeing.
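Those inserts and removes can be kept cheap. One possible sketch, assuming only a pointer per object needs storing: a plain std::vector with swap-and-pop removal, which trades element order (rarely relevant for a perception list) for minimal erase cost:

```cpp
#include <algorithm>
#include <vector>

struct SceneNode;  // hypothetical engine type, as above

class PerceptionList {
public:
    void add(SceneNode* n) { nodes_.push_back(n); }

    // Swap-and-pop: overwrite the removed slot with the last element, then
    // shrink. Finding the element is O(n), but for the handful of objects a
    // sensor typically sees at once, this is negligible.
    void remove(SceneNode* n) {
        auto it = std::find(nodes_.begin(), nodes_.end(), n);
        if (it != nodes_.end()) {
            *it = nodes_.back();
            nodes_.pop_back();
        }
    }

    const std::vector<SceneNode*>& nodes() const { return nodes_; }

private:
    std::vector<SceneNode*> nodes_;
};
```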

Well, I prefer option B, I think. Maybe, depending on the game, the bot doesn't always need to know what it is seeing... You should keep a list of objects of interest and, using a listener, send a signal to the bot when any (at least one) of those objects gets into or out of its line of sight (see the sketch below).
With that method I think the bot may look more intelligent... always depending on the game...
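A minimal sketch of that idea, with all names hypothetical: the listener filters against a set of objects of interest and only signals the bot on an actual change, instead of the bot polling every frame:

```cpp
#include <unordered_set>

struct SceneNode;

struct Bot {
    void onInterestingObjectSighted(SceneNode* n) { /* react: turn, attack, flee... */ }
    void onInterestingObjectLost(SceneNode* n)    { /* react: search, forget...     */ }
};

class SightFilter {
public:
    explicit SightFilter(Bot& bot) : bot_(bot) {}

    void markInteresting(SceneNode* n) { interesting_.insert(n); }

    // Called from the physics listener; only objects of interest reach the bot.
    void onEnter(SceneNode* n) {
        if (interesting_.count(n)) bot_.onInterestingObjectSighted(n);
    }
    void onExit(SceneNode* n) {
        if (interesting_.count(n)) bot_.onInterestingObjectLost(n);
    }

private:
    Bot& bot_;
    std::unordered_set<SceneNode*> interesting_;
};
```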

I hope this helps. With Smoke
SmokingMan
"The best way to predict the future, is to invent it"
My (not yet fully functional) perception system is composed as follows:

Each supported sense (visual, aural, touch) knows two classes: a sensor and a stimulus. A sensor has a characteristic (what you have called the shape). A sensor is responsible for checking whether an existing stimulus is detected at all. This is done by collision detection plus key/keyhole combinations (with the latter one can simulate things like IR goggles and such). If a stimulus is detected, it is handed over to the agent the sensor belongs to. The agent then decides whether or not the stimulus is of interest and reacts (or not) accordingly. (For this purpose stimuli carry information like their "violence" and such, but that is another point.)
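A rough sketch of how such a sensor/stimulus pair might look, with the key/keyhole check modelled as bitmasks; both the names and the bitmask representation are assumptions for illustration:

```cpp
#include <cstdint>

// A stimulus carries a "key" bitmask plus gameplay data (e.g. its violence).
struct Stimulus {
    std::uint32_t key;       // e.g. VISIBLE_LIGHT, INFRARED, ...
    float         violence;  // how intrusive the stimulus is
};

// A sensor detects a stimulus only if the shapes collide AND the keys match.
struct VisualSensor {
    std::uint32_t keyhole;   // IR goggles would add the INFRARED bit here

    bool detects(const Stimulus& s, bool shapesCollide) const {
        return shapesCollide && (s.key & keyhole) != 0;
    }
};
```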

Of course, a sensor should not hand over a hidden stimulus, e.g. an avatar hidden behind a crate from a VisualSensor's point of view. So a sensor has to investigate the potentially detectable stimuli in their entirety before letting the agent recognize any of them. During this pass it is also possible to filter out stimuli from objects that are of no interest. So it is natural to hand over a list of stimuli to the agent. That may also simplify state storage, since the agent is able to "see" all stimuli at once.
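A minimal sketch of that filtering pass, assuming a hypothetical isOccluded() line-of-sight helper (stubbed here; a real one would raycast against world geometry). The sensor gathers all surviving stimuli first and hands the list to the agent in one call:

```cpp
#include <vector>

struct Stimulus { /* key, violence, source object, ... */ };

struct Agent {
    // The agent receives all surviving stimuli at once, which simplifies
    // storing and comparing perception state between updates.
    void perceive(const std::vector<const Stimulus*>& stimuli) { /* decide reactions */ }
};

// Hypothetical line-of-sight test; stubbed for this sketch.
bool isOccluded(const Stimulus&, const Agent&) { return false; }

void updateVisualSensor(Agent& agent,
                        const std::vector<const Stimulus*>& candidates) {
    std::vector<const Stimulus*> visible;
    for (const Stimulus* s : candidates)
        if (!isOccluded(*s, agent))   // drop stimuli hidden behind geometry
            visible.push_back(s);
    agent.perceive(visible);          // hand over the whole list in one call
}
```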
Hmm, that game of senses and stimuli sounds cool :).

I'm doing something simpler, I think!
In the senses system, I keep track of the objects that enter a sense (let's say vision). When an object gets out of the vision, I remove it from the list and put it in the agent's memory for a while, so it can remember recently seen things (see the sketch below).
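A minimal sketch of that short-term memory, assuming a hypothetical game-time value in seconds: objects that leave the sensor get timestamped, and anything older than a retention window is forgotten:

```cpp
#include <unordered_map>

struct SceneNode;

class ShortTermMemory {
public:
    explicit ShortTermMemory(float retentionSeconds)
        : retention_(retentionSeconds) {}

    // Called when an object leaves the vision sensor.
    void remember(SceneNode* n, float now) { lastSeen_[n] = now; }

    // Does the agent still recall having seen this object recently?
    bool recalls(SceneNode* n, float now) const {
        auto it = lastSeen_.find(n);
        return it != lastSeen_.end() && now - it->second < retention_;
    }

    // Drop everything older than the retention window.
    void forgetOld(float now) {
        for (auto it = lastSeen_.begin(); it != lastSeen_.end(); ) {
            if (now - it->second >= retention_)
                it = lastSeen_.erase(it);
            else
                ++it;
        }
    }

private:
    float retention_;
    std::unordered_map<SceneNode*, float> lastSeen_;  // object -> last-seen time
};
```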

I hope this will work.

This topic is closed to new replies.
