path planning
Damnit, I hate it when I come up with cool ideas and then realize I wasn't the first to think of them. Well, this isn't exactly college research, so moving in a similar direction as other people isn't an issue. You mentioned that the mixing idea had already been done; do you know the title of the paper? Also, if you have any interesting ideas for new research in this area, please share them with me :-).
quote:
Original post by Dovyman You mentioned that the mixing idea had already been done, do you know the title of the paper?
Off the top of my head I cannot name a paper... but that's because there have been many on this issue. Most particularly, they crop up in the robotics community, so you should start there. If I can find some time I'll take a look through my bibliographic database and see if anything jumps out at me!
quote:
Original post by Dovyman Also if you have any interesting ideas for new research in this area please share them with me :-).
I have plenty of ideas actually... but then there's the issue of giving away my ideas to other people/other research institutions!

One interesting problem that still needs to be solved is how to represent an environment internally in an efficient manner that also lends itself to answering queries about the environment efficiently. So, for example, how does one represent the inside rooms of a house and the items that are scattered around inside the house? Typically people use geometric representations listing the location and extent of objects, and then create geometric paths around objects for robots/agents to move along. Is this efficient? It is certainly efficient if you want to visualise the room exactly as it looks to the eye... but it's not very efficient for finding paths, particularly if you have a robot/agent that can navigate around obstacles with reactive behaviours. Then what you really want is a semantic description of the environment, so that the robot/agent knows there is a coffee table behind the couch it is trying to avoid, and it knows that it might find the car keys on the coffee table, without actually having a representation of the keys sitting on a representation of the table. Get it?
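A minimal sketch of what such a semantic description might look like, as a set of relational facts rather than geometry (the object names and relation labels here are my own illustrative assumptions, not from any particular system):

```python
# A semantic (relational) scene description instead of a geometric one.
# Each fact is a triple: (object, relation, object).
scene = {
    ("coffee_table", "behind", "couch"),
    ("car_keys", "on", "coffee_table"),
    ("couch", "in", "living_room"),
}

def related(obj, relation):
    """All objects standing in `relation` to `obj` (e.g. what is ON the table)."""
    return {a for (a, r, b) in scene if r == relation and b == obj}

# "What might I find on the coffee table?" is answered without any geometry:
print(related("coffee_table", "on"))  # {'car_keys'}
```

The agent queries relations directly ("what is behind the couch?") instead of reasoning over coordinates and extents, which is the point of the semantic representation described above.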
In terms of replanning, are there better ways to estimate the value of a state than expected utility? Some people have looked at using risk to moderate the value of paths. A classic example is the cliff-side problem, where an agent must choose between two paths. One has a higher cost but lower risk of danger, while the other is a cheaper path but with a higher risk of danger. Are there other ideas obtained from analysing human behaviour that could help to quickly and efficiently estimate the value of a plan?
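The cliff-side example can be sketched with a simple risk-moderated value function. The linear risk penalty and the risk-aversion parameter below are my own assumptions for illustration; other ways of folding risk into the value are possible:

```python
def path_value(cost, risk, risk_aversion):
    """Risk-moderated value: plain cost penalised by a weighted risk term."""
    return -(cost + risk_aversion * risk)

safe_path  = (10.0, 0.1)   # (cost, risk): the longer detour, low danger
cliff_path = (6.0, 5.0)    # the shortcut along the cliff, high danger

# A risk-neutral agent (aversion 0) takes the cliff; a cautious one does not.
for aversion in (0.0, 2.0):
    values = {"safe":  path_value(*safe_path, aversion),
              "cliff": path_value(*cliff_path, aversion)}
    print(aversion, max(values, key=values.get))
```

With aversion 0 the cheaper cliff path wins on pure expected cost; raising the aversion parameter flips the choice to the safe path, which is exactly the trade-off the example describes.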
These are just two ideas. There are plenty more that you should easily come up with if you read the literature on planning for autonomous agents.
Good luck,
Timkin
argh....
I'm shaking with fear.
I'm building an AE and I'm a total noob at AI.
I had a design issue and tried to resolve it without realising it was about AI. When I asked for help, someone told me it was an issue from the AI field, so I started reading AI texts, but only from the games field. Now I'm starting to see that there is something outside this field I should look into.
The thing is, in my AE I'm dealing with a mixing problem, and the model is (to my knowledge) something like an SFuWSNN (semantic fuzzy weighted state neural network): the inputs are semantic, each given a dynamic weight, which is appraised by an FSM-like neuron in a network, and there is a loopback from the output of the network to the input which changes the weights of the semantic appraisal. That's terribly annoying, because it may be something else that I don't know about, and I may be reinventing the wheel.
The engine isn't fully implemented yet, so I can't see its flaws (and I'm not a real programmer), but it works with an internal representation of the world divided into two layers: objects, which are vectors of weighted attributes (attributes are things that come in through sensors and are sent by the objects); and relations between objects, which are also weighted.
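A minimal sketch of that two-layer representation, assuming weights in [0, 1] and treating both the attribute names and the relation labels as illustrative placeholders:

```python
# Layer 1: objects as vectors of weighted attributes (sensor-derived).
objects = {
    "couch":        {"soft": 0.9, "large": 0.8},
    "coffee_table": {"hard": 0.7, "small": 0.6},
}

# Layer 2: weighted relations between objects: (a, relation, b, weight).
relations = [("coffee_table", "behind", "couch", 0.8)]

def attribute(obj, attr):
    """Weight of an attribute on an object; 0.0 if absent."""
    return objects[obj].get(attr, 0.0)

def relation_weight(a, rel, b):
    """Weight of a relation between two objects; 0.0 if absent."""
    return next((w for (x, r, y, w) in relations if (x, r, y) == (a, rel, b)), 0.0)

print(attribute("couch", "soft"))                          # 0.9
print(relation_weight("coffee_table", "behind", "couch"))  # 0.8
```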
I think one problem AI has to deal with is information discrimination. As far as I've gone, I think people overestimate the brain and underestimate some of its abilities. Emotion (from what I've read) is often seen as stimulus-response, but the neurological research I've read presents emotion as an appraisal and discrimination of information, and also as a supervising system for learning which both competes with logic and appraises logic as well. Emotions are driven by primary needs and goals, and they inhibit or reinforce the discrimination of information and of sub-goals/needs.
See for example Maslow's hierarchy of needs.
I think every intelligent system has to deal with some kind of 'emotion' (meaning the appraisal and supervision of a task), though it can be far away from human emotion.
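One crude way to picture emotion as an appraisal signal that inhibits or reinforces information discrimination: a scalar in [-1, 1] that modulates how salient an input is, relative to the current need. The formula and parameter names below are entirely my own assumptions for illustration:

```python
def appraise(salience, relevance_to_need, emotion):
    """emotion > 0 reinforces need-relevant input; emotion < 0 inhibits it.

    salience, relevance_to_need in [0, 1]; emotion in [-1, 1].
    """
    modulation = 1.0 + emotion * relevance_to_need
    return max(0.0, salience * modulation)

# A positive appraisal amplifies a need-relevant stimulus;
# a negative one suppresses it:
print(appraise(0.5, 0.9, 0.8))   # reinforced (above 0.5)
print(appraise(0.5, 0.9, -0.8))  # inhibited (below 0.5)
```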
I made this post to get more information on what I'm doing: its illusions and strengths, related work, new terms that I don't know, etc.
I began the work one month ago.
Sorry if this is confusing; expressing a subject I don't know much about in English is hard, because I'm a native French speaker.
>>>>>>>>>>>>>>>
be good
be evil
but do it WELL
>>>>>>>>>>>>>>>
well
When I got into the AI problem and world representation, I was guided by the Hindu concepts of leela and maya (which led me to the memesis in MGS2) and by quantum physics (useful for problems of dynamic world representation).
Rather than having a specific task focus, I thought of the whole as a system, and an agent can only be defined at a layer of this system (it is more diffuse, just as in quantum physics a fish in a pond is dissolved anisotropically through the whole pond).
I never realised when I began that this was AI.
This topic is closed to new replies.