Quote: Original post by CzarKirk
Ok all that is too advanced for me right now :(
You're trying to do multiple-task, multiple-agent planning with soft goals right from the start. Begin instead with single (hard) goal, single-agent planning and work your way up from there.
The usual difficulty is in making the choice -- carrying out the decided action is usually trivial by comparison, though it can still have costs (using simpler AI methods like pathfinding to handle the details).
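To make the starting point concrete, here's a minimal C++ sketch of single (hard) goal, single-agent planning: greedy forward chaining through action preconditions and effects. All the names (WorldState, GrabAxe, etc.) are invented for illustration, not any particular engine's API:

[code]
#include <functional>
#include <iostream>
#include <string>
#include <vector>

// Toy world: two boolean facts.
struct WorldState { bool hasAxe = false; bool hasWood = false; };

struct Action {
    std::string name;
    std::function<bool(const WorldState&)> applicable; // precondition
    std::function<void(WorldState&)> apply;            // effect
};

int main() {
    // Single hard goal: make hasWood true.
    auto goalMet = [](const WorldState& s) { return s.hasWood; };

    std::vector<Action> actions = {
        { "GrabAxe",
          [](const WorldState& s) { return !s.hasAxe; },
          [](WorldState& s) { s.hasAxe = true; } },
        { "ChopTree",
          [](const WorldState& s) { return s.hasAxe; },
          [](WorldState& s) { s.hasWood = true; } },
    };

    WorldState state;
    // Greedy forward chaining: apply the first applicable action until
    // the goal holds. No costs, no conflicting goals, no other agents.
    while (!goalMet(state)) {
        bool acted = false;
        for (Action& a : actions) {
            if (a.applicable(state)) {
                a.apply(state);
                std::cout << a.name << "\n";
                acted = true;
                break;
            }
        }
        if (!acted) { std::cout << "stuck\n"; break; } // dead-end guard
    }
}
[/code]

Once that works, each of the complications below gets layered on top of it.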
Even in the single-agent case, several steps are involved:
The situation has to be symbolized (cognition).
The symbols have to be factored/judged for relevance (including context: combinations of the situational symbols).
Solutions for achieving the given, often conflicting, goals have to be matched to the situation (estimating success -- cost/risk versus payoff for progress toward each goal).
Prioritization needs a unified measurement so the evaluation system can compare apples vs oranges vs bananas (see the sketch after this list).
Reevaluation may be needed frequently as the situation changes, weighed against the practicality of continuing the planned course, plus optimizations to avoid brute-force reevaluation...
Uncertainty adds yet another dimension to be evaluated.
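Here's a rough sketch of the unified-measurement idea: one scoring function that folds cost, risk (uncertainty handled as expected value), and payoffs toward multiple goals into a single comparable number. The option names, weights, and numbers are all invented; tuning them for your game is the real work:

[code]
#include <algorithm>
#include <iostream>
#include <string>
#include <vector>

// All names and numbers invented for illustration.
struct Option {
    std::string name;
    float cost;         // time/resources spent
    float risk;         // probability of failure, 0..1
    float foodPayoff;   // progress toward the "eat" goal
    float safetyPayoff; // progress toward the "stay safe" goal
};

// The agent's current goal weights; re-tune (or re-run the whole
// evaluation) when the situation changes.
struct Priorities { float food; float safety; };

// One scalar score so unlike things become comparable: weight the
// payoffs per goal, discount by failure chance (expected value covers
// the uncertainty), then subtract cost.
float Score(const Option& o, const Priorities& p) {
    float payoff = p.food * o.foodPayoff + p.safety * o.safetyPayoff;
    return payoff * (1.0f - o.risk) - o.cost;
}

int main() {
    std::vector<Option> options = {
        { "RaidOrchard",   4.0f, 0.5f, 8.0f, -2.0f },
        { "ForageBerries", 1.0f, 0.1f, 3.0f,  0.0f },
        { "HideInCave",    0.5f, 0.0f, 0.0f,  4.0f },
    };
    Priorities p{ 1.0f, 0.5f }; // currently hungrier than scared

    auto best = std::max_element(options.begin(), options.end(),
        [&](const Option& a, const Option& b) {
            return Score(a, p) < Score(b, p);
        });
    std::cout << "Chose: " << best->name << "\n"; // ForageBerries
}
[/code]

When the world changes, you adjust the Priorities (or the options' numbers) and re-run the comparison -- that's the cheap reevaluation loop, and caching scores for options whose inputs didn't change is one of those optimizations against brute-force reevaluation.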