
Why AI (or 'sort-of' AI) is so costly....

Started by March 25, 2008 11:15 PM
5 comments, last by wodinoneeye 16 years, 7 months ago
One reason (after I've been looking at planners and goal-task-solution mechanisms, especially where uncertainty is concerned) is the frequency with which an 'intelligent' object must reevaluate its environment and shift its behavior. That's a multiplying factor on top of the task of evaluating just a single (current) situation, with all the classifiers/projections, the matching of solutions, and finally prioritizing and picking the best course (which might be a fairly detailed plan). Apply that to a fairly complex environment and you have a significant amount of CPU processing (and, to a lesser degree, memory) required -- and that for just one 'intelligent' object.

Optionally add a preference mechanism, and possibly a learning module to tune the 'preferences' (best practices / likeliest solutions). Double the entire problem by having the behavior simulate social constructs/authority/roles to add constraints and extra-object considerations. All of this would sit atop more primitive AI tools like pathfinding with A*, influence maps, FSMs, etc. It's no wonder games have mostly been script-driven, narrowly choreographed sequences.

Cost ($$, time) in programming the logic is a separate issue -- with AI, usually 10% of the project is the engine and the other 90% is forming the AI logic.
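A rough sketch of why the cost multiplies (all names and fields here are illustrative assumptions, not an actual engine): every agent scores every candidate solution against the current situation, and the whole pass repeats each time the world changes enough to matter.

```python
def score_solution(agent, solution, situation):
    """Toy evaluation: weighted sum of expected payoff minus risk and cost.
    The dict fields are hypothetical placeholders."""
    return (agent["payoff_weight"] * solution["payoff"](situation)
            - agent["risk_weight"] * solution["risk"](situation)
            - solution["cost"](situation))

def reevaluate(agent, solutions, situation):
    """One full decision pass: score every option, pick the best."""
    scored = [(score_solution(agent, s, situation), s) for s in solutions]
    return max(scored, key=lambda pair: pair[0])[1]

# Rough cost model: agents * reevaluations_per_second * candidate_solutions
# evaluation calls per second -- the multiplying factor described above.
```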
--------------------------------------------Ratings are Opinion, not Fact
This might be a valid use for a form of swarm intelligence. What behavior does a society exhibit? There is a generally accepted norm, which is close to optimal. And yet there are always a percentage of the society that are explorers, re-testing solutions previously classified as suboptimal.

It might be worthwhile to place a probability distribution over the utility function, so that an individual agent's max payoff is different from the programmer's designed ideal.
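A small sketch of that suggestion, with made-up parameters: draw each agent's utility weights from a distribution around the designer's ideal, so a minority of agents ("explorers") end up preferring options the norm rejects.

```python
import random

DESIGNER_IDEAL = {"payoff": 1.0, "risk": -0.8}   # hypothetical baseline weights

def personal_utility_weights(spread=0.3):
    """Each agent gets its own noisy copy of the designed utility."""
    return {k: random.gauss(mu, spread) for k, mu in DESIGNER_IDEAL.items()}

def utility(weights, option):
    return weights["payoff"] * option["payoff"] + weights["risk"] * option["risk"]

# With spread > 0, a few agents will rate a 'suboptimal' option highest and
# re-test it -- the explorer behavior described above.
```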
--"I'm not at home right now, but" = lights on, but no ones home
Quote: Original post by AngleWyrm
This might be a valid use for a form of swarm intelligence. What behavior does a society exhibit? There is a generally accepted norm, which is close to optimal. And yet there are always a percentage of the society that are explorers, re-testing solutions previously classified as suboptimal.

It might be worthwhile to place a probability distribution over the utility function, so that an individual agent's max payoff is different from the programmer's designed ideal.




The system for preferences would include things like preferences among strategies/approaches to solutions (ex: a spectrum of cautious, moderate, risk-taker, maniac being one axis), which would then adjust the thresholds that make certain behaviors acceptable (shifting risk evaluations relative to other cost metrics during selection and then prioritization).

Different behavioral aspects would have their own 'curve' in the generic case. Narrower preferences (or added factors) for specific 'keyhole' situation+solution sets could be added (history/memory based), which could be used as a general solution available as a default to all objects -- a 'traditional' way for objects to react to their world.
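A hedged sketch of the cautious-to-maniac axis (names and numbers are my own, not from the post): the same options are scored for everyone, but each personality shifts the risk threshold that decides which options are even acceptable.

```python
RISK_TOLERANCE = {"cautious": 0.2, "moderate": 0.5, "risktaker": 0.8, "maniac": 1.0}

def acceptable_options(options, personality):
    """Filter out options whose estimated risk exceeds this personality's tolerance."""
    limit = RISK_TOLERANCE[personality]
    return [o for o in options if o["risk"] <= limit]

def choose(options, personality):
    viable = acceptable_options(options, personality)
    # fall back to the least risky option if nothing clears the threshold
    pool = viable or sorted(options, key=lambda o: o["risk"])[:1]
    return max(pool, key=lambda o: o["payoff"] - o["risk"])
```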

Small divergences in behavior within a society can break equilibrium, forcing your AI to have solutions for a wider range of 'stupid' behaviors (this is the reason for the old way of handling people who were different -- society was brittle, often on the edge of chaos...).


Of course some factors can be isolated and formulated. Example: Lawfulness -- following codified (common) laws of behavior; diverging from them puts you at odds with the establishment. Normal people resort to it usually only in desperation, and often not even then. Other people are unlawful by nature, and if the payoff vs. risk is to their advantage they can overcome the negatives of being an 'outlaw'. Some might only take the risk when it is unlikely to be discovered (and the laws enforced by a community). Some might only do 'unlawful' acts against an enemy (as defined outside the law system), where other motives carry a higher priority than social conventions.
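One way that lawfulness factor could be formulated (all numbers and names here are illustrative assumptions): the expected social penalty scales with the chance of being discovered, and is discounted entirely against enemies.

```python
def expected_social_cost(act, agent, target_is_enemy):
    """Penalty an agent anticipates for an unlawful act."""
    if not act["unlawful"]:
        return 0.0
    if target_is_enemy and agent["enemy_exempt"]:
        return 0.0                       # conventions don't apply to enemies
    penalty = act["outlaw_penalty"] * act["discovery_chance"]
    return penalty * agent["lawfulness"]  # 0.0 = outlaw by nature, 1.0 = law-abiding

def act_score(act, agent, target_is_enemy=False):
    return act["payoff"] - act["risk"] - expected_social_cost(act, agent, target_is_enemy)
```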



Of course all this is part of the 'evaluation' functions, which decide which solutions are viable and then, if alternatives exist, which of them is most acceptable under the current circumstances.

Social simulations get complex because of the potential for many conflicting motives and long-term considerations as fallout from just one act (Les Misérables was the consequences of one stolen loaf of bread...).


Even when conventions are followed, conflicts of interest still arise, driven by varying goal priorities (ie greed, honor, fear, friendship, family, etc).




And if you have a complex world with constant situational changes, and you don't want your objects/NPCs/whatever to seem like dim mollusks, you need to have them constantly reevaluate their situation (frequency) to let them take action on opportunities, as well as make well-considered decisions with potentially long-term consequences (depth).
--------------------------------------------Ratings are Opinion, not Fact
This looks like it is in need of a software design specification sheet.

If an objectively verifiable behavior/functionality is stated, then the computational complexity of achieving that behavior/functionality can be studied, to see if it is a viable project.

Otherwise it might become a competitive moving target.

[Edited by - AngleWyrm on March 27, 2008 3:25:31 PM]
--"I'm not at home right now, but" = lights on, but no ones home
Quote: Original post by AngleWyrm
This looks like it is in need of a software design specification sheet.

If an objectively verifiable behavior/functionality is stated, then the computational complexity of achieving that behavior/functionality can be studied, to see if it is a viable project.

Otherwise it might become a competitive moving target.



The 'computational complexity' of this type of AI problem is not so simple to determine. The AI engine itself is not that complex, but the scripted logic and evaluation functions are fairly diverse, irregular and numerous, AND change significantly with different simulation environments that they are applied to.

In the case of this kind of simulation of behavior, what is 'optimal' is quite subjective (as in 'good enough') and covers not just the results of individual scenarios but also performance in transitions and overlapping scenario types.

The instance of the logic can always be improved/refined to make the behaviors it generates 'more realistic'.

--------------------------------------------Ratings are Opinion, not Fact
Quote: Original post by wodinoneeye
... but the scripted logic and evaluation functions are fairly diverse, irregular and numerous, AND change significantly with different simulation environments that they are applied to.
This suggests to me both a quest for a superior ideal, unfettered by the mundane lab assignment of the moment, and also a possible paradox. In the realm of mathematics, such quests are the way of science, and great generalizations have sprung from it. A picture of science as a hierarchical structure.

But I'm not so sure that is a good model for intelligence. What if we were to somehow compartmentalize subsystems, and then identify a tree's root node or pyramid's peak? My gut feeling is we would find that central core, or universal principle, to be just one more subsystem with its own small set of tasks to perform; just one player in a set of players. The heart is by all means an important piece, but it's still only a piece.

In some ways, I'm agreeing with you that complexity might be an irreducible feature. In other ways, I'm suggesting that complexity originates from the task at hand.

[Edited by - AngleWyrm on March 28, 2008 9:23:03 AM]
--"I'm not at home right now, but" = lights on, but no ones home
Quote: Original post by AngleWyrm
Quote: Original post by wodinoneeye
... but the scripted logic and evaluation functions are fairly diverse, irregular and numerous, AND change significantly with different simulation environments that they are applied to.


This suggests to me both a quest for a superior ideal, unfettered by the mundane lab assignment of the moment, and also a possible paradox. In the realm of mathematics, such quests are the way of science, and great generalizations have sprung from it. A picture of science as a hierarchical structure.


This is a project I've been working on for more than 10 years (off and on), and finally we are getting computers with resources of the magnitude required for this class of problem. For me it's an engineering problem, as research in the AI communities has been exploring related mechanisms since I started investigating AI 25 years ago. A major stumbling block for behavioral AI has been how to create the behavior logic as workable data (scripts effectively are 'data'...). The richness/complexity of simple behavior patterns simulating reasonable interactions by an 'intelligent' object adds up to a surprising amount of scripted logic. Even with generalized patterns the end cases/special cases are numerous. More complex 'world' mechanisms multiply the problem space geometrically. Human-guided machine learning mechanisms can only shortcut the process so much. The logic is still voluminous and must be staged/inspected/corrected, which requires prohibitive amounts of man-hours.
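A minimal sketch of treating behavior logic as data (my own illustration of the idea, not the poster's system): the engine is one small generic loop, and all the environment-specific logic lives in a rule table that can be swapped out per simulation.

```python
RULES = [
    # (name, condition(world, agent) -> bool, action(agent), priority)
    ("flee",   lambda w, a: w["threat"] > a["courage"], lambda a: a.update(state="fleeing"),   3),
    ("eat",    lambda w, a: a["hunger"] > 0.7,          lambda a: a.update(state="eating"),    2),
    ("wander", lambda w, a: True,                       lambda a: a.update(state="wandering"), 1),
]

def run_engine(world, agent, rules=RULES):
    """The 'engine': run the highest-priority rule whose condition holds."""
    applicable = [r for r in rules if r[1](world, agent)]
    name, _, action, _ = max(applicable, key=lambda r: r[3])
    action(agent)
    return name

agent = {"courage": 0.5, "hunger": 0.9, "state": "idle"}
print(run_engine({"threat": 0.2}, agent))   # -> "eat"
```

The point of the shape is that growing the behavior means growing the rule table (the voluminous part), not the loop.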





Quote:
But I'm not so sure that is a good model for intelligence. What if we were to somehow compartmentalize subsystems, and then identify a tree's root node or pyramid's peak? My gut feeling is we would find that central core, or universal principle, to be just one more subsystem with its own small set of tasks to perform; just one player in a set of players. The heart is by all means an important piece, but it's still only a piece.


This is a model for 'simulation' of intelligence beyond what we have seen so far in games. The system I propose is not really compartmentalized, in that it is targeting the overall goal of creating 'good enough' behavior within a given simulated environment. There may be numerous precanned 'solutions' to problems formulated from an object's goals, but tying it all together is a generalized metric to evaluate and prioritize/select which solutions are to be executed. The evaluation mechanism, I think, is the most difficult aspect, because my model includes uncertainty, which is a hard thing to evaluate mathematically because it is based on the possibilities in a complex (simulated) world system.
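One way such a generalized metric could fold in uncertainty (a guess at an implementation, not the poster's actual mechanism): score each candidate solution by expected utility over its possible outcomes, and discount solutions whose information is stale or unreliable.

```python
def expected_utility(solution):
    """solution["outcomes"] is a list of (probability, utility) pairs."""
    return sum(p * u for p, u in solution["outcomes"])

def evaluate(solution):
    # confidence in [0, 1]: how much we trust our model of the situation
    return expected_utility(solution) * solution["confidence"]

def select(solutions):
    return max(solutions, key=evaluate)
```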

Once the decision is made, carrying out the solution is relatively simple (allowing for reevaluation along the way as the situation changes), using planners and hierarchical FSMs on relatively certain information (though in the more advanced simulation I would want to have, simulated human relations always provide potential uncertainties no matter how obvious a local situation is).
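A bare-bones hierarchical FSM sketch (my own illustration of the technique named above, with hypothetical states): a high-level state owns a nested machine of low-level steps, and either layer can be redirected when reevaluation changes the plan.

```python
class FSM:
    def __init__(self, states, start):
        self.states, self.state = states, start

    def update(self, ctx):
        # each handler returns the next state name, or None to stay put
        nxt = self.states[self.state](ctx)
        if nxt:
            self.state = nxt

# low-level machine for a "fetch food" plan step
fetch = FSM({
    "goto_kitchen": lambda ctx: "grab_food" if ctx["at_kitchen"] else None,
    "grab_food":    lambda ctx: "done",
    "done":         lambda ctx: None,
}, "goto_kitchen")

# high-level machine delegates to the nested one while in the "eat" state
top = FSM({
    "idle": lambda ctx: "eat" if ctx["hungry"] else None,
    "eat":  lambda ctx: (fetch.update(ctx), "idle" if fetch.state == "done" else None)[1],
}, "idle")

# usage: ctx = {"hungry": True, "at_kitchen": False}; top.update(ctx)  # idle -> eat
```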

Currently I'm looking at search mechanisms which evaluate a network of nodes (à la A*) but seek paths that offer the most possibility of alternate solutions when the 'map' of knowns changes (which it does constantly in any real dynamic system). Again, using this mechanism calls for reevaluation as the problem is incrementally solved (eating lots of CPU...).
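One possible reading of that idea (my interpretation, not the poster's actual algorithm): a best-first search whose priority rewards nodes with more outgoing alternatives, so the chosen path keeps options open if the map changes. Note the flexibility bonus trades away A*'s optimality guarantee for adaptability.

```python
import heapq

def flexible_search(graph, start, goal, heuristic, flex_weight=0.5):
    """graph: node -> list of (neighbor, step_cost). Lower priority = expanded sooner."""
    frontier = [(0.0, 0.0, start, [start])]   # (priority, cost_so_far, node, path)
    seen = set()
    while frontier:
        _, cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        for nxt, step in graph.get(node, []):
            flexibility = len(graph.get(nxt, []))   # how many ways out of nxt
            priority = cost + step + heuristic(nxt) - flex_weight * flexibility
            heapq.heappush(frontier, (priority, cost + step, nxt, path + [nxt]))
    return None
```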





Quote:
In some ways, I'm agreeing with you that complexity might be an irreducible feature. In other ways, I'm suggesting that complexity originates from the task at hand.




The 'task at hand' is defined by the complexity of the world being simulated and the expression of behaviors required for the simulation. You may be able to roughly estimate the computation required; I would expect it to be something like N^3 or N^4, but even that may be too optimistic.


First, how do you factor the simulation into parameters for an equation?

Some factors I can think of, beyond the usual N objects and XxY spatial dimensions:

A system for the 'grain' of results of actions (ie partial results versus unitized). This can actually affect the spread of effects of events, whether as a field or as an absolute true/false threshold. Mechanisms of sensor perception are based on this factor. Mechanisms of summated effects over distances are defined by this factor.

The extent of internal state for 'intelligent' objects. I once said that when the human mind became sapient, the universe at that moment doubled in complexity.
For simulations which encompass relations between 'intelligent' objects, the evaluation of another object's internal state (mindset/motives/...) could become a significant part of any decision.

The detail level of the physical simulation (this is really another aspect of 'grain'). How much finesse is relevant in the simulation -- how many twitchy little details become important in the actions to be taken (and obviously in the evaluation leading to the decision)?





Stepping back from simulations of such higher detail, there is still vast room for improvement of the 'sort-of' AI which can be done on the utterly simple world mechanisms that currently exist in games today. Gross details, coarse-grained interactions, limited world space and object counts still offer a vast problem space. Creating AI that behaves closer to what a human can achieve in the same environment will be enough of a task.
--------------------------------------------Ratings are Opinion, not Fact

