Quote:
Original post by Kylotan
Quote:
Original post by AIResearcher
The hindrance to adoption of this technology in most games has to be the development of parsers capable of translating the complexity of the language medium (text, gesture, or speech) into a machine-readable form.
By this, do you mean the semantic interpretation? I was thinking that the actual language side of things is quite simple. It's the planning and querying for more information that seemed like it might be the bottleneck. Obviously the semantics need customising on a per-game basis.
We may have a miscommunication of terms. The hindrance to adoption would be the limiting factor in game developers accepting this technology and adding it to their own games: the way the human interacts with the agent, and what is and is not expressible. The bottleneck, to me, refers to the processing power taken away from the game, which would indeed be the higher-order reasoning of the agent.
As for the hindrance caused by language:
On the planning, querying, and execution side: once you have built atomic actions and elements (such as characters and objects), you can use a standard reasoning engine to compose procedures, tasks, and higher-order queries.
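To make that concrete, here is a rough sketch in Python. The action and task names (move_to, pick_up, fetch) and the toy world state are placeholders I made up; a real reasoning engine would search for the sequence rather than hard-code it:

```python
from dataclasses import dataclass, field

@dataclass
class WorldState:
    # Toy internal model: who is where, and who holds what.
    positions: dict = field(default_factory=dict)
    holding: dict = field(default_factory=dict)

# Atomic actions: the smallest machine-executable units.
def move_to(state: WorldState, actor: str, place: str) -> None:
    state.positions[actor] = place

def pick_up(state: WorldState, actor: str, item: str) -> None:
    state.holding[actor] = item

# A "procedure" or task is just a composition of atomic actions; the
# reasoning engine's job is to build compositions like this on demand.
def fetch(state: WorldState, actor: str, item: str, place: str) -> None:
    move_to(state, actor, place)
    pick_up(state, actor, item)

if __name__ == "__main__":
    world = WorldState()
    fetch(world, "companion", "key", "guardhouse")
    print(world)  # positions={'companion': 'guardhouse'}, holding={'companion': 'key'}
```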
On the language side, you can easily build a simple grammar capable of expressing commands and simple queries.
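Something along these lines, where a handful of patterns map utterances onto commands or queries. The verbs, command names, and utterances here are only illustrative:

```python
import re

# Each pattern maps a surface form onto a command the agent can execute.
COMMAND_PATTERNS = [
    (re.compile(r"^(?:go to|move to) (?:the )?(?P<place>\w+)$"), "MOVE"),
    (re.compile(r"^pick up (?:the )?(?P<item>\w+)$"),            "PICK_UP"),
    (re.compile(r"^where is (?:the )?(?P<item>\w+)\??$"),        "QUERY_LOCATION"),
]

def parse(utterance: str):
    """Map a player utterance to a (command, arguments) pair, or None."""
    text = utterance.strip().lower()
    for pattern, command in COMMAND_PATTERNS:
        match = pattern.match(text)
        if match:
            return command, match.groupdict()
    return None

if __name__ == "__main__":
    print(parse("Go to the bridge"))      # ('MOVE', {'place': 'bridge'})
    print(parse("pick up the key"))       # ('PICK_UP', {'item': 'key'})
    print(parse("Where is the armory?"))  # ('QUERY_LOCATION', {'item': 'armory'})
```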
It's the expansion of capabilities on either side that allows interaction to go from simple command and control to more expressive statements, such as proposals and mutual planning, discussion over what to do next, emotional stimulus and response, and so on.
These capabilities are abstracted above the game itself, yet they still need some internal model, as well as the language recognition, to understand exactly when a player is communicating something such as proposing an objective. To me, standard communication policies and structured protocols can handle a proposal and the subsequent accept, reject, or counter. What is difficult is allowing for the various ways a proposal can be triggered. For example, a proposal could be "How about we X", "Let's X", "I think we should X", and so on.
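As a rough illustration, the different surface forms can all collapse onto a single PROPOSE act, and the protocol itself only needs to answer with accept, reject, or counter. The act names and patterns below are my own invention, not any established standard:

```python
import re

# Several surface forms all trigger the same PROPOSE act with objective X.
PROPOSAL_PATTERNS = [
    re.compile(r"^how about we (?P<objective>.+?)\??$"),
    re.compile(r"^let'?s (?P<objective>.+)$"),
    re.compile(r"^i think we should (?P<objective>.+)$"),
]

def recognise_proposal(utterance: str):
    """Collapse any recognised surface form onto a ('PROPOSE', objective) act."""
    text = utterance.strip().lower()
    for pattern in PROPOSAL_PATTERNS:
        match = pattern.match(text)
        if match:
            return ("PROPOSE", match.group("objective"))
    return None

def respond(act, agent_agrees, counter_objective=None):
    """Structured protocol: a proposal is answered with accept, reject, or counter."""
    kind, objective = act
    if kind != "PROPOSE":
        return ("IGNORE", None)
    if agent_agrees:
        return ("ACCEPT", objective)
    if counter_objective:
        return ("COUNTER", counter_objective)
    return ("REJECT", objective)

if __name__ == "__main__":
    act = recognise_proposal("How about we storm the gate?")
    print(act)  # ('PROPOSE', 'storm the gate')
    print(respond(act, agent_agrees=False,
                  counter_objective="scout the walls first"))  # ('COUNTER', ...)
```

The point is that the protocol layer stays small and fixed; it's the list of trigger patterns that has to grow to cover how players actually phrase things.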
But perhaps this is allowing too much expression in the language. If one were to limit the expression of a proposal to a single utterance or a small set of utterances, or perhaps even a GUI component such as a menu, then the hindrance to adoption would be in the acceptance of that interface.