
Embedding an intelligent assistant into real-time strategy.

Started by July 08, 2006 08:10 AM
20 comments, last by adam23 18 years, 4 months ago
Quote: Original post by AIResearcher
The hindrance to adoption of this technology in most games has to be the development of parsers capable of translating the complexity of the language medium (text, gesture, or speech) into a machine-readable form.


By this, do you mean the semantic interpretation? I was thinking that the actual language side of things is quite simple. It's the planning and querying for more information that seemed like it might be the bottleneck. Obviously the semantics need customising on a per-game basis.
Quote: Original post by Kylotan
Quote: Original post by AIResearcher
The hindrance to adoption of this technology in most games has to be the development of parsers capable of translating the complexity of the language medium (text, gesture, or speech) into a machine-readable form.


By this, do you mean the semantic interpretation? I was thinking that the actual language side of things is quite simple. It's the planning and querying for more information that seemed like it might be the bottleneck. Obviously the semantics need customising on a per-game basis.


We may have a miscommunication of terms. The hindrance to adoption would be the limiting factor in game developers accepting this technology and adding it to their own games: the way the human interacts with the agent and what is or is not expressible. The bottleneck, to me, refers to the processing power taken from the game, which would indeed be the higher-order reasoning of the agent.

As for a discussion of the hindrance caused by language:

On the planning, querying, and execution side, once you build atomic actions and elements (such as characters and objects), you can use a standard reasoning engine to build procedures, tasks, and higher-order queries.

On the other side, you can easily build a simple grammar (a language) for issuing commands and simple queries.
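To make that concrete, here is a minimal sketch of the kind of verb-plus-arguments command grammar I mean. The verbs and structure are purely illustrative, not the actual grammar used by the assistant:

```cpp
// Minimal sketch of a simple command grammar for an RTS assistant.
// The verbs and argument structure are illustrative only.
#include <iostream>
#include <sstream>
#include <string>
#include <vector>

struct Command {
    std::string verb;               // e.g. "build", "attack", "report"
    std::vector<std::string> args;  // e.g. {"2", "barracks"}
};

// Split a typed utterance into a verb and its arguments; reject unknown verbs.
bool parseCommand(const std::string& utterance, Command& out) {
    std::istringstream in(utterance);
    if (!(in >> out.verb)) return false;   // empty input
    static const std::vector<std::string> knownVerbs = {"build", "attack", "report"};
    bool known = false;
    for (const auto& v : knownVerbs)
        if (v == out.verb) known = true;
    if (!known) return false;              // not part of the grammar
    std::string token;
    while (in >> token) out.args.push_back(token);
    return true;
}

int main() {
    Command cmd;
    if (parseCommand("build 2 barracks", cmd))
        std::cout << cmd.verb << " with " << cmd.args.size() << " arguments\n";
    return 0;
}
```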

It's the expansion of capabilities on both sides that allows interaction to go from simple command-and-control to more expressive exchanges, such as proposals and mutual planning, discussion over what to do next, emotional stimulus and response, and so on.

These are abstracted above the game itself, yet they still need some internal model, as well as the language recognition, to understand exactly when a player is communicating something such as proposing an objective. To me, standard communication policies and structured protocols can handle addressing a proposal and the subsequent accept, reject, or counter. What is difficult is handling the various ways a proposal can be triggered. For example, a proposal could be "How about we X", "Let's X", "I think we should X", and so on.
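As a purely illustrative sketch (not the assistant's actual recognizer), those surface variations could be collapsed into a single "propose" dialogue act by matching against a small set of templates:

```cpp
// Sketch: collapse several surface phrasings into one "propose" dialogue act.
// The templates and act names are illustrative, not taken from the real system.
#include <cctype>
#include <iostream>
#include <optional>
#include <string>
#include <vector>

struct DialogueAct {
    std::string type;     // e.g. "propose"
    std::string content;  // the proposed action, e.g. "rush their base"
};

std::optional<DialogueAct> recognizePropose(const std::string& utterance) {
    // Each template is a prefix; whatever follows it is treated as the proposal body.
    static const std::vector<std::string> templates = {
        "how about we ", "let's ", "i think we should "
    };
    std::string lower = utterance;
    for (char& c : lower) c = static_cast<char>(std::tolower(static_cast<unsigned char>(c)));
    for (const auto& t : templates)
        if (lower.rfind(t, 0) == 0)  // utterance starts with this template
            return DialogueAct{"propose", utterance.substr(t.size())};
    return std::nullopt;             // not recognized as a proposal
}

int main() {
    if (auto act = recognizePropose("Let's rush their base"))
        std::cout << act->type << ": " << act->content << "\n";
    return 0;
}
```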

But perhaps this is allowing too much expression in the language. If one were to limit the expression of a proposal to a single utterance or a simple set of utterances, or perhaps even a GUI component such as a menu, then the hindrance to adoption would be in the acceptance of that interface.

I was blown away by this. I absolutely *must* play a game with this in it.

I would be absolutely thrilled if you could give me (and everyone else on this forum) a download of the game you were playing (with the AI assistant in it, of course). Even if it's still in a 'raw' form (not that it looked that way in the video), it would be incredible to play, at least from a technology perspective. Even if I have to type in a "specialized language", I would love to try it out.
That is a really cool demo! About time a real AI helped the player manage their base. I have a lot of questions!
On your blog, you say:
Quote:
It is important to realize that the text being entered has been parsed into a specialized language. This means what you see typed has to be encoded by hand, by a human, in order for the agent to understand it.

Does this mean that the questions printed on the screen are not really what was typed? I could understand that, since it is not very interesting in this situation to type out a whole sentence.
And if you really do some NLP parsing, have you encoded a few hundred sentence templates that would be common in an RTS game?
This demo is neat, but after having seen a lot of sample chatbot discussions, you will understand why I am cautious :-) What kinds of mistakes can your system make? Can you list the vocabulary it can understand?

Can you name some projects that inspired you? It sort of reminds me of SHRDLU...
What language did you use for the AI? Did you code your own inference engine, or did you use an existing one?

PS: I am really curious, but I would understand if you have some technical secrets you don't want to share ;-)
Quote: Original post by Ezbez
I was blown away by this. I absolutely *must* play a game with this in it.

I would be absolutely thrilled if you could give me (and everyone else on this forum) a download of the game you were playing (with the AI assistant in it, of course). Even if it's still in a 'raw' form (not that it looked that way in the video), it would be incredible to play, at least from a technology perspective. Even if I have to type in a "specialized language", I would love to try it out.


Now that I have successfully defended my dissertation and turned in the final proof, I hope to work on debugging some of the problems with distribution, build a front-end NLP parser, and release a demo to the gaming community.

Along those lines, if I work on both the technology and its implementation within specific games, I will spread myself too thin. I aim to complete a demonstration version for Stratagus / Battle of Survival. However, I am hoping that some of you will want to see this technology in your own game. I would like to hear your game concept, how you hope to incorporate this technology, and any screenshots or previous video game experience you may have. Please email me at John@AssistiveIntelligence.net.

Currently, the technology has been developed for 32-bit Windows. The code itself is not heavily dependent on platform-specific features, so other platforms may also be viable. However, I currently intend to commercialize the technology, and because it is so young, I am not inclined to open-source it at this time.

Quote: Original post by owl
This is so amazing that it's just very hard to believe it's true. I probably won't buy into this until I play with it :)


I can understand your reservations, and the term 'vaporware' comes to mind with this type of demonstration. All I can offer is that my dissertation committee has seen enough to grant me approval. I hope to release a demonstration version to the gaming community as soon as I can. There are some distribution and packaging issues to overcome, and I would also like to put a nice parser on the front end.
Quote: Original post by Yvanhoe
That is a really cool demo! About time a real AI helped the player manage their base. I have a lot of questions!
On your blog, you say:
Quote:
It is important to realize that the text being entered has been parsed into a specialized language. This means what you see typed has to be encoded by hand, by a human, in order for the agent to understand it.

Does this mean that the questions printed on the screen are not really what was typed? I could understand that, since it is not very interesting in this situation to type out a whole sentence.
And if you really do some NLP parsing, have you encoded a few hundred sentence templates that would be common in an RTS game?
This demo is neat, but after having seen a lot of sample chatbot discussions, you will understand why I am cautious :-) What kinds of mistakes can your system make? Can you list the vocabulary it can understand?

Can you name some projects that inspired you? It sort of reminds me of SHRDLU...
What language did you use for the AI? Did you code your own inference engine, or did you use an existing one?

PS: I am really curious, but I would understand if you have some technical secrets you don't want to share ;-)


The questions on the screen are hand-coded into a specific internal representation. Later this afternoon, I will be posting a small brief on my blog (http://blog.assistiveintelligence.com) explaining exactly how the process works and providing a script of the demonstration, including the internal representation and some of the processing rules and mechanisms internal to the assistant.
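For a rough idea only, a hand-encoded question might reduce to something like the struct below. This is a made-up illustration; the actual internal representation will be described in the blog post:

```cpp
// Purely illustrative: a hypothetical hand-encoded form of a player question.
// The assistant's real internal representation differs; see the blog post.
#include <iostream>
#include <string>
#include <vector>

struct EncodedQuery {
    std::string act;                     // speech act, e.g. "ask"
    std::string predicate;               // e.g. "count"
    std::vector<std::string> arguments;  // e.g. {"unit:footman", "owner:player"}
};

int main() {
    // "How many footmen do I have?" encoded by hand for the demonstration:
    EncodedQuery q{"ask", "count", {"unit:footman", "owner:player"}};
    std::cout << q.act << " " << q.predicate << " (" << q.arguments.size() << " args)\n";
    return 0;
}
```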

As for inspiration: I developed a MUD a while back (based on ROM 2.4) that tried to give creatures conversational capabilities similar to those found in Ultima V, with keyword-driven conversations. That was my initial inspiration in this field. From there, I learned a lot about linguistics, dialogue management, and intelligent agents, and saw a lot of the state of the art, such as TRAINS and COLLAGEN.

The reasoning behind the agent is based on CLIPS / JESS-style rules, but I had to build a limited version of my own to work around the limitation of not being able to nest DEFTEMPLATEs. I limit the actual expressiveness of each rule so that a protocol modeling system (based on Prioritized Hierarchical Coloured Petri Nets) can be used to model the individual rules and provide an engineering methodology for constructing these rules and their resulting conversation models.
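As a rough illustration of what I mean by limited expressiveness (the slot names and the sample rule are invented, not drawn from the actual rule base), each rule only matches flat facts with fixed slot values and emits an opaque action label:

```cpp
// Sketch of a flat, CLIPS/JESS-style fact and rule representation (no nested templates).
// Slot names and the example rule are illustrative only.
#include <iostream>
#include <map>
#include <string>
#include <vector>

// A fact is a template name plus flat string-valued slots,
// e.g. (unit (type worker) (task idle)).
struct Fact {
    std::string templateName;
    std::map<std::string, std::string> slots;
};

// A rule fires when a fact of the right template has every required slot value.
struct Rule {
    std::string templateName;
    std::map<std::string, std::string> conditions;
    std::string action;  // an opaque action label handed to the agent
};

void run(const std::vector<Fact>& facts, const std::vector<Rule>& rules) {
    for (const auto& r : rules)
        for (const auto& f : facts) {
            if (f.templateName != r.templateName) continue;
            bool matched = true;
            for (const auto& cond : r.conditions) {
                auto it = f.slots.find(cond.first);
                if (it == f.slots.end() || it->second != cond.second) { matched = false; break; }
            }
            if (matched) std::cout << "fire: " << r.action << "\n";
        }
}

int main() {
    std::vector<Fact> facts = {{"unit", {{"type", "worker"}, {"task", "idle"}}}};
    std::vector<Rule> rules = {{"unit", {{"task", "idle"}}, "assign-harvest"}};
    run(facts, rules);
    return 0;
}
```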

There is a separation between the agent's reasoning and the conversational system's reasoning; they use different engines, both simple ones made from scratch in C++.

I hope to post more details on my blog in the aforementioned post.
From what I understand, you are typing the commands in to the AI by hand. From a gaming perspective, it seems like the next step might be a voice interface, which at least takes away the step of trying to type while playing.

"I can't believe I'm defending logic to a turing machine." - Kent Woolworth [Other Space]

Interesting, a game that talks back at you when you try to do dumb things? Sweet.
Old Username: Talroth
If your signature on a web forum takes up more space than your average post, then you are doing things wrong.
Well, it definitely has potential even with just giving orders, if you can be doing one thing by hand while giving the AI orders to handle something else (like building a squad and grouping it for you).

"I can't believe I'm defending logic to a turing machine." - Kent Woolworth [Other Space]

That is the coolest use of an intelligent agent, especially if it can remember different ideas from one run to the next, including ways of differentiating plans so as not to confuse them (i.e. being able to 'rush' or 'tech').
It would be really nice if this ever got incorporated into other games.

