Brainstorming: Social AI

Started by July 09, 2007 04:26 PM
9 comments, last by wodinoneeye 17 years, 4 months ago
Your mention of speech reminded me of the testing I did a few years ago with the Microsoft Voice Recognition SDK as an input mechanism for my project (simple sentences -- Name+Verb+Object with a limited vocabulary -- or just as an interface enhancement for mode changes, emoter cues, etc.).
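A minimal sketch of the Name+Verb+Object input scheme described above, assuming the recognizer hands back plain text; the vocabulary sets and function name here are invented for illustration:

```python
# Hypothetical Name+Verb+Object command parser with a limited vocabulary.
# All words below are illustrative, not from any real project.

NAMES = {"guard", "merchant", "smith"}
VERBS = {"take", "give", "goto", "attack"}
OBJECTS = {"sword", "apple", "gate", "wolf"}

def parse_command(text):
    """Parse a recognized utterance into (name, verb, obj), or None."""
    words = text.lower().split()
    if len(words) != 3:
        return None
    name, verb, obj = words
    if name in NAMES and verb in VERBS and obj in OBJECTS:
        return (name, verb, obj)
    return None

print(parse_command("guard take sword"))  # ('guard', 'take', 'sword')
print(parse_command("guard sing sword"))  # None (verb not in vocabulary)
```

Restricting the grammar this tightly is what makes speech input tractable: the recognizer only has to discriminate among a handful of known words per slot.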

That in turn (indirectly) made me think of how difficult and tedious it would be to enter the sequences for those 'solutions' I spoke of before (one possibility I had considered was issuing instructions to NPCs to teach them sequences of actions). But that alone would be insufficient for entering many of the logic components the scripting would need.

A possible shortcut might be to act out the solution yourself and have the program record your actions and the things you interact with, building a script that could then be edited in detail. A conversion pass would generalize the objects involved into symbols, and logic clauses would be added to represent the relations between the objects and the actions. Extraneous details could be eliminated, and the whole thing boiled down into a code sequence of the steps required for the solution.
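The "record then generalize" step could be sketched roughly like this, assuming the recorder produces a trace of (verb, object) events; the trace contents and variable-naming scheme are invented for illustration:

```python
# Hypothetical generalization pass: concrete objects in a demonstrated
# action trace are replaced by symbolic variables, turning the trace into
# a reusable script template. All names here are illustrative.

def generalize(trace):
    """Replace each distinct concrete object with a symbolic variable."""
    symbols = {}   # concrete object id -> symbolic variable name
    script = []
    for verb, obj in trace:
        if obj not in symbols:
            symbols[obj] = f"?obj{len(symbols)}"
        script.append((verb, symbols[obj]))
    return script, symbols

# A demonstrated solution: carry a bucket to a well and back.
demo = [("take", "bucket_17"), ("walk_to", "well_3"),
        ("use", "bucket_17"), ("walk_to", "house_5")]

script, bindings = generalize(demo)
# script: [('take', '?obj0'), ('walk_to', '?obj1'),
#          ('use', '?obj0'), ('walk_to', '?obj2')]
```

The editing step the post mentions would then operate on `script`: pruning extraneous steps and attaching logic clauses (e.g. constraints on what `?obj0` must be) before the template is stored as a solution.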

I will have to think some more about the different transformations (automatic and user-guided) that would reduce the raw description of actions into 'script' code, about a system to recognize patterns (solutions are broken down into substeps, which are themselves goals to which solutions would be applied recursively), and about organizing competing solutions for the same goal.
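The recursive goal/solution structure described above can be sketched as a table mapping each goal to its competing solutions, expanded depth-first; the goal names and solution table are invented for illustration:

```python
# Hypothetical recursive goal decomposition: each goal maps to one or
# more competing solutions, each a sequence of primitive actions and/or
# subgoals that are expanded recursively. All names are illustrative.

SOLUTIONS = {
    "get_food":   [["goto_market", "buy_bread"],          # solution A
                   ["hunt", "cook_meat"]],                # solution B
    "hunt":       [["get_weapon", "track_prey", "kill_prey"]],
    "get_weapon": [["take_spear"]],
}

def expand(goal, choose=0):
    """Expand a goal into primitive actions, picking solution `choose`
    at the top level and the first solution for each subgoal."""
    if goal not in SOLUTIONS:
        return [goal]                  # primitive action, no expansion
    plan = []
    for step in SOLUTIONS[goal][choose]:
        plan.extend(expand(step))
    return plan

print(expand("get_food", choose=0))
# ['goto_market', 'buy_bread']
print(expand("get_food", choose=1))
# ['take_spear', 'track_prey', 'kill_prey', 'cook_meat']
```

Organizing competing solutions then becomes a selection problem: some scoring or applicability check would pick which entry in a goal's solution list to expand, instead of the fixed `choose` index used here.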
--------------------------------------------
Ratings are Opinion, not Fact

This topic is closed to new replies.