
Deserted Island

Started by October 30, 2005 09:02 AM
14 comments, last by GameDev.net 19 years, 3 months ago
A prehistoric humanoid has become stranded on an island. Your task is to help him/her stay alive. This is an idea I'm thinking about, as much to try out some ideas for AI as to make a real game. The first thing I would like opinions on: would it be better to have some sort of direct control of him/her, or would using a protolanguage be okay?

Currently in the prototype I have made, the player uses a protolanguage to teach him new skills (plans). When the "game" first starts, he knows some simple actions: picking objects up, putting one object on top of another, hitting one object with another, throwing objects, attaching one object to another, sleeping, eating, drinking. It is the player's task to teach him how to combine these actions into plans which are useful. He will then try to learn the result of carrying out a plan. So if he is taught to pick up a stone and throw it at a small animal, he should learn that this will often (sometimes) lead to a dead animal, which is food. Or if he puts pieces of wood together, he can build a shelter which will keep him warmer and drier. Attaching a sharp flint to a stick makes something that can cut. Other things he would hopefully learn in time are not to kill all the animals of one type, and maybe farming techniques.

I don't see the AI being too complex to start with. The only part I see needing much work, AI-wise, is getting him/her to learn the right results of carrying out the plans; and even here, I think that if at first he thought the results of a plan were different from what they actually were, this might add to the challenge, as long as in time he learns the right results.

So what I'd like is any ideas people have about this, and also any ideas about how the gameplay could be improved, or different tasks the humanoid needs to do to survive. And would using a protolanguage as the user interface be good enough, or should it use some other system? Pie menus maybe (like The Sims).
[Edited by - MWr2 on October 30, 2005 9:40:28 AM]
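As a rough illustration of the plan-outcome learning described above, here is a minimal sketch (the class and all names are purely illustrative, not from any actual prototype): the humanoid simply counts how often each result follows a taught plan, and comes to expect the most frequent one.

```python
from collections import Counter

class Plan:
    """A taught plan: a named sequence of the humanoid's primitive actions."""
    def __init__(self, name, actions):
        self.name = name
        self.actions = actions
        self.outcomes = Counter()  # observed results of carrying out the plan

    def record(self, outcome):
        self.outcomes[outcome] += 1

    def believed_result(self):
        """The result the humanoid currently expects: the most frequent one."""
        if not self.outcomes:
            return None
        return self.outcomes.most_common(1)[0][0]

# The player teaches a hunting plan built from primitive actions; executing it
# sometimes yields food and sometimes the animal escapes.
hunt = Plan("hunt small animal", ["pick up stone", "throw stone at animal"])
for outcome in ["dead animal (food)"] * 12 + ["animal escapes"] * 8:
    hunt.record(outcome)
```

With a frequency count like this, early mistaken beliefs correct themselves as more attempts accumulate, which matches the "as long as in time he learns the right results" idea.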
The character can probably understand simple immediate results, like that throwing a stick at a tiger is not a smart thing to do. However, more complex results will be too much for his prehistoric brain. I doubt he will be able to understand by himself that while setting the apple tree on fire will warm him on a cold winter night, it is not a good idea, because in the long run he won't be able to get apples from that tree. The player should be able to teach him that. Maybe using a punishment system (Black & White).
-----------------------------------------Everyboddy need someboddy!
Learning the longer term results of actions is something I do want to try to include in the future (or at least work on, I have some ideas of how it could be done). But I had thought about how much (or if) the player should be able to punish the character as well.
I had wondered whether using a protolanguage would be too tedious. While I can think of some ways the player could show him/her what to do, I don't really want that to be the main way of interacting with him.

The hardest part of the first stage of the project I have in mind is deciding the best way to make it fun to play while still allowing me to do the things I have in mind for the AI. I could make it so that the gameplay isn't important, as the main purpose is AI. But as I was planning to release it once it reaches a usable point (most likely as open source), I do want there to be some gameplay.

What I describe in the first post is what I plan for stage one, and I am quite certain I can do the AI for that (my job is in AI, though not game related). It is the later stages which I expect to start to push the AI.

One of the reasons I do want to use a protolanguage for at least some things, is because one of my aims for the later stages is to develop the protolanguage further and ground it into his/her perceptions and actions more deeply. (I have worked on language grounding as part of my job before.)

But tedium is a factor; the only other game I can think of where an AI creature tried to understand any language is "Creatures". I certainly hope I can make my caveman understand the protolanguage better than the creatures did in that game.

The voice input can be added once I have everything else working.

As for gameplay, one of the ideas I have is for the caveman to have to go on missions, during which you cannot communicate with him. Maybe to another island or into a cave, and you must have taught him enough to survive there and carry out his "mission". Missions could be something like finding some item, or sheltering in a cave from a storm for a couple of days. But how much fun is it going to be to just watch your caveman, unable to control or correct him, until he returns from the mission?

[Edited by - MWr2 on November 2, 2005 4:59:45 AM]
Quote:
You might look into having the learner watch YOU demonstrate stuff and emulate you (look into the game Black & White).


Yes, look into Black and White - to see a prime example of how NOT to do it. Honestly, I could not stop my stupid flipping cow from eating dung no matter HOW MANY times I slapped him after, or while, he did it. Then I realised I could complete the entire game without ever worrying about the creature at all. Then I realised I didn't WANT to complete it, as it wasn't so much a game as a tech demo. Rubbish. But anyway...

You have come up with an intriguing and interesting idea though. For me personally, unless you are going to be able to implement truly cutting edge AI systems (and I mean AI in the purest sense, not merely in terms of game AI which is hardly the same thing) an indirect approach, regardless of control method, is likely to end up very frustrating. I would rather see direct control over the humanoid - not necessarily as direct as "press left => move left", but perhaps a little more abstracted than that, like "click here => move here or use this" with the camera detached from the player. Staying alive on a deserted island then becomes your responsibility, not the humanoid's - which I realise is maybe not quite what you had in mind.

In either case, how about the possibility of not so much one humanoid, as a tribe of humanoids? The thing with learning is that by the time your humanoid has learned that throwing sticks at tigers is A Bad Idea, he's dead. (If you choose to have predators and death in your game, of course). On the other hand, if Ug the humanoid throws a stick at a tiger and gets killed, Og the humanoid sees it and thinks "right, don't throw sticks at tigers!" and can pass that information on to the rest of the tribe. They might even learn that it's alright to throw sticks at tigers so long as there's a group of at least ten of them all armed with flint-tipped spears... (Also, a tribe could learn from the one humanoid you have direct control over, if you were to go down that route. The player becomes less of a god, more of a leader).

Finally, in terms of the interface - whether it be menus, pie menus, a protolanguage, speech recognition - my strong recommendation is to prototype, prototype, prototype. Implement all the above ideas (and more) in the crudest and fastest possible way, with no flashy graphics and the most basic interface. Give them to five people to try out and you'll rapidly learn which is the best way to go (and which might prove very difficult to actually implement). If you can get hold of the postmortem for The Sims, I recommend reading it - they didn't come up with that pie menu as their first and only idea.

Overall though, very interesting concept!
I know a lot of people don't like Black and White (and it's been a long time since I played it, as it didn't hold my attention for all that long), and just as many people dislike Lionhead because of their constant hyping. But they are one game company that I actually like, because they at least try to push the AI. Okay, so far their games haven't lived up to the hype (or to the earlier Bullfrog games), but hopefully if they keep trying they will get it right sooner or later. So for that reason I do buy most Lionhead games; I would hate for EA to see low sales figures and close down Lionhead, or, more drastically, to take the low sales as a sign that people don't care about AI in games.

Most game AI is at least 10 to 15 years behind academic research (and also behind a lot of other industries; I know the research being carried out in the industry I work in is more advanced than what is done in games).

I think it says a lot when "The Sims" is seen as one of the more advanced AI-based games. The Sims 1 and 2 are games that I have studied a lot, and to be honest I see very little real AI in them. The Sims 2 is basically an eye-candy improvement over The Sims 1. The underlying AI technology has changed little; it still uses the Simantics scripting language (with slight modifications). Having said that, Maxis is another company that I do support, as at least they try new ideas.

Back to my idea: I like your suggestion of prototyping various user interfaces and seeing what people like the most - thanks. I do think that getting the user interface right is going to be one of the hardest parts.

I have also thought of there being more humanoids on the island, maybe all belonging to the same tribe, or maybe natives to the island who can react in various ways to "your caveman" appearing there. But at first I want to get the basics of the game and AI working before I add inter-humanoid behaviors. To be in any way realistic, that would really need a whole range of behaviors: two humanoids being in love, two hating each other, arguments between them, fights between the males to become the alpha male, some humanoids leaving the tribe to start a new one. Once I do add more humanoids, I also want to allow them to communicate with each other using the protolanguage.
So I had planned to leave it until later stages to think about introducing more humanoids.

Once I do introduce more humanoids, then maybe you could have more direct control of your caveman with the task of bringing a more advanced civilization to the natives (who have only the basic skills).

Another idea I had, to make it more game-like and to allow him to be taught some things without using the protolanguage, was that the player is his spirit guide. So sometimes you use the protolanguage, but you can also appear to him (as another humanoid) and teach him by doing things. Appearing would use up energy, so to increase your energy level the caveman might have to do certain things or get certain items (maybe as part of the missions).

I had thought about the other problem you brought up: how does the humanoid learn what is dangerous if the thing he needs to learn about might kill him? So far I don't know the best way to handle that. Maybe make him immortal - but then why would he need to learn not to do things? He could just sit on the beach and do nothing.

[Edited by - MWr2 on November 2, 2005 2:07:46 PM]
Quote:
how does the humanoid learn what is dangerous if the thing he needs to learn about might kill him?


He can't. It's been a while since I've read any AI material, but at least a couple of years ago it unfortunately seemed like many (most?) AI researchers expected results similar to a new-born baby being able to engage in a deep, logical conversation. Think of the prehistoric humanoid in this scenario as a child and you'll soon realize it's actually pretty damn realistic if, during the first couple of minutes of gameplay, he drowns, falls off a cliff or gets eaten by a tiger.

Think about Average Joe over here. He's got thousands of years of trial and error backing him up, passed on by parents (and later on, several types of media) to children, so less trial has to occur, and thus fewer errors (including fatal ones). If Urrgh-Ghorak, living in the cave next to Joe's ancestors, hadn't gotten himself eaten by a tiger after throwing a stick at it, we might have no Joe. It's a good thing Joe's ancestors learnt and passed on the information "throwing a stick at a tiger -- bad". Somewhere along the line some footnotes were added: "(but throwing a sharp stick at a tiger can be good)". With even a fairly small population you can quickly gather up information like

1. Walking off the cliff at the edge of the island -- bad.
2. Apple tree -- food.
3. Fire & tree -- warm.
4. Fire & tree -- no tree.
5. (4) => Fire & apple tree -- bad.

Once again, I'm out of the loop regarding modern game AI, but this sounds an awful lot like neural network learning to me. To me the best theoretical solution would be to run a couple of [hundred] generations of learning AI "behind the scenes" and pass that knowledge on as a "memory" for the prehistoric humanoid before the game begins. I say "theoretical" solution because I honestly don't know if such learning systems would be practical to implement, or if smoke & mirrors would be better suited for this particular application. Like I said, I've been out of the loop (mainly implementing CAD rendering systems for the last five years) but I'm trying to catch up, as games & game (& graphics) development are pretty much the only real computer-related passion I have.

Juha

edit: For game balance issues the pre-game learning could be applied only to "worst case" scenarios. That way, the humanoid wouldn't already know too much (thus removing much of the initial fun of the game, teaching him about making fire, seeking shelter etc.) but he'd have less of a chance to make fatal mistakes like poking a tiger with a stick.
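The generational pre-learning idea could be sketched roughly like this (all names, actions and lifetimes here are made up for illustration): each simulated lifetime ends at the agent's first fatal mistake, but the taboo it discovered is inherited by the next generation.

```python
# Actions that kill an agent outright; the "worst case" knowledge worth seeding.
FATAL = {"walk off cliff", "poke tiger with stick"}

def live_one_life(inherited_taboos, curiosity):
    """One agent's lifetime: it tries actions in order, skipping known taboos.
    Its first fatal mistake kills it, but the tribe records what killed it."""
    taboos = set(inherited_taboos)
    for action in curiosity:
        if action in taboos:
            continue            # inherited knowledge: don't even try it
        if action in FATAL:
            taboos.add(action)  # observed by the tribe and passed on
            break               # this agent is dead
    return taboos

# Run a few generations "behind the scenes" before the game starts.
memory = set()
lifetimes = [
    ["poke tiger with stick", "eat apple"],
    ["eat apple", "walk off cliff"],
    ["eat apple", "drink water"],   # this one dies of old age
]
for curiosity in lifetimes:
    memory = live_one_life(memory, curiosity)
```

Restricting the seeded `memory` to the fatal cases, as the edit above suggests, leaves all the non-lethal discoveries (fire, shelter, food) for the player to teach.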

I also like the "spirit guide" idea a lot. It opens up a lot of options as far as game balance goes -- the player appears as a guide and the humanoid learns by observing and repeating, but the player would have to manage the teaching resource. For best progress you'd be giving the humanoid a nudge in the right direction here and there, and let him figure out the details.
Quote:
Original post by ruistola
Quote:
how does the humanoid learn what is dangerous if the thing he needs to learn about might kill him?


He can't. It's been a while since I've read any AI material, but at least a couple of years ago it unfortunately seemed like many (most?) AI researchers expected results similar to a new-born baby being able to engage in a deep, logical conversation. Think of the prehistoric humanoid in this scenario as a child and you'll soon realize it's actually pretty damn realistic if, during the first couple of minutes of gameplay, he drowns, falls off a cliff or gets eaten by a tiger.


I know that in real life this caveman (if he learnt the way I'm doing it, at least to start with) would never be able to learn about things which kill him; what I meant is that I'm trying to think of the best way to handle it in the game. That is, the rules don't have to be the same as in the real world. He could either not die and just get really badly hurt (and feel the pain), or he could come back to life after each time he dies. And if I use the spirit guide idea, then maybe the player loses energy when he dies. Something like that.

I agree with you about how some people in AI research used to think that an AI would be "born" knowing all about the world, or at least that all this knowledge could be programmed into an AI. I think research has moved on since then, and people realise that the best (or only) way an AI agent could know enough about the world is for it to learn.

Quote:

Think about Average Joe over here. He's got thousands of years of trial and error backing him up, passed on by parents (and later on, several types of media) to children, so less trial has to occur, and thus fewer errors (including fatal ones). If Urrgh-Ghorak, living in the cave next to Joe's ancestors, hadn't gotten himself eaten by a tiger after throwing a stick at it, we might have no Joe. It's a good thing Joe's ancestors learnt and passed on the information "throwing a stick at a tiger -- bad". Somewhere along the line some footnotes were added: "(but throwing a sharp stick at a tiger can be good)". With even a fairly small population you can quickly gather up information like

1. walking off the cliff at the edge of the island -- bad.
2. Apple tree -- food.
3. Fire & tree -- warm.
4. Fire & tree -- no tree.
5. (4) => Fire & apple tree -- bad.

Once again, I'm out of the loop regarding modern game AI, but this sounds an awful lot like neural network learning to me. To me the best theoretical solution would be to run a couple of [hundred] generations of learning AI "behind the scenes" and pass that knowledge on as a "memory" for the prehistoric humanoid before the game begins. I say "theoretical" solution because I honestly don't know if such learning systems would be practical to implement, or if smoke & mirrors would be better suited for this particular application. Like I said, I've been out of the loop (mainly implementing CAD rendering systems for the last five years) but I'm trying to catch up, as games & game (& graphics) development are pretty much the only real computer-related passion I have.

Juha



This is some of the type of learning that the caveman will do, which is more a decision-tree way of learning than a neural network. But it is another thing I need to give more thought to: how much knowledge/memory should he start with?

I use decision trees quite a bit in the prototype I'm working on at the moment, but I will also be using other techniques later on (not neural networks, which really aren't the right technique for this sort of thing).
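For what it's worth, a decision-tree approach to learning which situational feature actually predicts a plan's outcome might look something like this ID3-style sketch (the feature names and episode data are invented for the example, not taken from the prototype):

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a list of outcome labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def best_split(examples, features):
    """Pick the feature whose split most reduces outcome entropy (ID3-style)."""
    labels = [e["outcome"] for e in examples]
    base = entropy(labels)

    def gain(feature):
        remainder = 0.0
        for value in {e[feature] for e in examples}:
            subset = [e["outcome"] for e in examples if e[feature] == value]
            remainder += len(subset) / len(examples) * entropy(subset)
        return base - remainder

    return max(features, key=gain)

# Episodes of trying "throw stone at animal": the animal's size matters,
# the weather doesn't.
episodes = [
    {"animal": "small", "weather": "sun",  "outcome": "food"},
    {"animal": "small", "weather": "rain", "outcome": "food"},
    {"animal": "large", "weather": "sun",  "outcome": "no food"},
    {"animal": "large", "weather": "rain", "outcome": "no food"},
]
root = best_split(episodes, ["animal", "weather"])
```

Here the split on `animal` gives pure subsets (maximum information gain), so it becomes the root of the tree, while `weather` is ignored.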

[Edited by - MWr2 on November 2, 2005 12:29:13 PM]
Quote:
Original post by Anonymous Poster
Instructing your Caveman 101

goal + solution -> actions -> result

(Need symbols to identify (via language) goals, solutions, actions, results (good and bad) and operators like 'then', 'and', 'do', 'if', 'don't', etc...)
If you don't have symbols, getting the AI to generalize from a demonstration by itself is a pretty tall order.
Symbols need to represent generalizations ('axe' is 'weapon', 'knife' is 'weapon', 'club' is 'weapon') if you don't want a lot of duplicate logic.
Including multiple classifications ('axe' is 'weapon', 'axe' is 'tool').

Selecting the solution that fits the situation - teaching which right and wrong factors define an appropriate situation for a particular solution.
(Og trying to light a fire in the rain needs a "don't try it if ..." conditional...)

Combinations - solution = subgoal + subgoal ... or subgoal then subgoal ...
("First you do this, then you do this, then you do this..." gets tedious when describing things in detail to your caveman.)

Also (the fun part): evaluating and selecting the solution (the best one, if possible) for the given situation. Conflicting situational factors, the XOR problem, etc...

Endless conditional logic that must be taught in a complex simulation environment...
("Stand 'in' fire -- BADDD", "Stand 'next to' fire -- GOOOD (unless too hot, then BAADDDDD)", "Food in fire -- GOOOD, unless too long -- BADDDD", etc. etc. ad nauseam...)

Goals need to be prioritized (available goal solutions evaluated to even enable their candidacy), requiring a standard metric for comparison. Handling of uncertainty (planning where you can't account for all factors when the actions are actually carried out).
Opportunistic behavior that allows reacting to a changing situation (Ogg, single-mindedly intent on catching the tasty bunny, walks right past the tiger...).



When I consider the logic that Caveman Mk I will have to accumulate, I see pages and pages of logic, classification lists, solution sequences, preferences, etc....


With the AI that I have planned for stage one, I have no doubt that I can do it.

I plan for stage one to be a system which works but has known limitations; in the later stages I will be trying different techniques to see which is best in various areas. I don't see this as a short-term project but as a longer-term one, so I can try out some of the ideas I have.

Also, this will not be one monolithic system but an agent system made up of different modules, so there will not be one big list of logic and classifications.

As for object classifications: I hope to have a minimal set to begin with; then it is up to the player to create other groups, and they can teach him which objects should belong in each group. There will be a number of ways to teach him this. One might be to define the characteristics of the objects which should be in a group - so a cutting tool might need the characteristic of a sharp edge. (Objects on the island will have characteristics, and the caveman will be able to tell the characteristics of objects he looks at.) Another way would be to let the caveman experiment more, with the player telling him whether an object is a good one to use for a certain task; from that he learns whether the object should be in that group. (There will need to be sub-groups, as not all items from one group would be good for all tasks.) So the caveman agent has a grouping module which learns which groups objects should be part of, and then informs the rest of the agent about these groups as needed. I don't see anything very difficult about this module, as it is really just a search and reinforcement process.
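A grouping module along those lines might be sketched like this (the class and method names are illustrative only, not from the actual prototype): membership can come either from a group's required characteristics or from direct player reinforcement on individual objects.

```python
class GroupingModule:
    """Learns which objects belong in player-defined groups, either from
    required characteristics or from reinforcement on individual objects."""
    def __init__(self):
        self.required = {}   # group name -> set of required characteristics
        self.learned = {}    # group name -> objects reinforced into the group

    def define_group(self, group, characteristics):
        self.required[group] = set(characteristics)

    def reinforce(self, group, obj_name, good):
        """Player feedback: this object was (or wasn't) good for the group's task."""
        members = self.learned.setdefault(group, set())
        if good:
            members.add(obj_name)
        else:
            members.discard(obj_name)

    def in_group(self, group, obj_name, characteristics):
        if obj_name in self.learned.get(group, set()):
            return True
        required = self.required.get(group)
        return required is not None and required <= set(characteristics)

g = GroupingModule()
g.define_group("cutting tool", {"sharp edge"})
# A flint shard qualifies by its characteristics; the player can also
# reinforce an odd case directly.
g.reinforce("cutting tool", "broken shell", good=True)
```

The characteristic check is just a subset test, and reinforcement is a simple add/remove, which fits the "search and reinforcement" description above.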

As for the fire when it is raining: I plan for the results of actions to be a little more in-depth than just good or bad. He should learn that rubbing his sticks together should lead to a fire, but then, when he comes to try it while it is raining, he should learn that it is unlikely to work. So what he learns is the starting situation for an action/plan and the resulting situation.

Of course, the information has to be filtered (by other parts of the agent) so that he learns the right pre-situation and the right results - so that if he tries to light a fire when it is raining while there also happens to be a dog nearby, he learns that it will not light when it rains, instead of thinking that it will not light when a dog is nearby.

As the world and his knowledge get more complex, the number of results he has learnt gets very large, so it will be a challenge to keep this working later on; I also have other ideas in case my first ideas don't work. Using this technique in the real world would be too much, but a game/simulation is a world which can be controlled and have other restrictions placed on it.
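The filtering idea - learning that the rain, and not the nearby dog, is what stops the fire from lighting - could be approximated with simple frequency statistics. A toy sketch, with made-up episode data (not the actual filtering code):

```python
def relevant_factors(episodes, min_diff=0.5):
    """Compare a plan's success rate with and without each situational factor;
    a large difference marks the factor as relevant to the result."""
    factors = {f for e in episodes for f in e["situation"]}
    relevant = set()
    for f in factors:
        with_f = [e["success"] for e in episodes if f in e["situation"]]
        without = [e["success"] for e in episodes if f not in e["situation"]]
        if with_f and without:
            diff = abs(sum(with_f) / len(with_f) - sum(without) / len(without))
            if diff >= min_diff:
                relevant.add(f)
    return relevant

# Fire-lighting attempts: it fails when raining, whether or not a dog is nearby.
episodes = [
    {"situation": {"rain", "dog nearby"}, "success": False},
    {"situation": {"rain"},               "success": False},
    {"situation": {"dog nearby"},         "success": True},
    {"situation": set(),                  "success": True},
]
```

Because the success rate changes only when "rain" is present, the dog is filtered out as coincidental - which is exactly the kind of statistical relevance judgement described above.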

With your case of walking past a tiger: I expect him to react to the tiger since, as I said, there will not be one monolithic system which is just reacting to the bunny. He will take notice of all information which enters his working memory. Of course, to make it somewhat realistic, sometimes he should be paying so much attention to something that he doesn't notice other things. So the agent will have a concept of what it is paying attention to, but at the same time there will be other processing going on.

I certainly don't expect him to have anywhere near human-level intelligence - that's why I said a prehistoric humanoid - and even then I'm sure a lot of the things he does will be stupid, and existing animals would do better. But that is part of this project: to see what works and what doesn't, and how far I can push things.

There are a lot of things from AI research which just never get anywhere near games (except for a few test ones made by academic researchers... the MIT Synthetic Characters Group did some interesting things, but that seems to have closed down now, or at least they haven't updated their website in a long time).

There are a few cases of academic research into using POMDPs in games, and this is one technique which I am going to be testing. How well will a POMDP fit in with the rest of the agent?
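For readers unfamiliar with POMDPs: the core idea is maintaining a belief (a probability distribution over hidden states) and updating it from noisy observations. A minimal Bayes-rule sketch of the belief update - the tiger example and all the numbers are invented for illustration:

```python
def belief_update(belief, observation, obs_model):
    """One POMDP-style belief revision step: Bayes' rule over hidden states,
    weighting each state's prior by how likely the observation is in it."""
    unnorm = {s: p * obs_model[s].get(observation, 0.0)
              for s, p in belief.items()}
    total = sum(unnorm.values())
    return {s: p / total for s, p in unnorm.items()}

# Hidden state: is a tiger nearby? The humanoid hears a roar.
belief = {"tiger near": 0.1, "no tiger": 0.9}
obs_model = {
    "tiger near": {"roar": 0.8,  "quiet": 0.2},
    "no tiger":   {"roar": 0.05, "quiet": 0.95},
}
belief = belief_update(belief, "roar", obs_model)
# belief["tiger near"] is now about 0.64: one roar makes a tiger far more plausible
```

A full POMDP adds actions, state transitions and reward-based planning on top of this, but the belief update is the part that lets an agent act sensibly on things it cannot directly see.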

[Edited by - MWr2 on November 3, 2005 10:22:51 AM]
Quote:
by Anonymous Poster

A very long project, as some of the areas are entire projects in themselves.

I wasn't thinking of a monolithic system, just the magnitude of the 'learned' data that you want accumulated via conversing with your AI.

If your particular input teaching system is a lesser requirement than getting the whole AI to work, you might consider that hand writing/editing the 'thinking' logic and automatic experience building (autonomous from you telling it everything) will be more efficient. I'm thinking of the AI doing its own exploration of combinations to find out what works (and building logic), but then YOU amend its rule sets to show where/what the additional complications are, which it will subsequently explore and refine. Of course this also hinges on you having a WORKING (fully detailed) world mechanism that it can test its abilities against (large subproject 1)...


I hadn't gone into a lot of detail about exactly what the humanoid would need to learn or how he would learn it (AI-wise), as I was talking about future plans for a game.

Part of my stage 1 is deciding what he should know and what he should learn later. As the game is a secondary aim, I am testing various ways and haven't ruled anything out. I think in some parts my ideas are quite close to your own (it is just that I had intended this thread to be about gameplay rather than AI, so I hadn't gone into all the details of what I was trying out for the AI).

The world/simulation is currently complete enough for what I have planned for stage 1; the biggest thing missing at the moment is that the actual animation is limited, as I'm not an animator or good at making 3D models. But support for animation is built into the engine, which is adapted from one I had made before for a different game. Later stages will of course need things added to allow more complexity.

I think maybe you are imagining more complexity in the simulated world for stage 1 than I am. I know any world is going to be complex but I am trying to limit it to start with. There will be limits on what can be done in the world.


Quote:


Again this sounds like a candidate for hand-written data (wizard assisted...) that will be more efficient than 'telling' it everything....
Spelling out/declaring all the relationships (IsA, HasA, UsedFor, etc...) is not trivial (the data grows as the square of the complexity). Unfortunately your modules will be more interdependent than you think (and more dependent upon the base simulation mechanism).



I do want to keep the starting groupings to a minimum so that players have more freedom to group things how they want, but again I am doing tests. The HasA relationships will be built in - all the characteristics will be built in, so the player won't have to go around teaching him those. It is just the IsA and UsedFor which I hope to keep to a minimum.

I never said that the modules would be independent; I know the modules depend on each other. I just meant that the way I am building the agent, I can work on them separately to an extent.


Quote:


The filtering mechanism itself is quite a task (large subproject 2): trying to decide which factors in a sequence of actions, and which situational factors (especially in a rich simulated environment), led to the result's success or failure (or partial result). An entire database of case filter logic is to be generalized from experience logs (generated by the learning sessions). Even if you have an automatic mechanism for this, you still have to create the training set of situations for the AI to interact with.



Yes, the filtering is a big task. So far for my prototype (which actually does a lot of what I have planned for stage one - I'd say somewhere around 70-80% - though there is still a lot of work left), for the filtering and learning I have adapted code from a project I did at work. The problem is that even though I wrote most of that base code, I could never release it, as the company I work for owns the copyright (they have no problem with me using it as long as I never release anything built on it). So I need to rewrite that whole process later on (and I plan to try a number of different techniques); this I see as being one of the biggest tasks.

I think this might be the biggest difference between our ideas: one of the main aims of my project is to see how much I can improve the filtering and learning so that he can learn the factors involved in an action/plan and its results, rather than being told what is important. I'm not saying he could learn all the factors without any starting knowledge or rules - that's one of the things I am looking at in the early stages, so that from the test cases he builds up statistical knowledge of what is likely to be important.


Quote:


Even as simple as most RPG game environments are, the complexity is still enough to explode the AI's required knowledge (and the CPU power needed for it to act in acceptable 'realtime' - you might add cluster programming to your project list).


Well, so far my prototype does run in realtime, though it could slow down too much later as the world gets much more complex. But I'm not too worried about speed: I have no release date in mind, it only has to run on my hardware, and I have access to a four-way dual-core Opteron system that I could use if I need to. Multi-threading is part of the design specs.


Quote:


Constant re-evaluation of a situation increases the magnitude of the CPU requirements. Your working memory (or the AI's world representation) will also have to keep track of things not just within view (e.g. a map of its world, locations of useful objects/resources, tracking of threats - a tiger nearby but out of sight is worse than one nearby in sight...).



This is what I meant by working memory: it is not just a set of sensory information but should keep track of any information which needs processing or is relevant. Information can come from various sources - not just sensors, but long-term memory and other parts of the agent. And if he learns that lions roar, then when he hears a roar, his working memory should include an association entry for the lion to go with the roar.
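That association mechanism might be sketched like so (a toy illustration; the structure and names are not from any real system): an entry added from one source can pull in associated entries from learned knowledge.

```python
class WorkingMemory:
    """Holds currently relevant information from any source (senses, long-term
    memory, other modules), expanding entries via learned associations."""
    def __init__(self, associations):
        self.associations = associations   # percept -> associated concept
        self.entries = []                  # (item, source) pairs

    def add(self, item, source):
        self.entries.append((item, source))
        linked = self.associations.get(item)
        if linked:
            # The percept drags its learned association in alongside it.
            self.entries.append((linked, "association"))

# The humanoid has learnt that lions roar; hearing a roar adds both the
# percept and the associated lion entry.
wm = WorkingMemory(associations={"roar": "lion"})
wm.add("roar", source="hearing")
```

Tagging each entry with its source also gives the attention and filtering machinery something to work with, e.g. suppressing low-priority sources while the agent is focused.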

Quote:

Consider having quantum levels of 'priority', where entire sets of goals are irrelevant when a much more important goal is active (the goal of maintaining the supply of firewood is a bit irrelevant when you are drowning in the river...). Motivations activate/deactivate/shift the priorities of goals depending on the current situation (and internal psychological factors/thresholds). A system for evaluating 'worth' (and thereby 'cost') under different situations is a significant task (large subproject 3).



Again, here I think our ideas are quite similar. There are levels of priority, and goals, plans and other rules can be deactivated/activated. There are also filters which can stop information from getting through to the working memory - just as a person has filters stopping sensory data from getting through to the conscious level.
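The quantum-priority idea could be sketched as follows (the levels and goal names are invented for the example): only goals at the highest occupied level are considered at all, so everything below the active survival goal is simply ignored.

```python
class GoalSystem:
    """Goals live at discrete priority levels; the presence of a goal at a
    higher level suppresses everything below it (firewood can wait while
    you are drowning)."""
    LEVELS = {"survival": 2, "comfort": 1, "maintenance": 0}

    def __init__(self):
        self.goals = []   # (name, level) pairs

    def add(self, name, level):
        self.goals.append((name, level))

    def active_goals(self):
        top = max(self.LEVELS[level] for _, level in self.goals)
        return [name for name, level in self.goals
                if self.LEVELS[level] == top]

gs = GoalSystem()
gs.add("keep firewood stocked", "maintenance")
gs.add("escape the river", "survival")
```

A motivation module would then move goals between levels (or add and remove them) as the situation changes, which gives the activate/deactivate behaviour described above without comparing every goal against every other.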


Quote:


Even a prehistoric humanoid is quite complex. Maybe you should start your 'first' stage with simulating a bunny or tiger in its environment.



I have done those types of simulations a number of times before and have no interest in doing another one. I am doing a project that will allow me to try some of the ideas I have. I'm not saying I will succeed in all my goals, but I have split the project up into stages of increasing complexity. Again, I think maybe you have got the idea that I plan for more complexity in stage one than I do. I'm not underestimating the complexity, but I am limiting it.



Quote:


My idea for doing this same kind of project is to do a lot of the 'coding' of the logic manually (assisted by wizards) for the basics, and to have good data visualization tools overlaying the 3D presentation. The training would be done in an 'Arena' where the learning object would be dropped into a staged environment (requiring tools to easily set up such situations...) and the user would monitor/trace the AI's use of its logic, find the deficiencies, and thus be able to efficiently edit/expand the logic.
The learning object would automatically detect gaps in its logic (factors it has no logic for) and request clarification. Later, more complicated environments (from the world simulation) would be presented and the same user-guided learning would be carried out. Specialization by roles would create divergent behavior sets (a hierarchy of 'skills' could be identified and built up - skill sets can be generalized and reused by multiple roles).


Again, I think our ideas are more alike than it originally seems. While I'm not using wizards as such, I am using a type of "Arena" with the prototype, in that I limit what is on the island (and the size of the island). I place the humanoid in different situations to see how he reacts and to teach him new things. So in one test there might be one tree, a stone, a rabbit, a tiger and maybe a couple of other objects. I also have it set up so I can visualise the inside of the agent and insert new information directly.

My stage one is basically a series of these "Arenas", with them getting more complex later on. The game (as described in the earlier posts) would come much later; I just wanted to get some ideas about what sort of input systems people thought might work, plus any other ideas people had.

[Edited by - MWr2 on November 4, 2005 1:24:29 PM]

This topic is closed to new replies.
