Goal Oriented Action Planning. Small video.
Well, I FINALLY managed to put together a video of some AI agents using my planning algorithm to solve problems. They also use a blackboard model to coordinate as a team.
In the scenario I knocked up, three agents (an assassin, a dog and an engineer) are given the goal of killing a target.
Essentially, action planning is used to generate a sequence of actions that will result in the completion of this goal. My implementation is very similar to the one described by Jeff Orkin and used in F.E.A.R. a while back. Actions are defined in terms of preconditions and effects, and the algorithm calculates a valid sequence that satisfies some goal world state. This is an interesting alternative to having hardcoded logic pre-defined within an FSM!
The blackboard model helps deal with problems as they arise and coordinates the efforts of agents with different action sets.
Any thoughts welcome!
(Note: I know it's not great graphically! Lots of placeholders still... so mainly AI-related comments if poss!)
Thanks!
High-quality video here:
http://rapidshare.com/files/214352974/NEWFinalWithMusic.wmv
EDIT (by request!) - YouTube vid here:
(NOTE: quality is poorer and you can't really read the text too well! Make sure you set YouTube to play back vids in high quality; that sorts the problem somewhat.)
It would be great if you could upload it to youtube.com so it is easy for everyone to see it.
As a side note I find the name Jeff Orkin to be pretty amusing :)
I'll give it a try!
Only problem is the video size :P It gets hard to see some of the text when the quality gets bumped down. (EDIT: uploading now... actually I may have been completely wrong about the quality and downsizing! It's been ages since I've used YouTube...)
I spent all day fighting with Windows Movie Maker on that one haha.
I'll post a YouTube link in a sec though if the quality is still OK!
Lol yeah, Jeff Orkin is a bit of an odd name... Conjures up the image of a large green bloke with a club screaming at his AI to do 'wots he dam well tellz 'em'.
EDIT: updated the OP to add the YouTube link!
Quality is a bit poorer though, can't really read the text! (I'll see what I can do with that!)
[Edited by - NovaBlack on March 28, 2009 3:50:53 AM]
I have absolutely zero experience with AI but I just wanted to post to say that looks like a job well done! Really easy to understand and view, even for someone who knows nothing about it.
Great stuff!
Edit: Oh and for what it's worth, I really liked the graphics, too. Conveys what people need to know to understand the scenario.
Nice video, the choice of adding music to it was pretty smart. I like the interface that displays the actions.
When the guy couldn't reach the gun, did he ask the other guy to get it for him?
How are the actions composed?
Great video! Wish I had students like you when I was a T.A.
That makes me want to start doing hobby AI again...
Wow thank you for the kind responses!
I've been pulling my hair out worrying about this project (going to submit a more complete version, with a few other scenarios, as part of my final-year uni project!) and didn't have a clue what people's responses were going to be!
REALLY pleased that at least nobody thought it was completely awful! I get very engrossed in my code, and seem to focus HEAVILY on all the things that I want to go back and code better but never have time to, which means I'm never happy with it!
@ SeymourClearly
Quote:
"I have absolutely zero experience with AI but I just wanted to post to say that looks like a job well done! Really easy to understand and view, even for someone who knows nothing about it.
Great stuff!
Edit: Oh and for what it's worth, I really liked the graphics, too. Conveys what people need to know to understand the scenario."
THX! One of the hardest parts of the whole thing has been how to actually convey what's going on to somebody watching! (In a real game-dev situation I'd have some lovely audio files, but I can't use anything copyrighted at the moment, and as for using my voice to record some... well, let's just say I could never work on the radio :P). It's given me hours of frustration. Glad you liked it and that it didn't pose too many confusing issues.
@ owl
Quote:
"Nice video, the choice of adding music to it was pretty smart. I like the interface that displays the actions.
When the guy couldn't reach the gun, did he ask the other guy to get it for him?
How are the actions composed?"
Thx again! The interface is still being polished to make it even clearer, but I'm REALLY glad that even at this preliminary stage it was understandable!
The issue with the gun was actually solved using a blackboard architecture. Essentially, to explain the general architecture in a nutshell, I have a squad manager with a special 'blackboard' work area. All the squad members (engineer / dog / assassin) can 'see' the blackboard and have limited access to 'write' information on it. So, for example, when the whole mission begins I just jam a kill-target mission object up on the blackboard for everyone to have a look at. Each agent goes away and independently has a think (using the planning algorithm) to see if they can string together a sequence of actions to solve the problem using only the actions they understand. They then write up on the mission on the blackboard whether they think they can solve it. The squad manager can then elect an applicant (from the pool of agents who say they want to accept the mission).
The gun problem just extends this idea slightly. Essentially, when the assassin encounters a problem, he alters the status flag for his mission on the blackboard to show that his mission is currently halted because there is something he cannot solve (the agent flags this with the appropriate flag for his current problem; in this case not being able to reach the gun, so a 'kCannotPathToEssentialItem' problem flag is used). The squad manager sees this during its update and must decide how best to react in the best interests of the squad as a whole. In this case it knows that a kCannotPathToEssentialItem problem can be solved by placing a SUB-mission on the blackboard with the goal kGoal_GetAndGiveItem. Each agent sees this new mission, goes away and writes up whether it has a plan to solve it. In this case the DOG (with its ability to traverse DIG edges) is the only agent that reports back 'Hey, I can do this!'. The squad manager will then allow the dog to carry out the mission, solving the assassin's problem.
This is the same system used to solve all the problems, really! The squad manager and blackboard coordinate at a high level, posting new missions and deciding who the best man for the job is when multiple agents can solve a problem. The agents worry about the low-level details (e.g. dig here, pick up wrench, repair object, etc.).
Lol, phew. Hope that long-winded explanation helped. In a roundabout way I think the answer to the original question was no, but kind of yes. Lol. It actually all happens so fast that I COULD have the assassin shout an order based on who accepted his request for help, e.g. 'Fetch boy, fetch the gun!', and it would appear as though the assassin had actually told the dog what to do.
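If it helps to see the shape of it, here's a rough C++ sketch of that bidding / sub-mission loop. This is just an illustration with made-up names (Mission, Blackboard, Agent::CanPlanFor, SquadManager::Update are simplified stand-ins, not my actual classes):

// Rough sketch of the blackboard bidding idea (simplified, illustrative names only).
#include <string>
#include <vector>

enum MissionStatus { kMission_Open, kMission_Assigned, kMission_Halted };

struct Mission
{
    std::string      goal;        // e.g. "kGoal_KillTarget", "kGoal_GetAndGiveItem"
    MissionStatus    status;
    std::string      problemFlag; // e.g. "kCannotPathToEssentialItem" when halted
    std::vector<int> applicants;  // ids of agents that reported "I can plan for this"

    explicit Mission(const std::string& g) : goal(g), status(kMission_Open) {}
};

// Shared work area that every squad member can 'see' and (to a limited extent) 'write' to.
struct Blackboard
{
    std::vector<Mission> missions;
    void PostMission(const std::string& goal) { missions.push_back(Mission(goal)); }
};

struct Agent
{
    int id;
    explicit Agent(int agentId) : id(agentId) {}

    // Stand-in for the real planner: can this agent's action set reach the goal?
    bool CanPlanFor(const std::string& /*goal*/) const { return true; }

    // Each agent independently plans and writes itself up as an applicant.
    void ConsiderMissions(Blackboard& bb) const
    {
        for (size_t i = 0; i < bb.missions.size(); ++i)
        {
            Mission& m = bb.missions[i];
            if (m.status == kMission_Open && CanPlanFor(m.goal))
                m.applicants.push_back(id);
        }
    }
};

struct SquadManager
{
    void Update(Blackboard& bb)
    {
        std::vector<std::string> subGoals;

        for (size_t i = 0; i < bb.missions.size(); ++i)
        {
            Mission& m = bb.missions[i];

            // Elect one applicant for an open mission (first come, first served here;
            // the real version can pick the best man for the job).
            if (m.status == kMission_Open && !m.applicants.empty())
                m.status = kMission_Assigned;

            // A halted mission with a known problem flag spawns a sub-mission
            // that some other agent (e.g. the dog) may be able to plan for.
            if (m.status == kMission_Halted && m.problemFlag == "kCannotPathToEssentialItem")
            {
                subGoals.push_back("kGoal_GetAndGiveItem");
                m.problemFlag.clear();
            }
        }

        // Post sub-missions after the loop so we don't touch the mission list while iterating.
        for (size_t i = 0; i < subGoals.size(); ++i)
            bb.PostMission(subGoals[i]);
    }
};

In the real thing the statuses, problem flags and election logic are obviously richer than this, but that's the skeleton.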
With regard to the actions, they are all currently parsed from a Lua script at load time, with each action defined in terms of preconditions and effects. Each precondition and effect is simply a world property, which is just a key-value pair: the key is a unique ID for some property in the world, and the value is its current value.
For example I define some properties that exist in the world:
AddPropertyToMap("B","kTargetIsDead",keyMap); // 'B' indicating boolean AddPropertyToMap("B","kWeaponIsLoaded",keyMap); AddPropertyToMap("B","kHaveWeapon",keyMap);
And then an action that is dependent upon them:
kRangedAttack = Action("kRangedAttack");

-- Preconditions
kRangedAttack:AddPrecondition("kHaveWeapon", true);     -- can also negate preconditions, e.g. AddPrecondition_NOT_("kHaveWeapon", true)
kRangedAttack:AddPrecondition("kWeaponIsLoaded", true);
kRangedAttack:AddPrecondition("kTargetIsDead", false);

-- Effects
kRangedAttack:AddEffect("kTargetIsDead", true);

-- Cost: the 'cheapest' action is used when multiple actions can achieve the desired effect,
-- e.g. melee attack and ranged attack both have the effect ("kTargetIsDead", true),
-- but ranged attack is given the lower cost since it is preferred.
kRangedAttack:SetCost(1);

-- Register
RegisterAction(kRangedAttack);
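On the engine side you can think of the parsed properties as just a map from key to value; a goal (or a set of preconditions) is 'satisfied' when every property it names matches the world state. Something roughly like this (a simplified sketch with booleans only, not my actual classes; the real key map also tags each property with a type, hence the 'B'):

// Simplified sketch: a world state as a map of property keys to (boolean) values.
#include <map>
#include <string>

typedef std::map<std::string, bool> WorldState; // e.g. "kTargetIsDead" -> false

// A set of required properties (a goal, or an action's preconditions) is satisfied
// when every property it names has the same value in the given state.
bool IsSatisfied(const WorldState& required, const WorldState& state)
{
    for (WorldState::const_iterator it = required.begin(); it != required.end(); ++it)
    {
        WorldState::const_iterator found = state.find(it->first);
        if (found == state.end() || found->second != it->second)
            return false;
    }
    return true;
}

// Applying an action's effects just overwrites the properties it names.
void ApplyEffects(const WorldState& effects, WorldState& state)
{
    for (WorldState::const_iterator it = effects.begin(); it != effects.end(); ++it)
        state[it->first] = it->second;
}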
The main planning algorithm (again, in a brief nutshell!) uses a regressive A* search, starting with the goal world state (e.g. target dead) and linking back to the current world state (i.e. the state of the world the agent is currently in, e.g. no weapon, no ammo, target alive, etc.). The sequence of actions that successfully links between them (i.e. the route) is the plan. Shares lots with pathfinding!
Each node is another potential world state, and each edge is the action that allows traversal between them (beware the butterfly effect though! Each action changes certain properties in the agent's world representation, and yes, this does mean that the number of different world states can get BIG QUICKLY... think about how many possible variations there are, especially with infinite but technically valid action sequences like pick up weapon, drop weapon, pick up weapon, drop weapon, etc.). The regressive nature of the search reduces the combinatorial explosion somewhat, as do duplicate penalties and a host of other things.
As with A* for pathfinding, I also use cost-based heuristics analogous to 'distance travelled so far' (this is where action costs come in: the sum of the costs of all actions taken on a route so far) and 'distance remaining to goal' (a heuristic based on the number of properties still not satisfied between the goal and the current state; this essentially directs the search down routes which appear to be doing something meaningful!).
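In code those two terms boil down to a running action-cost total and a count of mismatched properties. A rough sketch (again simplified and illustrative, not my actual planner; PlannerNode, CountUnsatisfied and NodePriority are made-up names, with WorldState as in the sketch above):

#include <map>
#include <string>
#include <vector>

typedef std::map<std::string, bool> WorldState;

struct PlannerNode
{
    WorldState               unsatisfied;  // properties this route still needs satisfied (regressive search)
    std::vector<std::string> actionsSoFar; // the partial plan that led to this node
    float                    costSoFar;    // g: sum of the costs of the actions taken so far
};

// h: how many required properties the agent's current world state still fails to satisfy.
// Cheap to compute, and it steers the search down routes that appear to be doing something meaningful.
int CountUnsatisfied(const WorldState& required, const WorldState& current)
{
    int count = 0;
    for (WorldState::const_iterator it = required.begin(); it != required.end(); ++it)
    {
        WorldState::const_iterator found = current.find(it->first);
        if (found == current.end() || found->second != it->second)
            ++count;
    }
    return count;
}

// f = g + h, exactly as in A* pathfinding: expand the open node with the lowest f first.
float NodePriority(const PlannerNode& node, const WorldState& currentWorld)
{
    return node.costSoFar + static_cast<float>(CountUnsatisfied(node.unsatisfied, currentWorld));
}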
Phew... apologies, I get mildly excited by this stuff and don't stop typing for a while!
If anyone has any more questions / things they would like clarified, I'd be more than happy to (try to) explain!
@ Steadtler
Quote:
"Great video! Wish I had students like you when I was a T.A.
That makes me want to start doing hobby AI again..."
THX for the comment! (REALLY appreciated!!) I've pretty much been working like 15 hours a day for the last 6 months on a multitude of projects to help me get a job in the industry, and the stress + tiredness I'm experiencing at the moment is unbelievable! It genuinely helps keep the motivation flowing to hear nice comments!
[Edited by - NovaBlack on March 29, 2009 7:29:40 AM]
Really impressive.
Are you using some 3D engine for the graphics?
Cheers!
I spent a bit of time building my own small engine over the last 9 months! It doesn't really do anything too special at the moment, but gets the basics done for me nice and simply. Levels are built using a 3D C# level editor, along with any nav data, and saved out into XML. These can then be parsed by the EntityManager and NavGraphManager in my engine. Makes changes a HELL of a lot simpler than manually moving things about :P. Rendering uses a basic DirectX 9 renderer I coded up. Nothing too fancy at the moment, but I literally have spots in the framework to plug in render methods (new HLSL vertex/pixel shaders etc.) and to implement post-processing if I ever get any time!
Lol... I NEVER get time!
Very nice work, Nova ... And thank you for the detailed walkthroughs on how both the blackboarding and decision-making processes work.
Much appreciated,
-Matt