AI in RPG
I am building an online RPG with a battle system, and I want to apply AI technology to it. Here is the battle system's environment:
1. 1–5 characters and 1–5 enemies.
2. Each enemy knows the state of the other enemies.
3. Characters and enemies can use magic.
4. I want the enemies to be self-learning, based on battle outcomes.
5. The battle mode is turn-based: actions happen one at a time.
Here are my questions:
1. My project uses object-oriented techniques, so I want everything to be an object. To implement AI in the battle system, I think I need to build an AI engine with sensing, thinking, and action stages. But I don't know whether the AI engine should itself be a class/object, and when a battle starts, whether each enemy should create its own AI engine object or whether one AI engine object should create the enemy objects. In short, I'm confused about whether an enemy contains an AI object, or the AI engine contains the enemy objects.
2. I'm also confused about how to implement the sensing and thinking parts. My idea is that each AI/enemy object would create a temporary text file recording each character's hit points during the battle, for use in self-learning. Is that a reasonable approach?
3. For the thinking part, suppose I write rules or conditions for the enemy's decisions and assign a weight to each rule for analysis. Is that right? If so, it will contain many rules or conditions. Is that really AI?
4. Can an RPG use fuzzy logic, neural nets, or FSM technology? Since it isn't an action game, do those techniques still apply?
5. I have read several web sites about AI and I (may) understand the concepts of fuzzy logic, neural nets, and FSMs, but only the concepts. I don't know how to actually implement them in an RPG.
6. Can you give some examples to make this easier to understand? I don't know how to organize all of this.
Those are my questions. If I've got anything wrong, please tell me. Thanks, everyone.
>_<
Well,
If you have a battlefield with a number of actors that are supposed to react according to their inputs, I suggest you use a NN and evolve the weights with a GA (BTW: you can *always* implement an algorithm using OO techniques).
Each of your soldiers/characters would have a tiny brain with the following inputs:
location of enemy, location of incoming attack, own position, weapons at hand, weapons of enemy, etc.
Your outputs would be:
walk in some direction, fire, flee, duck, etc.
Let your characters battle for some time and let them evolve until you think their actions are correct.
Select the best *brain* and voilà: AI in your RPG.
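A minimal sketch of this NN-plus-GA idea, assuming a single-layer "brain" and a toy fitness function (the inputs, action names, population size, and mutation rate here are all illustrative, not anything from Edo's post):

```python
import random
import math

ACTIONS = ["attack", "cast_spell", "defend", "flee"]

def brain_output(weights, inputs):
    # Single-layer "brain": one weighted sum per action, squashed with tanh;
    # the action with the highest score is chosen.
    n_in = len(inputs)
    n_out = len(weights) // n_in
    scores = [math.tanh(sum(weights[a * n_in + i] * inputs[i]
                            for i in range(n_in)))
              for a in range(n_out)]
    return scores.index(max(scores))

def evolve(fitness_fn, n_inputs, generations=50, pop_size=20):
    # Plain GA: keep the fitter half, refill with mutated copies of survivors.
    genome_len = n_inputs * len(ACTIONS)
    pop = [[random.uniform(-1, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness_fn, reverse=True)
        survivors = pop[:pop_size // 2]
        pop = survivors + [[w + random.gauss(0, 0.1)
                            for w in random.choice(survivors)]
                           for _ in range(pop_size - len(survivors))]
    return max(pop, key=fitness_fn)

# Toy fitness: reward brains that defend when their own HP is low.
def fitness(weights):
    low_hp_inputs = [0.1, 0.9]  # own HP fraction, enemy HP fraction
    return 1.0 if ACTIONS[brain_output(weights, low_hp_inputs)] == "defend" else 0.0

best = evolve(fitness, n_inputs=2)
```

Here the evolved brain only sees two numbers; a real battle brain would take the kind of inputs listed above (positions, weapons, each combatant's HP) and a real fitness function would score whole simulated battles.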
Edo
quote:
Original post by edotorpedo
Well,
If you have a battlefield with a number of actors that are supposed to react according to their inputs, I suggest you use a NN and evolve the weights with a GA (BTW: you can *always* implement an algorithm using OO techniques).
Each of your soldiers/characters would have a tiny brain with the following inputs:
location of enemy, location of bullet, own position, weapons at hand, weapons of enemy etc
An ANN in this situation provides a mapping from input space to output space. You would need to train the agents on every conceivable environment to ensure that they performed as intended during the game. Given that, you might as well sit down and write hard-and-fast production rules.
There is a better approach, and that would be to use a Decision Network... and before you ask, I suggest you search the web for the literature. Shachter & Peot wrote an early paper on inference in decision networks; that's a good starting point (if a bit technical).
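To give a taste of what a decision network buys you, here is a minimal expected-utility sketch (the actions, probabilities, and utilities below are made-up illustrations, not from any paper): the monster picks the action that maximizes the probability-weighted utility of its possible outcomes.

```python
# The monster chooses among three actions; each action changes the chance
# that the player's next hit lands (the "outcome" node).
OUTCOME_PROBS = {
    "attack": {"take_hit": 0.8, "avoid_hit": 0.2},
    "defend": {"take_hit": 0.3, "avoid_hit": 0.7},
    "heal":   {"take_hit": 0.8, "avoid_hit": 0.2},
}

# Utility of each (action, outcome) pair: rough damage dealt minus taken.
UTILITY = {
    ("attack", "take_hit"): 2.0,  ("attack", "avoid_hit"): 5.0,
    ("defend", "take_hit"): -1.0, ("defend", "avoid_hit"): 1.0,
    ("heal",   "take_hit"): 0.0,  ("heal",   "avoid_hit"): 3.0,
}

def expected_utility(action):
    # Sum over outcomes of P(outcome | action) * utility(action, outcome).
    return sum(p * UTILITY[(action, outcome)]
               for outcome, p in OUTCOME_PROBS[action].items())

def best_action():
    return max(OUTCOME_PROBS, key=expected_utility)
```

A full decision network would compute the outcome probabilities by inference over a belief network instead of hard-coding them, but the action-selection step is exactly this expected-utility maximization.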
Timkin
For learning, during battle you could keep a reference count of what types of attack the player has used.
e.g.:
The player attacks; the variable attack++;
The player attacks; the variable attack++;
The player attacks; the variable attack++;
Now it's at 3, so it's likely the player will use that attack again; have the monster do something so the player can't use it, or defend against it.
Equally, once the player has used the same attack repeatedly, he may switch to something like magic, so react according to that.
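The counter idea above could be sketched like this (the attack-type names and the threshold of 3 are placeholders):

```python
from collections import Counter

class MonsterMemory:
    """Tracks how often the player uses each attack type during a battle."""

    def __init__(self, threshold=3):
        self.counts = Counter()
        self.threshold = threshold

    def observe(self, attack_type):
        # Call this every time the player acts; this is the attack++ step.
        self.counts[attack_type] += 1

    def predicted_attack(self):
        # Once any attack has been seen `threshold` times, assume the player
        # will use it again; the monster can then counter or defend against it.
        attack, count = max(self.counts.items(),
                            key=lambda kv: kv[1],
                            default=(None, 0))
        return attack if count >= self.threshold else None
```

On the monster's turn it would call `predicted_attack()` and, if that returns an attack type, pick a counter-move; `None` means no pattern has emerged yet.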
A genetic-type system might be suitable here too...
If each "gene" is a script of sorts that controls how an AI unit behaves, then each new enemy can inherit genes from the remaining ones.
This assumes your enemies respawn at a decent interval.
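A tiny sketch of that inheritance step, assuming a genome of named behavior knobs (the gene names and mutation rate are invented for illustration):

```python
import random

# One "gene" per behavior knob; a genome is the "script" driving one enemy.
GENE_KEYS = ["aggression", "magic_bias", "flee_hp_fraction"]

def random_genome():
    # First-generation enemies start with random behavior.
    return {key: random.random() for key in GENE_KEYS}

def inherit(surviving_parents, mutation_rate=0.1):
    # A respawning enemy copies each gene from a random surviving parent,
    # with a small chance of mutation (clamped back into [0, 1]).
    child = {}
    for key in GENE_KEYS:
        value = random.choice(surviving_parents)[key]
        if random.random() < mutation_rate:
            value = min(1.0, max(0.0, value + random.gauss(0.0, 0.2)))
        child[key] = value
    return child
```

Because only enemies that survived long enough get to be parents, the population drifts toward whatever behavior the player is worst at countering, which is the selection pressure this post is relying on.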
After careful deliberation, I have come to the conclusion that Nazrix is not cool. I am sorry for any inconvenience my previous mistake may have caused. We now return you to the original programming.
quote:
Original post by Nurgle
A genetic-type system might be suitable here too...
If each "gene" is a script of sorts which controls how the AI unit behaves, and each new enemy inherits genes from the remaining ones.
This assumes your enemies respawn at a decent interval.
At a guess, GAs won't evolve interesting behavior quickly enough without extensive offline training by the developer prior to the game's release. It takes a long, long time for evolution to work (just look around).
I agree with the other posters that NNs, decision trees, or FSMs are the better way to go here. I like the utility of GAs, but I think they're just too slow for this kind of environment.
Ferretman
ferretman@gameai.com
www.gameai.com
From the High, Cold, Snowy Mountains of Colorado
February 19, 2002 09:34 PM
It's quite possible though, and it provides a nice flashy thing to write on the back of your box.
I reckon that, given appropriate feedback functions (written by someone who knows the ins and outs of the game), you could boost the GA's evolution speed enough to make it viable.

I don't see what is wrong with extensive offline training before release... then the game ships with a halfway-decent AI already built in, and it gets better as the player plays. This would add to replayability, since you might "beat" the game, then try again and find it harder (and better adapted to the player's particular style of play).
--- krez (krezisback@aol.com)
I actually have an answer to this problem, though it may be a little more complex than people want to implement in a game, and its success will depend on how much grunt is at your disposal.
The problem with using artificial neural networks as pattern classifiers is that learning is based on batch methods and that classification is only possible for test data that lies within the domain of the training data. If new data is presented that lies outside the training domain, then while the network can provide its best guess at a classification, it cannot adapt to correctly classify the data without complete retraining.
This is the situation faced when training networks for games such as a combat RTS. Training needs to be performed during development and then after each battle. While this lets the network learn from its experiences, it doesn't help at all if the next battle is completely different from any seen before: the network will not be able to cope with the battle until after it is over and has been analysed.
The answer, then, is a sequential training method. Batch methods are based on the offline, iterative training of a classifier (or function approximator) given a complete training set, whereas sequential methods are online, providing an estimate of the classification/function value as well as the model parameters each time new observations are made (i.e. each time data is presented to the network).
While the computational cost is higher than pure state estimation, the results are quite impressive and worth investigating. Applying this technique to learning ANNs for use in games would mean that in addition to base learning during development, the network would learn continuously during game play, rather than AFTER the player stops playing!
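To make the batch-versus-sequential distinction concrete, here is a toy online learner (a one-weight linear model, invented for illustration and not from the thesis): the model refines its parameter after every single observation, rather than being retrained on a complete data set.

```python
# The "world" follows the target relation y = 2x. The learner never sees
# the whole data set at once; it adjusts its weight after each observation.
def online_update(w, x, y, lr=0.1):
    # One stochastic-gradient step on the squared error (y - w*x)^2.
    return w + lr * (y - w * x) * x

w = 0.0
for x in [0.5, 1.0, 1.5, 2.0] * 50:  # a stream of 200 observations
    w = online_update(w, x, 2.0 * x)
# w has converged very close to 2.0, and would keep adapting mid-stream
# if the target relation changed, which is the point of sequential methods.
```

An ANN trained this way updates its weights during the battle, observation by observation, instead of waiting for a post-battle retraining pass.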
If you'd like more information on sequential methods, check out Alex Nelson's PhD thesis (available online). It has a good literature review covering the major work in the field and is a good starting point.
Cheers,
Timkin