
Applications for AI

Started by April 19, 2001 05:31 AM
15 comments, last by Scott 23 years, 9 months ago
quote:
Original post by liquiddark

Efficiency is also a factor.

Plus, look at Galapagos. Used ANNs, but NOT to good effect, unfortunately.

On another note, I'm wondering whether it would be worthwhile to develop a JRobot using GA and/or ANN techniques, especially for speed estimation... thoughts welcome.

ld


What's a JRobot? What other problems (decisions) do you want the GA or ANN to solve? That would seem important to know. Navigation? Power replenishment sources? Target acquisition?

Eric
quote:
Original post by Geta

What's a JRobot?



A JRobot is an AI written in Java, up to 10k in size (which makes it a bit harder, of course), that fights singly, doubly, and in teams of 8 against other JRobots. The robot has these restrictions:

1) It can fire once per second, in any direction. Missiles, once launched, fly at a constant velocity that is NOT relative to the robot's speed (i.e., the JRobot's movement does not affect the missile's movement).
2) It can move at one of a range of speeds up to one tenth the missiles' maximum speed, and it cannot turn at speeds above one half the maximum speed.
3) It has limited acceleration.
4) Its only sensor is a "scanner" that can be pointed in any direction, with a width from 1 to 21 degrees; it does not seem to pick up missiles.

and, as a coding restriction:

5) No OOP techniques.
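Rule 1 is worth pinning down, since it drives both dodging and speed estimation: a missile's velocity is fixed in world coordinates at launch. A minimal sketch of that kinematic model (class and field names are mine, not the JRobots API):

```java
// Sketch of the missile kinematics implied by rule 1. Names are
// hypothetical; the real JRobots API differs. Once launched, a
// missile's velocity is fixed in the world frame -- the robot's
// own motion is never added to it.
class Missile {
    double x, y;          // world position
    final double vx, vy;  // constant world-frame velocity

    Missile(double x, double y, double headingDeg, double speed) {
        this.x = x;
        this.y = y;
        double h = Math.toRadians(headingDeg);
        // Note: the launching robot's velocity is NOT added here.
        this.vx = speed * Math.cos(h);
        this.vy = speed * Math.sin(h);
    }

    void step(double dt) {
        x += vx * dt;
        y += vy * dt;
    }
}
```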

quote:

What other problems (decisions) do you want the GA or ANN to solve? That would seem important to know. Navigation? Power replenishment sources? Target acquisition?



Core Problems:
1) Navigation inside a closed rectangle
2) Target Acquisition (of which speed estimation is a significant part)
3) Dodging incoming fire

I'm thinking that good flocking code could probably handle 1) and 3) together, and my current solution to target acquisition (recent-history-based) should eventually give some results, but it would be interesting to see what a GA/ANN-based solution would make of the problem.

A core problem, however, is that the program is an applet, which makes it difficult to automate the fitness evaluation/breeding process. I suppose I could try reproducing the arena and game classes, but from my reading in A-Life I get the impression that changes in the environment have a significant impact on the effectiveness of bred solutions. That raises the question of how effective encapsulating the breeding ground has been for people.
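For what it's worth, the offline breeding loop under discussion could be sketched like this. Everything here is hypothetical; in a real setup the fitness function would run simulated bouts in a reproduced arena rather than score a toy target vector:

```java
import java.util.Arrays;
import java.util.Random;

// Minimal GA loop sketch for evolving robot parameters offline.
// All names are hypothetical; the toy fitness function stands in
// for "damage dealt minus damage taken" over simulated bouts.
class SimpleGA {
    static final Random RNG = new Random(42); // seeded for reproducibility

    // Hypothetical fitness: negative squared distance to a target
    // parameter vector. Higher is better.
    static double fitness(double[] genome, double[] target) {
        double err = 0;
        for (int i = 0; i < genome.length; i++) {
            double d = genome[i] - target[i];
            err += d * d;
        }
        return -err;
    }

    static double[] evolve(double[] target, int popSize, int gens) {
        int n = target.length;
        double[][] pop = new double[popSize][n];
        for (double[] g : pop) {
            for (int i = 0; i < n; i++) g[i] = RNG.nextDouble();
        }
        for (int gen = 0; gen < gens; gen++) {
            // Sort best-first by fitness.
            Arrays.sort(pop, (a, b) ->
                Double.compare(fitness(b, target), fitness(a, target)));
            // Keep the top half; refill the rest with mutated crossovers.
            for (int k = popSize / 2; k < popSize; k++) {
                double[] p1 = pop[RNG.nextInt(popSize / 2)];
                double[] p2 = pop[RNG.nextInt(popSize / 2)];
                for (int i = 0; i < n; i++) {
                    pop[k][i] = (RNG.nextBoolean() ? p1[i] : p2[i])
                              + RNG.nextGaussian() * 0.05; // mutation
                }
            }
        }
        Arrays.sort(pop, (a, b) ->
            Double.compare(fitness(b, target), fitness(a, target)));
        return pop[0]; // best genome found
    }
}
```

The expensive part in practice is exactly what the post worries about: each `fitness` call would need a full headless re-implementation of the arena, and any mismatch between that copy and the live environment can undo the bred behaviour.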

Thanks,
ld
No Excuses
quote:
Original post by Geta
So, I guess this raises the question the original poster implied ... why haven't GAs and ANNs been used much in computer game AI?

Eric



For one very simple reason: they're not very predictable with regard to what you're going to get at the end. Having a truly unpredictable AI means you can't depend on there being two guards at the bottom of the steps waiting for the player, since they may have wandered off to play volleyball or something. Most producers *really* don't like *that* level of unpredictability, though personally I think it would make FPS-style games a heck of a lot more interesting.





Ferretman

ferretman@gameai.com
www.gameai.com

From the High Mountains of Colorado


While you might not use NNs in pathfinding for planning-level behaviour, I'd be surprised if reactive, local-scale obstacle avoidance weren't perfectly suited to them. Your planning layer gives you an attractor to head towards (your next node point), and the job of avoiding local obstacles, such as other critters in the simulated world whose behaviours are unpredictable, is carried out by a small network with local 'visual' inputs about objects in its immediate vicinity. I've programmed a neural network, evolved by a GA, to do the opposite: rushing around a world eating as much food as possible. By making other creatures repulsors rather than attractors, the behaviour would switch from food eating to obstacle avoidance. The network uses no hidden neurons: just 10 inputs (the number of pieces of food in each area of the world) and 2 outputs (turn and forward movement). It's also not continuous-time, so there's no worry about those extra divides in the code.

The important factors are that the damn thing _looks_ like it's intelligent and doesn't use up much processing power. Also, like any AI aspect of a game, you can keep using the current outputs for several frames of action before rerunning the network with new inputs.
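A sketch of the kind of network Mike describes, assuming a plain weighted sum with a tanh squash; the actual activation function and weight values in his creature aren't given, so these are illustrative:

```java
// Single-layer network sketch: 10 sensor inputs feed 2 outputs
// (turn, forward) directly, with no hidden neurons, so each
// output is just a weighted sum plus bias. In the setup described
// above, a GA would evolve the weights; here they are supplied.
class TinyNet {
    final double[][] w;   // [2 outputs][10 inputs]
    final double[] bias;  // [2]

    TinyNet(double[][] w, double[] bias) {
        this.w = w;
        this.bias = bias;
    }

    // inputs[i] = count of food items seen in sector i of the world
    double[] run(double[] inputs) {
        double[] out = new double[2];
        for (int o = 0; o < 2; o++) {
            double sum = bias[o];
            for (int i = 0; i < inputs.length; i++)
                sum += w[o][i] * inputs[i];
            out[o] = Math.tanh(sum); // squash to (-1, 1)
        }
        return out; // out[0] = turn, out[1] = forward movement
    }
}
```

Flipping the sign of the weights is all it takes to turn an attractor into a repulsor, which is the food-eating-to-obstacle-avoidance switch described above.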

Mike


quote:
Original post by liquiddark

Core Problems:
1) Navigation inside a closed rectangle
2) Target Acquisition (of which speed estimation is a significant part)
3) Dodging incoming fire

I'm thinking that good flocking code could probably handle 1) and 3) together, and my current solution to target acquisition (recent-history-based) should eventually give some results, but it would be interesting to see what a GA/ANN-based solution would make of the problem.




As you probably already know, a subset of flocking is steering behaviours (http://www.red3d.com/cwr/steer/), which may be more appropriate for 1) and 3) above than flocking might be. I tested steering behaviours for a wrestling game and was quite pleased with the results.
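For anyone unfamiliar with the reference, the simplest of those steering behaviours, "seek", can be sketched in a few lines. The names, and the omission of force clamping and mass, are simplifications of Reynolds' model:

```java
// Minimal Reynolds-style "seek" steering sketch.
// A real agent would clamp the returned force to a maximum and
// integrate it into its velocity each frame.
class Seek {
    static double[] steer(double px, double py,   // agent position
                          double vx, double vy,   // agent velocity
                          double tx, double ty,   // target position
                          double maxSpeed) {
        double dx = tx - px, dy = ty - py;
        double len = Math.hypot(dx, dy);
        if (len == 0) return new double[] {-vx, -vy}; // at target: brake
        // Desired velocity points straight at the target at full speed.
        double dvx = dx / len * maxSpeed;
        double dvy = dy / len * maxSpeed;
        // Steering force = desired velocity minus current velocity.
        return new double[] {dvx - vx, dvy - vy};
    }
}
```

Swapping the target for the nearest wall or incoming threat and negating the force gives the matching "flee", which is why the same few lines cover both navigation and dodging.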

As to checking out a GA/ANN solution, I say go for it. I try lots of approaches in test-bed applications (using placeholder graphics) to determine which I like best. I think the test-bed approach is a valid tool for the AI programmer, because it is not always obvious how well some approach will work until one implements it.

Good luck,

Eric
quote:
Original post by Geta
As you probably already know, a subset of flocking is steering behaviours (http://www.red3d.com/cwr/steer/), which may be more appropriate for 1) and 3) above than flocking might be. I tested steering behaviours for a wrestling game and was quite pleased with the results.



I looked at steering behaviours, but because the arena is open and the combat is partly team-based, I thought it might be better to use generic flocking (I use attractor/avoidance for singletons, so the transition would be reasonably simple). Since it's my first full-fledged implementation of an AI (I did one for an AI course at uni, but I never finished the lowest-level behaviours), the first one's a throwaway, and a copycat to boot. Lessons that I have learned, however, are:

1) formation would improve things greatly
2) against area-effect weapons, spreading out is good
3) concentrated fire ROCKS
4) dodging is good
5) my speed estimation algorithm has a specific range at which it reaches maximal effect
6) getting the hell out of the centre of the ring early is a REALLY good thing

I think this distills the generic behaviour criteria sufficiently to code it.

Instead of using a steering behaviour to avoid the walls, I've opted for a comet run on the centre of the ring. Though it has its problems, it tends to be unexpected, throwing off speed estimation at least momentarily.
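The recent-history speed estimation mentioned above could look something like the following. The names are hypothetical, and a real JRobot would first have to convert the scanner's polar readings into these position fixes:

```java
// Sketch of recent-history speed estimation and shot leading.
// Given two timed position fixes on a target, estimate its
// velocity and project where it will be after the missile's
// flight time, assuming it holds course and speed.
class LeadShot {
    static double[] predict(double x1, double y1, double t1,  // older fix
                            double x2, double y2, double t2,  // newer fix
                            double flightTime) {
        double dt = t2 - t1;
        double vx = (x2 - x1) / dt; // estimated velocity components
        double vy = (y2 - y1) / dt;
        return new double[] {x2 + vx * flightTime,
                             y2 + vy * flightTime};
    }
}
```

The constant-velocity assumption is exactly what an unexpected move like the comet run exploits: between the two fixes and the impact point, the target has already changed heading.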


quote:
I think the test-bed approach is a valid tool for the AI programmer, because it is not always obvious how well some approach will work until one implements it.



Out of curiosity: do you usually build custom test beds, or are there tools that you've found particularly useful for the purpose?

Thanks,
ld
No Excuses
quote:
Original post by liquiddark

Out of curiosity: do you usually build custom test beds, or are there tools that you've found particularly useful for the purpose?

Thanks,
ld


I haven't found any tools (other than MSVC++ Dev Studio, BoundsChecker, and SourceSafe) that migrate from project to project. Over the years, I've created my own libraries and tools for the test beds I've built and used. So when I need to test out something new, it's pretty easy for me to construct an applicable test bed out of previously used components.

Eric

