
Determining object visibility

Started by April 13, 2006 01:53 PM
23 comments, last by FBMachine 18 years, 7 months ago
Before I say anything, I would like to note that I am rather a beginner when it comes to AI, so don't flame me if this is a standard process, or if there is a simpler or better way of doing this (also note that English is not my primary language). I just want to know whether this approach is valid/usable, and to get an explanation.

In an FPS or any similar AI system, it is necessary for the AI algorithm to be able to determine whether other objects are visible. This can be done with culling and ray intersections, but it is usually "imperfect" because you must use bounding shapes, and lights are not taken into account. Instead, I was thinking about a different approach to the problem: an algorithm that uses image processing to find out whether an object is visible or not.

Basically, I was thinking of rendering a simplified version of the scene without textures, using only diffuse colors and per-vertex lighting to represent the AI's view of the scene. All AI-relevant objects (characters, trigger items, weapons, ...) would be rendered in a specific color, and static objects would be in a different, "neutral" color. The scene would also be rendered using the scene lights, so if a dynamic object isn't receiving any light (or very little), it would be rendered darker and ignored by the AI.

This is just a basic idea, but I am interested in hearing what you think about it. Has this been done, or is it inefficient? Any feedback would be nice.
(Just realized I have not explained this completely :) )

To clarify the process I described: I wasn't saying the AI would function only on image processing. It would only confirm whether the culling algorithm was correct about some object, to create more realistic behavior.
You could do this, for sure.

You could easily write an 'object id' to a second buffer in almost the same way you write a color to the screen buffer.

IIRC the radiosity method described at freespace.virgin.net/hugo.elias/ does something like this.
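As a rough sketch of the 'object id' idea: a 24-bit ID can be packed into an RGB colour, rendered flat-shaded into a second buffer, and recovered when the buffer is read back. The struct and function names here are illustrative, not from any particular engine.

```cpp
#include <cstdint>

// Pack a 24-bit object ID into an RGB colour so the ID buffer can be
// filled with an ordinary colour write, then recovered on readback.
struct IdColor { uint8_t r, g, b; };

IdColor encodeObjectId(uint32_t id) {
    IdColor c;
    c.r = static_cast<uint8_t>(id & 0xFF);
    c.g = static_cast<uint8_t>((id >> 8) & 0xFF);
    c.b = static_cast<uint8_t>((id >> 16) & 0xFF);
    return c;
}

uint32_t decodeObjectId(IdColor c) {
    return static_cast<uint32_t>(c.r)
         | (static_cast<uint32_t>(c.g) << 8)
         | (static_cast<uint32_t>(c.b) << 16);
}
```

With this scheme, ID 0 can serve as the "neutral" colour for static geometry, and any non-zero pixel found on readback identifies a visible dynamic object.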

Will
------------------http://www.nentari.com
I've investigated this option, but reading from video memory is slow, and there is no synchronisation between your visibility data and the current scene. You can't stop the video card, send it your "scene" and wait for it to return the result, like you would with a simple ray cast. The data you get will most likely be ~100 ms late, so it might be usable, but if your game is fast paced, it might not be acceptable.

If you find a decent way to implement it, I'd like to see your results!

Good luck

Eric
Quote: Original post by RedDrake
In an FPS or any similar AI system, it is necessary for the AI algorithm to be able to determine whether other objects are visible. This can be done with culling and ray intersections, but it is usually "imperfect" because you must use bounding shapes, and lights are not taken into account.


Usually such approximating methods are used to reduce cost. Is the tradeoff of cost versus accuracy appropriate for your application?

Most players rarely notice that the AI might see an extra foot around a corner, the main reason being that this 'inaccurate' behaviour is quite common among other players, where deduction, memory of previous situations, hearing, etc. tends to give them that extra margin of detection anyway.

Sure, the method should work. It sounds slow, though.

If you want to add features such as the AI 'missing' a player hidden in dark areas, you could probably do it more cheaply. A first thought is to do an additional ray test between the potentially visible object and each light source, with a distance check, to accumulate the approximate brightness of the object; then have some tunable factor in the AI that either ignores the object or not, depending on that brightness.

Maybe add in other factors too, like how far the object is from the AI (in addition to line of sight), as well as how close the object is to the nearest wall (trying to hide).
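The brightness-accumulation idea above could be sketched roughly like this. The `rayClear` callable stands in for whatever occlusion/line-of-sight query the engine provides, and the falloff formula and threshold are illustrative assumptions, not tuned values.

```cpp
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

float distance(const Vec3& a, const Vec3& b) {
    float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return std::sqrt(dx * dx + dy * dy + dz * dz);
}

struct Light { Vec3 pos; float intensity; };

// For each light, cast a ray from the object to the light; if unoccluded,
// accumulate a brightness contribution that falls off with distance.
template <typename RayClearFn>
float approxBrightness(const Vec3& objectPos,
                       const std::vector<Light>& lights,
                       RayClearFn rayClear) {
    float total = 0.0f;
    for (const Light& l : lights) {
        if (!rayClear(objectPos, l.pos)) continue;   // light is blocked
        float d = distance(objectPos, l.pos);
        total += l.intensity / (1.0f + d * d);       // simple distance falloff
    }
    return total;
}

// The tunable AI factor: ignore objects that are "dark enough".
bool aiCanSee(float brightness, float detectionThreshold) {
    return brightness >= detectionThreshold;
}
```

This keeps everything on the CPU with a handful of ray casts per object, avoiding the render-and-readback latency discussed above.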
Well, actually this is an experiment of mine, not anything practical (at the moment at least) ...

The fact that reading from video memory is slow can be addressed by using "small" RT resolutions, but this would probably bring in problems of accuracy ...
I will make some tests and check this ...
Another optimisation is not to check the entire image; instead, you can project each active object's AABB to screen coordinates and lock only that area (to be searched for the selected color).
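That AABB optimisation could look something like the sketch below: project the eight corners of the box to screen space, take the min/max, and clamp to the render target to get the sub-rectangle worth scanning. The projection itself is passed in as a callable, since it depends on the engine's camera math; all names are illustrative.

```cpp
#include <algorithm>
#include <cfloat>

struct Vec3 { float x, y, z; };
struct AABB { Vec3 min, max; };
struct Rect { int x0, y0, x1, y1; };

// 'project' is assumed to map a world-space point to pixel coordinates.
template <typename ProjectFn>
Rect screenRectForAABB(const AABB& box, int screenW, int screenH,
                       ProjectFn project) {
    float minX = FLT_MAX, minY = FLT_MAX;
    float maxX = -FLT_MAX, maxY = -FLT_MAX;
    // Enumerate all 8 corners of the box via the bit pattern of i.
    for (int i = 0; i < 8; ++i) {
        Vec3 corner = { (i & 1) ? box.max.x : box.min.x,
                        (i & 2) ? box.max.y : box.min.y,
                        (i & 4) ? box.max.z : box.min.z };
        float sx, sy;
        project(corner, sx, sy);
        minX = std::min(minX, sx); maxX = std::max(maxX, sx);
        minY = std::min(minY, sy); maxY = std::max(maxY, sy);
    }
    // Clamp to the render target so the locked region stays valid.
    Rect r;
    r.x0 = std::max(0, static_cast<int>(minX));
    r.y0 = std::max(0, static_cast<int>(minY));
    r.x1 = std::min(screenW - 1, static_cast<int>(maxX));
    r.y1 = std::min(screenH - 1, static_cast<int>(maxY));
    return r;
}
```

Only the returned rectangle then needs to be locked and scanned for the object's ID colour, rather than the whole render target.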

As I said, I will make a test app, and post the results soon since i got some free time now.
Also, gameplay-wise, I wouldn't use such a visibility test AGAINST the player, at least not in a first-person shooter, as the only position the player knows about is the camera position. If you allow any part of the mesh to be seen, you need to give the player information on whether or not he's well hidden.
Quote: Original post by RedDrake
Basically, I was thinking of rendering a simplified version of the scene without textures, using only diffuse colors and per-vertex lighting to represent the AI's view of the scene. All AI-relevant objects (characters, trigger items, weapons, ...) would be rendered in a specific color, and static objects would be in a different, "neutral" color. The scene would also be rendered using the scene lights, so if a dynamic object isn't receiving any light (or very little), it would be rendered darker and ignored by the AI.

This is just a basic idea, but I am interested in hearing what you think about it. Has this been done, or is it inefficient? Any feedback would be nice.


If you're going to render a scene, no matter how simple, for each character to test if they can see something, and then analyse it at the end to see what is visible, then yes, that is going to be very, very inefficient. It would be better to just perform the ray-intersections from the character to any relevant points of interest.

Quote: Original post by Kylotan
If you're going to render a scene, no matter how simple, for each character to test if they can see something, and then analyse it at the end to see what is visible, then yes, that is going to be very, very inefficient. It would be better to just perform the ray-intersections from the character to any relevant points of interest.


Well actually, the image processing would be just a test: if the ray-cast conditions are satisfied, image processing is used to check whether the character is actually visible (and this would not have to be a high frame rate; 3-5 fps should be sufficient for realistic AI updates?). Then, if the character is spotted, the AI goes into an alert state and is looking for the player, so a standard ray cast would do fine here. As I said, this would not replace the ray casts, but would improve the accuracy and realism of the AI's behavior when determining whether a character can be spotted or not. This sort of thing would be useful in games where you need to sneak past enemies and stay in the shadows, like Metal Gear Solid or Splinter Cell and similar ...
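The two-stage test described above might be structured like this: a cheap ray cast acts as a first filter, and the expensive render-and-readback check runs only when the ray cast passes. Both checks are passed in as callables so the sketch stays engine-neutral; the function and its logic are an illustration of the post's idea, not any shipped implementation.

```cpp
#include <functional>

// Decide whether the AI spots the target. Before the AI is alerted, the
// image-based check confirms positive ray casts (handles darkness, partial
// cover); once alerted, the plain ray cast is enough, as described above.
bool aiCanSpotTarget(const std::function<bool()>& rayCastVisible,
                     const std::function<bool()>& imageCheckVisible,
                     bool aiAlreadyAlerted) {
    if (aiAlreadyAlerted) {
        return rayCastVisible();
    }
    // Only pay for the render-and-readback confirmation when the cheap
    // ray cast says the target might be visible.
    if (!rayCastVisible()) {
        return false;
    }
    return imageCheckVisible();
}
```

The key property is that the expensive check never runs for targets the ray cast already rejects, which keeps the average per-frame cost close to the plain ray-cast approach.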

Hmmm, too bad I don't have any actual game source with which I could test this AI approach. Any open-source FPS that could potentially be modded to use this approach?
I think you can get the source to Quake 3 from id's website.

As for the 3-5 fps thing, keep in mind that you'll multiply this frame rate by the number of bots/agents using this system; it will get time-consuming pretty fast.
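One common way to keep that per-agent cost bounded (an assumption on my part, not something from the thread) is to stagger the expensive checks round-robin, so only a fixed number of agents run their visibility test on any given frame:

```cpp
#include <cstddef>
#include <vector>

// Amortise the cost: instead of every agent testing at 3-5 Hz
// simultaneously, run the expensive check for only 'perFrame' agents
// each frame, cycling through the agent list.
class VisibilityScheduler {
public:
    VisibilityScheduler(std::size_t agentCount, std::size_t perFrame)
        : count_(agentCount), perFrame_(perFrame), next_(0) {}

    // Returns the agent indices whose expensive check runs this frame.
    std::vector<std::size_t> agentsForThisFrame() {
        std::vector<std::size_t> out;
        for (std::size_t i = 0; i < perFrame_ && count_ > 0; ++i) {
            out.push_back(next_);
            next_ = (next_ + 1) % count_;
        }
        return out;
    }

private:
    std::size_t count_;
    std::size_t perFrame_;
    std::size_t next_;
};
```

With, say, 10 agents and 2 checks per frame at 60 fps, each agent still gets refreshed 12 times a second while the worst-case per-frame cost stays constant.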

As an experimentation, that's an awesome project. Good luck!

This topic is closed to new replies.
