
Next step in my path finding code...

Started by 3dModelMan, May 21, 2002 04:31 PM
13 comments, last by 3dModelMan
Hi there. A few weeks ago you guys helped me out with my implementation of A* and it was a great help, so needless to say I'm back with more questions (although not on A* itself).

Background: I'm using Points of Visibility. Right now I'm happy that I can handle many hundreds of POVs and every conceivable route between them in real time. This was quite a task, since the level data is dynamic and POVs and routes can be created or destroyed as the player blasts his way around. If you're interested in how I did it, or in seeing it in action, let me know. The game uses 3D graphics but it is essentially a 2D game as far as the math goes. This shot shows it all, with the permissible routes between POVs drawn. This shot is the same scene with a couple of objects destroyed, resulting in some changes to the available routes.

So here's my current problem: for any point on the map, determining ALL the visible POVs. Obviously this has to be done twice for each route calculation, once for the start point and once for the destination, and as much as I keep looking I can't help thinking that this is going to be a very expensive operation. No matter what subdivision is employed (I'm currently using a quad-tree), there is still a large workload in determining whether the straight-line path between an arbitrary point and each POV is unobstructed. Any ideas? (Remember that the map data can change, which eliminates a lot of pre-calculation possibilities.) I'm chewing over many approaches at the moment, but none of them seem particularly elegant, so if you can think of any methods I should be looking into, please shout up.

Cheers
Matt

[edited by - 3dModelMan on May 21, 2002 5:34:05 PM]
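
To make the cost concrete, a route request boils down to roughly the following - just a sketch, with placeholder names rather than my actual interface, but the two VisiblePOVs() sweeps are the expensive part I'm asking about.

// Rough shape of a route request: the two visibility sweeps are the expensive
// part. All names here are placeholders, not the real interface.
#include <vector>

struct Point { float x, y; };

std::vector<int> VisiblePOVs(const Point& p);                   // every POV visible from p
void AddTemporaryNode(int nodeId, const std::vector<int>& links);
void RemoveTemporaryNode(int nodeId);
bool RunAStar(int startId, int goalId, std::vector<int>& outPath);

const int START_NODE = -1;
const int GOAL_NODE  = -2;

bool FindRoute(const Point& start, const Point& goal, std::vector<int>& outPath)
{
    std::vector<int> startLinks = VisiblePOVs(start);           // expensive sweep #1
    std::vector<int> goalLinks  = VisiblePOVs(goal);            // expensive sweep #2
    if (startLinks.empty() || goalLinks.empty())
        return false;                                           // an endpoint is sealed off

    AddTemporaryNode(START_NODE, startLinks);                   // hook the endpoints into the graph
    AddTemporaryNode(GOAL_NODE,  goalLinks);
    bool found = RunAStar(START_NODE, GOAL_NODE, outPath);
    RemoveTemporaryNode(START_NODE);                            // graph goes back to POVs only
    RemoveTemporaryNode(GOAL_NODE);
    return found;
}
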
How are you currently storing your PoV data?

Timkin
Hi,

Here is another explanation. I wanted to keep the length of the original post to a minimum - sorry for the lack of detail.

The POVs are part of the "objects" you can see in the demo. At the moment all the objects are 4-sided and have a POV at each corner. The routes around them (p1 to p2, p2 to p3, p3 to p4 and p4 to p1) are stored as pre-calculated line segments and can be used to determine whether any line in space passes through the object's bounds.
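
For reference, each of those per-edge checks is just a standard 2D segment-segment intersection - a minimal sketch, not my exact code:

// Minimal 2D segment-segment test, the kind of check a sight line gets
// against each of the four stored edges.
struct Vec2 { float x, y; };

static float Cross(const Vec2& a, const Vec2& b) { return a.x * b.y - a.y * b.x; }

// True if segment p1-p2 crosses segment q1-q2.
bool SegmentsIntersect(const Vec2& p1, const Vec2& p2,
                       const Vec2& q1, const Vec2& q2)
{
    Vec2 r  = { p2.x - p1.x, p2.y - p1.y };
    Vec2 s  = { q2.x - q1.x, q2.y - q1.y };
    Vec2 qp = { q1.x - p1.x, q1.y - p1.y };

    float rxs = Cross(r, s);
    if (rxs == 0.0f)
        return false;                 // parallel or collinear: treated as no blocking hit

    float t = Cross(qp, s) / rxs;     // parameter along p1-p2
    float u = Cross(qp, r) / rxs;     // parameter along q1-q2
    return t >= 0.0f && t <= 1.0f && u >= 0.0f && u <= 1.0f;
}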

It's these objects that are stored in the quad-tree, so for any line segment ("POV to Bot", for example) the number of objects that have to be tested for intersection can be kept to a minimum. Testing one line segment is not a problem. The problem is that as soon as you move on to the next POV, the quad-tree has to be queried again and all the objects it returns must be tested until either an intersection is found (the visibility test fails) or they have all been tested unsuccessfully (the visibility test passes) - and this has to be repeated for every POV.
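
In code, the per-request cost looks roughly like this. QuadTree, Obstacle and their members are stand-ins for my real classes, and Vec2/SegmentsIntersect are from the sketch above:

// Sketch of the expensive sweep: one quad-tree query plus up to four edge
// tests per returned object, repeated for every POV on the map.
#include <vector>

bool LineIsClear(const QuadTree& tree, const Vec2& from, const Vec2& to)
{
    std::vector<const Obstacle*> nearby;
    tree.Query(from, to, nearby);                  // objects whose bounds touch the segment

    for (size_t i = 0; i < nearby.size(); ++i)
        for (int e = 0; e < 4; ++e)                // the four pre-stored edge segments
            if (SegmentsIntersect(from, to, nearby[i]->edge[e].a, nearby[i]->edge[e].b))
                return false;                      // blocked - early out
    return true;                                   // nothing in the way
}

void FindVisiblePOVs(const QuadTree& tree, const Vec2& bot,
                     const std::vector<Vec2>& povs, std::vector<int>& visible)
{
    visible.clear();
    for (size_t i = 0; i < povs.size(); ++i)       // this outer loop is the workload in question
        if (LineIsClear(tree, bot, povs[i]))
            visible.push_back((int)i);
}

So the early-out helps, but the outer loop over every POV is what worries me.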

There are a couple of properties to be exploited... The objects never rotate, and a POV can never see through its own bounds, hence it has a maximum field of vision that never changes - the first test could then be whether or not the Bot position lies within its field of vision. (I prefer to think of the problem not as "can the Bot see the POV?" but as "can the POV see the Bot?".)
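
Something like this is what I mean by that first test - a rough sketch assuming the objects stay axis-aligned boxes, with made-up field names:

// Early rejection: a corner POV can never see through its own box, so every
// corner has a fixed "dead quadrant" pointing into the object. Because the
// objects never rotate, the quadrant can be precomputed once per POV.
struct CornerPOV
{
    float x, y;     // corner position
    int   deadDX;   // +1 if the box extends towards +x from this corner, else -1
    int   deadDY;   // +1 if the box extends towards +y from this corner, else -1
};

// True if the bot lies inside the dead quadrant, i.e. the sight line would
// have to pass through the POV's own object - so skip the full test entirely.
bool OutsideFieldOfVision(const CornerPOV& pov, float botX, float botY)
{
    float dx = botX - pov.x;
    float dy = botY - pov.y;
    return (dx * pov.deadDX > 0.0f) && (dy * pov.deadDY > 0.0f);
}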

Secondly, when two or more objects are close together they create a more complex bounded shape so testing all these objects at once could be achieved more efficiently than testing them individually... the problem would then be maintaining these complex bounds as they change.

mmmmm, I'm still thinking...

Cheers

Matt
Okay, here's an idea off the top of my head... not sure how well it will work, but you might be able to get some ideas from it.

Consider a circle centred on the bot with a radius equal to the distance from the bot to the closest PoV - call it p1 - which is a corner of an object I will call A. There will be at most two other PoVs on A that can be seen by the bot (since the object is 4-sided). Check these two adjacent PoVs. Let's assume both are visible to the bot. You can now define a conical section of space 'blocked out' by the object: you don't need to check any PoVs in that cone. An easy way to define the cone is by two angles on the circle centred on the bot that passes through p1. These angles are approximately the angular displacements, measured from some reference line, of the PoVs on A adjacent to p1. So you can run through a list of all other PoVs and work out which ones have angular displacements that fall between those two angles and whose radial distance from the bot is greater than the radius of the circle. Omit these PoVs from further consideration.

Rinse and repeat on the next closest object... until no more objects remain.
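
In rough code it might look like this - just a sketch of the bookkeeping, with made-up structures, and using the farthest corner of each object as the cut-off distance to stay on the conservative side:

// Sketch of the "shadow cone" culling: objects are taken nearest-first, and
// every not-yet-culled POV that sits inside an object's angular span (as seen
// from the bot) and beyond its farthest corner is dropped without a line test.
#include <cmath>
#include <vector>
#include <algorithm>

struct Vec2 { float x, y; };
struct Box  { Vec2 corner[4]; };

static float WrapPi(float a)                        // wrap an angle into (-pi, pi]
{
    while (a >  3.14159265f) a -= 6.2831853f;
    while (a <= -3.14159265f) a += 6.2831853f;
    return a;
}

void CullShadowedPOVs(const Vec2& bot,
                      const std::vector<Box>& boxes,          // caller sorts these nearest-first
                      const std::vector<Vec2>& povs,
                      std::vector<bool>& culled)              // in/out, one flag per POV
{
    for (size_t b = 0; b < boxes.size(); ++b)
    {
        // Angular span of the box as seen from the bot, measured relative to
        // corner 0 so wrap-around is not an issue (a convex box subtends < pi).
        float ref = std::atan2(boxes[b].corner[0].y - bot.y,
                               boxes[b].corner[0].x - bot.x);
        float lo = 0.0f, hi = 0.0f, farDist = 0.0f;
        for (int c = 0; c < 4; ++c)
        {
            float dx = boxes[b].corner[c].x - bot.x;
            float dy = boxes[b].corner[c].y - bot.y;
            float off = WrapPi(std::atan2(dy, dx) - ref);
            lo = std::min(lo, off);
            hi = std::max(hi, off);
            farDist = std::max(farDist, std::sqrt(dx * dx + dy * dy));
        }

        for (size_t i = 0; i < povs.size(); ++i)
        {
            if (culled[i]) continue;
            float dx = povs[i].x - bot.x;
            float dy = povs[i].y - bot.y;
            float off = WrapPi(std::atan2(dy, dx) - ref);
            if (off > lo && off < hi && std::sqrt(dx * dx + dy * dy) > farDist)
                culled[i] = true;                  // in the shadow of this box
        }
    }
}

Processing the objects nearest-first should mean most PoVs get culled before you ever have to do a full line test on them.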

Does that make sense?

Timkin

[edited by - Timkin on May 22, 2002 12:28:32 AM]
Yes Timkin, that makes perfect sense - I was already considering algorithms along those lines, treating the Bot as a light source and invalidating the "shadowed" region beyond the nearest objects. It's good to have ideas confirmed, so thanks for that!

I've always found it part of the challenge of coding to implement your own ideas, but there's nothing more frustrating than reading at a later date about some scientific paper that has discovered a "beautiful solution" to the same problem - that's why I posted here, fishing for ideas.

Thanks again

Matt

So is all your pathfinding done from point to point, or am I misunderstanding what you're using the POVs for?

[edited by - MikeD on May 23, 2002 11:55:37 AM]
Yes Mike, all routes will be constructed from the Points of Visibility. The only two points in a route that aren't POVs will be the start and end points. Why do you ask?

- Matt


You're using this for visibility testing? So something like ObstructedView(object_a, object_b) would invoke it - "can't see" in the sense that it returns 0 (false) if there is nothing blocking the view, and returns the blocking object if there is no unblocked ray?

hmmm

If that's true, the issue is avoiding checking ray intersections against every object in the game? It seems like that's what the entire complexity of this structure is devoted to. The structure could also be used to find intervisibility obstructions for hiding.

The efficient ray-obstruction part of this isn't really an AI topic. Not a flame - I just think the graphics guys might have more to input, since ray tracing in general depends on these optimisations. Clearly there is a better way than checking intersections against every object in the game.

Now, finding intervisibility obstructions for hiding (or searching for / predicting hidden locations) in sparsely populated maps is an AI topic.

I'm just clarifying.

As far as efficiency goes, one idea: have every object on the map point to the view edges that it obstructs, so that when the map changes it can force recalculation of just the affected edges.
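
Rough sketch of what I mean (names made up; only the bookkeeping is shown, not the intersection test itself):

// Each precomputed view edge remembers which objects block it, and each
// object keeps the list of edges it blocks. When an object is destroyed,
// only the edges on its list get re-tested, not the whole graph.
#include <set>
#include <vector>

struct ViewEdge
{
    int  povA, povB;                 // endpoints in the POV list
    bool clear;                      // currently unobstructed?
    std::set<int> blockers;          // objects currently crossing this edge
};

struct Obstacle
{
    std::set<int> blockedEdges;      // every view edge this object crosses
};

// Called when obstacle 'obj' is destroyed: only its own edge list is touched.
void OnObstacleDestroyed(int obj,
                         std::vector<Obstacle>& obstacles,
                         std::vector<ViewEdge>& edges)
{
    std::set<int>& list = obstacles[obj].blockedEdges;
    for (std::set<int>::iterator it = list.begin(); it != list.end(); ++it)
    {
        ViewEdge& e = edges[*it];
        e.blockers.erase(obj);
        if (e.blockers.empty())
            e.clear = true;          // edge becomes usable again
    }
    list.clear();
}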

Though one thing that disturbs me about this system is that there are on the order of n^2 edges in your view_edge graph. Hmmm.
Not quite sure what you are suggesting declspec...

Yes, ray tracing is definitely a graphics issue... and yes the idea I posted above is essentially a ray tracing method... however the rays are pre-defined by the bot position and the PoV positions... so the problem is a little more constrained than standard ray tracing.

3dModelMan,

You might be able to save some computational effort in implementing the above idea by incorporating it into your frustum culling algorithm. You'd have to consider the full 360-degree field, however you would only need to apply clipping to polygons within the field of view. If you do choose to go down this, or a similar, path, you might want to ask in the graphics forum whether anyone is familiar with combining visibility tests with pathfinding using a graphics algorithm.

Cheers,

Timkin

Actually, I wasn't sure of the original problem - I was just talking to start clarification. Consider this:
"the problem is that as soon as you move on to the next POV, the quad-tree has to be queried again and all the objects it returns must be tested until either an intersection is found (the visibility test fails) or they have all been tested unsuccessfully (the visibility test passes) - and this has to be repeated for every POV."

This seems to indicate that the structure itself is too heavy on the big-O side, so no amount of tweaking individual parts is going to help. But consider this:
"as soon as you move on to the next POV... this has to be repeated for every POV."

I actually don't understand why every POV has to be recalculated every "think" or whatnot. It's like saying it takes forever to traverse this N times, without showing why the work is necessary in the first place.

Originally I thought: why recalculate each edge? Just keep a list of each edge that's interrupted by object_a in object_a, then when object_a gets moved, recalculate from that list. But, as I was noting, the roughly n^2 edge count seems overly large. Then I noticed that this is reinventing the wheel - I mean, if this all comes down to a visibility test, then isn't PVS ("potentially visible" is what the first two letters stand for) the well-defined system for dealing with this? Hence my suggestion that perhaps the graphics guys have insight. Hmm, though the most recent PVS thread said it wasn't well suited to outdoor (sparsely populated) maps. All of this was not a statement of AI experience being useless, but a statement of graphics experience being _useful_.



