
Simulating vision - calculating line of sight

Started October 25, 2007 10:38 AM
19 comments, last by ID Merlin 17 years, 1 month ago
I recently figured out a fast way to calculate areas in line of sight from any given point in a game world, so I thought I'd write up an article describing it. It really needs images, so I put it up on a free host (Geocities unfortunately, apologies for the ads). You can view it here. Let me know what you think - hopefully some people will find it useful.
That's interesting, but also confusing. I don't get it yet. I'm going to have to spend some clipboard time to decipher your idea.
It seems like a large amount of data that you'd have to save/update per entity (since we need to do LOS from each entity to all its enemies). I'm assuming it's per entity; it could be per node, but then that's even more data.

How much memory is required per entity/node to maintain this algorithm?
How frequently does it need to be updated (if it's per entity)?
Is it actually less expensive to maintain that data than to just raycast (if it's per entity)?

If it's per node, you're just making the common trade-off between speed and memory. It's faster but takes up more memory.

-me
You're doing a raycasting of the scene, using the weird flood-fill thing instead of an explicit cast. This is guaranteed to be slower than just testing for visibility with every other object, isn't as robust, and gives you less information.
Reminds me of Jagged Alliance 2 - your example images look like the LoS overlays in that game ;)
I can see an application for this. It would be useful to build a map of all areas that an enemy could attack with missiles, so that the AI would weight vulnerable locations differently than obscured locations. The algorithm would have to be extended to accumulate a map of vulnerable hexes as each enemy location is examined.
It may also have applications in Terrain Reasoning.
Hmm, I did spend quite a while trying to make it clear, using the GIFs etc. Hopefully these answers will help.

NOTE: I don't deal with entities in this algorithm, in fact it's what I'm specifically trying to avoid. If I just deal with extra-interesting entities, I don't build up a proper visual description of the world. The algorithm is intended for an object-rich world, not an object-sparse one. If my world doesn't have many objects (like one where we only care about enemies), then I might as well do object-object ray testing. This algorithm is interested in every visible object, like walls and the ground. And that data can be used for many applications such as pathing and positional strategy. I mention this in the article.

Sneftel -
Quote: You're doing a raycasting of the scene, using the weird flood-fill thing instead of an explicit cast. This is guaranteed to be slower than just testing for visibility with every other object, isn't as robust, and gives you less information.
Yes I'm doing a raycasting of the scene. I do a fill instead of explicit casts because if I did explicit casts, I'd be doing more work. Think about this - let's say we are checking the visibility of points (590, 20) and (591, 20). If I did explicit casts, I'd be checking many of the same positions each time, right? If I fill outwards and propagate LOS information, I just make one more step.
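To make the propagation concrete, here's a crude Python sketch of the idea (a simplified square-grid version with my own liberal "any inward neighbour" rule - the function name and details are mine, not the exact algorithm from the article):

```python
def flood_los(opaque, origin, radius):
    """opaque: 2D list of bools (opaque[x][y]); returns the set of visible cells.

    Cells are processed in order of increasing Manhattan distance, so each
    cell's 'parents' (the neighbours one step back toward the viewer) are
    always decided first. Each cell is touched once instead of re-walking
    a full ray per cell.
    """
    ox, oy = origin
    w, h = len(opaque), len(opaque[0])
    cells = [(x, y)
             for x in range(max(0, ox - radius), min(w, ox + radius + 1))
             for y in range(max(0, oy - radius), min(h, oy + radius + 1))
             if (x, y) != origin]
    cells.sort(key=lambda c: abs(c[0] - ox) + abs(c[1] - oy))
    visible = {origin}
    for (x, y) in cells:
        dx = (x > ox) - (x < ox)   # unit step back toward the viewer
        dy = (y > oy) - (y < oy)
        parents = {(x - dx, y), (x, y - dy), (x - dx, y - dy)} - {(x, y)}
        # Visible if some transparent parent on the way back is visible.
        # (An opaque cell can still *be* seen - it just blocks its children.)
        if any(p in visible and not opaque[p[0]][p[1]] for p in parents):
            visible.add((x, y))
    return visible
```

The "any parent" rule is deliberately permissive and overestimates visibility slightly; the point is just that LOS information flows outward with the fill rather than being recomputed per target.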

Is it "guaranteed to be slower than just testing for visibility with every other object"? Only in object-sparse worlds. Put the viewer in a small closed room and put 5000 other objects outside the room and see if that guarantee holds. This algorithm will fill to the bounds of the room and stop.

I don't see how it gives me less information. The algorithm runs through *every scrap* of visual information. Object-object raycasting only gives the agent information on those objects that you decide to do the raycasts on.

Palidine - What you actually store with your entity is entirely up to you. All the algorithm guarantees is that you will have a chance to store information about anything within line of sight. The number of nodes the algorithm needs to store *at any one time* (its space complexity) is roughly proportional to the radius of the search, but memory and time requirements are reduced as the amount of opaque area increases. Look at the 'room' gif near the bottom - the green perimeter is tiny, and the green perimeter roughly represents the number of nodes I'm using at any one time.

It's not a simple memory/speed trade-off. It's a way to cast rays where each ray takes maximum advantage of previously cast rays. The more rays you want to cast from a point, the more effective the algorithm is.






I read the terrain reasoning article, and though I'm not working on a shooter, I found it thought-provoking. I may work on a variation of this algorithm that projects missile shots and ranges from each enemy to help find blind spots and better defensive locations. Because my game uses variable-height terrain and considers LOS, the flood-fill will always have to iterate over all locations and consider height as a factor in the calculation. But since the map is only 25x25, I bet that will work even with 20-30 enemies with missile capability.
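Something along these lines is what I have in mind for the height check (a rough Python sketch - the function name, eye height, and sampling step are my own assumptions, not from the article):

```python
def has_los(height, a, b, eye=1.0, steps=24):
    """True if b is visible from eye height above a on a small heightmap.

    Samples points along the straight line from a to b; terrain blocks
    the ray wherever it pokes above the interpolated sight line.
    """
    (ax, ay), (bx, by) = a, b
    h0 = height[ax][ay] + eye
    h1 = height[bx][by] + eye
    for i in range(1, steps):
        t = i / steps
        x = round(ax + (bx - ax) * t)
        y = round(ay + (by - ay) * t)
        if (x, y) in (a, b):
            continue  # don't let the endpoints block themselves
        if height[x][y] > h0 + (h1 - h0) * t:
            return False
    return True
```

On a 25x25 map this is cheap enough to run for every enemy/location pair, which is all the blind-spot map needs.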
Quote: Original post by Argus2
Sneftel -
Quote: You're doing a raycasting of the scene, using the weird flood-fill thing instead of an explicit cast. This is guaranteed to be slower than just testing for visibility with every other object, isn't as robust, and gives you less information.
Yes I'm doing a raycasting of the scene. I do a fill instead of explicit casts because if I did explicit casts, I'd be doing more work. Think about this - let's say we are checking the visibility of points (590, 20) and (591, 20). If I did explicit casts, I'd be checking many of the same positions each time, right? If I fill outwards and propagate LOS information, I just make one more step.


Well, it really depends on the granularity of the grid, the sparsity of the positions you're checking, and the distance from the origin. When I did something similar, it became clear that repeated ray-casting was far more efficient - although several areas near the viewpoint were checked 2 or 3 times each, the proportion of areas further out that didn't need to be examined at all was 80%-90%, and they were more numerous too.

Besides, there's usually nothing to stop you caching/memoising results for these inner nodes, meaning you can almost get the best of both approaches.
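To illustrate the caching idea (a rough Python sketch with made-up names, not anything from the article): each ray is walked with Bresenham, and every cell it crosses gets its verdict remembered, so a later query that lands on an already-walked cell costs nothing.

```python
def cast_all(opaque, origin, targets):
    """Raycast from origin to each target, memoising per-cell visibility."""
    cache = {origin: True}  # cell -> visible from origin?

    def visible(cx, cy):
        ox, oy = origin
        # Bresenham walk from origin to (cx, cy); origin itself excluded.
        line, x, y = [], ox, oy
        dx, dy = abs(cx - ox), abs(cy - oy)
        sx = 1 if cx > ox else -1
        sy = 1 if cy > oy else -1
        err = dx - dy
        while (x, y) != (cx, cy):
            e2 = 2 * err
            if e2 > -dy:
                err -= dy; x += sx
            if e2 < dx:
                err += dx; y += sy
            line.append((x, y))
        blocked = False
        for (x, y) in line:
            if blocked:
                cache.setdefault((x, y), False)  # behind a wall on this ray
            else:
                cache[(x, y)] = True   # the ray reached this cell
                blocked = opaque[x][y]  # a wall is seen, but stops the ray
        return cache[(cx, cy)]

    return {t: cache[t] if t in cache else visible(*t) for t in targets}
```

With sparse targets far from the viewer this skips most of the grid entirely, while dense inner queries hit the cache - roughly the "best of both" trade-off described above.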

This topic is closed to new replies.
