
Determining object visibility

Started by April 13, 2006 01:53 PM
23 comments, last by FBMachine 18 years, 7 months ago
Quote: Original post by RedDrake
Well actually, the image processing would be just a test: if the ray-cast conditions are satisfied, image processing is used to check whether the character is actually visible (and this would not have to be at a high frame rate; 3-5 fps would be sufficient for realistic AI updates?).


That's still a lot of work to be done. It would make the game slow down significantly when a few characters were in the same area.

Also, I don't think you can get around the slowness of reading from video memory. It's not just slow, it's SLOOOOOOOOOW. The speed of rendering comes from the pipeline and the fact that while you're plotting one pixel, the card is doing texturing/lighting/blending on previous ones. As far as I know, when you ask to read the data, all this has to stop so that you get accurate values. I expect that's going to have a massive impact.

If it's light and shadow you're primarily concerned with, as opposed to pixel-perfect vision, then I'd probably look into trying to do real-time radiosity instead. I think you could simplify that enough to make it usable.
Is there a delay in reading off a rendered scene, or a complete stall of the GPU?

From my thinking, the latency isn't important at all, but the bandwidth... is the bandwidth enough to keep reading frames off the video card in realtime?

Fraps does it.

It's not a big deal if it's 200ms late - typical human reaction time is in that ballpark anyway.

I'd be more suspicious of the overall idea... it's certainly not a 'general' solution to visibility testing, as it requires the act of rendering.

Seems more complicated than it's worth. A novel idea with narrow practical value.

As far as I know, it's a complete stall of the pipeline (or one of the pipelines, if there are several independent ones... I don't keep up with the details of how cards are made these days). I suppose there might be some way of getting the card to copy it to a secondary buffer and trickle that back to the CPU asynchronously with minimal effect on rendering speed, but obviously the reduced read bandwidth would mean you couldn't do that as often as once per frame.
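For what it's worth, that asynchronous copy could look roughly like this under OpenGL with pixel buffer object support (a sketch only - width, height, and the frame scheduling are assumed, and the PBO entry points come from an extension loader):

#include <GL/glew.h> // or your own extension loader for the PBO entry points

GLuint pbo;
glGenBuffers(1, &pbo);
glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
glBufferData(GL_PIXEL_PACK_BUFFER, width * height * 4, 0, GL_STREAM_READ);

// Frame N: start the transfer. With a pack buffer bound, the last
// argument to glReadPixels is a byte offset, not a pointer, so the call
// returns without waiting for the copy to finish.
glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, 0);

// Frame N+1 (or later): map the buffer. By now the DMA has normally
// completed, so mapping shouldn't stall the pipeline.
glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
const unsigned char* pixels = (const unsigned char*)
    glMapBuffer(GL_PIXEL_PACK_BUFFER, GL_READ_ONLY);
if (pixels) {
    // ... examine the pixels here ...
    glUnmapBuffer(GL_PIXEL_PACK_BUFFER);
}
glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);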

I don't know how Fraps does its work, though. It appears to only operate with DirectX and OpenGL applications so I wouldn't be surprised if it's doing something at the driver level that lets it do something that application code can't usually do.
You could write a bot client that does nothing but bot AI & visibility determination

I would render each object with a different color (RGB) combination and turn off all effects that affect the color values

Let's choose (R,G) as the object identifier - that's 256*256 possibilities (maybe add a bias between object IDs) - and let B be a blended component that stores the brightness (caused by lights)

now render everything that is visible at a lower resolution - 640*480 maybe -
read back the buffer, and examine the content.

you have a 640*480 pixel buffer and a list of the visible objects you rendered.
Now don't examine each pixel; instead, place a grid of 16x16-pixel cells over the image and examine the corners of these cells. Once you've found an object in the image, remove it from the visible list, and go on until you have checked everything

if there are still a lot of objects left, you could perform a second lookup:
this time only the centers of the aforementioned cells.

You don't need to do a per-pixel test, since no human would notice the difference in a fast-paced FPS shooter
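Something like this, for example (untested sketch; FindVisibleIds is a made-up name, and it assumes the buffer was already read back as tightly packed RGB bytes):

#include <set>

// Grid-corner scan over a 640*480 readback buffer; each object's ID is
// assumed to have been encoded into its (R,G) pair at render time.
std::set<int> FindVisibleIds(const unsigned char* pixels,
                             int width, int height)
{
    std::set<int> found;
    const int cell = 16;
    // First pass: sample only the corners of the 16x16 cells.
    for (int y = 0; y < height; y += cell) {
        for (int x = 0; x < width; x += cell) {
            const unsigned char* p = pixels + (y * width + x) * 3;
            int id = p[0] * 256 + p[1]; // object ID packed in (R,G)
            if (id != 0)                // 0 = background/level geometry
                found.insert(id);
        }
    }
    // A second pass over the cell centers could catch small objects
    // that fell between the corners.
    return found;
}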
http://www.8ung.at/basiror/theironcross.html
OK, now I am beginning to wonder if I should even have put the word FPS in the post.
This approach is not suited for fast-paced FPS games, and it's not necessary there either. You won't notice these things in Quake 3, for example.

This sort of visibility determination is suited for games where the player mustn't be spotted by the AI and needs to sneak past them.

In those sorts of games, ignoring the amount of light on the character, and ignoring things such as transparency or precision, really ruins the game play (it gets quite annoying when the enemy can see you in full dark - you can't see him but he can see you :) )

@Basiror
A 256*256 color ID can't be implemented here; if it were, the AI would ignore the lighting - which is pretty much the major benefit of this approach. Instead my idea was to:

- Do frustum culling and determine the enemy characters inside
- Request a render of the bot's frustum
- Render the low-poly enemy characters with a red diffuse color and per-vertex lights
- Project the AABox of each character onto the image and scan that area for red color
- If enough red was found in the scanned area, the character is visible; otherwise the character is not visible (see the sketch below)
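For the last two steps, roughly like this (untested sketch; CharacterVisible is a made-up name and the thresholds are just for illustration):

// Count red pixels inside the character's projected bounding rectangle
// in the RGB readback of the bot-frustum render, and compare against a
// tunable threshold.
bool CharacterVisible(const unsigned char* pixels, int width,
                      int x0, int y0, int x1, int y1, // projected AABox
                      int minRedPixels)               // tunable threshold
{
    int redCount = 0;
    for (int y = y0; y <= y1; ++y) {
        for (int x = x0; x <= x1; ++x) {
            const unsigned char* p = pixels + (y * width + x) * 3;
            // Lit parts of the character keep a strong red component;
            // occluded or unlit parts don't.
            if (p[0] > 128 && p[1] < 64 && p[2] < 64)
                ++redCount;
        }
    }
    return redCount >= minRedPixels;
}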

256*256 possibilities is really not necessary. In games that need this level of precision, enemies only need to check if they see you (or your companion).

This isn't supposed to be done each frame, or even nearly that often. 6 fps for all the bots would be enough. If you are in the frustum of multiple enemies, the FPS is split - e.g. 2 enemies = 3 fps each, 3 enemies = 2 fps each...
If the frames need to be rounded, the closer enemy gets the extra frame - something like the sketch below.
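For illustration (sketch only; Enemy and ScheduleVisibilityTests are made-up names):

#include <vector>

struct Enemy { int visibilityTestsPerSecond; }; // minimal stand-in

// Split a fixed budget of 6 visibility tests per second among the
// enemies whose frustum currently contains the player; the remainder
// (the "extra frames") goes to the closest enemies, so the observer
// list is assumed sorted closest-first.
void ScheduleVisibilityTests(std::vector<Enemy*>& observers)
{
    const int kTestsPerSecond = 6;
    const int n = (int)observers.size();
    if (n == 0) return;
    for (int i = 0; i < n; ++i) {
        observers[i]->visibilityTestsPerSecond =
            kTestsPerSecond / n + (i < kTestsPerSecond % n ? 1 : 0);
    }
}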

Multiple optimizations can be done; unfortunately I don't have the time I thought I had, so I don't think I will be able to test this any time soon :(

As for FRAPS, I think I read that it uses GDI? Not sure though :-\
I think a simpler and more efficient approach might be to do CPU raycasts, but do many of them to random spots on the character's bounding box, and blend the results in over time, so the visibility value changes smoothly.

In other words, on one frame cast from the enemy's head to a spot on the player's left shoulder; if visible, count it as 1, otherwise zero.

visibility *= 0.95f;
visibility += this_visibility * 0.05f; // scale the new sample so the running average stays in [0, 1]

Next frame, do another raycast to a different spot on the character, etc.

And repeat each frame. That way you get a smoothly changing idea of visibility (similar to what you'd get by averaging spatially in a single frame).

The idea is to do temporal averaging rather than spatial, because it's cheaper, and at times looks better.

We use a similar approach for entity lighting in AG.
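Roughly like this (sketch; UpdateVisibility is a made-up name, and RayClear stands in for whatever line-of-sight raycast the engine provides):

#include <cstdlib>

struct Vec3 { float x, y, z; };

// Stand-in: uniform random point inside an axis-aligned box.
static float RandIn(float lo, float hi)
{
    return lo + (hi - lo) * (float)std::rand() / (float)RAND_MAX;
}
static Vec3 RandomPointInBox(const Vec3& mn, const Vec3& mx)
{
    Vec3 p;
    p.x = RandIn(mn.x, mx.x);
    p.y = RandIn(mn.y, mx.y);
    p.z = RandIn(mn.z, mx.z);
    return p;
}

bool RayClear(const Vec3& from, const Vec3& to); // engine raycast: true if unobstructed

// Called once per AI update for each observer/target pair.
void UpdateVisibility(float& visibility, const Vec3& eye,
                      const Vec3& boxMin, const Vec3& boxMax)
{
    Vec3 target = RandomPointInBox(boxMin, boxMax);
    float sample = RayClear(eye, target) ? 1.0f : 0.0f;
    // Exponential moving average: old samples decay while the new one
    // blends in, so 'visibility' settles between 0 (hidden) and 1 (seen).
    visibility = visibility * 0.95f + sample * 0.05f;
}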
Advertisement
Really, no matter what optimisation you can make, it comes down to a simple question - can you read from video memory without stalling the pipeline? If so, it's practical, and an interesting idea. If not, it isn't, no matter how infrequently you do it, as it will make the game jerky and unplayable. That question can probably be better answered in one of the graphics API forums.
I think you might benefit most by using some method of software occlusion culling. Yann L used a software renderer that would only render a Z buffer that was used to test if something should be drawn, and it gave a speed boost because it could work at the same time as the GPU (and of course it used low-poly models, no textures, etc). You could add to such a thing two more layers - an "ObjectID" field that is set to 0 for level geometry, 1 for first character, 2 for second, etc; and a "Brightness" that is a grayscale representation of everything. Since you don't need high resolution, you could probably do fine with a 16-bit Z buffer and 8 bits for the other two channels at a resolution of maybe 320*200. If, in addition, you use grossly simplified models for anything complicated, you could easily run several characters at a time with a decent framerate.
Since this could be done while waiting for the GPU to finish the frame (for the human players), etc, it'd probably be far faster than trying to use the GPU for it. It would also allow for far more customization, and could make use of the multi-core CPUs that are becoming somewhat common.
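The layered buffer could be as simple as this (a sketch; VisibilityBuffer and the field names are just for illustration):

#include <vector>

// 16-bit depth plus 8-bit object-ID and brightness channels at 320*200.
struct VisibilityBuffer {
    enum { kWidth = 320, kHeight = 200 };

    std::vector<unsigned short> depth;      // 16-bit Z
    std::vector<unsigned char>  objectId;   // 0 = level, 1+ = characters
    std::vector<unsigned char>  brightness; // grayscale light level

    VisibilityBuffer()
        : depth(kWidth * kHeight, 0xFFFF),
          objectId(kWidth * kHeight, 0),
          brightness(kWidth * kHeight, 0) {}

    // The software rasterizer calls this for each covered pixel.
    void Plot(int x, int y, unsigned short z, unsigned char id,
              unsigned char light)
    {
        const int i = y * kWidth + x;
        if (z < depth[i]) {   // standard depth test
            depth[i]      = z;
            objectId[i]   = id;
            brightness[i] = light;
        }
    }
};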
"Walk not the trodden path, for it has borne it's burden." -John, Flying Monk
Quote: Original post by Kylotan
Really, no matter what optimisation you can make, it comes down to a simple question - can you read from video memory without stalling the pipeline? If so, it's practical, and an interesting idea. If not, it isn't, no matter how infrequently you do it, as it will make the game jerky and unplayable. That question can probably be better answered in one of the graphics API forums.

Stall the pipeline in what way?

Are you speaking of the cost of reading from video memory?
I think this is an AGP-related issue, and with PCI Express cards this should not be a problem.

You should be aware (probably you are) of the fact that modern physics can be GPU-accelerated. How do they do that? They also need to read from video memory, and quite frequently. This is solved by PCIe. Anyway, PCIe is the standard for new cards and is replacing AGP.
And that's not to mention the benefit of running these visibility tests on dual-GPU systems...

@Extrarius
As I said before, the idea I proposed isn't supposed to take advantage of the hardware z-buffer for occlusion culling. That would be a side benefit, but the primary gain of my approach would be a realistic lighting impact on the AI's determination of visibility. I doubt that could be achieved with better performance on the CPU than on the GPU.
Quote: Original post by RedDrake
You should be aware (probably you are) of the fact that modern physics can be GPU-accelerated. How do they do that? They also need to read from video memory, and quite frequently. This is solved by PCIe. Anyway, PCIe is the standard for new cards and is replacing AGP.
And that's not to mention the benefit of running these visibility tests on dual-GPU systems...


My impression from seeing GPU-based physics research papers is that they are not for realtime situations like games. The reason they exist is for incredibly detailed physics sims where the calculations are costly (not realtime anyway), and they just use the GPU as a second processor to help speed things up. So the memory readback is not a concern compared to the cost of the calculations.

