
First-person terrain navigation frustum clipping problem

Started January 17, 2023 09:15 AM
24 comments, last by JoeJ 1 year, 10 months ago

Gnollrunner said:
As long as the near plane is also within that boundary, the problem should never occur.

That's the thing: the size of a player-character's collision-shape is likely (at least in my experience) to be small enough that a fairly-standard near-plane will fall outside of it.

(This is something that actually happened to me--hence, I daresay, my thinking to mention it. As I recall, many years ago I had a problem with clipping in my first-person camera, and the advice that I was given was to reduce my near-distance, as my near-plane was essentially “sticking out of” my character-controller.

I suppose that I could have made the controller bigger--but that may have made for a more-unwieldy character.)

Gnollrunner said:
With a 3rd person camera, you can also get very close to terrain, …

I suppose that this depends somewhat on the type of third-person camera in question: an over-the-shoulder follow-camera could well end up pointing at geometry at close range--albeit I would think still less-commonly than a first-person camera.

(The cases of the camera's sides, top, or bottom approaching near to geometry being a slightly different matter; you likely won't see much of the geometry in question that way.)

However, a distant third-person camera (think of certain types of RPG) may never draw near to any geometry at all--or when it does, may be able to just fade that geometry out as it approaches.

Gnollrunner said:
The case of a first-person camera is a bit easier because it should always be in the collision boundary of the player …

Oh, certainly a third-person camera is in many cases a more-complex thing than a first-person one! As you say, the positioning and collision-detection involved in an over-the-shoulder camera is trickier than in the case of a first-person camera.


Thaumaturge said:
(This is something that actually happened to me--hence, I daresay, my thinking to mention it. As I recall, many years ago I had a problem with clipping in my first-person camera, and the advice that I was given was to reduce my near-distance, as my near-plane was essentially “sticking out of” my character-controller.

That's what I meant by the ‘bad compromises’ we have to make. If you have terrain, you also have some draw distance, and to prevent Z-fighting you have to use a large distance for the near clipping plane.
One related workaround is to use different passes and projection settings for things like the first-person hands and gun model.
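As a rough sketch of that workaround (the renderer API and PerspectiveFov here are made up, not any real engine's interface):

```cpp
// Two-pass sketch: draw the world with a conservative near plane, then clear
// depth and draw the first-person hands/gun with a much closer near plane,
// so the viewmodel can never poke through nearby world geometry.
Mat4 worldProj = PerspectiveFov(fovY, aspect, /*near*/ 1.0f,  /*far*/ 20000.0f);
Mat4 propsProj = PerspectiveFov(fovY, aspect, /*near*/ 0.05f, /*far*/ 5.0f);

renderer.SetProjection(worldProj);
renderer.DrawWorld();

renderer.ClearDepth();               // world depth no longer constrains the viewmodel
renderer.SetProjection(propsProj);
renderer.DrawFirstPersonModel();
```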

I have never tried inverted depth and don't know how much it helps with this problem. People often say it solved everything for them, but the discussion is usually focused on large draw distances and z-fighting, not on the close-up issues.
Maybe somebody can tell…


I have yet to understand why you just can't use normal floating-point distance from the camera as Z; at least, that's what I do. I even use the same scale for far-off planets. I suppose it depends on what you are culling, but if you don't have inordinate detail at range, it seems like it shouldn't be a problem. Some of this may be left over from the days when you had fewer depth bits, but I can't say for sure.

It might also have to do with the fact that I never actually go to world space on the GPU. Everything goes directly to view space, after which I apply perspective. This means that every unique transformation has to be sent down on every frame, but for my particular application it's not really a problem.

Gnollrunner said:
I have yet to understand why you just can't use normal floating-point distance from the camera as Z; at least, that's what I do.

I'm also very uncertain about this, because my software rasterizers had no ‘NDC’ or ‘clip space’ like GPUs do, and I never tried to learn more about the related details.
But afaict, you can't use world-space floating-point distance for Z. It depends on the setup of your projection matrix: view-space distance is mapped to the range 0 to 1 between the near and far planes.
So even if you use infinity for the far plane, the GPU will do this mapping to the 0 to 1 NDC range, which is then used for the depth buffer. And there is no way (but also no need) to bypass this remapping.
I'm confused, as you sound like you do such a bypass. How does your approach differ from the conventional method of using Model, View, and Projection matrices?
Even if you combine M and V on your side, you still have to use P, which still defines the perspective projection and Z range as usual?
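To make the remapping concrete, here is roughly what the fixed-function stage computes for a standard D3D-style projection with a [0, 1] depth range (a sketch, not tied to any particular API):

```cpp
// View-space z in [n, f] is remapped to NDC depth in [0, 1] by the projection
// matrix plus the hardware perspective divide; raw view-space distance never
// reaches the depth buffer directly.
float NdcDepth(float zView, float n, float f)
{
    float zClip = zView * f / (f - n) - f * n / (f - n); // projection matrix row
    float wClip = zView;                                 // perspective w
    return zClip / wClip;                                // hardware divide
}
// NdcDepth(n, n, f) == 0 and NdcDepth(f, n, f) == 1; the curve between them
// is strongly non-linear, with most precision packed near the near plane.
```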

Gnollrunner said:
but if you don't have inordinate detail at range, it seems like it shouldn't be a problem.

Yeah, it depends on that detail. If you use automated LOD reduction, you won't have many problems. Even a whole planet at distance is just a sphere, which won't cause z-fighting, because LOD acts as a low-pass filter on depth complexity.

Where I notice the problem a lot is in my debug visualizations. E.g., I want to see a wireframe overlay over my geometry, but I'm not willing to invest time in doing this properly, e.g. setting up an equal Z test, bias stuff, or even specific shaders. All I do is displace the edges by a small constant for the wireframe render.
And with this example, z-fighting happens very easily. Even for small scenes at a scale of 100 meters, I need to set the near clip distance to 1 m to get a stable visualization.
What I end up doing is keeping a GUI for camera controls open all the time, and I use it a lot to tweak the near/far values to something that works well enough for my current situation.
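The displacement hack might look something like this (a self-contained sketch; the offset constant is the part that needs constant retuning):

```cpp
#include <cmath>

// Nudge each wireframe vertex a small constant distance toward the camera so
// the lines win the depth test against the underlying surface. Once depth
// precision at a given distance becomes coarser than the offset, z-fighting
// returns -- which is the problem described above.
struct Vec3 { float x, y, z; };

Vec3 DisplaceTowardCamera(Vec3 v, Vec3 cam, float offset /* e.g. 0.01f */)
{
    Vec3 d = { cam.x - v.x, cam.y - v.y, cam.z - v.z };
    float len = std::sqrt(d.x * d.x + d.y * d.y + d.z * d.z);
    return { v.x + d.x / len * offset,
             v.y + d.y / len * offset,
             v.z + d.z / len * offset };
}
```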

Sure, the example is not practical for real games. But it feels bad enough to make me worry about things like putting foliage or little rocks on the terrain. Terrain LOD won't help us with those issues.
So I hope inverted depth is indeed the problem solver people say it is.
It works by swapping the spot where floating-point accuracy is highest (close to zero). The usual projection has this at the near plane, which makes sense, as there is more detail to expect there.
But we already get this effect from the z divide as well, so traditional projection doubles it, causing very bad precision at large distances.
If we put zero depth, and thus high precision, at the far plane instead, we get a more linear distribution of precision, which seems the better compromise. Afaik, pretty much all games now use inverted depth.
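For reference, switching to inverted depth is mostly three small changes (sketched against a made-up device API; assumes a [0, 1] depth range as in D3D or Vulkan):

```cpp
// Reversed-Z sketch:
//  1) build the projection with near and far swapped, so the near plane lands
//     at depth 1 and the far plane at depth 0,
//  2) clear the depth buffer to 0 instead of 1,
//  3) flip the depth comparison from LESS to GREATER.
Mat4 proj = PerspectiveFov(fovY, aspect, /*near*/ farDist, /*far*/ nearDist);

device.ClearDepth(0.0f);
device.SetDepthCompare(Compare::Greater);
```

Note that the precision win relies on a floating-point depth buffer (e.g. a D32_FLOAT format); with a 24-bit integer depth buffer, inverting gains very little.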


JoeJ said:
I'm confused, as you sound like you do such a bypass. How does your approach differ from the conventional method of using Model, View, and Projection matrices?

So yeah, I'm basically constrained by DirectX and everything has to fit into its expected space. However, I do take the projection out of the MVP matrix, so it's just an MV matrix; I do the P part outside of the matrix with my own rather odd system. Before, I had a lot of trouble with stuff flickering at very large distances when P was in the matrix. This wasn't Z-fighting: I was rendering a large sphere in chunks, and some of the chunks would apparently jump in and out of camera space as I moved.

So, as I said, I now do the projection as a post-step. For my “w”, I always use a very large power of two. The “theory” (more like a wild guess) behind this is that a power-of-two division should be simple and shouldn't mess with the mantissa bits. So it's not exactly using the Z values directly, but it's as close as I can get. In any case, it solved my problem, at least for now.
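The lossless part of that guess can be sketched in a few lines:

```cpp
#include <cstdio>

// Dividing a float by a power of two only decrements the exponent; the
// mantissa bits pass through untouched, so the scaled depth is an exact
// rescale of view-space distance (as long as nothing underflows).
int main()
{
    float zView = 123456.789f;
    float w     = 16777216.0f;      // 2^24, an arbitrary large power of two
    float depth = zView / w;        // exact: same mantissa, smaller exponent
    std::printf("%.9g -> %.9g\n", zView, depth * w); // round-trips exactly
}
```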

Sounds pretty interesting. Like problems from the other side of the extreme, in some sense.

Meanwhile, from my perspective I already feel worried about my attempt to create something like a 16^2 km open world. : )


JoeJ said:

Meanwhile, from my perspective I already feel worried about my attempt to create something like a 16^2 km open world. : )

You can always do the same thing as I do: mainly, use double on the CPU, and then convert and send down your transformation matrices on every frame, to avoid world coordinates GPU-side. This should let you make a world as big as you like with no issues.

However, I've found that some people are resistant to using double, but I don't really think the speed is an issue on a reasonably modern computer. There is a space issue of course, but then doing all the workarounds to avoid double has some costs too and makes things harder to deal with. I have the luxury of having planets where (0,0,0) is always at the core, so I don't have to worry about base shifting or precision problems. There's a consistent coordinate system across the entire world with sub-millimeter resolution.
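In sketch form, the per-frame conversion could look like this (Mat4d/Mat4f are stand-ins for whatever math library is in use):

```cpp
// Concatenate model and view in double precision so the huge world-space
// translations cancel each other, then narrow the small camera-relative
// result to float for the GPU -- once per object, every frame.
Mat4f ToFloat(const Mat4d& m);   // element-wise double -> float narrowing

Mat4f BuildModelView(const Mat4d& model, const Mat4d& view)
{
    Mat4d mv = view * model;     // large coordinates cancel here, in double
    return ToFloat(mv);          // what remains is small and float-safe
}
```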

Gnollrunner said:
You can always do the same thing as I do, mainly use double on the CPU

I wanted to use doubles at least for the editor.
But recently I have found out that's a no-go as well, due to memory and storage costs. Likely I even have to use quantized vertex positions during production already, for better compression.
The reason is that I need to store intermediate data to support local changes and out-of-core processing. That becomes double-digit terabytes pretty quickly.
And worse, due to the out-of-core processing, each tile needs to go through the hard disk multiple times to execute the whole pipeline. I have a hard time hiding the disk latency behind actual processing.
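For illustration, per-tile quantization can be as simple as this (a sketch; the 16-bit resolution and per-tile bounds are assumptions):

```cpp
#include <cstdint>

// Store each coordinate as a 16-bit integer relative to the tile's bounding
// interval [lo, hi]: 2 bytes per component instead of 8 for a double.
uint16_t Quantize(double v, double lo, double hi)
{
    double t = (v - lo) / (hi - lo);          // normalize into [0, 1]
    return (uint16_t)(t * 65535.0 + 0.5);     // round onto the 16-bit grid
}

double Dequantize(uint16_t q, double lo, double hi)
{
    return lo + (q / 65535.0) * (hi - lo);    // back to world units
}
```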

My primary goal is not geometry but the placement of hierarchical GI probes. And because lighting is potentially unique and of uniform resolution everywhere, I cannot use the usual tricks like low-res heightmap terrain with some instances of buildings or rocks on it.
I also want to avoid self-intersections or overlaps of 3D models, because probes inside solid space would waste runtime cycles and cause visual glitches.
So what I do is create a density volume of the scene, extract an isosurface, and do curvature-aligned remeshing, where each resulting face gives me one probe at one level of detail.
Unfortunately this brings back the preprocessing costs which other realtime GI methods can avoid. For large worlds, this cost really becomes a production problem, ofc.
To compensate, I try to use the meshes for the rendered geometry with LOD as well. Having unique geometry everywhere would enable some new things, and procedural generation is often much easier with volumetric methods than with meshes.

So, although we work on somewhat similar things, our challenges completely differ, because you do realtime and I do offline. : )

Gnollrunner said:
However, I've found that some people are resistant to using double

Yeah, I'm one of them. I'm mostly concerned about the extra cost for runtime physics. But that's because I have robotic character simulation in mind, which is way more than capsules and rigid spaceships.
However, Star Citizen proves doubles work, obviously. : )

JoeJ said:

I wanted to use doubles at least for the editor.
But recently I have found out that's a no-go as well, due to memory and storage costs.

I don't know all the ins and outs of your project, but keep in mind that models which fit well within single-precision range are still stored in single; that can include terrain chunks. My stuff is mostly run-time procedural so that isn't relevant for me, but in the common case most disk storage would be in single. Only some things expanded in memory would turn into double.

Gnollrunner said:
However, I've found that some people are resistant to using double

Yeah, I'm one of them. I'm mostly concerned about the extra cost for runtime physics.

Well, double is at most about 30% slower, and often closer to 20%. That seems like a small margin to be worried about given all the other factors, but OK.

Gnollrunner said:
Well, double is at most about 30% slower, and often closer to 20%.

Wow, when I last tested this, which was around the year 2000, the cost was much higher: basically twice.
I tried to google it, and found this confirming your numbers. Good to know.

But it makes me wonder. Apparently the actual execution in the CPU is not the problem, but I would have expected memory bandwidth to be.
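One way to check would be a quick streaming micro-benchmark (a sketch; real numbers depend heavily on compiler flags and hardware):

```cpp
#include <chrono>
#include <cstdio>
#include <vector>

// Sum large arrays in float and double. The arrays are far bigger than the
// caches, so the double version streams twice the bytes; if memory bandwidth
// is the bottleneck, it should take roughly twice as long.
template <typename T>
double SumSeconds(const std::vector<T>& data, T& out)
{
    auto t0 = std::chrono::steady_clock::now();
    T sum = 0;
    for (T v : data) sum += v;
    out = sum;
    auto t1 = std::chrono::steady_clock::now();
    return std::chrono::duration<double>(t1 - t0).count();
}

int main()
{
    const size_t n = 1u << 26;              // ~67M elements, well past cache
    std::vector<float>  f(n, 1.0f);
    std::vector<double> d(n, 1.0);
    float fs = 0; double ds = 0;
    double tf = SumSeconds(f, fs);
    double td = SumSeconds(d, ds);
    std::printf("float : %.3f s (sum=%g)\n", tf, (double)fs);
    std::printf("double: %.3f s (sum=%g)\n", td, ds);
}
```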
