
making game map larger without increasing drawing cost

Started by October 02, 2022 04:23 AM
22 comments, last by JoeJ 2 years, 3 months ago

PolarWolf said:
This is what I mean: maybe in the vertex shader, or after it but before the fragment shader, when the 3 vertices of a triangle are all out of view, the pipeline will cull this triangle, and if a triangle is partially visible, the pipeline will cull the invisible part. In both cases the culling happens once per triangle, which is why I assume a lower triangle count makes the pipeline cull more efficiently.

That's right, yes. Such culling can only happen after the vertex shader, because only then is it known whether the triangle is fully on screen, clipped by the screen edges, or off screen entirely.
But on GPUs this clipping really is irrelevant to us, and no motivation for specific optimisations or compromises related to content. And we do not need to know how this clipping process precisely works at all.

As with anything that relies on parallel processing, we always have to work and optimize on larger batches, not single units. So we cull either entire models, or clusters of many triangles.
And we do this only coarsely. For example, testing a bounding box against the frustum to cull an entire model is quick and simple, and thus worth it.
In contrast, if we used a more accurate and complicated test, e.g. like Quake's BSP levels, we would get very accurate culling almost per triangle, but the complexity and fine granularity of the algorithm would cost us more than we save.
Then, on a modern GPU, it would be faster to do no culling at all and render the entire Quake level as a single mesh.
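
For illustration, here is a minimal sketch of such a coarse test, assuming the frustum is given as six inward-facing planes and each model has a world-space bounding box. All types and names are illustrative, not taken from any particular engine.

Code:

struct Plane { float nx, ny, nz, d; };          // plane equation: n·p + d >= 0 means "inside"
struct AABB  { float min[3], max[3]; };

bool AabbOutsidePlane(const AABB& box, const Plane& pl)
{
	// Pick the corner of the box that lies furthest along the plane normal.
	float px = (pl.nx >= 0.f) ? box.max[0] : box.min[0];
	float py = (pl.ny >= 0.f) ? box.max[1] : box.min[1];
	float pz = (pl.nz >= 0.f) ? box.max[2] : box.min[2];
	// If even that corner is behind the plane, the whole box is outside.
	return pl.nx * px + pl.ny * py + pl.nz * pz + pl.d < 0.f;
}

bool AabbIntersectsFrustum(const AABB& box, const Plane frustum[6])
{
	for (int i = 0; i < 6; ++i)
		if (AabbOutsidePlane(box, frustum[i]))
			return false;	// definitely not visible, skip the whole model
	return true;			// possibly visible (conservative), so draw it
}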

But: notice how the introduction of GPUs has, over decades, educated us to replace optimization for work efficiency with dumb, simple brute-force solutions. This worked until now, but now that Moore's Law is dead, we need to change our mindset again, going back to complex algorithms to achieve further progress in software while progress on hardware stagnates and becomes unaffordable. Thus, this quote:

PolarWolf said:
As for occlusion culling, I think the overhead will be too much

is not actually true in general. It's the opposite: we now need to research complicated algorithms again to get further progress. UE5 Nanite is again a good example.

But the effort and complexity are much higher now than they were back when Quake was awesome.
So you have to choose: do I want to make a game, or do I want to make cutting-edge rendering tech?

Likely you don't have the time for both, and thus I try to guide you towards using the simple solutions first and seeing if you get good enough performance for your goals. Usually that should be the case.

This problem is also part of why so many people nowadays use off-the-shelf engines: that way they outsource the technical problems to experts who work full time on just that.

As you work on your own engine, you need to find some middle ground. You likely can't expect to beat those engines, but you gain a lot of experience and flexibility. And it's a good long-term investment, I think.

Thaumaturge said:
Only if the camera is scaled with the environment

No. It does not even make much sense to ‘scale a camera’? I've never heard anybody say this before.
But before we get lost in a discussion about semantics, this is the example as I understood it was being asked about:

We have a camera at position (0,0,0), and we have an arbitrary scene. Let's ignore the near and far clip planes, assuming they are at 0 and infinity (if that would work).
If we now scale the scene about the origin, the projected image from our camera does not change. You cannot observe that any scaling is going on at all. Thus the number of on-screen objects remains the same as well.
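
A quick way to see why, assuming a standard perspective divide with the camera at the origin: a point (x, y, z) projects to (x/z, y/z), and the scaled point (sx, sy, sz) projects to (sx/(sz), sy/(sz)), which is the same screen position. A minimal sketch (names and values are illustrative only):

Code:

#include <cassert>
#include <cmath>

int main()
{
	const float p[3]  = { 3.f, -2.f, 10.f };				// some point in front of the camera
	const float s     = 100.f;								// scale the whole scene about the origin
	const float ps[3] = { p[0] * s, p[1] * s, p[2] * s };

	// Projected screen coordinates before and after scaling.
	float u0 = p[0] / p[2],   v0 = p[1] / p[2];
	float u1 = ps[0] / ps[2], v1 = ps[1] / ps[2];

	// The scale factor cancels in the perspective divide, so the image is unchanged.
	assert(std::fabs(u0 - u1) < 1e-6f && std::fabs(v0 - v1) < 1e-6f);
	return 0;
}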

If you still disagree, one of us has to try it out, I guess?


JoeJ said:
No. It does not even make much sense to ‘scale a camera’? I've never heard anybody say this before.

If the camera is a node in the scene, then transforms can be applied to it--just as it can be translated, so too can it be scaled. In effect it (largely) just means that the scale applied will affect the relevant matrix during rendering.

JoeJ said:
If you still disagree, one of us has to try it out, i guess?

I do still disagree--but I'm not sufficiently invested in arguing the matter. And as noted, it's tangential to the more-important advice regarding profiling.

So, let's just agree to disagree, yes?

[edit] On reflection, you may be right for the specific case of the camera being placed at (0, 0, 0).


Thaumaturge said:
So, let's just agree to disagree, yes?

Sure. I was thinking of rendering a box, then duplicating it and scaling around the camera center. The wireframes should match up so we see only one.

But your proposal is the faster solution : )

@JoeJ That's very informative. What I like about forums is that I often learn things that are practically useful and that I would hardly get from books. Thanks a lot.

JoeJ said:
If we now scale the scene about the origin, the projected image from our camera does not change.

This seems wrong to me. Perspective depth means things in the distance would be smaller. In general (as someone who has done a lot of LOD) I think scaling isn't really the correct discussion. The main thing with LOD is the transitions from higher to lower resolution. Scaling everything up just means you have much less detail, which is not typically what you want.


Gnollrunner said:
This seems wrong to me.

Hard work on box rendering triggered…

Gnollrunner said:
In general (as someone who has done a lot of LOD) I think scaling isn't really the correct discussion.

I think PolarWolf only used this as an example to illustrate the conclusion that distant geometry can have less detail than close geometry.

I'm done with the proof (the camera settings are not important; I'm just showing them so we can see the camera is not at the origin and not axis-aligned, so the result does not depend on some special case):

As you can see, the close and distant meshes would look exactly the same if I did not use some displacement along the surface normals.

Code:

static bool visScalingRiddle = 1; ImGui::Checkbox("visScalingRiddle", &visScalingRiddle);
if (visScalingRiddle)
{
	static HEMesh mesh;
	static bool init = 1;
	if (init)
	{
		((HEMesh_Import&)mesh).AutoLoadModel ("C:\\dev\\data\\mod\\Armadillo_input.obj");
		init = 0;
	}

	matrix close = cameraMatrix;
	close[3] += close[2] * -30;

	simpleVis.LoadModelviewMatrix((float*)&close);
	((HEMesh_Vis&)mesh).RenderMeshWireFrame(0, 1,0,0); // red
	simpleVis.SetModelviewMatrixToIdentity();

	float scale = 100.f;
	
	matrix distant = close;
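	// Scale the copy about the camera position: scale the basis vectors, and scale the
	// camera-to-mesh offset by the same factor, so the copy ends up 'scale' times
	// farther away while also becoming 'scale' times larger.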
	distant[0] *= scale;
	distant[1] *= scale;
	distant[2] *= scale;
	distant[3] = (close[3] - cameraMatrix[3]) * scale + cameraMatrix[3];

	simpleVis.LoadModelviewMatrix((float*)&distant);
	((HEMesh_Vis&)mesh).RenderMeshWireFrame(0.01f, 0,1,0); // green, rendering with slight displacement along the normals to force it visible
	simpleVis.SetModelviewMatrixToIdentity();
	
}

@JoeJ Actually you are right. For some reason I was thinking of scaling an object at some distance, but if you are scaling the whole scene I think it doesn't matter. Still, none of this is relevant to LOD.

Gnollrunner said:
Still, none of this is relevant to LOD.

I brought the LOD topic up because the initial motivation seemed to be having similar geometry resolution in screen space for distant stuff as well. That's basically what LOD techniques try to achieve, so imo it is relevant.
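
As an aside, a minimal sketch of what a distance-based LOD selection might look like, assuming each model stores a few pre-built meshes and a bounding radius (all names and thresholds are illustrative):

Code:

#include <cmath>

// Pick a LOD index so the projected size of the model stays roughly constant on screen.
int SelectLod(float boundingRadius, float distanceToCamera, float screenHeightPixels,
	float verticalFovRadians, int lodCount)
{
	// Approximate projected height of the bounding sphere in pixels.
	float projected = (boundingRadius / distanceToCamera)
		* (screenHeightPixels / (2.f * std::tan(verticalFovRadians * 0.5f)));
	// Drop one level of detail every time the projected size halves.
	int lod = 0;
	for (float threshold = 256.f; projected < threshold && lod < lodCount - 1; threshold *= 0.5f)
		++lod;
	return lod;
}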

Anyway, I just connected some dots:

In offline rendering, LOD is rarely needed. They just model the scene at the resolution they need, and the camera is mostly static.
And in games we initially did something similar. By designing levels so the view distance is always blocked and thus short, or by using distance fog and culling, level geometry never became too tiny and expensive.

Later, just a few years ago, some greedy folks had the idea to sell realtime raytracing to gamers, promising a visual upgrade and finally reaching offline CGI quality.
So they looked at offline raytracing standards, because that's their holy grail. And they already had an API to do this classic but legacy raytracing stuff, so they sold this to Microsoft and started making huge chips for $2000 GPUs.

Now I understand why those brainless amateurs forgot to give us any option to raytrace continuous LOD geometry. Because for offline raytracing, LOD doesn't seem that important. >:(

Gnollrunner said:
If we now scale the scene about the origin, the projected image from our camera does not change.

This seems wrong to me. Perspective depth means things in the distance would be smaller.

I can confirm that it works; I just implemented something similar for my solar-system-scale renderer. I did a test on a small scale with some boxes and took screenshots to verify that the output of the scaled rendering is identical (I can't post them because they're already deleted). This is true because there is a ray extending from the camera position through a pixel to infinity, and all that is done is to scale the position along that ray, which is invisible to the camera as long as the object is also scaled by the corresponding amount. You also need to do things like scale light-source intensities (by the square of the scale factor) and light attenuation factors to get the lighting correct. This scaling along view rays is a fundamental issue in the field of computer vision, where there is always a scale ambiguity in the 3D reconstruction of monocular views.
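
To illustrate the lighting part, a minimal sketch assuming a point light with plain inverse-square falloff: scaling all positions by s scales the light-to-surface distance by s, so the received light drops by s², and multiplying the light intensity by s² restores the original shading (all names are illustrative):

Code:

#include <cassert>
#include <cmath>

// Light arriving at a surface point from a point light with inverse-square falloff.
float Irradiance(const float lightPos[3], const float surfPos[3], float intensity)
{
	float dx = lightPos[0] - surfPos[0];
	float dy = lightPos[1] - surfPos[1];
	float dz = lightPos[2] - surfPos[2];
	return intensity / (dx * dx + dy * dy + dz * dz);
}

int main()
{
	const float light[3] = { 0.f, 10.f, 0.f };
	const float surf[3]  = { 3.f,  0.f, 4.f };
	const float s = 100.f;								// scene scale factor

	// Scale every position about the origin, and the intensity by s^2.
	const float lightS[3] = { light[0] * s, light[1] * s, light[2] * s };
	const float surfS[3]  = { surf[0] * s,  surf[1] * s,  surf[2] * s };

	float before = Irradiance(light,  surf,  1.f);
	float after  = Irradiance(lightS, surfS, 1.f * s * s);

	// Distance squared grows by s^2, which the intensity scale cancels exactly.
	assert(std::fabs(before - after) < 1e-6f * before);
	return 0;
}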

