
Sick rendering performance cheats

Started by RmbRT, January 21, 2024 11:04 AM
4 comments, last by RmbRT 10 months, 1 week ago

I'm trying to come up with as many performance tricks as possible for rendering detailed game worlds on integrated graphics chips (i.e., on constrained hardware), targeting OpenGL 3.3. I want to render somewhat pleasing 3D graphics at high framerates even on hardware that is not meant for games, like lower-end laptops. I came up with a few ideas, but would really appreciate any input and suggestions.

  1. Cubemapping faraway static scenery. So far, the most promising candidate I have come up with is to reduce geometry by dynamically generating a skybox that contains a 360° snapshot (colour + depth) of all static, faraway geometry, since the perspective of faraway things does not change much if camera movement is slow. This lets me render the whole (static part of the) scene with a distant near clipping plane, potentially spread across multiple frames, and then in regular frames I can pull the far clipping plane in to roughly where the skybox starts. Of course, I need to take some care that there are no glaring artifacts like seams, and I may need to correct the depth buffer when the camera moves. The idea is to use that skybox to render faraway static geometry after close-up geometry has been rendered, and then I can have a third render pass that renders distant dynamic geometry by using the cubemap's depth map as a depth test. The cubemap only has to be updated once I move too far from its initial perspective, and it can be constructed ahead of time by extrapolating my current movement. If I only have to re-render the cubemap every second or two, I can amortise the effort of rendering it over the course of maybe a second, which leaves me with basically zero cost for almost perspective-correct renderings of huge sceneries without completely fading out objects (as long as they're static or near-static). This technique also seems good at hiding terrain seams, or I could use some geometry distortion that tries to keep the edge of the world mapped to its position in the cubemap even after the camera has been displaced. Distortion may cause problems with dynamic geometry that is far away, though (especially in something like a sharpshooting setup, where angular correctness matters, but those cases can probably be handled separately). The cubemap should not incur any additional cost, since you would otherwise use a cubemap for the sky anyway, albeit maybe at a lower resolution. (A rough sketch of how such a colour + depth cubemap could be set up follows after this list.)
  2. Sprites/Holograms to substitute geometry. Another obvious trick, but with obvious artifacts, is drawing dynamic geometry as 2D sprites (somewhat like Doom or early 3D MMOs). But if I play a game on shitty hardware, I would probably appreciate 30+ FPS more than visual appeal. Does anyone have good references regarding this technique? I am sure some people have tried really hard to get multiple perspectives and animations working with this approach. Is it all based on pre-rendering, or could a caching strategy like re-rendering a model to a framebuffer at regular intervals work too? Of course I would also settle for pre-rendered sprites, but if possible I want to get more perspective correctness out of it, mainly so that I can properly tell which way a figure is facing instead of having it snap to 45° or 22.5° increments or something like that, which is important for communicating whom an enemy is targeting. I think getting the orientation across is more important than a perspective-correct display of the character model. Maybe a technique like those ridged holographic postcards (lenticular prints), which hold multiple perspectives of the same object, would work? When well made, those really feel like perspective-correct 3D if you tilt them around only a little, and then they fade/snap to a new perspective if you tilt them further. Is that technically different from just imprinting many different angles, or is there a way to do it from a single snapshot? Has anyone ever written a shader for that? I guess it is somewhat like parallax mapping. What is the cost of parallax mapping on integrated GPUs compared to rendering proper geometry? Is it compatible with transparency/stencil? I have never used parallax mapping before. I should be able to get away with parallax-mapped sprites for enemies and other things, which would let me turn them into plausible semi-3D objects as long as I don't go overboard on the angular distortion. This ideally applies to geometry that is mostly unanimated, because smooth animations become really hard to achieve with traditional precomputed sprites. (The basic parallax offset is sketched in the second snippet after this list.)
  3. I never really tried bone/joint-animated geometry on the GPU before, but I think you can most likely also get more performance by reducing keyframe density / supplying fewer transformation updates between which to interpolate over time. I may be out of my depth on this one.
  4. Psychological rendering. I know that term is probably made up. But I was particularly impressed by this guy's philosophy (starting at roughly 14:20) regarding painting detailed scenes, especially the idea that the “negative space” between objects is more relevant to the observer than the outlines of the objects themselves, and that hinting at detail or a recurring pattern is enough to make you think there is detail or a recurring pattern. I was wondering: if you can do so much with just lines and a bit of shading, how much further could this approach be taken if you add colour and textures? Or does a fully coloured picture detract from the psychological effect? I hope to find a way to reduce rendering complexity by using very simplistic graphics that still seem complex and detailed through psychological effects.
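Regarding point 1, this is roughly how I imagine the snapshot pass working with plain OpenGL 3.3 calls. This is only a sketch, assuming a working GL context and function loader; renderStaticDistantScenery() and faceViewMatrix() are placeholders for my own scene code:

```cpp
// Sketch only: bake distant static geometry into a colour + depth cubemap.
// Assumes an existing OpenGL 3.3 context and a loader (GLAD/GLEW).
const int SIZE = 1024;
GLuint colorCube, depthCube, fbo;

glGenTextures(1, &colorCube);
glBindTexture(GL_TEXTURE_CUBE_MAP, colorCube);
for (int face = 0; face < 6; ++face)
    glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X + face, 0, GL_RGBA8,
                 SIZE, SIZE, 0, GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

glGenTextures(1, &depthCube);
glBindTexture(GL_TEXTURE_CUBE_MAP, depthCube);
for (int face = 0; face < 6; ++face)
    glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X + face, 0, GL_DEPTH_COMPONENT24,
                 SIZE, SIZE, 0, GL_DEPTH_COMPONENT, GL_FLOAT, nullptr);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);

// Rendering one face per frame would amortise the snapshot over ~6 frames.
for (int face = 0; face < 6; ++face) {
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_CUBE_MAP_POSITIVE_X + face, colorCube, 0);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                           GL_TEXTURE_CUBE_MAP_POSITIVE_X + face, depthCube, 0);
    glViewport(0, 0, SIZE, SIZE);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    // Placeholder: render the distant static scene with a 90° FOV camera
    // looking down this face's axis, near plane pushed out past the close-up geometry.
    renderStaticDistantScenery(faceViewMatrix(face));
}
glBindFramebuffer(GL_FRAMEBUFFER, 0);
```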
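And for point 2, as far as I understand it, basic parallax mapping boils down to a view-dependent UV shift. Written out with GLM purely for illustration (in a real shader this would be a couple of lines in the fragment stage, with the height coming from a texture sample):

```cpp
// Illustration only: the per-pixel UV shift of basic parallax mapping,
// written with GLM so the math is explicit. 'viewDirTangent' is the
// tangent-space direction from the surface point towards the camera.
#include <glm/glm.hpp>

glm::vec2 parallaxOffsetUV(glm::vec2 uv, glm::vec3 viewDirTangent,
                           float height, float scale)
{
    // Shift the texture coordinate against the view direction, proportional
    // to the sampled height. Dividing by z exaggerates the shift at grazing
    // angles (classic parallax mapping); dropping the division gives the
    // cheaper "offset limiting" variant.
    glm::vec2 shift = glm::vec2(viewDirTangent.x, viewDirTangent.y)
                    / viewDirTangent.z * (height * scale);
    return uv - shift;
}
```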

If anyone has additional suggestions or criticism, I thank you in advance.

Walk with God.

For holographic sprite enemies, I think it should be sufficient as long as you have real-time updates whenever an animation starts or reaches key positions. It's important to be able to tell the timing of character actions, so that you can dodge or defend in time. But you don't actually need 60 updates per second; you just need the important cues to be presented on time, I think. Whether you achieve that with a mix of actual 3D rendering and pre-rendered sprites, or only with on-demand rendering into a sprite on update, would be up to you. If it's affordable, the game should probably render close-up / timing-relevant enemies in fully animated 3D, but everything that is not timing-critical can easily be rendered into a holo sprite every few frames or even less often, and so can anything that is further away. Only when something turns more than, say, 22.5° from its initial orientation would you have to re-render it into a new hologram; before that, you can simply rotate the sprite in perspective and the parallax mapping will make it look somewhat realistic (a small sketch of that refresh check follows below). Of course, parallax mapping does not work well with multi-layered/self-occluding shapes, since it cannot show anything that was not visible from the initial perspective, but that should be a tolerable price to pay.
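The refresh decision itself should be very cheap. Something like this is roughly what I have in mind (GLM, names made up):

```cpp
// Sketch: only re-render an impostor once the view direction has drifted more
// than a threshold (e.g. 22.5°) from the direction it was captured at.
// Comparing cosines avoids an acos per impostor. Names are illustrative.
#include <cmath>
#include <glm/glm.hpp>

bool impostorNeedsRefresh(const glm::vec3& capturedViewDir,
                          const glm::vec3& currentViewDir,
                          float maxAngleDegrees = 22.5f)
{
    float cosDrift = glm::dot(glm::normalize(capturedViewDir),
                              glm::normalize(currentViewDir));
    // cos() is decreasing on [0°, 180°], so "angle > threshold"
    // is equivalent to "cos(angle) < cos(threshold)".
    return cosDrift < std::cos(glm::radians(maxAngleDegrees));
}
```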

I think a parallax shader would also greatly complement the scenery cubemap, although at a performance cost. So I would make that optional.

Walk with God.


RmbRT said:
Cubemapping faraway static scenery.

To reduce the depth error, you could use reprojection tricks.
Games use this, for example, to render distant shadow cascades only every Nth frame, and apply reprojection to ‘interpolate’ the time in between.
It basically means storing the world position per pixel, so you can project it into the current screen space. Per-pixel velocity can additionally be used to account for motion.
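In its simplest form the reprojection is just transforming the cached per-pixel world position with the current frame's matrices. Roughly like this (GLM, illustrative names; in practice it's a few lines of shader code and the world position comes from a G-buffer texture):

```cpp
// Sketch: reproject a world position cached from an earlier frame into the
// current frame's screen space.
#include <glm/glm.hpp>

glm::vec2 reprojectToCurrentScreen(const glm::vec3& cachedWorldPos,
                                   const glm::mat4& currentViewProj)
{
    glm::vec4 clip = currentViewProj * glm::vec4(cachedWorldPos, 1.0f);
    glm::vec3 ndc  = glm::vec3(clip) / clip.w;   // perspective divide
    return glm::vec2(ndc) * 0.5f + 0.5f;         // NDC [-1,1] -> UV [0,1]
}
```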

RmbRT said:
Sprites/Holograms to substitute geometry.

It's called ‘Impostors’ and is widely used.
There is also the idea of generalizing this, combining the advantages of texture-space shading (decoupling lighting from framerate) with getting rid of the cost of rendering every pixel again and again each frame. Basically, we want a very cheap way to build an image without having to recalculate lighting or go through all the geometry again.
An advanced example is this.
It was used to make a very impressive Blade Runner game with high-fidelity graphics running on mobile-class VR hardware:

Could not find a longer video quickly.
Main problem: It's revolutionary and breaks with established engines and tools. They have likely discontinued the promising project because devs would rather use Unity. : )

RmbRT said:
I never really tried bone/joint-animated geometry on the GPU before, but I think you can most likely also get more performance by reducing keyframe density / supplying fewer transformation updates between which to interpolate over time.

Not that promising. You read the animation data once every frame, and caches will be cold no matter what. So there is no win from fewer samples; you need to read them once anyway.

RmbRT said:
Psychological rendering.

Yikes.
Sounds like… ‘The brain combines sparse samples from the eyes to generate a dense image. So let's skip the eyes, send sparse samples directly to the brain, and thus save the cost of generating dense samples for display.’ : )

But we can't do this, at least not yet, thankfully. So we must generate dense samples to fool our eyes.

The only way around that is 'foveated rendering', which is extremely promising, but requires an eye tracker on every display. : (

But I have only read the headlines, ignoring all the text. : )

The classic ways to reduce rendering costs are hidden surface removal (occlusion culling) and LOD.
Both are still open problems, but UE5 shows big progress on both. Somebody explaining such modern occlusion culling.
But I don't know whether OpenGL 3 is capable of running compute shaders.
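LOD at its most basic is just a distance-based switch, e.g. (thresholds and the meaning of the returned index are made up for illustration):

```cpp
// Minimal sketch of classic distance-based LOD selection.
#include <glm/glm.hpp>

int pickLodLevel(const glm::vec3& cameraPos, const glm::vec3& objectPos)
{
    float d = glm::distance(cameraPos, objectPos);
    if (d < 25.0f)  return 0;   // full-detail mesh
    if (d < 100.0f) return 1;   // reduced mesh
    return 2;                   // impostor / billboard
}
```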

RmbRT said:
For holographic sprite enemies

For some time it was common to render sprites from 3D models, mainly in top-down strategy / management sims showing many characters. The sprites were reused over multiple frames. I can't remember a specific game, though.

JoeJ said:
The only way around that is 'foveated rendering', which is extremely promising, but requires an eye tracker on every display. : (

Yeah, I've been thinking about that also. Didn't know the name for it. It's a very situation-specific thing, I think, unless you're simply reducing the resolution of entire rendering tiles (if using tiled rendering).

JoeJ said:
Main problem: It's revolutionary and breaks with established engines and tools. They have likely discontinued the promising project because devs would rather use Unity. : )

Not a problem because I write my own game engine anyway. Adapting a generic game engine to my specific game's needs is probably the same kind of effort as just building the game engine yourself. And since I'm planning to do crazy stuff, I will also build my own rendering engine.

I think that caching the whole scene that way goes too far, though, because it creates worse artifacts the closer something is to the camera. Especially in VR, that could cause severe nausea, I think. But it's basically the thing I wanted to do for medium to far distances. I just need to stitch the 3D part and the cached part of the scene together properly.

JoeJ said:
Yikes. Sounds like… ‘The brain combines sparse samples from the eyes to generate a dense image. So let's skip the eyes, send sparse samples directly to the brain, and thus save the cost of generating dense samples for display.’ : )

Well, it works fine for art and for cartoons, but if you make it too aggressive it will become jarring. Still, I'll do whatever is necessary to make it perform smoothly.

JoeJ said:
For some time it was common to render sprites from 3D models, mainly in top-down strategy / management sims showing many characters. The sprites were reused over multiple frames. I can't remember a specific game, though.

Yes, that's the gist of it. And then I'll use parallax mapping so that things like turning don't require a per-frame re-render but still give immediate cues to the player.

Walk with God.

