I'm trying to collect as many performance tricks as possible for rendering detailed game worlds on integrated graphics chips (i.e., constrained hardware), targeting OpenGL 3.3. I want to render reasonably pleasing 3D graphics at high framerates even on hardware that is not meant for games, like lower-end laptops. I came up with a few ideas, but would really appreciate any input and suggestions.
- Cubemapping faraway static scenery. The most promising candidate I have come up with so far is to reduce geometry by dynamically generating a skybox that contains a 360° snapshot (colour + depth) of all static, faraway geometry, since the perspective of faraway things barely changes as long as camera movement is slow. This lets me render the (static part of the) distant scene with the near clipping plane pushed far out, potentially spread across multiple frames, and then in regular frames I can pull the far clipping plane in to roughly where the skybox begins. Of course, I need to take special care that there are no glaring artifacts like seams, and I may need to correct the depth buffer once the camera has moved. The idea is to use that skybox to render faraway static geometry after the close-up geometry has been rendered, and a third pass can then render distant dynamic geometry by using the cubemap's depth map for the depth test. The cubemap only has to be updated once I move too far from the perspective it was captured from, and it can be constructed ahead of time by extrapolating my current movement. If I only have to re-render the cubemap every second or two, I can amortise the rendering effort over that interval, which leaves me with essentially zero per-frame cost for almost perspective-correct renderings of huge sceneries, without having to fade out distant objects entirely (as long as they are static or near-static). The technique also seems useful for hiding terrain seams, or I could apply some geometry distortion that tries to keep the edge of the world mapped to its position in the cubemap even after the camera has been displaced. Distortion may cause problems with distant dynamic geometry, though (especially in something like a sharpshooting setup, where angular correctness matters, but those cases can probably be compensated for as well). The cubemap should not add much extra cost, since you would use a cubemap for the sky anyway, although perhaps at a lower resolution. (There is a rough capture/amortisation sketch below the list.)
- Sprites/holograms to substitute geometry. Another obvious trick, with obvious artifacts, is drawing dynamic geometry as 2D sprites (somewhat like Doom or early 3D MMOs). But if I am playing a game on shitty hardware, I would probably appreciate 30+ FPS more than visual appeal. Does anyone have good references on this technique? I am sure some people have tried really hard to get multiple perspectives and animations working with this approach. Is it all based on pre-rendering, or could a caching strategy work too, like re-rendering a model to a framebuffer at regular intervals? (There is an impostor-caching sketch below the list.) Of course I would also settle for pre-rendered sprites, but if possible I want to get more perspective correctness out of it, mainly so that I can tell which way a figure is facing instead of having it snap to 45° or 22.5° increments, which is important for communicating whom an enemy is targeting. I think getting the orientation across matters more than a perspective-correct rendering of the character model. Maybe a technique like those ridged (lenticular) holographic postcards, which show multiple perspectives of the same object, would work? When well made, those really feel like perspective-correct 3D if you tilt them only a little, and then they fade/snap to a new perspective if you tilt them further. Is that technically different from simply imprinting many different angles, or is there a way to do it from a single snapshot? Has anyone written a shader for that? I guess it is somewhat like parallax mapping. What is the cost of parallax mapping on integrated GPUs compared to rendering proper geometry, and is it compatible with transparency/stencil? I have never used parallax mapping before. I should be able to get away with parallax-mapped sprites for enemies and other things, turning them into plausible semi-3D objects as long as I don't go overboard on the angular distortion. This ideally applies to geometry that is mostly unanimated, because smooth animations are really hard to achieve with traditional precomputed sprites.
- Bone/joint-animated geometry. I have never really tried skeletal animation on the GPU before, but I suspect you can also gain performance by reducing keyframe density, i.e., supplying fewer transformation updates and interpolating between them over time (see the keyframe-decimation sketch below the list). I may be out of my depth on this one.
- Psychological rendering. I know that term is probably made up, but I was particularly impressed by this guy's philosophy (starting at roughly 14:20) on painting detailed scenes, especially the idea that the "negative space" between objects matters more to the observer than the outlines of the objects themselves, and that hinting at detail or a recurring pattern is enough to make you believe the detail or pattern is there. I was wondering: if you can do so much with just lines and a bit of shading, how much further could this approach be taken with colour and textures? Or does a fully coloured picture detract from the psychological effect? I hope to find a way to reduce rendering complexity by using very simple graphics that still read as complex and detailed through psychological effects.
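
To make the cubemap idea more concrete, here is roughly how I picture the capture pass in OpenGL 3.3. This is an untested sketch: `drawStaticScene` is just a placeholder for my static-geometry pass, and the clip-plane values and resolution are arbitrary. The cubemap gets colour and depth attachments and is refreshed one face per frame, so the capture cost is spread over six frames.

```cpp
// Rough, untested sketch: capture colour + depth of distant static geometry
// into a cubemap, refreshing one face per frame so the cost is amortised over
// six frames. Assumes a current OpenGL 3.3 context; drawStaticScene() is a
// placeholder for whatever pass draws only the static, faraway geometry.
#include <glad/glad.h>                      // or whichever GL loader you use
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

void drawStaticScene(const glm::mat4& view, const glm::mat4& proj); // placeholder

struct FarCubemap {
    GLuint fbo = 0, colorTex = 0, depthTex = 0;
    int size = 512;                         // per-face resolution
};

void createFarCubemap(FarCubemap& c) {
    glGenTextures(1, &c.colorTex);
    glBindTexture(GL_TEXTURE_CUBE_MAP, c.colorTex);
    for (int face = 0; face < 6; ++face)
        glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X + face, 0, GL_RGB8,
                     c.size, c.size, 0, GL_RGB, GL_UNSIGNED_BYTE, nullptr);
    glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_R, GL_CLAMP_TO_EDGE);

    glGenTextures(1, &c.depthTex);          // depth cubemap, same face layout
    glBindTexture(GL_TEXTURE_CUBE_MAP, c.depthTex);
    for (int face = 0; face < 6; ++face)
        glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X + face, 0, GL_DEPTH_COMPONENT24,
                     c.size, c.size, 0, GL_DEPTH_COMPONENT, GL_FLOAT, nullptr);
    glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

    glGenFramebuffers(1, &c.fbo);
}

// Call once per frame with face = frameIndex % 6; after six calls the whole
// 360-degree snapshot has been refreshed from camPos.
void updateOneFace(FarCubemap& c, int face, const glm::vec3& camPos) {
    static const glm::vec3 dirs[6] = {{1,0,0},{-1,0,0},{0,1,0},{0,-1,0},{0,0,1},{0,0,-1}};
    static const glm::vec3 ups[6]  = {{0,-1,0},{0,-1,0},{0,0,1},{0,0,-1},{0,-1,0},{0,-1,0}};

    glBindFramebuffer(GL_FRAMEBUFFER, c.fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_CUBE_MAP_POSITIVE_X + face, c.colorTex, 0);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                           GL_TEXTURE_CUBE_MAP_POSITIVE_X + face, c.depthTex, 0);
    glViewport(0, 0, c.size, c.size);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    glm::mat4 view = glm::lookAt(camPos, camPos + dirs[face], ups[face]);
    // Near plane pushed out to where the regular far plane ends (values arbitrary).
    glm::mat4 proj = glm::perspective(glm::radians(90.0f), 1.0f, 200.0f, 20000.0f);
    drawStaticScene(view, proj);

    glBindFramebuffer(GL_FRAMEBUFFER, 0);
}
```

In regular frames the colour faces would be sampled like an ordinary skybox, and the depth faces would be available for the depth test of the distant dynamic pass.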
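For the sprite question, this is the kind of caching strategy I had in mind instead of purely pre-rendered angles, again as an untested sketch: `renderModelToTexture` and `drawBillboard` stand in for passes I would still have to write, and the FBO/texture setup would mirror the cubemap sketch above.

```cpp
// Rough, untested sketch: per-enemy impostor cache. The model is re-rendered
// into a small RGBA texture only when the viewing direction has drifted past a
// threshold; otherwise the cached snapshot is drawn as a camera-facing quad.
// renderModelToTexture() and drawBillboard() are placeholders for my own passes.
#include <glad/glad.h>
#include <glm/glm.hpp>
#include <cmath>

struct Impostor {
    GLuint fbo = 0, tex = 0;            // small RGBA target, alpha = cut-out
    glm::vec3 cachedViewDir{0.0f};      // direction the snapshot was taken from
    bool valid = false;
};

void renderModelToTexture(GLuint fbo, const glm::vec3& viewDir);            // placeholder
void drawBillboard(GLuint tex, const glm::vec3& pos, const glm::vec3& cam); // placeholder

void drawImpostor(Impostor& imp, const glm::vec3& objPos, const glm::vec3& camPos) {
    glm::vec3 viewDir = glm::normalize(objPos - camPos);

    // Refresh the snapshot once the angle to the cached perspective exceeds
    // ~10 degrees, so the sprite keeps communicating which way the figure faces.
    float cosAngle = imp.valid ? glm::dot(viewDir, imp.cachedViewDir) : -1.0f;
    if (cosAngle < std::cos(glm::radians(10.0f))) {
        renderModelToTexture(imp.fbo, viewDir);   // one small off-screen pass
        imp.cachedViewDir = viewDir;
        imp.valid = true;
    }
    drawBillboard(imp.tex, objPos, camPos);       // cheap quad in the main pass
}
```

I imagine the refreshes would also need a per-frame budget (only a few impostors re-rendered per frame) so the off-screen passes stay bounded.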
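And for the keyframe-density point, I imagine something like this naive decimation pass. It is an untested sketch covering translation keys only; rotation keys would need quaternion slerp instead of a linear mix, and it assumes strictly increasing key times.

```cpp
// Rough, untested sketch: offline keyframe decimation for one bone channel.
// A key is dropped when linear interpolation between its neighbours already
// reproduces it within a tolerance, so fewer transformation updates need to be
// stored and interpolated at runtime.
#include <glm/glm.hpp>
#include <vector>

struct PosKey { float time; glm::vec3 pos; };

std::vector<PosKey> decimate(const std::vector<PosKey>& keys, float tolerance) {
    if (keys.size() <= 2) return keys;
    std::vector<PosKey> out;
    out.push_back(keys.front());
    for (size_t i = 1; i + 1 < keys.size(); ++i) {
        const PosKey& a = out.back();        // last kept key
        const PosKey& b = keys[i + 1];       // next original key
        float t = (keys[i].time - a.time) / (b.time - a.time);
        glm::vec3 predicted = glm::mix(a.pos, b.pos, t);
        if (glm::length(predicted - keys[i].pos) > tolerance)
            out.push_back(keys[i]);          // keep only keys interpolation cannot recover
    }
    out.push_back(keys.back());
    return out;
}

// Runtime sampling is unchanged: find the surrounding pair and interpolate.
glm::vec3 sample(const std::vector<PosKey>& keys, float time) {
    for (size_t i = 0; i + 1 < keys.size(); ++i) {
        if (time <= keys[i + 1].time) {
            float t = (time - keys[i].time) / (keys[i + 1].time - keys[i].time);
            return glm::mix(keys[i].pos, keys[i + 1].pos, t);
        }
    }
    return keys.back().pos;
}
```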
If anyone has additional suggestions or criticism, I thank you in advance.