
Placement and rendering techniques for vegetation/details


I'm trying to figure out how to design the vegetation/detail system for my procedural chunked-lod based planet renderer.

While I've found a lot of papers on how to homogeneously scatter things over a planar or spherical surface, I couldn't find much information about how the locations of objects are actually encoded. There seems to be a common approach that uses some sort of texture mapping, where different layers define what vegetation/rocks/trees etc. are placed and where, but I can't figure out how this is actually rendered. I guess that for billboards these textures could be sampled in the vertex shader and a geometry shader could then draw a texture onto a generated quad? What about nearby trees or rocks that need a 3D model instead? Is this handled on the CPU? Is there a specific solution that works better with a chunked-LOD approach?

Thanks in advance!


Horizon: Zero Dawn is a great recent example of vegetation placement - their system is broadly split into authored source data and tooling, and the runtime placement system.

  1. Source data (and authoring)
    • Painted / generated per-world-tile texture maps, e.g. Rivers, Big Trees, Biome X, etc. These can be whatever information is useful to you in your placement algorithm. It's common that things like 'Rivers' or 'Roads' are actually Signed Distance Fields so you can do things like query '10m away from roads'. You'll generally want access to information derived from your terrain as well, like Slope, Curvature, etc.
    • 'Placement Graphs', e.g. some way to express how you want to combine all your source data to place different assets. This could be a textual expression language, a node graph, defined in code, whatever, but the point is that you can take all your source data and define a set of rules for how something is placed (see the sketch after this list), for example:
      • Probability(my_tree_a) = (distance(water) < 10) * keyframe(elevation, <some keyframes>) * ...
  2. Runtime placement
    • When you approach a terrain tile in Horizon, they start placing nearby objects using their placement algorithm. This involves taking all the source data + placement graphs and evaluating them. In Horizon's case, they do this on the GPU in a massively parallel fashion to efficiently place thousands of objects.
    • Objects can be placed at pre-defined (ish) locations. You can generate these locations using blue noise, hex packing, etc. but you just want some kind of irregular-ish grid to get natural-looking uniform random placement.
    • At each location, evaluate all the possible placement graphs that could go there (grouped however makes sense to you). Generate some random value and randomly choose what to place at that location based on the evaluated results (it might be nothing!).
    • 'Placing' something really means recording a pair <model ID, position>. You can also generate other information (scale, rotation, tilt, whatever). What you end up with is basically a point cloud of various models scattered across your landscape. You can transform this and feed it into your normal rendering pipeline however works best for you.
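To make that concrete, here's a minimal CPU-side sketch in C++ of the two steps above: it samples a couple of stand-in source-data maps (a water distance field and an elevation map), evaluates a rule in the spirit of the Probability(my_tree_a) expression at jittered grid points, and records <model ID, position> placements. All of the names here (SourceData, placeTile, the constants) are made up for illustration - the real thing would likely live on the GPU and query your actual terrain data.

#include <algorithm>
#include <cmath>
#include <cstdint>
#include <random>
#include <vector>

struct SourceData {
    // Stand-ins for lookups into per-tile source textures (water SDF, elevation).
    float waterDistanceAt(float x, float z) const {
        return std::abs(std::sin(x * 0.01f) * std::cos(z * 0.01f)) * 50.0f;
    }
    float elevationAt(float x, float z) const {
        return 400.0f + 300.0f * std::sin((x + z) * 0.001f);
    }
};

struct Placement {
    uint32_t modelId;     // which asset to instance
    float    x, y, z;     // world position
    float    yaw, scale;  // extra per-instance variation
};

// Placement rule in the spirit of:
//   Probability(my_tree_a) = (distance(water) < 10) * keyframe(elevation, ...)
float treeProbability(const SourceData& src, float x, float z) {
    float nearWater = src.waterDistanceAt(x, z) < 10.0f ? 1.0f : 0.0f;
    // Crude stand-in for an elevation keyframe curve: full density below 500 m,
    // fading to zero by 1500 m.
    float t = (src.elevationAt(x, z) - 500.0f) / 1000.0f;
    float elevationWeight = 1.0f - std::clamp(t, 0.0f, 1.0f);
    return nearWater * elevationWeight;
}

// Evaluate the rule on a jittered grid over one tile (a cheap substitute for
// blue noise / hex packing) and emit a point cloud of placements.
std::vector<Placement> placeTile(const SourceData& src,
                                 float tileX, float tileZ, float tileSize,
                                 float spacing, uint32_t treeModelId,
                                 uint32_t seed) {
    std::vector<Placement> out;
    std::mt19937 rng(seed);  // per-tile seed keeps placement deterministic
    std::uniform_real_distribution<float> unit(0.0f, 1.0f);

    for (float gx = 0.0f; gx < tileSize; gx += spacing) {
        for (float gz = 0.0f; gz < tileSize; gz += spacing) {
            // Jitter each grid point so the pattern doesn't look regular.
            float x = tileX + gx + (unit(rng) - 0.5f) * spacing;
            float z = tileZ + gz + (unit(rng) - 0.5f) * spacing;

            // Roll against the evaluated probability; often nothing is placed.
            if (unit(rng) < treeProbability(src, x, z)) {
                out.push_back({treeModelId, x, src.elevationAt(x, z), z,
                               unit(rng) * 6.2831853f,      // random yaw
                               0.8f + unit(rng) * 0.4f});   // random scale
            }
        }
    }
    return out;
}

If you're generating chunks/tiles procedurally, seeding something like placeTile from the tile coordinates keeps the placement deterministic across sessions and LOD changes.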

The more traditional approach is to move both steps 1 and 2 to offline tooling and package up all your models and instances to be loaded by the game, but Horizon's approach allows some pretty awesome in-engine iteration (paint to the 'Trees' texture and see things respond in real time). Far Cry 5 is an example of the more traditional offline approach iirc.

Edit: Tl;dr, you need some way to come up with a point cloud of models to render. You can do that on the GPU or on the CPU depending on the number of things in your environment, perf requirements, etc. If you're generating chunks, it makes sense to perform your placement algorithm when generating a chunk. When drawing a point from your point cloud, you can decide whether to draw the billboard or the full model based on distance from the viewer, etc., just like drawing a normal model.
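As a rough illustration of that last point, a per-chunk pass over the point cloud can split instances into 'draw the full mesh' and 'draw the billboard/impostor' buckets with a simple distance check. The threshold and the PlacedInstance layout below are hypothetical, not from any specific engine.

#include <cstdint>
#include <vector>

struct Vec3 { float x, y, z; };

// One entry of the point cloud produced by the placement step, with handles
// to both a full mesh and a cheap billboard/impostor version of the asset.
struct PlacedInstance {
    Vec3     position;
    uint32_t meshId;
    uint32_t billboardId;
};

// Split a chunk's instances into mesh and billboard draw lists based on
// distance to the viewer.
void buildDrawLists(const std::vector<PlacedInstance>& instances,
                    const Vec3& viewer, float billboardDistance,
                    std::vector<const PlacedInstance*>& meshes,
                    std::vector<const PlacedInstance*>& billboards) {
    const float threshold2 = billboardDistance * billboardDistance;
    for (const PlacedInstance& inst : instances) {
        float dx = inst.position.x - viewer.x;
        float dy = inst.position.y - viewer.y;
        float dz = inst.position.z - viewer.z;
        float d2 = dx * dx + dy * dy + dz * dz;  // squared distance, no sqrt
        (d2 < threshold2 ? meshes : billboards).push_back(&inst);
    }
}

From there each bucket can be drawn with instancing as usual; in practice you'd add frustum culling and probably a fade or dither between the two LODs to hide the switch.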
