GPU Octree Construction Confusion

Started by August 19, 2019 03:27 PM
2 comments, last by MJP 5 years, 5 months ago

I have recently been interested in using octrees for things like voxel cone tracing, path tracing, and maybe even ray tracing. The wall I have stumbled upon is that I cannot understand how to construct the octree structure on the GPU (not using a compute shader). I understand how the voxelization process happens (using hardware rasterization); however, I have trouble constructing the octree. I understand how to store the tree, but I do not know how to reserve enough pixels in my octree texture for when the tree needs to subdivide a node. I also can't simply create a texture that has the maximum possible number of nodes/pixels for the tree (i.e. creating a texture with millions of pixels when not all of them will be filled). According to this source, https://maverick.inria.fr/Publications/2011/Cra11/CCrassinThesis_EN_Web.pdf#page=169, I don't have to define the entire texture. It states:

"Whenever a node needs to be subdivided, a set of 2 × 2 × 2 sub-nodes is 'allocated' inside a global shared node buffer pre-allocated in video memory. [...] These allocations are made through the atomic increment of a global shared counter indicating the next available page of nodes in the shared node buffer." (169)

Long story short, I do not understand this quote or how it is implemented. Please help. Thank you in advance.

The game engine I am using for this is Unity.

Wow, that's cool! It seems like they first render the scene in three passes along the main axes and somehow use that information to obtain an LOD, but some pre-processing of the geometry seems necessary. I don't have the time to read and understand it all, sorry...

Sources are available for download: http://gigavoxels.imag.fr/download.html



Unfortunately you really don't have many options here when doing this on the GPU. There are no mechanisms for allocating more memory on the GPU timeline; that's something you can only do from the CPU. So if you really don't want to run out, you need to somehow get the information back to the CPU (and perhaps gracefully handle things on the GPU until the CPU can read the results, allocate more memory, and pass it back to the GPU).

This topic is closed to new replies.
