After a tiring search on Google, I came up dry looking for a way to do 3D picking on complex object models, such as any model that's not a sphere, ellipsoid, or a box. A character model would be a good example. I found two ways to do picking. One method is reading the color under the cursor from the screen, which is not good because the color could be anything. The other method is ray casting. The only problem is that the ray casting tests in nearly every tutorial are done on simplified models such as spheres or cubes. Yet 3D Studio Max, World of Warcraft, and other programs clearly show that you can click any model, not just primitives. Does anyone have a good idea of how this can be done? Thanks in advance.
3D Picking Complex Models
Look at Blender, perhaps. It cycles through all objects under that position, starting with the one closest to the viewer. Since Blender is a mesh editor, I would guess that it stores the transformed vertices (in screen coordinates). Testing each individual triangle is simple, and so is a convex polygon; other polygons need splitting first.
If you just need one pick per pixel you could render to a hidden buffer, where each polygon gets its own unique color.
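For what it's worth, a minimal sketch of that color-ID idea, assuming a hypothetical RGB8 pick buffer and sequential IDs that fit in 24 bits:

```cpp
#include <cstdint>

// Pack an object/triangle ID into an RGB8 color for the hidden pick buffer.
// Assumes IDs fit in 24 bits, i.e. fewer than ~16.7 million pickable items.
struct PickColor { uint8_t r, g, b; };

PickColor idToColor(uint32_t id)
{
    return { uint8_t(id & 0xFF), uint8_t((id >> 8) & 0xFF), uint8_t((id >> 16) & 0xFF) };
}

// After rendering, read back the pixel under the cursor and decode it.
uint32_t colorToId(PickColor c)
{
    return uint32_t(c.r) | (uint32_t(c.g) << 8) | (uint32_t(c.b) << 16);
}
```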
2 hours ago, Psychopathetica said: such as any model that's not a sphere, ellipsoid, or a box. A character model would be a good example
Often these will have much simpler collision geometry, and they will always have a bounding box. In games a ray is usually cast against the bounding box or collision shape, not against the render mesh.
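For illustration, the bounding-box part of that is usually a standard "slab" test (this is just a common sketch, not engine-specific code):

```cpp
#include <algorithm>

struct Vec3 { float x, y, z; };

// Standard slab ray/AABB intersection test.
// rayOrigin + t * rayDir hits the box for some t in [0, tMax] if this returns true.
bool rayIntersectsAABB(const Vec3& rayOrigin, const Vec3& rayDir,
                       const Vec3& boxMin, const Vec3& boxMax, float tMax)
{
    float tEnter = 0.0f;
    float tExit  = tMax;

    const float origin[3] = { rayOrigin.x, rayOrigin.y, rayOrigin.z };
    const float dir[3]    = { rayDir.x,    rayDir.y,    rayDir.z    };
    const float bmin[3]   = { boxMin.x,    boxMin.y,    boxMin.z    };
    const float bmax[3]   = { boxMax.x,    boxMax.y,    boxMax.z    };

    for (int axis = 0; axis < 3; ++axis)
    {
        const float invD = 1.0f / dir[axis];          // +/-inf when the ray is parallel to this slab
        float t0 = (bmin[axis] - origin[axis]) * invD;
        float t1 = (bmax[axis] - origin[axis]) * invD;
        if (invD < 0.0f) std::swap(t0, t1);

        tEnter = std::max(tEnter, t0);
        tExit  = std::min(tExit,  t1);
        if (tExit < tEnter) return false;             // the slab intervals no longer overlap
    }
    return true;
}
```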
2 hours ago, arnero said: Look at Blender
Blender uses a ray trace that checks for intersection with the mesh. To test this you can add two cubes and subdivide one of them 10 times; that comes to roughly 12 000 000 triangles. Once a high-poly model reaches around 8 000 000 polygons, picking starts slowing down, and using the Outliner becomes faster.
Alt+Right-Click will show a list of the objects under the cursor, letting you select the one you want.
Outputting some info from the pixel shader is totally a viable path, you just need to output something that gives you enough info to resolve what you want to pick against. For instance you could output a 32-bit integer that serves as a "mesh ID", and if necessary you can output another integer that could contain the "triangle ID". A depth buffer is enough to reconstruct the position, so you don't even need to write anything extra for that.
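As an aside, here is a rough sketch of that depth-to-position reconstruction, using GLM purely for illustration (the function and parameter names are made up, and it assumes a GL-style [0, 1] depth buffer and a top-left window origin):

```cpp
#include <glm/glm.hpp>

// Reconstruct the world-space position of a picked pixel from its depth value.
// `depth` is the value read back from the depth buffer, `viewProj` is the
// camera's view-projection matrix; both are placeholders for your own data.
glm::vec3 reconstructWorldPos(float px, float py, float depth,
                              float screenW, float screenH,
                              const glm::mat4& viewProj)
{
    // Window coordinates -> normalized device coordinates in [-1, 1].
    glm::vec4 ndc;
    ndc.x = (px / screenW) * 2.0f - 1.0f;
    ndc.y = 1.0f - (py / screenH) * 2.0f;   // flip Y for a top-left window origin
    ndc.z = depth * 2.0f - 1.0f;            // GL depth range; use `depth` directly for a 0..1 range
    ndc.w = 1.0f;

    // Unproject and divide by w to get back to world space.
    glm::vec4 world = glm::inverse(viewProj) * ndc;
    return glm::vec3(world) / world.w;
}
```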
If you want to do full triangle/mesh intersection, it's certainly doable. The brute-force way to do it would be to just iterate over all of your triangles, testing for intersection. This is obviously a ton of work to do on the CPU for non-trivial scenes, especially if you need to transform your vertices by a joint hierarchy for skinning. You can also do it on the GPU in a compute shader, which is likely to be faster but would still need to handle animation (a pre-skinning compute shader path that caches skinned vertices at the start of the frame could potentially be useful for this). Either way some sort of coarse rejection test (for instance, test against a bounding sphere before testing triangles) would surely improve the performance.
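For reference, a brute-force CPU sketch along those lines: Möller–Trumbore per triangle, with a bounding-sphere test as the coarse rejection. The mesh layout here is invented for the example and assumes positions are already transformed (post-skinning):

```cpp
#include <cmath>
#include <cstddef>
#include <limits>
#include <vector>

struct Vec3 {
    float x, y, z;
    Vec3 operator-(const Vec3& o) const { return { x - o.x, y - o.y, z - o.z }; }
};
static Vec3  cross(const Vec3& a, const Vec3& b) { return { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x }; }
static float dot(const Vec3& a, const Vec3& b)   { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Moller-Trumbore ray/triangle intersection. Returns the hit distance t, or a negative value on miss.
static float rayTriangle(const Vec3& orig, const Vec3& dir, const Vec3& v0, const Vec3& v1, const Vec3& v2)
{
    const Vec3 e1 = v1 - v0, e2 = v2 - v0;
    const Vec3 p  = cross(dir, e2);
    const float det = dot(e1, p);
    if (std::fabs(det) < 1e-8f) return -1.0f;            // ray parallel to the triangle plane
    const float invDet = 1.0f / det;
    const Vec3 tvec = orig - v0;
    const float u = dot(tvec, p) * invDet;
    if (u < 0.0f || u > 1.0f) return -1.0f;
    const Vec3 q  = cross(tvec, e1);
    const float v = dot(dir, q) * invDet;
    if (v < 0.0f || u + v > 1.0f) return -1.0f;
    return dot(e2, q) * invDet;                          // distance along the ray
}

// Hypothetical mesh: transformed positions plus a triangle index list and a bounding sphere.
struct Mesh {
    std::vector<Vec3>     positions;
    std::vector<unsigned> indices;       // 3 per triangle
    Vec3  boundsCenter;
    float boundsRadius;
};

// Returns the index of the closest hit triangle, or -1 if the ray misses the mesh.
int pickTriangle(const Mesh& mesh, const Vec3& orig, const Vec3& dir)  // dir assumed normalized
{
    // Coarse rejection: skip the whole mesh if the ray misses its bounding sphere.
    const Vec3  toCenter = mesh.boundsCenter - orig;
    const float proj     = dot(toCenter, dir);
    const float radiusSq = mesh.boundsRadius * mesh.boundsRadius;
    if (proj < 0.0f && dot(toCenter, toCenter) > radiusSq) return -1;   // sphere behind the ray
    if (dot(toCenter, toCenter) - proj * proj > radiusSq)  return -1;   // ray passes outside the sphere

    int   bestTri = -1;
    float bestT   = std::numeric_limits<float>::max();
    for (size_t i = 0; i + 2 < mesh.indices.size(); i += 3) {
        const float t = rayTriangle(orig, dir,
                                    mesh.positions[mesh.indices[i + 0]],
                                    mesh.positions[mesh.indices[i + 1]],
                                    mesh.positions[mesh.indices[i + 2]]);
        if (t >= 0.0f && t < bestT) { bestT = t; bestTri = int(i / 3); }
    }
    return bestTri;
}
```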
There are also libraries like Embree that accelerate CPU-based ray/triangle intersections, but they do this by building an acceleration structure (typically a bounding volume hierarchy) from all of the triangles. For non-static/animated meshes you would still need to compute the transformed vertex positions on the CPU, and then perform a partial re-build of the acceleration structure.
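If it helps, a rough sketch of what that looks like with Embree's version-3-style API; the xyz-interleaved buffer layout and per-call scene build are assumptions made just to keep the example short:

```cpp
#include <embree3/rtcore.h>
#include <cstdint>
#include <limits>

// Build a scene from one triangle mesh, fire a single ray, and return the hit triangle ID.
uint32_t embreePick(const float* vertices, size_t vertexCount,
                    const uint32_t* indices, size_t triangleCount,
                    const float orig[3], const float dir[3])
{
    RTCDevice device = rtcNewDevice(nullptr);
    RTCScene  scene  = rtcNewScene(device);

    RTCGeometry geom = rtcNewGeometry(device, RTC_GEOMETRY_TYPE_TRIANGLE);
    float*    vb = (float*)rtcSetNewGeometryBuffer(geom, RTC_BUFFER_TYPE_VERTEX, 0,
                                                   RTC_FORMAT_FLOAT3, 3 * sizeof(float), vertexCount);
    uint32_t* ib = (uint32_t*)rtcSetNewGeometryBuffer(geom, RTC_BUFFER_TYPE_INDEX, 0,
                                                      RTC_FORMAT_UINT3, 3 * sizeof(uint32_t), triangleCount);
    for (size_t i = 0; i < vertexCount * 3; ++i)   vb[i] = vertices[i];
    for (size_t i = 0; i < triangleCount * 3; ++i) ib[i] = indices[i];
    rtcCommitGeometry(geom);
    rtcAttachGeometry(scene, geom);
    rtcReleaseGeometry(geom);
    rtcCommitScene(scene);          // builds the BVH

    // Set up a single ray and intersect it with the scene.
    RTCRayHit rayhit = {};
    rayhit.ray.org_x = orig[0]; rayhit.ray.org_y = orig[1]; rayhit.ray.org_z = orig[2];
    rayhit.ray.dir_x = dir[0];  rayhit.ray.dir_y = dir[1];  rayhit.ray.dir_z = dir[2];
    rayhit.ray.tnear = 0.0f;
    rayhit.ray.tfar  = std::numeric_limits<float>::infinity();
    rayhit.hit.geomID = RTC_INVALID_GEOMETRY_ID;

    RTCIntersectContext context;
    rtcInitIntersectContext(&context);
    rtcIntersect1(scene, &context, &rayhit);

    const uint32_t hitTriangle = (rayhit.hit.geomID != RTC_INVALID_GEOMETRY_ID)
                                     ? rayhit.hit.primID
                                     : RTC_INVALID_GEOMETRY_ID;
    rtcReleaseScene(scene);
    rtcReleaseDevice(device);
    return hitTriangle;
}
```

In a real application you would of course build the scene once and only refit/rebuild it when the mesh animates, rather than per pick.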
Another idea I had in mind was something to do with the shader: like taking the polygon model you drew out and somehow feeding it the ray cast information to test whether it overlaps, possibly through the fragment shader.
1 hour ago, Psychopathetica said: Another idea I had in mind was something to do with the shader: like taking the polygon model you drew out and somehow feeding it the ray cast information to test whether it overlaps, possibly through the fragment shader.
That's maybe the fastest approach, because the involved code will be skipped for all pixels except the picked one. But if you do it inside a game you should watch out for the additional register usage - even if the code is executed very rarely, it may reduce occupancy and thus slow everything down.
If you want to do it in its own pass, however, one trick is to narrow the projection matrix so it draws only a very small framebuffer, e.g. 8x8 pixels. You could use two int32 render targets, for instance one for object ID and one for triangle ID - or pack both into a single one with bit packing. Then you download the render targets plus Z and find the closest pixel. Advantage: even if you don't click exactly on a vertex, you are much more likely to get the intended vertex than when using just a single pixel / ray. (A vertex effectively becomes the size of a 4x4 quad, so it's much easier to pick than a single pixel!)
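To make that concrete, here is one way such a narrowed projection could be built. This is a GLM-based sketch; the parameter names and the top-left mouse origin are assumptions:

```cpp
#include <cmath>
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>   // glm::frustum

// Build a projection matrix whose frustum covers only a small pick region
// (e.g. 8x8 pixels) centred on the mouse cursor, instead of the whole screen.
glm::mat4 pickProjection(float fovY, float aspect, float zNear, float zFar,
                         float screenW, float screenH,
                         float mouseX, float mouseY,      // top-left window origin assumed
                         float pickSizePixels)
{
    // Full-screen frustum extents on the near plane.
    const float top   = zNear * std::tan(fovY * 0.5f);
    const float right = top * aspect;

    // Normalised centre of the pick region, in [-1, 1] with +Y up.
    const float cx = (mouseX / screenW) * 2.0f - 1.0f;
    const float cy = 1.0f - (mouseY / screenH) * 2.0f;

    // Half-size of the pick region in the same normalised units.
    const float hx = pickSizePixels / screenW;
    const float hy = pickSizePixels / screenH;

    // Off-centre sub-frustum covering just that region.
    return glm::frustum((cx - hx) * right, (cx + hx) * right,
                        (cy - hy) * top,   (cy + hy) * top,
                        zNear, zFar);
}
```

Render the ID pass with this matrix into the small target, then read back the render targets and depth and take the closest hit.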
If you think about it, ray tracing on the CPU should be much faster, even brute-force checking every single triangle. (Because the GPU renders only a small framebuffer, it will be 95% idle.) A bounding box per object would be a nice speedup if you have one. The downside is that you need to reimplement things like skinning on the CPU.
If you use a physics engine and the physics and graphics geometry are the same, you could use it as well.
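For example, with Bullet that kind of pick is just a ray test against the collision world (the world pointer and ray endpoints here are placeholders):

```cpp
#include <btBulletDynamicsCommon.h>

// Sketch: cast a ray through the physics world and return the closest hit object, if any.
const btCollisionObject* pickWithPhysics(btDiscreteDynamicsWorld* world,
                                         const btVector3& rayFrom,
                                         const btVector3& rayTo)
{
    // Closest-hit callback: Bullet fills it with the nearest intersected object.
    btCollisionWorld::ClosestRayResultCallback callback(rayFrom, rayTo);
    world->rayTest(rayFrom, rayTo, callback);

    if (callback.hasHit())
    {
        // callback.m_hitPointWorld / m_hitNormalWorld hold the contact data if you need it.
        return callback.m_collisionObject;
    }
    return nullptr;
}
```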