
Cage mesh deformers, or how Roblox now does clothing

Started by Nagle, September 09, 2021 09:43 PM
9 comments, last by JoeJ 3 years, 1 month ago

Ref: https://devforum.roblox.com/t/cage-mesh-deformer-studio-beta/1196727

Roblox has a new clothing system. You can dress your character with multiple layers of mesh clothing, and the system adjusts the meshes to keep the layering correct. No more inner layers peeking through outer layers. They call this a “cage mesh deformer”. It's still in beta.

In the Roblox illustration: green shirt = renderable mesh, red outline = inner cage, blue outline = outer cage.

This is a form of precomputation - you do this once during the process of putting on clothes, and the result is a single rigged mesh the GPU can render as usual.

Is this a new concept? Are there any theory papers on this? How do you take a group of clothing mesh layers that fit in pose stance and adjust them so they work as the joints move? Looks like Roblox is automating that, at least partly.

There was, apparently, a Lightwave plugin for cage deformers around 2014, but it didn't catch on (https://www.lightwave3d.com/assets/plugins/entry/cage-deformer). So it's not a totally new idea.

There are a few ways to go at this. Here's one that occurs to me:

  • You have a jacket put on over a body. Both are rigged meshes, and in pose stance (standing, arms out, legs spread slightly) everything is layered properly. But if the arm bends, the elbow pokes through the jacket. The goal is to fix that.
  • So, suppose we project the vertices of the jacket inward onto the body, where they become additional vertices of the body. We project the vertices of the body outward onto the jacket, where they become additional vertices of the jacket.
  • For the jacket, interpolate the bone weights for the new points. Then use the bone weights of the jacket on the corresponding points of the underlying body. Now the body will maintain the proper distance from the inside of the jacket.
  • Use the result as an ordinary rigged mesh. (A rough sketch of the projection and weight-transfer step follows this list.)
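
Here's roughly how that projection and weight transfer could be prototyped. This is a sketch only: it assumes trimesh for the closest-point queries and a per-vertex (num_verts, num_bones) weight matrix; all names are illustrative, not anything Roblox actually does.

import numpy as np
import trimesh

def project_and_blend_weights(outer, inner, inner_weights):
    """outer, inner: trimesh.Trimesh objects in the bind pose.
    inner_weights: (num_inner_verts, num_bones) skinning weights.
    Returns the projected points on `inner` and interpolated weights,
    one per vertex of `outer`."""
    # Closest point on the inner surface for every outer-layer vertex.
    closest, dist, tri_id = trimesh.proximity.closest_point(inner, outer.vertices)

    # Barycentric coordinates of each projected point inside its triangle.
    tris = inner.triangles[tri_id]                                  # (n, 3, 3)
    bary = trimesh.triangles.points_to_barycentric(tris, closest)   # (n, 3)

    # Blend the three corner weight rows with the barycentric coordinates.
    corner_w = inner_weights[inner.faces[tri_id]]                   # (n, 3, num_bones)
    blended = np.einsum('nc,ncb->nb', bary, corner_w)
    return closest, blended

Run it both ways (jacket onto body, body onto jacket) to get the two sets of extra vertices described above.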

If there are multiple layers of clothing, do this from the outside in, so each layer is the “outer cage” of the next layer inward.

The idea is to do this automatically, so that users can mix and match clothing and it Just Works.

(Second Life has mix and match clothing from many creators, and poking through is a constant problem, patched by adding alpha texture layers to blank out troublesome inner layers. It doesn't Just Work.)

My guess is that they also adjust weights based on the cage. E.g., they “snap” to the cage, and then make sure weight blends are correct according to position on the cage. That would presumably create the same transform for the same location for different pieces of clothing. As long as the cages then have sufficient separation to avoid interpolation cut-throughs and differences in tessellation, it should Just Work ™

enum Bool { True, False, FileNotFound };

That's about what I figure, but there are sure to be lots of special cases, like where the cuffs meet the wrists and the collar meets the neckline, where you can look inside a little and see part of an inner layer. You ought to be able to figure out which triangles you can discard because there's no way to see something hidden by an outer layer, provided the camera isn't allowed to get super small and close and go down the collar for an ant's-eye view. I'm kind of hoping that someone wrote up a system that works, or posted code somewhere.
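
One way the triangle culling might be automated, as a rough sketch: trimesh ray queries are assumed, is_opaque is a stand-in for a material lookup, and inner-layer normals are assumed to point outward.

import numpy as np
import trimesh

def occluded_faces(inner, outer, is_opaque, max_gap=0.1, samples_per_face=4):
    """Indices of inner-layer faces where every sample ray is covered by
    an opaque outer-layer triangle within max_gap."""
    drop = []
    for fi, face in enumerate(inner.faces):
        tri = inner.vertices[face]
        normal = inner.face_normals[fi]
        # Random barycentric samples on the triangle, nudged off the surface.
        bary = np.random.dirichlet(np.ones(3), samples_per_face)
        points = bary @ tri + 1e-4 * normal
        covered = 0
        for p in points:
            locs, _, tri_ids = outer.ray.intersects_location(
                ray_origins=[p], ray_directions=[normal])
            if any(np.linalg.norm(hit - p) < max_gap and is_opaque(t)
                   for hit, t in zip(locs, tri_ids)):
                covered += 1
        if covered == samples_per_face:
            drop.append(fi)   # never visible unless the camera goes inside
    return drop

Cuff and collar regions should survive the test automatically, since samples near the openings find no opaque cover within max_gap.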

The Second Life clothing system is known for looking good and offering a huge range of clothing, but being a pain to use and a dog to render. This looks like a way out that keeps the good properties and fixes some of the problems.

Nagle said:
So, suppose we project the vertices of the jacket inward onto the body, where they become additional vertices of the body. We project the vertices of the body outward onto the jacket, where they become additional vertices of the jacket. For the jacket, interpolate the bone weights for the new points. Then use the bone weights of the jacket on the corresponding points of the underlying body. Now the body will maintain the proper distance from the inside of the jacket.

Alternatively, you could only add new vertices to the jacket in areas where the body has higher tessellation and high variance in weights. Not as robust as your both-ways solution, but maybe good enough.

The problem is likely how to do the projection. Using mesh normals would be noisy.
I'd use a signed distance field from the body. The gradient of the field gives the direction, so the closest spot on the body mesh can be found simply from gradient.Unit() * distance.
Likely a two-way method would work as well; then you would trace in small steps and mix both fields along the way.
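
Roughly like this, as a sketch; `sdf` is assumed to be a callable returning the signed distance at a point, e.g. a trilinear lookup into a precomputed grid (not shown):

import numpy as np

def closest_on_body(p, sdf, eps=1e-3, iterations=3):
    """Step a cloth vertex to the nearest body-surface point using the
    field gradient, i.e. gradient.Unit() * distance, with a couple of
    refinement iterations to absorb interpolation error."""
    p = np.asarray(p, dtype=float)
    for _ in range(iterations):
        d = sdf(p)
        # Central-difference gradient of the field at p.
        grad = np.array([sdf(p + [eps, 0, 0]) - sdf(p - [eps, 0, 0]),
                         sdf(p + [0, eps, 0]) - sdf(p - [0, eps, 0]),
                         sdf(p + [0, 0, eps]) - sdf(p - [0, 0, eps])]) / (2 * eps)
        n = np.linalg.norm(grad)
        if n < 1e-12:
            break
        p = p - grad / n * d
    return p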

However, generating a good, high-resolution SDF is not trivial. I use JFA first ( https://blog.demofox.org/2016/02/29/fast-voronoi-diagrams-and-distance-dield-textures-on-the-gpu-with-the-jump-flooding-algorithm/ ), but then do some gradient descent iterations to fix the errors. (For me the errors are big because I use density volume data as input, not meshes. This makes my JFA very noisy, because it gives only an upper bound rather than an exact distance like triangles or Voronoi seeds would.)
To save memory and computation, I use a sparse grid data structure to generate the SDF only within some given radius from the surface. This way I can have 1024^3 SDF volumes easily.

Here's a screenshot:

I use it to generate particles for fluid sim, but you can imagine it's easy to project cloth onto the model robustly.

However, if your models are low-poly, it surely is better to use some spatial acceleration structure over the mesh triangles and find the closest point on the surface from the triangles directly, of course.
Not sure, but such an SDF can be blurred, which might help with avoiding noisy weights around high detail. It's also possible to combine a jacket SDF + body SDF to handle layers of cloth.

JoeJ said:
It's also possible to combine a jacket SDF + body SDF to handle layers of cloth.

Well, I realize that's not so easy, because the jacket has a very thin interior, while what we want would be a filled, solid jacket. That's a problem.
So maybe just use the body SDF and trace rays along its gradient. That way you could find all intersections with all layers of cloth, no matter what's inside or outside.
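
Something like this, as a sketch (trimesh ray queries assumed; `body_gradient` is a stand-in for the outward direction taken from the body SDF):

import numpy as np
import trimesh

def layer_hits(skin_point, body_gradient, clothing_meshes, max_dist=0.2):
    """Trace outward from a point on the body and collect the hits on
    each clothing layer, ordered from innermost to outermost."""
    direction = np.asarray(body_gradient(skin_point), dtype=float)
    direction /= np.linalg.norm(direction)
    hits = []
    for layer, mesh in enumerate(clothing_meshes):
        locs, _, _ = mesh.ray.intersects_location(
            ray_origins=[skin_point], ray_directions=[direction])
        for loc in locs:
            t = float(np.dot(loc - skin_point, direction))
            if 0.0 < t < max_dist:
                hits.append((t, layer, loc))
    hits.sort(key=lambda h: h[0])
    return hits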

Another approach, which may or may not match anything in the above method, would be to know where the material is transparent, and simply exclude all inner-layer triangles that are fully covered by something opaque. Additionally, you could tessellate both inner and outer layers so that they each have vertices where the normals meet – project all outer-layer vertices onto the inner layer, and all inner-layer vertices onto the outer layer – so that you know you have a guaranteed distance and there won't be any “corner cutting” poke-through.

enum Bool { True, False, FileNotFound };

Yes, finding the closest point on the next layer of clothing can be complicated. “Towards the body” isn't quite it. We know where the bones are and what the weights are, so it may be possible to get guidance from that. Projecting towards the closest points on the bones involved, weighted by rigging weight, might be close. Needs further thought, and probably some prototyping in Blender.
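
Something along these lines, as a sketch; bones are treated as simple head/tail segments and all names are illustrative:

import numpy as np

def closest_on_segment(p, a, b):
    ab = b - a
    t = np.clip(np.dot(p - a, ab) / np.dot(ab, ab), 0.0, 1.0)
    return a + t * ab

def guided_direction(p, bone_segments, weights):
    """bone_segments: list of (head, tail) point pairs; weights: the rig
    weights of this vertex for those bones. Returns a unit direction
    aiming the vertex at the weight-blended closest points on its bones."""
    p = np.asarray(p, dtype=float)
    target = np.zeros(3)
    for (a, b), w in zip(bone_segments, weights):
        target += w * closest_on_segment(p, np.asarray(a, float), np.asarray(b, float))
    target /= max(sum(weights), 1e-9)
    d = target - p
    return d / np.linalg.norm(d)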

This is encouraging. Thanks, everybody.

hplus0603 said:
As long as the cages then have sufficient separation to avoid interpolation cut-throughs and differences in tessellation, it should Just Work ™

Yes. Getting an algorithmic definition of “sufficient separation” is reasonably difficult.

If you project the points of each layer onto the next layer, in both directions, it doubles the number of vertices. Worse, this doubling happens with each additional layer, so skin, shirt, suit coat, overcoat is 8x. Ouch. Need to figure out which additional points are really needed and which are redundant.

If the projected point is “close to” another point on the surface, the nearby point can be used. “Close to” is relative to the distance between the surfaces. A loose suit coat will need fewer extra points than a wetsuit.
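
The redundancy check could look something like this, as a sketch (scipy's KD-tree assumed for the nearest-vertex query; the relative tolerance is a made-up knob):

import numpy as np
from scipy.spatial import cKDTree

def filter_projected(projected, layer_gap, existing_vertices, rel_tol=0.5):
    """projected: (n, 3) candidate points; layer_gap: (n,) local distance
    between the two surfaces; existing_vertices: (m, 3) points already on
    the target mesh. Keep a candidate only if no existing vertex is
    within rel_tol of the local layer separation."""
    tree = cKDTree(existing_vertices)
    nearest_dist, nearest_idx = tree.query(projected)
    keep = nearest_dist > rel_tol * layer_gap
    return projected[keep], nearest_idx, keep

With a loose suit coat the gap is large, so most candidates get folded into existing vertices; with a wetsuit almost all of them survive.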

Is this a new idea, or did somebody figure this out in the 1990s and write a SIGGRAPH paper or publish in Game Developer or something? It seems obvious enough that someone must have been down this road.

Nagle said:
Is this a new idea

Back when I did avatars for a living, I wanted to try it, but at the time other priorities were higher and vertices were more expensive, so I never did. I did not find a good write-up of it elsewhere, so there is some risk it's actually novel. And then stencil- and id-buffer-based methods were all the rage for a bit, and after that characters were never the main focus anymore.

If you combine vertex projection with known-hidden triangle elimination, it might not be so bad.

enum Bool { True, False, FileNotFound };

Nagle said:
If the projected point is “close to” another point on the surface, the nearby point can be used. “Close to” is relative to the distance between the surfaces. A loose suit coat will need fewer extra points than a wetsuit.

It also depends on the weights - whether they are ‘close / linear' as well. To evaluate this, you could create a local parametrization of a patch surrounding the vertex in question. Then, for each weight channel affecting the patch, fit a plane to the weights; if the vertex weight is on the plane (and the plane is an acceptable fit for the whole patch at all), the vertex can be removed.
Headache and path of failures expected : )
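
Roughly like this, as a sketch; the (u, v) parametrization of the patch is assumed to be computed elsewhere:

import numpy as np

def weight_is_linear(uv, w, center_idx, tol=1e-3):
    """uv: (n, 2) local patch coordinates, w: (n,) one weight channel,
    center_idx: index of the candidate vertex within the patch.
    True if a plane w ~ a*u + b*v + c explains the whole patch and the
    candidate lies on that plane, so the vertex adds no information."""
    A = np.column_stack([uv[:, 0], uv[:, 1], np.ones(len(uv))])
    coeffs, *_ = np.linalg.lstsq(A, w, rcond=None)
    residual = w - A @ coeffs
    plane_ok = np.abs(residual).max() < tol
    vertex_ok = abs(residual[center_idx]) < tol
    return plane_ok and vertex_ok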
Another option would be to generate many random character poses (from joint limits, if given), detect problems, and increase tessellation to fix them. Maybe better.
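
For example, as a sketch (`skin` and `signed_distance_to_body` are stand-ins for whatever skinning and inside/outside tests the pipeline already has):

import numpy as np

rng = np.random.default_rng(0)

def find_pokethrough(joint_limits, skin, signed_distance_to_body,
                     cloth_verts, samples=200):
    """Draw random poses within joint limits, skin the cloth, and return
    the indices of cloth vertices that ever end up inside the body."""
    lo, hi = joint_limits                        # per-joint angle limits
    bad = set()
    for _ in range(samples):
        pose = rng.uniform(lo, hi)
        skinned = skin(cloth_verts, pose)        # (n, 3) posed cloth verts
        inside = signed_distance_to_body(skinned, pose) < 0.0
        bad.update(np.nonzero(inside)[0].tolist())
    return sorted(bad)                           # candidates for more tessellation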

Even if you manage to resolve all poke-throughs, the results won't be perfect, because increasing the number of layers also increases the deformation. The fashion designers will hate you in any case.
It's probably more promising to remove the parts of cloth and skin which are occluded by outer layers. Then you have fewer issues with poke-throughs and the cloth can remain tighter. The vertex explosion is also prevented.

The problem is also affected by our broken methods of skinning. Matrix palette skinning can't do volume preservation or sliding skin, dual quaternions are no improvement, and advanced methods like delta mush are expensive and still fail to respect anatomy. Acceptable results are mostly a matter of artist skill in setting up procedural extra bones, but extreme poses still look more like zombies than humans.
If we add outer layers at some distance from the skin, the related issues of self-intersecting folds increase and become amplified.

Thus a perfect solution is out of reach, which makes it hard to decide on how much effort and research should be spent.

This topic is closed to new replies.
