
Current State of Mocap?

Started by November 15, 2010 06:44 AM
7 comments, last by ddn3 13 years, 11 months ago
I found this thread while looking for cheap/diy mocap solutions:

http://www.gamedev.net/community/forums/topic.asp?topic_id=387230

But it's double super old, and I'm wondering if the state of art has advanced much for small players looking to spend as little as possible. What's possible these days?
I think we'll soon see cheap-ish mocap solutions based on Kinect hardware. The first hacks already show promise in that direction.
If that's not the help you're after then you're going to have to explain the problem better than what you have. - joanusdmentia

My Page davepermen.net | My Music on Bandcamp and on Soundcloud

If you're looking to spend as little as possible, the name of the game is rotoscoping. It takes a little bit of animation ability, but then, so does mocap (the results of mocap, even after cleanup, are NOT an effective final game animation solution).
Yeah, reading about that actually prompted this question. I wonder how feasible it will be. Maybe using multiple Kinects? There's an interference issue right now with using more than one at the same time, but a little clever hacking could get around it. (They project an infrared pattern to measure depth, so I'm thinking either pulsing the projectors so they alternate, or using slightly different frequencies, would work there?)
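
For what it's worth, here's a minimal C++ sketch of the pulsing idea. The DepthSensor wrapper and its enableEmitter()/grabDepthFrame() calls are hypothetical stand-ins for whatever driver you'd actually use (libfreenect, OpenNI, etc.), not a real API; the point is just that only one IR projector is lit while either camera exposes.

#include <vector>

struct DepthFrame { std::vector<unsigned short> pixels; };

// Hypothetical sensor wrapper -- a stand-in, not a real Kinect API.
class DepthSensor {
public:
    void enableEmitter(bool on) { emitterOn_ = on; }      // toggle the IR projector
    DepthFrame grabDepthFrame() { return DepthFrame(); }  // stub: would block for one depth frame
private:
    bool emitterOn_ = false;
};

// Alternate the two emitters so the projected IR patterns never overlap in time.
void captureAlternating(DepthSensor& a, DepthSensor& b,
                        std::vector<DepthFrame>& outA,
                        std::vector<DepthFrame>& outB,
                        int frames)
{
    for (int i = 0; i < frames; ++i) {
        a.enableEmitter(true);  b.enableEmitter(false);
        outA.push_back(a.grabDepthFrame());   // only sensor A's pattern is visible
        a.enableEmitter(false); b.enableEmitter(true);
        outB.push_back(b.grabDepthFrame());   // only sensor B's pattern is visible
    }
}

The obvious cost is halving your effective frame rate per sensor, which may or may not be acceptable for capturing fast motion.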

http://www.motion-capture-system.com/

This is what motion capture systems should look like. No spandex.

This is way out of the price range for your average independent, but I thought I would throw it out there as it is something a little different than most traditional systems you see.

One of the best selling points for it would have to be the hand system. (Flaws are that it can suffer from really bad sensor drift, so you may need to do a basic 'home' positioning from time to time, it doesn't do facial capture, and it doesn't handle world positioning all that well.)
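
As a rough sketch of what that 'home' positioning amounts to (the Pose layout here is a placeholder, not the actual ShapeWrap data format): capture the sensor output while the actor holds a known pose, store the per-joint error, and subtract it from later frames.

#include <algorithm>
#include <cstddef>
#include <vector>

struct Pose { std::vector<float> jointAngles; };  // one bend angle per joint, in radians

class DriftCorrector {
public:
    // Call while the actor holds the known calibration ("home") pose.
    void calibrate(const Pose& measured, const Pose& knownHome) {
        std::size_t n = std::min(measured.jointAngles.size(),
                                 knownHome.jointAngles.size());
        offsets_.assign(n, 0.0f);
        for (std::size_t i = 0; i < n; ++i)
            offsets_[i] = measured.jointAngles[i] - knownHome.jointAngles[i];
    }
    // Subtract the stored per-joint error from a live frame.
    Pose correct(const Pose& measured) const {
        Pose out = measured;
        for (std::size_t i = 0; i < offsets_.size() && i < out.jointAngles.size(); ++i)
            out.jointAngles[i] -= offsets_[i];
        return out;
    }
private:
    std::vector<float> offsets_;  // measured minus true angle, per joint
};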

The system works on flexible sensor ropes, so you never lose data due to something obscuring the cameras.
Old Username: Talroth
If your signature on a web forum takes up more space than your average post, then you are doing things wrong.
Quote: Original post by Pete Michaud
Yeah, reading about that actually prompted this question. I wonder how feasible it will be. Maybe using multiple kinects?
Sure, there is research in this direction. Reconstructed depth of this sort tends to be very noisy, so while multi-camera motion reconstruction has been feasible for a while, the results haven't been usable for production mocap work.

Quote: http://www.motion-capture-system.com/

This is what motion capture systems should look like. No spandex.
Those have certain advantages -- unlimited capture volume, able to capture in occluded environments, no lighting restrictions -- but they're not very good, for a variety of other reasons. You get only minimal data out of them (and you can't move markers to get different data), they need to be recalibrated often, and it takes a LOT of work to retarget the motion data to your target skeleton. And depending on the sensor types, they may be thrown off by nearby metal, which severely restricts their use. It's a cool technology, but you use it when you can't use optical tracking, not the other way around.
Quote: Original post by Sneftel
Quote: http://www.motion-capture-system.com/

This is what motion capture systems should look like. No spandex.
Those have certain advantages -- unlimited capture volume, able to capture in occluded environments, no lighting restrictions -- but they're not very good, for a variety of other reasons. You get only minimal data out of them (and you can't move markers to get different data), they need to be recalibrated often, and it takes a LOT of work to retarget the motion data to your target skeleton. And depending on the sensor types, they may be thrown off by nearby metal, which severely restricts their use. It's a cool technology, but you use it when you can't use optical tracking, not the other way around.


How do you define "Minimal" data? I only got to use the ShapeWrap III Plus for an afternoon, but it gives more than enough data for a decent humanoid figure, plus it is very clean data. No missing segments from occluded reflectors, and you can never get one that accidentally pops off and bounces across the room. (However, I will admit that can make for some very amusing animation bloopers.)

I also don't understand your comment about the amount of work to retarget the motion data to your target skeleton. With the system I used it was basically already done: just fill in a table and the system remaps the data. All the data was clean with only minimal errors, far fewer than I've had from data coming out of traditional optical point tracking. (Especially when working with two or more people; you can dog-pile actors and still capture all the data, you just have to deal with positioning in post.) You make it sound as if the optical data is always going to be cleaner and easier to work with.

Personally, I am more than happy to put up with the minor shortcomings of a system like ShapeWrap if it means I don't have to deal with spandex. Improving a system like ShapeWrap to take care of point drift is what the industry should be pushing for.
Old Username: Talroth
If your signature on a web forum takes up more space than your average post, then you are doing things wrong.
Quote: Original post by Talroth
How do you define "Minimal" data?
Minimal because it only provides angular joint measurements. Chest expansion, shrugging, muscle flexion, etc. can't be measured with angular sensors, and accelerometers have their own (really really annoying) problems.
Quote: I also don't understand your comment on the amount of work to retarget the motion data to your target skeleton.
The issue is that all you have (after processing) is angular data, which is a poor data source for retargeting. Positional data gives you ground truth data for effector positions and lets you deal with differing limb lengths and joint attachments more robustly.
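
A quick 2D forward-kinematics sketch (my own illustration, not tied to any particular system) of why angle-only data retargets poorly: identical joint angles on two arms with different bone lengths put the hand in different places, and without recorded effector positions there is no ground truth to correct against.

#include <cmath>
#include <cstdio>

struct Vec2 { float x, y; };

// Planar two-bone arm: shoulder at the origin, angles in radians.
Vec2 handPosition(float upperLen, float lowerLen,
                  float shoulderAngle, float elbowAngle)
{
    Vec2 elbow { upperLen * std::cos(shoulderAngle),
                 upperLen * std::sin(shoulderAngle) };
    float total = shoulderAngle + elbowAngle;
    return { elbow.x + lowerLen * std::cos(total),
             elbow.y + lowerLen * std::sin(total) };
}

int main() {
    const float shoulder = 0.6f, elbow = 0.9f;                 // same captured angles
    Vec2 actor = handPosition(0.30f, 0.28f, shoulder, elbow);  // actor's arm lengths
    Vec2 rig   = handPosition(0.35f, 0.33f, shoulder, elbow);  // game rig's arm lengths
    std::printf("actor hand: (%.3f, %.3f)\n", actor.x, actor.y);
    std::printf("rig hand:   (%.3f, %.3f)\n", rig.x, rig.y);
    return 0;
}

Positional markers record where the hands and feet actually were, which is exactly the ground truth a retargeting solver can pull toward.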
Quote: You make it sound as if the Optical data is always going to be cleaner and easier to work with...Personally I am more than happy to put up with the minor short comings of a system like Shape Wrap if it means I don't have to deal with spandex.
It sounds like you've had poor results with optical tracking in the past. I can't really speak to that -- our mocap coordinator is great, putting on the suit only takes a couple of minutes, and the data is as clean as I'd want. In contrast, we've tested several non-optical systems like that, and the data was never anywhere near as accurate.
Using multiple Kinects or any other type of cheap 3D-sensing camera will work, but the data is so low-resolution and lossy that you can't really use the joint information directly. If you look at the Kinect videos where they actually show the 3D skeleton, you can see the jittery joint movements and the moments when the guesses are wrong. What you need to do is feed that angular motion and those forces into a stabilizing physical model like NaturalMotion, which has its own built-in stabilization algorithms to correct for the imperfect data. In the end you'll get "believable" but not 1:1 motion capture, for probably 1/10 the price of a mocap studio.
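
To make the stabilization idea concrete in the simplest possible form, here is a basic exponential low-pass filter over the noisy joint positions. This is not NaturalMotion's method, just a crude stand-in that damps jitter at the cost of some lag.

#include <cstddef>
#include <vector>

struct Joint { float x, y, z; };

class JointSmoother {
public:
    explicit JointSmoother(float alpha) : alpha_(alpha) {}

    // Blend each new noisy sample toward the running estimate.
    std::vector<Joint> filter(const std::vector<Joint>& noisy) {
        if (state_.empty()) state_ = noisy;          // first frame: adopt it as-is
        for (std::size_t i = 0; i < noisy.size() && i < state_.size(); ++i) {
            state_[i].x += alpha_ * (noisy[i].x - state_[i].x);
            state_[i].y += alpha_ * (noisy[i].y - state_[i].y);
            state_[i].z += alpha_ * (noisy[i].z - state_[i].z);
        }
        return state_;
    }

private:
    float alpha_;                 // 0..1; lower = smoother but laggier
    std::vector<Joint> state_;    // smoothed joint positions from the last frame
};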

-ddn

