
Gesture Recognition Approach


Hello fellow gamedevs,

I have a quick question regarding gesture recognition and hope some of you are better informed than I am. (Which is most likely the case.)

What I did first:

Being new to the field, I first took a very naive approach: sampling my sensor x times per second and then doing a likelihood analysis (via a Levenshtein adaptation), which was painfully slow and not really robust (for obvious reasons).
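For context, the naive version boiled down to something like the sketch below: record a template of samples, sample the live input at a fixed rate, and compare the two sequences with an edit-distance style metric. This is only my rough reconstruction, and the names (`Sample`, `gestureDistance`, `tolerance`) are placeholders, not actual code from the project:

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

using Sample = float; // one angular reading, e.g. in radians (assumption)

// Levenshtein-style distance: two samples "match" if they are within `tolerance`.
// Cost is O(n*m) per gesture template, which is why this gets slow quickly.
float gestureDistance(const std::vector<Sample>& recorded,
                      const std::vector<Sample>& templ,
                      float tolerance)
{
    const std::size_t n = recorded.size(), m = templ.size();
    std::vector<std::vector<float>> d(n + 1, std::vector<float>(m + 1));
    for (std::size_t i = 0; i <= n; ++i) d[i][0] = static_cast<float>(i);
    for (std::size_t j = 0; j <= m; ++j) d[0][j] = static_cast<float>(j);

    for (std::size_t i = 1; i <= n; ++i)
        for (std::size_t j = 1; j <= m; ++j) {
            const float subst =
                std::fabs(recorded[i - 1] - templ[j - 1]) <= tolerance ? 0.f : 1.f;
            d[i][j] = std::min({ d[i - 1][j] + 1.f,          // deletion
                                 d[i][j - 1] + 1.f,          // insertion
                                 d[i - 1][j - 1] + subst }); // substitution
        }
    return d[n][m]; // lower = recorded motion is more likely this gesture
}
```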

What I do now:

My new approach differs quite a bit from that.

Now I have a list of sensor samples, a time value specifying how quickly each state has to be reached to be satisfied, and a fuzziness factor. Every sample point in my list then basically acts as a state, and I store which state was last satisfied.

A state is satisfied as soon as its predecessor is satisfied (state -1 always is) and we reach its sample point within the given time, without leaving the capsule spanned by the two sample points with the fuzziness factor as radius.
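If I understand my own description correctly, the per-frame check looks roughly like the sketch below. The names (`GestureTracker`, `distanceToSegment`, the 2D `Vec2` for two angle channels) are assumptions for illustration, not the actual implementation; the idea is just: advance to state i+1 when the input reaches sample i+1 within the time budget while staying inside the capsule around segment (i, i+1), and reset otherwise.

```cpp
#include <cmath>
#include <vector>

struct Vec2 { float x, y; };   // e.g. two joint angles (assumption)

static float dist(Vec2 a, Vec2 b) { return std::hypot(a.x - b.x, a.y - b.y); }

// Distance from point p to the line segment a-b (the capsule's axis).
static float distanceToSegment(Vec2 p, Vec2 a, Vec2 b)
{
    const Vec2 ab{ b.x - a.x, b.y - a.y };
    const float len2 = ab.x * ab.x + ab.y * ab.y;
    float t = len2 > 0.f ? ((p.x - a.x) * ab.x + (p.y - a.y) * ab.y) / len2 : 0.f;
    t = std::fmax(0.f, std::fmin(1.f, t));
    return dist(p, Vec2{ a.x + t * ab.x, a.y + t * ab.y });
}

class GestureTracker {
public:
    GestureTracker(std::vector<Vec2> samples, float timePerState, float fuzziness)
        : samples_(std::move(samples)), timePerState_(timePerState), fuzziness_(fuzziness) {}

    // Feed one sensor reading per frame; returns true once the whole gesture is done.
    bool update(Vec2 input, float dt)
    {
        if (lastSatisfied_ + 1 >= static_cast<int>(samples_.size()))
            return true;                                    // already complete

        // "State -1 is always satisfied": before the gesture starts we only
        // wait for the input to reach the first sample point.
        if (lastSatisfied_ < 0) {
            if (dist(input, samples_[0]) <= fuzziness_) { lastSatisfied_ = 0; elapsed_ = 0.f; }
            return false;
        }

        const int next = lastSatisfied_ + 1;
        elapsed_ += dt;
        const bool insideCapsule =
            distanceToSegment(input, samples_[lastSatisfied_], samples_[next]) <= fuzziness_;

        if (!insideCapsule || elapsed_ > timePerState_) {
            reset();                                        // left the capsule or too slow
        } else if (dist(input, samples_[next]) <= fuzziness_) {
            lastSatisfied_ = next;                          // next state satisfied
            elapsed_ = 0.f;
        }
        return lastSatisfied_ + 1 == static_cast<int>(samples_.size());
    }

    void reset() { lastSatisfied_ = -1; elapsed_ = 0.f; }

private:
    std::vector<Vec2> samples_;
    float timePerState_;
    float fuzziness_;
    int   lastSatisfied_ = -1;
    float elapsed_ = 0.f;
};
```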

The question(s) being...

Does this approach have a name, so I can do some reading up on it? I am not very well acquainted with the terminology, as many of you might have guessed by now, so I couldn't find anything so far.

Also: does this even sound like a feasible approach? It works for very simple gestures and I am currently in the process of testing more advanced ones, but it also clearly has its shortcomings...

I hope someone is able and willing to help. If you need more information, I will provide it as soon as possible. Thanks in advance and warm regards,

Kuro

PS: I almost forgot to mention! My sensor data is continuous and clamped (since it's angular data). Positional data is also planned but should work similarly.
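Since the data is angular and wraps around, one thing I have to watch out for in any of the distance checks is computing differences across the wrap boundary. A minimal sketch of a wrap-aware difference, assuming angles in radians (`angularDelta` is a hypothetical helper, not existing code):

```cpp
#include <cmath>

// Smallest signed difference between two angles in radians, result in [-pi, pi).
// Prevents a gesture crossing the 0 / 2*pi boundary from looking like a huge jump.
float angularDelta(float a, float b)
{
    constexpr float kPi = 3.14159265358979f;
    float d = std::fmod(a - b, 2.0f * kPi);
    if (d < -kPi)  d += 2.0f * kPi;
    if (d >=  kPi) d -= 2.0f * kPi;
    return d;
}
```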
