[sldev] Body motion and facial expression tracking, Microsoft did it

Argent Stonecutter secret.argent at gmail.com
Sat Jun 6 11:43:59 PDT 2009


I think you're envisioning a much more ambitious project than I'm  
suggesting! :)

On 2009-06-05, at 10:39, Jan Ciger wrote:
> How specifically do you decide which joints are relevant for a given
> case? Also which angles are "small enough" before the keyframe doesn't
> work anymore?

You'd provide them as parameters to the LSL call, eyeball the result, and  
decide "yeah, that's good enough" or "no, better cut it down". The UI  
would be an LSL call, something like this:

llSetAnimationTarget(integer joint, key target_prim, float strength, ...)

Or like this:

llSetAnimationTargetParams(list parameters);

The parameters could be a simple list of values (strength, range,  
timeouts) or a rules-style list similar to particle systems: [ANIM_MASK,  
JOINT_RIGHT_WRIST|JOINT_RIGHT_ELBOW, ANIM_RANGE, 0.1, ANIM_STRENGTH,  
2, ANIM_ANGLE_LIMIT, JOINT_RIGHT_ELBOW, PI/18, ANIM_TARGET,  
target_prim, ...].

So this would move the user's hand to clasp the other avatar's hand  
while the hand was within 10cm of the goal, with a time factor of two  
seconds, and without bending the right elbow more than 10 degrees from  
the original animation.
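
To make that concrete, here's a rough sketch of how a scripter might use  
the second form from inside a prim the avatar should reach for.  
llSetAnimationTargetParams() and the ANIM_*/JOINT_* constants are only  
the hypothetical API sketched above, not anything that exists in LSL  
today; the rest is ordinary LSL:

default
{
    touch_start(integer num)
    {
        // Ask the toucher for permission to animate their avatar.
        llRequestPermissions(llDetectedKey(0), PERMISSION_TRIGGER_ANIMATION);
    }

    run_time_permissions(integer perm)
    {
        if (perm & PERMISSION_TRIGGER_ANIMATION)
        {
            llStartAnimation("handshake");   // the ordinary keyframed animation

            // Hypothetical call: pull the right wrist/elbow toward this prim
            // while it is within 10 cm, with a two-second time factor, never
            // bending the elbow more than 10 degrees from the keyframes.
            llSetAnimationTargetParams([
                ANIM_MASK, JOINT_RIGHT_WRIST | JOINT_RIGHT_ELBOW,
                ANIM_RANGE, 0.1,
                ANIM_STRENGTH, 2,
                ANIM_ANGLE_LIMIT, JOINT_RIGHT_ELBOW, PI / 18,
                ANIM_TARGET, llGetKey()
            ]);
        }
    }
}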

At worst, it wouldn't be nearly as ludicrous as having the avatars  
waving hands at each other.

> Then I am a bit confused - I thought we are talking about real-time
> automatic animation, not something to produce better keyframe  
> animation
> to be played at a later time (which is of course doable).

The scripter would run the handshake animation and eyeball the  
results when played back with a variety of complementary avatars.  
The individual scripter's experience, and market forces, would be  
left to handle the rest.


