[sldev] Body motion and facial expression tracking, Microsoft did it
Mike Monkowski
monkowsk at watson.ibm.com
Wed Jun 3 11:09:24 PDT 2009
Philippe Bossut (Merov Linden) wrote:
> On Jun 3, 2009, at 8:17 AM, Mike Monkowski wrote:
>
>
>>Philippe Bossut (Merov Linden) wrote:
>>
>>>Also note that the videos by MS are *not* demos (i.e. live
>>>effective code) but staged footage.
>>
>>Here's the live demo. Pretty much the same script.
>>http://www.youtube.com/watch?v=GH_gDreIdcM
>>
> Wait, wait: that's not the same script at all:
It's the same as the second of the two earlier videos. They even do the
elephant drawing the same way.
The first of the two earlier videos, as was pointed out before, was just
a concept video.
I agree with you, though, that this isn't really what would be effective
for SL. For SL, conversational cues would be more useful: nodding yes or
no, waving, and maybe hand gestures while speaking.
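To be concrete, I'm picturing that the tracker never streams raw joint
data at all; a recognizer emits discrete gesture events and we just map
those onto the avatar's existing canned animations. A minimal sketch
(recognizer_poll(), trigger_avatar_animation(), and the animation names
are all hypothetical stand-ins, not real viewer API):

    #include <iostream>
    #include <map>
    #include <string>

    enum class GestureEvent { None, NodYes, ShakeNo, Wave };

    // Hypothetical stub: a real viewer would poll tracking middleware.
    GestureEvent recognizer_poll() { return GestureEvent::Wave; }

    // Hypothetical stub: a real viewer would start an avatar animation.
    void trigger_avatar_animation(const std::string& name)
    {
        std::cout << "trigger animation: " << name << "\n";
    }

    void update_conversational_cues()
    {
        // A handful of conversational cues, each mapped to a canned
        // animation rather than raw motion capture.
        static const std::map<GestureEvent, std::string> cue_map = {
            { GestureEvent::NodYes,  "nod_yes"  },
            { GestureEvent::ShakeNo, "shake_no" },
            { GestureEvent::Wave,    "wave"     },
        };
        const auto it = cue_map.find(recognizer_poll());
        if (it != cue_map.end())
            trigger_avatar_animation(it->second);
    }

    int main() { update_conversational_cues(); }

That would also keep the network traffic down to one animation trigger
per gesture instead of a continuous joint stream.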
The UI controls for this might be tricky. Would you have a
push-to-move control like push-to-talk for the microphone? If not,
you could get a lot of out-of-context movement from the avatar. Virtual
nose picking? :-)
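What I mean by push-to-move, as a sketch (is_push_to_move_held() and
apply_tracked_pose() are made-up names, nothing in the actual viewer):
tracked frames only reach the avatar while the key is held, exactly
like the mic gate.

    #include <iostream>

    struct TrackedPose { float head_yaw; /* ...other joints... */ };

    // Hypothetical stub: would query real keyboard state in a viewer.
    bool is_push_to_move_held() { return true; }

    // Hypothetical stub: would drive the avatar skeleton in a viewer.
    void apply_tracked_pose(const TrackedPose& pose)
    {
        std::cout << "apply head_yaw " << pose.head_yaw << "\n";
    }

    void on_tracker_frame(const TrackedPose& pose)
    {
        if (is_push_to_move_held())
            apply_tracked_pose(pose);  // user opted in: forward motion
        // else drop the frame; idle fidgeting never shows up in-world
    }

    int main() { on_tracker_frame(TrackedPose{0.25f}); }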
Ooh, how about handshakes across the internet? That would be difficult.
Sounds like a good topic for the User Experience Interest Group meeting.
Mike