[sldev] Anyone here with OpenCV experience?
Melinda Green
melinda at superliminal.com
Fri May 22 00:18:15 PDT 2009
I'm glad that we're narrowing the scope of the discussion toward what
can be done in the short term. I think that triggering existing
animations is definitely the low-hanging fruit because it won't require
any server changes. I interpret the challenge to be figuring out what
can be done quickly and easily. I think that facial gestures are
probably within the realm of the immediately possible, though I still
like the idea of trying to ground the discussion by focusing on what
would be required to recognize yes/no head motion triggers and turn them
into avatar head animations. Just achieving that one proof-of-concept
will tell us a *lot* about what else we could or should be attempting to
do with this.
Regarding facial animations, the one idea that Tigro gives me which
almost seems too cute *not* to consider is making sure that his facial
character codes use standard text emoticons instead of numbers. How cute
would it be if a tongue-to-the-right were encoded with colon+P and a
tongue-to-the-left with colon+b? It might be impossible to encode the
whole range of simple faces using unicode emoticons, but one nice
benefit would be that we could also parse them out of text chat and
trigger the faces that users already naturally type as a poor-man's way
of expressing emotion.
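To make that concrete, here is a minimal sketch of the chat-parsing side,
assuming a hypothetical emoticon-to-animation table (the animation names are
placeholders for illustration, not real viewer assets):

import re

# Illustrative emoticon-to-animation table; the animation names are
# placeholders, not real viewer assets.
EMOTICON_ANIMATIONS = {
    ":P": "express_tongue_right",
    ":b": "express_tongue_left",
    ":)": "express_smile",
    ":(": "express_frown",
    ";)": "express_wink",
    "o.0": "express_surprise",
}

# One regex that matches any emoticon in the table.
_EMOTICON_RE = re.compile("|".join(re.escape(e) for e in EMOTICON_ANIMATIONS))

def animations_from_chat(text):
    """Return the facial animations implied by emoticons in a chat line."""
    return [EMOTICON_ANIMATIONS[m.group(0)] for m in _EMOTICON_RE.finditer(text)]

print(animations_from_chat("that was great :P ...wait, what? o.0"))
# -> ['express_tongue_right', 'express_surprise']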
But again, that's just a cute bit of fun. The important thing is the
first part about simply detecting and triggering head shakes and nods
from video input without any tracking dots or other special user set-up.
What's the easiest way to do that?
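One possible easy route, as a very rough sketch using OpenCV's stock
frontal-face Haar cascade on a webcam (the motion thresholds, the frame
window, and the point where an avatar animation would actually be triggered
are all assumptions for illustration, not anything in the viewer):

import cv2
from collections import deque

# Stock frontal-face detector that ships with OpenCV.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

centers = deque(maxlen=15)  # recent face-center positions (~half a second)

def classify(history):
    """Naive nod/shake test: compare horizontal vs. vertical travel."""
    xs = [c[0] for c in history]
    ys = [c[1] for c in history]
    dx, dy = max(xs) - min(xs), max(ys) - min(ys)
    if dx > 40 and dx > 2 * dy:
        return "shake"  # mostly side-to-side motion -> "no"
    if dy > 25 and dy > 2 * dx:
        return "nod"    # mostly up-and-down motion -> "yes"
    return None

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.2, minNeighbors=5)
    if len(faces):
        x, y, w, h = faces[0]
        centers.append((x + w // 2, y + h // 2))
        if len(centers) == centers.maxlen:
            gesture = classify(centers)
            if gesture:
                print("detected:", gesture)  # here the avatar animation would fire
                centers.clear()
    cv2.imshow("face", frame)
    if cv2.waitKey(1) == 27:  # Esc quits
        break
cap.release()
cv2.destroyAllWindows()

A real version would want smoothing and a proper oscillation test (one
back-and-forth shouldn't count as a nod), but even something this crude would
tell us whether the latency and robustness are in the right ballpark.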
-Melinda
Tigro Spottystripes wrote:
> actually, how about a mini "programming language" for the gesture
> triggers (gestures here used in the regular SL sense) to allow
> flexibility in expression detection, with one or two characters
> identifying which face part followed by a number specifying a range of
> position, perhaps two digits to make the range thing more flexible
>
> like, for the "wtf?! o.0" gesture the trigger could be something like
> LE79RE13 (which translates to Left Eye between quite open and absurdly
> open, and Right Eye from almost closed to slightly closed)
>
> we could have specific markers for the corners of the mouth, the upper
> lip, the bottom lip, the chin, left and right cheeks/cheekbones, the
> eyebrows, and I'm not sure there is more to track (other than gaze
> direction), ah, of course, tongue! :P
>
> since the tongue can move out but also sideways, there could be two
> different markers, TO for Tongue and TS for Tongue Sideways, or if 10 is
> too little resolution for tongue movement make one for each side, and if
> going to one side the marker for the other is zero, hm, actually, we
> will need one for TV Tongue Vertical, for vertical movements of the
> tongue as well (and for the resolution issue if present, just do TU and
> TD like for sideways movement)
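Parsing trigger codes like the LE79RE13 example above would be
straightforward. A minimal sketch, assuming two-letter part codes each
followed by a low/high digit pair on a 0-9 scale (the part-code table below
is illustrative and incomplete):

import re

# Illustrative part codes taken from the proposal above; not a complete set.
PART_CODES = {
    "LE": "left_eye", "RE": "right_eye",
    "TO": "tongue_out", "TS": "tongue_sideways", "TV": "tongue_vertical",
}

_TOKEN = re.compile(r"([A-Z]{2})(\d)(\d)")

def parse_trigger(code):
    """Parse e.g. 'LE79RE13' into {'left_eye': (7, 9), 'right_eye': (1, 3)}."""
    ranges = {}
    for part, lo, hi in _TOKEN.findall(code):
        if part not in PART_CODES:
            raise ValueError("unknown face part code: " + part)
        ranges[PART_CODES[part]] = (int(lo), int(hi))
    return ranges

print(parse_trigger("LE79RE13"))
# -> {'left_eye': (7, 9), 'right_eye': (1, 3)}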
>
> Dzonatas Sol wrote:
>
>> Not everybody will wear a human face in VR, so there is merit in using
>> avatar gestures. A specific interface would require a one-to-one
>> relationship between the human face and the avatar face. There may be
>> different gestures needed when the human face performs certain
>> actions. Keeping it in avatar gestures lets us not rely on that
>> one-to-one relationship. A middle layer between the human facial
>> gestures and the avatar can translate them into the current shape of
>> the avatar face. The avatar may have several heads (I've seen 7-headed
>> dragons) and maybe someone wants to control them all with human facial
>> gestures. Something that is too specific in a one-to-one relationship
>> wouldn't allow that to happen.
>>
>> I think a channel that sends recognized avatar gestures would be less
>> noisy than a channel that sends all head motions and facial positions,
>> continuously.
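A minimal sketch of that middle-layer idea, with the avatar types, gesture
names, and animation names all invented purely for illustration:

# Recognized human gestures are looked up in a per-avatar table, so nothing
# assumes a one-to-one human-to-avatar face mapping.  All names are invented
# for the example.
AVATAR_GESTURE_MAPS = {
    "human_female": {
        "nod": ["head_nod"],
        "smile": ["express_smile"],
    },
    "seven_headed_dragon": {
        "nod": ["head%d_nod" % i for i in range(7)],  # drive all seven heads
        "smile": ["all_heads_bare_teeth"],
    },
}

def translate(avatar_type, human_gesture):
    """Return the avatar animations to play for a recognized human gesture."""
    return AVATAR_GESTURE_MAPS.get(avatar_type, {}).get(human_gesture, [])

print(translate("seven_headed_dragon", "nod"))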
>>
>> Moriz Gupte wrote:
>>
>>> I think that with the exception of 'yes' and 'no', the thought of
>>> translating head motions to 'construct' a facial expression or a
>>> gesture expression, or even implement FACS
>>> <http://en.wikipedia.org/wiki/Facial_Action_Coding_System> (I don't
>>> think Philip meant that), seems at least to me much more complex --
>>> probably why I interpreted Philip's post the way I did.
>>> And I do not recommend using the very narrow channel of information
>>> that head motions provide to drive a wide range of avatar
>>> gestures... because this channel will be so noisy that a lot of
>>> ambiguities will arise, so much so that it will cease to be useful.
>>> So what looks like the 'simple' low-hanging fruit... might ultimately
>>> be problematic.
>>>
>>> On Thu, May 21, 2009 at 10:34 PM, Melinda Green
>>> <melinda at superliminal.com> wrote:
>>>
>>> my understanding is that Philip and Merov's
>>> intent is to simply translate user head movements into avatar
>>> gestures.
>>>
>>>
>>>
>>>
>>> --
>>> Rameshsharma Ramloll, PhD, Research Assistant Professor, Idaho State
>>> University, Pocatello. Tel: 208-282-5333
>>> More info at http://tr.im/RRamloll