[sldev] Anyone here with OpenCV experience?

Tigro Spottystripes tigrospottystripes at gmail.com
Thu May 21 21:06:26 PDT 2009


How about making it even more generalized and allowing any input to move
anything? For example, we could map three axes of a VR glove to one
hand using IK (perhaps like what happens when a selected object is
within the arm's reach), the analog or digital inputs of the fingers would
trigger the corresponding hand-position morphs, some axes of another
source could control certain face morphs, others might move the feet, another
set could move the hips (with IK so the hands and feet stay where they're
supposed to), then perhaps a different device could control the actual motion
of the avatar, another the third-person camera, and so on.
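The generalized mapping described above could be sketched as a simple routing table from device axes to avatar channels. Everything below is a hypothetical illustration (the class and channel names are made up, not part of any real viewer API):

```python
# Hypothetical sketch of a generic input-to-avatar routing layer.
# None of these names come from the SL viewer; they only illustrate
# the idea of mapping arbitrary device axes to arbitrary avatar channels.

class InputRouter:
    """Routes normalized device axis values to avatar channels."""

    def __init__(self):
        # (device, axis) -> (channel_name, scale)
        self.mappings = {}
        # channel_name -> latest value, as the avatar side would read it
        self.channels = {}

    def map_axis(self, device, axis, channel, scale=1.0):
        self.mappings[(device, axis)] = (channel, scale)

    def feed(self, device, axis, value):
        """Called whenever a device reports a new axis value in [-1, 1]."""
        mapping = self.mappings.get((device, axis))
        if mapping is None:
            return  # unmapped input is simply ignored
        channel, scale = mapping
        self.channels[channel] = max(-1.0, min(1.0, value * scale))


# Example wiring: a glove axis drives the right-hand IK target,
# a MIDI slider drives a face morph.
router = InputRouter()
router.map_axis("glove", "x", "right_hand_ik.x")
router.map_axis("midi", "cc1", "face.smile", scale=0.5)

router.feed("glove", "x", 0.8)
router.feed("midi", "cc1", 1.0)
print(router.channels)  # {'right_hand_ik.x': 0.8, 'face.smile': 0.5}
```

The point of the indirection is that the avatar side only ever sees channel values; it never knows whether a glove, a keyboard, or a face tracker produced them.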


For face recognition from video, we can use "virtual" devices, which
have axes corresponding to the parameters measured in the video. This
way, if someone wants, for example, to use a MIDI or OSC device/source to
tweak facial expressions more precisely, as in machinima, that is also possible,
and the client won't care what the source is; it just reads the values
and applies them to the mapped properties of the avatar. (I don't think I
need to include another example, but with this someone could use a VR
glove to make their av's head and mouth work in the classic analog
sock-puppet style.)
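One way to picture that "virtual device" idea: any source (a CV face tracker, a MIDI bridge, an OSC bridge) is reduced to a bag of named axis values, and the client copies whatever arrives onto the mapped morphs. This is a minimal sketch under those assumptions; the function names and morph names are invented for illustration:

```python
# Hypothetical "virtual device": any source -- CV face tracker, MIDI, OSC --
# is reduced to a dict of named axes in [0, 1]; the client applies the
# values to whatever avatar morphs they are mapped to, without knowing
# or caring where they came from.

def make_virtual_device(read_fn):
    """Wrap any parameter source as a device that yields axis values."""
    def poll():
        return dict(read_fn())  # e.g. {"mouth_open": 0.3, "brow_l": 0.7}
    return poll

def apply_to_avatar(morphs, axes, mapping):
    """Copy mapped axis values onto avatar morph targets."""
    for axis_name, morph_name in mapping.items():
        if axis_name in axes:
            morphs[morph_name] = axes[axis_name]
    return morphs

# A fake face tracker standing in for real video analysis:
tracker = make_virtual_device(lambda: {"mouth_open": 0.3, "brow_l": 0.7})

morphs = {}
mapping = {"mouth_open": "avatar.mouth_open",
           "brow_l": "avatar.left_brow_raise"}
apply_to_avatar(morphs, tracker(), mapping)
print(morphs)  # {'avatar.mouth_open': 0.3, 'avatar.left_brow_raise': 0.7}
```

Swapping the lambda for real video analysis, or for an OSC listener, would change nothing on the avatar side, which is the whole appeal.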

PS: a while ago I pictured a way to make an avatar do the complete
Macarena choreography in real time using a right-hand VR glove and
different keypresses to change, on the fly, which parts were controlled
by the glove and how. I'll see if I eventually write it down and post
it on Jira.



Dzonatas Sol escreveu:
> After I read all the great replies on this list that this question has 
> generated, I think there may be a way to allow several options and LL 
> wouldn't have to be stuck with only one single driver.
>
> LL just needs to agree upon a generic interface that is needed to 
> control head motion, mouth, common gestures and such. Let the community 
> work together with LL to design any implementation from there. 
> In other words, build towards the generic interface instead of towards the 
> driver itself. It's actually an old development principle.
>
> What are the needed methods for the generic interface?
>
> * Head Yaw
> * Head Pitch
> * Head Roll
>
> Left & Right Versions:
> * Eyelid Blink
> * Eyelid Close
> * Eyelid Open
> * Eyelid Pitch
>
> Left & Right Versions:
> * Eyebrow Pitch
> * Eyebrow Roll
>
> * Mouth...
>
> Sure there will be many for the mouth.
>
> Upon an event to trigger a head gesture, find and play the animations 
> that best match the criteria passed by the generic interface.
>
> Again, there seem to be several really good suggestions made on this list, so 
> this suggestion of a generic interface is one way to allow the user to 
> use any of them. *wink*
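The interface Dzonatas sketches above could look roughly like this in code. Every name here is a guess at what such a generic interface might contain, mirroring the quoted channel list; it is not anything LL has specified:

```python
# Hypothetical rendering of the generic head-control interface sketched
# in the quoted message. The channel names mirror its list; the class
# itself is only an illustration, not an LL specification.

class HeadControlInterface:
    CHANNELS = [
        "head_yaw", "head_pitch", "head_roll",
        "eyelid_blink_l", "eyelid_blink_r",
        "eyelid_close_l", "eyelid_close_r",
        "eyelid_open_l", "eyelid_open_r",
        "eyebrow_pitch_l", "eyebrow_pitch_r",
        "eyebrow_roll_l", "eyebrow_roll_r",
        # ...many more for the mouth
    ]

    def __init__(self):
        self.state = {name: 0.0 for name in self.CHANNELS}

    def set_channel(self, name, value):
        if name not in self.state:
            raise KeyError(f"unknown channel: {name}")
        self.state[name] = value

# Any driver (OpenCV tracker, glove, MIDI...) writes only through this
# interface; the viewer then finds and plays animations matching the
# current state, as the quoted message suggests.
iface = HeadControlInterface()
iface.set_channel("head_yaw", 0.25)
print(iface.state["head_yaw"])  # 0.25
```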
>
> Philip Rosedale wrote:
>   
>> Has anyone here worked with camera-based gesture recognition before?  
>> How about OpenCV?  Is OpenCV the best package for extracting basic head 
>> position/gesture information from a camera image stream?  Merov and I 
>> are pondering a Snowglobe project to detect head motion from simple 
>> cameras and connect it to the SL Viewer. 
>> _______________________________________________
>> Policies and (un)subscribe information available here:
>> http://wiki.secondlife.com/wiki/SLDev
>> Please read the policies before posting to keep unmoderated posting privileges

