[sldev] dorkbot meeting avatar puppeteering and physical interfaces

Lawson English lenglish5 at cox.net
Thu Jul 3 23:32:27 PDT 2008


evolutie wrote:
> Hi,
>
> On Sunday 6 July at 1 PM PDT at
> http://slurl.com/secondlife/Odyssey/85/153/45/ there will be a dorkbot
> session that I think is interesting for many on this list.
>
> JJ Ventrella will present the avatar puppeteering project, for which
> the client code was recently made available to the community.
> http://www.avatarpuppeteering.com/
> Philippe Bossut will present the Segalen / HandsFree3D project. He has
> developed several demos using 3D camera motion tracking to interface
> with Second Life, and has spent the last few weeks connecting his work to
> the puppeteering feature.
> http://www.handsfree3d.com/
>
> This will be a good opportunity to get informed, discuss and ask
> questions concerning these technologies and features, so I hope to see
> you all there ;-)
> More information and links about the event, projects, and presenters
> can be found at
> http://rhizomatic.wordpress.com/2008/07/03/dorkbot-session-announcement-4/
>
> chrz,
> Evo
>
I'll definitely try to be there. And... I'd like to point out that a 
"pointing device" is probably not the best way to do real-time animation 
control. Mocap using a 3D camera might be the sexiest option, but there 
are always multiple keypresses (especially on a Mac), and dare I mention 
MIDI keyboards?

I could see an infinite number of non-mocap interfaces for this 
technology, such as the animation equivalent of stroke-based font 
creation (http://www.macintouch.com/gaiji.html), where fundamental units 
of animation movement would be mapped to individual keys on a computer 
or MIDI keyboard and evoked by sequences of simple or chorded 
keypresses. Velocity from a MIDI keyboard could correspond to range of 
motion, for example, while each "note" would correspond to a specific 
motion or set of motions. Playback via MIDI would be possible, and 
tracks for "avatar animation" could be combined with audible MIDI tracks 
to create a new form of dance animation with accompaniment, which could 
run for minutes or even hours, given how compact MIDI is compared to 
sound or even XML.
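
To make the MIDI idea concrete, here's a rough C++ sketch of the 
note-to-motion mapping. Everything in it is hypothetical: the 
AnimationStroke type, the stroke table, and the joint names are 
illustrative, not part of the puppeteering code or any SL API.

#include <cstdint>
#include <iostream>
#include <map>
#include <string>

// One fundamental unit of motion: the animation analogue of a stroke
// in stroke-based font creation.
struct AnimationStroke {
    std::string jointName;   // joint to move, e.g. "mShoulderLeft"
    float maxExtentDegrees;  // rotation reached at full MIDI velocity
};

// A note-on event as a MIDI input library would deliver it.
struct MidiNoteOn {
    uint8_t note;      // 0-127: selects which stroke to trigger
    uint8_t velocity;  // 0-127: scales the range of motion
};

// Each key on the MIDI keyboard maps to one stroke; chords would
// trigger several strokes at once, combining into composite motions.
std::map<uint8_t, AnimationStroke> strokeTable = {
    {60, {"mShoulderLeft", 90.0f}},   // middle C: raise the left arm
    {62, {"mElbowLeft",    120.0f}},
    {64, {"mHead",         45.0f}},
};

void triggerStroke(const MidiNoteOn& ev) {
    auto it = strokeTable.find(ev.note);
    if (it == strokeTable.end()) return;  // unmapped key: ignore
    const AnimationStroke& s = it->second;
    // Velocity corresponds to range of motion, as suggested above.
    float extent = s.maxExtentDegrees * (ev.velocity / 127.0f);
    // A real client would feed this into the puppeteering joint state;
    // here we just print what would happen.
    std::cout << s.jointName << " -> " << extent << " degrees\n";
}

int main() {
    triggerStroke({60, 127});  // hard press: full range of motion
    triggerStroke({60, 64});   // softer press: roughly half the motion
}

Recording those events to a standard MIDI file would then give you the 
playback and track-merging for free, since an animation track becomes 
just another MIDI track.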



No doubt other input technologies could be devised as well. Perhaps, 
rather than trying to figure out a one-size-fits-all strategy, or even a 
handful, the best thing to do would be to devise a plug-in architecture 
to control the overall system? And of course, should LL ever get around 
to defining non-humanoid, non-BVH-driven avatars and animations, the 
system should allow for extensions to handle those as well.
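
For instance, the core could define a single input interface and treat 
mocap, chorded keyboards, and MIDI alike. Here's a rough C++ sketch, 
with every name hypothetical (nothing here exists in the viewer today):

#include <memory>
#include <string>
#include <vector>

// One joint update emitted by an input plug-in during a frame.
struct JointUpdate {
    std::string jointName;
    float pitch, yaw, roll;  // degrees
};

// The contract every input plug-in implements. The core system only
// ever sees this interface, never the device specifics.
class PuppeteerInputPlugin {
public:
    virtual ~PuppeteerInputPlugin() = default;
    virtual std::string name() const = 0;
    // Poll the device and return zero or more joint updates.
    virtual std::vector<JointUpdate> poll() = 0;
};

// Example plug-in: chorded keypresses (stubbed out).
class ChordKeyboardPlugin : public PuppeteerInputPlugin {
public:
    std::string name() const override { return "chord-keyboard"; }
    std::vector<JointUpdate> poll() override {
        return {};  // a real plug-in would decode chords here
    }
};

// Each frame, the core pumps every registered plug-in and merges the
// results into the avatar's puppeteering state.
void pumpInputs(std::vector<std::unique_ptr<PuppeteerInputPlugin>>& plugins) {
    for (auto& p : plugins) {
        for (const JointUpdate& u : p->poll()) {
            (void)u;  // apply to the puppeteering joint state here
        }
    }
}

int main() {
    std::vector<std::unique_ptr<PuppeteerInputPlugin>> plugins;
    plugins.push_back(std::make_unique<ChordKeyboardPlugin>());
    pumpInputs(plugins);  // one frame's worth of input
}

A non-humanoid avatar would then just be another plug-in plus a 
different joint vocabulary, without touching the core.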



L

