[sldev] Direction for Snowglobe 1.3

Lawson English lenglish5 at cox.net
Fri Dec 4 06:35:03 PST 2009


Carlo Wood wrote:
> On Wed, Dec 02, 2009 at 06:46:27AM -0600, Argent Stonecutter wrote:
>   
>> Maybe you *do* have something in mind that will knock my socks off.  
>> The demo videos seemed to show people dragging limbs around with the  
>> mouse. I can see that being great for photographers but I don't know  
>> I'd use it much. How much internal use at LL did it see?
>>     
>
> If that would allow editing existing animations, by changing
> a single frame of an animation, then I think that would be
> GREAT!
>
>   

The existing puppeteering code (corrections welcome, Merov) allowed one 
to manipulate the avatar's BVH values in realtime on the clientside via 
the mouse, with IK provided by simple ragdoll physics on the clientside. 
The client would then inform the sim of the changed avatar position, 
and the sim would broadcast an update to all viewers of the current 
avatar behavior. I assume that the originating viewer was exempted from 
this update.
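
To make that flow concrete, here is a minimal C++ sketch of the loop as I 
understand it from the above; every name in it (JointState, solveSimpleIK, 
sendJointUpdateToSim) is a hypothetical placeholder, not the actual 
puppeteering API:

    // Hypothetical sketch of the clientside puppeteering loop:
    // drag a joint with the mouse, resolve the pose locally with a
    // simple IK step, then tell the sim so it can rebroadcast.
    #include <cstdint>
    #include <cstdio>
    #include <vector>

    struct JointState {
        uint32_t joint_id;   // which avatar joint (elbow, wrist, ...)
        float    pos[3];     // joint position in avatar-local coordinates
    };

    // Stand-in for the ragdoll-style IK step: pull the grabbed end joint
    // toward the mouse target; a real solver would relax the whole chain.
    static void solveSimpleIK(std::vector<JointState>& chain,
                              const float target[3])
    {
        JointState& end = chain.back();
        for (int i = 0; i < 3; ++i)
            end.pos[i] += 0.5f * (target[i] - end.pos[i]);
    }

    // Stand-in for the message that informs the sim of the new pose so it
    // can broadcast the update to the other viewers watching this avatar.
    static void sendJointUpdateToSim(const std::vector<JointState>& chain)
    {
        for (const JointState& j : chain)
            std::printf("joint %u -> (%.2f, %.2f, %.2f)\n",
                        (unsigned)j.joint_id, j.pos[0], j.pos[1], j.pos[2]);
    }

    int main()
    {
        std::vector<JointState> arm = { {1, {0, 0, 0}}, {2, {0.3f, 0, 0}} };
        const float mouseTarget[3] = {0.5f, 0.4f, 0.1f};
        solveSimpleIK(arm, mouseTarget);   // resolve the pose locally first
        sendJointUpdateToSim(arm);         // then let the sim fan it out
    }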

One plausible use of this technique would be to tie the avatar's BVH 
values to a Poser-like timeline which could be scripted clientside (or 
to a mo-cap setup of some kind, as Merov was working on). A more 
elaborate setup would be to allow one or more clients to update/control 
each other's coordinates via a p2p connection, allowing coordinated 
machinima between two or more avatars, with updates eventually sent to 
the viewer for broadcast to the sim at large. One could see a single 
timeline controlling some arbitrary number of avatars behind the scenes 
for fully automated control, OR allow two or more viewers to update each 
other's data in some kind of realtime combat/dance/interaction scenario 
and again only update the sim so others can watch the shared experience.
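
For the timeline idea, a rough sketch of what the clientside sampling could 
look like (hypothetical names, not an existing interface; it just 
interpolates keyframed joint values that would then feed the same update 
path as the mouse-driven puppeteering):

    // Hypothetical Poser-like timeline: keyframed joint values sampled
    // clientside with linear interpolation between keyframes.
    #include <cstdio>
    #include <vector>

    struct Keyframe {
        float time;    // seconds on the timeline
        float angle;   // a single joint angle, for brevity
    };

    static float sampleTimeline(const std::vector<Keyframe>& keys, float t)
    {
        if (t <= keys.front().time) return keys.front().angle;
        if (t >= keys.back().time)  return keys.back().angle;
        for (size_t i = 1; i < keys.size(); ++i) {
            if (t < keys[i].time) {
                const Keyframe& a = keys[i - 1];
                const Keyframe& b = keys[i];
                float u = (t - a.time) / (b.time - a.time);
                return a.angle + u * (b.angle - a.angle);
            }
        }
        return keys.back().angle;
    }

    int main()
    {
        // One elbow joint keyframed at 0s, 1s and 2s.
        std::vector<Keyframe> elbow = { {0.0f, 0.0f}, {1.0f, 90.0f},
                                        {2.0f, 45.0f} };
        for (float t = 0.0f; t <= 2.0f; t += 0.5f)
            std::printf("t=%.1f  elbow=%.1f deg\n",
                        t, sampleTimeline(elbow, t));
    }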

All sorts of hybrid scenarios might be possible, most of which we 
haven't thought of yet.

> Too bad that most (all) animations are no-mod :(, because
> this would only be useful for the end-user, knowing the one
> shape the animation has to be applied to.
>
> The Real Problem to be cracked would be the fact that animations
> have to work with a large variety of shapes. The data stored
> in animations is not the data that is constant over that range
> of shapes. What is constant is the offset from given points on
> the shape to objects or other avatars.
>   
Within the context of the puppeteering code, this wouldn't be much of an 
issue because only IK (and, I assume, ground collision) was allowed for. 
With optional full-blown client physics done p2p style a la Croquet, one 
could do just about anything with any aspect of the world-graph, but a 
full SL-Croquet hybrid is a bit further out.

> I'd like to see animations being stored in the form "place
> hand on shoulder" and give that a higher priority than
> "put upper arm horizontal", which would allow calculation
> of the correct, intended, animations on the viewer - to
> work as intended, for a wide variety of shapes.
>
>   

It would be fun if non-humanoid avatars were supported in that scenario. 
There are several more robust avatar control systems out there than BVH 
(e.g. http://ligwww.epfl.ch/%7Eaguye/AML/AMLOverview.pdf ), but I don't 
know exactly what they do.
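
Just to illustrate the constraint-based idea above, here is one 
hypothetical way such data could be laid out and prioritized on the 
viewer; none of this exists in the codebase, it is only a sketch of the 
representation:

    // Hypothetical layout for "place hand on shoulder" style constraints
    // with priorities, resolved per shape on each viewer instead of
    // shipping raw joint rotations.
    #include <algorithm>
    #include <cstdio>
    #include <string>
    #include <vector>

    struct PoseConstraint {
        std::string effector;   // e.g. "right_hand"
        std::string target;     // e.g. "partner.left_shoulder"
        int         priority;   // higher wins when constraints conflict
    };

    int main()
    {
        std::vector<PoseConstraint> frame = {
            {"right_upper_arm", "horizontal",            1},
            {"right_hand",      "partner.left_shoulder", 5},
        };
        // Resolve highest-priority constraints first; an IK pass per
        // constraint would then compute shape-specific joint rotations.
        std::sort(frame.begin(), frame.end(),
                  [](const PoseConstraint& a, const PoseConstraint& b) {
                      return a.priority > b.priority;
                  });
        for (const PoseConstraint& c : frame)
            std::printf("[p%d] %s -> %s\n",
                        c.priority, c.effector.c_str(), c.target.c_str());
    }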

Lawson



