[sldev] VWR-10311 Enabling lip sync by default

Melinda Green melinda at superliminal.com
Sat May 2 19:35:57 PDT 2009


I agree that usage data would provide an important missing piece in 
these discussions. I proposed a solution previously, though not in this 
forum: basically, we could easily add a logging call to the base class of 
the component library that records a count for each human action on each 
UI element. UI elements can be identified by a unique string 
consisting of their name plus their parent's name, and so on up the 
containment tree to the root view. The file would look something like this:

COUNT        PATH
26          Root/Communicate/Local Chat/Gestures
1           Root/Communicate/Local Chat/Show Muted Text
1           Root/Menus/Advanced/Character/Enable Lip Sync
1           Root/Menus/View/Look at Last Chatter
10          Root/Snapshot/Format
...

This data should give us everything we need to make informed 
decisions about whether and how often various UI elements are actually 
used, though perhaps a third column could record each element's last 
value. That data is especially useful when it can be combined with 
demographic data. On exit we simply write this list of paths and counts 
to the user's hard drive. This part shouldn't take someone more than an 
hour or so to implement. After that we just need to aggregate the data 
the same way it is currently done for crash reports.
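A minimal sketch of the counting scheme (in Python for brevity; the 
viewer itself is C++, and every class and function name below is 
hypothetical, just the shape of the idea):

```python
from collections import Counter

class UIElement:
    """Stand-in for a component-library base class (name is hypothetical)."""
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent

    def path(self):
        # Build the unique identifier: the element's name plus its
        # parent's name, and so on up the containment tree to the root.
        parts = []
        node = self
        while node is not None:
            parts.append(node.name)
            node = node.parent
        return "/".join(reversed(parts))

# One global counter, bumped by the base class on every human action.
usage = Counter()

def record_action(element):
    usage[element.path()] += 1

def dump_counts():
    # On exit, emit the paths and counts, highest count first,
    # in the COUNT / PATH layout shown above.
    return "\n".join(f"{count:<12}{path}"
                     for path, count in usage.most_common())

# Example containment tree and a few simulated clicks.
root = UIElement("Root")
comm = UIElement("Communicate", root)
chat = UIElement("Local Chat", comm)
gestures = UIElement("Gestures", chat)

for _ in range(3):
    record_action(gestures)
print(dump_counts())
```

Since every widget already passes through the base class, the real 
change would be one call in its event handler plus a flush at shutdown.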

-Melinda

Moriz Gupte wrote:
> I am thinking that this little issue we are discussing is a symptom of 
> a systemic problem: how a UI is created and how it evolves. So it 
> probably requires a new thread, but for continuity I am keeping it here.
>
> I think we should try to find a generic approach that, if applied, might 
> actually reduce the 'noise' inherent in UIs. What is this noise? This 
> noise can manifest itself visually/cognitively: crowded dialog 
> boxes, too many nested levels of preferences (which introduce search 
> problems), etc.
> So, how do we minimize this 'noise'?
>
> A) We can avoid the 'UI as reflection of underlying architecture' 
> trap (and the flow of the conversation somewhat suggests that).
> As developers it is easier for us to use the 'developer' model of the 
> application, assume the user has a similar model, and let the 
> architecture guide the UI.
> For IDEs and similarly rare cases, this approach can work. There are 
> some UI researchers who have proposed UIs not only as a means to 
> use/access an application but also to 'refine and modify' the core 
> functionality of the application. In that flavor of tailorability it 
> would make sense for the underlying architecture to float up and be 
> reflected at the level of the UI, but that is not what we are talking 
> about here. We are looking more at the 'consumer' perspective, i.e. 
> the use of a product, much like using an everyday object such as a 
> toothbrush.
>
> A lot of the UI noise in the SL client, ranging from camera 
> controls to voice controls, could be reduced if we start eliminating 
> any preferences or adjustments that can actually be traced to 
> underlying architectural decisions which users, with good reason, do not 
> care about.
> Keeping lip sync optional *when voice is already selected* is 
> difficult to understand. This forced, unnatural decoupling of voice 
> from lip sync is a result of our fixation on the developer model of the 
> application.
>
> B) We can reduce UI noise by hiring a designer king, e.g. Tufte (data 
> visualization/GIS person) or Don Norman (UI researcher), who has a flair 
> for these things (very rare), or we can let UI design decisions be 
> informed by 'use' data.
> The first option only works if you are lucky enough to land on a UI 
> genius, and depends on his/her current state of mind, etc. (probability 
> of success close to zero).
>
> I propose that we instrument the client (add some code that tracks UI 
> manipulations) so that we get actual data about which features are 
> being used, how often, etc.
> This of course should be client side only. We are also not talking 
> about pushing data to LL every time this happens, but about having the 
> data uploaded if/when the user wants. I don't expect the CPU footprint 
> to be huge here; just writing a piece of data to a file is not expensive.
> This data can be used to inform decisions regarding which control 
> points go where and whether certain control points are needed at all.
> We can use these data to visualize hot spots of UI use and so on.
>
> Just my opinion, of course.
> R
>
> On Sat, May 2, 2009 at 4:49 PM, Philip Rosedale 
> <philip at lindenlab.com> wrote:
>
>     I'd like to see an implementation of turning the lip sync feature
>     on and off that can be user tested to demonstrate that a new user of
>     SL could be instructed to "Turn off the moving lips" and easily turn
>     the feature off within 30 seconds or so.
>
>     I agree that a software developer, or someone who is already familiar
>     with the Advanced menus, can do fine the way it is, but I'd like to
>     move with this project toward a viewer that is "generally appealing" -
>     meaning just as usable to a brand-new user of Second Life as to an
>     existing one or an experienced developer.
>
>     Does this make sense?
>
>     Philip
>
>     Mike Monkowski wrote:
>     > Philip Rosedale wrote:
>     >> *  We need a clear and discoverable place in the UI where this
>     >> feature can be enabled and disabled.  Probably prefs. Can someone
>     >> take on that design and coding?   Advanced-> isn't the right
>     home for
>     >> this.  We should do that work properly and well to complete this
>     >> feature.
>     >
>     > At least for the time being, I think it should be left in Advanced.
>     > Torley's video describing it points to the Advanced menu.  After a
>     > while, it might make sense to move it, but to change the default
>     > condition and move the UI control at the same time seems a bit
>     devious.
>     >
>     >> * Can someone (Mike?) add a bit more detail on the jira task to
>     >> defend/review that the CPU impact is strictly capped.  For example,
>     >> what is the LOD behavior if there are 100 avatars all talking
>     at the
>     >> same time.  We have LOD tricks for various rendering aspects of the
>     >> system, do they correctly carry through?  Does the CPU load of the
>     >> feature vary by GPU?  I think we need this level of documentation.
>     >
>     > Lip sync gets intensity indicators from voice chat the same way that
>     > the green indicators do, so that is zero overhead.  All it does is
>     > change the morph weights for two localized morphs, very similar
>     to eye
>     > blinks.  I have used the Fast Timers to try to measure any
>     difference,
>     > but see none.  I never tried 100 avatars talking at once.  The
>     most I
>     > ever heard speaking at once is about three.  Yes, the LOD processing
>     > stays exactly the same.  The two new morphs were derived from
>     existing
>     > morphs.
>     >
>     >> * As to the question of whether to default it on or off, clearly it
>     >> is a complex issue.  I'd say let's default it on, and make sure
>     it is
>     >> easy to find the way to turn it off.  For some use cases it is very
>     >> cool, lending immersion and cueing as to speaker.  For other cases
>     >> you will want it off.  We are still at the point where the 'uncanny
>     >> valley' nature of the feature can make it unnerving, and that
>     problem
>     >> is unlikely to be easily solved soon in realtime with low CPU load.
>     >
>     > Hmmm.  Faces that don't move while talking are unnerving to me, like
>     > the commercials with the mannequins.  Creepy.
>     >
>     >> As a final note, I'd say this is a good example of a tough topic
>     >> where the right call is unclear and discussion and debate is
>     >> appropriate.  Also a good case of where if need be, I can just
>     make a
>     >> call and we move on and see what happens.  Given that, why the
>     >> rudeness I am seeing here?  I don't see a need to be insulting to
>     >> each other over this topic.  Maybe I've missed some painful history
>     >> here, but can't see how this is helping us move forward.  I
>     wouldn't
>     >> work internally on projects at LL with colleagues who were overly
>     >> rude; I don't see why it should be any different here!
>     >
>     > I haven't sensed any rudeness from others, just open discussion.
>      If I
>     > have been rude, I apologize.  I did not intend to be.
>     >
>     > Mike
>
