[sldev] Community QA process

Robin Cornelius robin.cornelius at gmail.com
Thu Apr 23 08:53:58 PDT 2009


On Thu, Apr 23, 2009 at 6:14 AM, Philippe Bossut (Merov Linden)
<merov at lindenlab.com> wrote:
> Hi guys,
>
> On Apr 22, 2009, at 11:40 AM, Rob Lanphier wrote:
>> How will we know when we're "done"?
>
> I'm tempted to answer "define 'done'" though that's not very
> constructive :)
>

I think we have a pretty well-defined set of features that we want to
implement in this cycle. Certainly having them all "acceptably"
finished and "acceptably" bug-free is an important goal here.

I think we also need to monitor the crash rate and the bug reports
coming in. In theory, once all bugs of a certain severity or greater
are tackled and the crash rate has reached an appropriate lull, we can
declare "good enough". Really, incoming crash reports should be
opening new pJIRAs. Since we don't have that data on this side of the
firewall, it would be good to have someone from LL open them with the
stack traces and some indication of volume, so that can be used to
judge severity. This should create a feedback loop whose output can be
used to flip the go switch.
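
To make the volume idea concrete, here is a rough sketch (Python,
purely illustrative, nothing LL actually runs as far as I know) of
grouping reports by their top stack frames so that the biggest buckets
become the pJIRA candidates. The report format and the frame names in
the toy data are my own invention:

    def rank_crash_signatures(reports, top_frames=3):
        # Bucket reports by their top few stack frames and rank the
        # buckets by volume; the biggest ones are pJIRA candidates.
        signatures = {}
        for report in reports:
            sig = " <- ".join(report["stack"][:top_frames])
            signatures[sig] = signatures.get(sig, 0) + 1
        return sorted(signatures.items(), key=lambda item: item[1],
                      reverse=True)

    # Toy data just to show the shape; real reports would come from LL.
    reports = [
        {"stack": ["LLTextureCache::read", "LLWorkerThread::run", "main"]},
        {"stack": ["LLTextureCache::read", "LLWorkerThread::run", "main"]},
        {"stack": ["LLViewerWindow::draw", "main"]},
    ]
    for sig, count in rank_crash_signatures(reports):
        print("%4d  %s" % (count, sig))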

> Since we have nightly builds, the notion of a done viewer only
> applies to "the OS viewer", the one that gets stamped as "for anyone's
> usage" and not just devs. If we have a community QA, I think the best
> is to agree on a "level of bugs" criterion (say: no show stopper for
> instance). We can decide what constitutes a show stopper on this list
> and during the weekly IW triage meeting with the community.
>
>> Will we need to have a daily triage discussion during the last leg
>> of development?
>
> I propose "limited to the identified set of show stoppers". Otherwise
> it'll be endless.

Yes, there does need to be a bit of restriction placed on this or, as
you say, it will get out of hand. Also, daily triage is going to be
difficult for many of us, and it would be very useful to have some
asynchronous ways of handling this as well. The mailing list is
probably the best place, as I can ponder things during my lunch break
at work whilst most of you are still in bed in the early hours of the
morning.

The other thing to be careful of, for me at least, is that 1) I'm not
paid for any of this and do need to make a living doing my paid job,
and 2) I do need to have a life other than Second Life development,
although it may not seem like it to some ;-p. But seriously, there is
only so much time a week I can spend on open source projects, and I am
sure many others are the same, so daily real-time triage discussion
may prove an issue.

>
>> Will there be anyone willing to run through formal test plans?  What
>> is realistic?
>
> That doesn't seem very realistic to me... unless someone volunteers of
> course! :) Having lots of community eyes on the product though should
> ensure rather good test coverage IMHO.

+1, but out of curiosity, what constitutes a formal test plan
internally at the moment? What scope does it have and how is it
constructed? This info could be useful, and maybe some of the ideas
could be adapted for a test plan on this side of the firewall.

>
> Then of course, we need more unit tests and auto tests (crash rate in
> particular). That could also give us a release criteria.

Sure, but the crash rate from crashlogger on test builds should also
be used as a release criterion, as mentioned previously.
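
Something as simple as the sketch below would do as a first cut,
assuming LL can publish crash and session counts per test build. The
release_gate name and the 2% ceiling are made up for illustration; the
real ceiling would be whatever we agree on in triage:

    def release_gate(crashes, sessions, max_crash_rate=0.02):
        # True when the build's crash rate is at or under the ceiling.
        if sessions == 0:
            return False  # no data is not a green light
        rate = float(crashes) / sessions
        print("crash rate %.2f%% against ceiling %.2f%%"
              % (rate * 100, max_crash_rate * 100))
        return rate <= max_crash_rate

    # e.g. release_gate(37, 2500) prints the rate and returns True or
    # False, feeding into the go/no-go decision for that build.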

Robin

