[sldev] How to solve this bug? (OpenAL blocking)

Aleric Inglewood aleric.inglewood at gmail.com
Sat Aug 1 10:45:33 PDT 2009


Hi,

I profiled secondlife-bin, and it turns out that libopenal is by FAR
the largest consumer of CPU time in any single place:

Counted CPU_CLK_UNHALTED events (Clock cycles when not halted) with a
unit mask of 0x00 (Unhalted core cycles) count 100000
samples  cum. samples  %        cum. %     image name               symbol name
1235671  1235671       25.4705  25.4705    libopenal.so.1.8.466     aluMixData
435456   1671127        8.9759  34.4464    libGLcore.so.173.14.09   (no symbols)
187716   1858843        3.8693  38.3157    libopenal.so.1.8.466     lpFilter2P
153707   2012550        3.1683  41.4840    libopenal.so.1.8.466     lpFilter4P
96160    2108710        1.9821  43.4662    libcwd_r.so.1.0.0        less<libcwd::memblk_key_ct>::operator()(libcwd::memblk_key_ct const&, libcwd::memblk_key_ct const&) const
95587    2204297        1.9703  45.4365    libcwd_r.so.1.0.0        libcwd::memblk_key_ct::operator<(libcwd::memblk_key_ct) const
89712    2294009        1.8492  47.2857    libcwd_r.so.1.0.0        .plt
77128    2371137        1.5898  48.8755    libopenal.so.1.8.466     CalcSourceParams
76892    2448029        1.5849  50.4604    libstdc++.so.6.0.12      (no symbols)
62967    2510996        1.2979  51.7583    libpthread-2.9.so        pthread_mutex_lock

Note how aluMixData, the function that I was talking about, eats 25%
of all (profile) samples!
This was recorded while the bug was not happening: the application is
able to keep up with the mixing and 'waits' on every call, so that
the main thread can easily get the lock.
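
To make the locking pattern concrete, here is a minimal sketch in C of
how such a mixer thread interacts with the application thread.  This is
not OpenAL's actual code: device_t, mix_data and update_source are
made-up names for illustration, and only the roles of aluMixData and
pthread_mutex_lock correspond to the profile above.

#include <pthread.h>
#include <stdint.h>

typedef struct {
    pthread_mutex_t lock;    /* shared by the mixer and the application */
    int16_t        *buffer;  /* interleaved output samples */
    unsigned        size;    /* frames mixed per call (64 above) */
    volatile int    running;
} device_t;

/* Stand-in for the real mixer, which filters and mixes every playing
 * source into the output buffer. */
static void mix_data(device_t *dev, int16_t *out, unsigned frames)
{
    (void)dev; (void)out; (void)frames;
}

/* Backend audio thread: holds the device lock for the whole mix. */
static void *mixer_thread(void *arg)
{
    device_t *dev = arg;
    while (dev->running) {
        pthread_mutex_lock(&dev->lock);
        mix_data(dev, dev->buffer, dev->size);  /* the aluMixData work */
        pthread_mutex_unlock(&dev->lock);
        /* ...write dev->buffer to the card, wait for free space, loop. */
    }
    return NULL;
}

/* Application thread: every al* call takes the same lock, so it can
 * only run in the gaps between mixes. */
static void update_source(device_t *dev)
{
    pthread_mutex_lock(&dev->lock);
    /* alSourcePlay / CalcSourceParams style work happens here. */
    pthread_mutex_unlock(&dev->lock);
}

As long as the mixer spends most of its wall-clock time waiting on the
sound card, outside the lock, update_source gets in easily; when mixing
itself fills the whole period, the application thread blocks.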

In numbers:

Calls/second: 2757; average time spent: 232 microseconds (63.962400%);
average loop count: 31; average playing count: 28; average size: 64,
time per loop: 7 microseconds; avail_check: 5341; avail_wait: 2584


where avail_wait (the number of times that data was still available, so
that the application waited for the sound card before calling aluMixData
again) is almost equal to the number of calls (per second).  Note also
that 2757 calls times 232 microseconds is 0.6396 seconds, which is the
63.96% quoted above: these calls account for almost two thirds of every
second.
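
For what it's worth, a loop of roughly this shape would produce those
counters; note that avail_check (5341) is exactly calls (2757) plus
avail_wait (2584), consistent with one check per iteration.  This is
only a sketch assuming an ALSA backend: playback_loop and
mix_one_period are made-up names, and only the counter names and the
64-frame period come from the log line above.

#include <stdint.h>
#include <alsa/asoundlib.h>

/* Stand-in for aluMixData mixing one period into buf. */
static void mix_one_period(int16_t *buf, snd_pcm_uframes_t frames)
{
    (void)buf; (void)frames;
}

void playback_loop(snd_pcm_t *pcm, int16_t *buf, snd_pcm_uframes_t period)
{
    unsigned long avail_check = 0, avail_wait = 0;

    for (;;) {
        ++avail_check;
        /* How much free room is in the card's ring buffer?  (Error
         * returns, e.g. on underrun, are ignored in this sketch.) */
        if (snd_pcm_avail_update(pcm) < (snd_pcm_sframes_t)period) {
            /* No room for another period: the card still has data
             * queued, so wait for it instead of mixing again. */
            ++avail_wait;
            snd_pcm_wait(pcm, 1000 /* ms timeout */);
            continue;
        }
        mix_one_period(buf, period);    /* the ~232 us aluMixData call */
        snd_pcm_writei(pcm, buf, period);
    }
}

In the healthy trace above the avail_update test fails for almost every
call, so the loop sits in snd_pcm_wait() most of the time with the
device lock released.  Presumably the bug is the opposite regime:
avail_update keeps reporting room, aluMixData runs back to back, and
the lock is held almost continuously.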

