[opensource-dev] before opening a Jira, APR

Francesco Rabbi sythos at gmail.com
Wed Sep 1 06:46:56 PDT 2010


On 01 Sep 2010, at 15:36, Tateru Nino <tateru.nino at gmail.com>
wrote:



On 1/09/2010 11:24 PM, Oz Linden (Scott Lawrence) wrote:

On 2010-09-01 7:12, Tateru Nino wrote:

Hmm. It might not be an actual leak per se... I've noticed in busy areas
that the viewer will often hit a **lot** of parallel HTTP texture fetches.

That's not very good HTTP behavior, but I doubt that we can get it changed
until the servers properly support persistent connections.

Indeed. It's not exactly best practice. Creating a priority list of textures
and a configurable cap on concurrent requests (default: 16?) would probably be
the way to go.
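
A rough sketch of what that could look like (TextureRequest, kMaxConcurrentFetches
and the startFetch callback are illustrative names, not actual viewer code): keep
the pending textures in a priority queue and only issue a new HTTP request while
fewer than the cap are in flight, highest priority first.

#include <queue>
#include <cstddef>

struct TextureRequest {
    int   textureId;   // hypothetical texture identifier
    float priority;    // higher = fetched sooner
    bool operator<(const TextureRequest& other) const {
        return priority < other.priority;   // priority_queue is a max-heap
    }
};

// Hypothetical cap, matching the suggested default of 16.
static const std::size_t kMaxConcurrentFetches = 16;

class TextureFetchQueue {
public:
    void enqueue(const TextureRequest& req) { m_pending.push(req); }

    // Called when an HTTP fetch completes (success or failure).
    void onFetchFinished() { if (m_inFlight > 0) --m_inFlight; }

    // Start as many fetches as the cap allows, highest priority first.
    template <typename StartFetchFn>
    void pump(StartFetchFn startFetch) {
        while (m_inFlight < kMaxConcurrentFetches && !m_pending.empty()) {
            startFetch(m_pending.top());   // issue the HTTP request
            m_pending.pop();
            ++m_inFlight;
        }
    }

private:
    std::priority_queue<TextureRequest> m_pending;
    std::size_t m_inFlight = 0;
};

onFetchFinished() frees a slot when a request completes, so the next pump()
call can start the next-highest-priority texture without ever going over the cap.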


No, this is a client-side problem in file handling, not an HTTP problem.
You can parallelize billions of downloads; the failure (you can see it in my
logs) is in local filesystem file handling. Maybe there are more locks than
necessary; the file/decoder handler must detect the limits and adapt the
pipes.
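
For "detect the limits and adapt the pipes", something along these lines would
work on a POSIX build (the 256-descriptor headroom is just an illustrative
number, not viewer code): query RLIMIT_NOFILE once and size the file/decoder
pool from that instead of assuming 1024.

#include <sys/resource.h>
#include <cstddef>

// How many texture files/pipes we can safely keep open at once,
// leaving headroom for sockets, logs and the rest of the viewer.
std::size_t maxOpenTextureFiles()
{
    struct rlimit rl;
    std::size_t softLimit = 1024;              // typical default if the query fails
    if (getrlimit(RLIMIT_NOFILE, &rl) == 0 && rl.rlim_cur != RLIM_INFINITY)
        softLimit = static_cast<std::size_t>(rl.rlim_cur);

    const std::size_t reserved = 256;          // illustrative headroom
    return (softLimit > reserved) ? softLimit - reserved : softLimit / 2;
}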

From what I see in the logs, I suppose that when a cached texture fails
(timeout, or bad CRC from packet loss) the automatic cleanup attempts a
clear_while_run, wasting all openable files. If an HTTP timeout or a corrupted
cached texture is found, the SINGLE download or the single file must be deleted
or dropped, not the whole cache.
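
Dropping only the single bad entry could look roughly like this (cacheFileFor
and the path layout are hypothetical, not the real cache code): unlink the one
texture file on timeout or CRC mismatch and leave the rest of the cache alone.

#include <cstdio>
#include <string>

// Hypothetical helper: map a texture UUID to its cache file path.
std::string cacheFileFor(const std::string& textureUuid)
{
    return "texturecache/" + textureUuid + ".texture";
}

// On HTTP timeout or CRC mismatch, drop only the single corrupted entry.
// Returns true if the stale file was removed; no whole-cache sweep.
bool evictSingleTexture(const std::string& textureUuid)
{
    const std::string path = cacheFileFor(textureUuid);
    return std::remove(path.c_str()) == 0;   // one unlink, nothing else touched
}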

If a running viewer has 600 open textures and gets a timeout, it now re-opens
all of them to clean them, exceeding the default 1024 open-file limit.

I've noticed some grey textures too; I'm starting to think about the old
(patched) bug where a decode failure was not retried and the pipe held the
channel open, wasting resources.
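
For the grey-texture / no-retry case, a sketch of "retry the decode but never
hold the channel open" (kMaxDecodeRetries and the decode callback are
assumptions for illustration, not the actual decoder API):

#include <cstdio>

static const int kMaxDecodeRetries = 3;   // illustrative retry budget

// Decode with retries, always releasing the channel so the descriptor is
// not left open (and counted against the 1024 limit) on failure.
// 'decode' is whatever decoder the caller uses; it returns true on success.
bool decodeWithRetry(const char* cachePath, bool (*decode)(std::FILE*))
{
    for (int attempt = 0; attempt < kMaxDecodeRetries; ++attempt) {
        std::FILE* channel = std::fopen(cachePath, "rb");
        if (!channel)
            return false;                  // cannot open: give up cleanly

        const bool ok = decode(channel);
        std::fclose(channel);              // never hold the channel open
        if (ok)
            return true;                   // decoded, no grey texture
    }
    return false;                          // retries exhausted, drop the entry
}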




-- 
Sent by iPhone