[opensource-dev] New HTTP Library & Project Viewer

Dahlia Trimble dahliatrimble at gmail.com
Thu Aug 2 02:01:45 PDT 2012


I can't help but think something is wrong here.  A single TCP/IP link is
more than capable of saturating the available network bandwidth with
efficient transfers of large volumes of data, provided the endpoints can
produce and consume quickly enough.
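
As a rough sanity check on that claim (illustrative numbers only, not
measurements from SL), a single connection's throughput is bounded by
the TCP window divided by the round-trip time, so a quick
back-of-the-envelope sketch in Python:

    # Back-of-the-envelope only; link speed and RTT below are assumed
    # example values, not measurements.
    link_mbps = 100.0   # available bandwidth, megabits/s (assumed)
    rtt_ms = 80.0       # round-trip time, milliseconds (assumed)

    # Bandwidth-delay product: bytes that must be in flight to keep
    # the pipe full on one connection.
    bdp_bytes = (link_mbps * 1e6 / 8) * (rtt_ms / 1e3)
    print("window needed to saturate link: %.0f KiB" % (bdp_bytes / 1024))

    # Conversely, a fixed 64 KiB window caps a single connection at
    # window / RTT regardless of link speed.
    window_bytes = 64 * 1024
    ceiling_mbps = window_bytes * 8 / (rtt_ms / 1e3) / 1e6
    print("64 KiB window ceiling: %.1f Mbit/s" % ceiling_mbps)

So as long as the window covers the bandwidth-delay product, one
connection really can fill the link.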

It seems part of the problem may lie in the request/response nature of
HTTP. The viewer must issue a request for each asset as it discovers it
needs it, and the provider endpoint then has to do whatever it does to
make the asset available before it can begin sending it back to the
client. That may happen almost instantly for assets already in a server
memory cache, or take much longer depending on where the asset has to be
pulled from and how it has to be prepared. Assuming this is the case,
having multiple overlapping requests can improve the overall download
rate by allowing some transfers to proceed while others are being
prepared, albeit at the expense of additional connections. A persistent
connection removes some of the delay of re-establishing a connection for
each asset, but it does nothing to reduce the time the server endpoint
needs to acquire and prepare the asset before sending.
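
To make the overlap concrete, here's a rough sketch in Python (the
asset host and UUIDs are made up for illustration; this is not viewer
code) of issuing a few requests concurrently so that one transfer can
proceed while the server is still preparing another:

    # Illustration only -- hypothetical asset endpoint and UUIDs.
    from concurrent.futures import ThreadPoolExecutor
    from urllib.request import urlopen

    ASSET_HOST = "http://assets.example.com/asset/"  # assumed endpoint

    def fetch(uuid):
        # Each request first waits out the server's own fetch/prepare
        # time before any bytes start flowing back.
        with urlopen(ASSET_HOST + uuid) as resp:
            return uuid, resp.read()

    uuids = ["aaaa-1111", "bbbb-2222", "cccc-3333", "dddd-4444"]

    # A handful of overlapping requests lets one asset download while
    # the server is still preparing another, at the cost of extra
    # connections.
    with ThreadPoolExecutor(max_workers=4) as pool:
        for uuid, data in pool.map(fetch, uuids):
            print(uuid, len(data), "bytes")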

Now (assuming this isn't the case already), if the producer endpoint
could be made aware of future requests, it could fetch and prepare
assets for transfer before the actual requests arrive, reducing or
eliminating the delays inherent in the request/response paradigm. This
*may* be as simple as adding optional UUIDs and parameters to the asset
request, naming the assets the viewer is likely to request next. If so,
a single connection could achieve higher effective throughput by keeping
the gap between request and response minimal, reducing the need for many
simultaneous connections.
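
As a sketch of what such a hint might look like on the wire (the
"prefetch" parameter name below is invented for illustration; nothing
like it exists in the current protocol as far as I know):

    # Hypothetical prefetch hint -- the parameter name is invented for
    # illustration and is not part of any existing asset protocol.
    from urllib.parse import urlencode

    ASSET_HOST = "http://assets.example.com/asset/"  # assumed endpoint

    def asset_url(uuid, likely_next=()):
        # The asset wanted right now goes in the path; UUIDs the viewer
        # expects to need soon ride along as an optional hint so the
        # server can start staging them before those requests arrive.
        query = ""
        if likely_next:
            # Comma-join the hinted UUIDs into one optional parameter.
            query = "?" + urlencode({"prefetch": ",".join(likely_next)})
        return ASSET_HOST + uuid + query

    print(asset_url("aaaa-1111", likely_next=["bbbb-2222", "cccc-3333"]))
    # -> .../asset/aaaa-1111?prefetch=bbbb-2222%2Ccccc-3333

A server that recognized the hint could begin pulling those assets into
its cache while the first response is still streaming, so the follow-up
requests find them hot.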

Such a solution may or may not be practical or easily implemented on the
existing infrastructure, and may be less efficient than other designs.
My point is mainly to bring more perspectives into the discussion by
considering other bottlenecks that may exist and which, if mitigated,
could reduce the need for excessive connections.

Thoughts?
-dahlia

On Wed, Aug 1, 2012 at 7:22 AM, Monty Brandenberg <monty at lindenlab.com> wrote:

> On 7/31/2012 10:03 PM, Kadah wrote:
>
> > It's 8 again with the fallow comment. I tried to track down the rev,
> > but apparently Mercurial 2.2 can't properly annotate that file for
> > some reason, and the new UI for it in TortoiseHg 2 is horrid. None of
> > the referenced jiras around its changes are public.
>
> One of the major reasons behind that has to do with the
> behavior of low-end routers.  It really is a problem for
> them.