[opensource-dev] New HTTP Library & Project Viewer
Oz Linden (Scott Lawrence)
oz at lindenlab.com
Tue Oct 23 14:54:24 PDT 2012
On 2012-10-23 15:25 , Monty Brandenberg wrote:
> On 10/23/2012 2:05 PM, Dahlia Trimble wrote:
>> Would this excerpt from RFC2616 (section 8.2) be relevant? Perhaps some
>> routers and other infrastructure assume this as design criteria:
> Oh, it absolutely is but mostly honored in its breach. IETF 'SHOULD'
> is so much weaker than its 'MUST'....
>
"SHOULD" is generally taken to mean "MUST unless you fully understand
the implications and have a good reason".
The problem with that in this context is that the reason for the
constraint is to protect the network from a condition called congestion
collapse
<https://en.wikipedia.org/wiki/Network_congestion#Congestive_collapse>;
this occurs when there is too much offered traffic at some point inside
the network. The congestion control behavior of TCP was motivated
specifically by this problem. The difficulty is that while TCP
congestion control works well for packets within a single TCP
connection, and manages competition/sharing fairly well between TCP
connections sharing some network path, it does not do so well when
competing with packets that are not following the same congestion
control rules, which includes both any UDP traffic and the setup and
teardown packets for TCP itself. That means that a small number of
long-lived TCP connections end up being well behaved even when links are
strained, but a large number of short-lived connections do not, and the
problems with both are aggravated by large amounts of UDP on the same paths.
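To make that dynamic concrete, here is a toy simulation (my own
illustration with made-up numbers, not anything from a real stack): the
responsive, TCP-like flows additively probe for bandwidth and halve
their rates on congestion, while an unresponsive, UDP-like flow never
backs off, so it keeps whatever it demands and the responsive flows
divide only what it leaves behind.

    # Toy AIMD model of a shared bottleneck. Illustrative only: the
    # capacity, rates, and step count are made up.
    CAPACITY = 100.0      # bottleneck capacity, arbitrary units
    UDP_RATE = 40.0       # fixed offered rate of the unresponsive flow
    N_TCP = 3             # number of AIMD-controlled, TCP-like flows

    tcp_rates = [1.0] * N_TCP
    for step in range(200):
        total = UDP_RATE + sum(tcp_rates)
        if total > CAPACITY:
            # Congestion: only the responsive flows back off
            # (multiplicative decrease).
            tcp_rates = [r / 2.0 for r in tcp_rates]
        else:
            # Headroom: responsive flows probe upward (additive increase).
            tcp_rates = [r + 1.0 for r in tcp_rates]

    print("unresponsive flow keeps:", UDP_RATE)
    print("responsive flows end near:", [round(r, 1) for r in tcp_rates])
    # The TCP-like flows converge to equal shares of only the leftover
    # capacity; the unresponsive flow never gives anything up.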
When HTTP 1.0 was first created, its authors brought it to the IETF to
be standardized. At that time, there was no support for persistent
connections or pipelined requests: every request created a new TCP
connection, which was then closed to indicate the end of the response.
Unfortunately, the Web took off so fast that by the time the IETF saw it,
the cat was way out of the bag; it seemed best to create a standard even
though the documented behavior was clearly terrible for the Internet as
a whole. The IESG (a senior review body within the IETF structure)
approved it only with this unusual Note:
> The IESG has concerns about this protocol, and expects this document
> to be replaced relatively soon by a standards track document.
The frequent, short-lived connections were the main problem that
motivated the Note, and approval was granted only on the condition that
HTTP 1.1 be created to solve the problem by adding persistent connection
support.
Naturally that took longer than anyone wanted and solved a number of
other problems too.
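To see what the difference looks like from a client, here is a small
sketch using Python's standard http.client (the host and paths are
placeholders, not anything from the viewer): under HTTP 1.0 semantics
every request pays for a fresh TCP connection setup and teardown, while
under HTTP 1.1 one connection is established once and reused.

    import http.client

    paths = ["/", "/a", "/b"]

    # HTTP 1.0 style: a new TCP connection per request, closed after
    # each response to mark its end.
    for path in paths:
        conn = http.client.HTTPConnection("example.com")
        conn.request("GET", path, headers={"Connection": "close"})
        conn.getresponse().read()
        conn.close()

    # HTTP 1.1 style: one persistent connection reused for every request.
    conn = http.client.HTTPConnection("example.com")
    for path in paths:
        conn.request("GET", path)
        conn.getresponse().read()  # drain the body before reusing the connection
    conn.close()

The second loop sends exactly the same requests; it just stops paying
the per-request connection setup cost, and the network sees one
long-lived, congestion-controlled flow instead of a burst of short-lived
ones.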
This is in many ways a classic "tragedy of the commons
<https://en.wikipedia.org/wiki/Tragedy_of_the_commons>" problem: if one
viewer uses many connections in parallel while others do not, it gains a
substantial advantage. But if most viewers use many connections,
everyone gets worse performance than they would all have gotten had
everyone used fewer.
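A back-of-the-envelope sketch of that incentive (my numbers are made up,
and the assumption that the bottleneck divides bandwidth roughly per
connection is a simplification):

    CAPACITY = 100.0   # shared bottleneck, arbitrary units
    CLIENTS = 10

    def share(my_conns, other_conns_each):
        # Bandwidth one client gets if capacity is split per connection.
        total = my_conns + (CLIENTS - 1) * other_conns_each
        return CAPACITY * my_conns / total

    print("everyone uses 1:", share(1, 1))   # 10.0  -- the fair baseline
    print("only I use 8:  ", share(8, 1))    # ~47.1 -- a big private win
    print("everyone uses 8:", share(8, 8))   # 10.0  -- no one gained, but the
                                             # network carries 8x the churn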
There is a lot of interesting work going on in the IETF and elsewhere to
improve how both operating systems and routers deal with congestion in
ways that are not limited to controlling individual flows (google
"Bufferbloat" for some of the most recent and interesting work).