<?xml version='1.0' encoding='utf-8'?>
<!DOCTYPE rfc [
  <!ENTITY nbsp    "&#160;">
  <!ENTITY zwsp   "&#8203;">
  <!ENTITY nbhy   "&#8209;">
  <!ENTITY wj     "&#8288;">
]>
<?xml-stylesheet type="text/xsl" href="rfc2629.xslt" ?>
<!-- generated by https://github.com/cabo/kramdown-rfc version 1.7.21 (Ruby 3.3.6) -->
<rfc xmlns:xi="http://www.w3.org/2001/XInclude" ipr="trust200902" docName="draft-cheshire-sbm-00" category="info" submissionType="independent" tocInclude="true" sortRefs="true" symRefs="true" version="3">
  <!-- xml2rfc v2v3 conversion 3.25.0 -->
  <front>
    <title abbrev="Source Buffer Management">Source Buffer Management</title>
    <seriesInfo name="Internet-Draft" value="draft-cheshire-sbm-00"/>
    <author fullname="Stuart Cheshire">
      <organization>Apple Inc.</organization>
      <address>
        <email>cheshire@apple.com</email>
      </address>
    </author>
    <date year="2024" month="December" day="09"/>
    <keyword>Bufferbloat</keyword>
    <keyword>Latency</keyword>
    <keyword>Responsiveness</keyword>
    <abstract>
      <?line 79?>

<t>In the past decade there has been growing awareness about the
harmful effects of bufferbloat in the network, and there has
been good work on developments like L4S to address that problem.
However, bufferbloat on the sender itself remains a significant
additional problem, which has not received similar attention.
This document offers techniques and guidance for host networking
software to avoid network traffic suffering unnecessary delays
caused by excessive buffering at the sender. These improvements
are broadly applicable across all datagram and transport
protocols (UDP, TCP, QUIC, etc.) on all operating systems.</t>
    </abstract>
    <note removeInRFC="true">
      <name>About This Document</name>
      <t>
        The latest revision of this draft can be found at <eref target="https://StuartCheshire.github.io/draft-cheshire-sbm/draft-cheshire-sbm.html"/>.
        Status information for this document may be found at <eref target="https://datatracker.ietf.org/doc/draft-cheshire-sbm/"/>.
      </t>
      <t>Source for this draft and an issue tracker can be found at
        <eref target="https://github.com/StuartCheshire/draft-cheshire-sbm"/>.</t>
    </note>
  </front>
  <middle>
    <?line 92?>

<section anchor="conventions-and-definitions">
      <name>Conventions and Definitions</name>
      <t>The key words "<bcp14>MUST</bcp14>", "<bcp14>MUST NOT</bcp14>", "<bcp14>REQUIRED</bcp14>", "<bcp14>SHALL</bcp14>", "<bcp14>SHALL
NOT</bcp14>", "<bcp14>SHOULD</bcp14>", "<bcp14>SHOULD NOT</bcp14>", "<bcp14>RECOMMENDED</bcp14>", "<bcp14>NOT RECOMMENDED</bcp14>",
"<bcp14>MAY</bcp14>", and "<bcp14>OPTIONAL</bcp14>" in this document are to be interpreted as
described in BCP 14 <xref target="RFC2119"/> <xref target="RFC8174"/> when, and only when, they
appear in all capitals, as shown here.</t>
      <?line -18?>

</section>
    <section anchor="introduction">
      <name>Introduction</name>
      <t>In 2010 Jim Gettys identified the problem
of how excessive buffering in networks adversely affects
delay-sensitive applications <xref target="Bloat1"/><xref target="Bloat2"/><xref target="Bloat3"/>.
This important work identifying a non-obvious problem
has led to valuable developments to improve this situation,
like fq_codel <xref target="RFC8290"/>, PIE <xref target="RFC8033"/>, Cake <xref target="Cake"/>
and L4S <xref target="RFC9330"/>.</t>
      <t>However, excessive buffering at the source
-- in the sending devices themselves --
can equally contribute to degraded performance
for delay-sensitive applications,
and this problem has not yet received
a similar level of attention.</t>
      <t>This document describes the source buffering problem,
steps that have been taken so far to address the problem,
shortcomings with those existing solutions,
and new mechanisms that work better.</t>
      <t>To explain the problem and the solution,
this document begins with some historical background
about why computers have buffers in the first place,
and why buffers are useful.
This document explains the need for backpressure on
senders that are able to exceed the network capacity,
and separates backpressure mechanisms into
direct backpressure and indirect backpressure.</t>
      <t>The document concludes by describing
the TCP_REPLENISH_TIME socket option,
and its equivalent for other networking APIs.</t>
    </section>
    <section anchor="source-buffering">
      <name>Source Buffering</name>
      <t>Starting with the most basic principles,
computers have always had to deal with the situation
where software is able to generate output data
faster than the physical medium can accept it.
The software may be sending data to a paper tape punch,
to an RS232 serial port (UART),
or to a printer connected via a parallel port.
The software may be writing data to a floppy disk
or a spinning hard disk.
It was self-evident to computer designers that it would
be unacceptable for data to be lost in these cases.</t>
    </section>
    <section anchor="direct-backpressure">
      <name>Direct Backpressure</name>
      <t>The early solutions were simple.
When an application wrote data to a file on a floppy disk,
the file system “write” API would not return to the caller
until the data had actually been written to the floppy disk.
This had the natural effect of slowing down
the application so it could not exceed
the capacity of the medium to accept the data.</t>
      <t>Soon it became clear that these simple synchronous APIs
unreasonably limited the performance of the system.
If, instead, the file system “write” API
were to return to the caller immediately
-- even though the actual write to the
spinning hard disk had not yet completed --
then the application could get on with other
useful work while the actual write to the
spinning hard disk proceeded in parallel.</t>
      <t>Some systems allowed a single asynchronous write
to the spinning hard disk to proceed while
the application software performed other processing.
Other systems allowed multiple asynchronous writes to be enqueued,
but even these systems generally imposed some upper bound on
the amount of outstanding incomplete writes they would support.
At some point, if the application software persisted in
trying to write data faster than the medium could accept it,
then the application would be throttled in some way,
either by making the API call a blocking call
(simply not returning control to the application,
removing its ability to do anything else)
or by returning a Unix EWOULDBLOCK error or similar
(to inform the application that its API call had
been unsuccessful, and that it would need to take
action to write its data again at a later time).</t>
      <t>A comparison with graphics cards is instructive.
Most graphics cards support double-buffering.
This allows one frame to be displayed while
the CPU and GPU are working on generating the next frame.
This concurrency allows for greater efficiency,
by enabling two actions to be happening at the same time.
But quintuple-buffering is not better than double-buffering.
Having a pipeline five frames deep, or ten frames,
or fifty frames, is not better than two frames.
For a fast-paced video game, having a display pipeline fifty
frames deep, where every frame is generated, then waits in
the pipeline, and then is displayed fifty frames later,
would not improve performance or efficiency,
but would cause an unacceptable delay between a player
action and seeing the results of that action on the screen.
It is beneficial for the video game to work on preparing
the next frame while the previous frame is being displayed,
but it is not beneficial for the video game to get multiple
frames ahead of the frame currently being displayed.</t>
      <t>Another reason that it is good not to permit an
excessive amount of unsent data to be queued up
is that once data is committed to a buffer,
there are generally limited options for changing it.
Some systems may provide a mechanism to flush the entire
buffer and discard all the data, but mechanisms to
selectively remove or re-order enqueued data
are complicated and rare.
While it could be possible to add such mechanisms,
on balance it is simpler to avoid committing
too much unsent data to the buffer in the first place.
If the backlog of unsent data is kept reasonably low,
the source retains more flexibility to decide what to
put into the buffer next, when that opportunity arises.</t>
    </section>
    <section anchor="indirect-backpressure">
      <name>Indirect Backpressure</name>
      <t>All of the situations described above using “direct backpressure”
are one-hop communication where the receiving device is
connected more-or-less directly to the CPU generating the data.
In these cases it is relatively simple for the receiving device
to exert backpressure to influence the rate at which the CPU sends data.</t>
      <t>When we introduce multi-hop networking,
the situation becomes more complicated.
When a flow of packets travels 30 hops through
a network, the bottleneck hop may be quite distant
from the original source of the data stream.</t>
      <t>For example, when a cable modem
with a 35 Mb/s output rate receives
an excessive flow of packets coming in
on its Gb/s Ethernet interface,
the cable modem cannot directly cause
the sending application to block or receive an EWOULDBLOCK error.
The cable modem’s choices are limited to
enqueueing an incoming packet,
discarding an incoming packet,
or enqueueing an incoming packet and
marking it with an ECN CE mark <xref target="RFC3168"/>.</t>
      <t>The cable modem’s choices are so limited
because of security and packet size constraints.</t>
      <t>Security and trust concerns revolve around preventing a
malicious entity from performing a denial-of-service attack
against a victim device by sending fraudulent messages that
would cause it to reduce its transmission rate.
It is particularly important to guard against an off-path attacker
being able to do this. This concern is addressed if queue size
feedback generated in the network follows the same path already
taken by the data packets and their subsequent acknowledgement
packets. The logic is that any on-path device that is able to
modify data packets (changing the ECN bits in the IP header)
could equally well corrupt or discard packets entirely.
Thus, trusting ECN information from these devices does not
increase security concerns, since these devices could already
perform more malicious actions anyway. The sender already
trusts the receiver to generate accurate acknowledgement
packets, so also trusting it to report ECN information back
to the sender does not increase the security risk.</t>
      <t>A consequence of this security requirement is that it takes a
full round trip time for the source to learn about queue state
in the network. In many common cases this is not a significant
deficiency. For example, if a user is receiving data from a
well connected server on the Internet, and the network
bottleneck is the last hop on the path (e.g., the Wi-Fi hop to
the user’s smartphone in their home) then the location where
the queue is building up (the Wi-Fi Access Point) is very close
to the receiver, and having the receiver echo the queue state
information back to the sender does not add significant delay.</t>
      <t>Packet size constraints, particularly scarce bits available
in the IP header, mean that for pragmatic reasons the ECN
queue size feedback is limited to two states: “The source
may try sending a little faster if desired,” and, “The
source should reduce its sending rate.” Use of these
increase/decrease indications in successive packets allows
the sender to converge on the ideal transmission rate, and
then to oscillate slightly around the ideal transmission
rate as it continues to track changing network conditions.</t>
      <t>Discarding or marking an incoming packet are
what we refer to as indirect backpressure,
with the assumption that these actions will eventually
result in the sending application being throttled
via having a write call blocked,
returning an EWOULDBLOCK error,
or exerting some other form of backpressure that
causes the source application
to temporarily pause sending new data.</t>
    </section>
    <section anchor="case-study-tcpnotsentlowat">
      <name>Case Study -- TCP_NOTSENT_LOWAT</name>
      <t>In April 2011 the author was investigating
sluggishness with Mac OS Screen Sharing,
which uses the VNC Remote Framebuffer (RFB) protocol <xref target="RFC6143"/>.
Initially it seemed like a classic case of network bufferbloat.
However, deeper investigation revealed that in this case
the network was not responsible for the excessive delay --
the excessive delay was being caused by
excessive buffering on the sending device itself.</t>
      <t>In this case the network connection was a relatively slow
DSL line (running at about 500 kb/s) and
the socket send buffer (SO_SNDBUF) was set to 128 kilobytes.
With a 50 ms round-trip time,
about 3 kilobytes (roughly two packets)
was sufficient to fill the bandwidth-delay product of the path.
The remaining 125 kilobytes available in the 128 kB socket send buffer
was simply holding bytes that had not even been sent yet.
At 500 kb/s throughput (62.5 kB/s),
this meant that every byte written by the VNC RFB server
spent two seconds sitting in the socket send buffer
before it even left the source machine.
Clearly, delaying every sent byte by two seconds
resulted in a very sluggish screen sharing experience,
and it did not yield any useful benefit like
higher throughput or lower CPU utilization.</t>
      <t>This led to the creation in May 2011
of a new socket option on Mac OS and iOS
called “TCP_NOTSENT_LOWAT”.
This new socket option provided the ability for
application software (like the VNC RFB server)
to specify a low-water mark threshold for the
minimum amount of <strong>unsent</strong> data it would like
to have waiting in the socket send buffer.
Instead of encouraging the application to
fill the socket send buffer to its maximum capacity,
the socket send buffer would hold just the data
that had been sent but not yet acknowledged
(enough to fully occupy the bandwidth-delay product
of the network path and fully utilize the available capacity)
plus some <strong>small</strong> amount of additional unsent data waiting to go out.
Some <strong>small</strong> amount of unsent data waiting to go out is
beneficial, so that the network stack has data
ready to send when the opportunity arises
(e.g., a TCP ACK arrives signalling
that previous data has now been delivered).
Too much unsent data waiting to go out
-- in excess of what the network stack
might soon be able to send --
is harmful for delay-sensitive applications
because it increases delay without
meaningfully increasing throughput or utilization.</t>
      <t>Empirically it was found that setting an
unsent data low-water mark threshold of 16 kilobytes
worked well for VNC RFB screen sharing.
When the amount of unsent data fell below this
low-water mark threshold, kevent() would
wake up the VNC RFB screen sharing application
to begin work on preparing the next frame to send.
Once the VNC RFB screen sharing application
had prepared the next frame and written it
to the socket send buffer,
it would again call kevent() to block and wait
to be notified when it became time to begin work
on the following frame.
This allows the VNC RFB screen sharing server
to stay just one frame ahead of
the frame currently being sent over the network,
and not inadvertently get multiple frames ahead.
This provided enough unsent data waiting to go out
to fully utilize the capacity of the path,
without buffering so much unsent data
that it adversely affected usability.</t>
      <t>A demo showing the benefits of using TCP_NOTSENT_LOWAT
with VNC RFB screen sharing was shown at the
Apple Worldwide Developer Conference in June 2015 <xref target="Demo"/>.</t>
    </section>
    <section anchor="shortcomings-of-tcpnotsentlowat">
      <name>Shortcomings of TCP_NOTSENT_LOWAT</name>
      <t>While TCP_NOTSENT_LOWAT achieved its initial intended goal,
later operational experience has revealed some shortcomings.</t>
      <section anchor="platform-differences">
        <name>Platform Differences</name>
        <t>The Linux network maintainers implemented a TCP
socket option with the same name, but different behavior.
While the Apple version of TCP_NOTSENT_LOWAT was
focussed on reducing delay,
the Linux version was focussed on reducing kernel memory usage.
The Apple version of TCP_NOTSENT_LOWAT controls
a low-water mark, below which the application is signalled
that it is time to begin working on generating fresh data.
The Linux version determines a high-water mark for unsent data,
above which the application is <strong>prevented</strong> from writing any more,
even if it has data prepared and ready to enqueue.
Setting TCP_NOTSENT_LOWAT to 16 kilobytes works well on Apple
systems, but can severely limit throughput on Linux systems.
This has led to confusion among developers and makes it difficult
to write portable code that works on both platforms.</t>
      </section>
      <section anchor="time-versus-bytes">
        <name>Time Versus Bytes</name>
        <t>The original thinking on TCP_NOTSENT_LOWAT focussed on
the number of unsent bytes remaining, but it soon became
clear that the relevant quantity was time, not bytes.
The quantity of interest to the sending application
was how much advance notice it would get of impending
data exhaustion, so that it would have enough time
to generate its next logical block of data.
On low-rate paths (e.g., 250 kb/s and less)
16 kilobytes of unsent data could still result
in a fairly significant unnecessary queueing delay.
On high-rate paths (e.g., Gb/s and above)
16 kilobytes of unsent data could be consumed
very quickly, leaving the sending application
insufficient time to generate its next logical block of data
before the unsent backlog ran out
and available network capacity was left unused.
It became clear that it would be more useful for the
sending application to specify how much advance notice
of data exhaustion it required (in milliseconds, or microseconds),
depending on how much time the application anticipated
needing to generate its next logical block of data.
The application could perform this calculation itself,
estimating the current data rate and multiplying
that by its desired advance notice time, to compute the number
of outstanding unsent bytes corresponding to that desired time.
However, the application would have to keep adjusting its
TCP_NOTSENT_LOWAT value as the observed data rate changed.
Since the transport protocol already knows the number of
unacknowledged bytes in flight, and the current round-trip delay,
the transport protocol is in a better position
to perform this calculation.
The transport protocol also knows if features like hardware
offload and stretch acks are being used, which could impact
the burstiness of consumption of unsent bytes.
If stretch acks are being used, and a couple of acks arrive
acknowledging 12 kilobytes each, then a 16 kilobyte unsent
backlog could be consumed almost instantly.
Therefore it is better to have the transport protocol
use all the information it has available to estimate
when it expects to run out of unsent data.</t>
      </section>
      <section anchor="other-transport-protocols">
        <name>Other Transport Protocols</name>
        <t>TCP_NOTSENT_LOWAT was initially defined only for TCP.
It would be useful to define equivalent delay management
capabilities for other transport protocols, like QUIC.</t>
      </section>
    </section>
    <section anchor="tcpreplenishtime">
      <name>TCP_REPLENISH_TIME</name>
      <t>Because of these lessons learned, this document proposes
a new mechanism, TCP_REPLENISH_TIME.</t>
      <t>The new TCP_REPLENISH_TIME socket option specifies the
threshold for notifying an application of impending data
exhaustion in terms of microseconds, not bytes.
It is the job of the transport protocol to compute its
best estimate of when the amount of remaining unsent data
falls below the threshold.</t>
      <t>The new TCP_REPLENISH_TIME socket option
should have the same semantics across all
operating systems and network stack implementations.</t>
      <t>Other transport protocols, like QUIC,
and other network APIs not based on BSD Sockets,
should provide equivalent time-based backlog-management
mechanisms, as appropriate to their API design.</t>
      <t>The time-based estimate does not need to be perfectly accurate,
either on the part of the transport protocol estimating how much
time remains before the backlog of unsent data is exhausted,
or on the part of the application estimating how much
time it will need to generate its next logical block of data.
If the network data rate increases significantly, or a group of
delayed acknowledgments all arrive together, then the transport
protocol could end up discovering that it has overestimated how
much time remains before the data is exhausted.
If the operating system scheduler is slow to schedule the
application process, or the CPU is busy with other tasks,
then the application may take longer than expected
to generate its next logical block of data.
These situations are not considered to be serious problems,
especially if they only occur infrequently.
For a delay-sensitive application, having some reasonable
mechanism to avoid an excessive backlog of unsent data is
dramatically better than having no such mechanism at all.
Occasional overestimates or underestimates do not
negate the benefit of this capability.</t>
    </section>
    <section anchor="applicability">
      <name>Applicability</name>
      <t>This time-based backlog management is applicable anywhere
that a queue of unsent data may build up on the sending device.</t>
      <t>Since multi-hop network protocols already implement
indirect backpressure in the form of discarding or marking packets,
it can be tempting to use this mechanism
for the first hop of the path too.
However, this is not an ideal solution because indirect
backpressure from the network is very crude compared to
the much richer direct backpressure
that is available within the sending device itself.
Relying on indirect backpressure by
discarding or marking a packet in the sending device itself
is a crude rate-control signal, because it takes a full network
round-trip time before the effect of that drop or mark is
observed at the receiver and echoed back to the sender, and
it may take multiple such round trips before it finally
results in an appropriate reduction in sending rate.</t>
      <t>In contrast to queue buildup in the network, queue buildup
at the sending device has different properties.
When it is the source device itself that is building up a backlog
of unsent data, it has more freedom in how to handle it.
When the source of the data and the location of the backlog is
the same device, network security and trust concerns do not apply.
When the mechanism we use to communicate about queue state
is a software API instead of packets sent through a network,
we do not have the constraint of having to work within
limited IP packet header space, and the delivery of queue
state STOP/GO information to the source is immediate.</t>
      <t>Direct backpressure can be achieved
by simply making an API call block,
returning a Unix EWOULDBLOCK error,
or using equivalent mechanisms in other APIs,
and has the effect of immediately halting the flow of new data.
Similarly, when the system becomes able to accept more data,
unblocking an API call, indicating that a socket
has become writable using select() or kevent(),
or equivalent mechanisms in other APIs,
has the effect of immediately allowing the production of more data.</t>
      <t>Where direct backpressure mechanisms are possible they
should be preferred over indirect backpressure mechanisms.</t>
      <t>If the outgoing network interface on the source device
is the slowest hop of the network path, then this
is where the backlog of unsent data will accumulate.</t>
      <t>In addition to physical bottlenecks,
devices also have intentional algorithmic bottlenecks:</t>
      <ul spacing="normal">
        <li>
          <t>If the TCP receive window is full, then the sending TCP
implementation will voluntarily refrain from sending new data,
even though the device’s outgoing first-hop interface is easily
capable of sending those packets.</t>
        </li>
        <li>
          <t>The transport protocol’s rate management (congestion control) algorithm
may determine that it should delay before sending more data, so as
not to overflow a queue at some other bottleneck within the network.</t>
        </li>
        <li>
          <t>When packet pacing is being used, the sending network
implementation may choose voluntarily to moderate the rate at
which it emits packets, so as to smooth the flow of packets into
the network, even though the device’s outgoing first-hop interface
might be easily capable of sending at a much higher rate.</t>
        </li>
      </ul>
      <t>Whether the source application is constrained
by a physical bottleneck on the sending device, or
by an algorithmic bottleneck on the sending device,
the benefits of not overcommitting data to the outgoing buffer are similar.</t>
      <t>The goal is for the application software to be able to
write chunks of data large enough to be efficient,
without writing too many of them too quickly.
This avoids the unfortunate situation where a delay-sensitive
application inadvertently writes many blocks of data
long before they will actually depart the source machine,
such that by the time the enqueued data is actually sent,
the application may have newer data that it would rather send instead.
By deferring generating data until the networking code is
actually ready to send it, the application retains more precise
control over what data will be sent when the opportunity arises.</t>
    </section>
    <section anchor="security-considerations">
      <name>Security Considerations</name>
      <t>No new security concerns are anticipated as a result of
reducing the amount of stale data sitting in buffers at the sender.</t>
    </section>
    <section anchor="iana-considerations">
      <name>IANA Considerations</name>
      <t>This document has no IANA actions.</t>
    </section>
  </middle>
  <back>
    <references anchor="sec-combined-references">
      <name>References</name>
      <references anchor="sec-normative-references">
        <name>Normative References</name>
        <reference anchor="RFC2119">
          <front>
            <title>Key words for use in RFCs to Indicate Requirement Levels</title>
            <author fullname="S. Bradner" initials="S." surname="Bradner"/>
            <date month="March" year="1997"/>
            <abstract>
              <t>In many standards track documents several words are used to signify the requirements in the specification. These words are often capitalized. This document defines these words as they should be interpreted in IETF documents. This document specifies an Internet Best Current Practices for the Internet Community, and requests discussion and suggestions for improvements.</t>
            </abstract>
          </front>
          <seriesInfo name="BCP" value="14"/>
          <seriesInfo name="RFC" value="2119"/>
          <seriesInfo name="DOI" value="10.17487/RFC2119"/>
        </reference>
        <reference anchor="RFC8174">
          <front>
            <title>Ambiguity of Uppercase vs Lowercase in RFC 2119 Key Words</title>
            <author fullname="B. Leiba" initials="B." surname="Leiba"/>
            <date month="May" year="2017"/>
            <abstract>
              <t>RFC 2119 specifies common key words that may be used in protocol specifications. This document aims to reduce the ambiguity by clarifying that only UPPERCASE usage of the key words have the defined special meanings.</t>
            </abstract>
          </front>
          <seriesInfo name="BCP" value="14"/>
          <seriesInfo name="RFC" value="8174"/>
          <seriesInfo name="DOI" value="10.17487/RFC8174"/>
        </reference>
      </references>
      <references anchor="sec-informative-references">
        <name>Informative References</name>
        <reference anchor="Bloat1" target="https://gettys.wordpress.com/2010/12/06/whose-house-is-of-glasse-must-not-throw-stones-at-another/">
          <front>
            <title>Whose house is of glasse, must not throw stones at another</title>
            <author initials="J." surname="Gettys">
              <organization/>
            </author>
            <date year="2010" month="December"/>
          </front>
        </reference>
        <reference anchor="Bloat2" target="https://queue.acm.org/detail.cfm?id=2071893">
          <front>
            <title>Bufferbloat: Dark Buffers in the Internet</title>
            <author initials="J." surname="Gettys">
              <organization/>
            </author>
            <author initials="K." surname="Nichols">
              <organization/>
            </author>
            <date year="2011" month="November"/>
          </front>
          <seriesInfo name="ACM Queue, Volume 9, issue 11" value=""/>
        </reference>
        <reference anchor="Bloat3" target="https://dl.acm.org/doi/10.1145/2063176.2063196">
          <front>
            <title>Bufferbloat: Dark Buffers in the Internet</title>
            <author initials="J." surname="Gettys">
              <organization/>
            </author>
            <author initials="K." surname="Nichols">
              <organization/>
            </author>
            <date year="2012" month="January"/>
          </front>
          <seriesInfo name="Communications of the ACM, Volume 55, Number 1" value=""/>
        </reference>
        <reference anchor="Cake" target="https://ieeexplore.ieee.org/document/8475045">
          <front>
            <title>Piece of CAKE: A Comprehensive Queue Management Solution for Home Gateways</title>
            <author initials="T." surname="Høiland-Jørgensen">
              <organization/>
            </author>
            <author initials="D." surname="Taht">
              <organization/>
            </author>
            <author initials="J." surname="Morton">
              <organization/>
            </author>
            <date year="2018" month="June"/>
          </front>
          <seriesInfo name="2018 IEEE International Symposium on Local and Metropolitan Area Networks (LANMAN)" value=""/>
        </reference>
        <reference anchor="Demo" target="https://developer.apple.com/videos/play/wwdc2015/719/?time=2199">
          <front>
            <title>Your App and Next Generation Networks</title>
            <author initials="S." surname="Cheshire">
              <organization/>
            </author>
            <date year="2015" month="June"/>
          </front>
          <seriesInfo name="Apple Worldwide Developer Conference" value=""/>
        </reference>
        <reference anchor="RFC3168">
          <front>
            <title>The Addition of Explicit Congestion Notification (ECN) to IP</title>
            <author fullname="K. Ramakrishnan" initials="K." surname="Ramakrishnan"/>
            <author fullname="S. Floyd" initials="S." surname="Floyd"/>
            <author fullname="D. Black" initials="D." surname="Black"/>
            <date month="September" year="2001"/>
            <abstract>
              <t>This memo specifies the incorporation of ECN (Explicit Congestion Notification) to TCP and IP, including ECN's use of two bits in the IP header. [STANDARDS-TRACK]</t>
            </abstract>
          </front>
          <seriesInfo name="RFC" value="3168"/>
          <seriesInfo name="DOI" value="10.17487/RFC3168"/>
        </reference>
        <reference anchor="RFC6143">
          <front>
            <title>The Remote Framebuffer Protocol</title>
            <author fullname="T. Richardson" initials="T." surname="Richardson"/>
            <author fullname="J. Levine" initials="J." surname="Levine"/>
            <date month="March" year="2011"/>
            <abstract>
              <t>RFB ("remote framebuffer") is a simple protocol for remote access to graphical user interfaces that allows a client to view and control a window system on another computer. Because it works at the framebuffer level, RFB is applicable to all windowing systems and applications. This document describes the protocol used to communicate between an RFB client and RFB server. RFB is the protocol used in VNC. This document is not an Internet Standards Track specification; it is published for informational purposes.</t>
            </abstract>
          </front>
          <seriesInfo name="RFC" value="6143"/>
          <seriesInfo name="DOI" value="10.17487/RFC6143"/>
        </reference>
        <reference anchor="RFC8033">
          <front>
            <title>Proportional Integral Controller Enhanced (PIE): A Lightweight Control Scheme to Address the Bufferbloat Problem</title>
            <author fullname="R. Pan" initials="R." surname="Pan"/>
            <author fullname="P. Natarajan" initials="P." surname="Natarajan"/>
            <author fullname="F. Baker" initials="F." surname="Baker"/>
            <author fullname="G. White" initials="G." surname="White"/>
            <date month="February" year="2017"/>
            <abstract>
              <t>Bufferbloat is a phenomenon in which excess buffers in the network cause high latency and latency variation. As more and more interactive applications (e.g., voice over IP, real-time video streaming, and financial transactions) run in the Internet, high latency and latency variation degrade application performance. There is a pressing need to design intelligent queue management schemes that can control latency and latency variation, and hence provide desirable quality of service to users.</t>
              <t>This document presents a lightweight active queue management design called "PIE" (Proportional Integral controller Enhanced) that can effectively control the average queuing latency to a target value. Simulation results, theoretical analysis, and Linux testbed results have shown that PIE can ensure low latency and achieve high link utilization under various congestion situations. The design does not require per-packet timestamps, so it incurs very little overhead and is simple enough to implement in both hardware and software.</t>
            </abstract>
          </front>
          <seriesInfo name="RFC" value="8033"/>
          <seriesInfo name="DOI" value="10.17487/RFC8033"/>
        </reference>
        <reference anchor="RFC8290">
          <front>
            <title>The Flow Queue CoDel Packet Scheduler and Active Queue Management Algorithm</title>
            <author fullname="T. Hoeiland-Joergensen" initials="T." surname="Hoeiland-Joergensen"/>
            <author fullname="P. McKenney" initials="P." surname="McKenney"/>
            <author fullname="D. Taht" initials="D." surname="Taht"/>
            <author fullname="J. Gettys" initials="J." surname="Gettys"/>
            <author fullname="E. Dumazet" initials="E." surname="Dumazet"/>
            <date month="January" year="2018"/>
            <abstract>
              <t>This memo presents the FQ-CoDel hybrid packet scheduler and Active Queue Management (AQM) algorithm, a powerful tool for fighting bufferbloat and reducing latency.</t>
              <t>FQ-CoDel mixes packets from multiple flows and reduces the impact of head-of-line blocking from bursty traffic. It provides isolation for low-rate traffic such as DNS, web, and videoconferencing traffic. It improves utilisation across the networking fabric, especially for bidirectional traffic, by keeping queue lengths short, and it can be implemented in a memory- and CPU-efficient fashion across a wide range of hardware.</t>
            </abstract>
          </front>
          <seriesInfo name="RFC" value="8290"/>
          <seriesInfo name="DOI" value="10.17487/RFC8290"/>
        </reference>
        <reference anchor="RFC9330">
          <front>
            <title>Low Latency, Low Loss, and Scalable Throughput (L4S) Internet Service: Architecture</title>
            <author fullname="B. Briscoe" initials="B." role="editor" surname="Briscoe"/>
            <author fullname="K. De Schepper" initials="K." surname="De Schepper"/>
            <author fullname="M. Bagnulo" initials="M." surname="Bagnulo"/>
            <author fullname="G. White" initials="G." surname="White"/>
            <date month="January" year="2023"/>
            <abstract>
              <t>This document describes the L4S architecture, which enables Internet applications to achieve low queuing latency, low congestion loss, and scalable throughput control. L4S is based on the insight that the root cause of queuing delay is in the capacity-seeking congestion controllers of senders, not in the queue itself. With the L4S architecture, all Internet applications could (but do not have to) transition away from congestion control algorithms that cause substantial queuing delay and instead adopt a new class of congestion controls that can seek capacity with very little queuing. These are aided by a modified form of Explicit Congestion Notification (ECN) from the network. With this new architecture, applications can have both low latency and high throughput.</t>
              <t>The architecture primarily concerns incremental deployment. It defines mechanisms that allow the new class of L4S congestion controls to coexist with 'Classic' congestion controls in a shared network. The aim is for L4S latency and throughput to be usually much better (and rarely worse) while typically not impacting Classic performance.</t>
            </abstract>
          </front>
          <seriesInfo name="RFC" value="9330"/>
          <seriesInfo name="DOI" value="10.17487/RFC9330"/>
        </reference>
      </references>
    </references>
    <?line 605?>

    <section numbered="false" anchor="acknowledgments">
      <name>Acknowledgments</name>
      <t>TODO Acknowledgments.</t>
    </section>
  </back>

</rfc>
