<?xml version='1.0' encoding='utf-8'?>
<!DOCTYPE rfc [
  <!ENTITY nbsp    "&#160;">
  <!ENTITY zwsp   "&#8203;">
  <!ENTITY nbhy   "&#8209;">
  <!ENTITY wj     "&#8288;">
]>
<?xml-stylesheet type="text/xsl" href="rfc2629.xslt" ?>
<!-- generated by https://github.com/cabo/kramdown-rfc version 1.7.2 (Ruby 3.0.2) -->
<rfc xmlns:xi="http://www.w3.org/2001/XInclude" ipr="trust200902" docName="draft-wh-rtgwg-application-aware-dc-network-01" category="std" consensus="true" submissionType="IETF" xml:lang="en" version="3">
  <!-- xml2rfc v2v3 conversion 3.18.2 -->
  <front>
    <title abbrev="APDN">Application-aware Data Center Network (APDN) Use Cases and Requirements</title>
    <seriesInfo name="Internet-Draft" value="draft-wh-rtgwg-application-aware-dc-network-01"/>
    <author initials="H." surname="Wang" fullname="Haibo Wang">
      <organization>Huawei</organization>
      <address>
        <email>rainsword.wang@huawei.com</email>
      </address>
    </author>
    <author initials="H." surname="Huang" fullname="Hongyi Huang">
      <organization>Huawei</organization>
      <address>
        <email>hongyi.huang@huawei.com</email>
      </address>
    </author>
    <date year="2023" month="November" day="05"/>
    <area>General</area>
    <workgroup>Network Working Group</workgroup>
    <abstract>
      <?line 55?>

<t>Deploying large-scale AI services in data centers poses new challenges to traditional technologies such as load balancing and congestion control. In addition, emerging network technologies such as in-network computing are gradually being accepted and used in AI data centers. These network-assisted application acceleration technologies require that cross-layer interaction information be flexibly exchanged between end-hosts and network nodes.</t>
      <t>APDN (Application-aware Data Center Network) adopts the APN framework for the application side to provide more application-aware information to the data center network, enabling the fast evolution of network-application co-design technologies. This document elaborates on use cases of APDN and proposes the corresponding requirements.</t>
    </abstract>
  </front>
  <middle>
    <?line 62?>

<section anchor="intro">
      <name>Introduction</name>
<t>Distributed training of large AI models has gradually become an important workload in large-scale data centers since the emergence of models such as AlphaGo and ChatGPT.
In order to improve the efficiency of large model training, large numbers of computing units (for example, thousands of GPUs running simultaneously) perform the computation in parallel to reduce the job completion time (JCT). The concurrent computing nodes require periodic and bandwidth-intensive communications.</t>
<t>The new multi-party communication modes and characteristics between computing units place higher requirements on the throughput, load balancing, and congestion handling capabilities of the entire data center network.
Traditional data center technology usually regards the network purely as the data transmission carrier for upper-layer applications, with the network providing basic connectivity services.
However, in large AI model training, network-assisted technologies (e.g., offloading partial computation into the network) are being introduced to improve the efficiency of AI jobs through joint optimization of network communication and computing applications.
In most existing network-assisted cases, network operators customize and implement private protocols within a very small scope and cannot achieve general interoperability.
However, emerging data center network technologies need to serve different transports and applications, as the scale of AI data centers continues to increase and there is a trend toward providing cloud services for different AI jobs. The construction of large-scale data centers needs to consider not only general interoperability between devices, but also interoperability between network devices and end-host services.</t>
<t>This document illustrates use cases that require application-aware information exchange between network nodes and applications. Current ways of conveying such information are limited by the extensibility of packet headers: only coarse-grained information can be transmitted between the network and the host through a very limited space (for example, the one-bit ECN mark [RFC3168] in the IP layer).</t>
<t>The Application-aware Networking (APN) framework <xref target="I-D.li-apn-framework"/> defines that application-aware information (i.e., the APN attribute), including an APN identification (ID) and/or APN parameters (e.g., network performance requirements), is encapsulated at network edge devices and carried in packets traversing an APN domain in order to facilitate service provisioning and perform fine-granularity traffic steering and network resource adjustment. The APN framework for the application side <xref target="I-D.li-rtgwg-apn-app-side-framework"/> defines the extension of the APN framework to the application side. In this extension, the APN resources of an APN domain are allocated to applications, which compose and encapsulate the APN attribute in packets.</t>
<t>This document explores the APN framework for the application side to provide richer interaction information between hosts and networks within the data center. It presents several use cases and proposes the corresponding requirements for the APplication-aware Data center Network (APDN).</t>
      <section anchor="terminology">
        <name>Terminology</name>
        <t>APDN: APplication-aware Data center Network</t>
        <t>SQN: SeQuence Number</t>
        <t>TOR: Top Of Rack switch</t>
        <t>PFC: Priority-based Flow Control</t>
        <t>NIC: Network Interface Card</t>
        <t>ECMP: Equal-Cost Multi-Path routing</t>
        <t>AI: Artificial Intelligence</t>
        <t>JCT: Job Completion Time</t>
        <t>PS: Parameter Server</t>
        <t>INC: In-Network Computing</t>
        <t>APN: APplication-aware Network</t>
      </section>
      <section anchor="requirements-language">
        <name>Requirements Language</name>
        <t>The key words "<bcp14>MUST</bcp14>", "<bcp14>MUST NOT</bcp14>", "<bcp14>REQUIRED</bcp14>", "<bcp14>SHALL</bcp14>", "<bcp14>SHALL
NOT</bcp14>", "<bcp14>SHOULD</bcp14>", "<bcp14>SHOULD NOT</bcp14>", "<bcp14>RECOMMENDED</bcp14>", "<bcp14>NOT RECOMMENDED</bcp14>",
"<bcp14>MAY</bcp14>", and "<bcp14>OPTIONAL</bcp14>" in this document are to be interpreted as
described in BCP 14 <xref target="RFC2119"/> <xref target="RFC8174"/> when, and only when, they
appear in all capitals, as shown here.</t>
        <?line -18?>

</section>
    </section>
<section anchor="use-case-and-requirements-for-application-aware-date-center-network">
      <name>Use Cases and Requirements for the Application-aware Data Center Network</name>
      <section anchor="fine-grained-packet-scheduling-for-load-balancing">
        <name>Fine-grained packet scheduling for load balancing</name>
<t>Traditional data centers adopt the per-flow ECMP method to balance traffic across multiple paths. In traditional data centers that focus on cloud computing, due to the diversity of services and random access patterns, the number of data flows is large, but most flows are small and short. The ECMP method can therefore achieve a near-equal distribution of traffic over multiple paths.</t>
<t>In contrast, the communication pattern during large AI model training is different.
It is observed that the traffic requires larger bandwidth than ever. A single data flow between machines can usually saturate the upstream bandwidth of the entire server's egress NIC (for example, the throughput of a single data flow can reach nearly X*100 Gb/s).
When per-flow ECMP (e.g., hash-based or round-robin ECMP) is applied, it is common for concurrent elephant flows to be distributed to a single path. For example, two concurrent 100 Gb/s flows may be placed on the same path, competing for 100 Gb/s of available bandwidth. In such cases, traffic congestion is evident and greatly affects the flow completion time of AI jobs.</t>
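<t>As a rough illustration of why per-flow ECMP struggles with a few elephant flows, the following sketch models hash-based path selection (the hash function and 5-tuples are hypothetical, chosen only for the illustration):</t>
        <sourcecode type="python"><![CDATA[
```python
import hashlib

def ecmp_path(five_tuple, n_paths):
    """Per-flow hash-based ECMP: all packets of a flow take one path."""
    digest = hashlib.sha256(repr(five_tuple).encode()).digest()
    return digest[0] % n_paths

# Two concurrent elephant flows (hypothetical 5-tuples).
flows = [("10.0.0.1", "10.0.1.1", 6, 49152, 4791),
         ("10.0.0.2", "10.0.1.2", 6, 49153, 4791)]
paths = [ecmp_path(f, 4) for f in flows]
# With only 4 equal-cost paths, the two flows collide with probability
# 1/4 under a uniform hash; a collision halves each flow's throughput.
```
]]></sourcecode>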
<t>Therefore, it is necessary to implement fine-grained per-packet ECMP: all the packets of the same flow are sprayed over multiple paths to achieve balance and avoid congestion. Due to the differences in delay (propagation, switching) between paths, packets of the same flow are likely to arrive at the end-host out of order, degrading the performance of the upper-layer transport and application.
To this end, a feasible method is to reorder the disordered packets at the egress TOR (top of rack switch) when applying per-packet ECMP.
Assuming the scope of multipath transmission spans from the ingress to the egress TOR, the principle of reordering is that, for each TOR-TOR pair, the order in which packets leave the last TOR is consistent with the order in which they arrived at the first TOR.</t>
<t>To realize packet reordering at the egress TOR, the order in which packets arrived at the ingress TOR must be clearly indicated.
Looking at existing protocols, sequence number (SQN) information is not directly indicated at the Ethernet or IP layer.</t>
        <ul spacing="normal">
          <li>
            <t>As far as current implementations go, the per-flow/application SQN is generally encapsulated in the transport (e.g., TCP, QUIC, RoCEv2) or in applications. If packet reordering depends on that SQN, the network devices <bcp14>MUST</bcp14> be able to parse a large number of transport/application-layer protocols.</t>
          </li>
          <li>
            <t>The SQN in an upper-layer protocol is allocated per transport/application-level flow. That is, the sequence number space and initial value may differ between flows and cannot directly express the order in which packets arrive at the ingress TOR. Although it is possible to assign a dedicated reordering queue to each flow on the egress TOR and reorder packets using the upper-layer SQN, the hardware resource consumption cannot be overlooked.</t>
          </li>
          <li>
            <t>If a network device directly overwrites the upper-layer SQN with a TOR-TOR pairwise SQN, end-to-end transmission reliability will no longer work.</t>
          </li>
        </ul>
<t>Therefore, given a multipath forwarding domain, specific order information needs to be transmitted from the first device to the last device with reordering functionality.</t>
        <t>The APN framework is explored to carry this order information, which, in this use case, records the sequence number of packets arriving at the ingress TOR (for example, each TOR-TOR pair maintains an independent, incrementing SQN), and the egress TOR reorders the packets according to that information.</t>
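<t>The per TOR-pair resequencing described above can be sketched as follows (a minimal software model assuming in-order release keyed on the APN-carried SQN; a real TOR would implement this in hardware with bounded buffers and timeouts):</t>
        <sourcecode type="python"><![CDATA[
```python
import heapq

class ReorderQueue:
    """Egress-TOR resequencing buffer for one TOR-TOR pair (sketch)."""

    def __init__(self):
        self.next_sqn = 0   # next SQN expected from the ingress TOR
        self.heap = []      # min-heap of (sqn, packet): early arrivals

    def receive(self, sqn, packet):
        """Buffer the packet; return all packets releasable in SQN order."""
        heapq.heappush(self.heap, (sqn, packet))
        released = []
        while self.heap and self.heap[0][0] == self.next_sqn:
            released.append(heapq.heappop(self.heap)[1])
            self.next_sqn += 1
        return released
```
]]></sourcecode>
        <t>For example, if packets stamped with SQNs 0, 2, 1 arrive in that order, the queue releases packet 0 immediately, holds packet 2, and releases packets 1 and 2 together once packet 1 arrives.</t>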
        <t>Requirements:</t>
        <ul spacing="normal">
          <li>
            <t>[REQ1-1] APN <bcp14>SHOULD</bcp14> encapsulate each packet with an SQN, in addition to the APN ID, for reordering. The ingress TOR <bcp14>SHOULD</bcp14> assign and record an SQN at a certain granularity in each packet according to its arrival order. The granularity of SQN assignment can be TOR-TOR, port-port, or queue-queue.</t>
          </li>
          <li>
            <t>[REQ1-2] The SQN in APN <bcp14>MUST NOT</bcp14> be modified inside the multipathing domain and could be cleared from APN at the egress device.</t>
          </li>
          <li>
            <t>[REQ1-3] APN <bcp14>SHOULD</bcp14> be able to carry the queue information (i.e., the sorting queue ID) necessary for the fine-grained reordering process. The queue ID <bcp14>SHOULD</bcp14> be at the same granularity as the SQN assignment.</t>
          </li>
        </ul>
      </section>
      <section anchor="inc-uc">
        <name>In-network computing for distributed machine learning training</name>
        <t>Distributed machine learning training commonly applies the AllReduce communication mode <xref target="mpi-doc"/> for cross-accelerator data transfer in data-parallel and model-parallel scenarios, which execute an application in parallel on multiple processors.<br/>
The exchange of intermediate results (i.e., gradient data in machine learning) of per-processor training occupies the majority of the communication process.</t>
        <t>Under the Parameter Server (PS) architecture <xref target="atp"/> (a centralized parameter server is responsible for collecting gradient data from multiple clients, aggregating it, and sending the aggregated result back to each client), when multiple clients send a large amount of gradient data to the same server simultaneously, incast (many-to-one) congestion is prone to occur at the server.</t>
<t>In-network computing (INC) offloads the processing behavior of the server to the switch.
When an on-path network device with both high switching capacity and line-rate computing capability (for simple arithmetic operations) replaces the traditional end-host parameter server for gradient aggregation (the "addition" operation), the distributed AI training application can complete gradient aggregation on the way. On the one hand, this turns multiple data streams into a single stream within the network, eliminating incast congestion at the server.
On the other hand, distributed computing applications can also benefit from INC because on-switch computation (e.g., in an ASIC) is faster than computation on servers (e.g., on a CPU).</t>
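<t>The switch-side aggregation can be sketched as follows (a simplified model assuming one aggregation segment per packet and a known worker count; a real INC switch must also handle packet loss and retransmission):</t>
        <sourcecode type="python"><![CDATA[
```python
class INCAggregator:
    """Gradient aggregation offloaded to a switch (sketch)."""

    def __init__(self, n_workers):
        self.n_workers = n_workers
        self.partial = {}   # segment id -> (arrivals so far, running sum)

    def on_packet(self, seg_id, gradient):
        """Fold one worker's gradient in; emit the sum once all arrive."""
        count, acc = self.partial.get(seg_id, (0, [0.0] * len(gradient)))
        acc = [a + g for a, g in zip(acc, gradient)]
        if count + 1 == self.n_workers:
            del self.partial[seg_id]   # done: multicast acc back to workers
            return acc
        self.partial[seg_id] = (count + 1, acc)
        return None                    # still waiting for other workers
```
]]></sourcecode>
        <t>Note how N inbound gradient streams leave the switch as a single aggregated stream, which is what removes the incast bottleneck at the server.</t>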
        <t><xref target="I-D.draft-lou-rtgwg-sinc"/> argues that to implement in-network computing, network devices need to be aware of computing tasks required by applications and correctly parse corresponding data units. For multi-source computing, synchronization signals of different data source streams need to be explicitly indicated as well.</t>
        <t>Current implementations (e.g., ATP <xref target="atp"/>, NetReduce <xref target="netreduce"/>) require the switches to parse upper-layer protocols and understand application-specific logic dedicated to a certain application, because there is still no general transport or application protocol for INC.
To support various INC applications, the switch <bcp14>MUST</bcp14> adapt to all kinds of transport/application protocols.<br/>
Furthermore, end users may simply encrypt the whole payload to achieve security, even though they are willing to expose some non-sensitive information to benefit from accelerated INC operations. In such a case, the switch is unable to fetch the information necessary for INC operations without decrypting the whole payload.
The current status of protocols makes it difficult for applications and INC operations to interoperate.</t>
        <t>Fortunately, APN is able to transmit information about the requested INC operations as well as the corresponding data segments, with which applications can offload some analysis and computation to the network.</t>
        <t>Requirements:</t>
        <ul spacing="normal">
          <li>
            <t>[REQ2-1] APN <bcp14>MUST</bcp14> carry an identifier to distinguish different INC tasks.</t>
          </li>
          <li>
            <t>[REQ2-2] APN <bcp14>MUST</bcp14> support carrying application data of various formats and lengths (such as gradients in this use case) to which INC applies, together with the expected operations.</t>
          </li>
          <li>
            <t>[REQ2-3] In order to improve the efficiency of INC, APN <bcp14>SHOULD</bcp14> be able to carry other application-aware information that can assist the computation, while ensuring that the reliability of the end-to-end transport is not compromised.</t>
          </li>
          <li>
            <t>[REQ2-4] APN <bcp14>MUST</bcp14> be able to carry complete INC results and record the computation status in the data packets.</t>
          </li>
        </ul>
      </section>
      <section anchor="refined-congestion-control-that-requires-feedback-of-accurate-congestion-information">
        <name>Refined congestion control that requires feedback of accurate congestion information</name>
        <t>The data center includes at least the following congestion scenarios:</t>
        <ul spacing="normal">
          <li>
            <t>Multi-accelerator collaborative AI model training commonly adopts the AllReduce and All2All communication modes (<xref target="inc-uc"/>). When multiple clients send a large amount of gradient data to a server at the same time, incast congestion is likely to occur at the server side.</t>
          </li>
          <li>
            <t>Different flows may adopt different load balancing methods and strategies, which may overload individual links.</t>
          </li>
          <li>
            <t>Due to random access to services in the data center, there are still traffic bursts that can increase queue lengths and incur congestion.</t>
          </li>
        </ul>
        <t>The industry has proposed different types of congestion control algorithms to alleviate traffic congestion on the paths of the data center network.
Among them, ECN-based congestion control algorithms, such as DCTCP <xref target="RFC8257"/> and DCQCN <xref target="dcqcn"/>, are commonly used in data centers; they use ECN to mark congestion according to the occupancy of the switch buffer.</t>
        <t>However, these methods can only use a 1-bit mark in the packet to indicate congestion (i.e., that the queue size has reached a threshold) and cannot carry richer in-situ measurement information due to the limited header space.
Other proposals, for example HPCC++ <xref target="I-D.draft-miao-ccwg-hpcc"/>, collect congestion information along the path hop by hop through inband telemetry, appending the information of interest to the data packets. However, this greatly increases the length of data packets as they traverse hops and consumes more bandwidth.</t>
        <t>A trade-off method such as AECN <xref target="I-D.draft-shi-ccwg-advanced-ecn"/> can be used to collect only the most important information representing the congestion along the path. Meanwhile, AECN-like methods apply hop-by-hop calculation to reduce the carrying of redundant information. For example, queue delay and the number of congested hops can be accumulated as packets traverse the path.<br/>
In this use case, the end-host can specify the scope of the information it desires to collect, and the network device records/updates the corresponding information hop by hop in the data packet. The collected information might be echoed back to the sender via the transport protocol. APN could serve such interaction between hosts and switches to realize customized information collection.</t>
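<t>The cumulative, fixed-size update that distinguishes AECN-like collection from per-hop appending can be sketched as follows (the field names are illustrative, not drawn from the AECN specification):</t>
        <sourcecode type="python"><![CDATA[
```python
def update_aecn(metadata, hop):
    """Fold one hop's measurements into fixed-size in-packet fields."""
    metadata["queue_delay_us"] += hop["queue_delay_us"]  # cumulative delay
    if hop["queue_len"] > hop["ecn_threshold"]:
        metadata["congested_hops"] += 1                  # a count, not a list
        metadata["worst_queue_len"] = max(metadata.get("worst_queue_len", 0),
                                          hop["queue_len"])
    return metadata
```
]]></sourcecode>
        <t>Unlike inband telemetry, the packet grows by zero bytes per hop: each node rewrites the same few fields, trading per-hop detail for constant overhead.</t>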
        <t>Requirements:</t>
        <ul spacing="normal">
          <li>
            <t>[REQ3-1] The APN framework <bcp14>MUST</bcp14> allow the data sender to express which measurements it wants to collect.</t>
          </li>
          <li>
            <t>[REQ3-2] APN <bcp14>MUST</bcp14> allow network nodes to record/update necessary measurement results if the nodes decide to do so. The measurements could be the queue length of ports, the monitored rate of links, the number of PFC frames, the probed RTT and its variation, and so on. APN <bcp14>MAY</bcp14> record the collector of each measurement so that information consumers can identify possible congestion points.</t>
          </li>
        </ul>
      </section>
    </section>
    <section anchor="encapsulation">
      <name>Encapsulation</name>
      <t>The encapsulation of the application-aware information proposed by the APDN use cases in the APN header <xref target="I-D.draft-li-apn-header"/> will be defined in a future version of this draft.</t>
    </section>
    <section anchor="security-considerations">
      <name>Security Considerations</name>
      <t>TBD.</t>
    </section>
    <section anchor="iana-considerations">
      <name>IANA Considerations</name>
      <t>This document has no IANA actions.</t>
    </section>
  </middle>
  <back>
    <references>
      <name>References</name>
      <references anchor="sec-normative-references">
        <name>Normative References</name>
        <reference anchor="RFC2119">
          <front>
            <title>Key words for use in RFCs to Indicate Requirement Levels</title>
            <author fullname="S. Bradner" initials="S." surname="Bradner"/>
            <date month="March" year="1997"/>
            <abstract>
              <t>In many standards track documents several words are used to signify the requirements in the specification. These words are often capitalized. This document defines these words as they should be interpreted in IETF documents. This document specifies an Internet Best Current Practices for the Internet Community, and requests discussion and suggestions for improvements.</t>
            </abstract>
          </front>
          <seriesInfo name="BCP" value="14"/>
          <seriesInfo name="RFC" value="2119"/>
          <seriesInfo name="DOI" value="10.17487/RFC2119"/>
        </reference>
        <reference anchor="RFC8174">
          <front>
            <title>Ambiguity of Uppercase vs Lowercase in RFC 2119 Key Words</title>
            <author fullname="B. Leiba" initials="B." surname="Leiba"/>
            <date month="May" year="2017"/>
            <abstract>
              <t>RFC 2119 specifies common key words that may be used in protocol specifications. This document aims to reduce the ambiguity by clarifying that only UPPERCASE usage of the key words have the defined special meanings.</t>
            </abstract>
          </front>
          <seriesInfo name="BCP" value="14"/>
          <seriesInfo name="RFC" value="8174"/>
          <seriesInfo name="DOI" value="10.17487/RFC8174"/>
        </reference>
      </references>
      <references anchor="sec-informative-references">
        <name>Informative References</name>
        <reference anchor="mpi-doc" target="https://www.mpi-forum.org/docs/mpi-4.1">
          <front>
            <title>Message-Passing Interface Standard</title>
            <author>
              <organization/>
            </author>
            <date year="2023" month="August"/>
          </front>
        </reference>
        <reference anchor="dcqcn" target="https://conferences.sigcomm.org/sigcomm/2015/pdf/papers/p523.pdf">
          <front>
            <title>Congestion Control for Large-Scale RDMA Deployments</title>
            <author>
              <organization/>
            </author>
            <date year="2015"/>
          </front>
        </reference>
        <reference anchor="netreduce" target="https://arxiv.org/abs/2009.09736">
          <front>
            <title>NetReduce - RDMA-Compatible In-Network Reduction for Distributed DNN Training Acceleration</title>
            <author>
              <organization/>
            </author>
            <date year="2020"/>
          </front>
        </reference>
        <reference anchor="atp" target="https://www.usenix.org/conference/nsdi21/presentation/lao">
          <front>
            <title>ATP - In-network Aggregation for Multi-tenant Learning</title>
            <author>
              <organization/>
            </author>
            <date year="2021"/>
          </front>
        </reference>
        <reference anchor="I-D.li-apn-framework">
          <front>
            <title>Application-aware Networking (APN) Framework</title>
            <author fullname="Zhenbin Li" initials="Z." surname="Li">
              <organization>Huawei Technologies</organization>
            </author>
            <author fullname="Shuping Peng" initials="S." surname="Peng">
              <organization>Huawei Technologies</organization>
            </author>
            <author fullname="Daniel Voyer" initials="D." surname="Voyer">
              <organization>Bell Canada</organization>
            </author>
            <author fullname="Cong Li" initials="C." surname="Li">
              <organization>China Telecom</organization>
            </author>
            <author fullname="Peng Liu" initials="P." surname="Liu">
              <organization>China Mobile</organization>
            </author>
            <author fullname="Chang Cao" initials="C." surname="Cao">
              <organization>China Unicom</organization>
            </author>
            <author fullname="Gyan Mishra" initials="G. S." surname="Mishra">
              <organization>Verizon Inc.</organization>
            </author>
            <date day="3" month="April" year="2023"/>
            <abstract>
              <t>   A multitude of applications are carried over the network, which have
   varying needs for network bandwidth, latency, jitter, and packet
   loss, etc.  Some new emerging applications have very demanding
   performance requirements.  However, in current networks, the network
   and applications are decoupled, that is, the network is not aware of
   the applications' requirements in a fine granularity.  Therefore, it
   is difficult to provide truly fine-granularity traffic operations for
   the applications and guarantee their SLA requirements.

   This document proposes a new framework, named Application-aware
   Networking (APN), where application-aware information (i.e.  APN
   attribute) including APN identification (ID) and/or APN parameters
   (e.g.  network performance requirements) is encapsulated at network
   edge devices and carried in packets traversing an APN domain in order
   to facilitate service provisioning, perform fine-granularity traffic
   steering and network resource adjustment.

              </t>
            </abstract>
          </front>
          <seriesInfo name="Internet-Draft" value="draft-li-apn-framework-07"/>
        </reference>
        <reference anchor="I-D.li-rtgwg-apn-app-side-framework">
          <front>
            <title>Extension of Application-aware Networking (APN) Framework for Application Side</title>
            <author fullname="Zhenbin Li" initials="Z." surname="Li">
              <organization>Huawei Technologies</organization>
            </author>
            <author fullname="Shuping Peng" initials="S." surname="Peng">
              <organization>Huawei Technologies</organization>
            </author>
            <date day="22" month="October" year="2023"/>
            <abstract>
              <t>   The Application-aware Networking (APN) framework defines that
   application-aware information (i.e.  APN attribute) including APN
   identification (ID) and/or APN parameters (e.g. network performance
   requirements) is encapsulated at network edge devices and carried in
   packets traversing an APN domain in order to facilitate service
   provisioning, perform fine-granularity traffic steering and network
   resource adjustment.  This document defines the extension of the APN
   framework for the application side.  In this extension, the APN
   resources of an APN domain is allocated to applications which compose
   and encapsulate the APN attribute in packets.  When the network
   devices in the APN domain receive the packets carrying APN attribute,
   they can directly provide fine-granular traffic operations according
   to these APN attributes in the packets.

              </t>
            </abstract>
          </front>
          <seriesInfo name="Internet-Draft" value="draft-li-rtgwg-apn-app-side-framework-00"/>
        </reference>
        <reference anchor="I-D.draft-lou-rtgwg-sinc">
          <front>
            <title>Signaling In-Network Computing operations (SINC)</title>
            <author fullname="Zhe Lou" initials="Z." surname="Lou">
              <organization>Huawei Technologies</organization>
            </author>
            <author fullname="Luigi Iannone" initials="L." surname="Iannone">
              <organization>Huawei Technologies France S.A.S.U.</organization>
            </author>
            <author fullname="Yizhou Li" initials="Y." surname="Li">
              <organization>Huawei Technologies</organization>
            </author>
            <author fullname="Zhangcuimin" initials="" surname="Zhangcuimin">
              <organization>Huawei Technologies</organization>
            </author>
            <author fullname="Kehan Yao" initials="K." surname="Yao">
              <organization>China Mobile</organization>
            </author>
            <date day="15" month="September" year="2023"/>
            <abstract>
              <t>   This memo introduces "Signaling In-Network Computing operations"
   (SINC), a mechanism to enable signaling in-network computing
   operations on data packets in specific scenarios like NetReduce,
   NetDistributedLock, NetSequencer, etc.  In particular, this solution
   allows to flexibly communicate computational parameters, to be used
   in conjunction with the payload, to in-network SINC-enabled devices
   in order to perform computing operations.

              </t>
            </abstract>
          </front>
          <seriesInfo name="Internet-Draft" value="draft-lou-rtgwg-sinc-01"/>
        </reference>
        <reference anchor="RFC8257">
          <front>
            <title>Data Center TCP (DCTCP): TCP Congestion Control for Data Centers</title>
            <author fullname="S. Bensley" initials="S." surname="Bensley"/>
            <author fullname="D. Thaler" initials="D." surname="Thaler"/>
            <author fullname="P. Balasubramanian" initials="P." surname="Balasubramanian"/>
            <author fullname="L. Eggert" initials="L." surname="Eggert"/>
            <author fullname="G. Judd" initials="G." surname="Judd"/>
            <date month="October" year="2017"/>
            <abstract>
              <t>This Informational RFC describes Data Center TCP (DCTCP): a TCP congestion control scheme for data-center traffic. DCTCP extends the Explicit Congestion Notification (ECN) processing to estimate the fraction of bytes that encounter congestion rather than simply detecting that some congestion has occurred. DCTCP then scales the TCP congestion window based on this estimate. This method achieves high-burst tolerance, low latency, and high throughput with shallow- buffered switches. This memo also discusses deployment issues related to the coexistence of DCTCP and conventional TCP, discusses the lack of a negotiating mechanism between sender and receiver, and presents some possible mitigations. This memo documents DCTCP as currently implemented by several major operating systems. DCTCP, as described in this specification, is applicable to deployments in controlled environments like data centers, but it must not be deployed over the public Internet without additional measures.</t>
            </abstract>
          </front>
          <seriesInfo name="RFC" value="8257"/>
          <seriesInfo name="DOI" value="10.17487/RFC8257"/>
        </reference>
        <reference anchor="I-D.draft-miao-ccwg-hpcc">
          <front>
            <title>HPCC++: Enhanced High Precision Congestion Control</title>
            <author fullname="Rui Miao" initials="R." surname="Miao">
              <organization>Alibaba Group</organization>
            </author>
            <author fullname="Surendra Anubolu" initials="S." surname="Anubolu">
              <organization>Broadcom, Inc.</organization>
            </author>
            <author fullname="Rong Pan" initials="R." surname="Pan">
              <organization>Intel, Corp.</organization>
            </author>
            <author fullname="Jeongkeun Lee" initials="J." surname="Lee">
              <organization>Intel, Corp.</organization>
            </author>
            <author fullname="Barak Gafni" initials="B." surname="Gafni">
              <organization>NVIDIA</organization>
            </author>
            <author fullname="Yuval Shpigelman" initials="Y." surname="Shpigelman">
              <organization>NVIDIA</organization>
            </author>
            <author fullname="Jeff Tantsura" initials="J." surname="Tantsura">
              <organization>NVIDIA</organization>
            </author>
            <author fullname="Guy Caspary" initials="G." surname="Caspary">
              <organization>Cisco Systems</organization>
            </author>
            <date day="5" month="July" year="2023"/>
            <abstract>
              <t>Congestion control (CC) is the key to achieving ultra-low latency, high bandwidth and network stability in high-speed networks. However, the existing high-speed CC schemes have inherent limitations for reaching these goals.</t>
              <t>In this document, we describe HPCC++ (High Precision Congestion Control), a new high-speed CC mechanism which achieves the three goals simultaneously. HPCC++ leverages inband telemetry to obtain precise link load information and controls traffic precisely. By addressing challenges such as delayed signaling during congestion and overreaction to the congestion signaling using inband and granular telemetry, HPCC++ can quickly converge to utilize all the available bandwidth while avoiding congestion, and can maintain near-zero in-network queues for ultra-low latency. HPCC++ is also fair and easy to deploy in hardware, implementable with commodity NICs and switches.</t>
            </abstract>
          </front>
          <seriesInfo name="Internet-Draft" value="draft-miao-ccwg-hpcc-00"/>
        </reference>
        <reference anchor="I-D.draft-shi-ccwg-advanced-ecn">
          <front>
            <title>Advanced Explicit Congestion Notification</title>
            <author fullname="Hang Shi" initials="H." surname="Shi">
              <organization>Huawei</organization>
            </author>
            <author fullname="Tianran Zhou" initials="T." surname="Zhou">
              <organization>Huawei</organization>
            </author>
            <date day="10" month="July" year="2023"/>
            <abstract>
              <t>This document proposes an Advanced Explicit Congestion Notification mechanism enabling the host to obtain the congestion information at the bottleneck. The sender sets the congestion information collection command in the packet header, instructing the network device to update the congestion information field per hop. The receiver carries the updated congestion information back to the sender in the ACK. The sender then leverages the rich congestion information to perform congestion control.</t>
            </abstract>
          </front>
          <seriesInfo name="Internet-Draft" value="draft-shi-ccwg-advanced-ecn-00"/>
        </reference>
        <reference anchor="I-D.draft-li-apn-header">
          <front>
            <title>Application-aware Networking (APN) Header</title>
            <author fullname="Zhenbin Li" initials="Z." surname="Li">
              <organization>Huawei Technologies</organization>
            </author>
            <author fullname="Shuping Peng" initials="S." surname="Peng">
              <organization>Huawei Technologies</organization>
            </author>
            <author fullname="Shuai Zhang" initials="S." surname="Zhang">
              <organization>China Unicom</organization>
            </author>
            <date day="12" month="April" year="2023"/>
            <abstract>
              <t>This document defines the application-aware networking (APN) header which can be used in a variety of data planes.</t>
            </abstract>
          </front>
          <seriesInfo name="Internet-Draft" value="draft-li-apn-header-04"/>
        </reference>
      </references>
    </references>
    <?line 236?>

<section numbered="false" anchor="acknowledgements">
      <name>Acknowledgements</name>
    </section>
    <section numbered="false" anchor="contributors">
      <name>Contributors</name>
    </section>
  </back>
</rfc>
