<?xml version='1.0' encoding='utf-8'?>
<!DOCTYPE rfc [
  <!ENTITY nbsp    "&#160;">
  <!ENTITY zwsp   "&#8203;">
  <!ENTITY nbhy   "&#8209;">
  <!ENTITY wj     "&#8288;">
]>
<rfc xmlns:xi="http://www.w3.org/2001/XInclude" submissionType="IETF" docName="draft-li-nmrg-dtn-data-generation-optimization-03" category="info" ipr="trust200902" obsoletes="" updates="" xml:lang="en" symRefs="true" sortRefs="false" tocInclude="true" version="3">
	<front>
    <title abbrev="Data Generation and Optimization for NDT">Data Generation and Optimization for Network Digital Twin Performance Modeling</title>
    <seriesInfo name="Internet-Draft" value="draft-li-nmrg-dtn-data-generation-optimization-03"/>
    <author fullname="Mei Li" initials="M." surname="Li">
      <organization>China Mobile</organization>
      <address>
        <postal>
          <street/>
          <city>Beijing</city>
          <code>100053</code>
          <country>China</country>
        </postal>
        <email>limeiyjy@chinamobile.com</email>
      </address>
    </author>
    <author fullname="Cheng Zhou" initials="C." surname="Zhou">
      <organization>China Mobile</organization>
      <address>
        <postal>
          <street/>
          <city>Beijing</city>
          <code>100053</code>
          <country>China</country>
        </postal>
        <email>zhouchengyjy@chinamobile.com</email>
      </address>
    </author>
    <author fullname="Danyang Chen" initials="D." surname="Chen">
      <organization>China Mobile</organization>
      <address>
        <postal>
          <street/>
          <city>Beijing</city>
          <code>100053</code>
          <country>China</country>
        </postal>
        <email>chendanyang@chinamobile.com</email>
      </address>
    </author>
    <date year="2025" month="March" day="02"/>
    <workgroup>Network Management Research Group</workgroup>
    <abstract>
      <t>
   Network Digital Twin (NDT) can be used as a secure and cost-effective
   environment for network operators to evaluate network performance in
   various what-if scenarios.  Recently, AI models, especially neural
   networks, have been applied for NDT performance modeling.  The
   quality of deep learning models mainly depends on two aspects: model
   architecture and data.  This memo focuses on how to improve the model
   from the data perspective.</t>
    </abstract>
    <note>
      <name>Requirements Language</name>
      <t>
   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
   document are to be interpreted as described in RFC 2119 <xref target="RFC2119" format="default"/>.</t>
    </note>
  </front>
  <middle>
    <section anchor="sect-1" numbered="true" toc="default">
      <name>Introduction</name>
      <t>
   Digital twin is a virtual instance of a physical system (twin) that
   is continually updated with the latter's performance, maintenance,
   and health status data throughout the physical system's life cycle.
   Network Digital Twin (NDT) is a digital twin that is used in the
   context of networking <xref target="I-D.irtf-nmrg-network-digital-twin-arch" format="default"/>.  NDT
   can be used as a secure and cost-effective environment for network
   operators to evaluate network performance in various what-if
   scenarios.  Recently, AI models, especially neural networks, have
   been applied for NDT performance modeling.</t>
      <t>
   The quality of AI models mainly depends on two aspects: model
   architecture and data.  This memo focuses on the data perspective:
   the quality of the training data directly affects the accuracy and
   generalization ability of the model.  Specifically, it describes how
   to design data generation and optimization methods for NDT
   performance modeling, which generate simulated network data to
   alleviate the shortage of practical data and select high-quality
   data from various data sources.  Training with high-quality data
   improves the accuracy and generalization ability of the model.</t>
    </section>
    <section anchor="sect-2" numbered="true" toc="default">
      <name>Acronyms &amp; Abbreviations</name>
      <dl spacing="normal" newline="false">
        <dt>NDT:</dt><dd>Network Digital Twin</dd>
        <dt>AI:</dt><dd>Artificial Intelligence</dd>
        <dt>AIGC:</dt><dd>AI-Generated Content</dd>
        <dt>ToS:</dt><dd>Type of Service</dd>
        <dt>OOD:</dt><dd>Out-of-Distribution</dd>
        <dt>FIFO:</dt><dd>First In First Out</dd>
        <dt>SP:</dt><dd>Strict Priority</dd>
        <dt>WFQ:</dt><dd>Weighted Fair Queuing</dd>
        <dt>DRR:</dt><dd>Deficit Round Robin</dd>
        <dt>BFS:</dt><dd>Breadth-First Search</dd>
        <dt>CBR:</dt><dd>Constant Bit Rate</dd>
      </dl>
    </section>
    <section anchor="sect-3" numbered="true" toc="default">
      <name>Requirements</name>
      <t>
   Performance modeling is vital in NDT and is involved in typical
   network management scenarios such as planning, operation,
   optimization, and upgrade.  Recently, some studies have applied AI
   models to NDT performance modeling, such as RouteNet <xref target="RouteNet" format="default"/> and
   MimicNet <xref target="MimicNet" format="default"/>.  AI is a data-driven technology whose
   performance heavily depends on data quality.</t>
      <t>
   Network data sources are diverse and of varying quality, making it
   difficult to directly serve as training data for NDT performance
   models:</t>
      <ul spacing="normal">
        <li>
          <t>Practical data from production networks: Data from production
      networks usually have high value, but the quantity, type, and
      accuracy are limited.  Moreover, it is not practical in production
      networks to collect data under various configurations;</t>
        </li>
        <li>
          <t>Network simulators: Network simulators (e.g., NS-3 and OMNeT++)
      can be used to generate simulated network data, which can solve
      the problems of quantity, diversity, and accuracy to a certain
      extent.  However, simulation is usually time-consuming.  In
      addition, there are usually differences between simulated data and
      practical data from production networks, which hinders the
      application of trained models to production networks;</t>
        </li>
        <li>
          <t>Generative AI models: With the development of AI-Generated Content
      (AIGC) technology, generative AI models (e.g., GPT and LLaMA) can
      be used to generate simulated network data, which can solve the
      problems of quantity and diversity to a certain extent.  However,
      the accuracy of the data generated by generative AI models is
      limited and often has gaps with practical data from production
      networks.</t>
        </li>
      </ul>
      <t>
   Therefore, data generation and optimization methods for NDT
   performance modeling are needed, which can generate simulated network
   data to solve the problem of practical data shortage and select
   high-quality data from multi-source data.  High-quality data meets
   the requirements of high accuracy and diversity and is consistent
   with the practical data observed in production networks.  Training
   with high-quality data can improve the accuracy and generalization of
   NDT performance models.</t>
    </section>
    <section anchor="sect-4" numbered="true" toc="default">
      <name>Framework of Data Generation and Optimization</name>
      <t>
   The framework of data generation and optimization for NDT performance
   modeling is shown in
   <xref target="ure-framework-of-data-generation-and-optimization-for-ndt-performance-modeling" format="default"/>,
   which includes two stages: the data generation stage and the data
   optimization stage.</t>
      <figure anchor="ure-framework-of-data-generation-and-optimization-for-ndt-performance-modeling">
        <name>Framework of Data Generation and Optimization for NDT Performance Modeling</name>
        <artwork name="" type="" align="left" alt=""><![CDATA[
       Data generation                   Data optimization
+---------------------------+ +-------------------------------------+
|                           | |                                     |
| +---------+               | |              +---------+            |
| |         |               | | +----------+ |         |            |
| | Network |               | | | Practical| | Easy    |            |
| | topology| +-----------+ | | | data     | | samples |            |
| |         | |           | | | +-----+----+ |         |            |
| |         | | Network   | | |       |      |         | +--------+ |
| |         | | simulator | | | +-----v----+ |         | |        | |
| | Routing | |           | | | |          | | Hard    | | High   | |
| | policy  +->           +-+-+-> Candidate+-> samples +-> quality| |
| |         | |           | | | | data     | |         | | data   | |
| |         | | Generative| | | |          | |         | |        | |
| |         | | AI model  | | | +----------+ |         | +--------+ |
| | Traffic | |           | | |              | OOD     |            |
| | matrix  | +-----------+ | |              | samples |            |
| |         | Data generator| |              | (remove)|            |
| +---------+               | |              |         |            |
|  Network                  | |              +---------+            |
|  configuration            | |             Data selection          |
|                           | |                                     |
+---------------------------+ +-------------------------------------+
]]></artwork>
      </figure>
      <section anchor="sect-4.1" numbered="true" toc="default">
        <name>Data Generation Stage</name>
        <t>
   The data generation stage aims to generate candidate data (simulated
   network data) to solve the problem of the shortage of practical data
   from production networks.  This stage first generates network
   configurations and then imports them into data generators to generate
   the candidate data.</t>
        <ul spacing="normal">
          <li>
            <t>Network configurations: Network configurations typically include
      network topology, routing policy, and traffic matrix.  These
      configurations need to be diverse to cover as many scenarios as
      possible.  Topology configurations include the number and
      structure of nodes and edges, node buffers' size and scheduling
      strategy, link capacity, etc.  Routing policy determines the path
      of a packet from the source to the destination.  The traffic
      matrix describes the traffic entering/leaving the network, which
      includes the traffic's source, destination, time and packet size
      distribution, Type of Service (ToS), etc.</t>
          </li>
          <li>
            <t>Data generators: Data generators can be network simulators (e.g.,
      NS-3 and OMNeT++) and/or the generative AI models (e.g., GPT and
      LLaMA).  Network configurations are imported into data generators
      to generate candidate data.</t>
          </li>
        </ul>
      </section>
      <section anchor="sect-4.2" numbered="true" toc="default">
        <name>Data Optimization Stage</name>
        <t>
   The data optimization stage aims to optimize the candidate data from
   various sources to select high-quality data.</t>
        <ul spacing="normal">
          <li>
            <t>Candidate data: Candidate data includes simulated network data
      generated in the data generation stage and the practical data from
      production networks.</t>
          </li>
          <li>
            <t>Data selection: The data selection module examines the
      candidate data and classifies it into easy, hard, and
      Out-of-Distribution (OOD) samples.  Hard samples are samples that
      are difficult for the model to predict accurately.  Exposing the
      model to more hard samples during training enables it to perform
      better on such samples later on.  The easy and hard samples are
      considered valid and added to the training data, while OOD samples
      are considered invalid and removed.</t>
          </li>
          <li>
            <t>High-quality data: High-quality data needs to meet the
      requirements of high accuracy and diversity and to be consistent
      with practical data, which can be verified using expert knowledge
      (such as the expected ranges of delay, queue utilization, link
      utilization, and average port occupancy).</t>
          </li>
        </ul>
      </section>
    </section>
    <section anchor="sect-5" numbered="true" toc="default">
      <name>Data Generation</name>
      <t>
   This section will describe how to generate network configurations,
   including network topology, routing policy, and traffic matrix.  Then
   these configurations will be imported into data generators to
   generate the candidate data.</t>
      <section anchor="sect-5.1" numbered="true" toc="default">
        <name>Network Topology</name>
        <t>
   Network topologies are generated using the Power-Law Out-Degree
   algorithm, where parameters are set according to real-world
   topologies in the Internet Topology Zoo.</t>
        <t>
   When the flow rate exceeds the link bandwidth or the bandwidth set
   for the flow, the packet is temporarily stored in the node buffer.  A
   larger node buffer size means a larger delay and possibly a lower
   packet loss rate.  The node scheduling policy determines the time and
   order of packet transmission, which is randomly selected from the
   policies such as First In First Out (FIFO), Strict Priority (SP),
   Weighted Fair Queuing (WFQ), and Deficit Round Robin (DRR).</t>
        <t>
   A larger link capacity means a smaller delay and less congestion.  To
   cover diverse link loads to get good coverage of possible scenarios,
   we set the link capacity to be proportional to the total average
   bandwidth of the flows passing through the link.</t>
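      <t>
   As a non-normative illustration, the topology generation and link
   capacity assignment described above can be sketched as follows.  The
   power-law parameters, the minimum out-degree of 1, and the capacity
   factor are assumptions for illustration, not part of this memo.</t>
      <sourcecode type="python"><![CDATA[
```python
import random

def plod_topology(n, alpha=0.4, beta=1.5, seed=0):
    # Simplified Power-Law Out-Degree (PLOD)-style sketch: draw each
    # node's out-degree budget from a power law, then wire edges to
    # random distinct targets.  alpha and beta are illustrative.
    rng = random.Random(seed)
    budgets = []
    for _ in range(n):
        x = 1.0 - rng.random()                 # x in (0, 1]
        budgets.append(max(1, int(alpha * x ** (-beta))))
    edges = set()
    for u, budget in enumerate(budgets):
        targets = [v for v in range(n) if v != u]
        rng.shuffle(targets)
        for v in targets[:budget]:
            edges.add((u, v))
    return sorted(edges)

def assign_link_capacity(edges, flow_bw_on_link, factor=1.25):
    # Capacity proportional to the total average bandwidth of the
    # flows crossing each link, as described above.
    return {e: factor * flow_bw_on_link.get(e, 0.0) for e in edges}
```
]]></sourcecode>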
      </section>
      <section anchor="sect-5.2" numbered="true" toc="default">
        <name>Routing Policy</name>
        <t>
   Routing policy plays a crucial role in routing protocols, which
   determines the path of a packet from the source to the destination.</t>
        <ul spacing="normal">
          <li>
            <t>Default: We set the weight of all links in the topology to be the
      same, that is, equal to 1.  Then we use the Dijkstra algorithm to
      generate the shortest path configuration.  The Dijkstra algorithm
      finds single-source shortest paths in a weighted digraph; with
      uniform unit weights it behaves like a Breadth-First Search (BFS).</t>
          </li>
          <li>
            <t>Variants: We randomly select some links (the same link can be
      chosen more than once) and add a small weight to them.  Then we
      use the Dijkstra algorithm to generate a series of variants of the
      default shortest path configuration based on the weighted graph.
      These variants can add some randomness to the routing
      configuration to cover longer paths and larger delays.</t>
          </li>
        </ul>
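      <t>
   The default and variant routing configurations above can be sketched
   as follows; the number of perturbed links and the epsilon value are
   illustrative assumptions.</t>
      <sourcecode type="python"><![CDATA[
```python
import heapq
import random

def dijkstra(adj, src):
    # Single-source shortest paths on a weighted digraph.
    # adj maps a node to a list of (neighbor, weight) pairs.
    dist = {src: 0.0}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                      # stale heap entry
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

def perturb_weights(adj, k=3, eps=0.1, seed=0):
    # Routing variant: bump the weight of k randomly chosen links
    # (the same link may be chosen more than once) by a small eps.
    rng = random.Random(seed)
    links = [(u, i) for u, nbrs in adj.items() for i in range(len(nbrs))]
    new = {u: list(nbrs) for u, nbrs in adj.items()}
    for _ in range(k):
        u, i = rng.choice(links)
        v, w = new[u][i]
        new[u][i] = (v, w + eps)
    return new
```
]]></sourcecode>
      <t>
   With all weights equal to 1, dijkstra() yields the default shortest
   paths; running it on the perturbed graph yields the routing variants.</t>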
      </section>
      <section anchor="sect-5.3" numbered="true" toc="default">
        <name>Traffic Matrix</name>
        <t>
   The traffic matrix is very important for network performance
   modeling.  The traffic matrix can be regarded as a network map, which
   describes the traffic entering/leaving the network, including the
   source, destination, distribution of the traffic, etc.</t>
        <t>
   We generate traffic matrix configurations with variable traffic
   intensity to cover low to high loads.</t>
        <t>
   Packet sizes, packet size probabilities, and ToS values are
   generated according to an analysis of the validation dataset, so
   that they follow similar distributions.</t>
        <t>
   The arrival of packets for each source-destination pair is modeled
   using one of the time distributions such as Poisson, Constant Bit
   Rate (CBR), and ON-OFF.</t>
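      <t>
   A minimal sketch of arrival-time generation under these three time
   distributions follows; the ON-OFF burst and idle parameters are
   illustrative assumptions.</t>
      <sourcecode type="python"><![CDATA[
```python
import random

def arrivals(model, rate, n, seed=0):
    # Generate n packet arrival times (in seconds) for one
    # source-destination pair under the named time distribution.
    rng = random.Random(seed)
    t, times = 0.0, []
    for i in range(n):
        if model == "poisson":            # exponential inter-arrivals
            t += rng.expovariate(rate)
        elif model == "cbr":              # Constant Bit Rate: fixed gap
            t += 1.0 / rate
        elif model == "on-off":           # bursts at 2x rate, idle gaps
            if i > 0 and i % 10 == 0:     # every 10 packets, go OFF
                t += rng.expovariate(rate / 5.0)
            t += rng.expovariate(2.0 * rate)
        else:
            raise ValueError("unknown model: %s" % model)
        times.append(t)
    return times
```
]]></sourcecode>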
      </section>
    </section>
    <section anchor="sect-6" numbered="true" toc="default">
      <name>Data Optimization</name>
      <t>
   This section will describe how to optimize the data from various
   sources to select high-quality data, which includes the seed
   sample selection phase and the incremental optimization phase.</t>
      <t>
   Candidate data includes simulated network data generated in the data
   generation stage and real data from production networks.  Data
   optimization supports a variety of selection strategies, including
   high fidelity, high coverage, etc.  High fidelity means that the
   selected data can fit the real data (e.g., having similar topologies,
   routing policies, traffic models, etc.), and high coverage means that
   the selected data can cover as many scenarios as possible.</t>
      <section anchor="sect-6.1" numbered="true" toc="default">
        <name>Seed Sample Selection Phase</name>
        <t>
   In the seed sample selection phase, high-quality seed samples are
   selected through the following steps to provide high-quality initial
   samples for the incremental optimization phase.</t>
        <t>
   STEP 1: Training feature extraction model and feature extraction.</t>
        <t>
   (1.1) The training data D' is selected from the candidate data D
   according to the selection strategy.  For the high fidelity strategy,
   the real data is used as the training data D'; for the high coverage
   strategy, the real data and simulated data are used together as the
   training data D'.</t>
        <t>
   (1.2) Feature extraction model E is trained using the training data
   D'.  Feature extraction model E is a network performance evaluation
   model that can be used to evaluate performance indicators such as
   delay, jitter and packet loss (such as RouteNet).</t>
        <t>
   (1.3) Use the feature extraction model E obtained in STEP (1.2) to
   extract the features of the training data D' obtained in STEP (1.1).
   A network can be defined as a set of flows F, queues Q, and links L.
   The flow state SF (such as delay, throughput, and packet loss), the
   queue state SQ (such as port occupancy), and the link state SL (such
   as link utilization) are taken as features.  Each sample in the
   training data D' is converted to a feature vector [SF,SQ,SL].</t>
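        <t>
   The feature vector assembly of STEP (1.3) can be sketched as
   follows; the field names are illustrative assumptions, not a data
   model defined by this memo.</t>
        <sourcecode type="python"><![CDATA[
```python
def feature_vector(sample):
    # Flatten one sample's flow, queue, and link states into a
    # [SF, SQ, SL] feature vector for clustering.
    sf = [f["delay"] for f in sample["flows"]]         # flow state SF
    sq = [q["occupancy"] for q in sample["queues"]]    # queue state SQ
    sl = [l["utilization"] for l in sample["links"]]   # link state SL
    return sf + sq + sl
```
]]></sourcecode>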
        <t>
   STEP 2: Clustering.</t>
        <t>
   Cluster the training data D' after feature extraction.  Clustering
   (such as K-means and DBSCAN) is an unsupervised machine learning
   technique that automatically discovers natural groups in the data,
   dividing it into multiple clusters such that samples in the same
   cluster are similar to each other.</t>
        <t>
   Repeat STEP 3 and STEP 4 until all clusters have been traversed.</t>
        <t>
   STEP 3: Calculating cluster centers and nearest neighbors.</t>
        <t>
   (3.1) Calculate cluster centers.  The method of calculating cluster
   centers is determined according to the clustering algorithm used in
   STEP 2.  For example, using K-means clustering algorithm, the cluster
   center is calculated by finding the average of all data points in the
   cluster.  These cluster centers are added to the seed dataset DS.</t>
        <t>
   (3.2) Calculate k nearest neighbors of each cluster center and add
   them to the seed dataset DS.  Suitable nearest neighbor calculation
   methods can be used, such as Euclidean distance, cosine distance,
   etc.</t>
        <t>
   STEP 4: Expert knowledge verification.</t>
        <t>
   (4.1) Expert knowledge can be used to verify the validity of samples
   through the ranges of indicators such as delay, queue occupancy, and
   link utilization.  If the verification passes, go to STEP 3.
   Otherwise, go to STEP (4.2).</t>
        <t>
   (4.2) Randomly select m samples from the seed dataset DS and remove
   them.  Calculate the nearest neighbors of the removed m samples, add
   them to the seed data set DS, and go to STEP (4.1).</t>
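        <t>
   STEPs 2 through 4 can be sketched as follows.  This is a simplified,
   non-normative sketch: it uses K-means with Euclidean distance, takes
   each center's nearest actual samples, and folds the expert check into
   a simple validity predicate, omitting the re-sampling loop of STEP
   (4.2).</t>
        <sourcecode type="python"><![CDATA[
```python
import math
import random

def dist(a, b):
    # Euclidean distance between two feature vectors.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def kmeans(points, k, iters=20, seed=0):
    # Minimal K-means (STEP 2): returns the k cluster centers.
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k), key=lambda c_i: dist(p, centers[c_i]))
            clusters[j].append(p)
        centers = [
            [sum(col) / len(cl) for col in zip(*cl)] if cl else centers[j]
            for j, cl in enumerate(clusters)
        ]
    return centers

def seed_samples(points, k, nn, valid):
    # STEP 3: each cluster center's nn nearest actual samples;
    # STEP 4: keep only samples passing the expert range check.
    ds = []
    for c in kmeans(points, k):
        for p in sorted(points, key=lambda p: dist(p, c))[:nn]:
            if valid(p):
                ds.append(p)
    return ds
```
]]></sourcecode>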
      </section>
      <section anchor="sect-6.2" numbered="true" toc="default">
        <name>Incremental Optimization Phase</name>
        <t>
   The seed samples are taken as the initial training dataset.  The
   filter model investigates the remaining candidate samples to filter
   out the easy, hard and OOD samples.  Then the easy samples and hard
   samples are added to the training dataset.  These processes are
   repeated to iteratively optimize the filter model and the training
   data until the high-quality data meets the constraints.</t>
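        <t>
   One round of the incremental loop can be sketched as follows; the
   error and OOD thresholds, and the scoring functions, are
   illustrative assumptions rather than values defined by this memo.</t>
        <sourcecode type="python"><![CDATA[
```python
def classify(err, ood, t_easy=0.05, t_ood=3.0):
    # A high OOD score (e.g. feature-space distance to the training
    # distribution) marks an invalid sample; a low prediction error
    # marks an easy sample; everything else is a hard sample.
    if ood > t_ood:
        return "ood"
    return "easy" if err < t_easy else "hard"

def incremental_round(train, candidates, predict_error, ood_score):
    # One round: easy and hard samples join the training dataset,
    # OOD samples are removed.  Repeat until the high-quality data
    # meets the constraints.
    kept = list(train)
    for s in candidates:
        if classify(predict_error(s), ood_score(s)) != "ood":
            kept.append(s)
    return kept
```
]]></sourcecode>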
      </section>
    </section>
    <section anchor="sect-7" numbered="true" toc="default">
      <name>Use Cases</name>
      <t>
   NDT can be applied to various types of networks, including data
   center networks, IP bearer networks, vehicular networks, wireless
   networks, optical networks, and IoT networks.  This section
   highlights the significance of data generation and optimization in
   NDT by presenting several typical use cases.</t>
   
   <section>
   <name>Configuration Evaluation and Optimization in Data Center Networks</name>
      <t>
	  Data centers are essential for the growth of Internet services,
      consisting of numerous computing and storage nodes linked by a
      data center network (DCN), which serves as the communication
      backbone.  The DCN faces challenges related to its large scale,
      diverse applications, high power density, and the need for
      reliability.  NDT can evaluate configurations and technologies to
      reduce the risk of failures.  For NDT to be effective, it must
      accurately model DCN traffic.  A key challenge lies in generating
      realistic network traffic.  By analyzing traffic patterns, data
      generation and optimization techniques can assist in creating
      simulated network data and in optimizing both real and simulated
      data.
      Numerous factors, such as the type of business, network size,
      volume of traffic, and load, influence traffic patterns in
      extensive DCNs.  Moreover, these traffic patterns are dynamic and
      evolve over time.  For instance, workloads that are sensitive to
      latency, like online transaction processing, tend to peak during
      the day, whereas workloads for online analytical processing are
      more prevalent at night.</t>
   </section>
	  
   <section>
   <name>Performance Prediction in IP Bearer Networks</name>
	  <t>
	  Internet service providers encounter challenges in delivering high-bandwidth,
	  low-latency, and reliable services, especially in large networks like
      metropolitan area networks (MANs).  The widely adopted IP protocol
      adheres to a best-effort principle, making predictable performance
      difficult and complicating the stability and availability of
      network services during failures.  NDT can function as a
      high-fidelity simulation platform for predicting IP bearer network
      performance.  Accurate network status information is vital for
      optimizing protocols and identifying faults.  Recent advancements
      in in-band network telemetry (INT) technology have allowed the
      integration of network performance data into packet headers on the
      data plane.  Utilizing real performance data from INT, data
      generation and optimization techniques can create fine-grained
      simulated data, enhancing both real and simulated datasets for
      better model training outcomes.</t>
   </section>
   
   <section>
   <name>Task Offloading in Vehicular Networks</name>
      <t>
      The rise of vehicular networks has facilitated various delay-sensitive 
	  applications, including autonomous driving and navigation.  However, 
	  vehicles with limited resources struggle to meet the low/ultra-low 
	  latency requirements.  To address this, computationally intensive tasks
      can be offloaded to resource-rich platforms like nearby vehicles,
      edge servers, and cloud servers.  The dynamic nature of these
      networks, along with strict low-delay demands and large task data,
      presents significant offloading challenges.  NDT is an emerging 
	  method that allows real-time monitoring of vehicular networks, 
	  aiding in effective offload decisions. Additionally, machine 
	  learning algorithms are increasingly utilized for task offloading
	  to enhance accuracy and efficiency. Unlike traditional communication 
	  networks, vehicular networks are more dynamic and heterogeneous, 
	  leading to data shortages and quality issues.  Data generation and 
	  optimization techniques can simulate data for adaptability and 
	  filter high-quality data from various sources, thereby improving 
	  model training effectiveness.</t>
    </section>
	
    </section>
    <section anchor="sect-8" numbered="true" toc="default">
      <name>Discussion</name>
      <t>
   Several topics related to data generation and optimization for NDT
   performance modeling require further discussion.</t>
      <ul spacing="normal">
        <li>
          <t>Data generation methods: 1) Generate configurations that cover
      enough scenarios and scale from small to large networks. 2) Choose
      data generators that consider accuracy, speed, fidelity, etc. 3)
      Use data augmentation technology to expand the training data by
      using a small amount of practical data to generate similar data
      through prior knowledge.</t>
        </li>
        <li>
          <t>Data optimization methods: 1) Select data from multi-source
      candidate data, including hard sample mining, OOD detection, etc.
      2) Verify whether the data quality meets the requirements.</t>
        </li>
        <li>
          <t>Deployment: 1) Time/space complexity and explainability of the
      data generation and optimization methods. 2) Provide feedback for
      data collection to form a closed loop.</t>
        </li>
      </ul>
    </section>
    <section anchor="sect-9" numbered="true" toc="default">
      <name>Security Considerations</name>
      <t>
   TBD</t>
    </section>
    <section anchor="sect-10" numbered="true" toc="default">
      <name>IANA Considerations</name>
      <t>
   This document has no requests to IANA.</t>
    </section>
  </middle>
  <back>
    <references>
      <name>References</name>
      <references>
        <name>Informative References</name>
        <reference anchor="I-D.irtf-nmrg-network-digital-twin-arch" target="https://datatracker.ietf.org/doc/html/draft-irtf-nmrg-network-digital-twin-arch-09" xml:base="https://bib.ietf.org/public/rfc/bibxml-ids/reference.I-D.irtf-nmrg-network-digital-twin-arch.xml">
          <front>
            <title>Network Digital Twin: Concepts and Reference Architecture</title>
            <author fullname="Cheng Zhou" initials="C." surname="Zhou">
              <organization>China Mobile</organization>
            </author>
            <author fullname="Hongwei Yang" initials="H." surname="Yang">
              <organization>China Mobile</organization>
            </author>
            <author fullname="Xiaodong Duan" initials="X." surname="Duan">
              <organization>China Mobile</organization>
            </author>
            <author fullname="Diego Lopez" initials="D." surname="Lopez"/>
            <author fullname="Antonio Pastor" initials="A." surname="Pastor"/>
            <author fullname="Qin Wu" initials="Q." surname="Wu">
              <organization>Huawei</organization>
            </author>
            <author fullname="Mohamed Boucadair" initials="M." surname="Boucadair">
              <organization>Orange</organization>
            </author>
            <author fullname="Christian Jacquenet" initials="C." surname="Jacquenet">
              <organization>Orange</organization>
            </author>
            <date day="24" month="January" year="2025"/>
            <abstract>
              <t>Digital Twin technology has been seen as a rapid adoption technology in Industry 4.0. The application of Digital Twin technology in the networking field is meant to develop various rich network applications, realize efficient and cost-effective data-driven network management, and accelerate network innovation. This document presents an overview of the concepts of Digital Twin Network, provides the basic definitions and a reference architecture, lists a set of application scenarios, and discusses such technology's benefits and key challenges.</t>
            </abstract>
          </front>
          <seriesInfo name="Internet-Draft" value="draft-irtf-nmrg-network-digital-twin-arch-09"/>
        </reference>
        <reference anchor="MimicNet">
          <front>
            <title>MimicNet: Fast Performance Estimates for Data Center Networks with Machine Learning</title>
            <author initials="Q." surname="Zhang"/>
            <author initials="K. K. W." surname="Ng"/>
            <author initials="C. W." surname="Kazer"/>
            <author initials="S." surname="Yan"/>
            <author initials="J." surname="Sedoc"/>
            <author initials="V." surname="Liu"/>
            <date month="August" year="2021"/>
          </front>
          <refcontent>Proceedings of the ACM SIGCOMM 2021 Conference (SIGCOMM '21)</refcontent>
        </reference>
        <reference anchor="RouteNet">
          <front>
            <title>RouteNet: Leveraging Graph Neural Networks for Network Modeling and Optimization in SDN</title>
            <author initials="K." surname="Rusek"/>
            <author initials="J." surname="Suárez-Varela"/>
            <author initials="P." surname="Almasan"/>
            <author initials="P." surname="Barlet-Ros"/>
            <author initials="A." surname="Cabellos-Aparicio"/>
            <date month="October" year="2020"/>
          </front>
          <refcontent>IEEE Journal on Selected Areas in Communications, vol. 38, no. 10</refcontent>
        </reference>
      </references>
      <references>
        <name>Normative References</name>
        <reference anchor="RFC2119" target="https://www.rfc-editor.org/info/rfc2119" xml:base="https://bib.ietf.org/public/rfc/bibxml/reference.RFC.2119.xml">
          <front>
            <title>Key words for use in RFCs to Indicate Requirement Levels</title>
            <author fullname="S. Bradner" initials="S." surname="Bradner"/>
            <date month="March" year="1997"/>
            <abstract>
              <t>In many standards track documents several words are used to signify the requirements in the specification. These words are often capitalized. This document defines these words as they should be interpreted in IETF documents. This document specifies an Internet Best Current Practices for the Internet Community, and requests discussion and suggestions for improvements.</t>
            </abstract>
          </front>
          <seriesInfo name="BCP" value="14"/>
          <seriesInfo name="RFC" value="2119"/>
          <seriesInfo name="DOI" value="10.17487/RFC2119"/>
        </reference>
      </references>
    </references>
  </back>
</rfc>
