<?xml version="1.0" encoding="US-ASCII"?>
<?xml-stylesheet type='text/xsl' href='rfc2629.xslt' ?>
<!-- used by XSLT processors -->
<!-- For a complete list and description of processing instructions (PIs),
      please see http://xml.resource.org/authoring/README.html. -->
<!-- Below are generally applicable Processing Instructions (PIs) that most I-Ds might want to use.
     (Here they are set differently than their defaults in xml2rfc v1.32) -->
<?rfc strict="yes" ?>
<!-- give errors regarding ID-nits and DTD validation -->
<!-- control the table of contents (ToC) -->
<?rfc toc="yes"?>
<!-- generate a ToC -->
<?rfc tocdepth="4"?>
<!-- the number of levels of subsections in ToC. default: 3 -->
<!-- control references -->
<?rfc symrefs="yes"?>
<!-- use symbolic references tags, i.e, [RFC2119] instead of [1] -->
<?rfc sortrefs="yes" ?>
<!-- sort the reference entries alphabetically -->
<!-- control vertical white space
     (using these PIs as follows is recommended by the RFC Editor) -->
<?rfc compact="yes" ?>
<!-- do not start each main section on a new page -->
<?rfc subcompact="no" ?>
<!-- keep one blank line between list items -->
<!-- end of list of popular I-D processing instructions -->
<rfc ipr="trust200902"  docName="draft-dong-remote-driving-usecase-00" category="info">
  <!-- ***** FRONT MATTER ***** -->

  <front>
    <title abbrev="Remote Driving Use Case">
    Use Case of Remote Driving and its Network Requirements
    </title>

    <author fullname="Lijun Dong" initials="L." surname="Dong">
      <organization>Futurewei Technologies Inc.</organization>

      <address>
        <postal>
          <street/>
          <city/>
          <region/>
          <code/>
          <country>USA</country>
        </postal>
        <email>lijun.dong@futurewei.com</email>
      </address>
    </author>
	
	

    <author fullname="Richard Li" initials="R." surname="Li">
      <organization>Futurewei Technologies Inc.</organization>
      <address>
        <postal>
          <street/>
          <city/>
          <region/>
          <code/>
          <country>USA</country>
        </postal>
        <email>richard.li@futurewei.com</email>
      </address>
    </author>

    <author fullname="Jungha Hong" initials="J." surname="Hong">
      <organization>ETRI</organization>
      <address>
        <postal>
          <street/>
          <city/>
          <region/>
          <code/>
          <country>Korea</country>
        </postal>
        <email>jhong@etri.re.kr</email>
      </address>
    </author>

    <date/>

    <area>ops</area>

    <workgroup>Independent Submission</workgroup>

    <keyword>Internet-Draft</keyword>

<abstract>
<t>
This document illustrates the use case of remote driving, which leverages a human driver's advanced perceptual and cognitive skills to complement autonomous driving when it is unavailable or falls short. Specifically, the document analyzes the end-to-end latency that the network must provide to support collision avoidance in remote driving. The document also summarizes the other requirements that networking services shall support.
</t>
</abstract>
</front>
<middle>

<section anchor="Intro" title="Introduction to Autonomous Vehicles">
<t>
Autonomous vehicles (AVs) have made great progress in recent years. They rely on numerous well-placed sensors that continuously detect and observe the location and movement of surrounding vehicles, road conditions, pedestrians, traffic lights, etc. An autonomous vehicle is controlled by its own central computer, which manipulates the steering, accelerator, and brake to achieve self-driving at different levels.
</t>
<t>
SAE International's standard "J3016: Taxonomy and Definitions for Terms Related to On-Road Motor Vehicle Automated Driving Systems" defines six Levels of Automation (LoAs) <xref target="SAEJ3016"/>: full automation (level 5), high automation (level 4), conditional automation (level 3), partial automation (level 2), driver assistance (level 1), and no automation (level 0).
</t> 
<t>
Although each vehicle manufacturer has been making its best effort to increase the level of automation, current automated vehicles by themselves only reach SAE level 2 or 3. AVs may fall short in unexpected situations. In such cases, it is desirable that humans can operate the vehicle manually, through remote driving, to recover from a failure situation. Until autonomous technology matures to level 5, experts suggest that AVs should be backed up by tele-operation.
</t>
</section>


<section title="Terms and Abbreviations">
     <t>The terms and abbreviations used in this document are listed below.
       <list style="symbols">   
        <t>AI: Artificial Intelligence</t>
         <t>AV: Autonomous Vehicle</t>
         <t>BE: Best-Effort</t>
         <t>GPS: Global Positioning System</t>         
      </list>
     </t>
    <t>The above terminology is described in greater detail in the remainder of this document.
    </t>
</section>





<section anchor="RemoteDriving" title="Remote Driving">
<t>
Remote driving is a mechanism in which a human driver operates a vehicle from a distance through communication networks. Remote driving leverages the human driver's advanced perceptual and cognitive skills to assist autonomous driving when it falls short, and can handle many complex situations that computer vision or artificial intelligence cannot foresee or apprehend. Such situations and possible failures of autonomous driving include: (1) perception failure at night or under challenging weather conditions, e.g., low visibility due to fog, or lane markers covered by snow; (2) confusing or malfunctioning traffic lights, or traffic signs made unrecognizable by corrosion or graffiti; (3) confusing detour signs or complex instructions temporarily given by police officers, which require extra knowledge of the local traffic and an understanding of the local construction work; (4) complex or confusing parking signs, which might be handwritten and hard for computers to interpret; for instance, parking might only be allowed on certain days of the week, or a parking lot might only admit certain types of vehicles. With remote driving added to the AV control loop, passengers can feel sufficiently safe.
</t>

<t>
Remotely operated vehicles may also be of interest to personal transportation services. Vay, a Berlin-based startup <xref target="Vay"/>, plans to debut a fleet of taxis controlled by remote teledrivers by 2022. The concept behind Vay is that when a customer orders a Vay, one of the teledrivers navigates the vehicle to the pickup location; the customer then takes control of the vehicle. After the customer reaches the destination, the teledriver takes control again and delivers the vehicle to the next nearby customer. Throughout this transaction, remote driving takes place only for vehicle delivery. This is advertised for the initial roll-out stage; in later stages, when the technologies are mature enough, the vehicle might be remotely controlled by teledrivers to drive the customers around. Vay's system is promised to be safer than conventional driving by controlling the top four causes of fatal urban accidents: driving under the influence, speeding, distraction, and fatigue.
</t>

<t>
Remotely operated trucks could eliminate the threats to road safety and to driver/passenger safety caused by fleet driver fatigue during long drives. Remotely operated vehicles are also particularly useful, compared to autonomous trucking <xref target="Tusimple"/>, in situations where it would be hazardous or impossible for humans to operate, for example, construction vehicles at remote sites, or emergency service vehicles in areas affected by chemical spills, active wildfires, or hurricane conditions.
</t>

<t>
A remotely controlled vehicle needs to transmit the necessary data, in high volumes, to the remote operation center, which might be located in an edge cloud or central cloud. The data includes all the sensory feeds that the autonomous vehicle itself collects. Signals from GPS (Global Positioning System) satellites can be combined with readings from tachometers, altimeters, and gyroscopes to provide more accurate positioning of the vehicle. Radar sensors monitor the positions of other vehicles nearby. Lidar (Light Detection and Ranging) sensors bounce pulses of light off the surroundings to identify lane markings and road boundaries. Ultrasonic sensors measure the position of objects that are very close to the vehicle. Video cameras continuously take pictures of the surroundings from different angles. This volumetric data is sent from the vehicle to the remote driving center to provide the remote driver with adequate perception of the environment. The remote driver can then give appropriate instructions to help the autonomous vehicle resolve the issue.
</t>


<section anchor="Collision" title="Collision Avoidance in Remote Driving">
<t>
In this section, we use a specific collision avoidance scenario in remote driving, shown in <xref target="collisionAvoidance"/>, to illustrate the support that the network and its protocols need to provide. Many similar use cases have already been specified in <xref target="TR22.885"/> and <xref target="TR22.886"/>.
</t>

<figure anchor="collisionAvoidance"><artwork><![CDATA[
  ______                                           [   ]  
 /|_||_\`.__                                       [   ]
(   _    _ _\ <----Collision Avoidance Distance--->[   ]
=`-(_)--(_)-'                                      [ P ] 
          .-~~~-.
  .- ~ ~-(       )_ _
 /                     ~ -.
|       Networks           \
 \                         .'
   ~- . _____________ . -~   
                    +------+
                    +Remote+
                    +driver+
                    +------+
                       
]]></artwork></figure>

<t>
Given the current technologies in sensing, encoding, and decoding, together with the Best-Effort (BE) service provided by the current Internet, the total round-trip delay between the time when the roadside camera captures a picture of a pedestrian at the crossing and the time when the self-driving car receives the signal to brake is around 250-400 ms. This total includes the remote driver's reaction time, and every component of it adds to the distance required for the vehicle to come to a stop. The detailed breakdown of the total latency is shown below:
</t>

<t><list style="symbols">
    <t>Image capture, encoding, decoding and display: 100 ms <xref target="Nuvation"/> <xref target="Sensoray"/>;</t>
    <t>Remote driver's reaction time: 100 ms;</t>
    <t>Total transmission time in the network: 50-200 ms, which includes the time for the image data to reach the remote driver as well as the time for the command to reach the vehicle <xref target="VerizonNetwork"/> <xref target="Candela2020"/>. The image data could be encapsulated in multiple packets, depending on the image resolution and size, so the total transmission time in the network might involve the transmission of two or more packets. Given the best-effort nature of the current Internet, the total transmission time is not deterministic and varies on a per-packet basis; it might, for example, range between 50 ms and 200 ms.</t>
    <t>Total: 250-400 ms. </t>
    </list>   
</t>

<t>
The collision avoidance distance is proportional to the vehicle's speed. For example, if the car is driving at 60 km/hour, the collision avoidance distance must be longer than about 7 meters; in other words, the self-driving car must start to brake more than 7 meters away from the pedestrian. <xref target="distance"/> shows the calculation of the collision avoidance distance based on the vehicle's speed and the current total latency.
</t>
<t>
If the vehicle is driving at a higher speed (e.g., 80 km/hour) and is to start braking at a shorter distance from the pedestrian (e.g., 4 meters), the total round-trip delay needs to be much shorter (e.g., 4/(80/3.6) = 0.18 s, i.e., 180 ms). Assuming that, with advances in technology, the total time needed for sensory image capture, framing and encoding, and decoding and display is reduced to 60 ms, the total transmission time in the network cannot exceed 180 - 60 - 100 = 20 ms. Within these 20 ms, the captured image or video data and other sensory data need to arrive at the remote server, and the command from the remote driver needs to reach the vehicle as well.
</t>
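<t>
As a rough check of this budget, the arithmetic above can be sketched in Python. The 60 ms processing time and 100 ms reaction time are the assumptions stated in this section, and the function name is purely illustrative:
</t>
<figure><artwork><![CDATA[
```python
def max_network_latency_ms(speed_kmh: float, distance_m: float,
                           processing_ms: float = 60,
                           reaction_ms: float = 100) -> float:
    """Round-trip network budget left after subtracting image
    capture/encode/decode/display time and driver reaction time."""
    # Time (ms) for the vehicle to cover distance_m at speed_kmh.
    total_budget_ms = distance_m / (speed_kmh / 3.6) * 1000
    return total_budget_ms - processing_ms - reaction_ms

# 80 km/h, braking 4 m from the pedestrian -> 20 ms for the network
print(round(max_network_latency_ms(80, 4)))
```
]]></artwork></figure>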

<table anchor="distance" align="center" pn="table-1">
        <name>Collision avoidance distance based on vehicle's speed</name>
        <thead>
          <tr>
            <th align="center" colspan="1" rowspan="1">Speed</th>
            <th align="center" colspan="1" rowspan="1">Collision Avoidance Distance</th>
            
          </tr>
        </thead>
        <tbody>
          <tr>
            <td align="center" colspan="1" rowspan="1">5 km/hour = 1.39 m/sec</td>
            <td align="center" colspan="1" rowspan="1">1.39*0.4 = 0.56 m</td>
          </tr>
          <tr>
            <td align="center" colspan="1" rowspan="1">30 km/hour = 8.33 m/sec</td>
            <td align="center" colspan="1" rowspan="1">8.33*0.4 = 3.33 m</td>
          </tr>
          <tr>
            <td align="center" colspan="1" rowspan="1">60 km/hour = 16.67 m/sec</td>
            <td align="center" colspan="1" rowspan="1">16.67*0.4 = 6.67 m</td>
          </tr>
          <tr>
            <td align="center" colspan="1" rowspan="1">80 km/hour = 22.22 m/sec</td>
            <td align="center" colspan="1" rowspan="1">22.22*0.4 = 8.89 m</td>
          </tr>
          <tr>
            <td align="center" colspan="1" rowspan="1">120 km/hour = 33.33 m/sec</td>
            <td align="center" colspan="1" rowspan="1">33.33*0.4 = 13.33 m</td>
          </tr>
        </tbody>
</table>
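<t>
The conversions in <xref target="distance"/> follow directly from distance = speed x latency. A small Python sketch, assuming the worst-case 0.4 s total round-trip latency used in the table (small rounding differences from the table are possible):
</t>
<figure><artwork><![CDATA[
```python
def collision_avoidance_distance_m(speed_kmh: float,
                                   rtt_s: float = 0.4) -> float:
    """Ground covered during the end-to-end round-trip latency; the
    physical braking distance would come on top of this."""
    return (speed_kmh / 3.6) * rtt_s  # km/h -> m/s, times latency

for v in (5, 30, 60, 80, 120):
    print(f"{v:>3} km/h -> {collision_avoidance_distance_m(v):.2f} m")
```
]]></artwork></figure>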

</section>
</section>

<section anchor="Requirements" title="Network Requirements">
<t>
The following requirements need to be supported by the networks:
</t>

<t><list style="symbols">
    <t>The networking services shall support multiple concurrent flow streams at high data rates and volumetric data transmission from vehicles with high mobility.</t>
    <t>The networks shall deliver services with service level objectives, specifically latency objectives. The latency objectives are required to be precisely guaranteed and highly reliable, not just "optimized" but quantifiable. </t>
    <t>The network shall be able to identify the packets that carry urgent information and treat them in a differentiated manner with the highest priority.</t>
    <t>The networking services shall reduce, and ideally avoid, the dropping and retransmission of packets of high significance. Packet loss of certain urgent packets is not permissible in the network.</t>
    </list>    
</t>
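<t>
As one possible realization of the priority requirement above, an endpoint could mark urgent packets (e.g., a brake command) with the DiffServ Expedited Forwarding code point so that DiffServ-capable routers can prioritize them. The sketch below is illustrative only; it assumes a UDP sender on a POSIX system, and the address and payload are placeholders:
</t>
<figure><artwork><![CDATA[
```python
import socket

# DSCP 46 (Expedited Forwarding) occupies the upper six bits of
# the IP TOS byte: 46 << 2 == 0xB8.
EF_TOS = 0xB8

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, EF_TOS)

# sock.sendto(b"BRAKE", ("198.51.100.1", 5000))  # placeholder endpoint
sock.close()
```
]]></artwork></figure>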



</section>

<section title="IANA Considerations">
  <t>
   This document requires no actions from IANA.
 </t>
</section>

<section title="Security Considerations"> 
 <t>
   This document introduces no new security issues.
</t>
</section>

<section title="Acknowledgements">  
</section>

</middle>

  <!-- ***** BACK MATTER ***** -->

  <back>
  
<references title="Informative References">

<reference anchor="SAEJ3016" target="https://www.sae.org/standards/content/j3016_202104/">
<front>
    <title>Taxonomy and Definitions for Terms Related to Driving Automation Systems for On-Road Motor Vehicles, SAE J3016_202104 </title>
    <author >
    <organization></organization>
    </author>
    <date year="2021"/>
</front>
</reference>

<reference anchor="Nuvation" target="https://www.nuvation.com/industrial-video-capture-display-system">
<front>
    <title>Video Capture and Display</title>
    <author >
    <organization></organization>
    </author>
    <date year="2022"/>
</front>
</reference>

<reference anchor="Sensoray" target="https://www.nuvation.com/industrial-video-capture-display-system">
<front>
    <title>Video Latency, What It Is and Why It's Important</title>
    <author  fullname='Pete Eberlein'>
    <organization></organization>
    </author>
    <date year="2015"/>
</front>
</reference>

<reference anchor="VerizonNetwork" target="https://www.verizon.com/business/solutions/business-continuity/weekly-latency-statistics/">
<front>
    <title>Verizon Network Latency Statistics</title>
    <author>
    <organization></organization>
    </author>
    <date year="2022"/>
</front>
</reference>


<reference anchor="Candela2020" target="https://doi.org/10.1016/j.comnet.2020.107495">
<front>
    <title>Impact of the COVID-19 pandemic on the Internet latency: A large-scale study</title>
    <author fullname='Massimo Candela'>
    <organization></organization>
    </author>
    <author fullname='Valerio Luconi'>
    <organization></organization>
    </author>
    <author fullname='Alessio Vecchio'>
    <organization></organization>
    </author>
  <date year="2020"/>
</front>
    <seriesInfo name="Computer Networks" value="vol. 182, no. 11"/>
</reference>



<reference anchor="Vay" target="https://vay.io/">
<front>
    <title>A New Approach to Driverless Mobility</title>
    <author>
    <organization></organization>
    </author>
    <date year="2022"/>
</front>
</reference>

<reference anchor="Tusimple" target="https://www.tusimple.com/">
<front>
    <title>TuSimple Autonomous Trucking</title>
    <author >
    <organization></organization>
    </author>
    <date year="2022"/>
</front>
</reference>



<reference anchor="TR22.885" target="https://www.3gpp.org/ftp/Specs/archive/22_series/22.885/">
<front>
    <title>Study on LTE support for Vehicle to Everything (V2X) services, 3GPP TR 22.885 </title>
    <author >
    <organization></organization>
    </author>
    <date year="2015"/>
</front>
</reference>

<reference anchor="TR22.886" target="https://www.3gpp.org/ftp/Specs/archive/22_series/22.886/">
<front>
    <title>Study on enhancement of 3GPP Support for 5G V2X Services, 3GPP TR 22.886 </title>
    <author >
    <organization></organization>
    </author>
    <date year="2018"/>
</front>
</reference>







	
</references>

  </back>
</rfc>
