<?xml version='1.0' encoding='UTF-8'?>

<!DOCTYPE rfc [
  <!ENTITY nbsp    "&#160;">
  <!ENTITY zwsp   "&#8203;">
  <!ENTITY nbhy   "&#8209;">
  <!ENTITY wj     "&#8288;">
]>

<rfc xmlns:xi="http://www.w3.org/2001/XInclude" ipr="trust200902" category="info" docName="draft-ietf-mops-ar-use-case-18" number="9699" consensus="true" obsoletes="" updates="" submissionType="IETF" xml:lang="en" tocInclude="true" symRefs="true" sortRefs="true" version="3">
  <!-- xml2rfc v2v3 conversion 3.11.1 -->

  <front>
    <title abbrev="XR Use Case">Use Case for an Extended
    Reality Application on Edge Computing Infrastructure</title>
    <seriesInfo name="Internet-Draft" value="draft-ietf-mops-ar-use-case-18"/> name="RFC" value="9699"/>
    <author fullname="Renan Krishna" initials="R." surname="Krishna">
      <address>
        <postal>
          <country>United Kingdom</country>
        </postal>
        <email>renan.krishna@gmail.com</email>
        <uri/>
      </address>
    </author>
    <author initials="A." surname="Rahman" fullname="Akbar Rahman">
      <organization>Ericsson</organization>
      <address>
        <postal>
          <street>349 Terry Fox Drive</street>
          <city>Ottawa</city>
          <region>Ontario</region>
          <code>K2K 2V6</code>
          <country>Canada</country>
        </postal>
        <phone/>
        <email>Akbar.Rahman@ericsson.com</email>
        <uri/>
      </address>
    </author>
    <date month="December" year="2024"/>
    <area>OPS</area>
    <workgroup>mops</workgroup>

    <abstract>
      <t>This document explores the issues involved in the use of edge
      computing resources to operationalize a media use case that involves an
      Extended Reality (XR) application. In particular, this document
      discusses an XR application that can run on devices having different
      form factors (such as different physical sizes and shapes) and needs edge
      computing resources to mitigate the effect of problems such as the need
      to support interactive communication requiring low latency, limited
      battery power, and heat dissipation from those devices. This document
      also discusses the expected behavior of XR applications, which can be
      used to manage the traffic, and the service requirements for XR
      applications to be able to run on the network. Network operators who are
      interested in providing edge computing resources to operationalize the
      requirements of such applications are the intended audience for this
      document.
      </t>
    </abstract>
  </front>
  <middle>
    <section anchor="introduction" numbered="true" toc="default">
      <name>Introduction</name>
      <t>
		Extended Reality (XR) is a term that includes Augmented
		Reality (AR), Virtual Reality (VR), and Mixed Reality (MR)
		<xref target="XR" format="default"/>.  AR combines the real
		and virtual, is interactive, and is aligned to the physical
		world of the user <xref target="AUGMENTED_2"
		format="default"/>. On the other hand, VR places the user
		inside a virtual environment generated by a computer <xref
		target="AUGMENTED" format="default"/>.MR format="default"/>. MR merges the real and
		virtual world along a continuum that connects a completely real
		environment at one end to a completely virtual environment at
		the other end. In this continuum, all combinations of the real
		and virtual are captured <xref target="AUGMENTED"
		format="default"/>.
      </t>

<t>
	    XR applications have several requirements for the network and the
	    mobile devices running these applications.  Some XR applications
	    (such as AR applications) require real-time processing of video
	    streams to recognize specific objects. This processing is then
	    used to overlay information on the video being displayed to the
	    user.  In addition, other XR applications (such as AR and VR applications) also
	    require generation of new video frames to be played to the
	    user. Both the real-time processing of video streams and the
	    generation of overlay information are computationally intensive
	    tasks that generate heat <xref target="DEV_HEAT_1"
	    format="default"/> <xref target="DEV_HEAT_2" format="default"/>
	    and drain battery power <xref target="BATT_DRAIN"
	    format="default"/> on the mobile device running the XR
	    application.  Consequently, in order to run applications with XR
	    characteristics on mobile devices, computationally intensive tasks
	    need to be offloaded to resources provided by edge computing.
      </t>
      <t>
		Edge computing is an emerging paradigm where, for the purpose of this document, computing resources and storage are made available in close
		network proximity at the edge of the Internet to mobile devices and sensors <xref target="EDGE_1" format="default"/> <xref target="EDGE_2" format="default"/>. A computing resource or storage is in
		close network proximity to a mobile device or sensor if there is a short and high-capacity network path to it
		such that the latency and bandwidth requirements of applications running on those mobile devices or sensors can be met.
		These edge computing devices use cloud technologies that enable them to support offloaded XR applications. In particular, cloud implementation techniques <xref target="EDGE_3" format="default"/> such as the following can be deployed:
		</t>

		<dl spacing="normal">
        <dt>Disaggregation:</dt><dd>Using Software-Defined Networking (SDN) to break vertically integrated systems into independent components. These components can have open interfaces that are standard, well documented, and non-proprietary.</dd>
        <dt>Virtualization:</dt><dd>Being able to run multiple independent copies of those components, such as SDN Controller applications and Virtual Network Functions, on a common hardware platform.</dd>
        <dt>Commoditization:</dt><dd>Being able to elastically scale those virtual components across commodity hardware as the workload dictates.</dd>
      </dl>

		<t>
		 Such techniques enable XR applications that require low latency and high bandwidth to be delivered by proximate edge devices. This is because the disaggregated components can run on proximate edge devices rather than on a remote cloud several hops away and deliver low-latency, high-bandwidth service to offloaded applications <xref target="EDGE_2" format="default"/>.
      </t>

      <t>
	  This document discusses the issues involved when edge computing
	  resources are offered by network operators to operationalize the
	  requirements of XR applications running on devices with various form
	  factors. For the purpose of this document, a network operator is any
	  organization or individual that manages or operates the computing
	  resources or storage in close network proximity to a mobile device
	  or sensor.  Examples of form factors include the following: 1)
	  head-mounted displays (HMDs), such as optical see-through HMDs and
	  video see-through HMDs, 2) hand-held displays, and 3) smartphones
	  with video cameras and location-sensing capabilities using systems
	  such as a global navigation satellite system (GNSS).  These devices
	  have limited battery capacity and dissipate heat when running. Also,
	  as the user of these devices moves around as they run the XR
	  application, the wireless latency and bandwidth available to the
	  devices fluctuates, and the communication link itself might fail. As
	  a result, algorithms such as those based on Adaptive Bitrate (ABR)
	  techniques that base their policy on heuristics or models of
	  deployment perform sub-optimally in such dynamic environments <xref
	  target="ABR_1" format="default"/>.  In addition, network operators
	  can expect that the parameters that characterize the expected
	  behavior of XR applications are heavy-tailed. Heaviness of tails is
	  defined as the difference from the normal distribution in the
	  proportion of the values that fall a long way from the mean <xref
	  target="HEAVY_TAIL_3" format="default"/>. Such workloads require
	  appropriate resource management policies to be used on the edge.
	  The service requirements of XR applications are also challenging
	  when compared to the current video applications.  In particular,
	  several Quality-of-Experience (QoE) factors such as motion sickness
	  are unique to XR applications and must be considered when
	  operationalizing a network.  This document examines these issues
	  with the use case presented in the following section.
      </t>
    </section>

    <section anchor="use_case" numbered="true" toc="default">
      <name>Use Case</name>

      <t>
		 This use case involves an XR application running on a mobile device. Consider
a group of tourists who are taking a tour around the historical site of the
Tower of London.  As they move around the site and within the historical
buildings, they can watch and listen to historical scenes in 3D that are
generated by the XR application and then overlaid by their XR headsets onto
their real-world view. The headset then continuously updates their view as they
move around.
      </t>
      <t>
		The XR application first processes the scene that the walking tourist is watching in real time and identifies objects
		that will be targeted for overlay of high-resolution videos. It then generates high-resolution 3D images
		of historical scenes related to the perspective of the tourist in real time. These generated video images are then
		overlaid on the view of the real world as seen by the tourist.
      </t>
      <t>
		This processing of scenes
		and generation of high-resolution images are discussed in greater detail below.

      </t>
      <section anchor="processsing_of_scenes" numbered="true" toc="default">
        <name>Processing of Scenes</name>
        <t>
		The task of processing a scene can be broken down into a pipeline of three consecutive subtasks: tracking, followed by an acquisition of a
		model of the real world, and finally registration <xref target="AUGMENTED" format="default"/>.
        </t>
	<dl newline="false" spacing="normal">

		<dt>Tracking:</dt><dd>The XR application that runs on the mobile device
		needs to track the six-dimensional pose (translational in the
		three perpendicular axes and rotational about those three
		axes) of the user's head, eyes, and the objects that are in
		view <xref target="AUGMENTED" format="default"/>. This
		requires tracking natural features (for example, points or
		edges of objects) that are then used in the next stage of the
		pipeline.</dd>

		<dt>Acquisition of a model of the real world:</dt><dd>The
		tracked natural features are used to develop a model of the
		real world. One of the ways this is done is to develop a model based on an
		annotated point cloud (a set of points in space that are
		annotated with descriptors) that is then stored in
		a database. To ensure that this database can be scaled up,
		techniques such as combining a client-side simultaneous
		tracking and mapping with server-side localization are used
		to construct a model of the real world <xref target="SLAM_1"
		format="default"/> <xref target="SLAM_2" format="default"/>
		<xref target="SLAM_3" format="default"/> <xref target="SLAM_4"
		format="default"/>. Another model that can be built is based
		on a polygon mesh and texture mapping technique. The polygon
		mesh encodes a 3D object's shape, which is expressed as a
		collection of small flat surfaces that are polygons. In
		texture mapping, color patterns are mapped onto an object's
		surface. A third modeling technique uses a 2D lightfield that
		describes the intensity or color of the light rays arriving at
		a single point from arbitrary directions. Such a 2D lightfield
		is stored as a two-dimensional table. Assuming distant light
		sources, the single point is approximately valid for small
		scenes. For larger scenes, many 3D positions are additionally
		stored, making the table 5D. A set of all such points (either a
		2D or 5D lightfield) can then be used to construct a model of
		the real world <xref target="AUGMENTED"
		format="default"/>.</dd>

		<dt>Registration:</dt><dd>The coordinate systems,
		brightness, and color of virtual and real objects need to be
		aligned with each other; this process is called
		"registration" <xref target="REG" format="default"/>.  Once the
		natural features are tracked as discussed above, virtual
		objects are geometrically aligned with those features by
		geometric registration. This is followed by resolving
		occlusion that can occur between the virtual and real objects
		<xref target="OCCL_1" format="default"/> <xref target="OCCL_2"
		format="default"/>.

		The XR application also applies photometric registration <xref
		target="PHOTO_REG" format="default"/> by aligning the
		brightness and color between the virtual and real
		objects. Additionally, algorithms that calculate global
		illumination of both the virtual and real objects <xref
		target="GLB_ILLUM_1" format="default"/>, format="default"/> <xref
		target="GLB_ILLUM_2" format="default"/> are executed. Various
		algorithms are also required to deal with artifacts generated by lens distortion
		<xref target="LENS_DIST" format="default"/>, blur <xref
		target="BLUR" format="default"/>, noise <xref target="NOISE" format="default"/> etc. are also required.
        </t>
		format="default"/>, etc.</dd>
        </dl>
      </section>
      <section anchor="generation" numbered="true" toc="default">
        <name>Generation of Images</name>

        <t>
   The XR application must generate a high-quality video that has the
   properties described above and overlay the video on the XR device's
   display.  This step is called "situated visualization". A situated
   visualization is a visualization in which the virtual objects that need to
   be seen by the XR user are overlaid correctly on the real world. This
   entails dealing with registration errors that may arise, ensuring that
   there is no visual interference <xref target="VIS_INTERFERE"
   format="default"/>, and finally maintaining temporal coherence by adapting
   to the movement of the user's eyes and head.
        </t>
      </section>
    </section>
    <section anchor="Req" numbered="true" toc="default">
      <name>Technical Challenges and Solutions</name>

      <t>
	As discussed in <xref target="use_case"/>, the components of XR
	applications perform tasks that are computationally intensive, such as
	real-time generation and processing of high-quality video content.
	This section discusses the challenges such applications can face as a
	consequence and offers some solutions.
      </t>
      <t>As a result of performing computationally intensive tasks on XR devices such as XR glasses,
		excessive heat is generated by the chipsets that are involved
		in the computation <xref target="DEV_HEAT_1" format="default"/> <xref target="DEV_HEAT_2" format="default"/>.  Additionally,
		the battery on such devices discharges quickly when running
		such applications <xref target="BATT_DRAIN" format="default"/>.

      </t>
      <t>
	A solution to the problem of heat dissipation and battery drainage is to offload the processing and video generation tasks
	to the remote cloud. However, running such tasks on the cloud is not feasible as the end-to-end delays
		must be within the order of a few milliseconds. Additionally, such applications require high bandwidth
		and low jitter to provide a high QoE to the user. In order to achieve such hard timing constraints, computationally intensive
		tasks can be offloaded to edge devices.

      </t>
      <t>
	Another requirement for our use case and similar applications, such as 360-degree streaming (streaming of video that represents a view in every direction in 3D space), is that the display on
	the XR device should synchronize the visual input with the way the user is moving their head. This synchronization
	is necessary to avoid motion sickness that results from a time lag between when the user moves their head and
	when the appropriate video scene is rendered. This time lag is often called "motion-to-photon delay".
Studies have shown that this delay
can be at most 20 ms and preferably between 7-15 ms in
order to avoid the motion sickness problem <xref target="PER_SENSE" format="default"/> <xref target="XR" format="default"/> <xref target="OCCL_3" format="default"/>. Out of these 20 ms, display techniques including the refresh
rate of write displays and pixel switching take 12-13 ms <xref target="OCCL_3" format="default"/> <xref target="CLOUD" format="default"/>. This leaves 7-8 ms for the processing of
motion sensor inputs, graphic rendering, and round-trip time (RTT) between the XR device and the edge.
The use of predictive techniques to mask latencies has been considered as a mitigating strategy to reduce motion sickness <xref target="PREDICT" format="default"/>.
In addition, edge devices that are proximate to the user might be used to offload these computationally intensive tasks.
   Towards this end, a 3GPP study suggests an Ultra-Reliable Low Latency of
   0.1 to 1 ms for communication between an edge server and User Equipment
   (UE) <xref target="URLLC" format="default"/>.
      </t>
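      <t>
	As a purely illustrative sketch (not part of the cited studies), the
	motion-to-photon budget above can be checked arithmetically when
	planning an offloaded deployment: the display-pipeline, sensing,
	rendering, and RTT figures below are assumed values taken from or
	consistent with the numbers quoted in this section, not normative
	requirements.
      </t>
      <sourcecode type="python"><![CDATA[
# Illustrative motion-to-photon budget check; all values in milliseconds.
# The constants are assumptions for the sketch, not normative requirements.
MOTION_TO_PHOTON_BUDGET = 20.0   # upper bound to avoid motion sickness
DISPLAY_PIPELINE = 13.0          # refresh rate and pixel switching (12-13 ms)

def remaining_budget(display_ms=DISPLAY_PIPELINE,
                     budget_ms=MOTION_TO_PHOTON_BUDGET):
    """Time left for sensing, rendering, and the RTT to the edge."""
    return budget_ms - display_ms

def fits(sensor_ms, render_ms, rtt_ms):
    """True if offloaded processing plus RTT fits in the remaining budget."""
    return sensor_ms + render_ms + rtt_ms <= remaining_budget()

# Example: 2 ms sensing, 4 ms rendering, 1 ms RTT to an edge server.
print(remaining_budget())        # 7.0 ms left after the display pipeline
print(fits(2.0, 4.0, 1.0))       # True: 7 ms <= 7 ms
]]></sourcecode>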
      <t>
		Note that the edge device providing the computation and storage is itself limited in such resources compared to the cloud.
		For example, a sudden surge in demand from a large group of tourists can overwhelm the device. This will result in a degraded user
		 experience as their XR device experiences delays in receiving the video frames. In order to deal
		 with this problem, the client XR applications will need to use ABR algorithms that choose bitrate policies
		 tailored in a fine-grained manner
		 to the resource demands and play back the videos with appropriate QoE metrics as the user moves around with the group of tourists.
      </t>
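      <t>
	As a purely illustrative sketch (not a recommendation of any
	particular ABR scheme), a client-side policy of the kind described
	above could map a throughput estimate and the current buffer
	occupancy to one of the bitrates available for the overlay video; the
	bitrate ladder and thresholds below are assumptions chosen only to
	make the example concrete.
      </t>
      <sourcecode type="python"><![CDATA[
# Minimal throughput- and buffer-based bitrate selection (illustrative only).
# The bitrate ladder (in Mbps) and the safety margin are assumed values.
LADDER_MBPS = [200, 400, 600, 800, 1000]
SAFETY_MARGIN = 0.8        # use only 80% of the estimated throughput
LOW_BUFFER_S = 0.5         # below this buffer level, be conservative

def choose_bitrate(throughput_estimate_mbps, buffer_s):
    """Pick the highest ladder rung the estimated throughput supports."""
    usable = throughput_estimate_mbps * SAFETY_MARGIN
    if buffer_s < LOW_BUFFER_S:
        usable *= 0.5      # halve the target when the buffer is nearly empty
    candidates = [r for r in LADDER_MBPS if r <= usable]
    return candidates[-1] if candidates else LADDER_MBPS[0]

# Example: 900 Mbps estimated throughput and a healthy 2-second buffer.
print(choose_bitrate(900, 2.0))   # 600 (720 Mbps usable -> highest rung <= 720)
]]></sourcecode>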

      <t>
   However, the heavy-tailed nature of several operational parameters (e.g.,
   buffer occupancy, throughput, client-server latency, and variable
   transmission times) makes prediction-based adaptation by ABR algorithms
   sub-optimal <xref target="ABR_2" format="default"/>.  This is because with
   such distributions, the law of large numbers (how long it takes for the
   sample mean to stabilize) works too slowly <xref target="HEAVY_TAIL_2"
   format="default"/> and the mean of sample does not equal the mean of
   distribution <xref target="HEAVY_TAIL_2" format="default"/>; as a result,
   standard deviation and variance are unsuitable as metrics for such
   operational parameters <xref target="HEAVY_TAIL_1"
   format="default"/>.
   Other subtle issues with these distributions include
   the "expectation paradox" <xref target="HEAVY_TAIL_1" format="default"/> where the
   (the longer the wait for an event, the longer a further need to wait wait) and
   the
		issue of mismatch between the size and count of events <xref
   target="HEAVY_TAIL_1" format="default"/>. This makes These issues make designing an algorithm
   for adaptation error-prone and challenging.
   In addition, edge devices and
   communication links may fail, and logical communication relationships
   between various software components change frequently as the user moves
   around with their XR device <xref target="UBICOMP" format="default"/>.

      </t>
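      <t>
	The slow stabilization of the sample mean mentioned above can be
	illustrated with a small simulation; the Pareto shape parameter,
	sample sizes, and trial count below are arbitrary assumptions used
	only to contrast a heavy-tailed distribution with a light-tailed one.
      </t>
      <sourcecode type="python"><![CDATA[
# Illustrative comparison of sample-mean convergence for a light-tailed
# (exponential) and a heavy-tailed (Pareto, infinite-variance) distribution.
import random

random.seed(1)

def sample_means(draw, sizes, trials=200):
    """Average absolute deviation of the sample mean from 1.0 per size."""
    results = []
    for n in sizes:
        dev = 0.0
        for _ in range(trials):
            mean = sum(draw() for _ in range(n)) / n
            dev += abs(mean - 1.0)
        results.append(dev / trials)
    return results

# Exponential with mean 1.0 (light-tailed).
exp_draw = lambda: random.expovariate(1.0)
# Pareto with shape 1.5, rescaled to mean 1.0; its variance is infinite.
alpha = 1.5
par_draw = lambda: random.paretovariate(alpha) * (alpha - 1.0) / alpha

sizes = [10, 100, 1000]
print(sample_means(exp_draw, sizes))  # deviations shrink quickly
print(sample_means(par_draw, sizes))  # deviations typically shrink far slower
]]></sourcecode>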

    </section>
    <section anchor="ArTraffic" numbered="true" toc="default">
      <name>XR Network Traffic</name>

	  <section anchor="traffic_workload" numbered="true" toc="default">
        <name>Traffic Workload</name>

        <t>
		As discussed in Sections <xref target="introduction" format="counter"/> and <xref target="Req" format="counter"/>, the parameters that capture the characteristics of XR application behavior are heavy-tailed.
		Examples of such parameters include the distribution of arrival times between XR application invocations, the amount
		of data transferred, and the inter-arrival times of packets within a session. As a result, any traffic model based on
		such parameters is also heavy-tailed. Using
		these models to predict performance under alternative resource allocations by the network operator is challenging. For example, both uplink and downlink traffic to a user device has parameters such as volume of XR data, burst time, and idle time that are heavy-tailed.
      </t>
      <t>

         <xref target="TABLE_1" format="default"/> below shows various
         streaming video applications and their associated throughput
         requirements <xref target="METRICS_1" format="default"/>. Since our
         use case envisages a 6 degrees of freedom (6DoF) video or point
         cloud, the table indicates that it will require 200 to 1000 Mbps of
         bandwidth.  Also, the table shows that XR applications, such as the
         one in our use case, transmit a larger amount of data per unit time
         as compared to regular video applications. As a result, issues
         arising from heavy-tailed parameters, such as long-range dependent
         traffic <xref target="METRICS_2" format="default"/> and self-similar
         traffic <xref target="METRICS_3" format="default"/>, would be
         experienced at timescales of milliseconds and microseconds rather
         than hours or seconds. Additionally, burstiness at the timescale of
         tens of milliseconds due to the multi-fractal spectrum of traffic
         will be experienced <xref target="METRICS_4" format="default"/>.
         Long-range dependent traffic can have long bursts, and various
         traffic parameters from widely separated times can show correlation
         <xref target="HEAVY_TAIL_1" format="default"/>. Self-similar traffic
         contains bursts at a wide range of timescales <xref
         target="HEAVY_TAIL_1" format="default"/>. Multi-fractal spectrum
         bursts for traffic summarize the statistical distribution of local
         scaling exponents found in a traffic trace <xref
         target="HEAVY_TAIL_1" format="default"/>.  The operational consequences
         consequence of XR traffic having characteristics such as long-range dependency,
         dependency and self-similarity is that the edge servers to which
         multiple XR devices are connected wirelessly could face long bursts
         of traffic <xref target="METRICS_2" format="default"/> <xref
         target="METRICS_3" format="default"/>. In addition, multi-fractal
         spectrum burstiness at the scale of milliseconds could induce jitter
         contributing to motion sickness <xref target="METRICS_4"
         format="default"/>. This is because bursty traffic combined with
         variable queueing delays leads to large delay jitter <xref
         target="METRICS_4" format="default"/>.  The operators of edge servers
         will need to run a "managed edge cloud service" <xref
         target="METRICS_5" format="default"/> to deal with the above
         problems. Functionalities that such a managed edge cloud service
         could operationally provide include dynamic placement of XR servers,
         mobility support, and energy management <xref target="METRICS_6"
         format="default"/>.  Providing Edge server support for the edge servers in techniques being developed at the DETNET Working Group at the IETF
         such as those described in <xref target="RFC8939" format="default"/>,
         <xref target="RFC9023" format="default"/>, and <xref target="RFC9450"
         format="default"/> could guarantee performance of XR
         applications. For example, these techniques could be used for the
         link between the XR device and the edge as well as within the managed
         edge cloud service. Another option for the network operators could be to
         deploy equipment that supports differentiated services <xref
         target="RFC2475" format="default"/> or per-connection quality-of-service
         Quality-of-Service (QoS) guarantees using RSVP <xref target="RFC2210"
         format="default"/>.

      </t>
<t>
     Thus, the provisioning of edge servers (in terms of the number of
     servers, the topology, the placement of servers, the assignment of link
     capacity, CPUs, and Graphics Processing Units (GPUs)) should be performed
     with the above factors in mind.
        </t>

	  <table anchor="TABLE_1">
	    <name>Throughput Requirements for Streaming Video Applications</name>
		<thead>
		 <tr>
		  <th>Application</th>
		  <th>Throughput Required</th>
		 </tr>
		</thead>
		<tbody>
		 <tr>
		  <td><t>Real-world objects annotated with text and images for workflow assistance (e.g., repair)</t></td>
		  <td> <t>1 Mbps</t></td>
		 </tr>
		 <tr>
		  <td><t>Video conferencing</t></td>
		  <td> <t>2 Mbps</t></td>
		 </tr>
		 <tr>
		  <td> <t>3D model and data visualization</t></td>
		  <td> <t>2 to 20 Mbps</t></td>
		 </tr>
		 <tr>
		  <td> <t>Two-way 3D telepresence</t></td>
		  <td> <t>5 to 25 Mbps</t></td>
		 </tr>
		 <tr>
		  <td> <t>Current-Gen 360-degree video (4K)</t></td>
		  <td> <t>10 to 50 Mbps</t></td>
		 </tr>
		 <tr>
		  <td> <t>Next-Gen 360-degree video (8K, 90+ frames per second, high dynamic range, stereoscopic)</t></td>
		  <td> <t>50 to 200 Mbps</t></td>
		 </tr>
		 <tr>
		  <td> <t>6DoF video or point cloud</t></td>
		  <td> <t>200 to 1000 Mbps</t></td>
		 </tr>
		</tbody>
	  </table>


      </section>

	  <section anchor="traffic_performance" numbered="true" toc="default">
        <name>Traffic Performance Metrics</name>

      <t>
	  The performance requirements for XR traffic have characteristics that need to be considered when operationalizing a network.
	  These characteristics are discussed in this section.</t>
<t>The bandwidth requirements of XR applications are substantially higher than those of video-based applications.</t>

	<t>The latency requirements of XR applications have been studied recently <xref target="XR_TRAFFIC" format="default"/>. The following characteristics were identified:
      </t>
      <ul spacing="normal">
        <li>The uploading of data from an XR device to a remote server for processing dominates the end-to-end latency.
			   </li>
        <li> A lack of visual features in the grid environment can cause increased latencies as the XR device
			   uploads additional visual data for processing to the remote server.</li>
        <li>XR applications tend to have large bursts that are separated by significant time gaps.</li>
      </ul>

	 <t> Additionally, XR applications interact with each other on a timescale of an RTT propagation, and this must be considered when operationalizing a network.</t>

         <t>
            <xref target="TABLE_2" format="default"/> shows a taxonomy of
            applications with their associated required response times and
            bandwidths (this data is from Table V in <xref target="METRICS_6"
            format="default"/>). Response times can be defined as the time
            interval between the end of a request submission and the end of
            the corresponding response from a system. If the XR device
            offloads a task to an edge server, the response time of the server
            is the RTT from when a data packet is sent from the XR device
            until a response is received. Note that the required response time
            provides an upper bound for the sum of the time taken by
            computational tasks (such as processing of scenes and generation
            of images) and the RTT. This response time depends only on the QoS
            required by an application. The response time is therefore
            independent of the underlying technology of the network and the
            time taken by the computational tasks.

         </t>
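         <t>
            As a purely illustrative sketch, the bound described above can be
            expressed as a simple feasibility check; the compute and RTT
            figures used in the example are assumptions, not measured values.
         </t>
         <sourcecode type="python"><![CDATA[
# The required response time upper-bounds compute time plus RTT (all in ms).
def offload_is_feasible(required_response_ms, scene_processing_ms,
                        image_generation_ms, rtt_ms):
    """True if processing, generation, and the RTT fit the required bound."""
    total = scene_processing_ms + image_generation_ms + rtt_ms
    return total <= required_response_ms

# Example with assumed values: a 20 ms requirement, 8 ms of scene processing,
# 9 ms of image generation, and a 2 ms RTT to the edge server.
print(offload_is_feasible(20.0, 8.0, 9.0, 2.0))   # True (19 ms <= 20 ms)
]]></sourcecode>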

        <t>
	  Our use case requires a response time of 20 ms at most and
	  preferably between 7-15 ms, as discussed earlier. This requirement
	  for response time is similar to the first two entries in <xref
	  target="TABLE_2" format="default"/>. Additionally, the required
	  bandwidth for our use case is 200 to 1000 Mbps (see <xref
	  target="TABLE_1" format="default"/> in <xref
	  target="traffic_workload"/>).  Since our use case envisages multiple
	  users running the XR application on their devices and connecting to
	  the edge server that is closest to them, connections with these
	  latency and bandwidth requirements will grow linearly with the
	  number of users.
	  The operators should match the network provisioning to the maximum
	  number of tourists that can be supported by a link to an edge
	  server.
         </t>
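         <t>
            A minimal sketch of the provisioning arithmetic implied above is
            shown below; the per-user bandwidth and the link capacity are
            assumed values used only to illustrate the linear growth with the
            number of users.
         </t>
         <sourcecode type="python"><![CDATA[
# Illustrative link dimensioning for a group of XR users on one edge link.
PER_USER_MBPS = 1000      # worst-case 6DoF video / point cloud rate (Table 1)

def aggregate_demand_mbps(num_users, per_user_mbps=PER_USER_MBPS):
    """Aggregate demand grows linearly with the number of users."""
    return num_users * per_user_mbps

def max_supported_users(link_capacity_mbps, per_user_mbps=PER_USER_MBPS):
    """Largest tour group a single link to the edge server can carry."""
    return link_capacity_mbps // per_user_mbps

# Example with an assumed 25 Gbps link to the edge server.
print(aggregate_demand_mbps(20))          # 20000 Mbps for 20 tourists
print(max_supported_users(25000))         # 25 users at the worst-case rate
]]></sourcecode>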

	 <table anchor="TABLE_2">
	    <name>Traffic Performance Metrics of Selected XR Applications</name>
		<thead>
		 <tr>
		  <th> Application</th>
		  <th> Required Response Time</th>
		  <th> Expected Data Capacity</th>
		  <th>Possible Implementations/Examples</th>
		 </tr>
		</thead>
		<tbody>
		 <tr>
		  <td><t>Mobile XR-based remote assistance with uncompressed
		  4K (1920x1080 pixels) 120 fps HDR 10-bit real-time video
		  stream</t></td>
		  <td><t>Less than 10 milliseconds</t></td>
		  <td><t>Greater than 7.5 Gbps</t></td>
		  <td><t>Assisting maintenance technicians, Industry 4.0
		  remote maintenance, remote assistance in robotics
		  industry</t></td>
		 </tr>
		 <tr>
		  <td><t>Indoor and localized outdoor navigation</t></td>
		  <td><t>Less than 20 milliseconds</t></td>
		  <td><t>50 to 200 Mbps</t></td>
		  <td><t>Guidance in theme parks, shopping malls, archaeological sites, and
		  museums</t></td>
		 </tr>
		 <tr>
		  <td><t>Cloud-based mobile XR applications</t></td>
		  <td><t>Less than 50 milliseconds</t></td>
		  <td><t>50 to 100 Mbps</t></td>
		  <td><t>Google Live View, XR-enhanced Google Translate</t></td>
		 </tr>
		</tbody>
	 </table>

	  </section>

	</section>

<section anchor="conclusion" numbered="true" toc="default">
        <name>Conclusion</name>
        <t>
	    In order to operationalize a use case such as the one presented in this document, a network operator could dimension their network to provide a short and high-capacity network path from the edge computing
	    resources or storage to the mobile devices running the XR application. This is required to ensure a response time of 20 ms at most and preferably between 7-15 ms. Additionally, a bandwidth of 200
	    to 1000 Mbps is required by such applications. To deal with the characteristics of XR traffic as discussed in this document, network operators could deploy a managed edge cloud service that operationally
	    provides dynamic placement of XR servers, mobility support, and energy management. Although the use case is technically feasible, economic viability is an important factor that must be considered.

        </t>
</section>

<section anchor="iana" numbered="true" toc="default">
        <name>IANA Considerations</name>
        <t>
	    This document has no IANA actions.

        </t>
</section>

        <section anchor="Sec" numbered="true" toc="default">
        <name>Security Considerations</name>

        <t>
	    The security issues for the presented use case are similar to
	    those described in <xref target="DIST" format="default"/>, <xref
	    target="NIST1" format="default"/>, <xref target="CWE"
	    format="default"/>, and <xref target="NIST2"
	    format="default"/>. This document does not introduce any new
	    security issues.
        </t>

       </section>

	<section anchor="ack" numbered="true" toc="default">
        <name>Acknowledgements</name>
        <t>
		Many thanks to Spencer Dawkins, Rohit Abhishek, Jake Holland, Kiran Makhijani, Ali Begen, Cullen Jennings, Stephan Wenger, Eric Vyncke, Wesley Eddy, Paul Kyzivat, Jim Guichard, Roman Danyliw, Warren Kumari, and Zaheduzzaman Sarker for providing very helpful feedback, suggestions, and comments.

        </t>

      </section>

  </middle>
  <back>
    <references>
      <name>Informative References</name>

      <reference anchor="DEV_HEAT_1" target=""> target="https://dl.acm.org/doi/10.1145/2637166.2637230">
        <front>
          <title>Draining our glass: an energy and heat characterization of Google Glass</title>
          <author initials="R" surname="LiKamWa" fullname="Robert LiKamWa">
            <organization/>
          </author>
          <author initials="Z" surname="Wang" fullname="Zhen Wang">
            <organization/>
          </author>
          <author initials="A" surname="Carroll" fullname="Aaron Carroll">
            <organization/>
          </author>
          <author initials="F" surname="Lin" fullname="Felix Xiaozhu Lin">
            <organization/>
          </author>
          <author initials="L" surname="Zhong" fullname="Lin Zhong">
            <organization/>
          </author>
          <date year="2013"/> year="2014"/>
        </front>
        <seriesInfo name="In Proceedings of" value="5th
        <refcontent>APSys '14: 5th Asia-Pacific Workshop on Systems Systems, pp. 1-7"/> 1-7</refcontent>
        <seriesInfo name="DOI" value="10.1145/2637166.2637230"/>
      </reference>

      <reference anchor="EDGE_1" target=""> target="https://ieeexplore.ieee.org/document/7807196">
        <front>
          <title>The Emergence of Edge Computing</title>
          <author initials="M" surname="Satyanarayanan" fullname="Mahadev Satyanarayanan">
            <organization/>
          </author>
          <date year="2017"/>
        </front>
        <seriesInfo name="In " value="Computer 50(1)
        <refcontent>Computer, vol. 50, no. 1, pp. 30-39"/> 30-39</refcontent>
        <seriesInfo name="DOI" value="10.1109/MC.2017.9"/>
      </reference>

      <reference anchor="EDGE_2" target=""> target="https://ieeexplore.ieee.org/document/8812200">
        <front>
          <title>The Seminal Role of Edge-Native Applications</title>
          <author initials="M" surname="Satyanarayanan" fullname="Mahadev Satyanarayanan">
            <organization/>
          </author>
          <author initials="G" surname="Klas" fullname="Guenter Klas">
            <organization/>
          </author>
          <author initials="M" surname="Silva" fullname="Marco Silva">
            <organization/>
          </author>
          <author initials="S" surname="Mangiante" fullname="Simone Mangiante">
            <organization/>
          </author>
          <date year="2019"/>
        </front>
        <seriesInfo name="In " value="IEEE
        <refcontent>2019 IEEE International Conference on Edge Computing (EDGE) (EDGE), pp. 33-40"/> 33-40</refcontent>
        <seriesInfo name="DOI" value="10.1109/EDGE.2019.00022"/>
      </reference>

      <reference anchor="ABR_1" target=""> target="https://dl.acm.org/doi/10.1145/3098822.3098843">
        <front>
          <title>Neural Adaptive Video Streaming with Pensieve</title>
          <author initials="H" surname="Mao" fullname="Hongzi Mao">
            <organization/>
          </author>
          <author initials="R" surname="Netravali" fullname="Ravi Netravali">
            <organization/>
          </author>
          <author initials="M" surname="Alizadeh" fullname="Mohammad Alizadeh">
            <organization/>
          </author>
          <date year="2017"/>
        </front>
        <seriesInfo name="In " value="Proceedings
        <refcontent>SIGCOMM '17: Proceedings of the Conference of the ACM Special Interest Group on Data Communication, pp. 197-210"/> 197-210</refcontent>
        <seriesInfo name="DOI" value="10.1145/3098822.3098843"/>
      </reference>

      <reference anchor="ABR_2" target=""> target="https://www.usenix.org/conference/nsdi20/presentation/yan">
        <front>
          <title>Learning in situ: a randomized experiment in video streaming</title>
          <author initials="F" surname="Yan" fullname="Francis Y. Yan">
            <organization/>
          </author>
          <author initials="H" surname="Ayers" fullname="Hudson Ayers">
            <organization/>
          </author>
          <author initials="C" surname="Zhu" fullname="Chenzhi Zhu">
            <organization/>
          </author>
          <author initials="S" surname="Fouladi" fullname="Sadjad Fouladi">
            <organization/>
          </author>
          <author initials="J" surname="Hong" fullname="James Hong">
            <organization/>
          </author>
          <author initials="K" surname="Zhang" fullname="Keyi Zhang">
            <organization/>
          </author>
          <author initials="P" surname="Levis" fullname="Philip Levis">
            <organization/>
          </author>
          <author initials="K" surname="Winstein" fullname="Keith Winstein">
            <organization/>
          </author>
          <date month="February" year="2020"/>
        </front>
        <seriesInfo name="In " value=" 17th
        <refcontent>17th USENIX Symposium on Networked Systems Design and Implementation (NSDI 20), '20), pp. 495-511"/> 495-511</refcontent>
      </reference>

      <reference anchor="HEAVY_TAIL_1" target=""> target="https://www.wiley.com/en-us/Internet+Measurement%3A+Infrastructure%2C+Traffic+and+Applications-p-9780470014615">
        <front>
          <title>Internet Measurement: Infrastructure, Traffic and Applications</title>
          <author initials="M" surname="Crovella" fullname="Mark Crovella">
            <organization/>
          </author>
          <author initials="B" surname="Krishnamurthy" fullname="Balachander Krishnamurthy">
            <organization/>
          </author>
          <date year="2006"/>
        </front>
        <seriesInfo name="John " value="Wiley
        <refcontent>John Wiley and Sons Inc."/> Sons</refcontent>
      </reference>

      <reference anchor="HEAVY_TAIL_2" target=""> target="https://arxiv.org/pdf/2001.10488">
        <front>
          <title>Statistical Consequences of Fat Tails: Real World Preasymptotics, Epistemology, and Applications</title>
          <author initials="N" surname="Taleb" fullname="Nassim Nicholas Taleb">
            <organization/>
          </author>
          <date year="2020"/> year="2022"/>
        </front>
        <seriesInfo name="STEM " value="Academic Press"/>
        <refcontent>Revised Edition, STEM Academic Press</refcontent>
      </reference>

      <reference anchor="UBICOMP" target=""> target="https://www.taylorfrancis.com/chapters/edit/10.1201/9781420093612-6/ubiquitous-computing-systems-jakob-bardram-adrian-friday">
        <front>
          <title>Ubiquitous Computing Systems</title>
          <author initials="J" surname="Bardram" fullname="Jakob Eyvind Bardram">
            <organization/>
          </author>
          <author initials="A" surname="Friday" fullname="Adrian Friday">
            <organization/>
          </author>
          <date year="2009"/>
        </front>
        <seriesInfo name="In " value=" Ubiquitous
        <refcontent>Ubiquitous Computing Fundamentals Fundamentals, 1st Edition, Chapman and Hall/CRC Press, pp. 37-94. CRC Press"/> 37-94</refcontent>
      </reference>

      <reference anchor="SLAM_1" target=""> target="https://ieeexplore.ieee.org/document/6909455">
        <front>
          <title>A Minimal Solution to the Generalized Pose-and-Scale Problem</title>
          <author initials="J" surname="Ventura" fullname="Jonathan Ventura">
            <organization/>
          </author>
          <author initials="C" surname="Arth" fullname="Clemens Arth">
            <organization/>
          </author>
          <author initials="G" surname="Reitmayr" fullname="Gerhard Reitmayr">
            <organization/>
          </author>
          <author initials="D" surname="Schmalstieg" fullname="Dieter Schmalstieg">
            <organization/>
          </author>
          <date year="2014"/>
        </front>
        <seriesInfo name="In " value="Proceedings of the
        <refcontent>2014 IEEE Conference on Computer Vision and Pattern Recognition, pp. 422-429"/> 422-429</refcontent>
        <seriesInfo name="DOI" value="10.1109/CVPR.2014.61"/>
      </reference>

      <reference anchor="SLAM_2" target=""> target="https://link.springer.com/chapter/10.1007/978-3-319-10593-2_2">
        <front>
          <title>gDLS: A Scalable Solution to the Generalized Pose and Scale Problem</title>
          <author initials="C" surname="Sweeny" fullname="Chris Sweeny">
            <organization/>
          </author>
          <author initials="V" surname="Fragoso" fullname="Victor Fragoso">
            <organization/>
          </author>
          <author initials="T" surname="Hollerer" surname="Höllerer" fullname="Tobias Hollerer"> Höllerer">
            <organization/>
          </author>
          <author initials="M" surname="Turk" fullname="Matthew Turk">
            <organization/>
          </author>
          <date year="2014"/>
        </front>
        <seriesInfo name="In " value="European Conference on Computer Vision,
        <refcontent>Computer Vision - ECCV 2014, pp. 16-31"/> 16-31</refcontent>
	<seriesInfo name="DOI" value="10.1007/978-3-319-10593-2_2"/>
      </reference>

      <reference anchor="SLAM_3" target=""> target="https://ieeexplore.ieee.org/document/6636302">
        <front>
          <title>Model Estimation and Selection towards Unconstrained Real-Time Tracking and Mapping</title>
          <author initials="S" surname="Gauglitz" fullname="Steffen Gauglitz">
            <organization/>
          </author>
          <author initials="C" surname="Sweeny" surname="Sweeney" fullname="Chris Sweeny"> Sweeney">
            <organization/>
          </author>
          <author initials="J" surname="Ventura" fullname="Jonathan Ventura">
            <organization/>
          </author>
          <author initials="M" surname="Turk" fullname="Matthew Turk">
            <organization/>
          </author>
          <author initials="T" surname="Hollerer" surname="Höllerer" fullname="Tobias Hollerer"> Höllerer">
            <organization/>
          </author>
          <date year="2013"/> year="2014"/>
        </front>
        <seriesInfo name="In " value="IEEE transactions
        <refcontent>IEEE Transactions on visualization Visualization and computer graphics, 20(6), Computer Graphics, vol. 20, no. 6, pp. 825-838"/> 825-838</refcontent>
        <seriesInfo name="DOI" value="10.1109/TVCG.2013.243"/>
      </reference>

      <reference anchor="SLAM_4" target=""> target="https://ieeexplore.ieee.org/document/6671783">
        <front>
          <title>Handling pure camera rotation in keyframe-based SLAM</title>
          <author initials="C" surname="Pirchheim" fullname="Christian Pirchheim">
            <organization/>
          </author>
          <author initials="D" surname="Schmalstieg" fullname="Dieter Schmalstieg">
            <organization/>
          </author>
          <author initials="G" surname="Reitmayr" fullname="Gerhard Reitmayr">
            <organization/>
          </author>
          <date year="2013"/>
        </front>
        <seriesInfo name="In " value="2013
        <refcontent>2013 IEEE international symposium International Symposium on mixed Mixed and augmented reality Augmented Reality (ISMAR), pp. 229-238"/> 229-238</refcontent>
        <seriesInfo name="DOI" value="10.1109/ISMAR.2013.6671783"/>
      </reference>

      <reference anchor="OCCL_1" target=""> target="https://onlinelibrary.wiley.com/doi/10.1111/1467-8659.1530011">
        <front>
          <title>Interactive Occlusion and Automatic Object Placement for Augmented Reality</title>
          <author initials="D.E" surname="Breen" fullname="David E. Breen">
            <organization/>
          </author>
          <author initials="R.T" surname="Whitaker" fullname="Ross T. Whitaker">
            <organization/>
          </author>
          <author initials="E" surname="Rose" fullname="Eric Rose">
            <organization/>
          </author>
          <author initials="M" surname="Tuceryan" fullname="Mihran Tuceryan">
            <organization/>
          </author>
          <date month="August" year="1996"/>
        </front>
        <seriesInfo name="In " value="Computer
        <refcontent>Computer Graphics Forum, vol. 15, no. 3 , 3, pp. 229-238,Edinburgh, UK: Blackwell Science Ltd"/> 11-22</refcontent>
        <seriesInfo name="DOI" value="10.1111/1467-8659.1530011"/>
      </reference>

      <reference anchor="OCCL_2" target=""> target="https://ieeexplore.ieee.org/document/6948419">
        <front>
          <title>Pixel-wise closed-loop registration in video-based augmented reality</title>
          <author initials="F" surname="Zheng" fullname="Feng Zheng">
            <organization/>
          </author>
          <author initials="D" surname="Schmalstieg" fullname="Dieter Schmalstieg">
            <organization/>
          </author>
          <author initials="G" surname="Welch" fullname="Greg Welch">
            <organization/>
          </author>
          <date year="2014"/>
        </front>
        <seriesInfo name="In " value="IEEE
        <refcontent>2014 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), pp. 135-143"/> 135-143</refcontent>
        <seriesInfo name="DOI" value="10.1109/ISMAR.2014.6948419"/>
      </reference>

      <reference anchor="PHOTO_REG" target=""> target="https://ieeexplore.ieee.org/document/6165138">
        <front>
          <title>Online Tracking of Outdoor Lighting Variations for Augmented Reality with Moving Cameras</title>
          <author initials="Y" surname="Liu" fullname="Yanli Liu">
            <organization/>
          </author>
          <author initials="X" surname="Granier" fullname="Xavier Granier">
            <organization/>
          </author>
          <date year="2012"/>
        </front>
        <seriesInfo name="In " value="IEEE
        <refcontent>IEEE Transactions on visualization Visualization and computer graphics, 18(4), pp.573-580"/> Computer Graphics, vol. 18, no. 4, pp. 573-580</refcontent>
        <seriesInfo name="DOI" value="10.1109/TVCG.2012.53"/>
      </reference>

      <reference anchor="GLB_ILLUM_1" target=""> target="https://ieeexplore.ieee.org/document/6671773">
        <front>
          <title>Differential Irradiance Caching for fast high-quality light transport between virtual and real worlds</title>
          <author initials="P" surname="Kan" fullname="Peter Kan">
            <organization/>
          </author>
          <author initials="H" surname="Kaufmann" fullname="Hannes Kaufmann">
            <organization/>
          </author>
          <date year="2013"/>
        </front>
        <seriesInfo name="In " value="IEEE
        <refcontent>2013 IEEE International Symposium on Mixed and Augmented Reality (ISMAR),pp. 133-141"/> (ISMAR), pp. 133-141</refcontent>
        <seriesInfo name="DOI" value="10.1109/ISMAR.2013.6671773"/>
      </reference>

      <reference anchor="GLB_ILLUM_2" target=""> target="https://ieeexplore.ieee.org/document/6948407">
        <front>
          <title>Delta Voxel Cone Tracing</title>
          <author initials="T" surname="Franke" fullname="Tobias Franke">
            <organization/>
          </author>
          <date year="2014"/>
        </front>
        <seriesInfo name="In " value="IEEE
        <refcontent>2014 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), pp. 39-44"/> 39-44</refcontent>
        <seriesInfo name="DOI" value="10.1109/ISMAR.2014.6948407"/>
      </reference>

      <reference anchor="LENS_DIST" target=""> target="https://link.springer.com/chapter/10.1007/978-3-7091-6785-4_2">
        <front>
          <title>Practical Calibration Procedures for Augmented Reality</title>
          <author initials="A" surname="Fuhrmann" fullname="Anton Fuhrmann">
            <organization/>
          </author>
          <author initials="D" surname="Schmalstieg" fullname="Dieter Schmalstieg">
            <organization/>
          </author>
          <author initials="W" surname="Purgathofer" fullname="Werner Purgathofer">
            <organization/>
          </author>
          <date year="2000"/>
        </front>
        <seriesInfo name="In " value="Virtual
        <refcontent>Virtual Environments 2000, pp. 3-12. Springer, Vienna"/> 3-12</refcontent>
        <seriesInfo name="DOI" value="10.1007/978-3-7091-6785-4_2"/>
      </reference>

      <reference anchor="BLUR" target=""> target="https://diglib.eg.org/items/6954bf7e-5852-44cf-8155-4ba269dc4cee">
        <front>
          <title>Physically-Based Depth of Field in Augmented Reality</title>
          <author initials="P" surname="Kan" fullname="Peter Kan">
            <organization/>
          </author>
          <author initials="H" surname="Kaufmann" fullname="Hannes Kaufmann">
            <organization/>
          </author>
          <date year="2012"/>
        </front>
        <seriesInfo name="In " value="Eurographics (Short Papers),
        <refcontent>Eurographics 2012 - Short Papers, pp. 89-92."/> 89-92</refcontent>
        <seriesInfo name="DOI" value="10.2312/conf/EG2012/short/089-092"/>
      </reference>

      <reference anchor="NOISE" target=""> target="https://ieeexplore.ieee.org/document/4079277">
        <front>
          <title>Enhanced visual realism by incorporating camera image effects</title>
          <author initials="J" surname="Fischer" fullname="Jan Fischer">
            <organization/>
          </author>
          <author initials="D" surname="Bartz" fullname="Dirk Bartz">
            <organization/>
          </author>
          <author initials="W" surname="Straßer" surname="Strasser" fullname="Wolfgang Straßer"> Strasser">
            <organization/>
          </author>
          <date year="2006"/>
        </front>
        <seriesInfo name="In " value="IEEE/ACM
        <refcontent>2006 IEEE/ACM International Symposium on Mixed and Augmented Reality, pp. 205-208."/> 205-208</refcontent>
        <seriesInfo name="DOI" value="10.1109/ISMAR.2006.297815"/>
      </reference>

      <reference anchor="VIS_INTERFERE" target=""> target="https://ieeexplore.ieee.org/document/4538846">
        <front>
          <title>Interactive Focus and Context Visualization for Augmented Reality</title>
          <author initials="D" surname="Kalkofen" fullname="Denis Kalkofen">
            <organization/>
          </author>
          <author initials="E" surname="Mendez" fullname="Erick Mendez">
            <organization/>
          </author>
          <author initials="D" surname="Schmalstieg" fullname="Dieter Schmalstieg">
            <organization/>
          </author>
          <date year="2007"/>
        </front>
        <seriesInfo name="In " value="6th
        <refcontent>2007 6th IEEE and ACM International Symposium on Mixed and Augmented Reality, pp. 191-201."/> 191-201</refcontent>
        <seriesInfo name="DOI" value="10.1109/ISMAR.2007.4538846"/>
      </reference>

      <reference anchor="DEV_HEAT_2" target=""> target="https://www.mdpi.com/1424-8220/20/5/1446">
        <front>
          <title>Thermal Model and Countermeasures for Future Smart Glasses</title>
          <author initials="K" surname="Matsuhashi" fullname="Kodai Matsuhashi">
            <organization/>
          </author>
          <author initials="T" surname="Kanamoto" fullname="Toshiki Kanamoto">
            <organization/>
          </author>
          <author initials="A" surname="Kurokawa" fullname=" Atsushi fullname="Atsushi Kurokawa">
            <organization/>
          </author>
          <date year="2020"/>
        </front>
        <refcontent>Sensors, vol. 20, no. 5, p. 1446</refcontent>
        <seriesInfo name="In " value="Sensors, 20(5), p.1446."/> name="DOI" value="10.3390/s20051446"/>
      </reference>

      <reference anchor="BATT_DRAIN" target=""> target="https://ieeexplore.ieee.org/document/7993011">
        <front>
          <title>A Survey of Wearable Devices and Challenges</title>
          <author initials="S" surname="Seneviratne" fullname="Suranga Seneviratne">
            <organization/>
          </author>
          <author initials="Y" surname="Hu" fullname="Yining Hu">
            <organization/>
          </author>
          <author initials="T" surname="Nguyen" fullname=" Tham fullname="Tham Nguyen">
            <organization/>
          </author>
          <author initials="G" surname="Lan" fullname=" Guohao fullname="Guohao Lan">
            <organization/>
          </author>
          <author initials="S" surname="Khalifa" fullname=" Sara fullname="Sara Khalifa">
            <organization/>
          </author>
          <author initials="K" surname="Thilakarathna" fullname=" Kanchana fullname="Kanchana Thilakarathna">
            <organization/>
          </author>
          <author initials="M" surname="Hassan" fullname=" Mahbub fullname="Mahbub Hassan">
            <organization/>
          </author>
          <author initials="A" surname="Seneviratne" fullname=" Aruna fullname="Aruna Seneviratne">
            <organization/>
          </author>
          <date year="2017"/>
        </front>
        <seriesInfo name="In " value="IEEE
        <refcontent>IEEE Communication Surveys and Tutorials, 19(4), p.2573-2620."/> vol. 19 no. 4, pp. 2573-2620</refcontent>
        <seriesInfo name="DOI" value="10.1109/COMST.2017.2731979"/>
      </reference>

      <reference anchor="PER_SENSE" target=""> target="https://dl.acm.org/doi/10.1145/1012551.1012559">
        <front>
          <title>Perceptual sensitivity to head tracking latency in virtual environments with varying degrees of scene complexity</title>
          <author initials="K" surname="Mania" fullname="Katerina Mania">
            <organization/>
          </author>
          <author initials="B.D." surname="Adelstein" fullname="Bernard D. Adelstein "> Adelstein">
            <organization/>
          </author>
          <author initials="S.R." surname="Ellis" fullname=" Stephen fullname="Stephen R. Ellis">
            <organization/>
          </author>
          <author initials="M.I." surname="Hill" fullname=" Michael fullname="Michael I. Hill">
            <organization/>
          </author>
          <date year="2004"/>
        </front>
        <seriesInfo name="In " value="Proceedings
        <refcontent>APGV '04: Proceedings of the 1st Symposium on Applied perception in graphics and visualization visualization, pp. 39-47."/> 39-47</refcontent>
        <seriesInfo name="DOI" value="10.1145/1012551.1012559"/>
      </reference>

      <reference anchor="XR" target=""> target="https://portal.3gpp.org/desktopmodules/Specifications/SpecificationDetails.aspx?specificationId=3534">
        <front>
          <title>Extended Reality (XR) in 5G</title>
          <author>
            <organization>3GPP</organization>
          </author>
          <date year="2020"/>
        </front>
        <seriesInfo name="" value="https://portal.3gpp.org/desktopmodules/Specifications/SpecificationDetails.aspx?specificationId=3534"/> name="3GPP TR" value="26.928"/>
      </reference>

      <reference anchor="CLOUD" target=""> target="https://dl.acm.org/doi/10.1145/3442381.3449854">
        <front>
          <title>Surrounded by the Clouds: A Comprehensive Cloud Reachability Study</title>
          <author initials="L." surname="Corneo" fullname="Lorenzo Corneo">
            <organization/>
          </author>
          <author initials="M." surname="Eder" fullname=" Maximilian fullname="Maximilian Eder">
            <organization/>
          </author>
          <author initials="N." surname="Mohan" fullname=" Nitinder fullname="Nitinder Mohan">
            <organization/>
          </author>
          <author initials="A." surname="Zavodovski" fullname=" Aleksandr fullname="Aleksandr Zavodovski">
            <organization/>
          </author>
          <author initials="S." surname="Bayhan" fullname=" Suzan fullname="Suzan Bayhan">
            <organization/>
          </author>
          <author initials="W." surname="Wong" fullname=" Walter fullname="Walter Wong">
            <organization/>
          </author>
          <author initials="P." surname="Gunningberg" fullname=" Per fullname="Per Gunningberg">
            <organization/>
          </author>
          <author initials="J." surname="Kangasharju" fullname=" Jussi fullname="Jussi Kangasharju">
            <organization/>
          </author>
          <author initials="J." surname="Ott" fullname=" Jörg fullname="Jörg Ott">
            <organization/>
          </author>
          <date year="2021"/>
        </front>
        <seriesInfo name="In" value="Proceedings
        <refcontent>WWW '21: Proceedings of the Web Conference 2021, pp. 295-304"/> 295-304</refcontent>
        <seriesInfo name="DOI" value="10.1145/3442381.3449854"/>
      </reference>

      <reference anchor="OCCL_3" target=""> target="https://www.roadtovr.com/oculus-shares-5-key-ingredients-for-presence-in-virtual-reality/">
        <front>
          <title>Oculus Shares 5 Key Ingredients for Presence in Virtual Reality</title>
          <author initials="B." surname="Lang" fullname="Ben Lang">
            <organization/>
          </author>
          <date day="24" month="September" year="2014"/>
        </front>
        <seriesInfo name="" value="https://www.roadtovr.com/oculus-shares-5-key-ingredients-for-presence-in-virtual-reality/"/>
        <refcontent>Road to VR</refcontent>
      </reference>

      <reference anchor="PREDICT" target=""> target="https://pubmed.ncbi.nlm.nih.gov/22624290/">
        <front>
          <title>The effect of apparent latency on simulator sickness while using a see-through helmet-mounted display: reducing apparent latency with predictive compensation</title>
          <author initials="T.J." surname="Buker" fullname="Timothy J. Buker">
            <organization/>
          </author>
          <author initials="D.A." surname="Vincenzi" fullname="Dennis A. Vincenzi ">
            <organization/>
          </author>
          <author initials="J.E." surname="Deaton" fullname=" John E. Deaton">
            <organization/>
          </author>
          <date month="April" year="2012"/>
        </front>
        <seriesInfo name="In " value="Human factors 54.2,
        <refcontent>Human Factors, vol. 54, no. 2, pp. 235-249."/> 235-249</refcontent>
        <seriesInfo name="DOI" value="10.1177/0018720811428734"/>
      </reference>

      <reference anchor="URLLC" target=""> target="https://portal.3gpp.org/desktopmodules/Specifications/SpecificationDetails.aspx?specificationId=3453">
        <front>
          <title>Study on enhancement of Ultra-Reliable
          Low-Latency Communication (URLLC) support in the 5G Core
          network (5GC)</title>
          <author>
            <organization>3GPP</organization>
          </author>
          <date year="2019"/>
        </front>
        <seriesInfo name="" value="https://portal.3gpp.org/desktopmodules/Specifications/               SpecificationDetails.aspx?specificationId=3453"/> name="3GPP TR" value="23.725"/>
      </reference>

      <reference anchor="AUGMENTED" target=""> target="https://www.oreilly.com/library/view/augmented-reality-principles/9780133153217/">
        <front>
          <title>Augmented Reality: Principles and Practice</title>
          <author initials="D" surname="Schmalstieg" fullname="Dieter Schmalstieg">
            <organization/>
          </author>
          <author initials="T.H." surname="Hollerer" fullname="Dennis A.Hollerer  "> initials="T" surname="Höllerer" fullname="Tobias Höllerer">
            <organization/>
          </author>
          <date year="2016"/>
        </front>
        <seriesInfo name="" value="Addison Wesley"/>
        <refcontent>Addison-Wesley Professional</refcontent>
      </reference>

      <reference anchor="REG" target=""> target="https://direct.mit.edu/pvar/article-abstract/6/4/413/18334/Registration-Error-Analysis-for-Augmented-Reality?redirectedFrom=fulltext">
        <front>
          <title>Registration Error Analysis for Augmented Reality</title>
          <author initials="R.L." surname="Holloway" fullname="Richard L. Holloway">
            <organization/>
          </author>
          <date month="August" year="1997"/>
        </front>
        <seriesInfo name="In " value="Presence:Teleoperators
        <refcontent>Presence: Teleoperators and Virtual Environments 6.4, Environments, vol. 6, no. 4, pp. 413-432."/> 413-432</refcontent>
        <seriesInfo name="DOI" value="10.1162/pres.1997.6.4.413"/>
      </reference>

      <reference anchor="XR_TRAFFIC" target=""> target="https://ieeexplore.ieee.org/document/9158434">
        <front>
          <title>Characterization of Multi-User Augmented Reality over Cellular Networks</title>
          <author initials="K." surname="Apicharttrisorn" fullname="Kittipat Apicharttrisorn">
            <organization/>
          </author>
          <author initials="B." surname="Balasubramanian" fullname="Bharath Balasubramanian ">
            <organization/>
          </author>
          <author initials="J." surname="Chen" fullname=" Jiasi fullname="Jiasi Chen">
            <organization/>
          </author>
          <author initials="R." surname="Sivaraj" fullname=" Rajarajan fullname="Rajarajan Sivaraj">
            <organization/>
          </author>
          <author initials="Y." surname="Tsai" fullname=" Yi-Zhen fullname="Yi-Zhen Tsai">
            <organization/>
          </author>
          <author initials="R." surname="Jana" fullname="Rittwik Jana">
            <organization/>
          </author>
          <author initials="S." surname="Krishnamurthy" fullname="Srikanth Krishnamurthy">
            <organization/>
          </author>
          <author initials="T." surname="Tran" fullname="Tuyen Tran">
            <organization/>
          </author>
          <author initials="Y." surname="Zhou" fullname="Yu Zhou">
            <organization/>
          </author>
          <date year="2020"/>
        </front>
        <seriesInfo name="In " value="17th
        <refcontent>2020 17th Annual IEEE International Conference on Sensing, Communication, and Networking (SECON), pp. 1-9. IEEE"/> 1-9</refcontent>
        <seriesInfo name="DOI" value="10.1109/SECON48991.2020.9158434"/>
      </reference>

      <reference anchor="EDGE_3" target=""> target="https://link.springer.com/book/10.1007/978-3-031-79733-0">
        <front>
          <title>5G Mobile Networks: A Systems Approach</title>
          <author initials="L." surname="Peterson" fullname="Larry Peterson">
            <organization/>
          </author>
	  <author initials="O." surname="Sunay" fullname="Oguz Sunay">
            <organization/>
          </author>
          <date year="2020"/>
        </front>
        <seriesInfo name="In " value="Synthesis
        <refcontent>Synthesis Lectures on Network Systems."/> Systems</refcontent>
        <seriesInfo name="DOI" value="10.1007/978-3-031-79733-0"/>
      </reference>

      <reference anchor="AUGMENTED_2" target=""> target="https://direct.mit.edu/pvar/article-abstract/6/4/355/18336/A-Survey-of-Augmented-Reality?redirectedFrom=fulltext">
        <front>
          <title>A Survey of Augmented Reality</title>
          <author initials="R.T." surname="Azuma" fullname="Ronald T. Azuma">
            <organization/>
          </author>
          <date month="August" year="1997"/>
        </front>
        <seriesInfo name="" value="Presence:Teleoperators
        <refcontent>Presence: Teleoperators and Virtual Environments 6.4, Environments, vol. 6, no. 4, pp. 355-385."/> 355-385</refcontent>
        <seriesInfo name="DOI" value="10.1162/pres.1997.6.4.355"/>
      </reference>

      <reference anchor="METRICS_1" target=""> target="https://gsacom.com/paper/augmented-virtual-reality-first-wave-5g-killer-apps-qualcomm-abi-research/">
        <front>
          <title>Augmented and Virtual Reality: The first Wave of Killer Apps: Qualcomm - ABI Research</title>
          <author>
            <organization>ABI Research</organization>
          </author>
          <date month="April" year="2017"/>
        </front>
        <seriesInfo name="" value="https://gsacom.com/paper/augmented-virtual-reality-first-wave-5g-killer-apps-qualcomm-abi-research/"/>
      </reference>

      <reference anchor="METRICS_2" target=""> target="https://ieeexplore.ieee.org/document/392383">
        <front>
          <title>Wide area traffic: the failure of Poisson modeling</title>
          <author initials="V." surname="Paxson" fullname="Vern Paxson">
            <organization/>
          </author>
	  <author initials="S." surname="Floyd" fullname="Sally Floyd">
            <organization/>
          </author>
          <date month="June" year="1995"/>
        </front>
        <seriesInfo name="In" value="IEEE/ACM
        <refcontent>IEEE/ACM Transactions on Networking, vol. 3, no. 3, pp.  226-244."/> 226-244</refcontent>
        <seriesInfo name="DOI" value="10.1109/90.392383"/>
      </reference>

      <reference anchor="METRICS_3" target=""> target="https://ieeexplore.ieee.org/abstract/document/554723">
        <front>
          <title>Self-similarity through high variability: statistical analysis of Ethernet LAN traffic at the source level</title>
          <author initials="W." surname="Willinger" fullname="Walter Willinger">
            <organization/>
          </author>
         <author initials="M.S." surname="Taqqu" fullname="Murad S. Taqqu">
            <organization/>
          </author>
         <author initials="R." surname="Sherman" fullname="Robert Sherman">
            <organization/>
          </author>
         <author initials="D.V." surname="Wilson" fullname="Daniel V. Wilson">
            <organization/>
          </author>
          <date month="February" year="1997"/>
        </front>
        <seriesInfo name="In" value="IEEE/ACM
        <refcontent>IEEE/ACM Transactions on Networking, vol. 5, no. 1, pp.  71-86."/> 71-86</refcontent>
        <seriesInfo name="DOI" value="10.1109/90.554723"/>
      </reference>

      <reference anchor="METRICS_4" target=""> target="https://www.sciencedirect.com/science/article/pii/S1063520300903427">
        <front>
          <title>Multiscale Analysis and Data Networks</title>
          <author initials="A.C." surname="Gilbert" fullname="A.C. Gilbert">
            <organization/>
          </author>
          <date month="May" year="2001"/>
        </front>
        <seriesInfo name="In" value="Applied
        <refcontent>Applied and Computational Harmonic Analysis, vol. 10, no. 3, pp.  185-202."/> 185-202</refcontent>
        <seriesInfo name="DOI" value="10.1006/acha.2000.0342"/>
      </reference>

      <reference anchor="METRICS_5" target=""> target="https://research.google/pubs/site-reliability-engineering-how-google-runs-production-systems/">
        <front>
          <title>Site Reliability Engineering: How Google Runs Production Systems</title>
          <author initials="B." surname="Beyer" fullname="Betsy Beyer" role="editor">
            <organization/>
          </author>
         <author initials="C." surname="Jones" fullname="Chris Jones"> Jones" role="editor">
            <organization/>
          </author>
         <author initials="J." surname="Petoff" fullname="Jennifer Petoff"> Petoff" role="editor">
            <organization/>
          </author>
         <author initials="N.R." surname="Murphy" fullname="Niall Richard Murphy"> Murphy" role="editor">
            <organization/>
          </author>
          <date year="2016"/>
        </front>
        <seriesInfo name="" value="O'Reilly
        <refcontent>O'Reilly Media, Inc."/> Inc.</refcontent>
      </reference>

      <reference anchor="METRICS_6" target=""> target="https://ieeexplore.ieee.org/document/9363323">
        <front>
          <title>A Survey on Mobile Augmented Reality With 5G Mobile Edge Computing: Architectures, Applications, and Technical Aspects</title>
          <author initials="Y." surname="Siriwardhana" fullname="Yushan Siriwardhana">
            <organization/>
          </author>
         <author initials="P." surname="Porambage" fullname="Pawani Porambage">
            <organization/>
          </author>
         <author initials="M." surname="Liyanage" fullname="Madhusanka Liyanage">
            <organization/>
          </author>
         <author initials="M." surname="Ylianttila" fullname="Mika Ylianttila">
            <organization/>
          </author>
          <date year="2021"/>
        </front>
        <seriesInfo name="In" value="IEEE
        <refcontent>IEEE Communications Surveys and Tutorials, Vol vol. 23, No. 2"/> no. 2, pp. 1160-1192</refcontent>
        <seriesInfo name="DOI" value="10.1109/COMST.2021.3061981"/>
      </reference>

      <reference anchor="HEAVY_TAIL_3" target=""> target="https://www.wiley.com/en-us/A+Primer+in+Data+Reduction%3A+An+Introductory+Statistics+Textbook-p-9780471101352">
        <front>
          <title> A
          <title>A Primer in Data Reduction.</title> Reduction: An Introductory Statistics Textbook</title>
          <author initials="A." surname="Ehrenberg" fullname="A.S.C Ehrenberg ">
            <organization/>
          </author>
          <date year="1982"/>
        </front>
        <seriesInfo name="John" value="Wiley, London"/>
      </reference>

<reference anchor="RFC9023" target="https://www.rfc-editor.org/info/rfc9023">
<front>
<title>Deterministic Networking (DetNet) Data Plane: IP over IEEE 802.1 Time-Sensitive Networking (TSN)</title>
<author fullname="B. Varga" initials="B." role="editor" surname="Varga"/>
<author fullname="J. Farkas" initials="J." surname="Farkas"/>
<author fullname="A. Malis" initials="A." surname="Malis"/>
<author fullname="S. Bryant" initials="S." surname="Bryant"/>
<date month="June" year="2021"/>
<abstract>
<t>This document specifies the Deterministic Networking IP data plane when operating over a Time-Sensitive Networking (TSN) sub-network. This document does not define new procedures or processes. Whenever this document makes statements or recommendations, these are taken from normative text in the referenced RFCs.</t>
</abstract>
</front>
<seriesInfo name="RFC" value="9023"/>
<seriesInfo name="DOI" value="10.17487/RFC9023"/>
</reference>

<reference anchor="RFC8939" target="https://www.rfc-editor.org/info/rfc8939">
<front>
<title>Deterministic Networking (DetNet) Data Plane: IP</title>
<author fullname="B. Varga" initials="B." role="editor" surname="Varga"/>
<author fullname="J. Farkas" initials="J." surname="Farkas"/>
<author fullname="L. Berger" initials="L." surname="Berger"/>
<author fullname="D. Fedyk" initials="D." surname="Fedyk"/>
<author fullname="S. Bryant" initials="S." surname="Bryant"/>
<date month="November" year="2020"/>
<abstract>
<t>This document specifies the Deterministic Networking (DetNet) data plane operation for IP hosts and routers that provide DetNet service to IP-encapsulated data. No DetNet-specific encapsulation is defined to support IP flows; instead, the existing IP-layer and higher-layer protocol header information is used to support flow identification and DetNet service delivery. This document builds on the DetNet architecture (RFC 8655) and data plane framework (RFC 8938).</t>
</abstract> year="2007"/>
        </front>
<seriesInfo name="RFC" value="8939"/>
<seriesInfo name="DOI" value="10.17487/RFC8939"/>
</reference>

<reference anchor="RFC9450" target="https://www.rfc-editor.org/info/rfc9450">
<front>
<title>Reliable and Available Wireless (RAW) Use Cases</title>
<author fullname="CJ. Bernardos" initials="CJ." role="editor" surname="Bernardos"/>
<author fullname="G. Papadopoulos" initials="G." surname="Papadopoulos"/>
<author fullname="P. Thubert" initials="P." surname="Thubert"/>
<author fullname="F. Theoleyre" initials="F." surname="Theoleyre"/>
<date month="August" year="2023"/>
<abstract>
<t>The wireless medium presents significant specific challenges to achieve properties similar to those of wired deterministic networks. At the same time, a number of use cases cannot be solved with wires and justify the extra effort of going wireless. This document presents wireless use cases (such as aeronautical communications, amusement parks, industrial applications, pro audio and video, gaming, Unmanned Aerial Vehicle (UAV) and vehicle-to-vehicle (V2V) control, edge robotics, and emergency vehicles), demanding reliable
        <refcontent>John Wiley and available behavior.</t>
</abstract>
</front>
<seriesInfo name="RFC" value="9450"/>
<seriesInfo name="DOI" value="10.17487/RFC9450"/> Sons</refcontent>
      </reference>

      <xi:include href="https://bib.ietf.org/public/rfc/bibxml/reference.RFC.9023.xml"/>
      <xi:include href="https://bib.ietf.org/public/rfc/bibxml/reference.RFC.8939.xml"/>
      <xi:include href="https://bib.ietf.org/public/rfc/bibxml/reference.RFC.9450.xml"/>

      <reference anchor="DIST" target=""> target="https://dl.acm.org/doi/10.5555/2029110">
	<front>
          <title>Distributed Systems: Concepts and Design</title>
	  <author initials="G" surname="Coulouris" fullname="George Coulouris">
	    <organization/>
	  </author>
	  <author initials="J" surname="Dollimore" fullname="Jean Dollimore">
	    <organization/>
	  </author>
	  <author initials="T" surname="Kindberg" fullname="Tim Kindberg">
	    <organization/>
	  </author>
	  <author initials="G" surname="Blair" fullname="Gordon Blair">
	    <organization/>
	  </author>
	  <date year="2011"/>
	</front>
<seriesInfo name="" value="Addison Wesley"/>
	<refcontent>Addison-Wesley</refcontent>
      </reference>

      <reference anchor="NIST1" target=""> target="https://csrc.nist.gov/pubs/sp/800/146/final">
	<front>
          <title>Cloud Computing Synopsis and Recommendations</title>
          <author>
            <organization>NIST</organization>
          </author>
	  <date month="May" year="2012"/>
	</front>
        <seriesInfo name="" value="National Institute of Standards and Technology, US Department of Commerce"/> name="NIST SP" value="800-146"/>
	<seriesInfo name="DOI" value="10.6028/NIST.SP.800-146"/>
      </reference>

      <reference anchor="CWE" target=""> target="https://www.sans.org/top25-software-errors/">
	<front>
          <title>CWE/SANS TOP 25 Most Dangerous Software Errors</title>
          <author>
            <organization>SANS Institute</organization>
          </author>
          <date year="2012"/>
	</front>
<seriesInfo name="" value="Common Weakness Enumeration, SANS Institute"/>
      </reference>

      <reference anchor="NIST2" target=""> target="https://csrc.nist.gov/pubs/sp/800/123/final">
	<front>
<title> NIST SP 800-123: Guide
	  <title>Guide to General Server Security</title>
<author initials="" surname="" fullname="NIST">
<organization/>
	  <author>
	    <organization>NIST</organization>
	  </author>
	  <date month="July" year="2008"/>
	</front>
	<seriesInfo name="" value="National Institute of Standards and Technology, US Department of Commerce"/>
</reference>

	<reference anchor="RFC2210" target="https://www.rfc-editor.org/info/rfc2210">
<front>
<title>The Use of RSVP with IETF Integrated Services</title>
<author fullname="J. Wroclawski" initials="J." surname="Wroclawski"/>
<date month="September" year="1997"/>
<abstract>
<t>This note describes the use of the RSVP resource reservation protocol with the Controlled-Load and Guaranteed QoS control services. [STANDARDS-TRACK]</t>
</abstract>
</front>
<seriesInfo name="RFC" value="2210"/> name="NIST SP" value="800-123"/>
        <seriesInfo name="DOI" value="10.17487/RFC2210"/>
</reference>

<reference anchor="RFC2475" target="https://www.rfc-editor.org/info/rfc2475">
<front>
<title>An Architecture for Differentiated Services</title>
<author fullname="S. Blake" initials="S." surname="Blake"/>
<author fullname="D. Black" initials="D." surname="Black"/>
<author fullname="M. Carlson" initials="M." surname="Carlson"/>
<author fullname="E. Davies" initials="E." surname="Davies"/>
<author fullname="Z. Wang" initials="Z." surname="Wang"/>
<author fullname="W. Weiss" initials="W." surname="Weiss"/>
<date month="December" year="1998"/>
<abstract>
<t>This document defines an architecture for implementing scalable service differentiation in the Internet. This memo provides information for the Internet community.</t>
</abstract>
</front>
<seriesInfo name="RFC" value="2475"/>
<seriesInfo name="DOI" value="10.17487/RFC2475"/> value="10.6028/NIST.SP.800-123"/>
      </reference>

	<xi:include href="https://bib.ietf.org/public/rfc/bibxml/reference.RFC.2210.xml"/>
	<xi:include href="https://bib.ietf.org/public/rfc/bibxml/reference.RFC.2475.xml"/>

      </references>

       <section anchor="ack" numbered="false" toc="default">
        <name>Acknowledgements</name>
        <t>Many thanks to <contact fullname="Spencer Dawkins"/>, <contact
        fullname="Rohit Abhishek"/>, <contact fullname="Jake Holland"/>,
        <contact fullname="Kiran Makhijani"/>, <contact fullname="Ali
        Begen"/>, <contact fullname="Cullen Jennings"/>, <contact
        fullname="Stephan Wenger"/>, <contact fullname="Eric Vyncke"/>,
        <contact fullname="Wesley Eddy"/>, <contact fullname="Paul Kyzivat"/>,
        <contact fullname="Jim Guichard"/>, <contact fullname="Roman
        Danyliw"/>, <contact fullname="Warren Kumari"/>, and <contact
        fullname="Zaheduzzaman Sarker"/> for providing helpful feedback,
        suggestions, and comments.</t>
      </section>

  </back>
</rfc>