Internet Congestion Control Research Group                   L. Han, Ed.
Internet-Draft                                       Huawei Technologies
Intended status: Informational                                  K. Smith
Expires: September 13, 2017                                     Vodafone
                                                          March 12, 2017

 Problem Statement: Transport Support for Augmented and Virtual Reality
                               Applications
                draft-han-iccrg-arvr-transport-problem-01

Abstract

   As emerging technologies, Augmented Reality (AR) and Virtual Reality
   (VR) pose many challenges to technologies such as information
   display, image processing, fast computing and networking.  This
   document analyzes the requirements that AR and VR place on
   networking, and especially on the transport protocol.

Status of This Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF).  Note that other groups may also distribute
   working documents as Internet-Drafts.  The list of current Internet-
   Drafts is at http://datatracker.ietf.org/drafts/current/.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."

   This Internet-Draft will expire on September 13, 2017.

Copyright Notice

   Copyright (c) 2017 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document.  Code Components extracted from this
   document must include Simplified BSD License text as described in
   Section 4.e of the Trust Legal Provisions and are provided without
   warranty as described in the Simplified BSD License.

Table of Contents

   1.  Introduction
     1.1.  Scope
   2.  Terminology
     2.1.  Definitions
   3.  Problem Statement
   4.  IANA Considerations
   5.  Security Considerations
   6.  Acknowledgements
   7.  References
     7.1.  Normative References
     7.2.  Informative References
   Appendix A.  Key Factors for Network-Based AR/VR
     A.1.  Latency Requirements
       A.1.1.  Motion to Photon (MTP) Latency
       A.1.2.  Latency Budget
     A.2.  Throughput Requirements
       A.2.1.  Average Throughput
       A.2.2.  Peak Throughput
   Authors' Addresses

1.  Introduction

   Virtual Reality (VR) and Augmented Reality (AR) technologies have
   enormous potential in many different fields, such as entertainment,
   remote diagnosis and remote maintenance.  AR and VR applications aim
   to make users perceive that they are physically present in a non-
   physical or partly non-physical world.  However, slightly
   unrealistic artefacts not only distract from the sense of immersion;
   they can also cause 'VR sickness' [VR-Sickness] by confusing the
   brain whenever information about the virtual environment is good
   enough to be believable but not wholly consistent.

   This document is based on the assumption and prediction that today's
   localized AR/VR will inevitably evolve into cloud-based AR/VR, since
   cloud processing and state will be able to supplement local AR/VR
   devices, helping to reduce their size and power consumption, and
   providing much more content and flexibility to AR/VR applications.

   Sufficient realism requires both very low latency and a very high
   information rate.  In addition, the information rate varies
   significantly and can include large bursts.  This problem statement
   aims to quantify these requirements, which are largely driven by the
   video component of the transmission.  The ambition is to improve
   Internet technology so that AR/VR applications can create the
   impression of remote presence over longer distances.

   The goal is for the Internet to be able to routinely satisfy these
   demanding requirements in 5-10 years.  Then it will become feasible
   to launch many new applications, using AR/VR technology in various
   arrangements as a new platform over the Internet.  A 5-10-year
   horizon is considered appropriate, given that it can take 1-2 years
   to socialize a grand challenge in the IRTF/IETF, then 2-3 years for
   standards documents to be drafted and taken through the RFC process.
   The technology itself will also take a few years to develop and
   deploy.  That is likely to run partly in parallel to
   standardization, so the IETF will need to be ready to intervene
   wherever interoperability is necessary.
1.1.  Scope

   This document is aimed at the transport area research community.
   However, initially, advances at other layers are likely to make the
   greatest inroads into the problem, for example:

   o  Network architecture: keeping the physical distance between the
      AR/VR content cloud and users short enough to limit the latency
      caused by propagation delay in physical media

   o  Motion sensors: reduction in latency for range of interest (RoI)
      detection

   o  Sending app: better targeted degradation of quality below the
      threshold of human perception, e.g. outside the range of interest

   o  Sending app: better coding and compression algorithms

   o  Access network: multiplexing bursts further down the layers and
      therefore between more users, e.g. traffic-dependent scheduling
      between layer-2 flows rather than layer-3 flows

   o  Core network: providing sufficient core capacity to support
      transport of AR/VR traffic across different service providers

   o  Receiving app: better decoding and prediction algorithms

   o  Head-mounted displays (HMDs): reducing display latency

   The initial aim is to state the problem in terms of raw information
   rates and delays.  This initial draft can then form the basis of
   discussions with experts in other fields, to quantify how much of
   the problem they are likely to be able to remove.  Subsequent drafts
   can then better quantify the size of the remaining transport
   problem.

   This document focuses on unicast-based AR/VR, which covers a wide
   range of applications, such as VR gaming, shopping and surgery.
   Broadcast/multicast-based AR/VR is outside the scope of this
   document; it is likely to need more supporting technology, such as
   multicast, caching and edge computing.  Broadcast/multicast-based
   AR/VR is for live or multi-user events, such as sports broadcasts
   or online education.  The idea is to use panoramic streaming
   technologies so that users can dynamically select different
   viewpoints and angles to become immersed in different real-time
   video streams.

   Our intention is not to promote enhancement of the Internet
   specifically for AR/VR applications.  Rather, AR/VR is selected as a
   concrete example that encompasses a fairly wide set of applications.
   It is expected that an Internet that can support AR/VR will be able
   to support other applications requiring both high throughput and low
   latency, such as interactive video.  It should also be able to
   support applications with more demanding latency requirements, but
   perhaps only over shorter distances.  For instance, low latency is
   needed for vehicle-to-everything (V2X) communication, for example
   between vehicles on roads, or between vehicles and remote cloud
   computing.  Tactile communication has very demanding latency needs,
   perhaps as low as 1 ms.

2.  Terminology

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
   document are to be interpreted as described in RFC 2119 [RFC2119].

2.1.  Definitions

   E2E
      End-to-end

   HMD
      Head-Mounted Display or Device

   AR
      Augmented Reality (AR) is a live direct or indirect view of a
      physical, real-world environment whose elements are augmented
      (or supplemented) by computer-generated sensory input such as
      sound, video, graphics or GPS data.
      It is related to the more
      general concept of mediated reality, in which a view of reality
      is modified (possibly even diminished rather than augmented) by a
      computer.

   VR
      Virtual Reality (VR) is a computer technology that uses software-
      generated realistic images, sounds and other sensations to
      replicate a real environment or an imaginary setting, and
      simulates a user's physical presence in this environment to
      enable the user to interact with this space.

   FOV
      Field of View is the extent of the world that is visible without
      eye movement, measured in degrees of visual angle in the vertical
      and horizontal planes.

   Panorama
      A panorama is any wide-angle view or representation of a physical
      space, whether in painting, drawing, photography, film, seismic
      images or a three-dimensional model.

   360 degree video
      360-degree videos, also known as immersive videos or spherical
      videos, are video recordings where a view in every direction is
      recorded at the same time, shot using an omnidirectional camera
      or a collection of cameras.  Most 360-degree video is monoscopic
      (2D), meaning that it is viewed as a single (360x180
      equirectangular) image directed to both eyes.  Stereoscopic video
      (3D) is viewed as two distinct (360x180 equirectangular) images
      directed individually to each eye.  360-degree videos are
      typically viewed via personal computers, mobile devices such as
      smartphones, or dedicated HMDs.

   MTP and MTP Latency
      Motion-To-Photon.  Motion-to-Photon latency is the time needed
      for a user movement to be fully reflected on a display screen
      [MTP-Latency].

   Unmanaged
      For the purpose of this document, if an unmanaged Internet
      service supports AR/VR applications, it means that basic
      connectivity provides sufficient support without requiring the
      application or user to separately request any additional service,
      even as a once-off request.

3.  Problem Statement

   Network-based AR/VR applications need both low latency and high
   throughput.  We shall see that the ratio of peak to mean bit rate
   makes it challenging to hit both targets.  To satisfy extreme delay
   and throughput requirements as a niche service for a few special
   users would probably be possible but challenging.  This document
   envisages an even more challenging scenario: supporting AR/VR usage
   as a routine service for the mass market in the future.  This would
   either need the regular unmanaged Internet service to support both
   low latency and high throughput, or it would need managed Internet
   services to be so simple to activate that they would be universally
   accessible.

   Each element of the above requirements is expanded and quantified
   briefly below.  The figures used are justified in depth in
   Appendix A.

   MTP Latency:  AR/VR developers generally agree that MTP latency
      becomes imperceptible below about 20 ms [Carmack13].  However,
      some research has concluded that MTP latency MUST be less than
      17 ms for sensitive users [MTP-Latency-NASA].  Experience has
      shown that standards bodies tend to set demanding quality levels,
      while motivated humans often happily adapt to lower quality,
      although they then struggle with more demanding tasks.
      Therefore, we MUST be clear that this 20 ms requirement is
      designed to enable immersive interaction for the same wide range
      of tasks that people are used to undertaking locally.

   Latency Budget:  If the only component of delay were the speed of
      light, a 20 ms round trip would limit the physical distance
      between the communicating parties to 3,000 km in air or 2,000 km
      in glass.  We cannot expand the physical scope of an AR/VR
      application beyond this speed-of-light limit.  However, we can
      ensure that application processing and transport-related delays
      do not significantly reduce this limited scope.  As a rule of
      thumb they should consume no more than 5-10% (1-2 ms) of this
      20 ms budget, and preferably less.  See Appendix A.1 for the
      derivation of these latency requirements.
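      As an illustration, the distance limit and the transport share of
      the budget above can be derived with a minimal sketch (Python;
      the propagation speeds of 300 km/ms in air and 200 km/ms in glass
      are the usual rules of thumb, see also [Fiber-Light-Speed]):

         # Distance limit implied by a 20 ms round-trip MTP budget.
         MTP_BUDGET_MS = 20.0
         SPEED_AIR_KM_PER_MS = 300.0    # ~speed of light in free space
         SPEED_GLASS_KM_PER_MS = 200.0  # ~speed of light in fiber

         def max_distance_km(speed_km_per_ms, rtt_ms=MTP_BUDGET_MS):
             # The signal must cover the distance twice per budget.
             return speed_km_per_ms * rtt_ms / 2

         print(max_distance_km(SPEED_AIR_KM_PER_MS))    # 3000.0 km
         print(max_distance_km(SPEED_GLASS_KM_PER_MS))  # 2000.0 km

         # Rule of thumb above: transport-related delays should take
         # no more than 5-10% of the budget.
         print(MTP_BUDGET_MS * 0.05, MTP_BUDGET_MS * 0.10)  # 1.0 2.0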
   +------------+-------------+----------+-------------+-------------+
   |            | Entry-level | Advanced | Ultimate 2D | Ultimate 3D |
   +------------+-------------+----------+-------------+-------------+
   | Video Type | 4K 2D       | 12K 2D   | 24K 2D      | 24K 3D      |
   |            |             |          |             |             |
   | Mean bit   | 22 Mb/s     | 400 Mb/s | 2.9 Gb/s    | 3.3 Gb/s    |
   | rate       |             |          |             |             |
   | Peak bit   | 130 Mb/s    | 1.9 Gb/s | 29 Gb/s     | 38 Gb/s     |
   | rate       |             |          |             |             |
   | Burst time | 33 ms       | 17 ms    | 8 ms        | 8 ms        |
   +------------+-------------+----------+-------------+-------------+

     Table 1: Raw information rate requirements for various levels of
                          AR/VR (YUV420, H.265)

   Raw information rate:  Table 1 summarizes the mean and peak raw
      information rates for four types of H.265 video.  Not only does
      the raw information rate rise to very demanding levels, even for
      12K 'Advanced' AR/VR, but the ratio of peak to mean also
      increases, from about 6 for 'Entry-level' AR/VR to nearly 12 for
      'Ultimate 3D' AR/VR.  See Appendix A.2 for more details and the
      derivation of these rate requirements.

   Buffer constraint:  It would be extremely inefficient (and therefore
      costly) to provide sufficient capacity for the bursts.  If the
      latency constraint were not so tight, it would be more efficient
      to provide less capacity than the peak rate and buffer the bursts
      (in the network and/or the hosts).  However, if capacity were
      only provided for 1/k of the peak bit rate, play-out would be
      delayed by (k-1) times the burst time.  For instance, if a 1 Gb/s
      link were provided for 'Advanced' AR/VR, k = 1.9.  Then play-out
      would be delayed by (1.9 - 1) * 17 ms = 15 ms.  This would
      consume 75% of our 20 ms delay budget.  Therefore, it seems that
      capacity sufficient for the peak rate will be needed, with no
      buffering.  We then have to rely on application-layer innovation
      to reduce the peak bit rate.
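      The buffering trade-off above can be checked with a short sketch
      (Python; the 1 Gb/s link is the hypothetical example from the
      text):

         # Play-out delay when capacity is 1/k of the peak bit rate.
         peak_rate_gbps = 1.9   # 'Advanced' peak bit rate (Table 1)
         link_rate_gbps = 1.0   # assumed provisioned capacity
         burst_time_ms = 17.0   # 'Advanced' burst time (Table 1)

         k = peak_rate_gbps / link_rate_gbps          # 1.9
         playout_delay_ms = (k - 1) * burst_time_ms   # ~15.3 ms

         budget_ms = 20.0
         print(playout_delay_ms, playout_delay_ms / budget_ms)
         # ~15 ms, i.e. ~75% of the MTP budget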
   Simultaneous bursts:  One way to deal with such a high peak-to-mean
      ratio would be to multiplex multiple AR/VR sessions within the
      same capacity.  This problem statement assumes that the bursts
      are not correlated at the application layer.  Then the
      probability that most sessions burst simultaneously would become
      tiny.  This would be useful for the high degree of statistical
      multiplexing in a core network, but it would be less useful in
      access networks, which are where the bottleneck usually is, and
      where the number of AR/VR sessions in the same bottleneck might
      often be close to 1.  Of course, if the bursts are correlated
      between different users, there will be no multiplexing gain.

   Problems with Unmanaged TCP Service:  An unmanaged TCP solution
      would probably use some derivative of TCP congestion control
      [RFC5681] to adapt to the available capacity.  The following
      problems with TCP congestion control would have to be solved:

      Transmission loss and throughput:  TCP algorithms collectively
         induce a low level of loss, and the lower the loss, the faster
         they go.  TCP throughput is used to measure this performance.
         Whatever TCP algorithm is used, TCP throughput is always
         capped by parameters such as the RTT and the packet loss
         ratio.  Importantly, TCP throughput is always lower than the
         physical link capacity.  So, for a single flow to attain the
         bit rates shown in Table 1 requires a loss probability so low
         that it could be physically limited by the bit-error
         probability experienced over optical fiber links.  The
         analysis in [I-D.ietf-tcpm-cubic] collects data for different
         TCP throughputs and the corresponding packet loss ratios.
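         To illustrate how loss caps throughput, the classic Reno-style
         approximation (rate ~= (MSS/RTT) * sqrt(3/(2p)); an assumption
         introduced here only for illustration, not part of the
         referenced analysis) can be inverted to estimate the loss
         ratio that a target rate allows (Python):

            def loss_for_rate(rate_bps, mss_bytes=1460, rtt_s=0.02):
                # p = (3/2) * (MSS / (RTT * rate))^2
                mss_bits = mss_bytes * 8
                return 1.5 * (mss_bits / (rtt_s * rate_bps)) ** 2

            print(loss_for_rate(400e6))  # ~3e-6 for 'Advanced' mean
            print(loss_for_rate(2.9e9))  # ~6e-8 for 'Ultimate 2D'
            # For comparison, a fiber bit-error rate of 1e-12 corrupts
            # a ~12000-bit packet with probability ~1e-8.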
      Flow-rate equality:

         Host-Controlled:  TCP ensures rough equality between L4 flow
            rates as a simple way to ensure that no individual flow is
            starved when others are not [RFC5290].  Consider a scenario
            where one user has a dedicated 2 Gb/s access line and is
            running an AR/VR application that needs a minimum of
            400 Mb/s.  If the AR/VR app used TCP, it would fail
            whenever the user (or their family) happened to start more
            than 4 other long-running TCP flows at once, e.g. FTP
            flows.  This simple example shows that flow-rate equality
            will probably need to be relaxed to enable support for
            AR/VR as part of the regular unmanaged Internet service.
            Fortunately, when there is enough capacity for one flow to
            get 400 Mb/s, every flow does not have to get 400 Mb/s to
            ensure that no one starves.  This line of logic could
            allow flow-rate equality to be relaxed in transport
            protocols like TCP.

         Network-Enforced:  However, if parts of the network were
            enforcing flow-rate equality, relaxing it would be much
            more difficult.  For instance, deployment of the per-flow
            queuing scheduler in fq_CoDel [I-D.ietf-aqm-fq-codel] would
            introduce this problem.
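         The access-line example above reduces to simple equal-share
         arithmetic, sketched here (Python) under the idealized
         assumption that long-running flows converge to equal rates:

            LINK_MBPS = 2000.0      # dedicated access line
            ARVR_MIN_MBPS = 400.0   # minimum AR/VR requirement

            for other_flows in range(7):
                share = LINK_MBPS / (1 + other_flows)
                status = "OK" if share >= ARVR_MIN_MBPS else "starved"
                print(other_flows, round(share), status)
            # The AR/VR flow drops below 400 Mb/s as soon as more than
            # 4 other long flows share the line.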
      Dynamics:  The bursts shown in Table 1 would be problematic for
         TCP.  It is hard for the throughput of one TCP flow to jump by
         an order of magnitude for one or two round trips, and even
         harder for other TCP flows to yield over the same time-scale
         without considerable queuing delay and/or loss.

   Problems with Unmanaged UDP Service:  Using UDP as the transport
      cannot solve the problems faced by TCP.  Fundamentally, an IP
      network can only provide a best-effort service, whether the
      transport on top of IP is TCP or UDP.  This is because most
      network devices use some variation of a "fair queuing" algorithm
      to queue IP flows, without awareness of the TCP or UDP protocol
      above.  As long as a fair queuing algorithm is used, a UDP flow
      cannot obtain more bandwidth or lower latency than other flows.
      However, using UDP may reduce the burden of retransmitting lost
      packets, if a lost packet is not critical (such as a non-I-frame)
      or has already passed its useful lifetime.  Depending on whether
      it has its own congestion control, current UDP usage falls into
      two types:

      UDP with congestion control:  QUIC is a typical UDP-based service
         with congestion control.  The congestion control algorithm
         used in QUIC is similar to TCP CUBIC, which makes QUIC behave
         similarly to TCP CUBIC.  There is no fundamental difference
         from an unmanaged TCP service in terms of fairness,
         convergence, bandwidth utilization, etc.

      UDP without congestion control:  If UDP is used as the transport
         without additional congestion control, it will be weaker than
         UDP with congestion control at supporting AR/VR applications
         with high throughput and low latency requirements.

   Problems with Managed Service:  As well as the common problems
      outlined above, such as simultaneous bursts, the management and
      policy aspects of a managed QoS solution are problematic:

      Complex provisioning:  Currently, QoS services are not
         straightforward to enable, which would make routine widespread
         support of AR/VR unlikely.  It has proved particularly hard to
         standardize how managed QoS services are enabled across host-
         network and inter-domain interfaces.

      Universality:  For AR/VR support to become widespread and
         routine, control of QoS provision would need to comply with
         the relevant Net Neutrality [NET_Neutrality_ISOC] legislation
         appropriate to the jurisdictions covering each part of the
         network path.

4.  IANA Considerations

   This document makes no request of IANA.

5.  Security Considerations

   This document introduces no new security issues.

6.  Acknowledgements

   Special thanks to Bob Briscoe, who gave a great deal of advice and
   many comments during the study and writing of this draft, and who
   also extensively revised the final draft.

   We would like to thank Kjetil Raaen and Steve Appleby for comments
   on early drafts of this work.

   We also thank Huawei's research team led by Lei Han, Feng Li and Yue
   Yin for providing the prospective analysis, and Guoping Li, Boyan
   Tu, Xuefei Tang and Tao Ma from Huawei for their involvement in the
   discussion of this work.

   Lastly, we want to thank Huawei's Information Lab, whose research
   results provided some of the basic AR/VR data.

7.  References

7.1.  Normative References

   [RFC2119]  Bradner, S., "Key words for use in RFCs to Indicate
              Requirement Levels", BCP 14, RFC 2119,
              DOI 10.17487/RFC2119, March 1997.

7.2.  Informative References

   [Carmack13]
              Carmack, J., "Latency Mitigation Strategies", February
              2013.

   [Chroma]   Wikipedia, "Chroma subsampling", 2016.

   [Fiber-Light-Speed]
              Miller, K., "Calculating Optical Fiber Latency", 2012.

   [GOP]      Wikipedia, "Group of pictures", 2016.

   [H264_Primer]
              Adobe, "H.264 Primer", 2016.

   [I-D.ietf-aqm-fq-codel]
              Hoeiland-Joergensen, T., McKenney, P., Taht, D., Gettys,
              J., and E. Dumazet, "The FlowQueue-CoDel Packet Scheduler
              and Active Queue Management Algorithm", draft-ietf-aqm-
              fq-codel-06 (work in progress), March 2016.

   [I-D.ietf-tcpm-cubic]
              Rhee, I., Xu, L., Ha, S., Zimmermann, A., Eggert, L., and
              R. Scheffenegger, "CUBIC for Fast Long-Distance
              Networks", draft-ietf-tcpm-cubic-04 (work in progress),
              February 2017.

   [MTP-Latency]
              Kostov, G., "Fostering Player Collaboration Within a
              Multimodal Co-Located Game", Masters Thesis, University
              of Applied Sciences Upper Austria, September 2015.

   [MTP-Latency-NASA]
              Adelstein, B., et al., NASA Ames Research Center, "Head
              Tracking Latency in Virtual Environments: Psychophysics
              and a Model", 2003.

   [NET_Neutrality_ISOC]
              Internet Society, "Network Neutrality, An Internet
              Society Public Policy Briefing", 2015.

   [PSNR]     Wikipedia, "Peak signal-to-noise ratio", 2016.

   [Raaen16]  Raaen, K., "Response time in games: requirements and
              improvements", PhD Thesis, University of Oslo, February
              2016.

   [RFC5290]  Floyd, S. and M. Allman, "Comments on the Usefulness of
              Simple Best-Effort Traffic", RFC 5290,
              DOI 10.17487/RFC5290, July 2008.

   [RFC5681]  Allman, M., Paxson, V., and E. Blanton, "TCP Congestion
              Control", RFC 5681, DOI 10.17487/RFC5681, September 2009.

   [VR-Sickness]
              Wikipedia, "Virtual reality sickness", 2016.

   [YUV]      Wikipedia, "YUV", 2016.

Appendix A.  Key Factors for Network-Based AR/VR

A.1.  Latency Requirements

A.1.1.  Motion to Photon (MTP) Latency

   Latency is the most important quality parameter of AR/VR
   applications.  With streaming video, caching technology located
   closer to the user can reduce speed-of-light delays.  In contrast,
   with AR/VR, user actions are interactive and rarely predictable.  At
   any time, a user can turn the HMD to any angle or take any other
   action in response to virtual reality events.

   AR/VR developers generally agree that MTP latency becomes
   imperceptible below about 20 ms [Carmack13].  However, some research
   has concluded that MTP latency MUST be less than 17 ms for sensitive
   users [MTP-Latency-NASA].  For a summary of numerous references
   concerning the limit of human perception of delay, see the thesis of
   Raaen [Raaen16].

   Latency greater than 20 ms not only degrades the visual experience,
   but also tends to result in virtual reality sickness [VR-Sickness].
   Also known as cybersickness, this can cause symptoms similar to
   motion sickness or simulator sickness, such as general discomfort,
   headache, nausea, vomiting, disorientation, etc.

   Sensory conflict theory holds that sickness can occur when a user's
   perception of self-motion is based on inconsistent sensory inputs
   between the visual system, vestibular (balance) system, and non-
   vestibular proprioceptors (muscle spindles), particularly when these
   inputs are at odds with the user's expectations from prior
   experience.  Sickness can be minimized by keeping MTP latency below
   the threshold at which humans can detect the lag between visual
   input and self-motion.

   The best localized AR/VR systems have significantly improved the
   speed of sensor detection, display refresh, and GPU processing in
   their head-mounted displays (HMDs), bringing MTP latency below 20 ms
   for localized AR/VR.  However, research on network-based AR/VR has
   only just started.
A.1.2.  Latency Budget

   Figure 1 illustrates the main components of E2E delay in network-
   based AR/VR.

      +------+            +------+             +------+
      |  T1  |----------->|  T4  |------------>|  T2  |
      +------+            +------+             +------+
                                                   |
                                                   |
                                                   |
      +------+                                     |
      |  T6  |                                     |
      +------+                                     |
         ^                                         |
         |                                         |
         |                                         v
      +------+            +------+             +------+
      |  T5  |<-----------|  T4  |<------------|  T3  |
      +------+            +------+             +------+

      T1: Sensor detection and action capture
      T2: Computation for RoI (range of interest) processing,
          rendering and encoding
      T3: GOP (group of pictures) framing and streaming
      T4: Network transport
      T5: Terminal decoding
      T6: Screen refresh

    Figure 1: The main components of E2E delay in network-based AR/VR

   Table 2 shows approximate current values and projected values for
   each component of E2E delay, based on likely technology advances in
   hardware and software.

   The current network transport latency comprises the physical
   propagation delay and the switching/forwarding delay at each network
   device.

   1.  Physical propagation delay: the delay caused by the finite speed
   of signals travelling through physical media.  Taking fiber as an
   example, optical transmission cannot exceed the speed of light, or
   300 km/ms in free space.  However, light moving through a fiber-
   optic core travels more slowly than light through a vacuum, because
   of the difference between the refractive index of free space and
   that of glass.  In normal optical fiber, the speed of light is about
   200 km/ms [Fiber-Light-Speed].

   2.  Switching/forwarding delay: this is normally much larger than
   the physical propagation delay, and can vary from 200 us to 200 ms
   at each hop.

   +---------+--------------------+----------------------+
   | Latency | Current value (ms) | Projected value (ms) |
   +---------+--------------------+----------------------+
   | T1      | 1                  | 1                    |
   | T2      | 11                 | 2                    |
   | T3      | 110 to 1000        | 5                    |
   | T4      | 0.2 to 100         | ?                    |
   | T5      | 5                  | 5                    |
   | T6      | 1                  | 0.01                 |
   |         |                    |                      |
   | MTP     | 130 to 1118        | 13 + ?               |
   +---------+--------------------+----------------------+

   MTP = T1+T2+T3+T4+T5+T6

      Table 2: Current and projected latency of key stages in network-
                                based AR/VR

   We can see that MTP latency is currently much greater than 20 ms.

   If we project that technology development will bring down the
   latency in some areas, for example dramatically reducing the latency
   of GOP framing and streaming to 5 ms by using improved parallel
   hardware processing, and reducing display response (refresh) time to
   0.01 ms by using OLED, then the budget for round-trip network
   transport latency will be about 5 to 7 ms.

   This budget will be consumed by propagation delay, switching delay
   and queuing delay.  We can conclude:

   1.  The physical distance between the user and the AR/VR server is
   limited and MUST be less than 1000 km.  So, the AR/VR server SHOULD
   be deployed as close to the user as possible.

   2.  The total delay budget for network devices will be a low single
   digit of milliseconds; i.e. if the distance between the user and the
   AR/VR server is 600 km, then the accumulated maximum round-trip
   delay allowed for all network devices is about 2 to 4 ms.  This is
   equivalent to a delay of 1 to 2 ms in one direction across all
   network devices on the path.
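   The budget arithmetic behind Table 2 can be sketched as follows
   (Python; illustrative only, using the projected values above):

      MTP_BUDGET_MS = 20.0
      projected_ms = {"T1": 1, "T2": 2, "T3": 5, "T5": 5, "T6": 0.01}

      # Round-trip budget left for network transport (T4):
      t4_budget_ms = MTP_BUDGET_MS - sum(projected_ms.values())
      print(round(t4_budget_ms, 2))   # ~7 ms

      # If propagation in fiber (~200 km/ms) consumed the whole T4
      # budget, the one-way distance limit would be:
      print(200 * t4_budget_ms / 2)   # ~700 km, hence "deploy the
                                      # server close to the user"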
A.2.  Throughput Requirements

   The network bandwidth required for AR/VR is the actual TCP
   throughput required by the application, if the AR/VR stream is
   transported over TCP.  It is another critical parameter for the
   quality of an AR/VR application.

   The AR/VR network bandwidth depends on the raw streaming data rate,
   i.e. the bit rate of the video stream.

A.2.1.  Average Throughput

   The average network bandwidth for AR/VR is the average bit rate of
   the AR/VR video.

   For an AR/VR video stream, many parameters can impact the bit rate,
   such as display resolution, 2D or 3D, normal or panorama view, the
   codec type used for video processing, the color space and sampling
   algorithm, the video pattern, etc.

   Normally, the bit rate for 3D is approximately 1.5 times that of 2D,
   and the bit rate for a panorama view is about 4 times that of a
   normal view.

   The latest codecs for high-resolution video are H.264 and H.265,
   which have very high compression ratios.

   The color space and sampling used in modern video streaming are the
   YUV system [YUV] and chroma subsampling [Chroma].

   YUV encodes a color image or video taking human perception into
   account, allowing reduced bandwidth for chrominance components,
   thereby typically enabling transmission errors or compression
   artifacts to be more efficiently masked by human perception than
   with a "direct" RGB representation.

   Chroma subsampling is the practice of encoding images with less
   resolution for chroma information than for luma information, taking
   advantage of the human visual system's lower acuity for color
   differences than for luminance.

   There are different sampling systems, depending on the ratio between
   the samples for the different components, such as Y'CbCr 4:1:1,
   Y'CbCr 4:2:0, Y'CbCr 4:2:2, Y'CbCr 4:4:4 and Y'CbCr 4:4:0.  The most
   widely used sampling method is Y'CbCr 4:2:0, often called YUV420
   (note: the similar sampling scheme for analog encoding is called
   Y'UV).

   The video pattern, or motion rank, also impacts the stream bit rate:
   the more frequently the video frames change, the less compression
   will be obtained.

   A compressed video stream consists of an ordered succession of
   groups of pictures, or GOPs [GOP].  There are three types of
   pictures (or frames) used in video compression such as H.264:
   intra-coded pictures (I-frames), predictive-coded pictures
   (P-frames) and bipredictive-coded pictures (B-frames) [GOP].

   An I-frame is in effect a fully specified picture, like a
   conventional static image file.  P-frames and B-frames hold only
   part of the image information, so they need less space to store than
   an I-frame and thus improve video compression rates.  A P-frame
   holds only the changes in the image from the previous frame;
   P-frames are also known as delta-frames.  A B-frame saves even more
   space by using differences between the current frame and both the
   preceding and following frames to specify its content.

   A typical video stream has a sequence of GOPs with a pattern, for
   example IBBPBBPBBPBB or IBBBBPBBBBPBBBB.

   The real bit rate also depends on the image quality the user would
   like to view.  The peak signal-to-noise ratio, or PSNR [PSNR],
   denotes the quality of an image.  The higher the PSNR, the better
   the quality of the image, and the higher the bit rate.

   Since humans can only distinguish image quality differences up to a
   certain level, it would be efficient for the network to provide
   images with the minimum PSNR at which human perception cannot
   distinguish them from images with a higher PSNR.  Unfortunately,
   this is still a research topic, and there is no fixed minimum PSNR
   that applies to all people.

   So, there is no exact formula for the bit rate; however, there are
   experimental formulas that give a rough estimate of the bit rate for
   different parameters.

   Formula (1) is from the H.264 Primer [H264_Primer]:

      Information rate = W * H * FPS * Rank * 0.07            (1)

   where:
      W:    Number of pixels in the horizontal direction
      H:    Number of pixels in the vertical direction
      FPS:  Frames per second
      Rank: Motion rank, which can be:
            1: Low motion: video with minimal movement
            2: Medium motion: video with some degree of movement
            4: High motion: video with a lot of movement, where the
               movement is unpredictable
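   For illustration, formula (1) transcribes directly into a short
   sketch (Python; the 4K, high-motion example values are our
   assumptions, not taken from [H264_Primer]):

      def h264_rate_bps(w, h, fps, rank):
          # Rule-of-thumb H.264 bit rate from formula (1).
          return w * h * fps * rank * 0.07

      # e.g. a 3840x1920 high-motion (rank 4) stream at 30 FPS:
      print(h264_rate_bps(3840, 1920, 30, 4) / 1e6)   # ~62 Mb/s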
   The four formulae tagged (2) below are more generic, with more
   parameters, for calculating approximate information rates:

      Average information rate = T * W * H * S * d * FPS / Cv  )
      I-frame information rate = T * W * H * S * d * FPS / Cj  )
      Burst size               = T * W * H * S * d / Cj        )  (2)
      Burst time               = 1 / FPS                       )

   where:
      T:    Type of video: 1 for 2D, 2 for 3D
      W:    Number of pixels in the horizontal direction
      H:    Number of pixels in the vertical direction
      S:    Scale factor, which can be:
            1   for YUV400
            1.5 for YUV420
            2   for YUV422
            3   for YUV444
      d:    Color depth in bits
      FPS:  Frames per second
      Cv:   Average compression ratio for video
      Cj:   Compression ratio for an I-frame
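   As a cross-check, formula (2) can be applied to the 'Entry-level VR'
   column of Table 3 below; this sketch (Python) reproduces the mean
   rate, peak rate, burst size and burst time shown there:

      def rates(T, W, H, S, d, FPS, Cv, Cj):
          # Raw (uncompressed) size of one frame, in bits:
          raw_frame_bits = T * W * H * S * d
          return {
              "mean_bps":     raw_frame_bits * FPS / Cv,
              "peak_bps":     raw_frame_bits * FPS / Cj,
              "burst_bytes":  raw_frame_bits / Cj / 8,  # one I-frame
              "burst_time_s": 1.0 / FPS,
          }

      r = rates(T=1, W=3840, H=1920, S=1.5, d=8, FPS=30, Cv=120, Cj=20)
      print(r["mean_bps"] / 1e6)     # ~22 Mb/s
      print(r["peak_bps"] / 1e6)     # ~133 Mb/s
      print(r["burst_bytes"] / 1e3)  # ~553 KB
      print(r["burst_time_s"])       # ~0.033 s (33 ms)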
But if the 820 HMD has its own video decoder, powerful processing and can directly 821 communicate with the AR/VR content source, the network only needs to 822 transport the data defined by HMD resolution which is only a small 823 percentage of the whole 360 degree video. The corresponding data for 824 mean/peak bit rate, burst size can be easily calculated by the 825 formula (2). The last row "Infor Ratio of HMD/Whole video" denotes 826 the ratio of Information amount (mean/peak bit rate and burst size) 827 between HMD and the whole 360 degree video. 829 +-----------------+---------------+----------------+----------------+ 830 | | Entry-level VR| Advanced VR | Ultimate VR | 831 +-----------------+---------------+----------------+----------------+ 832 | Type | 4K 2D Video | 12K 2D Video | 24K 3D Video | 833 +-----------------+---------------+----------------+----------------+ 834 | Resolution W*H | 3840*1920 | 11520*5760 | 23040*11520 | 835 |360 degree video | | | | 836 +-----------------+---------------+----------------+----------------+ 837 | HMD Resolution/ | 960*960/ | 3840*3840/ | 7680*7680/ | 838 | view angle | 90 | 120 | 120 | 839 +-----------------+---------------+----------------+----------------+ 840 | PPD | 11 | 32 | 64 | 841 | (Pix per degree)| | | | 842 +-----------------+---------------+----------------+----------------+ 843 | d (bit) | 8 | 10 | 12 | 844 +-----------------+---------------+----------------+----------------+ 845 | Cv | 120 | 150 |200(2D), 350(3D)| 846 +-----------------+---------------+----------------+----------------+ 847 | FPS | 30 | 60 | 120 | 848 +-----------------+---------------+----------------+----------------+ 849 | Mean Bit rate | 22Mbps | 398Mbps | 2.87Gbps(2D) | 850 | | | | 3.28Gbps(3D) | 851 +-----------------+---------------+----------------+----------------+ 852 | Cj | 20 | 30 | 20(2D), 30(3D) | 853 +-----------------+---------------+----------------+----------------+ 854 | Peak bit rate | 132Mbps | 1.9Gbps | 28.7Gbps(2D)| 855 | | | | 38.2Gbps(3D)| 856 +-----------------+---------------+----------------+----------------+ 857 | Burst size | 553K byte | 4.15M Byte | 29.9M Byte(2D)| 858 | | | | 39.8M Byte(3D)| 859 +-----------------+---------------+----------------+----------------+ 860 | Burst time | 33ms | 17ms | 8ms | 861 +-----------------+---------------+----------------+----------------+ 862 | Infor Ratio of | 0.125 | 0.222 | 0.222 | 863 | HMD/Whole Video | | | | 864 +-----------------+---------------+----------------+----------------+ 866 Table 2 Bit rate for different VR (use YUV420 and H.265) 868 A.2.2. Peak Throughput 870 The peak bandwidth for AR/VR is the peak bit rate for an AR/VR video. 871 In this document, It is defined as the bit rate required to transport 872 an I-frame, and the burst size is the size of I-frame, burst time is 873 the time the I-frame must be transported from end to end based on 874 FPS. 876 Similar to the Mean Bit rate, the calculation of Peak bit rate is 877 purely theoretical and does not take any optimization into account. 879 There are two scenarios that a new I-frame will be generated and 880 transported. One is when the AR/VR video display has dramatically 881 changes that there is no similarity between two images; Another is 882 when the FOV changes. 884 When AR/VR user is moving header or moving his eyeball to change 885 Range of Interest, the FOV will be changed. 
Authors' Addresses

   Lin Han (editor)
   Huawei Technologies
   2330 Central Expressway
   Santa Clara, CA 95050
   USA

   Phone: +1 408 330 4613
   Email: lin.han@huawei.com

   Kevin Smith
   Vodafone
   UK

   Email: Kevin.Smith@vodafone.com