idnits 2.17.1 draft-saldana-tsvwg-tcmtf-07.txt: Checking boilerplate required by RFC 5378 and the IETF Trust (see https://trustee.ietf.org/license-info): ---------------------------------------------------------------------------- No issues found here. Checking nits according to https://www.ietf.org/id-info/1id-guidelines.txt: ---------------------------------------------------------------------------- No issues found here. Checking nits according to https://www.ietf.org/id-info/checklist : ---------------------------------------------------------------------------- No issues found here. Miscellaneous warnings: ---------------------------------------------------------------------------- == The copyright year in the IETF Trust and authors Copyright Line does not match the current year -- The document date (June 10, 2014) is 3601 days in the past. Is this intentional? Checking references for intended status: Best Current Practice ---------------------------------------------------------------------------- (See RFCs 3967 and 4897 for information about using normative references to lower-maturity documents in RFCs) ** Obsolete normative reference: RFC 4960 (ref. 'SCTP') (Obsoleted by RFC 9260) Summary: 1 error (**), 0 flaws (~~), 1 warning (==), 1 comment (--). Run idnits with the --verbose option for more detailed information about the items above. -------------------------------------------------------------------------------- 2 Transport Area Working Group J. Saldana 3 Internet-Draft University of Zaragoza 4 Intended status: Best Current Practice D. Wing 5 Expires: December 12, 2014 Cisco Systems 6 J. Fernandez Navajas 7 University of Zaragoza 8 Muthu. Perumal 9 Cisco Systems 10 F. Pascual Blanco 11 Telefonica I+D 12 June 10, 2014 14 Tunneling Compressed Multiplexed Traffic Flows (TCM-TF) Reference Model 15 draft-saldana-tsvwg-tcmtf-07 17 Abstract 19 Tunneling Compressed and Multiplexed Traffic Flows (TCM-TF) is a 20 method for improving the bandwidth utilization of network segments 21 that carry multiple flows in parallel sharing a common path. The 22 method combines standard protocols for header compression, 23 multiplexing, and tunneling over a network path for the purpose of 24 reducing the bandwidth. The amount of packets per second can also be 25 reduced. 27 This document describes the TCM-TF framework and the different 28 options which can be used for each layer (header compression, 29 multiplexing and tunneling). 31 Status of This Memo 33 This Internet-Draft is submitted to IETF in full conformance with the 34 provisions of BCP 78 and BCP 79. 36 Internet-Drafts are working documents of the Internet Engineering 37 Task Force (IETF). Note that other groups may also distribute 38 working documents as Internet-Drafts. The list of current Internet- 39 Drafts is at http://datatracker.ietf.org/drafts/current/. 41 Internet-Drafts are draft documents valid for a maximum of six months 42 and may be updated, replaced, or obsoleted by other documents at any 43 time. It is inappropriate to use Internet-Drafts as reference 44 material or to cite them other than as "work in progress." 46 This Internet-Draft will expire on December 12, 2014. 48 Copyright Notice 50 Copyright (c) 2014 IETF Trust and the persons identified as the 51 document authors. All rights reserved. 53 This document is subject to BCP 78 and the IETF Trust's Legal 54 Provisions Relating to IETF Documents 55 (http://trustee.ietf.org/license-info) in effect on the date of 56 publication of this document. 
Please review these documents 57 carefully, as they describe your rights and restrictions with respect 58 to this document. 60 Table of Contents 62 1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . 3 63 1.1. Requirements Language . . . . . . . . . . . . . . . . . . 3 64 1.2. Bandwidth efficiency of flows sending small packets . . . 3 65 1.2.1. Real-time applications using RTP . . . . . . . . . . 3 66 1.2.2. Real-time applications not using RTP . . . . . . . . 4 67 1.2.3. Other applications generating small packets . . . . . 4 68 1.2.4. Optimization of small-packet flows . . . . . . . . . 5 69 1.2.5. Energy consumption considerations . . . . . . . . . . 5 70 1.3. Terminology . . . . . . . . . . . . . . . . . . . . . . . 6 71 1.4. Scenarios of application . . . . . . . . . . . . . . . . 7 72 1.4.1. Multidomain scenario . . . . . . . . . . . . . . . . 7 73 1.4.2. Single domain . . . . . . . . . . . . . . . . . . . . 8 74 1.4.3. Private solutions . . . . . . . . . . . . . . . . . . 9 75 1.4.4. Mixed scenarios . . . . . . . . . . . . . . . . . . . 11 76 1.5. Potential beneficiaries of TCM optimization . . . . . . . 12 77 1.6. Current Standard . . . . . . . . . . . . . . . . . . . . 13 78 1.7. Improved Standard Proposal . . . . . . . . . . . . . . . 13 79 2. Protocol Operation . . . . . . . . . . . . . . . . . . . . . 15 80 2.1. Models of implementation . . . . . . . . . . . . . . . . 15 81 2.2. Choice of the compressing protocol . . . . . . . . . . . 16 82 2.2.1. Context Synchronization in ECRTP . . . . . . . . . . 17 83 2.2.2. Context Synchronization in ROHC . . . . . . . . . . . 18 84 2.3. Multiplexing . . . . . . . . . . . . . . . . . . . . . . 18 85 2.4. Tunneling . . . . . . . . . . . . . . . . . . . . . . . . 18 86 2.4.1. Tunneling schemes over IP: L2TP and GRE . . . . . . . 18 87 2.4.2. MPLS tunneling . . . . . . . . . . . . . . . . . . . 19 88 2.5. Encapsulation Formats . . . . . . . . . . . . . . . . . . 19 89 3. Contributing Authors . . . . . . . . . . . . . . . . . . . . 20 90 4. Acknowledgements . . . . . . . . . . . . . . . . . . . . . . 22 91 5. IANA Considerations . . . . . . . . . . . . . . . . . . . . . 22 92 6. Security Considerations . . . . . . . . . . . . . . . . . . . 22 93 7. References . . . . . . . . . . . . . . . . . . . . . . . . . 23 94 7.1. Normative References . . . . . . . . . . . . . . . . . . 23 95 7.2. Informative References . . . . . . . . . . . . . . . . . 24 97 Authors' Addresses . . . . . . . . . . . . . . . . . . . . . . . 25 99 1. Introduction 101 This document describes a way to combine existing protocols for 102 header compression, multiplexing and tunneling to save bandwidth for 103 applications that generate long-term flows of small packets. 105 1.1. Requirements Language 107 The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", 108 "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this 109 document are to be interpreted as described in RFC 2119 [RFC2119]. 111 1.2. Bandwidth efficiency of flows sending small packets 113 The interactivity demands of some real-time services (VoIP, 114 videoconferencing, telemedicine, video vigilance, online gaming, 115 etc.) require a traffic profile consisting of high rates of small 116 packets, which are necessary in order to transmit frequent updates 117 between the two extremes of the communication. These services also 118 demand low network delays. 
In addition, some other services also use small packets, although they are not delay-sensitive (e.g., instant messaging, or M2M packets sending collected data in sensor networks or IoT scenarios over wireless or satellite links).  For both the delay-sensitive and the delay-insensitive applications, their small data payloads incur significant overhead.

When a number of flows based on small packets (small-packet flows) share the same path, bandwidth can be saved by multiplexing packets belonging to different flows.  If a transmission queue has not already been formed but multiplexing is desired, it is necessary to add a multiplexing delay, which has to be kept under some threshold if the service has tight delay requirements.

1.2.1.  Real-time applications using RTP

The first design of the Internet did not include any mechanism capable of guaranteeing an upper bound for delivery delay, since the first deployed services (e-mail, file transfer, etc.) were not delay-critical.  RTP [RTP] was first defined in 1996 in order to permit the delivery of real-time content.  Nowadays, although a variety of protocols are used for signaling real-time flows (SIP [SIP], H.323 [H.323], etc.), RTP has become the de facto standard for the delivery of real-time content.

RTP was designed to work over UDP datagrams.  This implies that an IPv4 packet carrying real-time information has to include 40 bytes of headers: 20 for the IPv4 header, 8 for UDP, and 12 for RTP.  This overhead is significant, taking into account that many real-time services send very small payloads.  It becomes even more significant with IPv6 packets, as the basic IPv6 header is twice the size of the IPv4 header.  Table 1 illustrates the overhead problem of VoIP for two different codecs.

   +---------------------------------+---------------------------------+
   |              IPv4               |              IPv6               |
   +---------------------------------+---------------------------------+
   | IPv4+UDP+RTP: 40 bytes header   | IPv6+UDP+RTP: 60 bytes header   |
   | G.711 at 20 ms packetization:   | G.711 at 20 ms packetization:   |
   |   25% header overhead           |   37.5% header overhead         |
   | G.729 at 20 ms packetization:   | G.729 at 20 ms packetization:   |
   |   200% header overhead          |   300% header overhead          |
   +---------------------------------+---------------------------------+

            Table 1: Efficiency of different voice codecs

1.2.2.  Real-time applications not using RTP

At the same time, there are many real-time applications that do not use RTP.  Some of them send UDP (but not RTP) packets, e.g., First Person Shooter (FPS) online games [First-person], for which latency is critical: the quickness and the movements of the players are important, and can decide whether they win or lose a fight.  In addition to latency, these applications may be sensitive to jitter and, to a lesser extent, to packet loss [Gamers], since they implement mechanisms for packet loss concealment.

1.2.3.  Other applications generating small packets

Other applications without delay constraints are also becoming popular (e.g., instant messaging, or M2M packets sending collected data in sensor networks over wireless or satellite links).
IoT traffic generated in Constrained RESTful Environments, where UDP packets are employed [I-D.ietf-core-coap], is another example.  The number of wireless M2M (machine-to-machine) connections has been growing steadily for some years, and a share of these connections is used for delay-intolerant applications, e.g., industrial SCADA (Supervisory Control And Data Acquisition), power plant monitoring, smart grids, or asset tracking.

1.2.4.  Optimization of small-packet flows

At the times or places where network capacity becomes scarce, allocating more bandwidth is a possible solution, but it implies a recurring cost.  In contrast, deploying optimization techniques between a pair of network nodes (able to reduce both bandwidth and packets per second) when and where required is a one-time investment.

In scenarios where the bottleneck is a single Layer-3 hop, standard header compression algorithms [cRTP], [ECRTP], [IPHC], [ROHC] can be used to reduce the overhead of each flow, at the cost of additional processing.

However, if header compression is to be deployed on a network path including several Layer-3 hops, tunneling can be used at the same time in order to allow the header-compressed packets to travel end-to-end, thus avoiding the need to compress and decompress at each intermediate node.  In these cases, compressed packets belonging to different flows can be multiplexed together in order to share the tunnel overhead.  As a counterpart, a small multiplexing delay is necessary in order to gather a number of packets to be sent together.  This delay has to be kept under a threshold in order to meet the delay requirements of the service.

A demultiplexer and a decompressor are necessary at the end of the common path, so as to rebuild the packets as they were originally sent, making traffic optimization a transparent process for the endpoints of the flow.

If only one stream is tunneled and compressed, little bandwidth saving will be obtained.  In contrast, multiplexing helps amortize the overhead of the tunnel header over many payloads.  The obtained savings grow with the number of flows optimized together [VoIP_opt], [FPS_opt].

All in all, the combined use of header compression and multiplexing provides a trade-off: bandwidth can be exchanged for processing capacity (mainly required for header compression and decompression) and a small additional delay (required for gathering a number of packets to be multiplexed together).

1.2.5.  Energy consumption considerations

As an additional benefit, the reduction of the amount of information sent, and especially the reduction of the number of packets per second to be managed by the intermediate routers, can be translated into a reduction of the overall energy consumption of network equipment.  According to [Efficiency], internal packet processing engines and the switching fabric account for 60% and 18%, respectively, of the power consumption of high-end routers.  Thus, reducing the number of packets to be managed and switched will reduce the overall energy consumption.
The measurements performed on commercial routers in [Power] corroborate this: a study using different packet sizes was presented, and the tests with large packets showed a reduction of the energy consumption, since a certain amount of energy is associated with header processing tasks, and not only with the transmission of the packet itself.

All in all, a tradeoff appears: on the one hand, energy consumption is increased at the two endpoints due to header compression processing; on the other hand, energy consumption is reduced in the intermediate nodes because of the reduction of the number of packets transmitted.  This tradeoff should be explored more deeply.

1.3.  Terminology

This document uses a number of terms to refer to the roles played by the entities using TCM-TF.

o  native packet

   A packet sent by an application, belonging to a flow that can be optimized by means of TCM-TF.

o  native flow

   A flow of native packets.  It can be considered a "small-packet flow" when the vast majority of the generated packets present a low payload-to-header ratio.

o  TCM packet

   A packet including a number of multiplexed and header-compressed native packets, plus a tunneling header.

o  TCM flow

   A flow of TCM packets, each one including a number of multiplexed header-compressed packets.

o  TCM optimizer

   The host where TCM optimization is deployed.  It corresponds to both the ingress and the egress of the tunnel transporting the compressed and multiplexed packets.

   If the optimizer compresses headers, multiplexes packets and creates the tunnel, it behaves as a "TCM-ingress optimizer", or "TCM-IO".  It takes native packets or flows and "optimizes" them.

   If it extracts packets from the tunnel, demultiplexes packets and decompresses headers, it behaves as a "TCM-egress optimizer", or "TCM-EO".  The TCM-egress optimizer takes a TCM flow and "rebuilds" the native packets as they were originally sent.

o  TCM-TF session

   The relationship between a pair of TCM optimizers exchanging TCM packets.

o  policy manager

   A network entity which makes the decisions about TCM-TF parameters (e.g., the multiplexing period to be used, or the flows to be optimized together, depending on their IP addresses, ports, etc.).  It is connected with a number of TCM-TF optimizers, and orchestrates the optimization that takes place between them.

1.4.  Scenarios of application

Different scenarios of application can be considered for the tunneling, compressing and multiplexing solution.  They can be classified according to the domains involved in the optimization:

1.4.1.  Multidomain scenario

In this scenario, the TCM-TF tunnel goes all the way from one network edge (the place where users are attached to the ISP) to another, and therefore it can cross several domains.  As shown in Figure 1, the optimization is performed before the packets leave the domain of an ISP; the traffic crosses the Internet inside the tunnel, and the packets are rebuilt in the second domain.

        _ _ _ _ _               _ _ _ _ _               _ _ _ _ _
      (           )           (           )           (           )
     (  +------+   )          (           )          (   +------+  )
  -->(  |TCM-IO|----)-------->(           )--------->(---|TCM-EO|  )-->
     (  +------+   )          (           )          (   +------+  )
      (_ _ _ _ _ _)            (_ _ _ _ _)            (_ _ _ _ _ _)

         ISP 1                  Internet                  ISP 2

     <----------------------TCM-TF------------------------>

                               Figure 1

Note that this is not from border to border (where ISPs connect to the Internet, which could be covered with specialized links) but from one ISP to another (e.g., managing all the traffic from individual users arriving at a Game Provider, regardless of the users' location).

Some examples of this could be:

o  An ISP may place a TCM optimizer in its aggregation network, in order to tunnel all the packets belonging to a certain service, sending them to the application provider, who will rebuild the packets before forwarding them to the application server.  This will result in savings for both actors.

o  A service provider (e.g., an online gaming company) can be allowed to place a TCM optimizer in the aggregation network of an ISP, being able to optimize all the flows of a service (e.g., VoIP, an online game).  Another TCM optimizer will rebuild these packets once they arrive at the network of the provider.

1.4.2.  Single domain

TCM-TF is only activated inside an ISP, from the edge to the border, inside the network operator.  The geographical scope and network depth of TCM-TF activation could be decided on demand, according to traffic conditions.

If we consider the residential users of a real-time interactive application (e.g., VoIP, or an online game generating small packets) in a town or a district, a TCM optimizing module can be included in some network devices, in order to group packets with the same destination.  As shown in Figure 2, depending on the number of users of the application, the packets can be grouped at different levels in DSL fixed network scenarios, at gateway level in LTE mobile network scenarios, or even in other ISP edge routers.  TCM-TF may also be applied for fiber residential accesses and in mobile networks.  This would reduce bandwidth requirements in the provider aggregation network.

             +------+
   N users --|TCM-IO|\
             +------+ \
                       \      _ _ _ _                  _ _ _ _
             +------+   \--> (         )  +------+    (         )
   M users --|TCM-IO|------->(         )->|TCM-EO|--->(         )
             +------+   /    (_ _ _ _ _)  +------+    (_ _ _ _ _)
                       /
             +------+ /          ISP                   Internet
   P users --|TCM-IO|/
             +------+

             <--------------TCM-TF--------------->

                               Figure 2

At the same time, the ISP may implement TCM-TF capabilities within its own MPLS network in order to optimize internal network resources: optimizing modules can be embedded in the Label Edge Routers of the network.  In that scenario, MPLS will act as the "tunneling" layer, with the tunnels being the paths defined by the MPLS labels, thus avoiding the use of additional tunneling protocols.

Finally, some networks use cRTP [cRTP] in order to obtain bandwidth savings on the access link, but as a counterpart considerable CPU resources are required on the aggregation router.  In these cases, by means of TCM, instead of only saving bandwidth on the access link, it could also be saved across the ISP network, thus avoiding the impact on the CPU of the aggregation router.

1.4.3.  Private solutions

End users can also optimize traffic end-to-end from network borders.
TCM-TF is used to connect geographically separated private networks (e.g., corporation headquarters and subsidiaries), without the ISP being aware of (or having to manage) those flows, as shown in Figure 3, where two different locations are connected through a tunnel traversing the Internet or another network.

      _ _ _ _    +------+    _ _ _ _ _ _    +------+    _ _ _ _
    (         )--|TCM-IO|-->(             )-|TCM-EO|-->(         )
    (_ _ _ _ _)  +------+   (_ _ _ _ _ _ _) +------+   (_ _ _ _ _)

     Location 1              ISP/Internet               Location 2

                <-------------TCM-TF------------->

                               Figure 3

Some examples of these scenarios:

o  The case of an enterprise with a number of distributed central offices, in which an appliance can be placed next to the access router, being able to optimize traffic flows with a shared origin and destination.  Thus, a number of remote desktop sessions to the same server can be optimized, and a number of VoIP calls between two offices will also require less bandwidth and fewer packets per second.  In many cases, a tunnel is already included for security reasons, so the additional overhead of TCM-TF is lower.

o  An Internet cafe, which is likely to have many users of the same application (e.g., VoIP, a game) sharing the same access link.  Internet cafes are very popular in countries with relatively low access speeds in households, where home computer penetration is usually low as well.  In many of these countries, bandwidth can become a serious limitation for this kind of business, so TCM-TF savings may become interesting for their viability.

o  Community Networks [topology_CNs] (typically deployed in rural areas or in developing countries), in which a number of people in the same geographical area share their connections in a cooperative way.  The structure of these networks is not designed from the beginning; they grow organically as new users join.  As a result, a number of wireless hops are usually required in order to reach a router connected to the Internet.

o  Satellite communication links that often manage the bandwidth by limiting the transmission rate, measured in packets per second (pps), to and from the satellite.  Applications like VoIP that generate a large number of small packets can easily fill the maximum number of pps slots, limiting the throughput across such links.  As an example, a G.729a voice call generates 50 pps at 20 ms packetization time.  If the satellite transmission allows 1,500 pps, the number of simultaneous voice calls is limited to 30.  This results in poor utilization of the satellite link's bandwidth and places a low bound on the number of voice calls that can use the link simultaneously.  TCM optimization, which multiplexes small packets into one packet for transmission, will improve the efficiency.

o  In an M2M/SCADA (Supervisory Control And Data Acquisition) context, TCM optimization can be applied when a satellite link is used for collecting the data of a number of sensors.  M2M terminals are normally equipped with sensing devices which can interface to proximity sensor networks through wireless connections.  The terminal can send the collected sensing data over a satellite link connecting to a satellite gateway, which in turn will forward the M2M/SCADA data to the processing and control center through the Internet.
The size of a typical M2M application transaction depends on the specific service, and may vary from a minimum of 20 bytes (e.g., tracking and metering in private security) to about 1,000 bytes (e.g., video-surveillance).  In this context, TCM-TF concepts can also be applied to allow a more efficient use of the available satellite link capacity, matching the requirements demanded by some M2M services.  If the case of large sensor deployments is considered, where proximity sensor networks transmit data through different satellite terminals, the use of compression algorithms already available in current satellite systems to reduce the overhead introduced by the UDP and IPv6 protocols is certainly desirable.  In addition to this, the tunneling and multiplexing functions provided by TCM-TF allow extending the compression functionality throughout the rest of the network, to eventually reach the processing and control centers.

o  Desktop or application sharing where the traffic from the server to the client typically consists of the delta of screen updates.  Also, the standard for remote desktop sharing emerging for WebRTC in the RTCWEB Working Group is: {something}/SCTP/UDP (Stream Control Transmission Protocol [SCTP]).  In this scenario, SCTP/UDP can be used in other cases: chatting, file sharing and applications related to WebRTC peers.  There can be hundreds of clients at a site talking to a server located at a datacenter over a WAN.  Compressing, multiplexing and tunneling this traffic could save WAN bandwidth and potentially improve latency.

1.4.4.  Mixed scenarios

Different combinations of the previous scenarios can be considered.  Agreements between different companies can be established in order to save bandwidth and to reduce packets per second.  As an example, Figure 4 shows a game provider that wants to TCM-optimize its connections by establishing associations between different TCM-IO/EOs placed near the game server and several TCM-IO/EOs placed in the networks of different ISPs (agreements between the game provider and each ISP will be necessary).  In every ISP, the TCM-IO/EO would be placed at the most appropriate point (actually, several TCM-IO/EOs could exist per ISP) in order to aggregate a sufficient number of users.

              _ _ _
   N users   (     )
   -->|TCM|->(     )---\
      +---+  (_ _ _)    \
              ISP 1      \
              _ _ _       \    _ _ _ _      _ _ _           _ _ _ _ _
   M users   (     )       \  (        )   (     )   +---+ (         )
   -->|TCM|->(     )--------->(        )-->(     )-->|TCM|>(         )
      +---+  (_ _ _)       /  (_ _ _ _)    (_ _ _)   +---+ (_ _ _ _ _)
              ISP 2       /    Internet     ISP 4            Game
              _ _ _      /                    ^              Provider
   O users   (     )    /                     |
   -->|TCM|->(     )---/             P users->|TCM|
      +---+  (_ _ _)                          +---+
              ISP 3

                               Figure 4

1.5.  Potential beneficiaries of TCM optimization

In conclusion, a standard able to compress headers, multiplex a number of packets, and send them together using a tunnel can benefit various stakeholders:

o  network operators, who can compress traffic flows sharing a common network segment;

o  ISPs;

o  developers of VoIP systems, who can include this option in their solutions;

o  service providers, who can achieve bandwidth savings in their supporting infrastructures;

o  users of Community Networks, who may be able to save significant amounts of bandwidth, and to reduce the number of packets per second in their networks.

Another fact that has to be taken into account is that the technique not only saves bandwidth, but also reduces the number of packets per second, which can sometimes be a bottleneck for a satellite link or even for a network router.

1.6.  Current Standard

The current standard [TCRTP] defines a way to reduce the bandwidth and the pps of RTP traffic by combining three different standard protocols:

o  Regarding compression, [ECRTP] is the selected option.

o  Multiplexing is accomplished using PPP Multiplexing [PPP-MUX].

o  Tunneling is accomplished by using L2TP (Layer 2 Tunneling Protocol [L2TPv3]).

The three layers are combined as shown in Figure 5:

       RTP/UDP/IP
           |
           |        ----------------------------
           |
       ECRTP                           compressing layer
           |
           |        ----------------------------
           |
       PPPMUX                          multiplexing layer
           |
           |        ----------------------------
           |
       L2TP                            tunneling layer
           |
           |        ----------------------------
           |
       IP

                          Figure 5

1.7.  Improved Standard Proposal

In contrast to the current standard [TCRTP], TCM-TF allows other header compression protocols in addition to RTP/UDP ones, since services based on small packets also use bare UDP, as shown in Figure 6:

       UDP/IP     RTP/UDP/IP
            \       /
             \     /               ------------------------------
              \   /
       Nothing or ROHC or ECRTP or IPHC       header compressing layer
               |
               |                   ------------------------------
               |
       PPPMUX or other mux protocols          multiplexing layer
               |
              / \                  ------------------------------
             /   \
            /     \
       GRE or L2TP \                          tunneling layer
            |      MPLS
            |                      ------------------------------
            IP

                          Figure 6

Each of the three layers is considered independent of the other two, i.e., different combinations of protocols can be implemented according to the new proposal:

o  Regarding compression, a number of options can be considered, as different standards are able to compress different headers ([cRTP], [ECRTP], [IPHC], [ROHC]).  The one to be used can be selected depending on the protocols used by the traffic to be compressed and on the specific scenario (packet loss percentage, delay, etc.).  There is also the possibility of using null header compression, if traffic compression is to be avoided, taking into account the need to store a context for every flow and the problems of context desynchronization in certain scenarios.  Although not shown in Figure 6, ESP (Encapsulating Security Payload [ESP]) headers can also be compressed.

o  Multiplexing is also accomplished using PPP Multiplexing [PPP-MUX].  Nevertheless, other multiplexing protocols can also be considered.

o  Tunneling is accomplished by using L2TP (Layer 2 Tunneling Protocol [L2TPv3]) over IP, GRE (Generic Routing Encapsulation [GRE]) over IP, or MPLS (Multiprotocol Label Switching Architecture [MPLS]).

It can be observed that TCRTP [TCRTP] is included as an option in TCM-TF, combining [ECRTP], [PPP-MUX] and [L2TPv3], so backwards compatibility with TCRTP is provided.  If a TCM optimizer implements ECRTP, PPPMux and L2TPv3, compatibility with RFC 4170 MUST be guaranteed.

If a single link is being optimized, a tunnel is unnecessary.  In that case, both optimizers MAY perform header compression between them.  Multiplexing may still be useful, since it reduces packets per second, which is interesting in some environments (e.g., satellite).
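
As a rough, non-normative illustration of the packet-per-second reduction, the following Python sketch estimates the pps needed by a set of G.729a calls with and without multiplexing, reusing the figures of the satellite example of Section 1.4.3 (50 pps per call, a 1,500 pps budget).  The function names, the multiplexing period and the assumption that all calls share one TCM-TF session are illustrative only:

   # Illustrative sketch: pps generated by N G.729a calls (20 ms
   # packetization), native vs. multiplexed into a single TCM flow.
   CALL_PPS = 50            # one G.729a call at 20 ms packetization
   SATELLITE_PPS = 1500     # example pps budget of the satellite link

   def native_pps(calls):
       # every call keeps sending its own small packets
       return calls * CALL_PPS

   def tcm_pps(period_ms=20):
       # all packets arriving within one multiplexing period are sent
       # in a single TCM packet (ignoring the MTU limit of the link)
       return 1000 / period_ms

   for n in (10, 30, 100):
       print(n, "calls:", native_pps(n), "pps native,",
             int(tcm_pps()), "pps multiplexed,",
             "fits" if native_pps(n) <= SATELLITE_PPS else "exceeds",
             "the native pps budget")

With native flows, the 1,500 pps budget caps the link at 30 simultaneous calls; with a 20 ms multiplexing period the TCM flow itself only needs about 50 pps, so the limit moves to the link bit rate and MTU instead.
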
Another reason for multiplexing, even when no tunnel is used, is the desire to reduce energy consumption.  Although no tunnel is employed, this can still be considered TCM-TF optimization, so TCM-TF signaling protocols will be used in order to negotiate the compression and multiplexing parameters to be employed.

Payload compression schemes may also be used, but they are not the aim of this document.

2.  Protocol Operation

This section describes how to combine protocols belonging to three layers (compressing, multiplexing, and tunneling) in order to save bandwidth for the considered flows.

2.1.  Models of implementation

TCM-TF can be implemented in different ways.  The most straightforward is to implement it in the devices terminating the flows (these devices can be, e.g., voice gateways, or proxies grouping a number of flows):

   [ending device]---[ending device]
                  ^
                  |
            TCM-TF over IP

                Figure 7

Another way TCM-TF can be implemented is with an external optimizer.  This device can be placed at strategic points in the network and can dynamically create and destroy TCM-TF sessions without the participation of the endpoints that generate the flows (Figure 8).

   [ending device]\                                /[ending device]
                   \                              /
   [ending device]----[optimizer]-----[optimizer]----[ending device]
                   /                              \
   [ending device]/                                \[ending device]
           ^                    ^                         ^
           |                    |                         |
       Native IP          TCM-TF over IP              Native IP

                               Figure 8

A number of already compressed flows can also be merged in a tunnel using an optimizer in order to increase the number of flows in a tunnel (Figure 9):

   [ending device]\                                /[ending device]
                   \                              /
   [ending device]----[optimizer]-----[optimizer]----[ending device]
                   /                              \
   [ending device]/                                \[ending device]
           ^                    ^                         ^
           |                    |                         |
       Compressed         TCM-TF over IP              Compressed

                               Figure 9

2.2.  Choice of the compressing protocol

There are different protocols that can be used for compressing IP flows:

o  IPHC (IP Header Compression [IPHC]) permits the compression of UDP/IP and ESP/IP headers.  It has a low implementation complexity.  On the other hand, the resynchronization of the context can be slow over long-RTT links.  It should be used in scenarios with a very low packet loss percentage.

o  cRTP (compressed RTP [cRTP]) works the same way as IPHC, but it is also able to compress RTP headers.  The link layer transport is not specified, but typically PPP is used.  For cRTP to compress headers, it must be implemented on each PPP link.  A lot of context is required to successfully run cRTP, and memory and processing requirements are high, especially if multiple hops must implement cRTP to save bandwidth on each of the hops.  At higher line rates, cRTP's processor consumption becomes prohibitively expensive.  cRTP is not suitable over the long-delay WAN links commonly used when tunneling, as proposed by this document.  To avoid the per-hop expense of cRTP, a simplistic solution is to use cRTP with L2TP to achieve end-to-end cRTP.  However, cRTP is only suitable for links with low delay and low loss.  Thus, if multiple router hops are involved, cRTP's expectation of low delay and low loss can no longer be met.  Furthermore, packets can arrive out of order.

o  ECRTP (Enhanced Compressed RTP [ECRTP]) is an extension of cRTP [cRTP] that provides tolerance to packet loss and packet reordering between compressor and decompressor.
Thus, ECRTP should be used instead of cRTP when possible (e.g., when the two TCM optimizers implement ECRTP).

o  ROHC (RObust Header Compression [ROHC]) is able to compress UDP/IP, ESP/IP and RTP/UDP/IP headers.  It is a robust scheme developed for header compression over links with a high bit error rate, such as wireless ones.  It incorporates mechanisms for quick resynchronization of the context.  It includes an improved encoding scheme for compressing the header fields that change dynamically.  Its main drawback is that it requires significantly more processing and memory resources than those necessary for IPHC or ECRTP.

The present document does not determine which of the existing protocols has to be used for the compressing layer.  The decision will depend on the scenario and the service being optimized.  It will also be determined by the packet loss probability, the RTT, the jitter, and the availability of memory and processing resources.  The standard is also able to accommodate other compression schemes that may be developed in the future.

2.2.1.  Context Synchronization in ECRTP

When the compressor receives an RTP packet that has an unpredicted change in the RTP header, the compressor should send a COMPRESSED_UDP packet (described in [ECRTP]) to synchronize the ECRTP decompressor state.  The COMPRESSED_UDP packet updates the RTP context in the decompressor.

To ensure delivery of updates of context variables, COMPRESSED_UDP packets should be delivered using the robust operation described in [ECRTP].

Because the "twice" algorithm described in [ECRTP] relies on UDP checksums, the IP stack on the RTP transmitter should transmit UDP checksums.  If UDP checksums are not used, the ECRTP compressor should use the cRTP Header checksum described in [ECRTP].

2.2.2.  Context Synchronization in ROHC

ROHC [ROHC] includes a more complex mechanism in order to maintain context synchronization.  It has different operation modes and defines compressor states which change depending on link behavior.

2.3.  Multiplexing

Header compression algorithms require a layer-two protocol that allows the identification of the different protocols carried.  PPP [PPP] is suited for this, although other multiplexing protocols can also be used for this layer of TCM-TF.

When header compression is used inside a tunnel, it reduces the size of the headers of the IP packets carried in the tunnel.  However, the tunnel itself has overhead due to its IP header and the tunnel header (the information necessary to identify the tunneled payload).

By multiplexing multiple small payloads in a single tunneled packet, reasonable bandwidth efficiency can be achieved, since the tunnel overhead is shared by multiple packets belonging to the flows active between the source and destination of an L2TP tunnel.  The packet size of the flows has to be small in order to permit good bandwidth savings.

If the source and destination of the tunnel are the same as the source and destination of the compressing protocol sessions, then the source and destination must have multiple active small-packet flows to get any benefit from multiplexing.

Because of this, TCM-TF is mostly useful for applications where many small-packet flows run between a pair of hosts.
The number of simultaneous sessions required to reduce the header overhead to the desired level depends on the average payload size, and also on the size of the tunnel header.  A smaller tunnel header will result in fewer simultaneous sessions being required to produce adequate bandwidth efficiency.

2.4.  Tunneling

Different tunneling schemes can be used for sending the compressed payloads end to end.

2.4.1.  Tunneling schemes over IP: L2TP and GRE

L2TP tunnels should be used to tunnel the compressed payloads end to end.  L2TP includes methods for tunneling messages used in PPP session establishment, such as NCP (Network Control Protocol).  This allows [IPCP-HC] to negotiate ECRTP compression/decompression parameters.

Other tunneling schemes, such as GRE [GRE], may also be used to implement the tunneling layer of TCM-TF.

2.4.2.  MPLS tunneling

In some scenarios, mainly in operators' core networks, MPLS is widely deployed as a data transport method.  The adoption of MPLS as the tunneling layer in this proposal is intended to natively adapt TCM-TF to those transport networks.

In the same way as layer 3 tunnels, MPLS paths identified by MPLS labels and established between Label Edge Routers (LERs) can be used to transport the compressed payloads within an MPLS network.  In this case, the multiplexing layer is placed directly over the MPLS layer.  Note that layer 3 tunnel headers do not have to be used, with the consequent data efficiency improvement.

2.5.  Encapsulation Formats

The packet format for a compressed packet is:

   +------------+-----------------------+
   |            |                       |
   |   Compr    |                       |
   |   Header   |         Data          |
   |            |                       |
   |            |                       |
   +------------+-----------------------+

                Figure 10

The packet format of a multiplexed PPP packet as defined by [PPP-MUX] is:

   +-------+----------+-------+-----+   +----------+-------+-----+
   | Mux   |P L|      |       |     |   |P L|      |       |     |
   | PPP   |F X| Len1 |  PPP  |     |   |F X| LenN |  PPP  |     |
   | Prot. |F T|      | Prot. |Info1| ~ |F T|      | Prot. |InfoN|
   | Field |   |      |Field1 |     |   |   |      |FieldN |     |
   |  (1)  |1-2 octets| (0-2) |     |   |1-2 octets| (0-2) |     |
   +-------+----------+-------+-----+   +----------+-------+-----+

                             Figure 11

The combined format used for TCM-TF with a single payload is all of the above packets concatenated.  Here is an example with one payload, using L2TP or GRE tunneling:

   +------+------+-------+----------+-------+--------+----+
   |  IP  |Tunnel| Mux   |P L|      |       |        |    |
   |header|header| PPP   |F X| Len1 |  PPP  | Compr  |    |
   | (20) |      | Proto |F T|      | Proto | header |Data|
   |      |      | Field |   |      |Field1 |        |    |
   |      |      |  (1)  |1-2 octets| (0-2) |        |    |
   +------+------+-------+----------+-------+--------+----+
          |<--------------- IP payload ------------------>|
                 |<------------ Mux payload ------------->|

                             Figure 12

If the tunneling technology is MPLS, then the scheme would be:

   +------+-------+----------+-------+--------+----+
   | MPLS | Mux   |P L|      |       |        |    |
   |header| PPP   |F X| Len1 |  PPP  | Compr  |    |
   |      | Proto |F T|      | Proto | header |Data|
   |      | Field |   |      |Field1 |        |    |
   |      |  (1)  |1-2 octets| (0-2) |        |    |
   +------+-------+----------+-------+--------+----+
          |<------------ MPLS payload -------------->|
                  |<--------- Mux payload ---------->|

                             Figure 13

If the tunnel contains multiplexed traffic, multiple "PPPMux payloads" are transmitted in one IP packet.

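As an informal complement to the formats above, the following Python sketch estimates how the per-packet overhead evolves with the number of multiplexed payloads for the L2TP/GRE case of Figure 12.  The header sizes used (tunnel header, compressed header, per-payload PPPMux fields) are assumptions chosen only for illustration; the real values depend on the protocols and options negotiated:

   # Illustrative byte counts for N multiplexed payloads (Figure 12).
   # All header sizes below are assumptions, not normative values.
   IP_HDR = 20           # IPv4 header of the tunnel packet
   TUNNEL_HDR = 4        # assumed L2TPv3 session header
   MUX_COMMON = 1        # PPPMux PPP protocol field, once per packet
   MUX_PER_PAYLOAD = 3   # assumed length + PPP protocol field per payload
   COMPR_HDR = 3         # assumed compressed RTP/UDP/IP header
   NATIVE_HDR = 40       # native RTP/UDP/IPv4 header
   PAYLOAD = 20          # e.g., G.729 at 20 ms packetization

   def native_bytes(n):
       return n * (NATIVE_HDR + PAYLOAD)

   def tcm_bytes(n):
       return (IP_HDR + TUNNEL_HDR + MUX_COMMON
               + n * (MUX_PER_PAYLOAD + COMPR_HDR + PAYLOAD))

   for n in (1, 5, 10, 20):
       saving = 100 * (1 - tcm_bytes(n) / native_bytes(n))
       print(f"{n:2d} payloads: {native_bytes(n):4d} B native, "
             f"{tcm_bytes(n):4d} B TCM-TF, saving {saving:4.1f}%")

The trend matches the observations of Sections 1.2.4 and 2.3: with a single payload the tunnel overhead cancels most of the compression gain, whereas with tens of multiplexed payloads the saving approaches the limit set by the compressed header and the per-payload multiplexing fields.
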
3.  Contributing Authors

   Gonzalo Camarillo
   Ericsson
   Advanced Signalling Research Lab.
   FIN-02420 Jorvas
   Finland

   Email: Gonzalo.Camarillo@ericsson.com

   Michael A. Ramalho
   Cisco Systems, Inc.
   8000 Hawkins Road
   Sarasota, FL 34241-9300
   US

   Phone: +1.732.832.9723
   Email: mramalho@cisco.com

   Jose Ruiz Mas
   University of Zaragoza
   Dpt. IEC Ada Byron Building
   50018 Zaragoza
   Spain

   Phone: +34 976762158
   Email: jruiz@unizar.es

   Diego Lopez Garcia
   Telefonica I+D
   Ramon de la Cruz 84
   28006 Madrid
   Spain

   Phone: +34 913129041
   Email: diego@tid.es

   David Florez Rodriguez
   Telefonica I+D
   Ramon de la Cruz 84
   28006 Madrid
   Spain

   Phone: +34 91312884
   Email: dflorez@tid.es

   Manuel Nunez Sanz
   Telefonica I+D
   Ramon de la Cruz 84
   28006 Madrid
   Spain

   Phone: +34 913128821
   Email: mns@tid.es

   Juan Antonio Castell Lucia
   Telefonica I+D
   Ramon de la Cruz 84
   28006 Madrid
   Spain

   Phone: +34 913129157
   Email: jacl@tid.es

   Mirko Suznjevic
   University of Zagreb
   Faculty of Electrical Engineering and Computing, Unska 3
   10000 Zagreb
   Croatia

   Phone: +385 1 6129 755
   Email: mirko.suznjevic@fer.hr

4.  Acknowledgements

5.  IANA Considerations

This memo includes no request to IANA.

6.  Security Considerations

The most straightforward option for securing a number of non-secured flows sharing a path is the use of IPsec [IPsec], when TCM using an IP tunnel is employed.  Instead of adding a security header to the packets of each native flow, and then compressing and multiplexing them, a single IPsec tunnel can be used in order to secure all the flows together, thus achieving a higher efficiency.  This use of IPsec protects the packets only within the transport network, between the tunnel ingress and egress, and therefore does not provide end-to-end authentication or encryption.

When a number of already secured flows including ESP [ESP] headers are optimized by means of TCM, and the addition of further security is not necessary, their ESP/IP headers can still be compressed using suitable algorithms [RFC5225], in order to improve the efficiency.  This header compression does not change the end-to-end security model.

The resilience of TCM-TF to denial of service, and the use of TCM-TF to deny service to other parts of the network infrastructure, are for future study.

7.  References

7.1.  Normative References

   [ECRTP]   Koren, T., Casner, S., Geevarghese, J., Thompson, B., and
             P. Ruddy, "Enhanced Compressed RTP (CRTP) for Links with
             High Delay, Packet Loss and Reordering", RFC 3545, 2003.

   [ESP]     Kent, S., "IP Encapsulating Security Payload", RFC 4303,
             2005.

   [GRE]     Farinacci, D., Li, T., Hanks, S., Meyer, D., and P.
             Traina, "Generic Routing Encapsulation (GRE)", RFC 2784,
             2000.

   [H.323]   International Telecommunication Union, "Packet-based
             multimedia communications systems", ITU-T Recommendation
             H.323, July 2003.

   [I-D.ietf-core-coap]
             Shelby, Z., Hartke, K., and C. Bormann, "Constrained
             Application Protocol (CoAP)", draft-ietf-core-coap-18
             (work in progress), June 2013.

   [IPCP-HC] Engan, M., Casner, S., Bormann, C., and T. Koren, "IP
             Header Compression over PPP", RFC 3544, 2003.

   [IPHC]    Degermark, M., Nordgren, B., and S. Pink, "IP Header
             Compression", RFC 2507, 1999.

   [IPsec]   Kent, S. and K. Seo, "Security Architecture for the
             Internet Protocol", RFC 4301, December 2005.

   [L2TPv3]  Lau, J., Townsley, M., and I. Goyret, "Layer Two Tunneling
             Protocol - Version 3 (L2TPv3)", RFC 3931, 2005.

   [MPLS]    Rosen, E., Viswanathan, A., and R. Callon, "Multiprotocol
             Label Switching Architecture", RFC 3031, January 2001.

   [PPP]     Simpson, W., "The Point-to-Point Protocol (PPP)",
             RFC 1661, 1994.

   [PPP-MUX] Pazhyannur, R., Ali, I., and C. Fox, "PPP Multiplexing",
             RFC 3153, 2001.

   [RFC2119] Bradner, S., "Key words for use in RFCs to Indicate
             Requirement Levels", BCP 14, RFC 2119, March 1997.

   [RFC5225] Pelletier, G. and K. Sandlund, "RObust Header Compression
             Version 2 (ROHCv2): Profiles for RTP, UDP, IP, ESP and
             UDP-Lite", RFC 5225, April 2008.

   [ROHC]    Sandlund, K., Pelletier, G., and L-E. Jonsson, "The RObust
             Header Compression (ROHC) Framework", RFC 5795, 2010.

   [RTP]     Schulzrinne, H., Casner, S., Frederick, R., and V.
             Jacobson, "RTP: A Transport Protocol for Real-Time
             Applications", RFC 3550, 2003.

   [SCTP]    Stewart, R., Ed., "Stream Control Transmission Protocol",
             RFC 4960, 2007.

   [SIP]     Rosenberg, J., Schulzrinne, H., Camarillo, G., et al.,
             "SIP: Session Initiation Protocol", RFC 3261, 2002.

   [TCRTP]   Thompson, B., Koren, T., and D. Wing, "Tunneling
             Multiplexed Compressed RTP (TCRTP)", RFC 4170, 2005.

   [cRTP]    Casner, S. and V. Jacobson, "Compressing IP/UDP/RTP
             Headers for Low-Speed Serial Links", RFC 2508, 1999.

7.2.  Informative References

   [Efficiency]
             Bolla, R., Bruschi, R., Davoli, F., and F. Cucchietti,
             "Energy Efficiency in the Future Internet: A Survey of
             Existing Approaches and Trends in Energy-Aware Fixed
             Network Infrastructures", IEEE Communications Surveys and
             Tutorials, vol. 13, no. 2, pp. 223-244, 2011.

   [FPS_opt] Saldana, J., Fernandez-Navajas, J., Ruiz-Mas, J., Aznar,
             J., Viruete, E., and L. Casadesus, "First Person Shooters:
             Can a Smarter Network Save Bandwidth without Annoying the
             Players?", IEEE Communications Magazine, vol. 49, no. 11,
             pp. 190-198, 2011.

   [First-person]
             Ratti, S., Hariri, B., and S. Shirmohammadi, "A Survey of
             First-Person Shooter Gaming Traffic on the Internet", IEEE
             Internet Computing, vol. 14, no. 5, pp. 60-69, 2010.

   [Gamers]  Oliveira, M. and T. Henderson, "What online gamers really
             think of the Internet?", NetGames '03: Proceedings of the
             2nd Workshop on Network and System Support for Games, ACM,
             New York, NY, USA, pp. 185-193, 2003.

   [Power]   Chabarek, J., Sommers, J., Barford, P., Estan, C., Tsiang,
             D., and S. Wright, "Power Awareness in Network Design and
             Routing", INFOCOM 2008: The 27th IEEE Conference on
             Computer Communications, pp. 457-465, 2008.

   [VoIP_opt]
             Saldana, J., Fernandez-Navajas, J., Ruiz-Mas, J., Murillo,
             J., Viruete, E., and J. Aznar, "Evaluating the Influence
             of Multiplexing Schemes and Buffer Implementation on
             Perceived VoIP Conversation Quality", Computer Networks
             (Elsevier), Volume 6, Issue 11, pp. 2920-2939, Nov. 30,
             2012.

   [topology_CNs]
             Vega, D., Cerda-Alabern, L., Navarro, L., and R. Meseguer,
             "Topology patterns of a community network: Guifi.net",
             Proceedings of the 2012 IEEE 8th International Conference
             on Wireless and Mobile Computing, Networking and
             Communications (WiMob), pp. 612-619, 2012.

Authors' Addresses

   Jose Saldana
   University of Zaragoza
   Dpt. IEC Ada Byron Building
   Zaragoza 50018
   Spain

   Phone: +34 976 762 698
   Email: jsaldana@unizar.es

   Dan Wing
   Cisco Systems
   771 Alder Drive
   San Jose, CA 95035
   US

   Phone: +44 7889 488 335
   Email: dwing@cisco.com

   Julian Fernandez Navajas
   University of Zaragoza
   Dpt. IEC Ada Byron Building
   Zaragoza 50018
   Spain

   Phone: +34 976 761 963
   Email: navajas@unizar.es

   Muthu Arul Mozhi Perumal
   Cisco Systems
   Cessna Business Park
   Sarjapur-Marathahalli Outer Ring Road
   Bangalore, Karnataka  560103
   India

   Phone: +91 9449288768
   Email: mperumal@cisco.com

   Fernando Pascual Blanco
   Telefonica I+D
   Ramon de la Cruz 84
   Madrid 28006
   Spain

   Phone: +34 913128779
   Email: fpb@tid.es