Transport Area Working Group                                  J. Saldana
Internet-Draft                                    University of Zaragoza
Intended status: Best Current Practice                           D. Wing
Expires: June 14, 2015                                     Cisco Systems
                                                    J. Fernandez Navajas
                                                  University of Zaragoza
                                                              M. Perumal
                                                                Ericsson
                                                       F. Pascual Blanco
                                                          Telefonica I+D
                                                       December 11, 2014

      Tunneling Compressing and Multiplexing (TCM) Traffic Flows.
Reference
                                  Model
                      draft-saldana-tsvwg-tcmtf-08

Abstract

   Tunneling, Compressing and Multiplexing (TCM) is a method for
   improving the bandwidth utilization of network segments that carry
   multiple small-packet flows in parallel sharing a common path.  The
   method combines different protocols for header compression,
   multiplexing, and tunneling over a network path for the purpose of
   reducing the bandwidth.  The number of packets per second can also
   be reduced.

   This document describes the TCM framework and the different options
   which can be used for each layer (header compression, multiplexing
   and tunneling).

Status of This Memo

   This Internet-Draft is submitted to IETF in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF).  Note that other groups may also distribute
   working documents as Internet-Drafts.  The list of current Internet-
   Drafts is at http://datatracker.ietf.org/drafts/current/.

   Internet-Drafts are draft documents valid for a maximum of six months
   and may be updated, replaced, or obsoleted by other documents at any
   time.  It is inappropriate to use Internet-Drafts as reference
   material or to cite them other than as "work in progress."

   This Internet-Draft will expire on June 14, 2015.

Copyright Notice

   Copyright (c) 2014 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with respect
   to this document.

Table of Contents

   1.  Introduction . . . . . . . . . . . . . . . . . . . . . . . .   3
     1.1.  Requirements Language . . . . . . . . . . . . . . . . .   3
     1.2.
Bandwidth efficiency of flows sending small packets . . .   3
       1.2.1.  Real-time applications using RTP . . . . . . . . . .   3
       1.2.2.  Real-time applications not using RTP . . . . . . . .   4
       1.2.3.  Other applications generating small packets  . . . .   4
       1.2.4.  Optimization of small-packet flows . . . . . . . . .   5
       1.2.5.  Energy consumption considerations  . . . . . . . . .   6
     1.3.  Terminology . . . . . . . . . . . . . . . . . . . . . .   6
     1.4.  Scenarios of application  . . . . . . . . . . . . . . .   7
       1.4.1.  Multidomain scenario . . . . . . . . . . . . . . . .   7
       1.4.2.  Single domain . . . . . . . . . . . . . . . . . . .   8
       1.4.3.  Private solutions . . . . . . . . . . . . . . . . .   9
       1.4.4.  Mixed scenarios . . . . . . . . . . . . . . . . . .  11
     1.5.  Potential beneficiaries of TCM optimization . . . . . .  12
     1.6.  Current Standard for VoIP . . . . . . . . . . . . . . .  13
     1.7.  Current Proposal  . . . . . . . . . . . . . . . . . . .  13
   2.  Protocol Operation . . . . . . . . . . . . . . . . . . . . .  15
     2.1.  Models of implementation  . . . . . . . . . . . . . . .  15
     2.2.  Choice of the compressing protocol  . . . . . . . . . .  16
       2.2.1.  Context Synchronization in ECRTP . . . . . . . . . .  17
       2.2.2.  Context Synchronization in ROHC . . . . . . . . . .  18
     2.3.  Multiplexing  . . . . . . . . . . . . . . . . . . . . .  18
     2.4.  Tunneling . . . . . . . . . . . . . . . . . . . . . . .  18
       2.4.1.  Tunneling schemes over IP: L2TP and GRE . . . . . .  18
       2.4.2.  MPLS tunneling  . . . . . . . . . . . . . . . . . .  19
     2.5.  Encapsulation Formats . . . . . . . . . . . . . . . . .  19
   3.  Contributing Authors . . . . . . . . . . . . . . . . . . . .  20
   4.  Acknowledgements . . . . . . . . . . . . . . . . . . . . . .  22
   5.  IANA Considerations  . . . . . . . . . . . . . . . . . . . .  22
   6.  Security Considerations  . . . . . . . . . . . . . . . . . .  22
   7.  References . . . . . . . . . .
. . . . . . . . . .  23
     7.1.  Normative References  . . . . . . . . . . . . . . . . .  23
     7.2.  Informative References  . . . . . . . . . . . . . . . .  24

   Authors' Addresses  . . . . . . . . . . . . . . . . . . . . . .  25

1.  Introduction

   This document describes a way to combine existing protocols for
   header compression, multiplexing and tunneling to save bandwidth for
   applications that generate long-term flows of small packets.

1.1.  Requirements Language

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
   document are to be interpreted as described in RFC 2119 [RFC2119].

1.2.  Bandwidth efficiency of flows sending small packets

   The interactivity demands of some real-time services (VoIP,
   videoconferencing, telemedicine, video surveillance, online gaming,
   etc.) require a traffic profile consisting of high rates of small
   packets, which are necessary in order to transmit frequent updates
   between the two endpoints of the communication.  These services also
   demand low network delays.  In addition, some other services also
   use small packets, although they are not delay-sensitive (e.g.,
   instant messaging, M2M packets sending collected data in sensor
   networks or IoT scenarios using wireless or satellite links).  For
   both the delay-sensitive and delay-insensitive applications, their
   small data payloads incur significant overhead.

   When a number of flows based on small packets (small-packet flows)
   share the same path, the traffic can be optimized by multiplexing
   packets belonging to different flows.  As a consequence, bandwidth
   can be saved and the number of packets per second can be reduced.
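As a rough back-of-the-envelope illustration of the kind of saving involved (all header, separator and payload sizes below are assumptions chosen for the sketch, not values defined in this document):

```python
# Rough sketch (illustrative sizes, not from this document): bytes on the
# wire and packets per second when N small-packet flows are multiplexed
# into a single tunnel packet instead of being sent natively.

IP_UDP_RTP_HDR = 40   # IPv4 (20) + UDP (8) + RTP (12) headers
PAYLOAD = 20          # assumed small voice payload per packet
COMP_HDR = 3          # assumed compressed-header size
MUX_SEP = 2           # assumed per-packet multiplexing separator
TUNNEL_HDR = 25       # assumed IP + tunnel overhead, shared by the bundle

def native_bytes(n_flows):
    """Bytes sent when one packet of each flow travels natively."""
    return n_flows * (IP_UDP_RTP_HDR + PAYLOAD)

def tcm_bytes(n_flows):
    """Bytes sent when the same packets are compressed and multiplexed
    into one tunnel packet."""
    return TUNNEL_HDR + n_flows * (COMP_HDR + MUX_SEP + PAYLOAD)

for n in (1, 5, 20):
    saving = 100 * (1 - tcm_bytes(n) / native_bytes(n))
    print(f"{n:2d} flows: {native_bytes(n):4d} B native, "
          f"{tcm_bytes(n):4d} B multiplexed ({saving:.0f}% saved, 1/{n} pps)")
```

With these assumed sizes, multiplexing five flows already halves the bytes on the wire and divides the packet rate by five; the saving keeps growing, asymptotically, as more flows share the tunnel header.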
   If a transmission queue has not already been formed but multiplexing
   is desired, it is necessary to add a multiplexing delay, which has
   to be maintained under some threshold if the service presents tight
   delay requirements.  It is believed that this delay and jitter can
   be kept of the same order of magnitude as, or below, other common
   sources of delay and jitter currently present on the Internet,
   without causing harm to flows that employ congestion control based
   on delay.

1.2.1.  Real-time applications using RTP

   The first design of the Internet did not include any mechanism
   capable of guaranteeing an upper bound for delivery delay, taking
   into account that the first deployed services were e-mail, file
   transfer, etc., in which delay is not critical.  RTP [RTP] was first
   defined in 1996 in order to permit the delivery of real-time
   content.  Nowadays, although there are a variety of protocols used
   for signaling real-time flows (SIP [SIP], H.323 [H.323], etc.), RTP
   has become the standard par excellence for the delivery of real-time
   content.

   RTP was designed to work over UDP datagrams.  This implies that an
   IPv4 packet carrying real-time information has to include 40 bytes
   of headers: 20 for the IPv4 header, 8 for UDP, and 12 for RTP.  This
   overhead is significant, taking into account that many real-time
   services send very small payloads.  It becomes even more significant
   with IPv6 packets, as the basic IPv6 header is twice the size of the
   IPv4 header.  Table 1 illustrates the overhead problem of VoIP for
   two different codecs.
   +---------------------------------+---------------------------------+
   |              IPv4               |              IPv6               |
   +---------------------------------+---------------------------------+
   | IPv4+UDP+RTP: 40 bytes header   | IPv6+UDP+RTP: 60 bytes header   |
   | G.711 at 20 ms packetization:   | G.711 at 20 ms packetization:   |
   |   25% header overhead           |   37.5% header overhead         |
   | G.729 at 20 ms packetization:   | G.729 at 20 ms packetization:   |
   |   200% header overhead          |   300% header overhead          |
   +---------------------------------+---------------------------------+

            Table 1: Efficiency of different voice codecs

1.2.2.  Real-time applications not using RTP

   At the same time, there are many real-time applications that do not
   use RTP.  Some of them send UDP (but not RTP) packets, e.g., First
   Person Shooter (FPS) online games [First-person], for which latency
   is critical.  The players' quickness and movements are important,
   and can decide whether they win or lose a fight.  In addition to
   latency, these applications may be sensitive to jitter and, to a
   lesser extent, to packet loss [Gamers], since they implement
   mechanisms for packet loss concealment.

1.2.3.  Other applications generating small packets

   Other applications without delay constraints are also becoming
   popular (e.g., instant messaging, M2M packets sending collected data
   in sensor networks using wireless or satellite links).  This is also
   the case for IoT traffic generated in Constrained RESTful
   Environments, where UDP packets are employed [RFC7252].  The number
   of wireless M2M (machine-to-machine) connections has been growing
   steadily for some years, and a share of these connections is used
   for delay-intolerant applications, e.g., industrial SCADA
   (Supervisory Control And Data Acquisition), power plant monitoring,
   smart grids, and asset tracking.

1.2.4.
Optimization of small-packet flows

   At the times or in the places where network capacity becomes scarce,
   allocating more bandwidth is a possible solution, but it implies a
   recurring cost.  However, including optimization techniques between
   a pair of network nodes (able to reduce bandwidth and packets per
   second) when/where required is a one-time investment.

   In scenarios including a bottleneck with a single Layer-3 hop,
   standard header compression algorithms [cRTP], [ECRTP], [IPHC],
   [ROHC] can be used for reducing the overhead of each flow, at the
   cost of additional processing.

   However, if header compression is to be deployed in a network path
   including several Layer-3 hops, tunneling can be used at the same
   time in order to allow the header-compressed packets to travel
   end-to-end, thus avoiding the need to compress and decompress at
   each intermediate node.  In these cases, compressed packets
   belonging to different flows can be multiplexed together in order
   to share the tunnel overhead.  As a counterpart, a small
   multiplexing delay will be necessary in order to join a number of
   packets to be sent together.  This delay has to be maintained under
   a threshold in order to meet the delay requirements.

   A series of recommendations about delay limits is summarized in
   [I-D.suznjevic-tsvwg-mtd-tcmtf], aimed at keeping this additional
   delay and jitter of the same order of magnitude as other sources of
   jitter currently present on the Internet.

   A demultiplexer and a decompressor are necessary at the end of the
   common path, so as to rebuild the packets as they were originally
   sent, making traffic optimization a transparent process for the
   origin and destination of the flow.

   If only one stream is tunneled and compressed, then little bandwidth
   savings will be obtained.
   In contrast, multiplexing is helpful to amortize the overhead of the
   tunnel header over many payloads.  The obtained savings grow with
   the number of flows optimized together [VoIP_opt], [FPS_opt].

   All in all, the combined use of header compression and multiplexing
   provides a trade-off: bandwidth can be traded for processing
   capacity (mainly required for header compression and decompression)
   and a small additional delay (required for gathering a number of
   packets to be multiplexed together).

1.2.5.  Energy consumption considerations

   As an additional benefit, the reduction of the amount of information
   sent, and especially the reduction of the number of packets per
   second to be managed by the intermediate routers, can be translated
   into a reduction of the overall energy consumption of network
   equipment.  According to [Efficiency], internal packet processing
   engines and switching fabric require 60% and 18% of the power
   consumption of high-end routers, respectively.  Thus, reducing the
   number of packets to be managed and switched will reduce the overall
   energy consumption.  The measurements performed in [Power] on
   commercial routers corroborate this: a study using different packet
   sizes was presented, and the tests with big packets showed a
   reduction of the energy consumption, since a certain amount of
   energy is associated with header processing tasks, and not only with
   the sending of the packet itself.

   All in all, a tradeoff appears: on the one hand, energy consumption
   is increased at the two endpoints due to header compression
   processing; on the other hand, energy consumption is reduced in the
   intermediate nodes because of the reduction of the number of packets
   transmitted.  This tradeoff should be explored more deeply.

1.3.  Terminology

   This document uses a number of terms to refer to the roles played by
   the entities using TCM.
   o  native packet

      A packet sent by an application, belonging to a flow that can be
      optimized by means of TCM.

   o  native flow

      A flow of native packets.  It can be considered a "small-packet
      flow" when the vast majority of the generated packets present a
      low payload-to-header ratio.

   o  TCM packet

      A packet including a number of multiplexed and header-compressed
      native packets, and also a tunneling header.

   o  TCM flow

      A flow of TCM packets, each one including a number of multiplexed
      header-compressed packets.

   o  TCM optimizer

      The host where TCM optimization is deployed.  It corresponds to
      both the ingress and the egress of the tunnel transporting the
      compressed and multiplexed packets.

      If the optimizer compresses headers, multiplexes packets and
      creates the tunnel, it behaves as a "TCM-Ingress Optimizer", or
      "TCM-IO".  It takes native packets or flows and "optimizes" them.

      If it extracts packets from the tunnel, demultiplexes packets and
      decompresses headers, it behaves as a "TCM-Egress Optimizer", or
      "TCM-EO".  The TCM-Egress Optimizer takes a TCM flow and
      "rebuilds" the native packets as they were originally sent.

   o  TCM session

      The relationship between a pair of TCM optimizers exchanging TCM
      packets.

   o  policy manager

      A network entity which makes the decisions about TCM optimization
      parameters (e.g., multiplexing period to be used, flows to be
      optimized together), depending on the flows' IP addresses, ports,
      etc.  It is connected to a number of TCM optimizers, and
      orchestrates the optimization that takes place between them.

1.4.  Scenarios of application

   Different scenarios of application can be considered for the
   Tunneling, Compressing and Multiplexing solution.  They can be
   classified according to the domains involved in the optimization:

1.4.1.
Multidomain scenario

   In this scenario, the TCM tunnel goes all the way from one network
   edge (the place where users are attached to the ISP) to another, and
   therefore it can cross several domains.  As shown in Figure 1, the
   optimization is performed before the packets leave the domain of an
   ISP; the traffic crosses the Internet inside the tunnel, and the
   packets are rebuilt in the second domain.

        _ _ _ _                                        _ _ _ _
      (   `   )            _ _ _                     (   `   )_ _
     ( +------+  )`)     (   `   )_                 ( +------+    `)
   -->(_-|TCM-IO|---_) ---> (      ) `) ----->(_-|TCM-EO|--_)-->
     (  +------+   _)    (_ (_ . _) _)          (  +------+  _)
      (_ _ _ _ _)          (_ _ ( _) _)

        ISP 1               Internet                  ISP 2

      <--------------------TCM--------------------->

                               Figure 1

   Note that this is not from border to border (where ISPs connect to
   the Internet, which could be covered with specialized links) but
   from one ISP to another (e.g., managing all traffic from individual
   users arriving at a Game Provider, regardless of the users'
   location).

   Some examples of this could be:

   o  An ISP may place a TCM optimizer in its aggregation network, in
      order to tunnel all the packets belonging to a certain service,
      sending them to the application provider, who will rebuild the
      packets before forwarding them to the application server.  This
      will result in savings for both actors.

   o  A service provider (e.g., an online gaming company) can be
      allowed to place a TCM optimizer in the aggregation network of an
      ISP, being able to optimize all the flows of a service (e.g.,
      VoIP, an online game).  Another TCM optimizer will rebuild these
      packets once they arrive at the network of the provider.

1.4.2.  Single domain

   In this case, TCM is only activated inside an ISP, from edge to
   border, within the network operator's domain.  The geographical
   scope and network depth of TCM activation could be adjusted on
   demand, according to traffic conditions.
   If we consider the residential users of real-time interactive
   applications (e.g., VoIP, online games generating small packets) in
   a town or a district, a TCM optimizing module can be included in
   some network devices, in order to group packets with the same
   destination.  As shown in Figure 2, depending on the number of users
   of the application, the packets can be grouped at different levels
   in DSL fixed network scenarios, at gateway level in LTE mobile
   network scenarios or even in other ISP edge routers.  TCM may also
   be applied to fiber residential accesses, and in mobile networks.
   This would reduce bandwidth requirements in the provider aggregation
   network.

            +------+
   N users -|TCM-IO|\
            +------+ \
                      \       _ _ _ _
            +------+   \--> (   `    )_   +------+     (   `    )_
   M users -|TCM-IO|------>(          ) `)--|TCM-EO|-->(          ) `)
            +------+   / ->(_ _ (_ . _) _) +------+   (_ _ (_ . _) _)
                      /
            +------+ /         ISP                       Internet
   P users -|TCM-IO|/
            +------+

            <--------------TCM--------------->

                               Figure 2

   At the same time, the ISP may implement TCM capabilities within its
   own MPLS network in order to optimize internal network resources:
   optimizing modules can be embedded in the Label Edge Routers of the
   network.  In that scenario MPLS will act as the "tunneling" layer,
   with the tunnels being the paths defined by the MPLS labels, thus
   avoiding the use of additional tunneling protocols.

   Finally, some networks use cRTP [cRTP] in order to obtain bandwidth
   savings on the access link, but as a counterpart considerable CPU
   resources are required on the aggregation router.  In these cases,
   by means of TCM, bandwidth can be saved not only on the access link
   but also across the ISP network, thus avoiding the impact on the CPU
   of the aggregation router.

1.4.3.  Private solutions

   End users can also optimize traffic end-to-end from network borders.
   TCM is used to connect private networks geographically apart (e.g.,
   corporation headquarters and subsidiaries), without the ISP being
   aware (or having to manage) those flows, as shown in Figure 3, where
   two different locations are connected through a tunnel traversing
   the Internet or another network.

       _ _ _         +------+      _ _ _        +------+      _ _ _
     (   `  )_       |      |    (   `  )_      |      |    (   `  )_
    (        ) `) -->|TCM-IO|-->(        ) `)-->|TCM-EO|-->(        ) `)
    (_ (_ . _) _)    +------+   (_ (_ . _) _)   +------+   (_ (_ . _)_)

      Location 1                ISP/Internet               Location 2

                   <-------------TCM----------->

                               Figure 3

   Some examples of these scenarios:

   o  The case of an enterprise with a number of distributed central
      offices, in which an appliance can be placed next to the access
      router, being able to optimize traffic flows with a shared origin
      and destination.  Thus, a number of remote desktop sessions to
      the same server can be optimized, or a number of VoIP calls
      between two offices will also require less bandwidth and fewer
      packets per second.  In many cases, a tunnel is already included
      for security reasons, so the additional overhead of TCM is lower.

   o  An Internet cafe, which is likely to have many users of the same
      application (e.g., VoIP, online games) sharing the same access
      link.  Internet cafes are very popular in countries with
      relatively low access speeds in households, where home computer
      penetration is usually low as well.  In many of these countries,
      bandwidth can become a serious limitation for this kind of
      business, so TCM savings may become interesting for their
      viability.

   o  Community Networks [topology_CNs] (typically deployed in rural
      areas or in developing countries), in which a number of people in
      the same geographical place share their connections in a
      cooperative way.  The structure of these networks is not designed
      from the beginning, but they grow organically as new users join.
      As a result, a number of wireless hops are usually required in
      order to reach a router connected to the Internet.

   o  Satellite communication links, which often manage the bandwidth
      by limiting the transmission rate, measured in packets per second
      (pps), to and from the satellite.  Applications like VoIP that
      generate a large number of small packets can easily fill the
      maximum number of pps slots, limiting the throughput across such
      links.  As an example, a G.729a voice call generates 50 pps at
      20 ms packetization time.  If the satellite transmission allows
      1,500 pps, the number of simultaneous voice calls is limited to
      30.  This results in poor utilization of the satellite link's
      bandwidth, as well as placing a low bound on the number of voice
      calls that can utilize the link simultaneously.  TCM
      optimization, which multiplexes small packets into one packet for
      transmission, will improve the efficiency.

   o  In an M2M/SCADA (Supervisory Control And Data Acquisition)
      context, TCM optimization can be applied when a satellite link is
      used for collecting the data of a number of sensors.  M2M
      terminals are normally equipped with sensing devices which can
      interface to proximity sensor networks through wireless
      connections.  The terminal can send the collected sensing data
      using a satellite link connecting to a satellite gateway, which
      in turn will forward the M2M/SCADA data to the processing and
      control center through the Internet.  The size of a typical M2M
      application transaction depends on the specific service and may
      vary from a minimum of 20 bytes (e.g., tracking and metering in
      private security) to about 1,000 bytes (e.g.,
      video-surveillance).  In this context, TCM concepts can also be
      applied to allow a more efficient use of the available satellite
      link capacity, matching the requirements demanded by some M2M
      services.
      If the case of large sensor deployments is considered, where
      proximity sensor networks transmit data through different
      satellite terminals, the use of compression algorithms already
      available in current satellite systems to reduce the overhead
      introduced by UDP and IPv6 protocols is certainly desirable.  In
      addition to this, the tunneling and multiplexing functions
      available from TCM allow extending compression functionality
      throughout the rest of the network, to eventually reach the
      processing and control centers.

   o  Desktop or application sharing, where the traffic from the server
      to the client typically consists of the delta of screen updates.
      Also, the standard for remote desktop sharing emerging for WebRTC
      in the RTCWEB Working Group is: {something}/SCTP/UDP (Stream
      Control Transmission Protocol [SCTP]).  In this scenario,
      SCTP/UDP can also be used in other cases: chatting, file sharing
      and applications related to WebRTC peers.  There can be hundreds
      of clients at a site talking to a server located at a datacenter
      over a WAN.  Compressing, multiplexing and tunneling this traffic
      could save WAN bandwidth and potentially improve latency.

1.4.4.  Mixed scenarios

   Different combinations of the previous scenarios can be considered.
   Agreements between different companies can be established in order
   to save bandwidth and to reduce packets per second.  As an example,
   Figure 4 shows a game provider that wants to TCM-optimize its
   connections by establishing associations between different
   TCM-IO/EOs placed near the game server and several TCM-IO/EOs placed
   in the networks of different ISPs (agreements between the game
   provider and each ISP will be necessary).  In every ISP, the
   TCM-IO/EO would be placed at the most appropriate point (actually,
   several TCM-IO/EOs could exist per ISP) in order to aggregate a
   sufficient number of users.
               _ _
   N users   (   ` )_
    +---+   (        ) `)
    |TCM|->(_ (_ . _)
    +---+     ISP 1   \
               _ _     \              _ _ _           _ _
   M users   (   ` )_   \           (   `  )        (   ` )
    +---+   (        ) `)\         (        ) `)   (       ) `)  +---+
    |TCM|->(_ (_ ._)---- (_ (_ . _) ->(_ (_ . _)->|TCM|->(_ (_ . _)
    +---+     ISP 2     /  Internet       ISP 4   +---+  Game Provider
               _ _     /                   ^
   O users   (   ` )_ /                    |
    +---+   (        ) `)                +---+
    |TCM|->(_ (_ ._)            P users->|TCM|
    +---+     ISP 3                      +---+

                               Figure 4

1.5.  Potential beneficiaries of TCM optimization

   In conclusion, a standard able to compress headers, multiplex a
   number of packets and send them together using a tunnel can benefit
   various stakeholders:

   o  network operators can compress traffic flows sharing a common
      network segment;

   o  ISPs;

   o  developers of VoIP systems can include this option in their
      solutions;

   o  service providers, who can achieve bandwidth savings in their
      supporting infrastructures;

   o  users of Community Networks, who may be able to save significant
      bandwidth amounts, and to reduce the number of packets per second
      in their networks.

   Another fact that has to be taken into account is that the technique
   not only saves bandwidth but also reduces the number of packets per
   second, which sometimes can be a bottleneck for a satellite link or
   even for a network router [Online].

1.6.  Current Standard for VoIP

   The current standard [TCRTP] defines a way to reduce bandwidth and
   pps of RTP traffic, by combining three different standard protocols:

   o  Regarding compression, [ECRTP] is the selected option.

   o  Multiplexing is accomplished using PPP Multiplexing [PPP-MUX].

   o  Tunneling is accomplished by using L2TP (Layer 2 Tunneling
      Protocol [L2TPv3]).
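As an informal sketch of how these three layers nest at the ingress (a conceptual illustration only: the header formats below are placeholders, not the real ECRTP, PPPMux or L2TP encodings):

```python
# Conceptual sketch of the three TCRTP layers as successive wrappers.
# The point is only how the layers nest, not the real wire formats.
import struct

def ecrtp_compress(rtp_packet):
    """Stand-in for ECRTP: replace the 40-byte RTP/UDP/IP header with a
    1-byte context identifier (real ECRTP is far more elaborate)."""
    context_id = b"\x01"
    return context_id + rtp_packet[40:]

def pppmux_multiplex(frames):
    """Stand-in for PPPMux: concatenate sub-frames, each preceded by a
    1-byte length field (real PPPMux uses a richer delimiter format)."""
    return b"".join(struct.pack("!B", len(f)) + f for f in frames)

def l2tp_tunnel(muxed):
    """Stand-in for L2TP over IP: prepend a placeholder tunnel header
    (assumed 20 bytes of IP plus 8 bytes of L2TP)."""
    return b"\x00" * 28 + muxed

# Three native RTP packets: 40-byte headers plus a small payload each.
native = [b"H" * 40 + b"voice-frame-%d" % i for i in range(3)]

# Ingress pipeline: compress each packet, multiplex, then tunnel.
tcm_packet = l2tp_tunnel(pppmux_multiplex([ecrtp_compress(p) for p in native]))
```

Note how the shared tunnel header is paid once per bundle, while the per-packet cost drops from the full 40-byte header to the small compressed header plus a multiplexing delimiter.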
   The three layers are combined as shown in Figure 5:

                     RTP/UDP/IP
                         |
                         |  ----------------------------
                         |
                       ECRTP            compressing layer
                         |
                         |  ----------------------------
                         |
                      PPPMUX            multiplexing layer
                         |
                         |  ----------------------------
                         |
                       L2TP             tunneling layer
                         |
                         |  ----------------------------
                         |
                         IP

                             Figure 5

1.7.  Current Proposal

   In contrast to the current standard [TCRTP], TCM allows other header
   compression protocols in addition to RTP/UDP, since services based
   on small packets also use bare UDP, as shown in Figure 6:

        UDP/IP     RTP/UDP/IP
           \          /
            \        /        ------------------------------
             \      /
   Nothing or ROHC or ECRTP or IPHC     header compressing layer
                 |
                 |            ------------------------------
                 |
   PPPMux or other mux protocols        multiplexing layer
                 |
                / \           ------------------------------
               /   \
              /     \
      GRE or L2TP    \                  tunneling layer
           |         MPLS
           |                  ------------------------------
           IP

                             Figure 6

   Each of the three layers is considered independent of the other two,
   i.e., different combinations of protocols can be implemented
   according to the new proposal:

   o  Regarding compression, a number of options can be considered, as
      different standards are able to compress different headers
      ([cRTP], [ECRTP], [IPHC], [ROHC]).  The one to be used can be
      selected depending on the protocols used by the traffic to be
      compressed and the specific scenario (packet loss percentage,
      delay, etc.).  There is also the possibility of null header
      compression, for cases in which traffic compression is to be
      avoided, taking into account the need to store a context for
      every flow and the problems of context desynchronization in
      certain scenarios.  Although not shown in Figure 6, ESP
      (Encapsulating Security Payload [ESP]) headers can also be
      compressed.

   o  Multiplexing is also accomplished using PPP Multiplexing
      [PPP-MUX].
      Nevertheless, other multiplexing protocols can also be
      considered.

   o  Tunneling is accomplished by using L2TP (Layer 2 Tunneling
      Protocol [L2TPv3]) over IP, GRE (Generic Routing Encapsulation
      [GRE]) over IP, or MPLS (Multiprotocol Label Switching
      Architecture [MPLS]).

   It can be observed that TCRTP [TCRTP] is included as an option in
   TCM, combining [ECRTP], [PPP-MUX] and [L2TPv3], so backwards
   compatibility with TCRTP is provided.  If a TCM optimizer implements
   ECRTP, PPPMux and L2TPv3, compatibility with RFC 4170 MUST be
   guaranteed.

   If a single link is being optimized, a tunnel is unnecessary.  In
   that case, both optimizers MAY perform header compression between
   them.  Multiplexing may still be useful, since it reduces packets
   per second, which is interesting in some environments (e.g.,
   satellite).  Another reason is the desire to reduce energy
   consumption.  Although no tunnel is employed, this can still be
   considered as TCM optimization, so TCM signaling protocols will be
   employed here in order to negotiate the compression and multiplexing
   parameters to be employed.

   Payload compression schemes may also be used, but they are not the
   aim of this document.

2.  Protocol Operation

   This section describes how to combine protocols belonging to three
   layers (compressing, multiplexing, and tunneling), in order to save
   bandwidth for the considered flows.

2.1.  Models of implementation

   TCM can be implemented in different ways.  The most straightforward
   is to implement it in the devices terminating the flows (these
   devices can be, e.g., voice gateways, or proxies grouping a number
   of flows):

      [ending device]---[ending device]
               ^
               |
          TCM over IP

                 Figure 7

   Another way TCM can be implemented is with an external optimizer.
690 This device can be placed at strategic places in the network and can 691 dynamically create and destroy TCM sessions without the participation 692 of the endpoints that generate the flows (Figure 8).

694   [ending device]\                               /[ending device]
695                   \                             /
696   [ending device]----[optimizer]-----[optimizer]----[ending device]
697                   /                             \
698   [ending device]/                               \[ending device]
699          ^                    ^                         ^
700          |                    |                         |
701      Native IP           TCM over IP                Native IP

703                           Figure 8

705 A number of already compressed flows can also be merged by an 706 optimizer in order to increase the number of flows in a 707 tunnel (Figure 9):

709   [ending device]\                               /[ending device]
710                   \                             /
711   [ending device]----[optimizer]-----[optimizer]----[ending device]
712                   /                             \
713   [ending device]/                               \[ending device]
714          ^                    ^                         ^
715          |                    |                         |
716     Compressed           TCM over IP                Compressed

718                           Figure 9

720 2.2. Choice of the compressing protocol 722 There are different protocols that can be used for compressing IP 723 flows: 725 o IPHC (IP Header Compression [IPHC]) permits the compression of 726 UDP/IP and ESP/IP headers. It has a low implementation 727 complexity. On the other hand, the resynchronization of the 728 context can be slow over long-RTT links. It should be used in 729 scenarios with a very low packet loss percentage. 731 o cRTP (compressed RTP [cRTP]) works the same way as IPHC, but is 732 also able to compress RTP headers. The link layer transport is 733 not specified, but typically PPP is used. For cRTP to compress 734 headers, it must be implemented on each PPP link. A significant amount of 735 context is required to successfully run cRTP, and memory and 736 processing requirements are high, especially if multiple hops must 737 implement cRTP to save bandwidth on each of the hops. At higher 738 line rates, cRTP's processor consumption becomes prohibitively 739 expensive. cRTP is not suitable over the long-delay WAN links commonly 740 used when tunneling, as proposed by this document.
To avoid the 741 per-hop expense of cRTP, a simplistic solution is to use cRTP with 742 L2TP to achieve end-to-end cRTP. However, cRTP is only suitable 743 for links with low delay and low loss. Thus, if multiple router 744 hops are involved, cRTP's expectation of low delay and low loss 745 can no longer be met. Furthermore, packets can arrive out of 746 order. 748 o ECRTP (Enhanced Compressed RTP [ECRTP]) is an extension of cRTP 749 [cRTP] that provides tolerance to packet loss and packet 750 reordering between compressor and decompressor. Thus, ECRTP 751 should be used instead of cRTP when possible (e.g., when both TCM 752 optimizers implement ECRTP). 754 o ROHC (RObust Header Compression [ROHC]) is able to compress UDP/ 755 IP, ESP/IP and RTP/UDP/IP headers. It is a robust scheme 756 developed for header compression over links with high bit error 757 rates, such as wireless ones. It incorporates mechanisms for quick 758 resynchronization of the context. It includes an improved 759 encoding scheme for compressing the header fields that change 760 dynamically. Its main drawback is that it requires significantly 761 more processing and memory resources than 762 IPHC or ECRTP. 764 The present document does not determine which of the existing 765 protocols has to be used for the compressing layer. The decision 766 will depend on the scenario and the service being optimized. It will 767 also be determined by the packet loss probability, RTT, jitter, and 768 the availability of memory and processing resources. The framework is 769 also open to other compression schemes that may be 770 developed in the future. 772 2.2.1. Context Synchronization in ECRTP 774 When the compressor receives an RTP packet that has an unpredicted 775 change in the RTP header, the compressor should send a COMPRESSED_UDP 776 packet (described in [ECRTP]) to synchronize the ECRTP decompressor 777 state.
The COMPRESSED_UDP packet updates the RTP context in the 778 decompressor. 780 To ensure delivery of updates of context variables, COMPRESSED_UDP 781 packets should be delivered using the robust operation described in 782 [ECRTP]. 784 Because the "twice" algorithm described in [ECRTP] relies on UDP 785 checksums, the IP stack on the RTP transmitter should transmit UDP 786 checksums. If UDP checksums are not used, the ECRTP compressor 787 should use the cRTP Header checksum described in [ECRTP]. 789 2.2.2. Context Synchronization in ROHC 791 ROHC [ROHC] includes a more complex mechanism in order to maintain 792 context synchronization. It has different operation modes and 793 defines compressor states which change depending on link behavior. 795 2.3. Multiplexing 797 Header compressing algorithms require a layer two protocol that 798 allows identifying different protocols. PPP [PPP] is suited for 799 this, although other multiplexing protocols can also be used for this 800 layer of TCM. 802 When header compression is used inside a tunnel, it reduces the size 803 of the headers of the IP packets carried in the tunnel. However, the 804 tunnel itself has overhead due to its IP header and the tunnel header 805 (the information necessary to identify the tunneled payload). 807 By multiplexing multiple small payloads in a single tunneled packet, 808 reasonable bandwidth efficiency can be achieved, since the tunnel 809 overhead is shared by multiple packets belonging to the flows active 810 between the source and destination of an L2TP tunnel. The packet 811 size of the flows has to be small in order to permit good bandwidth 812 savings. 814 If the source and destination of the tunnel are the same as the 815 source and destination of the compressing protocol sessions, then the 816 source and destination must have multiple active small-packet flows 817 to get any benefit from multiplexing. 
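The way the tunnel overhead is amortized over the multiplexed payloads can be illustrated with a back-of-the-envelope calculation. The sketch below is merely illustrative, not normative: the header sizes (average compressed header, PPPMux separator, and tunnel overhead) are assumed round figures, since the real values depend on the protocols negotiated and on the traffic.

```python
# Illustrative (non-normative) estimate of TCM bandwidth savings.
# All sizes are in bytes and are assumptions for this sketch only.

IP_HDR = 20                    # IPv4 header
UDP_HDR = 8
RTP_HDR = 12
TUNNEL_HDR = IP_HDR + 4 + 1    # assumed outer IP + L2TP session + PPP overhead
COMPR_HDR = 3                  # assumed average compressed-header size (e.g., ECRTP)
MUX_SEP = 2                    # assumed PPPMux per-payload separator

def native_bytes(payload, flows):
    """Bytes on the wire for one packet per flow, with no optimization."""
    return flows * (IP_HDR + UDP_HDR + RTP_HDR + payload)

def tcm_bytes(payload, flows):
    """Bytes for the same packets compressed and multiplexed in one tunnel."""
    return TUNNEL_HDR + flows * (MUX_SEP + COMPR_HDR + payload)

def saving(payload, flows):
    """Percentage of bandwidth saved by TCM under the assumptions above."""
    n, t = native_bytes(payload, flows), tcm_bytes(payload, flows)
    return 100.0 * (n - t) / n

for flows in (1, 5, 20):
    print(f"{flows:2d} flows, 20-byte payload: {saving(20, flows):.1f}% saved")
```

Under these assumed sizes the saving grows quickly with the number of multiplexed flows, since the fixed tunnel overhead is shared among them; this is why the payloads must be small for the scheme to pay off.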
819 Because of this, TCM is mostly useful for applications where many 820 small-packet flows run between a pair of hosts. The number of 821 simultaneous sessions required to reduce the header overhead to the 822 desired level depends on the average payload size, and also on the 823 size of the tunnel header. A smaller tunnel header will result in 824 fewer simultaneous sessions being required to produce adequate 825 bandwidth efficiency. 827 2.4. Tunneling 829 Different tunneling schemes can be used for sending the compressed 830 payloads end to end. 832 2.4.1. Tunneling schemes over IP: L2TP and GRE 834 L2TP tunnels should be used to tunnel the compressed payloads end to 835 end. L2TP includes methods for tunneling messages used in PPP 836 session establishment, such as NCP (Network Control Protocol). This 837 allows [IPCP-HC] to negotiate ECRTP compression/decompression 838 parameters. 840 Other tunneling schemes, such as GRE [GRE], may also be used to 841 implement the tunneling layer of TCM. 843 2.4.2. MPLS tunneling 845 In some scenarios, mainly operators' core networks, MPLS 846 is widely deployed as the data transport method. Adopting 847 MPLS as the tunneling layer in this proposal natively adapts 848 TCM to those transport networks. 850 In the same way as layer 3 tunnels, MPLS paths, identified by MPLS 851 labels and established between Label Edge Routers (LERs), can be used 852 to transport the compressed payloads within an MPLS network. In this 853 case, the multiplexing layer is placed directly over the MPLS layer. Note that 854 layer 3 tunnel headers are then not needed, with a 855 consequent improvement in data efficiency. 857 2.5.
Encapsulation Formats 859 The packet format for a compressed packet is:

861   +------------+-----------------------+
862   |            |                       |
863   |   Compr    |                       |
864   |   Header   |         Data          |
865   |            |                       |
866   |            |                       |
867   +------------+-----------------------+

869                Figure 10

871 The packet format of a multiplexed PPP packet, as defined by 872 [PPP-MUX], is:

874   +-------+---+------+-------+-----+    +---+------+-------+-----+
875   | Mux   |P L|      |       |     |    |P L|      |       |     |
876   | PPP   |F X| Len1 | PPP   |     |    |F X| LenN | PPP   |     |
877   | Prot. |F T|      | Prot. |Info1| ~  |F T|      | Prot. |InfoN|
878   | Field |   |      | Field1|     |    |   |      | FieldN|     |
879   | (1)   |1-2 octets| (0-2) |     |    |1-2 octets| (0-2) |     |
880   +-------+----------+-------+-----+    +----------+-------+-----+

882                Figure 11

884 The combined format used for TCM with a single payload is all of the 885 above packets concatenated. Here is an example with one payload, 886 using L2TP or GRE tunneling:

888   +------+------+-------+----------+-------+--------+----+
889   | IP   |Tunnel| Mux   |P L|      |       |        |    |
890   |header|header| PPP   |F X| Len1 | PPP   | Compr  |    |
891   | (20) |      | Proto |F T|      | Proto | header |Data|
892   |      |      | Field |   |      | Field1|        |    |
893   |      |      | (1)   |1-2 octets| (0-2) |        |    |
894   +------+------+-------+----------+-------+--------+----+
895          |<-------------- IP payload ------------------>|
896                 |<----------- Mux payload ------------->|

898                Figure 12

900 If the tunneling technology is MPLS, then the scheme would be:

902   +------+-------+----------+-------+--------+----+
903   |MPLS  | Mux   |P L|      |       |        |    |
904   |header| PPP   |F X| Len1 | PPP   | Compr  |    |
905   |      | Proto |F T|      | Proto | header |Data|
906   |      | Field |   |      | Field1|        |    |
907   |      | (1)   |1-2 octets| (0-2) |        |    |
908   +------+-------+----------+-------+--------+----+
909          |<------------ MPLS payload -------------->|
910                 |<---------- Mux payload ---------->|

912                Figure 13

914 If the tunnel contains multiplexed traffic, multiple PPPMux 915 payloads are transmitted in one IP packet. 917 3. Contributing Authors 919 Gonzalo Camarillo 920 Ericsson 921 Advanced Signalling Research Lab.
922 FIN-02420 Jorvas 923 Finland 925 Email: Gonzalo.Camarillo@ericsson.com 926 Michael A. Ramalho 927 Cisco Systems, Inc. 928 6310 Watercrest Way, Unit 203 929 Lakewood Ranch, FL 34202 930 USA 932 Phone: +1.732.832.9723 933 Email: mramalho@cisco.com 935 Jose Ruiz Mas 936 University of Zaragoza 937 Dpt. IEC Ada Byron Building 938 50018 Zaragoza 939 Spain 941 Phone: +34 976762158 942 Email: jruiz@unizar.es 944 Diego Lopez Garcia 945 Telefonica I+D 946 Ramon de la Cruz 84 947 28006 Madrid 948 Spain 950 Phone: +34 913129041 951 Email: diego@tid.es 953 David Florez Rodriguez 954 Telefonica I+D 955 Ramon de la Cruz 84 956 28006 Madrid 957 Spain 959 Phone: +34 91312884 960 Email: dflorez@tid.es 962 Manuel Nunez Sanz 963 Telefonica I+D 964 Ramon de la Cruz 84 965 28006 Madrid 966 Spain 968 Phone: +34 913128821 969 Email: mns@tid.es 970 Juan Antonio Castell Lucia 971 Telefonica I+D 972 Ramon de la Cruz 84 973 28006 Madrid 974 Spain 976 Phone: +34 913129157 977 Email: jacl@tid.es 979 Mirko Suznjevic 980 University of Zagreb 981 Faculty of Electrical Engineering and Computing, Unska 3 982 10000 Zagreb 983 Croatia 985 Phone: +385 1 6129 755 986 Email: mirko.suznjevic@fer.hr 988 4. Acknowledgements 990 5. IANA Considerations 992 This memo includes no request to IANA. 994 6. Security Considerations 996 When TCM with an IP tunnel is employed, the most straightforward 997 option for securing a number of non-secured flows sharing a path is 998 the use of IPsec [IPsec]. Instead of adding a security header to the 999 packets of each native flow, and then compressing and multiplexing 1000 them, a single IPsec tunnel can be used in order to secure all the 1001 flows together, thus achieving a higher efficiency. This use of 1002 IPsec protects the packets only within the transport network between 1003 tunnel ingress and egress and therefore does not provide end-to-end 1004 authentication or encryption.
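The efficiency gain of a single IPsec tunnel over per-flow security can be sketched numerically. The figures below are assumptions for illustration only (an ESP tunnel-mode encapsulation with an assumed 16-byte IV and 12-byte ICV, padding ignored, and a hypothetical 5-byte multiplexing separator); real overheads depend on the negotiated algorithms.

```python
# Rough, non-normative comparison of securing N small-packet flows
# individually with IPsec ESP (tunnel mode) versus securing one
# multiplexed TCM bundle with a single IPsec tunnel.
# All sizes in bytes; assumed values for illustration (padding ignored).

IP_HDR = 20                          # IPv4 header
ESP_OVERHEAD = 4 + 4 + 16 + 2 + 12   # SPI, sequence, IV, trailer, ICV (assumed)

def per_flow_esp(flows, pkt):
    """Each flow's packet carries its own outer IP + ESP encapsulation."""
    return flows * (IP_HDR + ESP_OVERHEAD + IP_HDR + pkt)

def single_tunnel(flows, pkt, mux_sep=5):
    """One outer IP + ESP header protects the whole multiplexed bundle."""
    return IP_HDR + ESP_OVERHEAD + flows * (mux_sep + pkt)

for n in (2, 10):
    print(f"{n:2d} flows: {per_flow_esp(n, 60)} B secured individually, "
          f"{single_tunnel(n, 60)} B in one tunnel")
```

Under these assumptions, the per-flow ESP overhead is paid once per bundle instead of once per packet, which is the efficiency argument made above.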
1006 When a number of already secured flows including ESP [ESP] headers 1007 are optimized by means of TCM, and the addition of further security 1008 is not necessary, their ESP/IP headers can still be compressed using 1009 suitable algorithms [RFC5225], in order to improve the efficiency. 1010 This header compression does not change the end-to-end security 1011 model. 1013 The resilience of TCM to denial of service, and the use of TCM to 1014 deny service to other parts of the network infrastructure, is for 1015 future study. 1017 7. References 1019 7.1. Normative References 1021 [ECRTP] Koren, T., Casner, S., Geevarghese, J., Thompson, B., and 1022 P. Ruddy, "Enhanced Compressed RTP (CRTP) for Links with 1023 High Delay, Packet Loss and Reordering", RFC 3545, 2003. 1025 [ESP] Kent, S., "IP Encapsulating Security Payload", RFC 4303, 1026 2005. 1028 [GRE] Farinacci, D., Li, T., Hanks, S., Meyer, D., and P. 1029 Traina, "Generic Routing Encapsulation (GRE)", RFC 2784, 1030 2000. 1032 [H.323] International Telecommunication Union, "Recommendation 1033 H.323: Packet-based multimedia communication systems", 1034 2003. 1036 [IPCP-HC] Engan, M., Casner, S., Bormann, C., and T. Koren, "IP 1037 Header Compression over PPP", RFC 3544, 2003. 1039 [IPHC] Degermark, M., Nordgren, B., and S. Pink, "IP Header 1040 Compression", RFC 2507, 1999. 1042 [IPsec] Kent, S. and K. Seo, "Security Architecture for the 1043 Internet Protocol", RFC 4301, 2005. 1045 [L2TPv3] Lau, J., Townsley, M., and I. Goyret, "Layer Two Tunneling 1046 Protocol - Version 3 (L2TPv3)", RFC 3931, 2005. 1048 [MPLS] Rosen, E., Viswanathan, A., and R. Callon, "Multiprotocol 1049 Label Switching Architecture", RFC 3031, 2001. 1051 [PPP] Simpson, W., "The Point-to-Point Protocol (PPP)", RFC 1052 1661, 1994. 1054 [PPP-MUX] Pazhyannur, R., Ali, I., and C. Fox, "PPP Multiplexing", 1055 RFC 3153, 2001.
1057 [RFC2119] Bradner, S., "Key words for use in RFCs to Indicate 1058 Requirement Levels", BCP 14, RFC 2119, 1997. 1060 [RFC5225] Pelletier, G. and K. Sandlund, "RObust Header Compression 1061 Version 2 (ROHCv2): Profiles for RTP, UDP, IP, ESP and 1062 UDP-Lite", RFC 5225, 2008. 1064 [RFC7252] Shelby, Z., Hartke, K., and C. Bormann, "The Constrained 1065 Application Protocol (CoAP)", RFC 7252, 2014. 1067 [ROHC] Sandlund, K., Pelletier, G., and L-E. Jonsson, "The RObust 1068 Header Compression (ROHC) Framework", RFC 5795, 2010. 1070 [RTP] Schulzrinne, H., Casner, S., Frederick, R., and V. 1071 Jacobson, "RTP: A Transport Protocol for Real-Time 1072 Applications", RFC 3550, 2003. 1074 [SCTP] Stewart, R., Ed., "Stream Control Transmission Protocol", 1075 RFC 4960, 2007. 1077 [SIP] Rosenberg, J., Schulzrinne, H., Camarillo, G., et al., 1078 "SIP: Session Initiation Protocol", RFC 3261, 2002. 1080 [TCRTP] Thompson, B., Koren, T., and D. Wing, "Tunneling 1081 Multiplexed Compressed RTP (TCRTP)", RFC 4170, 2005. 1083 [cRTP] Casner, S. and V. Jacobson, "Compressing IP/UDP/RTP 1084 Headers for Low-Speed Serial Links", RFC 2508, 1999. 1086 7.2. Informative References 1088 [Efficiency] 1089 Bolla, R., Bruschi, R., Davoli, F., and F. Cucchietti, 1090 "Energy Efficiency in the Future Internet: A Survey of 1091 Existing Approaches and Trends in Energy-Aware Fixed 1092 Network Infrastructures", IEEE Communications Surveys and 1093 Tutorials vol.13, no.2, pp.223,244, 2011. 1095 [FPS_opt] Saldana, J., Fernandez-Navajas, J., Ruiz-Mas, J., Aznar, 1096 J., Viruete, E., and L. Casadesus, "First Person Shooters: 1097 Can a Smarter Network Save Bandwidth without Annoying the 1098 Players?", IEEE Communications Magazine vol. 49, no.11, 1099 pp. 190-198, 2011. 1101 [First-person] 1102 Ratti, S., Hariri, B., and S. Shirmohammadi, "A Survey of 1103 First-Person Shooter Gaming Traffic on the Internet", IEEE 1104 Internet Computing vol 14, no. 5, pp. 60-69, 2010.
1106 [Gamers] Oliveira, M. and T. Henderson, "What online gamers really 1107 think of the Internet?", NetGames '03 Proceedings of the 1108 2nd workshop on Network and system support for games, ACM 1109 New York, NY, USA Pages 185-193, 2003. 1111 [I-D.suznjevic-tsvwg-mtd-tcmtf] 1112 Suznjevic, M. and J. Saldana, "Delay Limits and 1113 Multiplexing Policies to be employed with Tunneling 1114 Compressed Multiplexed Traffic Flows", draft-suznjevic- 1115 tsvwg-mtd-tcmtf-03 (work in progress), June 2014. 1117 [Online] Feng, WC., Chang, F., Feng, W., and J. Walpole, "A traffic 1118 characterization of popular on-line games", IEEE/ACM 1119 Transactions on Networking 13.3 Pages 488-500, 2005. 1121 [Power] Chabarek, J., Sommers, J., Barford, P., Estan, C., Tsiang, 1122 D., and S. Wright, "Power Awareness in Network Design and 1123 Routing", INFOCOM 2008. The 27th Conference on Computer 1124 Communications. IEEE pp.457,465, 2008. 1126 [VoIP_opt] 1127 Saldana, J., Fernandez-Navajas, J., Ruiz-Mas, J., Murillo, 1128 J., Viruete, E., and J. Aznar, "Evaluating the Influence 1129 of Multiplexing Schemes and Buffer Implementation on 1130 Perceived VoIP Conversation Quality", Computer Networks 1131 (Elsevier) Volume 6, Issue 11, pp 2920 - 2939. Nov. 30, 1132 2012. 1134 [topology_CNs] 1135 Vega, D., Cerda-Alabern, L., Navarro, L., and R. Meseguer, 1136 "Topology patterns of a community network: Guifi. net.", 1137 Proceedings Wireless and Mobile Computing, Networking and 1138 Communications (WiMob), 2012 IEEE 8th International 1139 Conference on (pp. 612-619) , 2012. 1141 Authors' Addresses 1143 Jose Saldana 1144 University of Zaragoza 1145 Dpt. IEC Ada Byron Building 1146 Zaragoza 50018 1147 Spain 1149 Phone: +34 976 762 698 1150 Email: jsaldana@unizar.es 1151 Dan Wing 1152 Cisco Systems 1153 771 Alder Drive 1154 San Jose, CA 95035 1155 US 1157 Phone: +44 7889 488 335 1158 Email: dwing@cisco.com 1160 Julian Fernandez Navajas 1161 University of Zaragoza 1162 Dpt. 
IEC Ada Byron Building 1163 Zaragoza 50018 1164 Spain 1166 Phone: +34 976 761 963 1167 Email: navajas@unizar.es 1169 Muthu Arul Mozhi Perumal 1170 Ericsson 1171 Ferns Icon 1172 Doddanekundi, Mahadevapura 1173 Bangalore, Karnataka 560037 1174 India 1176 Email: muthu.arul@gmail.com 1178 Fernando Pascual Blanco 1179 Telefonica I+D 1180 Ramon de la Cruz 84 1181 Madrid 28006 1182 Spain 1184 Phone: +34 913128779 1185 Email: fpb@tid.es