Transport Area Working Group                                  J. Saldana
Internet-Draft                                    University of Zaragoza
Obsoletes: 4170 (if approved)                                    D. Wing
Intended status: Best Current Practice                     Cisco Systems
Expires: June 15, 2014                              J. Fernandez Navajas
                                                  University of Zaragoza
                                                           Muthu. Perumal
                                                           Cisco Systems
                                                                      F.
Pascual Blanco
                                                          Telefonica I+D
                                                       December 12, 2013

  Tunneling Compressed Multiplexed Traffic Flows (TCM-TF) Reference Model
                       draft-saldana-tsvwg-tcmtf-06

Abstract

   Tunneling Compressed and Multiplexed Traffic Flows (TCM-TF) is a method
   for improving the bandwidth utilization of network segments that carry
   multiple flows in parallel sharing a common path.  The method combines
   standard protocols for header compression, multiplexing, and tunneling
   over a network path for the purpose of reducing the bandwidth used when
   multiple flows are carried over that path.  The number of packets per
   second can also be reduced.

   This document describes the TCM-TF framework and the different options
   which can be used for each layer (header compression, multiplexing and
   tunneling).  This document obsoletes RFC 4170.

Status of This Memo

   This Internet-Draft is submitted to IETF in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering Task
   Force (IETF).  Note that other groups may also distribute working
   documents as Internet-Drafts.  The list of current Internet-Drafts is
   at http://datatracker.ietf.org/drafts/current/.

   Internet-Drafts are draft documents valid for a maximum of six months
   and may be updated, replaced, or obsoleted by other documents at any
   time.  It is inappropriate to use Internet-Drafts as reference material
   or to cite them other than as "work in progress."

   This Internet-Draft will expire on June 15, 2014.

Copyright Notice

   Copyright (c) 2013 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents carefully,
   as they describe your rights and restrictions with respect to this
   document.
Table of Contents

   1.  Introduction . . . . . . . . . . . . . . . . . . . . . . . .   3
     1.1.  Requirements Language . . . . . . . . . . . . . . . . . .   3
     1.2.  Bandwidth efficiency of flows sending small packets . . .   3
       1.2.1.  Real-time applications using RTP  . . . . . . . . . .   3
       1.2.2.  Real-time applications not using RTP  . . . . . . . .   4
       1.2.3.  Other applications generating small packets . . . . .   4
       1.2.4.  Optimization of small-packet flows  . . . . . . . . .   4
       1.2.5.  Energy consumption considerations . . . . . . . . . .   5
     1.3.  Terminology . . . . . . . . . . . . . . . . . . . . . . .   6
     1.4.  Scenarios of application  . . . . . . . . . . . . . . . .   7
       1.4.1.  Multidomain scenario  . . . . . . . . . . . . . . . .   7
       1.4.2.  Single domain . . . . . . . . . . . . . . . . . . . .   8
       1.4.3.  Private solutions . . . . . . . . . . . . . . . . . .   9
       1.4.4.  Mixed scenarios . . . . . . . . . . . . . . . . . . .  11
     1.5.  Potential beneficiaries of TCM optimization . . . . . . .  11
     1.6.  Current Standard  . . . . . . . . . . . . . . . . . . . .  12
     1.7.  Improved Standard Proposal  . . . . . . . . . . . . . . .  13
   2.  Protocol Operation  . . . . . . . . . . . . . . . . . . . . .  14
     2.1.  Models of implementation  . . . . . . . . . . . . . . . .  14
     2.2.  Choice of the compressing protocol  . . . . . . . . . . .  15
       2.2.1.  Context Synchronization in ECRTP  . . . . . . . . . .  16
       2.2.2.  Context Synchronization in ROHC . . . . . . . . . . .  17
     2.3.  Multiplexing  . . . . . . . . . . . . . . . . . . . . . .  17
     2.4.  Tunneling . . . . . . . . . . . . . . . . . . . . . . . .  18
       2.4.1.  Tunneling schemes over IP: L2TP and GRE . . . . . . .  18
       2.4.2.  MPLS tunneling  . . . . . . . . . . . . . . . . . . .  18
     2.5.  Encapsulation Formats . . . . . . . . . . . . . . . . . .  18
   3.  Contributing Authors  . . . . . . . . . . . . . . . . . . . .  20
   4.  Acknowledgements  . . . . . . . . . . . . . . . . . . . . . .  21
   5.  IANA Considerations . . . . . . . . . . . . . . . . . . . . .  21
   6.  Security Considerations . . . . . . . . . . . . . . . . . . .  21
   7.  References  . . . . . . . . . . . . . . . . . . . . . . . . .  22
     7.1.  Normative References  . . . . . . . . . . . . . . . . . .  22
     7.2.  Informative References  . . . . . . . . . . . . . . . . .  23

   Authors' Addresses  . . . . . . . . . . . . . . . . . . . . . . .  24

1.  Introduction

   This document describes a way to combine existing protocols for header
   compression, multiplexing and tunneling to save bandwidth for
   applications that generate long-term flows of small packets.

1.1.  Requirements Language

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
   document are to be interpreted as described in RFC 2119 [RFC2119].

1.2.  Bandwidth efficiency of flows sending small packets

   The interactivity demands of some real-time services (VoIP,
   videoconferencing, telemedicine, video surveillance, online gaming,
   etc.) require a traffic profile consisting of high rates of small
   packets, which are necessary in order to transmit frequent updates
   between the two endpoints of the communication.  These services also
   demand low network delays.  In addition, some other services also use
   small packets, although they are not delay-sensitive (e.g., instant
   messaging, M2M packets sending collected data in sensor networks using
   wireless or satellite scenarios).  For both the delay-sensitive and
   delay-insensitive applications, their small data payloads incur
   significant overhead.

   When a number of flows based on small packets (small-packet flows)
   share the same path, bandwidth can be saved by multiplexing packets
   belonging to different flows.
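The significance of the per-packet overhead can be quantified with a short calculation (an illustrative sketch, not part of the framework; the payload sizes follow from the codec bit rates, 64 kbit/s for G.711 and 8 kbit/s for G.729, both at 20 ms packetization):

```python
# Illustrative overhead calculation for small-packet VoIP flows.
# Payload sizes follow from the codec bit rates: G.711 at 64 kbit/s
# and G.729 at 8 kbit/s, both with 20 ms packetization.

def payload_bytes(bitrate_kbps: int, packetization_ms: int) -> int:
    """Payload carried in one packet at the given rate and interval."""
    return bitrate_kbps * 1000 // 8 * packetization_ms // 1000

def header_overhead_pct(header_bytes: int, payload: int) -> float:
    """Header overhead relative to the payload, in percent."""
    return 100.0 * header_bytes / payload

IPV4_UDP_RTP = 20 + 8 + 12   # 40 bytes of headers
IPV6_UDP_RTP = 40 + 8 + 12   # 60 bytes of headers

for codec, kbps in (("G.711", 64), ("G.729", 8)):
    p = payload_bytes(kbps, 20)
    print(codec, p, "byte payload:",
          header_overhead_pct(IPV4_UDP_RTP, p), "% (IPv4),",
          header_overhead_pct(IPV6_UDP_RTP, p), "% (IPv6)")
```

The results (25% and 200% overhead for IPv4, 37.5% and 300% for IPv6) match the figures given for the two codecs in Table 1 below.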
   If a transmission queue has not already been formed but multiplexing is
   desired, it is necessary to add a multiplexing delay, which has to be
   kept under a threshold if the service has tight delay requirements.

1.2.1.  Real-time applications using RTP

   The first design of the Internet did not include any mechanism capable
   of guaranteeing an upper bound for delivery delay, since the first
   deployed services (e-mail, file transfer, etc.) were not delay-
   critical.  RTP [RTP] was first defined in 1996 in order to permit the
   delivery of real-time content.  Nowadays, although a variety of
   protocols are used for signaling real-time flows (SIP [SIP], H.323,
   etc.), RTP has become the de facto standard for the delivery of real-
   time content.

   RTP was designed to work over UDP datagrams.  This implies that an IPv4
   packet carrying real-time information has to include 40 bytes of
   headers: 20 for the IPv4 header, 8 for UDP, and 12 for RTP.  This
   overhead is significant, taking into account that many real-time
   services send very small payloads.  It becomes even more significant
   with IPv6 packets, as the basic IPv6 header is twice the size of the
   IPv4 header.  Table 1 illustrates the overhead problem of VoIP for two
   different codecs.

   +---------------------------------+---------------------------------+
   |              IPv4               |              IPv6               |
   +---------------------------------+---------------------------------+
   |  IPv4+UDP+RTP: 40 bytes header  |  IPv6+UDP+RTP: 60 bytes header  |
   |  G.711 at 20 ms packetization:  |  G.711 at 20 ms packetization:  |
   |       25% header overhead       |      37.5% header overhead      |
   |  G.729 at 20 ms packetization:  |  G.729 at 20 ms packetization:  |
   |      200% header overhead       |      300% header overhead       |
   +---------------------------------+---------------------------------+

             Table 1: Efficiency of different voice codecs

1.2.2.  Real-time applications not using RTP

   At the same time, there are many real-time applications that do not use
   RTP.  Some of them send UDP (but not RTP) packets, e.g., First Person
   Shooter (FPS) online games [First-person], for which latency is
   critical: the quickness of the players' movements can decide whether
   they win or lose a fight.

1.2.3.  Other applications generating small packets

   Other applications without delay constraints are also becoming popular
   (e.g., instant messaging, M2M packets sending collected data in sensor
   networks using wireless or satellite scenarios).  The number of
   wireless M2M (machine-to-machine) connections has been growing steadily
   for years, and a share of them is used for delay-intolerant
   applications, e.g., industrial SCADA (Supervisory Control And Data
   Acquisition), power plant monitoring, smart grids, and asset tracking.

1.2.4.  Optimization of small-packet flows

   At the times or places where network capacity becomes scarce,
   allocating more bandwidth is a possible solution, but it implies a
   recurring cost.  In contrast, deploying optimization techniques between
   a pair of network nodes (reducing bandwidth and packets per second)
   when and where required is a one-time investment.

   Thus, in scenarios including a bottleneck with a single Layer-3 hop,
   standard header compression algorithms [cRTP], [ECRTP], [IPHC], [ROHC]
   can be used to reduce the overhead of each flow, at the cost of
   additional processing.

   However, if header compression is to be deployed on a network path
   including several Layer-3 hops, tunneling can be used in order to allow
   the header-compressed packets to travel end-to-end, thus avoiding the
   need to compress and decompress at each intermediate node.  In these
   cases, compressed packets belonging to different flows can be
   multiplexed together, in order to share the tunnel overhead.
   In this case, a small multiplexing delay will be necessary as a
   counterpart, in order to gather a number of packets to be sent
   together.  This delay has to be kept under a threshold in order to meet
   the delay requirements.

   A demultiplexer is necessary at the end of the common path, so as to
   rebuild the packets as they were originally sent, making multiplexing a
   transparent process for the endpoints of the flow.

   If only one stream is tunneled and compressed, little bandwidth saving
   will be obtained.  In contrast, multiplexing helps to amortize the
   overhead of the tunnel header over many payloads.  The savings obtained
   grow with the number of flows optimized together [VoIP_opt], [FPS_opt].

1.2.5.  Energy consumption considerations

   As an additional benefit, the reduction of the information sent, and
   especially the reduction of the number of packets per second to be
   managed by the intermediate routers, can be translated into a reduction
   of the overall energy consumption of network equipment.  According to
   [Efficiency], internal packet processing engines and switching fabric
   account for 60% and 18%, respectively, of the power consumption of
   high-end routers.  Thus, reducing the number of packets to be managed
   and switched will reduce the overall energy consumption.  The
   measurements on commercial routers reported in [Power] corroborate
   this: in a study using different packet sizes, the tests with big
   packets showed reduced energy consumption, since a certain amount of
   energy is associated with header processing tasks, and not only with
   the sending of the packet itself.
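The combined effect on bandwidth and on packets per second can be sketched with a back-of-the-envelope model. The sizes below are illustrative assumptions only (2-byte compressed headers, roughly what ROHC or ECRTP can achieve in steady state, and about 40 bytes of shared tunnel and multiplexing overhead); they are not values defined by this framework:

```python
# Back-of-the-envelope model of TCM-TF savings when N small packets are
# multiplexed into one tunnel packet.  All sizes are illustrative
# assumptions, not values mandated by any specification.

NATIVE_HDR = 40        # IPv4 + UDP + RTP headers of a native packet
COMP_HDR = 2           # assumed steady-state compressed header size
MUX_SEP = 2            # assumed per-packet multiplexing separator
TUNNEL_HDR = 40        # assumed shared IP + tunnel overhead per TCM packet

def bandwidth_ratio(n_packets: int, payload: int) -> float:
    """Bytes sent with TCM-TF divided by bytes sent natively."""
    native = n_packets * (NATIVE_HDR + payload)
    tcm = TUNNEL_HDR + n_packets * (COMP_HDR + MUX_SEP + payload)
    return tcm / native

def pps_ratio(n_packets: int) -> float:
    """Packets per second with TCM-TF divided by native pps."""
    return 1.0 / n_packets

# e.g. 10 multiplexed G.729 packets (20-byte payloads): bandwidth drops
# to roughly 47% of native, and pps drops to 10%.
```

Note that with a single packet (n_packets = 1) the ratio exceeds 1.0 under these assumptions, consistent with the observation above that tunneling one compressed stream alone yields little or no saving.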
   All in all, a tradeoff appears: on the one hand, energy consumption
   increases at the two endpoints due to header compression processing; on
   the other hand, energy consumption is reduced in the intermediate nodes
   because of the reduction in the number of packets transmitted.  This
   tradeoff should be explored more deeply.

1.3.  Terminology

   This document uses a number of terms to refer to the roles played by
   the entities using TCM-TF.

   o  native packet

   A packet sent by an application, belonging to a flow that can be
   optimized by means of TCM-TF.

   o  native flow

   A flow of native packets.  It can be considered a "small-packet flow"
   when the vast majority of the generated packets present a low payload-
   to-header ratio.

   o  TCM packet

   A packet including a number of multiplexed and header-compressed native
   packets, plus a tunneling header.

   o  TCM flow

   A flow of TCM packets, including a number of optimized native flows.

   o  TCM optimizer

   The host where TCM optimization is deployed.  It corresponds to both
   the ingress and the egress of the tunnel transporting the compressed
   and multiplexed packets.

   If the optimizer compresses headers, multiplexes packets and creates
   the tunnel, it behaves as a "TCM-ingress optimizer", or "TCM-IO".  It
   takes native packets or flows and "optimizes" them.

   If it extracts packets from the tunnel, demultiplexes packets and
   decompresses headers, it behaves as a "TCM-egress optimizer", or
   "TCM-EO".  The TCM-egress optimizer takes a TCM flow and "rebuilds" the
   native packets as they were originally sent.

   o  TCM-TF session

   The relationship between a pair of TCM optimizers exchanging TCM
   packets.

   o  policy manager

   A network entity which makes the decisions about TCM-TF parameters:
   multiplexing period to be used, flows to be optimized together,
   depending on their IP addresses, ports, etc.
   It is connected to a number of TCM-TF optimizers, and orchestrates the
   optimization that takes place between them.

1.4.  Scenarios of application

   Different scenarios of application can be considered for the tunneling,
   compressing and multiplexing solution.  They can be classified
   according to the domains involved in the optimization:

1.4.1.  Multidomain scenario

   In this scenario, the TCM-TF tunnel goes all the way from one network
   edge (the place where users are attached to the ISP) to another, and
   therefore it can cross several domains.  As shown in Figure 1, the
   optimization is performed before the packets leave the domain of an
   ISP; the traffic crosses the Internet inside the tunnel, and the
   packets are rebuilt in the second domain.

          _ _ _ _                                      _ _ _ _
        (        ` )          _ _ _                  (        ` )_ _
       (  +------+  )`)     (      ` )_             (  +------+    `)
    -->(_ -|TCM-IO|--- _) ---> (       ) `) ----->(_-|TCM-EO|--_)-->
       (   +------+  _)      (_ (_ . _) _)          (  +------+ _)
        (_ _ _ _)             (_ _ ( _) _)

          ISP 1                Internet                 ISP 2

       <------------------TCM-TF-------------------->

                                Figure 1

   Note that this is not from border to border (where ISPs connect to the
   Internet, which could be covered with specialized links) but from one
   ISP to another (e.g., managing all the traffic from individual users
   arriving at a game provider, regardless of the users' location).

   Some examples of this could be:

   o  An ISP could place a TCM optimizer in its aggregation network, in
      order to tunnel all the packets of a service, sending them to the
      application provider, who would rebuild the packets before
      forwarding them to the application server.  This would result in
      savings for both actors.

   o  A service provider (e.g., an online gaming company) could be allowed
      to place a TCM optimizer in the aggregation network of an ISP, being
      able to optimize all the flows of a game or service.
      Another TCM optimizer would rebuild these packets once they arrive
      at the network of the provider.

1.4.2.  Single domain

   TCM-TF is only activated inside an ISP, from edge to border, inside the
   network operator.  The geographical scope and network depth of TCM-TF
   activation could be on demand, according to traffic conditions.

   If we consider the residential users of a real-time interactive
   application (e.g., VoIP, an online game generating small packets) in a
   town or a district, a TCM optimizing module can be included in network
   devices, in order to group packets with the same destination.  As shown
   in Figure 2, depending on the number of users of the application, the
   packets could be grouped at different levels in DSL fixed network
   scenarios, at gateway level in LTE mobile network scenarios, or even in
   other ISP edge routers.  TCM-TF may also be applied to fiber
   residential accesses, and in 2G/3G mobile networks.  This would reduce
   bandwidth requirements in the provider aggregation network.

            +------+
   N users -|TCM-IO|\
            +------+ \
                      \         _ _ _ _
            +------+   \-->   (        ` )_   +------+     ( ` )_
   M users -|TCM-IO|------>  (            ) `) --|TCM-EO|--> (    ) `)
            +------+   /->(_ (_ . _) _)       +------+   (_ (_ . _) _)
                      /
            +------+ /          ISP              Internet
   P users -|TCM-IO|/
            +------+

            <------------TCM-TF-------------->

                                Figure 2

   At the same time, the ISP could implement TCM-TF capabilities within
   its own MPLS network in order to optimize internal network resources:
   optimizing modules could be embedded in the Label Edge Routers of the
   network.  In that scenario MPLS would be the "tunneling" layer, with
   the tunnels being the paths defined by the MPLS labels, avoiding the
   use of other tunneling protocols.
   Finally, some networks use cRTP [cRTP] in order to obtain bandwidth
   savings on the access link, but as a counterpart it consumes
   considerable CPU resources on the aggregation router.  In these cases,
   by means of TCM, bandwidth could be saved not only on the access link
   but also across the ISP network, without the CPU impact on the
   aggregation router.

1.4.3.  Private solutions

   End users can also optimize traffic end-to-end from network borders.
   TCM-TF is used to connect private networks geographically apart (e.g.,
   corporation headquarters and subsidiaries), without the ISP being aware
   of (or having to manage) those flows, as shown in Figure 3, where two
   different locations are connected through a tunnel traversing the
   Internet or another network.

      _ _ _ _              _ _ _ _               _ _ _ _
    (        ` )_ +------+ (       ` )_ +------+ (       ` )_
   (           ) `) --|TCM-IO|-->(      ) `) --|TCM-EO|-->(    ) `)
    (_ (_ . _) _) +------+  (_ (_ . _) _) +------+  (_ (_ . _)_)

     Location 1          ISP/Internet            Location 2

            <-----------TCM-TF---------->

                                Figure 3

   Some examples of these scenarios:

   o  The case of an enterprise with a number of distributed central
      offices, in which an appliance could be placed next to the access
      router, being able to optimize traffic flows with a shared origin
      and destination.  Thus, a number of remote desktop sessions to the
      same server could be optimized, or a number of VoIP calls between
      two offices could require less bandwidth and fewer packets per
      second.  In some cases the tunnel is already included for security
      reasons, so the additional overhead of TCM-TF is lower.

   o  An Internet cafe, which is likely to have many users of the same
      application (e.g., VoIP, a game) sharing the same access link.
      Internet cafes are very popular in countries with relatively low
      access speeds in households, where home computer penetration is
      usually low as well.
      In many of these countries, bandwidth can become a serious
      limitation for this kind of business, so TCM-TF savings may become
      interesting for their viability.

   o  Community networks [topology_CNs] (typically deployed in rural areas
      or in developing countries), in which a number of people in the same
      geographical place share their connections in a cooperative way, and
      a number of wireless hops are required in order to reach a router
      connected to the Internet.

   o  Satellite communication links that often manage the bandwidth by
      limiting the transmission rate, measured in packets per second
      (pps), to and from the satellite.  Applications like VoIP that
      generate a large number of small packets can easily fill the maximum
      number of pps slots, limiting the throughput across such links.  As
      an example, a G.729a voice call generates 50 pps at 20 ms
      packetization time.  If the satellite transmission allows 1,500 pps,
      the number of simultaneous voice calls is limited to 30.  This
      results in poor utilization of the satellite link's bandwidth and
      also places a low bound on the number of voice calls that can
      utilize the link simultaneously.  TCM optimization, which
      multiplexes small packets into one packet for transmission, would
      improve the efficiency.

   o  In an M2M/SCADA (Supervisory Control And Data Acquisition) context,
      TCM optimization can be applied when a satellite link is used for
      collecting the data of a number of sensors.  M2M terminals are
      normally equipped with sensing devices which can interface with
      proximity sensor networks through wireless connections.  The
      terminal can send the collected sensing data using a satellite link
      connecting to a satellite gateway, which in turn will forward the
      M2M/SCADA data to the processing and control center through the
      Internet.
      The size of a typical M2M application transaction depends on the
      specific service and may vary from a minimum of 20 bytes (e.g.,
      tracking and metering in private security) to about 1,000 bytes
      (e.g., video surveillance).  In this context, TCM-TF concepts can
      also be applied to allow a more efficient use of the available
      satellite link capacity, matching the requirements demanded by some
      M2M services.  If the case of large sensor deployments is
      considered, where proximity sensor networks transmit data through
      different satellite terminals, the use of compression algorithms
      already available in current satellite systems to reduce the
      overhead introduced by the UDP and IPv6 protocols is certainly
      desirable.  In addition to this, the tunneling and multiplexing
      functions available from TCM-TF allow extending compression
      functionality throughout the rest of the network, to eventually
      reach the processing and control centers.

   o  Desktop or application sharing, where the traffic from the server to
      the client typically consists of the delta of screen updates.  Also,
      the standard for remote desktop sharing emerging for WebRTC in the
      RTCWEB Working Group is {something}/SCTP/UDP (Stream Control
      Transmission Protocol [SCTP]).  In this scenario, SCTP/UDP could be
      used in other cases: chatting, file sharing and applications related
      to WebRTC peers.  There could be hundreds of clients at a site
      talking to a server located at a datacenter over a WAN.
      Compressing, multiplexing and tunneling this traffic could save WAN
      bandwidth and potentially improve latency.

1.4.4.  Mixed scenarios

   Different combinations of the previous scenarios can be considered.
   Agreements between different companies can be established in order to
   save bandwidth and to reduce packets per second.
   As an example, Figure 4 shows a game provider that wants to
   TCM-optimize its connections by establishing associations between
   TCM-IO/EOs placed in the game server and several TCM-IO/EOs placed in
   the networks of different ISPs (agreements between the game provider
   and each ISP would be necessary).  In every ISP, the TCM-IO/EO would be
   placed at the most suitable point (actually, several TCM-IO/EOs could
   exist per ISP) in order to aggregate a sufficient number of users.

                _ _
   N users    (    ` )_
    +---+    (        ) `)
    |TCM|-->(_ (_ . _)
    +---+      ISP 1    \
                _ _      \     _ _           _ _              _ _
   M users    (    ` )_   \  (    ` )      (    ` )         (    ` )
    +---+    (        ) `) \(       ) `)  (       ) `) +---+ (     ) `)
    |TCM|-->(_ (_ ._)---- (_ (_ . _) ->(_ (_ . _)->|TCM|->(_ (_ . _)
    +---+      ISP 2     /    Internet       ISP 4     +---+    Game
                _ _     /                      ^              Provider
   O users    (    ` )_/                       |
    +---+    (        ) `)                   +---+
    |TCM|-->(_ (_ ._)             P users -->|TCM|
    +---+      ISP 3                         +---+

                                Figure 4

1.5.  Potential beneficiaries of TCM optimization

   In conclusion, a standard able to compress headers, multiplex a number
   of packets, and send them together using a tunnel can benefit various
   stakeholders:

   o  network operators, who can compress traffic flows sharing a common
      network segment;

   o  ISPs;

   o  developers of VoIP systems, who can include this option in their
      solutions;

   o  service providers, who can achieve bandwidth savings in their
      supporting infrastructures;

   o  users of Community Networks, who may be able to save significant
      amounts of bandwidth and to reduce the number of packets per second
      in their networks.

   Another fact that has to be taken into account is that the technique
   not only saves bandwidth but also reduces the number of packets per
   second, which can sometimes be a bottleneck for a satellite link or
   even for a network router.

1.6.  Current Standard

   The current standard [TCRTP] defines a way to reduce the bandwidth and
   pps of RTP traffic by combining three different standard protocols:

   o  Regarding compression, [ECRTP] is the selected option.

   o  Multiplexing is accomplished using PPP Multiplexing [PPP-MUX].

   o  Tunneling is accomplished by using L2TP (Layer 2 Tunneling Protocol
      [L2TPv3]).

   The three layers are combined as shown in Figure 5:

      RTP/UDP/IP
          |
          |       ----------------------------
          |
        ECRTP                    compressing layer
          |
          |       ----------------------------
          |
       PPPMUX                    multiplexing layer
          |
          |       ----------------------------
          |
        L2TP                     tunneling layer
          |
          |       ----------------------------
          |
          IP

                                Figure 5

1.7.  Improved Standard Proposal

   In contrast to the current standard [TCRTP], TCM-TF allows other header
   compression protocols in addition to RTP/UDP, since services based on
   small packets also use bare UDP, as shown in Figure 6:

      UDP/IP    RTP/UDP/IP
          \        /
           \      /      ------------------------------
            \    /
    Nothing or ROHC or ECRTP or IPHC       header compressing layer
              |
              |          ------------------------------
              |
    PPPMUX or other mux protocols          multiplexing layer
              |
             / \         ------------------------------
            /   \
           /     \
     GRE or L2TP  \                        tunneling layer
          |      MPLS
          |              ------------------------------
          IP

                                Figure 6

   Each of the three layers is considered independent of the other two,
   i.e., different combinations of protocols can be implemented according
   to the new proposal:

   o  Regarding compression, a number of options can be considered, as
      different standards are able to compress different headers ([cRTP],
      [ECRTP], [IPHC], [ROHC]).  The one to be used could be selected
      depending on the traffic to compress and the concrete scenario
      (packet loss percentage, delay, etc.).  There is also the
      possibility of using null header compression, when traffic
      compression is to be avoided, taking into account the need to store
      a context for every flow and the problems of context
      desynchronization in certain scenarios.  Although not shown in
      Figure 6, ESP (Encapsulating Security Payload [ESP]) headers can
      also be compressed.

   o  Multiplexing is again accomplished using PPP Multiplexing
      [PPP-MUX].  Nevertheless, other multiplexing protocols can also be
      considered.

   o  Tunneling is accomplished by using L2TP (Layer 2 Tunneling Protocol
      [L2TPv3]) over IP, GRE (Generic Routing Encapsulation [GRE]) over
      IP, or MPLS (Multiprotocol Label Switching Architecture [MPLS]).

   It can be observed that TCRTP [TCRTP] is included as an option in
   TCM-TF, combining [ECRTP], [PPP-MUX] and [L2TPv3].

   If a single link is being optimized, a tunnel is unnecessary.  In that
   case, the two optimizers can perform header compression between them.
   Multiplexing may still be useful, since it reduces the number of
   packets per second, which is interesting in some environments (e.g.,
   satellite).  Another reason is the desire to reduce energy consumption.
   Although no tunnel is employed, this can still be considered TCM-TF
   optimization, so TCM-TF signaling protocols can be employed in order to
   negotiate the compression and multiplexing parameters.

   Payload compression schemes could also be used, but they are not the
   aim of this document.

2.  Protocol Operation

   This section describes how to combine protocols belonging to three
   layers (compressing, multiplexing, and tunneling), in order to save
   bandwidth for the considered flows.

2.1.  Models of implementation

   TCM-TF can be implemented in different ways.  The most
The most 650 straightforward is to implement it in the devices terminating the 651 flows (these devices can be e.g., voice gateways, or proxies grouping 652 a number of flows): 654 [ending device]---[ending device] 655 ^ 656 | 657 TCM-TF over IP 659 Figure 7 661 Another way TCM-TF can be implemented is with an external optimizer. 662 This device could be placed at strategic places in the network and 663 could dynamically create and destroy TCM-TF sessions without the 664 participation of the endpoints that generate the flows (Figure 8). 666 [ending device]\ /[ending device] 667 [ending device]----[optimizer]-----[optimizer]------[ending device] 668 [ending device]/ \[ending device] 669 ^ ^ ^ 670 | | | 671 Native IP TCM-TF over IP Native IP 673 Figure 8 675 A number of already compressed flows can also be merged in a tunnel 676 using an optimizer in order to increase the number of flows in a 677 tunnel (Figure 9): 679 [ending device]\ /[ending device] 680 [ending device]----[optimizer]------[optimizer]-----[ending device] 681 [ending device]/ \[ending device] 682 ^ ^ ^ 683 | | | 684 Compressed TCM-TF over IP Compressed 686 Figure 9 688 2.2. Choice of the compressing protocol 690 There are different protocols that can be used for compressing IP 691 flows: 693 o IPHC (IP Header Compression [IPHC]) permits the compression of UDP 694 /IP and ESP/IP headers. It has a low implementation complexity. 695 On the other hand, the resynchronization of the context can be 696 slow over long RTT links. It should be used in scenarios 697 presenting very low packet loss percentage. 699 o cRTP (compressed RTP [cRTP]) works the same way as IPHC, but is 700 also able to compress RTP headers. The link layer transport is 701 not specified, but typically PPP is used. For cRTP to compress 702 headers, it must be implemented on each PPP link. 
A lot of 703 context is required to successfully run cRTP, and memory and 704 processing requirements are high, especially if multiple hops must 705 implement cRTP to save bandwidth on each of the hops. At higher 706 line rates, cRTP's processor consumption becomes prohibitively 707 expensive. cRTP is not suitable over long-delay WAN links commonly 708 used when tunneling, as proposed by this document. To avoid the 709 per-hop expense of cRTP, a simplistic solution is to use cRTP with 710 L2TP to achieve end-to-end cRTP. However, cRTP is only suitable 711 for links with low delay and low loss. Thus, if multiple router 712 hops are involved, cRTP's expectation of low delay and low loss 713 can no longer be met. Furthermore, packets can arrive out of 714 order. 716 o ECRTP (Enhanced Compressed RTP [ECRTP]) is an extension of cRTP 717 [cRTP] that provides tolerance to packet loss and packet 718 reordering between compressor and decompressor. Thus, ECRTP 719 should be used instead of cRTP when possible (e.g., when both TCM 720 optimizers implement ECRTP). 722 o ROHC (RObust Header Compression [ROHC]) is able to compress UDP/ 723 IP, ESP/IP and RTP/UDP/IP headers. It is a robust scheme 724 developed for header compression over links with high bit error 725 rates, such as wireless ones. It incorporates mechanisms for quick 726 resynchronization of the context. It includes an improved 727 encoding scheme for compressing the header fields that change 728 dynamically. Its main drawback is that it requires significantly 729 more processing and memory resources than those necessary for 730 IPHC or ECRTP. 732 This standard does not determine which of the existing protocols is 733 to be used for the compressing layer. The decision will depend on 734 the scenario, and will mainly be determined by the packet loss 735 probability, RTT, and the availability of memory and processing 736 resources.
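As an illustration only (this is not part of the specification), the selection criteria above could be sketched as a simple heuristic; the function name and all thresholds are hypothetical examples, not normative values:

```python
# Illustrative sketch of a possible selection heuristic for the
# compressing layer, following the criteria discussed above.
# All thresholds are hypothetical examples, not normative values.

def choose_compressor(loss_rate: float, rtt_ms: float,
                      constrained_resources: bool) -> str:
    """Return a header compression scheme for a TCM-TF session."""
    if not constrained_resources and loss_rate > 0.01:
        # ROHC is the most robust option, at a higher CPU/memory cost.
        return "ROHC"
    if loss_rate > 0.001 or rtt_ms > 100:
        # ECRTP tolerates loss and reordering between the optimizers.
        return "ECRTP"
    # IPHC is the simplest choice for near-lossless, short-RTT links.
    return "IPHC"
```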
The framework is also able to accommodate other 737 compression schemes that may be developed in the future. 739 2.2.1. Context Synchronization in ECRTP 741 When the compressor receives an RTP packet that has an unpredicted 742 change in the RTP header, the compressor should send a COMPRESSED_UDP 743 packet (described in [ECRTP]) to synchronize the ECRTP decompressor 744 state. The COMPRESSED_UDP packet updates the RTP context in the 745 decompressor. 747 To ensure delivery of updates of context variables, COMPRESSED_UDP 748 packets should be delivered using the robust operation described in 749 [ECRTP]. 751 Because the "twice" algorithm described in [ECRTP] relies on UDP 752 checksums, the IP stack on the RTP transmitter should transmit UDP 753 checksums. If UDP checksums are not used, the ECRTP compressor 754 should use the cRTP Header checksum described in [ECRTP]. 756 2.2.2. Context Synchronization in ROHC 758 ROHC [ROHC] includes a more complex mechanism in order to maintain 759 context synchronization. It has different operation modes and 760 defines compressor states which change depending on link behavior. 762 2.3. Multiplexing 764 Header compression algorithms require a layer two protocol that 765 allows different protocols to be identified. PPP [PPP] is suited for 766 this, although other multiplexing protocols can also be used for this 767 layer of TCM-TF. 769 When header compression is used inside a tunnel, it reduces the size 770 of the headers of the IP packets carried in the tunnel. However, the 771 tunnel itself has overhead due to its IP header and the tunnel header 772 (the information necessary to identify the tunneled payload). 774 By multiplexing multiple small payloads in a single tunneled packet, 775 reasonable bandwidth efficiency can be achieved, since the tunnel 776 overhead is shared by multiple packets belonging to the flows active 777 between the source and destination of an L2TP tunnel.
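This shared-overhead effect can be illustrated with a short calculation. The header sizes below are example assumptions (IPv4 headers, a typical ECRTP compressed header, an assumed minimal L2TP tunnel header and PPPMux prefix), not values defined by this document:

```python
# Illustrative sketch of the bandwidth saving obtained by compressing
# and multiplexing n_flows small packets into one tunneled packet.
# All header sizes are example assumptions, not normative figures.

NATIVE_HDR = 40      # IPv4 (20) + UDP (8) + RTP (12), per native packet
COMPR_HDR = 3        # typical ECRTP compressed header size, assumed
MUX_OVERHEAD = 2     # assumed PPPMux prefix per multiplexed payload
TUNNEL_HDR = 20 + 4  # outer IPv4 header plus an assumed L2TP header

def saving(n_flows: int, payload: int) -> float:
    """Fraction of bandwidth saved versus sending native packets."""
    native = n_flows * (NATIVE_HDR + payload)
    muxed = TUNNEL_HDR + n_flows * (MUX_OVERHEAD + COMPR_HDR + payload)
    return 1 - muxed / native
```

With these assumptions, ten multiplexed flows with 20-byte payloads save more than half of the bandwidth, while 500-byte payloads reduce the saving to a few percent: the saving grows with the number of flows and shrinks with the payload size.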
The packet 778 size of the flows has to be small in order to permit good bandwidth 779 savings. 781 If the source and destination of the tunnel are the same as the 782 source and destination of the compressing protocol sessions, then the 783 source and destination must have multiple active small-packet flows 784 to get any benefit from multiplexing. 786 Because of this, TCM-TF is mostly useful for applications where many 787 small-packet flows run between a pair of hosts. The number of 788 simultaneous sessions required to reduce the header overhead to the 789 desired level depends on the average payload size, and also on the 790 size of the tunnel header. A smaller tunnel header will result in 791 fewer simultaneous sessions being required to produce adequate 792 bandwidth efficiencies. 794 2.4. Tunneling 796 Different tunneling schemes can be used for sending the compressed 797 payloads end to end. 799 2.4.1. Tunneling schemes over IP: L2TP and GRE 801 L2TP tunnels should be used to tunnel the compressed payloads end to 802 end. L2TP includes methods for tunneling messages used in PPP 803 session establishment, such as NCP (Network Control Protocol). This 804 allows [IPCP-HC] to negotiate ECRTP compression/decompression 805 parameters. 807 Other tunneling schemes, such as GRE [GRE], may also be used to 808 implement the tunneling layer of TCM-TF. 810 2.4.2. MPLS tunneling 812 In some scenarios, mainly in operators' core networks, 813 MPLS is widely deployed as a data transport method. The adoption of 814 MPLS as the tunneling layer in this proposal intends to natively adapt 815 TCM-TF to those transport networks. 817 In the same way as layer 3 tunnels, MPLS paths, identified by MPLS 818 labels and established between Label Edge Routers (LERs), can be used 819 to transport the compressed payloads within an MPLS network. In this 820 case, the multiplexing layer must be placed directly over the MPLS layer.
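To illustrate the difference between the tunneling options, their per-packet overheads can be compared under example header sizes; the L2TPv3 session header size is an assumption, and real deployments may differ:

```python
# Hypothetical per-packet tunnel overhead, in bytes, for the three
# tunneling options discussed above. The L2TP figure assumes a
# minimal L2TPv3 session header; real deployments may differ.

TUNNEL_OVERHEAD = {
    "L2TPv3/IP": 20 + 4,  # outer IPv4 header + assumed session header
    "GRE/IP":    20 + 4,  # outer IPv4 header + minimal GRE header
    "MPLS":      4,       # one label stack entry, no layer 3 header
}

def mpls_saving_per_packet(ip_scheme: str = "GRE/IP") -> int:
    """Bytes saved per multiplexed packet by using MPLS instead of an
    IP-based tunnel, under the assumed header sizes."""
    return TUNNEL_OVERHEAD[ip_scheme] - TUNNEL_OVERHEAD["MPLS"]
```

Under these assumptions, the MPLS option saves the 20 bytes of the outer IP header on every multiplexed packet.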
Note that, 821 in this case, layer 3 tunnel headers do not have to be used, with the 822 consequent data efficiency improvement. 824 2.5. Encapsulation Formats 826 The packet format for a packet compressed is: 828 +------------+-----------------------+ 829 | | | 830 | Compr | | 831 | Header | Data | 832 | | | 833 | | | 834 +------------+-----------------------+ 836 Figure 10 838 The packet format of a multiplexed PPP packet as defined by [PPP-MUX] 839 is: 841 +-------+---+------+-------+-----+ +---+------+-------+-----+ 842 | Mux |P L| | | | |P L| | | | 843 | PPP |F X|Len1 | PPP | | |F X|LenN | PPP | | 844 | Prot. |F T| | Prot. |Info1| ~ |F T| | Prot. |InfoN| 845 | Field | | Field1| | | |FieldN | | 846 | (1) |1-2 octets| (0-2) | | |1-2 octets| (0-2) | | 847 +-------+----------+-------+-----+ +----------+-------+-----+ 849 Figure 11 851 The combined format used for TCM-TF with a single payload is all of 852 the above packets concatenated. Here is an example with one payload, 853 using L2TP or GRE tunneling: 855 +------+------+-------+----------+-------+--------+----+ 856 | IP |Tunnel| Mux |P L| | | | | 857 |header|header| PPP |F X|Len1 | PPP | Compr | | 858 | (20) | | Proto |F T| | Proto | header |Data| 859 | | | Field | | Field1| | | 860 | | | (1) |1-2 octets| (0-2) | | | 861 +------+------+-------+----------+-------+--------+----+ 862 |<------------- IP payload -------------------->| 863 |<-------- Mux payload --------->| 865 Figure 12 867 If the tunneling technology is MPLS, then the scheme would be: 869 +------+-------+----------+-------+--------+----+ 870 |MPLS | Mux |P L| | | | | 871 |header| PPP |F X|Len1 | PPP | Compr | | 872 | | Proto |F T| | Proto | header |Data| 873 | | Field | | Field1| | | 874 | | (1) |1-2 octets| (0-2) | | | 875 -+------+-------+----------+-------+--------+----+ 876 |<---------- MPLS payload -------------->| 877 |<-------- Mux payload --------->| 879 Figure 13 881 If the tunnel contains multiplexed traffic, multiple "PPPMux 882 
payload"s are transmitted in one IP packet. 884 3. Contributing Authors 886 Gonzalo Camarillo 887 Ericsson 888 Advanced Signalling Research Lab. 889 FIN-02420 Jorvas 890 Finland 892 Email: Gonzalo.Camarillo@ericsson.com 894 Michael A. Ramalho 895 Cisco Systems, Inc. 896 8000 Hawkins Road 897 Sarasota, FL 34241-9300 898 US 900 Phone: +1.732.832.9723 901 Email: mramalho@cisco.com 903 Jose Ruiz Mas 904 University of Zaragoza 905 Dpt. IEC Ada Byron Building 906 50018 Zaragoza 907 Spain 909 Phone: +34 976762158 910 Email: jruiz@unizar.es 912 Diego Lopez Garcia 913 Telefonica I+D 914 Ramon de la cruz 84 915 28006 Madrid 916 Spain 918 Phone: +34 913129041 919 Email: diego@tid.es 921 David Florez Rodriguez 922 Telefonica I+D 923 Ramon de la cruz 84 924 28006 Madrid 925 Spain 927 Phone: +34 91312884 928 Email: dflorez@tid.es 929 Manuel Nunez Sanz 930 Telefonica I+D 931 Ramon de la cruz 84 932 28006 Madrid 933 Spain 935 Phone: +34 913128821 936 Email: mns@tid.es 938 Juan Antonio Castell Lucia 939 Telefonica I+D 940 Ramon de la cruz 84 941 28006 Madrid 942 Spain 944 Phone: +34 913129157 945 Email: jacl@tid.es 947 Mirko Suznjevic 948 University of Zagreb 949 Faculty of Electrical Engineering and Computing, Unska 3 950 10000 Zagreb 951 Croatia 953 Phone: +385 1 6129 755 954 Email: mirko.suznjevic@fer.hr 956 4. Acknowledgements 958 5. IANA Considerations 960 This memo includes no request to IANA. 962 6. Security Considerations 964 The most straightforward option for securing a number of non-secured 965 flows sharing a path is by the use of IPsec [IPsec], when TCM using 966 an IP tunnel is employed. Instead of adding a security header to the 967 packets of each native flow, and then compressing and multiplexing 968 them, a single IPsec tunnel can be used in order to secure all the 969 flows together, thus achieving a higher efficiency. 
This use of 970 IPsec protects the packets only within the transport network between 971 tunnel ingress and egress and therefore does not provide end-to-end 972 authentication or encryption. 974 When a number of already secured flows including ESP [ESP] headers 975 are optimized by means of TCM, and the addition of further security 976 is not necessary, their ESP/IP headers can still be compressed using 977 suitable algorithms [RFC5225], in order to improve the efficiency. 978 This header compression does not change the end-to-end security 979 model. 981 The resilience of TCM-TF to denial of service, and the use of TCM-TF 982 to deny service to other parts of the network infrastructure, is for 983 future study. 985 7. References 987 7.1. Normative References 989 [ECRTP] Koren, T., Casner, S., Geevarghese, J., Thompson, B., and 990 P. Ruddy, "Enhanced Compressed RTP (CRTP) for Links with 991 High Delay, Packet Loss and Reordering", RFC 3545, 2003. 993 [ESP] Kent, S., "IP Encapsulating Security Payload", RFC 4303, 994 2005. 996 [GRE] Farinacci, D., Li, T., Hanks, S., Meyer, D., and P. 997 Traina, "Generic Routing Encapsulation (GRE)", RFC 2784, 998 2000. 1000 [IPCP-HC] Engan, M., Casner, S., Bormann, C., and T. Koren, "IP 1001 Header Compression over PPP", RFC 3544, 2003. 1003 [IPHC] Degermark, M., Nordgren, B., and S. Pink, "IP Header 1004 Compression", RFC 2580, 1999. 1006 [IPsec] Kent, S. and K. Seo, "Security Architecture for the 1007 Internet Protocol", RFC 4301, December 2005. 1009 [L2TPv3] Lau, J., Townsley, M., and I. Goyret, "Layer Two Tunneling 1010 Protocol - Version 3 (L2TPv3)", RFC 3931, 2005. 1012 [MPLS] Rosen, E., Viswanathan, A., and R. Callon, "Multiprotocol 1013 Label Switching Architecture", RFC 3031, January 2001. 1015 [PPP-MUX] Pazhyannur, R., Ali, I., and C. Fox, "PPP Multiplexing", 1016 RFC 3153, 2001. 1018 [PPP] Simpson, W., "The Point-to-Point Protocol (PPP)", RFC 1019 1661, 1994. 
1021 [RFC2119] Bradner, S., "Key words for use in RFCs to Indicate 1022 Requirement Levels", BCP 14, RFC 2119, March 1997. 1024 [RFC5225] Pelletier, G. and K. Sandlund, "RObust Header Compression 1025 Version 2 (ROHCv2): Profiles for RTP, UDP, IP, ESP and 1026 UDP-Lite", RFC 5225, April 2008. 1028 [ROHC] Sandlund, K., Pelletier, G., and L-E. Jonsson, "The RObust 1029 Header Compression (ROHC) Framework", RFC 5795, 2010. 1031 [RTP] Schulzrinne, H., Casner, S., Frederick, R., and V. 1032 Jacobson, "RTP: A Transport Protocol for Real-Time 1033 Applications", RFC 3550, 2003. 1035 [SCTP] Stewart, R., Ed., "Stream Control Transmission Protocol", 1036 RFC 4960, 2007. 1038 [SIP] Rosenberg, J., Schulzrinne, H., Camarillo, G., et 1039 al., "SIP: Session Initiation Protocol", RFC 3261, 2002. 1041 [TCRTP] Thompson, B., Koren, T., and D. Wing, "Tunneling 1042 Multiplexed Compressed RTP (TCRTP)", RFC 4170, 2005. 1044 [cRTP] Casner, S. and V. Jacobson, "Compressing IP/UDP/RTP 1045 Headers for Low-Speed Serial Links", RFC 2508, 1999. 1047 7.2. Informative References 1049 [Efficiency] 1050 Bolla, R., Bruschi, R., Davoli, F., and F. Cucchietti, 1051 "Energy Efficiency in the Future Internet: A Survey of 1052 Existing Approaches and Trends in Energy-Aware Fixed 1053 Network Infrastructures", IEEE Communications Surveys and 1054 Tutorials vol. 13, no. 2, pp. 223-244, 2011. 1056 [FPS_opt] Saldana, J., Fernandez-Navajas, J., Ruiz-Mas, J., Aznar, 1057 J., Viruete, E., and L. Casadesus, "First Person Shooters: 1058 Can a Smarter Network Save Bandwidth without Annoying the 1059 Players?", IEEE Communications Magazine vol. 49, no. 11, 1060 pp. 190-198, 2011. 1062 [First-person] 1063 Ratti, S., Hariri, B., and S. Shirmohammadi, "A Survey of 1064 First-Person Shooter Gaming Traffic on the Internet", IEEE 1065 Internet Computing vol. 14, no. 5, pp. 60-69, 2010. 1067 [Power] Chabarek, J., Sommers, J., Barford, P., Estan, C., Tsiang, 1068 D., and S.
Wright, "Power Awareness in Network Design and 1069 Routing", INFOCOM 2008. The 27th Conference on Computer 1070 Communications. IEEE pp. 457-465, 2008. 1072 [VoIP_opt] 1073 Saldana, J., Fernandez-Navajas, J., Ruiz-Mas, J., Murillo, 1074 J., Viruete, E., and J. Aznar, "Evaluating the Influence 1075 of Multiplexing Schemes and Buffer Implementation on 1076 Perceived VoIP Conversation Quality", Computer Networks 1077 (Elsevier) Volume 6, Issue 11, pp. 2920-2939, Nov. 30, 1078 2012. 1080 [topology_CNs] 1081 Vega, D., Cerda-Alabern, L., Navarro, L., and R. Meseguer, 1082 "Topology patterns of a community network: Guifi.net", 1083 Proceedings Wireless and Mobile Computing, Networking and 1084 Communications (WiMob), 2012 IEEE 8th International 1085 Conference on, pp. 612-619, 2012. 1087 Authors' Addresses 1089 Jose Saldana 1090 University of Zaragoza 1091 Dpt. IEC Ada Byron Building 1092 Zaragoza 50018 1093 Spain 1095 Phone: +34 976 762 698 1096 Email: jsaldana@unizar.es 1098 Dan Wing 1099 Cisco Systems 1100 771 Alder Drive 1101 San Jose, CA 95035 1102 US 1104 Phone: +44 7889 488 335 1105 Email: dwing@cisco.com 1107 Julian Fernandez Navajas 1108 University of Zaragoza 1109 Dpt. IEC Ada Byron Building 1110 Zaragoza 50018 1111 Spain 1113 Phone: +34 976 761 963 1114 Email: navajas@unizar.es 1115 Muthu Arul Mozhi Perumal 1116 Cisco Systems 1117 Cessna Business Park 1118 Sarjapur-Marathahalli Outer Ring Road 1119 Bangalore, Karnataka 560103 1120 India 1122 Phone: +91 9449288768 1123 Email: mperumal@cisco.com 1125 Fernando Pascual Blanco 1126 Telefonica I+D 1127 Ramon de la Cruz 84 1128 Madrid 28006 1129 Spain 1131 Phone: +34 913128779 1132 Email: fpb@tid.es