2 Transport Area Working Group J. Saldana 3 Internet-Draft University of Zaragoza 4 Intended status: Best Current Practice D. Wing 5 Expires: June 16, 2016 Cisco Systems 6 J.
Fernandez Navajas 7 University of Zaragoza 8 M. Perumal 9 Ericsson 10 F. Pascual Blanco 11 Telefonica I+D 12 December 14, 2015 14 Tunneling Compressing and Multiplexing (TCM) Traffic Flows. Reference 15 Model 16 draft-saldana-tsvwg-tcmtf-10 18 Abstract 20 Tunneling, Compressing and Multiplexing (TCM) is a method for 21 improving the bandwidth utilization of network segments that carry 22 multiple small-packet flows in parallel sharing a common path. The 23 method combines different protocols for header compression, 24 multiplexing, and tunneling over a network path for the purpose of 25 reducing the bandwidth consumption. The amount of packets per second 26 can be reduced at the same time. 28 This document describes the TCM framework and the different options 29 which can be used for each of the three layers (header compression, 30 multiplexing and tunneling). 32 Status of This Memo 34 This Internet-Draft is submitted to IETF in full conformance with the 35 provisions of BCP 78 and BCP 79. 37 Internet-Drafts are working documents of the Internet Engineering 38 Task Force (IETF). Note that other groups may also distribute 39 working documents as Internet-Drafts. The list of current Internet- 40 Drafts is at http://datatracker.ietf.org/drafts/current/. 42 Internet-Drafts are draft documents valid for a maximum of six months 43 and may be updated, replaced, or obsoleted by other documents at any 44 time. It is inappropriate to use Internet-Drafts as reference 45 material or to cite them other than as "work in progress." 47 This Internet-Draft will expire on June 16, 2016. 49 Copyright Notice 51 Copyright (c) 2015 IETF Trust and the persons identified as the 52 document authors. All rights reserved. 54 This document is subject to BCP 78 and the IETF Trust's Legal 55 Provisions Relating to IETF Documents 56 (http://trustee.ietf.org/license-info) in effect on the date of 57 publication of this document. 
Please review these documents 58 carefully, as they describe your rights and restrictions with respect 59 to this document. 61 Table of Contents 63 1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . 3 64 1.1. Requirements Language . . . . . . . . . . . . . . . . . . 3 65 1.2. Bandwidth efficiency of flows sending small packets . . . 3 66 1.2.1. Real-time applications using RTP . . . . . . . . . . 3 67 1.2.2. Real-time applications not using RTP . . . . . . . . 4 68 1.2.3. Other applications generating small packets . . . . . 4 69 1.2.4. Optimization of small-packet flows . . . . . . . . . 5 70 1.2.5. Energy consumption considerations . . . . . . . . . . 6 71 1.3. Terminology . . . . . . . . . . . . . . . . . . . . . . . 6 72 1.4. Scenarios of application . . . . . . . . . . . . . . . . 7 73 1.4.1. Multidomain scenario . . . . . . . . . . . . . . . . 7 74 1.4.2. Single domain . . . . . . . . . . . . . . . . . . . . 8 75 1.4.3. Private solutions . . . . . . . . . . . . . . . . . . 9 76 1.4.4. Mixed scenarios . . . . . . . . . . . . . . . . . . . 11 77 1.5. Potential beneficiaries of TCM optimization . . . . . . . 12 78 1.6. Current Standard for VoIP . . . . . . . . . . . . . . . . 13 79 1.7. Current Proposal . . . . . . . . . . . . . . . . . . . . 13 80 2. Protocol Operation . . . . . . . . . . . . . . . . . . . . . 15 81 2.1. Models of implementation . . . . . . . . . . . . . . . . 15 82 2.2. Choice of the compressing protocol . . . . . . . . . . . 16 83 2.2.1. Context Synchronization in ECRTP . . . . . . . . . . 17 84 2.2.2. Context Synchronization in ROHC . . . . . . . . . . . 18 85 2.3. Multiplexing . . . . . . . . . . . . . . . . . . . . . . 18 86 2.4. Tunneling . . . . . . . . . . . . . . . . . . . . . . . . 19 87 2.4.1. Tunneling schemes over IP: L2TP and GRE . . . . . . . 19 88 2.4.2. MPLS tunneling . . . . . . . . . . . . . . . . . . . 19 89 2.5. Encapsulation Formats . . . . . . . . . . . . . . . . . . 19 90 3. Contributing Authors . . . . . . 
. . . . . . . . . . . . . . 20 91 4. Acknowledgements . . . . . . . . . . . . . . . . . . . . . . 22 92 5. IANA Considerations . . . . . . . . . . . . . . . . . . . . . 22 93 6. Security Considerations . . . . . . . . . . . . . . . . . . . 22 94 7. References . . . . . . . . . . . . . . . . . . . . . . . . . 23 95 7.1. Normative References . . . . . . . . . . . . . . . . . . 23 96 7.2. Informative References . . . . . . . . . . . . . . . . . 25 98 Authors' Addresses . . . . . . . . . . . . . . . . . . . . . . . 26 100 1. Introduction 102 This document describes a way to combine different protocols for 103 header compression, multiplexing and tunneling to save bandwidth for 104 applications that generate long-term flows of small packets. 106 1.1. Requirements Language 108 The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", 109 "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this 110 document are to be interpreted as described in RFC 2119 [RFC2119]. 112 1.2. Bandwidth efficiency of flows sending small packets 114 The interactivity demands of some real-time services (VoIP, 115 videoconferencing, telemedicine, video surveillance, online gaming, 116 etc.) make the applications generate a traffic profile consisting of 117 high rates of small packets, which are necessary in order to transmit 118 frequent updates between the two ends of the communication. 119 These services also demand low network delays. In addition, some 120 other services also use small packets, although they are not delay- 121 sensitive (e.g., instant messaging, M2M packets sending collected 122 data in sensor networks or IoT scenarios using wireless or satellite 123 links). For both the delay-sensitive and delay-insensitive 124 applications, their small data payloads incur significant overhead.
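As an illustrative check of this overhead, the header-to-payload ratio can be computed from the fixed header sizes (IPv4: 20 bytes, IPv6: 40 bytes, UDP: 8 bytes, RTP: 12 bytes) and typical codec payloads at 20 ms packetization. This is a minimal sketch, not part of the framework; the 160-byte (G.711) and 20-byte (G.729) payloads are the usual figures for those codecs:

```python
# Illustrative calculation of the header overhead of small-packet VoIP flows.
HEADERS = {"IPv4+UDP+RTP": 20 + 8 + 12, "IPv6+UDP+RTP": 40 + 8 + 12}
PAYLOADS = {"G.711": 160, "G.729": 20}  # payload bytes per 20 ms packet

def overhead_pct(header_bytes: int, payload_bytes: int) -> float:
    """Header size expressed as a percentage of the payload size."""
    return 100.0 * header_bytes / payload_bytes

for stack, hdr in HEADERS.items():
    for codec, pay in PAYLOADS.items():
        print(f"{stack}, {codec}: {overhead_pct(hdr, pay):.1f}% header overhead")
```

The printed figures match those of Table 1 in Section 1.2.1 (25% and 200% for IPv4, 37.5% and 300% for IPv6).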
126 When a number of flows based on small packets (small-packet flows) 127 share the same path, their traffic can be optimized by multiplexing 128 packets belonging to different flows. As a consequence, bandwidth 129 can be saved and the amount of packets per second can be reduced. If 130 a number of small packets are waiting in the buffer, they can be 131 multiplexed and transmitted together. In addition, if a transmission 132 queue has not already been formed but multiplexing is desired, it is 133 necessary to add a delay in order to gather a number of packets. 134 This delay has to be maintained under some threshold if the service 135 presents tight delay requirements. It is believed that this 136 delay and jitter can be of the same order of magnitude as, or less than, 137 other common sources of delay and jitter currently present on the 138 Internet, without causing harm to flows that employ congestion control 139 based on delay. 141 1.2.1. Real-time applications using RTP 143 The first design of the Internet did not include any mechanism 144 capable of guaranteeing an upper bound for delivery delay, taking 145 into account that the first deployed services were e-mail, file 146 transfer, etc., in which delay is not critical. RTP [RTP] was first 147 defined in 1996 in order to permit the delivery of real-time 148 content. Nowadays, although a variety of protocols are used for 149 signaling real-time flows (SIP [SIP], H.323 [H.323], etc.), RTP has 150 become the standard par excellence for the delivery of real-time 151 content. 153 RTP was designed to work over UDP datagrams. This implies that an 154 IPv4 packet carrying real-time information has to include (at least) 155 40 bytes of headers: 20 for the IPv4 header, 8 for UDP, and 12 for 156 RTP. This overhead is significant, taking into account that many 157 real-time services send very small payloads.
It becomes even more 158 significant with IPv6 packets, as the basic IPv6 header is twice the 159 size of the IPv4 header. Table 1 illustrates the overhead problem of 160 VoIP for two different codecs. 162 +---------------------------------+---------------------------------+ 163 | IPv4 | IPv6 | 164 +---------------------------------+---------------------------------+ 165 | IPv4+UDP+RTP: 40 bytes header | IPv6+UDP+RTP: 60 bytes header | 166 | G.711 at 20 ms packetization: | G.711 at 20 ms packetization: | 167 | 25% header overhead | 37.5% header overhead | 168 | G.729 at 20 ms packetization: | G.729 at 20 ms packetization: | 169 | 200% header overhead | 300% header overhead | 170 +---------------------------------+---------------------------------+ 172 Table 1: Efficiency of different voice codecs 174 1.2.2. Real-time applications not using RTP 176 At the same time, there are many real-time applications that do not 177 use RTP. Some of them send UDP (but not RTP) packets, e.g., First 178 Person Shooter (FPS) online games [First-person], for which latency 179 is critical. The quickness of the players' movements is 180 important, and can decide the result of the game. In addition to 181 latency, these applications may be sensitive to jitter and, to a 182 lesser extent, to packet loss, since they implement mechanisms for 183 packet loss concealment [Gamers]. 185 1.2.3. Other applications generating small packets 187 Other applications without delay constraints are also becoming 188 popular. Some examples are instant messaging, M2M packets sending 189 collected data in sensor networks using wireless or satellite links, 190 and IoT traffic generated in Constrained RESTful Environments, where UDP 191 packets are employed [RFC7252].
The number of wireless M2M (machine- 192 to-machine) connections has been growing steadily for several years, and a 193 share of these is being used for delay-intolerant applications, e.g., 194 industrial SCADA (Supervisory Control And Data Acquisition), power 195 plant monitoring, smart grids, asset tracking. 197 1.2.4. Optimization of small-packet flows 199 At times or in places where network capacity becomes scarce, 200 allocating more bandwidth is a possible solution, but it implies a 201 recurring cost. However, including optimization techniques between a 202 pair of network nodes (able to reduce bandwidth and packets per 203 second) when/where required is a one-time investment. 205 In scenarios including a bottleneck with a single Layer-3 hop, header 206 compression standard algorithms [cRTP], [ECRTP], [IPHC], [ROHC] can 207 be used for reducing the overhead of each flow, at the cost of 208 additional processing. 210 However, if header compression is to be deployed in a network path 211 including several Layer-3 hops, tunneling can be used at the same 212 time in order to allow the header-compressed packets to travel end- 213 to-end, thus avoiding the need to compress and decompress at each 214 intermediate node. In these cases, compressed packets belonging to 215 different flows can be multiplexed together, in order to share the 216 tunnel overhead. In this case, a small multiplexing delay will be 217 required as a counterpart, in order to join a number of packets to be 218 sent together. This delay has to be maintained under a threshold in 219 order to meet the delay requirements. 221 A series of recommendations about delay limits has been summarized 222 in [I-D.suznjevic-dispatch-delay-limits], in order to keep this 223 additional delay and jitter within the same order of magnitude as other 224 sources of jitter currently present on the Internet.
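The trade-off described above can be quantified with a simple model (an illustrative sketch; the 2-byte compressed header and the 25-byte tunnel overhead are assumptions for the example, not values mandated by TCM):

```python
def bytes_on_wire(n_flows: int, payload: int, native_hdr: int = 40,
                  compressed_hdr: int = 2, tunnel_hdr: int = 25):
    """Bytes needed to carry one packet from each of n_flows:
    natively vs. compressed and multiplexed into a single TCM packet."""
    native = n_flows * (native_hdr + payload)
    tcm = tunnel_hdr + n_flows * (compressed_hdr + payload)
    return native, tcm

# One G.729 packet (20-byte payload) from each of 10 flows:
native, tcm = bytes_on_wire(10, 20)
print(f"native: {native} bytes in 10 packets, TCM: {tcm} bytes in 1 packet")
print(f"bandwidth saving: {100 * (1 - tcm / native):.0f}%")
```

Besides the bandwidth saving, the packets per second are divided by the number of multiplexed flows, which is the figure that matters on pps-limited links; with a single flow the model also shows the small savings mentioned below.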
226 A demultiplexer and a decompressor are necessary at the end of the 227 common path, so as to rebuild the packets as they were originally 228 sent, making traffic optimization a transparent process for the 229 origin and destination of the flow. 231 If only one stream is tunneled and compressed, then little bandwidth 232 savings will be obtained. In contrast, multiplexing is helpful to 233 amortize the overhead of the tunnel header over many payloads. The 234 obtained savings grow with the number of flows optimized together 235 [VoIP_opt], [FPS_opt]. 237 All in all, the combined use of header compression and multiplexing 238 provides a trade-off: bandwidth can be traded for processing 239 capacity (mainly required for header compression and decompression) 240 and a small additional delay (required for gathering a number of 241 packets to be multiplexed together). 243 The processing delay can be kept very low. It has been shown that 244 the additional delay can be on the order of 250 microseconds for 245 commodity hardware [Simplemux_CIT]. 247 1.2.5. Energy consumption considerations 249 As an additional benefit, the reduction of the amount of information sent, and 250 especially the reduction of the amount of packets per second to be 251 managed by the intermediate routers, can be translated into a 252 reduction of the overall energy consumption of network equipment. 253 According to [Efficiency], internal packet processing engines and 254 switching fabric require 60% and 18% of the power consumption of 255 high-end routers, respectively. Thus, reducing the number of packets 256 to be managed and switched will reduce the overall energy 257 consumption.
The measurements performed in [Power] on commercial 258 routers corroborate this: a study using different packet sizes was 259 presented, and the tests with large packets showed a reduction of the 260 energy consumption, since a certain amount of energy is associated with 261 header processing tasks, and not only with the sending of the packet 262 itself. 264 All in all, another trade-off appears: on the one hand, energy 265 consumption is increased at the two endpoints due to header 266 compression processing; on the other hand, energy consumption is 267 reduced in the intermediate nodes because of the reduction of the 268 number of packets transmitted. This trade-off should be explored more 269 deeply. 271 1.3. Terminology 273 This document uses a number of terms to refer to the roles played by 274 the entities using TCM. 276 o native packet 278 A packet sent by an application, belonging to a flow that can be 279 optimized by means of TCM. 281 o native flow 283 A flow of native packets. It can be considered a "small-packet flow" 284 when the vast majority of the generated packets present a low 285 payload-to-header ratio. 287 o TCM packet 289 A packet including a number of multiplexed and header-compressed 290 native ones, and also a tunneling header. 292 o TCM flow 294 A flow of TCM packets, each one including a number of multiplexed 295 header-compressed packets. 297 o TCM optimizer 299 The host where TCM optimization is deployed. It corresponds to both 300 the ingress and the egress of the tunnel transporting the compressed 301 and multiplexed packets. 303 If the optimizer compresses headers, multiplexes packets and creates 304 the tunnel, it behaves as a "TCM-Ingress Optimizer", or "TCM-IO". It 305 takes native packets or flows and "optimizes" them. 307 If it extracts packets from the tunnel, demultiplexes packets and 308 decompresses headers, it behaves as a "TCM-Egress Optimizer", or 309 "TCM-EO".
The TCM-Egress Optimizer takes a TCM flow and "rebuilds" 310 the native packets as they were originally sent. 312 o TCM session 314 The relationship between a pair of TCM optimizers exchanging TCM 315 packets. 317 o policy manager 319 A network entity which makes the decisions about TCM optimization 320 parameters (e.g., multiplexing period to be used, flows to be 321 optimized together), depending on their IP addresses, ports, etc. It 322 is connected with a number of TCM optimizers, and orchestrates the 323 optimization that takes place between them. 325 1.4. Scenarios of application 327 Different scenarios of application can be considered for the 328 Tunneling, Compressing and Multiplexing solution. They can be 329 classified according to the domains involved in the optimization: 331 1.4.1. Multidomain scenario 333 In this scenario, the TCM tunnel goes all the way from one network 334 edge (the place where users are attached to the ISP) to another, and 335 therefore it can cross several domains. As shown in Figure 1, the 336 optimization is performed before the packets leave the domain of an 337 ISP; the traffic crosses the Internet in tunneled form, and the packets are 338 rebuilt in the second domain. 340 _ _ _ _ _ _ 341 ( ` ) _ _ _ ( ` )_ _ 342 ( +------+ )`) ( ` )_ ( +------+ `) 343 -->(_ -|TCM-IO|--- _) ---> ( ) `) ----->(_-|TCM-EO|--_)--> 344 ( +------+ _) (_ (_ . _) _) ( +------+ _) 345 (_ _ _ _) (_ _ ( _) _) 347 ISP 1 Internet ISP 2 349 <--------------------TCM---------------------> 351 Figure 1 353 Note that this is not from border to border (where ISPs connect to 354 the Internet, which could be covered with specialized links) but from 355 one ISP to another (e.g., managing all traffic from individual users 356 arriving at a Game Provider, regardless of the users' location).
358 Some examples of this could be: 360 o An ISP may place a TCM optimizer in its aggregation network, in 361 order to tunnel all the packets belonging to a certain service, 362 sending them to the application provider, who will rebuild the 363 packets before forwarding them to the application server. This 364 will result in savings for both actors. 366 o A service provider (e.g., an online gaming company) can be allowed 367 to place a TCM optimizer in the aggregation network of an ISP, 368 being able to optimize all the flows of a service (e.g., VoIP, an 369 online game). Another TCM optimizer will rebuild these packets 370 once they arrive at the network of the provider. 372 1.4.2. Single domain 374 In this case, TCM is only activated inside an ISP, from the edge to 375 the border, inside the network operator. The geographical scope and 376 network depth of TCM activation could be on demand, according to 377 traffic conditions. 379 If we consider the residential users of real-time interactive 380 applications (e.g., VoIP, online games generating small packets) in a 381 town or a district, a TCM optimizing module can be included in some 382 network devices, in order to group packets with the same destination. 383 As shown in Figure 2, depending on the number of users of the 384 application, the packets can be grouped at different levels in DSL 385 fixed network scenarios, at gateway level in LTE mobile network 386 scenarios or even in other ISP edge routers. TCM may also be applied 387 for fiber residential accesses, and in mobile networks. This would 388 reduce bandwidth requirements in the provider aggregation network. 390 +------+ 391 N users -|TCM-IO|\ 392 +------+ \ 393 \ _ _ _ _ 394 +------+ \--> ( ` )_ +------+ ( ` )_ 395 M users -|TCM-IO|------> ( ) `) --|TCM-EO|--> ( ) `) 396 +------+ / ->(_ _ (_ . _) _) +------+ (_ _ (_ .
_) _) 397 / 398 +------+ / ISP Internet 399 P users -|TCM-IO|/ 400 +------+ 402 <--------------TCM---------------> 404 Figure 2 406 At the same time, the ISP may implement TCM capabilities within its 407 own MPLS network in order to optimize internal network resources: 408 optimizing modules can be embedded in the Label Edge Routers of the 409 network. In that scenario MPLS will act as the "tunneling" layer, 410 the tunnels being the paths defined by the MPLS labels, avoiding 411 the use of additional tunneling protocols. 413 Finally, some networks use cRTP [cRTP] in order to obtain bandwidth 414 savings on the access link, but, as a counterpart, considerable CPU 415 resources are required on the aggregation router. In these cases, by 416 means of TCM, instead of only saving bandwidth on the access link, it 417 could also be saved across the ISP network, thus avoiding the impact 418 on the CPU of the aggregation router. 420 1.4.3. Private solutions 422 End users can also optimize traffic end-to-end from network borders. 423 TCM is used to connect private networks which are geographically apart (e.g., 424 corporation headquarters and subsidiaries), without the ISP being 425 aware of (or having to manage) those flows, as shown in Figure 3, where 426 two different locations are connected through a tunnel traversing the 427 Internet or another network. 429 _ _ _ _ _ _ 430 ( ` )_ +------+ ( ` )_ +------+ ( ` )_ 431 ( ) `) --|TCM-IO|-->( ) `) --|TCM-EO|-->( ) `) 432 (_ (_ . _) _) +------+ (_ (_ . _) _) +------+ (_ (_ . _)_) 434 Location 1 ISP/Internet Location 2 436 <-------------TCM-----------> 438 Figure 3 440 Some examples of these scenarios are: 442 o The case of an enterprise with a number of distributed central 443 offices, in which an appliance can be placed next to the access 444 router, being able to optimize traffic flows with a shared origin 445 and destination.
Thus, a number of remote desktop sessions to the 446 same server can be optimized, or a number of VoIP calls between 447 two offices will also require less bandwidth and fewer packets per 448 second. In many cases, a tunnel is already included for security 449 reasons, so the additional overhead of TCM is lower. 451 o An Internet cafe, which is likely to have many users of the 452 same application (e.g., VoIP, online games) sharing the same 453 access link. Internet cafes are very popular in countries with 454 relatively low access speeds in households, where home computer 455 penetration is usually low as well. In many of these countries, 456 bandwidth can become a serious limitation for this kind of 457 business, so TCM savings may become interesting for its 458 viability. 460 o Alternative Networks [topology_CNs], 461 [I-D.irtf-gaia-alternative-network-deployments] (typically 462 deployed in rural areas and/or in developing countries), in which 463 a number of people in the same geographical place share their 464 connections in a cooperative way. The structure of these networks 465 is not designed from the beginning, but they grow organically as 466 new users join. As a result, a number of wireless hops are 467 usually required in order to reach a router connected to the 468 Internet. 470 o Satellite communication links that often manage the bandwidth by 471 limiting the transmission rate, measured in packets per second 472 (pps), to and from the satellite. Applications like VoIP that 473 generate a large number of small packets can easily fill the 474 maximum number of pps slots, limiting the throughput across such 475 links. As an example, a G.729a voice call generates 50 pps at 20 476 ms packetization time. If the satellite transmission allows 1,500 477 pps, the number of simultaneous voice calls is limited to 30.
478 This results in poor utilization of the satellite link's bandwidth 479 and places a low bound on the number of voice calls that 480 can utilize the link simultaneously. TCM optimization, combining small 481 packets into one packet for transmission, will improve the 482 efficiency. 484 o In an M2M/SCADA (Supervisory Control And Data Acquisition) context, 485 TCM optimization can be applied when a satellite link is used for 486 collecting the data of a number of sensors. M2M terminals are 487 normally equipped with sensing devices which can interface to 488 proximity sensor networks through wireless connections. The 489 terminal can send the collected sensing data using a satellite 490 link connecting to a satellite gateway, which in turn will forward 491 the M2M/SCADA data to the processing and control center 492 through the Internet. The size of a typical M2M application 493 transaction depends on the specific service and it may vary from a 494 minimum of 20 bytes (e.g., tracking and metering in private 495 security) to about 1,000 bytes (e.g., video-surveillance). In 496 this context, TCM concepts can also be applied to allow a more 497 efficient use of the available satellite link capacity, matching 498 the requirements demanded by some M2M services. If the case of 499 large sensor deployments is considered, where proximity sensor 500 networks transmit data through different satellite terminals, the 501 use of compression algorithms already available in current 502 satellite systems to reduce the overhead introduced by UDP and 503 IPv6 protocols is certainly desirable. In addition to this, 504 the tunneling and multiplexing functions available from TCM allow 505 extending compression functionality throughout the rest of the 506 network, to eventually reach the processing and control centers. 508 o Desktop or application sharing where the traffic from the server 509 to the client typically consists of the delta of screen updates.
510 Also, the standard for remote desktop sharing emerging for WebRTC 511 in the RTCWEB Working Group is: {something}/SCTP/UDP (Stream 512 Control Transmission Protocol [SCTP]). In this scenario, SCTP/UDP 513 can be used in other cases: chatting, file sharing and 514 applications related to WebRTC peers. There can be hundreds of 515 clients at a site talking to a server located at a datacenter over 516 a WAN. Compressing, multiplexing and tunneling this traffic could 517 save WAN bandwidth and potentially improve latency. 519 1.4.4. Mixed scenarios 521 Different combinations of the previous scenarios can be considered. 522 Agreements between different companies can be established in order to 523 save bandwidth and to reduce packets per second. As an example, 524 Figure 4 shows a game provider that wants to TCM-optimize its 525 connections by establishing associations between different TCM-IO/EOs 526 placed near the game server and several TCM-IO/EOs placed in the 527 networks of different ISPs (agreements between the game provider and 528 each ISP will be necessary). In every ISP, the TCM-IO/EO would be 529 placed in the most adequate point (actually several TCM-IO/EOs could 530 exist per ISP) in order to aggregate enough number of users. 532 _ _ 533 N users ( ` )_ 534 +---+ ( ) `) 535 |TCM|->(_ (_ . _) 536 +---+ ISP 1 \ 537 _ _ \ _ _ _ _ _ 538 M users ( ` )_ \ ( ` ) ( ` ) ( ` ) 539 +---+ ( ) `) \ ( ) `) ( ) `) +---+ ( ) `) 540 |TCM|->(_ (_ ._)---- (_ (_ . _) ->(_ (_ . _)->|TCM|->(_ (_ . _) 541 +---+ ISP 2 / Internet ISP 4 +---+ Game Provider 542 _ _ / ^ 543 O users ( ` )_ / | 544 +---+ ( ) `) / +---+ 545 |TCM|->(_ (_ ._) P users->|TCM| 546 +---+ ISP 3 +---+ 548 Figure 4 550 1.5. 
Potential beneficiaries of TCM optimization 552 In conclusion, a standard way to compress headers, multiplex a number 553 of packets and send them together using a tunnel can benefit various 554 stakeholders: 556 o network operators can compress traffic flows sharing a common 557 network segment; 559 o ISPs; 561 o developers of VoIP systems can include this option in their 562 solutions; 564 o service providers, who can achieve bandwidth savings in their 565 supporting infrastructures; 567 o users of Alternative Networks, who may be able to save significant 568 bandwidth amounts, and to reduce the number of packets per second 569 in their networks. 571 Another fact that has to be taken into account is that the technique 572 not only saves bandwidth but also reduces the number of packets per 573 second, which sometimes can be a bottleneck for a satellite link or 574 even for a network router [Online]. 576 1.6. Current Standard for VoIP 578 The current standard [TCRTP] defines a way to reduce bandwidth and 579 pps of RTP traffic, by combining three different standard protocols: 581 o Regarding compression, [ECRTP] is the selected option. 583 o Multiplexing is accomplished using PPP Multiplexing [PPP-MUX]. 585 o Tunneling is accomplished by using L2TP (Layer 2 Tunneling 586 Protocol [L2TPv3]). 588 The three layers are combined as shown in Figure 5: 590 RTP/UDP/IP 591 | 592 | ---------------------------- 593 | 594 ECRTP compressing layer 595 | 596 | ---------------------------- 597 | 598 PPPMUX multiplexing layer 599 | 600 | ---------------------------- 601 | 602 L2TP tunneling layer 603 | 604 | ---------------------------- 605 | 606 IP 608 Figure 5 610 1.7.
Current Proposal 612 In contrast to the current standard [TCRTP], TCM allows other header 613 compression protocols in addition to RTP/UDP, since services based on 614 small packets also use bare UDP, as shown in Figure 6: 616 UDP/IP RTP/UDP/IP 617 \ / 618 \ / ------------------------------ 619 \ / 620 Nothing or ROHC or ECRTP or IPHC header compressing layer 621 | 622 | ------------------------------ 623 | 624 PPPMux or other mux protocols multiplexing layer 625 | 626 / \ ------------------------------ 627 / \ 628 / \ 629 GRE or L2TP \ tunneling layer 630 | MPLS 631 | ------------------------------ 632 IP 634 Figure 6 636 Each of the three layers is considered as independent of the other 637 two, i.e., different combinations of protocols can be implemented 638 according to the new proposal: 640 o Regarding compression, a number of options can be considered, 641 since different standards are able to compress different headers 642 ([cRTP], [ECRTP], [IPHC], [ROHC]). The one to be used can be 643 selected depending on the protocols used by the traffic to be 644 compressed and the specific scenario (packet loss percentage, delay, 645 etc.). There is also the possibility of using null header 646 compression, when traffic compression is to be avoided, 647 taking into account the need to store a context for every flow 648 and the problems of context desynchronization in certain 649 scenarios. Although not shown in Figure 6, ESP (Encapsulating 650 Security Payload [ESP]) headers can also be compressed. 652 o Multiplexing can be accomplished using PPP Multiplexing (PPPMux) 653 [PPP-MUX]. However, PPPMux introduces additional complexity, 654 since it requires the use of PPP, and a protocol for tunneling 655 layer 2 frames. For this reason, other multiplexing protocols can 656 also be considered, such as the one proposed in 657 [I-D.saldana-tsvwg-simplemux].
659 o Tunneling is accomplished by using L2TP (Layer 2 Tunneling 660 Protocol [L2TPv3]) over IP, GRE (Generic Routing Encapsulation 661 [GRE]) over IP, or MPLS (Multiprotocol Label Switching 662 Architecture [MPLS]). 664 It can be observed that TCRTP [TCRTP] is included as an option in 665 TCM, combining [ECRTP], [PPP-MUX] and [L2TPv3], so backwards 666 compatibility with TCRTP is provided. If a TCM optimizer implements 667 ECRTP, PPPMux and L2TPv3, compatibility with RFC4170 MUST be guaranteed. 669 If a single link is being optimized, a tunnel is unnecessary. In that 670 case, both optimizers MAY perform header compression between them. 671 Multiplexing may still be useful, since it reduces packets per 672 second, which is interesting in some environments (e.g., satellite). 673 Another reason for that is the desire to reduce energy consumption. 674 Although no tunnel is employed, this can still be considered as TCM 675 optimization, so TCM signaling protocols will be employed here in 676 order to negotiate the compression and multiplexing parameters to be 677 employed. 679 Payload compression schemes may also be used, but they are not the 680 aim of this document. 682 2. Protocol Operation 684 This section describes how to combine protocols belonging to three 685 layers (compressing, multiplexing, and tunneling), in order to save 686 bandwidth for the considered flows. 688 2.1. Models of implementation 690 TCM can be implemented in different ways. The most straightforward 691 is to implement it in the devices terminating the flows (these 692 devices can be e.g., voice gateways, or proxies grouping a number of 693 flows): 695 [ending device]---[ending device] 696 ^ 697 | 698 TCM over IP 700 Figure 7 702 Another way TCM can be implemented is with an external optimizer.
703   This device can be placed at strategic places in the network and can
704   dynamically create and destroy TCM sessions without the participation
705   of the endpoints that generate the flows (Figure 8).

707   [ending device]\                                   /[ending device]
708                  \                                 /
709   [ending device]----[optimizer]-----[optimizer]-----[ending device]
710                  /                                 \
711   [ending device]/                                   \[ending device]
712        ^                      ^                          ^
713        |                      |                          |
714    Native IP            TCM over IP                  Native IP

716                              Figure 8

718   A number of already compressed flows can also be merged in a tunnel
719   using an optimizer in order to increase the number of flows in a
720   tunnel (Figure 9):

722   [ending device]\                                   /[ending device]
723                  \                                 /
724   [ending device]----[optimizer]-----[optimizer]------[ending device]
725                  /                                 \
726   [ending device]/                                   \[ending device]
727        ^                      ^                          ^
728        |                      |                          |
729   Compressed            TCM over IP                 Compressed

731                              Figure 9

733   2.2.  Choice of the compressing protocol

735   There are different protocols that can be used for compressing IP
736   flows:

738   o  IPHC (IP Header Compression [IPHC]) permits the compression of
739      UDP/IP and ESP/IP headers.  It has a low implementation
740      complexity.  On the other hand, the resynchronization of the
741      context can be slow over long RTT links.  It should be used in
742      scenarios with a very low packet loss percentage.

744   o  cRTP (compressed RTP [cRTP]) works the same way as IPHC, but is
745      also able to compress RTP headers.  The link layer transport is
746      not specified, but typically PPP is used.  For cRTP to compress
747      headers, it must be implemented on each PPP link.  A lot of
748      context is required to successfully run cRTP, and memory and
749      processing requirements are high, especially if multiple hops must
750      implement cRTP to save bandwidth on each of the hops.  At higher
751      line rates, cRTP's processor consumption becomes prohibitive.
752      cRTP is not suitable over long-delay WAN links commonly
753      used when tunneling, as proposed by this document.
To avoid the
754      per-hop expense of cRTP, a simplistic solution is to use cRTP with
755      L2TP to achieve end-to-end cRTP.  However, cRTP is only suitable
756      for links with low delay and low loss.  Thus, if multiple router
757      hops are involved, cRTP's expectation of low delay and low loss
758      can no longer be met.  Furthermore, packets can arrive out of
759      order.

761   o  ECRTP (Enhanced Compressed RTP [ECRTP]) is an extension of cRTP
762      [cRTP] that provides tolerance to packet loss and packet
763      reordering between compressor and decompressor.  Thus, ECRTP
764      should be used instead of cRTP when possible (e.g., when both TCM
765      optimizers implement ECRTP).

767   o  ROHC (RObust Header Compression [ROHC]) is able to compress UDP/
768      IP, ESP/IP and RTP/UDP/IP headers.  It is a robust scheme
769      developed for header compression over links with a high bit error
770      rate, such as wireless ones.  It incorporates mechanisms for quick
771      resynchronization of the context.  It includes an improved
772      encoding scheme for compressing the header fields that change
773      dynamically.  Its main drawback is that it requires significantly
774      more processing and memory resources than those necessary for
775      IPHC or ECRTP.

777   The present document does not determine which of the existing
778   protocols has to be used for the compressing layer.  The decision
779   will depend on the scenario and the service being optimized.  It
780   will also be determined by the packet loss probability, RTT, jitter,
781   and the availability of memory and processing resources.  The
782   framework can also accommodate other compression schemes that may be
783   developed in the future.

785   2.2.1.  Context Synchronization in ECRTP

787   When the compressor receives an RTP packet that has an unpredicted
788   change in the RTP header, the compressor should send a COMPRESSED_UDP
789   packet (described in [ECRTP]) to synchronize the ECRTP decompressor
790   state.
The COMPRESSED_UDP packet updates the RTP context in the
791   decompressor.

793   To ensure delivery of updates of context variables, COMPRESSED_UDP
794   packets should be delivered using the robust operation described in
795   [ECRTP].

797   Because the "twice" algorithm described in [ECRTP] relies on UDP
798   checksums, the IP stack on the RTP transmitter should transmit UDP
799   checksums.  If UDP checksums are not used, the ECRTP compressor
800   should use the cRTP Header checksum described in [ECRTP].

802   2.2.2.  Context Synchronization in ROHC

804   ROHC [ROHC] includes a more complex mechanism in order to maintain
805   context synchronization.  It has different operation modes and
806   defines compressor states which change depending on link behavior.

808   2.3.  Multiplexing

810   Header compression algorithms require a layer two protocol that
811   allows the identification of different protocols.  PPP [PPP] is
812   suited for this, although other multiplexing protocols can also be
813   used for this layer of TCM.  For example, Simplemux
814   [I-D.saldana-tsvwg-simplemux] can be employed as a light
815   multiplexing protocol able to carry packets belonging to different
816   protocols.

817   When header compression is used inside a tunnel, it reduces the size
818   of the headers of the IP packets carried in the tunnel.  However, the
819   tunnel itself has overhead due to its IP header and the tunnel header
820   (the information necessary to identify the tunneled payload).

822   By multiplexing a number of small payloads in a single tunneled
823   packet, reasonable bandwidth efficiency can be achieved, since the
824   tunnel overhead is shared by the multiple packets belonging to the
825   flows active between the source and destination of the tunnel.  The
826   packet size of the flows has to be small in order to permit good
827   bandwidth savings.
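As an illustration of how the shared tunnel overhead drives the achievable saving, the following sketch (not part of the specification) estimates the bandwidth efficiency of multiplexing N compressed payloads into a single tunneled packet. All header sizes are assumptions chosen for the example (outer IPv4 plus a minimal GRE tunnel header, 2-byte compressed headers, 2-byte multiplexing separators); real values depend on the protocols negotiated between the optimizers.

```python
# Illustrative estimate of TCM bandwidth savings.  All sizes in bytes
# are example assumptions, not values mandated by the specification.

NATIVE_HDR = 40      # native RTP/UDP/IPv4 headers (12 + 8 + 20)
TUNNEL_HDR = 20 + 4  # outer IPv4 header plus a minimal GRE header
COMPR_HDR = 2        # assumed size of a compressed RTP/UDP/IP header
MUX_SEP = 2          # assumed per-payload multiplexing separator

def native_bytes(payload, n):
    """Bytes on the wire for n small packets sent natively."""
    return n * (NATIVE_HDR + payload)

def tcm_bytes(payload, n):
    """Bytes on the wire for n packets compressed and multiplexed
    into a single tunneled packet: one tunnel header is shared."""
    return TUNNEL_HDR + n * (MUX_SEP + COMPR_HDR + payload)

def saving(payload, n):
    """Relative bandwidth saving of TCM over native transmission."""
    return 1 - tcm_bytes(payload, n) / native_bytes(payload, n)

# The tunnel overhead is amortized over the multiplexed payloads, so
# the saving grows with the number of simultaneous small-packet flows.
for n in (1, 2, 5, 10, 50):
    print(f"{n:3d} payloads of 20 bytes: {saving(20, n):.1%} saved")
```

With these assumed sizes the saving rises quickly with the number of multiplexed payloads and then flattens, which matches the observation in the text that a smaller tunnel header requires fewer simultaneous sessions to reach a given efficiency.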
829   If the source and destination of the tunnel are the same as the
830   source and destination of the compressing protocol sessions, then the
831   source and destination must have multiple active small-packet flows
832   to get any benefit from multiplexing.

834   Because of this, TCM is mostly useful for applications where many
835   small-packet flows run between a pair of hosts.  The number of
836   simultaneous sessions required to reduce the header overhead to the
837   desired level depends on the average payload size, and also on the
838   size of the tunnel header.  A smaller tunnel header will result in
839   fewer simultaneous sessions being required to produce adequate
840   bandwidth efficiencies.

842   When multiplexing, a limit on the packet size has to be established
843   in order to avoid problems related to the MTU.  This document does
844   not establish any rule about this, but it is strongly recommended
845   that some method such as Packetization Layer Path MTU Discovery
846   [RFC4821] is used before multiplexing packets.

848   2.4.  Tunneling

850   Different tunneling schemes can be used for sending the compressed
851   payloads end to end.

853   2.4.1.  Tunneling schemes over IP: L2TP and GRE

855   L2TP tunnels should be used to tunnel the compressed payloads end to
856   end.  L2TP includes methods for tunneling messages used in PPP
857   session establishment, such as NCP (Network Control Protocol).  This
858   allows [IPCP-HC] to negotiate ECRTP compression/decompression
859   parameters.

861   Other tunneling schemes, such as GRE [GRE], may also be used to
862   implement the tunneling layer of TCM.

864   2.4.2.  MPLS tunneling

866   In some scenarios, mainly in operators' core networks, MPLS is
867   widely deployed as a data transport method.  The adoption of MPLS as
868   the tunneling layer in this proposal is intended to adapt TCM
869   natively to those transport networks.
871   In the same way as layer 3 tunnels, MPLS paths, identified by MPLS
872   labels and established between Label Edge Routers (LERs), can be
873   used to transport the compressed payloads within an MPLS network.
874   In this case, the multiplexing layer must be placed directly over
875   the MPLS layer.  Note that layer 3 tunnel headers are then not
876   needed, with the consequent improvement in efficiency.

878   2.5.  Encapsulation Formats

880   The packet format for a compressed packet is:

882        +------------+-----------------------+
883        |            |                       |
884        |   Compr    |                       |
885        |   Header   |         Data          |
886        |            |                       |
887        |            |                       |
888        +------------+-----------------------+

890                              Figure 10

892   The packet format of a multiplexed PPP packet as defined by [PPP-MUX]
893   is:

895   +-------+---+------+-------+-----+   +---+------+-------+-----+
896   | Mux   |P L|      |       |     |   |P L|      |       |     |
897   | PPP   |F X|Len1  |  PPP  |     |   |F X|LenN  |  PPP  |     |
898   | Prot. |F T|      | Prot. |Info1| ~ |F T|      | Prot. |InfoN|
899   | Field |          | Field1|     |   |          |FieldN |     |
900   |  (1)  |1-2 octets| (0-2) |     |   |1-2 octets| (0-2) |     |
901   +-------+----------+-------+-----+   +----------+-------+-----+

903                              Figure 11

905   The combined format used for TCM with a single payload is all of the
906   above packets concatenated.
Here is an example with one payload, 907 using L2TP or GRE tunneling: 909 +------+------+-------+----------+-------+--------+----+ 910 | IP |Tunnel| Mux |P L| | | | | 911 |header|header| PPP |F X|Len1 | PPP | Compr | | 912 | (20) | | Proto |F T| | Proto | header |Data| 913 | | | Field |---+ | Field1| | | 914 | | | (1) |1-2 octets| (0-2) | | | 915 +------+------+-------+----------+-------+--------+----+ 916 |<------------- IP payload -------------------->| 917 |<-------- Mux payload --------->| 919 Figure 12 921 If the tunneling technology is MPLS, then the scheme would be: 923 +------+-------+----------+-------+--------+----+ 924 |MPLS | Mux |P L| | | | | 925 |header| PPP |F X|Len1 | PPP | Compr | | 926 | | Proto |F T| | Proto | header |Data| 927 | | Field |---+ | Field1| | | 928 | | (1) |1-2 octets| (0-2) | | | 929 -+------+-------+----------+-------+--------+----+ 930 |<---------- MPLS payload -------------->| 931 |<-------- Mux payload --------->| 933 Figure 13 935 If the tunnel contains multiplexed traffic, multiple "PPPMux 936 payload"s are transmitted in one IP packet. 938 3. Contributing Authors 939 Gonzalo Camarillo 940 Ericsson 941 Advanced Signalling Research Lab. 942 FIN-02420 Jorvas 943 Finland 945 Email: Gonzalo.Camarillo@ericsson.com 947 Michael A. Ramalho 948 Cisco Systems, Inc. 949 6310 Watercrest Way, Unit 203 950 Lakewood Ranch, FL 34202 951 USA 953 Phone: +1.732.832.9723 954 Email: mramalho@cisco.com 956 Jose Ruiz Mas 957 University of Zaragoza 958 Dpt. 
IEC Ada Byron Building
959   50018 Zaragoza
960   Spain

962   Phone: +34 976762158
963   Email: jruiz@unizar.es

965   Diego Lopez Garcia
966   Telefonica I+D
967   Ramon de la cruz 84
968   28006 Madrid
969   Spain

971   Phone: +34 913129041
972   Email: diego@tid.es

974   David Florez Rodriguez
975   Telefonica I+D
976   Ramon de la cruz 84
977   28006 Madrid
978   Spain

980   Phone: +34 91312884
981   Email: dflorez@tid.es

982   Manuel Nunez Sanz
983   Telefonica I+D
984   Ramon de la cruz 84
985   28006 Madrid
986   Spain

988   Phone: +34 913128821
989   Email: mns@tid.es

991   Juan Antonio Castell Lucia
992   Telefonica I+D
993   Ramon de la cruz 84
994   28006 Madrid
995   Spain

997   Phone: +34 913129157
998   Email: jacl@tid.es

1000  Mirko Suznjevic
1001  University of Zagreb
1002  Faculty of Electrical Engineering and Computing, Unska 3
1003  10000 Zagreb
1004  Croatia

1006  Phone: +385 1 6129 755
1007  Email: mirko.suznjevic@fer.hr

1009  4.  Acknowledgements

1011  Jose Saldana, Julian Fernandez Navajas and Jose Ruiz Mas were funded
1012  by the EU H2020 Wi-5 project (Grant Agreement no: 644262).

1014  5.  IANA Considerations

1016  This memo includes no request to IANA.

1018  6.  Security Considerations

1020  The most straightforward option for securing a number of non-secured
1021  flows sharing a path is the use of IPsec [IPsec] when TCM with an IP
1022  tunnel is employed.  Instead of adding a security header to the
1023  packets of each native flow, and then compressing and multiplexing
1024  them, a single IPsec tunnel can be used in order to secure all the
1025  flows together, thus achieving a higher efficiency.  This use of
1026  IPsec protects the packets only within the transport network between
1027  tunnel ingress and egress and therefore does not provide end-to-end
1028  authentication or encryption.
1030  When a number of already secured flows including ESP [ESP] headers
1031  are optimized by means of TCM, and the addition of further security
1032  is not necessary, their ESP/IP headers can still be compressed using
1033  suitable algorithms [RFC5225], in order to improve the efficiency.
1034  This header compression does not change the end-to-end security
1035  model.

1037  The resilience of TCM to denial of service, and the use of TCM to
1038  deny service to other parts of the network infrastructure, is for
1039  future study.

1041  7.  References

1043  7.1.  Normative References

1045  [cRTP]     Casner, S. and V. Jacobson, "Compressing IP/UDP/RTP
1046             Headers for Low-Speed Serial Links", RFC 2508, 1999.

1048  [ECRTP]    Koren, T., Casner, S., Geevarghese, J., Thompson, B., and
1049             P. Ruddy, "Enhanced Compressed RTP (CRTP) for Links with
1050             High Delay, Packet Loss and Reordering", RFC 3545, 2003.

1052  [ESP]      Kent, S., "IP Encapsulating Security Payload", RFC 4303,
1053             2005.

1055  [GRE]      Farinacci, D., Li, T., Hanks, S., Meyer, D., and P.
1056             Traina, "Generic Routing Encapsulation (GRE)", RFC 2784,
1057             2000.

1059  [H.323]    International Telecommunication Union, "Recommendation
1060             H.323", Packet based multimedia communication
1061             systems H.323, July 2003.

1063  [I-D.irtf-gaia-alternative-network-deployments]
1064             Saldana, J., Arcia-Moret, A., Braem, B., Pietrosemoli, E.,
1065             Sathiaseelan, A., and M. Zennaro, "Alternative Network
1066             Deployments. Taxonomy, characterization, technologies and
1067             architectures", draft-irtf-gaia-alternative-network-
1068             deployments-02 (work in progress), November 2015.

1070  [IPCP-HC]  Engan, M., Casner, S., Bormann, C., and T. Koren, "IP
1071             Header Compression over PPP", RFC 3544, 2003.

1073  [IPHC]     Degermark, M., Nordgren, B., and S. Pink, "IP Header
1074             Compression", RFC 2507, 1999.

1076  [IPsec]    Kent, S. and K. Seo, "Security Architecture for the
1077             Internet Protocol", RFC 4301, December 2005.

1079  [L2TPv3]   Lau, J., Townsley, M., and I.
Goyret, "Layer Two Tunneling
1080             Protocol - Version 3 (L2TPv3)", RFC 3931, 2005.

1082  [MPLS]     Rosen, E., Viswanathan, A., and R. Callon, "Multiprotocol
1083             Label Switching Architecture", RFC 3031, January 2001.

1085  [PPP]      Simpson, W., "The Point-to-Point Protocol (PPP)",
1086             RFC 1661, 1994.

1088  [PPP-MUX]  Pazhyannur, R., Ali, I., and C. Fox, "PPP Multiplexing",
1089             RFC 3153, 2001.

1091  [RFC2119]  Bradner, S., "Key words for use in RFCs to Indicate
1092             Requirement Levels", BCP 14, RFC 2119,
1093             DOI 10.17487/RFC2119, March 1997.

1096  [RFC4821]  Mathis, M. and J. Heffner, "Packetization Layer Path MTU
1097             Discovery", RFC 4821, March 2007.

1099  [RFC5225]  Pelletier, G. and K. Sandlund, "RObust Header Compression
1100             Version 2 (ROHCv2): Profiles for RTP, UDP, IP, ESP and
1101             UDP-Lite", RFC 5225, April 2008.

1103  [RFC7252]  Shelby, Z., Hartke, K., and C. Bormann, "The Constrained
1104             Application Protocol (CoAP)", RFC 7252,
1105             DOI 10.17487/RFC7252, June 2014.

1108  [ROHC]     Sandlund, K., Pelletier, G., and L-E. Jonsson, "The RObust
1109             Header Compression (ROHC) Framework", RFC 5795, 2010.

1111  [RTP]      Schulzrinne, H., Casner, S., Frederick, R., and V.
1112             Jacobson, "RTP: A Transport Protocol for Real-Time
1113             Applications", RFC 3550, 2003.

1115  [SCTP]     Stewart, R., Ed., "Stream Control Transmission Protocol",
1116             RFC 4960, 2007.

1118  [SIP]      Rosenberg, J., Schulzrinne, H., Camarillo, G., et al.,
1119             "SIP: Session Initiation Protocol", RFC 3261, 2002.

1121  [TCRTP]    Thompson, B., Koren, T., and D. Wing, "Tunneling
1122             Multiplexed Compressed RTP (TCRTP)", RFC 4170, 2005.

1124  7.2.  Informative References

1126  [Efficiency]
1127             Bolla, R., Bruschi, R., Davoli, F., and F. Cucchietti,
1128             "Energy Efficiency in the Future Internet: A Survey of
1129             Existing Approaches and Trends in Energy-Aware Fixed
1130             Network Infrastructures", IEEE Communications Surveys and
1131             Tutorials vol.13, no.2, pp.223,244, 2011.
1133 [First-person] 1134 Ratti, S., Hariri, B., and S. Shirmohammadi, "A Survey of 1135 First-Person Shooter Gaming Traffic on the Internet", IEEE 1136 Internet Computing vol 14, no. 5, pp. 60-69, 2010. 1138 [FPS_opt] Saldana, J., Fernandez-Navajas, J., Ruiz-Mas, J., Aznar, 1139 J., Viruete, E., and L. Casadesus, "First Person Shooters: 1140 Can a Smarter Network Save Bandwidth without Annoying the 1141 Players?", IEEE Communications Magazine vol. 49, no.11, 1142 pp. 190-198, 2011. 1144 [Gamers] Oliveira, M. and T. Henderson, "What online gamers really 1145 think of the Internet?", NetGames '03 Proceedings of the 1146 2nd workshop on Network and system support for games, ACM 1147 New York, NY, USA Pages 185-193, 2003. 1149 [I-D.saldana-tsvwg-simplemux] 1150 Saldana, J., "Simplemux. A generic multiplexing protocol", 1151 draft-saldana-tsvwg-simplemux-02 (work in progress), 1152 January 2015. 1154 [I-D.suznjevic-dispatch-delay-limits] 1155 Suznjevic, M. and J. Saldana, "Delay Limits for Real-Time 1156 Services", draft-suznjevic-dispatch-delay-limits-00 (work 1157 in progress), December 2015. 1159 [Online] Feng, WC., Chang, F., Feng, W., and J. Walpole, "A traffic 1160 characterization of popular on-line games", IEEE/ACM 1161 Transactions on Networking 13.3 Pages 488-500, 2005. 1163 [Power] Chabarek, J., Sommers, J., Barford, P., Estan, C., Tsiang, 1164 D., and S. Wright, "Power Awareness in Network Design and 1165 Routing", INFOCOM 2008. The 27th Conference on Computer 1166 Communications. IEEE pp.457,465, 2008. 1168 [Simplemux_CIT] 1169 Saldana, J., Forcen, I., Fernandez-Navajas, J., and J. 1170 Ruiz-Mas, "Improving Network Efficiency with Simplemux", 1171 IEEE CIT 2015, International Conference on Computer and 1172 Information Technology , pp. 446-453, 26-28 October 2015, 1173 Liverpool, UK, 2015. 1175 [topology_CNs] 1176 Vega, D., Cerda-Alabern, L., Navarro, L., and R. Meseguer, 1177 "Topology patterns of a community network: Guifi. 
net.", 1178 Proceedings Wireless and Mobile Computing, Networking and 1179 Communications (WiMob), 2012 IEEE 8th International 1180 Conference on (pp. 612-619) , 2012. 1182 [VoIP_opt] 1183 Saldana, J., Fernandez-Navajas, J., Ruiz-Mas, J., Murillo, 1184 J., Viruete, E., and J. Aznar, "Evaluating the Influence 1185 of Multiplexing Schemes and Buffer Implementation on 1186 Perceived VoIP Conversation Quality", Computer Networks 1187 (Elsevier) Volume 6, Issue 11, pp 2920 - 2939. Nov. 30, 1188 2012. 1190 Authors' Addresses 1192 Jose Saldana 1193 University of Zaragoza 1194 Dpt. IEC Ada Byron Building 1195 Zaragoza 50018 1196 Spain 1198 Phone: +34 976 762 698 1199 Email: jsaldana@unizar.es 1201 Dan Wing 1202 Cisco Systems 1203 771 Alder Drive 1204 San Jose, CA 95035 1205 US 1207 Phone: +44 7889 488 335 1208 Email: dwing@cisco.com 1209 Julian Fernandez Navajas 1210 University of Zaragoza 1211 Dpt. IEC Ada Byron Building 1212 Zaragoza 50018 1213 Spain 1215 Phone: +34 976 761 963 1216 Email: navajas@unizar.es 1218 Muthu Arul Mozhi Perumal 1219 Ericsson 1220 Ferns Icon 1221 Doddanekundi, Mahadevapura 1222 Bangalore, Karnataka 560037 1223 India 1225 Email: muthu.arul@gmail.com 1227 Fernando Pascual Blanco 1228 Telefonica I+D 1229 Ramon de la Cruz 84 1230 Madrid 28006 1231 Spain 1233 Phone: +34 913128779 1234 Email: fpb@tid.es