Transport Area Working Group                                  J. Saldana
Internet-Draft                                    University of Zaragoza
Intended status: Best Current Practice                           D. Wing
Expires: December 14, 2015                                 Cisco Systems
                                                    J. Fernandez Navajas
                                                  University of Zaragoza
                                                              M. Perumal
                                                                Ericsson
                                                       F. Pascual Blanco
                                                          Telefonica I+D
                                                           June 12, 2015

       Tunneling Compressing and Multiplexing (TCM) Traffic Flows.
                             Reference Model
                      draft-saldana-tsvwg-tcmtf-09

Abstract

   Tunneling, Compressing and Multiplexing (TCM) is a method for
   improving the bandwidth utilization of network segments that carry
   multiple small-packet flows in parallel sharing a common path.  The
   method combines different protocols for header compression,
   multiplexing, and tunneling over a network path for the purpose of
   reducing the bandwidth.  The amount of packets per second can also
   be reduced.

   This document describes the TCM framework and the different options
   which can be used for each layer (header compression, multiplexing
   and tunneling).

Status of This Memo

   This Internet-Draft is submitted to IETF in full conformance with
   the provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF).  Note that other groups may also distribute
   working documents as Internet-Drafts.  The list of current
   Internet-Drafts is at http://datatracker.ietf.org/drafts/current/.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."

   This Internet-Draft will expire on December 14, 2015.

Copyright Notice

   Copyright (c) 2015 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document.

Table of Contents

   1.  Introduction  . . . . . . . . . . . . . . . . . . . . . . . .  3
     1.1.  Requirements Language . . . . . . . . . . . . . . . . . .  3
     1.2.  Bandwidth efficiency of flows sending small packets . . .  3
       1.2.1.  Real-time applications using RTP  . . . . . . . . . .  3
       1.2.2.  Real-time applications not using RTP  . . . . . . . .  4
       1.2.3.  Other applications generating small packets . . . . .  4
       1.2.4.  Optimization of small-packet flows  . . . . . . . . .  5
       1.2.5.  Energy consumption considerations . . . . . . . . . .  6
     1.3.  Terminology . . . . . . . . . . . . . . . . . . . . . . .  6
     1.4.  Scenarios of application  . . . . . . . . . . . . . . . .  7
       1.4.1.  Multidomain scenario  . . . . . . . . . . . . . . . .  7
       1.4.2.  Single domain . . . . . . . . . . . . . . . . . . . .  8
       1.4.3.  Private solutions . . . . . . . . . . . . . . . . . .  9
       1.4.4.  Mixed scenarios . . . . . . . . . . . . . . . . . . . 11
     1.5.  Potential beneficiaries of TCM optimization . . . . . . . 12
     1.6.  Current Standard for VoIP . . . . . . . . . . . . . . . . 13
     1.7.  Current Proposal  . . . . . . . . . . . . . . . . . . . . 13
   2.  Protocol Operation  . . . . . . . . . . . . . . . . . . . . . 15
     2.1.  Models of implementation  . . . . . . . . . . . . . . . . 15
     2.2.  Choice of the compressing protocol  . . . . . . . . . . . 16
       2.2.1.  Context Synchronization in ECRTP  . . . . . . . . . . 17
       2.2.2.  Context Synchronization in ROHC . . . . . . . . . . . 18
     2.3.  Multiplexing  . . . . . . . . . . . . . . . . . . . . . . 18
     2.4.  Tunneling . . . . . . . . . . . . . . . . . . . . . . . . 19
       2.4.1.  Tunneling schemes over IP: L2TP and GRE . . . . . . . 19
       2.4.2.  MPLS tunneling  . . . . . . . . . . . . . . . . . . . 19
     2.5.  Encapsulation Formats . . . . . . . . . . . . . . . . . . 19
   3.  Contributing Authors  . . . . . . . . . . . . . . . . . . . . 20
   4.  Acknowledgements  . . . . . . . . . . . . . . . . . . . . . . 22
   5.  IANA Considerations . . . . . . . . . . . . . . . . . . . . . 22
   6.  Security Considerations . . . . . . . . . . . . . . . . . . . 22
   7.  References  . . . . . . . . . . . . . . . . . . . . . . . . . 23
     7.1.  Normative References  . . . . . . . . . . . . . . . . . . 23
     7.2.  Informative References  . . . . . . . . . . . . . . . . . 24
   Authors' Addresses  . . . . . . . . . . . . . . . . . . . . . . . 26

1.  Introduction

   This document describes a way to combine different protocols for
   header compression, multiplexing and tunneling to save bandwidth for
   applications that generate long-term flows of small packets.

1.1.  Requirements Language

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
   document are to be interpreted as described in RFC 2119 [RFC2119].

1.2.  Bandwidth efficiency of flows sending small packets

   The interactivity demands of some real-time services (VoIP,
   videoconferencing, telemedicine, video surveillance, online gaming,
   etc.) require a traffic profile consisting of high rates of small
   packets, which are necessary in order to transmit frequent updates
   between the two extremes of the communication.  These services also
   demand low network delays.  In addition, some other services also
   use small packets, although they are not delay-sensitive (e.g.,
   instant messaging, or M2M packets sending collected data in sensor
   networks or IoT scenarios using wireless or satellite links).  For
   both the delay-sensitive and delay-insensitive applications, their
   small data payloads incur significant overhead.

   When a number of flows based on small packets (small-packet flows)
   share the same path, their traffic can be optimized by multiplexing
   packets belonging to different flows.  As a consequence, bandwidth
   can be saved and the amount of packets per second can be reduced.
   If a number of small packets are waiting in the buffer, they can be
   multiplexed and transmitted together.
   In addition, if a transmission queue has not already been formed but
   multiplexing is desired, it is necessary to add a delay in order to
   gather a number of packets.  This delay has to be maintained under
   some threshold if the service presents tight delay requirements.  It
   is believed that this delay and jitter can be of the same order of
   magnitude as, or less than, other common sources of delay and jitter
   currently present on the Internet, without causing harm to flows
   that employ delay-based congestion control.

1.2.1.  Real-time applications using RTP

   The first design of the Internet did not include any mechanism
   capable of guaranteeing an upper bound for delivery delay, taking
   into account that the first deployed services were e-mail, file
   transfer, etc., in which delay is not critical.  RTP [RTP] was first
   defined in 1996 in order to permit the delivery of real-time
   content.  Nowadays, although there are a variety of protocols used
   for signaling real-time flows (SIP [SIP], H.323 [H.323], etc.), RTP
   has become the de facto standard for the delivery of real-time
   content.

   RTP was designed to work over UDP datagrams.  This implies that an
   IPv4 packet carrying real-time information has to include 40 bytes
   of headers: 20 for the IPv4 header, 8 for UDP, and 12 for RTP.  This
   overhead is significant, taking into account that many real-time
   services send very small payloads.  It becomes even more significant
   with IPv6 packets, as the basic IPv6 header is twice the size of the
   IPv4 header.  Table 1 illustrates the overhead problem of VoIP for
   two different codecs.
   +---------------------------------+---------------------------------+
   |              IPv4               |              IPv6               |
   +---------------------------------+---------------------------------+
   | IPv4+UDP+RTP: 40 bytes header   | IPv6+UDP+RTP: 60 bytes header   |
   | G.711 at 20 ms packetization:   | G.711 at 20 ms packetization:   |
   |   25% header overhead           |   37.5% header overhead         |
   | G.729 at 20 ms packetization:   | G.729 at 20 ms packetization:   |
   |   200% header overhead          |   300% header overhead          |
   +---------------------------------+---------------------------------+

              Table 1: Efficiency of different voice codecs

1.2.2.  Real-time applications not using RTP

   At the same time, there are many real-time applications that do not
   use RTP.  Some of them send UDP (but not RTP) packets, e.g., First
   Person Shooter (FPS) online games [First-person], for which latency
   is very critical.  The quickness and the movements of the players
   are important, and can decide the result of the game.  In addition
   to latency, these applications may be sensitive to jitter and, to a
   lesser extent, to packet loss [Gamers], since they implement
   mechanisms for packet loss concealment.

1.2.3.  Other applications generating small packets

   Other applications without delay constraints are also becoming
   popular.  Some examples are instant messaging, M2M packets sending
   collected data in sensor networks using wireless or satellite
   scenarios, and IoT traffic generated in Constrained RESTful
   Environments, where UDP packets are employed [RFC7252].  The number
   of wireless M2M (machine-to-machine) connections has been growing
   steadily for several years, and a share of these is being used for
   delay-tolerant applications, e.g., industrial SCADA (Supervisory
   Control And Data Acquisition), power plant monitoring, smart grids,
   and asset tracking.

1.2.4.  Optimization of small-packet flows

   In the moments or places where network capacity gets scarce,
   allocating more bandwidth is a possible solution, but it implies a
   recurring cost.  However, including optimization techniques between
   a pair of network nodes (able to reduce bandwidth and packets per
   second) when/where required is a one-time investment.

   In scenarios including a bottleneck with a single Layer-3 hop,
   standard header compression algorithms [cRTP], [ECRTP], [IPHC],
   [ROHC] can be used for reducing the overhead of each flow, at the
   cost of additional processing.

   However, if header compression is to be deployed in a network path
   including several Layer-3 hops, tunneling can be used at the same
   time in order to allow the header-compressed packets to travel end-
   to-end, thus avoiding the need to compress and decompress at each
   intermediate node.  In these cases, compressed packets belonging to
   different flows can be multiplexed together, in order to share the
   tunnel overhead.  As a counterpart, a small multiplexing delay will
   be necessary in order to join a number of packets to be sent
   together.  This delay has to be maintained under a threshold in
   order to meet the delay requirements.

   A series of recommendations about delay limits has been summarized
   in [I-D.suznjevic-tsvwg-mtd-tcmtf], in order to maintain this
   additional delay and jitter in the same order of magnitude as other
   sources of jitter currently present on the Internet.

   A demultiplexer and a decompressor are necessary at the end of the
   common path, so as to rebuild the packets as they were originally
   sent, making traffic optimization a transparent process for the
   origin and destination of the flow.

   If only one stream is tunneled and compressed, then little bandwidth
   savings will be obtained.
   In contrast, multiplexing is helpful to amortize the overhead of the
   tunnel header over many payloads.  The obtained savings grow with
   the number of flows optimized together [VoIP_opt], [FPS_opt].

   All in all, the combined use of header compression and multiplexing
   provides a trade-off: bandwidth can be exchanged for processing
   capacity (mainly required for header compression and decompression)
   and a small additional delay (required for gathering a number of
   packets to be multiplexed together).

1.2.5.  Energy consumption considerations

   As an additional benefit, the reduction of the information sent,
   and especially the reduction of the amount of packets per second to
   be managed by the intermediate routers, can be translated into a
   reduction of the overall energy consumption of network equipment.
   According to [Efficiency], internal packet processing engines and
   switching fabric require 60% and 18% of the power consumption of
   high-end routers, respectively.  Thus, reducing the number of
   packets to be managed and switched will reduce the overall energy
   consumption.  The measurements performed on commercial routers in
   [Power] corroborate this: a study using different packet sizes was
   presented, and the tests with big packets showed a reduction of the
   energy consumption, since a certain amount of energy is associated
   with header processing tasks, and not only with the sending of the
   packet itself.

   All in all, another tradeoff appears: on the one hand, energy
   consumption is increased in the two extremes due to header
   compression processing; on the other hand, energy consumption is
   reduced in the intermediate nodes because of the reduction of the
   number of packets transmitted.  This tradeoff should be explored
   more deeply.

1.3.  Terminology

   This document uses a number of terms to refer to the roles played by
   the entities using TCM.
   o  native packet

      A packet sent by an application, belonging to a flow that can be
      optimized by means of TCM.

   o  native flow

      A flow of native packets.  It can be considered a "small-packet
      flow" when the vast majority of the generated packets present a
      low payload-to-header ratio.

   o  TCM packet

      A packet including a number of multiplexed and header-compressed
      native ones, and also a tunneling header.

   o  TCM flow

      A flow of TCM packets, each one including a number of multiplexed
      header-compressed packets.

   o  TCM optimizer

      The host where TCM optimization is deployed.  It corresponds to
      both the ingress and the egress of the tunnel transporting the
      compressed and multiplexed packets.

      If the optimizer compresses headers, multiplexes packets and
      creates the tunnel, it behaves as a "TCM-Ingress Optimizer", or
      "TCM-IO".  It takes native packets or flows and "optimizes" them.

      If it extracts packets from the tunnel, demultiplexes packets and
      decompresses headers, it behaves as a "TCM-Egress Optimizer", or
      "TCM-EO".  The TCM-Egress Optimizer takes a TCM flow and
      "rebuilds" the native packets as they were originally sent.

   o  TCM session

      The relationship between a pair of TCM optimizers exchanging TCM
      packets.

   o  policy manager

      A network entity which makes the decisions about TCM optimization
      parameters (e.g., multiplexing period to be used, flows to be
      optimized together), depending on their IP addresses, ports, etc.
      It is connected with a number of TCM optimizers, and orchestrates
      the optimization that takes place between them.

1.4.  Scenarios of application

   Different scenarios of application can be considered for the
   Tunneling, Compressing and Multiplexing solution.  They can be
   classified according to the domains involved in the optimization:

1.4.1.  Multidomain scenario

   In this scenario, the TCM tunnel goes all the way from one network
   edge (the place where users are attached to the ISP) to another, and
   therefore it can cross several domains.  As shown in Figure 1, the
   optimization is performed before the packets leave the domain of an
   ISP; the traffic crosses the Internet inside the tunnel, and the
   packets are rebuilt in the second domain.

         _ _ _                                         _ _ _
       (  `    )_           _ _ _                    (  `    )_
      ( +------+  )`)     (  `    )_                ( +------+   `)
   -->(_-|TCM-IO|---_)---> (        ) `) ------->(_-|TCM-EO|--_)-->
      ( +------+   _)     (_ (_ . _) _)             ( +------+  _)
       (_ _ _ _ _)          (_ _ ( _) _)             (_ _ _ _ _)

         ISP 1               Internet                   ISP 2

       <--------------------TCM--------------------->

                                Figure 1

   Note that this is not from border to border (where ISPs connect to
   the Internet, which could be covered with specialized links) but
   from one ISP to another (e.g., managing all the traffic from
   individual users arriving at a Game Provider, regardless of the
   users' location).

   Some examples of this could be:

   o  An ISP may place a TCM optimizer in its aggregation network, in
      order to tunnel all the packets belonging to a certain service,
      sending them to the application provider, who will rebuild the
      packets before forwarding them to the application server.  This
      will result in savings for both actors.

   o  A service provider (e.g., an online gaming company) can be
      allowed to place a TCM optimizer in the aggregation network of an
      ISP, being able to optimize all the flows of a service (e.g.,
      VoIP, an online game).  Another TCM optimizer will rebuild these
      packets once they arrive at the network of the provider.

1.4.2.  Single domain

   In this case, TCM is only activated inside an ISP, from the edge to
   the border, inside the network operator.  The geographical scope and
   network depth of TCM activation could be on demand, according to
   traffic conditions.
   If we consider the residential users of real-time interactive
   applications (e.g., VoIP, online games generating small packets) in
   a town or a district, a TCM optimizing module can be included in
   some network devices, in order to group packets with the same
   destination.  As shown in Figure 2, depending on the number of users
   of the application, the packets can be grouped at different levels
   in DSL fixed network scenarios, at gateway level in LTE mobile
   network scenarios, or even in other ISP edge routers.  TCM may also
   be applied for fiber residential accesses, and in mobile networks.
   This would reduce bandwidth requirements in the provider aggregation
   network.

             +------+
   N users --|TCM-IO|\
             +------+ \
                       \      _ _ _                      _ _ _
             +------+   \-->(  `    )_     +------+    (  `    )_
   M users --|TCM-IO|------>(        ) `)--|TCM-EO|-->(        ) `)
             +------+   /   (_ (_ . _) _)  +------+   (_ (_ . _) _)
                       /
             +------+ /          ISP                    Internet
   P users --|TCM-IO|/
             +------+

             <--------------TCM--------------->

                                Figure 2

   At the same time, the ISP may implement TCM capabilities within its
   own MPLS network in order to optimize internal network resources:
   optimizing modules can be embedded in the Label Edge Routers of the
   network.  In that scenario, MPLS will act as the "tunneling" layer,
   with the tunnels being the paths defined by the MPLS labels, thus
   avoiding the use of additional tunneling protocols.

   Finally, some networks use cRTP [cRTP] in order to obtain bandwidth
   savings on the access link, but as a counterpart considerable CPU
   resources are required on the aggregation router.  In these cases,
   by means of TCM, instead of only saving bandwidth on the access
   link, it could also be saved across the ISP network, thus avoiding
   the impact on the CPU of the aggregation router.

1.4.3.  Private solutions

   End users can also optimize traffic end-to-end from network borders.
   TCM is used to connect private networks geographically apart (e.g.,
   corporation headquarters and subsidiaries), without the ISP being
   aware of (or having to manage) those flows, as shown in Figure 3,
   where two different locations are connected through a tunnel
   traversing the Internet or another network.

       _ _ _                     _ _ _                     _ _ _
     (  `    )_   +------+     (  `    )_   +------+     (  `    )_
    (        ) `)-|TCM-IO|--->(        ) `)-|TCM-EO|--->(        ) `)
    (_ (_ . _) _) +------+    (_ (_ . _) _) +------+    (_ (_ . _) _)

      Location 1               ISP/Internet               Location 2

                  <-------------TCM----------->

                                Figure 3

   Some examples of these scenarios:

   o  The case of an enterprise with a number of distributed central
      offices, in which an appliance can be placed next to the access
      router, being able to optimize traffic flows with a shared origin
      and destination.  Thus, a number of remote desktop sessions to
      the same server can be optimized, or a number of VoIP calls
      between two offices will also require less bandwidth and fewer
      packets per second.  In many cases, a tunnel is already included
      for security reasons, so the additional overhead of TCM is lower.

   o  An Internet cafe, which is likely to have many users of the same
      application (e.g., VoIP, online games) sharing the same access
      link.  Internet cafes are very popular in countries with
      relatively low access speeds in households, where home computer
      penetration is usually low as well.  In many of these countries,
      bandwidth can become a serious limitation for this kind of
      business, so TCM savings may become interesting for their
      viability.

   o  Community Networks [topology_CNs] (typically deployed in rural
      areas or in developing countries), in which a number of people in
      the same geographical place share their connections in a
      cooperative way.  The structure of these networks is not designed
      from the beginning, but they grow organically as new users join.
      As a result, a number of wireless hops are usually required in
      order to reach a router connected to the Internet.

   o  Satellite communication links that often manage the bandwidth by
      limiting the transmission rate, measured in packets per second
      (pps), to and from the satellite.  Applications like VoIP that
      generate a large number of small packets can easily fill the
      maximum number of pps slots, limiting the throughput across such
      links.  As an example, a G.729a voice call generates 50 pps at 20
      ms packetization time.  If the satellite transmission allows
      1,500 pps, the number of simultaneous voice calls is limited to
      30.  This results in poor utilization of the satellite link's
      bandwidth, and places a low upper bound on the number of voice
      calls that can utilize the link simultaneously.  TCM
      optimization, combining small packets into one packet for
      transmission, will improve the efficiency.

   o  In an M2M/SCADA (Supervisory Control And Data Acquisition)
      context, TCM optimization can be applied when a satellite link is
      used for collecting the data of a number of sensors.  M2M
      terminals are normally equipped with sensing devices which can
      interface to proximity sensor networks through wireless
      connections.  The terminal can send the collected sensing data
      using a satellite link connecting to a satellite gateway, which
      in turn will forward the M2M/SCADA data to the processing and
      control center through the Internet.  The size of a typical M2M
      application transaction depends on the specific service, and it
      may vary from a minimum of 20 bytes (e.g., tracking and metering
      in private security) to about 1,000 bytes (e.g.,
      video-surveillance).  In this context, TCM concepts can also be
      applied to allow a more efficient use of the available satellite
      link capacity, matching the requirements demanded by some M2M
      services.
      If the case of large sensor deployments is considered, where
      proximity sensor networks transmit data through different
      satellite terminals, the use of compression algorithms already
      available in current satellite systems to reduce the overhead
      introduced by the UDP and IPv6 protocols is certainly desirable.
      In addition to this, the tunneling and multiplexing functions
      available from TCM allow extending the compression functionality
      throughout the rest of the network, to eventually reach the
      processing and control centers.

   o  Desktop or application sharing, where the traffic from the server
      to the client typically consists of the delta of screen updates.
      Also, the standard for remote desktop sharing emerging for WebRTC
      in the RTCWEB Working Group is: {something}/SCTP/UDP (Stream
      Control Transmission Protocol [SCTP]).  In this scenario,
      SCTP/UDP can be used in other cases: chatting, file sharing and
      applications related to WebRTC peers.  There can be hundreds of
      clients at a site talking to a server located at a datacenter
      over a WAN.  Compressing, multiplexing and tunneling this traffic
      could save WAN bandwidth and potentially improve latency.

1.4.4.  Mixed scenarios

   Different combinations of the previous scenarios can be considered.
   Agreements between different companies can be established in order
   to save bandwidth and to reduce packets per second.  As an example,
   Figure 4 shows a game provider that wants to TCM-optimize its
   connections by establishing associations between different
   TCM-IO/EOs placed near the game server and several TCM-IO/EOs placed
   in the networks of different ISPs (agreements between the game
   provider and each ISP will be necessary).  In every ISP, the
   TCM-IO/EO would be placed at the most appropriate point (actually,
   several TCM-IO/EOs could exist per ISP) in order to aggregate a
   sufficient number of users.
              _ _
   N users   ( ` )_
    +---+   (     ) `)
    |TCM|->(_ (_ . _)
    +---+    ISP 1    \
              _ _      \      _ _            _ _                _ _
   M users   ( ` )_     \    ( ` )          ( ` )              ( ` )
    +---+   (     ) `)   \  (     ) `)     (     ) `)  +---+  (     ) `)
    |TCM|->(_ (_ . _)---- (_ (_ . _) -->(_ (_ . _)-->|TCM|->(_ (_ . _)
    +---+    ISP 2      /    Internet       ISP 4      +---+    Game
              _ _      /                      ^                Provider
   O users   ( ` )_   /                       |
    +---+   (     ) `)                      +---+
    |TCM|->(_ (_ . _)            P users -->|TCM|
    +---+    ISP 3                          +---+

                                Figure 4

1.5.  Potential beneficiaries of TCM optimization

   In conclusion, a standard able to compress headers, multiplex a
   number of packets, and send them together using a tunnel can benefit
   various stakeholders:

   o  network operators, who can compress traffic flows sharing a
      common network segment;

   o  ISPs;

   o  developers of VoIP systems, who can include this option in their
      solutions;

   o  service providers, who can achieve bandwidth savings in their
      supporting infrastructures;

   o  users of Community Networks, who may be able to save significant
      amounts of bandwidth, and to reduce the number of packets per
      second in their networks.

   Another fact that has to be taken into account is that the technique
   not only saves bandwidth but also reduces the number of packets per
   second, which sometimes can be a bottleneck for a satellite link or
   even for a network router [Online].

1.6.  Current Standard for VoIP

   The current standard [TCRTP] defines a way to reduce the bandwidth
   and pps of RTP traffic, by combining three different standard
   protocols:

   o  Regarding compression, [ECRTP] is the selected option.

   o  Multiplexing is accomplished using PPP Multiplexing [PPP-MUX].

   o  Tunneling is accomplished by using L2TP (Layer 2 Tunneling
      Protocol [L2TPv3]).
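   As a rough illustration of why combining these three layers pays
   off, the following sketch estimates the bandwidth and pps savings
   for G.729 voice flows optimized with a TCRTP-like stack.  The header
   sizes used below (a 4-byte compressed header, a 2-byte multiplexing
   separator, and 24 bytes of IP plus tunnel overhead) are assumptions
   chosen for illustration only, not values mandated by the cited
   specifications:

```python
# Illustrative estimate of the savings obtained when N small-packet
# flows are compressed, multiplexed and tunneled together.  All header
# sizes are assumed, order-of-magnitude values.

NATIVE_HEADER = 40      # bytes: IPv4 (20) + UDP (8) + RTP (12)
PAYLOAD = 20            # bytes: G.729 at 20 ms packetization
COMPRESSED_HEADER = 4   # bytes: assumed residual header after compression
MUX_SEPARATOR = 2       # bytes: assumed per-payload multiplexing overhead
TUNNEL_HEADER = 24      # bytes: assumed IP + tunnel header overhead
PPS_PER_FLOW = 50       # packets per second per flow (20 ms period)

def native_bytes(n_flows):
    """Bytes sent per 20 ms period without TCM (one packet per flow)."""
    return n_flows * (NATIVE_HEADER + PAYLOAD)

def tcm_bytes(n_flows):
    """Bytes of one TCM packet carrying one packet of each flow."""
    return TUNNEL_HEADER + n_flows * (MUX_SEPARATOR + COMPRESSED_HEADER
                                      + PAYLOAD)

def savings(n_flows):
    """Fraction of bandwidth saved by TCM for n_flows multiplexed flows."""
    return 1 - tcm_bytes(n_flows) / native_bytes(n_flows)

for n in (1, 5, 10, 20):
    print(f"{n:2d} flows: {100 * savings(n):.1f}% bandwidth saved, "
          f"pps reduced from {n * PPS_PER_FLOW} to {PPS_PER_FLOW}")
```

   Under these assumed figures the savings grow with the number of
   multiplexed flows (from roughly 17% for a single flow to over 50%
   for twenty flows), which matches the observation above that the
   tunnel overhead is amortized over many payloads.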
   The three layers are combined as shown in Figure 5:

      RTP/UDP/IP
          |
          | ----------------------------
          |
      ECRTP                 compressing layer
          |
          | ----------------------------
          |
      PPPMUX                multiplexing layer
          |
          | ----------------------------
          |
      L2TP                  tunneling layer
          |
          | ----------------------------
          |
      IP

                                Figure 5

1.7.  Current Proposal

   In contrast to the current standard [TCRTP], TCM allows other header
   compression protocols in addition to RTP/UDP, since services based
   on small packets also use bare UDP, as shown in Figure 6:

      UDP/IP    RTP/UDP/IP
          \       /
           \     /    ------------------------------
            \   /
      Nothing or ROHC or ECRTP or IPHC    header compressing layer
             |
             |        ------------------------------
             |
      PPPMux or other mux protocols       multiplexing layer
             |
            / \       ------------------------------
           /   \
          /     \
      GRE or L2TP  \                      tunneling layer
          |       MPLS
          |           ------------------------------
          IP

                                Figure 6

   Each of the three layers is considered independent of the other two,
   i.e., different combinations of protocols can be implemented
   according to the new proposal:

   o  Regarding compression, a number of options can be considered,
      since different standards are able to compress different headers
      ([cRTP], [ECRTP], [IPHC], [ROHC]).  The one to be used can be
      selected depending on the protocols used by the traffic to be
      compressed and the concrete scenario (packet loss percentage,
      delay, etc.).  There is also the possibility of null header
      compression, for cases where traffic compression is to be
      avoided, taking into account the need to store a context for
      every flow and the problems of context desynchronization in
      certain scenarios.  Although not shown in Figure 6, ESP
      (Encapsulating Security Payload [ESP]) headers can also be
      compressed.

   o  Multiplexing can be accomplished using PPP Multiplexing (PPPMux)
      [PPP-MUX].
      However, PPPMux introduces additional complexity, since it
      requires the use of PPP and a protocol for tunneling layer 2
      frames.  For this reason, other multiplexing protocols can also
      be considered, such as the one proposed in
      [I-D.saldana-tsvwg-simplemux].

   o  Tunneling is accomplished by using L2TP (Layer 2 Tunneling
      Protocol [L2TPv3]) over IP, GRE (Generic Routing Encapsulation
      [GRE]) over IP, or MPLS (Multiprotocol Label Switching
      Architecture [MPLS]).

   It can be observed that TCRTP [TCRTP] is included as an option in
   TCM, combining [ECRTP], [PPP-MUX] and [L2TPv3], so backwards
   compatibility with TCRTP is provided.  If a TCM optimizer implements
   ECRTP, PPPMux and L2TPv3, compatibility with RFC 4170 MUST be
   provided.

   If a single link is being optimized, a tunnel is unnecessary.  In
   that case, both optimizers MAY perform header compression between
   them.  Multiplexing may still be useful, since it reduces packets
   per second, which is interesting in some environments (e.g.,
   satellite).  Another reason for multiplexing is the desire to reduce
   energy consumption.  Although no tunnel is employed, this can still
   be considered TCM optimization, so TCM signaling protocols will be
   employed here in order to negotiate the compression and multiplexing
   parameters to be used.

   Payload compression schemes may also be used, but they are not the
   aim of this document.

2.  Protocol Operation

   This section describes how to combine protocols belonging to three
   layers (compressing, multiplexing, and tunneling), in order to save
   bandwidth for the considered flows.

2.1.  Models of implementation

   TCM can be implemented in different ways.
   The most straightforward is to implement it in the devices
   terminating the flows (these devices can be, e.g., voice gateways
   or proxies grouping a number of flows):

      [ending device]---[ending device]
                     ^
                     |
                TCM over IP

                            Figure 7

   Another way TCM can be implemented is with an external optimizer.
   This device can be placed at strategic places in the network and can
   dynamically create and destroy TCM sessions without the
   participation of the endpoints that generate the flows (Figure 8).

   [ending device]\                                  /[ending device]
                   \                                /
   [ending device]----[optimizer]-----[optimizer]-----[ending device]
                   /                                \
   [ending device]/                                  \[ending device]
          ^                    ^                           ^
          |                    |                           |
      Native IP           TCM over IP                  Native IP

                            Figure 8

   A number of already compressed flows can also be merged in a tunnel
   using an optimizer in order to increase the number of flows in a
   tunnel (Figure 9):

   [ending device]\                                   /[ending device]
                   \                                 /
   [ending device]----[optimizer]-----[optimizer]------[ending device]
                   /                                 \
   [ending device]/                                   \[ending device]
          ^                    ^                           ^
          |                    |                           |
      Compressed          TCM over IP                 Compressed

                            Figure 9

2.2. Choice of the compressing protocol

   There are different protocols that can be used for compressing IP
   flows:

   o  IPHC (IP Header Compression [IPHC]) permits the compression of
      UDP/IP and ESP/IP headers.  It has a low implementation
      complexity.  On the other hand, the resynchronization of the
      context can be slow over long RTT links, so it should be used in
      scenarios presenting a very low packet loss percentage.

   o  cRTP (compressed RTP [cRTP]) works the same way as IPHC, but is
      also able to compress RTP headers.  The link layer transport is
      not specified, but typically PPP is used.  For cRTP to compress
      headers, it must be implemented on each PPP link.
      A lot of context is required to successfully run cRTP, and memory
      and processing requirements are high, especially if multiple hops
      must implement cRTP to save bandwidth on each of them.  At higher
      line rates, cRTP's processor consumption becomes prohibitively
      expensive.  cRTP is not suitable over the long-delay WAN links
      commonly used when tunneling, as proposed by this document.  To
      avoid the per-hop expense of cRTP, a simplistic solution is to
      use cRTP with L2TP to achieve end-to-end cRTP.  However, cRTP is
      only suitable for links with low delay and low loss.  Thus, if
      multiple router hops are involved, cRTP's expectation of low
      delay and low loss can no longer be met.  Furthermore, packets
      can arrive out of order.

   o  ECRTP (Enhanced Compressed RTP [ECRTP]) is an extension of cRTP
      [cRTP] that provides tolerance to packet loss and packet
      reordering between compressor and decompressor.  Thus, ECRTP
      should be used instead of cRTP when possible (e.g., when the two
      TCM optimizers implement ECRTP).

   o  ROHC (RObust Header Compression [ROHC]) is able to compress
      UDP/IP, ESP/IP and RTP/UDP/IP headers.  It is a robust scheme
      developed for header compression over links with a high bit error
      rate, such as wireless ones.  It incorporates mechanisms for
      quick resynchronization of the context, and includes an improved
      encoding scheme for compressing the header fields that change
      dynamically.  Its main drawback is that it requires significantly
      more processing and memory resources than IPHC or ECRTP.

   The present document does not determine which of the existing
   protocols has to be used for the compressing layer.  The decision
   will depend on the scenario and the service being optimized.  It
   will also be determined by the packet loss probability, RTT, jitter,
   and the availability of memory and processing resources.
   The standard is also suitable to include other compressing schemes
   that may be developed in the future.

2.2.1. Context Synchronization in ECRTP

   When the compressor receives an RTP packet that has an unpredicted
   change in the RTP header, the compressor should send a
   COMPRESSED_UDP packet (described in [ECRTP]) to synchronize the
   ECRTP decompressor state.  The COMPRESSED_UDP packet updates the RTP
   context in the decompressor.

   To ensure delivery of updates of context variables, COMPRESSED_UDP
   packets should be delivered using the robust operation described in
   [ECRTP].

   Because the "twice" algorithm described in [ECRTP] relies on UDP
   checksums, the IP stack on the RTP transmitter should transmit UDP
   checksums.  If UDP checksums are not used, the ECRTP compressor
   should use the cRTP Header checksum described in [ECRTP].

2.2.2. Context Synchronization in ROHC

   ROHC [ROHC] includes a more complex mechanism to maintain context
   synchronization.  It has different operation modes and defines
   compressor states which change depending on link behavior.

2.3. Multiplexing

   Header compression algorithms require a layer 2 protocol that allows
   different protocols to be identified.  PPP [PPP] is suited for this,
   although other multiplexing protocols can also be used for this
   layer of TCM.  For example, Simplemux [I-D.saldana-tsvwg-simplemux]
   can be employed as a light multiplexing protocol able to carry
   packets belonging to different protocols.

   When header compression is used inside a tunnel, it reduces the size
   of the headers of the IP packets carried in the tunnel.  However,
   the tunnel itself has overhead due to its IP header and the tunnel
   header (the information necessary to identify the tunneled payload).
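   The trade-off between compression savings and tunnel overhead can be
   illustrated with a short back-of-the-envelope calculation.  The
   following Python sketch is not part of the TCM specification; all
   header sizes in it (40-byte native RTP/UDP/IPv4 headers, a 2-byte
   compressed header, a 24-byte outer IP plus tunnel header, and a
   2-byte per-payload multiplexing separator) are illustrative
   assumptions chosen as typical values.

```python
# Illustrative estimate of TCM bandwidth savings; NOT part of the
# specification.  All header sizes are assumed typical values for an
# RTP/UDP/IPv4 small-packet (e.g., VoIP) scenario.

NATIVE_HDR = 40   # native RTP (12) + UDP (8) + IPv4 (20) headers, bytes
COMPR_HDR = 2     # assumed size of a compressed header (e.g., ECRTP/ROHC)
TUNNEL_HDR = 24   # assumed outer IPv4 header (20) + small tunnel header (4)
MUX_SEP = 2       # assumed per-payload multiplexing separator

def tcm_size(payload: int, flows: int) -> int:
    """Bytes on the wire for one multiplexed packet carrying one
    compressed payload from each of 'flows' flows."""
    return TUNNEL_HDR + flows * (MUX_SEP + COMPR_HDR + payload)

def native_size(payload: int, flows: int) -> int:
    """Bytes on the wire for the same payloads sent as separate
    native packets."""
    return flows * (NATIVE_HDR + payload)

def savings(payload: int, flows: int) -> float:
    """Fraction of bandwidth saved by compressing and multiplexing."""
    return 1.0 - tcm_size(payload, flows) / native_size(payload, flows)

if __name__ == "__main__":
    # Savings grow with the number of multiplexed flows, since the
    # tunnel overhead is shared among all of them.
    for n in (1, 5, 20):
        print(f"{n:2d} flows, 20-byte payloads: {savings(20, n):.0%} saved")
```

   With these assumed values, a single flow saves roughly 20% while
   twenty multiplexed flows save close to 60%, which matches the
   observation that many simultaneous small-packet flows are needed to
   amortize the tunnel overhead.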
   By multiplexing multiple small payloads in a single tunneled packet,
   reasonable bandwidth efficiency can be achieved, since the tunnel
   overhead is shared by multiple packets belonging to the flows active
   between the source and destination of an L2TP tunnel.  The packet
   size of the flows has to be small in order to permit good bandwidth
   savings.

   If the source and destination of the tunnel are the same as the
   source and destination of the compressing protocol sessions, then
   the source and destination must have multiple active small-packet
   flows to get any benefit from multiplexing.

   Because of this, TCM is mostly useful for applications where many
   small-packet flows run between a pair of hosts.  The number of
   simultaneous sessions required to reduce the header overhead to the
   desired level depends on the average payload size, and also on the
   size of the tunnel header.  A smaller tunnel header will result in
   fewer simultaneous sessions being required to produce adequate
   bandwidth efficiency.

   When multiplexing, a limit on the packet size has to be established
   in order to avoid problems related to the MTU.  This document does
   not establish any rule about this, but it is strongly recommended
   that some method such as Packetization Layer Path MTU Discovery
   [RFC4821] is used before multiplexing packets.

2.4. Tunneling

   Different tunneling schemes can be used for sending the compressed
   payloads end to end.

2.4.1. Tunneling schemes over IP: L2TP and GRE

   L2TP tunnels should be used to tunnel the compressed payloads end to
   end.  L2TP includes methods for tunneling messages used in PPP
   session establishment, such as NCP (Network Control Protocol).  This
   allows [IPCP-HC] to negotiate ECRTP compression/decompression
   parameters.

   Other tunneling schemes, such as GRE [GRE], may also be used to
   implement the tunneling layer of TCM.

2.4.2. MPLS tunneling

   In some scenarios, mainly in operators' core networks, MPLS is
   widely deployed as a data transport method.  The adoption of MPLS as
   the tunneling layer in this proposal intends to natively adapt TCM
   to those transport networks.

   In the same way as layer 3 tunnels, MPLS paths, identified by MPLS
   labels and established between Label Edge Routers (LERs), can be
   used to transport the compressed payloads within an MPLS network.
   In this case, the multiplexing layer must be placed directly over
   the MPLS layer.  Note that layer 3 tunnel headers do not have to be
   used, with the consequent improvement in data efficiency.

2.5. Encapsulation Formats

   The packet format for a compressed packet is:

   +------------+-----------------------+
   |            |                       |
   |   Compr    |                       |
   |   Header   |         Data          |
   |            |                       |
   |            |                       |
   +------------+-----------------------+

                            Figure 10

   The packet format of a multiplexed PPP packet, as defined by
   [PPP-MUX], is:

   +-------+---+------+-------+-----+   +---+------+-------+-----+
   | Mux   |P L|      |       |     |   |P L|      |       |     |
   | PPP   |F X| Len1 | PPP   |     |   |F X| LenN | PPP   |     |
   | Prot. |F T|      | Prot. |Info1| ~ |F T|      | Prot. |InfoN|
   | Field |   |      |Field1 |     |   |   |      |FieldN |     |
   |  (1)  |1-2 octets| (0-2) |     |   |1-2 octets| (0-2) |     |
   +-------+----------+-------+-----+   +----------+-------+-----+

                            Figure 11

   The combined format used for TCM with a single payload is all of the
   above packets concatenated.
   Here is an example with one payload, using L2TP or GRE tunneling:

   +------+------+-------+----------+-------+--------+----+
   |  IP  |Tunnel| Mux   |P L|      |       |        |    |
   |header|header| PPP   |F X| Len1 | PPP   | Compr  |    |
   | (20) |      | Proto |F T|      | Proto | header |Data|
   |      |      | Field |          | Field1|        |    |
   |      |      |  (1)  |1-2 octets| (0-2) |        |    |
   +------+------+-------+----------+-------+--------+----+
          |<------------- IP payload -------------------->|
                 |<-------- Mux payload --------->|

                            Figure 12

   If the tunneling technology is MPLS, the scheme would be:

   +------+-------+----------+-------+--------+----+
   | MPLS | Mux   |P L|      |       |        |    |
   |header| PPP   |F X| Len1 | PPP   | Compr  |    |
   |      | Proto |F T|      | Proto | header |Data|
   |      | Field |          | Field1|        |    |
   |      |  (1)  |1-2 octets| (0-2) |        |    |
   +------+-------+----------+-------+--------+----+
          |<---------- MPLS payload -------------->|
          |<-------- Mux payload --------->|

                            Figure 13

   If the tunnel contains multiplexed traffic, multiple PPPMux payloads
   are transmitted in one IP packet.

3. Contributing Authors

   Gonzalo Camarillo
   Ericsson
   Advanced Signalling Research Lab.
   FIN-02420 Jorvas
   Finland

   Email: Gonzalo.Camarillo@ericsson.com

   Michael A. Ramalho
   Cisco Systems, Inc.
   6310 Watercrest Way, Unit 203
   Lakewood Ranch, FL 34202
   USA

   Phone: +1.732.832.9723
   Email: mramalho@cisco.com

   Jose Ruiz Mas
   University of Zaragoza
   Dpt. IEC Ada Byron Building
   50018 Zaragoza
   Spain

   Phone: +34 976762158
   Email: jruiz@unizar.es

   Diego Lopez Garcia
   Telefonica I+D
   Ramon de la Cruz 84
   28006 Madrid
   Spain

   Phone: +34 913129041
   Email: diego@tid.es

   David Florez Rodriguez
   Telefonica I+D
   Ramon de la Cruz 84
   28006 Madrid
   Spain

   Phone: +34 91312884
   Email: dflorez@tid.es

   Manuel Nunez Sanz
   Telefonica I+D
   Ramon de la Cruz 84
   28006 Madrid
   Spain

   Phone: +34 913128821
   Email: mns@tid.es

   Juan Antonio Castell Lucia
   Telefonica I+D
   Ramon de la Cruz 84
   28006 Madrid
   Spain

   Phone: +34 913129157
   Email: jacl@tid.es

   Mirko Suznjevic
   University of Zagreb
   Faculty of Electrical Engineering and Computing, Unska 3
   10000 Zagreb
   Croatia

   Phone: +385 1 6129 755
   Email: mirko.suznjevic@fer.hr

4. Acknowledgements

   Jose Saldana, Julian Fernandez Navajas and Jose Ruiz Mas were funded
   by the EU H2020 Wi-5 project (Grant Agreement no: 644262).

5. IANA Considerations

   This memo includes no request to IANA.

6. Security Considerations

   The most straightforward option for securing a number of non-secured
   flows sharing a path is the use of IPsec [IPsec], when TCM employing
   an IP tunnel is used.  Instead of adding a security header to the
   packets of each native flow, and then compressing and multiplexing
   them, a single IPsec tunnel can be used in order to secure all the
   flows together, thus achieving a higher efficiency.  This use of
   IPsec protects the packets only within the transport network between
   tunnel ingress and egress and therefore does not provide end-to-end
   authentication or encryption.
   When a number of already secured flows including ESP [ESP] headers
   are optimized by means of TCM, and the addition of further security
   is not necessary, their ESP/IP headers can still be compressed using
   suitable algorithms [RFC5225], in order to improve the efficiency.
   This header compression does not change the end-to-end security
   model.

   The resilience of TCM to denial of service, and the use of TCM to
   deny service to other parts of the network infrastructure, are for
   future study.

7. References

7.1. Normative References

   [cRTP]     Casner, S. and V. Jacobson, "Compressing IP/UDP/RTP
              Headers for Low-Speed Serial Links", RFC 2508, 1999.

   [ECRTP]    Koren, T., Casner, S., Geevarghese, J., Thompson, B., and
              P. Ruddy, "Enhanced Compressed RTP (CRTP) for Links with
              High Delay, Packet Loss and Reordering", RFC 3545, 2003.

   [ESP]      Kent, S., "IP Encapsulating Security Payload", RFC 4303,
              2005.

   [GRE]      Farinacci, D., Li, T., Hanks, S., Meyer, D., and P.
              Traina, "Generic Routing Encapsulation (GRE)", RFC 2784,
              2000.

   [H.323]    International Telecommunication Union, "Recommendation
              H.323: Packet-based multimedia communication systems",
              July 2003.

   [IPCP-HC]  Engan, M., Casner, S., Bormann, C., and T. Koren, "IP
              Header Compression over PPP", RFC 3544, 2003.

   [IPHC]     Degermark, M., Nordgren, B., and S. Pink, "IP Header
              Compression", RFC 2507, 1999.

   [IPsec]    Kent, S. and K. Seo, "Security Architecture for the
              Internet Protocol", RFC 4301, December 2005.

   [L2TPv3]   Lau, J., Townsley, M., and I. Goyret, "Layer Two
              Tunneling Protocol - Version 3 (L2TPv3)", RFC 3931, 2005.

   [MPLS]     Rosen, E., Viswanathan, A., and R. Callon, "Multiprotocol
              Label Switching Architecture", RFC 3031, January 2001.

   [PPP]      Simpson, W., "The Point-to-Point Protocol (PPP)", RFC
              1661, 1994.

   [PPP-MUX]  Pazhyannur, R., Ali, I., and C. Fox, "PPP Multiplexing",
              RFC 3153, 2001.

   [RFC2119]  Bradner, S., "Key words for use in RFCs to Indicate
              Requirement Levels", BCP 14, RFC 2119, March 1997.

   [RFC4821]  Mathis, M. and J. Heffner, "Packetization Layer Path MTU
              Discovery", RFC 4821, March 2007.

   [RFC5225]  Pelletier, G. and K. Sandlund, "RObust Header Compression
              Version 2 (ROHCv2): Profiles for RTP, UDP, IP, ESP and
              UDP-Lite", RFC 5225, April 2008.

   [RFC7252]  Shelby, Z., Hartke, K., and C. Bormann, "The Constrained
              Application Protocol (CoAP)", RFC 7252, June 2014.

   [ROHC]     Sandlund, K., Pelletier, G., and L-E. Jonsson, "The
              RObust Header Compression (ROHC) Framework", RFC 5795,
              2010.

   [RTP]      Schulzrinne, H., Casner, S., Frederick, R., and V.
              Jacobson, "RTP: A Transport Protocol for Real-Time
              Applications", RFC 3550, 2003.

   [SCTP]     Stewart, R., Ed., "Stream Control Transmission Protocol",
              RFC 4960, 2007.

   [SIP]      Rosenberg, J., Schulzrinne, H., Camarillo, G., et al.,
              "SIP: Session Initiation Protocol", RFC 3261, June 2002.

   [TCRTP]    Thompson, B., Koren, T., and D. Wing, "Tunneling
              Multiplexed Compressed RTP (TCRTP)", RFC 4170, 2005.

7.2. Informative References

   [Efficiency]
              Bolla, R., Bruschi, R., Davoli, F., and F. Cucchietti,
              "Energy Efficiency in the Future Internet: A Survey of
              Existing Approaches and Trends in Energy-Aware Fixed
              Network Infrastructures", IEEE Communications Surveys and
              Tutorials, vol. 13, no. 2, pp. 223-244, 2011.

   [First-person]
              Ratti, S., Hariri, B., and S. Shirmohammadi, "A Survey of
              First-Person Shooter Gaming Traffic on the Internet",
              IEEE Internet Computing, vol. 14, no. 5, pp. 60-69, 2010.

   [FPS_opt]  Saldana, J., Fernandez-Navajas, J., Ruiz-Mas, J., Aznar,
              J., Viruete, E., and L. Casadesus, "First Person
              Shooters: Can a Smarter Network Save Bandwidth without
              Annoying the Players?", IEEE Communications Magazine,
              vol. 49, no. 11, pp. 190-198, 2011.

   [Gamers]   Oliveira, M. and T. Henderson, "What online gamers really
              think of the Internet?", NetGames '03: Proceedings of the
              2nd Workshop on Network and System Support for Games, ACM
              New York, NY, USA, pp. 185-193, 2003.

   [I-D.saldana-tsvwg-simplemux]
              Saldana, J., "Simplemux. A generic multiplexing
              protocol", draft-saldana-tsvwg-simplemux-02 (work in
              progress), January 2015.

   [I-D.suznjevic-tsvwg-mtd-tcmtf]
              Suznjevic, M. and J. Saldana, "Delay Limits and
              Multiplexing Policies to be employed with Tunneling
              Compressed Multiplexed Traffic Flows", draft-suznjevic-
              tsvwg-mtd-tcmtf-03 (work in progress), June 2014.

   [Online]   Feng, WC., Chang, F., Feng, W., and J. Walpole, "A
              traffic characterization of popular on-line games",
              IEEE/ACM Transactions on Networking, vol. 13, no. 3, pp.
              488-500, 2005.

   [Power]    Chabarek, J., Sommers, J., Barford, P., Estan, C.,
              Tsiang, D., and S. Wright, "Power Awareness in Network
              Design and Routing", INFOCOM 2008: The 27th Conference on
              Computer Communications, IEEE, pp. 457-465, 2008.

   [topology_CNs]
              Vega, D., Cerda-Alabern, L., Navarro, L., and R.
              Meseguer, "Topology patterns of a community network:
              Guifi.net", Proceedings of Wireless and Mobile Computing,
              Networking and Communications (WiMob), 2012 IEEE 8th
              International Conference on, pp. 612-619, 2012.

   [VoIP_opt] Saldana, J., Fernandez-Navajas, J., Ruiz-Mas, J.,
              Murillo, J., Viruete, E., and J. Aznar, "Evaluating the
              Influence of Multiplexing Schemes and Buffer
              Implementation on Perceived VoIP Conversation Quality",
              Computer Networks (Elsevier), Volume 6, Issue 11, pp.
              2920-2939, Nov. 30, 2012.

Authors' Addresses

   Jose Saldana
   University of Zaragoza
   Dpt. IEC Ada Byron Building
   Zaragoza 50018
   Spain

   Phone: +34 976 762 698
   Email: jsaldana@unizar.es

   Dan Wing
   Cisco Systems
   771 Alder Drive
   San Jose, CA 95035
   US

   Phone: +44 7889 488 335
   Email: dwing@cisco.com

   Julian Fernandez Navajas
   University of Zaragoza
   Dpt. IEC Ada Byron Building
   Zaragoza 50018
   Spain

   Phone: +34 976 761 963
   Email: navajas@unizar.es

   Muthu Arul Mozhi Perumal
   Ericsson
   Ferns Icon
   Doddanekundi, Mahadevapura
   Bangalore, Karnataka 560037
   India

   Email: muthu.arul@gmail.com

   Fernando Pascual Blanco
   Telefonica I+D
   Ramon de la Cruz 84
   Madrid 28006
   Spain

   Phone: +34 913128779
   Email: fpb@tid.es