Network Working Group                                        M. Bagnulo
Internet-Draft                                                      UC3M
Intended status: Informational                                 D. Dolson
Expires: September 18, 2016                                     Sandvine
                                                          March 17, 2016

             NFVI PoP Network Topology: Problem Statement
                    draft-bagnulo-nfvrg-topology-01

Abstract

   This document describes considerations for the design of the
   interconnection network of an NFVI PoP.

Status of This Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF).  Note that other groups may also distribute
   working documents as Internet-Drafts.  The list of current Internet-
   Drafts is at http://datatracker.ietf.org/drafts/current/.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."

   This Internet-Draft will expire on September 18, 2016.

Copyright Notice

   Copyright (c) 2016 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document.  Code Components extracted from this
   document must include Simplified BSD License text as described in
   Section 4.e of the Trust Legal Provisions and are provided without
   warranty as described in the Simplified BSD License.

Table of Contents

   1.  Introduction  . . . . . . . . . . . . . . . . . . . . . . . .  2
   2.  Considerations for the design of the NFVI PoP network
       topology  . . . . . . . . . . . . . . . . . . . . . . . . . .  3
     2.1.  External links  . . . . . . . . . . . . . . . . . . . . .  3
     2.2.  Number of servers . . . . . . . . . . . . . . . . . . . .  3
     2.3.  Traffic patterns  . . . . . . . . . . . . . . . . . . . .  4
       2.3.1.  Macroscopic behaviour . . . . . . . . . . . . . . . .  4
       2.3.2.  Traffic pattern within the PoP  . . . . . . . . . . .  5
     2.4.  Technological considerations  . . . . . . . . . . . . . .  8
       2.4.1.  Direct and Indirect networks  . . . . . . . . . . . .  8
       2.4.2.  SFC technology  . . . . . . . . . . . . . . . . . . .  8
       2.4.3.  Network Virtualization Technology . . . . . . . . . .  8
       2.4.4.  Software or Hardware Switching  . . . . . . . . . . .  9
   3.  Design goals  . . . . . . . . . . . . . . . . . . . . . . . .  9
     3.1.  Effective load  . . . . . . . . . . . . . . . . . . . . .  9
     3.2.  Latency . . . . . . . . . . . . . . . . . . . . . . . . . 10
     3.3.  Scalability . . . . . . . . . . . . . . . . . . . . . . . 10
     3.4.  Fault Tolerance . . . . . . . . . . . . . . . . . . . . . 11
     3.5.  Cost  . . . . . . . . . . . . . . . . . . . . . . . . . . 12
     3.6.  Backward compatibility  . . . . . . . . . . . . . . . . . 12
   4.  Topologies  . . . . . . . . . . . . . . . . . . . . . . . . . 12
   5.  Security considerations . . . . . . . . . . . . . . . . . . . 12
   6.  IANA Considerations . . . . . . . . . . . . . . . . . . . . . 12
   7.  Acknowledgments . . . . . . . . . . . . . . . . . . . . . . . 12
   8.  Informative References  . . . . . . . . . . . . . . . . . . . 13
   Authors' Addresses  . . . . . . . . . . . . . . . . . . . . . . . 13

1.  Introduction

   An NFVI PoP is defined as a "single geographic location where a
   number of NFVI-Nodes are sited", where an NFVI-Node is "a physical
   device deployed and managed as a single entity providing the NFVI
   functions required to support the execution environment for VNFs"
   [ETSI_GS_NFV-INF_001].  In other words, an NFVI PoP is the premises
   where the processing, storage and networking resources (i.e.,
   servers and switches) used to execute the virtual network functions
   (VNFs) are deployed.  The servers and switches in an NFVI PoP will
   be interconnected, forming the NFVI PoP interconnection network.
   The goal of this document is to explore the different design
   considerations for the NFVI PoP interconnection network topology,
   including design goals and constraints.

   The NFVI PoP is essentially a data center, and the NFVI PoP
   interconnection network is essentially a data center network.  As
   such, it is only natural to use the current state of the art in
   data center networking as a starting point for the design of the
   NFVI PoP network.

2.  Considerations for the design of the NFVI PoP network topology

   This section describes different pieces of information that are
   relevant input for the design of the NFVI PoP network topology.  In
   some cases, the information is known (and sometimes readily
   available), while in other cases, the information is not known at
   this stage.

2.1.  External links

   The NFVI PoP is part of the operator's infrastructure and as such
   it is connected to the rest of the operator's network.  Information
   about the number of links and their respective capacities is
   naturally required in order to properly design the NFVI PoP
   topology.  Different types of PoPs have different numbers of links
   with different capacities to connect to the rest of the network.
   In particular, the so-called "local PoPs" terminate the links from
   end users (DSL lines, FTTH or other access technologies) and also
   connect to the rest of the operator's network.  The "regional PoPs"
   or "regional data centers" have links to the local PoPs, to other
   regional PoPs and to other parts of the operator's infrastructure.

   For instance, a local PoP in a DSL access network can have between
   15,000 and 150,000 DSL lines with speeds between 10 Mbps and 100
   Mbps, and tens of links to the core network of the operator with
   capacities between 20 Gbps and 80 Gbps.

   It would be useful to confirm these numbers and to have information
   about other types of PoPs.
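   For illustration purposes only, the following sketch aggregates the
   ranges above into total access-side and core-side capacities.  The
   number of core links (10 to 50) is an assumption standing in for
   "tens of links"; all other values are the illustrative ranges
   quoted above, not measurements.

      # Rough aggregate-capacity estimate for a hypothetical local
      # PoP, using the illustrative ranges quoted in this section.

      def aggregate_gbps(links, rate_mbps):
          """Total capacity in Gbps of 'links' links of 'rate_mbps'."""
          return links * rate_mbps / 1000.0

      # Access side: 15,000..150,000 DSL lines at 10..100 Mbps each.
      access_low = aggregate_gbps(15_000, 10)      #    150 Gbps
      access_high = aggregate_gbps(150_000, 100)   # 15,000 Gbps

      # Core side: "tens" of links at 20..80 Gbps (assume 10..50).
      core_low, core_high = 10 * 20, 50 * 80       # 200..4,000 Gbps

      print(f"access: {access_low:.0f}-{access_high:.0f} Gbps")
      print(f"core:   {core_low}-{core_high} Gbps")

   Even such a rough comparison bounds the cross-through load the PoP
   network can be offered, since that traffic cannot exceed the
   smaller of the two aggregates.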
2.2.  Number of servers

   While knowing the exact number of servers is not required to design
   the PoP network topology, knowing the order of magnitude of the
   number of servers is at least useful.  If the resulting topology
   has tens of servers, then the topology is likely to be very simple
   (e.g., a tree-like topology with access/aggregation/core switches
   may be suitable).  On the other hand, if the topology must
   encompass several hundreds or even a few thousands of servers, then
   the problem is more challenging, as we are likely to reach the
   available capacity of existing switches and more sophisticated
   topologies may be required.

   The number of servers in a PoP depends on several factors,
   including the number and capacity of the external links (i.e., the
   load offered to the PoP), the number and type of Virtual Network
   Functions that will be provided by the PoP, the performance of the
   VNF implementations, and the number and length of the service
   function chains that will be provided.

   The number of external links is discussed in the previous section.
   The number and capacity of the external links are relevant to
   determine the number of servers because they carry the load offered
   to the PoP.  In other words, traffic coming through the external
   links requires processing by the different VNFs hosted in the
   servers, influencing the number of servers needed.

   The number of different VNFs provided in the PoP, as well as the
   number and length of the service function chains provided in the
   PoP, also influences the number of servers required.  The more
   demanding the VNFs provided, the more servers are needed to provide
   them, and the longer the service function chains, the higher the
   number of servers required to support them.

   Finally, the performance of the VNF implementations also affects
   the number of servers required in a PoP.  In particular, some VNF
   implementations are capable of processing at line speed, while
   implementations of other VNFs are not, requiring additional servers
   to provide the VNF for the same line speed.  While there is some
   initial work assessing the performance of different VNFs (e.g.,
   [swBRAS]), more work is needed to have a full picture for the
   different VNFs at different line speeds.

   Overall, we need a rough estimate of the range of the number of
   servers that will be part of the PoP network in order to produce a
   successful design, and we need to take into account the
   aforementioned considerations to obtain it.
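   As a purely illustrative aid, the sketch below combines these
   factors into a first-order estimate of the server count.  The
   offered load, average SFC length, per-server VNF throughput and
   headroom factor are hypothetical placeholders that would have to be
   replaced with measured values (e.g., of the kind reported in
   [swBRAS]).

      import math

      def estimate_servers(offered_load_gbps, avg_sfc_length,
                           vnf_throughput_gbps, headroom=0.7):
          """First-order server-count estimate for a PoP.

          offered_load_gbps   : peak load offered on external links
          avg_sfc_length      : average number of VNFs per chain
          vnf_throughput_gbps : sustainable throughput of one VNF
                                instance on one server
          headroom            : fraction of server capacity used
          """
          # Every Gbps of offered load is processed once per VNF in
          # its service function chain.
          demand = offered_load_gbps * avg_sfc_length
          per_server = vnf_throughput_gbps * headroom
          return math.ceil(demand / per_server)

      # Hypothetical example: 400 Gbps offered load, chains of 4
      # VNFs, 20 Gbps sustainable per server.
      print(estimate_servers(400, 4, 20))   # -> 115 servers

   Such an estimate only bounds the order of magnitude; it ignores the
   SFC deployment strategy, locality and intra-PoP traffic discussed
   in the following sections.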
2.3.  Traffic patterns

   The pattern of the traffic expected in the NFVI PoP network is of
   course essential to properly design the network topology.  In this
   section we describe different characteristics of the traffic
   pattern that we believe are relevant and about which it would be
   useful to have information.

2.3.1.  Macroscopic behaviour

   There are essentially four types of traffic within an NFVI PoP
   network, namely cross-through traffic, intra-PoP traffic,
   PoP-generated traffic and PoP-terminated traffic, as depicted in
   Figure 1.

    external  +---------------------------------+  external
    links     |             NFVI PoP            |  links
   -----------|                                 |-----------
   -----------|                                 |-----------
   >-->----->-----cross-through traffic----->---->------>----->
   -----------|                                 |-----------
   >---->----->----PoP-terminated traffic       |
   -----------|                                 |-----------
   -----------|   PoP-generated traffic----->----->----->
   -----------|                                 |-----------
              |    <----Intra-PoP traffic--->   |
              +---------------------------------+

          Figure 1: Types of traffic in an NFVI PoP network

   Cross-through traffic is traffic that reaches the PoP through an
   external link, is processed by a service function chain (i.e., by a
   number of VNFs inside the PoP) and is then forwarded through an
   external link.  Processing this type of traffic is one of the main
   purposes of the PoP, since the PoP is part of the operator's
   infrastructure, whose main purpose is to forward users' traffic.

   PoP-generated traffic is generated by VNFs located within the PoP.
   An example of such a VNF would be a cache located inside the PoP
   which serves content to users.  Similarly, PoP-terminated traffic
   is external traffic that is terminated by a VNF located inside the
   PoP, for example a firewall.

   Finally, intra-PoP traffic is traffic generated and terminated
   inside the PoP that never leaves the PoP.  This traffic includes
   much of the management traffic, the traffic involved in deploying
   and moving virtual machines and VNFs across different servers, and
   other signaling traffic (e.g., the traffic associated with voice
   calls).

   In order to properly design the PoP network topology, it is
   relevant to know the distribution of the expected traffic among
   these categories.
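   The four categories can be summarized as a function of whether a
   flow enters and/or leaves the PoP through an external link.  The
   sketch below merely restates Figure 1 in code form; the Flow record
   and its fields are hypothetical.

      # Classify a flow into the four traffic types of Figure 1 based
      # only on whether its ingress and egress are external links.
      from dataclasses import dataclass

      @dataclass
      class Flow:
          enters_on_external_link: bool   # arrives from outside
          leaves_on_external_link: bool   # departs to the outside

      def classify(flow: Flow) -> str:
          ext_in = flow.enters_on_external_link
          ext_out = flow.leaves_on_external_link
          if ext_in and ext_out:
              return "cross-through"   # e.g., user traffic via an SFC
          if ext_in:
              return "PoP-terminated"  # e.g., consumed by a firewall
          if ext_out:
              return "PoP-generated"   # e.g., served from a cache
          return "intra-PoP"           # e.g., management, VM moves

      print(classify(Flow(True, True)))    # -> cross-through
      print(classify(Flow(False, False)))  # -> intra-PoP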
2.3.2.  Traffic pattern within the PoP

   The traffic within the PoP is composed of essentially two types of
   traffic:

   1.  The traffic served by the PoP.  This is the traffic coming from
       and/or going to external links that must traverse a number of
       servers where the different VNFs are placed.  It includes the
       cross-through traffic, the PoP-generated traffic and the
       PoP-terminated traffic.

   2.  The operation and management traffic, which includes all the
       traffic resulting from the management of the virtual machines
       and the VNFs, as well as the signaling traffic required to
       provide the VNFs.  This is the intra-PoP traffic.

   The pattern of the traffic served by the PoP is basically
   determined by the location of the input link, the location of the
   output link and the mapping of the service function chains to
   servers.

2.3.2.1.  Mapping of service function chains to servers

   There are multiple possible strategies to deploy VNFs and SFCs in
   servers (an illustrative sketch contrasting them is given at the
   end of this subsection):

   o  Parallel SFC deployment strategy: One possible approach is to
      deploy all the VNFs of a given service function chain in a
      single server and to deploy as many of these servers in parallel
      as needed to serve the different flows.  When more flows arrive
      at the PoP, more servers are used in parallel.

   o  Sequential SFC deployment strategy: Another possible approach is
      to deploy each VNF in a different server and have one (or more)
      servers dedicated to processing this particular VNF for all the
      flows of the PoP.  When the number of flows increases, the
      number of servers providing each VNF is also increased.

   o  Hybrid strategy: It is also possible to use a hybrid strategy,
      where several VNFs of the SFC are deployed together in a server
      and other VNFs of the SFC are deployed in separate servers.

   There are many factors that influence this decision, including the
   performance of the VNF implementation (maybe the VNF is too
   demanding to be executed together with other VNFs in the same
   server) or licensing conditions (maybe some VNF licenses are based
   on the number of servers deployed, while others depend on the
   number of users served, or even on the time the VNF is being
   executed).

   In any case, to design the PoP topology it would be relevant to
   know:

      The number of servers that the traffic served by the PoP will
      traverse (which is determined by the length of the SFCs and the
      deployment strategy of SFCs in servers).

      The number of different SFCs that will be simultaneously
      available in the PoP at any point in time.  At any point in
      time, different flows coming from a particular external link can
      be served by one or more different SFCs.  These SFCs can be
      mapped to different sequences of servers.  Depending on this,
      different flows coming from any external link will have to
      traverse different sequences of servers, affecting the intra-PoP
      traffic pattern.
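   The sketch below contrasts the parallel and sequential strategies
   by listing, for a hypothetical three-VNF chain, the servers that a
   given flow would traverse under each strategy.  The VNF names,
   server names and round-robin flow placement are illustrative
   assumptions only.

      # Servers traversed by one flow under the two pure strategies.
      SFC = ["firewall", "nat", "dpi"]     # hypothetical chain

      def parallel_path(flow_id, num_replicas):
          """All VNFs of the chain run on one server; flows are
          spread over num_replicas identical servers."""
          server = "srv-%d" % (flow_id % num_replicas)
          return [server] * len(SFC)       # one server, visited once

      def sequential_path(flow_id, replicas_per_vnf):
          """Each VNF has its own pool of servers; a flow visits one
          server per VNF of the chain."""
          return ["%s-srv-%d" % (vnf, flow_id % replicas_per_vnf)
                  for vnf in SFC]

      print(parallel_path(7, num_replicas=4))
      # -> ['srv-3', 'srv-3', 'srv-3']
      print(sequential_path(7, replicas_per_vnf=2))
      # -> ['firewall-srv-1', 'nat-srv-1', 'dpi-srv-1']

   Under the parallel strategy a served flow causes little or no
   server-to-server traffic, while under the sequential strategy every
   flow crosses the PoP network once per VNF of its chain, which
   directly shapes the traffic pattern within the PoP.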
2.3.2.2.  Locality

   There are two locality aspects that affect the pattern of the
   traffic served by the PoP.  The first is whether the servers
   providing the different VNFs of each SFC can be assumed to be
   topologically close (e.g., in the same rack).  If the SFCs that
   process the majority of the flows can be assumed to be
   topologically close, topologies that exploit locality can be
   useful.

   The other locality-related aspect that affects the topology design
   is the distribution of output links of the traffic arriving through
   the different input links.  Consider the case of a local PoP, which
   has links connecting to users (DSL, FTTH, etc.) and links
   connecting to the rest of the provider's network.  Let's call the
   first type of link "user links" and the second type "core links".
   It is reasonable to assume that most of the traffic coming from a
   user link will go to a core link and vice versa.  We can expect the
   traffic between two user links to be low, and the same for the
   traffic between two core links.  If we now consider the case of a
   regional PoP, it is not so clear that we can make such an
   assumption about the traffic between links.  In case this
   assumption can be made, it would be possible to design the topology
   to pair user links with core links in order to optimize the transit
   between them.

2.3.2.3.  Churn

   There is also the question of how often the provided SFCs will
   change and how frequently VNFs and virtual machines will be
   deployed in servers.  This affects the amount of churn traffic in
   the PoP.  There may be more to it...?

2.3.2.4.  Growth

   Another relevant aspect is the expected growth in terms of the load
   offered to the PoP and also in terms of the VNFs in the PoP.  We
   should understand whether the capacity of the PoP is expected to
   increase linearly or exponentially in time.  Similarly, we need to
   understand whether the number of VNFs and the length of the SFCs
   will remain more or less constant or will evolve and, if they do
   evolve, at what expected pace.  The reason for this is that
   different topologies support growth in different manners, so
   depending on the expectations in these aspects, different
   topologies may be more or less suitable.

2.4.  Technological considerations

2.4.1.  Direct and Indirect networks

   A network is called an Indirect network if there are two types of
   nodes: nodes that source/sink traffic and nodes that forward
   traffic.  A network is called a Direct network if every node plays
   both roles.  Usually data center networks are Indirect networks,
   with switches that forward packets and servers that source/sink
   packets.  While there are proposals that use both switches and
   servers to forward packets (e.g., [BCube]), the main concern
   expressed against them is that the resources available in the
   servers should be used to execute applications (which is the final
   goal of the data center) rather than to forward packets.

   In the case of an NFVI PoP network, the actual purpose of the
   servers is in many cases to forward packets through the VNFs
   provided by the server, so it may make perfect sense to use servers
   to forward packets.  From this perspective, either Direct networks
   or networks that use both switches and servers to forward packets
   may be attractive for NFVI PoPs.

2.4.2.  SFC technology

   Service Function Chaining can be accomplished using the IETF SFC
   architecture [I-D.ietf-sfc-architecture] or using an SDN approach,
   where a controller instructs the switches where to forward the
   different flows using OpenFlow.  The two approaches have different
   architectures with different components, and it is possible that
   different topologies accommodate the elements of the different SFC
   architectures more naturally.

   If using an OpenFlow approach, determine what form the rules take,
   consider when forwarding rules must be dynamically updated due to
   the arrival of new flows, and determine what peak update rate is
   required.  Evaluate the SDN switches against all of these
   requirements.

2.4.3.  Network Virtualization Technology

   Technologies exist to improve the performance of NFV functions,
   including PCI passthrough, SR-IOV and NUMA-aware process pinning.
   Other technologies are likely to become available in the future to
   offload network functions.

   Consider selecting infrastructure with these features if the NFV
   functions can utilize them and if the orchestration and
   control-plane infrastructure can configure them optimally.

   The performance of individual NFV implementations may vary by an
   order of magnitude with or without these hardware features, so PoP
   planning and sizing must include consideration of how the functions
   fit with the hardware and whether the infrastructure can deploy
   virtual machines in a manner that allows the hardware to be used.

2.4.4.  Software or Hardware Switching

   Some network architectures require software switches (such as Open
   vSwitch), whereas other architectures only require top-of-rack and
   backplane Ethernet switching.

   Although software switches perform very well, they consume
   processing cores.  PoP design must consider how many processor
   cores are required for software switches.
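   As a rough way to reason about this cost, the sketch below
   estimates the number of cores a software switch would consume on
   one server for a given throughput.  The per-core switching capacity
   used here is a placeholder assumption; real figures vary widely
   with the switch implementation, packet size, NIC and CPU, and
   should be measured on the target platform.

      import math

      def switching_cores(per_server_gbps, avg_packet_bytes=800,
                          mpps_per_core=6.0):
          """Cores consumed by a software switch on one server.

          mpps_per_core is a placeholder assumption, not a measured
          figure for any particular software switch.
          """
          pps = per_server_gbps * 1e9 / (avg_packet_bytes * 8)
          return math.ceil(pps / (mpps_per_core * 1e6))

      # Hypothetical examples.
      print(switching_cores(20))                         # -> 1 core
      print(switching_cores(40, avg_packet_bytes=256))   # -> 4 cores

   Cores spent on software switching are not available to VNFs, which
   feeds back into the server-count considerations of Section 2.2.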
3.  Design goals

   In this section we describe the goals for the design of an NFVI PoP
   network topology.  In broad terms, they include scalability,
   performance, cost, fault tolerance, operation, management and
   backward compatibility.

3.1.  Effective load

   A first performance parameter that we should take into account when
   considering different topologies is the effective load supported by
   the network.  The main goal of the NFVI PoP is to forward traffic
   between the different external links connected to the PoP.  The
   performance of the PoP is measured by the traffic it is able to
   forward, i.e., the effective load it manages.  In order to assess
   the effective load supported by the different topologies, we
   increase the offered load coming to the PoP through the different
   links and we measure the effective load that the PoP is able to
   deliver.

   The effective load supported by a topology is likely to be affected
   by multiple factors, including the different aspects described in
   the traffic patterns section (Section 2.3), such as the traffic
   matrix between the different external links, the characteristics of
   the SFCs and so on, the routing inside the PoP, the different
   locality considerations, and the intra-PoP traffic.  Moreover, in
   order for the comparison of two topologies to make sense, they need
   to be "equal" in some other dimension (e.g., cost, number of
   servers, number of links, number of switches or else).

   For example, as a starting point, we can assume a purely random
   traffic matrix, i.e., every packet arriving through an external
   link is forwarded through n random servers in the topology and
   exits through a randomly picked external link, and assume
   shortest-path, equal-cost multi-path routing.  We can then compare
   different topologies with the same number N of servers.  We perform
   the comparison by measuring the effective load while increasing the
   offered load, for different values of N and n.  Of course, these
   conditions may greatly differ from the real operating conditions;
   this is why it is useful to have information about the items
   described in Section 2.

   When performing this evaluation, it is useful to also measure the
   packet loss and to track the occurrence of hot spots in the
   topology, in order to identify the bottlenecks of the topology,
   which may be useful to improve it.

   Related to this, it may be useful to consider the bisection
   bandwidth of the different topologies.
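   A minimal simulation sketch of this methodology is shown below.  It
   assumes the candidate topology is available as a graph, uses a
   single shortest path between consecutive waypoints as a
   simplification of the shortest-path, equal-cost multi-path routing
   mentioned above, and accumulates the load that random n-server
   walks place on each link so that hot spots can be identified.  The
   toy topology, the number of flows and the parameters are
   illustrative assumptions.

      import random
      from collections import Counter
      import networkx as nx

      def random_walk_load(G, externals, servers, n, flows, seed=0):
          """Per-link load for 'flows' random flows, each entering at
          a random external link, visiting n random servers and
          leaving through a random external link."""
          rng = random.Random(seed)
          load = Counter()
          for _ in range(flows):
              waypoints = ([rng.choice(externals)] +
                           rng.sample(servers, n) +
                           [rng.choice(externals)])
              for a, b in zip(waypoints, waypoints[1:]):
                  path = nx.shortest_path(G, a, b)
                  for u, v in zip(path, path[1:]):
                      load[frozenset((u, v))] += 1
          return load

      # Toy topology: 2 external links, 2 aggregation switches,
      # 4 servers (2 per switch).
      G = nx.Graph()
      externals = ["ext0", "ext1"]
      servers = ["srv0", "srv1", "srv2", "srv3"]
      G.add_edges_from([("ext0", "agg0"), ("ext1", "agg1"),
                        ("agg0", "agg1")])
      G.add_edges_from([("agg%d" % (i % 2), s)
                        for i, s in enumerate(servers)])

      load = random_walk_load(G, externals, servers, n=2, flows=1000)
      print(max(load.values()))   # busiest link reveals hot spots

   The same walks can be reused to collect hop counts, which is the
   latency proxy discussed in the next section.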
3.2.  Latency

   Another relevant performance indicator is the latency experienced
   by packets while traversing the PoP network.  That is, for a
   topology of N servers, what is the latency for a packet that
   arrives through an external link, traverses n servers and exits
   through an external link?  Since we only care about the latency
   caused by the topology itself (in order to assess the topology), we
   can measure the "latency" as the number of hops that the packet
   must traverse.

   It is useful to measure the mean latency, but also the maximum
   latency, since an upper bound for the time a packet stays in the
   PoP is also relevant.  Again, the latency/hop count depends on the
   traffic matrix (i.e., the relation between the input and output
   links), the routing and the different locality aspects, hence it is
   useful to have information about these aspects.  In any case, a
   purely random case such as the one described for the effective load
   measurement could be used as a starting point.

   Queuing between software elements can introduce latency, so it is
   important to include extra hops caused by software components (such
   as software switches) that may be required to deliver packets to
   virtual machines from physical interfaces, in contrast to
   technologies (e.g., SR-IOV) that allow virtual machines to receive
   traffic directly from network interface cards.

3.3.  Scalability

   Scalability refers to how well the proposed topology supports
   growth in terms of the number of servers, the line speed of the
   servers and the capacity of the external links.  There are some
   topologies that, in order to support an increased number of
   servers, require growing some components beyond what is technically
   feasible (or what is economically efficient).  For instance, it is
   well known that tree topologies require the core switches to grow
   in order to support more servers, which is not feasible beyond a
   certain point (or becomes very expensive).  That being said, we
   should consider scalability in the range of servers that we expect
   a PoP will have to support in a reasonable time frame.

   Another dimension to consider is the size of the forwarding tables
   required by the switches in the network.  E.g., do the switches
   have the capacity to learn the required number of MAC addresses?
   Some service chaining technologies utilize many private Ethernet
   addresses; is there capacity to learn the number that are required?
   The same reasoning should be applied to whichever types of
   forwarding tables are required, whether IP routing, MPLS, NSH, etc.

   Another aspect somewhat related to scalability is how well the
   different topologies support incremental growth.  It is unclear at
   this point what the growth pace of NFVI PoPs will be.  In other
   words, given that we have a PoP with N servers operational, the
   next time we need to increase the number of servers, will it
   increase to N+1, to 2*N or to N*N?  Different topologies have
   different growth models.  Some support growing linearly
   indefinitely; others can be over-dimensioned in order to support
   some linear growth, but after a given number of additional servers,
   they need to grow exponentially.

3.4.  Fault Tolerance

   Fault tolerance is of course paramount for an NFVI PoP network.
   So, when considering topologies, we must consider fault tolerance
   aspects.  We basically care about how well the topology handles
   link failures, switch failures and server failures.

   We can assess the fault tolerance of a topology by measuring the
   following parameters of the topology [DC-networks] (a sketch
   computing the first two is given after the list):

   o  Node-disjoint paths: The minimum, over any pair of servers, of
      the number of paths between them that share no common
      intermediate nodes.

   o  Edge-disjoint paths: The minimum, over any pair of servers, of
      the number of paths between them that share no common edges.

   o  f-fault tolerance: A network is f-fault tolerant if for any f
      failed components, the network is still connected.

   o  Redundancy level: A network has a redundancy level of r if and
      only if, after removing any set of r components, it remains
      connected, and there exists a set of r+1 components such that
      after removing them, the network is no longer connected.
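   Assuming the candidate topology is available as a graph, the node-
   and edge-disjoint path metrics correspond to classical connectivity
   notions and can be computed, for instance, with networkx.  The toy
   topology below (servers dual-homed to two top-of-rack switches) is
   an arbitrary illustration.

      import itertools
      import networkx as nx

      def server_connectivity(G, servers):
          """Minimum node- and edge-disjoint paths over server
          pairs."""
          pairs = list(itertools.combinations(servers, 2))
          node_c = min(nx.node_connectivity(G, a, b) for a, b in pairs)
          edge_c = min(nx.edge_connectivity(G, a, b) for a, b in pairs)
          return node_c, edge_c

      # Toy topology: three servers, each dual-homed to two ToRs.
      G = nx.Graph()
      servers = ["srv0", "srv1", "srv2"]
      for s in servers:
          G.add_edge(s, "tor0")
          G.add_edge(s, "tor1")
      G.add_edge("tor0", "tor1")

      print(server_connectivity(G, servers))   # -> (2, 2)

   For small topologies, the f-fault tolerance and the redundancy
   level can then be explored by exhaustively removing sets of
   components and re-checking connectivity.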
3.5.  Cost

   The cost of the resulting network is also a relevant aspect to be
   considered.  In order to assess the cost, we can consider the
   number of switches and the number of interfaces in the topology for
   the same number of servers.  We should also take into account the
   type of switches required, as we know that the cost of a switch
   does not scale linearly with the number of interfaces of the switch
   or with the speed of the interfaces.

3.6.  Backward compatibility

   Another relevant aspect to consider is compatibility with existing
   hardware.  It is unlikely that operators will throw away all their
   current infrastructure based on specialized hardware and replace it
   with VNFs running on COTS servers.  It is more likely that there
   will be an incremental deployment where some functions will be
   virtualized and some functions will be executed in hardware.  It is
   then important to consider how the different topologies support
   such hybrid scenarios.

4.  Topologies

   In this section, we plan to describe different topologies that have
   been proposed for data centers and to include some considerations
   about the different design goals described in Section 3.

5.  Security considerations

   TBD, not sure if there is any.

6.  IANA Considerations

   There are no IANA considerations in this memo.

7.  Acknowledgments

   We would like to thank Bob Briscoe, Pedro Aranda, Diego Lopez, Al
   Morton, Joel Halpern and Costin Raiciu for their input.  Marcelo
   Bagnulo is partially funded by the EU Trilogy 2 project.

8.  Informative References

   [I-D.ietf-sfc-architecture]
              Halpern, J. and C. Pignataro, "Service Function Chaining
              (SFC) Architecture", draft-ietf-sfc-architecture-11 (work
              in progress), July 2015.

   [ETSI_GS_NFV-INF_001]
              ETSI, "Network Functions Virtualisation (NFV);
              Infrastructure Overview", NFV ISG, 2015.

   [swBRAS]   Bifulco, R., Dietz, T., Huici, F., Ahmed, M., and J.
              Martins, "Rethinking Access Networks with High
              Performance Virtual Software BRASes", EWSDN 2013, 2013.

   [BCube]    Guo, C., Lu, G., Li, D., Wu, H., and X. Zhang, "BCube: A
              High Performance, Server-centric Network Architecture
              for Modular Data Centers", SIGCOMM 2009, 2009.

   [DC-networks]
              Liu, Y., Muppala, J., Veeraraghavan, M., Lin, D., and M.
              Hamdi, "Data Center Networks - Topologies, Architectures
              and Fault-Tolerance Characteristics", Springer Briefs in
              Computer Science, Springer, 2013.

Authors' Addresses

   Marcelo Bagnulo
   Universidad Carlos III de Madrid
   Av. Universidad 30
   Leganes, Madrid  28911
   SPAIN

   Phone: 34 91 6249500
   Email: marcelo@it.uc3m.es
   URI:   http://www.it.uc3m.es

   David Dolson
   Sandvine
   408 Albert Street
   Waterloo, ON  N2L 3V3
   Canada

   Phone: +1 519 880 2400
   Email: ddolson@sandvine.com