NFVRG                                                          R. Szabo
Internet-Draft                                                  Z. Qiang
Intended status: Informational                                  Ericsson
Expires: September 9, 2015                                       M. Kind
                                                     Deutsche Telekom AG
                                                           March 8, 2015

 Towards recursive virtualization and programming for network and cloud
                               resources
              draft-unify-nfvrg-recursive-programming-00

Abstract

   The introduction of Network Function Virtualization (NFV) in carrier-
   grade networks promises improved operations in terms of flexibility,
   efficiency, and manageability.  NFV is an approach that combines
   network and compute virtualization.  However, network and compute
   resource domains expose different virtualizations and programmable
   interfaces.  In "Unifying Carrier and Cloud Networks: Problem
   Statement and Challenges" (draft-unify-nfvrg-challenges) we argued
   for a joint compute and network virtualization by looking into
   different compute abstractions.

   In this document we analyze different approaches to orchestrating a
   service graph with transparent network functions into a commodity
   data center.  We show that a recursive joint virtualization and
   programming of compute and network resources has clear advantages
   over approaches with separated control of compute and network
   resources.  The discussion of the problems and the proposed solution
   is generic for any data center use case; however, we use NFV as an
   example.

Status of This Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF).  Note that other groups may also distribute
   working documents as Internet-Drafts.  The list of current Internet-
   Drafts is at http://datatracker.ietf.org/drafts/current/.

   Internet-Drafts are draft documents valid for a maximum of six months
   and may be updated, replaced, or obsoleted by other documents at any
   time.  It is inappropriate to use Internet-Drafts as reference
   material or to cite them other than as "work in progress."

   This Internet-Draft will expire on September 9, 2015.
Copyright Notice

   Copyright (c) 2015 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with respect
   to this document.  Code Components extracted from this document must
   include Simplified BSD License text as described in Section 4.e of
   the Trust Legal Provisions and are provided without warranty as
   described in the Simplified BSD License.

Table of Contents

   1.  Introduction
   2.  Terms and Definitions
   3.  Use Cases
     3.1.  Black Box DC
       3.1.1.  Black Box DC with L3 tunnels
       3.1.2.  Black Box DC with external steering
     3.2.  White Box DC
   4.  Recursive approach
   5.  IANA Considerations
   6.  Security Considerations
   7.  Acknowledgement
   8.  Informative References
   Authors' Addresses

1.  Introduction

   To a large degree there is agreement in the research community that
   rigid network control limits the flexibility of service creation.
   In [I-D.unify-nfvrg-challenges]

   o  we analyzed different compute domain abstractions to argue that a
      joint compute and network virtualization and programming is needed
      for an efficient combination of these resource domains;

   o  we described challenges associated with the combined handling of
      compute and network resources for a unified production
      environment.

   Our goal here is to analyze different approaches to instantiating a
   service graph with transparent network functions into a commodity
   Data Center (DC).  More specifically, we analyze

   o  two black box DC set-ups, where the intra-DC network control is
      limited to some generic compute-only control programming
      interface;

   o  a white box DC set-up, where the intra-DC network control is
      exposed directly to a DC-external controller to coordinate
      forwarding configurations;

   o  a recursive approach, which illustrates the potential benefits of
      a joint compute and network virtualization and control.

   The discussion of the problems and the proposed solution is generic
   for any data center use case; however, we use NFV as an example.

2.  Terms and Definitions

   We use the terms "compute" and "compute and storage" interchangeably
   throughout the document.  Moreover, we use the following definitions,
   as established in [ETSI-NFV-Arch]:

   NFV:  Network Function Virtualization - The principle of separating
      network functions from the hardware they run on by using virtual
      hardware abstraction.

   NFVI:  NFV Infrastructure - Any combination of virtualized compute,
      storage and network resources.

   VNF:  Virtualized Network Function - a software-based network
      function.

   MANO:  Management and Orchestration - In the ETSI NFV framework
      [ETSI-NFV-MANO], this is the global entity responsible for
      management and orchestration of the NFV lifecycle.
   Further, we make use of the following terms:

   NF:  a network function, either software-based (VNF) or appliance-
      based.

   SW:  a (routing/switching) network element with a programmable
      control plane interface.

   DC:  a data center network element, which in addition to a
      programmable control plane interface offers a DC control
      interface.

   CN:  a compute node network element, which is controlled by a DC
      control plane and provides an execution environment for virtual
      machine (VM) images such as VNFs.

3.  Use Cases

   The inclusion of commodity Data Centers (DCs), e.g., OpenStack, into
   service graphs is far from trivial [I-D.ietf-sfc-dc-use-cases]:
   different exposures of the internals of the DC imply different
   degrees of dynamism in operations and different orchestration
   complexities, and may yield different business cases with regard to
   infrastructure sharing.

   We investigate different scenarios with a simple forwarding graph of
   three VNFs (o->VNF1->VNF2->VNF3->o), where all VNFs are deployed
   within the same DC.  We assume that the DC is a multi-tier leaf-and-
   spine (Clos) fabric with top-of-rack switches connecting the Compute
   Nodes (CNs), and that all VNFs are transparent (bump-in-the-wire)
   Service Functions.

3.1.  Black Box DC

   In Black Box DC set-ups, we assume that the compute domain is an
   autonomous domain with legacy (e.g., OpenStack) orchestration APIs.
   Due to the lack of direct forwarding control within the DC, no native
   L2 forwarding can be used to insert VNFs running in the DC into the
   forwarding graph.  Instead, explicit tunnels (e.g., VxLAN) must be
   used, which need termination support within the deployed VNFs.
   Therefore, VNFs must be aware of the previous and the next hops of
   the forwarding graph to receive and forward packets accordingly.

3.1.1.  Black Box DC with L3 tunnels

   Figure 1 illustrates a set-up where an external VxLAN termination
   point in the SDN domain is used to forward packets into the first SF
   (VNF1) of the chain within the DC.  VNF1, in turn, is configured to
   forward packets to the next SF (VNF2) in the chain, and so forth with
   VNF2 and VNF3.

   In this set-up VNFs must be capable of handling L3 tunnels (e.g.,
   VxLAN) and must act as forwarders themselves.  Additionally, an
   operational L3 underlay must be present so that VNFs can address each
   other.

   Furthermore, VNFs holding chain forwarding information could be
   untrusted user plane functions from 3rd party developers.
   Enforcement of proper forwarding is problematic.

   Additionally, compute-only orchestration might result in sub-optimal
   allocation of the VNFs with regard to the forwarding overlay; see,
   for example, the back-and-forth use of a core switch in Figure 1.

   In [I-D.unify-nfvrg-challenges] we also pointed out that within a
   single Compute Node (CN) a similar VNF placement and overlay
   optimization problem may reappear in the context of network interface
   cards and CPU cores.

   [ASCII figure omitted in this copy: a leaf-and-spine DC fabric behind
   an external SDN switch (SW1); VNF1, VNF3 and VNF2 run on separate CNs
   and are chained over IP tunnels (e.g., VxLAN) terminated inside the
   VNFs.]

           Figure 1: Black Box Data Center with VNF Overlay

3.1.2.  Black Box DC with external steering

   Figure 2 illustrates a set-up where an external VxLAN termination
   point in the SDN domain is used to forward packets among all the SFs
   (VNF1-VNF3) of the chain within the DC.  VNFs in the DC need to be
   configured to receive and send packets only to and from the SDN
   endpoint, and hence are not aware of the next-hop VNF address.
   Should any VNF need to be relocated, e.g., due to scale in/out as
   described in [I-D.zu-nfvrg-elasticity-vnf], the forwarding overlay
   can be transparently re-configured at the SDN domain.

   Note, however, that traffic between the DC-internal SFs (VNF1, VNF2,
   VNF3) needs to exit and re-enter the DC through the external SDN
   switch.  This, certainly, is sub-optimal and results in ping-pong
   traffic similar to the local and remote DC case discussed in
   [I-D.zu-nfvrg-elasticity-vnf].

   [ASCII figure omitted in this copy: the same DC fabric; every hop of
   the chain is steered out to the external SDN switch (SW1) and back
   over IP tunnels, so the chain repeatedly crosses the DC boundary.]

           Figure 2: Black Box Data Center with ext Overlay

3.2.  White Box DC

   Figure 3 illustrates a set-up where the internal network of the DC is
   exposed in full detail through an SDN Controller for steering
   control.  We assume that native L2 forwarding can be applied all
   through the DC up to the VNFs' ports; hence IP tunneling and tunnel
   termination at the VNFs are not needed.  Therefore, VNFs need not be
   forwarding graph aware but transparently receive and forward packets.
   However, the implication is that the network control of the DC must
   be handed over to an external forwarding controller (note that the
   SDN domain and the DC domain overlap in Figure 3).  This most
   probably prohibits clear operational separation or separate ownership
   of the two domains.

   [ASCII figure omitted in this copy: the same DC fabric, now inside
   the SDN domain; the chain is realized as a native L2 overlay reaching
   the VNFs' ports directly.]

           Figure 3: White Box Data Center with L2 Overlay

4.  Recursive approach

   We argued in [I-D.unify-nfvrg-challenges] for a joint software and
   network programming interface.  Consider that such a joint software
   and network abstraction (virtualization) exists around the DC with a
   corresponding resource programming interface.  A software and network
   programming interface could include VNF requests and the definition
   of the corresponding network overlay.  However, such a programming
   interface is similar to the top-level service definition, for
   example, by means of a VNF Forwarding Graph.

   Figure 4 illustrates a joint domain virtualization and programming
   setup.  VNF placement and the corresponding traffic steering can be
   defined in an abstract way, which is orchestrated, split and handed
   over to the next level in the hierarchy for further orchestration.
   Such a setup allows clear operational separation, arbitrary domain
   virtualization (e.g., topology details may be omitted) and
   constraint-based optimization of domain-wide resources.
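   The split-and-delegate step described above can be sketched in code.
   The following Python fragment is an illustrative sketch only, not
   part of the UNIFY interface definition: all names (Domain,
   split_forwarding_graph, the capacity model) are hypothetical, and a
   simple greedy first-fit placement stands in for the constraint-based
   optimization mentioned above.

   ```python
   # Hypothetical sketch: a parent orchestrator receives a VNF
   # Forwarding Graph, maps each VNF onto one of the child domains
   # visible through their joint compute+network virtualizations, and
   # hands each resulting subgraph down through the same kind of
   # programmatic interface it exposes upward (recursion).
   from collections import defaultdict

   class Domain:
       """A child domain as seen through its virtualization: it
       advertises abstract compute capacity and accepts a
       (sub-)forwarding graph for further orchestration."""
       def __init__(self, name, capacity):
           self.name = name
           self.capacity = capacity  # abstract compute units

       def can_host(self, vnf_demand):
           return self.capacity >= vnf_demand

       def deploy(self, subgraph):
           # A real child domain would recurse here, running the same
           # orchestration logic over its own children.
           return {"domain": self.name, "vnfs": sorted(subgraph)}

   def split_forwarding_graph(chain, demands, domains):
       """Greedy first-fit placement of a VNF chain onto child domains.
       Returns per-domain subgraphs plus the inter-domain links that
       the parent must stitch (e.g., tunnels between domain edges)."""
       placement = {}
       for vnf in chain:
           for d in domains:
               if d.can_host(demands[vnf]):
                   placement[vnf] = d
                   d.capacity -= demands[vnf]
                   break
           else:
               raise RuntimeError(f"no domain can host {vnf}")
       subgraphs = defaultdict(set)
       for vnf, d in placement.items():
           subgraphs[d].add(vnf)
       # Links between consecutive VNFs placed in different domains are
       # realized by the parent; because it sees both placement and the
       # overlay, it can avoid the ping-pong traffic of Section 3.1.2.
       inter_domain = [(a, b) for a, b in zip(chain, chain[1:])
                       if placement[a] is not placement[b]]
       return subgraphs, inter_domain

   chain = ["VNF1", "VNF2", "VNF3"]
   demands = {"VNF1": 2, "VNF2": 2, "VNF3": 1}
   domains = [Domain("DC-east", 4), Domain("DC-west", 2)]
   subgraphs, links = split_forwarding_graph(chain, demands, domains)
   deployed = [d.deploy(g) for d, g in subgraphs.items()]
   ```

   In this toy run, VNF1 and VNF2 fit into the first domain and VNF3
   spills over to the second, so the parent keeps exactly one
   inter-domain link (VNF2 -> VNF3) to stitch; everything else is
   delegated downward, mirroring the VNF FG1 / VNF FG2 split of
   Figure 4.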
   [ASCII figure omitted in this copy: an overarching virtualization
   combines the SDN domain and the DC domain behind one joint software
   and network abstraction; the overall VNF Forwarding Graph is split
   into VNF FG1 (mapped to the SDN domain) and VNF FG2 (mapped to the
   DC domain).]

       Figure 4: Recursive Domain Virtualization and Joint VNF FG
                               programming

5.  IANA Considerations

   This memo includes no request to IANA.

6.  Security Considerations

   TBD

7.  Acknowledgement

   The research leading to these results has received funding from the
   European Union Seventh Framework Programme (FP7/2007-2013) under
   grant agreement no. 619609 - the UNIFY project.  The views expressed
   here are those of the authors only.  The European Commission is not
   liable for any use that may be made of the information in this
   document.

   We would like to thank in particular David Jocha and Janos Elek from
   Ericsson for the useful discussions.

8.  Informative References

   [ETSI-NFV-Arch]
              ETSI, "Architectural Framework v1.1.1", October 2013.

   [ETSI-NFV-MANO]
              ETSI, "Network Function Virtualization (NFV) Management
              and Orchestration V0.6.1 (draft)", July 2014.

   [I-D.ietf-sfc-dc-use-cases]
              Surendra, S., Tufail, M., Majee, S., Captari, C., and S.
              Homma, "Service Function Chaining Use Cases In Data
              Centers", draft-ietf-sfc-dc-use-cases-02 (work in
              progress), January 2015.

   [I-D.unify-nfvrg-challenges]
              Szabo, R., Csaszar, A., Pentikousis, K., Kind, M., and D.
              Daino, "Unifying Carrier and Cloud Networks: Problem
              Statement and Challenges", draft-unify-nfvrg-challenges-00
              (work in progress), October 2014.

   [I-D.zu-nfvrg-elasticity-vnf]
              Qiang, Z. and R. Szabo, "Elasticity VNF", draft-zu-nfvrg-
              elasticity-vnf-01 (work in progress), March 2015.

Authors' Addresses

   Robert Szabo
   Ericsson Research, Hungary
   Irinyi Jozsef u. 4-20
   Budapest  1117
   Hungary

   Email: robert.szabo@ericsson.com
   URI:   http://www.ericsson.com/

   Zu Qiang
   Ericsson
   8400, boul. Decarie
   Ville Mont-Royal, QC  8400
   Canada

   Email: zu.qiang@ericsson.com
   URI:   http://www.ericsson.com/

   Mario Kind
   Deutsche Telekom AG
   Winterfeldtstr. 21
   10781 Berlin
   Germany

   Email: mario.kind@telekom.de