2 Network Working Group R. Fernando 3 Internet-Draft Cisco Systems 4 Intended status: Standards Track S. Mackie 5 Expires: June 7, 2019 Juniper Networks 6 D. Rao 7 Cisco Systems 8 B. Rijsman 10 M. Napierala 11 ATT Labs 12 T. Morin 13 Orange 14 December 4, 2018 16 Service Function Chaining using Virtual Networks with BGP VPNs 17 draft-ietf-bess-service-chaining-06 19 Abstract 21 This document describes how service function chains (SFC) can be 22 applied to traffic flows using routing in a virtual (overlay) network 23 to steer traffic between service nodes. Chains can include services 24 running in routers, on physical appliances or in virtual machines. 25 Service chains have applicability at the subscriber edge, business 26 edge and in multi-tenant datacenters. The routing function into SFCs 27 and between service functions within an SFC can be performed by 28 physical devices (routers), be virtualized inside hypervisors, or run 29 as part of a host OS. 31 A BGP control plane for route distribution is used to create virtual 32 networks implemented using IP MPLS, VXLAN or other suitable 33 encapsulation, where the routes within the virtual networks cause 34 traffic to flow through a sequence of service nodes that apply packet 35 processing functions to the flows. 37 Two techniques are described: in one the service chain is implemented 38 as a sequence of distinct VPNs between sets of service nodes that 39 apply each service function; in the other, the routes within a VPN 40 are modified through the use of special route targets and modified 41 next-hop resolution to achieve the desired result. 43 In both techniques, service chains can be created by manual 44 configuration of routes and route targets in routing systems, or 45 through the use of a controller which contains a topological model of 46 the desired service chains.
48 This document also contains discussion of load balancing between 49 network functions, symmetric forward and reverse paths when stateful 50 services are involved, and use of classifiers to direct traffic into 51 a service chain. 53 Status of This Memo 55 This Internet-Draft is submitted in full conformance with the 56 provisions of BCP 78 and BCP 79. 58 Internet-Drafts are working documents of the Internet Engineering 59 Task Force (IETF). Note that other groups may also distribute 60 working documents as Internet-Drafts. The list of current Internet- 61 Drafts is at https://datatracker.ietf.org/drafts/current/. 63 Internet-Drafts are draft documents valid for a maximum of six months 64 and may be updated, replaced, or obsoleted by other documents at any 65 time. It is inappropriate to use Internet-Drafts as reference 66 material or to cite them other than as "work in progress." 68 This Internet-Draft will expire on June 7, 2019. 70 Copyright Notice 72 Copyright (c) 2018 IETF Trust and the persons identified as the 73 document authors. All rights reserved. 75 This document is subject to BCP 78 and the IETF Trust's Legal 76 Provisions Relating to IETF Documents 77 (https://trustee.ietf.org/license-info) in effect on the date of 78 publication of this document. Please review these documents 79 carefully, as they describe your rights and restrictions with respect 80 to this document. Code Components extracted from this document must 81 include Simplified BSD License text as described in Section 4.e of 82 the Trust Legal Provisions and are provided without warranty as 83 described in the Simplified BSD License. 85 Table of Contents 87 1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . 4 88 1.1. Terminology . . . . . . . . . . . . . . . . . . . . . . . 5 89 2. Service Function Chain Architecture Using Virtual Networking 7 90 2.1. High Level Architecture . . . . . . . . . . . . . . . . . 8 91 2.2. Service Function Chain Logical Model . . . . . . . . . . 10 92 2.3. Service Function Implemented in a Set of SF Instances . . 11 93 2.4. SF Instance Connections to VRFs . . . . . . . . . . . . . 13 94 2.4.1. SF Instance in Physical Appliance . . . . . . . . . . 13 95 2.4.2. SF Instance in a Virtualized Environment . . . . . . 14 97 2.5. Encapsulation Tunneling for Transport . . . . . . . . . . 15 98 2.6. SFC Creation Procedure . . . . . . . . . . . . . . . . . 15 99 2.6.1. SFC Provisioning Using Sequential VPNs . . . . . . . 16 100 2.6.2. Modified-Route SFC Creation . . . . . . . . . . . . . 17 101 2.6.3. Common SFC provisioning considerations . . . . . . . 19 102 2.7. Controller Function . . . . . . . . . . . . . . . . . . . 20 103 2.8. Variations on Setting Prefixes in an SFC . . . . . . . . 20 104 2.8.1. Using a Default Route . . . . . . . . . . . . . . . . 20 105 2.8.2. Using a Default Route and a Large Prefix . . . . . . 21 106 2.8.3. Disaggregated Gateway Routers . . . . . . . . . . . . 21 107 2.8.4. Optimizing VRF usage . . . . . . . . . . . . . . . . 22 108 2.8.5. Dynamic Entry and Exit Signaling . . . . . . . . . . 22 109 2.8.6. Dynamic Re-Advertisements in Intermediate Systems . . 23 110 2.9. Layer-2 Virtual Networks and Service Functions . . . . . 23 111 2.10. Header Transforming Service Functions . . . . . . . . . . 23 112 3. Load Balancing Along a Service Function Chain . . . . . . . . 24 113 3.1. SF Instances Connected to Separate VRFs . . . . . . . . . 24 114 3.2. SF Instances Connected to the Same VRF . . . . . . . . . 25 115 3.3. 
Combination of Egress and Ingress VRF Load Balancing . . 26 116 3.4. Forward and Reverse Flow Load Balancing . . . . . . . . . 28 117 3.4.1. Issues with Equal Cost Multi-Path Routing . . . . . . 28 118 3.4.2. Modified ECMP with Consistent Hash . . . . . . . . . 28 119 3.4.3. ECMP with Flow Table . . . . . . . . . . . . . . . . 29 120 3.4.4. Dealing with Different Hash Algorithms in an SFC . . 30 121 4. Sharing Service Functions in Different SFCs . . . . . . . . . 31 122 4.1. Shared SFs in L3 SFCs . . . . . . . . . . . . . . . . . . 31 123 4.2. Shared SFs in L2 SFCs . . . . . . . . . . . . . . . . . . 31 124 5. Steering into SFCs Using a Classifier . . . . . . . . . . . . 31 125 6. External Domain Co-ordination . . . . . . . . . . . . . . . . 33 126 7. Fine-grained steering using BGP Flow-Spec . . . . . . . . . . 34 127 8. Controller Federation . . . . . . . . . . . . . . . . . . . . 34 128 9. Coordination Between SF Instances and Controller using BGP . 34 129 10. BGP Extended Communities . . . . . . . . . . . . . . . . . . 35 130 10.1. Route-Target Record . . . . . . . . . . . . . . . . . . 35 131 10.2. Consistent Hash Sort Order . . . . . . . . . . . . . . . 36 132 10.3. Load Balance Settings . . . . . . . . . . . . . . . . . 36 133 11. Summary and Conclusion . . . . . . . . . . . . . . . . . . . 37 134 12. Security Considerations . . . . . . . . . . . . . . . . . . . 37 135 13. IANA Considerations . . . . . . . . . . . . . . . . . . . . . 38 136 14. Acknowledgments . . . . . . . . . . . . . . . . . . . . . . . 38 137 15. References . . . . . . . . . . . . . . . . . . . . . . . . . 38 138 15.1. Normative References . . . . . . . . . . . . . . . . . . 38 139 15.2. Informational References . . . . . . . . . . . . . . . . 39 140 Authors' Addresses . . . . . . . . . . . . . . . . . . . . . . . 40 142 1. Introduction 144 The purpose of networks is to allow computing systems to communicate 145 with each other. Requests are usually made from the client or 146 customer side of a network, and responses are generated by 147 applications residing in a datacenter. Over time, the network 148 between the client and the application has become more complex, and 149 traffic between the client and the application is acted on by 150 intermediate systems that apply network services. Some of these 151 activities, like firewall filtering, subscriber attachment and 152 network address translation, are generally carried out in network 153 devices along the traffic path, while others are carried out by 154 dedicated appliances, such as media proxy and deep packet inspection 155 (DPI). Deployment of these in-network services is complex, time- 156 consuming and costly, since they require configuration of devices 157 with vendor-specific operating systems, sometimes with co-processing 158 cards, or deployment of physical devices in the network, which 159 requires cabling and configuration of the devices that they connect 160 to. Additionally, other devices in the network need to be configured 161 to ensure that traffic is correctly steered through the systems that 162 services are running on. 164 The current mode of operations does not easily allow common 165 operational processes to be applied to the lifecycle of services in 166 the network, or to the steering of traffic through them.
168 The recent emergence of Network Functions Virtualization (NFV) 169 [NFVE2E] to provide a standard deployment model for network services 170 as software appliances, combined with Software Defined Networking 171 (SDN) for more dynamic traffic steering, can provide foundational 172 elements that will allow network services to be deployed and managed 173 far more efficiently and with more agility than is possible today. 175 This document describes how the combination of several existing 176 technologies can be used to create chains of functions, while 177 preserving the requirements of scale, performance and reliability for 178 service provider networks. The technologies employed are: 180 o Traffic flow between service functions described by routing and 181 network policies rather than by static physical or logical 182 connectivity 184 o Packet header encapsulation in order to create virtual private 185 networks using network overlays 187 o VRFs on both physical devices and in hypervisors to implement 188 forwarding policies that are specific to each virtual network 190 o Optional use of a controller to calculate routes to be installed 191 in routing systems to form a service chain. The controller uses a 192 topological model that stores service function instance 193 connectivity to network devices and intended connectivity between 194 service functions. 196 o MPLS or other labeling to facilitate identification of the next 197 interface to send packets to in a service function chain 199 o BGP or BGP-style signaling to distribute routes in order to create 200 service function chains 202 o Distributed load balancing between service functions performed in 203 the VRFs that service function instances connect to. 205 Virtualized environments can be supported without necessarily running 206 BGP or MPLS natively. Messaging protocols such as NC/YANG, XMPP or 207 OpenFlow may be used to signal forwarding information. Encapsulation 208 mechanisms such as VXLAN or GRE may be used for overlay transport. 209 The term 'BGP-style', above, refers to this type of signaling. 211 Traffic can be directed into service function chains using IP routing 212 at each end of the service function chain, or be directed into the 213 chain by a classifier function that can determine which service chain 214 a traffic flow should pass through based on deep packet inspection 215 (DPI) and/or subscriber identity. 217 The techniques can support an evolution from services implemented in 218 physical devices attached to physical forwarding systems (routers) to 219 fully virtualized implementations as well as intermediate hybrid 220 implementations. 222 1.1. Terminology 224 This document uses the following acronyms and terms. 226 Terms Meaning 227 ----- ----------------------------------------------- 228 AS Autonomous System 229 ASBR Autonomous System Border Router 230 RR Route Reflector 231 RT Route Target 232 SDN Software Defined Network 233 VM Virtual Machine 234 VPN Virtual Private Network 235 VRF VPN Routing and Forwarding table [RFC4364] 237 Table 1 239 This document follows some of the terminology used in [RFC7665] and 240 adds some new terminology: 242 Network Service: An externally visible service offered by a network 243 operator; a service may consist of a single service function or a 244 composite built from several service functions executed in one or 245 more pre-determined sequences and delivered by software executing 246 in physical or virtual devices.
248 Classification: Customer/network/service policy used to identify and 249 select traffic flow(s) requiring certain outbound forwarding 250 actions, in particular, to direct specific traffic flows into the 251 ingress of a particular service function chain, or to cause 252 branching within a service function chain. 254 Virtual Network: A logical overlay network built using virtual links 255 or packet encapsulation, over an existing network (the underlay). 257 Service Function Chain (SFC): A service function chain defines an 258 ordered set of service functions that must be applied to packets 259 and/or frames selected as a result of classification. An SFC may 260 be either a linear chain or a complex service graph with multiple 261 branches. The term 'Service Chain' is often used in place of 262 'Service Function Chain'. 264 SFC Set: The pair of SFCs through which the forward and reverse 265 directions of a given classified flow will pass. 267 Service Function (SF): A logical function that is applied to 268 packets. A service function can act at the network layer or other 269 OSI layers. A service function can be embedded in one or more 270 physical network elements, or can be implemented in one or more 271 software instances running on physical or virtual hosts. One or 272 multiple service functions can be embedded in the same network 273 element or run on the same host. Multiple instances of a service 274 function can be enabled in the same administrative domain. We 275 will also refer to a 'Service Function' simply as a 'Service'. 278 A non-exhaustive list of services includes: firewalls, DDOS 279 protection, anti-malware/anti-virus systems, WAN and application 280 acceleration, Deep Packet Inspection (DPI), server load balancers, 281 network address translation, HTTP Header Enrichment functions, 282 video optimization, TCP optimization, etc. 284 SF Instance: An instance of software that implements the packet 285 processing of a service function. 287 SF Instance Set: A group of SF instances that, in parallel, 288 implement a service function in an SFC. 290 Routing System: A hardware or software system that performs layer 3 291 routing and/or forwarding functions. The term includes physical 292 routers as well as hypervisor or Host OS implementations of the 293 forwarding plane of a conventional router. 295 Gateway: A routing system attached to the source or destination 296 network that peers with the controller, or with the routing system 297 at one end of an SFC. A source network gateway directs traffic 298 from the source network into an SFC, while a destination network 299 gateway distributes traffic towards destinations. The routing 300 systems at each end of an SFC can themselves act as gateways and 301 in a bidirectional SF instance set, gateways can act in both 302 directions. VRF: A subsystem within a routing system as defined in 303 [RFC4364] that contains private routing and forwarding tables and 304 has physical and/or logical interfaces associated with it. In the 305 case of hypervisor/Host OS implementations, the term refers only 306 to the forwarding function of a VRF, and this will be referred to 307 as a 'VPN forwarder.' 309 Ingress VRF: A VRF containing an ingress interface of an SF instance. 311 Egress VRF: A VRF containing an egress interface of an SF instance. 313 Note that in this document the terms 'ingress' and 'egress' are 314 used with respect to SF instances rather than the tunnels that 315 connect SF instances.
This is different usage than in VPN 316 literature in general. 318 Entry VRF: A VRF through which traffic enters the SFC from the 319 source network. This VRF may be used to advertise the destination 320 network's routes to the source network. It could be placed on a 321 gateway router or be collocated with the first ingress VRF. 323 Exit VRF: A VRF through which traffic exits the SFC into the 324 destination network. This VRF contains the routes from the 325 destination network and could be located on a gateway router. 326 Alternatively, the egress VRF attached to the last SF instance may 327 also function as the exit VRF. 329 2. Service Function Chain Architecture Using Virtual Networking 331 The techniques described in this document use virtual networks to 332 implement service function chains. Service function chains can be 333 implemented on devices that support existing MPLS VPN and BGP 334 standards [RFC4364], [RFC4271], [RFC4760], as well as other 335 encapsulations, such as VXLAN [RFC7348]. Similarly, equivalent 336 control plane protocols such as BGP-EVPN with type-2 and type-5 route 337 types can also be used where supported [RFC8365]. The set of 338 techniques described in this document represent one implementation 339 approach to realize the SFC architecture described in [RFC7665]. 341 The following sections detail the building blocks of the SFC 342 architecture, and outline the processes of route installation and 343 subsequent route exchange to create an SFC. 345 2.1. High Level Architecture 347 Service function chains can be deployed with or without a classifier. 348 Use cases where SFCs may be deployed without a classifier include 349 multi-tenant data centers, private and public cloud and virtual CPE 350 for business services. Classifiers will primarily be used in mobile 351 and wireline subscriber edge use cases. Use of a classifier is 352 discussed in Section 5. 354 A high-level architecture diagram of an SFC without a classifier, 355 where traffic is routed into and out of the SFC, is shown in 356 Figure 1, below. An optional controller is shown that contains a 357 topological model of the SFC and which configures the network 358 resources to implement the SFC. 360 +-------------------------+ 361 |--- Data plane connection| 362 |=== Encapsulation tunnel | 363 | O VRF | 364 +-------------------------+ 366 Control +------------------------------------------------+ 367 Plane | Controller | 368 ....... +-+------------+----------+----------+---------+-+ 369 | | | | | 370 Service | +---+ | +---+ | +---+ | | 371 Plane | |SF1| | |SF2| | |SF3| | | 372 | +---+ | +---+ | +---+ | | 373 ....... / | | / | | / | | / / 374 +-----+ +--|-|--+ +--|-|--+ +--|-|--+ +-----+ 375 | | | | | | | | | | | | | | | | 376 Net-A-->---O==========O O========O O========O O=========O---->Net-B 377 | | | | | | | | | | 378 Data | R-A | | R-1 | | R-2 | | R-3 | | R-B | 379 Plane +-----+ +-------+ +-------+ +-------+ +-----+ 381 ^ ^ ^ ^ 382 | | | | 383 | Ingress Egress | 384 | VRF VRF | 385 SFC Entry SFC Exit 386 VRF VRF 388 High Level SFC Architecture 390 Figure 1 392 Traffic from Network-A destined for Network-B will pass through the 393 SFC composed of SF instances, SF1, SF2 and SF3. Routing system R-A 394 contains a VRF (shown as 'O' symbol) that is the SFC entry point. 395 This VRF will advertise a route to reach Network-B into Network-A 396 causing any traffic from a source in Network-A with a destination in 397 Network-B to arrive in this VRF. 
The forwarding table in the VRF in 398 R-A will direct traffic destined for Network-B into an encapsulation 399 tunnel with destination R-1 and a label that identifies the ingress 400 (left) interface of SF1 that R-1 should send the packets out on. The 401 packets are processed by service instance SF-1 and arrive in the 402 egress (right) VRF in R-1. The forwarding entries in the egress VRF 403 direct traffic to the next ingress VRF using encapsulation tunneling. 404 The process is repeated for each service instance in the SFC until 405 packets arrive at the SFC exit VRF (in R-B). This VRF is peered with 406 Network-B and routes packets towards their destinations in the user 407 data plane. In this example, routing systems R-A and R-B are gateway 408 routing systems. 410 In the example, each pair of ingress and egress VRFs is configured 411 in a separate routing system, but such pairs could be collocated in 412 the same routing system, and it is possible for the ingress and 413 egress VRFs for a given SF instance to be in different routing 414 systems. The SFC entry and exit VRFs can be collocated in the same 415 routing system, and the service instances can be local or remote from 416 either or both of the routing systems containing the entry and exit 417 VRFs, and from each other. It is also possible that the ingress and 418 egress VRFs are implemented using alternative mechanisms. 420 The controller is responsible for configuring the VRFs in each 421 routing system and installing the routes in each of the VRFs to 422 implement the SFC; in the case of virtualized services, it may also 423 instantiate the service instances. 425 The controller is not responsible for configuring the SFs themselves. 426 It is assumed that there is a separate system that performs that 427 function, e.g., the VNF Manager functional component in [NFVE2E]. Note 428 that coordination may be required between a controller and a VNF 429 manager, for instance to ensure that installed firewall filters in an 430 SF align with the subnets whose packets will pass through it. At 431 this time, there are no standard ways to address this, and custom 432 pair-wise integrations between controllers and VNF managers would be 433 required. 435 2.2. Service Function Chain Logical Model 437 A service function chain is a set of logically connected service 438 functions through which traffic can flow. Each egress interface of 439 one service function is logically connected to an ingress interface 440 of the next service function. 442 +------+ +------+ +------+ 443 Network-A-->| SF-1 |-->| SF-2 |-->| SF-3 |-->Network-B 444 +------+ +------+ +------+ 446 A Chain of Service Functions 448 Figure 2 450 In Figure 2, above, a service function chain has been created that 451 connects Network-A to Network-B, such that traffic from a host in 452 Network-A to a host in Network-B will traverse the service function 453 chain. 455 As defined in [RFC7665], a service function chain can be uni- 456 directional or bi-directional. In this document, in order to allow 457 for the possibility that the forward and reverse paths may not be 458 symmetrical, SFCs are defined as uni-directional, and the term 'SFC 459 set' is used to refer to a pair of forward and reverse direction SFCs 460 for some set of routed or classified traffic. 462 2.3. Service Function Implemented in a Set of SF Instances 464 A service function instance is a software system that acts on packets 465 that arrive on an ingress interface of that software system.
Service 466 function instances may run on a physical appliance or in a virtual 467 machine. A service function instance may be transparent at layer 2 468 and/or layer 3, and may support branching across multiple egress 469 interfaces and may support aggregation across ingress interfaces. 470 For simplicity, the examples in this document have a single ingress 471 and a single egress interface. 473 Each service function in a chain can be implemented by a single 474 service function instance, or by a set of instances in order to 475 provide scale and resilience. 477 +------------------------------------------------------------------+ 478 | Logical Service Functions Connected in a Chain | 479 | | 480 | +--------+ +--------+ | 481 | Net-A--->| SF-1 |----------->| SF-2 |--->Net-B | 482 | +--------+ +--------+ | 483 | | 484 +------------------------------------------------------------------+ 485 | Service Function Instances Connected by Virtual Networks | 486 | ...... ...... | 487 | : : +------+ : : | 488 | : :-->|SFI-11|-->: : ...... | 489 | : : +------+ : : +------+ : : | 490 | : : : :-->|SFI-21|-->: : | 491 | : : +------+ : : +------+ : : | 492 | A->: VN-1 :-->|SFI-12|-->: VN-2 : : VN-3 :-->B | 493 | : : +------+ : : +------+ : : | 494 | : : : :-->|SFI-22|-->: : | 495 | : : +------+ : : +------+ : : | 496 | : :-->|SFI-13|-->: : '''''' | 497 | : : +------+ : : | 498 | '''''' '''''' | 499 +------------------------------------------------------------------+ 501 Service Functions Are Composed of SF Instances 502 Connected Via Virtual Networks 504 Figure 3 506 In Figure 3, service function SF-1 is implemented in three service 507 function instances, SFI-11, SFI-12, and SFI-13. Service function SF- 508 2 is implemented in two SF instances. The service function instances 509 are connected to the next service function in the chain using a 510 virtual network, VN-2. Additionally, a virtual network (VN-1) is 511 used to enter the SFC and another (VN-3) is used at the exit. 513 The logical connection between two service functions is implemented 514 using a virtual network that contains egress interfaces for instances 515 of one service function, and ingress interfaces of instances of the 516 next service function. Traffic is directed across the virtual 517 network between the two sets of service function instances using 518 layer 3 forwarding (e.g. an MPLS VPN) or layer 2 forwarding (e.g. a 519 VXLAN). 521 The virtual networks could be described as "directed half-mesh", in 522 that the egress interface of each SF instance of one service function 523 can reach any ingress interface of the SF instances of the connected 524 service function. 526 Details on how routing across virtual networks is achieved, and 527 requirements on load balancing across ingress interfaces are 528 discussed in later sections of this document. 530 2.4. SF Instance Connections to VRFs 532 SF instances can be deployed as software running on physical 533 appliances, or in virtual machines running on a hypervisor. These 534 two types are described in more detail in the following sections. 536 2.4.1. SF Instance in Physical Appliance 538 The case of a SF instance running on a physical appliance is shown in 539 Figure 4, below. 
541 +---------------------------------+ 542 | | 543 | +-----------------------------+ | 544 | | Service Function Instance | | 545 | +-------^-------------|-------+ | 546 | | Host | | 547 +---------|-------------|---------+ 548 | | 549 +------ |-------------|-------+ 550 | | | | 551 | +----|----+ +-----v----+ | 552 ---------+ Ingress | | Egress +--------- 553 ---------> VRF | | VRF ----------> 554 ---------+ | | +--------- 555 | +---------+ +----------+ | 556 | Routing System | 557 +-----------------------------+ 559 Ingress and Egress VRFs for a Physical Routing System 560 and Physical SF Instance 562 Figure 4 564 The routing system is a physical device and the service function 565 instance is implemented as software running in a physical appliance 566 (host) connected to it. The connection between the physical device 567 and the routing system may use physical or logical interfaces. 568 Transport between VRFs on different routing systems that are 569 connected to other SF instances in an SFC is via encapsulation 570 tunnels, such as MPLS over GRE, or VXLAN. 572 2.4.2. SF Instance in a Virtualized Environment 574 In virtualized environments, a routing system with VRFs that act as 575 VPN forwarders is resident in the hypervisor/Host OS, and is co- 576 resident in the host with one or more SF instances that run in 577 virtual machines. The egress VPN forwarder performs tunnel 578 encapsulation to send packets to other physical or virtual routing 579 systems with attached SF instances to form an SFC. The tunneled 580 packets are sent through the physical interfaces of the host to the 581 other hosts or physical routers. This is illustrated in Figure 5, 582 below. 584 +-------------------------------------+ 585 | +-----------------------------+ | 586 | | Service Function Instance | | 587 | +-------^-------------|-------+ | 588 | | | | 589 | +---------|-------------|---------+ | 590 | | +-------|-------------|-------+ | | 591 | | | | | | | | 592 | | | +----|----+ +-----v----+ | | | 593 ------------+ Ingress | | Egress +----------- 594 ------------> VRF | | VRF ------------> 595 ------------+ | | +----------- 596 | | | +---------+ +----------+ | | | 597 | | | Routing System | | | 598 | | +-----------------------------+ | | 599 | | Hypervisor or Host OS | | 600 | +---------------------------------+ | 601 | Host | 602 +-------------------------------------+ 604 Ingress and Egress VRFs for a Virtual Routing System 605 and Virtualized SF Instance 607 Figure 5 609 When more than one instance of an SF is running on a hypervisor, they 610 can be connected to the same VRF for scale out of an SF within an 611 SFC. 613 The routing mechanisms in the VRFs into and between service function 614 instances, and the encapsulation tunneling between routing systems 615 are identical in the physical and virtual implementations of SFCs and 616 routing systems described in this document. Physical and virtual 617 service functions can be mixed as needed with different combinations 618 of physical and virtual routing systems, within a single service 619 chain. 621 The SF instances are attached to the routing systems via physical, 622 virtual or logical (e.g., 802.1q) interfaces, and are assumed to 623 perform basic L3 or L2 forwarding. 625 A single SF instance can be part of multiple service chains. In this 626 case, the SF instance will have dedicated interfaces (typically 627 logical) and forwarding contexts associated with each service chain. 629 2.5.
Encapsulation Tunneling for Transport 631 Encapsulation tunneling is used to transport packets between SF 632 instances in the chain and, when a classifier is not used, from the 633 originating network into the SFC and from the SFC into the 634 destination network. 636 The tunnels can be MPLS over GRE [RFC4023], MPLS over UDP [RFC7510], 637 MPLS over MPLS [RFC3031], VXLAN [RFC7348], or another 638 suitable encapsulation method. 640 Tunneling capabilities may be enabled in each routing system as part 641 of a base configuration or may be configured by the controller. 642 Tunnel encapsulations may be programmed by the controller or signaled 643 using BGP. The encapsulation to be used for a given route is 644 signaled in BGP using the procedures described in 645 [idr-tunnel-encaps], i.e., typically relying on the BGP Tunnel 646 Encapsulation Extended Community. 648 2.6. SFC Creation Procedure 650 This section describes how service chains are created using two 651 methods: 653 o Sequential VPNs - where a conventional VPN is created between each 654 set of SF instances to create the links in the SFC 656 o Route Modification - where each routing system modifies advertised 657 routes that it receives, to realize the links in an SFC on the 658 basis of a special service topology RT and a route-policy that 659 describes the service chain logical topology 661 In both cases the controller, when present, is responsible for 662 creating ingress and egress VRFs, configuring the interfaces 663 connected to SF instances in each VRF, and allocating and configuring 664 import and export RTs for each VRF. Additionally, in the second 665 method, the controller also sends the route-policy containing the 666 service chain logical topology to each routing system. If a 667 controller is not used, these procedures will need to be performed 668 manually or through scripting. 670 The source and destination networks' prefixes can be configured in 671 the controller, or may be automatically learned through peering 672 between the controller and each network's gateway. This is further 673 described in Section 2.8.5 and Section 6. 675 The following sub-sections describe how RT configuration, local route 676 installation and route distribution occur in each of the methods. 678 It should be noted that depending on the capabilities of the routing 679 systems, a controller can use one or more techniques to realize 680 forwarding along the service chain, ranging from fully centralized to 681 fully distributed. The goal of describing the following two methods 682 is to illustrate the broad approaches and to provide a base for various 683 optimization options. 685 Interoperability between a controller implementing one method and a 686 controller implementing a different method is achieved by relying on 687 the techniques described in Section 5 and Section 8, which describe 688 the use of BGP-style service chaining within domains that are 689 interconnected using standard BGP VPN route exchanges. 691 2.6.1. SFC Provisioning Using Sequential VPNs 693 The task of the controller in this method of SFC provisioning is to 694 create a set of VPNs that carry traffic to the destination network 695 through instances of each service function in turn. This is achieved 696 by allocating and configuring RTs such that the egress VRFs of one 697 set of SF instances import an RT that is an export RT for the ingress 698 VRFs of the next, logically connected, set of SF instances.
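As an informal illustration of this RT-chaining pattern, the following Python sketch shows hypothetical controller logic (the naming and RT-numbering scheme are illustrative assumptions, not part of any specification) that allocates one RT per service function set so that each egress VRF imports the RT exported by the ingress VRFs of the next set:

   def allocate_rts(asn, chain, base=1000):
       # One RT per SF set, exported by that set's ingress VRFs; one
       # extra RT stands for the destination network's exit VRF.
       rt = {sf: "%s:%d" % (asn, base + i) for i, sf in enumerate(chain)}
       dest_rt = "%s:%d" % (asn, base + len(chain))
       policy = {"entry_vrf": {"import": rt[chain[0]]}}
       for i, sf in enumerate(chain):
           nxt = rt[chain[i + 1]] if i + 1 < len(chain) else dest_rt
           policy[sf] = {
               # The ingress VRF advertises the static route via the
               # SF instance interface, tagged with this set's RT.
               "ingress_vrf": {"export": rt[sf]},
               # The egress VRF imports the routes of the next SF set
               # (or of the destination network, for the last SF).
               "egress_vrf": {"import": nxt},
           }
       return policy

   # Example: the three-function chain of Figure 1.
   policy = allocate_rts("65000", ["SF-1", "SF-2", "SF-3"])

Each export/import pair produced this way corresponds to one of the sequential VPNs that form the links of the SFC.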
700 The process of SFC creation is as follows: 702 1. Controller creates a VRF in each routing system that is connected 703 to a service instance that will be used in the SFC 705 2. Controller configures each VRF to contain the logical interface 706 that connects to an SF instance. 708 3. Controller implements route target import and export policies in 709 the VRFs using the same route targets for the egress VRFs of a 710 service function and the ingress VRFs of the next logically 711 connected service function in the SFC. 713 4. Controller installs a static route in each ingress VRF whose next 714 hop is the interface that an SF instance is connected to. The 715 prefix for the route is the destination network to be reached by 716 passing through the SFC. The following sections describe 717 variations that can be used. 719 5. Routing systems advertise the static routes via BGP as VPN routes 720 with next hop being the IP address of the router, with an 721 encapsulation specified and a label that identifies the service 722 instance interface. 724 6. Routing systems containing VRFs with matching route targets 725 receive the updates. 727 7. Routes are installed in egress VRFs with matching import targets. 728 The egress VRFs of each SF instance will now contain VPN routes 729 to one or more routers containing ingress VRFs for SF instances 730 of the next service function in the SFC. 732 Routes to the destination network via the first set of SF instances 733 are advertised into the source network, and the egress VRFs of the 734 last SF instance set have routes into the destination network. 736 As discussed further in Section 3, egress VRFs can load balance 737 across the multiple next hops advertised from the next set of ingress 738 VRFs. 740 2.6.2. Modified-Route SFC Creation 742 In this method of SFC configuration, all the VRFs connected to SF 743 instances for a given SFC are configured with the same import and export 744 RT, so they form a VPN-connected mesh between the SF instance 745 interfaces. This is termed the 'Service VPN'. A route is configured 746 or learnt in each VRF whose destination is the IP address of a 747 connected SF instance, reachable via an interface configured in the VRF. The 748 interface may be a physical or logical interface. The routing system 749 that hosts such a VRF advertises a VPN route for each locally 750 connected SF instance, with a forwarding label that enables it to 751 forward incoming traffic from other routing systems to the connected 752 SF instance. The VPN routes may be advertised via an RR or the 753 controller, which sends these updates to all the other routing 754 systems that have VRFs with the service VPN RT. At this point all 755 the VRFs have a route to reach every SF instance. The same virtual 756 IP address may be used for each SF instance in a set, enabling load- 757 balancing among multiple SF instances in the set. 759 The controller builds a route-policy for the routing systems in the 760 VPN that describes the logical topology of each service chain that 761 they belong to. The route-policy contains entries in the form of a 762 tuple for each service chain: 764 {Service-topology-name, Service-topology-RT, Service-node-sequence} 765 where Service-node-sequence is simply an ordered list of the service 766 function interface IP addresses that are in the chain. 768 Every service function chain has a single unique service-topology-RT 769 that is allocated and provisioned on all participating routing 770 systems in the relevant VRFs.
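The route-policy tuple can be modeled in many ways; the following Python sketch is one possible encoding, given here purely for illustration (the field names and addresses are assumptions, not a normative format):

   from dataclasses import dataclass
   from typing import List

   @dataclass
   class ServiceChainPolicy:
       topology_name: str        # Service-topology-name
       topology_rt: str          # the chain's unique Service-topology-RT
       node_sequence: List[str]  # ordered SF interface IP addresses

   # Example: the SFC of Figure 1, with illustrative addresses.
   policy = ServiceChainPolicy(
       topology_name="net-a-to-net-b",
       topology_rt="65000:999",
       node_sequence=["10.1.1.1", "10.1.2.1", "10.1.3.1"],  # SF1-SF3
   )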
772 The VRF in the routing system that connects to the destination 773 network (i.e. the exit VRF) is configured to attach the Service- 774 topology-RT to exported routes, and the VRF connected to the source 775 network (i.e. the entry VRF) will import routes using the Service- 776 topology-RT. The controller may also be used to originate the 777 Service-topology-RT attached routes. 779 The route-policy may be described in a variety of formats and 780 installed on the routing system using a suitable mechanism. For 781 instance, the policy may be defined in YANG and provisioned using 782 Netconf [RFC6241]. 784 Using Figure 1 for reference, when the gateway R-B advertises a VPN 785 route to Network-B, it attaches the Service-topology-RT. BGP route 786 updates are sent to all the routing systems in the service VPN. The 787 routing systems perform a modified set of actions for next-hop 788 resolution and route installation in the ingress VRFs compared to 789 normal BGP VPN behavior in routing systems, but no changes are 790 required in the operation of the BGP protocol itself. The 791 modification of behavior in the routing systems allows the automatic 792 and constrained flow of traffic through the service chain. 794 Each routing system in the service VPN will process the VPN route to 795 Network-B via R-B as follows: 797 1. If the routing system contains VRFs that import the Service- 798 topology-RT, continue; otherwise ignore the route. 800 2. The routing system identifies the position and role (ingress/ 801 egress) of each of its VRFs in the SFC by comparing the IP 802 address of the VRF's route to its connected SF instance with 803 those in the Service-node-sequence in the route-policy. 804 Alternatively, the controller may provision the specific service 805 node IP to be used as the next-hop in each VRF, in the route- 806 policy for the VRF. 808 3. The routing system modifies the next-hop of the imported route 809 with the Service-topology-RT, to select the appropriate next-hop 810 as per the route-policy. It ignores the next-hop and label in 811 the received route. It resolves the selected next-hop in the 812 local VRF routing table. 814 4. 816 a. The imported route to Network-B in the ingress VRF is 817 modified to have a next-hop of the IP address of the 818 logically connected SF instance. 820 b. The imported route to Network-B in the egress VRF is modified 821 to have a next hop of the IP address of the next SF instance 822 in the SFC. 824 5. The egress VRFs for the last service function install the VPN 825 route via the gateway R-B unmodified. 827 Note that the modified routes are not re-advertised into the VPN by 828 the various intermediate routing systems in the SFC. 830 2.6.3. Common SFC provisioning considerations 832 In both methods, for physical routers, the creation and 833 configuration of VRFs, interfaces and local static routes can be 834 performed programmatically using Netconf; and BGP route distribution 835 can use a route reflector (which may be part of the controller). In 836 the virtualized case, where a VPN forwarder is present, creation and 837 configuration of VRFs, interfaces and installation of routes may 838 instead be performed using a single protocol like XMPP, NC/YANG or an 839 equivalent programmatic interface.
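To make the modified next-hop resolution steps of Section 2.6.2 concrete, the following Python sketch (illustrative only; a real implementation would live inside the BGP route-import machinery of the routing system) computes the next-hop that a given VRF would select for the imported route to Network-B, reusing the ServiceChainPolicy model sketched earlier:

   from dataclasses import dataclass

   @dataclass
   class Vrf:
       role: str              # 'ingress' or 'egress'
       connected_sf_ip: str   # SF instance interface attached to this VRF

   def resolve_next_hop(vrf, route, policy):
       # Step 1: only routes carrying the Service-topology-RT are
       # processed; all others are ignored.
       if policy.topology_rt not in route["route_targets"]:
           return None
       seq = policy.node_sequence
       if vrf.role == "ingress":
           # Step 4a: point the route at the locally attached SF
           # instance, ignoring the received next-hop and label.
           return vrf.connected_sf_ip
       # Step 4b: an egress VRF points the route at the next SF
       # instance in the Service-node-sequence, or (step 5) keeps the
       # gateway next-hop unmodified for the last service function.
       i = seq.index(vrf.connected_sf_ip)
       return seq[i + 1] if i + 1 < len(seq) else route["bgp_next_hop"]

For example, with the policy sketched earlier, the egress VRF attached to the SF instance at 10.1.2.1 would select 10.1.3.1 as its next-hop, while the egress VRF at 10.1.3.1 would install the route via the gateway R-B unmodified.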
841 Also in the virtualized case, the actual forwarding table entries to 842 be installed in the ingress and egress VRFs may be calculated by the 843 controller based on its internal knowledge of the required SFC 844 topology and the connectivity of SF instances to routing systems. In 845 this case, the routes may be directly installed in the forwarders 846 using the programmatic interface and no BGP route advertisement is 847 necessary, except when coordination with external domains (Section 6) 848 or federation between controller domains is employed (Section 8). 849 Note however that this is just one typical model for a virtual 850 forwarding based system. In general, physical and virtual routing 851 systems can be treated exactly the same if they have the same 852 capabilities. 854 In both methods, the SF instance may also need to be set up 855 appropriately to forward traffic between its input and output 856 interfaces, either via static, dynamic or policy-based routing. If 857 the service function is a transparent L2 service, then the static 858 route installed in the ingress VRF will have a next-hop of the IP 859 address of the routing system interface that the service instance is 860 attached to on its other interface. 862 2.7. Controller Function 864 The purpose of the controller is to manage instantiation of SFCs in 865 networks and datacenters. When an SFC is to be instantiated, a model 866 of the desired topology (service functions, number of instances, 867 connectivity) is built in the controller either via an API or GUI. 868 The controller then selects resources in the infrastructure that will 869 support the SFC and configures them. This can involve instantiation 870 of SF instances to implement each service function, the instantiation 871 of VRFs that will form virtual networks between SF instances, and 872 installation of routes to cause traffic to flow into and between SF 873 instances. It can also include provisioning the necessary static, 874 dynamic or policy-based forwarding on the service function instance 875 to enable it to forward traffic. 877 For simplicity, in this document, the controller is assumed to 878 contain all the required features for management of SFCs. In actual 879 implementations, these features may be distributed among multiple 880 inter-connected systems. For example, an overarching orchestrator might 881 manage the overall SFC model, sending instructions to a separate 882 virtual machine manager to instantiate service function instances, 883 and to a virtual network manager to set up the service chain 884 connections between them. 886 The controller can also perform necessary BGP signaling and route 887 distribution actions as described throughout this document. 889 2.8. Variations on Setting Prefixes in an SFC 891 The SFC Creation section above described the basic procedures for 892 two SFC creation methods. This section describes some 893 techniques that extend and provide optimizations on top of the 894 basic procedures. 896 2.8.1. Using a Default Route 898 In the methods described above, it can be noted that only the gateway 899 routing systems need the specific network prefixes to steer traffic 900 in and out of the SFC. The intermediate systems can direct traffic 901 in the ingress and egress VRFs by using only a default route. Hence, 902 it is possible to avoid installing the network prefixes in the 903 intermediate systems.
This can be done by splitting the SFC into two 904 sections - one linking the entry and exit VRFs and the other 905 including the intermediate systems. For instance, this may be 906 achieved by using two different Service-topology-RTs in the second 907 method. 909 2.8.2. Using a Default Route and a Large Prefix 911 In the configuration methods described above, the network prefixes 912 for each network (Network-A and Network-B in the example above) 913 connected to the SFC are used in the routes that direct traffic 914 through the SFC. This creates an operational linkage between the 915 implementation of the SFC and the insertion of the SFC into a 916 network. 918 For instance, subscriber network prefixes will normally be segmented 919 across subscriber attachment points such as broadband or mobile 920 gateways. This means that each SFC would have to be configured with 921 the subscriber network prefixes whose traffic it is handling. 923 In a variation of the SFC configuration method described above, the 924 prefixes used in each direction can be such that they include all 925 possible addresses at each side of the SFC. For example, in 926 Figure 1, the prefix for Network-A could include all subscriber IP 927 addresses and the prefix for Network-B could be the default route, 928 0/0. 930 Using this technique, the same routes can be installed in all 931 instances of an SFC that serve different groups of subscribers in 932 different geographic locations. 934 The routes forwarding traffic into an SF instance and to the next SF 935 instance are installed when an SFC is initially built, and each time 936 an SF instance is connected into the SFC, but there is no requirement 937 for VRFs to be reconfigured when traffic from different networks passes 938 through the service chain, so long as their prefix is included in the 939 prefixes in the VRFs along the SFC. 941 In this variation, it is assumed that no subscriber-originated 942 traffic will enter the SFC destined for an IP address also in the 943 subscriber network address range. This will not be a restriction in 944 many cases. 946 2.8.3. Disaggregated Gateway Routers 948 As a slight variation of the above, a network prefix may be 949 disaggregated and spread out among various gateway routers, for 950 instance, in the case of virtual machines in a data-center. In order 951 to reduce the scaling requirements on the routing systems along the 952 SFC, the SFC can again be split into two sections as described above. 953 In addition, the last egress VRF may act as the exit VRF and install 954 the destination network's disaggregated routes. If the destination 955 network's prefixes can be aggregated, for instance into a subnet 956 prefix, then the aggregate prefix may be advertised and installed in 957 the entry VRF. 959 2.8.4. Optimizing VRF usage 961 It may be desirable to avoid using distinct ingress and egress VRFs 962 for the service instances in order to make more efficient use of VRF 963 resources, especially on physical routing systems. The ingress VRF 964 and egress VRF may be treated as conceptual entities and the 965 forwarding realized using one or more options described in this 966 section, combined with the methods described earlier. 968 For instance, the next-hop forwarding label described earlier serves 969 the purpose of directing traffic received from other routing systems 970 directly towards an attached service instance.
On the other hand, if 971 the encapsulation mechanism or the device in use requires an IP 972 lookup for incoming packets from other routing systems, then the 973 specific network prefixes may be installed in the intermediate 974 service VRFs to direct traffic towards the attached service 975 instances. 977 Similarly, a per-interface policy-based-routing rule applied to an 978 access interface can serve to direct traffic coming in from attached 979 service instances towards the next SF set. 981 2.8.5. Dynamic Entry and Exit Signaling 983 When either of the methods of the previous sections is employed, the 984 prefixes of the attached networks at each end of an SFC can be 985 signaled into the corresponding VRFs dynamically. This requires that 986 a BGP session is configured either from the network device at each 987 end of the SFC into each network or from the controller. 989 If dynamic signaling is performed, and a bidirectional SFC set is 990 configured, and the gateways to the networks connected via the SFC 991 exchange routes, steps must be taken to ensure that routes to both 992 networks do not get advertised from both ends of the SFC set by re- 993 origination. This can be achieved if a new BGP Extended Community 994 [RFC4360] is implemented to control re-origination. When a route is 995 re-originated, the RTs of the re-originated routes are appended to 996 the new RT-Record Extended Community, and if the RT for the route 997 already exists in the Extended Community, the route is not re- 998 originated (see Section 10.1). 1000 2.8.6. Dynamic Re-Advertisements in Intermediate Systems 1002 The intermediate routing systems attached to the service instances 1003 may also use the dynamic signaling technique from the previous 1004 section to re-advertise received routes up the chain. In this case, 1005 the ingress and egress VRFs are combined into one; and a local route- 1006 policy ensures the re-advertised routes are associated with labels 1007 that direct incoming traffic directly to the attached service 1008 instances on that routing system. 1010 2.9. Layer-2 Virtual Networks and Service Functions 1012 There are SFs that operate at layer-2, in a transparent mode, and 1013 forward traffic based on the MAC DA. When such an SF is present in 1014 the SFC, the procedures at the routing system are modified slightly. 1015 The L3 routes are the same as the L3 SF case, but now an L2 header 1016 has to be supplied for each packet passing through an SF. A 1017 convenient destination MAC address to use is the one that each VRF 1018 uses for the default gateway of its network. This can be the same 1019 for all VRFs in routing systems in the domain of a controller. The 1020 VRF at the egress of the last SF will rewrite the gateway MAC address 1021 with the MAC address of the actual destination before encapsulating 1022 and forwarding. 1024 An SFC may also be set up between end systems or network segments 1025 within the same Layer-2 bridged network. In this case, applying the 1026 procedures described earlier, the segments or groups of end systems 1027 are placed in distinct Layer-2 virtual networks, which are then 1028 inter-connected via a sequence of intermediate Layer-2 virtual 1029 networks that form the links in the SFC. Each virtual network maps 1030 to a pair of ingress and egress MAC VRFs on the routing systems to 1031 which the SF instances are attached.
The routing systems at the ends 1032 of the SFC will advertise the locally learnt or installed MAC entries 1033 using BGP-EVPN type-2 routes, which will get installed in the MAC 1034 VRFs at the other end. The intermediate systems may use default MAC 1035 routes installed in the ingress and egress MAC VRFs, or the other 1036 variations described earlier in this document. 1038 2.10. Header Transforming Service Functions 1040 If a service function performs an action that changes the source 1041 address in the packet header (e.g., NAT), the routes that were 1042 installed as described above may not support reverse flow traffic. 1044 The solution to this is for the controller to modify the routes in the 1045 reverse direction to direct traffic into instances of the 1046 transforming service function. The original routes with a source 1047 prefix (Network-A in Figure 2) are replaced with a route that has a 1048 prefix that includes all the possible addresses that the source 1049 address could be mapped to. In the case of network address 1050 translation, this would correspond to the NAT pool. 1052 3. Load Balancing Along a Service Function Chain 1054 One of the key concepts driving NFV [NFVE2E] is the idea that each 1055 service function along an SFC can be separately scaled by changing 1056 the number of service function instances that implement it. This 1057 requires that load balancing be performed before entry into each 1058 service function. In this architecture, load balancing is performed 1059 in either or both of egress and ingress VRFs depending on the type of 1060 load balancing being performed, and if more than one service instance 1061 is connected to the same ingress VRF. 1063 3.1. SF Instances Connected to Separate VRFs 1065 If SF instances implementing a service in an SFC are each connected 1066 to separate VRFs (e.g., instances are connected to different routers or 1067 are running on different hosts), load balancing is performed in the 1068 egress VRFs of the previous service, or in the VRF that is the entry 1069 to the SFC. The controller distributes BGP multi-path routes to the 1070 egress VRFs. The destination prefix of each route is the ultimate 1071 destination network, or its representative aggregate or default. The 1072 next-hops in the ECMP set are BGP next-hops of the service instances 1073 attached to ingress VRFs of the next service in the SFC. The load 1074 balancing corresponds to BGP Multipath, which requires that the route 1075 distinguishers for each route are distinct in order to recognize that 1076 distinct paths should be used. Hence, each VRF in a distributed SFC 1077 environment should have a unique route distinguisher. 1079 +------+ +-------------------------+ 1080 O----|SFI-11|---O |--- Data plane connection| 1081 // +------+ \\ |=== Encapsulation tunnel | 1082 // \\ | O VRF | 1083 // \\ | * Load balancer | 1084 // \\ +-------------------------+ 1085 // +------+ \\ 1086 Net-A-->O*====O---|SFI-12|---O====O-->Net-B 1087 \\ +------+ // 1088 \\ // 1089 \\ // 1090 \\ // 1091 \\ +------+ // 1092 O----|SFI-13|---O 1093 +------+ 1095 Egress VRF Load Balancing across SF Instances 1096 Connected to Different VRFs 1098 Figure 6 1100 In the diagram, above, a service function is implemented in three 1101 service instances, each connected to separate VRFs.
Traffic from 1102 Network-A arrives at the VRF at the start of the SFC, and is load 1103 balanced across the service instances using a set of ECMP routes with 1104 next hops being the addresses of the routing systems containing the 1105 ingress VRFs and with labels that identify the ingress interfaces of 1106 the service instances. 1108 In the case that the bandwidths of the links between the load balancer 1109 and the ingress VRFs are unequal, or that the bandwidth capacities of 1110 the service function instances are unequal, this can be signalled in 1111 the routes for each ingress VRF using the extended community 1112 described in [draft-ietf-idr-link-bandwidth], and the procedures from 1113 [draft-malhotra-bess-evpn-unequal-lb] could be followed. 1115 3.2. SF Instances Connected to the Same VRF 1117 When SF instances implementing a service in an SFC are connected to 1118 the same ingress VRF, load balancing is performed in the ingress VRF 1119 across the service instances connected to it. The controller will 1120 install routes in the ingress VRF to the destination network with the 1121 interfaces connected to each service instance as next hops. The 1122 ingress VRF will then use ECMP to load balance across the service 1123 instances. 1125 +------+ +-------------------------+ 1126 |SFI-11| |--- Data plane connection| 1127 +------+ |=== Encapsulation tunnel | 1128 / \ | O VRF | 1129 / \ | * Load balancer | 1130 / \ +-------------------------+ 1131 / +------+ \ 1132 Net-A-->O====O*---|SFI-12|----O====O-->Net-B 1133 \ +------+ / 1134 \ / 1135 \ / 1136 \ / 1137 +------+ 1138 |SFI-13| 1139 +------+ 1141 Ingress VRF Load Balancing across SF Instances 1142 Connected to the Same VRF 1144 Figure 7 1146 In the diagram, above, a service is implemented by three service 1147 instances that are connected to the same ingress and egress VRFs. 1148 The ingress VRF load balances across the ingress interfaces using 1149 ECMP, and the egress traffic is aggregated in the egress VRF. 1151 If forwarding labels that identify each SFI ingress interface are 1152 used, and if the routes to each SF instance are advertised with 1153 different route distinguishers, then it is possible to perform ECMP 1154 load balancing at the routing instance at the beginning of the 1155 encapsulation tunnel (which could be the egress VRF of the previous 1156 SF in the SFC). 1158 3.3. Combination of Egress and Ingress VRF Load Balancing 1160 In Figure 8, below, an example SFC is shown where load balancing is 1161 performed in both ingress and egress VRFs. 1163 +-------------------------+ 1164 |--- Data plane connection| 1165 +------+ |=== Encapsulation tunnel | 1166 |SFI-11| | O VRF | 1167 +------+ | * Load balancer | 1168 / \ +-------------------------+ 1169 / \ 1170 / +------+ \ +------+ 1171 O*---|SFI-12|---O*====O---|SFI-21|---O 1172 // +------+ \\ // +------+ \\ 1173 // \\// \\ 1174 // \\ \\ 1175 // //\\ \\ 1176 // +------+ // \\ +------+ \\ 1177 Net-A-->O*====O----|SFI-13|---O*====O---|SFI-22|---O====O-->Net-B 1178 +------+ +------+ 1179 ^ ^ ^ ^ ^ ^ 1180 | | | | | | 1181 | Ingress Egress | | | 1182 | Ingress Egress | 1183 SFC Entry SFC Exit 1185 Load Balancing across SF Instances 1187 Figure 8 1189 In Figure 8, above, an SFC is composed of two services implemented by 1190 three service instances and two service instances, respectively. The 1191 service instances SFI-11 and SFI-12 are connected to the same ingress 1192 and egress VRFs, and all the other service instances are connected to 1193 separate VRFs.
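Before looking at the traffic flow, note that the two-stage ECMP of Figure 8 does not split traffic evenly. The following toy Python simulation (an illustration only, assuming a uniform per-flow hash at each stage) reproduces the 25%/25%/50% skew discussed in the next paragraph:

   import random

   def pick_instance(flow_id):
       rng = random.Random(flow_id)  # stand-in for a per-flow hash
       # Stage 1: the chain entry VRF balances over two routes: the
       # shared ingress VRF (SFI-11/SFI-12) and SFI-13's ingress VRF.
       if rng.random() < 0.5:
           return "SFI-13"
       # Stage 2: the shared ingress VRF splits its half again over
       # its two connected SF instances.
       return rng.choice(["SFI-11", "SFI-12"])

   counts = {"SFI-11": 0, "SFI-12": 0, "SFI-13": 0}
   for flow in range(100000):
       counts[pick_instance(flow)] += 1
   # Typical result: SFI-13 ~50%, SFI-11 ~25%, SFI-12 ~25%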
   After traffic passes through the first set of service instances, it
   is load balanced in each of the egress VRFs of the first set of
   service instances, across the ingress VRFs of the next set of
   service instances.

3.4.  Forward and Reverse Flow Load Balancing

   This section discusses the load balancing requirements for forward
   and reverse paths when stateful service functions are deployed.

3.4.1.  Issues with Equal Cost Multi-Path Routing

   As discussed in the previous sections, load balancing in the
   forward direction of the SFC in the above example can occur
   automatically with standard BGP, if multiple equal cost routes to
   Network-B are installed into all the ingress VRFs and each route
   directs traffic through a different service function instance in
   the next set.  The multiple BGP routes in the routing table will
   translate to Equal Cost Multi-Path in the forwarding table.  The
   hash used in the load balancing algorithm (per packet, per flow or
   per prefix) is implementation specific.

   If a service function is stateful, it is required that forward
   flows and reverse flows always pass through the same service
   function instance.  Standard ECMP does not provide this capability,
   since the hash calculation will see different input data for the
   same flow in the forward and reverse directions (the source and
   destination fields are reversed).

   Additionally, if the number of SF instances changes, either
   increasing to expand capacity or decreasing (whether planned, or
   due to an SF instance failure), the ECMP hash table is
   recalculated, most flows will be directed to a different SF
   instance, and user sessions will be disrupted.

   There are a number of ways to satisfy the requirements of symmetric
   forward/reverse paths for flows and of minimal disruption when SF
   instances are added to or removed from a set.  Two techniques that
   can be employed are described in the following sections.

3.4.2.  Modified ECMP with Consistent Hash

   Symmetric forwarding into each side of an SF instance set can be
   achieved with a small modification to ECMP if the packet headers
   are preserved after passing through the SF instance set, and
   assuming that the same hash function, the same hash salt, and the
   same ordering association of hash buckets to ECMP routes are used
   in both directions.  Each packet's 5-tuple data is used to
   calculate which hash bucket, and therefore which service instance,
   the packet will be sent to, but the source and destination IP
   address and port information are swapped in the calculation in the
   reverse direction.  This method only requires that the list of
   available service function instances be consistently maintained in
   load balancing tables in all the routing systems, rather than
   requiring that flow tables be maintained.  This requirement can be
   met by the use of a distinct VPN route for each instance.
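   A minimal, non-normative sketch of such a direction-symmetric hash
   follows; the salt value and field encoding are illustrative only.
   The reverse direction swaps the source and destination fields
   before hashing, so both directions of a flow select the same bucket
   and, therefore, the same SF instance.

   import hashlib

   HASH_SALT = b"example-salt"   # must match on all routing systems

   def flow_bucket(src_ip, dst_ip, proto, sport, dport, n_buckets,
                   reverse=False):
       if reverse:
           # Reverse-direction packets are hashed on the swapped 5-tuple.
           src_ip, dst_ip, sport, dport = dst_ip, src_ip, dport, sport
       key = f"{src_ip}|{dst_ip}|{proto}|{sport}|{dport}".encode()
       digest = hashlib.sha256(HASH_SALT + key).digest()
       return int.from_bytes(digest, "big") % n_buckets

   # A forward packet and its reply land in the same bucket:
   fwd = flow_bucket("10.1.0.5", "10.2.0.9", 6, 33000, 443, n_buckets=3)
   rev = flow_bucket("10.2.0.9", "10.1.0.5", 6, 443, 33000, n_buckets=3,
                     reverse=True)
   assert fwd == rev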
   In the SFC architecture described in this document, when SF
   instances are added or removed, the controller is required to
   install (or remove) routes to the SF instances.  The controller
   could configure the load balancing function in the VRFs that
   connect to each added (or removed) SF instance as part of the same
   network transaction as the route updates, to ensure that the load
   balancer configuration is synchronized with the set of SF
   instances.

   Consistent ordering among the ECMP routes in the routing systems
   could be achieved through configuration of the routing systems by
   the controller using, for instance, NETCONF [RFC6241]; or, when the
   routes are signaled using BGP by the controller or a routing
   system, the order for a given instance can be sent in a new
   'Consistent Hash Sort Order' BGP Extended Community (defined in
   Section 10.2).

   The effect of rehashing when SF instances are added or removed can
   be minimized, or even eliminated, using variations of the technique
   of consistent hashing [consistent-hash].  Details are outside the
   scope of this document.

3.4.3.  ECMP with Flow Table

   A second refinement that can ensure forward/reverse flow
   consistency, and that also provides stability when the number of SF
   instances changes ('flow stickiness'), is the use of dynamically
   configured IP flow tables in the VRFs.  In this technique, flow
   tables are used to ensure that existing flows are unaffected if the
   number of ECMP routes changes, and that forward and reverse traffic
   passes through the same SF instance in each set of SF instances
   implementing a service function.

   The flow tables are set up as follows:

   1.  User traffic with a new 5-tuple enters an egress VRF from a
       connected SF instance.

   2.  The VRF calculates the ECMP hash across the available routes
       (i.e., the ECMP group) to the ingress interfaces of the SF
       instances in the next SF instance set.  The consistent hash
       technique described in Section 3.4.2 must be used here and in
       the subsequent steps.

   3.  The VRF creates a new flow entry for the 5-tuple of the new
       traffic, with the next hop being the chosen downstream ECMP
       group member (determined in step 2 above).  All subsequent
       packets for the same flow will be forwarded using flow lookup
       and, hence, will use the same next hop.

   4.  The encapsulated packet arrives at the routing system that
       hosts the ingress VRF for the selected SF instance.

   5.  The ingress VRF of the next service instance determines whether
       the packet came from a routing system that is in an ECMP group
       in the reverse direction (i.e., from this ingress VRF back to
       the previous set of SF instances).

   6.  If an ECMP group is found, the ingress VRF creates a flow entry
       for the reversed 5-tuple, with the next hop being the tunnel on
       which the traffic arrived.  This handles the traffic in the
       reverse direction.

   7.  If multiple SF instances are connected to the ingress VRF, the
       ECMP consistent hash is used to choose which one to send the
       traffic into.

   8.  A forward flow table entry is created for the traffic's
       5-tuple, with the next hop being the interface of the SF
       instance chosen in the previous step.

   9.  The packet is sent into the selected SF instance.

   The above method ensures that forward and reverse flows pass
   through the same SF instances, and that, if the number of ECMP
   routes changes when SF instances are added or removed, all existing
   flows will continue to pass through the same SF instances, while
   new flows will use the new ECMP hash.  The only flows affected will
   be those that were passing through an SF instance that was removed;
   those will be spread among the remaining SF instances using the
   updated ECMP hash.

   If the consistent hash algorithm is used in both directions, then
   only the forward flow entries are required, and they can be built
   independently in each direction.  If distinct VPN routes with
   next-hop forwarding labels are used, then the flow table created in
   step 3 alone is sufficient to provide flow stickiness.
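   The following non-normative sketch illustrates the egress-VRF side
   of this procedure (steps 1 through 3 above): the first packet of a
   flow selects an ECMP group member by hash, a flow entry pins that
   choice, and subsequent packets, as well as later changes to the
   ECMP group, leave pinned flows untouched.  Class and variable names
   are illustrative, and the plain hash shown stands in for the
   consistent hash of Section 3.4.2.

   import hashlib

   class EgressVrfFlowTable:
       def __init__(self, ecmp_group):
           self.ecmp_group = list(ecmp_group)  # next hops to ingress VRFs
           self.flow_table = {}                # 5-tuple -> pinned next hop

       def _hash(self, five_tuple):
           digest = hashlib.sha256(repr(five_tuple).encode()).digest()
           return int.from_bytes(digest, "big") % len(self.ecmp_group)

       def forward(self, five_tuple):
           if five_tuple not in self.flow_table:
               # Steps 2-3: hash once, then pin the flow to the chosen
               # downstream ECMP group member.
               nh = self.ecmp_group[self._hash(five_tuple)]
               self.flow_table[five_tuple] = nh
           return self.flow_table[five_tuple]

   vrf = EgressVrfFlowTable(["192.0.2.11", "192.0.2.12", "192.0.2.13"])
   flow = ("10.1.0.5", "10.2.0.9", 6, 33000, 443)
   pinned = vrf.forward(flow)
   vrf.ecmp_group.append("192.0.2.14")  # an SF instance is added
   assert vrf.forward(flow) == pinned   # the existing flow is unaffected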
3.4.4.  Dealing with Different Hash Algorithms in an SFC

   In some cases, there will be two or more hash algorithms in the
   forwarders along an SFC, e.g., when a physical router is at the
   entry and exit of the chain and virtual forwarders are used within
   the chain.  Forward and reverse flows will then mostly not pass
   through the same SF instance of the first SF, and the SFC will not
   operate as intended if the first SF is stateful.  It may be
   impractical, or prohibitively expensive, to implement the flow-
   table-based methods described above to achieve flow stability and
   symmetry.  This issue can be mitigated by ensuring that the first
   SF is not stateful, or by placing a null SF between the physical
   router and the first actual SF in the SFC.  This ensures that the
   hash method on both sides of the stateful service instances is the
   same, and the SFC will operate with flow stability and symmetry if
   the methods described above are employed.

4.  Sharing Service Functions in Different SFCs

4.1.  Shared SFs in L3 SFCs

   Sharing SFs among multiple SFCs requires that packets emerging from
   an SF can be mapped to the correct next SF.  This can be achieved
   by configuring an SF to accept packets from multiple subnets on
   each interface, configuring addresses from each of these subnets on
   the SFs, and using next-table policies to direct traffic between
   subnet-specific VRFs to and from the SF interfaces.  However, in
   mobility and wireline applications, which are the most common ones
   where sharing is desired, classification is on the basis of
   subscriber ID and traffic type, so discrimination on the basis of
   subnets is too coarse-grained.  Using host routes along SFC paths
   could achieve the desired result of SF sharing, but will not scale
   appropriately.

4.2.  Shared SFs in L2 SFCs

   Layer 2 transparent SFs can be shared among multiple service chains
   by using a different VLAN for each chain as packets pass through
   each SF.  Forwarding into the different VLANs can be accomplished
   by using different labels in the encapsulation of packets arriving
   at an ingress VRF.  Egress VRFs can have next hops into different
   SFs based on the VLAN of each egressing packet.  The service chains
   sharing an SF might have different networks from each other at each
   end, or might be selected on the basis of 5-tuple filtering from
   one network.
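   A non-normative sketch of the mappings involved is shown below; all
   labels, VLAN IDs and addresses are hypothetical.  The encapsulation
   label selects a per-chain VLAN into the shared SF, and the egress
   VRF selects the next hop from the VLAN of the egressing packet.

   LABEL_TO_VLAN = {2001: 100, 2002: 200}    # chain 1 -> VLAN 100, etc.
   VLAN_TO_NEXT_HOP = {100: "192.0.2.31",    # next SF of chain 1
                       200: "192.0.2.32"}    # next SF of chain 2

   def into_shared_sf(mpls_label):
       """Ingress VRF: tag the packet with its chain's VLAN."""
       return LABEL_TO_VLAN[mpls_label]

   def out_of_shared_sf(vlan_id):
       """Egress VRF: pick the next hop for the packet's chain."""
       return VLAN_TO_NEXT_HOP[vlan_id]

   print(out_of_shared_sf(into_shared_sf(2001)))  # "192.0.2.31"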
5.  Steering into SFCs Using a Classifier

   In many applications of SFCs, a classifier will be used to direct
   traffic into SFCs.  The classifier inspects the first packet, or
   first few packets, in a flow to determine which SFC the flow should
   be sent into.  The decision criteria can be based on just the IP
   5-tuple of the header (i.e., filter-based forwarding), or could
   involve analysis of the payload of the packets using deep packet
   inspection.  Integration with a subscriber management system, such
   as PCRF or AAA, may be required in order to identify which SFC to
   send traffic to on the basis of subscriber policy.

   An example logical architecture is shown in Figure 9, below, where
   a classifier is external to a physical router that hosts the VRFs
   forming the ends of two SFCs.  In the case of filter-based
   forwarding, classification could occur in a VRF on the router.

                +----------+
                | PCRF/AAA |
                +-----+----+
                      :
                      :
Subscriber      +-----+------+
Traffic----->   | Classifier |
                +------------+
                     |   |
             +-------|---|------------------------+
             |       |   |        Router          |
             |       |   |                        |
             |       O   O            X--------->Internet
             |       |   |           / \          |
             |       |   |          O   O         |
             +-------|---|----------|---|---------+
                     |   |  +---+   +---+ |   |
                     |   +--+ U +---+ V +-+   |
                     |      +---+   +---+     |
                     |                        |
                     |  +---+   +---+   +---+ |
                     +--+ X +---+ Y +---+ Z +-+
                        +---+   +---+   +---+

         Subscriber/Application-Aware Steering with a Classifier

                               Figure 9

   In the diagram, the classifier receives subscriber traffic and
   sends the traffic out of one of two logical interfaces, depending
   on the classification criteria.  The logical interfaces of the
   classifier are connected to VRFs in a router that are the entries
   to two SFCs (shown as O in the diagram).

   In this scenario, the entry VRF for each chain does not advertise
   the destination network prefixes, and the modified method of
   setting prefixes described in Section 2.8.2 can be employed.  Also,
   the exit VRF for each SFC does not peer with a gateway or proxy
   node in the destination network, and packets are forwarded using IP
   lookup in the main routing table or in a VRF that the exit traffic
   from the SFCs is directed into (shown as X in the diagram).  A flow
   table may be required to ensure that reverse traffic is sent into
   the correct SFC.

   An alternative is for the classifier itself to be a distributed,
   virtualized service function, but with multiple egress interfaces.
   In that case, each virtual classifier instance could be attached to
   a set of VRFs that connect to different SFCs.  Each chain entry VRF
   would load balance across the first SF instance set in its SFC.
   The reverse flow table mechanism described in Section 3.4.3 could
   be employed to ensure that flows return to the originating
   classifier instance, which may maintain subscriber context and
   perform charging and accounting.
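   The following non-normative Python sketch illustrates filter-based
   classification with a flow cache: the first packet of a flow is
   matched against 5-tuple filters to select an SFC entry VRF, and the
   cached decision keeps the rest of the flow on the same chain.  All
   filter values and VRF names are hypothetical.

   FILTERS = [
       # (protocol, destination port) -> SFC entry VRF
       ((6, 80),   "sfc-u-v-entry-vrf"),     # HTTP    -> chain U-V
       ((6, 5060), "sfc-x-y-z-entry-vrf"),   # SIP/TCP -> chain X-Y-Z
   ]
   DEFAULT_NEXT_HOP = "internet-vrf"         # unclassified traffic
   flow_cache = {}

   def classify(five_tuple):
       src, dst, proto, sport, dport = five_tuple
       if five_tuple not in flow_cache:
           for (f_proto, f_dport), entry_vrf in FILTERS:
               if (proto, dport) == (f_proto, f_dport):
                   flow_cache[five_tuple] = entry_vrf
                   break
           else:
               flow_cache[five_tuple] = DEFAULT_NEXT_HOP
       return flow_cache[five_tuple]

   print(classify(("10.1.0.5", "203.0.113.7", 6, 40000, 80)))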
6.  External Domain Co-ordination

   It is likely that SFCs will be managed as a separate administrative
   domain from the networks from which they receive traffic and to
   which they send traffic.  If the connected networks use BGP for
   route distribution, the controller in the SFC domain can join the
   network domains by creating BGP peering sessions with routing
   systems or route reflectors in those network domains, or with local
   border routers that peer with the external domains, in order to
   exchange VPN routes.  While a controller can modify the route
   targets for the VRFs within its own SFC domain, it is likely not to
   have any control over the external networks with which it is
   peering.  Hence, the design does not assume that the RTs of
   external network domains can be modified by the controller.  The
   controller may, however, learn those RTs and use them in its
   modified route advertisements.

   In order to steer traffic from external network domains into an
   SFC, the controller will advertise a destination network's prefixes
   into the peering source network domain, with a BGP next hop and
   label associated with the SFC entry point, which may be on a
   routing system attached to the first SF instance.  This
   advertisement may be over regular MP-BGP/VPN peering, which assumes
   existing standard VPN routing/forwarding behavior on the network
   domain's routers (PEs/ASBRs).  The controller can learn routes to
   networks in external domains at the egress of an SFC, and can
   advertise routes to those networks into other external domains
   using the first ingress routing instance as the next hop, thus
   allowing dynamic steering through re-origination of routes.

   An operational benefit of this approach is that the SFC topology
   within a domain need not be exposed to other domains.
   Additionally, using non-specific routes inside an SFC, as described
   in Section 2.8.1, means that new networks can be attached to an SFC
   without needing to configure prefixes inside the chain.

   The controller will typically remove the destination network's RTs
   and replace them with the RTs of the source network while
   advertising the modified routes.  Alternatively, an external domain
   may be provisioned with an additional export-only RT and an
   import-only RT that the controller can use.

7.  Fine-Grained Steering Using BGP Flow-Spec

   When steering traffic from an external network domain into an SFC
   based on attributes of the packet flow, BGP Flow-Spec can be used
   as a signaling option.

   In this case, the controller can advertise one or more flow-spec
   routes into the entry VRF with the appropriate service-topology RT
   for the SFC.  Alternatively, it can use the procedures described in
   [RFC5575] or [flowspec-redirect-ip] on the gateway router to
   redirect traffic towards the first SF.

   If it is desired to steer specific flows from a network domain's
   existing routers, the controller can advertise the above flow-spec
   routes to the network domain's border routers or route reflectors.
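   The structure below is a non-normative, conceptual rendering (not a
   real BGP API) of such a flow-spec route: match criteria in the
   style of [RFC5575], with the action of attaching the service-
   topology RT of the chain's entry VRF.  All values are hypothetical.

   flowspec_route = {
       "match": {
           "destination-prefix": "203.0.113.0/24",
           "ip-protocol": 6,         # TCP
           "destination-port": 80,   # steer only web traffic
       },
       "actions": {
           # Redirect matching traffic into the SFC by attaching the
           # service-topology route target of the chain's entry VRF.
           "redirect-rt": "64512:900",
       },
   }

   def matches(route, five_tuple):
       src, dst, proto, sport, dport = five_tuple
       m = route["match"]
       return (proto == m["ip-protocol"]
               and dport == m["destination-port"]
               and dst.startswith("203.0.113."))  # crude prefix check

   print(matches(flowspec_route, ("10.1.0.5", "203.0.113.7", 6, 40000, 80)))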
8.  Controller Federation

   When SFCs are distributed geographically, or in very large-scale
   environments, there may be multiple SFC controllers present, and
   they may variously employ both of the SFC creation methods
   described in Section 2.6.  If there is a requirement for SFCs to
   span controller domains, there may be a requirement to exchange
   information between the controllers.  Again, a BGP session between
   the controllers can be used to exchange route information as
   described in the previous sections, and so allow such
   domain-spanning SFCs to be created.

9.  Coordination Between SF Instances and Controller Using BGP

   In many cases, the configuration of an SF instance determines its
   network behavior, e.g., when NAT pools are set up, or when an SSL
   gateway is configured with a set of enterprise IP addresses to use.
   In these cases, the addresses that will be used by the SFs need to
   be known in the networks connecting to them in order that traffic
   can be properly routed.  When SFCs are involved, this means that
   the controller has to be notified when such configuration changes
   are made in SF instances.  Sometimes, the changes will be made by
   end customers, and it is desirable that the controller adjust the
   SFC routing configuration automatically when the change is made,
   without customers needing to notify the service provider via a
   portal, for instance, and without requiring the development of
   integration modules linking the SF instances and the controller.

   One option for automatic notification, for SFs that support BGP, is
   for the connected forwarding system (a physical or virtual SFF) to
   also support BGP, and for the SF instances to be configured to peer
   with the SFF.  When changes are made to the configuration of an SF
   instance, for example such that the SF will accept packets from a
   particular network prefix on one of its interfaces, the SF instance
   will send a BGP route update to the SFF that it is connected to and
   has a BGP session with.  The controller can then adjust the routes
   along the SFCs to ensure that packets with destinations in the new
   prefix reach the reconfigured SF instance.

   BGP could also be used to signal from the controller to an SF
   instance that certain traffic should be sent out from a particular
   interface.  This could be used, for example, to direct suspect
   traffic to a security scrubbing center.

   Note that the SFF need not support a BGP stack itself; it can proxy
   BGP messages to the controller, which will support such a stack.

10.  BGP Extended Communities

10.1.  Route-Target Record

   Route-Target Record (RT-Record) is defined as a transitive BGP
   Extended Community that contains a Route-Target value representing
   one of the RTs that the route has been attached with previously,
   and which may no longer be attached to the route on subsequent
   re-advertisements (see Section 2.8.5).

   A Sub-Type code 0x13 is assigned in the three BGP Extended
   Community types: Two-Octet AS-Specific (0x00), IPv4-Address-
   Specific (0x01), and Four-Octet AS-Specific (0x02).  A Sub-Type
   code 0x0013 is also assigned in the BGP Transitive IPv6 Address-
   Specific Extended Community.

   The Extended Community is encoded as follows:

    0                   1                   2                   3
    0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   | 0x00,0x01,0x02| Sub-Type=0x13 |      Route-Target Value       |
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   |                   Route-Target Value contd.                   |
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

   The Type field of the BGP Route-Target Extended Community is copied
   into the Type field of the RT-Record Extended Community.

   The Value field (Global Administrator and Local Administrator) of
   the Route-Target Extended Community is copied into the Route-Target
   Value field of the RT-Record Extended Community.
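   A non-normative Python sketch of this encoding for a Two-Octet
   AS-Specific route target follows: the Type octet is preserved, the
   Sub-Type becomes 0x13, the six-octet value is copied, and the
   comparison masks out the Sub-Type octet, per the comparison rule
   described next.

   import struct

   RT_RECORD_SUBTYPE = 0x13

   def rt_record_from_rt(rt_community: bytes) -> bytes:
       """Build an RT-Record from an 8-octet route-target community."""
       ec_type, value = rt_community[0], rt_community[2:]
       return bytes([ec_type, RT_RECORD_SUBTYPE]) + value

   def same_rt_value(a: bytes, b: bytes) -> bool:
       """Compare Type and value, masking out the Sub-Type octet."""
       return a[0] == b[0] and a[2:] == b[2:]

   # Example: route target 64512:101 (Type 0x00, Sub-Type 0x02).
   rt = struct.pack("!BBHI", 0x00, 0x02, 64512, 101)
   rt_record = rt_record_from_rt(rt)
   assert rt_record[1] == 0x13 and rt_record[2:] == rt[2:]
   assert same_rt_value(rt_record, rt)  # re-origination would be barred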
   When comparing an RT-Record to a Route-Target, only the Type and
   Route-Target Value fields are used in the comparison.  The Sub-Type
   field is masked out.

   When a speaker re-originates a route that contains one or more RTs,
   it must add each of these RTs as an RT-Record extended community in
   the re-originated route.

   A speaker must not re-originate a route with an RT if this RT is
   already present as an RT-Record extended community.

10.2.  Consistent Hash Sort Order

   Consistent Hash Sort Order is an optional transitive Opaque BGP
   Extended Community with Sub-Type 0x14, defined as follows:

   Type Field:  The value of the high-order octet is 0x03 (transitive
      opaque).  The value of the low-order octet is assigned as 0x14
      by IANA from the Transitive Opaque Extended Community Sub-Types
      registry.

   Value Field:  The value field contains a Sort Order sub-field that
      indicates the relative order of this route among the ECMP set
      for the prefix, to be sorted in increasing order.  It is a
      32-bit unsigned integer.  The field is encoded as shown below:

             +------------------------------+
             |    Sort Order (4 octets)     |
             +------------------------------+
             |     Reserved (2 octets)      |
             +------------------------------+

10.3.  Load Balance Settings

   Load Balance Settings is an optional transitive Opaque BGP Extended
   Community with Sub-Type 0xaa, defined as follows:

   Type Field:  The value of the high-order octet is 0x03 (transitive
      opaque).  The value of the low-order octet is assigned as 0xaa
      by IANA from the Transitive Opaque Extended Community Sub-Types
      registry.

   Value Field:  The value field contains flags that indicate which
      values in an IP packet's 5-tuple should be used as inputs to the
      ECMP hash algorithm.  The field is encoded as shown below:

     0                   1                   2                   3
     0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    |   Type 0x03   | Sub-Type 0xaa |s d c p P R R R|R R R R R R R R|
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    |   Reserved    |B R R R R R R R|   Reserved    |   Reserved    |
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

      s: Use l3_source_address in the ECMP hash
      d: Use l3_destination_address in the ECMP hash
      c: Use l4_protocol in the ECMP hash
      p: Use l4_source_port in the ECMP hash
      P: Use l4_destination_port in the ECMP hash
      B: Use source_bias (instead of ECMP load balancing)
      R: Reserved
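   A non-normative sketch of packing these flags follows; it enables
   the full 5-tuple as hash input and places the bits as in the layout
   above.

   def load_balance_settings(l3_src=False, l3_dst=False, l4_proto=False,
                             l4_sport=False, l4_dport=False,
                             source_bias=False):
       """Return the 8-octet extended community as laid out above."""
       flags = ((l3_src   << 7) | (l3_dst   << 6) | (l4_proto << 5) |
                (l4_sport << 4) | (l4_dport << 3))       # s d c p P
       bias = source_bias << 7                           # B
       return bytes([0x03, 0xaa, flags, 0x00,            # type, sub-type
                     0x00, bias, 0x00, 0x00])            # reserved + B

   # Hash on the full 5-tuple:
   community = load_balance_settings(True, True, True, True, True)
   assert community[:2] == b"\x03\xaa" and community[2] == 0xf8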
11.  Summary and Conclusion

   The architecture for service function chains described in this
   document uses virtual networks implemented as overlays in order to
   create service function chains.  The virtual networks use
   standards-based encapsulation tunneling, such as MPLS over GRE/UDP
   or VXLAN, to transport packets into an SFC and between service
   function instances without routing in the user address space.  Two
   methods of installing the routes that form service chains are
   described.

   In environments with physical routers, a controller may operate in
   tandem with existing BGP route reflectors, and would contain the
   SFC topology model and the ability to install the local static
   interface routes to the SF instances.  In a virtualized
   environment, the controller can emulate route reflection
   internally, and can simply install the required routes directly,
   without advertisements occurring.

12.  Security Considerations

   The security considerations for SFCs are broadly similar to those
   concerning the data, control and management planes of any device
   placed in a network.  Details are out of scope for this document.

13.  IANA Considerations

   The new BGP Extended Communities defined in this document are
   assigned types and sub-types in the IANA registries for BGP
   Extended Communities, as described in Section 10.

14.  Acknowledgments

   The authors would like to thank D. Daino, D.R. Lopez, D. Bernier,
   W. Haeffner, A. Farrel, L. Fang, and N. So for their contributions
   to the earlier drafts.  The authors would also like to thank the
   following individuals for their review and feedback on the original
   proposals: E. Rosen, J. Guchard, P. Quinn, P. Bosch, D. Ward,
   A. Ganesan, N. Seth, G. Pildush and N. Bitar.  The authors also
   thank Wim Henderickx for his useful suggestions on several aspects
   of the draft.

15.  References

15.1.  Normative References

   [RFC3031]  Rosen, E., Viswanathan, A., and R. Callon,
              "Multiprotocol Label Switching Architecture", RFC 3031,
              DOI 10.17487/RFC3031, January 2001.

   [RFC4023]  Worster, T., Rekhter, Y., and E. Rosen, Ed.,
              "Encapsulating MPLS in IP or Generic Routing
              Encapsulation (GRE)", RFC 4023, DOI 10.17487/RFC4023,
              March 2005.

   [RFC4271]  Rekhter, Y., Ed., Li, T., Ed., and S. Hares, Ed., "A
              Border Gateway Protocol 4 (BGP-4)", RFC 4271,
              DOI 10.17487/RFC4271, January 2006.

   [RFC4360]  Sangli, S., Tappan, D., and Y. Rekhter, "BGP Extended
              Communities Attribute", RFC 4360, DOI 10.17487/RFC4360,
              February 2006.

   [RFC4364]  Rosen, E. and Y. Rekhter, "BGP/MPLS IP Virtual Private
              Networks (VPNs)", RFC 4364, DOI 10.17487/RFC4364,
              February 2006.

   [RFC4760]  Bates, T., Chandra, R., Katz, D., and Y. Rekhter,
              "Multiprotocol Extensions for BGP-4", RFC 4760,
              DOI 10.17487/RFC4760, January 2007.

   [RFC5575]  Marques, P., Sheth, N., Raszuk, R., Greene, B., Mauch,
              J., and D. McPherson, "Dissemination of Flow
              Specification Rules", RFC 5575, DOI 10.17487/RFC5575,
              August 2009.

   [RFC6241]  Enns, R., Ed., Bjorklund, M., Ed., Schoenwaelder, J.,
              Ed., and A. Bierman, Ed., "Network Configuration
              Protocol (NETCONF)", RFC 6241, DOI 10.17487/RFC6241,
              June 2011.

   [RFC7348]  Mahalingam, M., Dutt, D., Duda, K., Agarwal, P.,
              Kreeger, L., Sridhar, T., Bursell, M., and C. Wright,
              "Virtual eXtensible Local Area Network (VXLAN): A
              Framework for Overlaying Virtualized Layer 2 Networks
              over Layer 3 Networks", RFC 7348, DOI 10.17487/RFC7348,
              August 2014.

   [RFC7510]  Xu, X., Sheth, N., Yong, L., Callon, R., and D. Black,
              "Encapsulating MPLS in UDP", RFC 7510,
              DOI 10.17487/RFC7510, April 2015.

   [RFC7665]  Halpern, J., Ed. and C. Pignataro, Ed., "Service
              Function Chaining (SFC) Architecture", RFC 7665,
              DOI 10.17487/RFC7665, October 2015.

   [RFC8365]  Sajassi, A., Ed., Drake, J., Ed., Bitar, N., Shekhar,
              R., Uttaro, J., and W. Henderickx, "A Network
              Virtualization Overlay Solution Using Ethernet VPN
              (EVPN)", RFC 8365, DOI 10.17487/RFC8365, March 2018.
15.2.  Informational References

   [consistent-hash]
              Karger, D., Lehman, E., Leighton, T., Panigrahy, R.,
              Levine, M., and D. Lewin, "Consistent Hashing and Random
              Trees: Distributed Caching Protocols for Relieving Hot
              Spots on the World Wide Web", 1997.

   [draft-ietf-idr-link-bandwidth]
              Mohapatra, P. and R. Fernando, "BGP Link Bandwidth
              Extended Community", March 2018.

   [draft-malhotra-bess-evpn-unequal-lb]
              Malhotra, N., Sajassi, A., Rabadan, J., Drake, J.,
              Lingala, A., and S. Thoria, "Weighted Multi-Path
              Procedures for EVPN All-Active Multi-Homing", June 2018.

   [flowspec-redirect-ip]
              Uttaro, J., Haas, J., Texier, M., Karch, A., Sreekanth,
              A., Ray, S., Simpson, A., and W. Henderickx, "BGP Flow-
              Spec Redirect to IP Action", February 2015.

   [idr-tunnel-encaps]
              Rosen, E., Patel, K., and G. van de Velde, "The BGP
              Tunnel Encapsulation Attribute", February 2018.

   [NFVE2E]   ETSI, "Network Functions Virtualisation (NFV):
              Architectural Framework", 2013.

Authors' Addresses

   Rex Fernando
   Cisco Systems
   170 W. Tasman Drive
   San Jose, CA  95134

   Email: rex@cisco.com

   Stuart Mackie
   Juniper Networks
   1133 Innovation Way
   Sunnyvale, CA  94089

   Email: wsmackie@juniper.net

   Dhananjaya Rao
   Cisco Systems
   170 W. Tasman Drive
   San Jose, CA  95134

   Email: dhrao@cisco.com

   Bruno Rijsman

   Email: brunorijsman@gmail.com

   Maria Napierala
   ATT Labs
   200 Laurel Avenue
   Middletown, NJ  07748

   Email: mnapierala@att.com

   Thomas Morin
   Orange
   2, Avenue Pierre Marzin
   Lannion, France  22307

   Email: thomas.morin@orange.com