2 BGP Enabled Services (bess) R. Fernando 3 INTERNET-DRAFT Cisco 4 Intended status: Standards Track S. Mackie 5 Expires: May 4, 2017 Juniper 6 D. Rao 7 Cisco 8 B. Rijsman 9 Juniper 10 M. Napierala 11 AT&T 12 T. Morin 13 Orange 14 October 31, 2016 16 Service Chaining using Virtual Networks with BGP VPNs 17 draft-ietf-bess-service-chaining-02 19 Abstract 21 This document describes how service function chains (SFC) can be 22 applied to traffic flows using routing in a virtual (overlay) network 23 to steer traffic between service nodes. Chains can include services 24 running in routers, on physical appliances or in virtual machines. 25 Service chains have applicability at the subscriber edge, business 26 edge and in multi-tenant datacenters. The routing function into SFCs 27 and between service functions within an SFC can be performed by 28 physical devices (routers), be virtualized inside hypervisors, or run 29 as part of a host OS. 31 A BGP control plane for route distribution is used to create virtual 32 networks implemented using IP MPLS, VXLAN or other suitable 33 encapsulation, where the routes within the virtual networks cause 34 traffic to flow through a sequence of service nodes that apply packet 35 processing functions to the flows.
37 Two techniques are described: in one the service chain is implemented 38 as a sequence of distinct VPNs between sets of service nodes that 39 apply each service function; in the other, the routes within a VPN 40 are modified through the use of special route targets and modified 41 next-hop resolution to achieve the desired result. 43 In both techniques, service chains can be created by manual 44 configuration of routes and route targets in routing systems, or 45 through the use of a controller which contains a topological model of 46 the desired service chains. 48 This document also contains discussion of load balancing between 49 network functions, symmetric forward and reverse paths when stateful 50 services are involved, and use of classifiers to direct traffic into 51 a service chain. 53 Status of this Memo 55 This Internet-Draft is submitted in full conformance with the 56 provisions of BCP 78 and BCP 79. 58 Internet-Drafts are working documents of the Internet Engineering 59 Task Force (IETF), its areas, and its working groups. Note that 60 other groups may also distribute working documents as Internet-Drafts. 63 Internet-Drafts are draft documents valid for a maximum of six months 64 and may be updated, replaced, or obsoleted by other documents at any 65 time. It is inappropriate to use Internet-Drafts as reference 66 material or to cite them other than as "work in progress." 68 The list of current Internet-Drafts can be accessed at 69 http://www.ietf.org/ietf/1id-abstracts.txt 71 The list of Internet-Draft Shadow Directories can be accessed at 72 http://www.ietf.org/shadow.html 74 This Internet-Draft will expire on May 4, 2017. 76 Copyright Notice and License Notice 78 Copyright (c) 2016 IETF Trust and the persons identified as the 79 document authors. All rights reserved. 81 This document is subject to BCP 78 and the IETF Trust's Legal 82 Provisions Relating to IETF Documents 83 (http://trustee.ietf.org/license-info) in effect on the date of 84 publication of this document. Please review these documents 85 carefully, as they describe your rights and restrictions with respect 86 to this document. Code Components extracted from this document must 87 include Simplified BSD License text as described in Section 4.e of 88 the Trust Legal Provisions and are provided without warranty as 89 described in the Simplified BSD License.
91 Table of Contents 93 1 Introduction 94 1.1 Terminology 95 2 Service Function Chain Architecture Using Virtual Networking 96 2.1 High Level Architecture 97 2.2 Service Function Chain Logical Model 98 2.3 Service Function Implemented in a Set of SF Instances 99 2.4 SF Instance Connections to VRFs 100 2.4.1 SF Instance in Physical Appliance 101 2.4.2 SF Instance in a Virtualized Environment 102 2.5 Encapsulation Tunneling for Transport 103 2.6 SFC Creation Procedure 104 2.6.1 SFC Provisioning Using Sequential VPNs 105 2.6.2 Modified-Route SFC Creation 106 2.7 Controller Function 107 2.8 Variations on Setting Prefixes in an SFC 108 2.8.1 Using a Default Route 109 2.8.2 Using a Default Route and a Large Prefix 110 2.8.3 Disaggregated Gateway Routers 111 2.8.4 Optimizing VRF usage 112 2.8.5 Dynamic Entry and Exit Signaling 113 2.8.6 Dynamic Re-Advertisements in Intermediate systems 114 2.9 Layer-2 Virtual Networks and Service Functions 115 2.10 Header Transforming Service Functions 116 3 Load Balancing Along a Service Function Chain 117 3.1 SF Instances Connected to Separate VRFs 118 3.2 SF Instances Connected to the Same VRF 119 3.3 Combination of Egress and Ingress VRF Load Balancing 120 3.4 Forward and Reverse Flow Load Balancing 121 3.4.1 Issues with Equal Cost Multi-Path Routing 122 3.4.2 Modified ECMP with Consistent Hash 123 3.4.3 ECMP with Flow Table 124 3.4.4 Dealing with different hash algorithms in an SFC 125 4 Steering into SFCs Using a Classifier 126 5 External Domain Co-ordination 127 6 Fine-grained steering using BGP Flow-Spec 128 7 Controller Federation 129 8 Coordination Between SF Instances and Controller using BGP 130 9 BGP Extended Communities 131 10 Summary and Conclusion 132 11 Security Considerations 133 12 IANA Considerations 134 13 Informative References 136 14 Acknowledgments 138 1 Introduction 140 The purpose of networks is to allow computing systems to communicate 141 with each other. Requests are usually made from the client or 142 customer side of a network, and responses are generated by 143 applications residing in a datacenter. Over time, the network between 144 the client and the application has become more complex, and traffic 145 between the client and the application is acted on by intermediate 146 systems that apply network services.
Some of these activities, like 147 firewall filtering, subscriber attachment and network address 148 translation are generally carried out in network devices along the 149 traffic path, while others are carried out by dedicated appliances, 150 such as media proxy and deep packet inspection (DPI). Deployment of 151 these in-network services is complex, time-consuming and costly, 152 since they require configuration of devices with vendor-specific 153 operating systems, sometimes with co-processing cards, or deployment 154 of physical devices in the network, which requires cabling and 155 configuration of the devices that they connect to. Additionally, 156 other devices in the network need to be configured to ensure that 157 traffic is correctly steered through the systems that services are 158 running on. 160 The current mode of operations does not easily allow common 161 operational processes to be applied to the lifecycle of services in 162 the network, or for steering of traffic through them. 164 The recent emergence of Network Functions Virtualization (NFV) 165 [NFVE2E] to provide a standard deployment model for network services 166 as software appliances, combined with Software Defined Networking 167 (SDN) for more dynamic traffic steering, can provide foundational 168 elements that will allow network services to be deployed and managed 169 far more efficiently and with more agility than is possible today. 171 This document describes how the combination of several existing 172 technologies can be used to create chains of functions, while 173 preserving the requirements of scale, performance and reliability for 174 service provider networks. The technologies employed are: 176 o Traffic flow between service functions described by routing and 177 network policies rather than by static physical or logical 178 connectivity 180 o Packet header encapsulation in order to create virtual private 181 networks using network overlays 183 o VRFs on both physical devices and in hypervisors to implement 184 forwarding policies that are specific to each virtual network 186 o Optional use of a controller to calculate routes to be installed 187 in routing systems to form a service chain. The controller uses a 188 topological model that stores service function instance 189 connectivity to network devices and intended connectivity between 190 service functions. 192 o MPLS or other labeling to facilitate identification of the next 193 interface to send packets to in a service function chain 195 o BGP or BGP-style signaling to distribute routes in order to create 196 service function chains 198 o Distributed load balancing between service functions performed in 199 the VRFs that service function instances connect to. 201 Virtualized environments can be supported without necessarily running 202 BGP or MPLS natively. Messaging protocols such as NC/YANG, XMPP or 203 OpenFlow may be used to signal forwarding information. Encapsulation 204 mechanisms such as VXLAN or GRE may be used for overlay transport. 205 The term 'BGP-style', above, refers to this type of signaling. 207 Traffic can be directed into service function chains using IP routing 208 at each end of the service function chain, or be directed into the 209 chain by a classifier function that can determine which service chain 210 a traffic flow should pass through based on deep packet inspection 211 (DPI) and/or subscriber identity.
213 The techniques can support an evolution from services implemented in 214 physical devices attached to physical forwarding systems (routers) to 215 fully virtualized implementations as well as intermediate hybrid 216 implementations. 218 1.1 Terminology 220 This document uses the following acronyms and terms. 222 Terms Meaning 223 ----- ----------------------------------------------- 224 AS Autonomous System 225 ASBR Autonomous System Border Router 227 FW Firewall 228 I2RS Interface to the Routing System 229 L3VPN Layer 3 VPN 230 LB Load Balancer 231 NLRI Network Layer Reachability Information [RFC4271] 232 P Provider backbone router 233 proxy-arp proxy-Address Resolution Protocol 234 RR Route Reflector 235 RT Route Target 236 SDN Software Defined Network 237 vCE virtual Customer Edge router [I-D.fang-l3vpn-virtual-ce] 239 vFW virtual Firewall 240 vLB virtual Load Balancer 241 VM Virtual Machine 242 vPC virtual Private Cloud 243 vPE virtual Provider Edge router [I-D.fang-l3vpn-virtual-pe] 245 VPN Virtual Private Network 246 VRF VPN Routing and Forwarding table [RFC4364] 247 vRR virtual Route Reflector 249 This document follows some of the terminology used in [sfc-arch] and 250 adds some new terminology: 252 Network Service: An externally visible service offered by a network 253 operator; a service may consist of a single service function or a 254 composite built from several service functions executed in one or 255 more pre-determined sequences and delivered by software executing 256 in physical or virtual devices. 258 Classification: Customer/network/service policy used to identify and 259 select traffic flow(s) requiring certain outbound forwarding 260 actions, in particular, to direct specific traffic flows into the 261 ingress of a particular service function chain, or to cause 262 branching within a service function chain. 264 Virtual Network: A logical overlay network built using virtual links 265 or packet encapsulation, over an existing network (the underlay). 267 Service Function Chain (SFC): A service function chain defines an 268 ordered set of service functions that must be applied to packets 269 and/or frames selected as a result of classification. An SFC may be 270 either a linear chain or a complex service graph with multiple 271 branches. The term 'Service Chain' is often used in place of 272 'Service Function Chain'. 273 SFC Set: The pair of SFCs through which the forward and reverse 274 directions of a given classified flow will pass. 276 Service Function (SF): A logical function that is applied to 277 packets. A service function can act at the network layer or other 278 OSI layers. A service function can be embedded in one or more 279 physical network elements, or can be implemented in one or more 280 software instances running on physical or virtual hosts. One or 281 multiple service functions can be embedded in the same network 282 element or run on the same host. Multiple instances of a service 283 function can be enabled in the same administrative domain. For 284 simplicity, we also refer to a 'Service Function' simply as a 285 'Service'. 287 A non-exhaustive list of services includes: firewalls, DDOS 288 protection, anti-malware/anti-virus systems, WAN and application 289 acceleration, Deep Packet Inspection (DPI), server load balancers, 290 network address translation, HTTP Header Enrichment functions, 291 video optimization, TCP optimization, etc.
293 SF Instance: An instance of software that implements the packet 294 processing of a service function. 296 SF Instance Set: A group of SF instances that, in parallel, implement 297 a service function in an SFC. 299 Routing System: A hardware or software system that performs layer 3 300 routing and/or forwarding functions. The term includes physical 301 routers as well as hypervisor or Host OS implementations of the 302 forwarding plane of a conventional router. 304 Gateway: A routing system attached to the source or destination 305 network that peers with the controller, or with the routing system 306 at one end of an SFC. A source network gateway directs traffic from 307 the source network into an SFC, while a destination network gateway 308 distributes traffic towards destinations. The routing systems at 309 each end of an SFC can themselves act as gateways and, in a 310 bidirectional SF instance set, gateways can act in both directions. 312 VRF: A subsystem within a routing system as defined in [RFC4364] that 313 contains private routing and forwarding tables and has physical 314 and/or logical interfaces associated with it. In the case of 315 hypervisor/Host OS implementations, the term refers only to the 316 forwarding function of a VRF, and this will be referred to as a 317 'VPN forwarder.' 319 Ingress VRF: A VRF containing an ingress interface of an SF instance. 321 Egress VRF: A VRF containing an egress interface of an SF instance. 323 Note that in this document the terms 'ingress' and 'egress' are used 324 with respect to SF instances rather than the tunnels that connect SF 325 instances. This differs from the usage in VPN literature in general. 327 Entry VRF: A VRF through which traffic enters the SFC from the source 328 network. This VRF may be used to advertise the destination 329 network's routes to the source network. It could be placed on a 330 gateway router or be collocated with the first ingress VRF. 332 Exit VRF: A VRF through which traffic exits the SFC into the 333 destination network. This VRF contains the routes from the 334 destination network and could be located on a gateway router. 335 Alternatively, the egress VRF attached to the last SF instance may 336 also function as the exit VRF. 338 2 Service Function Chain Architecture Using Virtual Networking 340 The techniques described in this document use virtual networks to 341 implement service function chains. Service function chains can be 342 implemented on devices that support existing MPLS VPN and BGP 343 standards [RFC4364, RFC4271, RFC4760], as well as other 344 encapsulations, such as VXLAN [RFC7348]. Similarly, equivalent 345 control plane protocols such as BGP-EVPN with type-2 and type-5 route 346 types can also be used where supported. The set of techniques 347 described in this document represents one implementation approach to 348 realize the SFC architecture described in [sfc-arch]. 350 The following sections detail the building blocks of the SFC 351 architecture, and outline the processes of route installation and 352 subsequent route exchange to create an SFC. 354 2.1 High Level Architecture 356 Service function chains can be deployed with or without a classifier. 357 Use cases where SFCs may be deployed without a classifier include 358 multi-tenant data centers, private and public cloud and virtual CPE 359 for business services. Classifiers will primarily be used in mobile 360 and wireline subscriber edge use cases. Use of a classifier is 361 discussed in Section 4.
363 A high-level architecture diagram of an SFC without a classifier, 364 where traffic is routed into and out of the SFC, is shown in Figure 365 1, below. An optional controller is shown that contains a topological 366 model of the SFC and configures the network resources to 367 implement the SFC. 369 +-------------------------+ 370 |--- Data plane connection| 371 |=== Encapsulation tunnel | 372 | O VRF | 373 +-------------------------+ 375 Control +------------------------------------------------+ 376 Plane | Controller | 377 ....... +-+------------+----------+----------+---------+-+ 378 | | | | | 379 Service | +---+ | +---+ | +---+ | | 380 Plane | |SF1| | |SF2| | |SF3| | | 381 | +---+ | +---+ | +---+ | | 382 ....... / | | / | | / | | / / 383 +-----+ +--|-|--+ +--|-|--+ +--|-|--+ +-----+ 384 | | | | | | | | | | | | | | | | 385 Net-A-->---O==========O O========O O========O O=========O---->Net-B 386 | | | | | | | | | | 387 Data | R-A | | R-1 | | R-2 | | R-3 | | R-B | 388 Plane +-----+ +-------+ +-------+ +-------+ +-----+ 390 ^ ^ ^ ^ 391 | | | | 392 | Ingress Egress | 393 | VRF VRF | 394 SFC Entry SFC Exit 395 VRF VRF 397 Figure 1 - High level SFC Architecture 399 Traffic from Network-A destined for Network-B will pass through the 400 SFC composed of SF instances, SF1, SF2 and SF3. Routing system R-A 401 contains a VRF (shown as 'O' symbol) that is the SFC entry point. 402 This VRF will advertise a route to reach Network-B into Network-A, 403 causing any traffic from a source in Network-A with a destination in 404 Network-B to arrive in this VRF. The forwarding table in the VRF in 405 R-A will direct traffic destined for Network-B into an encapsulation 406 tunnel with destination R-1 and a label that identifies the ingress 407 (left) interface of SF1 that R-1 should send the packets out on. The 408 packets are processed by service instance SF1 and arrive in the 409 egress (right) VRF in R-1. The forwarding entries in the egress VRF 410 direct traffic to the next ingress VRF using encapsulation tunneling. 411 The process is repeated for each service instance in the SFC until 412 packets arrive at the SFC exit VRF (in R-B). This VRF is peered with 413 Network-B and routes packets towards their destinations in the user 414 data plane. In this example, routing systems R-A and R-B are gateway 415 routing systems. 417 In the example, each pair of ingress and egress VRFs is configured 418 in separate routing systems, but such pairs could be collocated in 419 the same routing system, and it is possible for the ingress and 420 egress VRFs for a given SF instance to be in different routing 421 systems. The SFC entry and exit VRFs can be collocated in the same 422 routing system, and the service instances can be local or remote from 423 either or both of the routing systems containing the entry and exit 424 VRFs, and from each other. It is also possible that the ingress and 425 egress VRFs are implemented using alternative mechanisms. 427 The controller is responsible for configuring the VRFs in each 428 routing system, installing the routes in each of the VRFs to 429 implement the SFC, and, in the case of virtualized services, may 430 instantiate the service instances. 432 2.2 Service Function Chain Logical Model 434 A service function chain is a set of logically connected service 435 functions through which traffic can flow. Each egress interface of 436 one service function is logically connected to an ingress interface 437 of the next service function.
439 +------+ +------+ +------+ 440 Network-A-->| SF-1 |-->| SF-2 |-->| SF-3 |-->Network-B 441 +------+ +------+ +------+ 443 Figure 2 - A Chain of Service Functions 445 In Figure 2, above, a service function chain has been created that 446 connects Network-A to Network-B, such that traffic from a host in 447 Network-A to a host in Network-B will traverse the service function 448 chain. 450 As defined in [sfc-arch], a service function chain can be uni- 451 directional or bi-directional. In this document, in order to allow 452 for the possibility that the forward and reverse paths may not be 453 symmetrical, SFCs are defined as uni-directional, and the term 'SFC 454 set' is used to refer to a pair of forward and reverse direction SFCs 455 for some set of routed or classified traffic. 457 2.3 Service Function Implemented in a Set of SF Instances 459 A service function instance is a software system that acts on packets 460 that arrive on an ingress interface of that software system. Service 461 function instances may run on a physical appliance or in a virtual 462 machine. A service function instance may be transparent at layer 2 463 and/or layer 3, and may support branching across multiple egress 464 interfaces and aggregation across ingress interfaces. For 465 simplicity, the examples in this document have a single ingress and a 466 single egress interface. 468 Each service function in a chain can be implemented by a single 469 service function instance, or by a set of instances in order to 470 provide scale and resilience. 472 +------------------------------------------------------------------+ 473 | Logical Service Functions Connected in a Chain | 474 | | 475 | +--------+ +--------+ | 476 | Net-A--->| SF-1 |----------->| SF-2 |--->Net-B | 477 | +--------+ +--------+ | 478 | | 479 +------------------------------------------------------------------+ 480 | Service Function Instances Connected by Virtual Networks | 481 | ...... ...... | 482 | : : +------+ : : | 483 | : :-->|SFI-11|-->: : ...... | 484 | : : +------+ : : +------+ : : | 485 | : : : :-->|SFI-21|-->: : | 486 | : : +------+ : : +------+ : : | 487 | A->: VN-1 :-->|SFI-12|-->: VN-2 : : VN-3 :-->B | 488 | : : +------+ : : +------+ : : | 489 | : : : :-->|SFI-22|-->: : | 490 | : : +------+ : : +------+ : : | 491 | : :-->|SFI-13|-->: : '''''' | 492 | : : +------+ : : | 493 | '''''' '''''' | 494 +------------------------------------------------------------------+ 496 Figure 3 - Service Functions Are Composed of SF Instances Connected 497 Via Virtual Networks 499 In Figure 3, service function SF-1 is implemented in three service 500 function instances, SFI-11, SFI-12, and SFI-13. Service function 501 SF-2 is implemented in two SF instances. The service function instances 502 are connected to the next service function in the chain using a 503 virtual network, VN-2. Additionally, a virtual network (VN-1) is used 504 to enter the SFC and another (VN-3) is used at the exit. 506 The logical connection between two service functions is implemented 507 using a virtual network that contains egress interfaces for instances 508 of one service function, and ingress interfaces of instances of the 509 next service function. Traffic is directed across the virtual network 510 between the two sets of service function instances using layer 3 511 forwarding (e.g. an MPLS VPN) or layer 2 forwarding (e.g. a VXLAN).
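The relationships in Figure 3 can be summarized in a small data model. The following sketch (in Python; the names and addresses are purely illustrative assumptions, not part of this document) models an SFC as an ordered list of service functions, each implemented by a set of SF instances, with one virtual network at entry, one between each adjacent pair of service functions, and one at exit:

   # Illustrative sketch only: a minimal model of the SFC in Figure 3.
   from dataclasses import dataclass, field
   from typing import List

   @dataclass
   class SFInstance:
       name: str          # e.g. "SFI-11"
       ingress_ip: str    # address of the instance's ingress interface

   @dataclass
   class ServiceFunction:
       name: str                                     # e.g. "SF-1"
       instances: List[SFInstance] = field(default_factory=list)

   @dataclass
   class ServiceFunctionChain:
       name: str
       functions: List[ServiceFunction]

       def virtual_networks(self) -> List[str]:
           # One VN at entry, one between each adjacent pair of service
           # functions, and one at exit: N functions -> N+1 networks.
           return [f"VN-{i + 1}" for i in range(len(self.functions) + 1)]

   chain = ServiceFunctionChain("A-to-B", [
       ServiceFunction("SF-1", [SFInstance("SFI-11", "10.0.1.11"),
                                SFInstance("SFI-12", "10.0.1.12"),
                                SFInstance("SFI-13", "10.0.1.13")]),
       ServiceFunction("SF-2", [SFInstance("SFI-21", "10.0.2.21"),
                                SFInstance("SFI-22", "10.0.2.22")]),
   ])
   assert chain.virtual_networks() == ["VN-1", "VN-2", "VN-3"]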
513 The virtual networks could be described as "directed half-mesh", in 514 that the egress interface of each SF instance of one service function 515 can reach any ingress interface of the SF instances of the connected 516 service function. 518 Details on how routing across virtual networks is achieved, and 519 requirements on load balancing across ingress interfaces are 520 discussed in later sections of this document. 522 2.4 SF Instance Connections to VRFs 524 SF instances can be deployed as software running on physical 525 appliances, or in virtual machines running on a hypervisor. These two 526 types are described in more detail in the following sections. 528 2.4.1 SF Instance in Physical Appliance 530 The case of an SF instance running on a physical appliance is shown in 531 Figure 4, below. 533 +---------------------------------+ 534 | | 535 | +-----------------------------+ | 536 | | Service Function Instance | | 537 | +-------^-------------|-------+ | 538 | | Host | | 539 +---------|-------------|---------+ 540 | | 541 +------ |-------------|-------+ 542 | | | | 543 | +----|----+ +-----v----+ | 544 ---------+ Ingress | | Egress +--------- 545 ---------> VRF | | VRF ----------> 546 ---------+ | | +--------- 547 | +---------+ +----------+ | 548 | Routing System | 549 +-----------------------------+ 551 Figure 4 - Ingress and Egress VRFs for a Physical Routing System and 552 Physical SF Instance 554 The routing system is a physical device and the service function 555 instance is implemented as software running in a physical appliance 556 (host) connected to it. The connection between the appliance and 557 the routing system may use physical or logical interfaces. 558 Transport between VRFs on different routing systems that are 559 connected to other SF instances in an SFC is via encapsulation 560 tunnels, such as MPLS over GRE, or VXLAN. 562 2.4.2 SF Instance in a Virtualized Environment 564 In virtualized environments, a routing system with VRFs that act as 565 VPN forwarders is resident in the hypervisor/Host OS, and is 566 co-resident in the host with one or more SF instances that run in 567 virtual machines. The egress VPN forwarder performs tunnel 568 encapsulation to send packets to other physical or virtual routing 569 systems with attached SF instances to form an SFC. The tunneled 570 packets are sent through the physical interfaces of the host to the 571 other hosts or physical routers. This is illustrated in Figure 5, 572 below. 574 +-------------------------------------+ 575 | +-----------------------------+ | 576 | | Service Function Instance | | 577 | +-------^-------------|-------+ | 578 | | | | 579 | +---------|-------------|---------+ | 580 | | +-------|-------------|-------+ | | 581 | | | | | | | | 582 | | | +----|----+ +-----v----+ | | | 583 ------------+ Ingress | | Egress +----------- 584 ------------> VRF | | VRF ------------> 585 ------------+ | | +----------- 586 | | | +---------+ +----------+ | | | 587 | | | Routing System | | | 588 | | +-----------------------------+ | | 589 | | Hypervisor or Host OS | | 590 | +---------------------------------+ | 591 | Host | 592 +-------------------------------------+ 594 Figure 5 - Ingress and Egress VRFs for a Virtual Routing System and 595 Virtualized SF Instance 597 When more than one instance of an SF is running on a hypervisor, they 598 can be connected to the same VRF for scale out of an SF within an 599 SFC.
601 The routing mechanisms in the VRFs into and between service function 602 instances, and the encapsulation tunneling between routing systems, 603 are identical in the physical and virtual implementations of SFCs and 604 routing systems described in this document. Physical and virtual 605 service functions can be mixed as needed, in different combinations 606 along a chain. 608 The SF instances are attached to the routing systems via physical, 609 virtual or logical (e.g., 802.1Q) interfaces, and are assumed to 610 perform basic L3 or L2 forwarding. 612 A single SF instance can be part of multiple service chains. In this 613 case, the SF instance will have dedicated interfaces (typically 614 logical) and forwarding contexts associated with each service chain. 616 2.5 Encapsulation Tunneling for Transport 618 Encapsulation tunneling is used to transport packets between SF 619 instances in the chain and, when a classifier is not used, from the 620 originating network into the SFC and from the SFC into the 621 destination network. 623 The tunnels can be MPLS over GRE [RFC4023], MPLS over UDP [RFC7510], 624 MPLS over MPLS [RFC3031], VXLAN [RFC7348], or 625 other suitable encapsulation methods. 627 Tunneling capabilities may be enabled in each routing system as part 628 of a base configuration or may be configured by the controller. 629 Tunnel encapsulations may be programmed by the controller or signaled 630 using BGP. The encapsulation to be used for a given route is signaled 631 in BGP using the procedures described in 632 [draft-rosen-idr-tunnel-encaps], i.e., typically relying on the BGP Tunnel 633 Encapsulation Extended Community. 635 2.6 SFC Creation Procedure 637 This section describes how service chains are created using two 638 methods: 640 o Sequential VPNs - where a conventional VPN is created between each 641 set of SF instances to create the links in the SFC 643 o Route Modification - where each routing system modifies advertised 644 routes that it receives, to realize the links in an SFC on the 645 basis of a special service topology RT and a route-policy that 646 describes the service chain logical topology 648 In both cases the controller, when present, is responsible for 649 creating ingress and egress VRFs, configuring the interfaces 650 connected to SF instances in each VRF, and allocating and configuring 651 import and export RTs for each VRF. Additionally, in the second 652 method, the controller also sends the route-policy containing the 653 service chain logical topology to each routing system. If a 654 controller is not used, these procedures must be performed 655 manually or through scripting. 657 The source and destination networks' prefixes can be configured in 658 the controller, or may be automatically learned through peering 659 between the controller and each network's gateway. This is further 660 described in Section 2.8.5 and Section 5. 662 The following sub-sections describe how RT configuration, local route 663 installation and route distribution occur in each of the methods. 665 It should be noted that depending on the capabilities of the routing 666 systems, a controller can use one or more techniques to realize 667 forwarding along the service chain, ranging from fully centralized to 668 fully distributed. The goal of describing the following two methods 669 is to illustrate the broad approaches and to provide a base for various 670 optimization options.
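Whichever method is used, the unit of signaling between routing systems is a VPN route. The sketch below shows, with purely hypothetical values, the information such a route carries for one hop of an SFC; the label identifies the SF instance interface on the advertising routing system, and the encapsulation is signaled as described in Section 2.5:

   # Illustrative sketch only; all values are hypothetical.
   vpn_route = {
       "rd": "64512:1001",            # route distinguisher, unique per VRF
       "prefix": "198.51.100.0/24",   # destination network (e.g. Network-B)
       "next_hop": "203.0.113.1",     # routing system hosting the ingress VRF
       "label": 2001,                 # identifies the SF instance interface
       "route_targets": ["target:64512:101"],
       "encapsulation": "MPLS-over-GRE",   # or MPLS-over-UDP, VXLAN, ...
   }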
672 Interoperability between a controller implementing one method and a 673 controller implementing a different method is achieved by relying on 674 the techniques described in Section 5 and Section 8, which describe 675 the use of BGP-style service chaining within domains that are 676 interconnected using standard BGP VPN route exchanges. 678 2.6.1 SFC Provisioning Using Sequential VPNs 680 The task of the controller in this method of SFC provisioning is to 681 create a set of VPNs that carry traffic to the destination network 682 through instances of each service function in turn. This is achieved 683 by allocating and configuring RTs such that the egress VRFs of one 684 set of SF instances import an RT that is an export RT for the ingress 685 VRFs of the next, logically connected, set of SF instances. 687 The process of SFC creation is as follows: 689 1. Controller creates a VRF in each routing system that is 690 connected to a service instance that will be used in the SFC. 692 2. Controller configures each VRF to contain the logical 693 interface that connects to an SF instance. 695 3. Controller implements route target import and export 696 policies in the VRFs using the same route targets for the 697 egress VRFs of a service function and the ingress VRFs of 698 the next logically connected service function in the SFC. 700 4. Controller installs a static route in each ingress VRF whose 701 next hop is the interface that an SF instance is connected 702 to. The prefix for the route is the destination network to 703 be reached by passing through the SFC. The following 704 sections describe variations that can be used. 706 5. Routing systems advertise the static routes via BGP as VPN 707 routes, with the next hop being the IP address of the router, 708 with an encapsulation specified and a label that identifies 709 the service instance interface. 711 6. Routing systems containing VRFs with matching route targets 712 receive the updates. 714 7. Routes are installed in egress VRFs with matching import 715 targets. The egress VRFs of each SF instance will now 716 contain VPN routes to one or more routers containing ingress 717 VRFs for SF instances of the next service function in the 718 SFC. 720 Routes to the destination network via the first set of SF instances 721 are advertised into the source network, and the egress VRFs of the 722 last SF instance set have routes into the destination network. 724 As discussed further in Section 3, egress VRFs can load balance 725 across the multiple next hops advertised from the next set of ingress 726 VRFs. 728 2.6.2 Modified-Route SFC Creation 730 In this method of SFC configuration, all the VRFs connected to SF 731 instances for a given SFC are configured with the same import and export 732 RT, so they form a VPN-connected mesh between the SF instance 733 interfaces. This is termed the 'Service VPN'. A route is configured 734 or learnt in each VRF with the destination being the IP address of a 735 connected SF instance via an interface configured in the VRF. The 736 interface may be a physical or logical interface. The routing system 737 that hosts such a VRF advertises a VPN route for each locally 738 connected SF instance, with a forwarding label that enables it to 739 forward incoming traffic from other routing systems to the connected 740 SF instance. The VPN routes may be advertised via an RR or the 741 controller, which sends these updates to all the other routing 742 systems that have VRFs with the service VPN RT.
At this point all the 743 VRFs have a route to reach every SF instance. The same virtual IP 744 address may be used for each SF instance in a set, enabling 745 load-balancing among multiple SF instances in the set. 747 The controller builds a route-policy for the routing systems in the 748 VPN that describes the logical topology of each service chain they 749 belong to. The route-policy contains entries in the form of a 750 tuple for each service chain: 752 {Service-topology-name, Service-topology-RT, Service-node-sequence} 755 where Service-node-sequence is simply an ordered list of the service 756 function interface IP addresses that are in the chain. 758 Every service function chain has a single unique Service-topology-RT 759 that is allocated and provisioned on all participating routing 760 systems in the relevant VRFs. 762 The VRF in the routing system that connects to the destination 763 network (i.e. the exit VRF) is configured to attach the 764 Service-topology-RT to exported routes, and the VRF connected to the source 765 network (i.e. the entry VRF) will import routes using the 766 Service-topology-RT. The controller may also be used to originate the 767 routes with the Service-topology-RT attached. 769 The route-policy may be described in a variety of formats and 770 installed on the routing system using a suitable mechanism. For 771 instance, the policy may be defined in YANG and provisioned using 772 Netconf. 774 Using Figure 1 for reference, when the gateway R-B advertises a VPN 775 route to Network-B, it attaches the Service-topology-RT. BGP route 776 updates are sent to all the routing systems in the service VPN. The 777 routing systems perform a modified set of actions for next-hop 778 resolution and route installation in the ingress VRFs compared to 779 normal BGP VPN behavior, but no changes are 780 required in the operation of the BGP protocol itself. The 781 modification of behavior in the routing systems allows the automatic 782 and constrained flow of traffic through the service chain. 784 Each routing system in the service VPN will process the VPN route to 785 Network-B via R-B as follows: 787 1. If the routing system contains VRFs that import the 788 Service-topology-RT, continue; otherwise, ignore the route. 790 2. The routing system identifies the position and role 791 (ingress/egress) of each of its VRFs in the SFC by comparing 792 the IP address of each VRF's route to its connected SF 793 instance with the addresses in the Service-node-sequence in the 794 route-policy. Alternatively, the controller may provision 795 the specific service node IP to be used as the next-hop in 796 each VRF, in the route-policy for the VRF. 798 3. The routing system modifies the next-hop of the imported 799 route with the Service-topology-RT, to select the 800 appropriate next-hop as per the route-policy. It ignores the 801 next-hop and label in the received route. It resolves the 802 selected next-hop in the local VRF routing table. 804 a. The imported route to Network-B in the ingress VRF is 805 modified to have a next-hop of the IP address of the 806 logically connected SF instance. 808 b. The imported route to Network-B in the egress VRF is 809 modified to have a next hop of the IP address of the 810 next SF instance in the SFC. 812 4. The egress VRFs for the last service function install the 813 VPN route via the gateway R-B unmodified.
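The next-hop selection in steps 1 to 4 above can be sketched as follows (Python; the function, field names and addresses are illustrative assumptions, not a prescribed implementation; the route-policy tuple follows the format given earlier in this section):

   # Illustrative sketch only: per-VRF next-hop selection for a route
   # carrying the Service-topology-RT, following steps 1-4 above.
   POLICY = {
       "service_topology_name": "chain-A-to-B",
       "service_topology_rt": "target:64512:100",
       # Ordered SF instance interface addresses along the chain.
       "service_node_sequence": ["10.0.1.11", "10.0.2.21", "10.0.3.31"],
   }

   def resolve_next_hop(vrf_role, position, policy, route):
       """vrf_role is 'ingress' or 'egress'; position is the index of
       the VRF's attached SF instance in the service-node sequence."""
       if policy["service_topology_rt"] not in route["rts"]:
           return None                     # step 1: route is ignored
       nodes = policy["service_node_sequence"]
       if vrf_role == "ingress":
           return nodes[position]          # step 3a: connected SF instance
       if position + 1 < len(nodes):
           return nodes[position + 1]      # step 3b: next SF instance
       return route["next_hop"]            # step 4: last egress VRF, unmodified

   route = {"prefix": "Network-B", "next_hop": "R-B",
            "rts": ["target:64512:100"]}
   assert resolve_next_hop("ingress", 0, POLICY, route) == "10.0.1.11"
   assert resolve_next_hop("egress", 0, POLICY, route) == "10.0.2.21"
   assert resolve_next_hop("egress", 2, POLICY, route) == "R-B"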
815 Note that the modified routes are not re-advertised into the VPN by 816 the various intermediate routing systems in the SFC. 818 2.6.3 Common SFC provisioning considerations 820 In both methods, for physical routers, the creation and 821 configuration of VRFs, interfaces and local static routes can be 822 performed programmatically using Netconf; and BGP route distribution 823 can use a route reflector (which may be part of the controller). In 824 the virtualized case, where a VPN forwarder is present, creation and 825 configuration of VRFs, interfaces and installation of routes may 826 instead be performed using a single protocol like XMPP, NC/YANG or an 827 equivalent programmatic interface. 829 Also in the virtualized case, the actual forwarding table entries to 830 be installed in the ingress and egress VRFs may be calculated by the 831 controller based on its internal knowledge of the required SFC 832 topology and the connectivity of SF instances to routing systems. In 833 this case, the routes may be directly installed in the forwarders 834 using the programmatic interface and no BGP route advertisement is 835 necessary, except when coordination with external domains (Section 5) 836 or federation between controller domains is employed (Section 7). 837 Note however that this is just one typical model for a virtual 838 forwarding based system. In general, physical and virtual routing 839 systems can be treated exactly the same if they have the same 840 capabilities. 842 In both methods, the SF instance may also need to be set up 843 appropriately to forward traffic between its input and output 844 interfaces, via static, dynamic or policy-based routing. If 845 the service function is a transparent L2 service, then the static 846 route installed in the ingress VRF will have a next-hop of the IP 847 address of the routing system interface attached to the other side 848 of the service instance. 850 2.7 Controller Function 852 The purpose of the controller is to manage instantiation of SFCs in 853 networks and datacenters. When an SFC is to be instantiated, a model 854 of the desired topology (service functions, number of instances, 855 connectivity) is built in the controller either via an API or GUI. 856 The controller then selects resources in the infrastructure that will 857 support the SFC and configures them. This can involve instantiation 858 of SF instances to implement each service function, the instantiation 859 of VRFs that will form virtual networks between SF instances, and 860 installation of routes to cause traffic to flow into and between SF 861 instances. It can also include provisioning the necessary static, 862 dynamic or policy-based forwarding on the service function instance 863 to enable it to forward traffic. 865 For simplicity, in this document, the controller is assumed to 866 contain all the required features for management of SFCs. In actual 867 implementations, these features may be distributed among multiple 868 inter-connected systems. For example, an overarching orchestrator might 869 manage the overall SFC model, sending instructions to a separate 870 virtual machine manager to instantiate service function instances, 871 and to a virtual network manager to set up the service chain 872 connections between them. 874 The controller can also perform necessary BGP signaling and route 875 distribution actions as described throughout this document.
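As one concrete illustration of the controller role, the sketch below allocates route targets for the sequential-VPN method of Section 2.6.1. The numbering scheme and names are assumptions made for this example only: one RT is allocated per link of the chain, and the egress VRFs of each service function import the RT that the ingress VRFs of the next service function export:

   # Illustrative sketch only: RT allocation for Section 2.6.1.
   def allocate_rts(asn, chain, base=100):
       """chain is an ordered list of service function names."""
       rts = [f"target:{asn}:{base + i}" for i in range(len(chain) + 1)]
       plan = {"entry_vrf_import": rts[0],   # entry VRF learns SF-1 routes
               "exit_vrf_export": rts[-1],   # exit VRF exports Network-B routes
               "vrfs": {}}
       for i, sf in enumerate(chain):
           plan["vrfs"][sf] = {
               "ingress_vrf": {"export_rt": rts[i]},     # advertises its route
               "egress_vrf": {"import_rt": rts[i + 1]},  # learns the next hop
           }
       return plan

   plan = allocate_rts(64512, ["SF-1", "SF-2", "SF-3"])
   # The egress VRFs of SF-1 import exactly what the ingress VRFs of
   # SF-2 export, forming the VPN that links the two service functions.
   assert (plan["vrfs"]["SF-1"]["egress_vrf"]["import_rt"] ==
           plan["vrfs"]["SF-2"]["ingress_vrf"]["export_rt"])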
877 2.8 Variations on Setting Prefixes in an SFC 879 The SFC Creation section above described the basic procedures for 880 two SFC creation methods. This section describes some 881 techniques that can extend and provide optimizations on top of the 882 basic procedures. 884 2.8.1 Using a Default Route 886 In the methods described above, it can be noted that only the gateway 887 routing systems need the specific network prefixes to steer traffic 888 in and out of the SFC. The intermediate systems can direct traffic in 889 the ingress and egress VRFs by using only a default route. Hence, it 890 is possible to avoid installing the network prefixes in the 891 intermediate systems. This can be done by splitting the SFC into two 892 sections - one linking the entry and exit VRFs and the other 893 including the intermediate systems. For instance, this may be 894 achieved by using two different Service-topology-RTs in the second 895 method. 897 2.8.2 Using a Default Route and a Large Prefix 899 In the configuration methods described above, the network prefixes 900 for each network (Network-A and Network-B in the example above) 901 connected to the SFC are used in the routes that direct traffic 902 through the SFC. This couples the implementation of the SFC to the 903 insertion of the SFC into a particular network. 905 For instance, subscriber network prefixes will normally be segmented 906 across subscriber attachment points such as broadband or mobile 907 gateways. This means that each SFC would have to be configured with 908 the subscriber network prefixes whose traffic it is handling. 910 In a variation of the SFC configuration method described above, the 911 prefixes used in each direction can be such that they include all 912 possible addresses at each side of the SFC. For example, in Figure 1, 913 the prefix for Network-A could include all subscriber IP addresses 914 and the prefix for Network-B could be the default route, 0/0. 916 Using this technique, the same routes can be installed in all 917 instances of an SFC that serve different groups of subscribers in 918 different geographic locations. 920 The routes forwarding traffic into an SF instance and to the next SF 921 instance are installed when an SFC is initially built, and each time 922 an SF instance is connected into the SFC, but there is no requirement 923 for VRFs to be reconfigured when traffic from different networks 924 passes through the service chain, so long as their prefix is included 925 in the prefixes in the VRFs along the SFC. 927 In this variation, it is assumed that no subscriber-originated 928 traffic will enter the SFC destined for an IP address also in the 929 subscriber network address range. This will not be a restriction in 930 many cases. 932 2.8.3 Disaggregated Gateway Routers 934 As a slight variation of the above, a network prefix may be 935 disaggregated and spread out among various gateway routers, for 936 instance, in the case of virtual machines in a data-center. In order 937 to reduce the scaling requirements on the routing systems along the 938 SFC, the SFC can again be split into two sections as described above. 939 In addition, the last egress VRF may act as the exit VRF and install 940 the destination network's disaggregated routes. If the destination 941 network's prefixes can be aggregated, for instance into a subnet 942 prefix, then the aggregate prefix may be advertised and installed in 943 the entry VRF.
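The effect of these variations can be summarized in the following sketch of per-VRF forwarding state (the prefixes are hypothetical, not from this document). Only the entry and exit VRFs carry real network prefixes; the VRFs along the chain steer all traffic with a default route, so the same chain can be reused in front of any pair of networks:

   # Illustrative sketch only: routes under Sections 2.8.1 - 2.8.3.
   ROUTES = {
       "entry_vrf":       {"198.51.100.0/24": "tunnel to SF-1 ingress VRF"},
       "sf1_ingress_vrf": {"0.0.0.0/0": "interface to SF-1 instance"},
       "sf1_egress_vrf":  {"0.0.0.0/0": "tunnel to SF-2 ingress VRF"},
       "sf2_ingress_vrf": {"0.0.0.0/0": "interface to SF-2 instance"},
       "sf2_egress_vrf":  {"0.0.0.0/0": "tunnel to exit VRF"},
       # The exit VRF holds the destination network's real, possibly
       # disaggregated, prefixes (Section 2.8.3).
       "exit_vrf":        {"198.51.100.0/24": "to Network-B gateway"},
   }
   # Per Section 2.8.2, "entry_vrf" could instead hold 0.0.0.0/0 so that
   # identical routes serve all instances of the SFC.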
945 2.8.4 Optimizing VRF usage 947 It may be desirable to avoid using distinct ingress and egress VRFs 948 for the service instances in order to make more efficient use of VRF 949 resources, especially on physical routing systems. The ingress VRF 950 and egress VRF may be treated as conceptual entities and the 951 forwarding realized using one or more options described in this 952 section, combined with the methods described earlier. 954 For instance, the next-hop forwarding label described earlier serves 955 the purpose of directing traffic received from other routing systems 956 directly towards an attached service instance. On the other hand, if 957 the encapsulation mechanism or the device in use requires an IP 958 lookup for incoming packets from other routing systems, then the 959 specific network prefixes may be installed in the intermediate 960 service VRFs to direct traffic towards the attached service 961 instances. 963 Similarly, a per-interface policy-based-routing rule applied to an 964 access interface can serve to direct traffic coming in from attached 965 service instances towards the next SF set. 967 2.8.5 Dynamic Entry and Exit Signaling 969 When either of the methods of the previous sections is employed, the 970 prefixes of the attached networks at each end of an SFC can be 971 signaled into the corresponding VRFs dynamically. This requires that 972 a BGP session be configured either from the network device at each 973 end of the SFC into each network or from the controller. 975 If dynamic signaling is performed, and a bidirectional SFC set is 976 configured, and the gateways to the networks connected via the SFC 977 exchange routes, steps must be taken to ensure that routes to both 978 networks do not get advertised from both ends of the SFC set by 979 re-origination. This can be achieved if a new BGP Extended Community is 980 implemented to control re-origination. When a route is re-originated, 981 the RTs of the re-originated routes are appended to the new 982 Route-Target Record Extended Community, and if the RT for the route already 983 exists in the Extended Community, the route is not re-originated (see 984 Section 9.1). 986 2.8.6 Dynamic Re-Advertisements in Intermediate systems 988 The intermediate routing systems attached to the service instances 989 may also use the dynamic signaling technique from the previous 990 section to re-advertise received routes up the chain. In this case, 991 the ingress and egress VRFs are combined into one, and a local 992 route-policy ensures the re-advertised routes are associated with 993 labels that direct incoming traffic directly to the attached service 994 instances on that routing system. 996 2.9 Layer-2 Virtual Networks and Service Functions 998 There are SFs that operate at layer-2, in a transparent mode, and 999 forward traffic based on the MAC DA. When such an SF is present in the 1000 SFC, the procedures at the routing system are modified slightly. In 1001 this case, the IP address associated with the SF instance (and used 1002 as the next-hop of routes in the above procedures) is actually the 1003 one assigned to the routing system interface attached to the other 1004 end of the SF instance, or it could be a virtual IP address logically 1005 associated with the service function with a next-hop of the other 1006 routing system interface. The routing system interface uses distinct 1007 interface MAC addresses.
This allows the current scheme to be 1008 supported, while allowing the transparent service function to work 1009 using its existing behavior. 1011 An SFC may also be set up between end systems or network segments 1012 within the same Layer-2 bridged network. In this case, applying the 1013 procedures described earlier, the segments or groups of end systems 1014 are placed in distinct Layer-2 virtual networks, which are then 1015 inter-connected via a sequence of intermediate Layer-2 virtual 1016 networks that form the links in the SFC. Each virtual network maps to 1017 a pair of ingress and egress MAC VRFs on the routing systems to which 1018 the SF instances are attached. The routing systems at the ends of the 1019 SFC will advertise the locally learnt or installed MAC entries using 1020 BGP-EVPN type-2 routes, which will get installed in the MAC VRFs at 1021 the other end. The intermediate systems may use default MAC routes 1022 installed in the ingress and egress MAC VRFs, or the other variations 1023 described earlier in this document. 1025 2.10 Header Transforming Service Functions 1027 If a service function performs an action that changes the source 1028 address in the packet header (e.g., NAT), the routes that were 1029 installed as described above may not support reverse flow traffic. 1031 The solution to this is for the controller to modify the routes in the 1032 reverse direction to direct traffic into instances of the 1033 transforming service function. The original routes with a source 1034 prefix (Network-A in Figure 2) are replaced with a route that has a 1035 prefix that includes all the possible addresses that the source 1036 address could be mapped to. In the case of network address 1037 translation, this would correspond to the NAT pool. 1039 3 Load Balancing Along a Service Function Chain 1041 One of the key concepts driving NFV [NFVE2E] is the idea that each 1042 service function along an SFC can be separately scaled by changing 1043 the number of service function instances that implement it. This 1044 requires that load balancing be performed before entry into each 1045 service function. In this architecture, load balancing is performed 1046 in either or both of egress and ingress VRFs depending on the type of 1047 load balancing being performed, and on whether more than one service instance 1048 is connected to the same ingress VRF. 1050 3.1 SF Instances Connected to Separate VRFs 1052 If SF instances implementing a service in an SFC are each connected 1053 to separate VRFs (e.g., instances are connected to different routers or 1054 are running on different hosts), load balancing is performed in the 1055 egress VRFs of the previous service, or in the VRF that is the entry 1056 to the SFC. The controller distributes BGP multi-path routes to the 1057 egress VRFs. The destination prefix of each route is the ultimate 1058 destination network, or its representative aggregate or default. The 1059 next-hops in the ECMP set are BGP next-hops of the service instances 1060 attached to ingress VRFs of the next service in the SFC. The load 1061 balancing corresponds to BGP Multipath, which requires that the route 1062 distinguishers for each route are distinct in order to recognize that 1063 distinct paths should be used. Hence, each VRF in a distributed 1064 SFC environment should have a unique route distinguisher.
1066 +------+ +-------------------------+ 1067 O----|SFI-11|---O |--- Data plane connection| 1068 // +------+ \\ |=== Encapsulation tunnel | 1069 // \\ | O VRF | 1070 // \\ | * Load balancer | 1071 // \\ +-------------------------+ 1072 // +------+ \\ 1073 Net-A-->O*====O----|SFI-12|---O====O-->Net-B 1074 \\ +------+ // 1075 \\ // 1076 \\ // 1077 \\ // 1078 \\ +------+ // 1079 O----|SFI-13|---O 1080 +------+ 1082 Figure 6 - Egress VRF Load Balancing across SF Instances Connected 1083 to Different VRFs 1085 In the diagram, above, a service function is implemented in three 1086 service instances, each connected to separate VRFs. Traffic from 1087 Network-A arrives at the VRF at the start of the SFC, and is load 1088 balanced across the service instances using a set of ECMP routes with 1089 next hops being the addresses of the routing systems containing the 1090 ingress VRFs and with labels that identify the ingress interfaces of 1091 the service instances. 1093 3.2 SF Instances Connected to the Same VRF 1095 When SF instances implementing a service in an SFC are connected to 1096 the same ingress VRF, load balancing is performed in the ingress VRF 1097 across the service instances connected to it. The controller will 1098 install routes in the ingress VRF to the destination network with the 1099 interfaces connected to each service instance as next hops. The 1100 ingress VRF will then use ECMP to load balance across the service 1101 instances. 1103 +------+ +-------------------------+ 1104 |SFI-11| |--- Data plane connection| 1105 +------+ |=== Encapsulation tunnel | 1106 / \ | O VRF | 1107 / \ | * Load balancer | 1108 / \ +-------------------------+ 1109 / +------+ \ 1110 Net-A-->O====O*----|SFI-12|----O====O-->Net-B 1111 \ +------+ / 1112 \ / 1113 \ / 1114 \ / 1115 +------+ 1116 |SFI-13| 1117 +------+ 1119 Figure 7 - Ingress VRF Load Balancing across SF Instances 1120 Connected to the Same VRF 1122 In the diagram, above, a service is implemented by three service 1123 instances that are connected to the same ingress and egress VRFs. The 1124 ingress VRF load balances across the ingress interfaces using ECMP, 1125 and the egress traffic is aggregated in the egress VRF. 1127 If forwarding labels that identify each SFI ingress interface are 1128 used, and if the routes to each SF instance are advertised with 1129 different route distinguishers, then it is possible to perform ECMP 1130 load balancing at the routing instance at the beginning of the 1131 encapsulation tunnel (which could be the egress VRF of the previous 1132 SF in the SFC). 1134 3.3 Combination of Egress and Ingress VRF Load Balancing 1136 In Figure 8, below, an example SFC is shown where load balancing is 1137 performed in both ingress and egress VRFs.
3.3 Combination of Egress and Ingress VRF Load Balancing

In Figure 8, below, an example SFC is shown where load balancing is
performed in both ingress and egress VRFs.

                                            +-------------------------+
                                            |--- Data plane connection|
                +------+                    |=== Encapsulation tunnel |
                |SFI-11|                    | O   VRF                 |
                +------+                    | *   Load balancer       |
                /      \                    +-------------------------+
               /        \
              / +------+ \           +------+
            O*--|SFI-12|--O*====O----|SFI-21|---O
           //   +------+   \\  //    +------+    \\
          //                \\//                  \\
         //                  \\                    \\
        //                  //\\                    \\
       //       +------+   //  \\    +------+        \\
 Net-A-->O*====O----|SFI-13|---O*====O----|SFI-22|---O====O-->Net-B
                +------+             +------+
       ^       ^           ^         ^               ^    ^
       |       |           |         |               |    |
       |    Ingress      Egress      |               |    |
       |                  Ingress  Egress            |
   SFC Entry                                      SFC Exit

        Figure 8 - Load Balancing across SF Instances

In Figure 8, above, an SFC is composed of two services implemented by
three service instances and two service instances, respectively. The
service instances SFI-11 and SFI-12 are connected to the same ingress
and egress VRFs, and all the other service instances are connected to
separate VRFs.

Traffic entering the SFC from Network-A is load balanced across the
ingress VRFs of the first service function by the chain entry VRF,
and then load balanced again across the ingress interfaces of SFI-11
and SFI-12 by the shared ingress VRF. Note that use of standard ECMP
will lead to an uneven distribution of traffic between the three
service instances (25% to SFI-11, 25% to SFI-12, and 50% to SFI-13).
This issue can be mitigated through the use of the BGP link bandwidth
extended community [draft-ietf-idr-link-bandwidth]. As described in
the previous section, if a next-hop forwarding label is used, another
way to mitigate this effect is to advertise the routes to each SF
instance connected to a VRF with a different route distinguisher.

After traffic passes through the first set of service instances, it
is load balanced in each of the egress VRFs of the first set of
service instances across the ingress VRFs of the next set of service
instances.

3.4 Forward and Reverse Flow Load Balancing

This section discusses the load balancing requirements for forward
and reverse paths when stateful service functions are deployed.

3.4.1 Issues with Equal Cost Multi-Path Routing

As discussed in the previous sections, load balancing in the forward
direction of the SFC in the above example can occur automatically
with standard BGP, if multiple equal cost routes to Network-B are
installed into all the ingress VRFs, and each route directs traffic
through a different service function instance in the next set. The
multiple BGP routes in the routing table translate to Equal Cost
Multi-Path in the forwarding table. The hash used in the load
balancing algorithm (per packet, per flow, or per prefix) is
implementation specific.

If a service function is stateful, it is required that forward flows
and reverse flows always pass through the same service function
instance. Standard ECMP does not provide this capability, since the
hash calculation will see different input data for the same flow in
the forward and reverse directions (the source and destination fields
are reversed).

Additionally, if the number of SF instances changes, either
increasing to expand capacity or decreasing (planned, or due to an SF
instance failure), the hash table used in ECMP is recalculated, most
flows are directed to a different SF instance, and user sessions are
disrupted.
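The following non-normative Python sketch demonstrates both problems,
using Python's built-in hash() as a stand-in for a forwarder's hash
function; the addresses and instance names are invented for the
example.

      # Non-normative illustration of the two ECMP problems
      # described above.

      def ecmp_pick(flow, instances):
          # Map a flow to one of N instances via hash-and-modulo.
          return instances[hash(flow) % len(instances)]

      fwd = ("10.1.1.1", "10.2.2.2", 33000, 80)   # src, dst, sport, dport
      rev = ("10.2.2.2", "10.1.1.1", 80, 33000)   # same flow, reversed

      sfis = ["SFI-1", "SFI-2", "SFI-3"]
      # Problem 1: forward and reverse hashes generally differ, so a
      # stateful SF instance may see only one direction of the flow.
      print(ecmp_pick(fwd, sfis), ecmp_pick(rev, sfis))
      # Problem 2: changing the number of instances remaps most
      # flows, disrupting existing sessions.
      print(ecmp_pick(fwd, sfis), ecmp_pick(fwd, sfis + ["SFI-4"]))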
There are a number of ways to satisfy the requirements of symmetric
forward/reverse paths for flows and minimal disruption when SF
instances are added to or removed from a set. Two techniques that can
be employed are described in the following sections.

3.4.2 Modified ECMP with Consistent Hash

Symmetric forwarding into each side of an SF instance set can be
achieved with a small modification to ECMP, if the packet headers are
preserved after passing through the SF instance set, and assuming
that the same hash function, the same hash salt, and the same
ordering association of hash buckets to ECMP routes are used in both
directions. The header fields of a packet determine the hash bucket,
and therefore the service instance, that the packet will be sent to,
but the source and destination IP address and port information are
swapped in the calculation in the reverse direction. This method only
requires that the list of available service function instances be
consistently maintained in the load balance tables in all the routing
systems, rather than maintaining flow tables. This requirement can be
met by the use of a distinct VPN route for each instance.

In the SFC architecture described in this document, when SF instances
are added or removed, the controller is required to install (or
remove) routes to the SF instances. The controller could configure
the load balancing function in the VRFs that connect to each added
(or removed) SF instance as part of the same network transaction as
the route updates, to ensure that the load balancer configuration is
synchronized with the set of SF instances.

The consistent ordering among ECMP routes in the routing systems
could be achieved through configuration of the routing systems by the
controller using, for instance, Netconf [RFC6241]; or, when the
routes are signaled using BGP by the controller or a routing system,
the order for a given instance can be sent in a new 'Consistent Hash
Sort Order' BGP Extended Community (defined in Section 9.2).

The effect of rehashing when SF instances are added or removed can be
minimized, or even eliminated, using variations of the technique of
consistent hashing [consistent-hash]. Details are outside the scope
of this document.
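The following non-normative Python sketch combines the two ideas: a
direction-independent flow key, and a consistent-hash ring so that
adding or removing an SF instance remaps only a fraction of flows.
Note that the sketch derives bucket ordering from hashing, whereas
the mechanism above allows the ordering to be signaled explicitly;
all names and values are invented for the example.

      import hashlib

      def symmetric_key(src, dst, sport, dport):
          # Sorting the endpoints makes the key identical for
          # forward and reverse packets of the same flow.
          return tuple(sorted([(src, sport), (dst, dport)]))

      def h(value):
          # Stable hash (Python's hash() is salted per process).
          return int(hashlib.md5(str(value).encode()).hexdigest(), 16)

      def ring_pick(flow_key, instances, vnodes=32):
          # Place each instance at several points on a hash ring,
          # then map the flow to the nearest point clockwise.
          ring = sorted((h((inst, v)), inst) for inst in instances
                        for v in range(vnodes))
          point = h(flow_key)
          for pos, inst in ring:
              if pos >= point:
                  return inst
          return ring[0][1]

      key_fwd = symmetric_key("10.1.1.1", "10.2.2.2", 33000, 80)
      key_rev = symmetric_key("10.2.2.2", "10.1.1.1", 80, 33000)
      assert key_fwd == key_rev        # symmetric by construction
      print(ring_pick(key_fwd, ["SFI-1", "SFI-2", "SFI-3"]))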
3.4.3 ECMP with Flow Table

A second refinement that can ensure forward/reverse flow consistency,
and that also provides stability when the number of SF instances
changes ('flow stickiness'), is the use of dynamically configured IP
flow tables in the VRFs. In this technique, flow tables are used to
ensure that existing flows are unaffected if the number of ECMP
routes changes, and that forward and reverse traffic passes through
the same SF instance in each set of SF instances implementing a
service function.

The flow tables are set up as follows:

  1. User traffic with a new 5-tuple enters an egress VRF from a
     connected SF instance.

  2. The VRF calculates the ECMP hash across the available routes
     (i.e., the ECMP group) to the ingress interfaces of the SF
     instances in the next SF instance set. The consistent hash
     technique described in Section 3.4.2 must be used here and
     in subsequent steps.

  3. The VRF creates a new flow entry for the 5-tuple of the new
     traffic, with the next hop being the chosen downstream ECMP
     group member (determined in step 2 above). All subsequent
     packets for the same flow will be forwarded using flow lookup
     and, hence, will use the same next hop.

  4. The encapsulated packet arrives in the routing system that
     hosts the ingress VRF for the selected SF instance.

  5. The ingress VRF of the next service instance determines
     whether the packet came from a routing system that is in an
     ECMP group in the reverse direction (i.e., from this ingress
     VRF back to the previous set of SF instances).

  6. If an ECMP group is found, the ingress VRF creates a flow
     entry for the reversed 5-tuple, with the next hop being the
     tunnel on which the traffic arrived. This is for the traffic
     in the reverse direction.

  7. If multiple SF instances are connected to the ingress VRF,
     the ECMP consistent hash is used to choose which one to send
     the traffic into.

  8. A forward flow table entry is created for the traffic's
     5-tuple, with the next hop being the interface of the SF
     instance chosen in the previous step.

  9. The packet is sent into the selected SF instance.

The above method ensures that forward and reverse flows pass through
the same SF instances, and that, if the number of ECMP routes changes
when SF instances are added or removed, all existing flows will
continue to flow through the same SF instances, while new flows will
use the new ECMP hash. The only flows affected will be those that
were passing through an SF instance that was removed; those will be
spread among the remaining SF instances using the updated ECMP hash.

If the consistent hash algorithm is used in both directions, then
only the forward flow table entries are required; and if next-hop
forwarding labels are used, the flow table created in step 3 alone is
sufficient to provide flow stickiness.
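The following non-normative Python sketch shows the essence of this
technique. For brevity it collapses the egress and ingress VRF flow
tables into a single structure; the tuple layout, tunnel naming, and
hash choice are assumptions made for the example.

      # Non-normative sketch: the first packet of a flow is hashed
      # to pick a next hop, and forward and reverse flow entries
      # pin subsequent packets to that choice.

      flow_table = {}

      def forward(five_tuple, ecmp_next_hops, pick):
          # Flow hit: keep using the previously chosen next hop.
          if five_tuple in flow_table:
              return flow_table[five_tuple]
          # Flow miss: choose a member of the ECMP group (steps 1-3).
          next_hop = flow_table[five_tuple] = pick(five_tuple,
                                                   ecmp_next_hops)
          # Create the reverse entry toward the arrival tunnel
          # (steps 5-6), so return traffic retraces the same path.
          src, dst, sport, dport, proto = five_tuple
          reverse = (dst, src, dport, sport, proto)
          flow_table.setdefault(reverse, "tunnel-from-" + next_hop)
          return next_hop

      pick = lambda flow, hops: hops[hash(flow) % len(hops)]
      pkt = ("10.1.1.1", "10.2.2.2", 33000, 80, "tcp")
      print(forward(pkt, ["sfi-1", "sfi-2"], pick))
      print(flow_table)   # contains the forward and reverse entries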
3.4.4 Dealing with Different Hash Algorithms in an SFC

In some cases, there will be two or more hash algorithms in the
forwarders along an SFC, for example, when a physical router is at
the entry and exit of the chain and virtual forwarders are used
within the chain. Forward and reverse flows will then generally not
pass through the same instance of the first SF, and the SFC will not
operate as intended if the first SF is stateful. It may be
impractical or prohibitively expensive to implement the flow
table-based methods described above to achieve flow stability and
symmetry. This issue can be mitigated by ensuring that the first SF
is not stateful, or by placing a null SF between the physical router
and the first actual SF in the SFC. This ensures that the hash method
on both sides of the stateful service instances is the same, and the
SFC will operate with flow stability and symmetry if the methods
described above are employed.

4 Steering into SFCs Using a Classifier

In many applications of SFCs, a classifier will be used to direct
traffic into SFCs. The classifier inspects the first packet, or first
few packets, of a flow to determine which SFC the flow should be sent
into. The decision criteria can be based on just the IP 5-tuple of
the header (i.e., filter-based forwarding), or can involve analysis
of the packet payload using deep packet inspection. Integration with
a subscriber management system such as PCRF or AAA may be required in
order to identify which SFC to send traffic to based on subscriber
policy.

An example logical architecture is shown in Figure 9, below, where a
classifier is external to a physical router that is hosting the VRFs
that form the ends of two SFCs. In the case of filter-based
forwarding, classification could occur in a VRF on the router.

                     +----------+
                     | PCRF/AAA |
                     +-----+----+
                           :
                           :
        Subscriber   +-----+------+
        Traffic----->| Classifier |
                     +------------+
                        |   |
                +-------|---|----------------------+
                |       |   |       Router         |
                |       |   |                      |
                |       O   O       X--------->Internet
                |       |   |      / \             |
                |       |   |     O   O            |
                +-------|---|-----|---|------------+
                        |   |     |   |
                        |   | +---+   +---+
                        |   +-+ U +---+ V |
                        |     +---+   +---+
                        |                 |
                        | +---+ +---+ +---+
                        +-+ X +-+ Y +-+ Z |
                          +---+ +---+ +---+

    Figure 9 - Subscriber/Application-Aware Steering with a Classifier

In the diagram, the classifier receives subscriber traffic and sends
the traffic out of one of two logical interfaces, depending on the
classification criteria. The logical interfaces of the classifier are
connected to VRFs in a router that are the entries to two SFCs (shown
as O in the diagram).

In this scenario, the entry VRF for each chain does not advertise the
destination network prefixes, and the modified method of setting
prefixes, described in Section 2.8.2, can be employed. Also, the exit
VRF for each SFC does not peer with a gateway or proxy node in the
destination network, and packets are forwarded using IP lookup in the
main routing table or in a VRF that the exit traffic from the SFCs is
directed into (shown as X in the diagram). A flow table may be
required to ensure that reverse traffic is sent into the correct SFC.

An alternative would be for the classifier to itself be a
distributed, virtualized service function with multiple egress
interfaces. In that case, each egress interface of a virtual
classifier instance would connect to an entry VRF, which would load
balance across the first SF instance set in its SFC. The reverse flow
table mechanism described in Section 3.4.3 could be employed to
ensure that flows return to the originating classifier instance,
which may maintain subscriber context and perform charging and
accounting.
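The following non-normative Python sketch illustrates filter-based
classification on the IP 5-tuple. The policy contents and VRF names
(loosely modeled on the two chains, U-V and X-Y-Z, of Figure 9) are
invented for the example.

      # Non-normative sketch: match on fields of the 5-tuple and
      # hand the flow to the entry VRF of the selected SFC.

      POLICY = [
          # (dst_port, protocol) -> SFC entry VRF
          ((80,  "tcp"), "sfc-U-V-entry-vrf"),
          ((443, "tcp"), "sfc-U-V-entry-vrf"),
          (None,         "sfc-X-Y-Z-entry-vrf"),   # default chain
      ]

      def classify(five_tuple):
          src, dst, sport, dport, proto = five_tuple
          for match, entry in POLICY:
              if match is None or match == (dport, proto):
                  return entry
          raise LookupError("no matching policy")

      print(classify(("10.1.1.1", "198.51.100.7", 40000, 443, "tcp")))
      # sfc-U-V-entry-vrf

A deep-packet-inspection or subscriber-aware classifier would replace
the static POLICY table with decisions derived from payload analysis
or from PCRF/AAA state, but the hand-off to an entry VRF is the same.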
This advertisement may be over regular MP-BGP/VPN peering, which
assumes existing standard VPN routing/forwarding behavior on the
network domain's routers (PEs/ASBRs). The controller can learn routes
to networks in external domains at the egress of an SFC and advertise
routes to those networks into other external domains using the first
ingress routing instance as the next hop, thus allowing dynamic
steering through re-origination of routes.

An operational benefit of this approach is that the SFC topology
within a domain need not be exposed to other domains. Additionally,
using non-specific routes inside an SFC, as described in Section
2.8.1, means that new networks can be attached to an SFC without
needing to configure prefixes inside the chain.

The controller will typically remove the destination network's RTs
and replace them with the RTs of the source network while advertising
the modified routes. Alternatively, an external domain may be
provisioned with an additional export-only RT and an import-only RT
that the controller can use.

6 Fine-Grained Steering Using BGP Flow-Spec

When steering traffic from an external network domain into an SFC
based on attributes of the packet flow, BGP Flow-Spec can be used as
a signaling option.

In this case, the controller can advertise one or more flow-spec
routes into the entry VRF with the appropriate Service-topology-RT
for the SFC. Alternatively, it can use the procedures described in
[RFC5575] or [flowspec-redirect-ip] on the gateway router to redirect
traffic towards the first SF.

If it is desired to steer specific flows from a network domain's
existing routers, the controller can advertise the above flow-spec
routes to the network domain's border routers or route reflectors.
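The following non-normative Python sketch shows, at a conceptual
level, the pairing of match conditions and redirect action that such
a flow-spec route carries. The dictionary structure is purely
illustrative; the on-the-wire encoding is defined in [RFC5575].

      # Non-normative sketch: a controller-originated flow-spec
      # "route" redirecting matching traffic to the VRF identified
      # by the service-topology route target.

      def flowspec_route(dst_prefix, dport, service_topology_rt):
          return {
              "match": {
                  "destination": dst_prefix,
                  "destination-port": dport,
              },
              "action": {"redirect-to-vrf": service_topology_rt},
          }

      route = flowspec_route("203.0.113.0/24", 80, "target:65000:101")
      print(route)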
7 Controller Federation

When SFCs are distributed geographically, or in very large-scale
environments, multiple SFC controllers may be present, and they may
variously employ both of the SFC creation methods described in
Section 2.6. If SFCs are required to span controller domains, the
controllers may need to exchange information with each other. Again,
a BGP session between the controllers can be used to exchange route
information as described in the previous sections, allowing such
domain-spanning SFCs to be created.

8 Coordination Between SF Instances and Controller Using BGP

In many cases, the configuration of an SF instance determines its
network behavior, e.g., when NAT pools are set up, or when an SSL
gateway is configured with a set of enterprise IP addresses to use.
In these cases, the addresses that will be used by the SFs need to be
known in the networks connecting to them in order that traffic can be
properly routed. When SFCs are involved, this means that the
controller has to be notified when such configuration changes are
made in SF instances. Sometimes, the changes will be made by end
customers; it is desirable that the controller be notified
automatically when the change is made, without customers needing to
notify the service provider via a portal, for instance, and without
requiring the development of integration modules linking the SF
instances and the controller.

One option for automatic notification for SFs that support BGP is for
the connected forwarding system (the physical or virtual SFF) to also
support BGP, and for the SF instances to be configured to peer with
the SFF. When the configuration of an SF instance is changed such
that, for example, the SF will accept packets from a particular
network prefix on one of its interfaces, the SF instance sends a BGP
route update to the SFF that it is connected to and has a BGP session
with. The controller can then adjust the routes along the SFCs to
ensure that packets with destinations in the new prefix reach the
reconfigured SF instance.

BGP could also be used to signal from the controller to an SF
instance that certain traffic should be sent out from a particular
interface. This could be used to direct suspect traffic to a security
scrubbing center, for example.

Note that the SFF need not support a BGP stack itself; it can proxy
BGP messages to the controller, which will support such a stack.

9 BGP Extended Communities

9.1 Route-Target Record

Route-Target Record (RT Record) is defined as a transitive BGP
Extended Community that contains a Route-Target value representing
one of the RTs that was previously attached to the route, and which
may no longer be attached to the route on subsequent
re-advertisements (see Section 2.8.5).

A Sub-Type code 0x13 is assigned in the three BGP Extended Community
types - Two-Octet AS-Specific 0x00, IPv4-Address-Specific 0x01, and
Four-Octet AS-Specific 0x02. A Sub-Type code 0x0013 is also assigned
in the BGP Transitive IPv6 Address-Specific Extended Community.

The Extended Community is encoded as follows:

    0                   1                   2                   3
    0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   | 0x00,0x01,0x02| Sub-Type=0x13 |      Route-Target Value       |
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   |                   Route-Target Value contd.                   |
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

The Type field of the BGP Route-Target Extended Community is copied
into the Type field of the RT Record Extended Community.

The Value field (Global Administrator and Local Administrator) of the
Route-Target Extended Community is copied into the Route-Target Value
field of the RT Record Extended Community.

When comparing an RT Record to a Route-Target, only the Type and the
Route-Target Value fields are used in the comparison. The Sub-Type
field is masked out.

When a speaker re-originates a route that contains one or more RTs,
it must add each of these RTs as RT Record extended communities in
the re-originated route.

A speaker must not re-originate a route with an RT if this RT is
already present as an RT Record extended community.
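The copy rules above can be illustrated with the following
non-normative Python sketch, which builds an RT Record from a
Two-Octet AS-Specific Route-Target; the RT value 65000:101 is an
example only.

      import struct

      # Non-normative sketch: derive an RT Record (Sub-Type 0x13)
      # from an 8-octet Route-Target extended community.

      def rt_record_from_rt(rt_community):
          type_field = rt_community[0:1]   # copy the RT's Type octet
          value = rt_community[2:8]        # Global + Local Administrator
          return type_field + b"\x13" + value

      # Route-Target 65000:101: Type 0x00 (Two-Octet AS-Specific),
      # Sub-Type 0x02 (Route Target), AS 65000, assigned number 101.
      rt = struct.pack("!BBHI", 0x00, 0x02, 65000, 101)
      print(rt_record_from_rt(rt).hex())   # '0013fde800000065'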
Value Field : The Value field contains a Sort Order sub-field that
              indicates the relative order of this route among the
              ECMP set for the prefix, to be sorted in increasing
              order. It is a 32-bit unsigned integer. The field is
              encoded as shown below:

              +------------------------------+
              |    Sort Order (4 octets)     |
              +------------------------------+
              |    Reserved (2 octets)       |
              +------------------------------+
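As a non-normative illustration, the following Python sketch encodes
this extended community, assuming the Transitive Opaque type octet
0x03 has been provisioned as the high-order octet.

      import struct

      # Non-normative sketch: encode CONSISTENT_HASH_SORT_ORDER.
      # 1 octet type, 1 octet sub-type (0x14), 4-octet Sort Order,
      # 2 reserved octets.

      def consistent_hash_sort_order(sort_order):
          return struct.pack("!BBIH", 0x03, 0x14, sort_order, 0)

      print(consistent_hash_sort_order(2).hex())   # '0314000000020000'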
10 Summary and Conclusion

The architecture for service function chains described in this
document uses virtual networks implemented as overlays in order to
create service function chains. The virtual networks use
standards-based encapsulation tunneling, such as MPLS over GRE/UDP or
VXLAN, to transport packets into an SFC and between service function
instances without routing in the user address space. Two methods of
installing the routes that form service chains are described.

In environments with physical routers, a controller may operate in
tandem with existing BGP route reflectors, and would contain the SFC
topology model and the ability to install the local static interface
routes to the SF instances. In a virtualized environment, the
controller can emulate route reflection internally and simply install
the required routes directly, without advertisements occurring.

11 Security Considerations

The security considerations for SFCs are broadly similar to those
concerning the data, control, and management planes of any device
placed in a network. Details are out of scope for this document.

12 IANA Considerations

The new BGP Extended Communities in Section 9 are assigned types as
defined above in the IANA registry for extended communities.

13 Informative References

[NFVE2E]   "Network Functions Virtualisation: End to End
           Architecture",
           http://docbox.etsi.org/ISG/NFV/70-DRAFT/0010/NFV-0010v016.zip.

[RFC2328]  Moy, J., "OSPF Version 2", RFC 2328, April 1998.

[sfc-arch] Halpern, J. and C. Pignataro, "Service Function Chaining
           (SFC) Architecture", RFC 7665, October 2015.

[RFC4364]  Rosen, E. and Y. Rekhter, "BGP/MPLS IP Virtual Private
           Networks (VPNs)", RFC 4364, February 2006.

[RFC4271]  Rekhter, Y., Li, T., and S. Hares, "A Border Gateway
           Protocol 4 (BGP-4)", RFC 4271, January 2006.

[RFC4760]  Bates, T., Chandra, R., Katz, D., and Y. Rekhter,
           "Multiprotocol Extensions for BGP-4", RFC 4760, January
           2007.

[RFC7348]  Mahalingam, M., et al., "VXLAN: A Framework for Overlaying
           Virtualized Layer 2 Networks over Layer 3 Networks", RFC
           7348, August 2014.

[draft-rosen-idr-tunnel-encaps]
           Rosen, E., Ed., et al., "Using the BGP Tunnel
           Encapsulation Attribute without the BGP Encapsulation
           SAFI", August 6, 2015.

[draft-ietf-l3vpn-end-system]
           Marques, P., et al., "BGP-signaled end-system IP/VPNs",
           draft-ietf-l3vpn-end-system-04, October 2, 2014.

[RFC5575]  Marques, P., Sheth, N., Raszuk, R., et al., "Dissemination
           of Flow Specification Rules", RFC 5575, August 2009.

[draft-ietf-bess-evpn-overlay-02]
           Sajassi, A., et al., "A Network Virtualization Overlay
           Solution using EVPN", draft-ietf-bess-evpn-overlay,
           February 2015.

[draft-ietf-sfc-nsh]
           Quinn, P., et al., "Network Service Header",
           draft-ietf-sfc-nsh-00, March 2015.

[draft-niu-sfc-mechanism]
           Niu, L., Li, H., and Y. Jiang, "A Service Function
           Chaining Header and its Mechanism",
           draft-niu-sfc-mechanism-00, January 2014.

[draft-rijsman-sfc-metadata-considerations]
           Rijsman, B., et al., "Metadata Considerations",
           draft-rijsman-sfc-metadata-considerations-00, February 12,
           2014.

[RFC6241]  Enns, R., Bjorklund, M., Schoenwaelder, J., and A.
           Bierman, "Network Configuration Protocol (NETCONF)", RFC
           6241, June 2011.

[RFC4023]  Worster, T., Rekhter, Y., and E. Rosen, "Encapsulating
           MPLS in IP or Generic Routing Encapsulation (GRE)", RFC
           4023, March 2005.

[RFC7510]  Xu, X., Sheth, N., et al., "Encapsulating MPLS in UDP",
           RFC 7510, April 2015.

[draft-ietf-i2rs-architecture]
           Atlas, A., Halpern, J., Hares, S., Ward, D., and T.
           Nadeau, "An Architecture for the Interface to the Routing
           System", draft-ietf-i2rs-architecture, work in progress,
           March 2015.

[consistent-hash]
           Karger, D., Lehman, E., Leighton, T., Panigrahy, R.,
           Levine, M., and D. Lewin, "Consistent Hashing and Random
           Trees: Distributed Caching Protocols for Relieving Hot
           Spots on the World Wide Web", Proceedings of the
           Twenty-ninth Annual ACM Symposium on Theory of Computing,
           ACM Press, New York, NY, USA, pp. 654-663, 1997.

[draft-ietf-idr-link-bandwidth]
           Mohapatra, P. and R. Fernando, "BGP Link Bandwidth
           Extended Community", draft-ietf-idr-link-bandwidth, work
           in progress.

[flowspec-redirect-ip]
           Uttaro, J., et al., "BGP Flow-Spec Redirect to IP Action",
           draft-ietf-idr-flowspec-redirect-ip-02, February 2015.

14 Acknowledgments

This document was prepared using 2-Word-v2.0.template.dot.

This document is based on the earlier drafts
[draft-rfernando-bess-service-chaining] and
[draft-mackie-sfc-using-virtual-networking].

The authors would like to thank D. Daino, D.R. Lopez, D. Bernier, W.
Haeffner, A. Farrel, L. Fang, and N. So for their contributions to
the earlier drafts. The authors would also like to thank the
following individuals for their review and feedback on the original
proposals: E. Rosen, J. Guchard, P. Quinn, P. Bosch, D. Ward, A.
Ganesan, N. Seth, G. Pildush, and N. Bitar. The authors also thank
Wim Henderickx for his useful suggestions on several aspects of the
document.

Authors' Addresses

Rex Fernando
Cisco
170 W Tasman Drive
San Jose, CA 95134
USA
Email: rex@cisco.com

Stuart Mackie
Juniper Networks
1133 Innovation Way
Sunnyvale, CA 94089
USA
Email: wsmackie@juniper.net

Dhananjaya Rao
Cisco
170 W Tasman Drive
San Jose, CA 95134
USA
Email: dhrao@cisco.com

Bruno Rijsman
Juniper Networks
1133 Innovation Way
Sunnyvale, CA 94089
USA
Email: brijsman@juniper.net

Maria Napierala
AT&T Labs
200 Laurel Avenue
Middletown, NJ 07748
USA
Email: mnapierala@att.com

Thomas Morin
Orange
2, avenue Pierre Marzin
Lannion 22307
France
Email: thomas.morin@orange.com