BGP Enabled Services (bess)                                 R. Fernando
INTERNET-DRAFT                                                    Cisco
Intended status: Standards Track                              S. Mackie
Expires: June 2016                                              Juniper
                                                                 D. Rao
                                                                  Cisco
                                                             B. Rijsman
                                                                Juniper
                                                           M. Napierala
                                                                   AT&T
                                                               T. Morin
                                                                 Orange

                                                       December 7, 2015
                                                     Expires: June 2016

        Service Chaining using Virtual Networks with BGP VPNs
                  draft-fm-bess-service-chaining-02

Abstract

   This document describes how service function chains (SFC) can be
   applied to traffic flows using routing in a virtual (overlay)
   network to steer traffic between service nodes.
   Chains can include services running in routers, on physical
   appliances or in virtual machines. Service chains have
   applicability at the subscriber edge, business edge and in
   multi-tenant datacenters. The routing function into SFCs and
   between service functions within an SFC can be performed by
   physical devices (routers), be virtualized inside hypervisors, or
   run as part of a host OS.

   A BGP control plane for route distribution is used to create
   virtual networks implemented using IP MPLS, VXLAN or other
   suitable encapsulation, where the routes within the virtual
   networks cause traffic to flow through a sequence of service nodes
   that apply packet processing functions to the flows.

   Two techniques are described: in one the service chain is
   implemented as a sequence of distinct VPNs between sets of service
   nodes that apply each service function; in the other, the routes
   within a VPN are modified through the use of special route targets
   and modified next-hop resolution to achieve the desired result.

   In both techniques, service chains can be created by manual
   configuration of routes and route targets in routing systems, or
   through the use of a controller which contains a topological model
   of the desired service chains.

   This document also contains discussion of load balancing between
   network functions, symmetric forward and reverse paths when
   stateful services are involved, and use of classifiers to direct
   traffic into a service chain.

Status of this Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF), its areas, and its working groups. Note that
   other groups may also distribute working documents as
   Internet-Drafts.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other
   documents at any time. It is inappropriate to use Internet-Drafts
   as reference material or to cite them other than as "work in
   progress."

   The list of current Internet-Drafts can be accessed at
   http://www.ietf.org/ietf/1id-abstracts.txt

   The list of Internet-Draft Shadow Directories can be accessed at
   http://www.ietf.org/shadow.html

   This Internet-Draft will expire on June 7, 2016.

Copyright Notice and License Notice

   Copyright (c) 2015 IETF Trust and the persons identified as the
   document authors. All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document. Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document. Code Components extracted from this
   document must include Simplified BSD License text as described in
   Section 4.e of the Trust Legal Provisions and are provided without
   warranty as described in the Simplified BSD License.
Table of Contents

   1 Introduction ...................................................4
      1.1 Terminology................................................5
   2 Service Function Chain Architecture Using Virtual Networking ...8
      2.1 High Level Architecture....................................9
      2.2 Service Function Chain Logical Model......................10
      2.3 Service Function Implemented in a Set of SF Instances.....11
      2.4 SF Instance Connections to VRFs...........................13
         2.4.1 SF Instance in Physical Appliance....................13
         2.4.2 SF Instance in a Virtualized Environment.............14
      2.5 Encapsulation Tunneling for Transport.....................15
      2.6 SFC Creation Procedure....................................15
         2.6.1 SFC Provisioning Using Sequential VPNs...............16
         2.6.2 Modified-Route SFC Creation..........................18
         2.6.3 Common SFC Provisioning Considerations...............19
      2.7 Controller Function.......................................20
      2.8 Variations on Setting Prefixes in an SFC..................21
         2.8.1 Using a Default Route................................21
         2.8.2 Using a Default Route and a Large Prefix.............21
         2.8.3 Disaggregated Gateway Routers........................22
         2.8.4 Optimizing VRF usage.................................23
         2.8.5 Dynamic Entry and Exit Signaling.....................23
         2.8.6 Dynamic Re-Advertisements in Intermediate Systems....24
      2.9 Layer-2 Virtual Networks and Service Functions............24
      2.10 Header Transforming Service Functions....................25
   3 Load Balancing Along a Service Function Chain .................25
      3.1 SF Instances Connected to Separate VRFs...................25
      3.2 SF Instances Connected to the Same VRF....................26
      3.3 Combination of Egress and Ingress VRF Load Balancing......27
      3.4 Forward and Reverse Flow Load Balancing...................29
         3.4.1 Issues with Equal Cost Multi-Path Routing............29
         3.4.2 Modified ECMP with Consistent Hash...................29
         3.4.3 ECMP with Flow Table.................................30
         3.4.4 Dealing with different hash algorithms in an SFC.....32
   4 Steering into SFCs Using a Classifier .........................32
   5 External Domain Co-ordination .................................34
   6 Fine-grained steering using BGP Flow-Spec .....................35
   7 Controller Federation .........................................35
   8 Coordination Between SF Instances and Controller using BGP ....35
   9 BGP Attributes ................................................36
   10 Summary and Conclusion........................................38
   11 Security Considerations.......................................38
   12 IANA Considerations...........................................38
   13 References....................................................38
      13.1 Normative References.....................................38
      13.2 Informative References...................................38
   14 Acknowledgments...............................................40

1 Introduction

   The purpose of networks is to allow computing systems to
   communicate with each other. Requests are usually made from the
   client or customer side of a network, and responses are generated
   by applications residing in a datacenter.
   Over time, the network between the client and the application has
   become more complex, and traffic between the client and the
   application is acted on by intermediate systems that apply network
   services. Some of these activities, like firewall filtering,
   subscriber attachment and network address translation, are
   generally carried out in network devices along the traffic path,
   while others are carried out by dedicated appliances, such as
   media proxy and deep packet inspection (DPI). Deployment of these
   in-network services is complex, time-consuming and costly, since
   they require configuration of devices with vendor-specific
   operating systems, sometimes with co-processing cards, or
   deployment of physical devices in the network, which requires
   cabling and configuration of the devices that they connect to.
   Additionally, other devices in the network need to be configured
   to ensure that traffic is correctly steered through the systems
   that services are running on.

   The current mode of operations does not easily allow common
   operational processes to be applied to the lifecycle of services
   in the network, or for steering of traffic through them.

   The recent emergence of Network Functions Virtualization (NFV)
   [NFVE2E] to provide a standard deployment model for network
   services as software appliances, combined with Software Defined
   Networking (SDN) for more dynamic traffic steering, can provide
   foundational elements that will allow network services to be
   deployed and managed far more efficiently and with more agility
   than is possible today.

   This document describes how the combination of several existing
   technologies can be used to create chains of functions, while
   preserving the requirements of scale, performance and reliability
   for service provider networks. The technologies employed are:

   o Traffic flow between service functions described by routing and
     network policies rather than by static physical or logical
     connectivity

   o Packet header encapsulation in order to create virtual private
     networks using network overlays

   o VRFs on both physical devices and in hypervisors to implement
     forwarding policies that are specific to each virtual network

   o Optional use of a controller to calculate routes to be installed
     in routing systems to form a service chain. The controller uses
     a topological model that stores service function instance
     connectivity to network devices and intended connectivity
     between service functions.

   o MPLS or other labeling to facilitate identification of the next
     interface to send packets to in a service function chain

   o BGP or BGP-style signaling to distribute routes in order to
     create service function chains

   o Distributed load balancing between service functions performed
     in the VRFs that service function instances connect to.

   Virtualized environments can be supported without necessarily
   running BGP or MPLS natively. Messaging protocols such as NC/YANG,
   XMPP or OpenFlow may be used to signal forwarding information.
   Encapsulation mechanisms such as VXLAN or GRE may be used for
   overlay transport. The term 'BGP-style', above, refers to this
   type of signaling.
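   The route information carried by such signaling is essentially the
   same regardless of the protocol used. As an informal illustration,
   the tuple of information distributed for each virtual network
   route can be sketched as follows. This is a non-normative sketch
   in Python; all field names and values are illustrative
   assumptions, not part of any protocol specification.

      # Sketch of the information carried in a BGP-style VPN route
      # used for service chaining; field names are illustrative.
      from dataclasses import dataclass, field
      from typing import List

      @dataclass
      class VpnRoute:
          rd: str             # route distinguisher, e.g. "64512:101"
          prefix: str         # destination network prefix
          next_hop: str       # address of the routing system
          label: int          # identifies the SF instance interface
          route_targets: List[str] = field(default_factory=list)
          encap: str = "MPLSoGRE"   # or "VXLAN", "MPLSoUDP", ...

      # A route that steers Network-B traffic into the ingress
      # interface of an SF instance attached to routing system R-1
      # (all values assumed for the example):
      route = VpnRoute(rd="64512:101", prefix="10.2.0.0/16",
                       next_hop="192.0.2.1", label=299808,
                       route_targets=["target:64512:1001"])

   The label allows the receiving routing system to forward traffic
   directly out of the interface that connects to the intended SF
   instance, while the route targets control which VRFs import the
   route.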
   Traffic can be directed into service function chains using IP
   routing at each end of the service function chain, or be directed
   into the chain by a classifier function that can determine which
   service chain a traffic flow should pass through based on deep
   packet inspection (DPI) and/or subscriber identity.

   The techniques can support an evolution from services implemented
   in physical devices attached to physical forwarding systems
   (routers) to fully virtualized implementations as well as
   intermediate hybrid implementations.

1.1 Terminology

   This document uses the following acronyms and terms.

   Terms      Meaning
   -----      -----------------------------------------------
   AS         Autonomous System
   ASBR       Autonomous System Border Router
   CE         Customer Edge
   FW         Firewall
   I2RS       Interface to the Routing System
   L3VPN      Layer 3 VPN
   LB         Load Balancer
   NLRI       Network Layer Reachability Information [RFC4271]
   P          Provider backbone router
   proxy-arp  proxy-Address Resolution Protocol
   RR         Route Reflector
   RT         Route Target
   SDN        Software Defined Network
   vCE        virtual Customer Edge router
              [I-D.fang-l3vpn-virtual-ce]
   vFW        virtual Firewall
   vLB        virtual Load Balancer
   VM         Virtual Machine
   vPC        virtual Private Cloud
   vPE        virtual Provider Edge router
              [I-D.fang-l3vpn-virtual-pe]
   VPN        Virtual Private Network
   VRF        VPN Routing and Forwarding table [RFC4364]
   vRR        virtual Route Reflector

   This document follows some of the terminology used in [sfc-arch]
   and adds some new terminology:

   Network Service: An externally visible service offered by a
      network operator; a service may consist of a single service
      function or a composite built from several service functions
      executed in one or more pre-determined sequences and delivered
      by software executing in physical or virtual devices.

   Classification: Customer/network/service policy used to identify
      and select traffic flow(s) requiring certain outbound
      forwarding actions, in particular, to direct specific traffic
      flows into the ingress of a particular service function chain,
      or to cause branching within a service function chain.

   Virtual Network: A logical overlay network built using virtual
      links or packet encapsulation, over an existing network (the
      underlay).

   Service Function Chain (SFC): A service function chain defines an
      ordered set of service functions that must be applied to
      packets and/or frames selected as a result of classification.
      An SFC may be either a linear chain or a complex service graph
      with multiple branches. The term "Service Chain" is often used
      in place of "Service Function Chain".

   SFC Set: The pair of SFCs through which the forward and reverse
      directions of a given classified flow will pass.

   Service Function (SF): A logical function that is applied to
      packets. A service function can act at the network layer or
      other OSI layers. A service function can be embedded in one or
      more physical network elements, or can be implemented in one or
      more software instances running on physical or virtual hosts.
      One or multiple service functions can be embedded in the same
      network element or run on the same host. Multiple instances of
      a service function can be enabled in the same administrative
      domain. For simplicity, we will also refer to a "Service
      Function" simply as a "Service".
      A non-exhaustive list of services includes: firewalls, DDoS
      protection, anti-malware/anti-virus systems, WAN and
      application acceleration, Deep Packet Inspection (DPI), server
      load balancers, network address translation, HTTP Header
      Enrichment functions, video optimization, TCP optimization,
      etc.

   SF Instance: An instance of software that implements the packet
      processing of a service function.

   SF Instance Set: A group of SF instances that, in parallel,
      implement a service function in an SFC.

   Routing System: A hardware or software system that performs layer
      3 routing and/or forwarding functions. The term includes
      physical routers as well as hypervisor or Host OS
      implementations of the forwarding plane of a conventional
      router.

   Gateway: A routing system attached to the source or destination
      network that peers with the controller, or with the routing
      system at one end of an SFC. A source network gateway directs
      traffic from the source network into an SFC, while a
      destination network gateway distributes traffic towards
      destinations. The routing systems at each end of an SFC can
      themselves act as gateways, and in a bidirectional SF instance
      set, gateways can act in both directions.

   VRF: A subsystem within a routing system as defined in [RFC4364]
      that contains private routing and forwarding tables and has
      physical and/or logical interfaces associated with it. In the
      case of hypervisor/Host OS implementations, the term refers
      only to the forwarding function of a VRF, and this will be
      referred to as a "VPN forwarder".

   Ingress VRF: A VRF containing an ingress interface of a SF
      instance.

   Egress VRF: A VRF containing an egress interface of a SF
      instance.

   Note that in this document the terms "ingress" and "egress" are
   used with respect to SF instances rather than the tunnels that
   connect SF instances. This differs from the general usage in the
   VPN literature.

   Entry VRF: A VRF through which traffic enters the SFC from the
      source network. This VRF may be used to advertise the
      destination network's routes to the source network. It could be
      placed on a gateway router or be collocated with the first
      ingress VRF.

   Exit VRF: A VRF through which traffic exits the SFC into the
      destination network. This VRF contains the routes from the
      destination network and could be located on a gateway router.
      Alternatively, the egress VRF attached to the last SF instance
      may also function as the exit VRF.

2 Service Function Chain Architecture Using Virtual Networking

   The techniques described in this document use virtual networks to
   implement service function chains. Service function chains can be
   implemented on devices that support existing MPLS VPN and BGP
   standards [RFC4364, RFC4271, RFC4760], as well as other
   encapsulations, such as VXLAN [RFC7348]. Similarly, equivalent
   control plane protocols such as BGP-EVPN with type-2 and type-5
   route types can also be used where supported. The set of
   techniques described in this document represents one
   implementation approach to realize the SFC architecture described
   in [sfc-arch].

   The following sections detail the building blocks of the SFC
   architecture, and outline the processes of route installation and
   subsequent route exchange to create an SFC.

2.1 High Level Architecture

   Service function chains can be deployed with or without a
   classifier.
   Use cases where SFCs may be deployed without a classifier include
   multi-tenant data centers, private and public cloud and virtual
   CPE for business services. Classifiers will primarily be used in
   mobile and wireline subscriber edge use cases. Use of a classifier
   is discussed in Section 4.

   A high-level architecture diagram of an SFC without a classifier,
   where traffic is routed into and out of the SFC, is shown in
   Figure 1, below. An optional controller is shown that contains a
   topological model of the SFC and which configures the network
   resources to implement the SFC.

                           +-------------------------+
                           |--- Data plane connection|
                           |=== Encapsulation tunnel |
                           | O  VRF                  |
                           +-------------------------+

 Control  +------------------------------------------------+
 Plane    |                   Controller                   |
 .......  +-+------------+----------+----------+---------+-+
            |            |          |          |         |
 Service    |   +---+    |  +---+   |  +---+   |         |
 Plane      |   |SF1|    |  |SF2|   |  |SF3|   |         |
            |   +---+    |  +---+   |  +---+   |         |
 .......   /     | |    /    | |   /    | |   /          /
   +-----+     +--|-|--+   +--|-|--+  +--|-|--+      +-----+
   |     |     |  | |  |   |  | |  |  |  | |  |      |     |
Net-A-->O======O     O=====O     O====O     O========O---->Net-B
   |     |     |       |   |       |  |       |      |     |
Data     |     |       |   |       |  |       |      |     |
Plane| R-A |   |  R-1  |   |  R-2  |  |  R-3  |      | R-B |
     +-----+   +-------+   +-------+  +-------+      +-----+

        ^         ^     ^                                ^
        |         |     |                                |
        |      Ingress Egress                            |
        |        VRF    VRF                              |
    SFC Entry                                        SFC Exit
       VRF                                              VRF

          Figure 1 - High level SFC Architecture

   Traffic from Network-A destined for Network-B will pass through
   the SFC composed of SF instances, SF1, SF2 and SF3. Routing system
   R-A contains a VRF (shown as an "O" symbol) that is the SFC entry
   point. This VRF will advertise a route to reach Network-B into
   Network-A, causing any traffic from a source in Network-A with a
   destination in Network-B to arrive in this VRF. The forwarding
   table in the VRF in R-A will direct traffic destined for Network-B
   into an encapsulation tunnel with destination R-1 and a label that
   identifies the ingress (left) interface of SF1 that R-1 should
   send the packets out on. The packets are processed by service
   instance SF-1 and arrive in the egress (right) VRF in R-1. The
   forwarding entries in the egress VRF direct traffic to the next
   ingress VRF using encapsulation tunneling. The process is repeated
   for each service instance in the SFC until packets arrive at the
   SFC exit VRF (in R-B). This VRF is peered with Network-B and
   routes packets towards their destinations in the user data plane.
   In this example, routing systems R-A and R-B are gateway routing
   systems.

   In the example, each pair of ingress and egress VRFs is configured
   in a separate routing system, but such pairs could be collocated
   in the same routing system, and it is possible for the ingress and
   egress VRFs for a given SF instance to be in different routing
   systems. The SFC entry and exit VRFs can be collocated in the same
   routing system, and the service instances can be local or remote
   from either or both of the routing systems containing the entry
   and exit VRFs, and from each other. It is also possible that the
   ingress and egress VRFs are implemented using alternative
   mechanisms.

   The controller is responsible for configuring the VRFs in each
   routing system, installing the routes in each of the VRFs to
   implement the SFC, and, in the case of virtualized services, may
   instantiate the service instances.
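   The topological model held by the controller can be quite simple.
   The sketch below shows one possible representation of the Figure 1
   topology; it is illustrative Python only, and the structure and
   names are assumptions rather than a data model defined by this
   document.

      # Illustrative controller-side model of the SFC in Figure 1.
      sfc_model = {
          "name": "chain-a-to-b",
          "source_network": "Net-A",
          "destination_network": "Net-B",
          "entry": {"routing_system": "R-A", "vrf": "entry-vrf"},
          "exit": {"routing_system": "R-B", "vrf": "exit-vrf"},
          "service_functions": [
              {"name": "SF1",
               "instances": [{"routing_system": "R-1",
                              "ingress_vrf": "sf1-in",
                              "egress_vrf": "sf1-out"}]},
              {"name": "SF2",
               "instances": [{"routing_system": "R-2",
                              "ingress_vrf": "sf2-in",
                              "egress_vrf": "sf2-out"}]},
              {"name": "SF3",
               "instances": [{"routing_system": "R-3",
                              "ingress_vrf": "sf3-in",
                              "egress_vrf": "sf3-out"}]},
          ],
      }

   From a model like this, the controller can derive the VRFs to
   create in each routing system, the interfaces to place in each
   VRF, and the routes and route targets needed to stitch the chain
   together, as described in Section 2.6.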
2.2 Service Function Chain Logical Model

   A service function chain is a set of logically connected service
   functions through which traffic can flow. Each egress interface of
   one service function is logically connected to an ingress
   interface of the next service function.

               +------+   +------+   +------+
   Network-A-->| SF-1 |-->| SF-2 |-->| SF-3 |-->Network-B
               +------+   +------+   +------+

          Figure 2 - A Chain of Service Functions

   In Figure 2, above, a service function chain has been created that
   connects Network-A to Network-B, such that traffic from a host in
   Network-A to a host in Network-B will traverse the service
   function chain.

   As defined in [sfc-arch], a service function chain can be
   uni-directional or bi-directional. In this document, in order to
   allow for the possibility that the forward and reverse paths may
   not be symmetrical, SFCs are defined as uni-directional, and the
   term "SFC set" is used to refer to a pair of forward and reverse
   direction SFCs for some set of routed or classified traffic.

2.3 Service Function Implemented in a Set of SF Instances

   A service function instance is a software system that acts on
   packets that arrive on an ingress interface of that software
   system. Service function instances may run on a physical appliance
   or in a virtual machine. A service function instance may be
   transparent at layer 2 and/or layer 3, and may support branching
   across multiple egress interfaces as well as aggregation across
   ingress interfaces. For simplicity, the examples in this document
   have a single ingress and a single egress interface.

   Each service function in a chain can be implemented by a single
   service function instance, or by a set of instances in order to
   provide scale and resilience.

+------------------------------------------------------------------+
|          Logical Service Functions Connected in a Chain          |
|                                                                  |
|           +--------+            +--------+                       |
|  Net-A--->|  SF-1  |----------->|  SF-2  |--->Net-B              |
|           +--------+            +--------+                       |
|                                                                  |
+------------------------------------------------------------------+
|     Service Function Instances Connected by Virtual Networks     |
|                                                                  |
|   ......                 ......                                  |
|   :    :    +------+     :    :                                  |
|   :    :--->|SFI-11|---->:    :                  ......          |
|   :    :    +------+     :    :     +------+     :    :          |
|   :    :                 :    :--->|SFI-21|---->:    :           |
|   :    :    +------+     :    :     +------+     :    :          |
|A->:VN-1:--->|SFI-12|---->:VN-2:                 :VN-3:-->B       |
|   :    :    +------+     :    :     +------+     :    :          |
|   :    :                 :    :--->|SFI-22|---->:    :           |
|   :    :    +------+     :    :     +------+     :    :          |
|   :    :--->|SFI-13|---->:    :                  ''''''          |
|   :    :    +------+     :    :                                  |
|   ''''''                 ''''''                                  |
+------------------------------------------------------------------+

    Figure 3 - Service Functions Are Composed of SF Instances
               Connected Via Virtual Networks

   In Figure 3, service function SF-1 is implemented in three service
   function instances, SFI-11, SFI-12, and SFI-13. Service function
   SF-2 is implemented in two SF instances. The service function
   instances are connected to the next service function in the chain
   using a virtual network, VN-2. Additionally, a virtual network
   (VN-1) is used to enter the SFC and another (VN-3) is used at the
   exit.

   The logical connection between two service functions is
   implemented using a virtual network that contains egress
   interfaces for instances of one service function, and ingress
   interfaces of instances of the next service function.
   Traffic is directed across the virtual network between the two
   sets of service function instances using layer 3 forwarding (e.g.
   an MPLS VPN) or layer 2 forwarding (e.g. a VXLAN).

   The virtual networks could be described as "directed half-mesh",
   in that the egress interface of each SF instance of one service
   function can reach any ingress interface of the SF instances of
   the connected service function.

   Details on how routing across virtual networks is achieved, and
   requirements on load balancing across ingress interfaces are
   discussed in later sections of this document.

2.4 SF Instance Connections to VRFs

   SF instances can be deployed as software running on physical
   appliances, or in virtual machines running on a hypervisor. These
   two types are described in more detail in the following sections.

2.4.1 SF Instance in Physical Appliance

   The case of a SF instance running on a physical appliance is shown
   in Figure 4, below.

      +---------------------------------+
      |                                 |
      | +-----------------------------+ |
      | | Service Function Instance   | |
      | +-------^-------------|-------+ |
      |         |    Host     |         |
      +---------|-------------|---------+
                |             |
        +-------|-------------|-------+
        |       |             |       |
        |  +----|----+  +-----v----+  |
   -----+  Ingress   |  |  Egress  +-----
   ----->    VRF     |  |   VRF    ----->
   -----+            |  |          +-----
        |  +---------+  +----------+  |
        |        Routing System       |
        +-----------------------------+

    Figure 4 - Ingress and Egress VRFs for a Physical Routing System
               and Physical SF Instance

   The routing system is a physical device and the service function
   instance is implemented as software running in a physical
   appliance (host) connected to it. The connection between the
   physical device and the routing system may use physical or logical
   interfaces. Transport between VRFs on different routing systems
   that are connected to other SF instances in an SFC is via
   encapsulation tunnels, such as MPLS over GRE, or VXLAN.

2.4.2 SF Instance in a Virtualized Environment

   In virtualized environments, a routing system with VRFs that act
   as VPN forwarders is resident in the hypervisor/Host OS, and is
   co-resident in the host with one or more SF instances that run in
   virtual machines. The egress VPN forwarder performs tunnel
   encapsulation to send packets to other physical or virtual routing
   systems with attached SF instances to form an SFC. The tunneled
   packets are sent through the physical interfaces of the host to
   the other hosts or physical routers. This is illustrated in Figure
   5, below.
   +-------------------------------------+
   |   +-----------------------------+   |
   |   | Service Function Instance   |   |
   |   +-------^-------------|-------+   |
   |           |             |           |
   | +---------|-------------|---------+ |
   | | +-------|-------------|-------+ | |
   | | |       |             |       | | |
   | | |  +----|----+  +-----v----+  | | |
   -----+  Ingress   |  |  Egress  +-----
   ----->    VRF     |  |   VRF    ----->
   -----+            |  |          +-----
   | | |  +---------+  +----------+  | | |
   | | |       Routing System        | | |
   | | +-----------------------------+ | |
   | |      Hypervisor or Host OS      | |
   | +---------------------------------+ |
   |                Host                 |
   +-------------------------------------+

    Figure 5 - Ingress and Egress VRFs for a Virtual Routing System
               and Virtualized SF Instance

   When more than one instance of an SF is running on a hypervisor,
   they can be connected to the same VRF for scale out of an SF
   within an SFC.

   The routing mechanisms in the VRFs into and between service
   function instances, and the encapsulation tunneling between
   routing systems are identical in the physical and virtual
   implementations of SFCs and routing systems described in this
   document. Physical and virtual service functions can be mixed as
   needed with different combinations of physical and virtual routing
   systems, within a single service chain.

   The SF instances are attached to the routing systems via physical,
   virtual or logical (e.g., 802.1q) interfaces, and are assumed to
   perform basic L3 or L2 forwarding.

   A single SF instance can be part of multiple service chains. In
   this case, the SF instance will have dedicated interfaces
   (typically logical) and forwarding contexts associated with each
   service chain.

2.5 Encapsulation Tunneling for Transport

   Encapsulation tunneling is used to transport packets between SF
   instances in the chain and, when a classifier is not used, from
   the originating network into the SFC and from the SFC into the
   destination network.

   The tunnels can be MPLS over GRE [RFC4023], MPLS over UDP
   [draft-ietf-mpls-in-udp], MPLS over MPLS [RFC3031], VXLAN
   [RFC7348], or other suitable encapsulation methods.

   Tunneling capabilities may be enabled in each routing system as
   part of a base configuration or may be configured by the
   controller. Tunnel encapsulations may be programmed by the
   controller or signaled using BGP. The encapsulation to be used for
   a given route is signaled in BGP using the procedures described in
   [draft-rosen-idr-tunnel-encaps], i.e., typically relying on the
   BGP Tunnel Encapsulation Extended Community.

2.6 SFC Creation Procedure

   This section describes how service chains are created using two
   methods:

   o Sequential VPNs - where a conventional VPN is created between
     each set of SF instances to create the links in the SFC

   o Route Modification - where each routing system modifies
     advertised routes that it receives, to realize the links in an
     SFC on the basis of a special service topology RT and a
     route-policy that describes the service chain logical topology

   In both cases, the controller, when present, is responsible for
   creating ingress and egress VRFs, configuring the interfaces
   connected to SF instances in each VRF, and allocating and
   configuring import and export RTs for each VRF.
   Additionally, in the second method, the controller also sends the
   route-policy containing the service chain logical topology to each
   routing system. If a controller is not used, these procedures will
   need to be performed manually or through scripting, for instance.

   The source and destination networks' prefixes can be configured in
   the controller, or may be automatically learned through peering
   between the controller and each network's gateway. This is further
   described in Section 2.8.5 and Section 5.

   The following sub-sections describe how RT configuration, local
   route installation and route distribution occur in each of the
   methods.

   It should be noted that depending on the capabilities of the
   routing systems, a controller can use one or more techniques to
   realize forwarding along the service chain, ranging from fully
   centralized to fully distributed. The goal of describing the
   following two methods is to illustrate the broad approaches and to
   provide a basis for various optimization options.

   Interoperability between a controller implementing one method and
   a controller implementing a different method is achieved by
   relying on the techniques described in Section 5 and Section 8,
   which describe the use of BGP-style service chaining within
   domains that are interconnected using standard BGP VPN route
   exchanges.

2.6.1 SFC Provisioning Using Sequential VPNs

   The task of the controller in this method of SFC provisioning is
   to create a set of VPNs that carry traffic to the destination
   network through instances of each service function in turn. This
   is achieved by allocating and configuring RTs such that the egress
   VRFs of one set of SF instances import an RT that is an export RT
   for the ingress VRFs of the next, logically connected, set of SF
   instances.

   The process of SFC creation is as follows:

   1. Controller creates a VRF in each routing system that is
      connected to a service instance that will be used in the SFC.

   2. Controller configures each VRF to contain the logical interface
      that connects to a SF instance.

   3. Controller implements route target import and export policies
      in the VRFs using the same route targets for the egress VRFs of
      a service function and the ingress VRFs of the next logically
      connected service function in the SFC.

   4. Controller installs a static route in each ingress VRF whose
      next hop is the interface that a SF instance is connected to.
      The prefix for the route is the destination network to be
      reached by passing through the SFC. The following sections
      describe variations that can be used.

   5. Routing systems advertise the static routes via BGP as VPN
      routes with next hop being the IP address of the router, with
      an encapsulation specified and a label that identifies the
      service instance interface.

   6. Routing systems containing VRFs with matching route targets
      receive the updates.

   7. Routes are installed in egress VRFs with matching import
      targets. The egress VRFs of each SF instance will now contain
      VPN routes to one or more routers containing ingress VRFs for
      SF instances of the next service function in the SFC.

   Routes to the destination network via the first set of SF
   instances are advertised into the source network, and the egress
   VRFs of the last SF instance set have routes into the destination
   network.
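   The RT allocation rule in step 3 can be summarized in a short
   illustrative sketch. This is non-normative Python; the AS number,
   RT values and function name are assumptions made for the example.

      # Sketch of RT allocation for an SFC built from sequential
      # VPNs: one RT per logical link in the chain.
      def allocate_link_rts(asn, num_service_functions):
          """Return one RT per link: entry->SF1, SF1->SF2, ...,
          last SF->exit (values are illustrative)."""
          return ["target:%d:%d" % (asn, 1000 + i)
                  for i in range(num_service_functions + 1)]

      rts = allocate_link_rts(64512, 3)
      # rts[0]: exported by the ingress VRFs of SF1 (step 5) and
      #         imported by the SFC entry VRF.
      # rts[1]: exported by the ingress VRFs of SF2 and imported by
      #         the egress VRFs of SF1, and so on along the chain.
      # rts[3]: exported by the exit VRF toward the destination
      #         network and imported by the egress VRFs of the last
      #         SF.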
   As discussed further in Section 3, egress VRFs can load balance
   across the multiple next hops advertised from the next set of
   ingress VRFs.

2.6.2 Modified-Route SFC Creation

   In this method of SFC configuration, all the VRFs connected to SF
   instances for a given SFC are configured with the same import and
   export RT, so they form a VPN-connected mesh between the SF
   instance interfaces. This is termed the "Service VPN". A route is
   configured or learnt in each VRF with the destination being the IP
   address of a connected SF instance, via an interface configured in
   the VRF. The interface may be a physical or logical interface. The
   routing system that hosts such a VRF advertises a VPN route for
   each locally connected SF instance, with a forwarding label that
   enables it to forward incoming traffic from other routing systems
   to the connected SF instance. The VPN routes may be advertised via
   an RR or the controller, which sends these updates to all the
   other routing systems that have VRFs with the service VPN RT. At
   this point all the VRFs have a route to reach every SF instance.
   The same virtual IP address may be used for each SF instance in a
   set, enabling load-balancing among multiple SF instances in the
   set.

   The controller builds a route-policy for the routing systems in
   the VPN that describes the logical topology of each service chain
   they belong to. The route-policy contains entries in the form of a
   tuple for each service chain:

      {Service-topology-name, Service-topology-RT,
       Service-node-sequence}

   where Service-node-sequence is simply an ordered list of the
   service function interface IP addresses that are in the chain.

   Every service function chain has a single unique
   Service-topology-RT that is allocated and provisioned on all
   participating routing systems in the relevant VRFs.

   The VRF in the routing system that connects to the destination
   network (i.e., the exit VRF) is configured to attach the
   Service-topology-RT to exported routes, and the VRF connected to
   the source network (i.e., the entry VRF) will import routes using
   the Service-topology-RT. The controller may also be used to
   originate the routes with the Service-topology-RT attached.

   The route-policy may be described in a variety of formats and
   installed on the routing system using a suitable mechanism. For
   instance, the policy may be defined in YANG and provisioned using
   Netconf.

   Using Figure 1 for reference, when the gateway R-B advertises a
   VPN route to Network-B, it attaches the Service-topology-RT. BGP
   route updates are sent to all the routing systems in the service
   VPN. The routing systems perform a modified set of actions for
   next-hop resolution and route installation in the ingress VRFs
   compared to normal BGP VPN behavior in routing systems, but no
   changes are required in the operation of the BGP protocol itself.
   The modification of behavior in the routing systems allows the
   automatic and constrained flow of traffic through the service
   chain.

   Each routing system in the service VPN will process the VPN route
   to Network-B via R-B as follows:

   1. If the routing system contains VRFs that import the
      Service-topology-RT, continue, otherwise ignore the route.
   2. The routing system identifies the position and role
      (ingress/egress) of each of its VRFs in the SFC by comparing
      the IP address of the route from each VRF to its connected SF
      instance with the addresses in the Service-node-sequence in the
      route-policy. Alternatively, the controller may provision the
      specific service node IP to be used as the next-hop in each
      VRF, in the route-policy for the VRF.

   3. The routing system modifies the next-hop of the imported route
      with the Service-topology-RT, to select the appropriate
      next-hop as per the route-policy. It ignores the next-hop and
      label in the received route. It resolves the selected next-hop
      in the local VRF routing table.

      a. The imported route to Network-B in the ingress VRF is
         modified to have a next-hop of the IP address of the
         logically connected SF instance.

      b. The imported route to Network-B in the egress VRF is
         modified to have a next hop of the IP address of the next SF
         instance in the SFC.

   4. The egress VRFs for the last service function install the VPN
      route via the gateway R-B unmodified.

   Note that the modified routes are not re-advertised into the VPN
   by the various intermediate routing systems in the SFC.

2.6.3 Common SFC Provisioning Considerations

   In both methods, for physical routers, the creation and
   configuration of VRFs, interfaces and local static routes can be
   performed programmatically using Netconf, and BGP route
   distribution can use a route reflector (which may be part of the
   controller). In the virtualized case, where a VPN forwarder is
   present, creation and configuration of VRFs, interfaces and
   installation of routes may instead be performed using a single
   protocol like XMPP, NC/YANG or an equivalent programmatic
   interface.

   Also in the virtualized case, the actual forwarding table entries
   to be installed in the ingress and egress VRFs may be calculated
   by the controller based on its internal knowledge of the required
   SFC topology and the connectivity of SF instances to routing
   systems. In this case, the routes may be directly installed in the
   forwarders using the programmatic interface and no BGP route
   advertisement is necessary, except when coordination with external
   domains (Section 5) or federation between controller domains is
   employed (Section 7). Note however that this is just one typical
   model for a virtual forwarding based system. In general, physical
   and virtual routing systems can be treated exactly the same if
   they have the same capabilities.

   In both methods, the SF instance may also need to be set up
   appropriately to forward traffic between its input and output
   interfaces, either via static, dynamic or policy-based routing. If
   the service function is a transparent L2 service, then the static
   route installed in the ingress VRF will have a next-hop of the IP
   address of the routing system interface that the service instance
   is attached to on its other interface.

2.7 Controller Function

   The purpose of the controller is to manage instantiation of SFCs
   in networks and datacenters. When an SFC is to be instantiated, a
   model of the desired topology (service functions, number of
   instances, connectivity) is built in the controller either via an
   API or GUI. The controller then selects resources in the
   infrastructure that will support the SFC and configures them.
   This can involve instantiation of SF instances to implement each
   service function, the instantiation of VRFs that will form virtual
   networks between SF instances, and installation of routes to cause
   traffic to flow into and between SF instances. It can also include
   provisioning the necessary static, dynamic or policy-based
   forwarding on the service function instance to enable it to
   forward traffic.

   For simplicity, in this document, the controller is assumed to
   contain all the required features for management of SFCs. In
   actual implementations, these features may be distributed among
   multiple inter-connected systems. For example, an overarching
   orchestrator might manage the overall SFC model, sending
   instructions to a separate virtual machine manager to instantiate
   service function instances, and to a virtual network manager to
   set up the service chain connections between them.

   The controller can also perform necessary BGP signaling and route
   distribution actions as described throughout this document.

2.8 Variations on Setting Prefixes in an SFC

   The SFC Creation section above described the basic procedures for
   the two SFC creation methods. This section describes some
   techniques that can extend and provide optimizations on top of the
   basic procedures.

2.8.1 Using a Default Route

   In the methods described above, it can be noted that only the
   gateway routing systems need the specific network prefixes to
   steer traffic in and out of the SFC. The intermediate systems can
   direct traffic in the ingress and egress VRFs by using only a
   default route. Hence, it is possible to avoid installing the
   network prefixes in the intermediate systems. This can be done by
   splitting the SFC into two sections: one linking the entry and
   exit VRFs and the other including the intermediate systems. For
   instance, this may be achieved by using two different
   Service-topology-RTs in the second method.

2.8.2 Using a Default Route and a Large Prefix

   In the configuration methods described above, the network prefixes
   for each network (Network-A and Network-B in the example above)
   connected to the SFC are used in the routes that direct traffic
   through the SFC. This creates an operational linkage between the
   implementation of the SFC and the insertion of the SFC into a
   network.

   For instance, subscriber network prefixes will normally be
   segmented across subscriber attachment points such as broadband or
   mobile gateways. This means that each SFC would have to be
   configured with the subscriber network prefixes whose traffic it
   is handling.

   In a variation of the SFC configuration method described above,
   the prefixes used in each direction can be such that they include
   all possible addresses at each side of the SFC. For example, in
   Figure 1, the prefix for Network-A could include all subscriber IP
   addresses and the prefix for Network-B could be the default route,
   0/0.

   Using this technique, the same routes can be installed in all
   instances of an SFC that serve different groups of subscribers in
   different geographic locations.
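   As an informal illustration of this variation, the sketch below
   shows the forwarding state that could be installed along a
   bidirectional SFC set between a subscriber network and a
   destination network. The prefixes, including the use of
   100.64.0.0/10 as an aggregate covering all subscriber addresses,
   are assumptions made for the example.

      # Illustrative routes for an SFC set using a default route in
      # one direction and a single large prefix in the other.
      forward_sfc = {   # subscribers (Network-A) --> Network-B
          "ingress_vrf_sf1": ("0.0.0.0/0", "SF1 instance interface"),
          "egress_vrf_sf1":  ("0.0.0.0/0", "tunnel to SF2 ingress"),
          "ingress_vrf_sf2": ("0.0.0.0/0", "SF2 instance interface"),
          "egress_vrf_sf2":  ("0.0.0.0/0", "tunnel to exit VRF"),
      }
      reverse_sfc = {   # Network-B --> subscribers (Network-A)
          "ingress_vrf_sf2": ("100.64.0.0/10", "SF2 instance interface"),
          "egress_vrf_sf2":  ("100.64.0.0/10", "tunnel to SF1 ingress"),
          "ingress_vrf_sf1": ("100.64.0.0/10", "SF1 instance interface"),
          "egress_vrf_sf1":  ("100.64.0.0/10", "tunnel to entry VRF"),
      }

   Because these prefixes cover all possible sources and
   destinations, the same routes can be replicated verbatim in every
   instance of the SFC, regardless of which subscriber prefixes are
   attached at a given location.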
   The routes forwarding traffic into a SF instance and to the next
   SF instance are installed when an SFC is initially built, and each
   time a SF instance is connected into the SFC, but there is no
   requirement for VRFs to be reconfigured when traffic from
   different networks passes through the service chain, so long as
   their prefixes are included in the prefixes in the VRFs along the
   SFC.

   In this variation, it is assumed that no subscriber-originated
   traffic will enter the SFC destined for an IP address also in the
   subscriber network address range. This will not be a restriction
   in many cases.

2.8.3 Disaggregated Gateway Routers

   As a slight variation of the above, a network prefix may be
   disaggregated and spread out among various gateway routers, for
   instance, in the case of virtual machines in a data-center. In
   order to reduce the scaling requirements on the routing systems
   along the SFC, the SFC can again be split into two sections as
   described above. In addition, the last egress VRF may act as the
   exit VRF and install the destination network's disaggregated
   routes. If the destination network's prefixes can be aggregated,
   for instance into a subnet prefix, then the aggregate prefix may
   be advertised and installed in the entry VRF.

2.8.4 Optimizing VRF usage

   It may be desirable to avoid using distinct ingress and egress
   VRFs for the service instances in order to make more efficient use
   of VRF resources, especially on physical routing systems. The
   ingress VRF and egress VRF may be treated as conceptual entities
   and the forwarding realized using one or more options described in
   this section, combined with the methods described earlier.

   For instance, the next-hop forwarding label described earlier
   serves the purpose of directing traffic received from other
   routing systems directly towards an attached service instance. On
   the other hand, if the encapsulation mechanism or the device in
   use requires an IP lookup for incoming packets from other routing
   systems, then the specific network prefixes may be installed in
   the intermediate service VRFs to direct traffic towards the
   attached service instances.

   Similarly, a per-interface policy-based-routing rule applied to an
   access interface can serve to direct traffic coming in from
   attached service instances towards the next SF set.

2.8.5 Dynamic Entry and Exit Signaling

   When either of the methods of the previous sections is employed,
   the prefixes of the attached networks at each end of an SFC can be
   signaled into the corresponding VRFs dynamically. This requires
   that a BGP session is configured either from the network device at
   each end of the SFC into each network or from the controller.

   If dynamic signaling is performed, and a bidirectional SFC set is
   configured, and the gateways to the networks connected via the SFC
   exchange routes, steps must be taken to ensure that routes to both
   networks do not get advertised from both ends of the SFC set by
   re-origination. This can be achieved if a new BGP Extended
   Community is implemented to control re-origination. When a route
   is re-originated, the RTs of the re-originated routes are appended
   to the new RT-Record Extended Community, and if the RT for the
   route already exists in the Extended Community, the route is not
   re-originated (see Section 9.1).
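   The loop-avoidance behavior described above can be expressed
   compactly. The following sketch is illustrative Python, assuming a
   simple list representation of the RT-Record Extended Community; it
   is not a specification of the encoding in Section 9.1.

      # Sketch of re-origination control using the RT-Record
      # Extended Community.
      def may_reoriginate(route_rts, rt_record):
          """A route must not be re-originated if any of its RTs
          already appears in its RT-Record."""
          return not any(rt in rt_record for rt in route_rts)

      def updated_rt_record(route_rts, rt_record):
          """On re-origination, append the RTs of the route being
          re-originated to the RT-Record it carries."""
          return rt_record + [rt for rt in route_rts
                              if rt not in rt_record]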
2.8.6 Dynamic Re-Advertisements in Intermediate Systems

   The intermediate routing systems attached to the service instances
   may also use the dynamic signaling technique from the previous
   section to re-advertise received routes up the chain. In this
   case, the ingress and egress VRFs are combined into one, and a
   local route-policy ensures the re-advertised routes are associated
   with labels that direct incoming traffic directly to the attached
   service instances on that routing system.

2.9 Layer-2 Virtual Networks and Service Functions

   There are SFs that operate at layer-2, in a transparent mode, and
   forward traffic based on the MAC DA. When such a SF is present in
   the SFC, the procedures at the routing system are modified
   slightly. In this case, the IP address associated with the SF
   instance (and used as the next-hop of routes in the above
   procedures) is actually the one assigned to the routing system
   interface attached to the other end of the SF instance, or it
   could be a virtual IP address logically associated with the
   service function with a next-hop of the other routing system
   interface. The routing system interfaces use distinct interface
   MAC addresses. This allows the current scheme to be supported,
   while allowing the transparent service function to work using its
   existing behavior.

   A SFC may also be set up between end systems or network segments
   within the same Layer-2 bridged network. In this case, applying
   the procedures described earlier, the segments or groups of end
   systems are placed in distinct Layer-2 virtual networks, which are
   then inter-connected via a sequence of intermediate Layer-2
   virtual networks that form the links in the SFC. Each virtual
   network maps to a pair of ingress and egress MAC VRFs on the
   routing systems to which the SF instances are attached. The
   routing systems at the ends of the SFC will advertise the locally
   learnt or installed MAC entries using BGP-EVPN type-2 routes,
   which will get installed in the MAC VRFs at the other end. The
   intermediate systems may use default MAC routes installed in the
   ingress and egress MAC VRFs, or the other variations described
   earlier in this document.

2.10 Header Transforming Service Functions

   If a service function performs an action that changes the source
   address in the packet header (e.g., NAT), the routes that were
   installed as described above may not support reverse flow
   traffic.

   The solution to this is for the controller to modify the routes in
   the reverse direction to direct traffic into instances of the
   transforming service function. The original routes with a source
   prefix (Network-A in Figure 2) are replaced with a route that has
   a prefix that includes all the possible addresses that the source
   address could be mapped to. In the case of network address
   translation, this would correspond to the NAT pool.

3 Load Balancing Along a Service Function Chain

   One of the key concepts driving NFV [NFVE2E] is the idea that each
   service function along an SFC can be separately scaled by changing
   the number of service function instances that implement it. This
   requires that load balancing be performed before entry into each
   service function.
   In this architecture, load balancing is performed in either or
   both of the egress and ingress VRFs, depending on the type of load
   balancing being performed, and on whether more than one service
   instance is connected to the same ingress VRF.

3.1 SF Instances Connected to Separate VRFs

   If SF instances implementing a service in an SFC are each
   connected to separate VRFs (e.g., instances are connected to
   different routers or are running on different hosts), load
   balancing is performed in the egress VRFs of the previous service,
   or in the VRF that is the entry to the SFC. The controller
   distributes BGP multi-path routes to the egress VRFs. The
   destination prefix of each route is the ultimate destination
   network, or its representative aggregate or default. The next-hops
   in the ECMP set are BGP next-hops of the service instances
   attached to ingress VRFs of the next service in the SFC. The load
   balancing corresponds to BGP Multipath, which requires that the
   route distinguisher of each route is distinct in order to
   recognize that distinct paths should be used. Hence, each VRF in a
   distributed SFC environment should have a unique route
   distinguisher.

              +------+           +-------------------------+
        O----|SFI-11|---O        |--- Data plane connection|
       //     +------+    \\      |=== Encapsulation tunnel |
      //                   \\     | O  VRF                  |
     //                     \\    | *  Load balancer        |
    //        +------+       \\   +-------------------------+
 Net-A-->O*====O----|SFI-12|---O====O-->Net-B
    \\        +------+       //
     \\                     //
      \\                   //
       \\     +------+    //
        O----|SFI-13|---O
              +------+

    Figure 6 - Egress VRF Load Balancing across SF Instances
               Connected to Different VRFs

   In the diagram, above, a service function is implemented in three
   service instances, each connected to a separate VRF. Traffic from
   Network-A arrives at the VRF at the start of the SFC, and is load
   balanced across the service instances using a set of ECMP routes
   with next hops being the addresses of the routing systems
   containing the ingress VRFs and with labels that identify the
   ingress interfaces of the service instances.

3.2 SF Instances Connected to the Same VRF

   When SF instances implementing a service in an SFC are connected
   to the same ingress VRF, load balancing is performed in the
   ingress VRF across the service instances connected to it. The
   controller will install routes in the ingress VRF to the
   destination network with the interfaces connected to each service
   instance as next hops. The ingress VRF will then use ECMP to load
   balance across the service instances.

                  +------+       +-------------------------+
                  |SFI-11|       |--- Data plane connection|
                  +------+       |=== Encapsulation tunnel |
                 /        \      | O  VRF                  |
                /          \     | *  Load balancer        |
               /            \    +-------------------------+
              /   +------+   \
 Net-A-->O====O*----|SFI-12|----O====O-->Net-B
              \   +------+   /
               \            /
                \          /
                 \        /
                  +------+
                  |SFI-13|
                  +------+

    Figure 7 - Ingress VRF Load Balancing across SF Instances
               Connected to the Same VRF

   In the diagram, above, a service is implemented by three service
   instances that are connected to the same ingress and egress VRFs.
   The ingress VRF load balances across the ingress interfaces using
   ECMP, and the egress traffic is aggregated in the egress VRF.
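   The ECMP behavior described in the two preceding sections can be
   illustrated with a minimal sketch of per-flow next-hop selection.
   The hash function and the representation of next hops are
   implementation specific (see Section 3.4.1); everything below is
   an illustrative assumption, not a normative algorithm.

      # Minimal sketch of per-flow ECMP next-hop selection across
      # SF instance interfaces.
      import zlib

      def select_next_hop(flow, next_hops):
          """flow = (src_ip, dst_ip, proto, src_port, dst_port)."""
          key = "|".join(str(f) for f in flow).encode()
          return next_hops[zlib.crc32(key) % len(next_hops)]

      hops = ["SFI-11 interface", "SFI-12 interface",
              "SFI-13 interface"]
      nh = select_next_hop(("10.1.0.7", "10.2.3.4", 6, 5001, 80),
                           hops)

   Note that a hash of this kind is not symmetric between forward and
   reverse flows, and flows move between instances when the next-hop
   set changes; Section 3.4 discusses both issues and techniques to
   address them.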
3.2 SF Instances Connected to the Same VRF

When the SF instances implementing a service in an SFC are
connected to the same ingress VRF, load balancing is performed in
the ingress VRF across the service instances connected to it. The
controller installs routes in the ingress VRF to the destination
network, with the interfaces connected to each service instance as
next hops. The ingress VRF then uses ECMP to load balance across
the service instances.

                    +------+         +-------------------------+
                    |SFI-11|         |--- Data plane connection|
                    +------+         |=== Encapsulation tunnel |
                   /        \        | O   VRF                 |
                  /          \       | *   Load balancer       |
                 /            \      +-------------------------+
                /   +------+   \
 Net-A-->O====O*----|SFI-12|----O====O-->Net-B
                \   +------+   /
                 \            /
                  \          /
                   \        /
                    +------+
                    |SFI-13|
                    +------+

   Figure 7 - Ingress VRF Load Balancing across SF Instances
   Connected to the Same VRF

In the diagram above, a service is implemented by three service
instances that are connected to the same ingress and egress VRFs.
The ingress VRF load balances across the ingress interfaces using
ECMP, and the egress traffic is aggregated in the egress VRF.

If forwarding labels that identify each SFI ingress interface are
used, and if the routes to each SF instance are advertised with
different route distinguishers, then it is also possible to perform
ECMP load balancing at the routing instance at the beginning of the
encapsulation tunnel (which could be the egress VRF of the previous
SF in the SFC).

3.3 Combination of Egress and Ingress VRF Load Balancing

In Figure 8, below, an example SFC is shown in which load balancing
is performed in both the ingress and the egress VRFs.

                                     +-------------------------+
                                     |--- Data plane connection|
                    +------+         |=== Encapsulation tunnel |
                    |SFI-11|         | O   VRF                 |
                    +------+         | *   Load balancer       |
                   /        \        +-------------------------+
                  /          \
                 /  +------+  \           +------+
               O*---|SFI-12|---O*====O----|SFI-21|---O
             //     +------+    \\  //    +------+    \\
            //                   \\//                  \\
           //                     \\                    \\
          //                     //\\                    \\
         //         +------+    //  \\    +------+        \\
 Net-A-->O*====O----|SFI-13|---O*====O----|SFI-22|---O====O-->Net-B
                    +------+              +------+
         ^     ^               ^     ^               ^    ^
         |     |               |     |               |    |
         |  Ingress        Egress   |                |    |
         |                       Ingress          Egress  |
     SFC Entry                                        SFC Exit

   Figure 8 - Load Balancing across SF Instances

In Figure 8, an SFC is composed of two services implemented by
three service instances and two service instances, respectively.
The service instances SFI-11 and SFI-12 are connected to the same
ingress and egress VRFs, and all the other service instances are
connected to separate VRFs.

Traffic entering the SFC from Network-A is load balanced across the
ingress VRFs of the first service function by the chain entry VRF,
and then load balanced again across the ingress interfaces of
SFI-11 and SFI-12 by the shared ingress VRF. Note that the use of
standard ECMP will lead to an uneven distribution of traffic
between the three service instances (25% to SFI-11, 25% to SFI-12,
and 50% to SFI-13). This issue can be mitigated through the use of
the BGP link bandwidth extended community
[draft-ietf-idr-link-bandwidth]. As described in the previous
section, if a next-hop forwarding label is used, another way to
mitigate this effect is to advertise the routes to each SF instance
connected to a VRF with a different route distinguisher.

After traffic passes through the first set of service instances, it
is load balanced in each of the egress VRFs of the first set of
service instances across the ingress VRFs of the next set of
service instances.
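
As a non-normative illustration of the link bandwidth mitigation
discussed above, the Python sketch below weights the route via the
shared ingress VRF by 2, standing in for a link bandwidth value
advertised with that route. The weights, names, and hash are
hypothetical, not a router implementation.

   <CODE BEGINS>
   # Illustrative sketch only: weighted hashing as a stand-in for
   # the BGP link bandwidth mitigation discussed in Section 3.3.
   import hashlib

   def weighted_next_hop(five_tuple, weighted_paths):
       # Replicate each path in proportion to its weight, then
       # hash the flow onto the expanded bucket list.
       buckets = []
       for next_hop, weight in weighted_paths:
           buckets.extend([next_hop] * weight)
       digest = hashlib.sha256(repr(five_tuple).encode()).digest()
       idx = int.from_bytes(digest[:4], "big") % len(buckets)
       return buckets[idx]

   # The shared ingress VRF fronts two SF instances (SFI-11 and
   # SFI-12), so its route carries weight 2; each of the three SF
   # instances then receives roughly one third of the flows.
   paths = [("ingress-vrf-shared", 2), ("ingress-vrf-sfi13", 1)]
   print(weighted_next_hop(("10.0.0.1", "10.1.0.1", 6, 1234, 80),
                           paths))
   <CODE ENDS>
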
3.4 Forward and Reverse Flow Load Balancing

This section discusses requirements in load balancing for forward
and reverse paths when stateful service functions are deployed.

3.4.1 Issues with Equal Cost Multi-Path Routing

As discussed in the previous sections, load balancing in the
forward direction of the SFC in the above example can occur
automatically with standard BGP, if multiple equal-cost routes to
Network-B are installed into all the ingress VRFs and each route
directs traffic through a different service function instance in
the next set. The multiple BGP routes in the routing table
translate to Equal Cost Multi-Path (ECMP) entries in the forwarding
table. The hash used in the load balancing algorithm (per packet,
per flow, or per prefix) is implementation-specific.

If a service function is stateful, forward flows and reverse flows
must always pass through the same service function instance.
Standard ECMP does not provide this capability, since the hash
calculation sees different input data for the same flow in the
forward and reverse directions (the source and destination fields
are reversed).

Additionally, if the number of SF instances changes, either
increasing to expand capacity or decreasing (whether planned or due
to an SF instance failure), the ECMP hash table is recalculated.
Most flows will then be directed to a different SF instance, and
user sessions will be disrupted.

There are a number of ways to satisfy the requirements of symmetric
forward/reverse paths for flows and of minimal disruption when SF
instances are added to or removed from a set. Two techniques that
can be employed are described in the following sections.

3.4.2 Modified ECMP with Consistent Hash

Symmetric forwarding into each side of an SF instance set can be
achieved with a small modification to ECMP if the packet headers
are preserved after passing through the SF instance set, and
assuming that the same hash function, the same hash salt, and the
same ordering association of hash buckets to ECMP routes are used
in both directions. Each packet's 5-tuple is used to calculate the
hash bucket, and therefore the service instance, to which the
packet will be sent, but the source and destination IP address and
port information are swapped in the calculation in the reverse
direction. This method only requires that the list of available
service function instances be consistently maintained in load
balancing tables in all the routing systems, rather than
maintaining flow tables. This requirement can be met by the use of
a distinct VPN route for each instance.

In the SFC architecture described in this document, when SF
instances are added or removed, the controller is required to
install (or remove) routes to the SF instances. The controller
could configure the load balancing function in the VRFs that
connect to each added (or removed) SF instance as part of the same
network transaction as the route updates, to ensure that the load
balancer configuration is synchronized with the set of SF
instances.

The consistent ordering among ECMP routes in the routing systems
could be achieved through configuration of the routing systems by
the controller using, for instance, NETCONF; or, when the routes
are signaled using BGP by the controller or a routing system, the
order for a given instance can be sent in a new "Consistent Hash
Sort Order" BGP Extended Community (defined in Section 9.2).

The effect of rehashing when SF instances are added or removed can
be minimized, or even eliminated, using variations of the
consistent hashing technique [consistent-hash]. Details are outside
the scope of this document.
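
The symmetry property can be illustrated with the following
non-normative Python sketch, in which the reverse direction swaps
the source and destination fields before hashing. The hash
function, salt value, and instance names used here are hypothetical
stand-ins for implementation choices.

   <CODE BEGINS>
   # Illustrative sketch only: symmetric hashing per Section
   # 3.4.2.  All routing systems must use the same hash function,
   # the same salt, and the same ordering of instances.
   import hashlib

   SALT = b"shared-salt"  # identical on all routing systems

   def pick_instance(src_ip, dst_ip, proto, sport, dport,
                     instances, reverse=False):
       if reverse:
           # Reverse-direction packets carry swapped fields;
           # swapping them back restores the forward-direction
           # hash key, so both directions select the same bucket.
           src_ip, dst_ip = dst_ip, src_ip
           sport, dport = dport, sport
       key = f"{src_ip}|{dst_ip}|{proto}|{sport}|{dport}"
       digest = hashlib.sha256(SALT + key.encode()).digest()
       # Consistent ordering, e.g. by the Consistent Hash Sort
       # Order community value (Section 9.2).
       ordered = sorted(instances)
       idx = int.from_bytes(digest[:4], "big") % len(ordered)
       return ordered[idx]

   insts = ["sfi-1", "sfi-2", "sfi-3"]
   fwd = pick_instance("10.0.0.1", "10.1.0.1", 6, 1234, 80, insts)
   rev = pick_instance("10.1.0.1", "10.0.0.1", 6, 80, 1234, insts,
                       reverse=True)
   assert fwd == rev  # same SF instance in both directions
   <CODE ENDS>
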
3.4.3 ECMP with Flow Table

A second refinement that can ensure forward/reverse flow
consistency, and that also provides stability when the number of SF
instances changes ("flow stickiness"), is the use of dynamically
configured IP flow tables in the VRFs. In this technique, flow
tables are used to ensure that existing flows are unaffected if the
number of ECMP routes changes, and that forward and reverse traffic
passes through the same SF instance in each set of SF instances
implementing a service function.

The flow tables are set up as follows:

   1. User traffic with a new 5-tuple enters an egress VRF from a
      connected SF instance.

   2. The VRF calculates the ECMP hash across the available routes
      (i.e., the ECMP group) to the ingress interfaces of the SF
      instances in the next SF instance set. The consistent hash
      technique described in Section 3.4.2 must be used here and
      in subsequent steps.

   3. The VRF creates a new flow entry for the 5-tuple of the new
      traffic, with the next-hop being the chosen downstream ECMP
      group member (determined in step 2 above). All subsequent
      packets for the same flow will be forwarded using flow
      lookup and, hence, will use the same next-hop.

   4. The encapsulated packet arrives in the routing system that
      hosts the ingress VRF for the selected SF instance.

   5. The ingress VRF of the next service instance determines
      whether the packet came from a routing system that is in an
      ECMP group in the reverse direction (i.e., from this ingress
      VRF back to the previous set of SF instances).

   6. If an ECMP group is found, the ingress VRF creates a flow
      entry for the reversed 5-tuple, with a next-hop of the
      tunnel on which the traffic arrived. This is for the traffic
      in the reverse direction.

   7. If multiple SF instances are connected to the ingress VRF,
      the ECMP consistent hash is used to choose which one to send
      the traffic into.

   8. A forward flow table entry is created for the traffic's
      5-tuple, with a next hop of the interface of the SF instance
      chosen in the previous step.

   9. The packet is sent into the selected SF instance.

The above method ensures that forward and reverse flows pass
through the same SF instances, and that, if the number of ECMP
routes changes when SF instances are added or removed, all existing
flows continue to pass through the same SF instances while new
flows use the new ECMP hash. The only flows affected will be those
that were passing through an SF instance that was removed; those
flows will be spread among the remaining SF instances using the
updated ECMP hash.

If the consistent hash algorithm is used in both directions, then
only the forward flow entries are required, and they can be built
independently in each direction. If distinct VPN routes with
next-hop forwarding labels are used, then the flow table created in
step 3 alone is sufficient to provide flow stickiness.
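
A non-normative Python sketch of this procedure is shown below,
with dictionaries standing in for forwarding-plane flow tables and
a hash function standing in for the consistent hash of Section
3.4.2. All names, tunnels, and addresses are hypothetical.

   <CODE BEGINS>
   # Illustrative sketch only: flow-table pinning per Section
   # 3.4.3.
   import hashlib

   def consistent_choice(five_tuple, group):
       # Stand-in for the consistent ECMP hash of Section 3.4.2.
       d = hashlib.sha256(repr(five_tuple).encode()).digest()
       return group[int.from_bytes(d[:4], "big") % len(group)]

   def forward_lookup(flow_table, ecmp_group, five_tuple):
       # Steps 1-3: pin a new flow to one ECMP group member.
       # Later packets bypass the hash via the flow entry, so a
       # change in the ECMP group does not move existing flows.
       if five_tuple not in flow_table:
           flow_table[five_tuple] = consistent_choice(five_tuple,
                                                      ecmp_group)
       return flow_table[five_tuple]

   def ingress_arrival(flow_table, reverse_ecmp_group, five_tuple,
                       arrival_tunnel):
       # Steps 5-6: pin the reversed 5-tuple to the tunnel that
       # the forward-direction packet arrived on.
       src, dst, proto, sport, dport = five_tuple
       reversed_tuple = (dst, src, proto, dport, sport)
       if reverse_ecmp_group and reversed_tuple not in flow_table:
           flow_table[reversed_tuple] = arrival_tunnel

   flows = {}
   t = ("10.0.0.1", "10.1.0.1", 6, 1234, 80)
   print(forward_lookup(flows, ["tun-1", "tun-2"], t))
   ingress_arrival(flows, ["tun-0"], t, "tun-0")
   print(flows)  # forward and reverse entries for the flow
   <CODE ENDS>
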
3.4.4 Dealing with Different Hash Algorithms in an SFC

In some cases, there will be two or more hash algorithms in the
forwarders along an SFC, for example, when a physical router is at
the entry and exit of the chain and virtual forwarders are used
within the chain. In such cases, forward and reverse flows will
generally not pass through the same instance of the first SF, and
the SFC will not operate as intended if that SF is stateful. It may
be impractical or prohibitively expensive to implement the flow
table-based methods described above to achieve flow stability and
symmetry. This issue can be mitigated by ensuring that the first SF
is not stateful, or by placing a null SF between the physical
router and the first actual SF in the SFC. This ensures that the
hash method on both sides of stateful service instances is the
same, and the SFC will operate with flow stability and symmetry if
the methods described above are employed.

4 Steering into SFCs Using a Classifier

In many applications of SFCs, a classifier will be used to direct
traffic into SFCs. The classifier inspects the first packet, or the
first few packets, of a flow to determine which SFC the flow should
be sent into. The decision criteria can be based on just the IP
5-tuple of the header (i.e., filter-based forwarding), or can
involve analysis of the payload of the packets using deep packet
inspection. Integration with a subscriber management system such as
PCRF or AAA may be required in order to identify which SFC to send
traffic to based on subscriber policy.

An example logical architecture is shown in Figure 9, below, where
a classifier is external to a physical router that is hosting the
VRFs that form the ends of two SFCs. In the case of filter-based
forwarding, classification could occur in a VRF on the router.

                     +----------+
                     | PCRF/AAA |
                     +-----+----+
                           :
                           :
   Subscriber        +-----+------+
   Traffic---------->| Classifier |
                     +------------+
                       |      |
           +-----------|------|--------------------------+
           |           |      |        Router            |
           |           |      |                          |
           |           O      O                 X--------->Internet
           |           |      |                / \        |
           |           |      |               O   O       |
           +-----------|------|---------------|---|-------+
                       |      |  +---+  +---+ |   |
                       |      +-+| U |--| V |-+   |
                       |         +---+  +---+     |
                       |                          |
                       |  +---+    +---+    +---+ |
                       +--| X |----| Y |----| Z |-+
                          +---+    +---+    +---+

   Figure 9 - Subscriber/Application-Aware Steering with a
   Classifier

In the diagram, the classifier receives subscriber traffic and
sends the traffic out of one of two logical interfaces, depending
on the classification criteria. The logical interfaces of the
classifier are connected to VRFs in a router that are the entries
to two SFCs (shown as O in the diagram).

In this scenario, the entry VRF for each chain does not advertise
the destination network prefixes, and the modified method of
setting prefixes described in Section 2.8.2 can be employed. Also,
the exit VRF for each SFC does not peer with a gateway or proxy
node in the destination network, and packets are forwarded using IP
lookup in the main routing table or in a VRF that the exit traffic
from the SFCs is directed into (shown as X in the diagram). A flow
table may be required to ensure that reverse traffic is sent into
the correct SFC.

An alternative is for the classifier itself to be a distributed,
virtualized service function with multiple egress interfaces. In
that case, each virtual classifier instance could be attached to a
set of VRFs that connect to different SFCs. Each chain entry VRF
would load balance across the first SF instance set in its SFC. The
reverse flow table mechanism described in Section 3.4.3 could be
employed to ensure that flows return to the originating classifier
instance, which may maintain subscriber context and perform
charging and accounting.
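
As a non-normative illustration of filter-based classification, the
Python sketch below maps a flow's 5-tuple to the VRF that is the
entry to an SFC. The policy terms and VRF names are hypothetical
and are not defined by this document.

   <CODE BEGINS>
   # Illustrative sketch only: 5-tuple (filter-based)
   # classification into SFC entry VRFs, per Section 4.  Policy
   # terms are evaluated in order; the first match wins.
   POLICY = [
       (lambda f: f["dst_port"] == 80, "vrf-sfc-web"),
       (lambda f: f["proto"] == 17,    "vrf-sfc-voice"),
   ]
   DEFAULT_VRF = "vrf-internet"  # no SFC; forward directly

   def classify(flow):
       for match, entry_vrf in POLICY:
           if match(flow):
               return entry_vrf
       return DEFAULT_VRF

   print(classify({"src_ip": "10.0.0.1",
                   "dst_ip": "203.0.113.9",
                   "proto": 6, "src_port": 40000,
                   "dst_port": 80}))
   <CODE ENDS>
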
5 External Domain Coordination

It is likely that SFCs will be managed as a separate administrative
domain from the networks from which they receive traffic and to
which they send it. If the connected networks use BGP for route
distribution, the controller in the SFC domain can join the network
domains by creating BGP peering sessions with routing systems or
route reflectors in those network domains, or with local border
routers that peer with the external domains, in order to exchange
VPN routes. While a controller can modify the route targets of the
VRFs within its own SFC domain, it is unlikely to have any control
over the external networks with which it is peering. Hence, the
design does not assume that the RTs of external network domains can
be modified by the controller. The controller may, however, learn
those RTs and use them in its modified route advertisements.

In order to steer traffic from external network domains into an
SFC, the controller will advertise a destination network's prefixes
into the peering source network domain, with a BGP next-hop and
label associated with the SFC entry point, which may be on a
routing system attached to the first SF instance. This
advertisement may be over regular MP-BGP/VPN peering, which assumes
existing standard VPN routing/forwarding behavior on the network
domain's routers (PEs/ASBRs). The controller can learn routes to
networks in external domains at the egress of an SFC and advertise
routes to those networks into other external domains, using the
first ingress routing instance as the next hop, thus allowing
dynamic steering through re-origination of routes.

An operational benefit of this approach is that the SFC topology
within a domain need not be exposed to other domains. Additionally,
using non-specific routes inside an SFC, as described in Section
2.8.1, means that new networks can be attached to an SFC without
needing to configure prefixes inside the chain.

The controller will typically remove the destination network's RTs
and replace them with the RTs of the source network when
advertising the modified routes. Alternatively, an external domain
may be provisioned with an additional export-only RT and an
import-only RT that the controller can use.

6 Fine-Grained Steering Using BGP Flow-Spec

When steering traffic from an external network domain into an SFC
based on attributes of the packet flow, BGP flow-spec can be used
as a signaling option.

In this case, the controller can advertise one or more flow-spec
routes into the entry VRF with the appropriate Service-topology-RT
for the SFC. Alternatively, it can use the procedures described in
[RFC5575] or [flowspec-redirect-ip] on the gateway router to
redirect traffic towards the first SF.

If it is desired to steer specific flows from a network domain's
existing routers, the controller can advertise the above flow-spec
routes to the network domain's border routers or route reflectors.
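
As a non-normative illustration, the sketch below describes a
flow-spec steering route as an abstract data structure rather than
the wire encoding of [RFC5575]; the match fields and the
Service-topology-RT value are hypothetical.

   <CODE BEGINS>
   # Illustrative sketch only: an abstract flow-spec route that
   # steers matching traffic into an SFC entry (Section 6).

   def flowspec_steering_route(dst_prefix, dst_port,
                               service_topology_rt):
       return {
           # Match components of the flow-spec NLRI.
           "nlri": {"dest-prefix": dst_prefix,
                    "dest-port": dst_port},
           # Redirect action: traffic is steered into the VRF(s)
           # importing the service topology RT, i.e. the entry of
           # the SFC.
           "actions": {"redirect": service_topology_rt},
       }

   print(flowspec_steering_route("203.0.113.0/24", 80,
                                 "64512:100"))
   <CODE ENDS>
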
7 Controller Federation

When SFCs are distributed geographically, or in very large-scale
environments, multiple SFC controllers may be present, and they may
variously employ both of the SFC creation methods described in
Section 2.6. If SFCs are required to span controller domains, the
controllers may need to exchange information. Again, a BGP session
between the controllers can be used to exchange route information
as described in the previous sections, allowing such
domain-spanning SFCs to be created.

8 Coordination Between SF Instances and Controller Using BGP

In many cases, the configuration of an SF instance determines its
network behavior, for example, when NAT pools are set up or when an
SSL gateway is configured with a set of enterprise IP addresses to
use. In these cases, the addresses that will be used by the SFs
need to be known in the networks connecting to them in order that
traffic can be properly routed. When SFCs are involved, this means
that the controller has to be notified when such configuration
changes are made in SF instances. Sometimes, the changes will be
made by end customers, and it is desirable that the controller
adjusts the SFC routing configuration automatically when the change
is made, without customers needing to notify the service provider
(via a portal, for instance) and without requiring the development
of integration modules linking the SF instances and the controller.

One option for automatic notification for SFs that support BGP is
for the connected forwarding system (a physical or virtual SFF) to
also support BGP, and for the SF instances to be configured to peer
with the SFF. When the configuration of an SF instance is changed,
for example so that the SF will accept packets from a particular
network prefix on one of its interfaces, the SF instance sends a
BGP route update to the SFF that it is connected to and has a BGP
session with. The controller can then adjust the routes along the
SFCs to ensure that packets with destinations in the new prefix
reach the reconfigured SF instance.

BGP could also be used to signal from the controller to an SF
instance that certain traffic should be sent out from a particular
interface. This could be used to direct suspect traffic to a
security scrubbing center, for example.

Note that the SFF need not support a BGP stack itself; it can proxy
BGP messages to the controller, which will support such a stack.

9 BGP Extended Communities

9.1 ROUTE_TARGET_RECORD

Route-Target Record (RT-Record) is an optional, transitive BGP
attribute of Type code TBD. It contains an RT value representing
one of the RTs that was previously attached to the route, and which
may no longer be attached to the route on subsequent
re-advertisements (see Section 2.8.5). It is encoded as follows:

    0                   1                   2                   3
    0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   |Attr Type Code |                                               |
   +-+-+-+-+-+-+-+-+   Route-Target Extended Community Value       +
   |                          (8B or 20B)                          |
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

   Attr Type Code : The BGP Attribute Type Code for the
                    Route-Target. It can be 16 for an [RFC4360]
                    Extended Community, or 25 for an IPv6 Address
                    Specific Extended Community.

   Value          : Contains a Route-Target Extended Community of
                    one of the types specified in [RFC4360] or
                    [RFC5701].

When a speaker re-originates a route that contains one or more RTs,
it MUST add each of these RTs as RT-Record extended communities of
the re-originated route.

A speaker MUST NOT re-originate a route to an RT if this RT is
present as an RT-Record extended community.
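
The re-origination loop check can be illustrated with the following
non-normative Python sketch, which uses a simple dictionary as the
route model; it is not a BGP message parser, and all RT values are
hypothetical.

   <CODE BEGINS>
   # Illustrative sketch only: RT-Record loop prevention per
   # Section 9.1.  A route records every RT it has carried so
   # that it is never re-originated toward one of them again.

   def may_reoriginate(route, target_rt):
       # A speaker MUST NOT re-originate a route to an RT that is
       # already present as an RT-Record.
       return target_rt not in route["rt_records"]

   def reoriginate(route, new_rt):
       if not may_reoriginate(route, new_rt):
           raise ValueError("loop: RT already in RT-Record")
       # Record all currently attached RTs before rewriting them
       # for the next hop in the chain.
       return {"prefix": route["prefix"],
               "rts": [new_rt],
               "rt_records": route["rt_records"] + route["rts"]}

   r1 = {"prefix": "10.1.0.0/16", "rts": ["64512:1"],
         "rt_records": []}
   r2 = reoriginate(r1, "64512:2")
   print(may_reoriginate(r2, "64512:1"))  # False - would loop
   <CODE ENDS>
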
9.2 CONSISTENT_HASH_SORT_ORDER

Consistent Hash Sort Order is an optional, transitive Opaque BGP
Extended Community of type TBD, defined as follows:

   Type Field  : The value of the high-order octet is determined
                 by provisioning as per [RFC4360]. The value of
                 the low-order octet is to be assigned by IANA
                 from the Transitive Opaque Extended Community
                 Sub-Types registry.

   Value Field : The Value Field contains a Sort Order sub-field
                 that indicates the relative order of this route
                 among the ECMP set for the prefix, to be sorted
                 in increasing order. It is a 32-bit unsigned
                 integer. The field is encoded as shown below:

                 +------------------------------+
                 |    Sort Order (4 octets)     |
                 +------------------------------+
                 |    Reserved (2 octets)       |
                 +------------------------------+
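
A non-normative Python sketch of this encoding is shown below.
Since the type values are to be assigned (TBD), the constants used
here are placeholders only.

   <CODE BEGINS>
   # Illustrative sketch only: packing the 8-octet Consistent
   # Hash Sort Order extended community of Section 9.2.
   import struct

   TYPE_HIGH = 0x03  # transitive opaque, per [RFC4360]
   TYPE_LOW = 0x00   # placeholder - actual value TBD (IANA)

   def consistent_hash_sort_order(sort_order):
       # 1 octet type high, 1 octet type low, 4-octet unsigned
       # Sort Order, 2 reserved octets (8 octets in total).
       return struct.pack("!BBIH", TYPE_HIGH, TYPE_LOW,
                          sort_order & 0xFFFFFFFF, 0)

   print(consistent_hash_sort_order(7).hex())  # 0300000000070000
   <CODE ENDS>
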
10 Summary and Conclusion

The architecture for service function chains described in this
document uses virtual networks implemented as overlays in order to
create service function chains. The virtual networks use
standards-based encapsulation tunneling, such as MPLS over GRE/UDP
or VXLAN, to transport packets into an SFC and between service
function instances without routing in the user address space. Two
methods of installing routes to form service chains are described.

In environments with physical routers, a controller may operate in
tandem with existing BGP route reflectors and would contain the SFC
topology model and the ability to install the local static
interface routes to the SF instances. In a virtualized environment,
the controller can emulate route reflection internally and simply
install the required routes directly, without advertisements
occurring.

11 Security Considerations

The security considerations for SFCs are broadly similar to those
concerning the data, control and management planes of any device
placed in a network. Details are out of scope for this document.

12 IANA Considerations

The new BGP Extended Communities in Section 9 require type
allocations in the IANA registry for BGP Extended Communities.

13 References

13.1 Normative References

   None.

13.2 Informative References

   [NFVE2E]   "Network Functions Virtualisation: End to End
              Architecture",
              http://docbox.etsi.org/ISG/NFV/70-DRAFT/0010/NFV-0010v016.zip

   [RFC2328]  Moy, J., "OSPF Version 2", RFC 2328, April 1998.

   [sfc-arch] Halpern, J. and Pignataro, C., "Service Function
              Chaining (SFC) Architecture", RFC 7665, October 2015.

   [RFC4364]  Rosen, E. and Y. Rekhter, "BGP/MPLS IP Virtual
              Private Networks (VPNs)", RFC 4364, February 2006.

   [RFC4271]  Rekhter, Y., Li, T., and S. Hares, "A Border Gateway
              Protocol 4 (BGP-4)", RFC 4271, January 2006.

   [RFC4360]  Sangli, S., Tappan, D., and Y. Rekhter, "BGP Extended
              Communities Attribute", RFC 4360, February 2006.

   [RFC4760]  Bates, T., Chandra, R., Katz, D., and Y. Rekhter,
              "Multiprotocol Extensions for BGP-4", RFC 4760,
              January 2007.

   [RFC7348]  Mahalingam, M., et al., "VXLAN: A Framework for
              Overlaying Virtualized Layer 2 Networks over Layer 3
              Networks", RFC 7348, August 2014.

   [draft-rosen-idr-tunnel-encaps]
              Rosen, E., Ed., et al., "Using the BGP Tunnel
              Encapsulation Attribute without the BGP Encapsulation
              SAFI", draft-rosen-idr-tunnel-encaps, August 6, 2015.

   [draft-ietf-l3vpn-end-system]
              Marques, P., et al., "BGP-signaled end-system
              IP/VPNs", draft-ietf-l3vpn-end-system-04, October 2,
              2014.

   [RFC5575]  Marques, P., Sheth, N., Raszuk, R., et al.,
              "Dissemination of Flow Specification Rules", RFC
              5575, August 2009.

   [RFC5701]  Rekhter, Y., "IPv6 Address Specific BGP Extended
              Community Attribute", RFC 5701, November 2009.

   [draft-ietf-bess-evpn-overlay-02]
              Sajassi, A., et al., "A Network Virtualization
              Overlay Solution using EVPN",
              draft-ietf-bess-evpn-overlay, February 2015.

   [draft-ietf-sfc-nsh]
              Quinn, P., et al., "Network Service Header",
              draft-ietf-sfc-nsh-00, March 2015.

   [draft-niu-sfc-mechanism]
              Niu, L., Li, H., and Jiang, Y., "A Service Function
              Chaining Header and its Mechanism",
              draft-niu-sfc-mechanism-00, January 2014.

   [draft-rijsman-sfc-metadata-considerations]
              Rijsman, B., et al., "Metadata Considerations",
              draft-rijsman-sfc-metadata-considerations-00,
              February 12, 2014.

   [RFC6241]  Enns, R., Bjorklund, M., Schoenwaelder, J., and A.
              Bierman, "Network Configuration Protocol (NETCONF)",
              RFC 6241, June 2011.

   [RFC4023]  Worster, T., Rekhter, Y., and E. Rosen,
              "Encapsulating MPLS in IP or Generic Routing
              Encapsulation (GRE)", RFC 4023, March 2005.

   [RFC7510]  Xu, X., Sheth, N., et al., "Encapsulating MPLS in
              UDP", RFC 7510, April 2015.

   [draft-ietf-i2rs-architecture]
              Atlas, A., Halpern, J., Hares, S., Ward, D., and T.
              Nadeau, "An Architecture for the Interface to the
              Routing System", draft-ietf-i2rs-architecture, work
              in progress, March 2015.

   [consistent-hash]
              Karger, D., Lehman, E., Leighton, T., Panigrahy, R.,
              Levine, M., and Lewin, D., "Consistent Hashing and
              Random Trees: Distributed Caching Protocols for
              Relieving Hot Spots on the World Wide Web",
              Proceedings of the Twenty-ninth Annual ACM Symposium
              on Theory of Computing, ACM Press, New York, NY, USA,
              pp. 654-663, 1997.

   [draft-ietf-idr-link-bandwidth]
              Mohapatra, P. and Fernando, R., "BGP Link Bandwidth
              Extended Community", draft-ietf-idr-link-bandwidth,
              work in progress.

   [flowspec-redirect-ip]
              Uttaro, J., et al., "BGP Flow-Spec Redirect to IP
              Action", draft-ietf-idr-flowspec-redirect-ip-02,
              February 2015.

14 Acknowledgments

This document was prepared using 2-Word-v2.0.template.dot.

This document is based on the earlier drafts
[draft-rfernando-bess-service-chaining] and
[draft-mackie-sfc-using-virtual-networking].

The authors would like to thank D. Daino, D.R. Lopez, D. Bernier,
W. Haeffner, A. Farrel, L. Fang, and N. So for their contributions
to the earlier drafts. The authors would also like to thank the
following individuals for their review and feedback on the original
proposals: E. Rosen, J. Guchard, P. Quinn, P. Bosch, D. Ward, A.
Ganesan, N. Seth, G. Pildush and N. Bitar. The authors also thank
Wim Henderickx for his useful suggestions on several aspects of the
draft.

Authors' Addresses

   Rex Fernando
   Cisco
   170 W Tasman Drive
   San Jose, CA 95134
   USA
   Email: rex@cisco.com

   Stuart Mackie
   Juniper Networks
   1133 Innovation Way
   Sunnyvale, CA 94089
   USA
   Email: wsmackie@juniper.net

   Dhananjaya Rao
   Cisco
   170 W. Tasman Drive
   San Jose, CA 95134
   USA
   Email: dhrao@cisco.com

   Bruno Rijsman
   Juniper Networks
   1133 Innovation Way
   Sunnyvale, CA 94089
   USA
   Email: brijsman@juniper.net

   Maria Napierala
   AT&T Labs
   200 Laurel Avenue
   Middletown, NJ 07748
   USA
   Email: mnapierala@att.com

   Thomas Morin
   Orange
   2, avenue Pierre Marzin
   Lannion 22307
   France
   Email: thomas.morin@orange.com