Internet Engineering Task Force                        D. Joachimpillai
Internet-Draft                                                  Verizon
Intended status: Standards Track                          J. Hadi Salim
Expires: May 5, 2016                                  Mojatatu Networks
                                                        November 2, 2015


                          ForCES Inter-FE LFB
                    draft-ietf-forces-interfelfb-02

Abstract

   This document describes how to extend the ForCES LFB topology across
   FEs by defining the Inter-FE LFB Class.  The Inter-FE LFB Class
   provides the ability to pass data and metadata across FEs without
   needing any changes to the ForCES specification.  The document
   focuses on Ethernet transport.

Status of This Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF).  Note that other groups may also distribute
   working documents as Internet-Drafts.  The list of current Internet-
   Drafts is at http://datatracker.ietf.org/drafts/current/.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."

   This Internet-Draft will expire on May 5, 2016.

Copyright Notice

   Copyright (c) 2015 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document.
   Code Components extracted from this document must include Simplified
   BSD License text as described in Section 4.e of the Trust Legal
   Provisions and are provided without warranty as described in the
   Simplified BSD License.

Table of Contents

   1.  Terminology and Conventions
     1.1.  Requirements Language
     1.2.  Definitions
   2.  Introduction
   3.  Problem Scope And Use Cases
     3.1.  Assumptions
     3.2.  Sample Use Cases
       3.2.1.  Basic IPv4 Router
         3.2.1.1.  Distributing The Basic IPv4 Router
       3.2.2.  Arbitrary Network Function
         3.2.2.1.  Distributing The Arbitrary Network Function
   4.  Inter-FE LFB Overview
     4.1.  Inserting The Inter-FE LFB
   5.  Inter-FE Ethernet Connectivity
     5.1.  Inter-FE Ethernet Connectivity Issues
       5.1.1.  MTU Consideration
       5.1.2.  Quality Of Service Considerations
       5.1.3.  Congestion Considerations
       5.1.4.  Deployment Considerations
     5.2.  Inter-FE Ethernet Encapsulation
   6.  Detailed Description of the Ethernet inter-FE LFB
     6.1.  Data Handling
       6.1.1.  Egress Processing
       6.1.2.  Ingress Processing
     6.2.  Components
     6.3.  Inter-FE LFB XML Model
   7.  Acknowledgements
   8.  IANA Considerations
   9.  IEEE Assignment Considerations
   10. Security Considerations
   11. References
     11.1.  Normative References
     11.2.  Informative References
   Authors' Addresses

1.  Terminology and Conventions

1.1.  Requirements Language

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
   document are to be interpreted as described in [RFC2119].

1.2.  Definitions

   This document reiterates the terminology defined in several ForCES
   documents [RFC3746], [RFC5810], [RFC5811], [RFC5812], [RFC7391], and
   [RFC7408] for the sake of contextual clarity:

      Control Element (CE)

      Forwarding Element (FE)

      FE Model

      LFB (Logical Functional Block) Class (or type)

      LFB Instance

      LFB Model

      LFB Metadata

      ForCES Component

      LFB Component

      ForCES Protocol Layer (ForCES PL)

      ForCES Protocol Transport Mapping Layer (ForCES TML)

2.  Introduction

   In the ForCES architecture, a packet service can be modelled by
   composing a graph of one or more LFB instances.
   The reader is referred to the details in the ForCES Model [RFC5812].

   The current ForCES model describes the processing within a single
   Forwarding Element (FE) in terms of Logical Functional Blocks
   (LFBs), including provision for the Control Element (CE) to
   establish and modify that processing sequence and the parameters of
   the individual LFBs.

   Under some circumstances, it would be beneficial to be able to
   extend this view, and the resulting processing, across more than one
   FE.  This may be in order to achieve scale by splitting the
   processing across elements, or to utilize specialized hardware
   available on specific FEs.

   Given that the ForCES inter-LFB architecture calls out the ability
   to pass metadata between LFBs, it is therefore imperative to define
   mechanisms that extend that existing feature and allow passing
   metadata between LFBs across FEs.

   This document describes how to extend the LFB topology across FEs,
   i.e., inter-FE connectivity, without needing any changes to the
   ForCES definitions.  It focuses on using Ethernet as the
   interconnection between FEs.

3.  Problem Scope And Use Cases

   The scope of this document is to solve the challenge of passing
   ForCES-defined metadata alongside packet data across FEs (be they
   physical or virtual) for the purpose of distributing the LFB
   processing.

3.1.  Assumptions

   o  The FEs involved in the Inter-FE LFB belong to the same Network
      Element (NE) and are in close proximity within a single
      administrative private network.

   o  The FEs are already interconnected using Ethernet.  We focus on
      Ethernet because it is a very common FE interconnect.  While
      other higher transports (such as UDP over IP) or lower transports
      could be defined to carry the data and metadata, it is simpler to
      use Ethernet for the functional scope of a single distributed
      device already interconnected with Ethernet.

3.2.  Sample Use Cases

   To illustrate the problem scope, we present two use cases where we
   start with a single FE running all the LFB functionality and then
   split it into multiple FEs achieving the same end goals.

3.2.1.  Basic IPv4 Router

   A sample LFB topology depicted in Figure 1 demonstrates a service
   graph for delivering a basic IPV4 forwarding service within one FE.
   For the purpose of illustration, the diagram shows LFB classes as
   graph nodes instead of multiple LFB class instances.

   Since the illustration in Figure 1 is meant only as an exercise to
   showcase how data and metadata are sent downstream or upstream on a
   graph of LFB instances, it abstracts out any ports in both
   directions and talks about a generic ingress and egress LFB.  Again,
   for illustration purposes, the diagram does not show exception or
   error paths.  Also left out are details on Reverse Path Filtering,
   ECMP, multicast handling, etc.  In other words, this is not meant to
   be a complete description of an IPV4 forwarding application; for a
   more complete example, please refer to the LFB library document
   [RFC6956].

   The output of the ingress LFB(s) coming into the IPv4 Validator LFB
   will have both the IPV4 packets and, depending on the
   implementation, a variety of ingress metadata such as offsets into
   the different headers, any classification metadata, physical and
   virtual ports encountered, tunnelling information, etc.  These
   metadata are lumped together as "ingress metadata".
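   To make the notion of "ingress metadata" concrete, the following
   minimal C sketch shows one possible in-memory shape for it.  This is
   purely illustrative: the structure and field names are hypothetical
   and are not defined by ForCES or by this document.

   #include <stdint.h>

   /* Hypothetical example of per-packet ingress metadata carried
    * alongside the packet between LFB instances. */
   struct ingress_metadata {
       uint32_t inport;     /* physical/virtual port encountered     */
       uint32_t l3_offset;  /* offset of the IP header in the packet */
       uint32_t class_id;   /* any upstream classification result    */
       uint32_t tunnel_id;  /* tunnelling information, if present    */
   };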
   Once the IPV4 validator vets the packet (for example, ensuring that
   the TTL has not expired), it feeds the packet and inherited metadata
   into the IPV4 unicast Longest Prefix Match (LPM) LFB.

                            +----+
                            |    |
       IPV4 pkt             |    |  IPV4 pkt    +-----+
    +---------------------->|    +------------->|     |
    |  + ingress            |    |  + ingress   |IPv4 |  IPV4 pkt  +---+
    |    metadata           |    |    metadata  |Ucast+----------->|   |
    |                       +----+              |LPM  | + ingress  |   |
  +-+-+                      IPv4               +-----+ + NHinfo   +-+-+
  |   |                      Validator                   metadata  IPv4
  |   |                      LFB                                NextHop
  |   |                                                             LFB
  |   |                                                              |
  |   |                                             IPV4 pkt         |
  |   |                                             + {ingress       |
  +---+                                             + NHdetails}     |
  Ingress                                             metadata       |
    LFB             +--------+                                       |
                    | Egress |                                       |
            <-------+        |<--------------------------------------+
                    |  LFB   |
                    +--------+

           Figure 1: Basic IPV4 packet service LFB topology

   The IPV4 unicast LPM LFB does a longest prefix match lookup on the
   IPV4 FIB using the destination IP address as a search key.  The
   result is typically a next hop (NH) selector, which is passed
   downstream as metadata.

   The NextHop LFB receives the IPv4 packet with an associated NH info
   metadatum.  The NextHop LFB consumes the NH info metadatum and
   derives from it a table index with which to look up the next hop
   table in order to find the appropriate egress information.  The
   lookup result is used to build the next hop details to be used
   downstream on the egress.  This information may include any source
   and destination information (for our purposes, the MAC addresses to
   use) as well as egress ports.  [Note: it is also at this LFB where
   the forwarding TTL decrement and IP checksum recalculation typically
   occur.]

   The details of the egress LFB are considered out of scope for this
   discussion.  Suffice it to say that somewhere within or beyond the
   Egress LFB the IPV4 packet will be sent out a port (Ethernet,
   virtual or physical, etc.).

3.2.1.1.  Distributing The Basic IPv4 Router

   Figure 2 demonstrates one way the router LFB topology in Figure 1
   may be split across two FEs (e.g., two ASICs).  Figure 2 shows the
   LFB topology split across FEs after the IPV4 unicast LPM LFB.

   FE1
   +--------------------------------------------------------------+
   |  +----------+                      +----+                     |
   |  | Ingress  |  IPV4 pkt            |    |  IPV4 pkt  +-----+  |
   |  |   LFB    +--------------------->|    +----------->|     |  |
   |  |          |  + ingress           |    | + ingress  |IPv4 |  |
   |  +----------+    metadata          |    |   metadata |Ucast|  |
   |       ^                            +----+            |LPM  |  |
   |       |                             IPv4             +--+--+  |
   |       |                             Validator           |     |
   |                                     LFB                 |     |
   +-----------------------------------------------------+--|--+--+
                                                             |
                                                  IPv4 packet +
                                                  {ingress + NHinfo}
                                                   metadata
   FE2                                                       |
   +-----------------------------------------------------+--|--+--+
   |                                                         V     |
   |    +--------+                                  +---------+    |
   |    | Egress |        IPV4 packet               | IPV4    |    |
   | <--+  LFB   |<---------------------------------+ NextHop |    |
   |    |        |  {ingress + NHdetails} metadata  | LFB     |    |
   |    +--------+                                  +---------+    |
   +---------------------------------------------------------------+

           Figure 2: Split IPV4 packet service LFB topology

   Proprietary interconnects (for example, Broadcom HiGig over XAUI
   [brcm-higig]) are known to exist that carry both the IPV4 packet and
   the related metadata between the IPV4 Unicast LFB and the IPV4
   NextHop LFB across the two FEs.

   This document defines the inter-FE LFB, a standard mechanism for
   encapsulating, generating, receiving, and decapsulating packets and
   associated metadata between FEs over Ethernet.
3.2.2.  Arbitrary Network Function

   In this section, we show an example of an arbitrary Network Function
   that is more coarse grained in terms of functionality.  Each Network
   Function may constitute more than one LFB.

   FE1
   +--------------------------------------------------------------+
   |  +----------+                   +----+                        |
   |  | Network  |  pkt              |NF2 |  pkt          +-----+  |
   |  | Function +------------------>|    +-------------->|     |  |
   |  |    1     |  + NF1            |    |  + NF1/2      |NF3  |  |
   |  +----------+    metadata       |    |    metadata   |     |  |
   |       ^                         +----+               |     |  |
   |       |                                              +--+--+  |
   |       |                                                 |     |
   |       |                                                 |     |
   +---------------------------------------------------------|----+
                                                              V

      Figure 3: A Network Function Service Chain within one FE

   The setup in Figure 3 is typical of most packet-processing boxes,
   where we have functions like DPI, NAT, routing, etc. connected in
   such a topology to deliver a packet-processing service to flows.

3.2.2.1.  Distributing The Arbitrary Network Function

   The setup in Figure 3 can instead be split across three FEs, as
   demonstrated in Figure 4.  This could be motivated by the need to
   scale out, or because different vendors provide different
   functionality that is plugged in to compose the service.  The end
   result is to have the same packet service delivered to the different
   flows passing through.

   FE1                              FE2
   +----------+                     +----+                 FE3
   | Network  |  pkt                |NF2 |  pkt            +-----+
   | Function +-------------------->|    +---------------->|     |
   |    1     |  + NF1              |    |  + NF1/2        |NF3  |
   +----------+    metadata         |    |    metadata     |     |
        ^                           +----+                 |     |
        |                                                  +--+--+
        |                                                     |
                                                              V

            Figure 4: A Network Function Service Chain
                    Distributed Across Multiple FEs

4.  Inter-FE LFB Overview

   We address the inter-FE connectivity requirements by defining the
   inter-FE LFB class.  Using a standard LFB class definition implies
   no change to the basic ForCES architecture in the form of the core
   LFBs (FE Protocol or Object LFBs).  This design choice was made
   after considering an alternative approach that would have required
   changes to both the FE Object capabilities (SupportedLFBs) and the
   LFBTopology component in order to describe the inter-FE connectivity
   capabilities as well as the runtime topology of the LFB instances.

4.1.  Inserting The Inter-FE LFB

   The distributed LFB topology described in Figure 2 is re-illustrated
   in Figure 5 to show where in the topology the inter-FE LFB fits.

   As can be observed in Figure 5, the same details passed between the
   IPV4 unicast LPM LFB and the IPV4 NH LFB are passed to the egress
   side of the Inter-FE LFB.  This information is illustrated as a
   multiplicity of inputs into the egress InterFE LFB instance.  Each
   input represents a unique set of selection information.
   FE1
   +--------------------------------------------------------------+
   |  +----------+                      +----+                     |
   |  | Ingress  |  IPV4 pkt            |    |  IPV4 pkt  +-----+  |
   |  |   LFB    +--------------------->|    +----------->|     |  |
   |  |          |  + ingress           |    | + ingress  |IPv4 |  |
   |  +----------+    metadata          |    |   metadata |Ucast|  |
   |       ^                            +----+            |LPM  |  |
   |       |                             IPv4             +--+--+  |
   |       |                             Validator           |     |
   |       |                             LFB                 |     |
   |       |                                                 |     |
   |       |                     IPv4 pkt + metadata         |     |
   |       |                     {ingress + NHinfo}          |     |
   |       |                            |                    |     |
   |       |                        +..-+-..+                |     |
   |       |                        |..| |..|<---------------+     |
   |       |                      +-V--V-V--V-+                    |
   |       |                      |  Egress   |                    |
   |       |                      |  InterFE  |                    |
   |       |                      |  LFB      |                    |
   |       |                      +------+----+                    |
   +---------------------------------+---|---+---------------------+
                                         |
              Ethernet Frame with:       |
              IPv4 packet data and metadata
              {ingress + NHinfo + Inter-FE info}
   FE2                                   |
   +---------------------------------+---|---+---------------------+
   |                              +..+.+..+                         |
   |                              |..|.|..|                         |
   |                            +-V--V-V--V-+                       |
   |                            |  Ingress  |                       |
   |                            |  InterFE  |                       |
   |                            |  LFB      |                       |
   |                            +----+------+                       |
   |                                 |                              |
   |                      IPv4 pkt + metadata                       |
   |                      {ingress + NHinfo}                        |
   |                                 |                              |
   |    +--------+              +----V---+                          |
   |    | Egress |  IPV4 packet | IPV4   |                          |
   | <--+  LFB   |<-------------+NextHop |                          |
   |    |        |  {ingress +  | LFB    |                          |
   |    +--------+   NHdetails} +--------+                          |
   |                  metadata                                      |
   +----------------------------------------------------------------+

      Figure 5: Split IPV4 forwarding service with Inter-FE LFB

   The egress side of the inter-FE LFB uses the received packet and
   metadata to select details for encapsulation when sending messages
   towards the selected neighboring FE.  These details include what to
   communicate as the source and destination FEs (abstracted as MAC
   addresses, as described in Section 5.2); in addition, the original
   metadata may be passed along with the original IPV4 packet.

   On the ingress side of the inter-FE LFB, the received packet and its
   associated metadata are used to decide the packet graph
   continuation, i.e., which of the original metadata to restore and
   which next LFB class instance to continue processing on.  In the
   illustrated Figure 5, an IPV4 NextHop LFB instance is selected and
   the appropriate metadata is passed on to it.

   The ingress side of the inter-FE LFB consumes some of the
   information passed and hands the IPV4 packet, alongside the ingress
   and NHinfo metadata, to the IPV4 NextHop LFB, as was done earlier in
   both Figure 1 and Figure 2.

5.  Inter-FE Ethernet Connectivity

   Section 5.1 describes some of the issues related to using Ethernet
   as the transport and how we mitigate them.

   Section 5.2 defines a payload format that is to be used over
   Ethernet.  An existing implementation of this specification on top
   of Linux Traffic Control [linux-tc] is described in [tc-ife].

5.1.  Inter-FE Ethernet Connectivity Issues

   There are several issues that may arise from using direct Ethernet
   encapsulation and that need consideration.

5.1.1.  MTU Consideration

   Because we are adding data to existing Ethernet frames, MTU issues
   may arise.  We recommend:

   o  Using large MTUs when possible (for example, with jumbo frames).

   o  Limiting the amount of metadata that can be transmitted; our
      definition allows select metadata to be filtered before being
      encapsulated in the frame, as described in Section 6.  We
      recommend sizing the egress port MTU so as to leave space for the
      maximum total metadata size to be allowed between FEs (see the
      sketch after this list).  In such a setup, the port is configured
      to "lie" to the upper layers by claiming to have a lower MTU than
      it is capable of.  The MTU setting can be achieved via ForCES
      control of the port LFB (or other configuration).  In essence,
      when the control plane explicitly decides the MTU settings of the
      egress port, it implicitly decides how much metadata will be
      allowed.
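   As an informal illustration of that arithmetic, the following C
   sketch (a hypothetical helper, not part of this specification) shows
   the metadata budget implied by such a configuration: the room for
   TLV-encoded metadata is the difference between what the port can
   carry and what it advertises, minus the 2-byte metadata length field
   of the encapsulation (Section 5.2).

   #include <stdint.h>

   /* The encapsulation prefixes the TLVs with a 16-bit metadata
    * length field (see Section 5.2). */
   #define IFE_METALEN_FIELD 2U

   /* Hypothetical helper: bytes available for TLV-encoded metadata
    * when the port hardware handles phys_mtu but advertises adv_mtu
    * to the upper layers.  Returns 0 when nothing fits. */
   static uint32_t
   ife_metadata_room(uint32_t phys_mtu, uint32_t adv_mtu)
   {
       if (phys_mtu <= adv_mtu + IFE_METALEN_FIELD)
           return 0;
       return phys_mtu - adv_mtu - IFE_METALEN_FIELD;
   }

   For example, a port capable of a 9000-byte MTU that advertises a
   1500-byte MTU to the upper layers leaves 7498 bytes for TLV-encoded
   metadata.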
5.1.2.  Quality Of Service Considerations

   A raw packet arriving at the Inter-FE LFB (from upstream LFB class
   instances) may have a Class of Service (COS) metadatum indicating
   how it should be treated from a Quality of Service perspective.

   The resulting Ethernet frame will eventually be (preferentially)
   treated by a downstream LFB (typically a port LFB instance), and its
   COS mark will be honored in terms of priority.  In other words, the
   presence of the Inter-FE LFB does not change the COS semantics.

5.1.3.  Congestion Considerations

   The addition of the Inter-FE encapsulation adds overhead to the
   packets and therefore bandwidth consumption on the wire.  In cases
   where Inter-FE encapsulated traffic shares wire resources with other
   traffic, the new dynamics could potentially lead to congestion.  In
   such a case, given that the Inter-FE LFB is deployed within a single
   administrative domain, the operator may need to enforce usage
   restrictions.  These restrictions may take the form of appropriate
   provisioning, for example, rate limiting all Inter-FE LFB traffic at
   an upstream LFB, prioritizing non-Inter-FE LFB traffic, or other
   techniques such as managed circuit breaking [circuit-b].

   It is noted that much of the traffic passing through an FE that
   utilizes the Inter-FE LFB is expected to be IP based, which is
   generally assumed to be congestion controlled and therefore does not
   need additional congestion control mechanisms [RFC5405].

5.1.4.  Deployment Considerations

   While we expect to use a unique IEEE-issued ethertype for the
   inter-FE traffic, we apply lessons learned from VXLAN deployment and
   stay flexible on the setting of the ethertype value used: we make
   the ethertype an LFB read-write component.  The Linux VXLAN
   implementation uses UDP port 8472 because it was deployed much
   earlier than the RFC publication at which point the IANA-assigned
   UDP port 4789 was issued [vxlan-udp].  For this reason, we make it
   possible to define at control time which ethertype to use,
   defaulting to the IEEE-issued ethertype.  We justify this by
   assuming that a given ForCES NE is likely to be owned by a single
   organization and that the organization's CE (or CE cluster) could
   program all participating FEs via the inter-FE LFB (described in
   this document) to recognize a private Ethernet type used for
   inter-FE traffic (possibly one of the IDs defined by the IEEE as
   available for private use, namely 0x88B5 and 0x88B6).

5.2.  Inter-FE Ethernet Encapsulation

   The Ethernet wire encapsulation is illustrated in Figure 6.  The
   process that leads to this encapsulation is described in Section 6.
   The resulting frame is 32-bit aligned.
     0                   1                   2                   3
     0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    |                   Destination MAC Address                     |
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    |   Destination MAC Address    |      Source MAC Address        |
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    |                     Source MAC Address                        |
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    |      Inter-FE ethertype      |        Metadata length         |
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    |        TLV encoded Metadata ~~~..............~~               |
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    |        TLV encoded Metadata ~~~..............~~               |
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    |        Original packet data ~~................~~              |
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

                   Figure 6: Packet format suggestion

   The Ethernet header illustrated in Figure 6 has the following
   semantics:

   o  The Destination MAC Address is used to identify the destination
      FEID, per CE policy (as described in Section 6).

   o  The Source MAC Address is used to identify the source FEID, per
      CE policy (as described in Section 6).

   o  The Ethernet type is used to identify the frame as an inter-FE
      LFB type.  Ethertype 0xFEFE is to be used (XXX: Note to editor:
      likely we won't get that value; update when available).

   o  The 16-bit metadata length is used to describe the total encoded
      metadata length (including the 16 bits used to encode the
      metadata length itself).

   o  One or more 16-bit TLV encoded metadatum entries follow the
      metadata length field.  The TLV type identifies the metadata ID;
      ForCES IANA-defined metadata IDs will be used.  All TLVs are
      32-bit aligned.  We recognize that using a 16-bit TLV restricts
      the metadata ID to 16 bits instead of the 32-bit ForCES-defined
      component ID space.  However, at the time of publication we
      believe this is sufficient to carry all the information we need,
      and the approach taken saves 4 bytes per metadatum transferred.

   o  The original packet data payload is appended at the end of the
      metadata, as shown.
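   As an informal illustration of the layout in Figure 6, the following
   C sketch shows how an implementation might describe the wire format.
   It assumes the placeholder ethertype 0xFEFE noted above; the struct
   and field names are hypothetical and not part of this specification.

   #include <stdint.h>

   /* Placeholder ethertype from Section 5.2, pending IEEE issuance. */
   #define ETH_P_IFE 0xFEFE

   /* IFE header as it appears on the wire; multi-byte fields are in
    * network byte order. */
   struct ife_hdr {
       uint8_t  dst[6];     /* destination FE (DSTFE, per CE policy) */
       uint8_t  src[6];     /* source FE (SRCFE, per CE policy)      */
       uint16_t ethertype;  /* inter-FE ethertype, e.g., ETH_P_IFE   */
       uint16_t metalen;    /* total metadata length, including this
                             * 2-byte field itself                   */
   } __attribute__((packed));

   /* One TLV-encoded metadatum.  The value is zero-padded so that
    * every TLV stays 32-bit aligned; the original packet follows the
    * last TLV. */
   struct ife_meta_tlv {
       uint16_t type;       /* ForCES IANA-defined metadata ID       */
       uint16_t len;        /* TLV length                            */
       uint8_t  value[];    /* metadatum value, zero-padded          */
   } __attribute__((packed));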
6.  Detailed Description of the Ethernet inter-FE LFB

   The Ethernet inter-FE LFB has two LFB input port groups and three
   LFB output ports, as shown in Figure 7.

   The inter-FE LFB defines two components that aid the processing; the
   components are described in Section 6.2.

                           +-----------------+
             Inter-FE LFB  |                 |
             Encapsulated  |             OUT2+--> decapsulated Packet
           --------------->|IngressInGroup   |    + metadata
           Ethernet Frame  |                 |
                           |                 |
             raw Packet +  |             OUT1+--> Encapsulated Ethernet
           --------------->|EgressInGroup    |    Frame
               Metadata    |                 |
                           |    EXCEPTIONOUT +--> ExceptionID, packet
                           |                 |    + metadata
                           +-----------------+

                          Figure 7: Inter-FE LFB

6.1.  Data Handling

   An Inter-FE LFB instance can be positioned at the egress of a source
   FE; Figure 5 illustrates an example source FE in the form of FE1.
   In such a case, the Inter-FE LFB instance receives, via port group
   EgressInGroup, a raw packet and associated metadata from the
   preceding LFB instances.  The input information is used to select
   how to generate and encapsulate the new frame.  The set of all
   selections is stored in the LFB component IFETable, described
   further below.  The processed, encapsulated Ethernet frame goes out
   on OUT1 to a downstream LFB instance when processing succeeds, or
   out the EXCEPTIONOUT port in the case of a failure.

   An Inter-FE LFB instance can also be positioned at the ingress of a
   receiving FE; Figure 5 illustrates an example destination FE in the
   form of FE2.  In such a case, the Inter-FE LFB receives, via an LFB
   port in the IngressInGroup, an encapsulated Ethernet frame.
   Successful processing of the packet results in a raw packet with
   associated metadata IDs going downstream to the LFB connected on
   OUT2.  On failure, the data is sent out EXCEPTIONOUT.

6.1.1.  Egress Processing

   The egress Inter-FE LFB receives packet data and any accompanying
   metadata at an LFB port of the LFB instance's input port group
   labelled EgressInGroup.

   The LFB implementation may use the incoming LFB port (within the LFB
   port group EgressInGroup) to map to a table index used to look up
   the IFETable table.

   If the lookup is successful, a matched table row holding the
   InterFEinfo details is retrieved with the tuple {optional IFETYPE,
   optional StatId, Destination MAC address (DSTFE), Source MAC address
   (SRCFE), optional metafilters}.  The metafilter list defines a
   whitelist of which metadata are to be passed to the neighboring FE.
   The inter-FE LFB will perform the following actions using the
   resulting tuple:

   o  Increment statistics for the packet and byte counts observed at
      the corresponding IFEStats entry.

   o  When the MetaFilterList is present, walk each received metadatum
      and apply it against the MetaFilterList.  If no legitimate
      metadata is found that needs to be passed downstream, processing
      stops and the packet and metadata are sent out the EXCEPTIONOUT
      port with an exceptionID of EncapTableLookupFailed [RFC6956].

   o  Check that the additional overhead of the Ethernet header and
      encapsulated metadata will not exceed the MTU.  If it does,
      increment the error packet count statistic and send the packet
      and metadata out the EXCEPTIONOUT port with an exceptionID of
      FragRequired [RFC6956].

   o  Create the Ethernet header.

   o  Set the Destination MAC address of the Ethernet header to the
      value found in the DSTFE field.

   o  Set the Source MAC address of the Ethernet header to the value
      found in the SRCFE field.

   o  If the optional IFETYPE is present, set the Ethernet type to the
      value found in IFETYPE.  If IFETYPE is absent, the standard
      Inter-FE LFB Ethernet type is used (XXX: Note to editor: to be
      updated).

   o  Encapsulate each allowed metadatum in a TLV, using the metaid as
      the "type" field in the TLV header.  The TLV should be aligned to
      32 bits, which means padding with zeroes may be needed to ensure
      alignment.

   o  Update the metadata length to the sum of each TLV's space plus 2
      bytes (for the 16-bit metadata length field itself).

   The resulting packet is sent to the next LFB instance connected to
   the OUT1 LFB port, typically a port LFB.

   In the case of a failed lookup, the original packet and associated
   metadata are sent out the EXCEPTIONOUT port with an exceptionID of
   EncapTableLookupFailed [RFC6956].  Note that the EXCEPTIONOUT LFB
   port is merely an abstraction, and an implementation may in fact
   drop packets as described above.
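   The egress steps above can be summarized in pseudocode.  The
   following C-like sketch is illustrative only; helper names such as
   lookup_ifetable() and append_metadata_tlvs() are hypothetical, and
   error handling is condensed.

   /* Illustrative egress flow for one packet (not normative). */
   int ife_egress(struct lfb_port *in, struct pkt *p)
   {
       struct ife_info *row = lookup_ifetable(in->index);
       if (!row)
           return exception_out(p, EncapTableLookupFailed);

       stats_inc(row->stats, p->len);            /* packets and bytes */

       int metalen = filter_and_size_metadata(p, row->metafilter);
       if (row->metafilter && metalen == 0)
           return exception_out(p, EncapTableLookupFailed);

       if (eth_hdr_size() + 2 + metalen + p->len > egress_mtu())
           return exception_out(p, FragRequired); /* bumps errors too */

       push_eth_header(p, row->dstfe, row->srcfe,
                       row->ifetype ? row->ifetype : ETH_P_IFE);
       append_metadata_tlvs(p, row->metafilter);  /* 32-bit aligned   */
       set_metalen_field(p, 2 + metalen);         /* includes itself  */

       return send_out(p, OUT1);
   }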
6.1.2.  Ingress Processing

   An ingressing inter-FE LFB packet is recognized by inspecting the
   ethertype and, optionally, the destination and source MAC addresses.
   A matching packet is mapped to an LFB instance port in the
   IngressInGroup.  The IFETable table row entry matching the LFB
   instance port may have optionally programmed metadata filters.  In
   such a case, the ingress processing should use the metadata filters
   as a whitelist of which metadata are to be allowed.

   o  Increment statistics for the packet and byte counts observed.

   o  Read the metadata length field and walk the packet data,
      extracting the metadata values from the TLVs.  For each metadatum
      extracted, in the presence of metadata filters, the metaid is
      compared against the relevant IFETable row metafilter list.  If
      the metadatum is recognized and allowed by the filter, the
      corresponding implementation metadatum field is set.  If an
      unknown metadatum ID is encountered, or if the metaid is not in
      the allowed filter list, the implementation is expected to ignore
      it, increment the packet error statistic, and proceed with the
      other metadata.

   o  Upon completion of processing all the metadata, the inter-FE LFB
      instance resets the data pointer to the original payload, i.e.,
      it skips the IFE header information.  At this point, the original
      packet that was passed to the egress Inter-FE LFB at the source
      FE is reconstructed.  This data is then passed, along with the
      reconstructed metadata, downstream to the next LFB instance in
      the graph.

   In the case of a processing failure at either the ingress or egress
   positioning of the LFB, the packet and metadata are sent out the
   EXCEPTIONOUT LFB port with the appropriate error ID.  Note that the
   EXCEPTIONOUT LFB port is merely an abstraction, and an
   implementation may in fact drop packets as described above.
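   For illustration, the ingress walk can be condensed into the
   following C-like sketch, reusing the hypothetical ife_meta_tlv
   structure from the sketch in Section 5.2.  As with the egress
   sketch, the helpers (tlv_at(), tlv_aligned_size(), etc.) are
   hypothetical and the code is not normative.

   /* Illustrative ingress flow for one encapsulated frame. */
   int ife_ingress(struct lfb_port *in, struct pkt *p)
   {
       struct ife_info *row = lookup_ifetable(in->index);
       uint16_t metalen = read_be16(p, ETH_HLEN);  /* incl. itself   */
       uint16_t off = ETH_HLEN + 2;                /* first TLV      */

       stats_inc(row->stats, p->len);              /* packets, bytes */

       while (off < ETH_HLEN + metalen) {
           struct ife_meta_tlv *tlv = tlv_at(p, off); /* points at p */
           uint16_t type = ntohs(tlv->type);

           if (known_metaid(type) &&
               (!row->metafilter ||
                filter_allows(row->metafilter, type)))
               set_metadatum(p, type, tlv->value, ntohs(tlv->len));
           else
               stats_err_inc(row->stats);   /* ignore, keep walking  */

           off += tlv_aligned_size(tlv);    /* TLVs 32-bit aligned   */
       }

       pkt_pull(p, ETH_HLEN + metalen);  /* back to original packet  */
       return send_out(p, OUT2);
   }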
6.2.  Components

   There are two LFB components accessed by the CE.  The reader is
   asked to refer to the definitions in Figure 8.

   The first component (ID 1), populated by the CE, is an array known
   as the IFETable table.  The array rows are made up of the IFEInfo
   structure.  The IFEInfo structure constitutes: the optionally
   present IFETYPE, the optionally present StatId, the Destination MAC
   address (DSTFE), the Source MAC address (SRCFE), and the optionally
   present array of allowed metaids (MetaFilterList).

   The second component (ID 2), populated by the FE and read by the CE,
   is an indexed array known as the IFEStats table.  Each IFEStats row
   carries statistics information in the structure bstats.

   A note about the StatId relationship between the IFETable table and
   the IFEStats table: an implementation may choose to map between an
   IFETable row and an IFEStats table row by using the StatId entry in
   the matching IFETable row.  In that case, the IFETable StatId must
   be present.  An alternative implementation may map an IFETable row
   to an IFEStats table row at provisioning time.  Yet another
   alternative implementation may choose not to use the IFETable row
   StatId at all and instead use the IFETable row index as the IFEStats
   index.  For these reasons, the StatId component is optional.

6.3.  Inter-FE LFB XML Model

   <LFBLibrary xmlns="urn:ietf:params:xml:ns:forces:lfbmodel:1.0"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       provides="IFE">
     <frameDefs>
       <frameDef>
         <name>PacketAny</name>
         <synopsis>Arbitrary Packet</synopsis>
       </frameDef>
       <frameDef>
         <name>InterFEFrame</name>
         <synopsis>
           Ethernet Frame with encapsulated IFE information
         </synopsis>
       </frameDef>
     </frameDefs>

     <dataTypeDefs>
       <dataTypeDef>
         <name>bstats</name>
         <synopsis>Basic stats</synopsis>
         <struct>
           <component componentID="1">
             <name>bytes</name>
             <synopsis>The total number of bytes seen</synopsis>
             <typeRef>uint64</typeRef>
           </component>
           <component componentID="2">
             <name>packets</name>
             <synopsis>The total number of packets seen</synopsis>
             <typeRef>uint32</typeRef>
           </component>
           <component componentID="3">
             <name>errors</name>
             <synopsis>The total number of packets with errors</synopsis>
             <typeRef>uint32</typeRef>
           </component>
         </struct>
       </dataTypeDef>
       <dataTypeDef>
         <name>IFEInfo</name>
         <synopsis>Describing IFE table row Information</synopsis>
         <struct>
           <component componentID="1">
             <name>IFETYPE</name>
             <synopsis>
               the ethernet type to be used for outgoing IFE frame
             </synopsis>
             <optional/>
             <typeRef>uint16</typeRef>
           </component>
           <component componentID="2">
             <name>StatId</name>
             <synopsis>
               the Index into the stats table
             </synopsis>
             <optional/>
             <typeRef>uint32</typeRef>
           </component>
           <component componentID="3">
             <name>DSTFE</name>
             <synopsis>
               the destination MAC address of destination FE
             </synopsis>
             <typeRef>byte[6]</typeRef>
           </component>
           <component componentID="4">
             <name>SRCFE</name>
             <synopsis>
               the source MAC address used for the source FE
             </synopsis>
             <typeRef>byte[6]</typeRef>
           </component>
           <component componentID="5">
             <name>MetaFilterList</name>
             <synopsis>
               the allowed metadata filter table
             </synopsis>
             <optional/>
             <array type="variable-size">
               <typeRef>uint32</typeRef>
             </array>
           </component>
         </struct>
       </dataTypeDef>
     </dataTypeDefs>

     <LFBClassDefs>
       <LFBClassDef LFBClassID="18">
         <name>IFE</name>
         <synopsis>
           This LFB describes IFE connectivity parameterization
         </synopsis>
         <version>1.0</version>
         <inputPorts>
           <inputPort group="true">
             <name>EgressInGroup</name>
             <synopsis>
               The input port group of the egress side.
               It expects any type of Ethernet frame.
             </synopsis>
             <expectation>
               <frameExpected>
                 <ref>PacketAny</ref>
               </frameExpected>
             </expectation>
           </inputPort>
           <inputPort group="true">
             <name>IngressInGroup</name>
             <synopsis>
               The input port group of the ingress side.
               It expects an interFE encapsulated Ethernet frame.
             </synopsis>
             <expectation>
               <frameExpected>
                 <ref>InterFEFrame</ref>
               </frameExpected>
             </expectation>
           </inputPort>
         </inputPorts>
         <outputPorts>
           <outputPort>
             <name>OUT1</name>
             <synopsis>
               The output port of the egress side.
             </synopsis>
             <product>
               <frameProduced>
                 <ref>InterFEFrame</ref>
               </frameProduced>
             </product>
           </outputPort>
           <outputPort>
             <name>OUT2</name>
             <synopsis>
               The output port of the Ingress side.
             </synopsis>
             <product>
               <frameProduced>
                 <ref>PacketAny</ref>
               </frameProduced>
             </product>
           </outputPort>
           <outputPort>
             <name>EXCEPTIONOUT</name>
             <synopsis>
               The exception handling path
             </synopsis>
             <product>
               <frameProduced>
                 <ref>PacketAny</ref>
               </frameProduced>
               <metadataProduced>
                 <ref>ExceptionID</ref>
               </metadataProduced>
             </product>
           </outputPort>
         </outputPorts>
         <components>
           <component componentID="1" access="read-write">
             <name>IFETable</name>
             <synopsis>
               the table of all InterFE relations
             </synopsis>
             <array type="variable-size">
               <typeRef>IFEInfo</typeRef>
             </array>
           </component>
           <component componentID="2" access="read-only">
             <name>IFEStats</name>
             <synopsis>
               the stats corresponding to the IFETable table
             </synopsis>
             <array type="variable-size">
               <typeRef>bstats</typeRef>
             </array>
           </component>
         </components>
       </LFBClassDef>
     </LFBClassDefs>
   </LFBLibrary>

                      Figure 8: Inter-FE LFB XML

7.  Acknowledgements

   The authors would like to thank Joel Halpern and Dave Hood for the
   stimulating discussions.  Evangelos Haleplidis shepherded and
   contributed to improving this document.  Alia Atlas was the AD
   sponsor of this document and did a tremendous job of critiquing it.
   The authors are grateful to Joel Halpern, in his role as the Routing
   Area reviewer, for shaping the content of this document.

8.  IANA Considerations

   This memo includes one IANA request, within the registry at
   https://www.iana.org/assignments/forces.

   The request is for the sub-registry "Logical Functional Block (LFB)
   Class Names and Class Identifiers": the reservation of the LFB class
   name IFE with LFB class identifier 18 and version 1.0.

   +--------------+---------+---------+------------------+------------+
   | LFB Class    | LFB     | LFB     | Description      | Reference  |
   | Identifier   | Class   | Version |                  |            |
   |              | Name    |         |                  |            |
   +--------------+---------+---------+------------------+------------+
   | 18           | IFE     | 1.0     | An IFE LFB to    | This       |
   |              |         |         | standardize      | document   |
   |              |         |         | inter-FE LFB for |            |
   |              |         |         | ForCES Network   |            |
   |              |         |         | Elements         |            |
   +--------------+---------+---------+------------------+------------+

    Logical Functional Block (LFB) Class Names and Class Identifiers

9.  IEEE Assignment Considerations

   This memo includes a request for a new Ethernet protocol type
   (ethertype) as described in Section 5.2.
10.  Security Considerations

   The FEs involved in the Inter-FE LFB belong to the same Network
   Element (NE) and are within the scope of a single administrative
   Ethernet LAN private network.  Trust in the control policy and its
   treatment in the datapath already exists.

   This document does not alter the ForCES Model [RFC5812] or the
   ForCES Protocol [RFC5810].  As such, it has no impact on their
   security considerations.  This document simply defines the
   operational parameters and capabilities of an LFB that performs LFB
   class instance extensions across nodes under a single administrative
   control.  It does not attempt to analyze the presence or possibility
   of security interactions created by allowing LFB graph extension on
   packets.  Any such issues, if they exist, should be resolved by the
   designers of the particular datapath, i.e., they are not the
   responsibility of the general mechanism outlined in this document.
   One option for protecting Ethernet is the use of IEEE 802.1AE Media
   Access Control Security [ieee8021ae], which provides encryption and
   authentication.

11.  References

11.1.  Normative References

   [RFC5810]  Doria, A., Ed., Hadi Salim, J., Ed., Haas, R., Ed.,
              Khosravi, H., Ed., Wang, W., Ed., Dong, L., Gopal, R.,
              and J. Halpern, "Forwarding and Control Element
              Separation (ForCES) Protocol Specification", RFC 5810,
              DOI 10.17487/RFC5810, March 2010,
              <http://www.rfc-editor.org/info/rfc5810>.

   [RFC5811]  Hadi Salim, J. and K. Ogawa, "SCTP-Based Transport
              Mapping Layer (TML) for the Forwarding and Control
              Element Separation (ForCES) Protocol", RFC 5811,
              DOI 10.17487/RFC5811, March 2010,
              <http://www.rfc-editor.org/info/rfc5811>.

   [RFC5812]  Halpern, J. and J. Hadi Salim, "Forwarding and Control
              Element Separation (ForCES) Forwarding Element Model",
              RFC 5812, DOI 10.17487/RFC5812, March 2010,
              <http://www.rfc-editor.org/info/rfc5812>.

   [RFC7391]  Hadi Salim, J., "Forwarding and Control Element
              Separation (ForCES) Protocol Extensions", RFC 7391,
              DOI 10.17487/RFC7391, October 2014,
              <http://www.rfc-editor.org/info/rfc7391>.

   [RFC7408]  Haleplidis, E., "Forwarding and Control Element
              Separation (ForCES) Model Extension", RFC 7408,
              DOI 10.17487/RFC7408, November 2014,
              <http://www.rfc-editor.org/info/rfc7408>.

11.2.  Informative References

   [RFC2119]  Bradner, S., "Key words for use in RFCs to Indicate
              Requirement Levels", BCP 14, RFC 2119,
              DOI 10.17487/RFC2119, March 1997,
              <http://www.rfc-editor.org/info/rfc2119>.

   [RFC3746]  Yang, L., Dantu, R., Anderson, T., and R. Gopal,
              "Forwarding and Control Element Separation (ForCES)
              Framework", RFC 3746, DOI 10.17487/RFC3746, April 2004,
              <http://www.rfc-editor.org/info/rfc3746>.

   [RFC5405]  Eggert, L. and G. Fairhurst, "Unicast UDP Usage
              Guidelines for Application Designers", BCP 145, RFC 5405,
              DOI 10.17487/RFC5405, November 2008,
              <http://www.rfc-editor.org/info/rfc5405>.

   [RFC6956]  Wang, W., Haleplidis, E., Ogawa, K., Li, C., and J.
              Halpern, "Forwarding and Control Element Separation
              (ForCES) Logical Function Block (LFB) Library", RFC 6956,
              DOI 10.17487/RFC6956, June 2013,
              <http://www.rfc-editor.org/info/rfc6956>.

   [brcm-higig]
              Broadcom, "HiGig".

   [circuit-b]
              Fairhurst, G., "Network Transport Circuit Breakers", Work
              in Progress, September 2015.

   [ieee8021ae]
              IEEE, "IEEE Standard for Local and metropolitan area
              networks: Media Access Control (MAC) Security",
              IEEE 802.1AE-2006, August 2006.

   [linux-tc] Hadi Salim, J., "Linux Traffic Control Classifier-Action
              Subsystem Architecture", Proceedings of Netdev 0.1,
              February 2015.

   [tc-ife]   Hadi Salim, J. and D. Joachimpillai, "Distributing Linux
              Traffic Control Classifier-Action Subsystem", Proceedings
              of Netdev 0.1, February 2015.
   [vxlan-udp]
              "iproute2 and kernel code (drivers/net/vxlan.c)".

Authors' Addresses

   Damascane M. Joachimpillai
   Verizon
   60 Sylvan Rd
   Waltham, Mass.  02451
   USA

   Email: damascene.joachimpillai@verizon.com


   Jamal Hadi Salim
   Mojatatu Networks
   Suite 200, 15 Fitzgerald Rd.
   Ottawa, Ontario  K2H 9G1
   Canada

   Email: hadi@mojatatu.com