1 Network Working Group Eric C. Rosen 2 Internet Draft Cisco Systems, Inc. 3 Expiration Date: February 2000 4 Arun Viswanathan 5 Lucent Technologies 7 Ross Callon 8 IronBridge Networks, Inc. 
10 August 1999 12 Multiprotocol Label Switching Architecture 14 draft-ietf-mpls-arch-06.txt 16 Status of this Memo 18 This document is an Internet-Draft and is in full conformance with 19 all provisions of Section 10 of RFC2026. 21 Internet-Drafts are working documents of the Internet Engineering 22 Task Force (IETF), its areas, and its working groups. Note that 23 other groups may also distribute working documents as Internet- 24 Drafts. 26 Internet-Drafts are draft documents valid for a maximum of six months 27 and may be updated, replaced, or obsoleted by other documents at any 28 time. It is inappropriate to use Internet-Drafts as reference 29 material or to cite them other than as "work in progress." 31 The list of current Internet-Drafts can be accessed at 32 http://www.ietf.org/ietf/1id-abstracts.txt. 34 The list of Internet-Draft Shadow Directories can be accessed at 35 http://www.ietf.org/shadow.html. 37 Abstract 39 This internet draft specifies the architecture for Multiprotocol 40 Label Switching (MPLS). 42 Table of Contents 44 1 Specification ...................................... 4 45 2 Introduction to MPLS ............................... 4 46 2.1 Overview ........................................... 4 47 2.2 Terminology ........................................ 6 48 2.3 Acronyms and Abbreviations ......................... 9 49 2.4 Acknowledgments .................................... 10 50 3 MPLS Basics ........................................ 10 51 3.1 Labels ............................................. 10 52 3.2 Upstream and Downstream LSRs ....................... 11 53 3.3 Labeled Packet ..................................... 11 54 3.4 Label Assignment and Distribution .................. 12 55 3.5 Attributes of a Label Binding ...................... 12 56 3.6 Label Distribution Protocols ....................... 12 57 3.7 Unsolicited Downstream vs. Downstream-on-Demand .... 13 58 3.8 Label Retention Mode ............................... 
13 59 3.9 The Label Stack .................................... 14 60 3.10 The Next Hop Label Forwarding Entry (NHLFE) ........ 14 61 3.11 Incoming Label Map (ILM) ........................... 15 62 3.12 FEC-to-NHLFE Map (FTN) ............................. 15 63 3.13 Label Swapping ..................................... 16 64 3.14 Scope and Uniqueness of Labels ..................... 16 65 3.15 Label Switched Path (LSP), LSP Ingress, LSP Egress . 17 66 3.16 Penultimate Hop Popping ............................ 19 67 3.17 LSP Next Hop ....................................... 21 68 3.18 Invalid Incoming Labels ............................ 21 69 3.19 LSP Control: Ordered versus Independent ............ 21 70 3.20 Aggregation ........................................ 22 71 3.21 Route Selection .................................... 24 72 3.22 Lack of Outgoing Label ............................. 25 73 3.23 Time-to-Live (TTL) ................................. 25 74 3.24 Loop Control ....................................... 26 75 3.25 Label Encodings .................................... 27 76 3.25.1 MPLS-specific Hardware and/or Software ............. 27 77 3.25.2 ATM Switches as LSRs ............................... 27 78 3.25.3 Interoperability among Encoding Techniques ......... 29 79 3.26 Label Merging ...................................... 30 80 3.26.1 Non-merging LSRs ................................... 31 81 3.26.2 Labels for Merging and Non-Merging LSRs ............ 31 82 3.26.3 Merge over ATM ..................................... 32 83 3.26.3.1 Methods of Eliminating Cell Interleave ............. 32 84 3.26.3.2 Interoperation: VC Merge, VP Merge, and Non-Merge .. 33 85 3.27 Tunnels and Hierarchy .............................. 34 86 3.27.1 Hop-by-Hop Routed Tunnel ........................... 34 87 3.27.2 Explicitly Routed Tunnel ........................... 34 88 3.27.3 LSP Tunnels ........................................ 
34 89 3.27.4 Hierarchy: LSP Tunnels within LSPs ................. 35 90 3.27.5 Label Distribution Peering and Hierarchy ........... 35 91 3.28 Label Distribution Protocol Transport .............. 37 92 3.29 Why More than one Label Distribution Protocol? ..... 37 93 3.29.1 BGP and LDP ........................................ 37 94 3.29.2 Labels for RSVP Flowspecs .......................... 37 95 3.29.3 Labels for Explicitly Routed LSPs .................. 38 96 3.30 Multicast .......................................... 38 97 4 Some Applications of MPLS .......................... 38 98 4.1 MPLS and Hop by Hop Routed Traffic ................. 38 99 4.1.1 Labels for Address Prefixes ........................ 38 100 4.1.2 Distributing Labels for Address Prefixes ........... 39 101 4.1.2.1 Label Distribution Peers for an Address Prefix ..... 39 102 4.1.2.2 Distributing Labels ................................ 39 103 4.1.3 Using the Hop by Hop path as the LSP ............... 40 104 4.1.4 LSP Egress and LSP Proxy Egress .................... 41 105 4.1.5 The Implicit NULL Label ............................ 41 106 4.1.6 Option: Egress-Targeted Label Assignment ........... 42 107 4.2 MPLS and Explicitly Routed LSPs .................... 44 108 4.2.1 Explicitly Routed LSP Tunnels ...................... 44 109 4.3 Label Stacks and Implicit Peering .................. 45 110 4.4 MPLS and Multi-Path Routing ........................ 46 111 4.5 LSP Trees as Multipoint-to-Point Entities .......... 46 112 4.6 LSP Tunneling between BGP Border Routers ........... 47 113 4.7 Other Uses of Hop-by-Hop Routed LSP Tunnels ........ 49 114 4.8 MPLS and Multicast ................................. 49 115 5 Label Distribution Procedures (Hop-by-Hop) ......... 50 116 5.1 The Procedures for Advertising and Using labels .... 50 117 5.1.1 Downstream LSR: Distribution Procedure ............. 50 118 5.1.1.1 PushUnconditional .................................. 
51 119 5.1.1.2 PushConditional .................................... 51 120 5.1.1.3 PulledUnconditional ................................ 52 121 5.1.1.4 PulledConditional .................................. 52 122 5.1.2 Upstream LSR: Request Procedure .................... 53 123 5.1.2.1 RequestNever ....................................... 53 124 5.1.2.2 RequestWhenNeeded .................................. 53 125 5.1.2.3 RequestOnRequest ................................... 54 126 5.1.3 Upstream LSR: NotAvailable Procedure ............... 54 127 5.1.3.1 RequestRetry ....................................... 54 128 5.1.3.2 RequestNoRetry ..................................... 54 129 5.1.4 Upstream LSR: Release Procedure .................... 55 130 5.1.4.1 ReleaseOnChange .................................... 55 131 5.1.4.2 NoReleaseOnChange .................................. 55 132 5.1.5 Upstream LSR: labelUse Procedure ................... 55 133 5.1.5.1 UseImmediate ....................................... 56 134 5.1.5.2 UseIfLoopNotDetected ............................... 56 135 5.1.6 Downstream LSR: Withdraw Procedure ................. 56 136 5.2 MPLS Schemes: Supported Combinations of Procedures . 57 137 5.2.1 Schemes for LSRs that Support Label Merging ........ 57 138 5.2.2 Schemes for LSRs that do not Support Label Merging . 58 139 5.2.3 Interoperability Considerations .................... 59 140 6 Security Considerations ............................ 61 141 7 Intellectual Property .............................. 61 142 8 Authors' Addresses ................................. 61 143 9 References ......................................... 62 145 1. Specification 147 The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", 148 "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this 149 document are to be interpreted as described in RFC 2119. 151 2. 
Introduction to MPLS 153 This internet draft specifies the architecture for Multiprotocol 154 Label Switching (MPLS). 156 Note that the use of MPLS for multicast is left for further study. 158 2.1. Overview 160 As a packet of a connectionless network layer protocol travels from 161 one router to the next, each router makes an independent forwarding 162 decision for that packet. That is, each router analyzes the packet's 163 header, and each router runs a network layer routing algorithm. Each 164 router independently chooses a next hop for the packet, based on its 165 analysis of the packet's header and the results of running the 166 routing algorithm. 168 Packet headers contain considerably more information than is needed 169 simply to choose the next hop. Choosing the next hop can therefore be 170 thought of as the composition of two functions. The first function 171 partitions the entire set of possible packets into a set of 172 "Forwarding Equivalence Classes (FECs)". The second maps each FEC to 173 a next hop. Insofar as the forwarding decision is concerned, 174 different packets which get mapped into the same FEC are 175 indistinguishable. All packets which belong to a particular FEC and 176 which travel from a particular node will follow the same path (or if 177 certain kinds of multi-path routing are in use, they will all follow 178 one of a set of paths associated with the FEC). 180 In conventional IP forwarding, a particular router will typically 181 consider two packets to be in the same FEC if there is some address 182 prefix X in that router's routing tables such that X is the "longest 183 match" for each packet's destination address. As the packet traverses 184 the network, each hop in turn reexamines the packet and assigns it to 185 a FEC. 187 In MPLS, the assignment of a particular packet to a particular FEC is 188 done just once, as the packet enters the network. 
The FEC to which 189 the packet is assigned is encoded as a short fixed length value known 190 as a "label". When a packet is forwarded to its next hop, the label 191 is sent along with it; that is, the packets are "labeled" before they 192 are forwarded. 194 At subsequent hops, there is no further analysis of the packet's 195 network layer header. Rather, the label is used as an index into a 196 table which specifies the next hop, and a new label. The old label 197 is replaced with the new label, and the packet is forwarded to its 198 next hop. 200 In the MPLS forwarding paradigm, once a packet is assigned to a FEC, 201 no further header analysis is done by subsequent routers; all 202 forwarding is driven by the labels. This has a number of advantages 203 over conventional network layer forwarding. 205 - MPLS forwarding can be done by switches which are capable of 206 doing label lookup and replacement, but are either not capable of 207 analyzing the network layer headers, or are not capable of 208 analyzing the network layer headers at adequate speed. 210 - Since a packet is assigned to a FEC when it enters the network, 211 the ingress router may use, in determining the assignment, any 212 information it has about the packet, even if that information 213 cannot be gleaned from the network layer header. For example, 214 packets arriving on different ports may be assigned to different 215 FECs. Conventional forwarding, on the other hand, can only 216 consider information which travels with the packet in the packet 217 header. 219 - A packet that enters the network at a particular router can be 220 labeled differently than the same packet entering the network at 221 a different router, and as a result forwarding decisions that 222 depend on the ingress router can be easily made. This cannot be 223 done with conventional forwarding, since the identity of a 224 packet's ingress router does not travel with the packet. 
226 - The considerations that determine how a packet is assigned to a 227 FEC can become ever more and more complicated, without any impact 228 at all on the routers that merely forward labeled packets. 230 - Sometimes it is desirable to force a packet to follow a 231 particular route which is explicitly chosen at or before the time 232 the packet enters the network, rather than being chosen by the 233 normal dynamic routing algorithm as the packet travels through 234 the network. This may be done as a matter of policy, or to 235 support traffic engineering. In conventional forwarding, this 236 requires the packet to carry an encoding of its route along with 237 it ("source routing"). In MPLS, a label can be used to represent 238 the route, so that the identity of the explicit route need not be 239 carried with the packet. 241 Some routers analyze a packet's network layer header not merely to 242 choose the packet's next hop, but also to determine a packet's 243 "precedence" or "class of service". They may then apply different 244 discard thresholds or scheduling disciplines to different packets. 245 MPLS allows (but does not require) the precedence or class of service 246 to be fully or partially inferred from the label. In this case, one 247 may say that the label represents the combination of a FEC and a 248 precedence or class of service. 250 MPLS stands for "Multiprotocol" Label Switching, multiprotocol 251 because its techniques are applicable to ANY network layer protocol. 252 In this document, however, we focus on the use of IP as the network 253 layer protocol. 255 A router which supports MPLS is known as a "Label Switching Router", 256 or LSR. 258 A general discussion of issues related to MPLS is presented in "A 259 Framework for Multiprotocol Label Switching" [MPLS-FRMWRK]. 261 2.2. Terminology 263 This section gives a general conceptual overview of the terms used in 264 this document. 
Some of these terms are more precisely defined in 265 later sections of the document. 267 DLCI a label used in Frame Relay networks to 268 identify frame relay circuits 270 forwarding equivalence class a group of IP packets which are 271 forwarded in the same manner (e.g., 272 over the same path, with the same 273 forwarding treatment) 275 frame merge label merging, when it is applied to 276 operation over frame based media, so that 277 the potential problem of cell interleave 278 is not an issue. 280 label a short fixed length physically 281 contiguous identifier which is used to 282 identify a FEC, usually of local 283 significance. 285 label merging the replacement of multiple incoming 286 labels for a particular FEC with a single 287 outgoing label 289 label swap the basic forwarding operation consisting 290 of looking up an incoming label to 291 determine the outgoing label, 292 encapsulation, port, and other data 293 handling information. 295 label swapping a forwarding paradigm allowing 296 streamlined forwarding of data by using 297 labels to identify classes of data 298 packets which are treated 299 indistinguishably when forwarding. 301 label switched hop the hop between two MPLS nodes, on which 302 forwarding is done using labels. 304 label switched path The path through one or more LSRs at one 305 level of the hierarchy followed by 306 packets in a particular FEC. 308 label switching router an MPLS node which is capable of 309 forwarding native L3 packets 311 layer 2 the protocol layer under layer 3 (which 312 therefore offers the services used by 313 layer 3). Forwarding, when done by the 314 swapping of short fixed length labels, 315 occurs at layer 2 regardless of whether 316 the label being examined is an ATM 317 VPI/VCI, a frame relay DLCI, or an MPLS 318 label. 
320 layer 3 the protocol layer at which IP and its 321 associated routing protocols operate 322 link layer synonymous with layer 2 324 loop detection a method of dealing with loops in which 325 loops are allowed to be set up, and data 326 may be transmitted over the loop, but the 327 loop is later detected 329 loop prevention a method of dealing with loops in which 330 data is never transmitted over a loop 332 label stack an ordered set of labels 334 merge point a node at which label merging is done 336 MPLS domain a contiguous set of nodes which operate 337 MPLS routing and forwarding and which are 338 also in one Routing or Administrative 339 Domain 341 MPLS edge node an MPLS node that connects an MPLS domain 342 with a node which is outside of the 343 domain, either because it does not run 344 MPLS, and/or because it is in a different 345 domain. Note that if an LSR has a 346 neighboring host which is not running 347 MPLS, then that LSR is an MPLS edge node. 349 MPLS egress node an MPLS edge node in its role in handling 350 traffic as it leaves an MPLS domain 352 MPLS ingress node an MPLS edge node in its role in handling 353 traffic as it enters an MPLS domain 355 MPLS label a label which is carried in a packet 356 header, and which represents the packet's 357 FEC 359 MPLS node a node which is running MPLS. An MPLS 360 node will be aware of MPLS control 361 protocols, will operate one or more L3 362 routing protocols, and will be capable of 363 forwarding packets based on labels. An 364 MPLS node may optionally be also capable 365 of forwarding native L3 packets. 
367 MultiProtocol Label Switching an IETF working group and the effort 368 associated with the working group 370 network layer synonymous with layer 3 372 stack synonymous with label stack 374 switched path synonymous with label switched path 376 virtual circuit a circuit used by a connection-oriented 377 layer 2 technology such as ATM or Frame 378 Relay, requiring the maintenance of state 379 information in layer 2 switches. 381 VC merge label merging where the MPLS label is 382 carried in the ATM VCI field (or combined 383 VPI/VCI field), so as to allow multiple 384 VCs to merge into one single VC 386 VP merge label merging where the MPLS label is 387 carried in the ATM VPI field, so as to 388 allow multiple VPs to be merged into one 389 single VP. In this case two cells would 390 have the same VCI value only if they 391 originated from the same node. This 392 allows cells from different sources to be 393 distinguished via the VCI. 395 VPI/VCI a label used in ATM networks to identify 396 circuits 398 2.3. Acronyms and Abbreviations 400 ATM Asynchronous Transfer Mode 401 BGP Border Gateway Protocol 402 DLCI Data Link Circuit Identifier 403 FEC Forwarding Equivalence Class 404 FTN FEC to NHLFE Map 405 IGP Interior Gateway Protocol 406 ILM Incoming Label Map 407 IP Internet Protocol 408 LDP Label Distribution Protocol 409 L2 Layer 2 L3 Layer 3 410 LSP Label Switched Path 411 LSR Label Switching Router 412 MPLS MultiProtocol Label Switching 413 NHLFE Next Hop Label Forwarding Entry 414 SVC Switched Virtual Circuit 415 SVP Switched Virtual Path 416 TTL Time-To-Live 417 VC Virtual Circuit 418 VCI Virtual Circuit Identifier 419 VP Virtual Path 420 VPI Virtual Path Identifier 422 2.4. Acknowledgments 424 The ideas and text in this document have been collected from a number 425 of sources and comments received. We would like to thank Rick Boivie, 426 Paul Doolan, Nancy Feldman, Yakov Rekhter, Vijay Srinivasan, and 427 George Swallow for their inputs and ideas. 429 3. 
MPLS Basics 431 In this section, we introduce some of the basic concepts of MPLS and 432 describe the general approach to be used. 434 3.1. Labels 436 A label is a short, fixed length, locally significant identifier 437 which is used to identify a FEC. The label which is put on a 438 particular packet represents the Forwarding Equivalence Class to 439 which that packet is assigned. 441 Most commonly, a packet is assigned to a FEC based (completely or 442 partially) on its network layer destination address. However, the 443 label is never an encoding of that address. 445 If Ru and Rd are LSRs, they may agree that when Ru transmits a packet 446 to Rd, Ru will label the packet with label value L if and only if 447 the packet is a member of a particular FEC F. That is, they can 448 agree to a "binding" between label L and FEC F for packets moving 449 from Ru to Rd. As a result of such an agreement, L becomes Ru's 450 "outgoing label" representing FEC F, and L becomes Rd's "incoming 451 label" representing FEC F. 453 Note that L does not necessarily represent FEC F for any packets 454 other than those which are being sent from Ru to Rd. L is an 455 arbitrary value whose binding to F is local to Ru and Rd. 457 When we speak above of packets "being sent" from Ru to Rd, we do not 458 imply either that the packet originated at Ru or that its destination 459 is Rd. Rather, we mean to include packets which are "transit 460 packets" at one or both of the LSRs. 462 Sometimes it may be difficult or even impossible for Rd to tell, of 463 an arriving packet carrying label L, that the label L was placed in 464 the packet by Ru, rather than by some other LSR. (This will 465 typically be the case when Ru and Rd are not direct neighbors.) In 466 such cases, Rd must make sure that the binding from label to FEC is 467 one-to-one. 
That is, Rd MUST NOT agree with Ru1 to bind L to FEC F1, 468 while also agreeing with some other LSR Ru2 to bind L to a different 469 FEC F2, UNLESS Rd can always tell, when it receives a packet with 470 incoming label L, whether the label was put on the packet by Ru1 or 471 whether it was put on by Ru2. 473 It is the responsibility of each LSR to ensure that it can uniquely 474 interpret its incoming labels. 476 3.2. Upstream and Downstream LSRs 478 Suppose Ru and Rd have agreed to bind label L to FEC F, for packets 479 sent from Ru to Rd. Then with respect to this binding, Ru is the 480 "upstream LSR", and Rd is the "downstream LSR". 482 To say that one node is upstream and one is downstream with respect 483 to a given binding means only that a particular label represents a 484 particular FEC in packets travelling from the upstream node to the 485 downstream node. This is NOT meant to imply that packets in that FEC 486 would actually be routed from the upstream node to the downstream 487 node. 489 3.3. Labeled Packet 491 A "labeled packet" is a packet into which a label has been encoded. 492 In some cases, the label resides in an encapsulation header which 493 exists specifically for this purpose. In other cases, the label may 494 reside in an existing data link or network layer header, as long as 495 there is a field which is available for that purpose. The particular 496 encoding technique to be used must be agreed to by both the entity 497 which encodes the label and the entity which decodes the label. 499 3.4. Label Assignment and Distribution 501 In the MPLS architecture, the decision to bind a particular label L 502 to a particular FEC F is made by the LSR which is DOWNSTREAM with 503 respect to that binding. The downstream LSR then informs the 504 upstream LSR of the binding. Thus labels are "downstream-assigned", 505 and label bindings are distributed in the "downstream to upstream" 506 direction. 
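The downstream-assigned distribution described above can be sketched as follows. This is an illustrative sketch only, not part of the architecture; the class, method, and label-space values are our own invented names, and a real label distribution protocol carries much more state.

```python
# Illustrative sketch (not from the spec): downstream label assignment.
# Rd, the downstream LSR, chooses the label for a FEC and advertises the
# binding upstream; Ru records it as its outgoing label for that FEC.

class LSR:
    def __init__(self, name):
        self.name = name
        self.next_label = 16   # assumed start of the usable label space
        self.incoming = {}     # label -> FEC: bindings this LSR assigned
        self.outgoing = {}     # FEC -> label: bindings learned from downstream

    def bind(self, fec):
        """Acting as the downstream LSR, allocate a label for a FEC."""
        label = self.next_label
        self.next_label += 1
        self.incoming[label] = fec
        return label

    def advertise(self, fec, upstream):
        """Distribute the binding in the downstream-to-upstream direction."""
        label = self.bind(fec)
        upstream.outgoing[fec] = label

rd = LSR("Rd")
ru = LSR("Ru")
rd.advertise("192.0.2.0/24", ru)
# ru now holds label 16 as its outgoing label for that FEC,
# and rd holds the same value as an incoming label.
```

Note how the same value L appears in two roles: it is Ru's outgoing label and Rd's incoming label for the FEC, exactly as in Section 3.1.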
508 If an LSR has been designed so that it can only look up labels that 509 fall into a certain numeric range, then it merely needs to ensure 510 that it only binds labels that are in that range. 512 3.5. Attributes of a Label Binding 514 A particular binding of label L to FEC F, distributed by Rd to Ru, 515 may have associated "attributes". If Ru, acting as a downstream LSR, 516 also distributes a binding of a label to FEC F, then under certain 517 conditions, it may be required to also distribute the corresponding 518 attribute that it received from Rd. 520 3.6. Label Distribution Protocols 522 A label distribution protocol is a set of procedures by which one LSR 523 informs another of the label/FEC bindings it has made. Two LSRs 524 which use a label distribution protocol to exchange label/FEC binding 525 information are known as "label distribution peers" with respect to 526 the binding information they exchange. If two LSRs are label 527 distribution peers, we will speak of there being a "label 528 distribution adjacency" between them. 530 (N.B.: two LSRs may be label distribution peers with respect to some 531 set of bindings, but not with respect to some other set of bindings.) 533 The label distribution protocol also encompasses any negotiations in 534 which two label distribution peers need to engage in order to learn 535 of each other's MPLS capabilities. 537 THE ARCHITECTURE DOES NOT ASSUME THAT THERE IS ONLY A SINGLE LABEL 538 DISTRIBUTION PROTOCOL. In fact, a number of different label 539 distribution protocols are being standardized. Existing protocols 540 have been extended so that label distribution can be piggybacked on 541 them (see, e.g., [MPLS-BGP], [MPLS-RSVP], [MPLS-RSVP-TUNNELS]). New 542 protocols have also been defined for the explicit purpose of 543 distributing labels (see, e.g., [MPLS-LDP], [MPLS-CR-LDP]). 
545 In this document, we try to use the acronym "LDP" to refer 546 specifically to the protocol defined in [MPLS-LDP]; when speaking of 547 label distribution protocols in general, we try to avoid the acronym. 549 3.7. Unsolicited Downstream vs. Downstream-on-Demand 551 The MPLS architecture allows an LSR to explicitly request, from its 552 next hop for a particular FEC, a label binding for that FEC. This is 553 known as "downstream-on-demand" label distribution. 555 The MPLS architecture also allows an LSR to distribute bindings to 556 LSRs that have not explicitly requested them. This is known as 557 "unsolicited downstream" label distribution. 559 It is expected that some MPLS implementations will provide only 560 downstream-on-demand label distribution, and some will provide only 561 unsolicited downstream label distribution, and some will provide 562 both. Which is provided may depend on the characteristics of the 563 interfaces which are supported by a particular implementation. 564 However, both of these label distribution techniques may be used in 565 the same network at the same time. On any given label distribution 566 adjacency, the upstream LSR and the downstream LSR must agree on 567 which technique is to be used. 569 3.8. Label Retention Mode 571 An LSR Ru may receive (or have received) a label binding for a 572 particular FEC from an LSR Rd, even though Rd is not Ru's next hop 573 (or is no longer Ru's next hop) for that FEC. 575 Ru then has the choice of whether to keep track of such bindings, or 576 whether to discard such bindings. If Ru keeps track of such 577 bindings, then it may immediately begin using the binding again if Rd 578 eventually becomes its next hop for the FEC in question. If Ru 579 discards such bindings, then if Rd later becomes the next hop, the 580 binding will have to be reacquired. 
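The keep-or-discard choice above can be sketched as follows. This is a hypothetical illustration, not the spec's machinery: the class and its fields are invented names, and a real LSR tracks per-adjacency state.

```python
# Illustrative sketch (names are ours, not the spec's): how an upstream
# LSR might handle a binding received from an LSR that is not currently
# its next hop for the FEC, under the two retention choices.

class LabelRetention:
    def __init__(self, keep_unused):
        self.keep_unused = keep_unused  # True: keep track of such bindings
        self.retained = {}   # (fec, peer) -> label, kept for possible later use
        self.in_use = {}     # fec -> label, binding from the current next hop
        self.next_hop = {}   # fec -> peer

    def receive_binding(self, fec, label, peer):
        if self.next_hop.get(fec) == peer:
            self.in_use[fec] = label
        elif self.keep_unused:
            self.retained[(fec, peer)] = label  # track the unused binding
        # otherwise: discard the binding

    def next_hop_change(self, fec, new_peer):
        self.next_hop[fec] = new_peer
        self.in_use.pop(fec, None)
        if (fec, new_peer) in self.retained:
            # the tracked binding is usable immediately, no re-acquisition
            self.in_use[fec] = self.retained.pop((fec, new_peer))

ru = LabelRetention(keep_unused=True)
ru.next_hop["F"] = "Rd1"
ru.receive_binding("F", 17, "Rd2")  # Rd2 is not the next hop: tracked
ru.next_hop_change("F", "Rd2")      # binding reused without re-request
```

With `keep_unused=False` the binding from Rd2 would be discarded, and on the next hop change it would have to be reacquired, which is the trade-off described above.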
582 If an LSR supports "Liberal Label Retention Mode", it maintains the 583 bindings between a label and a FEC which are received from LSRs which 584 are not its next hop for that FEC. If an LSR supports "Conservative 585 Label Retention Mode", it discards such bindings. 587 Liberal label retention mode allows for quicker adaptation to routing 588 changes, but conservative label retention mode requires an LSR 589 to maintain many fewer labels. 591 3.9. The Label Stack 593 So far, we have spoken as if a labeled packet carries only a single 594 label. As we shall see, it is useful to have a more general model in 595 which a labeled packet carries a number of labels, organized as a 596 last-in, first-out stack. We refer to this as a "label stack". 598 Although, as we shall see, MPLS supports a hierarchy, the processing 599 of a labeled packet is completely independent of the level of 600 hierarchy. The processing is always based on the top label, without 601 regard for the possibility that some number of other labels may have 602 been "above it" in the past, or that some number of other labels may 603 be below it at present. 605 An unlabeled packet can be thought of as a packet whose label stack 606 is empty (i.e., whose label stack has depth 0). 608 If a packet's label stack is of depth m, we refer to the label at the 609 bottom of the stack as the level 1 label, to the label above it (if 610 such exists) as the level 2 label, and to the label at the top of the 611 stack as the level m label. 613 The utility of the label stack will become clear when we introduce 614 the notion of LSP Tunnel and the MPLS Hierarchy (section 3.27). 616 3.10. The Next Hop Label Forwarding Entry (NHLFE) 618 The "Next Hop Label Forwarding Entry" (NHLFE) is used when forwarding 619 a labeled packet. It contains the following information: 621 1. the packet's next hop 623 2. 
the operation to perform on the packet's label stack; this is 624 one of the following operations: 626 a) replace the label at the top of the label stack with a 627 specified new label 629 b) pop the label stack 631 c) replace the label at the top of the label stack with a 632 specified new label, and then push one or more specified 633 new labels onto the label stack. 635 It may also contain: 637 d) the data link encapsulation to use when transmitting the packet 639 e) the way to encode the label stack when transmitting the packet 641 f) any other information needed in order to properly dispose of 642 the packet. 644 Note that at a given LSR, the packet's "next hop" might be that LSR 645 itself. In this case, the LSR would need to pop the top level label, 646 and then "forward" the resulting packet to itself. It would then 647 make another forwarding decision, based on what remains after the 648 label stacked is popped. This may still be a labeled packet, or it 649 may be the native IP packet. 651 This implies that in some cases the LSR may need to operate on the IP 652 header in order to forward the packet. 654 If the packet's "next hop" is the current LSR, then the label stack 655 operation MUST be to "pop the stack". 657 3.11. Incoming Label Map (ILM) 659 The "Incoming Label Map" (ILM) maps each incoming label to a set of 660 NHLFEs. It is used when forwarding packets that arrive as labeled 661 packets. 663 If the ILM maps a particular label to a set of NHLFEs that contains 664 more than one element, exactly one element of the set must be chosen 665 before the packet is forwarded. The procedures for choosing an 666 element from the set are beyond the scope of this document. Having 667 the ILM map a label to a set containing more than one NHLFE may be 668 useful if, e.g., it is desired to do load balancing over multiple 669 equal-cost paths. 671 3.12. FEC-to-NHLFE Map (FTN) 673 The "FEC-to-NHLFE" (FTN) maps each FEC to a set of NHLFEs. 
   It is used when forwarding packets that arrive unlabeled, but
   which are to be labeled before being forwarded.

   If the FTN maps a particular FEC to a set of NHLFEs that contains
   more than one element, exactly one element of the set must be
   chosen before the packet is forwarded.  The procedures for
   choosing an element from the set are beyond the scope of this
   document.  Having the FTN map a FEC to a set containing more than
   one NHLFE may be useful if, e.g., it is desired to do load
   balancing over multiple equal-cost paths.

3.13. Label Swapping

   Label swapping is the use of the following procedures to forward a
   packet.

   In order to forward a labeled packet, an LSR examines the label at
   the top of the label stack.  It uses the ILM to map this label to
   an NHLFE.  Using the information in the NHLFE, it determines where
   to forward the packet, and performs an operation on the packet's
   label stack.  It then encodes the new label stack into the packet,
   and forwards the result.

   In order to forward an unlabeled packet, an LSR analyzes the
   network layer header, to determine the packet's FEC.  It then uses
   the FTN to map this to an NHLFE.  Using the information in the
   NHLFE, it determines where to forward the packet, and performs an
   operation on the packet's label stack.  (Popping the label stack
   would, of course, be illegal in this case.)  It then encodes the
   new label stack into the packet, and forwards the result.

   IT IS IMPORTANT TO NOTE THAT WHEN LABEL SWAPPING IS IN USE, THE
   NEXT HOP IS ALWAYS TAKEN FROM THE NHLFE; THIS MAY IN SOME CASES BE
   DIFFERENT FROM WHAT THE NEXT HOP WOULD BE IF MPLS WERE NOT IN USE.

3.14. Scope and Uniqueness of Labels

   A given LSR Rd may bind label L1 to FEC F, and distribute that
   binding to label distribution peer Ru1.  Rd may also bind label L2
   to FEC F, and distribute that binding to label distribution peer
   Ru2.
   Whether or not L1 == L2 is not determined by the architecture;
   this is a local matter.

   A given LSR Rd may bind label L to FEC F1, and distribute that
   binding to label distribution peer Ru1.  Rd may also bind label L
   to FEC F2, and distribute that binding to label distribution peer
   Ru2.  IF (AND ONLY IF) RD CAN TELL, WHEN IT RECEIVES A PACKET
   WHOSE TOP LABEL IS L, WHETHER THE LABEL WAS PUT THERE BY RU1 OR BY
   RU2, THEN THE ARCHITECTURE DOES NOT REQUIRE THAT F1 == F2.  In
   such cases, we may say that Rd is using a different "label space"
   for the labels it distributes to Ru1 than for the labels it
   distributes to Ru2.

   In general, Rd can only tell whether it was Ru1 or Ru2 that put
   the particular label value L at the top of the label stack if the
   following conditions hold:

   - Ru1 and Ru2 are the only label distribution peers to which Rd
     distributed a binding of label value L, and

   - Ru1 and Ru2 are each directly connected to Rd via a point-to-
     point interface.

   When these conditions hold, an LSR may use labels that have "per
   interface" scope, i.e., which are only unique per interface.  We
   may say that the LSR is using a "per-interface label space".  When
   these conditions do not hold, the labels must be unique over the
   LSR which has assigned them, and we may say that the LSR is using
   a "per-platform label space."

   If a particular LSR Rd is attached to a particular LSR Ru over two
   point-to-point interfaces, then Rd may distribute to Ru a binding
   of label L to FEC F1, as well as a binding of label L to FEC F2,
   F1 != F2, if and only if each binding is valid only for packets
   which Ru sends to Rd over a particular one of the interfaces.  In
   all other cases, Rd MUST NOT distribute to Ru bindings of the same
   label value to two different FECs.

   This prohibition holds even if the bindings are regarded as being
   at different "levels of hierarchy".
   In MPLS, there is no notion of having a different label space for
   different levels of the hierarchy; when interpreting a label, the
   level of the label is irrelevant.

   The question arises as to whether it is possible for an LSR to use
   multiple per-platform label spaces, or to use multiple per-
   interface label spaces for the same interface.  This is not
   prohibited by the architecture.  However, in such cases the LSR
   must have some means, not specified by the architecture, of
   determining, for a particular incoming label, which label space
   that label belongs to.  For example, [MPLS-SHIM] specifies that a
   different label space is used for unicast packets than for
   multicast packets, and uses a data link layer codepoint to
   distinguish the two label spaces.

3.15. Label Switched Path (LSP), LSP Ingress, LSP Egress

   A "Label Switched Path (LSP) of level m" for a particular packet P
   is a sequence of routers,

                          <R1, ..., Rn>

   with the following properties:

   1. R1, the "LSP Ingress", is an LSR which pushes a label onto P's
      label stack, resulting in a label stack of depth m;

   2. For all i, 1<i<n, P has a label stack of depth m when received
      by LSR Ri;

   3. At no time during P's transit from R1 to R[n-1] does its label
      stack ever have depth less than m;

   4. For all i, 1<i<n: Ri transmits P to R[i+1] by means of MPLS,
      i.e., by using the label at the top of the label stack (the
      level m label) as an index into an ILM;

   5. For all i, 1<i<n: if a system S receives and forwards P after P
      is transmitted by Ri but before P is received by R[i+1] (e.g.,
      Ri and R[i+1] might be connected via a switched data link
      subnetwork, and S might be one of the data link switches), then
      S's forwarding decision is not based on the level m label, or
      on the network layer header.  This may be because:

      a) the decision is not based on the label stack or the network
         layer header at all;

      b) the decision is based on a label stack on which additional
         labels have been pushed (i.e., on a level m+k label, where
         k>0).

   In other words, we can speak of the level m LSP for Packet P as
   the sequence of routers:

   1. which begins with an LSR (an "LSP Ingress") that pushes on a
      level m label,

   2. all of whose intermediate LSRs make their forwarding decision
      by label switching on a level m label,

   3. which ends (at an "LSP Egress") when a forwarding decision is
      made by label switching on a level m-k label, where k>0, or
      when a forwarding decision is made by "ordinary", non-MPLS
      forwarding procedures.

   A consequence (or perhaps a presupposition) of this is that
   whenever an LSR pushes a label onto an already labeled packet, it
   needs to make sure that the new label corresponds to a FEC whose
   LSP Egress is the LSR that assigned the label which is now second
   in the stack.
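The per-packet procedures of sections 3.10 through 3.13 can be
sketched as follows.  This is an illustrative fragment only: the ILM
is simplified to map each incoming label to a single NHLFE, and the
dictionary layout is invented for the example.

```python
# Illustrative sketch of the ILM lookup and the NHLFE label stack
# operations (swap, pop, swap-then-push).  Data structures are
# simplified stand-ins, not from any MPLS specification.

def forward_labeled(packet, ilm):
    # The top of the stack is the last list element; processing is
    # always based on the top label, regardless of stack depth.
    nhlfe = ilm[packet["stack"][-1]]
    op, args = nhlfe["op"]
    if op == "swap":
        packet["stack"][-1] = args[0]
    elif op == "pop":
        packet["stack"].pop()
    elif op == "swap_push":
        # Replace the top label, then push one or more new labels.
        packet["stack"][-1] = args[0]
        packet["stack"].extend(args[1:])
    return nhlfe["next_hop"], packet
```

Note that the next hop is always taken from the NHLFE, never from the
network layer header of a labeled packet.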
   We will call a sequence of LSRs the "LSP for a particular FEC F"
   if it is an LSP of level m for a particular packet P when P's
   level m label is a label corresponding to FEC F.

   Consider the set of nodes which may be LSP ingress nodes for FEC
   F.  Then there is an LSP for FEC F which begins with each of those
   nodes.  If a number of those LSPs have the same LSP egress, then
   one can consider the set of such LSPs to be a tree, whose root is
   the LSP egress.  (Since data travels along this tree towards the
   root, this may be called a multipoint-to-point tree.)  We can thus
   speak of the "LSP tree" for a particular FEC F.

3.16. Penultimate Hop Popping

   Note that according to the definitions of section 3.15, if
   <R1, ..., Rn> is a level m LSP for packet P, P may be transmitted
   from R[n-1] to Rn with a label stack of depth m-1.  That is, the
   label stack may be popped at the penultimate LSR of the LSP,
   rather than at the LSP Egress.

   From an architectural perspective, this is perfectly appropriate.
   The purpose of the level m label is to get the packet to Rn.  Once
   R[n-1] has decided to send the packet to Rn, the label no longer
   has any function, and need no longer be carried.

   There is also a practical advantage to doing penultimate hop
   popping.  If one does not do this, then when the LSP egress
   receives a packet, it first looks up the top label, and determines
   as a result of that lookup that it is indeed the LSP egress.  Then
   it must pop the stack, and examine what remains of the packet.  If
   there is another label on the stack, the egress will look this up
   and forward the packet based on this lookup.  (In this case, the
   egress for the packet's level m LSP is also an intermediate node
   for its level m-1 LSP.)  If there is no other label on the stack,
   then the packet is forwarded according to its network layer
   destination address.
   Note that this would require the egress to do TWO lookups, either
   two label lookups or a label lookup followed by an address lookup.

   If, on the other hand, penultimate hop popping is used, then when
   the penultimate hop looks up the label, it determines:

   - that it is the penultimate hop, and

   - who the next hop is.

   The penultimate node then pops the stack, and forwards the packet
   based on the information gained by looking up the label that was
   previously at the top of the stack.  When the LSP egress receives
   the packet, the label which is now at the top of the stack will be
   the label which it needs to look up in order to make its own
   forwarding decision.  Or, if the packet was only carrying a single
   label, the LSP egress will simply see the network layer packet,
   which is just what it needs to see in order to make its forwarding
   decision.

   This technique allows the egress to do a single lookup, and also
   requires only a single lookup by the penultimate node.

   The creation of the forwarding "fastpath" in a label switching
   product may be greatly aided if it is known that only a single
   lookup is ever required:

   - the code may be simplified if it can assume that only a single
     lookup is ever needed

   - the code can be based on a "time budget" that assumes that only
     a single lookup is ever needed.

   In fact, when penultimate hop popping is done, the LSP Egress need
   not even be an LSR.

   However, some hardware switching engines may not be able to pop
   the label stack, so this cannot be universally required.  There
   may also be some situations in which penultimate hop popping is
   not desirable.  Therefore the penultimate node pops the label
   stack only if this is specifically requested by the egress node,
   OR if the next node in the LSP does not support MPLS.
   (If the next node in the LSP does support MPLS, but does not make
   such a request, the penultimate node has no way of knowing that it
   in fact is the penultimate node.)

   An LSR which is capable of popping the label stack at all MUST do
   penultimate hop popping when so requested by its downstream label
   distribution peer.

   Initial label distribution protocol negotiations MUST allow each
   LSR to determine whether its neighboring LSRs are capable of
   popping the label stack.  An LSR MUST NOT request a label
   distribution peer to pop the label stack unless the peer is
   capable of doing so.

   It may be asked whether the egress node can always interpret the
   top label of a received packet properly if penultimate hop popping
   is used.  As long as the uniqueness and scoping rules of section
   3.14 are obeyed, it is always possible to interpret the top label
   of a received packet unambiguously.

3.17. LSP Next Hop

   The LSP Next Hop for a particular labeled packet in a particular
   LSR is the LSR which is the next hop, as selected by the NHLFE
   entry used for forwarding that packet.

   The LSP Next Hop for a particular FEC is the next hop as selected
   by the NHLFE entry indexed by a label which corresponds to that
   FEC.

   Note that the LSP Next Hop may differ from the next hop which
   would be chosen by the network layer routing algorithm.  We will
   use the term "L3 next hop" when we refer to the latter.

3.18. Invalid Incoming Labels

   What should an LSR do if it receives a labeled packet with a
   particular incoming label, but has no binding for that label?  It
   is tempting to think that the labels can just be removed, and the
   packet forwarded as an unlabeled IP packet.  However, in some
   cases, doing so could cause a loop.
   If the upstream LSR thinks the label is bound to an explicit
   route, and the downstream LSR doesn't think the label is bound to
   anything, and if the hop by hop routing of the unlabeled IP packet
   brings the packet back to the upstream LSR, then a loop is formed.

   It is also possible that the label was intended to represent a
   route which cannot be inferred from the IP header.

   Therefore, when a labeled packet is received with an invalid
   incoming label, it MUST be discarded, UNLESS it is determined by
   some means (not within the scope of the current document) that
   forwarding it unlabeled cannot cause any harm.

3.19. LSP Control: Ordered versus Independent

   Some FECs correspond to address prefixes which are distributed via
   a dynamic routing algorithm.  The setup of the LSPs for these FECs
   can be done in one of two ways: Independent LSP Control or Ordered
   LSP Control.

   In Independent LSP Control, each LSR, upon noting that it
   recognizes a particular FEC, makes an independent decision to bind
   a label to that FEC and to distribute that binding to its label
   distribution peers.  This corresponds to the way that conventional
   IP datagram routing works; each node makes an independent decision
   as to how to treat each packet, and relies on the routing
   algorithm to converge rapidly so as to ensure that each datagram
   is correctly delivered.

   In Ordered LSP Control, an LSR only binds a label to a particular
   FEC if it is the egress LSR for that FEC, or if it has already
   received a label binding for that FEC from its next hop for that
   FEC.

   If one wants to ensure that traffic in a particular FEC follows a
   path with some specified set of properties (e.g., that the traffic
   does not traverse any node twice, that a specified amount of
   resources are available to the traffic, that the traffic follows
   an explicitly specified path, etc.) ordered control must be used.
   With independent control, some LSRs may begin label switching
   traffic in the FEC before the LSP is completely set up, and thus
   some traffic in the FEC may follow a path which does not have the
   specified set of properties.  Ordered control also needs to be
   used if the recognition of the FEC is a consequence of the setting
   up of the corresponding LSP.

   Ordered LSP setup may be initiated either by the ingress or the
   egress.

   Ordered control and independent control are fully interoperable.
   However, unless all LSRs in an LSP are using ordered control, the
   overall effect on network behavior is largely that of independent
   control, since one cannot be sure that an LSP is not used until it
   is fully set up.

   This architecture allows the choice between independent control
   and ordered control to be a local matter.  Since the two methods
   interwork, a given LSR need support only one or the other.
   Generally speaking, the choice of independent versus ordered
   control does not appear to have any effect on the label
   distribution mechanisms which need to be defined.

3.20. Aggregation

   One way of partitioning traffic into FECs is to create a separate
   FEC for each address prefix which appears in the routing table.
   However, within a particular MPLS domain, this may result in a set
   of FECs such that all traffic in all those FECs follows the same
   route.  For example, a set of distinct address prefixes might all
   have the same egress node, and label swapping might be used only
   to get the traffic to the egress node.  In this case, within the
   MPLS domain, the union of those FECs is itself a FEC.  This
   creates a choice: should a distinct label be bound to each
   component FEC, or should a single label be bound to the union, and
   that label applied to all traffic in the union?
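The choice posed above can be made concrete with a small sketch.  The
function below is hypothetical; it contrasts the finest granularity
(one label per prefix) with a coarser one (one label per egress
node), under the assumption that each prefix's egress node is known.

```python
# Illustrative sketch of aggregation granularity: prefixes that share
# an egress node can share one FEC and hence one label.  The function
# name and label numbering are invented for this example.

def bind_labels(routes, granularity, next_label=100):
    # routes: mapping of address prefix -> egress node
    labels = {}
    if granularity == "per-prefix":
        # Finest granularity: a distinct label for every prefix.
        for prefix in routes:
            labels[prefix] = next_label
            next_label += 1
    else:
        # Coarsest granularity: one label per egress node; all
        # prefixes with the same egress share a label.
        per_egress = {}
        for prefix, egress in routes.items():
            if egress not in per_egress:
                per_egress[egress] = next_label
                next_label += 1
            labels[prefix] = per_egress[egress]
    return labels
```

With per-egress binding, the number of labels (and the label
distribution traffic) grows with the number of egress nodes rather
than with the size of the routing table.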
   The procedure of binding a single label to a union of FECs which
   is itself a FEC (within some domain), and of applying that label
   to all traffic in the union, is known as "aggregation".  The MPLS
   architecture allows aggregation.  Aggregation may reduce the
   number of labels which are needed to handle a particular set of
   packets, and may also reduce the amount of label distribution
   control traffic needed.

   Given a set of FECs which are "aggregatable" into a single FEC, it
   is possible to (a) aggregate them into a single FEC, (b) aggregate
   them into a set of FECs, or (c) not aggregate them at all.  Thus
   we can speak of the "granularity" of aggregation, with (a) being
   the "coarsest granularity", and (c) being the "finest
   granularity".

   When ordered control is used, each LSR should adopt, for a given
   set of FECs, the granularity used by its next hop for those FECs.

   When independent control is used, it is possible that there will
   be two adjacent LSRs, Ru and Rd, which aggregate some set of FECs
   differently.

   If Ru has finer granularity than Rd, this does not cause a
   problem.  Ru distributes more labels for that set of FECs than Rd
   does.  This means that when Ru needs to forward labeled packets in
   those FECs to Rd, it may need to map n labels into m labels, where
   n > m.  As an option, Ru may withdraw the set of n labels that it
   has distributed, and then distribute a set of m labels,
   corresponding to Rd's level of granularity.  This is not necessary
   to ensure correct operation, but it does result in a reduction of
   the number of labels distributed by Ru, and Ru is not gaining any
   particular advantage by distributing the larger number of labels.
   The decision whether to do this or not is a local matter.
   If Ru has coarser granularity than Rd (i.e., Rd has distributed n
   labels for the set of FECs, while Ru has distributed m, where
   n > m), it has two choices:

   - It may adopt Rd's finer level of granularity.  This would
     require it to withdraw the m labels it has distributed, and
     distribute n labels.  This is the preferred option.

   - It may simply map its m labels into a subset of Rd's n labels,
     if it can determine that this will produce the same routing.
     For example, suppose that Ru applies a single label to all
     traffic that needs to pass through a certain egress LSR, whereas
     Rd binds a number of different labels to such traffic, depending
     on the individual destination addresses of the packets.  If Ru
     knows the address of the egress router, and if Rd has bound a
     label to the FEC which is identified by that address, then Ru
     can simply apply that label.

   In any event, every LSR needs to know (by configuration) what
   granularity to use for labels that it assigns.  Where ordered
   control is used, this requires each node to know the granularity
   only for FECs which leave the MPLS network at that node.  For
   independent control, best results may be obtained by ensuring that
   all LSRs are consistently configured to know the granularity for
   each FEC.  However, in many cases this may be done by using a
   single level of granularity which applies to all FECs (such as
   "one label per IP prefix in the forwarding table", or "one label
   per egress node").

3.21. Route Selection

   Route selection refers to the method used for selecting the LSP
   for a particular FEC.  The proposed MPLS protocol architecture
   supports two options for Route Selection: (1) hop by hop routing,
   and (2) explicit routing.

   Hop by hop routing allows each node to independently choose the
   next hop for each FEC.  This is the usual mode today in existing
   IP networks.
   A "hop by hop routed LSP" is an LSP whose route is selected using
   hop by hop routing.

   In an explicitly routed LSP, each LSR does not independently
   choose the next hop; rather, a single LSR, generally the LSP
   ingress or the LSP egress, specifies several (or all) of the LSRs
   in the LSP.  If a single LSR specifies the entire LSP, the LSP is
   "strictly" explicitly routed.  If a single LSR specifies only some
   of the LSP, the LSP is "loosely" explicitly routed.

   The sequence of LSRs followed by an explicitly routed LSP may be
   chosen by configuration, or may be selected dynamically by a
   single node (for example, the egress node may make use of the
   topological information learned from a link state database in
   order to compute the entire path for the tree ending at that
   egress node).

   Explicit routing may be useful for a number of purposes, such as
   policy routing or traffic engineering.  In MPLS, the explicit
   route needs to be specified at the time that labels are assigned,
   but the explicit route does not have to be specified with each IP
   packet.  This makes MPLS explicit routing much more efficient than
   the alternative of IP source routing.

   The procedures for making use of explicit routes, either strict or
   loose, are beyond the scope of this document.

3.22. Lack of Outgoing Label

   When a labeled packet is traveling along an LSP, it may
   occasionally happen that it reaches an LSR at which the ILM does
   not map the packet's incoming label into an NHLFE, even though the
   incoming label is itself valid.  This can happen due to transient
   conditions, or due to an error at the LSR which should be the
   packet's next hop.

   It is tempting in such cases to strip off the label stack and
   attempt to forward the packet further via conventional forwarding,
   based on its network layer header.
   However, in general this is not a safe procedure:

   - If the packet has been following an explicitly routed LSP, this
     could result in a loop.

   - The packet's network header may not contain enough information
     to enable this particular LSR to forward it correctly.

   Unless it can be determined (through some means outside the scope
   of this document) that neither of these situations obtains, the
   only safe procedure is to discard the packet.

3.23. Time-to-Live (TTL)

   In conventional IP forwarding, each packet carries a "Time To
   Live" (TTL) value in its header.  Whenever a packet passes through
   a router, its TTL gets decremented by 1; if the TTL reaches 0
   before the packet has reached its destination, the packet gets
   discarded.

   This provides some level of protection against forwarding loops
   that may exist due to misconfigurations, or due to failure or slow
   convergence of the routing algorithm.  TTL is sometimes used for
   other functions as well, such as multicast scoping, and supporting
   the "traceroute" command.  This implies that there are two TTL-
   related issues that MPLS needs to deal with: (i) TTL as a way to
   suppress loops; (ii) TTL as a way to accomplish other functions,
   such as limiting the scope of a packet.

   When a packet travels along an LSP, it SHOULD emerge with the same
   TTL value that it would have had if it had traversed the same
   sequence of routers without having been label switched.  If the
   packet travels along a hierarchy of LSPs, the total number of
   LSR-hops traversed SHOULD be reflected in its TTL value when it
   emerges from the hierarchy of LSPs.

   The way that TTL is handled may vary depending upon whether the
   MPLS label values are carried in an MPLS-specific "shim" header
   [MPLS-SHIM], or if the MPLS labels are carried in an L2 header,
   such as an ATM header [MPLS-ATM] or a frame relay header
   [MPLS-FRMRLY].
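For the shim header case, the TTL behavior described above (load at
ingress, decrement at each LSR-hop, copy back at egress) can be
sketched as follows; the packet representation here is invented for
illustration.

```python
# Sketch of shim-header TTL handling.  The dictionary fields
# ("ip_ttl", "shim") are invented stand-ins for the real headers.

def push_shim(packet, label):
    # Ingress: the shim TTL is initially loaded from the network
    # layer header TTL field.
    packet["shim"] = {"label": label, "ttl": packet["ip_ttl"]}

def lsr_hop(packet):
    # Each LSR-hop decrements the shim TTL; a result of 0 means the
    # packet should be discarded.
    packet["shim"]["ttl"] -= 1
    return packet["shim"]["ttl"] > 0

def pop_shim(packet):
    # Egress: the shim TTL is copied back into the network layer
    # header, so the packet emerges with the TTL it would have had
    # without label switching.
    packet["ip_ttl"] = packet["shim"]["ttl"]
    del packet["shim"]
```

A packet entering with TTL 64 and crossing three LSR-hops thus leaves
the LSP with TTL 61, just as it would have with conventional
forwarding across the same routers.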
   If the label values are encoded in a "shim" that sits between the
   data link and network layer headers, then this shim MUST have a
   TTL field that SHOULD be initially loaded from the network layer
   header TTL field, SHOULD be decremented at each LSR-hop, and
   SHOULD be copied into the network layer header TTL field when the
   packet emerges from its LSP.

   If the label values are encoded in a data link layer header (e.g.,
   the VPI/VCI field in ATM's AAL5 header), and the labeled packets
   are forwarded by an L2 switch (e.g., an ATM switch), and the data
   link layer (like ATM) does not itself have a TTL field, then it
   will not be possible to decrement a packet's TTL at each LSR-hop.
   An LSP segment which consists of a sequence of LSRs that cannot
   decrement a packet's TTL will be called a "non-TTL LSP segment".

   When a packet emerges from a non-TTL LSP segment, it SHOULD
   however be given a TTL that reflects the number of LSR-hops it
   traversed.  In the unicast case, this can be achieved by
   propagating a meaningful LSP length to ingress nodes, enabling the
   ingress to decrement the TTL value before forwarding packets into
   a non-TTL LSP segment.

   Sometimes it can be determined, upon ingress to a non-TTL LSP
   segment, that a particular packet's TTL will expire before the
   packet reaches the egress of that non-TTL LSP segment.  In this
   case, the LSR at the ingress to the non-TTL LSP segment must not
   label switch the packet.  This means that special procedures must
   be developed to support traceroute functionality; for example,
   traceroute packets may be forwarded using conventional hop by hop
   forwarding.

3.24. Loop Control

   On a non-TTL LSP segment, by definition, TTL cannot be used to
   protect against forwarding loops.
   The importance of loop control may depend on the particular
   hardware being used to provide the LSR functions along the non-TTL
   LSP segment.

   Suppose, for instance, that ATM switching hardware is being used
   to provide MPLS switching functions, with the label being carried
   in the VPI/VCI field.  Since ATM switching hardware cannot
   decrement TTL, there is no protection against loops.  If the ATM
   hardware is capable of providing fair access to the buffer pool
   for incoming cells carrying different VPI/VCI values, this looping
   may not have any deleterious effect on other traffic.  If the ATM
   hardware cannot provide fair buffer access of this sort, however,
   then even transient loops may cause severe degradation of the
   LSR's total performance.

   Even if fair buffer access can be provided, it is still worthwhile
   to have some means of detecting loops that last "longer than
   possible".  In addition, even where TTL and/or per-VC fair queuing
   provides a means for surviving loops, it still may be desirable
   where practical to avoid setting up LSPs which loop.  All LSRs
   that may attach to non-TTL LSP segments will therefore be required
   to support a common technique for loop detection; however, use of
   the loop detection technique is optional.  The loop detection
   technique is specified in [MPLS-ATM] and [MPLS-LDP].

3.25. Label Encodings

   In order to transmit a label stack along with the packet whose
   label stack it is, it is necessary to define a concrete encoding
   of the label stack.  The architecture supports several different
   encoding techniques; the choice of encoding technique depends on
   the particular kind of device being used to forward labeled
   packets.

3.25.1. MPLS-specific Hardware and/or Software

   If one is using MPLS-specific hardware and/or software to forward
   labeled packets, the most obvious way to encode the label stack is
   to define a new protocol to be used as a "shim" between the data
   link layer and network layer headers.  This shim would really be
   just an encapsulation of the network layer packet; it would be
   "protocol-independent" such that it could be used to encapsulate
   any network layer.  Hence we will refer to it as the "generic MPLS
   encapsulation".

   The generic MPLS encapsulation would in turn be encapsulated in a
   data link layer protocol.

   The MPLS generic encapsulation is specified in [MPLS-SHIM].

3.25.2. ATM Switches as LSRs

   It will be noted that MPLS forwarding procedures are similar to
   those of legacy "label swapping" switches such as ATM switches.
   ATM switches use the input port and the incoming VPI/VCI value as
   the index into a "cross-connect" table, from which they obtain an
   output port and an outgoing VPI/VCI value.  Therefore if one or
   more labels can be encoded directly into the fields which are
   accessed by these legacy switches, then the legacy switches can,
   with suitable software upgrades, be used as LSRs.  We will refer
   to such devices as "ATM-LSRs".

   There are three obvious ways to encode labels in the ATM cell
   header (presuming the use of AAL5):

   1. SVC Encoding

      Use the VPI/VCI field to encode the label which is at the top
      of the label stack.  This technique can be used in any network.
      With this encoding technique, each LSP is realized as an ATM
      SVC, and the label distribution protocol becomes the ATM
      "signaling" protocol.  With this encoding technique, the ATM-
      LSRs cannot perform "push" or "pop" operations on the label
      stack.

   2. SVP Encoding

      Use the VPI field to encode the label which is at the top of
      the label stack, and the VCI field to encode the second label
      on the stack, if one is present.  This technique has some
      advantages over the previous one, in that it permits the use of
      ATM "VP-switching".  That is, the LSPs are realized as ATM
      SVPs, with the label distribution protocol serving as the ATM
      signaling protocol.

      However, this technique cannot always be used.  If the network
      includes an ATM Virtual Path through a non-MPLS ATM network,
      then the VPI field is not necessarily available for use by
      MPLS.

      When this encoding technique is used, the ATM-LSR at the egress
      of the VP effectively does a "pop" operation.

   3. SVP Multipoint Encoding

      Use the VPI field to encode the label which is at the top of
      the label stack, use part of the VCI field to encode the second
      label on the stack, if one is present, and use the remainder of
      the VCI field to identify the LSP ingress.  If this technique
      is used, conventional ATM VP-switching capabilities can be used
      to provide multipoint-to-point VPs.  Cells from different
      packets will then carry different VCI values.  As we shall see
      in section 3.26, this enables us to do label merging, without
      running into any cell interleaving problems, on ATM switches
      which can provide multipoint-to-point VPs, but which do not
      have the VC merge capability.

      This technique depends on the existence of a capability for
      assigning 16-bit VCI values to each ATM switch such that no
      single VCI value is assigned to two different switches.  (If an
      adequate number of such values could be assigned to each
      switch, it would be possible to also treat the VCI value as the
      second label in the stack.)
1299 If there are more labels on the stack than can be encoded in the ATM 1300 header, the ATM encodings must be combined with the generic 1301 encapsulation. 1303 3.25.3. Interoperability among Encoding Techniques 1305 If <R1, R2, R3> is a segment of a LSP, it is possible that R1 will 1306 use one encoding of the label stack when transmitting packet P to R2, 1307 but R2 will use a different encoding when transmitting packet P to 1308 R3. In general, the MPLS architecture supports LSPs with different 1309 label stack encodings used on different hops. Therefore, when we 1310 discuss the procedures for processing a labeled packet, we speak in 1311 abstract terms of operating on the packet's label stack. When a 1312 labeled packet is received, the LSR must decode it to determine the 1313 current value of the label stack, then must operate on the label 1314 stack to determine the new value of the stack, and then encode the 1315 new value appropriately before transmitting the labeled packet to its 1316 next hop. 1318 Unfortunately, ATM switches have no capability for translating from 1319 one encoding technique to another. The MPLS architecture therefore 1320 requires that whenever it is possible for two ATM switches to be 1321 successive LSRs along a level m LSP for some packet, that those two 1322 ATM switches use the same encoding technique. 1324 Naturally there will be MPLS networks which contain a combination of 1325 ATM switches operating as LSRs, and other LSRs which operate using an 1326 MPLS shim header. In such networks there may be some LSRs which have 1327 ATM interfaces as well as "MPLS Shim" interfaces. This is one example 1328 of an LSR with different label stack encodings on different hops. 1329 Such an LSR may swap off an ATM encoded label stack on an incoming 1330 interface and replace it with an MPLS shim header encoded label stack 1331 on the outgoing interface. 1333 3.26.
Label Merging 1335 Suppose that an LSR has bound multiple incoming labels to a 1336 particular FEC. When forwarding packets in that FEC, one would like 1337 to have a single outgoing label which is applied to all such packets. 1338 The fact that two different packets in the FEC arrived with different 1339 incoming labels is irrelevant; one would like to forward them with 1340 the same outgoing label. The capability to do so is known as "label 1341 merging". 1343 Let us say that an LSR is capable of label merging if it can receive 1344 two packets from different incoming interfaces, and/or with different 1345 labels, and send both packets out the same outgoing interface with 1346 the same label. Once the packets are transmitted, the information 1347 that they arrived from different interfaces and/or with different 1348 incoming labels is lost. 1350 Let us say that an LSR is not capable of label merging if, for any 1351 two packets which arrive from different interfaces, or with different 1352 labels, the packets must either be transmitted out different 1353 interfaces, or must have different labels. ATM-LSRs using the SVC or 1354 SVP Encodings cannot perform label merging. This is discussed in 1355 more detail in the next section. 1357 If a particular LSR cannot perform label merging, then if two packets 1358 in the same FEC arrive with different incoming labels, they must be 1359 forwarded with different outgoing labels. With label merging, the 1360 number of outgoing labels per FEC need only be 1; without label 1361 merging, the number of outgoing labels per FEC could be as large as 1362 the number of nodes in the network. 1364 With label merging, the number of incoming labels per FEC that a 1365 particular LSR needs is never larger than the number of label 1366 distribution adjacencies.
Without label merging, the number of 1367 incoming labels per FEC that a particular LSR needs may be as large as 1368 the number of upstream nodes which forward traffic in the FEC to the 1369 LSR in question. In fact, it is difficult for an LSR to even 1370 determine how many such incoming labels it must support for a 1371 particular FEC. 1373 The MPLS architecture accommodates both merging and non-merging LSRs. 1375 This leads to the issue of ensuring correct 1376 interoperation between merging LSRs and non-merging LSRs. The issue 1377 is somewhat different in the case of datagram media versus the case 1378 of ATM. The different media types will therefore be discussed 1379 separately. 1381 3.26.1. Non-merging LSRs 1383 The MPLS forwarding procedures are very similar to the forwarding 1384 procedures used by such technologies as ATM and Frame Relay. That is, 1385 a unit of data arrives, a label (VPI/VCI or DLCI) is looked up in a 1386 "cross-connect table", on the basis of that lookup an output port is 1387 chosen, and the label value is rewritten. In fact, it is possible to 1388 use such technologies for MPLS forwarding; a label distribution 1389 protocol can be used as the "signalling protocol" for setting up the 1390 cross-connect tables. 1392 Unfortunately, these technologies do not necessarily support the 1393 label merging capability. In ATM, if one attempts to perform label 1394 merging, the result may be the interleaving of cells from various 1395 packets. If cells from different packets get interleaved, it is 1396 impossible to reassemble the packets. Some Frame Relay switches use 1397 cell switching on their backplanes. These switches may also be 1398 incapable of supporting label merging, for the same reason -- cells 1399 of different packets may get interleaved, and there is then no way to 1400 reassemble the packets.
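The cell interleaving problem described above can be demonstrated with a small sketch. This is purely illustrative and not from the architecture: cells are modeled as tuples, and a boolean flag stands in for the AAL5 end-of-frame indicator.

```python
# Illustrative sketch of why merging two cell streams onto one outgoing
# label breaks reassembly, and how VC-merge style buffering avoids it.
# A cell is (packet_id, payload, last); 'last' plays the role of the
# AAL5 end-of-frame indicator.  The packet_id is shown for the reader
# only -- the receiver cannot see it on a merged VC.

def interleave(cells_a, cells_b):
    """Naive merge: alternate cells from two inputs onto one VC."""
    out = []
    for a, b in zip(cells_a, cells_b):
        out.extend([a, b])
    return out

def vc_merge(cells_a, cells_b):
    """VC merge: buffer each packet until its last cell arrives, then
    emit the whole packet contiguously -- cells never interleave."""
    return cells_a + cells_b

def reassemble(cells):
    """Receiver: accumulate payloads until end-of-frame.  With no
    usable per-packet id on the wire, interleaved cells get mixed."""
    pkts, cur = [], []
    for _, payload, last in cells:
        cur.append(payload)
        if last:
            pkts.append("".join(cur))
            cur = []
    return pkts

p1 = [(1, "he", False), (1, "llo", True)]
p2 = [(2, "wo", False), (2, "rld", True)]
print(reassemble(interleave(p1, p2)))  # garbled: ['hewollo', 'rld']
print(reassemble(vc_merge(p1, p2)))    # intact:  ['hello', 'world']
```

The buffering in `vc_merge` is exactly the cost noted in section 3.26.3.1: the switch must hold the cells of a packet until the end-of-frame indicator is seen before forwarding any of them.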
1402 We propose to support two solutions to this problem. First, MPLS will 1403 contain procedures which allow the use of non-merging LSRs. Second, 1404 MPLS will support procedures which allow certain ATM switches to 1405 function as merging LSRs. 1407 Since MPLS supports both merging and non-merging LSRs, MPLS also 1408 contains procedures to ensure correct interoperation between them. 1410 3.26.2. Labels for Merging and Non-Merging LSRs 1412 An upstream LSR which supports label merging needs to be sent only 1413 one label per FEC. An upstream neighbor which does not support label 1414 merging needs to be sent multiple labels per FEC. However, there is 1415 no way of knowing a priori how many labels it needs. This will depend 1416 on how many LSRs are upstream of it with respect to the FEC in 1417 question. 1419 In the MPLS architecture, if a particular upstream neighbor does not 1420 support label merging, it is not sent any labels for a particular FEC 1421 unless it explicitly asks for a label for that FEC. The upstream 1422 neighbor may make multiple such requests, and is given a new label 1423 each time. When a downstream neighbor receives such a request from 1424 upstream, and the downstream neighbor does not itself support label 1425 merging, then it must in turn ask its downstream neighbor for another 1426 label for the FEC in question. 1428 It is possible that there may be some nodes which support label 1429 merging, but can only merge a limited number of incoming labels into 1430 a single outgoing label. Suppose, for example, that due to some 1431 hardware limitation a node is capable of merging four incoming labels 1432 into a single outgoing label. Suppose, however, that this particular 1433 node has six incoming labels arriving at it for a particular FEC. In 1434 this case, this node may merge these into two outgoing labels. 1436 Whether label merging is applicable to explicitly routed LSPs is for 1437 further study. 1439 3.26.3.
Merge over ATM 1441 3.26.3.1. Methods of Eliminating Cell Interleave 1443 There are several methods that can be used to eliminate the cell 1444 interleaving problem in ATM, thereby allowing ATM switches to support 1445 stream merge: 1447 1. VP merge, using the SVP Multipoint Encoding 1449 When VP merge is used, multiple virtual paths are merged into a 1450 virtual path, but packets from different sources are 1451 distinguished by using different VCIs within the VP. 1453 2. VC merge 1455 When VC merge is used, switches are required to buffer cells 1456 from one packet until the entire packet is received (this may 1457 be determined by looking for the AAL5 end of frame indicator). 1459 VP merge has the advantage that it is compatible with a higher 1460 percentage of existing ATM switch implementations. This makes it more 1461 likely that VP merge can be used in existing networks. Unlike VC 1462 merge, VP merge does not incur any delays at the merge points and 1463 also does not impose any buffer requirements. However, it has the 1464 disadvantage that it requires coordination of the VCI space within 1465 each VP. There are a number of ways that this can be accomplished. 1466 Selection of one or more methods is for further study. 1468 This tradeoff between compatibility with existing equipment versus 1469 protocol complexity and scalability implies that it is desirable for 1470 the MPLS protocol to support both VP merge and VC merge. In order to 1471 do so each ATM switch participating in MPLS needs to know whether its 1472 immediate ATM neighbors perform VP merge, VC merge, or no merge. 1474 3.26.3.2. Interoperation: VC Merge, VP Merge, and Non-Merge 1476 The interoperation of the various forms of merging over ATM is most 1477 easily described by first describing the interoperation of VC merge 1478 with non-merge. 
1480 In the case where VC merge and non-merge nodes are interconnected the 1481 forwarding of cells is based in all cases on a VC (i.e., the 1482 concatenation of the VPI and VCI). For each node, if an upstream 1483 neighbor is doing VC merge then that upstream neighbor requires only 1484 a single VPI/VCI for a particular stream (this is analogous to the 1485 requirement for a single label in the case of operation over frame 1486 media). If the upstream neighbor is not doing merge, then the 1487 neighbor will require a single VPI/VCI per stream for itself, plus 1488 enough VPI/VCIs to pass to its upstream neighbors. The number 1489 required will be determined by allowing the upstream nodes to request 1490 additional VPI/VCIs from their downstream neighbors (this is again 1491 analogous to the method used with frame merge). 1493 A similar method is possible to support nodes which perform VP merge. 1494 In this case the VP merge node, rather than requesting a single 1495 VPI/VCI or a number of VPI/VCIs from its downstream neighbor, instead 1496 may request a single VP (identified by a VPI) but several VCIs within 1497 the VP. Furthermore, suppose that a non-merge node is downstream 1498 from two different VP merge nodes. This node may need to request one 1499 VPI/VCI (for traffic originating from itself) plus two VPs (one for 1500 each upstream node), each associated with a specified set of VCIs (as 1501 requested from the upstream node). 1503 In order to support all of VP merge, VC merge, and non-merge, it is 1504 therefore necessary to allow upstream nodes to request a combination 1505 of zero or more VC identifiers (consisting of a VPI/VCI), plus zero 1506 or more VPs (identified by VPIs) each containing a specified number 1507 of VCs (identified by a set of VCIs which are significant within a 1508 VP). 
VP merge nodes would therefore request one VP, with a contained 1509 VCI for traffic that they originate (if appropriate) plus a VCI for 1510 each VC requested from above (regardless of whether or not the VC is 1511 part of a containing VP). VC merge nodes would request only a single 1512 VPI/VCI (since they can merge all upstream traffic into a single VC). 1513 Non-merge nodes would pass on any requests that they get from above, 1514 plus request a VPI/VCI for traffic that they originate (if 1515 appropriate). 1517 3.27. Tunnels and Hierarchy 1519 Sometimes a router Ru takes explicit action to cause a particular 1520 packet to be delivered to another router Rd, even though Ru and Rd 1521 are not consecutive routers on the Hop-by-hop path for that packet, 1522 and Rd is not the packet's ultimate destination. For example, this 1523 may be done by encapsulating the packet inside a network layer packet 1524 whose destination address is the address of Rd itself. This creates a 1525 "tunnel" from Ru to Rd. We refer to any packet so handled as a 1526 "Tunneled Packet". 1528 3.27.1. Hop-by-Hop Routed Tunnel 1530 If a Tunneled Packet follows the Hop-by-hop path from Ru to Rd, we 1531 say that it is in a "Hop-by-Hop Routed Tunnel" whose "transmit 1532 endpoint" is Ru and whose "receive endpoint" is Rd. 1534 3.27.2. Explicitly Routed Tunnel 1536 If a Tunneled Packet travels from Ru to Rd over a path other than the 1537 Hop-by-hop path, we say that it is in an "Explicitly Routed Tunnel" 1538 whose "transmit endpoint" is Ru and whose "receive endpoint" is Rd. 1539 For example, we might send a packet through an Explicitly Routed 1540 Tunnel by encapsulating it in a packet which is source routed. 1542 3.27.3. LSP Tunnels 1544 It is possible to implement a tunnel as a LSP, and use label 1545 switching rather than network layer encapsulation to cause the packet 1546 to travel through the tunnel.
The tunnel would be a LSP <R1, ..., Rn>, where R1 is the transmit endpoint of the tunnel, and Rn is the 1548 receive endpoint of the tunnel. This is called a "LSP Tunnel". 1550 The set of packets which are to be sent through the LSP tunnel 1551 constitutes a FEC, and each LSR in the tunnel must assign a label to 1552 that FEC (i.e., must assign a label to the tunnel). The criteria for 1553 assigning a particular packet to an LSP tunnel are a local matter at 1554 the tunnel's transmit endpoint. To put a packet into an LSP tunnel, 1555 the transmit endpoint pushes a label for the tunnel onto the label 1556 stack and sends the labeled packet to the next hop in the tunnel. 1558 If it is not necessary for the tunnel's receive endpoint to be able 1559 to determine which packets it receives through the tunnel, as 1560 discussed earlier, the label stack may be popped at the penultimate 1561 LSR in the tunnel. 1563 A "Hop-by-Hop Routed LSP Tunnel" is a Tunnel that is implemented as 1564 a hop-by-hop routed LSP between the transmit endpoint and the 1565 receive endpoint. 1567 An "Explicitly Routed LSP Tunnel" is a LSP Tunnel that is also an 1568 Explicitly Routed LSP. 1570 3.27.4. Hierarchy: LSP Tunnels within LSPs 1572 Consider a LSP <R1, R2, R3, R4>. Let us suppose that R1 receives 1573 unlabeled packet P, and pushes on its label stack the label to cause 1574 it to follow this path, and that this is in fact the Hop-by-hop path. 1575 However, let us further suppose that R2 and R3 are not directly 1576 connected, but are "neighbors" by virtue of being the endpoints of an 1577 LSP tunnel. So the actual sequence of LSRs traversed by P is <R1, R2, R21, R22, R23, R3, R4>. 1580 When P travels from R1 to R2, it will have a label stack of depth 1. 1581 R2, switching on the label, determines that P must enter the tunnel. 1582 R2 first replaces the incoming label with a label that is meaningful 1583 to R3. Then it pushes on a new label. This level 2 label has a value 1584 which is meaningful to R21.
Switching is done on the level 2 label by 1585 R21, R22, R23. R23, which is the penultimate hop in the R2-R3 tunnel, 1586 pops the label stack before forwarding the packet to R3. When R3 sees 1587 packet P, P has only a level 1 label, having now exited the tunnel. 1588 Since R3 is the penultimate hop in P's level 1 LSP, it pops the label 1589 stack, and R4 receives P unlabeled. 1591 The label stack mechanism allows LSP tunneling to nest to any depth. 1593 3.27.5. Label Distribution Peering and Hierarchy 1595 Suppose that packet P travels along a Level 1 LSP <R1, R2, R3, R4>, 1596 and when going from R2 to R3 travels along a Level 2 LSP <R2, R21, R22, R23, R3>. From the perspective of the Level 2 LSP, R2's label 1598 distribution peer is R21. From the perspective of the Level 1 LSP, 1599 R2's label distribution peers are R1 and R3. One can have label 1600 distribution peers at each layer of hierarchy. We will see in 1601 sections 4.6 and 4.7 some ways to make use of this hierarchy. Note 1602 that in this example, R2 and R21 must be IGP neighbors, but R2 and R3 1603 need not be. 1605 When two LSRs are IGP neighbors, we will refer to them as "local 1606 label distribution peers". When two LSRs may be label distribution 1607 peers, but are not IGP neighbors, we will refer to them as "remote 1608 label distribution peers". In the above example, R2 and R21 are 1609 local label distribution peers, but R2 and R3 are remote label 1610 distribution peers. 1612 The MPLS architecture supports two ways to distribute labels at 1613 different layers of the hierarchy: Explicit Peering and Implicit 1614 Peering. 1616 One performs label distribution with one's local label distribution 1617 peer by sending label distribution protocol messages which are 1618 addressed to the peer. One can perform label distribution with one's 1619 remote label distribution peers in one of two ways: 1621 1.
Explicit Peering 1623 In explicit peering, one distributes labels to a peer by 1624 sending label distribution protocol messages which are 1625 addressed to the peer, exactly as one would do for local label 1626 distribution peers. This technique is most useful when the 1627 number of remote label distribution peers is small, or the 1628 number of higher level label bindings is large, or the remote 1629 label distribution peers are in distinct routing areas or 1630 domains. Of course, one needs to know which labels to 1631 distribute to which peers; this is addressed in section 4.1.2. 1633 Examples of the use of explicit peering are found in sections 1634 4.2.1 and 4.6. 1636 2. Implicit Peering 1638 In Implicit Peering, one does not send label distribution 1639 protocol messages which are addressed to one's peer. Rather, 1640 to distribute higher level labels to one's remote label 1641 distribution peers, one encodes a higher level label as an 1642 attribute of a lower level label, and then distributes the 1643 lower level label, along with this attribute, to one's local 1644 label distribution peers. The local label distribution peers 1645 then propagate the information to their local label 1646 distribution peers. This process continues until the information 1647 reaches the remote peer. 1649 This technique is most useful when the number of remote label 1650 distribution peers is large. Implicit peering does not require 1651 an n-square peering mesh to distribute labels to the remote 1652 label distribution peers because the information is piggybacked 1653 through the local label distribution peering. However, 1654 implicit peering requires the intermediate nodes to store 1655 information that they might not be directly interested in. 1657 An example of the use of implicit peering is found in section 1658 4.3. 1660 3.28.
Label Distribution Protocol Transport 1662 A label distribution protocol is used between nodes in an MPLS 1663 network to establish and maintain the label bindings. In order for 1664 MPLS to operate correctly, label distribution information needs to be 1665 transmitted reliably, and the label distribution protocol messages 1666 pertaining to a particular FEC need to be transmitted in sequence. 1667 Flow control is also desirable, as is the capability to carry 1668 multiple label messages in a single datagram. 1670 One way to meet these goals is to use TCP as the underlying 1671 transport, as is done in [MPLS-LDP] and [MPLS-BGP]. 1673 3.29. Why More than one Label Distribution Protocol? 1675 This architecture does not establish hard and fast rules for choosing 1676 which label distribution protocol to use in which circumstances. 1677 However, it is possible to point out some of the considerations. 1679 3.29.1. BGP and LDP 1681 In many scenarios, it is desirable to bind labels to FECs which can 1682 be identified with routes to address prefixes (see section 4.1). If 1683 there is a standard, widely deployed routing algorithm which 1684 distributes those routes, it can be argued that label distribution is 1685 best achieved by piggybacking the label distribution on the 1686 distribution of the routes themselves. 1688 For example, BGP distributes such routes, and if a BGP speaker needs 1689 to also distribute labels to its BGP peers, using BGP to do the label 1690 distribution (see [MPLS-BGP]) has a number of advantages. In 1691 particular, it permits BGP route reflectors to distribute labels, 1692 thus providing a significant scalability advantage over using LDP to 1693 distribute labels between BGP peers. 1695 3.29.2. Labels for RSVP Flowspecs 1697 When RSVP is used to set up resource reservations for particular 1698 flows, it can be desirable to label the packets in those flows, so 1699 that the RSVP filterspec does not need to be applied at each hop. 
It 1700 can be argued that having RSVP distribute the labels as part of its 1701 path/reservation setup process is the most efficient method of 1702 distributing labels for this purpose. 1704 3.29.3. Labels for Explicitly Routed LSPs 1706 In some applications of MPLS, particularly those related to traffic 1707 engineering, it is desirable to set up an explicitly routed path, 1708 from ingress to egress. It is also desirable to apply resource 1709 reservations along that path. 1711 One can imagine two approaches to this: 1713 - Start with an existing protocol that is used for setting up 1714 resource reservations, and extend it to support explicit routing 1715 and label distribution. 1717 - Start with an existing protocol that is used for label 1718 distribution, and extend it to support explicit routing and 1719 resource reservations. 1721 The first approach has given rise to the protocol specified in 1722 [MPLS-RSVP-TUNNELS], the second to the approach specified in [MPLS- 1723 CR-LDP]. 1725 3.30. Multicast 1727 This section is for further study. 1729 4. Some Applications of MPLS 1731 4.1. MPLS and Hop by Hop Routed Traffic 1733 A number of uses of MPLS require that packets with a certain label be 1734 forwarded along the same hop-by-hop routed path that would be used 1735 for forwarding a packet with a specified address in its network layer 1736 destination address field. 1738 4.1.1. Labels for Address Prefixes 1740 In general, router R determines the next hop for packet P by finding 1741 the address prefix X in its routing table which is the longest match 1742 for P's destination address. That is, the packets in a given FEC are 1743 just those packets which match a given address prefix in R's routing 1744 table. In this case, a FEC can be identified with an address prefix. 1746 Note that a packet P may be assigned to FEC F, and FEC F may be 1747 identified with address prefix X, even if P's destination address 1748 does not match X. 1750 4.1.2.
Distributing Labels for Address Prefixes 1752 4.1.2.1. Label Distribution Peers for an Address Prefix 1754 LSRs R1 and R2 are considered to be label distribution peers for 1755 address prefix X if and only if one of the following conditions 1756 holds: 1758 1. R1's route to X is a route which it learned about via a 1759 particular instance of a particular IGP, and R2 is a neighbor 1760 of R1 in that instance of that IGP 1762 2. R1's route to X is a route which it learned about by some 1763 instance of routing algorithm A1, and that route is 1764 redistributed into an instance of routing algorithm A2, and R2 1765 is a neighbor of R1 in that instance of A2 1767 3. R1 is the receive endpoint of an LSP Tunnel that is within 1768 another LSP, and R2 is a transmit endpoint of that tunnel, and 1769 R1 and R2 are participants in a common instance of an IGP, and 1770 are in the same IGP area (if the IGP in question has areas), 1771 and R1's route to X was learned via that IGP instance, or is 1772 redistributed by R1 into that IGP instance 1774 4. R1's route to X is a route which it learned about via BGP, and 1775 R2 is a BGP peer of R1 1777 In general, these rules ensure that if the route to a particular 1778 address prefix is distributed via an IGP, the label distribution 1779 peers for that address prefix are the IGP neighbors. If the route to 1780 a particular address prefix is distributed via BGP, the label 1781 distribution peers for that address prefix are the BGP peers. In 1782 other cases of LSP tunneling, the tunnel endpoints are label 1783 distribution peers. 1785 4.1.2.2. Distributing Labels 1787 In order to use MPLS for the forwarding of packets according to the 1788 hop-by-hop route corresponding to any address prefix, each LSR MUST: 1790 1. bind one or more labels to each address prefix that appears in 1791 its routing table; 1793 2. 
for each such address prefix X, use a label distribution 1794 protocol to distribute the binding of a label to X to each of 1795 its label distribution peers for X. 1797 There is also one circumstance in which an LSR must distribute a 1798 label binding for an address prefix, even if it is not the LSR which 1799 bound that label to that address prefix: 1801 3. If R1 uses BGP to distribute a route to X, naming some other 1802 LSR R2 as the BGP Next Hop to X, and if R1 knows that R2 has 1803 assigned label L to X, then R1 must distribute the binding 1804 between L and X to any BGP peer to which it distributes that 1805 route. 1807 These rules ensure that labels corresponding to address prefixes 1808 which correspond to BGP routes are distributed to IGP neighbors if 1809 and only if the BGP routes are distributed into the IGP. Otherwise, 1810 the labels bound to BGP routes are distributed only to the other BGP 1811 speakers. 1813 These rules are intended only to indicate which label bindings must 1814 be distributed by a given LSR to which other LSRs. 1816 4.1.3. Using the Hop by Hop path as the LSP 1818 If the hop-by-hop path that packet P needs to follow is <R1, ..., Rn>, then <R1, ..., Rn> can be an LSP as long as: 1821 1. there is a single address prefix X, such that, for all i, 1<=i, and the Hop-by-hop path for P2 is <R4, R2, R3>. Let's suppose that R3 binds label L3 to X, and distributes 2090 this binding to R2. R2 binds label L2 to X, and distributes this 2091 binding to both R1 and R4. When R2 receives packet P1, its incoming 2092 label will be L2. R2 will overwrite L2 with L3, and send P1 to R3. 2093 When R2 receives packet P2, its incoming label will also be L2. R2 2094 again overwrites L2 with L3, and sends P2 on to R3. 2096 Note then that when P1 and P2 are traveling from R2 to R3, they carry 2097 the same label, and as far as MPLS is concerned, they cannot be 2098 distinguished.
Thus instead of talking about two distinct LSPs, <R1, R2, R3> and <R4, R2, R3>, we might talk of a single "Multipoint-to- 2100 Point LSP Tree", which we might denote as <{R1, R4}, R2, R3>. 2102 This creates a difficulty when we attempt to use conventional ATM 2103 switches as LSRs. Since conventional ATM switches do not support 2104 multipoint-to-point connections, there must be procedures to ensure 2105 that each LSP is realized as a point-to-point VC. However, if ATM 2106 switches which do support multipoint-to-point VCs are in use, then 2107 the LSPs can be most efficiently realized as multipoint-to-point VCs. 2108 Alternatively, if the SVP Multipoint Encoding (section 3.25.2) can be 2109 used, the LSPs can be realized as multipoint-to-point SVPs. 2111 4.6. LSP Tunneling between BGP Border Routers 2113 Consider the case of an Autonomous System, A, which carries transit 2114 traffic between other Autonomous Systems. Autonomous System A will 2115 have a number of BGP Border Routers, and a mesh of BGP connections 2116 among them, over which BGP routes are distributed. In many such 2117 cases, it is desirable to avoid distributing the BGP routes to 2118 routers which are not BGP Border Routers. If this can be avoided, 2119 the "route distribution load" on those routers is significantly 2120 reduced. However, there must be some means of ensuring that the 2121 transit traffic will be delivered from Border Router to Border Router 2122 by the interior routers. 2124 This can easily be done by means of LSP Tunnels. Suppose that BGP 2125 routes are distributed only to BGP Border Routers, and not to the 2126 interior routers that lie along the Hop-by-hop path from Border 2127 Router to Border Router. LSP Tunnels can then be used as follows: 2129 1. Each BGP Border Router distributes, to every other BGP Border 2130 Router in the same Autonomous System, a label for each address 2131 prefix that it distributes to that router via BGP. 2133 2.
The IGP for the Autonomous System maintains a host route for 2134 each BGP Border Router. Each interior router distributes its 2135 labels for these host routes to each of its IGP neighbors. 2137 3. Suppose that: 2139 a) BGP Border Router B1 receives an unlabeled packet P, 2141 b) address prefix X in B1's routing table is the longest 2142 match for the destination address of P, 2144 c) the route to X is a BGP route, 2146 d) the BGP Next Hop for X is B2, 2148 e) B2 has bound label L1 to X, and has distributed this 2149 binding to B1, 2151 f) the IGP next hop for the address of B2 is I1, 2153 g) the address of B2 is in B1's and I1's IGP routing tables 2154 as a host route, and 2156 h) I1 has bound label L2 to the address of B2, and 2157 distributed this binding to B1. 2159 Then before sending packet P to I1, B1 must create a label 2160 stack for P, then push on label L1, and then push on label L2. 2162 4. Suppose that BGP Border Router B1 receives a labeled Packet P, 2163 where the label on the top of the label stack corresponds to an 2164 address prefix, X, to which the route is a BGP route, and that 2165 conditions 3b, 3c, 3d, and 3e all hold. Then before sending 2166 packet P to I1, B1 must replace the label at the top of the 2167 label stack with L1, and then push on label L2. 2169 With these procedures, a given packet P follows a level 1 LSP all of 2170 whose members are BGP Border Routers, and between each pair of BGP 2171 Border Routers in the level 1 LSP, it follows a level 2 LSP. 2173 These procedures effectively create a Hop-by-Hop Routed LSP Tunnel 2174 between the BGP Border Routers. 2176 Since the BGP border routers are exchanging label bindings for 2177 address prefixes that are not even known to the IGP routing, the BGP 2178 routers should become explicit label distribution peers with each 2179 other. 
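The label stack manipulation in steps 3 and 4 above might be sketched as follows. The names B1, B2, I1, L1, and L2 follow the example in the text; the dictionaries standing in for B1's routing and label tables are assumptions made for the sketch.

```python
# Hedged sketch of the two-level label stack built at BGP Border
# Router B1.  The table structures are illustrative, not from this
# specification.

bgp_bindings = {"X": ("B2", "L1")}   # prefix -> (BGP Next Hop, its label)
igp_bindings = {"B2": ("I1", "L2")}  # host route -> (IGP next hop, label)

def build_label_stack(prefix, label_stack):
    """Return (IGP next hop, new stack) for a packet matching 'prefix'
    at B1, covering both the unlabeled (step 3) and labeled (step 4)
    cases."""
    bgp_next_hop, l1 = bgp_bindings[prefix]        # conditions 3b-3e
    igp_next_hop, l2 = igp_bindings[bgp_next_hop]  # conditions 3f-3h
    stack = list(label_stack)
    if stack:
        stack[0] = l1          # step 4: replace the top label with L1
    else:
        stack.insert(0, l1)    # step 3: push L1 onto the empty stack
    stack.insert(0, l2)        # then push L2, the tunnel label to B2
    return igp_next_hop, stack

print(build_label_stack("X", []))  # ('I1', ['L2', 'L1'])
```

The outer label L2 carries the packet through the interior routers (which know only the host route to B2), while the inner label L1 is what B2 needs to forward toward X.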
2181 It is sometimes possible to create Hop-by-Hop Routed LSP Tunnels 2182 between two BGP Border Routers, even if they are not in the same 2183 Autonomous System. Suppose, for example, that B1 and B2 are in AS 1. 2184 Suppose that B3 is an EBGP neighbor of B2, and is in AS2. Finally, 2185 suppose that B2 and B3 are on some network which is common to both 2186 Autonomous Systems (a "Demilitarized Zone"). In this case, an LSP 2187 tunnel can be set up directly between B1 and B3 as follows: 2189 - B3 distributes routes to B2 (using EBGP), optionally assigning 2190 labels to address prefixes; 2192 - B2 redistributes those routes to B1 (using IBGP), indicating that 2193 the BGP next hop for each such route is B3. If B3 has assigned 2194 labels to address prefixes, B2 passes these labels along, 2195 unchanged, to B1. 2197 - The IGP of AS1 has a host route for B3. 2199 4.7. Other Uses of Hop-by-Hop Routed LSP Tunnels 2201 The use of Hop-by-Hop Routed LSP Tunnels is not restricted to tunnels 2202 between BGP Next Hops. Any situation in which one might otherwise 2203 have used an encapsulation tunnel is one in which it is appropriate 2204 to use a Hop-by-Hop Routed LSP Tunnel. Instead of encapsulating the 2205 packet with a new header whose destination address is the address of 2206 the tunnel's receive endpoint, the label corresponding to the address 2207 prefix which is the longest match for the address of the tunnel's 2208 receive endpoint is pushed on the packet's label stack. The packet 2209 which is sent into the tunnel may or may not already be labeled. 2211 If the transmit endpoint of the tunnel wishes to put a labeled packet 2212 into the tunnel, it must first replace the label value at the top of 2213 the stack with a label value that was distributed to it by the 2214 tunnel's receive endpoint. Then it must push on the label which 2215 corresponds to the tunnel itself, as distributed to it by the next 2216 hop along the tunnel. 
To allow this, the tunnel endpoints should be
explicit label distribution peers. The label bindings they need to
exchange are of no interest to the LSRs along the tunnel.

4.8. MPLS and Multicast

Multicast routing proceeds by constructing multicast trees. The tree
along which a particular multicast packet must get forwarded depends
in general on the packet's source address and its destination
address. Whenever a particular LSR is a node in a particular
multicast tree, it binds a label to that tree. It then distributes
that binding to its parent on the multicast tree. (If the node in
question is on a LAN, and has siblings on that LAN, it must also
distribute the binding to its siblings. This allows the parent to
use a single label value when multicasting to all children on the
LAN.)

When a multicast labeled packet arrives, the NHLFE corresponding to
the label indicates the set of output interfaces for that packet, as
well as the outgoing label. If the same label encoding technique is
used on all the outgoing interfaces, the very same packet can be sent
to all the children.

5. Label Distribution Procedures (Hop-by-Hop)

In this section, we consider only label bindings that are used for
traffic to be label switched along its hop-by-hop routed path. In
these cases, the label in question will correspond to an address
prefix in the routing table.

5.1. The Procedures for Advertising and Using labels

There are a number of different procedures that may be used to
distribute label bindings. Some are executed by the downstream LSR,
and some by the upstream LSR.

The downstream LSR must perform:

- The Distribution Procedure, and

- the Withdrawal Procedure.

The upstream LSR must perform:

- The Request Procedure, and

- the NotAvailable Procedure, and

- the Release Procedure, and

- the labelUse Procedure.
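As a roadmap for what follows, the two procedure sets above can be sketched as enumerations. The Python class and member names here are hypothetical, but the variant names are exactly those defined in sections 5.1.1 through 5.1.5, and the quintuple mirrors the MPLS scheme notation of section 5.2.

```python
from enum import Enum
from typing import NamedTuple

class Distribution(Enum):       # performed by the downstream LSR
    PUSH_UNCONDITIONAL = "PushUnconditional"
    PUSH_CONDITIONAL = "PushConditional"
    PULLED_UNCONDITIONAL = "PulledUnconditional"
    PULLED_CONDITIONAL = "PulledConditional"

class Request(Enum):            # performed by the upstream LSR
    NEVER = "RequestNever"
    WHEN_NEEDED = "RequestWhenNeeded"
    ON_REQUEST = "RequestOnRequest"

class NotAvailable(Enum):       # performed by the upstream LSR
    RETRY = "RequestRetry"
    NO_RETRY = "RequestNoRetry"

class Release(Enum):            # performed by the upstream LSR
    ON_CHANGE = "ReleaseOnChange"
    NO_RELEASE_ON_CHANGE = "NoReleaseOnChange"

class LabelUse(Enum):           # performed by the upstream LSR
    IMMEDIATE = "UseImmediate"
    IF_LOOP_NOT_DETECTED = "UseIfLoopNotDetected"

class MPLSScheme(NamedTuple):
    """The quintuple of section 5.2; the single Withdraw
    Procedure is omitted, since it need not be mentioned."""
    distribution: Distribution
    request: Request
    not_available: NotAvailable
    release: Release
    label_use: LabelUse
```

The Withdrawal Procedure has no variants, which is why it does not appear as an enumeration.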
The MPLS architecture supports several variants of each procedure.

However, the MPLS architecture does not support all possible
combinations of all possible variants. The set of supported
combinations will be described in section 5.2, where the
interoperability between different combinations will also be
discussed.

5.1.1. Downstream LSR: Distribution Procedure

The Distribution Procedure is used by a downstream LSR to determine
when it should distribute a label binding for a particular address
prefix to its label distribution peers. The architecture supports
four different distribution procedures.

Irrespective of the particular procedure that is used, if a label
binding for a particular address prefix has been distributed by a
downstream LSR Rd to an upstream LSR Ru, and if at any time the
attributes (as defined above) of that binding change, then Rd must
inform Ru of the new attributes.

If an LSR is maintaining multiple routes to a particular address
prefix, it is a local matter as to whether that LSR binds multiple
labels to the address prefix (one per route), and hence distributes
multiple bindings.

5.1.1.1. PushUnconditional

Let Rd be an LSR. Suppose that:

1. X is an address prefix in Rd's routing table

2. Ru is a label distribution peer of Rd with respect to X

Whenever these conditions hold, Rd must bind a label to X and
distribute that binding to Ru. It is the responsibility of Rd to
keep track of the bindings which it has distributed to Ru, and to
make sure that Ru always has these bindings.

This procedure would be used by LSRs which are performing unsolicited
downstream label assignment in the Independent LSP Control Mode.

5.1.1.2. PushConditional

Let Rd be an LSR. Suppose that:

1. X is an address prefix in Rd's routing table

2. Ru is a label distribution peer of Rd with respect to X

3.
   Rd is either an LSP Egress or an LSP Proxy Egress for X, or
   Rd's L3 next hop for X is Rn, where Rn is distinct from Ru, and
   Rn has bound a label to X and distributed that binding to Rd.

Then as soon as these conditions all hold, Rd should bind a label to
X and distribute that binding to Ru.

Whereas PushUnconditional causes the distribution of label bindings
for all address prefixes in the routing table, PushConditional causes
the distribution of label bindings only for those address prefixes
for which one has received label bindings from one's LSP next hop, or
for which one does not have an MPLS-capable L3 next hop.

This procedure would be used by LSRs which are performing unsolicited
downstream label assignment in the Ordered LSP Control Mode.

5.1.1.3. PulledUnconditional

Let Rd be an LSR. Suppose that:

1. X is an address prefix in Rd's routing table

2. Ru is a label distribution peer of Rd with respect to X

3. Ru has explicitly requested that Rd bind a label to X and
   distribute the binding to Ru

Then Rd should bind a label to X and distribute that binding to Ru.
Note that if X is not in Rd's routing table, or if Rd is not a label
distribution peer of Ru with respect to X, then Rd must inform Ru
that it cannot provide a binding at this time.

If Rd has already distributed a binding for address prefix X to Ru,
and it receives a new request from Ru for a binding for address
prefix X, it will bind a second label, and distribute the new binding
to Ru. The first label binding remains in effect.

This procedure would be used by LSRs performing downstream-on-demand
label distribution using the Independent LSP Control Mode.

5.1.1.4. PulledConditional

Let Rd be an LSR. Suppose that:

1. X is an address prefix in Rd's routing table

2. Ru is a label distribution peer of Rd with respect to X

3.
   Ru has explicitly requested that Rd bind a label to X and
   distribute the binding to Ru

4. Rd is either an LSP Egress or an LSP Proxy Egress for X, or
   Rd's L3 next hop for X is Rn, where Rn is distinct from Ru, and
   Rn has bound a label to X and distributed that binding to Rd

Then as soon as these conditions all hold, Rd should bind a label to
X and distribute that binding to Ru. Note that if X is not in Rd's
routing table and a binding for X is not obtainable via Rd's next hop
for X, or if Rd is not a label distribution peer of Ru with respect
to X, then Rd must inform Ru that it cannot provide a binding at this
time.

However, if the only condition that fails to hold is that Rn has not
yet provided a label to Rd, then Rd must defer any response to Ru
until such time as it has received a binding from Rn.

If Rd has distributed a label binding for address prefix X to Ru, and
at some later time, any attribute of the label binding changes, then
Rd must redistribute the label binding to Ru, with the new attribute.
It must do this even though Ru does not issue a new Request.

This procedure would be used by LSRs that are performing downstream-
on-demand label allocation in the Ordered LSP Control Mode.

In section 5.2, we will discuss how to choose the particular
procedure to be used at any given time, and how to ensure
interoperability among LSRs that choose different procedures.

5.1.2. Upstream LSR: Request Procedure

The Request Procedure is used by the upstream LSR for an address
prefix to determine when to explicitly request that the downstream
LSR bind a label to that prefix and distribute the binding. There
are three possible procedures that can be used.

5.1.2.1. RequestNever

Never make a request.
This is useful if the downstream LSR uses the
PushConditional procedure or the PushUnconditional procedure, but is
not useful if the downstream LSR uses the PulledUnconditional
procedure or the PulledConditional procedure.

This procedure would be used by an LSR when unsolicited downstream
label distribution and Liberal Label Retention Mode are being used.

5.1.2.2. RequestWhenNeeded

Make a request whenever the L3 next hop to the address prefix
changes, or when a new address prefix is learned, and one doesn't
already have a label binding from that next hop for the given address
prefix.

This procedure would be used by an LSR whenever Conservative Label
Retention Mode is being used.

5.1.2.3. RequestOnRequest

Issue a request whenever a request is received, in addition to
issuing a request when needed (as described in section 5.1.2.2). If
Ru is not capable of being an LSP ingress, it may issue a request
only when it receives a request from upstream.

If Rd receives such a request from Ru, for an address prefix for
which Rd has already distributed Ru a label, Rd shall assign a new
(distinct) label, bind it to X, and distribute that binding.
(Whether Rd can distribute this binding to Ru immediately or not
depends on the Distribution Procedure being used.)

This procedure would be used by an LSR which is doing downstream-on-
demand label distribution, but is not doing label merging, e.g., an
ATM-LSR which is not capable of VC merge.

5.1.3. Upstream LSR: NotAvailable Procedure

If Ru and Rd are respectively upstream and downstream label
distribution peers for address prefix X, and Rd is Ru's L3 next hop
for X, and Ru requests a binding for X from Rd, but Rd replies that
it cannot provide a binding at this time, because it has no next hop
for X, then the NotAvailable procedure determines how Ru responds.
There are two possible procedures governing Ru's behavior:

5.1.3.1. RequestRetry

Ru should issue the request again at a later time. That is, the
requester is responsible for trying again later to obtain the needed
binding. This procedure would be used when downstream-on-demand
label distribution is used.

5.1.3.2. RequestNoRetry

Ru should never reissue the request, instead assuming that Rd will
provide the binding automatically when it is available. This is
useful if Rd uses the PushUnconditional procedure or the
PushConditional procedure, i.e., if unsolicited downstream label
distribution is used.

Note that if Rd replies that it cannot provide a binding to Ru,
because of some error condition, rather than because Rd has no next
hop, the behavior of Ru will be governed by the error recovery
conditions of the label distribution protocol, rather than by the
NotAvailable procedure.

5.1.4. Upstream LSR: Release Procedure

Suppose that Rd is an LSR which has bound a label to address prefix
X, and has distributed that binding to LSR Ru. If Rd does not happen
to be Ru's L3 next hop for address prefix X, or has ceased to be Ru's
L3 next hop for address prefix X, then Ru will not be using the
label. The Release Procedure determines how Ru acts in this case.
There are two possible procedures governing Ru's behavior:

5.1.4.1. ReleaseOnChange

Ru should release the binding, and inform Rd that it has done so.
This procedure would be used to implement Conservative Label
Retention Mode.

5.1.4.2. NoReleaseOnChange

Ru should maintain the binding, so that it can use it again
immediately if Rd later becomes Ru's L3 next hop for X. This
procedure would be used to implement Liberal Label Retention Mode.

5.1.5.
Upstream LSR: labelUse Procedure

Suppose Ru is an LSR which has received label binding L for address
prefix X from LSR Rd, and Ru is upstream of Rd with respect to X, and
in fact Rd is Ru's L3 next hop for X.

Ru will make use of the binding if Rd is Ru's L3 next hop for X. If,
at the time the binding is received by Ru, Rd is NOT Ru's L3 next hop
for X, Ru does not make any use of the binding at that time. Ru may
however start using the binding at some later time, if Rd becomes
Ru's L3 next hop for X.

The labelUse Procedure determines just how Ru makes use of Rd's
binding.

There are two procedures which Ru may use:

5.1.5.1. UseImmediate

Ru may put the binding into use immediately. At any time when Ru has
a binding for X from Rd, and Rd is Ru's L3 next hop for X, Rd will
also be Ru's LSP next hop for X. This procedure is used when loop
detection is not in use.

5.1.5.2. UseIfLoopNotDetected

This procedure is the same as UseImmediate, unless Ru has detected a
loop in the LSP. If a loop has been detected, Ru will discontinue
the use of label L for forwarding packets to Rd; this will continue
until the next hop for X changes, or until the loop is no longer
detected.

This procedure is used when loop detection is in use.

5.1.6. Downstream LSR: Withdraw Procedure

In this case, there is only a single procedure.

When LSR Rd decides to break the binding between label L and address
prefix X, then this unbinding must be distributed to all LSRs to
which the binding was distributed.

It is required that the unbinding of L from X be distributed by Rd to
an LSR Ru before Rd distributes to Ru any new binding of L to any
other address prefix Y, where X != Y.
If Ru were to learn of the new
binding of L to Y before it learned of the unbinding of L from X, and
if packets matching both X and Y were forwarded by Ru to Rd, then for
a period of time, Ru would label both packets matching X and packets
matching Y with label L.

The distribution and withdrawal of label bindings is done via a label
distribution protocol. All label distribution protocols require that
a label distribution adjacency be established between two label
distribution peers (except implicit peers). If LSR R1 has a label
distribution adjacency to LSR R2, and has received label bindings
from LSR R2 via that adjacency, then if the adjacency is brought down
by either peer (whether as a result of failure or as a matter of
normal operation), all bindings received over that adjacency must be
considered to have been withdrawn.

As long as the relevant label distribution adjacency remains in
place, label bindings that are withdrawn must always be withdrawn
explicitly. If a second label is bound to an address prefix, the
result is not to implicitly withdraw the first label, but to bind
both labels; this is needed to support multi-path routing. If a
second address prefix is bound to a label, the result is not to
implicitly withdraw the binding of that label to the first address
prefix, but to use that label for both address prefixes.

5.2. MPLS Schemes: Supported Combinations of Procedures

Consider two LSRs, Ru and Rd, which are label distribution peers with
respect to some set of address prefixes, where Ru is the upstream
peer and Rd is the downstream peer.

The MPLS scheme which governs the interaction of Ru and Rd can be
described as a quintuple of procedures: <Distribution Procedure,
Request Procedure, NotAvailable Procedure, Release Procedure,
labelUse Procedure>. (Since there is only one Withdraw Procedure, it
need not be mentioned.)
A "*" appearing in one of the positions is a
wild-card, meaning that any procedure in that category may be
present; an "N/A" appearing in a particular position indicates that
no procedure in that category is needed.

Only the MPLS schemes which are specified below are supported by the
MPLS Architecture. Other schemes may be added in the future, if a
need for them is shown.

5.2.1. Schemes for LSRs that Support Label Merging

If Ru and Rd are label distribution peers, and both support label
merging, one of the following schemes must be used:

1. <PushUnconditional, RequestNever, N/A, NoReleaseOnChange,
   UseImmediate>

   This is unsolicited downstream label distribution with
   independent control, liberal label retention mode, and no loop
   detection.

2. <PushUnconditional, RequestNever, N/A, NoReleaseOnChange,
   UseIfLoopNotDetected>

   This is unsolicited downstream label distribution with
   independent control, liberal label retention, and loop
   detection.

3. <PushConditional, RequestWhenNeeded, RequestNoRetry,
   ReleaseOnChange, *>

   This is unsolicited downstream label distribution with ordered
   control (from the egress) and conservative label retention
   mode. Loop detection is optional.

4. <PushConditional, RequestNever, N/A, NoReleaseOnChange, *>

   This is unsolicited downstream label distribution with ordered
   control (from the egress) and liberal label retention mode.
   Loop detection is optional.

5. <PulledConditional, RequestWhenNeeded, RequestRetry,
   ReleaseOnChange, *>

   This is downstream-on-demand label distribution with ordered
   control (initiated by the ingress), conservative label
   retention mode, and optional loop detection.

6. <PulledUnconditional, RequestWhenNeeded, RequestNoRetry,
   ReleaseOnChange, UseImmediate>

   This is downstream-on-demand label distribution with
   independent control and conservative label retention mode,
   without loop detection.

7. <PulledUnconditional, RequestWhenNeeded, RequestNoRetry,
   ReleaseOnChange, UseIfLoopNotDetected>

   This is downstream-on-demand label distribution with
   independent control and conservative label retention mode, with
   loop detection.

5.2.2. Schemes for LSRs that do not Support Label Merging

Suppose that R1, R2, R3, and R4 are ATM switches which do not support
label merging, but are being used as LSRs.
Suppose further that the
L3 hop-by-hop path for address prefix X is <R1, R2, R3, R4>, and that
packets destined for X can enter the network at any of these LSRs.
Since there is no multipoint-to-point capability, the LSPs must be
realized as point-to-point VCs, which means that there need to be
three such VCs for address prefix X: <R1, R2, R3, R4>, <R2, R3, R4>,
and <R3, R4>.

Therefore, if R1 and R2 are MPLS peers, and either is an LSR which is
implemented using conventional ATM switching hardware (i.e., no cell
interleave suppression), or is otherwise incapable of performing
label merging, the MPLS scheme in use between R1 and R2 must be one
of the following:

1. <PulledConditional, RequestOnRequest, RequestRetry,
   ReleaseOnChange, *>

   This is downstream-on-demand label distribution with ordered
   control (initiated by the ingress), conservative label
   retention mode, and optional loop detection.

   The use of the RequestOnRequest procedure will cause R4 to
   distribute three labels for X to R3; R3 will distribute two
   labels for X to R2, and R2 will distribute one label for X to
   R1.

2. <PulledUnconditional, RequestOnRequest, RequestNoRetry,
   ReleaseOnChange, UseImmediate>

   This is downstream-on-demand label distribution with
   independent control and conservative label retention mode,
   without loop detection.

3. <PulledUnconditional, RequestOnRequest, RequestNoRetry,
   ReleaseOnChange, UseIfLoopNotDetected>

   This is downstream-on-demand label distribution with
   independent control and conservative label retention mode, with
   loop detection.

5.2.3. Interoperability Considerations

It is easy to see that certain quintuples do NOT yield viable MPLS
schemes. For example:

- <PulledUnconditional, RequestNever, *, *, *>
  <PulledConditional, RequestNever, *, *, *>

  In these MPLS schemes, the downstream LSR Rd distributes label
  bindings to upstream LSR Ru only upon request from Ru, but Ru
  never makes any such requests. Obviously, these schemes are not
  viable, since they will not result in the proper distribution of
  label bindings.
- <*, RequestNever, *, *, ReleaseOnChange>

  In these MPLS schemes, Ru releases bindings when it isn't using
  them, but it never asks for them again, even if it later has a
  need for them. These schemes thus do not ensure that label
  bindings get properly distributed.

In this section, we specify rules to prevent a pair of label
distribution peers from adopting procedures which lead to infeasible
MPLS Schemes. These rules require either the exchange of information
between label distribution peers during the initialization of the
label distribution adjacency, or a priori knowledge of the
information (obtained through a means outside the scope of this
document).

1. Each must state whether it supports label merging.

2. If Rd does not support label merging, Rd must choose either the
   PulledUnconditional procedure or the PulledConditional
   procedure. If Rd chooses PulledConditional, Ru is forced to
   use the RequestRetry procedure.

   That is, if the downstream LSR does not support label merging,
   its preferences take priority when the MPLS scheme is chosen.

3. If Ru does not support label merging, but Rd does, Ru must
   choose either the RequestRetry or RequestNoRetry procedure.
   This forces Rd to use the PulledConditional or
   PulledUnconditional procedure respectively.

   That is, if only one of the LSRs doesn't support label merging,
   its preferences take priority when the MPLS scheme is chosen.

4. If both Ru and Rd support label merging, then the choice
   between liberal and conservative label retention mode belongs
   to Ru. That is, Ru gets to choose either to use
   RequestWhenNeeded/ReleaseOnChange (conservative), or to use
   RequestNever/NoReleaseOnChange (liberal). However, the choice
   of "push" vs. "pull" and "conditional" vs. "unconditional"
   belongs to Rd.
If Ru chooses liberal label retention mode, Rd
can choose either PushUnconditional or PushConditional. If Ru
chooses conservative label retention mode, Rd can choose
PushConditional, PulledConditional, or PulledUnconditional.

These choices together determine the MPLS scheme in use.

6. Security Considerations

Some routers may implement security procedures which depend on the
network layer header being in a fixed place relative to the data link
layer header. The MPLS generic encapsulation inserts a shim between
the data link layer header and the network layer header. This may
cause any such security procedures to fail.

An MPLS label has its meaning by virtue of an agreement between the
LSR that puts the label in the label stack (the "label writer"), and
the LSR that interprets that label (the "label reader"). If labeled
packets are accepted from untrusted sources, or if a particular
incoming label is accepted from an LSR to which that label has not
been distributed, then packets may be routed in an illegitimate
manner.

7. Intellectual Property

The IETF has been notified of intellectual property rights claimed in
regard to some or all of the specification contained in this
document. For more information consult the online list of claimed
rights.

8. Authors' Addresses

Eric C. Rosen
Cisco Systems, Inc.
250 Apollo Drive
Chelmsford, MA, 01824
E-mail: erosen@cisco.com

Arun Viswanathan
Lucent Technologies
101 Crawford Corner Rd., #4D-537
Holmdel, NJ 07733
732-332-5163
E-mail: arunv@dnrc.bell-labs.com

Ross Callon
IronBridge Networks
55 Hayden Avenue
Lexington, MA 02173
+1-781-372-8117
E-mail: rcallon@ironbridgenetworks.com