Network Working Group                                      Eric C. Rosen
Internet Draft                                       Cisco Systems, Inc.
Expiration Date: August 1999
                                                        Arun Viswanathan
                                                     Lucent Technologies

                                                             Ross Callon
                                                IronBridge Networks, Inc.

                                                           February 1999

              Multiprotocol Label Switching Architecture

                       draft-ietf-mpls-arch-03.txt

Status of this Memo

   This document is an Internet-Draft and is in full conformance with
   all provisions of Section 10 of RFC2026.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF), its areas, and its working groups.  Note that
   other groups may also distribute working documents as Internet-
   Drafts.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."

   To view the list of Internet-Draft Shadow Directories, see
   http://www.ietf.org/shadow.html.

Abstract

   This Internet Draft specifies the architecture for Multiprotocol
   Label Switching (MPLS).

Table of Contents

   1        Introduction to MPLS
   1.1      Overview
   1.2      Terminology
   1.3      Acronyms and Abbreviations
   1.4      Acknowledgments
   2        MPLS Basics
   2.1      Labels
   2.2      Upstream and Downstream LSRs
   2.3      Labeled Packet
   2.4      Label Assignment and Distribution
   2.5      Attributes of a Label Binding
   2.6      Label Distribution Protocol (LDP)
   2.7      Downstream vs. Downstream-on-Demand
   2.8      Label Retention Mode
   2.9      The Label Stack
   2.10     The Next Hop Label Forwarding Entry (NHLFE)
   2.11     Incoming Label Map (ILM)
   2.12     FEC-to-NHLFE Map (FTN)
   2.13     Label Swapping
   2.14     Scope and Uniqueness of Labels
   2.15     Label Switched Path (LSP), LSP Ingress, LSP Egress
   2.16     Penultimate Hop Popping
   2.17     LSP Next Hop
   2.18     Invalid Incoming Labels
   2.19     LSP Control: Ordered versus Independent
   2.20     Aggregation
   2.21     Route Selection
   2.22     Lack of Outgoing Label
   2.23     Time-to-Live (TTL)
   2.24     Loop Control
   2.25     Label Encodings
   2.25.1   MPLS-specific Hardware and/or Software
   2.25.2   ATM Switches as LSRs
   2.25.3   Interoperability among Encoding Techniques
   2.26     Label Merging
   2.26.1   Non-merging LSRs
   2.26.2   Labels for Merging and Non-Merging LSRs
   2.26.3   Merge over ATM
   2.26.3.1 Methods of Eliminating Cell Interleave
   2.26.3.2 Interoperation: VC Merge, VP Merge, and Non-Merge
   2.27     Tunnels and Hierarchy
   2.27.1   Hop-by-Hop Routed Tunnel
   2.27.2   Explicitly Routed Tunnel
   2.27.3   LSP Tunnels
   2.27.4   Hierarchy: LSP Tunnels within LSPs
   2.27.5   LDP Peering and Hierarchy
   2.28     LDP Transport
   2.29     Multicast
   3        Some Applications of MPLS
   3.1      MPLS and Hop by Hop Routed Traffic
   3.1.1    Labels for Address Prefixes
   3.1.2    Distributing Labels for Address Prefixes
   3.1.2.1  LDP Peers for a Particular Address Prefix
   3.1.2.2  Distributing Labels
   3.1.3    Using the Hop by Hop Path as the LSP
   3.1.4    LSP Egress and LSP Proxy Egress
   3.1.5    The Implicit NULL Label
   3.1.6    Option: Egress-Targeted Label Assignment
   3.2      MPLS and Explicitly Routed LSPs
   3.2.1    Explicitly Routed LSP Tunnels
   3.3      Label Stacks and Implicit Peering
   3.4      MPLS and Multi-Path Routing
   3.5      LSP Trees as Multipoint-to-Point Entities
   3.6      LSP Tunneling between BGP Border Routers
   3.7      Other Uses of Hop-by-Hop Routed LSP Tunnels
   3.8      MPLS and Multicast
   4        LDP Procedures for Hop-by-Hop Routed Traffic
   4.1      The Procedures for Advertising and Using Labels
   4.1.1    Downstream LSR: Distribution Procedure
   4.1.1.1  PushUnconditional
   4.1.1.2  PushConditional
   4.1.1.3  PulledUnconditional
   4.1.1.4  PulledConditional
   4.1.2    Upstream LSR: Request Procedure
   4.1.2.1  RequestNever
   4.1.2.2  RequestWhenNeeded
   4.1.2.3  RequestOnRequest
   4.1.3    Upstream LSR: NotAvailable Procedure
   4.1.3.1  RequestRetry
   4.1.3.2  RequestNoRetry
   4.1.4    Upstream LSR: Release Procedure
   4.1.4.1  ReleaseOnChange
   4.1.4.2  NoReleaseOnChange
   4.1.5    Upstream LSR: labelUse Procedure
   4.1.5.1  UseImmediate
   4.1.5.2  UseIfLoopNotDetected
   4.1.6    Downstream LSR: Withdraw Procedure
   4.2      MPLS Schemes: Supported Combinations of Procedures
   4.2.1    Schemes for LSRs that Support Label Merging
   4.2.2    Schemes for LSRs that do not Support Label Merging
   4.2.3    Interoperability Considerations
   5        Security Considerations
   6        Intellectual Property
   7        Authors' Addresses
   8        References

1. Introduction to MPLS

1.1. Overview

   As a packet of a connectionless network layer protocol travels from
   one router to the next, each router makes an independent forwarding
   decision for that packet.  That is, each router analyzes the
   packet's header, and each router runs a network layer routing
   algorithm.  Each router independently chooses a next hop for the
   packet, based on its analysis of the packet's header and the results
   of running the routing algorithm.

   Packet headers contain considerably more information than is needed
   simply to choose the next hop.  Choosing the next hop can therefore
   be thought of as the composition of two functions.  The first
   function partitions the entire set of possible packets into a set of
   "Forwarding Equivalence Classes (FECs)".  The second maps each FEC
   to a next hop.  Insofar as the forwarding decision is concerned,
   different packets which get mapped into the same FEC are
   indistinguishable.  All packets which belong to a particular FEC and
   which travel from a particular node will follow the same path (or,
   if certain kinds of multi-path routing are in use, they will all
   follow one of a set of paths associated with the FEC).

   In conventional IP forwarding, a particular router will typically
   consider two packets to be in the same FEC if there is some address
   prefix X in that router's routing tables such that X is the "longest
   match" for each packet's destination address.  As the packet
   traverses the network, each hop in turn reexamines the packet and
   assigns it to a FEC.

   In MPLS, the assignment of a particular packet to a particular FEC
   is done just once, as the packet enters the network.  The FEC to
   which the packet is assigned is encoded as a short fixed length
   value known as a "label".  When a packet is forwarded to its next
   hop, the label is sent along with it; that is, the packets are
   "labeled" before they are forwarded.

   At subsequent hops, there is no further analysis of the packet's
   network layer header.  Rather, the label is used as an index into a
   table which specifies the next hop, and a new label.  The old label
   is replaced with the new label, and the packet is forwarded to its
   next hop.
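   The swap-and-forward step just described can be sketched as follows.
   This is a toy illustration only; the table layout, label values, and
   router names are assumptions made for the example, not anything
   defined by this architecture.

```python
# Minimal sketch of per-hop label swapping: the incoming label is an
# index into a table that yields the new label and the next hop.
# All table contents below are invented for illustration.

def swap_and_forward(label, label_table):
    """Look up the incoming label; the old label is simply replaced."""
    new_label, next_hop = label_table[label]
    return new_label, next_hop

# One router's table: incoming label -> (outgoing label, next hop).
table = {
    17: (42, "router-B"),
    18: (42, "router-B"),   # two labels may lead to the same next hop
    99: (7, "router-C"),
}

print(swap_and_forward(17, table))
```

   Note that no network layer header is consulted in this step; the
   label alone drives the decision.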
   In the MPLS forwarding paradigm, once a packet is assigned to a FEC,
   no further header analysis is done by subsequent routers; all
   forwarding is driven by the labels.  This has a number of advantages
   over conventional network layer forwarding.

      - MPLS forwarding can be done by switches which are capable of
        doing label lookup and replacement, but are either not capable
        of analyzing the network layer headers, or are not capable of
        analyzing the network layer headers at adequate speed.

      - Since a packet is assigned to a FEC when it enters the network,
        the ingress router may use, in determining the assignment, any
        information it has about the packet, even if that information
        cannot be gleaned from the network layer header.  For example,
        packets arriving on different ports may be assigned to
        different FECs.  Conventional forwarding, on the other hand,
        can only consider information which travels with the packet in
        the packet header.

      - A packet that enters the network at a particular router can be
        labeled differently than the same packet entering the network
        at a different router, and as a result forwarding decisions
        that depend on the ingress router can be easily made.  This
        cannot be done with conventional forwarding, since the identity
        of a packet's ingress router does not travel with the packet.

      - The considerations that determine how a packet is assigned to a
        FEC can become ever more complicated, without any impact at all
        on the routers that merely forward labeled packets.

      - Sometimes it is desirable to force a packet to follow a
        particular route which is explicitly chosen at or before the
        time the packet enters the network, rather than being chosen by
        the normal dynamic routing algorithm as the packet travels
        through the network.  This may be done as a matter of policy,
        or to support traffic engineering.  In conventional forwarding,
        this requires the packet to carry an encoding of its route
        along with it ("source routing").  In MPLS, a label can be used
        to represent the route, so that the identity of the explicit
        route need not be carried with the packet.

   Some routers analyze a packet's network layer header not merely to
   choose the packet's next hop, but also to determine a packet's
   "precedence" or "class of service".  They may then apply different
   discard thresholds or scheduling disciplines to different packets.

   MPLS allows (but does not require) the precedence or class of
   service to be fully or partially inferred from the label.  In this
   case, one may say that the label represents the combination of a FEC
   and a precedence or class of service.

   MPLS stands for "Multiprotocol" Label Switching: multiprotocol,
   because its techniques are applicable to ANY network layer protocol.
   In this document, however, we focus on the use of IP as the network
   layer protocol.

   A router which supports MPLS is known as a "Label Switching Router",
   or LSR.

   A general discussion of issues related to MPLS is presented in "A
   Framework for Multiprotocol Label Switching" [MPLS-FRMWRK].

1.2. Terminology

   This section gives a general conceptual overview of the terms used
   in this document.  Some of these terms are more precisely defined in
   later sections of the document.

   DLCI
      a label used in Frame Relay networks to identify frame relay
      circuits

   forwarding equivalence class
      a group of IP packets which are forwarded in the same manner
      (e.g., over the same path, with the same forwarding treatment)

   frame merge
      label merging, when it is applied to operation over frame based
      media, so that the potential problem of cell interleave is not an
      issue.
   label
      a short, fixed length, physically contiguous identifier which is
      used to identify a FEC, usually of local significance.

   label merging
      the replacement of multiple incoming labels for a particular FEC
      with a single outgoing label

   label swap
      the basic forwarding operation, consisting of looking up an
      incoming label to determine the outgoing label, encapsulation,
      port, and other data handling information.

   label swapping
      a forwarding paradigm allowing streamlined forwarding of data by
      using labels to identify classes of data packets which are
      treated indistinguishably when forwarding.

   label switched hop
      the hop between two MPLS nodes, on which forwarding is done using
      labels.

   label switched path
      the path through one or more LSRs at one level of the hierarchy
      followed by packets in a particular FEC.

   label switching router
      an MPLS node which is capable of forwarding native L3 packets

   layer 2
      the protocol layer under layer 3 (which therefore offers the
      services used by layer 3).  Forwarding, when done by the swapping
      of short fixed length labels, occurs at layer 2 regardless of
      whether the label being examined is an ATM VPI/VCI, a frame relay
      DLCI, or an MPLS label.

   layer 3
      the protocol layer at which IP and its associated routing
      protocols operate

   link layer
      synonymous with layer 2

   loop detection
      a method of dealing with loops in which loops are allowed to be
      set up, and data may be transmitted over the loop, but the loop
      is later detected

   loop prevention
      a method of dealing with loops in which data is never transmitted
      over a loop

   label stack
      an ordered set of labels

   merge point
      a node at which label merging is done

   MPLS domain
      a contiguous set of nodes which operate MPLS routing and
      forwarding and which are also in one Routing or Administrative
      Domain

   MPLS edge node
      an MPLS node that connects an MPLS domain with a node which is
      outside of the domain, either because it does not run MPLS,
      and/or because it is in a different domain.  Note that if an LSR
      has a neighboring host which is not running MPLS, then that LSR
      is an MPLS edge node.

   MPLS egress node
      an MPLS edge node in its role in handling traffic as it leaves an
      MPLS domain

   MPLS ingress node
      an MPLS edge node in its role in handling traffic as it enters an
      MPLS domain

   MPLS label
      a label which is carried in a packet header, and which represents
      the packet's FEC

   MPLS node
      a node which is running MPLS.  An MPLS node will be aware of MPLS
      control protocols, will operate one or more L3 routing protocols,
      and will be capable of forwarding packets based on labels.  An
      MPLS node may optionally also be capable of forwarding native L3
      packets.
   MultiProtocol Label Switching
      an IETF working group and the effort associated with the working
      group

   network layer
      synonymous with layer 3

   stack
      synonymous with label stack

   switched path
      synonymous with label switched path

   virtual circuit
      a circuit used by a connection-oriented layer 2 technology such
      as ATM or Frame Relay, requiring the maintenance of state
      information in layer 2 switches.

   VC merge
      label merging where the MPLS label is carried in the ATM VCI
      field (or combined VPI/VCI field), so as to allow multiple VCs to
      merge into one single VC

   VP merge
      label merging where the MPLS label is carried in the ATM VPI
      field, so as to allow multiple VPs to be merged into one single
      VP.  In this case two cells would have the same VCI value only if
      they originated from the same node.  This allows cells from
      different sources to be distinguished via the VCI.

   VPI/VCI
      a label used in ATM networks to identify circuits

1.3. Acronyms and Abbreviations

   ATM    Asynchronous Transfer Mode
   BGP    Border Gateway Protocol
   DLCI   Data Link Circuit Identifier
   FEC    Forwarding Equivalence Class
   FTN    FEC to NHLFE Map
   IGP    Interior Gateway Protocol
   ILM    Incoming Label Map
   IP     Internet Protocol
   LDP    Label Distribution Protocol
   L2     Layer 2
   L3     Layer 3
   LSP    Label Switched Path
   LSR    Label Switching Router
   MPLS   MultiProtocol Label Switching
   NHLFE  Next Hop Label Forwarding Entry
   SVC    Switched Virtual Circuit
   SVP    Switched Virtual Path
   TTL    Time-To-Live
   VC     Virtual Circuit
   VCI    Virtual Circuit Identifier
   VP     Virtual Path
   VPI    Virtual Path Identifier

1.4. Acknowledgments

   The ideas and text in this document have been collected from a
   number of sources and comments received.  We would like to thank
   Rick Boivie, Paul Doolan, Nancy Feldman, Yakov Rekhter, Vijay
   Srinivasan, and George Swallow for their inputs and ideas.

2. MPLS Basics

   In this section, we introduce some of the basic concepts of MPLS and
   describe the general approach to be used.

2.1. Labels

   A label is a short, fixed length, locally significant identifier
   which is used to identify a FEC.  The label which is put on a
   particular packet represents the Forwarding Equivalence Class to
   which that packet is assigned.

   Most commonly, a packet is assigned to a FEC based (completely or
   partially) on its network layer destination address.  However, the
   label is never an encoding of that address.

   If Ru and Rd are LSRs, they may agree that when Ru transmits a
   packet to Rd, Ru will label the packet with label value L if and
   only if the packet is a member of a particular FEC F.  That is, they
   can agree to a "binding" between label L and FEC F for packets
   moving from Ru to Rd.  As a result of such an agreement, L becomes
   Ru's "outgoing label" representing FEC F, and L becomes Rd's
   "incoming label" representing FEC F.

   Note that L does not necessarily represent FEC F for any packets
   other than those which are being sent from Ru to Rd.  L is an
   arbitrary value whose binding to F is local to Ru and Rd.

   When we speak above of packets "being sent" from Ru to Rd, we do not
   imply either that the packet originated at Ru or that its
   destination is Rd.  Rather, we mean to include packets which are
   "transit packets" at one or both of the LSRs.

   Sometimes it may be difficult or even impossible for Rd to tell, of
   an arriving packet carrying label L, that the label L was placed in
   the packet by Ru, rather than by some other LSR.  (This will
   typically be the case when Ru and Rd are not direct neighbors.)  In
   such cases, Rd must make sure that the binding from label to FEC is
   one-to-one.
   That is, Rd MUST NOT agree with Ru1 to bind L to FEC F1, while also
   agreeing with some other LSR Ru2 to bind L to a different FEC F2,
   UNLESS Rd can always tell, when it receives a packet with incoming
   label L, whether the label was put on the packet by Ru1 or whether
   it was put on by Ru2.

   It is the responsibility of each LSR to ensure that it can uniquely
   interpret its incoming labels.

2.2. Upstream and Downstream LSRs

   Suppose Ru and Rd have agreed to bind label L to FEC F, for packets
   sent from Ru to Rd.  Then with respect to this binding, Ru is the
   "upstream LSR", and Rd is the "downstream LSR".

   To say that one node is upstream and one is downstream with respect
   to a given binding means only that a particular label represents a
   particular FEC in packets travelling from the upstream node to the
   downstream node.  This is NOT meant to imply that packets in that
   FEC would actually be routed from the upstream node to the
   downstream node.

2.3. Labeled Packet

   A "labeled packet" is a packet into which a label has been encoded.
   In some cases, the label resides in an encapsulation header which
   exists specifically for this purpose.  In other cases, the label may
   reside in an existing data link or network layer header, as long as
   there is a field which is available for that purpose.  The
   particular encoding technique to be used must be agreed to by both
   the entity which encodes the label and the entity which decodes the
   label.

2.4. Label Assignment and Distribution

   In the MPLS architecture, the decision to bind a particular label L
   to a particular FEC F is made by the LSR which is DOWNSTREAM with
   respect to that binding.  The downstream LSR then informs the
   upstream LSR of the binding.  Thus labels are "downstream-assigned",
   and label bindings are distributed in the "downstream to upstream"
   direction.
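   Downstream assignment can be sketched as follows.  The class and
   method names, the starting label value, and the prefix are all
   invented for illustration; only the direction of information flow
   (Rd chooses, Ru records) comes from the architecture.

```python
# Toy sketch of downstream label assignment: the downstream LSR Rd
# picks the label and advertises the binding; the upstream LSR Ru
# merely records what it is told.

class DownstreamLSR:
    def __init__(self):
        self.next_free = 16        # first label value Rd hands out
        self.incoming = {}         # Rd's incoming label -> FEC

    def bind(self, fec):
        """Assign a label to a FEC; the result is distributed upstream."""
        label = self.next_free
        self.next_free += 1
        self.incoming[label] = fec
        return label

rd = DownstreamLSR()
outgoing = {}                      # Ru's FEC -> outgoing label
fec = "192.0.2.0/24"
outgoing[fec] = rd.bind(fec)       # binding flows downstream-to-upstream
```

   After the exchange, the same value serves as Ru's outgoing label and
   Rd's incoming label for the FEC.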
   If an LSR has been designed so that it can only look up labels that
   fall into a certain numeric range, then it merely needs to ensure
   that it only binds labels that are in that range.

2.5. Attributes of a Label Binding

   A particular binding of label L to FEC F, distributed by Rd to Ru,
   may have associated "attributes".  If Ru, acting as a downstream
   LSR, also distributes a binding of a label to FEC F, then under
   certain conditions, it may be required to also distribute the
   corresponding attribute that it received from Rd.

2.6. Label Distribution Protocol (LDP)

   A Label Distribution Protocol (LDP) is a set of procedures by which
   one LSR informs another of the label/FEC bindings it has made.  Two
   LSRs which use an LDP to exchange label/FEC binding information are
   known as "LDP Peers" with respect to the binding information they
   exchange.  If two LSRs are LDP Peers, we will speak of there being
   an "LDP Adjacency" between them.

   (N.B.: two LSRs may be LDP Peers with respect to some set of
   bindings, but not with respect to some other set of bindings.)

   The LDP also encompasses any negotiations in which two LDP Peers
   need to engage in order to learn of each other's MPLS capabilities.

   The architecture does not assume that there is only a single Label
   Distribution Protocol.  Different label distribution protocols might
   be used for different purposes or in different environments.  See,
   e.g., [MPLS-LDP], [MPLS-BGP], [MPLS-RSVP], [MPLS-RSVP-TUNNELS], etc.

2.7. Downstream vs. Downstream-on-Demand

   The MPLS architecture allows an LSR to explicitly request, from its
   next hop for a particular FEC, a label binding for that FEC.  This
   is known as "downstream-on-demand" label distribution.

   The MPLS architecture also allows an LSR to distribute bindings to
   LSRs that have not explicitly requested them.  This is known as
   "downstream" label distribution.
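   The contrast between the two modes can be sketched as follows; the
   function names, prefixes, and label values are invented for the
   example.  In both modes the downstream LSR holds the bindings; the
   modes differ only in whether the upstream LSR must ask for them.

```python
# Toy contrast of the two label distribution modes.  rd_bindings
# stands in for the downstream LSR's label/FEC bindings.

rd_bindings = {"192.0.2.0/24": 16, "198.51.100.0/24": 17}

def downstream_on_demand(fec):
    """Upstream explicitly requests the binding for one FEC."""
    return {fec: rd_bindings[fec]}

def unsolicited_downstream():
    """Downstream distributes every binding without being asked."""
    return dict(rd_bindings)
```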
   Both of these label distribution techniques may be used in the same
   network at the same time.  However, on any given LDP adjacency, the
   upstream LSR and the downstream LSR must agree on which technique is
   to be used.

2.8. Label Retention Mode

   An LSR Ru may receive (or have received) a label binding for a
   particular FEC from an LSR Rd, even though Rd is not Ru's next hop
   (or is no longer Ru's next hop) for that FEC.

   Ru then has the choice of whether to keep track of such bindings, or
   whether to discard such bindings.  If Ru keeps track of such
   bindings, then it may immediately begin using the binding again if
   Rd eventually becomes its next hop for the FEC in question.  If Ru
   discards such bindings, then if Rd later becomes the next hop, the
   binding will have to be reacquired.

   If an LSR supports "Liberal Label Retention Mode", it maintains the
   bindings between a label and a FEC which are received from LSRs
   which are not its next hop for that FEC.  If an LSR supports
   "Conservative Label Retention Mode", it discards such bindings.

   Liberal label retention mode allows for quicker adaptation to
   routing changes, while conservative label retention mode requires an
   LSR to maintain many fewer labels.

2.9. The Label Stack

   So far, we have spoken as if a labeled packet carries only a single
   label.  As we shall see, it is useful to have a more general model
   in which a labeled packet carries a number of labels, organized as a
   last-in, first-out stack.  We refer to this as a "label stack".

   IN MPLS, EVERY FORWARDING DECISION IS BASED EXCLUSIVELY ON THE LABEL
   AT THE TOP OF THE STACK.

   Although, as we shall see, MPLS supports a hierarchy, the processing
   of a labeled packet is completely independent of the level of
   hierarchy.
   The processing is always based on the top label, without regard for
   the possibility that some number of other labels may have been
   "above it" in the past, or that some number of other labels may be
   below it at present.

   An unlabeled packet can be thought of as a packet whose label stack
   is empty (i.e., whose label stack has depth 0).

   If a packet's label stack is of depth m, we refer to the label at
   the bottom of the stack as the level 1 label, to the label above it
   (if such exists) as the level 2 label, and to the label at the top
   of the stack as the level m label.

   The utility of the label stack will become clear when we introduce
   the notion of LSP Tunnel and the MPLS Hierarchy (section 2.27).

2.10. The Next Hop Label Forwarding Entry (NHLFE)

   The "Next Hop Label Forwarding Entry" (NHLFE) is used when
   forwarding a labeled packet.  It contains the following information:

      1. the packet's next hop

      2. the operation to perform on the packet's label stack; this is
         one of the following operations:

         a) replace the label at the top of the label stack with a
            specified new label

         b) pop the label stack

         c) replace the label at the top of the label stack with a
            specified new label, and then push one or more specified
            new labels onto the label stack.

   It may also contain:

      d) the data link encapsulation to use when transmitting the
         packet

      e) the way to encode the label stack when transmitting the packet

      f) any other information needed in order to properly dispose of
         the packet.

   Note that at a given LSR, the packet's "next hop" might be that LSR
   itself.  In this case, the LSR would need to pop the top level
   label, and then "forward" the resulting packet to itself.  It would
   then make another forwarding decision, based on what remains after
   the label stack is popped.  This may still be a labeled packet, or
   it may be the native IP packet.
621 This implies that in some cases the LSR may need to operate on the IP 622 header in order to forward the packet. 624 If the packet's "next hop" is the current LSR, then the label stack 625 operation MUST be to "pop the stack". 627 2.11. Incoming Label Map (ILM) 629 The "Incoming Label Map" (ILM) is a mapping from incoming labels to 630 NHLFEs. It is used when forwarding packets that arrive as labeled 631 packets. 633 2.12. FEC-to-NHLFE Map (FTN) 635 The "FEC-to-NHLFE" (FTN) is a mapping from FECs to NHLFEs. It is used 636 when forwarding packets that arrive unlabeled, but which are to be 637 labeled before being forwarded. 639 2.13. Label Swapping 641 Label swapping is the use of the following procedures to forward a 642 packet. 644 In order to forward a labeled packet, an LSR examines the label at the 645 top of the label stack. It uses the ILM to map this label to an 646 NHLFE. Using the information in the NHLFE, it determines where to 647 forward the packet, and performs an operation on the packet's label 648 stack. It then encodes the new label stack into the packet, and 649 forwards the result. 651 In order to forward an unlabeled packet, an LSR analyzes the network 652 layer header, to determine the packet's FEC. It then uses the FTN to 653 map this to an NHLFE. Using the information in the NHLFE, it 654 determines where to forward the packet, and performs an operation on 655 the packet's label stack. (Popping the label stack would, of course, 656 be illegal in this case.) It then encodes the new label stack into 657 the packet, and forwards the result. 659 IT IS IMPORTANT TO NOTE THAT WHEN LABEL SWAPPING IS IN USE, THE NEXT 660 HOP IS ALWAYS TAKEN FROM THE NHLFE; THIS MAY IN SOME CASES BE 661 DIFFERENT FROM WHAT THE NEXT HOP WOULD BE IF MPLS WERE NOT IN USE. 663 2.14. Scope and Uniqueness of Labels 665 A given LSR Rd may bind label L1 to FEC F, and distribute that 666 binding to LDP peer Ru1.
Rd may also bind label L2 to FEC F, and 667 distribute that binding to LDP peer Ru2. Whether or not L1 == L2 is 668 not determined by the architecture; this is a local matter. 670 A given LSR Rd may bind label L to FEC F1, and distribute that 671 binding to LDP peer Ru1. Rd may also bind label L to FEC F2, and 672 distribute that binding to LDP peer Ru2. IF (AND ONLY IF) RD CAN 673 TELL, WHEN IT RECEIVES A PACKET WHOSE TOP LABEL IS L, WHETHER THE 674 LABEL WAS PUT THERE BY RU1 OR BY RU2, THEN THE ARCHITECTURE DOES NOT 675 REQUIRE THAT F1 == F2. In such cases, we may say that Rd is using a 676 different "label space" for the labels it distributes to Ru1 than for 677 the labels it distributes to Ru2. 679 In general, Rd can only tell whether it was Ru1 or Ru2 that put the 680 particular label value L at the top of the label stack if the 681 following conditions hold: 683 - Ru1 and Ru2 are the only LDP peers to which Rd distributed a 684 binding of label value L, and 686 - Ru1 and Ru2 are each directly connected to Rd via a point-to- 687 point interface. 689 When these conditions hold, an LSR may use labels that have "per 690 interface" scope, i.e., which are only unique per interface. We may 691 say that the LSR is using a "per-interface label space". When these 692 conditions do not hold, the labels must be unique over the LSR which 693 has assigned them, and we may say that the LSR is using a "per- 694 platform label space." 696 If a particular LSR Rd is attached to a particular LSR Ru over two 697 point-to-point interfaces, then Rd may distribute to Ru a binding of 698 label L to FEC F1, as well as a binding of label L to FEC F2, F1 != 699 F2, if and only if each binding is valid only for packets which Ru 700 sends to Rd over a particular one of the interfaces. In all other 701 cases, Rd MUST NOT distribute to Ru bindings of the same label value 702 to two different FECs. 
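The ILM, FTN, and NHLFE procedures of sections 2.10 through 2.13, together with the per-interface label space notion just described, can be sketched as follows. This is a non-normative illustration only; all class names, interface names, label values, and the example prefix are invented for the sketch:

```python
from dataclasses import dataclass

@dataclass
class NHLFE:
    next_hop: str          # the packet's next hop
    op: str                # "swap", "pop", or "swap-and-push" (ops a-c of section 2.10)
    out_labels: tuple = () # label(s) used by "swap" / "swap-and-push"

class LSR:
    def __init__(self):
        # Per-interface label space: the ILM is keyed by (interface, label);
        # a per-platform label space would key on the label value alone.
        self.ilm = {}      # (incoming interface, incoming label) -> NHLFE
        self.ftn = {}      # FEC (here, an address prefix) -> NHLFE

    def _apply(self, nhlfe, stack):
        if nhlfe.op == "pop":
            return stack[:-1]
        # "swap" replaces the top label; "swap-and-push" replaces it and then
        # pushes further labels.  On an unlabeled packet this simply pushes.
        return stack[:-1] + list(nhlfe.out_labels)

    def forward_labeled(self, in_if, stack):
        """Forward a labeled packet: the TOP label indexes the ILM."""
        nhlfe = self.ilm[(in_if, stack[-1])]
        return nhlfe.next_hop, self._apply(nhlfe, stack)

    def forward_unlabeled(self, fec):
        """Forward an unlabeled packet: its FEC indexes the FTN."""
        nhlfe = self.ftn[fec]
        return nhlfe.next_hop, self._apply(nhlfe, [])

lsr = LSR()
lsr.ilm[("if0", 17)] = NHLFE("Rd", "swap", (21,))
lsr.ftn["192.0.2.0/24"] = NHLFE("Rd", "swap", (21,))
```

Note that in both procedures the next hop comes from the NHLFE, never from the network layer header, matching the emphasized statement in section 2.13.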
704 This prohibition holds even if the bindings are regarded as being at 705 different "levels of hierarchy". In MPLS, there is no notion of 706 having a different label space for different levels of the hierarchy; 707 when interpreting a label, the level of the label is irrelevant. 709 The question arises as to whether it is possible for an LSR to use 710 multiple per-platform label spaces, or to use multiple per-interface 711 label spaces for the same interface. This is not prohibited by the 712 architecture. However, in such cases the LSR must have some means, 713 not specified by the architecture, of determining, for a particular 714 incoming label, which label space that label belongs to. For 715 example, [MPLS-SHIM] specifies that a different label space is used 716 for unicast packets than for multicast packets, and uses a data link 717 layer codepoint to distinguish the two label spaces. 719 2.15. Label Switched Path (LSP), LSP Ingress, LSP Egress 721 A "Label Switched Path (LSP) of level m" for a particular packet P is 722 a sequence of routers, 724 <R1, ..., Rn> 726 with the following properties: 728 1. R1, the "LSP Ingress", is an LSR which pushes a label onto P's 729 label stack, resulting in a label stack of depth m; 731 2. For all i, 1<i<n, P has a label stack of depth m when received by LSR Ri; 3. At no time during P's transit from R1 to R[n-1] does its label stack ever have depth less than m; 4. For all i, 1<i<n: Ri transmits P to R[i+1] by means of MPLS, i.e., by using the label at the top of the label stack (the level m label) as an index into an ILM; 5. For all i, 1<i<n: if a system S receives and forwards P after P is transmitted by Ri but before P is received by R[i+1] (e.g., Ri and R[i+1] might be connected via a switched data link subnetwork, and S might be one of the data link switches), then S's forwarding decision is not based on the level m label, or on the network layer header. This may be because: a) the decision is not based on the label stack or the network layer header at all; b) the decision is based on a label stack on which additional labels have been pushed (i.e., on a level m+k label, where k>0). 755 In other words, we can speak of the level m LSP for Packet P as the 756 sequence of routers: 758 1. which begins with an LSR (an "LSP Ingress") that pushes on a 759 level m label, 761 2. all of whose intermediate LSRs make their forwarding decision 762 by label switching on a level m label, 764 3. which ends (at an "LSP Egress") when a forwarding decision is 765 made by label switching on a level m-k label, where k>0, or 766 when a forwarding decision is made by "ordinary", non-MPLS 767 forwarding procedures.
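As a non-normative illustration of the level numbering of section 2.9 and of LSP ingress/egress behavior (all label values invented), a label stack can be modeled as a list whose bottom is level 1 and whose top is level m:

```python
# Sketch only: a label stack as a Python list, bottom (level 1) first,
# top (level m) last.  Label values 17 and 42 are invented.

def level(stack, k):
    """The level-k label: level 1 is the bottom, level len(stack) the top."""
    return stack[k - 1]

stack = []                    # an unlabeled packet: depth 0
stack = stack + [17]          # pushed by the ingress of a level 1 LSP
stack = stack + [42]          # pushed by the ingress of a level 2 LSP
assert level(stack, 2) == 42  # every forwarding decision uses only this label
stack = stack[:-1]            # popped at (or just before) the level 2 egress
assert level(stack, 1) == 17  # the level 1 label is exposed again
```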
769 A consequence (or perhaps a presupposition) of this is that whenever 770 an LSR pushes a label onto an already labeled packet, it needs to 771 make sure that the new label corresponds to a FEC whose LSP Egress is 772 the LSR that assigned the label which is now second in the stack. 774 We will call a sequence of LSRs the "LSP for a particular FEC F" if 775 it is an LSP of level m for a particular packet P when P's level m 776 label is a label corresponding to FEC F. 778 Consider the set of nodes which may be LSP ingress nodes for FEC F. 779 Then there is an LSP for FEC F which begins with each of those nodes. 780 If a number of those LSPs have the same LSP egress, then one can 781 consider the set of such LSPs to be a tree, whose root is the LSP 782 egress. (Since data travels along this tree towards the root, this 783 may be called a multipoint-to-point tree.) We can thus speak of the 784 "LSP tree" for a particular FEC F. 786 2.16. Penultimate Hop Popping 788 Note that according to the definitions of section 2.15, if <R1, ..., 789 Rn> is a level m LSP for packet P, P may be transmitted from R[n-1] 790 to Rn with a label stack of depth m-1. That is, the label stack may 791 be popped at the penultimate LSR of the LSP, rather than at the LSP 792 Egress. 794 From an architectural perspective, this is perfectly appropriate. 795 The purpose of the level m label is to get the packet to Rn. Once 796 R[n-1] has decided to send the packet to Rn, the label no longer has 797 any function, and need no longer be carried. 799 There is also a practical advantage to doing penultimate hop popping. 800 If one does not do this, then when the LSP egress receives a packet, 801 it first looks up the top label, and determines as a result of that 802 lookup that it is indeed the LSP egress. Then it must pop the stack, 803 and examine what remains of the packet. If there is another label on 804 the stack, the egress will look this up and forward the packet based 805 on this lookup.
(In this case, the egress for the packet's level m 806 LSP is also an intermediate node for its level m-1 LSP.) If there is 807 no other label on the stack, then the packet is forwarded according 808 to its network layer destination address. Note that this would 809 require the egress to do TWO lookups, either two label lookups or a 810 label lookup followed by an address lookup. 812 If, on the other hand, penultimate hop popping is used, then when the 813 penultimate hop looks up the label, it determines: 815 - that it is the penultimate hop, and 817 - who the next hop is. 819 The penultimate node then pops the stack, and forwards the packet 820 based on the information gained by looking up the label that was 821 previously at the top of the stack. When the LSP egress receives the 822 packet, the label which is now at the top of the stack will be the 823 label which it needs to look up in order to make its own forwarding 824 decision. Or, if the packet was only carrying a single label, the 825 LSP egress will simply see the network layer packet, which is just 826 what it needs to see in order to make its forwarding decision. 828 This technique allows the egress to do a single lookup, and also 829 requires only a single lookup by the penultimate node. 831 The creation of the forwarding "fastpath" in a label switching 832 product may be greatly aided if it is known that only a single lookup 833 is ever required: 835 - the code may be simplified if it can assume that only a single 836 lookup is ever needed 838 - the code can be based on a "time budget" that assumes that only a 839 single lookup is ever needed. 841 In fact, when penultimate hop popping is done, the LSP Egress need 842 not even be an LSR. 844 However, some hardware switching engines may not be able to pop the 845 label stack, so this cannot be universally required. There may also 846 be some situations in which penultimate hop popping is not desirable. 
847 Therefore the penultimate node pops the label stack only if this is 848 specifically requested by the egress node, OR if the next node in the 849 LSP does not support MPLS. (If the next node in the LSP does support 850 MPLS, but does not make such a request, the penultimate node has no 851 way of knowing that it in fact is the penultimate node.) 853 An LSR which is capable of popping the label stack at all MUST do 854 penultimate hop popping when so requested by its downstream LDP peer. 856 Initial LDP negotiations MUST allow each LSR to determine whether its 857 neighboring LSRs are capable of popping the label stack. An LSR MUST 858 NOT request an LDP peer to pop the label stack unless it is capable 859 of doing so. 861 It may be asked whether the egress node can always interpret the top 862 label of a received packet properly if penultimate hop popping is 863 used. As long as the uniqueness and scoping rules of section 2.14 864 are obeyed, it is always possible to interpret the top label of a 865 received packet unambiguously. 867 2.17. LSP Next Hop 869 The LSP Next Hop for a particular labeled packet in a particular LSR 870 is the LSR which is the next hop, as selected by the NHLFE entry used 871 for forwarding that packet. 873 The LSP Next Hop for a particular FEC is the next hop as selected by 874 the NHLFE entry indexed by a label which corresponds to that FEC. 876 Note that the LSP Next Hop may differ from the next hop which would 877 be chosen by the network layer routing algorithm. We will use the 878 term "L3 next hop" when we refer to the latter. 880 2.18. Invalid Incoming Labels 882 What should an LSR do if it receives a labeled packet with a 883 particular incoming label, but has no binding for that label? It is 884 tempting to think that the labels can just be removed, and the packet 885 forwarded as an unlabeled IP packet. However, in some cases, doing 886 so could cause a loop.
If the upstream LSR thinks the label is bound 887 to an explicit route, and the downstream LSR doesn't think the label 888 is bound to anything, and if the hop by hop routing of the unlabeled 889 IP packet brings the packet back to the upstream LSR, then a loop is 890 formed. 892 It is also possible that the label was intended to represent a route 893 which cannot be inferred from the IP header. 895 Therefore, when a labeled packet is received with an invalid incoming 896 label, it MUST be discarded, UNLESS it is determined by some means 897 (not within the scope of the current document) that forwarding it 898 unlabeled cannot cause any harm. 900 2.19. LSP Control: Ordered versus Independent 902 Some FECs correspond to address prefixes which are distributed via a 903 dynamic routing algorithm. The setup of the LSPs for these FECs can 904 be done in one of two ways: Independent LSP Control or Ordered LSP 905 Control. 907 In Independent LSP Control, each LSR, upon noting that it recognizes 908 a particular FEC, makes an independent decision to bind a label to 909 that FEC and to distribute that binding to its LDP peers. This 910 corresponds to the way that conventional IP datagram routing works; 911 each node makes an independent decision as to how to treat each 912 packet, and relies on the routing algorithm to converge rapidly so as 913 to ensure that each datagram is correctly delivered. 915 In Ordered LSP Control, an LSR only binds a label to a particular FEC 916 if it is the egress LSR for that FEC, or if it has already received a 917 label binding for that FEC from its next hop for that FEC. 919 If one wants to ensure that traffic in a particular FEC follows a 920 path with some specified set of properties (e.g., that the traffic 921 does not traverse any node twice, that a specified amount of 922 resources are available to the traffic, that the traffic follows an 923 explicitly specified path, etc.) ordered control must be used. 
With 924 independent control, some LSRs may begin label switching traffic in 925 the FEC before the LSP is completely set up, and thus some traffic in 926 the FEC may follow a path which does not have the specified set of 927 properties. Ordered control also needs to be used if the recognition 928 of the FEC is a consequence of the setting up of the corresponding 929 LSP. 931 Ordered LSP setup may be initiated either by the ingress or the 932 egress. 934 Ordered control and independent control are fully interoperable. 935 However, unless all LSRs in an LSP are using ordered control, the 936 overall effect on network behavior is largely that of independent 937 control, since one cannot be sure that an LSP is not used until it is 938 fully set up. 940 This architecture allows the choice between independent control and 941 ordered control to be a local matter. Since the two methods 942 interwork, a given LSR need support only one or the other. Generally 943 speaking, the choice of independent versus ordered control does not 944 appear to have any effect on the LDP mechanisms which need to be 945 defined. 947 2.20. Aggregation 949 One way of partitioning traffic into FECs is to create a separate FEC 950 for each address prefix which appears in the routing table. However, 951 within a particular MPLS domain, this may result in a set of FECs 952 such that all traffic in all those FECs follows the same route. For 953 example, a set of distinct address prefixes might all have the same 954 egress node, and label swapping might be used only to get the 955 traffic to the egress node. In this case, within the MPLS domain, 956 the union of those FECs is itself a FEC. This creates a choice: 957 should a distinct label be bound to each component FEC, or should a 958 single label be bound to the union, and that label applied to all 959 traffic in the union?
961 The procedure of binding a single label to a union of FECs which is 962 itself a FEC (within some domain), and of applying that label to all 963 traffic in the union, is known as "aggregation". The MPLS 964 architecture allows aggregation. Aggregation may reduce the number 965 of labels which are needed to handle a particular set of packets, and 966 may also reduce the amount of LDP control traffic needed. 968 Given a set of FECs which are "aggregatable" into a single FEC, it is 969 possible to (a) aggregate them into a single FEC, (b) aggregate them 970 into a set of FECs, or (c) not aggregate them at all. Thus we can 971 speak of the "granularity" of aggregation, with (a) being the 972 "coarsest granularity", and (c) being the "finest granularity". 974 When ordered control is used, each LSR should adopt, for a given set of 975 FECs, the granularity used by its next hop for those FECs. 977 When independent control is used, it is possible that there will be 978 two adjacent LSRs, Ru and Rd, which aggregate some set of FECs 979 differently. 981 If Ru has finer granularity than Rd, this does not cause a problem. 982 Ru distributes more labels for that set of FECs than Rd does. This 983 means that when Ru needs to forward labeled packets in those FECs to 984 Rd, it may need to map n labels into m labels, where n > m. As an 985 option, Ru may withdraw the set of n labels that it has distributed, 986 and then distribute a set of m labels, corresponding to Rd's level of 987 granularity. This is not necessary to ensure correct operation, but 988 it does result in a reduction of the number of labels distributed by 989 Ru, and Ru is not gaining any particular advantage by distributing 990 the larger number of labels. The decision whether to do this or not 991 is a local matter.
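The finer-to-coarser mapping just described, in which Ru maps its n labels onto Rd's m labels, can be sketched as follows. This is a hypothetical, non-normative illustration: the prefixes (taken from the documentation address ranges), egress names, and label values are all invented.

```python
# Upstream Ru uses finer granularity (one label per prefix, n = 3);
# downstream Rd uses coarser granularity (one label per egress, m = 2).

ru_label_for_prefix = {            # Ru's labels, one per prefix
    "192.0.2.0/25": 101,
    "192.0.2.128/25": 102,
    "198.51.100.0/24": 103,
}
egress_for_prefix = {              # two prefixes share egress "E1"
    "192.0.2.0/25": "E1",
    "192.0.2.128/25": "E1",
    "198.51.100.0/24": "E2",
}
rd_label_for_egress = {"E1": 201, "E2": 202}   # Rd's labels, one per egress

# The n -> m mapping Ru installs in its NHLFEs when forwarding to Rd:
out_label = {
    ru_label_for_prefix[p]: rd_label_for_egress[egress_for_prefix[p]]
    for p in ru_label_for_prefix
}
```

Here labels 101 and 102 both map to Rd's label 201, which is exactly the n > m case described above; Ru may alternatively withdraw its three labels and distribute two.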
993 If Ru has coarser granularity than Rd (i.e., Rd has distributed n 994 labels for the set of FECs, while Ru has distributed m, where n > m), 995 it has two choices: 997 - It may adopt Rd's finer level of granularity. This would require 998 it to withdraw the m labels it has distributed, and distribute n 999 labels. This is the preferred option. 1001 - It may simply map its m labels into a subset of Rd's n labels, if 1002 it can determine that this will produce the same routing. For 1003 example, suppose that Ru applies a single label to all traffic 1004 that needs to pass through a certain egress LSR, whereas Rd binds 1005 a number of different labels to such traffic, depending on the 1006 individual destination addresses of the packets. If Ru knows the 1007 address of the egress router, and if Rd has bound a label to the 1008 FEC which is identified by that address, then Ru can simply apply 1009 that label. 1011 In any event, every LSR needs to know (by configuration) what 1012 granularity to use for labels that it assigns. Where ordered control 1013 is used, this requires each node to know the granularity only for 1014 FECs which leave the MPLS network at that node. For independent 1015 control, best results may be obtained by ensuring that all LSRs are 1016 consistently configured to know the granularity for each FEC. 1017 However, in many cases this may be done by using a single level of 1018 granularity which applies to all FECs (such as "one label per IP 1019 prefix in the forwarding table", or "one label per egress node"). 1021 2.21. Route Selection 1023 Route selection refers to the method used for selecting the LSP for a 1024 particular FEC. The proposed MPLS protocol architecture supports two 1025 options for Route Selection: (1) hop by hop routing, and (2) explicit 1026 routing. 1028 Hop by hop routing allows each node to independently choose the next 1029 hop for each FEC. This is the usual mode today in existing IP 1030 networks. 
A "hop by hop routed LSP" is an LSP whose route is selected 1031 using hop by hop routing. 1033 In an explicitly routed LSP, each LSR does not independently choose 1034 the next hop; rather, a single LSR, generally the LSP ingress or the 1035 LSP egress, specifies several (or all) of the LSRs in the LSP. If a 1036 single LSR specifies the entire LSP, the LSP is "strictly" explicitly 1037 routed. If a single LSR specifies only some of the LSP, the LSP is 1038 "loosely" explicitly routed. 1040 The sequence of LSRs followed by an explicitly routed LSP may be 1041 chosen by configuration, or may be selected dynamically by a single 1042 node (for example, the egress node may make use of the topological 1043 information learned from a link state database in order to compute 1044 the entire path for the tree ending at that egress node). 1046 Explicit routing may be useful for a number of purposes, such as 1047 policy routing or traffic engineering. In MPLS, the explicit route 1048 needs to be specified at the time that labels are assigned, but the 1049 explicit route does not have to be specified with each IP packet. 1050 This makes MPLS explicit routing much more efficient than the 1051 alternative of IP source routing. 1053 2.22. Lack of Outgoing Label 1055 When a labeled packet is traveling along an LSP, it may occasionally 1056 happen that it reaches an LSR at which the ILM does not map the 1057 packet's incoming label into an NHLFE, even though the incoming label 1058 is itself valid. This can happen due to transient conditions, or due 1059 to an error at the LSR which should be the packet's next hop. 1061 It is tempting in such cases to strip off the label stack and attempt 1062 to forward the packet further via conventional forwarding, based on 1063 its network layer header. However, in general this is not a safe 1064 procedure: 1066 - If the packet has been following an explicitly routed LSP, this 1067 could result in a loop. 
1069 - The packet's network header may not contain enough information to 1070 enable this particular LSR to forward it correctly. 1072 Unless it can be determined (through some means outside the scope of 1073 this document) that neither of these situations obtains, the only 1074 safe procedure is to discard the packet. 1076 2.23. Time-to-Live (TTL) 1078 In conventional IP forwarding, each packet carries a "Time To Live" 1079 (TTL) value in its header. Whenever a packet passes through a 1080 router, its TTL gets decremented by 1; if the TTL reaches 0 before 1081 the packet has reached its destination, the packet gets discarded. 1083 This provides some level of protection against forwarding loops that 1084 may exist due to misconfigurations, or due to failure or slow 1085 convergence of the routing algorithm. TTL is sometimes used for other 1086 functions as well, such as multicast scoping, and supporting the 1087 "traceroute" command. This implies that there are two TTL-related 1088 issues that MPLS needs to deal with: (i) TTL as a way to suppress 1089 loops; (ii) TTL as a way to accomplish other functions, such as 1090 limiting the scope of a packet. 1092 When a packet travels along an LSP, it SHOULD emerge with the same 1093 TTL value that it would have had if it had traversed the same 1094 sequence of routers without having been label switched. If the 1095 packet travels along a hierarchy of LSPs, the total number of LSR- 1096 hops traversed SHOULD be reflected in its TTL value when it emerges 1097 from the hierarchy of LSPs. 1099 The way that TTL is handled may vary depending upon whether the MPLS 1100 label values are carried in an MPLS-specific "shim" header [MPLS- 1101 SHIM], or if the MPLS labels are carried in an L2 header, such as an 1102 ATM header [MPLS-ATM] or a frame relay header [MPLS-FRMRLY]. 
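The intended TTL behavior can be sketched as follows; this is a non-normative illustration, and the function name and all values are invented. A packet SHOULD emerge from an LSP (or a hierarchy of LSPs) with the same TTL it would have had after conventional forwarding; on a non-TTL segment, the ingress instead applies the whole segment length at once.

```python
# Sketch: IP TTL across an LSP of lsr_hops LSR-hops.  In the shim case
# the TTL is carried and decremented hop by hop; in the non-TTL case
# (e.g. ATM) the ingress decrements by the propagated LSP length.

def traverse_lsp(ip_ttl, lsr_hops, per_hop_ttl=True):
    """Return the IP TTL after the packet crosses the LSP (0 = expired)."""
    if per_hop_ttl:
        shim_ttl = ip_ttl                 # loaded from the IP header at ingress
        for _ in range(lsr_hops):
            shim_ttl -= 1                 # decremented at each LSR-hop
            if shim_ttl == 0:
                return 0                  # expired inside the LSP
        return shim_ttl                   # copied back to the IP header at egress
    # Non-TTL LSP segment: ingress applies the whole length up front,
    # and must not label switch a packet whose TTL would expire inside.
    return max(ip_ttl - lsr_hops, 0)

# Both strategies leave the packet with the TTL conventional routing
# would have produced.
assert traverse_lsp(64, 5) == traverse_lsp(64, 5, per_hop_ttl=False) == 59
```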
1104 If the label values are encoded in a "shim" that sits between the 1105 data link and network layer headers, then this shim MUST have a TTL 1106 field that SHOULD be initially loaded from the network layer header 1107 TTL field, SHOULD be decremented at each LSR-hop, and SHOULD be 1108 copied into the network layer header TTL field when the packet 1109 emerges from its LSP. 1111 If the label values are encoded in a data link layer header (e.g., 1112 the VPI/VCI field in ATM's AAL5 header), and the labeled packets are 1113 forwarded by an L2 switch (e.g., an ATM switch), and the data link 1114 layer (like ATM) does not itself have a TTL field, then it will not 1115 be possible to decrement a packet's TTL at each LSR-hop. An LSP 1116 segment which consists of a sequence of LSRs that cannot decrement a 1117 packet's TTL will be called a "non-TTL LSP segment". 1119 When a packet emerges from a non-TTL LSP segment, it SHOULD however 1120 be given a TTL that reflects the number of LSR-hops it traversed. In 1121 the unicast case, this can be achieved by propagating a meaningful 1122 LSP length to ingress nodes, enabling the ingress to decrement the 1123 TTL value before forwarding packets into a non-TTL LSP segment. 1125 Sometimes it can be determined, upon ingress to a non-TTL LSP 1126 segment, that a particular packet's TTL will expire before the packet 1127 reaches the egress of that non-TTL LSP segment. In this case, the LSR 1128 at the ingress to the non-TTL LSP segment must not label switch the 1129 packet. This means that special procedures must be developed to 1130 support traceroute functionality, for example, traceroute packets may 1131 be forwarded using conventional hop by hop forwarding. 1133 2.24. Loop Control 1135 On a non-TTL LSP segment, by definition, TTL cannot be used to 1136 protect against forwarding loops. 
The importance of loop control may 1137 depend on the particular hardware being used to provide the LSR 1138 functions along the non-TTL LSP segment. 1140 Suppose, for instance, that ATM switching hardware is being used to 1141 provide MPLS switching functions, with the label being carried in the 1142 VPI/VCI field. Since ATM switching hardware cannot decrement TTL, 1143 there is no protection against loops. If the ATM hardware is capable 1144 of providing fair access to the buffer pool for incoming cells 1145 carrying different VPI/VCI values, this looping may not have any 1146 deleterious effect on other traffic. If the ATM hardware cannot 1147 provide fair buffer access of this sort, however, then even transient 1148 loops may cause severe degradation of the LSR's total performance. 1150 Even if fair buffer access can be provided, it is still worthwhile to 1151 have some means of detecting loops that last "longer than possible". 1152 In addition, even where TTL and/or per-VC fair queuing provides a 1153 means for surviving loops, it still may be desirable where practical 1154 to avoid setting up LSPs which loop. All LSRs that may attach to 1155 non-TTL LSP segments will therefore be required to support a common 1156 technique for loop detection; however, use of the loop detection 1157 technique is optional. The loop detection technique is specified in 1158 [MPLS-ATM] and [MPLS-LDP]. 1160 2.25. Label Encodings 1162 In order to transmit a label stack along with the packet whose label 1163 stack it is, it is necessary to define a concrete encoding of the 1164 label stack. The architecture supports several different encoding 1165 techniques; the choice of encoding technique depends on the 1166 particular kind of device being used to forward labeled packets. 1168 2.25.1. 
MPLS-specific Hardware and/or Software 1170 If one is using MPLS-specific hardware and/or software to forward 1171 labeled packets, the most obvious way to encode the label stack is to 1172 define a new protocol to be used as a "shim" between the data link 1173 layer and network layer headers. This shim would really be just an 1174 encapsulation of the network layer packet; it would be "protocol- 1175 independent" such that it could be used to encapsulate any network 1176 layer. Hence we will refer to it as the "generic MPLS 1177 encapsulation". 1179 The generic MPLS encapsulation would in turn be encapsulated in a 1180 data link layer protocol. 1182 The MPLS generic encapsulation is specified in [MPLS-SHIM]. 1184 2.25.2. ATM Switches as LSRs 1186 It will be noted that MPLS forwarding procedures are similar to those 1187 of legacy "label swapping" switches such as ATM switches. ATM 1188 switches use the input port and the incoming VPI/VCI value as the 1189 index into a "cross-connect" table, from which they obtain an output 1190 port and an outgoing VPI/VCI value. Therefore if one or more labels 1191 can be encoded directly into the fields which are accessed by these 1192 legacy switches, then the legacy switches can, with suitable software 1193 upgrades, be used as LSRs. We will refer to such devices as "ATM- 1194 LSRs". 1196 There are three obvious ways to encode labels in the ATM cell header 1197 (presuming the use of AAL5): 1199 1. SVC Encoding 1201 Use the VPI/VCI field to encode the label which is at the top 1202 of the label stack. This technique can be used in any network. 1203 With this encoding technique, each LSP is realized as an ATM 1204 SVC, and the LDP becomes the ATM "signaling" protocol. With 1205 this encoding technique, the ATM-LSRs cannot perform "push" or 1206 "pop" operations on the label stack. 1208 2. 
SVP Encoding 1210 Use the VPI field to encode the label which is at the top of 1211 the label stack, and the VCI field to encode the second label 1212 on the stack, if one is present. This technique has some advantages 1213 over the previous one, in that it permits the use of ATM "VP- 1214 switching". That is, the LSPs are realized as ATM SVPs, with 1215 LDP serving as the ATM signaling protocol. 1217 However, this technique cannot always be used. If the network 1218 includes an ATM Virtual Path through a non-MPLS ATM network, 1219 then the VPI field is not necessarily available for use by 1220 MPLS. 1222 When this encoding technique is used, the ATM-LSR at the egress 1223 of the VP effectively does a "pop" operation. 1225 3. SVP Multipoint Encoding 1227 Use the VPI field to encode the label which is at the top of 1228 the label stack, use part of the VCI field to encode the second 1229 label on the stack, if one is present, and use the remainder of 1230 the VCI field to identify the LSP ingress. If this technique 1231 is used, conventional ATM VP-switching capabilities can be used 1232 to provide multipoint-to-point VPs. Cells from different 1233 packets will then carry different VCI values. As we shall see 1234 in section 2.26, this enables us to do label merging, without 1235 running into any cell interleaving problems, on ATM switches 1236 which can provide multipoint-to-point VPs, but which do not 1237 have the VC merge capability. 1239 This technique depends on the existence of a capability for 1240 assigning 16-bit VCI values to each ATM switch such that no 1241 single VCI value is assigned to two different switches. (If an 1242 adequate number of such values could be assigned to each 1243 switch, it would be possible to also treat the VCI value as the 1244 second label in the stack.) 1246 If there are more labels on the stack than can be encoded in the ATM 1247 header, the ATM encodings must be combined with the generic 1248 encapsulation. 1250 2.25.3.
Interoperability among Encoding Techniques 1252 If <R1, R2, R3> is a segment of an LSP, it is possible that R1 will 1253 use one encoding of the label stack when transmitting packet P to R2, 1254 but R2 will use a different encoding when transmitting packet P to 1255 R3. In general, the MPLS architecture supports LSPs with different 1256 label stack encodings used on different hops. Therefore, when we 1257 discuss the procedures for processing a labeled packet, we speak in 1258 abstract terms of operating on the packet's label stack. When a 1259 labeled packet is received, the LSR must decode it to determine the 1260 current value of the label stack, then must operate on the label 1261 stack to determine the new value of the stack, and then encode the 1262 new value appropriately before transmitting the labeled packet to its 1263 next hop. 1265 Unfortunately, ATM switches have no capability for translating from 1266 one encoding technique to another. The MPLS architecture therefore 1267 requires that whenever it is possible for two ATM switches to be 1268 successive LSRs along a level m LSP for some packet, that those two 1269 ATM switches use the same encoding technique. 1271 Naturally there will be MPLS networks which contain a combination of 1272 ATM switches operating as LSRs, and other LSRs which operate using an 1273 MPLS shim header. In such networks there may be some LSRs which have 1274 ATM interfaces as well as "MPLS Shim" interfaces. This is one example 1275 of an LSR with different label stack encodings on different hops. 1276 Such an LSR may swap off an ATM encoded label stack on an incoming 1277 interface and replace it with an MPLS shim header encoded label stack 1278 on the outgoing interface.
1285 The fact that two different packets in the FEC arrived with different 1286 incoming labels is irrelevant; one would like to forward them with 1287 the same outgoing label. The capability to do so is known as "label 1288 merging". 1290 Let us say that an LSR is capable of label merging if it can receive 1291 two packets from different incoming interfaces, and/or with different 1292 labels, and send both packets out the same outgoing interface with 1293 the same label. Once the packets are transmitted, the information 1294 that they arrived from different interfaces and/or with different 1295 incoming labels is lost. 1297 Let us say that an LSR is not capable of label merging if, for any 1298 two packets which arrive from different interfaces, or with different 1299 labels, the packets must either be transmitted out different 1300 interfaces, or must have different labels. ATM-LSRs using the SVC or 1301 SVP Encodings cannot perform label merging. This is discussed in 1302 more detail in the next section. 1304 If a particular LSR cannot perform label merging, then if two packets 1305 in the same FEC arrive with different incoming labels, they must be 1306 forwarded with different outgoing labels. With label merging, the 1307 number of outgoing labels per FEC need only be 1; without label 1308 merging, the number of outgoing labels per FEC could be as large as 1309 the number of nodes in the network. 1311 With label merging, the number of incoming labels per FEC that a 1312 particular LSR needs is never larger than the number of LDP 1313 adjacencies. Without label merging, the number of incoming labels 1314 per FEC that a particular LSR needs is as large as the number of 1315 upstream nodes which forward traffic in the FEC to the LSR in 1316 question. In fact, it is difficult for an LSR to even determine how 1317 many such incoming labels it must support for a particular FEC.
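The contrast in label usage described above can be condensed into a short sketch. This is an illustration only; the label allocator and data structures are ours, not part of the architecture:

```python
# Illustrative sketch (not from the specification): compare how many
# outgoing labels a single FEC consumes on a merging vs. a
# non-merging LSR.
from itertools import count

_labels = count(100)  # hypothetical label allocator


def bind_outgoing(incoming_labels, can_merge):
    """Map each incoming label bound to a FEC to an outgoing label.

    A merging LSR forwards every packet of the FEC with one shared
    outgoing label; a non-merging LSR must keep the incoming labels
    distinct on output as well.
    """
    if can_merge:
        shared = next(_labels)
        return {inc: shared for inc in incoming_labels}
    return {inc: next(_labels) for inc in incoming_labels}
```

With merging, the outgoing label set stays at size one no matter how many upstream nodes feed traffic for the FEC; without merging, it grows with the number of incoming labels.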
1319 The MPLS architecture accommodates merging LSRs, but also allows for 1320 the fact that there may be LSRs which do not support 1321 label merging. This leads to the issue of ensuring correct 1322 interoperation between merging LSRs and non-merging LSRs. The issue 1323 is somewhat different in the case of datagram media versus the case 1324 of ATM. The different media types will therefore be discussed 1325 separately. 1327 2.26.1. Non-merging LSRs 1329 The MPLS forwarding procedures are very similar to the forwarding 1330 procedures used by such technologies as ATM and Frame Relay. That is, 1331 a unit of data arrives, a label (VPI/VCI or DLCI) is looked up in a 1332 "cross-connect table", on the basis of that lookup an output port is 1333 chosen, and the label value is rewritten. In fact, it is possible to 1334 use such technologies for MPLS forwarding; an LDP can be used as the 1335 "signalling protocol" for setting up the cross-connect tables. 1337 Unfortunately, these technologies do not necessarily support the 1338 label merging capability. In ATM, if one attempts to perform label 1339 merging, the result may be the interleaving of cells from various 1340 packets. If cells from different packets get interleaved, it is 1341 impossible to reassemble the packets. Some Frame Relay switches use 1342 cell switching on their backplanes. These switches may also be 1343 incapable of supporting label merging, for the same reason -- cells 1344 of different packets may get interleaved, and there is then no way to 1345 reassemble the packets. 1347 We propose to support two solutions to this problem. First, MPLS will 1348 contain procedures which allow the use of non-merging LSRs. Second, 1349 MPLS will support procedures which allow certain ATM switches to 1350 function as merging LSRs. 1352 Since MPLS supports both merging and non-merging LSRs, MPLS also 1353 contains procedures to ensure correct interoperation between them. 1355 2.26.2.
Labels for Merging and Non-Merging LSRs 1357 An upstream LSR which supports label merging needs to be sent only 1358 one label per FEC. An upstream neighbor which does not support label 1359 merging needs to be sent multiple labels per FEC. However, there is 1360 no way of knowing a priori how many labels it needs. This will depend 1361 on how many LSRs are upstream of it with respect to the FEC in 1362 question. 1364 In the MPLS architecture, if a particular upstream neighbor does not 1365 support label merging, it is not sent any labels for a particular FEC 1366 unless it explicitly asks for a label for that FEC. The upstream 1367 neighbor may make multiple such requests, and is given a new label 1368 each time. When a downstream neighbor receives such a request from 1369 upstream, and the downstream neighbor does not itself support label 1370 merging, then it must in turn ask its downstream neighbor for another 1371 label for the FEC in question. 1373 It is possible that there may be some nodes which support label 1374 merging, but can only merge a limited number of incoming labels into 1375 a single outgoing label. Suppose for example that due to some 1376 hardware limitation a node is capable of merging four incoming labels 1377 into a single outgoing label. Suppose however, that this particular 1378 node has six incoming labels arriving at it for a particular FEC. In 1379 this case, this node may merge these into two outgoing labels. 1381 Whether label merging is applicable to explicitly routed LSPs is for 1382 further study. 1384 2.26.3. Merge over ATM 1386 2.26.3.1. Methods of Eliminating Cell Interleave 1388 There are several methods that can be used to eliminate the cell 1389 interleaving problem in ATM, thereby allowing ATM switches to support 1390 stream merge: 1392 1. 
VP merge, using the SVP Multipoint Encoding 1394 When VP merge is used, multiple virtual paths are merged into a 1395 virtual path, but packets from different sources are 1396 distinguished by using different VCIs within the VP. 1398 2. VC merge 1400 When VC merge is used, switches are required to buffer cells 1401 from one packet until the entire packet is received (this may 1402 be determined by looking for the AAL5 end of frame indicator). 1404 VP merge has the advantage that it is compatible with a higher 1405 percentage of existing ATM switch implementations. This makes it more 1406 likely that VP merge can be used in existing networks. Unlike VC 1407 merge, VP merge does not incur any delays at the merge points and 1408 also does not impose any buffer requirements. However, it has the 1409 disadvantage that it requires coordination of the VCI space within 1410 each VP. There are a number of ways that this can be accomplished. 1411 Selection of one or more methods is for further study. 1413 This tradeoff between compatibility with existing equipment versus 1414 protocol complexity and scalability implies that it is desirable for 1415 the MPLS protocol to support both VP merge and VC merge. In order to 1416 do so each ATM switch participating in MPLS needs to know whether its 1417 immediate ATM neighbors perform VP merge, VC merge, or no merge. 1419 2.26.3.2. Interoperation: VC Merge, VP Merge, and Non-Merge 1421 The interoperation of the various forms of merging over ATM is most 1422 easily described by first describing the interoperation of VC merge 1423 with non-merge. 1425 In the case where VC merge and non-merge nodes are interconnected the 1426 forwarding of cells is based in all cases on a VC (i.e., the 1427 concatenation of the VPI and VCI). 
For each node, if an upstream 1428 neighbor is doing VC merge then that upstream neighbor requires only 1429 a single VPI/VCI for a particular stream (this is analogous to the 1430 requirement for a single label in the case of operation over frame 1431 media). If the upstream neighbor is not doing merge, then the 1432 neighbor will require a single VPI/VCI per stream for itself, plus 1433 enough VPI/VCIs to pass to its upstream neighbors. The number 1434 required will be determined by allowing the upstream nodes to request 1435 additional VPI/VCIs from their downstream neighbors (this is again 1436 analogous to the method used with frame merge). 1438 A similar method is possible to support nodes which perform VP merge. 1439 In this case the VP merge node, rather than requesting a single 1440 VPI/VCI or a number of VPI/VCIs from its downstream neighbor, instead 1441 may request a single VP (identified by a VPI) but several VCIs within 1442 the VP. Furthermore, suppose that a non-merge node is downstream 1443 from two different VP merge nodes. This node may need to request one 1444 VPI/VCI (for traffic originating from itself) plus two VPs (one for 1445 each upstream node), each associated with a specified set of VCIs (as 1446 requested from the upstream node). 1448 In order to support all of VP merge, VC merge, and non-merge, it is 1449 therefore necessary to allow upstream nodes to request a combination 1450 of zero or more VC identifiers (consisting of a VPI/VCI), plus zero 1451 or more VPs (identified by VPIs) each containing a specified number 1452 of VCs (identified by a set of VCIs which are significant within a 1453 VP). VP merge nodes would therefore request one VP, with a contained 1454 VCI for traffic that they originate (if appropriate) plus a VCI for 1455 each VC requested from above (regardless of whether or not the VC is 1456 part of a containing VP).
VC merge nodes would request only a single 1457 VPI/VCI (since they can merge all upstream traffic into a single VC). 1458 Non-merge nodes would pass on any requests that they get from above, 1459 plus request a VPI/VCI for traffic that they originate (if 1460 appropriate). 1462 2.27. Tunnels and Hierarchy 1464 Sometimes a router Ru takes explicit action to cause a particular 1465 packet to be delivered to another router Rd, even though Ru and Rd 1466 are not consecutive routers on the Hop-by-hop path for that packet, 1467 and Rd is not the packet's ultimate destination. For example, this 1468 may be done by encapsulating the packet inside a network layer packet 1469 whose destination address is the address of Rd itself. This creates a 1470 "tunnel" from Ru to Rd. We refer to any packet so handled as a 1471 "Tunneled Packet". 1473 2.27.1. Hop-by-Hop Routed Tunnel 1475 If a Tunneled Packet follows the Hop-by-hop path from Ru to Rd, we 1476 say that it is in a "Hop-by-Hop Routed Tunnel" whose "transmit 1477 endpoint" is Ru and whose "receive endpoint" is Rd. 1479 2.27.2. Explicitly Routed Tunnel 1481 If a Tunneled Packet travels from Ru to Rd over a path other than the 1482 Hop-by-hop path, we say that it is in an "Explicitly Routed Tunnel" 1483 whose "transmit endpoint" is Ru and whose "receive endpoint" is Rd. 1484 For example, we might send a packet through an Explicitly Routed 1485 Tunnel by encapsulating it in a packet which is source routed. 1487 2.27.3. LSP Tunnels 1489 It is possible to implement a tunnel as a LSP, and use label 1490 switching rather than network layer encapsulation to cause the packet 1491 to travel through the tunnel. The tunnel would be a LSP <R1, ..., Rn>, where R1 is the transmit endpoint of the tunnel, and Rn is the 1493 receive endpoint of the tunnel. This is called a "LSP Tunnel".
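Since entering an LSP tunnel is simply a push on the label stack, the operation can be sketched in a few lines. This is a hedged illustration; representing the stack as a list (last element on top) is our own choice, not the draft's:

```python
# Minimal illustration (our own representation): the label stack as a
# Python list whose last element is the top of the stack.


def enter_tunnel(stack, tunnel_label):
    """At the tunnel's transmit endpoint, push the tunnel's label."""
    return stack + [tunnel_label]


def pop(stack):
    """Pop the top label, e.g. at the penultimate LSR of the tunnel."""
    return stack[:-1]
```

An unlabeled packet enters with an empty stack; a packet that is already labeled keeps its existing labels beneath the tunnel label, which is what allows tunnels to nest.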
1495 The set of packets which are to be sent through the LSP tunnel 1496 constitutes a FEC, and each LSR in the tunnel must assign a label to 1497 that FEC (i.e., must assign a label to the tunnel). The criteria for 1498 assigning a particular packet to an LSP tunnel is a local matter at 1499 the tunnel's transmit endpoint. To put a packet into an LSP tunnel, 1500 the transmit endpoint pushes a label for the tunnel onto the label 1501 stack and sends the labeled packet to the next hop in the tunnel. 1503 If it is not necessary for the tunnel's receive endpoint to be able 1504 to determine which packets it receives through the tunnel, as 1505 discussed earlier, the label stack may be popped at the penultimate 1506 LSR in the tunnel. 1508 A "Hop-by-Hop Routed LSP Tunnel" is a Tunnel that is implemented as 1509 a hop-by-hop routed LSP between the transmit endpoint and the 1510 receive endpoint. 1512 An "Explicitly Routed LSP Tunnel" is a LSP Tunnel that is also an 1513 Explicitly Routed LSP. 1515 2.27.4. Hierarchy: LSP Tunnels within LSPs 1517 Consider a LSP <R1, R2, R3, R4>. Let us suppose that R1 receives 1518 unlabeled packet P, and pushes on its label stack the label to cause 1519 it to follow this path, and that this is in fact the Hop-by-hop path. 1520 However, let us further suppose that R2 and R3 are not directly 1521 connected, but are "neighbors" by virtue of being the endpoints of an 1522 LSP tunnel. So the actual sequence of LSRs traversed by P is <R1, R2, R21, R22, R23, R3, R4>. 1525 When P travels from R1 to R2, it will have a label stack of depth 1. 1526 R2, switching on the label, determines that P must enter the tunnel. 1527 R2 first replaces the Incoming label with a label that is meaningful 1528 to R3. Then it pushes on a new label. This level 2 label has a value 1529 which is meaningful to R21. Switching is done on the level 2 label by 1530 R21, R22, R23. R23, which is the penultimate hop in the R2-R3 tunnel, 1531 pops the label stack before forwarding the packet to R3.
When R3 sees 1532 packet P, P has only a level 1 label, having now exited the tunnel. 1533 Since R3 is the penultimate hop in P's level 1 LSP, it pops the label 1534 stack, and R4 receives P unlabeled. 1536 The label stack mechanism allows LSP tunneling to nest to any depth. 1538 2.27.5. LDP Peering and Hierarchy 1540 Suppose that packet P travels along a Level 1 LSP <R1, R2, R3, R4>, 1541 and when going from R2 to R3 travels along a Level 2 LSP <R2, R21, R3>. From the perspective of the Level 2 LSP, R2's LDP peer is 1543 R21. From the perspective of the Level 1 LSP, R2's LDP peers are R1 1544 and R3. One can have LDP peers at each layer of hierarchy. We will 1545 see in sections 3.6 and 3.7 some ways to make use of this hierarchy. 1546 Note that in this example, R2 and R21 must be IGP neighbors, but R2 1547 and R3 need not be. 1549 When two LSRs are IGP neighbors, we will refer to them as "Local LDP 1550 Peers". When two LSRs may be LDP peers, but are not IGP neighbors, 1551 we will refer to them as "Remote LDP Peers". In the above example, 1552 R2 and R21 are local LDP peers, but R2 and R3 are remote LDP peers. 1554 The MPLS architecture supports two ways to distribute labels at 1555 different layers of the hierarchy: Explicit Peering and Implicit 1556 Peering. 1558 One performs label distribution with one's Local LDP Peers by opening 1559 LDP connections to them. One can perform label distribution with 1560 one's Remote LDP Peers in one of two ways: 1562 1. Explicit Peering 1564 In explicit peering, one sets up LDP connections between Remote 1565 LDP Peers, exactly as one would do for Local LDP Peers. This 1566 technique is most useful when the number of Remote LDP Peers is 1567 small, or the number of higher level label bindings is large, 1568 or the Remote LDP Peers are in distinct routing areas or 1569 domains. Of course, one needs to know which labels to 1570 distribute to which peers; this is addressed in section 3.1.2.
1572 Examples of the use of explicit peering are found in sections 1573 3.2.1 and 3.6. 1575 2. Implicit Peering 1577 In Implicit Peering, one does not have LDP connections to one's 1578 remote LDP peers, but only to one's local LDP peers. To 1579 distribute higher level labels to one's remote LDP peers, one 1580 encodes the higher level labels as an attribute of the lower 1581 level labels, and distributes the lower level label, along with 1582 this attribute, to the local LDP peers. The local LDP peers 1583 then propagate the information to their peers. This process 1584 continues until the information reaches remote LDP peers. Note 1585 that the intermediary nodes may also be remote LDP peers. 1587 This technique is most useful when the number of Remote LDP 1588 Peers is large. Implicit peering does not require an n-square 1589 peering mesh to distribute labels to the remote LDP peers 1590 because the information is piggybacked through the local LDP 1591 peering. However, implicit peering requires the intermediate 1592 nodes to store information that they might not be directly 1593 interested in. 1595 An example of the use of implicit peering is found in section 1596 3.3. 1598 2.28. LDP Transport 1600 LDP is used between nodes in an MPLS network to establish and 1601 maintain the label bindings. In order for LDP to operate correctly, 1602 LDP information needs to be transmitted reliably, and the LDP 1603 messages pertaining to a particular FEC need to be transmitted in 1604 sequence. Flow control is also required, as is the capability to 1605 carry multiple LDP messages in a single datagram. 1607 These goals will be met by using TCP as the underlying transport for 1608 LDP. 1610 (The use of multicast techniques to distribute label bindings is for 1611 further study.) 1613 2.29. Multicast 1615 This section is for further study. 1617 3. Some Applications of MPLS 1619 3.1.
MPLS and Hop by Hop Routed Traffic 1621 A number of uses of MPLS require that packets with a certain label be 1622 forwarded along the same hop-by-hop routed path that would be used 1623 for forwarding a packet with a specified address in its network layer 1624 destination address field. 1626 3.1.1. Labels for Address Prefixes 1628 In general, router R determines the next hop for packet P by finding 1629 the address prefix X in its routing table which is the longest match 1630 for P's destination address. That is, the packets in a given FEC are 1631 just those packets which match a given address prefix in R's routing 1632 table. In this case, a FEC can be identified with an address prefix. 1634 Note that a packet P may be assigned to FEC F, and FEC F may be 1635 identified with address prefix X, even if P's destination address 1636 does not match X. 1638 3.1.2. Distributing Labels for Address Prefixes 1640 3.1.2.1. LDP Peers for a Particular Address Prefix 1642 LSRs R1 and R2 are considered to be LDP Peers for address prefix X if 1643 and only if one of the following conditions holds: 1645 1. R1's route to X is a route which it learned about via a 1646 particular instance of a particular IGP, and R2 is a neighbor 1647 of R1 in that instance of that IGP 1649 2. R1's route to X is a route which it learned about by some 1650 instance of routing algorithm A1, and that route is 1651 redistributed into an instance of routing algorithm A2, and R2 1652 is a neighbor of R1 in that instance of A2 1654 3. R1 is the receive endpoint of an LSP Tunnel that is within 1655 another LSP, and R2 is a transmit endpoint of that tunnel, and 1656 R1 and R2 are participants in a common instance of an IGP, and 1657 are in the same IGP area (if the IGP in question has areas), 1658 and R1's route to X was learned via that IGP instance, or is 1659 redistributed by R1 into that IGP instance 1661 4. 
R1's route to X is a route which it learned about via BGP, and 1662 R2 is a BGP peer of R1 1664 In general, these rules ensure that if the route to a particular 1665 address prefix is distributed via an IGP, the LDP peers for that 1666 address prefix are the IGP neighbors. If the route to a particular 1667 address prefix is distributed via BGP, the LDP peers for that address 1668 prefix are the BGP peers. In other cases of LSP tunneling, the 1669 tunnel endpoints are LDP peers. 1671 3.1.2.2. Distributing Labels 1673 In order to use MPLS for the forwarding of packets according to the 1674 hop-by-hop route corresponding to any address prefix, each LSR MUST: 1676 1. bind one or more labels to each address prefix that appears in 1677 its routing table; 1679 2. for each such address prefix X, use an LDP to distribute the 1680 binding of a label to X to each of its LDP Peers for X. 1682 There is also one circumstance in which an LSR must distribute a 1683 label binding for an address prefix, even if it is not the LSR which 1684 bound that label to that address prefix: 1686 3. If R1 uses BGP to distribute a route to X, naming some other 1687 LSR R2 as the BGP Next Hop to X, and if R1 knows that R2 has 1688 assigned label L to X, then R1 must distribute the binding 1689 between L and X to any BGP peer to which it distributes that 1690 route. 1692 These rules ensure that labels corresponding to address prefixes 1693 which correspond to BGP routes are distributed to IGP neighbors if 1694 and only if the BGP routes are distributed into the IGP. Otherwise, 1695 the labels bound to BGP routes are distributed only to the other BGP 1696 speakers. 1698 These rules are intended only to indicate which label bindings must 1699 be distributed by a given LSR to which other LSRs. 1701 3.1.3. Using the Hop by Hop path as the LSP 1703 If the hop-by-hop path that packet P needs to follow is <R1, ..., Rn>, then <R1, ..., Rn> can be an LSP as long as: 1706 1.
there is a single address prefix X, such that, for all i, 1707 1<=i, and the Hop-by-hop path for P2 is <R4, R2, R3>. Let's suppose that R3 binds label L3 to X, and distributes 1974 this binding to R2. R2 binds label L2 to X, and distributes this 1975 binding to both R1 and R4. When R2 receives packet P1, its incoming 1976 label will be L2. R2 will overwrite L2 with L3, and send P1 to R3. 1977 When R2 receives packet P2, its incoming label will also be L2. R2 1978 again overwrites L2 with L3, and sends P2 on to R3. 1980 Note then that when P1 and P2 are traveling from R2 to R3, they carry 1981 the same label, and as far as MPLS is concerned, they cannot be 1982 distinguished. Thus instead of talking about two distinct LSPs, <R1, R2, R3> and <R4, R2, R3>, we might talk of a single "Multipoint-to- 1984 Point LSP Tree", which we might denote as <{R1, R4}, R2, R3>. 1986 This creates a difficulty when we attempt to use conventional ATM 1987 switches as LSRs. Since conventional ATM switches do not support 1988 multipoint-to-point connections, there must be procedures to ensure 1989 that each LSP is realized as a point-to-point VC. However, if ATM 1990 switches which do support multipoint-to-point VCs are in use, then 1991 the LSPs can be most efficiently realized as multipoint-to-point VCs. 1992 Alternatively, if the SVP Multipoint Encoding (section 2.25.2) can be 1993 used, the LSPs can be realized as multipoint-to-point SVPs. 1995 3.6. LSP Tunneling between BGP Border Routers 1997 Consider the case of an Autonomous System, A, which carries transit 1998 traffic between other Autonomous Systems. Autonomous System A will 1999 have a number of BGP Border Routers, and a mesh of BGP connections 2000 among them, over which BGP routes are distributed. In many such 2001 cases, it is desirable to avoid distributing the BGP routes to 2002 routers which are not BGP Border Routers. If this can be avoided, 2003 the "route distribution load" on those routers is significantly 2004 reduced.
However, there must be some means of ensuring that the 2005 transit traffic will be delivered from Border Router to Border Router 2006 by the interior routers. 2008 This can easily be done by means of LSP Tunnels. Suppose that BGP 2009 routes are distributed only to BGP Border Routers, and not to the 2010 interior routers that lie along the Hop-by-hop path from Border 2011 Router to Border Router. LSP Tunnels can then be used as follows: 2013 1. Each BGP Border Router distributes, to every other BGP Border 2014 Router in the same Autonomous System, a label for each address 2015 prefix that it distributes to that router via BGP. 2017 2. The IGP for the Autonomous System maintains a host route for 2018 each BGP Border Router. Each interior router distributes its 2019 labels for these host routes to each of its IGP neighbors. 2021 3. Suppose that: 2023 a) BGP Border Router B1 receives an unlabeled packet P, 2025 b) address prefix X in B1's routing table is the longest 2026 match for the destination address of P, 2028 c) the route to X is a BGP route, 2030 d) the BGP Next Hop for X is B2, 2032 e) B2 has bound label L1 to X, and has distributed this 2033 binding to B1, 2035 f) the IGP next hop for the address of B2 is I1, 2037 g) the address of B2 is in B1's and I1's IGP routing tables 2038 as a host route, and 2040 h) I1 has bound label L2 to the address of B2, and 2041 distributed this binding to B1. 2043 Then before sending packet P to I1, B1 must create a label 2044 stack for P, then push on label L1, and then push on label L2. 2046 4. Suppose that BGP Border Router B1 receives a labeled Packet P, 2047 where the label on the top of the label stack corresponds to an 2048 address prefix, X, to which the route is a BGP route, and that 2049 conditions 3b, 3c, 3d, and 3e all hold. Then before sending 2050 packet P to I1, B1 must replace the label at the top of the 2051 label stack with L1, and then push on label L2. 
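Step 3 above can be sketched in a few lines of code. The dictionaries standing in for B1's BGP and IGP state are illustrative stand-ins; the names (X, B2, I1, L1, L2) follow the text:

```python
# Sketch of step 3: B1 builds a two-label stack before handing the
# packet to IGP next hop I1. The tables below are stand-ins for B1's
# BGP and IGP state, not a real routing API.

bgp_routes = {"X": {"next_hop": "B2", "label": "L1"}}   # learned from B2 via BGP
igp_routes = {"B2": {"next_hop": "I1", "label": "L2"}}  # host route via the IGP


def stack_for_unlabeled(prefix):
    """Return (IGP next hop, label stack) for an unlabeled packet
    whose longest match is `prefix` (top of stack is the last element)."""
    bgp = bgp_routes[prefix]
    igp = igp_routes[bgp["next_hop"]]
    stack = []
    stack.append(bgp["label"])  # push L1 first ...
    stack.append(igp["label"])  # ... then L2, which ends up on top
    return igp["next_hop"], stack
```

For a packet that arrives already labeled (step 4), B1 would instead replace the top label with L1 before pushing L2.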
2053 With these procedures, a given packet P follows a level 1 LSP all of 2054 whose members are BGP Border Routers, and between each pair of BGP 2055 Border Routers in the level 1 LSP, it follows a level 2 LSP. 2057 These procedures effectively create a Hop-by-Hop Routed LSP Tunnel 2058 between the BGP Border Routers. 2060 Since the BGP border routers are exchanging label bindings for 2061 address prefixes that are not even known to the IGP routing, the BGP 2062 routers should become explicit LDP peers with each other. 2064 3.7. Other Uses of Hop-by-Hop Routed LSP Tunnels 2066 The use of Hop-by-Hop Routed LSP Tunnels is not restricted to tunnels 2067 between BGP Next Hops. Any situation in which one might otherwise 2068 have used an encapsulation tunnel is one in which it is appropriate 2069 to use a Hop-by-Hop Routed LSP Tunnel. Instead of encapsulating the 2070 packet with a new header whose destination address is the address of 2071 the tunnel's receive endpoint, the label corresponding to the address 2072 prefix which is the longest match for the address of the tunnel's 2073 receive endpoint is pushed on the packet's label stack. The packet 2074 which is sent into the tunnel may or may not already be labeled. 2076 If the transmit endpoint of the tunnel wishes to put a labeled packet 2077 into the tunnel, it must first replace the label value at the top of 2078 the stack with a label value that was distributed to it by the 2079 tunnel's receive endpoint. Then it must push on the label which 2080 corresponds to the tunnel itself, as distributed to it by the next 2081 hop along the tunnel. To allow this, the tunnel endpoints should be 2082 explicit LDP peers. The label bindings they need to exchange are of 2083 no interest to the LSRs along the tunnel. 2085 3.8. MPLS and Multicast 2087 Multicast routing proceeds by constructing multicast trees. 
The tree 2088 along which a particular multicast packet must get forwarded depends 2089 in general on the packet's source address and its destination 2090 address. Whenever a particular LSR is a node in a particular 2091 multicast tree, it binds a label to that tree. It then distributes 2092 that binding to its parent on the multicast tree. (If the node in 2093 question is on a LAN, and has siblings on that LAN, it must also 2094 distribute the binding to its siblings. This allows the parent to 2095 use a single label value when multicasting to all children on the 2096 LAN.) 2098 When a multicast labeled packet arrives, the NHLFE corresponding to 2099 the label indicates the set of output interfaces for that packet, as 2100 well as the outgoing label. If the same label encoding technique is 2101 used on all the outgoing interfaces, the very same packet can be sent 2102 to all the children. 2104 4. LDP Procedures for Hop-by-Hop Routed Traffic 2106 4.1. The Procedures for Advertising and Using labels 2108 In this section, we consider only label bindings that are used for 2109 traffic to be label switched along its hop-by-hop routed path. In 2110 these cases, the label in question will correspond to an address 2111 prefix in the routing table. 2113 There are a number of different procedures that may be used to 2114 distribute label bindings. Some are executed by the downstream LSR, 2115 and some by the upstream LSR. 2117 The downstream LSR must perform: 2119 - The Distribution Procedure, and 2121 - the Withdrawal Procedure. 2123 The upstream LSR must perform: 2125 - The Request Procedure, and 2127 - the NotAvailable Procedure, and 2128 - the Release Procedure, and 2130 - the labelUse Procedure. 2132 The MPLS architecture supports several variants of each procedure. 2134 However, the MPLS architecture does not support all possible 2135 combinations of all possible variants. 
The set of supported 2136 combinations will be described in section 4.2, where the 2137 interoperability between different combinations will also be 2138 discussed. 2140 4.1.1. Downstream LSR: Distribution Procedure 2142 The Distribution Procedure is used by a downstream LSR to determine 2143 when it should distribute a label binding for a particular address 2144 prefix to its LDP peers. The architecture supports four different 2145 distribution procedures. 2147 Irrespective of the particular procedure that is used, if a label 2148 binding for a particular address prefix has been distributed by a 2149 downstream LSR Rd to an upstream LSR Ru, and if at any time the 2150 attributes (as defined above) of that binding change, then Rd must 2151 inform Ru of the new attributes. 2153 If an LSR is maintaining multiple routes to a particular address 2154 prefix, it is a local matter as to whether that LSR binds multiple 2155 labels to the address prefix (one per route), and hence distributes 2156 multiple bindings. 2158 4.1.1.1. PushUnconditional 2160 Let Rd be an LSR. Suppose that: 2162 1. X is an address prefix in Rd's routing table 2164 2. Ru is an LDP Peer of Rd with respect to X 2166 Whenever these conditions hold, Rd must bind a label to X and 2167 distribute that binding to Ru. It is the responsibility of Rd to 2168 keep track of the bindings which it has distributed to Ru, and to 2169 make sure that Ru always has these bindings. 2171 This procedure would be used by LSRs which are performing downstream 2172 label assignment in the Independent LSP Control Mode. 2174 4.1.1.2. PushConditional 2176 Let Rd be an LSR. Suppose that: 2178 1. X is an address prefix in Rd's routing table 2180 2. Ru is an LDP Peer of Rd with respect to X 2182 3. Rd is either an LSP Egress or an LSP Proxy Egress for X, or 2183 Rd's L3 next hop for X is Rn, where Rn is distinct from Ru, and 2184 Rn has bound a label to X and distributed that binding to Rd. 
2186 Then as soon as these conditions all hold, Rd should bind a label to 2187 X and distribute that binding to Ru. 2189 Whereas PushUnconditional causes the distribution of label bindings 2190 for all address prefixes in the routing table, PushConditional causes 2191 the distribution of label bindings only for those address prefixes 2192 for which one has received label bindings from one's LSP next hop, or 2193 for which one does not have an MPLS-capable L3 next hop. 2195 This procedure would be used by LSRs which are performing downstream 2196 label assignment in the Ordered LSP Control Mode. 2198 4.1.1.3. PulledUnconditional 2200 Let Rd be an LSR. Suppose that: 2202 1. X is an address prefix in Rd's routing table 2204 2. Ru is a label distribution peer of Rd with respect to X 2206 3. Ru has explicitly requested that Rd bind a label to X and 2207 distribute the binding to Ru 2209 Then Rd should bind a label to X and distribute that binding to Ru. 2210 Note that if X is not in Rd's routing table, or if Rd is not an LDP 2211 peer of Ru with respect to X, then Rd must inform Ru that it cannot 2212 provide a binding at this time. 2214 If Rd has already distributed a binding for address prefix X to Ru, 2215 and it receives a new request from Ru for a binding for address 2216 prefix X, it will bind a second label, and distribute the new binding 2217 to Ru. The first label binding remains in effect. 2219 This procedure would be used by LSRs performing downstream-on-demand 2220 label distribution using the Independent LSP Control Mode. 2222 4.1.1.4. PulledConditional 2224 Let Rd be an LSR. Suppose that: 2226 1. X is an address prefix in Rd's routing table 2228 2. Ru is a label distribution peer of Rd with respect to X 2230 3. Ru has explicitly requested that Rd bind a label to X and 2231 distribute the binding to Ru 2233 4. 
Rd is either an LSP Egress or an LSP Proxy Egress for X, or
   Rd's L3 next hop for X is Rn, where Rn is distinct from Ru, and
   Rn has bound a label to X and distributed that binding to Rd

Then as soon as these conditions all hold, Rd should bind a label to
X and distribute that binding to Ru. Note that if X is not in Rd's
routing table, or if Rd is not a label distribution peer of Ru with
respect to X, then Rd must inform Ru that it cannot provide a binding
at this time.

However, if the only condition that fails to hold is that Rn has not
yet provided a label to Rd, then Rd must defer any response to Ru
until such time as it has received a binding from Rn.

If Rd has distributed a label binding for address prefix X to Ru, and
at some later time, any attribute of the label binding changes, then
Rd must redistribute the label binding to Ru, with the new attribute.
It must do this even though Ru does not issue a new Request.

This procedure would be used by LSRs that are performing downstream-
on-demand label allocation in the Ordered LSP Control Mode.

In section 4.2, we will discuss how to choose the particular
procedure to be used at any given time, and how to ensure
interoperability among LSRs that choose different procedures.

4.1.2. Upstream LSR: Request Procedure

The Request Procedure is used by the upstream LSR for an address
prefix to determine when to explicitly request that the downstream
LSR bind a label to that prefix and distribute the binding. There
are three possible procedures that can be used.

4.1.2.1. RequestNever

Never make a request. This is useful if the downstream LSR uses the
PushConditional procedure or the PushUnconditional procedure, but is
not useful if the downstream LSR uses the PulledUnconditional
procedure or the PulledConditional procedure.
This procedure would be used by an LSR when downstream label
distribution and Liberal Label Retention Mode are being used.

4.1.2.2. RequestWhenNeeded

Make a request whenever the L3 next hop to the address prefix
changes, or when a new address prefix is learned, and one doesn't
already have a label binding from that next hop for the given address
prefix.

This procedure would be used by an LSR whenever Conservative Label
Retention Mode is being used.

4.1.2.3. RequestOnRequest

Issue a request whenever a request is received, in addition to
issuing a request when needed (as described in section 4.1.2.2). If
Ru is not capable of being an LSP ingress, it may issue a request
only when it receives a request from upstream.

If Rd receives such a request from Ru, for an address prefix for
which Rd has already distributed a label to Ru, Rd shall assign a new
(distinct) label, bind it to X, and distribute that binding.
(Whether Rd can distribute this binding to Ru immediately or not
depends on the Distribution Procedure being used.)

This procedure would be used by an LSR which is doing downstream-on-
demand label distribution, but is not doing label merging, e.g., an
ATM-LSR which is not capable of VC merge.

4.1.3. Upstream LSR: NotAvailable Procedure

If Ru and Rd are respectively upstream and downstream label
distribution peers for address prefix X, and Rd is Ru's L3 next hop
for X, and Ru requests a binding for X from Rd, but Rd replies that
it cannot provide a binding at this time, because it has no next hop
for X, then the NotAvailable procedure determines how Ru responds.

There are two possible procedures governing Ru's behavior:

4.1.3.1. RequestRetry

Ru should issue the request again at a later time. That is, the
requester is responsible for trying again later to obtain the needed
binding.
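The RequestRetry behavior can be sketched in a few lines; the
function names, the use of None to model a "no binding available"
reply, and the retry count are illustrative assumptions, not part of
the architecture, which mandates no particular retry policy:

```python
import time

def request_with_retry(request_fn, prefix, retries=3, delay=1.0):
    """Ask the downstream LSR for a binding for `prefix`; on a
    'not available' reply (modeled here as None), the REQUESTER is
    the one that tries again later."""
    for _ in range(retries):
        binding = request_fn(prefix)
        if binding is not None:
            return binding
        time.sleep(delay)    # wait before asking again
    return None              # give up after `retries` attempts

# A downstream peer that has no next hop for X until the third request:
answers = iter([None, None, ("label", 42)])
assert request_with_retry(lambda p: next(answers), "X", delay=0.0) == ("label", 42)
```

Under RequestNoRetry (section 4.1.3.2) the loop would be absent: the
upstream LSR would issue the request once and then simply wait for
the downstream LSR to push the binding when it becomes available.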
This procedure would be used when downstream-on-demand label
distribution is used.

4.1.3.2. RequestNoRetry

Ru should never reissue the request, instead assuming that Rd will
provide the binding automatically when it is available. This is
useful if Rd uses the PushUnconditional procedure or the
PushConditional procedure, i.e., if downstream label distribution is
used.

Note that if Rd replies that it cannot provide a binding to Ru,
because of some error condition, rather than because Rd has no next
hop, the behavior of Ru will be governed by the error recovery
conditions of the label distribution protocol, rather than by the
NotAvailable procedure.

4.1.4. Upstream LSR: Release Procedure

Suppose that Rd is an LSR which has bound a label to address prefix
X, and has distributed that binding to LSR Ru. If Rd does not happen
to be Ru's L3 next hop for address prefix X, or has ceased to be Ru's
L3 next hop for address prefix X, then Ru will not be using the
label. The Release Procedure determines how Ru acts in this case.
There are two possible procedures governing Ru's behavior:

4.1.4.1. ReleaseOnChange

Ru should release the binding, and inform Rd that it has done so.
This procedure would be used to implement Conservative Label
Retention Mode.

4.1.4.2. NoReleaseOnChange

Ru should maintain the binding, so that it can use it again
immediately if Rd later becomes Ru's L3 next hop for X. This
procedure would be used to implement Liberal Label Retention Mode.

4.1.5. Upstream LSR: labelUse Procedure

Suppose Ru is an LSR which has received label binding L for address
prefix X from LSR Rd, and Ru is upstream of Rd with respect to X, and
in fact Rd is Ru's L3 next hop for X.

Ru will make use of the binding if Rd is Ru's L3 next hop for X.
If, at the time the binding is received by Ru, Rd is NOT Ru's L3 next
hop for X, Ru does not make any use of the binding at that time. Ru
may however start using the binding at some later time, if Rd becomes
Ru's L3 next hop for X.

The labelUse Procedure determines just how Ru makes use of Rd's
binding.

There are two procedures which Ru may use:

4.1.5.1. UseImmediate

Ru may put the binding into use immediately. At any time when Ru has
a binding for X from Rd, and Rd is Ru's L3 next hop for X, Rd will
also be Ru's LSP next hop for X. This procedure is used when loop
detection is not in use.

4.1.5.2. UseIfLoopNotDetected

This procedure is the same as UseImmediate, unless Ru has detected a
loop in the LSP. If a loop has been detected, Ru will discard
packets that would otherwise have been labeled with L and sent to Rd.

This procedure is used when loop detection is in use.

This will continue until the next hop for X changes, or until the
loop is no longer detected.

4.1.6. Downstream LSR: Withdraw Procedure

In this case, there is only a single procedure.

When LSR Rd decides to break the binding between label L and address
prefix X, then this unbinding must be distributed to all LSRs to
which the binding was distributed.

It is required that the unbinding of L from X be distributed by Rd to
an LSR Ru before Rd distributes to Ru any new binding of L to any
other address prefix Y, where X != Y. If Ru were to learn of the new
binding of L to Y before it learned of the unbinding of L from X, and
if packets matching both X and Y were forwarded by Ru to Rd, then for
a period of time, Ru would label both packets matching X and packets
matching Y with label L.

The distribution and withdrawal of label bindings is done via a label
distribution protocol, or LDP. LDP is a two-party protocol.
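The ordering requirement above can be illustrated with a small sketch
of the upstream LSR's label table; the class and method names are
invented for illustration and are not taken from any LDP
specification:

```python
class RuBindingTable:
    """Upstream LSR Ru's view of the bindings learned from Rd."""
    def __init__(self):
        self.prefixes_for = {}   # label L -> set of prefixes mapped to L

    def on_mapping(self, label, prefix):
        # A second prefix bound to the same label does NOT implicitly
        # withdraw the first; one label may legitimately serve several
        # prefixes.
        self.prefixes_for.setdefault(label, set()).add(prefix)

    def on_withdraw(self, label, prefix):
        self.prefixes_for.get(label, set()).discard(prefix)

ru = RuBindingTable()
ru.on_mapping(17, "X")

# If Rd reuses label 17 for Y BEFORE withdrawing it from X, Ru
# transiently labels traffic matching both X and Y with label 17:
ru.on_mapping(17, "Y")
assert ru.prefixes_for[17] == {"X", "Y"}

# Only the (late) withdraw repairs the table; hence the rule that the
# unbinding must be distributed before the new binding.
ru.on_withdraw(17, "X")
assert ru.prefixes_for[17] == {"Y"}
```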
If LSR R1 has received label bindings from LSR R2 via an instance of
an LDP, and that instance of that protocol is closed by either end
(whether as a result of failure or as a matter of normal operation),
then all bindings learned over that instance of the protocol must be
considered to have been withdrawn.

As long as the relevant LDP connection remains open, label bindings
that are withdrawn must always be withdrawn explicitly. If a second
label is bound to an address prefix, the result is not to implicitly
withdraw the first label, but to bind both labels; this is needed to
support multi-path routing. If a second address prefix is bound to a
label, the result is not to implicitly withdraw the binding of that
label to the first address prefix, but to use that label for both
address prefixes.

4.2. MPLS Schemes: Supported Combinations of Procedures

Consider two LSRs, Ru and Rd, which are label distribution peers with
respect to some set of address prefixes, where Ru is the upstream
peer and Rd is the downstream peer.

The MPLS scheme which governs the interaction of Ru and Rd can be
described as a quintuple of procedures: <Distribution Procedure,
Request Procedure, NotAvailable Procedure, Release Procedure,
labelUse Procedure>. (Since there is only one Withdraw Procedure, it
need not be mentioned.) A "*" appearing in one of the positions is a
wild-card, meaning that any procedure in that category may be
present; an "N/A" appearing in a particular position indicates that
no procedure in that category is needed.

Only the MPLS schemes which are specified below are supported by the
MPLS Architecture. Other schemes may be added in the future, if a
need for them is shown.

4.2.1. Schemes for LSRs that Support Label Merging

If Ru and Rd are label distribution peers, and both support label
merging, one of the following schemes must be used:

1. <PushUnconditional, RequestNever, N/A, NoReleaseOnChange,
   UseImmediate>
This is downstream label distribution with independent control,
   liberal label retention mode, and no loop detection.

2. <PushUnconditional, RequestNever, N/A, NoReleaseOnChange,
   UseIfLoopNotDetected>

   This is downstream label distribution with independent control,
   liberal label retention, and loop detection.

3. <PushConditional, RequestWhenNeeded, RequestNoRetry,
   ReleaseOnChange, *>

   This is downstream label distribution with ordered control
   (from the egress) and conservative label retention mode. Loop
   detection is optional.

4. <PushConditional, RequestNever, N/A, NoReleaseOnChange, *>

   This is downstream label distribution with ordered control
   (from the egress) and liberal label retention mode. Loop
   detection is optional.

5. <PulledConditional, RequestWhenNeeded, RequestRetry,
   ReleaseOnChange, *>

   This is downstream-on-demand label distribution with ordered
   control (initiated by the ingress), conservative label
   retention mode, and optional loop detection.

6. <PulledUnconditional, RequestWhenNeeded, RequestNoRetry,
   ReleaseOnChange, UseImmediate>

   This is downstream-on-demand label distribution with
   independent control and conservative label retention mode,
   without loop detection.

7. <PulledUnconditional, RequestWhenNeeded, RequestNoRetry,
   ReleaseOnChange, UseIfLoopNotDetected>

   This is downstream-on-demand label distribution with
   independent control and conservative label retention mode, with
   loop detection.

4.2.2. Schemes for LSRs that do not Support Label Merging

Suppose that R1, R2, R3, and R4 are ATM switches which do not support
label merging, but are being used as LSRs. Suppose further that the
L3 hop-by-hop path for address prefix X is <R1, R2, R3, R4>, and that
packets destined for X can enter the network at any of these LSRs.
Since there is no multipoint-to-point capability, the LSPs must be
realized as point-to-point VCs, which means that there need to be
three such VCs for address prefix X: <R1, R2, R3, R4>, <R2, R3, R4>,
and <R3, R4>.

Therefore, if R1 and R2 are MPLS peers, and either is an LSR which is
implemented using conventional ATM switching hardware (i.e., no cell
interleave suppression), or is otherwise incapable of performing
label merging, the MPLS scheme in use between R1 and R2 must be one
of the following:

1. <PulledConditional, RequestOnRequest, RequestRetry,
   ReleaseOnChange, *>
This is downstream-on-demand label distribution with ordered
   control (initiated by the ingress), conservative label
   retention mode, and optional loop detection.

   The use of the RequestOnRequest procedure will cause R4 to
   distribute three labels for X to R3; R3 will distribute two
   labels for X to R2, and R2 will distribute one label for X to
   R1.

2. <PulledUnconditional, RequestOnRequest, RequestNoRetry,
   ReleaseOnChange, UseImmediate>

   This is downstream-on-demand label distribution with
   independent control and conservative label retention mode,
   without loop detection.

3. <PulledUnconditional, RequestOnRequest, RequestNoRetry,
   ReleaseOnChange, UseIfLoopNotDetected>

   This is downstream-on-demand label distribution with
   independent control and conservative label retention mode, with
   loop detection.

4.2.3. Interoperability Considerations

It is easy to see that certain quintuples do NOT yield viable MPLS
schemes. For example:

- <PulledUnconditional, RequestNever, *, *, *>
- <PulledConditional, RequestNever, *, *, *>

  In these MPLS schemes, the downstream LSR Rd distributes label
  bindings to upstream LSR Ru only upon request from Ru, but Ru
  never makes any such requests. Obviously, these schemes are not
  viable, since they will not result in the proper distribution of
  label bindings.

- <*, RequestNever, *, ReleaseOnChange, *>

  In these MPLS schemes, Ru releases bindings when it isn't using
  them, but it never asks for them again, even if it later has a
  need for them. These schemes thus do not ensure that label
  bindings get properly distributed.

In this section, we specify rules to prevent a pair of LDP peers from
adopting procedures which lead to infeasible MPLS schemes. These
rules require the exchange of information between LDP peers during
the initialization of the LDP connection between them.

1. Each must state whether it supports label merging.

2. If Rd does not support label merging, Rd must choose either the
   PulledUnconditional procedure or the PulledConditional
   procedure.
If Rd chooses PulledConditional, Ru is forced to
   use the RequestRetry procedure.

   That is, if the downstream LSR does not support label merging,
   its preferences take priority when the MPLS scheme is chosen.

3. If Ru does not support label merging, but Rd does, Ru must
   choose either the RequestRetry or RequestNoRetry procedure.
   This forces Rd to use the PulledConditional or
   PulledUnconditional procedure respectively.

   That is, if only one of the LSRs doesn't support label merging,
   its preferences take priority when the MPLS scheme is chosen.

4. If Ru and Rd both support label merging, then the choice
   between liberal and conservative label retention mode belongs
   to Ru. That is, Ru gets to choose either to use
   RequestWhenNeeded/ReleaseOnChange (conservative), or to use
   RequestNever/NoReleaseOnChange (liberal). However, the choice
   of "push" vs. "pull" and "conditional" vs. "unconditional"
   belongs to Rd. If Ru chooses liberal label retention mode, Rd
   can choose either PushUnconditional or PushConditional. If Ru
   chooses conservative label retention mode, Rd can choose
   PushConditional, PulledConditional, or PulledUnconditional.

These choices together determine the MPLS scheme in use.

5. Security Considerations

Some routers may implement security procedures which depend on the
network layer header being in a fixed place relative to the data link
layer header. The MPLS generic encapsulation inserts a shim between
the data link layer header and the network layer header. This may
cause such security procedures to fail.

An MPLS label has its meaning by virtue of an agreement between the
LSR that puts the label in the label stack (the "label writer"), and
the LSR that interprets that label (the "label reader").
If labeled packets are accepted from untrusted sources, or if a
particular incoming label is accepted from an LSR to which that label
has not been distributed, then packets may be routed in an
illegitimate manner.

6. Intellectual Property

The IETF has been notified of intellectual property rights claimed in
regard to some or all of the specification contained in this
document. For more information, consult the online list of claimed
rights.

7. Authors' Addresses

Eric C. Rosen
Cisco Systems, Inc.
250 Apollo Drive
Chelmsford, MA 01824
E-mail: erosen@cisco.com

Arun Viswanathan
Lucent Technologies
101 Crawford Corner Rd., #4D-537
Holmdel, NJ 07733
732-332-5163
E-mail: arunv@dnrc.bell-labs.com

Ross Callon
IronBridge Networks
55 Hayden Avenue
Lexington, MA 02173
+1-781-372-8117
E-mail: rcallon@ironbridgenetworks.com

8. References

[MPLS-ATM] "MPLS using ATM VC Switching", Davie, Doolan, Lawrence,
McGloghrie, Rekhter, Rosen, Swallow, work in progress, Internet
Draft, November 1998.

[MPLS-BGP] "Carrying Label Information in BGP-4", Rekhter, Rosen,
work in progress, Internet Draft, August 1998.

[MPLS-FRMWRK] "A Framework for Multiprotocol Label Switching",
Callon, Doolan, Feldman, Fredette, Swallow, Viswanathan, work in
progress, Internet Draft, November 1997.

[MPLS-FRMRLY] "Use of Label Switching on Frame Relay Networks",
Conta, Doolan, Malis, work in progress, Internet Draft, November
1998.

[MPLS-LDP] "LDP Specification", Andersson, Doolan, Feldman,
Fredette, Thomas, work in progress, Internet Draft.

[MPLS-RSVP] "Use of Label Switching with RSVP", Davie, Rekhter,
Rosen, Viswanathan, Srinivasan, work in progress, Internet Draft,
March 1998.
[MPLS-RSVP-TUNNELS] "Extensions to RSVP for LSP Tunnels", Awduche,
Berger, Gan, Li, Swallow, Srinivasan, work in progress, Internet
Draft, November 1998.

[MPLS-SHIM] "MPLS Label Stack Encodings", Rosen, Rekhter, Tappan,
Farinacci, Fedorkow, Li, Conta, work in progress, Internet Draft,
September 1998.

[MPLS-TRFENG] "Requirements for Traffic Engineering Over MPLS",
Awduche, Malcolm, Agogbua, O'Dell, McManus, work in progress,
Internet Draft