Multi-Protocol Label Switching WG                       Yoshihiro Ohba
Internet-Draft                                        Yasuhiro Katsube
Expiration Date: September 1998                                Toshiba

                                                            Eric Rosen
                                                         Cisco Systems

                                                           Paul Doolan
                                                     Ennovate Networks

                                                            March 1998

       MPLS Loop Prevention Mechanism Using LSP-id and Hop Count

Status of this Memo

   This document is an Internet-Draft.  Internet-Drafts are working
   documents of the Internet Engineering Task Force (IETF), its areas,
   and its working groups.  Note that other groups may also distribute
   working documents as Internet-Drafts.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."

   To learn the current status of any Internet-Draft, please check the
   "1id-abstracts.txt" listing contained in the Internet-Drafts Shadow
   Directories on ftp.is.co.za (Africa), nic.nordu.net (Europe),
   munnari.oz.au (Pacific Rim), ds.internic.net (US East Coast), or
   ftp.isi.edu (US West Coast).

Abstract

   This paper presents a simple mechanism, based on "LSP-ids", which
   can be used to prevent MPLS from setting up label switched paths
   (LSPs) which have loops.  The mechanism is compatible with, but does
   not require, VC merge.  The mechanism can be used with either the
   local control or the egress control methods of label distribution.
   The amount of information that must be passed in protocol messages
   is tightly bounded (i.e., no path-vector is used).  When a node
   needs to change its next hop, a distributed procedure is executed,
   but only nodes which are downstream of the change are involved.

Table of contents

   1     Introduction .............................................  2
   2     Loop prevention mechanism ................................  3
   2.1   Overview .................................................  3
   2.1.1 Assigning LSP-ids ........................................  3
   2.1.2 Preventing looping LSPs ..................................  5
   2.2   Messages .................................................  5
   2.3   Data structure ...........................................  6
   2.4   Avoiding inconsistency during LSP modification ...........  7
   2.5   Loop prevention algorithm ................................  7
   2.5.1 Algorithm for local control ..............................  8
   2.5.2 Algorithm for egress control ............................. 11
   2.6   Designated LSP-id management ............................. 11
   3     Examples ................................................. 12
   4     Enhancement of the algorithm ............................. 14
   4.1   Enhancement for improving protocol performance ........... 14
   4.1.1 How to deal with multiple SETUP messages .................
14 65 4.1.2 How to deal with multiple UPDATE messages ................ 15 66 4.1.3 Interoperation of different message processing policies .. 17 67 4.2 Enhancement in route change .............................. 17 68 5 Comparison with path-vector/diffusion method ............. 18 69 6 Security considerations .................................. 19 70 7 Intellectual property considerations ..................... 19 71 8 References ............................................... 19 72 9 Authors' Addresses ....................................... 20 74 1. Introduction 76 This paper presents a simple mechanism, based on "LSP-ids", which 77 can be used to prevent MPLS from setting up label switched paths 78 (LSPs) which have loops. The LSP-id mechanism is a generalization 79 of [1]. 81 The LSP-id mechanism works if some, all, or none of the LSRs in the 82 LSP support VC-merge. It can also be used with either the local 83 control or the egress control label distribution mechanism [2,3]. 85 The LSP-id mechanism is conservative. It will always prevent the 86 formation of looping LSPs. During routing transients, however, it 87 may delay the formation of some non-looping LSPs. 89 The loop-prevention information which must be carried in the 90 protocol messages of the LSP-id mechanism is of fixed size, 91 independent of the length of the LSP. Thus the LSP-id mechanism is 92 more scalable than alternatives which require that path-vectors be 93 carried. 95 To set up a new LSP after a routing change, the LSP-id mechanism 96 requires communication only between nodes which are downstream of 97 the point of change. There is no need to communicate with nodes 98 that are upstream of the point of change. Thus the LSP-id mechanism 99 is more robust than alternatives which require that a diffusion 100 computation be performed. 102 An LSP for a particular Forwarding Equivalence Class (FEC) [4] can 103 be thought of as a tree whose root is the egress LSR for that FEC. 104 With respect to a given node in the tree, we can speak of its 105 "downstream link" as the link which is closest to the root; the 106 node's other edges are "upstream links". 108 The basis of the LSP-id scheme is the assignment of an "LSP-id" to 109 each link in an LSP tree. The assignment is done in such a way 110 that: 112 o in a steady-state loop-free environment, if some node has two 113 upstream links in the same LSP tree, each link will be assigned 114 a distinct LSP-id; 116 o if there is a looping LSP, then there will be some node which 117 has two upstream links in the same LSP tree, where the same 118 LSP-id has been assigned to each link. 120 Loops can then be prevented as follows. If a given node has an 121 upstream link in a particular LSP tree, with a particular LSP-id, it 122 must make sure that it does not acquire any other upstream link in 123 that LSP tree with the same LSP-id. 125 In this paper, we assume unicast LSPs. The loop prevention for 126 multicast LSPs is for further study. 128 2. Loop prevention mechanism 130 In the remainder of the paper, we assume the "downstream-on-demand" 131 is used as the label allocation method between neighboring nodes, 132 although the LSP-id mechanism is applicable to the upstream 133 allocation method. 135 2.1. Overview 137 2.1.1. Assigning LSP-ids 139 An LSP for a particular FEC can be thought of as a tree whose root 140 is the egress LSR for that FEC. 
With respect to a given node in the 141 tree, we can speak of its "downstream link" as the link which is 142 closest to the root; the node's other edges are "upstream links". 144 The term "link", as used here, refers to a particular relationship 145 on the tree, rather than to a particular physical relationship. In 146 the remainder of this section, when we will speak of the upstream 147 and downstream links of a node, we are speaking with reference to a 148 single LSP tree for a single FEC. 150 A leaf node is a node which has no upstream links. 152 Every node is responsible for assigning an LSP-id and an "LSP-id hop 153 count" for each of its downstream links. 155 Each leaf node assigns an LSP-id to its downstream link as follows. 156 The LSP-id consists of two parts: 158 o leaf-address: an L3 address of the leaf node itself, and 160 o leaf-local-id: a local identifier assigned by the leaf node. 162 The leaf-local-id must be chosen so that if the node in question is 163 a leaf on more than one tree for a given FEC, it uses different 164 LSP-ids for its downstream links on the respective trees. (If there 165 is no "load-splitting", i.e., there is only one LSP tree per FEC, 166 the identifier is not really needed.) 168 A leaf node also assigns an LSP-id hop count of zero to its 169 downstream link. 171 When a node assigns an LSP-id to its downstream link, it must inform 172 its downstream neighbor of the LSP-id and the LSP-id hop count 173 incremented by one. From the point of view of the downstream 174 neighbor, this LSP-id is now assigned to a particular one of its 175 upstream links. 177 A node which is not a leaf node assigns an LSP-id to its downstream 178 link as follows. It chooses, from among the LSP-ids assigned to its 179 upstream links, the one which has the largest LSP-id hop count. (If 180 the largest LSP-id hop count is shared by two or more LSP-ids, a 181 deterministic tie breaker is used to select a single LSP-id: the 182 numerically largest value chosen.) Once the LSP-id for the 183 downstream link is chosen, the associated hop count is incremented 184 by one, and the downstream neighbor is informed of the LSP-id and 185 LSP-id hop count. 187 If a particular node acquires a new upstream link, or loses an 188 existing upstream link, or if the LSP-id or LSP-id hop count of an 189 existing upstream link changes, the node needs to reevaluate its 190 choice of LSP-id and LSP-id hop count for its downstream link. If 191 it changes the LSP-id or LSP-id hop count for its downstream link, 192 it must so inform the downstream neighbor. 194 If a node acquires a new downstream link, it must assign an LSP-id 195 and LSP-id hop count, and so inform its new downstream neighbor. 197 When this procedure has been carried out, every link in an LSP tree 198 will have been assigned an LSP-id. The LSP-id for a particular link 199 will include an L3 address for a leaf node which is upstream of that 200 link. In particular, the LSP-id of a particular link will include 201 the L3 address of the leaf node which is furthest upstream of it 202 (modulo the tie breaker). 204 A routing change causes at least one node to change its next hop, 205 i.e., to acquire a new downstream link. This triggers a sequence of 206 LSP-id updates, proceeding downstream from the point of change. 207 However, change will not necessarily cause a change in the LSP-id of 208 every downstream link in the tree. 
The LSP-id updates only need to 209 percolate downstream until they reach a node which has a downstream 210 link whose LSP-id and LSP-id hop count do not have to change. 212 2.1.2. Preventing looping LSPs 214 Suppose that node R1's next hop for some FEC changes from what it 215 was previously to node R2. That is, R1 would like to acquire a new 216 downstream link. R1 assigns an LSP-id and LSP-id hop count to that 217 link, according to the above procedures. R1 then informs R2 of the 218 LSP-id and LSP-id hop count. This begins an LSP-id updating process 219 which must percolate downstream. 221 If changing R1's next hop to R2 results in a loop, then, when the 222 LSP-id updating process is completed, there will be some node in the 223 tree which has two upstream links with the same LSP-id. 225 Loops can therefore be prevented as follows: 227 o R2 should not distribute a label for the tree to R1 unless and 228 until R2 has determined that the LSP-id updating process has 229 completed successfully. This requires a procedure in which 230 LSP-id updates percolate downstream, and acknowledgments to the 231 LSP-id updates percolate back upstream. 233 o If, at any point during the process, some node sees that it has 234 two upstream links with the same LSP-id, it should immediately 235 terminate the LSP-id updating process with an error. This 236 requires that there be negative acknowledgments as well as 237 positive acknowledgments. 239 2.2. Messages 241 In the LSP-id algorithm, the following messages are used: SETUP, 242 SETUP_ACK, SETUP_NACK, UPDATE, UPDATE_ACK, UPDATE_NACK, RELEASE, 243 RELEASE_ACK, RELEASE_NACK, and TRIGGER. (Note that the message 244 names might not yet be perfectly aligned with the work of the LDP 245 team.) 247 Each message contains (FEC-id,LSP-id,hop-count). FEC-id is the 248 identifier of the forwarding equivalence class carried by the LSP. 249 LSP-id is an identifier of the LSP, and composed of a leaf-address 250 which is an L3 address of a leaf node and a leaf-local-id which is 251 used for identifying different LSPs with the same leaf-address and 252 the same FEC-id. Multi-path LSPs may have different leaf-local-id's 253 for the same leaf-address and FEC-id. Hop-count is the number of 254 hops from the node represented by the LSP-id to the node that 255 receives the message. 257 A SETUP message is transmitted to the downstream neighbor in order 258 to request a label. A RELEASE message is transmitted to the 259 downstream neighbor in order to release a label. An UPDATE message 260 is transmitted to the downstream neighbor in order to update an 261 LSP-id and a hop-count. 263 SETUP, UPDATE, and RELEASE message are acknowledged, either 264 positively or negatively. The SETUP_ACK, SETUP_NACK, UPDATE_ACK, 265 UPDATE_NACK, RELEASE_ACK, and RELEASE_NACK messages exist for this 266 purpose. (In some cases, RELEASE messages do not need to be 267 acknowledged. See Section 4.1.2.) However, if the receipt of a 268 SETUP, UPDATE, or RELEASE message causes an UPDATE message to be 269 sent downstream, the acknowledgment (positive or negative) of the 270 former message is deferred until an acknowledgment of the latter 271 message is received. 273 Note that there are two types of UPDATE messages; UPDATE messages 274 initiated by SETUP messages (referred to as UPDATE_ADD messages) and 275 UPDATE messages initiated by RELEASE messages (referred to as 276 UPDATE_DEL messages). 278 2.3. 
Data structure 280 A leaf node originates a SETUP message with 281 (FEC-id,LSP-id,hop-count) in which the hop-count is set to 1, the 282 leaf-address of the LSP-id is set to either the L3 address of its 283 output interface or its router id, and the leaf-local-id of the 284 LSP-id is set to a value which is unique in the same leaf-address 285 and the FEC-id. 287 For each upstream link of an LSP, a pair of (LSP-id,hop-count), 288 which is contained in the SETUP or the UPDATE message is maintained 289 by each node. 291 A downstream LSP-id is also maintained for a downstream link. The 292 upstream LSP-id which gives the maximum hop-count among all the 293 upstream LSP-id's is selected as the downstream LSP-id. If a number 294 of upstream LSP-id's all share the largest hop-count, the largest 295 LSP-id among them is selected as the downstream LSP-id. The 296 downstream LSP-id and hop-count are also referred to as the 297 designated LSP-id and the designated hop-count, respectively. 299 Any change in the upstream LSP-ids and/or the upstream LSP-id hop 300 counts may cause a change in the designated LSP-id. 302 An example of (LSP-id,hop-count) pairs assigned to each link of an 303 LSP is shown in Fig. 1, where an LSP-id is represented by 304 ".". 306 (LSP-id,hop-count) 307 =(A.1,1) (B.3,4) (B.3,5) 308 A-------->R1-------->R2------->R3 309 ^ ^ 310 | (B.3,3) | (C.1,2) 311 (B.3,1) (B.3,2) | | 312 B------->R4-------->R5 R6 313 ^ 314 | (C.1,1) 315 | 316 C 318 Fig. 1. Example of LSP-id's in an LSP 320 2.4. Avoiding inconsistency during LSP modification 322 In order to ensure that all nodes on the LSP have consistent 323 information about the LSP-id and hop-count, events which modify the 324 LSP must be carefully treated. 326 A simple but conservative rule for avoiding inconsistency is to 327 ensure that at any given node, there is at most one outstanding 328 message per LSP at any one time. 330 In other words, when a node receives a SETUP, UPDATE or RELEASE 331 message about a particular LSP from one of its upstream neighbors, 332 it processes the message on the condition that (1) there is no 333 outstanding SETUP, UPDATE or RELEASE message for the same LSP which 334 is awaiting acknowledgment, or (2) processing the message does not 335 cause a change in the designated LSP-id or hop count. 337 If neither condition (1) nor (2) is satisfied, it blocks the message 338 (e.g., returns a {SETUP, UPDATE, RELEASE}_NACK message to the sender 339 or puts the message into a processing queue). We call this rule 340 "exclusive message processing". 342 Note that with the exclusive message processing, in the case where a 343 SETUP_NACK or a RELEASE_NACK message is returned after blocking, the 344 receiver of the NACK should retry to send the NACK'ed message. 346 It is however possible to allow multiple outstanding messages, as 347 long as the order of processing is preserved as the messages 348 percolate downstream. This is discussed in Section 4. This allows 349 the LSP modifications to complete more quickly, reducing the time it 350 takes to adapt to a routing change. There is, however, some cost in 351 simplicity. 353 2.5. Loop prevention algorithm 355 For simplicity, the algorithms for the exclusive message processing 356 rule are shown here. See Section 4.1 for the multiple message 357 processing rule. 359 2.5.1. 
Algorithm for local control 361 o When a leaf node initiates creation of an LSP for a FEC, it 362 sends a SETUP message that contains the FEC-id for the FEC, the 363 downstream LSP-id, and a hop-count which is set to 1. The SETUP 364 message serves as the request for a label. 366 o When a node receives a SETUP message with regard to a FEC from 367 an upstream neighbor, one of the following actions is taken: 369 + If the received hop-count reaches the allowable maximum 370 value, it returns a SETUP_NACK message. 372 + Otherwise, if the received LSP-id has already been 373 registered (see below for the meaning of "registered"), for 374 a different upstream link which belongs to one of the LSPs 375 having the same FEC-id as the received one, or if the node 376 is the originator of the LSP-id, then loop freedom cannot be 377 guaranteed, and a SETUP_NACK is returned. 379 + Otherwise, if it is the egress node, it registers the 380 received LSP-id and hop-count for the upstream link, assigns 381 a label to that link, and immediately returns a SETUP_ACK 382 message (thereby distributing the label to its upstream 383 neighbor). 385 + Otherwise, if accepting the SETUP message does not change 386 the designated LSP-id (see Section 2.6), it registers the 387 received LSP-id and hop-count for the upstream link, assigns 388 a label to that link, and returns a SETUP_ACK message 389 (thereby distributing the label to its upstream neighbor). 391 + Otherwise, if there is an outstanding message for the LSP, 392 it blocks the message. 394 + Otherwise, if there is a downstream link for the LSP, it 395 sends an UPDATE_ADD message which contains the received 396 FEC-id, LSP-id, and hop-count incremented by one, to the 397 downstream neighbor. it also registers the received LSP-id 398 for the upstream link. 400 + Otherwise, it sends a SETUP message which contains the 401 received FEC-id, LSP-id, and hop-count incremented by one, 402 to the downstream neighbor. It registers the received 403 LSP-id for the upstream link. If the optimistic mode [5] is 404 adopted, a SETUP_ACK message is returned. If the optimistic 405 mode is adopted, loops are not prevented, but any loop which 406 may be formed will soon be detected by the receipt of a 407 negative acknowledgment. 409 o When a node receives an UPDATE_{ADD,DEL} message with regard to 410 an LSP, one of the following actions is taken: 412 + If there is no corresponding upstream link or the received 413 hop-count reaches the allowable maximum value, it returns an 414 UPDATE_{ADD,DEL}_NACK message. 416 + Otherwise, if the received LSP-id has already been 417 registered for a different upstream link which belongs to 418 one of the LSPs having the same FEC-id as the received one, 419 loop freedom cannot be guaranteed, so an UPDATE_ADD_NACK is 420 returned. This case won't happen for UPDATE_DEL messages. 422 + Otherwise, if accepting the message changes neither the 423 designated LSP-id nor the designated hop-count (see Section 424 2.6), it registers the received LSP-id and hop-count for the 425 corresponding upstream link, and immediately returns an 426 UPDATE_{ADD,DEL}_ACK message. 428 + Otherwise, if there is an outstanding message for the LSP, 429 it blocks the message. 431 + Otherwise, if there is a downstream link allocated for the 432 LSP, it forwards the UPDATE_{ADD,DEL} message, with the 433 candidate for the designated LSP-id and the candidate for 434 the designated hop-count incremented by one, to the 435 downstream neighbor. 
     If the received message is UPDATE_DEL, the node also replaces
     the LSP-id and hop-count for the corresponding upstream link
     with the received ones and returns an UPDATE_DEL_ACK message to
     the upstream neighbor.

     + Otherwise, it replaces the LSP-id and hop-count for the
       corresponding upstream link with the received ones and
       immediately returns an UPDATE_{ADD,DEL}_ACK message.  If it is
       not the egress node for the LSP, it must also send a SETUP
       message to the downstream node.

   o When a node receives a RELEASE message with regard to an LSP,
     one of the following actions is taken:

     + If there is no upstream link for the LSP other than the one to
       be released, it removes the upstream link and the associated
       (LSP-id,hop-count) pair, returns a RELEASE_ACK message, and
       sends a RELEASE message to the downstream neighbor.

     + Otherwise, if accepting the message does not change the
       designated LSP-id (see Section 2.6), it removes the upstream
       link and the associated (LSP-id,hop-count) pair, and returns a
       RELEASE_ACK message.

     + Otherwise, if there is an outstanding message for the same
       LSP, it blocks the message.

     + Otherwise, it removes the upstream link and the associated
       (LSP-id,hop-count) pair and returns a RELEASE_ACK message.  It
       also sends an UPDATE_DEL message, with the candidate for the
       designated LSP-id and the candidate for the designated
       hop-count incremented by one, to the downstream neighbor.

   o When the route with regard to an LSP changes at a node, the node
     sends a RELEASE message to the old downstream neighbor and a
     SETUP message to the new downstream neighbor.

   o When a node receives a SETUP_ACK message with regard to an LSP,
     it returns a SETUP_ACK or an UPDATE_ADD_ACK message to the
     upstream neighbor if needed.  The designated LSP-id and
     hop-count are set to the ACK'ed ones.  When an UPDATE_ADD_ACK
     message is returned to the upstream neighbor, the LSP-id and
     hop-count being registered for the corresponding upstream link
     are replaced with the ACK'ed ones.  The node is now able to
     start label switching.  Note that the received SETUP_ACK message
     contains the label which was assigned by the downstream
     neighbor, and that any SETUP_ACK message which is sent upstream
     must also contain a newly assigned label.

   o When a node receives a SETUP_NACK message with regard to an LSP,
     instead of starting label switching, it returns a SETUP_ACK, a
     SETUP_NACK or an UPDATE_ADD_ACK message to the upstream
     neighbor, depending on the operation policy.  A halfway LSP is
     formed as a result when a SETUP_ACK or an UPDATE_ADD_ACK message
     is returned.  When a SETUP_ACK or an UPDATE_ADD_ACK message is
     returned to the upstream neighbor, the node may retry sending a
     SETUP message to its downstream neighbor.  When a SETUP_NACK
     message is returned to the upstream neighbor, it deletes the
     corresponding upstream link.  When an UPDATE_ADD_ACK message is
     returned to the upstream neighbor, the LSP-id and hop-count
     being registered for the corresponding upstream link are
     replaced with the ACK'ed ones.

   o When a node receives an UPDATE_{ADD,DEL}_ACK message with regard
     to an LSP, it replaces the designated LSP-id and hop-count with
     the ACK'ed ones.  If the received message is an UPDATE_ADD_ACK,
     it returns a SETUP_ACK or an UPDATE_ADD_ACK message.
     When an UPDATE_ADD_ACK is returned to the upstream neighbor, the
     LSP-id and hop-count being registered for the corresponding
     upstream link are replaced with the ACK'ed ones.

   o When a node receives an UPDATE_ADD_NACK message with regard to
     an LSP, it returns a SETUP_NACK or an UPDATE_ADD_NACK message to
     the upstream neighbor, without updating the designated LSP-id
     and hop-count.  When a SETUP_NACK is returned to the upstream
     neighbor, the corresponding upstream link is deleted.

   o When a node receives an UPDATE_DEL_NACK message with regard to
     an LSP, it retries sending the UPDATE_DEL message to its
     downstream neighbor.

   o When a node receives a RELEASE_ACK message with regard to an
     LSP, it removes the downstream link.

   o When a node receives a RELEASE_NACK message (due to, e.g., the
     existence of an outstanding message) with regard to an LSP, it
     retries sending a RELEASE message.

2.5.2. Algorithm for egress control

   o An egress node, when it first comes up, issues a TRIGGER message
     to all neighbors.  Also, any node which creates a downstream
     link for a particular LSP sends a TRIGGER message to each
     upstream neighbor for which there is no upstream link.

   o Any node which gets a TRIGGER message (specifying a particular
     FEC) from its next hop for that FEC will send a SETUP message,
     which implicitly acknowledges the TRIGGER message.

   o When a node receives a SETUP message, if it neither has a
     corresponding downstream link nor is the egress node, it returns
     a SETUP_NACK message.  Otherwise, if it is the egress node or
     the current selection of the designated LSP-id and hop-count
     does not change, it returns a SETUP_ACK message.  Otherwise, it
     sends an UPDATE message to its own next hop.

   o For other events, nodes follow the same procedure as described
     in Section 2.5.1.

   Note that a node which has no upstream link for the LSP yet
   behaves as a leaf.  In the case where the tree is being initially
   built up (e.g., the egress node has just come up), each node in
   turn will behave as a leaf for a short period of time.

2.6. Designated LSP-id management

   In the algorithm described in Section 2.5, a change in the
   designated LSP-id and hop-count is needed in the following cases;
   an informal sketch of the resulting selection rule follows the
   list.

   (a) When a node receives a SETUP message, if the designated LSP-id
       does not exist, the designated LSP-id and hop-count need to be
       set to the received ones.

   (b) When a node receives a SETUP or an UPDATE_ADD message, if the
       LSP-id being registered for the received upstream link is not
       the current designated LSP-id, then if the received hop-count
       is greater than the designated one, or if the received
       hop-count is equal to the designated one and the received
       LSP-id is greater than the designated one, the designated
       LSP-id and hop-count need to be updated to the received ones.

   (c) When a node receives an UPDATE_{ADD,DEL} message, if the
       LSP-id being registered for the received upstream link is the
       current designated LSP-id, the designated LSP-id and hop-count
       need to be updated.  In this case, the node chooses the LSP-id
       which has the largest LSP-id hop count from among the upstream
       LSP-ids (the received LSP-id and hop-count are used for the
       received upstream link) as the candidate for the designated
       LSP-id.  If the largest LSP-id hop count is shared by two or
       more LSP-ids, a deterministic tie breaker is used to select a
       single LSP-id: the numerically largest value is chosen.

   (d) When a node receives a RELEASE message, if the LSP-id being
       registered for the received upstream link is the current
       designated LSP-id, the designated LSP-id and hop-count need to
       be updated.  In this case, the node chooses the LSP-id which
       has the largest LSP-id hop count from among the upstream
       LSP-ids (excluding the one for the received upstream link) as
       the candidate for the designated LSP-id.  If the largest
       LSP-id hop count is shared by two or more LSP-ids, a
       deterministic tie breaker is used to select a single LSP-id:
       the numerically largest value is chosen.
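   As an informal illustration of the rules above, the following
   Python sketch (not part of the protocol specification; the names
   LspId, select_designated, and check_setup, and the assumed maximum
   hop count of 255, are invented for this example) computes the
   designated (LSP-id, hop-count) from the registered upstream links
   and applies the loop-prevention test of Section 2.5.1.  LSP-ids
   are modeled simply as comparable tuples.

      # Illustrative sketch only: the data layout and function names
      # are assumptions made for this example, not part of the draft.

      from typing import Dict, Optional, Set, Tuple

      # An LSP-id is (leaf-address, leaf-local-id); ties on hop count
      # are broken by taking the numerically largest LSP-id.
      LspId = Tuple[int, int]

      def select_designated(upstream: Dict[str, Tuple[LspId, int]]
                            ) -> Optional[Tuple[LspId, int]]:
          """Designated (LSP-id, hop-count): largest hop count first,
          largest LSP-id as the tie breaker (Sections 2.3 and 2.6)."""
          if not upstream:
              return None
          return max(upstream.values(), key=lambda v: (v[1], v[0]))

      def check_setup(upstream: Dict[str, Tuple[LspId, int]],
                      link: str, lsp_id: LspId, hops: int,
                      own_lsp_ids: Set[LspId],
                      max_hops: int = 255) -> str:
          """Acceptance test applied to a SETUP received on 'link'."""
          if hops >= max_hops:
              return "SETUP_NACK"      # hop count reached the maximum
          if lsp_id in own_lsp_ids:
              return "SETUP_NACK"      # this node originated the LSP-id
          for other, (reg_id, _) in upstream.items():
              if other != link and reg_id == lsp_id:
                  return "SETUP_NACK"  # same LSP-id on another upstream link
          return "ACCEPT"              # register (lsp_id, hops) for 'link'

   In such a sketch, a node would call check_setup before registering
   a received (LSP-id, hop-count) pair, and select_designated whenever
   its set of upstream links or their registered pairs changes.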
3. Examples

   Consider the MPLS network shown in Fig. 2, which employs local
   control.  Assume that an L3 loop exists for FEC-id=F.  Now leaf
   nodes R1 and R6 initiate creation of an LSP.  In this example, a
   router id is used as the leaf-address, and leaf-local-id=0 is
   used; hence the leaf-local-id is omitted.  It is assumed that all
   nodes are VC-merge capable, operate in the conservative mode, and
   allow creation of halfway LSPs.

               +<---------- R5 <---------+
               |                         |
               |                         |
               v                         |
   R1 -------> R2 --------> R3 --------> R4
                            ^
                            |
                            |
   R6 -------> R7 --------> R8

                  Fig. 2. Example MPLS network

   An example in which the loop is detected before the hop count
   reaches the predetermined threshold is described below.

   Assume that R1 sends a SETUP message before R6 does, and that the
   maximum hop count is set to eight.  First, we show an example of
   how to create an LSP.

   The SETUP message from R1 contains (FEC-id,LSP-id,hop-count) =
   (F,R1,1).  Then R2, R3, R4, and R5 send SETUP messages with
   (FEC-id,LSP-id,hop-count) = (F,R1,2), (F,R1,3), (F,R1,4),
   (F,R1,5), respectively.  These nodes register the received
   (LSP-id,hop-count).

   When R2 receives the SETUP message from R5, it finds that the
   leaf-address R1 is already registered by the SETUP message sent by
   R1, and thus returns a SETUP_NACK message to R5.  R5 then returns
   a SETUP_ACK message to R4.  Similarly, SETUP_ACK messages are
   returned from R4 to R3, R3 to R2, and finally R2 to R1.  Then a
   halfway LSP R1->R2->R3->R4->R5 is created without forming a loop.
   R5 also sets a retry timer to send a new SETUP message to R2 after
   a certain period of time.  Note that the designated LSP-ids for
   R1, R2, R3, and R4 are all set to R1.

   Then R6 starts to send a SETUP message to R7.  The SETUP message
   from R6 contains (FEC-id,LSP-id,hop-count) = (F,R6,1).  Then R7
   and R8 send SETUP messages with (FEC-id,LSP-id,hop-count) =
   (F,R6,2) and (F,R6,3) to R8 and R3, respectively.  These nodes
   also register the received (LSP-id,hop-count).

   When R3 receives the SETUP message from R8, the designated LSP-id
   changes since the received hop-count (=3) is larger than the
   already registered one (=2).  Then R3 sends an UPDATE_ADD message
   to R4 with (FEC-id,LSP-id,hop-count) = (F,R6,4).  When R4 receives
   the UPDATE_ADD message from R3, it also sends an UPDATE_ADD
   message to R5 with (FEC-id,LSP-id,hop-count) = (F,R6,5), since the
   designated LSP-id also changes.

   When R5 receives the UPDATE_ADD message from R4, since it is the
   current egress node, it returns an UPDATE_ADD_ACK message to R4.
   R4 then returns an UPDATE_ADD_ACK message to R3.  R3 returns a
   SETUP_ACK message to R8.  Similarly, a SETUP_ACK message is
   returned from R8 to R7, and finally from R7 to R6.  Then a halfway
   merged LSP ((R1->R2),(R6->R7->R8))->R3->R4->R5 is created.  Note
   that the designated LSP-ids for R1 and R2 are set to R1, and for
   the other nodes, to R6.
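   To make the state at R3 concrete, the short sketch below
   (illustrative only; the dictionary layout and the use of router
   ids as leaf addresses are assumptions for this example) reproduces
   the designated LSP-id computation at R3 after the merge.

      # R3's registered upstream (LSP-id, hop-count) pairs at this point.
      upstream_at_r3 = {
          "from R2": ("R1", 2),   # from R2's SETUP (F,R1,2)
          "from R8": ("R6", 3),   # from R8's SETUP (F,R6,3)
      }

      # Designated pair: largest hop count wins, largest LSP-id breaks ties.
      lsp_id, hops = max(upstream_at_r3.values(), key=lambda v: (v[1], v[0]))
      print(lsp_id, hops + 1)   # prints: R6 4, i.e. R3 advertises (F,R6,4)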
   R5 also resends a SETUP message to R2 with
   (FEC-id,LSP-id,hop-count) = (F,R6,6).  When R2 receives the SETUP
   message from R5, since the received LSP-id is not registered by
   any other upstream neighbor and the received hop-count is less
   than the threshold, it registers the received LSP-id and hop-count
   and sends an UPDATE_ADD message to R3 with
   (FEC-id,LSP-id,hop-count) = (F,R6,7).

   When R3 receives the UPDATE_ADD message from R2, it finds that the
   leaf-address R6 is already registered for the upstream link from
   R8.  It therefore considers that a loop is about to be formed and
   returns an UPDATE_ADD_NACK message to R2.  R2 returns a SETUP_NACK
   message to R5.  R5 does not return any message to R4 since there
   is no outstanding message.  In this case the halfway merged LSP
   remains in place.

   Conversely, in the case that R6 sends a SETUP message before R1
   does, the SETUP message originated by R1 does not trigger the
   transmission of an UPDATE_ADD message at R3, since the designated
   LSP-id at R3 is not changed by setting up the upstream link from
   R2.

   Next, an example of how to release an LSP is described using
   Fig. 2.  Assume that a halfway merged LSP
   ((R1->R2),(R6->R7->R8))->R3->R4->R5 already exists.

   If R6 sends a RELEASE message before R1 does, the RELEASE message
   is forwarded until it reaches R3.  R3 then issues an UPDATE_DEL
   message since the designated LSP-id for R3 changes from R6 to R1.
   The UPDATE_DEL message contains (FEC-id,LSP-id,hop-count) =
   (F,R1,3).  R4 returns an UPDATE_DEL_ACK message to R3 and sends an
   UPDATE_DEL message with (FEC-id,LSP-id,hop-count) = (F,R1,4) to
   R5.  In this case, the LSP is changed to R1->R2->R3->R4->R5.

   Conversely, if R1 sends a RELEASE message before R6 does, the
   RELEASE message originated by R1 does not trigger the transmission
   of an UPDATE_DEL message at R3, since the designated LSP-id for R3
   is not changed by releasing the upstream link between R2 and R3.
   In this case, the LSP is changed to R6->R7->R8->R3->R4->R5.

4. Enhancement of the algorithm

   The basic algorithm can be enhanced by the following two methods.

4.1. Enhancement for improving protocol performance

   At some cost in simplicity, the time needed to adapt to a routing
   change can be decreased by modifying the algorithms described in
   Section 2.5.  Instead of adhering to the exclusive processing
   rule, we can allow a node to have multiple outstanding messages
   about the same LSP, provided that certain conditions are met.

   Since a RELEASE message is sent to the downstream neighbor only
   when all the upstream links have been released, we consider only
   the handling of multiple SETUP and UPDATE messages.  In addition,
   for the egress control, we need only consider the case of multiple
   UPDATE messages since SETUP messages are transmitted from
   downstream to upstream.

4.1.1. How to deal with multiple SETUP messages

   When a node receives a SETUP message, it can forward the SETUP
   message to the downstream neighbor even if there is an outstanding
   SETUP message.
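   A minimal sketch of this relaxed handling is shown below
   (illustrative only; the bookkeeping structure and names are
   assumptions made for this example).  A node simply records the
   SETUP as outstanding and forwards it at once; the acknowledgments
   are then applied to the designated (LSP-id, hop-count) using the
   comparison rule described next.

      # Assumed bookkeeping for multiple outstanding SETUP messages.
      outstanding_setups = {}  # fec_id -> list of (upstream_link, lsp_id, hops)

      def on_setup(fec_id, upstream_link, lsp_id, hops, send_downstream):
          """Forward a SETUP without waiting for earlier SETUPs of the
          same FEC to be acknowledged."""
          outstanding_setups.setdefault(fec_id, []).append(
              (upstream_link, lsp_id, hops))
          send_downstream(fec_id, lsp_id, hops + 1)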
719 Each time a node receives a SETUP_ACK message for one of the 720 outstanding SETUP message(s) with regard to a FEC, the designated 721 LSP-id is replaced with the ACK'ed LSP-id if: 723 o the ACK'ed hop-count is greater than the designated hop-count, 724 or 726 o the ACK'ed hop-count is equal to the designated hop-count and 727 the ACK'ed LSP-id is greater than the designated LSP-id. 729 If the designated LSP-id changes, the designated hop-count is also 730 updated. 732 The final result on the designated LSP-id and hop-count after 733 receiving all acknowledgments is independent of the message 734 processing order. 736 Note that during the multiple outstanding message processing, a node 737 may receive a SETUP message from upstream neighbor after returning a 738 SETUP_ACK message for other SETUP message. In this case, if the 739 received hop-count is greater than the one which is already 740 registered for the upstream link, or if the received hop-count is 741 equal to the registered one and the received LSP-id is greater than 742 the registered one, the registered values need to be replaced with 743 the received ones. If the replacement does not need to change the 744 designated LSP-id and hop-count, it executes the replacement and 745 immediately returns a SETUP_ACK message, otherwise, it sends an 746 UPDATE_ADD message to the downstream neighbor. If there is no need 747 to replace the registered values, it immediately returns a SETUP_ACK 748 message. 750 4.1.2. How to deal with multiple UPDATE messages 752 (a) Multiple UPDATE_ADD messages 754 UPDATE_ADD messages can be processed concurrently in the same manner 755 as multiple SETUP messages regardless of the sending order of the 756 UPDATE_ADD messages and receiving order of the UPDATE_ADD_ACK or 757 UPDATE_ADD_NACK messages. 759 (b) Multiple UPDATE_DEL messages 761 For the multiple UPDATE_DEL messages, however, message order must be 762 preserved. For example, in Fig. 3, assume that the current 763 designated LSP-id for R1 is B, and that R1 receives a RELEASE 764 message from B, then receives a RELEASE message from C. 766 Consider the case where R1 sends an UPDATE_DEL message with 767 (LSP-id,hop-count) = (C,2) as a result of B's removal and then sends 768 an UPDATE_DEL message with (LSP-id,hop-count) = (A,2) as a result of 769 C's removal. 771 If R2 processes the UPDATE_DEL message with (LSP-id,hop-count) = 772 (A,2) before the UPDATE_DEL message with (LSP-id,hop-count) = (C,2), 773 R2 may finally select C as the designated LSP-id. R1, however, will 774 have selected (A,1), since C is no longer a leaf of the tree. Thus 775 R1 and R2 have made inconsistent choices. Hence, the message order 776 must be preserved at all nodes in order to avoid inconsistency with 777 multiple UPDATE_DEL messages. 779 A 780 | (LSP-id,hop-count) 781 | =(A,1) 782 v 783 B-------->R1-------->R2------->R3 784 (B,1) ^ (B,2) (B,3) 785 | 786 | (C,1) 787 C 789 Fig. 3. Example of multiple UPDATE_DEL messages 791 (c) A mixture of UPDATE_ADD and UPDATE_DEL messages 793 Similarly, message order must be preserved at all nodes when there 794 are multiple outstanding UPDATE_ADD and UPDATE_DEL messages. 796 For example, in Fig. 4, assume that the current designated LSP-id 797 for R2 is B, and that A sends a SETUP message. In this case, R2 798 sends an UPDATE_ADD message with (LSP-id,hop-count) = (A,3) to R3 as 799 a result of A's participation. 801 Consider the case where A sends a RELEASE message before receiving a 802 SETUP_ACK message. 
If R2 receives the RELEASE message initiated by 803 A before receiving an UPDATE_ADD_ACK message for the UPDATE_ADD 804 message, R2 sends an UPDATE_DEL message with (LSP-id,hop-count) = 805 (B,2) to R3. 807 If R3 processes the UPDATE_ADD and UPDATE_DEL messages from R2 in 808 reverse order, R3 may finally select A as the designated LSP-id, 809 despite the fact that A is no longer a leaf on the tree. To avoid 810 this problem, message order must also be preserved in this case. 812 A---------R1 813 | 814 | 815 | 816 v 817 B-------->R2-------->R3------->R4 818 (B,1) (B,2) (B,3) 820 Fig. 4. Example of a mixture of UPDATE_ADD and UPDATE_DEL messages 822 As a result of discussions in Section 4.1.1 and 4.1.2, it is 823 concluded that multiple outstanding message processing is possible 824 only if the message order is preserved at all nodes. This 825 requirement would be satisfied if the protocol is (i) implemented 826 over a reliable transport such as TCP for message transfer between 827 neighboring nodes, and (ii) implemented in a way that messages are 828 processed in exactly the same as the received order. Note that if 829 both conditions (i) and (ii) are satisfied, RELEASE_ACK and 830 UPDATE_DEL_ACK messages are not necessary since a change toward hop 831 count decrease is always loop-free. 833 4.1.3. Interoperation of different message processing policies 835 Nodes with exclusive message processing policy and with multiple 836 message processing policy can interoperate by the following way. If 837 a node with exclusive message processing receives multiple SETUP or 838 UPDATE messages, it simply blocks the message. Since both types of 839 nodes have their own mechanism to handle blocking events, they can 840 coexist in the same network. 842 4.2. Enhancement in route change 844 Consider the case where a route changes at R1. With the basic 845 algorithm for local control, R1 sends a RELEASE message to R2, and 846 sends a SETUP message to R6. If the SETUP message reaches R4 before 847 the RELEASE message, the SETUP message is blocked since the LSP-id=A 848 is already registered at R4 for the upstream link on the old route. 850 This would occur during a transient state of the routing protocol 851 where a SETUP message and a RELEASE message may be transmitted in 852 parallel for the same LSP. In such a case, the request is refused 853 though there is actually no loop, and the blocked node has to wait 854 until the links on the old path are removed. 856 old route 857 ==========> (A,4) 858 +----->R2----->R3-----+ 859 | | 860 | v 861 A----->R1 R4----->R5 862 | ^ (A,5) 863 | | 864 +--------->R6---------+ 865 new route 866 ==========> 868 Fig. 5. Example of route change 870 The performance during the route change can be enhanced by 871 introducing an additional rule. That is, when a SETUP message is 872 received, if the same LSP-id as the received one is already 873 registered for a different upstream link but the received hop-count 874 is less than or equal to the registered one, the node withdraws the 875 upstream link for the old path and accept the SETUP message for the 876 new path. For this purpose, an additional WITHDRAW message is used. 878 This modification does not violate the loop-free characteristics, 879 because the scheme still has the restriction against allowing a node 880 to have the same LSP-id registered for the same FEC by two different 881 neighbors. 
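   A minimal sketch of this additional acceptance rule is given below
   (illustrative only; the function name and the withdraw callback
   are assumptions, and the handling of the WITHDRAW exchange itself
   is omitted).

      def route_change_check(upstream, link, lsp_id, hops, withdraw):
          """If the received LSP-id is already registered on another
          upstream link but the new hop count is not larger, withdraw
          the old upstream link and accept the SETUP for the new path;
          otherwise refuse it as in the basic algorithm."""
          for old_link, (reg_id, reg_hops) in list(upstream.items()):
              if old_link != link and reg_id == lsp_id:
                  if hops <= reg_hops:
                      withdraw(old_link)     # WITHDRAW to the old neighbor
                      del upstream[old_link]
                      break
                  return "SETUP_NACK"        # longer path: refuse as before
          upstream[link] = (lsp_id, hops)
          return "ACCEPT"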
883 Further enhancement can be expected by using not only hop count but 884 also routing metric information when a node makes a decision whether 885 the node replaces the old upstream link with the new one or not. 886 The use of routing metric is for further study. 888 Note that the nodes with and without this enhanced algorithm are 889 interoperable. 891 5. Comparison with path-vector/diffusion method 893 Advantages: 895 1. Whereas the size of the path-vector increases with the length of 896 the LSP, the sizes of the LSP-id and hop count are constant. 897 Thus the size of messages used by the LSP-id algorithm are 898 unaffected by the network size or topology. This leads to 899 improved scalability. 901 2. In the LSP-id algorithm, a node which is changing its next hop 902 for a particular LSP must interact only with nodes that are 903 between it and the LSP egress on the new path. In the 904 path-vector algorithm, however, it is necessary for the node to 905 initiate a diffusion computation that involves nodes which do 906 not lie between it and the LSP egress. 908 This characteristic makes the LSP-id algorithm more robust. If 909 a diffusion computation is used, misbehaving nodes which aren't 910 even in the path can delay the path setup. In the LSP-id 911 algorithm, the only nodes which can delay the path setup are 912 those nodes which are actually in the path. 914 3. The LSP-id algorithm is well suited for use with both the local 915 control and the egress control. The path-vector/diffusion 916 algorithm, however, is tightly coupled with the egress control 917 method. 919 Disadvantages: 921 1. When a particular node's next hop on a particular LSP tree 922 changes, the LSP-id algorithm does not necessarily allow the 923 node to continue using the old path while the new path is being 924 set up. If there is some node D which is downstream on both 925 paths, then it is not possible to have both paths set up at the 926 same time; otherwise D would have two upstream links with the 927 same LSP-id. So in this case, the old path must be torn down 928 before the new path can be used. 930 So it is possible that during routing transients, there may be 931 periods when there is no LSP for a particular FEC. This 932 disadvantage is a consequence of the fact that the LSP-id 933 algorithm maintains only the LSP-id, not a complete path vector. 934 This reduction of information is thus the source both of the 935 advantages and the disadvantages of the LSP-id algorithm. 937 However, the enhanced mechanism described in section 4.2 can be 938 used to greatly reduce the effects of this disadvantage. That 939 mechanism allows one to leave the existing path set up while 940 initiating the setup of the new path. If it turns out that the 941 two paths have no downstream node in common, then the old path 942 need not be torn down until the new path is put into use. If, 943 on the other hand, the two paths have a node in common, and the 944 newer path is the better one (i.e., the newer path brings the 945 leafs closer to the egress), tear down of the old path will be 946 initiated by the node which is on both paths. This minimizes 947 the time during which there is no label switching for the given 948 FEC. If the new path is worse than the old path, this may cause 949 more delay in switching over. That seems inconsequential 950 though, since in this case data is likely not being delivered 951 along the old path anyway. 953 6. 
Security considerations 955 Security considerations are not discussed in this document. 957 7. Intellectual property considerations 959 Toshiba and/or Cisco may seek patent or other intellectual property 960 protection for some of the technologies disclosed in this document. 961 If any standards arising from this document are or become protected 962 by one or more patents assigned to Toshiba and/or Cisco, Toshiba 963 and/or Cisco intend to disclose those patents and license them on 964 reasonable and non-discriminatory terms. 966 8. References 968 [1] Y. Ohba, et al., "Flow Attribute Notification Protocol Version 2 969 (FANPv2) Ingress Control Mode," Internet Draft, 970 draft-ohba-csr-fanpv2-icmode-00.txt, Dec. 1997. 972 [2] L. Andersson, et al., "LSP Specification," Internet Draft, 973 draft-mplsdt-ldp-spec-00.txt, Nov. 1997. 975 [3] E Rosen, et al., "A Proposed Architecture for MPLS," 976 Internet Draft, draft-ietf-mpls-arch-00.txt, Aug. 1997. 978 [4] R. Callon, et al., "A Framework for Multiprotocol Label 979 Switching," Internet Draft, draft-ietf-mpls-framework-02.txt, 980 Nov. 1997. 982 [5] B. Davie, et al., "Use of Label Switching With ATM," 983 Internet Draft, draft-davie-mpls-atm-00.txt, Nov. 1997. 985 9. Authors' Addresses 987 Yoshihiro Ohba 988 R&D Center, Toshiba Corp. 989 1, Komukai-Toshiba-cho, Saiwai-ku 990 Kawasaki, 210, Japan 991 Email: ohba@csl.rdc.toshiba.co.jp 993 Yasuhiro Katsube 994 R&D Center, Toshiba Corp. 995 1, Komukai-Toshiba-cho, Saiwai-ku 996 Kawasaki, 210, Japan 997 Email: katsube@isl.rdc.toshiba.co.jp 999 Eric Rosen 1000 Cisco Systems, Inc. 1001 250 Apollo Drive 1002 Chelmsford, MA, 01824 1003 Email: erosen@cisco.com 1005 Paul Doolan 1006 Ennovate Networks 1007 330 Codman Hill Road 1008 Boxborough, MA 1009 Email: pdoolan@ennovatenetworks.com