Internet Draft                                         September 1997

                                         Yasuhiro Katsube (Toshiba)
                                          Yoshihiro Ohba (Toshiba)
                                         Ken-ichi Nagami (Toshiba)

       Two Modes of MPLS Explicit Label Distribution Protocol

Status of this memo

   This document is an Internet-Draft.  Internet-Drafts are working
   documents of the Internet Engineering Task Force (IETF), its
   areas, and its working groups.  Note that other groups may also
   distribute working documents as Internet-Drafts.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other
   documents at any time.  It is inappropriate to use Internet-
   Drafts as reference material or to cite them other than as "work
   in progress."

   To learn the current status of any Internet-Draft, please check
   the "1id-abstracts.txt" listing contained in the Internet-Drafts
   Shadow Directories on ftp.is.co.za (Africa), nic.nordu.net
   (Europe), munnari.oz.au (Pacific Rim), ds.internic.net (US East
   Coast), or ftp.isi.edu (US West Coast).

Abstract

   This memo discusses the characteristics of two types of MPLS
   protocol operation, which we call the 'Edge Control' operation
   and the 'Distributed' operation, and proposes that these two
   modes of protocol operation should be specified as the explicit
   Label Distribution Protocol for MPLS.

1. Introduction

   Label Switched Routers (LSRs) can forward L3 packets based on
   fixed-length labels, e.g., the VPI/VCI field in ATM cells, as
   well as by conventional packet forwarding based on L3 address
   information.  Each LSR, including edge routers and possibly
   hosts, should exchange control messages with its neighbors so
   that they share a common understanding of the relationship
   between the attached label (or its equivalent) and the specific
   packet stream.

   With regard to the procedure for control message exchange, which
   is called the Label Distribution Protocol, several mechanisms
   have been proposed [ARISSPEC][FANP][IFMP][TDP].
   They are largely classified into two types of operation: one is
   what we call the "Edge Control" operation [ARISSPEC][TDP], and
   the other is what we call the "Distributed" operation
   [FANP][IFMP].

   With regard to the trigger for establishing or releasing LSPs,
   the Edge Control operation is often understood as a topology-
   driven approach, while the Distributed operation is understood
   as a traffic-driven approach.  It should be noted that the issue
   of how the protocol works (either the Edge Control operation or
   the Distributed operation) is not necessarily coupled with the
   issue of what the trigger is.  Several combinations are possible
   instead.

   This memo outlines two types of explicit label distribution
   protocol, discusses their characteristics in terms of the trigger
   for establishing or releasing LSPs as well as the possible
   granularity levels of the LSPs, and proposes that these two modes
   of operation should be specified as the explicit Label
   Distribution Protocol for MPLS.

2. Two Types of Operation for Explicit Label Distribution Protocol

2.1 Edge Control Operation

(a) Operational overview

   In the Edge Control operation, the Label Switched Path (LSP)
   establishment procedure is initiated by an edge node (an ingress
   or egress endpoint of the LSP, which can be a router or a host)
   of the MPLS cloud.  The initiator transmits MPLS control messages
   to its neighbor (downstream or upstream, depending on the actual
   procedure) in order to request establishment of the LSP.  The
   control messages convey at least the information that specifies
   the stream (e.g., an L3 destination address prefix) to be
   transmitted over the LSP.  The information that identifies the
   label to be used may also be conveyed in the initial message.

   The LSR that has received the initial MPLS messages from its
   neighbor checks the validity of the messages, memorizes the
   notified stream information as well as the information that
   identifies the corresponding label (which may be determined by
   the sender side of the initial messages and notified to the
   receiver side, or determined by the receiver side and notified to
   the sender side) at the related interface, and possibly transmits
   an acknowledgment to the sender of the initial messages.

   After having successfully processed the received MPLS control
   messages, the LSR reconstructs and transmits the MPLS control
   messages further downstream or upstream along the path of the
   stream to request establishment of the LSP.  The messages include
   the information about the stream determined originally by the
   edge node (initiator).  The same procedure is performed at every
   LSR along the path of the stream until the path reaches a node
   that cannot extend the LSP any further (e.g., another edge node
   of the MPLS cloud).  An edge-to-edge acknowledgment may be
   returned to the initiator.

(b) Triggers for the LSP establishment or release

   Triggers for the LSP establishment can be, for example, the
   creation of an L3 forwarding table entry (Topology-driven), or
   the arrival of any or specific data traffic corresponding to the
   L3 forwarding table entry (Traffic-driven) at an edge node.
   Recognition of a group of L3 address prefixes which are reachable
   through a specific egress edge node can be a trigger for the
   establishment of highly aggregated LSPs.
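
   As an illustration only, the following Python sketch mimics the
   edge-initiated, hop-by-hop behavior described in (a) and the
   triggers described above.  The class and function names
   (SetupRequest, Lsr, ingress_trigger) are our own assumptions and
   are not taken from [ARISSPEC] or [TDP]; the actual message
   formats and procedures are left to the MPLS WG.

   # Illustrative sketch only: Edge Control setup relay.  The ingress
   # edge builds a setup request carrying the stream definition chosen
   # by the edge, and every LSR on the path records a label binding
   # for that stream and relays the request toward the egress edge.

   from dataclasses import dataclass
   from typing import Optional

   @dataclass
   class SetupRequest:
       stream: str             # e.g. a destination prefix "10.1.0.0/16"
       label: Optional[int]    # label proposed by the sender, or None if
                               # the receiver side assigns the label

   class Lsr:
       def __init__(self, name: str, downstream: Optional["Lsr"] = None):
           self.name = name
           self.downstream = downstream
           self.bindings = {}  # stream -> label at this hop

       def handle_setup(self, req: SetupRequest) -> bool:
           """Validate the request, memorize the binding, relay it on."""
           label = req.label if req.label is not None else self.new_label()
           self.bindings[req.stream] = label
           if self.downstream is None:
               return True     # egress edge: the LSP cannot be extended
           # The stream definition chosen by the edge is carried
           # unchanged; the label is significant only per hop.
           ack = self.downstream.handle_setup(SetupRequest(req.stream, None))
           return ack          # edge-to-edge acknowledgment

       def new_label(self) -> int:
           return 32 + len(self.bindings)   # placeholder label allocation

   def ingress_trigger(ingress: Lsr, stream: str) -> bool:
       """Topology-driven (new forwarding entry) or traffic-driven
       (packet arrival) trigger at the ingress edge starts the setup."""
       return ingress.handle_setup(SetupRequest(stream, None))

   # Ingress -> LSR1 -> LSR2 -> Egress
   path = Lsr("Ingress", Lsr("LSR1", Lsr("LSR2", Lsr("Egress"))))
   assert ingress_trigger(path, "10.1.0.0/16") is True
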
   Changes of the paths of existing LSPs in response to L3 route
   changes would be initiated by the LSR which detects the route
   change, regardless of the trigger for the initial establishment.
   The LSR which detects the route change invalidates the old path
   by transmitting a control message, which is handled and forwarded
   hop-by-hop toward the edge node along the existing LSP, and
   creates the new path by transmitting a control message, which is
   also handled and forwarded hop-by-hop toward the edge node along
   the new route.

   Triggers to release the LSPs would be, for example, the deletion
   of the L3 forwarding table entry, or a decrease of the data
   traffic activity corresponding to the L3 forwarding table entry.

(c) Granularity levels of the LSP

   Edge routers which initiate the LSP establishment procedure
   determine the definition of the stream (the granularity level of
   the LSP) to be transmitted over the LSP.  The definition of the
   stream is included in the establishment message and transmitted
   hop-by-hop along the path of the stream.  A variety of
   granularity levels can be defined by edge routers, e.g.,
   {dst.prefix}, {BGP next hop}, or {OSPF ABR/ASBR}, depending on
   the role of the edge node (e.g., just an edge of the MPLS cloud,
   an AS border router, or an OSPF Area/AS border router) [ARIS].

   Establishment of LSPs with fine granularity such as {src.IP,
   dst.IP} or {src.IP, multicast group} would also be possible with
   the Edge Control operation, which would be traffic-driven or
   request-driven.  But, as described in 2.2, LSPs with this fine
   granularity can also be handled by the Distributed operation.

(d) Other notes

   The edge-to-edge message forwarding in this approach makes it
   possible to associate several kinds of related knowledge with the
   LSP: e.g., the hop count of the LSP can be notified to edge
   routers, loop detection or prevention for the LSP becomes
   possible, and completion of the edge-to-edge LSP can be confirmed
   by the ingress edge router before data packets are transmitted
   over the LSP.

   The processing burden for protocol state management and message
   handling becomes much larger than in the Distributed operation
   when the frequency of establishment, change, or release of LSPs
   is relatively high (e.g., for traffic-driven fine-granularity
   streams, or for IP multicast streams with frequent group
   membership changes).

2.2 Distributed Operation

(a) Operational overview

   In the Distributed operation, the LSP establishment procedure in
   an MPLS cloud is initiated by individual LSRs (and edge nodes) in
   a distributed manner.  Each of them transmits MPLS control
   messages to its neighbor (downstream or upstream, depending on
   the actual procedure) in order to share the mapping relationship
   between a specific stream and the label dedicated to the stream.
   The messages convey at least the information that specifies the
   stream to be transmitted with the specific label.  The
   information that identifies the label to be used may also be
   conveyed in the initial message.
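
   Purely as an illustration of the binding message just described
   (and of the analogous message in 2.1(a)), the sketch below shows
   one possible shape of such a message.  The field names are our
   own assumptions; none of the referenced protocols uses exactly
   this encoding.

   # Hypothetical shape of the per-neighbor binding message: it
   # carries the stream definition and, optionally, the label.
   # Whichever side did not assign the label learns it from the other
   # side's message or acknowledgment.

   from dataclasses import dataclass
   from typing import Optional

   @dataclass
   class StreamSpec:
       # Only some fields are populated, depending on the granularity,
       # e.g. {src.IP, dst.IP} or {src.IP, multicast group}.
       src_ip: Optional[str] = None
       dst_ip: Optional[str] = None
       group: Optional[str] = None

   @dataclass
   class BindMessage:
       stream: StreamSpec
       label: Optional[int]     # None if the peer is expected to assign it
       sender_assigned: bool    # True: label chosen by the sender side

   @dataclass
   class BindAck:
       stream: StreamSpec
       label: int               # the label both neighbors now agree on

   # Example: an LSR proposes a label for a host-to-host stream to one
   # of its neighbors, which acknowledges the binding.
   msg = BindMessage(StreamSpec(src_ip="192.0.2.1", dst_ip="198.51.100.7"),
                     label=57, sender_assigned=True)
   ack = BindAck(msg.stream, msg.label)
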
   The LSR that has received the initial MPLS messages from its
   neighbor checks the validity of the messages, memorizes the
   received stream information as well as the corresponding label
   information (which may be determined by the sender side of the
   initial MPLS control messages and notified to the receiver side,
   or determined by the receiver side and notified to the sender
   side) at the related interface, and possibly transmits an
   acknowledgment to the sender of the initial messages.

   Unlike in the Edge Control operation, in this case the exchange
   of MPLS control messages between an LSR and its neighbor on one
   side (upstream or downstream) does not trigger an exchange of
   MPLS control messages with its neighbor on the other side.  An
   MPLS control message exchange for a specific stream between each
   pair of neighboring LSRs is initiated and carried out
   independently from the message exchange for the same stream
   between any other pair of LSRs.

(b) Triggers for the LSP establishment or release

   Triggers for the LSP establishment can be, for example, the
   arrival of any or specific data traffic (Traffic-driven) at
   individual LSRs and edge nodes on the path.  A common rule with
   regard to the trigger for the LSP establishment (the condition to
   initiate the LSP establishment) should be configured in all LSRs
   and edge nodes in the MPLS cloud so that the LSPs are
   successfully established in a distributed manner.

   The arrival of RSVP Resv messages (Request-driven) at individual
   LSRs and edge nodes on the path is also appropriate for the
   distributed protocol operation.  Reception of a Resv message at
   an LSR from its downstream neighbor triggers the control message
   exchange with that downstream neighbor (to notify the mapping
   relationship between the stream corresponding to the RSVP flow
   and the label information used to convey the stream); then the
   LSR transmits the Resv message further upstream.  Here we assume
   the use of the current standard RSVP message format with no
   additional object defined for MPLS.  The same applies to the case
   of multicast with PIM-SM: reception of a PIM Join message
   (Request-driven) at an LSR from its downstream neighbor triggers
   the control message exchange with that downstream neighbor, and
   the LSR then transmits the PIM Join message further upstream.
   Here we assume the use of the current standard PIM message format
   with no additional object defined for MPLS.

   Changes of the paths of existing LSPs in response to L3 route
   changes are initiated by the LSR which detects the route change,
   regardless of the trigger for the initial establishment.  The LSR
   which detects the route change invalidates the mapping
   relationship between the label and the stream toward its
   downstream neighbor by exchanging control messages with it; this,
   however, does not trigger control message transmission toward
   further downstream nodes.  The old path will be released by
   timeout because, e.g., no data traffic is emitted onto the old
   path or no Path/Resv message is transmitted over the old path.
   Creation of the new path from the LSR that detects the route
   change is carried out in a distributed manner, similarly to the
   initial LSP setup procedure.
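
   The following sketch, again purely illustrative, contrasts with
   the Edge Control sketch in 2.1: here each LSR reacts only to its
   own trigger (a locally configured traffic condition, or the
   arrival of a standard Resv or Join from the downstream neighbor)
   and exchanges a binding with that one neighbor only; no MPLS
   control message is relayed further.  The trigger rule, class
   names, and label values are assumptions for illustration.

   # Hypothetical sketch of the Distributed operation: each LSR acts
   # only on its local trigger and binds a label with a single
   # neighbor; nothing is propagated hop-by-hop by the label
   # distribution protocol itself.

   from dataclasses import dataclass

   @dataclass(frozen=True)
   class Stream:
       src_ip: str
       dst_ip: str

   class DistributedLsr:
       # The trigger rule must be configured consistently in all LSRs
       # so that the same streams end up labeled at every hop.
       TRIGGER_THRESHOLD = 3        # e.g. "N packets seen for the stream"

       def __init__(self, name: str):
           self.name = name
           self.packet_counts = {}  # Stream -> packets seen
           self.bindings = {}       # Stream -> label agreed with a neighbor

       def on_data_packet(self, stream: Stream, downstream: "DistributedLsr"):
           """Traffic-driven trigger: count packets and bind with the
           downstream neighbor once the local condition is met."""
           self.packet_counts[stream] = self.packet_counts.get(stream, 0) + 1
           if (self.packet_counts[stream] >= self.TRIGGER_THRESHOLD
                   and stream not in self.bindings):
               self.bindings[stream] = downstream.offer_label(stream)

       def on_resv_message(self, stream: Stream, downstream: "DistributedLsr"):
           """Request-driven trigger: a standard RSVP Resv (or PIM Join)
           from the downstream neighbor triggers the binding with that
           neighbor; the Resv itself is then forwarded upstream
           unchanged (not shown)."""
           if stream not in self.bindings:
               self.bindings[stream] = downstream.offer_label(stream)

       def offer_label(self, stream: Stream) -> int:
           """Downstream side assigns a label, records the binding at
           its upstream interface, and returns the label."""
           label = 100 + len(self.bindings)
           self.bindings[stream] = label
           return label

   # Example: LSR1 binds with LSR2 after enough packets of one stream.
   lsr1, lsr2 = DistributedLsr("LSR1"), DistributedLsr("LSR2")
   flow = Stream("192.0.2.1", "198.51.100.7")
   for _ in range(3):
       lsr1.on_data_packet(flow, downstream=lsr2)
   assert flow in lsr1.bindings
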
   Triggers to release the LSPs would be, for example, a decrease of
   data traffic activity, or RSVP reservation state expiration, at
   individual LSRs or edge nodes on the path, which keeps with the
   principle of distributed operation.

(c) Granularity levels of the LSP

   The definition of the stream has to be determined by individual
   LSRs and edge nodes on the path on their own, since no such
   information is conveyed hop-by-hop by the control messages in the
   Distributed operation.  Therefore, the granularity levels
   provided by the Distributed operation are restricted to those
   that individual LSRs and edge nodes can commonly understand by
   themselves.

   In the case of Traffic-driven setup, LSRs and edge nodes on the
   path can individually recognize streams of L3-level end-to-end
   granularity by referring to the data packets (e.g., {src.IP,
   dst.IP} and {src.IP, multicast group}).  In addition, they can
   individually recognize streams of {src.IP, dst.prefix}
   granularity when they are guaranteed to have the forwarding table
   entry with the same aggregation level, given by the routing
   protocol or by configuration.

   In the case of Request-driven setup, each LSR can recognize
   streams of application-to-application granularity by referring to
   the RSVP Resv messages (e.g., {src.IP/port, dst.IP/port}), or
   recognize streams of L3-level end-to-end granularity by referring
   to the PIM Join messages (e.g., {RP, multicast group} or {src.IP,
   multicast group}).

   As the data packets, the RSVP Resv messages, and the PIM Join
   messages travel along the path of the LSP carrying the
   information of the stream definition, they perform almost the
   same role as the edge-to-edge control messages in the Edge
   Control case, which facilitates LSP control with the Distributed
   operation.

(d) Other notes

   Although no information of edge-to-edge significance can be
   shared through the Distributed operation, the overall procedure
   is simple and it is easy to follow dynamic changes in router
   state, e.g., unicast routing, multicast group membership, or RSVP
   reservation state.  Various knowledge related to the LSP, such as
   the hop count or the existence of a loop, cannot be obtained in
   the Distributed operation.

3. Desirable Protocol Operations for Individual Types of LSPs

3.1 Unicast LSP

3.1.1 Unicast LSP with Arbitrary Granularity Level

   When the MPLS cloud should provide LSPs for aggregated streams
   with various granularity levels, the use of the Edge Control
   operation is desirable.  The granularity level should be
   determined by edge nodes (either ingress or egress) and then
   notified by MPLS control messages hop-by-hop to all LSRs on the
   path of the stream.

   The LSP establishment can be triggered by the creation of
   forwarding table entries (Topology-driven) or by the arrival of
   traffic corresponding to the table entry (Traffic-driven).  The
   release of the LSP can be triggered by the deletion of the
   forwarding table entries or by the decrease of traffic activity
   corresponding to the table entry.
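
   To summarize the granularity levels mentioned in 2.1(c), 2.2(c),
   and above in one place, the sketch below writes them as simple
   record types.  The type names are our own; the grouping reflects
   which definitions must be carried edge-to-edge and which can be
   recognized per hop.

   # Hypothetical representations of LSP stream granularity levels.
   # Coarse, edge-defined definitions (first group) must be carried
   # hop-by-hop by the Edge Control operation; fine definitions
   # (second group) can be recognized independently by each LSR from
   # data packets, Resv, or Join messages, and therefore also suit
   # the Distributed operation.

   from dataclasses import dataclass

   # Edge-defined, aggregated granularities (Edge Control)
   @dataclass(frozen=True)
   class DstPrefixStream:          # {dst.prefix}
       dst_prefix: str             # e.g. "10.1.0.0/16"

   @dataclass(frozen=True)
   class BgpNextHopStream:         # {BGP next hop}
       next_hop: str

   @dataclass(frozen=True)
   class OspfBorderStream:         # {OSPF ABR/ASBR}
       border_router_id: str

   # Fine granularities recognizable per hop (either operation)
   @dataclass(frozen=True)
   class HostPairStream:           # {src.IP, dst.IP}
       src_ip: str
       dst_ip: str

   @dataclass(frozen=True)
   class MulticastStream:          # {src.IP, group} or {RP, group}
       root: str                   # source or rendezvous point
       group: str

   @dataclass(frozen=True)
   class RsvpFlowStream:           # {src.IP/port, dst.IP/port} from Resv
       src_ip: str
       src_port: int
       dst_ip: str
       dst_port: int

   examples = [
       DstPrefixStream("10.1.0.0/16"),
       HostPairStream("192.0.2.1", "198.51.100.7"),
       MulticastStream(root="192.0.2.1", group="233.252.0.1"),
       RsvpFlowStream("192.0.2.1", 5004, "198.51.100.7", 5004),
   ]
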
   Figure 1 shows examples of the message sequence for unicast LSP
   setup with the Edge Control operation.  The sequence initiated by
   an ingress edge (like TDP) and the sequence initiated by an
   egress edge (like ARIS) are shown.  Note that the detailed
   procedure should be specified by the MPLS WG.

   Ingress======== LSR1 ======== LSR2 ======== LSR3 ========Egress

    TRG  req           req           req           req
     |-------->++++|-------->++++|-------->++++|-------->++
          ack           ack           ack           ack  |
     <--------|++++<--------|++++<--------|++++<--------|++

                  (i) Ingress Initiated Sequence

   Ingress======== LSR1 ======== LSR2 ======== LSR3 ========Egress

          req           req           req           req  TRG
     +<---------|++++<--------|++++<--------|++++<---------|
     |    ack      |    ack      |    ack      |    ack
     +----------> +---------> +---------> +--------->

                  (ii) Egress Initiated Sequence

       (TRG = "creation of forwarding entry"
              or "arrival of data packets")

    Fig.1  Examples of Message Sequence for Arbitrary Granularity

3.1.2 Unicast LSP with Limited Granularity Level

   When the MPLS cloud provides unicast LSPs for specific end-to-end
   L3 streams on demand (Traffic-driven), it can adopt the Edge
   Control operation, since the end-to-end L3 stream (specified by
   {src.IP, dst.IP}) is just one of the granularity levels described
   in 3.1.1.  But it should be noted that the provision of
   traffic-driven LSPs for end-to-end L3 streams requires much more
   frequent establishment and release of LSPs compared with
   aggregated LSPs.  The Distributed operation, which is more
   lightweight than the Edge Control operation, may be preferable in
   this case.  As described in 2.2, it is possible to provide, for
   example, {src.IP, dst.prefix} level granularity in a domain whose
   routers share forwarding entries with the same level of network
   mask.

   Figure 2 shows an example of the message sequence for unicast LSP
   setup with the Distributed operation triggered by data traffic.
   Note that the detailed procedure should be specified by the MPLS
   WG.

   Ingress======== LSR1 ======== LSR2 ======== LSR3 ========Egress
        packet         packet         packet         packet
     -> - - - - -   -> - - - - -   -> - - - - -   -> - - - - -  ->
    TRG  req       TRG  req       TRG  req       TRG  req
     |-------->+    |-------->+    |-------->+    |-------->+
         ack   |        ack   |        ack   |        ack   |
     <---------+    <---------+    <---------+    <---------+

       (TRG = "arrival of data packets")

       Fig.2  Example of Message Sequence for Fine Granularity

3.2 Multicast LSP

   When the MPLS cloud provides LSPs along source-based or shared
   multicast trees, point-to-multipoint LSPs will be established
   whose origination points are either the ingress edge node closest
   to the source or the RP in the case of PIM-SM.

   The traffic-driven, Distributed operation is straightforward in
   the case of a dense mode protocol such as DVMRP, in terms of the
   initial setup procedure as well as the addition or deletion of
   group members.  Triggered by the arrival of multicast packets,
   each LSR can establish dedicated labels toward its downstream
   neighbors using the Distributed operation.

   The request-driven, Distributed operation is straightforward in
   the case of a sparse mode protocol such as PIM-SM, in terms of
   the initial setup as well as the addition or deletion of group
   members.  Triggered by the arrival of PIM Join messages from the
   downstream neighbors, each LSR can establish dedicated labels
   toward its downstream neighbors using the Distributed operation.
   Note that inclusion of label information in the PIM Join message
   may be enough for label establishment in some cases, as described
   in [TAG].  But in the case that the label value is changed
   between neighboring LSRs, as described in [KATSU], inclusion of
   label information in the PIM Join message alone is not enough; an
   additional message handshake between neighboring LSRs is
   necessary.
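
   The sketch below illustrates, under our own naming assumptions,
   the two alternatives just mentioned for PIM-SM: the label carried
   inside the PIM Join itself (the [TAG]-style case), and an
   explicit per-neighbor handshake when the label value changes
   between neighboring LSRs (the [KATSU] case).  It also shows that
   each downstream branch of the point-to-multipoint LSP gets its
   own label.

   # Hypothetical sketch of request-driven multicast label setup.  A
   # PIM Join from a downstream neighbor triggers the binding for
   # that branch only: either the Join piggybacks the label, or an
   # explicit bind/ack handshake runs between the two neighbors
   # before the Join is sent further upstream.

   from dataclasses import dataclass
   from typing import Optional, Dict, Tuple

   @dataclass(frozen=True)
   class McastStream:
       root: str                   # source address or RP address
       group: str

   @dataclass
   class PimJoin:
       stream: McastStream
       piggybacked_label: Optional[int] = None   # [TAG]-style case

   class McastLsr:
       def __init__(self, name: str):
           self.name = name
           # (stream, downstream interface) -> label for that branch
           self.branch_labels: Dict[Tuple[McastStream, str], int] = {}

       def on_pim_join(self, join: PimJoin, downstream_if: str) -> None:
           if join.piggybacked_label is not None:
               # The label carried in the Join itself is sufficient.
               label = join.piggybacked_label
           else:
               # Label value differs hop by hop: run a separate
               # handshake with this downstream neighbor (modeled here
               # as a local assignment).
               label = self.bind_handshake(join.stream, downstream_if)
           self.branch_labels[(join.stream, downstream_if)] = label
           # The Join itself is then forwarded upstream unchanged
           # (not shown).

       def bind_handshake(self, stream: McastStream,
                          downstream_if: str) -> int:
           return 200 + len(self.branch_labels)

   # Example: two downstream branches of the same p-mp LSP.
   lsr = McastLsr("LSR1")
   s = McastStream(root="192.0.2.1", group="233.252.0.1")
   lsr.on_pim_join(PimJoin(s, piggybacked_label=71), downstream_if="if0")
   lsr.on_pim_join(PimJoin(s), downstream_if="if1")
   assert len(lsr.branch_labels) == 2
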
   Figures 3 and 4 show examples of the message sequence for
   multicast LSP setup with the Distributed operation in the
   traffic-driven case and the request-driven case, respectively.
   Note that the detailed procedure should be specified by the MPLS
   WG.

         ack             ack             ack
     <---------+     <---------+     <----------+
        req    |        req    |        req     |
     |-------->+     |-------->+     |--------->+
    TRG             TRG             TRG
        packet          packet          packet
     -> - - - - -    -> - - - - -    -> - - - - -   ->
   Ingress======== LSR1 ======== LSR2 ========Egress
                    |               packet
                    |   - - - - - - - - - - - ->
                    +========================Egress
                          req
                     |----------------------->+
                          ack                 |
                     <------------------------+

       (TRG = "arrival of data packets")

    Fig.3  Example of Message Sequence for Multicast LSPs
           (Traffic-driven)

         ack             ack             ack
     <---------+     <---------+     <----------+
        req    |        req    |        req     |
     |-------->+     |-------->+     |--------->+
    TRG             TRG             TRG
       PIM Join        PIM Join        PIM Join
     <- - - - -      <- - - - -      <- - - - -
   Ingress======== LSR1 ======== LSR2 ========Egress
                    |
                    +========================Egress
                          PIM Join
                     <- - - - - - - - - - - -
                    TRG  req
                     |----------------------->+
                          ack                 |
                     <------------------------+

       (TRG = "arrival of PIM Join")

    Fig.4  Example of Message Sequence for Multicast LSPs
           (Request-driven)

3.3 Unicast/Multicast LSP with RSVP

   With regard to LSP establishment in response to the creation of
   RSVP reservation state (Request-driven), the Edge Control
   operation initiated by edge nodes is not adequate, since each LSR
   must forward RSVP Resv messages upstream after it succeeds in
   establishing labels toward its downstream neighbors; this
   requires a distributed LSP control operation rather than an
   operation initiated by edge routers.

   The procedure will be almost the same as in the case of PIM-SM.
   Triggered by the arrival of RSVP Resv messages from the
   downstream neighbors, each LSR would establish dedicated labels
   toward its downstream neighbors using the Distributed operation.
   Note that the inclusion of label information in the RSVP Resv
   message may be enough for label establishment in some cases, as
   described in [DAVIE][VISWA].  But in the case that the label
   value is changed between neighboring LSRs, as described in
   [KATSU], inclusion of label information in the RSVP Resv message
   alone is not enough; an additional message handshake between
   neighboring LSRs is necessary.

   Figure 5 shows an example of the message sequence for RSVP-driven
   LSP setup with the Distributed operation.  Note that the detailed
   procedure should be specified by the MPLS WG.

         ack             ack             ack
     <---------+     <---------+     <----------+
        req    |        req    |        req     |
     |-------->+     |-------->+     |--------->+
    TRG             TRG             TRG
        Resv            Resv            Resv
     <- - - - -      <- - - - -      <- - - - -
   Ingress======== LSR1 ======== LSR2 ========Egress
                    |
                    +========================Egress
                          Resv
                     <- - - - - - - - - - - -
                    TRG  req
                     |----------------------->+
                          ack                 |
                     <------------------------+

       (TRG = "arrival of RSVP Resv message")

    Fig.5  Example of Message Sequence for LSPs with RSVP
           (Request-driven)
4. Security Considerations

   Security issues are not discussed in this memo.

5. Conclusion

   Based on the discussion above, we propose that the two modes of
   explicit label distribution protocol operation, which we call the
   "Edge Control" operation and the "Distributed" operation, should
   be supported.  Either of them would be utilized according to the
   stream granularity, the trigger, and the configuration (p-p/p-mp)
   of the LSP.

6. References

   [ARIS]     A. Viswanathan, N. Feldman, R. Boivie, and R. Woundy,
              "ARIS: Aggregated Route-Based IP Switching",
              draft-viswanathan-aris-overview-00.txt, March 1997.

   [ARISSPEC] N. Feldman and A. Viswanathan, "ARIS Specification",
              draft-feldman-aris-spec-00.txt, March 1997.

   [DAVIE]    B. Davie, Y. Rekhter, and E. Rosen, "Use of Label
              Switching With RSVP", draft-davie-mpls-rsvp-00.txt,
              May 1997.

   [FANP]     K. Nagami, Y. Katsube, Y. Shobatake, A. Mogi,
              S. Matsuzawa, T. Jinmei, and H. Esaki, "Toshiba's Flow
              Attribute Notification Protocol (FANP) Specification",
              RFC 2129, April 1997.

   [IFMP]     P. Newman, W. L. Edwards, R. Hinden, E. Hoffman,
              F. C. Liaw, T. Lyon, and G. Minshall, "Ipsilon Flow
              Management Protocol Specification for IPv4", RFC 1953,
              May 1996.

   [TAG]      Y. Rekhter, B. Davie, D. Katz, E. Rosen, G. Swallow,
              and D. Farinacci, "Tag Switching Architecture -
              Overview", draft-rekhter-tagswitch-arch-00.txt,
              January 1997.

   [TDP]      P. Doolan, B. Davie, D. Katz, Y. Rekhter, and
              E. Rosen, "Tag Distribution Protocol",
              draft-doolan-tdp-spec-01.txt, May 1997.

   [VISWA]    A. Viswanathan and V. Srinivasan, "Soft State
              Switching - A Proposal to Extend RSVP for Switching
              RSVP Flows -", draft-viswanathan-aris-rsvp-00.txt,
              March 1997.

7. Authors' Addresses

   Yasuhiro Katsube
   R&D Center, Toshiba Corporation
   1 Komukai-Toshiba-cho, Saiwai-ku
   Kawasaki, 210, Japan
   Email: katsube@isl.rdc.toshiba.co.jp

   Yoshihiro Ohba
   R&D Center, Toshiba Corporation
   1 Komukai-Toshiba-cho, Saiwai-ku
   Kawasaki, 210, Japan
   Email: ohba@csl.rdc.toshiba.co.jp

   Ken-ichi Nagami
   R&D Center, Toshiba Corporation
   1 Komukai-Toshiba-cho, Saiwai-ku
   Kawasaki, 210, Japan
   Email: nagami@isl.rdc.toshiba.co.jp