Network Working Group                                        A. McDonald
Internet-Draft                                                R. Hancock
Expires: December 22, 2003                   Siemens/Roke Manor Research
                                                            H. Tschofenig
                                                               C. Kappler
                                                               Siemens AG
                                                            June 23, 2003

                   A Quality of Service NSLP for NSIS
                   draft-mcdonald-nsis-qos-nslp-00.txt

Status of this Memo

   This document is an Internet-Draft and is in full conformance with
   all provisions of Section 10 of RFC2026.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF), its areas, and its working groups.  Note that
   other groups may also distribute working documents as
   Internet-Drafts.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."

   The list of current Internet-Drafts can be accessed at
   http://www.ietf.org/ietf/1id-abstracts.txt.

   The list of Internet-Draft Shadow Directories can be accessed at
   http://www.ietf.org/shadow.html.

   This Internet-Draft will expire on December 22, 2003.

Copyright Notice

   Copyright (C) The Internet Society (2003).  All Rights Reserved.

Abstract

   This draft describes a protocol to be used for signaling QoS
   reservations in the Internet.  It is compatible with the framework
   and requirements for such signaling protocols developed within NSIS;
   in conjunction with the NSIS Transport solution, it provides
   functionality comparable to RSVP: it is independent of the details
   of QoS specification, and adds support for a greater variety of
   reservation models, but is simplified by the elimination of support
   for multicast flows.

   This draft includes a model of reservation operation and a
   description of the individual protocol mechanisms, and discusses
   interactions of the reservation protocol with other protocols and
   mechanisms.  It also includes an outline functional specification
   and example message flows.

Table of Contents

   1.    Introduction
   1.1   Scope and Background
   1.2   Model of Operation
   1.3   Terminology
   2.    Protocol Mechanisms
   2.1   State Management
   2.2   Informational Messages
   2.3   Initiation and Termination
   2.4   State Source and Generation Identification
   2.5   QoS-NSLP Message Routing
   2.6   Resource Description Objects
   3.    External Interactions
   3.1   QoS Models
   3.2   Implications for NTLP Functionality
   4.    Outline Functional Specification
   4.1   QoS NSLP Messages
   4.2   Rerouting and Local Repair
   4.3   Mobility and Multihoming
   5.    Example Message Flows
   5.1   Basic Sender/Receiver Initiated Example
   5.2   Reservation Collision Example
   5.3   Bidirectional Reservation Example
   5.4   Tunnels and Aggregation
   5.5   Layered Reservations
   6.    Open Issues
   7.    Security Considerations
   8.    Acknowledgements
         References
         Authors' Addresses
   A.    Explicit Routing
   B.    Skip-Stop Routing
         Full Copyright Statement

1. Introduction

1.1 Scope and Background

   This document defines a Quality of Service (QoS) NSIS Signaling
   Layer Protocol (NSLP), henceforth referred to as the "QoS-NSLP".
   This protocol establishes and maintains state at nodes along the
   path of a data flow for the purpose of providing some forwarding
   resources for that flow.  It is intended to satisfy the QoS-related
   requirements of [1].  This QoS-NSLP is part of a larger suite of
   signaling protocols, whose structure is outlined in [2], which
   defines a common NSIS Transport Layer Protocol (NTLP) that the
   QoS-NSLP uses to carry out many aspects of signaling message
   delivery.

   The design of QoS-NSLP is conceptually similar to RSVP [3], and uses
   soft-state peer-to-peer refresh messages as the primary state
   management mechanism.  However, there is no backwards compatibility
   at the protocol level, although interworking would be possible in
   some circumstances.  QoS-NSLP extends the set of reservation
   mechanisms to meet the requirements of [1], in particular support of
   sender or receiver initiated reservations, as well as a type of
   bidirectional reservation.  Note that 'sender' and 'receiver'
   initiation refers to the direction of reservation messages relative
   to the data flow; the actual signaling entities can be anywhere
   along the data path, not just at the endpoints.  On the other hand,
   there is no support for IP multicast.

   QoS-NSLP does not mandate any specific 'QoS Model', i.e. a language
   for QoS objects or any architecture for provisioning it within a
   network or any particular node; this is similar to (but stronger
   than) the decoupling between RSVP and the IntServ architecture [4].
   It should be able to carry QoS objects of various different types;
   the specification of Integrated Services for use with RSVP given in
   [5] could form the basis of one QoS model.

1.2 Model of Operation

   This section presents a logical model for the operation of the
   QoS-NSLP and associated provisioning mechanisms within a single
   node.  It is adapted from the discussion in section 1 of [3].
   The model is shown in Figure 1.

                              +---------------+
                              |     Local     |
                              |Applications or|
                              |Management (e.g|
                              |for aggregates)|
                              +---------------+
                                      ^
                                      ^
                                      V
                                      V
      +----------+             +----------+      +---------+
      | QoS-NSLP |             | Resource |      | Policy  |
      |Processing|<<<<<<>>>>>>>|Management|<<<>>>| Control |
      +----------+             +----------+      +---------+
        |      ^                    .   ^
        |      ^                    .   ^
        |      V                    .   ^
        |      V                    .   ^
      +----------+                  .   ^
      |   NTLP   |                  .   ^
      |Processing|                  .   V
      +----------+                  .   V
        |      |                    .   V
   +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
        |      |                    .   V
        |      |                    .   V  .............................
        |      |                    .   V  .      Traffic Control      .
        |      |                    .   V  .               +---------+ .
        |      |                    .   V  .               |Admission| .
        |      |                    .      .               | Control | .
      +----------+  +------------+  .      .               +---------+ .
   ---| Input    |  |  Outgoing  |--------------------------------------
      | Packet   |  | Interface  |         . +----------+  +---------+ .
   ===|Processing|==| Selection  |=========.=| Packet   |==| Packet  |.==>
      |          |  |(Forwarding)|         . |Classifier|  |Scheduler| .
      +----------+  +------------+         . +----------+  +---------+ .
                                           .............................

      ------ = signaling flow
      =====> = data flow (sender -->receiver)
      <<<>>> = control and configuration operations
      ...... = routing table manipulation

                      Figure 1: QoS-NSLP in a Node

   The main features of the model are as follows:

   From the perspective of a single node, the request for QoS may
   result from a local application request, or from processing an
   incoming QoS-NSLP message.

   o  The 'local application' case includes not only user applications
      (e.g. a multimedia application), but also network management
      (e.g. initiating a tunnel to handle an aggregate, or interworking
      with some other reservation protocol - such as RSVP).  In this
      sense, the model does not distinguish between hosts and routers.

   o  The 'incoming message' case requires NSIS messages to be captured
      during input packet processing and handled by the NTLP.  Only
      messages related to QoS are passed to the QoS-NSLP.  The NTLP may
      also generate triggers to the QoS-NSLP (e.g. indications that a
      route change has occurred).

   The QoS request is handled by a local 'resource management'
   function, which coordinates the activities required to grant and
   configure the resource.

   o  The grant processing involves two local decision modules, 'policy
      control' and 'admission control'.  Policy control determines
      whether the user has administrative permission to make the
      reservation.  Admission control determines whether the node has
      sufficient available resources to supply the requested QoS.

   o  If both checks succeed, parameters are set in the packet
      classifier and in the link layer interface (e.g., in the packet
      scheduler) to obtain the desired QoS.  Error notifications are
      passed back to the request originator.  The resource management
      function may also manipulate the forwarding tables at this stage,
      to select (or at least pin) a route; this must be done before
      interface-dependent actions are carried out (including forwarding
      outgoing messages over any new route), and is in any case
      invisible to the operation of the protocol.

   Policy control is expected to make use of an AAA service external to
   the node itself.  Some discussion can be found in [13] and [17].
   More generally, the processing of policy and resource management
   functions may be outsourced to an external node, leaving only
   'stubs' co-located with the NSLP; however, this is not visible to
   the protocol operation.
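
   As a non-normative illustration only, the grant processing described
   above could be organised as in the following Python sketch; the
   function names (policy_control, admission_control, traffic_control)
   are assumptions of this example and are not protocol elements.

      # Non-normative sketch of per-node grant processing.
      class ReservationError(Exception):
          """Raised when a QoS request cannot be granted at this node."""

      def handle_qos_request(user, flow_id, qos,
                             policy_control, admission_control,
                             traffic_control):
          """Grant and configure resources for one QoS request."""
          if not policy_control(user, qos):
              # No administrative permission; an error notification is
              # passed back to the request originator.
              raise ReservationError("policy control failed")
          if not admission_control(flow_id, qos):
              # Insufficient available resources for the requested QoS.
              raise ReservationError("admission control failed")
          # Both checks succeeded: configure the packet classifier and
          # the packet scheduler for this flow.
          traffic_control(flow_id, qos)
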
   The group of user plane functions which implement QoS for a flow
   (admission control, packet classification, and scheduling) is
   sometimes known as 'traffic control'.

   Admission control, packet scheduling, and any part of policy control
   beyond simple authentication have to be implemented using specific
   definitions for types and levels of QoS; we refer to this as a QoS
   model.  Our assumption is that the QoS-NSLP is independent of the
   QoS model, that is, QoS parameters (e.g. IntServ service elements)
   are interpreted only by the resource management and associated
   functions, and are opaque to the QoS-NSLP itself.  QoS Models are
   discussed further in Section 3.1.

   The final stage of processing for a resource request is to indicate
   to the QoS-NSLP protocol processing that the required resources have
   been configured.  The QoS-NSLP may generate an acknowledgement
   message in one direction, and may propagate the resource request
   forwards in the other.  Message routing is (by default) carried out
   by the NTLP module.  Note that while the figure shows a
   unidirectional data flow, the signaling messages can pass in both
   directions through the node, depending on the particular message and
   orientation of the reservation.

1.3 Terminology

   The terminology defined in [2] applies to this draft.  In addition,
   the following terms are used:

   o  QNE - an NSIS Entity (NE) which supports the QoS-NSLP.

   o  QNI - a QoS NSLP node acting as an NSIS Initiator (NI), the first
      node in the sequence of QNEs that issues a reservation request.

   o  QNR - a QoS NSLP node acting as an NSIS Responder (NR), the last
      node in the sequence of QNEs that receives a reservation request.

   o  QNF - a QoS NSLP node acting as an NSIS Forwarder (NF).

   In this document, where the phrase "message source" is used it
   generally refers to the adjacent QNE.  Where it means one of the
   endpoints of the signalling session, this is highlighted by using
   the terms QNI or QNR.

2. Protocol Mechanisms

   This section describes the conceptual building blocks of the
   QoS-NSLP, in order to explain the overall structure of the protocol.
   The message set of QoS-NSLP is deliberately designed to be simple,
   to ease future analysis of routing and mobility interactions and
   other extensions: there are just 5 messages, namely a
   RESERVE/RESPONSE pair (idempotent messages controlling all aspects
   of state management) and 3 stateless informational messages to
   handle queries and notifications.  The default mode of operation
   delegates all message routing responsibility to the NTLP.

2.1 State Management

   The QoS-NSLP uses one message, the RESERVE message, for reservation
   state management at QNEs.  The RESERVE message is idempotent, i.e.
   any given message has the same effect however many times it is
   repeated; whether a RESERVE is installing new state or refreshing,
   modifying or tearing down already established state is determined
   independently at each QNE, depending on the existence of related
   state at that QNE.  The RESERVE messages being sent at different
   points along the path are conceptually independent so far as the
   protocol is concerned, each being sent by one QNE and received by
   its next peer.
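
   As a non-normative illustration, the following Python sketch shows
   how a QNE might make this decision purely from its own state table;
   the table layout and the zero-resources test are assumptions of the
   example, not protocol elements.

      # Non-normative sketch: classifying an incoming RESERVE at a QNE.
      reservation_state = {}   # flow/session ID -> currently installed RDO

      def classify_reserve(flow_session_id, rdo, rdo_is_zero):
          """Return 'install', 'refresh', 'modify' or 'teardown'."""
          existing = reservation_state.get(flow_session_id)
          if existing is None:
              return "install"     # unknown flow/session ID: new state
          if rdo_is_zero:
              return "teardown"    # a modify with zero resources
          if rdo == existing:
              return "refresh"     # identical resources: pure refresh
          return "modify"          # known ID, different resources
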
   The RESERVE message is used to support a similar function to the
   RSVP reserve, reservation refresh and teardown messages, as all of
   these manipulate what is conceptually the same reservation state.
   Which of these three functions the message causes depends on the QoS
   objects and flow/session identifier it carries.  For example,
   reservation state is created in QNEs by a RESERVE message with an
   unknown flow/session ID and the topmost QoS object describing
   non-zero resources.

   Reservation state is soft state and needs to be refreshed
   periodically.  Refresh (and modify) is achieved by sending a RESERVE
   message with a known flow/session identifier.  Finally, a teardown
   is really a special case of a modify message, with the modification
   being that resources should be zero.

   Any RESERVE may be replied to by a 'local' RESPONSE message (i.e.
   one sent directly back by the next QNE along the path); this happens
   when the reservation installed does not match the reservation
   requested (generally, when an error condition occurs), or if the QNE
   sending the RESERVE explicitly requests it.  Any QNE can also create
   a RESERVE message tagged so it causes messages to be sent further
   along the path towards the QNR and RESPONSE messages sent all the
   way back.

2.2 Informational Messages

   The QoS-NSLP uses three messages which do not affect the resource
   management state at all (are 'impotent') at QNEs.  These are the
   QUERY, QUERY-REPLY and NOTIFY messages.

   QUERY messages are used to support a similar function to the AdSpec
   Class in RSVP.  A QUERY can be used to determine what kind of QoS
   services are available along the path, as well as any
   service-specific attributes and the amount of QoS resources
   available.

   The QUERY message payloads can be extended or modified as the
   message is propagated; when it reaches the end of the path it is
   changed into a QUERY-REPLY message which carries the same data but
   in the reverse direction.  QUERY messages can also be used purely
   between adjacent peers.

   NOTIFY messages are not sent in direct response to other signaling
   messages.  They do not directly modify network state, or require a
   signaling message in response.  They can be sent asynchronously by
   any node in the path and are commonly used to indicate that a change
   has occurred in the network.  Examples include: change of a
   reservation caused by a network event (e.g. where rerouting means
   that a reservation can no longer be supported), or when a
   reservation is no longer being honoured due to pre-emption.

   QUERY, QUERY-REPLY and NOTIFY messages are forwarded along the path
   (they may go upstream or downstream, provided that the NTLP has the
   necessary path state).

2.3 Initiation and Termination

2.3.1 Initiating QoS Signaling

   The simplest case is clearly where the sender initiates the
   reservation.  It can do this simply by sending its first RESERVE
   message.  The sender then also becomes the QoS-NSLP Initiator (QNI)
   for the reservation.

   A node further along the data path may also carry out this role (for
   example, because the sender does not support the QoS NSLP itself, or
   because QoS signaling is already done by the sender using some other
   mechanism).  This node must know the attributes of the flow,
   including the flow-id, and the QoS properties required.  How this is
   done is not considered part of this protocol specification; options
   include the node taking part in the application's own signaling, or
   translating another QoS signaling protocol being used between itself
   and the sender.  So far as the rest of the reservation processing is
   concerned at other nodes, this case is indistinguishable from the
   first, except that the QNI address (if visible in any messages) does
   not match the source address in the flow-id.

   A receiver initiated QoS NSLP signaling session is essentially
   identical, except that it requires the QNI to be able to construct
   the flow identifier (including knowing the encapsulation or other
   marking used for data packets) and that NTLP reverse-path state
   should exist.  Both of these may require coordination with the
   sender, which has to be done with end-to-end signaling outside the
   scope of the QoS-NSLP.  If end-to-end reverse path state is created
   in the NTLP, this should be signaled to the QoS-NSLP at the QNR to
   allow it to begin the signaling application.

2.3.2 Finding the Terminating Node

   For a sender initiated case where the receiver supports the QoS
   NSLP, there is no difficulty in finding the terminating node.  The
   receiver knows that it is the endpoint for the data flow, and so
   also knows to terminate the QoS signaling, and becomes the QNR for
   the session.

   Similarly, a node on the path may decide to act as the QNR and
   terminate the signaling.  This is needed in particular where the QoS
   NSLP signaling reaches a QNE after which there are no more nodes
   supporting the QoS NSLP before the receiver.  This situation cannot
   be detected by the NSLP on its own, and requires support from the
   NTLP.  It is easily detected and reported by the local NTLP if there
   are no further NSIS-aware nodes at all; if there are further
   NSIS-aware nodes which do not support the QoS-NSLP, this condition
   must be forwarded back to the last QNE as an error at the NTLP
   level.

   In the example shown in Figure 2, NQ is the 'last' node supporting
   the QoS-NSLP.  N1 can simply forward the RESERVE message (treating
   it as an NTLP message with an 'unknown' payload).  At N2 the next
   peer discovery fails, and it generates an error message which is
   passed in the reverse direction.

               NQ         N1         N2         D
               |          |          |          |
      RESERVE  |          |          |          |
     --------->|          |          |          |
               | RESERVE  |          |          |
               +--------->|          |          |
               |          | RESERVE  |          |
               |          +--------->|          |
               |          |          |          |
               |          |      NTLP Peer      |
               |          |      Discovery      |
               |          |       Failure       |
               |          |          |          |
               |          | NTLP Err |          |
               |          |<---------+          |
               | NTLP Err |          |          |
               |<---------+          |          |
               |          |          |          |

           Figure 2: Terminating a Sender Initiated Session

   In the corresponding receiver initiated scenario, the signaling will
   reach a node with no 'previous hop'.  If this is the sender then the
   signaling can be terminated immediately, the sender becoming the
   QNR.  In other cases the node has to determine whether it is indeed
   the last node on the path, or simply lacking the relevant NTLP path
   state.

   This situation may occur, for example, at the time of the initial
   reservation when reverse path state is available at the QNI (data
   flow receiver) because of a constrained topology (e.g. only one
   ingress router), but not further upstream.  Therefore, when the
   receiver initiates a reservation, it has a valid route to the
   upstream peer, but this peer has no valid state to forward the
   message further.  If the upstream peer wishes to propagate the
   signaling further, it has to generate a notification towards the QNI
   requesting an end to end (application layer) exchange which can
   trigger building the reverse path state.  The QNI would also have to
   tag RESERVE messages to indicate that this installation of reverse
   path forwarding state has already been attempted (or is not
   possible), in which case the reservation could in general be
   propagated except in constrained network topologies.
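
   As a non-normative illustration, a QNE might react to these NTLP
   indications along the lines of the following sketch; the error names
   used here are illustrative and are not defined by this
   specification.

      # Non-normative sketch: terminating the signaling when there is no
      # further QoS-NSLP-capable node towards the data receiver.
      def on_ntlp_downstream_error(error, session, become_qnr):
          if error == "no further NSIS-aware node":
              # The local NTLP's peer discovery failed outright.
              become_qnr(session)
          elif error == "no further QoS-NSLP peer":
              # NSIS-aware nodes exist downstream, but none supports the
              # QoS-NSLP; the condition was reported back at the NTLP
              # level, so this QNE terminates the signaling.
              become_qnr(session)
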
2.4 State Source and Generation Identification

   There are several circumstances where it is necessary for a QNE to
   identify the adjacent QNE peer which is the source of a signaling
   application message; for example, it may be to apply the policy that
   "state can only be modified by messages from the node that created
   it".

   We rely on the NTLP to provide this functionality.  By default, all
   outgoing QoS-NSLP messages are tagged in this way at the NTLP layer,
   and this is propagated to the next QNE, where it can be used as an
   opaque identifier for the state associated with the message; we call
   this the Source Identification Information (SII).  The SII is
   logically similar to the RSVP_HOP object of [3]; however, any IP
   (and possibly higher level) addressing information is not
   interpreted in the QoS-NSLP.  Indeed, the intermediate NTLP nodes
   could enforce topology hiding by masking the content of the SII
   (provided this is done in a stable way).

   Reservation messages contain a sequence number allocated by the peer
   QNE which is the source of the message.  Sequence numbers are
   allocated in ascending order and are unique within the context of a
   given SII.  Sequence numbers are used for a number of functions,
   including identification of the newest RESERVE message, simplifying
   the identification of RESERVE messages as refreshes, and tying a
   RESPONSE back to a RESERVE message.  Note that the sequence number
   may be different at different locations along the path; this is
   because intermediate nodes may generate reservation updates
   autonomously while the reservation is in place.

2.5 QoS-NSLP Message Routing

2.5.1 Default Operation

   The default mode of operation for the QoS-NSLP is that signaling
   messages are transported by the NTLP along the path currently taken
   by a data flow.  The necessary information is held in the flow-id,
   which is used as flow-routing information by the NTLP.  All that the
   QoS-NSLP needs to do is provide the flow-id and a tag to indicate
   whether the message should be sent upstream (towards the flow
   sender) or downstream (towards the flow receiver), and the NTLP will
   deliver it to the next QoS-NSLP-aware node or return an error
   condition (such as "no next node" or "no routing state available to
   reach next node").  Whether another message is sent beyond the next
   node is controlled by the QoS-NSLP at that node, and depends on the
   message in question.
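
   As a non-normative illustration of this default mode, the following
   sketch shows the kind of interface the QoS-NSLP could use towards
   the NTLP; the function name ntlp_send and the error strings are
   assumptions of this example, not part of any defined API.

      # Non-normative sketch of default QoS-NSLP message routing.
      UPSTREAM, DOWNSTREAM = "upstream", "downstream"

      def send_along_data_path(ntlp_send, message, flow_id, direction):
          """Hand a QoS-NSLP message to the NTLP for delivery to the
          next QoS-NSLP-aware node; only the flow-id and the direction
          relative to the data flow are supplied."""
          result = ntlp_send(nslp="QoS-NSLP", flow_id=flow_id,
                             direction=direction, payload=message)
          if result in ("no next node", "no routing state available"):
              # Error conditions returned by the NTLP (see Section
              # 2.3.2 for handling a missing downstream peer).
              return result
          return "delivered"
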
2.5.2 Explicit Routing

   There are circumstances where it is necessary to address state in an
   explicitly identified NSIS node, which is not necessarily on the
   path associated with a particular (active or inactive) flow.
   Examples are tearing down state on the 'old' path after a route
   change, and sending a message which is only meaningful in the
   context of a previous message (such as a refresh which does not
   include the full reservation data).

   The first case could be handled by depending on the timeout of soft
   state; this would lead to a temporary waste of resources in those
   areas.  The second could be handled by returning an error in the
   QoS-NSLP, since the message receiver can determine that the message
   sender is 'unknown' by checking the SII.  An alternative approach,
   an extension to support explicit routing for such messages, is
   described in Appendix A.

2.5.3 Skip-Stop Routing

   There may be cases where it is desirable to route a message directly
   to a QoS-NSLP peer (e.g. to send a confirmation directly to the
   initiator), rather than sending it via the NTLP.  The motivation for
   doing this is to reduce the burden on the NTLP (especially on
   NTLP-only intermediate nodes), both in message processing and in the
   necessity to maintain reverse-path routing state for the associated
   flow.

   The basic technique for doing this is that a subset of QNEs on the
   path directly store some QNE IP address within the QoS-NSLP.  These
   can send directly to their peer.  However, this has to be done in a
   way which does not invalidate the services provided by the NTLP
   (error handling, security, congestion control, routing reactions and
   so on).  There are several possible solutions to this problem; one
   is outlined in Appendix B.

2.6 Resource Description Objects

   Each of the QoS-NSLP message types can carry QoS descriptions in
   Resource Description Objects (RDOs).  The definitions of the
   possible RDO contents for each message together form a QoS Model.
   Multiple QoS Models can be defined for use with the QoS-NSLP; the
   general requirements on a QoS Model are discussed in Section 3.1,
   but which one is used does not affect the general operation of the
   QoS-NSLP itself.

   In order to allow some local selection of which QoS Model to use
   without destroying all end-to-end aspects of the signaling, the
   QoS-NSLP allows a kind of nesting of QoS Models by 'stacking' more
   than one RDO within a message.  The mechanism for the RESERVE
   message is as follows; similar rules can be defined for the other
   QoS-NSLP message types.  The QNI places an end-to-end RDO in the
   RESERVE message.  However, each QNF may add a further, so-called
   local, RDO in the RESERVE message that it forwards.  The RDO on top
   of the stack is the one currently valid, and the others need not be
   parsed.  This procedure allows mapping the QoS described in the
   end-to-end RDO onto local QoS paradigms.  For example, a bandwidth
   and application type in the end-to-end RDO may be mapped onto a
   discrete set of available bandwidths and a particular traffic class
   in the local RDO.  If a QNE does not understand the topmost (local)
   RDO, it generates an error RESPONSE which is sent backwards along
   the data path.  The error message must be terminated and recovered
   by the last QNE which added that RDO.
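
   This stacking behaviour can be illustrated with the following
   non-normative sketch; representing the RDO stack as a Python list,
   and the names of the helper functions, are assumptions of the
   example.

      # Non-normative sketch of RDO stacking at a QNF.
      def process_rdo_stack(rdo_stack, understands, map_to_local_rdo):
          """'rdo_stack' has the end-to-end RDO at the bottom and the
          currently valid RDO on top."""
          topmost = rdo_stack[-1]
          if not understands(topmost):
              # The error RESPONSE travels back along the data path and
              # must be recovered by the QNE which added this RDO.
              return "error RESPONSE"
          local = map_to_local_rdo(topmost)
          if local is not None:
              # Push a local RDO mapping the end-to-end QoS onto the
              # local QoS paradigm (e.g. a traffic class).
              rdo_stack = rdo_stack + [local]
          return rdo_stack
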
   In general, local RDOs must contain scoping information such as
   "valid only in a particular domain A".  In this case, all edge nodes
   of domain A must be QNEs and be configured to pop the topmost RDO on
   egress; they can then add a new one valid only on the inter-domain
   link, or fall back to the next lower one, which may or may not be
   the end-to-end RDO.  This is conceptually similar to the Aggregation
   Region concept of [9], where one option is that Deaggregators (which
   correspond to the last QNE in scope) are pre-configured to act in
   these roles.  In general, it is up to network management to make
   QNFs knowledgeable about what scope they are in and what QoS Models
   they should use.

3. External Interactions

3.1 QoS Models

   A QoS Model gathers together the set of ways of describing packet
   flow behavior (as RDOs), and describes how these RDOs can be used in
   particular QoS-NSLP transactions.  The scope of a QoS model
   encompasses the tspec/rspec/adspec concepts of RSVP [3]; however,
   the way they describe QoS can be different, and the QoS-NSLP
   specification leaves the details of how the RDOs in each message
   should be processed to the QoS Model description.  For example,
   basic RSVP/IntServ [5] functionality would include the fact that
   RESERVE messages should contain an 'rspec' and QUERY messages an
   'rspec' and optionally an 'adspec', and would also describe the
   parametrisation of those contents.

   More generalised resource reservation signaling patterns are
   expected to be encoded in the QoS Model description, without
   changing the basic operation of the QoS-NSLP itself.  For example,
   it may indicate that a partial reservation, successful only at some
   nodes, would be acceptable.  In order to reduce the number of
   reservation message exchanges, the RDO in a RESERVE might also
   include a resource range, containing an upper and a lower bound.
   QNEs would attempt to reserve the highest amount of resources below
   the maximum and update the amount accordingly; the QoS Model would
   also define the possible contents of the RESPONSE if the maximum
   could not be reserved.

   Another possibility is advanced reservations, including a 2-stage
   reserve/commit mechanism.  The intention here is that a QNI can
   request a firm guarantee that resources will be available at some
   future time, without actually needing the physical resources to be
   set aside at the time the request is made.  Since this process does
   actually involve manipulating state in the other QNEs, it can be
   described within a QoS Model as a particular type of reservation,
   where the initial RDO is processed by admission and policy control
   but not the packet scheduler.  'Activation' of the reservation takes
   place simply by sending another RESERVE with an updated RDO.

   It is not clear whether there is a need for a standardised 'default'
   QoS Model that can be interpreted by all QNEs.  This would naturally
   be used for the end-to-end RDOs which are handled by the QNI and
   QNR.  Some applications may wish (and be able) to detail a token
   bucket, peak rate and so on, whereas others just provide the
   information "this is a delay-sensitive flow of a certain average
   bandwidth".  The default QoS Model could provide fields for all
   these values, although not all of them need to be filled in by the
   application.  It would seem that such a default model would be most
   useful at the network 'edge', and quite possibly inappropriate for
   core network or inter-provider use.
   In addition, there would seem to be a need for some common elements
   to all QoS Models (such as the value 'No QoS' to be used when
   tearing down a reservation or reporting complete resource
   unavailability).

   A local QoS Model, in contrast, is specific to the QoS mechanism
   used in a particular scoping region.  For example, its RDOs may
   include full RSVP flowspec information.  For some popular QoS
   mechanisms, an official standard may exist for the local QoS Model.
   However, private ('enterprise') QoS Models could also exist, where
   all QNEs in a scoping region have to be configured specifically to
   understand them.  It is also possible for a QNI to include
   definitions from a local QoS Model as well as a default one, for
   example in the case of a mobile node describing desired L2
   properties on the air interface.

   An IANA registry of well-known QoS Models would be required.

   It is currently an open question whether policy related information,
   such as accounting and charging information or authorization tokens,
   should be included as part of the QoS Model, or whether it should be
   defined separately.  There is clearly coupling between the two types
   of information, since it is only possible to make a meaningful
   policy decision if the QoS description is understood; however, the
   way the policy information is given may itself be independent of
   that.  This question depends on how AAA issues in general are to be
   handled in the QoS-NSLP [13][17].

3.2 Implications for NTLP Functionality

   The QoS-NSLP requires that some particular features be available in
   the NTLP messages or be provided as triggers by the NTLP module
   (either locally or in a distributed fashion).

   The features currently suggested in this document include:

   o  Last node detection: This trigger is provided by the NTLP to the
      QoS-NSLP when it determines that this is the last NE on the path
      which supports the QoS-NSLP.  It requires the NTLP to have an
      error message indicating that no more NSLPs of a particular type
      are available on the path.  (See also Section 2.3.2.)

   o  Rerouting detection: This trigger is provided when the NTLP
      detects that the route taken by a flow (which the QoS-NSLP has
      issued signaling messages for) has changed.

   o  Reverse path state established: This trigger is provided at the
      QNI for receiver initiated reservations when the NTLP detects
      that a path-establishing message has been received from the flow
      sender.  This trigger is needed only when the path-establishing
      message is not itself a QoS-NSLP message (such as a QUERY).  (See
      also Section 2.3.1.)

   o  Source Identification Information (SII): This information needs
      to be provided at the NTLP level to identify the QNE which a
      particular message came from.  It needs to be stable across
      rerouting events which do not change QoS-NSLP adjacencies (but
      might, for example, change outgoing interfaces).  (See also
      Section 2.4.)

   o  Explicit Routing: This facility might be needed to send messages
      explicitly to a known QNE, in which case the NTLP has to provide
      topologically correct routing information in a form which can be
      cached by the QoS-NSLP.  (See also Appendix A.)

   o  Stateless Forwarding: This facility might be provided to enable
      the NTLP to operate in a mode where reverse path state is not
      kept in some region, analogous to the processing in [9].
      Doing this robustly appears to be possible with a pair of
      additional flags at the NTLP level.  (See the description in
      Appendix B.)

   o  Path length determination: In order to count the number of
      QoS-NSLP-aware/unaware hops, support from the NTLP is needed to
      provide the IP hop count between adjacent QNEs.  This could be
      provided by the NTLP by default, if such a facility is felt to be
      generally useful.

   Although these requirements are identified here in the context of a
   Quality of Service NSLP, some of them may also be applicable to
   other NSLP types.  Also, it should be noted that the QoS-NSLP makes
   fairly cautious assumptions about the level of transport service
   provided by the NTLP, for example regarding re-ordering protection
   across multiple hops or over route changes.  A more sophisticated
   NTLP might allow us to remove QoS-NSLP functionality such as
   sequence numbering, or tagging state with source identification.
   However, it is not clear how generic across other NSLPs such
   functionality can be made.

4. Outline Functional Specification

4.1 QoS NSLP Messages

   The QoS-NSLP defines five message types:

   o  RESERVE

   o  RESPONSE

   o  QUERY

   o  QUERY-REPLY

   o  NOTIFY

   The first two are idempotent (the resultant state in traffic control
   is the same however many times an identical message is repeated),
   and the remainder do not change traffic control state at all
   (although NOTIFY might indirectly trigger a state-changing action in
   the receiver).

   Messages are normally passed from the NSLP to the NTLP via an API
   which also specifies the signaling application (as QoS-NSLP), the
   flow/session identifier, and an indication of the intended
   destination, which is one of 'next hop' or 'previous hop'.  On
   reception, the NTLP provides the same information to the QoS-NSLP
   along with the source identification.  Possible additional features
   are described in Appendix A and Appendix B.

   The rest of the message is opaque to the NTLP.  QoS-NSLP messages
   have a common header providing protocol version, message type, and
   flags associated with protocol extensibility issues (rules about
   ignoring or rejecting unknown messages); these details are not
   described here.

4.1.1 RESERVE Message

   RESERVE messages are idempotent.  They manipulate reservation state
   in QNEs by creating, refreshing, modifying and deleting it.  Each
   RESERVE message triggers three actions:

   o  Install, refresh, modify or delete reservation state

   o  Possibly send a RESPONSE

   o  Possibly create a modified message (e.g. adding or removing
      objects) and pass it to the NTLP

   The RESERVE message format can be summarised as follows:

      <RESERVE> ::= <COMMON_HEADER>
                    <RESERVATION_SEQUENCE_NUMBER>
                    <RDO>
                    <reservation lifetime>
                    [<RESPONSE_REQUESTED>]
                    [<additional objects>]

4.1.1.1 Reserving resources

   To reserve resources, the QNI sends a RESERVE message.  A RESERVE
   message with an unknown flow/session ID creates reservation state at
   each QNE, which is used by the local resource management function to
   grant and configure resources as described in Section 1.2.  A
   RESERVE message carries at least one RDO describing the QoS desired
   and a lifetime for it.  QNEs may add local RDOs, providing nesting
   of QoS information as described in Section 2.6.

   A RESERVE message carries a sequence number, which is increased for
   each new RESERVE message pertinent to the same flow/session ID.
   This allows the latest requested state to be identified.
   Optionally, it may contain an object to control whether a local
   and/or end-to-end RESPONSE should be generated.  A RESERVE message
   may additionally carry other local or global objects, such as
   accounting and charging information and so on.
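
   As a non-normative illustration, the originator of a new reservation
   request might manage the sequence number as sketched below; the
   message representation and field names are assumptions of this
   example.

      # Non-normative sketch: building RESERVE messages with a
      # per-flow/session sequence number.
      _last_seq = {}   # flow/session ID -> last sequence number used

      def build_reserve(flow_session_id, rdo, lifetime,
                        request_response=False):
          seq = _last_seq.get(flow_session_id, 0) + 1
          _last_seq[flow_session_id] = seq      # strictly increasing
          message = {
              "type": "RESERVE",
              "sequence_number": seq,
              "rdo_stack": [rdo],               # end-to-end RDO at bottom
              "lifetime": lifetime,             # reservation lifetime
          }
          if request_response:
              message["response_requested"] = True
          return message
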
4.1.1.2 Refreshing

   In general, the NSIS protocol suite takes a soft state approach to
   managing reservation state in NEs.  Note that although both the NTLP
   and the QoS-NSLP have soft state, it is managed independently to
   avoid interlayer coupling.

   For the NSLP, the state is created by the RESERVE message and must
   be periodically refreshed.  Reservation state is deleted if no new
   RESERVE messages arrive before the expiration of the "reservation
   lifetime" interval specified as part of the reservation state.
   State can also be deleted by the explicit teardown described in
   Section 4.1.1.3.  At the expiration of a "refresh timeout" period,
   each QNE independently scans its state and sends a corresponding
   refreshing RESERVE message to the next QNE peer, where it is
   absorbed.  This peer-to-peer refreshing (as opposed to the QNI
   initiating a refresh which travels all the way to the QNR) allows
   QNEs to choose refresh intervals as appropriate for their
   environment.  For example, it is conceivable that refresh intervals
   in the backbone, where reservations are relatively stable, are much
   larger than in an access network.  The "refresh timeout" is
   calculated within the QNE and is not part of the protocol; however,
   it must be chosen to be compatible with the reservation lifetime
   that is advertised, and with an assessment of the reliability of
   message delivery.  The details of timer management and timer changes
   (slew handling and so on) should be similar to those of RSVP [3].

   As well as a 'traditional' soft-refresh (simply repeating the
   original messages), a summary refresh can be sent if a RESPONSE has
   been received for the reservation.  This is achieved by sending a
   RESERVE message containing only a pointer to the corresponding
   reservation (the flow/session ID and sequence number of the
   RESPONSE), not the reservation information itself, in order to speed
   up processing [7].

   Reservations can be refreshed "as is".  If reservation state needs
   to be modified, a RESERVE message containing explicit QoS
   information (new RDOs) is sent, with a strictly higher
   RESERVATION_SEQUENCE_NUMBER.  If, for example because of route
   changes, a QNE receives a refreshing RESERVE message containing only
   a pointer which it does not understand (e.g. from an unknown
   source), it replies with an error RESPONSE message back to the
   originating (peer) QNE.  That QNE then sends a full updated RESERVE
   message including the corresponding RDOs.  In this way, failures can
   be repaired quickly and locally.  This is another advantage of
   peer-to-peer refreshing.

   Regarding RFC 2961-style bundling of refresh messages, there are two
   design options.  Either a QNE may bundle refresh messages before
   handing them down to the NTLP, or the NTLP is solely responsible for
   bundling.  The advantage of the former is that there is only one
   NSLP header per bundle.  On the other hand, the NTLP is best placed
   to do bundling efficiently, because it knows more about path
   properties (e.g. MTU, packetisation, latency) and whether messages
   should even follow the same path at the NTLP level.  We therefore
   opt for the latter.  The NTLP may decide to bundle this bundle with
   refresh messages from other NSLPs and/or to synchronize and
   piggyback its own refreshes with QoS-NSLP refresh messages in order
   to save overhead.  Details of this, however, are clearly out of
   scope of this document.
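
   A non-normative sketch of the two timers involved at each QNE is
   given below; the class layout and the use of a monotonic clock are
   assumptions of the example.

      # Non-normative sketch of per-QNE soft-state timers.
      import time

      class ReservationSoftState:
          def __init__(self, lifetime, refresh_timeout):
              # The locally chosen refresh timeout must be compatible
              # with the advertised reservation lifetime.
              assert refresh_timeout < lifetime
              self.lifetime = lifetime
              self.refresh_timeout = refresh_timeout
              now = time.monotonic()
              self.last_reserve_received = now
              self.last_refresh_sent = now

          def on_reserve(self):
              self.last_reserve_received = time.monotonic()

          def expired(self):
              # Delete state if no RESERVE arrived within the lifetime.
              return (time.monotonic() - self.last_reserve_received
                      > self.lifetime)

          def refresh_due(self):
              # Time to send a refreshing RESERVE to the next QNE peer.
              return (time.monotonic() - self.last_refresh_sent
                      > self.refresh_timeout)
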
4.1.1.3 Teardown

   A RESERVE message with a 'zero' RDO removes reservation state for
   the corresponding flow/session ID immediately.  Although, because of
   soft state, it is not necessary to explicitly tear down an old
   reservation, we recommend that QNIs send a teardown request as soon
   as a reservation is no longer needed.  A teardown deletes
   reservation state and travels towards the QNR from its point of
   initiation.  A teardown message may be initiated either by an
   application in a QNI or by a QNF along the route as the result of a
   state timeout or service preemption.  Once initiated, a teardown
   message must be forwarded from QNE peer to QNE peer without delay.

4.1.2 RESPONSE Message

   RESPONSE messages are any messages sent in reply to a RESERVE
   message.  They are idempotent.  Their semantics include error
   reports, simple acknowledgements, and so on.  RESPONSE messages may
   be sent with a scope of a single QoS-NSLP 'hop' or be sent further
   along the path towards a QNE which explicitly requested one.  By
   default, they are sent within the NTLP and may require
   reverse-routing state to exist (in the case of a sender initiated
   reservation).

   The RESPONSE message format can be summarised as:

      <RESPONSE> ::= <COMMON_HEADER>
                     <RESERVATION_SEQUENCE_NUMBER>
                     <error/confirmation status>
                     [<RDO>]
                     [<confirmation request>]
                     [<additional objects>]

4.1.2.1 Error

   An error RESPONSE message indicates that a reservation has failed.
   It includes the sequence number of the failed RESERVE message.  In
   addition, the SII of the QNE where the RESERVE failed is provided by
   the NTLP.  It is interpreted first by the QNE which sent the
   RESERVE, which must either attempt corrective action, or tear the
   reservation down and propagate the error condition further
   backwards.

4.1.2.2 Confirmation

   To request a confirmation for a reservation request, a QNE includes
   in the RESERVE message a confirmation-request object containing an
   identifier supplied by the QNE.  If a confirmation-request has
   already been added by another QNE, a second one need not be added,
   since this QNE will see the RESPONSE anyway.

   The RESPONSE (whether an error or success indication) echoes back
   the confirmation-request object.  If a QNE receives a RESPONSE
   containing a confirmation-request object which it did not add
   itself, then it MUST forward the RESPONSE until it reaches the QNE
   which provided the original request (matched by the identifier).

   A confirmation must be issued when the reservation installed does
   not match the reservation requested, i.e. when - within a range
   possibly provided in the RESERVE - fewer resources have been
   reserved starting from a particular QNF_i towards the QNR.  This
   allows the QNI to issue a new RESERVE in order to adapt resources up
   to QNF_i.

   The confirm RESPONSE message may be sent by the QNR simply
   confirming that the RESERVE was successful as requested, or it may
   be issued by a QNF to confirm that it modified a reservation
   (partial reservation or - within the bandwidth range - decreased
   reservation).  The confirm message contains the sequence number of
   the RESERVE it is in response to.
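
   As a non-normative illustration, the confirmation handling at a QNE
   might look like the following sketch; the message representation and
   helper names are assumptions of this example.

      # Non-normative sketch: forwarding a RESPONSE which echoes a
      # confirmation-request added by another QNE.
      my_confirmation_ids = set()   # identifiers this QNE supplied

      def handle_response(response, deliver_locally, forward_response):
          conf_id = response.get("confirmation_request")
          if conf_id is None or conf_id in my_confirmation_ids:
              # No confirmation requested, or this QNE requested it:
              # consume the RESPONSE here.
              deliver_locally(response)
          else:
              # The confirmation-request was added by another QNE, so
              # the RESPONSE MUST be forwarded until it reaches the QNE
              # which provided the original request.
              forward_response(response)
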
4.1.3 QUERY and QUERY-REPLY Messages

   QUERY messages do not change the NSLP state at any of the nodes that
   process them (though, like any NSIS message, they may cause NTLP
   path state to be created or modified).  When the QUERY reaches the
   end of the path, the message is changed from a QUERY to a
   QUERY-REPLY and then sent back in the opposite direction.

   One application of the QUERY message is similar to the use of an
   AdSpec object in a PATH message for RSVP.  The QUERY message can
   carry parts specific to particular QoS Models, similar to the
   Guaranteed Service and Controlled-Load Service fragments of the RSVP
   ADSPEC object [5].  These will be specified in documents defining
   particular QoS Models.

   These may be used to determine what resources are present along the
   path (e.g. estimating the path bandwidth or minimum path latency).
   However, they do not guarantee the availability of resources for a
   subsequent RESERVE request.

   The QUERY message format is as follows (the QUERY-REPLY is
   essentially identical):

      <QUERY> ::= <COMMON_HEADER>
                  <QUERY_IDENTIFIER>
                  <QOS_NSLP_COUNT>
                  [<QOS_MODEL_QUERY>]

   The QUERY_IDENTIFIER is an unstructured numerical identifier for the
   query, used for matching responses.  The value of the identifier is
   otherwise not significant.

   The QOS_NSLP_COUNT is initially zero, and is incremented by one at
   each QNE on the path.  It counts the QoS-NSLP aware nodes along the
   path, and can be used (along with the total path length) to derive
   the number of non-QoS-NSLP hops, a generalisation of the way RSVP
   counts the number of non-IntServ hops.  It can also accumulate the
   IP hop length of the path, if this information is provided by the
   NTLP.

   A QOS_MODEL_QUERY object can be used to query information that is
   specific to the type of QoS Model being used.

4.1.4 NOTIFY Message

   NOTIFY messages only provide information to an NE; they do not cause
   a change in state directly themselves.

   The main difference between RESPONSE messages and NOTIFY messages is
   that RESPONSE messages are sent on receipt of a RESERVE message,
   whereas NOTIFY messages can be sent asynchronously, and in either
   direction relative to the RESERVE.

   The message may contain information relating to particular QoS
   Models.  This can be used to provide more information when the
   notification is due to a change in the reservation.

   The NOTIFY message format is as follows:

      <NOTIFY> ::= <COMMON_HEADER>
                   [<QOS_MODEL_NOTIFY>]

   The QOS_MODEL_NOTIFY object contains any QoS Model specific
   information that needs to be carried as part of the NOTIFY, e.g. the
   reservation now being used after a reservation change.

4.2 Rerouting and Local Repair

   The detection of rerouting can take place in multiple ways.

   It can be done at the NTLP (including by the NTLP interacting with
   routing protocols or by path length monitoring and so on, as
   described in [18]).  Rerouting detected by the NTLP may then be
   delivered as trigger information to the QoS-NSLP (at one or more
   locations along the signaling path).

   Rerouting can also be detected at the QoS-NSLP itself, if a RESERVE
   arrives refreshing existing state but coming over a new interface,
   or if a RESERVE claims to refresh state that does not exist at all.
   (Similar facts can be deduced from mis-delivered RESPONSE messages.)
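
   As a non-normative illustration, the QoS-NSLP-level checks just
   described might be implemented along the following lines; the state
   table layout is an assumption of the example.

      # Non-normative sketch: detecting a possible route change at the
      # QoS-NSLP from an incoming RESERVE.
      reservations = {}   # flow/session ID -> {"sii": ..., "rdo": ...}

      def reroute_suspected(flow_session_id, incoming_sii, is_refresh):
          state = reservations.get(flow_session_id)
          if is_refresh and state is None:
              return True    # refresh claimed for state we do not hold
          if state is not None and incoming_sii != state["sii"]:
              return True    # same reservation, unexpected upstream peer
          return False
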
   In either case, we assume for now that any necessary reverse-routing
   state already exists in the NTLP; actions to stimulate this state
   being set up will be considered in a later version of this document.

   In both cases, the QoS-NSLP needs to filter the event to avoid
   flapping a reservation in synchronization with a flapping route, and
   then carry out local repair actions to ensure that the reservation
   is set up on the new path and, if possible, torn down on the old.  A
   high-level outline of the necessary processing is as follows:

   1.  The QNE detecting the re-route issues a new RESERVE with a
       doubly-incremented sequence number, including a request to
       receive a confirmation from further down the path.  The RESERVE
       should be given the same SII as previous reservations for the
       same flow, to avoid disrupting reservations in the case where
       the next QNE on the path is actually the same.

   1a. At QNEs on the new part of the path, the RESERVE installs new
       reservation state, and is immediately propagated further towards
       the QNR.

   1b. At the QNE where the old and new paths merge, the QoS-NSLP
       should generate a RESPONSE which is returned to the QNE
       initiating the route change.  The state already existing at that
       QNE is re-labelled with the SII of the new QNE which requested
       it.

   At this stage, the reservation is essentially installed on the new
   path.  Further reservation messaging might take place to adjust the
   QoS parameters along the path if the new path has very different
   characteristics from the old; this takes place (if at all) as a
   background activity.

   2.  If explicit routing (Appendix A) is supported, the QNE detecting
       the route change can now issue another new RESERVE with a null
       QoS request (i.e. a teardown) and a lower sequence number than
       used in (1), and explicitly route it along the old path.

   2a. A QNE not on the new path will tear down the reservation state
       and forward the teardown further if it can.

   2b. At the QNE where the old and new paths merge, the teardown will
       be ignored, either because it has a lower sequence number than
       the newly installed reservation, or because it is attempting to
       remove state installed under a different SII.

   Note that, with the exception of the explicit routing, this method
   of re-routing support is purely an implementation issue at the QNE
   detecting the route change; it does not require any other
   rerouting-specific protocol features.

4.3 Mobility and Multihoming

   There are several circumstances where it is desirable to associate
   together two reservations with different flow-ids (typically,
   different addresses) but which are conceptually for the same packet
   stream.  One case is mobility with a change of address; a related
   example is multihoming, where a node sets up reservations for its
   flows on a new interface in preparation for handing them over.  A
   third case is call waiting.  In all cases, the wish is for resources
   to be shared (singly-booked) over the network region where the flows
   share a path.  This is comparable to a restricted use of the
   Shared-Explicit filter style of [3], in a non-multicast context.

   The NSIS protocol suite provides the session id for this purpose:
   resources are shared based on having a common session id, even
   though their flow ids are different.  A later version of this draft
   will discuss the modifications to the protocol to support this
   functionality, mainly in terms of what identifiers are used to match
   state and messages.  It should be noted that secure use of the
   session id is non-trivial; this problem is discussed in [16].
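
   As a non-normative illustration, sharing based on the session id
   might be organised as in the following sketch; treating the shared
   resource as a single numeric amount, and the data structures used,
   are assumptions of this example.

      # Non-normative sketch: resources shared ("singly-booked") between
      # flows carrying the same session id.
      session_flows = {}   # session id -> {flow id: requested resources}

      def additional_resources_needed(session_id, flow_id, requested):
          flows = session_flows.setdefault(session_id, {})
          before = max(flows.values(), default=0)
          flows[flow_id] = requested
          after = max(flows.values())
          # Only the increase over what the session already holds needs
          # to be reserved where the flows share a path.
          return max(0, after - before)
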
5. Example Message Flows

   A number of message flows (at the NSLP level) are shown here as
   examples of the QoS NSLP signaling process.

   Section 5.1 below shows a sender initiated NSLP signaling flow; the
   RESPONSE messages have been generated by the QNR because of the
   RESPONSE REQUESTED object inserted by the QNI.

5.1 Basic Sender/Receiver Initiated Example

       S          NF1        NF2         R
       |           |          |          |
       | RESERVE   |          |          |
       +--------->|           |          |
       |           | RESERVE  |          |
       |           +--------->|          |
       |           |          | RESERVE  |
       |           |          +--------->|
       |           |          |          |
       |           |          | RESPONSE |
       |           |          |<---------+
       |           | RESPONSE |          |
       |           |<---------+          |
       | RESPONSE  |          |          |
       |<---------+           |          |
       |           |          |          |
       |           |          |          |

   The receiver initiated case is essentially identical, with the
   difference that the leftmost node in Section 5.1 is now 'R' (the
   data receiver) and the rightmost node is now 'S' (the data sender).
   In order to perform the reservation, reverse path state needs to be
   installed.  Some discussion of how this can be done is given in
   Section 2.3.1.

5.2 Reservation Collision Example

   This example shows an exchange with a 'reservation collision'
   causing the new reservation to fail.  A reservation collision occurs
   when the two endpoints of a data flow (or signaling proxies on the
   data path but not at the flow endpoints) fail to agree on who should
   make the reservation for a flow, leading to a QNE seeing RESERVE
   messages from both directions.

   In this example (Figure 8), we assume that the QNE detecting the
   condition has adopted a 'sender wins' policy: the sender initiated
   reservation is accepted (or maintained), and the receiver initiated
   one is rejected (or torn down).  It is not clear whether this policy
   is reasonable; however, it does appear that some sort of default
   policy must be standardised.  A slight increase in sophistication
   would be to include information in a RESERVE about whether it should
   be preferred over reservations in the opposite direction.  Clearly,
   there are also interactions with AAA issues here.

      QNI1                                          QNR1
      QNR2                                          QNI2
       S          QNF1       QNF2       QNF3         R
       |           |          |          |           |
       | RESERVE1  |          |          |           |
       +--------->| RESERVE1  |          | RESERVE2  |
       |           +--------->| RESERVE2 |<---------+
       |           |          |<---------+           |
       |           |          |          |           |
       |           |          | Error2   |           |
       |           |          +--------->| Error2    |
       |           |          |          +--------->|
       |           |          | RESERVE1 |           |
       |           |          +--------->| RESERVE1  |
       |           |          |          +--------->|
       |           |          |          |           |

                  Figure 8: Reservation Collision

   QNF2 detects the collision on receipt of RESERVE2.  The reservation
   created by RESERVE2 is rejected with the appropriate error condition
   as RESERVE1 is propagated onwards towards the receiver.

   Any RESPONSE message processing for RESERVE1 is performed in the
   normal way.

5.3 Bidirectional Reservation Example

   A bidirectional reservation is actually a sender initiated
   reservation (for an outbound flow) and a receiver initiated
   reservation (for the corresponding inbound flow of a bi-directional
   flow) combined together and issued by a single QNI.
It is implemented 1141 through the use of NTLP bundling, with the NSLP providing the two 1142 reservations together to the NTLP as an indication that they should 1143 if possible be delivered together. 1145 The diagram below shows a bidirectional reservation. RESPONSE 1146 messages can be provided in the normal manner. 1148 A NF1 NF2 B 1149 | | | | 1150 | RESERVE | | | 1151 +-x-x-x-x->| | | 1152 | | RESERVE | | 1153 | +-x-x-x-x->| | 1154 | | | RESERVE | 1155 | | +-x-x-x-x->| 1156 | | | | 1158 ---> = Reservation for A->B direction 1159 xxx> = Reservation for B->A direction 1160 -x-> = Bundled reservation (A->B and B->A) 1162 The NTLP path state for reverse path routing from A to B (for the 1163 B->A flow) must be set up before the reservation can be performed, 1164 otherwise the receiver initiated half of the reservation will fail. 1166 If the routing is asymmetric then the reservation will be split into 1167 two where the paths diverge. The diagram below shows a network within 1168 an asymmetric route for a bidirectional flow between A and B. 1170 -----------> 1172 +----+ 1173 /--|QNF2|--\ 1174 / +----+ \ 1175 +-------+ +----+ / \ +----+ +-------+ 1176 |A (QNI)|--|QNF1|--+ +--|QNF4|--|B (QNR)| 1177 +-------+ +----+ \ / +----+ +-------+ 1178 \ +----+ / 1179 \--|QNF3|--/ 1180 +----+ 1182 <----------- 1184 The diagram below shows a bidirectional reservation across this 1185 network. RESPONSE messages can be provided in the normal manner. 1187 A NF1 NF2 NF3 NF4 B 1188 | | | | | | 1189 | RESERVE | | | | | 1190 +-x-x-x-x->| | | | | 1191 | | RESERVE | | | | 1192 | +--------->| | | | 1193 | +xxxxxxxxxxxxxxxxxxxx>| | | 1194 | | | RESERVE | | | 1195 | | +-------------------->| RESERVE | 1196 | | | | +--------->| 1197 | | | +xxxxxxxxx>| | 1198 | | | | +xxxxxxxxx>| 1199 | | | | | | 1200 | | | | | | 1202 ---> = Reservation for A->B direction 1203 xxx> = Reservation for B->A direction 1204 -x-> = Bundled reservation (A->B and B->A) 1206 At QNF1 the 'next hop' for the A to B flow is different to the 1207 'previous hop' for the B to A flow, so the reservation bundle must be 1208 split. It then operates as two separate reservations - one sender 1209 initiated, the other receiver initiated. 1211 Even if messages do not arrive bundled (as at QNF4 in the example), 1212 the QoS-NSLP is allowed to merge the state for the flow internally 1213 and use it to issue bundled refresh messages to neighboring QNEs. 1215 5.4 Tunnels and Aggregation 1217 QoS NSLP as defined above also allows dynamically aggregating 1218 reservations such that core network nodes are alleviated from keeping 1219 per-microflow state. Reservations for aggregate flows can be 1220 triggered by individual (per-microflow) reservations, or can be set 1221 up independently. The general advantages in terms of resource 1222 management, particularly in the context of DiffServ networks, are 1223 described in section 4.2.1 of [8]. 1225 Management must have configured QNEs, typically at the boundary of a 1226 domain, to act as aggregating and deaggregating QNEs. The 1227 configuration depends on the aggregation method being used. The two 1228 choices are: 1230 o Tunnel-based, where traffic is encapsulated in an IP tunnel (using 1231 GRE, IP-in-IP tunnel, IPsec, and so on). The aggregating QNE 1232 initiates the tunnel and chooses the endpoint as one of the 1233 deaggregating QNEs at the domain edge. 
1235 o DiffServ-based, where normal routing is used within the domain, 1236 but the aggregating QNE marks the aggregated traffic with an 1237 appropriate DSCP. 1239 5.4.1 Sender Initiated Tunnel Aggregation 1241 Here we describe a sender-initiated example for aggregate reservation 1242 set-up. With receiver-initiated aggregate reservations other issues 1243 may arise which need to be investigated in future versions of this 1244 draft, if it is felt that receiver orientation is useful for 1245 reservations in this context. 1247 Apart from the receiver/sender distinction, the method chosen here is 1248 conceptually similar to that of [6]. The tunnel is used as a single 1249 virtual link, in that the 'end-to-end' NSIS signaling for the data 1250 flows is tunneled between the same endpoints so as to be invisible to 1251 the routers between them, and a second signaling session is applied 1252 purely between the tunnel endpoints. 1254 The aggregating QNE (which will be the QNI for the aggregate 1255 reservation) does the following: 1257 o it tunnels 'forwards-path' NSIS messages referring to flows within 1258 the aggregate by adding an IP header addressing the deaggregating 1259 QNE. The aggregator also decapsulates 'reverse-path' NSIS messages 1260 tunneled from the deaggregator. 1262 o whether or not these signaling messages are part of the aggregate 1263 reservation or use a distinct tunnel encapsulation is up to 1264 management; using a distinct encapsulation prevents the signaling 1265 and traffic having to share resources. 1267 o it initiates a RESERVE towards the deaggregator describing 1268 resources to be reserved for the aggregate flow. The algorithm 1269 used to determine aggregate resources is a management and policy 1270 issue. They may e.g. exactly fit the resources needed currently, 1271 or - avoiding frequent reconfigurations - be based on an estimate 1272 of resources needed now and in the near future. Note that the 1273 aggregator will be able to see both directions of QoS-NSLP 1274 messages for all the flows within the aggregate, in particular 1275 RESERVE messages, and these can be used as the input to the 1276 calculation for the aggregate resource requirement. Therefore, 1277 this technique is applicable regardless of whether the end-to-end 1278 signaling is sender or receiver initiated (or indeed a mixture of 1279 the two). 1281 o depending on how aggregate flow and resources are described in the 1282 RESERVE, and depending on the local QoS mechanism, it tunnels data 1283 packets by appending an IP header fitting the aggregate flow ID 1284 and addressing the deaggregating QNE. 1286 The deaggregating QNE (aka QNR for the aggregate reservation) does 1287 the following: 1289 o it terminates the RESERVE for the aggregate and is the QNR for it. 1291 o it receives and decapsulates the tunneled data packets. 1293 o it receives and decapsulates tunneled QoS NSLP signaling packets 1294 and processes them just as any other signaling packet received in 1295 an ordinary fashion. If these are forwards path messages, the NTLP 1296 should be able to use them to install reverse routing state back 1297 up the virtual link (in exactly the same way it can install 1298 reverse routing state back up a real link), given that the other 1299 end of the link is also a QNE. 1301 QNFs on the data path between aggregating and deaggregating QNEs do 1302 not know they are processing an aggregate reservation. 
Therefore they 1303 don't need any special information, nor do they perform special 1304 packet treatment. Indeed, it is clear from the above descriptions 1305 that aggregations can be nested by just re-applying the above steps.

1307 5.4.2 Receiver Initiated DiffServ Aggregation

1309 An alternative aggregation method is based on the DiffServ 1310 architecture rather than relying on the use of tunnels. It has some 1311 similarities with the description of RSVP aggregation in [9]; in 1312 particular, we assume a 'simple' routing infrastructure where a 1313 shortest path that includes two points (the aggregator and 1314 deaggregator) is also the shortest path between those two points 1315 themselves. We also assume that all DiffServ marked traffic within 1316 the region will be included in aggregate reservations.

1318 The following description depends on the skip-stop routing extension 1319 described in Appendix B. The combination of the stateless mode of the 1320 NTLP and the storage of the ingress QNE identifier in the QoS-NSLP 1321 messages corresponds to the use of the special RSVP-E2E-IGNORE 1322 protocol number in [9]: state is (eventually) not stored in the 1323 interior of the network, and the egress QNE learns the address of the 1324 ingress QNE from the signaling messages. The method works as follows:

1326 o End-to-end QoS-NSLP messages for the individual flows are sent 1327 using skip-stop routing. They must not be interpreted by the 1328 QoS-NSLP within the network; this could be done by giving them a 1329 different QoS Model from that used in the network interior.

1331 o The egress QNE can determine which flows come from which ingress 1332 QNE, and also track the reservation requests for those flows. This 1333 allows it to build up a picture of what aggregate reservations are 1334 needed between it and each ingress QNE. The algorithm that assigns 1335 flows to aggregates (DSCPs) is the responsibility of network 1336 management.

1338 o The egress QNE requests the ingress QNE for an aggregate to set up 1339 reverse path state in the network. The request can be sent as a 1340 special NOTIFY message sent outside the NTLP; the state can be set 1341 up with a QUERY sent from ingress to egress via the NTLP. (This is 1342 where the assumption of simple shortest path routing is 1343 used.)

1345 o Once the reverse path state is available, the egress QNE sets up a 1346 receiver initiated reservation for the DSCP along that path. Note 1347 that the classifier will be purely the DSCP, on the assumption 1348 that on any interface, all the traffic for any DSCP will be 1349 covered by some reservation (it doesn't matter which). This 1350 reservation can be maintained and modified by the egress QNE as it 1351 tracks the flow ingress point (and possibly DSCP) as derived from 1352 the end-to-end signaling.

1354 5.5 Layered Reservations

1356 The combination of end-to-end and local RDOs together with 1357 reservation aggregation as described in the last section can be used 1358 to perform layered reservations in the style described in [19] and 1359 [20]. In particular, in [20] a framework (RMD) is proposed for 1360 resource management and reservation in DiffServ networks. The RMD 1361 proposes using two protocols, a Per Hop Reservation (PHR) protocol, 1362 and a Per Domain Reservation (PDR) protocol.

1364 According to [20], "The PHR protocol is used within a DiffServ domain 1365 on a per-hop basis to augment the DiffServ Per Hop Behavior (PHB) 1366 with resource reservation.
It is implemented in all nodes in a 1367 DiffServ domain. On the other hand, the PDR protocol manages the 1368 resource reservation per DiffServ domain, relying on the PHR resource 1369 reservation status in all nodes. The PDR is only implemented at the 1370 boundary of a domain (at the edge nodes)."

1372 In [19], this framework is complemented by an end-to-end signaling 1373 protocol, which transports per-flow QoS information to the edge 1374 nodes. This end-to-end protocol is invisible inside the DiffServ 1375 domain.

1377 The tasks of the PDR notably include the mapping of end-to-end signaled 1378 parameters onto domain specific RMD parameters, specifically DSCPs. 1379 This information must be transmitted from the ingress to the egress 1380 QNF. Furthermore, [20] lists tasks such as admission control, 1381 resource reservation in edge nodes, and congestion handling (refusing 1382 admission of new flows). However, we believe that all of the latter 1383 are not protocol features but functionalities of the edge nodes 1384 (using information received via the protocols) with which we do not 1385 deal in this ID. They correspond to external interactions with the 1386 components shown in Figure 1.

1388 There are currently two flavors of PHR. One flavor is 1389 reservation-based PHR. Here, the PHR protocol transports aggregate 1390 per-PHB resource requirements to each interior node. These nodes 1391 install corresponding reservation state. The other flavor is 1392 measurement-based PHR. Here, each interior node measures current 1393 load, and determines, based on these measurements, whether a new 1394 resource request arriving via the PHR can be accommodated. The advantage 1395 of the latter is that interior nodes do not need to store reservation 1396 state. Furthermore, PHR issues congestion control notifications. As 1397 with PDR, we believe further PHR features such as per-interior node 1398 admission control etc. are functionalities of the interior nodes, 1399 independent of the protocol.

1401 Following the discussion in Section 5.4 above, it is straightforward 1402 to implement the layered RMD signaling using QoS-NSLP. The edge nodes 1403 are (de)aggregating QNEs. They aggregate and tunnel the end-to-end 1404 (per-microflow) signaling. PDR signaling functionality is achieved by 1405 either stacking a local RDO onto end-to-end signaling messages, 1406 informing the deaggregating QNE about the DSCP mapping, or by the 1407 aggregating QNE initiating an extra RESERVE towards the deaggregating 1408 QNE which is tunneled through the aggregate region. PHR signaling 1409 functionality is achieved by signaling for the aggregate initiated by 1410 the aggregating QNE. Reservation-based PHR signaling is equivalent to 1411 simply sending a RESERVE for each PHB, which installs reservation 1412 state at each QNF in the aggregation region. Measurement-based PHR 1413 always (also in [19] and [20]) depends on special configurations of 1414 the interior nodes - they are stateless and can measure their traffic 1415 load. Measurement-based PHR functionality can be realized by sending 1416 a QUERY (querying whether sufficient resources are available). For 1417 processing the QUERY, QNFs in this case do not consult their 1418 reservation-state database as they would normally, but perform 1419 traffic load measurements. However, from a protocol perspective, this 1420 is conformant message processing.
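As a non-normative illustration of this behaviour, the following Python sketch shows how a stateless QNF might answer such a QUERY from a traffic load measurement rather than from a reservation-state database; the capacity figure, threshold and function names are all invented for the example.

   # Illustrative sketch only: admission check for a QUERY based on a
   # measured load instead of stored reservation state.
   LINK_CAPACITY = 100.0          # assumed capacity, arbitrary units
   ADMISSION_THRESHOLD = 0.9      # admit while load stays below 90%

   def measure_current_load():
       # Placeholder for a real per-PHB traffic measurement.
       return 72.5

   def process_query(requested_bw):
       """Return True if the requested bandwidth could be accommodated."""
       headroom = LINK_CAPACITY * ADMISSION_THRESHOLD - measure_current_load()
       return requested_bw <= headroom

   process_query(10.0)   # True:  10 <= 90 - 72.5
   process_query(30.0)   # False: 30 >  17.5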
1422 We do not discuss congestion handling in this version of the ID as it 1423 is still debated whether this functionality resides in the NSLP or the NTLP.

1425 6. Open Issues

1427 This section summarises some of the open issues that have arisen 1428 during the preparation of this Internet Draft. Needless to say, 1429 almost all of the proposals and assumptions made here can be 1430 questioned and alternatives proposed; we list here only the ones we 1431 have had most enjoyment and mental stimulation from discussing.

1433 1. Do we need to have a standardised, well known, mandatory QoS 1434 Model?

1436 2. Are the mechanisms for adapting to local QoS Models the most 1437 appropriate and useful? Do we need to include protocol support 1438 for discovering and agreeing these models?

1440 3. Do we need explicit routing (Appendix A) or skip-stop routing 1441 (Appendix B), and should it be possible to extend the latter to 1442 multiple hierarchical levels?

1444 4. How should message scoping be handled? Is it purely a matter of 1445 network management, or is some protocol support for it necessary 1446 or useful?

1448 5. Is the messaging to set up reverse routing state (for the 1449 receiver initiated case) something that should be built into 1450 each application, or should there be out-of-NTLP QoS-NSLP 1451 messages to enable this?

1453 6. Is 'sender wins' an appropriate default policy to handle the 1454 reservation collision problem?

1456 7. Is it interesting for the QoS-NSLP to know the overall path 1457 length and how many nodes on it are QoS-NSLP aware?

1459 8. How should the mobility/multihoming details look? In particular, 1460 how are session id and flow id matching rules modified?

1462 9. Is it possible to send more than one RESPONSE for a RESERVE, 1463 e.g. to handle an error condition discovered after the original 1464 RESPONSE?

1466 10. In particular, how should pre-emption within the network be 1467 signaled back towards the initiator of the reservation - by a 1468 modified RESPONSE or a NOTIFY?

1470 11. Is there a notion of a QoS-NSLP proxy? Or are all QoS-NSLP nodes 1471 effectively proxies anyway?

1473 12. Should we consider packet classifiers which are more (or less) 1474 granular than the flow id, and what effect does this have on the 1475 state matching rules (i.e. is the relevant matching really 1476 against packet classifier rather than flow id)? What should be 1477 done about overlapping packet classifiers in this case?

1479 13. How are AAA issues really handled?

1481 7. Security Considerations

1483 To evaluate the security of the NSLP layer, some assumptions regarding 1484 the security mechanism provided at the NTLP layer have to be taken 1485 into consideration.

1487 To address the security threats described in Sections 2.1, 2.3, 2.5 and 1488 2.6 of [15] it is assumed that an authentication and key exchange 1489 protocol is used to establish a security association between 1490 neighboring NTLP peers. Between neighboring administrative domains it 1491 is very likely that both peers are also NSLP nodes. By choosing an 1492 authentication and key exchange protocol which is resistant to denial 1493 of service attacks and man-in-the-middle attacks, and which provides strong 1494 authentication, the described threats can be addressed. Details need 1495 to be analysed after choosing a specific authentication and key 1496 exchange protocol for the NTLP itself.

1498 As a result, NTLP messages, and therefore also NSLP messages, can be 1499 authenticated and protected for integrity, confidentiality and replay.
1500 Replay protection ensures that the threat described in Section 2.3 of 1501 [15] is prevented. Confidentiality protection prevents the threats 1502 described in Section 2.2 of [15]. This is necessary when additional 1503 policy objects need to be exchanged or to protect the session 1504 identifier (or other payloads used for the same reason) as described 1505 in [16].

1507 By fetching the authenticated identity used during the NTLP 1508 authentication it is possible to realize the two party authorization 1509 model described in Figure 1 of [17]. To realize either one of the two 1510 third party models shown in Figure 2 and Figure 3 of [17], additional 1511 security mechanisms are required at the NSLP to protect the 1512 authorization tokens and similar objects. This threat is also described in 1513 Section 2.4 of [15]. Non-repudiation seems to make sense only in 1514 combination with authorization. The corresponding threat is described 1515 in Section 2.7 of [15].

1517 Furthermore, it might be necessary to protect NSLP message payloads in 1518 an end-to-middle, middle-to-middle or end-to-end fashion (an example 1519 of end-to-end protection is given in [13]). It stands to reason that such 1520 non-peer-to-peer protection would be associated with the above-described 1521 three party authorization. A typical candidate for such 1522 protection is CMS [12] or a modified Policy Object [10]. By 'modified' 1523 we refer to enhanced, and possibly changed, 1524 functionality.

1526 Denial of service attacks (see Section 2.9 of [15]) are best 1527 prevented by separating protocol functions such as 1528 authentication and key exchange, signaling message delivery and 1529 discovery. This allows the functions of each 1530 protocol to be chosen in such a way that DoS attacks are prevented to 1531 the best possible extent. Some of these issues are therefore also 1532 applicable to the design of the NTLP - impacts can, however, also be 1533 seen at the NSLP. The authorization procedure itself is, to some 1534 extent, also vulnerable to DoS attacks since computing an 1535 authorization decision might require other entities (or even other 1536 networks) to be contacted. Specific authorization procedures therefore need 1537 to be evaluated carefully for the DoS 1538 vulnerabilities they introduce.

1540 The security implications introduced by the session identifier are 1541 discussed in [16] and are also applicable to this NSLP.

1543 The security property of network topology hiding is controversial: 1544 to some extent it improves security, but on the other 1545 hand it makes debugging more difficult. QUERY messages, messages 1546 performing a record route, etc. are affected. It is left for future 1547 discussion whether this feature should be introduced. Based on 1548 previous work it seems that it is fairly simple to introduce network 1549 topology hiding (from a technical point of view).

1551 The deployment threats addressed in Section 2.14 of [15] need to be 1552 addressed separately in the context of the [13] and [17] discussion.

1554 Once certain mechanisms have been selected, the issue of security 1555 parameter exchange and negotiation needs to be evaluated.

1557 This section has to be re-evaluated once the NTLP design is finished 1558 and agreement exists on the security mechanisms.

1560 8. Acknowledgements

1562 This draft draws significant inspiration from RSVP (RFC2205).

1564 The authors would like to express particular thanks to Henning 1565 Schulzrinne.
1567 This Internet Draft is built on previous discussions and inputs from 1568 Marcus Brunner, Jorge Cuellar, Jochen Eisl, Mehmet Ersue, Xiaoming 1569 Fu, Eleanor Hepworth, Holger Karl and Andreas Kassler. 1571 References 1573 [1] Brunner, M., "Requirements for Signaling Protocols", 1574 draft-ietf-nsis-req-08 (work in progress), June 2003. 1576 [2] Hancock, R., "Next Steps in Signaling: Framework", 1577 draft-ietf-nsis-fw-02 (work in progress), March 2003. 1579 [3] Braden, B., Zhang, L., Berson, S., Herzog, S. and S. Jamin, 1580 "Resource ReSerVation Protocol (RSVP) -- Version 1 Functional 1581 Specification", RFC 2205, September 1997. 1583 [4] Braden, B., Clark, D. and S. Shenker, "Integrated Services in 1584 the Internet Architecture: an Overview", RFC 1633, June 1994. 1586 [5] Wroclawski, J., "The Use of RSVP with IETF Integrated 1587 Services", RFC 2210, September 1997. 1589 [6] Terzis, A., Krawczyk, J., Wroclawski, J. and L. Zhang, "RSVP 1590 Operation Over IP Tunnels", RFC 2746, January 2000. 1592 [7] Berger, L., Gan, D., Swallow, G., Pan, P., Tommasi, F. and S. 1593 Molendini, "RSVP Refresh Overhead Reduction Extensions", RFC 1594 2961, April 2001. 1596 [8] Bernet, Y., Ford, P., Yavatkar, R., Baker, F., Zhang, L., 1597 Speer, M., Braden, R., Davie, B., Wroclawski, J. and E. 1598 Felstaine, "A Framework for Integrated Services Operation over 1599 Diffserv Networks", RFC 2998, November 2000. 1601 [9] Baker, F., Iturralde, C., Le Faucheur, F. and B. Davie, 1602 "Aggregation of RSVP for IPv4 and IPv6 Reservations", RFC 3175, 1603 September 2001. 1605 [10] Yadav, S., Yavatkar, R., Pabbati, R., Ford, P., Moore, T., 1606 Herzog, S. and R. Hess, "Identity Representation for RSVP", RFC 1607 3182, October 2001. 1609 [11] Awduche, D., Berger, L., Gan, D., Li, T., Srinivasan, V. and G. 1610 Swallow, "RSVP-TE: Extensions to RSVP for LSP Tunnels", RFC 1611 3209, December 2001. 1613 [12] Housley, R., "Cryptographic Message Syntax (CMS)", RFC 3369, 1614 August 2002. 1616 [13] Tschofenig, H., "NSIS Authentication, Authorization and 1617 Accounting Issues", draft-tschofenig-nsis-aaa-issues-01 (work 1618 in progress), March 2003. 1620 [14] Tschofenig, H., "RSVP Security Properties", 1621 draft-ietf-nsis-rsvp-sec-properties-01 (work in progress), 1622 March 2003. 1624 [15] Kroeselberg, D. and H. Tschofenig, "Security Threats for NSIS", 1625 draft-ietf-nsis-threats-01 (work in progress), January 2003. 1627 [16] Tschofenig, H., Schulzrinne, H., Hancock, R., McDonald, A. and 1628 X. Fu, "Security Implications of the Session Identifier", 1629 draft-tschofenig-nsis-sid-00 (work in progress), June 2003. 1631 [17] Tschofenig, H., Buechli, M., Van den Bosch, S. and H. 1632 Schulzrinne, "QoS NSLP Authorization Issues", 1633 draft-tschofenig-nsis-qos-authz-issues-00 (work in progress), 1634 June 2003. 1636 [18] Hancock, R., Hepworth, E. and A. McDonald, "Design 1637 Considerations for an NSIS Transport Layer Protocol", 1638 draft-mcdonald-nsis-ntlp-considerations-00 (work in progress), 1639 January 2003. 1641 [19] Westberg, L., "A Proposal for RSVPv2-NSLP", 1642 draft-westberg-proposal-for-rsvpv2-nslp-00 (work in progress), 1643 April 2003. 1645 [20] Westberg, L., "Resource Management in Diffserv (RMD) 1646 Framework", draft-westberg-rmd-framework-03 (work in progress), 1647 May 2003. 
1649 Authors' Addresses 1651 Andrew McDonald 1652 Siemens/Roke Manor Research 1653 Old Salisbury Lane 1654 Romsey, Hampshire SO51 0ZN 1655 United Kingdom 1657 EMail: andrew.mcdonald@roke.co.uk 1658 Robert Hancock 1659 Siemens/Roke Manor Research 1660 Old Salisbury Lane 1661 Romsey, Hampshire SO51 0ZN 1662 United Kingdom 1664 EMail: robert.hancock@roke.co.uk 1666 Hannes Tschofenig 1667 Siemens AG 1668 Otto-Hahn-Ring 6 1669 Munich 81739 1670 Germany 1672 EMail: hannes.tschofenig@siemens.com 1674 Cornelia Kappler 1675 Siemens AG 1676 Siemensdamm 62 1677 Berlin 13627 1678 Germany 1680 EMail: cornelia.kappler@siemens.com 1682 Appendix A. Explicit Routing 1684 There can be some cases where the QoS-NSLP interacts explicitly with 1685 the routing of messages by the NTLP. Examples are: 1687 1. To tear down state in a node which is no longer on the path 1688 because of a mobility or rerouting event. Left to its own 1689 devices, the NTLP would send the teardown to the new node (which 1690 is presumably not what is wanted). The QoS-NSLP has to ensure 1691 that the message has to go to a specific physical node. 1693 2. To send a message which depends on state previously established 1694 in the node, e.g. to send a message within a particular security 1695 association (see the 'next peer' problem of [14]), or to send a 1696 'summary' refresh referring to an existing acknowledged 1697 reservation without including the full reservation data. 1699 Both of these can be implemented by re-using the source 1700 identification information (SII) described in Section 2.4; the 1701 originator QoS-NSLP uses the SII provided in the messages in the 1702 reverse direction. 1704 1. The tear message includes the SII; the NTLP is instructed to 1705 route the message based on the SII, rather than the flow 1706 identification. (This would include the NTLP unwinding topology 1707 hiding processing at intermediate nodes.) Once the message has 1708 left the originating node along the correct 'old' path, it can 1709 probably be routed normally by the NTLP. 1711 2. The message is sent accompanied by the SII; the NTLP still routes 1712 the message on the flow identification, but as soon as it detects 1713 that the message would diverge from reaching the node given by 1714 the SII, it can generate a route change notification. (This might 1715 be done only at the receiving QoS-NSLP node.) 1717 Using the SII in this way is somewhat analogous to the Explicit 1718 Routing Object (ERO) of [11]. It requires additional capabilities in 1719 the NTLP to make message routing SII-aware (as opposed to just 1720 transporting the SII as an identification tag). Note that some 1721 aspects of SII support (described in Section 4.2) may prevent it 1722 always being topologically correct for explicit reverse routing; in 1723 that case, it may be preferable to use an independent object for 1724 reverse routing - this can be seen as yet another case of the 1725 desirability of separation of routing and identifiers. 1727 Appendix B. Skip-Stop Routing 1729 This is an extension whereby 'interior' NSIS nodes are relieved of 1730 storing some per-flow state (e.g. reverse path routing state) and 1731 processing some message types, by allowing the routing of messages 1732 directly to a QoS-NSLP node (e.g. to send a confirmation directly to 1733 the initiator), rather than sending only using NTLP. 
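Purely to illustrate the intent (and outside any real NTLP interface), the sketch below shows a receiving edge node returning a confirmation directly to an address advertised by the initiating edge, so that interior nodes never need to keep reverse routing state; the message format, address and port are invented for the example.

   # Illustrative sketch only: a RESPONSE sent directly to the
   # initiating edge, bypassing the hop-by-hop NTLP.  Because it is
   # sent outside the NTLP, none of the NTLP's security or NAT
   # traversal services apply (one of the difficulties noted below).
   import socket

   def send_direct_response(edge_addr, edge_port, session_id, status):
       msg = "RESPONSE session={0} status={1}".format(session_id, status)
       with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
           s.sendto(msg.encode(), (edge_addr, edge_port))

   # The initiating edge advertised 192.0.2.10 port 4000 in its
   # downstream message.
   send_direct_response("192.0.2.10", 4000, "session-42", "ok")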
1735 A difficulty of this approach is that the services provided by the NTLP (in 1736 terms of security, NAT traversal, and other message transport 1737 functions) are lost, because the NTLP only operates between adjacent 1738 on-path peers. Special consideration needs to be given to avoiding 1739 message loops. In addition, the approach should be robust against the 1740 non-existence of a QoS-NSLP-aware downstream node - if no such node 1741 exists, lower layer errors (including the fact that no such node 1742 exists) cannot be forwarded upstream.

1744 The following outlines one potential design supporting this 1745 functionality in a fairly robust way. It requires two additional 1746 NTLP-layer flags.

1748 1. An edge node prepared to shield downstream nodes from processing/ 1749 state storage requirements inserts addressing information for 1750 itself in downstream messages sent via the NTLP, and additionally 1751 sets a 'stateless proposal' (SP) flag. The sending edge node must 1752 be prepared to receive messages outside the NTLP (i.e. unsecured) 1753 at this address, and must also be able to route messages upstream 1754 itself (otherwise loops would form).

1756 2. Intermediate nodes (including NSLP-unaware ones) can clear the SP 1757 flag if they wish (e.g. if they are on an addressing or security 1758 boundary), but otherwise forward it. In the meantime, error 1759 messages can also be sent upstream.

1761 3. A node wishing to act as the 'receiving edge' echoes the value of 1762 SP in the message it sends upstream, and otherwise leaves it unset. 1763 If SP is set, the message may be sent directly to the sending 1764 edge node.

1766 4. When the sending edge node receives messages with SP set, it sets 1767 a 'stateless requested' (SR) flag in downstream messages. 1768 Intermediate nodes can use this as a signal to flush per-flow 1769 routing state. Reservation state is not deleted (typically, 1770 in circumstances where this technique is useful, only per-class 1771 reservation state is being stored anyway). If the sending edge 1772 node stops receiving SP after some timeout, it must clear SR on 1773 the messages it sends.

1775 5. An intermediate node that wishes to generate an upstream message 1776 - typically an error message - encapsulates this in a special 1777 payload and sends it downstream; it may also decide to clear SP. 1778 The receiving edge node can then send it back upstream.

1780 In order not to violate assumptions about reliability and congestion 1781 management being managed by the NTLP, only a subset of QoS-NSLP 1782 messages can be sent 'out of band' in this way, namely RESPONSE 1783 messages (clocked by the rate of reservation messages) or vice versa, 1784 and notifications.

1786 This functionality seems quite complex, but the saving in state also seems 1787 non-trivial. The consequent tradeoff should be carefully 1788 evaluated. Given that a significant amount of the complexity is 1789 caused by NTLP interactions (including the need to cope with error 1790 cases), it might be worth considering whether this functionality should be 1791 built into the NTLP itself.

1793 Full Copyright Statement

1795 Copyright (C) The Internet Society (2003). All Rights Reserved.
1797 This document and translations of it may be copied and furnished to 1798 others, and derivative works that comment on or otherwise explain it 1799 or assist in its implementation may be prepared, copied, published 1800 and distributed, in whole or in part, without restriction of any 1801 kind, provided that the above copyright notice and this paragraph are 1802 included on all such copies and derivative works. However, this 1803 document itself may not be modified in any way, such as by removing 1804 the copyright notice or references to the Internet Society or other 1805 Internet organizations, except as needed for the purpose of 1806 developing Internet standards in which case the procedures for 1807 copyrights defined in the Internet Standards process must be 1808 followed, or as required to translate it into languages other than 1809 English. 1811 The limited permissions granted above are perpetual and will not be 1812 revoked by the Internet Society or its successors or assignees. 1814 This document and the information contained herein is provided on an 1815 "AS IS" basis and THE INTERNET SOCIETY AND THE INTERNET ENGINEERING 1816 TASK FORCE DISCLAIMS ALL WARRANTIES, EXPRESS OR IMPLIED, INCLUDING 1817 BUT NOT LIMITED TO ANY WARRANTY THAT THE USE OF THE INFORMATION 1818 HEREIN WILL NOT INFRINGE ANY RIGHTS OR ANY IMPLIED WARRANTIES OF 1819 MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. 1821 Acknowledgement 1823 Funding for the RFC Editor function is currently provided by the 1824 Internet Society.