Internet-Draft                                                P. Vaananen
Expiration Date: September, 1998                Nokia Telecommunications
                                                             R. Ravikanth
                                                   Nokia Research Center

March, 1998

          Framework for Traffic Management in MPLS Networks

STATUS OF THIS MEMO

This document is an Internet-Draft. Internet-Drafts are working
documents of the Internet Engineering Task Force (IETF), its areas, and
its working groups. Note that other groups may also distribute working
documents as Internet-Drafts.

Internet-Drafts are draft documents valid for a maximum of six months
and may be updated, replaced or obsoleted by other documents at any
time. It is inappropriate to use Internet-Drafts as reference material
or to cite them other than as "work in progress".

To learn the current status of any Internet-Draft, please check the
"lid-abstracts.txt" listing contained in the Internet-Drafts Shadow
Directories on ftp.is.co.za (Africa), nic.nordu.net (Europe),
munnari.oz.au (Pacific Rim), ds.internic.net (US East Coast), or
ftp.isi.edu (US West Coast).

ABSTRACT

It has been recognised that the success of MPLS depends on the ability
to better support multiservice traffic integration with some level of
service guarantees, which is not feasible to implement with the current
packet forwarding paradigm based only on the destination prefix.

Efficient support for these services throughout the network is expected
to be possible using a label based forwarding paradigm.

However, the service categories and the enabling mechanisms to support
those categories are not well addressed in the current proposals to the
MPLS working group; the effort has mostly concentrated on the handling
of best effort traffic and the associated scalability and routing
issues.

The goal of this document is to define a framework for traffic
management in MPLS networks. We discuss the set of mechanisms that have
been proposed for enabling the implementation of services more advanced
than pure best-effort packet forwarding, and the impact of those
mechanisms on MPLS network environments and MPLS protocol
implementation.

The document describes the mechanisms and their application with the
intent of approaching, using MPLS, the level of traffic management
capability that is currently available in hybrid router/ATM or frame
relay networks.
The approach taken is that no modifications 60 are required in the end station protocol or application software in the 61 first phase of deployment, while this might be allowed later, if deemed 62 necessary. 64 This document concentrates on the issues from the public network 65 operators point of view, although most of the discussion applies as 66 well in the local network environments. 68 Concepts and mechanisms described in this document are based on the 69 previous work done in the subject on various working groups of IETF and 70 other standardisation bodies. It has been attempted to use applicable 71 concepts and terminology from previous work as much as possible. 73 This document concentrates on the MPLS specific issues, number of 74 related mechanisms and concepts are only briefly presented for sake of 75 completeness, and the other related work is referred, where applicable. 76 Reader is suggested to consult the referred material, in case he / she 77 wants to have more information on these areas. 79 ACKNOWLEDGEMENTS 81 The ideas presented in this in document have been based on the 82 information collected from a number of sources. 83 --- Individuals to be added --- 85 1. TABLE OF CONTENTS 86 STATUS OF THIS MEMO 87 ABSTRACT...............................................................1 88 ACKNOWLEDGEMENTS.......................................................2 89 1. TABLE OF CONTENTS..............................................2 90 2. INTRODUCTION...................................................5 91 3. SERVICE CATEGORIES.............................................5 92 3.1 BEST EFFORT SERVICES..........................................,6 93 3.1.1 Enhanced best effort service...................................6 94 3.1.2 Enhanced best effort service with bandwidth allocation.........6 95 3.1.3 Enhanced best effort services in MPLS environments.............6 96 3.2 DIFFERENTIATED SERVICES........................................7 97 3.2.1 Differentiated service.........................................7 98 3.2.2 Differentiated services with bandwidth allocations.............8 99 3.2.3 Differentiated services in MPLS environments...................8 100 3.3 GUARANTEED SERVICES............................................8 101 3.3.1 Services.......................................................9 102 3.3.2 Guaranteed services in MPLS environments.......................9 103 4. TRAFFIC MANAGEMENT REQUIREMENTS...............................10 104 4.1 SERVICE CATEGORY SUPPORT......................................10 105 4.2 ADMISSION CONTROL, MONITORING AND SECURITY....................11 106 4.3 CONGESTION MANAGEMENT.........................................11 107 4.4 SCALABILITY REQUIREMENTS......................................11 108 4.5 ROBUSTNESS AND RELIABILITY....................................12 109 4.6 TOPOLOGY SUPPORT..............................................13 110 4.7 TOPOLOGICAL SCOPE.............................................13 111 4.8 COMPATIBILITY.................................................14 112 4.9 EXTENSIBILITY.................................................14 113 5. 
CONTROL PLANE MECHANISMS FOR TRAFFIC MANAGEMENT FUNCTIONS.....15 114 5.1 TRIGGERS......................................................15 115 5.1.1 Configuration events..........................................16 116 5.1.2 Signaling events..............................................16 117 5.1.3 Topology changes..............................................16 118 5.1.4 Traffic pattern changes.......................................16 119 5.2 POLICY AND ADMISSION CONTROL..................................17 120 5.2.1 Routing policy................................................17 121 5.2.2 Classification policy.........................................17 122 5.2.3 Admission policy..............................................18 123 5.2.4 Admission control.............................................18 124 5.3 PATH SELECTION................................................19 125 5.4 ACCOUNTING....................................................19 126 5.5 USER AUTHENTICATION...........................................19 127 6. DATA PLANE MECHANISMS FOR TRAFFIC MANAGEMENT FUNCTIONS........20 128 6.1 LABEL FORWARDING PARADIGM.....................................20 129 6.2 CLASSIFICATION................................................20 130 6.2.1 What is classification and where it should be done............20 131 6.2.2 Flow Classification...........................................21 132 6.2.3 Packet Classification.........................................22 133 6.2.4 Classification results for differentiated services............23 134 6.2.5 Classification results for guaranteed services................23 135 6.2.6 Problems with non end-system classifications..................23 136 6.2.6.1 Classification in presence of IPSEC...........................23 137 6.2.6.2 Classification in presence of dynamic address assignment......24 138 6.2.6.3 Classification in presence of dynamic port numbers............24 139 6.2.7 Classification state maintenance..............................24 140 6.3 POLICING......................................................25 141 6.4 MAPPING.......................................................25 142 6.4.1 Direct mapping................................................26 143 6.4.2 Indirect mapping..............................................26 144 6.5 AGGREGATION, MERGING AND DEAGGREGATION........................26 145 6.5.1 Aggregation...................................................26 146 6.5.2 Merging.......................................................27 147 6.5.3 Aggregation and merging of traffic with service guarantees...27 148 6.5.4 Deaggregation.................................................28 149 6.6 QUEUING AND CONGESTION MANAGEMENT.............................28 150 6.6.1 Queue management..............................................28 151 6.6.2 Queuing principles............................................29 152 6.6.3 Congestion control............................................29 153 6.6.3.1 Passive congestion control schemes............................29 154 6.6.3.2 Active congestion control schemes.............................30 155 6.6.4 Packet scheduling.............................................31 156 6.7 TRAFFIC SHAPING...............................................31 157 6.8 LOAD SHARING..................................................32 158 7. LABEL SWITCHED PATH GRANULARITIES AND AGGREGATION.............32 159 8. 
LABEL SWITCHED PATH TOPOLOGIES AND ASSOCIATED TM PROCEDURES...33 160 8.1 POINT-TO-POINT................................................34 161 8.2 POINT-TO-MULTIPOINT...........................................34 162 8.3 MULTIPOINT-TO-POINT...........................................34 163 8.4 MULTIPOINT-TO-MULTIPOINT......................................35 164 8.5 MULTILEVEL PATHS..............................................35 165 9. NETWORK FUNCTIONAL PARTITIONING...............................37 166 9.1 NETWORK MODELS................................................37 167 9.2 NETWORK ELEMENT CATEGORIES....................................38 168 9.2.1 Hosts.........................................................38 169 9.2.1.1 Enhanced best effort services.................................38 170 9.2.1.2 Differentiated services.......................................38 171 9.2.1.3 Guaranteed services...........................................39 172 9.2.1.4 Participation in MPLS.........................................39 173 9.2.2 MPLS edge nodes...............................................40 174 9.2.2.1 Best effort services to customer..............................42 175 9.2.2.2 Differentiated services to customer...........................42 176 9.2.2.3 Guaranteed services to customer...............................43 177 9.2.2.4 MPLS to customer..............................................43 178 9.2.3 MPLS core node................................................44 179 9.3 INTERFACE CATEGORIES..........................................45 180 9.3.1 Interface to non-MPLS networks................................45 181 9.3.2 Interface inside MPLS network domains.........................45 182 9.3.3 Interface between MPLS network domains........................45 183 10. LSP MAPPINGS TO EXISTING LINK LAYER TECHNOLOGIES..............46 184 11. GENERAL REQUIREMENTS FOR LABEL ENCAPSULATIONS.................46 185 11.1 DIFFERENTIATED SERVICES SUPPORT...............................46 186 11.2 CONGESTION MANAGEMENT SUPPORT.................................47 187 11.2.1 Congestion indicator bit......................................47 188 11.2.2 Examine me bit................................................48 189 11.3 SUPPORT FOR MULTILEVEL LABEL SWITCHED PATHS...................48 190 12. GENERAL REQUIREMENTS FOR DISTRIBUTION OF LABELS AND TM ATTRs..48 191 12.1 SETUP REQUEST.................................................49 192 12.2 SETUP MODIFICATION............................................49 193 12.3 SETUP ACKNOWLEDGE.............................................49 194 12.4 SETUP REJECT..................................................49 195 12.5 DISCUSSION OF SIGNALING PROTOCOLS.............................50 196 12.5.1 General.......................................................50 197 12.5.2 LDP...........................................................50 198 12.5.3 RSVP..........................................................51 199 13. REFERENCES....................................................51 200 14. SECURITY CONSIDERATIONS.......................................54 201 15. AUTHOR'S ADDRESSES............................................58 203 2. INTRODUCTION 205 The ability of the network to support service level guarantees and 206 traffic engineering is becoming very important. This area has been, and 207 will remain as subject area addressed in various working groups of IETF 208 (e.g. 
INTSERV, RSVP, ISSLL, RAP, DIFFSERV, IPPM, QOSR), the IRTF (E2E), the
ATM Forum (TM), the Frame Relay Forum, ITU-T, and various other
organisations and user consortiums.

We build on the ideas and previous work done in these working groups,
and try to build a coherent set of capabilities around the label based
packet forwarding technology discussed in the MPLS working group of the
IETF, as described in the MPLS framework document [Callon97] and the
MPLS architecture document [Rosen97a].

The approach taken in this document is to look at the available pieces
and try to fit them onto the MPLS framework in a scalable fashion. This
document presents a requirements and implementation framework, in the
context of MPLS, for the services and capabilities that need to be
built. Possible mechanisms and deployment scenarios to actually achieve
these advanced services are also described.

The document takes an evolutionary rather than revolutionary approach:
we do not propose to change everything at once (and do not believe it
is possible), as previous attempts to do so have quite consistently
failed.

The focus is on two questions: what should be done so that the quality
of the network service perceived by the end user improves, and how to
maximise the usage of the network resources, while doing both in a
scalable and controlled manner.

We feel it is especially important that the deployment of the
technologies presented can be started on a small scale, and without
changes to the host communication and application protocols, while this
framework attempts to be flexible enough to accommodate such changes
when the technology matures and incremental deployment is determined to
be feasible and necessary.

We hope to evolve the technologies and protocols of MPLS towards
supporting the capabilities outlined in this document, but realise that
much more detailed discussion, research and specification work needs to
be done before the complete set of "wishes" can be accomplished.

3. SERVICE CATEGORIES

The advanced services requiring the use of traffic management
mechanisms can be broadly divided into three categories on the basis of
(i) the level of assurance on service guarantees that can be achieved
and (ii) the granularity of guarantees (simple to complex) that is
provided. This division is made here to support the discussion of the
related traffic management issues.

The characteristics of the different service categories are briefly
described in chapters 3.1 to 3.3.

3.1 Best effort services

3.1.1 Enhanced best effort service

The service remains similar to the current best effort service, but
with a higher service quality perceived by the end-user, regardless of
the applications used. Enhanced best effort service can be realised
without specific signalling protocols inside the network. This service
differs from "plain old best effort" because of the use of advanced
congestion control mechanisms. The purpose is to provide more
controlled and more fair behaviour during congestion periods.

Passive congestion control mechanisms based on packet drop policies,
such as random early detection [Floyd93], [Braden97], can be used; a
simplified sketch of such a drop policy is given below.
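The following sketch (in Python, for illustration only) shows the
essential drop decision of such a scheme. The averaged queue size and
the linear drop probability follow the general form of [Floyd93], but
the parameter values are invented examples and several details of the
full algorithm (such as counting packets since the last drop) are
omitted.

   # Illustrative sketch only: a random early detection (RED) style
   # drop decision in the spirit of [Floyd93].  Parameter values are
   # examples, not recommendations.
   import random

   MIN_TH = 5       # average queue length below which nothing is dropped
   MAX_TH = 15      # average queue length above which everything is dropped
   MAX_P  = 0.10    # maximum drop probability at MAX_TH
   WEIGHT = 0.002   # weight of the exponentially averaged queue size

   avg_queue = 0.0

   def red_drop(current_queue_len):
       """Return True if the arriving packet should be dropped."""
       global avg_queue
       # Exponentially weighted moving average of the queue size.
       avg_queue = (1 - WEIGHT) * avg_queue + WEIGHT * current_queue_len
       if avg_queue < MIN_TH:
           return False
       if avg_queue >= MAX_TH:
           return True
       # Drop probability grows linearly between the two thresholds.
       p = MAX_P * (avg_queue - MIN_TH) / (MAX_TH - MIN_TH)
       return random.random() < p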
In 273 addition to passive congestion control mechanisms, active congestion 274 control mechanisms based on congestion feedback and transport protocol 275 interactions have also been suggested [Ramakr97], [Packeteer97], 276 [Jagan97]. 278 This service can be implemented in any router with the support of 279 appropriate traffic management mechanisms. The use of label based 280 forwarding paradigm does add capabilities for the network operator 281 traffic engineering, such as better ways to control the path selection 282 for the traffic. 284 3.1.2 Enhanced best effort service with bandwidth allocation 286 The enhanced best effort service augmented with bandwidth allocation 287 capability allows an operator to optimise network capacity usage, and 288 manage bandwidth usage by allocating it to individual users, networks, 289 or any aggregated community as desired. 291 These services generally require a specific signalling protocol for 292 communication of the related traffic management attributes through the 293 network. 295 3.1.3 Enhanced best effort services in MPLS environments 297 Basic enhanced best effort service does not generally require per-flow 298 state to be maintained in the network elements, the goal is to support 299 fair usage of resources inside network. 301 MPLS enables the carrying of congestion indication over the LSP to 302 allow the LSP endpoints to react to congestion. In addition, the 303 congestion indication can be monitored in the LSP endpoints, and 304 information of congestion exceeding some predetermined threshold can be 305 used e.g. to initiate the re-evaluation LSP path selection. 307 In environments where bandwidth allocations are used, any required 308 traffic management related attributes that are used are generally 309 applied on aggregated streams. The use of label based forwarding 310 paradigm adds easy to implement capabilities to allocate bandwidth to 311 aggregated best effort traffic streams and provides ways to communicate 312 these allocations through the network. 314 Generally enhanced best effort approaches rely on the interactions of 315 the network with end-to-end protocols (e.g. intelligent drop policies) 316 to reduce the load at times of congestion. Common practise at a moment 317 is to use FIFO type queuing. 319 Together with the bandwidth allocation capabilities, the path selection 320 mechanisms, such as explicit label switched paths provide efficient 321 capabilities to network traffic engineering. 322 3.2 Differentiated services 323 3.2.1 Differentiated service 325 Differentiated services are currently being specified in the IETF 326 DIFFSERV working group. Work is in an early phase, and there are 327 several different proposed approaches. 329 Differentiated services, as proposed, allow the traffic to be 330 classified into finite number of priority and/or delay classes. Traffic 331 classified as having the higher priority and/or delay class receives 332 some form of preferential treatment over the traffic that is classified 333 onto lower class. Differentiated service does not attempt to give 334 explicit end-to-end guarantees over the network, instead, in congested 335 network elements, the traffic with higher priority class has a higher 336 probability to get through, or in case of delay priority, scheduled for 337 transmission before the traffic that is not delay sensitive. 339 Differentiated service packet classification can be performed either in 340 the hosts, CPE routers or in the operator network border routers. 
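As an illustration, the following sketch (in Python, for illustration
only) shows a border router classification function of this kind; the
class names and the prefix and port rules are invented examples and do
not correspond to any specified codepoints.

   # Illustrative sketch only: a border router assigning packets to a
   # small, fixed set of differentiated service classes.  The class
   # names and rules below are invented examples.
   PREMIUM_PREFIXES = ("192.0.2.",)     # example customer address prefix

   def classify(src_addr, ip_proto, dst_port):
       """Return the differentiated service class for one packet."""
       if any(src_addr.startswith(p) for p in PREMIUM_PREFIXES):
           return "low-delay"           # premium customer traffic
       if ip_proto == 6 and dst_port == 80:
           return "assured"             # web traffic above best effort
       return "best-effort"             # everything else

   # Example: classify("192.0.2.7", 6, 25) returns "low-delay".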
The information required to perform actual differentiation in the
network elements will be carried in the TOS field of IPv4 packets,
referred to as the DS byte in the differentiated service operational
model document [Nichols98]. Thus, as the information required by the
buffer management and scheduling algorithms is carried inside the
packet, differentiated services do not necessarily require signalling
protocols to control the mechanisms that are used to select different
treatment for individual packets.

Differentiated services can be implemented in any router that supports
the appropriate traffic management mechanisms.

3.2.2 Differentiated services with bandwidth allocations

In addition to the basic functionality provided by differentiated
services, the addition of a bandwidth allocation capability allows the
network operator to allocate the desired bandwidth to the switched
paths carrying the differentiated services over the network domain.
Depending on how the differentiated service allocations are
implemented, the operator can either control the bandwidth share given
to each priority class separately, or allocate bandwidth to
differentiated service class paths as a whole and implement
differentiation on the basis of the capability of the resulting virtual
path.

3.2.3 Differentiated services in MPLS environments

Generally no per-flow state is maintained in the network elements; the
goal is to support a small, fixed number of service categories.

Per-stream attributes distributed using the label distribution
mechanisms can include the differentiated service category associated
with the LSP.

One or more queues with a simple service policy are used. In case
multiple queues are used to support delay prioritisation, the
scheduling mechanism ensures that the low delay classes are served
first. Weighted scheduling mechanisms may be used instead of strict
priority scheduling to ensure that the lower classes cannot suffer from
starvation.

The support of differentiated services in MPLS environments requires
signalling support for the association of the desired category with the
label, or alternatively each packet needs to carry the information of
the desired service category.

MPLS allows the allocation of bandwidth for the differentiated services
in conjunction with the other services in a controlled manner. This
allows the operator to allocate the available bandwidth between the
differentiated service category and other categories, on a per-LSP
basis depending on implementation.

3.3 Guaranteed services

These services provide hard guarantees that are explicitly specified
for different granularities and topological scopes, from network
boundary to network boundary, or end-to-end. Guarantees can be given
for different kinds of parameters, such as bandwidth and/or delay,
depending on the service class and the capabilities of the network
elements on the path. Guaranteed services may be based on contractual
guarantees or on user-network signalling, such as RSVP. A signalling
protocol to communicate the service parameter information is required
inside the network.

In the IETF, guaranteed services have been specified by the INTSERV
working group. The integrated services framework is described in
[RFC1633].
There are currently two services that have been defined by INTSERV:
controlled load [RFC2211] and guaranteed service [RFC2212]. These
services should be supported in MPLS environments. Service parameter
mappings to different link layers specified in the ISSLL working group
should be applicable to MPLS, augmented with the label encapsulation
procedures specified in the MPLS WG.

3.3.1 Services

Two different guaranteed services have been specified in the INTSERV
effort of the IETF so far:

- Controlled load service [RFC2211]
- Guaranteed Quality of Service [RFC2212]

Other guaranteed service categories that may be applicable to certain
MPLS environments have been specified by other standardisation bodies,
such as the Frame Relay Forum and the ATM Forum [ATMF96].

The service categories specified in bodies other than the IETF are not
presently discussed in this document, as we attempt to build on the
present state of the work of the IETF. The service categories from the
other standardisation bodies may become important in the future, and
their use in the MPLS context and mappings between IETF services and
external categories may be specified as part of the MPLS effort or
other IETF efforts, such as ISSLL.

3.3.2 Guaranteed services in MPLS environments

Per-LSP or per-flow state needs to be maintained in the edge MPLS
nodes, depending on the topological scope of the guarantees: for end-
to-end guarantees, per-flow state is required, and internally, per-LSP
state for aggregated guarantees needs to be maintained. Aggregated
state information is needed in the core network elements.

The implementation of guaranteed services requires the use of advanced
queuing mechanisms in the network elements. Signalling support for the
communication of changes of the individual or aggregated state
information associated with the LSP will be required.

For scalability, the aggregation of the guarantees to form guaranteed
aggregated label switched paths is desirable. For the implementation of
end-to-end reservations, the information on the parameters of the
aggregated entities is required at the de-aggregation points of the
network. This can be realised in MPLS by using multilevel LSPs. This
requires signalling of the individual constituents of aggregated flows
from the aggregation to the de-aggregation point.

The current methods for QoS on IP seem to have scalability issues when
the number of connections requesting such services grows. Thus, an
issue that is not MPLS specific is that of making QoS scalable through
a combination of aggregation and provisioning. Such aggregation
techniques may place some requirements on MPLS, to the extent that the
labels may have to be associated with specific kinds of parameters
which pertain to the aggregation. Thus the label assignment and
distribution mechanisms should provide ways for distributing such
attributes.

MPLS benefits the implementation of guaranteed services, as the
association can be made in the border nodes of the network onto LSPs,
and the intermediate nodes need only use the label information to
retrieve the attributes they require to provide the desired guarantee
for the associated LSP, as illustrated by the sketch below.
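The following sketch (in Python, for illustration only; the attribute
names and values are invented) shows the per-packet work of an
intermediate node: a single exact-match lookup on the incoming label
returns both the forwarding information and the traffic management
attributes of the LSP.

   # Illustrative sketch only: state installed at LSP set-up time,
   # keyed by the incoming label.  Attribute names and values are
   # invented examples.
   lsp_state = {
       17: {"out_label": 42, "out_if": "if1",
            "queue": "guaranteed", "rate_bps": 2000000},
       18: {"out_label": 43, "out_if": "if2",
            "queue": "best-effort", "rate_bps": None},
   }

   def forward(in_label):
       """Per-packet work in an intermediate node: one exact-match
       lookup yields the outgoing label, the egress interface and the
       queuing/scheduling attributes of the LSP."""
       state = lsp_state[in_label]
       return (state["out_label"], state["out_if"],
               state["queue"], state["rate_bps"])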
The use of labels to retrieve the state 469 information provides great benefit compared to the model where each 470 node in the path would require to keep state of each guaranteed flow, 471 and find the flow by matching a filter to each packet to retrieve the 472 traffic parameters of the flow. 474 4. TRAFFIC MANAGEMENT REQUIREMENTS 476 Requirements presented in this chapter are a superset of requirements 477 of those expressed in numerous sources. Some of the requirement sources 478 these requirements are based on are [Smith97], [Bradner97], and 479 [RFC1633]. 481 4.1 Service category support 483 - Support for services described in the previous chapter 485 MPLS shall support the implementation of the services described in 486 previous chapter, in such way that the desired set of services can be 487 implemented in same node and same link. The implementation of all 488 services should not be mandatory, but considered as a differentiator 489 between the products. However, the MPLS standardisation effort should 490 describe the set of mechanisms to support all of the above services to 491 ensure the interoperable implementation of these services. 493 - Support for controlled link sharing 495 Network operators shall be able to allocate maximum shares of link 496 bandwidth to different service categories, in a such way that the 497 minimum amount of bandwidth is guaranteed for each service class. This 498 allows the operator to guarantee that the lower priority services 499 cannot suffer from the starvation because of the higher priority 500 services use all available bandwidth. These setting shall be enforced 501 by the policy and admission control together with the policing, queue 502 management and scheduling mechanisms. In the absence of the traffic in 503 higher priority service classes, the bandwidth should be available for 504 use by the lower priority traffic. 506 4.2 Admission control, monitoring and security 508 - Support for authentication 510 Authentication of the users and/or equipment needs to be performed at 511 domain borders to determine that the service user is who he claims to 512 be. Authentication is required to support admission and accounting. 514 - Support for admission policies and control 516 Operator shall be able to apply admission policies in the operational 517 network boundary, to enforce the service agreements between the users 518 and/or other operator network domains. 520 - Support for accounting 522 When the enhanced service levels are used, the incentive for the 523 network operator to provide such services is to get more revenue of the 524 consumers of such services. Accounting is required to keep track of the 525 services used, and to be able to provide usage sensitive pricing 526 policies for enhanced level services. 528 - Service management 530 When the enhanced services are provided for the end-user, inside the 531 operator's network, or between the domains, it is important for both 532 the operator and end-user to be able to monitor that the performance of 533 the provided services fulfil their specifications. The required 534 measurement and management features shall be implemented on network 535 elements and management systems to support these requirements. 537 4.3 Congestion management 539 - Congestion control 541 Congestion control is important even for the best effort services, but 542 becomes more complicated when the different levels of services are 543 supported over same interfaces. 
The characteristics of the mechanisms and guidelines for the use of
these congestion control mechanisms in multi-service environments shall
be specified.

4.4 Scalability requirements

- Minimisation of the label space requirements

The label space may become a limitation on the applicability of the
label switching scheme, unless attention is given to constraining the
label space in the architecture design phase. An increased label space
makes the management of the label space more difficult, involves more
state keeping in network elements, and implies higher dynamics of
change in the label assignments or attributes. Adding advanced services
to pure best-effort delivery will inevitably increase the label space
requirements, and an attempt should be made in the specification phase
to minimise the overhead. Aggregation and merging are examples of
mechanisms that help in label space containment.

- Minimisation of the state in the network elements

Flow specific state shall be maintained only on the network elements
that are required to handle the individual flows, such as edge network
elements. The design goal is that the core network elements are not
required to maintain flow specific state information. This enhances the
applicability of MPLS in large networks and on high-speed backbone
links.

- Support of different granularities of control, from a single flow to
highly aggregated streams

It is important that multiple control levels are supported, depending
on the level in the network where the services are provided. The
general guideline is that the amount of state information that is
required to be maintained decreases from the network edge towards the
core of the network.

- Minimisation of the signalling requirements

The state maintenance associated with the control of the path traffic
management attributes implies the use of a signalling mechanism to
convey this information. It is important that the signalling traffic
required by the traffic management support be minimised.

4.5 Robustness and reliability

- Soft state protocol

The protocol(s) resulting from the MPLS work should use a soft state
approach as much as possible, i.e. the state associated with the LSPs
should be required to "expire" if not periodically refreshed. Hard
state should only be associated with administratively configured LSPs
(explicit routes, policies, etc.). Care should be taken that the
overhead of the state refreshes required to maintain the soft state
components does not grow excessive, e.g. due to a requirement to
refresh the state associated with each LSP individually.

- Security considerations

The basic idea of supporting any kind of service level differentiation
opens up possibilities for users to try to gain access to more valuable
services without paying the appropriate compensation. In addition, new
kinds of denial of service attacks may become possible. Security
considerations have to be taken into account when designing the
architecture and protocols for the traffic management aspects.
- Reliability

The service level agreed upon with the customer has to be monitored,
and the means for alerting the network operator of failures, and
mechanisms (possibly automatic) for reconfiguring the switched path
arrangement inside the network to quickly remedy the failure, have to
be considered.

4.6 Topology support

- Support for point-to-point topology

Point-to-point topology is conceivably the simplest of the topologies
that needs to be supported. The basic topology between the network
elements is a point-to-point path, which can have its associated
parameters. More complex topologies can be supported by merging the
ingress paths to single egress paths with different characteristics
(aggregation). It shall be possible to support point-to-point LSPs with
the associated resource allocations and priorities.

- Support for point-to-multipoint topology (multicast)

Point-to-multipoint topology is useful for the support of multicast
data delivery. Point-to-multipoint topology support shall include means
for managing the joins and withdrawals of leaves, affecting only the
associated part of the multicast distribution tree. Also, it shall be
possible to support heterogeneous receivers in the multicast groups.

- Support for multipoint-to-point topology

Multipoint-to-point topologies are attractive for scalability reasons.
A single destination based tree can be constructed for traffic that can
be treated similarly. It shall be possible to support different traffic
reservations in different parts of the tree, with higher resource
allocations towards the egress points of the multipoint-to-point
delivery tree (each merge point adds its traffic volume to the tree).

4.7 Topological scope

- Support for different topological scopes (inside a domain, between
domains, end-to-end)

MPLS shall consider the different requirements and scalability aspects
imposed by the different topological scopes, and the functional
partitioning inside an MPLS domain and between the MPLS domain and
other MPLS or non-MPLS domains.

4.8 Compatibility

- Support of current applications without modifications

There have been numerous proposals in the past to provide enhanced
services that involve the modification of the end-user application
software. Examples of such proposals are end-to-end ATM deployment, use
of RSVP by the end-user applications to request service guarantees, and
use of the applications to classify their traffic onto differentiated
service categories. While such end-to-end guarantees may become
important later, it shall be possible to initially implement service
contracts without modifications to applications and end-to-end
protocols. This can be accomplished by classifying the traffic at the
network edges instead of on an end-to-end basis, and providing the
required transmission capacity (e.g. a dedicated switched Ethernet
port) to the end-user's computer system. An additional advantage is the
centralised nature of the management of these services.

- Interoperability

MPLS should consider the interworking and interoperability of the MPLS
based network with the currently available networking technologies, and
also describe the advanced service mappings between the other
networking technologies and MPLS where applicable.
680 - Support for different link layer technologies 682 Mapping of the label switching paths to different link layer 683 technologies shall be specified taking into account the traffic 684 management capabilities provided by the underlying link layer 685 technology, and the desired properties of the supported service set. 686 Candidates for the link layers suitable for carrying labelled traffic 687 in public network environments include ATM, Frame Relay and MPLS over 688 SONET. 690 4.9 Extensibility 692 - Extensibility 694 Traffic management framework and associated architecture and protocols 695 shall be extensible to support new attributes for supporting new 696 services without the changes to initial concepts and mechanisms. 698 - Mechanism independence 700 The traffic management mechanisms shall be loosely specified, rather in 701 the way of specifying the characteristics of the mechanisms required to 702 support different parts of traffic management functionality. Mechanisms 703 like queue management and scheduling are local in the network element, 704 and thus do not need to be strictly standardised. Suggestions of the 705 applicable mechanism should be given, but vendors should have the 706 freedom to implement whatever mechanisms they feel appropriate to 707 achieving the desired functionality. Additionally, this allows for 708 improvements in the individual mechanisms via active research in the 709 area. Thus it is important to standardise on the semantics of 710 information carried in the signalling protocol (LDP) or that associated 711 with individual packets, as applicable. 713 5. CONTROL PLANE MECHANISMS FOR TRAFFIC MANAGEMENT FUNCTIONS 715 This chapter describes the mechanisms required in the various parts of 716 the network to control the data plane traffic management functions 717 described in the next chapter. These mechanisms include policy and 718 signalling aspects required to set up, and to maintain the LSPs. 720 Note that the location of these mechanisms in the networks is not 721 discussed in this chapter, a discussion of the location of mechanisms 722 in different network environments is given in chapter 9. 724 5.1 Triggers 726 Triggers are events that cause the changes in the LSP configuration. 727 These changes may be LSP establishment, reconfiguration, deletion or 728 attribute modification. The triggers either require going through full 729 or partial LSP establishment process depending on the type of the 730 trigger. 732 Triggers typically result from events related to changes of some 733 information relevant to LSP set-up, such as: 735 - configuration event 736 - signalling event 737 - topology change 738 - traffic pattern change 740 The scope of the change initiated by trigger can be either local (i.e. 741 inside of the network element), regional (i.e. affects the 742 configuration of the peer MPLS nodes) or global (i.e. affects all 743 network elements that compose the LSP). 745 The frequency of the regional and global changes should be minimised. 746 As the finer granularity of control of the LSP attributes is required 747 (e.g. explicit reservations), this becomes increasingly hard to 748 achieve. 750 Properties associated with different kinds of triggers are discussed in 751 sections 5.1.1 to 5.1.3. 753 5.1.1 Configuration events 755 A configuration event can affect either policy or LSP configuration 756 parameters. Policy changes affect the admission or classification 757 policy being used in the node. 
An LSP parameter change affects the attributes associated with a
statically configured label switched path.

The policy related changes can either force the re-evaluation of the
current classifications or be taken into use gradually, as new paths
are used. Although immediate re-evaluation would be desirable, it may
have negative effects on the performance and the handling of the
current traffic.

Parameter changes may require the communication of the change to the
peer LSRs that compose the LSP (signalling initiated by the 'root' node
of the LSP), or be configured onto each LSR along the path individually
(management initiated change). In either case, these changes should
take effect immediately.

5.1.2 Signaling events

A signalling event is an externally received trigger that explicitly
affects the way LSPs are set up and, depending on the signalling event
type, may result in setting up, tearing down or modifying attributes of
the LSPs.

It can be foreseen that different kinds of signalling protocols will
need to be supported, depending on the interface the event is received
from. There will likely be different signalling mechanisms used for
users, inside a network domain and between domains (e.g. RSVP and LDP).

5.1.3 Topology changes

Topology changes are events that are associated with changes in the
network topology, and may potentially result in the reconfiguration of
a large number of LSPs. Topology changes are brought to the attention
of the label distribution subsystem by the routing protocols and by
monitoring the status of the established LSPs.

5.1.4 Traffic pattern changes

A traffic pattern change is an event triggered by user activity,
observed by the network element as a change in the traffic
characteristics received over an interface. Examples include the
appearance of a new traffic flow, or the timeout of an existing flow.
These changes may affect how the LSPs are set up or the attributes of
the LSPs. Traffic pattern related changes should be kept as local as
possible.

5.2 Policy and admission control

Policies and admission control form a set of processes that directly or
indirectly control the set-up of the label switched paths through the
network element.

5.2.1 Routing policy

For new LDP requests, the routing policies applied in the internetwork
are the first controlling policy that constrains the potential routes
the LDP paths can take through the network.

Routing policies are thus not directly involved in the topological
control of the LDP establishments, but they control the basis of the
information (the routing information base) that LDP uses to determine
the available routes.

It shall be noted that current routing protocols use the topology and
metric information to select the "best" route out of the multiple
options, and do not generally know anything about the path
characteristics or the services supported on the paths available in the
routing database.

5.2.2 Classification policy

Classification is based on two categories of information, specifically
information in the headers of the received packets and the control and
policy information provided by the configuration (management plane),
routing and signalling protocols.
834 IPv4 Header information useable in the classification process: 835 - Destination address 836 - Source address 837 - IP protocol field 838 - TOS 840 TCP/UDP header information useable in the classification process: 841 - Source port number 842 - Destination port number 844 Additional header fields may be parts of the classification, if 845 desired. 847 The classifier makes a decision on the basis of the preconfigured 848 classification policy information, which specifies the kind of 849 treatment the packets belonging to flow would like to receive. Note 850 that the classification policy alone does not guarantee that the 851 desired behaviour will be achieved, this is further refined by the 852 admission policy, admission control and policing functions. 854 For IP packets, the classification process can be generally 855 accomplished by applying the filter template of the form {DA prefix, SA 856 prefix, PRO, TOS, SPN, DPN} to each individual packet. Any of the 857 fields can be a wildcard, so for example all traffic destined to web 858 server would be specified using filter {*,*,6,*,*,80}. In some cases, 859 there may be several different filters that may match the same packet, 860 and the results of the match for the most specific filter should be 861 used in such cases. 863 In addition to packet header information, local information may be 864 added to the classification process. One example of such local 865 information type is the interface the packet was received from. 867 The classification policy determines how the individual flow should be 868 treated, including attributes such as the reservation type and 869 granularity, differentiated service class, etc. On the basis of the 870 filtering result, the packet may be associated with the LSP, or flow 871 identifier. 873 5.2.3 Admission policy 875 Admission policy is the process to determine if the new request for the 876 LDP set-up or attribute modification with some set of reservations is 877 administratively acceptable. This is administratively configured, and 878 is associated with the given granularity entity, such as individual 879 user, user community, or peer AS. 881 The type and the granularity of the information that will be taken into 882 account by the admission policy depends on the interface type, local 883 policy and trigger type (e.g. signalling versus configuration event). 885 When reservation requests of coarse granularity are considered (e.g. 886 individual LDP set-up on public network interface supporting a large 887 corporation), the admission policies are typically applied against the 888 parameters associated with the aggregate set of all reservations 889 currently associated with the community, reservation parameters and the 890 administratively configured maximum resources allocated to that 891 community. 893 5.2.4 Admission control 895 Admission control is the process that is used to determine the resource 896 availability to support a new request or the modification of attributes 897 associated with existing label switched paths. Admission control is 898 invoked as a final step, after it has been determined that the route to 899 the destination is available, and the permission to process the request 900 is granted by admission policy. 902 Admission control gets more complex when the granularity of the 903 reservations increases, being not invoked at all for best-effort 904 traffic, and being most complex for the guaranteed traffic. 
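The classification and admission steps described in chapters 5.2.2 to
5.2.4 can be sketched as follows (in Python, for illustration only; the
filter contents, the community name and the capacity figures are
invented examples, and prefix matching of addresses is simplified to
exact or wildcard comparison).

   # Illustrative sketch only.  A filter template of the form
   # {DA prefix, SA prefix, PRO, TOS, SPN, DPN} is matched against the
   # packet header; '*' is a wildcard and the most specific match (the
   # one with the most non-wildcard fields) wins.
   FILTERS = [
       # (DA, SA, PRO, TOS, SPN, DPN) -> classification policy
       (("*", "*", 6, "*", "*", 80),    {"class": "assured"}),
       (("*", "*", "*", "*", "*", "*"), {"class": "best-effort"}),
   ]

   def classify(packet):
       """Return the policy of the most specific matching filter."""
       best_policy, best_score = None, -1
       fields = (packet["da"], packet["sa"], packet["pro"],
                 packet["tos"], packet["spn"], packet["dpn"])
       for template, policy in FILTERS:
           if all(t == "*" or t == f for t, f in zip(template, fields)):
               score = sum(1 for t in template if t != "*")
               if score > best_score:
                   best_policy, best_score = policy, score
       return best_policy

   # Administratively configured maximum and currently reserved
   # bandwidth per community (invented figures).
   ADMISSION = {"customer-A": {"max_bps": 10000000,
                               "reserved_bps": 6000000}}

   def admit(community, requested_bps):
       """Admission policy / control: accept the request only if the
       community stays within its administratively configured maximum."""
       entry = ADMISSION[community]
       if entry["reserved_bps"] + requested_bps > entry["max_bps"]:
           return False
       entry["reserved_bps"] += requested_bps
       return True

In this sketch, the first filter corresponds to the web server example
{*,*,6,*,*,80} given in chapter 5.2.2 above.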
906 5.3 Path selection 908 The primary mechanism to control path establishments and deletions in 909 MPLS networks is the routing protocol. In addition, paths through the 910 network can be established using the explicit routing. Static LSPs can 911 be configured through management interface. 913 For the MPLS network elements to be able to automatically locate 914 alternate paths with the sufficient resources available, routing 915 protocols that are able to take in the account additional path 916 attributes instead of just topological connectivity and preconfigured 917 metrics of the available paths is needed. 919 The draft framework for QoS routing work effort have been developed in 920 QOSR working group of the IETF [Crawley98]. However, the routing 921 protocols with suitable metrics to be used in the environments with 922 fine-granularity service guarantees inside or between the domains need 923 to be developed. 925 5.4 Accounting 927 Accounting mechanism is required by the service operators to be able to 928 bill users in accordance with the services used. If the accounting 929 mechanisms are not in place, there is no incentive for the users to use 930 anything but best offered service classes. 932 MPLS accounting mechanisms shall be able to collect usage data with 933 desired granularity (single user to peer operator), together with 934 traffic management attributes associated with the LSP, and transfer 935 this data to operator's billing system. Protocols used for transferring 936 accounting data to billing systems and billing procedures are outside 937 of the scope of the MPLS work. Suitable protocols may include e.g. 938 RADIUS and TACACS+. 940 5.5 User authentication 942 At the moment, these services need to be implemented on the basis of 943 the interface, protocol and network address information, but as the 944 users are mobile (even within a corporate network), and also because of 945 the increasing use of the dynamic address allocation mechanisms, such 946 as DHCP and NAT's the ultimate goal should be to base the service 947 policies on the user information. 949 One possible implementation may be based on the use of directory 950 services, such as LDAP to store the user profile information, but the 951 approach needs to be standardised to be usable in the large scale. 953 6. DATA PLANE MECHANISMS FOR TRAFFIC MANAGEMENT FUNCTIONS 955 This chapter describes the mechanisms required in the various parts of 956 the network to provide support for the transport of the traffic with 957 the service parameters through the network. These mechanisms include 958 all the mechanisms that are involved in per packet decisions that are 959 performed in the intermediate network nodes. The parameters for 960 controlling these mechanisms are determined by the control plane 961 mechanisms described in the previous chapter. 963 Note that the location of these mechanisms in the networks is not 964 discussed in this chapter, discussion of the location of mechanisms in 965 different network environments is given in chapter 9. 967 6.1 Label forwarding paradigm 969 In the best-effort label based forwarding, MPLS nodes use the simple 970 exact match lookup to determine the egress link where the packet should 971 be sent. 
When the services that require the support for service level 972 differentiation are implemented, MPLS node uses the same exact match 973 label lookup to determine not only where the packet should be destined, 974 but also the additional state information associated with label, 975 related to queuing and scheduling of the packet. 977 6.2 Classification 978 6.2.1 What is classification and where it should be done 980 The purpose of the classification process is to determine the queuing / 981 scheduling treatment that the packets should get as they traverse 982 through the network. 984 The result of the classification determine the following attributes: 986 - service class the packet should be carried on, 987 - for differentiated services the drop priority and / or delay 988 priority for the packet 989 - for guaranteed services the parameters determining the desired 990 service guarantees 992 Packets may be classified as belonging to different service categories 993 in the various places of the end-to-end path traversed. 995 Likely places where the packet classification may occur are: 997 - Operator's domain ingress router 998 - CPE router 999 - Host 1001 When the hosts performs the classification, it may base the 1002 classification decisions either on the protocol used (part of the host 1003 protocol stack), or the attributes communicated from the application. 1004 Guaranteed service parameters will likely be based on the parameters 1005 communicated by applications. 1007 When the classification is performed by the routers (either CPE router 1008 or operator's border router), the classification decisions have to be 1009 based on the protocol information carried on the packet. 1011 Initial deployment is likely to be based on the classification on the 1012 routers, as there is no support for performing the classifications in 1013 the host protocol stacks. When the classification is performed in 1014 router's, modifications to host protocols and applications are not 1015 required. Additionally, it is easier to set up administrative 1016 classification policies when the classification is performed in 1017 routers. 1019 The stand-alone and integrated equipment for performing the 1020 classification for controlling the traffic are available at a moment, 1021 but there are not standard ways to manage these, neither standard ways 1022 on how the classification results are used to control the data stream. 1023 One common characteristic of current solutions is that they are usually 1024 decoupled from the other network equipment. 1026 Depending on the place where the classification is performed, the 1027 procedures performed on subsequent nodes do vary. 1029 6.2.2 Flow Classification 1031 Flow classification is the process of associating a label to individual 1032 traffic flows. This process needs the consideration of the 1033 classification policy to be able the associate the label with the flow. 1034 Depending on the aggregation environment, the label may be associated 1035 with single flow, or if the flow aggregation is supported and suitable 1036 label already exists, flows may be aggregated to stream on the existing 1037 label. 1039 The purpose of the flow classification process is to reduce the 1040 processing load associated on making the decision of which label to 1041 associate with arriving packets. If the full classification can be 1042 performed for each packet without performance penalty and the suitable 1043 label exists, the flow classification is not required. 
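As described in section 6.1, the label lookup of a service-aware MPLS node returns both the forwarding information and the scheduling state associated with the label. A minimal, non-normative Python sketch of such an incoming-label map is given below; the field names and values are invented for illustration only.

    # Incoming label map: one exact-match lookup yields both the
    # forwarding information and the queuing/scheduling state.
    ILM = {
        17: {"out_label": 42, "out_if": "if1",
             "queue": "best-effort", "reserved_bps": 0},
        18: {"out_label": 43, "out_if": "if2",
             "queue": "priority", "reserved_bps": 2000000},
    }

    def forward(label, pkt):
        entry = ILM[label]                  # exact match lookup
        pkt["label"] = entry["out_label"]   # label swap
        # The same entry also selects the queue and the reservation
        # state used by the scheduler on the egress interface.
        return entry["out_if"], entry["queue"], entry["reserved_bps"]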
1045 Flow classification needs to be performed at least once for each new 1046 flow. Flow classification is performed on the edge MPLS nodes, where 1047 the packets from a non-MPLS network domain enter the MPLS network 1048 domain. This process can also produce a simple key, such as an 1049 entry in a hash table, to be subsequently used by the packet 1050 classifier for the faster determination of the label that needs to be 1051 associated with individual packets. 1052 As more fine-grained control becomes necessary, flow classification 1053 becomes mandatory, because the accomplishment of fine-grained 1054 guarantees involves setting up a new LSP or modifying the 1055 parameters of an existing LSP. 1057 In some cases, if it is determined that a suitable label for carrying 1058 the flow does not exist, a new LSP needs to be set up or the attributes 1059 of an existing LSP need to be changed. The applications that are 1060 allowed to do this should be subject to careful consideration, as it is 1061 preferable to have the LSPs set up beforehand; otherwise the LDP 1062 modifications done on a per flow basis consume too many resources and 1063 become the performance / scalability bottleneck. However, this is 1064 useful for some applications whose characteristics are known beforehand 1065 to require a relatively long lasting flow with service level 1066 requirements, such as videoconferencing. 1068 Classification mechanisms that require the edge routers to maintain 1069 per-flow state information are susceptible to denial of service 1070 attacks by malicious users. One can foresee an attack based on 1071 sending packets with various destination address / port 1072 combinations in rapid sequence, causing per flow state to be 1073 established for each packet. This can lead to exceeding the per-flow 1074 state maintenance and flow establishment handling capacities of the 1075 routers performing the classification. There is no easy cure against 1076 such an attack, except administratively limiting the amount of per- 1077 flow state that is associated with the interface. Together with 1078 source address validation, this at least can provide information on 1079 where the attack originated from. 1081 Note that flow switching as discussed here is nothing new; it has 1082 been used in routers and firewalls for a long time. For more information 1083 on flow measurements and classification, see [Claffy95], [RFC2063], 1084 [RFC1954], [Cisco97]. 1086 6.2.3 Packet Classification 1088 Packet classification performs the mapping of individual packets 1089 onto the desired LSPs. The packet classification process essentially assigns 1090 each arriving non-labelled packet to a suitable label switched path, 1091 which has to be available before the packet classifier can perform its 1092 function. 1094 Prior to the packet classification, the LSP has to have been set up 1095 using either the flow classification process or other mechanisms, such 1096 as setting up the LSP on the basis of information provided by management, 1097 topology (e.g. routing protocol) or a signalling protocol. 1099 As discussed in the above chapter, the flow classification process may help 1100 packet classification by producing keys that increase the packet 1101 classifier performance.
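The relationship between flow classification (section 6.2.2) and packet classification (section 6.2.3) can be illustrated with the following non-normative Python sketch. The first packet of a flow goes through the full filter matching (the classify() function of the earlier sketch is assumed here) and the selected label is cached under a flow key; later packets of the flow hit the cache. The ESP branch anticipates the IPSEC discussion in section 6.2.6.1 by keying on the SPI when the ports are not visible. A real implementation would also bound the cache size per interface to limit the denial of service exposure discussed above.

    import struct

    ESP = 50                 # IP protocol number for IPSEC ESP
    flow_cache = {}          # flow key -> label chosen at classification

    def flow_key(pkt):
        if pkt["pro"] == ESP:
            # Ports are encrypted; use the SPI (first 32 bits of the
            # ESP header) instead, in the spirit of [RFC2207].
            spi = struct.unpack("!I", pkt["payload"][:4])[0]
            return (pkt["sa"], pkt["da"], ESP, spi)
        return (pkt["sa"], pkt["da"], pkt["pro"],
                pkt["spn"], pkt["dpn"])

    def packet_classify(pkt):
        key = flow_key(pkt)
        label = flow_cache.get(key)
        if label is None:
            # New flow: run the full (slower) filter matching once
            # and remember the outcome for subsequent packets.
            label = classify(pkt)
            flow_cache[key] = label
        return label

    def remove_flow(pkt):
        # Classification state maintenance (section 6.2.7): delete
        # the state on flow timeout, TCP FIN or end of reservation.
        flow_cache.pop(flow_key(pkt), None)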
1103 6.2.4 Classification results for differentiated services 1105 For differentiated services, classification determines the 1106 differential service attributes, such as drop precedence bit values and 1107 delay precedence bit values. In cases where these attributes differ 1108 from those carried in the received IP packet header, the received 1109 header bits may be overwritten or, depending on the implementation of 1110 the diffserv support in MPLS, left alone. 1112 If the differentiated services attributes are allocated on a per LSP 1113 basis, then the attributes are associated with the label switched path, 1114 and the result of the classification process should be the label of 1115 that path. 1117 6.2.5 Classification results for guaranteed services 1118 For guaranteed services, the label for the LSP that has the 1119 associated reservation attributes may be the result of the 1120 classification process. 1122 Alternatively, in fine-grained flow based systems, the flow identifier, 1123 which can be used to determine its individual traffic characteristics, 1124 may be the result of the classification process. In this case, the flows 1125 are mapped to an aggregated LSP by a mapping function following the 1126 classification function. 1128 6.2.6 Problems with non end-system classifications 1130 There are some known problems in performing the classifications in 1131 intermediate network elements, which are discussed below. 1133 Whether these present a problem, and if so the extent of the problem, 1134 depends on the environment in which the classification function is performed, 1135 and needs to be addressed on a case-by-case basis. 1137 6.2.6.1 Classification in presence of IPSEC 1139 When the transport protocol headers are encrypted, as described in the 1140 IPSEC document "IP Encapsulating Security Payload (ESP)" [RFC1827], 1141 the transport layer (UDP/TCP) header information, such as port numbers, 1142 cannot be used as parameters for determining which flow the packet 1143 belongs to. 1145 This implies that the classification has to be performed before the 1146 encryption is applied, in the customer device (typically a host, 1147 router or firewall) that performs the encryption process. 1149 Also, as the per flow information is not available in the public 1150 network, it is possible to run MPLS all the way to the subscriber and use the 1151 label to identify an IPSEC encrypted flow encapsulated onto one label. 1152 This way, it would be possible for the operator to enforce the 1153 requested parameters on a per encrypted flow basis. 1155 It is also possible to achieve this using RSVP signalling to the 1156 user, using the IPSEC extensions specified in [RFC2207], which 1157 basically use the SPI instead of the destination port number to identify 1158 the flow. 1160 6.2.6.2 Classification in presence of dynamic address assignment 1162 The increasing use of dynamic assignment of IP addresses makes 1163 it hard to determine the end-system the packets originated from. 1164 Dynamic address assignments are common in environments that employ 1165 DHCP or NATs. 1167 If the end-system address is an important part of the classification 1168 policy, then the means to communicate the address - physical system 1169 mappings to the classifier need to be arranged. One possible way to 1170 achieve this in DHCP environments might be to have a DHCP/DNS mapping in 1171 use, and resolve IP addresses on the basis of DNS bindings.
1173 In environments, where the classification is based more on the protocol 1174 information carried in the packets, dynamic address assignment is not 1175 problem. This is due to the fact that the dynamically assigned 1176 addresses are expected to be same for the duration of the session, and 1177 the flow classifier can still use these addresses for identifying 1178 individual sessions. 1180 6.2.6.3 Classification in presence of dynamic port numbers 1182 Some applications assign the port numbers they use dynamically, and it 1183 is very difficult or even impossible to make the correct classification 1184 on basis of such assignments. For such environments, it appears that 1185 the easiest way to achieve the correct classification is to let host 1186 determine the desired classification. 1188 6.2.7 Classification state maintenance 1190 Classification state maintenance process is related to the deletion of 1191 the per flow state and associated LSP bindings that are not required 1192 anymore. Examples that lead to the removal of classification state are 1193 flow time-out, ending of the individual flow recognised by other means 1194 (e.g. TCP FIN) or signalling event to signify the end of reservation 1195 request. 1197 Classification state maintenance activities ensure that the non-used 1198 flow state information is deleted with appropriate intervals to free up 1199 the resources in network elements. Classification state maintenance 1200 activity shall be mostly local to the MPLS node. Only when the 1201 reservations are made on individual flow basis, this affects the LSP 1202 bindings between peer MPLS nodes. 1204 If the reservation type for the flow was guaranteed reservation, and 1205 the flow was aggregated on the LSP with other guaranteed flows, state 1206 maintenance activity triggers the modification of the reservation 1207 attributes of the LSP the flow was mapped onto, but does not result in 1208 teardown of the LSP. 1210 6.3 Policing 1212 In the environments, where the packet classification is performed by 1213 the end-user's router or user's computer, it is important for the 1214 network operator to be able to enforce the traffic contract to disallow 1215 the users to exceed their contractual limits for the advanced services. 1217 This is performed using mechanism called traffic policing, which 1218 monitors the user's traffic. The policing function can, depending on 1219 the service used, either drop packets, or move the packets to lower 1220 priority or best effort delivery class. 1222 An alternative for the using policing is to allow users send whatever 1223 they want, and meter the usage of different services and bill the user 1224 based on what enters the public network. 1226 However, one likely alternative is to use a combination of these 1227 mechanisms, so that the user can send up to some maximum value 1228 specified by the traffic contract per class / service, and get billed 1229 on basis of combination of basic fee and usage. 1231 In cases where the classification is performed by the operator, the 1232 traffic contract can be enforced as part of the classification process. 1234 Policing actions can be taken at several granularity levels. Policing 1235 can be made for individual flows, when the per-flow reservations are in 1236 effect. Operator likely wants to police on basis of aggregated traffic 1237 contract on customers interface, and on MPLS network boundaries 1238 policing can be based on the individual LSP parameters. 
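As an illustration of the policing behaviour described in section 6.3, the following non-normative Python sketch implements a simple token bucket per policed entity (flow, customer interface or LSP). Non-conforming packets are either dropped or remarked to a best effort class, depending on configuration; the class name and the rate and bucket parameters are examples only.

    import time

    class TokenBucketPolicer:
        """Polices one flow, customer interface or LSP."""

        def __init__(self, rate_bps, bucket_bytes, demote=True):
            self.rate = rate_bps / 8.0        # bytes per second
            self.depth = bucket_bytes         # maximum burst, bytes
            self.tokens = bucket_bytes
            self.demote = demote              # demote instead of drop
            self.last = time.monotonic()

        def police(self, pkt_len, service_class):
            now = time.monotonic()
            self.tokens = min(self.depth,
                              self.tokens + (now - self.last) * self.rate)
            self.last = now
            if pkt_len <= self.tokens:
                self.tokens -= pkt_len
                return service_class          # conforming: unchanged
            if self.demote:
                return "best-effort"          # excess traffic demoted
            return None                       # excess traffic dropped

Whether excess traffic is dropped, demoted or merely metered for billing is a matter of the traffic contract, as discussed above.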
1240 6.4 Mapping 1242 On the basis of the flow identification performed by the classifier, 1243 the mapping process maps the packets to appropriate label switched 1244 path. This process is configured taking into account the traffic class, 1245 attributes associated with the flow and the topology information. 1247 The mapping function is responsible for achieving the aggregation. 1248 Depending on the traffic class, two styles of mappings can exist; 1249 direct and indirect mapping. 1251 6.4.1 Direct mapping 1253 Direct mapping can be used when the reservation does not have explicit 1254 guarantees, like bandwidth associated with it. Traffic classes suitable 1255 for direct mapping are best effort and differentiated services without 1256 bandwidth allocations. 1258 In direct mapping, the association is done directly from the packet 1259 classifier outcome to the desired LSP. 1261 6.4.2 Indirect mapping 1263 Indirect mapping needs to be used when the reservation does have 1264 explicit guarantees, like bandwidth associated with it, and the 1265 aggregation of these is desired. 1267 The need for the indirect mapping arises from the requirement to 1268 maintain per reservation state so that the individual reservation and 1269 its associated resources can be removed from the aggregate LSP. The 1270 reservation state deletion shall commence immediately after the end of 1271 reservation is detected, either through timeout, determined by 1272 observing transport header bits, or as result of signalling event. 1274 The associated parameter changes in the LSP configuration may be made 1275 more infrequently, especially when the frequency of the individual 1276 reservation establishments and deletions associated with given 1277 aggregated LSP is high and the reservations are relatively homogenous. 1278 This reduces the signalling load between the MPLS nodes the along the 1279 LSP. 1281 6.5 Aggregation, merging and deaggregation 1282 6.5.1 Aggregation 1284 Aggregation means that multiple flows that are treated similarly in the 1285 network are associated onto same label. Depending on the supported 1286 service type, the effort to support aggregation ranges from 1287 straightforward to very complicated. 1289 General guidelines for the aggregation to meet the scalability 1290 requirements suggest that the all flows that can be aggregated onto 1291 same label should be aggregated. 1293 Aggregation is the process that is performed at the first place the 1294 packet classification is performed, and involves the association of the 1295 different packets that belong to same forwarding equivalence class the 1296 same label. 1298 Aggregation conserves label space, as the labels do not have to be 1299 associated with the individual traffic flows. 1301 Figure 6.5.1. Aggregation 1303 Consider the node depicted in the figure 6.5.1. Traffic arrives from 1304 non MPLS network interfaces (not labeled) and is mapped onto LSPs. 1305 Because of the aggregation, the number of outgoing LSPs is reduced. 1307 6.5.2 Merging 1309 Merging is also a form of traffic aggregation, but is performed to 1310 label switched paths, instead of the individual packets. In merge 1311 capable node, packets coming from multiple ingress LSPs belonging to 1312 same forwarding equivalence class are sent out on the single label 1313 switched path. 1315 The merging process helps to conserve the label space, and also reduces 1316 the amount of the connection state that needs to be maintained in the 1317 intermediate network elements. 
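Referring back to section 6.4, the distinction between direct and indirect mapping can be summarised with the following non-normative Python sketch. Direct mapping simply binds a classification outcome to an LSP; indirect mapping additionally records per-reservation state so that the aggregate LSP parameters can be adjusted when individual reservations are added or removed. The data structures are assumptions made for illustration.

    class DirectMapper:
        # Classification outcome -> LSP, no per-reservation state.
        def __init__(self, bindings):
            self.bindings = bindings      # e.g. {"af-class-1": lsp}

        def map(self, outcome):
            return self.bindings[outcome]

    class IndirectMapper:
        # Keeps per-reservation state so that a reservation and its
        # resources can later be removed from the aggregate LSP.
        def __init__(self, lsp):
            self.lsp = lsp                # dict holding LSP attributes
            self.reservations = {}        # flow id -> reserved b/w

        def add(self, flow_id, bandwidth):
            self.reservations[flow_id] = bandwidth
            self.lsp["reserved"] = sum(self.reservations.values())

        def remove(self, flow_id):
            self.reservations.pop(flow_id, None)
            self.lsp["reserved"] = sum(self.reservations.values())

In practice the update of the aggregate attribute would be dampened as discussed in section 6.4.2, rather than signalled on every change.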
1319 Figure 6.5.2. Merging 1321 6.5.3 Aggregation and merging of traffic with service guarantees 1323 Aggregation of the traffic with service guarantees itself is not a 1324 problem, the problem is to come up with the associated service 1325 parameters for the aggregated path, in such way that the minimum amount 1326 of the resources are reserved, and the guarantees of individual 1327 reservations are maintained through the aggregated path. 1329 Aggregation of the traffic with just bandwidth guarantees is relatively 1330 straightforward; the attributes of the resulting aggregated label 1331 switched paths can be computed on basis of the guarantees given for the 1332 individual paths or flows that are aggregated. 1334 The computation of the aggregate path parameters can be based on simply 1335 a sum of the attributes of flows or paths that the aggregate is 1336 composed of, or can take in the account additional factors like 1337 oversubscription factor. 1339 When explicit guarantees for both delay and bandwidth are given, 1340 aggregation becomes much harder, especially if the delay requirements 1341 are tight. Several aggregation strategies for traffic both with and 1342 without delay guarantees are considered in references[Schwantag97], 1343 [Guerin97], [Berson97] [Rampal97], and [Li98]. 1345 6.5.4 Deaggregation 1347 Deaggregation is the opposite to aggregation and merging, in the sense 1348 that it terminates the label switched path and performs layer three 1349 lookup for the individual packets to determine their next destination. 1351 Deaggregation can associate the packets either with new label switched 1352 path, or to the interface to non-MPLS network. 1354 Note that the service class related information associated with the 1355 labeled packets is not lost in the deaggregation, because the 1356 attributes of the LSP the packet arrived on are available at the 1357 deaggregation point. 1359 If the LSPs are constructed through the MPLS domain, from a set of 1360 domain ingress interfaces to a single domain egress interface, and 1361 packets not associated with this egress interface are not merged or 1362 aggregated to same LSP, deaggregation process is not needed. In such 1363 cases, if the interface is to a non-MPLS domain, the MPLS header is 1364 simply removed. 1366 Figure 6.5.4. Deaggregation 1368 6.6 Queuing and congestion management 1370 6.6.1 Queue management 1372 Queue management mechanisms manage the available queue space, and also 1373 determine the appropriate handling of the arriving packet, on the basis 1374 of the label switched path the packet is received on and the status of 1375 the desired queue. 1377 Queue management is closely related to congestion control, as 1378 congestion can be loosely defined as a condition where the queuing 1379 point on the network element has exceeded or is about to exceed its 1380 allocated queue space, forcing the packets be dropped instead of queued 1381 for resource. 1383 Packet handling decisions include which queue packet should be queued 1384 on, and also whether the packet should be approved onto that queue, 1385 moved to lower priority queue or dropped. 1387 Note that the moving of the individual packets between the different 1388 queues is not necessarily a good course of action, unless all packets 1389 of same flow are put to same queue. This is because the moving of the 1390 individual packets of the flow to lower priority queue is likely cause 1391 the packet re-ordering. 
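The packet handling decision of section 6.6.1 can be sketched as follows (non-normative Python). The arriving packet's queue and drop precedence come from the label lookup, and the packet is dropped when the occupancy of the selected queue exceeds the threshold configured for its drop precedence. The two-queue layout and the threshold values are arbitrary examples, and demotion between queues is deliberately omitted to avoid the re-ordering problem noted above.

    QUEUES = {"priority": [], "best-effort": []}
    LIMIT = {"priority": 100, "best-effort": 500}     # packets
    DROP_THRESHOLD = {0: 1.0, 1: 0.8}   # drop precedence -> occupancy
                                        # above which packets are dropped

    def enqueue(pkt, lsp):
        # The queue and drop precedence come from the state that the
        # label lookup associated with the LSP.
        queue = lsp["queue"]
        occupancy = len(QUEUES[queue]) / LIMIT[queue]
        if occupancy >= DROP_THRESHOLD[pkt["drop_precedence"]]:
            return "drop"       # queue full, or nearly full for
                                # high drop precedence traffic
        QUEUES[queue].append(pkt)
        return queue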
1393 Since the queuing mechanisms vary on the basis of the supported 1394 services and are local to the network element, they need not be subject 1395 to standardisation efforts. 1397 6.6.2 Queuing principles 1399 Various queuing principles can be used for achieving the support of the 1400 required traffic classes. Properties of some possible principles, in 1401 order of increasing complexity, are discussed below. 1403 All of these queuing principles can be implemented for both cell and 1404 packet switching fabrics. 1406 - Single FIFO queue 1408 All traffic is queued onto a single queue. Packets are queued together 1409 with their associated labels. Packets are admitted to the queue on the 1410 basis of a combination of parameters, such as packet class, queue 1411 occupancy level, LSP reservation parameters and measured throughput per 1412 LSP. Packets are scheduled for transmission in the order they arrived. 1413 A property of this queuing scheme is that the delay cannot be minimised 1414 for the packets that require it. 1416 - Multiple FIFO queues 1418 Traffic is queued in multiple queues (minimum of 2) on the basis of 1419 delay priority. Packets are scheduled in priority order, possibly along 1420 with a guarantee for a minimum service rate specified on a per-queue 1421 basis. Packet admission onto queues is as before. 1423 - Shared queuing on per label basis 1425 Traffic is queued in different logical queues on the basis of the arriving 1426 label. Packet admission to queues is based on the occupancy level of 1427 each logical queue and possibly the overall queue space. This requires complex 1428 queue space management algorithms as well as advanced scheduling 1429 mechanisms. It is functionally equivalent to per-VC queuing in ATM 1430 switches. 1432 It is unclear whether per-label queuing has enough benefits over 1433 multiple FIFO queues with admission control to warrant the extra 1434 implementation complexity. 1436 6.6.3 Congestion control 1437 6.6.3.1 Passive congestion control schemes 1438 Passive congestion control schemes are based on dropping packets 1439 when they arrive at the congestion point. Passive schemes rely on the 1440 end-to-end protocols to find out that packet loss has occurred and to 1441 retransmit the dropped traffic at a reduced rate. 1443 Most of the Internet at the moment relies exclusively on the use of 1444 passive congestion control schemes. TCP congestion control algorithms 1445 have been designed to act exclusively on the basis of packet loss 1446 information. 1448 Over time, numerous algorithms for more intelligent drop policies 1449 have been developed; examples include RED [Floyd93], W-RED, and CBQ. 1450 These algorithms attempt to increase the fairness of the usage of the congested 1451 resource, to provide preferential treatment (typically more likely to 1452 get accepted onto the queue) for some portion of the flows, or to increase 1453 the end-to-end throughput in congestion conditions. 1455 6.6.3.2 Active congestion control schemes 1457 While passive congestion control algorithms do certainly work, one of 1458 their characteristics is that they waste network resources, as the 1459 traffic is first transmitted onto the congestion point, where it is 1460 dropped, and then retransmitted later. Dropped packets thus introduce 1461 extra overhead in the network portion before the congestion point. 1463 To avoid these disadvantages, there have been proposals to make the 1464 congestion control more active.
The goal of the active congestion 1465 control approaches is to reduce or eliminate the packet loss due to the 1466 congestion, or to push the drop point towards the point originating the 1467 traffic. 1469 By active congestion control, we mean that the network more directly 1470 informs the traffic sources of the congestion situations, and more 1471 importantly even before the congestion actually occurs. 1473 These mechanisms are based on the explicit monitoring and notification 1474 of congestion state along the path the traffic is traversing. The 1475 notification can be either direct using explicit semantics to tell the 1476 end-station to slow down, or indirect, using the congestion information 1477 to influence congestion management mechanisms of the transport 1478 protocols to control the rate of the sender. 1480 The direct mechanisms have been attempted in the real networks, but 1481 with little success so far, because the lack of the support of the end- 1482 station transport protocols. It has been shown that these schemes work 1483 reasonably well, when implemented end-to-end. 1485 Examples of the direct congestion control mechanisms include frame 1486 relay congestion notification mechanism [I370], ATM binary and explicit 1487 feedback mechanisms [ATMF96], and proposal for inclusion of the 1488 explicit congestion notification for IPv4 and IPv6 [Ramakr97]. 1490 The natural place to carry the congestion notification information in 1491 MPLS networks would be as part of the label encapsulation header (when 1492 MPLS is mapped to Frame Relay and ATM environments the existing 1493 mechanisms to carry congestion information could be used). 1495 However, as the huge installed base of the existing applications is 1496 built on top of TCP and UDP, more attractive way is to provide direct 1497 feedback inside the network, and indirect feedback in the network 1498 interworking point, taking advantage of the characteristics of the 1499 current transport protocols. Examples of schemes that could be used to 1500 achieve indirect control are [Packeteer97] and [Jagan97]. 1502 The advantage of having the direct control inside network is that when 1503 the transport mechanisms evolve to be better able to take advantage of 1504 this functionality, the direct control can be extended to the end- 1505 stations. 1507 6.6.4 Packet scheduling 1509 Scheduling algorithms determine the order in which traffic waiting in 1510 the queues are scheduled for transmission. Scheduling decisions are 1511 based on the queue specific information e.g. queue priority, weight, 1512 state, etc. 1514 The need of complex scheduling mechanisms depends on the capabilities 1515 provided in the network element, such as shaping, multiple service 1516 class queues, and complex queuing policy. 1518 In FIFO based queuing systems scheduling is trivial (transmit when you 1519 have the opportunity). 1521 6.7 Traffic shaping 1523 Traffic shaping is the process of modifying the traffic characteristics 1524 to conform to desired traffic profile. 1526 Shaping can be used in various parts of the network to make sure that 1527 the resulting traffic conforms to the traffic contract, and thus has a 1528 better chance not to get discarded by the policing or congestion 1529 control mechanisms in the network. 1531 Traffic characteristics tends to get modified by the network, as the 1532 multiple traffic streams interact, and traffic goes through buffer and 1533 scheduling algorithms. 
The process of shaping inside the network to 1534 make traffic to better conform to its original profile is called 1535 reshaping. 1537 Examples of the possible shaping points are end-station, MPLS edge 1538 node, or MPLS core node. 1540 Shaping can be associated with any granularity, which has defined 1541 traffic characteristics, from application flow to aggregated label 1542 switched path. 1544 Shaping may be achieved as part of scheduling functionality. 1546 6.8 Load sharing 1548 Load sharing can be implemented with MPLS routers using the path 1549 selection based on the load on the available links, and splitting the 1550 aggregated streams that are associated with different LSPs to different 1551 available links. 1553 The load sharing is especially important because of emergence of the 1554 Dense Wavelength Division Multiplexing (DWDM) systems, because these 1555 essentially divide the same fiber to up to tens of different channels 1556 going to the same destination node. Efficient load sharing allows the 1557 tight integration of the routed traffic and the transmission 1558 capabilities. Some of the issues related onto integration of optical 1559 networks and Internet are discussed in [Touch97]. 1561 MPLS based load sharing has advantage over the conventional router 1562 based load sharing, because it can take in the account also where the 1563 packets originated from, unlike the typical conventional routers. 1564 Without the knowledge where the traffic came from, it is not possible 1565 in the receiving node to easily guarantee that the packets are sent in 1566 the same order as they were sent in the previous node. Packet 1567 reordering causes performance degradation problems with TCP and some 1568 other transport protocols. 1570 The concept of the individual flows in the network ingress and/or 1571 egress points also allows to implement the load sharing for example to 1572 web server farms in such a way that the packets of the same session are 1573 always directed to same server. 1575 7. LABEL SWITCHED PATH GRANULARITIES AND AGGREGATION 1577 The subset of the flow granularities defined in the section 2.2.2 of 1578 the MPLS Framework document [Callon97] appears below, with discussion 1579 of their applicability on context of traffic management mechanisms 1580 discussed in this document. 1582 - PQ (Port Quadruples) 1584 Same IP source address prefix, destination address prefix, TTL, IP 1585 protocol and TCP/UDP source/destination ports. 1587 This defines a single communication session between two hosts, and is 1588 generally referred as "flow". 1590 While the recognition of existence of individual flows can be important 1591 at the network boundaries and hosts, per flow state should not be 1592 required at the core network elements, as it quickly yields to 1593 unmanageable amount of state information to be maintained in high-speed 1594 backbone links. This is the reason for the need of aggregation. 1596 - PQT (Port Quadruples with TOS) same IP source address prefix, 1597 destination address prefix, TTL, IP protocol and TCP/UDP 1598 source/destination ports and same IP header TOS field (including 1599 Precedence and TOS bits). 1601 This augments the definition of the flow to take into account the TOS 1602 byte of the IPv4 packet. 
It is basically possible for current 1603 applications to use different TOS values for different packets, 1604 although the practice is not likely to yield any predictable 1605 results, as the TOS byte is not widely supported as part of the forwarding 1606 process in current routers. 1608 The differentiated services working group will define the standard 1609 semantics for this byte, but if a single session uses different 1610 values it is likely to cause packet re-ordering problems in the 1611 network. 1613 For the coarser granularity paths, the aggregation rules should take 1614 into account the topological scope and the traffic types. MPLS nodes 1615 should attempt to aggregate the same type of traffic onto the same LSP. 1617 It should be noted that the support of managed paths and different 1618 services is going to increase the label space consumption, but 1619 aggregation should be used to minimise this increase. 1621 See chapter 8.5, "Multilevel paths", for a discussion of how the use of 1622 multilevel paths can help in the aggregation of traffic with explicit 1623 guarantees. 1625 8. LABEL SWITCHED PATH TOPOLOGIES AND ASSOCIATED TM PROCEDURES 1627 Services are implemented by assigning attributes to label switched 1628 paths. The path is composed of point-to-point segments between adjacent 1629 MPLS nodes. 1631 In complex topologies (excluding point-to-point) each individual 1632 segment may have different values for its attributes, depending on the 1633 location of the segment along the path and the topology of the entire path. 1634 This is also true when flows with resource allocations are 1635 aggregated into a stream that is associated with the same LSP. 1637 Properties of the different LSP topologies and related traffic 1638 management issues are discussed in the following chapters. 1640 8.1 Point-to-point 1642 The point-to-point LSP is the simplest of the label switched path 1643 topologies, and it is the basic building block of all LSPs. 1645 In this document, point-to-point LSPs have their own labels and 1646 attributes, and both the label and its associated attributes have local 1647 significance between the MPLS network elements. These local LSPs are 1648 called segments in this document. 1650 In the simplest case, where the end-to-end LSP with its attributes is 1651 built by concatenating a set of these segments, all segments have the 1652 same attributes, while the label has only local significance 1653 between neighbour MPLS nodes. 1655 More complex topologies can be constructed by concatenating the 1656 segments and using traffic merge (mpt-pt) and copy operations (pt-mpt) 1657 in the network elements to achieve the desired topological LSP 1658 constructs. 1660 8.2 Point-to-multipoint 1662 Point to multipoint topologies can be constructed using the packet copy 1663 function at the ingress point-to-point LSP segment on the MPLS network 1664 element. The incoming packets are duplicated for each outgoing label 1665 switched path. 1667 Point to multipoint topologies are important for supporting 1668 multicast packet delivery. 1670 8.3 Multipoint-to-point 1672 Multipoint to point topologies are important for scalability reasons. 1674 Multipoint to point topologies can be constructed using the packet 1675 merge function at the MPLS network element. The incoming packets from 1676 multiple ingress label switched paths are merged onto the same outgoing 1677 label switched path.
1679 In addition to aggregating the traffic destined to a single 1680 destination, in the presence of traffic with explicit guarantees, 1681 aggregation of the traffic parameters to get the attributes for each of 1682 the LSP segments composing the multipoint to point tree is required for 1683 supporting aggregation of the traffic with explicit guarantees. Note 1684 that this can cause the different segments to get different 1685 attributes as the traffic is merged onto the shared multipoint-to-point 1686 tree. 1688 8.4 Multipoint-to-multipoint 1690 Multipoint to multipoint topologies cannot be directly constructed 1691 using the same labels, but they can be constructed using the desired 1692 combination of point-to-point, multipoint-to-point and point-to- 1693 multipoint LSPs. The exact decomposition into simpler topologies depends on 1694 the desired connectivity in the multipoint to multipoint topology. Traffic 1695 management requirements are then those of the simpler 1696 topologies used. 1698 For example, full mesh connectivity between a set of endpoints can be 1699 achieved using multipoint-to-point LSPs, with each endpoint acting as the 1700 receiver of a separate multipoint-to-point tree. 1702 8.5 Multilevel paths 1704 Multilevel paths can be constructed using multiple labels on a stack, or 1705 alternatively by partitioning the label space to represent different 1706 levels (like VPI/VCI in ATM networks). 1708 The operations associated with label stacks are described in the MPLS 1709 framework document [Callon97] and a label stack encoding proposal is 1710 described in [Rosen97b]. 1712 The routing and scheduling decisions for the packets encapsulated on a 1713 multilevel label switched path are performed on the basis of the top 1714 level label. 1716 Termination of the multilevel LSP is performed at the deaggregation point, 1717 where the top level label is removed (referred to as a label pop in 1718 [Callon97]). The second level label is then available for use as the basis 1719 for routing and scheduling mechanisms. 1721 Multilevel paths are useful when several paths with similar, but 1722 different, service guarantees are aggregated onto the same path. At the 1723 deaggregation point, the path characteristics of the individual 1724 aggregated paths that the higher level path is composed of can be 1725 determined on the basis of the second level label. 1727 Figure 8.5. Multilevel path example 1729 Consider the simple MPLS network composed of four nodes A-D depicted in 1730 Figure 8.5. 1732 There are two traffic sources with reservations entering node A from 1733 non-MPLS domains. These two sources are aggregated and leave node A on 1734 LSPx. 1736 At node B, an additional LSP (LSPy) that is destined towards the same node 1737 is merged onto the same LSP, and the combination leaves node B as LSPz. 1738 The original labels are pushed onto the label stack, and traffic leaves node 1739 B with the top level label LSPz. 1741 At node C, no traffic is either merged onto or removed from LSPz; the LSP 1742 label just gets replaced and traffic leaves node C with the new label 1743 LSPz'. 1745 The traffic arrives at node D, which deaggregates the traffic into its 1746 constituent LSPs, denoted LSPy' and LSPx'. 1748 Now consider that all of the traffic entering and leaving the network 1749 has reservations. The capacity of LSPx is thus a function of RES1_in and 1750 RES2_in. The capacity of the aggregated LSPz is a function of LSPx and LSPy, 1751 of which at least LSPx is an aggregate.
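The bookkeeping needed at an aggregation point such as node B above can be sketched as follows (non-normative Python). The aggregate capacity is computed from its constituents, optionally scaled by an oversubscription factor in the spirit of section 6.5.3, and re-signalling of the aggregate attributes is dampened by a minimum-change threshold of the kind discussed at the end of this section. The class layout and parameter values are illustrative assumptions only.

    class AggregateLsp:
        """Aggregation point bookkeeping for a multilevel LSP."""

        def __init__(self, oversubscription=1.0, change_threshold=0.1):
            self.constituents = {}            # 2nd level label -> b/w
            self.oversubscription = oversubscription
            self.change_threshold = change_threshold
            self.signalled = 0.0              # last signalled capacity

        def capacity(self):
            return (sum(self.constituents.values())
                    / self.oversubscription)

        def update(self, label, bandwidth):
            # Add, modify or (bandwidth == 0) remove a constituent,
            # e.g. LSPx and LSPy merged into LSPz at node B.
            if bandwidth:
                self.constituents[label] = bandwidth
            else:
                self.constituents.pop(label, None)
            new = self.capacity()
            # Dampen signalling: only re-signal the aggregate when the
            # change exceeds the configured threshold.
            if (self.signalled == 0
                    or abs(new - self.signalled) / self.signalled
                    > self.change_threshold):
                self.signalled = new
                return "signal new aggregate attributes"
            return "no signalling needed"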
1753 As node C does not modify the aggregate in any way, it does not 1754 need to know the parameters of the individual components the aggregate 1755 LSPz is composed of. 1757 Node D, which acts as the deaggregation point for LSPy' and LSPx', needs to 1758 know the traffic attributes of both the original LSPy and LSPx, but it does 1759 not need to know anything about the parameters of RES1_in and RES2_in. 1761 Compared to the model where each path requires an individual LSP 1762 through the network, the use of aggregation and multilevel paths can 1763 save a significant amount of state information and signalling overhead in 1764 the network. The use of multilevel labels enables the de-aggregation 1765 point to still distinguish between the different sources received in the 1766 aggregated LSP and to treat the traffic according to their original 1767 reservations. 1769 For this to be possible, there needs to be a signalling mechanism between 1770 the aggregation point and the deaggregation point to communicate the 1771 traffic attributes of the second level labels that are deaggregated. 1772 Note that this does not mean that the deaggregation point needs to 1773 know the attributes of all individual LSPs that are aggregated; a 1774 deaggregated LSP may still be an aggregate at another level. 1776 Also, if there are a large number of aggregated flows on a single LSP, and 1777 there is a deaggregation point that needs to split the traffic to a number 1778 of aggregated egress LSPs, the deaggregation point only needs to 1779 know which of the second level flows should be associated with which 1780 egress aggregate LSP, and the total aggregate value of each egress 1781 aggregated LSP. 1783 Large benefits can be achieved at the backbone level by aggregating 1784 all the traffic with reservations with similar characteristics onto the 1785 same LSP. 1787 The backbone nodes need only know the reservation parameters of the 1788 aggregated traffic, not the parameters of the individual second level LSPs 1789 that compose the aggregate. A signalling protocol needs to be run between 1790 the sending and receiving domains to be able to sort out the individuals at 1791 the receiving end, but the backbone does not need to participate 1792 in this signalling other than carrying the signalling messages. 1794 The attributes of the aggregated LSP can be modified on the basis of 1795 changes to the constituents of the aggregate, but up to a single message per 1796 change is required to achieve this. Additionally, if this results in 1797 rapid changes to the aggregate attributes, the changes can be dampened, e.g. by 1798 defining a threshold for the minimum change to the aggregate attributes that 1799 needs to happen before the aggregate parameters are signalled to be 1800 changed. 1802 9. NETWORK FUNCTIONAL PARTITIONING 1804 For the purposes of this document, we divide the network elements into 1805 four categories: hosts, CPE routers, operator border MPLS nodes and 1806 core MPLS nodes. Note that this is just a simple model to facilitate the 1807 discussion in this document; there is no reason that the roles of 1808 these network elements cannot be combined. 1810 Edge MPLS nodes are the nodes that connect the MPLS aware network 1811 domain to a non-MPLS aware domain. An example of such an element would be a 1812 border router connecting users attached with Ethernet to the MPLS 1813 aware core network domain.
1815 Both CPE routers and domain border nodes are discussed as MPLS edge 1816 nodes, as their characteristics can be quite same, depending on the 1817 protocols and extent of the MPLS reaches to. 1819 Domain border MPLS nodes are the special cases of the edge MPLS node 1820 that connect the two MPLS aware domains together. 1822 Core MPLS nodes are the MPLS nodes in the core of the network, that are 1823 connected only to the other MPLS nodes; to the edge MPLS nodes and / or 1824 to other core MPLS nodes. 1826 9.1 Network models 1828 Figure 9.1-1. Public MPLS network domain interface 1830 Figure 9.1.-1 depicts the interface between the MPLS network operator 1831 and operator's subscriber network. Subscriber is connected on the MPLS 1832 border node, and depending of the environment can support different 1833 service categories and run different protocols towards the subscriber's 1834 domain. The partitioning of functionality of CPE router and operator 1835 border router in different situations is discussed in section 9.2.2. 1837 9.2 Network element categories 1839 This chapter defines the roles of the different MPLS nodes in the 1840 network, and identifies some basic functionality that these nodes need 1841 to perform to be able to support the traffic management. 1843 For the purposes of this discussion, functionality is divided between 1844 hosts, edge MPLS nodes and core MPLS nodes. 1846 The basic assumption is that instead of using the label information 1847 just to make a forwarding decision, MPLS nodes capable of supporting 1848 differentiated services will use label information also as a part of 1849 the scheduling decision. 1851 9.2.1 Hosts 1852 Hosts are initially likely to be just as they are at a moment, i.e. not 1853 supporting anything more than the best effort application. In the 1854 future, hosts may participate in diffserv packet classification or 1855 support signalling mechanism, such as RSVP to request explicit service 1856 guarantees. 1858 It is also possible that at the some point, hosts participate on the 1859 label distribution protocol. 1861 All of the above functions for the hosts, except the best effort 1862 communication capabilities shall remain optional. 1864 For the different service categories, the functions that the hosts can 1865 implement in the future are detailed in the chapters 9.2.1.1 to 1866 9.2.1.4. 1868 9.2.1.1 Enhanced best effort services 1869 To be able to take advantage of the enhanced best effort service 1870 provided by the network, the modifications to current host TCP/UDP 1871 protocols are not necessarily required. 1873 If the explicit congestion indication information is provided by the 1874 network, modifications to the host transport protocol stack allow the 1875 host to react to the congestion feedback information received from the 1876 network. 1878 9.2.1.2 Differentiated services 1879 To be able to take advantage of the differentiated services provided by 1880 the network, the modifications to current host TCP/UDP protocols are 1881 not necessarily required. Host may optionally participate on the 1882 differentiated services process by performing the packet classification 1883 for the traffic originated from the host. 1885 This is not necessary however, as the flow / packet classification to 1886 differentiated service classes can be also performed on the router. 
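As noted above, a host can optionally take part in the differentiated services process by marking its own traffic. A minimal, non-normative Python example of such host marking, using the standard IP_TOS socket option to set the TOS/DS byte on systems where that option is available, is shown below; the chosen value and the destination address are arbitrary examples and carry no standardised semantics in this framework.

    import socket

    # Open a TCP socket and mark all traffic sent on it with an
    # example TOS/DS byte value; the network (or the CPE router)
    # may still re-mark or police this traffic.
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, 0xB8)
    s.connect(("192.0.2.10", 80))
    s.sendall(b"GET / HTTP/1.0\r\n\r\n")
    s.close()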
1888 Hosts that actively participate on the differentiated services 1889 processing have to support the following mechanisms: 1891 - Classification policy (Mandatory) 1892 - Packet Classification (Mandatory) 1893 - Classification state maintenance (Mandatory) 1895 Hosts that actively participate on the differentiated services 1896 processing may additionally support some of the following mechanisms: 1898 - Flow Classification (Optional) 1899 - Traffic shaping (Optional) 1900 - Scheduling (Optional) 1902 9.2.1.3 Guaranteed services 1903 To be able to take advantage of the guaranteed services provided by the 1904 network, the modifications to current host TCP/UDP protocols are not 1905 necessarily required. Host may optionally participate on the guaranteed 1906 services environment by running the signalling protocol to request the 1907 explicit guarantees from the network. 1909 This is not required, as the flow / packet classification process run 1910 on the router can also make the appropriate requests to the network on 1911 the basis of the header information of the packets received by the 1912 host. 1914 Hosts that actively participate in the guaranteed services processing 1915 have to support the following mechanisms: 1917 - Signalling protocol to request the service (Mandatory) 1919 Hosts that actively participate on the guaranteed services processing 1920 may additionally support some of the following mechanisms: 1922 - Traffic shaping (Optional) 1923 - Scheduling (Optional) 1924 - Flow Classification (Optional) 1926 9.2.1.4 Participation in MPLS 1927 Host may desire to participate on MPLS domain by running the LDP 1928 protocol to request and terminate the paths through the network, 1929 possibly with some attributes associated with the requested paths. 1931 The additional advantage of the host participation may be that, high- 1932 performance hosts may use the flow labeled LSPs to cache the state 1933 information inside the host protocol stack to increase performance by 1934 speeding up or bypassing some of the multilayer protocol stack 1935 processing. The unwanted effects of multilayer multiplexing are 1936 discussed in [Tennenh89]. 1938 Because the hosts have limited information of the overall network 1939 topology and the aggregation strategies used by the network, hosts 1940 should only participate by originating and terminating the LSPs with 1941 the fine granularity. Aggregation and deaggregation functions should 1942 thus be left to the network. 1944 Host that actively participates in the MPLS have to support the 1945 following mechanisms depending on the services used: 1947 - LDP processing (Mandatory) 1948 - Classification policy (Mandatory) 1949 - Packet Classification (Mandatory) 1950 - Classification state maintenance (Mandatory) 1952 Hosts that actively participate on the MPLS may additionally support 1953 some of the following mechanisms: 1955 - Traffic shaping (Optional) 1956 - Active congestion control (Optional) 1957 - Scheduling (Optional) 1958 - Flow Classification (Optional) 1960 In addition, hosts may choose to participate in the Intserv environment 1961 that is also MPLS capable, and use the RSVP to carry labels with the 1962 reservations. 1964 Note that there are important security considerations that generally 1965 make it infeasible for the untrusted hosts directly participate on the 1966 operator's LDP domain in any way, discussed in more detail in section 1967 9.2.2.4. 
1969 However, for the operator owned "trusted" servers, such as web 1970 hosting facilities, etc. host participation may have some performance 1971 advantages. 1973 9.2.2 MPLS edge nodes 1975 In this context we include both CPE router and operator's MPLS domain 1976 in discussion as edge nodes, as the traffic management functionality is 1977 somehow divided between these two nodes, and the mechanisms described 1978 in sections 5 and 6 of this document apply to both. 1980 An MPLS domain edge node contains interfaces to non-MPLS networks, as 1981 well as to MPLS network domain. There are different scenarios that 1982 determine how the functionality between the public operator's MPLS 1983 border node and the CPE node needs to be divided. 1985 Figure 9.2.2. Implementation framework for MPLS edge node TM 1986 functionality, ingress 1988 The functionality and the implementation framework of the MPLS domain 1989 edge node is depicted in Figure 9.2.2. 1991 As a summary of the functionality that needs to be performed at the 1992 ingress point of the MPLS domain, the following list applies: 1994 Mandatory functions for operator border router: 1996 - Admission policy 1997 - Admission control 1998 - Direct mapping 1999 - Indirect mapping 2000 - Either of two: flow policing or LSP policing 2001 - Aggregation 2002 - Deaggregation 2003 - Queue management 2004 - Queuing 2005 - Scheduling 2006 - Label distribution 2008 Mandatory functions in either CPE equipment or operator's border 2009 router: 2011 - Classification policy 2012 - Packet classification 2013 - Classification state maintenance 2015 Remaining functions, that are optional, may be performed in hosts, CPE 2016 router, operator MPLS border router, or not implemented at all: 2018 - Flow classification 2019 - Flow policing 2020 - Merging 2021 - Congestion marking 2022 - Shaping 2024 An MPLS network ingress point, as viewed from the MPLS domains side has 2025 to classify the traffic according to the desired service categories and 2026 allocate the traffic to the LSPs. 2028 This association between the packets at the domain ingress point and 2029 the label switched path with path attributes determines how the packet 2030 will be treated in all subsequent network elements in the LSP 2031 associated with the label. In addition, ingress MPLS node has to 2032 enforce the traffic contract between the subscriber and the public MPLS 2033 domain operator and participate on the label distribution process. More 2034 detailed descriptions of the above listed functions are given in 2035 sections 5 and 6 of this document. 2037 Note that from the direction of the operator's MPLS domain towards the 2038 customer domain, the following functions are not mandatory: 2040 - Flow classifier 2041 - Packet classifier 2042 - Classification policy 2043 - Indirect mapping 2044 - Direct mapping 2045 - Flow policing 2047 The partitioning of the edge functionality is dependent on the services 2048 offered to the customer, and who is responsible for performing the 2049 traffic classification. 2051 The services that can be offered to customer by the public MPLS domain 2052 operator are: 2054 - Best effort services 2055 - Differentiated services 2056 - Guaranteed services 2057 - MPLS 2059 The network boundary between the user's and operators network can 2060 support any number of the above services. Depending on the 2061 implementation model, the support for some of these services may 2062 require signalling support between the MPLS domain and subscriber 2063 interface. 
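The division of ingress functionality listed above can be summarised as a processing pipeline. The following non-normative Python sketch only illustrates the ordering of the functions; the classification, policing, mapping and queuing functions themselves are assumed to be supplied, for example along the lines of the earlier sketches in this document.

    def ingress(pkt, classify, police, map_to_lsp, enqueue):
        """Order of ingress TM functions at the MPLS domain edge."""
        service = classify(pkt)           # packet classification
        service = police(pkt, service)    # enforce traffic contract
        if service is None:
            return None                   # non-conforming, dropped
        lsp = map_to_lsp(service)         # direct or indirect mapping
        return enqueue(pkt, lsp)          # queue management/scheduling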
2065 The different cases are described in more detail in the following 2066 sections, from the standpoint of the operator border node's functionality. 2068 9.2.2.1 Best effort services to customer 2070 If the best effort service is provided to the customer, the edge node would 2071 just map the traffic onto a suitable LSP, according to the procedures defined 2072 for best effort traffic in [Callon97] and [Rosen97a]. 2074 If there are service guarantees (e.g. bandwidth) for some portion 2075 of the user's traffic (e.g. for all traffic destined to network x), 2076 these can be honoured by applying a suitable filter to the traffic 2077 and assigning it to the designated LSP. 2079 9.2.2.2 Differentiated services to customer 2081 In the differentiated service model, the packets need to be marked on the 2082 basis of some policy, and the packets receive different treatment 2083 on the basis of the values carried in the DS byte (encapsulated in the TOS 2084 field of the IPv4 packet). This marking can be performed by the customer 2085 equipment, such as the CPE router or the customer's hosts. 2087 In the case that the marking is performed by the subscriber, the 2088 operator's border router needs to police the traffic according to the 2089 service contract between the operator and the customer. The operator may 2090 also need to measure the traffic for accounting purposes, depending on 2091 the contract. 2093 Another alternative is that the operator performs this marking in the 2094 access nodes on the basis of the policy agreed with the customer. 2096 9.2.2.3 Guaranteed services to customer 2098 For the security reasons stated in the next chapter, the use of 2099 guaranteed services towards the customer based on MPLS labelling 2100 is not advisable. 2102 If guaranteed services are supported, the signalling protocol, such 2103 as RSVP, needs to be terminated on the operator's border node and the 2104 filter to achieve the classification needs to be applied to each 2105 packet. 2107 If signalling based guaranteed services are used towards the public 2108 network, the network operator may assign the resulting traffic onto 2109 its own LSP or aggregate it to an LSP with suitable service 2110 guarantees towards the public network. Note that the operator's border 2111 router does not necessarily have to perform the aggregation, as it may 2112 be unlikely that there will be a suitable LSP towards the destination 2113 available. 2115 Alternatively, if signalling is not used, the operator can just apply a 2116 set of pre-specified filters according to some policy agreed between 2117 the customer and the operator. 2119 9.2.2.4 MPLS to customer 2121 The operator can run MPLS towards the customer premises, but there are some 2122 important considerations that need to be taken into account in such 2123 environments. 2125 Since the customer is a non-trusted entity from the operator's 2126 standpoint, and MPLS allows the establishment of switched paths 2127 towards the destination, there is no way for the operator to 2128 control what enters onto the LSP the subscriber's traffic is carried on. This 2129 opens the possibility of denial of service attacks, and other kinds of 2130 malicious use that could otherwise be prevented by ingress 2131 filtering on the operator's ingress node. When the traffic enters on 2132 the LSP, it is impossible to determine where the traffic originated 2133 from after it is merged with the other traffic, assuming that bogus 2134 source addresses are used.
The only way to prevent this would be to 2135 terminate the LSPs originated from customer premises on the operator's 2136 border node, but in such case there is no reason to run MPLS to the 2137 customer for this type of traffic at all. 2139 Additionally, as the customer does not have the information of the 2140 operator's traffic aggregation policies and access to the routing 2141 information, customer will not be able to perform traffic aggregation. 2142 This would, in practice, mean that the MPLS sessions between operator 2143 and subscriber would have to be based on individual flows, and operator 2144 would be responsible for appropriate aggregation. 2146 An environment, where the use of the MPLS to customer premises makes 2147 sense is when the MPLS is used to create VPNs for the customer. The 2148 customer could then assign the traffic that is destined on the LSP 2149 that's part of the VPN to appropriate VPN. Even in these environments, 2150 it would make sense to use ordinary routing for other traffic. This 2151 assumes that the VPN LSP endpoint(s) trusts the sending entity to some 2152 extent, as the traffic would be carried quite transparently through the 2153 operator's network. 2155 In any case, all traffic that is entering onto operator's network that 2156 is destined to public the network should be validated for the source 2157 address before encapsulating to any label switched path. 2159 So, as a summary, MPLS to the customer's premises does not make much 2160 sense in typical environments. 2162 9.2.3 MPLS core node 2164 Figure 9.2.3 Implementation framework for MPLS core node TM 2165 functionality 2167 MPLS core nodes are high capacity switching elements, that contain only 2168 MPLS interfaces. 2170 Core nodes need to forward packets at high speed and differentiate the 2171 queuing treatment on basis of the label they are received with. These 2172 nodes also participate in routing and label distribution protocols, and 2173 have to support admission control for the traffic that has reservation 2174 requests. 2176 The important thing to note is that the associated state information 2177 for the treatment of the arriving packets can be determined on basis of 2178 label, there is no need for the knowledge or reapplication of the 2179 admission policies or traffic filtering. 2181 The following is a list of the traffic management functions typically 2182 performed by core node: 2184 Mandatory functions: 2186 - Admission policy 2187 - Admission control 2188 - Aggregation 2189 - Queue management 2190 - Passive congestion control 2191 - Queuing 2192 - Scheduling 2193 - Label distribution 2195 Optional mechanisms: 2197 - Deaggregation 2198 - Congestion marking 2199 - LSP policing 2201 Above mechanisms are described in more detail in sections 5 and 6 of 2202 this document. 2204 9.3 Interface categories 2205 9.3.1 Interface to non-MPLS networks 2207 This interface is the point where the MPLS domain connects to existing 2208 network infrastructure, and the first point in the ingress direction, 2209 where packet labelling is performed. Also, in the egress side of the 2210 interface, labels are removed and packets are encapsulated according to 2211 the corresponding data link layer encapsulation. 2213 9.3.2 Interface inside MPLS network domains 2215 This interface is the interconnection point between the different MPLS 2216 network elements inside the domain. 
9.3 Interface categories

9.3.1 Interface to non-MPLS networks

This interface is the point where the MPLS domain connects to the existing network infrastructure, and the first point in the ingress direction where packet labelling is performed. On the egress side of the interface, labels are removed and packets are encapsulated according to the corresponding data link layer encapsulation.

9.3.2 Interface inside MPLS network domains

This interface is the interconnection point between the different MPLS network elements inside the domain. It is characterised by the fact that packets are received and transmitted labelled, and that forwarding and scheduling decisions are made on the basis of the label associated with the received packet.

9.3.3 Interface between MPLS network domains

This interface is the interconnection point between two operationally different MPLS network domains.

Such an interface applies the policies related to admission of labelled path set-ups through the operator's network, and meters usage, especially for advanced service categories, in order to monitor and create inter-operator settlement agreements. The policing functions at this interface are applied at the LSP level.

The deaggregation of arriving traffic, aggregated onto incoming LSPs, onto the appropriate LSPs inside the domain can be done either immediately at this interface point or somewhere else in the network. Deaggregation is generally required, as it is advantageous for the external domain's operator to aggregate the traffic as much as possible, and also because the internal topology (and the corresponding LDP paths) is not known to the external domain.
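A minimal sketch of the deaggregation and metering just described, assuming a two-level label stack in which the top level label identifies the inter-domain aggregate and the second level label selects the LSP used inside the domain; the attribute and table names are illustrative assumptions.

# Per-aggregate byte counters support metering for inter-operator
# settlement; 'internal_ilm' maps second level labels to the LSP state
# used inside the domain (assumed structures).
aggregate_octets = {}
internal_ilm = {}

def deaggregate(packet):
    top = packet.label_stack.pop()     # remove the inter-domain (top level) label
    aggregate_octets[top] = aggregate_octets.get(top, 0) + packet.length
    inner = packet.label_stack[-1]     # second level label
    return internal_ilm[inner]         # LSP to be used inside the domain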
10. LSP MAPPINGS TO EXISTING LINK LAYER TECHNOLOGIES

This section will discuss the mapping of traffic with different service guarantees onto specific data link layers, and which of the requirements outlined in chapter 4 can be achieved. It will concentrate on ATM and Frame Relay environments, and on what is missing from the current best effort mapping proposals.

--- This section to be added later ---

11. GENERAL REQUIREMENTS FOR LABEL ENCAPSULATIONS

11.1 Differentiated services support

Proposals for "differentiated services" require some priority bits to be carried in the packets, providing additional information that helps to select the appropriate queuing and scheduling actions in the intermediate routers. These mechanisms generally rely on the use of the IPv4 TOS field.

At first look, it appears that the determination of the scheduling action should be based on both the label and these differentiated service bits. There are, however, several reasons for determining all associated parameters strictly on the basis of the information contained in the label:

1.) Straightforward mapping to hardware implementation

Since MPLS nodes are expected to be based on high capacity hardware implementations of the forwarding process, the lookup result should map directly onto the hardware implementation of the particular product. Because the internal implementations of the supporting mechanisms are not subject to standardisation, even if some header bits are used to indicate e.g. priority, a possibly complex mapping may need to be performed to resolve the information that controls hardware based scheduling decisions. When the information is distributed with the LDP, the network element can perform the necessary internal mappings and then use a hardware lookup table to determine the parameters that control the scheduling hardware.

2.) Support for fine grained service guarantees

To support fine grained service guarantees, such as INTSERV controlled load or guaranteed service, it is impractical to carry the required amount of state information in every packet. Also, because implementations vary, the information cannot be subject to standardisation. In addition, the reasons given in 1.) also apply here.

3.) The MPLS node needs to look only at a fixed portion of the header

Even if the information for providing differentiated services could be carried in the packet, the system would become specific to the header format of the given protocol, such as IPv4. When the protocol changes, the position where the information resides inside the header also changes. This implies that the hardware must either identify the protocol on the fly to determine where to look for the information, or be statically configured for the entity performing the lookup function, which means that simultaneous support of multiple protocols is not feasible. When the information is retrieved purely on the basis of the label, these problems do not exist; it would be possible, for example, to provide exactly the same services for IPv4 and IPv6 using the same label based forwarding entity.

4.) Legacy hardware support

Since much of the current effort concentrates on how label switching can be supported in legacy hardware with as few modifications as possible, it makes more sense to perform the mapping on the basis of the label. For example, current ATM environments have no support for the differentiated services concept as it is being discussed, but some quite straightforward mappings can be realised using the currently defined ATM service categories.

5.) Single, standard forwarding paradigm

If the lookup is kept strictly label based, the same kinds of services can be provided for completely different applications and protocols using the same network elements. It also means that new services can be introduced by developing extensions to the LDP and implementing the appropriate improvements in the network elements, while keeping the same basic concept intact.

Note that using different labels for different service class encodings increases the required label space, but in environments that support only best effort or guaranteed traffic, these bits can be used by different LSPs.

11.2 Congestion management support

11.2.1 Congestion indicator bit

For the purposes of congestion management, it is desirable to have one bit of the label indicate that the LSP is experiencing congestion.

If the label encapsulation header is protected by a checksum that covers the label, it is desirable either that this bit be excluded from the checksum calculation, so that the hardware can modify the bit directly, or that a checksum modification mechanism be specified that allows easy recalculation of the checksum when the bit is modified.

In frame relay environments, the FECN and BECN bits shall be used as the congestion notification bits. In ATM environments the CI bit of the header shall be used for congestion notification.

When multilevel labelling is used, the value of the CI bit shall be copied to the CI bit of the second level label at the deaggregation point, where the top level label is removed.
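If a label header checksum of the Internet ones-complement kind were in use, the recalculation mentioned above could be done incrementally when the congestion bit is set, along the lines of the sketch below (following the RFC 1624 update rule). This is only an illustration of the mechanism; no label checksum or bit position is actually defined by this framework.

def set_ci_and_fix_checksum(word, checksum, ci_mask=0x0001):
    # Set the congestion indication bit in a 16-bit header word and
    # incrementally adjust a ones-complement checksum covering it:
    # HC' = ~(~HC + ~m + m').  The mask and word layout are assumptions.
    new_word = word | ci_mask
    if new_word != word:
        s = (~checksum & 0xFFFF) + (~word & 0xFFFF) + new_word
        s = (s & 0xFFFF) + (s >> 16)   # fold carries back in
        s = (s & 0xFFFF) + (s >> 16)
        checksum = ~s & 0xFFFF
    return new_word, checksum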
11.2.2 Examine me bit

To support more advanced traffic management mechanisms, it may be useful to have one bit of the label indicate that the packet carries information that intermediate network elements need to either copy or modify.

The advantage of having this bit encoded in the label, instead of using a dedicated LSP between nodes, is that the associated operations can be performed on a per-LSP basis, possibly in hardware.

The requirement to support this bit needs further study. It may be advisable to reserve one bit for this (or another) purpose from the beginning, even if its use has not been defined.

This cannot easily be supported in standard ATM switching hardware, but ATM provides similar mechanisms at the cell level with OAM and RM cells.

11.3 Support for multilevel label switched paths

Multilevel label support is essential for the scaleable support of label switched paths with explicit guarantees. The mechanisms for supporting this shall be included in the label encapsulation protocol.

Two levels of multi-level labels are generally sufficient for traffic management purposes, and in ATM environments this may be realised using VPI/VCI partitioning to support the first and second level label encodings.

12. GENERAL REQUIREMENTS FOR DISTRIBUTION OF LABELS AND TM ATTRIBUTES

To be able to realise the basic set of TM functionality, the following functions shall be available in the protocol used to distribute labels and the associated traffic management attributes.

12.1 Setup request

This function is used to request the set-up of a label switched path of the desired topological scope, granularity and attributes. The traffic management related attributes need to be specified and available in the LSP set-up request.

Some of the traffic management related attributes that shall be available to the set-up request function:

- Bandwidth (bits/s)
- Discard priority class (1-Ndisc) (as specified by the differentiated services WG)
- Delay priority class (1-Ndel) (as specified by the differentiated services WG)

It shall be possible to add other attributes that may be required, depending on the desired services and/or signalling protocols to be used in the MPLS context.

12.2 Setup modification

The LSP set-up modification function is used to modify the attributes of LSPs that have already been set up. A modification can be, for example, an addition or reduction of the bandwidth of the associated label switched path.

The same attributes as for the set-up request shall be available to the set-up modification function.

12.3 Setup acknowledge

An LSP set-up acknowledge is received when the LSP with the desired attributes has been set up. A set-up acknowledge can result from either the set-up request or the set-up modification function.

12.4 Setup reject

An LSP set-up is rejected when an LSP with the desired attributes cannot be supported by the network. A set-up reject can result from either the set-up request or the set-up modification function. The set-up reject shall communicate the reason why the request or modification was rejected.

Traffic management related parameters that shall be available and returned in error conditions:

- Reason for rejection: no support for the service, no LDP available, no resources
- Information such as: available bandwidth, highest available priority, etc.
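The following sketch summarises, in abstract form, the set-up functions and TM attributes of sections 12.1 - 12.4; the message and field names are illustrative only and do not correspond to objects defined in LDP, RSVP or any other protocol.

from dataclasses import dataclass
from enum import Enum

class RejectReason(Enum):          # 12.4: reasons returned on rejection
    NO_SERVICE_SUPPORT = 1
    NO_LDP_AVAILABLE = 2
    NO_RESOURCES = 3

@dataclass
class TMAttributes:                # 12.1: attributes carried in a request
    bandwidth_bps: int             # bandwidth (bits/s)
    discard_class: int             # 1..Ndisc, per differentiated services work
    delay_class: int               # 1..Ndel, per differentiated services work

@dataclass
class SetupRequest:                # 12.1, reused for modification (12.2)
    lsp_id: int
    attrs: TMAttributes

@dataclass
class SetupAck:                    # 12.3
    lsp_id: int
    attrs: TMAttributes            # attributes actually granted

@dataclass
class SetupReject:                 # 12.4
    lsp_id: int
    reason: RejectReason
    available_bw_bps: int          # hints back to the requester
    highest_avail_class: int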
12.5 Discussion of signalling protocols

12.5.1 General

There have been proposals to use RSVP for implementing all services that require more than best effort traffic category support in the MPLS environment.

There is also another proposal for implementing a limited set of services, supporting a limited set of traffic management functionality mainly suitable for the network operator's traffic engineering needs, directly in the LDP protocol, which does not require the use of RSVP.

The approaches have similar characteristics, and it is quite possible to achieve the desired functionality either way. Neither method is obviously better than the other, and it is unclear whether these proposals are complementary or competing.

In addition, operator driven network traffic engineering does not have as strict requirements for the dynamics and granularity of control. It might be feasible to implement the attributes required for traffic engineered LSPs directly in the LDP, without mandating the use of RSVP in networks that do not support differentiated services.

To determine the suitability of the signalling mechanisms for TM support, the proposals should be evaluated against the traffic management related requirements and against their applicability to different topologies, topological scopes and reservation models.

12.5.2 LDP

The Label Distribution Protocol (LDP) has been proposed for communicating the bindings between routes and LSPs between MPLS nodes.

The current LDP proposal [Andersson97] does not include objects to carry any traffic management related attributes for the LSPs, except for a placeholder for the Class-of-Service objects.

The COS object semantics have not been specified in the current version of the document, and they will likely be based on the work done in the IETF DIFFSERV working group.

It has been proposed to extend the current LDP proposal to include the basic set of traffic management related attributes as part of the LDP. The reasoning behind this is that in environments that are not otherwise using RSVP, and do not need all the features provided by RSVP, the additional complexity brought in by RSVP may be too expensive to implement, and the use of a single protocol (LDP) should be sufficient.

12.5.3 RSVP

RSVP [RFC2205] was originally developed for communicating the reservation parameters of unicast traffic, and the reservation parameters of heterogeneous receivers for multicast traffic.

RSVP has scalability issues in large scale deployment, which are discussed in [RFC2208] and [Schwantag97].

MPLS can be used to address some of the scalability problems of RSVP by using RSVP signalling at the edges of the network and either using RSVP tunnels inside the network, or mapping the reservations onto some other signalling protocol used to carry reservation information inside the operator's network and possibly across domain boundaries.

MPLS networks have the ability, in the core network elements, to make forwarding decisions using a simple label based lookup instead of applying a flow specific filter to each packet, as required by a conventional RSVP implementation. Also, because of the aggregation of reservations, the core routers can forward the traffic without keeping track of per-flow state.
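As a sketch of the aggregation just described, an edge node might fold individual RSVP reservations into the bandwidth of an aggregate LSP, so that core nodes see only per-LSP state. The class, the admission rule and the idea of resizing or rejecting on overflow are illustrative assumptions rather than defined procedures.

class AggregateLSP:
    # Aggregate LSP carrying many edge RSVP reservations (sketch).
    def __init__(self, lsp_id, capacity_bps):
        self.lsp_id = lsp_id
        self.capacity_bps = capacity_bps
        self.reserved_bps = 0
        self.flows = {}                    # flow id -> reserved rate

    def admit(self, flow_id, rate_bps):
        # Edge admission control: accept the per-flow reservation only if
        # the aggregate can still carry it; otherwise the edge would have
        # to resize the LSP or reject the RSVP request.
        if self.reserved_bps + rate_bps > self.capacity_bps:
            return False
        self.flows[flow_id] = rate_bps
        self.reserved_bps += rate_bps
        return True

    def release(self, flow_id):
        self.reserved_bps -= self.flows.pop(flow_id, 0)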
RSVP has also been proposed as the signalling protocol to be used in the MPLS context for communicating the reservation attributes of the label switched paths inside the network [Li97]. This operation model can be coupled to user-to-network RSVP signalling, or it can operate independently inside the network, between the network elements.

There are also proposals for setting up explicit paths with reservations inside an MPLS domain using extended RSVP and assigning labels to such paths [Gan97], [Guerin97], [Davie97a] and [Davie97b].

13. REFERENCES

[Andersson97] "LDP Specification", L. Andersson, P. Doolan, N. Feldman, A. Fredette, work in progress, draft-mplsdt-ldp-spec-00.txt, November 1997

[ATMF96] "Traffic Management Specification, Version 4.0", ATM Forum, April 1996

[Berson97] "Aggregation of Integrated Services State", S. Berson, S. Vincent, work in progress, draft-berson-classy-approach-01.ps, November 1997

[Braden97] "Recommendations on Queue Management and Congestion Avoidance in the Internet", B. Braden, D. Clark, J. Crowcroft, B. Davie, S. Deering, D. Estrin, S. Floyd, V. Jacobson, G. Minshall, C. Partridge, L. Peterson, K. Ramakrishnan, S. Shenker, J. Wroclawski and L. Zhang, work in progress, draft-irtf-e2e-queue-mgt-00.ps, March 1997

[Bradner97] "Internet Protocol Quality of Service Problem Statement", S. Bradner, work in progress, draft-bradner-qos-problem-00.txt, September 1997

[Callon97] "A Framework for Multiprotocol Label Switching", R. Callon, P. Doolan, N. Feldman, A. Fredette, G. Swallow, and A. Viswanathan, work in progress, draft-ietf-mpls-framework-02.txt, November 19, 1997

[Claffy95] "A parameterizable methodology for Internet traffic flow profiling", K. C. Claffy, H-W. Braun, G. C. Polyzos, IEEE Journal on Selected Areas in Communications, vol. 13, no. 8, pp. 1481-1494, October 1995

[Cisco97] "Netflow", White Paper, Cisco Systems, 1997

[Crawley98] "A Framework for QoS-based Routing in the Internet", E. Crawley, R. Nair, B. Rajagopalan, H. Sandick, work in progress, draft-ietf-qosr-framework-03.txt, March 2, 1998

[Davie97a] "Use of Label Switching With RSVP", B. Davie, Y. Rekhter, E. Rosen, A. Viswanathan, V. Srinivasan, work in progress

[Davie97b] "Explicit Route Support in MPLS", B. Davie, T. Li, E. Rosen, Y. Rekhter, work in progress, draft-davie-mpls-explicit-routes-00.txt, November 1997
[Ferguson98] "Simple Differential Services: IP TOS and Precedence, Delay Indication, and Drop Preference", P. Ferguson, work in progress

[Floyd93] "Random Early Detection gateways for Congestion Avoidance", S. Floyd, V. Jacobson, IEEE/ACM Transactions on Networking, volume 1, number 4, August 1993, pp. 397-413

[Fredette97] "Stream Aggregation", A. Fredette, C. White, L. Andersson, P. Doolan, work in progress, November 1997

[Gan97] "Setting up Reservations on Explicit Paths using RSVP", D.-H. Gan, R. Guerin, S. Kamat, T. Li, E. Rosen, work in progress, draft-guerin-expl-path-rsvp-01.txt, 21 November 1997

[Guerin97] "Aggregating RSVP-based QoS Requests", R. Guerin, S. Blake, S. Herzog, work in progress, draft-guerin-aggreg-rsvp-00.txt, 21 November 1997

[I370] "Congestion Management for the ISDN Frame Relaying Bearer Service", Recommendation I.370, ITU-T, 1991

[Jagan97] "End-to-End Traffic Management in IP/ATM Internetworks", S. Jagannath, N. Yin, work in progress, draft-jagan-e2e-traf-mgmt-00.txt, August 1997

[Li98] "Provider Architecture for Differentiated Services and Traffic Engineering (PASTE)", T. Li, Y. Rekhter, work in progress, draft-li-paste-00.txt, January 1998

[Nichols98] "Differentiated Services Operational Model and Definitions", K. Nichols, S. Blake, work in progress, draft-nichols-dsopdef-00.txt, February 1998

[Packeteer97] "Controlling TCP/IP bandwidth", TCP/IP Bandwidth Management Series, Vol. 1, Number 1, The Packeteer Technical Journal, 1997

[Ramakr97] "A Proposal to add Explicit Congestion Notification (ECN) to IPv6 and to TCP", K. K. Ramakrishnan, S. Floyd, work in progress, draft-kksjf-ecn-00.txt, November 1997

[Rampal97] "Flow Grouping For Reducing Reservation Requirements for Guaranteed Delay Service", S. Rampal, R. Guerin, work in progress, draft-rampal-flow-delay-service-01.txt, July 15, 1997

[RFC1633] "Integrated Services in the Internet Architecture: an Overview", R. Braden, D. Clark, S. Shenker, RFC-1633, June 1994

[RFC1827] "IP Encapsulating Security Payload (ESP)", R. Atkinson, RFC-1827, August 1995

[RFC1954] "Transmission of Flow Labelled IPv4 on ATM Data Links", P. Newman, W. L. Edwards, R. Hinden, E. Hoffman, F. Ching Liaw, T. Lyon, G. Minshall, RFC-1954, May 1996

[RFC2063] "Traffic Flow Measurement: Architecture", N. Brownlee, C. Mills, G. Ruth, RFC-2063, January 1997

[RFC2205] "Resource Reservation Protocol (RSVP) - Version 1 Functional Specification", R. Braden, L. Zhang, S. Berson, S. Herzog, S. Jamin, RFC-2205, September 1997

[RFC2208] "Resource ReSerVation Protocol (RSVP) Version 1 Applicability Statement: Some Guidelines on Deployment", A. Mankin, F. Baker, B. Braden, S. Bradner, M. O'Dell, A. Romanow, A. Weinrib, L. Zhang, RFC-2208, September 1997

[RFC2211] "Specification of the Controlled-Load Network Element Service", J. Wroclawski, RFC-2211, September 1997

[RFC2212] "Specification of Guaranteed Quality of Service", S. Shenker, C. Partridge, R. Guerin, RFC-2212, September 1997

[Rosen97a] "A Proposed Architecture for MPLS", E. Rosen, A. Viswanathan and R. Callon, work in progress, draft-ietf-mpls-arch-00.txt, July 1997

[Rosen97b] "Label Switching: Label Stack Encodings", E. C. Rosen, Y. Rekhter, D. Tappan, D. Farinacci, G. Fedorkow, T. Li, A. Conta, work in progress, draft-rosen-tag-stack-03.txt, July 1997
[Schwantag97] "An Analysis of the Applicability of RSVP", Ursula Schwantag, Diploma Thesis, Universitat Karlsruhe, July 15, 1997

[Smith97] "Research Challenges for the Next Generation Internet", J. E. Smith, F. W. Weingarten, Computing Research Association, May 12-14, 1997

[Tennenh89] "Layered Multiplexing Considered Harmful", D. Tennenhouse, Protocols for High-Speed Networks, Rudin and Williamson (Editors), North Holland, Amsterdam, 1989

[Touch97] "Bridging the Gap Between Optical Networks and the Internet: Summary of a Mini-Workshop", DRAFT, Oct. 1-2, 1997, Arlington, VA, Joe Touch, Ken Young, Joe Berthold

14. SECURITY CONSIDERATIONS

As support for different levels of service, together with different pricing structures, comes into effect, mechanisms to monitor service usage, to enforce the service contract between the parties, and to provide authorisation and billing will become important.

It is essential to develop the associated protocols in such a way that different forms of service abuse, such as theft of service, are not easily possible.

Since this document is not a protocol specification, the specifics of the implementation alternatives are not discussed here.

15. AUTHORS' ADDRESSES

Pasi Vaananen
Nokia Telecommunications, Inc.
3 Burlington Woods Drive, Suite 250
Burlington, MA 01803
USA
Phone: (781) 238-4981
Fax: (781) 238-4949
Email: pasi.vaananen@ntc.nokia.com

Rayadurgam Ravikanth
Nokia Research Center
3 Burlington Woods Drive, Suite 260
Burlington, MA 01803
USA
Phone: (781) 238-4905
Fax: (781) 238-4949
Email: ravikanth.rayadurgam@research.nokia.com