Networking Working Group                                     S. Dasgupta
Internet-Draft                                           JC. de Oliveira
Intended status: Informational                         Drexel University
Expires: January 12, 2009                                    JP. Vasseur
                                                           Cisco Systems
                                                           July 11, 2008


  Performance Analysis of Inter-Domain Path Computation Methodologies
              draft-dasgupta-ccamp-path-comp-analysis-02

Status of this Memo

   By submitting this Internet-Draft, each author represents that any
   applicable patent or other IPR claims of which he or she is aware
   have been or will be disclosed, and any of which he or she becomes
   aware will be disclosed, in accordance with Section 6 of BCP 79.
   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF), its areas, and its working groups.  Note that
   other groups may also distribute working documents as Internet-
   Drafts.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."

   The list of current Internet-Drafts can be accessed at
   http://www.ietf.org/ietf/1id-abstracts.txt.

   The list of Internet-Draft Shadow Directories can be accessed at
   http://www.ietf.org/shadow.html.

   This Internet-Draft will expire on January 12, 2009.

Abstract

   This document presents a performance comparison between the per-
   domain path computation method and the Backward Recursive PCE-based
   Computation (BRPC) procedure defined in the Path Computation Element
   (PCE) architecture.  Metrics to capture the significant performance
   aspects are identified, and detailed simulations are carried out on
   realistic scenarios.  A performance analysis for each of the path
   computation methods is then undertaken.

Requirements Language

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
   document are to be interpreted as described in RFC 2119 [RFC2119].

Table of Contents

   1.  Terminology
   2.  Introduction
   3.  Evaluation Metrics
   4.  Simulation Setup
   5.  Results and Analysis
     5.1.  Path Cost
     5.2.  Crankback/Setup Delay
     5.3.  Signaling Failures
     5.4.  Failed TE LSPs/Bandwidth on Link Failures
     5.5.  TE LSP/Bandwidth Setup Capacity
   6.  IANA Considerations
   7.  Security Considerations
   8.  Acknowledgment
   9.  References
     9.1.  Normative References
     9.2.  Informative References
   Authors' Addresses
   Intellectual Property and Copyright Statements

1.  Terminology

   Terminology used in this document:

   TE LSP:  Traffic Engineered Label Switched Path.

   CSPF:  Constrained Shortest Path First.

   PCE:  Path Computation Element.

   BRPC:  Backward Recursive PCE-based Computation.

   AS:  Autonomous System.

   ABR:  Router used to connect two IGP areas (areas in OSPF or levels
      in IS-IS).

   ASBR:  Router used to connect ASes of the same or different Service
      Providers via one or more inter-AS links.

   Border LSR:  A border LSR is either an ABR in the context of inter-
      area TE or an ASBR in the context of inter-AS TE.

   VSPT:  Virtual Shortest Path Tree.

   LSA:  Link State Advertisement.

   LSR:  Label Switching Router.

   IGP:  Interior Gateway Protocol.

   TED:  Traffic Engineering Database.

   PD:  Per-Domain.
2.  Introduction

   The IETF has specified two approaches for the computation of inter-
   domain (Generalized) Multi-Protocol Label Switching (MPLS) Traffic
   Engineering (TE) Label Switched Paths (LSPs): the per-domain path
   computation approach defined in
   [I-D.ietf-ccamp-inter-domain-pd-path-comp] and the PCE-based
   approach specified in [RFC4655].  More specifically, we study the
   PCE-based path computation model that makes use of the BRPC method
   outlined in [I-D.ietf-pce-brpc].  In the rest of this document, we
   refer to the per-domain path computation approach as PD and to the
   PCE path computation approach as PCE.

   In the per-domain path computation approach, each path segment
   within a domain is computed during the signaling process by each
   entry node of the domain, up to the next-hop exit node of that same
   domain.

   By contrast, the PCE-based approach, and in particular the BRPC
   method defined in [I-D.ietf-pce-brpc], relies on the collaboration
   between a set of PCEs to find the shortest inter-domain path, after
   the computation of which the corresponding TE LSP is signaled: path
   computation is undertaken by multiple PCEs in a backward recursive
   fashion from the destination domain to the source domain.  The
   notion of a Virtual Shortest Path Tree (VSPT) is introduced.  Each
   link of a VSPT represents the shortest path satisfying the set of
   required constraints between a border node of a domain and the
   destination LSR.  The VSPT of each domain is returned by the
   corresponding PCE and used by the PCEs of the other domains to build
   their own VSPTs.  [I-D.ietf-pce-brpc] discusses the BRPC procedure
   in complete detail.  A simplified sketch of the recursion is given
   at the end of this section.

   This document presents simulation results and analysis to compare
   the performance of the above two inter-domain path computation
   approaches.  Two realistic topologies with accompanying traffic
   matrices are used to undertake the simulations.

   Note that although the simulation results discussed in this document
   were obtained on inter-area networks, they also apply to inter-AS
   cases.

   Disclaimer: although simulations have been run on different and
   realistic topologies showing consistent results, the metrics shown
   below may vary with the network topology.
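   As an illustration of the backward recursion, the following Python
   sketch builds the VSPTs for a toy three-domain chain.  It is purely
   illustrative: the topology and the names (Domain, brpc, dijkstra,
   sym) are invented for this example, paths are unconstrained, and the
   constraint pruning and PCEP message exchanges of the actual BRPC
   procedure [I-D.ietf-pce-brpc] are omitted.

   # Illustrative sketch only: a toy, unconstrained BRPC/VSPT recursion
   # on a hand-made three-domain chain.  Real BRPC also prunes links
   # that violate the TE constraints, which is omitted here.
   import heapq

   def dijkstra(graph, src):
       """Shortest-path costs and predecessors from src over a
       dict-of-dicts graph."""
       dist, prev, heap = {src: 0}, {}, [(0, src)]
       while heap:
           d, u = heapq.heappop(heap)
           if d > dist.get(u, float("inf")):
               continue
           for v, w in graph.get(u, {}).items():
               nd = d + w
               if nd < dist.get(v, float("inf")):
                   dist[v], prev[v] = nd, u
                   heapq.heappush(heap, (nd, v))
       return dist, prev

   class Domain:
       """One domain and its PCE's local visibility: the domain's own
       links plus the inter-domain links toward the entry border nodes
       of the downstream domain."""
       def __init__(self, name, links, entry_border_nodes):
           self.name = name
           self.links = links                  # {u: {v: cost}}
           self.entry_border_nodes = entry_border_nodes

   def brpc(domains, src, dst):
       """Backward recursion over a chain of domains listed from the
       source domain to the destination domain.  The destination
       domain's PCE returns a VSPT (one entry per entry border node,
       weighted by its shortest cost to dst); each upstream PCE grafts
       that VSPT onto its own topology and computes a smaller VSPT,
       until the source domain's PCE obtains the end-to-end cost."""
       vspt = {}                               # {border node: cost to dst}
       for dom in reversed(domains):
           graph = {u: dict(nbrs) for u, nbrs in dom.links.items()}
           for bn, cost in vspt.items():       # graft the downstream VSPT
               graph.setdefault(bn, {})[dst] = cost
           roots = [src] if dom is domains[0] else dom.entry_border_nodes
           vspt = {}
           for r in roots:
               dist, _ = dijkstra(graph, r)
               if dst in dist:
                   vspt[r] = dist[dst]
       return vspt.get(src)                    # None if no path exists

   def sym(edges):
       """Undirected dict-of-dicts graph from (u, v, cost) triples."""
       g = {}
       for u, v, w in edges:
           g.setdefault(u, {})[v] = w
           g.setdefault(v, {})[u] = w
       return g

   if __name__ == "__main__":
       d1 = Domain("D1", sym([("S", "A1", 1), ("S", "A2", 4),
                              ("A1", "B1", 1), ("A2", "B2", 1)]), [])
       d2 = Domain("D2", sym([("B1", "M", 2), ("M", "E1", 2),
                              ("B2", "E2", 1), ("E1", "C1", 1),
                              ("E2", "C2", 1)]), ["B1", "B2"])
       d3 = Domain("D3", sym([("C1", "X", 1), ("X", "T", 1),
                              ("C2", "T", 5)]), ["C1", "C2"])
       print(brpc([d1, d2, d3], "S", "T"))     # -> 9

   In this toy example the recursion returns 9, the cost of the
   shortest inter-domain path S-A1-B1-M-E1-C1-X-T, which a per-domain
   computation could miss if an intermediate border node were chosen
   greedily.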
3.  Evaluation Metrics

   This section discusses the metrics that are used to quantify and
   compare the performance of the two approaches.

   o  Path Cost.  The maximum and average path costs are observed for
      each TE LSP.  The distributions of the maximum and average path
      costs are then compared for the two path computation approaches.

   o  Signaling Failures.  Signaling failures may occur in various
      circumstances.  With PD, the head-end LSR chooses the downstream
      border router (ABR, ASBR) according to some selection criteria
      (IGP shortest path, ...) based on the information in its TED.
      This ABR then selects the next ABR using its TED, continuing the
      process until the destination is reached.  At each step, the TED
      information could be out of date, potentially resulting in a
      signaling failure during setup.  In the BRPC procedure, the PCEs
      are the ABRs that cooperate to form the VSPT based on the
      information in their respective TEDs.  As in the case of the PD
      approach, information in the TED could be out of date,
      potentially resulting in signaling failures during setup.  In
      addition, only with the PD approach, another situation that leads
      to a signaling failure is when the selected exit ABR does not
      have any path obeying the set of constraints toward a downstream
      exit node or the TE LSP destination.  This situation does not
      occur with BRPC.  The signaling failure metric captures the total
      number of signaling failures that occur during initial setup and
      reroute (on link failure) of a TE LSP.  The distribution of the
      number of signaling failures encountered for all TE LSPs is then
      compared for the PD and BRPC methods.

   o  Crankback Signaling.  In this document we made the assumption
      that in the case of PD, when an entry border node fails to find a
      route in the corresponding domain, boundary re-routing crankback
      signaling [RFC4920] is used.  A crankback signaling message
      propagates back to the entry border node of the upstream domain,
      where a new exit border node is chosen.  After this, path
      computation takes place to find a path segment to a new entry
      border node of the next domain.  This causes an additional delay
      in setup time.  This metric captures the distribution of the
      number of crankback signals and the corresponding delay in setup
      time for a TE LSP when using PD.  The total delay arising from
      the crankback signaling is proportional to the costs of the links
      over which the signal travels, i.e., the path that is set up from
      the entry border node of a domain to its exit border node (the
      assumption was made that link metrics reflect propagation
      delays).  As for the other metrics, the distribution of the total
      crankback signaling and the corresponding proportional delay
      across all TE LSPs is compared (a small sketch of this
      computation is given after this list).

   o  TE LSPs/Bandwidth Setup Capacity.  Due to the different path
      computation techniques, there is a significant difference in the
      number of TE LSPs and the amount of bandwidth that can be set up.
      This metric captures the difference in the number of TE LSPs and
      the corresponding bandwidth that can be set up using the two path
      computation techniques.  The traffic matrix is continuously
      scaled, and the scaling is stopped when the first TE LSP cannot
      be set up; this is done for both methods.  The difference in the
      scaling factors gives the extra bandwidth that can be set up
      using the corresponding path computation technique.

   o  Failed TE LSPs/Bandwidth on Link Failure.  Link failures are
      induced in the network during the course of the simulations.
      This metric captures the number of TE LSPs, and the corresponding
      bandwidth, that fail to find a route when one or more links lying
      on their path fail.
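   The crankback/setup-delay metric can be illustrated with the
   following hypothetical post-processing sketch.  The event format,
   the proportionality constant alpha, and the function name are
   assumptions made for this example; the only assumptions taken from
   this document are that link metrics reflect propagation delays and
   that each crankback signal travels back and forth over the already-
   signaled segment between the entry and exit border nodes of a
   domain.

   # Hypothetical sketch of the crankback/setup-delay metric.  The
   # event format and the constant alpha are assumptions; delay is
   # accumulated as a sum of the link costs retraced by each crankback
   # signal, doubled because the signal propagates back and forth
   # between the exit and entry border nodes of the domain.
   from collections import defaultdict

   def crankback_delay_per_lsp(crankback_events, alpha=1.0):
       """crankback_events: iterable of (lsp_id, [link costs retraced
       by the crankback signal]).  Returns {lsp_id: (number of
       crankbacks, proportional setup delay)}."""
       count = defaultdict(int)
       delay = defaultdict(float)
       for lsp_id, retraced_link_costs in crankback_events:
           count[lsp_id] += 1
           delay[lsp_id] += 2 * alpha * sum(retraced_link_costs)
       return {lsp: (count[lsp], delay[lsp]) for lsp in count}

   # Example: "lsp-7" cranked back twice; the per-LSP values would feed
   # the distributions shown in Figures 2c/2d and 3c/3d.
   events = [("lsp-7", [10, 20, 5]), ("lsp-7", [10, 40]), ("lsp-3", [15])]
   print(crankback_delay_per_lsp(events))
   # {'lsp-7': (2, 170.0), 'lsp-3': (1, 30.0)}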
4.  Simulation Setup

   A very detailed simulator has been developed to replicate a real-
   life network scenario accurately.  The following is the set of
   entities used in the simulation, with a brief description of their
   behavior.

   +------------+-------+-------+--------+--------+---------+----------+
   | Domain     | # of  | # of  | OC48   | OC192  | [0,20)  | [20,100] |
   | Name       | nodes | links | links  | links  | Mbps    | Mbps     |
   +------------+-------+-------+--------+--------+---------+----------+
   | D1         | 17    | 24    | 18     | 6      | 125     | 368      |
   | D2         | 14    | 17    | 12     | 5      | 76      | 186      |
   | D3         | 19    | 26    | 20     | 6      | 14      | 20       |
   | D4         | 9     | 12    | 9      | 3      | 7       | 18       |
   | MESH-CORE  | 83    | 167   | 132    | 35     | 0       | 0        |
   | (backbone) |       |       |        |        |         |          |
   | SYM-CORE   | 29    | 37    | 26     | 11     | 0       | 0        |
   | (backbone) |       |       |        |        |         |          |
   +------------+-------+-------+--------+--------+---------+----------+

   Table 1.  Domain Details and TE LSP Size Distribution (the last two
   columns give the number of TE LSPs sourced in each domain whose size
   falls in the corresponding bandwidth range)

   o  Topology Description.  To obtain meaningful results applicable to
      present-day Service Provider topologies, simulations have been
      run on two representative topologies.  Each consists of a large
      backbone area to which four smaller areas are connected.  For the
      first topology, named MESH-CORE, a densely connected backbone was
      obtained from Rocketfuel [ROCKETFUEL].  The second topology has a
      symmetrical backbone and is called SYM-CORE.  The four connected
      smaller areas are obtained from [DEF-DES].  Details of the
      topologies are shown in Table 1, along with their layout in
      Figure 1.  All TE LSPs set up on these networks have their
      sources and destinations in different areas, and all of them need
      to traverse the backbone network.  Table 1 also shows the number
      of TE LSPs that have their sources in the corresponding areas,
      along with their size distribution.

   o  Node Behavior.  Every node in the topology represents a router
      that maintains state for all the TE LSPs passing through it.
      Each node in a domain is a source for TE LSPs to all the other
      nodes in the other domains.  As in a real-life scenario, where
      routers boot up at random points in time, the nodes in the
      topologies also start sending traffic on the TE LSPs originating
      from them at a random start time (to take into account the
      different boot-up times).  All nodes are up within an hour of the
      start of the simulation.  All nodes maintain a TED that is
      updated using LSA updates as outlined in [RFC3630].  The flooding
      scope of the Traffic Engineering IGP updates is restricted to the
      domain in which they originate, in compliance with [RFC3630] and
      [RFC3784].

   o  TE LSP Setup.  When a node boots up, it sets up all TE LSPs that
      originate from it in descending order of size.  The network is
      dimensioned such that all TE LSPs can find a path.  Once set up,
      all TE LSPs stay in the network for the complete duration of the
      simulation unless they fail due to a link failure.  Even though
      the TE LSPs are set up in descending order of size from a head-
      end router, from the network perspective TE LSPs are set up in a
      random fashion because the routers boot up at random times.

   o  Inducing Failures.  For a thorough performance analysis and
      comparison, link failures are induced in all the areas.  Each
      link in a domain can fail independently, with a mean time to
      failure of 24 hours and a mean time to restore of 15 minutes.
      Both inter-failure and inter-restore times are uniformly
      distributed.  No attempt to re-optimize the path of a TE LSP is
      made when a link is restored.  The links that join two domains
      never fail.  This choice was made in order to concentrate only on
      how link failures within domains affect the performance (a sketch
      of this failure/restore process follows this list).
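   The following sketch illustrates the per-link failure/restore
   process assumed above.  This document specifies only the means (24
   hours to failure, 15 minutes to restore) and that the inter-failure
   and inter-restore times are uniformly distributed; the U(0, 2*mean)
   bounds and all names below are assumptions chosen so that the stated
   means hold.

   # Hypothetical sketch of the per-link failure/restore process.  The
   # U(0, 2*mean) bounds are an assumption made so that the stated
   # means (24 h to failure, 15 min to restore) hold.
   import random

   MEAN_TTF_H = 24.0          # mean time to failure, hours
   MEAN_TTR_H = 15.0 / 60.0   # mean time to restore, hours
   SIM_HOURS = 7 * 24         # one simulated week

   def link_events(rng, sim_hours=SIM_HOURS):
       """Return a list of (time_h, 'fail' | 'restore') events for one
       intra-domain link; inter-domain links never fail in the study."""
       events, t, up = [], 0.0, True
       while True:
           mean = MEAN_TTF_H if up else MEAN_TTR_H
           t += rng.uniform(0.0, 2.0 * mean)   # uniform, stated mean
           if t >= sim_hours:
               return events
           events.append((t, "fail" if up else "restore"))
           up = not up

   rng = random.Random(1)
   for when, what in link_events(rng)[:4]:
       print(f"{what:7s} at t = {when:7.2f} h")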
5.  Results and Analysis

   Simulations were carried out on the two topologies previously
   described.  The results are presented and discussed in this section.
   All figures referred to in this section are contained in the PDF
   version of this document.  In the figures, 'PD-Setup' and
   'PCE-Setup' represent results corresponding to the initial setup of
   TE LSPs on an empty network using the per-domain and the PCE
   approach, respectively.  Similarly, 'PD-Failure' and 'PCE-Failure'
   denote the results under the link failure scenario.  A period of one
   week was simulated, and results were collected after the transient
   period.  Figure 2 and Figure 3 illustrate the behavior of the
   metrics for topologies MESH-CORE and SYM-CORE, respectively.

5.1.  Path Cost

   Figures 2a and 3a show the distribution of the average path cost of
   the TE LSPs for MESH-CORE and SYM-CORE, respectively.  During
   initial setup, roughly 40% of TE LSPs for MESH-CORE and 70% of TE
   LSPs for SYM-CORE have higher path costs with PD (PD-Setup) than
   with the PCE approach (PCE-Setup).  This is due to the ability of
   the BRPC procedure to select the shortest inter-domain paths that
   satisfy the set of constraints.  Since per-domain path computation
   is undertaken in stages, where every entry border router of a domain
   computes the path across the corresponding domain, the optimal
   (shortest constrained inter-domain) route is not always found.  When
   failures start to take place in the network, TE LSPs are rerouted
   over different paths, resulting in path costs that differ from the
   initial costs.  PD-Failure and PCE-Failure in Figures 2a and 3a show
   the distribution of the average path costs of the TE LSPs over the
   duration of the simulation with link failures occurring.  Here too,
   the average path costs with the PD approach are much higher than
   with the PCE approach when link failures occur.  Figures 2b and 3b
   show similar trends and present the maximum path costs of the TE
   LSPs for the two topologies.  It can be seen that with per-domain
   path computation, the maximum path costs are larger for 30% and 100%
   of the TE LSPs for MESH-CORE and SYM-CORE, respectively.

5.2.  Crankback/Setup Delay

   Due to the crankbacks that take place with the per-domain approach,
   TE LSP setup time is significantly increased.  This could lead to
   QoS requirements not being met, especially during failures, when
   rerouting needs to be quick in order to keep traffic disruption to a
   minimum (for example, in the absence of local repair mechanisms such
   as those defined in [RFC4090]).  Since crankbacks do not take place
   during path computation with a PCE, setup delays are significantly
   reduced.  Figures 2c and 3c show the distributions of the number of
   crankbacks that took place during the setup of the corresponding TE
   LSPs for MESH-CORE and SYM-CORE, respectively.  It can be seen that
   all crankbacks occurred while failures were taking place in the
   networks.  Figures 2d and 3d illustrate the 'proportional' setup
   delays experienced by the TE LSPs due to crankbacks for the two
   topologies.  It can be observed that for a large proportion of the
   TE LSPs, the setup delays arising from crankbacks are very large,
   which can be very detrimental to QoS requirements.  The large delays
   arise from the crankback signaling that needs to propagate back and
   forth between the exit border router of a domain and its entry
   border router.  More crankbacks occur for SYM-CORE than for
   MESH-CORE because SYM-CORE is a very 'restricted' and 'constrained'
   network in terms of connectivity.  This causes a lack of routes, and
   several cycles of crankback signaling are often required to find a
   constrained path.
5.3.  Signaling Failures

   As discussed in the previous sections, signaling failures occur
   either due to an outdated TED or when a path cannot be found from
   the selected entry border router.  Figures 2e and 3e show the
   distribution of the total number of signaling failures experienced
   by the TE LSPs during setup.  About 38% and 55% of the TE LSPs for
   MESH-CORE and SYM-CORE, respectively, experience signaling failures
   with per-domain path computation when link failures take place in
   the network.  In contrast, only about 3% of the TE LSPs experience
   signaling failures with the PCE method.  It should be noted that the
   signaling failures experienced with the PCE correspond only to the
   TEDs being out of date.

5.4.  Failed TE LSPs/Bandwidth on Link Failures

   Figures 2f and 3f show the number of TE LSPs, and the associated
   required bandwidth, that fail to find a route when link failures
   take place in the topologies.  For MESH-CORE, with the per-domain
   approach, 395 TE LSPs failed to find a path, corresponding to 1612
   Mbps of bandwidth.  With PCE, this number is lower at 374 TE LSPs,
   corresponding to 1546 Mbps of bandwidth.  For SYM-CORE, with the
   per-domain approach, 434 TE LSPs fail to find a route, corresponding
   to 1893 Mbps of bandwidth.  With the PCE approach, only 192 TE LSPs
   fail to find a route, corresponding to 895 Mbps of bandwidth.  It is
   clearly visible that the PCE approach allows more TE LSPs to find a
   route, thus leading to better performance during link failures.

5.5.  TE LSP/Bandwidth Setup Capacity

   Since the PCE and per-domain path computation approaches differ in
   how path computation takes place, more bandwidth can be set up with
   PCE.  This is primarily due to the way in which BRPC functions.  To
   observe the extra bandwidth that can fit into the network, the
   traffic matrix was scaled.  Scaling was stopped when the first TE
   LSP failed to be set up with PCE.  This metric, like all the others
   discussed above, is topology dependent (hence the choice of two
   topologies for this study).  This metric highlights the ability of
   PCE to fit more bandwidth into the network.  For MESH-CORE, on
   scaling, 1556 Mbps more bandwidth could be set up with PCE.  In
   comparison, for SYM-CORE this value is 986 Mbps.  The amount of
   extra bandwidth that can be set up on SYM-CORE is smaller due to its
   restricted nature and limited capacity.
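   The scaling procedure described above can be summarized by the
   following illustrative sketch.  The function place_all_lsps() stands
   in for the simulator's path computation and admission logic (PD or
   PCE/BRPC) and is not defined in this document; the step size and the
   toy single-bottleneck example are likewise assumptions.

   # Hypothetical sketch of the traffic-matrix scaling procedure: every
   # TE LSP demand is multiplied by a common factor, the LSPs are
   # re-placed on an empty network, and the factor is increased until
   # the first LSP fails to be set up.  place_all_lsps() is a stand-in
   # for the simulator's path computation and admission logic.
   def max_scaling_factor(traffic_matrix, place_all_lsps, step=0.01):
       """Return the largest tested factor for which every scaled LSP
       could still be placed."""
       factor = 1.0
       while True:
           scaled = [(src, dst, bw * (factor + step))
                     for src, dst, bw in traffic_matrix]
           if not place_all_lsps(scaled):      # first setup failure
               return factor
           factor += step

   # Toy example: three LSPs sharing one 100 Mbps bottleneck are
   # admitted until their scaled sum exceeds the capacity.
   demands = [("S1", "T1", 20.0), ("S2", "T2", 30.0), ("S3", "T3", 10.0)]
   fits = lambda lsps: sum(bw for _, _, bw in lsps) <= 100.0
   print(max_scaling_factor(demands, fits))    # ~1.66

   The difference between the factors found for PCE and for PD,
   multiplied by the total offered demand, corresponds to the extra
   bandwidth figures reported above for MESH-CORE and SYM-CORE.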
6.  IANA Considerations

   This document makes no request to IANA for action.

7.  Security Considerations

   This document does not raise any security issue.

8.  Acknowledgment

   The authors would like to acknowledge Dimitri Papadimitriou for his
   helpful comments to clarify the text.

9.  References

9.1.  Normative References

   [RFC2119]  Bradner, S., "Key words for use in RFCs to Indicate
              Requirement Levels", BCP 14, RFC 2119, March 1997.

9.2.  Informative References

   [DEF-DES]  Guichard, J., Le Faucheur, F., and J.-P. Vasseur,
              "Definitive MPLS Network Designs", Cisco Press, 2005.

   [I-D.ietf-ccamp-inter-domain-pd-path-comp]
              Vasseur, J., Ayyangar, A., and R. Zhang, "A Per-domain
              path computation method for establishing Inter-domain
              Traffic Engineering (TE) Label Switched Paths (LSPs)",
              draft-ietf-ccamp-inter-domain-pd-path-comp-06 (work in
              progress), November 2007.

   [I-D.ietf-ccamp-inter-domain-rsvp-te]
              Ayyangar, A., "Inter domain Multiprotocol Label Switching
              (MPLS) and Generalized MPLS (GMPLS) Traffic Engineering -
              RSVP-TE extensions",
              draft-ietf-ccamp-inter-domain-rsvp-te-07 (work in
              progress), September 2007.

   [I-D.ietf-ccamp-lsp-stitching]
              Ayyangar, A., "Label Switched Path Stitching with
              Generalized Multiprotocol Label Switching Traffic
              Engineering (GMPLS TE)", draft-ietf-ccamp-lsp-stitching-06
              (work in progress), April 2007.

   [I-D.ietf-pce-brpc]
              Vasseur, J., Zhang, R., Bitar, N., and J. Roux, "A
              Backward Recursive PCE-based Computation (BRPC) Procedure
              To Compute Shortest Constrained Inter-domain Traffic
              Engineering Label Switched Paths", draft-ietf-pce-brpc-09
              (work in progress), April 2008.

   [RFC3630]  Katz, D., Kompella, K., and D. Yeung, "Traffic
              Engineering (TE) Extensions to OSPF Version 2", RFC 3630,
              September 2003.

   [RFC3784]  Smit, H. and T. Li, "Intermediate System to Intermediate
              System (IS-IS) Extensions for Traffic Engineering (TE)",
              RFC 3784, June 2004.

   [RFC4090]  Pan, P., Swallow, G., and A. Atlas, "Fast Reroute
              Extensions to RSVP-TE for LSP Tunnels", RFC 4090,
              May 2005.

   [RFC4655]  Farrel, A., Vasseur, J., and J. Ash, "A Path Computation
              Element (PCE)-Based Architecture", RFC 4655, August 2006.

   [RFC4920]  Farrel, A., Satyanarayana, A., Iwata, A., Fujita, N., and
              G. Ash, "Crankback Signaling Extensions for MPLS and
              GMPLS RSVP-TE", RFC 4920, July 2007.

   [ROCKETFUEL]
              Spring, N., Mahajan, R., and D. Wetherall, "Measuring ISP
              Topologies with Rocketfuel", Proceedings of ACM SIGCOMM,
              2002.

Authors' Addresses

   Sukrit Dasgupta
   Drexel University
   Dept. of ECE, 3141 Chestnut Street
   Philadelphia, PA  19104
   USA

   Phone: 215-895-1862
   Email: sukrit@ece.drexel.edu
   URI:   www.pages.drexel.edu/~sd88

   Jaudelice C. de Oliveira
   Drexel University
   Dept. of ECE, 3141 Chestnut Street
   Philadelphia, PA  19104
   USA

   Phone: 215-895-2248
   Email: jau@ece.drexel.edu
   URI:   www.ece.drexel.edu/faculty/deoliveira

   JP Vasseur
   Cisco Systems
   1414 Massachusetts Avenue
   Boxborough, MA  01719
   USA

   Email: jpv@cisco.com

Full Copyright Statement

   Copyright (C) The IETF Trust (2008).

   This document is subject to the rights, licenses and restrictions
   contained in BCP 78, and except as set forth therein, the authors
   retain all their rights.

   This document and the information contained herein are provided on
   an "AS IS" basis and THE CONTRIBUTOR, THE ORGANIZATION HE/SHE
   REPRESENTS OR IS SPONSORED BY (IF ANY), THE INTERNET SOCIETY, THE
   IETF TRUST AND THE INTERNET ENGINEERING TASK FORCE DISCLAIM ALL
   WARRANTIES, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO ANY
   WARRANTY THAT THE USE OF THE INFORMATION HEREIN WILL NOT INFRINGE
   ANY RIGHTS OR ANY IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS
   FOR A PARTICULAR PURPOSE.

Intellectual Property

   The IETF takes no position regarding the validity or scope of any
   Intellectual Property Rights or other rights that might be claimed
   to pertain to the implementation or use of the technology described
   in this document or the extent to which any license under such
   rights might or might not be available; nor does it represent that
   it has made any independent effort to identify any such rights.
   Information on the procedures with respect to rights in RFC
   documents can be found in BCP 78 and BCP 79.
   Copies of IPR disclosures made to the IETF Secretariat and any
   assurances of licenses to be made available, or the result of an
   attempt made to obtain a general license or permission for the use
   of such proprietary rights by implementers or users of this
   specification can be obtained from the IETF on-line IPR repository
   at http://www.ietf.org/ipr.

   The IETF invites any interested party to bring to its attention any
   copyrights, patents or patent applications, or other proprietary
   rights that may cover technology that may be required to implement
   this standard.  Please address the information to the IETF at
   ietf-ipr@ietf.org.