Network Working Group                                        Sudhin Jacob
Internet Draft                                           Juniper Networks
Intended Status: Informational                    Praveen Ananthasankaran
Expires: August 03, 2017                                             Nokia
                                                         February 06, 2017

              Benchmarking of Y1731 Performance Monitoring
                       draft-jacpra-bmwg-pmtest-03

Abstract

   This draft defines methodologies for benchmarking Y.1731 performance
   monitoring on a DUT, covering the calculation of near-end and
   far-end data.  Measurements are taken both with a pre-defined COS
   profile and without COS in the network.  The tests include
   impairment, high availability, and soak tests.

Status of This Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF).  Note that other groups may also distribute
   working documents as Internet-Drafts.  The list of current Internet-
   Drafts is at http://datatracker.ietf.org/drafts/current/.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."

   This Internet-Draft will expire on August 03, 2017.
Copyright Notice

   Copyright (c) 2017 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document.  Code Components extracted from this
   document must include Simplified BSD License text as described in
   Section 4.e of the Trust Legal Provisions and are provided without
   warranty as described in the Simplified BSD License.

Table of Contents

   1. Introduction
      1.1. Requirements Language
      1.2. Terminologies
   2. Test Topology
   3. Network
   4. Test Procedure
   5. Test Cases
      5.1. Y.1731 Two-way Delay Measurement Test Procedure
      5.2. Y.1731 One-way Delay Measurement Test Procedure
      5.3. Loss Measurement without COS Test Procedure
      5.4. Loss Measurement with COS Test Procedure
      5.5. Synthetic Loss Measurement Test Procedure
   6. Acknowledgements
   7. Security Considerations
   8. IANA Considerations

1. Introduction

   Performance monitoring is defined in ITU-T Y.1731.  This document
   defines methodologies for benchmarking the performance of Y.1731
   over a point-to-point service.  Performance monitoring has been
   implemented with many varying designs in order to achieve the
   intended network functionality.  The scope of this document is to
   define methodologies for benchmarking Y.1731 performance
   measurement.  The following protocols under Y.1731 are benchmarked:

   1. Two-way delay measurement
   2. One-way delay measurement
   3. Loss measurement
   4. Synthetic loss measurement

1.1. Requirements Language

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
   document are to be interpreted as described in RFC 2119 [RFC2119].

1.2. Terminologies

   PM            Performance Monitoring

   COS           Class of Service

   In-profile    Traffic within the CIR, termed green packets

   Out-profile   Traffic within the EIR, termed yellow/amber packets

   LMM           Loss Measurement Message

   LMR           Loss Measurement Reply

   DMM           Delay Measurement Message

   DMR           Delay Measurement Reply

   P Router      Provider Router

   PE Router     Provider Edge Router

   CE Router     Customer Edge Router

   DUT           Device Under Test

   CCM           Continuity Check Message

2. Test Topology

                      | Traffic Generator
                 +----------+
                 |          |
                 |   PE2    |
                 |          |
                 +----------+
                      |
                      |
                 +----------+
                 |          |
                 |   Core   |
                 |  router  |
                 +----------+
                      |
                      |
                 +----------+
                 |          |
                 |   DUT    |
                 |   PE1    |
                 +----------+
                      |
                      |--- Traffic Generator

3. Network

   The benchmarking topology consists of three routers and two traffic
   generators.  The DUT is PE1, which is connected to the CE.  The core
   router is the P router shown in the topology.
   A layer 2 (point-to-point) service runs from PE1 to PE2.  On top of
   that service, performance monitoring functions such as loss, delay,
   and synthetic measurements are running.  PE1 acts as the DUT.  The
   traffic is layer 2 with a VLAN tag.  The frame sizes are 64, 128,
   512, 1024, and 1400 bytes, and the tests are carried out with each
   of these frame sizes.  The traffic is unidirectional or
   bidirectional.

4. Test Procedure

   The tests are defined to benchmark Y.1731 performance monitoring
   under high availability, impairment, soak, and scale conditions,
   with traffic at various line rates and frame sizes.

4.1 Performance Monitoring with Traffic

   Traffic is sent with different 802.1p priorities, line rates, and
   frame sizes of 64, 128, 512, 1024, and 1400 bytes.  The PM values
   are measured for each frame size at the various line rates.

4.2 High Availability

   Bidirectional traffic is flowing at "P" packets per second.  The
   traffic generator measures the Tx and Rx packets; during a routing
   engine failover there must not be any packet loss, and the router
   tester must show "P" packets per second in both directions.  The PM
   historical data must not reset.

4.3 Scale

   This measures the performance of the DUT when scaled to "X" CFM
   sessions with performance monitoring running over them.  There must
   not be any crashes or memory leaks.

4.4 Soak

   This test measures the performance of the DUT with scaled
   configuration and traffic over a period of time "T".  In each
   interval "t1" the parameters measured are CPU usage, memory usage,
   and crashes.

4.5 Measurement Statistics

   The test is repeated "N" times and the reported value is obtained by
   averaging the measured values.

5. Test Cases

5.1 Y.1731 Two-way Delay Measurement Test Procedure

   Basic Testing Objective

   Check the round-trip delay of the network under different traffic
   load conditions.

   Test Procedure

   Configure a layer 2 point-to-point service between PE1 and PE2.
   Configure Y.1731 two-way delay measurement over the service.
   Observe the delay measurement under the following traffic conditions
   in the network:

   a. Send 80% of line-rate traffic with different priorities and frame
      sizes.
   b. Send 40% of line-rate traffic with different priorities and frame
      sizes.
   c. Without any line traffic.

   The results of all three conditions above are noted and correlated.

   Test Measurement

   The following factors need to be measured to benchmark the result:

   1. The average two-way delay
   2. The average two-way delay variation

   In the above three conditions the results obtained must be similar.

   1. Ideal case

   In this case the hardware aspects of processing capacity and link
   level anomalies are not considered; the benchmark is on the protocol
   functioning alone.  In such an environment the system should expect
   the delay variation to be zero.

   2. Practical case

   This case is used to benchmark results when delay measurement is
   done on physical hardware (such as a router).  The factors of packet
   processing jitter and link level delays need to be considered here.
   The delay variation in such cases will differ based on the above
   parameters on different hardware systems; results will vary with the
   exact hardware.
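   In both cases the reported values are derived from the timestamps
   carried in the DMM/DMR exchange.  The following non-normative Python
   sketch shows one way a test tool could compute the per-sample
   two-way delay from the four Y.1731 timestamps and then average the
   delay and delay variation over "N" repetitions, as described in
   Section 4.5.  The variable names are illustrative assumptions, not
   defined by this document.

      # Non-normative sketch: two-way frame delay from a DMM/DMR exchange.
      #   t1 = TxTimeStampf : DMM transmitted by the local MEP
      #   t2 = RxTimeStampf : DMM received by the remote MEP
      #   t3 = TxTimeStampb : DMR transmitted by the remote MEP
      #   t4 = RxTimeb      : DMR received back at the local MEP

      def two_way_delay(t1, t2, t3, t4):
          # The remote processing time (t3 - t2) is removed from the
          # round trip, so the two clocks need not be synchronized.
          return (t4 - t1) - (t3 - t2)

      def delay_statistics(samples):
          """Average delay and average delay variation over N samples."""
          delays = [two_way_delay(*s) for s in samples]
          variations = [abs(b - a) for a, b in zip(delays, delays[1:])]
          avg_delay = sum(delays) / len(delays)
          avg_variation = (sum(variations) / len(variations)
                           if variations else 0.0)
          return avg_delay, avg_variation

      # Example: three (t1, t2, t3, t4) samples, in seconds.
      samples = [
          (0.000000, 0.000450, 0.000460, 0.000910),
          (1.000000, 1.000520, 1.000530, 1.001050),
          (2.000000, 2.000480, 2.000495, 2.000990),
      ]
      print(delay_statistics(samples))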
               Delay Variation
                    ^
                    |
                    |          +----------+
                    |          |          |
                    |          |          |
                    +----------+----------+-------------> Time
                      Traffic (0 to 100 percent line rate)

   Impairment

   This is to benchmark two-way delay measurement even when both data
   and PDUs are dropped in the network using the impairment tool.

   Measurement

   The results must be similar before and after this test.

   High Availability

   During a routing engine failover the historical data must not reset.

   Scale

   This is to measure the performance of the DUT when scaled to "X" CFM
   sessions with performance monitoring running over them.  There must
   not be any crashes or memory leaks.

   Soak

   Bidirectional traffic is sent over the service for 24 to 48 hours,
   and after the stipulated time there must not be any change in
   behavior in the network for performance monitoring.

   Measurement

   There must not be any core dumps, crashes, or memory leaks.

5.2 Y.1731 One-way Delay Measurement Test Procedure

   Basic Testing Objective

   This test measures one-way delay.  One-way delay, as defined in
   Y.1731, is the delay for a packet to travel from a specific
   end-point until it reaches the other end of the network.  Measuring
   it requires the clocks to be accurately synchronized, since the
   delay is computed from the times at two different end-points.

   Test Procedure

   Configure a layer 2 point-to-point service between PE1 and PE2.
   Configure Y.1731 one-way delay measurement over the service.
   Observe the delay measurement under the following traffic conditions
   in the network:

   a. Send 80% of line-rate traffic with different priorities and frame
      sizes.
   b. Send 40% of line-rate traffic with different priorities and frame
      sizes.
   c. Without any line traffic.

   The results of all three conditions above are noted and correlated.

   Test Measurement

   The following factors need to be measured to benchmark the result:

   1. The average one-way delay
   2. The average one-way delay variation

   In the above three cases the results obtained must be similar.

   1. Ideal case

   In this case the hardware aspects of processing capacity and link
   level anomalies are not considered; the benchmark is on the protocol
   functioning alone.  In such an environment the system should expect
   the delay variation to be zero.

   2. Practical case

   This case is used to benchmark results when delay measurement is
   done on physical hardware (such as a router).  The factors of packet
   processing jitter and link level delays need to be considered here.
   The delay variation in such cases will differ based on the above
   parameters on different hardware systems; results will vary with the
   exact hardware.
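   Because the transmit and receive timestamps in the one-way case come
   from two different end-points, the computation below is only valid
   when the clocks are synchronized; any residual clock offset appears
   directly in the measured delay.  This is a minimal non-normative
   sketch with illustrative variable names, not a definition taken from
   Y.1731 or from this document.

      # Non-normative sketch: one-way frame delay from 1DM frames.
      # Assumes the two MEP clocks are synchronized; otherwise the
      # result includes the clock offset between the end-points.

      def one_way_delay(tx_timestamp_f, rx_time):
          # tx_timestamp_f: time the frame left the originating MEP
          # rx_time:        time the frame arrived at the far-end MEP
          return rx_time - tx_timestamp_f

      samples = [(0.000000, 0.000480),
                 (1.000000, 1.000510),
                 (2.000000, 2.000470)]
      delays = [one_way_delay(tx, rx) for tx, rx in samples]
      variations = [abs(b - a) for a, b in zip(delays, delays[1:])]
      print(sum(delays) / len(delays),
            sum(variations) / len(variations))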
               Delay Variation
                    ^
                    |
                    |          +----------+
                    |          |          |
                    |          |          |
                    +----------+----------+-------------> Time
                      Traffic (0 to 100 percent line rate)

   Impairment

   This is to benchmark one-way delay measurement even when both data
   and PDUs are dropped in the network using the impairment tool.

   Measurement

   The results must be similar before and after this test.

   High Availability

   During a routing engine failover the historical data must not reset.

   Scale

   This is to measure the performance of the DUT when scaled to "X" CFM
   sessions with performance monitoring running over them.  There must
   not be any crashes or memory leaks.

   Soak

   Bidirectional traffic is sent over the service for 24 to 48 hours,
   and after the stipulated time there must not be any change in
   behavior in the network for performance monitoring.

   Measurement

   There must not be any core dumps, crashes, or memory leaks.

5.3 Loss Measurement without COS Test Procedure

   Basic Testing Objective

   This test defines the methodology for benchmarking data loss in the
   network on real customer traffic.  Y.1731 indicates that only
   in-profile (green) packets are considered for loss measurement.  For
   this, the testing needs to be done in multiple environments where:

   a. All data packets from the traffic generator are sent with a
      single 802.1p priority and the network does not have a COS
      profile defined.

   b. All data packets from the traffic generator are sent with 802.1p
      priority values 0 to 7 and the network does not have a COS
      profile defined.

   The objective is to benchmark the protocol behavior under different
   networking conditions and correlate the data; it is not to test the
   actual functioning of Y.1731 loss measurement.  The loss measurement
   must count only in-profile packets; since no COS is defined, all
   packets must be recorded as green.

   Test Procedure

   Configure a layer 2 point-to-point service between PE1 and PE2.
   Configure Y.1731 loss measurement over the service.  Observe the
   loss measurement under the following traffic conditions in the
   network:

   a. Send 80% of line-rate traffic with different priorities and frame
      sizes.
   b. Send 40% of line-rate traffic with different priorities and frame
      sizes.
   c. Without any line traffic.

   The results of all three conditions above are noted and correlated.

   Test Measurement

   The factor which needs to be considered is the acceptable absolute
   loss for the given network.

   Impairment

   This is to benchmark loss measurement even when both data and PDUs
   are dropped in the network using the impairment tool.

   Measurement

   When data is dropped, the loss must be shown correctly; when PM PDUs
   are dropped, the counting should not be affected and there must not
   be any abnormal output.

   High Availability

   During a routing engine failover the historical data must not reset.
   In the ideal case there must be zero packet loss.

   Scale

   This is to measure the performance of the DUT when scaled to "X" CFM
   sessions with performance monitoring running over them.  There must
   not be any crashes or memory leaks.  Each session must record loss
   measurement correctly.

   Soak

   Bidirectional traffic is sent over the service for 24 to 48 hours,
   and after the stipulated time there must not be any change in
   behavior in the network for performance monitoring.

   Measurement

   There must not be any core dumps, crashes, or memory leaks.
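   The near-end and far-end loss reported by the DUT are derived from
   the frame counters exchanged in the LMM/LMR frames.  The following
   non-normative Python sketch shows the single-ended calculation over
   two consecutive measurement intervals; the counter names follow the
   Y.1731 convention (TxFCf, RxFCf, TxFCb, RxFCl), but the code itself
   is only an illustration of how a result such as the table below
   could be checked.

      # Non-normative sketch: Y.1731 single-ended loss measurement.
      # Each LMR carries TxFCf, RxFCf and TxFCb; RxFCl is the local
      # receive counter sampled when the LMR arrives.  "prev" and
      # "curr" are the counters from two consecutive LMM/LMR exchanges.

      def frame_loss(prev, curr):
          # Far-end loss: frames the local MEP transmitted (TxFCf) that
          # the remote MEP did not receive (RxFCf).
          far_end = (abs(curr["TxFCf"] - prev["TxFCf"])
                     - abs(curr["RxFCf"] - prev["RxFCf"]))
          # Near-end loss: frames the remote MEP transmitted (TxFCb)
          # that the local MEP did not receive (RxFCl).
          near_end = (abs(curr["TxFCb"] - prev["TxFCb"])
                      - abs(curr["RxFCl"] - prev["RxFCl"]))
          return near_end, far_end

      prev = {"TxFCf": 1000, "RxFCf": 1000, "TxFCb": 2000, "RxFCl": 2000}
      curr = {"TxFCf": 2000, "RxFCf": 1900, "TxFCb": 3000, "RxFCl": 3000}
      print(frame_loss(prev, curr))   # (0, 100): 100 frames lost far end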
   Result

   +-----------------+----------------+
   | Traffic sent    | Loss           |
   | over the service| measurement    |
   | (bidirectional) | (without COS)  |
   +-----------------+----------------+
   | 7 streams at    | Near End = 100%|
   | 100% line rate  | Far End = 100% |
   | with priority   |                |
   | from 0 to 7     |                |
   +-----------------+----------------+
   | Dropping 50% of | Near End 50%   |
   | line rate at    | Far End 100%   |
   | near end        | Near End loss  |
   |                 | observed 50%   |
   +-----------------+----------------+
   | Dropping 50% of | Near End 100%  |
   | line rate at    | Far End 50%    |
   | far end         | Far End loss   |
   |                 | observed 50%   |
   +-----------------+----------------+

5.4 Loss Measurement with COS Test Procedure

   Basic Testing Objective

   This test defines the methodology for benchmarking data loss in the
   network on real customer traffic.  Y.1731 indicates that only
   in-profile (green) packets are considered for loss measurement.  For
   this, the testing needs to be done in multiple environments where:

   a. All data packets from the traffic generator are sent with a
      single 802.1p priority and the network has a pre-defined COS
      profile.

   b. All data packets from the traffic generator are sent with 802.1p
      priority values 0 to 7 and the network has a pre-defined COS
      profile.

   The COS profile needs to have two properties:

   a. COS treats different 802.1p values as separate classes of
      packets.

   b. Each class of packets has a defined CIR for the specific network.

   The objective is to benchmark the protocol behavior under different
   networking conditions and correlate the data; it is not to test the
   actual functioning of Y.1731 loss measurement.  The loss measurement
   must show in-profile packets for each COS level, and each COS level
   must count only its own defined in-profile packets.  Packets that
   are marked out-of-profile by COS marking must not be counted.  When
   traffic is sent with a single 802.1p priority, the loss measurement
   must record values only for that particular COS level.

   Test Procedure

   Configure a layer 2 point-to-point service between PE1 and PE2.
   Configure Y.1731 loss measurement over the service.  Observe the
   loss measurement under the following traffic conditions in the
   network:

   a. Send 80% of line-rate traffic with different priorities and frame
      sizes.
   b. Send 40% of line-rate traffic with different priorities and frame
      sizes.
   c. Without any line traffic.

   The results of all three conditions above are noted and correlated.

   Test Measurement

   The factor which needs to be considered is the acceptable absolute
   loss for the given network.

   Impairment

   This is to benchmark loss measurement even when both data and PDUs
   are dropped in the network using the impairment tool.

   Measurement

   When data is dropped, the loss must be shown correctly; when PM PDUs
   are dropped, the counting should not be affected and there must not
   be any abnormal output.

   High Availability

   During a routing engine failover the historical data must not reset.
   In the ideal case there must be zero packet loss.

   Scale

   This is to measure the performance of the DUT when scaled to "X" CFM
   sessions with performance monitoring running over them.  There must
   not be any crashes or memory leaks.  Each session must record loss
   measurement correctly.

   Soak

   Bidirectional traffic is sent over the service for 24 to 48 hours,
   and after the stipulated time there must not be any change in
   behavior in the network for performance monitoring.

   Measurement

   There must not be any core dumps, crashes, or memory leaks.
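   With a COS profile in place, the loss measurement counters are kept
   per COS level and only in-profile (green) frames are counted.  The
   following minimal non-normative sketch illustrates that counting
   rule; the color names and counter layout are assumptions for the
   purpose of illustration only.

      # Non-normative sketch: per-COS frame counting for loss
      # measurement.  Only in-profile (green) frames increment the
      # counters used by loss measurement; out-of-profile
      # (yellow/amber) frames are excluded.

      from collections import defaultdict

      tx_green = defaultdict(int)   # green frames sent, per 802.1p priority
      rx_green = defaultdict(int)   # green frames received, per 802.1p priority

      def count_frame(counters, priority, color):
          if color == "green":      # in-profile only
              counters[priority] += 1

      # 100 green frames of priority 0 are sent; 50 are dropped in the
      # network, and a few yellow frames also arrive but are not counted.
      for _ in range(100):
          count_frame(tx_green, 0, "green")
      for _ in range(50):
          count_frame(rx_green, 0, "green")
      for _ in range(10):
          count_frame(rx_green, 0, "yellow")

      print(tx_green[0] - rx_green[0])   # 50: loss recorded for priority 0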
   Result

   +-----------------+----------------+
   | Traffic sent    | Loss           |
   | over the service| measurement    |
   | (bidirectional) | (with COS)     |
   +-----------------+----------------+
   | 7 streams at    | Near End = 100%|
   | 100% line rate  | Far End = 100% |
   | with priority   |                |
   | from 0 to 7     |                |
   +-----------------+----------------+
   | Dropping 50% of | Near End 50%   |
   | line rate at    | Far End 100%   |
   | near end for    | Near End loss  |
   | priority        | observed 50%   |
   | marked 0        | (priority 0)   |
   +-----------------+----------------+
   | Dropping 50% of | Near End 100%  |
   | line rate at    | Far End 50%    |
   | far end for     | Far End loss   |
   | priority 0      | observed 50%   |
   |                 | (priority 0)   |
   +-----------------+----------------+

5.5 Synthetic Loss Measurement Test Procedure

5.5.1 Basic Testing Objective

   This test defines the methodology for benchmarking synthetic loss in
   the network.  The testing needs to be done in multiple environments
   where:

   a. All data packets from the traffic generator are sent with a
      single 802.1p priority and the network does not have a COS
      profile defined.  The synthetic loss measurement uses the same
      802.1p priority as the traffic.

   b. All data packets from the traffic generator are sent with a
      single 802.1p priority and the network has a pre-defined COS
      profile.  The synthetic loss measurement uses the same 802.1p
      priority as the traffic.

   c. All data packets from the traffic generator are sent with 802.1p
      priority values 0 to 7 and the network does not have a COS
      profile defined.  The synthetic loss measurement uses the same
      802.1p priority as the traffic; hence eight sessions are tested
      in parallel.

   d. All data packets from the traffic generator are sent with 802.1p
      priority values 0 to 7 and the network has a pre-defined COS
      profile.  The synthetic loss measurement uses the same 802.1p
      priority as the traffic; hence eight sessions are tested in
      parallel.

   The COS profile needs to have two properties:

   1. COS treats different 802.1p values as separate classes of
      packets.

   2. Each class of packets has a defined CIR for the specific network.

   The objective is to benchmark the protocol behavior under different
   networking conditions and correlate the data; it is not to test the
   actual functioning of Y.1731 loss measurement.

   Test Procedure

   Configure a layer 2 point-to-point service between PE1 and PE2.
   Configure Y.1731 synthetic loss measurement over the service.
   Observe the synthetic loss measurement under the following traffic
   conditions in the network:

   a. Send 80% of line-rate traffic with different priorities.
   b. Send 40% of line-rate traffic with different priorities.
   c. Without any line traffic.

   The results of all three conditions above are noted and correlated.

   Test Measurement

   The factor which needs to be considered is the acceptable absolute
   loss for the given network.
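   Synthetic loss is computed from the SLM/SLR exchange itself rather
   than from customer traffic, with one session per 802.1p priority
   when eight priorities are tested in parallel.  The following
   non-normative sketch treats every unanswered SLM as one lost
   synthetic frame; the actual protocol derives near-end and far-end
   loss separately from the counters carried in the SLM and SLR PDUs,
   so this is a simplification for illustration only.

      # Non-normative sketch: per-session synthetic loss measurement.
      # For each 802.1p priority an SLM/SLR session injects synthetic
      # frames; loss is derived from those frames, not from data traffic.

      def synthetic_loss(tx_slm, rx_slr):
          """Synthetic frames lost and loss ratio for one interval."""
          lost = tx_slm - rx_slr
          ratio = lost / tx_slm if tx_slm else 0.0
          return lost, ratio

      # Eight parallel sessions, one per priority: (SLM sent, SLR received).
      sessions = {p: (1000, 1000) for p in range(8)}
      sessions[0] = (1000, 995)        # the priority-0 session saw loss

      for priority, (tx, rx) in sessions.items():
          lost, ratio = synthetic_loss(tx, rx)
          if lost:
              print(f"priority {priority}: {lost} lost ({ratio:.1%})")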
   Impairment

   This is to benchmark synthetic loss measurement even when both data
   and PDUs are dropped in the network using the impairment tool.

   Measurement

   When data is dropped it must not affect the SLM counters, but if
   synthetic frames are dropped the loss must be shown accordingly.

   High Availability

   During a routing engine failover the historical data must not reset.

   Scale

   This is to measure the performance of the DUT when scaled to "X" CFM
   sessions with performance monitoring running over them.  There must
   not be any crashes or memory leaks.

   Soak

   Bidirectional traffic is sent over the service for 24 to 48 hours,
   and after the stipulated time there must not be any change in
   behavior in the network for performance monitoring.

   Measurement

   There must not be any core dumps, crashes, or memory leaks.

6. Acknowledgements

   We would like to thank Al Morton (AT&T) for his support and
   encouragement, and Giuseppe Fioccola of Telecom Italia for reviewing
   this draft and providing comments.

7. Security Considerations

   NA

8. IANA Considerations

   NA

Authors' Addresses

   Sudhin Jacob
   Juniper Networks
   Bangalore

   Email: sjacob@juniper.net
          sudhinjacob@rediffmail.com

   Praveen Ananthasankaran
   Nokia
   Manyata Embassy Tech Park,
   Silver Oak (Wing A), Outer Ring Road,
   Nagawara, Bangalore-560045

   Email: praveen.ananthasankaran@nokia.com