idnits 2.17.1 draft-bipi-mboned-ip-multicast-pm-requirement-01.txt: Checking boilerplate required by RFC 5378 and the IETF Trust (see https://trustee.ietf.org/license-info): ---------------------------------------------------------------------------- No issues found here. Checking nits according to https://www.ietf.org/id-info/1id-guidelines.txt: ---------------------------------------------------------------------------- No issues found here. Checking nits according to https://www.ietf.org/id-info/checklist : ---------------------------------------------------------------------------- ** There are 4 instances of too long lines in the document, the longest one being 1 character in excess of 72. Miscellaneous warnings: ---------------------------------------------------------------------------- == The copyright year in the IETF Trust and authors Copyright Line does not match the current year == Using lowercase 'not' together with uppercase 'MUST', 'SHALL', 'SHOULD', or 'RECOMMENDED' is not an accepted usage according to RFC 2119. Please use uppercase 'NOT' together with RFC 2119 keywords (if that is what you mean). Found 'SHOULD not' in this paragraph: The measurement methodology and system architecture MUST be scalable. A multicast network for a SP production network usually composes of thousands of nodes. Given the scale, the collecting, processing and reporting overhead of performance measurement data SHOULD not overwhelm either monitoring nodes or management nodes. The volume of reporting traffic should be reasonable and not cause any network congestion. == Using lowercase 'not' together with uppercase 'MUST', 'SHALL', 'SHOULD', or 'RECOMMENDED' is not an accepted usage according to RFC 2119. Please use uppercase 'NOT' together with RFC 2119 keywords (if that is what you mean). Found 'SHOULD not' in this paragraph: If dynamic configuration is supported, the delivery of measurement session control packets SHOULD be reliable so that the measurement sessions can be started, ended and performed in a predictable manner. Meanwhile, the control packets SHOULD not be delivered based on the multicast routing decision. This multicast independent characteristic guarantees that the active interfaces are still under control even if the multicast service is malfunctioning. == Using lowercase 'not' together with uppercase 'MUST', 'SHALL', 'SHOULD', or 'RECOMMENDED' is not an accepted usage according to RFC 2119. Please use uppercase 'NOT' together with RFC 2119 keywords (if that is what you mean). Found 'MUST not' in this paragraph: The monitoring system MUST not impose security risks on the network. For example, the monitoring nodes should be prevented from being exploited by third parties to control measurement sessions arbitrarily, which might make the nodes vulnerable for DDoS attacks. == Using lowercase 'not' together with uppercase 'MUST', 'SHALL', 'SHOULD', or 'RECOMMENDED' is not an accepted usage according to RFC 2119. Please use uppercase 'NOT' together with RFC 2119 keywords (if that is what you mean). Found 'MUST not' in this paragraph: The non-monitoring nodes without the monitoring capabilities SHOULD be able to coexist with monitoring nodes and function. The packets exchanged between monitoring nodes SHOULD be transparent to other nodes and MUST not cause any malfunction of the non-monitoring nodes. -- The document date (March 5, 2010) is 5164 days in the past. Is this intentional? -- Found something which looks like a code comment -- if you have code sections in the document, please surround them with '' and '' lines. Checking references for intended status: Informational ---------------------------------------------------------------------------- == Unused Reference: '2' is defined on line 540, but no explicit reference was found in the text == Unused Reference: '3' is defined on line 546, but no explicit reference was found in the text == Unused Reference: '12' is defined on line 583, but no explicit reference was found in the text == Unused Reference: '13' is defined on line 587, but no explicit reference was found in the text ** Obsolete normative reference: RFC 2234 (ref. '2') (Obsoleted by RFC 4234) == Outdated reference: A later version (-09) exists of draft-ietf-mboned-ssmping-07 == Outdated reference: A later version (-26) exists of draft-ietf-mboned-mtrace-v2-03 Summary: 2 errors (**), 0 flaws (~~), 11 warnings (==), 2 comments (--). Run idnits with the --verbose option for more detailed information about the items above. -------------------------------------------------------------------------------- 1 Network working group M. Bianchetti 2 Internet Draft G. Picciano 3 Intended status: Informational Telecom Italia 4 M. Chen 5 L. Zheng 6 Huawei Technologies Co., Ltd 7 Expires: September 5, 2010 March 5, 2010 9 Requirements for IP multicast performance monitoring 10 draft-bipi-mboned-ip-multicast-pm-requirement-01.txt 12 Status of this Memo 14 This Internet-Draft is submitted to IETF in full conformance with the 15 provisions of BCP 78 and BCP 79. 17 Internet-Drafts are working documents of the Internet Engineering 18 Task Force (IETF), its areas, and its working groups. Note that other 19 groups may also distribute working documents as Internet-Drafts. 21 Internet-Drafts are draft documents valid for a maximum of six months 22 and may be updated, replaced, or obsoleted by other documents at any 23 time. It is inappropriate to use Internet-Drafts as reference 24 material or to cite them other than as "work in progress." 26 The list of current Internet-Drafts can be accessed at 27 http://www.ietf.org/ietf/1id-abstracts.txt. 29 The list of Internet-Draft Shadow Directories can be accessed at 30 http://www.ietf.org/shadow.html. 32 This Internet-Draft will expire on September 5, 2010. 34 Copyright Notice 36 Copyright (c) 2009 IETF Trust and the persons identified as the 37 document authors. All rights reserved. 39 This document is subject to BCP 78 and the IETF Trust's Legal 40 Provisions Relating to IETF Documents 41 (http://trustee.ietf.org/license-info) in effect on the date of 42 publication of this document. Please review these documents 43 carefully, as they describe your rights and restrictions with 44 respect to this document. 46 Abstract 48 With increasing deployment of IP multicast in service providers (SPs) 49 network, SPs need a carrier-grade IP multicast performance monitoring 50 system. This document describes the requirements for such a system 51 for a SP network. This system enables efficient performance 52 monitoring in SPs' production network and provides diagnostic 53 information in case of performance degradation or failure. 55 Table of Contents 57 1. Introduction.................................................3 58 2. Conventions used in this document............................4 59 3. Terminologies................................................4 60 4. Functional Requirements......................................6 61 4.1. Topology discovery and monitoring.......................6 62 4.2. Performance measurement.................................6 63 4.2.1. Loss rate..........................................6 64 4.2.2. One-way delay......................................7 65 4.2.3. Jitter.............................................7 66 4.2.4. Throughput.........................................7 67 4.3. Measurement session management..........................8 68 4.3.1. Segment v.s. Path..................................8 69 4.3.2. Static v.s. Dynamic configuration..................8 70 4.3.3. Proactive v.s. on-demand...........................9 71 4.4. Measurement result report...............................9 72 4.4.1. Performance reports................................9 73 4.4.2. Exceptional alarms.................................9 74 5. Design considerations.......................................10 75 5.1. Inline data-plane measurement..........................10 76 5.2. Scalability............................................10 77 5.3. Robustness.............................................11 78 5.4. Security...............................................11 79 5.5. Device flexibility.....................................11 80 5.6. Extensibility..........................................12 81 6. Security Considerations.....................................12 82 7. IANA Considerations.........................................12 83 8. References..................................................12 84 8.1. Normative References...................................12 85 8.2. Informative References.................................12 86 9. Acknowledgments.............................................13 88 1. Introduction 90 This document describes the requirement for an IP multicast 91 performance monitoring system for service providers (SPs) IP 92 multicast network. This system should enables efficient monitoring of 93 performance metrics of any given multicast channel (*,G) or (S,G) and 94 provides diagnostic information in case of performance degradation or 95 failure. 97 Increasing deployment of IP multicast in SP networks calls for a 98 carrier-grade IP multicast performance monitoring system. SPs have 99 been leveraging IP multicast to provide revenue-generating services, 100 such as IP television (IPTV), video conferencing, as well as the 101 distribution of stock quotes or news. These services are usually 102 loss-sensitive or delay-sensitive, and their data packets need to be 103 delivered over a large scale IP network in real-time. Meanwhile, 104 these services demand relatively strict service-level agreements 105 (SLAs). For example, loss rate over 5% is generally considered 106 unacceptable for IPTV delivery. Video conferencing normally demands 107 delays no more than 150 milliseconds. However, the real-time nature 108 of the traffic and the deployment scale of service make it very 109 challenging for IP multicast performance monitoring in a SP's 110 production network. With increasing deployment of multicast service 111 in SP networks, it becomes mandatory to develop an efficient system 112 that is designed for SPs to accommodate the following functions. 114 o SLA monitoring and verification: verify whether the performance of 115 production multicast network meets SLA requirements. 117 o Network optimization: identify bottlenecks when the performance 118 metrics do not meet the SLA requirements. 120 o Fault localization: pin-point impaired components in case of 121 performance degradation and service disruption. 123 These functions alleviate the OAM cost of IP multicast network for 124 SPs, and ensure the quality of services. 126 However, the existing IP multicast monitoring tools and systems, 127 which were mostly designed either for primitive connectivity 128 diagnosis or for experimental evaluations, does not suit for a SP 129 production network, given the following facts: 131 o Most of them provide end-to-end reachability check only [4][6][8]. 132 They cannot provide sophisticated measurement metrics such as 133 packet loss, one-way delay, and jitter, for the purpose of SLAs 134 verification. 136 o Most of them can perform end-to-end measurements only. For example, 137 RTCP-based monitoring system [7] can report end-to-end packet loss 138 rate and jitter. End-to-end measurements are usually inadequate 139 for fault localization, which needs finer grain measurement data 140 to pin-point exact root causes. 142 o Most of them use probing packets to probe network performance [4] 143 [6]. The approach might yield biased or even irrelevant results 144 because the probing results are sampled and the out-of-band 145 probing packets might be forwarded differently from the monitored 146 user traffic. 148 o Most of them are not scalable in a large deployment like SPs' 149 production network. For example, in IPTV deployment, the number of 150 group member might be in the order of thousands. In this scale, a 151 RTCP-based multicast monitoring system [7] becomes almost unusable 152 because RTCP report intervals of each receiver might be delayed up 153 to minutes or even hours because of over-crowded reporting 154 multicast channel [14]. 156 o Some of them rely on the information from external protocols, 157 which make their capabilities and deployment scenarios limited by 158 the external protocols. The examples are passive measurement tools 159 that collect and analyze messages from protocols such as multicast 160 routing protocols [9], IGMP [11], or RTCP [7], etc. Another 161 example is a SNMP-based system [10] that collects and analyzes 162 relevant multicast MIB information. 164 This document specifies the requirements for an IP multicast traffic 165 monitor for a carrier-grade IP multicast network, which help SPs to 166 do SLA verification, network optimization, and fault localizations in 167 a large production network. 169 2. Conventions used in this document 171 The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", 172 "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this 173 document are to be interpreted as described in RFC-2119 [1]. 175 3. Terminologies 177 o SSM (source specific multicast): When a multicast group is 178 operating in SSM mode, only one designated node is eligible to 179 send traffic through the multicast channel. A SSM multicast group 180 with the designated source address s and group address G is 181 denoted by (s, G). 183 o ASM (any source multicast): When a multicast group is operating in 184 ASM mode, any node can multicast packets through the multicast 185 channel to other group members. An ASM multicast group with group 186 address G is denoted by (*, G). 188 o Root (of a multicast group): In a SSM multicast group (s, G), the 189 root of this group is the first-hop router next to the source node 190 s. In an ASM multicast group (*, G), the root of this group is the 191 selected rendezvous point router. 193 o Receiver: The term receiver refers to any node in the multicast 194 group that receives multicast traffic. 196 o Internal forwarding path: Given a multicast group and a forwarding 197 node in the group, the internal forwarding path inside the node 198 refers to the data path between the upstream interface towards the 199 root and one of the downstream interfaces towards the receivers. 201 o Multicast forwarding path: Given a multicast group, a multicast 202 forwarding path refers to the sequence of the interfaces, links 203 and internal forwarding paths from the downstream interface at 204 root until the upstream interface at a receiver. 206 o Multicast forwarding tree: Given a multicast group G, the union of 207 all multicast forwarding paths composes the multicast forwarding 208 tree. 210 o Segment (of multicast forwarding path): The segment of a multicast 211 forwarding path refers to part of the path between any two given 212 interfaces. 214 o Measurement session: A measurement session refers to the period of 215 time in which certain performance metrics over a segment of 216 multicast forwarding path is monitored and measured. 218 o Monitoring node: A monitoring node is a node on a multicast 219 forwarding path that is capable of performing traffic performance 220 measurements on its interfaces. 222 o Active interface: An interface of a monitoring node that is turned 223 on to start a measurement session is said active. 225 o Measurement session control packets: The packets are used for 226 dynamic configuration for active interface to coordinate 227 measurement sessions. 229 Figure 1 shows a multicast forwarding tree rooted at a root's 230 interface A. Within router 1, B-C and B-D are two internal forwarding 231 paths. Path A-B-C-E-G-I is a multicast forwarding path, which starts 232 at root's downstream interface A and ends at receiver 2's upstream 233 interface I. A-B, B-C-E are two segments of this forwarding path. 234 When a measurement session for a metric such as loss rate is turned 235 on over segment A-B, interfaces A and B are active interfaces. 237 +--------+ 238 | source | I +--------+ 239 +---#----+ +---<> recv 2 | 240 # +----------+ G | +--------+ 241 # +----------+ C E | <>----+ 242 +---#--+ A B | <>-----<> router 2 | 243 | root <>------<> router 1 | | <>----+ 244 +------+ | <>---+ +----------+ H | J +--------+ 245 +----------+ D | +---<> recv 3 | 246 | F +--------+ +--------+ 247 +--<> recv 1 | 248 +--------+ 249 <> C Interface 250 ------ Link 252 Figure 1. Example of multicast forwarding tree 254 4. Functional Requirements 256 4.1. Topology discovery and monitoring 258 The monitor system SHOULD have mechanisms to collect topology 259 information of the multicast forwarding trees for any given multicast 260 group. The function can be an integrated part of this monitoring 261 system. Alternatively, the function might relies on other tools and 262 protocols, such as mtrace [5], MANTRA[9], etc. The topology 263 information will be referred by network operators to decide where to 264 enable measurement sessions. 266 4.2. Performance measurement 268 The performance metrics that a monitoring node needs to collect 269 include but not limit to the following. 271 4.2.1. Loss rate 273 Loss rate over a segment is the ratio of user packets not delivered 274 to the total number of user packets delivered over this segment 275 during a given interval. The number of user packets not delivered 276 over a segment is the difference between the number of packets 277 transmitted at the starting interface of the segment and received at 278 the ending interface of this segment. Loss rate is crucial for 279 multimedia streaming, such as IPTV, video/audio conferencing. 281 Loss rate over any segment of a multicast forwarding path MUST be 282 provided. The measurement interval MUST be configurable. 284 4.2.2. One-way delay 286 One-way delay over a segment is the average time that user packets 287 take to traverse this segment of forwarding path during a given 288 interval. The time that a user packet traversing a segment is the 289 difference between the time when the user packet leaves the starting 290 interface of this segment and the time when the same user packet 291 arrives at the ending interface of this segment. The one-way delay 292 metric is essential for real-time interactive applications, such as 293 video/audio conferencing, multiplayer gaming. 295 One-way delay over any segment of a multicast forwarding path SHOULD 296 be able to be measured. The measurement interval MUST be configurable. 298 To get accurate one-way delay measurement results, the two end 299 monitoring nodes of the investigated segments might need to have 300 clock synchronized. 302 4.2.3. Jitter 304 Jitter over a segment is the variance of one-way delay over this 305 segment during a given interval. The metric is of great importance 306 for real-time streaming and interactive applications, such as IPTV, 307 audio/video conferencing. 309 One-way delay jitter over any segment of a multicast forwarding path 310 SHOULD be able to be measured. The measurement interval MUST be 311 configurable. 313 Same as One-way delay measurement, to get accurate jitter, the clock 314 frequencies at the two end monitoring nodes might need to be 315 synchronized so that the clocks at two systems will proceed at the 316 same pace. 318 4.2.4. Throughput 320 Throughput of multicast traffic for a group over a segment is the 321 average number of bytes of user packets of this multicast group 322 transmitted over this segment in unit time during a given interval. 323 The information might be useful for resource management. 325 Throughput of multicast traffic over any segment of a multicast 326 forwarding path MAY be measured. The measurement interval MUST be 327 configurable. 329 4.3. Measurement session management 331 A measurement session refers to the period of time in which 332 measurement for certain performance metrics is enabled over a segment 333 of multicast forwarding path or over a complete multicast forwarding 334 path. During a measurement session, the two end interfaces are said 335 active. When an interface is activated, the interfaces start 336 collecting statistics, such as number or timestamps of user packets 337 which belongs to the given multicast group and pass through the 338 interface. When both interfaces are activated, the measurement 339 session starts. During a measurement session, data from two active 340 interfaces are periodically correlated and the performance metrics, 341 such as loss rate or delay, are derived. The correlation can be done 342 either on the downstream interface if the upstream interface passes 343 its data to it or on a third-party if the raw data on two active 344 interfaces are reported to it. When one of the two interfaces is 345 deactivated, the measurement session stops. 347 4.3.1. Segment v.s. Path 349 Network operators SHOULD be able to turn on or off measurements 350 sessions for specific performance metrics over either a segment of 351 multicast forwarding path or over a complete multicast forwarding 352 path at any time. For example in Figure 1, network operator can turn 353 on the measurement session of loss rate over path A-B-D-F and segment 354 A-B-C as well as jitter over segment C-E-G-I simultaneously. This 355 feature allows network operators to zoom into the suspicious 356 components when degradation or failure occurs. 358 4.3.2. Static v.s. Dynamic configuration 360 A measurement session can be configured statically. In this case, 361 network operators activate the two interfaces or configure their 362 parameter settings on the relevant nodes either manually or 363 automatically through agents of network management system (NMS). 365 Optionally, a measurement session can be configured dynamically. In 366 this case, an interface may coordinate another interface on its 367 forwarding path to start or stop a session. Accordingly, the format 368 and process routines of the measurement session control packets need 369 to be specified. The delivery of such packets SHOULD be reliable and 370 MUST be secured. 372 4.3.3. Proactive v.s. on-demand 374 A measurement session can be started either proactively or on demand. 375 Proactive monitoring is either configured to be carried out 376 periodically and continuously or preconfigured to act on certain 377 events such as alarm signals. To save resources, operators may turn 378 on measurement sessions proactively for critical performance metrics 379 over the backbone segments of multicast forwarding tree only. This 380 keeps the overall monitoring overhead minimal during normal network 381 operations. 383 In contrast to proactive monitoring, on-demand monitoring is 384 initiated manually and for a limited amount of time to carry out 385 diagnostics. When network performance degradation or service 386 disruption occurs, operators might turn on measurement sessions on- 387 demand over the interested segments to facilitate fault localization. 389 4.4. Measurement result report 391 The measurement results might be present in two forms: reports or 392 alarms. 394 4.4.1. Performance reports 396 Performance reports contain streams of measurement data over a period 397 of time. A data collection agent MAY actively poll the monitoring 398 nodes and collect the measurement reports from all active interfaces. 399 Alternatively, the monitoring nodes might be configured to upload the 400 reports to the specific data collection agents once the data become 401 available. To save bandwidth, the content of the reports might be 402 aggregated and compressed. The period of reporting SHOULD be able to 403 be configured or controlled by rate limitation mechanisms (e.g., 404 exponentially increasing). 406 4.4.2. Exceptional alarms 408 On the other hand, the active interfaces of a monitoring node or a 409 third-party MAY be configured to raise alarms when exceptional events 410 such as performance degradation or service disruption occur. Alarm 411 thresholds and the management should be specified for each of the 412 performance metric when the measurement session is configured on this 413 interface. During measurement session, once the value of certain 414 performance metric exceeds the threshold, alarm will be raised and 415 reported to the configured nodes. To prevent huge volume of alarms 416 from overloading the management nodes and network congestion, alarm 417 suppression and aggregation mechanisms SHOULD be employed on the 418 interfaces to limit the rate of alarm report and the volume of data. 420 5. Design considerations 422 To make the monitoring system feasible and optimal for a SP 423 production network, the following considerations should take into 424 account when design the system. 426 5.1. Inline data-plane measurement 428 Measurement results collected by probing packets might be biased or 429 even totally irrelevant given the facts that (1) probing packets 430 collect sampled results only and might not capture the real statistic 431 characteristics of the monitored user traffic. Experiments have 432 demonstrated that the measurement sampled by the probing packets, 433 such as ping probes, might be incorrect if sampling interval is too 434 long [1]; (2) probing packets introduce extra load onto the network. 435 In order to improve accuracy, sampling frequency has to be high 436 enough, which in turn increase network overhead and further bias the 437 measurement results; (3) probing packets are usually not in the same 438 multicast group as user packets and might take different forwarding 439 path given that equal cost multi-path routing (ECMP) and link 440 aggregation (LAG) have been widely adopted in SP network. An out-of- 441 band probing packet might take a path totally different from the user 442 packets of the multicast group that it is monitoring. Even if the 443 forwarding path is the same, the intermediate node might apply 444 different queuing and scheduling strategy for the probing packets. As 445 a result, the measured results might be irrelevant. 447 The performance measurement should be "inline" in the sense that the 448 measurement statistics are derived directly from user packets, 449 instead of probing packets. At the same time, unlike offline packet 450 analysis, the measurement is counting user packets at line-speed in 451 real-time without any packet duplication or buffering. 453 To accomplish the inline measurement, some extra packets might need 454 to be injected into user traffic to coordinate measurement across 455 nodes. The volume of these packets SHOULD be keep minimal such that 456 the injection of such packets will not impact measurement accuracy. 458 5.2. Scalability 460 The measurement methodology and system architecture MUST be scalable. 461 A multicast network for a SP production network usually composes of 462 thousands of nodes. Given the scale, the collecting, processing and 463 reporting overhead of performance measurement data SHOULD not 464 overwhelm either monitoring nodes or management nodes. The volume of 465 reporting traffic should be reasonable and not cause any network 466 congestion. 468 5.3. Robustness 470 The measurements MUST be independent of the failure of the underlying 471 multicast network. For example, the monitor SHOULD generate correct 472 measurement result even if some measurement coordinating packets are 473 lost; invalid performance reports should be able to be identified in 474 case that the underlying multicast network is undergoing drastic 475 changes. 477 If dynamic configuration is supported, the delivery of measurement 478 session control packets SHOULD be reliable so that the measurement 479 sessions can be started, ended and performed in a predictable manner. 480 Meanwhile, the control packets SHOULD not be delivered based on the 481 multicast routing decision. This multicast independent characteristic 482 guarantees that the active interfaces are still under control even if 483 the multicast service is malfunctioning. 485 Similarly, if NMS are used to control the monitoring nodes remotely, 486 the communication between monitoring nodes and NMS SHOULD be reliable. 488 5.4. Security 490 The monitoring system MUST not impose security risks on the network. 491 For example, the monitoring nodes should be prevented from being 492 exploited by third parties to control measurement sessions 493 arbitrarily, which might make the nodes vulnerable for DDoS attacks. 495 If dynamic configuration is supported, the measurement session 496 control packets need to be encrypted and authenticated. 498 5.5. Device flexibility 500 Both the software and hardware deployment requirement for the 501 monitoring system SHOULD be reasonable. For example, one-way delay 502 measurement needs clock synchronization across nodes. To require the 503 installation of expensive hardware clock synchronization devices on 504 all monitoring nodes might be too costly to make the monitoring 505 system infeasible for large deployment. 507 The monitor system SHOULD be incrementally deployable, which means 508 that the system can enable monitoring functionality even if some of 509 the nodes in the network are not equipped with the required software 510 and hardware or does not meet the software and hardware deployment 511 requirements. 513 The non-monitoring nodes without the monitoring capabilities SHOULD 514 be able to coexist with monitoring nodes and function. The packets 515 exchanged between monitoring nodes SHOULD be transparent to other 516 nodes and MUST not cause any malfunction of the non-monitoring nodes. 518 5.6. Extensibility 520 The system should be easy to be extended for new functionalities. For 521 example, the system should be easily extended to collect newly 522 defined performance metrics. 524 6. Security Considerations 526 The security issues have been taken into account in design 527 considerations. 529 7. IANA Considerations 531 There is no IANA action required by this draft. 533 8. References 535 8.1. Normative References 537 [1] Bradner, S., "Key words for use in RFCs to Indicate Requirement 538 Levels", BCP 14, RFC 2119, March 1997. 540 [2] Crocker, D. and Overell, P.(Editors), "Augmented BNF for Syntax 541 Specifications: ABNF", RFC 2234, Internet Mail Consortium and 542 Demon Internet Ltd., November 1997. 544 8.2. Informative References 546 [3] "Trading Floor Architecture", White Paper , Online 547 http://www.cisco.com/en/US/docs/solutions/Verticals/Trading_Flo 548 or_Architecture-E.html. 550 [4] Venaas, S., "Multicast Ping Protocol", draft-ietf-mboned- 551 ssmping-07, December 2008. 553 [5] Asaeda, H., Jinmei, T., Fenner, W., and S. Casner, "Mtrace 554 Version 2: Traceroute Facility for IP Multicast", draft-ietf- 555 mboned-mtrace-v2-03, March 2009. 557 [6] Almeroth, K., Wei, L., and D. Farinacci, "Multicast 558 Reachability Monitor (MRM)", draft-ietf-mboned-mrm-01, July 559 2000. 561 [7] Bacher, D., Swan, A., and L. Rowe, "rtpmon: a third-party RTCP 562 monitor", Conference 4th ACM International conference on 563 multimedium, 1997. 565 [8] Sarac, K. and K. Almeroth, "Application Layer Reachability 566 Monitoring for IP Multicast", Journal Computer Networks Journal, 567 Vol.48, No.2, pp.195-213, June 2005. 569 [9] Rajvaidya, P., Almeroth, K., and k. claffy, "A Scalable 570 Architecture for Monitoring and Visualizating Multicast 571 Statistics", Conference IFIP/IEEE Workshop on Distributed 572 Systems: Operations & Management (DSOM), Austin, Texas, USA, 573 December 2000. 575 [10] Sharma, P., Perry, E., and R. Malpani, "IP Multicast 576 Operational Network Management: Design, Challenges and 577 Experiences", Journal IEEE Network, Volume 17, Issue 2, Mar/Apr 578 2003 Page(s): 49 - 55, Mar/Apr 2003. 580 [11] Al-Shaer, E. and Y. Tang, "MRMON: Remote Multicast Monitoring", 581 Conference NOMS, 2004. 583 [12] Sarac, K. and K. Almeroth, "Supporting Multicast Deployment 584 Efforts: A Survey of Tools for Multicast Monitoring", Journal 585 Journal of High Speed Networks, Vol.9, No.3-4, pp.191-211, 2000. 587 [13] Sarac, K. and K. Almeroth, "Monitoring IP Multicast in the 588 Internet: Recent Advances and Ongoing Challenges", Journal IEEE 589 Communication Magazine, 2005. 591 [14] Vit Novotny, Dan Komosny, "Optimization of Large-Scale RTCP 592 Feedback Reporting in Fixed and Mobile Networks," icwmc, pp.85, 593 Third International Conference on Wireless and Mobile 594 Communications (ICWMC'07), 2007 596 9. Acknowledgments 598 The authors would like to thank Wei Cao, Xinchun Guo, and Hui Liu for 599 their helpful comments and discussions. 601 This document was prepared using 2-Word-v2.0.template.dot. 603 Authors' Addresses 605 Mario Bianchetti 606 Broadband Network Services Innovation, Telecom Italia 608 Email: mario.bianchetti@telecomitalia.it 610 Giovanni Picciano 611 Access Network Engineering, Telecom Italia 613 Email: giovanni.picciano@telecomitalia.it 615 Mach(Guoyi) Chen 616 Huawei Technologies Co. Ltd. 617 KuiKe Building, No.9 Xinxi Rd., 618 Shang-Di Information Industry Base, Hai-Dian District, 619 Beijing, 100085 620 P.R. China 622 EMail: mach@huawei.com 624 Lianshu Zheng 625 Huawei Technology Co. Ltd. 626 KuiKe Building, No. 9 Xinxi Road 627 Shang-Di Information Industry Base, Hai-Dian District, 628 Beijing 100085 629 China 631 Email: verozheng@huawei.com