Network Working Group                                    A. Tempia Bonda
Internet-Draft                                               G. Picciano
Intended status: Informational                            Telecom Italia
                                                                 M. Chen
                                                                L. Zheng
                                           Huawei Technologies Co., Ltd
Expires: July 17, 2011                                  January 17, 2011

          Requirements for IP multicast performance monitoring
           draft-ietf-mboned-ip-multicast-pm-requirement-00.txt

Status of this Memo

   This Internet-Draft is submitted to IETF in full conformance with
   the provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF), its areas, and its working groups.  Note that
   other groups may also distribute working documents as Internet-
   Drafts.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."

   The list of current Internet-Drafts can be accessed at
   http://www.ietf.org/ietf/1id-abstracts.txt.

   The list of Internet-Draft Shadow Directories can be accessed at
   http://www.ietf.org/shadow.html.

   This Internet-Draft will expire on July 17, 2011.

Copyright Notice

   Copyright (c) 2011 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document.  Code Components extracted from this
   document must include Simplified BSD License text as described in
   Section 4.e of the Trust Legal Provisions and are provided without
   warranty as described in the Simplified BSD License.

Abstract

   This document describes the requirements for an IP multicast
   performance monitoring system for service provider IP multicast
   networks.  Such a system enables efficient performance monitoring
   in service providers' production networks and provides diagnostic
   information in case of performance degradation or failure.

Table of Contents

   1. Introduction
   2. Conventions used in this document
   3. Terminology
   4. Functional Requirements
      4.1. Topology discovery and monitoring
      4.2. Performance measurement
         4.2.1. Loss rate
         4.2.2. One-way delay
         4.2.3. Jitter
         4.2.4. Throughput
      4.3. Measurement session management
         4.3.1. Segment vs. path
         4.3.2. Static vs. dynamic configuration
         4.3.3. Proactive vs. on-demand
      4.4. Measurement result reports
         4.4.1. Performance reports
         4.4.2. Exceptional alarms
   5. Design considerations
      5.1. Inline data-plane measurement
      5.2. Scalability
      5.3. Robustness
      5.4. Security
      5.5. Device flexibility
      5.6. Extensibility
   6. Security Considerations
   7. IANA Considerations
   8. References
      8.1. Normative References
      8.2. Informative References
   9. Acknowledgments
1. Introduction

   Service providers (SPs) have been leveraging IP multicast to
   provide revenue-generating services, such as IP television (IPTV)
   and video conferencing, as well as the distribution of stock quotes
   and news.

   These services are usually loss-sensitive or delay-sensitive, and
   their data packets need to be delivered over a large-scale IP
   network in real time.  Meanwhile, these services demand relatively
   strict service-level agreements (SLAs).  For example, a loss rate
   above 5% is generally considered unacceptable for IPTV delivery,
   and video conferencing normally demands delays of no more than 150
   milliseconds.  However, the real-time nature of the traffic and the
   deployment scale of the services make IP multicast performance
   monitoring in an SP's production network very challenging.  With
   the increasing deployment of multicast services in SP networks, it
   becomes mandatory to develop an efficient system, designed for SPs,
   that accommodates the following functions.

   o  SLA monitoring and verification: verify whether the performance
      of a production multicast network meets SLA requirements.

   o  Network optimization: identify bottlenecks when the performance
      metrics do not meet the SLA requirements.

   o  Fault localization: pinpoint impaired components in case of
      performance degradation and service disruption.

   These functions alleviate SPs' OAM costs for IP multicast networks
   and ensure service quality.

   However, the existing IP multicast monitoring tools and systems,
   which were mostly designed either for primitive connectivity
   diagnosis or for experimental evaluations, do not suit an SP
   production network, given the following facts:

   o  Most of them provide only end-to-end reachability checks
      [2][4][6].  They cannot provide sophisticated measurement
      metrics, such as packet loss, one-way delay, and jitter, for the
      purpose of SLA verification.

   o  Most of them can perform end-to-end measurements only.  For
      example, an RTCP-based monitoring system [5] can report
      end-to-end packet loss rate and jitter.  End-to-end measurements
      are usually inadequate for fault localization, which needs
      finer-grained measurement data to pinpoint exact root causes.

   o  Most of them use probing packets to probe network performance
      [2][4].  This approach might yield biased or even irrelevant
      results, because the probing results are sampled and the
      out-of-band probing packets might be forwarded differently from
      the monitored user traffic.

   o  Most of them are not scalable in a large deployment like an
      SP's production network.  For example, in an IPTV deployment,
      the number of group members might be on the order of thousands.
      At this scale, an RTCP-based multicast monitoring system [5]
      becomes almost unusable, because the RTCP report intervals of
      each receiver might be delayed up to minutes or even hours due
      to an over-crowded reporting multicast channel [12].

   o  Some of them rely on information from external protocols, which
      leaves their capabilities and deployment scenarios limited by
      those protocols.  Examples are passive measurement tools that
      collect and analyze messages from protocols such as multicast
      routing protocols [7], IGMP [9], or RTCP [5].  Another example
      is an SNMP-based system [8] that collects and analyzes relevant
      multicast MIB information.

   This document describes the requirements for an IP multicast
   performance monitoring system for service provider (SP) IP
   multicast networks.  Such a system should enable efficient
   monitoring of the performance metrics of any given multicast
   channel (*,G) or (S,G) and provide diagnostic information in case
   of performance degradation or failure, helping SPs perform SLA
   verification, network optimization, and fault localization in a
   large production network.

2. Conventions used in this document

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in
   this document are to be interpreted as described in RFC 2119 [1].

3. Terminology

   o  SSM (source-specific multicast): When a multicast group is
      operating in SSM mode, only one designated node is eligible to
      send traffic through the multicast channel.  An SSM multicast
      group with the designated source address s and group address G
      is denoted by (s, G).

   o  ASM (any-source multicast): When a multicast group is operating
      in ASM mode, any node can multicast packets through the
      multicast channel to other group members.  An ASM multicast
      group with group address G is denoted by (*, G).

   o  Root (of a multicast group): In an SSM multicast group (s, G),
      the root of the group is the first-hop router next to the source
      node s.  In an ASM multicast group (*, G), the root of the group
      is the selected rendezvous point router.

   o  Receiver: The term receiver refers to any node in the multicast
      group that should receive multicast traffic.

   o  Internal forwarding path: Given a multicast group and a
      forwarding node in the group, the internal forwarding path
      inside the node refers to the data path between the upstream
      interface towards the root and one of the downstream interfaces
      toward a receiver.

   o  Multicast forwarding path: Given a multicast group, a multicast
      forwarding path refers to the sequence of interfaces, links, and
      internal forwarding paths from the downstream interface at the
      root to the upstream interface at a receiver.

   o  Multicast forwarding tree: Given a multicast group G, the union
      of all multicast forwarding paths constitutes the multicast
      forwarding tree.

   o  Segment (of a multicast forwarding path): The segment of a
      multicast forwarding path refers to the part of the path between
      any two given interfaces.

   o  Measurement session: A measurement session refers to the period
      of time in which certain performance metrics are monitored and
      measured over a segment of a multicast forwarding path.

   o  Monitoring node: A monitoring node is a node on a multicast
      forwarding path that is capable of performing traffic
      performance measurements on its interfaces.

   o  Active interface: An interface of a monitoring node that is
      turned on to start a measurement session is said to be active.

   o  Measurement session control packets: Packets used in dynamic
      configuration to activate interfaces and coordinate measurement
      sessions.

   Figure 1 shows a multicast forwarding tree rooted at the root's
   interface A.  Within router 1, B-C and B-D are two internal
   forwarding paths.  Path A-B-C-E-G-I is a multicast forwarding path,
   which starts at the root's downstream interface A and ends at
   receiver 2's upstream interface I.  A-B and B-C-E are two segments
   of this forwarding path.  When a measurement session for a metric
   such as loss rate is turned on over segment A-B, interfaces A and B
   are active interfaces.

   +--------+
   | source |
   +---#----+
       #
   +---#--+ A      B +----------+ C    E +----------+ G    I +--------+
   | root <>--------<> router 1 <>------<> router 2 <>------<> recv 2 |
   +------+          +-------<>-+        +-------<>-+        +--------+
                             D |                 H |
                               | F +--------+      | J +--------+
                               +--<> recv 1 |      +--<> recv 3 |
                                   +--------+          +--------+

   <>     Interface
   ------ Link

           Figure 1. Example of multicast forwarding tree

4. Functional Requirements

4.1. Topology discovery and monitoring

   The monitoring system SHOULD have mechanisms to collect topology
   information of the multicast forwarding trees for any given
   multicast group.  This function can be an integrated part of the
   monitoring system.  Alternatively, it might rely on other tools and
   protocols, such as mtrace [3] or MANTRA [7].  The topology
   information will be referenced by network operators to decide where
   to enable measurement sessions.

4.2. Performance measurement

   The performance metrics that a monitoring node needs to collect
   include, but are not limited to, the following.

4.2.1. Loss rate

   Loss rate over a segment is the ratio of user packets not delivered
   over this segment to the total number of user packets transmitted
   into this segment during a given interval.  The number of user
   packets not delivered over a segment is the difference between the
   number of packets transmitted at the starting interface of the
   segment and the number received at the ending interface of the
   segment.  Loss rate is crucial for multimedia streaming, such as
   IPTV and video/audio conferencing.

   Loss rate over any segment of a multicast forwarding path MUST be
   provided.  The measurement interval MUST be configurable.
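   As a non-normative illustration, the following Python sketch shows
   how loss rate over a segment could be derived from per-interval
   packet counters collected at the segment's two active interfaces.
   The function and variable names are hypothetical.

   <CODE BEGINS>
   def loss_rate(tx_start: int, rx_end: int) -> float:
       """Loss rate over a segment for one measurement interval.

       tx_start: packets of the monitored (S,G)/(*,G) channel counted
                 at the segment's starting (upstream) interface
       rx_end:   packets of the same channel counted at the segment's
                 ending (downstream) interface
       """
       if tx_start == 0:
           return 0.0                    # no traffic, no loss
       lost = max(tx_start - rx_end, 0)  # clamp counter artifacts
       return lost / tx_start

   # Example: 10000 packets enter segment A-B, 9950 arrive at B
   assert abs(loss_rate(10000, 9950) - 0.005) < 1e-12
   <CODE ENDS>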
4.2.2. One-way delay

   One-way delay over a segment is the average time that user packets
   take to traverse this segment of the forwarding path during a given
   interval.  The time that a user packet takes to traverse a segment
   is the difference between the time when the user packet leaves the
   starting interface of the segment and the time when the same user
   packet arrives at the ending interface of the segment.  The one-way
   delay metric is essential for real-time interactive applications,
   such as video/audio conferencing and multiplayer gaming.

   One-way delay over any segment of a multicast forwarding path
   SHOULD be measurable.  The measurement interval MUST be
   configurable.

   To obtain accurate one-way delay measurement results, the two end
   monitoring nodes of the investigated segment might need to have
   their clocks synchronized.
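   For illustration only, the sketch below derives the average one-way
   delay from per-packet departure and arrival timestamps recorded at
   the two ends of a segment.  It assumes synchronized clocks, as
   noted above; the names are hypothetical.

   <CODE BEGINS>
   def one_way_delay_ms(tx_times: list, rx_times: list) -> float:
       """Average one-way delay (ms) over a segment for one interval.

       tx_times[i] and rx_times[i] are the departure and arrival
       timestamps (in ms, from synchronized clocks) of the i-th user
       packet at the segment's starting and ending interfaces.
       """
       assert len(tx_times) == len(rx_times) and tx_times
       return sum(rx - tx
                  for tx, rx in zip(tx_times, rx_times)) / len(tx_times)

   # Three packets observed at both ends of segment A-B
   print(one_way_delay_ms([0.0, 10.0, 20.0], [5.0, 15.5, 24.5]))  # 5.0
   <CODE ENDS>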
4.2.3. Jitter

   Jitter over a segment is the variance of one-way delay over this
   segment during a given interval.  This metric is of great
   importance for real-time streaming and interactive applications,
   such as IPTV and audio/video conferencing.

   One-way delay jitter over any segment of a multicast forwarding
   path SHOULD be measurable.  The measurement interval MUST be
   configurable.

   As with one-way delay measurement, to obtain accurate jitter
   results, the clock frequencies at the two end monitoring nodes
   might need to be synchronized so that the clocks of the two systems
   proceed at the same pace.
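   The following non-normative sketch computes jitter as the variance
   of per-packet one-way delays.  It also illustrates why only
   frequency synchronization is needed: a constant clock offset shifts
   every delay sample equally and cancels out of the variance.

   <CODE BEGINS>
   def jitter_ms2(delays_ms: list) -> float:
       """Jitter as the variance (ms^2) of per-packet one-way delays.

       A constant offset between the two nodes' clocks shifts every
       delay sample by the same amount and drops out of the variance,
       so only the clock rates need to match.
       """
       n = len(delays_ms)
       assert n > 0
       mean = sum(delays_ms) / n
       return sum((d - mean) ** 2 for d in delays_ms) / n

   print(jitter_ms2([5.0, 5.5, 4.5]))  # ~0.167 ms^2
   <CODE ENDS>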
4.2.4. Throughput

   Throughput of multicast traffic for a group over a segment is the
   average number of bytes of user packets of this multicast group
   transmitted over this segment per unit time during a given
   interval.  This information might be useful for resource
   management.

   Throughput of multicast traffic over any segment of a multicast
   forwarding path MAY be measured.  The measurement interval MUST be
   configurable.

4.3. Measurement session management

   A measurement session refers to the period of time in which
   measurement of certain performance metrics is enabled over a
   segment of a multicast forwarding path or over a complete multicast
   forwarding path.  During a measurement session, the two end
   interfaces are said to be active.  When an interface is activated,
   it starts collecting statistics, such as counts or timestamps of
   the user packets that belong to the given multicast group and pass
   through the interface.  When both interfaces are activated, the
   measurement session starts.  During a measurement session, data
   from the two active interfaces are periodically correlated, and
   performance metrics, such as loss rate or delay, are derived.  The
   correlation can be done either on the downstream interface, if the
   upstream interface passes its data to it, or on a third party, if
   the raw data from the two active interfaces are reported to it.
   When one of the two interfaces is deactivated, the measurement
   session stops.
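   As a non-normative sketch of the correlation step described above,
   the following fragment assumes that each active interface reports a
   per-interval record to a third party, which aligns the two records
   and derives segment metrics.  The record format shown is purely
   illustrative, not a defined report format.

   <CODE BEGINS>
   from dataclasses import dataclass

   @dataclass
   class IntervalRecord:
       # Per-interval statistics reported by one active interface.
       interval_id: int   # aligns records from the two interfaces
       packets: int       # user packets of the monitored channel
       octets: int

   def correlate(up: IntervalRecord, down: IntervalRecord,
                 interval_s: float) -> dict:
       # Derive segment metrics from the two interfaces' records.
       assert up.interval_id == down.interval_id
       lost = max(up.packets - down.packets, 0)
       return {
           "loss_rate": lost / up.packets if up.packets else 0.0,
           "throughput_Bps": down.octets / interval_s,
       }

   up = IntervalRecord(interval_id=7, packets=10000, octets=12800000)
   down = IntervalRecord(interval_id=7, packets=9950, octets=12736000)
   print(correlate(up, down, interval_s=10.0))
   <CODE ENDS>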
4.3.1. Segment vs. path

   Network operators SHOULD be able to turn on or off measurement
   sessions for specific performance metrics, over either a segment of
   a multicast forwarding path or a complete multicast forwarding
   path, at any time.  For example, in Figure 1 a network operator can
   simultaneously turn on measurement sessions for loss rate over path
   A-B-D-F and segment A-B-C, as well as for jitter over segment
   C-E-G-I.  This feature allows network operators to zoom in on
   suspicious components when degradation or failure occurs.

4.3.2. Static vs. dynamic configuration

   A measurement session can be configured statically.  In this case,
   network operators activate the two interfaces or configure their
   parameter settings on the relevant nodes, either manually or
   automatically through agents of a network management system (NMS).

   Optionally, a measurement session can be configured dynamically.
   In this case, an interface may coordinate with another interface on
   its forwarding path to start or stop a session.  Accordingly, the
   format and processing routines of the measurement session control
   packets need to be specified.  The delivery of such packets SHOULD
   be reliable, and it MUST be possible to secure their delivery.

4.3.3. Proactive vs. on-demand

   A measurement session can be started either proactively or on
   demand.  Proactive monitoring is either configured to be carried
   out periodically and continuously, or preconfigured to act on
   certain events such as alarm signals.  To save resources, operators
   may proactively turn on measurement sessions for critical
   performance metrics over the backbone segments of the multicast
   forwarding tree only.  This keeps the overall monitoring overhead
   minimal during normal network operations.

   In contrast to proactive monitoring, on-demand monitoring is
   initiated manually and for a limited amount of time to carry out
   diagnostics.  When network performance degradation or service
   disruption occurs, operators might turn on measurement sessions on
   demand over the segments of interest to facilitate fault
   localization.

4.4. Measurement result reports

   The measurement results might be presented in two forms: reports or
   alarms.

4.4.1. Performance reports

   Performance reports contain streams of measurement data over a
   period of time.  A data collection agent MAY actively poll the
   monitoring nodes and collect the measurement reports from all
   active interfaces.  Alternatively, the monitoring nodes might be
   configured to upload the reports to specific data collection agents
   once the data become available.  To save bandwidth, the content of
   the reports might be aggregated and compressed.  The reporting
   period SHOULD be configurable or controlled by rate-limitation
   mechanisms (e.g., exponentially increasing intervals).

4.4.2. Exceptional alarms

   On the other hand, the active interfaces of a monitoring node, or a
   third party, MAY be configured to raise alarms when exceptional
   events such as performance degradation or service disruption occur.
   Alarm thresholds and their management should be specified for each
   performance metric when the measurement session is configured on an
   interface.  During a measurement session, once the value of a
   performance metric exceeds its threshold, an alarm is raised and
   reported to the configured nodes.  To prevent a huge volume of
   alarms from overloading the management nodes or congesting the
   network, alarm suppression and aggregation mechanisms SHOULD be
   employed on the interfaces to limit the rate of alarm reports and
   the volume of data.
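   For illustration only, the following sketch shows one possible
   alarm suppression scheme: at most one alarm per metric is reported
   within a hold-down window, and alarms suppressed in between are
   counted and aggregated into the next report.  The class and
   parameter names are hypothetical.

   <CODE BEGINS>
   import time

   class AlarmSuppressor:
       """Report at most one alarm per metric per hold-down window;
       count alarms suppressed in between and aggregate them."""

       def __init__(self, hold_down_s: float = 60.0):
           self.hold_down_s = hold_down_s
           self.last_sent = {}   # metric -> time of last alarm sent
           self.suppressed = {}  # metric -> alarms suppressed since

       def report(self, metric: str, value: float, threshold: float):
           if value <= threshold:
               return
           now = time.monotonic()
           last = self.last_sent.get(metric, float("-inf"))
           if now - last < self.hold_down_s:
               # Within the hold-down window: suppress and count.
               self.suppressed[metric] = \
                   self.suppressed.get(metric, 0) + 1
               return
           n = self.suppressed.pop(metric, 0)
           self.last_sent[metric] = now
           print("ALARM %s=%.3f > %.3f (%d suppressed)"
                 % (metric, value, threshold, n))

   s = AlarmSuppressor(hold_down_s=60.0)
   s.report("loss_rate", 0.08, threshold=0.05)  # reported
   s.report("loss_rate", 0.09, threshold=0.05)  # suppressed
   <CODE ENDS>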
5. Design considerations

   To make the monitoring system feasible and optimal for an SP
   production network, the following considerations should be taken
   into account when designing the system.

5.1. Inline data-plane measurement

   Measurement results collected by probing packets might be biased or
   even totally irrelevant, given the following facts: (1) probing
   packets collect sampled results only and might not capture the real
   statistical characteristics of the monitored user traffic;
   experiments have demonstrated that measurements sampled by probing
   packets, such as ping probes, might be incorrect if the sampling
   interval is too long [10].  (2) Probing packets introduce extra
   load onto the network; to improve accuracy, the sampling frequency
   has to be high enough, which in turn increases network overhead and
   further biases the measurement results.  (3) Probing packets are
   usually not in the same multicast group as user packets and might
   take a different forwarding path, given that equal-cost multi-path
   routing (ECMP) and link aggregation (LAG) have been widely adopted
   in SP networks.  An out-of-band probing packet might take a path
   totally different from the user packets of the multicast group that
   it is monitoring.  Even if the forwarding path is the same, the
   intermediate nodes might apply different queuing and scheduling
   strategies to the probing packets.  As a result, the measured
   results might be irrelevant.

   The performance measurement should be "inline" in the sense that
   the measurement statistics are derived directly from user packets,
   instead of probing packets.  At the same time, unlike offline
   packet analysis, the measurement counts user packets at line speed
   in real time, without any packet duplication or buffering.

   To accomplish inline measurement, some extra packets might need to
   be injected into the user traffic to coordinate measurement across
   nodes.  The volume of these packets SHOULD be kept minimal so that
   their injection does not impact measurement accuracy.
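   The following non-normative sketch illustrates the inline idea at a
   conceptual level: per-channel counters are updated as user packets
   are forwarded, with no probe injection, duplication, or buffering.
   An actual implementation would live in the forwarding hardware or
   fast path; the names are hypothetical.

   <CODE BEGINS>
   from collections import defaultdict

   class InlineChannelCounters:
       """Per-interface counters updated while user packets are being
       forwarded; no probe packets, no duplication, no buffering."""

       def __init__(self):
           self.pkts = defaultdict(int)    # (src, group) -> packets
           self.octets = defaultdict(int)  # (src, group) -> bytes

       def on_forward(self, src: str, group: str, length: int):
           # Invoked inline for every forwarded user packet.
           self.pkts[(src, group)] += 1
           self.octets[(src, group)] += length

       def read_and_reset(self, src: str, group: str):
           # Read-and-clear at each measurement interval boundary.
           key = (src, group)
           return self.pkts.pop(key, 0), self.octets.pop(key, 0)

   c = InlineChannelCounters()
   c.on_forward("10.0.0.1", "232.1.1.1", 1316)
   c.on_forward("10.0.0.1", "232.1.1.1", 1316)
   print(c.read_and_reset("10.0.0.1", "232.1.1.1"))  # (2, 2632)
   <CODE ENDS>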
5.2. Scalability

   The measurement methodology and system architecture MUST be
   scalable.  A multicast network for an SP production network usually
   comprises thousands of nodes.  Given this scale, the collection,
   processing, and reporting overhead of performance measurement data
   SHOULD NOT overwhelm either the monitoring nodes or the management
   nodes.  The volume of reporting traffic should be reasonable and
   not cause any network congestion.

5.3. Robustness

   The measurements MUST be independent of failures of the underlying
   multicast network.  For example, the monitoring system SHOULD
   generate correct measurement results even if some measurement
   coordination packets are lost; it should be possible to identify
   invalid performance reports in case the underlying multicast
   network is undergoing drastic changes.

   If dynamic configuration is supported, the delivery of measurement
   session control packets SHOULD be reliable so that the measurement
   sessions can be started, ended, and performed in a predictable
   manner.  Meanwhile, the control packets SHOULD NOT be delivered
   based on the multicast routing decision.  This multicast-
   independent characteristic guarantees that the active interfaces
   remain under control even if the multicast service is
   malfunctioning.

   Similarly, if an NMS is used to control the monitoring nodes
   remotely, the communication between the monitoring nodes and the
   NMS SHOULD be reliable.

5.4. Security

   The monitoring system MUST NOT impose security risks on the
   network.  For example, the monitoring nodes should be prevented
   from being exploited by third parties to control measurement
   sessions arbitrarily, which might make the nodes vulnerable to DDoS
   attacks.

   If dynamic configuration is supported, the measurement session
   control packets need to be encrypted and authenticated.

5.5. Device flexibility

   Both the software and hardware deployment requirements for the
   monitoring system SHOULD be reasonable.  For example, one-way delay
   measurement needs clock synchronization across nodes.  Requiring
   the installation of expensive hardware clock-synchronization
   devices on all monitoring nodes might be so costly as to make the
   monitoring system infeasible for large deployments.

   The monitoring system SHOULD be incrementally deployable, which
   means that the system can enable monitoring functionality even if
   some of the nodes in the network are not equipped with the required
   software and hardware or do not meet the software and hardware
   deployment requirements.

   Non-monitoring nodes, i.e., nodes without monitoring capabilities,
   SHOULD be able to coexist with monitoring nodes and function
   normally.  The packets exchanged between monitoring nodes SHOULD be
   transparent to other nodes and MUST NOT cause any malfunction of
   the non-monitoring nodes.

5.6. Extensibility

   The system should be easy to extend with new functionality.  For
   example, the system should be easily extended to collect newly
   defined performance metrics.

6. Security Considerations

   Security issues have been taken into account in the design
   considerations (see Section 5.4).

7. IANA Considerations

   There is no IANA action required by this draft.

8. References

8.1. Normative References

   [1]  Bradner, S., "Key words for use in RFCs to Indicate
        Requirement Levels", BCP 14, RFC 2119, March 1997.

8.2. Informative References

   [2]  Venaas, S., "Multicast Ping Protocol",
        draft-ietf-mboned-ssmping-07 (work in progress), December
        2008.

   [3]  Asaeda, H., Jinmei, T., Fenner, W., and S. Casner, "Mtrace
        Version 2: Traceroute Facility for IP Multicast",
        draft-ietf-mboned-mtrace-v2-03 (work in progress), March 2009.

   [4]  Almeroth, K., Wei, L., and D. Farinacci, "Multicast
        Reachability Monitor (MRM)", draft-ietf-mboned-mrm-01 (work in
        progress), July 2000.

   [5]  Bacher, D., Swan, A., and L. Rowe, "rtpmon: a third-party RTCP
        monitor", 4th ACM International Conference on Multimedia,
        1997.

   [6]  Sarac, K. and K. Almeroth, "Application Layer Reachability
        Monitoring for IP Multicast", Computer Networks Journal,
        Vol. 48, No. 2, pp. 195-213, June 2005.

   [7]  Rajvaidya, P., Almeroth, K., and k. claffy, "A Scalable
        Architecture for Monitoring and Visualizing Multicast
        Statistics", IFIP/IEEE Workshop on Distributed Systems:
        Operations & Management (DSOM), Austin, Texas, USA, December
        2000.

   [8]  Sharma, P., Perry, E., and R. Malpani, "IP Multicast
        Operational Network Management: Design, Challenges and
        Experiences", IEEE Network, Volume 17, Issue 2, pp. 49-55,
        Mar/Apr 2003.

   [9]  Al-Shaer, E. and Y. Tang, "MRMON: Remote Multicast
        Monitoring", NOMS, 2004.

   [10] Sarac, K. and K. Almeroth, "Supporting Multicast Deployment
        Efforts: A Survey of Tools for Multicast Monitoring", Journal
        of High Speed Networks, Vol. 9, No. 3-4, pp. 191-211, 2000.

   [11] Sarac, K. and K. Almeroth, "Monitoring IP Multicast in the
        Internet: Recent Advances and Ongoing Challenges", IEEE
        Communications Magazine, 2005.

   [12] Novotny, V. and D. Komosny, "Optimization of Large-Scale RTCP
        Feedback Reporting in Fixed and Mobile Networks", Third
        International Conference on Wireless and Mobile Communications
        (ICWMC'07), p. 85, 2007.

9. Acknowledgments

   The authors would like to thank Wei Cao, Xinchun Guo, and Hui Liu
   for their helpful comments and discussions.

   This document was prepared using 2-Word-v2.0.template.dot.
Authors' Addresses

   Alberto Tempia Bonda
   Telecom Italia
   Via Reiss Romoli, 274
   Torino 10148
   Italy

   Email: alberto.tempiabonda@telecomitalia.it

   Giovanni Picciano
   Telecom Italia
   Via Di Val Cannuta 250
   Roma 00166
   Italy

   Email: giovanni.picciano@telecomitalia.it

   Mach(Guoyi) Chen
   Huawei Technologies Co., Ltd.
   Huawei Building, No.3 Xinxi Road,
   Hai-Dian District,
   Beijing, 100085
   China

   Email: mach@huawei.com

   Lianshu Zheng
   Huawei Technologies Co., Ltd.
   Huawei Building, No.3 Xinxi Road,
   Hai-Dian District,
   Beijing, 100085
   China

   Email: verozheng@huawei.com