Network Working Group                                    A. Tempia Bonda
Internet-Draft                                               G. Picciano
Intended status: Informational                            Telecom Italia
                                                                 M. Chen
                                                                L. Zheng
                                            Huawei Technologies Co., Ltd
Expires: January 1, 2012                                    July 1, 2011

         Requirements for IP multicast performance monitoring
          draft-ietf-mboned-ip-multicast-pm-requirement-01.txt

Abstract

   This document describes the requirements for an IP multicast
   performance monitoring system for service provider IP multicast
   networks.  Such a system enables efficient performance monitoring in
   service providers' production networks and provides diagnostic
   information in case of performance degradation or failure.

Status of this Memo

   This Internet-Draft is submitted to IETF in full conformance with
   the provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF), its areas, and its working groups.  Note that
   other groups may also distribute working documents as Internet-
   Drafts.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."

   The list of current Internet-Drafts can be accessed at
   http://www.ietf.org/ietf/1id-abstracts.txt.

   The list of Internet-Draft Shadow Directories can be accessed at
   http://www.ietf.org/shadow.html.

   This Internet-Draft will expire on January 1, 2012.

Copyright Notice

   Copyright (c) 2011 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document.  Code Components extracted from this
   document must include Simplified BSD License text as described in
   Section 4.e of the Trust Legal Provisions and are provided without
   warranty as described in the Simplified BSD License.

Table of Contents

   1. Introduction ................................................. 3
   2. Conventions used in this document ............................ 4
   3. Terminology .................................................. 4
   4. Functional Requirements ...................................... 6
      4.1. Topology discovery and monitoring ....................... 6
      4.2. Performance measurement ................................. 6
         4.2.1. Loss rate .......................................... 6
         4.2.2. One-way delay ...................................... 7
         4.2.3. Jitter ............................................. 7
         4.2.4. Throughput ......................................... 8
      4.3. Measurement session management .......................... 8
         4.3.1. Segment vs. Path ................................... 8
         4.3.2. Static vs. Dynamic configuration ................... 8
         4.3.3. Proactive vs. On-demand ............................ 9
      4.4. Measurement result report ............................... 9
         4.4.1. Performance reports ................................ 9
         4.4.2. Exceptional alarms ................................. 9
   5. Design considerations ....................................... 10
      5.1. Inline data-plane measurement .......................... 10
      5.2. Scalability ............................................ 11
      5.3. Robustness ............................................. 11
      5.4. Security ............................................... 11
      5.5. Device flexibility ..................................... 11
      5.6. Extensibility .......................................... 12
   6. Security Considerations ..................................... 12
   7. IANA Considerations ......................................... 12
   8. References .................................................. 12
      8.1. Normative References ................................... 12
      8.2. Informative References ................................. 12
   9. Acknowledgments ............................................. 14

1. Introduction

   Service providers (SPs) have been leveraging IP multicast to provide
   revenue-generating services, such as IP television (IPTV) and video
   conferencing, as well as the distribution of stock quotes and news.
   These services are usually loss-sensitive or delay-sensitive, and
   their data packets need to be delivered over a large-scale IP
   network in real time.  Meanwhile, these services demand relatively
   strict service-level agreements (SLAs).  For example, a loss rate
   over 5% is generally considered unacceptable for IPTV delivery, and
   video conferencing normally demands delays of no more than 150
   milliseconds.  However, the real-time nature of the traffic and the
   deployment scale of the services make IP multicast performance
   monitoring in an SP's production network very challenging.  With the
   increasing deployment of multicast services in SP networks, it
   becomes mandatory to develop an efficient system that is designed
   for SPs to accommodate the following functions:

   o  SLA monitoring and verification: verify whether the performance
      of a production multicast network meets SLA requirements.

   o  Network optimization: identify bottlenecks when the performance
      metrics do not meet the SLA requirements.

   o  Fault localization: pinpoint impaired components in case of
      performance degradation or service disruption.

   These functions alleviate the OAM cost of IP multicast networks for
   SPs and help ensure the quality of services.

   However, the existing IP multicast monitoring tools and systems,
   which were mostly designed either for primitive connectivity
   diagnosis or for experimental evaluations, do not suit an SP
   production network, given the following facts:

   o  Most of them provide an end-to-end reachability check only
      [2][4][6].  They cannot provide sophisticated measurement
      metrics, such as packet loss, one-way delay, and jitter, for the
      purpose of SLA verification.

   o  Most of them can perform end-to-end measurements only.  For
      example, an RTCP-based monitoring system [5] can report the end-
      to-end packet loss rate and jitter.  End-to-end measurements are
      usually inadequate for fault localization, which needs finer-
      grained measurement data to pinpoint exact root causes.

   o  Most of them use probing packets to probe network performance
      [2][4].
      The approach might yield biased or even irrelevant results,
      because the probing results are sampled and the out-of-band
      probing packets might be forwarded differently from the monitored
      user traffic.

   o  Most of them are not scalable in a large deployment such as an
      SP's production network.  For example, in an IPTV deployment, the
      number of group members might be on the order of thousands.  At
      this scale, an RTCP-based multicast monitoring system [5] becomes
      almost unusable, because the RTCP report intervals of each
      receiver might be delayed up to minutes or even hours due to an
      over-crowded reporting multicast channel [12].

   o  Some of them rely on information from external protocols, which
      makes their capabilities and deployment scenarios limited by
      those protocols.  Examples are passive measurement tools that
      collect and analyze messages from protocols such as multicast
      routing protocols [7], IGMP [9], or RTCP [5].  Another example is
      an SNMP-based system [8] that collects and analyzes relevant
      multicast MIB information.

   This document describes the requirements for an IP multicast
   performance monitoring system for service provider (SP) IP multicast
   networks.  This system should enable efficient monitoring of the
   performance metrics of any given multicast channel, (*,G) or (S,G),
   and provide diagnostic information in case of performance
   degradation or failure, which helps SPs perform SLA verification,
   network optimization, and fault localization in a large production
   network.

2. Conventions used in this document

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
   document are to be interpreted as described in RFC 2119 [1].

3. Terminology

   o  SSM (source-specific multicast): When a multicast group is
      operating in SSM mode, only one designated node is eligible to
      send traffic through the multicast channel.  An SSM multicast
      group with the designated source address s and group address G is
      denoted by (s, G).

   o  ASM (any-source multicast): When a multicast group is operating
      in ASM mode, any node can multicast packets through the multicast
      channel to other group members.  An ASM multicast group with
      group address G is denoted by (*, G).

   o  Root (of a multicast group): In an SSM multicast group (s, G),
      the root of the group is the first-hop router next to the source
      node s.  In an ASM multicast group (*, G), the root of the group
      is the selected rendezvous point router.

   o  Receiver: The term receiver refers to any node in the multicast
      group that should receive multicast traffic.

   o  Internal forwarding path: Given a multicast group and a
      forwarding node in the group, the internal forwarding path inside
      the node refers to the data path between the upstream interface
      towards the root and one of the downstream interfaces toward a
      receiver.

   o  Multicast forwarding path: Given a multicast group, a multicast
      forwarding path refers to the sequence of interfaces, links, and
      internal forwarding paths from the downstream interface at the
      root to the upstream interface at a receiver.

   o  Multicast forwarding tree: Given a multicast group G, the union
      of all multicast forwarding paths composes the multicast
      forwarding tree.
   o  Segment (of a multicast forwarding path): A segment of a
      multicast forwarding path refers to the part of the path between
      any two given interfaces.

   o  Measurement session: A measurement session refers to the period
      of time in which certain performance metrics over a segment of a
      multicast forwarding path are monitored and measured.

   o  Monitoring node: A monitoring node is a node on a multicast
      forwarding path that is capable of performing traffic performance
      measurements on its interfaces.

   o  Active interface: An interface of a monitoring node that is
      turned on to start a measurement session is said to be active.

   o  Measurement session control packets: The packets used in dynamic
      configuration of active interfaces to coordinate measurement
      sessions.

   Figure 1 shows a multicast forwarding tree rooted at the root's
   interface A.  Within router 1, B-C and B-D are two internal
   forwarding paths.  Path A-B-C-E-G-I is a multicast forwarding path,
   which starts at the root's downstream interface A and ends at
   receiver 2's upstream interface I.  A-B and B-C-E are two segments
   of this forwarding path.  When a measurement session for a metric
   such as loss rate is turned on over segment A-B, interfaces A and B
   are active interfaces.

   +--------+
   | source |
   +---#----+
       #
       #
   +---#--+ A  B +-----------+ C  E +-----------+ G  I +--------+
   | root <>----<>           <>----<>           <>----<> recv 2 |
   +------+      | router 1  |      | router 2  |      +--------+
                 |           |      |           |
                 +----<>-----+      +----<>-----+
                      D|                 H|
                      F|                 J|
                 +-----<>----+      +-----<>----+
                 |  recv 1   |      |  recv 3   |
                 +-----------+      +-----------+

                 <>      Interface
                 ------  Link

        Figure 1.  Example of a multicast forwarding tree

4. Functional Requirements

4.1. Topology discovery and monitoring

   The monitoring system SHOULD have mechanisms to collect topology
   information of the multicast forwarding trees for any given
   multicast group.  This function can be an integrated part of the
   monitoring system.  Alternatively, it might rely on other tools and
   protocols, such as mtrace [3] or MANTRA [7].  The topology
   information will be referenced by network operators to decide where
   to enable measurement sessions.

4.2. Performance measurement

   The performance metrics that a monitoring node needs to collect
   include, but are not limited to, the following.

4.2.1. Loss rate

   The loss rate over a segment is the ratio of user packets not
   delivered to the total number of user packets transmitted over this
   segment during a given interval.  The number of user packets not
   delivered over a segment is the difference between the number of
   packets transmitted at the starting interface of the segment and
   the number received at the ending interface.  Loss rate is crucial
   for multimedia streaming services, such as IPTV and video/audio
   conferencing.

   The loss rate over any segment of a multicast forwarding path MUST
   be provided.  The measurement interval MUST be configurable.
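   As a non-normative illustration, the following Python sketch derives
   the loss rate from per-interface packet counts collected during one
   measurement interval.  The function and parameter names are
   hypothetical and are not part of these requirements.

      # Non-normative sketch: loss rate over a segment for one
      # measurement interval, from hypothetical per-group counters.
      def loss_rate(tx_at_start_if, rx_at_end_if):
          # tx_at_start_if: user packets of the monitored group sent
          #   by the segment's starting interface in the interval
          # rx_at_end_if:   user packets of the same group received
          #   at the segment's ending interface in the interval
          if tx_at_start_if == 0:
              return 0.0                  # no traffic to measure
          lost = tx_at_start_if - rx_at_end_if
          return lost / tx_at_start_if

   For example, with 10,000 packets transmitted at interface A and
   9,400 received at interface B, the loss rate over segment A-B is
   (10000 - 9400) / 10000 = 6%, which would violate the 5% IPTV bound
   mentioned in Section 1.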
4.2.2. One-way delay

   The one-way delay over a segment is the average time that user
   packets take to traverse this segment of the forwarding path during
   a given interval.  The time that a user packet takes to traverse a
   segment is the difference between the time when the packet leaves
   the starting interface of the segment and the time when the same
   packet arrives at the ending interface.  The one-way delay metric
   is essential for real-time interactive applications, such as
   video/audio conferencing and multiplayer gaming.

   It SHOULD be possible to measure the one-way delay over any segment
   of a multicast forwarding path.  The measurement interval MUST be
   configurable.

   To obtain accurate one-way delay measurement results, the two end
   monitoring nodes of the investigated segment might need to have
   their clocks synchronized.  The one-way delay metric for packet
   networks is described in RFC 2679 [13].

4.2.3. Jitter

   Jitter over a segment is the variation of one-way delay over this
   segment during a given interval.  The metric is of great importance
   for real-time streaming and interactive applications, such as IPTV
   and audio/video conferencing.

   It SHOULD be possible to measure one-way delay jitter over any
   segment of a multicast forwarding path.  The measurement interval
   MUST be configurable.

   As with one-way delay measurement, to obtain accurate jitter
   results, the clock frequencies at the two end monitoring nodes
   might need to be synchronized so that the clocks of the two systems
   proceed at the same pace.  A delay variation metric for packet
   networks is described in RFC 3393 [14].
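   As a non-normative illustration, the sketch below derives both
   metrics from per-packet timestamp pairs matched across the two ends
   of a segment.  It assumes synchronized clocks, as noted above, and
   computes jitter as the mean delay difference between consecutive
   packets, one common formulation of delay variation; all names are
   hypothetical.

      # Non-normative sketch: one-way delay and jitter for one
      # interval, from matched per-packet timestamps in seconds.
      from statistics import mean

      def delay_and_jitter(timestamp_pairs):
          # timestamp_pairs: [(t_leave_start_if, t_arrive_end_if), ...]
          # for user packets observed during one interval, with the
          # two nodes' clocks assumed synchronized.
          delays = [arrive - leave
                    for (leave, arrive) in timestamp_pairs]
          if not delays:
              return None, None        # no packets in this interval
          one_way_delay = mean(delays)
          # Jitter as the mean delay difference between consecutive
          # packets, in the spirit of the IPDV metric of RFC 3393.
          deltas = [abs(b - a) for a, b in zip(delays, delays[1:])]
          jitter = mean(deltas) if deltas else 0.0
          return one_way_delay, jitter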
4.2.4. Throughput

   The throughput of multicast traffic for a group over a segment is
   the average number of bytes of user packets of this multicast group
   transmitted over this segment per unit of time during a given
   interval.  This information might be useful for resource
   management.

   The throughput of multicast traffic over any segment of a multicast
   forwarding path MAY be measured.  The measurement interval MUST be
   configurable.

4.3. Measurement session management

   A measurement session refers to the period of time in which
   measurement of certain performance metrics is enabled over a
   segment of a multicast forwarding path or over a complete multicast
   forwarding path.  During a measurement session, the two end
   interfaces are said to be active.  When an interface is activated,
   it starts collecting statistics, such as the number or the
   timestamps of user packets that belong to the given multicast group
   and pass through the interface.  When both interfaces are
   activated, the measurement session starts.  During a measurement
   session, data from the two active interfaces are periodically
   correlated, and the performance metrics, such as loss rate or
   delay, are derived.  The correlation can be done either on the
   downstream interface, if the upstream interface passes its data to
   it, or on a third party, if the raw data from the two active
   interfaces are reported to it.  When one of the two interfaces is
   deactivated, the measurement session stops.

4.3.1. Segment vs. Path

   Network operators SHOULD be able to turn measurement sessions for
   specific performance metrics on or off, over either a segment of a
   multicast forwarding path or a complete multicast forwarding path,
   at any time.  For example, in Figure 1, a network operator can
   simultaneously turn on measurement sessions for loss rate over path
   A-B-D-F and segment A-B-C, as well as for jitter over segment
   C-E-G-I.  This feature allows network operators to zoom in on
   suspicious components when degradation or failure occurs.

4.3.2. Static vs. Dynamic configuration

   A measurement session can be configured statically.  In this case,
   network operators activate the two interfaces or configure their
   parameter settings on the relevant nodes, either manually or
   automatically through agents of a network management system (NMS).

   Optionally, a measurement session can be configured dynamically.
   In this case, an interface may coordinate with another interface on
   its forwarding path to start or stop a session.  Accordingly, the
   format and processing routines of the measurement session control
   packets need to be specified.  The delivery of such packets SHOULD
   be reliable, and it MUST be possible to secure their delivery.

4.3.3. Proactive vs. On-demand

   A measurement session can be started either proactively or on
   demand.  Proactive monitoring is either configured to be carried
   out periodically and continuously or preconfigured to act on
   certain events, such as alarm signals.  To save resources,
   operators may turn on proactive measurement sessions for critical
   performance metrics over the backbone segments of the multicast
   forwarding tree only.  This keeps the overall monitoring overhead
   minimal during normal network operations.

   In contrast to proactive monitoring, on-demand monitoring is
   initiated manually, and for a limited amount of time, to carry out
   diagnostics.  When network performance degradation or service
   disruption occurs, operators might turn on measurement sessions on
   demand over the segments of interest to facilitate fault
   localization.

4.4. Measurement result report

   The measurement results might be presented in two forms: reports or
   alarms.

4.4.1. Performance reports

   Performance reports contain streams of measurement data over a
   period of time.  A data collection agent MAY actively poll the
   monitoring nodes and collect the measurement reports from all
   active interfaces.  Alternatively, the monitoring nodes might be
   configured to upload the reports to specific data collection agents
   once the data become available.  To save bandwidth, the content of
   the reports might be aggregated and compressed.  It SHOULD be
   possible to configure the reporting period or control it with rate-
   limitation mechanisms (e.g., an exponentially increasing period),
   as sketched below.
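   A minimal non-normative sketch of one such rate-limitation scheme
   follows: the reporting period doubles after each upload, up to a
   cap, so that the steady-state reporting load stays bounded.  The
   callbacks are hypothetical placeholders for whatever collection and
   upload mechanisms an implementation chooses.

      # Non-normative sketch: exponentially increasing report period.
      import time

      def report_loop(collect_report, send_to_agent,
                      base_period=10.0, max_period=600.0):
          # collect_report(): gather aggregated, compressed report data
          # send_to_agent(r): upload one report to a collection agent
          period = base_period
          while True:
              send_to_agent(collect_report())
              time.sleep(period)
              period = min(period * 2.0, max_period)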
4.4.2. Exceptional alarms

   On the other hand, the active interfaces of a monitoring node, or a
   third party, MAY be configured to raise alarms when exceptional
   events, such as performance degradation or service disruption,
   occur.  Alarm thresholds and their management should be specified
   for each performance metric when the measurement session is
   configured on an interface.  During a measurement session, once the
   value of a performance metric exceeds its threshold, an alarm is
   raised and reported to the configured nodes.  To prevent a huge
   volume of alarms from overloading the management nodes or
   congesting the network, alarm suppression and aggregation
   mechanisms SHOULD be employed on the interfaces to limit the rate
   of alarm reports and the volume of data.
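   As a non-normative illustration, the sketch below combines a per-
   metric threshold check with a simple hold-down timer, so that at
   most one alarm per metric is reported within each suppression
   window.  The class and callback names are hypothetical.

      # Non-normative sketch: threshold alarm with hold-down
      # suppression for one performance metric.
      import time

      class AlarmGate:
          def __init__(self, threshold, hold_down=60.0):
              self.threshold = threshold    # e.g., 0.05 for 5% loss
              self.hold_down = hold_down    # seconds between alarms
              self.last_alarm = float("-inf")

          def check(self, value, raise_alarm):
              # raise_alarm(v): report the exceeded value to the NMS
              now = time.monotonic()
              if (value > self.threshold
                      and now - self.last_alarm >= self.hold_down):
                  self.last_alarm = now
                  raise_alarm(value)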
5. Design considerations

   To make the monitoring system feasible and optimal for an SP
   production network, the following considerations should be taken
   into account when designing the system.

5.1. Inline data-plane measurement

   Measurement results collected by probing packets might be biased or
   even totally irrelevant, for the following reasons: (1) probing
   packets collect sampled results only and might not capture the real
   statistical characteristics of the monitored user traffic;
   experiments have demonstrated that measurements sampled by probing
   packets, such as ping probes, might be incorrect if the sampling
   interval is too long; (2) probing packets introduce extra load onto
   the network; to improve accuracy, the sampling frequency has to be
   high enough, which in turn increases network overhead and further
   biases the measurement results; (3) probing packets are usually not
   in the same multicast group as user packets and might take a
   different forwarding path, given that equal-cost multi-path routing
   (ECMP) and link aggregation (LAG) have been widely adopted in SP
   networks.  An out-of-band probing packet might take a path totally
   different from that of the user packets of the multicast group it
   is monitoring.  Even if the forwarding path is the same,
   intermediate nodes might apply different queuing and scheduling
   strategies to the probing packets.  As a result, the measured
   results might be irrelevant.

   The performance measurement should be "inline" in the sense that
   the measurement statistics are derived directly from user packets
   instead of probing packets.  At the same time, unlike offline
   packet analysis, the measurement counts user packets at line speed
   in real time, without any packet duplication or buffering.

   To accomplish inline measurement, some extra packets might need to
   be injected into the user traffic to coordinate measurement across
   nodes.  The volume of these packets SHOULD be kept minimal, such
   that their injection does not impact measurement accuracy.

5.2. Scalability

   The measurement methodology and system architecture MUST be
   scalable.  A multicast network for an SP production network usually
   comprises thousands of nodes.  Given this scale, the collection,
   processing, and reporting overhead of performance measurement data
   SHOULD NOT overwhelm either the monitoring nodes or the management
   nodes.  The volume of reporting traffic should be reasonable and
   not cause any network congestion.

5.3. Robustness

   The measurements MUST be independent of failures of the underlying
   multicast network.  For example, the monitoring system SHOULD
   generate correct measurement results even if some measurement
   coordination packets are lost; invalid performance reports should
   be identifiable in case the underlying multicast network is
   undergoing drastic changes.

   If dynamic configuration is supported, the delivery of measurement
   session control packets SHOULD be reliable, so that measurement
   sessions can be started, ended, and performed in a predictable
   manner.  Meanwhile, the control packets SHOULD NOT be delivered
   based on the multicast routing decision.  This multicast-
   independent characteristic guarantees that the active interfaces
   remain under control even if the multicast service is
   malfunctioning.

   Similarly, if an NMS is used to control the monitoring nodes
   remotely, the communication between the monitoring nodes and the
   NMS SHOULD be reliable.

5.4. Security

   The monitoring system MUST NOT impose security risks on the
   network.  For example, the monitoring nodes should be prevented
   from being exploited by third parties to control measurement
   sessions arbitrarily, which might make the nodes vulnerable to DDoS
   attacks.

   If dynamic configuration is supported, the measurement session
   control packets need to be encrypted and authenticated.

5.5. Device flexibility

   Both the software and hardware deployment requirements for the
   monitoring system SHOULD be reasonable.  For example, one-way delay
   measurement needs clock synchronization across nodes.  Requiring
   the installation of expensive hardware clock synchronization
   devices on all monitoring nodes might be too costly and make the
   monitoring system infeasible for large deployments.

   The monitoring system SHOULD be incrementally deployable, which
   means that the system can enable monitoring functionality even if
   some of the nodes in the network are not equipped with the required
   software and hardware or do not meet the software and hardware
   deployment requirements.

   Non-monitoring nodes, i.e., nodes without the monitoring
   capabilities, SHOULD be able to coexist with monitoring nodes and
   function normally.  The packets exchanged between monitoring nodes
   SHOULD be transparent to other nodes and MUST NOT cause any
   malfunction of the non-monitoring nodes.

5.6. Extensibility

   The system should be easy to extend with new functionality.  For
   example, it should be easy to extend the system to collect newly
   defined performance metrics.

6. Security Considerations

   Security issues have been taken into account in the design
   considerations (see Section 5.4).

7. IANA Considerations

   There is no IANA action required by this draft.

8. References

8.1. Normative References

   [1]  Bradner, S., "Key words for use in RFCs to Indicate
        Requirement Levels", BCP 14, RFC 2119, March 1997.

8.2. Informative References

   [2]  Venaas, S., "Multicast Ping Protocol",
        draft-ietf-mboned-ssmping-07, December 2008.

   [3]  Asaeda, H., Jinmei, T., Fenner, W., and S. Casner, "Mtrace
        Version 2: Traceroute Facility for IP Multicast",
        draft-ietf-mboned-mtrace-v2-03, March 2009.

   [4]  Almeroth, K., Wei, L., and D. Farinacci, "Multicast
        Reachability Monitor (MRM)", draft-ietf-mboned-mrm-01, July
        2000.

   [5]  Bacher, D., Swan, A., and L. Rowe, "rtpmon: a third-party RTCP
        monitor", Proceedings of the 4th ACM International Conference
        on Multimedia, 1997.

   [6]  Sarac, K. and K. Almeroth, "Application Layer Reachability
        Monitoring for IP Multicast", Computer Networks Journal,
        Vol. 48, No. 2, pp. 195-213, June 2005.

   [7]  Rajvaidya, P., Almeroth, K., and k. claffy, "A Scalable
        Architecture for Monitoring and Visualizing Multicast
        Statistics", IFIP/IEEE Workshop on Distributed Systems:
        Operations & Management (DSOM), Austin, Texas, USA, December
        2000.

   [8]  Sharma, P., Perry, E., and R. Malpani, "IP Multicast
        Operational Network Management: Design, Challenges and
        Experiences", IEEE Network, Volume 17, Issue 2, pp. 49-55,
        Mar/Apr 2003.

   [9]  Al-Shaer, E. and Y. Tang, "MRMON: Remote Multicast
        Monitoring", NOMS, 2004.

   [10] Sarac, K. and K. Almeroth, "Supporting Multicast Deployment
        Efforts: A Survey of Tools for Multicast Monitoring", Journal
        of High Speed Networks, Vol. 9, No. 3-4, pp. 191-211, 2000.

   [11] Sarac, K. and K. Almeroth, "Monitoring IP Multicast in the
        Internet: Recent Advances and Ongoing Challenges", IEEE
        Communications Magazine, 2005.
Almeroth, "Monitoring IP Multicast in the 580 Internet: Recent Advances and Ongoing Challenges", Journal IEEE 581 Communication Magazine, 2005. 583 [12] Vit Novotny, Dan Komosny, "Optimization of Large-Scale RTCP 584 Feedback Reporting in Fixed and Mobile Networks," icwmc, pp.85, 585 Third International Conference on Wireless and Mobile 586 Communications (ICWMC'07), 2007 588 [13] Almes, G., Kalidindi,S. and M. Zekauskas, "A One-way Delay 589 Metric for IPPM", RFC 2679, September 1999 591 [14] Almes, G., Kalidindi,S. and M. Zekauskas, "A Round-trip Delay 592 Metric for IPPM", RFC 2681, September 1999. 594 9. Acknowledgments 596 The authors would like to thank Wei Cao, Xinchun Guo, and Hui Liu for 597 their helpful comments and discussions. 599 Authors' Addresses 601 Alberto Tempia Bonda 602 Telecom Italia 603 Via Reiss Romoli, 274 604 Torino 10148 605 Italy 607 Email: alberto.tempiabonda@telecomitalia.it 609 Giovanni Picciano 610 Telecom Italia 611 Via Di Val Cannuta 250 612 Roma 00166 613 Italy 615 Email: giovanni.picciano@telecomitalia.it 617 Mach(Guoyi) Chen 618 Huawei Technologies Co., Ltd 619 No. 3 Xinxi Road, Shang-di, Hai-dian District 620 Beijing 100085 621 China 623 Email: mach@huawei.com 625 Lianshu Zheng 626 Huawei Technologies Co., Ltd 627 No. 3 Xinxi Road, Shang-di, Hai-dian District 628 Beijing 100085 629 China 631 Email: verozheng@huawei.com