Network Working Group                                        M. Boucadair
Internet-Draft                                               C. Jacquenet
Intended status: Informational                             France Telecom
Expires: August 22, 2013                                February 18, 2013

     Large scale Measurement of Access network Performance (LMAP):
      Requirements and Issues from a Network Provider Perspective
                 draft-boucadair-lmap-considerations-00

Abstract

   This document raises several points related to the ongoing LMAP
   (Large scale Measurement of Access network Performance) effort.
   The goal is to help define the scope of the LMAP effort and its
   expected contribution.

Status of this Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF).  Note that other groups may also distribute
   working documents as Internet-Drafts.  The list of current
   Internet-Drafts is at http://datatracker.ietf.org/drafts/current/.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."

   This Internet-Draft will expire on August 22, 2013.

Copyright Notice

   Copyright (c) 2013 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document.  Code Components extracted from this
   document must include Simplified BSD License text as described in
   Section 4.e of the Trust Legal Provisions and are provided without
   warranty as described in the Simplified BSD License.

Table of Contents

   1.  Introduction
   2.  Discussion
       2.1.  Service-Specific Measurement
       2.2.  Distorting Measurement Results
       2.3.  On the Impact of Policies
       2.4.  Classes of Service
       2.5.  Pending Questions
   3.  Security Considerations
   4.  IANA Considerations
   5.  Acknowledgments
   6.  Informative References
   Authors' Addresses

1.  Introduction

   Service Assurance and Fulfilment is a critical component of the
   service management environment.  Within ISP organizations,
   dedicated organizational and functional structures are put in place
   to efficiently monitor and assess both the overall quality of
   deployed services and the service quality as perceived by
   end-users.

   Based on this monitoring, appropriate actions can be taken to solve
   encountered problems and return any disrupted service to normal
   operation.  Various tools (e.g., probes and reporting tools) are
   deployed to continuously provide feedback on the status of running
   services and to notify managers about operational issues.

   For the sake of efficient day-to-day operations, an ISP should
   implement Service Fulfilment functions that are responsible for
   checking whether the services delivered to users are consistent
   with what has been subscribed to and possibly negotiated.  The
   output of these functions may also feed Service Assurance
   functions.

   The ISP should be able to continuously (preferably in real time)
   measure and control the level of quality associated with the
   services delivered to its customers.  Indeed, network anomalies
   such as node outages, link failures, and routing disruptions, as
   well as the resulting overall service performance degradation,
   should be dynamically reported to the appropriate management
   structures (such as a Network Operations Center).

   Ideally, any issue should be solved (or at least detected and
   handled as quickly as possible) before customers complain.
   Improvements to current practices should be investigated, both to
   enhance the quality of experience as perceived by end-users and to
   speed up repair processes whenever a network or service anomaly is
   detected.

   Within this context, introducing a high level of automation in the
   global service delivery and operation chains is promising.  This
   does not mean zero-fault networking: automation is rather meant to
   optimize communication between the different actors of the service
   delivery chain and to guarantee overall consistency between the
   different management tools.

   Customers should have the ability to check the fulfilment of the
   Connectivity Provisioning Profile (CPP,
   [I-D.boucadair-connectivity-provisioning-profile]) they have
   subscribed to (and possibly negotiated with the service provider).
   They could thus evaluate how the Service Provider has delivered the
   service against what has been defined in the service agreement.
   Customer- or service-specific indicators and related performance
   metrics should be accessible to customers so that they can assess
   the level of quality associated with the services they have
   subscribed to.  These data should be updated on a regular basis to
   adequately reflect the actual status of any service.  These
   indicators (including combinations thereof) should be described and
   listed in the agreement (see Section 2.13 of
   [I-D.boucadair-connectivity-provisioning-profile]).

   The Large scale Measurement of Access network Performance (LMAP)
   effort can be seen as a tool, supported by the Service Assurance
   functional block and provided to customers, to assess whether the
   services they have subscribed to comply with what has been defined
   in the service level agreement (including, for example, the
   technical parameters exposed in a CPP template).
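   As an illustration of the conformance checking sketched above (and
   not part of any specification), comparing collected indicators
   against subscribed CPP clauses can be expressed as follows; the
   metric names, units, and threshold values are hypothetical
   assumptions:

```python
# Hypothetical sketch: check measured indicators against CPP-style
# clauses subscribed to by a customer.  Metric names, units, and
# thresholds below are illustrative assumptions, not part of any spec.

# CPP-like clauses the customer subscribed to (illustrative).
subscribed_cpp = {
    "one_way_delay_ms": {"max": 50.0},
    "loss_ratio":       {"max": 0.001},
    "availability":     {"min": 0.999},
}

def check_fulfilment(measured, clauses):
    """Return the list of clauses violated by the measured indicators."""
    violations = []
    for metric, bounds in clauses.items():
        value = measured.get(metric)
        if value is None:
            # A metric that was not returned is flagged rather than
            # silently ignored (see also Q4 in Section 2.5).
            violations.append((metric, "not measured"))
            continue
        if "max" in bounds and value > bounds["max"]:
            violations.append((metric, f"{value} > max {bounds['max']}"))
        if "min" in bounds and value < bounds["min"]:
            violations.append((metric, f"{value} < min {bounds['min']}"))
    return violations

measured = {"one_way_delay_ms": 62.0, "loss_ratio": 0.0004,
            "availability": 0.9995}
print(check_fulfilment(measured, subscribed_cpp))
# The delay clause is violated; the other clauses are honored.
```

   A real system would of course obtain the measured values from LMAP
   agents rather than from a static dictionary; the point is only that
   conformance is assessed clause by clause against the agreement.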
   As discussed in [I-D.boucadair-connectivity-provisioning-profile],
   performance metrics are not the only relevant indicators to
   characterize the connectivity service delivered to the customer;
   other important technical clauses (e.g., reachability scope,
   traffic conformance, and availability) also need to be taken into
   account.

   Providing customers with tools that can help them better
   characterize the level of quality associated with the delivery of
   any service (or combination thereof) they have subscribed to is
   likely to enhance their overall quality of experience.  Such tools
   would consequently also improve the overall efficiency of service
   operation (e.g., by reducing the number of calls placed to online
   support whenever a problem is pro-actively reported to the
   customer).

   This document discusses several questions to be considered when
   designing such tools.

   This document makes use of the terms defined in
   [I-D.morton-ippm-lmap-path].

2.  Discussion

2.1.  Service-Specific Measurement

   Various service offerings (e.g., IPTV, VoD, Internet access, and
   VoIP) can be delivered to the same customer.  All these services
   rely upon devices that are involved in the forwarding of the
   corresponding service-specific traffic.

   These services are not restricted to the basic IP connectivity
   service but also include advanced features.  The technical clauses
   that document the IP connectivity component of these services may
   vary from one service to another (e.g., global reachability can be
   provided for the Internet service, while the IP connectivity
   service is restricted to the first SBE/DBE (Session Border
   Element/Data Border Element) for VoIP services).

   Furthermore, some of these services may be delivered over dedicated
   "virtual" channels (e.g., distinct VCs or addresses can be used for
   each service).
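   As a purely illustrative sketch (all identifiers below are
   hypothetical), binding each service to the facilities actually used
   to deliver it could look as follows, so that every probe result is
   tagged with its service-specific scope:

```python
# Hypothetical sketch of service-specific measurement scoping: each
# service is bound to the facilities (target, DSCP, virtual channel)
# used to deliver it, so that a probe result is only meaningful for
# that service.  All names and values are illustrative assumptions.

measurement_scopes = {
    "internet": {"target": "any global destination", "dscp": 0,
                 "channel": "default"},
    "voip":     {"target": "first SBE/DBE", "dscp": 46,
                 "channel": "voice-vc"},
    "iptv":     {"target": "head-end", "dscp": 34,
                 "channel": "video-vc"},
}

def scope_for(service):
    """Return the measurement scope bound to a service."""
    try:
        return measurement_scopes[service]
    except KeyError:
        raise ValueError(f"no measurement scope defined for {service!r}")

def tag_result(service, metric, value):
    """Attach the service-specific scope to a raw measurement result."""
    scope = scope_for(service)
    return {"service": service, "metric": metric, "value": value,
            "target": scope["target"], "dscp": scope["dscp"],
            "channel": scope["channel"]}

print(tag_result("voip", "rtt_ms", 23.5))
```

   The design choice illustrated here is simply that a measurement
   carries its scope with it: a VoIP delay sample measured toward the
   first SBE/DBE over the voice channel says nothing about the
   Internet service, and vice versa.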
   Assessing whether the delivered service complies with what has been
   subscribed to by the customer suggests that measurement actions
   should be specific to the communication facilities (forwarding
   paths, virtual channels, tunnels, etc.) used to deliver the service
   to the customer.

2.2.  Distorting Measurement Results

   Some services may rely on several components provided by distinct
   administrative entities.  For instance, the DNS service may not be
   provided by the same operator that provides the IP connectivity.
   The level of quality associated with the delivery of a service may
   therefore be affected (e.g., because DNS resolution takes longer
   than expected) even if the traffic performance clauses are honored
   by the network provider.

   The LMAP system should be designed to accommodate such deployment
   scenarios.

2.3.  On the Impact of Policies

   Issues can be experienced when a customer tries to reach a subset
   of destinations.  These issues are not necessarily due to
   performance degradation in the local network; they may instead be
   caused by policies enforced in the destination networks (e.g., some
   government content cannot be accessed from some networks because of
   a security policy enforced by the government), possibly making it
   impossible to deliver the service to those networks.

   The measurement system should be designed to accommodate such
   contexts.

2.4.  Classes of Service

   Traffic prioritization is used to deliver some services; as such,
   measurements should be bound to the QoS class used for a given
   service.

   In some networks, DSCP marking inheritance mechanisms are used to
   make sure customers cannot inject traffic that belongs to an
   unauthorized or unsupported class of service.  The proposed
   measurement framework should be designed to handle such designs.

2.5.  Pending Questions

   Additional considerations should be taken into account, as captured
   by the following questions:

   Q1:  How is the measurement scope determined?  How is a measurement
        scope characterized?

   Q2:  Should inter-domain measurement be in scope?

   Q3:  If so, which inter-domain paths should be used to conduct
        measurement campaigns?  Paths used for measurement may not be
        those used to forward service data.

   Q4:  Which metrics should be used?  How do contributing agents
        negotiate the metric to be used?  What measurement methodology
        applies (e.g., frequency of measurement requests)?  What
        methodology is used to aggregate results?  What approach
        should be followed if a metric is not returned from a given
        network segment?  How can the system accommodate metrics that
        may not be supported by all devices along the whole forwarding
        path?

   Q5:  How are measurement and testing methodologies shared between
        the involved parties (e.g., between two service providers)?
        Should respective responsibilities be negotiated?

   Q6:  How is time synchronization ensured?

   Q7:  How can a measurement system dynamically discover the
        measuring entities of a single domain?  Across several
        domains?

   Q8:  How can a network be detected as LMAP-compliant?  How is an
        LMAP client configured with LMAP server information?

   Q9:  How is the accuracy of collected data guaranteed?

   Q10: How is access to measurement results controlled?  How can
        measurement results be prevented from being revealed to
        external parties?

   Q11: How are collected data mapped to the technical clauses
        included in a contract/agreement (e.g., a CPP)?

   Q12: Flash crowd issues: to what extent can measurement traffic
        impact the delivered service during a crisis (e.g., an
        overload situation in some regions of the LMAP domain, where
        an LMAP domain is an administrative entity composed of LMAP-
        capable nodes operated by a single structure)?
   Q13: How can it be ensured that the entities involved in a
        measurement do not dramatically affect the accuracy of that
        measurement (as per the Heisenberg principle)?  Which
        procedure should be applied to control the reliability of LMAP
        agents?

   Q14: How can it be ensured that measurement data are not impacted
        by the home network itself or by the machine embedding the
        measurement agent?

   Q15: How can a network provider instruct an LMAP agent to hold its
        requests to prevent network congestion (e.g., to avoid link
        overload)?

   Q16: How can it be ensured that measurement data accurately reflect
        the network performance and not the policies enforced in that
        network?

   Q17: Can the LMAP system be used to assess the level of
        connectivity service delivered to customers?  The system could
        be embedded in robots enabled in the access segment to emulate
        the behavior of a connected device.  How can LMAP accommodate
        such a deployment use case?

   Q18: To what extent will a set of measurement actions conducted at
        time T0 reflect the actual traffic performance experienced
        when the subscribed service is invoked?

   Q19: How does path diversity impact measurements?

   Q20: How is the system designed to ensure topology hiding?

3.  Security Considerations

   TBC.

4.  IANA Considerations

   This document does not require any action from IANA.

5.  Acknowledgments

   TBC.

6.  Informative References

   [I-D.boucadair-connectivity-provisioning-profile]
              Boucadair, M., Jacquenet, C., and N. Wang, "IP/MPLS
              Connectivity Provisioning Profile",
              draft-boucadair-connectivity-provisioning-profile-02
              (work in progress), September 2012.

   [I-D.morton-ippm-lmap-path]
              Bagnulo, M., Burbridge, T., Crawford, S., Eardley, P.,
              and A. Morton, "A Reference Path and Measurement Points
              for LMAP", draft-morton-ippm-lmap-path-00 (work in
              progress), January 2013.
Authors' Addresses

   Mohamed Boucadair
   France Telecom
   Rennes, 35000
   France

   Email: mohamed.boucadair@orange.com

   Christian Jacquenet
   France Telecom
   Rennes, 35000
   France

   Email: christian.jacquenet@orange.com