Network Working Group                                         M. Bagnulo
Internet-Draft                                                      UC3M
Intended status: Standards Track                             B. Trammell
Expires: August 11, 2013                                      ETH Zurich
                                                        February 7, 2013

                    An LMAP application for IPFIX
                      draft-bagnulo-lmap-ipfix-00

Abstract

   This document explores the possibility of using IPFIX to report test
   results from a Measurement Agent to a Collector, in the context of a
   large measurement platform.

Status of this Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF).  Note that other groups may also distribute
   working documents as Internet-Drafts.  The list of current Internet-
   Drafts is at http://datatracker.ietf.org/drafts/current/.

   Internet-Drafts are draft documents valid for a maximum of six months
   and may be updated, replaced, or obsoleted by other documents at any
   time.  It is inappropriate to use Internet-Drafts as reference
   material or to cite them other than as "work in progress."

   This Internet-Draft will expire on August 11, 2013.

Copyright Notice

   Copyright (c) 2013 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with respect
   to this document.  Code Components extracted from this document must
   include Simplified BSD License text as described in Section 4.e of
   the Trust Legal Provisions and are provided without warranty as
   described in the Simplified BSD License.

Table of Contents

   1.  Introduction
   2.  Using IPFIX to report test results
   3.  Example: UDP latency test
   4.  Discussion
   5.  What standardization is needed for this?
   6.  Security considerations
   7.  IANA Considerations
   8.  Acknowledgements
   9.  References
     9.1.  Normative References
     9.2.  Informative References
   Authors' Addresses

1.  Introduction

   A Large scale Measurement Platform (LMP) is composed of the
   following fundamental elements: a set of Measurement Agents (MAs),
   one or more Controllers, and one or more Collectors.  Any given
   platform may contain additional elements, but these three are
   present in all of them.  The MAs are pieces of code that run either
   on specialized hardware (hardware probes) or on general-purpose
   devices such as PCs, laptops, or mobile phones (software probes).
   The MAs run tests against other MAs distributed across the Internet.
   Typically, most of the MAs are located in end-user networks and a
   few MAs are located deep in the ISP network, and tests are typically
   executed from the MAs in the periphery towards the MAs located in
   the core.  The Controller is the element that controls the MAs,
   informing them of which tests to perform and when to perform them.
   The protocol between the Controller and the MA is called the Control
   protocol.  After performing the tests, the MAs send the data about
   the results of the tests performed to the Collector.  The protocol
   used to report test result data from the MA to the Collector is
   called the Report protocol.  In this document we explore the
   possibility of using IPFIX [RFC5101] as a Report protocol for large
   scale measurement platforms.
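   Since IPFIX [RFC5101] is a binary export protocol, it may help to
   recall how lightweight its entry point is: every IPFIX message
   begins with a fixed 16-octet header.  The following sketch (not part
   of the proposal; the MA identifier value 42 is invented for
   illustration) packs such a header, anticipating the use of the
   Observation Domain ID to carry the MA identifier discussed later in
   this document:

```python
import struct
import time

IPFIX_VERSION = 10  # version number of the IPFIX protocol per RFC 5101


def ipfix_message_header(length, export_time, sequence, obs_domain_id):
    """Pack the fixed 16-octet IPFIX message header in network byte order:
    version (2), total message length (2), export time (4, UNIX seconds),
    sequence number (4), observation domain ID (4)."""
    return struct.pack("!HHIII", IPFIX_VERSION, length, export_time,
                       sequence, obs_domain_id)


# Hypothetical example: an empty message from the MA with identifier 42,
# carried in the Observation Domain ID field.
hdr = ipfix_message_header(length=16, export_time=int(time.time()),
                           sequence=0, obs_domain_id=42)
assert len(hdr) == 16
```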
   In IPFIX terminology [RFC5470], the MA encompasses both the Metering
   Process (MP) and the Exporting Process (EP), while the Collector is
   the Collecting Process (CP).  IPFIX is used between the EP/MA and
   the Collector/CP.  We propose LMAP as an application of IPFIX per
   [I-D.ietf-ipfix-ie-doctors].

   Some considerations about the use of IPFIX for LMP:

   o  Separation between Control and Report protocols: Within a single
      measurement platform, different protocols can be used for Control
      and Report, though they must share a common vocabulary
      representing the measurements to be performed.  In particular, if
      a platform implements IPFIX as a Report protocol, it must
      implement a different protocol (e.g., NETCONF or another) as a
      Control protocol.

   o  Report protocol diversity: Some platforms may use IPFIX as a
      Report protocol, while other platforms may decide to use other
      protocols (e.g., the Broadband Forum architecture may decide to
      use a different one).  We believe that it is important to support
      this protocol diversity.  A key element to support such diversity
      is an independent metric registry (see
      [I-D.bagnulo-ippm-new-registry-independent]) where values for
      metric identifiers are recorded independently of the Control
      and/or Report protocol used.  This affects how we use IPFIX as a
      Report protocol, as presented in this document.

   o  Minimal versus full IPFIX implementation: A key benefit of IPFIX
      is that, while it was designed to be used in core routers and
      full-featured measurement machines, the unidirectional nature of
      the protocol and its simple wire format make minimal
      implementations of Exporting Processes possible.  These minimal
      implementations are well suited to small-scale MAs (such as a
      mobile app or a process running in a home router), which only
      need to know about the specific Templates supporting the
      metric(s) to be reported.

2.  Using IPFIX to report test results

   In order to use IPFIX to report test results from the MA to the
   Collector, we first need to understand what information needs to be
   conveyed.  The information transmitted by the MA to the Collector
   when reporting test results is the following:

   o  Information about the MA: in particular, an MA identifier.

   o  Information about the time of the report: when the report was
      sent (not necessarily when the test was performed).

   o  Information describing the test.  This includes:

      *  An identifier of the metric used for the test (see the Metric
         registry of [I-D.bagnulo-ippm-new-registry-independent]).

      *  An identifier of the scheduling strategy used to perform the
         test (see the Scheduling registry of
         [I-D.bagnulo-ippm-new-registry-independent]) and potential
         input parameters for the schedule, such as the rate.

      *  An identifier of the output format (see the Output Type
         registry of [I-D.bagnulo-ippm-new-registry-independent]).

      *  An identifier of the environment, notably whether or not
         cross traffic was present during the execution of the test
         (see the Environment registry of
         [I-D.bagnulo-ippm-new-registry-independent]).

      *  The input parameters for the test, such as source IP address,
         destination IP address, source and destination ports, and so
         on.

   o  Information describing the test results.  This varies widely
      with each test, but can include the time each packet was sent
      and received, the number of sent and lost packets, or other
      information.

   We next explore how we can encode this information in IPFIX.

   In order to convey test information using IPFIX, we will naturally
   use the IPFIX message format, and we will define a Template
   describing the records containing the test result data.  We will
   re-use as many already-defined Information Elements (IEs) as
   possible, and we will identify the new IEs that are needed.
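   To make the Template mechanism concrete, the following sketch
   encodes an IPFIX Template Set for the kind of record described in
   this document.  The standard IEs use their IANA-assigned numbers;
   the numbers for the five new IEs are invented placeholders, since no
   values have been assigned:

```python
import struct

# (IE number, field length in octets) pairs for one test-result record.
# The first five IE numbers are HYPOTHETICAL placeholders for the new
# elements proposed here; the remaining six are IANA-assigned IPFIX IEs.
FIELDS = [
    (1001, 2),  # metricIdentifier   (placeholder, not assigned)
    (1002, 2),  # testSchedule       (placeholder, not assigned)
    (1003, 4),  # scheduleRate       (placeholder, not assigned)
    (1004, 2),  # outputType         (placeholder, not assigned)
    (1005, 2),  # testEnvironment    (placeholder, not assigned)
    (8, 4),     # sourceIPv4Address
    (12, 4),    # destinationIPv4Address
    (7, 2),     # sourceTransportPort
    (11, 2),    # destinationTransportPort
    (152, 8),   # flowStartMilliseconds
    (153, 8),   # flowEndMilliseconds
]


def template_set(template_id, fields):
    """Encode an IPFIX Template Set: a 4-octet Set header (Set ID 2)
    followed by one Template Record listing the field specifiers."""
    record = struct.pack("!HH", template_id, len(fields))
    for ie_number, field_len in fields:
        record += struct.pack("!HH", ie_number, field_len)
    set_length = 4 + len(record)
    return struct.pack("!HH", 2, set_length) + record


tset = template_set(256, FIELDS)  # Template IDs start at 256
```

   A minimal MA only needs to emit this one Template Set (and matching
   Data Sets); it never has to parse arbitrary Templates.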
   Part of the information can be conveyed using the fields in the
   IPFIX header, namely:

   o  Information about the MA: In order to convey the MA identifier,
      we can use the Observation Domain ID field present in the IPFIX
      header.  This would allow up to 2^32 MAs, which seems
      sufficient.

   o  Information about the time of the report: The IPFIX header
      contains an Export Time field that can be used to convey this
      information.

   The information describing the test is included in a Template Set
   that contains an IE for each of the different pieces of information
   we need to convey.  This includes:

   o  An identifier of the metric used for the test.  In order to
      convey this we need to define a new IE; let's call it
      metricIdentifier.  The values for this element will be the
      values registered in the Metric registry of
      [I-D.bagnulo-ippm-new-registry-independent].

   o  An identifier of the scheduling strategy used to perform the
      test.  Again, this will be a new IE, called testSchedule, and
      its values will be the values defined in the Scheduling registry
      of [I-D.bagnulo-ippm-new-registry-independent].  For the
      potential input parameters of the schedule, such as the rate, we
      probably need a new IE for each parameter.  The usual scheduling
      distributions only require a rate, so we can define a new IE
      called scheduleRate whose value will contain the rate for the
      requested distribution.

      *  NOTE: The distribution could in some cases be extracted from
         the results; for example, if the results contain each packet
         sent, it would be easy to spot a periodic schedule.  This is
         probably not so obvious for a Poisson schedule.  Maybe this
         would be an optional element, carried only when it is not
         possible to extract the distribution from the test results.

   o  An identifier of the output format.
      A new IE, outputType, is needed for this, and it would take
      values from the Output Type registry of
      [I-D.bagnulo-ippm-new-registry-independent].  Some of the output
      formats require an additional input, like the percentile used to
      trim the outliers when computing means.  There are two
      approaches here.  One approach is for the Output Type registry
      to create different entries for the different percentiles (e.g.,
      one entry for the 95th-percentile mean and another one for the
      90th-percentile mean).  This may increase the number of entries
      in the Output Type registry, but since there are not too many
      usual values, it is likely to be manageable.  The other approach
      is to define an additional IE, for instance a percentile IE,
      that will carry the values for the different percentiles used in
      the output.

   o  An identifier of the environment, notably whether or not cross
      traffic was present during the execution of the test.  Again, a
      new IE, testEnvironment, is needed for this.  It will take
      values from the Environment registry of
      [I-D.bagnulo-ippm-new-registry-independent].

   o  The input parameters for the test.  Most of these can be
      expressed using existing IEs, such as sourceIPv4Address,
      destinationIPv4Address, etc.

   Information describing the test results varies widely with each
   test, but can include the time each packet was sent and received,
   the number of sent and lost packets, or other information.  Again,
   most of this can be expressed using existing IEs, and new ones can
   be defined if needed for a particular test.

3.  Example: UDP latency test

   Let's consider the example of UDP latency.  Suppose an MA wants to
   report the results of a UDP latency test, performed from its own IP
   address (e.g., 192.0.2.1) to a destination IP address (e.g.,
   203.0.113.1), using source port 23677 and destination port 34567.
   The test is performed using a periodic schedule with a rate of 1
   packet per second for 3 seconds, starting at 10:00 CET.  The test
   was performed without cross traffic, and the output type is raw.

   The Template Set for this would be:

      metricIdentifier
      testSchedule
      scheduleRate
      outputType
      testEnvironment
      sourceIPv4Address
      destinationIPv4Address
      sourceTransportPort
      destinationTransportPort
      flowStartMilliseconds
      flowEndMilliseconds

   The Data Set following this Template for the example would be:

      metricIdentifier = UDP_Latency as per
         [I-D.bagnulo-ippm-new-registry-independent]
      testSchedule = Periodic as per
         [I-D.bagnulo-ippm-new-registry-independent]
      scheduleRate = 1
      outputType = Raw as per
         [I-D.bagnulo-ippm-new-registry-independent]
      testEnvironment = No-cross-traffic as per
         [I-D.bagnulo-ippm-new-registry-independent]
      sourceIPv4Address = 192.0.2.1
      destinationIPv4Address = 203.0.113.1
      sourceTransportPort = 23677
      destinationTransportPort = 34567
      flowStartMilliseconds = the timestamp corresponding to 10:00 CET
      flowEndMilliseconds = the timestamp corresponding to 10:00 CET
         plus 1 millisecond (assuming 1 ms of delay)
      ---------------------------
      metricIdentifier = UDP_Latency as per
         [I-D.bagnulo-ippm-new-registry-independent]
      testSchedule = Periodic as per
         [I-D.bagnulo-ippm-new-registry-independent]
      scheduleRate = 1
      outputType = Raw as per
         [I-D.bagnulo-ippm-new-registry-independent]
      testEnvironment = No-cross-traffic as per
         [I-D.bagnulo-ippm-new-registry-independent]
      sourceIPv4Address = 192.0.2.1
      destinationIPv4Address = 203.0.113.1
      sourceTransportPort = 23677
      destinationTransportPort = 34567
      flowStartMilliseconds = the timestamp corresponding to 10:00 CET
         plus 1 second
      flowEndMilliseconds = the timestamp corresponding to 10:00 CET
         plus 1 second plus 2 milliseconds (assuming 2 ms of delay)
      ---------------------------
      metricIdentifier = UDP_Latency as per
         [I-D.bagnulo-ippm-new-registry-independent]
      testSchedule = Periodic as per
         [I-D.bagnulo-ippm-new-registry-independent]
      scheduleRate = 1
      outputType = Raw as per
         [I-D.bagnulo-ippm-new-registry-independent]
      testEnvironment = No-cross-traffic as per
         [I-D.bagnulo-ippm-new-registry-independent]
      sourceIPv4Address = 192.0.2.1
      destinationIPv4Address = 203.0.113.1
      sourceTransportPort = 23677
      destinationTransportPort = 34567
      flowStartMilliseconds = the timestamp corresponding to 10:00 CET
         plus 2 seconds
      flowEndMilliseconds = the timestamp corresponding to 10:00 CET
         plus 2 seconds plus 1 millisecond (assuming 1 ms of delay)
      ---------------------------

4.  Discussion

   Overhead: As the previous example shows, all the data describing
   the test itself is repeated in every Data Record, resulting in
   increased overhead.  Since the data describing the test can be
   considered metadata about the test results, it would be possible to
   explore encoding it in Options Templates.

5.  What standardization is needed for this?

   In order to enable the use of IPFIX for LMP, the following pieces
   of standardization would be required:

   o  The definition of the metric registry.  This is not specific to
      IPFIX, as any other Report protocol is likely to require it, but
      having an independent registry enables multiple Report
      protocols.

   o  The definition of new IEs.  Some of them are identified above;
      others are likely to be needed as well.

   o  The definition of the Template Sets for each of the tests to be
      performed.
      This is necessary so that there is a defined Template that
      different vendors can implement: they can use the IPFIX format
      on the wire without needing to implement full IPFIX parsing of
      arbitrary Template Sets, just the ones associated with the
      relevant metrics.

6.  Security considerations

   TBD

7.  IANA Considerations

   TBD

8.  Acknowledgements

   We would like to thank Sam Crawford and Al Morton for input on
   early discussions of this draft.

9.  References

9.1.  Normative References

   [RFC5101]  Claise, B., "Specification of the IP Flow Information
              Export (IPFIX) Protocol for the Exchange of IP Traffic
              Flow Information", RFC 5101, January 2008.

   [RFC5470]  Sadasivan, G., Brownlee, N., Claise, B., and J. Quittek,
              "Architecture for IP Flow Information Export", RFC 5470,
              March 2009.

   [I-D.bagnulo-ippm-new-registry-independent]
              Bagnulo, M., Burbridge, T., Crawford, S., Eardley, P.,
              and A. Morton, "A registry for commonly used metrics.
              Independent registries",
              draft-bagnulo-ippm-new-registry-independent-00 (work in
              progress), January 2013.

9.2.  Informative References

   [I-D.ietf-ipfix-ie-doctors]
              Trammell, B. and B. Claise, "Guidelines for Authors and
              Reviewers of IPFIX Information Elements",
              draft-ietf-ipfix-ie-doctors-07 (work in progress),
              October 2012.

Authors' Addresses

   Marcelo Bagnulo
   Universidad Carlos III de Madrid
   Av. Universidad 30
   Leganes, Madrid  28911
   SPAIN

   Phone: 34 91 6249500
   Email: marcelo@it.uc3m.es
   URI:   http://www.it.uc3m.es

   Brian Trammell
   Swiss Federal Institute of Technology Zurich
   Gloriastrasse 35
   8092 Zurich
   Switzerland

   Email: trammell@tik.ee.ethz.ch