2 IPFIX Working Group B. Trammell 3 Internet-Draft ETH Zurich 4 Intended status: Standards Track A. Wagner 5 Expires: May 24, 2013 Consecom AG 6 B. Claise 7 Cisco Systems, Inc. 8 November 20, 2012 10 Flow Aggregation for the IP Flow Information Export (IPFIX) Protocol 11 draft-ietf-ipfix-a9n-08.txt 13 Abstract 15 This document provides a common implementation-independent basis for 16 the interoperable application of the IP Flow Information Export 17 (IPFIX) Protocol to the handling of Aggregated Flows, which are IPFIX 18 Flows representing packets from multiple Original Flows sharing some 19 set of common properties. It does this through a detailed 20 terminology and a descriptive Intermediate Aggregation Process 21 architecture, including a specification of methods for Original Flow 22 counting and counter distribution across intervals. 24 Status of this Memo 26 This Internet-Draft is submitted in full conformance with the 27 provisions of BCP 78 and BCP 79. 29 Internet-Drafts are working documents of the Internet Engineering 30 Task Force (IETF). Note that other groups may also distribute 31 working documents as Internet-Drafts. The list of current Internet- 32 Drafts is at http://datatracker.ietf.org/drafts/current/.
34 Internet-Drafts are draft documents valid for a maximum of six months 35 and may be updated, replaced, or obsoleted by other documents at any 36 time. It is inappropriate to use Internet-Drafts as reference 37 material or to cite them other than as "work in progress." 39 This Internet-Draft will expire on May 24, 2013. 41 Copyright Notice 43 Copyright (c) 2012 IETF Trust and the persons identified as the 44 document authors. All rights reserved. 46 This document is subject to BCP 78 and the IETF Trust's Legal 47 Provisions Relating to IETF Documents 48 (http://trustee.ietf.org/license-info) in effect on the date of 49 publication of this document. Please review these documents 50 carefully, as they describe your rights and restrictions with respect 51 to this document. Code Components extracted from this document must 52 include Simplified BSD License text as described in Section 4.e of 53 the Trust Legal Provisions and are provided without warranty as 54 described in the Simplified BSD License. 56 Table of Contents 58 1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . . 4 59 1.1. IPFIX Protocol Overview . . . . . . . . . . . . . . . . . 5 60 1.2. IPFIX Documents Overview . . . . . . . . . . . . . . . . . 5 61 2. Terminology . . . . . . . . . . . . . . . . . . . . . . . . . 7 62 3. Use Cases for IPFIX Aggregation . . . . . . . . . . . . . . . 9 63 4. Architecture for Flow Aggregation . . . . . . . . . . . . . . 11 64 4.1. Aggregation within the IPFIX Architecture . . . . . . . . 11 65 4.2. Intermediate Aggregation Process Architecture . . . . . . 15 66 4.2.1. Correlation and Normalization . . . . . . . . . . . . 17 67 5. IP Flow Aggregation Operations . . . . . . . . . . . . . . . . 19 68 5.1. Temporal Aggregation through Interval Distribution . . . . 19 69 5.1.1. Distributing Values Across Intervals . . . . . . . . . 20 70 5.1.2. Time Composition . . . . . . . . . . . . . . . . . . . 22 71 5.1.3. External Interval Distribution . . . . . . . . . . . . 22 72 5.2. Spatial Aggregation of Flow Keys . . . . . . . . . . . . . 23 73 5.2.1. Counting Original Flows . . . . . . . . . . . . . . . 24 74 5.2.2. Counting Distinct Key Values . . . . . . . . . . . . . 25 75 5.3. Spatial Aggregation of Non-Key Fields . . . . . . . . . . 26 76 5.3.1. Counter Statistics . . . . . . . . . . . . . . . . . . 26 77 5.3.2. Derivation of New Values from Flow Keys and 78 non-Key fields . . . . . . . . . . . . . . . . . . . . 26 79 5.4. Aggregation Combination . . . . . . . . . . . . . . . . . 27 80 6. Additional Considerations and Special Cases in Flow 81 Aggregation . . . . . . . . . . . . . . . . . . . . . . . . . 28 82 6.1. Exact versus Approximate Counting during Aggregation . . . 28 83 6.2. Delay and Loss introduced by the IAP . . . . . . . . . . . 28 84 6.3. Considerations for Aggregation of Sampled Flows . . . . . 28 85 6.4. Considerations for Aggregation of Heterogeneous Flows . . 29 86 7. Export of Aggregated IP Flows using IPFIX . . . . . . . . . . 30 87 7.1. Time Interval Export . . . . . . . . . . . . . . . . . . . 30 88 7.2. Flow Count Export . . . . . . . . . . . . . . . . . . . . 30 89 7.2.1. originalFlowsPresent . . . . . . . . . . . . . . . . . 30 90 7.2.2. originalFlowsInitiated . . . . . . . . . . . . . . . . 30 91 7.2.3. originalFlowsCompleted . . . . . . . . . . . . . . . . 31 92 7.2.4. deltaFlowCount . . . . . . . . . . . . . . . . . . . . 31 93 7.3. Distinct Host Export . . . . . . . . . . . . . . . . . . . 31 94 7.3.1. distinctCountOfSourceIPAddress . . . . . . . . . . . . 
31 95 7.3.2. distinctCountOfDestinationIPAddress . . . . . . . . . 32 96 7.3.3. distinctCountOfSourceIPv4Address . . . . . . . . . . . 32 97 7.3.4. distinctCountOfDestinationIPv4Address . . . . . . . . 32 98 7.3.5. distinctCountOfSourceIPv6Address . . . . . . . . . . . 32 99 7.3.6. distinctCountOfDestinationIPv6Address . . . . . . . . 33 100 7.4. Aggregate Counter Distribution Export . . . . . . . . . . 33 101 7.4.1. Aggregate Counter Distribution Options Template . . . 33 102 7.4.2. valueDistributionMethod Information Element . . . . . 34 103 8. Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . 37 104 8.1. Traffic Time-Series per Source . . . . . . . . . . . . . . 38 105 8.2. Core Traffic Matrix . . . . . . . . . . . . . . . . . . . 42 106 8.3. Distinct Source Count per Destination Endpoint . . . . . . 47 107 8.4. Traffic Time-Series per Source with Counter 108 Distribution . . . . . . . . . . . . . . . . . . . . . . . 49 109 9. Security Considerations . . . . . . . . . . . . . . . . . . . 52 110 10. IANA Considerations . . . . . . . . . . . . . . . . . . . . . 53 111 11. Acknowledgments . . . . . . . . . . . . . . . . . . . . . . . 54 112 12. References . . . . . . . . . . . . . . . . . . . . . . . . . . 55 113 12.1. Normative References . . . . . . . . . . . . . . . . . . . 55 114 12.2. Informative References . . . . . . . . . . . . . . . . . . 55 115 Authors' Addresses . . . . . . . . . . . . . . . . . . . . . . . . 57 117 1. Introduction 119 The assembly of packet data into Flows serves a variety of different 120 purposes, as noted in the requirements [RFC3917] and applicability 121 statement [RFC5472] for the IP Flow Information Export (IPFIX) 122 protocol [I-D.ietf-ipfix-protocol-rfc5101bis]. Aggregation beyond 123 the flow level, into records representing multiple Flows, is a common 124 analysis and data reduction technique as well, with applicability to 125 large-scale network data analysis, archiving, and inter-organization 126 exchange. This applicability in large-scale situations, in 127 particular, led to the inclusion of aggregation as part of the IPFIX 128 Mediators Problem Statement [RFC5982], and the definition of an 129 Intermediate Aggregation Process in the Mediator framework [RFC6183]. 131 Aggregation is used for analysis and data reduction in a wide variety 132 of applications, for example in traffic matrix calculation, 133 generation of time series data for visualizations or anomaly 134 detection, or data reduction for long-term trending and storage. 135 Depending on the keys used for aggregation, it may additionally have 136 an anonymizing effect on the data: for example, aggregation 137 operations which eliminate IP addresses make it impossible to later 138 directly identify nodes using those addresses. 140 Aggregation as defined and described in this document covers the 141 applications defined in [RFC5982], including 5.1 "Adjusting Flow 142 Granularity", 5.4 "Time Composition", and 5.5 "Spatial Composition". 143 However, Section 4.2 of this document specifies a more flexible 144 architecture for an Intermediate Aggregation Process than that 145 envisioned by the original Mediator work. Instead of a focus on 146 these specific limited use cases, the Intermediate Aggregation 147 Process is specified to cover any activity commonly described as 148 "flow aggregation". This architecture is intended to describe any 149 such activity without reference to the specific implementation of 150 aggregation.
152 An Intermediate Aggregation Process may be applied to data collected 153 from multiple Observation Points, as it is natural to use aggregation 154 for data reduction when concentrating measurement data. This 155 document specifically does not address the protocol issues that arise 156 when combining IPFIX data from multiple Observation Points and 157 exporting from a single Mediator, as these issues are general to 158 IPFIX Mediation; they are therefore treated in detail in the 159 Mediation Protocol document [I-D.ietf-ipfix-mediation-protocol]. 161 Since Aggregated Flows as defined in the following section are 162 essentially Flows, the IPFIX protocol 163 [I-D.ietf-ipfix-protocol-rfc5101bis] can be used to export, and the 164 IPFIX File Format [RFC5655] can be used to store, aggregated data 165 "as-is"; there are no changes necessary to the protocol. This 166 document provides a common basis for the application of IPFIX to the 167 handling of aggregated data, through a detailed terminology, 168 Intermediate Aggregation Process architecture, and methods for 169 Original Flow counting and counter distribution across intervals. 170 Note that sections 5, 6, and 7 of this document are normative. 172 1.1. IPFIX Protocol Overview 174 In the IPFIX protocol, { type, length, value } tuples are expressed 175 in Templates containing { type, length } pairs, specifying which { 176 value } fields are present in data records conforming to the 177 Template, giving great flexibility as to what data is transmitted. 178 Since Templates are sent very infrequently compared with Data 179 Records, this results in significant bandwidth savings. Various 180 different data formats may be transmitted simply by sending new 181 Templates specifying the { type, length } pairs for the new data 182 format. See [I-D.ietf-ipfix-protocol-rfc5101bis] for more 183 information. 185 The IPFIX Information Element Registry [iana-ipfix-assignments] 186 defines a large number of standard Information Elements which provide 187 the necessary { type } information for Templates. The use of 188 standard elements enables interoperability among different vendors' 189 implementations. Additionally, non-standard enterprise-specific 190 elements may be defined for private use. 192 1.2. IPFIX Documents Overview 194 "Specification of the IPFIX Protocol for the Exchange of IP Traffic 195 Flow Information" [I-D.ietf-ipfix-protocol-rfc5101bis] and its 196 associated documents define the IPFIX Protocol, which provides 197 network engineers and administrators with access to IP traffic flow 198 information. 200 "Architecture for IP Flow Information Export" [RFC5470] defines the 201 architecture for the export of measured IP flow information out of an 202 IPFIX Exporting Process to an IPFIX Collecting Process, and the basic 203 terminology used to describe the elements of this architecture, per 204 the requirements defined in "Requirements for IP Flow Information 205 Export" [RFC3917]. The IPFIX Protocol document 206 [I-D.ietf-ipfix-protocol-rfc5101bis] then covers the details of the 207 method for transporting IPFIX Data Records and Templates via a 208 congestion-aware transport protocol from an IPFIX Exporting Process 209 to an IPFIX Collecting Process. 
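As a brief illustration of the Template mechanism summarized in Section 1.1, the following sketch (Python; the Template ID, field lengths, and decoding are simplified and hypothetical, not a normative wire encoding) shows how a Template of { type, length } pairs lets Data Records carry only values:

   # A Template is a list of { type, length } pairs; Data Records that
   # reference it carry only the values, in the same order.
   template_256 = [("sourceIPv4Address", 4),
                   ("destinationIPv4Address", 4),
                   ("octetDeltaCount", 8)]

   # A Data Record conforming to Template 256 (values shown decoded,
   # not in their on-the-wire encoding).
   data_record = ["192.0.2.1", "198.51.100.2", 4096]

   # A Collecting Process interprets each value using the Template.
   decoded = {name: value
              for (name, _length), value in zip(template_256, data_record)}
   print(decoded)

Because the Template is transmitted once and then referenced by many Data Records, the per-record overhead is limited to the values themselves.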
211 "IP Flow Information Export (IPFIX) Mediation: Problem Statement" 212 [RFC5982] introduces the concept of IPFIX Mediators, and defines the 213 use cases for which they were designed; "IP Flow Information Export 214 (IPFIX) Mediation: Framework" [RFC6183] then provides an 215 architectural framework for Mediators. Protocol-level issues (e.g., 216 Template and Observation Domain handling across Mediators) are 217 covered by "Specification of the Protocol for IPFIX Mediation" 218 [I-D.ietf-ipfix-mediation-protocol]. This document specifies an 219 Intermediate Process which may be applied at an IPFIX Mediator, as 220 well as at an original Observation Point prior to export, or for 221 analysis and data reduction purposes after receipt at a Collecting 222 Process. 224 2. Terminology 226 Terms used in this document that are defined in the Terminology 227 section of the IPFIX Protocol [I-D.ietf-ipfix-protocol-rfc5101bis] 228 document are to be interpreted as defined there. 230 The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", 231 "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this 232 document are to be interpreted as described in [RFC2119]. 234 In addition, this document defines the following terms 236 Aggregated Flow: A Flow, as defined by 237 [I-D.ietf-ipfix-protocol-rfc5101bis], derived from a set of zero 238 or more original Flows within a defined Aggregation Interval. 239 Note that an Aggregated Flow is defined in the context of an 240 Intermediate Aggregation Process only. Once an Aggregated Flow is 241 exported, it is essentially a Flow as in 242 [I-D.ietf-ipfix-protocol-rfc5101bis] and can be treated as such. 244 Intermediate Aggregation Process: an Intermediate Process (IAP) as 245 in [RFC6183] that aggregates records, based upon a set of Flow 246 Keys or functions applied to fields from the record. 248 Aggregation Interval: A time interval imposed upon an Aggregated 249 Flow. Intermediate Aggregation Processes may use a regular 250 Aggregation Interval (e.g. "every five minutes", "every calendar 251 month"), though regularity is not necessary. Aggregation 252 intervals may also be derived from the time intervals of the 253 Original Flows being aggregated. 255 Partially Aggregated Flow: A Flow during processing within an 256 Intermediate Aggregation Process; refers to an intermediate data 257 structure during aggregation within the Intermediate Aggregation 258 Process architecture detailed in Section 4.2. 260 Original Flow: A Flow given as input to an Intermediate Aggregation 261 Process in order to generate Aggregated Flows. 263 Contributing Flow: An Original Flow that is partially or completely 264 represented within an Aggregated Flow. Each Aggregated Flow is 265 made up of zero or more Contributing Flows, and an Original Flow 266 may contribute to zero or more Aggregated Flows. 268 Original Exporter: The Exporter from which the Original Flows are 269 received; meaningful only when an IAP is deployed at a Mediator. 271 The terminology presented herein improves the precision of, but does 272 not supersede or contradict the terms related to mediation and 273 aggregation defined in the Mediation Problem Statement [RFC5982] and 274 the Mediation Framework [RFC6183] documents. Within this document, 275 the terminology defined in this section is to be considered 276 normative. 278 3. Use Cases for IPFIX Aggregation 280 Aggregation, as a common data reduction method used in traffic data 281 analysis, has many applications. 
When used with a regular 282 Aggregation Interval and Original Flows containing timing 283 information, it generates time series data from a collection of Flows 284 with discrete intervals, as in the example in Section 8.1. This time 285 series data is itself useful for a wide variety of analysis tasks, 286 such as generating input for network anomaly detection systems, or 287 driving visualizations of volume per time for traffic with specific 288 characteristics. As a second example, traffic matrix calculation 289 from flow data, as shown in Section 8.2, is inherently an aggregation 290 action, by spatially aggregating the Flow Key down to input or output 291 interface, address prefix, or autonomous system. 293 Irregular or data-dependent Aggregation Intervals and key aggregation 294 operations can also be used to provide adaptive aggregation of 295 network flow data. Here, full Flow Records can be kept for Flows of 296 interest, while Flows deemed "less interesting" to a given 297 application can be aggregated. For example, in an IPFIX Mediator 298 equipped with traffic classification capabilities for security 299 purposes, potentially malicious Flows could be exported directly, 300 while known-good or probably-good Flows (e.g. normal web browsing) 301 could be exported simply as time series volumes per web server. 303 Aggregation can also be applied to final analysis of stored Flow 304 data, as shown in the example in Section 8.3. All such aggregation 305 applications in which timing information is not available or not 306 important can be treated as if an infinite Aggregation Interval 307 applies. 309 Note that an Intermediate Aggregation Process which removes 310 potentially sensitive information as identified in [RFC6235] may tend 311 to have an anonymizing effect on the Aggregated Flows as well; 312 however, any application of aggregation as part of a data protection 313 scheme should ensure that all the issues raised in [RFC6235] are 314 addressed, specifically Section 4 "Anonymization of IP Flow Data", 315 Section 7.2 "IPFIX-Specific Anonymization Guidelines", and Section 9 316 "Security Considerations". 318 While much of the discussion in this document, and all of the 319 examples, apply to the common case that the Original Flows to be 320 aggregated are all of the same underlying type (i.e., are represented 321 with identical Templates or compatible Templates containing a core 322 set of Information Elements which can be freely converted to one 323 another), and that each packet observed by the Metering Process 324 associated with the Original Exporter is represented, this is not a 325 necessary assumption. Aggregation can also be applied as part of a 326 technique applying both aggregation and correlation to pull together 327 multiple views of the same traffic from different Observation Points 328 using different Templates. For example, consider a set of 329 applications running at different Observation Points for different 330 purposes -- one generating flows with round-trip-times for passive 331 performance measurement, and one generating billing records. Once 332 correlated, these flows could be used to produce Aggregated Flows 333 containing both volume and performance information together. The 334 correlation and normalization operation described in Section 4.2.1 335 handles this specific case of correlation. Flow correlation in the 336 general case is outside the scope of this document. 338 4.
Architecture for Flow Aggregation 340 This section specifies the architecture of the Intermediate 341 Aggregation Process, and how it fits into the IPFIX Architecture. 343 4.1. Aggregation within the IPFIX Architecture 345 An Intermediate Aggregation Process could be deployed at any of three 346 places within the IPFIX Architecture. While aggregation is most 347 commonly done within a Mediator which collects Original Flows from an 348 Original Exporter and exports Aggregated Flows, aggregation can also 349 occur before initial export, or after final collection, as shown in 350 Figure 1. The presence of an IAP at any of these points is of course 351 optional. 353 +===========================================+ 354 | IPFIX Exporter +----------------+ | 355 | | Metering Proc. | | 356 | +-----------------+ +----------------+ | 357 | | Metering Proc. | or | IAP | | 358 | +-----------------+----+----------------+ | 359 | | Exporting Process | | 360 | +-|----------------------------------|--+ | 361 +===|==================================|====+ 362 | | 363 +===|===========================+ | 364 | | Aggregating Mediator | | 365 + +-V-------------------+ | | 366 | | Collecting Process | | | 367 + +---------------------+ | | 368 | | IAP | | | 369 + +---------------------+ | | 370 | | Exporting Process | | | 371 + +-|-------------------+ | | 372 +===|===========================+ | 373 | | 374 +===|==================================|=====+ 375 | | Collector | | 376 | +-V----------------------------------V-+ | 377 | | Collecting Process | | 378 | +------------------+-------------------+ | 379 | | IAP | | 380 | +-------------------+ | 381 | (Aggregation | File Writer | | 382 for Storage) +-----------|-------+ | 383 +================================|===========+ 384 | 385 +------V-----------+ 386 | IPFIX File | 387 +------------------+ 389 Figure 1: Potential Aggregation Locations 391 The Mediator use case is further shown in Figures A and B in 392 [RFC6183]. 394 Aggregation can be applied for either intermediate or final analytic 395 purposes. In certain circumstances, it may make sense to export 396 Aggregated Flows directly after metering, for example, if the 397 Exporting Process is applied to drive a time-series visualization, or 398 when flow data export bandwidth is restricted and flow or packet 399 sampling is not an option. Note that this case, where the 400 Aggregation Process is essentially integrated into the Metering 401 Process, is essentially covered by the IPFIX architecture [RFC5470]: 402 the Flow Keys used are simply a subset of those that would normally 403 be used, and time intervals may be chosen other than those available 404 from the cache policies customarily offered by the Metering Process. 405 A Metering Process in this arrangement MAY choose to simulate the 406 generation of larger Flows in order to generate Original Flow counts, 407 if the application calls for compatibility with an Intermediate 408 Aggregation Process deployed in a separate location. 410 In the specific case that an Intermediate Aggregation Process is 411 employed for data reduction for storage purposes, it can take 412 Original Flows from a Collecting Process or File Reader and pass 413 Aggregated Flows to a File Writer for storage. 415 Deployment of an Intermediate Aggregation Process within a Mediator 416 [RFC5982] is a much more flexible arrangement. Here, the Mediator 417 consumes Original Flows and produces Aggregated Flows; this 418 arrangement is suited to any of the use cases detailed in Section 3. 
419 In a Mediator, Original Flows from multiple sources can also be 420 aggregated into a single stream of Aggregated Flows; the 421 architectural specifics of this arrangement are not addressed in this 422 document, which is concerned only with the aggregation operation 423 itself; see [I-D.ietf-ipfix-mediation-protocol] for details. 425 The data paths into and out of an Intermediate Aggregation Process 426 are shown in Figure 2. 428 packets --+ IPFIX Messages IPFIX Files 429 | | | 430 V V V 431 +==================+ +====================+ +=============+ 432 | Metering Process | | Collecting Process | | File Reader | 433 | | +====================+ +=============+ 434 | (Original Flows | | | 435 | or direct | | Original Flows | 436 | aggregation) | V V 437 + - - - - - - - - -+======================================+ 438 | Intermediate Aggregation Process (IAP) | 439 +=========================================================+ 440 | Aggregated Aggregated | 441 | Flows Flows | 442 V V 443 +===================+ +=============+ 444 | Exporting Process | | File Writer | 445 +===================+ +=============+ 446 | | 447 V V 448 IPFIX Messages IPFIX Files 450 Figure 2: Data paths through the aggregation process 452 Note that as Aggregated Flows are IPFIX Flows, an Intermediate 453 Aggregation Process may aggregate already-Aggregated Flows from an 454 upstream IAP as well as original Flows from an upstream Original 455 Exporter or Metering Process. 457 Aggregation may also need to correlate original flows from multiple 458 Metering Processes, each according to a different Template with 459 different Flow Keys and values. This arrangement is shown in 460 Figure 3; in this case, the correlation and normalization operation 461 described in Section 4.2.1 handles merging the Original Flows before 462 aggregation. 464 packets --+---------------------+------------------+ 465 | | | 466 V V V 467 +====================+ +====================+ +====================+ 468 | Metering Process 1 | | Metering Process 2 | | Metering Process n | 469 +====================+ +====================+ +====================+ 470 | | Original Flows | 471 V V V 472 +==================================================================+ 473 | Intermediate Aggregation Process + correlation / normalization | 474 +==================================================================+ 475 | Aggregated Aggregated | 476 | Flows Flows | 477 V V 478 +===================+ +=============+ 479 | Exporting Process | | File Writer | 480 +===================+ +=============+ 481 | | 482 +------------> IPFIX Messages <----------+ 484 Figure 3: Aggregating Original Flows from multiple Metering Processes 486 4.2. Intermediate Aggregation Process Architecture 488 Within this document, an Intermediate Aggregation Process can be seen 489 as hosting a function composed of four types of operations on 490 Partially Aggregated Flows, as illustrated in Figure 4: interval 491 distribution (temporal), key aggregation (spatial), value aggregation 492 (spatial), and aggregate combination. "Partially Aggregated Flows" 493 as defined in Section 2 are essentially the intermediate results of 494 aggregation, internal to the Intermediate Aggregation Process. 
496 Original Flows / Original Flows requiring correlation 497 +=============|===================|===================|=============+ 498 | | Intermediate | Aggregation | Process | 499 | | V V | 500 | | +-----------------------------------------------+ | 501 | | | (optional) correlation and normalization | | 502 | | +-----------------------------------------------+ | 503 | | | | 504 | V V | 505 | +--------------------------------------------------------------+ | 506 | | interval distribution (temporal) | | 507 | +--------------------------------------------------------------+ | 508 | | ^ | ^ | | 509 | | | Partially Aggregated | | | | 510 | V | Flows V | | | 511 | +-------------------+ +--------------------+ | | 512 | | key aggregation |<------| value aggregation | | | 513 | | (spatial) |------>| (spatial) | | | 514 | +-------------------+ +--------------------+ | | 515 | | | | | 516 | | Partially Aggregated | | | 517 | V Flows V V | 518 | +--------------------------------------------------------------+ | 519 | | aggregate combination | | 520 | +--------------------------------------------------------------+ | 521 | | | 522 +=======================================|===========================+ 523 V 524 Aggregated Flows 526 Figure 4: Conceptual model of aggregation operations within an IAP 528 Interval distribution: a temporal aggregation operation which 529 imposes an Aggregation Interval on the partially Aggregated Flow. 530 This Aggregation Interval may be regular, irregular, or derived 531 from the timing of the Original Flows themselves. Interval 532 distribution is discussed in detail in Section 5.1. 534 Key aggregation: a spatial aggregation operation which results in 535 the addition, modification, or deletion of Flow Key fields in the 536 Partially Aggregated Flows. New Flow Keys may be derived from 537 existing Flow Keys (e.g., looking up an AS number for an IP 538 address), or "promoted" from specific non-Key fields (e.g., when 539 aggregating Flows by packet count per Flow). Key aggregation can 540 also add new non-Key fields derived from Flow Keys that are 541 deleted during key aggregation; mainly counters of unique reduced 542 keys. Key aggregation is discussed in detail in Section 5.2. 544 Value aggregation: a spatial aggregation operation which results in 545 the addition, modification, or deletion of non-Key fields in the 546 Partially Aggregated Flows. These non-Key fields may be "demoted" 547 from existing Key fields, or derived from existing Key or non-Key 548 fields. Value aggregation is discussed in detail in Section 5.3. 550 Aggregate combination: an operation combining multiple partially 551 Aggregated Flows having undergone interval distribution, key 552 aggregation, and value aggregation which share Flow Keys and 553 Aggregation Intervals into a single Aggregated Flow per set of 554 Flow Key values and Aggregation Interval. Aggregate combination 555 is discussed in detail in Section 5.4. 557 Correlation and normalization: an optional operation, applies when 558 accepting Original Flows from Metering Processes which export 559 different views of essentially the same Flows before aggregation; 560 the details of correlation and normalization are specified in 561 Section 4.2.1, below. 
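To make the composition of these operations concrete, the following minimal sketch (Python; the record structure, field names, interval length, and the particular key reduction are hypothetical and chosen only for illustration) applies the four operations in one simple fixed order; as noted below, a real Intermediate Aggregation Process may order and repeat them differently:

   # Sketch of the four aggregation operations of Figure 4 applied in a
   # simple fixed order (hypothetical structures and field names).
   from collections import defaultdict

   INTERVAL = 300  # example Aggregation Interval, in seconds

   def interval_distribution(flow):
       # Impose a regular interval; here, Start Interval distribution:
       # the whole Flow is accounted to the interval containing its start.
       start = flow["start"] - (flow["start"] % INTERVAL)
       return dict(flow, start=start, end=start + INTERVAL)

   def key_aggregation(flow):
       # Reduce the Flow Key to the source address only.
       return dict(flow, key=(flow["key"][0],))

   def value_aggregation(flow):
       # Non-Key fields are passed through unchanged in this sketch.
       return flow

   def aggregate_combination(partial_flows):
       # Combine Partially Aggregated Flows sharing Key values and interval.
       combined = defaultdict(lambda: {"octets": 0, "packets": 0})
       for f in partial_flows:
           c = combined[(f["key"], f["start"], f["end"])]
           c["octets"] += f["octets"]    # delta counters are summed
           c["packets"] += f["packets"]
       return dict(combined)

   original_flows = [
       {"key": ("192.0.2.1", "198.51.100.2", 80), "start": 10, "end": 40,
        "octets": 5000, "packets": 10},
       {"key": ("192.0.2.1", "198.51.100.3", 443), "start": 20, "end": 50,
        "octets": 3000, "packets": 6},
   ]
   partials = [value_aggregation(key_aggregation(interval_distribution(f)))
               for f in original_flows]
   print(aggregate_combination(partials))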
563 The first three of these operations may be carried out any number of 564 times in any order, either on Original Flows or on the results of one 565 of the operations above, with one caveat: since Flows carry their own 566 interval data, any spatial aggregation operation implies a temporal 567 aggregation operation, so at least one interval distribution step, 568 even if implicit, is required by this architecture. This is shown as 569 the first step for the sake of simplicity in the diagram above. Once 570 all aggregation operations are complete, aggregate combination 571 ensures that for a given Aggregation Interval, set of Flow Key 572 values, and Observation Domain, only one Flow is produced by the 573 Intermediate Aggregation Process. 575 This model describes the operations within a single Intermediate 576 Aggregation Process, and it is anticipated that most aggregation will 577 be applied within a single process. However, as the steps in the 578 model may be applied in any order and aggregate combination is 579 idempotent, any number of Intermediate Aggregation Processes 580 operating in series can be modeled as a single process. This allows 581 aggregation operations to be flexibly distributed across any number 582 of processes, should application or deployment considerations so 583 dictate. 585 4.2.1. Correlation and Normalization 587 When accepting Original Flows from multiple Metering Processes, each 588 of which provides a different view of the Original Flow as seen from 589 the point of view of the IAP, an optional correlation and 590 normalization operation combines each of these single Flow Records 591 into a set of unified Partially Aggregated Flows before applying 592 interval distribution. These unified Flows appear as if they had 593 been measured at a single Metering Process which used the union of 594 the set of Flow Keys and non-key fields of all Metering Processes 595 sending Original Flows to the IAP. 597 Since the multiple views may not be completely consistent, due to export errors or other slight irregularities in flow metering, normalization involves applying a set of aggregation-application-specific corrections in order to ensure consistency in the unified Flows. 603 In general, correlation and normalization should take multiple views 604 of essentially the same Flow, as determined by the configuration of 605 the operation itself, and render them into a single unified Flow. 606 Flows which are essentially different should not be unified by the 607 correlation and normalization operation. This operation therefore 608 requires enough information about the configuration and deployment of 609 Metering Processes from which it correlates Original Flows in order 610 to make this distinction correctly and consistently. 612 The exact steps performed to correlate and normalize flows in this 613 step are application-, implementation-, and deployment-specific, and 614 will not be further specified in this document. 616 5. IP Flow Aggregation Operations 618 As stated in Section 2, an Aggregated Flow is simply an IPFIX Flow 619 generated from Original Flows by an Intermediate Aggregation Process. 620 Here, we detail the operations by which this is achieved within an 621 Intermediate Aggregation Process. 623 5.1. Temporal Aggregation through Interval Distribution 625 Interval distribution imposes a time interval on the resulting 626 Aggregated Flows. The selection of an interval is specific to the 627 given aggregation application.
Intervals may be derived from the 628 Original Flows themselves (e.g., an interval may be selected to cover 629 the entire time containing the set of all Flows sharing a given Key, 630 as in Time Composition described in Section 5.1.2) or externally 631 imposed; in the latter case the externally imposed interval may be 632 regular (e.g., every five minutes) or irregular (e.g., to allow for 633 different time resolutions at different times of day, under different 634 network conditions, or indeed for different sets of Original Flows). 636 The length of the imposed interval itself has tradeoffs. Shorter 637 intervals allow higher-resolution aggregated data and, in streaming 638 applications, faster reaction time. Longer intervals generally lead 639 to greater data reduction and simplified counter distribution. 640 Specifically, counter distribution is greatly simplified by the 641 choice of an interval longer than the duration of the longest Original 642 Flow, itself generally determined by the Original Flow's Metering 643 Process active timeout; in this case an Original Flow can contribute 644 to at most two Aggregated Flows, and the more complex value 645 distribution methods become inapplicable. 647 | | | | 648 | |<--Flow A-->| | | | 649 | |<--Flow B-->| | | 650 | |<-------------Flow C-------------->| | 651 | | | | 652 | interval 0 | interval 1 | interval 2 | 654 Figure 5: Illustration of interval distribution 656 In Figure 5, we illustrate three common possibilities for interval 657 distribution as applies with regular intervals to a set of three 658 Original Flows. For Flow A, the start and end times lie within the 659 boundaries of a single interval 0; therefore, Flow A contributes to 660 only one Aggregated Flow. Flow B, by contrast, has the same duration 661 but crosses the boundary between intervals 0 and 1; therefore, it 662 will contribute to two Aggregated Flows, and its counters must be 663 distributed among these Flows, though in the two-interval case this 664 can be simplified somewhat by simply picking one of the two 665 intervals, or proportionally distributing between them. Only Flows 666 like Flow A and Flow B will be produced when the interval is chosen 667 to be longer than the duration of the longest Original Flow, as above. 668 More complicated is the case of Flow C, which contributes to more 669 than two Aggregated Flows, and must have its counters distributed 670 according to some policy as in Section 5.1.1. 672 5.1.1. Distributing Values Across Intervals 674 In general, counters in Aggregated Flows are treated the same as in 675 any Flow. Each counter is independently calculated as if it were 676 derived from the set of packets in the Original Flow: e.g., delta 677 counters are summed, the most recent total count for each Original 678 Flow is taken and then summed across Flows, and so on. 680 When the Aggregation Interval is guaranteed to be longer than the 681 longest Original Flow, a Flow can cross at most one Interval 682 boundary, and will therefore contribute to at most two Aggregated 683 Flows. Most common in this case is to arbitrarily but consistently 684 choose to account the Original Flow's counters either to the first or 685 the last Aggregated Flow to which it could contribute. 687 However, this becomes more complicated when the Aggregation Interval 688 is shorter than the longest Original Flow in the source data. In 689 such cases, each Original Flow can incompletely cover one or more 690 time intervals, and apply to one or more Aggregated Flows.
In this 691 case, the Intermediate Aggregation Process must distribute the 692 counters in the Original Flows across one or more resulting 693 Aggregated Flows. There are several methods for doing this, listed 694 here in roughly increasing order of complexity and accuracy; most of 695 these are necessary only in specialized cases. 697 End Interval: The counters for an Original Flow are added to the 698 counters of the appropriate Aggregated Flow containing the end 699 time of the Original Flow. 701 Start Interval: The counters for an Original Flow are added to the 702 counters of the appropriate Aggregated Flow containing the start 703 time of the Original Flow. 705 Mid Interval: The counters for an Original Flow are added to the 706 counters of a single appropriate Aggregated Flow containing some 707 timestamp between start and end time of the Original Flow. 709 Simple Uniform Distribution: Each counter for an Original Flow is 710 divided by the number of time intervals the Original Flow covers 711 (i.e., of appropriate Aggregated Flows sharing the same Flow 712 Keys), and this number is added to each corresponding counter in 713 each Aggregated Flow. 715 Proportional Uniform Distribution: This is like simple uniform 716 distribution, but accounts for the fractional portions of a time 717 interval covered by an Original Flow in the first and last time 718 interval. Each counter for an Original Flow is divided by the 719 number of time _units_ the Original Flow covers, to derive a mean 720 count rate. This rate is then multiplied by the number of time 721 units in the intersection of the duration of the Original Flow and 722 the time interval of each Aggregated Flow. 724 Simulated Process: Each counter of the Original Flow is distributed 725 among the intervals of the Aggregated Flows according to some 726 function the Intermediate Aggregation Process uses based upon 727 properties of Flows presumed to be like the Original Flow. For 728 example, Flow Records representing bulk transfer might follow a 729 more or less proportional uniform distribution, while interactive 730 processes are far more bursty. 732 Direct: The Intermediate Aggregation Process has access to the 733 original packet timings from the packets making up the Original 734 Flow, and uses these to distribute or recalculate the counters. 736 A method for exporting the distribution of counters across multiple 737 Aggregated Flows is detailed in Section 7.4. In any case, counters 738 MUST be distributed across the multiple Aggregated Flows in such a 739 way that the total count is preserved, within the limits of accuracy 740 of the implementation. This property allows data to be aggregated 741 and re-aggregated with negligible loss of original count information. 742 To avoid confusion in interpretation of the aggregated data, all the 743 counters in a given Aggregated Flow MUST be distributed via the same 744 method. 746 More complex counter distribution methods generally require that the 747 interval distribution process track multiple "current" time intervals 748 at once. This may introduce some delay into the aggregation 749 operation, as an interval should only expire and be available for 750 export when no additional Original Flows applying to the interval are 751 expected to arrive at the Intermediate Aggregation Process. 753 Note, however, that since there is no guarantee that Flows from the 754 Original Exporter will arrive in any given order, whether for 755 transport-specific reasons (i.e. 
UDP reordering) or Metering Process 756 or Exporting Process implementation-specific reasons, even simpler 757 distribution methods may need to deal with flows arriving in other 758 than start time or end time order. Therefore, the use of larger 759 intervals does not obviate the need to buffer Partially Aggregated 760 Flows within "current" time intervals, to ensure the IAP can accept 761 flow time intervals in any arrival order. More generally, the 762 interval distribution process SHOULD accept Flow start and end times 763 in the Original Flows in any reasonable order. The expiration of 764 intervals in interval distribution operations is dependent on 765 implementation and deployment requirements, and MUST be made 766 configurable in contexts in which "reasonable order" is not obvious 767 at implementation time. This operation may lead to delay and loss 768 introduced by the IAP, as detailed in Section 6.2. 770 5.1.2. Time Composition 772 Time Composition as in Section 5.4 of [RFC5982] (or interval 773 combination) is a special case of aggregation, where interval 774 distribution imposes longer intervals on Flows with matching keys and 775 "chained" start and end times, without any key reduction, in order to 776 join long-lived Flows which may have been split (e.g., due to an 777 active timeout shorter than the actual duration of the Flow.) Here, 778 no Key aggregation is applied, and the Aggregation Interval is chosen 779 on a per-Flow basis to cover the interval spanned by the set of 780 aggregated Flows. This may be applied alone in order to normalize 781 split Flows, or in combination with other aggregation functions in 782 order to obtain more accurate Original Flow counts. 784 5.1.3. External Interval Distribution 786 Note that much of the difficulty of interval distribution at an IAP 787 can be avoided simply by configuring the original Exporters to 788 synchronize the time intervals in the Original Flows with the desired 789 aggregation interval. The resulting Original Flows would then be 790 split to align perfectly with the time intervals imposed during 791 Interval Imposition, as shown in Figure 6, though this may reduce 792 their usefulness for non-Aggregation purposes. This approach allows 793 the Intermediate Aggregation Process to use Start Interval or End 794 Interval distribution, while having equivalent information to that 795 available to Direct interval distribution. 797 | | | | 798 |<----Flow D---->|<----Flow E---->|<----Flow F---->| 799 | | | | 800 | interval 0 | interval 1 | interval 2 | 802 Figure 6: Illustration of external interval distribution 804 5.2. Spatial Aggregation of Flow Keys 806 Key aggregation generates a new set of Flow Key values for the 807 Aggregated Flows from the Original Flow Key and non-Key fields in the 808 Original Flows, or from correlation of the Original Flow information 809 with some external source. There are two basic operations here. 810 First, Aggregated Flow Keys may be derived directly from Original 811 Flow Keys through reduction, or the dropping of fields or precision 812 in the Original Flow Keys. Second, Aggregated Flow Keys may be 813 derived through replacement, e.g. by removing one or more fields from 814 the Original Flow and replacing them with fields derived from the 815 removed fields. Replacement may refer to external information (e.g., 816 IP to AS number mappings). Replacement may apply to Flow Keys as 817 well as non-key fields. 
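As a concrete rendering of reduction, the following minimal sketch (Python; the field names are hypothetical stand-ins for the corresponding Information Elements) performs the operation shown in Figure 7 below, retaining the source address, masking the destination address to a /24, and dropping the remaining Flow Keys:

   import ipaddress

   def reduce_keys(keys):
       # Reduction as in Figure 7: retain the source address, mask the
       # destination address to /24, drop ports, protocol, and TOS.
       dst_net = ipaddress.ip_network(keys["dst_ip4"] + "/24", strict=False)
       return {"src_ip4": keys["src_ip4"], "dst_ip4_prefix": str(dst_net)}

   print(reduce_keys({"src_ip4": "192.0.2.1", "dst_ip4": "198.51.100.77",
                      "src_port": 34812, "dst_port": 80,
                      "proto": 6, "tos": 0}))
   # -> {'src_ip4': '192.0.2.1', 'dst_ip4_prefix': '198.51.100.0/24'}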
For example, consider an application which 818 aggregates Original Flows by packet count (i.e., generating an 819 Aggregated Flow for all one-packet Flows, one for all two-packet 820 Flows, and so on). This application would promote the packet count 821 to a Flow Key. 823 Key aggregation may also result in the addition of new non-Key fields 824 to the Aggregated Flows, namely Original Flow counters and unique 825 reduced key counters; these are treated in more detail in 826 Section 5.2.1 and Section 5.2.2, respectively. 828 In any key aggregation operation, reduction and/or replacement may be 829 applied any number of times in any order. Which of these operations 830 are supported by a given implementation is implementation- and 831 application-dependent. 833 Original Flow Keys 835 +---------+---------+----------+----------+-------+-----+ 836 | src ip4 | dst ip4 | src port | dst port | proto | tos | 837 +---------+---------+----------+----------+-------+-----+ 838 | | | | | | 839 retain mask /24 X X X X 840 | | 841 V V 842 +---------+-------------+ 843 | src ip4 | dst ip4 /24 | 844 +---------+-------------+ 846 Aggregated Flow Keys (by source address and destination class-C) 848 Figure 7: Illustration of key aggregation by reduction 850 Figure 7 illustrates an example reduction operation, aggregation by 851 source address and destination class C network. Here, the port, 852 protocol, and type-of-service information is removed from the Flow 853 Key, the source address is retained, and the destination address is 854 masked by dropping the lower 8 bits. 856 Original Flow Keys 858 +---------+---------+----------+----------+-------+-----+ 859 | src ip4 | dst ip4 | src port | dst port | proto | tos | 860 +---------+---------+----------+----------+-------+-----+ 861 | | | | | | 862 V V | | | | 863 +-------------------+ X X X X 864 | ASN lookup table | 865 +-------------------+ 866 | | 867 V V 868 +---------+---------+ 869 | src asn | dst asn | 870 +---------+---------+ 872 Aggregated Flow Keys (by source and dest ASN) 874 Figure 8: Illustration of key aggregation by reduction and 875 replacement 877 Figure 8 illustrates an example reduction and replacement operation, 878 aggregation by source and destination Border Gateway Protocol (BGP) 879 Autonomous System Number (ASN) without ASN information available in 880 the Original Flow. Here, the port, protocol, and type-of-service 881 information is removed from the Flow Keys, while the source and 882 destination addresses are run through an IP address to ASN lookup 883 table, and the Aggregated Flow Keys are made up of the resulting 884 source and destination ASNs. 886 5.2.1. Counting Original Flows 888 When aggregating multiple Original Flows into an Aggregated Flow, it 889 is often useful to know how many Original Flows are present in the 890 Aggregated Flow. Section 7.2 introduces four new Information 891 Elements to export these counters. 893 There are two possible ways to count Original Flows, which we call 894 here conservative and non-conservative. Conservative flow counting 895 has the property that each Original Flow contributes exactly one to 896 the total flow count within a set of Aggregated Flows. In other 897 words, conservative flow counters are distributed just as any other 898 counter during interval distribution, except each Original Flow is 899 assumed to have a flow count of one.
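The two counting methods can be sketched as follows (Python; the interval length and Flow timestamps are hypothetical values chosen to match Flows A, B, and C of Figure 5; the same scenario is worked through in prose below):

   INTERVAL = 300  # example Aggregation Interval, in seconds

   # Flows A, B, and C of Figure 5 as (start, end) in seconds -- example values.
   flows = {"A": (60, 240), "B": (200, 400), "C": (100, 800)}

   conservative = {}      # Start Interval distribution: one count per Flow
   non_conservative = {}  # one count per (Flow, overlapped interval) pair
   for start, end in flows.values():
       first = start // INTERVAL
       conservative[first] = conservative.get(first, 0) + 1
       for i in range(start // INTERVAL, end // INTERVAL + 1):
           non_conservative[i] = non_conservative.get(i, 0) + 1

   print(conservative)      # {0: 3} -- sums to the number of Original Flows
   print(non_conservative)  # {0: 3, 1: 2, 2: 1} -- sums to 6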
When a count for an Original 900 Flow must be distributed across a set of Aggregated Flows, and a 901 distribution method is used which does not account for that Original 902 Flow completely within a single Aggregated Flow, conservative flow 903 counting requires a fractional representation. 905 By contrast, non-conservative flow counting is used to count how many 906 Contributing Flows are represented in an Aggregated Flow. Flow 907 counters are not distributed in this case. An Original Flow which is 908 present within N Aggregated Flows would add N to the sum of non- 909 conservative flow counts, one to each Aggregated Flow. In other 910 words, the sum of conservative flow counts over a set of Aggregated 911 Flows is always equal to the number of Original Flows, while the sum 912 of non-conservative flow counts is strictly greater than or equal to 913 the number of Original Flows. 915 For example, consider Flows A, B, and C as illustrated in Figure 5. 916 Assume that the key aggregation step aggregates the keys of these 917 three Flows to the same aggregated Flow Key, and that start interval 918 counter distribution is in effect. The conservative flow count for 919 interval 0 is 3 (since Flows A, B, and C all begin in this interval), 920 and for the other two intervals is 0. The non-conservative flow 921 count for interval 0 is also 3 (due to the presence of Flows A, B, 922 and C), for interval 1 is 2 (Flows B and C), and for interval 2 is 1 923 (Flow C). The sum of the conservative counts 3 + 0 + 0 = 3, the 924 number of Original Flows; while the sum of the non-conservative 925 counts 3 + 2 + 1 = 6. 927 Note that the active and inactive timeouts used to generate Original 928 Flows, as well as the cache policy used to generate those Flows, have 929 an effect on how meaningful either the conservative or non- 930 conservative flow count will be during aggregation. In general, 931 Original Exporters using the IPFIX Configuration Model SHOULD be 932 configured to export Flows with equal or similar activeTimeout and 933 inactiveTimeout configuration values, and the same cacheMode, as 934 defined in [I-D.ietf-ipfix-configuration-model]. Original Exporters 935 not using the IPFIX Configuration Model SHOULD be configured 936 equivalently. 938 5.2.2. Counting Distinct Key Values 940 One common case in aggregation is counting distinct key values that 941 were reduced away during key aggregation. The most common use case 942 for this is counting distinct hosts per Flow Key; for example, in 943 host characterization or anomaly detection, distinct sources per 944 destination or distinct destinations per source are common metrics. 945 These new non-Key fields are added during key aggregation. 947 For such applications, Information Elements for distinct counts of 948 IPv4 and IPv6 addresses are defined in Section 7.3. These are named 949 distinctCountOf(KeyName). Additional such Information Elements 950 should be registered with IANA on an as-needed basis. 952 5.3. Spatial Aggregation of Non-Key Fields 954 Aggregation operations may also lead to the addition of value fields 955 demoted from key fields, or derived from other value fields in the 956 Original Flows. Specific cases of this are treated in the 957 subsections below. 959 5.3.1. Counter Statistics 961 Some applications of aggregation may benefit from computing different 962 statistics than those native to each non-key field (e.g., flags are 963 natively combined via union, and delta counters by summing). 
For 964 example, minimum and maximum packet counts per Flow, mean bytes per 965 packet per Contributing Flow, and so on. Certain Information 966 Elements for these applications are already provided in the IANA 967 IPFIX Information Elements registry [iana-ipfix-assignments] (e.g. 968 minimumIpTotalLength). 970 A complete specification of additional aggregate counter statistics 971 is outside the scope of this document, and should be added in the 972 future to the IANA IPFIX Information Elements registry on a per- 973 application, as-needed basis. 975 5.3.2. Derivation of New Values from Flow Keys and non-Key fields 977 More complex operations may lead to other derived fields being 978 generated from the set of values or Flow Keys reduced away during 979 aggregation. A prime example of this is sample entropy calculation. 980 This counts distinct values and frequency, so is similar to distinct 981 key counting as in Section 5.2.2, but may be applied to the 982 distribution of values for any flow field. 984 Sample entropy calculation provides a one-number normalized 985 representation of the value spread and is useful for anomaly 986 detection. The behavior of entropy statistics is such that a small 987 number of keys showing up very often drives the entropy value down 988 towards zero, while a large number of keys, each showing up with 989 lower frequency, drives the entropy value up. 991 Entropy statistics are generally useful for identifier keys, such as 992 IP addresses, port numbers, AS numbers, etc. They can also be 993 calculated on flow length, flow duration fields and the like, even if 994 this generally yields less distinct value shifts when the traffic mix 995 changes. 997 As a practical example, one host scanning a lot of other hosts will 998 drive source IP entropy down and target IP entropy up. A similar 999 effect can be observed for ports. This pattern can also be caused by 1000 the scan-traffic of a fast Internet worm. A second example would be 1001 a DDoS flooding attack against a single target (or small number of 1002 targets) which drives source IP entropy up and target IP entropy 1003 down. 1005 A complete specification of additional derived values or entropy 1006 information elements is outside the scope of this document. Any such 1007 Information Elements should be added in the future to the IANA IPFIX 1008 Information Elements registry on a per-application, as-needed basis. 1010 5.4. Aggregation Combination 1012 Interval distribution and key aggregation together may generate 1013 multiple Partially Aggregated Flows covering the same time interval 1014 with the same set of Flow Key values. The process of combining these 1015 Partially Aggregated Flows into a single Aggregated Flow is called 1016 aggregation combination. In general, non-Key values from multiple 1017 Contributing Flows are combined using the same operation by which 1018 values are combined from packets to form Flows for each Information 1019 Element. Delta counters are summed, flags are unioned, and so on. 1021 6. Additional Considerations and Special Cases in Flow Aggregation 1023 6.1. Exact versus Approximate Counting during Aggregation 1025 In certain circumstances, particularly involving aggregation by 1026 devices with limited resources, and in situations where exact 1027 aggregated counts are less important than relative magnitudes (e.g. 1028 driving graphical displays), counter distribution during key 1029 aggregation may be performed by approximate counting means (e.g. 1030 Bloom filters). 
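As one illustration of an approximate counting technique in this setting, the following sketch estimates a distinct source address count using linear counting over a fixed-size bitmap (Python; this is only one possible approach and is neither mandated nor specified by this document):

   import hashlib, math

   class LinearCounter:
       # Approximate distinct counting over a fixed-size bitmap
       # ("linear counting"); trades a bounded error for fixed memory.
       def __init__(self, bits=4096):
           self.bits = bits
           self.bitmap = bytearray(bits // 8)

       def add(self, value):
           h = int.from_bytes(hashlib.sha256(value.encode()).digest()[:8],
                              "big") % self.bits
           self.bitmap[h // 8] |= 1 << (h % 8)

       def estimate(self):
           zero_bits = sum(8 - bin(b).count("1") for b in self.bitmap)
           if zero_bits == 0:
               return float(self.bits)   # bitmap saturated
           return -self.bits * math.log(zero_bits / self.bits)

   lc = LinearCounter()
   for i in range(1000):
       lc.add("192.0.2.%d" % (i % 200))  # 200 distinct source addresses
   print(round(lc.estimate()))           # approximately 200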
The choice to use approximate counting is 1031 implementation- and application-dependent. 1033 6.2. Delay and Loss introduced by the IAP 1035 When accepting Original Flows in export order from traffic captured 1036 live, the Intermediate Aggregation Process waits for all Original 1037 Flows which may contribute to a given interval during interval 1038 distribution. This is generally dominated by the active timeout of 1039 the Metering Process measuring the Original Flows. For example, with 1040 Metering Processes configured with a 5 minute active timeout, the 1041 Intermediate Aggregation Process introduces a delay of at least 5 1042 minutes to all exported Aggregated Flows to ensure it has received 1043 all Original Flows. Note that when aggregating flows from multiple 1044 Metering Processes with different active timeouts, the delay is 1045 determined by the maximum active timeout. 1047 In certain circumstances, additional delay at the original Exporter 1048 may cause an IAP to close an interval before the last Original 1049 Flow(s) accountable to the interval arrives; in this case the IAP MAY 1050 drop the late Original Flow(s). Accounting of flows lost at an 1051 Intermediate Process due to such issues is covered in 1052 [I-D.ietf-ipfix-mediation-protocol]. 1054 6.3. Considerations for Aggregation of Sampled Flows 1056 The accuracy of Aggregated Flows may also be affected by sampling of 1057 the Original Flows, or sampling of packets making up the Original 1058 Flows. At the time of writing, the effect of sampling on flow 1059 aggregation is still an open research question. However, to maximize 1060 the comparability of Aggregated Flows, aggregation of sampled Flows 1061 should only be applied to Original Flows sampled using the same 1062 sampling rate and sampling algorithm, Flows created from packets 1063 sampled using the same sampling rate and sampling algorithm, or 1064 Original Flows which have been normalized as if they had the same 1065 sampling rate and algorithm before aggregation. For more on packet 1066 sampling within IPFIX, see [RFC5476]. For more on Flow sampling 1067 within the IPFIX Mediator Framework, see 1068 [I-D.ietf-ipfix-flow-selection-tech]. 1070 6.4. Considerations for Aggregation of Heterogeneous Flows 1072 Aggregation may be applied to Original Flows from different sources 1073 and of different types (i.e., represented using different, perhaps 1074 wildly-different Templates). When the goal is to separate the 1075 heterogeneous Original Flows and aggregate them into heterogeneous 1076 Aggregated Flows, each aggregation should be done at its own 1077 Intermediate Aggregation Process. The Observation Domain ID on the 1078 Messages containing the output Aggregated Flows can be used to 1079 identify the different Processes, and to segregate the output. 1081 However, when the goal is to aggregate these Flows into a single 1082 stream of Aggregated Flows representing one type of data, and if the 1083 Original Flows may represent the same original packet at two 1084 different Observation Points, the Original Flows should be correlated 1085 by the correlation and normalization operation within the IAP to 1086 ensure that each packet is only represented in a single Aggregated 1087 Flow or set of Aggregated Flows differing only by aggregation 1088 interval. 1090 7. Export of Aggregated IP Flows using IPFIX 1092 In general, Aggregated Flows are exported in IPFIX as any other Flow. 
1093 However, certain aspects of Aggregated Flow export benefit from 1094 additional guidelines, or new Information Elements to represent 1095 aggregation metadata or information generated during aggregation. 1096 These are detailed in the following subsections. 1098 7.1. Time Interval Export 1100 Since an Aggregated Flow is simply a Flow, the existing timestamp 1101 Information Elements in the IPFIX Information Model (e.g., 1102 flowStartMilliseconds, flowEndNanoseconds) are sufficient to specify 1103 the time interval for aggregation. Therefore, no new aggregation- 1104 specific Information Elements for exporting time interval information 1105 are necessary. 1107 Each Aggregated Flow carrying timing information SHOULD contain both 1108 an interval start and interval end timestamp. 1110 7.2. Flow Count Export 1112 The following four Information Elements are defined to count Original 1113 Flows as discussed in Section 5.2.1. 1115 7.2.1. originalFlowsPresent 1117 Description: The non-conservative count of Original Flows 1118 contributing to this Aggregated Flow. Non-conservative counts 1119 need not sum to the original count on re-aggregation. 1121 Abstract Data Type: unsigned64 1123 Data Type Semantics: deltaCount 1125 ElementId: TBD1 1127 7.2.2. originalFlowsInitiated 1129 Description: The conservative count of Original Flows whose first 1130 packet is represented within this Aggregated Flow. Conservative 1131 counts must sum to the original count on re-aggregation. 1133 Abstract Data Type: unsigned64 1134 Data Type Semantics: deltaCount 1136 ElementId: TBD2 1138 7.2.3. originalFlowsCompleted 1140 Description: The conservative count of Original Flows whose last 1141 packet is represented within this Aggregated Flow. Conservative 1142 counts must sum to the original count on re-aggregation. 1144 Abstract Data Type: unsigned64 1146 Data Type Semantics: deltaCount 1148 ElementId: TBD3 1150 7.2.4. deltaFlowCount 1152 Description: The conservative count of Original Flows contributing 1153 to this Aggregated Flow; may be distributed via any of the methods 1154 expressed by the valueDistributionMethod Information Element. 1156 Abstract Data Type: unsigned64 1158 Data Type Semantics: deltaCount 1160 ElementId: 3 1162 7.3. Distinct Host Export 1164 The following four Information Elements represent the distinct counts 1165 of source and destination network-layer addresses, used to export 1166 distinct host counts reduced away during key aggregation. 1168 7.3.1. distinctCountOfSourceIPAddress 1170 Description: The count of distinct source IP address values for 1171 Original Flows contributing to this Aggregated Flow, without 1172 regard to IP version. This Information Element is preferred to 1173 the IP-version-specific counters, unless it is important to 1174 separate the counts by version. 1176 Abstract Data Type: unsigned64 1178 Data Type Semantics: totalCount 1179 ElementId: TBD4 1181 7.3.2. distinctCountOfDestinationIPAddress 1183 Description: The count of distinct destination IP address values 1184 for Original Flows contributing to this Aggregated Flow, without 1185 regard to IP version. This Information Element is preferred to 1186 the version-specific counters below, unless it is important to 1187 separate the counts by version. 1189 Abstract Data Type: unsigned64 1191 Data Type Semantics: totalCount 1193 ElementId: TBD5 1195 7.3.3. 
distinctCountOfSourceIPv4Address 1197 Description: The count of distinct source IPv4 address values for 1198 Original Flows contributing to this Aggregated Flow. 1200 Abstract Data Type: unsigned32 1202 Data Type Semantics: totalCount 1204 ElementId: TBD6 1206 7.3.4. distinctCountOfDestinationIPv4Address 1208 Description: The count of distinct destination IPv4 address values 1209 for Original Flows contributing to this Aggregated Flow. 1211 Abstract Data Type: unsigned32 1213 Data Type Semantics: totalCount 1215 ElementId: TBD7 1217 Status: Current 1219 7.3.5. distinctCountOfSourceIPv6Address 1221 Description: The count of distinct source IPv6 address values for 1222 Original Flows contributing to this Aggregated Flow. 1224 Abstract Data Type: unsigned64 1226 Data Type Semantics: totalCount 1228 ElementId: TBD8 1230 Status: Current 1232 7.3.6. distinctCountOfDestinationIPv6Address 1234 Description: The count of distinct destination IPv6 address values 1235 for Original Flows contributing to this Aggregated Flow. 1237 Abstract Data Type: unsigned64 1239 Data Type Semantics: totalCount 1241 ElementId: TBD9 1243 Status: Current 1245 7.4. Aggregate Counter Distribution Export 1247 When exporting counters distributed among Aggregated Flows, as 1248 described in Section 5.1.1, the Exporting Process MAY export an 1249 Aggregate Counter Distribution Option Record for each Template 1250 describing Aggregated Flow records; this Options Template is 1251 described below. It uses the valueDistributionMethod Information 1252 Element, also defined below. Since in many cases distribution is 1253 simple, accounting the counters from Contributing Flows to the first 1254 Interval to which they contribute, this is the default situation, for 1255 which no Aggregate Counter Distribution Record is necessary; 1256 Aggregate Counter Distribution Records are only applicable in more 1257 exotic situations, such as using an Aggregation Interval smaller than 1258 the durations of Original Flows. 1260 7.4.1. Aggregate Counter Distribution Options Template 1262 This Options Template defines the Aggregate Counter Distribution 1263 Record, which allows the binding of a value distribution method to a 1264 Template ID. The scope is the Template Id, whose uniqueness, per 1265 [I-D.ietf-ipfix-protocol-rfc5101bis], is local to the Transport 1266 Session and Observation Domain that generated the Template ID. This 1267 is used to signal to the Collecting Process how the counters were 1268 distributed. The fields are as below: 1270 +-------------------------+-----------------------------------------+ 1271 | IE | Description | 1272 +-------------------------+-----------------------------------------+ 1273 | templateId [scope] | The Template ID of the Template | 1274 | | defining the Aggregated Flows to which | 1275 | | this distribution option applies. This | 1276 | | Information Element MUST be defined as | 1277 | | a Scope Field. | 1278 | | | 1279 | valueDistributionMethod | The method used to distribute the | 1280 | | counters for the Aggregated Flows | 1281 | | defined by the associated Template. | 1282 +-------------------------+-----------------------------------------+ 1284 7.4.2. valueDistributionMethod Information Element 1286 Description: A description of the method used to distribute the 1287 counters from Contributing Flows into the Aggregated Flow records 1288 described by an associated scope, generally a Template. 
The 1289 method is deemed to apply to all the non-key Information Elements 1290 in the referenced scope for which value distribution is a valid 1291 operation; if the originalFlowsInitiated and/or 1292 originalFlowsCompleted Information Elements appear in the 1293 Template, they are not subject to this distribution method, as 1294 they each infer their own distribution method. This is intended 1295 to be a complete set of possible value distribution methods; it is 1296 encoded as follows: 1298 +-------+-----------------------------------------------------------+ 1299 | Value | Description | 1300 +-------+-----------------------------------------------------------+ 1301 | 0 | Unspecified: The counters for an Original Flow are | 1302 | | explicitly not distributed according to any other method | 1303 | | defined for this Information Element; use for arbitrary | 1304 | | distribution, or distribution algorithms not described by | 1305 | | any other codepoint. | 1306 | | --------------------------------------------------------- | 1307 | | | 1308 | 1 | Start Interval: The counters for an Original Flow are | 1309 | | added to the counters of the appropriate Aggregated Flow | 1310 | | containing the start time of the Original Flow. This | 1311 | | should be assumed the default if value distribution | 1312 | | information is not available at a Collecting Process for | 1313 | | an Aggregated Flow. | 1314 | | --------------------------------------------------------- | 1315 | | | 1316 | 2 | End Interval: The counters for an Original Flow are added | 1317 | | to the counters of the appropriate Aggregated Flow | 1318 | | containing the end time of the Original Flow. | 1319 | | --------------------------------------------------------- | 1320 | | | 1321 | 3 | Mid Interval: The counters for an Original Flow are added | 1322 | | to the counters of a single appropriate Aggregated Flow | 1323 | | containing some timestamp between start and end time of | 1324 | | the Original Flow. | 1325 | | --------------------------------------------------------- | 1326 | | | 1327 | 4 | Simple Uniform Distribution: Each counter for an Original | 1328 | | Flow is divided by the number of time intervals the | 1329 | | Original Flow covers (i.e., of appropriate Aggregated | 1330 | | Flows sharing the same Flow Key), and this number is | 1331 | | added to each corresponding counter in each Aggregated | 1332 | | Flow. | 1333 | | --------------------------------------------------------- | 1334 | | | 1335 | 5 | Proportional Uniform Distribution: Each counter for an | 1336 | | Original Flow is divided by the number of time units the | 1337 | | Original Flow covers, to derive a mean count rate. This | 1338 | | mean count rate is then multiplied by the number of time | 1339 | | units in the intersection of the duration of the Original | 1340 | | Flow and the time interval of each Aggregated Flow. This | 1341 | | is like simple uniform distribution, but accounts for the | 1342 | | fractional portions of a time interval covered by an | 1343 | | Original Flow in the first and last time interval. | 1344 | | --------------------------------------------------------- | 1345 | | | 1346 | 6 | Simulated Process: Each counter of the Original Flow is | 1347 | | distributed among the intervals of the Aggregated Flows | 1348 | | according to some function the Intermediate Aggregation | 1349 | | Process uses based upon properties of Flows presumed to | 1350 | | be like the Original Flow. 
This is essentially an | 1351 | | assertion that the Intermediate Aggregation Process has | 1352 | | no direct packet timing information but is nevertheless | 1353 | | not using one of the other simpler distribution methods. | 1354 | | The Intermediate Aggregation Process specifically makes | 1355 | | no assertion as to the correctness of the simulation. | 1356 | | --------------------------------------------------------- | 1357 | | | 1358 | 7 | Direct: The Intermediate Aggregation Process has access | 1359 | | to the original packet timings from the packets making up | 1360 | | the Original Flow, and uses these to distribute or | 1361 | | recalculate the counters. | 1362 +-------+-----------------------------------------------------------+ 1363 Abstract Data Type: unsigned8 1365 ElementId: TBD10 1367 Status: Current 1369 8. Examples 1371 In these examples, the same data, described by the same Template, 1372 will be aggregated multiple different ways; this illustrates the 1373 various different functions which could be implemented by 1374 Intermediate Aggregation Processes. Templates are shown in IESpec 1375 format as introduced in [I-D.ietf-ipfix-ie-doctors]. The source data 1376 format is a simplified flow: timestamps, traditional 5-tuple, and 1377 octet count; the flow key fields are the 5-tuple. The Template is 1378 shown in Figure 9. 1380 flowStartMilliseconds(152)[8] 1381 flowEndMilliseconds(153)[8] 1382 sourceIPv4Address(8)[4]{key} 1383 destinationIPv4Address(12)[4]{key} 1384 sourceTransportPort(7)[2]{key} 1385 destinationTransportPort(11)[2]{key} 1386 protocolIdentifier(4)[1]{key} 1387 octetDeltaCount(1)[8] 1389 Figure 9: Input Template for examples 1391 The data records given as input to the examples in this section are 1392 shown below; timestamps are given in H:MM:SS.sss format. In this and 1393 subsequent tables, flowStartMilliseconds is shown in H:MM:SS.sss 1394 format as 'start time', flowEndMilliseconds is shown in H:MM:SS.sss 1395 format as 'end time', sourceIPv4Address is shown as 'source ip4' with 1396 the following 'port' representing sourceTransportPort, 1397 destinationIPv4Address is shown as 'dest ip4' with the following 1398 'port' representing destinationTransportPort, protocolIdentifier is 1399 shown as 'pt', and octetDeltaCount as 'oct'. 
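As a non-normative aid to experimentation, the timestamp notation and the simplified record layout used in these tables can be represented directly in a general-purpose language. The following Python sketch (names are illustrative only, not defined by this document) converts the H:MM:SS.sss notation to milliseconds and holds one record of the data shown in Figure 10 below.

    # Non-normative helper for the examples in this section: parse the
    # H:MM:SS.sss notation used in the figures into milliseconds and
    # hold one simplified flow record.

    from collections import namedtuple

    SimpleFlow = namedtuple("SimpleFlow", [
        "start_ms", "end_ms", "src_ip", "src_port",
        "dst_ip", "dst_port", "proto", "octets"])

    def parse_hms(text):
        """Convert 'H:MM:SS.sss' to integer milliseconds since midnight."""
        hours, minutes, seconds = text.split(":")
        return int(round(
            (int(hours) * 3600 + int(minutes) * 60 + float(seconds)) * 1000))

    # The first record of Figure 10:
    flow = SimpleFlow(parse_hms("9:00:00.138"), parse_hms("9:00:00.138"),
                      "192.0.2.2", 47113, "192.0.2.131", 53, 17, 119)
    print(flow.src_ip, flow.octets)   # 192.0.2.2 119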
1401 start time |end time |source ip4 |port |dest ip4 |port|pt| oct 1402 9:00:00.138 9:00:00.138 192.0.2.2 47113 192.0.2.131 53 17 119 1403 9:00:03.246 9:00:03.246 192.0.2.2 22153 192.0.2.131 53 17 83 1404 9:00:00.478 9:00:03.486 192.0.2.2 52420 198.51.100.2 443 6 1637 1405 9:00:07.172 9:00:07.172 192.0.2.3 56047 192.0.2.131 53 17 111 1406 9:00:07.309 9:00:14.861 192.0.2.3 41183 198.51.100.67 80 6 16838 1407 9:00:03.556 9:00:19.876 192.0.2.2 17606 198.51.100.68 80 6 11538 1408 9:00:25.210 9:00:25.210 192.0.2.3 47113 192.0.2.131 53 17 119 1409 9:00:26.358 9:00:30.198 192.0.2.3 48458 198.51.100.133 80 6 2973 1410 9:00:29.213 9:01:00.061 192.0.2.4 61295 198.51.100.2 443 6 8350 1411 9:04:00.207 9:04:04.431 203.0.113.3 41256 198.51.100.133 80 6 778 1412 9:03:59.624 9:04:06.984 203.0.113.3 51662 198.51.100.3 80 6 883 1413 9:00:30.532 9:06:15.402 192.0.2.2 37581 198.51.100.2 80 6 15420 1414 9:06:56.813 9:06:59.821 203.0.113.3 52572 198.51.100.2 443 6 1637 1415 9:06:30.565 9:07:00.261 203.0.113.3 49914 198.51.100.133 80 6 561 1416 9:06:55.160 9:07:05.208 192.0.2.2 50824 198.51.100.2 443 6 1899 1417 9:06:49.322 9:07:05.322 192.0.2.3 34597 198.51.100.3 80 6 1284 1418 9:07:05.849 9:07:09.625 203.0.113.3 58907 198.51.100.4 80 6 2670 1419 9:10:45.161 9:10:45.161 192.0.2.4 22478 192.0.2.131 53 17 75 1420 9:10:45.209 9:11:01.465 192.0.2.4 49513 198.51.100.68 80 6 3374 1421 9:10:57.094 9:11:00.614 192.0.2.4 64832 198.51.100.67 80 6 138 1422 9:10:59.770 9:11:02.842 192.0.2.3 60833 198.51.100.69 443 6 2325 1423 9:02:18.390 9:13:46.598 203.0.113.3 39586 198.51.100.17 80 6 11200 1424 9:13:53.933 9:14:06.605 192.0.2.2 19638 198.51.100.3 80 6 2869 1425 9:13:02.864 9:14:08.720 192.0.2.3 40429 198.51.100.4 80 6 18289 1427 Figure 10: Input data for examples 1429 8.1. Traffic Time-Series per Source 1431 Aggregating flows by source IP address in time series (i.e., with a 1432 regular interval) can be used in subsequent heavy-hitter analysis and 1433 as a source parameter for statistical anomaly detection techniques. 1434 Here, the Intermediate Aggregation Process imposes an interval, 1435 aggregates the key to remove all key fields other than the source IP 1436 address, then combines the result into a stream of Aggregated Flows. 1437 The imposed interval of 5 minutes is longer than the majority of 1438 flows; for those flows crossing interval boundaries, the entire flow 1439 is accounted to the interval containing the start time of the flow. 1441 In this example the Partially Aggregated Flows after each conceptual 1442 operation in the Intermediate Aggregation Process are shown. These 1443 are meant to be illustrative of the conceptual operations only, and 1444 not to suggest an implementation (indeed, the example shown here 1445 would not necessarily be the most efficient method for performing 1446 these operations). Subsequent examples will omit the Partially 1447 Aggregated Flows for brevity. 1449 The input to this process could be any Flow Record containing a 1450 source IP address and octet counter; consider for this example the 1451 Template and data from the introduction. The Intermediate 1452 Aggregation Process would then output records containing just 1453 timestamps, source IP, and octetDeltaCount, as in Figure 11. 1455 flowStartMilliseconds(152)[8] 1456 flowEndMilliseconds(153)[8] 1457 sourceIPv4Address(8)[4] 1458 octetDeltaCount(1)[8] 1460 Figure 11: Output Template for time series per source 1462 Assume the goal is to get 5-minute (300s) time series of octet counts 1463 per source IP address. 
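As a non-normative illustration, one way to compute such a time series from simplified records like those in Figure 10 is sketched below in Python, using the default Start Interval policy described in Section 5.1.1; the function and field names are assumptions made for this sketch only, not an implementation requirement.

    # Non-normative sketch: 300-second time series of octet counts per
    # source address; each Original Flow is accounted to the interval
    # containing its start time (the Start Interval policy).

    from collections import defaultdict

    INTERVAL_MS = 300 * 1000

    def aggregate_per_source(flows):
        """flows: iterable of (start_ms, end_ms, src_ip, octets) tuples."""
        totals = defaultdict(int)
        for start_ms, end_ms, src_ip, octets in flows:
            interval_start = (start_ms // INTERVAL_MS) * INTERVAL_MS
            # Key aggregation: only the interval and source address remain;
            # aggregate combination is the summation itself.
            totals[(interval_start, src_ip)] += octets
        return [(start, start + INTERVAL_MS, src, octets)
                for (start, src), octets in sorted(totals.items())]

    # The first two records of Figure 10, as milliseconds since midnight:
    sample = [(32400138, 32400138, "192.0.2.2", 119),
              (32403246, 32403246, "192.0.2.2", 83)]
    print(aggregate_per_source(sample))
    # [(32400000, 32700000, '192.0.2.2', 202)]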
The aggregation operations would then be 1464 arranged as in Figure 12. 1466 Original Flows 1467 | 1468 V 1469 +-----------------------+ 1470 | interval distribution | 1471 | * impose uniform | 1472 | 300s time interval | 1473 +-----------------------+ 1474 | 1475 | Partially Aggregated Flows 1476 V 1477 +------------------------+ 1478 | key aggregation | 1479 | * reduce key to only | 1480 | sourceIPv4Address | 1481 +------------------------+ 1482 | 1483 | Partially Aggregated Flows 1484 V 1485 +-------------------------+ 1486 | aggregate combination | 1487 | * sum octetDeltaCount | 1488 +-------------------------+ 1489 | 1490 V 1491 Aggregated Flows 1493 Figure 12: Aggregation operations for time series per source 1495 After applying the interval distribution step to the source data in 1496 Figure 10, only the time intervals have changed; the Partially 1497 Aggregated flows are shown in Figure 13. Note that interval 1498 distribution follows the default Start Interval policy; that is, the 1499 entire flow is accounted to the interval containing the flow's start 1500 time. 1502 start time |end time |source ip4 |port |dest ip4 |port|pt| oct 1503 9:00:00.000 9:05:00.000 192.0.2.2 47113 192.0.2.131 53 17 119 1504 9:00:00.000 9:05:00.000 192.0.2.2 22153 192.0.2.131 53 17 83 1505 9:00:00.000 9:05:00.000 192.0.2.2 52420 198.51.100.2 443 6 1637 1506 9:00:00.000 9:05:00.000 192.0.2.3 56047 192.0.2.131 53 17 111 1507 9:00:00.000 9:05:00.000 192.0.2.3 41183 198.51.100.67 80 6 16838 1508 9:00:00.000 9:05:00.000 192.0.2.2 17606 198.51.100.68 80 6 11538 1509 9:00:00.000 9:05:00.000 192.0.2.3 47113 192.0.2.131 53 17 119 1510 9:00:00.000 9:05:00.000 192.0.2.3 48458 198.51.100.133 80 6 2973 1511 9:00:00.000 9:05:00.000 192.0.2.4 61295 198.51.100.2 443 6 8350 1512 9:00:00.000 9:05:00.000 203.0.113.3 41256 198.51.100.133 80 6 778 1513 9:00:00.000 9:05:00.000 203.0.113.3 51662 198.51.100.3 80 6 883 1514 9:00:00.000 9:05:00.000 192.0.2.2 37581 198.51.100.2 80 6 15420 1515 9:00:00.000 9:05:00.000 203.0.113.3 39586 198.51.100.17 80 6 11200 1516 9:05:00.000 9:10:00.000 203.0.113.3 52572 198.51.100.2 443 6 1637 1517 9:05:00.000 9:10:00.000 203.0.113.3 49914 197.51.100.133 80 6 561 1518 9:05:00.000 9:10:00.000 192.0.2.2 50824 198.51.100.2 443 6 1899 1519 9:05:00.000 9:10:00.000 192.0.2.3 34597 198.51.100.3 80 6 1284 1520 9:05:00.000 9:10:00.000 203.0.113.3 58907 198.51.100.4 80 6 2670 1521 9:10:00.000 9:15:00.000 192.0.2.4 22478 192.0.2.131 53 17 75 1522 9:10:00.000 9:15:00.000 192.0.2.4 49513 198.51.100.68 80 6 3374 1523 9:10:00.000 9:15:00.000 192.0.2.4 64832 198.51.100.67 80 6 138 1524 9:10:00.000 9:15:00.000 192.0.2.3 60833 198.51.100.69 443 6 2325 1525 9:10:00.000 9:15:00.000 192.0.2.2 19638 198.51.100.3 80 6 2869 1526 9:10:00.000 9:15:00.000 192.0.2.3 40429 198.51.100.4 80 6 18289 1528 Figure 13: Interval imposition for time series per source 1530 After the key aggregation step, all Flow Keys except the source IP 1531 address have been discarded, as shown in Figure 14. This leaves 1532 duplicate Partially Aggregated flows to be combined in the final 1533 operation. 
1535 start time |end time |source ip4 |octets 1536 9:00:00.000 9:05:00.000 192.0.2.2 119 1537 9:00:00.000 9:05:00.000 192.0.2.2 83 1538 9:00:00.000 9:05:00.000 192.0.2.2 1637 1539 9:00:00.000 9:05:00.000 192.0.2.3 111 1540 9:00:00.000 9:05:00.000 192.0.2.3 16838 1541 9:00:00.000 9:05:00.000 192.0.2.2 11538 1542 9:00:00.000 9:05:00.000 192.0.2.3 119 1543 9:00:00.000 9:05:00.000 192.0.2.3 2973 1544 9:00:00.000 9:05:00.000 192.0.2.4 8350 1545 9:00:00.000 9:05:00.000 203.0.113.3 778 1546 9:00:00.000 9:05:00.000 203.0.113.3 883 1547 9:00:00.000 9:05:00.000 192.0.2.2 15420 1548 9:00:00.000 9:05:00.000 203.0.113.3 11200 1549 9:05:00.000 9:10:00.000 203.0.113.3 1637 1550 9:05:00.000 9:10:00.000 203.0.113.3 561 1551 9:05:00.000 9:10:00.000 192.0.2.2 1899 1552 9:05:00.000 9:10:00.000 192.0.2.3 1284 1553 9:05:00.000 9:10:00.000 203.0.113.3 2670 1554 9:10:00.000 9:15:00.000 192.0.2.4 75 1555 9:10:00.000 9:15:00.000 192.0.2.4 3374 1556 9:10:00.000 9:15:00.000 192.0.2.4 138 1557 9:10:00.000 9:15:00.000 192.0.2.3 2325 1558 9:10:00.000 9:15:00.000 192.0.2.2 2869 1559 9:10:00.000 9:15:00.000 192.0.2.3 18289 1561 Figure 14: Key aggregation for time series per source 1563 Aggregate combination sums the counters per key and interval; the 1564 summations of the first two keys and intervals are shown in detail in 1565 Figure 15. 1567 start time |end time |source ip4 |octets 1568 9:00:00.000 9:05:00.000 192.0.2.2 119 1569 9:00:00.000 9:05:00.000 192.0.2.2 83 1570 9:00:00.000 9:05:00.000 192.0.2.2 1637 1571 9:00:00.000 9:05:00.000 192.0.2.2 11538 1572 + 9:00:00.000 9:05:00.000 192.0.2.2 15420 1573 ----- 1574 = 9:00:00.000 9:05:00.000 192.0.2.2 28797 1576 9:00:00.000 9:05:00.000 192.0.2.3 111 1577 9:00:00.000 9:05:00.000 192.0.2.3 16838 1578 9:00:00.000 9:05:00.000 192.0.2.3 119 1579 + 9:00:00.000 9:05:00.000 192.0.2.3 2973 1580 ----- 1581 = 9:00:00.000 9:05:00.000 192.0.2.3 20041 1582 Figure 15: Summation during aggregate combination 1584 Applying this to each set of Partially Aggregated Flows to produce 1585 the final Aggregated Flows shown in Figure 16 to be exported by the 1586 Template in Figure 11. 1588 start time |end time |source ip4 |octets 1589 9:00:00.000 9:05:00.000 192.0.2.2 28797 1590 9:00:00.000 9:05:00.000 192.0.2.3 20041 1591 9:00:00.000 9:05:00.000 192.0.2.4 8350 1592 9:00:00.000 9:05:00.000 203.0.113.3 12861 1593 9:05:00.000 9:10:00.000 192.0.2.2 1899 1594 9:05:00.000 9:10:00.000 192.0.2.3 1284 1595 9:05:00.000 9:10:00.000 203.0.113.3 4868 1596 9:10:00.000 9:15:00.000 192.0.2.2 2869 1597 9:10:00.000 9:15:00.000 192.0.2.3 20614 1598 9:10:00.000 9:15:00.000 192.0.2.4 3587 1600 Figure 16: Aggregated Flows for time series per source 1602 8.2. Core Traffic Matrix 1604 Aggregating flows by source and destination autonomous system number 1605 in time series is used to generate core traffic matrices. The core 1606 traffic matrix provides a view of the state of the routes within a 1607 network, and can be used for long-term planning of changes to network 1608 design based on traffic demand. Here, imposed time intervals are 1609 generally much longer than active flow timeouts. The traffic matrix 1610 is reported in terms of octets, packets, and flows, as each of these 1611 values may have a subtly different effect on capacity planning. 1613 This example demonstrates key aggregation using derived keys and 1614 Original Flow counting. 
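Original Flow counting, as discussed in Section 5.2.1 and exported with the Information Elements of Section 7.2, could be maintained during such an aggregation roughly as in the following non-normative Python sketch; the record layout and names are illustrative assumptions only.

    # Non-normative sketch: Original Flow counters per aggregate.
    # originalFlowsInitiated and originalFlowsCompleted are conservative
    # (they sum correctly on re-aggregation); originalFlowsPresent is not.

    from collections import defaultdict

    def count_original_flows(flows, interval_ms):
        """flows: iterable of (start_ms, end_ms, key) tuples, where key is
        the aggregated Flow Key (e.g., a source/destination ASN pair)."""
        counts = defaultdict(lambda: {"initiated": 0, "completed": 0,
                                      "present": 0})
        for start_ms, end_ms, key in flows:
            first = (start_ms // interval_ms) * interval_ms
            last = (end_ms // interval_ms) * interval_ms
            counts[(first, key)]["initiated"] += 1  # first packet here
            counts[(last, key)]["completed"] += 1   # last packet here
            t = first
            while t <= last:                        # present in every
                counts[(t, key)]["present"] += 1    # interval it overlaps
                t += interval_ms
        return counts

    # One flow spanning three 5-minute intervals:
    counts = count_original_flows([(0, 650000, ("AS64496", "AS64498"))],
                                  interval_ms=300000)
    print(sum(c["initiated"] for c in counts.values()))  # 1 (conservative)
    print(sum(c["present"] for c in counts.values()))    # 3 (non-conservative)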
While some Original Flows may be generated 1615 by Exporting Processes on forwarding devices, and therefore contain 1616 the bgpSourceAsNumber and bgpDestinationAsNumber Information 1617 Elements, Original Flows from Exporting Processes on dedicated 1618 measurement devices without routing data contain only a 1619 destinationIPv[46]Address. For these flows, the Mediator must look 1620 up a next hop AS from an IP-to-AS table, replacing source and 1621 destination addresses with AS numbers. The table used in this 1622 example is shown in Figure 17. (Note that due to limited example 1623 address space, in this example we ignore the common practice of 1624 routing only blocks of /24 or larger). 1626 prefix |ASN 1627 192.0.2.0/25 64496 1628 192.0.2.128/25 64497 1629 198.51.100/24 64498 1630 203.0.113.0/24 64499 1632 Figure 17: Example Autonomous system number map 1634 The Template for Aggregated Flows produced by this example is shown 1635 in Figure 18. 1637 flowStartMilliseconds(152)[8] 1638 flowEndMilliseconds(153)[8] 1639 bgpSourceAsNumber(16)[4] 1640 bgpDestinationAsNumber(17)[4] 1641 octetDeltaCount(1)[8] 1643 Figure 18: Output Template for traffic matrix 1645 Assume the goal is to get 60-minute time series of octet counts per 1646 source/destination ASN pair. The aggregation operations would then 1647 be arranged as in Figure 19. 1649 Original Flows 1650 | 1651 V 1652 +-----------------------+ 1653 | interval distribution | 1654 | * impose uniform | 1655 | 3600s time interval | 1656 +-----------------------+ 1657 | 1658 | Partially Aggregated Flows 1659 V 1660 +------------------------+ 1661 | key aggregation | 1662 | * reduce key to only | 1663 | sourceIPv4Address + | 1664 | destIPv4Address | 1665 +------------------------+ 1666 | 1667 V 1668 +------------------------+ 1669 | key aggregation | 1670 | * replace addresses | 1671 | with ASN from map | 1672 +------------------------+ 1673 | 1674 | Partially Aggregated Flows 1675 V 1676 +-------------------------+ 1677 | aggregate combination | 1678 | * sum octetDeltaCount | 1679 +-------------------------+ 1680 | 1681 V 1682 Aggregated Flows 1684 Figure 19: Aggregation operations for traffic matrix 1686 After applying the interval distribution step to the source data in 1687 Figure 10,; the Partially Aggregated flows are shown in Figure 20. 1688 Note that the flows are identical to those in interval distribution 1689 step in the previous example, except the chosen interval (1 hour, 1690 3600 seconds) is different; therefore, all the flows fit into a 1691 single interval. 
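Taken together, the address-to-ASN replacement and the per-pair summation might be sketched as follows in Python, using longest-prefix matching against the map in Figure 17. This is a non-normative illustration with assumed names, not a required implementation of the Intermediate Aggregation Process.

    # Non-normative sketch: replace addresses with ASNs by longest-prefix
    # match against the map of Figure 17, then sum octets per ASN pair.

    import ipaddress
    from collections import defaultdict

    AS_MAP = [(ipaddress.ip_network("192.0.2.0/25"), 64496),
              (ipaddress.ip_network("192.0.2.128/25"), 64497),
              (ipaddress.ip_network("198.51.100.0/24"), 64498),
              (ipaddress.ip_network("203.0.113.0/24"), 64499)]

    def lookup_asn(address):
        """Return the ASN of the longest matching prefix, or None."""
        addr = ipaddress.ip_address(address)
        matches = [(net.prefixlen, asn) for net, asn in AS_MAP if addr in net]
        return max(matches)[1] if matches else None

    def traffic_matrix(flows):
        """flows: iterable of (src_ip, dst_ip, octets) tuples."""
        matrix = defaultdict(int)
        for src_ip, dst_ip, octets in flows:
            matrix[(lookup_asn(src_ip), lookup_asn(dst_ip))] += octets
        return dict(matrix)

    print(traffic_matrix([("192.0.2.2", "192.0.2.131", 119),
                          ("203.0.113.3", "198.51.100.2", 1637)]))
    # {(64496, 64497): 119, (64499, 64498): 1637}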
1693 start time |end time |source ip4 |port |dest ip4 |port|pt| oct 1694 9:00:00 10:00:00 192.0.2.2 47113 192.0.2.131 53 17 119 1695 9:00:00 10:00:00 192.0.2.2 22153 192.0.2.131 53 17 83 1696 9:00:00 10:00:00 192.0.2.2 52420 198.51.100.2 443 6 1637 1697 9:00:00 10:00:00 192.0.2.3 56047 192.0.2.131 53 17 111 1698 9:00:00 10:00:00 192.0.2.3 41183 198.51.100.67 80 6 16838 1699 9:00:00 10:00:00 192.0.2.2 17606 198.51.100.68 80 6 11538 1700 9:00:00 10:00:00 192.0.2.3 47113 192.0.2.131 53 17 119 1701 9:00:00 10:00:00 192.0.2.3 48458 198.51.100.133 80 6 2973 1702 9:00:00 10:00:00 192.0.2.4 61295 198.51.100.2 443 6 8350 1703 9:00:00 10:00:00 203.0.113.3 41256 198.51.100.133 80 6 778 1704 9:00:00 10:00:00 203.0.113.3 51662 198.51.100.3 80 6 883 1705 9:00:00 10:00:00 192.0.2.2 37581 198.51.100.2 80 6 15420 1706 9:00:00 10:00:00 203.0.113.3 52572 198.51.100.2 443 6 1637 1707 9:00:00 10:00:00 203.0.113.3 49914 197.51.100.133 80 6 561 1708 9:00:00 10:00:00 192.0.2.2 50824 198.51.100.2 443 6 1899 1709 9:00:00 10:00:00 192.0.2.3 34597 198.51.100.3 80 6 1284 1710 9:00:00 10:00:00 203.0.113.3 58907 198.51.100.4 80 6 2670 1711 9:00:00 10:00:00 192.0.2.4 22478 192.0.2.131 53 17 75 1712 9:00:00 10:00:00 192.0.2.4 49513 198.51.100.68 80 6 3374 1713 9:00:00 10:00:00 192.0.2.4 64832 198.51.100.67 80 6 138 1714 9:00:00 10:00:00 192.0.2.3 60833 198.51.100.69 443 6 2325 1715 9:00:00 10:00:00 203.0.113.3 39586 198.51.100.17 80 6 11200 1716 9:00:00 10:00:00 192.0.2.2 19638 198.51.100.3 80 6 2869 1717 9:00:00 10:00:00 192.0.2.3 40429 198.51.100.4 80 6 18289 1719 Figure 20: Interval imposition for traffic matrix 1721 The next steps are to discard irrelevant key fields and to replace 1722 the source and destination addresses with source and destination AS 1723 numbers in the map; the results of these key aggregation steps are 1724 shown in Figure 21. 1726 start time |end time |source ASN |dest ASN |octets 1727 9:00:00 10:00:00 AS64496 AS64497 119 1728 9:00:00 10:00:00 AS64496 AS64497 83 1729 9:00:00 10:00:00 AS64496 AS64498 1637 1730 9:00:00 10:00:00 AS64496 AS64497 111 1731 9:00:00 10:00:00 AS64496 AS64498 16838 1732 9:00:00 10:00:00 AS64496 AS64498 11538 1733 9:00:00 10:00:00 AS64496 AS64497 119 1734 9:00:00 10:00:00 AS64496 AS64498 2973 1735 9:00:00 10:00:00 AS64496 AS64498 8350 1736 9:00:00 10:00:00 AS64499 AS64498 778 1737 9:00:00 10:00:00 AS64499 AS64498 883 1738 9:00:00 10:00:00 AS64496 AS64498 15420 1739 9:00:00 10:00:00 AS64499 AS64498 1637 1740 9:00:00 10:00:00 AS64499 AS64498 561 1741 9:00:00 10:00:00 AS64496 AS64498 1899 1742 9:00:00 10:00:00 AS64496 AS64498 1284 1743 9:00:00 10:00:00 AS64499 AS64498 2670 1744 9:00:00 10:00:00 AS64496 AS64497 75 1745 9:00:00 10:00:00 AS64496 AS64498 3374 1746 9:00:00 10:00:00 AS64496 AS64498 138 1747 9:00:00 10:00:00 AS64496 AS64498 2325 1748 9:00:00 10:00:00 AS64499 AS64498 11200 1749 9:00:00 10:00:00 AS64496 AS64498 2869 1750 9:00:00 10:00:00 AS64496 AS64498 18289 1752 Figure 21: Key aggregation for traffic matrix: reduction and 1753 replacement 1755 Finally, aggregate combination sums the counters per key and 1756 interval. The resulting Aggregated Flows containing the traffic 1757 matrix, shown in Figure 22, are then exported using the Template in 1758 Figure 18. Note that these aggregated flows represent a sparse 1759 matrix: AS pairs for which no traffic was received have no 1760 corresponding record in the output. 
1762 start time |end time |source ASN |dest ASN |octets 1763 9:00:00 10:00:00 AS64496 AS64497 507 1764 9:00:00 10:00:00 AS64496 AS64498 86934 1765 9:00:00 10:00:00 AS64499 AS64498 17729

1767 Figure 22: Aggregated Flows for traffic matrix

1769 The output of this operation is suitable for re-aggregation: that is, 1770 traffic matrices from single links or Observation Points can be 1771 aggregated through the same interval imposition and aggregate 1772 combination steps in order to build a traffic matrix for an entire 1773 network.

1775 8.3. Distinct Source Count per Destination Endpoint

1777 Aggregating flows by destination address and port, and counting 1778 distinct sources aggregated away, can be used as part of passive 1779 service inventory and host characterization. This example shows 1780 aggregation as an analysis technique, performed on source data stored 1781 in an IPFIX File. As the Transport Session in this File is bounded, 1782 removal of all timestamp information allows summarization of the 1783 entire time interval contained within the File. Removal of 1784 timing information during interval imposition is equivalent to an 1785 infinitely long imposed time interval. This demonstrates both how 1786 infinite intervals work, and how distinct counters work. The 1787 aggregation operations are summarized in Figure 23.

1789 Original Flows 1790 | 1791 V 1792 +-----------------------+ 1793 | interval distribution | 1794 | * discard timestamps | 1795 +-----------------------+ 1796 | 1797 | Partially Aggregated Flows 1798 V 1799 +----------------------------+ 1800 | value aggregation | 1801 | * discard octetDeltaCount | 1802 +----------------------------+ 1803 | 1804 | Partially Aggregated Flows 1805 V 1806 +----------------------------+ 1807 | key aggregation | 1808 | * reduce key to only | 1809 | destIPv4Address + | 1810 | destTransportPort, | 1811 | * count distinct sources | 1812 +----------------------------+ 1813 | 1814 | Partially Aggregated Flows 1815 V 1816 +----------------------------------------------+ 1817 | aggregate combination | 1818 | * no-op (distinct sources already counted) | 1819 +----------------------------------------------+ 1820 | 1821 V 1822 Aggregated Flows

1824 Figure 23: Aggregation operations for source count

1826 The Template for Aggregated Flows produced by this example is shown 1827 in Figure 24.

1829 destinationIPv4Address(12)[4] 1830 destinationTransportPort(11)[2] 1831 distinctCountOfSourceIPAddress(TBD4)[8]

1833 Figure 24: Output Template for source count

1835 Interval distribution, in this case, merely discards the timestamp 1836 information from the Original Flows in Figure 10, and as such is not 1837 shown. Likewise, the value aggregation step simply discards the 1838 octetDeltaCount value field. The key aggregation step reduces the 1839 key to the destinationIPv4Address and destinationTransportPort, 1840 counting the distinct source addresses. Since this is essentially 1841 the output of this aggregation function, the aggregate combination 1842 operation is a no-op; the resulting Aggregated Flows are shown in 1843 Figure 25.

1845 dest ip4 |port |dist src 1846 192.0.2.131 53 3 1847 198.51.100.2 80 1 1848 198.51.100.2 443 3 1849 198.51.100.67 80 2 1850 198.51.100.68 80 2 1851 198.51.100.133 80 2 1852 198.51.100.3 80 3 1853 198.51.100.4 80 2 1854 198.51.100.17 80 1 1855 198.51.100.69 443 1

1857 Figure 25: Aggregated Flows for source count
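The distinct counting in this example is exact; the following non-normative Python sketch shows the operation over records like those in Figure 10 (names are illustrative only). An Intermediate Aggregation Process with tighter memory constraints might instead substitute the approximate methods discussed in Section 6.1.

    # Non-normative sketch: exact distinct source counting per destination
    # endpoint, as exported via distinctCountOfSourceIPAddress.

    from collections import defaultdict

    def distinct_sources(flows):
        """flows: iterable of (src_ip, dst_ip, dst_port) tuples."""
        sources = defaultdict(set)
        for src_ip, dst_ip, dst_port in flows:
            # Key aggregation reduces the key to (destination address, port);
            # the set keeps only what is needed for the distinct count.
            sources[(dst_ip, dst_port)].add(src_ip)
        return {key: len(srcs) for key, srcs in sources.items()}

    counts = distinct_sources([("192.0.2.2", "192.0.2.131", 53),
                               ("192.0.2.3", "192.0.2.131", 53),
                               ("192.0.2.3", "192.0.2.131", 53),  # repeat
                               ("192.0.2.4", "192.0.2.131", 53)])
    print(counts[("192.0.2.131", 53)])   # 3, as in the first row of Figure 25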
1859 8.4. Traffic Time-Series per Source with Counter Distribution

1861 Returning to the example in Section 8.1, note that our source data 1862 contains some flows with durations longer than the imposed interval 1863 of five minutes. The default method for dealing with such flows is 1864 to account them to the interval containing the flow's start time.

1866 In this example, the same data is aggregated using the same 1867 arrangement of operations and the same output Template as in 1868 Section 8.1, but using a different counter distribution policy, 1869 Simple Uniform Distribution, as described in Section 5.1.1. In order 1870 to do this, the Exporting Process first exports the Aggregate Counter 1871 Distribution Options Template, as in Figure 26.

1873 templateId(12)[2]{scope} 1874 valueDistributionMethod(TBD10)[1]

1876 Figure 26: Aggregate Counter Distribution Options Template

1878 This Template is followed by an Aggregate Counter Distribution Record 1879 described by this Template; assuming the output Template in Figure 11 1880 has ID 257, this record would appear as in Figure 27.

1882 template ID | value distribution method 1883 257 4 (simple uniform)

1884 Figure 27: Aggregate Counter Distribution Record

1886 Following metadata export, the aggregation steps proceed as before. 1887 However, two long flows are distributed across multiple intervals in 1888 the interval imposition step, as indicated with "*" in Figure 28. 1889 Note the uneven distribution of the three-interval, 11200-octet flow 1890 into three Partially Aggregated Flows of 3733, 3733, and 3734 octets; 1891 this ensures no cumulative error is injected by the interval 1892 distribution step.

1894 start time |end time |source ip4 |port |dest ip4 |port|pt| oct 1895 9:00:00.000 9:05:00.000 192.0.2.2 47113 192.0.2.131 53 17 119 1896 9:00:00.000 9:05:00.000 192.0.2.2 22153 192.0.2.131 53 17 83 1897 9:00:00.000 9:05:00.000 192.0.2.2 52420 198.51.100.2 443 6 1637 1898 9:00:00.000 9:05:00.000 192.0.2.3 56047 192.0.2.131 53 17 111 1899 9:00:00.000 9:05:00.000 192.0.2.3 41183 198.51.100.67 80 6 16838 1900 9:00:00.000 9:05:00.000 192.0.2.2 17606 198.51.100.68 80 6 11538 1901 9:00:00.000 9:05:00.000 192.0.2.3 47113 192.0.2.131 53 17 119 1902 9:00:00.000 9:05:00.000 192.0.2.3 48458 198.51.100.133 80 6 2973 1903 9:00:00.000 9:05:00.000 192.0.2.4 61295 198.51.100.2 443 6 8350 1904 9:00:00.000 9:05:00.000 203.0.113.3 41256 198.51.100.133 80 6 778 1905 9:00:00.000 9:05:00.000 203.0.113.3 51662 198.51.100.3 80 6 883 1906 9:00:00.000 9:05:00.000 192.0.2.2 37581 198.51.100.2 80 6 7710* 1907 9:00:00.000 9:05:00.000 203.0.113.3 39586 198.51.100.17 80 6 3733* 1908 9:05:00.000 9:10:00.000 203.0.113.3 52572 198.51.100.2 443 6 1637 1909 9:05:00.000 9:10:00.000 203.0.113.3 49914 198.51.100.133 80 6 561 1910 9:05:00.000 9:10:00.000 192.0.2.2 50824 198.51.100.2 443 6 1899 1911 9:05:00.000 9:10:00.000 192.0.2.3 34597 198.51.100.3 80 6 1284 1912 9:05:00.000 9:10:00.000 203.0.113.3 58907 198.51.100.4 80 6 2670 1913 9:05:00.000 9:10:00.000 192.0.2.2 37581 198.51.100.2 80 6 7710* 1914 9:05:00.000 9:10:00.000 203.0.113.3 39586 198.51.100.17 80 6 3733* 1915 9:10:00.000 9:15:00.000 192.0.2.4 22478 192.0.2.131 53 17 75 1916 9:10:00.000 9:15:00.000 192.0.2.4 49513 198.51.100.68 80 6 3374 1917 9:10:00.000 9:15:00.000 192.0.2.4 64832 198.51.100.67 80 6 138 1918 9:10:00.000 9:15:00.000 192.0.2.3 60833 198.51.100.69 443 6 2325 1919 9:10:00.000 9:15:00.000 192.0.2.2 19638 198.51.100.3 80 6 2869 1920 9:10:00.000 9:15:00.000 192.0.2.3 40429 198.51.100.4 80 6 18289 1921
9:10:00.000 9:15:00.000 203.0.113.3 39586 198.51.100.17 80 6 3734* 1923 Figure 28: Distributed interval imposition for time series per source 1925 Subsequent steps are as in Section 8.1; the results, to be exported 1926 using the Template shown in Figure 11, are shown in Figure 29, with 1927 Aggregated Flows differing from the example in Section 8.1 indicated 1928 by "*". 1930 start time |end time |source ip4 |octets 1931 9:00:00.000 9:05:00.000 192.0.2.2 21087* 1932 9:00:00.000 9:05:00.000 192.0.2.3 20041 1933 9:00:00.000 9:05:00.000 192.0.2.4 8350 1934 9:00:00.000 9:05:00.000 203.0.113.3 5394* 1935 9:05:00.000 9:10:00.000 192.0.2.2 9609* 1936 9:05:00.000 9:10:00.000 192.0.2.3 1284 1937 9:05:00.000 9:10:00.000 203.0.113.3 8601* 1938 9:10:00.000 9:15:00.000 192.0.2.2 2869 1939 9:10:00.000 9:15:00.000 192.0.2.3 20614 1940 9:10:00.000 9:15:00.000 192.0.2.4 3587 1941 9:10:00.000 9:15:00.000 203.0.113.3 3734* 1943 Figure 29: Aggregated Flows for time series per source with counter 1944 distribution 1946 9. Security Considerations 1948 This document specifies the operation of an Intermediate Aggregation 1949 Process with the IPFIX Protocol; the Security Considerations for the 1950 protocol itself in Section 11 [RFC-EDITOR NOTE: verify section 1951 number] of [I-D.ietf-ipfix-protocol-rfc5101bis] therefore apply. In 1952 the common case that aggregation is performed on a Mediator, the 1953 Security Considerations for Mediators in Section 9 of [RFC6183] apply 1954 as well. 1956 As mentioned in Section 3, certain aggregation operations may tend to 1957 have an anonymizing effect on flow data by obliterating sensitive 1958 identifiers. Aggregation may also be combined with anonymization 1959 within a Mediator, or as part of a chain of Mediators, to further 1960 leverage this effect. In any case in which an Intermediate 1961 Aggregation Process is applied as part of a data anonymization or 1962 protection scheme, or is used together with anonymization as 1963 described in [RFC6235], the Security Considerations in Section 9 of 1964 [RFC6235] apply. 1966 10. IANA Considerations 1968 This document specifies the creation of new IPFIX Information 1969 Elements in the IPFIX Information Element registry 1970 [iana-ipfix-assignments], as defined in Section 7 above. IANA has 1971 assigned Information Element numbers to these Information Elements, 1972 and entered them into the registry. 1974 [NOTE for IANA: The text TBDn should be replaced with the respective 1975 assigned Information Element numbers where they appear in this 1976 document. ] 1978 11. Acknowledgments 1980 Special thanks to Elisa Boschi for early work on the concepts laid 1981 out in this document. Thanks to Lothar Braun, Christian Henke, and 1982 Rahul Patel for their reviews and valuable feedback, with special 1983 thanks to Paul Aitken for his multiple detailed reviews. This work 1984 is materially supported by the European Union Seventh Framework 1985 Programme under grant agreement 257315 (DEMONS). 1987 12. References 1989 12.1. Normative References 1991 [I-D.ietf-ipfix-protocol-rfc5101bis] 1992 Claise, B. and B. Trammell, "Specification of the IP Flow 1993 Information eXport (IPFIX) Protocol for the Exchange of 1994 Flow Information", draft-ietf-ipfix-protocol-rfc5101bis-02 1995 (work in progress), June 2012. 1997 [RFC2119] Bradner, S., "Key words for use in RFCs to Indicate 1998 Requirement Levels", BCP 14, RFC 2119, March 1997. 2000 12.2. Informative References 2002 [RFC3917] Quittek, J., Zseby, T., Claise, B., and S. 
Zander, 2003 "Requirements for IP Flow Information Export (IPFIX)", 2004 RFC 3917, October 2004. 2006 [RFC5470] Sadasivan, G., Brownlee, N., Claise, B., and J. Quittek, 2007 "Architecture for IP Flow Information Export", RFC 5470, 2008 March 2009. 2010 [RFC5472] Zseby, T., Boschi, E., Brownlee, N., and B. Claise, "IP 2011 Flow Information Export (IPFIX) Applicability", RFC 5472, 2012 March 2009. 2014 [RFC5476] Claise, B., Johnson, A., and J. Quittek, "Packet Sampling 2015 (PSAMP) Protocol Specifications", RFC 5476, March 2009. 2017 [RFC5655] Trammell, B., Boschi, E., Mark, L., Zseby, T., and A. 2018 Wagner, "Specification of the IP Flow Information Export 2019 (IPFIX) File Format", RFC 5655, October 2009. 2021 [RFC5982] Kobayashi, A. and B. Claise, "IP Flow Information Export 2022 (IPFIX) Mediation: Problem Statement", RFC 5982, 2023 August 2010. 2025 [RFC6183] Kobayashi, A., Claise, B., Muenz, G., and K. Ishibashi, 2026 "IP Flow Information Export (IPFIX) Mediation: Framework", 2027 RFC 6183, April 2011. 2029 [RFC6235] Boschi, E. and B. Trammell, "IP Flow Anonymization 2030 Support", RFC 6235, May 2011. 2032 [I-D.ietf-ipfix-mediation-protocol] 2033 Claise, B., Kobayashi, A., and B. Trammell, "Operation of 2034 the IP Flow Information Export (IPFIX) Protocol on IPFIX 2035 Mediators", draft-ietf-ipfix-mediation-protocol-02 (work 2036 in progress), July 2012. 2038 [I-D.ietf-ipfix-ie-doctors] 2039 Trammell, B. and B. Claise, "Guidelines for Authors and 2040 Reviewers of IPFIX Information Elements", 2041 draft-ietf-ipfix-ie-doctors-07 (work in progress), 2042 October 2012. 2044 [I-D.ietf-ipfix-configuration-model] 2045 Muenz, G., Claise, B., and P. Aitken, "Configuration Data 2046 Model for IPFIX and PSAMP", 2047 draft-ietf-ipfix-configuration-model-11 (work in 2048 progress), June 2012. 2050 [I-D.ietf-ipfix-flow-selection-tech] 2051 D'Antonio, S., Zseby, T., Henke, C., and L. Peluso, "Flow 2052 Selection Techniques", 2053 draft-ietf-ipfix-flow-selection-tech-12 (work in 2054 progress), September 2012. 2056 [iana-ipfix-assignments] 2057 Internet Assigned Numbers Authority, "IP Flow Information 2058 Export Information Elements 2059 (http://www.iana.org/assignments/ipfix)". 2061 Authors' Addresses 2063 Brian Trammell 2064 Swiss Federal Institute of Technology Zurich 2065 Gloriastrasse 35 2066 8092 Zurich 2067 Switzerland 2069 Phone: +41 44 632 70 13 2070 Email: trammell@tik.ee.ethz.ch 2072 Arno Wagner 2073 Consecom AG 2074 Bleicherweg 64a 2075 8002 Zurich 2076 Switzerland 2078 Email: arno@wagner.name 2080 Benoit Claise 2081 Cisco Systems, Inc. 2082 De Kleetlaan 6a b1 2083 1831 Diegem 2084 Belgium 2086 Phone: +32 2 704 5622 2087 Email: bclaise@cisco.com