Flow Aggregation for the IP Flow Information Export (IPFIX) Protocol

Swiss Federal Institute of Technology Zurich
Gloriastrasse 35, 8092 Zurich, Switzerland
+41 44 632 70 13
trammell@tik.ee.ethz.ch

Consecom AG
Bleicherweg 64a, 8002 Zurich, Switzerland
arno@wagner.name

Cisco Systems, Inc.
De Kleetlaan 6a b1, 1831 Diegem, Belgium
+32 2 704 5622
bclaise@cisco.com

Operations
IPFIX Working Group

This document provides a common implementation-independent basis for
the interoperable application of the IP Flow Information Export (IPFIX)
Protocol to the handling of Aggregated Flows, which are IPFIX Flows
representing packets from multiple Original Flows sharing some set of
common properties. It does this through a detailed terminology and a
descriptive Intermediate Aggregation Process architecture, including a
specification of methods for Original Flow counting and counter
distribution across intervals.

The assembly of packet data into Flows serves a variety of
different purposes, as noted in the requirements and applicability statement for the IP Flow
Information Export (IPFIX) protocol.
Aggregation beyond the flow level, into records representing multiple
Flows, is a common analysis and data reduction technique as well, with
applicability to large-scale network data analysis, archiving, and
inter-organization exchange. This applicability in large-scale
situations, in particular, led to the inclusion of aggregation as part
of the IPFIX Mediators Problem
Statement, and the definition of an Intermediate Aggregation
Process in the Mediator framework.

Aggregation is used for analysis and data reduction in a wide
variety of applications, for example in traffic matrix calculation,
generation of time series data for visualizations or anomaly
detection, or data reduction for long-term trending and storage.
Depending on the keys used for aggregation, it may additionally have
an anonymizing effect on the data: for example, aggregation operations
which eliminate IP addresses make it impossible to later directly
identify nodes using those addresses.

Aggregation as defined and described in this document covers the
applications defined in Sections 5.1 ("Adjusting Flow Granularity"),
5.4 ("Time Composition"), and 5.5 ("Spatial Composition") of the IPFIX
Mediation Framework document. However, this document
specifies a more flexible architecture for an Intermediate Aggregation
Process than that envisioned by the original Mediator work. Instead of
a focus on these specific limited use cases, the Intermediate
Aggregation Process is specified to cover any activity commonly
described as "flow aggregation". This architecture is intended to
describe any such activity without reference to the specific
implementation of aggregation.

An Intermediate Aggregation Process may be applied to data
collected from multiple Observation Points, as it is natural to use
aggregation for data reduction when concentrating measurement data.
This document specifically does not address the protocol issues that
arise when combining IPFIX data from multiple Observation Points and
exporting from a single Mediator, as these issues are general to IPFIX
Mediation; they are therefore treated in detail in the Mediation Protocol
document.

Since Aggregated Flows as defined in the following section are
essentially Flows, the IPFIX protocol can be used to export,
and the IPFIX File Format can be used to
store, aggregated data "as-is"; there are no changes necessary to the
protocol. This document provides a common basis for the application of
IPFIX to the handling of aggregated data, through a detailed
terminology, Intermediate Aggregation Process architecture, and
methods for Original Flow counting and counter distribution across
intervals.

In the IPFIX protocol, { type, length, value } tuples are expressed
in Templates containing { type, length } pairs, specifying which {
value } fields are present in data records conforming to the Template,
giving great flexibility as to what data is transmitted. Since
Templates are sent very infrequently compared with Data Records, this
results in significant bandwidth savings. Various different data
formats may be transmitted simply by sending new Templates specifying
the { type, length } pairs for the new data format. See the IPFIX
Protocol specification for more information.

The IPFIX Information Element
Registry defines a large number of standard Information
Elements which provide the necessary { type } information for
Templates. The use of standard elements enables interoperability among
different vendors' implementations. Additionally, non-standard
enterprise-specific elements may be defined for private use.

"Specification of
the IPFIX Protocol for the Exchange of IP Traffic Flow
Information" and its associated documents define the IPFIX
Protocol, which provides network engineers and administrators with
access to IP traffic flow information.

"Architecture for IP Flow Information
Export" defines the architecture for the export of measured IP
flow information out of an IPFIX Exporting Process to an IPFIX
Collecting Process, and the basic terminology used to describe the
elements of this architecture, per the requirements defined in "Requirements for IP Flow Information Export".
The IPFIX Protocol document then covers
the details of the method for transporting IPFIX Data Records and
Templates via a congestion-aware transport protocol from an IPFIX
Exporting Process to an IPFIX Collecting Process.

"IP Flow Information Export (IPFIX)
Mediation: Problem Statement" introduces the concept of IPFIX
Mediators, and defines the use cases for which they were designed;
"IP Flow Information Export (IPFIX) Mediation:
Framework" then provides an architectural framework for
Mediators. Protocol-level issues (e.g., Template and Observation
Domain handling across Mediators) are covered by "Specification of the
Protocol for IPFIX Mediation". This document specifies an
Intermediate Process which may be applied at an IPFIX Mediator, as
well as at an original Observation Point prior to export, or for
analysis and data reduction purposes after receipt at a Collecting
Process.

Terms used in this document that are defined in the Terminology
section of the IPFIX
Protocol document are to be interpreted as defined there.

The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL
NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in
this document are to be interpreted as described in RFC 2119.

In addition, this document defines the following terms:
Aggregated Flow
   A Flow, as defined by the IPFIX Protocol specification, derived
   from a set of zero or more Original Flows within a defined
   Aggregation Interval. The primary difference between a Flow and an
   Aggregated Flow in the general case is that the time interval
   (i.e., the two-tuple of start and end times) of a Flow is derived
   from information about the timing of the packets comprising the
   Flow, while the time interval of an Aggregated Flow is often
   externally imposed. Note that an Aggregated Flow is defined in the
   context of an Intermediate Aggregation Process only. Once an
   Aggregated Flow is exported, it is essentially a Flow as defined
   by the IPFIX Protocol specification and can be treated as such.

Intermediate Aggregation Process (IAP)
   An Intermediate Process, as in the IPFIX Mediation Framework, that
   aggregates records based upon a set of Flow Keys or functions
   applied to fields from the record.

Aggregation Interval
   A time interval imposed upon an Aggregated Flow. Intermediate
   Aggregation Processes may use a regular Aggregation Interval
   (e.g., "every five minutes", "every calendar month"), though
   regularity is not necessary. Aggregation Intervals may also be
   derived from the time intervals of the Original Flows being
   aggregated.

Partially Aggregated Flow
   A Flow during processing within an Intermediate Aggregation
   Process; refers to an intermediate data structure during
   aggregation within the Intermediate Aggregation Process
   architecture detailed later in this document.

Original Flow
   A Flow given as input to an Intermediate Aggregation Process in
   order to generate Aggregated Flows.

Contributing Flow
   An Original Flow that is partially or completely represented
   within an Aggregated Flow. Each Aggregated Flow is made up of zero
   or more Contributing Flows, and an Original Flow may contribute to
   zero or more Aggregated Flows.

Original Exporter
   The Exporter from which the Original Flows are received;
   meaningful only when an IAP is deployed at a Mediator.

The terminology presented herein improves the precision of, but
does not supersede or contradict the terms related to mediation and
aggregation defined in the Mediation Problem
Statement and the Mediation
Framework documents. Within this document, the terminology
defined in this section is to be considered normative.

Aggregation, as a common data reduction method used in traffic data
analysis, has many applications. When used with a regular Aggregation
Interval and Original Flows containing timing information, it
generates time series data from a collection of Flows with discrete
intervals, as in the first example below. This time
series data is itself useful for a wide variety of analysis tasks,
such as generating input for network anomaly detection systems, or
driving visualizations of volume per time for traffic with specific
characteristics. As a second example, traffic matrix calculation from
flow data, as shown in the second example below, is inherently an
aggregation action, by spatially aggregating the Flow Key down to
input or output interface, address prefix, or autonomous system.

Irregular or data-dependent Aggregation Intervals and key
aggregation operations can also be used to provide adaptive
aggregation of network flow data. Here, full Flow Records can be kept
for Flows of interest, while Flows deemed "less interesting" to a
given application can be aggregated. For example, in an IPFIX Mediator
equipped with traffic classification capabilities for security
purposes, potentially malicious Flows could be exported directly,
while known-good or probably-good Flows (e.g. normal web browsing)
could be exported simply as time series volumes per web server.

Aggregation can also be applied to final analysis of stored Flow
data, as shown in the third example below. All
such aggregation applications in which timing information is not
available or not important can be treated as if an infinite
Aggregation Interval applies.

Note that an Intermediate Aggregation Process which removes
potentially sensitive information, as identified in the IPFIX
anonymization specification, may tend to have an anonymizing effect
on the Aggregated Flows as well; however, any application of
aggregation as
part of a data protection scheme should ensure that all the issues
raised there are addressed, specifically Section
4 "Anonymization of IP Flow Data", Section 7.2 "IPFIX-Specific
Anonymization Guidelines", and Section 9 "Security
Considerations".While much of the discussion in this document, and all of the
examples, apply to the common case that the Original Flows to be
aggregated are all of the same underlying type (i.e., are represented
with identical Templates or compatible Templates containing a core set of
Information Elements which can be freely converted to one another),
and that each packet observed by the Metering Process associated with
the Original Exporter is represented, this is not a necessary
assumption. Aggregation can also be applied as part of a technique
applying both aggregation and correlation to pull together multiple
views of the same traffic from different Observation Points using
different Templates. For example, consider a set of applications
running at different Observation Points for different purposes -- one
generating flows with round-trip-times for passive performance
measurement, and one generating billing records. Once correlated,
these flows could be used to produce Aggregated Flows containing both
volume and performance information together. The correlation and
normalization operation described later in this document handles this specific case of
correlation. Flow correlation in the general case is outside the scope
of this document.

This section specifies the architecture of the Intermediate
Aggregation Process and how it fits into the IPFIX Architecture.

An Intermediate Aggregation Process could be deployed at any of three
places within the IPFIX Architecture. While aggregation is most
commonly done within a Mediator, which collects Original Flows from
an Original Exporter and exports Aggregated Flows, aggregation can
also occur before initial export or after final collection. The
presence of an IAP at any of these points is of course optional. The
Mediator use case is further shown in Figures A and B of the
Mediation Framework document.

Aggregation can be applied for either intermediate or final
analytic purposes. In certain circumstances, it may make sense to
export Aggregated Flows directly after metering, for example, if the
Exporting Process is used to drive a time-series visualization, or
when flow data export bandwidth is restricted and flow or packet
sampling is not an option. Note that this case, where the Aggregation
Process is integrated into the Metering Process, is
essentially covered by the IPFIX
architecture: the Flow Keys used are simply a subset of those
that would normally be used, and time intervals may be chosen other
than those available from the cache policies customarily offered by
the Metering Process. A Metering Process in this arrangement MAY
choose to simulate the generation of larger Flows in order to generate
Original Flow counts, if the application calls for compatibility with
an Intermediate Aggregation Process deployed in a separate
location.

In the specific case that an Intermediate Aggregation Process is
employed for data reduction for storage purposes, it can take Original
Flows from a Collecting Process or File Reader and pass Aggregated
Flows to a File Writer for storage.

Deployment of an Intermediate Aggregation Process within a Mediator is a much more flexible arrangement.
Here, the Mediator consumes Original Flows and produces Aggregated
Flows; this arrangement is suited to any of the use cases detailed
above. In a Mediator, Original Flows from
multiple sources can also be aggregated into a single stream of
Aggregated Flows; the architectural specifics of this arrangement are
not addressed in this document, which is concerned only with the
aggregation operation itself; see the Mediation Protocol document for
details.

The data paths into and out of an Intermediate Aggregation Process
are described below.

Note that, as Aggregated Flows are IPFIX Flows, an Intermediate
Aggregation Process may aggregate already-Aggregated Flows from an
upstream IAP as well as original Flows from an upstream Original
Exporter or Metering Process.

Aggregation may also need to correlate original flows from multiple
Metering Processes, each according to a different Template with
different Flow Keys and values. In this case, the correlation and
normalization operation described below handles merging the Original Flows before
aggregation.

Within this document, an Intermediate Aggregation Process can be
seen as hosting a function composed of four types of operations on
Partially Aggregated Flows: interval distribution (temporal), key
aggregation (spatial), value aggregation (spatial), and aggregate
combination. "Partially Aggregated Flows" as defined in are essentially the intermediate results
of aggregation, internal to the Intermediate Aggregation
Process.

Interval Distribution
   A temporal aggregation operation which imposes an Aggregation
   Interval on the Partially Aggregated Flow. This Aggregation
   Interval may be regular, irregular, or derived from the timing of
   the Original Flows themselves. Interval distribution is discussed
   in detail below.

Key Aggregation
   A spatial aggregation operation which results in the addition,
   modification, or deletion of Flow Key fields in the Partially
   Aggregated Flows. New Flow Keys may be derived from existing Flow
   Keys (e.g., looking up an AS number for an IP address) or
   "promoted" from specific non-Key fields (e.g., when aggregating
   Flows by packet count per Flow). Key aggregation can also add new
   non-Key fields derived from Flow Keys that are deleted during key
   aggregation, mainly counters of unique reduced keys. Key
   aggregation is discussed in detail below.

Value Aggregation
   A spatial aggregation operation which results in the addition,
   modification, or deletion of non-Key fields in the Partially
   Aggregated Flows. These non-Key fields may be "demoted" from
   existing Key fields, or derived from existing Key or non-Key
   fields. Value aggregation is discussed in detail below.

Aggregate Combination
   An operation combining multiple Partially Aggregated Flows, having
   undergone interval distribution, key aggregation, and value
   aggregation, which share Flow Keys and Aggregation Intervals, into
   a single Aggregated Flow per set of Flow Key values and
   Aggregation Interval. Aggregate combination is discussed in detail
   below.

Correlation and Normalization
   An optional operation that applies when accepting Original Flows
   from Metering Processes which export different views of
   essentially the same Flows before aggregation; the details of
   correlation and normalization are specified below.

The first three of these operations may be carried out any number
of times in any order, either on Original Flows or on the results of
one of the operations above, with one
caveat: since Flows carry their own interval data, any spatial
aggregation operation implies a temporal aggregation operation, so at
least one interval distribution step, even if implicit, is required by
this architecture. This is shown as the first step for the sake of
simplicity. Once all aggregation operations are
complete, aggregate combination ensures that for a given Aggregation
Interval, set of Flow Key values, and Observation Domain, only one
Flow is produced by the Intermediate Aggregation Process.

This model describes the operations within a single Intermediate
Aggregation Process, and it is anticipated that most aggregation will
be applied within a single process. However, as the steps in the model
may be applied in any order and aggregate combination is idempotent,
any number of Intermediate Aggregation Processes operating in series
can be modeled as a single process. This allows aggregation operations
to be flexibly distributed across any number of processes, should
application or deployment considerations so dictate.

When accepting Original Flows from multiple Metering Processes,
each of which provides a different view of the Original Flow as
seen from the point of view of the IAP, an optional correlation
and normalization operation combines each of these single Flow
Records into a set of unified partially aggregated Flows before
applying interval distribution. These unified Flows appear as if
they had been measured at a single Metering Process which used the
union of the set of Flow Keys and non-key fields of all Metering
Processes sending Original Flows to the IAP.

Due to export errors or other slight irregularities in flow
metering, the multiple views may not be completely consistent;
normalization therefore involves applying a set of
aggregation-application-specific corrections in order to ensure
consistency in the unified Flows.

In general, correlation and normalization should take multiple
views of essentially the same Flow, as determined by the
configuration of the operation itself, and render them into a
single unified Flow. Flows which are essentially different should
not be unified by the correlation and normalization operation.
This operation therefore requires enough information about the
configuration and deployment of Metering Processes from which it
correlates Original Flows in order to make this distinction
correctly and consistently.

The exact steps performed to correlate and normalize flows in
this step are application-, implementation-, and
deployment-specific, and will not be further specified in this
document.

As stated in the Terminology section above, an Aggregated Flow
is simply an IPFIX Flow generated from Original Flows by an
Intermediate Aggregation Process. Here, we detail the operations by
which this is achieved within an Intermediate Aggregation Process.

Interval distribution imposes a time interval on the resulting
Aggregated Flows. The selection of an interval is specific to the
given aggregation application. Intervals may be derived from the
Original Flows themselves (e.g., an interval may be selected to
cover the entire time containing the set of all Flows sharing
a given Key, as in Time Composition, described below) or externally imposed; in the latter case
the externally imposed interval may be regular (e.g., every five
minutes) or irregular (e.g., to allow for different time
resolutions at different times of day, under different network
conditions, or indeed for different sets of Original Flows).

The length of the imposed interval itself has tradeoffs.
Shorter intervals allow higher-resolution aggregated data and, in
streaming applications, faster reaction time. Longer intervals
generally lead to greater data reduction and simplified counter
distribution. Specifically, counter distribution is greatly
simplified by the choice of an interval longer than the duration
of the longest Original Flow, itself generally determined by the
Original Flow's Metering Process active timeout; in this case an
Original Flow can contribute to at most two Aggregated Flows, and
the more complex value distribution methods become
inapplicable.

To illustrate, consider three common
possibilities for interval distribution, as applied with regular
intervals to a set of three Original Flows. For Flow A, the start
and end times lie within the boundaries of a single interval 0;
therefore, Flow A contributes to only one Aggregated Flow. Flow B,
by contrast, has the same duration but crosses the boundary
between intervals 0 and 1; therefore, it will contribute to two
Aggregated Flows, and its counters must be distributed among these
Flows, though in the two-interval case this can be simplified
somewhat simply by picking one of the two intervals, or
proportionally distributing between them. Only Flows like Flow A
and Flow B will be produced when the interval is chosen to be
longer than the duration of the longest Original Flow, as above. More
complicated is the case of Flow C, which contributes to more than
two Aggregated Flows, and must have its counters distributed
according to some policy, as described below.

In general, counters in Aggregated Flows are treated the same
as in any Flow. Each counter is independently calculated as if
it were derived from the set of packets in the Original Flow:
e.g., delta counters are summed, and the most recent total count for
each Original Flow is taken and then summed across Flows, and so on.

When the Aggregation Interval is guaranteed to be longer than
the longest Original Flow, a Flow can cross at most one Interval
boundary, and will therefore contribute to at most two
Aggregated Flows. Most common in this case is to arbitrarily but
consistently choose to account the Original Flow's counters
either to the first or the last Aggregated Flow to which it
could contribute.

However, this becomes more complicated when the Aggregation
Interval is shorter than the longest Original Flow in the source
data. In such cases, each Original Flow can incompletely cover
one or more time intervals, and apply to one or more Aggregated
Flows. In this case, the Intermediate Aggregation Process must distribute the
counters in the Original Flows across one or more resulting
Aggregated Flows. There are several methods for doing this,
listed here in roughly increasing order of complexity and
accuracy; most of these are necessary only in specialized
cases.

End Interval
   The counters for an Original Flow are added to the counters of
   the appropriate Aggregated Flow containing the end time of the
   Original Flow.

Start Interval
   The counters for an Original Flow are added to the counters of
   the appropriate Aggregated Flow containing the start time of the
   Original Flow.

Mid Interval
   The counters for an Original Flow are added to the counters of a
   single appropriate Aggregated Flow containing some timestamp
   between the start and end time of the Original Flow.

Simple Uniform Distribution
   Each counter for an Original Flow is divided by the number of
   time intervals the Original Flow covers (i.e., of appropriate
   Aggregated Flows sharing the same Flow Keys), and this number is
   added to each corresponding counter in each Aggregated Flow.

Proportional Uniform Distribution
   Each counter for an Original Flow is divided by the number of
   time _units_ the Original Flow covers, to derive a mean count
   rate. This rate is then multiplied by the number of time units in
   the intersection of the duration of the Original Flow and the
   time interval of each Aggregated Flow. This is like Simple
   Uniform Distribution, but accounts for the fractional portions of
   a time interval covered by an Original Flow in the first and last
   time interval.

Simulated Process
   Each counter of the Original Flow is distributed among the
   intervals of the Aggregated Flows according to some function the
   Intermediate Aggregation Process uses based upon properties of
   Flows presumed to be like the Original Flow. For example, Flow
   Records representing bulk transfer might follow a more or less
   proportional uniform distribution, while interactive processes
   are far more bursty.

Direct
   The Intermediate Aggregation Process has access to the original
   packet timings from the packets making up the Original Flow, and
   uses these to distribute or recalculate the counters.
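As a non-normative illustration, the following Python sketch shows how
a few of these methods might distribute a single counter; the helper
names, millisecond timestamps, and regular intervals are assumptions
of the sketch, not part of this specification:

   # Sketch only: regular intervals identified by their start times;
   # timestamps are integer milliseconds.
   def covered_intervals(start_ms, end_ms, width_ms):
       """List the start times of all intervals a Flow touches."""
       first = (start_ms // width_ms) * width_ms
       last = (end_ms // width_ms) * width_ms
       return list(range(first, last + width_ms, width_ms))

   def distribute(start_ms, end_ms, count, width_ms, method="start"):
       """Distribute one counter over intervals as {interval: share};
       shares always sum to the original count, as required below."""
       ivals = covered_intervals(start_ms, end_ms, width_ms)
       if method == "start":    # Start Interval
           return {ivals[0]: count}
       if method == "end":      # End Interval
           return {ivals[-1]: count}
       if method == "uniform":  # Simple Uniform Distribution
           share, rem = divmod(count, len(ivals))
           # spread the integer remainder so no counts are lost
           return {iv: share + (1 if i < rem else 0)
                   for i, iv in enumerate(ivals)}
       raise ValueError("unknown method: " + method)

   # A flow covering three 5-minute intervals with 11200 octets
   # splits unevenly (3734 + 3733 + 3733) but preserves its total.
   print(distribute(0, 750_000, 11_200, 300_000, "uniform"))

A method for exporting the distribution of counters across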
multiple Aggregated Flows is detailed below. In any case, counters MUST be
distributed across the multiple Aggregated Flows in such a way
that the total count is preserved, within the limits of accuracy
of the implementation. This property allows data to be
aggregated and re-aggregated with negligible loss of original
count information. To avoid confusion in interpretation of the
aggregated data, all the counters from one Aggregated Flow MUST
be distributed via the same method.

More complex counter distribution methods generally require
that the interval distribution process track multiple "current"
time intervals at once. This may introduce some delay into the
aggregation operation, as an interval should only expire and be
available for export when no additional Original Flows applying
to the interval are expected to arrive at the Intermediate
Aggregation Process.

Note, however, that since there is no guarantee that Flows
from the Original Exporter will arrive in any given order,
whether for transport-specific reasons (i.e. UDP reordering) or
Metering Process or Exporting Process implementation-specific
reasons, even simpler distribution methods may need to deal with
flows arriving in other than start time or end time order.
Therefore, the use of larger intervals does not obviate the need
to buffer Partially Aggregated Flows within "current" time
intervals, to ensure the IAP can accept flow time intervals in
any arrival order. More generally, the interval distribution
process SHOULD accept flow start and end times in the Original
Flows in any reasonable order. The expiration of intervals in
interval distribution operations is dependent on implementation
and deployment requirements, and SHOULD be made configurable in
contexts in which "reasonable order" is not obvious at
implementation time. This operation may lead to delay and loss
introduced by the IAP, as detailed below.

Time Composition, as in Section 5.4 of the Mediation Framework
document (or interval combination), is a special case
of aggregation, where interval distribution imposes longer
intervals on Flows with matching keys and "chained" start and
end times, without any key reduction, in order to join
long-lived Flows which may have been split (e.g., due to an
active timeout shorter than the actual duration of the Flow).
Here, no Key aggregation is applied, and the Aggregation
Interval is chosen on a per-Flow basis to cover the interval
spanned by the set of aggregated Flows. This may be applied
alone in order to normalize split Flows, or in combination with
other aggregation functions in order to obtain more accurate
Original Flow counts.
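As a non-normative illustration of Time Composition, the following
Python sketch rejoins split Flows with matching keys and chained start
and end times; the tuple layout and the maximum-gap parameter are
assumptions of the sketch:

   # Sketch only: flows are (key, start_ms, end_ms, octets) tuples.
   def compose(flows, gap_ms=1000):
       latest = {}   # key -> most recently merged flow
       result = []
       for key, start, end, octets in sorted(flows,
                                             key=lambda f: f[1]):
           prev = latest.get(key)
           if prev is not None and start - prev[2] <= gap_ms:
               prev[2] = max(prev[2], end)  # extend the interval
               prev[3] += octets            # combine counters as usual
           else:
               prev = [key, start, end, octets]
               latest[key] = prev
               result.append(prev)
       return result

   # Two pieces of a long-lived flow split by an active timeout
   # become one flow spanning the whole interval:
   print(compose([("k", 0, 300_000, 50),
                  ("k", 300_500, 600_000, 70)]))

Note that much of the difficulty of interval distribution at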
an IAP can be avoided simply by configuring the original
Exporters to synchronize the time intervals in the Original
Flows with the desired aggregation interval. The resulting
Original Flows would then be split to align perfectly with the
time intervals imposed during Interval Imposition, though this may reduce their
usefulness for non-Aggregation purposes. This approach allows
the Intermediate Aggregation Process to use Start Interval or
End Interval distribution, while having equivalent information
to that available to Direct interval distribution.

Key aggregation generates a new set of Flow Key values for the
Aggregated Flows from the Original Flow Key and non-Key fields in
the Original Flows, or from correlation of the Original Flow
information with some external source. There are two basic
operations here. First, Aggregated Flow Keys may be derived
directly from Original Flow Keys through reduction, or the
dropping of fields or precision in the Original Flow Keys. Second,
Aggregated Flow Keys may be derived through replacement, e.g. by
removing one or more fields from the Original Flow and replacing
them with fields derived from the removed fields. Replacement may
refer to external information (e.g., IP to AS number mappings).
Replacement may apply to Flow Keys as well as non-key fields. For
example, consider an application which aggregates Original Flows
by packet count (i.e., generating an Aggregated Flow for all
one-packet Flows, one for all two-packet Flows, and so on). This
application would promote the packet count to a Flow Key.Key aggregation may also result in the addition of new non-Key
fields to the Aggregated Flows, namely Original Flow counters and
unique reduced key counters; these are treated in more detail in
and ,
respectively.In any key aggregation operation, reduction and/or replacement
may be applied any number of times in any order. Which of these
operations are supported by a given implementation is
implementation- and application-dependent. One example is a
reduction operation, aggregation by source address and
destination class C network. Here, the port, protocol, and
type-of-service information is removed from the Flow Key, the
source address is retained, and the destination address is masked
by dropping the lower 8 bits. A second example is a combined
reduction and replacement operation, aggregation by source and
destination Border Gateway Protocol (BGP) Autonomous System Number
(ASN) without ASN information available in the Original Flow.
Here, the port, protocol, and type-of-service information is
removed from the Flow Keys, while the source and destination
addresses are run through an IP address to ASN lookup table, and
the Aggregated Flow Keys are made up of the resulting source and
destination ASNs.
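As a non-normative illustration, the following Python sketch shows
both key aggregation operations described above; the record layout and
the prefix-to-ASN table are assumptions of the sketch:

   import ipaddress

   # Sketch only: a stand-in for a real IP-to-AS lookup table.
   AS_TABLE = {"192.0.2.0/25": 64496, "192.0.2.128/25": 64497}

   def lookup_asn(addr):
       a = ipaddress.ip_address(addr)
       for prefix, asn in AS_TABLE.items():
           if a in ipaddress.ip_network(prefix):
               return asn
       return 0  # unknown origin

   def reduce_key(flow):
       """Reduction: drop ports/protocol/type-of-service, keep the
       source address, mask the destination by its low 8 bits."""
       dest = ipaddress.ip_network(flow["dip"] + "/24", strict=False)
       return (flow["sip"], str(dest))

   def replace_key(flow):
       """Replacement: swap addresses for source/destination ASNs."""
       return (lookup_asn(flow["sip"]), lookup_asn(flow["dip"]))

   flow = {"sip": "192.0.2.1", "dip": "192.0.2.130"}
   print(reduce_key(flow))   # ('192.0.2.1', '192.0.2.0/24')
   print(replace_key(flow))  # (64496, 64497)

When aggregating multiple Original Flows into an Aggregated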
Flow, it is often useful to know how many Original Flows are
present in the Aggregated Flow. This document introduces four new
Information Elements to export these counters.

There are two possible ways to count Original Flows, which
we call here conservative and non-conservative. Conservative
flow counting has the property that each Original Flow
contributes exactly one to the total flow count within a set of
Aggregated Flows. In other words, conservative flow counters are
distributed just as any other counter during interval
distribution, except each Original Flow is assumed to have a
flow count of one. When a count for an Original Flow must be
distributed across a set of Aggregated Flows, and a distribution
method is used which does not account for that Original Flow
completely within a single Aggregated Flow, conservative flow
counting requires a fractional representation.

By contrast, non-conservative flow counting is used to count
how many Contributing Flows are represented in an Aggregated
Flow. Flow counters are not distributed in this case. An
Original Flow which is present within N Aggregated Flows would
add N to the sum of non-conservative flow counts, one to each
Aggregated Flow. In other words, the sum of conservative flow
counts over a set of Aggregated Flows is always equal to the
number of Original Flows, while the sum of non-conservative flow
counts is greater than or equal to the number of
Original Flows.

For example, consider Flows A, B, and C from the interval
distribution discussion above. Assume that the key aggregation
step aggregates the keys of these three Flows to the same
aggregated Flow Key, and that start interval counter
distribution is in effect. The conservative flow count for
interval 0 is 3 (since Flows A, B, and C all begin in this
interval), and for the other two intervals is 0. The
non-conservative flow count for interval 0 is also 3 (due to the
presence of Flows A, B, and C), for interval 1 is 2 (Flows B and
C), and for interval 2 is 1 (Flow C). The sum of the
conservative counts is 3 + 0 + 0 = 3, the number of Original Flows,
while the sum of the non-conservative counts is 3 + 2 + 1 = 6.
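As a non-normative illustration, the following Python sketch
reproduces this arithmetic; the interval indices per Flow are taken
from the example above:

   # Sketch only: Flows given as the (first, last) interval indices
   # they cover, per the three-interval example.
   flows = {"A": (0, 0), "B": (0, 1), "C": (0, 2)}

   conservative = {0: 0, 1: 0, 2: 0}      # Start Interval policy
   non_conservative = {0: 0, 1: 0, 2: 0}  # presence per interval

   for first, last in flows.values():
       conservative[first] += 1          # exactly one count per flow
       for i in range(first, last + 1):
           non_conservative[i] += 1      # one per interval covered

   print(conservative)      # {0: 3, 1: 0, 2: 0} -> sums to 3
   print(non_conservative)  # {0: 3, 1: 2, 2: 1} -> sums to 6

Note that the active and inactive timeouts used to generate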
Original Flows, as well as the cache policy used to generate
those Flows, have an effect on how meaningful either the
conservative or non-conservative flow count will be during
aggregation. In general, Original Exporters using the IPFIX
Configuration Model SHOULD be configured to export Flows with
equal or similar activeTimeout and inactiveTimeout configuration
values, and the same cacheMode, as defined in the IPFIX Configuration Model; Original
Exporters not using the IPFIX Configuration Model SHOULD be
configured equivalently.

One common case in aggregation is counting distinct key
values that were reduced away during key aggregation. The most
common use case for this is counting distinct hosts per Flow
Key; for example, in host characterization or anomaly detection,
distinct sources per destination or distinct destinations per
source are common metrics. These new non-Key fields are added
during key aggregation.

For such applications, Information Elements for distinct
counts of IPv4 and IPv6 addresses are defined below. These are named
distinctCountOf(KeyName). Additional such Information Elements
SHOULD be registered with IANA on an as-needed basis.
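As a non-normative illustration, the following Python sketch counts
distinct sources per destination address and port; the input records
and their field names are assumptions of the sketch:

   from collections import defaultdict

   # Sketch only: field names follow IPFIX Information Element names.
   original_flows = [
       {"sourceIPv4Address": "192.0.2.1",
        "destinationIPv4Address": "198.51.100.9",
        "destinationTransportPort": 80},
       {"sourceIPv4Address": "192.0.2.2",
        "destinationIPv4Address": "198.51.100.9",
        "destinationTransportPort": 80},
   ]

   distinct = defaultdict(set)
   for f in original_flows:
       key = (f["destinationIPv4Address"],
              f["destinationTransportPort"])
       distinct[key].add(f["sourceIPv4Address"])

   # len(...) would be exported as distinctCountOfSourceIPv4Address
   for key, sources in distinct.items():
       print(key, len(sources))  # ('198.51.100.9', 80) 2

Aggregation operations may also lead to the addition of value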
fields demoted from key fields, or derived from other value fields
in the Original Flows. Specific cases of this are treated in the
subsections below.

Some applications of aggregation may benefit from
computing different statistics than those native to each
non-key field (e.g., flags are natively combined via union,
and delta counters by summing). For example, minimum and
maximum packet counts per Flow, mean bytes per packet per
Contributing Flow, and so on. Certain Information Elements
for these applications are already provided in the IANA
IPFIX Information Elements registry
(http://www.iana.org/assignments/ipfix/ipfix.html), e.g.,
minimumIpTotalLength.

A complete specification of additional aggregate counter
statistics is outside the scope of this document, and should
be added in the future to the IANA IPFIX Information
Elements registry on a per-application, as-needed basis.

More complex operations may lead to other derived fields
being generated from the set of values or Flow Keys reduced away
during aggregation. A prime example of this is sample entropy
calculation. This counts distinct values and frequency, so is
similar to distinct key counting as described above, but may be applied to the distribution
of values for any flow field. Sample entropy calculation provides a one-number normalized
representation of the value spread and is useful for anomaly
detection. The behavior of entropy statistics is such that a
small number of keys showing up very often drives the entropy
value down towards zero, while a large number of keys, each
showing up with lower frequency, drives the entropy value
up.

Entropy statistics are generally useful for identifier keys,
such as IP addresses, port numbers, AS numbers, etc. They can
also be calculated on flow length, flow duration fields and the
like, even if this generally yields less distinct value shifts
when the traffic mix changes.

As a practical example, one host scanning a lot of other
hosts will drive source IP entropy down and target IP entropy
up. A similar effect can be observed for ports. This pattern can
also be caused by the scan-traffic of a fast Internet worm. A
second example would be a DDoS flooding attack against a single
target (or small number of targets) which drives source IP
entropy up and target IP entropy down.

A complete specification of additional derived values or
entropy information elements is outside the scope of this
document. Any such Information Elements should be added in the
future to the IANA IPFIX Information Elements registry on a
per-application, as-needed basis.

Interval distribution and key aggregation together may generate
multiple Partially Aggregated Flows covering the same time
interval with the same set of Flow Key values. The process of
combining these Partially Aggregated Flows into a single
Aggregated Flow is called aggregate combination. In general,
non-Key values from multiple Contributing Flows are combined using
the same operation by which values are combined from packets to
form Flows for each Information Element. Delta counters are
summed, flags are unioned, and so on.
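As a non-normative illustration, the following Python sketch combines
Partially Aggregated Flows sharing a Flow Key and Aggregation
Interval; the record layout is an assumption of the sketch:

   # Sketch only: 'key' is the aggregated Flow Key, 'interval' the
   # Aggregation Interval start.
   def combine(partials):
       combined = {}
       for p in partials:
           k = (p["key"], p["interval"])
           if k not in combined:
               combined[k] = dict(p)
           else:
               # same per-IE rules as when forming Flows from packets
               combined[k]["octetDeltaCount"] += p["octetDeltaCount"]
               combined[k]["tcpControlBits"] |= p["tcpControlBits"]
       return list(combined.values())

   print(combine([
       {"key": "a", "interval": 0,
        "octetDeltaCount": 10, "tcpControlBits": 0x02},
       {"key": "a", "interval": 0,
        "octetDeltaCount": 5, "tcpControlBits": 0x10},
   ]))  # one record: 15 octets, flags 0x12

In certain circumstances, particularly involving aggregation by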
devices with limited resources, and in situations where exact
aggregated counts are less important than relative magnitudes
(e.g. driving graphical displays), counter distribution during key
aggregation may be performed by approximate counting means (e.g.
Bloom filters). The choice to use approximate counting is
implementation- and application-dependent.

When accepting Original Flows in export order from traffic
captured live, the Intermediate Aggregation Process waits for all
Original Flows which may contribute to a given interval during
interval distribution. This delay is generally dominated by the active
timeout of the Metering Process measuring the Original Flows. For
example, with Metering Processes configured with a 5 minute
active timeout, the Intermediate Aggregation Process introduces a
delay of at least 5 minutes to all exported Aggregated Flows to
ensure it has received all Original Flows. Note that when
aggregating flows from multiple Metering Processes with different
active timeouts, the delay is determined by the maximum active
timeout.

In certain circumstances, additional delay at the original
Exporter may cause an IAP to close an interval before the last
Original Flow(s) accountable to the interval arrives; in this
case the IAP SHOULD drop the late Original Flow(s). Accounting of
flows lost at an Intermediate Process due to such issues is
covered in the Mediation Protocol document.

The accuracy of Aggregated Flows may also be affected by
sampling of the Original Flows, or sampling of packets making up
the Original Flows. At the time of writing, the effect of sampling
on flow aggregation is still an open research question. However,
to maximize the comparability of Aggregated Flows, aggregation of
sampled Flows SHOULD only use Original Flows sampled using the
same sampling rate and sampling algorithm, Flows created from
packets sampled using the same sampling rate and sampling
algorithm, or Original Flows which have been normalized as if they
had the same sampling rate and algorithm before aggregation. For
more on packet sampling within IPFIX, see the PSAMP specifications.
For more on Flow sampling within the IPFIX Mediator Framework, see
the specification of Flow selection techniques.

Aggregation may be applied to Original Flows from different
sources and of different types (i.e., represented using different,
perhaps wildly different Templates). When the goal is to separate
the heterogeneous Original Flows and aggregate them into
heterogeneous Aggregated Flows, each aggregation should be done at
its own Intermediate Aggregation Process. The Observation Domain ID
on the Messages containing the output Aggregated Flows can be used
to identify the different Processes, and to segregate the
output.

However, when the goal is to aggregate these Flows into a single
stream of Aggregated Flows representing one type of data, and if the
Original Flows may represent the same original packet at two
different Observation Points, the Original Flows should be
correlated by the correlation and normalization operation within the
IAP to ensure that each packet is only represented in a single
Aggregated Flow or set of Aggregated Flows differing only by
aggregation interval.

In general, Aggregated Flows are exported in IPFIX as any other
Flow. However, certain aspects of Aggregated Flow export benefit from
additional guidelines, or new Information Elements to represent
aggregation metadata or information generated during aggregation.
These are detailed in the following subsections.

Since an Aggregated Flow is simply a Flow, the existing
timestamp Information Elements in the IPFIX Information Model
(e.g., flowStartMilliseconds, flowEndNanoseconds) are sufficient
to specify the time interval for aggregation. Therefore, no new
aggregation-specific Information Elements for exporting time
interval information are necessary.

Each Aggregated Flow carrying timing information SHOULD contain
both an interval start and interval end timestamp.

The following four Information Elements are defined to count
Original Flows, as discussed above.
originalFlowsPresent
   Description: The non-conservative count of Original Flows
   contributing to this Aggregated Flow. Non-conservative counts
   need not sum to the original count on re-aggregation.
   Abstract Data Type: unsigned64
   Data Type Semantics: deltaCount
   ElementId: TBD1
   Status: Current

originalFlowsInitiated
   Description: The conservative count of Original Flows whose first
   packet is represented within this Aggregated Flow. Conservative
   counts must sum to the original count on re-aggregation.
   Abstract Data Type: unsigned64
   Data Type Semantics: deltaCount
   ElementId: TBD2
   Status: Current

originalFlowsCompleted
   Description: The conservative count of Original Flows whose last
   packet is represented within this Aggregated Flow. Conservative
   counts must sum to the original count on re-aggregation.
   Abstract Data Type: unsigned64
   Data Type Semantics: deltaCount
   ElementId: TBD3
   Status: Current

deltaFlowCount
   Description: The conservative count of Original Flows
   contributing to this Aggregated Flow; may be distributed via any
   of the methods expressed by the valueDistributionMethod
   Information Element.
   Abstract Data Type: unsigned64
   Data Type Semantics: deltaCount
   ElementId: 3
   Status: Current
   [IANA NOTE: This Information Element is compatible with
   Information Element 3 as used in NetFlow version 9.]

The following six Information Elements represent the distinct counts
of source and destination network-layer addresses, used to export
distinct host counts reduced away during key aggregation.

distinctCountOfSourceIPAddress
   Description: The count of distinct source IP address values for
   Original Flows contributing to this Aggregated Flow, without
   regard to IP version. This Information Element is preferred to
   the IP-version-specific counters, unless it is important to
   separate the counts by version.
   Abstract Data Type: unsigned64
   Data Type Semantics: totalCount
   ElementId: TBD4
   Status: Current

distinctCountOfDestinationIPAddress
   Description: The count of distinct destination IP address values
   for Original Flows contributing to this Aggregated Flow, without
   regard to IP version. This Information Element is preferred to
   the version-specific counters below, unless it is important to
   separate the counts by version.
   Abstract Data Type: unsigned64
   Data Type Semantics: totalCount
   ElementId: TBD5
   Status: Current

distinctCountOfSourceIPv4Address
   Description: The count of distinct source IPv4 address values for
   Original Flows contributing to this Aggregated Flow.
   Abstract Data Type: unsigned32
   Data Type Semantics: totalCount
   ElementId: TBD6
   Status: Current

distinctCountOfDestinationIPv4Address
   Description: The count of distinct destination IPv4 address
   values for Original Flows contributing to this Aggregated Flow.
   Abstract Data Type: unsigned32
   Data Type Semantics: totalCount
   ElementId: TBD7
   Status: Current

distinctCountOfSourceIPv6Address
   Description: The count of distinct source IPv6 address values for
   Original Flows contributing to this Aggregated Flow.
   Abstract Data Type: unsigned64
   Data Type Semantics: totalCount
   ElementId: TBD8
   Status: Current

distinctCountOfDestinationIPv6Address
   Description: The count of distinct destination IPv6 address
   values for Original Flows contributing to this Aggregated Flow.
   Abstract Data Type: unsigned64
   Data Type Semantics: totalCount
   ElementId: TBD9
   Status: Current

When exporting counters distributed among Aggregated Flows, as
described above, the Exporting Process
MAY export an Aggregate Counter Distribution Option Record for
each Template describing Aggregated Flow records; this Options
Template is described below. It uses the valueDistributionMethod
Information Element, also defined below. In many cases, distribution
is simple: the counters from Contributing Flows are accounted to the
first Interval to which they contribute. This is the default
situation, for which no Aggregate Counter Distribution Record is
necessary; Aggregate Counter Distribution Records are
only applicable in more exotic situations, such as using an
Aggregation Interval smaller than the durations of Original
Flows.

This Options Template defines the Aggregate Counter
Distribution Record, which allows the binding of a value
distribution method to a Template ID. The scope is the
Template ID, whose uniqueness, per the IPFIX Protocol specification, is local to the
Transport Session and Observation Domain that generated the
Template ID. This is used to signal to the Collecting Process
how the counters were distributed. The fields are as
below:

   templateId [scope]
      The Template ID of the Template defining the Aggregated Flows
      to which this distribution option applies. This Information
      Element MUST be defined as a Scope Field.

   valueDistributionMethod
      The method used to distribute the counters for the Aggregated
      Flows defined by the associated Template.

valueDistributionMethod
   Description: A description of the method used to distribute the
   counters from Contributing Flows into the Aggregated Flow records
   described by an associated scope, generally a Template. The
   method is deemed to apply to all the non-key Information Elements
   in the referenced scope for which value distribution is a valid
   operation; if the originalFlowsInitiated and/or
   originalFlowsCompleted Information Elements appear in the
   Template, they are not subject to this distribution method, as
   they each infer their own distribution method. This is intended
   to be a complete set of possible value distribution methods; it
   is encoded as follows:

   Value   Description
   -----   -----------
   0       Unspecified: The counters for an Original Flow are
           explicitly not distributed according to any other method
           defined for this Information Element; use for arbitrary
           distribution, or distribution algorithms not described by
           any other codepoint.

   1       Start Interval: The counters for an Original Flow are
           added to the counters of the appropriate Aggregated Flow
           containing the start time of the Original Flow. This
           should be assumed the default if value distribution
           information is not available at a Collecting Process for
           an Aggregated Flow.

   2       End Interval: The counters for an Original Flow are added
           to the counters of the appropriate Aggregated Flow
           containing the end time of the Original Flow.

   3       Mid Interval: The counters for an Original Flow are added
           to the counters of a single appropriate Aggregated Flow
           containing some timestamp between start and end time of
           the Original Flow.

   4       Simple Uniform Distribution: Each counter for an Original
           Flow is divided by the number of time intervals the
           Original Flow covers (i.e., of appropriate Aggregated
           Flows sharing the same Flow Key), and this number is
           added to each corresponding counter in each Aggregated
           Flow.

   5       Proportional Uniform Distribution: Each counter for an
           Original Flow is divided by the number of time _units_
           the Original Flow covers, to derive a mean count rate.
           This mean count rate is then multiplied by the number of
           time units in the intersection of the duration of the
           Original Flow and the time interval of each Aggregated
           Flow. This is like simple uniform distribution, but
           accounts for the fractional portions of a time interval
           covered by an Original Flow in the first and last time
           interval.

   6       Simulated Process: Each counter of the Original Flow is
           distributed among the intervals of the Aggregated Flows
           according to some function the Intermediate Aggregation
           Process uses based upon properties of Flows presumed to
           be like the Original Flow. This is essentially an
           assertion that the Intermediate Aggregation Process has
           no direct packet timing information but is nevertheless
           not using one of the other simpler distribution methods.
           The Intermediate Aggregation Process specifically makes
           no assertion as to the correctness of the simulation.

   7       Direct: The Intermediate Aggregation Process has access
           to the original packet timings from the packets making up
           the Original Flow, and uses these to distribute or
           recalculate the counters.

   Abstract Data Type: unsigned8
   ElementId: TBD10
   Status: Current
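As a non-normative illustration, the following Python sketch captures
the codepoints above and an Aggregate Counter Distribution Record as a
plain dictionary; the Template ID shown is hypothetical, and a real
Exporting Process would encode the record as an IPFIX Options Data
Record:

   from enum import IntEnum

   # Sketch only: codepoints mirror the table above; TBD10 is this
   # document's placeholder for the Information Element number.
   class ValueDistributionMethod(IntEnum):
       UNSPECIFIED = 0
       START_INTERVAL = 1   # assumed default if no record is seen
       END_INTERVAL = 2
       MID_INTERVAL = 3
       SIMPLE_UNIFORM = 4
       PROPORTIONAL_UNIFORM = 5
       SIMULATED_PROCESS = 6
       DIRECT = 7

   distribution_record = {
       "templateId": 257,  # scope: the Aggregated Flow Template
       "valueDistributionMethod":
           ValueDistributionMethod.SIMPLE_UNIFORM,
   }

In these examples, the same data, described by the same Template,
will be aggregated multiple different ways; this illustrates the
various different functions which could be implemented by
Intermediate Aggregation Processes. Templates are shown in IESpec
format. The source data format is a simplified flow: timestamps,
traditional 5-tuple, and octet count; the flow key fields are the
5-tuple.

The data records given as input to the examples in this section are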
shown below; timestamps are given in H:MM:SS.sss format. In this and
subsequent tables, flowStartMilliseconds is shown in H:MM:SS.sss format as
'start time', flowEndMilliseconds is shown in H:MM:SS.sss format as 'end
time', sourceIPv4Address is shown as 'source ip4' with the following
'port' representing sourceTransportPort, destinationIPv4Address is shown
as 'dest ip4' with the following 'port' representing
destinationTransportPort, protocolIdentifier is shown as 'pt', and
octetDeltaCount as 'oct'.

Aggregating flows by source IP address in time series (i.e., with a
regular interval) can be used in subsequent heavy-hitter analysis and as
a source parameter for statistical anomaly detection techniques. Here,
the Intermediate Aggregation Process imposes an interval, aggregates the
key to remove all key fields other than the source IP address, then
combines the result into a stream of Aggregated Flows. The imposed
interval of 5 minutes is longer than the majority of flows; for those
flows crossing interval boundaries, the entire flow is accounted to the
interval containing the start time of the flow.

In this example the Partially Aggregated Flows after each conceptual
operation in the Intermediate Aggregation Process are shown. These are
meant to be illustrative of the conceptual operations only, and not to
suggest an implementation (indeed, the example shown here would not
necessarily be the most efficient method for performing these
operations). Subsequent examples will omit the Partially Aggregated
Flows for brevity.

The input to this process could be any Flow Record containing a
source IP address and octet counter; consider for this example the
Template and data from the introduction. The Intermediate Aggregation
Process would then output records containing just timestamps, source IP,
and octetDeltaCount.

Assume the goal is to get a 5-minute (300s) time series of octet
counts per source IP address. The aggregation operations are then
arranged as follows: interval distribution, then key aggregation,
then aggregate combination.

After applying the interval distribution step to the source data,
only the time intervals have changed. Note that interval distribution
follows the default Start Interval policy; that is, the entire flow
is accounted to the interval containing the flow's start time.

After the key aggregation step, all Flow Keys except the source IP
address have been discarded. This leaves duplicate Partially
Aggregated Flows to be combined in the final operation.

Aggregate combination then sums the counters per key and interval,
producing the final Aggregated Flows to be exported.
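As a non-normative illustration, the following Python sketch performs
this example's entire pipeline in one pass; the record layout is an
assumption of the sketch:

   from collections import defaultdict

   WIDTH_MS = 300_000  # the imposed 5-minute Aggregation Interval

   def aggregate_by_source(original_flows):
       octets = defaultdict(int)
       for f in original_flows:
           # interval distribution: Start Interval policy
           iv = (f["flowStartMilliseconds"] // WIDTH_MS) * WIDTH_MS
           # key aggregation: drop all keys but the source address
           key = f["sourceIPv4Address"]
           # aggregate combination: sum deltas per key and interval
           octets[(iv, key)] += f["octetDeltaCount"]
       return [{"flowStartMilliseconds": iv,
                "flowEndMilliseconds": iv + WIDTH_MS,
                "sourceIPv4Address": ip,
                "octetDeltaCount": total}
               for (iv, ip), total in sorted(octets.items())]

Aggregating flows by source and destination autonomous system number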
in time series is used to generate core traffic matrices. The core
traffic matrix provides a view of the state of the routes within a
network, and can be used for long-term planning of changes to network
design based on traffic demand. Here, imposed time intervals are
generally much longer than active flow timeouts. The traffic matrix is
reported in terms of octets, packets, and flows, as each of these values
may have a subtly different effect on capacity planning.

This example demonstrates key aggregation using derived keys and
Original Flow counting. While some Original Flows may be generated by
Exporting Processes on forwarding devices, and therefore contain the
bgpSourceAsNumber and bgpDestinationAsNumber Information Elements,
Original Flows from Exporting Processes on dedicated measurement devices
without routing data contain only a destinationIPv[46]Address. For these
flows, the Mediator must look up a next hop AS from an IP-to-AS table,
replacing source and destination addresses with AS numbers. (Note that
due to limited example address space, in this example we ignore the
common practice of routing only blocks of /24 or larger.)

Assume the goal is to get a 60-minute time series of octet counts per
source/destination ASN pair. The aggregation operations are arranged
as in the previous example, with an additional key replacement step.

After applying the interval distribution step to the source data,
note that the Partially Aggregated Flows are identical to those in
the interval distribution step in the previous example, except that
the chosen interval (1 hour, 3600 seconds) is different; therefore,
all the flows fit into a single interval.

The next steps are to discard irrelevant key fields and to replace
the source and destination addresses with source and destination AS
numbers in the map. Finally, aggregate combination sums the counters
per key and interval. The resulting Aggregated Flows, containing the
traffic matrix, are then exported. Note that these Aggregated Flows
represent a sparse matrix: AS pairs for which no traffic was received
have no corresponding record in the output.

The output of this operation is suitable for re-aggregation: that is,
traffic matrices from single links or Observation Points can be
aggregated through the same interval imposition and aggregate
combination steps in order to build a traffic matrix for an entire
network.

Aggregating flows by destination address and port, and counting
distinct sources aggregated away, can be used as part of passive service
inventory and host characterization. This example shows
aggregation as an analysis technique, performed on source data stored in
an IPFIX File. As the Transport Session in this File is bounded, removal
of all timestamp information allows summarization of the entire time
interval contained within the File. Removal of timing information
during interval imposition is equivalent to an infinitely long imposed
time interval. This demonstrates both how infinite intervals work, and
how unique counters work.

Interval distribution, in this case, merely discards the timestamp
information from the Original Flows in , and
as such is not shown. Likewise, the value aggregation step simply discards
the octetDeltaCount value field. The key aggregation step reduces the key
to the destinationIPv4Address and destinationTransportPort, counting the
distinct source addresses. Since this is essentially the output of this
aggregation function, the aggregate combination operation is a no-op; the
resulting Aggregated Flows are the output.

Returning to the first example, note that our source data contains
some flows with durations longer than the imposed interval of five
minutes. The default method for dealing with such flows is to account
them to the interval containing the flow's start time.

In this example, the same data is aggregated using the same
arrangement of operations and the same output Template as in the
first example, but using a different counter distribution policy,
Simple Uniform Distribution, as described above. In order to do this,
the Exporting Process first exports the Aggregate Counter
Distribution Options Template.

This Options Template is followed by an Aggregate Counter
Distribution Record described by this Template; assuming the output
Template has ID 257, this record binds Template 257 to the Simple
Uniform Distribution method.

Following metadata export, the aggregation steps follow as before.
However, two long flows are distributed across multiple intervals in the
interval imposition step. Note the uneven distribution of the
three-interval, 11200-octet flow into three Partially Aggregated Flows of
3733, 3733, and 3734 octets; this ensures no cumulative error is injected
by the interval distribution step.

Subsequent steps are as in the first example; the results differ
from that example only in the Aggregated Flows to which the long
flows' counters were distributed.

This document specifies the operation of an Intermediate
Aggregation Process with the IPFIX Protocol; the Security
Considerations for the protocol itself in Section 11 [RFC-EDITOR NOTE:
verify section number] of the IPFIX Protocol specification therefore apply. In the
common case that aggregation is performed on a Mediator, the Security
Considerations for Mediators in Section 9 of the Mediation Framework
document apply as well.

As mentioned above, certain aggregation
operations may tend to have an anonymizing effect on flow data by
obliterating sensitive identifiers. Aggregation may also be combined
with anonymization within a Mediator, or as part of a chain of
Mediators, to further leverage this effect. In any case in which an
Intermediate Aggregation Process is applied as part of a data
anonymization or protection scheme, or is used together with
anonymization as described in the IPFIX anonymization specification,
the Security Considerations in Section 9 of that specification apply.

This document specifies the creation of new IPFIX
Information Elements in the IPFIX Information Element registry located
at http://www.iana.org/assignments/ipfix, as defined above. IANA has assigned Information
Element numbers to these Information Elements, and entered them into
the registry.

[NOTE for IANA: The text TBDn should be replaced with the
respective assigned Information Element numbers where they appear in
this document. Note that the deltaFlowCount Information Element has
been assigned the number 3, as it is compatible with the corresponding
existing (reserved) NetFlow v9 Information Element. Other Information
Element numbers should be assigned outside the NetFlow V9
compatibility range, as these Information Elements are not supported
by NetFlow V9.]

Special thanks to Elisa Boschi for early work on the concepts laid
out in this document. Thanks to Lothar Braun, Christian Henke, and
Rahul Patel for their reviews and valuable feedback, with special
thanks to Paul Aitken for his multiple detailed reviews. This work is
materially supported by the European Union Seventh Framework Programme
under grant agreement 257315 (DEMONS).

IP Flow Information Export Information Elements, IANA registry,
http://www.iana.org/assignments/ipfix/ipfix.xml