SPRING Working Group                                              Z. Ali
Internet-Draft                                               C. Filsfils
Intended status: Standards Track                           K. Talaulikar
Expires: November 22, 2018                                  S. Sivabalan
                                                                J. Liste
                                                     Cisco Systems, Inc.
                                                            M. Horneffer
                                                        Deutsche Telekom
                                                               R. Raszuk
                                                            Bloomberg LP
                                                            S. Litkowski
                                                Orange Business Services
                                                                G. Dawra
                                                                LinkedIn
                                                                D. Voyer
                                                               R. Morton
                                                             Bell Canada
                                                            May 21, 2018

             Traffic Accounting in Segment Routing Networks
              draft-ali-spring-sr-traffic-accounting-01.txt

Abstract

   Segment Routing (SR) allows a headend node to steer a packet flow
   along any path.  Intermediate per-flow states are eliminated thanks
   to source routing.  Traffic accounting plays a critical role in
   network operation and capacity planning.  A traffic accounting
   solution is required for SR networks that provides the necessary
   functionality without creating any additional per-SR-path state in
   the fabric.

   This document provides a holistic view of network capacity planning
   in an SR network and specifies the mechanisms and counters that are
   required for an SR Traffic Accounting solution.

Requirements Language

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
   document are to be interpreted as described in [RFC2119].

Status of This Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF).  Note that other groups may also distribute
   working documents as Internet-Drafts.  The list of current Internet-
   Drafts is at https://datatracker.ietf.org/drafts/current/.

   Internet-Drafts are draft documents valid for a maximum of six months
   and may be updated, replaced, or obsoleted by other documents at any
   time.  It is inappropriate to use Internet-Drafts as reference
   material or to cite them other than as "work in progress."

   This Internet-Draft will expire on November 22, 2018.

Copyright Notice

   Copyright (c) 2018 IETF Trust and the persons identified as the
   document authors.  All rights reserved.
   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (https://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with respect
   to this document.  Code Components extracted from this document must
   include Simplified BSD License text as described in Section 4.e of
   the Trust Legal Provisions and are provided without warranty as
   described in the Simplified BSD License.

Table of Contents

   1.  Introduction
   2.  SR Traffic Counters
     2.1.  Traffic Counters Naming Convention
     2.2.  Per-Interface SR Counters
       2.2.1.  Per interface, per protocol aggregate egress SR traffic
               counters (SR.INT.E.PRO)
       2.2.2.  Per interface, per traffic-class, per protocol aggregate
               egress SR traffic counters (SR.INT.E.PRO.TC)
       2.2.3.  Per interface aggregate ingress SR traffic counter
               (SR.INT.I)
       2.2.4.  Per interface, per TC aggregate ingress SR traffic
               counter (SR.INT.I.TC)
     2.3.  Prefix SID Counters
       2.3.1.  Per-prefix SID egress traffic counter (PSID.E)
       2.3.2.  Per-prefix SID, per-TC egress traffic counter (PSID.E.TC)
       2.3.3.  Per-prefix SID, per egress interface traffic counter
               (PSID.INT.E)
       2.3.4.  Per-prefix SID, per TC, per egress interface traffic
               counter (PSID.INT.E.TC)
       2.3.5.  Per-prefix SID, per ingress interface traffic counter
               (PSID.INT.I)
       2.3.6.  Per-prefix SID, per TC, per ingress interface traffic
               counter (PSID.INT.I.TC)
     2.4.  SR Policy Counters
       2.4.1.  Per-SR Policy aggregate traffic counter (POL)
       2.4.2.  Per-SR Policy labelled steered aggregate traffic counter
               (POL.BSID)
       2.4.3.  Per-SR Policy, per TC aggregate traffic counter (POL.TC)
       2.4.4.  Per-SR Policy, per TC labelled steered aggregate traffic
               counter (POL.BSID.TC)
       2.4.5.  Per-SR Policy, per-Segment-List aggregate traffic counter
               (POL.SL)
       2.4.6.  Per-SR Policy, per-Segment-List labelled steered
               aggregate traffic counter (POL.SL.BSID)
   3.  SR Traffic Matrix
     3.1.  Traffic Matrix Border
     3.2.  Choosing the Traffic Matrix Border
     3.3.  Deriving the Demand Matrix
     3.4.  Traffic Matrix Counters
       3.4.1.  Per-Prefix SID Traffic Matrix counter (PSID.E.TM)
       3.4.2.  Per-Prefix SID, per TC Traffic Matrix counter
               (PSID.E.TM.TC)
   4.  Internet Protocol Flow Information Export
   5.  SR Traffic Accounting
   6.  Security Considerations
   7.  IANA Considerations
   8.  Acknowledgement
   9.  Contributors
   10. References
     10.1.  Normative References
     10.2.  Informative References
   Authors' Addresses

1.  Introduction

   One of the main architecture principles of Segment Routing (SR)
   [I-D.ietf-spring-segment-routing] is that per-flow state is
   maintained only at the ingress nodes of the SR domain.  The approach
   taken in this document respects this principle: it does not create
   any additional control plane or data plane state at the transit or
   egress nodes for traffic accounting.  Only the ingress node of an SR
   policy [I-D.filsfils-spring-segment-routing-policy] maintains
   per-flow counters for traffic accounting, which are also needed for
   other use cases such as billing.

   Capacity planning is the continuous art of forecasting traffic load
   and failures in order to evolve the network topology, its capacity,
   and its routing to meet a defined Service-Level Agreement (SLA).
   This document takes a holistic view of traffic accounting and its
   role in operation and capacity planning in SR networks.

   The Traffic Matrix (TM) [Traffic-Matrices] is one of the important
   components of the holistic approach to traffic accounting taken in
   this document.  A network's traffic matrix is the volume of
   aggregated traffic flows that enter, traverse, and leave an
   arbitrarily defined boundary in the network over a given time
   interval.  The TM border defines the arbitrary boundary nodes of a
   contiguous portion of the network across which service providers
   wish to measure traffic flows.  The TM border defined for traffic
   matrix collection does not have to be at the edge of the network;
   for example, it can also be placed at the aggregation layer.
   Knowledge of the traffic matrix is essential to efficient and
   effective planning, design, engineering, and operation of any IP or
   MPLS network.
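   As a non-normative illustration only, the traffic matrix notion
   described above can be modeled as a sparse matrix of demands keyed
   by (ingress, egress) border-node pairs.  The node names and volumes
   below are hypothetical and carry no meaning in this specification:

```python
# Illustrative sketch only: a traffic matrix T(N, M) over an
# arbitrarily defined TM border.  Node names are hypothetical.
from collections import defaultdict

# Volume (bytes) entering at border node N and leaving at border
# node M over a given measurement interval.
tm = defaultdict(int)

def record_demand(ingress, egress, byte_count):
    """Accumulate a demand D(N, M) into the traffic matrix."""
    tm[(ingress, egress)] += byte_count

record_demand("PE1", "PE3", 1_500_000)
record_demand("PE1", "PE3", 500_000)
record_demand("PE2", "PE3", 250_000)

# D(PE1, PE3) is the cell of the matrix at row PE1, column PE3.
assert tm[("PE1", "PE3")] == 2_000_000
```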
   This document defines the traffic matrix counters for accounting at
   the router and describes how these counters simplify the traffic
   matrix calculation process for the network.  Furthermore, it
   specifies policy, prefix-SID, and interface counters for accounting
   in an SR network.  The goal of the document is to provide a holistic
   view of traffic accounting in an SR network.

   This document assumes that the routers export the traffic counters
   defined in Section 2 and Section 3 to an external controller.  It is
   also assumed that the controller collects the following information
   in order to get the visibility required for traffic accounting:

   o  Network topology information indicating all the nodes and their
      inter-connecting links (e.g., via BGP-LS [RFC7752]).

   o  SR Policies instantiated at various nodes and their BSIDs (e.g.,
      using PCEP [RFC8231] or BGP-LS
      [I-D.ietf-idr-te-lsp-distribution]).

   o  Aggregate traffic counters and statistics for links, including
      link utilization, per traffic class (TC) statistics, drop
      counters, etc.

   o  IPFIX [RFC7011] data and the flow accounting information derived
      from an IPFIX collector.

   The methods for collection of this information by the controller are
   beyond the scope of this document.

2.  SR Traffic Counters

   This section describes counters for traffic accounting in segment
   routing networks.  The essence of Segment Routing is to scale the
   network by maintaining per-flow state only at the source or edge of
   the network.  Specifically, only the headend of an SR policy
   maintains the related per-policy state
   [I-D.filsfils-spring-segment-routing-policy].  Egress nodes and
   midpoints along the source route do not maintain any per-policy
   state.  The traffic counters described in this section respect the
   architecture principles of SR, while giving the service provider the
   visibility needed for network operation and capacity planning.  The
   SR traffic counters are divided into three categories: interface
   counters, prefix counters, and SR policy counters at the policy
   head-end.

2.1.  Traffic Counters Naming Convention

   This section uses the following naming convention when referring to
   the various counters.  This is done in order to assign mnemonic
   names to SR counters.

   o  The term counter(s) in all of the definitions specified in this
      document refers either to the (packet, byte) counters or to the
      byte counter.

   o  SR: any traffic whose FIB lookup is a segment (IGP prefix/Adj
      segments, BGP segments, any type of segment) or whose matched FIB
      entry is steered onto an SR Policy.

   o  INT in a name indicates that a counter is implemented at a
      per-interface level.

   o  E in a name refers to the egress direction (with respect to the
      traffic flow).

   o  I in a name refers to the ingress direction (with respect to the
      traffic flow).

   o  TC in a name indicates that a counter is implemented on a Traffic
      Class (TC) basis.

   o  TM in a name refers to a Traffic Matrix (TM) counter.

   o  PRO in a name indicates that the counter is implemented on a per
      protocol/adjacency type basis.  Per-PRO counters in this document
      account for one of:

      *  LAB (Labelled Traffic): the matched FIB entry is a segment,
         and the outgoing packet has at least one label (that label
         does not have to be a segment label; e.g., it may be a VPN
         label).

      *  V4 (IPv4 Traffic): the matched FIB entry is a segment that is
         popped.  The outgoing packet is IPv4.

      *  V6 (IPv6 Traffic): the matched FIB entry is a segment that is
         popped.  The outgoing packet is IPv6.

   o  POL in a name refers to a Policy counter.

   o  BSID in a name indicates a policy counter for labelled traffic.

   o  SL in a name indicates that a policy counter is implemented at a
      Segment-List (SL) level.
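   As a non-normative sketch, the convention above composes mnemonic
   counter names by joining the component keywords with dots; the
   helper name below is purely illustrative:

```python
# Illustrative only: composing mnemonic SR counter names from the
# convention above.  Not part of the specification.
def sr_counter_name(*components):
    """Join counter-name components, e.g. SR, INT, E, LAB."""
    return ".".join(components)

# Per-interface, egress, labelled-traffic aggregate SR counter:
assert sr_counter_name("SR", "INT", "E", "LAB") == "SR.INT.E.LAB"
# Per-SR-Policy labelled steered aggregate traffic counter:
assert sr_counter_name("POL", "BSID") == "POL.BSID"
# Per-prefix-SID, per-TC, per-ingress-interface counter:
assert sr_counter_name("PSID", "INT", "I", "TC") == "PSID.INT.I.TC"
```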
   The counter nomenclature is illustrated by the following examples:

   o  SR.INT.E.PRO: Per-interface, per-protocol aggregate egress SR
      traffic counter.

   o  POL.BSID: Per-SR Policy labelled steered aggregate traffic
      counter.

2.2.  Per-Interface SR Counters

   For each local interface, node N maintains the following
   per-interface SR counters.  These counters include accounting due to
   push, pop, or swap operations on SR traffic.

2.2.1.  Per interface, per protocol aggregate egress SR traffic
        counters (SR.INT.E.PRO)

   The following counters are included under this category.

   o  SR.INT.E.LAB: For each egress interface (INT.E), N MUST maintain
      counter(s) for the aggregate SR traffic forwarded over the
      (INT.E) interface as labelled traffic.

   o  SR.INT.E.V4: For each egress interface (INT.E), N MUST maintain
      counter(s) for the aggregate SR traffic forwarded over the
      (INT.E) interface as IPv4 traffic (due to the pop operation).

   o  SR.INT.E.V6: For each egress interface (INT.E), N MUST maintain
      counter(s) for the aggregate SR traffic forwarded over the
      (INT.E) interface as IPv6 traffic (due to the pop operation).

2.2.2.  Per interface, per traffic-class, per protocol aggregate egress
        SR traffic counters (SR.INT.E.PRO.TC)

   These counters provide a per Traffic Class (TC) breakdown of
   SR.INT.E.PRO.  The following counters are included under this
   category.

   o  SR.INT.E.LAB.TC: For each egress interface (INT.E) and a given
      Traffic Class (TC), N SHOULD maintain counter(s) for the
      aggregate SR traffic forwarded over the (INT.E) interface as
      labelled traffic.

   o  SR.INT.E.V4.TC: For each egress interface (INT.E) and a given
      Traffic Class (TC), N SHOULD maintain counter(s) for the
      aggregate SR traffic forwarded over the (INT.E) interface as IPv4
      traffic (due to the pop operation).

   o  SR.INT.E.V6.TC: For each egress interface (INT.E) and a given
      Traffic Class (TC), N SHOULD maintain counter(s) for the
      aggregate SR traffic forwarded over the (INT.E) interface as IPv6
      traffic (due to the pop operation).

2.2.3.  Per interface aggregate ingress SR traffic counter (SR.INT.I)

   The SR.INT.I counter is defined as follows:

   For each ingress interface (INT.I), N SHOULD maintain counter(s) for
   the aggregate SR traffic received on I.

2.2.4.  Per interface, per TC aggregate ingress SR traffic counter
        (SR.INT.I.TC)

   This counter provides a per Traffic Class (TC) breakdown of
   SR.INT.I.  It is defined as follows:

   For each ingress interface (INT.I) and a given Traffic Class (TC), N
   MAY maintain counter(s) for the aggregate SR traffic (matching the
   traffic class TC criteria) received on I.

2.3.  Prefix SID Counters

   For a remote prefix SID S, node N maintains the following prefix SID
   counters.  These counters include accounting due to push, pop, or
   swap operations on the SR traffic.

2.3.1.  Per-prefix SID egress traffic counter (PSID.E)

   This counter is defined as follows:

   For a remote prefix SID S, N MUST maintain counter(s) for the
   aggregate traffic forwarded towards S.

2.3.2.  Per-prefix SID, per-TC egress traffic counter (PSID.E.TC)

   This counter provides a per Traffic Class (TC) breakdown of PSID.E.
   It is defined as follows:

   For a given Traffic Class (TC) and a remote prefix SID S, N SHOULD
   maintain counter(s) for the traffic forwarded towards S.

2.3.3.  Per-prefix SID, per egress interface traffic counter
        (PSID.INT.E)

   This counter is defined as follows:

   For a given egress interface (INT.E) and a remote prefix SID S, N
   SHOULD maintain counter(s) for the traffic forwarded towards S over
   the (INT.E) interface.

2.3.4.  Per-prefix SID, per TC, per egress interface traffic counter
        (PSID.INT.E.TC)

   This counter provides a per Traffic Class (TC) breakdown of
   PSID.INT.E.  It is defined as follows:

   For a given Traffic Class (TC), an egress interface (INT.E), and a
   remote prefix SID S, N MAY maintain counter(s) for the traffic
   forwarded towards S over the (INT.E) interface.

2.3.5.  Per-prefix SID, per ingress interface traffic counter
        (PSID.INT.I)

   This counter is defined as follows:

   For a given ingress interface (INT.I) and a remote prefix SID S, N
   MAY maintain counter(s) for the traffic received on I and forwarded
   towards S.

2.3.6.  Per-prefix SID, per TC, per ingress interface traffic counter
        (PSID.INT.I.TC)

   This counter provides a per Traffic Class (TC) breakdown of
   PSID.INT.I.  It is defined as follows:

   For a given Traffic Class (TC), an ingress interface (INT.I), and a
   remote prefix SID S, N MAY maintain counter(s) for the traffic
   received on I and forwarded towards S.

2.4.  SR Policy Counters

   Per-policy counters are maintained only at the policy head-end node.
   For each SR policy [I-D.filsfils-spring-segment-routing-policy], the
   head-end node maintains the following counters.

2.4.1.  Per-SR Policy aggregate traffic counter (POL)

   This counter includes both labelled and unlabelled steered traffic.
   It is defined as follows:

   For each SR policy (P), head-end node N MUST maintain counter(s) for
   the aggregate traffic steered onto P.

2.4.2.  Per-SR Policy labelled steered aggregate traffic counter
        (POL.BSID)

   This counter is defined as follows:

   For each SR policy (P), head-end node N SHOULD maintain counter(s)
   for the aggregate labelled traffic steered onto P.  Note that
   labelled steered traffic refers to incoming packets with an active
   SID matching a local BSID of an SR policy at the head-end.

2.4.3.  Per-SR Policy, per TC aggregate traffic counter (POL.TC)

   This counter provides a per Traffic Class (TC) breakdown of POL.  It
   is defined as follows:

   For each SR policy (P) and a given Traffic Class (TC), head-end node
   N SHOULD maintain counter(s) for the aggregate traffic (matching the
   traffic class TC criteria) steered onto P.

2.4.4.  Per-SR Policy, per TC labelled steered aggregate traffic
        counter (POL.BSID.TC)

   This counter provides a per Traffic Class (TC) breakdown of
   POL.BSID.  It is defined as follows:

   For each SR policy (P) and a given Traffic Class (TC), head-end node
   N MAY maintain counter(s) for the aggregate labelled traffic steered
   onto P.

2.4.5.  Per-SR Policy, per-Segment-List aggregate traffic counter
        (POL.SL)

   This counter is defined as follows:

   For each SR policy (P) and a given Segment-List (SL), head-end node
   N SHOULD maintain counter(s) for the aggregate traffic steered onto
   the Segment-List (SL) of P.

2.4.6.  Per-SR Policy, per-Segment-List labelled steered aggregate
        traffic counter (POL.SL.BSID)

   This counter is defined as follows:

   For each SR policy (P) and a given Segment-List (SL), head-end node
   N MAY maintain counter(s) for the aggregate labelled traffic steered
   onto the Segment-List (SL) of P.  Note that labelled steered traffic
   refers to incoming packets with an active SID matching a local BSID
   of an SR policy at the head-end.

3.  SR Traffic Matrix

   A traffic matrix T(N, M) is the amount of traffic entering the
   network at node N and leaving the network at node M, where N and M
   are border nodes at an arbitrarily defined boundary in the network.
   The TM border defines the arbitrary boundary nodes of a contiguous
   portion of the network across which service providers wish to
   measure traffic flows.  The traffic matrix (also called the demand
   matrix) contains all the demands crossing the TM border.  It has as
   many rows as ingress edge nodes and as many columns as egress edge
   nodes at the TM border.  The demand D(N, M) is the cell of the
   matrix at row N and column M.

   In order to facilitate network-level traffic matrix calculations,
   nodes positioned at the edge of the network boundary SHOULD support
   traffic matrix counters.  The nodes positioned within the network
   boundary are not required to support these counters.

3.1.  Traffic Matrix Border

   The service provider needs to establish a TM border in order to
   collect the traffic matrix.  The TM border defines the boundary
   nodes of a contiguous portion of the network across which the
   service provider wishes to measure traffic flows.  The TM border
   divides the network into two parts:

   o  Internal part: a contiguous part of the network that is located
      within the TM border.

   o  External part: anything outside of the TM border.

   The TM border cuts through nodes, resulting in two types of
   interfaces: internal and external.  Interfaces are internal if they
   are located inside the TM border; they are external if they are
   outside the TM border.

   A node implementing the Traffic Matrix SHOULD support the
   classification of any of its interfaces as internal or external.
   How a node marks its interfaces as external or internal is an
   implementation matter and beyond the scope of this document.

3.2.  Choosing the Traffic Matrix Border

   An operator can choose where the TM border is located.  Typically,
   this will be at the edge of the network, but it can also be placed
   at the aggregation layer.  Alternatively, an operator can use a
   separate TM border for each of their network domains, with each TM
   border cutting through different nodes; different TM borders cannot
   cut through the same nodes.

3.3.  Deriving the Demand Matrix

   The goal is to measure the volume of traffic that enters a TM border
   node n through an external interface and leaves through an external
   interface of another TM border node m.  This traffic volume yields
   the traffic matrix entry T(n, m).  Measuring this for every pair of
   TM border nodes (n, m) results in the complete traffic matrix.

   Service providers use various techniques to compute the traffic
   matrix, including a combination of collecting link utilization,
   gathering IPFIX data, collecting MPLS forwarding statistics, etc.  A
   service provider may also use the traffic matrix counters defined in
   this document for that purpose.  The usefulness and applicability of
   the Traffic Matrix do not depend on the TM collection mechanism.

3.4.  Traffic Matrix Counters

   As mentioned above, a Traffic Matrix (TM) provides, for every
   ingress point N into the network and every egress point M out of the
   network, the volume of traffic T(N, M) from N to M over a given time
   interval.  To measure the traffic matrix, nodes in an SR network
   designate their interfaces as either internal or external.

   When node N receives a packet destined to a remote prefix SID M, N
   maintains the following counters.  These counters include accounting
   due to push, pop, or swap operations.

3.4.1.  Per-Prefix SID Traffic Matrix counter (PSID.E.TM)

   This counter is defined as follows:

   For a given remote prefix SID M, N SHOULD maintain counter(s) for
   all the traffic received on any external interface and forwarded
   towards M.

3.4.2.  Per-Prefix SID, per TC Traffic Matrix counter (PSID.E.TM.TC)

   This counter provides a per Traffic Class (TC) breakdown of
   PSID.E.TM.  It is defined as follows:

   For a given Traffic Class (TC) and a remote prefix SID M, N SHOULD
   maintain counter(s) for all the traffic received on any external
   interface and forwarded towards M.

4.  Internet Protocol Flow Information Export

   Internet Protocol Flow Information Export (IPFIX) [RFC7011]
   [RFC7012] [RFC7013] [RFC7014] [RFC7015] is a standard for the export
   of Internet Protocol flow information.  IPFIX is extensively
   deployed and used by network management systems to facilitate
   services such as measurement, security, accounting, and billing.
   IPFIX also plays a vital role in traffic accounting in an SR
   network.  For example, IPFIX can be used for traffic accounting on
   an SR policy, without requiring any change to the SR-MPLS or IPFIX
   protocols.

5.  SR Traffic Accounting

   The SR counters, IPFIX data, Traffic Matrix, network topology
   information, node and link statistics, SR policy configuration,
   etc., constitute components of SR traffic accounting.  This section
   describes some potential uses of this information; other mechanisms
   also exist.

   One of the possible uses is centered around the traffic matrix.  An
   external controller collects the traffic counters, including the
   traffic matrix counters defined in Section 3, from the routers.
   Using the Traffic Matrix TM(N, M), the controller knows the exact
   traffic entering node N and leaving node M, where nodes N and M are
   edge nodes on an arbitrary TM border.  The controller also collects
   the network topology and SR policy configuration from the network.
   Using this information, the controller runs a local path calculation
   algorithm to map these demands onto the individual SR paths.  This
   enables the controller to determine the path that would be taken
   through the network (including ECMP paths) for any prefix at any
   node.  Specifically, the controller starts by distributing TM(N, M)
   equally among all ECMP paths from node N to node M.  By repeating
   the process for all entry and exit nodes in the network, the
   controller predicts how the demands are distributed among SR paths
   in the network.
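   As a non-normative illustration, the equal spreading of a demand
   TM(N, M) across ECMP paths described above could be sketched as
   follows.  The node names, paths, and volumes are hypothetical; a
   real controller would compute the ECMP paths from the collected
   topology:

```python
# Illustrative only: spread each demand TM(N, M) equally across the
# ECMP paths from N to M and accumulate the projected load per link.
from collections import defaultdict

def project_link_load(demands, ecmp_paths):
    """demands: {(n, m): volume}.
    ecmp_paths: {(n, m): [path, ...]}, each path a list of links.
    Returns the projected per-link load."""
    load = defaultdict(float)
    for (n, m), volume in demands.items():
        paths = ecmp_paths[(n, m)]
        share = volume / len(paths)   # equal spread among ECMP paths
        for path in paths:
            for link in path:
                load[link] += share
    return load

# Hypothetical example: 900 units from A to C over three ECMP paths.
demands = {("A", "C"): 900.0}
ecmp_paths = {("A", "C"): [[("A", "B"), ("B", "C")],
                           [("A", "D"), ("D", "C")],
                           [("A", "E"), ("E", "C")]]}
load = project_link_load(demands, ecmp_paths)
assert load[("A", "B")] == 300.0   # one third of the demand
```

   The projected per-link loads obtained this way can then be compared
   against the measured link and SR counters, as described below.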
   The assumption of equal distribution of the traffic demand is
   validated by correlating the projected load with the link and node
   statistics and the other traffic counters described in this
   document.  The various SR counters described in Section 2 provide a
   view of each segment's ingress and egress statistics at every node
   and link in the network, further supplemented by the SR policy
   statistics that are available at all head-end nodes.  The controller
   adjusts the predicted values accordingly.  How such adjustments are
   performed is beyond the scope of this document.  The predicted
   mapping of traffic to the individual SR paths may be used for
   several purposes, including simulating "what-if" scenarios,
   developing contingency and maintenance plans, managing network
   latency, and anticipating and preventing congestion.  For example,
   if there is congestion on the link between two nodes, the controller
   can identify the SR path causing the congestion and determine how to
   re-route it to relieve the congestion.

   Another possible use is built around the IPFIX data.  IPFIX can be
   used for traffic accounting on an SR policy, without requiring any
   change to the SR-MPLS or IPFIX protocols.  This is because when
   traffic is steered onto an SR policy, the steering is based on a
   match on the fields of the incoming packet.  A controller can
   replicate the matching criteria to account for the traffic received
   at the egress for the given SR policy.  The policy counters, the
   other traffic counters defined in Section 2, and information on
   packet loss over the policy can further supplement the IPFIX-based
   accounting for measuring, accounting, and billing on a per-policy
   basis.

   IPFIX information, when required and enabled, provides more granular
   visibility of network flows (including SR Policy flows) at any point
   in the network that can be correlated.  For example, IPFIX may be
   enabled on the nodes and links at the network edge (along similar
   lines as the nodes along the Traffic Matrix border) to analyze the
   flows entering and leaving a specific network region.  Additionally,
   it can also be enabled at any node or on a specific link within the
   network for analyzing the flows through it, either on demand or
   continuously.  IPFIX can also be enabled on the head-end nodes and
   endpoints of SR Policies in the network to analyze the flows steered
   through various policies.  Since IPFIX sampling also includes the
   MPLS label stack on the packet, the traffic flows for a specific SR
   Policy can also be determined at any intermediate link or node in
   the network, when necessary.

   Link-level statistics, derived using the ingress and egress counters
   (including the QoS counters on a per-TC basis), provide a view of
   link utilization, including for a specific class of service at any
   point.  This helps detect congestion for the link as a whole or for
   a specific class of service.

   In summary, a controller can analyze all of the above information
   together and correlate it with the predicted mapping of traffic to
   the individual SR paths.  The aggregate demands on the network and
   their paths can be determined and correlated with link utilization
   to identify the flows causing congestion on specific links.
   Visibility into all the flows on a link can be achieved using the SR
   counters, supplemented by IPFIX.

6.  Security Considerations

   This document does not define any new protocol extensions and does
   not impose any additional security challenges.

7.  IANA Considerations

   This document has no actions for IANA.

8.  Acknowledgement

   The authors would like to thank Kris Michielsen for his valuable
   comments and suggestions.

9.  Contributors

   The following people have substantially contributed to the editing
   of this document:

   Francois Clad
   Cisco Systems
   Email: fclad@cisco.com

   Faisal Iqbal
   Cisco Systems
   Email: faiqbal@cisco.com

10.  References

10.1.  Normative References

   [I-D.filsfils-spring-segment-routing-policy]
              Filsfils, C., Sivabalan, S., Raza, K., Liste, J., Clad,
              F., Talaulikar, K., Ali, Z., Hegde, S.,
              daniel.voyer@bell.ca, d., Lin, S., bogdanov@google.com,
              b., Krol, P., Horneffer, M., Steinberg, D., Decraene, B.,
              Litkowski, S., and P. Mattes, "Segment Routing Policy for
              Traffic Engineering", draft-filsfils-spring-segment-
              routing-policy-05 (work in progress), February 2018.

   [I-D.ietf-spring-segment-routing]
              Filsfils, C., Previdi, S., Ginsberg, L., Decraene, B.,
              Litkowski, S., and R. Shakir, "Segment Routing
              Architecture", draft-ietf-spring-segment-routing-15 (work
              in progress), January 2018.

   [RFC2119]  Bradner, S., "Key words for use in RFCs to Indicate
              Requirement Levels", BCP 14, RFC 2119,
              DOI 10.17487/RFC2119, March 1997,
              <https://www.rfc-editor.org/info/rfc2119>.

   [RFC7011]  Claise, B., Ed., Trammell, B., Ed., and P. Aitken,
              "Specification of the IP Flow Information Export (IPFIX)
              Protocol for the Exchange of Flow Information", STD 77,
              RFC 7011, DOI 10.17487/RFC7011, September 2013,
              <https://www.rfc-editor.org/info/rfc7011>.

   [RFC7012]  Claise, B., Ed. and B. Trammell, Ed., "Information Model
              for IP Flow Information Export (IPFIX)", RFC 7012,
              DOI 10.17487/RFC7012, September 2013,
              <https://www.rfc-editor.org/info/rfc7012>.

   [RFC7013]  Trammell, B. and B. Claise, "Guidelines for Authors and
              Reviewers of IP Flow Information Export (IPFIX)
              Information Elements", BCP 184, RFC 7013,
              DOI 10.17487/RFC7013, September 2013,
              <https://www.rfc-editor.org/info/rfc7013>.

   [RFC7014]  D'Antonio, S., Zseby, T., Henke, C., and L. Peluso, "Flow
              Selection Techniques", RFC 7014, DOI 10.17487/RFC7014,
              September 2013,
              <https://www.rfc-editor.org/info/rfc7014>.

   [RFC7015]  Trammell, B., Wagner, A., and B. Claise, "Flow
              Aggregation for the IP Flow Information Export (IPFIX)
              Protocol", RFC 7015, DOI 10.17487/RFC7015, September
              2013, <https://www.rfc-editor.org/info/rfc7015>.

10.2.  Informative References

   [I-D.ietf-idr-te-lsp-distribution]
              Previdi, S., Dong, J., Chen, M., Gredler, H., and J.
              Tantsura, "Distribution of Traffic Engineering (TE)
              Policies and State using BGP-LS", draft-ietf-idr-te-lsp-
              distribution-08 (work in progress), December 2017.

   [RFC7752]  Gredler, H., Ed., Medved, J., Previdi, S., Farrel, A.,
              and S. Ray, "North-Bound Distribution of Link-State and
              Traffic Engineering (TE) Information Using BGP",
              RFC 7752, DOI 10.17487/RFC7752, March 2016,
              <https://www.rfc-editor.org/info/rfc7752>.

   [RFC8231]  Crabbe, E., Minei, I., Medved, J., and R. Varga, "Path
              Computation Element Communication Protocol (PCEP)
              Extensions for Stateful PCE", RFC 8231,
              DOI 10.17487/RFC8231, September 2017,
              <https://www.rfc-editor.org/info/rfc8231>.

   [Traffic-Matrices]
              Schnitter, S. and M. Horneffer, "Traffic Matrices for
              MPLS Networks with LDP Traffic Statistics",
              Proc. Networks 2004, VDE-Verlag, 2004.

Authors' Addresses

   Zafar Ali
   Cisco Systems, Inc.

   Email: zali@cisco.com

   Clarence Filsfils
   Cisco Systems, Inc.

   Email: cfilsfil@cisco.com

   Ketan Talaulikar
   Cisco Systems, Inc.

   Email: ketant@cisco.com

   Siva Sivabalan
   Cisco Systems, Inc.

   Email: msiva@cisco.com

   Jose Liste
   Cisco Systems, Inc.

   Email: jliste@cisco.com

   Martin Horneffer
   Deutsche Telekom

   Email: martin.horneffer@telekom.de

   Robert Raszuk
   Bloomberg LP

   Email: robert@raszuk.net

   Stephane Litkowski
   Orange Business Services

   Email: stephane.litkowski@orange.com

   Gaurav Dawra
   LinkedIn

   Email: gdawra.ietf@gmail.com

   Daniel Voyer
   Bell Canada

   Email: daniel.voyer@bell.ca

   Rick Morton
   Bell Canada

   Email: rick.morton@bell.ca