2 ippm R. Geib, Ed. 3 Internet-Draft Deutsche Telekom 4 Intended status: Standards Track July 3, 2020 5 Expires: January 4, 2021 7 A Connectivity Monitoring Metric for IPPM 8 draft-geib-ippm-connectivity-monitoring-03 10 Abstract 12 Within a Segment Routing domain, segment routed measurement packets 13 can be sent along pre-determined paths. This enables new kinds of 14 measurements. 
Connectivity monitoring makes it possible to supervise the state 15 and performance of a connection or a (sub)path from one or a few 16 central monitoring systems. This document specifies a suitable 17 type-P connectivity monitoring metric. 19 Status of This Memo 21 This Internet-Draft is submitted in full conformance with the 22 provisions of BCP 78 and BCP 79. 24 Internet-Drafts are working documents of the Internet Engineering 25 Task Force (IETF). Note that other groups may also distribute 26 working documents as Internet-Drafts. The list of current Internet- 27 Drafts is at https://datatracker.ietf.org/drafts/current/. 29 Internet-Drafts are draft documents valid for a maximum of six months 30 and may be updated, replaced, or obsoleted by other documents at any 31 time. It is inappropriate to use Internet-Drafts as reference 32 material or to cite them other than as "work in progress." 34 This Internet-Draft will expire on January 4, 2021. 36 Copyright Notice 38 Copyright (c) 2020 IETF Trust and the persons identified as the 39 document authors. All rights reserved. 41 This document is subject to BCP 78 and the IETF Trust's Legal 42 Provisions Relating to IETF Documents 43 (https://trustee.ietf.org/license-info) in effect on the date of 44 publication of this document. Please review these documents 45 carefully, as they describe your rights and restrictions with respect 46 to this document. Code Components extracted from this document must 47 include Simplified BSD License text as described in Section 4.e of 48 the Trust Legal Provisions and are provided without warranty as 49 described in the Simplified BSD License. 51 Table of Contents 53 1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . 2 54 1.1. Requirements Language . . . . . . . . . . . . . . . . . . 4 55 2. A brief segment routing connectivity monitoring framework . . 4 56 3. Singleton Definition for Type-P-SR-Path-Connectivity-and- 57 Congestion . . . . . . . . . . . . . . . . . . . . . . . . . 
7 58 3.1. Metric Name . . . . . . . . . . . . . . . . . . . . . . . 7 59 3.2. Metric Parameters . . . . . . . . . . . . . . . . . . . . 7 60 3.3. Metric Units . . . . . . . . . . . . . . . . . . . . . . 8 61 3.4. Definition . . . . . . . . . . . . . . . . . . . . . . . 8 62 3.5. Discussion . . . . . . . . . . . . . . . . . . . . . . . 8 63 3.6. Methodologies . . . . . . . . . . . . . . . . . . . . . . 9 64 3.7. Errors and Uncertainties . . . . . . . . . . . . . . . . 10 65 3.8. Reporting the Metric . . . . . . . . . . . . . . . . . . 11 66 4. Singleton Definition for Type-P-SR-Path-Round-Trip-Delay- 67 Estimate . . . . . . . . . . . . . . . . . . . . . . . . . . 11 68 5. IANA Considerations . . . . . . . . . . . . . . . . . . . . . 12 69 6. Security Considerations . . . . . . . . . . . . . . . . . . . 12 70 7. References . . . . . . . . . . . . . . . . . . . . . . . . . 12 71 7.1. Normative References . . . . . . . . . . . . . . . . . . 12 72 7.2. Informative References . . . . . . . . . . . . . . . . . 13 73 Author's Address . . . . . . . . . . . . . . . . . . . . . . . . 13 75 1. Introduction 77 Within a Segment Routing domain, measurement packets can be sent 78 along pre-determined segment routed paths [RFC8402]. A segment 79 routed path may consist of pre-determined sub-paths, specific router 80 interfaces or a combination of both. A measurement path may also 81 consist of sub-paths spanning multiple routers, given that all 82 segments to address a desired path are available and known at the SR 83 domain edge interface. 85 A Path Monitoring System or PMS (see [RFC8403]) is a dedicated 86 central Segment Routing (SR) domain monitoring device (as compared to 87 a distributed monitoring approach based on router data and functions 88 only). Monitoring individual sub-paths or point-to-point connections 89 is executed for different purposes. The IGP exchanges hello messages 90 between neighbors to keep routing alive and to swiftly adapt routing 91 to topology changes. 
Network Operators may be interested in monitoring 92 connectivity and congestion of interfaces or sub-paths at a timescale 93 of seconds, minutes or hours. In both cases, the monitoring interval is 94 significantly shorter than that of commodity interface monitoring based on 95 router counters, which may be collected on a minute timescale to keep 96 the processor- or monitoring data-load low. 98 The IPPM architecture was a first step in that direction [RFC2330]. 99 Commodity IPPM solutions require dedicated measurement systems, a 100 large number of measurement agents and synchronised clocks. 101 Monitoring a domain from edge to edge by commodity IPPM solutions 102 increases scalability of the monitoring system. But localising the 103 site of a detected change in network behaviour may then require 104 network tomography methods. 106 The IPPM Metrics for Measuring Connectivity offer generic 107 connectivity metrics [RFC2678]. These metrics allow measurement of 108 connectivity between end nodes without making any assumption on the 109 paths between them. The metric and the type-p packet specified by 110 this document follow a different approach: they are designed to 111 monitor connectivity and performance of a specific single link or a 112 path segment. The underlying definition of connectivity is partially 113 the same: a packet not reaching a destination indicates a loss of 114 connectivity. An IGP re-route may indicate a loss of a link, while 115 it might not cause loss of connectivity between end systems. The 116 metric specified here enables link-loss detection, if the 117 end-to-end delay along a new route differs from that of the 118 original path. 120 A Segment Routing PMS which is part of an SR domain is IGP topology 121 aware, covering the IP and (if present) the MPLS layer topology 122 [RFC8402]. This makes it possible to steer PMS measurement packets along 123 arbitrary pre-determined concatenated sub-paths, identified by 124 suitable segments. 
Basically, a number of overlaid measurement paths 125 are set up. The delays of packets sent along each one of these paths 126 are measured. Single changes in topology cause correlated changes in 127 the measurement packet delay (or packet loss) of different 128 measurement paths. With a suitable set-up, the number of measurement 129 paths may be limited to one per connection (or sub-path) to be 130 monitored. In addition to information revealed by a commodity ICMP 131 ping measurement, the metric and method specified here identify the 132 location of a congested interface. To do so, tomography assumptions 133 and methods are combined to first plan the overlaid SR measurement 134 path set-up and later on to evaluate the captured delay measurements. 136 This document specifies a type-p metric determining properties of an 137 SR path which allows monitoring connectivity and congestion of 138 interfaces and further allows locating the path or interface which 139 caused a change in the reported type-p metric. This document is 140 focused on the MPLS layer, but the methodology may be applied within 141 SR domains or MPLS domains in general. 143 1.1. Requirements Language 145 The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", 146 "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this 147 document are to be interpreted as described in RFC 2119 [RFC2119]. 149 2. A brief segment routing connectivity monitoring framework 151 The Segment Routing IGP topology information consists of the IP and 152 (if present) the MPLS layer topology. The minimum SR topology 153 information consists of Node-Segment-Identifiers (Node-SID), 154 identifying an SR router. The IGP exchange of Adjacency-SIDs [I- D.draft-ietf-isis-segment-routing-extensions], which identify local 156 interfaces to adjacent nodes, is optional. It is RECOMMENDED to 157 distribute Adj-SIDs in a domain operating a PMS to monitor 158 connectivity as specified below. 
If Adj-SIDs aren't available, 159 [RFC8029] provides methods to steer packets along desired paths 160 by the proper choice of an MPLS Echo-request IP-destination address. 161 A detailed description of [RFC8029] methods as a replacement of Adj- 162 SIDs is out of scope of this document. 164 A round trip measurement between two adjacent nodes is a simple 165 method to monitor connectivity of a connecting link. If multiple 166 links are operational between two adjacent nodes and only a single 167 one fails, a single plain round trip measurement may fail to identify 168 which link has failed. A round trip measurement also fails to 169 identify which interface is congested, even if only a single link 170 connects two adjacent nodes. 172 Segment Routing enables the set-up of extended measurement loops. 173 Several different measurement loops can be set up. If these form a 174 partial overlay, any change in the network properties impacts more 175 than a single loop's round trip time (or causes drops of packets of 176 more than one loop). Randomly chosen loop paths including the 177 interfaces or paths to be monitored may fail to produce unique result 178 patterns. The approach picked here uses a specified measurement loop 179 and path overlay design. A centralised monitoring approach benefits 180 from keeping the number of required measurement loops low: this 181 improves scalability and also keeps the number of required packets 182 and results to be 183 evaluated and correlated low. 185 An additional property of the measurement path set-up specified below 186 is that it allows estimation of the packet round-trip and one-way 187 delay of a monitored link (or path). The delay along a single link 188 is not perfectly symmetric. Packet processing causes small delay 189 differences per interface and direction. These cause an error, which 190 can't be quantified or removed by the specified method. 
Quantifying 191 this error requires a different measurement set-up. As this will 192 introduce additional measurement loops, packets and evaluations, the 193 cost in terms of reduced scalability is not considered worth the 194 benefit in measurement accuracy. IPPM however honors precision more 195 than accuracy, and the mentioned processing differences are relatively 196 stable, resulting in relatively precise delay estimates. 198 An example SR domain is shown below. The PMS shown should monitor 199 the connectivity of all 6 links between nodes L100 and L200 on one 200 side and the connected nodes L050, L060 and L070 on the other side. 201 The round trip times per measurement loop are assumed to exhibit 202 unique delays. 204 +---+ +----+ +----+ 205 |PMS| |L100|-----|L050| 206 +---+ +----+\ /+----+ 207 | / \ \_/_____ 208 | / \ / \+----+ 209 +----+/ \/_ +----|L060| 210 |L300| / |/ +----+ 211 +----+\ / /\_ 212 \ / / \ 213 \+----+ / +----+ 214 |L200|-----|L070| 215 +----+ +----+ 217 Connectivity verification with a PMS 219 Figure 1 221 The SID values are picked for convenient reading only. Node-SID: 100 222 identifies L100, Node-SID: 300 identifies L300 and so on. Adj-SID 223 10050: Adjacency L100 to L050, Adj-SID 10060: Adjacency L100 to L060, 224 Adj-SID 60200: Adjacency L060 to L200. 226 Monitoring the 6 links between Ln00 and L0m0 nodes requires 6 227 measurement loops, each of which has the following properties: 229 o Each loop follows a single round trip from one Ln00 to one L0m0 230 (e.g., between L100 and L050). 
232 o Each loop passes two more links: one between that Ln00 and another 233 L0m0 and from there to the other Ln00 (e.g., between L100 and L060 234 and then L060 to L200) 236 o Every link is passed by the round trip of exactly one measurement 237 loop and unidirectionally by exactly two other loops, the 238 latter two passing in opposing directions (that's three loops 239 passing each single link, e.g., one having a round trip L100 to 240 L050 and back, a second passing L100 to L050 only and a third loop 241 passing L050 to L100 only). 243 Note that any 6 links between two to six nodes can be monitored that 244 way too (if multiple parallel links between two nodes are monitored, 245 the differences in delay may require a sufficiently high clock 246 resolution, if applicable). 248 This results in 6 measurement loops for the given example (the start 249 and end of each measurement loop is PMS to L300 to L100 or L200 and a 250 similar sub-path on the return leg. It is omitted here for 251 brevity): 253 1. M1 is the delay along L100 -> L050 -> L100 -> L060 -> L200 255 2. M2 is the delay along L100 -> L060 -> L100 -> L070 -> L200 257 3. M3 is the delay along L100 -> L070 -> L100 -> L050 -> L200 259 4. M4 is the delay along L200 -> L050 -> L200 -> L060 -> L100 261 5. M5 is the delay along L200 -> L060 -> L200 -> L070 -> L100 263 6. M6 is the delay along L200 -> L070 -> L200 -> L050 -> L100 265 An example segment stack for the loop capturing M1, consisting of 266 Node-SID segments, is (top to bottom): 100 | 050 | 100 | 060 | 267 200 | PMS. 269 An example stack of Adj-SID segments for the loop resulting in M1 270 is (top to bottom): 100 | 10050 | 50100 | 10060 | 60200 | PMS. As 271 can be seen, the Node-SIDs 100 and PMS are present at top and bottom 272 of the segment stack. Their purpose is to transport the packet from 273 the PMS to the start of the measurement loop at L100 and return it to 274 the PMS from its end. 
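The claim that every single failure or congestion event maps to a unique set of affected loops can be checked mechanically. The following sketch (Python; the loop and node names follow Figure 1, while the data structures and helper names are purely illustrative) encodes M1 to M6 as lists of directed hops and verifies the stated properties:

```python
# The six measurement loops from the example, encoded as the directed hops
# they take between the Ln00 and L0m0 nodes (the PMS/L300 legs are omitted,
# as in the text above).
loops = {
    "M1": [("L100", "L050"), ("L050", "L100"), ("L100", "L060"), ("L060", "L200")],
    "M2": [("L100", "L060"), ("L060", "L100"), ("L100", "L070"), ("L070", "L200")],
    "M3": [("L100", "L070"), ("L070", "L100"), ("L100", "L050"), ("L050", "L200")],
    "M4": [("L200", "L050"), ("L050", "L200"), ("L200", "L060"), ("L060", "L100")],
    "M5": [("L200", "L060"), ("L060", "L200"), ("L200", "L070"), ("L070", "L100")],
    "M6": [("L200", "L070"), ("L070", "L200"), ("L200", "L050"), ("L050", "L100")],
}

def failure_signature(link):
    """Loops affected when the bidirectional link (a, b) fails."""
    a, b = link
    return {m for m, hops in loops.items() if (a, b) in hops or (b, a) in hops}

def congestion_signature(hop):
    """Loops affected when the single directed interface 'hop' is congested."""
    return {m for m, hops in loops.items() if hop in hops}

links = [("L100", "L050"), ("L100", "L060"), ("L100", "L070"),
         ("L200", "L050"), ("L200", "L060"), ("L200", "L070")]

fail_sigs = {l: failure_signature(l) for l in links}
cong_sigs = {h: congestion_signature(h)
             for l in links for h in (l, (l[1], l[0]))}

# Every link failure hits exactly three loops, every congested interface
# exactly two, and no two events share the same signature.
assert all(len(s) == 3 for s in fail_sigs.values())
assert all(len(s) == 2 for s in cong_sigs.values())
all_sigs = [frozenset(s) for s in list(fail_sigs.values()) + list(cong_sigs.values())]
assert len(set(all_sigs)) == len(all_sigs)

print(sorted(fail_sigs[("L200", "L050")]))  # ['M3', 'M4', 'M6']
print(sorted(cong_sigs[("L070", "L100")]))  # ['M3', 'M5']
```

The two printed signatures match the worked examples given later in this section (loss of the L200-L050 link affecting loops 3, 4 and 6; congestion of the L070 to L100 interface affecting loops 3 and 5).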
276 The measurement loops set up as shown have the following properties: 278 o If the loops are set up using Node-SIDs only, any single complete 279 loss of connectivity caused by a failing single link between any 280 Ln00 and any L0m0 node briefly disturbs three loops (and changes 281 their measured delay). Traffic to Node-SIDs is rerouted. 283 o If the loops are set up using Adj-SIDs only (and Node-SIDs only to 284 send the packet from PMS to the loop starting point and from the 285 loop end back to the PMS), any single complete loss of 286 connectivity caused by a failing single link between any Ln00 and 287 any L0m0 node terminates the traffic along three loops. The 288 packets of these loops will be dropped, until the link gets back 289 into service. Traffic to Adj-SIDs is not rerouted. 291 o Any congested single interface between any Ln00 and any L0m0 node 292 only impacts the measured delay of two measurement loops. 294 o As an example, the formula estimating a single Round Trip Delay (RTD) is 295 shown here: 4 * RTD_L100-L050-L100 = 3 * M1 + M3 + M6 - M2 - M4 - 296 M5 298 A closer look reveals that each single event of interest for the 299 proposed metric, which is a loss of connectivity or a case of 300 congestion, impacts a unique, a-priori determinable set 301 of measurement loops. If, e.g., connectivity is lost between L200 302 and L050, measurement loops (3), (4) and (6) indicate a change in the 303 measured delay. 305 As a second example, if the interface L070 to L100 is congested, 306 measurement loops (3) and (5) indicate a change in the 307 measured delay. Without listing all events, all cases of single losses of 308 connectivity or single events of congestion influence only delay 309 measurements of a unique set of measurement loops. 311 A congestion event adding latency to two specific measurement loops 312 allows calculation of the delay added by the queue at the congested 313 interface. 
Thus, the resulting RTD increase can be assigned to a 314 single interface. 316 3. Singleton Definition for Type-P-SR-Path-Connectivity-and-Congestion 318 3.1. Metric Name 320 Type-P-SR-Path-Connectivity-and-Congestion 322 3.2. Metric Parameters 324 o Src, the IP address of a source host 326 o Dst, the IP address of a destination host if IP routing is 327 applicable; in the case of MPLS routing, a diagnostic address as 328 specified by [RFC8029] 330 o T, a time 332 o lambda, a rate in reciprocal seconds 333 o L, a packet length in bits. The packets of a Type P packet stream 334 from which the sample Path-Connectivity-and-Congestion metric is 335 taken MUST all be of the same length. 337 o MLA, Monitoring Loop Address information ensuring that a 338 singleton passes a single sub-path_a to be monitored 339 bidirectionally, a sub-path_b to be monitored unidirectionally and a 340 sub-path_c to be monitored unidirectionally, where sub-path_a, -_b 341 and -_c MUST NOT be identical. 343 o P, the specification of the packet type, over and above the source 344 and destination addresses 346 o DS, a constant time interval between two type-P packets 348 3.3. Metric Units 350 A sequence of consecutive time values. 352 3.4. Definition 354 A moving average of AV time values per measurement path is compared 355 by a change point detection algorithm. The temporal packet spacing 356 value DS represents the smallest period within which a change in 357 connectivity or congestion may be detected. 359 A single loss of connectivity of a sub-path between two nodes affects 360 three different measurement paths. Depending on the value chosen for 361 DS, packet loss might occur (note that the moving average evaluation 362 needs to span a longer period than convergence time; alternatively, 363 packet-loss visible along the three measurement paths may serve as an 364 evaluation criterion). After routing convergence the type-p packets 365 along the three measurement paths show a change in delay. 
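A minimal sketch of this evaluation (Python; the window size AV, the deviation threshold and all delay values are illustrative assumptions, not values taken from this document) compares each new delay sample of a measurement path against a moving average and classifies an event by the number of simultaneously changed paths:

```python
from collections import deque

AV = 5            # samples per moving average (illustrative)
THRESHOLD = 1.5   # flag a change when a sample deviates this much, in ms

class PathMonitor:
    """Tracks one measurement path's delay samples (one per interval DS)."""
    def __init__(self):
        self.window = deque(maxlen=AV)

    def update(self, delay_ms):
        """Return True if delay_ms deviates from the current moving average."""
        changed = (len(self.window) == AV and
                   abs(delay_ms - sum(self.window) / AV) > THRESHOLD)
        self.window.append(delay_ms)
        return changed

def classify(changed_paths):
    """Three simultaneously changed paths suggest a connectivity change,
    two a congested interface, per the definitions in this section."""
    if len(changed_paths) == 3:
        return "connectivity change"
    if len(changed_paths) == 2:
        return "congestion"
    return None

# Example: paths M3 and M5 both jump by 2 ms, so congestion is reported.
monitors = {m: PathMonitor() for m in ("M1", "M2", "M3", "M4", "M5", "M6")}
baseline = {"M1": 10.0, "M2": 11.0, "M3": 12.0, "M4": 10.5, "M5": 11.5, "M6": 12.5}
for _ in range(AV):                       # fill each moving-average window
    for m, mon in monitors.items():
        mon.update(baseline[m])
changed = {m for m, mon in monitors.items()
           if mon.update(baseline[m] + (2.0 if m in ("M3", "M5") else 0.0))}
print(sorted(changed), "->", classify(changed))  # ['M3', 'M5'] -> congestion
```

A production change-point detector would be more robust than a fixed threshold, but the two-path/three-path classification step would remain the same.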
367 A congestion of a single interface of a sub-path connecting two nodes 368 affects two different measurement paths. The type-p packets 369 along the two congested measurement paths show an additional change 370 in delay. 372 3.5. Discussion 374 Detection of multiple losses of monitored sub-path connectivity or 375 congestion of multiple monitored sub-paths may be possible. These 376 cases have not been investigated, but may occur in the case of Shared 377 Risk Link Groups. Monitoring Shared Risk Link Groups and sub-paths 378 with multiple failures and congestion is not within scope of this 379 document. 381 3.6. Methodologies 383 For the given type-p, the methodology is as follows: 385 o The set of measurement paths MUST be routed in a way that each 386 single loss of connectivity and each case of single interface 387 congestion of one of the sub-paths passed by a type-p packet 388 creates a unique pattern: the type-p packets belonging to a subset 389 of all configured measurement paths indicate a change in the 390 measured delay. As a minimum, each sub-path to be monitored MUST 391 be passed 395 * by one measurement_path_1 and its type-p packet in 396 both directions 398 * by one measurement_path_2 and its type-p packet in "downlink" 399 direction 401 * by one measurement_path_3 and its type-p packet in "uplink" 402 direction 404 o "Uplink" and "Downlink" have no architectural relevance. The 405 terms are chosen to express that the packets of 406 measurement_path_2 and measurement_path_3 pass the monitored sub- 407 path unidirectionally in opposing directions. Measurement_path_1, 408 measurement_path_2 and measurement_path_3 MUST NOT be identical. 410 o All measurement paths SHOULD terminate between identical sender 411 and receiver interfaces. It is recommended to connect the sender 412 and receiver as closely to the paths to be monitored as possible. 
413 Each intermediate sub-path between the sender and receiver on one 414 hand and the sub-paths to be monitored on the other is an additional 415 source of errors requiring separate monitoring. 417 o Segment Routed domains supporting Node- and Adj-SIDs should enable 418 the monitoring path set-up as specified. Other routing protocols 419 may be used as well, but the monitoring path set-up might be 420 complex or impossible. 422 o Pre-compute how the two and three measurement path delay changes 423 correlate to sub-path connectivity and congestion patterns. 424 Absolute change values aren't required; a simultaneous change of 425 two or three particular measurement paths is. 427 o Ensure that the temporal resolution of the measurement clock 428 allows reliable capture of a unique delay value for each 429 configured measurement path while sub-path connectivity is 430 complete and no congestion is present. 432 o Synchronised clocks are not strictly required, as the metric is 433 evaluating differences in delay. Changes in clock synchronisation 434 SHOULD NOT be close to the time interval within which changes in 435 connectivity or congestion should be monitored. 437 o At the Src host, select Src and Dst IP addresses, and address 438 information to route the type-p packet along one of the configured 439 measurement paths. Form a test packet of Type-P with these 440 addresses. 442 o Configure the Dst host access to receive the packet. 444 o At the Src host, place a timestamp, a sequence number and a unique 445 identifier of the measurement path in the prepared Type-P packet, 446 and send it towards Dst. 448 o Capture the one-way delay and determine packet-loss by the metrics 449 specified by [RFC7679] and [RFC7680] respectively and store the 450 result for the path. 452 o If two or three sub-paths indicate a change in delay, report a 453 change in connectivity or congestion status as pre-computed above. 
458 Note that monitoring 6 sub-paths requires setting up 6 monitoring 459 paths as shown in the figure above. 461 3.7. Errors and Uncertainties 463 Sources of error are: 465 o Measurement paths whose delays don't indicate a change after sub- 466 path connectivity changed. 468 o A timestamp whose resolution is insufficient or inaccurate for the 469 delays measured along the different monitoring paths. 471 o Multiple simultaneous losses of sub-path connectivity or events of 472 congestion. 473 o Loss of connectivity and congestion along sub-paths connecting the 474 measurement device(s) with the sub-paths to be monitored. 476 3.8. Reporting the Metric 478 The metric reports loss of connectivity of a monitored sub-path or 479 congestion of an interface and identifies the sub-path and the 480 direction of traffic in the case of congestion. 482 The temporal resolution of the detected events depends on the spacing 483 interval of packets transmitted per measurement path. An identical 484 sending interval is chosen for every measurement path. As a rule of 485 thumb, an event is reliably detected if a sample consists of at least 486 5 probes indicating the same underlying change in behavior. 487 Depending on the underlying event either two or three measurement 488 paths are impacted. At least two consecutively received measurement 489 packets per measurement path should suffice to indicate a change. 490 The values chosen for an operational network will have to reflect 491 scalability constraints of a PMS measurement interface. As an 492 example, a PMS may work reliably if no more than one measurement 493 packet is transmitted per millisecond. Further, measurement is 494 configured so that the measurement packets return to the sender 495 interface. Assume groups of 6 links are always monitored, as 496 described above, by 6 measurement paths. 
If one packet is sent per 497 measurement path within 500 ms, up to 498 links can be monitored with 498 a reliable temporal resolution of roughly one second per detected 499 event (at one packet per millisecond, 500 packets fit into the 500 ms 500 interval, enough for 498 measurement paths in 83 groups of 6). 501 Note that the per-group measurement packet spacing, the measurement loop 502 delay differences and the latency caused by congestion impact the 503 reporting interval. If each measurement path of a single 6-link 504 monitoring group is addressed in consecutive milliseconds (within the 505 500 ms interval) and the sum of the maximum physical delay of the per- 506 group measurement paths and latency possibly added by congestion is 507 below 490 ms, the one-second reports reliably capture 4 packets of 508 two different measurement paths, if two measurement paths are 509 congested, or 6 packets of three different measurement paths, if a 510 link is lost. 512 A variety of reporting options exist, if scalability issues and 513 network properties are respected. 515 4. Singleton Definition for Type-P-SR-Path-Round-Trip-Delay-Estimate 517 This section will be added in a later version, if there's interest in 518 picking up this work. 520 5. IANA Considerations 522 If standardised, the metric will require an entry in the IPPM metric 523 registry. 525 6. Security Considerations 527 This draft specifies how to use methods specified or described within 528 [RFC8402] and [RFC8403]. It does not introduce new or additional SR 529 features. The security considerations of both references apply here 530 too. 532 7. References 534 7.1. Normative References 536 [RFC2119] Bradner, S., "Key words for use in RFCs to Indicate 537 Requirement Levels", BCP 14, RFC 2119, 538 DOI 10.17487/RFC2119, March 1997, 539 . 541 [RFC2678] Mahdavi, J. and V. Paxson, "IPPM Metrics for Measuring 542 Connectivity", RFC 2678, DOI 10.17487/RFC2678, September 543 1999, . 545 [RFC7679] Almes, G., Kalidindi, S., Zekauskas, M., and A. 
Morton, 546 Ed., "A One-Way Delay Metric for IP Performance Metrics 547 (IPPM)", STD 81, RFC 7679, DOI 10.17487/RFC7679, January 548 2016, . 550 [RFC7680] Almes, G., Kalidindi, S., Zekauskas, M., and A. Morton, 551 Ed., "A One-Way Loss Metric for IP Performance Metrics 552 (IPPM)", STD 82, RFC 7680, DOI 10.17487/RFC7680, January 553 2016, . 555 [RFC8029] Kompella, K., Swallow, G., Pignataro, C., Ed., Kumar, N., 556 Aldrin, S., and M. Chen, "Detecting Multiprotocol Label 557 Switched (MPLS) Data-Plane Failures", RFC 8029, 558 DOI 10.17487/RFC8029, March 2017, 559 . 561 [RFC8402] Filsfils, C., Ed., Previdi, S., Ed., Ginsberg, L., 562 Decraene, B., Litkowski, S., and R. Shakir, "Segment 563 Routing Architecture", RFC 8402, DOI 10.17487/RFC8402, 564 July 2018, . 566 7.2. Informative References 568 [RFC2330] Paxson, V., Almes, G., Mahdavi, J., and M. Mathis, 569 "Framework for IP Performance Metrics", RFC 2330, 570 DOI 10.17487/RFC2330, May 1998, 571 . 573 [RFC8403] Geib, R., Ed., Filsfils, C., Pignataro, C., Ed., and N. 574 Kumar, "A Scalable and Topology-Aware MPLS Data-Plane 575 Monitoring System", RFC 8403, DOI 10.17487/RFC8403, July 576 2018, . 578 Author's Address 580 Ruediger Geib (editor) 581 Deutsche Telekom 582 Heinrich Hertz Str. 3-7 583 Darmstadt 64295 584 Germany 586 Phone: +49 6151 5812747 587 Email: Ruediger.Geib@telekom.de