Internet Engineering Task Force                                S. Amante
Internet-Draft                             Level 3 Communications, Inc.
Intended status: Informational                                 J. Medved
Expires: April 5, 2013                               Cisco Systems, Inc.
                                                               T. Nadeau
                                                        Juniper Networks
                                                         October 2, 2012

                         Topology API Use Cases
                 draft-amante-irs-topology-use-cases-00

Abstract

   This document describes use cases for gathering routing, forwarding
   and policy information (hereafter referred to as topology
   information) about the network, and for reflecting changes to the
   topology back into the network and related systems.  It describes
   several applications that need to view or change the topology of the
   underlying physical or logical network.  This document further
   demonstrates the need for a "Topology Manager" and related functions
   that collect topology data from network elements and other data
   sources, coalesce the collected data into a coherent view of the
   overall network topology, and normalize the network topology view
   for use by clients -- namely, applications that consume or want to
   change topology information.

Status of this Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF).  Note that other groups may also distribute
   working documents as Internet-Drafts.  The list of current Internet-
   Drafts is at http://datatracker.ietf.org/drafts/current/.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.
   It is inappropriate to use Internet-Drafts as reference material or
   to cite them other than as "work in progress."

   This Internet-Draft will expire on April 5, 2013.

Copyright Notice

   Copyright (c) 2012 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document.  Code Components extracted from this
   document must include Simplified BSD License text as described in
   Section 4.e of the Trust Legal Provisions and are provided without
   warranty as described in the Simplified BSD License.

Table of Contents

   1.  Introduction
     1.1.  Statistics Collection
     1.2.  Inventory Collection
     1.3.  Requirements Language
   2.  Terminology
   3.  Orchestration, Collection & Presentation Framework
     3.1.  Overview
     3.2.  Topology Manager
     3.3.  Policy Manager
     3.4.  Orchestration Manager
   4.  Use Cases
     4.1.  Virtualized Views of the Network
       4.1.1.  Capacity Planning and Traffic Engineering
       4.1.2.  Services Provisioning
       4.1.3.  Rapid IP Renumbering, AS Migration
       4.1.4.  Troubleshooting & Monitoring
     4.2.  Path Computation Element (PCE)
     4.3.  ALTO Server
   5.  Acknowledgements
   6.  IANA Considerations
   7.  Security Considerations
   8.  References
     8.1.  Normative References
     8.2.  Informative References
   Authors' Addresses

1.  Introduction

   In today's networks, a variety of applications, such as Traffic
   Engineering, Capacity Planning, Security Auditing or Services
   Provisioning (for example, Virtual Private Networks), have a common
   need to acquire and consume network topology information.
   Unfortunately, these applications are typically vertically
   integrated: each uses its own proprietary normalized view of the
   network and its own proprietary data collectors, interpreters and
   adapters, which speak a variety of protocols (SNMP, CLI, SQL, etc.)
   directly to network elements and to back-office systems.  While some
   topological information can be distributed using routing protocols,
   it is not desirable for some of these applications to understand or
   participate in routing protocols.

   This approach is highly inefficient, for several reasons.
   First, developers must write duplicate "network discovery"
   functions, which then become challenging to maintain over time,
   particularly when new equipment is first introduced to the network.
   Second, since there is no common "vocabulary" to describe the
   various components in the network, such as physical links, logical
   links, or IP prefixes, each application has its own data model.  To
   solve this, some solutions have distributed this information in the
   normalized form of routing distribution.  However, this information
   still does not include "inactive" topological information, i.e.,
   information considered to be part of a network's inventory.

   These limitations leave applications unable to easily exchange
   information with each other.  For example, applications cannot share
   changes with each other that are (to be) applied to the physical
   and/or logical network, such as the installation of new physical
   links or the deployment of security ACLs.  Each application must
   frequently poll network elements and other data sources to ensure
   that it has a consistent representation of the network so that it
   can carry out its particular domain-specific tasks.  In other cases,
   applications that cannot speak routing protocols must use
   proprietary CLIs or other management interfaces, which represent the
   topological information in non-standard formats or, worse, non-
   standard semantic models.

   Overall, the software architecture described above at best results
   in inefficient use of both software developer resources and network
   resources and, at worst, results in some applications simply not
   having access to this information.

   Figure 1 illustrates how individual applications collect data from
   the underlying network.  Applications retrieve inventory, network
   topology, state and statistics information by communicating directly
   with the underlying Network Elements, as well as with intermediary
   proxies of that information.  In addition, applications transmit
   changes required of a Network Element's configuration and/or state
   directly to individual Network Elements, most commonly using CLI or
   NETCONF.  It is important to note that the "data models", or
   semantics, of the information contained within Network Elements are
   largely proprietary with respect to most configuration and state
   information, which is why a proprietary CLI is often the only choice
   for reflecting changes in an NE's configuration or state.  This
   remains the case even when standards-based mechanisms such as
   NETCONF are used: they provide a standard syntax, but still fall
   short due to the proprietary semantics associated with the internal
   representation of the information.
                              +---------------+
                             +----------------+ |
                             |  Applications  |-+
                             +----------------+
                               ^      ^      ^
               SQL, RPC, ReST  #      |      *  SQL, RPC, ReST ...
        ########################      |      ************************
        #                             |                             *
  +------------+                      |                      +------------+
  | Statistics |                      |                      | Inventory  |
  | Collection |                      |                      | Collection |
  +------------+                      |                      +------------+
        ^                             |  NETCONF, SNMP,             ^
        |                             |  CLI, TL1, ...              |
        +-----------------------------+-----------------------------+
        |                             |                             |
        V                             V                             V
  +----------------+        +----------------+       +----------------+
  | Network Element|        | Network Element|       | Network Element|
  | +------------+ |<-LLDP->| +------------+ |<-LMP->| +------------+ |
  | | Data Model | |        | | Data Model | |       | | Data Model | |
  | +------------+ |        | +------------+ |       | +------------+ |
  +----------------+        +----------------+       +----------------+

             Figure 1: Applications getting topology data

   Figure 1 shows how current management interfaces, such as NETCONF,
   SNMP and the CLI, are used to transmit or receive information
   to/from various Network Elements.  The figure also shows that
   protocols such as LLDP and LMP participate in topology discovery,
   specifically to discover adjacent network elements.

   The following sections describe the "Statistics Collection" and
   "Inventory Collection" functions.

1.1.  Statistics Collection

   In Figure 1, "Statistics Collection" is a dedicated infrastructure
   that collects statistics from Network Elements.  It periodically
   polls Network Elements (for example, every 5 minutes) for octets
   transferred per interface, per LSP, etc.  Collected statistics are
   stored and collated within the statistics data warehouse (for
   example, to provide hourly, daily or weekly 95th-percentile
   figures).  Applications typically query the statistics data
   warehouse, rather than poll Network Elements directly, to get the
   appropriate set of link utilization figures for their analysis.

1.2.  Inventory Collection

   "Inventory Collection" is a network function responsible for
   collecting network element component and state information (i.e.,
   interface up/down, SFP/XFP optics inserted into a physical port,
   etc.) directly from network elements, as well as for storing
   inventory information about physical network assets that are not
   retrievable from network elements (hereafter referred to as an
   inventory asset database).  Inventory Collection from network
   elements commonly uses SNMP and CLI to acquire inventory
   information.  The information housed in the Inventory Manager is
   retrieved by applications using a variety of protocols: SQL, RPC,
   etc.  Inventory information retrieved from Network Elements is
   updated in the Inventory Collection system on a periodic basis to
   reflect changes in the physical and/or logical network assets.  The
   polling interval used to retrieve updated information varies
   depending on the scaling constraints of the Inventory Collection
   systems and the intervals at which changes to the physical and/or
   logical assets are expected to occur.

   Examples of changes in network inventory that need to be learned by
   the Inventory Collection function (and that the sketch following
   this list models) are as follows:

   o  Discovery of new Network Elements.  These elements may or may not
      be actively used in the network (i.e., provisioned but not yet
      activated).

   o  Insertion or removal of line cards or other modules (e.g., optics
      modules) during service or equipment provisioning.

   o  Changes made to a specific Network Element through a management
      interface by a field technician.

   o  Indication of an NE's physical location and associated cable run
      list, at the time of installation.

   o  Insertion or removal of cables that results in dynamic discovery
      of a new or lost adjacent neighbor, etc.
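   As a purely illustrative sketch of the kind of records an Inventory
   Collection function might maintain, the following Python fragment
   models the change events listed above.  The event types, field
   names and merge logic are assumptions made for illustration; they
   are not part of any standard data model.

   <CODE BEGINS>
   from dataclasses import dataclass, field
   from enum import Enum


   class InventoryEvent(Enum):
       """Hypothetical event types mirroring the list above."""
       NE_DISCOVERED = "ne-discovered"
       MODULE_INSERTED = "module-inserted"
       MODULE_REMOVED = "module-removed"
       CONFIG_CHANGED = "config-changed"
       LOCATION_RECORDED = "location-recorded"
       NEIGHBOR_CHANGED = "neighbor-changed"


   @dataclass
   class InventoryRecord:
       """One asset in the inventory database (active or inactive)."""
       ne_id: str
       active: bool = False          # provisioned but not yet activated?
       modules: dict = field(default_factory=dict)    # slot -> module
       location: str = ""            # physical location / cable run list
       neighbors: dict = field(default_factory=dict)  # port -> (ne, port)


   def apply_event(db: dict, ne_id: str, event: InventoryEvent, **details):
       """Merge a single polled or pushed event into the asset database."""
       rec = db.setdefault(ne_id, InventoryRecord(ne_id))
       if event is InventoryEvent.NE_DISCOVERED:
           rec.active = details.get("active", False)
       elif event is InventoryEvent.MODULE_INSERTED:
           rec.modules[details["slot"]] = details["module"]
       elif event is InventoryEvent.MODULE_REMOVED:
           rec.modules.pop(details["slot"], None)
       elif event is InventoryEvent.LOCATION_RECORDED:
           rec.location = details["location"]
       elif event is InventoryEvent.NEIGHBOR_CHANGED:
           # e.g., learned via LLDP/LMP at the far side of a cable
           rec.neighbors[details["port"]] = details.get("neighbor")


   db = {}
   apply_event(db, "pe1.dfw", InventoryEvent.NE_DISCOVERED, active=False)
   apply_event(db, "pe1.dfw", InventoryEvent.MODULE_INSERTED,
               slot="0/1", module="10GE-XFP")
   <CODE ENDS>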
1.3.  Requirements Language

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
   document are to be interpreted as described in RFC 2119 [RFC2119].

2.  Terminology

   The following briefly defines some of the terminology used within
   this document; a small illustrative sketch of these concepts follows
   the definitions.

   Inventory Manager:  Describes a function that collects network
      element inventory and state information directly from network
      elements, and potentially from associated offline inventory
      databases, via standards-based data models.  Components contained
      in this superset might be visible or invisible to a specific
      network layer; for example, a physical link is visible within the
      IGP, but the Layer-2 switch through which the physical link
      traverses is unknown to the Layer-3 IGP.

   Policy Manager:  Describes a function that attaches metadata to
      network components/attributes.  Such metadata is likely to
      include security, routing, L2 VLAN ID, IP numbering, and similar
      policies that enable the Topology Manager to: a) assemble a
      normalized view of the network for clients to access; and b)
      grant clients (or upper-layer applications) read-only vs. read-
      write access to various network layers and/or network components.
      The Policy Manager function may be a sub-component of the
      Topology Manager, or it may be standalone.  This will be
      determined as the work on IRS evolves.

   Topology Manager:  Network components (inventory, etc.) are
      retrieved from the Inventory Manager and synthesized with
      information from the Policy Manager into cohesive, normalized
      views of network layers.  The Topology Manager exposes these
      normalized views of the network, via standards-based data models,
      to Clients, or higher-layer applications, to act upon in a read-
      only and/or read-write fashion.  The Topology Manager may also
      push information back into the Inventory Manager and/or Network
      Elements to execute changes to the network's behavior,
      configuration or state.

   Orchestration Manager:  Describes a function that stitches together
      resources (e.g., compute, storage) and/or services with the
      network, or vice-versa.  The Orchestration Manager relies on the
      capabilities provided by the other "Managers" listed above in
      order to realize a complete service.

   Normalized Topology Data Model:  A data model that is constructed
      and represented using an open, standards-based model that is
      consistent between implementations.

   Data Model Abstraction:  The notion that one is able to represent
      the same set of elements in a data model at different levels of
      "focus", in order to limit the amount of information exchanged
      when conveying that information.

   Multi-Layer Topology:  Topology is commonly referred to using the
      OSI protocol layering model.  For example, Layer 3 represents
      routed topologies that typically use IPv4 or IPv6 addresses.  It
      is envisioned that, eventually, multiple layers of the network
      may be represented in a single, normalized view of the network to
      certain applications (e.g., Capacity Planning, Traffic
      Engineering, etc.).

   Network Element (NE):  Refers to a network device that is typically,
      but not always, addressable.  It is sometimes referred to as a
      Node.

   Links:  Every NE contains at least one link.  Links are used to
      connect the NE to other NEs in the network.  Links may be in a
      variety of states, including up, down, administratively down,
      internally testing, or dormant.  Links are often synonymous with
      network ports on NEs.
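   To make the vocabulary above concrete, here is a minimal sketch of
   what the basic types of a normalized topology data model might look
   like.  All class and field names here are illustrative assumptions,
   not a proposed standard model.

   <CODE BEGINS>
   from dataclasses import dataclass, field
   from enum import Enum


   class LinkState(Enum):
       # The link states named in the "Links" definition above
       UP = "up"
       DOWN = "down"
       ADMIN_DOWN = "administratively-down"
       TESTING = "internally-testing"
       DORMANT = "dormant"


   @dataclass
   class Link:
       link_id: str
       remote_ne: str           # NE at the far side of the link
       layer: int               # OSI layer (multi-layer topology)
       state: LinkState = LinkState.DOWN
       active: bool = False     # in the IGP's LSDB, or inventory-only?


   @dataclass
   class NetworkElement:
       ne_id: str               # sometimes addressable, sometimes not
       links: list = field(default_factory=list)  # at least one link


   ne = NetworkElement("pe1.dfw", links=[
       Link("ge-0/0/0", "p1.dfw", layer=3,
            state=LinkState.UP, active=True)])
   <CODE ENDS>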
3.  Orchestration, Collection & Presentation Framework

3.1.  Overview

   Section 1 demonstrates the need for a network function that would
   provide a common, standards-based topology view to applications.
   Such a topology collection/management/presentation function would be
   part of a wider framework that would also include policy management
   and orchestration.  The framework is shown in Figure 2.

                              +---------------+
                             +----------------+ |
                             |  Applications  |-+
                             +----------------+
       Websockets, ReST, XMPP, ...    ^    Websockets, ReST, XMPP, ...
        +-----------------------------+-----------------------------+
        |                             |                             |
  +------------+       +-------------------------+      +-------------+
  |   Policy   |<------|     Topology Manager    |----->|Orchestration|
  |  Manager   |       | +---------------------+ |      |   Manager   |
  +------------+       | | Topology Data Model | |      +-------------+
                       | +---------------------+ |
                       +-------------------------+
                          ^           ^          ^
  Websockets, ReST, XMPP  #           |          *  Websockets, ReST, XMPP
        ###################           |          ***********************
        #                             |                             *
  +------------+                      |                      +------------+
  | Statistics |                      |                      | Inventory  |
  | Collection |                      |                      | Collection |
  +------------+                      |                      +------------+
        ^                             |  IRS, NETCONF, SNMP,        ^
        |                             |  TL1 ...                    |
        +-----------------------------+-----------------------------+
        |                             |                             |
        V                             V                             V
  +----------------+        +----------------+       +----------------+
  | Network Element|        | Network Element|       | Network Element|
  | +------------+ |<-LLDP->| +------------+ |<-LMP->| +------------+ |
  | | Data Model | |        | | Data Model | |       | | Data Model | |
  | +------------+ |        | +------------+ |       | +------------+ |
  +----------------+        +----------------+       +----------------+

                      Figure 2: Topology Manager

   The following sections describe in detail the Topology Manager,
   Policy Manager and Orchestration Manager functions.

3.2.  Topology Manager

   The Topology Manager is responsible for retrieving topological
   information from the network via a variety of sources.  The first,
   most obvious source is the "live" IGP or an equivalent mechanism.
   The "live" IGP provides information about links that are components
   of the active topology; in other words, links that are present in
   the Link State Database (LSDB) and are eligible for forwarding.  The
   second source of topology information is the Inventory Collection
   system, which provides information for network components not
   visible within the IGP's LSDB (i.e., links or nodes, or properties
   of those links or nodes, at lower layers of the network).

   The Topology Manager would synthesize the retrieved information into
   cohesive, abstracted views of the network using a standards-based,
   normalized topology data model.  The Topology Manager can then
   expose these data models to Clients, or higher-layer applications,
   using a northbound interface: a protocol/API commonly used by
   higher-layer applications to retrieve and update information.
   Examples of such protocols are ReST, Websockets, or XMPP.  The
   Topology Manager's clients would be able to act upon the information
   in a read-only and/or read-write fashion, based on policies defined
   within the Policy Manager.
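   As a minimal sketch of what such a ReST-style northbound interface
   might look like to a client, consider the following Python fragment.
   The endpoint URL, resource paths and JSON shapes are entirely
   hypothetical, since the actual protocol/API is yet to be defined.

   <CODE BEGINS>
   import json
   import urllib.request

   # Hypothetical northbound endpoint; not a standardized API.
   TM_BASE = "https://topology-manager.example.net/api/v1"


   def get_topology(layer: str) -> dict:
       """Read-only query: fetch the normalized view of one layer."""
       with urllib.request.urlopen(f"{TM_BASE}/topology/{layer}") as r:
           return json.load(r)


   def request_change(change: dict) -> bool:
       """Read-write access: publish a requested change within the
       data model; the Topology Manager validates it against policy
       before touching any network element."""
       req = urllib.request.Request(
           f"{TM_BASE}/topology/changes",
           data=json.dumps(change).encode(),
           headers={"Content-Type": "application/json"},
           method="POST")
       with urllib.request.urlopen(req) as resp:
           return resp.status == 200


   # Example (would require a live server at TM_BASE):
   # ip_mpls_view = get_topology("ip-mpls")
   <CODE ENDS>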
   Clients may request changes to the network topology by publishing
   changes within data models and sending those to the Topology
   Manager.  The Topology Manager internally validates the requested
   changes against various constraints and, if the changes are
   permitted, the Topology Manager updates the associated Managers
   (Policy or Inventory Managers), communicates those changes to the
   individual network elements and, finally, verifies that those
   configurations were properly received and executed by the network
   elements.

   It is envisioned that the Topology Manager will ultimately contain
   topology information for multiple layers of the network (Transport,
   Ethernet and IP/MPLS), as well as for multiple (IGP) areas and/or
   multiple Autonomous Systems (ASes).  This allows the Topology
   Manager to stitch together a holistic view of several layers of the
   network.  This is an important requirement, particularly for upper-
   layer Traffic Engineering, Capacity Planning and Provisioning
   Clients (applications) used to design, augment and optimize IP/MPLS
   networks, which require knowledge of the underlying Shared Risk Link
   Groups (SRLGs) within the Transport and/or Ethernet layers of the
   network.

   The Topology Manager must have the ability to discover and
   communicate not only with network elements that are active and
   visible within the Link State Database (LSDB) of an active IGP, but
   also with network elements that are active yet invisible to the LSDB
   (e.g., L2 Ethernet switches, ROADMs, etc.) and are part of the
   underlying Transport network.  This requirement will influence the
   choice of protocols the Topology Manager needs in order to
   communicate to/from network elements at the various network layers.

   It is also important to recognize that the Topology Manager will be
   gleaning not only (relatively) static inventory information from the
   Inventory Manager (i.e., what linecards, interface types, etc. are
   actively inserted into network elements), but dynamic inventory
   information as well.  With respect to the latter, network elements
   are expected to rely on various link-layer discovery protocols
   (e.g., LLDP, LMP, etc.) that aid in automatically identifying the
   adjacent node, port, etc. at the far side of a link.  This
   information is then pushed to, or pulled by, the Topology Manager in
   order for it to have an accurate representation of the physical
   topology of the network.

3.3.  Policy Manager

   The Policy Manager is the function used to enforce and program
   policies applicable to network component/attribute data.  Policy
   enforcement is a network-wide function that can be consumed by
   various network elements and services, including the Inventory
   Manager, Topology Manager or other network elements.  Such policies
   are likely to encompass the following (a sketch of how they might be
   encoded follows this list):

   o  Logical Identifier Numbering Policies

      *  Correlation of IP prefix to link, based on type of link (P-P,
         P-PE, PE-CE, etc.)

      *  Correlation of IP prefix to IGP area

      *  Layer-2 VLAN ID assignments, etc.

   o  Routing Configuration Policies

      *  OSPF Area or IS-IS NET-ID to node (type) correlation

      *  BGP routing policies, e.g., nodes designated for injection of
         aggregate routes, max-prefix policies, AFI/SAFI to node
         correlation, etc.

   o  Security Policies

      *  Access Control Lists

      *  Rate-limiting

   o  Network Component/Attribute Data Access Policies: a Client's
      (upper-layer application's) read-only or read-write access to
      Network Components/Attributes contained in the "Inventory
      Manager", as well as to Policies contained within the "Policy
      Manager" itself.
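   As a purely illustrative sketch, such policies might be encoded as
   declarative data that the Policy Manager evaluates.  None of the
   keys, values or names below come from an agreed-upon schema; they
   are assumptions made to show the shape of the idea.

   <CODE BEGINS>
   # Hypothetical declarative policy set.
   POLICIES = {
       "numbering": {
           # Correlate IP prefix allocation with the type of link
           "P-P":   {"prefix_pool": "10.0.0.0/16", "prefix_len": 31},
           "PE-CE": {"prefix_pool": "10.1.0.0/16", "prefix_len": 30},
       },
       "routing": {
           "aggregate_injection_nodes": ["cr1.dfw", "cr1.iad"],
           "max_prefix": {"PE": 500000},
       },
       "access": [
           # Which clients may read or write which component types
           {"client": "capacity-planning", "scope": "link", "mode": "rw"},
           {"client": "troubleshooting",   "scope": "*",    "mode": "ro"},
       ],
   }


   def client_may_write(client: str, scope: str) -> bool:
       """Check read-write access against the access policies."""
       for rule in POLICIES["access"]:
           if rule["client"] == client and rule["scope"] in (scope, "*"):
               return rule["mode"] == "rw"
       return False  # default-deny


   assert client_may_write("capacity-planning", "link")
   assert not client_may_write("troubleshooting", "link")
   <CODE ENDS>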
   The Policy Manager function may be a sub-component of the Topology
   or Orchestration Manager, or it may be standalone.  This will be
   determined as the work on IRS evolves.

3.4.  Orchestration Manager

   The Orchestration Manager provides the ability to stitch together
   resources (e.g., compute, storage) and/or services with the network,
   or vice-versa.  Examples of "generic" services may include the
   following:

   o  Application-specific Load Balancing

   o  Application-specific Network (Bandwidth) Optimization

   o  Application- or End-User-specific Class-of-Service

   o  Application- or End-User-specific Network Access Control

   The above services could then enable coupling of resources with the
   network to realize the following:

   o  Network Optimization: creation and migration of Virtual Machines
      (VMs) so they are adjacent to storage in the same data center.

   o  Network Access Control: coupling of available (generic) compute
      nodes at the appropriate point of the data path to perform
      firewall, NAT, etc. functions on data traffic.

   The Orchestration Manager is expected to exchange data models with
   the Topology Manager, Policy Manager and Inventory Manager
   functions.  In addition, the Orchestration Manager is expected to
   support publish and subscribe capabilities to those functions, as
   well as to Clients, to enable scalability with respect to event
   notifications.

   The Orchestration Manager may receive requests from Clients
   (applications) for immediate access to specific network resources.
   However, Clients may also request to schedule future appointments to
   reserve appropriate network resources when, for example, a special
   event is scheduled to start and end.

   Finally, the Orchestration Manager should have the flexibility to
   determine which network layer(s) may be able to satisfy a given
   Client's request, based on constraints received from the Client as
   well as those constraints learned from the Policy and/or Topology
   Manager functions.  This could allow the Orchestration Manager to,
   for example, satisfy a given service request for a given Client
   using the optical network (via an OTN service) if there is
   insufficient IP/MPLS capacity at the specific moment the Client's
   request is received.

   The operational model is shown in the following figure.

   TBD.

                   Figure 3: Overall Reference Model

4.  Use Cases

4.1.  Virtualized Views of the Network

4.1.1.  Capacity Planning and Traffic Engineering

   When performing Traffic Engineering and/or Capacity Planning of an
   IP/MPLS network, it is important to account for SRLGs that exist
   within the underlying physical, optical and Ethernet networks.
   Currently, it is quite common to create and/or take "snapshots", at
   infrequent intervals, that comprise the inventory data of the
   underlying physical and optical layer networks.  This inventory data
   then needs to be massaged, or normalized, to conform to the data
   import requirements of sometimes separate Traffic Engineering and/or
   Capacity Planning tools.
   This process is error-prone and inefficient, particularly as the
   underlying network inventory information changes due to the
   introduction of, for example, new network element makes or models,
   linecards, capabilities, etc. at the optical and/or Ethernet layers
   of the underlying network.

   This is inefficient with respect to the time and expense consumed by
   software developers and by Capacity Planning and Traffic Engineering
   staff to normalize and sanity-check the underlying network inventory
   information before it can be consumed by IP/MPLS Capacity Planning
   and Traffic Engineering applications.  Due to this inefficiency, the
   underlying physical network inventory information (containing SRLG
   and corresponding critical network asset information) used by the
   IP/MPLS Capacity Planning and TE applications is not updated
   frequently, thus exposing the network to, at minimum, inefficient
   utilization and, at worst, critical impairments.

   An Inventory Manager function is required that will, first, extract
   inventory information from network elements -- and potentially from
   associated offline inventory databases, to acquire physical cross-
   connects and other information that is not available directly from
   network elements -- at the physical, optical, Ethernet and IP/MPLS
   layers of the network, via standards-based data models.  Data models
   and an associated vocabulary will be required to represent not only
   components inside or directly connected to network elements, but
   also components of a physical layer path (e.g., cross-connect
   panels, etc.).  The aforementioned inventory will comprise the
   complete set of inactive and active network components.

   A Statistics Collection function is also required.  As stated above,
   it will collect utilization statistics from Network Elements, then
   archive and aggregate them in a statistics data warehouse.
   Summaries of these figures then need to be exposed, in normalized
   data models, to the Topology Manager so that it can easily acquire
   historical link and LSP utilization figures.  These can be used to,
   for example, build trended utilization models that forecast the
   changes to the physical and/or logical network components needed to
   accommodate network growth.

   The Topology Manager function may then augment the Inventory Manager
   information by communicating directly with Network Elements to
   reveal the IGP-based view of the active topology of the network.
   This will allow the Topology Manager to include dynamic information
   from the IGP, such as the Available Bandwidth, Reserved Bandwidth,
   etc. Traffic Engineering (TE) attributes associated with links,
   contained within the Traffic Engineering Database (TED) on Network
   Elements.

   It is important to recognize that extracting topology information
   from the network solely via an IGP (such as IS-IS TE or OSPF TE) is
   inadequate for this use case.  First, IGPs only expose the active
   components (e.g., vertices of the SPF tree) of the IP network;
   unfortunately, they are not aware of "hidden" or inactive interfaces
   within IP/MPLS network elements (e.g., unused linecards or unused
   ports), or of components that reside at a lower layer than IP/MPLS
   (e.g., Ethernet switches, optical transport systems, etc.).  Such
   hidden components appear frequently during the course of
   maintenance, augmentation and optimization activities on the
   network.  Second, IGPs only convey SRLG information that has first
   been applied within the routers' configurations, either manually or
   programmatically.  As mentioned previously, this SRLG information in
   the IP/MPLS network is subject to being infrequently updated and, as
   a result, may inadequately account for the critical, underlying
   network fate-sharing properties that are necessary to properly
   design resilient circuits and/or paths through the network.
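   To illustrate why both sources matter, here is a minimal sketch of a
   Topology Manager merge step that combines an IGP/TED view of active
   links with inventory-derived SRLG data.  The structures, field names
   and example values are assumptions made for illustration only.

   <CODE BEGINS>
   # Hypothetical inputs: the IGP/TED exposes only active links and
   # their TE attributes; the inventory knows lower-layer fate sharing.
   ted_links = {
       # (a_end, z_end): TE attributes learned from the IGP
       ("pe1", "p1"): {"avail_bw": 7e9, "resv_bw": 3e9},
       ("pe2", "p1"): {"avail_bw": 9e9, "resv_bw": 1e9},
   }

   inventory = {
       # link -> SRLGs derived from the optical/Ethernet layers below
       ("pe1", "p1"): {"srlgs": {"conduit-17"}, "active": True},
       ("pe2", "p1"): {"srlgs": {"conduit-17"}, "active": True},
       ("pe3", "p2"): {"srlgs": {"conduit-42"}, "active": False},
   }


   def normalized_view():
       """Merge both sources; keep inactive links the IGP cannot see."""
       view = {}
       for link, attrs in inventory.items():
           view[link] = dict(attrs)
           view[link].update(ted_links.get(link, {}))  # TE if active
       return view


   def shared_risk(view, l1, l2) -> bool:
       """True if two links would fail together at a lower layer."""
       return bool(view[l1]["srlgs"] & view[l2]["srlgs"])


   v = normalized_view()
   assert shared_risk(v, ("pe1", "p1"), ("pe2", "p1"))  # same conduit
   <CODE ENDS>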
   In this use case, the Inventory Manager will need to be capable of
   using a variety of existing protocols, such as NETCONF, CLI, SNMP,
   TL1, etc., depending on the capabilities of the network elements.
   The Topology Manager will need to be capable of communicating via an
   IGP with a (set of) Network Elements.  It is important to consider
   that acquiring topology information from Network Elements only
   requires read-only access to the IGP.  However, the end result of
   the computations performed by the Capacity Planning Client may
   require changes to various IGP attributes (e.g., IGP metrics, TE
   link colors, etc.).  These may be applied directly by devising a new
   capability to either: a) inject information into the IGP that
   overrides the same information injected by the originating Network
   Element; or b) allow the Topology and/or Inventory Manager to write
   changes to the Network Element's configuration in order to have it
   adjust the appropriate IGP attribute(s) and re-flood them throughout
   the IGP.  It would be desirable to have a single mechanism (data
   model or protocol) that allows the Topology Manager to both read and
   write IGP attributes.

   Once the Topology Manager function has assembled a normalized view
   of the topology and synthesized the associated metadata with each
   component of the topology (link type, link properties, statistics,
   intra-layer relationships, etc.), it can then expose this
   information via its northbound API to Clients.  In this use case,
   that means Capacity Planning and Traffic Engineering applications,
   which are not required to know the innate details of individual
   network elements, but do require generalized information about the
   nodes and links that comprise the network, e.g., the links used to
   interconnect nodes, SRLG information (from the underlying network),
   the utilization rate of each link over some period of time, etc.
   Here, it is important that any Client that understands both the web
   services API and the normalized data model can communicate with the
   Topology Manager in order to understand the network topology
   information that was provided by network elements from potentially
   different vendors, all of which likely represent that topology
   information internally using different models.  If the Client had
   gone directly to the network elements themselves, it would have had
   to translate and then normalize these different representations for
   itself; in this case, the Topology Manager has done that for it.

   When this information is consumed by the Traffic Engineering
   application, it may run a variety of CSPF algorithms, the result of
   which is likely a list of RSVP LSPs that need to be
   (re-)established, or torn down, in the network to globally optimize
   the packing efficiency of physical links throughout the network.
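   For readers unfamiliar with CSPF, the following is a toy sketch of
   its core idea: an ordinary shortest-path search in which links
   lacking the requested bandwidth are pruned first.  The graph and
   values are invented; production TE tools are far more sophisticated
   (priorities, preemption, SRLG diversity, global re-optimization).

   <CODE BEGINS>
   import heapq


   def cspf(links, src, dst, bw_needed):
       """Toy constrained SPF: prune links below bw_needed, then run
       Dijkstra.  links: {(a, z): {"metric": m, "avail_bw": bw}},
       treated as bidirectional."""
       adj = {}
       for (a, z), attrs in links.items():
           if attrs["avail_bw"] >= bw_needed:  # constraint: prune first
               adj.setdefault(a, []).append((z, attrs["metric"]))
               adj.setdefault(z, []).append((a, attrs["metric"]))
       # Standard Dijkstra over the pruned graph
       heap, seen = [(0, src, [src])], set()
       while heap:
           cost, node, path = heapq.heappop(heap)
           if node == dst:
               return cost, path
           if node in seen:
               continue
           seen.add(node)
           for nbr, metric in adj.get(node, []):
               if nbr not in seen:
                   heapq.heappush(heap, (cost + metric, nbr, path + [nbr]))
       return None  # no feasible path at this bandwidth


   links = {("pe1", "p1"): {"metric": 10, "avail_bw": 7e9},
            ("p1", "pe2"): {"metric": 10, "avail_bw": 2e9},
            ("p1", "p2"):  {"metric": 5,  "avail_bw": 9e9},
            ("p2", "pe2"): {"metric": 5,  "avail_bw": 9e9}}
   print(cspf(links, "pe1", "pe2", 3e9))  # routes around the 2G link
   <CODE ENDS>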
   The end result of the Traffic Engineering application is "pushing"
   out to the Topology Manager, via a standard data model to be defined
   here, a list of RSVP LSPs and their associated characteristics
   (i.e., head-end and tail-end LSRs, bandwidth, priority, preemption,
   etc.).  The Topology Manager would then consume this information and
   carry out those instructions by speaking directly to network
   elements, perhaps via the PCEP Extensions for Stateful PCE
   [I-D.ietf-pce-stateful-pce], which in turn initiates RSVP signaling
   through the network to establish the LSPs.

   After this information is consumed by the Capacity Planning
   application, it may run a variety of algorithms, the result of which
   is a list of new inventory that is required to be purchased (or
   redeployed), as well as associated work orders for field technicians
   to augment the network for expected growth.  It would be ideal if
   this information were also "pushed" back into the Topology Manager
   and, in turn, the Inventory Manager as "inactive" links and/or
   nodes, so that as new equipment is installed it can be automatically
   correlated with the original design and work order packages
   associated with that augment.

4.1.2.  Services Provisioning

   Beyond Capacity Planning and Traffic Engineering applications,
   having a normalized view of just the IP/MPLS layer of the network is
   still very important for other mission-critical applications, such
   as Security Auditing and IP/MPLS Services Provisioning (e.g., L2VPN,
   L3VPN, etc.).  With respect to the latter, these types of
   applications should not need a detailed understanding of, for
   example, SRLG information, assuming that the underlying MPLS Tunnel
   LSPs are known to account for the resiliency requirements of all
   services that ride over them.  Nonetheless, for both types of
   applications it is critical that they have a common and up-to-date
   normalized view of the IP/MPLS network, in order to easily
   instantiate new services at the appropriate places in the network,
   in the case of VPN services, or to validate that ACLs are configured
   properly to protect associated routing, signaling and management
   protocols on the network, in the case of Security Auditing.

   For this use case, what is most commonly needed by a VPN Service
   Provisioning application is as follows (a sketch of the resulting
   query loop follows below).  First, Services PEs need to be
   identified in all markets/cities where the customer has indicated
   they want service.  Next, does there exist one or more Services PEs
   in each city with connectivity to the access network(s) (e.g.,
   SONET/TDM) used to deliver the PE-CE tail circuits to the Services
   PE?  Finally, does the Services PE have available capacity on both
   the PE-CE access interface and its uplinks to terminate the tail
   circuit?  Generalized, this would be considered a Resource Selection
   function.  Namely, the VPN Provisioning application would
   iteratively query the Topology Manager to narrow down the scope of
   resources to the set of Services PEs with the appropriate uplink
   bandwidth and access circuit capability, plus the capacity to
   realize the requested VPN service.  Once the VPN Provisioning
   application has a candidate list of resources, it then requests that
   the Topology Manager go about configuring the Services PEs and
   associated access circuits to realize the customer's VPN service.
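   A minimal sketch of that iterative Resource Selection narrowing
   follows; the PE records, attribute names and city/PE identifiers are
   entirely hypothetical.

   <CODE BEGINS>
   # Hypothetical PE records, as a VPN provisioning application might
   # receive them from the Topology Manager's northbound API.
   pes = [
       {"pe": "pe1.dfw", "city": "dfw", "access": {"sonet"},
        "access_capacity_free": True, "uplink_bw_free": 4e9},
       {"pe": "pe2.dfw", "city": "dfw", "access": {"ethernet"},
        "access_capacity_free": True, "uplink_bw_free": 9e9},
       {"pe": "pe1.iad", "city": "iad", "access": {"sonet"},
        "access_capacity_free": False, "uplink_bw_free": 9e9},
   ]


   def select_pes(pes, cities, access_type, bw_needed):
       """Iteratively narrow candidates: city, access type, capacity."""
       c = [p for p in pes if p["city"] in cities]             # step 1
       c = [p for p in c if access_type in p["access"]]        # step 2
       c = [p for p in c if p["access_capacity_free"]          # step 3
            and p["uplink_bw_free"] >= bw_needed]
       return [p["pe"] for p in c]


   print(select_pes(pes, {"dfw", "iad"}, "sonet", 1e9))
   # -> ['pe1.dfw']; pe1.iad has no free access capacity
   <CODE ENDS>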
4.1.3.  Rapid IP Renumbering, AS Migration

   A variety of reasons exist for the "rapid renumbering" of IPv4/IPv6
   prefixes and ASNs in an IP/MPLS network.  Perhaps the most common
   reason is mergers, acquisitions or divestitures of companies,
   organizations or divisions.

   Inside the network of an Enterprise or Service Provider, there
   already exist protocols, such as DHCP or SLAAC, to support the rapid
   renumbering of hosts (i.e., servers, laptops, tablets, etc.); these
   are outside the scope of this document.  However, there still exists
   a critical need to quickly renumber network infrastructure, namely
   router interfaces, management interfaces, etc., in order to: a)
   avoid overlapping RFC 1918 addresses in previously separate domains;
   b) allow for (better) aggregation of IP prefixes within areas/
   domains of an IGP; c) allow for more efficient utilization of
   globally unique IPv4 addresses, which are in limited supply; and d)
   realize the business synergies of combining two different ASes into
   one, etc.

   The set of IPv4 and IPv6 prefixes that have been configured on
   point-to-point, LAN, Loopback, Tunnel, Management, PE-CE and other
   interfaces would be gathered from all network elements by the
   Inventory Manager function.  Similarly, the set of ASNs that have
   been configured on individual NEs, as the global BGP Autonomous
   System Number and on the PE-CE interfaces, is also acquired from the
   Inventory Manager.  Afterward, an "inventory" report of the total
   number of IPv4/IPv6 prefixes, broken down by type, could be quickly
   assembled to understand how much address space is required to
   accommodate not only the existing network, but also future growth
   plans.  Next, a new IP prefix and ASN would be assigned to the
   overall network.  An operator may then decide to manually carve up
   the IP prefix into sub-prefixes that are assigned to various
   functions or interface types in the network; for example, all
   Loopback interface addresses are assigned from a specific GUA IPv4/
   IPv6 prefix.  Other rules may be crafted by the operator so that,
   for example, GUA IPv4/IPv6 prefixes for interfaces within each IGP
   area are assigned out of contiguous address space so that they may
   be (easily) summarized within the IGP configuration.  Finally, the
   set of ASNs and IP prefixes, and the rules and/or policies governing
   how they are to be assigned, are encoded in a data model/schema and
   sent to the Topology Manager (TM).  The Topology Manager is then
   responsible for communicating changes to the Inventory Manager
   and/or Network Elements in a proper sequence, or order of
   operations, so as to not lose network connectivity from the Topology
   Manager to the network elements.

   This function could be extended further, whereby the Orchestration
   Manager would be used to automatically create a list of IP addresses
   and their associated DNS names, which would then be "pushed" to
   Authoritative DNS servers so that interface names get updated in DNS
   automatically.  In addition, the Orchestration Manager function
   could notify an "Infrastructure Security" application that IP
   prefixes on the network have changed, so that it then updates the
   ACLs used to, for example, protect the IP/MPLS routing and signaling
   protocols used on the network.
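   As a small illustration of the prefix-carving step described above,
   the following sketch uses Python's standard ipaddress module.  The
   aggregate, pool sizes and NE names are arbitrary assumptions.

   <CODE BEGINS>
   import ipaddress

   # Hypothetical new aggregate assigned to the merged network
   aggregate = ipaddress.ip_network("198.51.100.0/24")

   # Carve the aggregate into per-function pools, e.g., /26 sub-prefixes
   pools = list(aggregate.subnets(new_prefix=26))
   loopback_pool, p2p_pool = pools[0], pools[1]

   # Loopbacks: one /32 per NE, from a dedicated contiguous pool
   loopbacks = loopback_pool.subnets(new_prefix=32)
   # Point-to-point links: one /31 per link (RFC 3021-style numbering)
   p2p_links = p2p_pool.subnets(new_prefix=31)

   for ne in ["cr1", "cr2", "pe1"]:
       print(ne, next(loopbacks))  # cr1 198.51.100.0/32, ...
   <CODE ENDS>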
4.1.4.  Troubleshooting & Monitoring

   Once the Topology Manager has a normalized view of several layers of
   the network, it is then possible to more easily expose a richer set
   of data to network operators performing diagnosis, troubleshooting
   and repairs on the network.  Specifically, there is a need to
   (rapidly) assemble a current, accurate and comprehensive network
   diagram of a L2VPN or L3VPN service for a particular customer when
   either: a) attempting to diagnose a service fault/error; or b)
   attempting to augment the customer's existing service.  Information
   that may be assembled into a comprehensive picture could include the
   physical and logical components related specifically to that
   customer's service, e.g., the VLANs or channels used by the PE-CE
   access circuits, CoS policies, historical PE-CE circuit utilization,
   etc.  The Topology Manager would assemble this information, on
   behalf of the operator, from each of the network elements and other
   data sources in, and associated with, the network, and could present
   it in a vendor-independent data model to applications to be
   displayed, allowing the operator (or, potentially, the customer
   through an SP's Web portal) to visualize the information.

4.2.  Path Computation Element (PCE)

   As described in [RFC4655], a PCE can be used to compute MPLS-TE
   paths within a "domain" (such as an IGP area) or across multiple
   domains (such as a multi-area AS, or multiple ASes).

   o  Within a single area, the PCE offers enhanced computational power
      that may not be available on individual routers, sophisticated
      policy control and algorithms, and coordination of computation
      across the whole area.

   o  If a router wants to compute an MPLS-TE path across IGP areas,
      its own TED lacks visibility of the complete topology.  That
      means that the router cannot determine the end-to-end path, and
      cannot even select the right exit router (Area Border Router,
      ABR) for an optimal path.  This is an issue for large-scale
      networks that need to segment their core networks into distinct
      areas, but which still want to take advantage of MPLS-TE.

   The PCE presents a computation server that may have visibility into
   more than one IGP area or AS, or that may cooperate with other PCEs
   to perform distributed path computation.  The PCE needs access to
   the topology and the Traffic Engineering Database (TED) for the
   area(s) it serves, but [RFC4655] does not describe how this is
   achieved.  Many implementations make the PCE a passive participant
   in the IGP so that it can learn the latest state of the network, but
   this may be sub-optimal when the network is subject to a high degree
   of churn, or when the PCE is responsible for multiple areas.

   The following figure shows how a PCE can get its TED information
   using the Topology Manager.

       +----------+
       |  -----   |  TED synchronization via Topology API
       | | TED |<-+----------------------------------+
       |  -----   |                                  |
       |    |     |                                  |
       |    |     |                                  |
       |    v     |                                  |
       |  -----   |                                  |
       | | PCE |  |                                  |
       |  -----   |                                  |
       +----------+                                  |
            ^                                        |
            | Request/                               |
            | Response                               |
            v                                        v
   Service  +----------+  Signaling  +----------+  +----------+
   Request  | Head-End |  Protocol   | Adjacent |  | Topology |
   -------->|   Node   |<----------->|   Node   |  | Manager  |
            +----------+             +----------+  +----------+

        Figure 4: Topology use case: Path Computation Element
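   A sketch of the TED synchronization shown in Figure 4, as seen from
   the PCE's side, follows.  The message format and update semantics
   are assumptions, since the Topology API itself is not yet defined;
   the sketch simply shows a PCE keeping a local TED in sync from
   pushed updates rather than by passively joining the IGP.

   <CODE BEGINS>
   import json

   # Hypothetical incremental TED-update messages, as the Topology
   # Manager might push them over a Websockets/XMPP-style subscription.
   updates = [
       '{"op": "add", "link": ["p1", "p2"],'
       ' "te": {"metric": 5, "avail_bw": 9e9}}',
       '{"op": "add", "link": ["p2", "p3"],'
       ' "te": {"metric": 7, "avail_bw": 4e9}}',
       '{"op": "remove", "link": ["p1", "p2"]}',
   ]

   ted = {}  # the PCE's local Traffic Engineering Database


   def apply_ted_update(msg: str):
       """Apply one pushed update to the local TED."""
       u = json.loads(msg)
       key = tuple(u["link"])
       if u["op"] == "add":
           ted[key] = u["te"]
       elif u["op"] == "remove":
           ted.pop(key, None)


   for m in updates:
       apply_ted_update(m)

   print(ted)  # {('p2', 'p3'): {'metric': 7, 'avail_bw': 4e9}}
   <CODE ENDS>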
4.3.  ALTO Server

   An ALTO Server [RFC5693] is an entity that generates an abstracted
   network topology and provides it to network-aware applications over
   a web-service-based API.  Example applications are P2P clients or
   trackers, or CDNs.  The abstracted network topology comes in the
   form of two maps: a Network Map that specifies the allocation of
   prefixes to PIDs, and a Cost Map that specifies the cost between the
   PIDs listed in the Network Map.  For more details, see
   [I-D.ietf-alto-protocol].

   ALTO abstract network topologies can be auto-generated from the
   physical topology of the underlying network.  The generation would
   typically be based on policies and rules set by the operator.  Both
   prefix and TE data are required: prefix data is required to generate
   ALTO Network Maps, and TE (topology) data is required to generate
   ALTO Cost Maps.  Prefix data is originated and carried in BGP; TE
   data is originated and carried in an IGP.  The mechanism defined in
   this document provides a single interface through which an ALTO
   Server can retrieve all the necessary prefix and network topology
   data from the underlying network.  Note that an ALTO Server can use
   other mechanisms to get network data, for example, peering with
   multiple IGP and BGP speakers.

   The following figure shows how an ALTO Server can get network
   topology information from the underlying network using the Topology
   API.

   +--------+
   | Client |<--+
   +--------+   |
                |  ALTO      +--------+                    +----------+
   +--------+   |  Protocol  |  ALTO  |  Network Topology  | Topology |
   | Client |<--+------------| Server |<-------------------| Manager  |
   +--------+   |            |        |                    |          |
                |            +--------+                    +----------+
   +--------+   |
   | Client |<--+
   +--------+

             Figure 5: Topology use case: ALTO Server

5.  Acknowledgements

   The authors wish to thank Alia Atlas, Dave Ward, Hannes Gredler and
   Stefano Previdi for their valuable contributions and feedback on
   this draft.

6.  IANA Considerations

   This memo includes no request to IANA.

7.  Security Considerations

   At the moment, the use cases covered in this document apply
   specifically to a single Service Provider or Enterprise network.
   Therefore, network administrators should take appropriate
   precautions to ensure that suitable access controls exist, so that
   only internal applications and end-users have physical or logical
   access to the Topology Manager.  These should be similar to the
   precautions already taken by network administrators to secure their
   existing Network Management, OSS and BSS systems.

   As this work evolves, it will be important to determine the
   appropriate granularity of access controls in terms of which
   individuals or groups may have read and/or write access to various
   types of information contained within the Topology Manager.  It
   would be ideal if these access control mechanisms were centralized
   within the Topology Manager itself.

8.  References

8.1.  Normative References

   [RFC2119]  Bradner, S., "Key words for use in RFCs to Indicate
              Requirement Levels", BCP 14, RFC 2119, March 1997.

8.2.  Informative References

   [I-D.atlas-irs-problem-statement]
              Atlas, A., Nadeau, T., and D. Ward, "Interface to the
              Routing System Problem Statement",
              draft-atlas-irs-problem-statement-00 (work in progress),
              July 2012.
Yang, "ALTO Protocol", 911 draft-ietf-alto-protocol-13 (work in progress), 912 September 2012. 914 [I-D.ietf-pce-stateful-pce] 915 Crabbe, E., Medved, J., Varga, R., and I. Minei, "PCEP 916 Extensions for Stateful PCE", 917 draft-ietf-pce-stateful-pce-01 (work in progress), 918 July 2012. 920 [I-D.ward-irs-framework] 921 Atlas, A., Nadeau, T., and D. Ward, "Interface to the 922 Routing System Framework", draft-ward-irs-framework-00 923 (work in progress), July 2012. 925 [RFC4655] Farrel, A., Vasseur, J., and J. Ash, "A Path Computation 926 Element (PCE)-Based Architecture", RFC 4655, August 2006. 928 [RFC5693] Seedorf, J. and E. Burger, "Application-Layer Traffic 929 Optimization (ALTO) Problem Statement", RFC 5693, 930 October 2009. 932 Authors' Addresses 934 Shane Amante 935 Level 3 Communications, Inc. 936 1025 Eldorado Blvd 937 Broomfield, CO 80021 938 USA 940 Email: shane@level3.net 942 Jan Medved 943 Cisco Systems, Inc. 944 170 West Tasman Drive 945 San Jose, CA 95134 946 USA 948 Email: jmedved@cisco.com 950 Thomas D. Nadeau 951 Juniper Networks 952 1194 N. Mathilda Ave. 953 Sunnyvale, CA 94089 954 USA 956 Email: tnadeau@juniper.net