Internet Engineering Task Force                                J. Medved
Internet-Draft                                      Cisco Systems, Inc.
Intended status: Informational                                T. Nadeau
Expires: February 17, 2014                              Juniper Networks
                                                               S. Amante

                                                         August 16, 2013

                         Topology API Use Cases
                 draft-amante-irs-topology-use-cases-01

Abstract

   This document describes use cases for gathering routing, forwarding
   and policy information (hereafter referred to as topology
   information) about the network.  It describes several applications
   that need to view the topology of the underlying physical or
   logical network.  This document further demonstrates the need for a
   "Topology Manager" and related functions that collect topology data
   from network elements and other data sources, coalesce the
   collected data into a coherent view of the overall network
   topology, and normalize the network topology view for use by
   clients -- namely, applications that consume topology information.

Status of This Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF).
   Note that other groups may also distribute working documents as
   Internet-Drafts.  The list of current Internet-Drafts is at
   http://datatracker.ietf.org/drafts/current/.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."

   This Internet-Draft will expire on February 17, 2014.

Copyright Notice

   Copyright (c) 2013 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document.  Code Components extracted from this
   document must include Simplified BSD License text as described in
   Section 4.e of the Trust Legal Provisions and are provided without
   warranty as described in the Simplified BSD License.

Table of Contents

   1.  Introduction
     1.1.  Statistics Collection
     1.2.  Inventory Collection
     1.3.  Requirements Language
   2.  Terminology
   3.  The Orchestration, Collection & Presentation Framework
     3.1.  Overview
     3.2.  The Topology Manager
     3.3.  The Policy Manager
     3.4.  Orchestration Manager
   4.  Use Cases
     4.1.  Virtualized Views of the Network
       4.1.1.  Capacity Planning and Traffic Engineering
         4.1.1.1.  Present Mode of Operation
         4.1.1.2.  Proposed Mode of Operation
       4.1.2.  Services Provisioning
       4.1.3.  Troubleshooting & Monitoring
     4.2.  Path Computation Element (PCE)
     4.3.  ALTO Server
   5.  Acknowledgements
   6.  IANA Considerations
   7.  Security Considerations
   8.  References
     8.1.  Normative References
     8.2.  Informative References
   Authors' Addresses

1.  Introduction

   In today's networks, a variety of applications, such as Traffic
   Engineering, Capacity Planning, Security Auditing or Services
   Provisioning (for example, Virtual Private Networks), have a common
   need to acquire and consume network topology information.
   Unfortunately, all of these applications are typically vertically
   integrated: each uses its own proprietary normalized view of the
   network and its own proprietary data collectors, interpreters and
   adapters, which speak a variety of protocols (SNMP, CLI, SQL, etc.)
   directly to network elements and to back-office systems.  While
   some of the topological information can be distributed using
   routing protocols, it is not desirable for some of these
   applications to understand or participate in routing protocols.

   This approach is highly inefficient for several reasons.  First,
   developers must write duplicate "network discovery" functions,
   which then become challenging to maintain over time, particularly
   as/when new equipment is first introduced to the network.  Second,
   since there is no common "vocabulary" to describe various
   components in the network, such as physical links, logical links,
   or IP prefixes, each application has its own data model.  Some
   solutions address this by distributing the information in the
   normalized form used by routing protocols.  However, such
   information still omits "inactive" topological information, i.e.,
   information considered to be part of a network's inventory.

   These limitations leave applications unable to easily exchange
   information with each other.  For example, applications cannot
   share changes with each other that are (to be) applied to the
   physical and/or logical network, such as installation of new
   physical links or deployment of security ACLs.  Each application
   must frequently poll network elements and other data sources to
   ensure that it has a consistent representation of the network so
   that it can carry out its particular domain-specific tasks.  In
   other cases, applications that cannot speak routing protocols must
   use proprietary CLI or other management interfaces, which represent
   the topological information in non-standard formats or, worse,
   non-standard semantic models.

   Overall, the software architecture described above at best results
   in inefficient use of both software developer resources and network
   resources, and at worst results in some applications simply not
   having access to this information.

   Figure 1 illustrates how individual applications collect data from
   the underlying network.  Applications retrieve inventory, network
   topology, state and statistics information by communicating
   directly with underlying Network Elements, as well as with
   intermediary proxies of that information.  In addition,
   applications transmit changes required of a Network Element's
   configuration and/or state directly to individual Network Elements,
   most commonly using CLI or NETCONF.  It is important to note that
   the "data models", or semantics, of the information contained
   within Network Elements are largely proprietary with respect to
   most configuration and state information, which is why a
   proprietary CLI is often the only choice to reflect changes in an
   NE's configuration or state.  This remains the case even when
   standards-based mechanisms such as NETCONF are used: they provide a
   standard syntax, but still often fall short due to the proprietary
   semantics associated with the internal representation of the
   information.

                              +---------------+
                            +----------------+|
                            |  Applications  |-+
                            +----------------+
                              ^      ^      ^
             SQL, RPC, ReST   #      |      *   SQL, RPC, ReST ...
        #######################      |      ***********************
        #                            |                            *
  +------------+                     |                 +------------+
  | Statistics |                     |                 | Inventory  |
  | Collection |                     |                 | Collection |
  +------------+                     |                 +------------+
        ^                            | NETCONF, I2RS, SNMP,    ^
        |                            | CLI, TL1, ...           |
        +----------------------------+-------------------------+
        |                            |                         |
  +---------------+        +---------------+        +---------------+
  |Network Element|        |Network Element|        |Network Element|
  | +-----------+ |        | +-----------+ |        | +-----------+ |
  | |Information| |<-LLDP->| |Information| |<-LMP-->| |Information| |
  | |   Model   | |        | |   Model   | |        | |   Model   | |
  | +-----------+ |        | +-----------+ |        | +-----------+ |
  +---------------+        +---------------+        +---------------+

            Figure 1: Applications getting topology data
   Figure 1 shows how current management interfaces, such as NETCONF,
   SNMP, CLI, etc., are used to transmit information to, or receive
   information from, various Network Elements.  The figure also shows
   that protocols such as LLDP and LMP participate in topology
   discovery, specifically to discover adjacent network elements.

   The following sections describe the "Statistics Collection" and
   "Inventory Collection" functions.

1.1.  Statistics Collection

   In Figure 1, "Statistics Collection" is a dedicated infrastructure
   that collects statistics from Network Elements.  It periodically
   (for example, every 5 minutes) polls Network Elements for octets
   transferred per interface, per LSP, etc.  Collected statistics are
   stored and collated within a statistics data warehouse.
   Applications typically query the statistics data warehouse, rather
   than poll Network Elements directly, to get the appropriate set of
   link utilization figures for their analysis.
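
   As a non-normative illustration of the collection step described
   above, the following Python sketch shows how a poller might turn
   two successive samples of a 64-bit octet counter (such as
   ifHCInOctets) into a utilization figure and file it in a toy,
   in-memory "warehouse".  All names, and the 5-minute interval, are
   assumptions made for illustration only.

   <CODE BEGINS>
   import time

   COUNTER_MAX = 2**64          # ifHCInOctets is a 64-bit counter

   def utilization_bps(prev_octets, curr_octets, interval_s):
       """Convert two successive octet-counter samples into bits/sec,
       accounting for a possible counter wrap between polls."""
       delta = (curr_octets - prev_octets) % COUNTER_MAX
       return (delta * 8) / interval_s

   # Toy warehouse: {(node, interface): [(timestamp, bps), ...]}
   warehouse = {}

   def record_sample(node, ifname, prev, curr, interval_s=300):
       bps = utilization_bps(prev, curr, interval_s)
       warehouse.setdefault((node, ifname), []).append(
           (time.time(), bps))
       return bps

   # Example: a 5-minute poll where the counter advanced by ~1.5 GB
   print(record_sample("pe1", "ge-0/0/0", 10_000_000, 1_510_000_000))
   <CODE ENDS>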
1.2.  Inventory Collection

   "Inventory Collection" is a network function responsible for
   collecting component and state information directly from Network
   Elements, as well as for storing inventory information about
   physical network assets that are not retrievable from Network
   Elements.  The collected data is hereafter referred to as the
   "Inventory Asset Database".  Examples of information collected from
   Network Elements are: interface up/down status, the type of SFP/XFP
   optics inserted into a physical port, etc.

   The Inventory Collection function may use SNMP and CLI to acquire
   inventory information from Network Elements.  The information
   housed in the Inventory Manager is retrieved by applications via a
   variety of protocols: SQL, RPC, REST, etc.  Inventory information
   retrieved from Network Elements is periodically updated in the
   Inventory Collection system to reflect changes in the physical
   and/or logical network assets.  The polling interval used to
   retrieve updated information varies depending on the scaling
   constraints of the Inventory Collection systems and the intervals
   at which changes to the physical and/or logical assets are expected
   to occur.

   Examples of changes in network inventory that need to be learned by
   the Inventory Collection function are as follows:

   o  Discovery of new Network Elements.  These elements may or may
      not be actively used in the network (i.e., provisioned but not
      yet activated).

   o  Insertion or removal of line cards or other modules, such as
      optics modules, during service or equipment provisioning.

   o  Changes made to a specific Network Element through a management
      interface by a field technician.

   o  Indication of an NE's physical location and associated cable run
      list, at the time of installation.

   o  Insertion or removal of cables that results in dynamic discovery
      of a new or lost adjacent neighbor, etc.

1.3.  Requirements Language

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in
   this document are to be interpreted as described in RFC 2119
   [RFC2119].

2.  Terminology

   The following briefly defines some of the terminology used within
   this document.

   Inventory Manager is a function that collects Network Element
      inventory and state information directly from Network Elements
      and from associated offline inventory databases.  Inventory
      information may only be visible at a specific network layer;
      for example, a physical link is visible within the IGP, but a
      Layer-2 switch that the physical link traverses is unknown to
      the Layer-3 IGP.

   Policy Manager is a function that attaches metadata to network
      components/attributes.  Such metadata may include security,
      routing, L2 VLAN ID, IP numbering, etc. policies, which enable
      the Topology Manager to:

      *  Assemble a normalized view of the network for clients (or
         upper-layer applications).

      *  Allow clients (or upper-layer applications) access to
         information collected from various network layers and/or
         network components, etc.

      The Policy Manager function may be a sub-component of the
      Topology Manager or it may be a standalone function.

   Topology Manager is a function that collects topological
      information from a variety of sources in the network and
      provides a normalized view of the network topology to clients
      and/or higher-layer applications.

   Orchestration Manager is a function that stitches together
      resources, such as compute or storage, and/or services with the
      network, or vice-versa.  To realize a complete service, the
      Orchestration Manager relies on capabilities provided by the
      other "Managers" listed above.

   Normalized Topology Information Model is an open, standards-based
      information model of the network topology.

   Information Model Abstraction: the notion that one is able to
      represent the same set of elements in an information model at
      different levels of "focus", in order to limit the amount of
      information exchanged to convey this information.

   Multi-Layer Topology: topology is commonly described using the OSI
      protocol layering model.  For example, Layer 3 represents
      routed topologies that typically use IPv4 or IPv6 addresses.
      It is envisioned that, eventually, multiple layers of the
      network may be represented in a single, normalized view of the
      network for certain applications (e.g., Capacity Planning,
      Traffic Engineering, etc.).

   Network Element (NE) refers to a network device, which is typically
      (but not always) addressable, or a host.  It is sometimes
      referred to as a "Node".

   Links: every NE contains at least one link.  Links are used to
      connect the NE to other NEs in the network.  Links may be in a
      variety of states, such as up, down, administratively down,
      internally testing, or dormant.  Links are often synonymous
      with network ports on NEs.
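
   The terminology above can be made concrete with a small,
   non-normative sketch.  The Python classes below are an assumption
   of what the basic elements of a Normalized Topology Information
   Model (Nodes, Links, layers, link state, SRLGs) might look like;
   they are not a proposed standard model.

   <CODE BEGINS>
   from dataclasses import dataclass, field
   from typing import List, Optional

   @dataclass
   class Node:
       node_id: str        # stable identifier, e.g., a router-id
       layer: str          # "transport", "ethernet" or "ip-mpls"
       active: bool = True # False for provisioned-but-inactive gear

   @dataclass
   class Link:
       link_id: str
       src: str            # node_id of one endpoint
       dst: str            # node_id of the other endpoint
       layer: str
       oper_status: str = "up"  # up/down/admin-down/testing/dormant
       srlgs: List[int] = field(default_factory=list)
       supporting_link: Optional[str] = None  # link one layer below

   @dataclass
   class Topology:
       nodes: List[Node]
       links: List[Link]
   <CODE ENDS>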
3.  The Orchestration, Collection & Presentation Framework

3.1.  Overview

   Section 1 demonstrates the need for a network function that would
   provide a common, standards-based topology view to applications.
   Such a topology collection/management/presentation function would
   be part of a wider framework that should also include policy
   management and orchestration.  The framework is shown in Figure 2.

                              +---------------+
                            +----------------+|
                            |  Applications  |-+
                            +----------------+
                                     ^   Websockets, ReST, XMPP...
        +----------------------------+-------------------------+
        |                            |                         |
  +------------+     +------------------------+     +-------------+
  |   Policy   |<----|    Topology Manager    |---->|Orchestration|
  |  Manager   |     | +--------------------+ |     |   Manager   |
  +------------+     | |Topology Information| |     +-------------+
                     | |       Model        | |
                     | +--------------------+ |
                     +------------------------+
                        ^          ^        ^
  Websockets, ReST,     #          |        *     Websockets, ReST,
  XMPP  #################          |        ***************   XMPP
        #                          |                          *
  +------------+                   |                 +------------+
  | Statistics |                   |                 | Inventory  |
  | Collection |                   |                 | Collection |
  +------------+                   |                 +------------+
        ^                          | I2RS, NETCONF, SNMP,    ^
        |                          | TL1 ...                 |
        +--------------------------+-------------------------+
        |                          |                         |
  +---------------+        +---------------+        +---------------+
  |Network Element|        |Network Element|        |Network Element|
  | +-----------+ |        | +-----------+ |        | +-----------+ |
  | |Information| |<-LLDP->| |Information| |<-LMP-->| |Information| |
  | |   Model   | |        | |   Model   | |        | |   Model   | |
  | +-----------+ |        | +-----------+ |        | +-----------+ |
  +---------------+        +---------------+        +---------------+

                      Figure 2: Topology Manager

   The following sections describe in detail the Topology Manager,
   Policy Manager and Orchestration Manager functions.

3.2.  The Topology Manager

   The Topology Manager is a function that collects topological
   information from a variety of sources in the network and provides a
   cohesive, abstracted view of the network topology to clients and/or
   higher-layer applications.  This topology view is based on a
   standards-based, normalized topology information model.

   Topology information sources can be:

   o  The "live" Layer 3 IGP, or an equivalent mechanism that provides
      information about links that are components of the active
      topology.  Active topology links are present in the Link State
      Database (LSDB) and are eligible for forwarding.  Layer 3 IGP
      information can be obtained by listening to IGP updates flooded
      through an IGP domain, or from Network Elements.

   o  The Inventory Collection system, which provides information for
      network components not visible within the Layer 3 IGP's LSDB
      (i.e., links or nodes, or properties of those links or nodes, at
      lower layers of the network).

   o  The Statistics Collection system, which provides traffic
      information, such as traffic demands or link utilizations.

   The Topology Manager provides topology information to Clients or
   higher-layer applications via a northbound interface, such as ReST,
   Websockets, or XMPP.
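
   A non-normative sketch of how a client might use such a northbound
   interface follows.  The ReST endpoint, URL, query parameters and
   JSON field names are purely hypothetical.

   <CODE BEGINS>
   import json
   import urllib.request

   # Hypothetical Topology Manager northbound ReST endpoint
   BASE = "https://topology-manager.example.net/api"

   def get_topology(layer="ip-mpls", include_inactive=False):
       """Fetch a normalized topology view for one network layer."""
       url = (f"{BASE}/topology?layer={layer}"
              f"&inactive={str(include_inactive).lower()}")
       with urllib.request.urlopen(url) as resp:
           return json.load(resp)

   topo = get_topology(layer="ip-mpls")
   for link in topo["links"]:
       print(link["link-id"], link["src"], link["dst"],
             link.get("srlgs", []))
   <CODE ENDS>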
   The Topology Manager will contain topology information for multiple
   layers of the network -- Transport, Ethernet and IP/MPLS -- as well
   as information for multiple Layer 3 IGP areas and multiple
   Autonomous Systems (ASes).  The topology information can be used by
   higher-level applications, such as Traffic Engineering, Capacity
   Planning and Provisioning.  Such applications are typically used to
   design, augment and optimize IP/MPLS networks, and require
   knowledge of the underlying Shared Risk Link Groups (SRLGs) within
   the Transport and/or Ethernet layers of the network.

   The Topology Manager must be able to discover Network Elements that
   are not visible in the "live" Layer 3 IGP's Link State Database
   (LSDB).  Such Network Elements can be either inactive, or active
   but invisible in the Layer 3 LSDB (e.g., L2 Ethernet switches,
   ROADMs, or Network Elements that are in an underlying transport
   network).

   In addition to the static inventory information collected from the
   Inventory Manager, the Topology Manager will also collect dynamic
   inventory information.  For example, Network Elements utilize
   various link-layer discovery protocols (i.e., LLDP, LMP, etc.) to
   automatically identify adjacent nodes and ports.  This information
   can be pushed to, or pulled by, the Topology Manager in order to
   create an accurate representation of the physical topology of the
   network.

3.3.  The Policy Manager

   The Policy Manager is the function used to enforce and program
   policies applicable to network component/attribute data.  Policy
   enforcement is a network-wide function that can be consumed by
   various Network Elements and services, including the Inventory
   Manager, the Topology Manager and other Network Elements.  Such
   policies are likely to encompass the following:

   o  Logical Identifier Numbering Policies

      *  Correlation of IP prefix to link based on link type, such as
         P-P, P-PE, or PE-CE.

      *  Correlation of IP prefix to IGP area.

      *  Layer-2 VLAN ID assignments, etc.

   o  Routing Configuration Policies

      *  OSPF Area or IS-IS Net-ID to Node (Type) correlation.

      *  BGP routing policies, such as nodes designated for injection
         of aggregate routes, max-prefix policies, or AFI/SAFI to node
         correlation.

   o  Security Policies

      *  Access Control Lists.

      *  Rate-limiting.

   o  Network Component/Attribute Data Access Policies: a Client's
      (upper-layer application's) access to Network Components/
      Attributes contained in the "Inventory Manager", as well as to
      Policies contained within the "Policy Manager" itself.

   The Policy Manager function may be either a sub-component of the
   Topology or Orchestration Manager, or a standalone component.
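
   The following non-normative fragment sketches how the first policy
   category above (Logical Identifier Numbering) might be represented
   and queried.  The table contents, field names and prefix pools are
   hypothetical.

   <CODE BEGINS>
   # Hypothetical numbering policies keyed by link type
   NUMBERING_POLICY = {
       "P-P":   {"prefix_len": 31, "pool": "10.0.0.0/16"},
       "P-PE":  {"prefix_len": 31, "pool": "10.1.0.0/16"},
       "PE-CE": {"prefix_len": 30, "pool": "192.0.2.0/24"},
   }

   def policy_for_link(link_type):
       """Return the numbering policy the Policy Manager would hand
       back for a link of the given type."""
       try:
           return NUMBERING_POLICY[link_type]
       except KeyError:
           raise ValueError(
               f"no numbering policy for link type {link_type}")

   print(policy_for_link("PE-CE"))
   <CODE ENDS>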
3.4.  Orchestration Manager

   The Orchestration Manager provides the ability to stitch together
   resources (such as compute or storage) and/or services with the
   network, or vice-versa.  Examples of "generic" services may include
   the following:

   o  Application-specific Load Balancing

   o  Application-specific Network (Bandwidth) Optimization

   o  Application- or End-User-specific Class-of-Service

   o  Application- or End-User-specific Network Access Control

   The above services could then enable coupling of resources with the
   network to realize the following:

   o  Network Optimization: creation and migration of Virtual Machines
      (VMs) so that they are adjacent to storage in the same Data
      Center.

   o  Network Access Control: coupling of available (generic) compute
      nodes at the appropriate point of the data path to perform
      firewall, NAT, etc. functions on data traffic.

   The Orchestration Manager will exchange information models with the
   Topology Manager, the Policy Manager and the Inventory Manager.  In
   addition, the Orchestration Manager must support publish and
   subscribe capabilities to those functions, as well as to Clients.

   The Orchestration Manager may receive requests from Clients
   (applications) for immediate access to specific network resources.
   However, Clients may also request to schedule future appointments
   to reserve appropriate network resources when, for example, a
   special event is scheduled to start and end.

   Finally, the Orchestration Manager should have the flexibility to
   determine which network layer(s) may be able to satisfy a given
   Client's request, based on constraints received from the Client as
   well as constraints learned from the Policy and Topology Managers.
   This could allow the Orchestration Manager to, for example, satisfy
   a given service request for a given Client using the optical
   network (via an OTN service) if there is insufficient IP/MPLS
   capacity at the specific moment the Client's request is received.

   The operational model is shown in the following figure.

   TBD.

                  Figure 3: Overall Reference Model
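
   The scheduled-reservation behavior described above can be sketched,
   non-normatively, as follows: Clients book future appointments, and
   the Orchestration Manager admits each one only if concurrent demand
   fits within capacity.  The single 10 Gb/s capacity figure, the
   client name and the data structures are assumptions made for
   illustration only.

   <CODE BEGINS>
   from dataclasses import dataclass
   from datetime import datetime

   @dataclass
   class Reservation:
       client_id: str
       bandwidth_mbps: int
       start: datetime
       end: datetime
       layer: str = "any"  # let the Orchestration Manager pick

   calendar = []           # toy in-memory appointment book

   def request_reservation(resv, capacity_mbps=10_000):
       """Admit the reservation if concurrent demand stays within
       capacity; otherwise reject (a real Orchestration Manager
       could then try to satisfy the request at another layer)."""
       overlapping = [r for r in calendar
                      if r.start < resv.end and resv.start < r.end]
       demand = (sum(r.bandwidth_mbps for r in overlapping)
                 + resv.bandwidth_mbps)
       if demand > capacity_mbps:
           return False
       calendar.append(resv)
       return True

   ok = request_reservation(Reservation(
       "broadcaster-7", 2_000,
       datetime(2013, 9, 1, 18, 0), datetime(2013, 9, 1, 22, 0)))
   print(ok)
   <CODE ENDS>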
4.  Use Cases

4.1.  Virtualized Views of the Network

4.1.1.  Capacity Planning and Traffic Engineering

4.1.1.1.  Present Mode of Operation

   When performing Traffic Engineering and/or Capacity Planning of an
   IP/MPLS network, it is important to account for SRLGs that exist
   within the underlying physical, optical and Ethernet networks.
   Currently, it is quite common to take "snapshots", at infrequent
   intervals, of the inventory data of the underlying physical and
   optical layer networks.  This inventory data is then normalized to
   conform to the data import requirements of sometimes separate
   Traffic Engineering and/or Capacity Planning tools.  This process
   is error-prone and inefficient, particularly as the underlying
   network inventory information changes due to the introduction of
   new network element makes or models, line cards, capabilities, etc.

   The present mode of operation is inefficient with respect to
   Software Development, Capacity Planning and Traffic Engineering
   resources.  Due to this inefficiency, the underlying physical
   network inventory information (containing SRLG and corresponding
   critical network asset information) is not updated frequently, thus
   exposing the network to, at minimum, inefficient utilization and,
   at worst, critical impairments.

4.1.1.2.  Proposed Mode of Operation

   First, the Inventory Manager will extract inventory information
   from network elements and associated inventory databases.
   Information extracted from inventory databases will include
   physical cross-connects and other information that is not available
   directly from network elements.  Standards-based information models
   and associated vocabulary will be required to represent not only
   components inside or directly connected to network elements, but
   also components of a physical layer path (i.e., cross-connect
   panels, etc.).  The inventory data will comprise the complete set
   of inactive and active network components.

   Second, the Topology Manager will augment the inventory information
   with topology information obtained from Network Elements and other
   sources, and provide an IGP-based view of the active topology of
   the network.  The Topology Manager will also include non-topology
   dynamic information from IGPs, such as Available Bandwidth,
   Reserved Bandwidth, and Traffic Engineering (TE) attributes
   associated with links.

   Finally, the Statistics Collector will collect utilization
   statistics from Network Elements, and archive and aggregate them in
   a statistics data warehouse.  Selected statistics and other dynamic
   data may be distributed through IGP routing protocols
   ([I-D.ietf-isis-te-metric-extensions] and
   [I-D.ietf-ospf-te-metric-extensions]) and then collected at the
   Statistics Collection function via BGP-LS
   ([I-D.ietf-idr-ls-distribution]).  Statistics summaries will then
   be exposed in normalized information models to the Topology
   Manager, which can use them to, for example, build trended
   utilization models to forecast expected changes to physical and
   logical network components.

   It is important to recognize that extracting topology information
   solely from Network Elements and IGPs (IS-IS TE or OSPF TE) is
   inadequate for this use case.  First, IGPs only expose the active
   components (e.g., vertices of the SPF tree) of the IP network, and
   are not aware of "hidden" or inactive interfaces within IP/MPLS
   network elements, such as unused line cards or ports.  IGPs are
   also not aware of components that reside at a layer lower than
   IP/MPLS, such as Ethernet switches or optical transport systems.
   Second, IGPs only convey SRLG information that has first been
   applied within a router's configuration, either manually or
   programmatically.  As mentioned previously, this SRLG information
   in the IP/MPLS network is subject to being infrequently updated
   and, as a result, may inadequately account for the critical,
   underlying network fate-sharing properties that are necessary to
   properly design resilient circuits and/or paths through the
   network.

   Once the Topology Manager has assembled a normalized view of the
   topology and the metadata associated with each component of the
   topology, it can expose this information via its northbound API to
   the Capacity Planning and Traffic Engineering applications.  These
   applications only require generalized information about the nodes
   and links that comprise the network, e.g., links used to
   interconnect nodes, SRLG information (from the underlying network),
   utilization rates of each link over some period of time, etc.

   Note that any client/application that understands the Topology
   Manager's northbound API and its topology information model can
   communicate with the Topology Manager.  Note also that topology
   information may be provided by Network Elements from different
   vendors, which may use different information models.  If a Client
   wanted to retrieve topology information directly from Network
   Elements, it would have to translate and normalize these different
   representations itself.

   A Traffic Engineering application may run a variety of CSPF
   algorithms that create a list of TE tunnels that globally optimize
   the packing efficiency of physical links throughout the network.
   The TE tunnels are then programmed into the network, either
   directly or through a controller.  Programming of TE tunnels into
   the network is outside the scope of this document.
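
   A minimal, non-normative example of such a computation is sketched
   below: a constrained SPF that first prunes links lacking the
   required available bandwidth, then runs Dijkstra's algorithm over
   the remainder.  Real CSPF implementations consider many more
   constraints (SRLG diversity, link affinities, etc.); the topology
   and figures here are invented for illustration.

   <CODE BEGINS>
   import heapq

   def cspf(links, src, dst, min_bw):
       """Shortest path by TE metric over links that satisfy a
       bandwidth constraint.  links: iterable of
       (a, b, te_metric, avail_bw); links are bidirectional."""
       adj = {}
       for a, b, metric, bw in links:
           if bw >= min_bw:             # constraint: prune first
               adj.setdefault(a, []).append((b, metric))
               adj.setdefault(b, []).append((a, metric))
       dist, prev = {src: 0}, {}
       pq = [(0, src)]
       while pq:
           d, u = heapq.heappop(pq)
           if u == dst:
               break
           if d > dist.get(u, float("inf")):
               continue
           for v, w in adj.get(u, []):
               if d + w < dist.get(v, float("inf")):
                   dist[v], prev[v] = d + w, u
                   heapq.heappush(pq, (d + w, v))
       if dst not in dist:
           return None                  # no feasible path
       path, u = [dst], dst
       while u != src:
           u = prev[u]
           path.append(u)
       return list(reversed(path))

   links = [("A", "B", 10, 5), ("B", "C", 10, 5), ("A", "C", 30, 2)]
   print(cspf(links, "A", "C", min_bw=4))   # ['A', 'B', 'C']
   <CODE ENDS>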
   A Capacity Planning application may run a variety of algorithms,
   the result of which is a list of new inventory that is required for
   purchase or redeployment, as well as associated work orders for
   field technicians to augment the network for expected growth.

4.1.2.  Services Provisioning

   Beyond Capacity Planning and Traffic Engineering applications,
   having a normalized view of just the IP/MPLS layer of the network
   is still very important for other mission-critical applications,
   such as Security Auditing and IP/MPLS Services Provisioning (e.g.,
   L2VPN, L3VPN, etc.).  With respect to the latter, these types of
   applications should not need a detailed understanding of, for
   example, SRLG information, assuming that the underlying MPLS Tunnel
   LSPs are known to account for the resiliency requirements of all
   services that ride over them.  Nonetheless, for both types of
   applications it is critical to have a common and up-to-date
   normalized view of the IP/MPLS network in order to, for example,
   instantiate new services at optimal locations in the network, or to
   validate proper ACL configuration to protect the associated
   routing, signaling and management protocols on the network.

   A VPN Service Provisioning application must perform the following
   resource selection operations:

   o  Identify Services PEs in all markets/cities where the customer
      has indicated they want service.

   o  Identify one or more existing Services PEs in each city with
      connectivity to the access network(s) (e.g., SONET/TDM) used to
      deliver the PE-CE tail circuits to the Services PE.

   o  Determine that the Services PEs have available capacity, on both
      the PE-CE access interface and their uplinks, to terminate the
      tail circuit.

   The VPN Provisioning application would iteratively query the
   Topology Manager to narrow down the scope of resources to the set
   of Services PEs with the appropriate uplink bandwidth and access
   circuit capability, plus the capacity, to realize the requested VPN
   service.  Once the VPN Provisioning application has a candidate
   list of resources, it requests programming of the Services PEs and
   associated access circuits to set up the customer's VPN service in
   the network.  Programming of Services PEs is outside the scope of
   this document.
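
   The iterative narrowing described above can be illustrated with the
   following non-normative sketch, in which per-PE records (with
   hypothetical field names and values) stand in for responses from
   the Topology Manager.

   <CODE BEGINS>
   # Hypothetical per-PE records as a VPN Provisioning application
   # might retrieve them from the Topology Manager.
   pes = [
       {"pe": "pe1.nyc", "city": "nyc", "access": {"sonet"},
        "access_free_mbps": 600, "uplink_free_mbps": 4_000},
       {"pe": "pe2.nyc", "city": "nyc", "access": {"ethernet"},
        "access_free_mbps": 900, "uplink_free_mbps": 9_000},
       {"pe": "pe1.lax", "city": "lax", "access": {"sonet",
        "ethernet"},
        "access_free_mbps": 300, "uplink_free_mbps": 2_000},
   ]

   def candidate_pes(city, access_type, tail_mbps):
       """Narrow the PE list per the three bullets above."""
       return [p["pe"] for p in pes
               if p["city"] == city                    # right market
               and access_type in p["access"]          # right access
               and p["access_free_mbps"] >= tail_mbps  # access room
               and p["uplink_free_mbps"] >= tail_mbps] # uplink room

   print(candidate_pes("nyc", "sonet", 500))           # ['pe1.nyc']
   <CODE ENDS>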
4.1.3.  Troubleshooting & Monitoring

   Once the Topology Manager has a normalized view of several layers
   of the network, it can expose a rich set of data to network
   operators who are performing diagnosis, troubleshooting and repairs
   on the network.  Specifically, there is a need to (rapidly)
   assemble a current, accurate and comprehensive network diagram of
   an L2VPN or L3VPN service for a particular customer when either:
   a) attempting to diagnose a service fault/error; or, b) attempting
   to augment the customer's existing service.  Information that may
   be assembled into a comprehensive picture could include the
   physical and logical components related specifically to that
   customer's service, i.e., VLANs or channels used by the PE-CE
   access circuits, CoS policies, historical PE-CE circuit
   utilization, etc.  The Topology Manager would assemble this
   information, on behalf of each of the network elements and other
   data sources in and associated with the network, and would present
   it in a vendor-independent data model to applications, allowing
   the operator (or, potentially, the customer through an SP's Web
   portal) to visualize the information.

4.2.  Path Computation Element (PCE)

   As described in [RFC4655], a PCE can be used to compute MPLS-TE
   paths within a "domain" (such as an IGP area) or across multiple
   domains (such as a multi-area AS, or multiple ASes).

   o  Within a single area, the PCE offers enhanced computational
      power that may not be available on individual routers,
      sophisticated policy control and algorithms, and coordination of
      computation across the whole area.

   o  If a router wants to compute an MPLS-TE path across IGP areas,
      its own TED lacks visibility of the complete topology.  That
      means that the router cannot determine the end-to-end path, and
      cannot even select the right exit router (Area Border Router -
      ABR) for an optimal path.  This is an issue for large-scale
      networks that need to segment their core networks into distinct
      areas, but which still want to take advantage of MPLS-TE.

   The PCE presents a computation server that may have visibility into
   more than one IGP area or AS, or may cooperate with other PCEs to
   perform distributed path computation.  The PCE needs access to the
   topology and the Traffic Engineering Database (TED) for the area(s)
   it serves, but [RFC4655] does not describe how this is achieved.
   Many implementations make the PCE a passive participant in the IGP
   so that it can learn the latest state of the network, but this may
   be sub-optimal when the network is subject to a high degree of
   churn, or when the PCE is responsible for multiple areas.

   The following figure shows how a PCE can get its TED information
   using a Topology Server.

   +----------+
   |  -----   |    TED synchronization via Topology API
   | | TED |<-+-------------------------------------------+
   |  -----   |                                           |
   |    |     |                                           |
   |    v     |                                           |
   |  -----   |                                           |
   | | PCE |  |                                           |
   |  -----   |                                           |
   +----------+                                           |
        ^                                                 |
        | Request/                                        |
        | Response                                        |
        v                                                 |
   Service  +----------+  Signaling  +----------+    +----------+
   Request  | Head-End |  Protocol   | Adjacent |    | Topology |
   -------->|   Node   |<----------->|   Node   |    | Manager  |
            +----------+             +----------+    +----------+

        Figure 4: Topology use case: Path Computation Element
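
   A non-normative sketch of TED synchronization via such a Topology
   API follows.  The change-feed endpoint, its "since" parameter and
   the delta format are all assumptions; the sketch merely illustrates
   keeping a PCE-local TED current without IGP participation.

   <CODE BEGINS>
   import json
   import urllib.request

   # Hypothetical Topology Manager northbound ReST endpoint
   BASE = "https://topology-manager.example.net/api"

   class Ted:
       """A PCE-local Traffic Engineering Database kept in sync by
       polling the Topology Manager for changes since the last
       version seen."""
       def __init__(self):
           self.version = 0
           self.links = {}      # link-id -> link attributes

       def sync(self):
           url = f"{BASE}/topology/changes?since={self.version}"
           with urllib.request.urlopen(url) as resp:
               delta = json.load(resp)
           for link in delta.get("updated", []):
               self.links[link["link-id"]] = link
           for link_id in delta.get("removed", []):
               self.links.pop(link_id, None)
           self.version = delta["version"]
   <CODE ENDS>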
4.3.  ALTO Server

   An ALTO Server [RFC5693] is an entity that generates an abstracted
   network topology and provides it to network-aware applications over
   a web-service-based API.  Example applications are P2P clients or
   trackers, or CDNs.  The abstracted network topology comes in the
   form of two maps: a Network Map that specifies the allocation of
   prefixes to PIDs, and a Cost Map that specifies the cost between
   the PIDs listed in the Network Map.  For more details, see
   [I-D.ietf-alto-protocol].

   ALTO abstract network topologies can be auto-generated from the
   physical topology of the underlying network.  The generation would
   typically be based on policies and rules set by the operator.  Both
   prefix and TE data are required: prefix data is required to
   generate ALTO Network Maps, and TE (topology) data is required to
   generate ALTO Cost Maps.  Prefix data is carried and originated in
   BGP; TE data is originated and carried in an IGP.  The mechanism
   defined in this document provides a single interface through which
   an ALTO Server can retrieve all the necessary prefix and network
   topology data from the underlying network.  Note that an ALTO
   Server can use other mechanisms to get network data, for example,
   peering with multiple IGP and BGP Speakers.

   The following figure shows how an ALTO Server can get network
   topology information from the underlying network using the
   Topology API.

   +--------+
   | Client |<--+
   +--------+   |
                |  ALTO     +--------+                    +----------+
   +--------+   | Protocol  |  ALTO  |  Network Topology  | Topology |
   | Client |<--+-----------| Server |<-------------------| Manager  |
   +--------+   |           |        |                    |          |
                |           +--------+                    +----------+
   +--------+   |
   | Client |<--+
   +--------+

               Figure 5: Topology use case: ALTO Server
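
   The auto-generation step described above can be sketched,
   non-normatively, as follows: a hypothetical operator policy maps
   prefixes to PIDs to form a Network Map, and PID-to-PID costs
   derived from IGP metrics form a Cost Map.  All names, prefixes and
   metrics are invented for illustration.

   <CODE BEGINS>
   # Hypothetical operator policy: prefixes -> PIDs, plus IGP-derived
   # inter-PID costs learned via the Topology API.
   prefix_to_pid = {
       "192.0.2.0/25":    "pid-east",
       "192.0.2.128/25":  "pid-west",
       "198.51.100.0/24": "pid-west",
   }
   igp_cost = {("pid-east", "pid-west"): 20}

   def network_map():
       """Group prefixes by PID, ALTO Network Map style."""
       nmap = {}
       for prefix, pid in prefix_to_pid.items():
           nmap.setdefault(pid, {"ipv4": []})["ipv4"].append(prefix)
       return nmap

   def cost_map(pids):
       """Full PID-to-PID cost matrix; cost 0 to self, symmetric
       lookup otherwise."""
       return {s: {d: (0 if s == d else
                       igp_cost.get((s, d),
                                    igp_cost.get((d, s))))
                   for d in pids}
               for s in pids}

   nmap = network_map()
   print(nmap)
   print(cost_map(sorted(nmap)))
   <CODE ENDS>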
5.  Acknowledgements

   The authors wish to thank Alia Atlas, Dave Ward, Hannes Gredler and
   Stefano Previdi for their valuable contributions and feedback on
   this draft.

6.  IANA Considerations

   This memo includes no request to IANA.

7.  Security Considerations

   At the moment, the Use Cases covered in this document apply
   specifically to a single Service Provider or Enterprise network.
   Therefore, network administrators should take appropriate
   precautions to ensure that adequate access controls exist, so that
   only internal applications and end-users have physical or logical
   access to the Topology Manager.  This should be similar to the
   precautions that are already taken by Network Administrators to
   secure their existing Network Management, OSS and BSS systems.

   As this work evolves, it will be important to determine the
   appropriate granularity of access controls in terms of which
   individuals or groups may have read and/or write access to various
   types of information contained within the Topology Manager.  It
   would be ideal if these access control mechanisms were centralized
   within the Topology Manager itself.

8.  References

8.1.  Normative References

   [RFC2119]  Bradner, S., "Key words for use in RFCs to Indicate
              Requirement Levels", BCP 14, RFC 2119, March 1997.

8.2.  Informative References

   [I-D.ietf-alto-protocol]
              Alimi, R., Penno, R., and Y. Yang, "ALTO Protocol",
              draft-ietf-alto-protocol-17 (work in progress),
              July 2013.

   [I-D.ietf-idr-ls-distribution]
              Gredler, H., Medved, J., Previdi, S., Farrel, A., and S.
              Ray, "North-Bound Distribution of Link-State and TE
              Information using BGP", draft-ietf-idr-ls-
              distribution-03 (work in progress), May 2013.

   [I-D.ietf-isis-te-metric-extensions]
              Previdi, S., Giacalone, S., Ward, D., Drake, J., Atlas,
              A., and C. Filsfils, "IS-IS Traffic Engineering (TE)
              Metric Extensions", draft-ietf-isis-te-metric-
              extensions-00 (work in progress), June 2013.

   [I-D.ietf-ospf-te-metric-extensions]
              Giacalone, S., Ward, D., Drake, J., Atlas, A., and S.
              Previdi, "OSPF Traffic Engineering (TE) Metric
              Extensions", draft-ietf-ospf-te-metric-extensions-04
              (work in progress), June 2013.

   [I-D.ietf-pce-stateful-pce]
              Crabbe, E., Medved, J., Minei, I., and R. Varga, "PCEP
              Extensions for Stateful PCE", draft-ietf-pce-stateful-
              pce-05 (work in progress), July 2013.

   [RFC4655]  Farrel, A., Vasseur, J., and J. Ash, "A Path Computation
              Element (PCE)-Based Architecture", RFC 4655,
              August 2006.

   [RFC5693]  Seedorf, J. and E. Burger, "Application-Layer Traffic
              Optimization (ALTO) Problem Statement", RFC 5693,
              October 2009.

Authors' Addresses

   Jan Medved
   Cisco Systems, Inc.
   170 West Tasman Drive
   San Jose, CA  95134
   USA

   Email: jmedved@cisco.com

   Thomas D. Nadeau
   Juniper Networks
   1194 N. Mathilda Ave.
   Sunnyvale, CA  94089
   USA

   Email: tnadeau@juniper.net

   Shane Amante