Internet Engineering Task Force                               J. Medved
Internet-Draft                                                S. Previdi
Intended status: Informational                      Cisco Systems, Inc.
Expires: April 23, 2014                                         V. Lopez
                                                          Telefonica I+D
                                                               S. Amante
                                                        October 20, 2013


                         Topology API Use Cases
                draft-amante-i2rs-topology-use-cases-01

Abstract

This document describes use cases for gathering routing, forwarding, and policy information (hereafter referred to as "topology information") about the network.  It describes several applications that need to view the topology of the underlying physical or logical network.  This document further demonstrates the need for a "Topology Manager" and related functions that collect topology data from network elements and other data sources, coalesce the collected data into a coherent view of the overall network topology, and normalize that view for use by clients -- namely, applications that consume topology information.

Status of This Memo

This Internet-Draft is submitted in full conformance with the provisions of BCP 78 and BCP 79.
Internet-Drafts are working documents of the Internet Engineering Task Force (IETF).  Note that other groups may also distribute working documents as Internet-Drafts.  The list of current Internet-Drafts is at http://datatracker.ietf.org/drafts/current/.

Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time.  It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress."

This Internet-Draft will expire on April 23, 2014.

Copyright Notice

Copyright (c) 2013 IETF Trust and the persons identified as the document authors.  All rights reserved.

This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents (http://trustee.ietf.org/license-info) in effect on the date of publication of this document.  Please review these documents carefully, as they describe your rights and restrictions with respect to this document.  Code Components extracted from this document must include Simplified BSD License text as described in Section 4.e of the Trust Legal Provisions and are provided without warranty as described in the Simplified BSD License.

Table of Contents

   1.  Introduction
     1.1.  Statistics Collection
     1.2.  Inventory Collection
     1.3.  Requirements Language
   2.  Terminology
   3.  The Orchestration, Collection & Presentation Framework
     3.1.  Overview
     3.2.  The Topology Manager
     3.3.  The Policy Manager
     3.4.  Orchestration Manager
   4.  Use Cases
     4.1.  Virtualized Views of the Network
       4.1.1.  Capacity Planning and Traffic Engineering
         4.1.1.1.  Present Mode of Operation
         4.1.1.2.  Proposed Mode of Operation
       4.1.2.  Services Provisioning
       4.1.3.  Troubleshooting & Monitoring
     4.2.  Virtual Network Topology Manager (VNTM)
     4.3.  Path Computation Element (PCE)
     4.4.  ALTO Server
   5.  Acknowledgements
   6.  IANA Considerations
   7.  Security Considerations
   8.  References
     8.1.  Normative References
     8.2.  Informative References
   Authors' Addresses

1.  Introduction

In today's networks, a variety of applications, such as Traffic Engineering, Capacity Planning, Security Auditing or Services Provisioning (for example, Virtual Private Networks), have a common need to acquire and consume network topology information.
Unfortunately, all of these applications are typically vertically integrated: each uses its own proprietary normalized view of the network and its own proprietary data collectors, interpreters and adapters, which speak a variety of protocols (SNMP, CLI, SQL, etc.) directly to network elements and to back-office systems.  While some of the topological information can be distributed using routing protocols, it is not desirable for some of these applications to understand or participate in routing protocols.

This approach is inefficient for several reasons.  First, developers must write duplicate "network discovery" functions, which then become challenging to maintain over time, particularly when new equipment is first introduced to the network.  Second, since there is no common "vocabulary" to describe the various components in the network, such as physical links, logical links, or IP prefixes, each application has its own data model.  Some solutions address this by distributing the information in the normalized form used by routing protocols; however, that information still omits "inactive" topological information, i.e. information that is considered part of a network's inventory.

These limitations leave applications unable to easily exchange information with each other.  For example, applications cannot share changes with each other that are (to be) applied to the physical and/or logical network, such as installation of new physical links or deployment of security ACLs.  Each application must frequently poll network elements and other data sources to ensure that it has a consistent representation of the network so that it can carry out its particular domain-specific tasks.  In other cases, applications that cannot speak routing protocols must use proprietary CLIs or other management interfaces, which represent the topological information in non-standard formats or, worse, non-standard semantic models.

Overall, the software architecture described above at best results in inefficient use of both software developer resources and network resources and, at worst, results in some applications simply not having access to this information.

Figure 1 is an illustration of how individual applications collect data from the underlying network.  Applications retrieve inventory, network topology, state and statistics information by communicating directly with the underlying Network Elements as well as with intermediary proxies of that information.  In addition, applications transmit changes required of a Network Element's configuration and/or state directly to individual Network Elements, most commonly using CLI or NETCONF.  It is important to note that the "data models", or semantics, of the information contained within Network Elements are largely proprietary with respect to most configuration and state information, which is why a proprietary CLI is often the only choice for reflecting changes in an NE's configuration or state.  This remains the case even when standards-based mechanisms such as NETCONF are used: they provide a standard syntax, but still fall short due to the proprietary semantics associated with the internal representation of the information.

                             +---------------+
                          +----------------+ |
                          |  Applications  |-+
                          +----------------+
                            ^    ^      ^
            SQL, RPC, ReST  #    |      *   SQL, RPC, ReST ...
        #####################    |      ******************
        #                        |                       *
  +------------+                 |                 +------------+
  | Statistics |                 |                 | Inventory  |
  | Collection |                 |                 | Collection |
  +------------+                 |                 +------------+
        ^                        | NETCONF, I2RS, SNMP,  ^
        |                        | CLI, TL1, ...         |
        +------------------------+-----------------------+
        |                        |                       |
        |                        |                       |
+---------------+        +---------------+       +---------------+
|Network Element|        |Network Element|       |Network Element|
| +-----------+ |        | +-----------+ |       | +-----------+ |
| |Information| |<-LLDP->| |Information| |<-LMP->| |Information| |
| |   Model   | |        | |   Model   | |       | |   Model   | |
| +-----------+ |        | +-----------+ |       | +-----------+ |
+---------------+        +---------------+       +---------------+

           Figure 1: Applications getting topology data

Figure 1 shows how current management interfaces such as NETCONF, SNMP, CLI, etc. are used to transmit or receive information to/from various Network Elements.  The figure also shows that protocols such as LLDP and LMP participate in topology discovery, specifically to discover adjacent network elements.

The following sections describe the "Statistics Collection" and "Inventory Collection" functions.

1.1.  Statistics Collection

In Figure 1, "Statistics Collection" is a dedicated infrastructure that collects statistics from Network Elements.  It periodically (for example, every 5 minutes) polls Network Elements for octets transferred per interface, per LSP, etc.  Collected statistics are stored and collated within a statistics data warehouse.  Applications typically query the statistics data warehouse, rather than polling Network Elements directly, to get the appropriate set of link utilization figures for their analysis.

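As a purely illustrative sketch of this pattern, an application might query the statistics data warehouse as follows.  The endpoint, query parameters and response schema shown are hypothetical and are not defined by this document; Python is used only as an example language.

<CODE BEGINS>
# Illustrative only: a hypothetical REST query against a statistics data
# warehouse of the kind described above.  The endpoint, parameters and
# response schema are assumptions, not part of any defined interface.
import statistics

import requests

STATS_WAREHOUSE = "https://stats.example.net/api/link-stats"   # hypothetical URL


def link_utilization_95th(node, interface, hours=24):
    """Fetch 5-minute samples for one link and return the 95th percentile
    utilization, instead of polling the Network Element directly."""
    resp = requests.get(
        STATS_WAREHOUSE,
        params={"node": node, "interface": interface, "period_hours": hours},
        timeout=10,
    )
    resp.raise_for_status()
    samples = resp.json()["samples"]    # assumed: list of {"bps": ..., "speed_bps": ...}
    utilizations = [s["bps"] / s["speed_bps"] for s in samples]
    # statistics.quantiles(n=20) returns 19 cut points; the last one
    # approximates the 95th percentile.
    return statistics.quantiles(utilizations, n=20)[18]
<CODE ENDS>

The point of the sketch is only that applications consume collated, warehouse-level data rather than raw counters polled from each Network Element.
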
1.2.  Inventory Collection

"Inventory Collection" is a network function responsible for collecting component and state information directly from Network Elements, as well as for storing inventory information about physical network assets that are not retrievable from Network Elements.  The collected data is hereafter referred to as the "Inventory Asset Database".  Examples of information collected from Network Elements are: interface up/down status, the type of SFP/XFP optics inserted into a physical port, etc.

The Inventory Collection function may use SNMP and CLI to acquire inventory information from Network Elements.  The information housed in the Inventory Manager is retrieved by applications via a variety of protocols: SQL, RPC, REST, etc.  Inventory information retrieved from Network Elements is periodically refreshed in the Inventory Collection system to reflect changes in the physical and/or logical network assets.  The polling interval varies depending on the scaling constraints of the Inventory Collection systems and the intervals at which changes to the physical and/or logical assets are expected to occur.

Examples of changes in network inventory that need to be learned by the Inventory Collection function are as follows:

   o  Discovery of new Network Elements.  These elements may or may not be actively used in the network (i.e.: provisioned but not yet activated).

   o  Insertion or removal of line cards or other modules, such as optics modules, during service or equipment provisioning.

   o  Changes made to a specific Network Element through a management interface by a field technician.

   o  Indication of an NE's physical location and associated cable run list, at the time of installation.

   o  Insertion or removal of cables that results in dynamic discovery of a new or lost adjacent neighbor, etc.

1.3.  Requirements Language

The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in RFC 2119 [RFC2119].

2.  Terminology

The following briefly defines some of the terminology used within this document.

Inventory Manager is a function that collects Network Element inventory and state information directly from Network Elements and from associated offline inventory databases.  Inventory information may only be visible at a specific network layer; for example, a physical link is visible within the IGP, but a Layer-2 switch that the physical link traverses is unknown to the Layer-3 IGP.

Policy Manager is a function that attaches metadata to network components/attributes.  Such metadata may include security, routing, Layer-2 VLAN ID, IP numbering and other policies, which enable the Topology Manager to:

   *  Assemble a normalized view of the network for clients (or upper-layer applications).

   *  Allow clients (or upper-layer applications) access to information collected from various network layers and/or network components, etc.

The Policy Manager function may be a sub-component of the Topology Manager or it may be a standalone function.

Topology Manager is a function that collects topological information from a variety of sources in the network and provides a normalized view of the network topology to clients and/or higher-layer applications.

Orchestration Manager is a function that stitches together resources, such as compute or storage, and/or services with the network, or vice-versa.  To realize a complete service, the Orchestration Manager relies on capabilities provided by the other "Managers" listed above.

Normalized Topology Information Model is an open, standards-based information model of the network topology.

Information Model Abstraction is the notion that one is able to represent the same set of elements in an information model at different levels of "focus", in order to limit the amount of information that must be exchanged to convey it.

Multi-Layer Topology: topology is commonly described in terms of the OSI protocol layering model.  For example, Layer 3 represents routed topologies that typically use IPv4 or IPv6 addresses.  It is envisioned that, eventually, multiple layers of the network may be represented in a single, normalized view of the network to certain applications (e.g.: Capacity Planning, Traffic Engineering, etc.).

Network Element (NE) refers to a network device, which is typically (but not always) addressable, or a host.  It is sometimes referred to as a "Node".

Links: every NE contains at least one link.  Links are used to connect the NE to other NEs in the network.  Links may be in a variety of states, such as up, down, administratively down, internally testing, or dormant.
Links are often synonymous with network ports on NEs.

3.  The Orchestration, Collection & Presentation Framework

3.1.  Overview

Section 1 demonstrates the need for a network function that would provide a common, standards-based topology view to applications.  Such a topology collection/management/presentation function would be part of a wider framework that should also include policy management and orchestration.  The framework is shown in Figure 2.

                            +---------------+
                         +----------------+ |
                         |  Applications  |-+
                         +----------------+
                                 ^  Websockets, ReST, XMPP...
        +------------------------+-------------------------+
        |                        |                         |
  +------------+     +------------------------+     +-------------+
  |   Policy   |<----|    Topology Manager    |---->|Orchestration|
  |  Manager   |     | +--------------------+ |     |   Manager   |
  +------------+     | |Topology Information| |     +-------------+
                     | |        Model       | |
                     | +--------------------+ |
                     +------------------------+
                           ^     ^      ^
   Websockets, ReST, XMPP  #     |      *   Websockets, ReST, XMPP
        ####################     |      ******************
        #                        |                       *
  +------------+                 |                 +------------+
  | Statistics |                 |                 | Inventory  |
  | Collection |                 |                 | Collection |
  +------------+                 |                 +------------+
        ^                        | I2RS, NETCONF, SNMP,  ^
        |                        | TL1 ...               |
        +------------------------+-----------------------+
        |                        |                       |
+---------------+        +---------------+        +---------------+
|Network Element|        |Network Element|        |Network Element|
| +-----------+ |        | +-----------+ |        | +-----------+ |
| |Information| |<-LLDP->| |Information| |<-LMP-->| |Information| |
| |   Model   | |        | |   Model   | |        | |   Model   | |
| +-----------+ |        | +-----------+ |        | +-----------+ |
+---------------+        +---------------+        +---------------+

                     Figure 2: Topology Manager

The following sections describe in detail the Topology Manager, Policy Manager and Orchestration Manager functions.

3.2.  The Topology Manager

The Topology Manager is a function that collects topological information from a variety of sources in the network and provides a cohesive, abstracted view of the network topology to clients and/or higher-layer applications.  The topology view is based on a standards-based, normalized topology information model.

Topology information sources can be:

   o  The "live" Layer 3 IGP, or an equivalent mechanism, that provides information about links that are components of the active topology.  Active topology links are present in the Link State Database (LSDB) and are eligible for forwarding.  Layer 3 IGP information can be obtained by listening to IGP updates flooded through an IGP domain, or from Network Elements.

   o  The Inventory Collection system, which provides information for network components not visible within the Layer 3 IGP's LSDB (i.e.: links or nodes, or properties of those links or nodes, at lower layers of the network).

   o  The Statistics Collection system, which provides traffic information, such as traffic demands or link utilizations.

The Topology Manager provides topology information to Clients or higher-layer applications via a northbound interface, such as ReST, Websockets, or XMPP.

The Topology Manager will contain topology information for multiple layers of the network: Transport, Ethernet and IP/MPLS, as well as information for multiple Layer 3 IGP areas and multiple Autonomous Systems (ASes).
The topology information can be used by higher-level applications, such as Traffic Engineering, Capacity Planning and Provisioning.  Such applications are typically used to design, augment and optimize IP/MPLS networks, and they require knowledge of the underlying Shared Risk Link Groups (SRLGs) within the Transport and/or Ethernet layers of the network.

The Topology Manager must be able to discover Network Elements that are not visible in the "live" L3 IGP's Link State Database (LSDB).  Such Network Elements can either be inactive, or active but invisible in the L3 LSDB (e.g.: L2 Ethernet switches, ROADMs, or Network Elements in an underlying transport network).

In addition to static inventory information collected from the Inventory Manager, the Topology Manager will also collect dynamic inventory information.  For example, Network Elements utilize various link-layer discovery protocols (e.g.: LLDP, LMP) to automatically identify adjacent nodes and ports.  This information can be pushed to, or pulled by, the Topology Manager in order to create an accurate representation of the physical topology of the network.

3.3.  The Policy Manager

The Policy Manager is the function used to enforce and program policies applicable to network component/attribute data.  Policy enforcement is a network-wide function that can be consumed by various services and Network Elements, including the Inventory Manager and the Topology Manager.  Such policies are likely to encompass the following:

   o  Logical Identifier Numbering Policies

      *  Correlation of IP prefix to link based on link type, such as P-P, P-PE, or PE-CE

      *  Correlation of IP prefix to IGP area

      *  Layer-2 VLAN ID assignments, etc.

   o  Routing Configuration Policies

      *  OSPF Area or IS-IS Net-ID to node (type) correlation

      *  BGP routing policies, such as nodes designated for injection of aggregate routes, max-prefix policies, or AFI/SAFI to node correlation

   o  Security Policies

      *  Access Control Lists

      *  Rate-limiting

   o  Network Component/Attribute Data Access Policies: a Client's (upper-layer application's) access to Network Components/Attributes contained in the Inventory Manager, as well as to Policies contained within the Policy Manager itself.

The Policy Manager function may be either a sub-component of the Topology or Orchestration Manager, or a standalone component.

3.4.  Orchestration Manager

The Orchestration Manager provides the ability to stitch together resources (such as compute or storage) and/or services with the network, or vice-versa.  Examples of "generic" services may include the following:

   o  Application-specific Load Balancing

   o  Application-specific Network (Bandwidth) Optimization

   o  Application- or End-User-specific Class-of-Service

   o  Application- or End-User-specific Network Access Control

The above services could then enable coupling of resources with the network to realize, for example, the following:

   o  Network Optimization: creation and migration of Virtual Machines (VMs) so that they are adjacent to storage in the same data center.

   o  Network Access Control: coupling of available (generic) compute nodes at the appropriate point of the data path to perform firewall, NAT, and similar functions on data traffic.

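The managers described above exchange data expressed in the normalized topology information model (Section 2).  That model is not defined by this document; the following Python sketch is purely illustrative of the kinds of objects and attributes such a model might carry, and all class and field names are assumptions.

<CODE BEGINS>
# Hypothetical sketch of a normalized, multi-layer topology information
# model.  The class and field names are illustrative only; no such model
# is defined by this document.
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class Node:
    node_id: str                 # stable, vendor-independent identifier
    layer: str                   # e.g. "otn", "ethernet", "ip-mpls"
    active: bool = True          # False for provisioned-but-not-activated inventory


@dataclass
class Link:
    link_id: str
    src_node: str
    dst_node: str
    layer: str
    admin_state: str = "up"      # up / down / admin-down / testing / dormant
    srlgs: List[int] = field(default_factory=list)   # fate-sharing groups from lower layers
    te_metric: Optional[int] = None
    reservable_bw_bps: Optional[float] = None
    supporting_links: List[str] = field(default_factory=list)  # inter-layer relationship


@dataclass
class Topology:
    topology_id: str
    layer: str
    nodes: List[Node] = field(default_factory=list)
    links: List[Link] = field(default_factory=list)
<CODE ENDS>

A Client that understands such a model could consume Transport-, Ethernet- and IP/MPLS-layer views through the same northbound interface, regardless of which vendor's Network Elements supplied the underlying data.
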
The Orchestration Manager will exchange information models with the Topology Manager, the Policy Manager and the Inventory Manager.  In addition, the Orchestration Manager must support publish and subscribe interactions with those functions, as well as with Clients.

The Orchestration Manager may receive requests from Clients (applications) for immediate access to specific network resources.  Alternatively, Clients may ask to schedule future appointments that reserve the appropriate network resources when, for example, a special event is scheduled to start and end.

Finally, the Orchestration Manager should have the flexibility to determine which network layer(s) may be able to satisfy a given Client's request, based on constraints received from the Client as well as constraints learned from the Policy and Topology Managers.  This could allow the Orchestration Manager to, for example, satisfy a given Client's service request using the optical network (via an OTN service) if there is insufficient IP/MPLS capacity at the specific moment the Client's request is received.

The operational model is shown in the following figure.

   TBD.

               Figure 3: Overall Reference Model

4.  Use Cases

4.1.  Virtualized Views of the Network

4.1.1.  Capacity Planning and Traffic Engineering

4.1.1.1.  Present Mode of Operation

When performing Traffic Engineering and/or Capacity Planning of an IP/MPLS network, it is important to account for the SRLGs that exist within the underlying physical, optical and Ethernet networks.  Currently, it is quite common to take "snapshots", at infrequent intervals, of the inventory data of the underlying physical- and optical-layer networks.  This inventory data is then normalized to conform to the data import requirements of sometimes separate Traffic Engineering and/or Capacity Planning tools.  This process is error-prone and inefficient, particularly as the underlying network inventory information changes due to the introduction of new network element makes or models, line cards, capabilities, etc.

The present mode of operation is inefficient with respect to Software Development, Capacity Planning and Traffic Engineering resources.  Due to this inefficiency, the underlying physical network inventory information (containing SRLG information and the corresponding critical network assets) is not updated frequently, thus exposing the network to, at minimum, inefficient utilization and, at worst, critical impairments.

4.1.1.2.  Proposed Mode of Operation

First, the Inventory Manager will extract inventory information from network elements and associated inventory databases.  Information extracted from inventory databases will include physical cross-connects and other information that is not available directly from network elements.  Standards-based information models and an associated vocabulary will be required to represent not only components inside, or directly connected to, network elements, but also components of a physical-layer path (e.g.: cross-connect panels).  The inventory data will comprise the complete set of inactive and active network components.

Second, the Topology Manager will augment the inventory information with topology information obtained from Network Elements and other sources, and provide an IGP-based view of the active topology of the network.
The Topology Manager will also include non-topology dynamic information from IGPs, such as Available Bandwidth, Reserved Bandwidth, Traffic Engineering (TE) attributes associated with links, etc.

Finally, the Statistics Collector will collect utilization statistics from Network Elements, and archive and aggregate them in a statistics data warehouse.  Selected statistics and other dynamic data may be distributed through IGP routing protocols ([I-D.ietf-isis-te-metric-extensions] and [I-D.ietf-ospf-te-metric-extensions]) and then collected at the Statistics Collection function via BGP-LS ([I-D.ietf-idr-ls-distribution]).  Statistics summaries will then be exposed in normalized information models to the Topology Manager, which can use them, for example, to build trended utilization models that forecast expected changes to physical and logical network components.

It is important to recognize that extracting topology information solely from Network Elements and IGPs (IS-IS TE or OSPF TE) is inadequate for this use case.  First, IGPs only expose the active components (e.g., vertices of the SPF tree) of the IP network, and are not aware of "hidden" or inactive interfaces within IP/MPLS network elements, such as unused line cards or ports.  IGPs are also not aware of components that reside at a layer lower than IP/MPLS, such as Ethernet switches or optical transport systems.  Second, IGPs only convey SRLG information that has first been applied within a router's configuration, either manually or programmatically.  As mentioned previously, this SRLG information in the IP/MPLS network is subject to being infrequently updated and, as a result, may inadequately account for critical, underlying network fate-sharing properties that are necessary to properly design resilient circuits and/or paths through the network.

Once the Topology Manager has assembled a normalized view of the topology and the metadata associated with each component of the topology, it can expose this information via its northbound API to the Capacity Planning and Traffic Engineering applications.  The applications only require generalized information about the nodes and links that comprise the network, e.g.: links used to interconnect nodes, SRLG information (from the underlying network), utilization rates of each link over some period of time, etc.

Note that any client/application that understands the Topology Manager's northbound API and its topology information model can communicate with the Topology Manager.  Note also that topology information may be provided by Network Elements from different vendors, which may use different information models.  If a Client wanted to retrieve topology information directly from Network Elements, it would have to translate and normalize these different representations itself.

A Traffic Engineering application may run a variety of CSPF algorithms that create a list of TE tunnels that globally optimize the packing efficiency of physical links throughout the network.  The TE tunnels are then programmed into the network, either directly or through a controller.  Programming of TE tunnels into the network is outside the scope of this document.

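As a hedged illustration of how a Traffic Engineering or Capacity Planning application might consume the Topology Manager's northbound API, the sketch below fetches a normalized topology from a hypothetical REST endpoint, builds a graph, and checks SRLG diversity and link utilization.  The endpoint, schema and threshold are assumptions; the graph routines come from the networkx library.

<CODE BEGINS>
# Illustrative only: a Traffic Engineering / Capacity Planning client using
# a hypothetical Topology Manager REST endpoint and the networkx library.
import networkx as nx
import requests

TOPOLOGY_API = "https://topology-manager.example.net/api/topology/ip-mpls"  # hypothetical


def fetch_topology_graph():
    """Build a graph whose edges carry SRLG and utilization attributes."""
    topo = requests.get(TOPOLOGY_API, timeout=10).json()   # assumed normalized model
    graph = nx.Graph()
    for link in topo["links"]:
        graph.add_edge(
            link["src_node"],
            link["dst_node"],
            srlgs=set(link.get("srlgs", [])),
            utilization=link.get("utilization", 0.0),
        )
    return graph


def srlg_disjoint(graph, path_a, path_b):
    """True if no SRLG is shared between the two paths (lists of node IDs)."""
    def path_srlgs(path):
        groups = set()
        for u, v in zip(path, path[1:]):
            groups |= graph.edges[u, v]["srlgs"]
        return groups
    return not (path_srlgs(path_a) & path_srlgs(path_b))


def congested_links(graph, threshold=0.8):
    """Links whose trended utilization exceeds a planning threshold."""
    return [(u, v) for u, v, d in graph.edges(data=True) if d["utilization"] > threshold]
<CODE ENDS>
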
A Capacity Planning application may run a variety of algorithms whose result is a list of new inventory required for purchase or redeployment, as well as the associated work orders for field technicians to augment the network for expected growth.

4.1.2.  Services Provisioning

Beyond Capacity Planning and Traffic Engineering applications, having a normalized view of just the IP/MPLS layer of the network is still very important for other mission-critical applications, such as Security Auditing and IP/MPLS Services Provisioning (e.g.: L2VPN, L3VPN).  With respect to the latter, these types of applications should not need a detailed understanding of, for example, SRLG information, assuming that the underlying MPLS tunnel LSPs are known to account for the resiliency requirements of all services that ride over them.  Nonetheless, for both types of applications it is critical to have a common and up-to-date normalized view of the IP/MPLS network in order to, for example, instantiate new services at optimal locations in the network, or to validate proper ACL configuration to protect the associated routing, signaling and management protocols on the network.

A VPN Service Provisioning application must perform the following resource selection operations:

   o  Identify Services PEs in all markets/cities where the customer has indicated they want service.

   o  Identify one or more existing Services PEs in each city with connectivity to the access network(s) (e.g.: SONET/TDM) used to deliver the PE-CE tail circuits to the Services PE.

   o  Determine that the Services PE has available capacity, on both the PE-CE access interface and its uplinks, to terminate the tail circuit.

The VPN Provisioning application would iteratively query the Topology Manager to narrow the scope of resources down to the set of Services PEs with the appropriate uplink bandwidth plus the access circuit capability and capacity to realize the requested VPN service.  Once the VPN Provisioning application has a candidate list of resources, it requests programming of the Services PEs and associated access circuits to set up the customer's VPN service in the network.  Programming of Services PEs is outside the scope of this document.

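The iterative narrowing described above can be sketched as follows; the API paths, node attributes and roles are hypothetical and only illustrate the kind of query a VPN Provisioning application might issue against the Topology Manager.

<CODE BEGINS>
# Illustrative only: narrowing candidate Services PEs for a VPN order using
# data from a hypothetical Topology Manager API.
import requests

TOPOLOGY_API = "https://topology-manager.example.net/api"      # hypothetical


def candidate_services_pes(city, access_type, tail_bw_bps):
    """Return Services PEs in a city that can terminate the requested tail circuit."""
    nodes = requests.get(
        f"{TOPOLOGY_API}/nodes",
        params={"role": "services-pe", "city": city},
        timeout=10,
    ).json()

    candidates = []
    for pe in nodes:
        # 1. The PE must reach the access network (e.g. SONET/TDM) used for PE-CE tails.
        if access_type not in pe.get("access_networks", []):
            continue
        # 2. The PE-CE access interface and the PE uplinks must have spare capacity.
        if pe.get("free_access_bw_bps", 0) < tail_bw_bps:
            continue
        if pe.get("free_uplink_bw_bps", 0) < tail_bw_bps:
            continue
        candidates.append(pe["node_id"])
    return candidates


# The candidate PEs for each ordered city would then be handed to the
# provisioning workflow, which is outside the scope of this document.
<CODE ENDS>
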
4.1.3.  Troubleshooting & Monitoring

Once the Topology Manager has a normalized view of several layers of the network, it can expose a rich set of data to network operators who are performing diagnosis, troubleshooting and repairs on the network.  Specifically, there is a need to (rapidly) assemble a current, accurate and comprehensive network diagram of an L2VPN or L3VPN service for a particular customer when either: a) attempting to diagnose a service fault/error; or, b) attempting to augment the customer's existing service.  Information that may be assembled into a comprehensive picture could include physical and logical components related specifically to that customer's service, e.g.: VLANs or channels used by the PE-CE access circuits, CoS policies, historical PE-CE circuit utilization, etc.  The Topology Manager would assemble this information from the network elements and other data sources in, and associated with, the network, and would present it in a vendor-independent data model to applications, allowing the operator (or, potentially, the customer through an SP's Web portal) to visualize the information.

4.2.  Virtual Network Topology Manager (VNTM)

The Virtual Network Topology Manager (VNTM) is in charge of managing the Virtual Network Topology (VNT), as defined in [RFC5623].  The VNT is defined in [RFC5212] as a set of one or more LSPs in one or more lower-layer networks that provides information for efficient path handling in an upper-layer network.

Maintenance of the virtual topology is a complicated task.  The VNTM has to decide which nodes are to be interconnected in the lower layer to fulfill the resource requirements of the upper layer; that is, it must create a topology that copes with all of the demands of the upper layer without wasting resources in the underlying network.  Once the decision is made, actions have to be taken on the network elements of the affected layers so that the new LSPs are provisioned.  Moreover, the VNTM has to release unwanted resources so that they become available in the lower-layer network for other uses.

The VNTM does not have to solve all of the above problems in all scenarios.  In the PCE-VNTM cooperation model defined in [RFC5623], the PCE computes the paths in the higher layer and, when there are not enough resources in the VNT, the PCE asks the VNTM for a new path in the VNT.  The VNTM checks the PCE's request against internal policies to determine whether or not the request can be acted upon.  The VNTM then requests the egress node in the upper layer to set up the path in the lower layer.  However, the VNTM can also actively modify the VNT, based on policies and network status, without waiting for an explicit PCE request.

Regarding the provisioning phase, the VNTM may have to talk directly with an NMS to set up the connection [RFC5623], or it can delegate this function to the provisioning manager [I-D.farrkingel-pce-abno-architecture].

The aim of this document is not to categorize all implementation options for the VNTM, but to show that the VNTM needs to retrieve topological information in order to perform its functions.  The VNTM may require the topologies of the lower and/or upper layers, and even the inter-layer relationship between the upper- and lower-layer nodes, to decide on the optimal VNT.

4.3.  Path Computation Element (PCE)

As described in [RFC4655], a PCE can be used to compute MPLS-TE paths within a "domain" (such as an IGP area) or across multiple domains (such as a multi-area AS, or multiple ASes).

   o  Within a single area, the PCE offers enhanced computational power that may not be available on individual routers, sophisticated policy control and algorithms, and coordination of computation across the whole area.

   o  If a router wants to compute an MPLS-TE path across IGP areas, its own TED lacks visibility of the complete topology.  That means that the router cannot determine the end-to-end path, and cannot even select the right exit router (Area Border Router, ABR) for an optimal path.  This is an issue for large-scale networks that need to segment their core networks into distinct areas but still want to take advantage of MPLS-TE.

The PCE is a computation server that may have visibility into more than one IGP area or AS, or that may cooperate with other PCEs to perform distributed path computation.  The PCE needs access to the topology and the Traffic Engineering Database (TED) for the area(s) it serves, but [RFC4655] does not describe how this is achieved.  Many implementations make the PCE a passive participant in the IGP so that it can learn the latest state of the network, but this may be sub-optimal when the network is subject to a high degree of churn, or when the PCE is responsible for multiple areas.

The following figure shows how a PCE can get its TED information using the Topology Manager.

   +----------+
   |  -----   |  TED synchronization via Topology API
   | | TED |<-+------------------------------------------+
   |  -----   |                                          |
   |    |     |                                          |
   |    |     |                                          |
   |    v     |                                          |
   |  -----   |                                          |
   | | PCE |  |                                          |
   |  -----   |                                          |
   +----------+                                          |
        ^                                                |
        | Request/                                       |
        | Response                                       |
        v                                                |
Service    +----------+  Signaling   +----------+   +----------+
Request    | Head-End |  Protocol    | Adjacent |   | Topology |
  -------->|   Node   |<------------>|   Node   |   | Manager  |
           +----------+              +----------+   +----------+

      Figure 4: Topology use case: Path Computation Element

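As a hypothetical sketch of the TED synchronization shown in Figure 4, a PCE could perform an initial bulk retrieval of the TED and then subscribe to incremental updates.  The endpoints and message format below are assumptions and do not represent a defined protocol.

<CODE BEGINS>
# Illustrative only: TED synchronization between a PCE and a hypothetical
# Topology Manager using an initial fetch plus a WebSocket update stream.
import asyncio
import json

import requests
import websockets   # third-party "websockets" package

TED_URL = "https://topology-manager.example.net/api/ted"            # hypothetical
TED_UPDATES = "wss://topology-manager.example.net/api/ted/updates"  # hypothetical


async def sync_ted(ted):
    # Initial full synchronization of the Traffic Engineering Database.
    for link in requests.get(TED_URL, timeout=10).json()["links"]:
        ted[link["link_id"]] = link

    # Incremental updates: the PCE no longer needs to passively join the IGP.
    async with websockets.connect(TED_UPDATES) as stream:
        async for message in stream:
            update = json.loads(message)        # assumed {"op": ..., "link": {...}}
            if update["op"] == "remove":
                ted.pop(update["link"]["link_id"], None)
            else:
                ted[update["link"]["link_id"]] = update["link"]


if __name__ == "__main__":
    asyncio.run(sync_ted({}))
<CODE ENDS>
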
4.4.  ALTO Server

An ALTO Server [RFC5693] is an entity that generates an abstracted network topology and provides it to network-aware applications over a web-service-based API.  Example applications are peer-to-peer clients or trackers, or CDNs.  The abstracted network topology comes in the form of two maps: a Network Map that specifies the allocation of prefixes to PIDs, and a Cost Map that specifies the cost between the PIDs listed in the Network Map.  For more details, see [I-D.ietf-alto-protocol].

ALTO abstract network topologies can be auto-generated from the physical topology of the underlying network.  The generation would typically be based on policies and rules set by the operator.  Both prefix and TE data are required: prefix data is required to generate ALTO Network Maps, while TE (topology) data is required to generate ALTO Cost Maps.  Prefix data is originated and carried in BGP; TE data is originated and carried in an IGP.  The mechanism described in this document provides a single interface through which an ALTO Server can retrieve all of the necessary prefix and network topology data from the underlying network.  Note that an ALTO Server can use other mechanisms to get network data, for example, peering with multiple IGP and BGP speakers.

The following figure shows how an ALTO Server can get network topology information from the underlying network using the Topology API.

+--------+
| Client |<--+
+--------+   |
             |  ALTO      +--------+                  +----------+
+--------+   |  Protocol  |  ALTO  | Network Topology | Topology |
| Client |<--+------------| Server |<-----------------| Manager  |
+--------+   |            |        |                  |          |
             |            +--------+                  +----------+
+--------+   |
| Client |<--+
+--------+

               Figure 5: Topology use case: ALTO Server

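As a purely illustrative sketch, an ALTO Server could derive its Network Map and Cost Map from prefix and TE data retrieved through such an interface.  The grouping policy (one PID per origin node) and the cost function are invented for the example; a real deployment would apply operator-defined policies.

<CODE BEGINS>
# Illustrative only: deriving ALTO-style Network and Cost Maps from data
# obtained through a hypothetical Topology Manager API.
import requests

TOPOLOGY_API = "https://topology-manager.example.net/api"   # hypothetical


def build_network_map():
    """Group advertised prefixes into PIDs, here simply one PID per origin node."""
    prefixes = requests.get(f"{TOPOLOGY_API}/prefixes", timeout=10).json()
    network_map = {}
    for entry in prefixes:                   # assumed {"prefix": ..., "origin_node": ...}
        pid = f"pid-{entry['origin_node']}"
        network_map.setdefault(pid, {"ipv4": []})["ipv4"].append(entry["prefix"])
    return network_map


def build_cost_map(igp_distances):
    """Turn pairwise IGP distances between origin nodes into a PID-to-PID cost map."""
    cost_map = {}
    for (src, dst), metric in igp_distances.items():
        cost_map.setdefault(f"pid-{src}", {})[f"pid-{dst}"] = metric
    return cost_map
<CODE ENDS>
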
5.  Acknowledgements

The authors wish to thank Alia Atlas, Dave Ward, Hannes Gredler, and Stefano Previdi for their valuable contributions and feedback on this draft.

6.  IANA Considerations

This memo includes no request to IANA.

7.  Security Considerations

At the moment, the use cases covered in this document apply specifically to a single Service Provider or Enterprise network.  Therefore, network administrators should take appropriate precautions to ensure that adequate access controls exist, so that only internal applications and end-users have physical or logical access to the Topology Manager.  This should be similar to the precautions that are already taken by network administrators to secure their existing Network Management, OSS and BSS systems.

As this work evolves, it will be important to determine the appropriate granularity of access controls in terms of which individuals or groups may have read and/or write access to the various types of information contained within the Topology Manager.  It would be ideal if these access control mechanisms were centralized within the Topology Manager itself.

8.  References

8.1.  Normative References

   [RFC2119]  Bradner, S., "Key words for use in RFCs to Indicate Requirement Levels", BCP 14, RFC 2119, March 1997.

8.2.  Informative References

   [I-D.farrkingel-pce-abno-architecture]
              King, D. and A. Farrel, "A PCE-based Architecture for Application-based Network Operations", draft-farrkingel-pce-abno-architecture-05 (work in progress), July 2013.

   [I-D.ietf-alto-protocol]
              Alimi, R., Penno, R., and Y. Yang, "ALTO Protocol", draft-ietf-alto-protocol-17 (work in progress), July 2013.

   [I-D.ietf-idr-ls-distribution]
              Gredler, H., Medved, J., Previdi, S., Farrel, A., and S. Ray, "North-Bound Distribution of Link-State and TE Information using BGP", draft-ietf-idr-ls-distribution-03 (work in progress), May 2013.

   [I-D.ietf-isis-te-metric-extensions]
              Previdi, S., Giacalone, S., Ward, D., Drake, J., Atlas, A., and C. Filsfils, "IS-IS Traffic Engineering (TE) Metric Extensions", draft-ietf-isis-te-metric-extensions-00 (work in progress), June 2013.

   [I-D.ietf-ospf-te-metric-extensions]
              Giacalone, S., Ward, D., Drake, J., Atlas, A., and S. Previdi, "OSPF Traffic Engineering (TE) Metric Extensions", draft-ietf-ospf-te-metric-extensions-04 (work in progress), June 2013.

   [I-D.ietf-pce-stateful-pce]
              Crabbe, E., Medved, J., Minei, I., and R. Varga, "PCEP Extensions for Stateful PCE", draft-ietf-pce-stateful-pce-05 (work in progress), July 2013.

   [RFC4655]  Farrel, A., Vasseur, J., and J. Ash, "A Path Computation Element (PCE)-Based Architecture", RFC 4655, August 2006.

   [RFC5212]  Shiomoto, K., Papadimitriou, D., Le Roux, JL., Vigoureux, M., and D. Brungard, "Requirements for GMPLS-Based Multi-Region and Multi-Layer Networks (MRN/MLN)", RFC 5212, July 2008.

   [RFC5623]  Oki, E., Takeda, T., Le Roux, JL., and A. Farrel, "Framework for PCE-Based Inter-Layer MPLS and GMPLS Traffic Engineering", RFC 5623, September 2009.

   [RFC5693]  Seedorf, J. and E. Burger, "Application-Layer Traffic Optimization (ALTO) Problem Statement", RFC 5693, October 2009.

Authors' Addresses

   Jan Medved
   Cisco Systems, Inc.
   170 West Tasman Drive
   San Jose, CA 95134
   USA

   Email: jmedved@cisco.com

   Stefano Previdi
   Cisco Systems, Inc.
   170 West Tasman Drive
   San Jose, CA 95134
   USA

   Email: sprevidi@cisco.com

   Victor Lopez
   Telefonica I+D
   c/ Don Ramon de la Cruz 84
   Madrid 28006
   Spain

   Email: vlopez@tid.es

   Shane Amante