Internet Engineering Task Force                                J. Medved
Internet-Draft                                                S. Previdi
Intended status: Informational                       Cisco Systems, Inc.
Expires: April 23, 2014                                         V. Lopez
                                                          Telefonica I+D
                                                               S. Amante
                                                        October 20, 2013


                         Topology API Use Cases
                draft-amante-i2rs-topology-use-cases-01

Abstract

   This document describes use cases for gathering routing, forwarding
   and policy information (hereafter referred to as topology
   information) about the network.  It describes several applications
   that need to view the topology of the underlying physical or logical
   network.  This document further demonstrates a need for a "Topology
   Manager" and related functions that collect topology data from
   network elements and other data sources, coalesce the collected data
   into a coherent view of the overall network topology, and normalize
   the network topology view for use by clients -- namely, applications
   that consume topology information.

Status of This Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF).  Note that other groups may also distribute
   working documents as Internet-Drafts.  The list of current
   Internet-Drafts is at http://datatracker.ietf.org/drafts/current/.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."
   This Internet-Draft will expire on April 23, 2014.

Copyright Notice

   Copyright (c) 2013 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document.  Code Components extracted from this
   document must include Simplified BSD License text as described in
   Section 4.e of the Trust Legal Provisions and are provided without
   warranty as described in the Simplified BSD License.

Table of Contents

   1.  Introduction
     1.1.  Statistics Collection
     1.2.  Inventory Collection
     1.3.  Requirements Language
   2.  Terminology
   3.  The Orchestration, Collection & Presentation Framework
     3.1.  Overview
     3.2.  The Topology Manager
     3.3.  The Policy Manager
     3.4.  Orchestration Manager
   4.  Use Cases
     4.1.  Virtualized Views of the Network
       4.1.1.  Capacity Planning and Traffic Engineering
         4.1.1.1.  Present Mode of Operation
         4.1.1.2.  Proposed Mode of Operation
       4.1.2.  Services Provisioning
       4.1.3.  Troubleshooting & Monitoring
     4.2.  Virtual Network Topology Manager (VNTM)
     4.3.  Path Computation Element (PCE)
     4.4.  ALTO Server
   5.  Acknowledgements
   6.  IANA Considerations
   7.  Security Considerations
   8.  References
     8.1.  Normative References
     8.2.  Informative References
   Authors' Addresses

1.  Introduction

   In today's networks, a variety of applications, such as Traffic
   Engineering, Capacity Planning, Security Auditing or Services
   Provisioning (for example, Virtual Private Networks), have a common
   need to acquire and consume network topology information.
   Unfortunately, all of these applications are typically vertically
   integrated: each uses its own proprietary normalized view of the
   network and proprietary data collectors, interpreters and adapters,
   which speak a variety of protocols (SNMP, CLI, SQL, etc.) directly
   to network elements and to back-office systems.  While some of the
   topological information can be distributed using routing protocols,
   it is unfortunately not desirable for some of these applications to
   understand or participate in routing protocols.

   This approach is incredibly inefficient for several reasons.  First,
   developers must write duplicate 'network discovery' functions, which
   then become challenging to maintain over time, particularly when new
   equipment is first introduced to the network.  Second, since there
   is no common "vocabulary" to describe various components in the
   network, such as physical links, logical links, or IP prefixes, each
   application has its own data model.  To solve this, some solutions
   have distributed this information in the normalized form of routing
   distribution.
   However, this information still does not contain "inactive"
   topological information, and thus omits information considered to be
   part of a network's inventory.

   These limitations lead to applications being unable to easily
   exchange information with each other.  For example, applications
   cannot share changes with each other that are (to be) applied to the
   physical and/or logical network, such as installation of new
   physical links or deployment of security ACLs.  Each application
   must frequently poll network elements and other data sources to
   ensure that it has a consistent representation of the network so
   that it can carry out its particular domain-specific tasks.  In
   other cases, applications that cannot speak routing protocols must
   use proprietary CLI or other management interfaces, which represent
   the topological information in non-standard formats or, worse,
   non-standard semantic models.  Overall, the software architecture
   described above at best results in inefficient use of both software
   developer resources and network resources, and at worst results in
   some applications simply not having access to this information.

   Figure 1 is an illustration of how individual applications collect
   data from the underlying network.  Applications retrieve inventory,
   network topology, state and statistics information by communicating
   directly with underlying Network Elements as well as with
   intermediary proxies of the information.  In addition, applications
   transmit changes required of a Network Element's configuration
   and/or state directly to individual Network Elements, most commonly
   using CLI or Netconf.  It is important to note that the "data
   models" or semantics of this information contained within Network
   Elements are largely proprietary with respect to most configuration
   and state information, which is why a proprietary CLI is often the
   only choice to reflect changes in a NE's configuration or state.
   This remains the case even when standards-based mechanisms such as
   Netconf are used: these provide a standard syntax, but still often
   fall short due to the proprietary semantics associated with the
   internal representation of the information.

                        +--------------+
                        | Applications |-+
                        +--------------+ |
                          +--------------+
                           ^     ^     ^
            SQL, RPC, ReST #     |     * SQL, RPC, ReST
      ...##################      |      **************...
          #                      |                    *
    +------------+               |             +------------+
    | Statistics |               |             | Inventory  |
    | Collection |               |             | Collection |
    +------------+               |             +------------+
          ^                      | NETCONF, I2RS,     ^
          |                      | SNMP, CLI, TL1, ...|
    +-----+----------------------+--------------------+-----+
    |                            |                          |
   +---------------+        +---------------+        +---------------+
   |Network Element|        |Network Element|        |Network Element|
   | +-----------+ |        | +-----------+ |        | +-----------+ |
   | |Information| |<-LLDP->| |Information| |<-LMP-->| |Information| |
   | | DataModel | |        | | DataModel | |        | | DataModel | |
   | +-----------+ |        | +-----------+ |        | +-----------+ |
   +---------------+        +---------------+        +---------------+

            Figure 1: Applications getting topology data

   Figure 1 shows how current management interfaces such as NETCONF,
   SNMP, CLI, etc. are used to transmit or receive information to/from
   various Network Elements.  The figure also shows that protocols such
   as LLDP and LMP participate in topology discovery, specifically to
   discover adjacent network elements.  The following sections describe
   the "Statistics Collection" and "Inventory Collection" functions.

1.1.  Statistics Collection

   In Figure 1, "Statistics Collection" is a dedicated infrastructure
   that collects statistics from Network Elements.
   It periodically polls Network Elements for octets transferred per
   interface, per LSP, etc.  Collected statistics are stored and
   collated within a statistics data warehouse.  Applications typically
   query the statistics data warehouse, rather than polling Network
   Elements directly, to get the appropriate set of link utilization
   figures for their analysis.

1.2.  Inventory Collection

   "Inventory Collection" is a network function responsible for
   collecting component and state information directly from Network
   Elements, as well as for storing inventory information about
   physical network assets that are not retrievable from Network
   Elements.  The collected data is hereafter referred to as the
   'Inventory Asset Database'.  Examples of information collected from
   Network Elements are: interface up/down status, the type of SFP/XFP
   optics inserted into a physical port, etc.

   The Inventory Collection function may use SNMP and CLI to acquire
   inventory information from Network Elements.  The information housed
   in the Inventory Manager is retrieved by applications via a variety
   of protocols: SQL, RPC, REST, etc.

   Inventory information retrieved from Network Elements is
   periodically updated in the Inventory Collection system to reflect
   changes in the physical and/or logical network assets.  The polling
   interval used to retrieve updated information varies depending on
   scaling constraints of the Inventory Collection systems and the
   intervals at which changes to the physical and/or logical assets are
   expected to occur.
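   The collection flow described above is not specified by this
   document; as a rough illustration only, the following Python sketch
   (all names, record fields and the poll stub are hypothetical) shows
   an Inventory Collection loop that polls Network Elements and merges
   the results into an 'Inventory Asset Database' that applications
   query instead of the elements themselves.

```python
import time
from dataclasses import dataclass, field
from typing import Dict, List, Optional

@dataclass
class InterfaceRecord:
    name: str
    oper_status: str            # "up" / "down"
    optic_type: Optional[str]   # e.g. "SFP" or "XFP"; None if port empty

@dataclass
class InventoryAssetDatabase:
    # keyed by Network Element name, then by interface name
    assets: Dict[str, Dict[str, InterfaceRecord]] = field(default_factory=dict)
    last_polled: Dict[str, float] = field(default_factory=dict)

    def update(self, ne_name: str, interfaces: List[InterfaceRecord]) -> None:
        # Replace this NE's records and note when they were refreshed.
        self.assets[ne_name] = {i.name: i for i in interfaces}
        self.last_polled[ne_name] = time.time()

def poll_network_element(ne_name: str) -> List[InterfaceRecord]:
    """Stand-in for an SNMP/CLI poll of one Network Element."""
    return [InterfaceRecord("ge-0/0/0", "up", "SFP"),
            InterfaceRecord("ge-0/0/1", "down", None)]

db = InventoryAssetDatabase()
for ne in ("pe1", "pe2"):
    db.update(ne, poll_network_element(ne))

# Applications query the database rather than the Network Elements:
assert db.assets["pe1"]["ge-0/0/0"].optic_type == "SFP"
```

   The polling interval of such a loop would be tuned per the scaling
   considerations described above.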
   Examples of changes in network inventory that need to be learned by
   the Inventory Collection function are as follows:

   o  Discovery of new Network Elements.  These elements may or may not
      be actively used in the network (i.e.: provisioned but not yet
      activated).

   o  Insertion or removal of line cards or other modules, such as
      optics modules, during service or equipment provisioning.

   o  Changes made to a specific Network Element through a management
      interface by a field technician.

   o  Indication of an NE's physical location and associated cable run
      list, at the time of installation.

   o  Insertion or removal of cables that results in dynamic discovery
      of a new or lost adjacent neighbor, etc.

1.3.  Requirements Language

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
   document are to be interpreted as described in RFC 2119 [RFC2119].

2.  Terminology

   The following briefly defines some of the terminology used within
   this document.

   Inventory Manager is a function that collects Network Element
   inventory and state information directly from Network Elements and
   from associated offline inventory databases.  Inventory information
   may only be visible at a specific network layer; for example, a
   physical link is visible within the IGP, but a Layer-2 switch
   through which the physical link traverses is unknown to the Layer-3
   IGP.

   Policy Manager is a function that attaches metadata to network
   components/attributes.
   Such metadata may include security, routing, L2 VLAN ID, IP
   numbering, etc. policies, which enable the Topology Manager to:

   *  Assemble a normalized view of the network for clients (or
      upper-layer applications)

   *  Allow clients (or upper-layer applications) access to information
      collected from various network layers and/or network components

   The Policy Manager function may be a sub-component of the Topology
   Manager or it may be a standalone function.

   Topology Manager is a function that collects topological information
   from a variety of sources in the network and provides a normalized
   view of the network topology to clients and/or higher-layer
   applications.

   Orchestration Manager is a function that stitches together
   resources, such as compute or storage, and/or services with the
   network or vice-versa.  To realize a complete service, the
   Orchestration Manager relies on capabilities provided by the other
   "Managers" listed above.

   Normalized Topology Information Model is an open, standards-based
   information model of the network topology.
   Information Model Abstraction is the notion that one is able to
   represent the same set of elements in an information model at
   different levels of "focus", in order to limit the amount of
   information exchanged when conveying the model.

   Multi-Layer Topology: Topology is commonly referred to using the OSI
   protocol layering model.  For example, Layer 3 represents routed
   topologies that typically use IPv4 or IPv6 addresses.  It is
   envisioned that, eventually, multiple layers of the network may be
   represented in a single, normalized view of the network to certain
   applications (i.e.: Capacity Planning, Traffic Engineering, etc.)

   Network Element (NE) refers to a network device that typically is
   addressable (but not always), or a host.  It is sometimes referred
   to as a 'Node'.

   Links: Every NE contains at least one link.  Links are used to
   connect the NE to other NEs in the network.  Links may be in a
   variety of states, such as up, down, administratively down,
   internally testing, or dormant.  Links are often synonymous with
   network ports on NEs.

3.  The Orchestration, Collection & Presentation Framework

3.1.  Overview

   Section 1 demonstrates the need for a network function that would
   provide a common, standards-based topology view to applications.
   Such a topology collection/management/presentation function would be
   part of a wider framework that should also include policy management
   and orchestration.  The framework is shown in Figure 2.

                        +--------------+
                        | Applications |-+
                        +--------------+ |
                          +--------------+
                                 ^
                  Websockets, ReST, XMPP, I2RS, ...
     +------------------------+-------------------------+
     |                        |                         |
   +-------------+  +---------------------+  +-------------+
   |   Policy    |<-|  Topology Manager   |->|Orchestration|
   |   Manager   |  | +-----------------+ |  |   Manager   |
   +-------------+  | |    Topology     | |  +-------------+
                    | |Information Model| |
                    | +-----------------+ |
                    +---------------------+
                       ^      ^      ^
  Websockets, ReST, XMPP #    |    * Websockets, ReST, XMPP
      ...################     |     *****************...
          #                   |                     *
    +------------+            |              +------------+
    | Statistics |            |              | Inventory  |
    | Collection |            |              | Collection |
    +------------+            |              +------------+
          ^                   | I2RS, NETCONF,     ^
          |                   | SNMP, TL1 ...      |
    +-----+-------------------+--------------------+-----+
    |                         |                          |
   +---------------+        +---------------+        +---------------+
   |Network Element|        |Network Element|        |Network Element|
   | +-----------+ |        | +-----------+ |        | +-----------+ |
   | |Information| |<-LLDP->| |Information| |<-LMP-->| |Information| |
   | | DataModel | |        | | DataModel | |        | | DataModel | |
   | +-----------+ |        | +-----------+ |        | +-----------+ |
   +---------------+        +---------------+        +---------------+

                      Figure 2: Topology Manager

   The following sections describe in detail the Topology Manager,
   Policy Manager and Orchestration Manager functions.

3.2.  The Topology Manager

   The Topology Manager is a function that collects topological
   information from a variety of sources in the network and provides a
   cohesive, abstracted view of the network topology to clients and/or
   higher-layer applications.
   The topology view is based on a standards-based, normalized topology
   information model.  Topology information sources can be:

   o  The "live" Layer 3 IGP, or an equivalent mechanism, that provides
      information about links that are components of the active
      topology.  Active topology links are present in the Link State
      Database (LSDB) and are eligible for forwarding.  Layer 3 IGP
      information can be obtained by listening to IGP updates flooded
      through an IGP domain, or from Network Elements.

   o  The Inventory Collection system, which provides information for
      network components not visible within the Layer 3 IGP's LSDB
      (i.e.: links or nodes, or properties of those links or nodes, at
      lower layers of the network).

   o  The Statistics Collection system, which provides traffic
      information, such as traffic demands or link utilizations.

   The Topology Manager provides topology information to Clients or
   higher-layer applications via a northbound interface, using
   protocols such as ReST, Websockets, or XMPP.
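   To make the combination of the three sources concrete, the following
   Python sketch (all structures and link identifiers are invented for
   illustration; this is not a normative model) shows how a Topology
   Manager might merge LSDB, inventory and statistics data into one
   normalized link view, marking inventory-only links as inactive.

```python
# Hypothetical merge of the three topology sources described above:
# the live IGP LSDB, the Inventory Collection system, and the
# Statistics Collection system.

def build_topology_view(lsdb_links, inventory_links, link_stats):
    """Return {link_id: record}.  Links present in the LSDB are
    'active'; inventory-only links (e.g. provisioned but unused
    ports) are 'inactive'."""
    view = {}
    for link_id, attrs in inventory_links.items():
        view[link_id] = {
            "attrs": attrs,
            "state": "active" if link_id in lsdb_links else "inactive",
            "utilization": link_stats.get(link_id),  # None if unknown
        }
    return view

lsdb = {"pe1:ge-0/0/0->pe2:ge-0/0/0": {"metric": 10}}
inventory = {
    "pe1:ge-0/0/0->pe2:ge-0/0/0": {"layer": "L3"},
    "pe1:ge-0/0/1->pe3:ge-0/0/1": {"layer": "L3"},  # provisioned, unused
}
stats = {"pe1:ge-0/0/0->pe2:ge-0/0/0": 0.42}

view = build_topology_view(lsdb, inventory, stats)
assert view["pe1:ge-0/0/1->pe3:ge-0/0/1"]["state"] == "inactive"
```

   A client retrieving this view over the northbound interface would
   see both active and inactive components, which the IGP alone cannot
   provide.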
   The Topology Manager will contain topology information for multiple
   layers of the network (Transport, Ethernet and IP/MPLS), as well as
   information for multiple Layer 3 IGP areas and multiple Autonomous
   Systems (ASes).  The topology information can be used by
   higher-level applications, such as Traffic Engineering, Capacity
   Planning and Provisioning.  Such applications are typically used to
   design, augment and optimize IP/MPLS networks, and require knowledge
   of underlying Shared Risk Link Groups (SRLGs) within the Transport
   and/or Ethernet layers of the network.

   The Topology Manager must be able to discover Network Elements that
   are not visible in the "live" L3 IGP's Link State Database (LSDB).
   Such Network Elements can either be inactive, or active but
   invisible in the L3 LSDB (e.g.: L2 Ethernet switches, ROADMs, or
   Network Elements that are part of an underlying transport network).

   In addition to static inventory information collected from the
   Inventory Manager, the Topology Manager will also collect dynamic
   inventory information.
   For example, Network Elements utilize various link-layer discovery
   protocols (i.e.: LLDP, LMP, etc.) to automatically identify adjacent
   nodes and ports.  This information can be pushed to, or pulled by,
   the Topology Manager in order to create an accurate representation
   of the physical topology of the network.

3.3.  The Policy Manager

   The Policy Manager is the function used to enforce and program
   policies applicable to network component/attribute data.  Policy
   enforcement is a network-wide function that can be consumed by
   various Network Elements and services, including the Inventory
   Manager, the Topology Manager or other Network Elements.  Such
   policies are likely to encompass the following:

   o  Logical Identifier Numbering Policies

      *  Correlation of IP prefix to link based on link type, such as
         P-P, P-PE, or PE-CE

      *  Correlation of IP Prefix to IGP Area

      *  Layer-2 VLAN ID assignments, etc.

   o  Routing Configuration Policies

      *  OSPF Area or IS-IS Net-ID to Node (Type) Correlation

      *  BGP routing policies, such as nodes designated for injection
         of aggregate routes, max-prefix policies, or AFI/SAFI to node
         correlation.

   o  Security Policies

      *  Access Control Lists

      *  Rate-limiting

   o  Network Component/Attribute Data Access Policies

      *  A Client's (upper-layer application's) access to Network
         Components/Attributes contained in the "Inventory Manager", as
         well as to Policies contained within the "Policy Manager"
         itself.

   The Policy Manager function may be either a sub-component of the
   Topology or Orchestration Manager, or a standalone component.

3.4.  Orchestration Manager
   The Orchestration Manager provides the ability to stitch together
   resources (such as compute or storage) and/or services with the
   network, or vice-versa.  Examples of 'generic' services may include
   the following:

   o  Application-specific Load Balancing

   o  Application-specific Network (Bandwidth) Optimization

   o  Application or End-User specific Class-of-Service

   o  Application or End-User specific Network Access Control

   The above services could then enable coupling of resources with the
   network to realize the following:

   o  Network Optimization: Creation and Migration of Virtual Machines
      (VMs) so they are adjacent to storage in the same Data Center.

   o  Network Access Control: Coupling of available (generic) compute
      nodes within the appropriate point of the data-path to perform
      firewall, NAT, etc. functions on data traffic.

   The Orchestration Manager will exchange information models with the
   Topology Manager, the Policy Manager and the Inventory Manager.  In
   addition, the Orchestration Manager must support publish and
   subscribe capabilities to those functions, as well as to Clients.

   The Orchestration Manager may receive requests from Clients
   (applications) for immediate access to specific network resources.
   However, Clients may also request to schedule future appointments to
   reserve appropriate network resources when, for example, a special
   event is scheduled to start and end.  Finally, the Orchestration
   Manager should have the flexibility to determine what network
   layer(s) may be able to satisfy a given Client's request, based on
   constraints received from the Client as well as constraints learned
   from the Policy and Topology Managers.
   This could allow the Orchestration Manager to, for example, satisfy
   a given service request for a given Client using the optical network
   (via an OTN service) if there is insufficient IP/MPLS capacity at
   the specific moment the Client's request is received.

   The operational model is shown in the following figure.

   TBD.

                  Figure 3: Overall Reference Model

4.  Use Cases

4.1.  Virtualized Views of the Network

4.1.1.  Capacity Planning and Traffic Engineering

4.1.1.1.  Present Mode of Operation

   When performing Traffic Engineering and/or Capacity Planning of an
   IP/MPLS network, it is important to account for SRLGs that exist
   within the underlying physical, optical and Ethernet networks.

   Currently, it is quite common to take "snapshots", at infrequent
   intervals, that comprise the inventory data of the underlying
   physical and optical layer networks.  This inventory data then needs
   to be massaged or normalized to conform to the data import
   requirements of sometimes separate Traffic Engineering and/or
   Capacity Planning tools.  This process is error-prone and
   inefficient, particularly as the underlying network inventory
   information changes due to the introduction of new network element
   makes or models, line cards, capabilities, etc.  The present mode of
   operation is inefficient with respect to Software Development,
   Capacity Planning and Traffic Engineering resources.
   Due to this inefficiency, the underlying physical network inventory
   information (containing SRLG and corresponding critical network
   asset information) is not updated frequently, thus exposing the
   network to, at minimum, inefficient utilization and, at worst,
   critical impairments.

4.1.1.2.  Proposed Mode of Operation

   First, the Inventory Manager will extract inventory information from
   network elements and associated inventory databases.  Information
   extracted from inventory databases will include physical
   cross-connects and other information that is not available directly
   from network elements.  Standards-based information models and
   associated vocabulary will be required to represent not only
   components inside or directly connected to network elements, but
   also components of a physical layer path (i.e.: cross-connect
   panels, etc.)  The inventory data will comprise the complete set of
   inactive and active network components.

   Second, the Topology Manager will augment the inventory information
   with topology information obtained from Network Elements and other
   sources, and provide an IGP-based view of the active topology of the
   network.  The Topology Manager will also include non-topology
   dynamic information from IGPs, such as Available Bandwidth, Reserved
   Bandwidth, and Traffic Engineering (TE) attributes associated with
   links.

   Finally, the Statistics Collector will collect utilization
   statistics from Network Elements, and archive and aggregate them in
   a statistics data warehouse.
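   As a rough sketch of the statistics step (the sample data and
   function names below are invented for illustration), a warehouse can
   aggregate per-link utilization samples and fit a simple linear trend
   that a Capacity Planning client could use for forecasting:

```python
# Illustrative only: fit a least-squares line through equally spaced
# utilization samples and extrapolate it to forecast future demand.

from statistics import mean

def linear_trend(samples):
    """Return (slope, intercept) of a least-squares fit over
    equally spaced samples indexed 0..n-1."""
    n = len(samples)
    xs = range(n)
    x_bar, y_bar = mean(xs), mean(samples)
    slope = (sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, samples))
             / sum((x - x_bar) ** 2 for x in xs))
    return slope, y_bar - slope * x_bar

# weekly peak utilization (fraction of link capacity) for one link
weekly_peaks = [0.50, 0.55, 0.60, 0.65, 0.70]
slope, intercept = linear_trend(weekly_peaks)

# forecast utilization 4 weeks past the last sample (index 8)
forecast = intercept + slope * (len(weekly_peaks) + 3)
assert forecast > 0.8  # link is trending toward saturation
```

   A real warehouse would of course apply more robust models, but the
   point stands: trending requires archived, normalized statistics, not
   a one-off poll of the Network Elements.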
   Selected statistics and other dynamic data may be distributed
   through IGP routing protocols
   ([I-D.ietf-isis-te-metric-extensions] and
   [I-D.ietf-ospf-te-metric-extensions]) and then collected at the
   Statistics Collection Function via BGP-LS
   ([I-D.ietf-idr-ls-distribution]).  Statistics summaries will then be
   exposed in normalized information models to the Topology Manager,
   which can use them to, for example, build trended utilization models
   to forecast expected changes to physical and logical network
   components.

   It is important to recognize that extracting topology information
   solely from Network Elements and IGPs (IS-IS TE or OSPF TE) is
   inadequate for this use case.  First, IGPs only expose the active
   components (e.g. vertices of the SPF tree) of the IP network, and
   are not aware of "hidden" or inactive interfaces within IP/MPLS
   network elements, such as unused line cards or ports.  IGPs are also
   not aware of components that reside at a layer lower than IP/MPLS,
   such as Ethernet switches or Optical transport systems.
   Second, IGPs only convey SRLG information that has first been
   applied within a router's configuration, either manually or
   programmatically.  As mentioned previously, this SRLG information in
   the IP/MPLS network is subject to being infrequently updated and, as
   a result, may inadequately account for critical, underlying network
   fate-sharing properties that are necessary to properly design
   resilient circuits and/or paths through the network.

   Once the Topology Manager has assembled a normalized view of the
   topology and the metadata associated with each component of the
   topology, it can expose this information via its northbound API to
   Clients.
In this use case, the Clients are the Capacity Planning and Traffic Engineering applications. The applications only require generalized information about the nodes and links that comprise the network, e.g.: links used to interconnect nodes, SRLG information (from the underlying network), utilization rates of each link over some period of time, etc.

Note that any client/application that understands the Topology Manager's northbound API and its topology information model can communicate with the Topology Manager. Note also that topology information may be provided by Network Elements from different vendors, which may use different information models. If a Client wanted to retrieve topology information directly from Network Elements, it would have to translate and normalize these different representations itself.

A Traffic Engineering application may run a variety of CSPF algorithms that create a list of TE tunnels that need to be (re-)established, or torn down, in the network to globally optimize the packing efficiency of physical links throughout the network.
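As an illustration of the kind of computation involved -- not a prescription of any particular algorithm -- the following sketch prunes links that cannot carry the requested bandwidth and then runs a shortest-path search over what remains. The graph encoding and attribute names are invented for the example:

```python
import heapq

# Minimal CSPF sketch: filter by an available-bandwidth constraint,
# then Dijkstra over the surviving directed links.

def cspf(links, src, dst, bw_needed):
    """links: list of (u, v, igp_metric, available_bw) tuples.
    Returns (total_metric, [node path]) or None if no path fits."""
    adj = {}
    for u, v, metric, avail_bw in links:
        if avail_bw >= bw_needed:          # constraint: enough bandwidth
            adj.setdefault(u, []).append((v, metric))
    heap, seen = [(0, src, [src])], set()
    while heap:
        cost, node, path = heapq.heappop(heap)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, metric in adj.get(node, []):
            if nxt not in seen:
                heapq.heappush(heap, (cost + metric, nxt, path + [nxt]))
    return None

links = [("A", "B", 10, 400), ("B", "C", 10, 50),   # B-C too thin for 100M
         ("A", "C", 30, 900), ("B", "C", 15, 900)]
print(cspf(links, "A", "C", bw_needed=100))  # (25, ['A', 'B', 'C'])
```

A real Traffic Engineering application would layer further constraints (SRLG diversity, priority, preemption) over the same basic prune-then-search structure.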
The TE tunnels are then programmed into the network, either directly or through a controller. Programming of TE tunnels into the network is outside the scope of this document.

A Capacity Planning application may run a variety of algorithms, the result of which is a list of new inventory that is required for purchase or redeployment, as well as associated work orders for field technicians to augment the network for expected growth.

4.1.2. Services Provisioning

Beyond Capacity Planning and Traffic Engineering applications, having a normalized view of just the IP/MPLS layer of the network is still very important for other mission-critical applications, such as Security Auditing and IP/MPLS Services Provisioning (e.g.: L2VPN, L3VPN, etc.). With respect to the latter, these types of applications should not need a detailed understanding of, for example, SRLG information, assuming that the underlying MPLS Tunnel LSPs are known to account for the resiliency requirements of all services that ride over them.
Nonetheless, for both types of applications it is critical to have a common and up-to-date normalized view of the IP/MPLS network to, for example, instantiate new services at optimal locations in the network, or to validate proper ACL configuration to protect associated routing, signaling and management protocols on the network.

A VPN Service Provisioning application must perform the following resource selection operations:

o  Identify Services PEs in all markets/cities where the customer has indicated they want service

o  Identify one or more existing Services PEs in each city with connectivity to the access network(s), e.g.: SONET/TDM, used to deliver the PE-CE tail circuits to the Services PE

o  Determine that the Services PE has available capacity on both the PE-CE access interface and its uplinks to terminate the tail circuit

The VPN Provisioning application would iteratively query the Topology Manager to narrow down the scope of resources to the set of Services PEs with the appropriate uplink bandwidth and access circuit capability, plus capacity, to realize the requested VPN service. Once the VPN Provisioning application has a candidate list of resources, it requests programming of the Services PEs and associated access circuits to set up a customer's VPN service in the network. Programming of Services PEs is outside the scope of this document.

4.1.3.
Troubleshooting & Monitoring

Once the Topology Manager has a normalized view of several layers of the network, it can expose a rich set of data to network operators who are performing diagnosis, troubleshooting and repairs on the network. Specifically, there is a need to (rapidly) assemble a current, accurate and comprehensive network diagram of a L2VPN or L3VPN service for a particular customer when either: a) attempting to diagnose a service fault/error; or, b) attempting to augment the customer's existing service. Information that may be assembled into a comprehensive picture could include physical and logical components related specifically to that customer's service, i.e.: VLANs or channels used by the PE-CE access circuits, CoS policies, historical PE-CE circuit utilization, etc. The Topology Manager would assemble this information from each of the network elements and other data sources in, and associated with, the network, and would present it in a vendor-independent data model for applications to display, allowing the operator (or, potentially, the customer through an SP's Web portal) to visualize the information.

4.2. Virtual Network Topology Manager (VNTM)

Virtual Network Topology Manager (VNTM) is in charge of managing the Virtual Network Topology (VNT), as defined in [RFC5623]. The VNT is defined in [RFC5212] as a set of one or more LSPs in one or more lower-layer networks that provides information for efficient path handling in an upper-layer network. The maintenance of a virtual topology is a complicated task. The VNTM has to decide which nodes to interconnect in the lower layer to fulfill the resource requirements of the upper layer. This means creating a topology that copes with all demands of the upper layer without wasting resources in the underlying network.
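The core VNT maintenance decision can be caricatured as follows. This is a gross simplification -- real VNT planning involves lower-layer path computation and policy -- and the single-hop matching and data shapes are invented for illustration:

```python
# Toy sketch of one VNTM decision pass: demands the existing virtual
# links cannot carry trigger requests for new lower-layer LSPs, and
# virtual links left completely unused become release candidates.

def plan_vnt(vnt_links, demands):
    """vnt_links: {(src, dst): residual_bw}; demands: [(src, dst, bw)].
    Returns (new_lsps, releasable) -- both lists of (src, dst) pairs."""
    used = set()
    new_lsps = []
    for src, dst, bw in demands:
        residual = vnt_links.get((src, dst), 0)
        if residual >= bw:
            vnt_links[(src, dst)] = residual - bw   # demand fits the VNT
            used.add((src, dst))
        else:
            new_lsps.append((src, dst))             # needs a lower-layer LSP
    releasable = [link for link in vnt_links if link not in used]
    return new_lsps, releasable

vnt = {("R1", "R2"): 40, ("R2", "R4"): 10}
print(plan_vnt(vnt, [("R1", "R2", 25), ("R1", "R3", 30)]))
# -> ([('R1', 'R3')], [('R2', 'R4')])
```

Making even this toy decision requires exactly the inputs discussed below: the upper-layer demands, the current VNT, and (in practice) the lower-layer topology.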
Once the decision is made, actions have to be taken on the network elements of both layers so that the new LSPs are provisioned. Moreover, the VNTM has to release unwanted resources, so they become available in the lower-layer network for other uses.

The VNTM does not have to solve all of the above problems in all scenarios. As defined in [RFC5623], in the PCE-VNTM cooperation model the PCE computes the paths in the higher layer and, when there are not enough resources in the VNT, the PCE requests a new path in the VNT from the VNTM. The VNTM checks the PCE request against internal policies to decide whether the request can be acted upon. The VNTM then requests the egress node in the upper layer to set up the path in the lower layer. However, the VNTM can also actively modify the VNT based on policies and network status, without waiting for an explicit PCE request. Regarding the provisioning phase, the VNTM may have to talk directly with an NMS to set up the connection [RFC5623], or it can delegate this function to the provisioning manager [I-D.farrkingel-pce-abno-architecture]. The aim of this document is not to categorize all implementation options for the VNTM, but to present its need to retrieve topological information in order to perform its functions. The VNTM may require the topologies of the lower and/or upper layer, and even the inter-layer relation between the upper- and lower-layer nodes, to decide on the optimal VNT.

4.3. Path Computation Element (PCE)

As described in [RFC4655], a PCE can be used to compute MPLS-TE paths within a "domain" (such as an IGP area) or across multiple domains (such as a multi-area AS, or multiple ASes).

o  Within a single area, the PCE offers enhanced computational power that may not be available on individual routers, sophisticated policy control and algorithms, and coordination of computation across the whole area.

o  If a router wants to compute an MPLS-TE path across IGP areas, its own TED lacks visibility of the complete topology.
That means that the router cannot determine the end-to-end path, and cannot even select the right exit router (Area Border Router - ABR) for an optimal path. This is an issue for large-scale networks that need to segment their core networks into distinct areas, but which still want to take advantage of MPLS-TE.

The PCE presents a computation server that may have visibility into more than one IGP area or AS, or may cooperate with other PCEs to perform distributed path computation. The PCE needs access to the topology and the Traffic Engineering Database (TED) for the area(s) it serves, but [RFC4655] does not describe how this is achieved. Many implementations make the PCE a passive participant in the IGP so that it can learn the latest state of the network, but this may be sub-optimal when the network is subject to a high degree of churn, or when the PCE is responsible for multiple areas. The following figure shows how a PCE can get its TED information using a Topology Server.

       +----------+
       |  -----   |  TED synchronization via Topology API
       | | TED |<-+----------------------------------+
       |  -----   |                                  |
       |    |     |                                  |
       |    |     |                                  |
       |    v     |                                  |
       |  -----   |                                  |
       | | PCE |  |                                  |
       |  -----   |                                  |
       +----------+                                  |
            ^                                        |
            | Request/                               |
            | Response                               |
            v                                        |
  Service  +----------+ Signaling    +----------+    +----------+
  Request  | Head-End | Protocol     | Adjacent |    | Topology |
  -------->|   Node   |<------------>|   Node   |    | Manager  |
           +----------+              +----------+    +----------+

   Figure 4: Topology use case: Path Computation Element

4.4. ALTO Server

An ALTO Server [RFC5693] is an entity that generates an abstracted network topology and provides it to network-aware applications over a web service based API. Example applications are p2p clients or trackers, or CDNs. The abstracted network topology comes in the form of
two maps: a Network Map that specifies the allocation of prefixes to PIDs, and a Cost Map that specifies the cost between the PIDs listed in the Network Map. For more details, see [I-D.ietf-alto-protocol].

ALTO abstract network topologies can be auto-generated from the physical topology of the underlying network. The generation would typically be based on policies and rules set by the operator. Both prefix and TE data are required: prefix data is required to generate ALTO Network Maps, and TE (topology) data is required to generate ALTO Cost Maps. Prefix data is originated and carried in BGP; TE data is originated and carried in an IGP. The mechanism defined in this document provides a single interface through which an ALTO Server can retrieve all the necessary prefix and network topology data from the underlying network.
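A rough sketch of how a Topology Manager could derive the two maps from these two inputs follows. The helper name and input shapes are invented, and the JSON layout only loosely follows the ALTO protocol's map resources:

```python
# Hypothetical derivation of ALTO maps: BGP-learned prefixes grouped
# into PIDs fill the Network Map; IGP/TE path costs between the PIDs
# fill the Cost Map.

def build_alto_maps(pid_prefixes, pid_costs):
    """pid_prefixes: {pid: [ipv4 prefixes]} (from BGP);
    pid_costs: {(src_pid, dst_pid): cost} (from IGP/TE data)."""
    network_map = {
        "meta": {"vtag": {"resource-id": "my-network-map", "tag": "1"}},
        "network-map": {pid: {"ipv4": prefixes}
                        for pid, prefixes in pid_prefixes.items()},
    }
    cost_map = {
        "meta": {"cost-type": {"cost-mode": "numerical",
                               "cost-metric": "routingcost"}},
        "cost-map": {},
    }
    for (src, dst), cost in pid_costs.items():
        cost_map["cost-map"].setdefault(src, {})[dst] = cost
    return network_map, cost_map

nmap, cmap = build_alto_maps(
    {"pid1": ["192.0.2.0/24"], "pid2": ["198.51.100.0/24"]},
    {("pid1", "pid2"): 10, ("pid1", "pid1"): 1})
print(cmap["cost-map"]["pid1"]["pid2"])  # 10
```

The operator's policies would govern the grouping of prefixes into PIDs and the abstraction applied to the raw TE costs.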
Note that an ALTO Server can use other mechanisms to get network data, for example, peering with multiple IGP and BGP Speakers. The following figure shows how an ALTO Server can get network topology information from the underlying network using the Topology API.

   +--------+
   | Client |<--+
   +--------+   |
                | ALTO       +----------+                  +----------+
   +--------+   | Protocol   |   ALTO   | Network Topology | Topology |
   | Client |<--+------------|  Server  |<-----------------| Manager  |
   +--------+   |            |          |                  |          |
                |            +----------+                  +----------+
   +--------+   |
   | Client |<--+
   +--------+

   Figure 5: Topology use case: ALTO Server

5. Acknowledgements

The authors wish to thank Alia Atlas, Dave Ward, Hannes Gredler and Stefano Previdi for their valuable contributions and feedback on this draft.

6. IANA Considerations

This memo includes no request to IANA.

7. Security Considerations

At the moment, the Use Cases covered in this document apply specifically to a single Service Provider or Enterprise network. Therefore, network administrators should take appropriate precautions to ensure that appropriate access controls exist so that only internal applications and end-users have physical or logical access to the Topology Manager. This should be similar to precautions that are already taken by Network Administrators to secure their existing Network Management, OSS and BSS systems. As this work evolves, it will be important to determine the appropriate granularity of access controls in terms of which individuals or groups may have read and/or write access to the various types of information contained within the Topology Manager. It would be ideal if these access control mechanisms were centralized within the Topology Manager itself.

8. References

8.1. Normative References

[RFC2119]  Bradner, S., "Key words for use in RFCs to Indicate Requirement Levels", BCP 14, RFC 2119, March 1997.

8.2. Informative References

[I-D.farrkingel-pce-abno-architecture]
           King, D. and A.
Farrel, "A PCE-based Architecture for Application-based Network Operations", draft-farrkingel-pce-abno-architecture-05 (work in progress), July 2013.

[I-D.ietf-alto-protocol]
           Alimi, R., Penno, R., and Y. Yang, "ALTO Protocol", draft-ietf-alto-protocol-17 (work in progress), July 2013.

[I-D.ietf-idr-ls-distribution]
           Gredler, H., Medved, J., Previdi, S., Farrel, A., and S. Ray, "North-Bound Distribution of Link-State and TE Information using BGP", draft-ietf-idr-ls-distribution-03 (work in progress), May 2013.

[I-D.ietf-isis-te-metric-extensions]
           Previdi, S., Giacalone, S., Ward, D., Drake, J., Atlas, A., and C. Filsfils, "IS-IS Traffic Engineering (TE) Metric Extensions", draft-ietf-isis-te-metric-extensions-00 (work in progress), June 2013.

[I-D.ietf-ospf-te-metric-extensions]
           Giacalone, S., Ward, D., Drake, J., Atlas, A., and S. Previdi, "OSPF Traffic Engineering (TE) Metric Extensions", draft-ietf-ospf-te-metric-extensions-04 (work in progress), June 2013.

[I-D.ietf-pce-stateful-pce]
           Crabbe, E., Medved, J., Minei, I., and R. Varga, "PCEP Extensions for Stateful PCE", draft-ietf-pce-stateful-pce-05 (work in progress), July 2013.

[RFC4655]  Farrel, A., Vasseur, J., and J. Ash, "A Path Computation Element (PCE)-Based Architecture", RFC 4655, August 2006.

[RFC5212]  Shiomoto, K., Papadimitriou, D., Le Roux, JL., Vigoureux, M., and D. Brungard, "Requirements for GMPLS-Based Multi-Region and Multi-Layer Networks (MRN/MLN)", RFC 5212, July 2008.

[RFC5623]  Oki, E., Takeda, T., Le Roux, JL., and A.
Farrel, "Framework for PCE-Based Inter-Layer MPLS and GMPLS Traffic Engineering", RFC 5623, September 2009.

[RFC5693]  Seedorf, J. and E. Burger, "Application-Layer Traffic Optimization (ALTO) Problem Statement", RFC 5693, October 2009.

Authors' Addresses

Jan Medved
Cisco Systems, Inc.
170 West Tasman Drive
San Jose, CA 95134
USA

Email: jmedved@cisco.com

Stefano Previdi
Cisco Systems, Inc.
170 West Tasman Drive
San Jose, CA 95134
USA

Email: sprevidi@cisco.com

Victor Lopez
Telefonica I+D
c/ Don Ramon de la Cruz 84
Madrid 28006
Spain

Email: vlopez@tid.es

Shane Amante