Internet Engineering Task Force                                J. Medved
Internet-Draft                                                S. Previdi
Intended status: Informational                       Cisco Systems, Inc.
Expires: April 23, 2014                                         V. Lopez
                                                          Telefonica I+D
                                                               S. Amante
                                                        October 20, 2013

                         Topology API Use Cases
                draft-amante-i2rs-topology-use-cases-01
Abstract

This document describes use cases for gathering routing, forwarding
and policy information (hereafter referred to as topology
information) about the network. It describes several applications
that need to view the topology of the underlying physical or logical
network. This document further demonstrates the need for a "Topology
Manager" and related functions that collect topology data from
network elements and other data sources, coalesce the collected data
into a coherent view of the overall network topology, and normalize
the network topology view for use by clients -- namely, applications
that consume topology information.
Status of This Memo
This Internet-Draft is submitted in full conformance with the
provisions of BCP 78 and BCP 79.

Internet-Drafts are working documents of the Internet Engineering
Task Force (IETF). Note that other groups may also distribute
working documents as Internet-Drafts. The list of current Internet-
Drafts is at http://datatracker.ietf.org/drafts/current/.

Internet-Drafts are draft documents valid for a maximum of six months
and may be updated, replaced, or obsoleted by other documents at any
time. It is inappropriate to use Internet-Drafts as reference
material or to cite them other than as "work in progress."
This Internet-Draft will expire on April 23, 2014.
Copyright Notice

Copyright (c) 2013 IETF Trust and the persons identified as the
document authors. All rights reserved.

This document is subject to BCP 78 and the IETF Trust's Legal
Provisions Relating to IETF Documents
(http://trustee.ietf.org/license-info) in effect on the date of
publication of this document. Please review these documents
carefully, as they describe your rights and restrictions with respect
to this document. Code Components extracted from this document must
include Simplified BSD License text as described in Section 4.e of
the Trust Legal Provisions and are provided without warranty as
described in the Simplified BSD License.
Table of Contents

1.  Introduction
  1.1.  Statistics Collection
  1.2.  Inventory Collection
  1.3.  Requirements Language
2.  Terminology
3.  The Orchestration, Collection & Presentation Framework
  3.1.  Overview
  3.2.  The Topology Manager
  3.3.  The Policy Manager
  3.4.  Orchestration Manager
4.  Use Cases
  4.1.  Virtualized Views of the Network
    4.1.1.  Capacity Planning and Traffic Engineering
      4.1.1.1.  Present Mode of Operation
      4.1.1.2.  Proposed Mode of Operation
    4.1.2.  Services Provisioning
    4.1.3.  Troubleshooting & Monitoring
  4.2.  Virtual Network Topology Manager (VNTM)
  4.3.  Path Computation Element (PCE)
  4.4.  ALTO Server
5.  Acknowledgements
6.  IANA Considerations
7.  Security Considerations
8.  References
  8.1.  Normative References
  8.2.  Informative References
Authors' Addresses
1. Introduction

In today's networks, a variety of applications, such as Traffic
Engineering, Capacity Planning, Security Auditing or Services
Provisioning (for example, Virtual Private Networks), have a common
need to acquire and consume network topology information.
Unfortunately, all of these applications are (typically) vertically
integrated: each uses its own proprietary normalized view of the
network and proprietary data collectors, interpreters and adapters,
which speak a variety of protocols (SNMP, CLI, SQL, etc.) directly
to network elements and to back-office systems. While some of the
topological information can be distributed using routing protocols,
application has its own data model. To solve this, some solutions
have distributed this information in the normalized form of routing
distribution. However, this information still does not contain
"inactive" topological information, and thus does not include
information considered to be part of a network's inventory.

These limitations lead to applications being unable to easily
exchange information with each other. For example, applications
cannot share changes with each other that are (to be) applied to the
physical and/or logical network, such as installation of new physical
links, or deployment of security ACLs. Each application must
frequently poll network elements and other data sources to ensure
that it has a consistent representation of the network so that it can
carry out its particular domain-specific tasks. In other cases,
applications that cannot speak routing protocols must use proprietary
CLI or other management interfaces, which represent the topological
information in non-standard formats or, worse, with non-standard
semantic models.
Overall, the software architecture described above at best results in
inefficient use of both software developer resources and network
resources, and at worst, it results in some applications simply not
having access to this information.
Figure 1 is an illustration of how individual applications collect
data from the underlying network. Applications retrieve inventory,
network topology, state and statistics information by communicating
directly with underlying Network Elements as well as with
intermediary proxies of the information. In addition, applications
transmit changes required of a Network Element's configuration and/or
state directly to individual Network Elements, most commonly using
CLI or NETCONF. It is important to note that the "data models" or
semantics of the information contained within Network Elements are
largely proprietary with respect to most configuration and state
information, which is why a proprietary CLI is often the only choice
for reflecting changes in an NE's configuration or state. This
remains the case even when standards-based mechanisms such as NETCONF
are used: these provide a standard syntax, but still often fall short
due to the proprietary semantics associated with the internal
representation of the information.
                           +---------------+
                         +----------------+ |
                         |  Applications  |-+
                         +----------------+
                            ^     ^     ^
            SQL, RPC, ReST  #     |     *  SQL, RPC, ReST ...
  ###########################     |     ***********************
  #                               |                           *
  +------------+                  |                +------------+
  | Statistics |                  |                | Inventory  |
  | Collection |                  |                | Collection |
  +------------+                  |                +------------+
         ^                        | NETCONF, I2RS, SNMP,  ^
         |                        | CLI, TL1, ...         |
         +------------------------+-----------------------+
         |                        |                       |
         |                        |                       |
 +---------------+        +---------------+       +---------------+
 |Network Element|        |Network Element|       |Network Element|
 | +-----------+ |        | +-----------+ |       | +-----------+ |
 | |Information| |<-LLDP->| |Information| |<-LMP->| |Information| |
 | |   Model   | |        | |   Model   | |       | |   Model   | |
 | +-----------+ |        | +-----------+ |       | +-----------+ |
 +---------------+        +---------------+       +---------------+

           Figure 1: Applications getting topology data
Figure 1 shows how current management interfaces such as NETCONF,
SNMP, CLI, etc. are used to transmit or receive information to/from
various Network Elements. The figure also shows that protocols such
as LLDP and LMP participate in topology discovery, specifically to
discover adjacent network elements.

The following sections describe the "Statistics Collection" and
"Inventory Collection" functions.
1.1. Statistics Collection

In Figure 1, "Statistics Collection" is a dedicated infrastructure
that collects statistics from Network Elements. It periodically (for
example, every 5 minutes) polls Network Elements for octets
transferred per interface, per LSP, etc. Collected statistics are
stored and collated within a statistics data warehouse. Applications
typically query the statistics data warehouse, rather than poll
Network Elements directly, to get the appropriate set of link
utilization figures for their analysis.
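The collect/collate/query flow above can be sketched as follows. This
is purely an illustrative example, not part of any standard interface:
the poller, the in-memory "warehouse", and the 95th-percentile
collation (one common way operators summarize link utilization) are
all hypothetical names and assumptions.

```python
# Illustrative sketch of a "Statistics Collection" function: a poller
# samples per-link octet counters at a fixed interval, and a warehouse
# collates the samples into utilization figures that applications
# query instead of polling Network Elements directly.
import math

POLL_INTERVAL_SECS = 300  # e.g., poll every 5 minutes


def utilization_bps(prev_octets, curr_octets, interval_secs):
    """Convert two successive octet-counter readings into bits/second."""
    return (curr_octets - prev_octets) * 8 / interval_secs


def percentile(samples, pct):
    """Nearest-rank percentile, as commonly used for capacity planning."""
    ranked = sorted(samples)
    rank = math.ceil(pct / 100 * len(ranked))
    return ranked[rank - 1]


class StatsWarehouse:
    """Stores per-link utilization samples for applications to query."""

    def __init__(self):
        self._samples = {}  # link-id -> list of bps figures

    def record(self, link_id, bps):
        self._samples.setdefault(link_id, []).append(bps)

    def p95(self, link_id):
        return percentile(self._samples[link_id], 95)


# Usage sketch: feed successive SNMP-style counter readings for one link.
warehouse = StatsWarehouse()
readings = [0, 150_000_000, 450_000_000, 600_000_000, 1_050_000_000]
for prev, curr in zip(readings, readings[1:]):
    warehouse.record("link-1", utilization_bps(prev, curr, POLL_INTERVAL_SECS))
```

An application would then ask the warehouse for `p95("link-1")` rather
than touching the Network Element at all.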
1.2. Inventory Collection

"Inventory Collection" is a network function responsible for
collecting component and state information directly from Network
Elements, as well as for storing inventory information about physical
network assets that are not retrievable from Network Elements. The
collected data is hereafter referred to as the 'Inventory Asset
Database'. Examples of information collected from Network Elements
are: interface up/down status, the type of SFP/XFP optics inserted
into a physical port, etc.

The Inventory Collection function may use SNMP and CLI to acquire
inventory information from Network Elements. The information housed
in the Inventory Manager is retrieved by applications via a variety
of protocols: SQL, RPC, REST, etc. Inventory information retrieved
from Network Elements is periodically updated in the Inventory
Collection system to reflect changes in the physical and/or logical
network assets. The polling interval used to retrieve updated
information varies depending on the scaling constraints of the
Inventory Collection systems and the intervals at which changes to
the physical and/or logical assets are expected to occur.
Examples of changes in network inventory that need to be learned by
the Inventory Collection function are as follows:

o  Discovery of new Network Elements. These elements may or may not
   be actively used in the network (i.e.: provisioned but not yet
   activated).

o  Insertion or removal of line cards or other modules, such as
   optics modules, during service or equipment provisioning.

o  Changes made to a specific Network Element through a management
   interface by a field technician.

o  Indication of an NE's physical location and associated cable run
   list, at the time of installation.

o  Insertion or removal of cables that result in dynamic discovery of
   a new or lost adjacent neighbor, etc.
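The change events listed above could be applied to an inventory store
along the following lines. This is an illustrative sketch only: the
record fields, method names, and the toy in-memory database are
assumptions for exposition, not a proposed data model.

```python
# Toy 'Inventory Asset Database' sketch: one record per Network
# Element, updated by the kinds of change events listed above
# (discovery, module insertion/removal, neighbor changes).
from dataclasses import dataclass, field


@dataclass
class NetworkElementRecord:
    ne_id: str
    active: bool = False           # provisioned but not yet activated?
    location: str = ""             # physical location, set at installation
    modules: set = field(default_factory=set)    # line cards, optics, ...
    neighbors: set = field(default_factory=set)  # discovered via LLDP/LMP


class InventoryDatabase:
    def __init__(self):
        self.records = {}

    def discover_element(self, ne_id, active=False):
        self.records[ne_id] = NetworkElementRecord(ne_id, active=active)

    def insert_module(self, ne_id, module):
        self.records[ne_id].modules.add(module)

    def remove_module(self, ne_id, module):
        self.records[ne_id].modules.discard(module)

    def neighbor_change(self, ne_id, neighbor, up):
        nbrs = self.records[ne_id].neighbors
        nbrs.add(neighbor) if up else nbrs.discard(neighbor)


# Usage sketch, mirroring the bullet list above.
db = InventoryDatabase()
db.discover_element("pe1", active=False)    # provisioned, not yet activated
db.insert_module("pe1", "10GE-SFP-slot2")   # optics inserted at provisioning
db.neighbor_change("pe1", "p3", up=True)    # cable insertion -> new adjacency
```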
1.3. Requirements Language

The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
"SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
document are to be interpreted as described in RFC 2119 [RFC2119].
2. Terminology

The following briefly defines some of the terminology used within
this document.
Inventory Manager is a function that collects Network Element
   inventory and state information directly from Network Elements and
   from associated offline inventory databases. Inventory information
   may only be visible at a specific network layer; for example, a
   physical link is visible within the IGP, but a Layer-2 switch
   through which the physical link passes is unknown to the Layer-3
   IGP.
Policy Manager is a function that attaches metadata to network
   components/attributes. Such metadata may include security,
   routing, L2 VLAN ID, IP numbering, etc. policies, which enable the
   Topology Manager to:
   *  Assemble a normalized view of the network for clients (or
      upper-layer applications).

   *  Allow clients (or upper-layer applications) access to
      information collected from various network layers and/or
      network components, etc.

   The Policy Manager function may be a sub-component of the Topology
   Manager or it may be a standalone function.
Topology Manager is a function that collects topological information
   from a variety of sources in the network and provides a normalized
   view of the network topology to clients and/or higher-layer
   applications.

Orchestration Manager is a function that stitches together resources,
   such as compute or storage, and/or services with the network, or
   vice-versa. To realize a complete service, the Orchestration
   Manager relies on capabilities provided by the other "Managers"
   listed above.

Normalized Topology Information Model is an open, standards-based
   information model of the network topology.

Information Model Abstraction: The notion that one is able to
   represent the same set of elements in an information model at
   different levels of "focus", in order to limit the amount of
   information exchanged when conveying it.
Multi-Layer Topology: Topology is commonly described using the OSI
   protocol layering model. For example, Layer 3 represents routed
   topologies that typically use IPv4 or IPv6 addresses. It is
   envisioned that, eventually, multiple layers of the network may be
   represented in a single, normalized view of the network for
   certain applications (e.g., Capacity Planning, Traffic
   Engineering, etc.)
Network Element (NE) refers to a network device, which is typically
   (but not always) addressable, or to a host. It is sometimes
   referred to as a 'Node'.
Links: Every NE contains at least one link. Links are used to
   connect the NE to other NEs in the network. Links may be in a
   variety of states, such as up, down, administratively down,
   internally testing, or dormant. Links are often synonymous with
   network ports on NEs.
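A toy sketch can tie several of these terms together: Network
Elements and Links held in a multi-layer topology, with a "focused"
view illustrating Information Model Abstraction. The classes, field
names, and layer labels here are illustrative assumptions; an actual
normalized information model would be standards-based and far richer.

```python
# Illustrative multi-layer topology sketch: nodes and links tagged with
# a layer, plus a view() that narrows focus (by layer, or to the active
# topology only) -- the same elements, exchanged at less detail.
LINK_STATES = {"up", "down", "admin-down", "testing", "dormant"}


class Link:
    def __init__(self, src, dst, layer, state="up"):
        assert state in LINK_STATES
        self.src, self.dst, self.layer, self.state = src, dst, layer, state


class MultiLayerTopology:
    def __init__(self):
        self.nodes = {}   # node-id -> set of layers at which it is visible
        self.links = []

    def add_node(self, node_id, layers):
        self.nodes[node_id] = set(layers)

    def add_link(self, src, dst, layer, state="up"):
        self.links.append(Link(src, dst, layer, state))

    def view(self, layer=None, active_only=False):
        """Return an abstracted (focused) list of links."""
        result = self.links
        if layer is not None:
            result = [l for l in result if l.layer == layer]
        if active_only:
            result = [l for l in result if l.state == "up"]
        return result


# Usage: a Layer-2 switch on the path is invisible to a Layer-3 view.
topo = MultiLayerTopology()
topo.add_node("r1", {"L3"})
topo.add_node("r2", {"L3"})
topo.add_node("sw1", {"L2"})
topo.add_link("r1", "sw1", "L2")
topo.add_link("sw1", "r2", "L2")
topo.add_link("r1", "r2", "L3")                # the routed adjacency
topo.add_link("r1", "r2", "L3", "admin-down")  # inactive parallel link
```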
3. The Orchestration, Collection & Presentation Framework

3.1. Overview
Section 1 demonstrates the need for a network function that would
provide a common, standards-based topology view to applications.
Such a topology collection/management/presentation function would be
part of a wider framework that should also include policy management
and orchestration. The framework is shown in Figure 2.
                           +---------------+
                         +----------------+ |
                         |  Applications  |-+
                         +----------------+
                                  ^ Websockets, ReST, XMPP...
       +--------------------------+-------------------------+
       |                          |                         |
+------------+       +------------------------+      +-------------+
|   Policy   |<------|    Topology Manager    |----->|Orchestration|
|  Manager   |       | +--------------------+ |      |   Manager   |
+------------+       | |Topology Information| |      +-------------+
                     | |       Model        | |
                     | +--------------------+ |
                     +------------------------+
                            ^     ^     ^
  Websockets, ReST, XMPP    #     |     *    Websockets, ReST, XMPP
  ###########################     |     ***********************
  #                               |                           *
  +------------+                  |                +------------+
  | Statistics |                  |                | Inventory  |
  | Collection |                  |                | Collection |
  +------------+                  |                +------------+
         ^                        | I2RS, NETCONF, SNMP,  ^
         |                        | TL1 ...               |
         +------------------------+-----------------------+
         |                        |                       |
         |                        |                       |
 +---------------+        +---------------+       +---------------+
 |Network Element|        |Network Element|       |Network Element|
 | +-----------+ |        | +-----------+ |       | +-----------+ |
 | |Information| |<-LLDP->| |Information| |<-LMP->| |Information| |
 | |   Model   | |        | |   Model   | |       | |   Model   | |
 | +-----------+ |        | +-----------+ |       | +-----------+ |
 +---------------+        +---------------+       +---------------+

                     Figure 2: Topology Manager
The following sections describe in detail the Topology Manager,
Policy Manager and Orchestration Manager functions.
3.2. The Topology Manager
The Topology Manager is a function that collects topological
information from a variety of sources in the network and provides a
cohesive, abstracted view of the network topology to clients and/or
higher-layer applications. The topology view is based on a
standards-based, normalized topology information model.
Topology information sources can be:
o The "live" Layer 3 IGP or an equivalent mechanism that provides
  information about links that are components of the active
  topology. Active topology links are present in the Link State
  Database (LSDB) and are eligible for forwarding. Layer 3 IGP
  information can be obtained by listening to IGP updates flooded
  through an IGP domain, or from Network Elements.
o The Inventory Collection system that provides information for
  network components not visible within the Layer 3 IGP's LSDB
  (i.e.: links or nodes, or properties of those links or nodes, at
  lower layers of the network).
o The Statistics Collection system that provides traffic
  information, such as traffic demands or link utilizations.
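The coalescing of these three source types could be sketched as
follows. This is an illustrative sketch only; the record layouts and
field names (src, dst, srlg, utilization, etc.) are assumptions for
the example and are not defined by this document:

```python
# Illustrative sketch: merge active-topology (IGP), inventory, and
# statistics data into one normalized node/link view. All field names
# here are hypothetical; the draft does not define a concrete schema.

def merge_topology(igp_links, inventory_links, link_stats):
    """Coalesce per-source link records keyed by (src, dst) endpoints."""
    topology = {}
    # Inventory first: it covers both active and inactive components.
    for link in inventory_links:
        key = (link["src"], link["dst"])
        topology[key] = {**link, "active": False}
    # IGP (LSDB) links mark which components are in the active topology.
    for link in igp_links:
        key = (link["src"], link["dst"])
        topology.setdefault(key, dict(link))
        topology[key]["active"] = True
    # Statistics annotate whatever links they refer to.
    for key, stats in link_stats.items():
        if key in topology:
            topology[key]["utilization"] = stats["utilization"]
    return topology

igp = [{"src": "A", "dst": "B", "metric": 10}]
inv = [{"src": "A", "dst": "B", "srlg": [7]},
       {"src": "B", "dst": "C", "srlg": [9]}]
stats = {("A", "B"): {"utilization": 0.42}}
view = merge_topology(igp, inv, stats)
```

Note that the inventory-sourced link B-C remains in the merged view
even though it is absent from the IGP, which is the point of
combining the sources.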
The Topology Manager provides topology information to Clients or
higher-layer applications via a northbound interface, such as ReST,
Websockets, or XMPP.
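As an illustration of such a northbound interface, the following
sketches a JSON payload that a ReST query for topology might return.
The resource identifiers and the payload schema are hypothetical;
this document does not define them:

```python
import json

# Hypothetical example of a topology document that a ReST northbound
# interface could serve. The schema shown is an assumption for the
# example, not one defined by the draft.
topology_response = {
    "topology-id": "example-multilayer-topology",  # illustrative id
    "nodes": [
        {"node-id": "pe1", "layer": "ip-mpls"},
        {"node-id": "roadm1", "layer": "optical"},
    ],
    "links": [
        {"link-id": "pe1-roadm1", "source": "pe1", "dest": "roadm1"},
    ],
}

body = json.dumps(topology_response)   # what the server would send
decoded = json.loads(body)             # what a client would parse
```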
The Topology Manager will contain topology information for multiple
layers of the network: Transport, Ethernet and IP/MPLS, as well as
information for multiple Layer 3 IGP areas and multiple Autonomous
Systems (ASes). The topology information can be used by higher-level
applications, such as Traffic Engineering, Capacity Planning and
Provisioning. Such applications are typically used to design,
augment and optimize IP/MPLS networks, and require knowledge of
underlying Shared Risk Link Groups (SRLG) within the Transport and/or
Ethernet layers of the network.
The Topology Manager must be able to discover Network Elements that
are not visible in the "live" L3 IGP's Link State Database (LSDB).
Such Network Elements can either be inactive, or active but invisible
in the L3 LSDB (e.g.: L2 Ethernet switches, ROADM's, or Network
Elements that are in an underlying transport network).
In addition to static inventory information collected from the
Inventory Manager, the Topology Manager will also collect dynamic
inventory information. For example, Network Elements utilize various
Link Layer Discovery Protocols (i.e.: LLDP, LMP, etc.) to
automatically identify adjacent nodes and ports. This information
can be pushed to or pulled by the Topology Manager in order to create
an accurate representation of the physical topology of the network.
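Deriving physical links from such neighbor reports could be sketched
as follows. The report format (node, port, neighbor, neighbor-port)
is an assumption for the example; real LLDP/LMP data carries more
detail:

```python
# Illustrative sketch: derive bidirectional physical links from
# LLDP/LMP-style neighbor reports. Each report says "my port X sees
# neighbor N on its port Y". The tuple layout is hypothetical.

def links_from_neighbor_reports(reports):
    links = set()
    for node, port, peer, peer_port in reports:
        # Normalize endpoint order so the A->B and B->A views of the
        # same adjacency collapse into a single link.
        ends = sorted([(node, port), (peer, peer_port)])
        links.add((ends[0], ends[1]))
    return links

reports = [
    ("sw1", "ge-0/0/1", "sw2", "ge-0/0/7"),
    ("sw2", "ge-0/0/7", "sw1", "ge-0/0/1"),  # far side of same link
]
physical_links = links_from_neighbor_reports(reports)
```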
3.3. The Policy Manager
The Policy Manager is the function used to enforce and program
policies applicable to network component/attribute data. Policy
enforcement is a network-wide function that can be consumed by
various Network Elements and services, including the Inventory
Manager, the Topology Manager and other Network Elements. Such
policies are likely to encompass the following:
o Logical Identifier Numbering Policies
   * Correlation of IP prefix to link based on link type, such as
     P-P, P-PE, or PE-CE.
   * Correlation of IP Prefix to IGP Area
   * Layer-2 VLAN ID assignments, etc.
o Routing Configuration Policies
   * OSPF Area or IS-IS Net-ID to Node (Type) Correlation
   * BGP routing policies, such as nodes designated for injection of
     aggregate routes, max-prefix policies, or AFI/SAFI to node
     correlation.
o Security Policies
   * Access Control Lists
   * Rate-limiting
o Network Component/Attribute Data Access Policies: a Client's
  (upper-layer application's) access to Network Components/Attributes
  contained in the "Inventory Manager", as well as Policies contained
  within the "Policy Manager" itself.
The Policy Manager function may be either a sub-component of the
Topology or Orchestration Manager or a standalone component.
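A numbering-policy check of the kind listed above could be sketched
as follows. The mapping from link type to expected prefix length is
an assumption made for the example, not a recommendation of this
document:

```python
import ipaddress

# Illustrative sketch of a Logical Identifier Numbering Policy check:
# each link type is expected to be numbered from a prefix of a given
# length. The mapping below is hypothetical.
EXPECTED_PREFIX_LEN = {"P-P": 31, "P-PE": 31, "PE-CE": 30}

def prefix_conforms(link_type, prefix):
    """True if the prefix length matches the policy for this link type."""
    net = ipaddress.ip_network(prefix)
    return net.prefixlen == EXPECTED_PREFIX_LEN.get(link_type)

ok = prefix_conforms("P-P", "192.0.2.0/31")
bad = prefix_conforms("PE-CE", "192.0.2.8/29")
```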
3.4. Orchestration Manager
The Orchestration Manager provides the ability to stitch together
resources (such as compute or storage) and/or services with the
network or vice-versa. Examples of 'generic' services may include
the following:
o Application-specific Load Balancing

o Application-specific Network (Bandwidth) Optimization

o Application or End-User specific Class-of-Service

o Application or End-User specific Network Access Control
The above services could then enable coupling of resources with the
network to realize the following:
o Network Optimization: Creation and Migration of Virtual Machines
  (VM's) so they are adjacent to storage in the same DataCenter.
o Network Access Control: Coupling of available (generic) compute
  nodes within the appropriate point of the data-path to perform
  firewall, NAT, etc. functions on data traffic.
The Orchestration Manager will exchange information models with the
Topology Manager, the Policy Manager and the Inventory Manager. In
addition, the Orchestration Manager must support publish and
subscribe capabilities to those functions, as well as to Clients.
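The publish and subscribe capability mentioned above can be sketched
minimally as follows; the event bus, topic names and event payloads
are all illustrative assumptions:

```python
# Minimal publish/subscribe sketch illustrating how the Orchestration
# Manager could fan out event notifications to Clients and to the
# Topology/Policy/Inventory functions. All names are hypothetical.

class EventBus:
    def __init__(self):
        self.subscribers = {}  # topic -> list of callbacks

    def subscribe(self, topic, callback):
        self.subscribers.setdefault(topic, []).append(callback)

    def publish(self, topic, event):
        # Deliver the event to every subscriber of this topic.
        for callback in self.subscribers.get(topic, []):
            callback(event)

bus = EventBus()
received = []
bus.subscribe("link-down", received.append)    # a Client's handler
bus.publish("link-down", {"link": "pe1-pe2"})  # e.g. from Topology Mgr
```

The design point is that producers need not track individual
consumers, which is what makes event notification scale.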
The Orchestration Manager may receive requests from Clients
(applications) for immediate access to specific network resources.
However, Clients may request to schedule future appointments to
reserve appropriate network resources when, for example, a special
event is scheduled to start and end.
Finally, the Orchestration Manager should have the flexibility to
determine what network layer(s) may be able to satisfy a given
Client's request, based on constraints received from the Client as
well as constraints learned from the Policy and Topology Managers.
This could allow the Orchestration Manager to, for example, satisfy a
given service request for a given Client using the optical network
(via OTN service) if there is insufficient IP/MPLS capacity at the
specific moment the Client's request is received.
The operational model is shown in the following figure.

   TBD.

                  Figure 3: Overall Reference Model

4. Use Cases

4.1. Virtualized Views of the Network

4.1.1. Capacity Planning and Traffic Engineering
4.1.1.1. Present Mode of Operation
When performing Traffic Engineering and/or Capacity Planning of an
IP/MPLS network, it is important to account for SRLG's that exist
within the underlying physical, optical and Ethernet networks.
Currently, it's quite common to take "snapshots" at infrequent
intervals that comprise the inventory data of the underlying physical
and optical layer networks. This inventory data is then normalized
to conform to data import requirements of sometimes separate Traffic
Engineering and/or Capacity Planning tools. This process is
error-prone and inefficient, particularly as the underlying network
inventory information changes due to introduction of new network
element makes or models, line cards, capabilities, etc.
The present mode of operation is inefficient with respect to
Software Development, Capacity Planning and Traffic Engineering
resources. Due to this inefficiency, the underlying physical network
inventory information (containing SRLG and corresponding critical
network asset information) is not updated frequently, thus exposing
the network to, at minimum, inefficient utilization and, at worst,
critical impairments.
4.1.1.2. Proposed Mode of Operation

First, the Inventory Manager will extract inventory information from
network elements and associated inventory databases. Information
extracted from inventory databases will include physical
cross-connects and other information that is not available directly
from network elements. Standards-based information models and
associated vocabulary will be required to represent not only
components inside or directly connected to network elements, but also
to represent components of a physical layer path (i.e.:
cross-connect panels, etc.). The inventory data will comprise the
complete set of inactive and active network components.

Second, the Topology Manager will augment the inventory information
with topology information obtained from Network Elements and other
sources, and provide an IGP-based view of the active topology of the
network. The Topology Manager will also include non-topology dynamic
information from IGPs, such as Available Bandwidth, Reserved
Bandwidth, Traffic Engineering (TE) attributes associated with links,
etc.

Finally, the Statistics Collector will collect utilization statistics
from Network Elements, and archive and aggregate them in a statistics
data warehouse. Selected statistics and other dynamic data may be
distributed through IGP routing protocols
([I-D.ietf-isis-te-metric-extensions] and
[I-D.ietf-ospf-te-metric-extensions]) and then collected at the
Statistics Collection Function via BGP-LS
([I-D.ietf-idr-ls-distribution]). Statistics summaries will then be
exposed in normalized information models to the Topology Manager,
which can use them to, for example, build trended utilization models
to forecast expected changes to physical and logical network
components.
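A trended utilization model of the kind just described could be
sketched as a simple linear fit over archived samples; a real
Capacity Planning system would use far richer models, and the sample
data below is made up:

```python
# Illustrative sketch: fit a linear trend to archived link-utilization
# samples and estimate when the link crosses a planning threshold.

def linear_fit(samples):
    """Least-squares slope/intercept for (time, utilization) pairs."""
    n = len(samples)
    mean_t = sum(t for t, _ in samples) / n
    mean_u = sum(u for _, u in samples) / n
    num = sum((t - mean_t) * (u - mean_u) for t, u in samples)
    den = sum((t - mean_t) ** 2 for t, _ in samples)
    slope = num / den
    return slope, mean_u - slope * mean_t

def months_until(samples, threshold):
    """Months until the fitted trend reaches the threshold."""
    slope, intercept = linear_fit(samples)
    return (threshold - intercept) / slope

# Monthly utilization of one link: 40%, 45%, 50%, 55%.
history = [(0, 0.40), (1, 0.45), (2, 0.50), (3, 0.55)]
eta = months_until(history, 0.80)  # months until 80% utilization
```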
It is important to recognize that extracting topology information
from the network solely from Network Elements and IGPs (IS-IS TE or
OSPF TE) is inadequate for this use case. First, IGPs only expose
the active components (e.g. vertices of the SPF tree) of the IP
network, and are not aware of "hidden" or inactive interfaces within
IP/MPLS network elements, such as unused line cards or ports. IGPs
are also not aware of components that reside at a layer lower than
IP/MPLS, such as Ethernet switches or Optical transport systems.
Second, IGPs only convey SRLG information that has been first applied
within a router's configuration, either manually or programmatically.
As mentioned previously, this SRLG information in the IP/MPLS network
is subject to being infrequently updated and, as a result, may
inadequately account for critical, underlying network fate-sharing
properties that are necessary to properly design resilient circuits
and/or paths through the network.
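The resiliency check that accurate SRLG data enables can be sketched
as follows; the link names and SRLG identifiers are illustrative:

```python
# Illustrative sketch: verify that a primary and backup path share no
# SRLG. This check is only meaningful when lower-layer SRLG data is
# accurate and current, which is the point of the paragraph above.

def srlgs_of(path, link_srlgs):
    """Union of SRLG identifiers across all links of a path."""
    ids = set()
    for link in path:
        ids.update(link_srlgs.get(link, ()))
    return ids

def srlg_disjoint(primary, backup, link_srlgs):
    return not (srlgs_of(primary, link_srlgs) &
                srlgs_of(backup, link_srlgs))

# Two IP links that ride the same conduit share SRLG 100.
link_srlgs = {"A-B": {100}, "A-C": {100}, "A-D": {200}}
disjoint = srlg_disjoint(["A-B"], ["A-D"], link_srlgs)
shared = srlg_disjoint(["A-B"], ["A-C"], link_srlgs)
```

If the SRLG database were stale and omitted SRLG 100 from link A-C,
the second check would wrongly report the paths as disjoint, which is
exactly the failure mode described above.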
Once the Topology Manager has assembled a normalized view of the
topology and metadata associated with each component of the topology,
it can expose this information via its northbound API to the Capacity
Planning and Traffic Engineering applications. The applications only
require generalized information about the nodes and links that
comprise the network, e.g.: links used to interconnect nodes, SRLG
information (from the underlying network), utilization rates of each
link over some period of time, etc.
Note that any client/application that understands the Topology
Manager's northbound API and its topology information model can
communicate with the Topology Manager. Note also that topology
information may be provided by Network Elements from different
vendors, which may use different information models. If a Client
wanted to retrieve topology information directly from Network
Elements, it would have to translate and normalize these different
representations.
A Traffic Engineering application may run a variety of CSPF
algorithms that create a list of TE tunnels that globally optimize
the packing efficiency of physical links throughout the network. The
TE tunnels are then programmed into the network either directly or
through a controller. Programming of TE tunnels into the network is
outside the scope of this document.
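The core of a CSPF computation can be sketched as a shortest-path
search that prunes links lacking the tunnel's bandwidth demand. The
graph, metrics and bandwidth figures below are made up for the
example:

```python
import heapq

# Illustrative CSPF sketch: shortest path by IGP metric after pruning
# links with insufficient available bandwidth, as a TE application
# might do when placing one tunnel.

def cspf(graph, src, dst, demand):
    """graph: {node: [(neighbor, metric, avail_bw), ...]}."""
    heap = [(0, src, [src])]
    seen = set()
    while heap:
        cost, node, path = heapq.heappop(heap)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nbr, metric, bw in graph.get(node, []):
            # Constraint: only consider links that can carry the demand.
            if bw >= demand and nbr not in seen:
                heapq.heappush(heap, (cost + metric, nbr, path + [nbr]))
    return None  # no feasible path

graph = {
    "A": [("B", 10, 5), ("C", 10, 20)],
    "B": [("D", 10, 5)],
    "C": [("D", 20, 20)],
}
result = cspf(graph, "A", "D", demand=10)  # A-B-D lacks bandwidth
```

Here the metric-shortest path A-B-D is rejected for lack of
bandwidth, so the tunnel is placed on A-C-D instead.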
A Capacity Planning application may run a variety of algorithms the
result of which is a list of new inventory that is required for
purchase or redeployment, as well as associated work orders for field
technicians to augment the network for expected growth.
4.1.2. Services Provisioning
Beyond Capacity Planning and Traffic Engineering applications, having
a normalized view of just the IP/MPLS layer of the network is still
very important for other mission critical applications, such as
Security Auditing and IP/MPLS Services Provisioning (e.g.: L2VPN,
L3VPN, etc.). With respect to the latter, these types of
applications should not need a detailed understanding of, for
example, SRLG information, assuming that the underlying MPLS Tunnel
LSP's are known to account for the resiliency requirements of all
services that ride over them. Nonetheless, for both types of
applications it is critical to have a common and up-to-date
normalized view of the IP/MPLS network to, for example, instantiate
new services at optimal locations in the network, or to validate
proper ACL configuration to protect associated routing, signaling and
management protocols on the network.
A VPN Service Provisioning application must perform the following
resource selection operations:
o Identify Service PE's in all markets/cities where the customer has
  indicated they want service.

o Identify one or more existing Service PE's in each city with
  connectivity to the access network(s), e.g.: SONET/TDM, used to
  deliver the PE-CE tail circuits to the Service PE.

o Determine that the Service PE has available capacity on both the
  PE-CE access interface and its uplinks to terminate the tail
  circuit.

The VPN Provisioning application would iteratively query the Topology
Manager to narrow down the scope of resources to the set of Service
PEs with the appropriate uplink bandwidth and access circuit
capability plus capacity to realize the requested VPN service. Once
the VPN Provisioning application has a candidate list of resources,
it requests programming of the Service PE's and associated access
circuits to set up a customer's VPN service in the network.
Programming of Service PEs is outside the scope of this document.
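The iterative narrowing of candidate Service PEs could be sketched as
a sequence of filters; the PE records and capacity figures below are
illustrative assumptions:

```python
# Illustrative sketch of the resource-selection queries a VPN
# Provisioning application might issue against the Topology Manager:
# narrow candidate Service PEs by city, access technology, and
# spare capacity. All records and field names are hypothetical.

def select_pes(pes, city, access, port_bw, uplink_headroom):
    candidates = [p for p in pes if p["city"] == city]
    candidates = [p for p in candidates if access in p["access"]]
    return [p for p in candidates
            if p["free_port_bw"] >= port_bw
            and p["uplink_headroom"] >= uplink_headroom]

pes = [
    {"name": "pe1", "city": "denver", "access": {"sonet", "eth"},
     "free_port_bw": 155, "uplink_headroom": 1000},
    {"name": "pe2", "city": "denver", "access": {"eth"},
     "free_port_bw": 1000, "uplink_headroom": 5000},
]
chosen = select_pes(pes, "denver", "sonet",
                    port_bw=155, uplink_headroom=155)
```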
4.1.3. Troubleshooting & Monitoring
Once the Topology Manager has a normalized view of several layers of Once the Topology Manager has a normalized view of several layers of
the network, it's then possible to more easily expose a richer set of the network, it can expose a rich set of data to network operators
data to network operators when performing diagnosis, troubleshooting who are performing diagnosis, troubleshooting and repairs on the
and repairs on the network. Specifically, there is a need to network. Specifically, there is a need to (rapidly) assemble a
(rapidly) assemble a current, accurate and comprehensive network current, accurate and comprehensive network diagram of a L2VPN or
diagram of a L2VPN or L3VPN service for a particular customer when L3VPN service for a particular customer when either: a) attempting to
either: a) attempting to diagnose a service fault/error; or, b) diagnose a service fault/error; or, b) attempting to augment the
attempting to augment the customer's existing service. Information customer's existing service. Information that may be assembled into
that may be assembled into a comprehensive picture could include a comprehensive picture could include physical and logical components
physical and logical components related specifically to that related specifically to that customer's service, i.e.: VLAN's or
customer's service, i.e.: VLAN's or channels used by the PE-CE access channels used by the PE-CE access circuits, CoS policies, historical
circuits, CoS policies, historical PE-CE circuit utilization, etc. PE-CE circuit utilization, etc. The Topology Manager would assemble
The Topology Manager would assemble this information, on behalf of this information, on behalf of each of the network elements and other
each of the network elements and other data sources in and associated data sources in and associated with the network, and would present
with the network, and could present this information in a vendor- this information in a vendor-independent data model to applications
independent data model to applications to be displayed allowing the to be displayed allowing the operator (or, potentially, the customer
operator (or, potentially, the customer through a SP's Web portal) to through a SP's Web portal) to visualize the information.
visualize the information.
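The assembly step described above can be sketched in code. The data model and the `assemble_service_view` helper below are purely illustrative assumptions; this document does not define a concrete Topology Manager interface, only the need for one.

```python
# Hypothetical sketch: assembling a per-customer service view from data a
# Topology Manager might expose. The dict-based data model is an assumption
# for illustration, not a defined interface.

def assemble_service_view(topology, customer_id):
    """Collect the physical and logical components (PE-CE access circuits,
    VLANs, CoS policies) associated with one customer's VPN service."""
    view = {"access_circuits": [], "cos_policies": []}
    for circuit in topology["access_circuits"]:
        if circuit["customer"] == customer_id:
            view["access_circuits"].append(circuit)
            view["cos_policies"].extend(circuit.get("cos", []))
    return view

# Example normalized data coalesced from network elements and other sources.
topology = {
    "access_circuits": [
        {"id": "pe1-ce1", "customer": "acme", "vlan": 100, "cos": ["gold"]},
        {"id": "pe2-ce7", "customer": "other", "vlan": 200, "cos": []},
    ]
}

view = assemble_service_view(topology, "acme")
```

A real Topology Manager would also fold in historical circuit utilization and fault state before handing the view to a visualization application.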
4.2. Path Computation Element (PCE) 4.2. Virtual Network Topology Manager (VNTM)
The Virtual Network Topology Manager (VNTM) is in charge of managing
the Virtual Network Topology (VNT), as defined in [RFC5623]. The VNT
is defined in [RFC5212] as a set of one or more LSPs in one or more
lower-layer networks that provide information for efficient path
handling in an upper-layer network.
Maintaining the virtual topology is a complicated task. The VNTM has
to decide which nodes to interconnect in the lower layer to fulfill
the resource requirements of the upper layer. This means creating a
topology that copes with all demands of the upper layer without
wasting resources in the underlying network. Once the decision is
made, actions have to be taken in the network elements of both layers
so that the new LSPs are provisioned. Moreover, the VNTM has to
release unwanted resources so that they become available in the
lower-layer network for other uses.
The VNTM does not have to solve all of the above problems in all
scenarios. In the PCE-VNTM cooperation model defined in [RFC5623],
the PCE computes paths in the higher layer and, when there are not
enough resources in the VNT, requests a new path in the VNT from the
VNTM. The VNTM checks the PCE request against internal policies to
decide whether the request can be taken into account. The VNTM then
requests the egress node in the upper layer to set up the path in the
lower layer. However, the VNTM can also actively modify the VNT,
based on policies and network status, without waiting for an explicit
PCE request.
Regarding the provisioning phase, the VNTM may have to communicate
directly with an NMS to set up the connection [RFC5623], or it can
delegate this function to the provisioning manager
[I-D.farrkingel-pce-abno-architecture].
The aim of this document is not to categorize all implementation
options for the VNTM, but to show that it needs to retrieve
topological information in order to perform its functions. The VNTM
may require the topologies of the lower and/or upper layers, and even
the inter-layer relationships between upper- and lower-layer nodes,
to decide on the optimal VNT.
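The VNTM decision step described above can be sketched as follows. The data model and thresholds are illustrative assumptions, not taken from [RFC5623]: each demand and VNT link is modeled as a simple bandwidth figure between a node pair.

```python
# Hypothetical sketch of a VNTM decision step: compare upper-layer demands
# against existing VNT link capacity, and decide which lower-layer LSPs to
# add (new or augmented capacity) and which to release (unwanted resources).

def update_vnt(vnt_links, demands):
    """Return (links_to_add, links_to_release) for the lower layer."""
    to_add, to_release = [], []
    demanded = {(d["src"], d["dst"]): d["bw"] for d in demands}
    for link in vnt_links:
        need = demanded.pop((link["src"], link["dst"]), 0)
        if need == 0:
            to_release.append(link)            # no demand: free the resource
        elif need > link["bw"]:
            to_add.append({"src": link["src"], "dst": link["dst"],
                           "bw": need - link["bw"]})  # augment capacity
    # Demands with no existing VNT link need a brand-new lower-layer LSP.
    for (src, dst), bw in demanded.items():
        to_add.append({"src": src, "dst": dst, "bw": bw})
    return to_add, to_release

vnt = [{"src": "A", "dst": "B", "bw": 10}, {"src": "A", "dst": "C", "bw": 5}]
to_add, to_release = update_vnt(vnt, [{"src": "A", "dst": "B", "bw": 15}])
```

A production VNTM would of course apply operator policies before acting on either list, and would delegate the actual signaling to an NMS or provisioning manager.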
4.3. Path Computation Element (PCE)
As described in [RFC4655] a PCE can be used to compute MPLS-TE paths As described in [RFC4655] a PCE can be used to compute MPLS-TE paths
within a "domain" (such as an IGP area) or across multiple domains within a "domain" (such as an IGP area) or across multiple domains
(such as a multi-area AS, or multiple ASes). (such as a multi-area AS, or multiple ASes).
o Within a single area, the PCE offers enhanced computational power o Within a single area, the PCE offers enhanced computational power
that may not be available on individual routers, sophisticated that may not be available on individual routers, sophisticated
policy control and algorithms, and coordination of computation policy control and algorithms, and coordination of computation
across the whole area. across the whole area.
skipping to change at page 17, line 5 skipping to change at page 17, line 5
topology and the Traffic Engineering Database (TED) for the area(s) topology and the Traffic Engineering Database (TED) for the area(s)
it serves, but [RFC4655] does not describe how this is achieved. it serves, but [RFC4655] does not describe how this is achieved.
Many implementations make the PCE a passive participant in the IGP so Many implementations make the PCE a passive participant in the IGP so
that it can learn the latest state of the network, but this may be that it can learn the latest state of the network, but this may be
sub-optimal when the network is subject to a high degree of churn, or sub-optimal when the network is subject to a high degree of churn, or
when the PCE is responsible for multiple areas. when the PCE is responsible for multiple areas.
The following figure shows how a PCE can get its TED information The following figure shows how a PCE can get its TED information
using a Topology Server. using a Topology Server.
+----------+ +----------+
| ----- | TED synchronization via Topology API | ----- | TED synchronization via Topology API
| | TED |<-+----------------------------------+ | | TED |<-+----------------------------------+
| ----- | | | ----- | |
| | | | | | | |
| | | | | | | |
| v | | | v | |
| ----- | | | ----- | |
| | PCE | | | | | PCE | | |
| ----- | | | ----- | |
+----------+ | +----------+ |
^ | ^ |
| Request/ | | Request/ |
| Response | | Response |
v | v |
Service +----------+ Signaling +----------+ +----------+ Service +----------+ Signaling +----------+ +----------+
Request | Head-End | Protocol | Adjacent | | Topology | Request | Head-End | Protocol | Adjacent | | Topology |
-------->| Node |<------------>| Node | | Manager | -------->| Node |<------------>| Node | | Manager |
+----------+ +----------+ +----------+ +----------+ +----------+ +----------+
Figure 4: Topology use case: Path Computation Element Figure 4: Topology use case: Path Computation Element
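The TED synchronization shown in Figure 4 can be sketched as an incremental update stream applied to a local database. The update format below is a hypothetical assumption; neither [RFC4655] nor this document defines one.

```python
# Illustrative sketch: keeping a PCE's local TED in sync with updates pushed
# by a Topology Manager over a Topology API. The "op"/"attrs" update schema
# is an assumption for illustration only.

class Ted:
    """A minimal Traffic Engineering Database keyed by link endpoints."""

    def __init__(self):
        self.links = {}  # (src, dst) -> TE attributes

    def apply_update(self, update):
        key = (update["src"], update["dst"])
        if update["op"] == "remove":
            self.links.pop(key, None)
        else:  # "add" or "modify" both install the latest attributes
            self.links[key] = update["attrs"]

ted = Ted()
ted.apply_update({"op": "add", "src": "R1", "dst": "R2",
                  "attrs": {"te_metric": 10, "avail_bw": 100}})
# A later update reflects bandwidth consumed by newly signaled LSPs.
ted.apply_update({"op": "modify", "src": "R1", "dst": "R2",
                  "attrs": {"te_metric": 10, "avail_bw": 60}})
```

Receiving pre-coalesced updates this way avoids the churn problems of passive IGP participation noted above, particularly when one PCE serves multiple areas.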
4.3. ALTO Server 4.4. ALTO Server
An ALTO Server [RFC5693] is an entity that generates an abstracted An ALTO Server [RFC5693] is an entity that generates an abstracted
network topology and provides it to network-aware applications over a network topology and provides it to network-aware applications over a
web service based API. web service based API. Example applications are p2p clients or
trackers, or CDNs. The abstracted network topology comes in the form
Example applications are Content Delivery Network (CDNs), peer-to- of two maps: a Network Map that specifies allocation of prefixes to
peer clouds/swarms, as well as inter-layer optimization cases such as PIDs, and a Cost Map that specifies the cost between PIDs listed in
mobile network willing to understand the congestion level of the Network Map. For more details, see [I-D.ietf-alto-protocol].
underneath backhaul infrastructure.
ALTO mechanisms are based on "Maps" that contain an abstracted
version of the topology. Such Maps are built by the ALTO server or
made available to the ALTO server by a Topology Manager. The content
of Maps are multiple: a mapping list where each prefix is mapped into
a Partition Identifier (called PID) and the cost matrix (representing
the distance) between PIDs. For more details, see
[I-D.ietf-alto-protocol].
ALTO abstract network topologies (represented in the Maps) can be
generated in multiple ways among which the Topology Manager provides
the abstracted topology to the ALTO server so that the ALTO server is
capable of serving applications. ALTO Maps may represent the whole
network infrastructure and are not limited to a specific layer.
E.g.: the cost matrix (called the Cost Map) can represent the IP/MPLS
layer path costs as well as integrating the optical cost.
The generation would typically be based on policies and rules set by
the operator. All the relevant information such as Nodes, Links,
Prefixes, TE paths (LSPs/Tunnels), etc. is required so for the ALTO
server to have an exhaustive and consistent view of the
infrastructure.
Typically, a Topology Manager would aggregate all the necessary
information and would produce ALTO maps. Mechanisms through which a
Topology Manager acquires topology information include interaction
with the IGP and the use of BGP-LS extension.
The mechanism defined in this document provides a single interface ALTO abstract network topologies can be auto-generated from the
through which an ALTO Server can retrieve all the necessary prefix physical topology of the underlying network. The generation would
and network topology data from the underlying network (i.e.: the typically be based on policies and rules set by the operator. Both
Topology Manager). Note an ALTO Server can use other mechanisms to prefix and TE data are required: prefix data is required to generate
ALTO Network Maps, TE (topology) data is required to generate ALTO
Cost Maps. Prefix data is carried and originated in BGP, TE data is
originated and carried in an IGP. The mechanism defined in this
document provides a single interface through which an ALTO Server can
retrieve all the necessary prefix and network topology data from the
underlying network. Note an ALTO Server can use other mechanisms to
get network data, for example, peering with multiple IGP and BGP get network data, for example, peering with multiple IGP and BGP
Speakers. Speakers.
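The map generation described above can be sketched in code. The grouping of prefixes into PIDs and the cost values are illustrative assumptions; see [I-D.ietf-alto-protocol] for the actual map formats.

```python
# Hypothetical sketch: deriving ALTO-style maps from topology data obtained
# via a Topology Manager. Nodes are mapped 1:1 to PIDs here for simplicity.

def build_maps(prefixes, te_costs):
    """prefixes: {prefix: pid} (from BGP data);
    te_costs: {(src_pid, dst_pid): cost} (from IGP TE data)."""
    network_map = {}                 # PID -> list of prefixes
    for prefix, pid in prefixes.items():
        network_map.setdefault(pid, []).append(prefix)
    cost_map = {}                    # src PID -> {dst PID -> cost}
    for (src, dst), cost in te_costs.items():
        cost_map.setdefault(src, {})[dst] = cost
    return network_map, cost_map

network_map, cost_map = build_maps(
    {"192.0.2.0/25": "pid1", "192.0.2.128/25": "pid1",
     "198.51.100.0/24": "pid2"},
    {("pid1", "pid2"): 10, ("pid2", "pid1"): 10},
)
```

In practice the operator's policies would control the abstraction, e.g. how many prefixes collapse into one PID and whether costs reflect only IGP metrics or also lower-layer (optical) costs.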
The following figure shows how an ALTO Server can get network The following figure shows how an ALTO Server can get network
topology information from the underlying network using the Topology topology information from the underlying network using the Topology
API. API.
+--------+ +--------+
| Client |<--+ | Client |<--+
+--------+ | +--------+ |
| ALTO +--------+ +----------+ | ALTO +--------+ +----------+
+--------+ | Protocol | ALTO | Network Topology | Topology | +--------+ | Protocol | ALTO | Network Topology | Topology |
| Client |<--+------------| Server |<-----------------| Manager | | Client |<--+------------| Server |<-----------------| Manager |
+--------+ | | | | | +--------+ | | | | |
| +--------+ +----------+ | +--------+ +----------+
+--------+ | +--------+ |
| Client |<--+ | Client |<--+
+--------+ +--------+
Figure 5: Topology use case: ALTO Server Figure 5: Topology use case: ALTO Server
5. Acknowledgements 5. Acknowledgements
The authors wish to thank Alia Atlas, Dave Ward and Hannes Gredler The authors wish to thank Alia Atlas, Dave Ward, Hannes Gredler,
for their valuable contributions and feedback to this draft. Stefano Previdi for their valuable contributions and feedback to this
draft.
6. IANA Considerations 6. IANA Considerations
This memo includes no request to IANA. This memo includes no request to IANA.
7. Security Considerations 7. Security Considerations
At the moment, the Use Cases covered in this document apply At the moment, the Use Cases covered in this document apply
specifically to a single Service Provider or Enterprise network. specifically to a single Service Provider or Enterprise network.
Therefore, network administrations should take appropriate Therefore, network administrations should take appropriate
skipping to change at page 19, line 32 skipping to change at page 19, line 7
8. References 8. References
8.1. Normative References 8.1. Normative References
[RFC2119] Bradner, S., "Key words for use in RFCs to Indicate [RFC2119] Bradner, S., "Key words for use in RFCs to Indicate
Requirement Levels", BCP 14, RFC 2119, March 1997. Requirement Levels", BCP 14, RFC 2119, March 1997.
8.2. Informative References 8.2. Informative References
[I-D.farrkingel-pce-abno-architecture]
King, D. and A. Farrel, "A PCE-based Architecture for
Application-based Network Operations", draft-farrkingel-
pce-abno-architecture-05 (work in progress), July 2013.
[I-D.ietf-alto-protocol] [I-D.ietf-alto-protocol]
Alimi, R., Penno, R., and Y. Yang, "ALTO Protocol", Alimi, R., Penno, R., and Y. Yang, "ALTO Protocol", draft-
draft-ietf-alto-protocol-13 (work in progress), ietf-alto-protocol-17 (work in progress), July 2013.
September 2012.
[I-D.ietf-idr-ls-distribution] [I-D.ietf-idr-ls-distribution]
Gredler, H., Medved, J., Previdi, S., Farrel, A., and S. Gredler, H., Medved, J., Previdi, S., Farrel, A., and S.
Ray, "North-Bound Distribution of Link-State and TE Ray, "North-Bound Distribution of Link-State and TE
Information using BGP", draft-ietf-idr-ls-distribution-01 Information using BGP", draft-ietf-idr-ls-distribution-03
(work in progress), October 2012. (work in progress), May 2013.
[I-D.ietf-isis-te-metric-extensions]
Previdi, S., Giacalone, S., Ward, D., Drake, J., Atlas,
A., and C. Filsfils, "IS-IS Traffic Engineering (TE)
Metric Extensions", draft-ietf-isis-te-metric-
extensions-00 (work in progress), June 2013.
[I-D.ietf-ospf-te-metric-extensions] [I-D.ietf-ospf-te-metric-extensions]
Giacalone, S., Ward, D., Drake, J., Atlas, A., and S. Giacalone, S., Ward, D., Drake, J., Atlas, A., and S.
Previdi, "OSPF Traffic Engineering (TE) Metric Previdi, "OSPF Traffic Engineering (TE) Metric
Extensions", draft-ietf-ospf-te-metric-extensions-02 (work Extensions", draft-ietf-ospf-te-metric-extensions-04 (work
in progress), December 2012. in progress), June 2013.
[I-D.ietf-pce-stateful-pce] [I-D.ietf-pce-stateful-pce]
Crabbe, E., Medved, J., Minei, I., and R. Varga, "PCEP Crabbe, E., Medved, J., Minei, I., and R. Varga, "PCEP
Extensions for Stateful PCE", Extensions for Stateful PCE", draft-ietf-pce-stateful-
draft-ietf-pce-stateful-pce-02 (work in progress), pce-05 (work in progress), July 2013.
October 2012.
[I-D.previdi-isis-te-metric-extensions]
Previdi, S., Giacalone, S., Ward, D., Drake, J., Atlas,
A., and C. Filsfils, "IS-IS Traffic Engineering (TE)
Metric Extensions",
draft-previdi-isis-te-metric-extensions-02 (work in
progress), October 2012.
[RFC4655] Farrel, A., Vasseur, J., and J. Ash, "A Path Computation [RFC4655] Farrel, A., Vasseur, J., and J. Ash, "A Path Computation
Element (PCE)-Based Architecture", RFC 4655, August 2006. Element (PCE)-Based Architecture", RFC 4655, August 2006.
[RFC5693] Seedorf, J. and E. Burger, "Application-Layer Traffic [RFC5212] Shiomoto, K., Papadimitriou, D., Le Roux, JL., Vigoureux,
Optimization (ALTO) Problem Statement", RFC 5693, M., and D. Brungard, "Requirements for GMPLS-Based Multi-
October 2009. Region and Multi-Layer Networks (MRN/MLN)", RFC 5212, July
2008.
Authors' Addresses [RFC5623] Oki, E., Takeda, T., Le Roux, JL., and A. Farrel,
"Framework for PCE-Based Inter-Layer MPLS and GMPLS
Traffic Engineering", RFC 5623, September 2009.
Shane Amante [RFC5693] Seedorf, J. and E. Burger, "Application-Layer Traffic
Level 3 Communications, Inc. Optimization (ALTO) Problem Statement", RFC 5693, October
1025 Eldorado Blvd 2009.
Broomfield, CO 80021
USA
Email: shane@level3.net Authors' Addresses
Jan Medved Jan Medved
Cisco Systems, Inc. Cisco Systems, Inc.
170 West Tasman Drive 170 West Tasman Drive
San Jose, CA 95134 San Jose, CA 95134
USA USA
Email: jmedved@cisco.com Email: jmedved@cisco.com
Stefano Previdi Stefano Previdi
Cisco Systems, Inc. Cisco Systems, Inc.
Via Del Serafico 200 170, West Tasman Drive
Rome 00144 San Jose, CA 95134
IT USA
Email: sprevidi@cisco.com Email: sprevidi@cisco.com
Thomas D. Nadeau
Juniper Networks
1194 N. Mathilda Ave.
Sunnyvale, CA 94089
USA
Email: tnadeau@juniper.net Victor Lopez
Telefonica I+D
c/ Don Ramon de la Cruz 84
Madrid 28006
Spain
Email: vlopez@tid.es
Shane Amante