2 Internet Engineering Task Force D. King 3 Internet-Draft Old Dog Consulting 4 Intended status: Informational A. Farrel 5 Expires: 28 July 2015 Juniper Networks 6 28 January 2015 8 A PCE-based Architecture for Application-based Network Operations 10 draft-farrkingel-pce-abno-architecture-16.txt 12 Abstract 14 Services such as content distribution, distributed databases, or 15 inter-data center connectivity place a set of new requirements on the 16 operation of networks. They need on-demand and application-specific 17 reservation of network connectivity, reliability, and resources (such 18 as bandwidth) in a variety of network applications (such as point-to- 19 point connectivity, network virtualization, or mobile back-haul) and 20 in a range of network technologies from packet (IP/MPLS) down to 21 optical. Additionally, existing services or capabilities like 22 pseudowire connectivity or global concurrent optimization can benefit 23 from an operational scheme that considers the application needs and 24 the network status. An environment that operates to meet these types 25 of requirement is said to have Application-Based Network Operations 26 (ABNO). 28 ABNO brings together many existing technologies for gathering 29 information about the resources available in a network, for 30 consideration of topologies and how those topologies map to 31 underlying network resources, for requesting path computation, and 32 for provisioning or reserving network resources. Thus, ABNO may be 33 seen as the use of a toolbox of existing components enhanced with a 34 few new elements. The key component within an ABNO is the Path 35 Computation Element (PCE), which can be used for computing paths and 36 is further extended to provide policy enforcement capabilities for 37 ABNO. 39 This document describes an architecture and framework for ABNO 40 showing how these components fit together. It provides a cookbook of 41 existing technologies to satisfy the architecture and meet the needs 42 of the applications.
44 Status of this Memo 46 This Internet-Draft is submitted in full conformance with the 47 provisions of BCP 78 and BCP 79. 49 Internet-Drafts are working documents of the Internet Engineering 50 Task Force (IETF). Note that other groups may also distribute 51 working documents as Internet-Drafts. The list of current Internet- 52 Drafts is at http://datatracker.ietf.org/drafts/current/. 54 Internet-Drafts are draft documents valid for a maximum of six months 55 and may be updated, replaced, or obsoleted by other documents at any 56 time. It is inappropriate to use Internet-Drafts as reference 57 material or to cite them other than as "work in progress." 59 Copyright Notice 61 Copyright (c) 2015 IETF Trust and the persons identified as the 62 document authors. All rights reserved. 64 This document is subject to BCP 78 and the IETF Trust's Legal 65 Provisions Relating to IETF Documents 66 (http://trustee.ietf.org/license-info) in effect on the date of 67 publication of this document. Please review these documents 68 carefully, as they describe your rights and restrictions with respect 69 to this document. Code Components extracted from this document must 70 include Simplified BSD License text as described in Section 4.e of 71 the Trust Legal Provisions and are provided without warranty as 72 described in the Simplified BSD License. 74 Table of Contents 76 1. Introduction ................................................ 4 77 1.1 Scope ..................................................... 5 78 2. Application-based Network Operations (ABNO) .................. 5 79 2.1 Assumptions and Requirements .............................. 5 80 2.2 Implementation of the Architecture ........................ 6 81 2.3 Generic Architecture ...................................... 8 82 2.3.1 ABNO Components ........................................ 9 83 2.3.2 ABNO Functional Interfaces ............................ 15 84 3. ABNO Use Cases .............................................. 23 85 3.1 Inter-AS Connectivity ..................................... 24 86 3.2 Multi-Layer Networking .................................... 29 87 3.2.1 Data Center Interconnection across Multi-Layer Networks 33 88 3.3 Make-Before-Break ......................................... 36 89 3.3.1 Make-Before-Break for Re-optimization ................. 36 90 3.3.2 Make-Before-Break for Restoration ..................... 37 91 3.3.3 Make-Before-Break for Path Test and Selection ......... 38 92 3.4 Global Concurrent Optimization ............................ 40 93 3.4.1 Use Case: GCO with MPLS LSPs .......................... 41 94 3.5 Adaptive Network Management (ANM) ......................... 43 95 3.5.1. ANM Trigger ........................................ 44 96 3.5.2. Processing request and GCO computation ............. 44 97 3.5.3. Automated Provisioning Process ..................... 45 98 3.6 Pseudowire Operations and Management ...................... 46 99 3.6.1 Multi-Segment Pseudowires ........................... 46 100 3.6.2 Path-Diverse Pseudowires ............................ 48 101 3.6.3 Path-Diverse Multi-Segment Pseudowires .............. 49 102 3.6.4 Pseudowire Segment Protection ....................... 49 103 3.6.5 Applicability of ABNO to Pseudowires ................ 50 104 3.7 Cross-Stratum Optimization ................................ 51 105 3.7.1. Data Center Network Operation ..................... 51 106 3.7.2. Application of the ABNO Architecture .............. 
53 107 3.8 ALTO Server ............................................... 55 108 3.9 Other Potential Use Cases ................................. 58 109 3.9.1 Traffic Grooming and Regrooming ..................... 58 110 3.9.2 Bandwidth Scheduling ................................ 58 111 4. Survivability and Redundancy within the ABNO Architecture ... 58 112 5. Security Considerations ..................................... 59 113 6. Manageability Considerations ................................ 60 114 7. IANA Considerations ......................................... 60 115 8. Acknowledgements ............................................ 60 116 9. References .................................................. 61 117 9.1 Informative References ................................... 61 118 10. Contributors' Addresses .................................... 65 119 11. Authors' Addresses ......................................... 66 120 A. Undefined Interfaces ........................................ 66 121 B. Implementation Status ...................................... 67 123 1. Introduction 125 Networks today integrate multiple technologies allowing network 126 infrastructure to deliver a variety of services to support the 127 different characteristics and demands of applications. There is an 128 increasing demand to make the network responsive to service requests 129 issued directly from the application layer. This differs from the 130 established model where services in the network are delivered in 131 response to management commands driven by a human user. 133 These application-driven requests and the services they establish 134 place a set of new requirements on the operation of networks. They 135 need on-demand and application-specific reservation of network 136 connectivity, reliability, and resources (such as bandwidth) in a 137 variety of network applications (such as point-to-point connectivity, 138 network virtualization, or mobile back-haul) and in a range of 139 network technologies from packet (IP/MPLS) down to optical. An 140 environment that operates to meet this type of application-aware 141 requirement is said to have Application-Based Network Operations 142 (ABNO). 144 The Path Computation Element (PCE) [RFC4655] was developed to provide 145 path computation services for GMPLS and MPLS controlled networks. 146 The applicability of PCE can be extended to provide path computation 147 and policy enforcement capabilities for ABNO platforms and services. 149 ABNO can provide the following types of service to applications by 150 coordinating the components that operate and manage the network: 152 - Optimization of traffic flows between applications to create an 153 overlay network for communication in use cases such as file 154 sharing, data caching or mirroring, media streaming, or real-time 155 communications, as described for Application Layer Traffic Optimization 156 (ALTO) [RFC5693]. 158 - Remote control of network components allowing coordinated 159 programming of network resources through such techniques as 160 Forwarding and Control Element Separation (ForCES) [RFC3746], 161 OpenFlow [ONF], and the Interface to the Routing System (I2RS) 162 [I-D.ietf-i2rs-architecture], or via the control plane 163 coordinated using the PCE communication Protocol (PCEP) 164 [I-D.ietf-pce-pce-initiated-lsp]. 166 - Interconnection of Content Delivery Networks (CDNi) [RFC6707] 167 through the establishment and resizing of connections between 168 content distribution networks.
Similarly, ABNO can coordinate 169 inter-data center connections. 171 - Network resource coordination to automate provisioning, facilitate 172 traffic grooming and regrooming, bandwidth scheduling, and global 173 concurrent optimization using PCEP [RFC5557]. 175 - Virtual Private Network (VPN) planning in support of the deployment of 176 new VPN customers and to facilitate inter-data center connectivity. 178 This document outlines the architecture and use cases for ABNO, and 179 shows how the ABNO architecture can be used for coordinating control 180 system and application requests to compute paths, enforce policies, 181 and manage network resources for the benefit of the applications that 182 use the network. The examination of the use cases shows the ABNO 183 architecture as a toolkit comprising many existing components and 184 protocols, and so this document takes the form of a cookbook. ABNO is 185 compatible with pre-existing Network Management System (NMS) and 186 Operations Support System (OSS) deployments as well as with more 187 recent developments in programmatic networks such as Software Defined 188 Networking (SDN). 190 1.1 Scope 192 This document describes a toolkit. It shows how existing functional 193 components described in a large number of separate documents can be 194 brought together within a single architecture to provide the function 195 necessary for ABNO. 197 In many cases, existing protocols are known to be good enough or 198 almost good enough to satisfy the requirements of interfaces between 199 the components. In these cases, the protocols are called out as 200 suitable candidates for use within an implementation of ABNO. 202 In other cases, it is clear that further work will be required, and in 203 those cases a pointer to on-going work that may be of use is 204 provided. Where there is no current work that can be identified by 205 the authors, a short description of the missing interface protocol is 206 given in Appendix A. 208 Thus, this document may be seen as providing an applicability 209 statement for existing protocols, and guidance for developers of new 210 protocols or protocol extensions. 212 2. Application-Based Network Operations (ABNO) 214 2.1 Assumptions and Requirements 216 The principal assumption underlying this document is that existing 217 technologies should be used where they are adequate for the task. 218 Furthermore, when an existing technology is almost sufficient, it is 219 assumed to be preferable to make minor extensions rather than to 220 invent a whole new technology. 222 Note that this document describes an architecture. Functional 223 components are architectural concepts and have distinct and clear 224 responsibilities. Pairs of functional components interact over 225 functional interfaces that are, themselves, architectural concepts. 227 2.2 Implementation of the Architecture 229 It needs to be strongly emphasized that this document describes a 230 functional architecture. It is not a software design. Thus, it is 231 not intended that this architecture constrain implementations. 232 However, the separation of the ABNO functions into separate 233 functional components with clear interfaces between them enables 234 implementations to choose which features to include and allows 235 different functions to be distributed across distinct processes or 236 even processors.
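As a non-normative illustration of this separation of concerns, the sketch below (written in Python; all interface and class names are invented for the example and are not defined by this architecture) shows how two functional components might expose clear interfaces that an implementation is free to bundle into a single software component or to place behind external protocols. The list that follows discusses these implementation choices in more detail.

   # Illustrative only: hypothetical interface names, not part of
   # the ABNO architecture itself.
   from abc import ABC, abstractmethod

   class PathComputer(ABC):
       """Functional interface that a PCE component might expose."""
       @abstractmethod
       def compute_path(self, src: str, dst: str) -> list: ...

   class Provisioner(ABC):
       """Functional interface that a Provisioning Manager might expose."""
       @abstractmethod
       def provision(self, path: list) -> None: ...

   class BundledActiveStatefulPce(PathComputer, Provisioner):
       """One software component implementing two functional
       components, with only the external interfaces exposed."""
       def compute_path(self, src, dst):
           return [src, dst]                # placeholder computation
       def provision(self, path):
           print("programming", path)       # placeholder provisioning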
238 An implementation of this architecture may make several important 239 decisions about the functional components: 241 - Multiple functional components may be grouped together into one 242 software component such that all of the functions are bundled 243 and only the external interfaces are exposed. This may have 244 distinct advantages for fast paths within the software, and can 245 reduce inter-process communication overhead. 247 For example, an active, stateful PCE could be implemented as a 248 single server combining the ABNO components of the PCE, the 249 Traffic Engineering Database, the Label Switched Path Database, 250 and the Provisioning Manager (see Section 2.3). 252 - The functional components could be distributed across separate 253 processes, processors, or servers so that the interfaces are 254 exposed as external protocols. 256 For example, the OAM Handler (see Section 2.3.1.6) could be 257 presented on a dedicated server in the network that consumes all 258 status reports from the network, aggregates them, correlates them, 259 and then dispatches notifications to other servers that need to 260 understand what has happened. 262 - There could be multiple instances of any or each of the 263 components. That is, the function of a functional component could 264 be partitioned across multiple software components with each 265 responsible for handling a specific feature or a partition of the 266 network. 268 For example, there may be multiple Traffic Engineering Databases 269 (see Section 2.3.1.8) in an implementation with each holding the 270 topology information of a separate network domain (such as a 271 network layer or an Autonomous System). Similarly, there could be 272 multiple PCE instances each processing a different Traffic 273 Engineering Database, and potentially distributed on different 274 servers under different management control. As a final example, 275 there could be multiple ABNO Controllers each with the capability to 276 support different classes of application or application service. 278 The purpose of the description of this architecture is to facilitate 279 different implementations while offering interoperability between 280 implementations of key components and easy interaction with the 281 applications and with the network devices. 283 2.3 Generic ABNO Architecture 285 The following diagram illustrates the ABNO architecture. The 286 components and functional interfaces are discussed in Sections 2.3.1 287 and 2.3.2 respectively. The use cases described in Section 3 show 288 how different components are used selectively to provide different 289 services. It is important to understand that the relationships and 290 interfaces shown between components in this figure are illustrative 291 of some of the common or likely interactions; however, this figure 292 does not preclude other interfaces and relationships as necessary 293 to realize specific functionality. 295 +----------------------------------------------------------------+ 296 | OSS / NMS / Application Service Coordinator | 297 +-+---+---+----+-----------+---------------------------------+---+ 298 | | | | | | 299 ...|...|...|....|...........|.................................|......
300 : | | | | +----+----------------------+ | : 301 : | | | +--+---+ | | +---+---+ : 302 : | | | |Policy+--+ ABNO Controller +------+ | : 303 : | | | |Agent | | +--+ | OAM | : 304 : | | | +-+--+-+ +-+------------+----------+-+ | |Handler| : 305 : | | | | | | | | | | | : 306 : | | +-+---++ | +----+-+ +-------+-------+ | | +---+---+ : 307 : | | |ALTO | +-+ VNTM |--+ | | | | : 308 : | | |Server| +--+-+-+ | | | +--+---+ | : 309 : | | +--+---+ | | | PCE | | | I2RS | | : 310 : | | | +-------+ | | | | |Client| | : 311 : | | | | | | | | +-+--+-+ | : 312 : | +-+----+--+-+ | | | | | | | : 313 : | | Databases +-------:----+ | | | | | : 314 : | | TED | | +-+---+----+----+ | | | | : 315 : | | LSP-DB | | | | | | | | | : 316 : | +-----+--+--+ +-+---------------+-------+-+ | | | : 317 : | | | | Provisioning Manager | | | | : 318 : | | | +-----------------+---+-----+ | | | : 319 ...|.......|..|.................|...|....|...|.......|..|.....|...... 320 | | | | | | | | | | 321 | +-+--+-----------------+--------+-----------+----+ | 322 +----/ Client Network Layer \--+ 323 | +----------------------------------------------------+ | 324 | | | | | | 325 ++------+-------------------------+--------+----------+-----+-+ 326 / Server Network Layers \ 327 +-----------------------------------------------------------------+ 329 Figure 1 : Generic ABNO Architecture 331 2.3.1 ABNO Components 333 This section describes the functional components shown as boxes in 334 Figure 1. The interactions between those components, the functional 335 interfaces, are described in Section 2.3.2. 337 2.3.1.1 NMS and OSS 339 A Network Management System (NMS) or an Operations Support System 340 (OSS) can be used to control, operate, and manage a network. Within 341 the ABNO architecture, an NMS or OSS may issue high-level service 342 requests to the ABNO Controller. It may also establish policies for 343 the activities of the components within the architecture. 345 The NMS and OSS can be consumers of network events reported through 346 the OAM Handler and can act on these reports as well as displaying 347 them to users and raising alarms. The NMS and OSS can also access 348 the Traffic Engineering Database (TED) and Label Switched Path 349 Database (LSP-DB) to show the users the current state of the network. 351 Lastly, the NMS and OSS may utilize a direct programmatic or 352 configuration interface to interact with the network elements within 353 the network. 355 2.3.1.2 Application Service Coordinator 357 In addition to the NMS and OSS, services in the ABNO architecture 358 may be requested by or on behalf of applications. In this context 359 the term "application" is very broad. An application may be a 360 program that runs on a host or server and that provides services to a 361 user, such as a video conferencing application. Alternatively, an 362 application may be a software tool that a user uses to make requests 363 to the network to set up specific services such as end-to-end 364 connections or scheduled bandwidth reservations. Finally, an 365 application may be a sophisticated control system that is responsible 366 for arranging the provision of a more complex network service such as 367 a virtual private network. 369 For the sake of this architecture, all of these concepts of an 370 application are grouped together and are shown as the Application 371 Service Coordinator since they are all in some way responsible for 372 coordinating the activity of the network to provide services for use 373 by applications. 
In practice, the function of the Application 374 Service Coordinator may be distributed across multiple applications 375 or servers. 377 The Application Service Coordinator communicates with the ABNO 378 Controller to request operations on the network. 380 2.3.1.3 ABNO Controller 382 The ABNO Controller is the main gateway to the network for the NMS, 383 OSS, and Application Service Coordinator for the provision of 384 advanced network coordination and functions. The ABNO Controller 385 governs the behavior of the network in response to changing network 386 conditions and in accordance with application network requirements 387 and policies. It is the point of attachment and invokes the right 388 components in the right order. 390 The use cases in Section 3 provide a clearer picture of how the 391 ABNO Controller interacts with the other components in the ABNO 392 architecture. 394 2.3.1.4 Policy Agent 396 Policy plays a very important role in the control and management of 397 the network. It is, therefore, significant in influencing how the 398 key components of the ABNO architecture operate. 400 Figure 1 shows the Policy Agent as a component that is configured 401 by the NMS/OSS with the policies that it applies. The Policy Agent 402 is responsible for propagating those policies into the other 403 components of the system. 405 For simplicity, the figure leaves out many of the policy 406 interactions that will take place. Although the Policy Agent is only 407 shown interacting with the ABNO Controller, the ALTO Server, and the 408 Virtual Network Topology Manager (VNTM), it will also interact with a 409 number of other components and the network elements themselves. For 410 example, the Path Computation Element (PCE) will be a Policy 411 Enforcement Point (PEP) [RFC2753] as described in [RFC5394], and the 412 Interface to the Routing System (I2RS) Client will also be a PEP as 413 noted in [I-D.ietf-i2rs-architecture]. 415 2.3.1.5 Interface to the Routing System (I2RS) Client 417 The Interface to the Routing System (I2RS) is described in 418 [I-D.ietf-i2rs-architecture]. The interface provides a programmatic 419 way to access (for read and write) the routing state and policy 420 information on routers in the network. 422 The I2RS Client is introduced in [I-D.ietf-i2rs-problem-statement]. 423 Its purpose is to manage information requests across a number of 424 routers (each of which runs an I2RS Agent) and coordinate setting or 425 gathering state to/from those routers. 427 2.3.1.6 OAM Handler 429 Operations, Administration, and Maintenance (OAM) plays a critical 430 role in understanding how a network is operating, detecting faults, 431 and taking the necessary action to react to problems in the network. 433 Within the ABNO architecture, the OAM Handler is responsible for 434 receiving notifications (often called alerts) from the network about 435 potential problems, for correlating them, and for triggering other 436 components of the system to take action to preserve or recover the 437 services that were established by the ABNO Controller. The OAM 438 Handler also reports network problems and, in particular, service- 439 affecting problems to the NMS, OSS, and Application Service 440 Coordinator. 442 Additionally, the OAM Handler interacts with the devices in the 443 network to initiate OAM actions within the data plane such as 444 monitoring and testing. 446 2.3.1.7 Path Computation Element (PCE) 448 The Path Computation Element (PCE) is introduced in [RFC4655].
It is 449 a functional component that services requests to compute paths across 450 a network graph. In particular, it can generate traffic engineered 451 routes for MPLS-TE and GMPLS Label Switched Paths (LSPs). The PCE 452 may receive these requests from the ABNO Controller, from the Virtual 453 Network Topology Manager, or from network elements themselves. 455 The PCE operates on a view of the network topology stored in the 456 Traffic Engineering Database (TED). A more sophisticated computation 457 may be provided by a Stateful PCE that enhances the TED with a 458 database (the LSP-DB; see Section 2.3.1.8.2) containing information 459 about the LSPs that are provisioned and operational within the 460 network as described in [RFC4655] and [I-D.ietf-pce-stateful-pce]. 462 An Active PCE provides additional function that allows a component 463 including a Stateful PCE to make provisioning requests to set up 464 new services or to modify in-place services as described in 465 [I-D.ietf-pce-stateful-pce] and [I-D.ietf-pce-pce-initiated-lsp]. 466 This function may directly access the network elements, or may be 467 channelled through the Provisioning Manager. 469 Coordination between multiple PCEs operating on different TEDs can 470 prove useful for performing path computation in multi-domain (for 471 example, inter-AS) or multi-layer networks. 473 Since the PCE is a key component of the ABNO architecture, a better 474 view of its role can be gained by examining the use cases described 475 in Section 3. 477 2.3.1.8 Databases 479 The ABNO Architecture includes a number of databases that contain 480 information stored for use by the system. The two main databases are 481 the Traffic Engineering Database (TED) and the LSP Database (LSP-DB), 482 but there may be a number of other databases to contain information 483 about topology (ALTO Server), policy (Policy Agent), services (ABNO 484 Controller), etc. 486 In the text that follows, specific key components that are consumers 487 of the databases are highlighted. It should be noted that the 488 databases are available for inspection by any of the ABNO components. 489 Updates to the databases should be handled with some care since 490 allowing multiple components to write to a database can be the cause 491 of a number of contention and sequencing problems. 493 2.3.1.8.1 Traffic Engineering Database (TED) 495 The Traffic Engineering Database (TED) is a data store of topology 496 information about a network that may be enhanced with capability 497 data (such as metrics or bandwidth capacity) and active status 498 information (such as up/down status or residual unreserved 499 bandwidth). 501 The TED may be built from information supplied by the network or 502 from data (such as inventory details) sourced through the NMS/OSS. 504 The principal use of the TED in the ABNO architecture is to provide 505 the raw data on which the Path Computation Element operates. But 506 the TED may also be inspected by users at the NMS/OSS to view the 507 current status of the network, and may provide information to 508 application services such as Application Layer Traffic Optimization 509 (ALTO) [RFC5693]. 511 2.3.1.8.2 LSP Database 513 The LSP Database (LSP-DB) is a data store of information about LSPs 514 that have been set up in the network, or that could be established. 515 The information stored includes the paths and resource usage of the 516 LSPs. 518 The LSP-DB may be built from information generated locally.
For 519 example, when LSPs are provisioned, the LSP-DB can be updated. The 520 database can also be constructed from information gathered from the 521 network by polling or reading the state of LSPs that have already 522 been set up. 524 The main use of the LSP-DB within the ABNO architecture is to enhance 525 the planning and optimization of LSPs. New LSPs can be established 526 to be path-disjoint from other LSPs in order to offer protected 527 services; LSPs can be rerouted in order to put them on more optimal 528 paths or to make network resources available for other LSPs; LSPs can 529 be rapidly repaired when a network failure is reported; LSPs can be 530 moved onto other paths in order to avoid resources that have planned 531 maintenance outages. A stateful PCE (see Section 2.3.1.7) is a 532 primary consumer of the LSP-DB. 534 2.3.1.8.3 Shared Risk Link Group (SRLG) Databases 536 The TED may, itself, be supplemented by SRLG information that assigns 537 to each network resource one or more identifiers that associate the 538 resource with other resources in the same TED that share the same 539 risk of failure. 541 While this information can be highly useful, it may be supplemented 542 by additional detailed information maintained in a separate database 543 and indexed using the SRLG identifier from the TED. Such a database 544 can interpret SRLG information provided by other networks (such as 545 server networks), can provide failure probabilities associated with 546 each SRLG, can offer prioritization when SRLG-disjoint paths cannot 547 be found, and can correlate SRLGs between different server networks 548 or between different peer networks. 550 2.3.1.8.4 Other Databases 552 There may be other databases that are built within the ABNO system 553 and which are referenced when operating the network. These databases 554 might include information about, for example, traffic flows and 555 demands, predicted or scheduled traffic demands, link and node 556 failure and repair history, network resources such as packet labels 557 and physical labels (i.e., MPLS and GMPLS labels), etc. 559 As mentioned in Section 2.3.1.8.1, the TED may be enhanced by 560 inventory information. It is quite likely in many networks that such 561 an inventory is held in a separate database (the Inventory Database) 562 that includes details of manufacturer, model, installation date, etc. 564 2.3.1.9 ALTO Server 566 The ALTO Server provides network information to the application 567 layer based on abstract maps of a network region. This information 568 provides a simplified view, but it is useful for steering 569 application-layer traffic. ALTO services enable Service Providers to share 570 information about network locations and the costs of paths between 571 them. The selection criteria to choose between two locations may 572 depend on information such as maximum bandwidth, minimum cross- 573 domain traffic, lowest cost to the user, etc. 575 The ALTO Server generates ALTO views to share information with the 576 Application Service Coordinator so that it can better select paths 577 in the network to carry application-layer traffic. The ALTO views 578 are computed based on information from the network databases, from 579 policies configured by the Policy Agent, and through the algorithms 580 used by the PCE. 582 Specifically, the base ALTO protocol [RFC7285] defines a single-node 583 abstract view of a network to the Application Service Coordinator. 584 Such a view consists of two maps: a network map and a cost map.
A 585 network map defines multiple provider-defined Identifiers (PIDs), 586 which represent entrance points to the network. Each node in the 587 application layer is known as an End Point (EP), and each EP is 588 assigned to a PID because PIDs are the entry points of the 589 applications into the network. As defined in [RFC7285], a PID can 590 denote a subnet, a set of subnets, a metropolitan area, a PoP, etc. 591 Each such network region can be a single domain or multiple networks; 592 it is simply the view that the ALTO Server is exposing to the 593 application layer. A cost map provides costs between EPs and/or 594 PIDs. The criteria that the Application Service Coordinator uses to 595 choose application routes between two locations may depend on 596 attributes such as maximum bandwidth, minimum cross-domain traffic, 597 lowest cost to the user, etc. 599 2.3.1.10 Virtual Network Topology Manager (VNTM) 601 A Virtual Network Topology (VNT) is defined in [RFC5212] as a set of 602 one or more LSPs in one or more lower-layer networks that provides 603 information for efficient path handling in an upper-layer network. 604 For instance, a set of LSPs in a wavelength division multiplexed 605 (WDM) network can provide connectivity as virtual links in a higher- 606 layer packet-switched network. 608 The VNT enhances the physical/dedicated links that are available in 609 the upper-layer network and is configured by setting up or tearing 610 down the lower-layer LSPs and by advertising the changes into the 611 higher-layer network. The VNT can be adapted to traffic demands 612 so that capacity in the higher-layer network can be created or 613 released as needed. Releasing unwanted VNT resources makes them 614 available in the lower-layer network for other uses. 616 The creation of virtual topology for inclusion in a network is not a 617 simple task. Decisions must be made about which nodes in the upper- 618 layer network it is best to connect, in which lower-layer network to 619 provision LSPs to provide the connectivity, and how to route the LSPs 620 in the lower-layer network. Furthermore, some specific actions have 621 to be taken to cause the lower-layer LSPs to be provisioned and the 622 connectivity in the upper-layer network to be advertised. 624 [RFC5623] describes how the VNTM may instantiate connections in the 625 server-layer in support of connectivity in the client-layer. Within 626 the ABNO architecture, the creation of new connections may be 627 delegated to the Provisioning Manager as discussed in Section 628 2.3.1.11. 630 All of these actions and decisions are heavily influenced by policy, 631 so the VNTM component that coordinates them takes input from the 632 Policy Agent. The VNTM is also closely associated with the PCE for 633 the upper-layer network and each of the PCEs for the lower-layer 634 networks. 636 2.3.1.11 Provisioning Manager 638 The Provisioning Manager is responsible for making or channelling 639 requests for the establishment of LSPs. These may be instructions to 640 the control plane running in the networks, or may involve the 641 programming of individual network devices. In the latter case, the 642 Provisioning Manager may act as an OpenFlow Controller [ONF]. 644 See Section 2.3.2.6 for more details of the interactions between the 645 Provisioning Manager and the network.
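As a non-normative sketch of the two modes of operation just described, consider the following Python fragment. The class and method names are assumptions of this example only; the control-plane path stands in for PCEP- or NETCONF-triggered signaling, and the per-device path stands in for OpenFlow-style programming (see Section 2.3.2.6 for the candidate protocols).

   # Hypothetical Provisioning Manager front end.  Nothing here is a
   # defined API; it only illustrates the two provisioning levels.
   class ProvisioningManager:
       def __init__(self, control_plane_client, device_clients):
           self.cp = control_plane_client   # e.g., a session to a head-end node
           self.devices = device_clients    # e.g., per-device programmatic sessions

       def set_up_lsp(self, path, use_control_plane=True):
           if use_control_plane:
               # Hand the whole path to the head-end node and let the
               # network's control plane signal the LSP end to end.
               self.cp.request_lsp(head_end=path[0], explicit_route=path)
           else:
               # Program the forwarding state at each hop individually,
               # acting much as an OpenFlow Controller would.
               for node, next_hop in zip(path, path[1:]):
                   self.devices[node].install_cross_connect(next_hop)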
647 2.3.1.12 Client and Server Network Layers 649 The client and server networks are shown in Figure 1 as illustrative 650 examples of the fact that the ABNO architecture may be used to 651 coordinate services across multiple networks where lower-layer 652 networks provide connectivity in upper-layer networks. 654 Section 3.2 describes a set of use cases for multi-layer networking. 656 2.3.2 Functional Interfaces 658 This section describes the interfaces between functional components 659 that might be externalized in an implementation allowing the 660 components to be distributed across platforms. Where existing 661 protocols might provide all or most of the necessary capabilities 662 they are noted. Appendix A notes the interfaces where more protocol 663 specification may be needed. 665 As noted at the top of Section 2.3, it is important to understand 666 that the relationships and interfaces shown between components in 667 Figure 1 are illustrative of some of the common or likely 668 interactions; however, this figure and the descriptions in the sub- 669 sections below do not preclude other interfaces and relationships as 670 necessary to realize specific functionality. Thus, some of the 671 interfaces described below might not be visible as specific 672 relationships in Figure 1, but they can nevertheless exist. 674 2.3.2.1 Configuration and Programmatic Interfaces 676 The network devices may be configured or programmed directly from the 677 NMS/OSS. Many protocols already exist to perform these functions, 678 including: 680 - SNMP [RFC3412] 681 - NETCONF [RFC6241] 682 - RESTCONF [I-D.ietf-netconf-restconf] 683 - GSMP [RFC3292] 684 - ForCES [RFC5810] 685 - OpenFlow [ONF] 686 - PCEP [I-D.ietf-pce-pce-initiated-lsp]. 688 The TeleManagement Forum (TMF) Multi-Technology Operations System 689 Interface (MTOSI) standard [TMF-MTOSI] was developed to facilitate 690 application-to-application interworking and provides network-level 691 management capabilities to discover, configure, and activate 692 resources. Initially, the MTOSI information model was only capable of 693 representing connection-oriented networks and resources; later 694 releases added support for connectionless networks. From the NMS 695 perspective, MTOSI is a north-bound interface and is based on SOAP 696 web services. 698 From the ABNO perspective, network configuration is a pass-through 699 function. It can be seen represented on the left-hand side of 700 Figure 1. 702 2.3.2.2 TED Construction from the Networks 704 As described in Section 2.3.1.8, the TED provides details of the 705 capabilities and state of the network for use by the ABNO system and 706 the PCE in particular. 708 The TED can be constructed by participating in the IGP-TE protocols 709 run by the networks (for example, OSPF-TE [RFC3630] and ISIS-TE 710 [RFC5305]). Alternatively, the TED may be fed using link-state 711 distribution extensions to BGP [I-D.ietf-idr-ls-distribution]. 713 The ABNO system may maintain a single TED unified across multiple 714 networks, or may retain a separate TED for each network. 716 Additionally, an ALTO Server [RFC5693] may provide an abstracted 717 topology from a network to build an application-level TED that can 718 be used by a PCE to compute paths between servers and application- 719 layer entities for the provision of application services.
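By way of illustration only, the following Python sketch shows the kind of link-state records a TED might accumulate from an IGP-TE or BGP-LS feed. The record layout is an assumption of this example and does not correspond to any defined encoding.

   # Hypothetical TED fed by link-state advertisements.  Field names
   # echo the kind of data carried in OSPF-TE/ISIS-TE or BGP-LS, but
   # no specific protocol encoding is implied.
   from dataclasses import dataclass, field

   @dataclass(frozen=True)
   class TeLink:
       local_node: str
       remote_node: str
       te_metric: int
       unreserved_bw: float                  # e.g., in Mbit/s

   @dataclass
   class Ted:
       links: dict = field(default_factory=dict)

       def update(self, advert: TeLink) -> None:
           # A re-advertisement simply replaces the stored link state.
           self.links[(advert.local_node, advert.remote_node)] = advert

   # Example: record a link advertised between two routers.
   ted = Ted()
   ted.update(TeLink("A1", "B1", te_metric=10, unreserved_bw=400.0))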
721 2.3.2.3 TED Enhancement 723 The TED may be enhanced by inventory information supplied from the 724 NMS/OSS. This may supplement the data collected as described in 725 Section 2.3.2.2 with information that is not normally distributed 726 within the network such as node types and capabilities, or the 727 characteristics of optical links. 729 No protocol is currently identified for this interface, but the 730 protocol developed or adopted to satisfy the requirements of the 731 Interface to the Routing System (I2RS) [I-D.ietf-i2rs-architecture] 732 may be a suitable candidate because it is required to be able to 733 distribute bulk routing state information in a well-defined encoding 734 language. Another candidate protocol may be NETCONF [RFC6241] 735 passing data encoded using YANG [RFC6020]. 737 Note that, in general, any protocol and encoding that is suitable 738 for presenting the TED as described in Section 2.3.2.4 will likely be 739 suitable (or could be made suitable) for enabling write-access to the 740 TED as described in this section. 742 2.3.2.4 TED Presentation 744 The TED may be presented north-bound from the ABNO system for use by 745 an NMS/OSS or by the Application Service Coordinator. This allows 746 users and applications to get a view of the network topology and the 747 status of the network resources. It also allows planning and 748 provisioning of application services. 750 There are several protocols available for exporting the TED north- 751 bound: 753 - The ALTO protocol [RFC7285] is designed to distribute the 754 abstracted topology used by an ALTO Server and may prove useful for 755 exporting the TED. The ALTO Server provides the cost between EPs or 756 between PIDs so that the application layer can select the most 757 appropriate connection for the information exchange between its 758 application end points. 760 - The same protocol used to export topology information from the 761 network can be used to export the topology from the TED 762 [I-D.ietf-idr-ls-distribution]. 764 - The Interface to the Routing System (I2RS) 765 [I-D.ietf-i2rs-architecture] will require a protocol that is 766 capable of handling bulk routing information exchanges that would 767 be suitable for exporting the TED. In this case, it would make 768 sense to have a standardized representation of the TED in a formal 769 data modelling language such as YANG [RFC6020] so that an existing 770 protocol such as NETCONF [RFC6241] or XMPP [RFC6120] could be used. 772 Note that export from the TED can be a full dump of the content 773 (expressed in a suitable abstraction language) as described above, or 774 it could be an aggregated or filtered set of data based on policies 775 or specific requirements. Thus, the relationships shown in Figure 1 776 may be a little simplistic in that the ABNO Controller may also be 777 involved in preparing and presenting the TED information over a 778 north-bound interface. 780 2.3.2.5 Path Computation Requests from the Network 782 As originally specified in the PCE architecture [RFC4655], network 783 elements can make path computation requests to a PCE using PCEP 784 [RFC5440]. This facilitates the network setting up LSPs in response 785 to simple connectivity requests, and it allows the network to re- 786 optimize or repair LSPs.
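The information flow of such a request can be sketched as follows (Python; this models only the exchange at a functional level and is not a rendering of PCEP messages, whose objects and encodings are specified in [RFC5440]).

   # Schematic PCReq/PCRep exchange as seen by a network element.
   # The 'pce' object and its 'compute' method are placeholders for
   # a real PCEP session; object names in comments refer to RFC 5440.
   def request_path(pce, src, dst, bandwidth):
       reply = pce.compute(endpoints=(src, dst),          # cf. END-POINTS
                           constraints={"bandwidth": bandwidth})
       if reply.no_path:                                  # cf. NO-PATH
           raise RuntimeError("no path satisfies the constraints")
       return reply.ero                                   # route to signal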
788 2.3.2.6 Provisioning Manager Control of Networks 790 As described in Section 2.3.1.11, the Provisioning Manager makes or 791 channels requests to provision resources in the network. These 792 operations can take place at two levels: there can be requests to 793 program/configure specific resources in the data or forwarding 794 planes; and there can be requests to trigger a set of actions to be 795 programmed with the assistance of a control plane. 797 A number of protocols already exist to provision network resources as 798 follows: 800 - Program/configure specific network resources 802 - ForCES [RFC5810] defines a protocol for separation of the control 803 element (the Provisioning Manager) from the forwarding elements 804 in each node in the network. 806 - The Generic Switch Management Protocol (GSMP) [RFC3292] is an 807 asymmetric protocol that allows one or more external switch 808 controllers (such as the Provisioning Manager) to establish and 809 maintain the state of a label switch such as an MPLS switch. 811 - OpenFlow [ONF] is a communications protocol that gives an 812 OpenFlow Controller (such as the Provisioning Manager) access to 813 the forwarding plane of a network switch or router in the 814 network. 816 - Historically, other configuration-based mechanisms have been used 817 to set up the forwarding/switching state at individual nodes 818 within networks. Such mechanisms have ranged from non-standard 819 command line interfaces (CLIs) to various standards-based options 820 such as TL1 [TL1] and SNMP [RFC3412]. These mechanisms are not 821 designed for rapid operation of a network and are not easily 822 programmatic. They are not proposed for use by the Provisioning 823 Manager as part of the ABNO architecture. 825 - NETCONF [RFC6241] provides a more active configuration protocol 826 that may be suitable for bulk programming of network resources. 827 Its use in this way is dependent on suitable YANG modules being 828 defined for the necessary options. Early work in the IETF's 829 Netmod working group is focused on a higher level of routing 830 function more comparable with the function discussed in Section 831 2.3.2.8 [I-D.ietf-netmod-routing-cfg]. 833 - The [TMF-MTOSI] specification provides provisioning, activation, 834 deactivation, and release of resources via the Service 835 Activation Interface (SAI). The Common Communication Vehicle 836 (CCV) is the middleware required to implement MTOSI. CCV is then 837 used to provide middleware abstraction in combination with Web 838 Services Description Language (WSDL) to allow MTOSI interfaces to 839 be bound to different middleware technologies as needed. 841 - Trigger actions through the control plane 843 - LSPs can be requested through a management system interface to the 844 head end of the LSP using tools such as CLIs, TL1 [TL1], or SNMP 845 [RFC3412]. Configuration at this granularity is not as time- 846 critical as when individual network resources are programmed 847 because the main task of programming end-to-end connectivity is 848 devolved to the control plane. Nevertheless, these mechanisms 849 remain unsuitable for programmatic control of the network and are 850 not proposed for use by the Provisioning Manager as part of the 851 ABNO architecture. 853 - As noted above, NETCONF [RFC6241] provides a more active 854 configuration protocol. This may be particularly suitable for 855 requesting the establishment of LSPs. Work would be needed to 856 complete a suitable YANG module. 858 - The PCE communication Protocol (PCEP) [RFC5440] has been proposed 859 as a suitable protocol for requesting the establishment of LSPs 861 [I-D.ietf-pce-pce-initiated-lsp].
This works well because the 862 protocol elements necessary are exactly the same as used to 863 respond to a path computation request. 865 The functional element that issues PCEP requests to establish 866 LSPs is known as an "Active PCE"; however, it should be noted that 867 the ABNO functional component responsible for requesting LSPs is 868 the Provisioning Manager. Other controllers, such as the VNTM and 869 the ABNO Controller, use the services of the Provisioning Manager 870 to isolate the twin functions of computing and requesting paths 871 from the provisioning mechanisms in place with any given network. 873 Note that I2RS does not provide a mechanism for control of network 874 resources at this level as it is designed to provide control of 875 routing state in routers, not forwarding state in the data plane. 877 2.3.2.7 Auditing the Network 879 Once resources have been provisioned or connections established in 880 the network, it is important that the ABNO system can determine the 881 state of the network. Similarly, when provisioned resources are 882 modified or taken out of service, the changes in the network need to 883 be understood by the ABNO system. This function falls into four 884 categories: 886 - Updates to the TED are gathered as described in Section 2.3.2.2. 888 - Explicit notification of the successful establishment and the 889 subsequent state of an LSP can be provided through extensions to PCEP 890 as described in [I-D.ietf-pce-stateful-pce] and 891 [I-D.ietf-pce-pce-initiated-lsp]. 893 - OAM can be commissioned and the results inspected by the OAM 894 Handler as described in Section 2.3.2.14. 896 - A number of ABNO components may make inquiries and inspect network 897 state through a variety of techniques including I2RS, NETCONF, or 898 SNMP. 900 2.3.2.8 Controlling the Routing System 902 As discussed in Section 2.3.1.5, the Interface to the Routing System 903 (I2RS) provides a programmatic way to access (for read and write) the 904 routing state and policy information on routers in the network. The 905 I2RS Client issues requests to routers in the network to establish or 906 retrieve routing state. Those requests utilize the I2RS protocol, 907 which will be based on a combination of NETCONF [RFC6241] and 908 RESTCONF [I-D.ietf-netconf-restconf] with some additional features. 910 2.3.2.9 ABNO Controller Interface to PCE 912 The ABNO Controller needs to be able to consult the PCE to determine 913 what services can be provisioned in the network. There is no reason 914 why this interface cannot be based on the standard PCE communication 915 Protocol as defined in [RFC5440]. 917 2.3.2.10 VNTM Interface to and from PCE 919 There are two interactions between the Virtual Network Topology 920 Manager and the PCE. 922 The first interaction is used when the VNTM wants to determine what LSPs 923 can be set up in a network; in this case, it uses the standard PCEP 924 interface [RFC5440] to make path computation requests. 926 The second interaction arises when a PCE determines that it cannot 927 compute a requested path or notices that (according to some 928 configured policy) a network is short of resources (for example, the 929 capacity on some key link is close to exhausted). In this case, the 930 PCE may notify the VNTM, which may (again according to policy) act to 931 construct more virtual topology.
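The decision logic for this second interaction might look like the sketch below (Python; the threshold, policy hook, and notification call are all assumptions of the example, and the notification itself would be carried by whatever protocol is eventually chosen, as discussed next).

   # Illustrative, policy-gated trigger from a PCE toward the VNTM.
   LINK_EXHAUSTION_THRESHOLD = 0.9   # assumed fraction of capacity in use

   def check_for_resource_shortage(ted_links, vntm, policy):
       for link in ted_links:
           utilization = 1.0 - (link.unreserved_bw / link.capacity)
           if utilization > LINK_EXHAUSTION_THRESHOLD and policy.may_augment(link):
               # Ask the VNTM to consider creating more virtual topology
               # (for example, a new lower-layer LSP) to relieve this link.
               vntm.notify_resource_shortage(link)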
This second interface is not 932 currently specified, although it may be that the protocol selected or 933 designed to satisfy I2RS will provide suitable features (see Section 934 2.3.2.8) or an extension could be made to the PCEP Notify message 935 (PCNtf) [RFC5440]. 937 2.3.2.11 ABNO Control Interfaces 939 The north-bound interface from the ABNO Controller is used by the 940 NMS, OSS, and Application Service Coordinator to request services in 941 the network in support of applications. The interface will also need 942 to be able to report the asynchronous completion of service requests 943 and convey changes in the status of services. 945 This interface will also need strong capabilities for security, 946 authentication, and policy. 948 This interface is not currently specified. It needs to be a 949 transactional interface that supports the specification of abstract 950 services with adequate flexibility to facilitate easy extension and 951 yet be concise and easily parsable. 953 It is possible that the protocol designed to satisfy I2RS will 954 provide suitable features (see Section 2.3.2.8). 956 2.3.2.12 ABNO Provisioning Requests 958 Under some circumstances the ABNO Controller may make requests directly 959 to the Provisioning Manager. For example, if the Provisioning 960 Manager is acting as an SDN Controller, then the ABNO Controller may 961 use one of the APIs defined to allow requests to be 962 made to the SDN Controller (such as the Floodlight REST API [Flood]). Alternatively, 963 since the Provisioning Manager may also receive instructions from a 964 stateful PCE, the use of PCEP extensions might be appropriate in 965 some cases [I-D.ietf-pce-pce-initiated-lsp]. 967 2.3.2.13 Policy Interfaces 969 As described in Section 2.3.1.4 and throughout this document, policy 970 forms a critical component of the ABNO architecture. The role of 971 policy will include enforcing the following rules and requirements: 973 - Adding resources on demand should be gated by the authorized 974 capability. 976 - Client microflows should not trigger server-layer setup or 977 allocation. 979 - Accounting capabilities should be supported. 981 - Security mechanisms for authorization of requests and capabilities 982 are required. 984 Other policy-related functions in the system might include the policy 985 behavior of the routing and forwarding system, such as: 987 - ECMP behavior 989 - Classification of packets onto LSPs or QoS categories. 991 Various policy-capable architectures have been defined, including a 992 framework for using policy with a PCE-enabled system [RFC5394]. 993 However, the take-up of the IETF's Common Open Policy Service 994 protocol (COPS) [RFC2748] has been poor. 996 New work will be needed to define all of the policy interfaces within 997 the ABNO architecture. Work will also be needed to determine which 998 are internal interfaces and which may be external and so in need of 999 a protocol specification. There is some discussion that the I2RS 1000 protocol may support the configuration and manipulation of policies. 1002 2.3.2.14 OAM and Reporting 1004 The OAM Handler must interact with the network to perform several 1005 actions: 1007 - Enabling OAM function within the network. 1009 - Performing proactive OAM operations in the network. 1011 - Receiving notifications of network events. 1013 Any of the configuration and programmatic interfaces described in 1014 Section 2.3.2.1 may serve this purpose.
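For illustration, the sketch below (Python; the event fields and method names are invented for the example) folds the three interactions listed above into a single handling loop.

   # Illustrative OAM Handler core loop.  The network-facing calls are
   # placeholders for the protocol bindings discussed in this section.
   def oam_handler_loop(network, abno_controller, reporting_targets):
       network.enable_oam(["continuity-check"])   # enable OAM function
       network.start_proactive_monitoring()       # proactive operations
       for event in network.events():             # event notifications
           incident = correlate(event)
           if incident["service_affecting"]:
               abno_controller.recover(incident)  # trigger recovery action
           for target in reporting_targets:       # NMS, OSS, Application
               target.report(incident)            #   Service Coordinator

   def correlate(event):
       # Placeholder correlation: treat each event as its own incident.
       return {"service_affecting": event.get("loss_of_signal", False),
               "events": [event]}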
NETCONF notifications are 1015 described in [RFC5277], and OpenFlow supports a number of 1016 asynchronous event notifications [ONF]. Additionally, Syslog 1017 [RFC5424] is a protocol for reporting events from the network, and 1018 IPFIX [RFC7011] is designed to allow network statistics to be 1019 aggregated and reported. 1021 The OAM Handler also correlates events reported from the network and 1022 reports them onward to the ABNO Controller (which can apply the 1023 information to the recovery of services that it has provisioned) and 1024 to the NMS, OSS, and Application Service Coordinator. The reporting 1025 mechanism used here can be essentially the same as used when events 1026 are reported from the network and no new protocol is needed, although 1027 new data models may be required for technology-independent OAM 1028 reporting. 1030 3. ABNO Use Cases 1032 This section provides a number of examples of how the ABNO 1033 architecture can be applied to provide application-driven and 1034 NMS/OSS-driven network operations. The purpose of these examples is 1035 to give some concrete material to demonstrate the architecture so 1036 that it may be more easily comprehended, and to illustrate that the 1037 application of the architecture is achieved by "profiling" and by 1038 selecting only the relevant components and interfaces. 1040 Similarly, it is not the intention that this section contains a 1041 complete list of all possible applications of ABNO. The examples are 1042 intended to broadly cover a number of applications that are commonly 1043 discussed, but this does not preclude other use cases. 1045 The descriptions in this section are not fully detailed applicability 1046 statements for ABNO. It is anticipated that such applicability 1047 statements, for the use cases described and for other use cases, 1048 could be suitable material for separate documents. 1050 3.1 Inter-AS Connectivity 1052 The following use case describes how the ABNO framework can be used 1053 to set up an end-to-end MPLS service across multiple Autonomous 1054 Systems (ASes). Consider the simple network topology shown in Figure 1055 2. The three ASes (ASa, ASb, and ASc) are connected at ASBRs a1, a2, 1056 b1 through b4, c1, and c2. A source node (s) located in ASa is to be 1057 connected to a destination node (d) located in ASc. The optimal path 1058 for the LSP from s to d must be computed, and then the network must 1059 be triggered to set up the LSP. 1061 +--------------+ +-----------------+ +--------------+ 1062 |ASa | | ASb | | ASc | 1063 | +--+ | | +--+ +--+ | | +--+ | 1064 | |a1|-|-|-|b1| |b3|-|-|-|c1| | 1065 | +-+ +--+ | | +--+ +--+ | | +--+ +-+ | 1066 | |s| | | | | |d| | 1067 | +-+ +--+ | | +--+ +--+ | | +--+ +-+ | 1068 | |a2|-|-|-|b2| |b4|-|-|-|c2| | 1069 | +--+ | | +--+ +--+ | | +--+ | 1070 | | | | | | 1071 +--------------+ +-----------------+ +--------------+ 1073 Figure 2 : Inter-AS Domain Topology with H-PCE (Parent PCE) 1075 The following steps are performed to deliver the service within the 1076 ABNO architecture. 1078 1. Request Management 1080 As shown in Figure 3, the NMS/OSS issues a request to the ABNO 1081 Controller for a path between s and d. The ABNO Controller 1082 verifies that the NMS/OSS has sufficient rights to make the 1083 service request.
1085 +---------------------+ 1086 | NMS/OSS | 1087 +----------+----------+ 1088 | 1089 V 1090 +--------+ +-----------+-------------+ 1091 | Policy +-->-+ ABNO Controller | 1092 | Agent | | | 1093 +--------+ +-------------------------+ 1095 Figure 3 : ABNO Request Management 1097 2. Service Path Computation with Hierarchical PCE 1099 The ABNO Controller needs to determine an end-to-end path for the 1100 LSP. Since the ASes will want to maintain a degree of 1101 confidentiality about their internal resources and topology, they 1102 will not share a TED and each will have its own PCE. In such a 1103 situation, the Hierarchical PCE (H-PCE) architecture described in 1104 [RFC6805] is applicable. 1106 As shown in Figure 4, the ABNO Controller sends a request to the 1107 parent PCE for an end-to-end path. As described in [RFC6805], the 1108 parent PCE consults its TED that shows the connectivity between 1109 ASes. This helps it understand that the end-to-end path must 1110 cross each of ASa, ASb, and ASc, so it sends individual path 1111 computation requests to each of PCE a, b, and c to determine the 1112 best options for crossing the ASes. 1114 Each child PCE applies policy to the requests it receives to 1115 determine whether the request is to be allowed and to select the 1116 type of network resources that can be used in the computation 1117 result. For confidentiality reasons, each child PCE may supply 1118 its computation responses using a path key [RFC5520] to hide the 1119 details of the path segment it has computed. 1121 +-----------------+ 1122 | ABNO Controller | 1123 +----+-------+----+ 1124 | A 1125 V | 1126 +--+-------+--+ +--------+ 1127 +--------+ | | | | 1128 | Policy +-->-+ Parent PCE +---+ AS TED | 1129 | Agent | | | | | 1130 +--------+ +-+----+----+-+ +--------+ 1131 / | \ 1132 / | \ 1133 +-----+-+ +---+---+ +-+-----+ 1134 | | | | | | 1135 | PCE a | | PCE b | | PCE c | 1136 | | | | | | 1137 +---+---+ +---+---+ +---+---+ 1138 | | | 1139 +--+--+ +--+--+ +--+--+ 1140 | TEDa| | TEDb| | TEDc| 1141 +-----+ +-----+ +-----+ 1143 Figure 4 : Path Computation Request with Hierarchical PCE 1144 The parent PCE collates the responses from the children and 1145 applies its own policy to stitch them together into the best end- 1146 to-end path which it returns as a response to the ABNO Controller. 1148 3. Provisioning the End-to-End LSP 1150 There are several options for how the end-to-end LSP gets 1151 provisioned in the ABNO architecture. Some of these are described 1152 below. 1154 3a. Provisioning from the ABNO Controller With a Control Plane 1156 Figure 5 shows how the ABNO Controller makes a request through 1157 the Provisioning Manager to establish the end-to-end LSP. As 1158 described in Section 2.3.2.6 these interactions can use the 1159 NETCONF protocol [RFC6241] or the extensions to PCEP described 1160 in [I-D.ietf-pce-pce-initiated-lsp]. In either case, the 1161 provisioning request is sent to the head end Label Switching 1162 Router (LSR), and that LSR signals in the control plane (using 1163 a protocol such as RSVP-TE [RFC3209]) to cause the LSP to be 1164 established. 1166 +-----------------+ 1167 | ABNO Controller | 1168 +--------+--------+ 1169 | 1170 V 1171 +------+-------+ 1172 | Provisioning | 1173 | Manager | 1174 +------+-------+ 1175 | 1176 V 1177 +--------------------+------------------------+ 1178 / Network \ 1179 +-------------------------------------------------+ 1181 Figure 5 : Provisioning the End-to-End LSP 1183 3b. 
   3b. Provisioning through Programming Network Resources

       Another option is that the LSP is provisioned hop by hop from
       the Provisioning Manager using a mechanism such as ForCES
       [RFC5810] or OpenFlow [ONF], as described in Section 2.3.2.6.
       In this case, the picture is the same as shown in Figure 5.
       The interaction between the ABNO Controller and the
       Provisioning Manager will be PCEP or NETCONF, as described in
       option 3a, and the Provisioning Manager will have the
       responsibility to fan out the requests to the individual
       network elements.

   3c. Provisioning with an Active Parent PCE

       The active PCE is described in Section 2.3.1.7 based on the
       concepts expressed in [I-D.ietf-pce-pce-initiated-lsp].  In
       this approach, the process described in 3a is modified such
       that the PCE issues a PCEP command directly to the network
       without a response first being returned to the ABNO
       Controller.

       This situation is shown in Figure 6, and could be modified so
       that the Provisioning Manager still programs the individual
       network elements as described in 3b.

                +-----------------+
                | ABNO Controller |
                +----+------------+
                     |
                     V
                 +---+---------+
   +--------+    |             |        +--------------+
   | Policy +->--+ Parent PCE  +--->----+ Provisioning |
   | Agent  |    |             |        |   Manager    |
   +--------+    +-+----+----+-+        +-----+--------+
                  /     |     \               |
                 /      |      \              |
         +-----+-+  +---+---+  +-+-----+     |
         |       |  |       |  |       |     |
         | PCE a |  | PCE b |  | PCE c |     |
         |       |  |       |  |       |     |
         +-------+  +-------+  +-------+     |
                                             |
      +--------------------------------------+-----+
     /                   Network                    \
    +------------------------------------------------+

          Figure 6 : LSP Provisioning with an Active PCE

   3d. Provisioning with Active Child PCEs and Segment Stitching

       A mixture of the approaches described in 3b and 3c can result
       in a combination of mechanisms to program the network to
       provide the end-to-end LSP.  Figure 7 shows how each child
       PCE can be an active PCE responsible for setting up an
       edge-to-edge LSP segment across one of the ASes.  The ABNO
       Controller then uses the Provisioning Manager to program the
       inter-AS connections using ForCES or OpenFlow, and the LSP
       segments are stitched together following the ideas described
       in [RFC5150].  Philosophers may debate whether the parent PCE
       in this model is active (instructing the children to
       provision LSP segments) or passive (requesting path segments
       that the children provision).
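       The division of labour in option 3d can also be expressed in
       outline code.  The Python sketch below is illustrative only
       (hypothetical classes and methods; no real PCEP, ForCES, or
       OpenFlow encodings): each child PCE provisions its own
       segment, and the stitching of segments at the AS boundaries
       [RFC5150] is programmed centrally.

          # Illustrative only: option 3d's split of provisioning work.
          class ChildPCE:
              def __init__(self, asn, ingress, egress):
                  self.asn, self.ingress, self.egress = asn, ingress, egress

              def compute_and_provision_segment(self):
                  # An active child PCE computes and sets up an
                  # edge-to-edge LSP segment within its own AS.
                  print(f"{self.asn}: segment {self.ingress}->{self.egress} up")
                  return self

          def provision_e2e(children, stitch):
              segments = [c.compute_and_provision_segment() for c in children]
              # Inter-AS connections are programmed centrally; the
              # segments are stitched together following [RFC5150].
              for left, right in zip(segments, segments[1:]):
                  stitch(left.egress, right.ingress)

          provision_e2e([ChildPCE("ASa", "s", "a1"),
                         ChildPCE("ASb", "b1", "b3"),
                         ChildPCE("ASc", "c1", "d")],
                        lambda a, b: print(f"stitch {a}->{b}"))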
              +-----------------+
              | ABNO Controller +-------->------+
              +----+-------+----+               |
                   |       A                    |
                   V       |                    |
                +--+-------+--+                 |
 +--------+     |             |                 |
 | Policy +-->--+ Parent PCE  |                 |
 | Agent  |     |             |                 |
 +--------+     ++-----+-----++                 |
                 /     |      \                 |
                /      |       \                |
           +---+-+  +--+--+  +-+---+            |
           |     |  |     |  |     |            |
           |PCE a|  |PCE b|  |PCE c|            V
           +--+--+  +--+--+  +---+-+            |
              |        |         |              |
              V        V         V              |
  +-----------+-+ +----+-------+ +---+--------+ |
  |Provisioning | |Provisioning| |Provisioning| |
  |Manager      | |Manager     | |Manager     | |
  +-----+-------+ +-----+------+ +-----+------+ |
        |               |              |        |
        V               V              V        |
     +--+-----+      +--+-----+     +--+-----+  |
    /   AS a   \====/   AS b   \===/   AS c   \ |
   +------------+ A +-----------+ A +----------+|
                  |               |             |
                  +-------+-------+             |
                          |                     |
                 +--------+-------------+       |
                 | Provisioning Manager +---<---+
                 +----------------------+

    Figure 7 : LSP Provisioning With Active Child PCEs and Stitching

4. Verification of Service

   The ABNO Controller will need to ascertain that the end-to-end
   LSP has been set up as requested.  In the case of a control plane
   being used to establish the LSP, the head-end LSR may send a
   notification (perhaps using PCEP) to report successful setup, but
   to be sure that the LSP is up, the ABNO Controller will request
   the OAM Handler to perform Continuity Check OAM in the data plane
   and report back that the LSP is ready to carry traffic.

5. Notification of Service Fulfillment

   Finally, when the ABNO Controller is satisfied that the requested
   service is ready to carry traffic, it will notify the NMS/OSS.
   The delivery of the service may be further checked through
   auditing the network as described in Section 2.3.2.7.

3.2 Multi-Layer Networking

Networks are typically constructed using multiple layers.  These
layers represent separations of administrative regions or of
technologies, and may also represent a distinction between client
and server networking roles.

It is preferable to coordinate network resource control and
utilization (i.e., consideration and control of multiple layers),
rather than controlling and optimizing resources at each layer
independently.  This facilitates network efficiency and network
automation, and may be defined as inter-layer traffic engineering.

The PCE architecture supports inter-layer traffic engineering
[RFC5623] and, in combination with the ABNO architecture, provides a
suite of capabilities for network resource coordination across
multiple layers.

The following use case demonstrates ABNO being used to coordinate
the allocation of server-layer network resources to create virtual
topology in a client-layer network in order to satisfy a request for
end-to-end client-layer connectivity.  Consider the simple
multi-layer network in Figure 8.

   +--+   +--+   +--+             +--+   +--+   +--+
   |P1|---|P2|---|P3|             |P4|---|P5|---|P6|
   +--+   +--+   +--+             +--+   +--+   +--+
                    \             /
                     \           /
                    +--+   +--+   +--+
                    |L1|---|L2|---|L3|
                    +--+   +--+   +--+

                 Figure 8 : A Multi-Layer Network
There are six packet layer routers (P1 through P6) and three optical
layer lambda switches (L1 through L3).  There is connectivity in the
packet layer between routers P1, P2, and P3, and also between
routers P4, P5, and P6, but there is no packet-layer connectivity
between these two islands of routers, perhaps because of a network
failure or perhaps because all existing bandwidth between the
islands has already been used up.  However, there is connectivity in
the optical layer between switches L1, L2, and L3, and the optical
network is connected out to routers P3 and P4 (they have optical
line cards).  In this example, a packet layer connection (an MPLS
LSP) is desired between P1 and P6.

In the ABNO architecture, the following steps are performed to
deliver the service.

1. Request Management

   As shown in Figure 9, the Application Service Coordinator issues
   a request for connectivity from P1 to P6 in the packet layer
   network.  That is, the Application Service Coordinator requests
   an MPLS LSP with a specific bandwidth to carry traffic for its
   application.  The ABNO Controller verifies that the Application
   Service Coordinator has sufficient rights to make the service
   request.

        +---------------------------+
        |    Application Service    |
        |        Coordinator        |
        +-------------+-------------+
                      |
                      V
 +------+   +--------+------------+
 |Policy+->-+   ABNO Controller   |
 |Agent |   |                     |
 +------+   +---------------------+

     Figure 9 : Application Service Coordinator Request Management

2. Service Path Computation in the Packet Layer

   The ABNO Controller sends a path computation request to the
   packet layer PCE to compute a suitable path for the requested
   LSP, as shown in Figure 10.  The PCE uses the appropriate policy
   for the request and consults the TED for the packet layer.  It
   determines that no path is immediately available.

        +-----------------+
        | ABNO Controller |
        +--------+--------+
                 |
                 V
 +--------+  +---+----------+   +--------+
 | Policy +->+ Packet Layer +---+ Packet |
 | Agent  |  |     PCE      |   |  TED   |
 +--------+  +--------------+   +--------+

               Figure 10 : Path Computation Request

3. Invocation of VNTM and Path Computation in the Optical Layer

   After the path computation failure in step 2, instead of
   notifying the ABNO Controller of the failure, the PCE invokes the
   VNTM to see whether it can create the necessary link in the
   virtual network topology to bridge the gap.

   As shown in Figure 11, the packet layer PCE reports the
   connectivity problem to the VNTM, and the VNTM consults policy to
   determine what it is allowed to do.  Assuming that the policy
   allows it, the VNTM asks the optical layer PCE to find a path
   across the optical network that could be provisioned to provide a
   virtual link for the packet layer.  In addressing this request,
   the optical layer PCE consults a TED for the optical layer
   network.

                 +------+
    +--------+   |      |     +--------------+
    | Policy +->-+ VNTM +--<--+ Packet Layer |
    | Agent  |   |      |     |     PCE      |
    +--------+   +---+--+     +--------------+
                     |
                     V
           +---------+-----+     +---------+
           | Optical Layer +-----+ Optical |
           |      PCE      |     |   TED   |
           +---------------+     +---------+

    Figure 11 : Invocation of VNTM and Optical Layer Path Computation
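   The escalation logic of steps 2 and 3 can be summarized in a few
   lines.  The following toy Python sketch uses invented names, and
   a trivial breadth-first search stands in for real path
   computation; it shows the packet-layer computation failing, the
   VNTM creating a virtual link, and the recomputation succeeding.

      from collections import deque

      # Toy packet-layer topology from Figure 8: two router islands.
      packet = {"P1": {"P2"}, "P2": {"P1", "P3"}, "P3": {"P2"},
                "P4": {"P5"}, "P5": {"P4", "P6"}, "P6": {"P5"}}

      def bfs_path(graph, src, dst):
          seen, queue = {src}, deque([[src]])
          while queue:
              path = queue.popleft()
              if path[-1] == dst:
                  return path
              for nxt in graph[path[-1]] - seen:
                  seen.add(nxt)
                  queue.append(path + [nxt])
          return None

      def vntm_create_virtual_link(a, b):
          # Stand-in for: the optical-layer PCE computes P3-L1-L2-L3-P4,
          # the LSP is provisioned, and the new virtual link is then
          # advertised in the packet layer (steps 4 and 5 below).
          packet[a].add(b)
          packet[b].add(a)

      if bfs_path(packet, "P1", "P6") is None:   # step 2 fails
          vntm_create_virtual_link("P3", "P4")   # step 3 bridges the gap
      print(bfs_path(packet, "P1", "P6"))
      # ['P1', 'P2', 'P3', 'P4', 'P5', 'P6']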
4. Provisioning in the Optical Layer

   Once a path has been found across the optical layer network, it
   needs to be provisioned.  The options follow those in step 3 of
   Section 3.1.  That is, provisioning can be initiated by the
   optical layer PCE or by its user, the VNTM.  The command can be
   sent to the head end of the optical LSP (P3) so that the control
   plane (for example, GMPLS RSVP-TE [RFC3473]) can be used to
   provision the LSP.  Alternatively, the network resources can be
   provisioned directly using any of the mechanisms described in
   Section 2.3.2.6.

5. Creation of Virtual Topology in the Packet Layer

   Once the LSP has been set up in the optical layer, it can be made
   available in the packet layer as a virtual link.  If the GMPLS
   signaling used the mechanisms described in [RFC6107], this
   process can be automated within the control plane; otherwise, it
   may require a specific instruction to the head-end router of the
   optical LSP (for example, through I2RS).

   Once the virtual link is created as shown in Figure 12, it is
   advertised in the IGP for the packet layer network, and the link
   will appear in the TED for the packet layer network.

                             +--------+
                             | Packet |
                             |  TED   |
                             +------+-+
                                    A
                                    |
   +--+                           +--+
   |P3|...........................|P4|
   +--+                           +--+
      \                           /
       \                         /
      +--+       +--+       +--+
      |L1|-------|L2|-------|L3|
      +--+       +--+       +--+

          Figure 12 : Advertisement of a New Virtual Link

6. Path Computation Completion and Provisioning in the Packet Layer

   Now there are sufficient resources in the packet layer network.
   The PCE for the packet layer can complete its work, and the MPLS
   LSP can be provisioned as described in Section 3.1.

7. Verification and Notification of Service Fulfillment

   As discussed in Section 3.1, the ABNO Controller will need to
   verify that the end-to-end LSP has been correctly established
   before reporting service fulfillment to the Application Service
   Coordinator.

   Furthermore, it is highly likely that service verification will
   be necessary before the optical layer LSP can be put into service
   as a virtual link.  Thus, the VNTM will need to coordinate with
   the OAM Handler to ensure that the LSP is ready for use.

3.2.1 Data Center Interconnection across Multi-Layer Networks

In order to support new and emerging cloud-based applications, such
as real-time data backup, virtual machine migration, server
clustering, or load reorganization, the dynamic provisioning and
allocation of IT resources and the interconnection of multiple,
remote Data Centers (DCs) is a growing requirement.

These operations require traffic to be delivered between data
centers, and, typically, the connections providing such inter-DC
connectivity are provisioned using static circuits or dedicated
leased lines, leading to inefficiency in terms of resource
utilization.  Moreover, a basic requirement is that such a group of
remote DCs can be operated logically as one.

In such environments, the data plane technology is operator and
provider dependent.  Customers may rent lambda switch capable (LSC),
packet switch capable (PSC), or time division multiplexing (TDM)
services, and the application and usage of the ABNO architecture and
Controller enable the required dynamic end-to-end network service
provisioning, regardless of the underlying service and transport
layers.
Consequently, the interconnection of DCs may involve the operation,
control, and management of heterogeneous environments: each DC site
and the metro-core network segment used to interconnect them, with
regard not only to the underlying data plane technology, but also to
the control plane.  For example, each DC site or domain could be
controlled locally in a centralized way (e.g., via OpenFlow [ONF]),
whereas the metro-core transport infrastructure is controlled by
GMPLS.  Although OpenFlow is especially adapted to single-domain
intra-DC networks (packet-level control, lots of routing
exceptions), a standardized GMPLS-based architecture would enable
dynamic optical resource allocation and restoration in multi-domain
(e.g., multi-vendor) core networks interconnecting distributed data
centers.

The application of an ABNO architecture and related procedures would
involve the following aspects:

1. Request From the Application Service Coordinator or NMS

   As shown in Figure 13, the ABNO Controller receives a request
   from the Application Service Coordinator or from the NMS, in
   order to create a new end-to-end connection between two end
   points.  The actual addressing of these end points is discussed
   in the next section.  The ABNO Controller asks the PCE for a path
   between these two endpoints, after considering any applicable
   policy as defined by the Policy Agent (see Figure 1).

        +---------------------------+
        |    Application Service    |
        |    Coordinator or NMS     |
        +-------------+-------------+
                      |
                      V
 +------+   +--------+------------+
 |Policy+->-+   ABNO Controller   |
 |Agent |   |                     |
 +------+   +---------------------+

    Figure 13 : Application Service Coordinator Request Management

2. Addressing Mapping

   In order to compute an end-to-end path, the PCE needs to have a
   unified view of the overall topology, which means that it has to
   consider and identify the actual endpoints with regard to the
   client network addresses.  The ABNO Controller and/or the PCE may
   need to translate or map addresses from different address spaces.
   Depending on how the topology information is disseminated and
   gathered, there are two possible scenarios:

   a. The Application Layer knows the Client Network Layer.
      Entities belonging to the application layer may have an
      interface with the TED or with an ALTO server allowing those
      entities to map the high-level endpoints to network addresses.
      The mechanism used to enable this address correlation is out
      of the scope of this document, but relies on direct interfaces
      to other ABNO components in addition to the interface to the
      ABNO Controller.

      In this scenario, the request from the NMS or Application
      Service Coordinator contains addresses in the client layer
      network.  Therefore, when the ABNO Controller requests the PCE
      to compute a path between two end points, the PCE is able to
      use the supplied addresses, compute the path, and continue the
      work-flow in communication with the Provisioning Manager.

   b. The Application Layer does not know the Client Network Layer.
      In this case, when the ABNO Controller receives a request from
      the NMS or Application Service Coordinator, the request
      contains only identifiers from the Application layer address
      space.  In order for the PCE to compute an end-to-end path,
      these identifiers must be converted to addresses in the client
      layer network.  This translation can be performed by the ABNO
      Controller, which can access the TED and ALTO databases,
      allowing the path computation request it sends to the PCE to
      be simply contained within one network and TED.
      Alternatively, the computation request could use the
      application layer identifiers, leaving the job of address
      mapping to the PCE.

   Note that both approaches in this scenario require clear
   identification of the address spaces that are in use in order to
   avoid any confusion.
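   The address-mapping step can be illustrated with a trivial
   lookup.  In the Python sketch below, the mapping table and
   identifiers are invented for exposition (a real deployment would
   derive them from the TED and/or an ALTO server); the addresses
   are taken from the 192.0.2.0/24 documentation range.

      # Illustrative mapping of application-layer identifiers to
      # client-network addresses (scenario b).
      APP_TO_NET = {"app-endpoint-1": "192.0.2.1",
                    "app-endpoint-2": "192.0.2.33"}

      def to_network_endpoints(identifiers):
          try:
              return [APP_TO_NET[ident] for ident in identifiers]
          except KeyError as err:
              # Unknown identifier: the ABNO Controller must reject
              # the request rather than guess at an address space.
              raise ValueError(f"no network address for {err.args[0]}")

      print(to_network_endpoints(["app-endpoint-1", "app-endpoint-2"]))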
3. Provisioning Process

   Once the path has been obtained, the Provisioning Manager
   receives a high-level provisioning request to provision the
   service.  Since, in the considered use case, the network elements
   are not necessarily configured using the same protocol, the
   end-to-end path is split into segments, and the ABNO Controller
   coordinates or orchestrates the establishment by adapting and/or
   translating the abstract provisioning request into concrete
   segment requests, by means of a VNTM or PCE, which issue the
   corresponding commands or instructions.  The provisioning may
   involve configuring the data plane elements directly or
   delegating the establishment of the underlying connection to a
   dedicated control plane instance responsible for that segment.

   The Provisioning Manager could use a number of mechanisms to
   program the network elements, as shown in Figure 14.  It learns
   which technology is used for the actual provisioning of each
   segment either by manual configuration or by discovery; a sketch
   of this per-segment dispatch follows at the end of this section.

              +-----------------+
              | ABNO Controller |
              +--------+--------+
                       |
                       |
                       V
    +------+    +------+-------+
    | VNTM +--<-+      PCE     |
    +---+--+    +------+-------+
        |              |
        V              V
    +---+--------------+-------------+
    |      Provisioning Manager      |
    +--------------------------------+
       |       |      |      |     |
       V       |      V      |     V
    OpenFlow   V    ForCES   V    PCEP
            NETCONF        SNMP

                  Figure 14 : Provisioning Process

4. Verification and Notification of Service Fulfillment

   Once the end-to-end connectivity service has been provisioned,
   and after verification of the correct operation of the service,
   the ABNO Controller needs to notify the Application Service
   Coordinator or NMS.
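   Purely as an illustration of the per-segment dispatch described
   in step 3, the following sketch shows a Provisioning Manager that
   has learned (by configuration or discovery) which protocol
   provisions each segment.  All names are invented; the print
   statements stand in for real protocol operations.

      # Illustrative per-segment protocol selection (Figure 14).
      SEGMENT_PROTOCOL = {"dc-site-1": "openflow",
                          "metro-core": "pcep",
                          "dc-site-2": "openflow"}

      def provision(segment, path):
          protocol = SEGMENT_PROTOCOL[segment]
          # Stand-in for OpenFlow/PCEP/NETCONF/ForCES/SNMP actions;
          # only the dispatch structure is of interest here.
          print(f"{segment}: programming {path} via {protocol}")

      for segment, path in [("dc-site-1", ["h1", "tor1", "gw1"]),
                            ("metro-core", ["gw1", "gw2"]),
                            ("dc-site-2", ["gw2", "tor2", "h2"])]:
          provision(segment, path)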
3.3 Make-Before-Break

A number of different services depend on the establishment of a new
LSP so that traffic supported by an existing LSP can be switched
with little or no disruption.  This section describes those use
cases, presents a generic model for make-before-break within the
ABNO architecture, and shows how each use case can be supported by
using elements of the generic model.

3.3.1 Make-Before-Break for Re-optimization

Make-before-break is a mechanism supported in RSVP-TE signaling
where a new LSP is set up before the LSP it replaces is torn down
[RFC3209].  This process has several benefits in situations such as
re-optimization of in-service LSPs.

The process is simple, and the example shown in Figure 15 utilizes a
stateful PCE [I-D.ietf-pce-stateful-pce] to monitor the network and
take re-optimization actions when necessary.  In this process, a
service request is made to the ABNO Controller by a requester such
as the OSS.  The service request indicates that the LSP should be
re-optimized under specific conditions according to policy.  This
allows the ABNO Controller to manage the sequence and prioritization
of re-optimizing multiple LSPs using elements of Global Concurrent
Optimization (GCO) as described in Section 3.4, and applying
policies across the network so that, for instance, LSPs for
delay-sensitive services are re-optimized first.

The ABNO Controller commissions the PCE to compute and set up the
initial path.

Over time, the PCE monitors the changes in the network as reflected
in the TED and, according to the configured policy, may compute and
set up a replacement path, using make-before-break within the
network.

Once the new path has been set up and the network reports that it is
in use correctly, the PCE tears down the old path and may report the
re-optimization event to the ABNO Controller.

   +---------------------------------------------+
   | OSS / NMS / Application Service Coordinator |
   +----------------------+----------------------+
                          |
             +------------+------------+
             |     ABNO Controller     |
             +------------+------------+
                          |
   +------+       +------+-------+       +-----+
   |Policy+-------+      PCE     +-------+ TED |
   |Agent |       +------+-------+       +-----+
   +------+              |
                         |
   +---------------------+-------------------------+
  /                   Network                       \
 +---------------------------------------------------+

              Figure 15 : The Make-Before-Break Process

3.3.2 Make-Before-Break for Restoration

Make-before-break may also be used to repair a failed LSP where
there is a desire to retain resources along some of the path, and
where there is the potential for other LSPs to "steal" the resources
if the failed LSP is torn down first.  Unlike the example in Section
3.3.1, this case addresses a situation where the service is
interrupted, but this interruption arises from the break in service
introduced by the network failure.  Obviously, in the case of a
point-to-multipoint LSP, the failure might only affect part of the
tree, and the disruption will apply only to a subset of the
destination leaves, so that a make-before-break restoration approach
will not cause disruption to the leaves that were not affected by
the original failure.

Figure 16 shows the components that interact for this use case.  A
service request is made to the ABNO Controller by a requester such
as the OSS.  The service request indicates that the LSP may be
restored after failure and should attempt to reuse as much of the
original path as possible.

The ABNO Controller commissions the PCE to compute and set up the
initial path.  The ABNO Controller also requests the OAM Handler to
initiate OAM on the LSP and to monitor the results.

At some point the network reports a fault to the OAM Handler, which
notifies the ABNO Controller.

The ABNO Controller commissions the PCE to compute a new path,
reusing as much of the original path as possible, and the PCE sets
up the new LSP.

Once the new path has been set up and the network reports that it is
in use correctly, the ABNO Controller instructs the PCE to tear down
the old path.

   +---------------------------------------------+
   | OSS / NMS / Application Service Coordinator |
   +----------------------+----------------------+
                          |
             +------------+------------+     +-------+
             |     ABNO Controller     +-----+  OAM  |
             +------------+------------+     |Handler|
                          |                  +---+---+
                  +-------+-------+              |
                  |      PCE      |              |
                  +-------+-------+              |
                          |                      |
   +----------------------+----------------------+-+
  /                    Network                      \
 +---------------------------------------------------+

        Figure 16 : The Make-Before-Break Restoration Process
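The instruction to reuse as much of the original path as possible
can be captured as a weighting in the computation.  The sketch below
is a toy only (it is not an algorithm defined by PCEP or by this
document): a standard shortest-path search in which links of the
original path are discounted by an invented "reuse bonus" and the
failed link is excluded.

   import heapq

   def restore_path(graph, old_path, failed, src, dst, reuse_bonus=0.5):
       # graph: {node: {neighbor: cost}}; 'failed' is a broken link.
       old_edges = {frozenset(e) for e in zip(old_path, old_path[1:])}
       dist, heap = {src: 0.0}, [(0.0, src, [src])]
       while heap:
           d, node, path = heapq.heappop(heap)
           if node == dst:
               return path
           if d > dist.get(node, float("inf")):
               continue
           for nxt, cost in graph[node].items():
               edge = frozenset((node, nxt))
               if edge == frozenset(failed):
                   continue                  # skip the failed link
               if edge in old_edges:
                   cost *= reuse_bonus       # prefer reused hops
               nd = d + cost
               if nd < dist.get(nxt, float("inf")):
                   dist[nxt] = nd
                   heapq.heappush(heap, (nd, nxt, path + [nxt]))
       return None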
3.3.3 Make-Before-Break for Path Test and Selection

In a more complicated use case, an LSP may be monitored for a number
of attributes such as delay and jitter.  When the measured quality
of the LSP falls below a threshold, the traffic may be moved to
another LSP that offers the desired (or at least a better) quality
of service.  To achieve this, it is necessary to establish the new
LSP and test it, and because the traffic must not be interrupted,
make-before-break must be used.

Moreover, it may be the case that no new LSP can provide the desired
attributes, and that a number of LSPs need to be tested so that the
best can be selected.  Furthermore, even when the original LSP is
set up, it could be desirable to test a number of LSPs before
deciding which should be used to carry the traffic.

Figure 17 shows the components that interact for this use case.
Because multiple LSPs might exist at once, a distinct action is
needed to coordinate which one carries the traffic, and this is the
job of the I2RS Client acting under the control of the ABNO
Controller.

The OAM Handler is responsible for initiating tests on the LSPs and
for reporting the results back to the ABNO Controller.  The OAM
Handler can also check end-to-end connectivity test results across a
multi-domain network even when each domain runs a different
technology.  For example, an end-to-end path might be achieved by
stitching together an MPLS segment, an Ethernet/VLAN segment, and an
IP segment.

Otherwise, the process is similar to that for re-optimization
discussed in Section 3.3.1.

   +---------------------------------------------+
   | OSS / NMS / Application Service Coordinator |
   +----------------------+----------------------+
                          |
 +------+    +------------+------------+    +-------+
 |Policy+----+      ABNO Controller    +----+  OAM  |
 |Agent |    |                      +--+    |Handler|
 +------+    +------------+--------++--+    +---+---+
                          |        |            |
                  +-------+-----+ ++-----+      |
                  |     PCE     | | I2RS |      |
                  +-------+-----+ |Client|      |
                          |       +--+---+      |
                          |          |          |
   +----------------------+----------+----------+-+
  /                      Network                   \
 +---------------------------------------------------+

   Figure 17 : The Make-Before-Break Path Test and Selection Process

The pseudo-code that follows gives an indication of the interactions
between ABNO components.
   OSS requests quality-assured service

   :Label1
   DoWhile not enough LSPs (ABNO Controller)
     Instruct PCE to compute and provision the LSP (ABNO Controller)
     Create the LSP (PCE)
   EndDo

   :Label2
   DoFor each LSP (ABNO Controller)
     Test LSP (OAM Handler)
     Report results to ABNO Controller (OAM Handler)
   EndDo

   Evaluate results of all tests (ABNO Controller)
   Select preferred LSP and instruct I2RS client (ABNO Controller)
   Put traffic on preferred LSP (I2RS Client)

   DoWhile too many LSPs (ABNO Controller)
     Instruct PCE to tear down unwanted LSP (ABNO Controller)
     Tear down unwanted LSP (PCE)
   EndDo

   DoUntil trigger (OAM Handler, ABNO Controller, Policy Agent)
     Keep sending traffic (Network)
     Test LSP (OAM Handler)
   EndDo

   If there is already a suitable LSP (ABNO Controller)
     GoTo Label2
   Else
     GoTo Label1
   EndIf

3.4 Global Concurrent Optimization

Global Concurrent Optimization (GCO) is defined in [RFC5557] and
represents a key technology for maximizing network efficiency by
computing a set of traffic-engineered paths concurrently.  A GCO
path computation request will simultaneously consider the entire
topology of the network and the complete set of new LSPs together
with their respective constraints.  Similarly, GCO may be applied to
recompute the paths of a set of existing LSPs.

GCO may be requested in a number of scenarios.  These include:

o  Routing of new services where the PCE should consider other
   services or network topology.

o  A reoptimization of existing services due to fragmented network
   resources or sub-optimal placement of sequentially computed
   services.

o  Recovery of connectivity for bulk services in the event of a
   catastrophic network failure.

A service provider may also want to compute and deploy new bulk
services based on a predicted traffic matrix.  The GCO functionality
and the capability to perform concurrent computation provide a
significant network optimization advantage, making optimal use of
network resources and avoiding blocking.

The following use case shows how the ABNO architecture and
components are used to achieve concurrent optimization across a set
of services.

3.4.1 Use Case: GCO with MPLS LSPs

When considering the GCO path computation problem, we can split the
GCO objective functions into three optimization categories:

o  Minimize aggregate Bandwidth Consumption (MBC).

o  Minimize the load of the Most Loaded Link (MLL).

o  Minimize Cumulative Cost of a set of paths (MCC).

This use case assumes that the GCO request will be offline and
initiated from an NMS/OSS; that is, it may take significant time to
compute the solution, and the paths reported in the response may
need to be verified by the user before being provisioned within the
network.

1. Request Management

   The NMS/OSS issues a request for new service connectivity for
   bulk services.  The ABNO Controller verifies that the NMS/OSS has
   sufficient rights to make the service request and to apply a GCO
   attribute requesting Minimize aggregate Bandwidth Consumption
   (MBC), as shown in Figure 18.

              +---------------------+
              |       NMS/OSS       |
              +----------+----------+
                         |
                         V
 +--------+    +---------+-----------+
 | Policy +->--+   ABNO Controller   |
 | Agent  |    |                     |
 +--------+    +---------------------+

              Figure 18 : NMS Request to ABNO Controller
   1a. Each service request has a source, a destination, and a
       bandwidth request.  These service requests are sent to the
       ABNO Controller and categorized as a GCO request.

2. Service Path Computation in the Packet Layer

   To compute a set of services for the GCO application, PCEP
   supports synchronization vector (SVEC) lists for synchronized,
   dependent path computations as defined in [RFC5440] and described
   in [RFC6007].

   2a. The ABNO Controller sends the bulk service request to the
       GCO-capable packet layer PCE using PCEP messaging.  The PCE
       uses the appropriate policy for the request and consults the
       TED for the packet layer, as shown in Figure 19.

          +-----------------+
          | ABNO Controller |
          +-----+-----------+
                |
                V
 +--------+   +-+------------+   +--------+
 |        |   |              |   |        |
 | Policy +->-+ GCO-capable  +---+ Packet |
 | Agent  |   | Packet Layer |   |  TED   |
 |        |   |     PCE      |   |        |
 +--------+   +--------------+   +--------+

       Figure 19 : Path Computation Request from GCO-capable PCE

   2b. Upon receipt of the bulk (GCO) service requests, the PCE
       applies the MBC objective function and computes the services
       concurrently.

   2c. Once the requested GCO service path computation completes,
       the PCE sends the resulting paths back to the ABNO
       Controller.  The response includes a fully computed explicit
       path for each service (TE LSP).

              +---------------------+
              |       NMS/OSS       |
              +----------+----------+
                         ^
                         |
              +----------+----------+
              |   ABNO Controller   |
              |                     |
              +---------------------+

           Figure 20 : ABNO Sends Solution to the NMS/OSS

3. The concurrently computed solution received from the PCE is sent
   back to the NMS/OSS by the ABNO Controller as a PCEP response, as
   shown in Figure 20.  The NMS/OSS user can then check the
   candidate paths and either provision the new services or save the
   solution for deployment in the future.
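   In PCEP, concurrency is expressed with SVEC objects rather than
   code, but the value of concurrent computation can be shown with a
   toy model.  The Python sketch below (invented data and names)
   minimizes aggregate bandwidth consumption by considering the
   demand set as a whole, here crudely approximated by searching all
   orderings, instead of committing to a single arrival order.

      from itertools import permutations

      CAP = {("A", "B"): 10, ("B", "C"): 10, ("A", "C"): 10}
      PATHS = {"d1": [[("A", "C")], [("A", "B"), ("B", "C")]],
               "d2": [[("A", "C")], [("A", "B"), ("B", "C")]]}
      BW = {"d1": 10, "d2": 5}          # demand bandwidths

      def place(order):
          cap, used = dict(CAP), 0
          for d in order:
              for path in PATHS[d]:     # first path that fits
                  if all(cap[l] >= BW[d] for l in path):
                      for l in path:
                          cap[l] -= BW[d]
                      used += BW[d] * len(path)   # MBC metric
                      break
              else:
                  return None           # demand blocked
          return used

      # Consider the whole set of demands together.
      print(min(filter(None, (place(o) for o in permutations(BW))),
                default=None))          # 20; one fixed order costs 25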
3.5 Adaptive Network Management (ANM)

The ABNO architecture provides the capability for reactive network
control of resources, relying on classification, profiling, and
prediction based on current demands and resource utilization.
Server-layer transport network resources, such as Optical Transport
Network (OTN) time-slicing [G.709] or the fine granularity grid of
wavelengths with variable spectral bandwidth (flexi-grid) [G.694.1],
can be manipulated to meet current and projected demands in a model
called Elastic Optical Networks (EON) [EON].

EON provides spectrum-efficient and scalable transport by
introducing flexible granular traffic grooming in the optical
frequency domain.  This is achieved using arbitrary contiguous
concatenation of optical spectrum that allows the creation of
custom-sized bandwidth.  This bandwidth is defined in slots of
12.5 GHz.

Adaptive Network Management (ANM) with EON allows appropriately
sized optical bandwidth to be allocated to an end-to-end optical
path.  In flexi-grid, the allocation is performed according to the
traffic volume, optical modulation format, and associated reach, or
following user requests, and can be achieved in a highly
spectrum-efficient and scalable manner.  Similarly, OTN provides for
flexible and granular provisioning of bandwidth on top of wavelength
switched optical networks (WSONs).

To use optical resources efficiently, a system is required that can
monitor network resources and decide the optimal network
configuration based on the status, bandwidth availability, and user
service.  We call this ANM.

3.5.1. ANM Trigger

There are different reasons to trigger an adaptive network
management process; these include:

o  Measurement: traffic measurements can be used to cause spectrum
   allocations that fit the traffic needs as efficiently as
   possible.  This function may be influenced by measuring the IP
   router traffic flows, by examining traffic engineering or link
   state databases, by usage thresholds for critical links in the
   network, or by requests from external entities.  Nowadays,
   network operators have active monitoring probes in the network,
   which store their results in the OSS.  The OSS or OAM Handler
   components activate this measurement-based trigger, so the ABNO
   Controller would not be directly involved in this case.

o  Human: operators may request ABNO to run an adaptive network
   planning process via an NMS.

o  Periodic: an adaptive network planning process can be run
   periodically to find an optimum configuration.

An ABNO Controller would receive a request from the OSS or NMS to
run an adaptive network management process.

3.5.2. Processing request and GCO computation

Based on the human or periodic trigger requests described in the
previous section, the OSS or NMS will send a request to the ABNO
Controller to perform EON-based GCO.  The ABNO Controller will
select a set of services to be reoptimized and choose an objective
function that will deliver the best use of network resources.  In
making these choices, the ABNO Controller is guided by network-wide
policy on the use of resources, the definition of optimization, and
the level of perturbation to existing services that is tolerable.

Much as in Section 3.4, this request for GCO is passed to the PCE.
The PCE can then consider the end-to-end paths and every channel's
optimal spectrum assignment in order to satisfy traffic demands and
optimize the optical spectrum consumption within the network.

The PCE will operate on the TED, but is likely to also be stateful
so that it knows which LSPs correspond to which waveband allocations
on which links in the network.  Once the PCE arrives at an answer,
it returns a set of potential paths to the ABNO Controller, which
passes them on to the NMS or OSS to supervise/select the subsequent
path set-up/modification process.

This exchange is shown in Figure 21.  Note that the figure does not
show the interactions used by the OSS/NMS for establishing or
modifying LSPs in the network.

        +---------------------------+
        |         OSS or NMS        |
        +-----------+---+-----------+
                    |   A
                    V   |
 +------+   +-------+---+---------+
 |Policy+->-+   ABNO Controller   |
 |Agent |   |                     |
 +------+   +-------+---+---------+
                    |   A
                    V   |
             +------+---+--+
             |     PCE     |
             +-------------+

    Figure 21 : Adaptive Network Management with human intervention
3.5.3. Automated Provisioning Process

Although most network operations are supervised by the operator,
there are some actions that may not require supervision, such as a
simple modification of a modulation format in a Bit-rate Variable
Transponder (BVT) (to increase the optical spectrum efficiency or
reduce energy consumption).  In such processes, where human
intervention is not required, the PCE sends the Provisioning Manager
the new configuration to configure the network elements, as shown in
Figure 22.

      +------------------------+
      |       OSS or NMS       |
      +-----------+------------+
                  |
                  V
 +------+   +-----+------------+
 |Policy+->-+  ABNO Controller |
 |Agent |   |                  |
 +------+   +-----+------------+
                  |
                  V
           +------+------+
           |     PCE     |
           +------+------+
                  |
                  V
 +----------------------------------+
 |       Provisioning Manager       |
 +----------------------------------+

   Figure 22 : Adaptive Network Management without human intervention

3.6 Pseudowire Operations and Management

Pseudowires in an MPLS network [RFC3985] operate as a form of
layered network over the connectivity provided by the MPLS network.
The pseudowires are carried by LSPs operating as transport tunnels,
and planning is necessary to determine how those tunnels are placed
in the network and which tunnels are used by any pseudowire.

This section considers four use cases: multi-segment pseudowires,
path-diverse pseudowires, path-diverse multi-segment pseudowires,
and pseudowire segment protection.  Section 3.6.5 describes the
applicability of the ABNO architecture to these four use cases.

3.6.1 Multi-Segment Pseudowires

[RFC5254] describes the architecture for multi-segment pseudowires.
An end-to-end service, as shown in Figure 23, can consist of a
series of stitched segments, shown in the figure as AC, PW1, PW2,
PW3, and AC.  Each pseudowire segment is stitched at a 'stitching
PE' (S-PE): for example, PW1 is stitched to PW2 at S-PE1.  Each
access circuit (AC) is stitched to a pseudowire segment at a
'terminating PE' (T-PE): for example, PW1 is stitched to the AC at
T-PE1.

Each pseudowire segment is carried across the MPLS network in an LSP
operating as a transport tunnel: for example, PW1 is carried in
LSP1.  The LSPs between provider edge nodes (PEs) may traverse
different MPLS networks with the PEs as border nodes, or the PEs may
lie within the network such that each LSP spans only part of the
network.

          -----            -----            -----            -----
 ---     |T-PE1|   LSP1   |S-PE1|   LSP2   |S-PE3|   LSP3   |T-PE2|  +---+
|   | AC |     |==========|     |==========|     |==========|     |AC|   |
|CE1|----|........PW1.........|......PW2......|......PW3.........|--|CE2|
|   |    |     |==========|     |==========|     |==========|     |  |   |
 ---     |     |          |     |          |     |          |     |  +---+
          -----            -----            -----            -----

                Figure 23 : Multi-Segment Pseudowire
While the topology shown in Figure 23 is easy to navigate, the
reality of a deployed network can be considerably more complex.  The
topology in Figure 24 shows a small mesh of PEs.  The links between
the PEs are not physical links but represent the potential for MPLS
LSPs between the PEs.

When establishing the end-to-end service between customer edge nodes
(CEs) CE1 and CE2, some choice must be made about which PEs to use.
In other words, a path computation must be made to determine the
pseudowire segment 'hops', and then the necessary LSP tunnels must
be established to carry the pseudowire segments that will be
stitched together.

Of course, each LSP may itself require a path computation decision
to route it through the MPLS network between PEs.

The choice of path for the multi-segment pseudowire will depend on
such issues as:
- MPLS connectivity
- MPLS bandwidth availability
- pseudowire stitching capability and capacity at PEs
- policy and confidentiality considerations for the use of PEs.

                              -----
                             |S-PE5|
                            /-----\
   ---     -----       ----/       \----       -----     ---
  |CE1|---|T-PE1|-----|S-PE1|-----|S-PE3|-----|T-PE2|---|CE2|
   ---     -----\      -----\        |        /-----     ---
                 \        |  ------- |       /
                  \       |         \|      /
                   \    -----      -----   /
                    ---|S-PE2|----|S-PE4|--
                        -----      -----

          Figure 24 : Multi-Segment Pseudowire Network Topology

3.6.2 Path-Diverse Pseudowires

The connectivity service provided by a pseudowire may need to be
resilient to failure.  In many cases, this function is provided by
provisioning a pair of pseudowires carried by path-diverse LSPs
across the network as shown in Figure 25 (the terminology is
inherited directly from [RFC3985]).  Clearly, in this case, the
challenge is to keep the two LSPs (LSP1 and LSP2) disjoint within
the MPLS network.  This problem is no different from the normal MPLS
path-diversity problem.

          -------                            -------
         |  PE1  |           LSP1           |  PE2  |
      AC |       |==========================|       | AC
     ----|.................PW1..............|.......|----
    /    |       |==========================|       |    \
 -----  /|       |                          |       |\  -----
|     |/ |       |                          |       | \|     |
| CE1 +  |       |       MPLS Network       |       |  + CE2 |
|     |\ |       |                          |       | /|     |
 -----  \|       |==========================|       |/  -----
    \----|.................PW2..............|.......|----/
      AC |       |==========================|       | AC
         |       |           LSP2           |       |
          -------                            -------

               Figure 25 : Path-Diverse Pseudowires

          -------                            -------
         |  PE1  |           LSP1           |  PE2  |
      AC |       |==========================|       | AC
      ---|.................PW1..............|.......|---
     /   |       |==========================|       |   \
 -----    -------                            -------    -----
|     |                                                 |     |
| CE1 +                 MPLS Network                    + CE2 |
|     |                                                 |     |
 -----    -------                            -------    -----
     \   |  PE3  |==========================|  PE4  |   /
      ---|.................PW2..............|.......|---
      AC |       |==========================|       | AC
         |       |           LSP2           |       |
          -------                            -------

       Figure 26 : Path-Diverse Pseudowires With Disjoint PEs

Figure 26 develops the path-diverse pseudowire scenario by
"dual-homing" each CE through more than one PE.  The requirement for
LSP path diversity is exactly the same, but it is complicated by the
LSPs having distinct end points.  In this case, the head-end router
(e.g., PE1) cannot be relied upon to maintain the path diversity
through the signaling protocol because it is aware of the path of
only one of the LSPs.  Thus, some form of coordinated path
computation approach is needed.
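The value of coordinated computation can be demonstrated concretely.
The following toy Python sketch uses a simplified abstraction of the
PE mesh discussed in Section 3.6.3 (the link set is invented for
exposition): a greedy choice of the first path leaves no fully
diverse second path, while a joint search over pairs of paths finds
a diverse pair.

   from itertools import combinations

   LINKS = [("CE1","T-PE1"), ("CE1","T-PE3"), ("T-PE1","S-PE1"),
            ("T-PE3","S-PE2"), ("S-PE1","S-PE3"), ("S-PE1","S-PE5"),
            ("S-PE2","S-PE4"), ("S-PE3","S-PE4"), ("S-PE3","S-PE5"),
            ("S-PE3","T-PE2"), ("S-PE4","T-PE2"), ("S-PE5","T-PE4"),
            ("T-PE2","CE2"), ("T-PE4","CE2")]
   ADJ = {}
   for a, b in LINKS:
       ADJ.setdefault(a, set()).add(b)
       ADJ.setdefault(b, set()).add(a)

   def simple_paths(src, dst, path=None):
       path = path or [src]
       if path[-1] == dst:
           yield path
           return
       for nxt in sorted(ADJ[path[-1]] - set(path)):
           yield from simple_paths(src, dst, path + [nxt])

   def disjoint(p, q):            # no shared transit PEs
       return not (set(p[1:-1]) & set(q[1:-1]))

   paths = sorted(simple_paths("CE1", "CE2"), key=len)
   first = paths[0]               # greedy choice: a shortest path
   greedy_ok = any(disjoint(first, q) for q in paths)
   joint = next(((p, q) for p, q in combinations(paths, 2)
                 if disjoint(p, q)), None)
   print(greedy_ok, joint is not None)   # prints: False True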
3.6.3 Path-Diverse Multi-Segment Pseudowires

Figure 27 shows how the services in the previous two sections may be
combined to offer end-to-end diverse paths in a multi-segment
environment.  To offer end-to-end resilience to failure, two
entirely diverse, end-to-end multi-segment pseudowires may be
needed.

                           -----              -----
                          |S-PE5|------------|T-PE4|
                          /-----\              -----\
      -----         -----/       \-----    -----     \ ---
     |T-PE1|-------|S-PE1|-------|S-PE3|--|T-PE2|-----|CE2|
 --- /-----         -----         -----   /-----       ---
|CE1|<                                   /
 --- \                                  /
      \    -----       -----      -----/
       ---|T-PE3|-----|S-PE2|----|S-PE4|
           -----       -----      -----

   Figure 27 : Path-Diverse Multi-Segment Pseudowire Network Topology

Just as in any diverse-path computation, the selection of the first
path needs to be made with awareness of the fact that a second,
fully diverse path is also needed.  If a sequential computation were
applied to the topology in Figure 27, the first path
CE1,T-PE1,S-PE1,S-PE3,T-PE2,CE2 would make it impossible to find a
second path that was fully diverse from the first.

But the problem is complicated by the multi-layer nature of the
network.  It is not enough for the PEs to be chosen as diverse,
because the LSP tunnels between them might share links within the
MPLS network.  Thus, a multi-layer planning solution is needed to
achieve the desired level of service.

3.6.4 Pseudowire Segment Protection

An alternative to the end-to-end pseudowire protection service
enabled by the mechanism described in Section 3.6.3 can be achieved
by protecting individual pseudowire segments or PEs.  For example,
in Figure 27, the pseudowire between S-PE1 and S-PE5 may be
protected by a pair of stitched segments running between S-PE1 and
S-PE5, and between S-PE5 and S-PE3.  This is shown in detail in
Figure 28.

             -------            -------            -------
            | S-PE1 |   LSP1   | S-PE5 |   LSP3   | S-PE3 |
            |       |==========|       |==========|       |
            |   .........PW1...............PW3........    |
  Incoming  |   :   |==========|       |==========|  :    |  Outgoing
  segment   |   :   |           -------           |  :    |  segment
 ...........:   :   |                             |  :...........
            |   :   |                             |  :    |
            |   :   |=============================|  :    |
            |   .........PW2.........................:    |
            |       |=============================|       |
            |       |            LSP2             |       |
             -------                               -------

   Figure 28 : Fragment of a Segment-Protected Multi-Segment Pseudowire

The determination of pseudowire protection segments requires
coordination and planning, and just as in Section 3.6.3, this
planning must be cognizant of the paths taken by LSPs through the
underlying MPLS networks.

3.6.5 Applicability of ABNO to Pseudowires

The ABNO architecture lends itself well to the planning and control
of pseudowires in the use cases described above.  The user or
application needs a single point at which it requests services: the
ABNO Controller.  The ABNO Controller can ask a PCE to draw on the
topology of pseudowire stitching-capable PEs as well as additional
information regarding PE capabilities, such as load on PEs and
administrative policies, and the PCE can use a series of TEDs or
other PCEs for the underlying MPLS networks to determine the paths
of the LSP tunnels.  At the time of writing, PCEP does not support
path computation requests and responses concerning pseudowires, but
the concepts are very similar to existing uses, and the necessary
extensions would be very small.

Once the paths have been computed, a number of different
provisioning systems can be used to instantiate the LSPs and
provision the pseudowires under the control of the Provisioning
Manager.  The ABNO Controller will use the I2RS Client to instruct
the network devices about what traffic should be placed on which
pseudowires, and in conjunction with the OAM Handler it can ensure
that failure events are handled correctly, that service quality
levels are appropriate, and that service protection levels are
maintained.

In many respects, the pseudowire network forms an overlay network
(with its own TED and provisioning mechanisms) carried by underlying
packet networks.  Further client networks (the pseudowire payloads)
may be carried by the pseudowire network.  Thus, the problem space
being addressed by ABNO in this case is a classic multi-layer
network.
3.7. Cross-Stratum Optimization (CSO)

Considering the term "stratum" to broadly differentiate the layers
of most concern to the application and to the network in general,
the need for Cross-Stratum Optimization (CSO) arises when the
application stratum and network stratum need to be coordinated to
achieve operational efficiency as well as resource optimization in
both application and network strata.

Data center-based applications can provide a wide variety of
services such as video gaming, cloud computing, and grid
applications.  High-bandwidth video applications are also emerging,
such as remote medical surgery, live concerts, and sporting events.

This use case for the ABNO architecture is mainly concerned with
data center applications that make substantial bandwidth demands
either in aggregate or individually.  In addition, these
applications may need specific bounds on QoS-related parameters such
as latency and jitter.

3.7.1. Data Center Network Operation

Data centers come in a wide variety of sizes and configurations, but
all contain compute servers, storage, and application control.  Data
centers offer application services to end-users such as video
gaming, cloud computing, and others.  Since the data centers used to
provide application services may be distributed around a network,
the decisions about the control and management of application
services, such as where to instantiate another service instance or
to which data center a new client is assigned, can have a
significant impact on the state of the network.  Conversely, the
capabilities and state of the network can have a major impact on
application performance.

These decisions are typically made by applications with very little
or no information concerning the underlying network.  Hence, such
decisions may be sub-optimal from the application's point of view or
in terms of network resource utilization and quality of service.

Cross-stratum optimization is the process of optimizing both the
application experience and the network utilization by coordinating
decisions in the application stratum and the network stratum.
Application resources can be roughly categorized into computing
resources (i.e., servers of various types and granularities, such as
VMs, memory, and storage) and content (e.g., video, audio,
databases, and large data sets).  By "network stratum" we mean the
IP layer and below (e.g., MPLS, SDH, OTN, WDM).  The network stratum
has resources that include routers, switches, and links.  We are
particularly interested in further unleashing the potential
presented by MPLS and GMPLS control planes at the lower network
layers in response to the high aggregate or individual demands from
the application layer.

This use case demonstrates that the ABNO architecture can allow
cross-stratum application/network optimization for the data center
use case.  Other forms of cross-stratum optimization (for example,
for peer-to-peer applications) are out of scope.

3.7.1.1. Virtual Machine Migration

A key enabler for data center cost savings, consolidation,
flexibility, and application scalability has been the technology of
compute virtualization provided through Virtual Machines (VMs).  To
the software application, a VM looks like a dedicated processor with
dedicated memory and a dedicated operating system.

VMs offer not only a unit of compute power but also provide an
"application environment" that can be replicated, backed up, and
moved.  Different VM configurations may be offered that are
optimized for different types of processing (e.g., memory intensive,
throughput intensive).

VMs may be moved between compute resources in a data center and
could be moved between data centers.  VM migration serves to balance
load across data center resources and has several modes:

   (i)   scheduled vs. dynamic;
   (ii)  bulk vs. sequential;
   (iii) point-to-point vs. point-to-multipoint.

While VM migration may solve problems of load or planned maintenance
within a data center, it can also be effective in reducing network
load around the data center.  But the act of migrating VMs,
especially between data centers, can impact the network and other
services that are offered.

For certain applications, such as disaster recovery, bulk migration
is required on the fly, which may necessitate concurrent computation
and dynamic path setup.

Thus, application stratum operations must also take account of the
situation in the network stratum, even as the application stratum
actions may be driven by the status of the network stratum.

3.7.1.2. Load Balancing

Application servers may be instantiated in many data centers located
in different parts of the network.  When an end-user makes an
application request, a decision has to be made about which data
center should host the processing and storage required to meet the
request.  One of the major drivers for operating multiple data
centers (rather than one very large data center) is so that the
application will run on a machine that is closer to the end-users,
improving the user experience by reducing network latency.  However,
if the network is congested or the data center is overloaded, this
strategy can backfire.

Thus, the key factors to be considered in choosing the server on
which to instantiate a VM for an application include:

- The utilization of the servers in the data center

- The network load conditions within a data center

- The network load conditions between data centers

- The network conditions between the end-user and the data center

Again, the choices made in the application stratum need to consider
the situation in the network stratum; a toy illustration of such a
combined choice follows.
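A cross-stratum placement decision of this kind can be reduced, in
caricature, to a weighted score that mixes application-stratum and
network-stratum measurements.  The Python sketch below uses invented
numbers and weights; in practice, the inputs would come from the
data center databases, the TED, and the OAM Handler, and the weights
from policy.

   # Illustrative cross-stratum selection of a data center for a
   # new VM (all figures invented).  Lower score is better.
   CANDIDATES = {
       "DC1": {"server_load": 0.80, "latency_ms": 12},
       "DC2": {"server_load": 0.40, "latency_ms": 30},
       "DC3": {"server_load": 0.55, "latency_ms": 18},
   }

   def score(dc, w_load=100.0, w_latency=1.0):
       s = CANDIDATES[dc]
       return w_load * s["server_load"] + w_latency * s["latency_ms"]

   print(min(CANDIDATES, key=score))   # with these numbers: DC2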
3.7.2. Application of the ABNO Architecture

This section shows how the ABNO architecture is applicable to the
cross-stratum data center issues described in Section 3.7.1.

Figure 29 shows a diagram of an example data center-based
application.  A carrier network provides access for an end-user
through PE4.  Three data centers (DC1, DC2, and DC3) are accessed
through different parts of the network via PE1, PE2, and PE3.

The Application Service Coordinator receives information from the
end-user about the desired services, and converts this to service
requests that it passes to the ABNO Controller.  The end-users may
already know which data center they wish to use, the Application
Service Coordinator may be able to make this determination, or
otherwise the task of selecting the data center must be performed by
the ABNO Controller, and this may utilize a further database (see
Section 2.3.1.8) containing information about server loads and other
data center parameters.

The ABNO Controller examines the network resources using information
gathered from the other ABNO components and uses those components to
configure the network to support the end-user's needs.

 +----------+    +---------------------------------+
 | End-user |--->| Application Service Coordinator |
 +----------+    +---------------------------------+
      |                          |
      |                          v
      |                 +-----------------+
      |                 | ABNO Controller |
      |                 +-----------------+
      |                          |
      |                          v
      |          +---------------------+        +--------------+
      |          |Other ABNO Components|        | o o o   DC 1 |
      |          +---------------------+        |  \|/         |
      |                                  -------|---O          |
      |                                 |       |              |
      |       --------------------------|---    +--------------+
      |      /  Carrier Network    PE1  |   \
      |     /    ...................O   |    \   +--------------+
      |    |     .                      |     |  | o o o   DC 2 |
      |    |     . PE4             PE2  |     |  |  \|/         |
      -----|-----O.....................O------|--|---O          |
           |     .                      |     |  |              |
           |     .                 PE3  |     |  +--------------+
            \    ...................O   |    /
             \                      |   |   /    +--------------+
              ----------------------|--------    | o o o   DC 3 |
                                    |            |  \|/         |
                                    -------------|---O          |
                                                 |              |
                                                 +--------------+

               Figure 29 : The ABNO Architecture in the Context of
                  Cross-Stratum Optimization for Data Centers

3.7.2.1. Deployed Applications, Services, and Products

The ABNO Controller will need to utilize a number of components to
realize the CSO functions described in Section 3.7.1.

The ALTO server provides information about topological proximity and
appropriate geographical locations of servers with respect to the
underlying networks.  This information can be used to optimize the
selection of peer location, which will help shorten the paths of IP
traffic or contain it within specific service providers' networks.
ALTO, in conjunction with the ABNO Controller and the Application
Service Coordinator, can address general problems such as the
selection of application servers based on resource availability and
usage of the underlying networks.

The ABNO Controller can also formulate a view of current network
load from the TED and from the OAM Handler (for example, by running
diagnostic tools that measure latency, jitter, and packet loss).
This view obviously influences not just how paths from end-user to
data center are provisioned, but can also guide the selection of
which data center should provide the service and possibly even the
points of attachment to be used by the end-user to reach the chosen
data center.  A view of how PCE can fit in with CSO is provided in
[I-D.dhody-pce-cso-enabled-path-computation], on which the content
of Figure 29 is based.

As already discussed, the combination of the ABNO Controller and the
Application Service Coordinator will need to be able to select (and
possibly migrate) the location of the VM that provides the service
for the end-user.  Since a common technique used to direct the
end-user to the correct VM/server is to employ DNS redirection, an
important capability of the ABNO Controller will be the ability to
program the DNS servers accordingly.

Furthermore, as already noted in other sections of this document,
the ABNO Controller can coordinate the placement of traffic within
the network to achieve load-balancing and to provide resilience to
failures.  These features can be used in conjunction with the
functions discussed above to ensure that the placement of new VMs,
the traffic that they generate, and the load caused by VM migration
can be carried by the network and do not disrupt existing services.

3.8 ALTO Server

The ABNO architecture allows use cases with joint network and
application-layer optimization.  In such a use case, an application
is presented with an abstract network topology containing only
information relevant to the application.  The application computes
its application-layer routing according to its application
objective.  The application may interact with the ABNO Controller to
set up explicit LSPs to support its application-layer routing.

The following steps illustrate such a use case.

1. Application Request of Application-Layer Topology

   Consider the network shown in Figure 30.  The network consists of
   5 nodes and 6 links.

   The application, which has endpoints hosted at N0, N1, and N2,
   requests the network topology so that it can compute its
   application-layer routing, for example, to maximize the
   throughput of content replication among the endpoints at the
   three sites.

    +----+      L0  Wt=10 BW=50      +----+
    | N0 |...........................| N3 |
    +----+                           +----+
     |    \   L4                       |
     |     \  Wt=7                     |
     |      \ BW=40                    |
     |       \                         |
     L1       +----+                   |
     Wt=10    | N4 |             L2    |
     BW=45    +----+             Wt=12 |
     |       /                   BW=30 |
     |      /  L5                      |
     |     /   Wt=10                   |
     |    /    BW=45                   |
    +----+                           +----+
    | N1 |...........................| N2 |
    +----+      L3  Wt=15 BW=35      +----+

                 Figure 30 : Raw Network Topology
      +----+            \
        |  \             \
        |   \             \
        |    \             \
        |    |              \  AL0M2
   L1   |    | AL4M5         \ Wt=22
   Wt=10|    | Wt=17          \ BW=30
   BW=40|    | BW=40           \
        |    |                  \
        |    /                   \
        |   /                     \
        |  /                       \
      +----+                     +----+
      | N1 |.....................| N2 |
      +----+   L3 Wt=15 BW=35    +----+

        Figure 31 : Reduced Graph for a Particular Application

      The request arrives at the ABNO Controller, which forwards the
      request to the ALTO Server component.  The ALTO Server consults
      the Policy Agent, the TED, and the PCE to return an abstract
      application-layer topology.  Figure 31 shows a possible reduced
      topology for the application.  For example, the policy may
      specify that the bandwidth exposed to an application may not
      exceed 40.  The network has precomputed that the route from N0
      to N2 should use the path N0->N3->N2, according to goals such as
      GCO (see Section 3.4).

      The ALTO Server uses the topology and the existing routing to
      compute an abstract network map consisting of 3 PIDs.  The
      pair-wise bandwidths, as well as the shared bottlenecks, are
      computed from the internal network topology and reflected in
      cost maps.

   2. Application Computes Application Overlay

      Using the abstract topology, the application computes an
      application-layer routing.  For concreteness, the application
      may compute a spanning tree to maximize the total bandwidth from
      N0 to N2.  Figure 32 shows an example application-layer routing
      using a route of N0->N1->N2 for 35 Mbps and N0->N2 for 30 Mbps,
      for a total of 65 Mbps.  (An illustrative sketch of such a
      computation is given at the end of this section.)

      +----+
      | N0 |---------------------------------+
      +----+        AL0M2 BW=30              |
        |                                    |
        |                                    |
        |                                    |
        |                                    |
        | L1                                 |
        |                                    |
        | BW=35                              |
        |                                    |
        |                                    |
        |                                    |
        V                                    V
      +----+          L3 BW=35             +----+
      | N1 |...............................>| N2 |
      +----+                               +----+

            Figure 32 : Application-layer Spanning Tree

   3. Application Path Set Up by the ABNO Controller

      The application may submit its application-layer routes to the
      ABNO Controller to set up explicit LSPs to support its
      operation.  The ABNO Controller consults the ALTO maps to map
      the application-layer routing back to the internal network
      topology and then instructs the Provisioning Manager to set up
      the paths.  The ABNO Controller may re-trigger GCO to
      re-optimize network traffic engineering.

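   To make step 2 concrete, the following Python sketch computes the
   maximum total bandwidth from N0 to N2 over the reduced topology of
   Figure 31, reproducing the 65 Mbps shown in Figure 32.  The sketch
   is illustrative only: the data structures and the use of a textbook
   max-flow algorithm are assumptions of this example and are not
   mandated by the ALTO or ABNO specifications.

      from collections import deque

      # Reduced topology of Figure 31 as undirected links with their
      # exposed bandwidths.  The parallel abstract link AL4M5 (N0-N1)
      # is omitted because L3 caps the N1->N2 leg at 35, so AL4M5
      # cannot add throughput in this example.
      links = {("N0", "N1"): 40,   # L1
               ("N0", "N2"): 30,   # AL0M2
               ("N1", "N2"): 35}   # L3

      def max_throughput(links, src, dst):
          """Edmonds-Karp max-flow over an undirected graph."""
          residual = {}
          for (a, b), bw in links.items():
              residual.setdefault(a, {})[b] = bw
              residual.setdefault(b, {})[a] = bw
          total = 0
          while True:
              # Breadth-first search for an augmenting path.
              parent, queue = {src: None}, deque([src])
              while queue and dst not in parent:
                  u = queue.popleft()
                  for v, cap in residual[u].items():
                      if cap > 0 and v not in parent:
                          parent[v] = u
                          queue.append(v)
              if dst not in parent:
                  return total
              # Trace the path back and find its bottleneck bandwidth.
              path, v = [], dst
              while parent[v] is not None:
                  path.append((parent[v], v))
                  v = parent[v]
              flow = min(residual[u][v] for u, v in path)
              for u, v in path:
                  residual[u][v] -= flow
                  residual[v][u] += flow
              total += flow

      print(max_throughput(links, "N0", "N2"))  # -> 65 (30 + 35 via N1)

   The same result can be read directly from Figure 32: 30 Mbps on the
   abstract link AL0M2 plus 35 Mbps on the two-hop route through N1.
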
3.9 Other Potential Use Cases

   This section serves as a place-holder for other potential use cases
   that might be documented in future documents.

3.9.1 Traffic Grooming and Regrooming

   This use case could cover the following scenarios:

   - Nested LSPs
   - Packet Classification (IP flows into LSPs at edge routers)
   - Bucket Stuffing
   - IP Flows into ECMP Hash Bucket

3.9.2 Bandwidth Scheduling

   Bandwidth scheduling consists of configuring LSPs based on a given
   time schedule.  This can be used to support maintenance or
   operational schedules or to adjust network capacity based on
   traffic pattern detection.

   The ABNO framework provides the components needed to build
   bandwidth scheduling solutions.

4. Survivability and Redundancy within the ABNO Architecture

   The ABNO architecture described in this document is presented in
   terms of functional units.  Each unit could be implemented
   separately or bundled with other units into single programs or
   products.  Furthermore, each implemented unit or bundle could be
   deployed on a separate device (for example, a network server), on a
   separate virtual machine (for example, in a data center), or groups
   of programs could be deployed on the same processor.  From the
   point of view of the architectural model, these implementation and
   deployment choices are entirely unimportant.

   Similarly, the realization of a functional component of the ABNO
   architecture could be supported by more than one instance of an
   implementation, or by different instances of different
   implementations that provide the same or similar function.  For
   example, the PCE component might have multiple instantiations to
   share the processing load of a large number of computation
   requests, and different instances might have different algorithmic
   capabilities so that one instance might serve parallel computation
   requests for disjoint paths, while another instance might have the
   capability to compute optimal point-to-multipoint paths.

   This ability to have multiple instances of ABNO components also
   enables resiliency within the model since, in the event of the
   failure of one instance of one component (because of software
   failure, hardware failure, or connectivity problems), other
   instances can take over.  In some circumstances, state
   synchronization between instances of components may be needed in
   order to facilitate seamless resiliency.

   How these features are achieved in an ABNO implementation or
   deployment is outside the scope of this document.

5. Security Considerations

   The ABNO architecture describes a network system, and security must
   play an important part in it.

   The first consideration is that the external protocols (those shown
   as entering or leaving the big box in Figure 1) must be
   appropriately secured.  This security will include authentication
   and authorization to control access to the different functions that
   the ABNO system can perform, to enable different policies based on
   identity, and to manage the control of the network devices.

   Secondly, the internal protocols that are used between ABNO
   components must also have appropriate security, particularly when
   the components are implemented on separate network nodes.

   Considering that the ABNO system contains a large amount of data
   about the network, the services carried by the network, and the
   services delivered to customers, access to the information held in
   the system must be carefully managed.  Since such access will be
   largely through the external protocols, the policy-based controls
   enabled by authentication will be powerful.  But it should also be
   noted that any data sent from the databases in the ABNO system can
   reveal details of the network and should, therefore, be considered
   as a candidate for encryption.  Furthermore, since ABNO components
   can access the information stored in the databases, care is
   required to ensure that all such components are genuine, and to
   consider encrypting the data that flows between components when
   they are implemented at remote nodes.

   The conclusion is that all protocols used to realize the ABNO
   architecture should have rich security features.

6. Manageability Considerations

   The whole of the ABNO architecture is essentially about managing
   the network.  In this respect, there is very little extra to say.
   ABNO provides mechanisms to gather and collate information about
   the network, to report that information to management applications,
   to store it for future inspection, and to trigger actions according
   to configured policies.

   The ABNO system will, itself, need monitoring and management.  This
   can be seen as falling into several categories:

   - Management of external protocols

   - Management of internal protocols

   - Management and monitoring of ABNO components

   - Configuration of policy to be applied across the ABNO system

7. IANA Considerations

   This document makes no requests for IANA action.

8. Acknowledgements

   Thanks for discussions and review are due to Ken Gray, Jan Medved,
   Nitin Bahadur, Diego Caviglia, Joel Halpern, Brian Field, Ori
   Gerstel, Daniele Ceccarelli, Cyril Margaria, Jonathan Hardwick,
   Nico Wauters, Tom Taylor, Qin Wu, and Luis Contreras.  Thanks to
   George Swallow for suggesting the existence of the SRLG database.
   Tomonori Takeda and Julien Meuric provided valuable comments as
   part of their Routing Directorate reviews.  Tina Tsou provided
   comments as part of her Operational Directorate review.

   This work received funding from the European Union's Seventh
   Framework Programme for research, technological development, and
   demonstration through the PACE project under grant agreement number
   619712 and through the IDEALIST project under grant agreement
   number 317999.

9. References

9.1. Informative References

   [EON]      Gerstel, O., Jinno, M., Lord, A., and S.J.B. Yoo,
              "Elastic optical networking: a new dawn for the optical
              layer?", IEEE Communications Magazine, Volume 50,
              Issue 2, ISSN 0163-6804, February 2012.

   [Flood]    Project Floodlight, "Floodlight REST API",
              http://www.projectfloodlight.org.

   [G.694.1]  ITU-T Recommendation G.694.1 (revision 2), "Spectral
              grids for WDM applications: DWDM frequency grid",
              February 2012.

   [G.709]    ITU-T, "Interface for the Optical Transport Network
              (OTN)", G.709 Recommendation, October 2009.

   [I-D.dhody-pce-cso-enabled-path-computation]
              Dhody, D., Lee, Y., Contreras, LM., Gonzalez de Dios,
              O., and N. Ciulli, "Cross Stratum Optimization enabled
              Path Computation",
              draft-dhody-pce-cso-enabled-path-computation, work in
              progress.

   [I-D.ietf-i2rs-architecture]
              Atlas, A., Halpern, J., Hares, S., Ward, D., and T.
              Nadeau, "An Architecture for the Interface to the
              Routing System", draft-ietf-i2rs-architecture, work in
              progress.

   [I-D.ietf-i2rs-problem-statement]
              Atlas, A., Nadeau, T., and D. Ward, "Interface to the
              Routing System Problem Statement",
              draft-ietf-i2rs-problem-statement, work in progress.

   [I-D.ietf-idr-ls-distribution]
              Gredler, H., Medved, J., Previdi, S., Farrel, A., and
              S. Ray, "North-Bound Distribution of Link-State and TE
              Information using BGP", draft-ietf-idr-ls-distribution,
              work in progress.

   [I-D.ietf-netconf-restconf]
              Bierman, A., Bjorklund, M., and K. Watsen, "RESTCONF
              protocol", draft-ietf-netconf-restconf, work in
              progress.

   [I-D.ietf-netmod-routing-cfg]
              Lhotka, L., "A YANG Data Model for Routing Management",
              draft-ietf-netmod-routing-cfg, work in progress.

   [I-D.ietf-pce-pce-initiated-lsp]
              Crabbe, E., Minei, I., Sivabalan, S., and R. Varga,
              "PCEP Extensions for PCE-initiated LSP Setup in a
              Stateful PCE Model", draft-ietf-pce-pce-initiated-lsp,
              work in progress.

   [I-D.ietf-pce-stateful-pce]
              Crabbe, E., Medved, J., Minei, I., and R. Varga, "PCEP
              Extensions for Stateful PCE",
              draft-ietf-pce-stateful-pce, work in progress.

   [ONF]      Open Networking Foundation, "OpenFlow Switch
              Specification Version 1.4.0 (Wire Protocol 0x05)",
              October 2013.

   [RFC2748]  Durham, D., Ed., Boyle, J., Cohen, R., Herzog, S.,
              Rajan, R., and A. Sastry, "The COPS (Common Open Policy
              Service) Protocol", RFC 2748, January 2000.

   [RFC2753]  Yavatkar, R., Pendarakis, D., and R. Guerin, "A
              Framework for Policy-based Admission Control", RFC 2753,
              January 2000.

   [RFC3209]  Awduche, D., et al., "RSVP-TE: Extensions to RSVP for
              LSP Tunnels", RFC 3209, December 2001.

   [RFC3292]  Doria, A., Hellstrand, F., Sundell, K., and T. Worster,
              "General Switch Management Protocol (GSMP) V3",
              RFC 3292, June 2002.

   [RFC3412]  Case, J., Harrington, D., Presuhn, R., and B. Wijnen,
              "Message Processing and Dispatching for the Simple
              Network Management Protocol (SNMP)", RFC 3412,
              December 2002.

   [RFC3473]  Berger, L., et al., "Generalized Multi-Protocol Label
              Switching (GMPLS) Signaling Resource ReserVation
              Protocol-Traffic Engineering (RSVP-TE) Extensions",
              RFC 3473, January 2003.

   [RFC3630]  Katz, D., Kompella, K., and D. Yeung, "Traffic
              Engineering (TE) Extensions to OSPF Version 2",
              RFC 3630, September 2003.

   [RFC3746]  Yang, L., Dantu, R., Anderson, T., and R. Gopal,
              "Forwarding and Control Element Separation (ForCES)
              Framework", RFC 3746, April 2004.

   [RFC3985]  Bryant, S., Ed., and P. Pate, Ed., "Pseudo Wire
              Emulation Edge-to-Edge (PWE3) Architecture", RFC 3985,
              March 2005.

   [RFC4655]  Farrel, A., Vasseur, J.-P., and J. Ash, "A Path
              Computation Element (PCE)-Based Architecture", RFC 4655,
              August 2006.

   [RFC5150]  Ayyangar, A., Kompella, K., Vasseur, JP., and A. Farrel,
              "Label Switched Path Stitching with Generalized
              Multiprotocol Label Switching Traffic Engineering (GMPLS
              TE)", RFC 5150, February 2008.

   [RFC5212]  Shiomoto, K., Papadimitriou, D., Le Roux, JL.,
              Vigoureux, M., and D. Brungard, "Requirements for
              GMPLS-Based Multi-Region and Multi-Layer Networks
              (MRN/MLN)", RFC 5212, July 2008.

   [RFC5254]  Bitar, N., Bocci, M., and L. Martini, "Requirements for
              Multi-Segment Pseudowire Emulation Edge-to-Edge (PWE3)",
              RFC 5254, October 2008.

   [RFC5277]  Chisholm, S. and H. Trevino, "NETCONF Event
              Notifications", RFC 5277, July 2008.

   [RFC5305]  Li, T. and H. Smit, "IS-IS Extensions for Traffic
              Engineering", RFC 5305, October 2008.

   [RFC5394]  Bryskin, I., Papadimitriou, D., Berger, L., and J. Ash,
              "Policy-Enabled Path Computation Framework", RFC 5394,
              December 2008.

   [RFC5424]  Gerhards, R., "The Syslog Protocol", RFC 5424,
              March 2009.

   [RFC5440]  Vasseur, JP. and JL. Le Roux, "Path Computation Element
              (PCE) Communication Protocol (PCEP)", RFC 5440,
              March 2009.

   [RFC5520]  Bradford, R., Vasseur, JP., and A. Farrel, "Preserving
              Topology Confidentiality in Inter-Domain Path
              Computation Using a Path-Key-Based Mechanism", RFC 5520,
              April 2009.

   [RFC5557]  Lee, Y., Le Roux, JL., King, D., and E. Oki, "Path
              Computation Element Communication Protocol (PCEP)
              Requirements and Protocol Extensions in Support of
              Global Concurrent Optimization", RFC 5557, July 2009.

   [RFC5623]  Oki, E., Takeda, T., Le Roux, JL., and A. Farrel,
              "Framework for PCE-Based Inter-Layer MPLS and GMPLS
              Traffic Engineering", RFC 5623, September 2009.

   [RFC5693]  Seedorf, J. and E. Burger, "Application-Layer Traffic
              Optimization (ALTO) Problem Statement", RFC 5693,
              October 2009.

   [RFC5810]  Doria, A., et al., "Forwarding and Control Element
              Separation (ForCES) Protocol Specification", RFC 5810,
              March 2010.

   [RFC6007]  Nishioka, I. and D. King, "Use of the Synchronization
              VECtor (SVEC) List for Synchronized Dependent Path
              Computations", RFC 6007, September 2010.

   [RFC6020]  Bjorklund, M., "YANG - A Data Modeling Language for the
              Network Configuration Protocol (NETCONF)", RFC 6020,
              October 2010.

   [RFC6107]  Shiomoto, K. and A. Farrel, "Procedures for Dynamically
              Signaled Hierarchical Label Switched Paths", RFC 6107,
              February 2011.

   [RFC6120]  Saint-Andre, P., "Extensible Messaging and Presence
              Protocol (XMPP): Core", RFC 6120, March 2011.

   [RFC6241]  Enns, R., Bjorklund, M., Schoenwaelder, J., and A.
              Bierman, "Network Configuration Protocol (NETCONF)",
              RFC 6241, June 2011.

   [RFC6707]  Niven-Jenkins, B., Le Faucheur, F., and N. Bitar,
              "Content Distribution Network Interconnection (CDNI)
              Problem Statement", RFC 6707, September 2012.

   [RFC6805]  King, D. and A. Farrel, "The Application of the Path
              Computation Element Architecture to the Determination of
              a Sequence of Domains in MPLS and GMPLS", RFC 6805,
              November 2012.

   [RFC6982]  Sheffer, Y. and A. Farrel, "Improving Awareness of
              Running Code: The Implementation Status Section",
              RFC 6982, July 2013.

              [RFC Editor Note: This reference can be removed when
              Appendix B is removed.]

   [RFC7011]  Claise, B., Trammell, B., and P. Aitken, "Specification
              of the IP Flow Information Export (IPFIX) Protocol for
              the Exchange of Flow Information", STD 77, RFC 7011,
              September 2013.

   [RFC7285]  Alimi, R., Penno, R., and Y. Yang, "Application-Layer
              Traffic Optimization (ALTO) Protocol", RFC 7285,
              September 2014.

   [RFC7297]  Boucadair, M., Jacquenet, C., and N. Wang, "IP/MPLS
              Connectivity Provisioning Profile (CPP)", RFC 7297,
              July 2014.

   [TL1]      Telcordia, "Operations Application Messages - Language
              For Operations Application", GR-831, November 1996.

   [TMF-MTOSI]
              TeleManagement Forum, "Multi-Technology Operations
              Systems Interface (MTOSI)",
              https://www.tmforum.org/MTOSI/2319/home.html.

10. Contributors' Addresses

   Quintin Zhao
   Huawei Technologies
   125 Nagog Technology Park
   Acton, MA 01719
   US
   Email: qzhao@huawei.com

   Victor Lopez
   Telefonica I+D
   Email: vlopez@tid.es

   Ramon Casellas
   CTTC
   Email: ramon.casellas@cttc.es

   Yuji Kamite
   NTT Communications Corporation
   Email: y.kamite@ntt.com

   Yosuke Tanaka
   NTT Communications Corporation
   Email: yosuke.tanaka@ntt.com

   Young Lee
   Huawei Technologies
   Email: leeyoung@huawei.com

   Y. Richard Yang
   Yale University
   Email: yry@cs.yale.edu

11. Authors' Addresses

   Daniel King
   Old Dog Consulting
   Email: daniel@olddog.co.uk

   Adrian Farrel
   Juniper Networks
   Email: adrian@olddog.co.uk

Appendix A. Undefined Interfaces

   This Appendix provides a brief list of interfaces that are not yet
   defined at the time of writing.  Interfaces where there is a choice
   of existing protocols are not listed.

   - An interface for adding additional information to the Traffic
     Engineering Database is described in Section 2.3.2.3.  No
     protocol is currently identified for this interface, but
     candidates include:

     - The protocol developed or adopted to satisfy the requirements
       of I2RS [I-D.ietf-i2rs-architecture]

     - NETCONF [RFC6241]

   - The protocol to be used by the Interface to the Routing System is
     described in Section 2.3.2.8.  The I2RS working group has
     determined that this protocol will be based on a combination of
     NETCONF [RFC6241] and RESTCONF [I-D.ietf-netconf-restconf], with
     further additions and modifications as deemed necessary to
     deliver the desired function.  The details of the protocol are
     still to be determined.

   - As described in Section 2.3.2.10, the Virtual Network Topology
     Manager needs an interface that can be used by a PCE or the ABNO
     Controller to inform it that a client layer needs more virtual
     topology.  It is possible that the protocol identified for use
     with I2RS will satisfy this requirement, or this could be
     achieved using extensions to the PCEP Notify message (PCNtf).

   - The north-bound interface from the ABNO Controller is used by the
     NMS, OSS, and Application Service Coordinator to request services
     in the network in support of applications as described in
     Section 2.3.2.11.

     - It is possible that the protocol selected or designed to
       satisfy I2RS will address this requirement.

     - A potential approach for this type of interface is described in
       [RFC7297] for a simple use case.

   - As noted in Section 2.3.2.14, there may be layer-independent data
     models for offering common interfaces to control, configure, and
     report OAM.

   - As noted in Section 3.6, the ABNO model could be applicable to
     placing multi-segment pseudowires in a network topology made up
     of S-PEs and MPLS tunnels.  The current definition of PCEP
     [RFC5440] and the associated extensions that are works in
     progress do not include all of the details needed to request such
     paths, so some work might be necessary, although the general
     concepts will be easily re-usable.  Indeed, such work may be
     necessary for the wider applicability of PCE in many networking
     scenarios.

Appendix B. Implementation Status

   [RFC Editor Note: Please remove this entire section prior to
   publication as an RFC.]

   This section records the status of known implementations of the
   architecture described in this document at the time of posting of
   this Internet-Draft, and is based on a proposal described in
   RFC 6982 [RFC6982].  The description of implementations in this
   section is intended to assist the IETF in its decision processes in
   progressing drafts to RFCs.  Please note that the listing of any
   individual implementation here does not imply endorsement by the
   IETF.  Furthermore, no effort has been spent to verify the
   information presented here that was supplied by IETF contributors.
   This is not intended as, and must not be construed to be, a catalog
   of available implementations or their features.  Readers are
   advised to note that other implementations may exist.

   According to RFC 6982, "this will allow reviewers and working
   groups to assign due consideration to documents that have the
   benefit of running code, which may serve as evidence of valuable
   experimentation and feedback that have made the implemented
   protocols more mature.  It is up to the individual working groups
   to use this information as they see fit."

B.1. Telefonica Investigacion y Desarrollo (TID)

   Organization Responsible for the Implementation:

      Telefonica Investigacion y Desarrollo (TID)
      Core Network Evolution

   Implementation Name and Details:

      Netphony ABNO

   Brief Description:

      Experimental testbed implementation of the ABNO architecture.

   Level of Maturity:

      The set-up has been tested in flexgrid and WSON scenarios,
      interfacing with Telefonica's GMPLS protocols and with vendor
      controllers (ADVA, Infinera, and Ciena).

   Licensing:

      To be released.

   Implementation Experience:

      All tests carried out at TID have been concluded successfully,
      and no issues have been reported.  TID's test experience has
      been reported in a number of journal papers.  Contact Victor
      Lopez and Oscar Gonzalez de Dios for more information.

   Contact Information:

      Victor Lopez: victor.lopezalvarez@telefonica.com
      Oscar Gonzalez de Dios: oscar.gonzalezdedios@telefonica.com

   Interoperability:

      PCEP has been tested with NSN, CNIT, and CTTC.

      BGP-LS has been tested with CTTC, UPC, and Telecom Italia.