Internet Engineering Task Force                                  D. King
Internet-Draft                                        Old Dog Consulting
Intended status: Informational                                 A. Farrel
Expires: 13 April 2015                                  Juniper Networks
                                                         13 October 2014

    A PCE-based Architecture for Application-based Network Operations

              draft-farrkingel-pce-abno-architecture-13.txt

Abstract

Services such as content distribution, distributed databases, or inter-data center connectivity place a set of new requirements on the operation of networks. They need on-demand and application-specific reservation of network connectivity, reliability, and resources (such as bandwidth) in a variety of network applications (such as point-to-point connectivity, network virtualization, or mobile back-haul) and in a range of network technologies from packet (IP/MPLS) down to optical. Additionally, existing services or capabilities like pseudowire connectivity or global concurrent optimization can benefit from an operational scheme that considers the application needs and the network status. An environment that operates to meet these types of requirement is said to have Application-Based Network Operations (ABNO).

ABNO brings together many existing technologies for gathering information about the resources available in a network, for consideration of topologies and how those topologies map to underlying network resources, for requesting path computation, and for provisioning or reserving network resources. Thus, ABNO may be seen as the use of a toolbox of existing components enhanced with a few new elements. The key component within an ABNO is the Path Computation Element (PCE), which can be used for computing paths and is further extended to provide policy enforcement capabilities for ABNO.
This document describes an architecture and framework for ABNO showing how these components fit together. It provides a cookbook of existing technologies to satisfy the architecture and meet the needs of the applications.

Status of this Memo

This Internet-Draft is submitted in full conformance with the provisions of BCP 78 and BCP 79.

Internet-Drafts are working documents of the Internet Engineering Task Force (IETF). Note that other groups may also distribute working documents as Internet-Drafts. The list of current Internet-Drafts is at http://datatracker.ietf.org/drafts/current/.

Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress."

Copyright Notice

Copyright (c) 2014 IETF Trust and the persons identified as the document authors. All rights reserved.

This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents (http://trustee.ietf.org/license-info) in effect on the date of publication of this document. Please review these documents carefully, as they describe your rights and restrictions with respect to this document. Code Components extracted from this document must include Simplified BSD License text as described in Section 4.e of the Trust Legal Provisions and are provided without warranty as described in the Simplified BSD License.

Table of Contents

   1. Introduction ................................................ 4
     1.1 Scope ..................................................... 5
   2. Application-based Network Operations (ABNO) .................. 5
     2.1 Assumptions and Requirements .............................. 5
     2.2 Implementation of the Architecture ........................ 6
     2.3 Generic ABNO Architecture ................................. 8
       2.3.1 ABNO Components ....................................... 9
       2.3.2 ABNO Functional Interfaces ........................... 15
   3. ABNO Use Cases .............................................. 23
     3.1 Inter-AS Connectivity ..................................... 24
     3.2 Multi-Layer Networking .................................... 29
       3.2.1 Data Center Interconnection across Multi-Layer Networks 33
     3.3 Make-Before-Break ......................................... 36
       3.3.1 Make-Before-Break for Re-optimization ................. 36
       3.3.2 Make-Before-Break for Restoration ..................... 37
       3.3.3 Make-Before-Break for Path Test and Selection ......... 38
     3.4 Global Concurrent Optimization ............................ 40
       3.4.1 Use Case: GCO with MPLS LSPs .......................... 41
     3.5 Adaptive Network Management (ANM) ......................... 43
       3.5.1 ANM Trigger ........................................... 44
       3.5.2 Processing Request and GCO Computation ................ 44
       3.5.3 Automated Provisioning Process ........................ 45
     3.6 Pseudowire Operations and Management ...................... 46
       3.6.1 Multi-Segment Pseudowires ............................. 46
       3.6.2 Path-Diverse Pseudowires .............................. 48
       3.6.3 Path-Diverse Multi-Segment Pseudowires ................ 49
       3.6.4 Pseudowire Segment Protection ......................... 49
       3.6.5 Applicability of ABNO to Pseudowires .................. 50
     3.7 Cross-Stratum Optimization ................................ 51
       3.7.1 Data Center Network Operation ......................... 51
       3.7.2 Application of the ABNO Architecture .................. 53
     3.8 ALTO Server ............................................... 55
     3.9 Other Potential Use Cases ................................. 58
       3.9.1 Grooming and Regrooming ............................... 58
       3.9.2 Bandwidth Scheduling .................................. 58
   4. Survivability and Redundancy within the ABNO Architecture ... 58
   5. Security Considerations ..................................... 59
   6. Manageability Considerations ................................ 60
   7. IANA Considerations ......................................... 60
   8. Acknowledgements ............................................ 60
   9. References .................................................. 61
     9.1 Informative References ................................... 61
   10. Contributors' Addresses .................................... 65
   11. Authors' Addresses ......................................... 66
   A. Undefined Interfaces ........................................ 66
   B. Implementation Status ....................................... 67

1. Introduction

Networks today integrate multiple technologies allowing network infrastructure to deliver a variety of services to support the different characteristics and demands of applications. There is an increasing demand to make the network responsive to service requests issued directly from the application layer. This differs from the established model where services in the network are delivered in response to management commands driven by a human user.

These application-driven requests and the services they establish place a set of new requirements on the operation of networks. They need on-demand and application-specific reservation of network connectivity, reliability, and resources (such as bandwidth) in a variety of network applications (such as point-to-point connectivity, network virtualization, or mobile back-haul) and in a range of network technologies from packet (IP/MPLS) down to optical. An environment that operates to meet this type of application-aware requirement is said to have Application-Based Network Operations (ABNO).

The Path Computation Element (PCE) [RFC4655] was developed to provide path computation services for GMPLS and MPLS networks. The applicability of PCE can be extended to provide path computation and policy enforcement capabilities for ABNO platforms and services.

ABNO can provide the following types of service to applications by coordinating the components that operate and manage the network:

- Optimization of traffic flows between applications to create an overlay network for communication in use cases such as file sharing, data caching or mirroring, media streaming, or real-time communications, as described for Application Layer Traffic Optimization (ALTO) [RFC5693].

- Remote control of network components allowing coordinated programming of network resources through such techniques as Forwarding and Control Element Separation (ForCES) [RFC3746], OpenFlow [ONF], and the Interface to the Routing System (I2RS) [I-D.ietf-i2rs-architecture].
- Interconnection of Content Delivery Networks (CDNi) [RFC6707] through the establishment and resizing of connections between content distribution networks. Similarly, ABNO can coordinate inter-data center connections.

- Network resource coordination to automate provisioning, facilitate grooming and regrooming, bandwidth scheduling, and global concurrent optimization [RFC5557].

- Virtual Private Network (VPN) planning in support of deployment of new VPN customers and to facilitate inter-data center connectivity.

This document outlines the architecture and use cases for ABNO, and shows how the ABNO architecture can be used for coordinating control system and application requests to compute paths, enforce policies, and manage network resources for the benefit of the applications that use the network. The examination of the use cases shows the ABNO architecture as a toolkit comprising many existing components and protocols, and so this document reads like a cookbook. ABNO is compatible with pre-existing Network Management System (NMS) and Operations Support System (OSS) deployments as well as with more recent developments in programmatic networks such as Software Defined Networking (SDN).

1.1 Scope

This document describes a toolkit. It shows how existing functional components described in a large number of separate documents can be brought together within a single architecture to provide the function necessary for ABNO.

In many cases, existing protocols are known to be good enough or almost good enough to satisfy the requirements of interfaces between the components. In these cases, the protocols are called out as suitable candidates for use within an implementation of ABNO.

In other cases it is clear that further work will be required, and in those cases a pointer to on-going work that may be of use is provided. Where there is no current work that can be identified by the authors, a short description of the missing interface protocol is given in Appendix A.

Thus, this document may be seen as providing an applicability statement for existing protocols, and guidance for developers of new protocols or protocol extensions.

2. Application-based Network Operations (ABNO)

2.1 Assumptions and Requirements

The principal assumption underlying this document is that existing technologies should be used where they are adequate for the task. Furthermore, when an existing technology is almost sufficient, it is assumed to be preferable to make minor extensions rather than to invent a whole new technology.

Note that this document describes an architecture. Functional components are architectural concepts and have distinct and clear responsibilities. Pairs of functional components interact over functional interfaces that are, themselves, architectural concepts.

2.2 Implementation of the Architecture

It needs to be strongly emphasized that this document describes a functional architecture. It is not a software design. Thus, it is not intended that this architecture constrain implementations. However, the separation of the ABNO functions into separate functional components with clear interfaces between them enables implementations to choose which features to include and allows different functions to be distributed across distinct processes or even processors.
An implementation of this architecture may make several important decisions about the functional components:

- Multiple functional components may be grouped together into one software component such that all of the functions are bundled and only the external interfaces are exposed. This may have distinct advantages for fast paths within the software, and can reduce inter-process communication overhead.

  For example, an active, stateful PCE could be implemented as a single server combining the ABNO components of the PCE, the Traffic Engineering Database, the LSP Database, and the Provisioning Manager (see Section 2.3).

- The functional components could be distributed across separate processes, processors, or servers so that the interfaces are exposed as external protocols.

  For example, the OAM Handler (see Section 2.3.1.6) could be presented on a dedicated server in the network that consumes all status reports from the network, aggregates them, correlates them, and then dispatches notifications to other servers that need to understand what has happened.

- There could be multiple instances of any or each of the components. That is, the function of a functional component could be partitioned across multiple software components with each responsible for handling a specific feature or a partition of the network.

  For example, there may be multiple Traffic Engineering Databases (see Section 2.3.1.8) in an implementation with each holding the topology information of a separate network domain (such as a network layer or an Autonomous System). Similarly, there could be multiple PCE instances each operating on a different Traffic Engineering Database, and potentially distributed on different servers under different management control. As a final example, there could be multiple ABNO Controllers each with the capability to support different classes of application or application service.

The purpose of the description of this architecture is to facilitate different implementations while offering interoperability between implementations of key components and easy interaction with the applications and with the network devices.
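To make the componentization options above concrete, the following Python sketch shows one hypothetical way an implementer might express the functional components as narrow interfaces so that they can be bundled into a single process or distributed behind external protocols. All class and method names here are illustrative assumptions by this document's editor, not part of the architecture.

   from typing import List, Protocol

   class TrafficEngineeringDatabase(Protocol):
       def links(self) -> List[dict]: ...       # topology and capability data

   class PathComputation(Protocol):
       def compute(self, src: str, dst: str, constraints: dict) -> List[str]: ...

   class ProvisioningManager(Protocol):
       def provision(self, path: List[str], bandwidth: float) -> str: ...

   class ActiveStatefulPce:
       """Sketch of the first bullet above: PCE, TED, LSP-DB, and
       Provisioning Manager bundled in one server.  Each dependency
       could equally sit behind an external protocol (e.g., PCEP or
       Netconf) in a distributed deployment."""
       def __init__(self, ted: TrafficEngineeringDatabase,
                    pce: PathComputation, pm: ProvisioningManager):
           self.ted, self.pce, self.pm = ted, pce, pm
           self.lsp_db: dict = {}               # LSP-DB kept in-process

       def setup_lsp(self, src: str, dst: str, bandwidth: float) -> str:
           path = self.pce.compute(src, dst, {"bandwidth": bandwidth})
           lsp_id = self.pm.provision(path, bandwidth)
           self.lsp_db[lsp_id] = {"path": path, "bandwidth": bandwidth}
           return lsp_id

The same interfaces could instead be implemented by separate servers, which is the design freedom the architecture intends to preserve.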
2.3 Generic ABNO Architecture

The following diagram illustrates the ABNO architecture. The components and functional interfaces are discussed in Sections 2.3.1 and 2.3.2, respectively. The use cases described in Section 3 show how different components are used selectively to provide different services. It is important to understand that the relationships and interfaces shown between components in this figure are illustrative of some of the common or likely interactions; however, this figure does not preclude other interfaces and relationships as necessary to realize specific functionality.

   +----------------------------------------------------------------+
   |     OSS / NMS / Application Service Coordinator                |
   +-+---+---+----+-----------+---------------------------------+---+
     |   |   |    |           |                                 |
   ..|...|...|....|...........|.................................|......
   : |   |   |    |      +----+----------------------+          |    :
   : |   |   | +--+---+  |                           |  +---+---+    :
   : |   |   | |Policy+--+     ABNO Controller       +--+       |    :
   : |   |   | |Agent |  |                           |  |  OAM  |    :
   : |   |   | +-+--+-+  +-+------------+----------+-+  |Handler|    :
   : |   |   |   |  |      |            |          |    +---+---+    :
   : |   |+--+---+  +------+     +------+--------+ |        |        :
   : |   ||ALTO  |  | VNTM +-----+               | |        |        :
   : |   ||Server|  +------+     |               | +------+ |        :
   : |   |+--+---+               |      PCE      | | I2RS | |        :
   : | +-+---+-----+             |               | |Client| |        :
   : | | Databases +-------------+               | +--+---+ |        :
   : | |  TED      |             |               |    |     |        :
   : | |  LSP-DB   |             +------+--------+    |     |        :
   : | +---+-------+                    |             |     |        :
   : |     |            +---------------+-----------+ |     |        :
   : |     |            |  Provisioning Manager     | |     |        :
   : |     |            +-----+--------+--------+---+ |     |        :
   ..|.....|..................|........|........|.....|.....|.........
     |     |                  |        |        |     |     |
     |     +------------------+--------+--------+-----+     |
     |    /            Client Network Layer          \      |
     |   +--------------+-----------------+-----------+     |
     |                  |                 |                 |
     +-----------------++-----------------+-----------------+
    /                    Server Network Layers               \
   +----------------------------------------------------------+

                 Figure 1 : Generic ABNO Architecture

2.3.1 ABNO Components

This section describes the functional components shown as boxes in Figure 1. The interactions between those components, the functional interfaces, are described in Section 2.3.2.

2.3.1.1 NMS and OSS

A Network Management System (NMS) or an Operations Support System (OSS) can be used to control, operate, and manage a network. Within the ABNO architecture, an NMS or OSS may issue high-level service requests to the ABNO Controller. It may also establish policies for the activities of the components within the architecture.

The NMS and OSS can be consumers of network events reported through the OAM Handler and can act on these reports as well as displaying them to users and raising alarms. The NMS and OSS can also access the Traffic Engineering Database (TED) and Label Switched Path Database (LSP-DB) to show the users the current state of the network.

Lastly, the NMS and OSS may utilize a direct programmatic or configuration interface to interact with the network elements within the network.

2.3.1.2 Application Service Coordinator

In addition to the NMS and OSS, services in the ABNO architecture may be requested by or on behalf of applications. In this context the term "application" is very broad. An application may be a program that runs on a host or server and that provides services to a user, such as a video conferencing application. Alternatively, an application may be a software tool with which a user makes requests of the network to set up specific services such as end-to-end connections or scheduled bandwidth reservations. Finally, an application may be a sophisticated control system that is responsible for arranging the provision of a more complex network service such as a virtual private network.

For the sake of this architecture, all of these concepts of an application are grouped together and are shown as the Application Service Coordinator since they are all in some way responsible for coordinating the activity of the network to provide services for use by applications.
In practice, the function of the Application Service Coordinator may be distributed across multiple applications or servers.

The Application Service Coordinator communicates with the ABNO Controller to request operations on the network.

2.3.1.3 ABNO Controller

The ABNO Controller is the main gateway to the network for the NMS, OSS, and Application Service Coordinator for the provision of advanced network coordination and functions. The ABNO Controller governs the behavior of the network in response to changing network conditions and in accordance with application network requirements and policies. It is the point of attachment for service requests, and it invokes the right components in the right order.

The use cases in Section 3 provide a clearer picture of how the ABNO Controller interacts with the other components in the ABNO architecture.

2.3.1.4 Policy Agent

Policy plays a very important role in the control and management of the network. It is, therefore, significant in influencing how the key components of the ABNO architecture operate.

Figure 1 shows the Policy Agent as a component that is configured by the NMS/OSS with the policies that it applies. The Policy Agent is responsible for propagating those policies into the other components of the system.

For simplicity, the figure omits many of the policy interactions that will take place. Although the Policy Agent is only shown interacting with the ABNO Controller, the ALTO Server, and the Virtual Network Topology Manager (VNTM), it will also interact with a number of other components and the network elements themselves. For example, the Path Computation Element (PCE) will be a Policy Enforcement Point (PEP) [RFC2753] as described in [RFC5394], and the Interface to the Routing System (I2RS) Client will also be a PEP as noted in [I-D.ietf-i2rs-architecture].

2.3.1.5 Interface to the Routing System (I2RS) Client

The Interface to the Routing System (I2RS) is described in [I-D.ietf-i2rs-architecture]. The interface provides a programmatic way to access (for read and write) the routing state and policy information on routers in the network.

The I2RS Client is introduced in [I-D.ietf-i2rs-problem-statement]. Its purpose is to manage information requests across a number of routers (each of which runs an I2RS Agent) and coordinate setting or gathering state to/from those routers.

2.3.1.6 OAM Handler

Operations, Administration, and Maintenance (OAM) plays a critical role in understanding how a network is operating, detecting faults, and taking the necessary action to react to problems in the network.

Within the ABNO architecture, the OAM Handler is responsible for receiving notifications (often called alerts) from the network about potential problems, for correlating them, and for triggering other components of the system to take action to preserve or recover the services that were established by the ABNO Controller. The OAM Handler also reports network problems and, in particular, service-affecting problems to the NMS, OSS, and Application Service Coordinator.

Additionally, the OAM Handler interacts with the devices in the network to initiate OAM actions within the data plane such as monitoring and testing.
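As a rough illustration of the correlation role just described, the following Python sketch groups raw alerts by the services they affect and dispatches one report per service. The data structures and callback names are hypothetical; no protocol is implied.

   from collections import defaultdict
   from dataclasses import dataclass

   @dataclass
   class Alert:
       resource: str        # e.g., a link or node identifier
       event: str           # e.g., "loss-of-signal"

   class OamHandler:
       """Illustrative correlation loop, assuming a precomputed map from
       network resources to the services that depend on them."""
       def __init__(self, service_map, notify_controller, notify_nms):
           self.service_map = service_map        # resource -> [services]
           self.notify_controller = notify_controller
           self.notify_nms = notify_nms

       def handle(self, alerts):
           affected = defaultdict(list)
           for alert in alerts:
               for service in self.service_map.get(alert.resource, []):
                   affected[service].append(alert)
           for service, causes in affected.items():
               # One correlated report per service, not one per raw alert.
               self.notify_controller(service, causes)  # may trigger recovery
               self.notify_nms(service, causes)         # display and alarms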
2.3.1.7 Path Computation Element (PCE)

The Path Computation Element (PCE) is introduced in [RFC4655]. It is a functional component that services requests to compute paths across a network graph. In particular, it can generate traffic engineered routes for MPLS-TE and GMPLS Label Switched Paths (LSPs). The PCE may receive these requests from the ABNO Controller, from the Virtual Network Topology Manager, or from network elements themselves.

The PCE operates on a view of the network topology stored in the Traffic Engineering Database (TED). A more sophisticated computation may be provided by a Stateful PCE that enhances the TED with a database (the LSP-DB, see Section 2.3.1.8.2) containing information about the LSPs that are provisioned and operational within the network as described in [RFC4655] and [I-D.ietf-pce-stateful-pce].

Additional function in an Active PCE allows a functional component that includes a Stateful PCE to make provisioning requests to set up new services or to modify in-place services as described in [I-D.ietf-pce-stateful-pce] and [I-D.ietf-pce-pce-initiated-lsp]. This function may directly access the network elements, or may be channelled through the Provisioning Manager.

Coordination between multiple PCEs operating on different TEDs can prove useful for performing path computation in multi-domain (for example, inter-AS) or multi-layer networks.

Since the PCE is a key component of the ABNO architecture, a better view of its role can be gained by examining the use cases described in Section 3.

2.3.1.8 Databases

The ABNO Architecture includes a number of databases that contain information stores for use by the system. The two main databases are the Traffic Engineering Database (TED) and the LSP Database (LSP-DB), but there may be a number of other databases to contain information about topology (ALTO Server), policy (Policy Agent), services (ABNO Controller), etc.

In the text that follows, specific key components that are consumers of the databases are highlighted. It should be noted that the databases are available for inspection by any of the ABNO components. Updates to the databases should be handled with some care since allowing multiple components to write to a database can be the cause of a number of contention and sequencing problems.

2.3.1.8.1 Traffic Engineering Database (TED)

The Traffic Engineering Database (TED) is a data store of topology information about a network that may be enhanced with capability data (such as metrics or bandwidth capacity) and active status information (such as up/down status or residual unreserved bandwidth).

The TED may be built from information supplied by the network or from data (such as inventory details) sourced through the NMS/OSS.

The principal use of the TED in the ABNO architecture is to provide the raw data on which the Path Computation Element operates. But the TED may also be inspected by users at the NMS/OSS to view the current status of the network, and may provide information to application services such as Application Layer Traffic Optimization (ALTO) [RFC5693].

2.3.1.8.2 LSP Database

The LSP Database (LSP-DB) is a data store of information about LSPs that have been set up in the network, or that could be established. The information stored includes the paths and resource usage of the LSPs.
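For illustration only, an LSP-DB record and one query that supports path-disjoint protection planning might be sketched in Python as follows. The field and method names are assumptions made for this example, not a defined schema.

   from dataclasses import dataclass
   from typing import List

   @dataclass
   class LspRecord:
       lsp_id: str
       source: str
       destination: str
       path: List[str]           # hops recorded when the LSP was set up
       bandwidth_mbps: float
       operational: bool = True

   class LspDb:
       """Keyed store of provisioned (or planned) LSPs."""
       def __init__(self):
           self._records = {}

       def upsert(self, rec: LspRecord):
           self._records[rec.lsp_id] = rec

       def disjoint_from(self, lsp_id: str, candidate_path: List[str]) -> bool:
           # Useful when a protection path must avoid an existing LSP's hops.
           existing = set(self._records[lsp_id].path)
           return existing.isdisjoint(candidate_path)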
The LSP-DB may be built from information generated locally. For example, when LSPs are provisioned, the LSP-DB can be updated. The database can also be constructed from information gathered from the network by polling or reading the state of LSPs that have already been set up.

The main use of the LSP-DB within the ABNO architecture is to enhance the planning and optimization of LSPs. New LSPs can be established to be path-disjoint from other LSPs in order to offer protected services; LSPs can be rerouted in order to put them on more optimal paths or to make network resources available for other LSPs; LSPs can be rapidly repaired when a network failure is reported; LSPs can be moved onto other paths in order to avoid resources that have planned maintenance outages. A stateful PCE (see Section 2.3.1.7) is a primary consumer of the LSP-DB.

2.3.1.8.3 Shared Risk Link Group (SRLG) Databases

The TED may, itself, be supplemented by SRLG information that assigns to each network resource one or more identifiers that associate the resource with other resources in the same TED that share the same risk of failure.

While this information can be highly useful, it may be supplemented by additional detailed information maintained in a separate database and indexed using the SRLG identifier from the TED. Such a database can interpret SRLG information provided by other networks (such as server networks), can provide failure probabilities associated with each SRLG, can offer prioritization when SRLG-disjoint paths cannot be found, and can correlate SRLGs between different server networks or between different peer networks.

2.3.1.8.4 Other Databases

There may be other databases that are built within the ABNO system and which are referenced when operating the network. These databases might include information about, for example, traffic flows and demands, predicted or scheduled traffic demands, link and node failure and repair history, network resources such as packet labels and physical labels (i.e., MPLS and GMPLS labels), etc.

As mentioned in Section 2.3.1.8.1, the TED may be enhanced by inventory information. It is quite likely in many networks that such an inventory is held in a separate database (the Inventory Database) that includes details of manufacturer, model, installation date, etc.

2.3.1.9 ALTO Server

The ALTO Server provides network information to the application layer based on abstract maps of a network region. This information provides a simplified view, but one that is useful for steering application-layer traffic. ALTO services enable Service Providers to share information about network locations and the costs of paths between them. The selection criteria to choose between two locations may depend on information such as maximum bandwidth, minimum cross-domain traffic, lowest cost to the user, etc.

The ALTO Server generates ALTO views to share information with the Application Service Coordinator so that it can better select paths in the network to carry application-layer traffic. The ALTO views are computed based on information from the network databases, from policies configured by the Policy Agent, and through the algorithms used by the PCE.

Specifically, the base ALTO protocol [RFC7285] defines a single-node abstract view of a network to the Application Service Coordinator. Such a view consists of two maps: a network map and a cost map. A network map defines multiple provider-defined identifiers (PIDs), which represent entrance points to the network. Each node in the application layer is known as an End Point (EP), and each EP is assigned to a PID since PIDs are the application's entry points into the network. As defined in [RFC7285], a PID can denote a subnet, a set of subnets, a metropolitan area, a PoP, etc. Each such network region can be a single domain or multiple networks; it is simply the view that the ALTO Server exposes to the application layer. A cost map provides costs between EPs and/or PIDs. The criteria that the Application Service Coordinator uses to choose application routes between two locations may depend on attributes such as maximum bandwidth, minimum cross-domain traffic, lowest cost to the user, etc.
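The shape of the two maps can be sketched as follows; the Python dictionaries below mirror the JSON structure of [RFC7285] in simplified form (real responses carry additional metadata and media types), and the PID names, tag, and costs are invented for illustration, with addresses drawn from the documentation ranges of RFC 5737.

   network_map = {
       "meta": {"vtag": {"resource-id": "default-network-map",
                         "tag": "1266506139"}},
       "network-map": {
           "PID1": {"ipv4": ["192.0.2.0/24"]},
           "PID2": {"ipv4": ["198.51.100.0/24"]},
           "PID3": {"ipv4": ["203.0.113.0/24"]},
       },
   }

   cost_map = {
       "meta": {"cost-type": {"cost-mode": "numerical",
                              "cost-metric": "routingcost"}},
       "cost-map": {
           "PID1": {"PID2": 5, "PID3": 10},
           "PID2": {"PID1": 5, "PID3": 15},
           "PID3": {"PID1": 10, "PID2": 15},
       },
   }

   def cheapest_peer(src_pid: str) -> str:
       # One way an Application Service Coordinator might use the view:
       # pick the lowest-cost peer PID for traffic leaving src_pid.
       costs = cost_map["cost-map"][src_pid]
       return min(costs, key=costs.get)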
2.3.1.10 Virtual Network Topology Manager (VNTM)

A Virtual Network Topology (VNT) is defined in [RFC5212] as a set of one or more LSPs in one or more lower-layer networks that provides information for efficient path handling in an upper-layer network. For instance, a set of LSPs in a wavelength division multiplexed (WDM) network can provide connectivity as virtual links in a higher-layer packet-switched network.

The VNT enhances the physical/dedicated links that are available in the upper-layer network and is configured by setting up or tearing down the lower-layer LSPs and by advertising the changes into the higher-layer network. The VNT can be adapted to traffic demands so that capacity in the higher-layer network can be created or released as needed. Releasing unwanted VNT resources makes them available in the lower-layer network for other uses.

The creation of virtual topology for inclusion in a network is not a simple task. Decisions must be made about which nodes in the upper-layer network it is best to connect, in which lower-layer network to provision LSPs to provide the connectivity, and how to route the LSPs in the lower-layer network. Furthermore, some specific actions have to be taken to cause the lower-layer LSPs to be provisioned and the connectivity in the upper-layer network to be advertised.

[RFC5623] describes how the VNTM may instantiate connections in the server-layer in support of connectivity in the client-layer. Within the ABNO architecture, the creation of new connections may be delegated to the Provisioning Manager as discussed in Section 2.3.1.11.

All of these actions and decisions are heavily influenced by policy, so the VNTM component that coordinates them takes input from the Policy Agent. The VNTM is also closely associated with the PCE for the upper-layer network and each of the PCEs for the lower-layer networks.

2.3.1.11 Provisioning Manager

The Provisioning Manager is responsible for making or channelling requests for the establishment of LSPs. These requests may be instructions to the control plane running in the networks, or they may involve the programming of individual network devices. In the latter case, the Provisioning Manager may act as an OpenFlow Controller [ONF].

See Section 2.3.2.6 for more details of the interactions between the Provisioning Manager and the network.
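The two modes of operation just mentioned (control-plane triggers versus per-device programming) can be sketched as a simple dispatch, assuming hypothetical southbound adapter objects; nothing here names a real API.

   from typing import List

   class ProvisioningManager:
       """Illustrative dispatch: trigger the control plane at the head
       end when one is available, otherwise program each hop."""
       def __init__(self, control_plane_client, device_client):
           # Hypothetical adapters: e.g., one speaking toward a head-end
           # LSR, and one programming forwarding state per device.
           self.cp = control_plane_client
           self.dev = device_client

       def provision(self, path: List[str], bandwidth: float,
                     use_control_plane: bool) -> str:
           if use_control_plane:
               # One request to the head end; signaling (e.g., RSVP-TE)
               # establishes the rest of the LSP.
               return self.cp.initiate_lsp(head_end=path[0], ero=path,
                                           bandwidth=bandwidth)
           # Otherwise, fan the request out hop by hop.
           for hop in path:
               self.dev.program(hop, path=path, bandwidth=bandwidth)
           return "lsp-{}-{}".format(path[0], path[-1])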
2.3.1.12 Client and Server Network Layers

The client and server networks are shown in Figure 1 as illustrative examples of the fact that the ABNO architecture may be used to coordinate services across multiple networks where lower-layer networks provide connectivity in upper-layer networks.

Section 3.2 describes a set of use cases for multi-layer networking.

2.3.2 Functional Interfaces

This section describes the interfaces between functional components that might be externalized in an implementation allowing the components to be distributed across platforms. Where existing protocols might provide all or most of the necessary capabilities, they are noted. Appendix A notes the interfaces where more protocol specification may be needed.

As noted at the top of Section 2.3, it is important to understand that the relationships and interfaces shown between components in Figure 1 are illustrative of some of the common or likely interactions; however, this figure and the descriptions in the sub-sections below do not preclude other interfaces and relationships as necessary to realize specific functionality. Thus, some of the interfaces described below might not be visible as specific relationships in Figure 1, but they can nevertheless exist.

2.3.2.1 Configuration and Programmatic Interfaces

The network devices may be configured or programmed directly from the NMS/OSS. Many protocols already exist to perform these functions, including:

   - SNMP [RFC3412]
   - Netconf [RFC6241]
   - ForCES [RFC5810]
   - OpenFlow [ONF]
   - PCEP [I-D.ietf-pce-pce-initiated-lsp].

The TeleManagement Forum (TMF) Multi-Technology Operations System Interface (MTOSI) standard [TMF-MTOSI] was developed to facilitate application-to-application interworking and provides network-level management capabilities to discover, configure, and activate resources. Initially, the MTOSI information model was only capable of representing connection-oriented networks and resources; support for connectionless networks was added in later releases. From the NMS perspective, MTOSI is a north-bound interface and is based on SOAP web services.

From the ABNO perspective, network configuration is a pass-through function. It can be seen represented on the left-hand side of Figure 1.

2.3.2.2 TED Construction from the Networks

As described in Section 2.3.1.8, the TED provides details of the capabilities and state of the network for use by the ABNO system and the PCE in particular.

The TED can be constructed by participating in the IGP-TE protocols run by the networks (for example, OSPF-TE [RFC3630] and ISIS-TE [RFC5305]). Alternatively, the TED may be fed using link-state distribution extensions to BGP [I-D.ietf-idr-ls-distribution].

The ABNO system may maintain a single TED unified across multiple networks, or may retain a separate TED for each network.

Additionally, an ALTO Server [RFC5693] may provide an abstracted topology from a network to build an application-level TED that can be used by a PCE to compute paths between servers and application-layer entities for the provision of application services.
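A toy sketch of the TED-building loop is given below; it assumes link-state advertisements from an IGP-TE adjacency or BGP-LS speaker have already been parsed into normalized Python dictionaries, which is an assumption of this example rather than a defined encoding.

   class Ted:
       """Toy TED: link map annotated with TE attributes."""
       def __init__(self):
           self.links = {}   # (node_a, node_b) -> TE attributes

       def apply_update(self, update: dict):
           # 'update' is a normalized link-state advertisement, e.g.:
           # {"a": "P1", "b": "P2", "metric": 10, "unreserved_bw": 4.0e9}
           key = (update["a"], update["b"])
           if update.get("withdrawn"):
               self.links.pop(key, None)   # resource no longer available
           else:
               self.links[key] = {k: v for k, v in update.items()
                                  if k not in ("a", "b")}

Keeping one Ted instance per network domain, or one unified instance, corresponds to the two deployment choices described above.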
2.3.2.3 TED Enhancement

The TED may be enhanced by inventory information supplied from the NMS/OSS. This may supplement the data collected as described in Section 2.3.2.2 with information that is not normally distributed within the network, such as node types and capabilities, or the characteristics of optical links.

No protocol is currently identified for this interface, but the protocol developed or adopted to satisfy the requirements of the Interface to the Routing System (I2RS) [I-D.ietf-i2rs-architecture] may be a suitable candidate because it is required to be able to distribute bulk routing state information in a well-defined encoding language. Another candidate protocol may be Netconf [RFC6241] passing data encoded using YANG [RFC6020].

Note that, in general, any protocol and encoding that is suitable for presenting the TED as described in Section 2.3.2.4 will likely be suitable (or could be made suitable) for enabling write-access to the TED as described in this section.

2.3.2.4 TED Presentation

The TED may be presented north-bound from the ABNO system for use by an NMS/OSS or by the Application Service Coordinator. This allows users and applications to get a view of the network topology and the status of the network resources. It also allows planning and provisioning of application services.

There are several protocols available for exporting the TED north-bound:

- The ALTO protocol [RFC7285] is designed to distribute the abstracted topology used by an ALTO Server and may prove useful for exporting the TED. The ALTO Server provides the cost between EPs or between PIDs so that the application layer can select the most appropriate connection for the information exchange between its application end points.

- The same protocol used to export topology information from the network can be used to export the topology from the TED [I-D.ietf-idr-ls-distribution].

- The Interface to the Routing System (I2RS) [I-D.ietf-i2rs-architecture] will require a protocol that is capable of handling bulk routing information exchanges that would be suitable for exporting the TED. In this case it would make sense to have a standardized representation of the TED in a formal data modelling language such as YANG [RFC6020] so that an existing protocol could be used such as Netconf [RFC6241] or XMPP [RFC6120].

Note that export from the TED can be a full dump of the content (expressed in a suitable abstraction language) as described above, or it could be an aggregated or filtered set of data based on policies or specific requirements. Thus, the relationships shown in Figure 1 may be a little simplistic in that the ABNO Controller may also be involved in preparing and presenting the TED information over a north-bound interface.

2.3.2.5 Path Computation Requests from the Network

As originally specified in the PCE architecture [RFC4655], network elements can make path computation requests to a PCE using the PCE protocol (PCEP) [RFC5440]. This facilitates the network setting up LSPs in response to simple connectivity requests, and it allows the network to re-optimize or repair LSPs.

2.3.2.6 Provisioning Manager Control of Networks

As described in Section 2.3.1.11, the Provisioning Manager makes or channels requests to provision resources in the network.
These operations can take place at two levels: there can be requests to program/configure specific resources in the data or forwarding planes; and there can be requests to trigger a set of actions to be programmed with the assistance of a control plane.

A number of protocols already exist to provision network resources, as follows:

- Program/configure specific network resources

  - ForCES [RFC5810] defines a protocol for separation of the control element (the Provisioning Manager) from the forwarding elements in each node in the network.

  - The Generic Switch Management Protocol (GSMP) [RFC3292] is an asymmetric protocol that allows one or more external switch controllers (such as the Provisioning Manager) to establish and maintain the state of a label switch such as an MPLS switch.

  - OpenFlow [ONF] is a communications protocol that gives an OpenFlow Controller (such as the Provisioning Manager) access to the forwarding plane of a network switch or router in the network.

  - Historically, other configuration-based mechanisms have been used to set up the forwarding/switching state at individual nodes within networks. Such mechanisms have ranged from non-standard command line interfaces (CLIs) to various standards-based options such as TL1 [TL1] and SNMP [RFC3412]. These mechanisms are not designed for rapid operation of a network and are not easily programmatic. They are not proposed for use by the Provisioning Manager as part of the ABNO architecture.

  - Netconf [RFC6241] provides a more active configuration protocol that may be suitable for bulk programming of network resources. Its use in this way is dependent on suitable YANG modules being defined for the necessary options. Early work in the IETF's Netmod working group is focused on a higher level of routing function more comparable with the function discussed in Section 2.3.2.8 [I-D.ietf-netmod-routing-cfg].

  - The [TMF-MTOSI] specification provides provisioning, activation, deactivation, and release of resources via the Service Activation Interface (SAI). The Common Communication Vehicle (CCV) is the middleware required to implement MTOSI. CCV is then used to provide middleware abstraction in combination with the Web Services Description Language (WSDL) to allow MTOSI interfaces to be bound to different middleware technologies as needed.

- Trigger actions through the control plane

  - LSPs can be requested using a management system interface to the head end of the LSP using tools such as CLIs, TL1 [TL1], or SNMP [RFC3412]. Configuration at this granularity is not as time-critical as when individual network resources are programmed because the main task of programming end-to-end connectivity is devolved to the control plane. Nevertheless, these mechanisms remain unsuitable for programmatic control of the network and are not proposed for use by the Provisioning Manager as part of the ABNO architecture.

  - As noted above, Netconf [RFC6241] provides a more active configuration protocol. This may be particularly suitable for requesting the establishment of LSPs. Work would be needed to complete a suitable YANG module.

  - The PCE protocol (PCEP) [RFC5440] has been proposed as a suitable protocol for requesting the establishment of LSPs [I-D.ietf-pce-pce-initiated-lsp]. This works well because the protocol elements necessary are exactly the same as those used to respond to a path computation request.

    The functional element that issues PCEP requests to establish LSPs is known as an "Active PCE"; however, it should be noted that the ABNO functional component responsible for requesting LSPs is the Provisioning Manager. Other controllers, like the VNTM and the ABNO Controller, use the services of the Provisioning Manager to isolate the twin functions of computing and requesting paths from the provisioning mechanisms in place in any given network.

Note that I2RS does not provide a mechanism for control of network resources at this level as it is designed to provide control of routing state in routers, not forwarding state in the data plane.
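To illustrate what "programming specific resources" means for an explicitly routed LSP, the following Python sketch derives per-node label cross-connect entries for the transit hops of a path. The dictionary layout is invented for this example and does not correspond to any particular protocol's messages.

   def cross_connects(path_hops, labels):
       """Derive per-node swap entries for an explicitly routed LSP.
       'labels' holds one label per link along the path."""
       entries = []
       for i, node in enumerate(path_hops[1:-1], start=1):
           entries.append({
               "node": node,
               "match": {"in-label": labels[i - 1]},
               "action": {"swap-to": labels[i],
                          "out-interface": "to-" + path_hops[i + 1]},
           })
       return entries

   # Example: an LSP P1->P2->P3 with labels 1001 (P1->P2) and 1002
   # (P2->P3) yields one swap entry at the transit node P2; the head
   # and tail ends would receive push and pop entries (omitted here).
   print(cross_connects(["P1", "P2", "P3"], [1001, 1002]))

Each entry would then be pushed to the corresponding node by whichever of the protocols listed above an implementation selects.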
2.3.2.7 Auditing the Network

Once resources have been provisioned or connections established in the network, it is important that the ABNO system can determine the state of the network. Similarly, when provisioned resources are modified or taken out of service, the changes in the network need to be understood by the ABNO system. This function falls into four categories:

- Updates to the TED are gathered as described in Section 2.3.2.2.

- Explicit notification of the successful establishment and the subsequent state of an LSP can be provided through extensions to PCEP as described in [I-D.ietf-pce-stateful-pce] and [I-D.ietf-pce-pce-initiated-lsp].

- OAM can be commissioned and the results inspected by the OAM Handler as described in Section 2.3.2.14.

- A number of ABNO components may make inquiries and inspect network state through a variety of techniques including I2RS, Netconf, or SNMP.

2.3.2.8 Controlling the Routing System

As discussed in Section 2.3.1.5, the Interface to the Routing System (I2RS) provides a programmatic way to access (for read and write) the routing state and policy information on routers in the network. The I2RS Client issues requests to routers in the network to establish or retrieve routing state. Those requests utilize the I2RS protocol, which has yet to be selected/designed by the IETF.

2.3.2.9 ABNO Controller Interface to PCE

The ABNO Controller needs to be able to consult the PCE to determine what services can be provisioned in the network. There is no reason why this interface cannot be based on the standard PCE protocol as defined in [RFC5440].

2.3.2.10 VNTM Interface to and from PCE

There are two interactions between the Virtual Network Topology Manager and the PCE.

The first interaction is used when the VNTM wants to determine what LSPs can be set up in a network: in this case it uses the standard PCEP interface [RFC5440] to make path computation requests.

The second interaction arises when a PCE determines that it cannot compute a requested path or notices that (according to some configured policy) a network is short of resources (for example, the capacity on some key link is close to exhausted). In this case, the PCE may notify the VNTM, which may (again according to policy) act to construct more virtual topology. This second interface is not currently specified, although it may be that the protocol selected or designed to satisfy I2RS will provide suitable features (see Section 2.3.2.8) or an extension could be made to the PCEP Notify message (PCNtf) [RFC5440].
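Because several of the interfaces above rest on PCEP [RFC5440], a heavily simplified sketch of building a PCEP Path Computation Request (PCReq) message may help make the interface concrete. This encoder covers only the common header, an RP object, and an IPv4 END-POINTS object; real messages carry further objects, TLVs, and flag settings, so treat this as an illustration of the message shape rather than a usable implementation.

   import ipaddress
   import struct

   def pcep_common_header(msg_type: int, length: int) -> bytes:
       # Version 1 in the top three bits, flags clear, then message
       # type and total message length (including this header).
       return struct.pack("!BBH", 1 << 5, msg_type, length)

   def pcep_object(obj_class: int, obj_type: int, body: bytes) -> bytes:
       # Object header: Class; Object-Type in the high nibble (P and I
       # flags left clear); 16-bit length including the 4-byte header.
       return struct.pack("!BBH", obj_class, obj_type << 4,
                          4 + len(body)) + body

   def build_pcreq(request_id: int, src: str, dst: str) -> bytes:
       rp = pcep_object(2, 1, struct.pack("!II", 0, request_id))  # RP
       endpoints = pcep_object(4, 1,                   # IPv4 END-POINTS
                               ipaddress.IPv4Address(src).packed +
                               ipaddress.IPv4Address(dst).packed)
       body = rp + endpoints
       return pcep_common_header(3, 4 + len(body)) + body  # 3 = PCReq

   msg = build_pcreq(1, "192.0.2.1", "198.51.100.9")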
2.3.2.11 ABNO Control Interfaces

The north-bound interface from the ABNO Controller is used by the NMS, OSS, and Application Service Coordinator to request services in the network in support of applications. The interface will also need to be able to report the asynchronous completion of service requests and convey changes in the status of services.

This interface will also need strong capabilities for security, authentication, and policy.

This interface is not currently specified. It needs to be a transactional interface that supports the specification of abstract services with adequate flexibility to facilitate easy extension and yet be concise and easily parsable.

It is possible that the protocol selected or designed to satisfy I2RS will provide suitable features (see Section 2.3.2.8).

2.3.2.12 ABNO Provisioning Requests

Under some circumstances the ABNO Controller may make requests directly to the Provisioning Manager. For example, if the Provisioning Manager is acting as an SDN Controller, then the ABNO Controller may use one of the APIs defined to allow requests to be made to the SDN Controller (such as the Floodlight REST API [Flood]). Alternatively, since the Provisioning Manager may also receive instructions from a stateful PCE, the use of PCEP extensions might be appropriate in some cases [I-D.ietf-pce-pce-initiated-lsp].

2.3.2.13 Policy Interfaces

As described in Section 2.3.1.4 and throughout this document, policy forms a critical component of the ABNO architecture. The role of policy will include enforcing the following rules and requirements (a small illustration follows at the end of this section):

- Adding resources on demand should be gated by the authorized capability.

- Client microflows should not trigger server-layer setup or allocation.

- Accounting capabilities should be supported.

- Security mechanisms for authorization of requests and capabilities are required.

Other policy-related function in the system might include the policy behavior of the routing and forwarding system, such as:

- ECMP behavior

- Classification of packets onto LSPs or QoS categories.

Various policy-capable architectures have been defined, including a framework for using policy with a PCE-enabled system [RFC5394]. However, the take-up of the IETF's Common Open Policy Service protocol (COPS) [RFC2748] has been poor. New work will be needed to define all of the policy interfaces within the ABNO architecture and to determine which are internal interfaces and which may be external and so in need of a protocol specification. There is some discussion that the I2RS protocol may support the configuration and manipulation of policies.
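Since no policy protocol is yet defined, the following Python sketch simply mirrors the rules listed above as checks on a hypothetical service request; the request and policy field names are assumptions made for this illustration.

   def authorize(request: dict, policy: dict) -> tuple:
       """Illustrative policy gate mirroring the rules listed above."""
       if not request.get("authenticated"):
           return (False, "requester not authorized")
       if request["bandwidth"] > policy["max_on_demand_bandwidth"]:
           return (False, "exceeds authorized on-demand capability")
       if request.get("granularity") == "microflow" and \
               request.get("needs_server_layer_setup"):
           return (False, "microflows must not trigger server-layer setup")
       policy["accounting_log"].append(request["requester"])  # accounting
       return (True, "ok")

In a real system these checks would be distributed across the Policy Enforcement Points noted in Section 2.3.1.4 rather than centralized in one function.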
2.3.2.14 OAM and Reporting

The OAM Handler must interact with the networks to perform several actions:

- Enabling OAM function within the network.

- Performing proactive OAM operations in the network.

- Receiving notifications of network events.

Any of the configuration and programmatic interfaces described in Section 2.3.2.1 may serve this purpose. Netconf notifications are described in [RFC5277], and OpenFlow supports a number of asynchronous event notifications [ONF]. Additionally, Syslog [RFC5424] is a protocol for reporting events from the network, and IPFIX [RFC7011] is designed to allow network statistics to be aggregated and reported.

The OAM Handler also correlates events reported from the network and reports them onward to the ABNO Controller (which can apply the information to the recovery of services that it has provisioned) and to the NMS, OSS, and Application Service Coordinator. The reporting mechanism used here can be essentially the same as that used when events are reported from the network, and no new protocol is needed, although new data models may be required for technology-independent OAM reporting.

3. ABNO Use Cases

This section provides a number of examples of how the ABNO architecture can be applied to provide application-driven and NMS/OSS-driven network operations. The purpose of these examples is to give some concrete material to demonstrate the architecture so that it may be more easily comprehended, and to illustrate that the application of the architecture is achieved by "profiling" and by selecting only the relevant components and interfaces.

Similarly, it is not the intention that this section contains a complete list of all possible applications of ABNO. The examples are intended to broadly cover a number of applications that are commonly discussed, but this does not preclude other use cases.

The descriptions in this section are not fully detailed applicability statements for ABNO. It is anticipated that such applicability statements, for the use cases described and for other use cases, could be suitable material for separate documents.

3.1 Inter-AS Connectivity

The following use case describes how the ABNO framework can be used to set up an end-to-end MPLS service across multiple Autonomous Systems (ASes). Consider the simple network topology shown in Figure 2. The three ASes (ASa, ASb, and ASc) are connected at ASBRs a1, a2, b1 through b4, c1, and c2. A source node (s) located in ASa is to be connected to a destination node (d) located in ASc. The optimal path for the LSP from s to d must be computed, and then the network must be triggered to set up the LSP.

   +--------------+     +-----------------+     +--------------+
   |ASa           |     |       ASb       |     |           ASc|
   |         +--+ |     | +--+       +--+ |     | +--+         |
   |         |a1|-|-----|-|b1|       |b3|-|-----|-|c1|         |
   | +-+     +--+ |     | +--+       +--+ |     | +--+     +-+ |
   | |s|          |     |                 |     |          |d| |
   | +-+     +--+ |     | +--+       +--+ |     | +--+     +-+ |
   |         |a2|-|-----|-|b2|       |b4|-|-----|-|c2|         |
   |         +--+ |     | +--+       +--+ |     | +--+         |
   |              |     |                 |     |              |
   +--------------+     +-----------------+     +--------------+

      Figure 2 : Inter-AS Domain Topology with H-PCE (Parent PCE)

The following steps are performed to deliver the service within the ABNO architecture.

1. Request Management

   As shown in Figure 3, the NMS/OSS issues a request to the ABNO Controller for a path between s and d. The ABNO Controller verifies that the NMS/OSS has sufficient rights to make the service request.

                     +---------------------+
                     |       NMS/OSS       |
                     +----------+----------+
                                |
                                V
      +--------+    +-----------+-------------+
      | Policy +-->-+     ABNO Controller     |
      | Agent  |    |                         |
      +--------+    +-------------------------+

               Figure 3 : ABNO Request Management

2. Service Path Computation with Hierarchical PCE

   The ABNO Controller needs to determine an end-to-end path for the LSP. Since the ASes will want to maintain a degree of confidentiality about their internal resources and topology, they will not share a TED and each will have its own PCE.
   In such a situation, the Hierarchical PCE (H-PCE) architecture described in [RFC6805] is applicable.

   As shown in Figure 4, the ABNO Controller sends a request to the parent PCE for an end-to-end path. As described in [RFC6805], the parent PCE consults its TED that shows the connectivity between ASes. This helps it understand that the end-to-end path must cross each of ASa, ASb, and ASc, so it sends individual path computation requests to each of PCEs a, b, and c to determine the best options for crossing the ASes.

   Each child PCE applies policy to the requests it receives to determine whether the request is to be allowed and to select the type of network resources that can be used in the computation result. For confidentiality reasons, each child PCE may supply its computation responses using a path key [RFC5520] to hide the details of the path segment it has computed.

                 +-----------------+
                 | ABNO Controller |
                 +----+-------+----+
                      |       A
                      V       |
     +--------+    +--+-------+--+   +--------+
     |        |    |             |   |        |
     | Policy +-->-+ Parent PCE  +---+ AS TED |
     | Agent  |    |             |   |        |
     +--------+    +-+----+----+-+   +--------+
                    /     |     \
                   /      |      \
            +-----+-+ +---+---+ +-+-----+
            |       | |       | |       |
            | PCE a | | PCE b | | PCE c |
            |       | |       | |       |
            +---+---+ +---+---+ +---+---+
                |         |         |
             +--+--+   +--+--+   +--+--+
             | TEDa|   | TEDb|   | TEDc|
             +-----+   +-----+   +-----+

        Figure 4 : Path Computation Request with Hierarchical PCE

   The parent PCE collates the responses from the children and applies its own policy to stitch them together into the best end-to-end path, which it returns as a response to the ABNO Controller.

3. Provisioning the End-to-End LSP

   There are several options for how the end-to-end LSP gets provisioned in the ABNO architecture. Some of these are described below.

   3a. Provisioning from the ABNO Controller With a Control Plane

      Figure 5 shows how the ABNO Controller makes a request through the Provisioning Manager to establish the end-to-end LSP. As described in Section 2.3.2.6, these interactions can use the Netconf protocol [RFC6241] or the extensions to PCEP described in [I-D.ietf-pce-pce-initiated-lsp]. In either case, the provisioning request is sent to the head-end Label Switching Router (LSR), and it signals in the control plane (using a protocol such as RSVP-TE [RFC3209]) to cause the LSP to be established.

                +-----------------+
                | ABNO Controller |
                +--------+--------+
                         |
                         V
                  +------+-------+
                  | Provisioning |
                  |   Manager    |
                  +------+-------+
                         |
                         V
     +-------------------+-------------------------+
    /                  Network                      \
   +-------------------------------------------------+

           Figure 5 : Provisioning the End-to-End LSP

   3b. Provisioning through Programming Network Resources

      Another option is that the LSP is provisioned hop by hop from the Provisioning Manager using a mechanism such as ForCES [RFC5810] or OpenFlow [ONF] as described in Section 2.3.2.6. In this case, the picture is the same as shown in Figure 5. The interaction between the ABNO Controller and the Provisioning Manager will be PCEP or Netconf as described in option 3a, and the Provisioning Manager will have the responsibility to fan out the requests to the individual network elements.

      3c. Provisioning with an Active Parent PCE

          The active PCE is described in Section 2.3.1.7 based on the
          concepts expressed in [I-D.ietf-pce-pce-initiated-lsp]. In
          this approach, the process described in 3a is modified such
          that the PCE issues a PCEP command to the network directly,
          without a response first being returned to the ABNO
          Controller.

          This situation is shown in Figure 6, and could be modified
          so that the Provisioning Manager still programs the
          individual network elements as described in 3b.

                   +-----------------+
                   | ABNO Controller |
                   +----+------------+
                        |
                        V
                   +----+---------+        +--------------+
      +--------+   |              |        | Provisioning |
      | Policy +-->-+ Parent PCE  +---->----+   Manager    |
      | Agent  |   |              |        |              |
      +--------+   +-+----+----+--+        +-----+--------+
                    /     |     \                |
                   /      |      \               |
            +-----+-+ +---+---+ +-+-----+        V
            |       | |       | |       |        |
            | PCE a | | PCE b | | PCE c |        |
            |       | |       | |       |        |
            +-------+ +-------+ +-------+        |
                                                 |
          +--------------------------------+-----+------+
         /                  Network                      \
        +-------------------------------------------------+

             Figure 6 : LSP Provisioning with an Active PCE

      3d. Provisioning with Active Child PCEs and Segment Stitching

          A mixture of the approaches described in 3b and 3c can
          result in a combination of mechanisms to program the
          network to provide the end-to-end LSP. Figure 7 shows how
          each child PCE can be an active PCE responsible for setting
          up an edge-to-edge LSP segment across one of the ASes. The
          ABNO Controller then uses the Provisioning Manager to
          program the inter-AS connections using ForCES or OpenFlow,
          and the LSP segments are stitched together following the
          ideas described in [RFC5150]. Philosophers may debate
          whether the Parent PCE in this model is active (instructing
          the children to provision LSP segments) or passive
          (requesting path segments that the children provision).

          +-----------------+
          | ABNO Controller +-------->--------+
          +----+-------+----+                 |
               |       A                      |
               V       |                      |
            +--+-------+--+                   |
   +--------+             |                   |
   | Policy +-->-+ Parent PCE |               |
   | Agent  |   |             |               |
   +--------+   ++-----+-----++               |
                /      |      \               |
               /       |       \              |
          +---+-+   +--+--+   +-+---+         |
          |     |   |     |   |     |         |
          |PCE a|   |PCE b|   |PCE c|         |
          |     |   |     |   |     |         V
          +--+--+   +--+--+   +--+--+         |
             |         |         |            |
             V         V         V            |
   +---------+--+ +----+-------+ ++-----------+ |
   |Provisioning| |Provisioning| |Provisioning| |
   |Manager     | |Manager     | |Manager     | |
   +-+----------+ +-----+------+ +-----+------+ |
     |                  |              |        |
     V                  V              V        |
   +-+------+     +-----+--+     +-----+--+     |
  /  AS a    \===/   AS b   \===/   AS c   \    |
 +------------+ A +----------+ A +----------+   |
                |              |                |
                +-----+--------+------+         |
                | Provisioning Manager +---<----+
                +----------------------+

      Figure 7 : LSP Provisioning With Active Child PCEs and Stitching

   4. Verification of Service

      The ABNO Controller will need to ascertain that the end-to-end
      LSP has been set up as requested. In the case of a control
      plane being used to establish the LSP, the head-end LSR may
      send a notification (perhaps using PCEP) to report successful
      setup, but to be sure that the LSP is up, the ABNO Controller
      will request the OAM Handler to perform Continuity Check OAM in
      the Data Plane and report back that the LSP is ready to carry
      traffic.

   5. Notification of Service Fulfillment

      Finally, when the ABNO Controller is satisfied that the
      requested service is ready to carry traffic, it will notify the
      NMS/OSS. The delivery of the service may be further checked
      through auditing the network as described in Section 2.3.2.7.
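
   The five steps above can be summarized in a short sketch. The
   following illustrative Python is not defined by this document or by
   the referenced protocols: the component objects and their method
   names (authorize(), compute(), provision(), continuity_check(), and
   so on) are assumptions of the sketch only.

      # Illustrative sketch of the Section 3.1 work-flow. All
      # component objects and methods are hypothetical stand-ins for
      # the ABNO components; real interactions use PCEP, Netconf,
      # etc. as described in the text.

      def deliver_inter_as_lsp(request, policy, parent_pce,
                               prov_mgr, oam, nms):
          # Step 1: Request Management - check requester rights.
          if not policy.authorize(request.requester, request):
              return nms.reject(request)

          # Step 2: Hierarchical PCE computation. The parent PCE
          # consults its AS-level TED and asks the child PCEs (one
          # per AS) for segments, possibly hidden by path keys.
          path = parent_pce.compute(request.src, request.dst)

          # Step 3: Provisioning (option 3a: via the Provisioning
          # Manager to the head-end LSR, which signals with RSVP-TE).
          prov_mgr.provision(path)

          # Step 4: Verification - Continuity Check OAM on the LSP.
          if not oam.continuity_check(path):
              raise RuntimeError("LSP set up but not passing traffic")

          # Step 5: Notify the NMS/OSS of service fulfillment.
          nms.notify_fulfilled(request)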

3.2 Multi-Layer Networking

   Networks are typically constructed using multiple layers. These
   layers represent separations of administrative regions or of
   technologies, and may also represent a distinction between client
   and server networking roles.

   It is preferable to coordinate network resource control and
   utilization (i.e., consideration and control of multiple layers),
   rather than controlling and optimizing resources at each layer
   independently. This facilitates network efficiency and network
   automation, and may be defined as inter-layer traffic engineering.

   The PCE architecture supports inter-layer traffic engineering
   [RFC5623] and, in combination with the ABNO architecture, provides
   a suite of capabilities for network resource coordination across
   multiple layers.

   The following use case demonstrates ABNO used to coordinate
   allocation of server-layer network resources to create virtual
   topology in a client-layer network in order to satisfy a request
   for end-to-end client-layer connectivity. Consider the simple
   multi-layer network in Figure 8.

      +--+   +--+   +--+            +--+   +--+   +--+
      |P1|---|P2|---|P3|            |P4|---|P5|---|P6|
      +--+   +--+   +--+            +--+   +--+   +--+
                       \            /
                        \          /
                       +--+  +--+  +--+
                       |L1|--|L2|--|L3|
                       +--+  +--+  +--+

                   Figure 8 : A Multi-Layer Network

   There are six packet layer routers (P1 through P6) and three
   optical layer lambda switches (L1 through L3). There is
   connectivity in the packet layer between routers P1, P2, and P3,
   and also between routers P4, P5, and P6, but there is no
   packet-layer connectivity between these two islands of routers,
   perhaps because of a network failure or perhaps because all
   existing bandwidth between the islands has already been used up.
   However, there is connectivity in the optical layer between
   switches L1, L2, and L3, and the optical network is connected out
   to routers P3 and P4 (they have optical line cards). In this
   example, a packet layer connection (an MPLS LSP) is desired
   between P1 and P6.

   In the ABNO architecture, the following steps are performed to
   deliver the service.

   1. Request Management

      As shown in Figure 9, the Application Service Coordinator
      issues a request for connectivity from P1 to P6 in the packet
      layer network. That is, the Application Service Coordinator
      requests an MPLS LSP with a specific bandwidth to carry traffic
      for its application. The ABNO Controller verifies that the
      Application Service Coordinator has sufficient rights to make
      the service request.

         +---------------------------+
         |    Application Service    |
         |        Coordinator        |
         +-------------+-------------+
                       |
                       V
      +------+   +------------+------------+
      |Policy+->-+     ABNO Controller     |
      |Agent |   |                         |
      +------+   +-------------------------+

       Figure 9 : Application Service Coordinator Request Management

   2. Service Path Computation in the Packet Layer

      The ABNO Controller sends a path computation request to the
      packet layer PCE to compute a suitable path for the requested
      LSP as shown in Figure 10. The PCE uses the appropriate policy
      for the request and consults the TED for the packet layer. It
      determines that no path is immediately available.

      +-----------------+
      | ABNO Controller |
      +----+------------+
           |
           V
      +--------+     +--+-----------+   +--------+
      | Policy +-->--+ Packet Layer +---+ Packet |
      | Agent  |     |     PCE      |   |  TED   |
      +--------+     +--------------+   +--------+

                 Figure 10 : Path Computation Request

   3. Invocation of VNTM and Path Computation in the Optical Layer

      After the path computation failure in step 2, instead of
      notifying the ABNO Controller of the failure, the PCE invokes
      the VNTM to see whether it can create the necessary link in the
      virtual network topology to bridge the gap.

      As shown in Figure 11, the packet layer PCE reports the
      connectivity problem to the VNTM, and the VNTM consults policy
      to determine what it is allowed to do. Assuming that the policy
      allows, the VNTM asks the optical layer PCE to find a path
      across the optical network that could be provisioned to provide
      a virtual link for the packet layer. In addressing this
      request, the optical layer PCE consults a TED for the optical
      layer network.

                      +------+
      +--------+      |      |      +--------------+
      | Policy +-->---+ VNTM +--<---+ Packet Layer |
      | Agent  |      |      |      |     PCE      |
      +--------+      +---+--+      +--------------+
                          |
                          V
                 +---------------+   +---------+
                 | Optical Layer +---+ Optical |
                 |      PCE      |   |   TED   |
                 +---------------+   +---------+

       Figure 11 : Invocation of VNTM and Optical Layer Path
                   Computation

   4. Provisioning in the Optical Layer

      Once a path has been found across the optical layer network it
      needs to be provisioned. The options follow those in step 3 of
      Section 3.1. That is, provisioning can be initiated by the
      optical layer PCE or by its user, the VNTM. The command can be
      sent to the head end of the optical LSP (P3) so that the
      control plane (for example, GMPLS [RFC3473]) can be used to
      provision the LSP. Alternatively, the network resources can be
      provisioned directly using any of the mechanisms described in
      Section 2.3.2.6.

   5. Creation of Virtual Topology in the Packet Layer

      Once the LSP has been set up in the optical layer it can be
      made available in the packet layer as a virtual link. If the
      GMPLS signaling used the mechanisms described in [RFC6107] this
      process can be automated within the control plane, otherwise it
      may require a specific instruction to the head end router of
      the optical LSP (for example, through the Interface to the
      Routing System).

      Once the virtual link is created as shown in Figure 12, it is
      advertised in the IGP for the packet layer network and the link
      will appear in the TED for the packet layer network.

                     +--------+
                     | Packet |
                     |  TED   |
                     +------+-+
                            A
                            |
         +--+                               +--+
         |P3|...............................|P4|
         +--+                               +--+
            \                               /
             \                             /
              +--+        +--+        +--+
              |L1|--------|L2|--------|L3|
              +--+        +--+        +--+

             Figure 12 : Advertisement of a New Virtual Link

   6. Path Computation Completion and Provisioning in the Packet
      Layer

      Now there are sufficient resources in the packet layer network.
      The PCE for the packet layer can complete its work and the MPLS
      LSP can be provisioned as described in Section 3.1.
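
   Steps 2 through 6 above form a loop in which a failed packet-layer
   computation triggers the creation of new virtual topology. The
   short Python sketch below is purely illustrative: every object and
   method name in it is an assumption of the sketch, not an interface
   defined for the ABNO components.

      # Hypothetical sketch of steps 2-6: a failed packet-layer
      # computation causes the VNTM to turn an optical-layer LSP
      # into a virtual link, after which computation is retried.

      def packet_path_with_vntm(src, dst, pkt_pce, vntm,
                                opt_pce, prov_mgr):
          path = pkt_pce.compute(src, dst)          # Step 2
          if path is None:
              # Step 3: the packet PCE reports the gap to the VNTM,
              # which (policy permitting) asks the optical PCE.
              gap = pkt_pce.last_connectivity_gap()
              opt_path = opt_pce.compute(gap.entry, gap.exit)

              # Step 4: provision the optical LSP (e.g., via the
              # GMPLS control plane).
              prov_mgr.provision(opt_path)

              # Step 5: advertise the LSP as a virtual link so that
              # it appears in the packet-layer TED (cf. [RFC6107]).
              vntm.advertise_virtual_link(opt_path)

              # Step 6: recompute now that the TED has the new link.
              path = pkt_pce.compute(src, dst)
          return path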

   7. Verification and Notification of Service Fulfillment

      As discussed in Section 3.1, the ABNO Controller will need to
      verify that the end-to-end LSP has been correctly established
      before reporting service fulfillment to the Application Service
      Coordinator.

      Furthermore, it is highly likely that service verification will
      be necessary before the optical layer LSP can be put into
      service as a virtual link. Thus, the VNTM will need to
      coordinate with the OAM Handler to ensure that the LSP is ready
      for use.

3.2.1 Data Center Interconnection across Multi-Layer Networks

   In order to support new and emerging cloud-based applications, such
   as real-time data backup, virtual machine migration, server
   clustering, or load reorganization, the dynamic provisioning and
   allocation of IT resources and the interconnection of multiple,
   remote Data Centers (DC) is a growing requirement.

   These operations require traffic to be delivered between data
   centers, and, typically, the connections providing such inter-DC
   connectivity are provisioned using static circuits or dedicated
   leased lines, leading to an inefficiency in terms of resource
   utilization. Moreover, a basic requirement is that such a group of
   remote DCs can be operated logically as one.

   In such environments, the data plane technology is operator and
   provider dependent. Their customers may rent LSC, PSC, or TDM
   services, and the application and usage of the ABNO architecture
   and Controller enables the required dynamic end-to-end network
   service provisioning, regardless of underlying service and
   transport layers.

   Consequently, the interconnection of DCs may involve the operation,
   control, and management of heterogeneous environments: each DC
   site, and the metro-core network segment used to interconnect
   them, may differ with regard to not only the underlying data plane
   technology, but also the control plane. For example, each DC site
   or domain could be controlled locally in a centralized way (e.g.,
   via OpenFlow [ONF]), whereas the metro-core transport
   infrastructure is controlled by GMPLS. Although OpenFlow is
   especially adapted to single-domain intra-DC networks (packet-level
   control, lots of routing exceptions), a standardized GMPLS-based
   architecture would enable dynamic optical resource allocation and
   restoration in multi-domain (e.g., multi-vendor) core networks
   interconnecting distributed data centers.

   The application of an ABNO architecture and related procedures
   would involve the following aspects:

   1. Request From the Application Service Coordinator or NMS

      As shown in Figure 13, the ABNO Controller receives a request
      from the Application Service Coordinator or from the NMS, in
      order to create a new end-to-end connection between two end
      points. The actual addressing of these end points is discussed
      in the next section. The ABNO Controller asks the PCE for a
      path between these two endpoints, after considering any
      applicable policy as defined by the Policy Agent (see Figure 1).

         +---------------------------+
         |    Application Service    |
         |    Coordinator or NMS     |
         +-------------+-------------+
                       |
                       V
      +------+   +------------+------------+
      |Policy+->-+     ABNO Controller     |
      |Agent |   |                         |
      +------+   +-------------------------+

      Figure 13 : Application Service Coordinator Request Management

   2. Cross-Stratum Addressing Mapping

      In order to compute an end-to-end path, the PCE needs to have a
      unified view of the overall topology, which means that it has
      to consider and identify the actual endpoints with regard to
      the client network addresses. The ABNO Controller and/or the
      PCE may need to translate or map addresses from different
      address spaces. Depending on how the topology information is
      disseminated and gathered, there are two possible scenarios:

      a. The Application Layer knows the Client Network Layer.
         Entities belonging to the application layer may have an
         interface with the TED or with an ALTO server allowing those
         entities to map the high-level endpoints to network
         addresses. The mechanism used to enable this address
         correlation is out of the scope of this document, but relies
         on direct interfaces to other ABNO components in addition to
         the interface to the ABNO Controller.

         In this scenario, the request from the NMS or Application
         Service Coordinator contains addresses in the client layer
         network. Therefore, when the ABNO Controller requests the
         PCE to compute a path between two end points, the PCE is
         able to use the supplied addresses, compute the path, and
         continue the work-flow in communication with the
         Provisioning Manager.

      b. The Application Layer does not know the Client Network
         Layer. In this case, when the ABNO Controller receives a
         request from the NMS or Application Service Coordinator, the
         request contains only identifiers from the Application Layer
         address space. In order for the PCE to compute an
         end-to-end path, these identifiers must be converted to
         addresses in the client layer network. This translation can
         be performed by the ABNO Controller, which can access the
         TED and ALTO databases, allowing the path computation
         request it sends to the PCE to be simply contained within
         one network and TED. Alternatively, the computation request
         could use the application layer identifiers, leaving the job
         of address mapping to the PCE.

         Note that both approaches in this scenario require clear
         identification of the address spaces that are in use in
         order to avoid any confusion.

   3. Provisioning Process

      Once the path has been obtained, the Provisioning Manager
      receives a high-level provisioning request to provision the
      service. Since, in the considered use case, the network
      elements are not necessarily configured using the same
      protocol, the end-to-end path is split into segments, and the
      ABNO Controller coordinates or orchestrates the establishment
      by adapting and/or translating the abstract provisioning
      request to concrete segment requests, by means of a VNTM or
      PCE, which issue the corresponding commands or

              +-----------------+
              | ABNO Controller |
              +-------+---------+
                      |
                      |
                      V
         +------+   +-+------------+
         | VNTM +-<-+     PCE      |
         +---+--+   +------+-------+
             |             |
             V             V
         +---+-------------+--------------+
         |      Provisioning Manager      |
         +--------------------------------+
           |       |      |      |      |
           V       |      V      |      V
        OpenFlow   V    ForCES   V    PCEP
               NetConf        SNMP

                  Figure 14 : Provisioning Process

      instructions. The provisioning may involve configuring the
      data plane elements directly or delegating the establishment of
      the underlying connection to a dedicated control plane
      instance, responsible for that segment.
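
      The splitting and per-segment adaptation described above can be
      illustrated with a short, self-contained Python sketch. The
      adapter functions below are assumptions for illustration only;
      real provisioning would use the protocols shown in Figure 14.

         # Illustrative sketch: the end-to-end path is split into
         # segments, and each segment is programmed with whichever
         # mechanism its domain supports (cf. Figure 14).

         def provision_end_to_end(segments, protocol_by_domain):
             """segments: list of (domain, hops) tuples on the path."""
             adapters = {
                 "openflow": lambda hops: print("program flows:", hops),
                 "netconf":  lambda hops: print("push config:", hops),
                 "pcep":     lambda hops: print("send PCInitiate:", hops),
             }
             for domain, hops in segments:
                 # The mechanism per domain is learned by manual
                 # configuration or discovery (see below).
                 adapters[protocol_by_domain[domain]](hops)

         # Hypothetical example: DC sites run OpenFlow; the metro-core
         # segment is reached through PCEP.
         provision_end_to_end(
             [("dc-1", ["of1", "of2"]), ("core", ["A", "B"]),
              ("dc-2", ["of9"])],
             {"dc-1": "openflow", "core": "pcep", "dc-2": "openflow"})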

      The Provisioning Manager could use a number of mechanisms to
      program the network elements as shown in Figure 14. It learns
      which technology is used for the actual provisioning at each
      segment either by manual configuration or by discovery.

   4. Verification and Notification of Service Fulfillment

      Once the end-to-end connectivity service has been provisioned,
      and after the verification of the correct operation of the
      service, the ABNO Controller needs to notify the Application
      Service Coordinator or NMS.

3.3 Make-Before-Break

   A number of different services depend on the establishment of a new
   LSP so that traffic supported by an existing LSP can be switched
   without disruption. This section describes those use cases,
   presents a generic model for make-before-break within the ABNO
   architecture, and shows how each use case can be supported by using
   elements of the generic model.

3.3.1 Make-Before-Break for Re-optimization

   Make-before-break is a mechanism supported in RSVP-TE signaling
   where a new LSP is set up before the LSP it replaces is torn down
   [RFC3209]. This process has several benefits in situations such as
   re-optimization of in-service LSPs.

   The process is simple, and the example shown in Figure 15 utilizes
   a stateful PCE [I-D.ietf-pce-stateful-pce] to monitor the network
   and take re-optimization actions when necessary. In this process,
   a service request is made to the ABNO Controller by a requester
   such as the OSS. The service request indicates that the LSP should
   be re-optimized under specific conditions according to policy.
   This allows the ABNO Controller to manage the sequence and
   prioritization of re-optimizing multiple LSPs using elements of
   Global Concurrent Optimization (GCO) as described in Section 3.4,
   and applying policies across the network so that, for instance,
   LSPs for delay-sensitive services are re-optimized first.

   The ABNO Controller commissions the PCE to compute and set up the
   initial path.

   Over time, the PCE monitors the changes in the network as reflected
   in the TED, and according to the configured policy may compute and
   set up a replacement path, using make-before-break within the
   network.

   Once the new path has been set up and the Network reports that it
   is in use correctly, the PCE tears down the old path and may report
   the re-optimization event to the ABNO Controller.

      +---------------------------------------------+
      | OSS / NMS / Application Service Coordinator |
      +----------------------+----------------------+
                             |
                +------------+------------+
                |     ABNO Controller     |
                +------------+------------+
                             |
      +------+      +-------+-------+      +-----+
      |Policy+------+      PCE      +------+ TED |
      |Agent |      +-------+-------+      +-----+
      +------+              |
                            |
      +---------------------+-----------------------+
     /                   Network                     \
    +-------------------------------------------------+

            Figure 15 : The Make-Before-Break Process

3.3.2 Make-Before-Break for Restoration

   Make-before-break may also be used to repair a failed LSP where
   there is a desire to retain resources along some of the path, and
   where there is the potential for other LSPs to "steal" the
   resources if the failed LSP is torn down first.
   Unlike the example in Section 3.3.1, this case is
   service-interrupting, but that arises from the break in service
   introduced by the network failure. Obviously, in the case of a
   point-to-multipoint LSP, the failure might only affect part of the
   tree and the disruption will only be to a subset of the destination
   leaves, so that a make-before-break restoration approach will not
   cause disruption to the leaves that were not affected by the
   original failure.

   Figure 16 shows the components that interact for this use case. A
   service request is made to the ABNO Controller by a requester such
   as the OSS. The service request indicates that the LSP may be
   restored after failure and should attempt to reuse as much of the
   original path as possible.

   The ABNO Controller commissions the PCE to compute and set up the
   initial path. The ABNO Controller also requests the OAM Handler to
   initiate OAM on the LSP and to monitor the results.

   At some point, the network reports a fault to the OAM Handler,
   which notifies the ABNO Controller.

   The ABNO Controller commissions the PCE to compute a new path,
   re-using as much of the original path as possible, and the PCE sets
   up the new LSP.

   Once the new path has been set up and the Network reports that it
   is in use correctly, the ABNO Controller instructs the PCE to tear
   down the old path.

      +---------------------------------------------+
      | OSS / NMS / Application Service Coordinator |
      +----------------------+----------------------+
                             |
                +------------+------------+   +-------+
                |     ABNO Controller     +---+  OAM  |
                +------------+------------+   |Handler|
                             |                +---+---+
                     +-------+-------+            |
                     |      PCE      |            |
                     +-------+-------+            |
                             |                    |
      +----------------------+--------------------+-+
     /                    Network                    \
    +-------------------------------------------------+

        Figure 16 : The Make-Before-Break Restoration Process

3.3.3 Make-Before-Break for Path Test and Selection

   In a more complicated use case, an LSP may be monitored for a
   number of attributes such as delay and jitter. When the LSP falls
   below a threshold, the traffic may be moved to another LSP that
   offers the desired (or at least a better) quality of service. To
   achieve this, it is necessary to establish the new LSP and test it,
   and because the traffic must not be interrupted,
   make-before-break must be used.

   Moreover, it may be the case that no new LSP can provide the
   desired attributes, and that a number of LSPs need to be tested so
   that the best can be selected. Furthermore, even when the original
   LSP is set up, it could be desirable to test a number of LSPs
   before deciding which should be used to carry the traffic.

   Figure 17 shows the components that interact for this use case.
   Because multiple LSPs might exist at once, a distinct action is
   needed to coordinate which one carries the traffic, and this is the
   job of the I2RS Client acting under the control of the ABNO
   Controller.

   The OAM Handler is responsible for initiating tests on the LSPs and
   for reporting the results back to the ABNO Controller. The OAM
   Handler can also check end-to-end connectivity test results across
   a multi-domain network even when each domain runs a different
   technology.
   For example, an end-to-end path might be achieved by stitching
   together an MPLS segment, an Ethernet/VLAN segment, and an IP
   segment.

   Otherwise, the process is similar to that for re-optimization
   discussed in Section 3.3.1.

      +---------------------------------------------+
      | OSS / NMS / Application Service Coordinator |
      +----------------------+----------------------+
                             |
      +------+  +------------+------------+  +-------+
      |Policy+--+      ABNO Controller    +--+  OAM  |
      |Agent |  |                         |  |Handler|
      +------+  +-----+--------------+----+  +---+---+
                      |              |           |
              +-------+-------+  +---+--+        |
              |      PCE      |  | I2RS |        |
              +-------+-------+  |Client|        |
                      |          +---+--+        |
                      |              |           |
      +---------------+--------------+-----------+-+
     /                    Network                   \
    +------------------------------------------------+

     Figure 17 : The Make-Before-Break Path Test and Selection Process

   The pseudo-code that follows gives an indication of the
   interactions between ABNO components.

      OSS requests quality-assured service

      :Label1

      DoWhile not enough LSPs (ABNO Controller)
         Instruct PCE to compute and provision the LSP
                                               (ABNO Controller)
         Create the LSP (PCE)
      EndDo

      :Label2

      DoFor each LSP (ABNO Controller)
         Test LSP (OAM Handler)
         Report results to ABNO Controller (OAM Handler)
      EndDo

      Evaluate results of all tests (ABNO Controller)
      Select preferred LSP and instruct I2RS client (ABNO Controller)
      Put traffic on preferred LSP (I2RS Client)

      DoWhile too many LSPs (ABNO Controller)
         Instruct PCE to tear down unwanted LSP (ABNO Controller)
         Tear down unwanted LSP (PCE)
      EndDo

      DoUntil trigger (OAM Handler, ABNO Controller, Policy Agent)
         keep sending traffic (Network)
         Test LSP (OAM Handler)
      EndDo

      If there is already a suitable LSP (ABNO Controller)
         GoTo Label2
      Else
         GoTo Label1
      EndIf
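
   A loose, runnable rendering of the pseudo-code above is given below
   in illustrative Python. The pce, oam, and i2rs objects stand in
   for the ABNO components, and their methods are assumptions of this
   sketch rather than defined interfaces.

      # Illustrative Python version of the test-and-select loop.

      def test_and_select(candidates, pce, oam, i2rs, num_lsps=3):
          # Create the candidate LSPs (Label1 loop).
          lsps = [pce.create_lsp(path) for path in candidates[:num_lsps]]

          # Test every candidate LSP and collect results (Label2 loop).
          results = {lsp: oam.test(lsp) for lsp in lsps}

          # Select the preferred LSP (e.g., by measured delay) and
          # direct traffic onto it via the I2RS Client.
          best = min(results, key=lambda l: results[l].delay)
          i2rs.put_traffic_on(best)

          # Tear down the LSPs that were not selected.
          for lsp in lsps:
              if lsp is not best:
                  pce.tear_down(lsp)
          return best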

3.4 Global Concurrent Optimization

   Global Concurrent Optimization (GCO) is defined in [RFC5557] and
   represents a key technology for maximizing network efficiency by
   computing a set of traffic engineered paths concurrently. A GCO
   path computation request will simultaneously consider the entire
   topology of the network, and the complete set of new LSPs together
   with their respective constraints. Similarly, GCO may be applied
   to recompute the paths of a set of existing LSPs.

   GCO may be requested in a number of scenarios. These include:

   o Routing of new services where the PCE should consider other
     services or network topology.

   o A reoptimization of existing services due to fragmented network
     resources or sub-optimized placement of sequentially computed
     services.

   o Recovery of connectivity for bulk services in the event of a
     catastrophic network failure.

   A service provider may also want to compute and deploy new bulk
   services based on a predicted traffic matrix. The GCO
   functionality and capability to perform concurrent computation
   provides a significant network optimization advantage, thus
   utilizing network resources optimally and avoiding blocking.

   The following use case shows how the ABNO architecture and
   components are used to achieve concurrent optimization across a set
   of services.

3.4.1 Use Case: GCO with MPLS LSPs

   When considering the GCO path computation problem, we can split the
   GCO objective functions into three optimization categories:

   o Minimize aggregate Bandwidth Consumption (MBC).

   o Minimize the load of the Most Loaded Link (MLL).

   o Minimize Cumulative Cost of a set of paths (MCC).

   This use case assumes that the GCO request will be offline and
   initiated from an NMS/OSS; that is, it may take significant time to
   compute the solution, and the user may want to verify the paths
   reported in the response before they are provisioned within the
   network.

   1. Request Management

      The NMS/OSS issues a request for new service connectivity for
      bulk services. The ABNO Controller verifies that the NMS/OSS
      has sufficient rights to make the service request and to apply
      a GCO attribute with a request to Minimize aggregate Bandwidth
      Consumption (MBC), as shown in Figure 18.

            +---------------------+
            |       NMS/OSS       |
            +----------+----------+
                       |
                       V
      +--------+   +-----------+-------------+
      | Policy +-->-+    ABNO Controller     |
      | Agent  |   |                         |
      +--------+   +-------------------------+

             Figure 18 : NMS Request to ABNO Controller

      1a. Each service request has a source, a destination, and a
          bandwidth request. These service requests are sent to the
          ABNO Controller and categorized as a GCO request.

   2. Service Path Computation in the Packet Layer

      To compute a set of services for the GCO application, PCEP
      supports synchronization vector (SVEC) lists for synchronized
      dependent path computations as defined in [RFC5440] and
      described in [RFC6007].

      2a. The ABNO Controller sends the bulk service request to the
          GCO-capable packet layer PCE using PCEP messaging. The PCE
          uses the appropriate policy for the request and consults
          the TED for the packet layer as shown in Figure 19.

         +-----------------+
         | ABNO Controller |
         +----+------------+
              |
              V
      +--------+   +--+-----------+   +--------+
      |        |   |              |   |        |
      | Policy +-->-+ GCO-capable +---+ Packet |
      | Agent  |   | Packet Layer |   |  TED   |
      |        |   |     PCE      |   |        |
      +--------+   +--------------+   +--------+

         Figure 19 : Path Computation Request from GCO-capable PCE

      2b. Upon receipt of the bulk (GCO) service requests, the PCE
          applies the MBC objective function and computes the
          services concurrently.

      2c. Once the requested GCO service path computation completes,
          the PCE sends the resulting paths back to the ABNO
          Controller as a PCEP response as shown in Figure 20. The
          response includes a fully computed explicit path for each
          service (TE LSP).

            +---------------------+
            |       NMS/OSS       |
            +----------+----------+
                       ^
                       |
            +----------+----------+
            |   ABNO Controller   |
            |                     |
            +---------------------+

            Figure 20 : ABNO Sends Solution to the NMS/OSS

   3. The concurrently computed solution received from the PCE is
      sent back to the NMS/OSS by the ABNO Controller. The NMS/OSS
      user can then check the candidate paths and either provision
      the new services, or save the solution for deployment in the
      future.
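
   The grouping of the bulk request into a single synchronized
   computation (step 2) can be sketched as follows. The field names
   below are illustrative assumptions only; they are not the PCEP
   encodings of [RFC5440] or [RFC6007].

      # Hedged sketch: batch a set of services into one SVEC-style,
      # synchronized GCO computation with an MBC objective.

      def build_gco_request(services):
          # services: list of dicts with src, dst, and bandwidth.
          return {
              "objective": "MBC",       # Minimize aggregate Bandwidth
              "svec": list(range(len(services))),  # compute together
              "requests": [
                  {"src": s["src"], "dst": s["dst"], "bw": s["bw"]}
                  for s in services
              ],
          }

      # Hypothetical example with two services.
      print(build_gco_request(
          [{"src": "s1", "dst": "d1", "bw": 100},
           {"src": "s2", "dst": "d2", "bw": 40}]))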

3.5 Adaptive Network Management (ANM)

   The ABNO architecture provides the capability for reactive network
   control of resources based on classification, profiling, and
   prediction of current demands and resource utilization.
   Server-layer transport network resources, such as Optical Transport
   Network (OTN) time-slicing [G.709], or the fine granularity grid of
   wavelengths with variable spectral bandwidth (flexi-grid) [G.694.1],
   can be manipulated to meet current and projected demands in a model
   called Elastic Optical Networks (EON) [EON].

   EON provides spectrum-efficient and scalable transport by
   introducing flexible granular grooming in the optical frequency
   domain. This is achieved using arbitrary contiguous concatenation
   of optical spectrum that allows creation of custom-sized bandwidth.
   This bandwidth is defined in slots of 12.5 GHz.

   Adaptive Network Management (ANM) with EON allows
   appropriately-sized optical bandwidth to be allocated to an
   end-to-end optical path. In flexi-grid, the allocation is
   performed according to the traffic volume or following user
   requests, and can be achieved in a highly spectrum-efficient and
   scalable manner. Similarly, OTN provides an adaptive and elastic
   provisioning of bandwidth on top of wavelength switched optical
   networks (WSON).

   To use optical resources efficiently, a system is required that can
   monitor network resources and decide the optimal network
   configuration based on the status, bandwidth availability, and user
   service. We call this ANM.

3.5.1. ANM Trigger

   There are different reasons to trigger an adaptive network
   management process; these include:

   o Measurement: traffic measurements can be used in order to cause
     spectrum allocations that fit the traffic needs as efficiently as
     possible. This function may be influenced by measuring the IP
     router traffic flows, by examining traffic engineering or link
     state databases, by usage thresholds for critical links in the
     network, or by requests from external entities. Nowadays,
     network operators have active monitoring probes in the network,
     which store their results in the OSS. The OSS or OAM Handler
     components activate this measurement-based trigger, so the ABNO
     Controller would not be directly involved in this case.

   o Human: operators may request ABNO to run an adaptive network
     planning process via an NMS.

   o Periodic: an adaptive network planning process can be run
     periodically to find an optimum configuration.

   An ABNO Controller would receive a request from the OSS or NMS to
   run an adaptive network management process.

3.5.2. Processing Request and GCO Computation

   Based on the human or periodic trigger requests described in the
   previous section, the OSS or NMS will send a request to the ABNO
   Controller to perform EON-based GCO. The ABNO Controller will
   select a set of services to be reoptimized and choose an objective
   function that will deliver the best use of network resources. In
   making these choices, the ABNO Controller is guided by network-wide
   policy on the use of resources, the definition of optimization, and
   the level of perturbation to existing services that is tolerable.
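
   The optical-layer quantity being optimized here is spectrum,
   allocated in the 12.5 GHz slots described at the start of this
   section. The following small worked example illustrates only the
   slot arithmetic; the mapping from a demand to the spectrum it
   needs is an assumption for illustration and depends on the
   modulation format in practice.

      # Illustrative flexi-grid slot arithmetic: an elastic optical
      # path occupies a contiguous set of 12.5 GHz frequency slots.

      import math

      SLOT_GHZ = 12.5

      def slots_needed(spectrum_ghz):
          # Round the requested spectrum up to whole slots.
          return math.ceil(spectrum_ghz / SLOT_GHZ)

      # A demand needing 37.5 GHz of spectrum occupies
      # ceil(37.5 / 12.5) = 3 contiguous slots; 50 GHz occupies 4.
      print(slots_needed(37.5), slots_needed(50.0))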

   Much as in Section 3.4, this request for GCO is passed to the PCE.
   The PCE can then consider the end-to-end paths and every channel's
   optimal spectrum assignment in order to satisfy traffic demands and
   optimize the optical spectrum consumption within the network.

   The PCE will operate on the TED, but is likely also to be stateful
   so that it knows which LSPs correspond to which waveband
   allocations on which links in the network. Once the PCE arrives at
   an answer, it returns a set of potential paths to the ABNO
   Controller, which passes them on to the NMS or OSS to
   supervise/select the subsequent path set-up/modification process.

   This exchange is shown in Figure 21. Note that the figure does not
   show the interactions used by the OSS/NMS for establishing or
   modifying LSPs in the network.

      +---------------------------+
      |        OSS or NMS         |
      +-----------+---+-----------+
                  |   ^
                  V   |
      +------+  +-----+---+-------------+
      |Policy+->-+    ABNO Controller   |
      |Agent |  |                       |
      +------+  +-----------+---+-------+
                            |   ^
                            V   |
                      +-----+---+----+
                      |     PCE      |
                      +--------------+

      Figure 21 : Adaptive Network Management with human intervention

3.5.3. Automated Provisioning Process

   Although most network operations are supervised by the operator,
   there are some actions that may not require supervision, such as a
   simple modification of a modulation format in a Bit-rate Variable
   Transponder (BVT) (to increase the optical spectrum efficiency or
   reduce energy consumption). In these processes, where human
   intervention is not required, the PCE sends the Provisioning
   Manager the new configuration to configure the network elements as
   shown in Figure 22.

      +------------------------+
      |       OSS or NMS       |
      +-----------+------------+
                  |
                  V
      +------+  +-+----------------------+
      |Policy+->-+    ABNO Controller    |
      |Agent |  |                        |
      +------+  +-----------+------------+
                            |
                            V
                     +------+------+
                     |     PCE     |
                     +------+------+
                            |
                            V
      +----------------------------------+
      |       Provisioning Manager       |
      +----------------------------------+

    Figure 22 : Adaptive Network Management without human intervention

3.6 Pseudowire Operations and Management

   Pseudowires in an MPLS network [RFC3985] operate as a form of
   layered network over the connectivity provided by the MPLS network.
   The pseudowires are carried by LSPs operating as transport tunnels,
   and planning is necessary to determine how those tunnels are placed
   in the network and which tunnels are used by any pseudowire.

   This section considers four use cases: multi-segment pseudowires,
   path-diverse pseudowires, path-diverse multi-segment pseudowires,
   and pseudowire segment protection. Section 3.6.5 describes the
   applicability of the ABNO architecture to these four use cases.

3.6.1 Multi-Segment Pseudowires

   [RFC5254] describes the architecture for multi-segment pseudowires.
   An end-to-end service, as shown in Figure 23, can consist of a
   series of stitched segments shown on the figure as AC, PW1, PW2,
   PW3, and AC. Each pseudowire segment is stitched at a 'stitching
   PE' (S-PE): for example, PW1 is stitched to PW2 at S-PE1. Each
   access circuit (AC) is stitched to a pseudowire segment at a
   'terminating PE' (T-PE): for example, PW1 is stitched to the AC at
   T-PE1.
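
   As a simple illustration of this stitched structure (shown in
   Figure 23 below), the following minimal data-structure sketch
   captures the service as an ordered list of stitched elements. The
   representation is an assumption of this sketch, not a defined
   encoding.

      # Hypothetical representation of the Figure 23 service:
      # (element, from, to), stitched end to end.

      end_to_end_service = [
          ("AC",  "CE1",   "T-PE1"),
          ("PW1", "T-PE1", "S-PE1"),   # carried in LSP1
          ("PW2", "S-PE1", "S-PE3"),   # carried in LSP2
          ("PW3", "S-PE3", "T-PE2"),   # carried in LSP3
          ("AC",  "T-PE2", "CE2"),
      ]

      # Adjacent elements must share the PE at which they are
      # stitched.
      for (_, _, b), (_, c, _) in zip(end_to_end_service,
                                      end_to_end_service[1:]):
          assert b == c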

   Each pseudowire segment is carried across the MPLS network in an
   LSP operating as a transport tunnel: for example, PW1 is carried in
   LSP1. The LSPs between provider edge nodes (PEs) may traverse
   different MPLS networks with the PEs as border nodes, or the PEs
   may lie within the network such that the LSPs each only span part
   of the network.

              -----         -----         -----         -----
       ---   |T-PE1| LSP1  |S-PE1| LSP2  |S-PE3| LSP3  |T-PE2|  +---+
      |   |AC|     |=======|     |=======|     |=======|     |AC|   |
      |CE1|--|........PW1........|..PW2........|..PW3........|--|CE2|
      |   |  |     |=======|     |=======|     |=======|     |  |   |
       ---   |     |       |     |       |     |       |     |  +---+
              -----         -----         -----         -----

                    Figure 23 : Multi-Segment Pseudowire

   While the topology shown in Figure 23 is easy to navigate, the
   reality of a deployed network can be considerably more complex.
   The topology in Figure 24 shows a small mesh of PEs. The links
   between the PEs are not physical links but represent the potential
   of MPLS LSPs between the PEs.

   When establishing the end-to-end service between customer edge
   nodes (CEs) CE1 and CE2, some choice must be made about which PEs
   to use. In other words, a path computation must be made to
   determine the pseudowire segment 'hops', and then the necessary LSP
   tunnels must be established to carry the pseudowire segments that
   will be stitched together.

   Of course, each LSP may itself require a path computation decision
   to route it through the MPLS network between PEs.

   The choice of path for the multi-segment pseudowire will depend on
   such issues as:
   - MPLS connectivity
   - MPLS bandwidth availability
   - pseudowire stitching capability and capacity at PEs
   - policy and confidentiality considerations for use of PEs.

                                   -----
                                  |S-PE5|
                                 /-----\
       ---    -----        -----/       \-----        -----    ---
      |CE1|--|T-PE1|------|S-PE1|-------|S-PE3|------|T-PE2|--|CE2|
       ---    -----\       -----\        -----       /-----    ---
                    \            \          |       /
                     \      -----\      \-----     /
                      -----|S-PE2|------|S-PE4|----
                            -----        -----

           Figure 24 : Multi-Segment Pseudowire Network Topology

3.6.2 Path-Diverse Pseudowires

   The connectivity service provided by a pseudowire may need to be
   resilient to failure. In many cases, this function is provided by
   provisioning a pair of pseudowires carried by path-diverse LSPs
   across the network as shown in Figure 25 (the terminology is
   inherited directly from [RFC3985]). Clearly, in this case, the
   challenge is to keep the two LSPs (LSP1 and LSP2) disjoint within
   the MPLS network. This problem is no different from the normal
   MPLS path-diversity problem.

            -------                         -------
           |  PE1  |          LSP1         |  PE2  |
      AC   |       |=======================|       |   AC
     ------...................PW1...................------
     ---  /|       |=======================|       |\  ---
    |   |/ |       |                       |       | \|   |
    |CE1+  |       |     MPLS Network      |       |  +CE2|
    |   |\ |       |                       |       | /|   |
     ---  \|       |=======================|       |/  ---
     ------...................PW2...................------
      AC   |       |=======================|       |   AC
           |       |          LSP2         |       |
            -------                         -------

                Figure 25 : Path-Diverse Pseudowires

            -------                         -------
           |  PE1  |          LSP1         |  PE2  |
      AC   |       |=======================|       |   AC
       ---...................PW1...................---
      /    |       |=======================|       |    \
   ---    /|       |                       |       |\    ---
  |   |  /  -------                         -------  \  |   |
  |CE1|-+              MPLS Network                 +-|CE2|
  |   |  \  -------                         -------  /  |   |
   ---    \|  PE3  |                       |  PE4  |/    ---
      \    |       |=======================|       |    /
       ---...................PW2...................---
      AC   |       |=======================|       |   AC
           |       |          LSP2         |       |
            -------                         -------

         Figure 26 : Path-Diverse Pseudowires With Disjoint PEs

   The path-diverse pseudowire is developed in Figure 26 by the
   "dual-homing" of each CE through more than one PE. The requirement
   for LSP path diversity is exactly the same, but it is complicated
   by the LSPs having distinct end points. In this case, the head-end
   router (e.g., PE1) cannot be relied upon to maintain the path
   diversity through the signaling protocol because it is aware of the
   path of only one of the LSPs. Thus, some form of coordinated path
   computation is needed.

3.6.3 Path-Diverse Multi-Segment Pseudowires

   Figure 27 shows how the services in the previous two sections may
   be combined to offer end-to-end diverse paths in a multi-segment
   environment. To offer end-to-end resilience to failure, two
   entirely diverse, end-to-end multi-segment pseudowires may be
   needed.

                            -----
                           |S-PE5|--------------|T-PE4|
                          /-----\                -----\
        -----       -----/       \-----    -----      \ ---
       |T-PE1|-----|S-PE1|-------|S-PE3|--|T-PE2|------|CE2|
   --- /-----\      -----\        ----- / -----        ---
  |CE1|<      \           \        |   /
   --- \       -----       \-----  \--+--
        ------|T-PE3|------|S-PE2|----|S-PE4|
               -----        -----      -----

      Figure 27 : Path-Diverse Multi-Segment Pseudowire Network
                  Topology

   Just as in any diverse-path computation, the selection of the first
   path needs to be made with awareness of the fact that a second,
   fully-diverse path is also needed. If a sequential computation
   were applied to the topology in Figure 27, the first path
   CE1,T-PE1,S-PE1,S-PE3,T-PE2,CE2 would make it impossible to find a
   second path that was fully diverse from the first.

   But the problem is complicated by the multi-layer nature of the
   network. It is not enough that the PEs are chosen to be diverse,
   because the LSP tunnels between them might share links within the
   MPLS network. Thus, a multi-layer planning solution is needed to
   achieve the desired level of service.

3.6.4 Pseudowire Segment Protection

   An alternative to the end-to-end pseudowire protection service
   described in Section 3.6.3 can be achieved by protecting individual
   pseudowire segments or PEs. For example, in Figure 27, the
   pseudowire between S-PE1 and S-PE5 may be protected by a pair of
   stitched segments running between S-PE1 and S-PE5, and between
   S-PE5 and S-PE3. This is shown in detail in Figure 28.

              -------              -------              -------
             | S-PE1 |    LSP1    | S-PE5 |    LSP3    | S-PE3 |
             |       |============|       |============|       |
             |   .........PW1..................PW3..........   | Outgoing
    Incoming |   :     |============|       |============|  :  | segment
    segment  |   :     |             -------             |  :..........
    .............:     |                                 |  :   |
             |   :     |                                 |  :   |
             |   :     |=================================|  :   |
             |   .........PW2...............................:   |
             |         |=================================|      |
             |         |              LSP2               |      |
              -------                                     -------

     Figure 28 : Fragment of a Segment-Protected Multi-Segment
                 Pseudowire

   The determination of pseudowire protection segments requires
   coordination and planning, and just as in Section 3.6.3, this
   planning must be cognizant of the paths taken by LSPs through the
   underlying MPLS networks.

3.6.5 Applicability of ABNO to Pseudowires

   The ABNO architecture lends itself well to the planning and control
   of pseudowires in the use cases described above. The user or
   application needs a single point at which it requests services: the
   ABNO Controller. The ABNO Controller can ask a PCE to draw on the
   topology of pseudowire stitching-capable PEs as well as additional
   information regarding PE capabilities, such as load on PEs and
   administrative policies, and the PCE can use a series of TEDs or
   other PCEs for the underlying MPLS networks to determine the paths
   of the LSP tunnels. At the time of writing, PCEP does not support
   path computation requests and responses concerning pseudowires, but
   the concepts are very similar to existing uses and the necessary
   extensions would be very small.

   Once the paths have been computed, a number of different
   provisioning systems can be used to instantiate the LSPs and
   provision the pseudowires under the control of the Provisioning
   Manager. The ABNO Controller will use the I2RS Client to instruct
   the network devices about what traffic should be placed on which
   pseudowires, and in conjunction with the OAM Handler can ensure
   that failure events are handled correctly, that service quality
   levels are appropriate, and that service protection levels are
   maintained.

   In many respects, the pseudowire network forms an overlay network
   (with its own TED and provisioning mechanisms) carried by
   underlying packet networks. Further client networks (the
   pseudowire payloads) may be carried by the pseudowire network.
   Thus, the problem space being addressed by ABNO in this case is a
   classic multi-layer network.

3.7. Cross-Stratum Optimization (CSO)

   Considering the term "stratum" to broadly differentiate the layers
   of most concern to the application and to the network in general,
   the need for Cross-Stratum Optimization (CSO) arises when the
   application stratum and network stratum need to be coordinated to
   achieve operational efficiency as well as resource optimization in
   both application and network strata.

   Data center based applications can provide a wide variety of
   services such as video gaming, cloud computing, and grid
   applications. High-bandwidth video applications are also emerging,
   such as remote medical surgery, live concerts, and sporting events.

   This use case for the ABNO architecture is mainly concerned with
   data center applications that make substantial bandwidth demands
   either in aggregate or individually.
   In addition, these applications may need specific bounds on
   QoS-related parameters such as latency and jitter.

3.7.1. Data Center Network Operation

   Data centers come in a wide variety of sizes and configurations,
   but all contain compute servers, storage, and application control.
   Data centers offer application services to end-users such as video
   gaming, cloud computing, and others. Since the data centers used
   to provide application services may be distributed around a
   network, the decisions about the control and management of
   application services, such as where to instantiate another service
   instance or to which data center a new client is assigned, can
   have a significant impact on the state of the network. Conversely,
   the capabilities and state of the network can have a major impact
   on application performance.

   These decisions are typically made by applications with very little
   or no information concerning the underlying network. Hence, such
   decisions may be sub-optimal from the application's point of view
   or considering network resource utilization and quality of service.

   Cross-stratum optimization is the process of optimizing both the
   application experience and the network utilization by coordinating
   decisions in the application stratum and the network stratum.
   Application resources can be roughly categorized into computing
   resources (i.e., servers of various types and granularities, such
   as VMs, memory, and storage) and content (e.g., video, audio,
   databases, and large data sets). By network stratum we mean the IP
   layer and below (e.g., MPLS, SDH, OTN, WDM). The network stratum
   has resources that include routers, switches, and links. We are
   particularly interested in further unleashing the potential
   presented by MPLS and GMPLS control planes at the lower network
   layers in response to the high aggregate or individual demands from
   the application layer.

   This use case demonstrates that the ABNO architecture can allow
   cross-stratum application/network optimization for the data center
   use case. Other forms of cross-stratum optimization (for example,
   for peer-to-peer applications) are out of scope.

3.7.1.1. Virtual Machine Migration

   A key enabler for data center cost savings, consolidation,
   flexibility, and application scalability has been the technology of
   compute virtualization provided through Virtual Machines (VMs). To
   the software application, a VM looks like a dedicated processor
   with dedicated memory and a dedicated operating system.

   VMs offer not only a unit of compute power but also provide an
   "application environment" that can be replicated, backed up, and
   moved. Different VM configurations may be offered that are
   optimized for different types of processing (e.g., memory
   intensive, throughput intensive).

   VMs may be moved between compute resources in a data center and
   could be moved between data centers. VM migration serves to
   balance load across data center resources and has several modes:
   (i) scheduled vs. dynamic;
   (ii) bulk vs. sequential;
   (iii) point-to-point vs. point-to-multipoint.

   While VM migration may solve problems of load or planned
   maintenance within a data center, it can also be effective to
   reduce network load around the data center.
   But the act of migrating VMs, especially between data centers, can
   impact the network and other services that are offered.

   For certain applications such as disaster recovery, bulk migration
   is required on the fly, which may necessitate concurrent
   computation and path setup dynamically.

   Thus, application stratum operations must also take account of the
   situation in the network stratum even as the application stratum
   actions may be driven by the status of the network stratum.

3.7.1.2. Load Balancing

   Application servers may be instantiated in many data centers
   located in different parts of the network. When an end-user makes
   an application request, a decision has to be made about which data
   center should host the processing and storage required to meet the
   request. One of the major drivers for operating multiple data
   centers (rather than one very large data center) is so that the
   application will run on a machine that is closer to the end-users
   and thus improve the user experience by reducing network latency.
   However, if the network is congested or the data center is
   overloaded, this strategy can backfire.

   Thus, the key factors to be considered in choosing the server on
   which to instantiate a VM for an application include:

   - The utilization of the servers in the data center

   - The network loading conditions within a data center

   - The network loading conditions between data centers

   - The network conditions between the end-user and data center

   Again, the choices made in the application stratum need to consider
   the situation in the network stratum.

3.7.2. Application of the ABNO Architecture

   This section shows how the ABNO architecture is applicable to the
   cross-stratum data center issues described in Section 3.7.1.

   Figure 29 shows a diagram of an example data center based
   application. A carrier network provides access for an end-user
   through PE4. Three data centers (DC1, DC2, and DC3) are accessed
   through different parts of the network via PE1, PE2, and PE3.

   The Application Service Coordinator receives information from the
   end-user about the services it wants, and converts this to service
   requests that it passes to the ABNO Controller. The end-user may
   already know which data center it wishes to use, the Application
   Service Coordinator may be able to make this determination, or
   otherwise the task of selecting the data center must be performed
   by the ABNO Controller, and this may utilize a further database
   (see Section 2.3.1.8) that contains information about server loads
   and other data center parameters.

   The ABNO Controller examines the network resources using
   information gathered from the other ABNO components and uses those
   components to configure the network to support the end-user's
   needs.

    +----------+   +---------------------------------+
    | End-user |-->| Application Service Coordinator |
    +----------+   +---------------------------------+
         |                         |
         |                         v
         |               +-----------------+
         |               | ABNO Controller |
         |               +-----------------+
         |                         |
         |                         v
         |         +---------------------+     +--------------+
         |         |Other ABNO Components|     | o o o   DC 1 |
         |         +---------------------+     |  \|/         |
         |                     |          -----|---O          |
         |                     v         |     |              |
         |      -----------------------------  +--------------+
         |     /  Carrier Network      PE1 |  \
         |    /   .....................O---    \  +--------------+
         |   |    .                             |  | o o o   DC 2 |
         |   |    . PE4               PE2       |  |  \|/         |
         ----|----O....................O--------|--|---O          |
             |    .                             |  |              |
             |    .                   PE3       |  +--------------+
              \   .....................O       /
               \                       |      /   +--------------+
                -----------------------------     | o o o   DC 3 |
                                       |          |  \|/         |
                                        ----------|---O          |
                                                  |              |
                                                  +--------------+

           Figure 29 : The ABNO Architecture in the Context of
                Cross-Stratum Optimization for Data Centers

3.7.2.1. Deployed Applications, Services, and Products

   The ABNO Controller will need to utilize a number of components to
   realize the CSO functions described in Section 3.7.1.

   The ALTO server provides information about topological proximity
   and appropriate geographical locations of servers with respect to
   the underlying networks. This information can be used to optimize
   the selection of peer location, which will help reduce the path of
   IP traffic or contain it within specific service providers'
   networks. ALTO in conjunction with the ABNO Controller and the
   Application Service Coordinator can address general problems such
   as the selection of application servers based on resource
   availability and usage of the underlying networks.

   The ABNO Controller can also formulate a view of current network
   load from the TED and from the OAM Handler (for example, by running
   diagnostic tools that measure latency, jitter, and packet loss).
   This view obviously influences not just how paths from end-user to
   data center are provisioned, but can also guide the selection of
   which data center should provide the service and possibly even the
   points of attachment to be used by the end-user and to reach the
   chosen data center. A view of how PCE can fit in with CSO is
   provided in [I-D.dhody-pce-cso-enabled-path-computation], on which
   the content of Figure 29 is based.

   As already discussed, the combination of the ABNO Controller and
   the Application Service Coordinator will need to be able to select
   (and possibly migrate) the location of the VM that provides the
   service for the end-user. Since a common technique used to direct
   the end-user to the correct VM/server is to employ DNS redirection,
   an important capability of the ABNO Controller will be the ability
   to program the DNS servers accordingly.

   Furthermore, as already noted in other sections of this document,
   the ABNO Controller can coordinate the placement of traffic within
   the network to achieve load-balancing and to provide resilience to
   failures. These features can be used in conjunction with the
   functions discussed above, to ensure that the placement of new VMs,
   the traffic that they generate, and the load caused by VM migration
   can be carried by the network and do not disrupt existing services.
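
   The kind of cross-stratum choice described above can be illustrated
   with a small sketch that combines server load with a network metric
   such as latency reported by the OAM Handler. The scoring function,
   its weights, and the example values are arbitrary assumptions used
   only for illustration.

      # Hedged sketch: pick the data center for a new VM by jointly
      # scoring application-stratum load and network-stratum latency.

      def choose_data_center(candidates):
          # candidates: list of (name, server_load, latency_ms).
          def score(dc):
              _, load, latency = dc
              return 0.6 * load + 0.4 * latency  # assumed weighting
          return min(candidates, key=score)[0]

      # DC2 wins here despite its higher latency because DC1 is
      # heavily loaded.
      print(choose_data_center(
          [("DC1", 90, 10), ("DC2", 30, 25), ("DC3", 60, 40)]))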
3.8 ALTO Server

   The ABNO architecture allows use cases with joint network and
   application-layer optimization.  In such a use case, an application
   is presented with an abstract network topology containing only the
   information relevant to the application.  The application computes
   its application-layer routing according to its application
   objective and may then interact with the ABNO Controller to set up
   explicit LSPs to support that routing.

   The following steps illustrate such a use case.

   1. Application Request of Application-layer Topology

      Consider the network shown in Figure 30.  The network consists
      of 5 nodes and 6 links.

      The application, which has endpoints hosted at N0, N1, and N2,
      requests the network topology so that it can compute its
      application-layer routing, for example, to maximize the
      throughput of content replication among the endpoints at the
      three sites.

      +----+       L0 Wt=10 BW=50       +----+
      | N0 |............................| N3 |
      +----+                            +----+
        | \                L4             |
        |  \               Wt=7           |
        |   \              BW=40          |
        |    \                            |
   L1   |     +----+                      |
   Wt=10|     | N4 |       L2             |
   BW=45|     +----+       Wt=12          |
        |    /             BW=30          |
        |   /              L5             |
        |  /               Wt=10          |
        | /                BW=45          |
      +----+                            +----+
      | N1 |............................| N2 |
      +----+       L3 Wt=15 BW=35       +----+

                Figure 30 : Raw Network Topology

      +----+
      | N0 |............
      +----+            \
        | \              \
        |  \              \
        |   \              \
        |    |              \  AL0M2
   L1   |    | AL4M5         \ Wt=22
   Wt=10|    | Wt=17          \ BW=30
   BW=40|    | BW=40           \
        |    |                  \
        |   /                    \
        |  /                      \
        | /                        \
      +----+                        +----+
      | N1 |........................| N2 |
      +----+     L3 Wt=15 BW=35     +----+

       Figure 31 : Reduced Graph for a Particular Application

      The request arrives at the ABNO Controller, which forwards it to
      the ALTO Server component.  The ALTO Server consults the Policy
      Agent, the TED, and the PCE to return an abstract, application-
      layer topology.  Figure 31 shows a possible reduced topology for
      the application.  For example, the policy may specify that the
      bandwidth exposed to an application may not exceed 40.  The
      network has precomputed that the route from N0 to N2 should use
      the path N0->N3->N2, according to goals such as GCO (see
      Section 3.4).

      The ALTO Server uses the topology and the existing routing to
      compute an abstract network map consisting of 3 PIDs.  The pair-
      wise bandwidth, as well as any shared bottlenecks, will be
      computed from the internal network topology and reflected in the
      cost maps.

   2. Application Computes Application Overlay

      Using the abstract topology, the application computes an
      application-layer routing.  For concreteness, the application
      may compute a spanning tree to maximize the total bandwidth from
      N0 to N2.  Figure 32 shows an example application-layer routing
      using a route of N0->N1->N2 for 35 Mbps and N0->N2 for 30 Mbps,
      for a total of 65 Mbps.  A small worked example of this
      computation follows Figure 32.

   +----+
   | N0 |----------------------------------+
   +----+        AL0M2 BW=30               |
     |                                     |
     |                                     |
     |                                     |
     |                                     |
     |  L1                                 |
     |                                     |
     |  BW=35                              |
     |                                     |
     |                                     |
     |                                     |
     V                                     V
   +----+        L3 BW=35                +----+
   | N1 |...............................>| N2 |
   +----+                                +----+

         Figure 32 : Application-layer Spanning Tree
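   The arithmetic of this step can be checked with the small sketch
   below, which recomputes the throughput of the two application-layer
   routes directly from the reduced topology of Figure 31.  The
   dictionary encoding of the abstract links is an assumption made for
   illustration only; note that the two routes share no abstract
   links, so their bottleneck bandwidths can simply be added.

      # Abstract links exposed to the application (Figure 31):
      # endpoints -> bandwidth.
      abstract_links = {
          ("N0", "N1"): 40,  # L1 (capped at 40 by policy)
          ("N1", "N2"): 35,  # L3
          ("N0", "N2"): 30,  # AL0M2 (abstract link via N3)
      }

      def bottleneck(path):
          """A path's throughput is set by its weakest link."""
          return min(abstract_links[(a, b)]
                     for a, b in zip(path, path[1:]))

      # The application-layer routing of Figure 32.
      routes = [["N0", "N1", "N2"], ["N0", "N2"]]
      for r in routes:
          print("->".join(r), "carries", bottleneck(r), "Mbps")
      print("Total:", sum(bottleneck(r) for r in routes), "Mbps")
      # -> 35 + 30 = 65 Mbps, as in the text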
   3. ABNO Controller Sets Up Application Paths

      The application may submit its application-layer routes to the
      ABNO Controller to set up explicit LSPs to support its
      operation.  The ABNO Controller consults the ALTO maps to map
      the application-layer routing back to the internal network
      topology and then instructs the Provisioning Manager to set up
      the paths.  The ABNO Controller may re-trigger GCO to
      re-optimize network traffic engineering.

3.9 Other Potential Use Cases

   This section serves as a placeholder for other potential use cases
   that might be documented in future documents.

3.9.1 Grooming and Regrooming

   This use case could cover the following scenarios:

   - Nested LSPs
   - Packet Classification (IP flows into LSPs at edge routers)
   - Bucket Stuffing
   - IP Flows into ECMP Hash Bucket

3.9.2 Bandwidth Scheduling

   Bandwidth scheduling consists of configuring LSPs based on a given
   time schedule.  This can be used to support maintenance or
   operational schedules or to adjust network capacity based on
   traffic pattern detection.

   The ABNO framework provides the components to enable bandwidth
   scheduling solutions; a toy illustration of such a schedule
   follows.
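   As a minimal sketch of what a bandwidth scheduling solution might
   look like, the toy scheduler below orders LSP setup and teardown
   events by time.  The request_* functions are hypothetical stand-ins
   for the provisioning requests that the ABNO Controller would issue
   (for example, through the Provisioning Manager); nothing here is a
   defined interface.

      # Toy bandwidth scheduler: order LSP setup/teardown by time.
      import heapq
      from datetime import datetime, timedelta

      def request_lsp_setup(lsp):
          print("setup:", lsp["name"], lsp["bw"], "Mbps")

      def request_lsp_teardown(lsp):
          print("teardown:", lsp["name"])

      now = datetime.now()
      schedule = [  # (start, end, LSP description)
          (now, now + timedelta(hours=4),
           {"name": "nightly-backup", "bw": 500}),
          (now + timedelta(hours=1), now + timedelta(hours=2),
           {"name": "maintenance-bypass", "bw": 200}),
      ]

      events = []  # heap of (when, action, name, lsp)
      for start, end, lsp in schedule:
          heapq.heappush(events, (start, "setup", lsp["name"], lsp))
          heapq.heappush(events, (end, "teardown", lsp["name"], lsp))

      while events:
          when, action, _, lsp = heapq.heappop(events)
          # A real scheduler would wait until 'when'; this just logs.
          print(when.isoformat(timespec="seconds"), end=" ")
          if action == "setup":
              request_lsp_setup(lsp)
          else:
              request_lsp_teardown(lsp)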
4. Survivability and Redundancy within the ABNO Architecture

   The ABNO architecture described in this document is presented in
   terms of functional units.  Each unit could be implemented
   separately or bundled with other units into single programs or
   products.  Furthermore, each implemented unit or bundle could be
   deployed on a separate device (for example, a network server) or on
   a separate virtual machine (for example, in a data center), or
   groups of programs could be deployed on the same processor.  From
   the point of view of the architectural model, these implementation
   and deployment choices are entirely unimportant.

   Similarly, the realization of a functional component of the ABNO
   architecture could be supported by more than one instance of an
   implementation, or by different instances of different
   implementations that provide the same or similar function.  For
   example, the PCE component might have multiple instantiations for
   sharing the processing load of a large number of computation
   requests, and different instances might have different algorithmic
   capabilities so that one instance might serve parallel computation
   requests for disjoint paths, while another instance might have the
   capability to compute optimal point-to-multipoint paths.

   This ability to have multiple instances of ABNO components also
   enables resiliency within the model since, in the event of the
   failure of one instance of one component (because of software
   failure, hardware failure, or connectivity problems), other
   instances can take over.  In some circumstances, state
   synchronization between instances of components may be needed in
   order to facilitate seamless resiliency.

   How these features are achieved in an ABNO implementation or
   deployment is outside the scope of this document.  It is worth
   noting that the VNFpool effort in the IETF is examining how
   instances of network functions may be "pooled" for resilience and
   potentially for load-balancing.

5. Security Considerations

   The ABNO architecture describes a network system, and security must
   play an important part.

   The first consideration is that the external protocols (those shown
   as entering or leaving the big box in Figure 1) must be
   appropriately secured.  This security will include authentication
   and authorization to control access to the different functions that
   the ABNO system can perform, to enable different policies based on
   identity, and to regulate the control of the network devices.

   Secondly, the internal protocols that are used between ABNO
   components must also have appropriate security, particularly when
   the components are implemented on separate network nodes.

   Considering that the ABNO system contains a large amount of data
   about the network, the services carried by the network, and the
   services delivered to customers, access to information held in the
   system must be carefully regulated.  Since such access will be
   largely through the external protocols, the policy-based controls
   enabled by authentication will be powerful.  But it should also be
   noted that any data sent from the databases in the ABNO system can
   reveal details of the network and should, therefore, be considered
   as a candidate for encryption.  Furthermore, since ABNO components
   can access the information stored in the databases, care is
   required to ensure that all such components are genuine, and to
   consider encrypting data that flows between components when they
   are implemented at remote nodes.

   The conclusion is that all protocols used to realize the ABNO
   architecture should have rich security features.

6. Manageability Considerations

   The whole of the ABNO architecture is essentially about managing
   the network.  In this respect, there is very little extra to say.
   ABNO provides mechanisms to gather and collate information about
   the network, to report it to management applications, to store it
   for future inspection, and to trigger actions according to
   configured policies.

   The ABNO system will, itself, need monitoring and management.  This
   can be seen as falling into several categories:

   - Management of external protocols
   - Management of internal protocols
   - Management and monitoring of ABNO components
   - Configuration of policy to be applied across the ABNO system

7. IANA Considerations

   This document makes no requests for IANA action.

8. Acknowledgements

   Thanks for discussions and review are due to Ken Gray, Jan Medved,
   Nitin Bahadur, Diego Caviglia, Joel Halpern, Brian Field, Ori
   Gerstel, Daniele Ceccarelli, Cyril Margaria, Jonathan Hardwick,
   Nico Wauters, Tom Taylor, Qin Wu, and Luis Contreras.  Thanks to
   George Swallow for suggesting the existence of the SRLG database.
   Tomonori Takeda provided valuable comments as part of his Routing
   Directorate review.  Tina Tsou provided comments as part of her
   Operational Directorate review.

   This work received funding from the European Union's Seventh
   Framework Programme for research, technological development, and
   demonstration through the PACE project under grant agreement number
   619712 and through the IDEALIST project under grant agreement
   number 317999.

9. References

9.1. Informative References

   [EON]      Gerstel, O., Jinno, M., Lord, A., and S.J.B. Yoo,
              "Elastic optical networking: a new dawn for the optical
              layer?", IEEE Communications Magazine, Volume 50,
              Issue 2, ISSN 0163-6804, February 2012.
   [Flood]    Project Floodlight, "Floodlight REST API",
              http://www.projectfloodlight.org.

   [G.694.2]  ITU-T Recommendation G.694.2, "Spectral grids for WDM
              applications: CWDM wavelength grid", December 2003.

   [G.709]    ITU-T, "Interface for the Optical Transport Network
              (OTN)", G.709 Recommendation, October 2009.

   [I-D.dhody-pce-cso-enabled-path-computation]
              Dhody, D., Lee, Y., Contreras, LM., Gonzalez de Dios,
              O., and N. Ciulli, "Cross Stratum Optimization enabled
              Path Computation",
              draft-dhody-pce-cso-enabled-path-computation, work in
              progress.

   [I-D.ietf-i2rs-architecture]
              Atlas, A., Halpern, J., Hares, S., Ward, D., and T.
              Nadeau, "An Architecture for the Interface to the
              Routing System", draft-ietf-i2rs-architecture, work in
              progress.

   [I-D.ietf-i2rs-problem-statement]
              Atlas, A., Nadeau, T., and D. Ward, "Interface to the
              Routing System Problem Statement",
              draft-ietf-i2rs-problem-statement, work in progress.

   [I-D.ietf-idr-ls-distribution]
              Gredler, H., Medved, J., Previdi, S., Farrel, A., and
              S. Ray, "North-Bound Distribution of Link-State and TE
              Information using BGP", draft-ietf-idr-ls-distribution,
              work in progress.

   [I-D.ietf-netmod-routing-cfg]
              Lhotka, L., "A YANG Data Model for Routing Management",
              draft-ietf-netmod-routing-cfg, work in progress.

   [I-D.ietf-pce-pce-initiated-lsp]
              Crabbe, E., Minei, I., Sivabalan, S., and R. Varga,
              "PCEP Extensions for PCE-initiated LSP Setup in a
              Stateful PCE Model", draft-ietf-pce-pce-initiated-lsp,
              work in progress.

   [I-D.ietf-pce-stateful-pce]
              Crabbe, E., Medved, J., Minei, I., and R. Varga, "PCEP
              Extensions for Stateful PCE",
              draft-ietf-pce-stateful-pce, work in progress.

   [ONF]      Open Networking Foundation, "OpenFlow Switch
              Specification Version 1.4.0 (Wire Protocol 0x05)",
              October 2013.

   [RFC2748]  Durham, D., Ed., Boyle, J., Cohen, R., Herzog, S.,
              Rajan, R., and A. Sastry, "The COPS (Common Open Policy
              Service) Protocol", RFC 2748, January 2000.

   [RFC2753]  Yavatkar, R., Pendarakis, D., and R. Guerin, "A
              Framework for Policy-based Admission Control", RFC 2753,
              January 2000.

   [RFC3209]  Awduche, D., et al., "RSVP-TE: Extensions to RSVP for
              LSP Tunnels", RFC 3209, December 2001.

   [RFC3292]  Doria, A., Hellstrand, F., Sundell, K., and T. Worster,
              "General Switch Management Protocol (GSMP) V3",
              RFC 3292, June 2002.

   [RFC3412]  Case, J., Harrington, D., Presuhn, R., and B. Wijnen,
              "Message Processing and Dispatching for the Simple
              Network Management Protocol (SNMP)", RFC 3412,
              December 2002.

   [RFC3473]  Berger, L., et al., "Generalized Multi-Protocol Label
              Switching (GMPLS) Signaling Resource ReserVation
              Protocol-Traffic Engineering (RSVP-TE) Extensions",
              RFC 3473, January 2003.

   [RFC3630]  Katz, D., Kompella, K., and D. Yeung, "Traffic
              Engineering (TE) Extensions to OSPF Version 2",
              RFC 3630, September 2003.

   [RFC3746]  Yang, L., Dantu, R., Anderson, T., and R. Gopal,
              "Forwarding and Control Element Separation (ForCES)
              Framework", RFC 3746, April 2004.

   [RFC3985]  Bryant, S., Ed., and P. Pate, Ed., "Pseudo Wire
              Emulation Edge-to-Edge (PWE3) Architecture", RFC 3985,
              March 2005.

   [RFC4655]  Farrel, A., Vasseur, J.-P., and J. Ash, "A Path
              Computation Element (PCE)-Based Architecture", RFC 4655,
              October 2006.
   [RFC5150]  Ayyangar, A., Kompella, K., Vasseur, JP., and A. Farrel,
              "Label Switched Path Stitching with Generalized
              Multiprotocol Label Switching Traffic Engineering
              (GMPLS TE)", RFC 5150, February 2008.

   [RFC5212]  Shiomoto, K., Papadimitriou, D., Le Roux, JL.,
              Vigoureux, M., and D. Brungard, "Requirements for
              GMPLS-Based Multi-Region and Multi-Layer Networks
              (MRN/MLN)", RFC 5212, July 2008.

   [RFC5254]  Bitar, N., Bocci, M., and L. Martini, "Requirements for
              Multi-Segment Pseudowire Emulation Edge-to-Edge (PWE3)",
              RFC 5254, October 2008.

   [RFC5277]  Chisholm, S. and H. Trevino, "NETCONF Event
              Notifications", RFC 5277, July 2008.

   [RFC5305]  Li, T. and H. Smit, "IS-IS Extensions for Traffic
              Engineering", RFC 5305, October 2008.

   [RFC5394]  Bryskin, I., Papadimitriou, D., Berger, L., and J. Ash,
              "Policy-Enabled Path Computation Framework", RFC 5394,
              December 2008.

   [RFC5424]  Gerhards, R., "The Syslog Protocol", RFC 5424,
              March 2009.

   [RFC5440]  Vasseur, JP. and Le Roux, JL., "Path Computation Element
              (PCE) Communication Protocol (PCEP)", RFC 5440,
              March 2009.

   [RFC5520]  Bradford, R., Vasseur, JP., and A. Farrel, "Preserving
              Topology Confidentiality in Inter-Domain Path
              Computation Using a Path-Key-Based Mechanism", RFC 5520,
              April 2009.

   [RFC5557]  Lee, Y., Le Roux, JL., King, D., and E. Oki, "Path
              Computation Element Communication Protocol (PCEP)
              Requirements and Protocol Extensions in Support of
              Global Concurrent Optimization", RFC 5557, July 2009.

   [RFC5623]  Oki, E., Takeda, T., Le Roux, JL., and A. Farrel,
              "Framework for PCE-Based Inter-Layer MPLS and GMPLS
              Traffic Engineering", RFC 5623, September 2009.

   [RFC5693]  Seedorf, J. and E. Burger, "Application-Layer Traffic
              Optimization (ALTO) Problem Statement", RFC 5693,
              October 2009.

   [RFC5810]  Doria, A., et al., "Forwarding and Control Element
              Separation (ForCES) Protocol Specification", RFC 5810,
              March 2010.

   [RFC6007]  Nishioka, I. and D. King, "Use of the Synchronization
              VECtor (SVEC) List for Synchronized Dependent Path
              Computations", RFC 6007, September 2010.

   [RFC6020]  Bjorklund, M., "YANG - A Data Modeling Language for the
              Network Configuration Protocol (NETCONF)", RFC 6020,
              October 2010.

   [RFC6107]  Shiomoto, K. and A. Farrel, "Procedures for Dynamically
              Signaled Hierarchical Label Switched Paths", RFC 6107,
              February 2011.

   [RFC6120]  Saint-Andre, P., "Extensible Messaging and Presence
              Protocol (XMPP): Core", RFC 6120, March 2011.

   [RFC6241]  Enns, R., Bjorklund, M., Schoenwaelder, J., and A.
              Bierman, "Network Configuration Protocol (NETCONF)",
              RFC 6241, June 2011.

   [RFC6707]  Niven-Jenkins, B., Le Faucheur, F., and N. Bitar,
              "Content Distribution Network Interconnection (CDNI)
              Problem Statement", RFC 6707, September 2012.

   [RFC6805]  King, D. and A. Farrel, "The Application of the Path
              Computation Element Architecture to the Determination
              of a Sequence of Domains in MPLS and GMPLS", RFC 6805,
              November 2012.

   [RFC6982]  Sheffer, Y. and A. Farrel, "Improving Awareness of
              Running Code: The Implementation Status Section",
              RFC 6982, July 2013.
   [RFC Editor Note: This reference can be removed when Appendix B is
   removed.]

   [RFC7011]  Claise, B., Trammell, B., and P. Aitken, "Specification
              of the IP Flow Information Export (IPFIX) Protocol for
              the Exchange of Flow Information", STD 77, RFC 7011,
              September 2013.

   [RFC7285]  Alimi, R., Penno, R., and Y. Yang, "Application-Layer
              Traffic Optimization (ALTO) Protocol", RFC 7285,
              September 2014.

   [RFC7297]  Boucadair, M., Jacquenet, C., and N. Wang, "IP/MPLS
              Connectivity Provisioning Profile (CPP)", RFC 7297,
              July 2014.

   [TL1]      Telcordia, "Operations Application Messages - Language
              For Operations Application", GR-831, November 1996.

   [TMF-MTOSI]
              TeleManagement Forum, "Multi-Technology Operations
              Systems Interface (MTOSI)",
              https://www.tmforum.org/MTOSI/2319/home.html

10. Contributors' Addresses

   Quintin Zhao
   Huawei Technologies
   125 Nagog Technology Park
   Acton, MA 01719
   US
   Email: qzhao@huawei.com

   Victor Lopez
   Telefonica I+D
   Email: vlopez@tid.es

   Ramon Casellas
   CTTC
   Email: ramon.casellas@cttc.es

   Yuji Kamite
   NTT Communications Corporation
   Email: y.kamite@ntt.com

   Yosuke Tanaka
   NTT Communications Corporation
   Email: yosuke.tanaka@ntt.com

   Young Lee
   Huawei Technologies
   Email: leeyoung@huawei.com

   Y. Richard Yang
   Yale University
   Email: yry@cs.yale.edu

11. Authors' Addresses

   Daniel King
   Old Dog Consulting
   Email: daniel@olddog.co.uk

   Adrian Farrel
   Juniper Networks
   Email: adrian@olddog.co.uk

Appendix A. Undefined Interfaces

   This appendix provides a brief list of interfaces that are not yet
   defined at the time of writing.  Interfaces where there is a choice
   of existing protocols are not listed.

   - An interface for adding additional information to the Traffic
     Engineering Database is described in Section 2.3.2.3.  No
     protocol is currently identified for this interface, but
     candidates include:

     - The protocol developed or adopted to satisfy the requirements
       of I2RS [I-D.ietf-i2rs-architecture]

     - NETCONF [RFC6241]

   - The protocol or protocols to be used by the Interface to the
     Routing System described in Section 2.3.2.8 have yet to be
     determined.  The I2RS working group will make this decision after
     the use cases and protocol requirements have been agreed.
     Various candidate protocols have been identified, although none
     appears to be suitable without some extensions to the currently
     specified protocol elements.  The list of protocols supplied here
     is illustrative and not intended to constrain the work of the
     I2RS working group.  The order of the list is not significant.

     - OpenFlow [ONF]
     - NETCONF [RFC6241]
     - ForCES [RFC3746]

   - As described in Section 2.3.2.10, the Virtual Network Topology
     Manager needs an interface that can be used by a PCE or the ABNO
     Controller to inform it that a client layer needs more virtual
     topology.  It is possible that the protocol identified for use
     with I2RS will satisfy this requirement, or this could be
     achieved using extensions to the PCEP Notify message (PCNtf).

   - The north-bound interface from the ABNO Controller is used by the
     NMS, OSS, and Application Service Coordinator to request services
     in the network in support of applications, as described in
     Section 2.3.2.11.
     - It is possible that the protocol selected or designed to
       satisfy I2RS could also be applied to this interface.

     - A potential approach for this type of interface is described in
       [RFC7297] for a simple use case.

   - As noted in Section 2.3.2.14, there may be layer-independent data
     models for offering common interfaces to control, configure, and
     report OAM.

   - As noted in Section 3.6, the ABNO model could be applicable to
     placing multi-segment pseudowires in a network topology made up
     of S-PEs and MPLS tunnels.  The current definition of PCEP
     [RFC5440] and the associated extensions that are work in progress
     do not include all of the details needed to request such paths,
     so some work might be necessary, although the general concepts
     will be easily re-usable.  Indeed, such work may be necessary for
     the wider applicability of PCE in many networking scenarios.

Appendix B. Implementation Status

   [RFC Editor Note: Please remove this entire section prior to
   publication as an RFC.]

   This section records the status of known implementations of the
   architecture described in this document at the time of posting of
   this Internet-Draft, and is based on a proposal described in
   RFC 6982 [RFC6982].  The description of implementations in this
   section is intended to assist the IETF in its decision processes in
   progressing drafts to RFCs.  Please note that the listing of any
   individual implementation here does not imply endorsement by the
   IETF.  Furthermore, no effort has been spent to verify the
   information presented here that was supplied by IETF contributors.
   This is not intended as, and must not be construed to be, a catalog
   of available implementations or their features.  Readers are
   advised to note that other implementations may exist.

   According to RFC 6982, "this will allow reviewers and working
   groups to assign due consideration to documents that have the
   benefit of running code, which may serve as evidence of valuable
   experimentation and feedback that have made the implemented
   protocols more mature.  It is up to the individual working groups
   to use this information as they see fit."

B.1. Telefonica Investigacion y Desarrollo (TID)

   Organization Responsible for the Implementation:

      Telefonica Investigacion y Desarrollo (TID)
      Core Network Evolution

   Implementation Name and Details:

      Netphony ABNO

   Brief Description:

      Experimental testbed implementation of the ABNO architecture.

   Level of Maturity:

      The set-up has been tested in flexgrid and WSON scenarios,
      interfacing with Telefonica's GMPLS protocols and vendor
      controllers (ADVA, Infinera, and Ciena).

   Licensing:

      To be released.

   Implementation Experience:

      All tests made at TID have been successfully concluded, and no
      issues have been reported.  TID's test experience has been
      reported in a number of journal papers.  Contact Victor Lopez
      and Oscar Gonzalez de Dios for more information.

   Contact Information:

      Victor Lopez: victor.lopezalvarez@telefonica.com
      Oscar Gonzalez de Dios: oscar.gonzalezdedios@telefonica.com

   Interoperability:

      PCEP has been tested with NSN, CNIT, and CTTC.  BGP-LS has been
      tested with CTTC, UPC, and Telecom Italia.