ANIMA                                                       J. Strassner
Internet-Draft                                       Huawei Technologies
Intended status: Informational                                J. Halpern
Expires: October 23, 2016                                       Ericsson
                                                            M. Behringer
                                                           Cisco Systems
                                                          April 19, 2016

          The Use of Control Loops in Autonomic Networking
               draft-strassner-anima-control-loops-01

Abstract

This document defines the requirements for an autonomic control loop, describes different types of control loops, and explains how control loops are used in an Autonomic System. Control loops are used to enable Autonomic Network Management systems to adapt the behavior of the systems that they manage to respond to changes in user needs, business goals, and/or environmental conditions.
Status of This Memo

This Internet-Draft is submitted in full conformance with the provisions of BCP 78 and BCP 79.

Internet-Drafts are working documents of the Internet Engineering Task Force (IETF). Note that other groups may also distribute working documents as Internet-Drafts. The list of current Internet-Drafts is at http://datatracker.ietf.org/drafts/current/.

Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress."

This Internet-Draft will expire on October 23, 2016.

Copyright Notice

Copyright (c) 2016 IETF Trust and the persons identified as the document authors. All rights reserved.

This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents (http://trustee.ietf.org/license-info) in effect on the date of publication of this document. Please review these documents carefully, as they describe your rights and restrictions with respect to this document. Code Components extracted from this document must include Simplified BSD License text as described in Section 4.e of the Trust Legal Provisions and are provided without warranty as described in the Simplified BSD License.

Table of Contents

1. Introduction
2. Conventions Used in This Document
3. Terminology
   3.1. Acronyms
   3.2. Definitions
      3.2.1. Control Loop
      3.2.2. Control Loop, Open
      3.2.3. Control Loop, Closed
      3.2.4. Control Loop, Proportional
      3.2.5. Control Loop, Proportional-Derivative
      3.2.6. Control Loop, Proportional-Integral-Derivative (PID)
      3.2.7. Control Loop, Cascade
      3.2.8. Control System
4. Requirements for Control Loops in Autonomic Networks
   4.1. Mandatory Autonomic Control Loop Requirements
      4.1.1. Observe and Collect Data
      4.1.2. Orient Data
      4.1.3. Analyze Data
      4.1.4. Plan Actions Based on Oriented Data
      4.1.5. Decide Which Plan(s) to Execute
      4.1.6. Execute the Plan(s)
      4.1.7. Detect and Resolve Conflicts
   4.2. Desired Autonomic Control Loop Requirements
      4.2.1. Observe and Collect Data From External Systems
      4.2.2. Orient Data from External Systems
      4.2.3. Execute One or More Machine Learning Algorithms
      4.2.4. Register Control Loop Capabilities
      4.2.5. Register Control Loop Requirements
   4.3. Optional Autonomic Control Loop Requirements
      4.3.1. Use of A Single Information Model
      4.3.2. Use of Ontologies
      4.3.3. Collaborate With Other Control Loops
5. Control Loop Usage in Autonomic Networks
   5.1. Autonomic Management
   5.2. Policy and Context
   5.3. Types of Policies
      5.3.1. Policies Organized by Actors
      5.3.2. Policies Organized by Technology
   5.4. Policy Conflicts
      5.4.1. Policy Conflicts Caused by Technology
      5.4.2. Policy Conflicts Caused by Different Systems
   5.5. Control Loops
      5.5.1. Types of Control
      5.5.2. Types of Control Loops
      5.5.3. Management of an Autonomic Control Loop
6. Security Considerations
7. IANA Considerations
8. Acknowledgements
9. References
Authors' Addresses

1. Introduction

The document "Autonomic Networking - Definitions and Design Goals" [RFC7575] explains the fundamental concepts behind Autonomic Networking. In Section 1, it says: "The fundamental concept involves eliminating external systems from a system's control loops and closing of control loops within the Autonomic System itself, with the goal of providing the system with self-management capabilities...". In Section 5, it also describes a high-level reference model [draft-ietf-anima-reference-model-01]. This document expands on the definition and use of control loops ([draft-ietf-anima-reference-model-01], Section 8.5) by Autonomic Systems to self-adapt to various changes and achieve self-management.
In particular, this document describes how control loops are used in Autonomic Network Management to enable the Autonomic System (and its subsystems, which may or may not be autonomic) to adapt, on its own, the behavior of the systems and components that it manages in response to changes in user needs, business goals, and/or environmental conditions. Such changes can alter the goals that the Autonomic System must achieve, or how those goals are achieved. For example, this may result in the offering of changed or even new services and resources.

Control loops operate to continuously observe and collect data about the set of managed entities, components, and systems that are being managed, as well as the context in which they are operating. This enables the Autonomic Management System to understand changes in the behavior of the system being managed, analyze those changes, and then provide actions to move the state of the system being managed towards a common goal. Self-adaptive systems move decision-making from static, pre-defined commands to dynamic processes computed at runtime.

This document defines the requirements for an autonomic control loop, describes different types of control loops, and explains how control loops are used in an Autonomic System.

As discussed in [RFC7575], the goal of this work is not to focus exclusively on fully autonomic nodes or networks. In reality, most networks will run with some autonomic functions, while the rest of the network will not. The reference model defined in [draft-ietf-anima-reference-model] allows for this hybrid approach.

This is a living document, and will evolve with the technical solutions developed in the ANIMA WG.

2. Conventions Used in This Document

The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in [RFC2119]. In this document, these words will appear with that interpretation only when in ALL CAPS. Lower case uses of these words are not to be interpreted as carrying [RFC2119] significance.

3. Terminology

This section defines acronyms, terms, and symbology used in the rest of this document.

3.1. Acronyms

ANI     Autonomic Network Infrastructure
CLI     Command Line Interface
OAM&P   Operations, Administration, Management, and Provisioning
PID     Proportional-Integral-Derivative (a type of controller)

3.2. Definitions

This section defines the terminology that is used in this document.

3.2.1. Control Loop

A control loop is a type of control system that manages the behavior of the devices and systems that it is governing.

3.2.2. Control Loop, Open

A control loop whose output is generated based only on the input(s) it receives.

3.2.3. Control Loop, Closed

A control loop whose output is a function of the current output and a set of corrections made to that output based on feedback.

3.2.4. Control Loop, Proportional

A type of control algorithm that generates a stronger response when the system is farther away from its goal state. In other words, the response of the control algorithm is proportional to the amount of error received.

3.2.5. Control Loop, Proportional-Derivative

A type of control algorithm that also uses the rate-of-change of the error with time. In other words, it uses the error when the system is far away from the goal state, and then corrects for the momentum of the system as it gets closer to the goal state.

3.2.6. Control Loop, Proportional-Integral-Derivative (PID)

A type of control algorithm that adds an integral action to a proportional-derivative controller. The integral term eliminates long-term steady-state errors. By integrating the error over time, the controller can drive the system closer to the goal state.

3.2.7. Control Loop, Cascade

A type of controller in which multiple controllers (usually PID) are used to provide fine-grained control. The simplest type of cascade control uses two PIDs, where the output of one PID controls the setpoint of the other.

3.2.8. Control System

A control system consists of systems and processes that collectively govern the output of the system.

4. Requirements for Control Loops in Autonomic Networks

The following subsections define the requirements that Autonomic Control Loops MUST, SHOULD, and MAY provide.

4.1. Mandatory Autonomic Control Loop Requirements

An autonomic control loop MUST be able to perform the following functions as part of its operation:

o Observe and collect data from the system being managed

o Orient these data, so that their meaning and significance can be understood in the proper context

o Analyze the collected data through filtering, correlation, and other mechanisms to define a model of past, current, and future states

o Plan different actions based on inferring trends, determining root causes, and similar processes

o Decide which plan(s) to execute, and when

o Execute the plan(s), and then repeat these steps

o Detect and resolve any conflicts between different goals that the Autonomic System is given

These seven requirements are further explained in the following subsections.

4.1.1. Observe and Collect Data

Control loops begin with input data. An autonomic control loop MUST be able to observe and collect data, as instructed by an Autonomic Management System.
Without the proper input data, the control loop will be ineffective at best, and likely useless. However, many data in their raw form are not easy for an Autonomic System to understand, and may not be compatible with other data that have been collected by the Autonomic System. Hence, this stage is a mostly passive ability to collect data that are meaningful for the management process, and relies heavily on the next (orientation) step.

4.1.2. Orient Data

The orientation of data ensures that those data are taken in the correct context. This enables their meaning, as well as their relative importance, to be properly assigned. Autonomic control loops MUST orient data.

Orientation of data is the second step in Boyd's OODA loop [Boyd95]. OODA stands for Observe, Orient, Decide, and Act. The FOCALE [Strassner07] control loops are an extension of OODA.

The orientation step of OODA is critical, as it determines how observations, decisions, and actions are performed. This mimics human behavior, since most people react according to how they perceive the world, as opposed to how the world really is.

In FOCALE, the orient step is a model-based translation of received data to normalize those data into a common form, where they can each contribute to the overall perception of the System that is being managed. For example, data from a variety of sensors (e.g., pressure, visual, thermal, etc.) can be fitted into an overall model that also includes performance of IP services, device interfaces, and other entities. Without this normalization of applicable device information, the overall context of the system is not known. This in turn increases the risk of the wrong decision being made.

4.1.3. Analyze Data

The analysis of data is critical for enabling the control loop to operate properly. Autonomic control loops MUST be able to analyze data, after they have been oriented, in order to determine the set of critical properties that the control loop is operating against.

For example, the analysis might derive the current state of the system being managed; this can then be compared to the desired state of the system to define an error function that can be fed back into the control loop. As another example, one or more attributes could be monitored to determine whether the system is operating as planned or not; again, an error function is then defined that can be fed back into the control loop. This step is part of the Orient function in OODA, but is separate in FOCALE.

4.1.4. Plan Actions Based on Oriented Data

Once the analysis is done, the Autonomic System then understands if its current behavior needs to be modified or changed. This takes the form of one or more plans. An autonomic control loop MUST be able to generate one or more plans to govern the behavior of the system being managed. There can be many different ways to solve a problem; a plan is built for each way to enable them to be compared and contrasted.

4.1.5. Decide Which Plan(s) to Execute

Given a set of plans generated by the control loop, a control loop MUST be able to choose which plan, or set of plans, to execute. If multiple plans are to be executed, then the autonomic control loop MUST define the order (if any) in which each plan is executed.

In FOCALE, each plan is evaluated with respect to the current context. This enables context to optimize which plan, or set of plans, are best suited to achieving the goal(s) in managing the behavior of the system. Note that this forms another very important loop in FOCALE: context selects policies. As context changes, a new working set of policies is selected. Hence, the behavior of the Autonomic System adapts to changing context.

4.1.6. Execute the Plan(s)

Once a plan, or set of plans, has been chosen, an autonomic control loop MUST be able to execute it, and then repeat these steps. The results of execution become new input data for the next iteration of the control loop.

4.1.7. Detect and Resolve Conflicts

Autonomic systems typically use policy rules to either help in making decisions, or to provide actions to take as part of the control loop. An Autonomic System MUST be able to detect, and then resolve, conflicts. Both MAPE [Kephart03] and FOCALE [Strassner07] provide several examples of this behavior.

4.2. Desired Autonomic Control Loop Requirements

An autonomic control loop SHOULD be able to perform the following functions as part of its operation:

o Observe and collect data from other devices and/or systems that can influence the behavior of the system being managed

o Orient data from other devices and/or systems that can influence the behavior of the system being managed, so that their meaning and significance can be understood in the proper context

o Execute one or more machine learning algorithms that can learn from and make predictions on monitored data. This enables more efficient adaptivity. It also enables "shortcuts" to be built that enable one or more functional blocks of the control loop to be skipped because the Autonomic System already recognizes what needs to be corrected in the system.

o Register the capabilities that this control loop can govern with a collection of other Autonomic Systems that it may exchange information and control with

o Register the requirements that this control loop needs in order to accomplish its tasks

These five requirements are further explained in the following subsections.

4.2.1. Observe and Collect Data From External Systems

Autonomic Systems are context-aware. This means that the context of the Autonomic System helps determine what actions (if any) should be taken at any given time.
Therefore, Autonomic Systems SHOULD take into account data that directly and indirectly affect the goals of the Autonomic System. This includes data that affect the Autonomic System itself and/or data that affect the system that is being governed by the Autonomic System.

Data that directly affect the Autonomic System are data that belong to the Autonomic System, and/or the system being governed by the Autonomic System. Data that indirectly affect the Autonomic System are data that belong to systems that are neither the Autonomic System nor the system that the Autonomic System is managing.

4.2.2. Orient Data from External Systems

All data, regardless of whether they directly or indirectly affect the Autonomic System, SHOULD be oriented so that a common frame of reference is built to consider the relative importance of observed and collected data. This orientation ensures that data are compared and analyzed in the correct context. This enables their meaning, as well as their relative importance, to be properly assigned. Autonomic control loops SHOULD orient external data.

4.2.3. Execute One or More Machine Learning Algorithms

Machine learning refers to algorithms that can learn from, and make predictions about, data. Machine learning algorithms use a model, built from a set of exemplar data, to make predictions and decisions. More formally, [Mitchell97] defines machine learning as: "A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P, if its performance at tasks in T, as measured by P, improves with experience E."

Machine learning provides the ability for the Autonomic System to learn from its environment, without burdening the developer to program an explicit set of steps to do so. As such, it is well-suited for providing the basis for learning from the environment in order to adapt the services and resources that the Autonomic System offers in order to maintain, protect, or better fulfil its goals.

4.2.4. Register Control Loop Capabilities

Autonomic systems provide a number of functional capabilities. Sometimes, a control loop of one Autonomic System may have excess processing available that could be used by other control loops of the same or different Autonomic Systems. Therefore, an autonomic control loop SHOULD register the functional capabilities that it provides, so that other autonomic control loops can request the use of one or more of those functional capabilities. Note that the use of models and/or ontologies greatly simplifies this task, as models and ontologies provide a common vocabulary, complete with meanings, that is shared by all autonomic elements in an Autonomic System.

4.2.5. Register Control Loop Requirements

Autonomic systems provide a number of functional capabilities. Sometimes, a control loop can benefit from other resources (that are part of other Autonomic Networks) that are available to perform one or more of the functions required by the control loop.

This may be because the control loop has run out of resources from its own autonomic elements, or it may be because other autonomic elements can supply more powerful, or more robust, versions of the functions that a control loop needs compared to the functions provided by its own autonomic elements. In order for this to occur, an autonomic control loop SHOULD register its requirements. This enables other autonomic elements to provide resources and/or services to the autonomic control loop, as needed.

4.3. Optional Autonomic Control Loop Requirements

An autonomic control loop MAY be able to perform the following functions as part of its operation:

o Use a single information model to help normalize observed and collected data.

o Use one or more ontologies to define semantics for data. If models define facts, then ontologies conceptually define the semantics for those facts. This is critical in enabling the Autonomic System to reason and learn.

o Collaborate with other control loops of other Autonomic Systems, so that autonomic control can be extended beyond the confines of any one system to a collection of Autonomic Systems

These three requirements are further explained in the following subsections.

4.3.1. Use of A Single Information Model

Autonomic Systems MAY use an information model to define common concepts used by all systems that are interacting with each other.

The advantage of using an information model is that it defines a set of concepts in a technology-neutral format. This is important, because most management systems use a variety of different data models (e.g., directories, relational databases, in-memory databases, and others). Each of these data models structures and organizes data differently, and has very different ways of performing basic operations (e.g., create, read, update, and delete) on those data, using very different protocols. Hence, if a data object is updated in one data model, how can the system reliably update other instances of that data object if the protocol, representation, data type, and other elements of the data object are different?

The role of an information model is to define common concepts once; this enables a set of mappings between the information model and each data model to be defined, so that data coherency is maintained in each data model.

4.3.2. Use of Ontologies

Autonomic Systems MAY use a set of ontologies for defining the meaning associated with different facts collected by the Autonomic System. Facts can be derived from models as well as from the ontologies themselves.

Information and data models are important. However, neither type of model can typically support reasoning, because neither type of model defines formal semantics for the data. Ontologies use a formal mathematical model for defining semantics (e.g., description logic or first-order logic). Hence, one can build a multi-graph, where different model elements are linked together using semantic edges defined by ontologies.

This is an important step towards both orienting data as well as harmonizing data in general. Without understanding the associated semantics of data, it is difficult (if not impossible) to ensure that the operation of the control loop will be correct.

4.3.3. Collaborate With Other Control Loops

Autonomic Systems MAY collaborate with other Autonomic Systems. This enables multiple Autonomic Systems to support each other, and work together to achieve goals that are mutually beneficial.

5. Control Loop Usage in Autonomic Networks

Autonomic systems use closed control loops. They may use one or multiple control loops to manage behavior; examples of these are the MAPE-K loop [Kephart03] and the FOCALE [Strassner07] control loops, respectively.

Control loops operate to continuously observe and collect data that enable the autonomic management system to understand changes to the behavior of the system being managed, and then provide actions to move the state of the system being managed toward a common goal. Self-adaptive systems move decision-making from static, pre-defined commands to dynamic processes computed at runtime.
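The closed control loop just described (observe and collect data, derive an error against a goal, then act to reduce it) can be sketched in a few lines. This is an illustrative sketch only: the class and function names, and the toy single-variable "managed system", are assumptions for illustration and are not defined by this document or any ANIMA specification.

```python
# Illustrative sketch of a closed control loop: observe the managed
# system, compute an error function against the desired state, and
# execute a corrective plan. All names here are illustrative.

class ManagedSystem:
    """Toy managed system with a single numeric state variable."""
    def __init__(self):
        self.value = 0.0

    def read_state(self):
        return self.value

    def apply_action(self, delta):
        self.value += delta


def run_control_loop(system, desired_state, steps=20):
    """Drive the managed system's state toward desired_state."""
    for _ in range(steps):
        observed = system.read_state()     # Observe and collect data
        error = desired_state - observed   # Analyze: error function
        if abs(error) < 1e-9:              # Current state == desired state:
            continue                       #   keep monitoring only
        plan = 0.5 * error                 # Plan: proportional correction
        system.apply_action(plan)          # Decide and execute the plan
    return system.read_state()


system = ManagedSystem()
final = run_control_loop(system, desired_state=10.0)
```

The feedback path here is the repeated comparison of observed and desired state: each iteration halves the remaining error, so the state converges toward the goal.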
Ideally, Autonomic Management will co-exist with traditional, or non-autonomic, management methods. This is because autonomic management will either be introduced in a greenfield environment (where it is the "only" management method), or, more likely, in a hybrid environment that includes legacy systems and devices that are not capable of Autonomic Management.

5.1. Autonomic Management

In a hybrid environment, autonomic control loops are used to manage individual autonomic functions. In some hybrid environments (e.g., where a number of autonomic nodes are collaborating to provide a collective response to the system) and in many greenfield environments, autonomic control loops are used to manage not only functions, but processes and behaviors.

An autonomic control loop can be implemented using traditional and/or autonomic control mechanisms; examples include procedural and cognitive methods, respectively.

There are two types of behavior that are implied by the autonomic system coexisting with traditional systems: (1) autonomic methods can be used to manage legacy elements using traditional mechanisms (e.g., CLI, SNMP), and (2) autonomic methods can use a proxy to translate their management mechanisms into one or more forms that legacy elements can understand.

In principle, both of these approaches could be used by autonomic systems to manage autonomic elements. However, in practice, most autonomic systems will use autonomic mechanisms to manage autonomic elements, due to increased efficiency and expressivity.

Note that in either case, the basic control loop does NOT change. This is because the purpose of the control loop is to achieve its goals. Hence, it doesn't matter if new and/or legacy protocols are used, as long as the tasks can be accomplished.

5.2. Policy and Context

Policies can be used with control loops to guide the operation of the control loop. FOCALE is one example of this approach, and is shown in simplified form in Figure 1.

                       Feedback
                        Policy
       +---------+      Results      +---------+
       | Context +<------------------+ Policy  |
       | Manager +------------------>+ Manager |
       +----+----+      Selects      +--+---+--+
           / \          Policies        |  / \
            |                           |   |
            | Defines Behavior          |   | Feedback Results
            |                           |   |
            |                          \ /  |
            |                      +----+---+---+
            |         Context      |  Autonomic |
            |          Data        |   Manager  |
            |                      +----+---+---+
            |                           |  / \
            |            Adjust         |   |
     +------+-------+                  \ /  |  Feedback Results
     |              |    Input   +------+---+--------------------+
     |  +---------+ |    Data    |                               |
     |  | System  | |            |  +----------+    +---------+  |
     |  | Being   +-+----+------>+  | Observe, |    | Plan,   |  |
     |  | Managed | |   / \      |  | Orient,  +--->+ Decide, +--+----+
     |  +---------+ |    |       |  | Compare, |    | Act     |  |    |
     |              |    |       |  | Analyze  |    +----+----+  |    |
     |    System    |    |       |  +-----+----+        / \      |    |
     |              |    |       |        |              |       |    |
     +------+-------+    |       |       \ /             |       |    |
           / \           |       |  +-----+--------------+----+  |    |
            |            |       |  |    Machine Learning     |  |    |
            |            |       |  |    and Reasoning        |  |    |
            |            |       |  +-------------------------+  |    |
            |            |       |                               |    |
            |            |       |     Control Loop Elements     |    |
            |            |       +-------------------------------+    |
            |            |                                            |
            |            +--------------------------------------------+
            |                  Current State == Desired State         |
            |                                                         |
            +---------------------------------------------------------+
                          Current State != Desired State

     Figure 1. Simplified View of the FOCALE Autonomic Architecture

In FOCALE, Context is computed from information obtained and/or observed from the system being managed, along with other factors (e.g., business rules). This context information is used to determine the context that the system being managed is in. This context selects a working set of policies that are applicable for that context.
The Policy Manager executes its policies, which 634 define the behavior to be implemented by the Autonomic Manager. 635 The Autonomic Manager then adjusts the set of control loop 636 elements according to policy. 638 In the above simple example, if the current state equals the 639 desired state, then no adjustment is necessary, so the control 640 loop continues monitoring input data. In contrast, if the current 641 states does not equal the desired state, then the control loop 642 computes one or more plans, decides which plan(s) to execute, and 643 then monitors the execution of the plan(s) to ensure that the 644 expected outcomes occurred. 646 Machine learning algorithms monitor all of the operations of the 647 control loop, building up a knowledge base that can correlate 648 types of scenarios to solutions. It also records the efficacy of 649 the remediations determined by the control loop processing. 651 5.3 Types of Policies 653 The previous section showed the importance of using context-aware 654 policies to control the processing of Autonomic control loops. 655 There are two types of classifications of policies: 657 1) policies that pertain to specific actors, and 658 2) policies of a technological nature 660 5.3.1. Policies Organized by Actors 662 The Policy Continuum [Davy07] defines a set of stratified policy 663 languages, where each language is used by one or more actors in 664 the end-to-end management of the system. This helps ensure 665 consistency among the different constituencies that use policies, 666 enabling each constituency to use a grammar and terminology that 667 is familiar to them while being able to relate each language 668 to at least each other language at the next lower (or higher) 669 level of abstraction. 671 The purpose of the Policy Continuum is to emphasize that different 672 actors think of policy differently. 
   The essential point of the Policy Continuum is not the number of
   languages used, but rather the number of actors that require
   different concepts and terminology (and hence, different forms and
   structures of policy) to define the desired behavior of the system.

5.3.2. Policies Organized by Technology

   The document draft-strassner-supa-generic-policy-info-model-02
   describes the difference between two types of policy rules.
   Imperative policies, typified by "condition-action" or
   "event-condition-action" rules, define the set of commands to
   perform to manipulate the state of the system.  In contrast,
   declarative policies, typified by logic-based languages, define
   relationships between variables in terms of functions or inference
   rules.  A third type of policy rule, called a procedural policy, is
   one that explicitly defines a sequence of actions to execute given
   a set of conditions.

   To date, the vast majority of policy implementations are either
   imperative or procedural.  Recently, considerable interest has been
   generated by the concept of "intent-based" policies, which have
   been described as declarative policies.

5.4. Policy Conflicts

   There are two classes of policy conflicts that must be taken into
   account if policy is to be used to control the processing of the
   control loop.  They are:

   1) conflicts arising from technology, and
   2) conflicts arising from different systems

   These are elaborated on in the following two subsections.

5.4.1. Policy Conflicts Caused by Technology

   In imperative and procedural policies, policy conflict detection
   and remediation MUST be provided.  Since both of these policy types
   directly manipulate state, different instances of each can give
   rise to conflicting actions in response to the same conditions.
   For example, if two policies have the same conditions but different
   actions, this is a conflict.
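   As a non-normative illustration of this simplest conflict case, the
   check can be sketched as a pairwise comparison over a set of
   imperative policy rules.  The rule representation below (name,
   condition set, action) is a hypothetical example chosen for the
   sketch, not part of any ANIMA specification:

```python
from itertools import combinations

# Hypothetical representation of an imperative "condition-action"
# policy rule: (name, frozenset of conditions, action to execute).

def find_conflicts(policies):
    """Return pairs of policy names whose conditions are identical
    but whose actions differ -- the conflict case described above."""
    conflicts = []
    for (n1, c1, a1), (n2, c2, a2) in combinations(policies, 2):
        if c1 == c2 and a1 != a2:
            conflicts.append((n1, n2))
    return conflicts

policies = [
    ("p1", frozenset({"link-utilization > 80%"}), "reroute-traffic"),
    ("p2", frozenset({"link-utilization > 80%"}), "drop-low-priority"),
    ("p3", frozenset({"cpu-load > 90%"}),         "reroute-traffic"),
]
```

   Here p1 and p2 conflict (same condition, different actions), while
   p3 does not; a real implementation would also have to consider
   overlapping, rather than identical, condition sets.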
   There are many different algorithms to resolve policy conflicts.
   The simplest is adding a priority integer to each policy rule.
   This, however, is not advised, because:

   1) it is complex to ensure that all integers are properly ordered
      for all cases, and
   2) this is a static, reactive mechanism, and may not be adjustable
      dynamically to resolve all conflicts

   This type of policy conflict detection and resolution will be
   examined later in the lifecycle of the ANIMA WG.

   Certain types of policy languages, such as logic-based declarative
   policies, do not need an explicit policy conflict detection
   process.  This is because the logic itself ensures that policy
   conflicts are not allowed.  A simple example is Datalog, which
   consists of a set of statements that determine whether a
   proposition is true or not.

5.4.2. Policy Conflicts Caused by Different Systems

   Conflict can occur between the following broad classes of systems:

   o  between actions of different autonomic networks

   o  between actions of an autonomic network and actions of a
      non-autonomic network

   [RFC7575] recommends the use of prioritization, which yields the
   following (incomplete) first pass at remediation:

   o  manual, or operator-driven (e.g., using scripts), operations
      have the highest priority

   o  operator-driven autonomic operations come next

   o  default behavior of autonomic operations has the lowest priority

   < more in the next revision of this I-D >

5.5. Control Loops

   Control loops provide a generic mechanism for self-adaptation.
   That is, as user needs, business goals, and the ANI itself change,
   self-adaptation enables the ANI to change the services and
   resources it makes available in response.  Self-adaptive systems
   move decision-making from static, pre-defined commands to dynamic
   processes computed at runtime.
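   As a non-normative sketch of the self-adaptation cycle described in
   this document (compare the current state to the desired state, and
   act only on a mismatch), the following illustrates one loop
   iteration.  The state values and the plan_and_act callback are
   illustrative assumptions, not part of this draft:

```python
# Minimal sketch of one step of a closed control loop: if the current
# state equals the desired state, keep monitoring; otherwise compute
# and execute a remediation plan.  The states here are toy values.

def control_loop_step(current_state, desired_state, plan_and_act):
    """One loop iteration: return "monitoring" when no adjustment is
    necessary, or invoke the remediation callback and return
    "remediating" when the states differ."""
    if current_state == desired_state:
        return "monitoring"          # no adjustment necessary
    plan_and_act(current_state, desired_state)
    return "remediating"

adjustments = []

def plan_and_act(cur, des):
    # Record a (hypothetical) remediation moving cur toward des.
    adjustments.append((cur, des))
```

   A production loop would run this step continuously against observed
   input data, as described in the next subsection.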
   Control loops operate to continuously capture data that enables
   understanding of the system, and then provide actions to move the
   state of the system toward a common goal.

5.5.1. Types of Control

   There are two generic types of closed loop control.  Feedback
   control adjusts the control loop based on measuring the output of
   the system being managed to generate an error signal (the deviation
   of the current state from its desired state).  Action is then taken
   to reduce the deviation.

   In contrast, feedforward control anticipates future effects on a
   controlled variable by measuring other variables whose values may
   be more timely, and adjusts the process based on those variables.
   In this approach, control is not error-based, but rather, based on
   knowledge.

   Autonomic control loops MAY require both feedforward and feedback
   control, depending on the specific type of algorithm used.

5.5.2. Types of Control Loops

   There are many different types of control loops.  In autonomics,
   the most commonly cited loop is Monitor-Analyze-Plan-Execute (with
   Knowledge), called MAPE-K [Kephart03].  However, MAPE-K has a
   number of systemic problems, as described in [Strassner09].  Thus,
   other autonomic architectures, such as AutoI [AutoI] and FOCALE
   [Strassner07], use different types of control loops.  In both of
   these cases, AutoI and FOCALE evolved from the OODA control loop
   [Boyd95].  One of the most important reasons for using this loop,
   and not the MAPE-K loop, is that the OODA loop contains a critical
   step not contained in other loops: orientation.  Orientation
   determines how observations, decisions, and actions are performed.
   For example, assume that different types of sensor data need to be
   collected.  Furthermore, assume that each type of sensor data uses
   a different data model.
   Orientation explicitly ensures that each set of sensor data is
   normalized to a common form.  As another example, different data
   often have different semantics that affect their interpretation;
   orientation explicitly takes this into account.

   Figure 2 shows a simplified model of a control loop containing both
   feedforward and feedback elements.

                Input Variables
             ----------+-------------------------+
                       |                         |
                       |                         |
                      \ /                       \ /
                 +-----+------+             +----+----+
   Set Point --->| Controller |------------>| Process |--+---> Output
                 +-----+------+  Deltas of  +---------+  |
                       ^         Control                 |
                       |         Variable(s)             |
                       |                                 |
                       +---------------------------------+

     Figure 2: Control Loop with Feedforward and Feedback Elements

   Note that Figure 2 is a STATIC model.  Figure 3 is a dynamic
   version, called a Model-Reference Adaptive Control Loop (MRACL).

                        Model   +--------------+
             +-------+  Output  | Adaptive     |<----+
        +--->| Model |--------->| Algorithm(s) |     |
        |    +-------+          +---+-----+----+     |
        |     Adjusted              |     ^          |
  Input |     Parameters            |     |          |
  ------+          +----------------+     |          |
        |          |                      |          |
        |          |            +---------+          |
        |         \ /           |                    |
        |    +-----+------+     |       +----+----+  |
        +--->| Controller |-----+------>| Process |--+---> Output
             +-----+------+  Deltas of  +---------+  |
                   ^         Control                 |
                   |         Variable(s)             |
                   |                                 |
                   +---------------------------------+

          Figure 3: A Model-Reference Adaptive Control Loop

   More complex adaptive control loops have been defined; these will
   be described in a future I-D, so that an appropriate gap analysis
   can be performed to recommend an architectural approach for ANIMA.

5.5.3. Management of an Autonomic Control Loop

   Both standard and adaptive control loops (e.g., as represented in
   Figures 2 and 3, respectively) enable intervention by a human
   administrator or central control systems, if required.
   Interaction mechanisms include changing the behavior of one or more
   elements in the control loop, as well as providing mechanisms to
   bypass parts of the control loop (e.g., skip the "decide" phase and
   go directly to the "action" phase of an OODA loop, as is done in
   FOCALE).  This also enables the default behavior to be changed if
   necessary.

6. Security Considerations

   To be done in the next revision

7. IANA Considerations

   This document requests no action by IANA.

8. Acknowledgements

   TBD

9. References

9.1. Informative References

   [draft-ietf-anima-reference-model-01]
      Behringer, M., Carpenter, B., Eckert, T., Ciavaglia, L., Liu,
      B., Nobre, J., and J. Strassner, "A Reference Model for
      Autonomic Networking", June 2015.

   [RFC2119]
      Bradner, S., "Key words for use in RFCs to Indicate Requirement
      Levels", BCP 14, RFC 2119, March 1997.

   [RFC7575]
      Behringer, M., Pritikin, M., Bjarnason, S., Clemm, A.,
      Carpenter, B., Jiang, S., and L. Ciavaglia, "Autonomic
      Networking: Definitions and Design Goals", RFC 7575, June 2015.

   [AutoI]
      Galis, A., Denazis, S., Bassi, A., Giacomin, P., Berl, A.,
      Fischer, A., de Meer, H., Strassner, J., Davy, S., Macedo, D.,
      Pujolle, G., Loyola, J.R., Serrat, J., Lefevre, L., and A.
      Cheniour, "Management Architecture and Systems for Future
      Internet Networks", in FIA Book: "Towards the Future Internet -
      A European Research Perspective", IOS Press, May 2009,
      pp. 112-122, ISBN 978-1-60750-007-0.

   [Boyd95]
      Boyd, J.R., "The Essence of Winning and Losing", 28 June 1995.

   [Davy07]
      Davy, S., Jennings, B., and J. Strassner, "The Policy Continuum
      - A Formal Model", Proc. of the 2nd Intl. IEEE Workshop on
      Modeling Autonomic Communication Environments (MACE), Multicon
      Lecture Notes, No. 6, Multicon, Berlin, 2007, pp. 65-78.

   [Kephart03]
      Kephart, J. and D. Chess, "The Vision of Autonomic Computing",
      IEEE Computer, vol.
      36, no. 1, pp. 41-50, DOI 10.1109/MC.2003.1160055, January 2003.

   [Mitchell97]
      Mitchell, T., "Machine Learning", McGraw-Hill, March 1997,
      ISBN 978-0070428072.

   [Strassner07]
      Strassner, J., Agoulmine, N., and E. Lehtihet, "FOCALE - A Novel
      Autonomic Networking Architecture", International Transactions
      on Systems, Science, and Applications (ITSSA) Journal, Vol. 3,
      No. 1, pp. 64-79, May 2007.

   [Strassner09]
      Strassner, J., Kim, S., and J. Hong, "The Design of an Autonomic
      Communication Element to Manage Future Internet Services", Proc.
      of the 12th Asia-Pacific Network Operations and Management
      Conference, pp. 122-132.

Authors' Addresses

   John Strassner
   Huawei Technologies
   2330 Central Expressway
   Santa Clara, CA 95050
   USA

   Email: john.sc.strassner@huawei.com

   Joel Halpern
   Ericsson
   P. O. Box 6049
   Leesburg, VA 20178

   Email: joel.halpern@ericsson.com

   Michael H. Behringer
   Cisco Systems
   Building D, 45 Allee des Ormes
   Mougins 06250
   France

   Email: mbehring@cisco.com