ANIMA                                                          T. Eckert
Internet-Draft                                              M. Behringer
Intended status: Informational                                     Cisco
Expires: April 29, 2015                                 October 26, 2014

                 Autonomic Network Stable Connectivity
               draft-eckert-anima-stable-connectivity-00

Abstract

   This document describes how to integrate OAM processes with the
   autonomic control plane (ACP) in Autonomic Networks (AN) to provide
   stable connectivity for those processes.

Status of This Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF).  Note that other groups may also distribute
   working documents as Internet-Drafts.
   The list of current Internet-Drafts is at
   http://datatracker.ietf.org/drafts/current/.

   Internet-Drafts are draft documents valid for a maximum of six months
   and may be updated, replaced, or obsoleted by other documents at any
   time.  It is inappropriate to use Internet-Drafts as reference
   material or to cite them other than as "work in progress."

   This Internet-Draft will expire on April 29, 2015.

Copyright Notice

   Copyright (c) 2014 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with respect
   to this document.  Code Components extracted from this document must
   include Simplified BSD License text as described in Section 4.e of
   the Trust Legal Provisions and are provided without warranty as
   described in the Simplified BSD License.

Table of Contents

   1.  Introduction
     1.1.  Self-dependent OAM connectivity
     1.2.  Data Communication Networks (DCNs)
     1.3.  Leveraging the ACP
   2.  Solutions
     2.1.  Stable connectivity for centralized OAM operations
       2.1.1.  Simple connectivity for non-autonomic NOC application
               devices
       2.1.2.  Limitations and enhancement overview
       2.1.3.  Simultaneous ACP and data plane connectivity
       2.1.4.  IPv4-only NOC application devices
       2.1.5.  Path selection policies
       2.1.6.  Autonomic NOC devices/applications
       2.1.7.  Encryption of data-plane connections
       2.1.8.  Long-term direction of the solution
     2.2.  Stable connectivity for distributed network/OAM functions
   3.  Security Considerations
   4.  IANA Considerations
   5.  Acknowledgements
   6.  Change log [RFC Editor: Please remove]
   7.  References
   Authors' Addresses

1.  Introduction

1.1.  Self-dependent OAM connectivity

   OAM (Operations, Administration and Management) processes for data
   networks often suffer from a circular dependency: they rely on the
   connectivity of the very network that they are meant to manage.

   The ability to perform OAM operations on a network device first
   requires executing, on all intervening devices, the OAM procedures
   necessary to create network connectivity to that device.  This
   typically leads to a sequential, 'expanding ring' style of
   configuration from a NOC.  It also leads to tight dependencies
   between provisioning tools and security enrollment of devices: any
   process that wants to enroll multiple devices along a newly deployed
   network topology must interlock tightly with the provisioning
   process that creates connectivity before the enrollment can move on
   to the next device.

   When performing change operations on a network, it is likewise
   necessary to ensure at every step of the process that connectivity
   to remote devices is not interrupted.
   This applies especially to the change provisioning of routing,
   security and addressing policies in the network, as often occurs
   during mergers and acquisitions, the introduction of IPv6, or other
   major overhauls of the infrastructure design.

   All these circular dependencies make OAM processes complex and
   potentially fragile.  When automation is used, for example through
   provisioning systems or network controllers, this complexity extends
   into that automation software.

1.2.  Data Communication Networks (DCNs)

   In the late 1990s and early 2000s, IP networks became the method of
   choice for building separate OAM networks for the communications
   infrastructure of service providers.  This concept was standardized
   in G.7712/Y.1703 and called "Data Communications Networks" (DCN).
   These were (and still are) physically separate IP(/MPLS) networks
   that provide access to the OAM interfaces of all equipment that has
   to be managed, from PSTN switches and optical equipment to today's
   Ethernet and IP/MPLS production network equipment.

   Such DCNs provide stable connectivity that is not subject to the
   aforementioned problems, because they are entirely separate
   networks: change configuration of the production IP network is done
   via the DCN but never affects the DCN configuration.  Of course,
   this approach comes at the cost of buying and operating a separate
   network, a cost that is not feasible for many networks, most notably
   smaller service providers, most enterprises and typical IoT
   networks.

1.3.  Leveraging the ACP

   One goal of the Autonomic Networks Autonomic Control Plane (ACP) is
   to provide stable connectivity similar to that of a DCN, but without
   having to build a separate DCN.  It is clear that such an 'in-band'
   approach can never fully achieve the same level of separation, but
   the goal is to get as close to it as possible.

   This solution approach has several aspects.
   One aspect is designing the implementation of the ACP in network
   devices so that it actually keeps operating without interruption
   through changes in what we will call in this document the
   "data-plane", i.e., the operator- or controller-configured service
   planes of the network equipment.  This aspect is not currently
   covered in this document.

   Another aspect is how to leverage the stable IPv6 connectivity
   provided by the ACP to build actual OAM solutions.  This is the
   current scope of this document.

2.  Solutions

2.1.  Stable connectivity for centralized OAM operations

   In the most common case, OAM operations will be performed by one or
   more applications running on a variety of centralized NOC systems
   that communicate with network devices.  We describe approaches of
   varying sophistication for leveraging the ACP for stable
   connectivity.  The descriptions will show that there is a wide range
   of options, some simple, some more complex.

   We see three stages of interest:

   o  There are simple options, described first, that we consider good
      starting points for operationalizing the use of the ACP for
      stable connectivity.

   o  There are more advanced intermediate options that try to
      establish backward compatibility with existing deployed
      approaches, such as leveraging NAT.  Selection and deployment of
      these approaches needs to be carefully vetted to ensure that they
      provide a positive RoI.  This very much depends on the
      operational processes of the network operator.

   o  It seems clearly feasible to build towards a long-term
      configuration that provides all the desired operational,
      zero-touch and security benefits of an autonomic network, but a
      range of details for this still have to be worked out.

2.1.1.
 Simple connectivity for non-autonomic NOC application devices

   In the most simple deployment case, the ACP extends all the way into
   the NOC via a network device that is set up to provide native access
   into the ACP for non-autonomic devices.  It acts as the default
   router for those hosts and provides them with IPv6 connectivity into
   the ACP only - no IPv4 connectivity.  NOC devices with this setup
   need to support IPv6 but require no other modifications to leverage
   the ACP.

   This setup is sufficient for troubleshooting OAM operations such as
   SSH into network devices, NMS that perform SNMP read operations for
   status checking, software downloads into autonomic devices, and so
   on.  In conjunction with otherwise unmodified OAM operations via
   separate NOC devices/applications, it can provide a good subset of
   the stable connectivity goals of interest from the ACP.

   Because the ACP provides 'only' IPv6 connectivity, and because the
   addressing provided by the ACP does not include any addressing
   structure that NOC operations often rely on to recognize where
   devices are in the network, it is likely highly desirable to set up
   DNS so that the ACP IPv6 addresses of autonomic devices are known
   via logical domain names.  For example, if DNS in the network was
   set up with names for network devices such as
   devicename.noc.example.com, then the ACP address of that device
   could be mapped to devicename-acp.noc.example.com.

2.1.2.  Limitations and enhancement overview

   This most simple type of attachment of NOC applications to the ACP
   suffers from a range of limitations:

   1.  NOC applications can not directly probe whether the desired
       so-called 'data-plane' network connectivity works, because they
       do not directly have access to it.
       This problem is not dissimilar to probing connectivity for other
       services (such as VPN services) to which they do not have direct
       access either, so the NOC may already employ appropriate
       mechanisms to deal with this issue (probing proxies).

   2.  NOC applications need to support IPv6, which is often still not
       the case in many enterprise networks.

   3.  Performance of the ACP will be limited compared to normal
       'data-plane' connectivity.  The setup of the ACP will often
       support only non-hardware-accelerated forwarding.  Running a
       large amount of traffic through the ACP, especially for tasks
       where it is not necessary, will reduce its performance/
       effectiveness for those operations where it is necessary or
       highly desirable.

   4.  Security of the ACP is reduced by exposing the ACP natively (and
       unprotected) into a LAN in the NOC where the NOC devices are
       attached to it.

   These four problems can be tackled independently of each other by
   solution improvements.  Combining these solution improvements
   together ultimately leads towards the target long-term solution.

2.1.3.  Simultaneous ACP and data plane connectivity

   Simultaneous connectivity to both the ACP and the data-plane can be
   achieved in a variety of ways.  If the data-plane is IPv4-only, then
   any method for dual-stack attachment of the NOC device/application
   will suffice: IPv6 connectivity from the NOC provides access via the
   ACP, IPv4 provides access via the data-plane.  If, as explained
   above in the most simple case, an autonomic device supports native
   attachment to the ACP, and the existing NOC setup is IPv4-only, then
   it could be sufficient to simply attach the ACP device(s) as the
   IPv6 default router to the NOC LANs and keep the existing IPv4
   default router setup unchanged.

   If the data-plane of the network also supports IPv6, then the NOC
   devices that need access to the ACP should have a dual-homed IPv6
   setup.
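   One reason such dual-homing can work without per-application
   configuration is IPv6 default address selection: a host holding one
   source address inside the ACP's ULA prefix and one global
   data-plane address will, by the longest-matching-prefix rule of
   RFC 6724, pick the ULA source for ACP destinations.  The following
   sketch illustrates only that single rule (the addresses are
   hypothetical examples and the full RFC 6724 rule set is
   considerably richer):

```python
import ipaddress

def common_prefix_len(a, b):
    """Number of leading bits shared by two IPv6 addresses."""
    xor = int(a) ^ int(b)
    return 128 - xor.bit_length()

def pick_source(candidates, dest):
    """Simplified RFC 6724 rule 8: prefer the candidate source
    address sharing the longest prefix with the destination."""
    dest = ipaddress.IPv6Address(dest)
    return max(candidates,
               key=lambda c: common_prefix_len(ipaddress.IPv6Address(c),
                                               dest))

# Hypothetical NOC host addresses: one inside an ACP ULA prefix,
# one global address on the data-plane.
noc_addrs = ["fd89:b714:f3db:2::23", "2001:db8:5:2::23"]

# Destination inside the ACP -> the ULA source wins (longer shared prefix).
print(pick_source(noc_addrs, "fd89:b714:f3db:1::1"))  # fd89:b714:f3db:2::23
# Ordinary data-plane destination -> the global source wins.
print(pick_source(noc_addrs, "2001:db8:9::1"))        # 2001:db8:5:2::23
```

   The same comparison is what makes the ACP-access LAN prefix choice
   described next matter: only if that LAN shares a common prefix with
   the ACP ULA does the automatic selection fall out of the standard
   host behavior.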
   One option is to make the NOC devices multi-homed, with one logical
   or physical IPv6 interface connecting to the data-plane and another
   into the ACP.  The LAN that provides access to the ACP should then
   be given an IPv6 prefix that shares a common prefix with the IPv6
   ULA of the ACP, so that the standard IPv6 source address selection
   rules on the NOC host result in the desired automatic selection of
   the right interface: the ACP-facing interface for connections to ACP
   addresses, and the data-plane interface for anything else.  If this
   can not be achieved automatically, it needs to be done via simple
   IPv6 static routes on the NOC host.

   Providing two virtual (e.g., dot1q subnet) connections into NOC
   hosts may be seen as undesired complexity.  In that case, the
   routing policy providing access to both the ACP and the data-plane
   via IPv6 needs to happen in the NOC network itself: the NOC
   application device gets a single attachment interface, but still
   with the same two IPv6 addresses as before - one for use towards the
   ACP, one towards the data-plane.  The first-hop router connecting to
   the NOC application device would then have separate interfaces: one
   towards the data-plane, one towards the ACP.  Routing of traffic
   from NOC application hosts would then have to be based on the source
   IPv6 address of the host: traffic from the address designated for
   ACP use would get routed towards the ACP, traffic from the
   designated data-plane address towards the data-plane.

   In the most simple case, we get the following topology: existing NOC
   application devices connect via an existing NOClan and an existing
   first-hop router Rtr1 to the data-plane.  Rtr1 is not made
   autonomic; instead, the edge router ANrtr of the Autonomic Network
   is attached via a separate interface to Rtr1 and provides access to
   the ACP via ACPaccessLan.
   Rtr1 is configured with the above described IPv6 source routing
   policies and the NOC-app-devices are given the secondary IPv6
   address for connectivity into the ACP.

                                      ---- ... (data-plane)
      NOC-app-device(s) ---- NOClan --- Rtr1
                                      ---- ACPaccessLan -- ANrtr ... (ACP)

                                 Figure 1

   If Rtr1 was to be upgraded to also implement Autonomic Networking
   and the ACP, the picture would change as follows:

                                       ---- ... (data-plane)
      NOC-app-device(s) ---- NOClan --- ANrtr1
                                       .  .  ---- ... (ACP)
                                       \-/
                              (ACP to data-plane loopback)

                                 Figure 2

   In this case, ANrtr1 would have to implement some more advanced
   routing, such as cross-VRF routing, because the data-plane and ACP
   are most likely run via separate VRFs.  A simple short-term
   workaround could be a physical external loopback cable between two
   ports of ANrtr1 to connect the data-plane and ACP VRFs, as shown in
   the picture.

2.1.4.  IPv4-only NOC application devices

   With the ACP being intentionally IPv6-only, attachment of IPv4-only
   NOC application devices to the ACP requires the use of IPv4-to-IPv6
   NAT.  This NAT setup could for example be done in Rtr1 in the above
   picture, to also support IPv4-only NOC application devices connected
   to NOClan.

   To support connections initiated from IPv4-only NOC applications
   towards the ACP of network devices, it is necessary to create a
   static mapping of the ACP IPv6 addresses into an unused IPv4 address
   space, and a dynamic or static mapping of the IPv4 NOC application
   device address (prefix) into IPv6 routed in the ACP.  The main issue
   in this setup is the mapping of all ACP IPv6 addresses to IPv4.
   Without further network intelligence, this needs to be a 1:1 address
   mapping, because the prefix used for ACP IPv6 addresses is too long
   to be mapped directly into IPv4 on a prefix basis.
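   Such a 1:1 mapping could be maintained as a simple table that is
   extended whenever a device is enrolled into the ACP.  The following
   sketch (prefixes and device addresses are hypothetical examples, not
   from any specification) allocates one address out of an unused IPv4
   pool per ACP IPv6 address, the way an enrollment automation step
   might provision the NAT router:

```python
import ipaddress

class AcpNatTable:
    """Sketch of a 1:1 static mapping of ACP IPv6 addresses into an
    unused IPv4 pool, e.g. maintained by enrollment automation."""

    def __init__(self, v4_pool="192.0.2.0/24"):
        self.pool = list(ipaddress.ip_network(v4_pool).hosts())
        self.v6_to_v4 = {}
        self.v4_to_v6 = {}

    def enroll(self, acp_v6):
        """Allocate the next free IPv4 address for a newly enrolled
        ACP device; the mapping is static once created."""
        acp_v6 = ipaddress.IPv6Address(acp_v6)
        if acp_v6 in self.v6_to_v4:          # idempotent re-enrollment
            return self.v6_to_v4[acp_v6]
        v4 = self.pool.pop(0)
        self.v6_to_v4[acp_v6] = v4
        self.v4_to_v6[v4] = acp_v6
        return v4

nat = AcpNatTable()
print(nat.enroll("fd89:b714:f3db::1"))  # 192.0.2.1
print(nat.enroll("fd89:b714:f3db::2"))  # 192.0.2.2
```

   Keeping the reverse table alongside the forward one also covers the
   device-initiated case (e.g., syslog) discussed below.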
   One could implement dynamic mappings in router software by
   leveraging DNS, but it seems highly undesirable to implement such
   complex technologies for something that is ultimately a temporary
   problem (IPv4-only NOC application devices).  With today's
   operational directions it is likely preferable to automate the setup
   of 1:1 NAT mappings in that NAT router as part of the automation
   process of network device enrollment into the ACP.

   The ACP can also be used for connections initiated by the network
   device towards the NOC application devices, for example syslog from
   autonomic devices.  In this case, static mappings of the NOC
   application devices' IPv4 addresses are required.  This can easily
   be done with a static prefix mapping into IPv6.

   Overall, the use of NAT is especially subject to the RoI
   considerations above, but the methods described here may not be too
   different from the problems encountered entirely independently of
   AN/ACP when some parts of the network introduce IPv6 while NOC
   application devices are not (yet) upgradeable.

2.1.5.  Path selection policies

   As mentioned above, the ACP is not expected to have high
   performance, because its primary goal is connectivity, and for
   existing platforms this often means that it is a lot more effort to
   implement that additional connectivity with hardware acceleration
   than without - especially because of the desire to support full
   encryption across the ACP.  Some of these issues may go away in the
   future with further adoption of the ACP and network device designs
   that better cater to the needs of a separate OAM plane, but it is
   wise to plan even for long-term designs of the solution that do NOT
   depend on high performance of the ACP, because high-performance
   encryption is not yet something we should expect from every device
   across which we would want to provide this solution.
   [Note that this is the opposite of the expectation that future NOC
   application devices will have IPv6, so that any considerations for
   IPv4/NAT are temporary.]

   To work around the expected performance limitations of the ACP, we
   do expect to have the above described dual connectivity via both the
   ACP and the data-plane between NOC application devices and AN
   devices with ACP.  The ACP connectivity is expected to always be
   there (as soon as a device is enrolled), but the data-plane
   connectivity is only present under normal operations and will not be
   present during, e.g., early stages of device bootstrap, failures,
   provisioning mistakes or network configuration changes.

   The desired policy is therefore as follows: in the absence of
   further security considerations (see below), traffic between NOC
   applications and AN devices should prefer data-plane connectivity
   and resort to using the ACP only when necessary - unless the
   operation is known to be so closely tied to the cases where the ACP
   is necessary that it makes no sense to try using the data-plane.  An
   example is of course an SSH connection from the NOC into a network
   device to troubleshoot network connectivity: this could easily
   always rely on the ACP.  Likewise, if a NOC application is known to
   transmit large amounts of data and it uses the ACP, then its
   performance needs to be controlled so that it does not overload the
   ACP.  Typical examples of this are software downloads.

   There is a wide range of methods to build up these policies.  We
   describe a few:

   DNS can be used to set up multiple names for the same network
   device, with different addresses assigned: one name
   (name.noc.example.com) with only the data-plane address(es) (IPv4
   and/or IPv6), to be used for probing connectivity or performing
   routine software downloads that may stall/fail when there are
   connectivity issues.
   One name (name-acp.noc.example.com) with only the ACP-reachable
   address of the device, for troubleshooting and for probing/discovery
   that is desired to always use only the ACP.  One name with both
   data-plane and ACP addresses (name-both.noc.example.com).

   Traffic policing and/or shaping at the ACP edge in the NOC can be
   used to throttle applications such as software downloads into the
   ACP.

   MP-TCP is a very attractive candidate for automating the use of both
   data-plane and ACP and minimizing or fully avoiding the need for the
   above mentioned logical names to pre-select the desired connectivity
   (data-plane-only, ACP-only, both).  For example, a setup for
   non-MP-TCP-aware applications would be as follows:

   DNS naming is set up to provide the ACP IPv6 address of network
   devices.  Unbeknownst to the application, MP-TCP is used.  MP-TCP
   mutually discovers, between the NOC and the network device, the
   data-plane addresses and carries all traffic across them whenever an
   MP-TCP sub-flow across the data-plane can be built.

   In Autonomic Network devices where data-plane and ACP are in
   separate VRFs, it is clear that this type of MP-TCP sub-flow
   creation across different VRFs is new/added functionality.
   Likewise, the policy of preferring a particular address (NOC device)
   or VRF (AN device) for the traffic is potentially also not provided
   as standard.

2.1.6.  Autonomic NOC devices/applications

   Setting up connectivity between the NOC and autonomic devices when
   the NOC device itself is non-autonomic is, as mentioned in the
   beginning, a security issue.  As shown in the previous paragraphs,
   it also results in a range of connectivity considerations, some of
   which may be quite undesirable or complex to operationalize.
   Making NOC application devices autonomic and having them participate
   in the ACP is therefore not only a highly desirable solution to the
   security issues, but can also make the ACP easier to
   operationalize, because it minimizes NOC-specific edge
   considerations - the ACP is simply built all the way automatically,
   even inside the NOC, and only authorized and authenticated NOC
   devices/applications will have access to it.

   Supporting the ACP all the way into an application device requires
   implementing the following aspects in it: the AN bootstrap/
   enrollment mechanisms, the secure channel for the ACP, and at least
   the host side of the IPv6 routing setup for the ACP.  Minimally,
   this could all be implemented as an application and be made
   available to the host OS via, e.g., a tap driver, making the ACP
   show up as another IPv6-enabled interface.

   Having said this: if the structure of NOC applications is being
   transformed through virtualization anyhow, then it may be equally
   secure and appropriate to construct a (physical) NOC application
   system by combining a virtual AN/ACP-enabled router with
   non-AN/ACP-enabled NOC application VMs via a hypervisor, leveraging
   the configuration options described in the previous sections but
   just virtualizing them.

2.1.7.  Encryption of data-plane connections

   When combining ACP and data-plane connectivity for availability and
   performance reasons, this too has an impact on security: when using
   the ACP, the traffic will be mostly encryption-protected, especially
   when considering the above described use of AN application devices.
   If instead the data-plane is used, then this is not the case anymore
   unless it is done by the application.
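   One way an application can protect its data-plane connections itself
   is to require mutually authenticated TLS.  The following sketch uses
   Python's standard ssl module; the certificate file parameters are
   hypothetical placeholders for whatever credentials are available
   (such as the AN-domain certificates discussed next), and disabling
   host-name checking in favor of pure certificate-chain trust is an
   illustrative assumption, not a recommendation from this document:

```python
import ssl

def make_mutual_tls_context(ca_file=None, cert_file=None, key_file=None):
    """Client-side TLS context that requires certificate-based
    authentication of the peer; when cert_file/key_file are loaded,
    the client authenticates itself too (mutual authentication)."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.verify_mode = ssl.CERT_REQUIRED   # peer must present a valid cert
    ctx.check_hostname = False            # trust via the CA chain only,
                                          # not via DNS names (assumption)
    if ca_file:
        ctx.load_verify_locations(ca_file)        # e.g. the domain CA
    if cert_file and key_file:
        ctx.load_cert_chain(cert_file, key_file)  # our own certificate
    return ctx

ctx = make_mutual_tls_context()
```

   A connection wrapped with such a context fails unless both ends can
   prove possession of a certificate the other side trusts, which is
   exactly the property the shared-domain case below exploits.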
   The most simple solution to this problem exists when using AN NOC
   application devices, because in that case the communicating AN NOC
   application and the AN network device have certificates, obtained
   through the AN enrollment process, that they can mutually trust
   (same AN domain).  As a result, data-plane connectivity that
   supports it can simply leverage TLS/dTLS with mutual AN-domain
   certificate authentication - and does not incur new key management.

   If this automatic security benefit is seen as most important, but a
   "full" ACP stack in the NOC application device is unfeasible, then
   it would still be possible to design a stripped-down version of AN
   functionality for such NOC hosts that provides only enrollment of
   the NOC host into the AN domain, to the extent that the host
   receives an AN domain certificate, but without directly
   participating in the ACP afterwards.  Instead, the host would just
   leverage TLS/dTLS using its AN certificate via the data-plane with
   AN network devices, as well as indirectly via the ACP through the
   above mentioned in-NOC network edge connectivity into the ACP.

   When using the ACP itself, TLS/dTLS at the transport layer between
   NOC application and network device is somewhat of a double price to
   pay (the ACP also encrypts) and could potentially be optimized away,
   but given the assumed lower performance of the ACP, this seems an
   unnecessary optimization.

2.1.8.  Long-term direction of the solution

   If we consider what could be the most lightweight and autonomic
   long-term solution based on the technologies described above, we see
   the following direction:

   1.  NOC applications should at least support IPv6.  IPv4/IPv6 NAT
       in the network to enable use of the ACP is undesirable in the
       long term.
       Having IPv4-only applications automatically leverage IPv6
       connectivity via host-stack options is likely not feasible
       (NOTE: this still has to be vetted more).

   2.  Build the ACP as a lightweight application for NOC application
       devices, so that the ACP extends all the way into the actual NOC
       application devices.

   3.  Leverage, and as necessary enhance, MP-TCP with automatic dual
       connectivity: if an MP-TCP-unaware application is using ACP
       connectivity, the policies used should add sub-flow(s) via the
       data-plane and prefer them.

   4.  Consider how to best map NOC application desires to underlying
       transport mechanisms: the above three points do not cover all
       options.  Depending on the OAM operation, one may still want
       only the ACP, only the data-plane, or to automatically prefer
       one over the other, and/or use the ACP with low or high
       performance (for emergency OAM actions such as countering DDoS).
       It is as of today not clear what the simplest set of tools is to
       explicitly enable the choice of desired behavior for each OAM
       operation.  The use of the above mentioned DNS and MP-TCP
       mechanisms is a start, but this will require additional thought.
       This is likely a specific case of the more generic scope of
       TAPS.

2.2.  Stable connectivity for distributed network/OAM functions

   The ACP can provide common direct-neighbor discovery and capability
   negotiation, as well as stable and secure connectivity, for
   functions running distributed in network devices.  It can therefore
   eliminate the need to re-implement similar functions in each
   distributed function in the network.  Today, every distributed
   protocol implements this itself, with functional elements usually
   called "Hello" mechanisms and with often protocol-specific security
   mechanisms.
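   As a sketch of the kind of functional element that could be factored
   out into such a common service, the following models a shared
   neighbor table with hello timestamps and hold-time-based liveness -
   the piece each distributed function would otherwise re-implement.
   The API is hypothetical and not taken from any AN specification:

```python
import time

class NeighborTable:
    """Sketch of a common 'Hello'/liveness service that distributed
    functions could share instead of each running their own."""

    def __init__(self, hold_time=3.0, clock=time.monotonic):
        self.hold_time = hold_time   # seconds without a hello -> dead
        self.clock = clock
        self.last_hello = {}         # neighbor-id -> last hello time

    def hello_received(self, neighbor_id):
        """Record a hello from a neighbor (e.g. received over the ACP)."""
        self.last_hello[neighbor_id] = self.clock()

    def alive_neighbors(self):
        """Neighbors whose last hello is within the hold time."""
        now = self.clock()
        return {n for n, t in self.last_hello.items()
                if now - t <= self.hold_time}

# Simulated clock for a deterministic illustration.
t = [0.0]
nt = NeighborTable(hold_time=3.0, clock=lambda: t[0])
nt.hello_received("rtr-a")
t[0] = 2.0
nt.hello_received("rtr-b")
t[0] = 4.0                   # rtr-a last heard 4s ago, rtr-b 2s ago
print(nt.alive_neighbors())  # {'rtr-b'}
```

   Run once over the ACP, with its secure channels, such a service
   would also subsume the per-protocol security mechanisms mentioned
   above.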
   KARP has tried to start providing common directions and thereby
   reduce the re-invention of at least some of the security aspects,
   but it covers only routing protocols, and it is unclear how well it
   is applicable to a potentially wider range of distributed network
   agents, such as those performing distributed OAM functions.  The ACP
   can help in these cases.

   This section is TBD for further iterations of this draft.

3.  Security Considerations

   The security considerations are covered in the appropriate
   subsections of the solutions described.

4.  IANA Considerations

   This document requests no action by IANA.

5.  Acknowledgements

   This work originated from an Autonomic Networking project at Cisco
   Systems, which started in early 2010, including customers involved
   in the design and early testing.  Many people contributed to the
   aspects described in this document, including, in alphabetical
   order: BL Balaji, Steinthor Bjarnason, Yves Herthoghs, Sebastian
   Meissner, Ravi Kumar Vadapalli.

6.  Change log [RFC Editor: Please remove]

   00: Initial version.

7.  References

   [I-D.behringer-autonomic-control-plane]
              Behringer, M., Bjarnason, S., BL, B., and T. Eckert, "An
              Autonomic Control Plane", draft-behringer-autonomic-
              control-plane-00 (work in progress), June 2014.

   [I-D.irtf-nmrg-an-gap-analysis]
              Jiang, S., Carpenter, B., and M. Behringer, "Gap Analysis
              for Autonomic Networking", draft-irtf-nmrg-an-gap-
              analysis-02 (work in progress), October 2014.

   [I-D.irtf-nmrg-autonomic-network-definitions]
              Behringer, M., Pritikin, M., Bjarnason, S., Clemm, A.,
              Carpenter, B., Jiang, S., and L. Ciavaglia, "Autonomic
              Networking - Definitions and Design Goals", draft-irtf-
              nmrg-autonomic-network-definitions-04 (work in progress),
              October 2014.

   [I-D.pritikin-bootstrapping-keyinfrastructures]
              Pritikin, M., Behringer, M., and S. Bjarnason,
              "Bootstrapping Key Infrastructures", draft-pritikin-
              bootstrapping-keyinfrastructures-01 (work in progress),
              September 2014.

   [RFC4193]  Hinden, R. and B. Haberman, "Unique Local IPv6 Unicast
              Addresses", RFC 4193, October 2005.

Authors' Addresses

   Toerless Eckert
   Cisco

   Email: eckert@cisco.com

   Michael H. Behringer
   Cisco

   Email: mbehring@cisco.com