ANIMA WG                                              M. Behringer, Ed.
Internet-Draft
Intended status: Standards Track                         T. Eckert, Ed.
Expires: April 15, 2018                                          Huawei
                                                            S. Bjarnason
                                                          Arbor Networks
                                                        October 12, 2017

                   An Autonomic Control Plane (ACP)
              draft-ietf-anima-autonomic-control-plane-12

Abstract

   Autonomic functions need a control plane to communicate, which
   depends on some addressing and routing.  This Autonomic Management
   and Control Plane should ideally be self-managing, and as
   independent as possible of configuration.  This document defines
   such a plane and calls it the "Autonomic Control Plane", with the
   primary use as a control plane for autonomic functions.
   It also serves as a "virtual out of band channel" for OAM
   (Operations Administration and Management) communications over a
   network that is secure and reliable even when the network is not
   configured or is misconfigured.

Status of This Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF).  Note that other groups may also distribute
   working documents as Internet-Drafts.  The list of current Internet-
   Drafts is at https://datatracker.ietf.org/drafts/current/.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."

   This Internet-Draft will expire on April 15, 2018.

Copyright Notice

   Copyright (c) 2017 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (https://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document.  Code Components extracted from this
   document must include Simplified BSD License text as described in
   Section 4.e of the Trust Legal Provisions and are provided without
   warranty as described in the Simplified BSD License.

Table of Contents

   1. Introduction
   2. Acronyms and Terminology
   3. Use Cases for an Autonomic Control Plane
      3.1. An Infrastructure for Autonomic Functions
      3.2. Secure Bootstrap over an Unconfigured Network
      3.3. Data-Plane Independent Permanent Reachability
   4. Requirements
   5. Overview
   6. Self-Creation of an Autonomic Control Plane (ACP) (Normative)
      6.1. ACP Domain, Certificate and Network
           6.1.1. Certificate Domain Information Field
           6.1.2. ACP domain membership check
           6.1.3. Certificate Maintenance
      6.2. ACP Adjacency Table
      6.3. Neighbor Discovery with DULL GRASP
      6.4. Candidate ACP Neighbor Selection
      6.5. Channel Selection
      6.6. Candidate ACP Neighbor verification
      6.7. Security Association protocols
           6.7.1. ACP via IKEv2
           6.7.2. ACP via dTLS
           6.7.3. ACP Secure Channel Requirements
      6.8. GRASP in the ACP
           6.8.1. GRASP as a core service of the ACP
           6.8.2. ACP as the Security and Transport substrate for GRASP
      6.9. Context Separation
      6.10. Addressing inside the ACP
           6.10.1. Fundamental Concepts of Autonomic Addressing
           6.10.2. The ACP Addressing Base Scheme
           6.10.3. ACP Zone Addressing Sub-Scheme
           6.10.4. ACP Manual Addressing Sub-Scheme
           6.10.5. ACP Vlong Addressing Sub-Scheme
           6.10.6. Other ACP Addressing Sub-Schemes
      6.11. Routing in the ACP
           6.11.1. RPL Profile
      6.12. General ACP Considerations
           6.12.1. Performance
           6.12.2. Addressing of Secure Channels in the data-plane
           6.12.3. MTU
           6.12.4. Multiple links between nodes
           6.12.5. ACP interfaces
   7. ACP support on L2 switches/ports (Normative)
      7.1. Why
      7.2. How (per L2 port DULL GRASP)
   8. Support for Non-ACP Components (Normative)
      8.1. ACP Connect
           8.1.1. Non-ACP Controller / NMS system
           8.1.2. Software Components
           8.1.3. Auto Configuration
           8.1.4. Combined ACP/Data-Plane Interface (VRF Select)
           8.1.5. Use of GRASP
      8.2. ACP through Non-ACP L3 Clouds (Remote ACP neighbors)
           8.2.1. Configured Remote ACP neighbor
           8.2.2. Tunneled Remote ACP Neighbor
           8.2.3. Summary
   9. Benefits (Informative)
      9.1. Self-Healing Properties
      9.2. Self-Protection Properties
           9.2.1. From the outside
           9.2.2. From the inside
      9.3. The Administrator View
   10. Further Considerations (Informative)
      10.1. BRSKI Bootstrap (ANI)
      10.2. ACP (and BRSKI) Diagnostics
      10.3. Enabling and disabling ACP/ANI
           10.3.1. Filtering for non-ACP/ANI packets
           10.3.2. Admin Down State
           10.3.3. Interface level ACP/ANI enable
           10.3.4. Which interfaces to auto-enable?
           10.3.5. Node Level ACP/ANI enable
           10.3.6. Undoing ANI/ACP enable
           10.3.7. Summary
      10.4. ACP Neighbor discovery protocol selection
           10.4.1. LLDP
           10.4.2. mDNS and L2 support
           10.4.3. Why DULL GRASP
      10.5. Choice of routing protocol (RPL)
      10.6. Extending ACP channel negotiation (via GRASP)
      10.7. CAs, domains and routing subdomains
      10.8. Adopting ACP concepts for other environments
   11. Security Considerations
   12. IANA Considerations
   13. Acknowledgements
   14. Change log [RFC Editor: Please remove]
      14.1. Initial version
      14.2. draft-behringer-anima-autonomic-control-plane-00
      14.3. draft-behringer-anima-autonomic-control-plane-01
      14.4. draft-behringer-anima-autonomic-control-plane-02
      14.5. draft-behringer-anima-autonomic-control-plane-03
      14.6. draft-ietf-anima-autonomic-control-plane-00
      14.7. draft-ietf-anima-autonomic-control-plane-01
      14.8. draft-ietf-anima-autonomic-control-plane-02
      14.9. draft-ietf-anima-autonomic-control-plane-03
      14.10. draft-ietf-anima-autonomic-control-plane-04
      14.11. draft-ietf-anima-autonomic-control-plane-05
      14.12. draft-ietf-anima-autonomic-control-plane-06
      14.13. draft-ietf-anima-autonomic-control-plane-07
      14.14. draft-ietf-anima-autonomic-control-plane-08
      14.15. draft-ietf-anima-autonomic-control-plane-09
      14.16. draft-ietf-anima-autonomic-control-plane-10
      14.17. draft-ietf-anima-autonomic-control-plane-11
      14.18. draft-ietf-anima-autonomic-control-plane-12
   15. References
      15.1. Normative References
      15.2. Informative References
   Authors' Addresses

1.  Introduction

   Autonomic Networking is a concept of self-management: Autonomic
   functions self-configure, and negotiate parameters and settings
   across the network.  [RFC7575] defines the fundamental ideas and
   design goals of Autonomic Networking.  A gap analysis of Autonomic
   Networking is given in [RFC7576].  The reference architecture for
   Autonomic Networking in the IETF is currently being defined in the
   document [I-D.ietf-anima-reference-model].

   Autonomic functions need an autonomously built communications
   infrastructure or network plane (there is no well-established name
   for this).  This infrastructure needs to be secure, resilient and
   re-usable by all autonomic functions.  Section 5 of [RFC7575]
   introduces that infrastructure and calls it the "Autonomic Control
   Plane" (ACP).  More descriptively it would be the "Autonomic
   communications infrastructure for Management and Control".  For
   naming consistency with that prior document, this document continues
   to use the name ACP.

   Today, the management and control plane of networks typically runs
   in the global routing table, which is dependent on correct
   configuration and routing.  Misconfigurations or routing problems
   can therefore disrupt management and control channels.
   Traditionally, an out of band network has been used to recover from
   such problems, or personnel are sent on site to access devices
   through console ports (craft ports).  However, both options are
   expensive.
   In increasingly automated networks, either centralized management
   systems or distributed autonomic service agents in the network
   require a control plane that is independent of the configuration of
   the network they manage, to avoid impacting their own operations
   through the configuration actions they take.

   This document describes a modular design for a self-forming, self-
   managing and self-protecting "Autonomic Control Plane" (ACP), which
   is a virtual in-band network designed to be as independent as
   possible of configuration, addressing and routing problems.  The
   details of how this is achieved are defined in Section 6.  The ACP
   is designed to remain operational even in the presence of
   configuration errors, addressing or routing issues, or where policy
   could inadvertently affect the connectivity of both data and control
   packets.

   This document uses the term "data-plane" to refer to anything in the
   network nodes that is not the ACP, and is therefore considered to be
   dependent on (mis-)configuration.  This data-plane includes both the
   traditional forwarding-plane, as well as any pre-existing control-
   plane, such as routing protocols that establish routing tables for
   the forwarding plane.

   The Autonomic Control Plane serves several purposes at the same
   time:

   o  Autonomic functions communicate over the ACP.  The ACP therefore
      directly supports Autonomic Networking functions, as described in
      [I-D.ietf-anima-reference-model].  For example, GRASP
      [I-D.ietf-anima-grasp] runs securely inside the ACP and depends
      on the ACP as its "security and transport substrate".

   o  An operator can use it to log into remote devices, even if the
      network is misconfigured or not configured.

   o  A controller or network management system can use it to securely
      bootstrap network devices in remote locations, even if the
      network in between is not yet configured; no data-plane dependent
      bootstrap configuration is required.  An example of such a secure
      bootstrap process is described in
      [I-D.ietf-anima-bootstrapping-keyinfra].

   This document describes these use cases for the ACP in Section 3 and
   defines the requirements in Section 4.  Section 5 gives an overview
   of how the ACP is constructed, and Section 6 defines the process in
   detail.  Section 7 defines how to support ACP on L2 switches.
   Section 8 explains how non-ACP nodes and networks can be integrated.
   The following sections are non-normative: Section 9 reviews benefits
   of the ACP (after all the details have been defined), and Section 10
   provides additional explanations and describes additional details or
   future work possibilities that were considered not to be appropriate
   for standardization in this document but are nevertheless assumed to
   be helpful for candidate adopters of the ACP.

   The ACP as defined in this document can be implemented and operated
   without dependency on other components of autonomic networks, except
   for the GRASP protocol on which it depends.  The document "Autonomic
   Network Stable Connectivity" [I-D.ietf-anima-stable-connectivity]
   describes how the ACP alone can be used to provide stable
   connectivity for autonomic and non-autonomic OAM applications
   ("Operations Administration and Management").
   It also explains how existing management solutions can leverage the
   ACP in parallel with traditional management models, when to use the
   ACP, and how to integrate IPv4-based management.

   Combining the ACP with BRSKI ("Bootstrapping Remote Secure Key
   Infrastructures", see [I-D.ietf-anima-bootstrapping-keyinfra])
   results in the "Autonomic Network Infrastructure" (ANI) as defined
   in [I-D.ietf-anima-reference-model], which provides autonomic
   connectivity (from ACP) with full secure zero touch bootstrap (from
   BRSKI).  The ANI itself does not constitute an Autonomic Network,
   but it enables building more or less autonomic networks on top of it
   - using either centralized, SDN ("Software Defined Networking", see
   [RFC7426]) style automation or distributed automation via ASAs
   ("Autonomic Service Agents") / "Autonomic Functions" - or a mixture
   of both.  See [I-D.ietf-anima-reference-model] for more information.

2.  Acronyms and Terminology

   In the rest of the document we refer to systems using the ACP as
   "nodes".  Typically such a node is a physical (network equipment)
   device, but it can equally be some virtualized system.  Therefore,
   we do not refer to them as devices unless the context specifically
   calls for a physical system.

   This document introduces or uses the following terms (sorted
   alphabetically).  Terms introduced are explained on first use, so
   this list is for reference only.

   ACP:  "Autonomic Control Plane".  The Autonomic Function defined in
      this document.  It provides secure zero-touch transitive (network
      wide) IPv6 connectivity for all nodes in the same ACP domain.
      The ACP is primarily meant to be used as a component of the ANI
      to enable Autonomic Networks, but it can equally be used in
      simple ANI networks (with no other Autonomic Functions) or
      completely by itself.

   ACP address:  An IPv6 address assigned to the ACP node.  It is
      stored in the domain information field of the ACP domain
      certificate.

   ACP address range/set:  The ACP address may imply a range or set of
      addresses that the node can assign for different purposes.  This
      address range/set is derived by the node from the format of the
      ACP address, called the "addressing sub-scheme".

   ACP connect:  An interface on an ACP node providing access to the
      ACP for non-ACP-capable nodes without using an ACP secure
      channel.  See Section 8.1.1.

   ACP domain:  The ACP domain is the set of nodes with domain
      certificates that allow them to authenticate each other as
      members of the ACP domain.  See Section 6.1.2.

   domain information (field):  An rfc822Name information element
      (field) in the domain certificate in which the ACP-relevant
      information is encoded: the domain name and the ACP address.

   ACP loopback interface:  The interface in the ACP VRF that has the
      ACP address assigned to it.

   ACP network:  The ACP network constitutes all the nodes that have
      access to the ACP.  It is the set of active and transitively
      connected nodes of an ACP domain plus all nodes that get access
      to the ACP of that domain via ACP edge nodes.

   ACP (ULA) prefix(es):  The prefixes routed across the ACP.  In the
      normal/simple case, the ACP has one ULA prefix, see Section 6.10.
      The ACP routing table may include multiple ULA prefixes if the
      "rsub" option is used to create addresses from more than one ULA
      prefix.  See Section 6.1.1.
      The ACP may also include non-ULA prefixes if those are configured
      on ACP connect interfaces.  See Section 8.1.1.

   ACP secure channel:  A security association established hop-by-hop
      between adjacent ACP nodes to carry traffic of the ACP VRF
      separated from data-plane traffic in-band over the same links as
      the data-plane.

   ACP secure channel protocol:  The protocol used to build an ACP
      secure channel, e.g.: IKEv2/IPsec or dTLS.

   ACP virtual interface:  An interface in the ACP VRF mapped to one or
      more ACP secure channels.  See Section 6.12.5.

   AN "Autonomic Network":  A network according to
      [I-D.ietf-anima-reference-model].  Its main components are ANI,
      Autonomic Functions and Intent.

   (AN) Domain Name:  An FQDN (Fully Qualified Domain Name) in the
      domain information field of the Domain Certificate.  See
      Section 6.1.1.

   ANI (nodes/network):  "Autonomic Network Infrastructure".  The ANI
      is the infrastructure to enable Autonomic Networks.  It includes
      ACP, BRSKI and GRASP.  Every Autonomic Network includes the ANI,
      but not every ANI network needs to include autonomic functions
      beyond the ANI (nor Intent).  An ANI network without further
      autonomic functions can for example support secure zero touch
      bootstrap and stable connectivity for SDN networks - see
      [I-D.ietf-anima-stable-connectivity].

   ANIMA:  "Autonomic Networking Integrated Model and Approach".  ACP,
      BRSKI and GRASP are products of the IETF ANIMA working group.

   ASA:  "Autonomic Service Agent".  Autonomic software modules running
      on an ANI device.  The components making up the ANI (BRSKI, ACP,
      GRASP) are also described as ASAs.

   Autonomic Function:  A function/service in an Autonomic Network (AN)
      composed of one or more ASAs across one or more ANI nodes.

   BRSKI:  "Bootstrapping Remote Secure Key Infrastructures"
      ([I-D.ietf-anima-bootstrapping-keyinfra]).  A protocol extending
      EST to enable secure zero touch bootstrap in conjunction with the
      ACP.  ANI nodes use ACP, BRSKI and GRASP.

   data-plane:  The counterpoint to the ACP VRF in an ACP node: all
      VRFs other than the ACP VRF.  In a simple ACP or ANI node, the
      data-plane is typically provisioned non-autonomically, for
      example manually (including across the ACP) or via SDN
      controllers.  In a full Autonomic Network node, the data-plane is
      managed autonomically via Autonomic Functions and Intent.  Note
      that other (non-ANIMA) RFCs use the term data-plane to refer to
      what is better called the forwarding plane.  This is not the way
      the term is used in this document!

   ACP (ANI/AN) Domain Certificate:  A provisioned certificate
      (LDevID) carrying the domain information field, which is used by
      the ACP to learn its address in the ACP and to derive and
      cryptographically assert its membership in the ACP domain.

   device:  A physical system, or physical node.

   Enrollment:  The process where a node presents identification (for
      example through keying material such as the private key of an
      IDevID) to a network and acquires a network specific identity and
      trust anchor such as an LDevID.

   EST:  "Enrollment over Secure Transport" ([RFC7030]).  IETF standard
      protocol for enrollment of a node with an LDevID.  BRSKI is based
      on EST.

   GRASP:  "Generic Autonomic Signaling Protocol".  An extensible
      signaling protocol required by the ACP for ACP neighbor
      discovery.
      The ACP also provides the "security and transport substrate" for
      the "ACP instance of GRASP", which is run inside the ACP to
      support BRSKI and other future Autonomic Functions.  See
      [I-D.ietf-anima-grasp].

   IDevID:  An "Initial Device IDentity" X.509 certificate installed by
      the vendor on new equipment.  It contains information that
      establishes the identity of the node in the context of its
      vendor/manufacturer, such as device model/type and serial number.
      See [AR8021].

   Intent:  The northbound operator- and automation-facing interface of
      an Autonomic Network according to
      [I-D.ietf-anima-reference-model].

   LDevID:  A "Local Device IDentity" is an X.509 certificate installed
      during "enrollment".  The Domain Certificate used by the ACP is
      an LDevID.  See [AR8021].

   MIC:  "Manufacturer Installed Certificate".  Another term, not used
      in this document, to describe an IDevID.

   native interface:  Interfaces existing on a node without
      configuration of the already running node.  On physical nodes
      these are usually physical interfaces; on virtual nodes, their
      equivalent.

   node:  A system supporting the ACP according to this document.  It
      can be virtual or physical.  Physical nodes are called devices.

   RPL:  "IPv6 Routing Protocol for Low-Power and Lossy Networks".  The
      routing protocol used in the ACP.

   MASA (service):  "Manufacturer Authorized Signing Authority".  A
      vendor/manufacturer or delegated cloud service on the Internet
      used as part of the BRSKI protocol.

   sUDI:  "secured Unique Device Identifier".  Another term, not used
      in this document, to refer to an IDevID.

   UDI:  "Unique Device Identifier".  In the context of this document,
      unsecured identity information of a node, typically consisting of
      at least device model/type and serial number, often in a vendor
      specific format.  See sUDI and LDevID.

   ULA:  A "Unique Local Address" (ULA) is an IPv6 address in the block
      fc00::/7, defined in [RFC4193].  It is the approximate IPv6
      counterpart of IPv4 private addresses ([RFC1918]).

   (ACP) VRF:  The ACP is modelled in this document as a "Virtual
      Routing and Forwarding" (VRF) component in a network node.

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "NOT RECOMMENDED", "MAY", and
   "OPTIONAL" in this document are to be interpreted as described in
   [RFC2119] when they appear in ALL CAPS.  When these words are not in
   ALL CAPS (such as "should" or "Should"), they have their usual
   English meanings and are not to be interpreted as [RFC2119] key
   words.

3.  Use Cases for an Autonomic Control Plane

3.1.  An Infrastructure for Autonomic Functions

   Autonomic Functions need a stable infrastructure to run on, and all
   autonomic functions should use the same infrastructure to minimize
   the complexity of the network.  This way, there is a need for only a
   single discovery mechanism, a single security mechanism, and a
   single instance of the other processes that distributed functions
   require.

3.2.  Secure Bootstrap over an Unconfigured Network

   Today, bootstrapping a new node typically requires all nodes between
   a controlling node such as an SDN controller ("Software Defined
   Networking", see [RFC7426]) and the new node to be completely and
   correctly addressed, configured and secured.
   Bootstrapping and configuration of a network happens in rings around
   the controller - configuring each ring of devices before the next
   one can be bootstrapped.  Without console access (for example
   through an out of band network) it is not possible today to make
   devices securely reachable before having configured the entire
   network leading up to them.

   With the ACP, secure bootstrap of new devices can happen without
   requiring any configuration, such as the transit connectivity needed
   to bootstrap further devices.  A new device can automatically be
   bootstrapped in a secure fashion and be deployed with a domain
   certificate.  This does not require any configuration on
   intermediate nodes, because they can communicate zero-touch and
   securely through the ACP.

3.3.  Data-Plane Independent Permanent Reachability

   Today, most critical control plane protocols and network management
   protocols run in the data-plane (global routing table) of the
   network.  This leads to undesirable dependencies between the control
   and management plane on one side and the data-plane on the other:
   only if the data-plane is operational will the other planes work as
   expected.

   Data-plane connectivity can be affected by errors and faults.  For
   example, misconfigurations that make AAA (Authentication,
   Authorization and Accounting) servers unreachable can lock an
   administrator out of a device; routing or addressing issues can make
   a device unreachable; shutting down interfaces over which a current
   management session is running can lock an admin irreversibly out of
   the device.  Traditionally only console access can help recover from
   such issues.

   Data-plane dependencies also affect applications in a NOC ("Network
   Operations Center") such as SDN controller applications: certain
   network changes are today hard to implement, because the change
   itself may affect reachability of the devices.  Examples are address
   or mask changes, routing changes, or security policies.  Today such
   changes require precise hop-by-hop planning.

   The ACP provides reachability that is independent of the data-plane
   (except for the dependency discussed in Section 6.12.2, which can be
   removed through future work), which allows the control plane and
   management plane to operate more robustly:

   o  For management plane protocols, the ACP provides the
      functionality of a "Virtual out-of-band (VooB) channel", by
      providing connectivity to all nodes regardless of their
      configuration or global routing table.

   o  For control plane protocols, the ACP allows their operation even
      when the data-plane is temporarily faulty, or during transitional
      events, such as routing changes, which may affect the control
      plane at least temporarily.  This is specifically important for
      autonomic service agents, which could affect data-plane
      connectivity.

   The document "Autonomic Network Stable Connectivity"
   [I-D.ietf-anima-stable-connectivity] explains these use cases for
   the ACP in significantly more detail and explains how the ACP can be
   used in practical network operations.

4.  Requirements

   The Autonomic Control Plane has the following requirements:

   ACP1:  The ACP SHOULD provide robust connectivity: As far as
          possible, it should be independent of configured addressing,
          configuration and routing.  Requirements 2 and 3 build on
          this requirement, but also have value on their own.
   ACP2:  The ACP MUST have a separate address space from the data-
          plane.  Reason: traceability, debug-ability, separation from
          data-plane, security (can block easily at edge).

   ACP3:  The ACP MUST use an autonomically managed address space.
          Reason: easy bootstrap and setup ("autonomic"); robustness
          (admin can't mess things up so easily).  This document
          suggests using ULA addressing for this purpose ("Unique Local
          Address", see [RFC4193]).

   ACP4:  The ACP MUST be generic, i.e., usable by all the functions
          and protocols of the AN infrastructure.  It MUST NOT be tied
          to a particular application or transport protocol.

   ACP5:  The ACP MUST provide security: Messages coming through the
          ACP MUST be authenticated to be from a trusted node, and
          SHOULD (very strong SHOULD) be encrypted.

   The ACP operates hop-by-hop, because this interaction can be built
   on IPv6 link-local addressing, which is autonomic and has no
   dependency on configuration (requirement 1).  It may be necessary to
   have ACP connectivity across non-ACP nodes, for example to link ACP
   nodes over the general Internet.  This is possible, but introduces a
   dependency on stable/resilient routing over the non-ACP hops (see
   Section 8.2).

5.  Overview

   The Autonomic Control Plane is constructed in the following way (for
   details, see Section 6):

   1.  An ACP node creates a VRF ("Virtual Routing and Forwarding")
       instance, or a similar virtual context.

   2.  It determines, following a policy, a candidate peer list.  This
       is the list of nodes to which it should establish an Autonomic
       Control Plane.  The default policy is: to all link-layer
       adjacent nodes supporting ACP.

   3.  For each node in the candidate peer list, it authenticates that
       node and negotiates a mutually acceptable channel type.

   4.  It then establishes a secure tunnel of the negotiated channel
       type.  These tunnels are placed into the previously set up VRF.
       This creates an overlay network with hop-by-hop tunnels.

   5.  Inside the ACP VRF, each node sets up a loopback interface with
       its ULA IPv6 address.

   6.  Each node runs a lightweight routing protocol to announce
       reachability of the virtual addresses inside the ACP (see
       Section 6.12.5).

   Notes:

   o  Non-autonomic NMS ("Network Management Systems") or SDN
      controllers have to be manually connected into the ACP.

   o  Connecting over non-ACP Layer-3 clouds initially requires a
      tunnel between ACP nodes.

   o  None of the above operations (except the manual ones) is
      reflected in the configuration of the node.

   The following figure illustrates the ACP.

            ACP node 1                          ACP node 2
          ...................               ...................
   secure .                 .    secure     .                 .  secure
   tunnel :  +-----------+  :    tunnel     :  +-----------+  :  tunnel
   ..--------| ACP VRF   |---------------------| ACP VRF   |---------..
          : / \         / \ : <--routing--> : / \         / \ :
          : \ /         \ / :               : \ /         \ / :
   ..--------| loopback  |---------------------| loopback  |---------..
          :  | interface |  :               :  | interface |  :
          :  +-----------+  :               :  +-----------+  :
          :                 :               :                 :
          :   data-plane    :...............:   data-plane    :
          :                 :      link     :                 :
          :.................:               :.................:

                                Figure 1

   The resulting overlay network is normally based exclusively on hop-
   by-hop tunnels.  This is because the addressing used on links is
   IPv6 link-local addressing, which does not require any prior set-up.
   This way the ACP can be built even if there is no configuration on
   the node, or if the data-plane has issues such as addressing or
   routing problems.
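   The construction sequence above can be summarized in code form.  The
   following non-normative Python sketch shows only the control flow;
   every function name (create_vrf(), negotiate_channel_type(), etc.)
   is an illustrative placeholder for the corresponding mechanism
   defined normatively in Section 6, not part of this specification:

      # Non-normative sketch of the ACP construction sequence.
      def build_acp(node):
          vrf = node.create_vrf("ACP")                     # step 1
          for peer in node.candidate_peer_list():          # step 2
              if not node.authenticate(peer):              # step 3
                  continue
              channel = node.negotiate_channel_type(peer)  # step 3
              tunnel = node.establish_secure_tunnel(peer, channel)
              vrf.add_interface(tunnel)                    # step 4
          vrf.add_loopback(node.acp_ula_address())         # step 5
          vrf.start_routing_protocol()                     # step 6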
6.  Self-Creation of an Autonomic Control Plane (ACP) (Normative)

   This section describes the components and steps to set up an
   Autonomic Control Plane (ACP), and highlights the key properties
   which make it "indestructible" against many inadvertent changes to
   the data-plane, for example caused by misconfigurations.

   An ACP node can be a router, switch, controller, NMS host, or any
   other IP capable node.  Initially, it must have a certificate, as
   well as an (empty) ACP Adjacency Table (described in Section 6.2).
   It can then start to discover ACP neighbors and build the ACP.  This
   is described step by step in the following sections:

6.1.  ACP Domain, Certificate and Network

   The ACP relies on group security.  An ACP domain is a group of nodes
   that trust each other to participate in ACP operations.  To
   establish trust, the ACP requires certificates: an ACP node MUST
   have keying material consisting of a certificate (LDevID) with which
   it can cryptographically assert its membership in the ACP domain,
   and trust anchor(s) associated with that certificate with which it
   can verify the membership of other nodes (see Section 6.1.2).  The
   certificate is called the ACP domain certificate; the trust
   anchor(s) are the CA ("Certificate Authority") of the ACP domain.

   The ACP does not mandate specific mechanisms by which this keying
   material is provisioned into the ACP node.  It only requires the
   following ACP specific information field in its own domain
   certificate as well as in those of candidate ACP peers.  See
   Section 10.1 for more information about enrollment or provisioning
   options.

   Note: LDevID ("Local Device IDentification") is the term used to
   indicate a certificate that was provisioned by the owner of a node,
   as opposed to an IDevID ("Initial Device IDentifier") that may have
   been loaded on the node at manufacturing time.  IDevIDs do not
   include the owner and deployment specific information that allows
   autonomic establishment of trust for the operations of an ACP domain
   (e.g.: between two ACP nodes without relying on any third party).

   This document uses the term ACP in many places where its reference
   documents use the word autonomic.  This is done because those
   reference documents consider fully autonomic networks and nodes, but
   support of the ACP does not require support for other components of
   autonomic networks.  Therefore the word autonomic could be
   irritating to operators interested in only the ACP:

   [RFC7575] defines the term "Autonomic Domain" as a collection of
   autonomic nodes.  ACP nodes do not need to be fully autonomic, but
   when they are, the ACP domain is an autonomic domain.  Likewise,
   [I-D.ietf-anima-reference-model] defines the term "Domain
   Certificate" as the certificate used in an autonomic domain.  The
   ACP domain certificate is that domain certificate when ACP nodes are
   (fully) autonomic nodes.  Finally, this document uses the term ACP
   network to refer to the network created by active ACP nodes in an
   ACP domain.  The ACP network itself can extend beyond ACP nodes
   through the mechanisms described in Section 8.1.

   The ACP domain certificate can and should be used for any
   authentication between ACP nodes where the required security is
   domain membership.
   Section 6.1.2 defines this "ACP domain membership check".  The uses
   of this check that are standardized in this document are the
   establishment of ACP secure channels (Section 6.6) and ACP GRASP
   (Section 6.8.2).  Other uses are subject to future work, but it is
   recommended that it be the default security check for any end-to-end
   connections between ASAs.  It is equally usable by other functions
   such as legacy OAM functions.

6.1.1.  Certificate Domain Information Field

   Information about the domain MUST be encoded in the domain
   certificate in a subjectAltName / rfc822Name field according to the
   following ABNF definition ([RFC5234]):

   [RFC Editor: Please substitute SELF in all occurrences of rfcSELF
   with the RFC number assigned to this document and remove this
   comment line]

      domain-information = local-part "@" domain

      local-part = key "." local-info

      key = "rfcSELF"

      local-info = [ acp-address ] [ "+" rsub extensions ]

      acp-address = 32hex-dig

      hex-dig = DIGIT / "a" / "b" / "c" / "d" / "e" / "f"

      rsub = [ domain-name ] ; empty if not used

      domain = domain-name

      routing-subdomain = [ rsub "." ] domain

      domain-name = ; according to section 3.5 of [RFC1034]

      extensions = *( "+" extension )

      extension = ; future definition.  Must fit into [RFC5322]
                  ; simple dot-atom format.

   Example:

      domain-information = rfcSELF+fda379a6f6ee00000200000064000000
                           +area51.research@acp.example.com

      routing-subdomain  = area51.research.acp.example.com

   "acp-address" MUST be the ACP address of the node.  It is optional
   to support variations of the ACP mechanisms, for example other means
   for nodes to assign ACP addresses to themselves.  Such methods are
   subject to future work though.

   Note: "acp-address" cannot use standard IPv6 address formats,
   because it must match the simple dot-atom format of [RFC5322]; ":"
   characters are not allowed in that format.

   "domain" is used to indicate the ACP Domain across which all ACP
   nodes trust each other and are willing to build ACP channels to each
   other.  See Section 6.1.2.  The domain SHOULD be the FQDN of a
   domain owned by the operator assigning the certificate.  This is a
   simple method to ensure that the domain is globally unique and that
   collisions of ACP addresses would therefore only happen due to ULA
   hash collisions.  If the operator does not own any FQDN, it should
   choose a string in FQDN format that is intended to be equally
   unique.

   "routing-subdomain" is the autonomic subdomain that is used to
   calculate the hash for the ULA prefix of the ACP address of the
   node.  "rsub" is optional and should only be used when its impacts
   are understood.  When "rsub" is not used, "routing-subdomain" is the
   same as "domain".

   The optional "extensions" field is used for future extensions to
   this specification.  It MUST be ignored if present and not
   understood.

   Note that the maximum size of "domain-information" is 254 characters
   and the maximum size of the local-part is 64 characters, according
   to [RFC5280], which refers to [RFC2821] (superseded by [RFC5321]).
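   Because ":" is not allowed, the acp-address is simply the 32 hex
   digits (16 bytes) of the IPv6 address.  The following non-normative
   Python sketch converts the acp-address of the example above back
   into standard IPv6 notation:

      import ipaddress

      # Non-normative sketch: recover the standard IPv6 form of an
      # "acp-address" (32 hex digits, no ":" per [RFC5322] dot-atom).
      def acp_address_to_ipv6(acp_address):
          if len(acp_address) != 32:
              raise ValueError("acp-address must be 32 hex digits")
          return ipaddress.IPv6Address(bytes.fromhex(acp_address))

      # The acp-address from the example above:
      addr = acp_address_to_ipv6("fda379a6f6ee00000200000064000000")
      print(addr)                   # fda3:79a6:f6ee:0:200:0:6400:0
      assert addr in ipaddress.ip_network("fc00::/7")  # ULA, [RFC4193]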
   The subjectAltName / rfc822Name encoding of the ACP domain name and
   ACP address is used for the following reasons:

   o  There is a wide range of pre-existing protocols/services where
      authentication with an LDevID is desirable.  Enrolling and
      maintaining separate LDevIDs for each of these protocols/services
      is often undesirable overhead.  Therefore, the information
      element required for the ACP in the domain certificate should be
      encoded in a way that minimizes the possibility of creating
      incompatibilities with such other uses besides the authentication
      for the ACP.

   o  The elements in the LDevID required for the ACP should not cause
      incompatibilities with any pre-existing ASN.1 software
      potentially in use in those other pre-existing SW systems.  This
      eliminates the use of novel information elements, because those
      would require extensions to the pre-existing ASN.1 parsers.

   o  subjectAltName / rfc822Name is a pre-existing element that must
      be supported by all existing ASN.1 parsers for LDevIDs.

   o  The elements in the LDevID required for the ACP should also not
      be misinterpreted by any pre-existing protocol/service that might
      use the LDevID.  If the elements used for the ACP are interpreted
      by other protocols/services, then the impact should be benign.

   o  Using an IP address format encoding could result in non-benign
      misinterpretation of the domain information field; other
      protocols/services unaware of the ACP could try to do something
      with the ACP address that would fail to work correctly.  For
      example, the address could be interpreted to be an address of the
      node in a VRF other than the ACP VRF.

   o  At minimum, both the AN domain name and the non-domain-name-
      derived part of the ACP address need to be encoded in one or more
      appropriate fields of the certificate, so there are not many
      alternatives with pre-existing fields where the only possible
      conflicts would likely be beneficial.

   o  rfc822Name encoding is quite flexible.  We choose to encode both
      the full ACP address and the domain name with sub-part into a
      single rfc822Name information element, so that it is easier to
      examine/use the "domain information field".

   o  The format of the rfc822Name is chosen so that an operator can
      set up a mailbox called rfcSELF@<domain> that would receive
      emails sent towards the rfc822Name of any node inside the domain.
      This is possible because in many modern mail systems, components
      behind a "+" character are considered part of a single mailbox.
      In other words, it is not necessary to set up a separate mailbox
      for every ACP node, but only one for the whole domain.

   o  As a result, if any unexpected use of the ACP addressing
      information in a certificate happens, it is benign and
      detectable: it would be mail to that mailbox.

   See section 4.2.1.6 of [RFC5280] for details on the subjectAltName
   field.

6.1.2.  ACP domain membership check

   The following points constitute the ACP domain membership check:

   o  The peer's certificate is valid, as proven by the security
      association protocol exchange.

   o  The peer's certificate is signed by one of the trust anchors
      associated with the ACP domain certificate.

   o  If the node's certificate indicates a CDP (or OCSP), then the
      peer's certificate must be valid according to those criteria,
      e.g.: an OCSP check across the ACP succeeds, or the certificate
      is not listed in the CRL retrieved from the CDP.

   o  The peer's certificate has a syntactically valid domain
      information field (subjectAltName / rfc822Name), and the domain
      name in that peer's domain information field is the same as in
      this ACP node's certificate.  Note that future Intent rules may
      modify this.  See Section 10.7.
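   Expressed in code, the check could look as follows.  This non-
   normative Python sketch assumes illustrative helper functions
   (chain_is_valid(), has_cdp_or_ocsp(), is_revoked(),
   parse_domain_information()) provided by the node's certificate
   library; only the logic mirrors the rules above:

      # Non-normative sketch of the ACP domain membership check.
      # All helper functions are illustrative placeholders.
      def acp_domain_membership_check(peer_cert, my_cert, trust_anchors):
          # Validity of peer_cert itself is proven by the security
          # association protocol exchange; remaining checks:
          if not chain_is_valid(peer_cert, trust_anchors):
              return False          # not signed by an ACP domain CA
          if has_cdp_or_ocsp(peer_cert) and is_revoked(peer_cert):
              return False          # CRL/OCSP check failed
          peer_info = parse_domain_information(peer_cert)
          my_info = parse_domain_information(my_cert)
          # Same domain name as in this node's own certificate
          # (future Intent rules may modify this, see Section 10.7):
          return peer_info is not None and peer_info.domain == my_info.domain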
6.1.3.  Certificate Maintenance

   ACP nodes MUST support certificate renewal via EST ("Enrollment over
   Secure Transport", see [RFC7030]) and MAY support other mechanisms.
   An ACP network MUST have at least one ACP node supporting EST server
   functionality across the ACP so that EST renewal is usable.  The
   mechanism by which the domain certificate was initially provisioned
   SHOULD provide a way to store the URL of one EST server, with its
   ACP address, on the node for later renewal.  This server does not
   have to be the same as the one performing the initial certificate
   enrollment.

   ACP nodes that are EST servers MUST announce their service via GRASP
   in the ACP through M_FLOOD messages:

   Example:

      [M_FLOOD, 12340815, h'fda379a6f6ee00000200000064000001', 210000,
          ["SRV.est", 4, 255, "EST-TLS"],
          [O_IPv6_LOCATOR,
               h'fda379a6f6ee00000200000064000001', TCP, 80]
      ]

   The formal CDDL definition is:

      flood-message = [M_FLOOD, session-id, initiator, ttl,
                       +[objective, (locator-option / [])]]

      objective = ["SRV.est", objective-flags, loop-count,
                   objective-value]

      objective-flags = sync-only ; as in the GRASP specification
      sync-only       = 4         ; M_FLOOD only requires
                                  ; synchronization
      loop-count      = 255       ; recommended
      objective-value = text      ; name of the (list of) supported
                                  ; protocols: "EST-TLS" for RFC7030.

   The objective value "SRV.est" indicates that the objective is an
   [RFC7030] compliant EST server.

   The M_FLOOD message MUST be sent periodically.  The default period
   SHOULD be 60 seconds, and the value SHOULD be operator configurable.
   It must be high enough that the aggregate amount of periodic
   M_FLOODs from all flooded objectives causes only negligible traffic
   across the ACP.  The ttl parameter SHOULD be 3.5 times the period,
   so that up to three consecutive messages can be dropped before an
   announcement is considered expired.  In the example above, the ttl
   is 210000 msec, i.e., 3.5 times 60 seconds.

   Domain certificates SHOULD by default be renewed 50% into their
   lifetime.  When performing renewal, the node SHOULD attempt to
   connect to the remembered EST server.  If that fails, it SHOULD
   attempt to connect to EST server(s) learned via GRASP.  The server
   with which certificate renewal succeeds SHOULD be remembered for the
   next renewal.

   Remembering the last renewal server and preferring it provides
   stickiness, which can help diagnostics.  It also provides some
   protection against off-path compromised ACP members announcing bogus
   information into GRASP.

   The ACP node MUST support CRLs ("Certificate Revocation Lists")
   retrieved via HTTPs from one or more CDPs ("CRL Distribution
   Points").  These CDPs MUST be indicated in the Domain Certificate
   when used.  If the CDP URL uses an IPv6 ULA, the ACP node will try
   to reach it via the ACP.  In that case, the ACP address in the
   domain certificate of the CDP, as learned by the ACP node during the
   HTTPs TLS handshake, SHOULD match the ULA address in the HTTPs URL.

   Renewal of certificates SHOULD start after less than 50% of the
   domain certificate lifetime, so that network operations has ample
   time to investigate and resolve any problems that cause a node to
   not renew its domain certificate in time - and to allow prolonged
   periods of running parts of a network disconnected from any CA.
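   The renewal behavior described above can be summarized as follows.
   This non-normative Python sketch uses illustrative placeholders
   (est_renew(), grasp_discovered()) rather than a real EST client:

      # Non-normative sketch: renew at 50% of the certificate
      # lifetime, preferring the EST server that performed the last
      # successful renewal ("stickiness").
      def renewal_time(not_before, not_after):
          return not_before + 0.5 * (not_after - not_before)

      def renew_certificate(node):
          candidates = [node.last_est_server] if node.last_est_server else []
          candidates += node.grasp_discovered("SRV.est")  # via M_FLOOD
          for server in candidates:
              if est_renew(server, node.domain_certificate):
                  node.last_est_server = server           # stickiness
                  return True
          return False                   # retry well before expiry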
   Certificate lifetimes should be set to be as short as feasible.
   Given how certificate renewal is fully automated via ACP and EST,
   the primary limiting factor for shorter certificate lifetimes (than
   the typical one year) is the load on the EST server(s) and CA.  It
   is therefore recommended that ACP domain certificates are managed
   via a CA chain where the assigning CA has enough performance to
   manage short-lived certificates.

   See Section 10.1 for further optimizations of certificate
   maintenance when BRSKI can be used ("Bootstrapping Remote Secure Key
   Infrastructures", see [I-D.ietf-anima-bootstrapping-keyinfra]).

6.2.  ACP Adjacency Table

   To know to which nodes to establish an ACP channel, every ACP node
   maintains an adjacency table.  The adjacency table contains
   information about adjacent ACP nodes, at a minimum: node-ID, link-
   local IPv6 address (discovered by GRASP as explained below), domain,
   and certificate.  An ACP node MUST keep this adjacency table up to
   date.  This table is used to determine to which neighbor an ACP
   connection is established.

   Where the next ACP node is not directly adjacent, the information in
   the adjacency table can be supplemented by configuration.  For
   example, the node-ID and IP address could be configured.

   The adjacency table MAY contain information about the validity and
   trust status of the adjacent ACP node's certificate.  However,
   subsequent steps MUST always start with authenticating the peer.

   The adjacency table contains information about adjacent ACP nodes in
   general, independently of their domain and trust status.  The next
   step determines to which of those ACP nodes an ACP connection should
   be established.

   Interaction between the ACP and other autonomic elements like GRASP
   (see below) or ASAs should be via an API that allows (appropriately
   access controlled) read/write access to the ACP Adjacency Table.
   Specification of such an API is subject to future work.

6.3.  Neighbor Discovery with DULL GRASP

   The ACP uses one instance of DULL GRASP (see section 3.5.2.2 of
   [I-D.ietf-anima-grasp] for its formal definition) for every physical
   L2 subnet of the ACP node to discover physically adjacent candidate
   ACP neighbors.  Native interfaces (e.g.: physical interfaces on
   physical nodes) SHOULD be brought up automatically far enough that
   ACP discovery can be performed, and any native interfaces with ACP
   neighbors can then be brought into the ACP even if the interface is
   otherwise not configured.  Reception of packets on such otherwise
   not configured interfaces MUST be limited so that at first only IPv6
   link-local address assignment (SLAAC) and DULL GRASP work, and then
   only the following ACP secure channel setup packets - but no other
   unnecessary traffic (e.g.: no other link-local IPv6 transport stack
   responders).

   Note that the use of the IPv6 link-local multicast address
   (ALL_GRASP_NEIGHBORS) implies the need to use MLD ([RFC3810]) to
   announce the desire to receive packets for that address.  Otherwise
   DULL GRASP could fail to operate correctly in the presence of MLD-
   snooping, non-ACP-enabled L2 switches, because those would stop
   forwarding DULL GRASP packets.  Switches not supporting MLD snooping
   simply need to operate as pure L2 bridges for IPv6 multicast packets
   for DULL GRASP to work.
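   For example, on a POSIX system a DULL GRASP instance would join the
   group roughly as follows.  This non-normative Python sketch assumes
   the ALL_GRASP_NEIGHBORS address ff02::13 and the GRASP port 7017
   from [I-D.ietf-anima-grasp]:

      import socket
      import struct

      ALL_GRASP_NEIGHBORS = "ff02::13"  # per [I-D.ietf-anima-grasp]
      GRASP_PORT = 7017                 # per [I-D.ietf-anima-grasp]

      def join_dull_grasp(ifindex):
          s = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM)
          s.bind(("::", GRASP_PORT))
          mreq = (socket.inet_pton(socket.AF_INET6, ALL_GRASP_NEIGHBORS)
                  + struct.pack("@I", ifindex))
          # Joining the group triggers the MLD report discussed above:
          s.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_JOIN_GROUP, mreq)
          return s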
In particular, ACP discovery MUST NOT run inside the ACP 993 across ACP virtual interfaces. See Section 10.3 for further, non- 994 normative suggestions how to enable/disable ACP at node and interface 995 level. See Section 8.2.2 for more details about tunnels (typical 996 non-native interfaces). See Section 7 for how ACP should be extended 997 on devices operating (also) as L2 bridges. 999 Note: If an ACP node also implements BRSKI (see Section 10.1) then 1000 the above considerations also apply to discovery for BRSKI. Each 1001 DULL instance of GRASP set up for ACP is then also used for the 1002 discovery of a bootstrap proxy via BRSKI when the node does not have 1003 a domain certificate. Discovery of ACP neighbors happens only when 1004 the node does have the certificate. The node therefore never needs 1005 to discover both a bootstrap proxy and ACP neighbor at the same time. 1007 An ACP node announces itself to potential ACP peers by use of the 1008 "AN_ACP" objective. This is a synchronization objective intended to 1009 be flooded on a single link using the GRASP Flood Synchronization 1010 (M_FLOOD) message. In accordance with the design of the Flood 1011 message, a locator consisting of a specific link-local IP address, IP 1012 protocol number and port number will be distributed with the flooded 1013 objective. An example of the message is informally: 1015 Example: 1017 [M_FLOOD, 12340815, h'fe80000000000000c0011001FEEF0000, 180000, 1018 ["AN_ACP", 4, 1, "IKEv2"], 1019 [O_IPv6_LOCATOR, 1020 h'fe80000000000000c0011001FEEF0000, UDP, 15000] 1021 ] 1023 The formal CDDL definition is: 1025 flood-message = [M_FLOOD, session-id, initiator, ttl, 1026 +[objective, (locator-option / [])]] 1028 objective = ["AN_ACP", objective-flags, loop-count, 1029 objective-value] 1031 objective-flags = sync-only ; as in the GRASP specification 1032 sync-only = 4 ; M_FLOOD only requires synchronization 1033 loop-count = 1 ; limit to link-local operation 1034 objective-value = text ; name of the (list of) secure 1035 ; channel negotiation protocol(s) 1037 The objective-flags field is set to indicate synchronization. 1039 The loop-count is fixed at 1 since this is a link-local operation. 1041 In the above (recommended) example the period of sending of the 1042 objective could be 60 seconds the indicated ttl of 180000 msec means 1043 that the objective would be cached by ACP nodes even when two out of 1044 three messages are dropped in transit. 1046 The session-id is a random number used for loop prevention 1047 (distinguishing a message from a prior instance of the same message). 1048 In DULL this field is irrelevant but must still be set according to 1049 the GRASP specification. 1051 The originator MUST be the IPv6 link local address of the originating 1052 ACP node on the sending interface. 1054 The 'objective-value' parameter is (normally) a string indicating the 1055 secure channel protocol available at the specified or implied 1056 locator. 1058 The locator is optional and only required when the secure channel 1059 protocol is not offered at a well-defined port number, or if there is 1060 no well-defined port number. "IKEv2" is the abbreviation for 1061 "Internet Key Exchange protocol version 2", as defined in [RFC7296]. 1062 It is the main protocol used by the Internet IP security architecture 1063 ("IPsec", see [RFC4301]). We therefore use the term "IKEv2" and not 1064 "IPsec" in the GRASP definitions below and example above. 
"IKEv2" 1065 has a well-defined port number 500, but in the above example, the 1066 candidate ACP neighbor is offering ACP secure channel negotiation via 1067 IKEv2 on port 15000 (for the sake of creating a non-standard 1068 example). 1070 If a locator is included, it MUST be an O_IPv6_LOCATOR, and the IPv6 1071 address MUST be the same as the initiator address (these are DULL 1072 requirements to minimize third party DoS attacks). 1074 The secure channel methods defined in this document use the objective 1075 values of "IKEv2" and "dTLS". There is no distinction between IKEv2 1076 native and GRE-IKEv2 because this is purely negotiated via IKEv2. 1078 A node that supports more than one secure channel protocol needs to 1079 flood multiple versions of the "AN_ACP" objective, each accompanied 1080 by its own locator. This can be in a single GRASP M_FLOOD message. 1082 If multiple secure channel protocols are supported that all are run 1083 on well-defined ports, then they can be announced via a single AN_ACP 1084 objective using a list of string names as the objective value without 1085 a following locator-option. 1087 Note that a node serving both as an ACP node and BRSKI Join Proxy may 1088 choose to distribute the "AN_ACP" objective and the respective BRSKI 1089 in the same M_FLOOD message, since GRASP allows multiple objectives 1090 in one message. This may be impractical though if ACP and BRSKI 1091 operations are implemented via separate software modules / ASAs. 1093 The result of the discovery is the IPv6 link-local address of the 1094 neighbor as well as its supported secure channel protocols (and non- 1095 standard port they are running on). It is stored in the ACP 1096 Adjacency Table, see Section 6.2 which then drives the further 1097 building of the ACP to that neighbor. 1099 6.4. Candidate ACP Neighbor Selection 1101 An ACP node must determine to which other ACP nodes in the adjacency 1102 table it should build an ACP connection. This is based on the 1103 information in the ACP Adjacency table. 1105 The ACP is by default established exclusively between nodes in the 1106 same domain. This includes all routing subdomains. Section 10.7 1107 explains how ACP connections across multiple routing subdomains are 1108 special. 1110 Future extensions to this document including Intent can change this 1111 default behavior. Examples include: 1113 o Build the ACP across all domains that have a common parent domain. 1114 For example ACP nodes with domain "example.com", nodes of 1115 "example.com", "access.example.com", "core.example.com" and 1116 "city.core.example.com" could all establish one single ACP. 1118 o ACP connections across domains with different CA (certificate 1119 authorities) could establish a common ACP by installing the 1120 alternate domains' CA into the trusted anchor store. This is an 1121 executive management action that could easily be accomplished 1122 through the control channel created by the ACP. 1124 Since Intent is transported over the ACP, the first ACP connection a 1125 node establishes is always following the default behavior. See 1126 Section 10.7 for more details. 1128 The result of the candidate ACP neighbor selection process is a list 1129 of adjacent or configured autonomic neighbors to which an ACP channel 1130 should be established. The next step begins that channel 1131 establishment. 1133 6.5. Channel Selection 1135 To avoid attacks, initial discovery of candidate ACP peers cannot 1136 include any non-protected negotiation. 
To avoid re-inventing and validating security association mechanisms, the next step after discovering the address of a candidate neighbor can only be to try first to establish a security association with that neighbor using a well-known security association method.

At this time in the lifecycle of ACP nodes, it is unclear whether it is feasible to even decide on a single MTI (mandatory to implement) security association protocol across all ACP nodes:

From the use cases it seems clear that not all types of ACP nodes can or need to connect directly to each other, or are able to support or prefer all possible mechanisms.  For example, code space limited IoT devices may only support dTLS ("datagram Transport Layer Security version 1.2", see [RFC6347]) because that code exists already on them for end-to-end security, but low-end in-ceiling L2 switches may only want to support MACsec because that is also supported in their chips.  Only a flexible gateway device may need to support both of these mechanisms and potentially more.

To support extensible secure channel protocol selection without a single common MTI protocol, an ACP node must try all the ACP secure channel protocols it supports and that are feasible because the candidate ACP neighbor also announced them via its AN_ACP GRASP parameters (these are called the "feasible" ACP secure channel protocols).

To ensure that the selection of the secure channel protocols always succeeds in a predictable fashion without blocking, the following rules apply:

An ACP node may choose to attempt to initiate the different feasible ACP secure channel protocols it supports according to its local policies sequentially or in parallel, but it MUST support acting as a responder to all of them in parallel.

Once the first secure channel protocol succeeds, the two peers know each other's certificates, because certificates must be used by all secure channel protocols for mutual authentication.  The node with the lower Node-ID in the ACP address of its certificate becomes Bob, the one with the higher Node-ID becomes Alice.

Bob becomes passive: he does not attempt to further initiate ACP secure channel protocols with Alice and does not consider it to be an error when Alice closes secure channels.  Alice becomes the active party and continues to attempt setting up secure channel protocols with Bob until she arrives at the best one from her view that also works with Bob.

For example, originally Bob could have been the initiator of one ACP secure channel protocol that Bob prefers, and the security association succeeded.  The roles of Bob and Alice are then assigned.  At this stage, the protocol may not even have completed negotiating a common security profile.  The protocol could, for example, have been IPsec via IKEv2 ("IP security", see [RFC4301], and "Internet Key Exchange protocol version 2", see [RFC7296]).  It is now up to Alice to decide how to proceed.  Even if the IPsec connection determined a working profile with Bob, Alice might prefer some other secure protocol (e.g.: dTLS) and try to set that up with Bob.  If that succeeds, she would close the IPsec connection.  If no better protocol attempt succeeds, she would keep the IPsec connection.
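A minimal sketch of these rules (in Python; the node and channel objects and their helpers such as "preferred_first" and "try_setup" are hypothetical illustrations, not names from this document):

   # Role assignment per the rules above: once any secure channel
   # succeeds, compare the Node-IDs from the two certificates.
   def assign_roles(my_node_id, peer_node_id):
       # Lower Node-ID becomes Bob (passive), higher becomes Alice.
       return "Bob" if my_node_id < peer_node_id else "Alice"

   def continue_as_alice(node, peer, current_channel):
       # Alice keeps trying protocols she prefers over the current
       # one; Bob simply keeps the first channel and initiates
       # nothing further.
       for proto in node.preferred_first(peer.announced_protocols):
           if proto == current_channel.protocol:
               break                        # nothing better left to try
           better = node.try_setup(peer, proto)   # hypothetical helper
           if better is not None:
               current_channel.close()      # Bob must tolerate this
               return better
       return current_channel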
All this negotiation is in the context of an "L2 interface".  Alice and Bob will build ACP connections to each other on every "L2 interface" that they both connect to.  An autonomic node must not assume that neighbors with the same L2 or link-local IPv6 addresses on different L2 interfaces are the same node.  This can only be determined after examining the certificate after a successful security association attempt.

6.6.  Candidate ACP Neighbor verification

Independent of the security association protocol chosen, candidate ACP neighbors need to be authenticated based on their domain certificate.  This implies that any secure channel protocol MUST support certificate based authentication that can support the ACP domain membership check as defined in Section 6.1.2.  If the check fails, the connection attempt is aborted and an error logged (with throttling).

6.7.  Security Association protocols

The following sections define the security association protocols that we consider to be important and feasible to specify in this document:

6.7.1.  ACP via IKEv2

An ACP node announces its ability to support IKEv2 as the ACP secure channel protocol in GRASP as "IKEv2".

6.7.1.1.  Native IPsec

To run ACP via IPsec natively, no further IANA assignments/definitions are required.  An ACP node supporting native IPsec MUST use IPsec security setup via IKEv2, tunnel mode, local and peer link-local IPv6 addresses used for encapsulation, and ESP with AES256 for encryption and SHA256 as hash.

In terms of IKEv2, this means the initiator will offer to support IPsec tunnel mode with next protocol equal to 41 (IPv6).

IPsec tunnel mode is required because the ACP will route/forward packets received from any other ACP node across the ACP secure channels, and not only its own generated ACP packets.  With IPsec transport mode, it would only be possible to send packets originated by the ACP node itself.

ESP is used because ACP mandates the use of encryption for ACP secure channels.

6.7.1.2.  IPsec with GRE encapsulation

In network devices it is often more common to implement high performance virtual interfaces on top of GRE encapsulation than on top of a "native" IPsec association (without any other encapsulation than those defined by IPsec).  On those devices it may be beneficial to run the ACP secure channel on top of GRE protected by the IPsec association.

To run ACP via GRE/IPsec, no further IANA assignments/definitions are required.  The ACP node MUST support IPsec security setup via IKEv2, IPsec transport mode, local and peer link-local IPv6 addresses used for encapsulation, and ESP with AES256 encryption and SHA256 as hash.

When GRE is used, transport mode is sufficient because the routed ACP packets are not "tunneled" by IPsec but rather by GRE: IPsec only has to deal with the GRE/IP packet, which always uses the local and peer link-local IPv6 addresses and is therefore applicable to transport mode.

ESP is used because ACP mandates the use of encryption for ACP secure channels.

In terms of IKEv2 negotiation, this means the initiator must offer to support IPsec transport mode with next protocol equal to GRE (47), followed by the offer for native IPsec as described above (because that option is mandatory to support).

If IKEv2 initiator and responder support GRE, it will be selected.  The version of GRE to be used must be according to [RFC7676].
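Purely as an illustration (a non-normative summary, not an implementable IKEv2 profile; the dictionary keys are descriptive labels chosen here), the parameters of the two variants can be collected as follows:

   # Normative ACP-over-IPsec parameters from the sections above.
   NATIVE_IPSEC = {
       "keying":        "IKEv2",
       "mode":          "tunnel",    # ACP forwards other nodes' packets
       "next_protocol": 41,          # IPv6
       "encapsulation": "link-local IPv6 addresses",
       "encryption":    "ESP AES256",
       "hash":          "SHA256",
   }

   GRE_IPSEC = dict(NATIVE_IPSEC,
       mode="transport",             # GRE does the tunneling instead
       next_protocol=47,             # GRE
   )

   # An initiator supporting GRE offers both, GRE first; native
   # IPsec remains mandatory to support.
   IKEV2_OFFERS = [GRE_IPSEC, NATIVE_IPSEC]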
6.7.2.  ACP via dTLS

We define the use of ACP via dTLS on the assumption that it is likely the first transport encryption code basis supported in some classes of constrained devices.

To run ACP via UDP and dTLS v1.2 [RFC6347], a locally assigned UDP port is used that is announced as a parameter in the GRASP AN_ACP objective to candidate neighbors.  All ACP nodes supporting dTLS as a secure channel protocol MUST support AES256 encryption and MUST NOT permit weaker crypto options.

There is no additional session setup or other security association besides this simple dTLS setup.  As soon as the dTLS session is functional, the ACP peers will exchange ACP IPv6 packets as the payload of the dTLS transport connection.  Any dTLS defined security association mechanisms such as re-keying are used as they would be for any transport application relying solely on dTLS.

6.7.3.  ACP Secure Channel Requirements

A baseline ACP node MUST support IPsec natively and MAY support IPsec via GRE.  A constrained ACP node MUST support dTLS.  ACP nodes connecting constrained areas with baseline areas MUST therefore support IPsec and dTLS.

ACP nodes need to specify in documentation the set of secure ACP mechanisms they support.

An ACP secure channel MUST immediately be terminated when the lifetime of any certificate in the chain used to authenticate the neighbor expires or becomes revoked.  Note that this is not standard behavior in secure channel protocols such as IPsec, because the certificate authentication only influences the setup of the secure channel in these protocols.
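Enforcing this typically requires a hook outside the secure channel protocol itself.  A minimal sketch, assuming hypothetical channel helpers ("peer_chain", "peer_revoked", "terminate") and using the Python "cryptography" package to read certificate lifetimes:

   import datetime
   from cryptography import x509

   def earliest_expiry(chain_pems):
       """Earliest notAfter across the neighbor's certificate chain
       (chain_pems: list of single-certificate PEM byte strings)."""
       certs = [x509.load_pem_x509_certificate(p) for p in chain_pems]
       return min(c.not_valid_after for c in certs)

   def enforce_lifetime(channel):
       # Secure channel protocols such as IPsec check certificates
       # only at setup time; the ACP must tear the channel down
       # itself on expiry or revocation.
       now = datetime.datetime.utcnow()
       if (now >= earliest_expiry(channel.peer_chain)
               or channel.peer_revoked()):
           channel.terminate()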
6.8.  GRASP in the ACP

6.8.1.  GRASP as a core service of the ACP

The ACP MUST run an instance of GRASP inside of it.  It is a key part of the ACP services.  The function in GRASP that makes it fundamental as a service is the ability for ACP-wide service discovery (services are called objectives in GRASP).  In most other solution designs, such distributed discovery does not exist at all, or was added as an afterthought and relied upon inconsistently.

ACP provides IP unicast routing via the RPL routing protocol (described below).

The ACP does not use IP multicast routing nor does it provide generic IP multicast services.  Instead, the ACP provides service discovery via the objective discovery/announcement and negotiation mechanisms of the ACP GRASP instance (services are a form of objectives).  These mechanisms use hop-by-hop reliable flooding of GRASP messages for both service discovery (GRASP M_DISCOVERY messages) and service announcement (GRASP M_FLOOD messages).

IP multicast is not used by the ACP because the ANI (Autonomic Networking Infrastructure) itself does not require IP multicast but only service announcement/discovery.  Using IP multicast for that would have made it necessary to develop a zero-touch autoconfiguring solution for ASM (Any Source Multicast - the original form of IP multicast defined in [RFC1112]), which would be quite complex and difficult to justify.  One aspect of complexity that has never been attempted to be solved in IETF documents is the automatic selection of routers that should be PIM-SM rendezvous points (RPs) (see [RFC7761]).  The other aspects of complexity are the implementation of MLD ([RFC4604]), PIM-SM and Anycast-RP (see [RFC4610]).  If those implementations already exist in a product, then they would very likely be tied to accelerated forwarding, which consumes hardware resources, and that in return is difficult to justify as a cost of performing only service discovery.

Future ASAs may need high performance in-network data replication.  That is the case when the use of IP multicast is justified.  These ASAs can then use service discovery from ACP GRASP, and then they do not need ASM but only SSM (Source Specific Multicast, see [RFC4607]) for the IP multicast replication.  SSM itself can simply be enabled in the data-plane (or even in an update to the ACP) without any other configuration than just enabling it on all nodes, and it only requires a simpler version of MLD (see [RFC5790]).

LSP (Link State Protocol) based IGP routing protocols typically have a mechanism to flood information, and such a mechanism could be used to flood GRASP objectives by defining them to be information of that IGP.  This would be a possible optimization in future variations of the ACP that do use an LSP routing protocol.  Note though that such a mechanism would not work easily for GRASP M_DISCOVERY messages, which are constrained flooded only up to a node where a responder is found.  We do expect that many future services for ASAs will have only few consuming ASAs, and for those cases, M_DISCOVERY is the more efficient method than flooding across the whole domain.

Because the ACP uses RPL, one desirable future extension is to use RPL's existing notion of loop-free distribution trees (DODAG) to make GRASP's flooding more efficient both for M_FLOOD and M_DISCOVERY.  See Section 6.12.5 for how this will be specifically beneficial when using NBMA interfaces.  This is not currently specified in this document because it is not quite clear yet what exactly the implications are of making GRASP flooding depend on RPL DODAG convergence and how difficult it would be to let GRASP flooding access the DODAG information.

6.8.2.  ACP as the Security and Transport substrate for GRASP

In the terminology of GRASP ([I-D.ietf-anima-grasp]), the ACP is the security and transport substrate for the GRASP instance run inside the ACP ("ACP GRASP").

This means that the ACP is responsible for ensuring that this instance of GRASP is only sending messages across the ACP GRASP virtual interfaces.  Whenever the ACP adds or deletes such an interface because of new ACP secure channels or loss thereof, the ACP needs to indicate this to the ACP instance of GRASP.  The ACP exists also in the absence of any active ACP neighbors.  It is created when the node has a domain certificate, and in this case ASAs using GRASP running on the same node would still need to be able to discover each other's objectives.  When the ACP does not exist, ASAs leveraging the ACP instance of GRASP via APIs MUST still be able to operate, and MUST be able to understand that there is no ACP and that therefore the ACP instance of GRASP can not operate.

The way ACP acts as the security and transport substrate for GRASP is visualized in the following picture:

[RFC Editor: please try to put the following picture on a single page and remove this note.  We cannot figure out how to do this with XML.  The picture does fit on a single page.]
     ACP:
     ..............................................................
     .                                                            .
     .        /-GRASP-flooding-\         ACP GRASP instance       .
     .       /                  \                                 .
     .    GRASP      GRASP     GRASP                              .
     . link-local   unicast  link-local                           .
     . multicast   messages  multicast                            .
     .  messages      |       messages                            .
     .     |          |          |                                .
     ..............................................................
     .     v          v          v   ACP security and transport   .
     .     |          |          |   substrate for GRASP          .
     .     |          |          |                                .
     .     |       ACP GRASP     |  - ACP GRASP                   .
     .     |       loopback      |    loopback interface          .
     .     |       interface     |  - AN-cert auth                .
     .     |         TLS         |                                .
     .  ACP GRASP     |      ACP GRASP  - ACP GRASP virtual       .
     .   subnet1      |       subnet2     virtual interfaces      .
     .     TCP        |         TCP                               .
     .     |          |          |                                .
     ..............................................................
     .     |          |          |  ^^^ Users of ACP (GRASP/ASA)  .
     .     |          |          |  ACP interfaces/addressing     .
     .     |          |          |                                .
     .     |          |          |                                .
     .     | ACP-loopback Interf.|  <- ACP loopback interface     .
     .     |    ACP-address      |   - address (global ULA)       .
     .  subnet1       |       subnet2  <- ACP virtual interfaces  .
     . link-local     |     link-local  - link-local addresses    .
     ..............................................................
     .     |          |          |  ACP routing and forwarding    .
     .     |     RPL-routing     |                                .
     .     |   /IP-Forwarding\   |                                .
     .     |  /               \  |                                .
     .  ACP IPv6 packets  ACP IPv6 packets                        .
     .     |/                   \|                                .
     .  IPsec/dTLS         IPsec/dTLS  - AN-cert auth             .
     ..............................................................
           |                    |      data-plane
           |                    |
           |                    |    - ACP secure channel
       link-local          link-local  - encap addresses
        subnet1             subnet2    - data-plane interfaces
           |                    |
        ACP-Nbr1             ACP-Nbr2

                            Figure 2

GRASP unicast messages inside the ACP always use the ACP address.  Link-local ACP addresses must not be used inside objectives.  GRASP unicast messages inside the ACP are transported via TLS 1.2 ([RFC5246]) connections with AES256 encryption and SHA256.  Mutual authentication uses the ACP domain membership check defined in Section 6.1.2.

GRASP link-local multicast messages are targeted for a specific ACP virtual interface (as defined in Section 6.12.5) but are sent by the ACP into an equally built ACP GRASP virtual interface constructed from the TCP connection(s) to the IPv6 link-local neighbor address(es) on the underlying ACP virtual interface.  If the ACP GRASP virtual interface has two or more neighbors, the GRASP link-local multicast messages are replicated to all neighbor TCP connections.

TLS and TCP connections for GRASP in the ACP use the IANA assigned TCP port for GRASP (7107).  Effectively, the transport stack is expected to be TLS for connections from/to the ACP address (e.g.: global scope address(es)) and TCP for connections from/to link-local addresses on the ACP virtual interfaces.  The latter ones are only used for flooding of GRASP messages.
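A sketch of this transport selection (Python; the connection helpers on the node object are illustrative, and the port number is the one quoted above):

   import ipaddress

   GRASP_TCP_PORT = 7107   # TCP port for GRASP per this document

   def open_grasp_transport(dst, node):
       """Pick the GRASP transport inside the ACP for 'dst'."""
       if ipaddress.IPv6Address(dst).is_link_local:
           # Link-local peers: plain TCP, used only to make GRASP
           # link-local multicast flooding hop-by-hop reliable.
           return node.tcp_connect(dst, GRASP_TCP_PORT)
       # ACP (ULA) addresses: TLS 1.2 with AES256/SHA256, mutually
       # authenticated via the ACP domain membership check.
       return node.tls_connect(dst, GRASP_TCP_PORT,
                               verify=node.acp_domain_membership_check)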
6.8.2.1.  Discussion

TCP encapsulation for GRASP M_DISCOVERY and M_FLOOD link-local messages is used because these messages are flooded across potentially many hops to all ACP nodes, and a single link with even temporary packet loss issues (e.g.: WiFi/Powerline link) can reduce the probability of loss-free transmission so much that applications would want to increase the frequency with which they send these messages.  This would result in more traffic flooding than the hop-by-hop reliable retransmission provided by TCP.

TLS is mandated for GRASP non-link-local unicast because the ACP secure channel mandatory authentication and encryption protects only against attacks from the outside, but not against attacks from the inside: compromised ACP members that have not (yet) been detected and removed (e.g.: via domain certificate revocation / expiry).

If GRASP peer connections were to use just TCP, compromised ACP members could simply eavesdrop passively on GRASP peer connections for which they are on-path ("Man In The Middle" - MITM), or intercept and modify them.  With TLS, it is not possible to completely eliminate problems with compromised ACP members, but attacks become a lot more complex:

Eavesdropping/spoofing by a compromised ACP node is still possible because in the model of the ACP and GRASP, the provider and consumer of an objective initially have no unique information (such as an identity) about the other side which would allow them to distinguish a benevolent from a compromised peer.  The compromised ACP node would simply announce the objective as well, potentially filter the original objective in GRASP when it is a MITM, and act as an application level proxy.  This of course requires that the compromised ACP node understands the semantics of the GRASP negotiation to an extent that allows it to proxy it without being detected, but in an AN environment this is quite likely public knowledge or even standardized.

The GRASP TLS connections are run like any other ACP traffic through the ACP secure channels.  This leads to double authentication/encryption.  Future optimization work could avoid this, but it is unclear how beneficial/feasible this is:

o  The security considerations for GRASP change with respect to attacks from non-ACP (e.g.: "outside") nodes: TLS is subject to reset attacks while secure channel protocols may not be (e.g.: IPsec is not).

o  The secure channel method may leverage hardware acceleration and there may be little or no gain in eliminating it.

o  The GRASP TLS connections need to implement any additional security options that are required for secure channels.  For example, the closing of connections when the peer's certificate has expired.

6.9.  Context Separation

The ACP is in a separate context from the normal data-plane of the node.  This context includes the ACP channels' IPv6 forwarding and routing as well as any required higher layer ACP functions.

In classical network systems, a dedicated so-called "Virtual routing and forwarding instance" (VRF) is one logical implementation option for the ACP.  If possible by the system's software architecture, separation options that minimize shared components are preferred, such as a logical container or virtual machine instance.  The context for the ACP needs to be established automatically during bootstrap of a node.  As much as possible it should be protected from being modified unintentionally by ("data-plane") configuration.

Context separation improves security, because the ACP is not reachable from the global routing table.  Also, configuration errors from the data-plane setup do not affect the ACP.
6.10.  Addressing inside the ACP

The channels explained above typically only establish communication between two adjacent nodes.  In order for communication to happen across multiple hops, the autonomic control plane requires ACP network wide valid addresses and routing.  Each ACP node must create a loopback interface with an ACP network wide unique address inside the ACP context (as explained in Section 6.9).  This address may also be used in other virtual contexts.

With the algorithm introduced here, all ACP nodes in the same routing subdomain have the same /48 ULA global ID prefix.  Conversely, ULA global IDs from different domains are unlikely to clash, such that two networks can be merged, as long as the policy allows that merge.  See also Section 9.1 for a discussion on merging domains.

Links inside the ACP only use link-local IPv6 addressing, such that each node only requires one routable virtual address.

6.10.1.  Fundamental Concepts of Autonomic Addressing

o  Usage: Autonomic addresses are exclusively used for self-management functions inside a trusted domain.  They are not used for user traffic.  Communications with entities outside the trusted domain use another address space, for example normally managed routable address space (called "data-plane" in this document).

o  Separation: Autonomic address space is used separately from user address space and other address realms.  This supports the robustness requirement.

o  Loopback-only: Only ACP loopback interfaces (and potentially those configured for "ACP connect", see Section 8.1) carry routable address(es); all other interfaces (called ACP virtual interfaces) only use IPv6 link-local addresses.  The usage of IPv6 link-local addressing is discussed in [RFC7404].

o  Use-ULA: For loopback interfaces of ACP nodes, we use Unique Local Addresses (ULA), as specified in [RFC4193].  An alternative scheme was discussed, using assigned ULA addressing.  The consensus was to use ULA-random ([RFC4193] with L=1), because it was deemed to be sufficient.

o  No external connectivity: They do not provide access to the Internet.  If a node requires further reaching connectivity, it should use another, traditionally managed address scheme in parallel.

o  Addresses in the ACP are permanent, and do not support temporary addresses as defined in [RFC4941].

o  Addresses in the ACP are not considered sensitive on privacy grounds because ACP nodes are not expected to be end-user devices.  Therefore, ACP addresses do not need to be pseudo-random as discussed in [RFC7721].  Because they are not propagated to untrusted (non-ACP) nodes and stay within a domain (of trust), we also consider them not to be subject to scanning attacks.

The ACP is based exclusively on IPv6 addressing, for a variety of reasons:

o  Simplicity, reliability and scale: If other network layer protocols were supported, each would have to have its own set of security associations, routing table and process, etc.

o  Autonomic functions do not require IPv4: Autonomic functions and autonomic service agents are new concepts.  They can be exclusively built on IPv6 from day one.  There is no need for backward compatibility.

o  OAM protocols do not require IPv4: The ACP may carry OAM protocols.  All relevant protocols (SNMP, TFTP, SSH, SCP, Radius, Diameter, ...) are available in IPv6.
6.10.2.  The ACP Addressing Base Scheme

The base ULA addressing scheme for ACP nodes has the following format:

     8      40                       2                78
   +--+-------------------------+------+------------------------------+
   |fd| hash(routing-subdomain) | Type |         (sub-scheme)         |
   +--+-------------------------+------+------------------------------+

               Figure 3: ACP Addressing Base Scheme

The first 48 bits follow the ULA scheme, as defined in [RFC4193], to which a type field is added:

o  "fd" identifies a locally defined ULA address.

o  The 40 bits ULA "global ID" (term from [RFC4193]) for ACP addresses carried in the domain information field of domain certificates are the first 40 bits of the SHA256 hash of the routing subdomain from the same domain information field.  In the example of Section 6.1.1, the routing subdomain is "area51.research.acp.example.com" and the 40 bits ULA "global ID" is a379a6f6ee.

o  To allow for extensibility, the fact that the ULA "global ID" is a hash of the routing subdomain SHOULD NOT be assumed by any ACP node during normal operations.  The hash function is only executed during the creation of the certificate.  If BRSKI is used, then the registrar will create the domain information field in response to the CSR Attribute Request by the pledge.

o  Type: This field allows different address sub-schemes.  This addresses the "upgradability" requirement.  Assignment of types for this field will be maintained by IANA.

The sub-scheme may imply a range or set of addresses assigned to the node.  This is called the ACP address range/set and is explained in each sub-scheme.

6.10.3.  ACP Zone Addressing Sub-Scheme

The sub-scheme defined here is defined by the Type value 00b (zero) in the base scheme.

             64                                64
   +-----------------+---------+---++--------------+--------------+---+
   |  (base scheme)  | Zone-ID | Z ||           Node-ID                |
   |                 |         |   || Registrar-ID |  Node-Number | V |
   +-----------------+---------+---++--------------+--------------+---+
           50             13     1         48             15        1

              Figure 4: ACP Zone Addressing Sub-Scheme

The fields are defined as follows:

o  Zone-ID: If set to all zero bits: The Node-ID bits are used as an identifier (as opposed to a locator).  This results in a non-hierarchical, flat addressing scheme.  Any other value indicates a zone.  See Section 6.10.3.1 on how this field is used in detail.

o  Z: MUST be 0.

o  Node-ID: A unique value for each node.

The 64 bit Node-ID is derived and composed as follows:

o  Registrar-ID (48 bit): A number unique inside the domain that identifies the registrar which assigned the Node-ID to the node.  A MAC address of the registrar can be used for this purpose.

o  Node-Number: A number which is unique for a given registrar, to identify the node.  This can be a sequentially assigned number.

o  V (1 bit): Virtualization bit: 0: Indicates the ACP itself ("ACP node base system"); 1: Indicates the optional "host" context on the ACP node (see below).

In the Zone addressing sub-scheme, the ACP address in the certificate has Zone and V fields as all zero bits.  The ACP address set includes addresses with any Zone value and any V value.

The "Node-ID" itself is unique in a domain (i.e., the Zone-ID is not required for uniqueness).  Therefore, a node can be addressed either as part of a flat hierarchy (Zone-ID = 0), or with an aggregation scheme (any other Zone-ID).  An address with Zone-ID = 0 is an identifier, one with another Zone-ID is a locator.  See Section 6.10.3.1 for a description of the zone bits.
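As a minimal sketch under the field layout above (Python; the registrar and node values passed in would be made up for illustration), the derivation of the ULA global ID and the packing of a Zone sub-scheme address could look as follows:

   import hashlib
   import ipaddress

   def ula_global_id(routing_subdomain):
       """First 40 bits (5 bytes) of SHA256(routing-subdomain)."""
       digest = hashlib.sha256(routing_subdomain.encode()).digest()
       return int.from_bytes(digest[:5], "big")

   def zone_address(routing_subdomain, registrar_id, node_number,
                    zone_id=0, v=0):
       """Pack a Zone sub-scheme ACP address (Type=00b, Z=0)."""
       addr  = 0xfd << 120                          # ULA, local
       addr |= ula_global_id(routing_subdomain) << 80
       addr |= 0b00 << 78                           # Type: Zone scheme
       addr |= (zone_id & 0x1fff) << 65             # 13 bit Zone-ID
                                                    # Z bit (64) = 0
       addr |= (registrar_id & ((1 << 48) - 1)) << 16
       addr |= (node_number & 0x7fff) << 1          # 15 bit Node-Number
       addr |= (v & 1)                              # V(irtualization)
       return ipaddress.IPv6Address(addr)

   # Section 6.1.1's example input; the document quotes a379a6f6ee
   # as the resulting 40 bit global ID.
   print(hex(ula_global_id("area51.research.acp.example.com")))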
The Virtual bit in this sub-scheme makes it easy to add the ACP as a component to existing systems without causing problems in the port number space between the services in the ACP and the existing system.

V:0 is the ACP router (autonomic node base system), V:1 is the host with pre-existing transport endpoints on it that could collide with the transport endpoints used by the ACP router.  The ACP host could for example have a p2p virtual interface with the V:0 address as its router into the ACP.  Depending on the SW design of ASAs (outside the scope of this specification), they may use the V:0 or V:1 address.

The location of the V bit(s) at the end of the address allows the announcement of a single prefix for each ACP node.  For example, in a network with 20,000 ACP nodes, this avoids 20,000 additional routes in the routing table.

6.10.3.1.  Usage of the Zone Field

The "Zone-ID" allows for the introduction of structure in the addressing scheme.

Zone = zero is the default addressing scheme in an ACP domain.  Every ACP node MUST respond to its ACP address with zone=0.  Used on its own this leads to a non-hierarchical address scheme, which is suitable for networks up to a certain size.  In this case, the addresses primarily act as identifiers for the nodes, and aggregation is not possible.

If aggregation is required, the 13 bit value allows for up to 8192 zones.  The allocation of zone numbers may either happen automatically through a to-be-defined algorithm, or it could be configured and maintained manually.

If a node learns through an autonomic method or through configuration that it is part of a zone, it MUST also respond to its ACP address with that zone number.  In this case the ACP loopback is configured with two ACP addresses: one for zone 0 and one for the assigned zone.  This method allows for a smooth transition between a flat addressing scheme and a hierarchical one.

(Theoretically, the 13 bits for the Zone-ID would also allow for two levels of zones, introducing a sub-hierarchy.  We do not think this is required at this point, but a new type could be used in the future to support such a scheme.)

Note: The Zone-ID is one method to introduce structure or hierarchy into the ACP.  Another way is the use of the routing subdomain field in the ACP that leads to different /48 ULA prefixes within an ACP domain.  This gives future work two options to consider.

6.10.4.  ACP Manual Addressing Sub-Scheme

The sub-scheme defined here is defined by the Type value 00b (zero) and the Z bit value 1 (one) in the base scheme.

             64                                64
   +---------------------+---------+---++-----------------------------+
   |    (base scheme)    |Subnet-ID| Z ||    Interface Identifier     |
   +---------------------+---------+---++-----------------------------+
             50              13      1

             Figure 5: ACP Manual Addressing Sub-Scheme

The fields are defined as follows:

o  Subnet-ID: Configured subnet identifier.

o  Z: MUST be 1.

o  Interface Identifier.
This sub-scheme is meant for "manual" allocation to subnets where the other addressing schemes cannot be used.  The primary use case is assignment to ACP connect subnets (see Section 8.1.1).

"Manual" means that allocations of the Subnet-ID need to be done today with pre-existing, non-autonomic mechanisms.  Every subnet that uses this addressing sub-scheme needs to use a unique Subnet-ID (unless some anycast setup is done).  Future work may define mechanisms for auto-coordination between ACP nodes and auto-allocation of Subnet-IDs between them.

The Z field follows the Subnet-ID field so that future work could allocate/coordinate both Zone-ID and Subnet-ID consistently and use an integrated, aggregatable routing approach across them.  Z=0 (Zone sub-scheme) would then be used for network wide unique, registrar assigned (and certificate protected) Node-IDs primarily for ACP nodes, while Z=1 would be used for node-level assigned Interface Identifiers primarily for non-ACP-nodes (on logical subnets where the ACP node is a router).

Manual addressing sub-scheme addresses SHOULD only be used in domain certificates assigned to nodes that cannot fully participate in the automatic establishment of ACP secure channels or ACP routing.  The intended use is for nodes connecting to the ACP via an ACP edge node and ACP connect (see Section 8.1) - such as legacy NOC equipment.  They would not use their domain certificate for ACP secure channel creation and therefore do not need to participate in ACP routing either.  They would use the certificate for authentication of any transport services.  The value of the Interface Identifier is left for future definitions.

6.10.5.  ACP Vlong Addressing Sub-Scheme

The sub-scheme defined here is defined by the Type value 01b (one) in the base scheme.

           50                              78
   +---------------------++-----------------------------+----------+
   |    (base scheme)    ||            Node-ID                     |
   |                     || Registrar-ID |  Node-Number |    V     |
   +---------------------++--------------+--------------+----------+
                                 46           33/17         8/16

             Figure 6: ACP Vlong Addressing Sub-Scheme

This addressing scheme foregoes the Zone field to allow for larger, flatter routed networks (e.g.: as in IoT) with more than 2^32 Node-Numbers.  It also allows for up to 2^16 = 65536 different virtualized addresses, which could be used to address individual software components in an ACP node.

The fields are the same as in the Zone sub-scheme with the following refinements:

o  V: Virtualization bit: values 0 and 1 as in the Zone sub-scheme; further values are for use via definition in future work.

o  Registrar-ID: To maximize Node-Number and V, the Registrar-ID is reduced to 46 bits.  This still allows the use of the MAC address of a registrar, by removing the V and U bits from the 48 bits of a MAC address (those two bits are never unique, so they cannot be used to distinguish MAC addresses).

o  If the first bit of the "Node-Number" is "1", then the Node-Number is 17 bits long and the V field is 16 bits long; otherwise the Node-Number is 33 bits long and the V field is 8 bits long.  "0" bit Node-Numbers are intended to be used for "general purpose" ACP nodes that would potentially have a limited number (< 256) of clients (ASA/autonomic functions or legacy services) of the ACP that require separate V(irtual) addresses.  "1" bit Node-Numbers are intended for ACP nodes that are ACP edge nodes (see Section 8.1.1) or that have a large number of clients requiring separate V(irtual) addresses, for example large SDN controllers with container modular software architecture (see Section 8.1.2).
"0" bit 1855 Node-Numbers are intended to be used for "general purpose" ACP 1856 nodes that would potentially have a limited number (< 256) of 1857 clients (ASA/Autonomic Functions or legacy services) nof the ACP 1858 that require separate V(irtual) addresses. "1" bit Node-Numbers 1859 are intended for ACP nodes that are ACP edge nodes (see 1860 Section 8.1.1) or that have a large number of clients requiring 1861 separate V(irtual) addresses. For example large SDN controllers 1862 with container modular software architecture (see Section 8.1.2). 1864 In the Vlong addressing sub-scheme, the ACP address in the 1865 certificate has all V field bits as zero. The ACP address set for 1866 the node includes any V value. 1868 6.10.6. Other ACP Addressing Sub-Schemes 1870 Before further addressing sub-schemes are defined, experience with 1871 the schemes defined here should be collected. The schemes defined in 1872 this document have been devised to allow hopefully sufficiently 1873 flexible setup of ACPs for a variety of situation. These reasons 1874 also lead to the fairly liberal use of address space: The Zone 1875 addressing sub-schemes is intended to enable optimized routing in 1876 large networks by reserving bits for zones. The Vlong addressing 1877 sub-scheme enables the allocation of 8/16 bit of addresses inside 1878 individual ACP nodes. Both address spaces allow distributed, 1879 uncoordinated allocation of node addresses by reserving bits for the 1880 Registrar-ID field in the address. 1882 IANA is asked need to assign a new "type" for each new addressing 1883 sub-scheme. With the current allocations, only 2 more schemes are 1884 possible, so the last addressing scheme should consider to be 1885 extensible in itself (e.g.: by reserving bits from it for further 1886 extensions. 1888 6.11. Routing in the ACP 1890 Once ULA address are set up all autonomic entities should run a 1891 routing protocol within the autonomic control plane context. This 1892 routing protocol distributes the ULA created in the previous section 1893 for reachability. The use of the autonomic control plane specific 1894 context eliminates the probable clash with the global routing table 1895 and also secures the ACP from interference from the configuration 1896 mismatch or incorrect routing updates. 1898 The establishment of the routing plane and its parameters are 1899 automatic and strictly within the confines of the autonomic control 1900 plane. Therefore, no manual configuration is required. 1902 All routing updates are automatically secured in transit as the 1903 channels of the autonomic control plane are by default secured, and 1904 this routing runs only inside the ACP. 1906 The routing protocol inside the ACP is RPL ([RFC6550]). See 1907 Section 10.5 for more details on the choice of RPL. 1909 RPL adjacencies are set up across all ACP channels in the same domain 1910 including all its routing subdomains. See Section 10.7 for more 1911 details. 1913 6.11.1. RPL Profile 1915 The following is a description of the RPL profile that ACP nodes need 1916 to support by default. The format of this section is derived from 1917 draft-ietf-roll-applicability-template. 1919 6.11.1.1. Summary 1921 In summary, the profile chosen for RPL is one that expects a fairly 1922 reliable network reasonable fast links so that RPL convergence will 1923 be triggered immediately upon recognition of link failure/recovery. 
The key limitation of the chosen profile is that it is designed to not require any data-plane artifacts (such as [RFC6553]).  While the senders/receivers of ACP packets can be legacy NOC devices connected via "ACP connect" (see Section 8.1.1) to the ACP, their connectivity can be handled as non-RPL-aware leafs (or "Internet") according to the data-plane architecture explained in [I-D.ietf-roll-useofrplinfo].  This non-artifact profile is largely driven by the desire to avoid introducing the otherwise required Hop-by-Hop headers into the ACP VRF control plane.  Many devices will have their VRF forwarding code designed into silicon.

In this profile choice, RPL has no data-plane artifacts.  A simple destination-prefix-based routing table is used.  As a consequence of supporting only a single instanceID (containing one DODAG), the ACP will accommodate only a single class of routing table and cannot create optimized routing paths to accomplish latency or energy goals.

Consider a network that has multiple NOCs in different locations.  Only one NOC will become the DODAG root.  Other NOCs will have to send traffic through the DODAG (tree) rooted in the primary NOC.  Depending on topology, this can be an annoyance from a latency point of view, but it does not represent a single point of failure, as the DODAG can reconfigure itself when it detects data-plane forwarding failures.

The lack of an RPI (the header defined by [RFC6553]) means that the data-plane will have no rank value that can be used to detect loops.  As a result, traffic may loop until the TTL of the packet reaches zero.  This is the same behavior as that of other IGPs that do not have RPL's data-plane options.  There are a variety of heuristics that can be used to signal from the data-plane to the RPL control plane that a new route is needed.

Additionally, failed ACP tunnels will be detected by IKEv2 Dead Peer Detection (which can function as a replacement for an LLN's ETX).  A failure of an ACP tunnel should signal the RPL control plane to pick a different parent.

Future extensions to this RPL profile can provide optimality for multiple NOCs.  This requires utilizing data-plane artifacts including IPinIP encap/decap on ACP routers and processing of IPv6 RPI headers.  Alternatively, (Src,Dst) routing table entries could be used.  A decision for the preferred technology would have to be made when such an extension is defined.

6.11.1.2.  RPL Instances

Single RPL instance.  Default RPLInstanceID = 0.

6.11.1.3.  Storing vs. Non-Storing Mode

RPL Mode of Operations (MOP): mode 3 "Storing Mode of Operations with multicast support".  Implementations should also support other modes.  Note: Root indicates mode in DIO flow.

6.11.1.4.  DAO Policy

Proactive, aggressive DAO state maintenance:

o  Use K-flag in unsolicited DAO indicating change from previous information (to require DAO-ACK).

o  Retry such DAO DAO-RETRIES(3) times with DAO-ACK_TIME_OUT(256ms) in between.

6.11.1.5.  Path Metric

Hopcount.

6.11.1.6.  Objective Function

Objective Function (OF): Use OF0 [RFC6552].  No use of metric containers.

rank_factor: Derived from link speed: <= 100Mbps: LOW_SPEED_FACTOR(5), else HIGH_SPEED_FACTOR(1)
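For orientation only, the profile parameters listed in this and the preceding subsections can be collected into a single structure (Python; the key names are descriptive labels chosen here, not protocol field names):

   # Non-normative summary of the ACP RPL profile (Section 6.11.1).
   ACP_RPL_PROFILE = {
       "instance_id": 0,               # single RPL instance
       "mop": 3,                       # storing mode, multicast support
       "dao": {"k_flag": True,         # request DAO-ACK
               "retries": 3,           # DAO-RETRIES(3)
               "ack_timeout_ms": 256}, # DAO-ACK_TIME_OUT(256ms)
       "path_metric": "hopcount",
       "objective_function": "OF0",    # RFC 6552, no metric containers
       "rank_factor": {"le_100mbps": 5, "gt_100mbps": 1},
       "data_plane_artifacts": None,   # no RPI (RFC 6553) headers
   }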
6.11.1.7.  DODAG Repair

Global Repair: we assume stable links and ranks (metrics), so there is no need to periodically rebuild the DODAG.  The DODAG version is only incremented under catastrophic events (e.g.: administrative action).

Local Repair: As soon as link breakage is detected, send No-Path DAO for all the targets that were reachable only via this link.  As soon as link repair is detected, validate whether this link provides a better parent.  If so, compute your new rank, and send a new DIO that advertises your new rank.  Then send a DAO with a new path sequence about yourself.

stretch_rank: none provided ("not stretched").

Data Path Validation: Not used.

Trickle: Not used.

6.11.1.8.  Multicast

Not used yet, but possible because of the selected mode of operations.

6.11.1.9.  Security

[RFC6550] security not used, substituted by ACP security.

6.11.1.10.  P2P communications

Not used.

6.11.1.11.  IPv6 address configuration

Every ACP node (RPL node) announces an IPv6 prefix covering the address(es) used in the ACP node.  The prefix length depends on the chosen addressing sub-scheme of the ACP address provisioned into the certificate of the ACP node, e.g.: /127 for the Zone addressing sub-scheme, or /112 or /120 for the Vlong addressing sub-scheme.  See Section 6.10 for more details.

Every ACP node MUST install a black hole (aka null) route for whatever ACP address space it advertises (i.e.: the /96 or /127).  This is to avoid routing loops for addresses that an ACP node has not (yet) used.

6.11.1.12.  Administrative parameters

Administrative Preference ([RFC6552], 3.2.6 - to become root): Indicated in DODAGPreference field of DIO message.

o  Explicitly configured "root": 0b100

o  Registrar (Default): 0b011

o  AN-connect (non-registrar): 0b010

o  Default: 0b001.

6.11.1.13.  RPL Data-Plane artifacts

RPI (RPL Packet Information [RFC6553]): Not used, as there is only a single instance and data path validation is not being used.

SRH (RPL Source Routing Header - [RFC6554]): Not used.  Storing mode is being used.

6.11.1.14.  Unknown Destinations

Because RPL minimizes the size of the routing and forwarding table, prefixes reachable through the same interface as the RPL root are not known on every ACP node.  Therefore traffic to unknown destination addresses can only be discovered at the RPL root.  The RPL root SHOULD have safe mechanisms to operationally discover and log such packets.

6.12.  General ACP Considerations

Since channels are by default established between adjacent neighbors, the resulting overlay network performs hop-by-hop encryption.  Each node decrypts incoming traffic from the ACP, and encrypts outgoing traffic to its neighbors in the ACP.  Routing is discussed in Section 6.11.

6.12.1.  Performance

There are no performance requirements against ACP implementations defined in this document, because the performance requirements depend on the intended use case.  It is expected that a full autonomic node with a wide range of ASAs can require high forwarding plane performance in the ACP, for example for telemetry, but that determination is for future work.
Implementations of the ACP that solely support traditional/SDN style use cases can benefit from the ACP at lower performance, especially if the ACP is used only for critical operations, e.g.: when the data-plane is not available.  See [I-D.ietf-anima-stable-connectivity] for more details.

6.12.2.  Addressing of Secure Channels in the data-plane

In order to be independent of the data-plane configuration of global IPv6 subnet addresses (which may not exist when the ACP is brought up), link-local secure channels MUST use IPv6 link-local addresses between adjacent neighbors.  The fully autonomic mechanisms in this document only specify these link-local secure channels.  Section 8.2 specifies extensions in which secure channels are tunnels.  For those, this requirement does not apply.

The link-local secure channels specified in this document therefore depend on basic IPv6 link-local functionality being auto-enabled by the ACP, and on prohibiting the data-plane from disabling it.  The ACP also depends on being able to operate the secure channel protocol (e.g.: IPsec / dTLS) across IPv6 link-local addresses, something that may be an uncommon profile.  Functionally, these are the only interactions with the data-plane that the ACP needs to have.

To mitigate these interactions with the data-plane, extensions to this document may specify additional layer 2 or layer 3 encapsulations for ACP secure channels, as well as other protocols to auto-discover peer endpoints for such encapsulations (e.g.: tunneling across L3 or use of L2-only encapsulations).

6.12.3.  MTU

The MTU for ACP secure channels must be derived locally from the underlying link MTU minus the secure channel encapsulation overhead.

ACP secure channel protocols do not need to perform MTU discovery because they are built across L2 adjacencies - the MTU on both sides connecting to the L2 connection is assumed to be consistent.  Extensions to ACP where the ACP is for example tunneled need to consider how to guarantee MTU consistency.  This is a standard issue with tunneling, not specific to running the ACP across it.  Transport stacks running across the ACP can perform normal PMTUD (Path MTU Discovery).  Because the ACP is meant to prioritize reliability over performance, they MAY opt to only expect the IPv6 minimum MTU (1280) to avoid running into PMTUD implementation bugs or underlying link MTU mismatch problems.
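As a trivial sketch of this local derivation (Python; the 80 byte overhead is a made-up example value - the real overhead depends on the chosen secure channel encapsulation):

   IPV6_MIN_MTU = 1280   # conservative assumption for ACP transports

   def acp_secure_channel_mtu(link_mtu, encap_overhead):
       """Per Section 6.12.3: derived locally from the underlying
       link MTU; no MTU discovery runs across the L2 adjacency."""
       return link_mtu - encap_overhead

   # e.g.: Ethernet with a hypothetical 80 byte encapsulation:
   # acp_secure_channel_mtu(1500, 80) == 1420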
6.12.4.  Multiple links between nodes

If two nodes are connected via several links, the ACP SHOULD be established across every link, but it is possible to establish the ACP only on a sub-set of links.  Having an ACP channel on every link has a number of advantages, for example it allows for a faster failover in case of link failure, and it reflects the physical topology more closely.  Using a subset of links (for example, a single link) reduces resource consumption on the node, because state needs to be kept per ACP channel.  The negotiation scheme explained in Section 6.5 allows Alice (the node with the higher ACP address) to drop all but the desired ACP channels to Bob - and Bob will not re-try to build these secure channels from his side unless Alice shows up with a previously unknown GRASP announcement (e.g.: on a different link or with a different address announced in GRASP).

6.12.5.  ACP interfaces

The ACP VRF has conceptually two types of interfaces: the "ACP loopback interface(s)" to which the ACP ULA address(es) are assigned, and the "ACP virtual interfaces" that are mapped to the ACP secure channels.

The term "loopback interface" was introduced initially to refer to an internal interface on a node that would allow IP traffic between transport endpoints on the node in the absence or failure of any or all external interfaces, see [RFC4291] section 2.5.3.

Even though loopback interfaces were originally designed to hold only loopback addresses not reachable from outside the node, these interfaces are also commonly used today to hold addresses reachable from the outside.  They are meant to be reachable independent of any external interface being operational, and therefore to be more resilient.  These addresses on loopback interfaces can be thought of as "node addresses" instead of "interface addresses", and that is what ACP address(es) are.  This construct therefore makes it possible to address ACP nodes with a well-defined set of addresses independent of the number of external interfaces.

For these reasons, the ACP (ULA) address(es) are assigned to loopback interface(s).

ACP secure channels, e.g.: IPsec, dTLS or other future security associations with neighboring ACP nodes, can be mapped to ACP virtual interfaces in different ways:

ACP point-to-point virtual interface:

Each ACP secure channel is mapped into a separate point-to-point ACP virtual interface.  If a physical subnet has more than two ACP capable nodes (in the same domain), this implementation approach will lead to a full mesh of ACP virtual interfaces between them.

ACP multi-access virtual interface:

In a more advanced implementation approach, the ACP will construct a single multi-access ACP virtual interface for all ACP secure channels to ACP capable nodes reachable across the same underlying (physical) subnet.  IPv6 link-local multicast packets sent into an ACP multi-access virtual interface are replicated to every ACP secure channel mapped into the ACP multi-access virtual interface.  IPv6 unicast packets sent into an ACP multi-access virtual interface are sent to the ACP secure channel that belongs to the ACP neighbor that is the next-hop in the ACP forwarding table entry used to reach the packet's destination address.

There is no requirement for all ACP nodes on the same multi-access subnet to use the same type of ACP virtual interface.  This is purely a node local decision.

ACP nodes MUST perform standard IPv6 operations across ACP virtual interfaces, including SLAAC (Stateless Address Auto-Configuration - [RFC4862]) to assign their IPv6 link-local address on the ACP virtual interface, and ND (Neighbor Discovery - [RFC4861]) to discover which IPv6 link-local neighbor address belongs to which ACP secure channel mapped to the ACP virtual interface.  This is independent of whether the ACP virtual interface is point-to-point or multi-access.
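A minimal sketch of a multi-access ACP virtual interface (Python; the class, its helpers and the forwarding table object are invented here for illustration):

   # Link-local multicast is replicated to every mapped secure
   # channel; unicast follows the ACP forwarding table.
   class AcpMultiAccessInterface:
       def __init__(self):
           self.channels = {}   # link-local neighbor addr -> channel

       def learn(self, neighbor_ll_addr, secure_channel):
           # Filled by ND, or learned from other messages such as
           # the source addresses of RPL multicast messages.
           self.channels[neighbor_ll_addr] = secure_channel

       def send(self, packet, fib):
           if packet.dst.is_multicast:          # e.g. GRASP, RPL
               for channel in self.channels.values():
                   channel.send(packet)         # replicate per channel
           else:
               nexthop = fib.lookup(packet.dst)
               self.channels[nexthop].send(packet)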
ACP nodes MAY reduce the amount of link-local IPv6 multicast packets from ND by learning the IPv6 link-local neighbor address to ACP secure channel mapping from other messages, such as the source address of IPv6 link-local multicast RPL messages - and therefore forego the need to send Neighbor Solicitation messages.

ACP nodes MUST NOT derive their ACP virtual interface IPv6 link-local address from their IPv6 link-local address used on the underlying interface (e.g.: the address that is used as the encapsulation address in the ACP secure channel protocols defined in this document).  This ensures that the ACP virtual interface operations will not depend on the specifics of the encapsulation used by the ACP secure channel, and that attacks against SLAAC on the physical interface will not introduce new attack vectors against the operations of the ACP virtual interface.

The link-layer address of an ACP virtual interface is the address used for the underlying interface across which the secure tunnels are built, typically Ethernet addresses.  Because unicast IPv6 packets sent to an ACP virtual interface are not sent to a link-layer destination address but rather into an ACP secure channel, the link-layer address fields SHOULD be ignored on reception, and instead the ACP secure channel from which the message was received should be remembered.

Multi-access ACP virtual interfaces are the preferable implementation when the underlying interface is a (broadcast) multi-access subnet, because they reflect the presence of the underlying multi-access subnet in the virtual interfaces of the ACP.  This makes it, for example, simpler to build services with topology awareness inside the ACP VRF in the same way as they could have been built running natively on the multi-access interfaces.

Consider also the impact of point-to-point vs. multi-access virtual interfaces on the efficiency of flooding via link-local multicasted messages:

Assume a LAN with three ACP neighbors, Alice, Bob and Carol.  Alice's ACP GRASP wants to send a link-local GRASP multicast message to Bob and Carol.  If Alice's ACP emulates the LAN as one point-to-point virtual interface to Bob and one to Carol, the sending application itself will send two copies; if Alice's ACP emulates a multi-access LAN, GRASP will send one packet and the ACP will replicate it.  The result is the same.  The difference happens when Bob and Carol receive their packet.  If they use ACP point-to-point virtual interfaces, their GRASP instances would forward the packet from Alice to each other as part of the GRASP flooding procedure.  These packets are unnecessary and would be discarded by GRASP on receipt as duplicates (by use of the GRASP Session ID).  If Bob's and Carol's ACPs emulate a multi-access virtual interface, then this would not happen, because GRASP's flooding procedure does not replicate packets back to the interface from which they were received.

Note that link-local GRASP multicast messages are not sent directly as IPv6 link-local multicast UDP messages into ACP virtual interfaces, but instead into ACP GRASP virtual interfaces that are layered on top of ACP virtual interfaces to add TCP reliability to link-local multicast GRASP messages.
Nevertheless, these ACP GRASP virtual interfaces perform the same replication of messages and, therefore, result in the same impact on flooding.  See Section 6.8.2 for more details.

RPL does support operations and correct routing table construction across non-broadcast multi-access (NBMA) subnets.  This is common when using many radio technologies.  When such NBMA subnets are used, they MUST NOT be represented as ACP multi-access virtual interfaces, because the replication of IPv6 link-local multicast messages will not reach all NBMA subnet neighbors.  As a result, GRASP message flooding would fail.  Instead, each ACP secure channel across such an interface MUST be represented as an ACP point-to-point virtual interface.  These requirements can be avoided by coupling the ACP flooding mechanism for GRASP messages directly to RPL (flood GRASP across the DODAG), but such an enhancement is a subject for future work.

Care must also be taken when creating multi-access ACP virtual interfaces across ACP secure channels between ACP nodes in different domains or routing subdomains.  The policies to be negotiated may be described as peer-to-peer policies, in which case it is easier to create ACP point-to-point virtual interfaces for these secure channels.

7.  ACP support on L2 switches/ports (Normative)

7.1.  Why

   ANrtr1 ------ ANswitch1 --- ANswitch2 ------- ANrtr2
       .../         \             \          ...
   ANrtrM ------     \             ------- ANrtrN
                      ANswitchM ...

                         Figure 7

Consider a large L2 LAN with ANrtr1...ANrtrN connected via some topology of L2 switches.  Examples include large enterprise campus networks with an L2 core, IoT networks, or broadband aggregation networks, which often have even a multi-level L2 switched topology.

If the discovery protocol used for the ACP is operating at the subnet level, every ACP router will see all other ACP routers on the LAN as neighbors, and a full mesh of ACP channels will be built.  If some or all of the AN switches are autonomic with the same discovery protocol, then the full mesh would include those switches as well.

A full mesh of ACP connections like this can create fundamental scale challenges.  The number of security associations of the secure channel protocols will likely not scale arbitrarily, especially when they leverage platform accelerated encryption/decryption.  Likewise, any other ACP operations (such as routing) need to scale to the number of direct ACP neighbors.  An ACP router with just 4 physical interfaces might be deployed into a LAN with hundreds of neighbors connected via switches.  Introducing such an unpredictable new scaling requirement makes it harder to support the ACP on arbitrary platforms and in arbitrary deployments.

Predictable scaling requirements for ACP neighbors can most easily be achieved if, in topologies like these, ACP capable L2 switches can ensure that discovery messages terminate on them, so that neighboring ACP routers and switches will only find the physically connected ACP L2 switches as their candidate ACP neighbors.  With such a discovery mechanism in place, the ACP and its security associations will only need to scale to the number of physical interfaces instead of a potentially much larger number of "LAN-connected" neighbors.
   The ACP topology will then directly follow the physical topology,
   which can also be leveraged in management operations or by ASAs.

   In the example above, consider that ANswitch1 and ANswitchM are
   ACP capable and that ANswitch2 is not ACP capable.  The desired
   ACP topology is that ANrtr1 and ANrtrM only have an ACP connection
   to ANswitch1, and that ANswitch1, ANrtr2 and ANrtrN have a full
   mesh of ACP connections amongst each other.  ANswitch1 also has an
   ACP connection to ANswitchM, and ANswitchM has ACP connections to
   anything else behind it.

7.2.  How (per L2 port DULL GRASP)

   To support the ACP on L2 switches or L2 switched ports of an L3
   device, it is necessary to make those L2 ports look like L3
   interfaces to the ACP implementation.  This primarily involves the
   creation of a separate DULL GRASP instance/domain on every such L2
   port.  Because GRASP has a dedicated link-local IPv6 multicast
   address (ALL_GRASP_NEIGHBORS), it is sufficient that all packets
   for this address are extracted at the port level and passed to
   that DULL GRASP instance.  Likewise, the IPv6 link-local multicast
   packets sent by that DULL GRASP instance need to be sent only
   towards the L2 port of this DULL GRASP instance.

   If a device with L2 ports supports per L2 port ACP DULL GRASP as
   well as MLD snooping ([RFC4541]), then MLD snooping must be
   changed to never forward packets for ALL_GRASP_NEIGHBORS, because
   that would cause the very problem that per L2 port ACP DULL GRASP
   is meant to overcome (forwarding of DULL GRASP packets across L2
   ports).

   The rest of the ACP operations can work in the same way as on L3
   devices: Assume, for example, that the device is an L3/L2 hybrid
   device where L3 interfaces are assigned to VLANs and each VLAN
   potentially has multiple ports.  DULL GRASP is run as described,
   individually on each L2 port.  When it discovers a candidate ACP
   neighbor, it passes its IPv6 link-local address and supported
   secure channel protocols to the ACP secure channel negotiation,
   which can be bound to the L3 (VLAN) interface.  That negotiation
   will simply use link-local IPv6 unicast packets to the candidate
   ACP neighbor.  Once a secure channel is established to such a
   neighbor, the virtual interface to which this secure channel is
   mapped should then actually be the L2 port and not the L3
   interface, to best map the actual physical topology into the ACP
   virtual interfaces.  See Section 6.12.5 for more details about how
   to map secure channels into ACP virtual interfaces.  Note that a
   single L2 port can still have multiple ACP neighbors if it
   connects, for example, to multiple ACP neighbors via a non-ACP
   enabled switch.  The per L2 port ACP virtual interface can
   therefore still be a multi-access virtual LAN.

   For example, in the above picture, ANswitch1 would run separate
   DULL GRASP instances on its ports towards ANrtr1, ANswitch2 and
   ANswitchM, even though all three of those ports may be in the same
   (V)LAN in the data-plane.  While the data-plane would perform L2
   switching between these ports, ANswitch1 would perform ACP L3
   routing between them.
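
   The port-level extraction described above amounts to a small
   classifier in front of the L2 forwarding path.  The following
   Python sketch is illustrative only: the helper types are
   hypothetical, and the ALL_GRASP_NEIGHBORS group address is an
   assumption taken from the GRASP specification work, not a
   definition of this document.  It also includes the locator-option
   sanity check discussed in the "address stealing" paragraph below:

   from dataclasses import dataclass

   ALL_GRASP_NEIGHBORS = "ff02::13"   # assumed GRASP multicast group

   @dataclass
   class Packet:
       ipv6_src: str
       ipv6_dst: str
       locator_ip: str = ""           # from the GRASP locator-option
       methods: tuple = ()            # advertised secure channel methods

   @dataclass
   class DullInstance:
       port: str
       def receive(self, pkt: Packet) -> None:
           # Supplemental check against "address stealing": the
           # locator-option address must match the packet source.
           if pkt.locator_ip != pkt.ipv6_src:
               return
           print(f"{self.port}: candidate ACP neighbor {pkt.ipv6_src},"
                 f" methods {pkt.methods}")

   dull = {p: DullInstance(p) for p in ("port1", "port2", "port3")}

   def on_l2_frame(port: str, pkt: Packet) -> str:
       """Port-level classifier ahead of L2 forwarding/MLD snooping."""
       if pkt.ipv6_dst == ALL_GRASP_NEIGHBORS:
           dull[port].receive(pkt)    # extract; never L2-forward these
           return "consumed"
       return "forward"               # normal L2/L3 handling
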
   The description above is specifically meant to illustrate that on
   hybrid L3/L2 devices, as they are common in enterprise, IoT and
   broadband aggregation networks, only the GRASP packet extraction
   (by Ethernet address) and the per-L2-port injection of GRASP
   link-local multicast packets have to consider L2 ports at the
   hardware forwarding level.  The remaining operations are purely
   ACP control plane functions and the setup of secure channels
   across the L3 interface.  This hopefully makes support for
   per-L2-port ACP easy on those hybrid devices.

   This L2/L3 optimized approach is subject to "address stealing",
   e.g.: where a device on one port uses addresses of a device on
   another port.  This is a generic issue in L2 LANs, and switches
   often already have some form of "port security" to prohibit it:
   they rely on NDP or DHCP learning of which port/MAC address and
   IPv6 address belong together and block duplicates.  This type of
   function needs to be enabled to prevent DoS attacks.  Likewise,
   the DULL GRASP instance needs to ensure that the IPv6 address in
   the locator-option matches the source IPv6 address of the DULL
   GRASP packet.

   In devices without such a mix of L2 ports/interfaces and L3
   interfaces (to terminate any transport layer connections),
   implementation details will differ.  Logically, the simplest
   approach is for every L2 port to be considered and used as a
   separate L3 subnet for all ACP operations.  The fact that the ACP
   only requires IPv6 link-local unicast and multicast should make
   support for it on any type of L2 device as simple as possible, but
   the need to support secure channel protocols may be a limiting
   factor for supporting the ACP on such devices.  Future options
   such as IEEE 802.1AE (MACsec) could improve that situation.

   A generic issue with the ACP in L2 switched networks is the
   interaction with the Spanning Tree Protocol (STP).  Ideally, the
   ACP should also be built across ports that are blocked by STP, so
   that the ACP does not depend on STP and can continue to run
   unaffected across STP topology changes (where re-convergence can
   be quite slow).  The simple implementation options described above
   are not sufficient for this.  Instead, they would simply have the
   ACP run across the active STP topology, and the ACP would equally
   be interrupted and re-converge with STP changes.

8.  Support for Non-ACP Components (Normative)

8.1.  ACP Connect

8.1.1.  Non-ACP Controller / NMS system

   The Autonomic Control Plane can be used by management systems,
   such as controllers or network management system (NMS) hosts
   (henceforth called simply "NMS hosts"), to connect to devices (or
   other types of nodes) through it.  For this, an NMS host must have
   access to the ACP.  The ACP is a self-protecting overlay network,
   which by default allows access only to trusted, autonomic systems.
   Therefore, a traditional, non-ACP NMS system does not have access
   to the ACP by default, just like any other external node.

   If the NMS host is not autonomic, i.e., it does not support
   autonomic negotiation of the ACP, then it can be brought into the
   ACP by explicit configuration.  To support connections to adjacent
   non-ACP nodes, an ACP node must support "ACP connect" (sometimes
   also called "autonomic connect"):

   "ACP connect" is a function on an autonomic node that is called an
   "ACP edge node".
With "ACP connect", interfaces on the node can be 2457 configured to be put into the ACP VRF. The ACP is then accessible to 2458 other (NOC) systems on such an interface without those systems having 2459 to support any ACP discovery or ACP channel setup. This is also 2460 called "native" access to the ACP because to those (NOC) systems the 2461 interface looks like a normal network interface (without any 2462 encryption/novel-signaling). 2464 data-plane "native" (no ACP) 2465 . 2466 +--------+ +----------------+ . +-------------+ 2467 | ACP | |ACP Edge Node | . | | 2468 | Node | | | v | | 2469 | |-------|...[ACP VRF]....+-----------------| |+ 2470 | | ^ |. | | NOC Device || 2471 | | . | .[data-plane]..+-----------------| "NMS hosts" || 2472 | | . | [ VRF ] | . ^ | || 2473 +--------+ . +----------------+ . . +-------------+| 2474 . . . +-------------+ 2475 . . . 2476 data-plane "native" . ACP "native" (unencrypted) 2477 + ACP auto-negotiated . "ACP connect subnet" 2478 and encrypted . 2479 ACP connect interface 2480 e.g.: "vrf ACP native" (config) 2482 Figure 8: ACP connect 2484 ACP connect has security consequences: All systems and processes 2485 connected via ACP connect have access to all ACP nodes on the entire 2486 ACP, without further authentication. Thus, the ACP connect interface 2487 and (NOC) systems connected to it must be physically controlled/ 2488 secured. For this reason the mechanisms described here do explicitly 2489 not include options to allow for a non-ACP router to be connected 2490 across an ACP connect interface and addresses behind such a router 2491 routed inside the ACP. 2493 An ACP connect interface provides exclusively access to only the ACP. 2494 This is likely insufficient for many NMS hosts. Instead, they would 2495 require a second "data-plane" interface outside the ACP for 2496 connections between the NMS host and administrators, or Internet 2497 based services, or for direct access to the data-plane. The document 2498 "Autonomic Network Stable Connectivity" 2499 [I-D.ietf-anima-stable-connectivity] explains in more detail how the 2500 ACP can be integrated in a mixed NOC environment. 2502 The ACP connect interface must be (auto-)configured with an IPv6 2503 address prefix. Is prefix SHOULD be covered by one of the (ULA) 2504 prefix(es) used in the ACP. If using non-autonomic configuration, it 2505 SHOULD use the ACP Manual Addressing Sub-Scheme (Section 6.10.4). It 2506 SHOULD NOT use a prefix that is also routed outside the ACP so that 2507 the addresses clearly indicate whether it is used inside the ACP or 2508 not. 2510 The prefix of ACP connect subnets MUST be distributed by the ACP edge 2511 node into the ACP routing protocol (RPL). The NMS hosts MUST connect 2512 to prefixes in the ACP routing table via its ACP connect interface. 2513 In the simple case where the ACP uses only one ULA prefix and all ACP 2514 connect subnets have prefixes covered by that ULA prefix, NMS hosts 2515 can rely on [RFC6724] - The NMS host will select the ACP connect 2516 interface because any ACP destination address is best matched by the 2517 address on the ACP connect interface. If the NMS hosts ACP connect 2518 interface uses another prefix or if the ACP uses multiple ULA 2519 prefixes, then the NMS hosts require (static) routes towards the ACP 2520 interface. 
   To limit the security impact of ACP connect, nodes supporting it
   SHOULD implement a security mechanism that allows configuration/
   use of ACP connect interfaces only on nodes explicitly targeted to
   be deployed with it (such as nodes in physically secured locations
   like a NOC).  For example, the certificate of such a node could
   include an extension required to permit configuration of ACP
   connect interfaces.  This prevents a random ACP node with easy
   physical access that is not meant to run ACP connect from leaking
   the ACP when it becomes compromised and the intruder configures
   ACP connect on it.  The full workflow, including the mechanism by
   which a registrar would select the nodes to be given such a
   certificate, is subject to future work.

8.1.2.  Software Components

   The ACP connect mechanism can be used not only to connect
   physically external systems (NMS hosts) to the ACP, but also other
   applications, containers or virtual machines.  In fact, one
   possible way to eliminate the security issue of the external ACP
   connect interface is to collocate an ACP edge node and an NMS host
   by making one a virtual machine or container inside the other,
   thereby converting the unprotected external ACP subnet into an
   internal virtual subnet in a single device.  This would ultimately
   result in a fully ACP enabled NMS host with minimal impact on the
   NMS host's software architecture.  This approach is not limited to
   NMS hosts but could equally be applied to devices consisting of
   one or more VNFs (virtual network functions): an internal virtual
   subnet connecting the out-of-band management interfaces of the
   VNFs to an ACP edge router VNF.

   The core requirement is that the software components need to have
   a network stack that permits access to the ACP and optionally also
   to the data-plane.  As in the physical setup for NMS hosts, this
   can be realized via two internal virtual subnets: one connecting
   to the ACP (which could be a container or virtual machine by
   itself), and one (or more) connecting into the data-plane.

   This "internal" use of the ACP connect approach should not be
   considered a "workaround", because in this case it is possible to
   build a correct security model: It is not necessary to rely on
   unprovable external physical security mechanisms as in the case of
   external NMS hosts.  Instead, the orchestration of the ACP, the
   virtual subnets and the software components can be done by trusted
   software that could be considered to be part of the ANI (or even
   an extended ACP).  This software component is responsible for
   ensuring that only trusted software components get access to that
   virtual subnet, and that only even more trusted software
   components get access to both the ACP virtual subnet and the
   data-plane (because those ACP users could leak traffic between ACP
   and data-plane).  This trust could be established, for example,
   through cryptographic means such as signed software packages.  The
   specification of these mechanisms is subject to future work.

   Note that ASAs (Autonomic Software Agents) could also be software
   components as described in this section, but further details of
   ASAs are subject to future work.

8.1.3.  Auto Configuration

   ACP edge nodes, NMS hosts and software components that, as
   described in the previous section, are meant to be composed via
   virtual interfaces SHOULD support Stateless Address
   Autoconfiguration (SLAAC - [RFC4862]) on the ACP connect subnet
   and route autoconfiguration according to [RFC4191].

   The ACP edge node acts as the router on the ACP connect subnet,
   providing the (auto-)configured prefix for the ACP connect subnet
   to NMS hosts and/or software components.  The ACP edge node uses
   the route prefix option of [RFC4191] to announce the default route
   (::/0) with a lifetime of 0 and aggregated prefixes for routes in
   the ACP routing table with normal lifetimes.  This will ensure
   that the ACP edge node does not become a default router, but that
   the NMS hosts and software components will route the prefixes used
   in the ACP to the ACP edge node.

   Aggregated prefix means that the ACP edge node needs to announce
   only the /48 ULA prefixes used in the ACP, but none of the actual
   /64 (Manual Addressing Sub-Scheme), /127 (Zone Addressing Sub-
   Scheme), /112 or /120 (Vlong Addressing Sub-Scheme) routes of
   actual ACP nodes.  If ACP connect interfaces are configured with
   non-ULA prefixes, then those prefixes cannot be aggregated without
   further configured policy on the ACP edge node.  This explains the
   above recommendation to use prefixes covered by the ACP ULA
   prefix(es) for ACP connect interfaces: they allow for a shorter
   list of prefixes to be signaled via [RFC4191] to NMS hosts and
   software components.

   ACP edge nodes that have a Vlong ACP address MAY allocate a subset
   of their /112 or /120 address prefix to ACP connect interface(s)
   to eliminate the need to non-autonomically configure/provision the
   address prefixes for such ACP connect interfaces.
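
   The Router Advertisement behavior described in this section can be
   illustrated with a packet construction sketch.  The following uses
   the scapy Python library; the prefix values are hypothetical
   examples, and an actual implementation would live in the ACP edge
   node's RA daemon rather than in a script:

   from scapy.all import (IPv6, ICMPv6ND_RA, ICMPv6NDOptPrefixInfo,
                          ICMPv6NDOptRouteInfo)

   # Hypothetical example prefixes:
   ACP_ULA_48 = "fd89:b714:f3db::"    # ULA prefix used inside the ACP
   CONNECT_64 = "fd89:b714:f3db:1::"  # this ACP connect subnet

   ra = (IPv6(dst="ff02::1")
         / ICMPv6ND_RA(routerlifetime=0)    # never a default router
         / ICMPv6NDOptPrefixInfo(prefix=CONNECT_64, prefixlen=64,
                                 L=1, A=1)  # SLAAC on the subnet
         / ICMPv6NDOptRouteInfo(prefix=ACP_ULA_48, plen=48,
                                rtlifetime=1800))  # aggregated ACP
                                                   # route (RFC 4191)
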
8.1.4.  Combined ACP/Data-Plane Interface (VRF Select)

                  Combined ACP and Data-Plane interface
                                   .
   +--------+      +--------------------+   .   +--------------+
   |  ACP   |      | ACP Edge Node      |   .   | NMS Host(s)  |
   |  Node  |      |                    |   .   | / Software   |
   |        |      |  [ACP   ].         |   .   |              |+
   |        |      | .[VRF   ] .[VRF  ] |   v   | "ACP address"||
   |        +------+.          .[Select].+------+ "Data-Plane  ||
   |        |  ^   | .[Data   ].        |       |  Address(es)"||
   |        |  .   |  [Plane  ]         |       |              ||
   |        |  .   |  [VRF   ]          |       +--------------+|
   +--------+  .   +--------------------+        +--------------+
               .
     data-plane "native" and    + ACP auto-negotiated/encrypted

                         Figure 9: VRF select

   Using two physical and/or virtual subnets (and therefore
   interfaces) towards NMS hosts (as per Section 8.1.1) or software
   (as per Section 8.1.2) may be seen as additional complexity, for
   example with legacy NMS hosts that support only one IP interface.

   To provide a single subnet for both the ACP and the data-plane,
   the ACP edge node needs to de-multiplex packets from NMS hosts
   into the ACP VRF and the data-plane VRF.  This is sometimes called
   "VRF select".  If the ACP VRF has no IPv6 addresses overlapping
   with the data-plane (as it should), then this function can use the
   IPv6 destination address.  The problem is source address selection
   on the NMS host(s) according to [RFC6724]:

   Consider the simple case: The ACP uses only one ULA prefix, and
   the ACP IPv6 prefix for the combined ACP and data-plane interface
   is covered by that ULA prefix.  The ACP edge node announces both
   the ACP IPv6 prefix and one (or more) prefixes for the data-plane.
   Without further policy configuration on the NMS host(s), an NMS
   host may select its ACP address as the source address for
   data-plane ULA destinations because of Rule 8 of [RFC6724].  The
   ACP edge node can pass the packet on to the data-plane, but the
   ACP source address should not be used for data-plane traffic, and
   return traffic may fail.

   If the ACP carries multiple ULA prefixes or non-ULA ACP connect
   prefixes, then correct source address selection becomes even more
   problematic.

   With separate ACP connect and data-plane subnets and [RFC4191]
   prefix announcements that are to be routed across the ACP connect
   interface, [RFC6724] source address selection Rule 5 (use the
   address of the outgoing interface) will be used, so that the above
   problems do not occur, even in more complex cases of multiple ULA
   and non-ULA prefixes in the ACP routing table.

   To achieve the same behavior with a combined ACP and data-plane
   interface, the ACP edge node needs to behave as two separate
   routers on the interface: one link-local IPv6 address/router for
   its ACP reachability, and one link-local IPv6 address/router for
   its data-plane reachability.  The Router Advertisements for both
   are as described above (Section 8.1.3): for the ACP, the ACP
   prefix is announced together with the [RFC4191] option for the
   prefixes routed across the ACP and a lifetime of 0 to disqualify
   this next-hop as a default router.  For the data-plane, the
   data-plane prefix(es) are announced together with whatever default
   router parameters are used for the data-plane.

   As a result, [RFC6724] source address selection Rule 5.5 may
   result in the same correct source address selection behavior on
   NMS hosts, without further configuration on them, as with separate
   ACP connect and data-plane interfaces.  As described in the text
   for Rule 5.5, this is only a "may", because IPv6 hosts are not
   required to track next-hop information.  If an NMS host does not
   do this, then separate ACP connect and data-plane interfaces are
   the preferable method of attachment.

   ACP edge nodes MAY support the combined ACP and data-plane
   interface.

8.1.5.  Use of GRASP

   It can and should be possible to use GRASP across ACP connect
   interfaces, especially in the architecturally correct solution
   where ACP connect is used as a mechanism to connect software
   (e.g.: ASAs or legacy NMS applications) to the ACP.  Given that
   the ACP is the security and transport substrate for GRASP, the
   trustworthiness of nodes/software allowed to participate in the
   ACP GRASP domain is one of the main reasons why the ACP section
   describes no solution with non-ACP routers participating in the
   ACP routing table.

   ACP connect interfaces can be dealt with in the GRASP ACP domain
   like any other ACP interface, assuming that any physical ACP
   connect interface is physically protected from attacks and that
   the connected software or NMS hosts are trusted to the same degree
   as the software on other ACP nodes.

   ACP edge nodes SHOULD have options to filter GRASP messages in and
   out of ACP connect interfaces (permit/deny) and MAY have more
   fine-grained filtering (e.g.: based on the IPv6 address of the
   originator or on the objective).

   When using "combined ACP and data-plane interfaces", care must be
   taken that only GRASP messages intended for the ACP GRASP domain
   received from software or NMS hosts are forwarded by ACP edge
   nodes.  Currently there is no definition for a GRASP security and
   transport substrate besides the ACP, so there is also no
   definition of how such software/NMS hosts could participate in two
   separate GRASP domains across the same subnet (ACP and data-plane
   domains).  At present, it is assumed that all GRASP packets on a
   combined ACP and data-plane interface belong to the GRASP ACP
   domain.  They must all use the ACP IPv6 addresses of the
   software/NMS hosts.  The link-local IPv6 addresses of software/NMS
   hosts (used for GRASP M_DISCOVERY and M_FLOOD messages) are also
   assumed to belong to the ACP address space.

8.2.  ACP through Non-ACP L3 Clouds (Remote ACP neighbors)

   Not all nodes in a network may support the ACP.  If non-ACP
   Layer-2 devices are between ACP nodes, the ACP will work across
   them, since it is IP based.  However, the autonomic discovery of
   ACP neighbors via DULL GRASP is only intended to work across L2
   connections, so it is not sufficient to autonomically create ACP
   connections across non-ACP Layer-3 devices.

8.2.1.  Configured Remote ACP neighbor

   On the ACP node, remote ACP neighbors are configured as follows:

   remote-peer = [ local-address, method, remote-address ]
   local-address = ip-address
   remote-address = transport-address
   transport-address =
        [ (ip-address | pattern) ?( , protocol ?(, port)) (, pmtu) ]
   ip-address = (ipv4-address | ipv6-address )
   method = "IKEv2" / "dTLS" / ..
   pattern = some IP address set

   For each candidate configured remote ACP neighbor, the secure
   channel protocol "method" is configured with its expected local IP
   address and remote transport endpoint.  The transport protocol and
   port number for the remote transport endpoint usually do not need
   to be configured if defaults for the secure channel protocol
   method exist.

   This is the same information that would be communicated via DULL
   for L2 adjacent candidate ACP neighbors.  DULL is not used,
   because the remote IP address would need to be configured anyhow,
   and if the remote transport address were not configured but
   learned via DULL, then this would create a third-party attack
   vector.

   The secure channel method leverages the configuration to filter
   incoming connection requests by the remote IP address.  This is
   supplemental security; the primary security is the mutual domain
   certificate based authentication of the secure channel protocol.

   On a hub node, the remote IP address may be set to some pattern
   instead of explicit IP addresses.  In this case, the node does not
   attempt to initiate secure channel connections but only acts as
   their responder.  This allows for simple hub&spoke setups of the
   ACP, where some method (subject to further specification)
   provisions the transport-address of hubs into spokes, and hubs
   accept connections from any spokes.  The typical use case for this
   is spokes connecting via the Internet to hubs.  For example, this
   could be a simple extension to BRSKI to allow zero-touch security
   across the Internet.
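
   A configuration entry matching the remote-peer grammar above could
   be modeled as sketched below.  This is purely illustrative: the
   field names mirror the grammar, but they are not a normative data
   model, and the addresses shown are documentation examples.

   from dataclasses import dataclass
   from typing import Optional

   @dataclass
   class RemotePeer:
       local_address: str              # expected local IP address
       method: str                     # "IKEv2", "dTLS", ...
       remote: str                     # remote IP address or pattern
       protocol: Optional[str] = None  # default of the chosen method
       port: Optional[int] = None      # default of the chosen method
       pmtu: Optional[int] = None      # if the method lacks PMTUD

   # A spoke initiating towards a hub.  A hub would instead configure
   # a pattern (e.g.: "0.0.0.0/0") and only respond, never initiate.
   spoke_cfg = RemotePeer(local_address="192.0.2.1", method="IKEv2",
                          remote="203.0.113.10")
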
   Unlike adjacent ACP neighbor connections, configured remote ACP
   neighbor connections can also run across IPv4.  Not all (future)
   secure channel methods may support running IPv6 (as used by the
   ACP across the secure channel connection) over IPv4 encapsulation.

   Unless the secure channel method supports PMTUD, it needs to be
   set up with a minimum MTU, or the path MTU (pmtu) should be
   configured.

8.2.2.  Tunneled Remote ACP Neighbor

   An IPinIP, GRE or other form of pre-existing tunnel is configured
   between two remote ACP peers, and the virtual interfaces
   representing the tunnel are configured with "ACP enable".  This
   will enable IPv6 link-local addresses and DULL on the tunnel.  As
   a result, the tunnel is used for the normal "L2 adjacent"
   candidate ACP neighbor discovery with DULL and the secure channel
   setup procedures described in this document.

   A tunneled remote ACP neighbor requires two encapsulations: the
   configured tunnel and the secure channel inside of that tunnel.
   This makes it in general less desirable than a configured remote
   ACP neighbor.  A benefit of tunnels is that they may be easier to
   implement, because there is no change to the ACP functionality -
   the ACP is just run over a virtual (tunnel) interface instead of
   only native interfaces.  The tunnel itself may also provide PMTUD
   while the secure channel method may not.  Or the tunnel mechanism
   may be permitted/possible through some firewall while the secure
   channel method may not.

8.2.3.  Summary

   Configured/tunneled remote ACP neighbors are less
   "indestructible" than L2 adjacent ACP neighbors based on
   link-local addressing, since they depend on more correct
   data-plane operations, such as routing and global addressing.

   Nevertheless, these options may be crucial to incrementally deploy
   the ACP, especially if it is meant to connect islands across the
   Internet.  Implementations SHOULD support at least tunneled remote
   ACP neighbors via GRE tunnels - which is likely the most common
   router-to-router tunneling protocol in use today.

   Future work could envisage an option where the edge nodes of the
   L3 cloud are configured to automatically forward ACP discovery
   messages to the right exit point.  This optimization is not
   considered in this document.

9.  Benefits (Informative)

9.1.  Self-Healing Properties

   The ACP is self-healing:

   o  New neighbors will automatically join the ACP after successful
      validation and will become reachable using their unique ULA
      address across the ACP.

   o  When any changes happen in the topology, the routing protocol
      used in the ACP will automatically adapt to the changes and
      will continue to provide reachability to all nodes.

   o  If the domain certificate of an existing ACP node gets revoked,
      it will automatically be denied access to the ACP, as its
      domain certificate will be validated against a Certificate
      Revocation List during authentication.  Since the revocation
      check is only done at the establishment of a new security
      association, existing ones are not automatically torn down.  If
      an immediate disconnect is required, existing sessions to a
      freshly revoked node can be reset.

   The ACP can also sustain network partitions and mergers.
   Practically all ACP operations are link-local, where a network
   partition has no impact.  Nodes authenticate each other using the
   domain certificates to establish the ACP locally.  Addressing
   inside the ACP remains unchanged, and the routing protocol inside
   both parts of the ACP will lead to two working (although
   partitioned) ACPs.

   There are a few central dependencies: A certificate revocation
   list (CRL) may not be available during a network partition; a
   suitable policy of not immediately disconnecting neighbors when no
   CRL is available can address this issue.  Also, a registrar or
   Certificate Authority might not be available during a partition.
   This may delay the renewal of certificates that are about to
   expire, and it may prevent the enrollment of new nodes during the
   partition.

   After a network partition, a re-merge will simply re-establish the
   previous state: certificates can be renewed, the CRL is available,
   and new nodes can be enrolled everywhere.  Since all nodes use the
   same trust anchor, a re-merge will be smooth.

   Merging two networks with different trust anchors requires the
   trust anchors to mutually trust each other (for example, by
   cross-signing).  As long as the domain names are different, the
   addressing will not overlap (see Section 6.10).

   It is also highly desirable for an implementation of the ACP to be
   able to run over interfaces that are administratively down.  If
   this is not feasible, then it might instead be possible to request
   an explicit operator override upon administrative actions that
   would administratively bring down an interface across which the
   ACP is running - especially if bringing down the ACP is known to
   disconnect the operator from the node.  For example, any such
   "down" administrative action could perform a dependency check to
   see whether the transport connection across which this action is
   performed would be affected by the down action (with the default
   RPL routing used, packet forwarding will be symmetric, so this is
   actually possible to check).

9.2.  Self-Protection Properties

9.2.1.  From the outside

   As explained in Section 6, the ACP is based on secure channels
   built between nodes that have mutually authenticated each other
   with their domain certificates.  The channels themselves are
   protected using standard encryption technologies such as DTLS or
   IPsec, which provide additional authentication during channel
   establishment, data integrity and data confidentiality protection
   of data inside the ACP, and, in addition, replay protection.

   An attacker will not be able to join the ACP without a valid
   domain certificate; likewise, packet injection and traffic
   sniffing are not possible due to the security provided by the
   encryption protocol.

   The ACP also serves as protection (through authentication and
   encryption) for OAM-relevant protocols that may not have secure
   protocol stack options, or where implementation or deployment of
   those options fails due to vendor/product/customer limitations.
   This includes protocols such as SNMP, NTP/PTP, DNS, DHCP, syslog,
   RADIUS/Diameter/TACACS, IPFIX/NetFlow - just to name a few.

   Protection via the ACP secure hop-by-hop channels for these
   protocols is only meant as a stopgap, though: The ultimate goal is
   for these and other protocols to use end-to-end encryption
   utilizing the domain certificate, and to rely on the ACP secure
   channels primarily for zero-touch reliable connectivity, not
   primarily for security.

   The remaining attack vector would be to attack the underlying AN
   protocols themselves, either via directed attacks or by
   denial-of-service attacks.  However, as the ACP is built using
   link-local IPv6 addresses, remote attacks are impossible.  The ULA
   addresses are only reachable inside the ACP context and are
   therefore unreachable from the data-plane.  Also, the ACP
   protocols should be implemented to be attack resistant and to not
   consume unnecessary resources even while under attack.

9.2.2.  From the inside

   The security model of the ACP is based on trusting all members of
   the group of nodes that receive an ACP domain certificate for the
   same domain.  Attacks from the inside by a compromised group
   member are therefore the biggest challenge.

   Group members must overall be secured so that there is no easy way
   to compromise them, such as a data-plane accessible privilege
   level with simple passwords.  This is a lot easier to achieve in
   devices whose software is designed from the ground up with
   security in mind than with legacy software based systems where the
   ACP is added on as just another feature.

   As explained above, traffic across the ACP SHOULD still be
   end-to-end encrypted whenever possible.  This includes traffic
   such as GRASP, EST and BRSKI inside the ACP.  This minimizes
   man-in-the-middle attacks by compromised ACP group members: such
   attackers cannot eavesdrop on or modify communications, they can
   merely filter them (which is unavoidable by any means).

   Further security can be achieved by constraining communication
   patterns inside the ACP, for example through roles that could be
   encoded into the domain certificates.  This is subject to future
   work.

9.3.  The Administrator View

   An ACP is self-forming, self-managing and self-protecting, and
   therefore has minimal dependencies on the administrator of the
   network.  Specifically, since it is independent of configuration,
   there is no scope for configuration errors on the ACP itself.  The
   administrator may have the option to enable or disable the entire
   approach, but detailed configuration is not possible.  This means
   that the ACP must not be reflected in the running configuration of
   nodes, except for a possible on/off switch.

   While configuration is not possible, an administrator must have
   full visibility of the ACP and all its parameters to be able to
   troubleshoot it.  Therefore, an ACP must support all show and
   debug options, as any other network function does.  Specifically,
   a network management system or controller must be able to discover
   the ACP and monitor its health.  This visibility of ACP operations
   must be clearly separated from visibility of the data-plane, so
   that automated systems never have to deal with ACP aspects unless
   they explicitly desire to do so.

   Since an ACP is self-protecting, a node not supporting the ACP, or
   one without a valid domain certificate, cannot connect to it.
   This means that by default a traditional controller or network
   management system cannot connect to an ACP.  See Section 8.1.1 for
   more details on how to connect an NMS host to the ACP.

10.  Further Considerations (Informative)

   The following sections cover topics that are beyond the primary
   scope of this document (e.g.: bootstrap), that explain decisions
   made in this document (e.g.: the choice of GRASP) or that describe
   desirable extensions or implementation details for the ACP that
   are not considered appropriate to standardize in this document.

10.1.  BRSKI Bootstrap (ANI)

   [I-D.ietf-anima-bootstrapping-keyinfra] (BRSKI) describes how
   nodes with an IDevID certificate can securely and zero-touch
   enroll with a domain certificate (LDevID) to support the ACP.
   BRSKI also leverages the ACP to enable zero-touch bootstrap of new
   nodes across networks without any configuration requirements
   across the transit nodes (e.g.: no DHCP/DNS forwarding/server
   setup).  This includes otherwise unconfigured networks as
   described in Section 3.2.  Therefore, BRSKI in conjunction with
   the ACP provides a secure and zero-touch management solution for
   complete networks.  Nodes supporting such an infrastructure (BRSKI
   and ACP) are called ANI nodes (Autonomic Networking
   Infrastructure), see [I-D.ietf-anima-reference-model].  Nodes that
   do not support an IDevID but only an (insecure) vendor specific
   Unique Device Identifier (UDI), or nodes whose manufacturer does
   not support a MASA, could use some future security-reduced version
   of BRSKI.

   When BRSKI is used to provision a domain certificate (which is
   called enrollment), the registrar (acting as an EST server) must
   include the subjectAltName / rfc822Name encoded ACP address and
   domain name for the enrolling node (called the pledge) in its
   response to the pledge's EST CSR Attribute request, which is
   mandatory in BRSKI.

   The Certificate Authority in an ACP network must not change the
   subjectAltName / rfc822Name in the certificate.  The ACP nodes can
   therefore find their ACP address and domain using this field in
   the domain certificate, both for themselves as well as for other
   nodes.

   The use of BRSKI in conjunction with the ACP can also help to
   further simplify maintenance and renewal of domain certificates.
   Instead of relying on CRLs, the lifetime of certificates can be
   made extremely short, for example on the order of hours.  When a
   node fails to connect to the ACP within its certificate lifetime,
   it cannot connect to the ACP to renew its certificate across it
   (using just EST), but it can still renew its certificate as an
   "enrolled/expired pledge" via the BRSKI bootstrap proxy.  This
   only requires that the BRSKI registrar honors expired domain
   certificates and that the pledge first attempts to perform TLS
   authentication for BRSKI bootstrap with its expired domain
   certificate - and only reverts to its IDevID when this fails.
   This mechanism could also render CRLs unnecessary, because the
   BRSKI registrar in conjunction with the CA would simply not renew
   revoked certificates - only a "do-not-renew" list would be
   necessary on the registrars/CA.
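
   The pledge-side renewal fallback described above is a simple
   ordered sequence of attempts.  A minimal sketch follows; all
   helper names (renew_via_est, bootstrap_via_brski, the node fields)
   are hypothetical placeholders - only the ordering is taken from
   the text above:

   class TlsAuthenticationError(Exception):
       pass

   def renew_domain_certificate(node):
       if node.acp_is_up():
           # Normal case: renew across the ACP using EST.
           return renew_via_est(node.ldevid)
       try:
           # "Enrolled/expired pledge": first attempt BRSKI bootstrap
           # authenticating with the expired domain certificate ...
           return bootstrap_via_brski(credential=node.ldevid)
       except TlsAuthenticationError:
           # ... and only revert to the IDevID when that fails.
           return bootstrap_via_brski(credential=node.idevid)
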
   In the absence of BRSKI or less secure variants thereof,
   provisioning of certificates may involve one or more touches or
   non-standardized automation.  Node vendors usually support
   provisioning of certificates into nodes via PKCS#7 (see
   [RFC2315]) and may support this provisioning through vendor
   specific models via Netconf ([RFC6241]).  If such nodes also
   support Netconf Zero-Touch ([I-D.ietf-netconf-zerotouch]), then
   this can be combined into zero-touch provisioning of domain
   certificates into nodes.  Unless there is an equivalent
   integration of Netconf connections across the ACP as there is in
   BRSKI, this combination would not support zero-touch bootstrap
   across an unconfigured network, though.

10.2.  ACP (and BRSKI) Diagnostics

   Even though the ACP and the ANI in general eliminate many manual
   configuration mistakes through their automation, it is important
   to provide good diagnostics for them.

   The basic diagnostics are support of (YANG) data models
   representing the complete (auto-)configuration and operational
   state of all components: BRSKI, GRASP, ACP and the infrastructure
   used by them: TLS/dTLS, IPsec, certificates, trust anchors, time,
   VRF and so on.  While necessary, this is not sufficient:

   Simply representing the state of components does not allow
   operators to quickly take action - unless they understand how to
   interpret the data, and that can mean a requirement for a deep
   understanding of all components and how they interact in the
   ACP/ANI.

   Diagnostics support should help to quickly answer the questions
   operators are expected to ask, such as "is the ACP working
   correctly?" or "why is there no ACP connection to a known
   neighboring node?".

   In current network management approaches, the logic to answer
   these questions is most often built as centralized diagnostics
   software that leverages the above mentioned data models.  While
   this approach is feasible for components utilizing the ANI, it is
   not sufficient to diagnose the ANI itself:

   o  Developing the logic to identify common issues requires
      operational experience with the components of the ANI.  Letting
      each management system define its own analysis is inefficient.
      As much as possible, future work should attempt to standardize
      data models that support common error diagnostics.

   o  When the ANI is not operating correctly, it may not be possible
      to run diagnostics remotely because of missing connectivity.
      The ANI should therefore have diagnostic capabilities available
      locally on the nodes themselves.

   o  Certain operations are difficult or impossible to monitor in
      real-time, such as initial bootstrap issues in a network
      location where no capabilities exist to attach local
      diagnostics.  Therefore it is important to also define means of
      capturing (logging) diagnostics locally for later retrieval.
      Ideally, these captures are also non-volatile so that they can
      survive extended power-off conditions - for example when a
      device that fails to be brought up zero-touch is sent back for
      diagnostics to a more appropriate location.

   The simplest form of diagnostics answering questions like those
   above is to represent the relevant information sequentially in
   dependency order, so that the first non-expected/non-operational
   item is the most likely root cause - or to just log/highlight that
   item.
   For example:

   Q: Is the ACP operational, i.e., can it accept neighbor
   connections?

   o  Check whether any configuration potentially necessary to make
      ACP/ANI operational is correct (see Section 10.3 for a
      discussion of such commands).

   o  Does the system time look reasonable, or could it be the
      default system time after a clock chip battery failure
      (certificate checks depend on a reasonable notion of time)?

   o  Does the node have keying material - domain certificate, trust
      anchors?

   o  If there is no keying material and ANI is supported/enabled,
      check the state of BRSKI (not detailed in this example).

   o  Check the validity of the domain certificate:

      *  Does the certificate authenticate against the trust anchor?

      *  Has it been revoked?

      *  Was the last scheduled attempt to retrieve a CRL successful
         (e.g.: do we know that our CRL information is up to date)?

      *  Is the certificate valid: is the validity start time in the
         past and the expiration time in the future?

      *  Does the certificate have a correctly formatted ACP
         information field?

   o  Was the ACP VRF successfully created?

   o  Is the ACP enabled on one or more interfaces that are up and
      running?

   If all this looks good, the ACP should be running locally "fine" -
   but we did not check any ACP neighborships.

   Q: Why does the node not create a working ACP connection to a
   neighbor on an interface?

   o  Is the interface physically up?  Does it have an IPv6
      link-local address?

   o  Is it enabled for the ACP?

   o  Do we successfully send DULL GRASP messages to the interface
      (link layer errors)?

   o  Do we receive DULL GRASP messages on the interface?  If not,
      some intervening L2 equipment performing bad MLD snooping could
      be the cause.  Provide, e.g.: diagnostics of the MLD querier
      IPv6 and MAC address.

   o  Do we see the ACP objective in any DULL GRASP message from that
      interface?  Diagnose the supported secure channel methods.

   o  Do we know the MAC address of the neighbor with the ACP
      objective?  If not, diagnose SLAAC/ND state.

   o  When did we last attempt to build an ACP secure channel to the
      neighbor?

   o  If it failed, why:

      *  Did the neighbor close the connection on us, or did we close
         the connection on it because the domain certificate
         membership check failed?

      *  If the neighbor closed the connection on us, provide any
         error diagnostics from the secure channel protocol.

      *  If we failed the attempt, display our local reason:

         +  There was no common secure channel protocol supported by
            the two neighbors (this could not happen on nodes
            supporting this specification, because it mandates common
            support for IPsec).

         +  The ACP domain certificate membership check
            (Section 6.1.2) fails:

            -  The neighbor's certificate does not have the required
               trust anchor.  Provide diagnostics of which trust
               anchor it has (this can identify whom the device
               belongs to).

            -  The neighbor's certificate does not have the same
               domain (or no domain at all).  Diagnose the
               domain-name and potentially other certificate
               information.

            -  The neighbor's certificate has been revoked or could
               not be authenticated via OCSP.

            -  The neighbor's certificate has expired - or is not yet
               valid.

      *  Any other connection issues in, e.g.: IKEv2 / IPsec, dTLS?
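
   Checklists such as the two above lend themselves to direct
   implementation as an ordered list of predicates evaluated until
   the first failure.  A minimal illustrative sketch follows; the
   check functions are trivial stand-ins for the state queries listed
   above, and only the evaluation strategy is the point:

   def check_config():       return True   # stand-ins for the real
   def check_time():         return True   # state queries listed in
   def check_keys():         return True   # the checklist above
   def check_certificate():  return True
   def check_acp_vrf():      return True
   def check_interfaces():   return True

   CHECKS = [
       ("ACP/ANI configuration correct",  check_config),
       ("system time plausible",          check_time),
       ("keying material present",        check_keys),
       ("domain certificate valid",       check_certificate),
       ("ACP VRF created",                check_acp_vrf),
       ("ACP enabled on an up interface", check_interfaces),
   ]

   def diagnose() -> str:
       for description, check in CHECKS:
           if not check():
               # The first failing item in dependency order is the
               # most likely root cause - log/highlight it.
               return f"likely root cause: {description}"
       return "ACP locally operational; proceed to neighbor checks"
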
   Q: Is the ACP operating correctly across its secure channels?

   o  Are there one or more active ACP neighbors with secure
      channels?

   o  Is the RPL routing protocol for the ACP running?

   o  Is there a default route to the root in the ACP routing table?

   o  Is there, for each direct ACP neighbor that is not reachable
      via the ACP virtual interface towards the root, a route in the
      ACP routing table?

   o  Is ACP GRASP running?

   o  Is at least one SRV.est objective cached (to support
      certificate renewal)?

   o  Is at least one BRSKI registrar objective cached (in case BRSKI
      is supported)?

   o  Is the BRSKI proxy operating normally on all interfaces where
      the ACP is operating?

   o  ...

   These lists are not necessarily complete, but they illustrate the
   principle and show that there is a variety of issues, ranging from
   normal operational causes (a neighbor in another ACP domain) over
   problems in credentials management (certificate lifetimes) and
   explicit security actions (revocation) to unexpected connectivity
   issues (intervening L2 equipment).

   The items so far illustrate how ANI operations can be diagnosed
   through passive observation of the operational state of its
   components, including historic/cached/counted events.  This is not
   necessarily sufficient to provide good enough diagnostics overall:

   The components of ACP and BRSKI are designed with security in
   mind, but they do not attempt to provide diagnostics for building
   the network itself.  Consider two examples:

   1.  BRSKI does not allow a neighboring device to identify the
       pledge's certificate (IDevID).  Only the selected BRSKI
       registrar can do this, but it may be difficult to disseminate
       information about undesired pledges from those registrars to
       the locations/nodes where information about those pledges is
       desired.

   2.  LLDP disseminates information about nodes to their immediate
       neighbors, such as node model/type/software and interface
       name/number of the connection.  This information is often
       helpful or even necessary in network diagnostics.  It can
       equally be considered too insecure to make this information
       available unprotected to all possible neighbors.

   An "interested adjacent party" can always determine the IDevID of
   a BRSKI pledge by behaving like a BRSKI proxy/registrar.
   Therefore, the IDevID of a BRSKI pledge is not meant to be
   protected - it just has to be queried and is not signaled
   unsolicited (as it would be in LLDP), so that other observers on
   the same subnet can determine who is an "interested adjacent
   party".

   Desirable options for additional diagnostics subject to future
   work include:

   1.  Determine whether LLDP should be a recommended functionality
       for ANI devices to improve diagnostics, and if so, which
       information elements it should signal (insecure).

   2.  As an alternative to LLDP, a DULL GRASP diagnostics objective
       could be defined to carry these information elements.

   3.  The IDevID of BRSKI pledges should be included in the selected
       insecure diagnostics option.

   4.  A richer set of diagnostics information should be made
       available via the secured ACP channels, using either
       single-hop GRASP or network wide "topology discovery"
       mechanisms.

10.3.  Enabling and disabling ACP/ANI

   Both ACP and BRSKI require interfaces to be operational enough to
   support the sending/receiving of their packets.
   In node types where interfaces are enabled by default (e.g.:
   without operator configuration), such as most L2 switches, this is
   less of a change in behavior than in most L3 devices (e.g.:
   routers), where interfaces are disabled by default.  In almost all
   network devices it is common, though, for configuration to change
   interfaces into a physically disabled state, and that would break
   the ACP.

   In this section, we discuss a suggested operational model to
   enable/disable interfaces and nodes for ACP/ANI in a way that
   minimizes the risk of operator actions breaking the ACP, and that
   also minimizes operator surprise when ACP/ANI becomes supported in
   node software.

10.3.1.  Filtering for non-ACP/ANI packets

   Whenever this document refers to enabling an interface for the ACP
   (or BRSKI), this only requires permitting the interface to
   send/receive the packets necessary to operate the ACP (or BRSKI) -
   but not any other data-plane packets.  Unless the data-plane is
   explicitly configured/enabled, all packets not required for
   ACP/BRSKI should be filtered on input and output:

   Both BRSKI and the ACP require IPv6 link-local-only operations on
   interfaces and DULL GRASP.  IPv6 link-local operations means the
   minimum signaling to auto-assign an IPv6 link-local address and to
   talk to neighbors via their link-local addresses: SLAAC (Stateless
   Address Auto-Configuration - [RFC4862]) and ND (Neighbor Discovery
   - [RFC4861]).  When the device is a BRSKI pledge, it may also
   require TCP/TLS connections to BRSKI proxies on the interface.
   When the device has keying material and the ACP is running, it
   requires DULL GRASP packets and the packets necessary for the
   secure channel mechanisms it supports, e.g.: IKEv2 and IPsec ESP
   packets or dTLS packets, to the IPv6 link-local address of an ACP
   neighbor on the interface.  It also requires TCP/TLS packets for
   its BRSKI proxy functionality, if it supports BRSKI.

10.3.2.  Admin Down State

   Interfaces on most network equipment have at least two states:
   "up" and "down".  These may have product specific names.  "Down",
   for example, could be called "shutdown", and "up" could be called
   "no shutdown".  The "down" state disables all interface operations
   down to the physical level.  The "up" state enables the interface
   enough for all possible L2/L3 services to operate on top of it,
   and it may also auto-enable some subset of them.  More commonly,
   the operation of the various L2/L3 services is controlled via
   additional node-wide or interface level options, but they all only
   become active when the interface is not "down".  Therefore, an
   easy way to ensure that all L2/L3 operations on an interface are
   inactive is to put the interface into "down" state.  The fact that
   this also physically shuts down the interface is in many cases
   just a side effect, but it may be important in other cases (see
   below).

   To provide ACP/ANI resilience against operators configuring
   interfaces into "down" state, this document recommends separating
   the "down" state of interfaces into an "admin down" state, where
   the physical layer is kept running and ACP/ANI can use the
   interface, and a "physical down" state.  Any existing "down"
   configurations would map to "admin down".  In "admin down", any
   existing L2/L3 services of the data-plane should see no difference
   to the "physical down" state.  To ensure that no data-plane
   packets can be sent/received, packet filtering could be
   established automatically as described above in Section 10.3.1.
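
   The filter policy implied by Section 10.3.1 can be written down as
   a small classifier.  The following Python sketch is illustrative
   only: the packet abstraction is a stand-in, and the GRASP
   multicast group, GRASP port and BRSKI proxy port values are
   assumptions (taken from or inspired by the referenced GRASP/BRSKI
   work), not definitions of this document:

   from dataclasses import dataclass

   @dataclass
   class Pkt:                       # minimal stand-in for a parsed packet
       src: str
       dst: str
       proto: str                   # "icmpv6", "udp", "tcp", "esp", ...
       icmpv6_type: int = 0
       dport: int = 0

   ND_TYPES = {133, 134, 135, 136}  # RS, RA, NS, NA
   ALL_GRASP_NEIGHBORS = "ff02::13" # assumed DULL GRASP multicast group
   GRASP_PORT = 7017                # assumed GRASP UDP port
   BRSKI_PROXY_PORT = 443           # hypothetical; method dependent

   def permit(p: Pkt) -> bool:
       """Permit only ACP/ANI packets on an 'admin down' interface."""
       if not p.src.lower().startswith("fe80:"):
           return False             # IPv6 link-local sources only
       if p.proto == "icmpv6" and p.icmpv6_type in ND_TYPES:
           return True              # SLAAC and Neighbor Discovery
       if (p.proto == "udp" and p.dst == ALL_GRASP_NEIGHBORS
               and p.dport == GRASP_PORT):
           return True              # DULL GRASP neighbor discovery
       if p.proto in ("esp", "ikev2", "dtls"):
           return True              # ACP secure channel protocols
       if p.proto == "tcp" and p.dport == BRSKI_PROXY_PORT:
           return True              # BRSKI pledge/proxy connections
       return False
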
   As necessary (see the discussion below), new configuration options
   could be introduced to issue "physical down".  Such options should
   be provided with additional checks to minimize the risk of issuing
   them in a way that breaks the ACP without automatic restoration.
   For example, they could be refused when issued from a control
   connection (netconf/ssh) that runs across the interface itself
   ("do not disconnect yourself").  Or they could be performed only
   temporarily and only be made permanent with an additional later
   reconfirmation.

   In the following subsections, important aspects of the
   introduction of the "admin down" state are discussed.

10.3.2.1.  Security

   Interfaces are physically brought down (or left in the default
   down state) as a form of security.  The "admin down" state as
   described above also provides a high level of security, because it
   only permits ACP/ANI operations, which are both well secured.
   Ultimately, it is subject to a security review for the deployment
   whether "admin down" is a feasible replacement for "physical
   down".

   The need to trust the security of ACP/ANI operations needs to be
   weighed against the operational benefits of permitting this:
   Consider the typical example of a CPE (customer premises
   equipment) with no on-site network expert.  User ports are in
   "physical down" state unless explicitly configured not to be.  In
   a misconfiguration situation, the uplink connection is incorrectly
   plugged into such a user port.  The device is disconnected from
   the network, and therefore no diagnostics from the network side
   are possible anymore.  Alternatively, all ports default to "admin
   down".  The ACP (but not the data-plane) would still automatically
   form.  Diagnostics from the network side are possible, and
   operator reaction could include either making this port the
   operational uplink port or instructing re-cabling.  Security-wise,
   only ACP/ANI could be attacked; all other functions are filtered
   on interfaces in "admin down" state.

10.3.2.2.  Fast state propagation and Diagnostics

   "Physical down" state propagates on many interface types (e.g.:
   Ethernet) to the other side.  This can trigger fast L2/L3 protocol
   reactions on the other side, and "admin down" would not have the
   same (fast) result.

   Bringing interfaces into "physical down" state is, to the best of
   our knowledge, today always the result of operator action, and
   never the result of (autonomous) L2/L3 services running on the
   nodes.  Therefore, one option is to change the operator action so
   that it does not rely on link-state propagation anymore.  This may
   not be possible when both sides are under different operator
   control, but in that case it is unlikely that the ACP is running
   across the link, and actually putting the interface into "physical
   down" state may still be a good option.

   Ideally, fast physical state propagation is replaced by fast
   software driven state propagation.  For example, a DULL GRASP
   "admin-state" objective could be used to autoconfigure a BFD
   session between the two sides of the link that would be used to
   propagate the "up" vs. "admin down" state.
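
   Such an "admin-state" objective is purely hypothetical at this
   point.  If defined, it might be flooded via DULL GRASP roughly as
   sketched below; the objective name, value encoding and simplified
   message layout are illustrative assumptions, not definitions of
   this document or of GRASP:

   # Hypothetical DULL GRASP "admin-state" objective (illustration
   # only; neither the name nor the encoding is defined anywhere).
   M_FLOOD = 9   # GRASP flood message type (per the GRASP spec work)

   def admin_state_flood(session_id: int, initiator: str,
                         admin_up: bool) -> list:
       objective = ["admin-state",   # hypothetical objective name
                    0,               # flags
                    1,               # loop count: link-local only
                    "up" if admin_up else "admin-down"]
       # Simplified flood message: [type, session-id, initiator,
       # ttl, objectives]; the real CBOR layout is richer.
       return [M_FLOOD, session_id, initiator, 0, [objective]]
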
   Triggering "physical down" state may also be used as a means of
   diagnosing cabling in the absence of easier methods.  This is more
   complex than automated neighbor diagnostics, because it requires
   coordinated remote access to both (likely) sides of a link to
   determine whether up/down toggling causes the same reaction on the
   remote side.

   See Section 10.2 for a discussion of how LLDP and/or diagnostics
   via GRASP could be used to provide neighbor diagnostics, thereby
   hopefully eliminating the need for "physical down" for neighbor
   diagnostics - as long as both neighbors support ACP/ANI.

10.3.2.3.  Low Level Link Diagnostics

   "Physical down" is performed to diagnose low-level interface
   behavior when higher layer services (e.g.: IPv6) are not working.
   Especially Ethernet links are subject to a wide variety of
   possible wrong configurations/cablings if they do not support
   automatic selection of variable parameters such as speed
   (10/100/1000 Mbps), crossover (Auto-MDIX) and connector (fiber,
   copper - when interfaces have multiple connectors but can only
   enable one at a time).  The need for low level link diagnostics
   can therefore be minimized by using fully autoconfiguring links.

   In addition to "physical down", low level diagnostics of Ethernet
   or other interfaces also involve the creation of other states on
   interfaces, such as physical loopback (internal and/or external)
   or bringing down all packet transmissions for reflection/
   cable-length measurements.  Any of these options would disrupt the
   ACP as well.

   In cases where such low-level diagnostics of an operational link
   are desired, but where the link could be a single point of failure
   for the ACP, ASAs on both nodes of the link could perform a
   negotiated diagnostic that automatically terminates in a
   predetermined manner without dependence on external input,
   ensuring the link will become operational again.

10.3.2.4.  Power Consumption

   The power consumption of "physical down" interfaces may be
   significantly lower than that of interfaces in "admin down" state,
   for example on long range fiber interfaces.  Assuming reasonable
   clocks on devices, mechanisms for infrequent periodic probing
   could still allow ACP connectivity to be established automatically
   across such links: bringing an interface up for 5 seconds every
   500 seconds to probe whether there is an ACP neighbor on the
   remote end results in a 1% duty cycle of power consumption.

10.3.3.  Interface level ACP/ANI enable

   The interface level configuration option "ACP enable" enables ACP
   operations on an interface, starting with ACP neighbor discovery
   via DULL GRASP.  The interface level configuration option "ANI
   enable" on nodes supporting BRSKI and ACP starts with BRSKI pledge
   operations when there is no domain certificate on the node.  On
   ACP/BRSKI nodes, "ACP enable" may not need to be supported, but
   only "ANI enable".  Unless overridden by global configuration
   options (see below), "ACP/ANI enable" will cause the "down" state
   of an interface to behave as "admin down".

10.3.4.  Which interfaces to auto-enable?

   Section 6.3 requires that "ACP enable" is automatically set on
   native interfaces, but not on non-native interfaces (reminder: a
   native interface is one that exists without operator configuration
   action, such as physical interfaces in physical devices).
10.3.3. Interface level ACP/ANI enable

The interface level configuration option "ACP enable" enables ACP operations on an interface, starting with ACP neighbor discovery via DULL GRASP. The interface level configuration option "ANI enable" on nodes supporting BRSKI and ACP starts with BRSKI pledge operations when there is no domain certificate on the node. On such ACP/BRSKI nodes, it may suffice to support only "ANI enable" and not "ACP enable". Unless overridden by global configuration options (see later), "ACP/ANI enable" will cause the "down" state of an interface to behave as "admin down".

10.3.4. Which interfaces to auto-enable?

Section 6.3 requires that "ACP enable" is automatically set on native interfaces, but not on non-native interfaces (reminder: a native interface is one that exists without operator configuration action, such as physical interfaces in physical devices).

Ideally, "ACP enable" is set automatically on all interfaces that provide access to additional connectivity that allows reaching more nodes of the ACP domain. The best set of interfaces necessary to achieve this cannot be determined automatically. Native interfaces are the best automatic approximation.

Consider an ACP domain of ACP nodes transitively connected via native interfaces. A data-plane tunnel between two of these nodes that are non-adjacent is created and "ACP enable" is set for that tunnel. ACP RPL sees this tunnel as just a single hop. Routes in the ACP would use this hop as an attractive path element to connect regions adjacent to the tunnel nodes. As a result, the actual hop-by-hop paths used by traffic in the ACP can become worse. In addition, correct forwarding in the ACP now depends on correct data-plane forwarding configuration, including QoS, filtering and other security measures, on the data-plane path across which this tunnel runs. This is the main reason why "ACP/ANI enable" should not be set automatically on non-native interfaces.

If the tunnel connected two previously disjoint ACP regions, then it would likely be useful for the ACP. A data-plane tunnel could also run across nodes without ACP and provide additional connectivity for an already connected ACP network. The benefit of this additional ACP redundancy has to be weighed against the problems of relying on the data-plane. If a tunnel connects two separate ACP regions: how many tunnels should be created to connect these ACP regions reliably enough? Between which nodes? These are all standard tunneled network design questions not specific to the ACP, and there are no generic, fully automated answers.

Instead of automatically setting "ACP enable" on these types of interfaces, the decision needs to be based on the purpose of the non-native interface, and "ACP enable" needs to be set in conjunction with the mechanism through which the non-native interface is created/configured.

In addition to the explicit setting of "ACP/ANI enable", non-native interfaces also need to support configuration of the ACP RPL cost of the link - to avoid the problems of attracting too much traffic to the link as described above.

Even native interfaces may not be able to automatically perform BRSKI or ACP because they may require additional operator input to become operational. Examples include DSL interfaces requiring PPPoE credentials or mobile interfaces requiring credentials from a SIM card. Whatever mechanism is used to provide the necessary configuration to the device to enable the interface can also be expanded to decide whether or not to set "ACP/ANI enable".

The goal of automatically setting "ACP/ANI enable" on interfaces (native or not) is to eliminate unnecessary "touches" to the node to make its operation as much as possible "zero-touch" with respect to ACP/ANI. If there are "unavoidable touches" such as creating/configuring a non-native interface or provisioning credentials for a native interface, then "ACP/ANI enable" should be added as an option to that "touch". If a wrong "touch" is easily fixed (not creating another high-cost touch), then the default should be to not enable ANI/ACP; if it is potentially expensive or slow to fix (e.g.: parameters on a SIM card shipped to a remote location), then the default should be to enable ACP/ANI, as condensed in the sketch below.
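The following sketch condenses this reasoning into a single decision function; the parameters are hypothetical abstractions of the interface properties discussed above, not configuration options defined by this document.

   # Sketch (assumption): default for "ACP/ANI enable" on an interface.
   def auto_acp_enable(native: bool, needs_credentials: bool,
                       touch_requested_enable: bool,
                       wrong_touch_expensive: bool) -> bool:
       if native and not needs_credentials:
           return True   # e.g. physical ports: enable automatically
       if touch_requested_enable:
           return True   # operator opted in during the unavoidable touch
       # Otherwise the default depends on how costly a wrong "touch"
       # is to fix: expensive/slow to fix => enable by default.
       return wrong_touch_expensive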
10.3.5. Node Level ACP/ANI enable

A node level command "ACP/ANI enable [up-if-only]" enables ACP or ANI on the node (ANI = ACP + BRSKI). Without this command set, any interface level "ACP/ANI enable" is ignored. Once set, ACP/ANI will operate on interfaces where "ACP/ANI enable" is set. Setting of interface level "ACP/ANI enable" is either automatic (default) or explicit through operator action as described in the previous section.

If the option "up-if-only" is selected, the behavior of "down" interfaces is unchanged, and ACP/ANI will only operate on interfaces where "ACP/ANI enable" is set and that are "up". When the option is not selected, the "down" state of interfaces with "ACP/ANI enable" is modified to behave as "admin down".

10.3.5.1. Brownfield nodes

A "brownfield" node is one that already has a configured data-plane.

Executing global "ACP/ANI enable [up-if-only]" on each node is the only command necessary to create an ACP across a network of brownfield nodes once all the nodes have a domain certificate. When BRSKI is used ("ANI enable"), provisioning of the certificates only requires setting up a single BRSKI registrar node, which could also implement a CA for the network. This is the simplest way to introduce ACP/ANI into existing (== brownfield) networks.

The need to explicitly enable ACP/ANI is especially important for brownfield nodes because otherwise software updates may introduce support for ACP/ANI: automatic enablement of ACP/ANI in networks where the operator not only does not want ACP/ANI but likely has never even heard of it could be quite irritating, especially when the "down" behavior is changed to "admin down".

Automatically setting "ANI enable" on brownfield nodes where the operator is unaware of it could also be a critical security issue, depending on the vouchers used by BRSKI on these nodes. An attacker could claim to be the owner of these devices and create an ACP that the attacker has access to and control over. In networks where the operator explicitly wants to enable the ANI, this could not happen, because the operator would create a BRSKI registrar that would discover attack attempts. Nodes requiring "ownership vouchers" would not be subject to this attack. See [I-D.ietf-anima-bootstrapping-keyinfra] for more details. Note that a global "ACP enable" alone is not subject to this type of attack, because it always depends on some other mechanism first provisioning domain certificates into the device.

10.3.5.2. Greenfield nodes

A "greenfield" node is one that did not have any prior configuration.

For greenfield nodes, only "ANI enable" is relevant. If a mechanism other than BRSKI is used to (zero-touch) bootstrap a node, then it is up to that mechanism to provision domain certificates and to set global "ACP enable" as desired.

Nodes supporting full ANI functionality set "ANI enable" automatically when they determine that they are greenfield, e.g.: that they are powering on from factory condition. They will then put all native interfaces into "admin down" state and start to perform BRSKI pledge functionality - and once a domain certificate is enrolled, they automatically enable the ACP.
Attempted BRSKI pledge operations in greenfield state should terminate automatically when another method of configuring the node is used. Methods that indicate some form of physical possession of the device, such as configuration via the serial console, could lead to immediate termination of BRSKI, while other parallel autoconfiguration methods subject to remote attacks might lead to BRSKI termination only after they were successful. Details of this may vary widely across different types of nodes. When BRSKI pledge operation terminates, this will automatically unset "ANI enable" and should terminate any temporarily needed state on the device used to perform BRSKI - DULL GRASP, BRSKI pledge and any IPv6 configuration on interfaces.

10.3.6. Undoing ANI/ACP enable

Disabling ANI/ACP by undoing "ACP/ANI enable" is a risk for the reliable operation of the ACP if it can be executed by mistake or without authorization. This behavior could be influenced through some additional property in the certificate (e.g.: in the domain information extension field), subject to future work: In an ANI deployment intended for convenience, disabling it could be allowed without further constraints. In an ANI deployment considered to be critical, more checks would be required. One very controlled option would be to not permit these commands unless the domain certificate has been revoked or is denied renewal. Configuring this option would be a parameter on the BRSKI registrar(s). As long as the node has not received a domain certificate, undoing "ANI/ACP enable" should not have any additional constraints.

10.3.7. Summary

Node-wide "ACP/ANI enable [up-if-only]" commands enable the operation of ACP/ANI. This is only auto-enabled on ANI greenfield devices; otherwise it must be configured explicitly.

If the option "up-if-only" is not selected, interfaces enabled for ACP/ANI interpret the "down" state as "admin down" and not "physical down". In "admin down" state, all non-ACP/ANI packets are filtered, but the physical layer is kept running to permit ACP/ANI to operate.

(New) commands that result in physical interruption ("physical down", "loopback") of ACP/ANI enabled interfaces should be built to protect continuance or reestablishment of the ACP as much as possible.

Interface level "ACP/ANI enable" controls per-interface operation. It is enabled by default on native interfaces and has to be configured explicitly on other interfaces.

Disabling "ACP/ANI enable", globally and per-interface, should have additional checks to minimize undesired breakage of the ACP. The degree of control could be a domain-wide parameter in the domain certificates.

10.4. ACP Neighbor discovery protocol selection

This section discusses why GRASP DULL was chosen as the discovery protocol for L2-adjacent candidate ACP neighbors. The contenders considered were GRASP, mDNS and LLDP.
10.4.1. LLDP

LLDP (and Cisco's similar CDP) are examples of L2 discovery protocols that terminate their messages on L2 ports. If those protocols were chosen for ACP neighbor discovery, ACP neighbor discovery would therefore also terminate on L2 ports. This would prevent ACP construction over non-ACP capable but LLDP or CDP enabled L2 switches. LLDP has extensions using different MAC addresses, and this could have been an option for ACP discovery as well, but the additionally required IEEE standardization and definition of a profile for such a modified instance of LLDP seemed to be more work than the benefit of "reusing the existing protocol" LLDP for this very simple purpose.

10.4.2. mDNS and L2 support

mDNS [RFC6762] with DNS-SD RRs (Resource Records) as defined in [RFC6763] is a key contender as an ACP discovery protocol. Because it relies on link-local IP multicast, it does operate at the subnet level and is also found in L2 switches. The authors of this document are not aware of mDNS implementations that terminate their mDNS messages on L2 ports instead of at the subnet level. If mDNS were used as the ACP discovery mechanism on an ACP capable (L3)/L2 switch as outlined in Section 7, then such termination would need to be implemented. It is likely that termination of mDNS messages could only be applied to all mDNS messages from such a port, which would then make it necessary to software-forward any non-ACP related mDNS messages to maintain prior non-ACP mDNS functionality. Adding support for ACP into such L2 switches via mDNS could therefore create regression problems for prior mDNS functionality on those nodes. Given the low performance of software forwarding in many L2 switches, this could also make the ACP risky to support on such L2 switches.

10.4.3. Why DULL GRASP

LLDP was not chosen because of the above-mentioned issues. mDNS was not selected because of the above L2 mDNS considerations and because of the following additional points:

If mDNS is not already present in a node, it would be more work to implement than DULL GRASP, and if an existing implementation of mDNS were reused, it would likely require more code space than a separate implementation of DULL GRASP or a shared implementation of DULL GRASP and GRASP in the ACP.

10.5. Choice of routing protocol (RPL)

This section explains why RPL, the "IPv6 Routing Protocol for Low-Power and Lossy Networks" [RFC6550], was chosen as the default (and in this specification, only) routing protocol for the ACP. The choice and the profile explained above were derived from a pre-standard implementation of ACP that was successfully deployed in operational networks.

Requirements for routing in the ACP are:

o  Self-management: The ACP must build automatically, without human intervention. Therefore the routing protocol must also work completely automatically. RPL is a simple, self-managing protocol, which does not require zones or areas; it is also self-configuring, since configuration is carried as part of the protocol (see Section 6.7.6 of [RFC6550]).

o  Scale: The ACP builds over an entire domain, which could be a large enterprise or service provider network. The routing protocol must therefore support domains of 100,000 nodes or more, ideally without the need for zoning or separation into areas. RPL has this scale property. This is based on extensive use of default routing. RPL also has other scalability improvements, such as selecting only a subset of peers instead of all possible ones, and trickle support for information synchronization.
o  Low resource consumption: The ACP supports traditional network infrastructure and thus runs in addition to traditional protocols. The ACP, and specifically the routing protocol, must have low resource consumption in terms of both memory and CPU requirements. Specifically, at edge nodes, where memory and CPU are scarce, consumption should be minimal. RPL builds a destination-oriented directed acyclic graph (DODAG), where the main resource consumption is at the root of the DODAG. The closer to the edge of the network, the less state needs to be maintained. This adapts nicely to the typical network design. Also, all changes below a common parent node are kept below that parent node.

o  Support for unstructured address space: In the Autonomic Networking Infrastructure, node addresses are identifiers and may not be assigned in a topological way. Also, nodes may move topologically without changing their address. Therefore, the routing protocol must support a completely unstructured address space. RPL is specifically made for mobile ad-hoc networks, with no assumptions on topologically aligned addressing.

o  Modularity: To keep the initial implementation small, yet allow for more complex methods later, it is highly desirable that the routing protocol have a simple base functionality but be able to import new functional modules if needed. RPL has this property with the concept of "objective function", which is a plugin to modify routing behavior.

o  Extensibility: Since the Autonomic Networking Infrastructure is a new concept, it is likely that changes in the way of operation will happen over time. RPL allows for new objective functions to be introduced later, which allow changes to the way the routing protocol creates the DAGs.

o  Multi-topology support: It may become necessary in the future to support more than one DODAG for different purposes, using different objective functions. RPL allows for the creation of several parallel DODAGs, should this be required. This could be used to create different topologies to reach different roots.

o  No need for path optimization: RPL does not necessarily compute the optimal path between any two nodes. However, the ACP does not require this today, since it carries mainly non-delay-sensitive feedback loops. It is possible that different optimization schemes become necessary in the future, but RPL can be extended (see point "Extensibility" above).

10.6. Extending ACP channel negotiation (via GRASP)

The mechanism described in the normative part of this document to support multiple different ACP secure channel protocols without a single network-wide MTI protocol is important for allowing ACP secure channel protocols to be extended beyond what is specified in this document, but it runs into problems if it is used to select among multiple protocols:

The potential need to have multiple of these security associations run in parallel, even temporarily, to determine which of them works best does not support the most lightweight implementation options.

The simple policy of letting one side (Alice) decide what is best may not lead to the mutually best result.
These two limitations could be solved more easily if the solution were more modular: as few initial secure channel negotiation protocols as possible would be used, and these protocols would then take on the responsibility of supporting more flexible objectives to negotiate the mutually preferred ACP secure channel protocol.

IKEv2 is the IETF standard protocol to negotiate network security associations. It is meant to be extensible, but it is unclear whether it would be feasible to extend IKEv2 to support possible future requirements for ACP secure channel negotiation:

Consider the simple case where the use of native IPsec vs. IPsec via GRE is to be negotiated and the objective is maximum throughput. Both sides would indicate some agreed-upon performance metric, and the preferred encapsulation would be the one yielding the higher performance on the slower side. IKEv2 does not support negotiation with this objective.

Consider that dTLS and some form of 802.1AE ([MACSEC]) are to be added as negotiation options - and the performance objective should work across all IPsec, dTLS and 802.1AE options. In the case of MACsec, the negotiation would also need to determine a key for the peering. It is unclear if it would even be appropriate to consider extending the scope of negotiation in IKEv2 to those cases. Even if feasible to define, it is unclear if implementations of IKEv2 would be eager to adopt those types of extensions, given the long cycles of security testing that necessarily go along with core security protocols such as IKEv2 implementations.

A more modular alternative to extending IKEv2 could be to layer a modular negotiation mechanism on top of the multitude of existing or possible future secure channel protocols. For this, GRASP over TLS could be considered as a first ACP secure channel negotiation protocol. The following are initial considerations for such an approach; a rough sketch of the selection logic follows the notes below. A full specification is subject to a separate document:

To explicitly allow negotiation of the ACP channel protocol, GRASP over a TLS connection using the GRASP_LISTEN_PORT and the node's and peer's link-local IPv6 addresses is used. When Alice and Bob support GRASP negotiation, they prefer it over any other non-explicitly negotiated security association protocol and should defer trying any non-negotiated ACP channel protocol until it is clear that GRASP/TLS will not work with the peer.

When Alice and Bob successfully establish the GRASP/TLS session, they negotiate the channel mechanism to use based on objectives such as performance and perceived quality of the security. After agreeing on a channel mechanism, Alice and Bob start the selected channel protocol. Once the secure channel protocol is successfully running, the GRASP/TLS connection can be kept alive or timed out as long as the selected channel protocol maintains a secure association between Alice and Bob. When it terminates, it needs to be re-negotiated via GRASP/TLS.

Notes:

o  Negotiation of a channel type may require IANA assignments of code points.

o  TLS is subject to reset attacks, which IKEv2 is not. Normally, ACP connections (as specified in this document) will be over link-local addresses, so the attack surface for this one issue in TCP should be reduced (note that this may not be true when the ACP is tunneled as described in Section 8.2.2).

o  GRASP packets received inside a TLS connection established for GRASP/TLS ACP negotiation are assigned to a separate GRASP domain unique to that TLS connection.
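The following sketch illustrates one possible mutual selection rule under the throughput objective discussed above: pick the channel whose slower-side performance is highest. The option names, metrics and objective encoding are assumptions for illustration, not part of this specification.

   # Sketch (assumption): mutually best ACP channel selection as could
   # be negotiated over GRASP/TLS. Names and numbers are illustrative.
   CHANNELS = {"IKEv2/IPsec": 900, "IPsec/GRE": 950, "dTLS": 990}

   def best_mutual_channel(local: dict, remote: dict) -> str:
       # Choose the common channel maximizing the slower side's
       # throughput, so neither side simply dictates its preference.
       common = set(local) & set(remote)
       return max(common, key=lambda c: min(local[c], remote[c]))

   # Alice sends her options/metrics as an objective value over the
   # GRASP/TLS session; Bob replies with his. Both sides then compute
   # the same deterministic result.
   remote = {"IKEv2/IPsec": 800, "dTLS": 850}
   print(best_mutual_channel(CHANNELS, remote))   # -> "dTLS"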
10.7. CAs, domains and routing subdomains

There is a wide range of options for setting up different ACP solutions by appropriately using CAs and the domain and rsub elements in the domain information field of the domain certificate. We summarize these options here, as they have been explained in different parts of this document, and discuss possible and desirable extensions:

An ACP domain is the set of all ACP nodes using certificates from the same CA with the same domain field. GRASP inside the ACP is run across all transitively connected ACP nodes in a domain.

The rsub element in the domain information field primarily allows the use of addresses from different ULA prefixes. One use case is to create multiple networks that initially may be separated, but where it should be possible to connect them without further extensions to the ACP when necessary.

Another use case for routing subdomains is as the starting point for structuring routing inside an ACP. For example, different routing subdomains could run different routing protocols or different instances of RPL, and auto-aggregation/distribution of routes could be done across inter-routing-subdomain ACP channels based on negotiation (e.g.: via GRASP). This is subject to further work.

RPL scales very well. It is not necessary to use multiple routing subdomains to scale ACP domains in the way it would be if other routing protocols were used. Routing subdomains exist only as options for the above-mentioned reasons.

If different ACP domains are to be created that should not be able to connect to each other by default, these ACP domains simply need to have different domain elements in the domain information field. These domain elements can be arbitrary, including subdomains of one another: domains "example.com" and "research.example.com" are separate domains if both are domain elements in the domain information element of certificates.

It is not necessary to have a separate CA for different ACP domains: an operator can use a single CA to sign certificates for multiple ACP domains that are not allowed to connect to each other, because the check for ACP adjacencies includes a comparison of the domain part. A simplified sketch of this check follows at the end of this section.

If multiple independent networks choose the same domain name but have their own CAs, they will not form a single ACP domain because of the CA mismatch. Therefore, there is no problem in choosing domain names that are potentially also used by others. Nevertheless, it is highly recommended to use domain names that have a high probability of being unique. It is recommended to use domain names that start with a DNS domain name owned by the assigning organization and unique within it, for example "acp.example.com" if the organization owns "example.com".

Future extensions, primarily through Intent, can create more flexible options for how to build ACP domains.
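The sketch below condenses the adjacency rule just described: peers form an ACP only when their certificates chain to the same CA and carry the same domain element, while rsub only affects addressing. It is a simplification of the membership check of Section 6.1.2, with illustrative field names.

   # Sketch (assumption): simplified ACP domain membership comparison.
   def may_form_acp(own: dict, peer: dict) -> bool:
       return (own["ca"] == peer["ca"]              # same trust anchor
               and own["domain"] == peer["domain"]) # same domain element

   alice = {"ca": "CA-1", "domain": "example.com", "rsub": "research"}
   bob   = {"ca": "CA-1", "domain": "example.com", "rsub": ""}
   eve   = {"ca": "CA-2", "domain": "example.com", "rsub": ""}
   assert may_form_acp(alice, bob)      # rsub differs; domain/CA match
   assert not may_form_acp(alice, eve)  # same domain name, CA mismatch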
Intent could modify the ACP connection check to permit connections between different domains.

If different domains use the same CA, one would change the ACP setup to permit the ACP to be established between the two ACP nodes, but without routing or ACP GRASP being built across this adjacency. The main difference from routing subdomains is that the ACP GRASP instance is not built across the adjacency. Instead, one would only build a point-to-point GRASP instance between those peers to negotiate what type of exchanges are desired across that connection. This would include routing negotiation, how much GRASP information to transit, and what data-plane forwarding should be done. This approach could also allow Intent to be injected into the network from only one side and propagated via this GRASP connection.

If different domains have different CAs, they should start to trust each other by Intent injected into both domains that would add the other domain's CA as a trust point during the ACP connection setup - and then follow up with the previous approach for inter-domain connections across domains with the same CA (e.g.: GRASP negotiation).

10.8. Adopting ACP concepts for other environments

The ACP as specified in this document is very explicit about the choice of options to allow interoperable implementations. The choices made may not be the best for all environments, but the concepts used by the ACP can be used to build derived solutions:

The ACP specifies the use of ULA and deriving its prefix from the domain name so that no address allocation is required to deploy the ACP. The ACP will work equally well using not ULA but any other /50 IPv6 prefix. This prefix could simply be a configuration of the registrars when using BRSKI to enroll the domain certificates - instead of the registrar deriving the /50 ULA prefix from the AN domain name.
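As a rough illustration of this default derivation, the sketch below hashes a routing subdomain name with SHA-256 and uses the first 40 bits as the ULA "global ID", yielding the base /48 ULA prefix (the addressing sub-scheme bits then extend this to the /50 mentioned above). The exact input encoding and field layout are defined in Section 6.10; this code simplifies them.

   # Sketch (assumption): deriving the base ACP ULA prefix from the
   # domain name, simplified relative to Section 6.10.
   import hashlib

   def acp_ula_prefix(routing_subdomain: str) -> str:
       digest = hashlib.sha256(routing_subdomain.encode()).digest()
       prefix = b"\xfd" + digest[:5]      # fd00::/8 + 40-bit global ID
       groups = [prefix[i:i + 2].hex() for i in (0, 2, 4)]
       return ":".join(groups) + "::/48"

   print(acp_ula_prefix("acp.example.com"))  # -> some "fdXX:..."/48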
Some solutions may already have an auto-addressing scheme, for example derived from existing unique device identifiers (e.g.: MAC addresses). In those cases it may not be desirable to assign addresses to devices via the ACP address information field in the way described in this document. The certificate may simply serve to identify the ACP domain, and the address field could be empty/unused. The only fix required to the way the ACP otherwise operates is to define another element in the domain certificate for the two peers to decide who is Alice and who is Bob during secure channel building. Note though that future work may leverage the ACP address to authenticate "ownership" of the address by the device. If the address used by a device is derived from some pre-existing permanent local ID (such as a MAC address), then it would be useful to store that address in the certificate using the format of the ACP address information field or in a similar way.

The ACP is defined as a separate VRF because it intends to support well-managed networks with a wide variety of configurations, where reliable, configuration-indestructible connectivity therefore cannot be achieved from the data-plane itself. In solutions where all functions impacting transit connectivity are fully automated (including security), indestructible and resilient, it would be possible to eliminate the need for the ACP to be a separate VRF. Consider the simplest example system, in which there is no separate data-plane, but the ACP is the data-plane. Add BRSKI, and it becomes a fully autonomic network - except that it does not support automatic addressing for user equipment. This gap can then be closed, for example, by adding a solution derived from [I-D.ietf-anima-prefix-management].

The routing protocol chosen by the ACP design (RPL) explicitly does not optimize for shortest paths and fastest convergence. Variations of the ACP may want to use a different routing protocol.

Variations such as which routing protocol to use, or whether to instantiate an ACP in a VRF or (as suggested above) as the actual data-plane, can be chosen automatically in implementations built to support multiple options by deriving them from future parameters in the certificate. Parameters in certificates should be limited to those that would not need to be changed more often than certificates need to be updated anyhow; alternatively, it should be ensured that these parameters can be provisioned before the variation of an ACP is activated in a node. Using BRSKI, this could be done, for example, as additional follow-up signaling directly after the certificate enrollment, still leveraging the BRSKI TLS connection and therefore not introducing any additional connectivity requirements.

Last but not least, secure channel protocols, including their encapsulation, are easily added to ACP solutions. Secure channels may even be replaced by simple neighbor authentication to create simplified ACP variations for environments where no real security is required, just protection against non-malicious misconfiguration, or for environments where all traffic is known or forced to be end-to-end protected and other means for infrastructure protection are used. Any future network OAM should always use end-to-end security anyhow and can leverage the domain certificates; it is therefore not dependent on the security provided by ACP secure channels.

11. Security Considerations

An ACP is self-protecting, and there is no need to apply configuration to make it secure. Its security therefore does not depend on configuration.

However, the security of the ACP depends on a number of other factors:

o  The usage of domain certificates depends on a valid supporting PKI infrastructure. If the chain of trust of this PKI infrastructure is compromised, the security of the ACP is also compromised. This is typically under the control of the network administrator.

o  Security can be compromised by implementation errors (bugs), as in all products.

There is no prevention of source-address spoofing inside the ACP. This implies that if an attacker gains access to the ACP, it can spoof all addresses inside the ACP and fake messages from any other node.

Fundamentally, security depends on correct operation, implementation and architecture. Autonomic approaches such as the ACP largely eliminate the dependency on correct operation; implementation and architectural mistakes are still possible, as in all networking technologies.

Many details of the ACP are designed with security in mind and discussed elsewhere in the document:
IPv6 addresses used by nodes in the ACP are covered as part of the node's domain certificate as described in Section 6.1.1. This even allows verification of the ownership of a peer's IPv6 address when using a connection authenticated with the domain certificate.

The ACP acts as a security (and transport) substrate for GRASP inside the ACP, such that GRASP is protected not only against attacks from the outside, but also against attacks from compromised inside attackers - by relying not only on hop-by-hop security of ACP secure channels, but also adding end-to-end security for those GRASP messages. See Section 6.8.2.

The ACP provides for secure, resilient zero-touch discovery of EST servers for certificate renewal. See Section 6.1.3.

The ACP provides extensible, auto-configuring hop-by-hop protection of the ACP infrastructure via the negotiation of hop-by-hop secure channel protocols. See Section 6.5 and Section 10.6.

The ACP is designed to minimize attacks from the outside by minimizing its dependencies on any non-ACP operations on a node. The only dependency in the specification in this document is the need to share link-local addresses for the ACP secure channel encapsulation with the data-plane. See Section 6.12.2.

In combination with BRSKI, the ACP enables a resilient, fully zero-touch network solution for short-lived certificates that can be renewed or re-enrolled even after unintentional expiry (e.g.: because of interrupted connectivity). See Section 10.1.

12. IANA Considerations

This document defines the "Autonomic Control Plane".

The IANA is requested to register the value "AN_ACP" (without quotes) to the GRASP Objectives Names Table in the GRASP Parameter Registry. The specification for this value is this document, Section 6.3.

The IANA is requested to register the value "SRV.est" (without quotes) to the GRASP Objectives Names Table in the GRASP Parameter Registry. The specification for this value is this document, Section 6.1.3.

Note that the objective format "SRV.<service-name>" is intended to be used for any <service-name> that is an [RFC6335] registered service name. This is a proposed update to the GRASP registry subject to future work and is only mentioned here for informational purposes to explain the unique format of the objective name.

The IANA is requested to create an ACP Parameter Registry with currently one registry table - the "ACP Address Type" table.

"ACP Address Type" Table. The values in this table are numeric values 0...3 paired with a name (string). Future values MUST be assigned using the Standards Action policy defined by [RFC8126]. The following initial values are assigned by this document:

0: ACP Zone Addressing Sub-Scheme (ACP RFC Figure 4) / ACP Manual Addressing Sub-Scheme (ACP RFC Section 6.10.4)
1: ACP Vlong Addressing Sub-Scheme (ACP RFC Section 6.10.5)

13. Acknowledgements

This work originated from an Autonomic Networking project at Cisco Systems, which started in early 2010. Many people contributed to this project and the idea of the Autonomic Control Plane, amongst which (in alphabetical order): Ignas Bagdonas, Parag Bhide, Balaji BL, Alex Clemm, Yves Hertoghs, Bruno Klauser, Max Pritikin, Michael Richardson, Ravi Kumar Vadapalli.
Special thanks to Brian Carpenter and Sheng Jiang for their thorough reviews and to Pascal Thubert and Michael Richardson for providing the details for the recommendations on the use of RPL in the ACP.

Further input and suggestions were received from: Rene Struik, Brian Carpenter, Benoit Claise.

14. Change log [RFC Editor: Please remove]

14.1. Initial version

First version of this document: draft-behringer-autonomic-control-plane

14.2. draft-behringer-anima-autonomic-control-plane-00

Initial version of the anima document; only minor edits.

14.3. draft-behringer-anima-autonomic-control-plane-01

o  Clarified that the ACP should be based on, and support only, IPv6.

o  Clarified in the intro that the ACP is both for communication between devices and for access from a central entity, such as an NMS.

o  Added a section on how to connect an NMS system.

o  Clarified the hop-by-hop crypto nature of the ACP.

o  Added several references to GDNP as a candidate protocol.

o  Added a discussion on network split and merge. Although this should probably go into the certificate management story longer term.

14.4. draft-behringer-anima-autonomic-control-plane-02

Addresses (numerous) comments from Brian Carpenter. See the mailing list for details. The most important changes are:

o  Introduced a new section "overview" to ease the understanding of the approach.

o  Merged the previous "problem statement" and "use case" sections into a mostly re-written "use cases" section, since they were overlapping.

o  Clarified the relationship with draft-ietf-anima-stable-connectivity.

14.5. draft-behringer-anima-autonomic-control-plane-03

o  Took out the requirement for IPv6 --> that's in the reference doc.

o  Added a requirements section.

o  Changed focus: more focus on autonomic functions, not only virtual out-of-band. This goes a bit throughout the document, starting with a changed abstract and intro.

14.6. draft-ietf-anima-autonomic-control-plane-00

No changes; re-submitted as WG document.

14.7. draft-ietf-anima-autonomic-control-plane-01

o  Added some paragraphs in the addressing section on "why IPv6 only", to reflect the discussion on the list.

o  Moved the data-plane ACP out of the main document into an appendix. The focus is now the virtually separated ACP, since it has significant advantages and isn't much harder to do.

o  Changed the self-creation algorithm: part of the initial steps go into the reference document. This document now assumes an adjacency table and domain certificate. How those get onto the device is outside the scope of this document.

o  Created a new section 6 "workarounds for non-autonomic nodes" and put the previous controller section (5.9) into this new section. Now, section 5 is "autonomic only", and section 6 explains what to do with non-autonomic stuff. Much cleaner now.

o  Added an appendix explaining the choice of RPL as a routing protocol.

o  Formalized the creation process a bit more. Now, we create a "candidate peer list" from the adjacency table and form the ACP with those candidates. It also now explains better that policy (Intent) can influence the peer selection. (section 4 and 5)

o  Introduced a section for the capability negotiation protocol (section 7). This needs to be worked out in more detail. This will likely be based on GRASP.
o  Introduced a new parameter: ACP tunnel type, and defined it in the IANA considerations section. Suggested GRE protected with IPsec transport mode as the default tunnel type.

o  Updated links, lots of small edits.

14.8. draft-ietf-anima-autonomic-control-plane-02

o  Added explicit text for the ACP channel negotiation.

o  Merged draft-behringer-anima-autonomic-addressing-02 into this document, as suggested by the WG chairs.

14.9. draft-ietf-anima-autonomic-control-plane-03

o  Changed the neighbor discovery protocol from GRASP to mDNS. The bootstrap protocol team decided to go with mDNS to discover the bootstrap proxy, and the ACP should be consistent with this. Reasons to go with mDNS in bootstrap were a) bootstrap should be reusable also outside of full anima solutions and introduce as few new elements as possible; mDNS was considered well-known and very likely even pre-existing in low-end devices (IoT). b) Using GRASP both for the insecure neighbor discovery and secure ACP operations raises the risk of introducing security issues through implementation issues/non-isolation between those two instances of GRASP.

o  Shortened the section on GRASP instances, because with mDNS being used for discovery, there is no insecure GRASP session any longer, simplifying the GRASP considerations.

o  Added certificate requirements for ANIMA in section 5.1.1, specifically how the ANIMA information is encoded in subjectAltName.

o  Deleted the appendix on "ACP without separation", as originally planned, and the paragraph in the main text referring to it.

o  Deleted one sub-addressing scheme, focusing on a single scheme now.

o  Included information on how ANIMA information must be encoded in the domain certificate in section "preconditions".

o  Editorial changes, updated draft references, etc.

14.10. draft-ietf-anima-autonomic-control-plane-04

Changed discovery of ACP neighbors back from mDNS to GRASP after revisiting the L2 problem. Described the problem in the discovery section itself to justify this. Added text to explain how ACP discovery relates to BRSKI (bootstrap) discovery and pointed to Michael Richardson's draft detailing it. Removed the appendix section that contained the original explanations why GRASP would be useful (the current text is meant to be better).

14.11. draft-ietf-anima-autonomic-control-plane-05

o  Section 5.3 (candidate ACP neighbor selection): added that Intent can override only AFTER an initial default ACP establishment.

o  Section 6.10.1 (addressing): stated that addresses in the ACP are permanent and do not support temporary addresses as defined in RFC4941.

o  Modified Section 6.3 to point to the GRASP objective defined in draft-carpenter-anima-ani-objectives (and added that reference).

o  Section 6.10.2: changed from MD5 for calculating the first 40 bits to SHA256; the reason is that MD5 should not be used anymore.

o  Added the address sub-scheme to the IANA section.

o  Made the routing section more prescriptive.

o  Clarified in Section 8.1.1 the ACP Connect port, and defined the term "ACP Connect".

o  Section 8.2: added some thoughts (from mcr) on how traversing an L3 cloud could be automated.

o  Added a CRL check in Section 6.7.
o  Added a note on the possibility of source-address spoofing into the security considerations section.

o  Other editorial changes, including those proposed by Michael Richardson on 30 Nov 2016 (see ANIMA list).

14.12. draft-ietf-anima-autonomic-control-plane-06

o  Added the proposed RPL profile.

o  Detailed the dTLS profile - dTLS with any additional negotiation/signaling channel.

o  Fixed up the text for ACP/GRE encap. Removed text claiming it is incompatible with non-GRE IPsec and detailed it.

o  Added text to suggest that admin down interfaces should still run the ACP.

14.13. draft-ietf-anima-autonomic-control-plane-07

o  Changed author association.

o  Improved the ACP connect section (after confusion about the term came up in the stable connectivity draft review). Added a picture, defined complete terminology.

o  Moved ACP channel negotiation from the normative section to the appendix because it cannot, in the timeline of this document, be fully specified to be implementable. Aka: work for a future document. That work would also need to include analyzing IKEv2 and describing the difference of a proposed GRASP/TLS solution to it.

o  Removed the IANA request to allocate a registry for GRASP/TLS. This would come with a future draft (see above).

o  Gave the name "ACP information field" to the field in the certificate carrying the ACP address and domain name.

o  Changed the rules for mutual authentication of certificates to rely on the domain in the ACP information field of the certificate instead of the OU in the certificate. Also renewed the text pointing out that the ACP information field in the certificate is meant to be in a form that does not disturb other uses of the certificate. As long as the ACP was expected to rely on a common OU across all certificates in a domain, this was not really true: other uses of the certificates might require different OUs for different areas/types of devices. With the rules in this draft version, the ACP authentication does not rely on any other fields in the certificate.

o  Added an extension field to the ACP information field so that additional fields like a subdomain could be inserted in the future. An example using such a subdomain field was added to the pre-existing text suggesting sub-domains. This approach is necessary so that there can be a single (main) domain in the ACP information field, because that is used for mutual authentication of the certificate. Also clarified that only the registrar(s) SHOULD/MUST rely on the ACP address being generated from the domain name - so that this can more easily be changed in extensions.

o  Took the text for the GRASP discovery of ACP neighbors from Brian's grasp-ani-objectives draft. Alas, that draft was behind the latest GRASP draft, so I had to overhaul it. The major change is to describe in the ACP draft the whole format of the M_FLOOD message (and not only the actual objective). This should make it a lot easier to read (without having to go back and forth to the GRASP RFC/draft). It was also necessary because the locator in the M_FLOOD messages has an important role and is not coded inside the objective. The specification of how to format the M_FLOOD message should now be complete; the text may somewhat duplicate the DULL specification in GRASP, but without contradiction.
o  One of the main outcomes of reworking the GRASP section was the notion that GRASP announces both the candidate peer's IPv6 link-local address and the supported ACP security protocol, including the port it is running on. In the past we shied away from using this information because it is not secured, but I think the additional attack vectors possible by using this information are negligible: if an attacker on an L2 subnet can fake another device's GRASP message, then it can already mount a similar attack by purely faking the link-local address.

o  Removed the section on discovery and BRSKI. This can be revived in the BRSKI document, but it seems moot given how we removed mDNS from the latest BRSKI document (aka: this section discussed discrepancies between GRASP and mDNS discovery which should not exist anymore with the latest BRSKI).

o  Tried to resolve the EDNOTE about CRL vs. OCSP by pointing out that we do not specify which one is to be used, but that the ACP should be used to reach the URL included in the certificate to get to the CRL storage or OCSP server.

o  Changed ACP via IPsec to ACP via IKEv2 and restructured the sections to make IPsec native and IPsec via GRE subsections.

o  No need for any assigned dTLS port if the ACP is run across dTLS, because it is signaled via GRASP.

14.14. draft-ietf-anima-autonomic-control-plane-08

Modified the mentioning of BRSKI to make it consistent with the current (07/2017) target for BRSKI: MASA and IDevID are mandatory. Devices with only an insecure UDI would need a security-reduced variant of BRSKI. Also added a mention of Netconf Zero-Touch. Made BRSKI non-normative for the ACP because, wrt. the ACP, it is just one option for how the domain certificate can be provisioned. Instead, BRSKI is mandatory when a device implements the ANI, which is ACP+BRSKI.

Enhanced the text for ACP across tunnels to describe two options: one across configured tunnels (GRE, IPinIP, etc.) and a more efficient one via directed DULL.

Moved the description of BRSKI to the appendix to emphasize that BRSKI is not a (normative) dependency of GRASP; enhanced the text to indicate other options for how domain certificates can be provisioned.

Added a terminology section.

Separated references into normative and non-normative.

Enhanced the section about ACP via "tunnels". Defined an option to run the ACP secure channel without an outer tunnel, discussed PMTU and the benefits of tunneling as well as the potential of using this with BRSKI, and made ACP via GRE a SHOULD requirement.

Moved the appendix sections up before the IANA section because there were concerns that appendices too far at the bottom would not be read. Added (Informative) / (Normative) to section titles to clarify which sections are informative and which are normative.

Moved the explanation of ACP with L2 from the preconditions to a separate section before the workarounds, made it instructive enough to explain how to implement the ACP on L2 ports for L3/L2 switches, and made this part a normative requirement (L2/L3 switches SHOULD support this).

Rewrote the section "GRASP in the ACP" to define GRASP in the ACP as mandatory (and why), to define the ACP as the security and transport substrate for GRASP in the ACP, and to explain how this works.

Enhanced the "self-protection" properties section: protect legacy management protocols. Security in the ACP is for protection from the outside and for those legacy protocols; otherwise, end-to-end encryption would also be needed inside the ACP, e.g.: with the domain certificate.
Enhanced the initial domain certificate section to include requirements for maintenance (renewal/revocation) of certificates. Added an explanation to the informative BRSKI section on how to handle very short-lived certificates (renewal via BRSKI with an expired cert).

Modified the encoding of the ACP address to better fit RFC822 simple local-parts (":" as required by RFC5952 is not permitted in simple dot-atoms according to RFC5322). Removed the reference to RFC5952 as it is not needed anymore.

Introduced a sub-domain field in the ACP information in the certificate to allow defining such subdomains depending on future Intent definitions. It also makes it clear what the "main domain" is. The scheme is called "routing subdomain" to have a unique name.

Added the V8 (now called Vlong) addressing sub-scheme according to the suggestion from mcr in his mail from 30 Nov 2016 (https://mailarchive.ietf.org/arch/msg/anima/nZpEphrTqDCBdzsKMpaIn2gsIzI). Also modified the explanation of the single V bit in the first sub-scheme, now renamed to Zone sub-scheme to distinguish it.

14.15. draft-ietf-anima-autonomic-control-plane-09

Added a reference to RFC4191 and explained how it should be used on ACP edge routers to allow autoconfiguration of routing by NMS hosts. This came after the review of the stable connectivity draft, where ACP connect is being referred to.

The V8 addressing sub-scheme was modified to allow not only /8 device-local address space but also /16. This was in response to the possible need to have maybe as many as 2^12 local addresses for future encaps in BRSKI like IPinIP. It also would allow fully autonomic address assignment for ACP connect interfaces from this local address space (on an ACP edge device), subject to approval of the implied update to rfc4291/rfc4193 (IID length). Changed the name to Vlong addressing sub-scheme.

Added text in response to Brian Carpenter's review of draft-ietf-anima-stable-connectivity-04:

o  The stable connectivity draft was vaguely describing ACP connect behavior that is better standardized in this ACP draft.

o  Added a new ACP "Manual" addressing sub-scheme with /64 subnets for use with ACP connect interfaces. Being covered by the ACP ULA prefix, these subnets do not require additional routing entries for NMS hosts. They also are fully 64-bit IID length compliant and therefore not subject to rfc4291bis considerations. And they avoid operators manually assigning prefixes from the ACP ULA prefixes that might later be assigned autonomously.

o  ACP connect auto-configuration: defined that ACP edge devices and NMS hosts should use RFC4191 to automatically learn ACP prefixes. This is especially necessary when the ACP uses multiple ULA prefixes (via e.g.: the rsub domain certificate option), or if ACP connect subinterfaces use manually configured prefixes NOT covered by the ACP ULA prefixes.

o  Explained how rfc6724 is (only) sufficient when the NMS host has separate ACP connect and data-plane interfaces, but not when there is a single interface.

o  Added a separate subsection to talk about "software" instead of "NMS hosts" connecting to the ACP via the "ACP connect" method. The reason is to point out that the "ACP connect" method is not only a workaround (for NMS hosts), but an actual desirable long-term architectural component to modularly build software (e.g.: ASA or OAM for VNF) into ACP devices.
o  Added a section to define how to run ACP connect across the same interface as the data-plane. This turns out to be quite challenging because we only want to rely on existing standards for the network stack in the NMS host/software and only define what features the ACP edge device needs.

o  Added a section about the use of GRASP over ACP connect.

o  Added text to indicate packet processing/filtering for security: filter incorrect packets arriving on ACP connect interfaces, diagnose on the RPL root packets to an incorrect destination address (not in the ACP connect section, but because of it).

o  Reaffirmed the security goal of the ACP: do not permit non-ACP routers into the ACP routing domain.

Made this ACP document an update to RFC4291 and RFC4193. At the core, some of the ACP addressing sub-schemes effectively do not use 64-bit IIDs as required by RFC4291 and debated in rfc4291bis. During 6man in Prague, it was suggested that all documents that do not do this should be classified as such updates. Added a rather long section that summarizes the relevant parts of ACP addressing and usage; this section is meant to be the primary review section for readers interested in these changes (e.g.: the 6man WG).

Added changes from Michael Richardson's review https://github.com/anima-wg/autonomic-control-plane/pull/3/commits, textual and:

o  ACP discovery inside the ACP is bad *doh*!

o  Better CA trust and revocation sentences.

o  More details about RPL behavior in the ACP.

o  Black hole route to avoid loops in RPL.

Added a requirement to terminate ACP channels upon cert expiry/revocation.

Added fixes from 08-mcr-review-reply.txt (on github):

o  AN Domain Names are FQDNs.

o  Fixed bit length of schemes, numerical writing of bits (00b/01b).

o  Let's use US American English.

14.16. draft-ietf-anima-autonomic-control-plane-10

Used the term routing subdomain more consistently where previously only subdomain was used. Clarified the use of the routing subdomain in the creation of the ULA "global ID" addressing prefix.

6.7.1.* Changed native IPsec encapsulation to tunnel mode (necessary), explained why. Added the notion that ESP is used, added explanations why tunnel/transport mode is used in the native vs. GRE cases.

6.10.3/6.10.5 Added the term "ACP address range/set" to be able to better explain how the address in the ACP certificate is actually the base address (lowest address) of a range/set that is available to the device.

6.10.4 Added a note that manual address sub-scheme addresses must not be used within domain certificates (only for explicit configuration).

6.12.5 Refined the explanation of how ACP virtual interfaces work (p2p and multipoint). Searched for pre-existing RFCs that explain how to build a multi-access interface on top of a full mesh of p2p connections (6man WG, anima WG mailing lists), but could not find any prior work that had a succinct explanation, so wrote up an explanation here. Added hopefully all necessary and sufficient details on how to map ACP unicast packets to ACP secure channels and how to deal with ND packet details. Added verbiage that the ACP must not assign the virtual interface link-local address from the underlying interface. Added a note that GRASP link-local messages are treated specially but logically the same. Added a paragraph about NBMA interfaces.
Remaining changes from Brian Carpenter's review. See the Github file draft-ietf-anima-autonomic-control-plane/08-carpenter-review-reply.tx for more details:

Added multiple new RFC references for terms/technologies used.

Fixed verbiage in several places.

2. (terminology) Added 802.1AR as a reference.

2. Fixed up the definition of ULA.

6.1.1 Changed the definition of the ACP information in the cert into ABNF format. Added a warning about the maximum size of the ACP address field due to domain-name limitations.

6.2 Mentioned the API requirement between the ACP and clients leveraging the adjacency table.

6.3 Fixed the TTL in the GRASP example: msec, not hop-count!

6.8.2 MAJOR: expanded the security/transport substrate text:

Introduced the term ACP GRASP virtual interface to explain how GRASP link-local multicast messages are encapsulated and replicated to neighbors. Explained how the ACP knows when to use TLS vs. TCP (TCP only for link-local addresses/sockets). Introduced a "ladder" picture to visualize the stack.

6.8.2.1 Expanded the discussion/explanation of the security model. TLS for GRASP unicast connections across the ACP is double encryption (plus the underlying ACP secure channel), but highly necessary to avoid very simple man-in-the-middle attacks by compromised ACP members on-path. Ultimately, this is done to ensure that any apps using GRASP can get full end-to-end secrecy for information sent across GRASP. But for publicly known ASA services, even this will not provide 100% security (this is discussed). Also explained why double encryption is the better/easier solution than trying to optimize this.

6.10.1 Added a discussion about pseudo-random addressing and scanning attacks (not an issue for the ACP).

6.12.2 New performance requirements section added.

6.10.1 Added the notion to first experiment with existing addressing schemes before defining new ones - we should be flexible enough.

6.3/7.2 Clarified the interactions between MLD and DULL GRASP and specified what needs to be done (e.g.: in L2 switches doing ACP per L2 port).

12. Added explanations and cross-references to various security aspects of the ACP discussed elsewhere in the document.

13. Added IANA requirements.

Added RFC2119 boilerplate.

14.17. draft-ietf-anima-autonomic-control-plane-11

Same text as -10. Unfortunately, when uploading the -10 .xml/.txt to the datatracker, a wrong version of the .txt got uploaded; only the .xml was correct. This impacts the -10 html version on the datatracker and the PDF versions as well. Because rfcdiff also compares the .txt version, this -11 version was created so that one can compare changes from -09 and changes to the next version (-12).

14.18. draft-ietf-anima-autonomic-control-plane-12

Sheng Jiang's extensive review. Thanks! See the Github file draft-ietf-anima-autonomic-control-plane/09-sheng-review-reply.txt for more details. Many of the larger changes listed below were inspired by the review.

Removed the claim that the document is updating RFC4291/RFC4193 and the section detailing it. Done on the suggestion of Michael Richardson - just try to describe the use of addressing in a way that does not suggest a need to claim an update to the architecture.
Terminology cleanup:

o  Replaced "device" with "node" in the text. Kept "device" only when referring to a "physical node". Added definitions for those words. Includes changes of derived terms, especially in addressing: "Node-ID" and "Node-Number" in the addressing details.

o  Replaced the term "autonomic FOOBAR" with "ACP FOOBAR" wherever appropriate: "autonomic" would imply that the node would need to support more than the ACP, but that is not correct in most cases. Wanted to make sure that implementers know they only need to support/implement the ACP, unless stated otherwise. Includes "AN->ACP node", "AN->ACP adjacency table" and so on.

1. Added explanation in the introduction about the relationship between ACP, BRSKI, ANI and Autonomic Networks.

6.1.1 Improved terminology and features of the certificate information field. Now called the domain information field instead of the ACP information field. The acp-address field in the domain information field is now optional, enabling easier introduction of various future options.

6.1.2 Moved the ACP domain membership check from section 6.6 (ACP secure channel setup) to here, because it is not only used for ACP secure channel setup.

6.1.3 Fixed text about certificate renewal after discussion with Max Pritikin/Michael Richardson/Brian Carpenter:

o  Version 10 erroneously assumed that the certificate itself could store a URL for renewal, but that is only possible for CRL URLs. Text now only refers to a "remembered EST server" without implying that this is stored in the certificate.

o  Objective for RFC7030/EST domain certificate renewal was changed to "SRV.est". See also the IANA section for explanation.

o  Removed detail of distance-based service selection. This can be better done in future work because it would require a lot more detail for a good DNS-SD compatible approach.

o  Removed detail about trying to create more security by using the ACP address from the certificate of the peer. After rethinking, this does not seem to buy additional security.

6.10 Added reference to 6.12.5 at the initial use of "loopback interface" in section 6.10, as a result of email discussion (MichaelR/MichaelB).

10.2 Introduced informational section (diagnostics) because of operational experience - the ACP/ANI is undeployable without at least diagnostics like this.

10.3 Introduced informational section on enabling/disabling the ACP. Important to discuss this for security reasons (e.g.: why never to auto-enable ANI on brownfield devices), for implementers, and to answer ongoing questions during WG meetings about how to deal with shutdown interfaces.

10.8 Added informational section discussing possible future variations of the ACP for potential adopters that cannot directly use the complete solution described in this document unmodified.

15.  References

15.1.  Normative References

[I-D.ietf-anima-grasp] Bormann, C., Carpenter, B., and B. Liu, "A Generic Autonomic Signaling Protocol (GRASP)", draft-ietf-anima-grasp-15 (work in progress), July 2017.

[RFC1034] Mockapetris, P., "Domain names - concepts and facilities", STD 13, RFC 1034, DOI 10.17487/RFC1034, November 1987.
[RFC2119] Bradner, S., "Key words for use in RFCs to Indicate Requirement Levels", BCP 14, RFC 2119, DOI 10.17487/RFC2119, March 1997.

[RFC3810] Vida, R., Ed. and L. Costa, Ed., "Multicast Listener Discovery Version 2 (MLDv2) for IPv6", RFC 3810, DOI 10.17487/RFC3810, June 2004.

[RFC4191] Draves, R. and D. Thaler, "Default Router Preferences and More-Specific Routes", RFC 4191, DOI 10.17487/RFC4191, November 2005.

[RFC4193] Hinden, R. and B. Haberman, "Unique Local IPv6 Unicast Addresses", RFC 4193, DOI 10.17487/RFC4193, October 2005.

[RFC4291] Hinden, R. and S. Deering, "IP Version 6 Addressing Architecture", RFC 4291, DOI 10.17487/RFC4291, February 2006.

[RFC4301] Kent, S. and K. Seo, "Security Architecture for the Internet Protocol", RFC 4301, DOI 10.17487/RFC4301, December 2005.

[RFC4861] Narten, T., Nordmark, E., Simpson, W., and H. Soliman, "Neighbor Discovery for IP version 6 (IPv6)", RFC 4861, DOI 10.17487/RFC4861, September 2007.

[RFC4862] Thomson, S., Narten, T., and T. Jinmei, "IPv6 Stateless Address Autoconfiguration", RFC 4862, DOI 10.17487/RFC4862, September 2007.

[RFC5246] Dierks, T. and E. Rescorla, "The Transport Layer Security (TLS) Protocol Version 1.2", RFC 5246, DOI 10.17487/RFC5246, August 2008.

[RFC5280] Cooper, D., Santesson, S., Farrell, S., Boeyen, S., Housley, R., and W. Polk, "Internet X.509 Public Key Infrastructure Certificate and Certificate Revocation List (CRL) Profile", RFC 5280, DOI 10.17487/RFC5280, May 2008.

[RFC5322] Resnick, P., Ed., "Internet Message Format", RFC 5322, DOI 10.17487/RFC5322, October 2008.

[RFC6347] Rescorla, E. and N. Modadugu, "Datagram Transport Layer Security Version 1.2", RFC 6347, DOI 10.17487/RFC6347, January 2012.

[RFC6550] Winter, T., Ed., Thubert, P., Ed., Brandt, A., Hui, J., Kelsey, R., Levis, P., Pister, K., Struik, R., Vasseur, JP., and R. Alexander, "RPL: IPv6 Routing Protocol for Low-Power and Lossy Networks", RFC 6550, DOI 10.17487/RFC6550, March 2012.

[RFC6552] Thubert, P., Ed., "Objective Function Zero for the Routing Protocol for Low-Power and Lossy Networks (RPL)", RFC 6552, DOI 10.17487/RFC6552, March 2012.

[RFC7030] Pritikin, M., Ed., Yee, P., Ed., and D. Harkins, Ed., "Enrollment over Secure Transport", RFC 7030, DOI 10.17487/RFC7030, October 2013.

[RFC7296] Kaufman, C., Hoffman, P., Nir, Y., Eronen, P., and T. Kivinen, "Internet Key Exchange Protocol Version 2 (IKEv2)", STD 79, RFC 7296, DOI 10.17487/RFC7296, October 2014.

[RFC7676] Pignataro, C., Bonica, R., and S. Krishnan, "IPv6 Support for Generic Routing Encapsulation (GRE)", RFC 7676, DOI 10.17487/RFC7676, October 2015.

15.2.  Informative References

[AR8021] IEEE SA-Standards Board, "IEEE Standard for Local and metropolitan area networks - Secure Device Identity", December 2009.

[I-D.ietf-anima-bootstrapping-keyinfra] Pritikin, M., Richardson, M., Behringer, M., Bjarnason, S., and K. Watsen, "Bootstrapping Remote Secure Key Infrastructures (BRSKI)", draft-ietf-anima-bootstrapping-keyinfra-07 (work in progress), July 2017.

[I-D.ietf-anima-prefix-management] Jiang, S., Du, Z., Carpenter, B., and Q. Sun,
"Autonomic IPv6 Edge Prefix Management in Large-scale Networks", draft-ietf-anima-prefix-management-05 (work in progress), August 2017.

[I-D.ietf-anima-reference-model] Behringer, M., Carpenter, B., Eckert, T., Ciavaglia, L., Pierre, P., Liu, B., Nobre, J., and J. Strassner, "A Reference Model for Autonomic Networking", draft-ietf-anima-reference-model-04 (work in progress), July 2017.

[I-D.ietf-anima-stable-connectivity] Eckert, T. and M. Behringer, "Using Autonomic Control Plane for Stable Connectivity of Network OAM", draft-ietf-anima-stable-connectivity-06 (work in progress), September 2017.

[I-D.ietf-netconf-zerotouch] Watsen, K., Abrahamsson, M., and I. Farrer, "Zero Touch Provisioning for NETCONF or RESTCONF based Management", draft-ietf-netconf-zerotouch-17 (work in progress), September 2017.

[I-D.ietf-roll-useofrplinfo] Robles, I., Richardson, M., and P. Thubert, "When to use RFC 6553, 6554 and IPv6-in-IPv6", draft-ietf-roll-useofrplinfo-16 (work in progress), July 2017.

[MACSEC] IEEE SA-Standards Board, "IEEE Standard for Local and Metropolitan Area Networks: Media Access Control (MAC) Security", June 2006.

[RFC1112] Deering, S., "Host extensions for IP multicasting", STD 5, RFC 1112, DOI 10.17487/RFC1112, August 1989.

[RFC1918] Rekhter, Y., Moskowitz, B., Karrenberg, D., de Groot, G., and E. Lear, "Address Allocation for Private Internets", BCP 5, RFC 1918, DOI 10.17487/RFC1918, February 1996.

[RFC2315] Kaliski, B., "PKCS #7: Cryptographic Message Syntax Version 1.5", RFC 2315, DOI 10.17487/RFC2315, March 1998.

[RFC2821] Klensin, J., Ed., "Simple Mail Transfer Protocol", RFC 2821, DOI 10.17487/RFC2821, April 2001.

[RFC4541] Christensen, M., Kimball, K., and F. Solensky, "Considerations for Internet Group Management Protocol (IGMP) and Multicast Listener Discovery (MLD) Snooping Switches", RFC 4541, DOI 10.17487/RFC4541, May 2006.

[RFC4604] Holbrook, H., Cain, B., and B. Haberman, "Using Internet Group Management Protocol Version 3 (IGMPv3) and Multicast Listener Discovery Protocol Version 2 (MLDv2) for Source-Specific Multicast", RFC 4604, DOI 10.17487/RFC4604, August 2006.

[RFC4607] Holbrook, H. and B. Cain, "Source-Specific Multicast for IP", RFC 4607, DOI 10.17487/RFC4607, August 2006.

[RFC4610] Farinacci, D. and Y. Cai, "Anycast-RP Using Protocol Independent Multicast (PIM)", RFC 4610, DOI 10.17487/RFC4610, August 2006.

[RFC4941] Narten, T., Draves, R., and S. Krishnan, "Privacy Extensions for Stateless Address Autoconfiguration in IPv6", RFC 4941, DOI 10.17487/RFC4941, September 2007.

[RFC5234] Crocker, D., Ed. and P. Overell, "Augmented BNF for Syntax Specifications: ABNF", STD 68, RFC 5234, DOI 10.17487/RFC5234, January 2008.

[RFC5321] Klensin, J., "Simple Mail Transfer Protocol", RFC 5321, DOI 10.17487/RFC5321, October 2008.

[RFC5790] Liu, H., Cao, W., and H. Asaeda, "Lightweight Internet Group Management Protocol Version 3 (IGMPv3) and Multicast Listener Discovery Version 2 (MLDv2) Protocols", RFC 5790, DOI 10.17487/RFC5790, February 2010.

[RFC6241] Enns, R., Ed., Bjorklund, M., Ed., Schoenwaelder, J., Ed., and A. Bierman, Ed.,
"Network Configuration Protocol (NETCONF)", RFC 6241, DOI 10.17487/RFC6241, June 2011.

[RFC6335] Cotton, M., Eggert, L., Touch, J., Westerlund, M., and S. Cheshire, "Internet Assigned Numbers Authority (IANA) Procedures for the Management of the Service Name and Transport Protocol Port Number Registry", BCP 165, RFC 6335, DOI 10.17487/RFC6335, August 2011.

[RFC6553] Hui, J. and JP. Vasseur, "The Routing Protocol for Low-Power and Lossy Networks (RPL) Option for Carrying RPL Information in Data-Plane Datagrams", RFC 6553, DOI 10.17487/RFC6553, March 2012.

[RFC6724] Thaler, D., Ed., Draves, R., Matsumoto, A., and T. Chown, "Default Address Selection for Internet Protocol Version 6 (IPv6)", RFC 6724, DOI 10.17487/RFC6724, September 2012.

[RFC6762] Cheshire, S. and M. Krochmal, "Multicast DNS", RFC 6762, DOI 10.17487/RFC6762, February 2013.

[RFC6763] Cheshire, S. and M. Krochmal, "DNS-Based Service Discovery", RFC 6763, DOI 10.17487/RFC6763, February 2013.

[RFC7404] Behringer, M. and E. Vyncke, "Using Only Link-Local Addressing inside an IPv6 Network", RFC 7404, DOI 10.17487/RFC7404, November 2014.

[RFC7426] Haleplidis, E., Ed., Pentikousis, K., Ed., Denazis, S., Hadi Salim, J., Meyer, D., and O. Koufopavlou, "Software-Defined Networking (SDN): Layers and Architecture Terminology", RFC 7426, DOI 10.17487/RFC7426, January 2015.

[RFC7575] Behringer, M., Pritikin, M., Bjarnason, S., Clemm, A., Carpenter, B., Jiang, S., and L. Ciavaglia, "Autonomic Networking: Definitions and Design Goals", RFC 7575, DOI 10.17487/RFC7575, June 2015.

[RFC7576] Jiang, S., Carpenter, B., and M. Behringer, "General Gap Analysis for Autonomic Networking", RFC 7576, DOI 10.17487/RFC7576, June 2015.

[RFC7721] Cooper, A., Gont, F., and D. Thaler, "Security and Privacy Considerations for IPv6 Address Generation Mechanisms", RFC 7721, DOI 10.17487/RFC7721, March 2016.

[RFC7761] Fenner, B., Handley, M., Holbrook, H., Kouvelas, I., Parekh, R., Zhang, Z., and L. Zheng, "Protocol Independent Multicast - Sparse Mode (PIM-SM): Protocol Specification (Revised)", STD 83, RFC 7761, DOI 10.17487/RFC7761, March 2016.

[RFC8126] Cotton, M., Leiba, B., and T. Narten, "Guidelines for Writing an IANA Considerations Section in RFCs", BCP 26, RFC 8126, DOI 10.17487/RFC8126, June 2017.

Authors' Addresses

Michael H. Behringer (editor)

Email: michael.h.behringer@gmail.com

Toerless Eckert (editor)
Futurewei Technologies Inc.
2330 Central Expy
Santa Clara 95050
USA

Email: tte+ietf@cs.fau.de

Steinthor Bjarnason
Arbor Networks
2727 South State Street, Suite 200
Ann Arbor MI 48104
United States

Email: sbjarnason@arbor.net