idnits 2.17.1

draft-ietf-anima-autonomic-control-plane-15.txt:

  Checking boilerplate required by RFC 5378 and the IETF Trust (see
  https://trustee.ietf.org/license-info):
  ----------------------------------------------------------------------------

     No issues found here.

  Checking nits according to https://www.ietf.org/id-info/1id-guidelines.txt:
  ----------------------------------------------------------------------------

     No issues found here.

  Checking nits according to https://www.ietf.org/id-info/checklist :
  ----------------------------------------------------------------------------

  == There are 7 instances of lines with non-RFC6890-compliant IPv4 addresses
     in the document.  If these are example addresses, they should be changed.

  Miscellaneous warnings:
  ----------------------------------------------------------------------------

  == The copyright year in the IETF Trust and authors Copyright Line does not
     match the current year

  == Line 1019 has weird spacing: '... called rfcS...'

  == Line 1743 has weird spacing: '...k-local unic...'

  == Line 1744 has weird spacing: '...lticast messa...'

  -- The document date (June 6, 2018) is 2151 days in the past.  Is this
     intentional?

  Checking references for intended status: Proposed Standard
  ----------------------------------------------------------------------------

     (See RFCs 3967 and 4897 for information about using normative references
     to lower-maturity documents in RFCs)

  == Missing Reference: 'ACP VRF' is mentioned on line 2975, but not defined

  == Missing Reference: 'Data-Plane' is mentioned on line 2977, but not defined

  == Missing Reference: 'Select' is mentioned on line 3132, but not defined

  == Missing Reference: 'Plane' is mentioned on line 3134, but not defined

  == Missing Reference: 'RFCxxxx' is mentioned on line 5860, but not defined

  == Unused Reference: 'RFC1034' is defined on line 5880, but no explicit
     reference was found in the text

  == Unused Reference: 'I-D.ietf-roll-applicability-template' is defined on
     line 6006, but no explicit reference was found in the text

  == Outdated reference: A later version (-08) exists of
     draft-ietf-cbor-cddl-02

  ** Obsolete normative reference: RFC 5246 (Obsoleted by RFC 8446)

  ** Obsolete normative reference: RFC 6347 (Obsoleted by RFC 9147)

  == Outdated reference: A later version (-45) exists of
     draft-ietf-anima-bootstrapping-keyinfra-15

  == Outdated reference: A later version (-10) exists of
     draft-ietf-anima-reference-model-06

  == Outdated reference: A later version (-29) exists of
     draft-ietf-netconf-zerotouch-21

  == Outdated reference: A later version (-44) exists of
     draft-ietf-roll-useofrplinfo-23

  -- Obsolete informational reference (is this intentional?): RFC 2821
     (Obsoleted by RFC 5321)

  -- Obsolete informational reference (is this intentional?): RFC 4941
     (Obsoleted by RFC 8981)

  -- Obsolete informational reference (is this intentional?): RFC 6830
     (Obsoleted by RFC 9300, RFC 9301)

     Summary: 2 errors (**), 0 flaws (~~), 17 warnings (==), 4 comments (--).

     Run idnits with the --verbose option for more detailed information about
     the items above.

--------------------------------------------------------------------------------

ANIMA WG                                                  T. Eckert, Ed.
Internet-Draft                                                    Huawei
Intended status: Standards Track                       M. Behringer, Ed.
Expires: December 8, 2018                                   S. Bjarnason
                                                          Arbor Networks
                                                            June 6, 2018

                    An Autonomic Control Plane (ACP)
              draft-ietf-anima-autonomic-control-plane-15

Abstract

   Autonomic functions need a control plane to communicate, which
   depends on some addressing and routing.  This Autonomic Management
   and Control Plane should ideally be self-managing and as independent
   of configuration as possible.  This document defines such a plane
   and calls it the "Autonomic Control Plane", with its primary use
   being a control plane for autonomic functions.  It also serves as a
   "virtual out-of-band channel" for Operations, Administration and
   Management (OAM) communications, providing connectivity that is
   secure and reliable even when the network is not configured or is
   misconfigured.

Status of This Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF).  Note that other groups may also distribute
   working documents as Internet-Drafts.  The list of current Internet-
   Drafts is at https://datatracker.ietf.org/drafts/current/.

   Internet-Drafts are draft documents valid for a maximum of six months
   and may be updated, replaced, or obsoleted by other documents at any
   time.  It is inappropriate to use Internet-Drafts as reference
   material or to cite them other than as "work in progress."

   This Internet-Draft will expire on December 8, 2018.

Copyright Notice

   Copyright (c) 2018 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (https://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.
   Please review these documents carefully, as they describe your
   rights and restrictions with respect to this document.  Code
   Components extracted from this document must include Simplified BSD
   License text as described in Section 4.e of the Trust Legal
   Provisions and are provided without warranty as described in the
   Simplified BSD License.

Table of Contents

   1.  Introduction
     1.1.  Applicability and Scope
   2.  Acronyms and Terminology
   3.  Use Cases for an Autonomic Control Plane
     3.1.  An Infrastructure for Autonomic Functions
     3.2.  Secure Bootstrap over a not configured Network
     3.3.  Data-Plane Independent Permanent Reachability
   4.  Requirements
   5.  Overview
   6.  Self-Creation of an Autonomic Control Plane (ACP) (Normative)
     6.1.  ACP Domain, Certificate and Network
       6.1.1.  Certificate Domain Information Field
       6.1.2.  ACP domain membership check
       6.1.3.  Certificate Maintenance
         6.1.3.1.  GRASP objective for EST server
         6.1.3.2.  Renewal
         6.1.3.3.  Certificate Revocation Lists (CRLs)
         6.1.3.4.  Lifetimes
         6.1.3.5.  Re-enrollment
         6.1.3.6.  Failing Certificates
     6.2.  ACP Adjacency Table
     6.3.  Neighbor Discovery with DULL GRASP
     6.4.  Candidate ACP Neighbor Selection
     6.5.  Channel Selection
     6.6.  Candidate ACP Neighbor verification
     6.7.  Security Association protocols
       6.7.1.  ACP via IKEv2
         6.7.1.1.  Native IPsec
         6.7.1.2.  IPsec with GRE encapsulation
       6.7.2.  ACP via DTLS
       6.7.3.  ACP Secure Channel Requirements
     6.8.  GRASP in the ACP
       6.8.1.  GRASP as a core service of the ACP
       6.8.2.  ACP as the Security and Transport substrate for GRASP
         6.8.2.1.  Discussion
     6.9.  Context Separation
     6.10. Addressing inside the ACP
       6.10.1.  Fundamental Concepts of Autonomic Addressing
       6.10.2.  The ACP Addressing Base Scheme
       6.10.3.  ACP Zone Addressing Sub-Scheme
         6.10.3.1.  Usage of the Zone-ID Field
       6.10.4.  ACP Manual Addressing Sub-Scheme
       6.10.5.  ACP Vlong Addressing Sub-Scheme
       6.10.6.  Other ACP Addressing Sub-Schemes
       6.10.7.  ACP Registrars
         6.10.7.1.  Use of BRSKI or other Mechanism/Protocols
         6.10.7.2.  Unique Address/Prefix allocation
         6.10.7.3.  Addressing Sub-Scheme Policies
         6.10.7.4.  Address/Prefix Persistence
         6.10.7.5.  Further Details
     6.11. Routing in the ACP
       6.11.1.  RPL Profile
         6.11.1.1.  Summary
         6.11.1.2.  RPL Instances
         6.11.1.3.  Storing vs. Non-Storing Mode
         6.11.1.4.  DAO Policy
         6.11.1.5.  Path Metric
         6.11.1.6.  Objective Function
         6.11.1.7.  DODAG Repair
         6.11.1.8.  Multicast
         6.11.1.9.  Security
         6.11.1.10. P2P communications
         6.11.1.11. IPv6 address configuration
         6.11.1.12. Administrative parameters
         6.11.1.13. RPL Data-Plane artifacts
         6.11.1.14. Unknown Destinations
     6.12. General ACP Considerations
       6.12.1.  Performance
       6.12.2.  Addressing of Secure Channels in the Data-Plane
       6.12.3.  MTU
       6.12.4.  Multiple links between nodes
       6.12.5.  ACP interfaces
   7.  ACP support on L2 switches/ports (Normative)
     7.1.  Why
     7.2.  How (per L2 port DULL GRASP)
   8.  Support for Non-ACP Components (Normative)
     8.1.  ACP Connect
       8.1.1.  Non-ACP Controller / NMS system
       8.1.2.  Software Components
       8.1.3.  Auto Configuration
       8.1.4.  Combined ACP/Data-Plane Interface (VRF Select)
       8.1.5.  Use of GRASP
     8.2.  ACP through Non-ACP L3 Clouds (Remote ACP neighbors)
       8.2.1.  Configured Remote ACP neighbor
       8.2.2.  Tunneled Remote ACP Neighbor
       8.2.3.  Summary
   9.  Benefits (Informative)
     9.1.  Self-Healing Properties
     9.2.  Self-Protection Properties
       9.2.1.  From the outside
       9.2.2.  From the inside
     9.3.  The Administrator View
   10. ACP Operations (Informative)
     10.1.  ACP (and BRSKI) Diagnostics
     10.2.  ACP Registrars
       10.2.1.  Registrar interactions
       10.2.2.  Registrar Parameter
       10.2.3.  Certificate renewal and limitations
       10.2.4.  ACP Registrars with sub-CA
       10.2.5.  Centralized Policy Control
     10.3.  Enabling and disabling ACP/ANI
       10.3.1.  Filtering for non-ACP/ANI packets
       10.3.2.  Admin Down State
         10.3.2.1.  Security
         10.3.2.2.  Fast state propagation and Diagnostics
         10.3.2.3.  Low Level Link Diagnostics
         10.3.2.4.  Power Consumption
       10.3.3.  Interface level ACP/ANI enable
       10.3.4.  Which interfaces to auto-enable?
       10.3.5.  Node Level ACP/ANI enable
         10.3.5.1.  Brownfield nodes
         10.3.5.2.  Greenfield nodes
       10.3.6.  Undoing ANI/ACP enable
       10.3.7.  Summary
   11. Background and Futures (Informative)
     11.1.  ACP Address Space Schemes
     11.2.  BRSKI Bootstrap (ANI)
     11.3.  ACP Neighbor discovery protocol selection
       11.3.1.  LLDP
       11.3.2.  mDNS and L2 support
       11.3.3.  Why DULL GRASP
     11.4.  Choice of routing protocol (RPL)
     11.5.  ACP Information Distribution and multicast
     11.6.  Extending ACP channel negotiation (via GRASP)
     11.7.  CAs, domains and routing subdomains
     11.8.  Adopting ACP concepts for other environments
   12. Security Considerations
   13. IANA Considerations
   14. Acknowledgements
   15. Change log [RFC Editor: Please remove]
     15.1.  Initial version
     15.2.  draft-behringer-anima-autonomic-control-plane-00
     15.3.  draft-behringer-anima-autonomic-control-plane-01
     15.4.  draft-behringer-anima-autonomic-control-plane-02
     15.5.  draft-behringer-anima-autonomic-control-plane-03
     15.6.  draft-ietf-anima-autonomic-control-plane-00
     15.7.  draft-ietf-anima-autonomic-control-plane-01
     15.8.  draft-ietf-anima-autonomic-control-plane-02
     15.9.  draft-ietf-anima-autonomic-control-plane-03
     15.10. draft-ietf-anima-autonomic-control-plane-04
     15.11. draft-ietf-anima-autonomic-control-plane-05
     15.12. draft-ietf-anima-autonomic-control-plane-06
     15.13. draft-ietf-anima-autonomic-control-plane-07
     15.14. draft-ietf-anima-autonomic-control-plane-08
     15.15. draft-ietf-anima-autonomic-control-plane-09
     15.16. draft-ietf-anima-autonomic-control-plane-10
     15.17. draft-ietf-anima-autonomic-control-plane-11
     15.18. draft-ietf-anima-autonomic-control-plane-12
     15.19. draft-ietf-anima-autonomic-control-plane-13
     15.20. draft-ietf-anima-autonomic-control-plane-14
     15.21. draft-ietf-anima-autonomic-control-plane-15
     15.22. wish-list
   16. References
     16.1.  Normative References
     16.2.  Informative References
   Authors' Addresses

1.  Introduction

   Autonomic Networking is a concept of self-management: Autonomic
   functions self-configure, and negotiate parameters and settings
   across the network.  [RFC7575] defines the fundamental ideas and
   design goals of Autonomic Networking.  A gap analysis of Autonomic
   Networking is given in [RFC7576].  The reference architecture for
   Autonomic Networking in the IETF is specified in
   [I-D.ietf-anima-reference-model].

   Autonomic functions need an autonomically built communications
   infrastructure.  This infrastructure needs to be secure, resilient
   and re-usable by all autonomic functions.  Section 5 of [RFC7575]
   introduces that infrastructure and calls it the Autonomic Control
   Plane (ACP).  More descriptively, it would be the "Autonomic
   communications infrastructure for Management and Control".
   For naming consistency with that prior document, this document
   continues to use the name ACP.

   Today, the management and control plane of networks typically uses
   the global routing table, which is dependent on correct
   configuration and routing.  Misconfigurations or routing problems
   can therefore disrupt management and control channels.
   Traditionally, an out-of-band network has been used to avoid or
   allow recovery from such problems, or personnel are sent on site to
   access devices through console ports (craft ports).  However, both
   options are expensive.

   In increasingly automated networks, either centralized management
   systems or distributed autonomic service agents in the network
   require a control plane that is independent of the configuration of
   the network they manage, to avoid impacting their own operations
   through the configuration actions they take.

   This document describes a modular design for a self-forming, self-
   managing and self-protecting Autonomic Control Plane (ACP), which
   is a virtual in-band network designed to be as independent as
   possible of configuration, addressing and routing problems.  The
   details of how this is achieved are defined in Section 6.  The ACP
   is designed to remain operational even in the presence of
   configuration errors, addressing or routing issues, or where policy
   could inadvertently affect the connectivity of both data and
   control packets.

   This document uses the term "Data-Plane" to refer to anything in
   the network nodes that is not the ACP, and therefore considered to
   be dependent on (mis-)configuration.  This Data-Plane includes both
   the traditional forwarding-plane and any pre-existing control-
   plane, such as routing protocols that establish routing tables for
   the forwarding plane.

   The Autonomic Control Plane serves several purposes at the same
   time:

   1.  Autonomic functions communicate over the ACP.
       The ACP therefore directly supports Autonomic Networking
       functions, as described in [I-D.ietf-anima-reference-model].
       For example, the Generic Autonomic Signaling Protocol (GRASP -
       [I-D.ietf-anima-grasp]) runs securely inside the ACP and
       depends on the ACP as its "security and transport substrate".

   2.  A controller or network management system can use it to
       securely bootstrap network devices in remote locations, even if
       the network in between is not yet configured; no Data-Plane
       dependent bootstrap configuration is required.  An example of
       such a secure bootstrap process is described in
       [I-D.ietf-anima-bootstrapping-keyinfra].

   3.  An operator can use it to log into remote devices, even if the
       network is misconfigured or not configured.

   This document describes these purposes as use cases for the ACP in
   Section 3 and defines the requirements in Section 4.  Section 5
   gives an overview of how the ACP is constructed, and Section 6
   defines the process in detail.  Section 7 defines how to support
   the ACP on L2 switches.  Section 8 explains how non-ACP nodes and
   networks can be integrated.

   The following sections are non-normative: Section 9 reviews the
   benefits of the ACP (after all the details have been defined),
   Section 10 provides operational recommendations, and Section 11
   provides additional explanations and describes details or future
   work possibilities that were considered not to be appropriate for
   standardization in this document but were considered important to
   document nevertheless.

   The ACP provides secure IPv6 connectivity.  It can therefore be
   used not only as the secure connectivity for self-management as
   required for the ACP in [RFC7575], but also as the secure
   connectivity for traditional (centralized) management.
   The ACP can be implemented and operated without any other
   components of autonomic networks, except for the GRASP protocol,
   which it leverages.

   The document "Using Autonomic Control Plane for Stable Connectivity
   of Network OAM" [RFC8368] describes how the ACP alone can be used
   to provide secure and stable connectivity for autonomic and non-
   autonomic Operations Administration and Management (OAM)
   applications.  That document also explains how existing management
   solutions can leverage the ACP in parallel with traditional
   management models, when to use the ACP, and how to integrate with
   potentially IPv4-only OAM backends.

   Combining the ACP with Bootstrapping Remote Secure Key
   Infrastructures (BRSKI, see
   [I-D.ietf-anima-bootstrapping-keyinfra]) results in the "Autonomic
   Network Infrastructure" (ANI) as defined in
   [I-D.ietf-anima-reference-model], which provides autonomic
   connectivity (from the ACP) with fully secure zero-touch
   (automated) bootstrap from BRSKI.  The ANI itself does not
   constitute an Autonomic Network, but it allows the building of more
   or less autonomic networks on top of it, using either centralized
   Software Defined Networking (SDN, see [RFC7426]) style automation,
   distributed automation via Autonomic Service Agents (ASA) /
   Autonomic Functions (AF), or a mixture of both.  See
   [I-D.ietf-anima-reference-model] for more information.

1.1.  Applicability and Scope

   Please see the following Terminology section (Section 2) for
   explanations of terms used in this section.

   The design of the ACP as defined in this document is considered to
   be applicable to all types of "professionally managed" networks:
   Service Provider, Local Area Network (LAN), Metro(politan
   networks), Wide Area Network (WAN), Enterprise Information
   Technology (IT) and Operational Technology (OT) networks.  The ACP
   can operate equally on layer 3 equipment and on layer 2 equipment
   such as bridges (see Section 7).  The encryption mechanism used by
   the ACP is defined to be negotiable; it can therefore be extended
   to environments with different encryption protocol preferences.
   The minimum implementation requirements in this document attempt to
   achieve maximum interoperability by requiring support for only a
   few options: IPsec or DTLS, depending on the type of device.

   The implementation footprint of the ACP consists of Public Key
   Infrastructure (PKI) code for the ACP certificate, the GRASP
   protocol, UDP, TCP and TLS (for security and reliability of GRASP),
   the ACP secure channel protocol used (such as IPsec or DTLS), and
   an instance of IPv6 packet forwarding and routing via the RPL
   routing protocol ([RFC6550]) that is separate from routing and
   forwarding for the Data-Plane (user traffic).

   The ACP uses only IPv6 to avoid the complexity of dual-stack
   (IPv6/IPv4) ACP operations.  Nevertheless, it can be integrated
   without any changes even into otherwise IPv4-only network devices.
   The Data-Plane itself would not need to change; it could continue
   to be IPv4 only.  For such IPv4-only devices, the IPv6 protocol
   itself is additional implementation footprint used only for the
   ACP.

   The protocol choices of the ACP are primarily based on wide use and
   support in networks and devices, well understood security
   properties, and required scalability.  The ACP design is an attempt
   to produce the lowest-risk combination of existing technologies and
   protocols to build a widely applicable operational network
   management solution:

   RPL was chosen because it requires a smaller routing table
   footprint in large networks compared to other routing protocols
   with an autonomically configured single area.
   The wide deployment experience with RPL comes from large-scale
   Internet of Things (IoT) networks.  The profile chosen for RPL in
   the ACP does not leverage any RPL-specific forwarding plane
   features (IPv6 extension headers), making its implementation a pure
   control plane software requirement.

   GRASP is the only completely novel protocol used in the ACP.  This
   choice was necessary because no existing protocol provides the
   necessary functions to the ACP, so GRASP was developed to fill that
   gap.

   The ACP design can be applicable to (CPU, memory) constrained
   devices and (bitrate, reliability) constrained networks, but this
   document does not attempt to define the most constrained type of
   devices or networks to which the ACP is applicable.  RPL and DTLS
   are two protocol choices that already make the ACP more applicable
   to constrained environments.  See Section 11.8 for discussion of
   how variations of the ACP could be defined in the future to better
   meet expectations different from those on which the current design
   is based.

2.  Acronyms and Terminology

   In the rest of the document we refer to systems using the ACP as
   "nodes".  Typically such a node is a physical (network equipment)
   device, but it can equally be some virtualized system.  Therefore,
   we do not refer to them as devices unless the context specifically
   calls for a physical system.

   This document introduces or uses the following terms (sorted
   alphabetically).  Terms introduced are explained on first use, so
   this list is for reference only.

   ACP:  "Autonomic Control Plane".  The Autonomic Function as defined
      in this document.  It provides secure zero-touch (automated)
      transitive (network wide) IPv6 connectivity for all nodes in the
      same ACP domain as well as a GRASP instance running across this
      ACP IPv6 connectivity.
      The ACP is primarily meant to be used as a component of the ANI
      to enable Autonomic Networks, but it can equally be used in
      simple ANI networks (with no other Autonomic Functions) or
      completely by itself.

   ACP address:  An IPv6 address assigned to the ACP node.  It is
      stored in the domain information field of the ->"ACP domain
      certificate" ().

   ACP address range/set:  The ACP address may imply a range or set of
      addresses that the node can assign for different purposes.  This
      address range/set is derived by the node from the format of the
      ACP address, called the "addressing sub-scheme".

   ACP connect interface:  An interface on an ACP node providing
      access to the ACP for non-ACP capable nodes without using an ACP
      secure channel.  See Section 8.1.1.

   ACP domain:  The ACP domain is the set of nodes with ->"ACP domain
      certificates" that allow them to authenticate each other as
      members of the ACP domain.  See also Section 6.1.2.

   ACP (ANI/AN) Domain Certificate:  A provisioned [RFC5280]
      certificate (LDevID) carrying the domain information field,
      which is used by the ACP to learn its address in the ACP and to
      derive and cryptographically assert its membership in the ACP
      domain.

   domain information (field):  An rfc822Name information element
      (e.g., field) in the domain certificate in which the ACP
      relevant information is encoded: the domain name and the ACP
      address.

   ACP Loopback interface:  The Loopback interface in the ACP VRF that
      has the ACP address assigned to it.

   ACP network:  The ACP network constitutes all the nodes that have
      access to the ACP.  It is the set of active and transitively
      connected nodes of an ACP domain plus all nodes that get access
      to the ACP of that domain via ACP edge nodes.

   ACP (ULA) prefix(es):  The /48 IPv6 address prefixes used across
      the ACP.  In the normal/simple case, the ACP has one ULA prefix,
      see Section 6.10.
      The ACP routing table may include multiple ULA prefixes if the
      "rsub" option is used to create addresses from more than one ULA
      prefix.  See Section 6.1.1.  The ACP may also include non-ULA
      prefixes if those are configured on ACP connect interfaces.  See
      Section 8.1.1.

   ACP secure channel:  A security association established hop-by-hop
      between adjacent ACP nodes to carry traffic of the ACP VRF
      separated from Data-Plane traffic in-band over the same links as
      the Data-Plane.

   ACP secure channel protocol:  The protocol used to build an ACP
      secure channel, e.g., Internet Key Exchange Protocol version 2
      (IKEv2) with IPsec or Datagram Transport Layer Security (DTLS).

   ACP virtual interface:  An interface in the ACP VRF mapped to one
      or more ACP secure channels.  See Section 6.12.5.

   AN:  "Autonomic Network".  A network according to
      [I-D.ietf-anima-reference-model].  Its main components are ANI,
      Autonomic Functions and Intent.

   (AN) Domain Name:  An FQDN (Fully Qualified Domain Name) in the
      domain information field of the Domain Certificate.  See
      Section 6.1.1.

   ANI (nodes/network):  "Autonomic Network Infrastructure".  The ANI
      is the infrastructure to enable Autonomic Networks.  It includes
      ACP, BRSKI and GRASP.  Every Autonomic Network includes the ANI,
      but not every ANI network needs to include autonomic functions
      beyond the ANI (nor Intent).  An ANI network without further
      autonomic functions can for example support secure zero-touch
      (automated) bootstrap and stable connectivity for SDN networks -
      see [RFC8368].

   ANIMA:  "Autonomic Networking Integrated Model and Approach".  ACP,
      BRSKI and GRASP are products of the IETF ANIMA working group.

   ASA:  "Autonomic Service Agent".  Autonomic software modules
      running on an ANI device.  The components making up the ANI
      (BRSKI, ACP, GRASP) are also described as ASAs.
   Autonomic Function:  A function/service in an Autonomic Network
      (AN) composed of one or more ASAs across one or more ANI nodes.

   BRSKI:  "Bootstrapping Remote Secure Key Infrastructures"
      ([I-D.ietf-anima-bootstrapping-keyinfra]).  A protocol extending
      EST to enable secure zero-touch bootstrap in conjunction with
      the ACP.  ANI nodes use ACP, BRSKI and GRASP.

   Data-Plane:  The counterpoint to the ACP VRF in an ACP node: all
      routing and forwarding in the node other than the ACP VRF.  In a
      simple ACP or ANI node, the Data-Plane is typically provisioned
      non-autonomically, for example manually (including across the
      ACP) or via SDN controllers.  In a fully Autonomic Network node,
      the Data-Plane is managed autonomically via Autonomic Functions
      and Intent.  Note that other (non-ANIMA) RFCs use the term Data-
      Plane to refer to what is better called the forwarding plane;
      that is not the way the term is used in this document.

   device:  A physical system, or physical node.

   Enrollment:  The process where a node presents identification (for
      example through keying material such as the private key of an
      IDevID) to a network and acquires a network specific identity
      and trust anchor such as an LDevID.

   EST:  "Enrollment over Secure Transport" ([RFC7030]).  IETF
      standard protocol for enrollment of a node with an LDevID.
      BRSKI is based on EST.

   GRASP:  "Generic Autonomic Signaling Protocol".  An extensible
      signaling protocol required by the ACP for ACP neighbor
      discovery.  The ACP also provides the "security and transport
      substrate" for the "ACP instance of GRASP".  This instance of
      GRASP runs across the ACP secure channels to support BRSKI and
      other future Autonomic Functions.  See [I-D.ietf-anima-grasp].

   IDevID:  An "Initial Device IDentity" X.509 certificate installed
      by the vendor on new equipment.
Contains information that 528 establishes the identity of the node in the context of its vendor/ 529 manufacturer such as device model/type and serial number. See 530 [AR8021]. IDevIDs cannot be used for the ACP because they are not 531 provisioned by the owner of the network, so they cannot directly 532 indicate the ACP domain to which they belong. 534 in-band (management): The type of management used predominantly in 535 IP based networks, not leveraging an ->"out-of-band network" (). 536 In in-band management, access to the managed equipment depends on 537 the configuration of this equipment itself: interface, addressing, 538 forwarding, routing, policy, security, management. This 539 dependency makes in-band management fragile because the 540 configuration actions performed may break in-band management 541 connectivity. Breakage can not only be unintentional, it can 542 simply be an unavoidable side effect of being unable to create 543 configuration schemes where in-band management connectivity 544 configuration is unaffected by Data-Plane configuration. See also 545 ->"(virtual) out-of-band network" (). 547 Intent: Policy language of an autonomic network according to 548 [I-D.ietf-anima-reference-model]. 550 Loopback interface: The conventional name for an internal IP 551 interface to which addresses may be assigned, but which transmits 552 no external traffic. 554 LDevID: A "Local Device IDentity" is an X.509 certificate installed 555 during "enrollment". The Domain Certificate used by the ACP is an 556 LDevID. See [AR8021]. 558 MIC: "Manufacturer Installed Certificate". Another term not used in 559 this document to describe an IDevID. 561 native interface: Interfaces existing on a node without 562 configuration of the already running node. On physical nodes 563 these are usually physical interfaces. On virtual nodes their 564 equivalent. 566 node: A system, e.g., supporting the ACP according to this document. 567 Can be virtual or physical.
Physical nodes are called devices. 569 Node-ID: The identifier of an ACP node inside that ACP. It is the 570 last 64 bits (see Section 6.10.3) or 78 bits (see the ACP Vlong 571 Addressing Sub-Scheme) of the ACP address. 573 (virtual) out-of-band network: An out-of-band network is a secondary 574 network used to manage a primary network. The equipment of the 575 primary network is connected to the out-of-band network via 576 dedicated management ports on the primary network equipment. 577 Serial (console) management ports are most common; higher end 578 network equipment also has Ethernet ports dedicated only for 579 management. An out-of-band network provides management access to 580 the primary network independent of the configuration state of the 581 primary network. One of the goals of the ACP is to provide this 582 benefit of out-of-band networks virtually on the primary network 583 equipment. The ACP VRF acts as a virtual out-of-band network 584 device providing configuration independent management access. The 585 ACP secure channels are the virtual links of the ACP virtual out- 586 of-band network, meant to be operating independently of the 587 configuration of the primary network. See also ->"in-band 588 (management)" (). 590 RPL: "IPv6 Routing Protocol for Low-Power and Lossy Networks". The 591 routing protocol used in the ACP. See [RFC6550]. 593 MASA (service): "Manufacturer Authorized Signing Authority". A 594 vendor/manufacturer or delegated cloud service on the Internet 595 used as part of the BRSKI protocol. 597 (ACP/ANI/BRSKI) Registrar: An ACP registrar is an entity (software 598 and/or person) that is orchestrating the enrollment of ACP nodes 599 with the ACP domain certificate. ANI nodes use BRSKI, so ANI 600 registrars are also called BRSKI registrars. For non-ANI ACP 601 nodes, the registrar mechanisms are undefined by this document. 602 See Section 6.10.7.
Renewal and other maintenance (such as 603 revocation) of ACP domain certificates may be performed by entities 604 other than registrars. EST must be supported for ACP domain 605 certificate renewal (see Section 6.1.3). BRSKI is an extension of 606 EST, so ANI/BRSKI registrars can easily support ACP domain 607 certificate renewal in addition to initial enrollment. 609 sUDI: "secured Unique Device Identifier". Another term not used in 610 this document to refer to an IDevID. 612 UDI: "Unique Device Identifier". In the context of this document, 613 unsecured identity information of a node typically consisting of 614 at least device model/type and serial number, often in a vendor 615 specific format. See sUDI and LDevID. 617 ULA: (Global ID prefix) A "Unique Local Address" (ULA) is an IPv6 618 address in the block fc00::/7, defined in [RFC4193]. It is the 619 approximate IPv6 counterpart of the IPv4 private address 620 ([RFC1918]). The ULA Global ID prefix is the first 48 bits of a 621 ULA address. In this document it is abbreviated as "ULA prefix". 623 (ACP) VRF: The ACP is modeled in this document as a "Virtual Routing 624 and Forwarding" instance (VRF). This means that it is based on a 625 "virtual router" consisting of a separate IPv6 forwarding table to 626 which the ACP virtual interfaces are attached and an associated 627 separate IPv6 routing table. Unlike the VRFs on MPLS/VPN-PE 628 ([RFC4364]) or LISP XTR ([RFC6830]), the ACP VRF does not have any 629 special "core facing" functionality or routing/mapping protocols 630 shared across multiple VRFs. In vendor products a VRF such as the 631 ACP-VRF may also be referred to as a so-called VRF-lite. 633 (ACP) Zone: An ACP zone is a connected region of the ACP where nodes 634 derive from their non-aggregatable ACP address (identifier 635 address) an aggregatable ACP zone address (locator address). See 636 the definition of the ACP Zone Addressing Sub-Scheme 637 (Section 6.10.3).
The complete definition of zones is subject to 638 future work because this document does not describe the routing 639 protocol details for aggregation of ACP zone addresses, but only 640 their addressing scheme. 642 The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", 643 "SHOULD", "SHOULD NOT", "RECOMMENDED", "NOT RECOMMENDED", "MAY", and 644 "OPTIONAL" in this document are to be interpreted as described in 645 [RFC8174] when they appear in ALL CAPS. When these words are not in 646 ALL CAPS (such as "should" or "Should"), they have their usual 647 English meanings, and are not to be interpreted as [RFC8174] key 648 words. 650 3. Use Cases for an Autonomic Control Plane 652 3.1. An Infrastructure for Autonomic Functions 654 Autonomic Functions need a stable infrastructure to run on, and all 655 autonomic functions should use the same infrastructure to minimize 656 the complexity of the network. This way, there is need for only a 657 single discovery mechanism, a single security mechanism, and other 658 processes that distributed functions require. 660 3.2. Secure Bootstrap over an Unconfigured Network 662 Today, bootstrapping a new node typically requires all nodes between 663 a controlling node such as an SDN controller ("Software Defined 664 Networking", see [RFC7426]) and the new node to be completely and 665 correctly addressed, configured and secured. Bootstrapping and 666 configuration of a network happen in rings around the controller - 667 configuring each ring of devices before the next one can be 668 bootstrapped. Without console access (for example through an out-of- 669 band network) it is not possible today to make devices securely 670 reachable before having configured the entire network leading up to 671 them. 673 With the ACP, secure bootstrap of new devices can happen without 674 requiring any configuration, such as the transit connectivity used to 675 bootstrap further devices.
A new device can automatically be 676 bootstrapped in a secure fashion and be deployed with a domain 677 certificate. This does not require any configuration on intermediate 678 nodes, because they can communicate zero-touch and securely through 679 the ACP. 681 3.3. Data-Plane Independent Permanent Reachability 683 Today, most critical control plane protocols and network management 684 protocols use the Data-Plane (global routing table) of the 685 network. This leads to undesirable dependencies between the control and 686 management plane on one side and the Data-Plane on the other: Only if 687 the Data-Plane is operational will the other planes work as 688 expected. 690 Data-Plane connectivity can be affected by errors and faults, for 691 example misconfigurations that make AAA (Authentication, 692 Authorization and Accounting) servers unreachable or can lock an 693 administrator out of a device; routing or addressing issues can make 694 a device unreachable; shutting down interfaces over which a current 695 management session is running can lock an admin irreversibly out of 696 the device. Traditionally only console access can help recover from 697 such issues. 699 Data-Plane dependencies also affect applications in a Network 700 Operations Center (NOC) such as SDN controller applications: Certain 701 network changes are today hard to operate, because the change itself 702 may affect reachability of the devices. Examples are address or mask 703 changes, routing changes, or security policy changes. Today such changes 704 require precise hop-by-hop planning.
706 The ACP provides reachability that is independent of the Data-Plane 707 (except for the dependency discussed in Section 6.12.2 which can be 708 removed through future work), which allows the control plane and 709 management plane to operate more robustly: 711 o For management plane protocols, the ACP provides the functionality 712 of a Virtual out of Band (VooB) channel, by providing connectivity 713 to all nodes regardless of their configuration or global routing 714 table. 716 o For control plane protocols, the ACP allows their operation even 717 when the Data-Plane is temporarily faulty, or during transitional 718 events, such as routing changes, which may affect the control 719 plane at least temporarily. This is specifically important for 720 autonomic service agents, which could affect Data-Plane 721 connectivity. 723 The document "Using Autonomic Control Plane for Stable Connectivity 724 of Network OAM" [RFC8368] explains this use case for the ACP in 725 significantly more detail and explains how the ACP can be used in 726 practical network operations. 728 4. Requirements 730 The Autonomic Control Plane has the following requirements: 732 ACP1: The ACP SHOULD provide robust connectivity: As far as 733 possible, it should be independent of configured addressing, 734 configuration and routing. Requirements 2 and 3 build on this 735 requirement, but also have value on their own. 737 ACP2: The ACP MUST have a separate address space from the Data- 738 Plane. Reason: traceability, debug-ability, separation from 739 Data-Plane, security (can block easily at edge). 741 ACP3: The ACP MUST use autonomically managed address space. Reason: 742 easy bootstrap and setup ("autonomic"); robustness (admin 743 can't mess things up so easily). This document suggests using 744 ULA addressing for this purpose ("Unique Local Address", see 745 [RFC4193]). 747 ACP4: The ACP MUST be generic. Usable by all the functions and 748 protocols of the ANI. 
Clients of the ACP MUST NOT be tied to 749 a particular application or transport protocol. 751 ACP5: The ACP MUST provide security: Messages coming through the ACP 752 MUST be authenticated to be from a trusted node, and SHOULD 753 (very strong SHOULD) be encrypted. 755 Explanation for ACP4: In a fully autonomic network (AN), newly 756 written ASAs could potentially all communicate exclusively via GRASP 757 with each other, and if that were assumed to be the only requirement 758 placed on the ACP, it would not need to provide IPv6 layer connectivity 759 between nodes, but only GRASP connectivity. Nevertheless, because 760 the ACP also intends to support non-AN networks, it is crucial to 761 support IPv6 layer connectivity across the ACP to support any 762 transport and application layer protocols. 764 The ACP operates hop-by-hop, because this interaction can be built on 765 IPv6 link local addressing, which is autonomic, and has no dependency 766 on configuration (requirement 1). It may be necessary to have ACP 767 connectivity across non-ACP nodes, for example to link ACP nodes over 768 the general Internet. This is possible, but introduces a dependency 769 on stable/resilient routing over the non-ACP hops (see 770 Section 8.2). 772 5. Overview 774 The Autonomic Control Plane is constructed in the following way (for 775 details, see Section 6): 777 1. An ACP node creates a Virtual Routing and Forwarding (VRF) 778 instance, or a similar virtual context. 780 2. It determines, following a policy, a candidate peer list. This 781 is the list of nodes to which it should establish an Autonomic 782 Control Plane. The default policy is: to all link-layer adjacent 783 nodes supporting ACP. 785 3. For each node in the candidate peer list, it authenticates that 786 node and negotiates a mutually acceptable channel type. 788 4. For each node in the candidate peer list, it then establishes a 789 secure tunnel of the negotiated type.
The resulting tunnels are 790 then placed into the previously set up VRF. This creates an 791 overlay network with hop-by-hop tunnels. 793 5. Inside the ACP VRF, each node assigns its ULA IPv6 address to a 794 Loopback interface assigned to the ACP VRF. 796 6. Each node runs a lightweight routing protocol, to announce 797 reachability of the virtual addresses inside the ACP (see 798 Section 6.12.5). 800 Note: 802 o Non-autonomic NMS ("Network Management Systems") or SDN 803 controllers have to be explicitly configured for connection into 804 the ACP. 806 o Connecting over non-ACP Layer-3 clouds requires explicit 807 configuration. See Section 8.2. This may be automated in the 808 future through auto discovery mechanisms across L3. 810 o None of the above operations (except explicitly configured ones) are 811 reflected in the configuration of the node. 813 The following figure illustrates the ACP. 815 ACP node 1 ACP node 2 816 ................... ................... 817 secure . . secure . . secure 818 channel: +-----------+ : channel : +-----------+ : channel 819 ..--------| ACP VRF |---------------------| ACP VRF |---------.. 820 : / \ / \ <--routing--> / \ / \ : 821 : \ / \ / \ / \ / : 822 ..--------| Loopback |---------------------| Loopback |---------.. 823 : | interface | : : | interface | : 824 : +-----------+ : : +-----------+ : 825 : : : : 826 : Data-Plane :...............: Data-Plane : 827 : : link : : 828 :.................: :.................: 830 Figure 1: ACP VRF and secure channels 832 The resulting overlay network is normally based exclusively on hop- 833 by-hop tunnels. This is because the addressing used on links is IPv6 834 link local addressing, which does not require any prior set-up. This 835 way the ACP can be built even if there is no configuration on the 836 node, or if the Data-Plane has issues such as addressing or routing 837 problems. 839 6.
Self-Creation of an Autonomic Control Plane (ACP) (Normative) 841 This section describes the components and steps to set up an 842 Autonomic Control Plane (ACP), and highlights the key properties 843 which make it "indestructible" against many inadvertent changes to 844 the Data-Plane, for example caused by misconfigurations. 846 An ACP node can be a router, switch, controller, NMS host, or any 847 other IP capable node. Initially, it must have its ACP domain 848 certificate, as well as an (empty) ACP Adjacency Table (described in 849 Section 6.2). It then can start to discover ACP neighbors and build 850 the ACP. This is described step by step in the following sections: 852 6.1. ACP Domain, Certificate and Network 854 The ACP relies on group security. An ACP domain is a group of nodes 855 that trust each other to participate in ACP operations. To establish 856 trust, each ACP member requires keying material: An ACP node MUST 857 have a certificate (LDevID) and a Trust Anchor (TA) consisting of a 858 certificate (chain) used to sign the LDevID of all ACP domain 859 members. The LDevID is used to cryptographically authenticate the 860 membership of its owner node in the ACP domain to other ACP domain 861 members; the TA is used to authenticate the ACP domain membership of 862 other nodes (see Section 6.1.2). 864 The LDevID is called the ACP domain certificate, and the TA is the 865 Certificate Authority (CA) of the ACP domain. 867 The ACP does not mandate specific mechanisms by which this keying 868 material is provisioned into the ACP node; it only requires the 869 Domain information field as specified in Section 6.1.1 in its domain 870 certificate as well as in those of candidate ACP peers. See 871 Section 11.2 for more information about enrollment or provisioning 872 options. 874 This document uses the term ACP in many places where the Autonomic 875 Networking reference documents [RFC7575] and 876 [I-D.ietf-anima-reference-model] use the word autonomic.
This is 877 done because those reference documents consider (only) fully 878 autonomic networks and nodes, but support of ACP does not require 879 support for other components of autonomic networks. Therefore the 880 word autonomic might be misleading to operators interested in only 881 the ACP: 883 [RFC7575] defines the term "Autonomic Domain" as a collection of 884 autonomic nodes. ACP nodes do not need to be fully autonomic, but 885 when they are, then the ACP domain is an autonomic domain. Likewise, 886 [I-D.ietf-anima-reference-model] defines the term "Domain 887 Certificate" as the certificate used in an autonomic domain. The ACP 888 domain certificate is that domain certificate when ACP nodes are 889 (fully) autonomic nodes. Finally, this document uses the term ACP 890 network to refer to the network created by active ACP nodes in an ACP 891 domain. The ACP network itself can extend beyond ACP nodes through 892 the mechanisms described in Section 8.1. 894 The ACP domain certificate can and should be used for any 895 authentication between ACP nodes where the required security is 896 domain membership. Section 6.1.2 defines this "ACP domain membership 897 check". The uses of this check that are standardized in this 898 document are for the establishment of ACP secure channels 899 (Section 6.6) and for ACP GRASP (Section 6.8.2). Other uses are 900 subject to future work, but it is recommended that it be the default 901 security check for any end-to-end connections between ASAs. It is 902 equally usable by other functions such as legacy OAM functions. 904 6.1.1.
Certificate Domain Information Field 906 Information about the domain MUST be encoded in the domain 907 certificate in a subjectAltName / rfc822Name field according to the 908 following ABNF definition ([RFC5234]): 910 [RFC Editor: Please substitute SELF in all occurrences of rfcSELF in 911 this document with the RFC number assigned to this document and 912 remove this comment line] 914 domain-information = local-part "@" domain 915 local-part = key [ "." local-info ] 916 key = "rfcSELF" 917 local-info = [ acp-address ] [ "+" rsub extensions ] 918 acp-address = 32hex-dig 919 hex-dig = DIGIT / "a" / "b" / "c" / "d" / "e" / "f" 920 rsub = [ domain-name ] ; empty if not used 921 domain = domain-name 922 routing-subdomain = [ rsub "." ] domain 923 domain-name = ; <domain> ; as of RFC 1034, section 3.5 924 extensions = *( "+" extension ) 925 extension = ; future definition. 926 ; Must fit RFC5322 simple dot-atom format. 928 Example: 929 domain-information = rfcSELF+fd89b714f3db00000200000064000000 930 +area51.research@acp.example.com 931 routing-subdomain = area51.research.acp.example.com 933 Figure 2: ACP Domain Information Field ABNF 935 "acp-address" MUST be the ACP address of the node. It is optional in order to 936 support variations of the ACP mechanisms, for example other means for 937 nodes to assign ACP addresses to themselves. Such methods are 938 subject to future work though. 940 Note: "acp-address" cannot use standard IPv6 address formats because 941 it must match the simple dot-atom format of [RFC5322]. ":" characters are not 942 allowed in that format. 944 "domain" is used to indicate the ACP Domain across which all ACP 945 nodes trust each other and are willing to build ACP channels to each 946 other. See Section 6.1.2. Domain SHOULD be the FQDN of a domain 947 owned by the operator assigning the certificate. This is a simple 948 method to ensure that the domain is globally unique and collisions of 949 ACP addresses would therefore only happen due to ULA hash collisions.
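As a worked illustration, the example value of Figure 2 can be decomposed with a few lines of code. This is an editor-supplied sketch, not part of the specification: all function and variable names are this sketch's own, it follows the populated form shown in the Figure 2 example, and empty acp-address/rsub fields as well as future "extensions" are not handled.

```python
import re

KEY = "rfcSELF"  # the fixed "key" from the ABNF; "SELF" is replaced by
                 # the RFC Editor with the number assigned to this document

def parse_domain_information(value):
    """Split a domain information value into its components.

    Follows the populated form of the Figure 2 example
    ("rfcSELF" "+" acp-address "+" rsub "@" domain).
    Raises ValueError if the field is malformed.
    """
    local_part, sep, domain = value.partition("@")
    if not sep or not domain:
        raise ValueError("missing '@' separator or empty domain")
    if not local_part.startswith(KEY):
        raise ValueError("local-part must begin with the key")
    # Strip the key and its separator, then split the remaining fields.
    fields = local_part[len(KEY):].lstrip(".+").split("+")
    acp_address = fields[0]
    if not re.fullmatch(r"[0-9a-f]{32}", acp_address):
        raise ValueError("acp-address must be 32 lowercase hex digits")
    rsub = fields[1] if len(fields) > 1 else ""
    return {
        "acp-address": acp_address,
        "rsub": rsub,
        "domain": domain,
        "routing-subdomain": (rsub + "." + domain) if rsub else domain,
    }
```

Applied to the Figure 2 example, this yields the routing-subdomain "area51.research.acp.example.com" shown there.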
950 If the operator does not own any FQDN, it should choose a string in 951 FQDN format that is intended to be equally unique. 953 "routing-subdomain" is the autonomic subdomain that is used to 954 calculate the hash for the ULA Global ID of the ACP address of the 955 node. "rsub" is optional; its syntax is defined in this document, 956 but its semantics are for further study. Understanding the benefits 957 of using rsub may depend on the results of future work on enhancing 958 routing for the ACP. When "rsub" is not used, "routing-subdomain" is 959 the same as "domain". "rsub" needs to be in the "local-part"; it 960 could not syntactically be separated from "domain-name" if "domain" 961 is just a domain name. It also makes it easier for the domain name to be 962 a valid e-mail target. 964 The optional "extensions" field is used for future extensions to this 965 specification. It MUST be ignored if present and not understood. 967 In this specification, the "acp-address" field is REQUIRED, but 968 future variations (see Section 11.8) may use local information to 969 derive the ACP address. In this case, "acp-address" could be empty. 970 Such a variation would be indicated by an appropriate "extension". 971 If "acp-address" is empty, and "rsub" is empty too, the "local-part" 972 will have the format "rfcSELF + + extension(s)". The two plus 973 characters are necessary so the node can unambiguously parse that 974 both "acp-address" and "rsub" are empty. 976 Note that the maximum size of "domain-information" is 254 characters 977 and the maximum size of the local-part is 64 characters according to 978 [RFC5280], which refers to [RFC2821] (superseded by [RFC5321]). 980 The subjectAltName / rfc822Name encoding of the ACP domain name and 981 ACP address is used for the following reasons: 983 o It should be possible to share the LDevID with other uses beside 984 the ACP.
Therefore, the information element required for the ACP 985 should be encoded so that it minimizes the possibility of creating 986 incompatibilities with such other uses. 988 o The information for the ACP should not cause incompatibilities 989 with any pre-existing ASN.1 software. This eliminates the 990 introduction of a novel information element because that could 991 require extensions to such pre-existing ASN.1 parsers. 993 o subjectAltName / rfc822Name is a pre-existing element that must be 994 supported by all existing ASN.1 parsers for LDevID. 996 o The element required for the ACP should not be misinterpreted by 997 any other uses of the LDevID. If the element used for the ACP is 998 interpreted by other uses, the impact should be benign. 1000 o Using an IP address format encoding could result in non-benign 1001 misinterpretation of the domain information field; other uses 1002 unaware of the ACP could try to do something with the ACP address 1003 that would fail to work correctly. For example, the address could 1004 be interpreted to be an address of the node which does not belong 1005 to the ACP VRF. 1007 o At minimum, both the AN domain name and the non-domain name 1008 derived part of the ACP address need to be encoded in one or more 1009 appropriate fields of the certificate, so there are not many 1010 alternatives with pre-existing fields where the only possible 1011 conflicts would likely be benign. 1013 o rfc822Name encoding is quite flexible. We choose to encode the 1014 full ACP address AND the domain name with sub part into a single 1015 rfc822Name information element, so that it is easier to 1016 examine/use the "domain information field". 1018 o The format of the rfc822Name is chosen so that an operator can set 1019 up a mailbox called rfcSELF@<domain> that would receive emails 1020 sent towards the rfc822Name of any node inside a domain.
This is 1021 possible because in many modern mail systems, components behind a 1022 "+" character are considered part of a single mailbox. In other 1023 words, it is not necessary to set up a separate mailbox for every 1024 ACP node, but only one for the whole domain. 1026 o As a result, if any unexpected use of the ACP addressing information 1027 in a certificate happens, it is benign and detectable: it would be 1028 mail to that mailbox. 1030 See section 4.2.1.6 of [RFC5280] for details on the subjectAltName 1031 field. 1033 6.1.2. ACP domain membership check 1035 The following points constitute the ACP domain membership check: 1037 o The peer certificate is valid as proven by the security 1038 association's protocol exchange. 1040 o The peer's certificate is signed by one of the trust anchors 1041 associated with the ACP domain certificate. 1043 o If the node's certificate indicates a Certificate Revocation List 1044 (CRL) Distribution Point (CDP) ([RFC5280], section 4.2.1.13) or 1045 Online Certificate Status Protocol (OCSP) responder ([RFC5280], 1046 section 4.2.2.1), then the peer's certificate must be valid 1047 according to those criteria: An OCSP check for the peer's 1048 certificate across the ACP must succeed or the peer certificate 1049 must not be listed in the CRL retrieved from the CDP. 1051 o The peer's certificate has a syntactically valid domain information 1052 field (subjectAltName / rfc822Name), and the domain name in that 1053 peer's domain information field is the same as in this ACP node's 1054 certificate. Note that future Intent rules may modify this. See 1055 Section 11.7. 1057 6.1.3. Certificate Maintenance 1059 ACP nodes MUST support certificate renewal via EST ("Enrollment over 1060 Secure Transport", see [RFC7030]) and MAY support other mechanisms. 1061 An ACP network MUST have at least one ACP node supporting EST server 1062 functionality across the ACP so that EST renewal is usable.
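The ACP domain membership check of Section 6.1.2 above amounts to a small decision function. The following is an editor-supplied, illustrative sketch only: the cryptographic work (certificate validity, trust anchor chain verification, CRL/OCSP retrieval) is abstracted into boolean inputs that an X.509 library would supply, and all names are this sketch's own.

```python
def acp_domain_membership(peer, my_domain):
    """Return True if the peer passes the ACP domain membership check.

    peer: dict with the pre-computed results of the individual checks,
    plus the domain name parsed from the peer's domain information field.
    """
    # Peer certificate valid, as proven during the security association
    # protocol exchange.
    if not peer["certificate_valid"]:
        return False
    # Signed by one of the trust anchors of the ACP domain certificate.
    if not peer["signed_by_acp_trust_anchor"]:
        return False
    # Revocation status is only relevant if the certificate indicates a
    # CDP or OCSP responder.
    if peer.get("indicates_cdp_or_ocsp") and not peer.get("revocation_ok"):
        return False
    # The peer's domain information field must parse and carry the same
    # domain name as this node's own certificate.
    return peer.get("domain") == my_domain
```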
1064 ACP nodes SHOULD be able to remember the EST server from which they 1065 last renewed their ACP domain certificate and SHOULD provide the 1066 ability for this remembered EST server to also be set by the ACP 1067 Registrar (see Section 6.10.7) that initially enrolled the ACP device 1068 with its ACP domain certificate. When BRSKI (see 1069 [I-D.ietf-anima-bootstrapping-keyinfra]) is used, the ACP address of 1070 the BRSKI registrar from the BRSKI TLS connection SHOULD be 1071 remembered and used for the next renewal via EST if that registrar 1072 also announces itself as an EST server via GRASP (see next section) 1073 on its ACP address. 1075 6.1.3.1. GRASP objective for EST server 1077 ACP nodes that are EST servers MUST announce their service via GRASP 1078 in the ACP through M_FLOOD messages. See [I-D.ietf-anima-grasp], 1079 section 2.8.11 for the definition of this message type: 1081 Example: 1083 [M_FLOOD, 12340815, h'fd89b714f3db0000200000064000001', 210000, 1084 ["SRV.est", 4, 255 ], 1085 [O_IPv6_LOCATOR, 1086 h'fd89b714f3db0000200000064000001', TCP, 80] 1087 ] 1089 Figure 3: GRASP SRV.est example 1091 The formal definition of the objective in Concise data definition 1092 language (CDDL) (see [I-D.ietf-cbor-cddl]) is as follows: 1094 flood-message = [M_FLOOD, session-id, initiator, ttl, 1095 +[objective, (locator-option / [])]] 1097 objective = ["SRV.est", objective-flags, loop-count, 1098 objective-value] 1100 objective-flags = sync-only ; as in GRASP spec 1101 sync-only = 4 ; M_FLOOD only requires synchronization 1102 loop-count = 255 ; recommended 1103 objective-value = ; Not used (yet) 1105 Figure 4: GRASP SRV.est definition 1107 The objective value "SRV.est" indicates that the objective is an 1108 [RFC7030] compliant EST server because "est" is an [RFC6335] 1109 registered service name for [RFC7030]. Future backward compatible 1110 extensions/alternatives to [RFC7030] may be indicated through 1111 objective-value. 
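The SRV.est announcement of Figure 3 can be assembled programmatically. The sketch below is editor-supplied and illustrative only: the GRASP message is represented as a plain Python list rather than its CBOR encoding, and the numeric codes for M_FLOOD and O_IPv6_LOCATOR are assumptions taken from the GRASP specification, not defined in this document.

```python
# Numeric codes assumed from the GRASP specification; TCP is IP protocol 6.
M_FLOOD, O_IPV6_LOCATOR, TCP = 9, 103, 6

def srv_est_announcement(session_id, acp_address, period_s=60):
    """Build the SRV.est M_FLOOD message of Figure 3 as a Python list.

    The ttl is 3.5 times the announcement period, so receivers tolerate
    up to three lost consecutive announcements; for the default period
    of 60 seconds this yields the 210000 msec shown in the example.
    """
    ttl_ms = int(3.5 * period_s * 1000)
    return [M_FLOOD, session_id, acp_address, ttl_ms,
            ["SRV.est", 4, 255],                     # flags=4 (sync-only),
                                                     # loop-count=255
            [O_IPV6_LOCATOR, acp_address, TCP, 80]]  # EST server, TCP/80
```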
Future non-backward compatible certificate renewal 1112 options must use a different objective-name. 1114 The M_FLOOD message MUST be sent periodically. The default SHOULD be 1115 60 seconds, and the value SHOULD be operator configurable. The frequency 1116 of sending MUST be such that the aggregate amount of periodic 1117 M_FLOODs from all flooding sources causes only negligible traffic 1118 across the ACP. The ttl parameter SHOULD be 3.5 times the period so 1119 that up to three consecutive messages can be dropped before 1120 considering an announcement expired. In the example above, the ttl 1121 is 210000 msec, 3.5 times 60 seconds. When a service announcer using 1122 these parameters unexpectedly dies immediately after sending the 1123 M_FLOOD, receivers would consider it expired 210 seconds later. When 1124 a receiver tries to connect to this dead service before this timeout, 1125 it will experience a failing connection and use that as an indication 1126 that the service is dead and select another instance of the same 1127 service instead. 1129 6.1.3.2. Renewal 1131 When performing renewal, the node SHOULD attempt to connect to the 1132 remembered EST server. If that fails, it SHOULD attempt to connect 1133 to an EST server learned via GRASP. The server with which 1134 certificate renewal succeeds SHOULD be remembered for the next 1135 renewal. 1137 Remembering the last renewal server and preferring it provides 1138 stickiness which can help diagnostics. It also provides some 1139 protection against off-path compromised ACP members announcing bogus 1140 information into GRASP. 1142 Renewal of certificates SHOULD start after less than 50% of the 1143 domain certificate lifetime has elapsed, so that network operations has ample time 1144 to investigate and resolve any problems that cause a node to not 1145 renew its domain certificate in time - and to allow prolonged periods 1146 of running parts of a network disconnected from any CA. 1148 6.1.3.3.
Certificate Revocation Lists (CRLs) 1150 The ACP node SHOULD support Certificate Revocation Lists (CRL) via 1151 HTTPS from one or more CRL Distribution Points (CDPs). The CDP(s) 1152 MUST be indicated in the Domain Certificate when used. If the CDP 1153 URL uses an IPv6 address (ULA address when using the addressing rules 1154 specified in this document), the ACP node will connect to the CDP via 1155 the ACP. If the CDP uses a domain name, the 1156 ACP node will connect to the CDP via the Data-Plane. 1158 It is common to use domain names for CDP(s), but there is no 1159 requirement for the ACP to support DNS. Any DNS lookup in the Data- 1160 Plane is not only a possible security issue, but it would also not 1161 indicate whether the resolved address is meant to be reachable across 1162 the ACP. Therefore, the use of an IPv6 address versus the use of a 1163 DNS name doubles as an indicator of whether or not to reach the CDP via 1164 the ACP. 1166 A CDP can be reachable across the ACP either by running it on a node 1167 with ACP or by connecting its node via an ACP connect interface (see 1168 Section 8.1). The CDP SHOULD use an ACP domain certificate for its 1169 HTTPS connections. The connecting ACP node SHOULD verify that the 1170 CDP certificate used during the HTTPS connection has the same ACP 1171 address as indicated in the CDP URL of the node's ACP domain 1172 certificate. 1176 6.1.3.4. Lifetimes 1178 Certificate lifetimes may be set shorter than the customary 1179 (1 year) because certificate renewal is fully automated via ACP and 1180 EST. The primary limiting factor for shorter certificate lifetimes 1181 is the load on the EST server(s) and CA.
It is therefore recommended that ACP domain certificates are managed via a CA chain where the assigning CA has enough performance to manage short lived certificates. See also Section 10.2.4 for discussion about an example setup achieving this.

When certificate lifetimes are sufficiently short, such as a few hours, certificate revocation may not be necessary, allowing the overall certificate maintenance infrastructure to be simplified.

See Section 11.2 for further optimizations of certificate maintenance when BRSKI can be used ("Bootstrapping Remote Secure Key Infrastructures", see [I-D.ietf-anima-bootstrapping-keyinfra]).

6.1.3.5. Re-enrollment

An ACP node may determine that its ACP domain certificate has expired, for example because the ACP node was powered down or disconnected longer than its certificate lifetime. In this case, the ACP node SHOULD convert to a role of a re-enrolling candidate ACP node.

In this role, the node maintains the trust anchor and certificate chain associated with its ACP domain certificate exclusively for the purpose of re-enrollment, and attempts (or waits) to get re-enrolled with a new ACP certificate. The details depend on the mechanisms/protocols used by the ACP registrars.

Please refer to Section 6.10.7 for explanations about ACP registrars and vouchers as used in the following text.

When BRSKI is used (aka: on ACP nodes that are ANI nodes), the re-enrolling candidate ACP node would attempt to enroll like a candidate ACP node (BRSKI pledge), but instead of using the ACP node's IDevID, it SHOULD first attempt to use its ACP domain certificate in the BRSKI TLS authentication. The BRSKI registrar MAY honor this certificate beyond its expiration date purely for the purpose of re-enrollment.
Using the ACP node's domain certificate allows the BRSKI registrar to learn that node's ACP domain information field, so that the BRSKI registrar can re-assign the same ACP address information to the ACP node in the new ACP domain certificate.

If the BRSKI registrar denies the use of the old ACP domain certificate, the re-enrolling candidate ACP node MUST re-attempt re-enrollment using its IDevID as defined in BRSKI during the TLS connection setup.

Whether the BRSKI connection is attempted with the old ACP domain certificate or the IDevID, the re-enrolling candidate ACP node SHOULD authenticate the BRSKI registrar during TLS connection setup based on its existing trust anchor/certificate chain information associated with its old ACP certificate. The re-enrolling candidate ACP node SHOULD only request a voucher from the BRSKI registrar when this authentication fails during TLS connection setup.

When mechanisms other than BRSKI are used for ACP domain certificate enrollment, the principles of the re-enrolling candidate ACP node are the same. The re-enrolling candidate ACP node attempts to authenticate any ACP registrar peers during re-enrollment protocols/mechanisms via its existing certificate chain/trust anchor and provides its existing ACP domain certificate and other identification (such as the IDevID) as necessary to the registrar.

Maintaining existing trust anchor information is especially important when enrollment mechanisms are used that, unlike BRSKI, do not leverage a voucher mechanism to authenticate the ACP registrar, because without it the injection of certificate failures could otherwise make the ACP node easily attackable remotely.
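The credential and voucher decisions described above can be condensed into a small decision function. This is a non-normative Python sketch; the function and the string values are illustrative, not defined by this document:

```python
def reenrollment_plan(old_cert_accepted_by_registrar: bool,
                      registrar_auth_ok: bool):
    """Return (client_credential, request_voucher) for a re-enrolling
    candidate ACP node.

    Prefer the (possibly expired) ACP domain certificate as the client
    credential so the registrar can re-assign the same ACP address
    information; fall back to the IDevID if the registrar denies it.
    A voucher is requested only when the registrar cannot be
    authenticated against the node's existing trust anchors.
    """
    credential = ("acp-domain-certificate" if old_cert_accepted_by_registrar
                  else "idevid")
    request_voucher = not registrar_auth_ok
    return credential, request_voucher
```

The sketch captures the ordering only; the actual TLS handshakes and voucher exchange follow BRSKI.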
When using BRSKI or other protocols/mechanisms supporting vouchers, maintaining existing trust anchor information allows re-enrollment of expired ACP certificates to be more lightweight, especially in environments where repeated acquisition of vouchers during the lifetime of ACP nodes may be operationally expensive or otherwise undesirable.

6.1.3.6. Failing Certificates

An ACP domain certificate is called failing in this document if/when the ACP node can determine that it was revoked (or explicitly not renewed), or, in the absence of such explicit local diagnostics, when the ACP node fails to connect to other ACP nodes in the same ACP domain using its ACP certificate. To attribute a connection failure to the ACP domain certificate, the peer should pass the domain membership check (Section 6.1.2) and other reasons for the connection failure should be excluded based on the connection error diagnostics.

This type of failure can happen during setup/refresh of a secure ACP channel connection or any other use of the ACP domain certificate, such as for the TLS connection to an EST server for the renewal of the ACP domain certificate.

Example reasons for failing certificates that the ACP node can only discover through connection failure are that the domain certificate or any of its signing certificates could have been revoked or may have expired, but the ACP node can not self-diagnose this condition directly. Revocation information or clock synchronization may only be available across the ACP, but the ACP node can not build ACP secure channels because ACP peers reject the ACP node's domain certificate.
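The determination above can be summarized as a predicate. This is a non-normative sketch; the parameter names are illustrative:

```python
def certificate_is_failing(revoked: bool,
                           explicitly_not_renewed: bool,
                           connection_failed: bool,
                           peer_passed_membership_check: bool,
                           other_causes_excluded: bool) -> bool:
    """Sketch of the 'failing certificate' determination: either an
    explicit local diagnostic (revocation / denied renewal), or a
    connection failure attributable to the node's own certificate
    because the peer passed the domain membership check and other
    failure causes were ruled out by the error diagnostics."""
    if revoked or explicitly_not_renewed:
        return True
    return (connection_failed
            and peer_passed_membership_check
            and other_causes_excluded)
```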
ACP nodes SHOULD support the option to determine whether their ACP certificate is failing, and when it is, put themselves into the role of a re-enrolling candidate ACP node as explained above (Section 6.1.3.5).

6.2. ACP Adjacency Table

To know to which nodes to establish an ACP channel, every ACP node maintains an adjacency table. The adjacency table contains information about adjacent ACP nodes, at a minimum: Node-ID (identifier of the node inside the ACP, see Section 6.10.3 and Section 6.10.5), interface on which the neighbor was discovered (by GRASP as explained below), link-local IPv6 address of the neighbor on that interface, and certificate (including the domain information field). An ACP node MUST keep this adjacency table up to date. This table is used to determine to which neighbor an ACP connection is established.

Where the next ACP node is not directly adjacent (i.e., not on a link connected to this node), the information in the adjacency table can be supplemented by configuration. For example, the Node-ID and IP address could be configured.

The adjacency table MAY contain information about the validity and trust of the adjacent ACP node's certificate. However, subsequent steps MUST always start with authenticating the peer.

The adjacency table contains information about adjacent ACP nodes in general, independently of their domain and trust status. The next step determines to which of those ACP nodes an ACP connection should be established.

Interaction between the ACP and other autonomic elements like GRASP (see below) or ASAs should be via an API that allows (appropriately access controlled) read/write access to the ACP Adjacency Table. Specification of such an API is subject to future work.

6.3.
Neighbor Discovery with DULL GRASP

[RFC Editor: The GRASP draft is in the RFC editor queue, waiting for dependencies, including ACP. Please ensure that references to I-D.ietf-anima-grasp that include section number references (throughout this document) will be updated in case any last-minute changes in GRASP would make those section references change.]

DULL GRASP is a limited subset of GRASP intended to operate across an insecure link-local scope. See section 2.5.2 of [I-D.ietf-anima-grasp] for its formal definition. The ACP uses one instance of DULL GRASP for every L2 interface of the ACP node to discover link level adjacent candidate ACP neighbors. Unless modified by policy as noted earlier (Section 5 bullet point 2.), native interfaces (e.g., physical interfaces on physical nodes) SHOULD be initialized automatically to the extent that ACP discovery can be performed and any native interfaces with ACP neighbors can then be brought into the ACP even if the interface is otherwise not configured. Reception of packets on such otherwise not configured interfaces MUST be limited so that at first only IPv6 Stateless Address Autoconfiguration (SLAAC, see [RFC4862]) and DULL GRASP work, and then only the subsequent ACP secure channel setup packets - but no other unnecessary traffic (e.g., no other link-local IPv6 transport stack responders).

Note that the use of the IPv6 link-local multicast address (ALL_GRASP_NEIGHBORS) implies the need to use Multicast Listener Discovery Version 2 (MLDv2, see [RFC3810]) to announce the desire to receive packets for that address. Otherwise DULL GRASP could fail to operate correctly in the presence of MLD-snooping, non-ACP-enabled L2 switches, because those would stop forwarding DULL GRASP packets.
1350 Switches not supporting MLD snooping simply need to operate as pure 1351 L2 bridges for IPv6 multicast packets for DULL GRASP to work. 1353 ACP discovery SHOULD NOT be enabled by default on non-native 1354 interfaces. In particular, ACP discovery MUST NOT run inside the ACP 1355 across ACP virtual interfaces. See Section 10.3 for further, non- 1356 normative suggestions on how to enable/disable ACP at node and 1357 interface level. See Section 8.2.2 for more details about tunnels 1358 (typical non-native interfaces). See Section 7 for how ACP should be 1359 extended on devices operating (also) as L2 bridges. 1361 Note: If an ACP node also implements BRSKI to enroll its ACP domain 1362 certificate (see Section 11.2 for a summary), then the above 1363 considerations also apply to GRASP discovery for BRSKI. Each DULL 1364 instance of GRASP set up for ACP is then also used for the discovery 1365 of a bootstrap proxy via BRSKI when the node does not have a domain 1366 certificate. Discovery of ACP neighbors happens only when the node 1367 does have the certificate. The node therefore never needs to 1368 discover both a bootstrap proxy and ACP neighbor at the same time. 1370 An ACP node announces itself to potential ACP peers by use of the 1371 "AN_ACP" objective. This is a synchronization objective intended to 1372 be flooded on a single link using the GRASP Flood Synchronization 1373 (M_FLOOD) message. In accordance with the design of the Flood 1374 message, a locator consisting of a specific link-local IP address, IP 1375 protocol number and port number will be distributed with the flooded 1376 objective. 
An example of the message is informally: 1378 Example: 1380 [M_FLOOD, 12340815, h'fe80000000000000c0011001FEEF0000, 210000, 1381 ["AN_ACP", 4, 1, "IKEv2" ], 1382 [O_IPv6_LOCATOR, 1383 h'fe80000000000000c0011001FEEF0000, UDP, 15000] 1384 ["AN_ACP", 4, 1, "DTLS" ], 1385 [O_IPv6_LOCATOR, 1386 h'fe80000000000000c0011001FEEF0000, UDP, 17000] 1387 ] 1389 Figure 5: GRASP AN_ACP example 1391 The formal CDDL definition is: 1393 flood-message = [M_FLOOD, session-id, initiator, ttl, 1394 +[objective, (locator-option / [])]] 1396 objective = ["AN_ACP", objective-flags, loop-count, 1397 objective-value] 1399 objective-flags = sync-only ; as in the GRASP specification 1400 sync-only = 4 ; M_FLOOD only requires synchronization 1401 loop-count = 1 ; limit to link-local operation 1402 objective-value = method 1403 method = "IKEv2" / "DTLS" ; or future methods 1405 Figure 6: GRASP AN_ACP definition 1407 The objective-flags field is set to indicate synchronization. 1409 The loop-count is fixed at 1 since this is a link-local operation. 1411 In the above example the RECOMMENDED period of sending of the 1412 objective is 60 seconds. The indicated ttl of 210000 msec means that 1413 the objective would be cached by ACP nodes even when two out of three 1414 messages are dropped in transit. 1416 The session-id is a random number used for loop prevention 1417 (distinguishing a message from a prior instance of the same message). 1418 In DULL this field is irrelevant but must still be set according to 1419 the GRASP specification. 1421 The originator MUST be the IPv6 link local address of the originating 1422 ACP node on the sending interface. 1424 The 'objective-value' parameter is a string indicating the secure 1425 channel protocol available at the specified or implied locator. 1427 The locator-option is optional and only required when the secure 1428 channel protocol is not offered at a well-defined port number, or if 1429 there is no well-defined port number. 
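The message of Figure 5 can be generated mechanically from the CDDL of Figure 6. The following non-normative Python sketch builds the flood-message as nested lists ready for CBOR encoding. The numeric constants (message type, option type, IP protocol number) follow the GRASP specification and are assumptions of this sketch, and each objective is paired with its locator following the CDDL production "+[objective, (locator-option / [])]":

```python
# Non-normative sketch. The constants below follow the GRASP
# specification and are assumptions of this sketch, not definitions
# made by this document.
M_FLOOD = 9            # GRASP Flood Synchronization message type
O_IPV6_LOCATOR = 103   # locator-option carrying an IPv6 address
IPPROTO_UDP = 17       # IP protocol number for UDP
SYNC_FLAG = 4          # objective-flags: synchronization only
LOOP_COUNT = 1         # limit to link-local operation

def an_acp_flood(session_id: int, initiator: bytes,
                 methods, ttl_ms: int = 210000):
    """Build the AN_ACP flood-message of Figure 5 as nested lists.

    'methods' is a list of (secure channel method, UDP port) tuples,
    e.g. [("IKEv2", 15000), ("DTLS", 17000)]. The ttl default of
    210000 msec is 3.5 times the 60 second announcement period.
    """
    msg = [M_FLOOD, session_id, initiator, ttl_ms]
    for method, port in methods:
        msg.append([["AN_ACP", SYNC_FLAG, LOOP_COUNT, method],
                    [O_IPV6_LOCATOR, initiator, IPPROTO_UDP, port]])
    return msg
```

The resulting structure can then be fed to any CBOR encoder before being sent to the link-local ALL_GRASP_NEIGHBORS multicast address.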
"IKEv2" is the abbreviation for "Internet Key Exchange protocol version 2", as defined in [RFC7296]. It is the main protocol used by the Internet IP security architecture ("IPsec", see [RFC4301]). We therefore use the term "IKEv2" and not "IPsec" in the GRASP definitions and example above. "IKEv2" has a well-defined port number 500, but in the above example, the candidate ACP neighbor is offering ACP secure channel negotiation via IKEv2 on port 15000 (for the sake of creating a non-standard example).

If a locator is included, it MUST be an O_IPv6_LOCATOR, and the IPv6 address MUST be the same as the initiator address (these are DULL requirements to minimize third party DoS attacks).

The secure channel methods defined in this document use the objective values of "IKEv2" and "DTLS". There is no distinction between IKEv2 native and GRE-IKEv2 because this is purely negotiated via IKEv2.

A node that supports more than one secure channel protocol method needs to flood multiple versions of the "AN_ACP" objective so that each method can be accompanied by its own locator-option. This can use a single GRASP M_FLOOD message as shown in Figure 5.

Note that a node serving both as an ACP node and BRSKI Join Proxy may choose to distribute the "AN_ACP" objective and the respective BRSKI objective in the same M_FLOOD message, since GRASP allows multiple objectives in one message. This may be impractical though if ACP and BRSKI operations are implemented via separate software modules / ASAs.

The result of the discovery is the IPv6 link-local address of the neighbor as well as its supported secure channel protocols (and the non-standard ports they are running on). This information is stored in the ACP Adjacency Table (see Section 6.2), which then drives the further building of the ACP to that neighbor.

6.4.
Candidate ACP Neighbor Selection

An ACP node must determine to which other ACP nodes in the adjacency table it should build an ACP connection. This is based on the information in the ACP Adjacency Table.

The ACP is by default established exclusively between nodes in the same domain. This includes all routing subdomains. Section 11.7 explains how ACP connections across multiple routing subdomains are special.

Future extensions to this document including Intent can change this default behavior. Examples include:

o  Build the ACP across all domains that have a common parent domain. For example, ACP nodes of "example.com" and of its subdomains "access.example.com", "core.example.com" and "city.core.example.com" could all establish one single ACP.

o  ACP connections across domains with different Certificate Authorities (CA) could establish a common ACP by installing the alternate domains' CA into the trusted anchor store. This is an executive management action that could easily be accomplished through the control channel created by the ACP.

Since Intent is transported over the ACP, the first ACP connection a node establishes always follows the default behavior. See Section 11.7 for more details.

The result of the candidate ACP neighbor selection process is a list of adjacent or configured autonomic neighbors to which an ACP channel should be established. The next step begins that channel establishment.

6.5. Channel Selection

To avoid attacks, initial discovery of candidate ACP peers cannot include any non-protected negotiation. To avoid re-inventing and validating security association mechanisms, the next step after discovering the address of a candidate neighbor can only be to try first to establish a security association with that neighbor using a well-known security association method.
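The default selection rule of Section 6.4, together with the first possible Intent-driven extension listed there, amounts to simple predicates over the domain names found in the neighbors' certificates. A non-normative Python sketch:

```python
def is_candidate_default(neighbor_domain: str, own_domain: str) -> bool:
    # Default behavior: establish the ACP only with nodes certified
    # for exactly the same domain (routing subdomains share this name).
    return neighbor_domain == own_domain

def is_candidate_common_parent(neighbor_domain: str, parent: str) -> bool:
    # Possible future Intent-driven rule: accept any domain under a
    # common parent, e.g. "access.example.com" under "example.com".
    return (neighbor_domain == parent
            or neighbor_domain.endswith("." + parent))
```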
At this time in the lifecycle of ACP nodes, it is unclear whether it is feasible to even decide on a single MTI (mandatory to implement) security association protocol across all ACP nodes.

From the use-cases it seems clear that not all types of ACP nodes can or need to connect directly to each other or are able to support or prefer all possible mechanisms. For example, code space limited IoT devices may only support DTLS ("Datagram Transport Layer Security version 1.2", see [RFC6347]) because that code exists already on them for end-to-end security, but low-end in-ceiling L2 switches may only want to support Media Access Control Security (MACsec, see 802.1AE [MACSEC]) because that is also supported in their chips. Only a flexible gateway device may need to support both of these mechanisms and potentially more.

To support extensible secure channel protocol selection without a single common MTI protocol, an ACP node must try all the ACP secure channel protocols it supports and that are feasible because the candidate ACP neighbor also announced them via its AN_ACP GRASP parameters (these are called the "feasible" ACP secure channel protocols).

To ensure that the selection of the secure channel protocols always succeeds in a predictable fashion without blocking, the following rules apply:

o  An ACP node may choose to attempt to initiate the different feasible ACP secure channel protocols it supports according to its local policies sequentially or in parallel, but it MUST support acting as a responder to all of them in parallel.

o  Once the first secure channel protocol succeeds, the two peers know each other's certificates because they must be used by all secure channel protocols for mutual authentication.
The node with the lower Node-ID in the ACP address of its certificate becomes Bob, the one with the higher Node-ID becomes Alice.

o  Bob becomes passive; he does not attempt to further initiate ACP secure channel protocols with Alice and does not consider it to be an error when Alice closes secure channels. Alice becomes the active party and continues to attempt setting up secure channel protocols with Bob until she arrives at the best one from her view that also works with Bob.

For example, originally Bob could have been the initiator of one ACP secure channel protocol that Bob prefers and the security association succeeded. The roles of Bob and Alice are then assigned. At this stage, the protocol may not even have completed negotiating a common security profile. The protocol could for example be IPsec via IKEv2 ("IP security", see [RFC4301], and "Internet Key Exchange protocol version 2", see [RFC7296]). It is now up to Alice to decide how to proceed. Even if the IPsec connection from Bob succeeded, Alice might prefer another secure protocol over IPsec (e.g., FOOBAR), and try to set that up with Bob. If that preference of Alice succeeds, she would close the IPsec connection. If no better protocol attempt succeeds, she would keep the IPsec connection.

All this negotiation is in the context of an "L2 interface". Alice and Bob will build ACP connections to each other on every "L2 interface" that they both connect to. An autonomic node must not assume that neighbors with the same L2 or link-local IPv6 addresses on different L2 interfaces are the same node. This can only be determined by examining the certificate after a successful security association attempt.

6.6.
Candidate ACP Neighbor verification

Independent of the security association protocol chosen, candidate ACP neighbors need to be authenticated based on their domain certificate. This implies that any secure channel protocol MUST support certificate based authentication that can support the ACP domain membership check as defined in Section 6.1.2. If this check fails, the connection attempt is aborted and an error logged. Attempts to reconnect MUST be throttled. The RECOMMENDED default is exponential backoff with a minimum delay of 10 seconds and a maximum delay of 640 seconds.

6.7. Security Association protocols

The following sections define the security association protocols that we consider to be important and feasible to specify in this document:

6.7.1. ACP via IKEv2

An ACP node announces its ability to support IKEv2 as the ACP secure channel protocol in GRASP as "IKEv2".

6.7.1.1. Native IPsec

To run ACP via IPsec natively, no further IANA assignments/definitions are required. An ACP node that is supporting native IPsec MUST use IPsec security setup via IKEv2, tunnel mode, with local and peer link-local IPv6 addresses used for encapsulation. It MUST then support ESP with AES256 for encryption and SHA256 hash and MUST NOT permit weaker crypto options.

In terms of IKEv2, this means the initiator will offer to support IPsec tunnel mode with next protocol equal 41 (IPv6).

IPsec tunnel mode is required because the ACP will route/forward packets received from any other ACP node across the ACP secure channels, and not only its own generated ACP packets. With IPsec transport mode, it would only be possible to send packets originated by the ACP node itself.

ESP is used because ACP mandates the use of encryption for ACP secure channels.

6.7.1.2.
IPsec with GRE encapsulation

In network devices it is often more common to implement high performance virtual interfaces on top of GRE encapsulation than on top of a "native" IPsec association (without any other encapsulation than those defined by IPsec). On those devices it may be beneficial to run the ACP secure channel on top of GRE protected by the IPsec association.

To run ACP via GRE/IPsec, no further IANA assignments/definitions are required. An ACP node that is supporting ACP via GRE/IPsec MUST then support IPsec security setup via IKEv2, IPsec transport mode, local and peer link-local IPv6 addresses used for encapsulation, and ESP with AES256 encryption and SHA256 hash.

When GRE is used, transport mode is sufficient because the routed ACP packets are not "tunneled" by IPsec but rather by GRE: IPsec only has to deal with the GRE/IP packet, which always uses the local and peer link-local IPv6 addresses and is therefore applicable to transport mode.

ESP is used because ACP mandates the use of encryption for ACP secure channels.

In terms of IKEv2 negotiation, this means the initiator must offer to support IPsec transport mode with next protocol equal to GRE (47), followed by the offer for native IPsec as described above (because that option is mandatory to support).

If IKEv2 initiator and responder support GRE, it will be selected. The version of GRE to be used must be the one specified in [RFC7676].

6.7.2. ACP via DTLS

We define the use of ACP via DTLS on the assumption that it is likely the first transport encryption code base supported in some classes of constrained devices.

To run ACP via UDP and DTLS v1.2 [RFC6347], a locally assigned UDP port is used that is announced as a parameter in the GRASP AN_ACP objective to candidate neighbors.
All ACP nodes supporting DTLS as a secure channel protocol MUST support AES256 encryption and MUST NOT permit weaker crypto options.

There is no additional session setup or other security association besides this simple DTLS setup. As soon as the DTLS session is functional, the ACP peers will exchange ACP IPv6 packets as the payload of the DTLS transport connection. Any DTLS defined security association mechanisms such as re-keying are used as they would be for any transport application relying solely on DTLS.

6.7.3. ACP Secure Channel Requirements

A baseline ACP node MUST support IPsec natively and MAY support IPsec via GRE. A constrained ACP node that can not support IPsec MUST support DTLS. ACP nodes connecting constrained areas with baseline areas MUST therefore support IPsec and DTLS.

ACP nodes need to specify in documentation the set of secure ACP mechanisms they support.

An ACP secure channel MUST immediately be terminated when any certificate in the chain used to authenticate the neighbor expires or is revoked. Note that this is not standard behavior in secure channel protocols such as IPsec, because the certificate authentication only influences the setup of the secure channel in these protocols.

6.8. GRASP in the ACP

6.8.1. GRASP as a core service of the ACP

The ACP MUST run an instance of GRASP inside of it. It is a key part of the ACP services. The function in GRASP that makes it fundamental as a service of the ACP is the ability to provide ACP wide service discovery (using objectives in GRASP).

ACP provides IP unicast routing via the RPL routing protocol (see Section 6.11).

The ACP does not use IP multicast routing nor does it provide generic IP multicast services (the handling of GRASP link-local multicast messages is explained in Section 6.8.2).
Instead, the ACP provides 1699 service discovery via the objective discovery/announcement and 1700 negotiation mechanisms of the ACP GRASP instance (services are a form 1701 of objectives). These mechanisms use hop-by-hop reliable flooding of 1702 GRASP messages for both service discovery (GRASP M_DISCOVERY 1703 messages) and service announcement (GRASP M_FLOOD messages). 1705 See Section 11.5 for more discussion about this design choice of the 1706 ACP and considerations for possible future variations. 1708 6.8.2. ACP as the Security and Transport substrate for GRASP 1710 In the terminology of GRASP ([I-D.ietf-anima-grasp]), the ACP is the 1711 security and transport substrate for the GRASP instance run inside 1712 the ACP ("ACP GRASP"). 1714 This means that the ACP is responsible for ensuring that this 1715 instance of GRASP is only sending messages across the ACP GRASP 1716 virtual interfaces. Whenever the ACP adds or deletes such an 1717 interface because of new ACP secure channels or loss thereof, the ACP 1718 needs to indicate this to the ACP instance of GRASP. The ACP exists 1719 also in the absence of any active ACP neighbors. It is created when 1720 the node has a domain certificate, and continues to exist even if all 1721 of its neighbors cease operation. 1723 In this case ASAs using GRASP running on the same node would still 1724 need to be able to discover each other's objectives. When the ACP 1725 does not exist, ASAs leveraging the ACP instance of GRASP via APIs 1726 MUST still be able to operate, and MUST be able to understand that 1727 there is no ACP and that therefore the ACP instance of GRASP can not 1728 operate. 1730 The way ACP acts as the security and transport substrate for GRASP is 1731 visualized in the following picture: 1733 [RFC Editor: please try to put the following picture on a single page 1734 and remove this note. We cannot figure out how to do this with XML. 1735 The picture does fit on a single page.] 
1737 ACP: 1738 ............................................................... 1739 . . 1740 . /-GRASP-flooding-\ ACP GRASP instance . 1741 . / \ . 1742 . GRASP GRASP GRASP . 1743 . link-local unicast link-local . 1744 . multicast messages multicast . 1745 . messages | messages . 1746 . | | | . 1747 ............................................................... 1748 . v v v ACP security and transport . 1749 . | | | substrate for GRASP . 1750 . | | | . 1751 . | ACP GRASP | - ACP GRASP . 1752 . | Loopback | Loopback interface . 1753 . | interface | - ACP-cert auth . 1754 . | TLS | . 1755 . ACP GRASP | ACP GRASP - ACP GRASP virtual . 1757 . subnet1 | subnet2 virtual interfaces . 1758 . TCP | TCP . 1759 . | | | . 1760 ............................................................... 1761 . | | | ^^^ Users of ACP (GRASP/ASA) . 1762 . | | | ACP interfaces/addressing . 1763 . | | | . 1764 . | | | . 1765 . | ACP-Loopback Interf.| <- ACP Loopback interface . 1766 . | ACP-address | - address (global ULA) . 1767 . subnet1 | subnet2 <- ACP virtual interfaces . 1768 . link-local | link-local - link-local addresses . 1769 ............................................................... 1770 . | | | ACP routing and forwarding . 1771 . | RPL-routing | . 1772 . | /IP-Forwarding\ | . 1773 . | / \ | . 1774 . ACP IPv6 packets ACP IPv6 packets . 1775 . |/ \| . 1776 . IPsec/DTLS IPsec/DTLS - ACP-cert auth . 1777 ............................................................... 1778 | | Data-Plane 1779 | | 1780 | | - ACP secure channel 1781 link-local link-local - encapsulation addresses 1782 subnet1 subnet2 - Data-Plane interfaces 1783 | | 1784 ACP-Nbr1 ACP-Nbr2 1786 Figure 7: ACP as security and transport substrate for GRASP 1788 GRASP unicast messages inside the ACP always use the ACP address. 1789 Link-local ACP addresses must not be used inside objectives. 
GRASP unicast messages inside the ACP are transported via TLS 1.2 ([RFC5246]) connections with AES256 encryption and SHA256. Mutual authentication uses the ACP domain membership check defined in (Section 6.1.2).

GRASP link-local multicast messages are targeted for a specific ACP virtual interface (as defined in Section 6.12.5) but are sent by the ACP into an ACP GRASP virtual interface that is constructed from the TCP connection(s) to the IPv6 link-local neighbor address(es) on the underlying ACP virtual interface. If the ACP GRASP virtual interface has two or more neighbors, the GRASP link-local multicast messages are replicated to all neighbor TCP connections.

TCP and TLS connections for GRASP in the ACP use the IANA assigned TCP port for GRASP (7107). Effectively the transport stack is expected to be TLS for connections from/to the ACP address (e.g., global scope address(es)) and TCP for connections from/to link-local addresses on the ACP virtual interfaces. The latter ones are only used for flooding of GRASP messages.

6.8.2.1. Discussion

TCP encapsulation for GRASP M_DISCOVERY and M_FLOOD link-local messages is used because these messages are flooded across potentially many hops to all ACP nodes, and a single link with even temporary packet loss issues (e.g., a WiFi/Powerline link) can reduce the probability of loss free transmission so much that applications would want to increase the frequency with which they send these messages. Such shorter periodic retransmission of datagrams would result in more traffic and processing overhead in the ACP than the hop-by-hop reliable retransmission mechanism of TCP and duplicate elimination by GRASP.
TLS is mandated for GRASP non-link-local unicast because the ACP secure channel mandatory authentication and encryption protects only against attacks from the outside, but not against attacks from the inside: compromised ACP members that have not (yet) been detected and removed (e.g., via domain certificate revocation / expiry).

If GRASP peer connections used just TCP, compromised ACP members could simply eavesdrop passively on GRASP peer connections for which they are on-path ("Man In The Middle" - MITM), or intercept and modify them. With TLS, it is not possible to completely eliminate problems with compromised ACP members, but attacks are a lot more complex:

Eavesdropping/spoofing by a compromised ACP node is still possible because in the model of the ACP and GRASP, the provider and consumer of an objective initially have no unique information (such as an identity) about the other side which would allow them to distinguish a benevolent from a compromised peer. The compromised ACP node would simply announce the objective as well, potentially filter the original objective in GRASP when it is a MITM and act as an application level proxy. This of course requires that the compromised ACP node understands the semantics of the GRASP negotiation to an extent that allows it to proxy it without being detected, but in an ACP environment this is quite likely public knowledge or even standardized.

The GRASP TLS connections are run like any other ACP traffic through the ACP secure channels. This leads to double authentication/encryption. Future work optimizations could avoid this, but it is unclear how beneficial/feasible this is:

o  The security considerations for GRASP change against attacks from non-ACP (e.g., "outside") nodes: TLS is subject to reset attacks while secure channel protocols may be not (e.g., IPsec is not).
1858 o  The secure channel method may leverage hardware acceleration and 1859 there may be little or no gain in eliminating it. 1861 o  The GRASP TLS connections need to implement any additional 1862 security options that are required for secure channels.  For 1863 example, the closing of connections when the peer's certificate has 1864 expired. 1866 6.9.  Context Separation 1868 The ACP is in a separate context from the normal Data-Plane of the 1869 node.  This context includes the ACP channels' IPv6 forwarding and 1870 routing as well as any required higher layer ACP functions. 1872 In classical network systems, a dedicated so-called Virtual Routing 1873 and Forwarding instance (VRF) is one logical implementation option 1874 for the ACP.  If permitted by the system's software architecture, 1875 separation options that minimize shared components are preferred, 1876 such as a logical container or virtual machine instance.  The context 1877 for the ACP needs to be established automatically during bootstrap of 1878 a node.  As much as possible it should be protected from being 1879 modified unintentionally by ("Data-Plane") configuration. 1881 Context separation improves security, because the ACP is not 1882 reachable from the global routing table.  Also, configuration errors 1883 from the Data-Plane setup do not affect the ACP. 1885 6.10.  Addressing inside the ACP 1887 The channels explained above typically only establish communication 1888 between two adjacent nodes.  In order for communication to happen 1889 across multiple hops, the autonomic control plane requires ACP 1890 network wide valid addresses and routing.  Each ACP node must create 1891 a Loopback interface with an ACP network wide unique address inside 1892 the ACP context (as explained in Section 6.9).  This address may 1893 be used also in other virtual contexts. 1895 With the algorithm introduced here, all ACP nodes in the same routing 1896 subdomain have the same /48 ULA prefix.
Conversely, ULA global IDs 1897 from different domains are unlikely to clash, such that two ACP 1898 networks can be merged, as long as the policy allows that merge.  See 1899 also Section 9.1 for a discussion on merging domains. 1901 Links inside the ACP only use link-local IPv6 addressing, such that 1902 each node's ACP only requires one routable virtual address. 1904 6.10.1.  Fundamental Concepts of Autonomic Addressing 1906 o  Usage: Autonomic addresses are exclusively used for self- 1907 management functions inside a trusted domain.  They are not used 1908 for user traffic.  Communications with entities outside the 1909 trusted domain use another address space, for example normally 1910 managed routable address space (called "Data-Plane" in this 1911 document). 1913 o  Separation: Autonomic address space is used separately from user 1914 address space and other address realms.  This supports the 1915 robustness requirement. 1917 o  Loopback-only: Only ACP Loopback interfaces (and potentially those 1918 configured for "ACP connect", see Section 8.1) carry routable 1919 address(es); all other interfaces (called ACP virtual interfaces) 1920 only use IPv6 link local addresses.  The usage of IPv6 link local 1921 addressing is discussed in [RFC7404]. 1923 o  Use-ULA: For Loopback interfaces of ACP nodes, we use Unique Local 1924 Addresses (ULA), as defined in [RFC4193] with L=1 (as defined in 1925 section 3.1 of [RFC4193]).  Note that the random hash for ACP 1926 Loopback addresses uses the definition in Section 6.10.2 and not 1927 the one of [RFC4193] section 3.2.2. 1929 o  No external connectivity: They do not provide access to the 1930 Internet.  If a node requires further reaching connectivity, it 1931 should use another, traditionally managed address scheme in 1932 parallel. 1934 o  Addresses in the ACP are permanent, and do not support temporary 1935 addresses as defined in [RFC4941].
1937 o Addresses in the ACP are not considered sensitive on privacy 1938 grounds because ACP nodes are not expected to be end-user devices. 1939 Therefore, ACP addresses do not need to be pseudo-random as 1940 discussed in [RFC7721]. Because they are not propagated to 1941 untrusted (non ACP) nodes and stay within a domain (of trust), we 1942 also consider them not to be subject to scanning attacks. 1944 The ACP is based exclusively on IPv6 addressing, for a variety of 1945 reasons: 1947 o Simplicity, reliability and scale: If other network layer 1948 protocols were supported, each would have to have its own set of 1949 security associations, routing table and process, etc. 1951 o Autonomic functions do not require IPv4: Autonomic functions and 1952 autonomic service agents are new concepts. They can be 1953 exclusively built on IPv6 from day one. There is no need for 1954 backward compatibility. 1956 o OAM protocols do not require IPv4: The ACP may carry OAM 1957 protocols. All relevant protocols (SNMP, TFTP, SSH, SCP, Radius, 1958 Diameter, ...) are available in IPv6. See also [RFC8368] for how 1959 ACP could be made to interoperate with IPv4 only OAM. 1961 6.10.2. The ACP Addressing Base Scheme 1963 The Base ULA addressing scheme for ACP nodes has the following 1964 format: 1966 8 40 2 78 1967 +--+-------------------------+------+------------------------------+ 1968 |fd| hash(routing-subdomain) | Type | (sub-scheme) | 1969 +--+-------------------------+------+------------------------------+ 1971 Figure 8: ACP Addressing Base Scheme 1973 The first 48 bits follow the ULA scheme, as defined in [RFC4193], to 1974 which a type field is added: 1976 o "fd" identifies a locally defined ULA address. 1978 o The 40 bits ULA "global ID" (term from [RFC4193]) for ACP 1979 addresses carried in the domain information field of domain 1980 certificates are the first 40 bits of the SHA256 hash of the 1981 routing subdomain from the same domain information field. 
In the 1982 example of Section 6.1.1, the routing subdomain is 1983 "area51.research.acp.example.com" and the 40 bits ULA "global ID" 1984 is 89b714f3db. 1986 o  To allow for extensibility, the fact that the ULA "global ID" is a 1987 hash of the routing subdomain SHOULD NOT be assumed by any ACP 1988 node during normal operations.  The hash function is only executed 1989 during the creation of the certificate.  If BRSKI is used, then the 1990 BRSKI registrar will create the domain information field in 1991 response to the EST Certificate Signing Request (CSR) Attribute 1992 Request message by the pledge. 1994 o  Type: This field allows different address sub-schemes.  This 1995 addresses the "upgradability" requirement.  Assignment of types 1996 for this field will be maintained by IANA. 1998 The sub-scheme may imply a range or set of addresses assigned to the 1999 node; this is called the ACP address range/set and explained in each 2000 sub-scheme. 2002 Please refer to Section 6.10.7 and Section 11.1 for further 2003 explanations why the following Sub-Addressing schemes are used and 2004 why multiple schemes are necessary. 2006 6.10.3.  ACP Zone Addressing Sub-Scheme 2008 The sub-scheme defined here is defined by the Type value 00b (zero) 2009 in the base scheme and 0 in the Z bit. 2011 64 64 2012 +-----------------+---+---------++-----------------------------+---+ 2013 | (base scheme) | Z | Zone-ID || Node-ID | 2014 | | | || Registrar-ID | Node-Number| V | 2015 +-----------------+---+---------++--------------+--------------+---+ 2016 50 1 13 48 15 1 2018 Figure 9: ACP Zone Addressing Sub-Scheme 2020 The fields are defined as follows: 2022 o  Zone-ID: If set to all zero bits: The Node-ID bits are used as an 2023 identifier (as opposed to a locator).  This results in a non- 2024 hierarchical, flat addressing scheme.  Any other value indicates a 2025 zone.  See Section 6.10.3.1 on how this field is used in detail. 2027 o  Z: MUST be 0. 2029 o  Node-ID: A unique value for each node.
2031 The 64 bit Node-ID is derived and composed as follows: 2033 o  Registrar-ID (48 bit): A number unique inside the domain that 2034 identifies the ACP registrar which assigned the Node-ID to the 2035 node.  A MAC address of the ACP registrar can be used for this 2036 purpose. 2038 o  Node-Number: A number which is unique for a given ACP registrar, 2039 to identify the node.  This can be a sequentially assigned number. 2041 o  V (1 bit): Virtualization bit: 0: Indicates the ACP itself ("ACP 2042 node base system"); 1: Indicates the optional "host" context on the 2043 ACP node (see below). 2045 In the ACP Zone Addressing Sub-Scheme, the ACP address in the 2046 certificate has Zone-ID and V fields as all zero bits.  The ACP 2047 address set includes addresses with any Zone-ID value and any V 2048 value. 2050 The "Node-ID" itself is unique in a domain (i.e., the Zone-ID is not 2051 required for uniqueness).  Therefore, a node can be addressed either 2052 as part of a flat hierarchy (Zone-ID = 0), or with an aggregation 2053 scheme (any other Zone-ID).  An address with Zone-ID = 0 is an 2054 identifier, with a Zone-ID !=0 it is a locator.  See Section 6.10.3.1 2055 for more details. 2057 The Virtual bit in this sub-scheme allows the easy addition of the 2058 ACP as a component to existing systems without causing problems in 2059 the port number space between the services in the ACP and the 2060 existing system.  V:0 is the ACP router (autonomic node base system), 2061 V:1 is the host with pre-existing transport endpoints on it that 2062 could collide with the transport endpoints used by the ACP router. 2063 The ACP host could, for example, have a p2p virtual interface with the 2064 V:0 address as its router into the ACP.  Depending on the software 2065 design of ASAs, which is outside the scope of this specification, 2066 they may use the V:0 or V:1 address.
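The base-scheme hash derivation and the Zone Addressing Sub-Scheme layout described above can be sketched in Python as follows. This is an illustrative sketch, not normative; the function names and the example Registrar-ID are invented for the example.

```python
import hashlib
import ipaddress

def acp_ula_global_id(routing_subdomain: str) -> int:
    # First 40 bits (5 bytes) of the SHA256 hash of the routing
    # subdomain, used as the ULA "global ID" (Section 6.10.2).
    digest = hashlib.sha256(routing_subdomain.encode("ascii")).digest()
    return int.from_bytes(digest[:5], "big")

def zone_scheme_address(routing_subdomain: str, zone_id: int,
                        registrar_id: int, node_number: int, v: int):
    # Base scheme: fd (8 bits) | hash (40) | Type=00b (2)  -> 50 bits,
    # Zone sub-scheme: Z=0 (1) | Zone-ID (13) | Registrar-ID (48)
    #                  | Node-Number (15) | V (1)          -> 128 bits.
    addr = 0xfd << 120                                   # "fd": local ULA
    addr |= acp_ula_global_id(routing_subdomain) << 80   # 40-bit global ID
    # Type (bits 78-79) and Z (bit 77) are zero for this sub-scheme.
    addr |= (zone_id & 0x1FFF) << 64                     # 13-bit Zone-ID
    addr |= (registrar_id & ((1 << 48) - 1)) << 16       # 48-bit Registrar-ID
    addr |= (node_number & 0x7FFF) << 1                  # 15-bit Node-Number
    addr |= v & 1                                        # V bit
    return ipaddress.IPv6Address(addr)
```

Using the routing subdomain "area51.research.acp.example.com" from the example above yields addresses under the fd89:b714:f3db::/48 ULA prefix.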
2068 The location of the V bit(s) at the end of the address allows the 2069 announcement of a single prefix for each ACP node.  For example, in a 2070 network with 20,000 ACP nodes, this avoids 20,000 additional routes in 2071 the routing table. 2073 6.10.3.1.  Usage of the Zone-ID Field 2075 The Zone-ID allows for the introduction of structure in the 2076 addressing scheme. 2078 Zone-ID = 0 is the default addressing scheme in an ACP domain.  Every 2079 ACP node with a Zone Addressing Sub-Scheme address MUST respond to 2080 its ACP address with Zone-ID = 0.  Used on its own this leads to a 2081 non-hierarchical address scheme, which is suitable for networks up to 2082 a certain size.  Zone-ID = 0 addresses act as identifiers for the 2083 nodes, and aggregation of these addresses in the ACP routing table is 2084 not possible. 2086 If aggregation is required, the 13 bit Zone-ID value allows for up to 2087 8191 zones.  The allocation of Zone-IDs may either happen 2088 automatically through a to-be-defined algorithm, or it could be 2089 configured and maintained explicitly. 2091 If a node learns through a future autonomic method or through 2092 configuration that it is part of a zone, it MUST also respond to its 2093 ACP address with that Zone-ID.  In this case the ACP Loopback is 2094 configured with two ACP addresses: One for Zone-ID = 0 and one for 2095 the assigned Zone-ID.  This method allows for a smooth transition 2096 between a flat addressing scheme and a hierarchical one. 2098 A node knowing it is in a zone MUST also use that Zone-ID != 0 2099 address in GRASP locator fields.  This eliminates the use of the 2100 identifier address (Zone-ID = 0) in forwarding and the need for 2101 network wide reachability of those non-aggregatable identifier 2102 addresses.  Zone-ID != 0 addresses are assumed to be aggregatable in 2103 routing/forwarding based on how they are allocated in the ACP 2104 topology (subject to future work).
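Deriving the second Loopback address (the Zone-ID != 0 locator) from the Zone-ID = 0 identifier address only touches the 13 Zone-ID bits. A minimal sketch, with an invented helper name, assuming the Zone sub-scheme bit layout shown above:

```python
import ipaddress

ZONE_ID_SHIFT = 64                    # Zone-ID occupies bits 64..76
ZONE_ID_MASK = 0x1FFF << ZONE_ID_SHIFT

def with_zone_id(identifier_addr: str, zone_id: int) -> ipaddress.IPv6Address:
    # Derive the Zone-ID != 0 locator from the Zone-ID = 0 identifier;
    # a node assigned to a zone answers on both addresses.
    base = int(ipaddress.IPv6Address(identifier_addr)) & ~ZONE_ID_MASK
    return ipaddress.IPv6Address(base | ((zone_id & 0x1FFF) << ZONE_ID_SHIFT))
```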
2106 Note: Theoretically, the 13 bits for the Zone-ID would also allow for 2107 two levels of zones, introducing a sub-hierarchy.  We do not think 2108 this is required at this point, but a new type could be used in the 2109 future to support such a scheme. 2111 Note: The Zone-ID is one method to introduce structure or hierarchy 2112 into the ACP.  Another way is the use of the routing subdomain field 2113 in the ACP that leads to multiple /48 Global IDs within an ACP 2114 domain.  This gives future work two options to consider. 2116 Note: Zones and Zone-ID as defined here are not related to [RFC4007] 2117 zones or zone_id.  ACP zone addresses are not scoped (reachable only 2118 from within an RFC4007 zone) but reachable across the whole ACP.  An 2119 RFC4007 zone_id is a zone index that has only local significance on a 2120 node, whereas an ACP Zone-ID is an identifier for an ACP zone that is 2121 unique across that ACP. 2123 6.10.4.  ACP Manual Addressing Sub-Scheme 2125 The sub-scheme defined here is defined by the Type value 00b (zero) 2126 in the base scheme and 1 in the Z bit. 2128 64 64 2129 +---------------------+---+----------++-----------------------------+ 2130 | (base scheme) | Z | Subnet-ID|| Interface Identifier | 2131 +---------------------+---+----------++-----------------------------+ 2132 50 1 13 2134 Figure 10: ACP Manual Addressing Sub-Scheme 2136 The fields are defined as follows: 2138 o  Subnet-ID: Configured subnet identifier. 2140 o  Z: MUST be 1. 2142 o  Interface Identifier. 2144 This sub-scheme is meant for "manual" allocation to subnets where the 2145 other addressing schemes cannot be used.  The primary use case is for 2146 assignment to ACP connect subnets (see Section 8.1.1). 2148 "Manual" means that allocations of the Subnet-ID need to be done 2149 today with pre-existing, non-autonomic mechanisms.  Every subnet that 2150 uses this addressing sub-scheme needs to use a unique Subnet-ID 2151 (unless some anycast setup is done).
Future work may define 2152 mechanisms for auto-coordination between ACP nodes and auto- 2153 allocation of Subnet-IDs between them. 2155 The Z bit field was added to distinguish Zone addressing and manual 2156 addressing sub-schemes without requiring one more bit in the base 2157 scheme and therefore allowing for the Vlong scheme (described below) 2158 to have one more bit available. 2160 Manual addressing sub-scheme addresses SHOULD only be used in domain 2161 certificates assigned to nodes that cannot fully participate in the 2162 automatic establishment of ACP secure channels or ACP routing.  The 2163 intended use is for nodes connecting to the ACP via an ACP edge node and 2164 ACP connect interfaces (see Section 8.1) - such as legacy NOC 2165 equipment.  They would not use their domain certificate for ACP 2166 secure channel creation and therefore do not need to participate in 2167 ACP routing either.  They would use the certificate for 2168 authentication of any transport services.  The value of the Interface 2169 Identifier is left for future definitions. 2171 6.10.5.  ACP Vlong Addressing Sub-Scheme 2173 The sub-scheme defined here is defined by the Type value 01b (one) in 2174 the base scheme. 2176 50 78 2177 +---------------------++-----------------------------+----------+ 2178 | (base scheme) || Node-ID | 2179 | || Registrar-ID | Node-Number| V | 2180 +---------------------++--------------+--------------+----------+ 2181 50 46 24/16 8/16 2183 Figure 11: ACP Vlong Addressing Sub-Scheme 2185 This addressing scheme foregoes the Zone-ID field to allow for 2186 larger, flatter routed networks (e.g., as in IoT) with 8421376 Node- 2187 Numbers (2^23+2^15).  It also allows for up to 2^16 (i.e. 65536) 2188 different virtualized addresses within a node, which could be used to 2189 address individual software components in an ACP node.
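The Node-Number/V-field split used by the Vlong sub-scheme (detailed in the field refinements of this sub-scheme) can be sketched as follows. This is an illustrative sketch with an invented function name, not normative text.

```python
def vlong_node_id_split(node_id_78: int):
    # Registrar-ID is the top 46 of the 78 Node-ID bits; the remaining
    # 32 bits hold Node-Number and V.  The first Node-Number bit selects
    # the split: "1" -> 16-bit Node-Number + 16-bit V field,
    #            "0" -> 24-bit Node-Number +  8-bit V field.
    registrar_id = node_id_78 >> 32
    low32 = node_id_78 & 0xFFFFFFFF
    if low32 >> 31:                      # first Node-Number bit is "1"
        return registrar_id, low32 >> 16, low32 & 0xFFFF
    return registrar_id, low32 >> 8, low32 & 0xFF
```

With the leading Node-Number bit fixed per case, this gives 2^23 "0"-type plus 2^15 "1"-type Node-Numbers, i.e. the 8421376 mentioned above.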
2191 The fields are the same as in the Zone-ID sub-scheme with the 2192 following refinements: 2194 o  V: Virtualization bit: Values 0 and 1 are assigned in the same way 2195 as in the Zone-ID sub-scheme. 2197 o  Registrar-ID: To maximize Node-Number and V, the Registrar-ID is 2198 reduced to 46 bits.  This still permits the use of the MAC address 2199 of an ACP registrar by removing the V and U bits from the 48 bits 2200 of a MAC address (those two bits are never unique, so they cannot 2201 be used to distinguish MAC addresses). 2203 o  If the first bit of the "Node-Number" is "1", then the Node-Number 2204 is 16 bit long and the V field is 16 bit long.  Otherwise the 2205 Node-Number is 24 bit long and the V field is 8 bit long. 2207 "0" bit Node-Numbers are intended to be used for "general purpose" 2208 ACP nodes that would potentially have a limited number (< 256) of 2209 clients (ASA/Autonomic Functions or legacy services) of the ACP that 2210 require separate V(irtual) addresses.  "1" bit Node-Numbers are 2211 intended for ACP nodes that are ACP edge nodes (see Section 8.1.1) or 2212 that have a large number of clients requiring separate V(irtual) 2213 addresses.  For example, large SDN controllers with container modular 2214 software architecture (see Section 8.1.2). 2216 In the Vlong addressing sub-scheme, the ACP address in the 2217 certificate has all V field bits as zero.  The ACP address set for 2218 the node includes any V value. 2220 6.10.6.  Other ACP Addressing Sub-Schemes 2222 Before further addressing sub-schemes are defined, experience with 2223 the schemes defined here should be collected.  The schemes defined in 2224 this document have been devised to allow hopefully sufficiently 2225 flexible setup of ACPs for a variety of situations.  These reasons 2226 also lead to the fairly liberal use of address space: The Zone 2227 Addressing Sub-Scheme is intended to enable optimized routing in 2228 large networks by reserving bits for Zone-IDs.
The Vlong addressing 2229 sub-scheme enables the allocation of 8/16 bits of addresses inside 2230 individual ACP nodes.  Both address spaces allow distributed, 2231 uncoordinated allocation of node addresses by reserving bits for the 2232 registrar-ID field in the address. 2234 IANA is asked to assign a new "type" for each new addressing 2235 sub-scheme.  With the current allocations, only 2 more schemes are 2236 possible, so the last addressing scheme should consider making 2237 provision for further extensions (e.g., by reserving bits 2238 from it for further extensions). 2240 6.10.7.  ACP Registrars 2242 The ACP address prefix is assigned to the ACP node during enrollment/ 2243 provisioning of the ACP domain certificate to the ACP node.  It is 2244 intended to persist unchanged through the lifetime of the ACP node. 2246 Because of the ACP addressing sub-schemes explained above, ACP nodes 2247 for a single ACP domain can be enrolled by multiple distributed and 2248 uncoordinated entities called ACP registrars.  These ACP registrars 2249 are responsible for enrolling ACP domain certificates and associated 2250 trust anchor(s) to candidate ACP nodes and for ensuring that 2251 an ACP domain information field is included in the ACP domain 2252 certificate. 2254 6.10.7.1.  Use of BRSKI or other Mechanism/Protocols 2256 Any protocols or mechanisms may be used as ACP registrars, as long as 2257 the resulting ACP certificate and trust anchors allow performing the 2258 ACP domain membership check described in Section 6.1.2 with other ACP 2259 domain members, and meet the ACP addressing requirements for its ACP 2260 domain information field as described further below in this section.
2262 An ACP registrar could be a person deciding whether to enroll a 2263 candidate ACP node and then orchestrating the enrollment of the ACP 2264 certificate and associated trust anchor, using command line or web-based 2265 commands on the candidate ACP node and trust anchor to generate 2266 and sign the ACP domain certificate and configure certificate and 2267 trust anchors onto the node. 2269 The only currently defined protocol for ACP registrars is BRSKI 2270 ([I-D.ietf-anima-bootstrapping-keyinfra]).  When BRSKI is used, the 2271 ACP nodes are called ANI nodes, and the ACP registrars are called 2272 BRSKI or ANI registrars.  The BRSKI specification does not define the 2273 handling of the ACP domain information field because the rules do not 2274 depend on BRSKI but apply equally to any protocols/mechanisms an ACP 2275 registrar may use. 2277 6.10.7.2.  Unique Address/Prefix allocation 2279 ACP registrars MUST NOT allocate ACP address prefixes to ACP nodes 2280 via the ACP domain information field that would collide with the ACP 2281 address prefixes of other ACP nodes in the same ACP domain.  This 2282 includes both prefixes allocated by the same ACP registrar to 2283 different ACP nodes as well as prefixes allocated by other ACP 2284 registrars for the same ACP domain. 2286 For this purpose, an ACP registrar MUST have one or more unique 46 2287 bit identifiers called Registrar-IDs used to allocate ACP address 2288 prefixes.  The lower 46 bits of an EUI-48 MAC address are a globally 2289 unique 46 bit identifier, so ACP registrars with known unique EUI-48 2290 MAC addresses can use these as Registrar-IDs.  Registrar-IDs do not 2291 need to be globally unique but only unique across the set of ACP 2292 registrars for an ACP domain, so other means to assign unique 2293 Registrar-IDs to ACP registrars can be used, such as configuration on 2294 the ACP registrars.
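Deriving a Registrar-ID from an EUI-48 MAC address, as described above, is a simple mask operation. A minimal sketch (the function name is invented):

```python
def registrar_id_from_mac(mac: str) -> int:
    # Per the text above, the lower 46 bits of a known-unique EUI-48
    # MAC address can serve directly as a 46-bit Registrar-ID.
    mac_int = int(mac.replace(":", "").replace("-", ""), 16)
    return mac_int & ((1 << 46) - 1)
```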
2296 When the candidate ACP device (called Pledge in BRSKI) is to be 2297 enrolled into an ACP domain, the ACP registrar needs to allocate a 2298 unique ACP address to the node and ensure that the ACP certificate 2299 gets a domain information field (Section 6.1.1) with the appropriate 2300 information - ACP domain-name, ACP-address, and so on.  If the ACP 2301 registrar uses BRSKI, it signals the ACP information field to the 2302 Pledge via the EST /csrattrs command (see 2303 [I-D.ietf-anima-bootstrapping-keyinfra], section 5.8.2 - "EST CSR 2304 Attributes"). 2306 [RFC editor: please update reference to section 5.8.2 accordingly 2307 with latest BRSKI draft at time of publishing, or RFC] 2309 6.10.7.3.  Addressing Sub-Scheme Policies 2311 The ACP registrar selects for the candidate ACP node a unique address 2312 prefix from an appropriate ACP addressing sub-scheme, either a zone 2313 addressing sub-scheme prefix (see Section 6.10.3), or a Vlong 2314 addressing sub-scheme prefix (see Section 6.10.5).  The assigned ACP 2315 address prefix encoded in the domain information field of the ACP 2316 domain certificate indicates to the ACP node its ACP address 2317 information.  The sub-addressing scheme indicates the prefix length: 2318 /127 for zone address sub-scheme, /120 or /112 for Vlong address sub- 2319 scheme.  The first address of the prefix is the ACP address, all 2320 other addresses in the prefix are for other uses by the ACP node as 2321 described in the zone and Vlong addressing sub scheme sections.  The 2322 ACP address prefix itself is then signaled by the ACP node into the 2323 ACP routing protocol (see Section 6.11) to establish IPv6 2324 reachability across the ACP. 2326 The choice of addressing sub-scheme and prefix-length in the Vlong 2327 address sub-scheme is subject to ACP registrar policy.  It could be 2328 an ACP domain wide policy, or a per ACP node or per ACP node type 2329 policy.
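The prefix-length selection and the "first address of the prefix is the ACP address" rule can be sketched as follows. This is an illustrative sketch; the dictionary keys and function name are invented, and the zone sub-scheme length of /127 reflects its single V bit.

```python
import ipaddress

# Prefix length implied by each sub-scheme: one V bit for zone,
# an 8- or 16-bit V field for Vlong.
PREFIX_LEN = {"zone": 127, "vlong-8": 120, "vlong-16": 112}

def node_prefix(cert_acp_address: str, sub_scheme: str) -> ipaddress.IPv6Network:
    # The address in the certificate's domain information field is the
    # first address of the node's prefix; the node announces the whole
    # prefix into the ACP routing protocol.
    return ipaddress.IPv6Network(
        f"{cert_acp_address}/{PREFIX_LEN[sub_scheme]}", strict=False)
```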
For example, in BRSKI, the ACP registrar is aware of the 2330 IDevID of the candidate ACP node, which contains a serialNumber that 2331 typically indicates the node's vendor and device type and can be 2332 used to drive a policy selecting an appropriate addressing sub-scheme 2333 for the (class of) node(s). 2335 ACP registrars SHOULD default to allocate ACP zone sub-address scheme 2336 addresses with Zone-ID 0.  Allocation and use of zone sub-addresses 2337 with Zone-ID != 0 is outside the scope of this specification 2338 because it would need to go along with rules for extending ACP 2339 routing to multiple zones, which is outside the scope of this 2340 specification. 2342 ACP registrars that can use the IDevID of a candidate ACP device 2343 SHOULD be able to choose the zone vs. Vlong sub-address scheme for 2344 ACP nodes based on the serialNumber of the IDevID, for example by the 2345 PID (Product Identifier) part which identifies the product type, or 2346 the complete serialNumber. 2348 In a simple allocation scheme, an ACP registrar remembers 2349 persistently across reboots its currently used Registrar-ID and, 2350 for each addressing scheme (zone with Zone-ID 0, Vlong with /112, 2351 Vlong with /120), the next Node-Number available for allocation, and 2352 increases it after successful enrollment to an ACP node.  In this 2353 simple allocation scheme, the ACP registrar would not recycle ACP 2354 address prefixes from no longer used ACP nodes. 2356 6.10.7.4.  Address/Prefix Persistence 2358 When an ACP domain certificate is renewed or rekeyed via EST or other 2359 mechanisms, the ACP address/prefix in the ACP domain information 2360 field MUST be maintained unless security issues or violations of the 2361 unique address assignment requirements exist or are suspected by the 2362 ACP registrar, even when the renewing/rekeying ACP registrar is not 2363 the same as the one that enrolled the prior ACP certificate.  See 2364 Section 10.2.4 for an example.
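The simple allocation scheme described above (persist the next Node-Number per addressing scheme, never recycle) can be sketched as follows. The class, file format, and scheme labels are invented for the example.

```python
import json
import pathlib

class SimpleAllocator:
    # Sketch of the simple allocation scheme: persist, per addressing
    # scheme, the next Node-Number across reboots and never recycle
    # numbers from no-longer-used ACP nodes.
    SCHEMES = ("zone", "vlong-112", "vlong-120")

    def __init__(self, state_file):
        self.state_file = pathlib.Path(state_file)
        if self.state_file.exists():
            self.next_number = json.loads(self.state_file.read_text())
        else:
            self.next_number = {s: 0 for s in self.SCHEMES}

    def allocate_node_number(self, scheme: str) -> int:
        n = self.next_number[scheme]
        self.next_number[scheme] = n + 1
        # Persist before handing out the number so it survives reboots.
        self.state_file.write_text(json.dumps(self.next_number))
        return n
```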
ACP address information SHOULD also 2365 be maintained even after an ACP certificate has expired or failed. 2366 See Section 6.1.3.5 and Section 6.1.3.6. 2368 6.10.7.5.  Further Details 2370 Section 10.2 discusses further informative details of ACP registrars: 2371 What interactions registrars need, what parameters they require, 2372 certificate renewal and limitations, use of sub-CAs on registrars and 2373 centralized policy control. 2375 6.11.  Routing in the ACP 2377 Once ULA addresses are set up, all autonomic entities should run a 2378 routing protocol within the autonomic control plane context.  This 2379 routing protocol distributes the ULA created in the previous section 2380 for reachability.  The use of the autonomic control plane specific 2381 context eliminates the probable clash with the global routing table 2382 and also secures the ACP from interference from configuration 2383 mismatches or incorrect routing updates. 2385 The establishment of the routing plane and its parameters are 2386 automatic and strictly within the confines of the autonomic control 2387 plane.  Therefore, no explicit configuration is required. 2389 All routing updates are automatically secured in transit as the 2390 channels of the autonomic control plane are by default secured, and 2391 this routing runs only inside the ACP. 2393 The routing protocol inside the ACP is RPL ([RFC6550]).  See 2394 Section 11.4 for more details on the choice of RPL. 2396 RPL adjacencies are set up across all ACP channels in the same domain 2397 including all its routing subdomains.  See Section 11.7 for more 2398 details. 2400 6.11.1.  RPL Profile 2402 The following is a description of the RPL profile that ACP nodes need 2403 to support by default.  The format of this section is derived from 2404 draft-ietf-roll-applicability-template. 2406 6.11.1.1.
Summary 2408 In summary, the profile chosen for RPL is one that expects a fairly 2409 reliable network with reasonably fast links so that RPL convergence will 2410 be triggered immediately upon recognition of link failure/recovery. 2412 The key limitation of the chosen profile is that it is designed to 2413 not require any Data-Plane artifacts (such as [RFC6553]).  While the 2414 senders/receivers of ACP packets can be legacy NOC devices connected 2415 via ACP connect (see Section 8.1.1) to the ACP, their connectivity can 2416 be handled as non-RPL-aware leafs (or "Internet") according to the 2417 Data-Plane architecture explained in [I-D.ietf-roll-useofrplinfo]. 2418 This non-artifact profile is largely driven by the desire to avoid 2419 introducing the required Hop-by-Hop headers into the ACP forwarding 2420 plane, especially to support devices with silicon forwarding planes 2421 that cannot support insertion/removal of these headers in silicon. 2423 In this profile choice, RPL has no Data-Plane artifacts.  A simple 2424 destination-prefix-based routing table is used.  As a 2425 consequence of supporting only a single instanceID containing 2426 one Destination Oriented Directed Acyclic Graph (DODAG), the ACP will 2427 only accommodate a single class of routing table and cannot 2428 create optimized routing paths to accomplish latency or energy goals. 2430 Consider a network that has multiple NOCs in different locations. 2431 Only one NOC will become the DODAG root.  Other NOCs will have to 2432 send traffic through the DODAG (tree) rooted in the primary NOC. 2433 Depending on topology, this can be an annoyance from a latency point 2434 of view, but it does not represent a single point of failure, as the 2435 DODAG can reconfigure itself when it detects Data-Plane forwarding 2436 failures.
2438 The lack of RPL Packet Information (RPI, the IPv6 header for RPL 2439 defined by [RFC6553]), means that the Data-Plane will have no rank 2440 value that can be used to detect loops.  As a result, traffic may 2441 loop until the TTL of the packet reaches zero.  This is the same 2442 behavior as that of other IGPs that do not have the Data-Plane 2443 options of RPL. 2445 Since links in the ACP are assumed to be mostly reliable (or have 2446 link layer protection against loss) and because there is no stretch 2447 according to Section 6.11.1.7, loops should be exceedingly rare 2448 though. 2450 There are a variety of mechanisms possible in RPL to further avoid 2451 temporary loops: DODAG Information Objects (DIOs) SHOULD be sent 2452 2...3 times to inform children when losing the last parent.  The 2453 technique in [RFC6550] section 8.2.2.6 (Detaching) SHOULD be 2454 favored over that in section 8.2.2.5 (Poisoning) because it allows 2455 local connectivity.  Nodes SHOULD select more than one parent, at 2456 least 3 if possible, and send Destination Advertisement Objects 2457 (DAOs) to all of them in parallel. 2459 Additionally, failed ACP tunnels will be detected by IKEv2 Dead Peer 2460 Detection (which can function as a replacement for a Low-power and 2461 Lossy Networks' (LLN's) Expected Transmission Count (ETX)).  A failure 2462 of an ACP tunnel should signal the RPL control plane to pick a 2463 different parent. 2465 Future Extensions to this RPL profile can provide optimality for 2466 multiple NOCs.  This requires utilizing Data-Plane artifacts including 2467 IPinIP encap/decap on ACP routers and processing of IPv6 RPI headers. 2468 Alternatively, (Src,Dst) routing table entries could be used.  A 2469 decision for the preferred technology would have to be made when such 2470 an extension is defined. 2472 6.11.1.2.  RPL Instances 2474 Single RPL instance.  Default RPLInstanceID = 0. 2476 6.11.1.3.  Storing vs.
Non-Storing Mode 2478 RPL Mode of Operations (MOP): MUST support mode 2 - "Storing Mode of 2479 Operations with no multicast support".  Implementations MAY support 2480 mode 3 ("... with multicast support" as that is a superset of mode 2481 2).  Note: Root indicates mode in DIO flow. 2483 6.11.1.4.  DAO Policy 2485 Proactive, aggressive DAO state maintenance: 2487 o  Use K-flag in unsolicited DAO indicating change from previous 2488 information (to require DAO-ACK). 2490 o  Retry such DAOs DAO-RETRIES(3) times with DAO-ACK_TIME_OUT(256ms) 2491 in between. 2493 6.11.1.5.  Path Metric 2495 Hopcount. 2497 6.11.1.6.  Objective Function 2499 Objective Function (OF): Use OF0 [RFC6552].  No use of metric 2500 containers. 2502 rank_factor: Derived from link speed: <= 100Mbps: 2503 LOW_SPEED_FACTOR(5), else HIGH_SPEED_FACTOR(1) 2505 6.11.1.7.  DODAG Repair 2507 Global Repair: we assume stable links and ranks (metrics), so no need 2508 to periodically rebuild DODAG.  DODAG version only incremented under 2509 catastrophic events (e.g., administrative action). 2511 Local Repair: As soon as link breakage is detected, send No-Path DAO 2512 for all the targets that were reachable only via this link.  As soon 2513 as link repair is detected, validate if this link provides you a 2514 better parent.  If so, compute your new rank, and send a new DIO that 2515 advertises your new rank.  Then send a DAO with a new path sequence 2516 about yourself. 2518 stretch_rank: none provided ("not stretched"). 2520 Data Path Validation: Not used. 2522 Trickle: Not used. 2524 6.11.1.8.  Multicast 2526 Not used yet but possible because of the selected mode of operations. 2528 6.11.1.9.  Security 2530 [RFC6550] security not used, substituted by ACP security. 2532 6.11.1.10.  P2P communications 2534 Not used. 2536 6.11.1.11.  IPv6 address configuration 2538 Every ACP node (RPL node) announces an IPv6 prefix covering the 2539 address(es) used in the ACP node.
The prefix length depends on the 2540 chosen addressing sub-scheme of the ACP address provisioned into the 2541 certificate of the ACP node, e.g., /127 for the Zone Addressing Sub- 2542 Scheme or /112 or /120 for the Vlong Addressing Sub-Scheme. See 2543 Section 6.10 for more details. 2545 Every ACP node MUST install a black hole (aka null) route for 2546 whatever ACP address space it advertises (i.e., the /96 or 2547 /127). This is to avoid routing loops for addresses that an ACP node 2548 has not (yet) used. 2550 6.11.1.12. Administrative parameters 2552 Administrative Preference ([RFC6550], 3.2.6 - to become root): 2553 Indicated in DODAGPreference field of DIO message. 2555 o Explicitly configured "root": 0b100 2557 o ACP registrar (Default): 0b011 2559 o ACP-connect (non-registrar): 0b010 2561 o Default: 0b001. 2563 6.11.1.13. RPL Data-Plane artifacts 2565 RPI (RPL Packet Information [RFC6553]): Not used as there is only a 2566 single instance, and data path validation is not being used. 2568 SRH (RPL Source Routing Header - [RFC6554]): Not used. Storing mode is being 2569 used. 2571 6.11.1.14. Unknown Destinations 2573 Because RPL minimizes the size of the routing and forwarding table, 2574 prefixes reachable through the same interface as the RPL root are not 2575 known on every ACP node. Therefore, traffic to unknown destination 2576 addresses can only be discovered at the RPL root. The RPL root 2577 SHOULD have attack-safe mechanisms to operationally discover and log 2578 such packets. 2580 6.12. General ACP Considerations 2582 Since channels are by default established between adjacent neighbors, 2583 the resulting overlay network provides hop-by-hop encryption. Each node 2584 decrypts incoming traffic from the ACP, and encrypts outgoing traffic 2585 to its neighbors in the ACP. Routing is discussed in Section 6.11. 2587 6.12.1.
Performance 2589 There are no performance requirements against ACP implementations 2590 defined in this document because the performance requirements depend 2591 on the intended use case. It is expected that a full autonomic node 2592 with a wide range of ASAs can require high forwarding plane 2593 performance in the ACP, for example for telemetry, but that 2594 determination is for future work. Implementations of the ACP that solely 2595 support traditional/SDN style use cases can benefit from the ACP at lower 2596 performance, especially if the ACP is used only for critical 2597 operations, e.g., when the Data-Plane is not available. See 2598 [RFC8368] for more details. 2600 6.12.2. Addressing of Secure Channels in the Data-Plane 2602 In order to be independent of the Data-Plane configuration of global 2603 IPv6 subnet addresses (that may not exist when the ACP is brought 2604 up), link-local secure channels MUST use IPv6 link-local addresses 2605 between adjacent neighbors. The fully autonomic mechanisms in this 2606 document only specify these link-local secure channels. Section 8.2 2607 specifies extensions in which secure channels are tunnels. For 2608 those, this requirement does not apply. 2610 The link-local secure channels specified in this document therefore 2611 depend on basic IPv6 link-local functionality being auto-enabled by 2612 the ACP and on prohibiting the Data-Plane from disabling it. The ACP 2613 also depends on being able to operate the secure channel protocol 2614 (e.g., IPsec / DTLS) across IPv6 link-local addresses, something that 2615 may be an uncommon profile. Functionally, these are the only 2616 interactions with the Data-Plane that the ACP needs to have.
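As an illustration of the addressing described above, the following sketch shows how a secure channel endpoint would carry an IPv6 link-local address together with its zone (the interface); the interface name, port and function name are hypothetical examples, not part of this specification:

```python
import ipaddress

def secure_channel_endpoint(addr: str, ifname: str, port: int):
    """Build an AF_INET6-style socket address for an ACP secure
    channel endpoint. ACP secure channels run between IPv6
    link-local addresses, so the zone (the interface) must be
    carried along with the address."""
    ip = ipaddress.IPv6Address(addr)
    if not ip.is_link_local:
        raise ValueError("ACP secure channels use link-local addresses")
    # (host, port, flowinfo, scope_id); a real implementation would
    # resolve ifname with socket.if_nametoindex().
    return (f"{addr}%{ifname}", port, 0, 0)

# Example: a hypothetical IKEv2 endpoint on interface "eth0".
endpoint = secure_channel_endpoint("fe80::1234", "eth0", 500)
```

A global address such as 2001:db8::1 is rejected, reflecting that the fully autonomic mechanisms of this document only cover link-local secure channels.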
2618 To mitigate these interactions with the Data-Plane, extensions to 2619 this document may specify additional layer 2 or layer 3 encapsulations 2620 for ACP secure channels as well as other protocols to auto-discover 2621 peer endpoints for such encapsulations (e.g., tunneling across L3 or 2622 use of L2 only encapsulations). 2624 6.12.3. MTU 2626 The MTU for ACP secure channels must be derived locally from the 2627 underlying link MTU minus the secure channel encapsulation overhead. 2629 ACP secure channel protocols do not need to perform MTU discovery 2630 because they are built across L2 adjacencies - the MTUs on both sides 2631 of the L2 connection are assumed to be consistent. 2632 Extensions to the ACP where the ACP is for example tunneled need to 2633 consider how to guarantee MTU consistency. This is an issue of 2634 tunnels, not an issue of running the ACP across a tunnel. Transport 2635 stacks running across the ACP can perform normal PMTUD (Path MTU 2636 Discovery). Because the ACP is meant to prioritize reliability 2637 over performance, they MAY opt to only expect the IPv6 minimum MTU (1280) 2638 to avoid running into PMTUD implementation bugs or underlying link 2639 MTU mismatch problems. 2641 6.12.4. Multiple links between nodes 2643 If two nodes are connected via several links, the ACP SHOULD be 2644 established across every link, but it is possible to establish the 2645 ACP only on a sub-set of links. Having an ACP channel on every link 2646 has a number of advantages, for example it allows for a faster 2647 failover in case of link failure, and it reflects the physical 2648 topology more closely. Using a subset of links (for example, a 2649 single link) reduces resource consumption on the node, because state 2650 needs to be kept per ACP channel.
The negotiation scheme explained 2651 in Section 6.5 allows Alice (the node with the higher ACP address) to 2652 drop all but the desired ACP channels to Bob - and Bob will not 2653 retry to build these secure channels from his side unless Alice shows 2654 up with a previously unknown GRASP announcement (e.g., on a different 2655 link or with a different address announced in GRASP). 2657 6.12.5. ACP interfaces 2659 The ACP VRF has conceptually two types of interfaces: The "ACP 2660 Loopback interface(s)" to which the ACP ULA address(es) are assigned 2661 and the "ACP virtual interfaces" that are mapped to the ACP secure 2662 channels. 2664 The term "Loopback interface" was introduced initially to refer to an 2665 internal interface on a node that would allow IP traffic between 2666 transport endpoints on the node in the absence or failure of any or 2667 all external interfaces, see [RFC4291] section 2.5.3. 2669 Even though Loopback interfaces were originally designed to hold only 2670 Loopback addresses not reachable from outside the node, these 2671 interfaces are also commonly used today to hold addresses reachable 2672 from the outside. They are meant to be reachable independent of any 2673 external interface being operational, and therefore to be more 2674 resilient. These addresses on Loopback interfaces can be thought of 2675 as "node addresses" instead of "interface addresses", and that is 2676 what ACP address(es) are. This construct therefore makes it possible 2677 to address ACP nodes with a well-defined set of addresses independent 2678 of the number of external interfaces. 2680 For these reasons, the ACP (ULA) address(es) are assigned to Loopback 2681 interface(s).
2683 ACP secure channels, e.g., IPsec, DTLS or other future security 2684 associations with neighboring ACP nodes can be mapped to ACP virtual 2685 interfaces in different ways: 2687 ACP point-to-point virtual interface: 2689 Each ACP secure channel is mapped into a separate point-to-point ACP 2690 virtual interface. If a physical subnet has more than two ACP 2691 capable nodes (in the same domain), this implementation approach will 2692 lead to a full mesh of ACP virtual interfaces between them. 2694 ACP multi-access virtual interface: 2696 In a more advanced implementation approach, the ACP will construct a 2697 single multi-access ACP virtual interface for all ACP secure channels 2698 to ACP capable nodes reachable across the same underlying (physical) 2699 subnet. IPv6 link-local multicast packets sent into an ACP multi- 2700 access virtual interface are replicated to every ACP secure channel 2701 mapped into the ACP multi-access virtual interface. IPv6 unicast 2702 packets sent into an ACP multi-access virtual interface are sent to 2703 the ACP secure channel that belongs to the ACP neighbor that is the 2704 next-hop in the ACP forwarding table entry used to reach the packet's 2705 destination address. 2707 There is no requirement for all ACP nodes on the same multi-access 2708 subnet to use the same type of ACP virtual interface. This is purely 2709 a node local decision. 2711 ACP nodes MUST perform standard IPv6 operations across ACP virtual 2712 interfaces including SLAAC (Stateless Address Auto-Configuration - 2713 [RFC4862]) to assign their IPv6 link-local address on the ACP virtual 2714 interface and ND (Neighbor Discovery - [RFC4861]) to discover which 2715 IPv6 link-local neighbor address belongs to which ACP secure channel 2716 mapped to the ACP virtual interface. This is independent of whether 2717 the ACP virtual interface is point-to-point or multi-access.
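The unicast and multicast forwarding behavior of an ACP multi-access virtual interface described above can be sketched as follows; this is a minimal model where the class, its attributes and the example addresses are illustrative only:

```python
import ipaddress

class AcpMultiAccessInterface:
    """Sketch of an ACP multi-access virtual interface: link-local
    multicast is replicated to every mapped secure channel, unicast
    follows the ACP forwarding table to exactly one channel."""

    def __init__(self):
        self.channels = {}          # neighbor link-local address -> channel
        self.forwarding_table = {}  # destination prefix -> next-hop address

    def send(self, dst: str, packet: bytes):
        dst_ip = ipaddress.IPv6Address(dst)
        if dst_ip.is_multicast:
            # Replicate to every secure channel mapped into this interface.
            return list(self.channels.values())
        # Longest-prefix match to find the next-hop neighbor's channel
        # (raises if the destination is unknown; error handling omitted).
        best = max((p for p in self.forwarding_table
                    if dst_ip in ipaddress.IPv6Network(p)),
                   key=lambda p: ipaddress.IPv6Network(p).prefixlen)
        return [self.channels[self.forwarding_table[best]]]

# Example (hypothetical neighbors): two secure channels on one subnet.
iface = AcpMultiAccessInterface()
iface.channels = {"fe80::2": "channel-to-bob", "fe80::3": "channel-to-carol"}
iface.forwarding_table = {"fd89:b714:f3db::/48": "fe80::2"}
```

Sending to ff02::1 returns both channels; sending to a unicast ACP address returns only the channel of the next-hop neighbor.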
2719 "Optimistic Duplicate Address Detection (DAD)" according to [RFC4429] 2720 is RECOMMENDED because duplicates between ACP 2721 nodes are highly improbable as long as the address can be formed from 2722 a globally unique locally assigned identifier (e.g., EUI-48/EUI-64, see 2723 below). 2725 ACP nodes MAY reduce the amount of link-local IPv6 multicast packets 2726 from ND by learning the IPv6 link-local neighbor address to ACP 2727 secure channel mapping from other messages such as the source address 2728 of IPv6 link-local multicast RPL messages - and thereby forgo the 2729 need to send Neighbor Solicitation messages. 2731 The ACP virtual interface IPv6 link-local address can be derived from 2732 any appropriate local mechanism such as node local EUI-48 or EUI-64 2733 ("EUI" stands for "Extended Unique Identifier"). It MUST NOT depend 2734 on something that is attackable from the Data-Plane such as the IPv6 2735 link-local address of the underlying physical interface, which can be 2736 attacked by SLAAC, or parameters of the secure channel encapsulation 2737 header that may not be protected by the secure channel mechanism. 2739 The link-layer address of an ACP virtual interface is the address 2740 used for the underlying interface across which the secure tunnels are 2741 built, typically Ethernet addresses. Because unicast IPv6 packets 2742 sent to an ACP virtual interface are not sent to a link-layer 2743 destination address but rather into an ACP secure channel, the link-layer 2744 address fields SHOULD be ignored on reception, and instead the ACP 2745 secure channel from which the message was received should be 2746 remembered. 2748 Multi-access ACP virtual interfaces are the preferable implementation 2749 when the underlying interface is a (broadcast) multi-access subnet 2750 because they reflect the presence of the underlying multi-access 2751 subnet in the virtual interfaces of the ACP.
This makes it for 2752 example simpler to build services with topology awareness inside the 2753 ACP VRF in the same way as they could have been built running 2754 natively on the multi-access interfaces. 2756 Consider also the impact of point-to-point vs. multi-access virtual 2757 interfaces on the efficiency of flooding via link-local multicasted 2758 messages: 2760 Assume a LAN with three ACP neighbors, Alice, Bob and Carol. Alice's 2761 ACP GRASP instance wants to send a link-local GRASP multicast message to Bob 2762 and Carol. If Alice's ACP emulates the LAN as one point-to-point 2763 virtual interface to Bob and one to Carol, the sending application 2764 itself will send two copies; if Alice's ACP emulates a LAN, GRASP 2765 will send one packet and the ACP will replicate it. The result is 2766 the same. The difference happens when Bob and Carol receive their 2767 packet. If they use ACP point-to-point virtual interfaces, their 2768 GRASP instances would forward the packet from Alice to each other as 2769 part of the GRASP flooding procedure. These packets are unnecessary 2770 and would be discarded by GRASP on receipt as duplicates (by use of 2771 the GRASP Session ID). If Bob's and Carol's ACPs emulate a 2772 multi-access virtual interface, then this does not happen, because 2773 GRASP's flooding procedure does not replicate packets back to the 2774 interface that they were received from. 2776 Note that link-local GRASP multicast messages are not sent directly 2777 as IPv6 link-local multicast UDP messages into ACP virtual 2778 interfaces, but instead into ACP GRASP virtual interfaces that are 2779 layered on top of ACP virtual interfaces to add TCP reliability to 2780 link-local multicast GRASP messages. Nevertheless, these ACP GRASP 2781 virtual interfaces perform the same replication of messages and, 2782 therefore, result in the same impact on flooding. See Section 6.8.2 2783 for more details.
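The flooding difference described above can be illustrated with a toy model; the node names and the counting are illustrative, not the GRASP wire protocol:

```python
def p2p_flood(sender: str, nodes: list) -> dict:
    """Count GRASP packets each node receives when the LAN is
    emulated as point-to-point virtual interfaces: every receiver
    relays the message onward to all its other p2p neighbors,
    producing duplicates (later dropped via the GRASP Session ID)."""
    received = {n: 0 for n in nodes}
    for peer in nodes:
        if peer == sender:
            continue
        received[peer] += 1          # copy sent directly by the sender
        for relay_to in nodes:       # peer relays as part of flooding
            if relay_to not in (peer, sender):
                received[relay_to] += 1
    return received

def multi_access_flood(sender: str, nodes: list) -> dict:
    """With a multi-access emulation, flooding never replicates a
    packet back onto the interface it was received from, so each
    node receives exactly one copy."""
    return {n: 0 if n == sender else 1 for n in nodes}
```

With nodes Alice, Bob and Carol, the point-to-point model delivers two copies each to Bob and Carol (one original plus one duplicate relayed by the other), while the multi-access model delivers exactly one.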
2785 RPL does support operations and correct routing table construction 2786 across non-broadcast multi-access (NBMA) subnets. This is common 2787 when using many radio technologies. When such NBMA subnets are used, 2788 they MUST NOT be represented as ACP multi-access virtual interfaces 2789 because the replication of IPv6 link-local multicast messages will 2790 not reach all NBMA subnet neighbors. As a result, GRASP message 2791 flooding would fail. Instead, each ACP secure channel across such an 2792 interface MUST be represented as an ACP point-to-point virtual 2793 interface. These requirements can be avoided by coupling the ACP 2794 flooding mechanism for GRASP messages directly to RPL (flood GRASP 2795 across the DODAG), but such an enhancement is subject to future work. 2797 Care must also be taken when creating multi-access ACP virtual 2798 interfaces across ACP secure channels between ACP nodes in different 2799 domains or routing subdomains. The policies to be negotiated may be 2800 described as peer-to-peer policies, in which case it is easier to 2801 create ACP point-to-point virtual interfaces for these secure 2802 channels. 2804 7. ACP support on L2 switches/ports (Normative) 2806 7.1. Why 2808 ANrtr1 ------ ANswitch1 --- ANswitch2 ------- ANrtr2 2809 .../ \ \ ... 2810 ANrtrM ------ \ ------- ANrtrN 2811 ANswitchM ... 2813 Figure 12: Topology with L2 ACP switches 2815 Consider a large L2 LAN with ANrtr1...ANrtrN connected via some 2816 topology of L2 switches. Examples include large enterprise campus 2817 networks with an L2 core, IoT networks or broadband aggregation 2818 networks which often even have a multi-level L2 switched topology. 2820 If the discovery protocol used for the ACP is operating at the subnet 2821 level, every ACP router will see all other ACP routers on the LAN as 2822 neighbors and a full mesh of ACP channels will be built.
If some or 2823 all of the AN switches are autonomic with the same discovery 2824 protocol, then the full mesh would include those switches as well. 2826 A full mesh of ACP connections like this can create fundamental 2827 scale challenges. The number of security associations of the secure 2828 channel protocols will likely not scale arbitrarily, especially when 2829 they leverage platform accelerated encryption/decryption. Likewise, 2830 any other ACP operations (such as routing) need to scale to the 2831 number of direct ACP neighbors. An ACP router with just 4 physical 2832 interfaces might be deployed into a LAN with hundreds of neighbors 2833 connected via switches. Introducing such a new unpredictable scaling 2834 factor requirement makes it harder to support the ACP on arbitrary 2835 platforms and in arbitrary deployments. 2837 Predictable scaling requirements for ACP neighbors can most easily be 2838 achieved if, in topologies like these, ACP capable L2 switches can 2839 ensure that discovery messages terminate on them so that neighboring 2840 ACP routers and switches will only find the physically connected ACP 2841 L2 switches as their candidate ACP neighbors. With such a discovery 2842 mechanism in place, the ACP and its security associations will only 2843 need to scale to the number of physical interfaces instead of a 2844 potentially much larger number of "LAN-connected" neighbors. And the 2845 ACP topology will directly follow the physical topology, something 2846 which can then also be leveraged in management operations or by ASAs. 2848 In the example above, consider that ANswitch1 and ANswitchM are ACP 2849 capable, and ANswitch2 is not ACP capable. The desired ACP topology 2850 is that ANrtr1 and ANrtrM only have an ACP connection to ANswitch1, 2851 and that ANswitch1, ANrtr2, ANrtrN have a full mesh of ACP connections 2852 amongst each other.
ANswitch1 also has an ACP connection with 2853 ANswitchM and ANswitchM has ACP connections to anything else behind 2854 it. 2856 7.2. How (per L2 port DULL GRASP) 2858 To support ACP on L2 switches or L2 switched ports of an L3 device, 2859 it is necessary to make those L2 ports look like L3 interfaces for 2860 the ACP implementation. This primarily involves the creation of a 2861 separate DULL GRASP instance/domain on every such L2 port. Because 2862 GRASP has a dedicated link-local IPv6 multicast address 2863 (ALL_GRASP_NEIGHBORS), it is sufficient that all packets for this 2864 address are being extracted at the port level and passed to that DULL 2865 GRASP instance. Likewise the IPv6 link-local multicast packets sent 2866 by that DULL GRASP instance need to be sent only towards the L2 port 2867 for this DULL GRASP instance. 2869 If the device with L2 ports is supporting per L2 port ACP DULL GRASP 2870 as well as MLD snooping ([RFC4541]), then MLD snooping must be 2871 changed to never forward packets for ALL_GRASP_NEIGHBORS because that 2872 would cause the problem that per L2 port ACP DULL GRASP is meant to 2873 overcome (forwarding DULL GRASP packets across L2 ports). 2875 The rest of ACP operations can operate in the same way as in L3 2876 devices: Assume for example that the device is an L3/L2 hybrid device 2877 where L3 interfaces are assigned to VLANs and each VLAN has 2878 potentially multiple ports. DULL GRASP is run as described 2879 individually on each L2 port. When it discovers a candidate ACP 2880 neighbor, it passes its IPv6 link-local address and supported secure 2881 channel protocols to the ACP secure channel negotiation that can be 2882 bound to the L3 (VLAN) interface. It will simply use link-local IPv6 2883 multicast packets to the candidate ACP neighbor. 
Once a secure 2884 channel is established to such a neighbor, the virtual interface to 2885 which this secure channel is mapped should then actually be the L2 2886 port and not the L3 interface to best map the actual physical 2887 topology into the ACP virtual interfaces. See Section 6.12.5 for 2888 more details about how to map secure channels into ACP virtual 2889 interfaces. Note that a single L2 port can still have multiple ACP 2890 neighbors if it connects, for example, to multiple ACP neighbors via a 2891 non-ACP enabled switch. The per L2 port ACP virtual interface can 2892 therefore still be a multi-access virtual LAN. 2894 For example, in the above picture, ANswitch1 would run separate DULL 2895 GRASP instances on its ports to ANrtr1, ANswitch2 and ANswitchM, even 2896 though all those three ports may be in the data plane in the same 2897 (V)LAN. While the Data-Plane performs L2 switching between these ports, 2898 ANswitch1 would perform ACP L3 routing between them. 2900 The description in the previous paragraph was specifically meant to 2901 illustrate that on hybrid L3/L2 devices that are common in 2902 enterprise, IoT and broadband aggregation, there is only the GRASP 2903 packet extraction (by Ethernet address) and GRASP link-local 2904 multicast per L2-port packet injection that has to consider L2 ports 2905 at the hardware forwarding level. The remaining operations are 2906 purely ACP control plane and setup of secure channels across the L3 2907 interface. This hopefully makes support for per-L2 port ACP on those 2908 hybrid devices easy. 2910 This L2/L3 optimized approach is subject to "address stealing", e.g., 2911 where a device on one port uses addresses of a device on another 2912 port. This is a generic issue in L2 LANs and switches often already 2913 have some form of "port security" to prohibit this. They rely on NDP 2914 or DHCP learning of which port/MAC-address and IPv6 address belong 2915 together and block duplicates.
This type of function needs to be 2916 enabled to prohibit DoS attacks. Likewise, the GRASP DULL instance 2917 needs to ensure that the IPv6 address in the locator-option matches 2918 the source IPv6 address of the DULL GRASP packet. 2920 In devices without such a mix of L2 port/interfaces and L3 interfaces 2921 (to terminate any transport layer connections), implementation 2922 details will differ. Logically, the simplest option is to consider and use 2923 every L2 port as a separate L3 subnet for all ACP operations. 2924 The fact that the ACP only requires IPv6 link-local unicast and 2925 multicast should make support for it on any type of L2 device as 2926 simple as possible, but the need to support secure channel protocols 2927 may be a limiting factor to supporting the ACP on such devices. Future 2928 options such as MACsec could improve that situation. 2930 A generic issue with the ACP in L2 switched networks is the interaction 2931 with the Spanning Tree Protocol (STP). Ideally, the ACP should also be built 2932 across ports that are blocked in STP so that the ACP does not 2933 depend on STP and can continue to run unaffected across STP topology 2934 changes (where re-convergence can be quite slow). The above 2935 described simple implementation options are not sufficient for this. 2936 Instead, they would simply have the ACP run across the active STP 2937 topology and the ACP would equally be interrupted and re-converge 2938 with STP changes. 2940 8. Support for Non-ACP Components (Normative) 2942 8.1. ACP Connect 2944 8.1.1. Non-ACP Controller / NMS system 2946 The Autonomic Control Plane can be used by management systems, such 2947 as controllers or network management system (NMS) hosts (henceforth 2948 called simply "NMS hosts"), to connect to devices (or other types of 2949 nodes) through it. For this, an NMS host must have access to the 2950 ACP. The ACP is a self-protecting overlay network, which allows by 2951 default access only to trusted, autonomic systems.
Therefore, a 2952 traditional, non-ACP NMS system does not have access to the ACP by 2953 default, just like any other external node. 2955 If the NMS host is not autonomic, i.e., it does not support autonomic 2956 negotiation of the ACP, then it can be brought into the ACP by 2957 explicit configuration. To support connections to adjacent non-ACP 2958 nodes, an ACP node must support "ACP connect" (sometimes also called 2959 "autonomic connect"): 2961 "ACP connect" is a function on an autonomic node that is called an 2962 "ACP edge node". With "ACP connect", interfaces on the node can be 2963 configured to be put into the ACP VRF. The ACP is then accessible to 2964 other (NOC) systems on such an interface without those systems having 2965 to support any ACP discovery or ACP channel setup. This is also 2966 called "native" access to the ACP because to those (NOC) systems the 2967 interface looks like a normal network interface (without any 2968 encryption/novel-signaling). 2970 Data-Plane "native" (no ACP) 2971 . 2972 +--------+ +----------------+ . +-------------+ 2973 | ACP | |ACP Edge Node | . | | 2974 | Node | | | v | | 2975 | |-------|...[ACP VRF]....+-----------------| |+ 2976 | | ^ |. | | NOC Device || 2977 | | . | .[Data-Plane]..+-----------------| "NMS hosts" || 2978 | | . | [ ] | . ^ | || 2979 +--------+ . +----------------+ . . +-------------+| 2980 . . . +-------------+ 2981 . . . 2982 Data-Plane "native" . ACP "native" (unencrypted) 2983 + ACP auto-negotiated . "ACP connect subnet" 2984 and encrypted . 2985 ACP connect interface 2986 e.g., "vrf ACP native" (config) 2988 Figure 13: ACP connect 2990 ACP connect has security consequences: All systems and processes 2991 connected via ACP connect have access to all ACP nodes on the entire 2992 ACP, without further authentication. Thus, the ACP connect interface 2993 and (NOC) systems connected to it must be physically controlled/ 2994 secured. 
For this reason, the mechanisms described here explicitly do 2995 not include options to allow for a non-ACP router to be connected 2996 across an ACP connect interface and addresses behind such a router 2997 routed inside the ACP. 2999 An ACP connect interface provides access to only the ACP. 3000 This is likely insufficient for many NMS hosts. Instead, they would 3001 require a second "Data-Plane" interface outside the ACP for 3002 connections between the NMS host and administrators, or Internet 3003 based services, or for direct access to the Data-Plane. The document 3004 "Using Autonomic Control Plane for Stable Connectivity of Network 3005 OAM" [RFC8368] explains in more detail how the ACP can be integrated 3006 in a mixed NOC environment. 3008 The ACP connect interface must be (auto-)configured with an IPv6 3009 address prefix. This prefix SHOULD be covered by one of the (ULA) 3010 prefix(es) used in the ACP. If using non-autonomic configuration, it 3011 SHOULD use the ACP Manual Addressing Sub-Scheme (Section 6.10.4). It 3012 SHOULD NOT use a prefix that is also routed outside the ACP so that 3013 the addresses clearly indicate whether they are used inside the ACP or 3014 not. 3016 The prefix of ACP connect subnets MUST be distributed by the ACP edge 3017 node into the ACP routing protocol (RPL). The NMS hosts MUST connect 3018 to prefixes in the ACP routing table via their ACP connect interface. 3019 In the simple case where the ACP uses only one ULA prefix and all ACP 3020 connect subnets have prefixes covered by that ULA prefix, NMS hosts 3021 can rely on [RFC6724] - the NMS host will select the ACP connect 3022 interface because any ACP destination address is best matched by the 3023 address on the ACP connect interface. If the NMS host's ACP connect 3024 interface uses another prefix or if the ACP uses multiple ULA 3025 prefixes, then the NMS hosts require (static) routes towards the ACP 3026 interface.
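The [RFC6724]-based selection described above amounts to a longest-prefix match on the NMS host; a minimal sketch, assuming an example ACP ULA prefix and hypothetical interface names:

```python
import ipaddress

def pick_interface(dst: str, interfaces: dict) -> str:
    """Pick the outgoing interface whose configured prefix gives the
    longest match for the destination, which is the effect [RFC6724]
    address selection has on an NMS host whose ACP connect prefix
    covers all ACP destinations."""
    dst_ip = ipaddress.IPv6Address(dst)
    best_if, best_len = None, -1
    for ifname, prefix in interfaces.items():
        net = ipaddress.IPv6Network(prefix)
        if dst_ip in net and net.prefixlen > best_len:
            best_if, best_len = ifname, net.prefixlen
    return best_if

# Hypothetical NMS host: one ACP connect interface covered by an
# example ACP ULA prefix, and a default Data-Plane interface.
interfaces = {"acp-connect": "fd89:b714:f3db::/48", "data-plane": "::/0"}
```

ACP destinations under the ULA prefix select the ACP connect interface; everything else falls through to the Data-Plane interface, which is why additional (static) routes are only needed with multiple or non-matching prefixes.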
3028 ACP Edge Nodes MUST only forward IPv6 packets received from an ACP 3029 connect interface into the ACP if they have an IPv6 source address from the ACP 3030 prefix assigned to this interface (sometimes called "RPF filtering"). 3031 This MAY be changed through administrative measures. 3033 To limit the security impact of ACP connect, nodes supporting it 3034 SHOULD implement a security mechanism to allow configuration/use of 3035 ACP connect interfaces only on nodes explicitly targeted to be 3036 deployed with it (such as those in physically secure locations like a 3037 NOC). For example, the certificate of such a node could include an 3038 extension required to permit configuration of ACP connect interfaces. 3039 This prohibits a random ACP node with easy physical access that 3040 is not meant to run ACP connect from leaking the ACP when it 3041 becomes compromised and the intruder configures ACP connect on it. 3042 The full workflow including the mechanism by which an ACP registrar 3043 would select which node to give such a certificate to is subject to 3044 future work. 3046 8.1.2. Software Components 3048 The ACP connect mechanism can not only be used to connect physically 3049 external systems (NMS hosts) to the ACP but also other applications, 3050 containers or virtual machines. In fact, one possible way to 3051 eliminate the security issue of the external ACP connect interface is 3052 to collocate an ACP edge node and an NMS host by making one a virtual 3053 machine or container inside the other; and therefore converting the 3054 unprotected external ACP subnet into an internal virtual subnet in a 3055 single device. This would ultimately result in a fully ACP enabled 3056 NMS host with minimum impact to the NMS host's software architecture.
3057 This approach is not limited to NMS hosts but could equally be 3058 applied to devices consisting of one or more VNFs (virtual network 3059 functions): An internal virtual subnet connecting out-of-band 3060 management interfaces of the VNFs to an ACP edge router VNF. 3062 The core requirement is that the software components need to have a 3063 network stack that permits access to the ACP and optionally also the 3064 Data-Plane. Like in the physical setup for NMS hosts, this can be 3065 realized via two internal virtual subnets. One that is connecting to 3066 the ACP (which could be a container or virtual machine by itself), 3067 and one (or more) connecting into the Data-Plane. 3069 This "internal" use of the ACP connect approach should not be considered 3070 to be a "workaround", because in this case it is possible to build a 3071 correct security model: It is not necessary to rely on unprovable 3072 external physical security mechanisms as in the case of external NMS 3073 hosts. Instead, the orchestration of the ACP, the virtual subnets 3074 and the software components can be done by trusted software that 3075 could be considered to be part of the ANI (or even an extended ACP). 3076 This software component is responsible for ensuring that only trusted 3077 software components will get access to that virtual subnet and that 3078 only even more trusted software components will get access to both 3079 the ACP virtual subnet and the Data-Plane (because those ACP users 3080 could leak traffic between ACP and Data-Plane). This trust could be 3081 established for example through cryptographic means such as signed 3082 software packages. The specification of these mechanisms is subject 3083 to future work. 3085 Note that ASAs (Autonomic Software Agents) could also be software 3086 components as described in this section, but further details of ASAs 3087 are subject to future work. 3089 8.1.3.
Auto Configuration 3091 ACP edge nodes, NMS hosts and software components that, as described 3092 in the previous section, are connected via virtual 3093 interfaces SHOULD support StateLess Address 3094 Autoconfiguration (SLAAC - [RFC4862]) and route auto-configuration 3095 according to [RFC4191] on the ACP connect subnet. 3097 The ACP edge node acts as the router on the ACP connect subnet, 3098 providing the (auto-)configured prefix for the ACP connect subnet to 3099 NMS hosts and/or software components. The ACP edge node uses the route 3100 information option of RFC4191 to announce the default route (::/0) with a 3101 lifetime of 0 and aggregated prefixes for routes in the ACP routing 3102 table with normal lifetimes. This will ensure that the ACP edge node 3103 does not become a default router, but that the NMS hosts and software 3104 components will route the prefixes used in the ACP to the ACP edge 3105 node. 3107 Aggregated prefixes means that the ACP edge node needs to only announce 3108 the /48 ULA prefixes used in the ACP but none of the actual /64 3109 (Manual Addressing Sub-Scheme), /127 (ACP Zone Addressing Sub- 3110 Scheme), /112 or /120 (Vlong Addressing Sub-Scheme) routes of actual 3111 ACP nodes. If ACP connect interfaces are configured with non-ULA prefixes, 3112 then those prefixes cannot be aggregated without further configured 3113 policy on the ACP edge node. This explains the above recommendation 3114 to use ACP ULA prefix covered prefixes for ACP connect interfaces: 3115 They allow for a shorter list of prefixes to be signaled via RFC4191 3116 to NMS hosts and software components. 3118 The ACP edge nodes that have a Vlong ACP address MAY allocate a 3119 subset of their /112 or /120 address prefix to ACP connect 3120 interface(s) to eliminate the need to non-autonomically configure/ 3121 provision the address prefixes for such ACP connect interfaces. 3123 8.1.4.
Combined ACP/Data-Plane Interface (VRF Select) 3125 Combined ACP and Data-Plane interface 3126 . 3127 +--------+ +--------------------+ . +--------------+ 3128 | ACP | |ACP Edge Node | . | NMS Host(s) | 3129 | Node | | | . | / Software | 3130 | | | [ACP ]. | . | |+ 3131 | | | .[VRF ] .[VRF ] | v | "ACP address"|| 3132 | +-------+. .[Select].+--------+ "Data-Plane || 3133 | | ^ | .[Data ]. | | Address(es)"|| 3134 | | . | [Plane] | | || 3135 | | . | [ ] | +--------------+| 3136 +--------+ . +--------------------+ +--------------+ 3137 . 3138 Data-Plane "native" and + ACP auto-negotiated/encrypted 3140 Figure 14: VRF select 3142 Using two physical and/or virtual subnets (and therefore interfaces) 3143 into NMS Hosts (as per Section 8.1.1) or Software (as per 3144 Section 8.1.2) may be seen as additional complexity, for example with 3145 legacy NMS Hosts that support only one IP interface. 3147 To provide a single subnet into both ACP and Data-Plane, the ACP Edge 3148 node needs to de-multiplex packets from NMS hosts into the ACP VRF and the 3149 Data-Plane. This is sometimes called "VRF select". If the ACP VRF 3150 has no overlapping IPv6 addresses with the Data-Plane (as it should), 3151 then this function can use the IPv6 destination address. The problem 3152 is Source Address Selection on the NMS Host(s) according to RFC6724. 3154 Consider the simple case: The ACP uses only one ULA prefix, and the ACP 3155 IPv6 prefix for the Combined ACP and Data-Plane interface is covered 3156 by that ULA prefix. The ACP edge node announces both the ACP IPv6 3157 prefix and one (or more) prefixes for the Data-Plane. Without 3158 further policy configuration on the NMS Host(s), it may select its 3159 ACP address as a source address for Data-Plane ULA destinations 3160 because of Rule 8 of RFC6724. The ACP edge node can pass on the 3161 packet to the Data-Plane, but the ACP source address should not be 3162 used for Data-Plane traffic, and return traffic may fail.
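The "VRF select" de-multiplexing by destination address can be sketched as follows; the ACP ULA prefix shown is an example and the function name is hypothetical:

```python
import ipaddress

# Example ACP ULA prefix (hypothetical; a real one is derived per the
# addressing scheme of Section 6.10).
ACP_ULA = ipaddress.IPv6Network("fd89:b714:f3db::/48")

def vrf_select(dst: str) -> str:
    """De-multiplex a packet from a combined ACP/Data-Plane interface
    by destination address. This only works because the ACP VRF and
    the Data-Plane have no overlapping IPv6 addresses."""
    if ipaddress.IPv6Address(dst) in ACP_ULA:
        return "ACP"
    return "Data-Plane"
```

Destinations under the ACP ULA prefix go into the ACP VRF; all others go into the Data-Plane. The source address selection issue discussed above is exactly what this destination-based check cannot fix.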
3164 If the ACP carries multiple ULA prefixes or non-ULA ACP connect 3165 prefixes, then the correct source address selection becomes even more 3166 problematic. 3168 With separate ACP connect and Data-Plane subnets and RFC4191 prefix 3169 announcements that are to be routed across the ACP connect interface, 3170 RFC6724 source address selection Rule 5 (use address of outgoing 3171 interface) will be used, so that the above problems do not occur, even in 3172 more complex cases of multiple ULA and non-ULA prefixes in the ACP 3173 routing table. 3175 To achieve the same behavior with a Combined ACP and Data-Plane 3176 interface, the ACP Edge Node needs to behave as two separate routers 3177 on the interface: One link-local IPv6 address/router for its ACP 3178 reachability, and one link-local IPv6 address/router for its Data- 3179 Plane reachability. The Router Advertisements for both are as 3180 described above (Section 8.1.3): For the ACP, the ACP prefix is 3181 announced together with the RFC4191 option for the prefixes routed across 3182 the ACP and lifetime=0 to disqualify this next-hop as a default 3183 router. For the Data-Plane, the Data-Plane prefix(es) are announced 3184 together with whatever default router parameters are used for the 3185 Data-Plane. 3187 As a result, RFC6724 source address selection Rule 5.5 may result in 3188 the same correct source address selection behavior of NMS hosts 3189 without further configuration on them as with the separate ACP connect and 3190 Data-Plane interfaces. As described in the text for Rule 5.5, this 3191 is only a "may", because IPv6 hosts are not required to track next-hop 3192 information. If an NMS Host does not do this, then separate ACP 3193 connect and Data-Plane interfaces are the preferable method of 3194 attachment. Hosts implementing [RFC8028] should (instead of may) 3195 implement [RFC6724] Rule 5.5, so it is preferred for hosts to support 3196 [RFC8028].
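The two-router behavior can be sketched as a host that remembers which advertising router each prefix and source address came from, and then applies Rule 5.5. This is a minimal illustration under stated assumptions, not the full RFC6724 algorithm; the router and prefix values are hypothetical:

```python
import ipaddress

# Routes learned from each router's RFC4191 Route Information Options.
# "fe80::1" is the ACP announcement, "fe80::2" the Data-Plane announcement;
# all addresses and prefixes are hypothetical examples.
routes_by_router = {
    "fe80::1": ["fd89:b714:f3db::/48"],  # prefixes routed across the ACP
    "fe80::2": ["::/0"],                 # Data-Plane default route
}

# Host source addresses, each remembered with the router that advertised
# the prefix it was formed from.
sources_by_router = {
    "fe80::1": "fd89:b714:f3db:0:200:0:6400:99",  # ACP address
    "fe80::2": "2001:db8:1::99",                  # Data-Plane address
}

def next_hop(dest: str) -> str:
    """Longest-match route lookup across both advertising routers."""
    d = ipaddress.ip_address(dest)
    best, best_len = None, -1
    for router, routes in routes_by_router.items():
        for route in routes:
            net = ipaddress.ip_network(route)
            if d in net and net.prefixlen > best_len:
                best, best_len = router, net.prefixlen
    return best

def rule55_source(dest: str) -> str:
    """Rule 5.5: prefer the source learned from the selected next-hop."""
    return sources_by_router[next_hop(dest)]

print(rule55_source("fd89:b714:f3db:0:8000::1"))  # ACP destination
print(rule55_source("2001:db8:99::1"))            # Data-Plane destination
```

Destinations covered by the ACP prefix select the ACP next-hop and therefore the ACP source address; everything else follows the Data-Plane default route and gets the Data-Plane source.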
3198 ACP edge nodes MAY support the Combined ACP and Data-Plane interface. 3200 8.1.5. Use of GRASP 3202 It can and should be possible to use GRASP across ACP connect 3203 interfaces, especially in the architecturally correct solution when it 3204 is used as a mechanism to connect Software (e.g., ASA or legacy NMS 3205 applications) to the ACP. Given that the ACP is the security and 3206 transport substrate for GRASP, the trustworthiness of nodes/software 3207 allowed to participate in the ACP GRASP domain is one of the main 3208 reasons why the ACP section describes no solution with non-ACP 3209 routers participating in the ACP routing table. 3211 ACP connect interfaces can be dealt with in the GRASP ACP domain like 3212 any other ACP interface, assuming that any physical ACP connect 3213 interface is physically protected from attacks and that the connected 3214 Software or NMS Hosts are trusted equally to that on other ACP nodes. 3215 ACP edge nodes SHOULD have options to filter GRASP messages in and 3216 out of ACP connect interfaces (permit/deny) and MAY have more fine- 3217 grained filtering (e.g., based on IPv6 address of originator or 3218 objective). 3220 When using "Combined ACP and Data-Plane Interfaces", care must be 3221 taken that ACP edge nodes only forward those GRASP messages received from 3222 Software or NMS Hosts that are intended for the ACP GRASP domain. 3223 Currently there is no definition for a GRASP security and transport 3224 substrate besides the ACP, so there is no definition of how such 3225 Software/NMS Hosts could participate in two separate GRASP Domains 3226 across the same subnet (ACP and Data-Plane domains). Currently, it 3227 is assumed that all GRASP packets on a Combined ACP and Data-Plane 3228 interface belong to the GRASP ACP Domain. They must all use the ACP 3229 IPv6 addresses of the Software/NMS Hosts.
The link-local IPv6 3230 addresses of Software/NMS Hosts (used for GRASP M_DISCOVERY and 3231 M_FLOOD messages) are also assumed to belong to the ACP address 3232 space. 3234 8.2. ACP through Non-ACP L3 Clouds (Remote ACP neighbors) 3236 Not all nodes in a network may support the ACP. If non-ACP Layer-2 3237 devices are between ACP nodes, the ACP will work across them since it 3238 is IP based. However, the autonomic discovery of ACP neighbors via 3239 DULL GRASP is only intended to work across L2 connections, so it is 3240 not sufficient to autonomically create ACP connections across non-ACP 3241 Layer-3 devices. 3243 8.2.1. Configured Remote ACP neighbor 3245 On the ACP node, remote ACP neighbors are configured explicitly. The 3246 parameters of such a "connection" are described in the following 3247 ABNF. Future work could transform this into a YANG ([RFC7950]) data 3248 model. 3250 connection = [ method , local-addr, remote-addr, ?pmtu ] 3251 method = [ "IKEv2" , ?port ] 3252 method //= [ "DTLS", port ] 3253 local-addr = [ address , ?vrf ] 3254 remote-addr = [ address ] 3255 address = ("any" | ipv4-address | ipv6-address ) 3256 vrf = tstr ; Name of a VRF on this node with local-address 3258 ABNF for parameters of explicitly configured remote ACP neighbors 3260 Explicit configuration of a remote peer according to this ABNF 3261 provides all the information to build a secure channel without 3262 requiring a tunnel to that peer and running DULL GRASP inside of it. 3264 The configuration includes the parameters otherwise signaled via DULL 3265 GRASP: local address, remote (peer) locator and method. The 3266 differences over DULL GRASP local neighbor discovery and secure 3267 channel creation are as follows: 3269 o The local and remote address can be IPv4 or IPv6 and are typically 3270 global scope addresses. 3272 o The vrf across which the connection is built (and in which local- 3273 addr exists) can be specified.
If vrf is not specified, it is 3274 the default vrf on the node. In DULL GRASP the VRF is implied by 3275 the interface across which DULL GRASP operates. 3277 o If local address is "any", the local address used when initiating 3278 a secure channel connection is decided by source address selection 3279 ([RFC6724] for IPv6). As a responder, the connection listens on 3280 all addresses of the node in the selected vrf. 3282 o Configuration of port is only required for methods where no 3283 defaults exist (e.g., "DTLS"). 3285 o If remote address is "any", the connection is only a responder. 3286 It is a "hub" that can be used by multiple remote peers to connect 3287 simultaneously - without having to know or configure their 3288 addresses. Example: Hub site for remote "spoke" sites reachable 3289 over the Internet. 3291 o Pmtu should be configurable to overcome issues/limitations of Path 3292 MTU Discovery (PMTUD). 3294 o IKEv2/IPsec to remote peers should support the optional NAT 3295 Traversal (NAT-T) procedures. 3297 8.2.2. Tunneled Remote ACP Neighbor 3299 An IPinIP, GRE or other form of pre-existing tunnel is configured 3300 between two remote ACP peers and the virtual interfaces representing 3301 the tunnel are configured to "ACP enable". This will enable IPv6 3302 link-local addresses and DULL on this tunnel. As a result, the tunnel 3303 is used for normal "L2 adjacent" candidate ACP neighbor discovery 3304 with DULL and the secure channel setup procedures described in this 3305 document. 3307 Tunneled Remote ACP Neighbor requires two encapsulations: the 3308 configured tunnel and the secure channel inside of that tunnel. This 3309 makes it in general less desirable than Configured Remote ACP 3310 Neighbor. A benefit of tunnels is that they may be easier to implement 3311 because there is no change to the ACP functionality - it just runs 3312 over a virtual (tunnel) interface instead of only native interfaces.
3313 The tunnel itself may also provide PMTUD while the secure channel 3314 method may not. Or the tunnel mechanism is permitted/possible 3315 through some firewall while the secure channel method may not be. 3317 8.2.3. Summary 3319 Configured/Tunneled Remote ACP neighbors are less "indestructible" 3320 than L2 adjacent ACP neighbors based on link-local addressing, since 3321 they depend on more correct Data-Plane operations, such as routing 3322 and global addressing. 3324 Nevertheless, these options may be crucial to incrementally deploy 3325 the ACP, especially if it is meant to connect islands across the 3326 Internet. Implementations SHOULD support at least Tunneled Remote 3327 ACP Neighbors via GRE tunnels - which is likely the most common 3328 router-to-router tunneling protocol in use today. 3330 Future work could envisage an option where the edge nodes of the L3 3331 cloud are configured to automatically forward ACP discovery messages 3332 to the right exit point. This optimization is not considered in this 3333 document. 3335 9. Benefits (Informative) 3337 9.1. Self-Healing Properties 3339 The ACP is self-healing: 3341 o New neighbors will automatically join the ACP after successful 3342 validation and will become reachable using their unique ULA 3343 address across the ACP. 3345 o When any changes happen in the topology, the routing protocol used 3346 in the ACP will automatically adapt to the changes and will 3347 continue to provide reachability to all nodes. 3349 o If the domain certificate of an existing ACP node gets revoked, it 3350 will automatically be denied access to the ACP as its domain 3351 certificate will be validated against a Certificate Revocation 3352 List during authentication. Since the revocation check is only 3353 done at the establishment of a new security association, existing 3354 ones are not automatically torn down. If an immediate disconnect 3355 is required, existing sessions to a freshly revoked node can be 3356 reset.
3358 The ACP can also sustain network partitions and mergers. Practically 3359 all ACP operations are link local, so a network partition has no 3360 impact on them. Nodes authenticate each other using the domain certificates 3361 to establish the ACP locally. Addressing inside the ACP remains 3362 unchanged, and the routing protocol inside both parts of the ACP will 3363 lead to two working (although partitioned) ACPs. 3365 There are few central dependencies: A certificate revocation list 3366 (CRL) may not be available during a network partition; a suitable 3367 policy to not immediately disconnect neighbors when no CRL is 3368 available can address this issue. Also, an ACP registrar or 3369 Certificate Authority might not be available during a partition. 3370 This may delay renewal of certificates that are to expire in the 3371 future, and it may prevent the enrollment of new nodes during the 3372 partition. 3374 Highly resilient ACP designs can be built by using ACP registrars 3375 with embedded sub-CA, as outlined in Section 10.2.4. As long as a 3376 partition is left with one or more such ACP registrars, it can 3377 continue to enroll new candidate ACP nodes as long as the ACP 3378 registrar's sub-CA certificate does not expire. Because the ACP 3379 addressing relies on unique Registrar-IDs, a later re-merge of 3380 partitions will also not cause problems with ACP addresses assigned 3381 during partitioning. 3383 After a network partition, a re-merge will just re-establish the 3384 previous status, certificates can be renewed, the CRL is available, 3385 and new nodes can be enrolled everywhere. Since all nodes use the 3386 same trust anchor, a re-merge will be smooth. 3388 Merging two networks with different trust anchors requires the trust 3389 anchors to mutually trust each other (for example, by cross-signing). 3390 As long as the domain names are different, the addressing will not 3391 overlap (see Section 6.10).
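The non-overlap follows from the ULA global ID being derived from a hash of the domain name. A minimal RFC4193-style sketch follows; the exact hash input and truncation of the ACP scheme are defined in Section 6.10, and SHA-256 over the bare domain name is an assumption made here purely for illustration:

```python
import hashlib
import ipaddress

def ula_prefix(domain_name: str) -> ipaddress.IPv6Network:
    # fd00::/8 plus a 40-bit global ID hashed from the domain name -> /48.
    # Hash input/truncation are illustrative, not the normative ACP scheme.
    digest = hashlib.sha256(domain_name.encode("utf-8")).digest()
    global_id = int.from_bytes(digest[:5], "big")   # first 40 bits of the hash
    prefix = (0xFD << 120) | (global_id << 80)
    return ipaddress.IPv6Network((prefix, 48))

# Different domain names hash to different global IDs, so two merged
# ACP domains will (with overwhelming probability) not overlap.
print(ula_prefix("acp.example.com"))
print(ula_prefix("acp.example.net"))
```

The derivation is deterministic per domain name, so all registrars of one domain independently arrive at the same /48 without coordination.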
3393 It is also highly desirable for implementations of the ACP to be able 3394 to run it over interfaces that are administratively down. If this is 3395 not feasible, then it might instead be possible to request explicit 3396 operator override upon administrative actions that would 3397 administratively bring down an interface across which the ACP is 3398 running, especially if bringing down the ACP is known to disconnect 3399 the operator from the node. For example, any such "down" administrative 3400 action could perform a dependency check to see if the transport 3401 connection across which this action is performed is affected by the 3402 down action (with the default RPL routing used, packet forwarding will be 3403 symmetric, so this is actually possible to check). 3405 9.2. Self-Protection Properties 3407 9.2.1. From the outside 3409 As explained in Section 6, the ACP is based on secure channels built 3410 between nodes that have mutually authenticated each other with their 3411 domain certificates. The channels themselves are protected using 3412 standard encryption technologies like DTLS or IPsec, which provide 3413 additional authentication during channel establishment, data 3414 integrity and data confidentiality protection of data inside the ACP, 3415 and in addition provide replay protection. 3417 An attacker will not be able to join the ACP without a valid 3418 domain certificate; packet injection and sniffing of traffic are also 3419 not possible due to the security provided by the encryption 3420 protocol. 3422 The ACP also serves as protection (through authentication and 3423 encryption) for protocols relevant to OAM that may not have secured 3424 protocol stack options or where implementation or deployment of those 3425 options fails due to some vendor/product/customer limitations. This 3426 includes protocols such as SNMP, NTP/PTP, DNS, DHCP, syslog, 3427 Radius/Diameter/TACACS, IPFIX/Netflow - just to name a few.
3428 Protection via the ACP secure hop-by-hop channels for these protocols 3429 is meant to be only a stopgap though: The ultimate goal is for these 3430 and other protocols to use end-to-end encryption utilizing the domain 3431 certificate and to rely on the ACP secure channels primarily for zero- 3432 touch reliable connectivity, but not primarily for security. 3434 The remaining attack vector would be to attack the underlying ACP 3435 protocols themselves, either via directed attacks or by denial-of- 3436 service attacks. However, as the ACP is built using link-local IPv6 3437 addresses, remote attacks are impossible. The ULA addresses are only 3438 reachable inside the ACP context and are therefore unreachable from the 3439 Data-Plane. Also, the ACP protocols should be implemented to be 3440 attack resistant and not consume unnecessary resources even while 3441 under attack. 3443 9.2.2. From the inside 3445 The security model of the ACP is based on trusting all members of the 3446 group of nodes that receive an ACP domain certificate for the same 3447 domain. Attacks from the inside by a compromised group member are 3448 therefore the biggest challenge. 3450 Group members must be protected against attackers so that there is no 3451 easy way to compromise them, or to use them as a proxy for attacking 3452 other devices across the ACP. For example, management plane 3453 functions (transport ports) should only be reachable from the ACP but 3454 not the Data-Plane. This applies especially to those management plane 3455 functions that have no good protection by themselves because they do not 3456 have secure end-to-end transport and to which the ACP provides not only 3457 automatic reliable connectivity but also protection against attacks.
3458 Protection across all potential attack vectors is typically easier to 3459 achieve in devices whose software is designed from the ground up with 3460 security in mind than with legacy software-based systems where the 3461 ACP is added on as another feature. 3463 As explained above, traffic across the ACP SHOULD still be end-to-end 3464 encrypted whenever possible. This includes traffic such as GRASP, 3465 EST and BRSKI inside the ACP. This minimizes man-in-the-middle 3466 attacks by compromised ACP group members. Such attackers cannot 3467 eavesdrop on or modify communications; they can only filter them (which 3468 cannot be prevented by any means). 3470 Further security can be achieved by constraining communication 3471 patterns inside the ACP, for example through roles that could be 3472 encoded into the domain certificates. This is a subject for future 3473 work. 3475 9.3. The Administrator View 3477 An ACP is self-forming, self-managing and self-protecting, and therefore 3478 has minimal dependencies on the administrator of the network. 3479 Specifically, since it is independent of configuration, there is no 3480 scope for configuration errors on the ACP itself. The administrator 3481 may have the option to enable or disable the entire approach, but 3482 detailed configuration is not possible. This means that the ACP must 3483 not be reflected in the running configuration of nodes, except for a 3484 possible on/off switch. 3486 While configuration is not possible, an administrator must have full 3487 visibility of the ACP and all its parameters, to be able to do 3488 troubleshooting. Therefore, an ACP must support all show and debug 3489 options, as for any other network function. Specifically, a network 3490 management system or controller must be able to discover the ACP and 3491 monitor its health.
This visibility of ACP operations must clearly 3492 be separated from visibility of the Data-Plane so automated systems will 3493 never have to deal with ACP aspects unless they explicitly desire to 3494 do so. 3496 Since an ACP is self-protecting, a node not supporting the ACP, or 3497 without a valid domain certificate, cannot connect to it. This means 3498 that by default a traditional controller or network management system 3499 cannot connect to an ACP. See Section 8.1.1 for more details on how 3500 to connect an NMS host into the ACP. 3502 10. ACP Operations (Informative) 3504 The following sections document important operational aspects of the 3505 ACP. They are not normative because they do not impact the 3506 interoperability between components of the ACP, but they include 3507 recommendations/requirements for possible follow-up standards work 3508 such as operational YANG model definitions: 3510 o Section 10.1 describes recommended operator diagnostics 3511 capabilities of ACP nodes. They have been derived from the diagnostics 3512 of a commercially available ACP implementation. 3514 o Section 10.2 describes at a high level how an ACP registrar needs to 3515 work, what its configuration parameters are and specific issues 3516 impacting the choices of deployment design due to renewal and 3517 revocation issues. It describes a model where ACP Registrars have 3518 their own sub-CA to provide the most distributed deployment option 3519 for ACP Registrars, and it describes considerations for 3520 centralized policy control of ACP Registrar operations. 3522 o Section 10.3 describes suggested ACP node behavior and operational 3523 interfaces (configuration options) to manage the ACP in so-called 3524 greenfield devices (previously unconfigured) and brownfield 3525 devices (preconfigured). 3527 The recommendations and suggestions of this chapter were derived from 3528 operational experience gained with a commercially available pre- 3529 standard ACP implementation. 3531 10.1.
ACP (and BRSKI) Diagnostics 3533 Even though ACP and ANI in general eliminate many manual 3534 configuration mistakes through their automation, it is important to 3535 provide good diagnostics for them. 3537 The basis for diagnostics is support of (YANG) data models representing 3538 the complete (auto-)configuration and operational state of all 3539 components: BRSKI, GRASP, ACP and the infrastructure used by them: 3540 TLS/DTLS, IPsec, certificates, trust anchors, time, VRF and so on. 3541 While necessary, this is not sufficient: 3543 Simply representing the state of components does not allow operators 3544 to quickly take action - unless they understand how to interpret 3545 the data, and that can mean a requirement for deep understanding of 3546 all components and how they interact in the ACP/ANI. 3548 Diagnostics support should help to quickly answer the questions 3549 operators are expected to ask, such as "is the ACP working 3550 correctly?", or "why is there no ACP connection to a known 3551 neighboring node?" 3553 In current network management approaches, the logic to answer these 3554 questions is most often built as centralized diagnostics software 3555 that leverages the above-mentioned data models. While this approach 3556 is feasible for components utilizing the ANI, it is not sufficient to 3557 diagnose the ANI itself: 3559 o Developing the logic to identify common issues requires 3560 operational experience with the components of the ANI. Letting 3561 each management system define its own analysis is inefficient. As 3562 much as possible, future work should attempt to standardize data 3563 models that support common error diagnostics. 3565 o When the ANI is not operating correctly, it may not be possible to 3566 run diagnostics remotely because of missing connectivity. The 3567 ANI should therefore have diagnostic capabilities available 3568 locally on the nodes themselves.
3570 o Certain operations are difficult or impossible to monitor in real- 3571 time, such as initial bootstrap issues in a network location where 3572 no capabilities exist to attach local diagnostics. Therefore it 3573 is important to also define means of capturing (logging) 3574 diagnostics locally for later retrieval. Ideally, these captures 3575 are also non-volatile so that they can survive extended power-off 3576 conditions - for example when a device that fails to be brought up 3577 zero-touch is being sent back for diagnostics at a more 3578 appropriate location. 3580 The simplest form of diagnostics answering questions like the 3581 above is to represent the relevant information sequentially in 3582 dependency order, so that the first non-expected/non-operational item 3583 is the most likely root cause. Or just log/highlight that item. For 3584 example: 3586 Q: Is the ACP operational to accept neighbor connections: 3588 o Check if any potentially necessary configuration to make ACP/ANI 3589 operational is correct (see Section 10.3 for a discussion of such 3590 commands). 3592 o Does the system time look reasonable, or could it be the default 3593 system time after clock chip battery failure (certificate checks 3594 depend on a reasonable notion of time)? 3596 o Does the node have keying material - domain certificate, trust 3597 anchors? 3599 o If there is no keying material and ANI is supported/enabled, check the 3600 state of BRSKI (not detailed in this example). 3602 o Check the validity of the domain certificate: 3604 * Does the certificate authenticate against the trust anchor? 3606 * Has it been revoked? 3608 * Was the last scheduled attempt to retrieve a CRL successful 3609 (e.g., do we know that our CRL information is up to date)? 3611 * Is the certificate valid: validity start time in the past, 3612 expiration time in the future? 3614 * Does the certificate have a correctly formatted ACP information 3615 field? 3617 o Was the ACP VRF successfully created?
3619 o Is ACP enabled on one or more interfaces that are up and running? 3621 If all this looks good, the ACP should be running locally "fine" - 3622 but we did not check any ACP neighbor relationships. 3624 Question: why does the node not create a working ACP connection to a 3625 neighbor on an interface? 3627 o Is the interface physically up? Does it have an IPv6 link-local 3628 address? 3630 o Is it enabled for ACP? 3632 o Do we successfully send DULL GRASP messages to the interface (link 3633 layer errors)? 3635 o Do we receive DULL GRASP messages on the interface? If not, some 3636 intervening L2 equipment performing bad MLD snooping could have 3637 caused problems. Provide, e.g., diagnostics of the MLD querier's 3638 IPv6 and MAC address. 3640 o Do we see the ACP objective in any DULL GRASP message from that 3641 interface? Diagnose the supported secure channel methods. 3643 o Do we know the MAC address of the neighbor with the ACP objective? 3644 If not, diagnose SLAAC/ND state. 3646 o When did we last attempt to build an ACP secure channel to the 3647 neighbor? 3649 o If it failed, why: 3651 * Did the neighbor close the connection on us or did we close the 3652 connection on it because the domain certificate membership check 3653 failed? 3655 * If the neighbor closed the connection on us, provide any error 3656 diagnostics from the secure channel protocol. 3658 * If we failed the attempt, display our local reason: 3660 + There was no common secure channel protocol supported by the 3661 two neighbors (this could not happen on nodes supporting 3662 this specification because it mandates common support for 3663 IPsec). 3665 + The ACP domain certificate membership check (Section 6.1.2) 3666 fails: 3668 - The neighbor's certificate does not have the required 3669 trust anchor. Provide diagnostics of which trust anchor it 3670 has (this can identify whom the device belongs to). 3672 - The neighbor's certificate does not have the same domain 3673 (or no domain at all).
Diagnose the domain-name and 3674 potentially other cert info. 3676 - The neighbor's certificate has been revoked or could not 3677 be authenticated by OCSP. 3679 - The neighbor's certificate has expired - or is not yet 3680 valid. 3682 * Any other connection issues in, e.g., IKEv2 / IPsec, DTLS? 3684 Question: Is the ACP operating correctly across its secure channels? 3686 o Are there one or more active ACP neighbors with secure channels? 3688 o Is the RPL routing protocol for the ACP running? 3690 o Is there a default route to the root in the ACP routing table? 3692 o Is there a route in the ACP routing table for each direct ACP neighbor 3693 that is not reachable over the ACP virtual interface to the root? 3695 o Is ACP GRASP running? 3697 o Is at least one SRV.est objective cached (to support certificate 3698 renewal)? 3700 o Is there at least one BRSKI registrar objective cached (in case 3701 BRSKI is supported)? 3703 o Is the BRSKI proxy operating normally on all interfaces where ACP is 3704 operating? 3706 o ... 3708 These lists are not necessarily complete, but illustrate the 3709 principle and show that there is a variety of issues ranging from 3710 normal operational causes (a neighbor in another ACP domain) through 3711 problems in credentials management (certificate lifetimes) and 3712 explicit security actions (revocation) to unexpected connectivity 3713 issues (intervening L2 equipment). 3715 The items so far illustrate how the ANI operations can be 3716 diagnosed with passive observation of the operational state of its 3717 components including historic/cached/counted events. This is not 3718 necessarily sufficient to provide good enough diagnostics overall: 3720 The components of ACP and BRSKI are designed with security in mind 3721 but they do not attempt to provide diagnostics for building the 3722 network itself. Consider two examples: 3724 1. BRSKI does not allow for a neighboring device to identify the 3725 pledge's certificate (IDevID).
Only the selected BRSKI registrar 3726 can do this, but it may be difficult to disseminate information 3727 about undesired pledges from those BRSKI registrars to locations/ 3728 nodes where information about those pledges is desired. 3730 2. The Link Layer Discovery Protocol (LLDP, [LLDP]) disseminates 3731 information about nodes to their immediate neighbors, such as 3732 node model/type/software and interface name/number of the 3733 connection. This information is often helpful or even necessary 3734 in network diagnostics. It can equally be considered too 3735 insecure to make this information available unprotected to all 3736 possible neighbors. 3738 An "interested adjacent party" can always determine the IDevID of a 3739 BRSKI pledge by behaving like a BRSKI proxy/registrar. Therefore the 3740 IDevID of a BRSKI pledge is not meant to be protected - it just has 3741 to be queried and is not signaled unsolicited (as it would be in 3742 LLDP) so that other observers on the same subnet can determine who is 3743 an "interested adjacent party". 3745 Desirable options for additional diagnostics subject to future work 3746 include: 3748 1. Determine if LLDP should be a recommended functionality for ANI 3749 devices to improve diagnostics, and if so, which information 3750 elements it should signal (insecure). 3752 2. As an alternative to LLDP, a DULL GRASP diagnostics objective could 3753 be defined to carry these information elements. 3755 3. The IDevID of BRSKI pledges should be included in the selected 3756 insecure diagnostics option. 3758 4. A richer set of diagnostics information should be made available 3759 via the secured ACP channels, using either single-hop GRASP or 3760 network-wide "topology discovery" mechanisms. 3762 10.2.
ACP Registrars 3764 As described in Section 6.10.7, the ACP addressing mechanism is 3765 designed to enable lightweight, distributed and uncoordinated ACP 3766 registrars that are providing ACP address prefixes to candidate ACP 3767 nodes by enrolling them with an ACP domain certificate into an ACP 3768 domain via any appropriate mechanism/protocol, automated or not. 3770 This section informatively discusses more details and options for ACP 3771 registrars. 3773 10.2.1. Registrar interactions 3775 This section summarizes and discusses the interactions with other 3776 entities required by an ACP registrar. 3778 In a simple instance of an ACP network, no central NOC component 3779 beside a trust anchor (root CA) is required. One or more 3780 uncoordinated ACP registrars can be set up, performing the 3781 following interactions: 3783 To orchestrate enrolling a candidate ACP node autonomically, the ACP 3784 registrar can rely on the ACP and use Proxies to reach the candidate 3785 ACP node, therefore allowing minimal pre-existing (auto-)configured 3786 network services on the candidate ACP node. BRSKI defines the BRSKI 3787 proxy, a design that can be adopted for various protocols that 3788 Pledges/candidate ACP nodes could want to use, for example BRSKI over 3789 CoAP (Constrained Application Protocol), or proxying of Netconf. 3791 To reach a trust anchor unaware of the ACP, the ACP registrar would 3792 use the Data-Plane. ACP and Data-Plane in an ACP registrar could 3793 (and by default should) be completely isolated from each other at the 3794 network level. Only applications like the ACP registrar would need 3795 the ability for their transport stacks to access both. 3797 In non-autonomic enrollment options, the Data-Plane between an ACP 3798 registrar and the candidate ACP node needs to be configured first. 3799 This includes the ACP registrar and the candidate ACP node.
Then any 3800 appropriate set of protocols can be used between ACP registrar and 3801 candidate ACP node to discover the other side, and then connect and 3802 enroll (configure) the candidate ACP node with an ACP domain 3803 certificate. Netconf ZeroTouch ([I-D.ietf-netconf-zerotouch]) is an 3804 example protocol that could be used for this. BRSKI using optional 3805 discovery mechanisms is equally a possibility for candidate ACP nodes 3806 attempting to be enrolled across non-ACP networks, such as the 3807 Internet. 3809 When candidate ACP nodes have secure bootstrap, like BRSKI Pledges, 3810 they will not trust being configured/enrolled across the network, 3811 unless presented with a voucher (see [RFC8366]) authorizing the 3812 network to take possession of the node. An ACP registrar will then 3813 need a method to retrieve such a voucher, either offline or online 3814 from a MASA (Manufacturer Authorized Signing Authority). BRSKI and 3815 Netconf ZeroTouch are two protocols that include capabilities to 3816 present the voucher to the candidate ACP node. 3818 An ACP registrar could operate EST for ACP certificate renewal and/or 3819 act as a CRL Distribution point. A node performing these services 3820 does not need to support performing (initial) enrollment, but it does 3821 require the same connectivity as described above for an ACP registrar: 3822 via the ACP to ACP nodes and via the Data-Plane to the trust anchor 3823 and other sources of CRL information. 3825 10.2.2. Registrar Parameter 3827 The interactions of an ACP registrar outlined in Section 6.10.7 and 3828 Section 10.2.1 above depend on the following parameters: 3830 A URL to the trust anchor (root CA) and credentials so that the 3831 ACP registrar can let the trust anchor sign candidate ACP member 3832 certificates. 3834 The ACP domain-name. 3836 The Registrar-ID to use. This could default to a MAC address of 3837 the ACP registrar.
For recovery, the next usable Node-IDs for the zone (Zone-ID=0) sub-addressing scheme and for the Vlong /112 and Vlong /120 sub-addressing schemes. These IDs would only need to be provisioned after recovering from a crash. Some other mechanism would be required to remember these IDs in a backup location or to recover them from the set of currently known ACP nodes.

Policies whether candidate ACP nodes should receive a domain certificate or not, for example based on the device's LDevID as in BRSKI. The ACP registrar may have a whitelist or blacklist of device serialNumbers from their LDevIDs.

Policies what type of address prefix to assign to a candidate ACP node, likely based on the same information.

For BRSKI or other mechanisms using vouchers: parameters to determine how to retrieve vouchers for specific types of secure bootstrap candidate ACP nodes (such as MASA URLs), unless this information is automatically learned, such as from the LDevID of candidate ACP nodes (as defined in BRSKI).

10.2.3. Certificate renewal and limitations

When an ACP node renews/rekeys its certificate, it may end up doing so via a different registrar (e.g., EST server) than the one from which it originally received its ACP domain certificate, for example because that original ACP registrar is gone. The ACP registrar through which the renewal/rekeying is performed would by default trust the ACP domain information from the ACP node's current ACP domain certificate and maintain this information so that the ACP node maintains its ACP address prefix. In EST renewal/rekeying, the ACP node's current ACP domain certificate is signaled during the TLS handshake.

This simple scenario has two limitations:

1. The ACP registrars can not directly assign certificates to nodes and therefore need an "online" connection to the trust anchor (root CA).

2.
Recovery from a compromised ACP registrar is difficult. When an ACP registrar is compromised, it can for example insert conflicting ACP domain information and thereby create an attack against other ACP nodes through the ACP routing protocol.

Even when such a malicious ACP registrar is detected, resolving the problem may be difficult because it would require identifying all the wrong ACP domain certificates assigned via the ACP registrar after it was compromised. And without additional centralized tracking of assigned certificates there is no way to do this - assuming one can not retrieve this information from the (compromised) ACP registrar itself.

10.2.4. ACP Registrars with sub-CA

In situations where either of the above two limitations is an issue, ACP registrars could also be sub-CAs. This removes the need for connectivity to a root CA whenever an ACP node is enrolled, and reduces the need for connectivity of such an ACP registrar to a root CA to only those times when it needs to renew its own certificate. The ACP registrar would also now use its own (sub-CA) certificate to enroll and sign the ACP nodes' certificates, and therefore it is only necessary to revoke a compromised ACP registrar's sub-CA certificate. Alternatively, when the certificate of the sub-CA is appropriately short-lived, it can simply be allowed to expire without renewal.

Because the ACP domain membership check verifies a peer ACP node's ACP domain certificate trust chain, it will also verify the signing certificate, which is the compromised/revoked sub-CA certificate. Therefore, ACP domain membership for an ACP node enrolled from a compromised ACP registrar will fail.
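The effect described above - revoking a single sub-CA certificate invalidates the domain membership of every node that registrar enrolled - can be sketched with a toy trust-chain check. The class and function names below are hypothetical; this is an illustration, not the normative ACP domain membership check or X.509 path validation.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Cert:
    subject: str
    issuer: str

def membership_check(chain, trust_anchor, revoked):
    """Walk a certificate chain (node cert first) up to the trust
    anchor, failing if any certificate in the chain - including a
    sub-CA used by an ACP registrar - is revoked or does not match
    its signer."""
    for cert, signer in zip(chain, chain[1:] + [trust_anchor]):
        if cert.subject in revoked or signer.subject in revoked:
            return False
        if cert.issuer != signer.subject:
            return False  # broken chain
    return True
```

With a node certificate issued by a sub-CA "registrar-1", revoking just "registrar-1" makes the check fail for every node that the compromised registrar enrolled, without enumerating those node certificates individually.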
ACP nodes enrolled by a compromised ACP registrar would automatically fail to establish ACP channels and ACP domain certificate renewal via EST and would therefore revert to their role as candidate ACP members and attempt to get a new ACP domain certificate from an ACP registrar - for example via BRSKI. As a result, ACP registrars that have an associated sub-CA make isolating and resolving issues with compromised registrars easier.

Note that ACP registrars with sub-CA functionality can also control the lifetime of ACP domain certificates more easily and can therefore be used as a tool to introduce short-lived certificates without relying on CRLs, whereas the certificates for the sub-CAs themselves could be longer-lived and subject to CRLs.

10.2.5. Centralized Policy Control

When using multiple, uncoordinated ACP registrars, several advanced operations are potentially more complex than with a single, resilient policy control backend, for example including but not limited to:

Which candidate ACP node is permitted or not permitted into an ACP domain. This may not be a decision that can be taken upfront, so that a per-serialNumber policy could be loaded into every ACP registrar. Instead, it may better be decided in real-time, potentially including a human decision in a NOC.

Tracking of all enrolled ACP nodes and their certificate information, for example in support of revoking individual ACP nodes' certificates.

More flexible policies about what type of address prefix or even what specific address prefix to assign to a candidate ACP node.

These and other operations could be introduced more easily by introducing a centralized Policy Management System (PMS) and modifying ACP registrar behavior so that it queries the PMS for any policy decision occurring during the candidate ACP node enrollment process and/or the ACP node certificate renewal process.
For example, which ACP address prefix to assign. Likewise, the ACP registrar would report any relevant state change information to the PMS as well, for example when a certificate was successfully enrolled onto a candidate ACP node. Such an ACP registrar PMS interface definition is subject to future work.

10.3. Enabling and disabling ACP/ANI

Both ACP and BRSKI require interfaces to be operational enough to support sending/receiving their packets. In node types where interfaces are enabled by default (e.g., without operator configuration), such as most L2 switches, this would be less of a change in behavior than in most L3 devices (e.g., routers), where interfaces are disabled by default. In almost all network devices it is common though for configuration to change interfaces to a physically disabled state, and that would break the ACP.

In this section, we discuss a suggested operational model to enable/disable interfaces and nodes for ACP/ANI in a way that minimizes the risk of operator actions breaking the ACP in this way, and that also minimizes operator surprise when ACP/ANI becomes supported in node software.

10.3.1. Filtering for non-ACP/ANI packets

Whenever this document refers to enabling an interface for ACP (or BRSKI), it only requires permitting the interface to send/receive packets necessary to operate ACP (or BRSKI) - but not any other Data-Plane packets. Unless the Data-Plane is explicitly configured/enabled, all packets not required for ACP/BRSKI should be filtered on input and output:

Both BRSKI and ACP require IPv6 link-local-only operations on interfaces and DULL GRASP.
IPv6 link-local operations means the minimum signaling to auto-assign an IPv6 link-local address and talk to neighbors via their link-local addresses: SLAAC (Stateless Address Auto-Configuration - [RFC4862]) and ND (Neighbor Discovery - [RFC4861]). When the device is a BRSKI pledge, it may also require TCP/TLS connections to BRSKI proxies on the interface. When the device has keying material, and the ACP is running, it requires DULL GRASP packets and packets necessary for the secure-channel mechanism it supports, e.g., IKEv2 and IPsec ESP packets or DTLS packets to the IPv6 link-local address of an ACP neighbor on the interface. It also requires TCP/TLS packets for its BRSKI proxy functionality, if it does support BRSKI.

10.3.2. Admin Down State

Interfaces on most network equipment have at least two states: "up" and "down". These may have product-specific names; "down" for example could be called "shutdown" and "up" could be called "no shutdown". The "down" state disables all interface operations down to the physical level. The "up" state enables the interface enough for all possible L2/L3 services to operate on top of it, and it may also auto-enable some subset of them. More commonly, the operation of various L2/L3 services is controlled via additional node-wide or interface-level options, but they all become active only when the interface is not "down". Therefore an easy way to ensure that all L2/L3 operations on an interface are inactive is to put the interface into "down" state. The fact that this also physically shuts down the interface is in many cases just a side effect, but it may be important in other cases (see below).
To provide ACP/ANI resilience against operators configuring interfaces to "down" state, this document recommends separating the "down" state of interfaces into an "admin down" state, where the physical layer is kept running and ACP/ANI can use the interface, and a "physical down" state. Any existing "down" configurations would map to "admin down". In "admin down", any existing L2/L3 services of the Data-Plane should see no difference to the "physical down" state. To ensure that no Data-Plane packets could be sent/received, packet filtering could be established automatically as described above in Section 10.3.1.

As necessary (see discussion below), new configuration options could be introduced to issue "physical down". Those options should be provided with additional checks to minimize the risk of issuing them in a way that breaks the ACP without automatic restoration. For example, they could be denied when issued from a control connection (netconf/ssh) that goes across the interface itself ("do not disconnect yourself"). Or they could be performed only temporarily and only be made permanent with an additional later reconfirmation.

In the following subsections, important aspects of the introduction of the "admin down" state are discussed.

10.3.2.1. Security

Interfaces are physically brought down (or left in the default down state) as a form of security. The "admin down" state as described above also provides a high level of security because it only permits ACP/ANI operations, which are both well secured. Ultimately, it is subject to security review for the deployment whether "admin down" is a feasible replacement for "physical down".
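The packet filtering on which "admin down" relies (Section 10.3.1) could be sketched as follows. The packet representation and field names are illustrative simplifications for this sketch, not a filtering API; only the GRASP port number (7017, per the GRASP specification) is taken from outside this document.

```python
# Toy classifier for an "admin down" interface: permit only traffic
# that ACP/ANI itself needs; everything else is Data-Plane and dropped.
ACP_SECURE_CHANNEL = {"IKEv2", "ESP", "DTLS"}  # candidate secure channels

def permit_in_admin_down(pkt, has_keying_material):
    link_local = pkt["dst"].startswith("fe80:")
    if pkt["proto"] == "ICMPv6":
        return True                       # SLAAC and Neighbor Discovery
    if pkt["proto"] == "UDP" and pkt.get("port") == 7017 and link_local:
        return True                       # DULL GRASP announcements
    if has_keying_material and pkt["proto"] in ACP_SECURE_CHANNEL and link_local:
        return True                       # ACP secure channel to a neighbor
    if pkt["proto"] == "TCP":
        return True                       # BRSKI pledge/proxy TLS (could be narrowed)
    return False
```

Note that a real implementation would filter at line rate and distinguish ND/SLAAC message types within ICMPv6; the point here is only that the permitted set is small, link-local and well secured.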
The need to trust the security of ACP/ANI operations needs to be weighed against the operational benefits of permitting this: Consider the typical example of a CPE (customer premises equipment) with no on-site network expert. User ports are in physical down state unless explicitly configured not to be. In a misconfiguration situation, the uplink connection is incorrectly plugged into such a user port. The device is disconnected from the network and therefore no diagnostics from the network side are possible anymore. Alternatively, if all ports default to "admin down", the ACP (but not the Data-Plane) would still automatically form. Diagnostics from the network side are possible, and operator reaction could include either making this port the operational uplink port or instructing re-cabling. Security-wise, only ACP/ANI could be attacked; all other functions are filtered on interfaces in "admin down" state.

10.3.2.2. Fast state propagation and Diagnostics

The "physical down" state propagates on many interface types (e.g., Ethernet) to the other side. This can trigger fast L2/L3 protocol reactions on the other side, and "admin down" would not have the same (fast) result.

Bringing interfaces to "physical down" state is, to the best of our knowledge, always a result of operator action, but today never the result of (autonomic) L2/L3 services running on the nodes. Therefore, one option is to change the operator action to not rely on link-state propagation anymore. This may not be possible when both sides are under different operator control, but in that case it is unlikely that the ACP is running across the link, and actually putting the interface into "physical down" state may still be a good option.

Ideally, fast physical state propagation is replaced by fast software-driven state propagation.
For example, a DULL GRASP "admin-state" objective could be used to auto-configure a Bidirectional Forwarding Detection (BFD, [RFC5880]) session between the two sides of the link that would be used to propagate the "up" vs. "admin down" state.

Triggering the physical down state may also be used as a means of diagnosing cabling in the absence of easier methods. It is more complex than automated neighbor diagnostics because it requires coordinated remote access to both (likely) sides of a link to determine whether up/down toggling will cause the same reaction on the remote side.

See Section 10.1 for a discussion about how LLDP and/or diagnostics via GRASP could be used to provide neighbor diagnostics, hopefully eliminating the need for "physical down" for neighbor diagnostics - as long as both neighbors support ACP/ANI.

10.3.2.3. Low Level Link Diagnostics

"Physical down" is performed to diagnose low-level interface behavior when higher-layer services (e.g., IPv6) are not working. Especially Ethernet links are subject to a wide variety of possible wrong configurations/cablings if they do not support automatic selection of variable parameters such as speed (10/100/1000 Mbps), crossover (Auto-MDIX) and connector (fiber, copper - when interfaces have multiple but can only enable one at a time). The need for low-level link diagnostics can therefore be minimized by using fully auto-configuring links.

In addition to "physical down", low-level diagnostics of Ethernet or other interfaces also involve the creation of other states on interfaces, such as physical loopback (internal and/or external) or bringing down all packet transmissions for reflection/cable-length measurements. Any of these options would disrupt the ACP as well.
In cases where such low-level diagnostics of an operational link are desired but where the link could be a single point of failure for the ACP, ASAs on both nodes of the link could perform negotiated diagnostics that automatically terminate in a predetermined manner without dependence on external input, ensuring the link will become operational again.

10.3.2.4. Power Consumption

The power consumption of "physical down" interfaces may be significantly lower than that of interfaces in "admin down" state, for example on long-range fiber interfaces. Assuming reasonable clocks on devices, mechanisms for infrequent periodic probing could still allow ACP connectivity to be established automatically across such links. For example, bringing up an interface for 5 seconds every 500 seconds to probe whether there is an ACP neighbor on the remote end results in 1% of the "up" power consumption.

10.3.3. Interface level ACP/ANI enable

The interface-level configuration option "ACP enable" enables ACP operations on an interface, starting with ACP neighbor discovery via DULL GRASP. The interface-level configuration option "ANI enable" on nodes supporting BRSKI and ACP starts with BRSKI pledge operations when there is no domain certificate on the node. On ACP/BRSKI nodes, "ACP enable" may not need to be supported, but only "ANI enable". Unless overridden by global configuration options (see later), "ACP/ANI enable" will result in the "down" state on an interface behaving as "admin down".

10.3.4. Which interfaces to auto-enable?

Section 6.3 requires that "ACP enable" is automatically set on native interfaces, but not on non-native interfaces (reminder: a native interface is one that exists without operator configuration action, such as physical interfaces in physical devices).
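The default-enablement rule just stated can be sketched as a simple decision function. The interface attributes used here are hypothetical, chosen only to make the rule explicit:

```python
def default_acp_enable(interface):
    """Sketch of the auto-enable rule: set "ACP enable" automatically
    only on native interfaces; non-native interfaces (e.g., configured
    tunnels) require an explicit operator decision made in conjunction
    with the mechanism that created the interface."""
    if interface["native"]:  # e.g., a physical port on a physical device
        return True
    return interface.get("acp_enable_configured", False)
```

The asymmetry is deliberate: native interfaces are the best automatic approximation of "useful for the ACP", while for non-native interfaces no generic automatic answer exists (see the discussion that follows).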
Ideally, "ACP enable" would be set automatically on all interfaces that provide access to additional connectivity that allows reaching more nodes of the ACP domain. The best set of interfaces necessary to achieve this is not possible to determine automatically. Native interfaces are the best automatic approximation.

Consider an ACP domain of ACP nodes transitively connected via native interfaces. A Data-Plane tunnel between two of these nodes that are non-adjacent is created and "ACP enable" is set for that tunnel. ACP RPL sees this tunnel as just a single hop. Routes in the ACP would use this hop as an attractive path element to connect regions adjacent to the tunnel nodes. As a result, the actual hop-by-hop paths used by traffic in the ACP can become worse. In addition, correct forwarding in the ACP now depends on correct Data-Plane forwarding configuration, including QoS, filtering and other security on the Data-Plane path across which this tunnel runs. This is the main reason why "ACP/ANI enable" should not be set automatically on non-native interfaces.

If the tunnel would connect two previously disjoint ACP regions, then it likely would be useful for the ACP. A Data-Plane tunnel could also run across nodes without ACP and provide additional connectivity for an already connected ACP network. The benefit of this additional ACP redundancy has to be weighed against the problems of relying on the Data-Plane. If a tunnel connects two separate ACP regions: how many tunnels should be created to connect these ACP regions reliably enough? Between which nodes? These are all standard tunneled network design questions not specific to the ACP, and there are no generic fully automated answers.
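The path-attraction effect described above can be illustrated with a toy shortest-path computation over a hypothetical five-node chain (topology and names invented for this sketch):

```python
from collections import deque

def hops(graph, src, dst):
    """Hop count of the shortest path, via breadth-first search."""
    seen, queue = {src}, deque([(src, 0)])
    while queue:
        node, dist = queue.popleft()
        if node == dst:
            return dist
        for nxt in graph[node]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, dist + 1))

# Native links form the chain A - B - C - D - E.
native = {"A": {"B"}, "B": {"A", "C"}, "C": {"B", "D"},
          "D": {"C", "E"}, "E": {"D"}}

# An "ACP enabled" Data-Plane tunnel A - E appears as one routed hop.
tunneled = {node: set(peers) for node, peers in native.items()}
tunneled["A"].add("E")
tunneled["E"].add("A")

# A -> E shrinks from 4 routed hops to 1, so routing attracts traffic
# onto the tunnel - even though the tunnel still crosses the whole
# Data-Plane underneath and depends on its forwarding configuration.
```

The routed metric improves, but the dependency on the Data-Plane path (and its QoS/filtering configuration) is invisible to the routing protocol, which is exactly the hazard the text describes.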
Instead of automatically setting "ACP enable" on these types of interfaces, the decision needs to be based on the use purpose of the non-native interface, and "ACP enable" needs to be set in conjunction with the mechanism through which the non-native interface is created/configured.

In addition to explicit setting of "ACP/ANI enable", non-native interfaces also need to support configuration of the ACP RPL cost of the link - to avoid the problems of attracting too much traffic to the link as described above.

Even native interfaces may not be able to automatically perform BRSKI or ACP because they may require additional operator input to become operational. Examples include DSL interfaces requiring PPPoE credentials or mobile interfaces requiring credentials from a SIM card. Whatever mechanism is used to provide the necessary configuration to the device to enable the interface can also be expanded to decide whether or not to set "ACP/ANI enable".

The goal of automatically setting "ACP/ANI enable" on interfaces (native or not) is to eliminate unnecessary "touches" to the node to make its operation as zero-touch as possible with respect to ACP/ANI. If there are "unavoidable touches" such as creating/configuring a non-native interface or provisioning credentials for a native interface, then "ACP/ANI enable" should be added as an option to that "touch". If a wrong "touch" is easily fixed (not creating another high-cost touch), then the default should be to not enable ANI/ACP, and if it is potentially expensive or slow to fix (e.g., parameters on a SIM card shipped to a remote location), then the default should be to enable ACP/ANI.

10.3.5. Node Level ACP/ANI enable

A node-level command "ACP/ANI enable [up-if-only]" enables ACP or ANI on the node (ANI = ACP + BRSKI).
Without this command set, any interface-level "ACP/ANI enable" is ignored. Once set, ACP/ANI will operate on interfaces where "ACP/ANI enable" is set. Setting of interface-level "ACP/ANI enable" is either automatic (default) or explicit through operator action as described in the previous section.

If the option "up-if-only" is selected, the behavior of "down" interfaces is unchanged, and ACP/ANI will only operate on interfaces where "ACP/ANI enable" is set and that are "up". When it is not set, the "down" state of interfaces with "ACP/ANI enable" is modified to behave as "admin down".

10.3.5.1. Brownfield nodes

A "brownfield" node is one that already has a configured Data-Plane.

Executing the global "ACP/ANI enable [up-if-only]" command on each node is the only command necessary to create an ACP across a network of brownfield nodes once all the nodes have a domain certificate. When BRSKI is used ("ANI enable"), provisioning of the certificates only requires the set-up of a single BRSKI registrar node, which could also implement a CA for the network. This is the simplest way to introduce ACP/ANI into existing (== brownfield) networks.

The need to explicitly enable ACP/ANI is especially important in brownfield nodes because otherwise software updates may introduce support for ACP/ANI: automatic enablement of ACP/ANI in networks where the operator not only does not want ACP/ANI but likely has never even heard of it could be quite irritating to the operator, especially when the "down" behavior is changed to "admin down".

Automatically setting "ANI enable" on brownfield nodes where the operator is unaware of it could also be a critical security issue, depending on the vouchers used by BRSKI on these nodes. An attacker could claim to be the owner of these devices and create an ACP that the attacker has access to and control over.
In networks where the operator explicitly wants to enable the ANI, this could not happen, because the operator would create a BRSKI registrar that would discover attack attempts. Nodes requiring "ownership vouchers" would not be subject to that attack. See [I-D.ietf-anima-bootstrapping-keyinfra] for more details. Note that a global "ACP enable" alone is not subject to these types of attacks, because it always depends on some other mechanism first to provision domain certificates into the device.

10.3.5.2. Greenfield nodes

A "greenfield" node is one that did not have any prior configuration.

For greenfield nodes, only "ANI enable" is relevant. If a mechanism other than BRSKI is used to (zero-touch) bootstrap a node, then it is up to that mechanism to provision domain certificates and to set the global "ACP enable" as desired.

Nodes supporting full ANI functionality set "ANI enable" automatically when they decide that they are greenfield, e.g., that they are powering on from factory condition. They will then put all native interfaces into "admin down" state and start to perform BRSKI pledge functionality - and once a domain certificate is enrolled, they automatically enable the ACP.

Attempts at BRSKI pledge operations in the greenfield state should terminate automatically when another method of configuring the node is used. Methods that indicate some form of physical possession of the device, such as configuration via the serial console, could lead to immediate termination of BRSKI, while other parallel auto-configuration methods subject to remote attacks might lead to BRSKI termination only after they were successful. Details of this may vary widely across different types of nodes.
When BRSKI pledge operation terminates, this will automatically unset "ANI enable" and should terminate any temporarily needed state on the device used to perform BRSKI - DULL GRASP, BRSKI pledge and any IPv6 configuration on interfaces.

10.3.6. Undoing ANI/ACP enable

Disabling ANI/ACP by undoing "ACP/ANI enable" is a risk for the reliable operation of the ACP if it can be executed by mistake or without authorization. This behavior could be influenced through some additional property in the certificate (e.g., in the domain information extension field), subject to future work: in an ANI deployment intended for convenience, disabling it could be allowed without further constraints. In an ANI deployment considered to be critical, more checks would be required. One very controlled option would be to not permit these commands unless the domain certificate has been revoked or is denied renewal. Configuring this option would be a parameter on the BRSKI registrar(s). As long as the node did not receive a domain certificate, undoing "ANI/ACP enable" should not have any additional constraints.

10.3.7. Summary

The node-wide "ACP/ANI enable [up-if-only]" command enables the operation of ACP/ANI. This is only auto-enabled on ANI greenfield devices; otherwise it must be configured explicitly.

If the option "up-if-only" is not selected, interfaces enabled for ACP/ANI interpret the "down" state as "admin down" and not "physical down". In "admin down", all non-ACP/ANI packets are filtered, but the physical layer is kept running to permit ACP/ANI to operate.

(New) commands that result in physical interruption ("physical down", "loopback") of ACP/ANI enabled interfaces should be built to protect continuance or reestablishment of the ACP as much as possible.

Interface-level "ACP/ANI enable" controls per-interface operations.
It is enabled by default on native interfaces and has to be configured explicitly on other interfaces.

Disabling "ACP/ANI enable" globally and per-interface should involve additional checks to minimize undesired breakage of the ACP. The degree of control could be a domain-wide parameter in the domain certificates.

11. Background and Futures (Informative)

The following sections discuss additional background information about aspects of the normative parts of this document or associated mechanisms such as BRSKI (such as why specific choices were made by the ACP), and they provide discussion about possible future variations of the ACP.

11.1. ACP Address Space Schemes

This document defines the Zone, Vlong and Manual sub-address schemes primarily to support address prefix assignment via distributed, potentially uncoordinated ACP registrars as defined in Section 6.10.7. This costs a 48/46-bit identifier so that these ACP registrars can assign non-conflicting address prefixes. This design does not leave enough bits to simultaneously support a large number of nodes (Node-ID) plus a large prefix of local addresses for every node plus a large enough set of bits to identify a routing Zone. As a result, the Zone and Vlong 8/16 sub-schemes attempt to support all features, but via separate prefixes.

In networks that always expect to rely on a centralized PMS as described above (Section 10.2.5), the 48/46 bits for the Registrar-ID could be saved. Such variations of the ACP addressing mechanisms could be introduced through future work in different ways. If the prefix rfcSELF in the ACP information field was changed, incompatible ACP variations could be created where every design aspect of the ACP could be changed, including all addressing choices.
If instead a new addressing sub-type were defined, it could be a backward-compatible extension of this ACP specification. Information such as the size of a zone-prefix and the length of the prefix assigned to the ACP node itself could be encoded via the extension field of the ACP domain information.

Note that an explicitly defined "Manual" addressing sub-scheme is always beneficial to provide an easy way for ACP nodes to prohibit incorrect manual configuration of any non-"Manual" ACP address spaces and therefore to ensure that "Manual" operations will never impact correct routing for any non-"Manual" ACP addresses assigned via ACP domain certificates.

11.2. BRSKI Bootstrap (ANI)

[I-D.ietf-anima-bootstrapping-keyinfra] (BRSKI) describes how nodes with an IDevID certificate can securely and zero-touch enroll with a domain certificate (LDevID) to support the ACP. BRSKI also leverages the ACP to enable zero-touch bootstrap of new nodes across networks without any configuration requirements across the transit nodes (e.g., no DHCP/DNS forwarding/server setup). This includes otherwise unconfigured networks as described in Section 3.2. Therefore, BRSKI in conjunction with ACP provides a secure and zero-touch management solution for complete networks. Nodes supporting such an infrastructure (BRSKI and ACP) are called ANI nodes (Autonomic Networking Infrastructure), see [I-D.ietf-anima-reference-model]. Nodes that do not support an IDevID but only an (insecure) vendor-specific Unique Device Identifier (UDI), or nodes whose manufacturer does not support a MASA, could use some future security-reduced version of BRSKI.
When BRSKI is used to provision a domain certificate (which is called enrollment), the BRSKI registrar (acting as an enhanced EST server) must include the subjectAltName / rfc822Name encoded ACP address and domain name to the enrolling node (called pledge) via its response to the pledge's EST CSR Attribute request that is mandatory in BRSKI.

The Certificate Authority in an ACP network must not change the subjectAltName / rfc822Name in the certificate. The ACP nodes can therefore find their ACP address and domain using this field in the domain certificate, both for themselves as well as for other nodes.

The use of BRSKI in conjunction with the ACP can also help to further simplify maintenance and renewal of domain certificates. Instead of relying on CRLs, the lifetime of certificates can be made extremely short, for example on the order of hours. When a node fails to connect to the ACP within its certificate lifetime, it cannot connect to the ACP to renew its certificate across it (using just EST), but it can still renew its certificate as an "enrolled/expired pledge" via the BRSKI bootstrap proxy. This requires only that the BRSKI registrar honors expired domain certificates and that the pledge first attempts to perform TLS authentication for BRSKI bootstrap with its expired domain certificate - and only reverts to its IDevID when this fails. This mechanism could also render CRLs unnecessary, because the BRSKI registrar in conjunction with the CA would not renew revoked certificates - only a "do-not-renew" list would be necessary on the BRSKI registrars/CA.

In the absence of BRSKI or less secure variants thereof, provisioning of certificates may involve one or more touches or non-standardized automation.
Node vendors usually support provisioning of certificates into nodes via PKCS#7 (see [RFC2315]) and may support this provisioning through vendor-specific models via Netconf ([RFC6241]). If such nodes also support Netconf Zero-Touch ([I-D.ietf-netconf-zerotouch]), then this can be combined into zero-touch provisioning of domain certificates into nodes. Unless there is an equivalent integration of Netconf connections across the ACP as there is in BRSKI, this combination would not support zero-touch bootstrap across an unconfigured network though.

11.3. ACP Neighbor discovery protocol selection

This section discusses why GRASP DULL was chosen as the discovery protocol for L2-adjacent candidate ACP neighbors. The contenders considered were GRASP, mDNS and LLDP.

11.3.1. LLDP

LLDP and Cisco's earlier Cisco Discovery Protocol (CDP) are examples of L2 discovery protocols that terminate their messages on L2 ports. If those protocols were chosen for ACP neighbor discovery, ACP neighbor discovery would therefore also terminate on L2 ports. This would prevent ACP construction over non-ACP-capable but LLDP- or CDP-enabled L2 switches. LLDP has extensions using different MAC addresses, and this could have been an option for ACP discovery as well, but the additional required IEEE standardization and definition of a profile for such a modified instance of LLDP seemed to be more work than the benefit of "reusing the existing protocol" LLDP for this very simple purpose.

11.3.2. mDNS and L2 support

Multicast DNS (mDNS) [RFC6762] with DNS Service Discovery (DNS-SD) Resource Records (RRs) as defined in [RFC6763] is a key contender as an ACP discovery protocol. Because it relies on link-local IP multicast, it does operate at the subnet level and is also found in L2 switches.
The authors of this document are not aware of mDNS implementations that terminate their mDNS messages on L2 ports instead of at the subnet level.  If mDNS were used as the ACP discovery mechanism on an ACP capable (L3)/L2 switch as outlined in Section 7, then this termination would be necessary to implement.  It is likely that termination of mDNS messages could only be applied to all mDNS messages from such a port, which would then make it necessary to software forward any non-ACP related mDNS messages to maintain prior non-ACP mDNS functionality.  Adding support for ACP into such L2 switches with mDNS could therefore create regression problems for prior mDNS functionality on those nodes.  Given the low performance of software forwarding in many L2 switches, this could also make the ACP risky to support on such L2 switches.

11.3.3.  Why DULL GRASP

LLDP was not considered because of the above mentioned issues.  mDNS was not selected because of the above L2 mDNS considerations and because of the following additional points:

If mDNS did not already exist in a node, it would be more work to implement than DULL GRASP, and if an existing implementation of mDNS was used, it would likely consume more code space than a separate implementation of DULL GRASP or a shared implementation of DULL GRASP and GRASP in the ACP.

11.4.  Choice of routing protocol (RPL)

This section motivates why RPL, the "IPv6 Routing Protocol for Low-Power and Lossy Networks" ([RFC6550]), was chosen as the default (and in this specification only) routing protocol for the ACP.  The choice and above explained profile were derived from a pre-standard implementation of ACP that was successfully deployed in operational networks.

Requirements for routing in the ACP are:

o  Self-management: The ACP must build automatically, without human intervention.
Therefore the routing protocol must also work completely automatically.  RPL is a simple, self-managing protocol, which does not require zones or areas; it is also self-configuring, since configuration is carried as part of the protocol (see Section 6.7.6 of [RFC6550]).

o  Scale: The ACP builds over an entire domain, which could be a large enterprise or service provider network.  The routing protocol must therefore support domains of 100,000 nodes or more, ideally without the need for zoning or separation into areas.  RPL has this scale property.  This is based on extensive use of default routing.  RPL also has other scalability improvements, such as selecting only a subset of peers instead of all possible ones, and trickle support for information synchronization.

o  Low resource consumption: The ACP supports traditional network infrastructure and thus runs in addition to traditional protocols.  The ACP, and specifically the routing protocol, must have low resource consumption in terms of both memory and CPU requirements.  Specifically, at edge nodes, where memory and CPU are scarce, consumption should be minimal.  RPL builds a destination-oriented directed acyclic graph (DODAG), where the main resource consumption is at the root of the DODAG.  The closer to the edge of the network, the less state needs to be maintained.  This adapts nicely to the typical network design.  Also, all changes below a common parent node are kept below that parent node.

o  Support for unstructured address space: In the Autonomic Networking Infrastructure, node addresses are identifiers and may not be assigned in a topological way.  Also, nodes may move topologically without changing their address.  Therefore, the routing protocol must support a completely unstructured address space.
RPL is specifically made for mobile ad-hoc networks, with no assumptions about topologically aligned addressing.

o  Modularity: To keep the initial implementation small, yet allow for more complex methods later, it is highly desirable that the routing protocol have a simple base functionality but be able to import new functional modules if needed.  RPL has this property with the concept of an "objective function", which is a plugin to modify routing behavior.

o  Extensibility: Since the Autonomic Networking Infrastructure is a new concept, it is likely that changes in the way of operation will happen over time.  RPL allows for new objective functions to be introduced later, which allow changes to the way the routing protocol creates the DAGs.

o  Multi-topology support: It may become necessary in the future to support more than one DODAG for different purposes, using different objective functions.  RPL allows for the creation of several parallel DODAGs, should this be required.  This could be used to create different topologies to reach different roots.

o  No need for path optimization: RPL does not necessarily compute the optimal path between any two nodes.  However, the ACP does not require this today, since it carries mainly non-delay-sensitive feedback loops.  It is possible that different optimization schemes become necessary in the future, but RPL can be expanded (see point "Extensibility" above).

11.5.  ACP Information Distribution and multicast

IP multicast is not used by the ACP because the ANI (Autonomic Networking Infrastructure) itself does not require IP multicast but only service announcement/discovery.
Using IP multicast for that would have made it necessary to develop a zero-touch auto-configuring solution for ASM (Any Source Multicast, the original form of IP multicast defined in [RFC1112]), which would be quite complex and difficult to justify.  One aspect of complexity for which no attempt at a solution has been described in IETF documents is the automatic selection of routers that should be PIM Sparse Mode (PIM-SM) Rendezvous Points (RPs) (see [RFC7761]).  The other aspects of complexity are the implementation of MLD ([RFC4604]), PIM-SM and Anycast-RP (see [RFC4610]).  If those implementations already exist in a product, then they would very likely be tied to accelerated forwarding, which consumes hardware resources, and that in turn is difficult to justify as a cost of performing only service discovery.

Some future ASA may need high performance in-network data replication.  That is the case when the use of IP multicast is justified.  Such an ASA can then use service discovery from ACP GRASP, and then it does not need ASM but only SSM (Source Specific Multicast, see [RFC4607]) for the IP multicast replication.  SSM itself can simply be enabled in the Data-Plane (or even in an update to the ACP) without any configuration other than enabling it on all nodes, and it only requires a simpler version of MLD (see [RFC5790]).

LSP (Link State Protocol) based IGP routing protocols typically have a mechanism to flood information, and such a mechanism could be used to flood GRASP objectives by defining them to be information of that IGP.  This would be a possible optimization in future variations of the ACP that do use an LSP routing protocol.
Note though that such a mechanism would not work easily for GRASP M_DISCOVERY messages, which are intelligently (constrained) flooded not across the whole ACP, but only up to a node where a responder is found.  We do expect that many future services in ASA will have only few consuming ASA, and for those cases, M_DISCOVERY is the more efficient method than flooding across the whole domain.

Because the ACP uses RPL, one desirable future extension is to use RPL's existing notion of loop-free distribution trees (DODAG) to make GRASP's flooding more efficient (both for M_FLOOD and M_DISCOVERY).  See Section 6.12.5 for how this will be specifically beneficial when using NBMA interfaces.  This is not currently specified in this document because it is not quite clear yet what exactly the implications are of making GRASP flooding depend on RPL DODAG convergence and how difficult it would be to let GRASP flooding access the DODAG information.

11.6.  Extending ACP channel negotiation (via GRASP)

The mechanism described in the normative part of this document to support multiple different ACP secure channel protocols without a single network wide MTI protocol is important to allow extending secure ACP channel protocols beyond what is specified in this document, but it will run into problems if it is used for multiple protocols:

The need to potentially have multiple of these security associations run in parallel, even temporarily, to determine which of them works best does not support the most lightweight implementation options.

The simple policy of letting one side (Alice) decide what is best may not lead to the mutually best result.
These two limitations could be solved more easily if the solution were more modular: as few as possible initial secure channel negotiation protocols would be used, and these protocols would then take on the responsibility of supporting more flexible objectives to negotiate the mutually preferred ACP secure channel protocol.

IKEv2 is the IETF standard protocol to negotiate network security associations.  It is meant to be extensible, but it is unclear whether it would be feasible to extend IKEv2 to support possible future requirements for ACP secure channel negotiation:

Consider the simple case where the use of native IPsec vs. IPsec via GRE is to be negotiated and the objective is the maximum throughput.  Both sides would indicate some agreed-upon performance metric, and the preferred encapsulation is the one with the higher performance of the slower side.  IKEv2 does not support negotiation with this objective.

Consider that DTLS and some form of MACsec are to be added as negotiation options, and the performance objective should work across all IPsec, DTLS and MACsec options.  In the case of MACsec, the negotiation would also need to determine a key for the peering.  It is unclear whether it would even be appropriate to consider extending the scope of negotiation in IKEv2 to those cases.  Even if feasible to define, it is unclear whether implementations of IKEv2 would be eager to adopt those types of extensions given the long cycles of security testing that necessarily go along with core security protocols such as IKEv2 implementations.

A more modular alternative to extending IKEv2 could be to layer a modular negotiation mechanism on top of the multitude of existing or possible future secure channel protocols.  For this, GRASP over TLS could be considered as a first ACP secure channel negotiation protocol.
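The "higher performance of the slower side" objective described above can be sketched as follows.  This is purely illustrative: the function name, encapsulation labels and throughput numbers are assumptions, not part of any specification:

```python
def select_encapsulation(alice: dict, bob: dict) -> str:
    """Return the encapsulation whose throughput on the slower of
    the two peers is highest.  Inputs map an encapsulation name to
    an agreed-upon performance metric (here: Mbit/s)."""
    common = alice.keys() & bob.keys()
    if not common:
        raise ValueError("no common encapsulation supported")
    return max(common, key=lambda enc: min(alice[enc], bob[enc]))

# Illustrative numbers only: native IPsec is faster for Alice, but
# Bob is the bottleneck and forwards IPsec via GRE faster, so the
# minimum per encapsulation is native -> 400, GRE -> 600.
alice = {"IPsec-native": 900, "IPsec-GRE": 700}
bob = {"IPsec-native": 400, "IPsec-GRE": 600}
```

Each peer would advertise its locally measured metric per candidate encapsulation, and the chosen encapsulation is the one maximizing the minimum of the two values.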
The following are initial considerations for such an approach; a full specification is subject to a separate document:

To explicitly allow negotiation of the ACP channel protocol, GRASP over a TLS connection using the GRASP_LISTEN_PORT and the node's and peer's link-local IPv6 addresses is used.  When Alice and Bob support GRASP negotiation, they prefer it over any other non-explicitly negotiated security association protocol and should defer trying any non-negotiated ACP channel protocol until after it is clear that GRASP/TLS will not work to the peer.

When Alice and Bob successfully establish the GRASP/TLS session, they will negotiate the channel mechanism to use, using objectives such as performance and perceived quality of the security.  After agreeing on a channel mechanism, Alice and Bob start the selected channel protocol.  Once the secure channel protocol is successfully running, the GRASP/TLS connection can be kept alive or timed out as long as the selected channel protocol has a secure association between Alice and Bob.  When it terminates, it needs to be re-negotiated via GRASP/TLS.

Notes:

o  Negotiation of a channel type may require IANA assignments of code points.

o  TLS is subject to reset attacks, which IKEv2 is not.  Normally, ACP connections (as specified in this document) will be over link-local addresses, so the attack surface for this one issue in TCP should be reduced (note that this may not be true when the ACP is tunneled as described in Section 8.2.2).

o  GRASP packets received inside a TLS connection established for GRASP/TLS ACP negotiation are assigned to a separate GRASP domain unique to that TLS connection.

11.7.
CAs, domains and routing subdomains

There is a wide range of options for setting up different ACP solutions by appropriately using CAs and the domain and rsub elements in the domain information field of the domain certificate.  We summarize these options here, as they have been explained in different parts of the document before, and discuss possible and desirable extensions:

An ACP domain is the set of all ACP nodes using certificates from the same CA with the same domain field.  GRASP inside the ACP is run across all transitively connected ACP nodes in a domain.

The rsub element in the domain information field permits the use of addresses from different ULA prefixes.  One use case is to create multiple networks that initially may be separated, but where it should be possible to connect them without further extensions to the ACP when necessary.

Another use case for routing subdomains is as the starting point for structuring routing inside an ACP.  For example, different routing subdomains could run different routing protocols or different instances of RPL, and auto-aggregation / distribution of routes could be done across inter routing subdomain ACP channels based on negotiation (e.g., via GRASP).  This is subject for further work.

RPL scales very well.  It is not necessary to use multiple routing subdomains to scale ACP domains in the way it would be if other routing protocols were used.  They exist only as options for the above mentioned reasons.

If different ACP domains are to be created that should not be able to connect to each other by default, these ACP domains simply need to have different domain elements in the domain information field.
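The resulting adjacency acceptance rule - an exact match of the domain element plus a common CA - can be sketched as follows.  The function name and the representation of the trust anchor as an opaque digest are illustrative assumptions:

```python
def same_acp_domain(domain_a: str, domain_b: str,
                    trust_anchor_a: bytes,
                    trust_anchor_b: bytes) -> bool:
    """Sketch of the ACP adjacency acceptance rule: both
    certificates must chain to the same CA (modeled here as an
    opaque trust anchor digest) and the domain elements must match
    exactly - no DNS-style subdomain matching is performed."""
    return (trust_anchor_a == trust_anchor_b
            and domain_a.lower() == domain_b.lower())
```

Note the exact (case-insensitive) comparison: no DNS-style subdomain matching is performed.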
These domain elements can be arbitrary, including subdomains of one another: domains "example.com" and "research.example.com" are separate domains if both are domain elements in the domain information element of certificates.

It is not necessary to have a separate CA for different ACP domains: an operator can use a single CA to sign certificates for multiple ACP domains that are not allowed to connect to each other, because the checks for ACP adjacencies include comparison of the domain part.

If multiple independent networks chose the same domain name but had their own CA, these would not form a single ACP domain because of the CA mismatch.  Therefore there is no problem in choosing domain names that are potentially also used by others.  Nevertheless it is highly recommended to use domain names that have a high probability of being unique.  It is recommended to use domain names that start with a DNS domain name owned by the assigning organization and that are unique within it.  For example "acp.example.com" if you own "example.com".

Future extensions, primarily through Intent, can create more flexible options for how to build ACP domains.

Intent could modify the ACP connection check to permit connections between different domains.

If different domains use the same CA, one would change the ACP setup to permit the ACP to be established between the two ACP nodes, but no routing nor ACP GRASP to be built across this adjacency.  The main difference from routing subdomains is not permitting the ACP GRASP instance to be built across the adjacency.  Instead, one would only build a point-to-point GRASP instance between those peers to negotiate what type of exchanges are desired across that connection.  This would include routing negotiation, how much GRASP information to transit, and what Data-Plane forwarding should be done.
This approach could also allow Intent to be injected into the network from only one side and propagate via this GRASP connection.

If different domains have different CAs, they should start to trust each other by intent injected into both domains that would add the other domain's CA as a trust point during the ACP connection setup, and then follow up with the previous point of inter-domain connections across domains with the same CA (e.g., GRASP negotiation).

11.8.  Adopting ACP concepts for other environments

The ACP as specified in this document is very explicit about the choice of options to allow interoperable implementations.  The choices made may not be the best for all environments, but the concepts used by the ACP can be used to build derived solutions:

The ACP specifies the use of ULA and derives its prefix from the domain name so that no address allocation is required to deploy the ACP.  The ACP will work equally well with any other /50 IPv6 prefix instead of ULA.  This prefix could simply be a configuration of the ACP registrars (for example when using BRSKI) to enroll the domain certificates - instead of the ACP registrar deriving the /50 ULA prefix from the AN domain name.

Some solutions may already have an auto-addressing scheme, for example derived from existing unique device identifiers (e.g., MAC addresses).  In those cases it may not be desirable to assign addresses to devices via the ACP address information field in the way described in this document.  The certificate may simply serve to identify the ACP domain, and the address field could be empty/unused.  The only fix required in the remaining way the ACP operates is to define another element in the domain certificate for the two peers to decide who is Alice and who is Bob during secure channel building.
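The default prefix derivation mentioned above can be sketched as follows.  This is an informational sketch assuming the scheme of Section 6.10.2: the first 40 bits of a SHA-256 digest over the (routing sub)domain name follow the fd00::/8 ULA prefix, yielding the /48 that the addressing sub-scheme bits extend to the /50 mentioned above.  The exact input string and its canonicalization follow that section; the function name is illustrative:

```python
import hashlib
import ipaddress

def acp_ula_prefix(domain_name: str) -> ipaddress.IPv6Network:
    """Sketch of the default ACP ULA prefix derivation: the first
    40 bits of a SHA-256 digest over the (routing sub)domain name
    follow the fd00::/8 ULA prefix, giving a /48."""
    digest = hashlib.sha256(domain_name.encode("ascii")).digest()
    prefix = bytes([0xFD]) + digest[:5] + bytes(10)  # 16 bytes total
    return ipaddress.IPv6Network((prefix, 48))
```

A registrar configured with a different prefix as described above would simply skip this derivation and use the configured prefix instead.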
Note though that future work may leverage the ACP address to authenticate "ownership" of the address by the device.  If the address used by a device is derived from some pre-existing permanent local ID (such as a MAC address), then it would be useful to store that address in the certificate using the format of the ACP address information field or in a similar way.

The ACP is defined as a separate VRF because it intends to support well managed networks with a wide variety of configurations.  Therefore, reliable, configuration-indestructible connectivity cannot be achieved from the Data-Plane itself.  In solutions where all transit connectivity impacting functions are fully automated (including security), indestructible and resilient, it would be possible to eliminate the need for the ACP to be a separate VRF.  Consider the simplest example system, in which there is no separate Data-Plane, but the ACP is the Data-Plane.  Add BRSKI, and it becomes a fully autonomic network - except that it does not support automatic addressing for user equipment.  This gap can then be closed for example by adding a solution derived from [I-D.ietf-anima-prefix-management].

TCP/TLS as the protocols to provide reliability and security to GRASP in the ACP may not be the preferred choice in constrained networks.  For example, CoAP/DTLS (Constrained Application Protocol) may be preferred where they are already used, allowing a reduction of the additional code space footprint for the ACP on those devices.  Because the transport for GRASP is not only hop-by-hop, but end-to-end across the ACP, this would require the definition of an incompatible variant of the ACP.
Non-constrained devices could support both variants (the ACP as defined here, and one using CoAP/DTLS for GRASP), and the variant used in a deployment could be chosen for example through a parameter of the domain certificate.

The routing protocol chosen by the ACP design (RPL) explicitly does not optimize for shortest paths and fastest convergence.  Variations of the ACP may want to use a different routing protocol or introduce more advanced RPL profiles.

Variations such as which routing protocol to use, or whether to instantiate an ACP in a VRF or (as suggested above) as the actual Data-Plane, can be chosen automatically in implementations built to support multiple options by deriving them from future parameters in the certificate.  Parameters in certificates should be limited to those that would not need to be changed more often than certificates would need to be updated anyhow; or it should be ensured that these parameters can be provisioned before the variation of an ACP is activated in a node.  Using BRSKI, this could be done for example as additional follow-up signaling directly after the certificate enrollment, still leveraging the BRSKI TLS connection and therefore not introducing any additional connectivity requirements.

Last but not least, secure channel protocols, including their encapsulation, are easily added to ACP solutions.  Secure channels may even be replaced by simple neighbor authentication to create simplified ACP variations for environments where no real security is required but just protection against non-malicious misconfiguration, or for environments where all traffic is known or forced to be end-to-end protected and other means for infrastructure protection are used.
Any future network OAM should always use end-to-end security anyhow and can leverage the domain certificates; it is therefore not dependent on the security provided by ACP secure channels.

12.  Security Considerations

An ACP is self-protecting, and there is no need to apply configuration to make it secure.  Its security therefore does not depend on configuration.

However, the security of the ACP depends on a number of other factors:

o  The usage of domain certificates depends on a valid supporting PKI infrastructure.  If the chain of trust of this PKI infrastructure is compromised, the security of the ACP is also compromised.  This is typically under the control of the network administrator.

o  Security can be compromised by implementation errors (bugs), as in all products.

There is no prevention of source-address spoofing inside the ACP.  This implies that if an attacker gains access to the ACP, it can spoof all addresses inside the ACP and fake messages from any other node.

Fundamentally, security depends on correct operation, implementation and architecture.  Autonomic approaches such as the ACP largely eliminate the dependency on correct operation; implementation and architectural mistakes are still possible, as in all networking technologies.

Many details of the ACP are designed with security in mind and discussed elsewhere in the document:

IPv6 addresses used by nodes in the ACP are covered as part of the node's domain certificate as described in Section 6.1.1.  This allows even verification of ownership of a peer's IPv6 address when using a connection authenticated with the domain certificate.
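This ownership check can be sketched as follows.  The function name is illustrative, and the conversion of the certificate's ACP information field into a textual IPv6 address is assumed to have happened already:

```python
import ipaddress

def peer_address_matches_cert(peer_addr: str,
                              cert_acp_addr: str) -> bool:
    """Compare the IPv6 address a peer connects from with the ACP
    address asserted in its domain certificate.  Comparison is on
    the binary address form, so differing textual representations
    of the same address do not cause false mismatches."""
    return (ipaddress.IPv6Address(peer_addr)
            == ipaddress.IPv6Address(cert_acp_addr))
```

Because the comparison is on the binary form, "fd89::1" and "fd89:0:0:0:0:0:0:1" compare as equal.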
The ACP acts as a security (and transport) substrate for GRASP inside the ACP, such that GRASP is not only protected against attacks from the outside, but also against attacks from compromised insiders - by relying not only on hop-by-hop security of ACP secure channels, but adding end-to-end security for those GRASP messages.  See Section 6.8.2.

ACP provides for secure, resilient zero-touch discovery of EST servers for certificate renewal.  See Section 6.1.3.

ACP provides extensible, auto-configuring hop-by-hop protection of the ACP infrastructure via the negotiation of hop-by-hop secure channel protocols.  See Section 6.5 and Section 11.6.

The ACP is designed to minimize attacks from the outside by minimizing its dependencies on any non-ACP operations on a node.  The only dependency in the specification in this document is the need to share link-local addresses for the ACP secure channel encapsulation with the Data-Plane.  See Section 6.12.2.

In combination with BRSKI, ACP enables a resilient, fully zero-touch network solution for short-lived certificates that can be renewed or re-enrolled even after unintentional expiry (e.g., because of interrupted connectivity).  See Section 11.2.

13.  IANA Considerations

This document defines the "Autonomic Control Plane".

The IANA is requested to register the value "AN_ACP" (without quotes) to the GRASP Objectives Names Table in the GRASP Parameter Registry.  The specification for this value is this document, Section 6.3.

The IANA is requested to register the value "SRV.est" (without quotes) to the GRASP Objectives Names Table in the GRASP Parameter Registry.  The specification for this value is this document, Section 6.1.3.

Note that the objective format "SRV.<service-name>" is intended to be used for any <service-name> that is an [RFC6335] registered service name.
This is a proposed update to the GRASP registry subject to future work and is only mentioned here for informational purposes, to explain the unique format of the objective name.

The IANA is requested to create an ACP Parameter Registry with currently one registry table - the "ACP Address Type" table.

"ACP Address Type" Table.  The values in this table are numeric values 0...3 paired with a name (string).  Future values MUST be assigned using the Standards Action policy defined by [RFC8126].  The following initial values are assigned by this document:

0: ACP Zone Addressing Sub-Scheme (ACP RFC Figure 9) / ACP Manual Addressing Sub-Scheme (ACP RFC Section 6.10.4)
1: ACP Vlong Addressing Sub-Scheme (ACP RFC Section 6.10.5)

14.  Acknowledgements

This work originated from an Autonomic Networking project at Cisco Systems, which started in early 2010.  Many people contributed to this project and the idea of the Autonomic Control Plane, amongst which (in alphabetical order): Ignas Bagdonas, Parag Bhide, Balaji BL, Alex Clemm, Yves Hertoghs, Bruno Klauser, Max Pritikin, Michael Richardson, Ravi Kumar Vadapalli.

Special thanks to Brian Carpenter, Elwyn Davies, Joel Halpern and Sheng Jiang for their thorough reviews, and to Pascal Thubert and Michael Richardson for providing the details for the recommendations on the use of RPL in the ACP.

Further input, review or suggestions were received from: Rene Struik, Brian Carpenter, Benoit Claise, William Atwood and Yongkang Zhang.

15.  Change log [RFC Editor: Please remove]

15.1.  Initial version

First version of this document: draft-behringer-autonomic-control-plane

15.2.  draft-behringer-anima-autonomic-control-plane-00

Initial version of the anima document; only minor edits.

15.3.
draft-behringer-anima-autonomic-control-plane-01

o  Clarified that the ACP should be based on, and support only, IPv6.

o  Clarified in the intro that the ACP is both for use between devices and for access from a central entity, such as an NMS.

o  Added a section on how to connect an NMS system.

o  Clarified the hop-by-hop crypto nature of the ACP.

o  Added several references to GDNP as a candidate protocol.

o  Added a discussion on network split and merge.  Although, this should probably go into the certificate management story longer term.

15.4.  draft-behringer-anima-autonomic-control-plane-02

Addresses (numerous) comments from Brian Carpenter.  See the mailing list for details.  The most important changes are:

o  Introduced a new section "overview", to ease the understanding of the approach.

o  Merged the previous "problem statement" and "use case" sections into a mostly re-written "use cases" section, since they were overlapping.

o  Clarified the relationship with draft-ietf-anima-stable-connectivity

15.5.  draft-behringer-anima-autonomic-control-plane-03

o  Took out requirement for IPv6 --> that's in the reference doc.

o  Added requirement section.

o  Changed focus: more focus on autonomic functions, not only virtual out-of-band.  This goes a bit throughout the document, starting with a changed abstract and intro.

15.6.  draft-ietf-anima-autonomic-control-plane-00

No changes; re-submitted as WG document.

15.7.  draft-ietf-anima-autonomic-control-plane-01

o  Added some paragraphs in the addressing section on "why IPv6 only", to reflect the discussion on the list.

o  Moved the Data-Plane ACP out of the main document, into an appendix.  The focus is now the virtually separated ACP, since it has significant advantages and isn't much harder to do.
o  Changed the self-creation algorithm: Part of the initial steps go into the reference document.  This document now assumes an adjacency table and domain certificate.  How those get onto the device is outside the scope of this document.

o  Created a new section 6 "workarounds for non-autonomic nodes", and put the previous controller section (5.9) into this new section.  Now, section 5 is "autonomic only", and section 6 explains what to do with non-autonomic stuff.  Much cleaner now.

o  Added an appendix explaining the choice of RPL as a routing protocol.

o  Formalised the creation process a bit more.  Now, we create a "candidate peer list" from the adjacency table, and form the ACP with those candidates.  Also it explains now better that policy (Intent) can influence the peer selection.  (section 4 and 5)

o  Introduce a section for the capability negotiation protocol (section 7).  This needs to be worked out in more detail.  This will likely be based on GRASP.

o  Introduce a new parameter: ACP tunnel type.  And defines it in the IANA considerations section.  Suggest GRE protected with IPsec transport mode as the default tunnel type.

o  Updated links, lots of small edits.

15.8.  draft-ietf-anima-autonomic-control-plane-02

o  Added explicit text for the ACP channel negotiation.

o  Merged draft-behringer-anima-autonomic-addressing-02 into this document, as suggested by WG chairs.

15.9.  draft-ietf-anima-autonomic-control-plane-03

o  Changed Neighbor discovery protocol from GRASP to mDNS.  Bootstrap protocol team decided to go with mDNS to discover the bootstrap proxy, and ACP should be consistent with this.  Reasons to go with mDNS in bootstrap were a) Bootstrap should be reusable also outside of full anima solutions and introduce as few new elements as possible.
mDNS was considered well-known and very likely even pre-existing in low-end devices (IoT).  b) Using GRASP both for the insecure neighbor discovery and secure ACP operations raises the risk of introducing security issues through implementation issues/non-isolation between those two instances of GRASP.

o  Shortened the section on GRASP instances, because with mDNS being used for discovery, there is no insecure GRASP session any longer, simplifying the GRASP considerations.

o  Added certificate requirements for ANIMA in section 5.1.1, specifically how the ANIMA information is encoded in subjectAltName.

o  Deleted the appendix on "ACP without separation", as originally planned, and the paragraph in the main text referring to it.

o  Deleted one sub-addressing scheme, focusing on a single scheme now.

o  Included information on how ANIMA information must be encoded in the domain certificate in section "preconditions".

o  Editorial changes, updated draft references, etc.

15.10.  draft-ietf-anima-autonomic-control-plane-04

Changed discovery of ACP neighbors back from mDNS to GRASP after revisiting the L2 problem.  Described the problem in the discovery section itself to justify this.  Added text to explain how ACP discovery relates to BRSKI (bootstrap) discovery and pointed to Michael Richardson's draft detailing it.  Removed the appendix section that contained the original explanations why GRASP would be useful (current text is meant to be better).

15.11.  draft-ietf-anima-autonomic-control-plane-05

o  Section 5.3 (candidate ACP neighbor selection): Add that Intent can override only AFTER an initial default ACP establishment.

o  Section 6.10.1 (addressing): State that addresses in the ACP are permanent and do not support temporary addresses as defined in RFC4941.
5103 o Modified Section 6.3 to point to the GRASP objective defined in 5104 draft-carpenter-anima-ani-objectives (and added that reference). 5106 o Section 6.10.2: changed from MD5 for calculating the first 40 bits 5107 to SHA256; the reason is that MD5 should not be used anymore. 5109 o Added address sub-scheme to the IANA section. 5111 o Made the routing section more prescriptive. 5113 o Clarified in Section 8.1.1 the ACP Connect port, and defined the 5114 term "ACP Connect". 5116 o Section 8.2: Added some thoughts (from mcr) on how traversing an L3 5117 cloud could be automated. 5119 o Added a CRL check in Section 6.7. 5121 o Added a note on the possibility of source-address spoofing into 5122 the security considerations section. 5124 o Other editorial changes, including those proposed by Michael 5125 Richardson on 30 Nov 2016 (see ANIMA list). 5127 15.12. draft-ietf-anima-autonomic-control-plane-06 5129 o Detailed DTLS profile - DTLS with any additional negotiation/ 5131 signaling channel. 5132 o Added proposed RPL profile. 5134 o Fixed up text for ACP/GRE encap. Removed text claiming it is 5135 incompatible with non-GRE IPsec and detailed it. 5137 o Added text to suggest admin-down interfaces should still run ACP. 5139 15.13. draft-ietf-anima-autonomic-control-plane-07 5141 o Changed author association. 5143 o Improved ACP connect section (after confusion about the term came up in 5144 the stable connectivity draft review). Added picture, defined 5145 complete terminology. 5147 o Moved ACP channel negotiation from the normative section to an appendix 5148 because it cannot, in the timeline of this document, be fully 5149 specified to be implementable. Aka: work for a future document. 5150 That work would also need to include analyzing IKEv2 and describing 5151 the difference of a proposed GRASP/TLS solution to it. 5153 o Removed IANA request to allocate registry for GRASP/TLS. This 5154 would come with a future draft (see above).
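The 40-bit hash calculation mentioned for section 6.10.2 can be sketched as follows. This is a non-normative illustration: it assumes the input is the ACP domain / routing subdomain name as a plain ASCII string and that the first 40 bits of the SHA-256 digest form the RFC4193 ULA "global ID"; the exact input canonicalization is an assumption here, not taken from this changelog.

```python
import hashlib

def acp_ula_prefix(routing_subdomain: str) -> str:
    # First 40 bits (5 bytes) of the SHA-256 digest over the routing
    # subdomain form the RFC4193 "global ID"; fd00::/8 is the ULA prefix.
    digest = hashlib.sha256(routing_subdomain.encode("ascii")).digest()
    hex48 = "fd" + digest[:5].hex()  # 48 prefix bits as 12 hex digits
    return f"{hex48[0:4]}:{hex48[4:8]}:{hex48[8:12]}::/48"
```

For example, acp_ula_prefix("acp.example.com") yields some fdXX:XXXX:XXXX::/48 prefix; the concrete value depends on the assumed input encoding.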
5156 o Gave the name "ACP information field" to the field in the 5157 certificate carrying the ACP address and domain name. 5159 o Changed the rules for mutual authentication of certificates to 5160 rely on the domain in the ACP information field of the certificate 5161 instead of the OU in the certificate. Also revised the text 5162 pointing out that the ACP information field in the certificate is 5163 meant to be in a form that does not disturb other uses of the 5164 certificate. As long as the ACP was expected to rely on a common OU 5165 across all certificates in a domain, this was not really true: 5166 Other uses of the certificates might require different OUs for 5167 different areas/types of devices. With the rules in this draft 5168 version, the ACP authentication does not rely on any other fields 5169 in the certificate. 5171 o Added an extension field to the ACP information field so that in 5172 the future additional fields like a subdomain could be inserted. 5173 An example using such a subdomain field was added to the pre- 5174 existing text suggesting sub-domains. This approach is necessary 5175 so that there can be a single (main) domain in the ACP information 5176 field, because that is used for mutual authentication of the 5177 certificate. Also clarified that only the registrar(s) SHOULD/MUST 5178 rely on the ACP address having been generated from the domain name - so 5179 that we can more easily change this in extensions. 5181 o Took the text for the GRASP discovery of ACP neighbors from Brian's 5182 grasp-ani-objectives draft. Alas, that draft was behind the 5183 latest GRASP draft, so I had to overhaul it. The major change is to 5184 describe in the ACP draft the whole format of the M_FLOOD message 5185 (and not only the actual objective). This should make it a lot 5186 easier to read (without having to go back and forth to the GRASP 5187 RFC/draft).
It was also necessary because the locator in the 5188 M_FLOOD messages has an important role and it is not coded inside 5189 the objective. The specification of how to format the M_FLOOD 5190 message should now be complete; the text may somewhat duplicate 5191 the DULL specification in GRASP, but there is no contradiction. 5193 o One of the main outcomes of reworking the GRASP section was the 5194 notion that GRASP announces both the candidate peer's IPv6 link- 5195 local address and the supported ACP security protocol, including 5196 the port it is running on. In the past we shied away from using 5197 this information because it is not secured, but I think the 5198 additional attack vectors possible by using this information are 5199 negligible: If an attacker on an L2 subnet can fake another 5200 device's GRASP message then it can already mount a similar 5201 attack by purely faking the link-local address. 5203 o Removed the section on discovery and BRSKI. This can be revived 5204 in the BRSKI document, but it seems moot given how we removed 5205 mDNS from the latest BRSKI document (aka: this section discussed 5206 discrepancies between GRASP and mDNS discovery which should not 5207 exist anymore with the latest BRSKI). 5209 o Tried to resolve the EDNOTE about CRL vs. OCSP by pointing out we 5210 do not specify which one is to be used but that the ACP should be 5211 used to reach the URL included in the certificate to get to the 5212 CRL storage or OCSP server. 5214 o Changed ACP via IPsec to ACP via IKEv2 and restructured the 5215 sections to make IPsec native and IPsec via GRE subsections. 5217 o No need for any assigned DTLS port if ACP is run across DTLS 5218 because it is signaled via GRASP. 5220 15.14. draft-ietf-anima-autonomic-control-plane-08 5222 Modified mentioning of BRSKI to make it consistent with the current 5223 (07/2017) target for BRSKI: MASA and IDevID are mandatory.
Devices 5224 with only insecure UDI would need a security-reduced variant of 5225 BRSKI. Also added mentioning of Netconf Zero-Touch. Made BRSKI non- 5226 normative for ACP because wrt. ACP it is just one option for how the 5227 domain certificate can be provisioned. Instead, BRSKI is mandatory 5228 when a device implements the ANI, which is ACP+BRSKI. 5230 Enhanced text for ACP across tunnels to describe two options: one 5231 across configured tunnels (GRE, IPinIP, etc.), and a more efficient one via 5232 directed DULL. 5234 Moved description of BRSKI to an appendix to emphasize that BRSKI is not 5235 a (normative) dependency of GRASP, enhanced text to indicate other 5236 options for how Domain Certificates can be provisioned. 5238 Added terminology section. 5240 Separated references into normative and non-normative. 5242 Enhanced section about ACP via "tunnels". Defined an option to run 5243 ACP secure channel without an outer tunnel, discussed PMTU, benefits 5244 of tunneling, potential of using this with BRSKI, made ACP via GRE a 5245 SHOULD requirement. 5247 Moved appendix sections up before the IANA section because there were 5248 concerns about appendices being too far toward the bottom to be read. 5249 Added (Informative) / (Normative) to section titles to clarify which 5250 sections are informative and which are normative. 5252 Moved explanation of ACP with L2 from precondition to a separate 5253 section before workarounds, made it instructive enough to explain how 5254 to implement ACP on L2 ports for L3/L2 switches and made this part of 5255 the normative requirements (L2/L3 switches SHOULD support this). 5257 Rewrote section "GRASP in the ACP" to define GRASP in ACP as 5258 mandatory (and why), and define the ACP as security and transport 5259 substrate to GRASP in ACP. And how it works. 5261 Enhanced "self-protection" properties section: protect legacy 5262 management protocols. Security in ACP is for protection from the outside 5263 and those legacy protocols.
Otherwise, end-to-end encryption would 5264 also be needed inside the ACP, e.g., with the domain certificate. 5266 Enhanced initial domain certificate section to include requirements 5267 for maintenance (renewal/revocation) of certificates. Added 5268 explanation to the BRSKI informative section how to handle very short- 5269 lived certificates (renewal via BRSKI with expired cert). 5271 Modified the encoding of the ACP address to better fit RFC822 simple 5272 local-parts (":" as required by RFC5952 is not permitted in simple 5273 dot-atoms according to RFC5322). Removed reference to RFC5952 as it is 5274 no longer needed. 5276 Introduced a sub-domain field in the ACP information in the 5277 certificate to allow defining such subdomains depending on 5278 future Intent definitions. It also makes it clear what the "main 5279 domain" is. The scheme is called "routing subdomain" to have a unique 5280 name. 5282 Added V8 (now called Vlong) addressing sub-scheme according to a 5283 suggestion from mcr in his mail from 30 Nov 2016 5284 (https://mailarchive.ietf.org/arch/msg/anima/ 5285 nZpEphrTqDCBdzsKMpaIn2gsIzI). Also modified the explanation of the 5286 single V bit in the first sub-scheme, now renamed to the Zone sub-scheme 5287 to distinguish it. 5289 15.15. draft-ietf-anima-autonomic-control-plane-09 5291 Added reference to RFC4191 and explained how it should be used on ACP 5292 edge routers to allow auto-configuration of routing by NMS hosts. 5293 This came after review of the stable connectivity draft where ACP connect 5294 is being referred to. 5296 V8 addressing Sub-Scheme was modified to allow not only /8 device- 5297 local address space but also /16. This was in response to the 5298 possible need to have maybe as many as 2^12 local addresses for 5299 future encaps in BRSKI like IPinIP.
It also would allow fully 5300 autonomic address assignment for ACP connect interfaces from this 5301 local address space (on an ACP edge device), subject to approval of 5302 the implied update to rfc4291/rfc4193 (IID length). Changed the name to 5303 Vlong addressing sub-scheme. 5305 Added text in response to Brian Carpenter's review of draft-ietf- 5306 anima-stable-connectivity-04. 5308 o The stable connectivity draft was vaguely describing ACP connect 5309 behavior that is better standardized in this ACP draft. 5311 o Added new ACP "Manual" addressing sub-scheme with /64 subnets for 5312 use with ACP connect interfaces. Being covered by the ACP ULA 5313 prefix, these subnets do not require additional routing entries 5314 for NMS hosts. They also are fully 64-bit IID length compliant 5315 and therefore not subject to rfc4291bis considerations. And they 5316 avoid operators manually assigning prefixes from the ACP ULA 5317 prefixes that might later be assigned autonomically. 5319 o ACP connect auto-configuration: Defined that ACP edge devices and NMS 5320 hosts should use RFC4191 to automatically learn ACP prefixes. 5321 This is especially necessary when the ACP uses multiple ULA 5322 prefixes (via e.g., the rsub domain certificate option), or if ACP 5323 connect subinterfaces use manually configured prefixes NOT covered 5324 by the ACP ULA prefixes. 5326 o Explained how rfc6724 is (only) sufficient when the NMS host has a 5327 separate ACP connect and Data-Plane interface. But not when there 5328 is a single interface. 5330 o Added a separate subsection to talk about "software" instead of 5331 "NMS hosts" connecting to the ACP via the "ACP connect" method. 5332 The reason is to point out that the "ACP connect" method is not 5333 only a workaround (for NMS hosts), but an actual desirable long- 5334 term architectural component to modularly build software (e.g., 5335 ASA or OAM for VNF) into ACP devices.
5337 o Added a section to define how to run ACP connect across the same 5338 interface as the Data-Plane. This turns out to be quite 5339 challenging because we only want to rely on existing standards for 5340 the network stack in the NMS host/software and only define what 5341 features the ACP edge device needs. 5343 o Added section about use of GRASP over ACP connect. 5345 o Added text to indicate packet processing/filtering for security: 5346 filter incorrect packets arriving on ACP connect interfaces, 5347 diagnose on the RPL root packets to incorrect destination addresses (not 5348 in the ACP connect section, but because of it). 5350 o Reaffirmed security goal of ACP: Do not permit non-ACP routers into the 5351 ACP routing domain. 5353 Made this ACP document an update to RFC4291 and RFC4193. At the 5354 core, some of the ACP addressing sub-schemes effectively do not use 5355 64-bit IIDs as required by RFC4291 and debated in rfc4291bis. During 5356 6man in Prague, it was suggested that all documents that do not do 5357 this should be classified as such updates. Added a rather long section 5358 that summarizes the relevant parts of ACP addressing and usage. 5359 Aka: This section is meant to be the primary review section for 5360 readers interested in these changes (e.g., the 6man WG). 5362 Added changes from Michael Richardson's review https://github.com/ 5363 anima-wg/autonomic-control-plane/pull/3/commits, textual and: 5365 o ACP discovery inside ACP is bad *doh*! 5367 o Better CA trust and revocation sentences. 5369 o More details about RPL behavior in ACP. 5371 o Black hole route to avoid loops in RPL. 5373 Added requirement to terminate ACP channels upon cert expiry/ 5374 revocation. 5376 Added fixes from 08-mcr-review-reply.txt (on github): 5378 o AN Domain Names are FQDNs. 5380 o Fixed bit length of schemes, numerical writing of bits (00b/01b). 5382 o Let's use US American English. 5384 15.16.
draft-ietf-anima-autonomic-control-plane-10 5386 Used the term routing subdomain more consistently where previously 5387 only subdomain was used. Clarified use of routing subdomain in the 5388 creation of the ULA "global ID" addressing prefix. 5390 6.7.1.* Changed native IPsec encapsulation to tunnel mode 5391 (necessary), explained why. Added notion that ESP is used, added 5392 explanations why tunnel/transport mode in native vs. GRE cases. 5394 6.10.3/6.10.5 Added term "ACP address range/set" to be able to better 5395 explain how the address in the ACP certificate is actually the base 5396 address (lowest address) of a range/set that is available to the 5397 device. 5399 6.10.4 Added note that manual address sub-scheme addresses must not 5400 be used within domain certificates (only for explicit configuration). 5402 6.12.5 Refined explanation of how ACP virtual interfaces work (p2p 5403 and multipoint). Searched for pre-existing RFCs that explain how to 5404 build a multi-access interface on top of a full mesh of p2p 5405 connections (6man WG, anima WG mailing lists), but could not find any 5406 prior work that had a succinct explanation. So wrote up an 5407 explanation here. Added hopefully all necessary and sufficient 5408 details on how to map ACP unicast packets to ACP secure channels and how to 5409 deal with ND packet details. Added verbiage for the ACP not to assign the 5410 virtual interface link-local address from the underlying interface. 5411 Added note that GRASP link-local messages are treated specially but 5412 logically the same. Added paragraph about NBMA interfaces. 5414 Remaining changes from Brian Carpenter's review. See Github file 5415 draft-ietf-anima-autonomic-control-plane/08-carpenter-review-reply.txt 5416 for more details: 5418 Added multiple new RFC references for terms/technologies used. 5420 Fixed verbiage in several places. 5422 2. (terminology) Added 802.1AR as reference. 5424 2. Fixed up definition of ULA.
5426 6.1.1 Changed definition of ACP information in cert into ABNF format. 5427 Added warning about maximum size of ACP address field due to domain- 5428 name limitations. 5430 6.2 Mentioned API requirement between ACP and clients leveraging the 5431 adjacency table. 5433 6.3 Fixed TTL in GRASP example: msec, not hop-count! 5435 6.8.2 MAJOR: expanded security/transport substrate text: 5437 Introduced term ACP GRASP virtual interface to explain how GRASP 5438 link-local multicast messages are encapsulated and replicated to 5439 neighbors. Explained how the ACP knows when to use TLS vs. TCP (TCP only 5440 for link-local addresses/sockets). Introduced "ladder" picture to 5441 visualize the stack. 5443 6.8.2.1 Expanded discussion/explanation of the security model. TLS for 5444 GRASP unicast connections across the ACP is double encryption (plus the 5445 underlying ACP secure channel), but highly necessary to avoid very 5446 simple man-in-the-middle attacks by compromised ACP members on-path. 5447 Ultimately, this is done to ensure that any apps using GRASP can get 5448 full end-to-end secrecy for information sent across GRASP. But for 5449 publicly known ASA services, even this will not provide 100% 5450 security (this is discussed). Also why double encryption is the 5451 better/easier solution than trying to optimize this. 5453 6.10.1 Added discussion about pseudo-random addressing, scanning- 5454 attacks (not an issue for ACP). 5456 6.12.2 New performance requirements section added. 5458 6.10.1 Added notion to first experiment with existing addressing 5459 schemes before defining new ones - we should be flexible enough. 5461 6.3/7.2 Clarified the interactions between MLD and DULL GRASP and 5462 specified what needs to be done (e.g., in 2 switches doing ACP per L2 5463 port). 5465 12. Added explanations and cross-references to various security 5466 aspects of ACP discussed elsewhere in the document. 5468 13. Added IANA requirements. 5470 Added RFC2119 boilerplate. 5472 15.17.
draft-ietf-anima-autonomic-control-plane-11 5474 Same text as -10. Unfortunately, when uploading the -10 .xml/.txt to the 5475 datatracker, a wrong version of the .txt got uploaded; only the .xml was 5476 correct. This impacts the -10 html version on the datatracker and the 5477 PDF versions as well. Because rfcdiff also compares the .txt 5478 version, this -11 version was created so that one can compare 5479 changes from -09 and changes to the next version (-12). 5481 15.18. draft-ietf-anima-autonomic-control-plane-12 5483 Sheng Jiang's extensive review. Thanks! See Github file draft-ietf- 5484 anima-autonomic-control-plane/09-sheng-review-reply.txt for more 5485 details. Many of the larger changes listed below were inspired by 5486 the review. 5488 Removed the claim that the document is updating RFC4291/RFC4193 and 5489 the section detailing it. Done on suggestion of Michael Richardson 5490 - just try to describe the use of addressing in a way that would not 5491 suggest a need to claim an update to the architecture. 5493 Terminology cleanup: 5495 o Replaced "device" with "node" in text. Kept "device" only when 5496 referring to "physical node". Added definitions for those words. 5497 Includes changes of derived terms, especially in addressing: 5498 "Node-ID" and "Node-Number" in the addressing details. 5500 o Replaced term "autonomic FOOBAR" with "acp FOOBAR" wherever 5501 appropriate: "autonomic" would imply that the node would need to 5502 support more than the ACP, but that is not correct in most of the 5503 cases. Wanted to make sure that implementers know they only need 5504 to support/implement the ACP - unless stated otherwise. Includes 5505 "AN->ACP node", "AN->ACP adjacency table" and so on. 5507 1 Added explanation in the introduction about the relationship between 5508 ACP, BRSKI, ANI and Autonomic Networks. 5510 6.1.1 Improved terminology and features of the certificate 5511 information field. Now called domain information field instead of 5512 ACP information field.
The acp-address field in the domain 5513 information field is now optional, enabling easier introduction of 5514 various future options. 5516 6.1.2 Moved ACP domain membership check from section 6.6 (ACP 5517 secure channel setup) to here because it is not only used for ACP 5518 secure channel setup. 5520 6.1.3 Fixed text about certificate renewal after discussion with Max 5521 Pritikin/Michael Richardson/Brian Carpenter: 5523 o Version 10 erroneously assumed that the certificate itself could 5524 store a URL for renewal, but that is only possible for CRL URLs. 5525 Text now only refers to "remembered EST server" without implying 5526 that this is stored in the certificate. 5528 o Objective for RFC7030/EST domain certificate renewal was changed 5529 to "SRV.est". See also the IANA section for explanation. 5531 o Removed detail of distance-based service selection. This can be 5532 better done in future work because it would require a lot more 5533 detail for a good DNS-SD compatible approach. 5535 o Removed detail about trying to create more security by using the ACP 5536 address from the certificate of the peer. After rethinking, this does not 5537 seem to buy additional security. 5539 6.10 Added reference to 6.12.5 in initial use of "loopback interface" 5540 in section 6.10 as a result of email discussion michaelR/michaelB. 5542 10.2 Introduced informational section (diagnostics) because of 5543 operational experience - ACP/ANI undeployable without at least 5544 diagnostics like this. 5546 10.3 Introduced informational section (enabling/disabling) ACP. 5547 Important to discuss this for security reasons (e.g., why never to 5548 auto-enable ANI on brownfield devices), for implementers and to 5549 answer ongoing questions during WG meetings about how to deal with 5550 shutdown interfaces.
5552 10.8 Added informational section discussing possible future 5553 variations of the ACP for potential adopters that cannot directly use 5554 the complete solution described in this document unmodified. 5556 15.19. draft-ietf-anima-autonomic-control-plane-13 5558 Swapped author list (with permission). 5560 6.1.1. Eliminated blank lines in definition by making it a picture 5561 (reformatting only). 5563 6.10.3.1 New paragraph: Explained how nodes using Zone-ID != 0 need 5564 to use Zone-ID != 0 in GRASP so that we can avoid routing/forwarding 5565 of Zone-ID = 0 prefixes. 5567 Rest of feedback from review of -12, see 5568 https://raw.githubusercontent.com/anima-wg/autonomic-control- 5569 plane/master/draft-ietf-anima-autonomic-control-plane/12-feedback- 5570 reply.txt 5572 Review from Brian Carpenter: 5574 various: Autonomous -> autonomic(ally) in all remaining occurrences. 5576 various: changed "manual (configured)" to "explicitly (configured)" 5577 to not exclude the option of (SDN controller) automatic configuration 5578 (no humans involved). 5580 1. Fixed reference to section 9. 5582 2. Added definition of loopback interface == internal interface, 5583 after discussion on WG mailing lists, including 6man. 5585 6.1.2 Defined CDP/OCSP and pointed to RFC5280 for them. 5587 6.1.3 Removed "EST-TLS"; no objective value needed or beneficial, 5588 added explanation paragraph why. 5590 6.2 Added to adjacency table the interface that a neighbor is 5591 discovered on. 5593 6.3 Simplified CDDL syntax: Only one method per AN_ACP objective 5594 (because of locators). Example with two objectives in GRASP message. 5596 6.8.1 Added note about link-local GRASP multicast message to avoid 5597 confusion. 5599 8.1.4 Added RFC8028 as recommended on hosts to better support VRF- 5600 select with ACP. 5602 8.2.1 Rewrote and simplified CDDL for configured remote peer and 5603 explanations. Removed pattern option for remote peer. Not important 5604 enough to be mandated.
5606 Review thread started by William Atwood: 5608 2. Refined definition of VRF (vs. MPLS/VPN, LISP, VRF-LITE). 5610 2. Refined definition of ACP (ACP includes ACP GRASP instance). 5612 2. Added explanation for "zones" to the terminology section and into the 5613 Zone Addressing Sub-Scheme section, relating it to RFC4007 zones 5614 (from Brian Carpenter). 5616 4. Fixed text for ACP4 requirement (Clients of the ACP must not be 5617 tied to a specific protocol). 5619 5. Fixed step 4. with proposed text. 5621 6.1.1 Included suggested explanation for rsub semantics. 5623 6.1.3 must->MUST for at least one EST server in the ACP network to 5624 autonomically renew certs. 5626 6.7.2 normative: AND MUST NOT permit weaker crypto options. 5628 6.7.1.1 Also included text denying weaker IPsec profile options. 5630 6.8.2 Fixed description of how to build ACP GRASP virtual interfaces. 5631 Added text that the ACP continues to exist in the absence of ACP neighbors. 5633 various: Made sure all "zone" words are used consistently. 5635 6.10.2/various: fixed the 40-bit RFC4193 ULA prefix in all examples to 5636 89b714f3db (thanks MichaelR). 5638 6.10.1 Removed comment about assigned ULA addressing. The decision not 5639 to use it is now ancient history of the WG decision-making process, not 5640 worth noting anymore in the RFC. 5642 Review from Yongkang Zhang: 5644 6.10.5 Fixed length of Node-Numbers in ACP Vlong Addressing Sub- 5645 Scheme. 5647 15.20. draft-ietf-anima-autonomic-control-plane-14 5649 Disclaimer: All new text introduced by this revision provides only 5650 additional explanations/details based on received reviews and 5651 analysis by the authors. No changes to behavior already specified in 5652 prior revisions. 5654 Joel Halpern, review part 3: 5656 Define/explain "ACP registrar" in reply to Joel Halpern review part 5657 3, resolving primarily 2 documentation issues: 5659 1. Unclear how much ACP depends on BRSKI.
The ACP document was 5660 referring unqualified to registrars and Registrar-ID in the 5661 addressing section without explaining what a registrar is, 5662 leading to the assumption it must be a BRSKI Registrar. 5664 2. Unclear how the ACP addresses in ACP domain certificates are 5665 assigned, because the BRSKI document does not define this, but 5666 refers to this ACP document. 5668 Wrt. 1: ACP does NOT depend on BRSKI registrars; instead ANY 5669 appropriate automated or manual mechanism can be used to enroll ACP 5670 nodes with ACP domain certificates. This revision defines such 5671 mechanisms as the "ACP registrar" and defines requirements. This is 5672 non-normative, because it does not define specific mechanisms that 5673 need to be supported. In ANI devices, ACP Registrars are BRSKI 5674 Registrars. In non-ANI ACP networks, the registrar may simply be a 5675 person using CLI/web-interfaces to provision domain certificates and 5676 set the ACP address correctly in the ACP domain certificate. 5678 Wrt. 2: The BRSKI document rightfully does not define how the ACP 5679 address assignment and creation of the ACP domain information field 5680 have to work because this is independent of BRSKI and needs to follow 5681 the same rules whatever protocols/mechanisms are used to implement an 5682 ACP Registrar. Another set of protocols that could be used instead 5683 of BRSKI is Netconf/Netconf-Call-Home, but such an alternative ACP 5684 Registrar solution would need to be specified in its own document. 5686 Additional text/sections had to be added to detail important 5687 conditions so that automatic certificate maintenance for ACP nodes 5688 (with BRSKI or other mechanisms) can be done in a way that maintains, 5689 as well as possible, the ACP address information of ACP nodes across a 5690 node's lifetime, because that ACP address is intended as an identifier 5691 of the ACP node.
5693 Summary of sections added: 5695 o 6.1.3.5/6.1.3.6 (normative): re-enrollment of ACP nodes after 5696 certificate expiry/failure in a way that allows maintaining as much 5697 ACP address information as possible. 5699 o 6.10.7 (normative): defines "ACP Registrar" including requirements 5700 and how it can perform ACP address assignment. 5702 o 10.3 (informative): details/examples about registrars to help 5703 implementers and operators understand more easily how they operate, and 5704 provides suggestions of models that are likely very useful (sub-CA and/ 5705 or centralized policy management). 5707 o 10.4 (informative): Explains the need for the multiple address 5708 sub-spaces, defined in response to discussion with Joel. 5710 Other changes: 5712 Updated references (RFC8366, RFC8368). 5714 Introduced sub-section headings for 6.1.3 (certificate maintenance) 5715 because the section became too long with newly added sub-sections. Also 5716 some small text fixups/removal of duplicate text. 5718 Gen-ART review, Elwyn Davies: 5720 [RFC Editor: how can I raise the issue of problematic cross- 5721 references of terms in the terminology section - rendering is 5722 problematic.] 5724 4. Added explanation for ACP4 (finally). 5726 6.1.1 Simplified text in bullet list explaining rfc822 encoding. 5728 6.1.3 Refined second paragraph defining remembering of the previous EST 5729 server and explaining how to do this with BRSKI. 5731 9.1 Added paragraph outlining the benefit of the sub-CA Registrar 5732 option for supporting partitioned networks. 5734 Roughly 100 more nits/minor fixes throughout the document. See: 5735 https://raw.githubusercontent.com/anima-wg/autonomic-control- 5736 plane/master/draft-ietf-anima-autonomic-control-plane/13-elwynd- 5737 reply.txt 5739 Joel Halpern, review part 2: 5741 6.1.1: added note about "+ +" format in the address field when acp- 5742 address and rsub are empty. 5744 6.10.5 - clarified text about the V bit in the Vlong addressing scheme.
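The local-part structure referenced in the notes above (rfcSELF, optional acp-address, optional rsub, including the "+ +" case when both are empty) can be sketched as a small parser. This is a hypothetical illustration only; the ABNF in section 6.1.1 of the draft is authoritative, and the example value in the usage note is made up to match the documented shape.

```python
def parse_domain_information(rfc822name: str) -> dict:
    # Shape assumed from 6.1.1: rfcSELF[+acp-address][+rsub]@<domain>,
    # where acp-address and/or rsub may be empty (the "+ +" case).
    local, sep, domain = rfc822name.partition("@")
    if not sep:
        raise ValueError("missing @domain")
    parts = local.split("+")
    if parts[0] != "rfcSELF":
        raise ValueError("not an ACP domain information field")
    acp_address = parts[1] if len(parts) > 1 else ""
    rsub = parts[2] if len(parts) > 2 else ""
    return {"acp-address": acp_address, "rsub": rsub, "domain": domain}
```

A made-up value such as "rfcSELF+fd89b714f3db0000200000064000+area51.research@acp.example.com" would yield the acp-address, the rsub "area51.research" and the domain "acp.example.com"; "rfcSELF++@acp.example.com" yields empty acp-address and rsub.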
5746 6.10.3/6.10.4 - moved the Z bit field up front (directly after the base 5747 scheme) and indicated more explicitly that Z is part of the selection of the 5748 sub-addressing scheme. 5750 Refined text about reaching the CRL Distribution Point, explaining why the 5751 address serves as an indicator to use the ACP. 5753 Note from Brian Carpenter: RFC Editor note for section reference into 5754 GRASP. 5756 IOT directorate review from Pascal Thubert: 5758 Various nits/typos. 5760 TBD: Punted wish for mentioning RFC reference titles to the RFC editor 5761 for now. 5763 1. Added section 1.1 - applicability, discussing protocol choices 5764 re. applicability to constrained devices (or not). Added notion of 5765 TCP/TLS vs. CoAP/DTLS to section 10.4 in support of this. 5767 2. Added in-band / out-of-band into terminology. 5769 5. Referenced section 8.2 for remote ACP channel configuration. 5771 6.3 Made M_FLOOD periods RECOMMENDED (less guesswork). 5773 6.7.x Clarified conditional nature of MUST for the profile details of 5774 IPsec parameters (aka: only 6.7.3 defines an actual MUST for nodes; 5775 prior sections only define the requirements for IPsec profiles IF 5776 IPsec is supported). 5778 6.8.1 Moved discussion about IP multicast, IGP, RPL for GRASP into a 5779 new subsection in the informative part (section 10) to tighten up the 5780 text in the normative part. 5782 6.10.1 Added another reference to stable-connectivity for interop 5783 with IPv4 management. 5785 6.10.1 Removed mentioning of ULA-Random; the term was used in email 5786 discussion of ULA with L=1, but the term is actually not defined in rfc4193, so 5787 mentioning it is just confusing/redundant. Also added note about the 5788 random hash being defined in this document, not using SHA1 from 5789 rfc4193. 5791 6.11.1.1 Added suggested text about mechanisms to further reduce 5792 opportunities for loops during reconvergence (active signaling options 5793 from RFC6550). 5795 6.11.1.3 Made mode 2 MUST and mode 2 MAY (RPL MOP - mode of 5796 operations).
Removes ambiguity. 5798 6.12.5 Added recommendation for RFC4429 (optimistic DAD). 5800 Nits from Benjamin Kaduk: dTLS -> DTLS. 5802 Review from Joel Halpern: 5804 1. Swapped order of "purposes" for ACP to match order in section 3. 5806 1. Added notion about manageability of the ACP going beyond RFC7575 5807 (before discussion of stable connectivity). 5809 2. Changed definition of Intent to be the same as in the reference model 5810 (policy language instead of API). 5812 6.1.1 Changed BNF specification so that a local-part without acp- 5813 address (for future extensions) would not be rfcSELF.+rsub but the 5814 simpler rfcSELF+rsub. Added explanation why rsub is in the local-part. 5816 Tried to eliminate unnecessary references to VRF to minimize 5817 assumptions about how the system is designed. 5819 6.1.3 Explained how to make the CDP reachable via the ACP. 5821 6.7.2 Made it clearer that constrained devices MUST support DTLS if 5822 they cannot support IPsec. 5824 6.8.2.1 Clarified first paragraph (TCP retransmissions lightweight). 5826 6.11.1 Fixed up RPL profile text - to remove "VRF". Text was also 5827 buggy: it mentioned the control plane, but it is a forwarding/silicon issue to 5828 have these headers. 5830 6.12.5 Clarified how the link-local ACP channel address can be derived, 5831 and how not. 5833 8.2.1 Fixed up text to distinguish between the configuration and the model 5834 describing the parameters of the configuration (the spec only provides the 5835 parameter model). 5837 Various nits. 5839 15.21. draft-ietf-anima-autonomic-control-plane-15 5841 Only reshuffling and formatting changes; we wanted to allow 5842 reviewers later to easily compare -13 with -14, and these changes in 5843 -15 would mess that up too much. 5845 Increased TOC depth to 4. 5847 Separated and reordered section 10 into an operational and a 5848 background and futures section.
The background and futures text could 5849 also become appendices, if the layout of appendices in the RFC format 5850 weren't so unfortunate that one really wants to avoid using them (they 5851 come all the way after a lot of text, like references, that stops most 5852 readers from proceeding any further). 5854 15.22. wish-list 5856 From -13 review from Pascal Thubert: Picture to show dual-NOC routing 5857 limitation. 5859 [RFC Editor: Question: Is it possible to change the first occurrences 5860 of [RFCxxxx] references to "rfcxxx title" [RFCxxxx]? The XML2RFC 5861 format does not seem to offer such a format, but I did not want 5862 the first 50 references to be duplicated - one reference for 5863 mentioning the title and one for the RFC number.] 5865 16. References 5867 16.1. Normative References 5869 [I-D.ietf-anima-grasp] 5870 Bormann, C., Carpenter, B., and B. Liu, "A Generic 5871 Autonomic Signaling Protocol (GRASP)", draft-ietf-anima- 5872 grasp-15 (work in progress), July 2017. 5874 [I-D.ietf-cbor-cddl] 5875 Birkholz, H., Vigano, C., and C. Bormann, "Concise data 5876 definition language (CDDL): a notational convention to 5877 express CBOR data structures", draft-ietf-cbor-cddl-02 5878 (work in progress), February 2018. 5880 [RFC1034] Mockapetris, P., "Domain names - concepts and facilities", 5881 STD 13, RFC 1034, DOI 10.17487/RFC1034, November 1987, 5882 . 5884 [RFC3810] Vida, R., Ed. and L. Costa, Ed., "Multicast Listener 5885 Discovery Version 2 (MLDv2) for IPv6", RFC 3810, 5886 DOI 10.17487/RFC3810, June 2004, 5887 . 5889 [RFC4191] Draves, R. and D. Thaler, "Default Router Preferences and 5890 More-Specific Routes", RFC 4191, DOI 10.17487/RFC4191, 5891 November 2005, . 5893 [RFC4193] Hinden, R. and B. Haberman, "Unique Local IPv6 Unicast 5894 Addresses", RFC 4193, DOI 10.17487/RFC4193, October 2005, 5895 . 5897 [RFC4291] Hinden, R. and S. Deering, "IP Version 6 Addressing 5898 Architecture", RFC 4291, DOI 10.17487/RFC4291, February 5899 2006, . 5901 [RFC4301] Kent, S. and K. 
Seo, "Security Architecture for the 5902 Internet Protocol", RFC 4301, DOI 10.17487/RFC4301, 5903 December 2005, . 5905 [RFC4861] Narten, T., Nordmark, E., Simpson, W., and H. Soliman, 5906 "Neighbor Discovery for IP version 6 (IPv6)", RFC 4861, 5907 DOI 10.17487/RFC4861, September 2007, 5908 . 5910 [RFC4862] Thomson, S., Narten, T., and T. Jinmei, "IPv6 Stateless 5911 Address Autoconfiguration", RFC 4862, 5912 DOI 10.17487/RFC4862, September 2007, 5913 . 5915 [RFC5234] Crocker, D., Ed. and P. Overell, "Augmented BNF for Syntax 5916 Specifications: ABNF", STD 68, RFC 5234, 5917 DOI 10.17487/RFC5234, January 2008, 5918 . 5920 [RFC5246] Dierks, T. and E. Rescorla, "The Transport Layer Security 5921 (TLS) Protocol Version 1.2", RFC 5246, 5922 DOI 10.17487/RFC5246, August 2008, 5923 . 5925 [RFC5280] Cooper, D., Santesson, S., Farrell, S., Boeyen, S., 5926 Housley, R., and W. Polk, "Internet X.509 Public Key 5927 Infrastructure Certificate and Certificate Revocation List 5928 (CRL) Profile", RFC 5280, DOI 10.17487/RFC5280, May 2008, 5929 . 5931 [RFC5322] Resnick, P., Ed., "Internet Message Format", RFC 5322, 5932 DOI 10.17487/RFC5322, October 2008, 5933 . 5935 [RFC6347] Rescorla, E. and N. Modadugu, "Datagram Transport Layer 5936 Security Version 1.2", RFC 6347, DOI 10.17487/RFC6347, 5937 January 2012, . 5939 [RFC6550] Winter, T., Ed., Thubert, P., Ed., Brandt, A., Hui, J., 5940 Kelsey, R., Levis, P., Pister, K., Struik, R., Vasseur, 5941 JP., and R. Alexander, "RPL: IPv6 Routing Protocol for 5942 Low-Power and Lossy Networks", RFC 6550, 5943 DOI 10.17487/RFC6550, March 2012, 5944 . 5946 [RFC6552] Thubert, P., Ed., "Objective Function Zero for the Routing 5947 Protocol for Low-Power and Lossy Networks (RPL)", 5948 RFC 6552, DOI 10.17487/RFC6552, March 2012, 5949 . 5951 [RFC6553] Hui, J. and JP. 
Vasseur, "The Routing Protocol for Low- 5952 Power and Lossy Networks (RPL) Option for Carrying RPL 5953 Information in Data-Plane Datagrams", RFC 6553, 5954 DOI 10.17487/RFC6553, March 2012, 5955 . 5957 [RFC7030] Pritikin, M., Ed., Yee, P., Ed., and D. Harkins, Ed., 5958 "Enrollment over Secure Transport", RFC 7030, 5959 DOI 10.17487/RFC7030, October 2013, 5960 . 5962 [RFC7296] Kaufman, C., Hoffman, P., Nir, Y., Eronen, P., and T. 5963 Kivinen, "Internet Key Exchange Protocol Version 2 5964 (IKEv2)", STD 79, RFC 7296, DOI 10.17487/RFC7296, October 5965 2014, . 5967 [RFC7676] Pignataro, C., Bonica, R., and S. Krishnan, "IPv6 Support 5968 for Generic Routing Encapsulation (GRE)", RFC 7676, 5969 DOI 10.17487/RFC7676, October 2015, 5970 . 5972 [RFC8174] Leiba, B., "Ambiguity of Uppercase vs Lowercase in RFC 5973 2119 Key Words", BCP 14, RFC 8174, DOI 10.17487/RFC8174, 5974 May 2017, . 5976 16.2. Informative References 5978 [AR8021] IEEE SA-Standards Board, "IEEE Standard for Local and 5979 metropolitan area networks - Secure Device Identity", 5980 December 2009, . 5983 [I-D.ietf-anima-bootstrapping-keyinfra] 5984 Pritikin, M., Richardson, M., Behringer, M., Bjarnason, 5985 S., and K. Watsen, "Bootstrapping Remote Secure Key 5986 Infrastructures (BRSKI)", draft-ietf-anima-bootstrapping- 5987 keyinfra-15 (work in progress), April 2018. 5989 [I-D.ietf-anima-prefix-management] 5990 Jiang, S., Du, Z., Carpenter, B., and Q. Sun, "Autonomic 5991 IPv6 Edge Prefix Management in Large-scale Networks", 5992 draft-ietf-anima-prefix-management-07 (work in progress), 5993 December 2017. 5995 [I-D.ietf-anima-reference-model] 5996 Behringer, M., Carpenter, B., Eckert, T., Ciavaglia, L., 5997 and J. Nobre, "A Reference Model for Autonomic 5998 Networking", draft-ietf-anima-reference-model-06 (work in 5999 progress), February 2018. 6001 [I-D.ietf-netconf-zerotouch] 6002 Watsen, K., Abrahamsson, M., and I. 
Farrer, "Zero Touch 6003 Provisioning for Networking Devices", draft-ietf-netconf- 6004 zerotouch-21 (work in progress), March 2018. 6006 [I-D.ietf-roll-applicability-template] 6007 Richardson, M., "ROLL Applicability Statement Template", 6008 draft-ietf-roll-applicability-template-09 (work in 6009 progress), May 2016. 6011 [I-D.ietf-roll-useofrplinfo] 6012 Robles, I., Richardson, M., and P. Thubert, "When to use 6013 RFC 6553, 6554 and IPv6-in-IPv6", draft-ietf-roll- 6014 useofrplinfo-23 (work in progress), May 2018. 6016 [LLDP] IEEE SA-Standards Board, "IEEE Standard for Local and 6017 Metropolitan Area Networks: Station and Media Access 6018 Control Connectivity Discovery", June 2016, 6019 . 6022 [MACSEC] IEEE SA-Standards Board, "IEEE Standard for Local and 6023 Metropolitan Area Networks: Media Access Control (MAC) 6024 Security", June 2006, 6025 . 6028 [RFC1112] Deering, S., "Host extensions for IP multicasting", STD 5, 6029 RFC 1112, DOI 10.17487/RFC1112, August 1989, 6030 . 6032 [RFC1918] Rekhter, Y., Moskowitz, B., Karrenberg, D., de Groot, G., 6033 and E. Lear, "Address Allocation for Private Internets", 6034 BCP 5, RFC 1918, DOI 10.17487/RFC1918, February 1996, 6035 . 6037 [RFC2315] Kaliski, B., "PKCS #7: Cryptographic Message Syntax 6038 Version 1.5", RFC 2315, DOI 10.17487/RFC2315, March 1998, 6039 . 6041 [RFC2821] Klensin, J., Ed., "Simple Mail Transfer Protocol", 6042 RFC 2821, DOI 10.17487/RFC2821, April 2001, 6043 . 6045 [RFC4007] Deering, S., Haberman, B., Jinmei, T., Nordmark, E., and 6046 B. Zill, "IPv6 Scoped Address Architecture", RFC 4007, 6047 DOI 10.17487/RFC4007, March 2005, 6048 . 6050 [RFC4364] Rosen, E. and Y. Rekhter, "BGP/MPLS IP Virtual Private 6051 Networks (VPNs)", RFC 4364, DOI 10.17487/RFC4364, February 6052 2006, . 6054 [RFC4429] Moore, N., "Optimistic Duplicate Address Detection (DAD) 6055 for IPv6", RFC 4429, DOI 10.17487/RFC4429, April 2006, 6056 . 6058 [RFC4541] Christensen, M., Kimball, K., and F. 
Solensky, 6059 "Considerations for Internet Group Management Protocol 6060 (IGMP) and Multicast Listener Discovery (MLD) Snooping 6061 Switches", RFC 4541, DOI 10.17487/RFC4541, May 2006, 6062 . 6064 [RFC4604] Holbrook, H., Cain, B., and B. Haberman, "Using Internet 6065 Group Management Protocol Version 3 (IGMPv3) and Multicast 6066 Listener Discovery Protocol Version 2 (MLDv2) for Source- 6067 Specific Multicast", RFC 4604, DOI 10.17487/RFC4604, 6068 August 2006, . 6070 [RFC4607] Holbrook, H. and B. Cain, "Source-Specific Multicast for 6071 IP", RFC 4607, DOI 10.17487/RFC4607, August 2006, 6072 . 6074 [RFC4610] Farinacci, D. and Y. Cai, "Anycast-RP Using Protocol 6075 Independent Multicast (PIM)", RFC 4610, 6076 DOI 10.17487/RFC4610, August 2006, 6077 . 6079 [RFC4941] Narten, T., Draves, R., and S. Krishnan, "Privacy 6080 Extensions for Stateless Address Autoconfiguration in 6081 IPv6", RFC 4941, DOI 10.17487/RFC4941, September 2007, 6082 . 6084 [RFC5321] Klensin, J., "Simple Mail Transfer Protocol", RFC 5321, 6085 DOI 10.17487/RFC5321, October 2008, 6086 . 6088 [RFC5790] Liu, H., Cao, W., and H. Asaeda, "Lightweight Internet 6089 Group Management Protocol Version 3 (IGMPv3) and Multicast 6090 Listener Discovery Version 2 (MLDv2) Protocols", RFC 5790, 6091 DOI 10.17487/RFC5790, February 2010, 6092 . 6094 [RFC5880] Katz, D. and D. Ward, "Bidirectional Forwarding Detection 6095 (BFD)", RFC 5880, DOI 10.17487/RFC5880, June 2010, 6096 . 6098 [RFC6241] Enns, R., Ed., Bjorklund, M., Ed., Schoenwaelder, J., Ed., 6099 and A. Bierman, Ed., "Network Configuration Protocol 6100 (NETCONF)", RFC 6241, DOI 10.17487/RFC6241, June 2011, 6101 . 6103 [RFC6335] Cotton, M., Eggert, L., Touch, J., Westerlund, M., and S. 6104 Cheshire, "Internet Assigned Numbers Authority (IANA) 6105 Procedures for the Management of the Service Name and 6106 Transport Protocol Port Number Registry", BCP 165, 6107 RFC 6335, DOI 10.17487/RFC6335, August 2011, 6108 . 
6110 [RFC6724] Thaler, D., Ed., Draves, R., Matsumoto, A., and T. Chown, 6111 "Default Address Selection for Internet Protocol Version 6 6112 (IPv6)", RFC 6724, DOI 10.17487/RFC6724, September 2012, 6113 . 6115 [RFC6762] Cheshire, S. and M. Krochmal, "Multicast DNS", RFC 6762, 6116 DOI 10.17487/RFC6762, February 2013, 6117 . 6119 [RFC6763] Cheshire, S. and M. Krochmal, "DNS-Based Service 6120 Discovery", RFC 6763, DOI 10.17487/RFC6763, February 2013, 6121 . 6123 [RFC6830] Farinacci, D., Fuller, V., Meyer, D., and D. Lewis, "The 6124 Locator/ID Separation Protocol (LISP)", RFC 6830, 6125 DOI 10.17487/RFC6830, January 2013, 6126 . 6128 [RFC7404] Behringer, M. and E. Vyncke, "Using Only Link-Local 6129 Addressing inside an IPv6 Network", RFC 7404, 6130 DOI 10.17487/RFC7404, November 2014, 6131 . 6133 [RFC7426] Haleplidis, E., Ed., Pentikousis, K., Ed., Denazis, S., 6134 Hadi Salim, J., Meyer, D., and O. Koufopavlou, "Software- 6135 Defined Networking (SDN): Layers and Architecture 6136 Terminology", RFC 7426, DOI 10.17487/RFC7426, January 6137 2015, . 6139 [RFC7575] Behringer, M., Pritikin, M., Bjarnason, S., Clemm, A., 6140 Carpenter, B., Jiang, S., and L. Ciavaglia, "Autonomic 6141 Networking: Definitions and Design Goals", RFC 7575, 6142 DOI 10.17487/RFC7575, June 2015, 6143 . 6145 [RFC7576] Jiang, S., Carpenter, B., and M. Behringer, "General Gap 6146 Analysis for Autonomic Networking", RFC 7576, 6147 DOI 10.17487/RFC7576, June 2015, 6148 . 6150 [RFC7721] Cooper, A., Gont, F., and D. Thaler, "Security and Privacy 6151 Considerations for IPv6 Address Generation Mechanisms", 6152 RFC 7721, DOI 10.17487/RFC7721, March 2016, 6153 . 6155 [RFC7761] Fenner, B., Handley, M., Holbrook, H., Kouvelas, I., 6156 Parekh, R., Zhang, Z., and L. Zheng, "Protocol Independent 6157 Multicast - Sparse Mode (PIM-SM): Protocol Specification 6158 (Revised)", STD 83, RFC 7761, DOI 10.17487/RFC7761, March 6159 2016, . 
6161 [RFC7950] Bjorklund, M., Ed., "The YANG 1.1 Data Modeling Language", 6162 RFC 7950, DOI 10.17487/RFC7950, August 2016, 6163 . 6165 [RFC8028] Baker, F. and B. Carpenter, "First-Hop Router Selection by 6166 Hosts in a Multi-Prefix Network", RFC 8028, 6167 DOI 10.17487/RFC8028, November 2016, 6168 . 6170 [RFC8126] Cotton, M., Leiba, B., and T. Narten, "Guidelines for 6171 Writing an IANA Considerations Section in RFCs", BCP 26, 6172 RFC 8126, DOI 10.17487/RFC8126, June 2017, 6173 . 6175 [RFC8366] Watsen, K., Richardson, M., Pritikin, M., and T. Eckert, 6176 "A Voucher Artifact for Bootstrapping Protocols", 6177 RFC 8366, DOI 10.17487/RFC8366, May 2018, 6178 . 6180 [RFC8368] Eckert, T., Ed. and M. Behringer, "Using an Autonomic 6181 Control Plane for Stable Connectivity of Network 6182 Operations, Administration, and Maintenance (OAM)", 6183 RFC 8368, DOI 10.17487/RFC8368, May 2018, 6184 . 6186 Authors' Addresses 6188 Toerless Eckert (editor) 6189 Huawei USA - Futurewei Technologies Inc. 6190 2330 Central Expy 6191 Santa Clara 95050 6192 USA 6194 Email: tte+ietf@cs.fau.de 6195 Michael H. Behringer (editor) 6197 Email: michael.h.behringer@gmail.com 6199 Steinthor Bjarnason 6200 Arbor Networks 6201 2727 South State Street, Suite 200 6202 Ann Arbor MI 48104 6203 United States 6205 Email: sbjarnason@arbor.net