Network Working Group                                          J. Watson
Internet-Draft                                               UC Berkeley
Intended status: Experimental                                      S. Li
Expires: April 26, 2019                                              EFF
                                                                  C. Man
                                                     Stanford University
                                                        October 23, 2018

                     Delegated Distributed Mappings
                      draft-watson-dinrg-delmap-01

Abstract

Delegated namespaces underpin almost every Internet-scale system - domain name management, IP address allocation, Public Key Infrastructure, etc. - but are centrally managed by entities with unilateral revocation abilities and no common interface. This draft specifies a generalized scheme for delegation that supports explicit time-bound guarantees and limits misuse. Mappings may be secured by any general purpose distributed consensus protocol; clients can query the local state of any number of participants and receive the correct result barring a compromise at the consensus layer.

Status of This Memo

This Internet-Draft is submitted in full conformance with the provisions of BCP 78 and BCP 79.

Internet-Drafts are working documents of the Internet Engineering Task Force (IETF). Note that other groups may also distribute working documents as Internet-Drafts. The list of current Internet-Drafts is at https://datatracker.ietf.org/drafts/current/.

Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress."

This Internet-Draft will expire on April 26, 2019.
Copyright Notice

Copyright (c) 2018 IETF Trust and the persons identified as the document authors. All rights reserved.

This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents (https://trustee.ietf.org/license-info) in effect on the date of publication of this document. Please review these documents carefully, as they describe your rights and restrictions with respect to this document. Code Components extracted from this document must include Simplified BSD License text as described in Section 4.e of the Trust Legal Provisions and are provided without warranty as described in the Simplified BSD License.

Table of Contents

1. Introduction
2. Structure
   2.1. Cells
   2.2. Tables
   2.3. Root Key Listing
3. Interacting with a Consensus Node
   3.1. Storage Format
   3.2. Client Interface
4. Consensus-layer requirements
   4.1. Interface
   4.2. Validation
5. Security Considerations
   5.1. DoS mitigation
   5.2. Consensus node compromise
   5.3. Upstream compromise
   5.4. Root listing governance
6. References
   6.1. Normative References
   6.2. Informative References
Acknowledgments
Authors' Addresses

1. Introduction

Internet entities rely heavily on delegated namespaces to function properly. Typical web services have been delegated a domain name (after negotiation with an appropriate registrar) under which they host the entirety of their public-facing content, or they obtain a public IP range from their ISP, which itself has been delegated through intermediary registries by the Internet Numbers Registry [RFC7249]. An enormous amount of value and trust is therefore placed in these assignments (in this draft, _mappings_), yet they are dangerously ephemeral. Delegating authorities, either maliciously or accidentally, can unilaterally revoke or replace mappings they've made, compromising infrastructure security. Presented in this draft is a generalized mechanism for securely managing such mappings and their delegations by publishing authenticated, time-locked commitments to namespace ownership entries. Known entities identified by public key are assigned namespaces (e.g. domain prefixes) under which they are authorized to create mapping records, or _cells_. A namespace's cells are grouped into logical units we term _tables_.

Alone, this structure does not ensure security, given that any hosting server could arbitrarily modify cells or serve clients bogus entries.
We maintain security and consistency through a distributed consensus algorithm. While detailed descriptions of varying consensus protocols are out of scope for this draft, we provide a general-purpose interface between the delegation structure and a consensus layer. At a minimum, the consensus layer must apply mapping updates in a consistent order, prevent equivocation, disallow unauthorized modification, and grant consensus nodes the ability to enforce high-level rules associated with the tables. We find that federated protocols such as the Stellar Consensus Protocol [I-D.mazieres-dinrg-scp] are promising given their open participation, the broad diversity of interests among consensus participants, and the accountability they provide for malicious behavior. Clients may query any number of trusted servers to retrieve a correct result barring widespread collusion.

The ability to impose consistency yields several useful properties. The foremost is enforcing delegation semantics: a table's authority may choose to delegate a portion of its own namespace recursively, but must document the specific range and delegee in one of the table's cells. Since each delegation forms a new table, for which the delegee is the sole authority, assigned namespace ranges must be unique. Consensus can also enforce that the delegating authority not make modifications to any delegated table, so the authority need not be trusted by the delegee.

In addition, we provide explicit support for "commitments" that enforce a lower bound on the duration of delegations. While a cell's commitment is active, changes to it that would otherwise be valid are disallowed, including revoking delegations. Upon expiration, however, the same namespace may be delegated to another party.

Finally, decentralized infrastructure is highly visible and commonly misused. As mappings are replicated among consensus nodes, the primary concern is resource exhaustion. We limit undesired abuse of the structure by embedding recursive scale restrictions inside mappings, verified and ratified at consensus. Combined with time-bounded delegations, this ensures that the system is resistant to spam in the short term and can remove misbehaving hierarchies in the long term.

The remainder of this draft specifies the structure for authenticated mapping management as well as its interfaces to consensus protocol implementations and users.

2. Structure

Trust within the delegation structure is based on public key signatures. Namespace authorities must sign mapping additions, modifications, delegations, and revocations to their table as proof to the consensus participants that such changes are legitimate. For the sake of completeness, the public key and signature types are detailed below. All types in this draft are described in XDR [RFC4506].

   typedef opaque publickey<>;   /* Typically a 256-byte (2048-bit) RSA public key */

   struct signature {
       publickey pk;
       opaque data<>;
   };

2.1. Cells

Cells are the basic unit of the delegation structure. In general, they compose an authenticated record of a mapping that may be queried by clients. We describe two types of cells:

   enum celltype {
       VALUE = 0,
       DELEGATE = 1
   };

Value cells store individual mapping entries.
They resolve a lookup key to an arbitrary value, for example, an encryption key associated with an email address or the zone files associated with a particular domain. The public key of the cell's owner (e.g. the email account holder, the domain owner) is also included, as well as a signature authenticating the current version of the cell. The cell's "update_sig" must be made by the "owner_key" or, when the cell is first created, by the authority of the table containing the cell, as described below. The cell owner may rotate their public key at any time by signing the update with the old key.

   struct valuecell {
       opaque value<>;
       publickey owner_key;
       signature update_sig;   /* Table signs cell creation, owner signs updates */
   };

Delegate cells have a similar structure but different semantics. Rather than resolving to an individual mapping, they authorize the _delegee_ to create arbitrary value cells within an assigned namespace. This namespace must be a subset of the _delegator_'s own namespace range. Like the table authority, the delegee is uniquely identified by their public key. Each delegate cell and any subsequent updates to it are signed by the delegator - this ensures that the delegee cannot unilaterally modify its namespace, which limits the range of legitimate mappings it can create. Finally, an _allowance_ must be provided to limit the upper-bound size of a delegated table. A negative allowance value indicates that no limit is placed on the table. Given that the delegee has complete control over the contents of their table, granting a "delegatecell" an unlimited allowance is emphatically not recommended; bounded allowances limit the storage burden on consensus nodes. A table with a non-negative allowance may not grant a delegee a negative one. This limit is recursive along delegations - the total number of cells in a table plus the sum of allowances among its "delegatecells" must be less than or equal to the table's allowance, if non-negative. This must be validated during consensus before adding new cells to a table, which every consensus node can do because table entry counts are publicly visible.

   struct delegatecell {
       opaque namespace<>;
       publickey delegee;
       signature authority_sig;   /* Delegator solely controls inclusion in table */
       int allowance;
   };

Both cell types share a set of common data members, namely a set of UNIX timestamps recording the creation time and, if applicable, the time of last modification. An additional "commitment" timestamp must be present in every mapping. It is an explicit guarantee on behalf of the table's authority that the mapping will remain valid until at least the specified time. Therefore, while value cell owners may modify their cell at any time (e.g. key rotation), the authority cannot change (or remove) the cell until its commitment expires, as enforced by the consensus nodes. Similarly, delegated namespaces are guaranteed to be valid until the commitment timestamp expires, although after expiration they can be reassigned to other parties.

Likely, most long-term delegations will be renewed (with a new commitment timestamp) before the expiration of the current period. The tradeoff between protecting delegees from arbitrary authority action and allowing quick reconfiguration is customizable to the use case. Larger services should use longer delegation periods for stability, whereas smaller namespaces with fewer users should use shorter delegations.
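As a non-normative illustration of these commitment rules, the following Python sketch shows the check a consensus node might apply before accepting a replacement cell. The field names mirror the XDR definitions below, while the simplified "Cell" type and the "verify_sig" helper are assumptions of the example, not part of this specification.

   # Illustrative sketch only: "Cell" flattens the XDR cell/valuecell pair and
   # verify_sig() stands in for RSA signature verification over the new cell.
   from dataclasses import dataclass

   @dataclass
   class Cell:
       owner_key: bytes
       commitment_time: int   # 64-bit UNIX timestamp
       value: bytes

   def verify_sig(sig: bytes, key: bytes, cell: Cell) -> bool:
       return False           # placeholder for real signature verification

   def update_permitted(old: Cell, new: Cell, sig: bytes,
                        authority_key: bytes, now: int) -> bool:
       # A commitment may be extended but never shortened.
       if new.commitment_time < old.commitment_time:
           return False
       # The cell owner may update at any time (e.g. key rotation).
       if verify_sig(sig, old.owner_key, new):
           return True
       # The table authority may change or reclaim the cell only after the
       # committed period has expired.
       return verify_sig(sig, authority_key, new) and old.commitment_time <= now

The same conditions reappear as part of the validation rules in Section 4.2.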
   union innercell switch (celltype type) {
   case VALUE:
       valuecell vcell;
   case DELEGATE:
       delegatecell dcell;
   };

   struct cell {
       unsigned hyper create_time;      /* 64-bit UNIX timestamps */
       unsigned hyper *revision_time;
       unsigned hyper commitment_time;
       innercell c;
   };

2.2. Tables

Every cell is stored in a table, which groups all the mappings created by a single authority public key for a specific namespace. Individual cells are referenced by an application-specific label in a lookup table. _The combination of a lookup key and a referenced cell value forms a mapping_.

   struct tableentry {
       opaque lookup_key<>;
       cell c;
   };

Delegating the whole or part of a namespace requires adding a new lookup key for the namespace and a matching delegate cell. Each delegation must be validated in the context of the other table entries and the table itself. For example, the owner of a table delegated a /8 IPv4 block must not delegate the same /16 sub-block to two different tables.

   struct table {
       tableentry entries<>;
   };

To generalize correctness, each table must conform to a prefix-based rule: for every cell "c" in a table controlling namespace "x", "x" must be a prefix of "c"'s lookup key, and there cannot exist another cell "c'" such that "c"'s lookup key is a prefix of "c'"'s. While many other hierarchical delegation mechanisms exist, many can be simply represented in a prefix scheme. For example, suffix-based delegations, including domain name hierarchies, can use reversed keys internally and perform a swap in the application layer before displaying any results to clients. Likewise, 'flat' delegation schemes where there is no explicit restriction can use an empty prefix.

2.3. Root Key Listing

Each linked group of delegation tables for a particular namespace is rooted by a public key stored in a flat root key listing, which is the entry point for lookup operations. Well-known application identifier strings denote the namespace they control. We describe below how lookups can be accomplished on the mappings.

   struct rootentry {
       publickey namespace_root_key;
       string application_identifier<>;
       signature listing_sig;
       int allowance;
   };

   struct rootlisting {
       rootentry roots<>;
   };

A significant question is how to properly administer entries in this listing, which we address in Security Considerations.

3. Interacting with a Consensus Node

3.1. Storage Format

Delegation tables are stored in a Merkle hash tree, described in detail in [RFC6962]. In particular, it enables efficient lookups and logarithmic proofs of existence in the tree, and prevents equivocation between different participants. Among others, we can leverage Google's [Trillian] Merkle tree implementation, which generalizes the data structures used in Certificate Transparency. In map mode, the tree can manage arbitrary key-value pairs at scale, but critically, this requires flattening the delegation links such that each table may be queried, while ensuring that a full lookup from the application root is made for each mapping.
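As a non-normative sketch of this flattening, the Python fragment below derives the flat map key for a delegated table and for a cell within it. The byte-level encoding (plain concatenation of raw keys) is an assumption of the example; the components themselves follow the construction given next.

   # Illustrative only: the draft specifies which components are concatenated,
   # not their byte-level encoding; raw concatenation is assumed here.

   def table_key(app_id: bytes, root_key: bytes, delegee_keys: list) -> bytes:
       # Root tables live at app_id || namespace_root_key; each delegation
       # along the chain appends the delegee's public key.
       key = app_id + root_key
       for delegee in delegee_keys:
           key = key + delegee
       return key

   def cell_key(app_id: bytes, root_key: bytes, delegee_keys: list,
                lookup_key: bytes) -> bytes:
       # Individual entries append the namespace lookup key to their table key.
       return table_key(app_id, root_key, delegee_keys) + lookup_key

Each delegated table thus remains individually addressable in the flat map while its key still encodes the full path from the application root.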
Given a "rootentry", the corresponding table in the Merkle tree can be queried at the following key (where || refers to concatenation):

   root_table_name = app_id || namespace_root_key

It follows that tables for delegated namespaces are found at:

   table = root_table_name || delegee_key_1 || ... || delegee_key_n

And finally, individual entries are identified by the namespace lookup key:

   cell = table || desired_lookup_key

Once an entry is found in the tree, a logarithmic proof can be constructed with the hashes of the siblings of each node on the tree's path to the entry.

   typedef opaque hash[32];   /* e.g. a SHA-256 digest */

   struct merkleproof {
       hash sibling_hashes<>;
       cell entry_cell;
       signature tree_sig;
   };

The entry is hashed together with each "sibling_hash" - if the result matches the known tree root hash, then the entry must have been in the tree.

3.2. Client Interface

The presence of a natural mapping structure motivates an external client interface similar to a key-value store.

   struct MerkleRootOperation { };

   struct MerkleRootReturn {
       opaque root_hash[32];
       signature tree_sig;
   };

It is important to note that the client should not rely on a root hash provided by a single server to verify a "merkleproof"; instead, it should query multiple consensus nodes using this interface. Upon discovering that different servers are advertising non-matching hashes, the conflicting signed responses can be used to prove to other clients and nodes that one or more malicious nodes are equivocating.

   enum ReturnCode {
       CELL = 0,
       TABLE = 1,
       ERROR = 2,
       SUCCESS = 3
   };

   struct GetOperation {
       string application_identifier<>;
       opaque full_lookup_key<>;
   };

   union GetReturn switch (ReturnCode ret) {
   case CELL:
       cell value;
       merkleproof p;
   case TABLE:
       table t;
       merkleproof p;
   case ERROR:
       string reason<>;
   };

Given an application identifier and the fully-qualified lookup key, the map described in the previous section can be searched recursively. At each table, we find the cell whose name matches a prefix of the desired lookup key. If the cell contains a "valuecell", it is returned if the cell's key matches the lookup key exactly; otherwise an "ERROR" is returned. If the cell contains a "delegatecell", it must contain the key for the next table, on which the process is repeated. If no cell is found by prefix-matching, the node should return "ERROR" if the key has not been fully resolved, else the table itself (containing all of its current cells) is provided to the client. As in every interaction with the delegated mapping structure, users should verify the attached proof. Verifying existence of an entry follows from the same method.

   struct SetOperation {
       string application_identifier<>;
       opaque full_lookup_key<>;
       cell c;
   };

   struct SetRootOperation {
       rootentry e;
       bool remove;
   };

   union SetReturn switch (ReturnCode ret) {
   case SUCCESS:
       void;
   case ERROR:
       string reason<>;
   };

Creating or updating a cell at a specified path once again requires the full lookup key, as well as the new version of the cell to place. The new cell must be well-formed under the validation checks described in Section 4.2, else an "ERROR" is returned. For example, updating a cell's owner without a signature by the previous owning key should not succeed. Both value cells and new or updated delegations may be created through this method. Removing cells from tables (after their commitment timestamps have expired) can be accomplished by replacing the value or delegated namespace with an empty value and setting the owner's key to that of the table authority. Asking the consensus layer to approve a new root entry follows a similar process, although the application identifier and lookup key are unnecessary (see "SetRootOperation"). Nodes can also trigger votes to remove entries from the root key listing to redress misbehaving applications.
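As a non-normative illustration of these client-side checks, the sketch below verifies an inclusion proof against a root hash and cross-checks the roots returned by several nodes. The use of SHA-256 and the hashing order are assumptions of the example; the precise tree hashing rules are those of the underlying implementation (e.g. [RFC6962]/[Trillian]).

   # Illustrative sketch: hashing order and domain separation follow the
   # underlying tree implementation; tree_sig verification is omitted here.
   import hashlib

   def verify_inclusion(entry: bytes, sibling_hashes: list, root_hash: bytes) -> bool:
       # Fold the entry with each sibling hash and compare against the root.
       node = hashlib.sha256(entry).digest()
       for sibling in sibling_hashes:
           node = hashlib.sha256(node + sibling).digest()
       return node == root_hash

   def roots_consistent(signed_roots: list) -> bool:
       # signed_roots: (root_hash, tree_sig) pairs gathered from several nodes.
       # Non-matching hashes are evidence of equivocation that can be shown
       # to other clients and nodes.
       return len({root for root, _sig in signed_roots}) == 1

A cautious client first gathers roots via "MerkleRootOperation" from multiple nodes, requires them to agree, and only then checks the "merkleproof" attached to a "GetReturn" against that root.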
4. Consensus-layer requirements

Safety is ensured by reaching distributed consensus on the state of the tree. The general nature of a Merkle tree as discussed in the previous section enables almost any consensus protocol to support delegated mappings, with varying guarantees on the conditions under which safety is maintained and different trust implications. For example, a deployment on a cluster of nodes running a classic Byzantine Fault Tolerant consensus protocol such as [PBFT] requires a limited, static membership and can tolerate compromises in up to a third of its nodes. In comparison, proof-of-work schemes, including many cryptocurrencies, have open membership but rely on economic incentives and distributed control of hashing power to provide safety, and federated consensus algorithms like the Stellar Consensus Protocol (SCP) [I-D.mazieres-dinrg-scp] combine dynamic membership with real-world trust relationships but require careful configuration. Determining which scheme, if any, is the best protocol to support authenticated delegation is an open question.

4.1. Interface

At a minimum, the consensus layer is expected to provide mechanisms for nodes to

1. Submit new values (commonly cell updates, but also root listing updates) for consensus

2. Receive externalized values to which the protocol has committed

3. Validate values received from other nodes for each iteration of the protocol, as specified below

Most input values to the consensus layer will consist of cell updates, but the same mechanism is ideally suited for updates to the root key listing, as previously discussed. Specific protocols may require additional functionality from the delegated mapping layer, which should be implemented to ensure that valid updates are eventually applied (assuming a working consensus layer).

4.2. Validation

Incorrect (potentially malicious) updates to the Merkle tree should be rejected by nodes participating in consensus. Given the known prefix-delegation scheme, each node can apply the same validation procedure without requiring table-specific knowledge. Validation also provides a simple mechanism for rate-limiting actors attempting to perform DoS attacks, as only the most recent change to a particular cell need be retained, and the total number of updates to any particular table, or overall, can be capped. Upon any modification to the delegation tables (a "SetOperation" or "SetRootOperation" as defined in the previous section), the submitted change to the consensus layer should:

1. Reference an existing application identifier in the root key listing and a valid table, if applicable.
2. For updates to all cells:

   * contain an unmodified "create_time", or a current timestamp if the cell is new

   * contain a current "revision_time" in the case of an update

   * set a "commitment_time" greater than or equal to the previous commitment

   * result in a total table size ("valuecell" count + "delegatecell" allowances) less than or equal to the table allowance, if not unlimited

3. For updates to value cells:

   * be signed with the table authority's public key for new mappings

   * be signed only by the current "owner_key" if the cell commitment has not yet expired, or by either the owner or table authority upon expiration, for updates to the value or owner keys

   * have a lookup key in the table that belongs to the authority's namespace

   * not conflict with other cells in its table by breaking the prefix-delegation property

4. For updates to delegate cells:

   * be signed by the table authority's public key for new delegations or updates

   * retain the same "namespace" and "delegee" values unless the "commitment_time" has expired

   * contain a valid namespace owned by the authority delegating the cell

   * not conflict with other values or delegations in the same table by breaking the prefix-delegation property

   * not grant an unlimited (negative) allowance unless the delegating table also has an unlimited allowance

Only after a round of the consensus protocol is successful are the changes exposed to client lookups.

5. Security Considerations

5.1. DoS mitigation

Full consensus nodes must maintain complete, up-to-date table state in order to correctly validate and apply updates. A significant concern is limiting the computation and storage resources expended as the result of malicious entities operating in the delegation structure. This is doubly important because of the explicit lack of trust from a delegee toward its delegating namespace. While this prevents higher-level organizations from making arbitrary changes to delegated namespaces (as is currently possible in the CA hierarchy), a delegee may choose to incur unreasonable storage costs by filling their table with millions of garbage cells. Of course, since the delegee holds a commitment to control the specific namespace for a certain time period, these cells cannot be removed. We recognize that this requires the provider to place some amount of trust in their users to consume resources responsibly, and we attempt to limit misuse.

The allowances included in each delegation work to address this, since they explicitly define an agreement between the delegator and delegee as to the expected size required for correct operation. Since allowances are provided at the root level as well, there exists (ignoring unlimited allowances) an upper bound on the total number of cells that consensus nodes should expect to be required to maintain. Importantly, the ability to remove the limit on a table's size (as well as on further delegations) increases the risk of misuse but provides significant flexibility for well-known systems like DNS and IP allocation. This can be mitigated by assigning unlimited allowances only to well-known entities for which real-world accountability limits the urge to misbehave. Consensus nodes are also encouraged to rate-limit excessive "SetOperation"s from clients to further limit this issue.
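As a non-normative illustration, the recursive allowance rule behind this bound can be written as the check a node evaluates before admitting a new cell or delegation; the simplified integer bookkeeping below is an assumption of the example rather than part of the wire format.

   # Illustrative sketch: allowance < 0 means "unlimited", as in delegatecell.

   def table_usage(value_cell_count: int, delegate_allowances: list) -> int:
       # Total size charged against a table's allowance; -1 marks "unbounded".
       if any(a < 0 for a in delegate_allowances):
           return -1
       return value_cell_count + sum(delegate_allowances)

   def within_allowance(table_allowance: int, value_cell_count: int,
                        delegate_allowances: list) -> bool:
       if table_allowance < 0:
           return True    # unlimited tables accept any usage
       usage = table_usage(value_cell_count, delegate_allowances)
       if usage < 0:
           return False   # bounded tables may not grant unlimited delegations
       return usage <= table_allowance

Because entry counts and granted allowances are publicly visible in the tree, every consensus node can evaluate this bound independently during validation (Section 4.2).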
5.2. Consensus node compromise

We rely on the safety properties of the underlying consensus layer to provide a consistent view of the delegated mapping tables. This ensures that no honest node will serve mappings to clients that have not succeeded at reaching consensus. There is nothing directly preventing compromised consensus nodes from maliciously serving entries (e.g. incorrect DNS zone records) to clients as they see fit; _however_, they must also provide an inclusion proof and expose their Merkle root hash. As noted previously, clients and other auditing parties may compare roots and discover misbehavior. The proof associated with a query is unequivocal evidence that is sufficient to ignore the compromised node in further consensus rounds. Past individual compromise, the exact point at which a network of consensus nodes can completely violate safety varies from protocol to protocol (a majority hashing-power attack in Bitcoin, no quorum intersection of well-behaved nodes in SCP, etc.). Thus, it is not secure to rely only on a small group of nodes hosted by one or two distinct entities for consensus, as they are easily targeted. The generalized delegated mappings mechanism described in this draft allows parties from radically different sectors to collectively provide security, limiting the impact of a small number of malicious nodes. Finally, in the case of an extremely large-scale compromise, mappings stored in prior trees with known root hashes remain valid - they cannot be modified without forging the inclusion proof whose root hash the client will verify.

5.3. Upstream compromise

As in any hierarchical delegation system, some amount of trust must be placed in the upstream provider. With this work, we strive to minimize the amount and nature of trust that any entity has to place in its upstream dependencies.

In a regular authenticated delegation system, the network must unilaterally trust a particular namespace operator not to equivocate (i.e. not to present different states of the database to different entities) and not to hijack control of a particular delegated entry. Under consensus, nodes can trust that a single entity cannot force the network to equivocate, and entities can audit the database for any misbehavior. The time-bound commitments to namespace delegations limit misuse to the brief renewal window, during which end-entities can monitor the network for misbehavior.

Although an upstream entity can still unilaterally censor and deny service to a particular entity for the namespace that it controls, its ability to hijack an existing delegee's entries is both limited and auditable.

5.4. Root listing governance

Relying on a centralized party in the long term to reliably and consistently manage the root key listing would create a centralized point of failure, so we consider alternative mechanisms for governing the root of the structure presented in this draft. Concurrent work on IP address allocation [IP-blockchain] explores using a Decentralized Autonomous Organization built on the Ethereum blockchain to manage all delegations, where proper behavior is economically motivated. We identify similar challenges: controlling spam and misuse while operating in a decentralized manner.

In this draft, we focus on enabling governance through consensus operations.
For that reason, potential root entries are nominated with a proposed allowance, which restricts the total number of cells the application can support at a given time. For large systems such as IP delegation, or for well-known entities like the IETF, the limit can be disabled as discussed earlier. It is important that decisions regarding root listing membership be made by the consensus nodes themselves, since they bear the largest burden to store tables, communicate with other nodes, and service client queries. If an application begins to run out of allowance (too many cells or large delegations), it can sign and nominate a new "rootentry" for the same application identifier with a larger value, at which point the other nodes can (given global knowledge of table sizes and growth rates, along with potential real-world information) determine whether or not to accept the change. Note that if the consensus layer is compromised as discussed above, the governance of the root listing also becomes insecure.

6. References

6.1. Normative References

[RFC4506]  Eisler, M., Ed., "XDR: External Data Representation Standard", STD 67, RFC 4506, DOI 10.17487/RFC4506, May 2006, <https://www.rfc-editor.org/info/rfc4506>.

[Trillian] Google, "Trillian: General Transparency", n.d., <https://github.com/google/trillian>.

6.2. Informative References

[I-D.mazieres-dinrg-scp] Barry, N., Losa, G., Mazieres, D., McCaleb, J., and S. Polu, "The Stellar Consensus Protocol (SCP)", draft-mazieres-dinrg-scp-04 (work in progress), June 2018.

[IP-blockchain] Angieri, S., Garcia-Martinez, A., Liu, B., Yan, Z., Wang, C., and M. Bagnulo, "An experiment in distributed Internet address management using blockchains", 2018.

[PBFT]     Castro, M. and B. Liskov, "Practical Byzantine Fault Tolerance", 1999.

[RFC6962]  Laurie, B., Langley, A., and E. Kasper, "Certificate Transparency", RFC 6962, DOI 10.17487/RFC6962, June 2013, <https://www.rfc-editor.org/info/rfc6962>.

[RFC7249]  Housley, R., "Internet Numbers Registries", RFC 7249, DOI 10.17487/RFC7249, May 2014, <https://www.rfc-editor.org/info/rfc7249>.

Acknowledgments

We are grateful for the contributions and feedback on design and applicability from David Mazieres, as well as help and feedback from many members of the IRTF DIN research group, including Dirk Kutscher and Melinda Shore.

This work was supported by the Stanford Center for Blockchain Research.

Authors' Addresses

Jean-Luc Watson
UC Berkeley
Cory Hall, 545W
Berkeley, CA 94720
US

Email: jlwatson@eecs.berkeley.edu

Sydney Li
Electronic Frontier Foundation
815 Eddy Street
San Francisco, CA 94109
US

Email: sydney@eff.org

Colin Man
Stanford University
353 Serra Mall
Stanford, CA 94305
US

Email: colinman@cs.stanford.edu