Network Working Group                                    P. Hallam-Baker
Internet-Draft                                          October 23, 2019
Intended status: Informational
Expires: April 25, 2020

Mathematical Mesh 3.0 Part VI: The Trust Mesh
draft-hallambaker-mesh-trust-03

Abstract

This paper extends Shannon's concept of a 'work factor', as applied to the evaluation of cryptographic algorithms, to provide an objective measure of the practical security offered by a protocol or infrastructure design. Considering the hypothetical work factor, based on an informed estimate of the probable capabilities of an attacker with unknown resources, provides a better indication of the relative strength of protocol designs than the computational work factor of the best-known attack.

The social work factor is a measure of the trustworthiness of a credential issued in a PKI, based on the cost of having obtained the credential through fraud at a certain point in time. Use of the social work factor allows Certificate Authority based trust models and peer-to-peer (Web of Trust) models to be evaluated in the same framework. The analysis demonstrates that both approaches have limitations and that in certain applications a blended model is superior to either by itself.

The final section of the paper describes a proposal to realize this blended model using the Mathematical Mesh.

[Note to Readers]

Discussion of this draft takes place on the MATHMESH mailing list (mathmesh@ietf.org), which is archived at https://mailarchive.ietf.org/arch/search/?email_list=mathmesh.

This document is also available online at http://mathmesh.com/Documents/draft-hallambaker-mesh-trust.html [1].

Status of This Memo

This Internet-Draft is submitted in full conformance with the provisions of BCP 78 and BCP 79.

Internet-Drafts are working documents of the Internet Engineering Task Force (IETF). Note that other groups may also distribute working documents as Internet-Drafts.
The list of current Internet-Drafts is at https://datatracker.ietf.org/drafts/current/.

Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress."

This Internet-Draft will expire on April 25, 2020.

Copyright Notice

Copyright (c) 2019 IETF Trust and the persons identified as the document authors. All rights reserved.

This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents (https://trustee.ietf.org/license-info) in effect on the date of publication of this document. Please review these documents carefully, as they describe your rights and restrictions with respect to this document. Code Components extracted from this document must include Simplified BSD License text as described in Section 4.e of the Trust Legal Provisions and are provided without warranty as described in the Simplified BSD License.

Table of Contents

   1.  Work Factor
     1.1.  Computational Work Factor
     1.2.  Hypothetical Work Factor
     1.3.  Known Unknowns
     1.4.  Defense in Depth
     1.5.  Mutual Reinforcement
     1.6.  Safety in Numbers
     1.7.  Cost Factor
     1.8.  Social Work Factor
       1.8.1.  Related work
   2.  The problem of trust
     2.1.  Existing approaches
       2.1.1.  Trust After First Use (TAFU)
       2.1.2.  Direct Trust
       2.1.3.  Certificate Authority
       2.1.4.  Web of Trust
       2.1.5.  Chained notary
       2.1.6.  A blended approach
   3.  The Mesh of Trust
     3.1.  Master Profile
     3.2.  Uniform Data Fingerprints
     3.3.  Strong Internet Names
     3.4.  Trust notary
     3.5.  Endorsement
     3.6.  Evaluating trust
   4.  Conclusions
   5.  Security Considerations
   6.  Acknowledgements
   7.  References
     7.1.  Normative References
     7.2.  Informative References
     7.3.  URIs
   Author's Address

1. Work Factor

Recent events have highlighted both the need for open standards-based security protocols and the possibility that the design of such protocols may have been sabotaged [Schneier2013]. We thus face two important and difficult challenges: first, to design an Internet security infrastructure that offers practical security against the class of attacks revealed, and second, to convince potential users that the proposed new infrastructure has not been similarly sabotaged.
The measure of the security of a system is the cost and difficulty of making a successful attack. The security of a safe is measured by the length of time it is expected to resist attack using a specified set of techniques. The security of a cryptographic algorithm against a known attack is measured by the computational cost of the attack.

This paper extends Shannon's concept of a 'work factor' [Shannon1949] to provide an objective measure of the security a protocol or infrastructure offers against other forms of attack.

1.1. Computational Work Factor

The term 'Computational Work Factor' is used to refer to Shannon's original concept.

One of Shannon's key insights was that the work factor of a cryptographic algorithm could be exponential. Adding a single bit to the key size of an ideal symmetric algorithm presents only a modest increase in computational effort for the defender but doubles the work factor for the attacker.

More precisely, the difficulty of breaking a cryptographic algorithm is generally measured by the work-factor ratio. If the cost of encrypting a block with 56-bit DES is x, the worst case cost of recovering the key through a brute force attack is 2^56 x. The security of DES has changed over time because x has fallen exponentially.

While the work factor is traditionally measured in terms of the number of operations, many cryptanalytic techniques permit memory use to be traded for computational complexity. An attack requiring 2^64 bytes of memory that reduces the number of operations required to break a 128-bit cipher to 2^64 is a rather lower concern than one which reduces the number of operations to 2^80. The term 'cost' is used to gloss over such distinctions.

The Computational Work Factor ratio WF-C(A) of a cryptographic algorithm A is the cost of the best-known attack divided by the cost of the algorithm itself.

1.2. Hypothetical Work Factor

Modern cryptographic algorithms use keys of 128 bits or more and present a work factor ratio of 2^128 against brute force attack. This work factor is at least 2^72 times higher than that of DES and comfortably higher than the work factor of 2^80 operations that is generally believed to be the practical limit to current attacks.

Though Moore's law has delivered exponential improvements in computing performance over the past four decades, this has been achieved through continual reductions in the minimum feature size of VLSI circuits. As the minimum feature size rapidly approaches the size of individual atoms, this mechanism has already begun to stall [Intel2018].

While an exceptionally well-resourced attacker may gain performance advances through use of massive parallelism, faster clock rates made possible by operating at super-low temperatures, and custom-designed circuits, the return on such approaches is incremental rather than exponential.

Performance improvements may allow an attacker to break systems with a work factor several orders of magnitude greater than the public state of the art. But an advance in cryptanalysis might permit a potentially more significant reduction in the work factor.

The primary consideration in the choice of a cryptographic algorithm is therefore not the computational work factor as measured by the best publicly known attack, but the confidence that can be placed in the computational work factor of the best attack that might be known to the attacker.

While the exact capabilities of the adversary are unknown, a group of informed experts may arrive at a conservative estimate of their likely capabilities. In particular, it is the capabilities of nation-state actors that generally give rise to greatest concern in security protocol design.
In this paper we refer to this set of actors as nation-state class adversaries in recognition of the fact that certain technology companies possess computing capabilities that rival if not exceed those of the largest state actors, and that those capabilities could, at least in theory, be co-opted for other purposes in certain circumstances.

The probability that a nation-state class adversary has discovered an attack against AES-128 with a work factor ratio of 2^120 might be considered relatively high, while the probability that an attack with a work factor ratio of less than 2^64 has been discovered is very low.

We define the hypothetical work factor function WF-H(A, p) as follows: if WF is a work factor ratio and p is an informed estimate of the probability that an adversary has developed an attack with a work factor ratio against algorithm A of WF or less, then WF-H(A, p) = WF.

Since the best-known public attack is known to the attacker, WF-H(A, 1) = WF-C(A).

The inverse function WF-H'(A, WF) returns the estimated probability that an adversary has developed an attack on algorithm A with a work factor ratio of WF or less.

The hypothetical work factor and its inverse may be used to compare the relative strengths of protocol designs. Given designs A and B, we can state that B is an improvement on A if WF-H(B, p) > WF-H(A, p) for all p.

When considering a protocol or infrastructure design we can thus improve a protocol by either:

o  Increasing WF-H(A, p) for some p, or

o  Decreasing WF-H'(A, WF).

1.3. Known Unknowns

Unlike the computational work factor, the hypothetical work factor does not provide an objective measure of the security offered by a design. The purpose of the hypothetical work factor is to allow the protocol designer to compare the security offered by different design choices.
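The comparison rule can be sketched numerically. In the sketch below, the probability estimates are invented for illustration and do not come from this document; WF-H is represented as a table mapping a probability p to the base-2 logarithm of the corresponding work factor ratio, following the convention that a larger work factor at a given probability level is better:

```python
# Sketch: comparing two protocol designs by hypothetical work factor.
# The probability estimates below are invented for illustration only.

# WF-H is modelled as a map from probability p (that the adversary has
# an attack of that work factor ratio or less) to log2 work factor.
wf_h_A = {0.01: 64, 0.10: 80, 0.50: 100, 1.00: 128}   # design A
wf_h_B = {0.01: 80, 0.10: 100, 0.50: 120, 1.00: 128}  # design B

def improves(wf_b, wf_a):
    """B improves on A if it offers at least the work factor of A at
    every probability level and strictly more at some level."""
    assert wf_a.keys() == wf_b.keys()
    at_least = all(wf_b[p] >= wf_a[p] for p in wf_a)
    strictly = any(wf_b[p] > wf_a[p] for p in wf_a)
    return at_least and strictly

print(improves(wf_h_B, wf_h_A))  # True: B is the stronger design
```

Representing work factors by their logarithms merely keeps the illustration readable; the comparison itself does not depend on it.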
The task that the security engineer faces is to secure the system from all attacks, whether the attacks themselves are known or unknown. In the current case it is known that an attacker is capable of breaking at least some of the cryptographic algorithms in use, but not which algorithms are affected or the nature of the attack(s).

Unlike the computational work factor, the hypothetical work factor does not deliver an academically rigorous, publication and citation worthy measure of the strength of a design. That is not its purpose; the purpose of the hypothetical work factor is to assist the protocol designer in designing protocols.

Design of security protocols has always required the designer to consider attackers whose capabilities are not currently known and has thus involved a considerable degree of informed opinion and guesswork. Whether correctly or not, the decision to reject changes to the DNSSEC protocol to enable deployment in 2002 rested in part on a statement by a Security Area Director that a proposed change gave him 'a bad feeling in his gut'. The hypothetical work factor permits the security designer to quantify such intestinally based assumptions and to model their effect on the security of the resulting design.

Security is a property of systems rather than of individual components. While it is quite possible that there are no royal roads to cryptanalysis and that cryptanalysis of algorithms such as AES-128 is infeasible even for nation-state class adversaries, such adversaries are not limited to the use of cryptanalytic attacks.

Despite the rise of organized cyber-crime, many financial systems still employ weak cryptographic systems that are known to be vulnerable to cryptanalytic attacks well within the capabilities of the attackers.
But fraud based on such techniques remains vanishingly rare, as it is much easier for the attackers to persuade bank customers to simply give their access credentials to the attacker.

Even if a nation-state class attacker has a factoring attack which renders an attack on RSA-2048 feasible, it is almost certainly easier for such an attacker to compromise a system using RSA-2048 in other ways, for example by persuading the target of the surveillance to use cryptographic devices with a random number generator that leaks a crib to the attacker. Analyzing this second form of attack requires a different type of analysis, which is addressed in the following section on the social work factor.

1.4. Defense in Depth

The motivation behind introducing the concept of the hypothetical work factor is a long experience of seeing attempts to make security protocols more robust deflected by recourse to specious arguments based on the computational work factor.

For example, consider the case in which a choice is being made between a single security control and a defense in depth strategy:

o  Option A: Uses algorithm X for protection.

o  Option B: Uses a combination of algorithm X and algorithm Y for protection, such that the attacker must defeat both to break the system, and algorithms based on different cryptographic principles are chosen so as to minimize the risk of a common failure mode.

If the computational work factor for both algorithms X and Y is 2^128, both options present the same work factor ratio. Although Option B offers twice the security, it also requires twice the work.

The argument that normally wins is that both options present the same computational work factor ratio of 2^128, that Option A is simpler, and that therefore Option A should be chosen. This despite the obvious fact that only Option B offers defense in depth.
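Under the assumption that the attacks are independent, the benefit of the defense in depth option is easily quantified. A minimal sketch, using the illustrative estimate of a 10% chance that a feasible attack on each algorithm exists:

```python
# Sketch: probability that the attacker can break each option.
# The 10% estimates are illustrative only, and independence of the
# attacks on algorithms X and Y is assumed.

p_break_X = 0.10  # estimated probability of a feasible attack on X
p_break_Y = 0.10  # estimated probability of a feasible attack on Y

p_option_A = p_break_X               # Option A falls if X falls
p_option_B = p_break_X * p_break_Y   # Option B requires breaking both

print(f"Option A: {p_option_A:.0%}, Option B: {p_option_B:.0%}")
# Option A: 10%, Option B: 1%
```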
If we consider the adversary as being capable of performing an attack with a work factor ratio of 2^80, and the probability that the attacker has discovered an attack capable of breaking algorithm X or algorithm Y to be 10% in each case, the probability that the attacker can break Option A is 10% while the probability of a successful attack on Option B is only 1%, a significant improvement.

While Option B clearly offers a significant potential improvement in security, this improvement is only fully realized if the probabilities of a feasible attack on each algorithm are independent.

1.5. Mutual Reinforcement

The defense in depth approach affords a significant improvement in security, but an improvement that is incremental rather than exponential in character. With mutual reinforcement we design the mechanism such that, in addition to requiring the attacker to break each of the component algorithms, the difficulty of the attacks is increased.

For example, consider the use of a Deterministic Random Number Generator R(s,n) which returns a sequence of values R(s,1), R(s,2)... from an initial seed s.

Two major concerns in the design of such generators are the possibility of bias and the possibility that the seed value might be leaked through a side channel.

Both concerns are mitigated if, instead of using the output of one generator directly, two independent random number generators with distinct seeds are used.

For example, consider the use of the value R1(s1,n) XOR R2(s2,n) where R1(s,n) and R2(s,n) are different random number generation functions and s1, s2 are distinct seeds.

The XOR function has the property of preserving randomness, so that the output is guaranteed to be at least as random as either of the generators from which it is built (provided that there is not a common failure mode). Further, recovery of either random seed is at least as hard as using the corresponding generator on its own.
Thus, the hypothetical work factor for the combined system is improved to at least the same extent as in the defense in depth case.

But any attempt to break either generator must now face the additional complexity introduced by the output being masked with the unknown output of the other. An attacker cannot cryptanalyze the two generator functions independently. If the two generators and the seeds are genuinely independent, the combined hypothetical work factor is the product of the hypothetical work factors from which it is built.

While implementing two independent generators and seeds represents a significant increase in cost for the implementer, a similar exponential leverage might be realized with negligible additional complexity through use of a cryptographic digest of the generator output to produce the masking value.

1.6. Safety in Numbers

In a traditional security analysis, the question of concern is whether a cryptanalytic attack is feasible or not. When considering an indiscriminate intercept capability, as in a nation-state class attack, the concern is not just whether an individual communication might be compromised but the number of communications that may be compromised for a given amount of effort.

'Perfect' Forward Secrecy is an optional feature supported in IPSec and TLS. In 2008, implementations of TLS/1.2 [RFC5246] purported to offer a choice between:

o  Direct key exchange, with a work factor dependent on the difficulty of breaking RSA 2048.

o  Direct key exchange followed by a perfect forward secrecy exchange, with a work factor dependent on the difficulty of breaking both RSA 2048 and DH 1024.
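Whether the second option actually delivers a work factor dependent on both exchanges turns on how the session keys are derived. The following sketch contrasts the two derivations; the byte strings are placeholders and this is a simplified model, not the real TLS key schedule:

```python
# Sketch: how session key derivation determines the effective work
# factor of a protocol that performs two key exchanges.  The secrets
# are placeholders; this is not the actual TLS key schedule.
import hashlib

rsa_secret = b"placeholder secret from the RSA 2048 exchange"
dh_secret = b"placeholder secret from the ephemeral DH 1024 exchange"

# If the session keys depend only on the ephemeral exchange, the work
# factor of the protocol is that of the weaker DH exchange alone.
weak_session_key = hashlib.sha256(dh_secret).digest()

# Combining both secrets through a one-way function forces an attacker
# to recover both, so the work factor is at least that of the stronger
# exchange.
strong_session_key = hashlib.sha256(rsa_secret + dh_secret).digest()
```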
Using the computational work factor alone suggests that the second scheme has little advantage over the first, since the computational work factor of Diffie Hellman using the best-known techniques is 2^80 while the computational work factor for RSA 2048 is 2^112. Use of the perfect forward secrecy exchange has a significant impact on server performance but does not increase the difficulty of cryptanalysis.

Use of perfect forward secrecy with a combination of RSA and Diffie Hellman does not provide a significant improvement in the hypothetical work factor either, if individual messages are considered. The RSA and Diffie Hellman systems are closely related, and so an attacker that can break RSA 2048 can almost certainly break DH 1024. Moreover, the computational work factor for DH 1024 is only 2^80 and thus feasibly within the reach of a well-funded and determined attacker.

According to the analysis informally applied during design, use of perfect forward secrecy does provide an important security benefit when multiple messages are considered. While a sufficiently funded and determined attacker could conceivably break tens, hundreds or even thousands of DH 1024 keys a year, it is rather less likely that an attacker could break millions a year. The OCSP servers operated by Comodo CA receive over 2 billion hits a day, and this represents only a fraction of the number of uses of TLS on the Internet. Use of perfect forward secrecy does not prevent an attacker from decrypting any particular message but raises the cost of indiscriminate intercept and decryption.

Unfortunately, this analysis is wrong because the TLS key exchange does not achieve a work factor dependent on the difficulty of breaking both RSA 2048 and DH 1024. The pre-master secret established in the initial RSA 2048 exchange is only used to authenticate the key exchange process itself.
The session keys used to encrypt content are derived from the weaker ephemeral key exchange, the parameters of which are exchanged in plaintext. Due to this defect in the design of the protocol, the work factor of the protocol is the work factor of DH 1024 alone.

Nor does the use of Diffie Hellman in this fashion provide security when multiple messages are exchanged. The Logjam attack [Adrian2015] exploits the fact that computing the discrete logarithm involves four major steps, the first three of which are the most computationally intensive and depend only on the shared group parameters. The cost of breaking a hundred Diffie Hellman public keys is not a hundred times the cost of breaking a single key; there is almost no difference.

Work factor analysis exposes these flaws in the design of TLS/1.2. Since the session keys used to encrypt traffic do not depend on knowing the secret established in the RSA 2048 exchange, the work factor of the protocol is the lesser of 2^80 and 2^112.

A simple means of ensuring that the work factor of a protocol is not reduced by a fresh key exchange is to use a one-way function such as a cryptographic digest or a key derivation function to combine the output of the prior exchange with its successor. This principle is employed in the double ratchet algorithm [Ratchet] used in the Signal protocol. In the Mesh, the HKDF Key Derivation function [RFC5869] is frequently used for the same purpose.

The work factor downgrade issue was addressed in TLS/1.3 [RFC8446], albeit in a less direct fashion, by encrypting the ephemeral key exchange.

1.7. Cost Factor

As previously discussed, cryptanalysis is not the only tool available to an attacker. Faced with a robust cryptographic defense, Internet criminals have employed 'social engineering' instead.
A nation-state class attacker may use any and every tool at their disposal, including tools that are unique to government-backed adversaries, such as the threat of legal sanctions against trusted intermediaries.

Although attackers can and will use every tool at their disposal, each tool carries a cost and some tools require considerable advance planning to use. It is conceivable that the AES standard published by NIST contains a backdoor that somehow escaped the extensive peer review. But any such effort would have had to begin well in advance of 1998, when the Rijndael cipher was first published.

Nation-state class actors frequently rely for their own security on the same infrastructures that they are attempting to attack. Thus, the introduction of vulnerabilities that might also be exploited by the opposition incurs a cost to both. This concern is recognized in the NSA 'NOBUS' doctrine: Nobody but us. To introduce a vulnerability in a random number generator that can only be exploited by a party that knows the necessary private key is acceptable. But introducing a vulnerability that depends on the use of an unpublished cryptanalytic technique is not, because that same technique might be discovered by the opposition.

Subversion of cryptographic apparatus such as Hardware Security Modules (HSMs) and SSL accelerators faces similar constraints. HSMs may be compromised by an adversary, but the compromise must have taken place before the device was manufactured or serviced.

Just as computational attacks are limited by the cryptanalytic techniques known to and the computational resources available to the attacker, social attacks are limited by the cost of the attack and the capacity of the attacker.

The Cost Factor C(t) is an estimate of the cost of performing an attack on or before a particular date in time (t).
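The comparison of the cost factor against a time-based incentive estimate I(t), discussed in the remainder of this section, can be sketched as follows. The cost and incentive curves below are invented for illustration only:

```python
# Sketch: an attack is productive for an attacker if there is some
# time t at which the incentive I(t) exceeds the cost factor C(t).
# The curves below are invented for illustration.

def cost(t):
    """C(t): estimated cost of performing the attack by year t."""
    return 1_000_000 - 50_000 * t   # the attack gets cheaper over time

def incentive(t):
    """I(t): estimated return on the attack at year t."""
    return 400_000 + 10_000 * t

def productive(years):
    return any(incentive(t) > cost(t) for t in years)

print(productive(range(0, 5)))    # False: not yet productive
print(productive(range(0, 15)))   # True: productive once cost falls
```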
For the sake of simplicity, currency units are used under the assumption that all the resources required are fungible and that all attackers face the same costs. But such assumptions may need to be reconsidered when there is a range of attackers with very different costs and capabilities. A hacktivist group could not conceivably amass the computational and covert technical resources available to the NSA, but such a group could in certain circumstances conceivably organize a protest with a million or more participants, while the number of NSA employees is believed to be somewhat fewer.

The computational and hypothetical work factors are compared against estimates of the computational resources of the attacker. An attack is considered to be infeasible if the available computational resources do not allow the attack to be performed within a useful period of time.

The cost factor is likewise compared against an incentive estimate I(t), which is also time based.

o  An attack is considered to be productive for an attacker if there is a time t for which I(t) > C(t).

o  An attack is considered to be unproductive if there is no time at which it is productive for that attacker.

Unlike the cost factor, for which a lower bound based on the lowest cost and highest capacity may usefully be applied to all attackers, differences in the incentive estimate between attackers are likely to be very significant. Almost every government has the means to perform financial fraud on a vast scale, but only rarely does a government have the incentive. When governments do engage in activities such as counterfeiting banknotes, this has been done for motives beyond mere peculation.

While government actors do not respond to the same incentives as Internet criminals, governments fund espionage activities in the expectation of a return on their investment.
A government agency director who does not produce the desired returns is likely to be replaced.

For example, when the viability of SSL and the Web PKI for protecting Internet payments was considered in the mid-1990s, the key question was whether the full cost of obtaining a fraudulently issued certificate would exceed the expected financial return, where the full cost is understood to include the cost of registering a bogus corporation, submitting the documents and all the other activities that would be required if a sustainable model for payments fraud were to be established.

For an attack to be attractive to an attacker it is not sufficient for it to be productive: the time between the initial investment and the reward and the likelihood of success are also important factors. An attack that requires several years of advance planning is much less attractive than an attack which returns an immediate profit.

An attack may be made less attractive by:

o  Increasing the cost.

o  Reducing the incentive.

o  Reducing the expected gain.

o  Reducing the probability that the incentive will be realized.

o  Increasing the time between the initial investment and the return.

Most real-world security infrastructures are based on more than one of these approaches. The WebPKI is designed to increase the cost of attack by introducing validation requirements and to reduce the expected gain through its revocation infrastructure.

1.8. Social Work Factor

In the cost factor analysis, it is assumed that all costs are fungible and that the attack capacity of the attacker is limited only by their financial resources. Some costs are not fungible, however; in particular, inducing a large number of people to accept a forgery without the effort being noticed requires much more than a limitless supply of funds.
In a computational attack, an operation will at worst fail to deliver success; there is no penalty for failure beyond having failed to succeed. When attempting to perpetrate a fraud on the general public, every attempt carries a risk of exposure of the entire scheme. When attempting to perform any covert activity, every additional person who is indoctrinated into the conspiracy increases the chance of exposure.

The totalitarian state envisioned by George Orwell in 1984 was only plausible because each and every citizen was coerced to act as a party to the conspiracy. The erasure and replacement of the past was possible because the risk of exposure was nil.

In 2011, I expressed concern to a retired senior member of the NSA staff that the number of contractors being hired to perform cyber-sabotage operations represented a security risk and might be creating a powerful constituency with an interest in the aggressive militarization of cyberspace rather than in preparing for its defense. Subsequent disclosures by Edward Snowden have validated the disclosure risk aspect of these concerns. Empirically, the NSA, an organization charged with protecting the secrecy of government documents, was unable to maintain the secrecy of its most important secrets when the size of the conspiracy reached a few tens of thousands of people.

The community of commercial practitioners of cryptographic information security is small in size but encompasses many nationalities. Many members of the community are bound by ideological commitments to protecting personal privacy as an unqualified moral objective.

Introducing a backdoor into an HSM, application or operating system platform requires that every person with access to the platform source, or who might be called in to audit the code, be a party to the conspiracy.
Tapping the fiber optic cables that support the Internet backbone requires only a small work crew and digging equipment. Maintaining a covert backdoor in a major operating system platform would require hundreds if not thousands of engineers to participate in the conspiracy.

The Social Work Factor SWF(t) is a measure of the cost of establishing a fraud in a conspiracy starting at date t. The cost is measured in the number of actions that the party perpetrating the fraud must perform that carry a risk of exposure.

In general, the Social Work Factor will increase over time. Perpetrating a fraud today claiming that the Roman emperor Nero never existed would require that millions of printed histories be erased and rewritten, and every person who has ever taught or taken a lesson in Roman history would have to participate in the fraud. The Social Work Factor would clearly be prohibitive.

The Social Work Factor in the immediate aftermath of Nero's death in 68 AD would have been considerably lower. While the emperor Nero was obviously not erased from history, this did happen to Akhenaten, an Egyptian pharaoh of the 18th dynasty whose monuments were dismantled, statues destroyed, and name erased from the lists of kings.

1.8.1.  Related work

It has not escaped the notice of the author that the social work factor might be applied as a general metric for assessing the viability of a conspiracy hypothesis.

Applying social work factor analysis to the moon landing conspiracy theory, we note that almost all of the tens of thousands of NASA employees who worked on the Apollo project would have had to be a part of the conspiracy, as would an even larger number of people who worked for NASA contractors.
The cost of perpetrating the hoax would clearly have exceeded any imaginable benefit, while the risk of the hoax being exposed would have been catastrophic.

2.  The problem of trust

Traditional (symmetric key) cryptography allows two parties to communicate securely provided they both know a particular piece of information known as a key, which must be known in order to encrypt or decrypt the content. Public key cryptography, proposed by Diffie and Hellman [Diffie76], provides much greater flexibility by using separate keys for separate roles, such that it is possible to perform one role without being able to perform the other. In a public key system, an encryption key allows information to be encrypted but not decrypted. Decryption can only be performed using the corresponding decryption key.

The Mathematical Mesh recryption services further extend the capabilities of traditional public key infrastructures by partitioning the roles associated with the private key. In the Mesh, this capability is referred to as 'recryption' as it was originally conceived of as a form of Proxy Re-encryption as described by Blaze et al., but it might equally well be considered as realizing distributed key generation as described by Pedersen. A decryption key is split into two or more parts such that all parts must be involved to complete a private key operation. These parts are then distributed to separate parties, thus achieving cryptographic enforcement of a separation of duties.

Public key cryptography allows many (but certainly not all) information security concerns to be reduced to the management of cryptographic keys. If Alice knows Bob's encryption key, she can send Bob an encrypted message that only he can read. If Bob knows Alice's signature key, Bob can verify that a digital signature on a message really was created by Alice.
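The key-splitting idea behind recryption can be sketched in a toy ElGamal-style system over the integers (an illustration only: the parameters are toy-sized and the actual Mesh construction uses elliptic curve keys). The decryption exponent is split additively, each share-holder contributes a partial decryption, and only the combination recovers the plaintext.

```python
import secrets

# Toy split-key decryption (NOT the Mesh's actual construction).
p = 2**127 - 1          # a Mersenne prime; toy-sized, NOT secure
g = 3
q = p - 1               # exponents are taken modulo the group order

x = secrets.randbelow(q)          # decryption key
y = pow(g, x, p)                  # encryption key

# ElGamal-style encryption of m under y.
m = 42
k = secrets.randbelow(q)
c1, c2 = pow(g, k, p), (m * pow(y, k, p)) % p

# Split the decryption key into two shares held by separate parties.
x1 = secrets.randbelow(q)
x2 = (x - x1) % q                 # x1 + x2 == x (mod q)

# Each party contributes a partial decryption of c1 ...
s1 = pow(c1, x1, p)
s2 = pow(c1, x2, p)

# ... and only the combination completes the private key operation.
s = (s1 * s2) % p                 # equals c1^x mod p
recovered = (c2 * pow(s, -1, p)) % p
assert recovered == m
```

Neither share-holder alone learns anything that allows decryption, which is the separation of duties the text describes.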
A Public Key Infrastructure (PKI) is a combination of technologies, practices and services that support the management of public key pairs. In particular, if Alice does not know Bob's public key, any infrastructure that is designed to provide her with this information may be regarded as a form of PKI.

The big challenge faced in the design, deployment and operation of a PKI is that while Alice and Bob can communicate with perfect secrecy if they use each other's actual public keys, they will have worse than no security if an attacker can persuade them to use keys the attacker controls instead. One of the chief concerns in PKI is therefore to allow users to assess the level of risk they face, a quality known as trust.

2.1.  Existing approaches

Few areas of information security have engaged so much passionate debate or so diverse a range of proposals as PKI architecture. Yet despite the intensity of this argument, the state of deployment of PKI in the Internet has remained almost unchanged.

TLS and SSH, the only Internet security protocols that have approached ubiquity, both operate at the transport layer. The use of IPsec is largely limited to providing VPN access. DNSSEC remains a work in progress. Use of end-to-end secure email messaging is negligible and shows no sign of improvement as long as competition between S/MIME and OpenPGP remains at a stalemate in which one has a monopoly on mindshare and the other a monopoly on deployment.

2.1.1.  Trust After First Use (TAFU)

Trust After First Use is a simple but often effective form of PKI. Instead of trying to verify each other's public key the first time they attempt to communicate, the parties record the public key credentials presented in their first interaction and check that the same credentials are presented in subsequent transactions.
While this approach does not absolutely guarantee that 'Alice' is really talking to 'Bob', as the conversation continues over hours, months or even years, they are both assured that they are talking to the same person.

2.1.2.  Direct Trust

In the direct trust model, credentials are exchanged in person. The exchange may be of the actual public key itself or by means of a 'fingerprint', which is simply a means of formatting a cryptographic digest of the key for presentation to the user.

Use of direct trust is robust and avoids the need to introduce any form of trusted third party. It is also limited, for the obvious reason that it is not always possible for users to meet in person. For this reason, protocols that attempt to offer a direct trust model often turn out to be used in trust-after-first-use mode in practice when the behavior of users is examined.

2.1.3.  Certificate Authority

The archetype of what is generally considered to be 'PKI' was introduced in Kohnfelder's 1978 MIT thesis [Kohnfelder78]. A Certificate Authority (CA) whose signature key is known to all the participants issues certificates binding each user's public key to their name and/or contact address(es).

This approach forms the basis of almost every widely deployed PKI, including the EMV PKI that supports smart card payments, the CableLabs PKI that supports the use of set-top boxes to access copyright-protected content, and the WebPKI mentioned earlier that supports the use of TLS in online commerce.

One area in which the CA model has not met with widespread success is the provision of end-to-end secure email as described in the original paper. Despite the fact that S/MIME secure email has been supported by practically every major email client for over 20 years, only a small number of users are aware that email encryption is supported and even fewer use it on a regular basis.
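The certificate mechanism at the heart of the CA model can be sketched with textbook RSA (an assumption-laden toy: tiny fixed primes, no padding, no X.509 structure, no revocation; nothing like a production CA):

```python
import hashlib

# Toy CA: signs the binding (name, public key); anyone holding the
# CA's public key (n, e) can verify the binding. Illustrative only.
p_, q_ = 1000003, 999983          # tiny fixed primes (NOT secure)
n = p_ * q_
e = 65537
d = pow(e, -1, (p_ - 1) * (q_ - 1))

def h(data: bytes) -> int:
    # Hash reduced into the toy modulus range.
    return int.from_bytes(hashlib.sha256(data).digest(), 'big') % n

def ca_issue(name: str, user_pub: int) -> tuple:
    """CA signs the binding of a name to a public key: a 'certificate'."""
    tbs = f"{name}|{user_pub}".encode()
    return (name, user_pub, pow(h(tbs), d, n))

def verify(cert: tuple) -> bool:
    """Any participant who knows the CA key (n, e) can check the binding."""
    name, user_pub, sig = cert
    return pow(sig, e, n) == h(f"{name}|{user_pub}".encode())

cert = ca_issue("alice@example.com", 123456)
assert verify(cert)

# Reusing the signature under a different name fails verification.
forged = ("mallet@example.com", 123456, cert[2])
assert not verify(forged)
```

The point of the construction is that trust in one key (the CA's) substitutes for a separate first-use or in-person exchange with every party.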
One of the reasons for this lack of uptake is the lack of uptake itself. Until a critical mass of users is established, the network effect presents itself as a chicken-and-egg problem. Another reason for the failure is the sheer inconvenience that use of S/MIME presents to the user. Obtaining, installing and maintaining certificates requires significant user effort and knowledge. But even if these obstacles are addressed (as the Mesh attempts to do), as far as the open Internet is concerned, S/MIME provides little or no benefit over a direct trust model because there is no equivalent of the WebPKI for email.

Most CAs that operate WebPKI services also offer S/MIME PKI services, but these are seldom used except by enterprises and government agencies, where certificates are usually issued for internal use only.

One of the chief difficulties in establishing a MailPKI analogous to the WebPKI is the difficulty of establishing a set of validation requirements that are cost effective for users and present a meaningful social work factor to attackers.

When VeriSign began operating the first Internet CA, two classes of email certificate were offered that have since become a de facto industry standard:

Class 1:  The CA verified that the subject applying for the certificate could read email sent to the address specified in the certificate.

Class 2:  The requirements of class 1 plus the requirement that the certificate be issued through a Registration Authority that had been separately determined to meet the considerably more stringent validation requirements for organizations specified in class 3 and, in particular, had demonstrated ownership of the corresponding domain name.

Class 2 certificates were designed to be issued by organizations to their employees and arguably present a more than adequate social work factor to prevent most forms of attack.
S/MIME certificates are in daily use to secure very sensitive communications relating to very high value transactions. But this represents a niche application of what was intended to be a ubiquitous infrastructure that would eventually secure every email communication.

The only type of certificate that the typical Internet user can obtain is class 1, which at best offers a small improvement in social work factor over Trust After First Use.

2.1.4.  Web of Trust

The concept of the Web of Trust was introduced by Zimmermann with the launch of PGP. It represents the antithesis of the hierarchical CA model then being proposed for the Privacy Enhanced Mail scheme being considered by the IETF at the time. A core objection to that model was the fact that users could only communicate securely by obtaining a certificate from a CA. The goal of PGP was to democratize the process by making every user a trust provider.

Like S/MIME, the OpenPGP protocol has achieved some measure of success but has fallen far short of its original goal of becoming ubiquitous, and almost none of its users have participated in the Web of Trust.

One of the chief technical limitations of the Web of Trust is that trust degrades over distance. An introduction from a friend of a friend has less value than one from a friend. As the number of users gets larger, the chains of trust get longer and the trustworthiness of each introduction becomes smaller.

Another limitation is that, as is fitting for a concept launched at the high tide of postmodernism, the trust provided is inherently relative. Every user has a different view of the Web of Trust and thus a different degree of trust in the other users. This makes it impossible for a commercial service to offer to navigate the Web of Trust on a user's behalf.

2.1.5.
Chained notary

The rise of BitCoin [Bitcoin] and the blockchain technology on which it is based have given rise to numerous proposals that make use of a tamper-evident notary either as the basis for a new PKI (e.g. NameCoin [Namecoin]) or to provide additional audit controls for an existing PKI (e.g. Certificate Transparency [RFC6962]).

The principle of making a digital notary service tamper-evident by combining each output of the notary with the input of its successor using a cryptographic digest was proposed in 1991 by Haber and Stornetta [Haber91]. Every output of the notary depends on every one of the previous inputs. Thus, any attempt to modify an input will cause every subsequent output to be invalidated.

Notaries operating according to these principles can quickly achieve prohibitively high social work factors by simply signing their output values at regular intervals and publishing a record of the signed values. Any attempt by the notary to tamper with the log will produce non-repudiable proof of the defection. Thus, once an input value is enrolled in a chained notary, the social work factor for subsequently modifying that input becomes the same as the social work factor for subverting the notary and every party that holds a record of the signed outputs of that notary.

Enrolling the signed outputs of one notary as an input to another, independently operated notary establishes a circumstance in which it is not possible for one notary to defect unless the other does as well. Applying the same principle to a collection of notaries establishes a circumstance in which it is not possible for any notary to defect without that defection becoming evident unless every other notary also defects.
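The Haber-Stornetta chaining construction can be sketched in a few lines (an illustration of the principle only, not the DARE Container format the Mesh actually uses):

```python
import hashlib

# Minimal hash-chained notary log: each entry digests the previous
# entry's digest together with the new input, so altering any input
# invalidates every later digest.
def notarize(log: list, entry: bytes) -> str:
    prev = log[-1][1] if log else "genesis"
    digest = hashlib.sha256(prev.encode() + entry).hexdigest()
    log.append((entry, digest))
    return digest

def verify(log: list) -> bool:
    """Replay the chain; any rewrite of history is evident."""
    prev = "genesis"
    for entry, digest in log:
        if hashlib.sha256(prev.encode() + entry).hexdigest() != digest:
            return False
        prev = digest
    return True

log = []
for item in (b"key A belongs to Alice", b"key B belongs to Bob"):
    notarize(log, item)
assert verify(log)

log[0] = (b"key A belongs to Mallet", log[0][1])   # rewrite history...
assert not verify(log)                             # ...is detected
```

Cross-notarization is the same operation with the signed head of one log submitted as an ordinary input to another.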
If such infrastructures are operated in different countries by a variety of reputable notaries, the social work factor of modifying an input after it is enrolled may be considered to approach infinity rapidly.

One corollary of this effect is that just as there is only one global postal system, one telephone system and one Internet, convergence of the chained notary infrastructure is also inevitable. Users seeking the highest possible degree of tamper evidence will seek out notaries that cross-notarize with the widest and most diverse range of other notaries. I propose a name for this emergent infrastructure: the Internotary.

According to the image presented in the popular press, it is the minting of new cryptocurrency that provides stability to the distributed ledger at the heart of BitCoin, Ethereum and their many imitators. The fact that notaries that do not require proof of work, proof of stake or any other form of seigniorage offer the same social work factor (effectively infinite) as those that do demonstrates that it is not necessary to consume nation-state level quantities of electricity to operate such infrastructures.

The attraction of employing such notaries in a PKI system is that the social work factor to forge a credential prior to a date that has already been notarized as past is effectively infinite. It is obvious that almost none of the thousands of OpenPGP keys registered with the key server infrastructure for 'Barack Obama' are genuine, and so all the registered keys are untrustworthy. But if it were known that one particular key had been registered in the 1980s, before Obama had become a political leader, that particular key would be considerably more trustworthy than the rest.

The use of chained notaries may be viewed as providing a distributed form of Trust After First Use.
The first use event in this case is the enrollment of the event in the notary. Instead of Alice having to engage in separate first use events with Bob, Carol, Doug and every other user she interacts with, a single first use event with the internotary supports all her existing and future contacts.

2.1.6.  A blended approach

As we have seen, different PKI architectures have emerged to serve different communities of use by offering different forms of trust. The trust provided by the OpenPGP and S/MIME PKIs to the communities they serve is distinct. The S/MIME PKI does not provide a useful means of establishing a trusted relationship in a personal capacity. The OpenPGP PKI is not appropriate for establishing a trust relationship in an enterprise capacity. Yet despite this obvious difference in capabilities, there has been no convergence between these competing approaches in the past two decades.

The only convergence in approach that has developed over this period is within the applications that rely on PKI. Most SSH clients and servers make provision for the use of CA-issued certificates for authentication. Most email clients may be configured to support OpenPGP in addition to S/MIME.

While offering the choice of CA-issued, direct trust or Web of Trust credentials is better than insisting on the use of the one, true PKI, this approach is less powerful than a blended approach allowing the user to make use of all of them.

In the blended approach, every user is a trust provider and can provide endorsements to other users, and some (but not necessarily all) users have CA-issued certificates.

This approach follows the same pattern that has been applied in the issuance of government credentials for centuries. In many countries, passport applications must be endorsed by either a member of a profession that has frequent interaction with the public (e.g.
doctors, lawyers and clerics), a licensed and registered set of public notaries, or both.

Analysis of the blended approach in terms of work factor reveals the surprising result that it can achieve a higher social work factor than either the CA model alone or the Web of Trust model alone.

Consider the case in which Alice and Bob have each obtained a certificate that presents a Social Work Factor of $10. Applying the CA model in isolation, $10 is the limit of the SWF that can be achieved. But if Alice and Bob were to meet and exchange endorsements, the SWF may be increased by up to $10. If the exchange of endorsements is made in person by means of some QR-code-mediated cryptographic protocol, we might reasonably ascribe an SWF of $20 to each credential.

This higher SWF can now be used to evaluate the value of endorsements issued by Alice and Bob to user Carol, and of Carol to Doug, neither of whom has a CA-issued certificate. While the SWF of Carol is certainly less than $20 and the SWF of Doug is lower still, it is certainly greater than $0.

While these particular values are given for the sake of example, it is clearly the case that, as with the WebPKI, the blended approach permits trust to be quantified according to objective criteria even if the reliability of the values assigned remains subjective. The Google PageRank algorithm did not have to be perfect to be useful, and just as the deployment of the Web spurred the development of better and more accurate search engines, deployment of a blended PKI may reasonably be expected to lead to the development of better and more accurate means of evaluating trust.

The power of the blended approach is that it provides the reach of the Web of Trust model with the resilience of the CA model while permitting a measurable improvement in work factor over both.
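The arithmetic of this example can be sketched as a small graph computation. The combination rule and the per-hop decay factor below are assumptions made purely for illustration; the draft deliberately leaves the evaluation rule open.

```python
# Toy blended-trust evaluation. Assumed rules (not normative): a user's
# SWF is their CA-issued SWF plus endorsement contributions, and an
# endorsement transfers a fraction of the endorser's SWF (per-hop decay).
DECAY = 0.5   # assumed decay factor per endorsement hop

ca_swf = {"alice": 10.0, "bob": 10.0, "carol": 0.0, "doug": 0.0}
endorsements = {           # endorsee -> list of endorsers
    "alice": ["bob"],
    "bob": ["alice"],
    "carol": ["alice", "bob"],
    "doug": ["carol"],
}

def swf(user: str, seen=()) -> float:
    """CA contribution plus decayed endorsement contributions."""
    total = ca_swf[user]
    for endorser in endorsements.get(user, []):
        if endorser not in seen:              # ignore cycles
            total += DECAY * swf(endorser, seen + (user,))
    return total

for user in ("alice", "bob", "carol", "doug"):
    print(user, swf(user))
```

Under these assumed rules, Carol and Doug acquire a non-zero SWF without holding any CA-issued certificate, while Doug's SWF is lower than Carol's, matching the qualitative claims in the text.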
Combining the blended trust model with the internotary model allows these SWF values to be fixed in time. It is one thing for an attacker to spend $100 to impersonate the President of the United States. It is quite another for an attacker to spend $100 per target on every person who might become President of the United States in 20 years' time.

3.  The Mesh of Trust

The purpose of the Mathematical Mesh is to put the user rather than the designer in control of their trust infrastructure. To this end, the Mesh supports the use of any credential issued by any form of PKI and provides a means of using these credentials in a blended model.

3.1.  Master Profile

The Mesh provides an infrastructure that enables a user to manage all the cryptographic keys and other infrastructure that are necessary to provide security.

A Mesh master profile is the root of trust for each user's personal PKI. By definition, every device and every application key that is a part of a user's personal Mesh profile is ultimately authenticated, either directly or indirectly, by the signature key published in the master profile.

Unlike user keys in traditional PKIs, a Mesh master profile is designed to permit (but not require) lifelong use. A master profile can be revoked but does not expire. It is not possible to change the signature key in a master profile. Should a compromise occur, a new master profile must be created.

3.2.  Uniform Data Fingerprints

Direct trust in the Mesh is realized through the use of Uniform Data Fingerprints (UDFs) [draft-hallambaker-mesh-udf]. A UDF consists of a cryptographic digest (e.g. SHA-2-512) over a data sequence and a content type identifier.

UDFs are presented as a Base32 encoded sequence with separators every five characters. UDFs may be presented at different precisions according to the intended use.
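The presentation scheme can be sketched as follows. This is a simplified illustration only: the normative UDF algorithm in draft-hallambaker-mesh-udf includes a typed prefix and versioned digest construction, so this sketch will not reproduce the draft's example values.

```python
import base64
import hashlib

# Simplified UDF-style fingerprint (illustrative; NOT the UDF spec,
# so outputs will not match the draft's example values).
def fingerprint(content_type: str, data: bytes, chars: int = 25) -> str:
    digest = hashlib.sha512(content_type.encode() + b":" + data).digest()
    b32 = base64.b32encode(digest).decode().rstrip("=")
    truncated = b32[:chars]        # each Base32 character carries 5 bits
    # Insert a separator every five characters for readability.
    return "-".join(truncated[i:i + 5] for i in range(0, len(truncated), 5))

print(fingerprint("text/plain", b"UDF Data Value"))        # 25-char form
print(fingerprint("text/plain", b"UDF Data Value", 50))    # 50-char form
```

Truncating the digest trades compactness against work factor, which is why the same fingerprint can be presented at more than one precision.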
The 25-character presentation provides a work factor of 2^117 and is short enough to put on a business card or present as a QR code. The 50-character presentation provides a work factor of 2^242 and is compact enough to be used in a configuration file.

For example, the UDF of the text/plain sequence "UDF Data Value" may be presented in either of the following forms:

MDDK7-N6A72-7AJZN-OSTRX-XKS7D
MDDK7-N6A72-7AJZN-OSTRX-XKS7D-JAFXI-6OZSL-U2VOA-TZQ6J-MHPTS

The UDF of a user's master profile signature key is used as a persistent, permanent identifier of the user that is unique to them and will remain constant for their entire life unless they have reason to replace their master profile with a new one. The exchange of master profile UDFs is the means by which Mesh users establish direct trust.

3.3.  Strong Internet Names

A Strong Internet Name (SIN) [draft-hallambaker-mesh-udf] is a valid Internet address that contains a UDF fingerprint of a security policy describing the interpretation of that name.

While a SIN creates a strong binding between an Internet address and a security policy, it does not provide a mechanism for discovery of the security policy. Nor is it necessarily the case that the policy is publicly available.

For example, Example Inc holds the domain name example.com and has deployed a private CA whose root of trust is a PKIX certificate with the UDF fingerprint MB2GK-6DUF5-YGYYL-JNY5E-RWSHZ.

Alice is an employee of Example Inc. She uses three email addresses:

alice@example.com  A regular email address (not a SIN).
alice@mm--mb2gk-6duf5-ygyyl-jny5e-rwshz.example.com  A strong email address that is backwards compatible.

alice@example.com.mm--mb2gk-6duf5-ygyyl-jny5e-rwshz  A strong email address that is backwards incompatible.

Use of SINs allows a direct trust model to provide end-to-end security using existing, unmodified email clients and other Internet applications.

For example, Bob might use Microsoft Outlook 2019, an email application that has no support for SINs, as his email client. He configures Outlook to direct outbound mail through a SIN-aware proxy service. When Bob attempts to send mail to a strong email address for Alice, the proxy recognizes that the email address is a SIN and ensures that the necessary security enhancements are applied to meet the implicit security policy.

3.4.  Trust notary

A Mesh trust notary is a chained notary service that accepts notarization requests from users and enrolls them in a publicly visible, tamper-evident, append-only log.

The practices for operation of the trust notary are currently undefined but should be expected to follow the approach described above.

The trust notary protocol provides support for establishing an internotary through cross certification. The append-only log format is a DARE Container [draft-hallambaker-mesh-dare]; the service protocol is currently in development.

3.5.  Endorsement

An endorsement is a document submitted to a trust notary that includes a claim of the form 'public key X is held by user Y'. Mesh endorsements may be issued by CAs or by ordinary users.

3.6.  Evaluating trust

One of the chief advantages of the World Wide Web over previous networked hypertext proposals was that it provided no means of searching for content.
While the lack of a search capability was an obstacle to content discovery in the early Web, competing solutions to meet this need were deployed, revised and replaced.

The Mesh takes the same approach to the evaluation of trust. The Mesh provides an infrastructure for the expression of trust claims but is silent on their interpretation. As with the development of search for the Web, the evaluation of trust in the Mesh is left to the application of venture capital to deep AI.

4.  Conclusions

This paper describes the principal approaches used to establish Internet trust, a means of evaluating them and a proposed successor. It now remains to determine the effectiveness of the proposed approach by attempting deployment.

5.  Security Considerations

This document describes the means by which interparty identification risk is managed and controlled in the Mathematical Mesh.

The security considerations for use and implementation of Mesh services and applications are described in the Mesh Security Considerations guide [draft-hallambaker-mesh-security].

6.  Acknowledgements

A list of people who have contributed to the design of the Mesh is presented in [draft-hallambaker-mesh-architecture].

7.  References

7.1.  Normative References

[draft-hallambaker-mesh-architecture]
           Hallam-Baker, P., "Mathematical Mesh 3.0 Part I:
           Architecture Guide", draft-hallambaker-mesh-architecture-10
           (work in progress), August 2019.

[draft-hallambaker-mesh-security]
           Hallam-Baker, P., "Mathematical Mesh Part VII: Security
           Considerations", draft-hallambaker-mesh-security-01 (work
           in progress), July 2019.

7.2.  Informative References

[Adrian2015]
           Adrian, D., "Weak Diffie-Hellman and the Logjam Attack",
           October 2015.

[Bitcoin]  Finley, K., "After 10 Years, Bitcoin Has Changed
           Everything -- And Nothing", November 2018.

[Diffie76] Diffie, W. and M. Hellman, "New Directions in
           Cryptography", November 1976.

[draft-hallambaker-mesh-dare]
           Hallam-Baker, P., "Mathematical Mesh 3.0 Part III: Data At
           Rest Encryption (DARE)", draft-hallambaker-mesh-dare-04
           (work in progress), August 2019.

[draft-hallambaker-mesh-udf]
           Hallam-Baker, P., "Mathematical Mesh 3.0 Part II: Uniform
           Data Fingerprint", draft-hallambaker-mesh-udf-07 (work in
           progress), October 2019.

[Haber91]  Haber, S. and W. Stornetta, "How to Time-Stamp a Digital
           Document", 1991.

[Intel2018]
           Bell, L., "Intel delays 10nm Cannon Lake processors, again,
           until late 2019", July 2018.

[Kohnfelder78]
           Kohnfelder, L., "Towards a Practical Public-Key
           Cryptosystem", May 1978.

[Namecoin] Namecoin Inc., "Namecoin Web Site", 2019.

[Ratchet]  Marlinspike, M. and T. Perrin, "The Double Ratchet
           Algorithm", November 2016.

[RFC5869]  Krawczyk, H. and P. Eronen, "HMAC-based Extract-and-Expand
           Key Derivation Function (HKDF)", RFC 5869,
           DOI 10.17487/RFC5869, May 2010.

[RFC6246]  Sajassi, A., Brockners, F., Mohan, D., and Y. Serbest,
           "Virtual Private LAN Service (VPLS) Interoperability with
           Customer Edge (CE) Bridges", RFC 6246,
           DOI 10.17487/RFC6246, June 2011.

[RFC6962]  Laurie, B., Langley, A., and E. Kasper, "Certificate
           Transparency", RFC 6962, DOI 10.17487/RFC6962, June 2013.

[RFC8446]  Rescorla, E., "The Transport Layer Security (TLS) Protocol
           Version 1.3", RFC 8446, DOI 10.17487/RFC8446, August 2018.

[Schneier2013]
           Schneier, B., "Defending Against Crypto Backdoors",
           October 2013.

[Shannon1949]
           Shannon, C., "Communication Theory of Secrecy Systems",
           1949.

7.3.  URIs

[1] http://mathmesh.com/Documents/draft-hallambaker-mesh-trust.html

Author's Address

Phillip Hallam-Baker

Email: phill@hallambaker.com