Internet Engineering Task Force (IETF)                 Phillip Hallam-Baker
Internet-Draft                                            Comodo Group Inc.
Intended Status: Standards Track                           October 27, 2014
Expires: April 30, 2015

                          PRISM Proof Trust Model
                   draft-hallambaker-prismproof-trust-01

Abstract

This paper extends Shannon's concept of a 'work factor' to provide an
objective measure of the practical security offered by a protocol or
infrastructure design. Considering the hypothetical work factor, based on
an informed estimate of the probable capabilities of an attacker with
unknown resources, provides a better indication of the relative strength
of protocol designs than the computational work factor of the best known
attack.

The social work factor is a measure of the trustworthiness of a credential
issued in a PKI, based on the cost of having obtained the credential
through fraud at a certain point in time. Use of the social work factor
allows Certificate Authority based trust models and peer-to-peer (Web of
Trust) models to be evaluated in the same framework. The analysis shows
that each model has clear benefits over the other for some classes of
user, but that most classes of user are served better by a combination of
both.

Status of This Memo

This Internet-Draft is submitted in full conformance with the provisions
of BCP 78 and BCP 79.

Internet-Drafts are working documents of the Internet Engineering Task
Force (IETF). Note that other groups may also distribute working documents
as Internet-Drafts. The list of current Internet-Drafts is at
http://datatracker.ietf.org/drafts/current/.

Internet-Drafts are draft documents valid for a maximum of six months and
may be updated, replaced, or obsoleted by other documents at any time. It
is inappropriate to use Internet-Drafts as reference material or to cite
them other than as "work in progress."

Copyright Notice

Copyright (c) 2014 IETF Trust and the persons identified as the document
authors. All rights reserved.
This document is subject to BCP 78 and the IETF Trust's Legal Provisions
Relating to IETF Documents (http://trustee.ietf.org/license-info) in
effect on the date of publication of this document. Please review these
documents carefully, as they describe your rights and restrictions with
respect to this document. Code Components extracted from this document
must include Simplified BSD License text as described in Section 4.e of
the Trust Legal Provisions and are provided without warranty as described
in the Simplified BSD License.

Table of Contents

   1.  Work Factor
     1.1.  Computational Work Factor
     1.2.  Hypothetical Work Factor
       1.2.1.  Known Unknowns
       1.2.2.  Defense in Depth
       1.2.3.  Mutual Reinforcement
       1.2.4.  Safety in Numbers
     1.3.  Cost Factor
     1.4.  Social Work Factor
   2.  The Problem of Evaluating Trust
     2.1.  Probability and Risk
     2.2.  Reputation
     2.3.  Curated Spaces
     2.4.  Trustworthy Time
   3.  Maximizing Social Work Factor to Maximize Trust
     3.1.  Trust Specifiers
       3.1.1.  Key Identifiers
       3.1.2.  Self Signed Certificates
     3.2.  Trust Assertions
       3.2.1.  Certificate Authority Issued Certificates
       3.2.2.  Key Signing
      3.2.3.  Adding Key Endorsement to PKIX
     3.3.  Trust Meta Assertions
       3.3.1.  Revocation and Status Checking
       3.3.2.  Notarization
       3.3.3.  Transparency
     3.4.  Other Approaches
       3.4.1.  DNSSEC
       3.4.2.  SPKI / SDSI
       3.4.3.  Identity Based Cryptography
   4.  Maximizing Social Work Factor in a Notary Infrastructure
   5.  Conclusions and Related Work
   Author's Address

1. Work Factor

Recent events have highlighted both the need for open standards based
security protocols and the possibility that the design of such protocols
may have been sabotaged. The community thus faces two important and
difficult challenges: first, to design an Internet security infrastructure
that offers practical security against the class of attacks revealed, and
second, to convince potential users that the proposed new infrastructure
has not been similarly sabotaged.

The security of a system should be measured by the difficulty of attacking
it. The security of a safe is measured by the length of time it is
expected to resist attack using a specified set of techniques. The
security of a cryptographic algorithm against a known attack is measured
by the computational cost of the attack.

This paper extends Shannon's concept of a 'work factor' to provide an
objective measure of the security a protocol or infrastructure offers
against other forms of attack.

1.1. Computational Work Factor

The term 'Computational Work Factor' is used to refer to Shannon's
original concept.
One of Shannon's key insights was that the work factor of a cryptographic
algorithm could be exponential. Adding a single bit to the key size of an
ideal symmetric algorithm presents only a modest increase in computational
effort for the defender but doubles the work factor for the attacker.

More precisely, the difficulty of breaking a cryptographic algorithm is
generally measured by the work-factor ratio. If the cost of encrypting a
block with 56 bit DES is x, the worst case cost of recovering the key
through a brute force attack is x * 2^56. The security of DES has changed
over time because the cost x has fallen exponentially.

While the work factor is traditionally measured in terms of the number of
operations, many cryptanalytic techniques permit memory use to be traded
for computational complexity. An attack requiring 2^64 bytes of memory
that reduces the number of operations required to break a 128 bit cipher
to 2^64 is of rather lower concern than one which reduces the number of
operations to 2^80. The term 'cost' is used to gloss over such
distinctions.

[Note that in the following analysis, the constraints of the IETF document
format make use of the established notation impractical and a confusing
mess, hence the departure from Shannon's notation.]

The Computational Work Factor ratio WF_C(A) of a cryptographic algorithm A
is the cost of the best known attack divided by the cost of the algorithm
itself.

1.2. Hypothetical Work Factor

Modern cryptographic algorithms use keys of 128 bits or more and present a
work factor ratio of 2^128 against brute force attack. This work factor is
at least 2^72 times higher than that of DES and comfortably higher than
the work factor of 2^80 operations that is generally believed to be about
the limit of current attacks.
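The brute-force work-factor arithmetic above can be put into a short
sketch. The relative encryption costs used here are invented for
illustration only; nothing in this fragment is defined by this document.

```python
# Sketch: computational work-factor ratio of brute-force key search
# against an ideal cipher. Cost figures are illustrative assumptions.

def wf_c_brute_force(key_bits: int) -> int:
    """Worst-case work-factor ratio for brute force: 2^key_bits
    trial encryptions per key recovery."""
    return 2 ** key_bits

# Adding one key bit doubles the attacker's work:
assert wf_c_brute_force(57) == 2 * wf_c_brute_force(56)

# The ratio for DES is constant at 2^56; what changed over time is
# the absolute cost x of one encryption, which fell exponentially.
x_then, x_now = 1.0, 2.0 ** -20       # assumed relative block costs
print(x_then * wf_c_brute_force(56))  # attack cost at one point in time
print(x_now * wf_c_brute_force(56))   # ~2^20 times cheaper later
```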
While an exceptionally well resourced attacker may gain performance
advances through use of massive parallelism, faster clock rates made
possible by operating at super-low temperatures, and custom designed
circuits, the return on such approaches is incremental rather than
exponential.

Performance improvements may allow an attacker to break systems with a
work factor several orders of magnitude greater than the public state of
the art. But an advance in cryptanalysis might permit a potentially more
significant reduction in the work factor.

The primary consideration in the choice of a cryptographic algorithm is
therefore not the known computational work factor as measured according to
the best publicly known attack, but confidence in the computational work
factor of the best attack that might be known to the attacker.

While the exact capabilities of the adversary are unknown, a group of
informed experts may arrive at a conservative estimate of their likely
capabilities. The probability that a government attacker has discovered an
attack against AES-128 with a work factor ratio of 2^120 might be
considered relatively high, while the probability that such an attacker
has an attack with a work factor ratio of less than 2^64 might be
considered very low.

We define the hypothetical work factor function WF_H(A, p) as follows: if
WF is a work factor ratio and p is an informed estimate of the probability
that an adversary has developed an attack with a work factor ratio against
algorithm A of WF or less, then WF_H(A, p) = WF.

Since the best known public attack is known to the attacker,
WF_H(A, 1) <= WF_C(A).

The inverse function WF_H'(A, WF) returns the estimated probability that
an adversary has developed an attack against algorithm A with a work
factor ratio of WF or less.

The hypothetical work factor and its inverse may be used to compare the
relative strengths of protocol designs.
Given designs A and B, we can state that B is an improvement on A if
WF_H(B, p) > WF_H(A, p) for all p.

When considering a protocol or infrastructure design we can thus improve a
protocol by either:

   *  Increasing WF_H(A, p) for some p, or

   *  Decreasing WF_H'(A, WF) for some WF.

1.2.1. Known Unknowns

Unlike the computational work factor, the hypothetical work factor does
not provide an objective measure of the security offered by a design. The
purpose of the hypothetical work factor is to allow the protocol designer
to compare the security offered by different design choices.

The task that the security engineer faces is to secure the system from all
attacks, whether the attacks themselves are known or unknown. In the
current case it is known that an attacker is capable of breaking at least
some of the cryptographic algorithms in use, but not which algorithms are
affected or the nature of the attack(s).

Unlike the computational work factor, the hypothetical work factor does
not deliver an academically rigorous, publication and citation worthy
measure of the strength of a design. That is not its purpose. The purpose
of the hypothetical work factor is to assist the protocol designer in
designing protocols.

Design of security protocols has always required the designer to consider
attackers whose capabilities are not currently known, and has thus
involved a considerable degree of informed opinion and guesswork. Whether
correctly or not, the decision to reject changes to the DNSSEC protocol to
enable deployment in 2002 rested in part on a statement by a Security Area
Director that a proposed change gave him a bad feeling in his gut. The
hypothetical work factor permits the security designer to quantify such
intestinally based assumptions and model their effect on the security of
the resulting design.
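As a rough illustration of how such estimates can be compared, the
following sketch models WF_H as a step function over a table of expert
probability estimates. The designs, probabilities and the wf_h helper are
all invented for illustration; nothing here is defined by this document.

```python
# Sketch: comparing two designs by hypothetical work factor.
# The probability estimates below are illustrative assumptions,
# not measurements from this document.

def wf_h(estimates, p):
    """Hypothetical work factor WF_H(A, p): the largest work-factor
    exponent WF such that the estimated probability that the attacker
    has an attack of cost 2^WF or less does not exceed p.

    `estimates` maps work-factor exponents to the cumulative
    probability that the attacker has an attack at least that cheap."""
    feasible = [wf for wf, prob in estimates.items() if prob <= p]
    return max(feasible) if feasible else min(estimates)

# Illustrative expert estimates {exponent: cumulative probability}:
design_a = {64: 0.01, 80: 0.05, 120: 0.5, 128: 1.0}
design_b = {64: 0.001, 80: 0.01, 120: 0.3, 128: 1.0}

# B is an improvement on A if WF_H(B, p) >= WF_H(A, p) for all p:
for p in (0.01, 0.05, 0.5):
    a, b = wf_h(design_a, p), wf_h(design_b, p)
    print(f"p={p}: WF_H(A)=2^{a}, WF_H(B)=2^{b}")
```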
Security is a property of systems rather than individual components. While
it is quite possible that there are no royal roads to cryptanalysis and
that cryptanalysis of algorithms such as AES 128 is infeasible even for
PRISM-class adversaries, such adversaries are not limited to the use of
cryptanalytic attacks.

Despite the rise of organized cyber-crime, many financial systems still
employ weak cryptographic systems that are known to be vulnerable to
cryptanalytic attacks well within the capabilities of the attackers. But
fraud based on such techniques remains vanishingly rare, as it is much
easier for the attackers to persuade bank customers to simply give their
access credentials to the attacker.

Even if a PRISM-class attacker has a factoring attack which renders an
attack on RSA-2048 feasible, it is almost certainly easier for such an
attacker to compromise a system using RSA-2048 in other ways: for example,
by persuading the target of the surveillance to use cryptographic devices
whose random number generator leaks a crib to the attacker. Analyzing this
second form of attack requires a different type of analysis, which is
addressed in the following section on social work factor.

1.2.2. Defense in Depth

The motivation behind introducing the concept of the hypothetical work
factor is a long experience of seeing attempts to make security protocols
more robust being deflected by recourse to specious arguments based on the
computational work factor.

For example, consider the case in which a choice between a single security
control and a defense in depth strategy is being considered:

   *  Option A: Uses algorithm X for protection.
   *  Option B: Uses a combination of algorithm X and algorithm Y for
      protection, such that the attacker must defeat both to break the
      system. Algorithms based on different cryptographic principles are
      chosen so as to minimize the risk of a common failure mode.

If the computational work factor for both algorithms X and Y is 2^128,
both options present essentially the same work factor ratio. Although
Option B offers twice the security, it also requires twice the work.

The argument that normally wins is that since both options present the
same computational work factor ratio of 2^128 and Option A is simpler,
Option A should be chosen. This despite the obvious fact that only Option
B offers defense in depth.

If we consider the adversary as being capable of attacks with a work
factor ratio of up to 2^80, and the probability that the attacker has
discovered an attack capable of breaking algorithm X or Y to be 10% in
each case, the probability that the attacker can break Option A is 10%
while the probability that the attacker can break Option B is only 1%, a
significant improvement.

While Option B clearly offers a significant potential improvement in
security, this improvement is only fully realized if the probabilities of
a feasible attack are independent.

1.2.3. Mutual Reinforcement

The defense in depth approach affords a significant improvement in
security, but an improvement that is incremental rather than exponential
in character. With mutual reinforcement we design the mechanism such that,
in addition to requiring the attacker to break each of the component
algorithms, the difficulty of the attacks is increased.

For example, consider the use of a Deterministic Random Number Generator
R(s) which returns a sequence of values R(s)_1, R(s)_2, ... from an
initial seed s.
Two major concerns in the design of such generators are the possibility of
bias and the possibility that the seed value is somehow leaked through a
side channel.

Both concerns are mitigated if, instead of using the output of one
generator directly, the value R1(s1) XOR R2(s2) is used, where R1 and R2
are independent random number generators and s1, s2 are distinct seeds.

The XOR function has the property of preserving randomness, so that the
output is guaranteed to be at least as random as either of the generators
from which it is built (provided that there is not a common failure mode).
Further, recovery of either random seed is at least as hard as attacking
the corresponding generator on its own. Thus the hypothetical work factor
for the combined system is improved to at least the same extent as in the
defense in depth case.

But any attempt to break either generator must now face the additional
complexity introduced by the output being masked with the unknown output
of the other. An attacker cannot cryptanalyze the two generator functions
independently. If the two generators and the seeds are genuinely
independent, the combined hypothetical work factor is the product of the
hypothetical work factors from which it is built.

While implementing two independent generators and seeds represents a
significant increase in cost for the implementer, a similar exponential
leverage might be realized with negligible additional complexity through
use of a cryptographic digest of the generator output to produce the
masking value.

1.2.4. Safety in Numbers

In a traditional security analysis the question of concern is whether a
cryptanalytic attack is feasible or not.
When considering an indiscriminate intercept capability, as in a PRISM-
class attack, the concern is not just whether an individual communication
might be compromised but the number of communications that may be
compromised for a given amount of effort.

'Perfect' Forward Secrecy is an optional feature supported in IPSec and
TLS. Current implementations of TLS offer a choice between:

   *  Direct key exchange with a work factor dependent on the difficulty
      of breaking RSA 2048.

   *  Direct key exchange followed by a perfect forward secrecy exchange
      with a work factor dependent on the difficulty of breaking RSA 2048
      and DH 1024.

Using the computational work factor alone suggests that the second scheme
has little advantage over the first, since the computational work factor
of Diffie Hellman using the best known techniques is 2^80 while the
computational work factor for RSA 2048 is 2^112. Use of the perfect
forward secrecy exchange has a significant impact on server performance
but does not increase the difficulty of cryptanalysis.

Use of perfect forward secrecy with a combination of RSA and Diffie
Hellman does not provide a significant improvement in the hypothetical
work factor either, if individual messages are considered. The RSA and
Diffie Hellman systems are closely related, and so an attacker that can
break RSA 2048 can almost certainly break DH 1024. Moreover, the
computational work factor for DH 1024 is only 2^80 and thus feasibly
within the reach of a well funded and determined attacker.

Use of perfect forward secrecy does provide an important security benefit
when multiple messages are considered. While a sufficiently funded and
determined attacker could conceivably break tens, hundreds or even
thousands of DH 1024 keys a year, it is rather less likely that an
attacker could break millions a year.
The Comodo OCSP server receives over 2 billion hits a day, and this
represents only a fraction of the number of uses of SSL on the Internet.
Use of perfect forward secrecy does not prevent an attacker from
decrypting any particular message but raises the cost of indiscriminate
intercept and decryption.

There is security in numbers: if every communication is protected by
perfect forward secrecy, the hypothetical work factor for decrypting every
communication is the hypothetical work factor of decrypting one
communication times the number of communications.

1.3. Cost Factor

As previously discussed, cryptanalysis is not the only tool available to
an attacker. Faced with a robust cryptographic defense, Internet criminals
have employed 'social engineering' instead. A PRISM-class attacker may use
any and every tool at their disposal, including tools that are unique to
government backed adversaries such as the threat of legal sanctions
against trusted intermediaries.

Although attackers can and will use every tool at their disposal, each
tool carries a cost and some tools require considerable advance planning
to use. It is conceivable that the AES standard published by NIST contains
a backdoor that somehow escaped the extensive peer review. But any such
effort would have had to have begun well in advance of 1998, when the
Rijndael cipher was first published. Subversion of cryptographic apparatus
such as Hardware Security Modules (HSMs) and SSL accelerators faces
similar constraints. An HSM may be compromised by an adversary, but the
compromise must have taken place before the device was manufactured or
serviced.

Just as computational attacks are limited by the cryptanalytic techniques
known to and the computational resources available to the attacker, social
attacks are limited by the cost of the attack and the capacity of the
attacker.
The Cost Factor C(t) is an estimate of the cost of performing an attack on
or before a particular date in time (t).

For the sake of simplicity, currency units are used under the assumption
that all the resources required are fungible and that all attackers face
the same costs. But such assumptions may need to be reconsidered when
there is a range of attackers with very different costs and capabilities.
A hacktivist group could not conceivably amass the computational and
covert technical resources available to the NSA, but such a group could in
certain circumstances conceivably organize a protest with a million or
more participants, while the number of NSA employees is believed to still
be somewhat fewer.

The computational and hypothetical work factors are compared against
estimates of the computational resources of the attacker. An attack is
considered to be infeasible if the available computational resources do
not allow the attack to be performed within a useful period of time.

The cost factor is likewise compared against an incentive estimate, I(t),
which is also time based.

An attack is considered to be productive for an attacker if there was a
time t for which I(t) > C(t).

An attack is considered to be unproductive if there is no time at which it
was productive for that attacker.

Unlike the Cost Factor, for which a lower bound based on the lowest cost
and highest capacity may be usefully applied to all attackers, differences
in the incentive estimate between attackers are likely to be very
significant. Almost every government has the means to perform financial
fraud on a vast scale but only rarely does a government have the
incentive, and when governments do engage in activities such as
counterfeiting banknotes this has been done for motives beyond mere
peculation.
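The productive-attack test above can be sketched directly. The cost and
incentive curves in this fragment are invented placeholders, not estimates
taken from this document.

```python
# Sketch: an attack is productive if there exists a time t at which
# the incentive I(t) exceeds the cost C(t). Curves are illustrative.

def productive(cost, incentive, times):
    """Return True if incentive(t) > cost(t) for some time t."""
    return any(incentive(t) > cost(t) for t in times)

# Hypothetical curves: the cost of fraudulently obtaining a credential
# rises over time, while the expected return stays flat.
cost = lambda t: 1000 + 500 * t   # C(t), in currency units
incentive = lambda t: 2000        # I(t)

print(productive(cost, incentive, range(10)))  # True: I(0) > C(0)
```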
While government actors do not respond to the same incentives as Internet
criminals, governments fund espionage activities in the expectation of a
return on their investment. A government agency director who does not
produce the desired returns is likely to be replaced.

For example, when the viability of SSL and the Web PKI for protecting
Internet payments was considered in the mid 1990s, the key question was
whether the full cost of obtaining a fraudulently issued certificate would
exceed the expected financial return, where the full cost is understood to
include the cost of registering a bogus corporation, submitting the
documents and all the other activities that would be required if a
sustainable model for payments fraud was to be established.

For an attack to be attractive to an attacker it is not just necessary for
it to be productive; the time between the initial investment and the
reward and the likelihood of success are also important factors. An attack
that requires several years of advance planning is much less attractive
than an attack which returns an immediate profit.

An attack may be made less attractive by:

   *  Increasing the cost

   *  Reducing the incentive

   *  Reducing the expected gain

   *  Reducing the probability that the incentive will be realized

   *  Increasing the time between the initial investment and the return.

1.4. Social Work Factor

In the cost factor analysis it is assumed that all costs are fungible and
that the attack capacity of the attacker is limited only by their
financial resources. Some costs are not fungible, however; in particular,
inducing a large number of people to accept a forgery without the effort
being noticed requires much more than a limitless supply of funds.

In a computational attack, an operation will at worst fail to deliver
success.
There is no penalty for failure beyond having failed to succeed. When
attempting to perpetrate a fraud on the general public, every attempt
carries a risk of exposure of the entire scheme. When attempting to
perform any covert activity, every additional person who is indoctrinated
into the conspiracy increases the chance of exposure.

The totalitarian state envisioned by George Orwell in 1984 is only
possible because each and every citizen is in effect a party to the
conspiracy. The erasure and replacement of the past is possible because
the risk of exposure is nil.

In 2011 I expressed concern to a retired senior member of the NSA staff
that the number of contractors being hired to perform cyber-sabotage
operations represented a security risk and might be creating a powerful
constituency with an interest in the aggressive militarization of
cyberspace rather than preparing for its defense. Subsequent disclosures
by Edward Snowden have validated the disclosure risk aspect of these
concerns.

Empirically, the NSA, an organization charged with protecting the secrecy
of government documents, was unable to maintain the secrecy of its most
important secrets when the size of the conspiracy reached a few tens of
thousands of people.

The community of commercial practitioners of cryptographic information
security is small in size but encompasses many nationalities. Many members
of the community are bound by ideological commitments to protecting
personal privacy as an unqualified moral objective.

Introducing a backdoor into an HSM, application or operating system
platform requires that every person with access to the platform source, or
who might be called in to audit the code, be a party to the conspiracy.
Tapping the fiber optic cables that support the Internet backbone requires
only a small work crew and digging equipment.
Maintaining a covert backdoor in a major operating system platform would
require hundreds if not thousands of engineers to participate in the
conspiracy.

The Social Work Factor WF_S(t) is a measure of the cost of establishing a
fraud in a conspiracy starting at date t. The cost is measured in the
number of actions that the party perpetrating the fraud must perform that
carry a risk of exposure.

In general, the Social Work Factor will increase over time. Perpetrating a
fraud today claiming that the Roman emperor Nero never existed would
require that millions of printed histories be erased and rewritten; every
person who has ever taught or taken a lesson in Roman history would have
to participate in the fraud. The Social Work Factor would be clearly
prohibitive.

While the Social Work Factor of perpetrating such a fraud today is
prohibitive, the cost in the immediate aftermath of Nero's death in AD 68
would have been considerably lower. While the emperor Nero was obviously
not erased from history, there is a strong consensus among Egyptian
archeologists that something of the sort happened to Tutankhamun before
the discovery of his tomb by Howard Carter.

2. The Problem of Evaluating Trust

The Prism-Proof Email testbed attempts to facilitate the development and
deployment of a new email privacy protection infrastructure by dividing
the problem into the parts for which there are known, well established (if
not necessarily perfect) solutions and the parts for which there are not,
with clearly defined interfaces between the two.

The Trust Publication Web Service is a JSON/REST Web service that supports
the publication of all existing forms of trust assertion (PKIX, OpenPGP,
SAML).
For the sake of future simplicity, a new ASN.1 message format for
OpenPGP-style key endorsement is proposed, so that all the forms of trust
assertion that might be used in a PKI may be expressed without recourse to
multiple data encoding formats. The Trust Publication Web Service need not
be a trusted service, since its role is essentially that of a proxy,
routing messages such as certificate requests to the appropriate
destination(s).

The Omnibroker Web Service is a trusted service that a mail user agent or
proxy can query to determine which security enhancements (encryption,
signature, etc.) should be added to an outbound message (among other
functions).

Between the Trust Publication Web Service and the Omnibroker service sits
the hard research problem of how to make sense of, and what value to place
on, the CA issued certificates, peer-to-peer key endorsements, revocation
information and other signed assertions that might exist.

Robust implementation of public key cryptography allows the signature on a
signed assertion to be verified as belonging to a holder of the
corresponding signature key with near certainty. But such an assertion can
only be considered trustworthy if the purported signer is trustworthy and
is the actual holder of the corresponding signing key, claims that are in
turn established by more signed assertions. Expanding the scope of our
search increases the number of documents on which we are relying for trust
rather than answering the question we wish to answer.

2.1. Probability and Risk

Attempting to analyze the trustworthiness of a signed assertion in a
heterarchical topology such as Phil Zimmermann's Web of Trust leads to an
infinite regression.
Alice may see that Bob's key has been signed 602 by Carol, Doug and Edward but this should only give Alice more 603 confidence in the validity of Bob's key if she knows that Carol, Doug, 604 and Edward are distinct individuals. 606 Probability is a model of events that are random. An attack is a 607 conscious act on the part of an attacker and is only random insofar 608 as the attacker's motive may not require a particular choice of 609 victim or method. In such cases, the particular attack is 'random' 610 from the point of view of the victim but that an attack would take 611 place is due to the fact that the attacker had motive, means and 612 opportunity. 614 The motive for an attacker depends on the perceived rather than the 615 actual difficulty of breaking a system. It might be some time before 616 a competent attacker attempts to break an insecure system that is for 617 some reason generally believed to be secure. But the rate of attack 618 is likely to increase rapidly once the vulnerability is widely known. 619 The system has not become less trustworthy over time; rather, the 620 system was always untrustworthy and it is only the consequences of 621 that fact that have changed. 623 Analyzing the trustworthiness of a Web of Trust using an estimate of 624 the probability that an assertion might be fraudulent is 625 unsatisfactory because it requires us to provide as an input to our 626 calculations the very quantity we are trying to arrive at as an 627 output. 629 Attempting to estimate the probability of default for each assertion 630 in a Web of Trust leads us to an infinite recursion as Alice trusts 631 Bob trusts Carol trusts Alice. We can define the inductive step but 632 have no base case on which to ground it. 634 2.2. Reputation 636 Another frequently proposed metric for analyzing the trustworthiness 637 of assertions is 'reputation'. Reputation is a measure of risk based 638 on reports of past behavior.
640 Reputation has proved somewhat effective in online restaurant 641 reviews. A restaurant that receives a high number of good reviews is 642 likely to be worth visiting but ratings based on a small number of 643 reviews can be wildly inaccurate. A glowing review may have been 644 written by a satisfied customer or by an unscrupulous proprietor. A 645 series of negative reviews may be written by dissatisfied customers or 646 a jealous competitor. 648 Restaurant reviews work because there is a widely shared 649 understanding of what makes a good or a bad restaurant. There is no 650 similar shared understanding of the quality of a public key 651 validation process except among specialists in the field. 653 2.3. Curated Spaces 655 'Reputation based' systems have proved highly effective at 656 controlling email abuse but these systems use a large quantity of 657 empirical data, including data from honeypot email servers, expert analysis 658 of abuse traffic, and content analysis to arrive at reputation scores. 659 It is the action of the curator that turns the raw data into a useful 660 measure of risk rather than the mechanical application of a clever 661 algorithm. 663 There is a good argument to be made for introducing a curator into a 664 PKI trust model but that is an argument about who should perform the 665 analysis rather than how the analysis is to be performed. 667 2.4. Trustworthy Time 669 The problem of grounding the Web of Trust is solved if there is 670 available a notary authority whose trustworthiness is beyond 671 reasonable dispute. Once a trust assertion has been notarized by such 672 a notary authority, the cost of forgery becomes the cost of suborning 673 the notary authority. 675 As is demonstrated later, use of linked timestamps and cross- 676 notarization amongst notaries makes it possible to establish a 677 timestamp notary infrastructure such that perpetrating a forgery 678 requires that each and every notary in the infrastructure be compromised.
680 The analysis of the set of trust assertions begins by assigning a 681 Social Work Factor to the earliest assertion prior to the time at 682 which it was notarized. This provides the base case of the induction 683 from which the rest of the analysis proceeds. 685 3. Maximizing Social Work Factor to Maximize Trust 687 As previously described, the purpose of the Social Work Factor is to 688 support the design process by allowing the consequences of different 689 design approaches to be considered. 691 When designing a trust infrastructure, there are two different 692 attacks that need to be considered. First, there is the attack in which a 693 credential is issued for an entirely fictitious persona; second, 694 there is the attack in which a credential is issued to an impostor 695 impersonating a real persona. 697 The consequences of the two attacks are very different, particularly 698 where a confidentiality infrastructure is concerned. Indeed, it might 699 be considered desirable to encourage participants to create and use 700 fictitious personas to provide anonymity for their actions in certain 701 circumstances. An impostor who gains a credential for a real person 702 can use it to persuade relying parties that their communications are 703 confidential when they are in fact compromised and can steal the use 704 of the target's reputation. 706 The context of an attack is also important. The confidentiality of 707 the private communications of an individual is an issue for that 708 individual and their correspondents alone. The confidentiality of the 709 communications of an individual acting for their employer is much 710 more complex. In addition to the employer having an interest in 711 protecting the confidentiality of the communication, there may be a 712 legitimate employer interest in being able to view the contents.
For 713 example, it is now generally accepted in many countries that most 714 government employees do not have a right of privacy from the people 715 for whom they ultimately work unless their job function falls into a 716 narrowly scoped exception. 718 3.1. Trust Specifiers 720 A trust specifier is a mechanism that identifies a public key either 721 directly (e.g. a self-signed certificate) or indirectly (e.g. a Key 722 Identifier). 724 Trust specifiers are not trust assertions but may be used to create 725 trust assertions. For example, an OpenPGP fingerprint does not make 726 any statement about the owner of a public key but an OpenPGP 727 fingerprint printed on a business card is an explicit claim that the 728 specified public key may be used to send encrypted email to the 729 individual named on the card. 731 3.1.1. Key Identifiers 733 PGP introduced the use of key fingerprints as the basis for key 734 exchange. A cryptographic digest value is computed from the user's 735 public key and used as the basis for key endorsement (called key 736 signing in the PGP terminology). 738 The term 'fingerprint' has no formal definition in the PKIX 739 specifications but the term is widely used to refer to a message 740 digest of the entire contents of a certificate. Since this use is 741 incompatible with the PGP usage, the term Key Identifier is preferred 742 as this is unambiguous in both contexts. 744 In PKIX, the key identifier value is a value chosen by the issuer to 745 uniquely identify a public key. The Key Identifier value of the 746 issuing public key is specified using the authorityKeyIdentifier extension 747 and the Key Identifier value of the subject is specified in the 748 subjectKeyIdentifier extension.
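The fingerprint approach can be sketched as follows; a minimal illustration in Python, assuming the public key is available in some canonical byte encoding. The digest algorithm and 20-byte truncation are hypothetical choices for this sketch, not taken from PGP or any PKIX profile:

```python
import hashlib

def key_fingerprint(public_key_bytes: bytes) -> str:
    # Hypothetical sketch: digest the canonical encoding of the
    # public key parameters. SHA-256 and the 20-byte truncation
    # are illustrative choices only.
    digest = hashlib.sha256(public_key_bytes).digest()
    # Colon-separated hex, the style typically printed on a
    # business card or Web page.
    return ":".join(f"{b:02X}" for b in digest[:20])

# Any party holding the same key bytes computes the same value,
# so the fingerprint can stand in for the key during endorsement.
example_key = b"example public key parameters"
print(key_fingerprint(example_key))
```

Because the fingerprint is a deterministic function of the key itself, it provides the strong binding between identifier and key that a PKIX Key Identifier is not required to have.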
750 While a PKIX Key Identifier is not required to have the strong 751 binding to the corresponding public key that an OpenPGP identifier 752 does, a profile could require that certificates specify 'strong' Key 753 Identifiers formed using a cryptographic message digest of the public 754 key parameters. 756 Strong Key Identifiers are not trust assertions but they may be used 757 to facilitate the creation of trust assertions through key signing, a 758 form of the endorsement mechanism discussed below. 760 Strong Key Identifiers may also be used to publish informal key 761 assertions by adding them to a business card or a Web Page. Such uses 762 might be facilitated through definition of appropriate URI and QR 763 code formats. 765 3.1.2. Self Signed Certificates 767 In the context of PKI, the term 'certificate' is generally understood 768 to refer to an X.509 public key certificate that binds a name and/or 769 an Internet address to a public key. 771 A certificate may be either a self-signed certificate or a CA issued 772 certificate. Since the work factor of creating a self-signed 773 certificate is negligible, such certificates demonstrate little in 774 themselves but present the subject's public key data in a format that 775 is compatible with many existing applications. 777 As with Key Identifiers, a self-signed certificate is not a useful 778 trust assertion in its own right but may be used to facilitate the 779 creation of trust assertions through notarization or endorsement. A 780 public key certificate may also be used as the basis for making a 781 Certificate Signing Request to a Certificate Authority. 783 3.2. Trust Assertions 785 In a PKI, a trust assertion makes a statement about the holder of a 786 public key. In the PKIX model as currently deployed and used, the 787 only forms of trust assertion are Key Signing Certificates and End 788 Entity Certificates.
In the OpenPGP model every user is also a trust 789 provider and trust assertions are created in peer-to-peer fashion to 790 create a Web of Trust. 792 3.2.1. Certificate Authority Issued Certificates 794 A Certificate Authority (CA) is a trusted third party that issues 795 digital certificates. A private CA issues certificates for a closed 796 community of relying parties; a public CA issues certificates without 797 restriction on the relying parties. 799 In the Web PKI, Web browser and platform providers 800 embed the trust anchors of selected public Certificate Authorities 801 into the application as default trust providers. 803 Operation of a Certificate Authority involves two types of 804 certificate. A certificate is either a certificate signing 805 certificate or an end entity certificate. While it is possible for a 806 Certificate Authority to lose control of a signing key used to issue 807 certificates, the Social Work Factor for such attacks can be made 808 prohibitively high. 810 A CA Certificate Policy defines (among other things) the validation 811 criteria that the CA applies before deciding to issue a certificate. 812 A certificate policy is designed to provide a balance between the 813 social work factor presented to an attacker and the cost to the CA 814 and the subject. The CA/Browser Forum Extended Validation practices 815 are designed to present a very high social work factor to attackers 816 while Domain Validation presents a significantly lower cost to both 817 subjects and attackers. 819 The EV guidelines in particular are designed to present a social work 820 factor that increases each time an attacker attempts an attack. 821 Registering one corporation is relatively straightforward. 822 Registering a corporation in a way that prevents ownership being 823 traced back to the owners is rather more difficult.
Registering 824 hundreds of false front corporations is considerably harder as any 825 common link between the corporations means that if one corporation is 826 discovered to be a front for fraudulent purposes, the rest will come 827 under close scrutiny. 829 A major advantage of the CA certificate issue approach is that it 830 allows a high degree of trust to be established very quickly while it 831 takes a considerable time to establish trust in a pure Web of Trust 832 approach. 834 The main drawback of the CA issue approach lies in providing trust to 835 individuals for personal use rather than to corporate or government 836 entities or to employees working for such entities. Corporations and 837 Government entities invest in obtaining EV validated certificates as 838 a cost of doing business that has an established return. There is no 839 obvious return on obtaining a similar high assurance credential for 840 personal use but validating an individual's credentials is just as 841 complicated as validating the credentials of a corporation. The 842 history of the UK National Identity card suggests that there is no 843 reason to expect that the cost of provisioning credentials would be 844 significantly reduced at scale. 846 The CA issue model has been successfully applied to issue of 847 credentials to employees for use within an organization and such 848 credentials are occasionally used for external purposes such as 849 S/MIME email. But such certificates are not intended for personal use 850 and are typically revoked when employment ends. 852 3.2.2. Key Signing 854 In the OpenPGP model every key holder is a trust provider and every 855 key may be used to provide trust through 'key signing'. The trust 856 value of key signing depends on how close the relying party is to the 857 signer. A key that I have signed myself is far more trustworthy than 858 a key signed by a friend of a friend of a friend.
860 The Social Work Factor of forging an individual key signing is low 861 but could be fixed in time through use of a notary. A recent key 862 signing purporting to be for the public key of Barack Obama would 863 have negligible evidentiary value unless produced by someone very 864 close to me or endorsed through other means. A key signing that had 865 been notarized during his time as a Harvard student would be 866 considerably more trustworthy. 868 As the separation between the relying party and the signer becomes 869 larger, it becomes increasingly desirable to require multiple 870 independent key signing paths. The Social Work Factor combined with 871 an absolutely reliable notary service provides a firm basis for 872 evaluating the trust value of such Webs of Trust: The trustworthiness 873 of a key is dependent on whether the incentive to create a forgery 874 ever exceeded the Social Work Factor of performing a forgery. 876 Use of peer-to-peer key signing does provide a viable model for 877 establishing high assurance credentials for personal use but 878 establishing a high degree of assurance is like making the best 879 quality Scotch whisky: the product takes an inordinately long time to 880 reach maturity. It is hard to see how key-signing could be made a 881 viable model for commercial use. It is an especially poor fit for 882 issue of credentials to employees as the employer has no control over 883 the issue or use of the credential and no ability to revoke the 884 credential when an employee is terminated. 886 One advantage frequently cited for OpenPGP over PKIX is that there is 887 no CA role and the infrastructure is therefore 'free'. Neither the 888 premise nor the conclusion holds. Many profitable businesses have 889 been built on the basis of open source software, and a Web of Trust 890 that reached critical mass would offer abundant opportunities to 891 companies with PKI expertise.
893 Nor is the lack of business stakeholders in an infrastructure 894 necessarily an advantage. One of the reasons that the Web PKI has 895 been successful is that there are many businesses that promote the 896 use of SSL certificates. 898 3.2.3. Adding Key Endorsement to PKIX 900 Considering the Social Work Factor presented by trust assertions in 901 the PKIX and OpenPGP models establishes clear benefits to both 902 approaches. Each model best suits a different community. 904 One approach to a next generation PKI therefore would be to simply 905 enable the use of either scheme in any application. This is not the 906 best approach, however, as it leaves the email security market in a 907 Betamax/VHS standards war that has been stalemated for almost two 908 decades. Further, attempting to maintain the use of two different 909 PKIs leads to an unnecessary increase in complexity. 911 A PKI that combines the PKIX and OpenPGP approaches offers clear 912 advantages over both. For many practical reasons that will only be 913 summarised here, it is better to extend the existing PKIX 914 infrastructure to support Key Signing rather than start from the 915 OpenPGP message formats or attempt to design a system based on a new 916 encoding format. These include the deployed base of S/MIME email 917 clients, the use of PKIX to support SSL and the widespread support 918 for PKIX in common code platforms. 920 Rather than attempting to force the peer-to-peer Key Signing model into 921 the existing PKIX model, a new structure, the Key Endorsement, is 922 proposed instead. While it is technically possible to use PKIX cross 923 certificates as a substitute for PGP Key Signing, the legacy PKIX 924 infrastructure is designed on the assumption that a cross certificate 925 is either completely trustworthy or completely untrustworthy.
It is 926 better to introduce a new data format to represent new semantics than 927 to attempt to retrofit new semantics into a legacy format with a 928 deployed infrastructure. 930 I propose the introduction of a Key Endorsement, similar to a PKIX 931 certificate except that: 933 * A Key Endorsement is signed by a PKIX end entity certificate, 934 not a Key Signing Certificate. 936 * Instead of a validity interval, there is an issue time. 938 * The signer and subject key identifiers are required elements of 939 the structure rather than optional extensions. 941 * Key endorsements are not intended for direct use by relying 942 applications. 944 Adding Key Endorsements to the PKIX model allows the use of both 945 trust models in combination. 947 3.2.3.1. Peer Endorsement 949 Key endorsement may also be used to endorse keys of peers. While the 950 ability of the 16-year-old Alice to accurately validate the public 951 keys of her peers is questionable, the value of a notarized 952 endorsement increases over time. 954 Relying on peer endorsement alone requires that the relying party be 955 'close' to the signer. Using a combination of CA issued certificates 956 and Key Endorsements allows a high social work factor to be 957 established even for a remote population. It is quite feasible for an 958 attacker to generate a network of one, a hundred or even a million 959 key signing events but generating a fraudulent network that contains 960 a mixture of CA validated certificates and key endorsements has a 961 much higher social work factor. 963 3.2.3.2. Self Endorsement 965 Self endorsement is a special form of endorsement as a person is 966 least likely to make false statements that might harm their own 967 security, although this is possible where coercion or 968 undue psychological pressure is applied.
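The Key Endorsement fields enumerated above might be modeled as follows; a hypothetical sketch in Python in which the field names, the byte-string signing payload, and the caller-supplied sign function are all illustrative assumptions, with no ASN.1 encoding implied:

```python
from dataclasses import dataclass

@dataclass
class KeyEndorsement:
    # Signer and subject key identifiers are required fields,
    # not optional extensions as in a PKIX certificate.
    signer_key_id: str
    subject_key_id: str
    # A single issue time replaces the PKIX validity interval.
    issue_time: int
    # Signature made with the signer's end entity key, not a
    # Key Signing Certificate.
    signature: bytes

def endorse(signer_key_id, subject_key_id, issue_time, sign):
    # `sign` stands in for signing with the end entity private key.
    payload = f"{signer_key_id}|{subject_key_id}|{issue_time}".encode()
    return KeyEndorsement(signer_key_id, subject_key_id,
                          issue_time, sign(payload))

# Self endorsement: an older key endorses a newer one, linking
# the two in the user's personal web of trust. The key names and
# the placeholder signing function are invented for illustration.
e = endorse("alice-school-key", "alice-personal-key", 1414368000,
            lambda payload: b"sig:" + payload)
```

A relying party evaluating the endorsement would verify the signature against the end entity certificate identified by the signer key identifier.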
970 A user may be expected to have multiple public keys issued over the 971 course of their life, for use at school, university, at different 972 employers and for personal use. In the existing PKIX model each of 973 these keys is independent. Key endorsement allows a user to 974 countersign new keys with older keys, establishing a personal web of 975 trust that develops over time, so that the social work factor is 976 preserved and increased rather than starting from scratch 977 each time a certificate is issued. 979 For example, Alice is issued a certificate at school which she uses 980 to sign a key endorsement for a personal key she creates. When Alice 981 goes to university, she endorses her university key with her personal 982 key and vice versa. On graduating, Alice becomes a journalist. 983 Sources can send her encrypted messages containing tips confident in 984 the knowledge that Alice is the same Alice who attended the high 985 school and university listed in her official biography. 987 3.2.3.3. Key Endorsement Parties 989 A PGP Key Signing party is an event held to facilitate exchange of 990 PGP fingerprints. Like the use of hashtags and many other important 991 constructs in social media, key signing parties are a practice that 992 has arisen out of use rather than being part of the original model. 994 Recognizing a Key Endorsement Party as being a special form of peer 995 endorsement enables special consideration when making a trust 996 evaluation. It is not practical for a thousand attendees at an 997 international conference to perform mutual key endorsements with 998 every other participant (a half million pairs!) but it is entirely 999 practical to establish a scheme in which anyone who gets their key 1000 endorsed by some number (e.g. five) of qualified Key Endorsers will 1001 have their key endorsed by the Key Endorsement Party key at the end. 1003 3.3.
Trust Meta Assertions 1005 Trust Meta Assertions are trust assertions that make statements about 1006 other trust assertions. 1008 3.3.1. Revocation and Status Checking 1010 Real users and real administrators make mistakes from time to time. 1011 Private keys are lost or stolen or misused. Any system that does not 1012 provide a mechanism for forgiving mistakes is unlikely to be 1013 practical in the real world. 1015 Revocation checking limits the incentive for attack by allowing the 1016 time window of vulnerability to be limited in the case that a 1017 certificate is fraudulently issued or that a private key is discovered to 1018 be lost or stolen. This does not increase the social work factor but 1019 decreases the value of an attack. 1021 3.3.2. Notarization 1023 Notarization of a Trust Assertion or Key Identifier by a trustworthy 1024 notary allows the social work factor of forging the trust assertion 1025 or key identifier to be raised to an infeasible level for dates after 1026 the notarization took place. 1028 While a traditional notary might be suborned relatively easily, a 1029 digital notary can be constructed in such a fashion that postdating 1030 a notary assertion by more than a few hours or days is infeasible. 1032 3.3.3. Transparency 1034 Certificate Transparency (Laurie et 1035 al.) is a proposed infrastructure in which CAs publish the certificates they issue in public, append-only logs. 1037 Publishing issued certificates increases the probability that an 1038 attempted fraud will be discovered and thus increases the Social Work 1039 Factor and reduces the incentive. 1041 3.4. Other Approaches 1043 The X.509/PKIX and OpenPGP infrastructures are not the only 1044 infrastructures that have been proposed but they are the only large 1045 scale infrastructures which large numbers of users rely on today. The 1046 Social Work Factor over time may be used to evaluate alternative PKI 1047 options that have not (yet) achieved widespread use. 1049 3.4.1.
DNSSEC 1051 Like PKIX, DNSSEC is based on a hierarchical trust model. Unlike 1052 PKIX, DNSSEC is only capable of making statements about DNS names. 1054 The Social Work Factor is not a useful tool to analyze DNSSEC 1055 assertions because there is no historical dimension to the DNSSEC 1056 infrastructure. The trustworthiness of a DNSSEC assertion depends on 1057 the trustworthiness of the root operator, the TLD registry and the 1058 party that signed the corresponding DNS zone. If these are 1059 trustworthy then so is the assertion; if they are not, there is no alternative. 1061 While the root and registry operators have strong commercial 1062 incentives not to default, a default may be coerced through 1063 government action and relying parties have no independent means to 1064 determine if a default has taken place. 1066 3.4.2. SPKI / SDSI 1068 The chief distinguishing characteristic of SPKI is that SPKI names 1069 are not universal. While this has the interesting effect of 1070 simplifying the evaluation of trust within the SPKI naming 1071 infrastructure, this effect is lost when attempting to send an email 1072 because the Internet email system is based on the assumption that the 1073 namespace is universal. It does not make sense to talk about 'Alice's 1074 bob@example.com' (although UUCP email did use a scheme of that type). 1076 3.4.3. Identity Based Cryptography 1078 Identity based cryptography is a frequently touted 'alternative' to 1079 conventional public key cryptography in which the public key of each 1080 subject is a deterministic function of their name and a master public 1081 key which is known to everyone. Each user obtains their private key 1082 from the party that created the master public key pair using the 1083 master private key.
1085 While many benefits have been claimed for the Identity based 1086 approach, applying to the holder of the master private key for a 1087 private key offers no benefits to key holders over applying for a 1088 certificate from a CA. 1090 Identity based cryptography does offer relying parties the ability to 1091 obtain the public key for a counterparty without the need to 1092 communicate with another party, but this is hardly much of an 1093 advantage unless there is no means for certificates to be passed in- 1094 band and there is no advantage at all if an external source has to be 1095 queried to obtain the status of a public key to determine that the 1096 private key has not been reported lost or compromised. 1098 The simplicity of certificate chain validation and status checking in 1099 Identity Based Cryptography is the result of the technology being 1100 unable to support these features rather than the features being 1101 unnecessary. 1103 4. Maximizing Social Work Factor in a Notary Infrastructure 1105 The ability to determine that a trust event occurred before a certain 1106 point in time increases the social work factor for forging the event 1107 after that point in time. If suborning the notary is infeasible, the 1108 Social Work Factor is raised to an infeasible level. 1110 Haber and Stornetta proposed a notary that produced a 'catenate 1111 certificate' in which each notary output is fed as an input into the 1112 next. It is thus impossible to insert notary events between two prior 1113 notary events without breaking the cryptographic algorithm used for 1114 authentication. 1116 A chain of notary events may be fixed in time by notarizing events 1117 that are unpredictable in advance but known with a high degree of 1118 certainty afterward. For example, the weekly lottery numbers or the 1119 closing prices on various equity markets.
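The catenate certificate idea can be sketched as a simple hash chain; a minimal illustration in Python, where the digest algorithm and the tuple-based log format are assumptions of this sketch rather than Haber and Stornetta's actual construction:

```python
import hashlib

def notarize(chain, event: bytes) -> bytes:
    # Each log entry's digest covers the previous digest, so a
    # notary event cannot later be inserted between two existing
    # entries without breaking the hash function.
    prev = chain[-1][1] if chain else b"\x00" * 32
    digest = hashlib.sha256(prev + event).digest()
    chain.append((event, digest))
    return digest

def verify(chain) -> bool:
    # Recompute every link; any altered or inserted entry breaks
    # all subsequent digests.
    prev = b"\x00" * 32
    for event, digest in chain:
        if hashlib.sha256(prev + event).digest() != digest:
            return False
        prev = digest
    return True

chain = []
notarize(chain, b"trust assertion: endorsement of key A")
# Grounding event that was unpredictable in advance, e.g. lottery
# numbers or closing equity prices (placeholder content).
notarize(chain, b"grounding event: closing prices 2014-10-27")
assert verify(chain)
```

Publishing the latest digest fixes every earlier entry in time: an attacker who later wishes to backdate an event must recompute the entire suffix of the chain and replace every published copy.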
1121 The principal drawback to grounding the notary chains using external 1122 events is that the ability of a party to verify the trustworthiness 1123 of the notary is bounded by the trustworthiness of the reports of the 1124 external events used for verification. 1126 A better approach is to establish a large number of 1127 independently operated notaries and a procedure for cross 1128 notarization that ensures that no notary can default unless every 1129 other notary defaults. The Social Work Factor of suborning any notary 1130 then becomes the Social Work Factor of suborning every notary and 1131 erasing all histories for their notary events. 1133 For ease of explanation, a two tier notary infrastructure is 1134 envisaged in which a notary is either a local notary or a meta 1135 notary. 1137 Local notaries are notaries that produce a catenate log of 1138 notarization requests submitted by their users. The current value of 1139 the notary chain is presented to one or more meta-notaries at regular 1140 time intervals (e.g. an hour) to prevent backdating of notary claims, 1141 and the local notary notarizes the output of one or more meta notaries at regular 1142 intervals to prevent predating. 1144 Meta notaries are notaries that only notarize the requests submitted 1145 by local notaries and by other meta notaries. Before accepting a 1146 notarization request, a meta notary audits the actions of the notary 1147 making the request to ensure that the time-stamp values etc. are 1148 consistent and correct. 1150 The schedule of peer to peer notarization among meta notaries is set 1151 such that a notary request made to any of the local notaries served 1152 will affect the output of every meta notary within a predetermined 1153 period of time. 1155 To make use of such a notary infrastructure, a relying party chooses 1156 at least one meta notary and obtains and maintains a record of the 1157 catenate certificate chain over time.
The trust chain of the chosen 1158 meta notary may then be used to verify any notary assertion presented 1159 by any other meta notary, which may in turn be used to verify any 1160 notary assertion from a local notary that participates in the scheme. 1162 A significant benefit of this approach is that the ability to verify 1163 notary assertions is assured even if the Local Notary that originally 1164 produced them ceases functioning. All that is necessary for the 1165 continued operation of the system to be assured is for the pool of 1166 meta notaries to be sufficiently large to render suborning all of 1167 them infeasible. 1169 5. Conclusions and Related Work 1171 It has not escaped the notice of the author that the social work 1172 factor might be applied as a general metric for assessing the 1173 viability of a political conspiracy hypothesis. 1175 Author's Address 1177 Phillip Hallam-Baker 1178 Comodo Group Inc. 1180 philliph@comodo.com