Network Working Group                                    P. Hallam-Baker
Internet-Draft                                               May 3, 2019
Intended status: Informational
Expires: November 4, 2019

                    The Devil is in the Deployment
                  draft-hallambaker-iab-deployment-00

Abstract

   The defining feature of a standard is that it be widely, and
   preferably ubiquitously, used.  The deployment strategies of
   previous protocol standardization efforts are compared, and best
   practices for application and infrastructure protocol deployment
   are described.  Recommendations are made for enabling deployment of
   specific protocols and for future IETF working practices.

   This draft is a generalization of the principles used to develop
   the deployment strategy for the Mathematical Mesh.  Many documents
   describing deployment considerations were produced during the
   development of the Mesh, and these have motivated many changes to
   the design along the way.

   The Mesh is consciously and deliberately modeled on the strategies
   that succeeded in the Web.  Some of these strategies are well
   known; others have not been widely discussed.  This paper presents
   the parts of the strategy most relevant to the IAB workshop
   program.

   This document is also available online at
   http://mathmesh.com/Documents/draft-hallambaker-iab-deployment.html
   [1].

Status of This Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF).  Note that other groups may also distribute
   working documents as Internet-Drafts.  The list of current
   Internet-Drafts is at https://datatracker.ietf.org/drafts/current/.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other
   documents at any time.  It is inappropriate to use Internet-Drafts
   as reference material or to cite them other than as "work in
   progress."

   This Internet-Draft will expire on November 4, 2019.

Copyright Notice

   Copyright (c) 2019 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (https://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document.  Code Components extracted from this
   document must include Simplified BSD License text as described in
   Section 4.e of the Trust Legal Provisions and are provided without
   warranty as described in the Simplified BSD License.

Table of Contents

   1.  Lessons from History
     1.1.  The World Wide Web
     1.2.  IPv6
       1.2.1.  Recommendations
     1.3.  DNSSEC and DANE
       1.3.1.  DANE
       1.3.2.  DPRIV
   2.  Recommendations
     2.1.  Purpose of the IETF
     2.2.  Design for Deployment
     2.3.  Identify Stakeholders and Gatekeepers
     2.4.  Realistic Schedules
     2.5.  Eliminate Deployment Dependencies
     2.6.  Recognize Failure
   3.  References
     3.1.  URIs
   Author's Address

1.  Lessons from History

   When the Internet was first being developed, the number of hosts
   was zero and the user community was highly motivated to adopt new
   technologies because they were developing them.  Today the Internet
   has four billion users and forty years of legacy infrastructure.
   If we are going to improve our record of deploying new
   developments, we must look past the earliest pioneering days and
   focus on the deployment of technologies developed after the
   Internet had grown to the point where deployment was a primary
   design constraint.

   If we are going to succeed in being relevant, we must design for
   deployment.  We must also have the courage to learn from past
   mistakes.  Reminding people of past mistakes is never popular, but
   learning from them is the only way to avoid repeating them.

1.1.  The World Wide Web

   Contrary to subsequent histories, the success of the World Wide Web
   was neither inevitable nor an accidental consequence of the design.
   The Web was neither the first networked hypertext system proposed
   nor the best funded.  When the Web was first demonstrated in public
   late in 1992, there were two dozen competing schemes, and the only
   developers paid to work on the Web full time were Tim Berners-Lee
   himself and one intern.

   Having read and re-read the www-talk archives many times while
   researching prior art, it is clear to me that deployment was of
   paramount concern in the design of URLs, HTTP and HTML.  The scheme
   prefix was added to the original HTTP locator in response to Dan
   Connolly's suggestion that the Web should permit access to any
   resource regardless of the access protocol.  The port field was
   added after developers complained that they could not run a Web
   server because they lacked the system privileges then required to
   bind to port numbers lower than 1024.  SGML was adopted as the
   basis for the markup language in spite of, rather than because of,
   its technical merits.

   The success of the Web was in part due to the fact that it was
   designed to solve a specific set of problems rather than to realize
   Ted Nelson's vision.  Stripping difficult-to-implement features
   (search, payments, referential transparency) out of the core
   allowed them to be solved separately as demand and resources
   permitted.

   But more important than the design was the fact that the Web
   offered a dramatically lower cost of deployment than any of its
   rivals.  At the time, 'free software' generally came at the cost of
   several days' effort trying to get the source code to compile.
   Commercial software was priced at a fraction of the cost of the
   machine on which it ran, and those machines ranged from tens of
   thousands to millions of dollars.

   Web clients and servers were free for non-commercial use, and the
   implementations developed by NCSA had been funded by US government
   grants.  This last consideration was of key importance in the
   Clinton administration's decision to use the Web as the basis for
   realizing Al Gore's vision of an 'information superhighway'.

   Work on gaining the endorsement of the White House began before the
   1992 presidential election.  The MIT AI Lab had begun its
   Intelligent Information Infrastructures project, which put material
   from all the campaigns online, a year earlier.  I made contact with
   Jock Gill, who was then in charge of the Clinton-Gore campaign, and
   proposed that the White House deploy a Web site.

   The launch of the IBM PC a decade earlier had validated the use of
   the 'microcomputer' as a business tool.  Before the IBM PC, the
   priority of most MIS departments was to maintain their monopoly.
   As an intern at ICI, I spent four months writing code to screen-
   scrape reports from the IBM mainframe so that my boss could analyze
   them with Lotus 1-2-3.  My predecessor had had to work with a
   machine hidden in a cupboard so that the MIS department couldn't
   find and confiscate it.  IBM's endorsement had legitimized the
   microcomputer, and I wanted a similar endorsement for the Web.

   The strategy paid off.  Before the launch of whitehouse.gov, we
   could persuade almost no businesses outside the computing industry
   to adopt the Web.  This was not for lack of effort, and despite the
   fevered press reporting on the thousand-percent-per-week growth
   rate of the Web.  After whitehouse.gov was launched, there was no
   need for persuasion.  The Web was growing of its own accord.

   Another endorsement that was aggressively pursued was Microsoft's.
   This initiative was undertaken by Robert Cailliau over the course
   of many months in 1993.  Subsequent attacks on Microsoft for
   'stealing' the Web technologies have always rankled me.
   The truth of the matter is that we gave them the technology and
   pleaded with them to distribute it.

   In summary, the Web succeeded because it was designed for
   deployment and because we aggressively pursued key endorsements to
   market it.  A necessary part of the design-for-deployment approach
   was abandoning approaches that were regarded as sacrosanct in the
   field, for reasons that turned out to be grounded in ideology
   rather than technology.

1.2.  IPv6

   Despite work on IPv6 starting at the same time as the Web, and
   despite the fact that the growth rate of the Internet was projected
   to exhaust the IPv4 address space in 1998, deployment of IPv6
   continues to fall short of expectations.

   It is arguable that the delay in adoption of IPv6 is in part due to
   the success of the Web.  Wide area networking was growing at an
   exponential rate before the Web appeared on the scene, but it was
   the Web that caused the Internet to kill off the competing WAN
   protocols.  Had that occurred in 1997, the transition to IPv6 might
   conceivably have been completed first.  But Internet supremacy
   became inevitable in 1995 instead, meaning that it was IPv4, not
   IPv6, that became ubiquitous.

   Deployment of the Internet has been driven by two killer
   applications: email and the Web.  And here there is an ironic
   twist.  Over 1993, the proportion of Internet users who were Web
   users rose from ~0% to ~100% because the URL scheme field delivered
   interoperability.  But from the start of 1994 to the end of 1995,
   the percentage of global WAN traffic that was Internet traffic grew
   from under 20% to over 80%, and this change was likely driven by
   the Web's lack of interoperability.

   In 1992, the primary applications for academic computer networks
   were remote access, email and file transfer.  As a graduate student
   in the UK, the primary WAN networks I made use of were HEPNET,
   which ran on DECnet Phase IV, and JANET, which ran the Coloured
   Book protocol stack.  I did not have direct access to the Internet
   from my Oxford University machine, but I could access Internet
   machines in Germany via HEPNET.  I could also exchange email with
   Internet users via a mail gateway which used the heuristic that
   email addresses beginning with com. or edu. were Internet addresses
   and reversed the big-endian addressing convention adopted by JANET
   (sketched below).  Remote access and file transfer could be
   achieved using similar (but less reliable) techniques.

   Before the Web, attempts to secure funding for access to any
   computer network other than JANET were an exercise in futility.
   The only person with decision-making power was the Secretary of
   State for Education, who was most unlikely to fund a rival to the
   system his own department was paying to build, not least because
   the advisory board was staffed by the people who were developing
   it.  Any attempt to propose use of a rival technology was easily
   defeated by pointing out that JANET afforded access to the exact
   same resources.

   The Web changed this calculus because, even though the Web itself
   could run over other protocols, it was the content that the users
   wanted access to, and that was only available on the Internet (at
   least as far as the Minister was aware).
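
   The gateway heuristic mentioned above can be made concrete with a
   short sketch.  The following is an illustrative reconstruction in
   Python, not the actual gateway code; the function name and the
   exact set of labels tested are assumptions made for the example.

   def gateway_rewrite(address: str) -> str:
       """Guess whether a big-endian JANET-style address is really an
       Internet address and, if so, reverse the domain labels.

       JANET wrote domains big-endian (uk.ac.oxford.physics); the
       Internet convention is little-endian (physics.oxford.ac.uk).
       """
       local, _, domain = address.partition("@")
       labels = domain.split(".")
       if labels[0] in ("com", "edu"):
           # A leading 'com' or 'edu' label cannot be a JANET address,
           # so treat it as an Internet domain written in JANET order,
           # e.g. user@edu.mit.ai -> user@ai.mit.edu
           return local + "@" + ".".join(reversed(labels))
       return address  # otherwise assume a native JANET address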

   The first point of this apparent digression is that while these
   political considerations may sound petty and short-sighted, they
   are exactly the considerations that gave rise to a particular
   interpretation of the 'end-to-end' principle, one which insists
   that the IP address remain constant end-to-end, an interpretation
   that appears nowhere in the original papers.  The point of 'IP
   end-to-end' was that applications work a lot better without
   unnecessary translations from one protocol stack to another and
   back.  Government sponsorship can be a powerful driver of early
   adoption.  It can also lead to a situation in which the market is
   deadlocked in a format war.

   The second point of the digression is that 'IP end-to-end' became
   an ideology, a slogan.  Any suggestion that NAT was beneficial was
   attacked using exclusionary tactics.  I was attacked in very
   unpleasant terms for suggesting that I had no intention of paying
   my ISP $10 per month for each device added to my home network when
   I could do the same thing for free using NAT.  Since I now have 200
   devices on my home network, that would today cost me $24,000 a
   year.

   To the extent there was a deployment strategy for IPv6, it was to
   raise concern that exhaustion of the address space was imminent and
   to make dire predictions of the consequences, while discouraging
   the use of NAT or other techniques that might mitigate the problem.
   This approach was unsustainable.  The main party that would suffer
   if the IPv4 address pool was exhausted was the ISPs.  In 1998, I
   was amused when a neighbor reported that the broadband provider I
   had abandoned because their terms of service did not permit use of
   NAT had shipped him a new router with NAT enabled by default.  I
   was further amused when I read the new terms of service and found
   that use of NAT was still prohibited.

   The argument made against NAT was that users were not going to move
   to IPv6 unless the new protocol offered new features.  The reverse
   is in fact the case.  Even today, IPv6 is not a choice for most
   Internet users.  It is a feature their ISP does or does not
   provide.  Or, more accurately, it is a feature that some of the
   multiple ISPs that a user might rely on in a single day might
   provide.  Differentiating IPv6 by offering additional features to
   application developers is doomed to failure because IP is a network
   layer capability that the application protocol designer does not,
   and indeed cannot, rely on.

   The OSI layered model is a poor guide to the Internet protocol
   stack, but the principles of abstraction and encapsulation are not.
   An application layer protocol has no business dealing in network
   layer addresses.  We should regard application protocols that rely
   on IP addresses remaining constant end-to-end as poorly architected
   rather than attempting to police global provision of Internet
   services to enforce a misfeature.

1.2.1.  Recommendations

   Rather than trying to drive deployment of IPv6 by limiting the
   functionality of the IPv4 Internet, we should attempt to eliminate
   as many differences as possible.  Our end goal should be to improve
   network capabilities as quickly as possible, not to achieve IPv4
   sunset as quickly as possible.

   What is important is that we have enough addresses to allow the
   Internet to continue to grow.

   NAT has allowed the number of devices connected to the Internet to
   exceed the number of IPv4 addresses by at least an order of
   magnitude, by allowing multiple devices at the same site to share
   the same address.  The effectiveness of this strategy will
   inevitably decline as the number of sites begins to approach the
   number of available addresses.

   The priority therefore should be to make access to the IPv6
   backbone as ubiquitous as possible, including access from devices
   that are not IPv6 capable and never will be.

   I have 200 devices on my home network, of which only 20 are
   configured to use IPv6.  I am not replacing my 36" plotter just so
   that I can connect to it via IPv6, nor do I plan to open up walls
   or climb ladders to do so.  Nor is anyone else in a similar
   position going to do so.  But every one of those devices could
   function as if it were IPv6 capable, as far as the rest of the
   Internet was concerned, if the NAT box connecting my home network
   to the Internet were appropriately configured.

   Deployment cannot be advanced by withholding features, but it can
   be advanced by offering better performance.  Users demand gigabit
   connection speeds because they believe those speeds will deliver
   better performance.  Users are likely to demand compliance with an
   IETF-specified suite of RFCs if they believe that it will provide
   better performance.  But the industry can only follow the IETF's
   lead if the IETF recommendation is actionable.  'Stop using IPv4'
   is not actionable today and will not be actionable, as far as the
   home user is concerned, within our lifetimes.  A recommendation
   that ISPs provide IPv6 to the home and enterprise, and that every
   home router support a feature set that allows every device
   connected to the local network to make full and transparent use of
   that capability, is actionable.

1.3.  DNSSEC and DANE

   Like IPv6, DNSSEC was proposed at roughly the same time as the Web,
   and it is generally argued that deployment of DNSSEC was eclipsed
   by the rise of a Web technology.  The introduction of SSL (now TLS)
   led to the deployment of what is now known as the WebPKI, one of
   only two Internet security protocols that have approached
   ubiquitous deployment in their field of use (the other being the
   closely related SSH).

   This is a misconception, as the WebPKI was developed to provide an
   accountability infrastructure sufficient to enable Internet
   commerce.  DNSSEC was never intended to provide a form of
   authentication sufficient for accountability.  But neither is the
   WebPKI capable of supporting what should have been the primary
   objective of DNSSEC: authenticated distribution of security policy.

   One of the chief problems faced in the deployment of DNSSEC was
   that until critical mass is reached, the network effect works
   against deployment.  DNS services were typically consumed through
   operating system services, and no major operating system provider
   was going to support DNSSEC until there was customer demand.  No
   customer was going to demand DNSSEC support in the operating
   system until they could register keys for their domain, and the
   registries were not going to support registration of keys until
   there was some means of using them.

   The first time the CEO of any major Internet technology provider
   mentioned DNSSEC was when Stratton Sclavos cited the potential for
   deployment of DNSSEC as one of the major benefits of the
   acquisition of Network Solutions in 2000.  At that point, VeriSign
   was the only major stakeholder in the DNS infrastructure endorsing
   deployment of DNSSEC.

   The endorsement was not appreciated by the DNS community.  One of
   the DNSEXT chairs, who was also a member of the IESG, repeatedly
   demonstrated open hostility to the name 'VeriSign' and to anyone
   associated with the company.

   In 2001, detailed examination of the DNSSEC deployment requirements
   revealed that the NSEC record as it was then specified would
   require every name in the zone to be included in the signed NSEC
   chain, that is, the entire zone to be signed, increasing the size
   of the zone file by more than an order of magnitude.  This would in
   turn require substantial changes to the architecture of the ATLAS
   infrastructure then being developed.  Storing the complete zone
   file at every node would require more than 4GB of memory, and thus
   require the use of 64-bit machines, which would add an estimated
   $30 million to the cost.  Re-engineering the system to partition
   the database would delay deployment by at least a year.

   These facts, and a technical proposal that addressed the issue,
   were presented.  One of the responses to the proposal was that if
   the .com zone was too large to be signed using DNSSEC, the correct
   solution was to reduce the size of .com, not to change DNSSEC.
   While I was not surprised the statement was made, it should perhaps
   have been surprising that nobody laughed.

   Besides delaying the start of actual DNSSEC deployment by a decade,
   the situation came very close to litigation that could have
   bankrupted the IETF.  When Sclavos resigned in 2007, one of the
   principal complaints about his performance made by the board was
   the failure to show synergies between the businesses he had
   acquired, in particular the failure to deploy DNSSEC.

   Attempts to deploy DANE and DPRIV have fallen victim to similarly
   blinkered thinking.

1.3.1.  DANE

   DANE was an attempt to use the DNS both to certify server keys and
   to distribute security policy.  Despite repeated warnings, the
   working group never recognized that attempting to achieve both
   goals in one system would introduce constraints that doomed
   deployment.

   At the time DANE was proposed, most DNS registrars operated their
   domain name registration businesses as a loss leader for their
   other services, the major profit center for most being the sale of
   TLS certificates.  The same DNS registrars were the gatekeepers for
   deployment of DNSSEC.

   Had the scope of DANE been limited to issuing free certificates,
   DNSSEC need not have been an essential requirement, the registrars
   would not have been gatekeepers for the deployment of DANE, and the
   fact that DANE would eliminate their main source of earnings would
   not have mattered.  But DANE was also intended to be a means of
   publishing security policy information, and in particular to tell
   clients that they must use TLS.  This meant that deployment of DANE
   was necessarily dependent on deployment of DNSSEC.
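
   For illustration, a DANE TLSA record binds a TLS key directly to a
   DNS name.  The record below is a hypothetical example (the name and
   digest are invented); the parameters '3 1 1' denote an end-entity
   certificate association, the public-key selector and a SHA-256
   digest:

   _443._tcp.www.example.com. IN TLSA 3 1 1 (
           2abdf1810af8fe47d9406b1e2ba286fd
           edae43a8dedf51b944e9a4bd4d0a2e70 )

   Note that the security policy signal is carried by the record's
   mere existence: a client that queries for such a record and
   receives an unsigned denial of existence cannot distinguish an
   unprotected site from a downgrade attack, which is why the policy
   use of DANE chained its deployment to DNSSEC.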

   Had deployment of DANE and DNSSEC been decoupled, so that one could
   be used without the other if necessary, a virtuous cycle of
   deployment might have been realized in which the success of one
   encouraged the other.

   To this day, very few DNS registrars advertise support for DNSSEC,
   and none that I am aware of facilitate the use of DANE TLSA
   records.

1.3.2.  DPRIV

   DPRIV was an attempt to provide confidentiality for DNS protocol
   communications between end-user clients and resolution services.

   As with DANE and DNSSEC, the deployment constraints of the Web
   browser providers were ignored and the design was predicated on an
   undeployed technology.

   DPRIV did have the backing of VeriSign, the primary DNS operator,
   but the Web browser providers did not express interest.
   Recognizing the urgent need to protect the confidentiality of DNS
   traffic, the working group decided to complete its work in a year.
   This in turn constrained the choice of cryptographic protection to
   TLS, and since TLS is layered on TCP/IP, it meant DPRIV could only
   come close to meeting the latency requirements set by the browser
   providers if TCP Fast Open, an experimental technology, was used.

2.  Recommendations

   Almost any advice on deployment strategy is likely to prove useful.
   The one counterexample is the advice that is most frequently given:
   to give up any hope of making changes because the scale of the
   Internet has made all new infrastructure deployment impossible.

   The Internet can and does change.  New protocols and protocol
   features are developed and successfully deployed every day.  Only a
   small fraction of that work takes place in the IETF, and for every
   project that succeeds, ten or perhaps a hundred fail.  But even
   work that fails is worthwhile if at least one other person learns a
   lesson that allows another project to succeed.

2.1.  Purpose of the IETF

   The first comment I received when I announced the Mesh 3.0
   documentation set was to ask whether anyone else was planning to
   implement the specification, as the primary focus of IETF work was
   to enable interoperability between implementations.

   While developing clear specification documents that describe
   compelling designs facilitating widespread interoperability is a
   noble and important goal, it is not one that the IETF is designed
   to serve.  If I wanted to develop elegant and compelling
   specifications, I would hardly want to do so in a hundred-person
   committee.

   It doesn't take a hundred people to design a specification, but it
   frequently requires that number or even more to represent the
   interests of all the stakeholders and gatekeepers whose
   requirements must be met if deployment is to be successful.

   The main, if not the sole, reason I attend standards organization
   meetings is to build the constituency necessary for adoption.  If I
   had already established that deployment constituency, I would have
   no need to come to the IETF in the first place.  Equally, it would
   be surprising, to say the least, if anyone else had begun an
   implementation of a set of specifications when I have been
   discouraging anyone from doing so until the work was nearly
   complete.

   If our work has any importance, it is that it improves the
   Internet.  It is the consensus outside the IETF, in the user
   community, that ultimately counts.

   Over the years I have seen many attempts to short-circuit the
   process and march a specification through at breakneck speed.
   While this is an effective strategy for impressing managers, it is
   not an effective strategy for building a deployment community.  It
   is much easier to form a strong consensus among a dozen people who
   start with strongly aligned views and interests than among a
   hundred people representing twenty different deployment
   constituencies.  But it is the latter that is necessary if we are
   to achieve deployment.

   Many people try to accelerate the IETF process; I have frequently
   preferred a slower pace when I believed it might allow endorsement
   by a key stakeholder or gatekeeper.

   Recommendation: The IAB should recognize that at least one purpose
   of the IETF is to help technology developers build a deployment
   constituency, and that this is properly a result of, rather than a
   precondition for, considering work.

2.2.  Design for Deployment

   The need to design for deployment is argued in the previous
   section.  The question for the IAB is when and how that approach
   should be required.

   Rather than cluttering up every RFC with a mandatory 'Deployment
   Considerations' section, the intended outcome is more likely to be
   achieved if this is an exceptional rather than a routine
   requirement, taking the form of a deliverable required of a working
   group during the chartering process.  As is frequently the case
   with use cases and requirements documents, a document describing
   deployment considerations need not necessarily be published as an
   RFC but would serve to inform and facilitate the design process.

   Recommendation: The IESG should request a 'Deployment
   Considerations' document as a deliverable when chartering or re-
   chartering work whose success requires widespread adoption.

2.3.  Identify Stakeholders and Gatekeepers

   When deployment of a specification depends on adoption by a
   particular community of stakeholders, the opinions expressed by
   that community must be considered when designing for deployment.
   When one or more stakeholders have an influence so strong that it
   amounts to veto power over particular forms of deployment, they
   should be recognized as gatekeepers and the deployment strategy
   designed accordingly.

   It is important to note, however, that recognizing stakeholders and
   gatekeepers is not the same as affording them veto power.  What is
   important is that their views be considered by the Working Group,
   even if the stakeholders and gatekeepers themselves are not
   present.  If a proposal to improve Internet security is critically
   dependent on adoption by Web browser providers, their deployment
   criteria must be determined and respected.  Or, if it is impossible
   to reconcile the objectives with those criteria, the design must be
   changed so that it does not depend on adoption by the Web browser
   providers.

   It is of course necessary that the IETF operate under the fiction
   that every participant participates in their personal capacity
   alone and is not speaking for their employer.  And this is
   certainly true to the extent that, of the 1,500 attendees at a
   typical meeting, few if any have direct authority to speak on
   behalf of an organization of more than fifty people.  And those who
   do are less likely to speak because of that fact.

   But this polite fiction should not prevent Working Groups from
   soliciting and receiving direct input on deployment criteria from
   constituencies identified as stakeholders and gatekeepers.

   Recommendation: When the IESG requires deployment considerations to
   be produced, these should identify the key stakeholders and
   gatekeepers and the positions those parties have expressed.

2.4.  Realistic Schedules

   Some of the standards efforts I have been involved in have
   succeeded and some have failed.  While it is never possible to know
   with certainty that an effort will succeed, it is often possible to
   predict with a high degree of certainty that an effort will fail.
   There is no more certain sign that an effort is doomed than
   introductory slides asserting that the urgency of the problem is so
   great that a solution must be found and deployed within 12 months.

   There are very few problems currently being addressed in IETF
   Working Groups that have not been recognized as issues for at least
   five years.  Most have been understood as issues for a decade or
   more.  The claim that any issue has suddenly become so urgent that
   there is insufficient time to consider it properly should therefore
   bear a heavy burden of proof.

   The fact that a Working Group charter sets an unrealistic schedule
   is of course no guarantee that it will be met.  And it is usually
   apparent from the start that this was never the intention.  Setting
   an unrealistic schedule allows the scope of work to be controlled
   so as to exclude unwanted use cases, requirements and constraints,
   and thus to ensure that the Working Group selects the particular
   approach that allows the party controlling the process to re-use
   existing code rather than write new code.

   Recommendation: The IESG should reject Working Group schedules that
   leave insufficient time to discuss the use cases, requirements and
   appropriate technology.

2.5.  Eliminate Deployment Dependencies

   The most useful, and certainly the most frequent, advice offered by
   Jim Schadd when I shared an office with him at W3C was to avoid
   'error 22': do not build research on research.

   It is not uncommon for one Working Group to attempt to force
   deployment of its work by persuading another Working Group to make
   it an essential requirement.  Rather than encouraging such
   dependencies, we should vigorously discourage them.

   Another form of deployment dependency is the requirement that a
   standard be designed in a particular way so that a particular
   stakeholder can re-use existing code.  While re-use of existing
   code is an advantage, it is very rarely as decisive an advantage as
   the proposer imagines.  The fact that a designer was able to lash
   together a prototype using an existing 500K-line library does not
   necessarily mean that this will provide a short cut to the
   development of a production version of the same system.

   Recommendation: When the IESG requires deployment considerations to
   be produced, these should specify all the technologies the proposal
   depends on, the status of each, and a justification for reliance on
   any technology that is not already ubiquitously deployed.

2.6.  Recognize Failure

   Probably the hardest step for the IETF to take as an institution is
   to recognize when an approach has failed and to stop investing
   resources in that effort.

   One of the most important decisions the IETF took in the deployment
   of end-to-end secure mail was the recognition that PEM had failed
   to win adoption, which cleared the field for S/MIME and OpenPGP.

   It is now time for the IETF to have the courage to recognize that
   S/MIME and OpenPGP have failed to thrive.  Both have established
   significant user bases and serve important functions, but neither
   has made appreciable progress in adoption in the past two decades
   and neither is likely to achieve ubiquity.  Recognizing that these
   legacy protocols have failed to thrive would not render them
   obsolete, but it would clear the field, allowing alternative
   approaches to be proposed.

   Recommendation: The IAB should be tasked with performing periodic
   reviews of IETF standards and identifying those that have 'failed
   to thrive'.

3.  References

3.1.  URIs

   [1] http://mathmesh.com/Documents/draft-hallambaker-iab-
       deployment.html

Author's Address

   Phillip Hallam-Baker

   Email: phill@hallambaker.com