idnits 2.17.1
draft-hartman-webauth-phishing-09.txt:
Checking boilerplate required by RFC 5378 and the IETF Trust (see
https://trustee.ietf.org/license-info):
----------------------------------------------------------------------------
** It looks like you're using RFC 3978 boilerplate. You should update this
to the boilerplate described in the IETF Trust License Policy document
(see https://trustee.ietf.org/license-info), which is required now.
-- Found old boilerplate from RFC 3978, Section 5.1 on line 15.
-- Found old boilerplate from RFC 3978, Section 5.5, updated by RFC 4748 on
line 1021.
-- Found old boilerplate from RFC 3979, Section 5, paragraph 1 on line 1032.
-- Found old boilerplate from RFC 3979, Section 5, paragraph 2 on line 1039.
-- Found old boilerplate from RFC 3979, Section 5, paragraph 3 on line 1045.
Checking nits according to https://www.ietf.org/id-info/1id-guidelines.txt:
----------------------------------------------------------------------------
No issues found here.
Checking nits according to https://www.ietf.org/id-info/checklist :
----------------------------------------------------------------------------
No issues found here.
Miscellaneous warnings:
----------------------------------------------------------------------------
== The copyright year in the IETF Trust Copyright Line does not match the
current year
-- The document seems to lack a disclaimer for pre-RFC5378 work, but may
have content which was first submitted before 10 November 2008. If you
have contacted all the original authors and they are all willing to grant
the BCP78 rights to the IETF Trust, then this is fine, and you can ignore
this comment. If not, you may need to add the pre-RFC5378 disclaimer.
(See the Legal Provisions document at
https://trustee.ietf.org/license-info for more information.)
-- The document date (August 18, 2008) is 5730 days in the past. Is this
intentional?
Checking references for intended status: Informational
----------------------------------------------------------------------------
== Outdated reference: A later version (-07) exists of
draft-iab-auth-mech-05
-- Obsolete informational reference (is this intentional?): RFC 2818
(Obsoleted by RFC 9110)
-- Obsolete informational reference (is this intentional?): RFC 4346
(Obsoleted by RFC 5246)
Summary: 1 error (**), 0 flaws (~~), 2 warnings (==), 9 comments (--).
Run idnits with the --verbose option for more detailed information about
the items above.
--------------------------------------------------------------------------------
2 Network Working Group S. Hartman
3 Internet-Draft Painless Security
4 Intended status: Informational August 18, 2008
5 Expires: February 19, 2009
7 Requirements for Web Authentication Resistant to Phishing
8 draft-hartman-webauth-phishing-09.txt
10 Status of this Memo
12 By submitting this Internet-Draft, each author represents that any
13 applicable patent or other IPR claims of which he or she is aware
14 have been or will be disclosed, and any of which he or she becomes
15 aware will be disclosed, in accordance with Section 6 of BCP 79.
17 Internet-Drafts are working documents of the Internet Engineering
18 Task Force (IETF), its areas, and its working groups. Note that
19 other groups may also distribute working documents as Internet-
20 Drafts.
22 Internet-Drafts are draft documents valid for a maximum of six months
23 and may be updated, replaced, or obsoleted by other documents at any
24 time. It is inappropriate to use Internet-Drafts as reference
25 material or to cite them other than as "work in progress."
27 The list of current Internet-Drafts can be accessed at
28 http://www.ietf.org/ietf/1id-abstracts.txt.
30 The list of Internet-Draft Shadow Directories can be accessed at
31 http://www.ietf.org/shadow.html.
33 This Internet-Draft will expire on February 19, 2009.
35 Abstract
37 This memo proposes requirements for protocols between web browsers
38 and relying parties at websites; these requirements also impact third
39 parties involved in the authentication process. These requirements
40 minimize the likelihood that criminals will be able to gain the
41 credentials necessary to impersonate a user or be able to
42 fraudulently convince users to disclose personal information. To
43 meet these requirements browsers must change. Websites must never
44 receive information such as passwords that can be used to impersonate
45 the user to third parties. Browsers should authenticate the website
46 to the browser as part of authenticating the user to the website.
47 Browsers MUST flag situations when this authentication fails and flag
48 situations when the target website is not authorized to accept the
49 identity being offered as this is a strong indication of fraud.
50 These requirements may serve as a basis for requirements for
51 preventing fraud in environments other than the web.
53 Table of Contents
55 1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . . 4
56 1.1. Purpose of this Memo . . . . . . . . . . . . . . . . . . . 5
57 1.2. Progress to Date . . . . . . . . . . . . . . . . . . . . . 6
58 2. Terminology . . . . . . . . . . . . . . . . . . . . . . . . . 7
59 2.1. Passwords and Interface . . . . . . . . . . . . . . . . . 7
60 2.2. Requirements notation . . . . . . . . . . . . . . . . . . 7
61 3. Threat Model . . . . . . . . . . . . . . . . . . . . . . . . . 8
62 3.1. Capabilities of Attackers . . . . . . . . . . . . . . . . 9
63 3.2. Attacks of Interest . . . . . . . . . . . . . . . . . . . 10
64 4. Requirements for Authentication that Protects Credentials . . 11
65     4.1.  Support for Passwords and Other Methods . . . . . . . . .  11
66 4.2. Trusted UI . . . . . . . . . . . . . . . . . . . . . . . . 11
67     4.3.  No Password Equivalents . . . . . . . . . . . . . . . . .  12
68 4.4. Mutual Authentication . . . . . . . . . . . . . . . . . . 13
69 4.5. Authentication Tied to Request and Response . . . . . . . 14
70 4.6. Restricted Identity Providers . . . . . . . . . . . . . . 15
71 4.7. Protecting Enrollment . . . . . . . . . . . . . . . . . . 15
72 5. Is it the right Server? . . . . . . . . . . . . . . . . . . . 17
73   6.  IANA Considerations . . . . . . . . . . . . . . . . . . . . .  19
74 7. Acknowledgments . . . . . . . . . . . . . . . . . . . . . . . 20
75 8. Security Considerations . . . . . . . . . . . . . . . . . . . 21
76 9. References . . . . . . . . . . . . . . . . . . . . . . . . . . 23
77 9.1. Normative References . . . . . . . . . . . . . . . . . . . 23
78 9.2. Informative References . . . . . . . . . . . . . . . . . . 23
79 Appendix A. Trusted UI Mechanisms . . . . . . . . . . . . . . . . 25
80 Appendix B. Change History . . . . . . . . . . . . . . . . . . . 26
81 B.1. Changes Since 08 . . . . . . . . . . . . . . . . . . . . . 26
82 B.2. Changes since 07 . . . . . . . . . . . . . . . . . . . . . 26
83 B.3. Changes since 06 . . . . . . . . . . . . . . . . . . . . . 27
84 B.4. Changes since 05 . . . . . . . . . . . . . . . . . . . . . 27
85 B.5. Changes since 02 . . . . . . . . . . . . . . . . . . . . . 28
86 B.6. Changes since 01 . . . . . . . . . . . . . . . . . . . . . 28
87 Author's Address . . . . . . . . . . . . . . . . . . . . . . . . . 29
88 Intellectual Property and Copyright Statements . . . . . . . . . . 30
90 1. Introduction
92 Typically, web sites ask users to send a user name and password in
93 order to log in and authenticate their identity to the website. The
94 user name and plaintext password are often sent over a TLS [RFC4346]
95 encrypted connection. As a result of this plaintext password
96 protocol, the server learns the password and can pretend to be the
97 user to any other system where the user has used the same user name
98 and password. The security of passwords over TLS depends on making
99 sure that the password is sent to the right, trusted server and on
100   that server not exposing the password to third parties.  HTTPS
101 [RFC2818] implementations typically confirm that the name entered by
102 the user in the URL corresponds to the certificate.
104 One serious security threat on the web today is phishing. Phishing
105 is a form of fraud where an attacker convinces a user to provide
106 confidential information to the attacker believing they are providing
107 the information to a party they trust with that information. For
108 example, an email claiming to be from a user's bank may direct the
109 user to go to a website and verify account information. The attacker
110 captures the user name and password and potentially other sensitive
111 information. Domain names that look like target websites, links in
112 email, and many other factors contribute to phishers' ability to
113 convince users to trust them.
115 Typically the user names and password are not directly valuable to
116 the phisher. However they can be used to access resources of value.
117 For example a bank password may permit money transfer or access to
118 information useful in identity theft.
120 It is useful to distinguish two targets of phishing. Sometimes
121 phishing is targeting web authentication credentials such as user
122 name and password. Sometimes phishing is targeting other
123 confidential information, such as bank account numbers. This memo
124 presents requirements that can be part of a solution to significantly
125 reduce the effectiveness of the first category of phishing: provided
126 that a user uses an authentication mechanism that meets these
127 requirements, even if the user authenticates to the wrong server,
128 that server cannot impersonate the user to a third party. However,
129 to combat phishing targeted at other confidential information, the
130 best we know how to do is help the user detect fraud before they
131 release confidential information.
133 The approach taken by this memo is to handle these two types of
134 phishing differently. The user is given new authentication
135   mechanisms.  If the user uses these mechanisms, they have strong
136 assurances that their password has not been disclosed and that the
137 ensuing data returned from the server was generated by a party that
138 either knows their password or who is authenticated by an identity
139 provider (a third party involved in the authentication exchange in
140 order to allow credentials to be used across a wider variety of
141 websites) who knows their password. The server can then use
142 confidential information known to the user and server to enhance the
143 user's trust in its identity beyond what is available given the
144 social engineering attacks against TLS server authentication. If a
145 user authenticates to the wrong server but discovers this before they
146   give that server any other confidential information, then their
147 exposure is very limited. The success of this solution depends
148 heavily on whether the user uses the new authentication mechanisms;
149 designing ways for users to tell if they are using the authentication
150 mechanisms and encouraging users to use these mechanisms will be
151 critical to achieving any security benefit from these requirements.
152 The success of a solution to preventing the disclosure of other
153 confidential information based on giving users information about
154 whether they are authenticated to the right server depends on the
155 user being able to take advantage of this information and choosing to
156 do so.
158 The requirements presented in this memo are intended to be useful to
159 browser designers, designers of other HTTP applications and designers
160 of future HTTP authentication mechanisms.
162 These requirements and mechanisms that meet these requirements are
163 not sufficient to stop phishing; at best, they form part of a
164 solution. The World Wide Web Consortium proposes recommendations on
165 user interface guidelines for web security context [WSCUIG]. These
166 guidelines propose mechanisms that will make it more likely that
167 users will detect fraud before authentication. Efforts to limit the
168 effect of malicious software and to provide trustable software for
169 authentication are also important. Efforts to track known frauds and
170 alert users when they encounter fraudulent sites are also critical.
171 Together, all these efforts may significantly reduce phishing.
173 1.1. Purpose of this Memo
175 In publishing this memo, the IETF recommends making available
176 authentication mechanisms that meet the requirements outlined in
177 Section 4 in HTTP user agents including web browsers. It is hoped
178 that these mechanisms will prove a useful step in fighting phishing.
179 However this memo does not restrict work either in the IETF or any
180 other organization. In particular, new authentication efforts are
181 not bound to meet the requirements posed in this memo unless the
182 charter for those efforts chooses to make these binding requirements.
183 Less formally, the IETF presents this memo as an option to pursue
184 while acknowledging that there may be other promising paths both now
185 and in the future.
187 1.2. Progress to Date
189   This non-normative section describes the author's impressions of
190   the current state of HTTP authentication with regard
191 to these requirements.
193 In the spring of 2008, Microsoft demonstrated that with no change to
194 the spec, GSS-API and NTLM HTTP authentication could be extended to
195 support channel binding [RFC5056] [RFC4559]. At first glance, the
196 Microsoft extension appears to meet all the requirements outlined in
197 this memo for an authentication mechanism. In addition, Microsoft
198 has outlined extensions to HTTP digest authentication that also
199 appear to meet these requirements [DIGEST-BIND]. The Microsoft
200 extensions do not provide the client with information on whether the
201 server supports the extension; so the client may not know whether it
202 is strongly authenticated or not. Also, the Microsoft extensions are
203 focused for enterprise deployment and so concerns regarding upgrade
204 negotiation and other issues that would be important in a wider
205 deployment are not covered. However Microsoft's efforts underscore
206 that new security mechanisms are not needed in order to meet these
207 requirements. Originally, I had expected that changes to meet these
208 requirements would be more extensive, but still expected they would
209 be incremental changes to existing mechanisms.
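The channel-binding idea mentioned above can be sketched as follows. This is a hedged illustration of the general RFC 5056 concept, not Microsoft's actual extension: it mimics a "tls-server-end-point" style of binding, where a hash of the server's TLS certificate is mixed into the client's authentication MAC, so that a credential captured by a man in the middle (whose certificate necessarily differs) fails verification at the real server. All names, the salt, and the key-derivation step are illustrative.

```python
import hashlib
import hmac
import os

def channel_binding(server_cert_der: bytes) -> bytes:
    # tls-server-end-point style binding data: a hash of the server's
    # certificate uniquely identifies the secure channel endpoint.
    return hashlib.sha256(server_cert_der).digest()

def client_auth_token(password: str, nonce: bytes, cb: bytes) -> bytes:
    # The MAC covers the channel-binding data, so a token produced on
    # one TLS channel is useless on a channel with a different cert.
    key = hashlib.pbkdf2_hmac("sha256", password.encode(), b"demo-salt", 100_000)
    return hmac.new(key, nonce + cb, hashlib.sha256).digest()

# The legitimate server and a man in the middle present different certs.
nonce = os.urandom(16)
real_cert = b"DER bytes of the real server certificate"
mitm_cert = b"DER bytes of the attacker's certificate"

token = client_auth_token("hunter2", nonce, channel_binding(real_cert))
# The real server, recomputing over its own certificate, accepts;
# the attacker's channel yields a different binding and fails.
assert token == client_auth_token("hunter2", nonce, channel_binding(real_cert))
assert token != client_auth_token("hunter2", nonce, channel_binding(mitm_cert))
```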
211 However there is still work that needs to be done in order to make
212 mechanisms meeting these requirements available in a usable manner
213 across the Internet. Most of that work concerns usability and falls
214 outside the IETF. Results of the usability work may fall within the
215 IETF; mechanisms for picking the right credentials to use for a given
216   site may require minor extensions to security mechanisms.
217   Mechanisms to provide smooth upgrades from plaintext password
218 protocols to mechanisms meeting these requirements may require
219 additional HTTP headers, particularly for non-browser agents. In
220 addition, these requirements may be useful to efforts that are
221 designing HTTP authentication mechanisms for unrelated reasons.
223 2. Terminology
225 2.1. Passwords and Interface
227 There are two related concepts: the user interface of passwords and
228 plaintext password protocols. A plaintext password protocol is a
229 protocol where the server receives credentials sufficient to
230 impersonate a user to third parties. A password interface provides a
231 user experience where a user types a password into any computer,
232 including one they have never used before and that is sufficient to
233 authenticate. The requirements in this memo require support for
234 password user interfaces as one option for authentication. The
235 requirements of this memo are incompatible with plaintext password
236 protocols.
238 2.2. Requirements notation
240 The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
241 "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
242 document are to be interpreted as described in [RFC2119].
244 3. Threat Model
246 This section describes the assumed capabilities of phishers,
247 describes assumptions about web security and describes what
248 vulnerabilities exist. Human factors issues contribute significantly
249 to these vulnerabilities. For example, security information
250 dialogues in web browsers can provide information on the subject of a
251 certificate. However, users rarely examine this information, so an
252 attacker could be successful even if examining the security dialogue
253 would show an attack. This threat model is intended to include these
254 sorts of attacks and so it is broader than the technical threats
255 against protocols. Efforts are under way to improve these human
256 factors issues [WSCUIG]. However these efforts only reduce the risk
257 that a user will be confused; even given improved user experience for
258 dealing with security context information, users will make mistakes
259 and believe that an attacker's site is the site they intended to
260 communicate with.
262 We assume that the implementations of authentication systems can be
263 trusted sufficiently to meet their design objectives. This does not
264 mean that the entire local system and browser need to be trusted.
265 However if there is a component that has access to users' passwords,
266 that component needs to be secure enough to be trusted not to divulge
267 passwords to attackers. Similarly in a system that used smart cards,
268 the smart cards would need to be trusted not to give attackers access
269 to private keys or other authentication material. Designing
270 implementations to limit the size and complexity of the most trusted
271 components and to layer trust will be important to the security of
272 implementations. Designing protocols to enable good implementation
273 will be critical to the utility of the protocols. As a consequence
274 of this assumption, these requirements are insufficient to provide
275 protection against phishing if malicious browser extensions, Trojan
276 software or other malicious software is installed into a sufficiently
277 trusted part of the local computer or authentication tokens.
279 We assume that users have limited motivation to combat phishing.
280 Users cannot be expected to read the source of web pages, understand
281 how DNS works well enough to look out for spoofed domains, or
282 understand URI encoding. Users do not typically understand
283 certificates and cannot make informed decisions about whether the
284 subject name in a certificate corresponds to the entity they are
285 attempting to communicate with. As a consequence of this assumption,
286 users will likely be fooled by strings either in website names or
287 certificates that look visually similar but that are composed of
288 different code points.
290 3.1. Capabilities of Attackers
292 We assume attackers can convince the user to go to a website of their
293 choosing. Since the attacker controls the web site and since the
294   user chose to go to the website, the TLS certificate will verify and
295 the website will appear to be secure. The certificate will typically
296 not be issued to the entity the user thinks they are communicating
297 with, but as discussed above, the user will not notice this.
298 Mechanisms attackers use to accomplish this include links with a
299 misleading name or URI, which they may distribute in emails; attacks
300 against DNS; and man-in-the-middle attacks against a TLS handshake.
301 The former two attacks allow the attacker to pass authentication
302 because the victim user can be tricked into accepting the attacker's
303 certificate. The latter attack will typically create a warning on
304 the victim user's side, but many users do not make informed decisions
305 on how to respond to such a warning, making them inclined to accept
306 the bogus certificate.
308 The attacker can convincingly replicate any part of the UI of the
309 website being spoofed. The attacker can also spoof trust markers
310 such as the security lock, URL bar and other parts of the browser UI
311 sufficiently that a significant class of users will not treat the
312 spoofed security indicators as a problem. There is one limitation to
313 the attacker's ability to replicate UI. The attacker cannot
314 replicate a UI that depends on information the attacker does not
315 know. For example, an attacker could generally replicate the UI of a
316 banking site's login page. However the attacker probably could not
317 replicate the account summary page until the attacker learned the
318 user name and password because the attacker would not know what
319 accounts to list or approximate balances that will look convincing to
320 a user. Of course attackers may know some personal information about
321 a user. Websites that want to rely on attackers not knowing certain
322 information need to maintain the privacy of that information.
324 It's not clear how valuable this limitation on the attacker's ability
325 will prove in practice. Research into the effectiveness of security
326 indicators [SECIND] suggests that users do not pay attention to
327 security indicators. One difference between the security indicators
328 tested in today's research and using private information to detect
329 fraud is that the private information may be directly related to the
330 task the user is trying to perform. However the attacker can attempt
331 to come up with a convincing explanation such as a partial outage or
332 system upgrade for why the private information is not available.
334 The attacker can convince the user to do anything with the phishing
335 site that they would do with the real target site. As a consequence,
336 when passwords are used, if we want to avoid the user giving the
337 attacker their password, the web site must prove that it has an
338 established authentic relationship with the user without requiring a
339 plaintext password protocol. One approach could be to transition to
340 a solution where the user could not give the real target site their
341 password if they are using a new mechanism. Instead they will need
342 to cryptographically prove that they know their password without
343 revealing it.
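The shape of such a proof can be sketched with a simple challenge-response exchange. This is an assumption-laden illustration, not one of the zero-knowledge password protocols the memo alludes to: the client answers a fresh server challenge with a MAC keyed by a password-derived secret, so the password itself never crosses the wire. (Unlike a true zero-knowledge scheme, a server holding this verifier could still mount an offline dictionary attack; the salt and iteration count are illustrative.)

```python
import hashlib
import hmac
import os

# The server stores a verifier derived from the password, never the
# password itself.
password = "correct horse battery staple"
verifier = hashlib.pbkdf2_hmac("sha256", password.encode(), b"site-salt", 100_000)

# The server issues a fresh challenge; the client proves knowledge of
# the password by MACing the challenge with the derived secret.
challenge = os.urandom(16)
response = hmac.new(verifier, challenge, hashlib.sha256).hexdigest()

# Server-side verification: recompute and compare in constant time.
expected = hmac.new(verifier, challenge, hashlib.sha256).hexdigest()
assert hmac.compare_digest(response, expected)
```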
345 3.2. Attacks of Interest
347 The ultimate goal of these requirements is to provide protection
348 against disclosure of confidential information to unintended parties.
349 These requirements focus on two such disclosures and handle them
350 separately. The first category is disclosure of credentials that
351 could allow an unintended party to impersonate the user, possibly
352 gaining access to additional confidential information. The second
353 attack is disclosure of confidential information not directly related
354 to authentication. The second class of attack cannot be directly
355 defeated, but we can give information to users that they could use to
356 help know when they are communicating with an unintended party.
358 Note that some authentication systems such as Kerberos [RFC4120]
359 provide a facility to delegate the ability to act as the user to the
360 target of the authentication. Such a facility when used with an
361 inappropriately trusted target would be an instance of the first
362 class of attack. Solutions to these requirements with similar
363 facilities MUST discuss the security considerations surrounding use
364 of these facilities.
366   Of less serious concern at the present time are attacks on data
367 integrity where a phisher provides false or misleading information to
368 a user.
370 4. Requirements for Authentication that Protects Credentials
372 This section describes requirements for web authentication solutions.
373 These solutions are intended to prevent phishing targeted at
374 obtaining web authentication credentials. These requirements will
375 make it more difficult for phishers to target other confidential
376 information.
378   4.1.  Support for Passwords and Other Methods
380 The web authentication solution MUST support the password user
381 interface and MUST be secure even when the password interface is
382 commonly used. In many environments, users need the ability to walk
383 up to a computer they have never used before and log in to a website.
384 Carrying a smart card or USB token significantly increases the
385 deployment cost of the website and decreases user convenience. The
386 smart card is costly to deploy because it requires a process for
387 replacing smart cards, requires support staff to be trained in
388 answering questions regarding smart cards and requires a smart card
389 to be issued when an identity is issued. Smart cards are less
390 convenient because users cannot gain access to protected resources
391 without having their card physically with them. Many public access
392 computers do not have smart cards available and do not provide access
393 to USB ports; when they do they tend not to support smart cards. It
394 is important not to underestimate the training costs (either in money
395 or user frustration) of teaching people used to remembering a user
396 name and password about a new security technology. Sites that
397   aggregate identity (for example, allowing a user to log into an
398   identity provider and then gain access to other resources) may be a
399 significant part of a solution. However we cannot assume that a
400 given user will have only one such website: there are valid and
401 common reasons a user (or the relying party) would not trust all
402 identity information to one such site.
404 A solution to these requirements MUST also support smart cards and
405 other authentication solutions. Some environments have security
406 requirements that are strong enough that passwords simply are not a
407 viable option. Many efforts are under way to reduce the deployment
408 costs of token-based authentication mechanisms and to address some of
409 the concerns that make passwords a requirement today.
411 4.2. Trusted UI
413 Users need the ability to trust components of the UI in order to know
414 that the UI is being presented by a trusted component of the device.
415 The primary concern is to make sure that the user knows any password
416 is being given to trusted software rather than being filled into an
417 HTML form element that will be sent to the server as part of a
418 plaintext password protocol.
420 There are many approaches to establishing a trusted UI. One example
421 is to use a dynamic UI based on a secret shared by the user and the
422 local UI; the paper [ANTIPHISHING] recommends this approach. The W3C
423 recommends this approach for security indicators in section 7.1 of
424 its user interface guidelines [WSCUIG]. However, the W3C notes that
425 research suggests users may not pay attention to these trust
426 indicators. A second approach is to provide a UI action that
427 highlights trusted or non-trusted components in some way. This could
428 work similarly to the Expose feature in Apple's Mac OS X where a
429 keystroke visually distinguishes structural elements of the UI. Of
430 course such a mechanism would only be useful if users actually used
431 it. Finally, another potential approach is to benefit from extensive
432 research in the multi-level security community in designing UIs to
433 display classified, compartmentalized information. It is critical
434 that these UIs be able to label information and that these labels not
435 be spoofable. These approaches are not exhaustive and may not even
436   be good; they are provided to demonstrate that thinking about how to
437 design trusted UIs is ongoing. However, designing a user interface
438 that allows users of the web to distinguish trusted components from
439 components potentially controlled by an attacker is an open problem.
440 It is likely that transitioning to many new security protocols will
441 depend on a solution to this problem.
443   4.3.  No Password Equivalents
445 A critical requirement is that when a user authenticates to a
446 website, the website MUST NOT receive a strong password equivalent
447 [IABAUTH]. A strong password equivalent is anything that would allow
448 a phisher to authenticate as a user with a different website.
449 Consequently, plaintext password protocols are incompatible with
450 these requirements. Weak password equivalents (quantities that act
451 as a password for a given service but cannot be reused with other
452   services) are problematic outside of the context of enrolling a user
453 or changing a password. The requirement for mutual authentication
454   (Section 4.4) is incompatible with sending weak password equivalents in
455 every authentication. Even if that requirement is relaxed, the scope
456 of a particular weak password equivalent needs to be carefully
457 considered. Consider for example a protocol that hashes a password
458 and the host name component of a URI together to form a weak password
459 equivalent. The same password equivalent is used regardless of which
460 certificate authority certifies the public key of the website. If an
461 attacker mounted a man-in-the-middle attack, presenting a self-signed
462 certificate, and the user accepted the certificate when asked by the
463 browser, then the attacker would receive the same weak password
464 equivalent needed to access the legitimate website. Such a protocol
465 would not do a good job of addressing the threats outlined in the
466 threat model. However if mutual authentication were not a
467 requirement, a protocol that hashed a password and the public key
468 from the TLS certificate of the website to form a weak password
469 equivalent might meet the other requirements. In any event, weak
470 password equivalents MUST NOT be sent without confidentiality
471 protection.
473 There are two implications of this requirement. First, a strong
474 cryptographic authentication protocol needs to be used instead of
475 sending the password encrypted over TLS. The zero-knowledge class of
476 password protocols such as those discussed in section 8 of the IAB
477 authentication mechanisms document [IABAUTH] seem potentially useful
478 in this case at a first glance. However, mechanisms in this space
479 tend to have significant deployment problems because of intellectual
480 property issues.
482 The second implication of this requirement is that if an
483 authentication token is presented to a website, the website MUST NOT
484 be able to modify the token to authenticate as the user to a third
485 party. The party generating the token must bind it to either the
486 website that will receive the token or to a key known only to the
487 user. Binding could include cryptographic binding or mechanisms such
488 as issuing a one-time password for use with a specific website. If
489 tokens are bound to keys, the user MUST prove knowledge of this key
490 as part of the authentication process. The key MUST NOT be disclosed
491 to the server unless the token is bound to the server and the key is
492 only used with that token or server.
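The binding requirement can be sketched as follows for the "bound to the website that will receive the token" case. This is a hypothetical identity provider, not any real token format: the issuer MACs the token body together with the name of the intended audience, so a phishing site that receives the token cannot modify or replay it to a third party.

```python
import hashlib
import hmac
import json
import time

# Hypothetical identity-provider key; in practice this would be a
# verification relationship between the IdP and relying parties.
IDP_KEY = b"identity-provider-secret"

def issue_token(user: str, audience: str) -> dict:
    # The audience (the website the token is issued for) is covered by
    # the MAC, so the token cannot be redirected to another site.
    body = json.dumps({"user": user, "aud": audience, "iat": int(time.time())})
    tag = hmac.new(IDP_KEY, body.encode(), hashlib.sha256).hexdigest()
    return {"body": body, "tag": tag}

def verify_token(token: dict, expected_audience: str) -> bool:
    tag = hmac.new(IDP_KEY, token["body"].encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(tag, token["tag"]):
        return False  # body was tampered with
    return json.loads(token["body"])["aud"] == expected_audience

tok = issue_token("alice", "shop.example")
assert verify_token(tok, "shop.example")        # the intended site accepts it
assert not verify_token(tok, "phish.example")   # any other site rejects it
```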
494 4.4. Mutual Authentication
496 [ANTIPHISHING] describes a requirement for mutual authentication. A
497 common phishing practice is to accept a user name and password as
498 part of an attempt to make the phishing site authentic. The real
499 target is some other confidential information. The user name and
500 password are captured, but are not verified. After the user name and
501 password are entered, the phishing site collects other confidential
502 information. When mutual authentication fails, there is a strong
503 indication of a problem: either the user supplied the wrong
504 credential or the website is not the one the user intended to
505 communicate with.
507 Requiring mutual authentication excludes a class of mechanisms where
508 a weak password equivalent is generated for the server and is sent.
509 One prominent member of this class is [PWDHASH]; this mechanism has
510 the desirable property that it requires no change to the server and
511 can be implemented locally on the browser. These mechanisms provide
512 better security than plaintext password protocols. However, attacks
513 where the server ignores authentication in order to obtain
514 confidential information are important enough that it is desirable to
515 develop mechanisms that provide this assurance. The desire to
516 develop these new mechanisms is not intended to discourage the
517 deployment of mechanisms like PwdHash that improve security today.
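The idea behind PwdHash-style mechanisms can be sketched as deriving a distinct password per site from the master password and the site's domain. The actual PwdHash construction differs in detail; this hedged toy only illustrates why the password a phishing domain captures does not work at the real site.

```python
import base64
import hashlib
import hmac

# Rough sketch of per-site password derivation in the browser.  The
# real PwdHash algorithm is different; names here are illustrative.

def site_password(master: str, domain: str) -> str:
    digest = hmac.new(master.encode(), domain.encode(),
                      hashlib.sha256).digest()
    # Truncate and encode into something usable as a password field value.
    return base64.b64encode(digest)[:12].decode()

real = site_password("my master password", "bank.example")
phished = site_password("my master password", "bank-example.evil")
assert real != phished  # what the phisher captures fails at the bank
```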
519 Typically one protocol performs authentication of both parties.
520 There tend to be opportunities for a man-in-the-middle attack when
521 one protocol authenticates one direction and another protocol
522 authenticates the opposite direction. Sometimes, as in the case of
523 TLS and plaintext password protocols, the opportunity for attacks
524 depends on human factors issues or certificate management. In other
525 cases, attacks may be more direct. Authentication of the server and
526 client at the TLS level is sufficient to meet the requirement of
527 mutual authentication. If authentication is based on a shared secret
528 such as a password, then the authentication protocol MUST prove that
529 the secret or a suitable verifier is known by both parties.
530 Interestingly, the existence of a shared secret will provide better
531 confidence that the right server is being contacted than if public
532 key credentials are used in their typical mode. By their nature,
533 public key credentials allow parties to be contacted without a prior
534 security association. In protecting against phishing targeted at
535 obtaining other confidential information, this may prove a liability.
536 However public key credentials provide strong protection against
537 phishing targeted at obtaining authentication credentials because
538 they are not vulnerable to dictionary attacks. Such dictionary
539 attacks are a significant weakness of shared secrets such as
540 passwords intended to be remembered by humans. For public key
541 protocols, the mutual authentication requirement would mean that the
542 server typically needs to sign an assertion of what identity it
543 authenticated or of the request as a whole.
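Proof by both parties that the shared secret is known can be sketched as a mutual challenge-response with distinct role labels. This is a hedged toy, not a deployable protocol (a real system would use a PAKE or TLS-level mutual authentication); it illustrates why a phishing server that merely accepts credentials cannot produce a valid server proof.

```python
import hashlib
import hmac
import os

# Toy mutual proof of a shared secret; all names are illustrative.

def proof(secret: bytes, challenge: bytes, role: bytes) -> bytes:
    return hmac.new(secret, role + b"|" + challenge, hashlib.sha256).digest()

secret = b"shared secret (or a suitable verifier)"
client_challenge = os.urandom(16)
server_challenge = os.urandom(16)

client_proof = proof(secret, server_challenge, b"client")
server_proof = proof(secret, client_challenge, b"server")

# Each party verifies the peer's proof before revealing anything else.
assert hmac.compare_digest(client_proof,
                           proof(secret, server_challenge, b"client"))
assert hmac.compare_digest(server_proof,
                           proof(secret, client_challenge, b"server"))
# Distinct role labels prevent reflecting one side's proof back at it.
assert client_proof != proof(secret, server_challenge, b"server")
```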
545 4.5. Authentication Tied to Request and Response
547 Users expect that whatever party they authenticate to will be the
548 party that generates the content they see. One possible phishing
549 attack is to insert the phisher between the user and the real site as
550 a man-in-the-middle. On today's websites, the phisher typically
551 gains the user's user name and password. Even if the other
552 requirements of this specification are met, the phisher could gain
553 access to the user's session on the target site. This attack is of
554 particular concern to the banking industry. A man-in-the-middle may
555 gain access to the session which may give the phisher confidential
556 information or the ability to execute transactions on the user's
557 behalf.
559 The authentication system MUST guarantee to the user and the target
560 server that the request was generated by the authenticated user and
561 the response is generated by the target server. This can be done in
562 several ways including:
564 1. Assuming that only certificates from trusted CAs are accepted and
565 the user has not bypassed server certificate validation, it is
566 sufficient to confirm that the identity of the server at the TLS
567 level is the same at the HTTP authentication level. In the case
568 of TLS client authentication this is trivially true. Note
569 however that [WSCUIG] recommends accepting self-signed
570 certificates in some cases, so relying on this approach for cases
571 other than TLS authentication may be problematic.
573 2. Another alternative is to bind the authentication exchange to the
574 channel created by the TLS session. The general concept behind
575 channel binding is discussed in [RFC5056]. Channel binding has
576 been added to HTTP authentication mechanisms based on digest
577 authentication and on GSS-API, suggesting that support for
578 channel binding is workable for future HTTP authentication
579 mechanisms.
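The channel-binding approach of alternative 2 can be sketched as follows: the HTTP-layer authentication MAC covers a value exported from the underlying TLS channel (such as the tls-unique binding of RFC 5056's framework). A man in the middle terminates two different TLS sessions, so the binding it sees on each side differs and the relayed MAC fails to verify. The binding values below are placeholders, not real TLS exports.

```python
import hashlib
import hmac

# Illustrative sketch of channel binding; binding values are fake.

def auth_mac(secret: bytes, channel_binding: bytes) -> bytes:
    # The HTTP-layer authentication covers the TLS channel's binding
    # value, tying the authentication to this specific channel.
    return hmac.new(secret, b"cb|" + channel_binding,
                    hashlib.sha256).digest()

secret = b"key shared by client and server"
binding_user_side = b"tls-unique of user<->attacker session"
binding_server_side = b"tls-unique of attacker<->server session"

# The MAC the attacker relays was computed over the wrong channel value,
# so the target server's verification fails.
assert auth_mac(secret, binding_user_side) != \
       auth_mac(secret, binding_server_side)
```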
581 4.6. Restricted Identity Providers
583 Some identity providers will allow anyone to accept their identity.
584 However particularly for financial institutions and large service
585 providers it will be common that only authorized business partners
586 will be able to accept the identity. The confirmation that the
587 relying party is such a business partner will often be a significant
588 part of the value provided by the identity provider, so it is
589 important that the protocol enable this. For such identities, the
590 user MUST be assured that the target server is authorized by the
591 identity provider to accept identities from that identity provider.
592 Several mechanisms could be used to accomplish this:
594 1. The target server can provide a certificate issued by the
595 identity provider as part of the authentication.
597 2. The identity provider can explicitly approve the target server.
598 For example in a redirect-based scheme the identity provider
599 knows the identity of the relying party before providing claims
600 of identity to that party. A similar situation happens with
601 Kerberos or Digest Authentication in an AAA infrastructure
602 [RFC5090].
604 4.7. Protecting Enrollment
606 One area of particular vulnerability to phishing is enrollment of a
607 new identity in an authentication system. Protecting against
608 phishing targeted at obtaining other confidential information as a
609 new service is established is outside the scope of this document. If
610 confidential information such as credit card numbers is provided as
611 part of account setup, then this may be a target for phishing.
613 However there is one critical aspect in which enrollment impacts the
614 security of authentication. During enrollment, a password is
615 typically established for an account or other security credentials
616 are associated with an account. The process of establishing a
617 password MUST NOT provide a strong password equivalent (a quantity,
618 such as the password itself, that could be used to log in as the user
619 to another service where the same password is used). That is,
620 parties other than the user and web browser MUST NOT gain enough
621 information to impersonate the user to a third party while
622 establishing a password.
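One established way to satisfy this requirement is verifier-based enrollment, the approach taken by protocols such as SRP: the client registers g^x mod p, where x is derived from the password, instead of the password itself. The server can later verify the user but never learns a quantity it could replay as the user at a third party. The sketch below uses a toy 127-bit modulus for illustration only; a real deployment uses a standardized large prime group.

```python
import hashlib
import os

# Toy parameters for illustration; NOT real protocol parameters.
P = (1 << 127) - 1  # a Mersenne prime, far too small for real use
G = 5

def password_verifier(password: str, salt: bytes) -> int:
    # Derive the exponent x from the password, then register only
    # g^x mod p.  Recovering x from the verifier requires solving a
    # discrete logarithm (or an offline dictionary attack against it).
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    x = int.from_bytes(digest, "big")
    return pow(G, x, P)

salt = os.urandom(16)
v = password_verifier("my password", salt)
# The server stores (salt, v) and never sees the password itself.
assert 0 < v < P
```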
624 5. Is it the Right Server?
626 In Section 4, requirements were presented for web authentication
627 solutions to minimize the risk of phishing targeted at web access
628 information. This section discusses in a non-normative manner
629 various mechanisms for determining that the right server has been
630 contacted. Authenticating to the right party is an important part of
631 reducing the risk of phishing targeted at other confidential
632 information.
634 Validation of the certificates used in TLS and verification that the
635 name in the URI maps to these certificates can be useful. As
636 discussed in Section 3, attackers can spoof the name in the URI.
637 However the TLS checks do defeat some attacks. The W3C user
638 interface guidelines may significantly increase the value of these
639 checks [WSCUIG]. As discussed in Section 4.5, TLS validation may be
640 important to higher-level checks.
642 A variety of initiatives propose to assign trust to servers. This
643 includes proposals to allow users to indicate certain servers are
644 trusted based on information they enter. Proposals have also been
645 made to allow third parties, including parties established for this
646 purpose and existing certificate authorities, to indicate trust.
647 These proposals will almost certainly make phishing more difficult.
649 In the case where there is an existing relationship, these
650 requirements provide a way that information about the relationship
651 can be used to provide assurance that the right party has been
652 contacted.
654 In Section 4.2, we discuss how a secret between the user and their
655 local computer can be used to let the user know when a password will
656 be handled securely. A similar mechanism can be used to help the
657 user once they are authenticated to the website. The website can
658 present information based on a secret shared between the user and
659 website to convince the user that they have authenticated to the
660 correct site. This depends critically on the requirements of
661 Section 4.5 to guarantee that the phisher cannot obtain the secret.
663 Various schemes have used a secret shared between the server and the
664 web browser before authentication. Cookies or some other state
665 management mechanism are used to select the right secret to display
666 as the user logs into the site. Unfortunately these schemes have
667 proven ineffective in practice [SECIND]. However, the set of
668 information that can be used as contextual clues to evaluate whether
669 the right server has been reached after authentication is much
670 greater. For example, a bank server knows what accounts a user has
671 and knows their balances. A business partner may have information
672 about past transactions or the current state of transactions. If
673 this information is related to the task that the user is trying to
674 perform, they may be more likely to evaluate it and notice problems
675 than they are to notice a missing security indicator before login.
676 Strong authentication mechanisms enable this type of evaluation after
677 the user has logged in. However it is not known how effective this
678 will be in practice.
680 6. IANA Considerations
682 This document requests no action of IANA.
684 7. Acknowledgments
686 I'd like to thank the MIT Kerberos Consortium for its funding of work
687 on this memo prior to April 2008.
689 I'd like to thank Nicolas Williams, Matt Knopp and David Blumenthal
690 for helping me walk through these requirements and make sure that if
691 a solution met them it would actually protect against the real world
692 attacks consumers of our technology are facing. I was particularly
693 focusing on attacks that financial institutions are seeing and their
694 help with this was greatly appreciated.
696 I'd like to thank Eric Rescorla and Ben Laurie for their significant
697 comments on this draft.
699 Eliot Lear provided many last call comments and helped work through
700 several long standing issues with the document.
702 Christian Vogt provided text and review comments.
704 The requirements discussed here are similar to the principles
705 outlined in "Limits to Anti-Phishing" [ANTIPHISHING]. Most of this
706 work was discovered independently but work from that paper has been
707 integrated where appropriate. It seems good that these requirements
708 are similar to the principles outlined by someone facing phishing as
709 an operational reality.
711 8. Security Considerations
713 This memo discusses the security of web authentication and how to
714 minimize the risk of phishing in web authentication systems. This
715 section discusses the security of the overall system and discusses
716 how components of the system that are not directly within the scope
717 of this document affect the security of web transactions. This
718 section also discusses residual risks that remain even when the
719 requirements proposed here are implemented.
721 The approach taken here is to separate the problem of phishing into
722 phishing targeted at web authentication credentials and phishing
723 targeted at other information. Users are given some trusted
724 mechanism to determine whether they are typing their password into a
725 secure browser component that will authenticate them to the web
726 server--a component that presents a password interface--or whether
727 they are typing their password into a legacy mechanism that will send
728 their password to the server as part of a plaintext password
729 protocol. If the user types a password into the trusted browser
730 component, they have strong assurances that their password has not
731 been disclosed and that the page returned from the web server was
732 generated by a party that either knows their password or is
733 authenticated by an identity provider who knows their password. The
734 web server can then use confidential information known to the user
735 and web server to enhance the user's trust in its identity beyond
736 what is available given the social engineering attacks against TLS
737 server authentication. If a user enters their password into the
738 wrong server but discovers this before they give that server any
739 other confidential information, then their exposure is very limited.
741 This model assumes that the parts of the browser and operating system
742 with access to passwords or other long-term credentials are trusted
743 software. As discussed in Section 3, there are numerous attacks
744 against host security. Appropriate steps should be taken to minimize
745 these risks. If the security of the trusted software is compromised,
746 the password can be captured as it is typed by the user.
748 This model assumes that users will only enter their passwords into
749 trusted browser components. There are several potential problems
750 with this assumption. First, users need to understand the UI
751 distinction and know what it looks like when they are typing into a
752 trusted component and what a legacy HTML form looks like. It is not
753 clear that we have yet developed a solution to this user interface
754 problem. Users must care enough about the security of their
755 passwords to only type them into trusted components. The browser
756 must be designed in such a way that the server cannot create a UI
757 component that appears to be a trusted component but is actually a
758 legacy HTML form; the W3C user interface guidelines [WSCUIG] provide
759 requirements designed to prevent security-sensitive user interface
760 elements from being spoofed by attacker-supplied content. The W3C
761 guidelines address a more limited context, focused on security
762 context rather than authentication information. However, starting
763 from these requirements may be a successful approach.
765 In addition, a significant risk that users will type their password
766 into legacy HTML forms comes from the incremental deployment of any
767 web authentication technology. Websites will need a way to work with
768 older web browsers that do not yet support mechanisms that meet these
769 requirements. Not all websites will immediately adopt these
770 mechanisms. Users will sometimes browse from computers that have
771 mechanisms meeting these requirements and sometimes from older
772 browsers. They only gain protection from phishing when they type
773 passwords into trusted components. If the same password is sometimes
774 used with websites that meet these requirements and sometimes with
775 legacy websites, and if the password is captured by a phisher
776 targeting a legacy website, then that captured password can be used
777 even on websites meeting these requirements. Similarly, if a user is
778 tricked into using HTML forms when they should not, passwords can be
779 exposed. Users can significantly reduce this risk by using different
780 passwords for websites that use trusted browser authentication than
781 for those that still use HTML forms.
783 The risk of dictionary attack is always a significant concern for
784 password systems. Users can (but typically do not) minimize this
785 risk by choosing long, hard to guess phrases for passwords. The risk
786 of offline dictionary attack can be removed, once a password has
787 been established, by using a zero-knowledge password protocol. The
788 risk of online dictionary attack is always present. The risk of
789 offline dictionary attack is always present when setting up a new
790 password or changing a password. Minimizing the number of services
791 that use the same password can minimize this risk. When zero-
792 knowledge password protocols are used, being extra careful to make
793 sure the right server is used when establishing a password can
794 significantly reduce this risk.
796 9. References
798 9.1. Normative References
800 [RFC2119] Bradner, S., "Key words for use in RFCs to Indicate
801 Requirement Levels", BCP 14, RFC 2119, March 1997.
803 [WSCUIG] Roessler, T. and A. Saldhana, "Web Security Context:
804 User Interface Guidelines", W3C Working Draft, July 2008.
807 Publication of this draft needs to block until and unless
808 this reference is approved as some form of W3C
809 recommendation.
811 9.2. Informative References
813 [ANTIPHISHING]
814 Nelson, J. and D. Jeske, "Limits to Anti Phishing",
815 January 2006.
817 Proceedings of the W3C Security and Usability Workshop;
818 http://www.w3.org/2005/Security/usability-ws/papers/
819 37-google/
821 [DIGEST-BIND]
822 Santesson, S., Damour, K., and P. Hallin, "Channel Binding
823 for HTTP Digest Authentication",
824 draft-santesson-digestbind-01.txt (work in progress),
825 July 2008.
827 [IABAUTH] Rescorla, E., "A Survey of Authentication Mechanisms",
828 draft-iab-auth-mech-05.txt (work in progress),
829 February 2006.
831 [PWDHASH] Ross, B., Jackson, C., Miyake, N., Boneh, D., and J.
832 Mitchell, "Stronger Password Authentication Using Browser
833 Extensions", Proceedings 14th Usenix Security Symposium,
834 2005.
836 [RFC2818] Rescorla, E., "HTTP Over TLS", RFC 2818, May 2000.
838 [RFC4120] Neuman, C., Yu, T., Hartman, S., and K. Raeburn, "The
839 Kerberos Network Authentication Service (V5)", RFC 4120,
840 July 2005.
842 [RFC4346] Dierks, T. and E. Rescorla, "The Transport Layer Security
843 (TLS) Protocol Version 1.1", RFC 4346, April 2006.
845 [RFC4559] Jaganathan, K., Zhu, L., and J. Brezak, "SPNEGO-based
846 Kerberos and NTLM HTTP Authentication in Microsoft
847 Windows", RFC 4559, June 2006.
849 [RFC5056] Williams, N., "On the Use of Channel Bindings to Secure
850 Channels", RFC 5056, November 2007.
852 [RFC5090] Sterman, B., Sadolevsky, D., Schwartz, D., Williams, D.,
853 and W. Beck, "RADIUS Extension for Digest Authentication",
854 RFC 5090, February 2008.
856 [SECIND] Schechter, S., Dhamija, R., Ozment, A., and I. Fischer,
857 "The Emperor's New Security Indicators: An evaluation of
858 website authentication and the effect of role playing on
859 usability studies", IEEE Symposium on Security and
860 Privacy, May 2007.
865 Appendix A. Trusted UI Mechanisms
867 There are three basic approaches to establishing a trusted UI. The
868 first is to use a dynamic UI based on a secret known by the user;
869 [ANTIPHISHING] recommends this approach. A second approach is to
870 provide a UI action that highlights trusted or non-trusted components
871 in some way. This could work similarly to the Exposé feature in
872 Apple's OS X where a keystroke visually distinguishes structural
873 elements of the UI. Of course such a mechanism would only be useful
874 if users actually used it. Finally, the multi-level security
875 community has extensive research in designing UIs to display
876 classified, compartmentalized information. It is critical that these
877 UIs be able to label information and that these labels not be
878 spoofable.
880 See Section 5 for another case where confidential information in a UI
881 can be used to build trust.
883 Appendix B. Change History
885 Note to RFC Editor: This section should be removed prior to
886 publication.
888 B.1. Changes Since 08
890 Propose a new purpose section. Also, add a note describing what has
891 been done to date on these issues.
893 B.2. Changes since 07
895 Reword the abstract not to talk about identity providers
897 Define identity provider. I'm moving away from using it except
898 where necessary, but I think that there are a couple of cases where
899 the term is helpful rather than confusing.
901 Add a paragraph to the introduction helping to define how this
902 work fits in with other work.
904 Significantly rework the mutual authentication requirement to
905 describe why pwdhash is excluded, to give more motivation and to
906 try and clarify that authentication at different layers is
907 problematic
909 Rework the requirement for binding authentication to requests and
910 responses. The discussion of channel binding was obsolete and has
911 been updated based on advances in that area. Drop the comment
912 about redirect based schemes, because that depends on certificate
913 validation and the W3C guidelines recommend accepting self-signed
914 certificates in some cases.
916 Remove most references to identity providers from restricted
917 identities section and protecting enrollment section. The
918 concepts don't actually depend on whether an identity provider is
919 used.
921 Rework the section on finding the right server to provide a more
922 accurate description of image hints prior to login and to discuss
923 the uncertainty surrounding the effectiveness of strategies
924 discussed.
926 Rephrase terminology in security considerations to be consistent
927 with changes throughout the rest of the document. Refer to the
928 W3C guidelines as appropriate.
930 B.3. Changes since 06
932 Much expanded description of concerns about weak password
933 equivalents. They are not excluded except by the mutual
934 authentication requirement. However there are significant scoping
935 issues with them.
937 Clarify that the effectiveness of confidential information being
938 used to strengthen mutual authentication depends on users taking
939 advantage of that.
941 Continue to clarify differences between plaintext password
942 protocols and the password user interface
944 Reduce the use of the term identity provider; it's not entirely
945 clear that concept needs to be worked in here and right now
946 identity provider is an undefined term
948 The text on how to make trusted UIs sounded very authoritative;
949 that was not the intent, so rework that text.
951 B.4. Changes since 05
953 Clarified introduction to distinguish what happens at the TLS
954 layer and what at the HTTP layer. Discuss motivation of phishing
955 more.
957 In the introduction, restate claims to be more accurate. These
958 requirements are useful if users actually use the authentication
959 mechanisms; convincing them to do so and making it obvious whether
960 they are is a significant risk. Also, we may give them the
961 theoretical information necessary to detect fraud, but whether
962 they act on that is open.
964 Add a purpose of this memo section. Whatever text ends up there
965 after community discussion needs to be called out in the last
966 call.
968 Add a section calling out the difference between plaintext
969 password protocols and password interface. This needs to be
970 worked into the rest of the document.
972 Update the threat model. Significant hopefully clarifying
973 changes.
975 B.5. Changes since 02
977 Updated discussion of TLS authentication to point out that it does
978 meet the requirement of mutual authentication.
980 Added pointer to HTTP TLS channel bindings work
982 B.6. Changes since 01
984 Updated threat model to give examples of attacks that are in scope
985 and to more clearly discuss host security based on comments from
986 Chris Drake.
988 Clarify attacks of interest to be consistent with the
989 introduction.
991 Fix ups regarding one-time passwords. I'm not sure that OTPs can
992 meet all the requirements but clean things up where they clearly
993 can meet a requirement.
995 Clarify that in the mutual authentication case I'm concerned about
996 authentication of client to the server.
998 Clean up bugs in security considerations
1000 Author's Address
1002 Sam Hartman
1003 Painless Security, LLC
1005 Email: hartmans-ietf@mit.edu
1007 Full Copyright Statement
1009 Copyright (C) The IETF Trust (2008).
1011 This document is subject to the rights, licenses and restrictions
1012 contained in BCP 78, and except as set forth therein, the authors
1013 retain all their rights.
1015 This document and the information contained herein are provided on an
1016 "AS IS" basis and THE CONTRIBUTOR, THE ORGANIZATION HE/SHE REPRESENTS
1017 OR IS SPONSORED BY (IF ANY), THE INTERNET SOCIETY, THE IETF TRUST AND
1018 THE INTERNET ENGINEERING TASK FORCE DISCLAIM ALL WARRANTIES, EXPRESS
1019 OR IMPLIED, INCLUDING BUT NOT LIMITED TO ANY WARRANTY THAT THE USE OF
1020 THE INFORMATION HEREIN WILL NOT INFRINGE ANY RIGHTS OR ANY IMPLIED
1021 WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.
1023 Intellectual Property
1025 The IETF takes no position regarding the validity or scope of any
1026 Intellectual Property Rights or other rights that might be claimed to
1027 pertain to the implementation or use of the technology described in
1028 this document or the extent to which any license under such rights
1029 might or might not be available; nor does it represent that it has
1030 made any independent effort to identify any such rights. Information
1031 on the procedures with respect to rights in RFC documents can be
1032 found in BCP 78 and BCP 79.
1034 Copies of IPR disclosures made to the IETF Secretariat and any
1035 assurances of licenses to be made available, or the result of an
1036 attempt made to obtain a general license or permission for the use of
1037 such proprietary rights by implementers or users of this
1038 specification can be obtained from the IETF on-line IPR repository at
1039 http://www.ietf.org/ipr.
1041 The IETF invites any interested party to bring to its attention any
1042 copyrights, patents or patent applications, or other proprietary
1043 rights that may cover technology that may be required to implement
1044 this standard. Please address the information to the IETF at
1045 ietf-ipr@ietf.org.