Current Meeting Report
2.5.11 Securely Available Credentials (sacred)
NOTE: This charter is a snapshot of the 52nd IETF Meeting in Salt Lake City, Utah USA. It
may now be out-of-date. Last Modified:
Chair(s):
Stephen Farrell <email@example.com>
Magnus Nystrom <firstname.lastname@example.org>
Security Area Director(s):
Jeffrey Schiller <email@example.com>
Marcus Leech <firstname.lastname@example.org>
Security Area Advisor:
Marcus Leech <email@example.com>
To Subscribe: firstname.lastname@example.org
In Body: (un)subscribe
Description of Working Group:
The credentials used in a public key infrastructure (PKI) typically
consist of a public/private key pair, a corresponding certificate
or certificate chain and some trust or root certification authority
information. They are usually stored on a desktop or laptop system
as part of an application specific store. Currently, support for
credential export/import is uneven and end users need to get too
involved with the mechanics of creating and maintaining their PKI
credentials. Application-specific stores also mean that users cannot easily use the
same credential in multiple applications or on multiple devices. In
effect, today, credentials aren't portable. PKIs that use hardware
tokens (e.g., smart cards, PCMCIA cards) do allow for portability of the
user's credentials, however, most systems do not use hardware tokens,
but would benefit if similar portability features were available.
Ideally, users would be able to use a common set of credentials with
their desktop and laptop PCs, PDAs, cell phones, and other
Internet-ready devices. Even where hardware tokens are used, there may
also be substantial benefit derived from using credential portability
protocols in support of management functions such as, for example,
installation, token recovery (e.g. locked PIN), or token replacement.
There are at least two possible solutions for providing credential
portability. The first involves the use of a "credential server".
Credentials are uploaded to the server by one device (e.g., a desktop
computer); they can be stored there and downloaded when needed by the
same or a different device (e.g., a mobile phone, PDA, or laptop computer).
A second solution involves the "direct" transfer of credentials from
one device to another (e.g., from a mobile phone to a PDA). Although
there may be servers involved in the transfer, in security terms the
transfer is direct - that is, there is no "credential server" that takes
an active part in securing the exchanges.
While it might be possible that a single protocol can be developed for
both types of solution, two different protocols may be needed: one for
interacting with a "credential server"; and the other to facilitate the
"direct" transfer of credentials.
Security is at a premium for this working group; only authorized
clients should be allowed to download credentials; credentials must be
protected against eavesdropping and active attacks; attackers
must not be able to successfully replace an entity's credentials at a
credential server; etc. In general, the security provided by such
systems will be less than is provided in systems using hardware
tokens, as the hardware tokens tend to be more resistant to improper
inspection and modification. However, in many environments,
the security offered will be sufficient.
Availability is also at a premium. Credentials must be available
to many different types of client with different characteristics in
terms of processing power, storage and network connectivity.
The working group will produce:
1) An informational document(s) describing and identifying the detailed
requirements for any protocol in this area, along with an
architectural view of any such protocol.
2) A standards-track document(s) describing the details of the adopted
or developed protocol.
The WG will specifically take into account the requirements of the
IPSRA WG, and the protocols selected by this WG should provide a
solution for a subset of those requirements.
Goals and Milestones:
Done      Submit first draft of Requirements document
Done      Submit first draft of Frameworks document
Done      Submit second draft of Requirements document
Done      Submit second draft of Frameworks document
Dec 00    Submit first draft of Protocol document (incl. PDU syntax)
Done      Requirements document to Informational RFC
Mar 01    Frameworks document to Informational RFC
Done      Submit second draft of Protocol document
Jun 01    Protocol document to Proposed Standard
Request For Comments:
RFC 3157   Securely Available Credentials - Requirements
Notes on SACRED meeting
Salt Lake City, December 12, 2001.
Magnus Nystrom, Stephen Farrell, Co-Chairs.
There were ~80 attendees.
Agenda:
- Introduction, agenda bashing
- Working group status
- Protocol I-D discussion
- Protocol I-D overview
- Protocol I-D delta
Magnus reviewed the WG Status:
Since London, the requirements document was published as RFC 3157.
Version -02 of the Framework document was published on the mailing list. The decision was made in London to develop the next revision of the Framework document, then wait until the protocol document is completed, and publish both documents together to ensure consistency. No work has been done on the document since London; the next version will be put out in January.
Version -00 of the PKI Enrollment Information draft was published in June, and expires in December. This document will not be progressed further within this working group at this point.
Version -00 of the Protocol draft was published in October. (This was still a -00 draft because the document name was changed when PDM was dropped in favor of SRP.)
Stephen described the current Protocol I-D, draft-ietf-sacred-protocol-bss-00.txt. A reasonable number of people admitted to having read the draft (many more than had read the draft in London). This draft is based on the earlier PDM draft; it uses BEEP/TCP for transport; SASL-SRP (via BEEP) for authentication; the PKCS#15 credential format; and an XML schema to define payloads.
A question was asked about whether SASL-SRP is encumbered. The answer given was that we're not sure at this time. There is an IPR declaration by Stanford covering SRP on the IETF site.
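Since SASL-SRP came up repeatedly, a minimal sketch of the SRP-6 exchange it builds on may help. The group parameters below are toy values purely for illustration (a real deployment uses a large standardized safe prime), and all variable names are illustrative rather than taken from the draft:

```python
import hashlib
import secrets

def H(*parts: bytes) -> int:
    """SHA-1 over concatenated byte strings, read as a big-endian integer."""
    return int.from_bytes(hashlib.sha1(b"".join(parts)).digest(), "big")

def itob(i: int) -> bytes:
    return i.to_bytes((i.bit_length() + 7) // 8 or 1, "big")

# Toy group parameters -- for illustration ONLY; real SRP uses a large
# standardized safe prime and generator, not N = 23.
N, g = 23, 5
k = H(itob(N), itob(g))                    # SRP-6 multiplier

# Enrollment: the server stores (salt, verifier) and never sees the
# password again; the verifier alone cannot be replayed as a password.
username, password = b"alice", b"correct horse"
salt = secrets.token_bytes(16)
x = H(salt, hashlib.sha1(username + b":" + password).digest())
v = pow(g, x, N)                           # password verifier

# One authentication run.
a = secrets.randbelow(N - 2) + 1           # client ephemeral secret
b = secrets.randbelow(N - 2) + 1           # server ephemeral secret
A = pow(g, a, N)                           # client -> server
B = (k * v + pow(g, b, N)) % N             # server -> client
u = H(itob(A), itob(B))                    # scrambling parameter

# Both sides derive the same secret; the password never crosses the wire.
S_client = pow((B - k * pow(g, x, N)) % N, a + u * x, N)
S_server = pow(A * pow(v, u, N), b, N)
assert S_client == S_server
```

In the actual protocol this math is wrapped in a SASL mechanism negotiation, and the derived secret can key a protection layer for the subsequent credential exchange.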
This draft is compatible with XKMS.
There are four open issues with this draft:
- Should the protocol support administrative operations?
- Should we tie account-password changes with credential changes somehow? Mike Just responded "no". Stephen Farrell said that he tends to agree; the client can always force them together.
- Mapping from the SASL-SRP id to a TLS certificate. A comment was made that we would love to be able to solve this problem, but lots of other efforts have this same problem, and they always leave it to the implementation to define. Anybody who has "start TLS" followed by SASL-external has this problem, but they don't address it because they don't know how. Stephen asked the commenter to please post pointers to other experiences addressing this issue to the list.
- Reasonable check on Upload:LastModified
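The last open issue, a sanity check on Upload:LastModified, is essentially optimistic concurrency control: reject an upload when the stored credentials changed since the client last downloaded them. A toy sketch of the idea, with all names hypothetical and an integer counter standing in for the timestamp to keep the example deterministic:

```python
class ConflictError(Exception):
    """Raised when an upload would clobber a newer credential set."""

class CredentialStore:
    """Toy server-side store; a counter stands in for LastModified."""

    def __init__(self):
        self._blobs = {}   # account -> (credential_bytes, last_modified)
        self._clock = 0

    def download(self, account):
        return self._blobs[account]

    def upload(self, account, blob, expected_last_modified=None):
        # Accept only if the client's view of LastModified matches the
        # stored one (compare-and-swap), so two devices cannot silently
        # overwrite each other's credential updates.
        current = self._blobs.get(account)
        if current is not None and expected_last_modified != current[1]:
            raise ConflictError("credentials changed since last download")
        self._clock += 1
        self._blobs[account] = (blob, self._clock)
        return self._clock
```

A second device that uploads with a stale LastModified gets a conflict and must download again first, which is the kind of "reasonable check" the bullet refers to.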
Magnus then initiated the "Transport discussion". The I-D tied SACRED to BEEP, and used the SASL services provided by BEEP for security. Magnus & Gareth Richards posted another approach, integrating SASL support into SACRED itself, rather than relying on the transport (BEEP) to provide it. The primary reason for this is that clients might not always support BEEP. (The SACRED protocol is expected to be implemented on a wide variety of clients, some very limited in nature; mandating/relying on BEEP could harm adoption of the overall SACRED protocol.)
This proposal is a delta to the current protocol I-D; among other changes, it:
- allows for easier mapping to http, SOAP, etc.
- allows for other security context establishment mechanisms than SASL
There are other changes aimed at constrained clients. This approach is still XKMS compatible, using SASL.
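The layering question can be made concrete: if the SASL exchange rides inside SACRED's own payloads, the same handshake loop runs unchanged over BEEP, HTTP, or anything that can carry opaque messages. A hypothetical sketch (none of these names come from the draft):

```python
def run_handshake(send, recv, mechanism_step, first_msg):
    """Drive a challenge/response mechanism over any transport.

    `send`/`recv` are transport callbacks supplied by the caller;
    `mechanism_step` maps a server challenge to the next client
    response, returning None when the exchange is complete.
    """
    # The auth exchange is carried inside application payloads, so the
    # transport only ever moves opaque messages.
    send({"sacred": "auth", "data": first_msg})
    while True:
        challenge = recv()
        response = mechanism_step(challenge["data"])
        if response is None:
            return challenge["data"]       # final server message
        send({"sacred": "auth", "data": response})
```

Because `send` and `recv` are the only transport touch points, a BEEP binding and an HTTP binding would differ only in those two callbacks, which is the separation Magnus argued for.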
Dave Crocker responded with a counter-proposal, which he called "The Zen of SACRED/BEEP".
It follows a number of precepts which include:
- invent as little as possible;
- complexity and effort are not reduced by being moved to different layer, so put functions where they belong;
- it is good to reuse;
- use the simplest solution;
- avoid using a hammer to drive a screw.
He then described what BEEP is and what it isn't. BEEP is merely: the protocol-basics mantra (framing, syntax, exchange rules, response codes); session options, if you want; multiplexing (asynchrony/push), if you want; a standard point of departure above TCP. Using BEEP means using common standards (XML, MIME, SASL), and the application is freed from network basics.
There are traditionally a number of arguments made against BEEP, none of which are true according to Dave: it adds size; it adds delay; it adds coding effort; it is new; it is easier/faster to grow your own.
Jeff Schiller noted that one of the features of "roll your own" is that you write code that does what you want. If you follow Dave's approach, it can be painful from an implementation standpoint (you have to get an XML parser, and a BEEP toolkit, and ...). Dave agreed that that's a good and valid issue, but the features in BEEP generally need to be provided somewhere. It gets back to "should you use a subroutine library you didn't write?" Using BEEP versus rolling your own is the same logic.
Eric Rescorla asked whether Dave's argument is an argument that BEEP is an acceptable binding for SACRED, or an argument that we don't need any other bindings because BEEP's all we need. Dave said that BEEP presents a necessary and sufficient interface to the underlying parts of services. Rolling your own is only a good approach if it gives you what you want with acceptable risk, performance, effort...
Keith Moore questioned the merits of using HTTP under SACRED. The question is not one of footprint, it's one of mindshare. People believe that HTTP is already there, even in challenged environments, and they know how to work with it. That's a dubious assumption, because HTTP is almost certainly wrong for this type of application. There are so many assumptions built into the HTTP infrastructure that you'll run afoul of one or more of them if you try to reuse it. Bottom line is that HTTP is a tar baby you don't want to stick your fist in. Also, having more bearers/transports actually harms interoperability in this case.
Magnus reiterated that one has to be practical; we want to support a wide variety of clients/platforms with SACRED. If these platforms wind up having to support another protocol in order to support SACRED, they may just wind up abandoning SACRED altogether.
Keith Moore strongly urged the group to pick ONE transport, because when you implement things over multiple transports/bearers, you run into subtle quirks that cause massive interoperability problems. OSI showed that you CAN'T mix & match different layers; you have to pick them all the way down.
Magnus agreed that certainly you have to support one transport - that doesn't mean that you can't support other transports. He wants a cleaner separation of things so that SACRED payloads can be moved to other transports.
Phill Hallam-Baker noted that users will wind up dictating what the transport must be.
Jeff Schiller asked, what is SACRED trying to do? It's something simple, and it seems to be getting really complicated. So it seems like we ought to just come up with one way to do it, and do it that way. Otherwise we're going to have the mandatory-to-implement battles again, and then there will be interoperability problems and maybe security problems.
Stephen took a straw poll of who is likely to implement SACRED. A number of hands were raised; about 15. Of those, all but one would prefer to separate the authentication from the transport and be able to put it into the payload. When Stephen took a similar poll of meeting attendees in general, there was a more even split.
Limor Elbaz, Discretix, then started a discussion on the peer-to-peer SACRED solution. She covered the concept & importance of peer-to-peer secure credentials transport; some use cases (e.g., when you don't want to go through a server because of risks of storing credentials on the server, or of potential active attacks exploiting the server enrollment process). She explained that, in some cases, direct transfer has a psychological sense of security, and there may be a requirement in some cases to limit the scenarios in which the user will be able to do credential transfer.
Direct transfer of credentials differs from the server-based solution in a number of ways. A direct connection will not necessarily be able to adopt the server-based mechanisms (e.g., TLS); direct transfer requires no management (server, database, enrollment, etc.); some of the security attacks are not relevant in direct connection; in direct transfer there is no need to refer to (or set requirements for) the transport layer; and no changes in credential formats would be needed.
Limor proposed the following roadmap for Direct transfer progress: SACRED should start working on a new specification, which will deal with direct transfer. The effort would start by collecting requirements; any input from the working group will help. Ultimately, it would be good to coordinate the direct transfer solution with some of the WAP WIM-related work.
Stephen closed the meeting by reviewing the status of the working group "at large". For a long time there was almost no work or input in this group. On the mailing list, Magnus encouraged those interested in publishing the protocol document as "standards-track" to step forward, or the document would probably be published as "experimental". There was limited but positive feedback on the list on this issue; several people emphasized the need for standards-track progress. We still need more review of the documents in general, but we will continue development of a standards-track protocol at this time. The chairs will bring this issue (experimental vs. standards-track) up again when reaching WG last call for the protocol document.
Slides:
SACRED with SASL support
Direct (Peer-to-Peer) Secure Transfer of Credentials
The Zen of SACRED/BEEP