2.6.3 One-way Hash Function (hash)

NOTE: This charter is a snapshot of the 63rd IETF Meeting in Paris, France. It may now be out-of-date.

Last Modified: 2005-06-27


Chair(s):

Paul Hoffman <paul.hoffman@vpnc.org>

Security Area Director(s):

Russ Housley <housley@vigilsec.com>
Sam Hartman <hartmans-ietf@mit.edu>

Security Area Advisor:

Russ Housley <housley@vigilsec.com>

Mailing Lists:

General Discussion:
To Subscribe:

Description of Working Group:

No description available

Goals and Milestones:

No Current Internet-Drafts

No Request For Comments

Current Meeting Report

Hash BoF
IETF 63, Paris
Monday August 1st, 1815-1945, room 341
Chair: Paul Hoffman
Notes taken by Pasi Eronen and Rich Graveman, edited by Paul Hoffman

1. Paul Hoffman: Introduction

(see slides)

Introduced the BoF and the rationale for it

This deals only with collision resistance, not preimage resistance. 

2. Bill Burr: NIST workshop

NIST is organizing a hash workshop on October 31 - November 1

If you want to attend, you have to pre-register

Expecting about 20 talks; the agenda will be posted in a couple of weeks

Results of this workshop will help us decide whether to run a
competition for hash functions, and what policies to set regarding SHA-256

URL:  http://www.nist.gov/hash-function

3. Russ Housley: Preprocessing to strengthen current hash functions

(presenting on Michael Szydlo's behalf)

(see slides)

Eric Rescorla: As a defense, how specific is this to current techniques
for attacking hash functions?

Russ: This defeats the currently-known attacks, but not all unknown
attacks. Basically, we don't know how wide a class of attacks this
defeats. The paper has more details.
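The preprocessing idea discussed above, interleaving fixed bytes that an attacker cannot control into the message before applying the unchanged hash function, can be sketched as follows. The interval and whitening values here are illustrative assumptions, not the parameters from the Szydlo paper:

```python
import hashlib

# Hypothetical parameters for illustration only; the actual proposal
# defines its own interleaving pattern.
INTERVAL = 12                     # insert whitening after every 12 message bytes
WHITENING = b"\x00\x00\x00\x00"   # fixed bytes the attacker cannot control

def whitened_sha1(message: bytes) -> str:
    """Hash a message after interleaving fixed 'whitening' bytes.

    The preprocessing disturbs the precise block structure that current
    collision-finding techniques depend on; SHA-1 itself is unchanged.
    """
    out = bytearray()
    for i in range(0, len(message), INTERVAL):
        out += message[i:i + INTERVAL]
        out += WHITENING
    return hashlib.sha1(bytes(out)).hexdigest()
```

Both signer and verifier must apply the same preprocessing, since the resulting digest differs from a plain SHA-1 of the message.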

4. Ran Canetti: Randomized hashing in signatures

(see slides)

Eric Rescorla: One concern people often have: if the attacker can
control the initial value, it makes forgeries easier.

Ran: You need to incorporate randomness in some way other than the IV.

Eric: Second thing: how much randomness do you need?

Ran: Any amount is better than none; at most 512 bits (the function's
block size) are needed.

Uri Blumenthal: Are there any calculations to prove that this randomized
version is any better than the non-randomized one?

Ran: With the current SHA-1 attacks, randomness helps by transforming an
off-line brute-force attack into an on-line one.

Uri: In some cases it is a problem to come up with even 128 bits of
randomness for every signature or hash. He also questioned how to
calculate how much randomness you need and how much it helps.
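One way to incorporate randomness other than through the IV, as discussed above, is to mix a fresh salt into the message itself before hashing. The sketch below is an illustrative salted construction in that spirit, not the exact scheme presented at the BoF:

```python
import hashlib
import os

def randomized_hash(message: bytes, salt=None):
    """Illustrative randomized hashing: XOR a fresh salt into every
    message block and also prepend it, so the value actually hashed
    was unpredictable to an off-line collision search.

    The salt length of 64 bytes matches SHA-1's 512-bit block size.
    """
    r = salt if salt is not None else os.urandom(64)
    mixed = bytes(b ^ r[i % len(r)] for i, b in enumerate(message))
    digest = hashlib.sha1(r + mixed).hexdigest()
    # r must travel alongside the signature so the verifier can redo this.
    return r, digest
```

An attacker who precomputed a collision off-line cannot know which salt the signer will choose, which is what moves the attack on-line.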

5. Steven Bellovin: Analyzing hash agility in IETF protocols

(see slides)

Stephen Kent: When looking at this, did you consider handling this with
configuration of trust anchors?

Bellovin: If IPsec is used in a relatively closed environment, it might
work. But for things like TLS, probably not.
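Hash agility, as analyzed in this talk, generally means a protocol carries an algorithm identifier alongside each digest so the hash function can be replaced later. A minimal sketch, with a hypothetical registry of identifiers:

```python
import hashlib

# Hypothetical registry mapping on-the-wire identifiers to hash
# constructors; a real protocol would use its IANA-registered values.
HASH_REGISTRY = {
    "sha-1": hashlib.sha1,
    "sha-256": hashlib.sha256,
}

def make_digest(alg, data):
    """Produce (identifier, digest) so the algorithm travels with the value."""
    return alg, HASH_REGISTRY[alg](data).hexdigest()

def verify_digest(alg, digest, data):
    """Recompute under the named algorithm; fail closed on unknown names."""
    fn = HASH_REGISTRY.get(alg)
    return fn is not None and fn(data).hexdigest() == digest
```

Retiring a weak algorithm then means removing its registry entry, rather than redesigning the protocol's message formats.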

6. Tim Polk: Hash truncation at NIST

(see slides)

Donald Eastlake: Why not put the truncation length every 12 bytes, like
the "whitening" earlier?

Tim Polk: Currently, we only put it in the IV, but I'll take this idea
to the editors.
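NIST's approach of binding the truncation length into the IV can be illustrated with SHA-224, which is SHA-256 truncated to 224 bits but computed from a distinct initial value, so the truncated digest is not simply a prefix of the full one:

```python
import hashlib

msg = b"hello"

# Naive truncation: chop the SHA-256 digest to 224 bits (56 hex chars).
naive = hashlib.sha256(msg).hexdigest()[:56]

# NIST-style truncation: SHA-224 uses a different IV, then truncates.
sha224 = hashlib.sha224(msg).hexdigest()

# Because the IVs differ, the two 224-bit values are unrelated; an
# attacker cannot derive one variant's digest from the other's.
assert naive != sha224
```

The same pattern is what encodes the truncation length "in the IV" that Tim Polk describes: each output length gets its own domain-separated variant of the function.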

7. Paul Hoffman: Charter discussion

(see slides)

Paul: Let's start with the charter from the list but feel free to amend
it. Also, let's discuss if this should be in the IETF or the IRTF.

Wes Hardaker: I haven't read the charter. I'm confused about what
exactly is within the scope. We have 4000 RFCs, and someone needs to
grep them for hash usage.

Paul: That's not included right now.

Wes: It seems the output would be mandated guidance to the rest of the
IETF.

Uri: Designing a hash function is not appropriate for either the IETF or
the IRTF. Collecting requirements, designing modes like truncated
hashes, or making recommendations on how to use a hash function could
belong here.

Larry Masinter: The IETF should do things that are needed and that only
the IETF will do, not things that are needed but that someone else is
likely to do. Designing hash algorithms is the latter; going through
IETF protocols is clearly the former, so maybe it should be in the
charter.

Paul: draft-hoffman-hash-attacks-04 concludes that in most IETF protocols
the attacks don't really matter. The main exception is certificates, and
the attacks are not yet useful.

Jon Callas: Randomized hashing or whitening is interesting; we could
implement it in OpenPGP in a couple of hours. But what should we be
doing? There are workshops coming soon. If I have an objection to making
a charter, it's only that we should wait for Vancouver to have more
information.

Gregory Lebovitz: I think doing all the protocol work here is not a good
idea; rather, we should produce coaching for other WGs, which can then
update their own protocols.

Ran: I agree that the IETF or IRTF should not design hash functions.
Standardizing modes of operation for hash functions, yes. The IETF is
sometimes more agile than other organizations; for instance, HMAC
started here and was then used elsewhere.

Paul: Talking about BCPs, we also need to decide whether we want to have
one recommendation/answer for the next hash function, or several.
Several gives flexibility, but may confuse the audience.

Uri: Our job is not designing modes of operation, but standardizing
modes designed and thoroughly peer-reviewed elsewhere. I don't think
we're at that point yet with things like randomized hashing. We also
need to collect input not only from protocol people, but also from
hardware people; it would not be good if our design is 10x slower than
SHA-1.

Paul: Some people are implementing SHA-1 in hardware; are we making that
hardware obsolete?

Steven Bellovin: We could do an informational RFC based on the paper by
Eric and me, e.g. on how to design a protocol for hash function agility.
The second thing that is needed is analyzing more protocols than Eric
and I did, to see what works and what does not.

Eric: We currently have partial information. We know some of the things
we have are weak. In the first stage, we can patch things and provide
input/requirements to people designing new hash functions. In stage two,
we would replace things with what comes out of standardization processes
elsewhere, like new hash functions.

David Black: I also support producing BCPs. One important thing to
decide is when to publish that BCP, i.e., when we're sufficiently
confident to recommend something. This is important outside the IETF as
well. On the hardware issue, SHA-1 is a pain in hardware; being as fast
as SHA-1 might not be a sufficient requirement.

Wes Hardaker: People need guidance on rollover and algorithm agility.

Paul: If we do some work, is it BCPs? (Some nodding and humming) Yes, it
looks like most people agree.

Aaron Falk (IRTF chair): BCP does not sound like research to me, so this
sounds more like IETF than IRTF. The IRTF is mostly about doing its own
research before handing it off to the IETF. It depends on how you want
to scope the work. Shorter-term protocol work would fit better in the
IETF; longer-term work, maybe in the CFRG research group?

Ran: As CFRG co-chair, some of this is clearly within the scope of CFRG.
But it would also make sense to have a focused IETF working group.

Christian Huitema: I would like to see IETF WG working at least on
negotiation mechanisms.

Jim Touch: We seem to be talking mainly about hashes within the scope of
signatures, but signature algorithms are out of scope. Is this a good
idea?

Paul: More needs to be discussed on the mailing list.



Slides

Collision Resistant Usage of SHA-1 via Message Pre-processing
Randomized Hashing for Signatures
Deploying New Hash Functions
Hash Truncation