Signed HTTP Exchanges
J. Yasskin, Google (jyasskin@chromium.org)

This document specifies how a server can send an HTTP exchange—a request
URL, content negotiation information, and a response—with signatures that
vouch for that exchange’s authenticity. These signatures can be verified
against an origin’s certificate to establish that the exchange is
authoritative for an origin even if it was transferred over a connection
that isn’t. The signatures can also be used in other ways described in the
appendices.

These signatures contain countermeasures against downgrade and
protocol-confusion attacks.

Discussion of this draft takes place on the HTTP working group mailing list
(ietf-http-wg@w3.org), which is archived at
https://lists.w3.org/Archives/Public/ietf-http-wg/.

The source code and issues list for this draft can be found in
https://github.com/WICG/webpackage.

Signed HTTP exchanges provide a way to prove the authenticity of a resource in
cases where the transport layer isn’t sufficient. This can be used in several
ways:

- When signed by a certificate () that’s trusted for an origin, an exchange
  can be treated as authoritative for that origin, even if it was transferred
  over a connection that isn’t authoritative (Section 9.1 of ) for that
  origin. See and .
- A top-level resource can use a public key to identify an expected publisher
  for particular subresources, a system known as Subresource Integrity ().
  An exchange’s signature provides the matching proof of authorship. See .
- A signature can vouch for the exchange in some way, for example that it
  appears in a transparency log or that static analysis indicates that it
  omits certain attacks. See and .

Subsequent work toward the use cases in will provide a way to group signed
exchanges into bundles that can be transmitted and stored together, but
single signed exchanges are useful enough to standardize on their own.
Absolute URL: A string for which the URL parser (), when run without a base
URL, returns a URL rather than a failure, and for which that URL has a null
fragment. This is similar to the absolute-URL string concept defined by ()
but might not include exactly the same strings.

Author: The entity that wrote the content in a particular resource. This
specification deals with publishers rather than authors.

Publisher: The entity that controls the server for a particular origin. The
publisher can get a CA to issue certificates for their private keys and can
run a TLS server for their origin.

Exchange: An HTTP request URL, content negotiation information, and an HTTP
response. This can be encoded into a request message from a client with its
matching response from a server, into the request in a PUSH_PROMISE with its
matching response stream, or into the dedicated format in , which uses to
encode the content negotiation information. This is not quite the same
meaning as defined by Section 8 of , which assumes the content negotiation
information is embedded into HTTP request headers.

Intermediate: An entity that fetches signed HTTP exchanges from a publisher
or another intermediate and forwards them to another intermediate or a
client.

Client: An entity that uses a signed HTTP exchange and needs to be able to
prove that the publisher vouched for it as coming from its claimed origin.

Unix time: Defined by section 4.16.

The key words “MUST”, “MUST NOT”, “REQUIRED”, “SHALL”, “SHALL NOT”, “SHOULD”,
“SHOULD NOT”, “RECOMMENDED”, “NOT RECOMMENDED”, “MAY”, and “OPTIONAL” in this
document are to be interpreted as described in BCP 14 when, and only when,
they appear in all capitals, as shown here.

In the response of an HTTP exchange, the server MAY include a Signature header
field () holding a list of one or more parameterised signatures that vouch
for the content of the exchange. Exactly which content the signature vouches
for can depend on how the exchange is transferred ().

The client categorizes each signature as “valid” or “invalid” by validating
that signature with its certificate or public key and other metadata against
the exchange’s URL, response headers, and content (). This validity then
informs higher-level protocols.

Each signature is parameterised with information to let a client fetch
assurance that a signed exchange is still valid, in the face of revoked
certificates and newly-discovered vulnerabilities. This assurance can be
bundled back into the signed exchange and forwarded to another client, which
won’t have to re-fetch this validity information for some period of time.

The Signature header field conveys a list of signatures for an exchange, each
one accompanied by information about how to determine the authority of and
refresh that signature. Each signature directly signs the exchange’s URL and
response headers and identifies one of those headers that enforces the
integrity of the exchange’s payload.

The Signature header is a Structured Header as defined by . Its value MUST
be a parameterised list (Section 3.4 of ). Its ABNF is:

Each parameterised identifier in the list MUST have parameters named “sig”,
“integrity”, “validity-url”, “date”, and “expires”. Each parameterised identifier
MUST also have either “cert-url” and “cert-sha256” parameters or an “ed25519key”
parameter. This specification gives no meaning to the identifier itself, which
can be used as a human-readable identifier for the signature (see
). The present parameters MUST have the following
values:
“sig”: Byte sequence (Section 3.10 of ) holding the signature of most of
these parameters and the exchange’s URL and response headers.

“integrity”: A string (Section 3.8 of ) containing a “/”-separated sequence
of names starting with the lowercase name of the response header field that
guards the response payload’s integrity. The meaning of subsequent names
depends on the response header field, but for the “digest” header field, the
single following name is the name of the digest algorithm that guards the
payload’s integrity.

“cert-url”: A string (Section 3.8 of ) containing an absolute URL () with
a scheme of “https” or “data”.

“cert-sha256”: Byte sequence (Section 3.10 of ) holding the SHA-256 hash of
the first certificate found at “cert-url”.

“ed25519key”: Byte sequence (Section 3.10 of ) holding an Ed25519 public
key ().

“validity-url”: A string (Section 3.8 of ) containing an absolute URL ()
with a scheme of “https”.

“date”, “expires”: An integer (Section 3.6 of ) representing a Unix time.

The “cert-url” parameter is not signed, so intermediates can update it with a
pointer to a cached version.

The following header is included in the response for an exchange with
effective request URI https://example.com/resource.html. Newlines are added
for readability.

There are 4 signatures: 2 from different secp256r1 certificates within
https://example.com/, one using a raw ed25519 public key that’s also
controlled by example.com, and a fourth using a secp256r1 certificate owned
by thirdparty.example.com.

All 4 signatures rely on the Digest response header with the mi-sha256
digest algorithm to guard the integrity of the response payload.

The signatures include a “validity-url” that includes the first time the
resource was seen. This allows multiple versions of a resource at the same
URL to be updated with new signatures, which allows clients to avoid
transferring extra data while the old versions don’t have known security
bugs.

The certificates at https://example.com/oldcerts and
https://example.com/newcerts have subjectAltNames of example.com, meaning
that if they and their signatures validate, the exchange can be trusted as
having an origin of https://example.com/. The publisher might be using two
certificates because their readers have disjoint sets of roots in their
trust stores.

The publisher signed with all three certificates at the same time, so they
share a validity range: 7 days starting at 2017-11-19 21:53 UTC.

The publisher then requested an additional signature from
thirdparty.example.com, which did some validation or processing and then
signed the resource at 2017-11-19 23:11 UTC. thirdparty.example.com only
grants 4-day signatures, so clients will need to re-validate more often.

() provides a way to parameterise identifiers but not other supported types
like byte sequences. If the Signature header field is notionally a list of
parameterised signatures, maybe we should add a “parameterised byte
sequence” type.

Should the cert-url and validity-url be lists so that intermediates can
offer a cache without losing the original URLs? Putting lists in dictionary
fields is more complex than allows, so they’re single items for now.

To sign an exchange’s response headers, they need to be serialized into a byte string.
Since intermediaries and distributors might rearrange, add, or just
reserialize headers, we can’t use the literal bytes of the headers as this
serialization. Instead, this section defines a CBOR representation that can
be embedded into other CBOR, canonically serialized (), and then signed.

The CBOR representation of a set of response metadata and headers is the
CBOR () map with the following mappings:

- The byte string ‘:status’ to the byte string containing the response’s
  3-digit status code, and
- For each response header field, the header field’s lowercase name as a
  byte string to the header field’s value as a byte string.

Given the HTTP exchange:

The CBOR representation consists of the following item, represented using
the extended diagnostic notation from appendix G:

The resource at a signature’s cert-url MUST have the
application/cert-chain+cbor content type, MUST be canonically-encoded CBOR
(), and MUST match the following CDDL:

The first map (second item) in the CBOR array is treated as the end-entity
certificate, and the client will attempt to build a path () to it from a
trusted root using the other certificates in the chain.

Each cert value MUST be a DER-encoded X.509v3 certificate (). Other
key/value pairs in the same array item define properties of this
certificate.

The first certificate’s ocsp value MUST be a complete, DER-encoded OCSP
response for that certificate (using the ASN.1 type OCSPResponse defined in
). Subsequent certificates MUST NOT have an ocsp value.

Each certificate’s sct value, if any, MUST be a
SignedCertificateTimestampList for that certificate as defined by Section
3.3 of .

Loading a cert-url takes a forceFetch flag. The client MUST:

1. Let raw-chain be the result of fetching () cert-url. If forceFetch is
   not set, the fetch can be fulfilled from a cache using normal HTTP
   semantics (). If this fetch fails, return “invalid”.
2. Let certificate-chain be the array of certificates and properties
   produced by parsing raw-chain using the CDDL above. If any of the
   requirements above aren’t satisfied, return “invalid”. Note that this
   validation requirement might be impractical to completely achieve due to
   certificate validation implementations that don’t enforce DER encoding
   or other standard constraints.
3. Return certificate-chain.

Within this specification, the canonical serialization of a CBOR item uses
the following rules derived from Section 3.9 of with erratum 4964 applied:

- Integers and the lengths of arrays, maps, and strings MUST use the
  smallest possible encoding.
- Items MUST NOT be encoded with indefinite length.
- The keys in every map MUST be sorted in the bytewise lexicographic order
  of their canonical encodings. For example, the following keys are
  correctly sorted:
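These rules are small enough to check mechanically. The sketch below is a
minimal canonical encoder covering only the types this specification uses
(not a full CBOR implementation); it reproduces the example encodings that
follow, and applies the same bytewise key ordering to a hypothetical
response-header map like the one defined earlier in this section:

```python
def head(major, n):
    # CBOR head using the smallest possible length encoding (canonical rule).
    if n < 24:
        return bytes([(major << 5) | n])
    if n < 0x100:
        return bytes([(major << 5) | 24, n])
    return bytes([(major << 5) | 25]) + n.to_bytes(2, "big")

def enc(v):
    # Just enough canonical CBOR for this spec: integers, byte strings,
    # text strings, arrays, maps, and false. No indefinite lengths.
    if v is False:
        return b"\xf4"
    if isinstance(v, int):
        return head(0, v) if v >= 0 else head(1, -1 - v)
    if isinstance(v, bytes):
        return head(2, len(v)) + v
    if isinstance(v, str):
        u = v.encode("utf-8")
        return head(3, len(u)) + u
    if isinstance(v, list):
        return head(4, len(v)) + b"".join(enc(x) for x in v)
    if isinstance(v, dict):
        # Keys sorted bytewise by their own canonical encodings.
        pairs = sorted((enc(k), enc(val)) for k, val in v.items())
        return head(5, len(v)) + b"".join(k + val for k, val in pairs)
    raise TypeError(v)

examples = [10, 100, -1, "z", "aa", [100], [-1], False]
print([enc(x).hex() for x in examples])
# → ['0a', '1864', '20', '617a', '626161', '811864', '8120', 'f4']

# A hypothetical response-header map, encoded with the same rules
# (':status' sorts before 'content-type' because 0x47... < 0x4c...):
print(enc({b":status": b"200", b"content-type": b"text/html"}).hex())
```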
- 10, encoded as 0A
- 100, encoded as 18 64
- -1, encoded as 20
- “z”, encoded as 61 7A
- “aa”, encoded as 62 61 61
- [100], encoded as 81 18 64
- [-1], encoded as 81 20
- false, encoded as F4

Note: this specification does not use floating point, tags, or other more
complex data types, so it doesn’t need rules to canonicalize those.

The client MUST parse the Signature header field as the parameterised list
(Section 4.2.5 of ) described in . If an error is thrown during this
parsing or any of the requirements described there aren’t satisfied, the
exchange has no valid signatures. Otherwise, each member of this list
represents a signature with parameters.

The client MUST use the following algorithm to determine whether each
signature with parameters is invalid or potentially-valid for an exchange’s:

- requestUrl, a byte sequence that can be parsed into the exchange’s
  effective request URI (Section 5.5 of ),
- responseHeaders, a byte sequence holding the canonical serialization ()
  of the CBOR representation () of the exchange’s response metadata and
  headers, and
- payload, a stream of bytes constituting the exchange’s payload body
  (Section 3.3 of ). Note that the payload body is the message body with
  any transfer encodings removed.

Potentially-valid results include:

- The signed headers of the exchange so that higher-level protocols can
  avoid relying on unsigned headers, and
- Either a certificate chain or a public key so that a higher-level protocol
  can determine whether it’s actually valid.

This algorithm accepts a forceFetch flag that avoids the cache when fetching
URLs. A client that determines that a potentially-valid certificate chain is
actually invalid due to an expired OCSP response MAY retry with forceFetch
set to retrieve an updated OCSP from the original server.

- Let:
  - signature be the signature (byte sequence in the parameterised
    identifier’s “sig” parameter).
  - integrity be the signature’s “integrity” parameter.
  - validity-url be the signature’s “validity-url” parameter.
  - cert-url be the signature’s “cert-url” parameter, if any.
  - cert-sha256 be the signature’s “cert-sha256” parameter, if any.
  - ed25519key be the signature’s “ed25519key” parameter, if any.
  - date be the signature’s “date” parameter, interpreted as a Unix time.
  - expires be the signature’s “expires” parameter, interpreted as a Unix
    time.
- Set publicKey and signing-alg depending on which key fields are present:
  - If cert-url is present:
    1. Let certificate-chain be the result of loading the certificate chain
       at cert-url passing the forceFetch flag (). If this returns
       “invalid”, return “invalid”.
    2. Let main-certificate be the first certificate in certificate-chain.
    3. Set publicKey to main-certificate’s public key.
    4. If publicKey is an RSA key, return “invalid”.
    5. If publicKey is a key using the secp256r1 elliptic curve, set
       signing-alg to ecdsa_secp256r1_sha256 as defined in Section 4.2.3
       of .
    6. Otherwise, either return “invalid” or set signing-alg to a
       non-legacy signing algorithm defined by TLS 1.3 or later (). This
       choice MUST depend only on publicKey’s type and not on any other
       context.
  - If ed25519key is present, set publicKey to ed25519key and signing-alg
    to ed25519, as defined by .
- If expires is more than 7 days (604800 seconds) after date, return
  “invalid”.
- If the current time is before date or after expires, return “invalid”.
- Let message be the concatenation of the following byte strings. This
  matches the format to avoid cross-protocol attacks if anyone uses the
  same key in a TLS certificate and an exchange-signing certificate.
  - A string that consists of octet 32 (0x20) repeated 64 times.
  - A context string: the ASCII encoding of “HTTP Exchange 1”.
    Note: RFC EDITOR PLEASE DELETE THIS NOTE; The implementation of the
    final RFC MUST use this context string, but implementations of drafts
    MUST NOT use it and MUST use another draft-specific string beginning
    with “HTTP Exchange 1 ” instead. This ensures that signers can predict
    how their signatures will be used.
  - A single 0 byte which serves as a separator.
  - If cert-sha256 is set, a byte holding the value 32 followed by the 32
    bytes of the value of cert-sha256. Otherwise a 0 byte.
  - The 8-byte big-endian encoding of the length in bytes of validity-url,
    followed by the bytes of validity-url.
  - The 8-byte big-endian encoding of date.
  - The 8-byte big-endian encoding of expires.
  - The 8-byte big-endian encoding of the length in bytes of requestUrl,
    followed by the bytes of requestUrl.
  - The 8-byte big-endian encoding of the length in bytes of
    responseHeaders, followed by the bytes of responseHeaders.
- If cert-url is present and the SHA-256 hash of main-certificate’s
  cert_data is not equal to cert-sha256 (whose presence was checked when
  the Signature header field was parsed), return “invalid”.
Note that this intentionally differs from TLS 1.3, which signs the entire
certificate chain in its Certificate Verify (Section 4.4.3 of
), in order to allow updating the stapled OCSP
response without updating signatures at the same time.
- If signature is not a valid signature of message by publicKey using
  signing-alg, return “invalid”.
- If headers, interpreted according to , does not contain a Content-Type
  response header field (Section 3.1.1.5 of ), return “invalid”.
  Clients MUST interpret the signed payload as this specified media type
  instead of trying to sniff a media type from the bytes of the payload,
  for example by attaching an X-Content-Type-Options: nosniff header field
  () to the extracted response.
- If integrity names a header field and parameter that is not present in
  responseHeaders or which the client cannot use to check the integrity of
  payload (for example, the header field is new and hasn’t been implemented
  yet), then return “invalid”. If the selected header field provides
  integrity guarantees weaker than SHA-256, return “invalid”. If validating
  integrity using the selected header field requires the client to process
  records larger than 16384 bytes, return “invalid”. Clients MUST implement
  at least the Digest header field with its mi-sha256 digest algorithm
  (Section 3 of ).
Note: RFC EDITOR PLEASE DELETE THIS NOTE; Implementations of drafts of this
RFC MUST recognize the draft spelling of the content encoding and digest
algorithm specified by until that draft is
published as an RFC. For example, implementations of
draft-thomson-http-mice-03 would use mi-sha256-03 and MUST NOT use
mi-sha256 itself. This ensures that final implementations don’t need to
handle compatibility with implementations of early drafts of that content
encoding.
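For the required Digest/mi-sha256 combination, the payload check in the next
step amounts to recomputing a Merkle proof chain over the payload records
and comparing it with the value carried in the header. The sketch below
follows my reading of draft-thomson-http-mice (the flag bytes and the
omitted interleaved wire encoding are assumptions from that draft, and draft
implementations use draft-specific algorithm names):

```python
import hashlib

def mi_sha256_root(payload, record_size=16384):
    # Split the payload into records; clients reject record sizes above
    # 16384 bytes per the step above. The last record is hashed with a
    # 0x00 flag; each earlier record is hashed with the following proof
    # and a 0x01 flag, yielding the root proof the header carries.
    records = [payload[i:i + record_size]
               for i in range(0, len(payload), record_size)] or [b""]
    proof = hashlib.sha256(records[-1] + b"\x00").digest()
    for rec in reversed(records[:-1]):
        proof = hashlib.sha256(rec + proof + b"\x01").digest()
    return proof

def payload_matches(payload, expected_root, record_size=16384):
    # The integrity decision: recompute and compare.
    return mi_sha256_root(payload, record_size) == expected_root
```

Because each record's proof depends only on the record and the proof after
it, a client can validate early records as they stream in, which is what
makes the incremental processing described below possible.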
- If payload doesn’t match the integrity information in the header described
  by integrity, return “invalid”.
- Return “potentially-valid” with whichever is present of certificate-chain
  or ed25519key.

Note that the above algorithm can determine that an exchange’s headers are
potentially-valid before the exchange’s payload is received. Similarly, if
integrity identifies a header field and parameter like Digest: mi-sha256 ()
that can incrementally validate the payload, early parts of the payload can
be determined to be potentially-valid before later parts of the payload.
Higher-level protocols MAY process parts of the exchange that have been
determined to be potentially-valid as soon as that determination is made but
MUST NOT process parts of the exchange that are not yet potentially-valid.
Similarly, as the higher-level protocol determines that parts of the
exchange are actually valid, the client MAY process those parts of the
exchange and MUST wait to process other parts of the exchange until they too
are determined to be valid.

Should the signed message use the TLS format (with an initial 64 spaces) even
though these certificates can’t be used in TLS servers?

Both OCSP responses and signatures are designed to expire a short time after
they’re signed, so that revoked certificates and signed exchanges with known
vulnerabilities are distrusted promptly.

This specification provides no way to update OCSP responses by themselves.
Instead, clients need to re-fetch the “cert-url” to get a chain including a
newer OCSP response.

The “validity-url” parameter of the signatures provides a way to fetch new
signatures or learn where to fetch a complete updated exchange.

Each version of a signed exchange SHOULD have its own validity URLs, since
each version needs different signatures and becomes obsolete at different
times.

The resource at a “validity-url” is “validity data”, a CBOR map matching the
following CDDL ():

The elements of the signatures array are parameterised identifiers (Section
4.2.6 of ) meant to replace the signatures within the Signature header
field pointing to this validity data. If the signed exchange contains a bug
severe enough that clients need to stop using the content, the signatures
array MUST NOT be present.

If the update map is present, that indicates that a new version of the
signed exchange is available at its effective request URI (Section 5.5 of )
and can give an estimate of the size of the updated exchange (update.size).
If the signed exchange is currently the most recent version, the update
SHOULD NOT be present.

If both the signatures and update fields are present, clients can use the
estimated size to decide whether to update the whole resource or just its
signatures.

For example, say a signed exchange whose URL is https://example.com/resource
has the following Signature header field (with line breaks included and
irrelevant fields omitted for ease of reading).

At 2017-11-27 11:02 UTC, sig1 and sig2 have expired, but thirdpartysig
doesn’t expire until 23:11 that night, so the client needs to fetch
https://example.com/resource.validity.1511157180 (the validity-url of sig1
and sig2) if it wishes to update those signatures. This URL might contain:

This indicates that the client could fetch a newer version at
https://example.com/resource (the original URL of the exchange), or that the
validity period of the old version can be extended by replacing the first
two of the original signatures (the ones with a validity-url of
https://example.com/resource.validity.1511157180) with the single new
signature provided. (This might happen at the end of a migration to a new
root certificate.) The signatures of the updated signed exchange would be:

https://example.com/resource.validity.1511157180 could also expand the set
of signatures if its signatures array contained more than 2 elements.

Signature header fields cost on the order of 300 bytes for ECDSA signatures,
so servers might prefer to avoid sending them to clients that don’t intend to
use them. A client can send the Accept-Signature header field to indicate that
it does intend to take advantage of any available signatures and to indicate
what kinds of signatures it supports.

When a server receives an Accept-Signature header field in a client request,
it SHOULD reply with any available Signature header fields for its response
that the Accept-Signature header field indicates the client supports.
However, if the Accept-Signature value violates a requirement in this
section, the server MUST behave as if it hadn’t received any
Accept-Signature header at all.

The Accept-Signature header field is a Structured Header as defined by .
Its value MUST be a parameterised list (Section 3.4 of ). Its ABNF is:

The order of identifiers in the Accept-Signature list is not significant.
Identifiers, ignoring any initial “-” character, MUST NOT be duplicated.

Each identifier in the Accept-Signature header field’s value indicates that
a feature of the Signature header field () is supported. If the identifier
begins with a “-” character, it instead indicates that the feature named by
the rest of the identifier is not supported. Unknown identifiers and
parameters MUST be ignored because new identifiers and new parameters on
existing identifiers may be defined by future specifications.

Identifiers starting with “digest/” indicate that the client supports the
Digest header field (RFC 3230) with the parameter from the HTTP Digest
Algorithm Values registry named in lower-case by the rest of the identifier.
For example, “digest/mi-blake2” indicates support for Merkle integrity with
the as-yet-unspecified mi-blake2 parameter, and “-digest/mi-sha256”
indicates non-support for Merkle integrity with the mi-sha256 content
encoding.

If the Accept-Signature header field is present, servers SHOULD assume
support for “digest/mi-sha256” unless the header field states otherwise.

Identifiers starting with “ecdsa/” indicate that the client supports
certificates holding ECDSA public keys on the curve named in lower-case by
the rest of the identifier.

If the Accept-Signature header field is present, servers SHOULD assume
support for “ecdsa/secp256r1” unless the header field states otherwise.

The “ed25519key” identifier has parameters indicating the public keys that
will be used to validate the returned signature. Each parameter’s name is
re-interpreted as a byte sequence (Section 3.10 of ) encoding a prefix of
the public key. For example, if the client will validate signatures using
the public key whose base64 encoding is
11qYAYKxCrfVS/7TyWQHOg7hcvPapiMlrwIaaPcHURo=, valid Accept-Signature header
fields include:

but not

because 5 bytes isn’t a valid length for encoded base64, and not

because it doesn’t start or end with the *s that indicate a byte sequence.

Note that ed25519key; ** is an empty prefix, which matches all public keys,
so it’s useful in subresource integrity () cases like <link rel=preload
as=script href="..."> where the public key isn’t known until the matching
<script src="..." integrity="..."> tag.

states that the client will accept signatures with payload integrity assured
by the Digest header and mi-sha256 digest algorithm and implies that the
client will accept signatures from ECDSA keys on the secp256r1 curve.

states that the client will accept ECDSA keys on the secp384r1 curve but not
the secp256r1 curve and payload integrity assured with the Digest: mi-sha256
header field.

Is an Accept-Signature header useful enough to pay for itself? If clients
wind up sending it on most requests, that may cost more than the cost of
sending Signatures unconditionally. On the other hand, it gives servers an
indication of which kinds of signatures are supported, which can help us
upgrade the ecosystem in the future.

Is Accept-Signature the right spelling, or do we want to imitate Want-Digest
(Section 4.3.1 of ) instead?

Do I have the right structure for the identifiers indicating feature
support?

To determine whether to trust a cross-origin exchange, the client takes a
Signature header field () and the exchange’s:

- requestUrl, a byte sequence that can be parsed into the exchange’s
  effective request URI (Section 5.5 of ),
- responseHeaders, a byte sequence holding the canonical serialization ()
  of the CBOR representation () of the exchange’s response metadata and
  headers, and
- payload, a stream of bytes constituting the exchange’s payload body
  (Section 3.3 of ).

The client MUST parse the Signature header into a list of signatures
according to the instructions in , and run the following algorithm for each
signature, stopping at the first one that returns “valid”. If any signature
returns “valid”, return “valid”. Otherwise, return “invalid”.

1. If the signature’s “validity-url” parameter is not same-origin with
   requestUrl, return “invalid”.
2. Use to determine the signature’s validity for requestUrl,
   responseHeaders, and payload, getting certificate-chain back. If this
   returned “invalid” or didn’t return a certificate chain, return
   “invalid”.
3. Let response be the response metadata and headers parsed out of
   responseHeaders.
4. If Section 3 of forbids a shared cache from storing response, return
   “invalid”.
5. If response’s headers contain an uncached header field, as defined in ,
   return “invalid”.
6. Let authority be the host component of requestUrl.
7. Validate the certificate-chain using the following substeps. If any of them
   fail, re-run once over the signature with the forceFetch flag set, and
   restart from step 2. If a substep fails again, return “invalid”.
   1. Use certificate-chain to validate that its first entry,
      main-certificate, is trusted as authority’s server certificate ( and
      other undocumented conventions). Let path be the path that was used
      from the main-certificate to a trusted root, including the
      main-certificate but excluding the root.
   2. Validate that main-certificate has the CanSignHttpExchanges extension
      ().
   3. Validate that main-certificate has an ocsp property () with a valid
      OCSP response whose lifetime (nextUpdate - thisUpdate) is less than 7
      days (). Note that this does not check for revocation of
      intermediate certificates, and clients SHOULD implement another
      mechanism for that.
   4. Validate that valid SCTs from trusted logs are available from any of:
      - The SignedCertificateTimestampList in main-certificate’s sct
        property (),
      - An OCSP extension in the OCSP response in main-certificate’s ocsp
        property, or
      - An X.509 extension in the certificate in main-certificate’s cert
        property,
      as described by Section 3.3 of .

Return “valid”.

Hop-by-hop and other uncached headers MUST NOT appear in a signed exchange.
These will eventually be listed in , but for now they’re listed here:

- Hop-by-hop header fields listed in the Connection header field (Section
  6.1 of ).
- Header fields listed in the no-cache response directive in the
  Cache-Control header field (Section 5.2.2.2 of ).
- Header fields defined as hop-by-hop: Connection, Keep-Alive,
  Proxy-Connection, Trailer, Transfer-Encoding, Upgrade.
- Stateful headers as defined below.

As described in , a publisher can cause problems if they
sign an exchange that includes private information. There’s no way for a client
to be sure an exchange does or does not include private information, but header
fields that store or convey stored state in the client are a good sign.

A stateful response header field modifies state, including authentication
status, in the client. The HTTP cache is not considered part of this state.
These include but are not limited to: Authentication-Control,
Authentication-Info, Clear-Site-Data, Optional-WWW-Authenticate,
Proxy-Authenticate, Proxy-Authentication-Info, Public-Key-Pins,
Sec-WebSocket-Accept, Set-Cookie, Set-Cookie2, SetProfile,
Strict-Transport-Security, and WWW-Authenticate.

We define a new X.509 extension, CanSignHttpExchanges, to be used in the
certificate when the certificate permits the usage of signed exchanges.
When this extension is not present, the client MUST NOT accept a signature
from the certificate as proof that a signed exchange is authoritative for a
domain covered by the certificate. When it is present, the client MUST
follow the validation procedure in .

Note that this extension contains an ASN.1 NULL (bytes 05 00) because some
implementations have bugs with empty extensions.

Leaf certificates without this extension need to be revoked if the private
key is exposed to an unauthorized entity, but they generally don’t need to
be revoked if a signing oracle is exposed and then removed.

CA certificates, by contrast, need to be revoked if an unauthorized entity
is able to make even one unauthorized signature.

Certificates with this extension MUST be revoked if an unauthorized entity
is able to make even one unauthorized signature.

Conforming CAs MUST NOT mark this extension as critical.

Clients MUST NOT accept certificates with this extension in TLS connections
(Section 4.4.2.2 of ).

RFC EDITOR PLEASE DELETE THE REST OF THE PARAGRAPHS IN THIS SECTION:
Implementations of drafts of this specification MAY recognize the
id-ce-canSignHttpExchangesDraft OID as identifying the CanSignHttpExchanges
extension. This OID might or might not be used as the final OID for the
extension, so certificates including it might need to be reissued once the
final RFC is published.

A signed exchange can be transferred in several ways, of which three are
described here.The signature for a signed exchange can be included in a normal HTTP response.
Because different clients send different request header fields, clients don’t
know how the server’s content negotiation algorithm works, and intermediate
servers add response header fields, it can be impossible to have a signature
for the exchange’s exact request, content negotiation, and response.
Therefore, when a client calls the validation procedure in to validate the
Signature header field for an exchange represented as a normal HTTP
request/response pair, it MUST pass:

*  The Signature header field,

*  The effective request URI (Section 5.5 of ) of the request,

*  The serialized headers defined by , and

*  The response’s payload.

If the client relies on signature validity for any aspect of its behavior, it
MUST ignore any header fields that it didn’t pass to the validation procedure.

If the signed response includes a Variants header field, the client MUST use
the cache behavior algorithm in Section 4 of to check that the signed
response is an appropriate representation for the request the client is
trying to fulfil. If the response is not an appropriate representation, the
client MUST treat the signature as invalid.

The serialized headers of an exchange represented as a normal HTTP
request/response pair (Section 2.1 of or Section 8.1 of
) are the canonical serialization () of the CBOR
representation () of the response status code (Section 6
of ) and the response header fields whose names are listed in that
response’s Signed-Headers header field (). If a response
header field name from Signed-Headers does not appear in the response’s header
fields, the exchange has no serialized headers.

If the exchange’s Signed-Headers header field is not present, doesn’t parse
as a Structured Header () or doesn’t follow the constraints on its value
described in , the exchange has no serialized headers.

Issue: Do the serialized headers of an exchange need to include the
Signed-Headers header field itself?

The Signed-Headers header field identifies an ordered list of response header
fields to include in a signature. The request URL and response status are
included unconditionally. This allows a TLS-terminating intermediate to reorder
headers without breaking the signature. This can also allow the intermediate
to add headers that will be ignored by some higher-level protocols, but
provides a hook to let other higher-level protocols
reject such insecure headers.

This header field appears once instead of being incorporated into the
signatures’ parameters because the signed header fields need to be consistent
across all signatures of an exchange, to avoid forcing higher-level protocols
to merge the header field lists of valid signatures.

Signed-Headers is a Structured Header as defined by . Its value MUST be a
list (Section 3.2 of ). Its ABNF is:

Each element of the Signed-Headers list must be a lowercase string (Section
3.8 of ) naming an HTTP response header
field. Pseudo-header field names (Section 8.1.2.1 of ) MUST NOT
appear in this list.

Higher-level protocols SHOULD place requirements on the minimum set of
headers to include in the Signed-Headers header field.

To allow servers to Server-Push (Section 8.2 of ) signed exchanges ()
signed by an authority for which the server is not authoritative
(Section 9.1 of ), this section defines an HTTP/2 extension.

Clients that might accept signed Server Pushes with an authority for which the
server is not authoritative indicate this using the HTTP/2 SETTINGS parameter
ENABLE_CROSS_ORIGIN_PUSH (0xSETTING-TBD).

An ENABLE_CROSS_ORIGIN_PUSH value of 0 indicates that the client does not
support cross-origin Push. A value of 1 indicates that the client does
support cross-origin Push.

A client MUST NOT send an ENABLE_CROSS_ORIGIN_PUSH setting with a value other
than 0 or 1, or a value of 0 after previously sending a value of 1. If a
server receives a value that violates these rules, it MUST treat it as a
connection error (Section 5.4.1 of ) of type PROTOCOL_ERROR.

The use of a SETTINGS parameter to opt in to an otherwise incompatible protocol
change is a use of “Extending HTTP/2” defined by Section 5.5 of . If
a server were to send a cross-origin Push without first receiving a
ENABLE_CROSS_ORIGIN_PUSH setting with the value of 1, it would be a protocol
violation.

The signatures on a Pushed cross-origin exchange may be untrusted for several
reasons, for example that the certificate could not be fetched, that the
certificate does not chain to a trusted root, that the signature itself doesn’t
validate, that the signature is expired, etc. This draft conflates all of these
possible failures into one error code, NO_TRUSTED_EXCHANGE_SIGNATURE
(0xERROR-TBD).

Issue: How fine-grained should this specification’s error codes be?

If the client has set the ENABLE_CROSS_ORIGIN_PUSH setting to 1, the server
MAY Push a signed exchange for which it is not authoritative, and the client
MUST NOT treat a PUSH_PROMISE for which the server is not authoritative as a
stream error (Section 5.4.2 of ) of type PROTOCOL_ERROR, as described in
Section 8.2 of , unless there is another error as described below.

Instead, the client MUST validate such a PUSH_PROMISE and its response
against the following list:

*  If the PUSH_PROMISE includes any non-pseudo request header fields, the
   client MUST treat it as a stream error (Section 5.4.2 of ) of type
   PROTOCOL_ERROR.

*  If the PUSH_PROMISE’s method is not GET, the client MUST treat it as a
   stream error (Section 5.4.2 of ) of type PROTOCOL_ERROR.

*  Run the algorithm in over:

   *  The Signature header field from the response.

   *  The effective request URI from the PUSH_PROMISE.

   *  The canonical serialization () of the CBOR representation () of the
      pushed response’s status and its headers except for the Signature
      header field.

   *  The response’s payload.
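The canonical CBOR serialization used for the third of these inputs can be sketched in code. This stdlib-only Python sketch assumes — as this draft’s canonical-serialization rules describe — that the status and headers are represented as a CBOR map with byte-string keys, with the status under a `:status` key; the header values below are hypothetical:

```python
def cbor_head(major: int, n: int) -> bytes:
    """Shortest-form head for a CBOR item, as canonical CBOR requires."""
    if n < 24:
        return bytes([(major << 5) | n])
    for addl, size in ((24, 1), (25, 2), (26, 4), (27, 8)):
        if n < 1 << (8 * size):
            return bytes([(major << 5) | addl]) + n.to_bytes(size, "big")
    raise ValueError("item too large")

def cbor_bytes(b: bytes) -> bytes:
    return cbor_head(2, len(b)) + b  # major type 2: byte string

def cbor_map(pairs: dict) -> bytes:
    # Canonical key order: encoded keys sorted by length, then bytewise.
    items = sorted(pairs.items(),
                   key=lambda kv: (len(cbor_bytes(kv[0])), cbor_bytes(kv[0])))
    body = b"".join(cbor_bytes(k) + cbor_bytes(v) for k, v in items)
    return cbor_head(5, len(pairs)) + body  # major type 5: map

# Hypothetical pushed response; the Signature header field is excluded.
signed_headers = cbor_map({
    b":status": b"200",
    b"content-type": b"text/html; charset=utf-8",
    b"digest": b"mi-sha256=<elided>",
})
```

Canonical CBOR (shortest-form heads, keys sorted by encoded length and then bytewise) makes the serialization deterministic, so the signer and the verifying client produce identical bytes.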
If this returns “invalid”, the client MUST treat the response as a stream error
(Section 5.4.2 of ) of type NO_TRUSTED_EXCHANGE_SIGNATURE.
Otherwise, the client MUST treat the pushed response as if the server were
authoritative for the PUSH_PROMISE’s authority.

Issue: Is it right that “validity-url” is required to be same-origin with the
exchange? This allows the mitigation against downgrades in , but prohibits
intermediates from providing a cache of the validity information. We could do
both with a list of URLs.

To allow signed exchanges to be the targets of <link rel=prefetch> tags, we
define the application/signed-exchange content type that represents a signed
HTTP exchange, including a request URL, response metadata
and header fields, and a response payload.

When served over HTTP, a response containing an application/signed-exchange
payload MUST include at least the following response header fields, to reduce
content sniffing vulnerabilities ():

   Content-Type: application/signed-exchange;v=version
   X-Content-Type-Options: nosniff

This content type consists of the concatenation of the following items:

*  8 bytes consisting of the ASCII characters “sxg1” followed by 4 0x00
   bytes, to serve as a file signature. This is redundant with the MIME
   type, and recipients that receive both MUST check that they match and
   stop parsing if they don’t.
Note: RFC EDITOR PLEASE DELETE THIS NOTE; The implementation of the final RFC
MUST use this file signature, but implementations of drafts MUST NOT use it
and MUST use another implementation-specific 8-byte string beginning with
“sxg1-“.

*  2 bytes storing a big-endian integer fallbackUrlLength.

*  fallbackUrlLength bytes holding a fallbackUrl, which MUST be an absolute
   URL with a scheme of “https”.
Note: The byte location of the fallback URL is intended to remain invariant
across versions of the application/signed-exchange format so that parsers
encountering unknown versions can always find a URL to redirect to.
Issue: Should this fallback information also include the method?

*  3 bytes storing a big-endian integer sigLength. If this is larger than
   16384 (16*1024), parsing MUST fail.

*  3 bytes storing a big-endian integer headerLength. If this is larger
   than 524288 (512*1024), parsing MUST fail.

*  sigLength bytes holding the Signature header field’s value ().

*  headerLength bytes holding signedHeaders, the canonical serialization ()
   of the CBOR representation of the response headers of the exchange
   represented by the application/signed-exchange resource (), excluding
   the Signature header field.

*  The payload body (Section 3.3 of ) of the exchange represented by the
   application/signed-exchange resource.
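Read in order, these fields admit a simple parser. The following stdlib-only Python sketch (names and error handling are illustrative, and it checks the final RFC’s “sxg1” file signature, which drafts replace with an implementation-specific string) extracts the prefix fields:

```python
def parse_prefix(data: bytes):
    # File signature: "sxg1" followed by four 0x00 bytes.
    if data[:8] != b"sxg1" + b"\x00" * 4:
        raise ValueError("bad file signature")
    offset = 8
    fallback_url_length = int.from_bytes(data[offset:offset + 2], "big")
    offset += 2
    fallback_url = data[offset:offset + fallback_url_length].decode("utf-8")
    if not fallback_url.startswith("https://"):
        raise ValueError("fallbackUrl must be an absolute https URL")
    offset += fallback_url_length
    sig_length = int.from_bytes(data[offset:offset + 3], "big")
    header_length = int.from_bytes(data[offset + 3:offset + 6], "big")
    offset += 6
    if sig_length > 16384 or header_length > 524288:
        raise ValueError("length limit exceeded")
    signature = data[offset:offset + sig_length]
    signed_headers = data[offset + sig_length:
                          offset + sig_length + header_length]
    payload = data[offset + sig_length + header_length:]
    return fallback_url, signature, signed_headers, payload
```

Note that the position of the fallback URL depends only on the first 10 bytes, which is what lets a parser that encounters an unknown version still find a URL to redirect to.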
Note that the use of the payload body here means that a Transfer-Encoding
header field inside the application/signed-exchange header block has no
effect. A Transfer-Encoding header field on the outer HTTP response that
transfers this resource still has its normal effect.

To determine whether to trust a cross-origin exchange stored in an
application/signed-exchange resource, pass the Signature header field’s
value, fallbackUrl as the effective request URI, signedHeaders, and the
payload body to the algorithm in .

An example application/signed-exchange file representing a possible signed
exchange with https://example.com/ follows, with lengths represented by
descriptions in <>s, CBOR represented in the extended diagnostic format
defined in Appendix G of , and most of the Signature header field and
payload elided with a …:

Issue: Should this be a CBOR format, or is the current mix of binary and
CBOR better?

Issue: Are the mime type, extension, and magic number right?

If a publisher blindly signs all responses as their origin, they can cause at
least two kinds of problems, described below. To avoid this, publishers SHOULD
design their systems to opt particular public content that doesn’t depend on
authentication status into signatures instead of signing by default.

Signing systems SHOULD also incorporate the following mitigations to reduce
the risk that private responses are signed:

*  Strip the Cookie request header field and other identifying information,
   like client authentication and TLS session IDs, from requests whose
   exchange is destined to be signed, before forwarding the request to a
   backend.

*  Only sign exchanges where the response includes a Cache-Control: public
   header. Clients are not required to fail signature-checking for
   exchanges that omit this Cache-Control response header field, to reduce
   the risk that naïve signing systems blindly add it.

Blind signing can sign responses that create session cookies or otherwise change
state on the client to identify a particular session. This breaks certain kinds
of CSRF defense and can allow an attacker to force a user into the attacker’s
account, where the user might unintentionally save private information, like
credit card numbers or addresses.

This specification defends against cookie-based attacks by blocking the
Set-Cookie response header field, but it cannot prevent JavaScript or other
response content from changing state.

If a site signs private information, an attacker might set up their own
account to show particular private information, forward that signed
information to a victim, and use that victim’s confusion in a more
sophisticated attack.

Stripping authentication information from requests before sending them to
backends is likely to prevent the backend from showing attacker-specific
information in the signed response. It does not prevent the attacker from
showing their victim a signed-out page when the victim is actually signed in,
but while this is still misleading, it seems less likely to be useful to the
attacker.

Relaxing the requirement to consult DNS when determining authority for an
origin means that an attacker who possesses a valid certificate no longer
needs to be on-path to redirect traffic to them; instead of modifying DNS,
they need only convince the user to visit another Web site in order to serve
responses signed as the target. This consideration and mitigations for it
are shared by the combination of and .

Signing a bad response can affect more users than simply serving a bad response,
since a served response will only affect users who make a request while the bad
version is live, while an attacker can forward a signed response until its
signature expires. Publishers should consider shorter signature expiration times
than they use for cache expiration times.

Clients MAY also check the “validity-url” of an exchange more often than the
signature’s expiration would require. Doing so for an exchange with an HTTPS
request URI provides a TLS guarantee that the exchange isn’t out of date (as
long as is resolved to keep the same-origin requirement).

An attacker with temporary access to a signing oracle can sign “still valid”
assertions with arbitrary timestamps and expiration times. As a result, when a
signing oracle is removed, the keys it provided access to MUST be revoked so
that, even if the attacker used them to sign future-dated exchange validity
assertions, the key’s OCSP assertion will expire, causing the exchange as a
whole to become untrusted.

The use of a single Signed-Headers header field prevents us from signing
aspects of the request other than its effective request URI (Section 5.5 of
). For example, if a publisher signs both Content-Encoding: br and
Content-Encoding: gzip variants of a response, what’s the impact if an
attacker serves the brotli one for a request with Accept-Encoding: gzip?
This is mitigated by using instead of request headers to describe how the
client should run content negotiation.

The simple form of Signed-Headers also prevents us from signing less than
the full request URL. The SRI use case () may benefit from being able to
leave the authority less constrained.

can succeed when some delivered headers aren’t included
in the signed set. This accommodates current TLS-terminating intermediates and
may be useful for SRI (), but is risky for trusting cross-origin
responses (, , and
). requires all headers to be
included in the signature before trusting cross-origin pushed resources, at Ryan
Sleevi’s recommendation.

Clients MUST NOT trust an effective request URI claimed by an
application/signed-exchange resource () without either ensuring the resource
was transferred from a server that was authoritative (Section 9.1 of ) for
that URI’s origin, or calling the algorithm in and getting “valid” back.

In general, key re-use across multiple protocols is a bad idea.

Using an exchange-signing key in a TLS (or other directly-internet-facing)
server increases the risk that an attacker can steal the private key, which will
allow them to mint packages (similar to ) until their
theft is discovered.

Using a TLS key in a CanSignHttpExchanges certificate makes it less likely
that the server operator will discover key theft, due to the considerations
in .

This specification uses the CanSignHttpExchanges X.509 extension () to
discourage re-use of TLS keys to sign exchanges or vice versa.

We require that clients reject certificates with the CanSignHttpExchanges
extension when making TLS connections to minimize the chance that servers will
re-use keys like this. Ideally, we would make the extension critical so that
even clients that don’t understand it would reject such TLS connections, but
this proved impossible because certificate-validating libraries ship on
significantly different schedules from the clients that use them.

Even once all clients reject these certificates in TLS connections, this
will still just discourage and not prevent key re-use, since a server
operator can unwisely request two different certificates with the same
private key.

While modern browsers tend to trust the Content-Type header sent with a
resource, especially when accompanied by X-Content-Type-Options: nosniff,
plugins will sometimes search for executable content buried inside a resource
and execute it in the context of the origin that served the resource, leading to
XSS vulnerabilities. For example, some PDF reader plugins look for %PDF
anywhere in the first 1kB and execute the code that follows it.

The application/signed-exchange format () includes a URL and response
headers early in the format, which an attacker could use to cause these
plugins to sniff a bad content type.

To avoid vulnerabilities, in addition to the response header requirements in
, servers are advised to only serve an application/signed-exchange resource
(SXG) from a domain if it would also be safe for that domain to serve the
SXG’s content directly, and to follow at least one of the following
strategies:

*  Only serve signed exchanges from dedicated domains that don’t have
   access to sensitive cookies or user storage.

*  Generate signed exchanges “offline”, that is, in response to a trusted
   author submitting content or existing signatures reaching a certain age,
   rather than in response to untrusted-reader queries.

*  Do all of:
   *  If the SXG’s fallback URL () is derived from the request URL,
      percent-encode () any bytes that are greater than 0x7E or are not URL
      code points () in the fallback URL. It is particularly important to
      make sure no unescaped nulls (0x00) or angle brackets (0x3C and 0x3E)
      appear.

   *  Do not reflect request header fields into the set of response headers.

There are still a few binary length fields that an attacker may influence to
contain sensitive bytes, but they’re always followed by lowercase alphabetic
strings from a small set of possibilities, which reduces the chance that a
client will sniff them as indicating a particular content type.

To encourage servers to include the X-Content-Type-Options: nosniff header
field, clients SHOULD reject signed exchanges served without it.

Normally, when a client fetches https://o1.com/resource.js,
o1.com learns that the client is interested in the resource. If
o1.com signs resource.js, o2.com serves it as
https://o2.com/o1resource.js, and the client fetches it from there,
then o2.com learns that the client is interested, and if the client
executes the Javascript, that could also report the client’s interest back to
o1.com.

Often, o2.com already knew about the client’s interest, because it’s the
entity that directed the client to o1resource.js, but there may be cases
where this leaks extra information.

For non-executable resource types, a signed response can improve the privacy
situation by hiding the client’s interest from the original publisher.

To prevent network operators other than o1.com or o2.com from learning which
exchanges were read, clients SHOULD only load exchanges fetched over a transport
that’s protected from eavesdroppers. This can be difficult to determine when the
exchange is being loaded from local disk, but when the client itself requested
the exchange over a network it SHOULD require TLS () or a
successor transport layer, and MUST NOT accept exchanges transferred over plain
HTTP without TLS.

TODO: possibly register the validity-url format.

This section registers the Signature header field in the “Permanent Message
Header Field Names” registry ().

   Header field name: Signature
   Applicable protocol: http
   Status: standard
   Author/Change controller: IETF
   Specification document(s): of this document

This section registers the Accept-Signature header field in the “Permanent
Message Header Field Names” registry ().

   Header field name: Accept-Signature
   Applicable protocol: http
   Status: standard
   Author/Change controller: IETF
   Specification document(s): of this document

This section registers the Signed-Headers header field in the “Permanent
Message Header Field Names” registry ().

   Header field name: Signed-Headers
   Applicable protocol: http
   Status: standard
   Author/Change controller: IETF
   Specification document(s): of this document

This section establishes an entry for the HTTP/2 Settings Registry that was
established by Section 11.3 of :

   Name: ENABLE_CROSS_ORIGIN_PUSH
   Code: 0xSETTING-TBD
   Initial Value: 0
   Specification: This document

This section establishes an entry for the HTTP/2 Error Code Registry that
was established by Section 11.4 of :

   Name: NO_TRUSTED_EXCHANGE_SIGNATURE
   Code: 0xERROR-TBD
   Description: The client does not trust the signature for a cross-origin
   Pushed signed exchange.
   Specification: This document

Type name: application

Subtype name: signed-exchange

Required parameters:

*  v: A string denoting the version of the file format. ( ABNF:
version = DIGIT/%x61-7A) The version defined in this specification is 1.
When used with the Accept header field (Section 5.3.1 of ), this
parameter can be a comma (,)-separated list of version strings. (
ABNF: version-list = version *( "," version )) The server is then expected
to reply with a resource using a particular version from that list.
Note: RFC EDITOR PLEASE DELETE THIS NOTE; Implementations of drafts of this
specification MUST NOT use simple integers to describe their versions, and
MUST instead define implementation-specific strings to identify which draft is
implemented. The newest version of
describes the meaning of
one such string.

Optional parameters: N/A

Encoding considerations: binary

Security considerations: see 

Interoperability considerations: N/A

Published specification: This specification (see ).

Applications that use this media type: N/A

Fragment identifier considerations: N/A

Additional information:

   Deprecated alias names for this type: N/A
   Magic number(s): 73 78 67 31 00
   File extension(s): .sxg
   Macintosh file type code(s): N/A

Person and email address to contact for further information: See Authors’
Addresses section.

Intended usage: COMMON

Restrictions on usage: N/A

Author: See Authors’ Addresses section.

Change controller: IESG

Type name: application

Subtype name: cert-chain+cbor

Required parameters: N/A

Optional parameters: N/A

Encoding considerations: binary

Security considerations: N/A

Interoperability considerations: N/A

Published specification: This specification (see ).

Applications that use this media type: N/A

Fragment identifier considerations: N/A

Additional information:

   Deprecated alias names for this type: N/A
   Magic number(s): 1*9(??) 67 F0 9F 93 9C E2 9B 93
   File extension(s): N/A
   Macintosh file type code(s): N/A

Person and email address to contact for further information: See Authors’
Addresses section.

Intended usage: COMMON

Restrictions on usage: N/A

Author: See Authors’ Addresses section.

Change controller: IESG

References

*  Fetch. WHATWG.
*  The Open Group Base Specifications Issue 7. IEEE and The Open Group.
*  URL. WHATWG.
*  Internet X.509 Public Key Infrastructure Certificate and Certificate
   Revocation List (CRL) Profile.
*  Hypertext Transfer Protocol (HTTP/1.1): Message Syntax and Routing.
*  HTTP Representation Variants.
*  Hypertext Transfer Protocol Version 2 (HTTP/2).
*  Key words for use in RFCs to Indicate Requirement Levels.
*  Ambiguity of Uppercase vs Lowercase in RFC 2119 Key Words.
*  Structured Headers for HTTP.
*  Edwards-Curve Digital Signature Algorithm (EdDSA).
*  Concise Binary Object Representation (CBOR).
*  Concise data definition language (CDDL): a notational convention to
   express CBOR and JSON data structures.
*  X.509 Internet Public Key Infrastructure Online Certificate Status
   Protocol - OCSP.
*  Certificate Transparency.
*  Hypertext Transfer Protocol (HTTP/1.1): Caching.
*  The Transport Layer Security (TLS) Protocol Version 1.3.
*  Hypertext Transfer Protocol (HTTP/1.1): Semantics and Content.
*  Merkle Integrity Content Encoding.
*  Instance Digests in HTTP.
*  Registration Procedures for Message Header Fields.
*  Augmented BNF for Syntax Specifications: ABNF.
*  Subresource Integrity.
*  Use Cases and Requirements for Web Packages.
*  The Web Origin Concept.
*  HTTP Caching.
*  HTTP Authentication Extensions for Interactive Clients.
*  HTTP Authentication-Info and Proxy-Authentication-Info Response Header
   Fields.
*  Clear Site Data.
*  Hypertext Transfer Protocol (HTTP/1.1): Authentication.
*  Public Key Pinning Extension for HTTP.
*  The WebSocket Protocol.
*  HTTP State Management Mechanism (RFC 6265).
*  HTTP State Management Mechanism (RFC 2109).
*  Implementation of OPS Over HTTP.
*  HTTP Strict Transport Security (HSTS).
*  The ORIGIN HTTP/2 Frame.
*  Secondary Certificate Authentication in HTTP/2.
*  Signed HTTP Exchanges Implementation Checkpoints.
*  PKCS #1: RSA Cryptography Specifications Version 2.2.
*  Content-Signature Header Field for HTTP.
*  HTTP Header for digital signatures.
*  Signing HTTP Messages.
This document describes a way for servers and clients to simultaneously add authentication and message integrity to HTTP messages by using a digital signature.Transport Layer Security (TLS) Extensions: Extension DefinitionsThis document provides specifications for existing TLS extensions. It is a companion document for RFC 5246, "The Transport Layer Security (TLS) Protocol Version 1.2". The extensions specified are server_name, max_fragment_length, client_certificate_url, trusted_ca_keys, truncated_hmac, and status_request. [STANDARDS-TRACK]To reduce round trips, a server might use HTTP/2 Push (Section 8.2 of
) to inject a subresource from another server into the client’s
cache. If anything about the subresource is expired or can’t be verified, the
client would fetch it from the original server.For example, if https://example.com/index.html includesThen to avoid the need to look up and connect to jquery.com in the critical
path, example.com might push that resource signed by jquery.com.In order to speed up loading but still maintain control over its content, an
HTML page in a particular origin O.com could tell clients to load its
subresources from an intermediate content distributor that’s not authoritative,
but require that those resources be signed by O.com so that the distributor
couldn’t modify the resources. This is more constrained than the common CDN case
where O.com has a CNAME granting the CDN the right to serve arbitrary content
as O.com.To make it easier to configure the right distributor for a given request,
computation of the physical src could be encapsulated in a custom element such as <dist-img>, whose implementation generates an appropriate <img> based on,
for example, a <meta name="dist-base"> tag elsewhere in the page. However,
this has the downside that the
preloader can no
longer see the physical source to download it. The resulting delay might cancel
out the benefit of using a distributor.This could be used for some of the same purposes as SRI ().To implement this with the current proposal, the distributor would respond to
the physical request to https://distributor.com/O.com/img.png with first a
signed PUSH_PROMISE for https://O.com/img.png and then a redirect to
https://O.com/img.png.The W3C WebAppSec group is investigating
using signatures in .
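For illustration, the public key carried in an integrity="ed25519-[public-key]" attribute of this kind could be recovered with a short helper. This is only a sketch: the attribute shape follows the WebAppSec proposal, while the function name and validation details here are our own.

```python
import base64

def parse_ed25519_integrity(attr: str) -> bytes:
    """Extract the raw Ed25519 public key from a hypothetical
    integrity="ed25519-<base64 key>" attribute value."""
    prefix = "ed25519-"
    if not attr.startswith(prefix):
        raise ValueError("not an ed25519 integrity value")
    key = base64.b64decode(attr[len(prefix):], validate=True)
    if len(key) != 32:  # Ed25519 public keys are 32 bytes
        raise ValueError("wrong key length")
    return key
```

The client would then check the exchange’s signature directly against this key, with no certificate chain involved.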
They need a way to transmit the signature with the response, which this proposal
provides.Their needs are simpler than most other use cases in that the
integrity="ed25519-[public-key]" attribute and CSP-based ways of expressing a
public key don’t need that key to be wrapped into a certificate.The “ed25519key” signature parameter supports this simpler way of attaching a
key.The current proposal for signature-based SRI describes signing only the content
of a resource, while this specification requires them to sign the request URI as
well. This issue is tracked in
https://github.com/mikewest/signature-based-sri/issues/5. The details of what
they need to sign will affect whether and how they can use this proposal.So-called “Binary Transparency” may eventually allow users to verify that a
program they’ve been delivered is one that’s available to the public, and not a
specially-built version intended to attack just them. Binary transparency
systems don’t exist yet, but they’re likely to work similarly to the successful
Certificate Transparency logs described by .Certificate Transparency depends on Signed Certificate Timestamps that prove a
log contained a particular certificate at a particular time. To build the same
thing for Binary Transparency logs containing HTTP resources or full websites,
we’ll need a way to provide signatures of those resources, which signed
exchanges provides.Native app stores like the Apple App
Store and the Android Play
Store grant their contents powerful abilities,
which they attempt to make safe by analyzing the applications before offering
them to people. The web has no equivalent way for people to wait to run an
update of a web application until a trusted authority has vouched for it.While full application analysis probably needs to wait until the authority can
sign bundles of exchanges, authorities may be able to guarantee certain
properties by just checking a top-level resource and its -constrained
sub-resources.Fully-offline websites can be represented as bundles of signed exchanges,
although an optimization to reduce the number of signature verifications may be
needed. Work on this is in progress in the https://github.com/WICG/webpackage
repository.To verify that a thing came from a particular origin, for use in the same
context as a TLS connection, we need someone to vouch for the signing key with
as much verification as the signing keys used in TLS. The obvious way to do this
is to re-use the web PKI and CA ecosystem.If we re-use existing TLS server certificates, we incur the risks that:TLS server certificates must be accessible from online servers, so they’re
easier to steal or use as signing oracles than an offline key. An exchange’s
signing key doesn’t need to be online.A server using an origin-trusted key for one purpose (e.g. TLS) might
accidentally sign something that looks like an exchange, or vice versa.These risks are considered too high, so we define a new X.509 certificate
extension in that requires CAs to issue new
certificates for this purpose. We expect at least one low-cost CA to be willing
to sign certificates with this extension.In order to prevent an attacker who can convince the server to sign some
resource from causing those signed bytes to be interpreted as something else, the
new X.509 extension defined here is forbidden from being used in TLS servers. If
changes to allow re-use in TLS servers, we would need
to:Avoid key types that are used for non-TLS protocols whose output could be
confused with a signature. That may be just the rsaEncryption OID from
.Use the same format as TLS’s signatures, specified in Section 4.4.3 of
, with a context string that’s specific to this use.The specification also needs to define which signing algorithm to use. It
currently specifies that as a function from the key type, instead of allowing
attacker-controlled data to specify it.The client needs to be able to find the certificate vouching for the signing
key, a chain from that certificate to a trusted root, and possibly other trust
information like SCTs (). One approach would be to include the
certificate and its chain in the signature metadata itself, but this wastes
bytes when the same certificate is used for multiple HTTP responses. If we
decide to put the signature in an HTTP header, certificates are also unusually
large for that context.Another option is to pass a URL that the client can fetch to retrieve the
certificate and chain. To avoid extra round trips in fetching that URL, it could
be bundled with the signed content or
PUSHed with it. The risks from the
client_certificate_url extension (Section 11.3 of ) don’t seem to
apply here, since an attacker who can get a client to load an exchange and fetch
the certificates it references, can also get the client to perform those fetches
by loading other HTML.To avoid using an unintended certificate with the same public key as the
intended one, the content of the leaf certificate or the chain should be
included in the signed data, like TLS does (Section 4.4.3 of
).The previous and
schemes signed just the content, while
() could also sign the response headers and the
request method and path. However, the same path, response headers, and content
may mean something very different when retrieved from a different server.
currently includes the whole request URL in the
signature, but it’s possible we need a more flexible scheme to allow some
higher-level protocols to accept a less-signed URL.Servers might want to sign other request headers in order to capture their
effects on content negotiation. However, there’s no standard algorithm to check
that a client’s actual request headers match request headers sent by a server.
The most promising attempt at this is , which
encodes the content negotiation algorithm into the Variants and Variant-Key
response headers. The proposal here () assumes that mechanism is in use and
doesn’t sign request headers.HTTP headers are traditionally munged by proxies, making it impossible to
guarantee that the client will see the same sequence of bytes as the publisher
published. In the HTTPS world, we have more end-to-end header integrity, but
it’s still likely that there are enough TLS-terminating proxies that the
publisher’s signatures would tend to break before getting to the client.There’s no way in current HTTP for the response to a client-initiated request
(Section 8.1 of ) to convey the request headers it expected to
respond to, but we sidestep that by conveying content negotiation information in
response headers, per .Since proxies are unlikely to modify unknown content types, we can wrap the
original exchange into an application/signed-exchange format
() and include the Cache-Control: no-transform
header when sending it.To reduce the likelihood of accidental modification by proxies, the
application/signed-exchange format includes a file signature that doesn’t
collide with other known signatures.To help the PUSHed subresources use case (), we might
also want to extend the PUSH_PROMISE frame type to include a signature, and
that could tell intermediates not to change the ensuing headers.A normal HTTPS response is authoritative only for one client, for as long as its
cache headers say it should live. A signed exchange can be re-used for many
clients, and if it was generated while a server was compromised, it can continue
compromising clients even if their requests happen after the server recovers.
This signing scheme needs to mitigate that risk.Certificates are mis-issued and private keys are stolen, and in response clients
need to be able to stop trusting these certificates as promptly as possible.
Online revocation
checks don’t work, so
the industry has moved to pushed revocation lists and stapled OCSP responses
.Pushed revocation lists work as-is to block trust in the certificate signing an
exchange, but the signatures need an explicit strategy to staple OCSP responses.
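Whatever stapling mechanism is chosen, a client still has to confirm that a stapled OCSP response is inside its validity window. A minimal sketch of that freshness check (the function and its parameters are illustrative; real OCSP responses carry thisUpdate/nextUpdate in DER-encoded ASN.1):

```python
from datetime import datetime, timedelta, timezone

def ocsp_is_fresh(this_update: datetime,
                  next_update: datetime,
                  now: datetime,
                  skew: timedelta = timedelta(minutes=5)) -> bool:
    """Accept a stapled OCSP response only inside its
    thisUpdate..nextUpdate window, allowing modest clock skew."""
    return this_update - skew <= now <= next_update + skew
```

Short stapled-response lifetimes bound how long a revoked certificate can keep vouching for old exchanges.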
One option is to extend the certificate download () to
include the OCSP response too, perhaps in the
TLS 1.3 CertificateEntry format.The signed content in a response might be vulnerable to attacks, such as XSS, or
might simply be discovered to be incorrect after publication. Once the author
fixes those vulnerabilities or mistakes, clients should stop trusting the old
signed content in a reasonable amount of time. Similar to certificate
revocation, I expect the best option to be stapled “this version is still valid”
assertions with short expiration times.These assertions could be structured as:A signed minimum version number or timestamp for a set of request headers:
This requires that signed responses need to include a version number or
timestamp, but allows a server to provide a single signature covering all
valid versions.A replacement for the whole exchange’s signature. This requires the publisher to
separately re-sign each valid version and requires each version to include a
different update URL, but allows intermediates to serve less data. This is
the approach taken in .A replacement for the exchange’s signature and an update for the embedded
expires and related cache-control HTTP headers . This naturally
extends publishers’ intuitions about cache expiration and the existing cache
revalidation behavior to signed exchanges. This is sketched and its downsides
explored in .The signature also needs to include instructions to intermediates for how to
fetch updated validity assertions.Simpler implementations are, all else being equal, less likely to include bugs. This
section describes decisions that were made in the rest of the specification to
reduce complexity.In general, we’re trying to eliminate unnecessary choices in the specification.
For example, instead of requiring clients to support two methods for verifying
payload integrity, we only require one.Clients can be designed with a more-trusted network layer that decides how to
trust resources and then provides those resources to less-trusted rendering
processes along with handles to the storage and other resources they’re allowed
to access. If the network layer can enforce that it only operates on chunks of
data up to a certain size, it can avoid the complexity of spooling large files
to disk.To allow the network layer to verify signed exchanges using a bounded amount of
memory, requires the signature to be less than
16kB and the headers to be less than 512kB, and requires
that the MI record size be less than 16kB. This allows the network layer to
validate a bounded chunk at a time, and pass that chunk on to a renderer, and
then forget about that chunk before processing the next one.The Digest header field from requires the network layer to buffer
the entire response body, so it’s disallowed.This draft could expire signature validity using the normal HTTP cache control
headers () instead of embedding an expiration date in the signature
itself. This section specifies how that would work, and describes why I haven’t
chosen that option.The signatures in the Signature header field () would no
longer contain “date” or “expires” fields.The validity-checking algorithm () would initialize date
from the resource’s Date header field (Section 7.1.1.2 of ) and
initialize expires from either the Expires header field (Section 5.3 of
) or the Cache-Control header field’s max-age directive (Section
5.2.2.8 of ) (added to date), whichever is present, preferring
max-age (or failing) if both are present.Validity updates () would include a list of replacement
response header fields. For each header field name in this list, the client
would remove matching header fields from the stored exchange’s response header
fields. Then the client would append the replacement header fields to the stored
exchange’s response header fields.For example, an update listing replacement Expires and Cache-Control header fields would cause the client to drop the stored exchange’s existing Expires and Cache-Control header fields and append the new values, leaving the rest of the response header fields unchanged.In an exchange with multiple signatures, using cache control to expire
signatures forces all signatures to initially live for the same period. Worse,
the update from one signature’s “validity-url” might not match the update for
another signature. Clients would need to maintain a current set of headers for
each signature, and then decide which set to use when actually parsing the
resource itself.This need to store and reconcile multiple sets of headers for a single signed
exchange argues for embedding a signature’s lifetime into the signature.RFC EDITOR PLEASE DELETE THIS SECTION.draft-05Define absolute URLs, and limit the schemes each instance can use.Fill in TBD size limits.Update to mice-03 including the Digest header.Refer to draft-yasskin-httpbis-origin-signed-exchanges-impl for draft version
numbers.Require exchange’s response to be cacheable by a shared cache.Define the “integrity” field of the Signature header to include subfields of
the main integrity-protecting header, including the digest algorithm.Put a fallback URL at the beginning of the application/signed-exchange
format, which replaces the ‘:url’ key from the CBOR representation of the
exchange’s request and response metadata and headers.Remove the rest of the request headers from the signed data, in favor of
representing content negotiation with the Variants response header.Make the signed message format a concatenation of byte sequences, which helps
implementations avoid re-serializing the exchange’s request and response
metadata and headers.Explicitly check the response payload’s integrity instead of assuming the
client did it elsewhere in processing the response.Reject uncached header fields.Update to draft-ietf-httpbis-header-structure-09.Update to the final TLS 1.3 RFC.draft-04Update to draft-ietf-httpbis-header-structure-06.Replace the application/http-exchange+cbor format with a simpler
application/signed-exchange format that:
Doesn’t require a streaming CBOR parser to parse it from a network stream.Doesn’t allow request payloads or response trailers, which don’t fit into
the signature model.Allows checking the signature before parsing the exchange headers.Require absolute URLs.Make all identifiers in headers lower-case, as required by Structured Headers.Switch back to the TLS 1.3 signature format.Include the version and draft number in the signature context string.Remove support for integrity protection using the Digest header field.Limit the record size in the mi-sha256 encoding.Forbid RSA keys, and only require clients to support secp256r1 keys.Add a test OID for the CanSignHttpExchanges X.509 extension.draft-03Allow each method of transferring an exchange to define which headers are
signed, have the cross-origin methods use all headers, and remove the
allResponseHeaders flag.Describe footguns around signing private content, and block certain headers to
make it less likely.Define a CBOR structure to hold the certificate chain instead of re-using the
TLS 1.3 message. The TLS 1.3 parser fails on unexpected extensions while this
format should ignore them, and apparently TLS implementations don’t expose
their message parsers enough to allow passing a message to a certificate
verifier.Require an X.509 extension for the signing certificate.draft-02Signatures identify a header (e.g. Digest or MI) to guard the payload’s
integrity instead of directly signing over the payload.The validityUrl is signed.Use CBOR maps where appropriate, and define how they’re canonicalized.Remove the update.url field from signature validity updates, in favor of just
re-fetching the original request URL.Define an HTTP/2 extension to use a setting to enable cross-origin Server
Push.Define an Accept-Signature header to negotiate whether to send Signatures
and which ones.Define an application/http-exchange+cbor format to fetch signed exchanges
without HTTP/2 Push.2 new use cases.Thanks to Devin Mullins, Ilari Liusvaara, Justin Schuh, Mark Nottingham, Mike
Bishop, Ryan Sleevi, and Yoav Weiss for comments that improved this draft.