IESG Narrative Minutes
Narrative Minutes of the IESG Teleconference on 2015-01-22. These are not an official record of the meeting.
Narrative scribe: John Leslie and Susan Hares (The scribe was sometimes uncertain who was speaking.)
Corrections from: (none)
2. Protocol Actions
2.1 WG Submissions
2.1.1 New Items
2.1.2 Returning Items
2.2 Individual Submissions
2.2.1 New Items
2.2.2 Returning Items
2.3 Status Changes
2.3.1 New Items
2.3.2 Returning Items
3. Document Actions
3.1 WG Submissions
3.1.1 New Items
3.1.2 Returning Items
3.2 Individual Submissions Via AD
3.2.1 New Items
3.2.2 Returning Items
3.3 Status Changes
3.3.1 New Items
3.3.2 Returning Items
3.4 IRTF and Independent Submission Stream Documents
3.4.1 New Items
3.4.2 Returning Items
1225 EST no break
4 Working Group Actions
4.1 WG Creation
4.1.1 Proposed for IETF Review
4.1.2 Proposed for Approval
4.2 WG Rechartering
4.2.1 Under evaluation for IETF Review
4.2.2 Proposed for Approval
5. IAB News We Can Use
6. Management Issues
7. Agenda Working Group News
1252 EST entered Executive Session
(at 2015-01-22 07:33:02 PST)
- I had a discuss on DTLS1.0 as the MTI. I'm told that was decided by the WG last Nov in consultation with WebRTC and TLS WG chairs, so while I'd prefer to see DTLS1.2 as the MTI, I've cleared the DISCUSS. - Figure 1: Couldn't ICE/UDP be somewhat confusing for someone unaware that ICE is more of an algorithm than a wire protocol? Might be nice to clarify that here in the intro. (If you want to be nice; if you don't, that's ok too and can be the right decision.) - section 3: Isn't "complete SCTP packet" a teeny bit ambiguous? It could mean including the IP and other lower headers, but I guess you do not. But that's a nit since it's probably clear enough that you don't put an IP or layer 2 header into the DTLS payload:-) - Given heartbleed, and the use here of RFC6520, I think some note of that famous implementation bug would be wise. Just a pointer to how to not have that problem. But it's not a protocol bug so I'm not trying to insist, i.e. no need for us to argue the toss on this:-) - I'm also wondering if the text here on 6520 is sufficiently clear given this week's discussion of that on the rtcweb list. (I'm not on tsvwg@ so would appreciate an update on how the thread pans out on the tsvwg list before we approve this.) https://www.ietf.org/mail-archive/web/rtcweb/current/msg14069.html
The DTLS implementation MUST support DTLS 1.0 [RFC4347] and SHOULD support the most recently published version of DTLS, which is DTLS 1.2 [RFC6347] as of December 2014. December 2014 is wrong.
I agree that Stephen's DISCUSS needs to be sorted out. I've a couple of minor comments on a paragraph in Section 1: This encapsulation of SCTP over DTLS over or UDP or ICE/UDP (see [RFC5245]) can provide a NAT traversal solution together with confidentiality, source authentication, and integrity protected transfers. Is there a protocol missing before the first "or", or does the first "or" need to be deleted (the latter, I think)? The phrase "together with" implies that something else is needed (as in "X, together with Y, provides Z"). Does the sentence mean to say that <this encapsulation> can provide [a NAT traversal solution that includes confidentiality, source authentication, and integrity-protected transfers]? Or does it mean to say that <this encapsulation> can provide [a NAT traversal solution, as well as confidentiality, source authentication, and integrity-protected transfers]? I think it's one of those.
Thanks for clarifying my concerns that we aren't missing a case of upstream-assigned labels in a unicast tunnel. We only really need upstream-assigned labels if the tunnel itself is multi-access.
I think you're missing a little bit of spec needed for the MPLS with DTLS case. (Are there implementations of the DTLS stuff btw?) 1) I assume the intent is that the MPLS-with-DTLS port (TBD2) is only ever used with DTLS, in which case don't you need a MUST or MUST NOT somewhere? 2) Is a single listener on port TBD2 supposed to handle just one DTLS session with one LSR or many DTLS sessions with each LSR that's on a different host/port or something else? Don't you need to say?
The draft improved a lot and I am now at No Objection after discussing the point below: The UDP source port is randomly chosen and I guess not further retained (*) for receiving incoming MPLS-over-UDP packets. In turn this means that "returning" packets are sent to the MPLS/UDP port number. (*) nobody is listening on that UDP port. This will cause issues when NATs/firewalls are in between, as the return traffic, i.e., the traffic incoming from the public part of the NAT, does not match any NAT binding that was created before by an outgoing packet. Similarly for firewalls, as no state was created when the packet traversed the firewall. NATs are probably not a big issue in a carrier net, but firewalls will be, for sure. I guess this should be mentioned as an operational note.
- An editorial detail in the abstract. Take it or leave it The MPLS-in-UDP encapsulation technology must only be deployed within a single network (with a single network operator) or networks of an adjacent set of co-operating network operators where traffic is managed to avoid congestion, rather than over the Internet where congestion control is required. We rarely see specifications ("must" sentences) in abstracts. Do you want to say something like: The MPLS-in-UDP encapsulation applicability is for networks where traffic is managed to avoid congestion, rather than over the Internet where congestion control is required.
Thanks for addressing the SecDir review: http://www.ietf.org/mail-archive/web/secdir/current/msg04303.html https://www.ietf.org/mail-archive/web/secdir/current/msg05318.html
Thanks to all involved for working through the issues with congestion and zero checksums. I'm happy to ballot Yes on this version (while watching Martin's Discuss out of the corner of my eye).
Thank you for the extensive discussion of the UDP zero checksum over IPv6 issue.
Nits remarked upon by Young Lee in his RTG Dir review will need to be addressed.
The security considerations section points at RFC 3473 which in turn says IPsec or RFC 2747 (from 2000) which has a HMAC-MD5 based integrity mechanism (still ok as far as we know) but which also says that "It is likely that the IETF will define a standard key management protocol." Ah well;-) If there was something else that came along in the last 15 years that's better I guess it'd be good to note. But this is non-blocking as this isn't the place to fix any such issues except to the extent of adding some better references.
This is an editorial nit, which the RFC editor might catch, but they'd have to know about the distinction between double precision floating point and fixnums, so I figured I'd just mention it to be safe: In particular, an I-JSON sender cannot expect a receiver to treat an integer whose absolute value is greater than 9007199254740991 (i.e., that is outside the range [-(2**53)+1, (2**53)-1]) as an exact value. The "in particular" at the beginning of this sentence has no antecedent, so it doesn't make sense to say it. You should just delete "in particular". I wonder if there was some text prior to this that got deleted in a previous revision... For applications which require the exact interchange of numbers with greater magnitude or precision (one example would be 64-bit integers), it is RECOMMENDED to encode them in JSON string values. This requires that the receiving program understand the intended semantic of the value. What was the rationale for this? I don't know of a lot of platforms that don't support 64-bit integers, so this seems overly restrictive. I'm not raising this as a major issue because I am sure there _was_ a rationale, but I'd like to hear it.
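[Scribe's note: the 9007199254740991 boundary under discussion is easy to demonstrate; the sketch below is illustrative only and not from the draft.]

```python
# A JSON receiver that stores numbers as IEEE-754 doubles (as JavaScript
# does) cannot distinguish neighboring integers past 2**53 - 1.
import json

MAX_SAFE = 2**53 - 1  # 9007199254740991, the limit quoted above

assert float(MAX_SAFE) == MAX_SAFE                 # still exact
assert float(MAX_SAFE + 1) == float(MAX_SAFE + 2)  # 2**53 + 1 rounds to 2**53

# The RECOMMENDED escape hatch: interchange larger values as strings.
payload = json.dumps({"id": str(2**64 - 1)})
assert json.loads(payload)["id"] == "18446744073709551615"
```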
I share Juergen's question ...
- 2.1: I've no idea what Surrogate or Noncharacter means here - a precise reference would be good for me. And the examples don't help me. So I agree with Barry's discuss. - 4.3: Did the WG discuss whether to require/encourage inclusion/exclusion of 00 values and timezones in times? E.g. is there a preference for 20150119T2304Z or 20150119T230400Z which represent the same time. - 4.3: I'm also a bit surprised you don't say that UTC is the default TZ. I think those time rules do help interoperability so defining defaults would have been an improvement. Why is that? (I don't think 3339 does that or does it?) - 4.4: I don't recall them so have had to track down the difference between base64 and base64url and check I'm using the right APIs over and over. That might be because I only write code sporadically (and badly:-) and forget stuff in between, but an example of the difference (possibly parenthetical) would help me a bit, just so's I could look at a value I generate and spot I've done it wrong again. - I mostly agree with the secdir reviewer's point that it might be good to RECOMMEND use of I-JSON for more security sensitive applications, but I'd have felt more strongly about that if you'd profiled the time values more strictly, as messing up those can lead to vulnerabilities and being more precise there helps to get e.g. signatures correct a bit more easily. https://www.ietf.org/mail-archive/web/secdir/current/msg05380.html
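[Scribe's note: the parenthetical example of the base64 vs. base64url difference that was asked for could look like the sketch below; the two alphabets differ only at indexes 62 and 63 ('+'/'/' vs. '-'/'_'), plus padding handling in some profiles. Inputs are chosen here purely for illustration.]

```python
# base64 vs. base64url: same encoding, different characters for 62 and 63.
import base64

raw = bytes([0xFB, 0xEF, 0xBE])  # bit pattern 111110 x4: index 62 four times
assert base64.b64encode(raw).decode() == "++++"           # standard alphabet
assert base64.urlsafe_b64encode(raw).decode() == "----"   # URL-safe alphabet

raw2 = bytes([0xFF, 0xFF, 0xFF])  # bit pattern 111111 x4: index 63 four times
assert base64.b64encode(raw2).decode() == "////"
assert base64.urlsafe_b64encode(raw2).decode() == "____"
```

So a '+' or '/' in a value is an immediate tell that the standard alphabet was used where base64url was intended.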
I should note that there was a Gen-ART review from Meral with some minor editorial observations. For instance, the first sentence of Section 3 could use improvement. These could be handled by the RFC Editor, too.
Though it makes me sad that we have to have this fork, rather than having fixed RFC 7159.
There is still an ongoing discussion between J. Schönwälder (OPS-DIR) and Tim Bray. On Sun, Jan 04, 2015 at 04:54:55PM -0800, Tim Bray wrote: > On Sun, Jan 4, 2015 at 12:14 PM, Juergen Schoenwaelder < > firstname.lastname@example.org> wrote: > > >> My understanding is that compliance to I-JSON means compliance to >> section 2 of this document. Perhaps it makese sense to clarify this >> (in particular if my interpretation is wrong). >> > > Hmm, good point. The draft currently doesn’t mention compliance; all it > does is give a syntactic definition of something called an “I-JSON > message”. The notion was that other specs which wanted to require the use > of I-JSON should say something like “The message body must be an I-JSON > message [RFCXXXX]”. I think that would work fine, but I wonder if others, > like you, will be bothered by the absence of a specification of > “compliance”. > I am raising this question since I think the draft goes somewhat beyond simply defining I-JSON (which I believe is the material contained in section 2). In particular, the I-D uses RFC 2119 language in a section titled "Protocol-design Recommendations". It is not clear to me how these recommendations have been selected or why they are part of an I-JSON specification. This applies mostly to sections 4.3 and 4.4. Anyway, since these sections use RFC 2119 requirements language, I am wondering what happens if a protocol complies to section 2 but not all of section 4 - is it using I-JSON? I hope so, but it might be good to make this clear. /js ---------------------------------------------------- Editorial nit: - s/values in in ISO 8601/values in ISO 8601/
I agree with the SecDir reviewer in his assessment of this draft after reading it. I'm putting this in as comments/suggestions to be considered, as the security considerations really should be in the JSON document. If any considerations are missing, that is where I'd expect to see them. The JSON format is not simple, so I agree with the SecDir reviewer that one would have expected additional handling considerations for security purposes to be in that document. They don't need to be listed in this one. Having said that, it might be a good idea to add text to the security considerations section, to state that I-JSON restricts and limits some of the dangerous formats of the original JSON, and therefore it might be considered more secure than the original JSON. Perhaps also mention that security-critical usages of JSON should use I-JSON (perhaps even provide references to the jose specifications). https://www.ietf.org/mail-archive/web/secdir/current/msg05380.html
This should be quite simple to sort out: -- Section 2.1 -- Object member names, and string values in arrays and object members, MUST NOT include code points which identify Surrogates or Noncharacters. Where are the definitions of "Surrogates" and "Noncharacters"? Because you say they MUST NOT be included, I think they need to be defined in normative reference(s) and cited here (they're not defined in 3629, nor does 3629 cite a definition).
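[Scribe's note: for concreteness, the check Section 2.1 seems to require could be sketched as below, using the Unicode Standard's definitions rather than RFC 3629 (which indeed does not define these terms): surrogates are U+D800..U+DFFF; noncharacters are U+FDD0..U+FDEF plus the last two code points of every plane.]

```python
# Illustrative sketch only; function name and shape are not from the draft.
def has_forbidden_code_point(s: str) -> bool:
    for ch in s:
        cp = ord(ch)
        if 0xD800 <= cp <= 0xDFFF:   # surrogate code point
            return True
        if 0xFDD0 <= cp <= 0xFDEF:   # noncharacter block
            return True
        if (cp & 0xFFFE) == 0xFFFE:  # U+xxFFFE or U+xxFFFF in any plane
            return True
    return False

assert has_forbidden_code_point("\ufdd0")
assert has_forbidden_code_point("\uffff")
assert not has_forbidden_code_point("plain ascii")
```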
The RFC Editor will make a lot of "which" -> "that" changes. Just be aware. -- Sections 3 and 4 -- Going in the other direction from Jürgen's comments, I think these sections are unremarkable, and would just suggest changing the 2119 language to English words instead. This is all giving sage advice, not defining protocol. -- Section 5 -- I don't normally comment on "Acknowledgments" sections, and please take this as you will and do as you think best. You mention the contributions of Douglas Crockford, and I wonder whether you might also mention the contributions of Ecma International TC39.
In this text: 1. A PCP client should construct a set of candidate source addresses (Section 4 of [RFC6724]), based on application input and PCP [RFC6887] constraints. For example, when sending a PEER or a MAP with FILTER request for an existing TCP connection, the only candidate source address is the source address used for the existing TCP connection. But when sending a MAP request for a service that will accept incoming connections, the candidate source addresses may be all of the node's IP addresses, or some subset of IP addresses on which the service is configured to listen. 2. The PCP client then sorts the PCP server IP addresses as per Section 6 of [RFC6724] using the candidate source addresses selected in the previous step as input to the destination address selection algorithm. If I'm understanding this: if multiple PCP clients end up with the same list of candidate source addresses, and then sort the same list into the same order, does that mean they'll tend to select the same IP addresses that have sorted to the front of the list, even though the PCP server has multiple IP addresses? Or will something I'm not seeing cause a more balanced load distribution? Perhaps there are reasons why that's OK, but I thought I should ask ...
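[Scribe's note: the load-spreading question can be made concrete with a hypothetical sketch (names and helper are invented, not from the draft) of one common mitigation: randomizing ties before the deterministic RFC 6724-style sort.]

```python
import random

def order_addrs(addrs, preference):
    """Hypothetical helper: shuffle, then stable-sort by a preference key,
    so equal-preference server addresses come out in a per-client order
    instead of every client picking the same front-of-list address."""
    addrs = list(addrs)
    random.shuffle(addrs)                  # per-client tie-breaking
    return sorted(addrs, key=preference)   # stable sort preserves the shuffle
```

With a purely deterministic sort and identical inputs, every client would converge on the same server address, which is the concern raised above.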
- I don't get how one can really ensure the restriction below is satisfied, nor why it's needed. (I do get that some setups will be able to check that.) " o The configuration mechanism must distinguish IP addresses that belong to the same PCP server." - The secdir review also makes a reasonable point that explaining the risk (here) of Nonce re-use would be good. https://www.ietf.org/mail-archive/web/secdir/current/msg05355.html
I support Brian's DISCUSS points, especially the one about one vs. multiple PCP servers. If the client doesn't know which case it's in, it can't really follow these procedures.
I support Stephen's comments and think the SecDir reviewer recommendations would be helpful.
Thanks for addressing these issues.
"If the PCP client has exhausted all IP addresses configured for a given PCP server, the procedure SHOULD be repeated every fifteen (15) minutes until the PCP request is successfully answered." Is there something that prevents a client from re-trying this procedure endlessly for a server whose whole set of IP addresses remains unresponsive? Phone call to tech support? ;)
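[Scribe's note: the "endless retry" worry could be answered with a cap; in the sketch below the 15-minute interval is from the draft, but MAX_ATTEMPTS is an invented knob the document currently lacks.]

```python
import time

RETRY_INTERVAL = 15 * 60  # seconds, per the draft text
MAX_ATTEMPTS = 96         # assumption: ~24 hours of retries, then escalate

def retry_pcp_request(try_all_server_addresses):
    """Repeat the address-exhaustion procedure, but give up eventually."""
    for _ in range(MAX_ATTEMPTS):
        if try_all_server_addresses():
            return True
        time.sleep(RETRY_INTERVAL)
    return False  # all addresses stayed unresponsive: stop and raise an alarm
```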
This text makes sense, but I think it needs to be changed somewhat: 3. When parsing the IPv6 header chain, if the packet is identified to be a DHCPv6 packet meant for a DHCPv6 client or the packet contains an unrecognized Next Header value, DHCPv6-Shield MUST drop the packet, and SHOULD log the packet drop event in an implementation-specific manner as a security alert. DHCPv6-Shield MUST provide a configuration knob that controls whether packets with unrecognized Next Header values are dropped; this configuration knob MUST default to "drop". RATIONALE: An unrecognized Next Header value could possibly identify an IPv6 Extension Header, and thus be leveraged to conceal a DHCPv6-server packet (since there is no way for DHCPv6-Shield to parse past unrecognized Next Header values [I-D.gont-6man-rfc6564bis]). [RFC7045] requires that nodes be configurable with respect to whether packets with unrecognized headers are forwarded, and allows the default behavior to be that such packets be dropped. I think it's worth considering whether the default setting for this configuration knob should be "drop" or "pass." The problem with defaulting to "drop" is that it means that extension headers the DHCPv6 Shield device does not understand fail to pass, which could cause operational problems. The problem with not defaulting to "drop" you have already explained. I do not think that the threat of DHCPv6 spoofing is sufficient to justify defaulting to drop. Yes, DHCPv6 spoofing can cause operational issues. So can filtering "unknown" headers. The frustrating thing about this document is that it actually solves the problem the wrong way. What this document should recommend is filtering of DHCPv6 packets from _clients_. If a rogue DHCP server can't see client multicasts because DHCPv6 shield is blocking them, then it can't know to attack DHCPv6 clients. This substantially limits the rogue's ability to attack DHCPv6 clients on the local subnet. 
If you combine that with server packet filtering but do not block unknown headers, I think you have achieved a good tradeoff between the problems caused by whatever spoofing might get to a client using an unknown header and the problems caused by blocking non-DHCP packets that use that unknown header for some legitimate purpose. So, realizing that this would be a major change, the way I would LIKE you to address this discuss is to add DHCPv6 client packet filtering. You could also address it by changing the default for the unknown header filter, but I would understand if you felt that this was inadequate. Or you could argue persuasively that I'm wrong, which has been known to happen. :)
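[Scribe's note: the filtering decision being debated compresses to something like the sketch below; the set of "recognized" Next Header values and the function shape are illustrative, not from the draft.]

```python
# Sample IPv6 Next Header values a device might recognize (illustrative).
RECOGNIZED = {0, 17, 43, 44, 50, 51, 58, 59, 60}

def shield_verdict(next_headers, is_dhcpv6_server_msg, drop_unrecognized=True):
    """Return 'drop' or 'pass' for a packet on a port not allowed to face
    a DHCPv6 server."""
    for nh in next_headers:
        if nh not in RECOGNIZED:
            # the configuration knob under discussion; the draft defaults
            # it to "drop", which is what Pete's DISCUSS questions
            return "drop" if drop_unrecognized else "pass"
    # parsing succeeded, so the DHCPv6-server check can actually be applied
    return "drop" if is_dhcpv6_server_msg else "pass"

assert shield_verdict([0, 58], False) == "pass"
assert shield_verdict([0, 99], False) == "drop"                # unknown header
assert shield_verdict([0, 99], False, drop_unrecognized=False) == "pass"
assert shield_verdict([17], True) == "drop"                    # server message
```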
Abstract: This document specifies a Best Current Practice for the implementation of DHCPv6 Shield. No, this does not specify a Best Current Practice *for* implementing DHCPv6-Shield; it's a Best Current Practice *for* the Internet (or some portion thereof). A "Best Current Practice" is not something that you specify. This document *does* specify "a set of operational practices or guidelines for implementation of DHCPv6 Shield." Say that instead. Section 4: s/MUST be/is Section 5: OLD The following filtering rules MUST be enforced as part of a NEW The following are the filtering rules that are enforced as part of Sub bullet 2: s/SHOULD log the packet/ought to log the packet (That's not an implementation requirement, just something good to do.) Sub bullet 2: s/MUST contain the/must contain the (That's just re-describing something in another document, not a new requirement.) Sub bullet 3, first paragraph: The first sentence contradicts the second sentence as it's written with regard to unrecognized Next Header values. I suggest splitting this up: 3. DHCPv6-Shield MUST provide a configuration knob that controls whether packets with unrecognized Next Header values are dropped; this configuration knob MUST default to "drop". When parsing the IPv6 header chain, if the packet contains an unrecognized Next Header value and the configuration knob is configured to "drop", DHCPv6-Shield MUST drop the packet, and ought to log the packet drop event in an implementation-specific manner as a security alert. RATIONALE: [...] 4. When parsing the IPv6 header chain, if the packet is identified to be a DHCPv6 packet meant for a DHCPv6 client, DHCPv6-Shield MUST drop the packet, and ought to log the packet drop event in an implementation-specific manner as a security alert. 5. In all other cases... OLD The above rules require that if a packet is dropped due to this filtering policy, the packet drop event be logged in an implementation-specific manner as a security fault.
The logging mechanism SHOULD include a per-port drop counter dedicated to DHCPv6-Shield packet drops. NEW The above indicates that if a packet is dropped due to this filtering policy, the packet drop event be logged in an implementation-specific manner as a security fault. It is useful for the logging mechanism to include a per-port drop counter dedicated to DHCPv6-Shield packet drops.
There is one thing here I can't figure out, maybe you can enlighten me though... section 5, bullet 3: this seems like another "don't make it easier to use IPv6" rule, and as a default, which I can't figure. Why do you even need to block "an unrecognized Next Header value" to protect against a spoofed DHCPv6 response message? - intro: s/meant to/sent to/ ?
- We note that DHCPv6-Shield only mitigates only DHCPv6-based attacks against hosts. Remove one "only" - OLD: DHCPv6-Shield MUST parse the entire IPv6 header chain present in the packet, to identify whether it is a DHCPv6 packet meant for a DHCPv6 client (i.e., a DHCPv6-server message). NEW: DHCPv6-Shield implementations MUST parse the entire IPv6 header chain present in the packet, to identify whether it is a DHCPv6 packet meant for a DHCPv6 client (i.e., a DHCPv6-server message). - As mentioned by Jürgen in his OPS-DIR review: Section 5 is titled "DHCPv6-Shield Implementation Advice". It uses RFC 2119 MUST language and talks about criteria for compliance. Is "Advice" really the right word for this? Sounds a bit soft for what are actually implementation requirements. Fernando proposed: The title was borrowed from a similar I-D for RA-Guard implementation. I guess we could simply say "DHCPv6-Shield Implementation"? I thought it was a good idea.
I'd like to understand why this is a BCP and if that's the right designation. Hannes brought this up in his SecDir review: https://www.ietf.org/mail-archive/web/secdir/current/msg05273.html
= Section 5 = I think the point that Pete makes about sub-bullet 3 is valid, and that it's possible for an implementer to do the wrong thing because of the confused way in which sub-bullet 3 is written. I think this can be resolved by adopting the changes that Pete suggests. If you choose to not adopt all of Pete's changes in Section 5 and retain the normative recommendations about logging, I'd like to discuss what the difference is between a security fault and a security alert. It's hard for me to see how the spec can normatively recommend implementation-specific behavior and then use two different terms for what that behavior is supposed to entail without explaining the difference between them. (And even if you remove the normative logging recommendations, it would still help to explain what the difference is, but that would no longer be DISCUSS-worthy I think.)
= Section 1 = s/meant to DHCPv6 clients/intended for DHCPv6 clients/ s/a specific ports/specific ports/ s/DCHPv6-Shield/DHCPv6-Shield/ s/only mitigates only/only mitigates/ = Section 5 = I support all of the changes to Section 5 suggested by Pete. I don't think the spec should recommend logging packet drop events unless it explains what is meant to be done with the logs.
A draft to be posted to address LC comments from Ralph Droms and Sheng Jiang.
3.2 says: A server MUST ignore a "h2" token in an Upgrade header field. Presence of a token with "h2" implies HTTP/2 over TLS, which is instead negotiated as described in Section 3.3. And 3.3 says: HTTP/2 over TLS uses the "h2" application token. The "h2c" token MUST NOT be sent by a client or selected by a server. Why isn't the presence of an "h2" Upgrade token on a clear text connection, or an "h2c" application token on a TLS connection, grounds for slamming the connection? Seems like something nefarious might be going on in either case. Seems like "MUST NOT send, MUST be a connection error if received" seems like the right thing to do. 4.2 says: An endpoint MUST send a FRAME_SIZE_ERROR error if a frame exceeds the size defined in SETTINGS_MAX_FRAME_SIZE, any limit defined for the frame type, or it is too small to contain mandatory frame data. But later 5.1 says: Receiving any frames other than [blah blah blah] on a stream in this state MUST be treated as a connection error (Section 5.4.1) of type PROTOCOL_ERROR. The MUSTs in there appear contradictory. If I get a frame with the wrong type for my current state that is *also* a bogus size, is there a requirement that I do PROTOCOL_ERROR, or FRAME_SIZE_ERROR, or is the choice of error code not really a requirement at all? I suspect that the "MUST be treated as a connection error" is the key and *not* the particular error code. I would re-word to simply say, "The FRAME_SIZE_ERROR error is sent when a frame exceeds..." in 4.2. In 5.1 and elsewhere, you could say something like: Receiving any frames other than [blah blah blah] on a stream in this state MUST be treated as a connection error (Section 5.4.1). Error type PROTOCOL_ERROR can be used for this condition. I just think we'll regret when the protocol police come around saying, "But it said you MUST use such-and-so error code, and he didn't, so it's fine if my implementation does X", where X is a thoroughly idiotic thing. 
4.3: Earlier in the section, you say: A complete header block consists of either: o a single HEADERS or PUSH_PROMISE frame, with the END_HEADERS flag set, or o a HEADERS or PUSH_PROMISE frame with the END_HEADERS flag cleared and one or more CONTINUATION frames, where the last CONTINUATION frame has the END_HEADERS flag set. So you've defined that the last frame (or the only frame if there's no continuation) has the END_HEADERS set. Cool. But then later you make a point to say: The last frame in a sequence of HEADERS or CONTINUATION frames MUST have the END_HEADERS flag set. The last frame in a sequence of PUSH_PROMISE or CONTINUATION frames MUST have the END_HEADERS flag set. I don't get the MUSTs. What is the situation that I need to look out for here? Is there a circumstance where an implementation might think it's OK to send the last frame without END_HEADERS set? Seems to me those two sentences can be deleted, but maybe I'm missing something. Also unnecessarily MUSTy: OLD An endpoint receiving HEADERS, PUSH_PROMISE or CONTINUATION frames MUST reassemble header blocks and perform decompression even if the frames are to be discarded. NEW Hence, an endpoint receiving HEADERS, PUSH_PROMISE or CONTINUATION frames needs to reassemble header blocks and perform decompression even if the frames are to be discarded. END 5.1: open: [...] From this state either endpoint can send a frame with an END_STREAM flag set, which causes the stream to transition into one of the "half closed" states: [...]. Either endpoint can send a RST_STREAM frame from this state, causing it to transition immediately to "closed". You should probably define the silly state of a RST_STREAM frame with an END_STREAM flag set on it. I presume that you immediately go to "closed", but if you implemented it in a goofy way, you may end up in "half closed". A clarifying bit: OLD half closed (local): A stream that is in the "half closed (local)" state cannot be used for sending frames.
Only WINDOW_UPDATE, PRIORITY and RST_STREAM frames can be sent in this state. NEW half closed (local): A stream that is in the "half closed (local)" state cannot be used for sending frames other than WINDOW_UPDATE, PRIORITY and RST_STREAM. END Also: If an endpoint receives additional frames for a stream that is in this state, other than WINDOW_UPDATE, PRIORITY or RST_STREAM, it MUST respond with a stream error (Section 5.4.2) of type STREAM_CLOSED. I think it would be good to introduce some language somewhere, maybe here (and in other places throughout section 6 referring to stream errors) or maybe in 5.4, that says that you MUST respond with a stream error *unless the frame in question would also constitute a connection error, in which case you MUST respond with a connection error*. 5.4.3: I'm not clear on how to implement a "MUST assume". What is it that the implementation MUST do? 8.2.2: I was actually a little surprised to see the SETTINGS_MAX_CONCURRENT_STREAMS suggestion. I would have thought that using window size would be much more obvious. Any reason you chose to discuss one instead of the other? 11.3: No advice or criteria to use for the expert reviewer?
Just a quick one, I'm not sure if it's me reading it wrong or a bug... happy to clear and let you fix once we've figured it out anyway. 8.2.2 says that clients MUST validate that a proxy is configured for the corresponding request. Since you say MUST, don't you need to say what exactly is meant there? If that is somewhere here, I'm not seeing it. I realise that some of that may be controversial, but if so, I think saying that would be better than leaving it dangling. (Note that I've also a comment below about 10.1 not saying enough; if the fix for this were to be a new paragraph or two about when pushed stuff is ok, then you might handle my comment on 10.1 here, and that might be better, not sure.)
(Putting this one first as I'd really like an answer but will not be blocking on it no matter how much I dislike the answer:-) - 6.1, and elsewhere, I just want to check that the 256 octet limit on padding isn't too restrictive a design. Is it still possible to, say, arrange that every response from a server has the same size? (Say from a server that is only used for serving images and where there could be a 2KB variation in file size or something.) I think that can work via HEADERS and CONTINUATIONs but wanted to check the limits of flexibility here. I'm similarly interested in the limits within which a client can add padding to a request, but I think the same trick works if it works at all. - 3.1: a question on the text to be removed about draft-specific identifiers - did the WG consider how to handle drafts produced from now until the RFC issues? I've not yet looked to see if there are normative dependencies that might lengthen that process, but if there might be it could be useful to discuss in the WG if it turns out there are a bunch of discusses that might take a while and a few versions to sort out. If the IESG eval is clean enough and there are no delaying normative refs then that's probably not needed. - 3.2, the last para before 3.2.1 is hard to read, maybe it was added late but it seems to use concepts not yet explained at that point (e.g. half-closed). Maybe a forward ref to Figure 2 would be enough to fix. - 5.3.4, "can be moved" in 1st sentence - I don't get why it's ok to not say MUST or SHOULD there, iff the change is as a result of a peer's action. Shouldn't a 404 for a dependency have a predictable impact on the overall page load?
- 6.10, odd that there's no padding here - 8.1, a bit of abnf here might have helped, ah well - 8.1.2.3, I assume the underscores in "_:authority_" are a typo, if not they are possibly confusing - end of p57, does the last para mean that a header field value can be split between a HEADERS frame and the subsequent CONTINUATION frame (with possible padding in between when looked at on-the-wire)? If so, then I think you might want to be clearer about that since I could see it leading to interop issues. - p64, DNS-ID is what (dNSName?)? And "Common Name" might be better as commonName, but both probably need a reference. (I don't care about nitty capitalisations, but wouldn't some coders need to reference?) - 9.1, which TLS library allows a client to know that it'll shortly be time to re-key? Doing so is doable, as e.g. NTP has a mechanism like that for pre-announcing leap seconds I'm told. But I'm not sure anyone actually does it at all today. - 9.2.1 - the renegotiation text is confusing. First para on that starts with "MUST be disabled" next one says "MAY use" for client cert confid. I think adding something like "Other than as noted below" before the start of the 1st renego para (the 3rd para of 9.2.1) might fix this, but could be some more editing would be better. - 9.2.2 - Isn't it sad that there are so many undesirable TLS ciphersuites. Sorry about that;-) - 10.1 - I think this could be a lot clearer about which pushed things are to be thrown away and I think being clearer about that might avoid some future problems. - 10.5.1 has a few typos, no harm being nice to the RFC editor and fixing those if you're pushing out a -17 - 10.7 - what general purpose padding does TLS1.2 provide? I'm sad that this section is so negative - does that indicate that people haven't really tried this out so much?
While it is clear there can be failed attempts at using padding to thwart traffic analysis I think having this mechanism defined so we can in future learn how to better mitigate traffic analysis is a good thing and we ought not be so down on that. - For the record, I'm also sad that this isn't all and only over TLS with the option of opportunistic security for http: schemed URIs but I accept that the wg debated that and decided for this.
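[Scribe's illustration of the padding question above: a hypothetical sketch, not from the draft, of how a server might pad every response payload up to a fixed bucket size given the one-octet Pad Length field, which caps padding at 255 octets per frame. The function name and the bucketing scheme are my own; the sketch ignores the extra Pad Length octet each padded frame adds on the wire.]

```python
MAX_PAD = 255  # Pad Length is a single octet, so at most 255 octets of padding per frame

def pad_plan(body_len: int, bucket: int) -> list[tuple[int, int]]:
    """Return (data_octets, pad_octets) per DATA frame so that data plus
    padding totals `bucket` octets: the body goes in the first frame with
    as much padding as fits, and further zero-data frames carry the rest."""
    needed = bucket - body_len
    if needed < 0:
        raise ValueError("body larger than bucket")
    frames = [(body_len, min(needed, MAX_PAD))]
    needed -= frames[0][1]
    while needed > 0:
        pad = min(needed, MAX_PAD)
        frames.append((0, pad))  # DATA frames may carry zero octets of data
        needed -= pad
    return frames
```

This suggests the 256-octet per-frame limit is not a hard cap on total padding, at the cost of extra frames.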
An excellent document. Alissa spotted already what I would have remarked about the "short period" in Section 5.1.
(1) Section 3.2: "A server MUST ignore a "h2" token in an Upgrade header field." What is the reasoning behind this exclusion? This seems to unnecessarily rule out the use of TLS, especially given that the server can opt out by choosing "h2c". (2) Figure 1 seems really confusing. If the reader notices the phrase "9 octets of the frame header", he'll probably come to the right conclusion, but it also seems likely that some readers will infer from the layout that the header is 12 octets long, with the fields aligned to word boundaries. Just eliminating the header with the bit positions would help a lot. Likewise for the figures in Section 6. (3) Section 9.1.1: "For "http" resources..." This seems to imply that requests for "http" resources can only be carried over bare TCP, which seems wrong given the presence of the ":scheme" pseudo-header. Proposed text: OLD: "For "http" resources, this depends on the host having resolved to the same IP address." NEW: "For TCP connections without TLS, this depends on the host having resolved to the same IP address." (4) Section 9.1.1: "For "https" resources..." The salient requirement here is that the certificate provided by the server MUST pass any checks that the client would have done if it were initiating the connection afresh. In addition to the name check here, that would include things like HPKP. Suggested text: OLD: "For "https" resources, connection reuse additionally depends on having a certificate that is valid for the host in the URI." NEW: "For "https" resources, connection reuse additionally depends on having a certificate that is valid for the host in the URI. The certificate presented by the server MUST satisfy any checks that the client would perform when forming a new TLS connection for the host in the URI (e.g. HPKP checks [HPKP])." (5) Section 10.4: "Pushed responses for which an origin server is not authoritative (see Section 10.1) are never cached or used." 
This seems like a rather important point, for which I can't find any normative text. It seems like in Section 8.2.1, the client should be REQUIRED to verify that the ":authority" field in the PUSH_PROMISE contains a value for which the client would have been willing to re-use the connection (as specified in Section 9.1).
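[Scribe's illustration of the Figure 1 confusion discussed above: a short parsing sketch of the 9-octet frame header (24-bit length, 8-bit type, 8-bit flags, 1 reserved bit, 31-bit stream identifier). The function name is mine; the field widths are from the draft.]

```python
def parse_frame_header(buf: bytes) -> tuple[int, int, int, int]:
    """Parse the 9-octet HTTP/2 frame header into
    (length, type, flags, stream_id)."""
    if len(buf) < 9:
        raise ValueError("need at least 9 octets")
    length = int.from_bytes(buf[0:3], "big")   # 24-bit payload length
    ftype = buf[3]                             # 8-bit frame type
    flags = buf[4]                             # 8-bit flags
    stream_id = int.from_bytes(buf[5:9], "big") & 0x7FFFFFFF  # drop reserved bit
    return length, ftype, flags, stream_id
```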
Section 2.1: The definitions of "client" and "server" here are a bit lean. For example, one might read them and conclude that the client and server roles are independent of who sends requests and responses. It would be good to clarify these roles, updating the definition in RFC 7230: "An HTTP "client" is a program that establishes a connection to a server for the purpose of sending one or more HTTP requests. An HTTP "server" is a program that accepts connections in order to service HTTP requests by sending HTTP responses." Section 2.1: "across a virtual channel" What is a virtual channel? Can this phrase just be deleted? Section 3.1: "CREF1: RFC Editor's Note:" It seems like it could be useful to leave in a variant of this note, describing the variant identifiers used by pre-RFC versions of the protocol. (And thus removing the RFC 2119 language.) Section 3.4: "the sever MUST send" Section 4.1: "R: A reserved 1-bit field." I was mystified by the purpose of this field until I realized that it's only there because stream IDs are 31 bits (to make room for the Exclusive flag, I guess). Might help this read more smoothly to note something to that effect here. Section 5.1: When I initially read Figure 2, I thought that "H/", "ES/", etc. designated types of frames (or frames + flags). If, as they appear, they indicate alternatives, it would be clearer to add a space before the "/". Section 8.1.2.3: "_:authority_" Remove the underscores? Section 10.3: "if they are translater verbatim"
Thank you for a very well-written writeup, which helped the review. Same remark as Richard regarding figure 1
Nice work on this draft! It is very well written and easy to read. There was also a lot of thought put into tough security questions like compression and the use of cipher suites in working group sessions, which is very much appreciated. I have a few non-blocking comments that are mostly editorial. Editorial consideration: Section 9.2.2 Thanks for adjusting the language in this section and in Appendix A to avoid confusion, making the WG consensus clear that the blacklist usage is a "SHOULD NOT"; changing the word "prohibited" to "blacklist" works for me. Followup from the last IETF meeting and my previous comment: Section 9.2.2 - I'll note first that there is clear WG consensus for using blacklists. I'm not expecting a change here, but wanted to note that I think you may run into supportability/cost issues in the future with this approach. In the current text, any cipher suite that's new (not yet blacklisted) can be supported, which includes regional cipher suites. Since we are in the midst of a push for crypto and are seeing the response from those who monitor, including governments, this opens up room for requirements to be set country by country outside of the protocol.
I agree with the other ADs who have complimented the authors on the readability of this spec. I have what I'm assuming to be a very easy Discuss question to resolve, along with a small number of nits as Comments. I expect to be a Yes very soon. I'm confused between these two statements: 1. Flow control is specific to a connection; i.e., it is "hop-by- hop", not "end-to-end". and Both types of flow control are hop-by-hop; that is, only between the two endpoints. Could you help me get unconfused?
This is a very minor point, but this text In particular, HTTP/1.0 allowed only one request to be outstanding at a time on a given TCP connection. HTTP/1.1 added request pipelining, but this only partially addressed request concurrency and still suffers from head-of-line blocking. Therefore, HTTP/1.1 clients that need to make many requests typically use multiple connections to a server in order to achieve concurrency and thereby reduce latency. doesn't seem quite right to me. HTTP/1.0 (without persistent connections) closed TCP connections to give an indication that the resource represented by the URL had been completely transferred. "one request to be outstanding at a time on a given TCP connection" sounds like you could send a second request on that TCP connection after you receive a response to the first request, but that's not possible. And HTTP/1.0 clients certainly opened multiple connections to a server as well. Could you look at this text one more time? In this text Endpoints MUST NOT exceed the limit set by their peer. An endpoint that receives a HEADERS frame that causes their advertised concurrent stream limit to be exceeded MUST treat this as a stream error (Section 5.4.2) of type PROTOCOL_ERROR or REFUSED_STREAM. I'm wondering why both of these stream error types are appropriate here. Is there any guidance about when to choose one type or the other? In this text: After sending a SETTINGS frame that reduces the initial flow control window size, a receiver has two options for handling streams that exceed flow control limits: 1. The receiver can immediately send RST_STREAM with FLOW_CONTROL_ERROR error code for the affected streams. 2. The receiver can accept the streams and tolerate the resulting head of line blocking, sending WINDOW_UPDATE frames as it consumes data. I found myself wondering how a receiver would choose between these options. Is there any guidance you could provide? 
For this reference: [TCP] Postel, J., "Transmission Control Protocol", STD 7, RFC 793, September 1981. perhaps https://tools.ietf.org/html/draft-ietf-tcpm-tcp-rfc4614bis-08, currently in the RFC Editor's queue, would be more appropriate?
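[Scribe's illustration of the "both types of flow control are hop-by-hop" question above: a minimal sketch, with class and method names of my own invention, showing how a sender tracks both a per-stream window and the single connection window, either of which can block a DATA frame until a WINDOW_UPDATE arrives.]

```python
class FlowControl:
    """Sketch of a sender's view of HTTP/2 flow control: every DATA frame
    consumes space in both the stream's window and the connection's window;
    a WINDOW_UPDATE on stream 0 replenishes the connection window, and on
    any other stream replenishes that stream's window."""
    DEFAULT_WINDOW = 65_535  # initial window size from the draft

    def __init__(self):
        self.connection_window = self.DEFAULT_WINDOW
        self.stream_windows = {}

    def send_data(self, stream_id: int, size: int) -> bool:
        win = self.stream_windows.setdefault(stream_id, self.DEFAULT_WINDOW)
        if size > win or size > self.connection_window:
            return False  # blocked until a WINDOW_UPDATE arrives
        self.stream_windows[stream_id] = win - size
        self.connection_window -= size
        return True

    def window_update(self, stream_id: int, increment: int):
        if stream_id == 0:  # stream 0 addresses the connection window
            self.connection_window += increment
        else:
            self.stream_windows[stream_id] = (
                self.stream_windows.get(stream_id, self.DEFAULT_WINDOW) + increment
            )
```

Either window reaching zero stalls the stream, which is why exhausting the connection window blocks even a fresh stream.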
Thanks for all of your work on this. I'm not thrilled with the decisions around TLS and opportunistic security, but I understand how we got here. = Section 3.1 = It might be worth keeping this sentence from the text that is marked for deletion: "Examples and text throughout the rest of this document use "h2" as a matter of editorial convenience only." = Section 3.2 = "A server that supports HTTP/2 can accept the upgrade with a 101 (Switching Protocols) response." Is there a reason why this behavior is not normatively described? Is there some other response that clients should expect that has the same semantic? = Section 5.1 = "WINDOW_UPDATE or RST_STREAM frames can be received in this state for a short period after a DATA or HEADERS frame containing an END_STREAM flag is sent. ... endpoints MAY choose to treat frames that arrive a significant time after sending END_STREAM as a connection error (Section 5.4.1) of type PROTOCOL_ERROR." Can some ballpark guidance be given about how long "a short period" and "a significant time" are expected to be? Or how an implementation might calibrate these values? s/Frame of unknown types/Frames of unknown types/ = Section 5.2.2 = "Deployments that do not require this capability can advertise a flow control window of the maximum size, incrementing the available space when new data is received." I found this a little vague. Does this mean deployments can advertise a flow control window of size 2^31-1 and then locally increment available memory as new data is received? Would be good to state here both what the maximum size is and what is meant by "available space." = Section 6.8 = s/the remote can know/the remote peer can know/ = Section 8.2 = s/that is not cacheable, unsafe or that/that is not cacheable, safe or that/ = Section 10.7 = "Intermediaries SHOULD retain padding for DATA frames, but MAY drop padding for HEADERS and PUSH_PROMISE frames. 
A valid reason for an intermediary to change the amount of padding of frames is to improve the protections that padding provides." The different recommendations for different frame types here are a bit puzzling. Why are intermediaries expected to be able to improve padding-based protection for HEADERS and PUSH_PROMISE frames more so than for DATA frames? = Section 10.8 = Is there anything more you could say about possible mitigations for the fingerprinting issue? Or there might not be, absent some coordination between endpoints, which would likewise be useful to note.
This has been a long time in coming, thank you.
Excellent stuff! A well-written clear description of a reasonably complex thing.
I just wonder if there is a second implementation; the shepherd report points out one.
David Black's Gen-ART review raised the issue of never-indexed fields, and whether guidance or a list of header fields should be in the document to describe when this option should be used. Has the WG discussed this in the past, and what conclusion did it come to? Are there standardised header fields that would clearly be on such a list, if it were given in the document?
Section 2.3.3: "Indices between 1 and the length of the static table..." The use of 1-based indexing here seems likely to lead to incompatibilities. Section 3: Currently, you never say explicitly that a header block is the concatenation of encoded header fields, where each field is encoded according to Section 6. This would be a good spot to do that. Section 5.1: "... always finishes at the end of an octet" It was not immediately clear to me that the "?" bits indicated that an integer need not *begin* at an octet boundary. It would be helpful to note that here.
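[Scribe's illustration of the Section 5.1 octet-boundary point above: a sketch of the HPACK prefix-integer encoding itself, following the draft's algorithm. A value that fits in the N-bit prefix is encoded there; otherwise the prefix is filled with ones and the remainder follows in 7-bit groups, each continuation octet carrying its high bit set, so decoding always finishes at the end of an octet even though the integer need not begin at one.]

```python
def encode_int(value: int, prefix_bits: int) -> bytes:
    """Encode `value` as an HPACK prefix integer with an N-bit prefix
    (the prefix is returned here as its own octet, with the non-prefix
    bits assumed to be zero)."""
    limit = (1 << prefix_bits) - 1
    if value < limit:
        return bytes([value])          # fits entirely in the prefix
    out = [limit]                      # prefix filled with ones
    value -= limit
    while value >= 128:
        out.append((value % 128) + 128)  # continuation bit set
        value //= 128
    out.append(value)                  # final octet, continuation bit clear
    return bytes(out)
```

For example, 1337 with a 5-bit prefix encodes as the three octets 31, 154, 10 (the worked example in the draft).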
Similar to Jari's COMMENT. David Black, part of the combined OPS/GEN-ART review (http://www.ietf.org/mail-archive/web/gen-art/current/msg11197.html) mentions: The second major issue looks serious - one of the major motivations for HPACK is to mitigate attacks on DEFLATE (e.g., CRIME) via use of never indexed fields wrt compression. The absence of a list of header fields that MUST use that never indexed functionality appears to be a serious oversight. Could I ask one of you to place a Discuss to ensure that these concerns are addressed? ==================== I haven't had the time to read the draft (shocking I know). So I'm unclear at this point if the feedback is DISCUSS/COMMENT-worthy, but ... I've got a very high respect for David's technical reviews. In many years of review, it's the first time he directly asked me to file a DISCUSS. So I want to get to the bottom of this issue. If this approach is clumsy (yes, I know, the DISCUSS should be in my name, not on behalf of David), I could also "DEFER" this draft. I also see that the authors/David engaged in the discussion on the mailing list. Good.
Thank you for your work on this draft and for the thorough security considerations section. I do agree with the SecDir reviewer that an early reference to the security considerations section would be useful, please consider adding that. http://www.ietf.org/mail-archive/web/secdir/current/msg05406.html Another good point is that while this draft addresses current threats (CRIME), the WG should keep in mind that the attacks could evolve. This is really just to think ahead with options since HPACK is a relatively new algorithm, and since encryption of compressed headers is known to be somewhat perilous. It is possible that a clever attacker will develop a new attack in the future (i.e., CRIME++ ) that works against HPACK-compressed header fields.
My one question about this was about lack of extensibility of the static table, but I see that some intro text has been added to the editor's copy of the document <http://http2.github.io/http2-spec/compression.html> that speaks to this. Keeping that text would be good imo.
I do agree with Adrian's comments about clarifying how and what the experiment is.
I don't specifically care to object to the publication of this document and I don't feel strongly enough to abstain, but I do find it hard to understand why it is being published even as experimental. It feels to me like a consolation prize for not being the WG's selected solution. Maybe it would have been better to pursue publication either as an historic record of an idea not adopted, or as an informational record of some existing implementation and deployment. That way it would have been less confusing to the market. Anyway, given that it is positioned as an experimental RFC, I wish this document explained why and how it is experimental in nature. It is not a requirement to do this, but it would make a lot of sense. - How is the Internet kept safe from the experiment? - What feedback do you want from experimentation? - How will you judge the success of the experiment? - How do you plan to move the experiment to standards track?
For the same reasons as Brian.
- I support Pete's discuss about IPR - surely this has to go around to IETF LC again for that?
I am balloting no objection on the grounds that this document has been reviewed by the WG and the IETF community at large and apparently "passed" the last calls in terms of having rough consensus. However, the proposed solution personally looks to me like a big hack; in other words, this document is creating a cross-IP-version protocol address translator (including using transport protocols). Actually, the whole work of the softwire working group should be reconsidered from an architectural view. Is this really the long term solution to get the IP transition right, or is this just creating the next headache in five years, as pieces of the network layer and the transport layer are mixed together into an IPv6 address?
Several people, including a Gen-ART reviewer, have asked about the status designation. I think we have all read about the history of how we got here. I do not personally have an objection for upgrading the document (but of course that would require a new last call). Nor do I think the IESG or reviewers should have a strong opinion in the matter. However, I'd suggest that broad applicability and interest from the working group, in today's context, should be the deciding factor, if someone wants to make a change.
These are strictly procedural issues for the IESG. I have glanced through the content of the document and have no reason to believe it is otherwise objectionable. 1. Several of the IESG comments on this document are for the -06 version of this document, which was put forward for Experimental status. This one is put forward for Proposed Standard, but comments haven't been updated. I would like the IESG to be clear what it is balloting on. Along similar lines, the ballot seems to be taken from the shepherd writeup, which is clearly a writeup for this document as Experimental. Probably the ballot should have been cleared and re-issued. But so long as we confirm on the call that nobody's position will have changed due to the status change, that's cool. 2. On the day of the latest Last Call, after the Last Call announcement was generated, there was an IPR Disclosure on this document by a participant in the WG. The Last Call announcement went out indicating that there were no IPR disclosures on the document. The WG participant who made the disclosure is also the listed inventor for both of the patents cited in the disclosure, and the disclosure is for a royalty-bearing license. This appears to me to be a horribly late disclosure. Did the WG discuss this disclosure? 3. I would like to hear some discussion of the abstentions now in light of the fact that the document is going for Proposed Standard. I understand that Brian's position is that the status of the document makes no difference to his assessment, but I'd like to hear from others on this point before I ballot No Objection.
Exactly like Adrian, I would like some more information on the Experimental status. As examples: http://tools.ietf.org/html/rfc7360#section-1.3 http://tools.ietf.org/html/rfc6614#section-1.3 I believe this should be common practice for experimental RFCs. Re-reading RFC 3933, I don't see this. Maybe an IESG statement ... ? Editorial from Victor K. (OPS-DIR review): Section 7.1 Paragraph 2: old text “.. a CE requires an the IPv6 prefix to be assigned to the CE” new text “.. a CE requires an IPv6 prefix to be assigned to the CE.” Section 7.2 Paragraph 3: old text “.. no specific routes need to be advertised externally for MAPto operate, neither in IPv6 nor IPv4 BGP.” new text “.. no specific IPv6 or IPv4 routes need to be advertised externally outside the service provider’s network for MAP to operate.” I added this version of the sentence since it makes more sense to me. Also, you technically don’t need BGP on the ISP side (although I can’t imagine a modern network which does not use it).
The security considerations look good, thank you.
I've had a quick look, and nothing stands out. I trust my distinguished colleagues from Vermont and Maryland to duke it out.
I find it a disservice to the community for the softwire WG to put forth multiple solutions that solve essentially the same problem (https://mailarchive.ietf.org/arch/msg/ietf/jcscmIHmAQSvXLAlLLvfhnC2P8A). I believe the confusion caused by a myriad of solutions in this space, regardless of whether they are Standards Track or Experimental, will adversely impact vendors, operators, and end-users. My only hope is that this confusion will speed up the transition to IPv6-only operations within the affected networks.
This looks like a liability nightmare. I understand why you want to do it, and to some degree sympathize, but this looks to me like a baseball bat you are handing to law enforcement that will be used against unsuspecting web operators who publish things they think are unobjectionable but that wind up being considered objectionable in some jurisdiction. I can easily see it being used to suppress LGBT content, for example, and any sort of useful sex ed content for teens. The IETF should not be associated with this specification.
I'm looking at this text: Origin servers that utilize the "safe" preference SHOULD document that they do so, along with the criteria that they use to denote objectionable content. and wondering - is this an RFC 2119 SHOULD? I'm guessing documenting that you utilize "safe" has no impact on interoperation, so maybe this is more like "ought to"? - is there any guidance that could be given about how this documentation might be made available to users, so it would be easier for users to find it? I won't be a bit surprised if the answer is "no", but I wanted to ask ...
Thanks for this simple document. A fine idea to document it. I found... Note that this specification does not precisely define what "safe" is; rather, it is interpreted within the scope of each Web site that chooses to act upon this information (or not). That is good, but perhaps not painted red enough for some folk, notwithstanding the discussion in the Security Considerations section. How about: Note that this specification does not precisely define what "safe" is; rather, it is interpreted within the scope of each Web site that chooses to act upon this information. Furthermore, requesting "safe" does not guarantee that the Web site will apply any filters. --- I looked for (and found!) discussion of the insertion of "safe" into a stream. It's a fair discussion, but a worry for me. Having created this tool, is there a way to ensure that it is not used to filter my access to Web sites without me knowing? Of course, an intermediary that can insert "safe" can also modify the content, but it is much simpler to rely on the server to do that so it would be nice to have a way to prevent or detect insertion of "safe". Similarly, an intermediary that can insert "safe" in a request can remove "safe-supplied" from a response. Perhaps there is nothing to be done?
Sigh. The disposition of this is no longer clear so I am re-instating my discuss. Before I abstain on this I would like to briefly discuss the evaluation of rough consensus and check if a few points raised were addressed. I'll put those as discuss points, but plan to abstain once those are briefly covered as I think that this is something the IETF should not specify, never mind "endorse" as a proposed standard. I think it is something we would regret publishing, much as we would have regretted it had we produced an RFC for the (IMO quite similarly broken and damaging) do-not-track (DNT) flag. (1) While I am definitely in the camp who would prefer that we not specify this at all, and hence am a biased judge, I can't see that there was rough consensus for this, having just re-read the IETF LC mails. The write-up does clearly acknowledge that any consensus was very rough in the view of the sponsoring AD. Note that I'm not at all questioning Barry's intentions here, just his conclusion. In any case, my reading is that there were arguments not addressed (see below) and that it is just not credible that all this LC discussion results in no change at all in the draft, and it is basically not at all clear to me that what seems like a more or less 50:50 set of folks in each camp, (I didn't count though), with both "camps" making reasonable arguments for and against, can in this case constitute even a very rough consensus. So I'd like to chat about that with the IESG in case my biased opinion turns out to better map to the mail archive than Barry's AD evaluation of the last call. (2) I also don't believe the point I raised about the scope of this was ever addressed. Does emitting this apply to just the response to that request, or to the origin or to whatever the server thinks is correct or what? Having an undefined semantics and an undefined scope seems broken to me at least but the point was never addressed that I can see. 
(3) The proxy injecting this header field means that the user cannot get any signal that this has been done and appendix C even says that the site should not allow the UA to unset the proxy's preference. This also encourages the use of plaintext. Other than saying "yeah, that's what's done" I don't believe that this problem was explored at all, never mind addressed. (4) The point raised by Joe Hall of CDT that emitting this signals a higher probability that the site is dealing with a minor (and hence perhaps with a user more easily socially exploited) is I think valid and is not reflected in the draft nor much in the discussion. While the author offered to add text, no change occurred. (5) I don't see where the point raised by Christian Huitema was dealt with - that the IETF standardising this will likely lead to (in particular) governments who wish to censor content requiring conformance to RFC7xxx. I'm not sure that we have a good BCP telling us to not collude with such, but I don't believe that point was addressed in the LC. (Note: it's quite possible that I missed some things that were dealt with, or that there's scope for disagreement as to whether or not things were or were not addressed.)
- I didn't raise this during LC so I'll just make it a comment, but I also find it objectionable that Appendix C says that even if the user stops sending this preference then the servers should continue to behave as if it is being sent. That just seems like broken protocol behaviour to me esp with no defined semantics. - "become much simpler" is IMO utterly clearly not correct yet not even that obvious change was made after IETF LC.
I am abstaining, because I have an issue with what the safe-hint header will mean in reality and I do see issues coming out of censorship (i.e., when proxies are inserting this header w/o letting the user know). It might even be an illegal action in some countries to inject such a header into the communication between browser and server, as this will modify the communication. Take Germany as one example. The document is not and cannot specify what objectionable content is, as this depends on too many factors, such as cultural background. In short, I do share what Adrian, Alissa, Kathleen and Stephen have already said.
Still not too sure how to ballot this document. So no record for now. So basically the proxy for my company/provider/country will decide what's "safe" versus "objectionable" content. Btw, http://charliehebdo.fr/ "objectionable" content or not? It depends, right... So basically a proxy is required, right? We can't expect web servers to flag themselves what could be "objectionable" content, like advertisement, porn, or charliehebdo. (that would remind me of the evil bit april 1st RFC: If you're evil, say it.). That could work if the laws are changed, the laws in all countries. And obviously there is an international agreement on what "safe" means. Let's face it, that will not happen. Note: Moving a server to a different country because different rules apply is no big deal. Therefore, I'm kind of hesitant between: - publishing this document doesn't matter because it will not be widely implemented, so it won't matter much. - this specification might be enforced in the wrong way, so it's evil and we should not publish it. I want to hear about the different arguments before balloting.
Thanks for your work in this area. I do think it's important to experiment with ways to improve options for getting safe content without needing a proxy service to filter it out for you. I do think there is more to work through before this goes forward, but think it is worth trying to figure out if there is something we can do here. I do agree with the concerns of other ADs on this potentially being used for censorship, although think experimenting to see if this or something similar works would be worthwhile. Since this just sets a technical option, and you can already do this with cookies, I'd be happier to see this just between a server and client. If this can be altered by middleboxes, there is the potential for censorship with MITM approaches if a cleartext session is used. Assuming this is between a server and client, with the onus on the server to provide "safe" content (whatever that means to the server), there could be regional restrictions as to what that means put in place. However, I don't think there is anything to stop that from happening now and we have already had offshore/out-of-country web servers to get around taxes and other local/regional requirements. I don't think this flag will cause censorship issues forced on servers in a region where it wouldn't happen anyway. I do think there is an opportunity to reduce the number of middleboxes (proxy web servers filtering by DNS or URLs) used by organizations that can afford to run these services to protect their users from objectionable content. We are just talking about a technical option with no policy definition or requirements for it provided. The current methods require the ability to deploy a box and pay for personnel to administer it. Although there are a few implementations listed, how has this been working in experiments? I see this draft is listed as standards track. I'd prefer the option to be strictly between client and server and not with middleboxes requiring cleartext. 
It would help to have the text clearly state that this might be used at organizations to prevent objectionable content in order to meet HR requirements, removing the focus from children so this is seen as a broader solution. If this is at a school or corporate setting, sure, it's easier to make this setting with a proxy, but realistically, most use standard images running on the computers (or should) that get overwritten on a regular basis to wipe away any malware automatically (kiosks at schools; corporate may not get overwritten as often or at all). With this approach, it's easy enough to have this setting maintained in the browser/computer. Perhaps removing or changing the example on schools would help? Thanks for removing the emphasis on middleboxes. I think it would help to emphasize that the setting should be in the browser and could be on default images at schools/organizations for client to server communications. Grade schools might not be using images, but Universities do as they have learned from experience with malware. I'm also wondering how this might interact with ads. I know there are different technologies at play for ad insertion, but don't know the details of how they all work. Would ads be "safe" as well? Ideally, this would happen at the web server as we should be striving for encrypted sessions (as opposed to ad insertion by a middlebox, which is likely how this is done), which would change how this flag works from the current proposal that requires cleartext. Do ad servers recognize this or might ads be racy on a "safe" page? If ads are not safe, then this is really meaningless to prevent the issues I've had to deal with as a former CISO at a few organizations where we have had to investigate and fire people who have viewed porn at work. This could be really helpful to others in the same position I was in, preventing such access at work. 
I'm not as worried about the cleartext as I think the push for encryption will lead to a change in how this flag is set and where it can be changed rather than prevent people from turning on encryption. From the EFF statistics, 30% of web sites are encrypted (may be higher now) and from our corporate observations, as of last June, 78% of EMC employee web traffic is encrypted (I hope that's about the same for other organizations, but don't have access to their stats).
I agree with Joel's discuss. My concerns are with the ability to insert this flag to enable censorship. I do also have concerns about using a binary flag for "safe". For instance, is a webpage with comments filled with graphic sexual threats "safe"? Sad to say, they are common. Does sexual material (except for things like sex ed, breastfeeding help, etc.) get grouped with violence, offensive political, etc? Is it better to have 32 semantic-less flags so that nuance can be better supported? I agree with Kathleen that an effort to improve the draft is worthwhile.
I whole-heartedly agree with Stephen and Joel on this. This is an unprotected field indicating a desire to receive content deemed safe by someone else's subjective view. Combine that with the encouraged use of plaintext and proxies, and the result is not good. Even the Mozilla developers recognize that the issues here are not network/protocol issues (just like DNT).
I wish I had had time to engage on this one during earlier discussions or IETF LC, but I did not. My apologies for that. I see no reason to standardize this bit. Although folks have argued that it should be standardized because there are multiple independent implementations of it, I think that is a red herring given that there is no standardized semantic associated with it. "Objectionable" and "safe" are characteristics that are defined differently by different sites, users, and cultures; their meanings can change over time; and as evidenced by existing sites and applications that provide their own content filtering preference settings, those preferences are often not binary. (This is in fact one way in which the "safe" preference is distinct from DNT, because in the DNT case they actually tried to define the semantic. That was incredibly challenging, and in the case of "objectionable" content I do not believe it is possible.) The same is true for the "safer" concept described in Appendix C. Which is safer, filtering violent content or filtering nudity (or both)? Different users would answer that question differently. For a site that offers both of those choices independently, advising sites to associate the "safe" preference with the "safer" of those three options is meaningless to the user -- the user will still have to rely on the site's concept of which one is "safer" or "safest" if they want to experience the benefit of this preference. Given the lack of a standardized semantic, I also think proliferation of this header could incentivize increased censorship.
Since the deployment of this header is designed to dramatically increase the number of requests in which a preference for "safe" content is signaled (it's designed to be sent on all requests), sites looking for an excuse to take down content altogether, or legal authorities looking for data to back up claims that the web should be rid of particular kinds of content in the first place, or that the preference should be required to be on by default, will potentially have lots more data to back up their claims. Furthermore, having a country-level proxy insert this could dramatically change content availability for a large user population with very little effort on the censor's side. I don't see the need to wait for any of these things to happen and then try to put the genie back in the bottle, because I doubt that will be possible. I also don't see how the various arguments made about proxies inserting this preference can be properly reconciled. On the one hand, proponents of the header have argued that one of the reasons its presence does not necessarily indicate that the user is a minor or otherwise vulnerable is that a proxy could insert the preference on behalf of many users. On the other hand, the idea of a proxy inserting this against a particular adult user's wishes as a means to censor his Internet connection is clearly anathema to most folks, and there is discussion about removing the proxy text. I don't see a solution where we could have it both ways -- have the preference indicate nothing in particular about the user, while discouraging proxies from inserting it. I have sympathy for parents for whom the landscape of sites and apps offering parental controls is complex. But I think the risks for the Internet, users, and the IETF of standardizing this preference far outweigh the benefits to parents. As long as "objectionable" content is in the eye of the beholder, setting these preferences site-by-site provides a useful safeguard.
I'll quote Stephen here, since he makes a point that I championed during the discussion. I would like to discuss this. (3) The proxy injecting this header field means that the user cannot get any signal that this has been done, and appendix C even says that the site should not allow the UA to unset the proxy's preference. This also encourages the use of plaintext. Other than saying "yeah, that's what's done" I don't believe that this problem was explored at all, never mind addressed. Injection of this value by proxies gets to the very heart of the question of consent between the two parties, the requester and the sender, and of the agency of the requester. Encouraging transparent middleboxes to mess with the contents of flows is IMHO an irresponsible act on the part of the IETF. I could have held my nose and passed this without comment were it to very strongly discourage the acceptance of such a hint over non-confidential, non-tamper-resistant channels.
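For context on the injection concern raised in the comments above: the preference is carried as an ordinary request header (the draft uses the RFC 7240 Prefer mechanism), so on a cleartext connection a transparent proxy can add it without the user's knowledge. The sketch below is purely illustrative, not real proxy code, and the header-merging logic is an assumption about how an injector might behave:

```python
def proxy_inject_safe(request_headers):
    """Simulate a transparent proxy adding the "safe" preference to a
    cleartext request. Over TLS a proxy could not do this, which is why
    the draft's tolerance of plaintext matters to the objections above."""
    headers = dict(request_headers)
    # Merge with any Prefer tokens the user agent already sent.
    tokens = [t.strip() for t in headers.get("Prefer", "").split(",") if t.strip()]
    if "safe" not in tokens:
        tokens.append("safe")
    headers["Prefer"] = ", ".join(tokens)
    return headers

user_request = {"Host": "example.com", "User-Agent": "demo"}
modified = proxy_inject_safe(user_request)
print(modified["Prefer"])        # prints "safe" -- the user never set this
print("Prefer" in user_request)  # prints False -- original request untouched
```

The point of the sketch is that the origin server sees an identical header whether the user or a middlebox set it, which is exactly the "preference indicates nothing in particular about the user" ambiguity discussed above.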
I echo Adrian's thanks for positioning this as Experimental, and describing what the experiment is. I echo Barry's "well-written document". I'm delighted to read this text: 12.4. Transport behaviour This proposal does not modify the way RADIUS interacts with the underlying transport (UDP). That is, RADIUS keeps following a lock-step behaviour, that requires receiving an explicit acknowledge for each chunk sent. Hence, bursts of traffic which could congest links between peers are not an issue.
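The lock-step behaviour quoted from section 12.4 can be sketched as follows; this is a simplified model under the assumption of a reliable request/response channel (the `channel_ack` callback is hypothetical), meant only to show why the sender can never burst:

```python
def lockstep_send(chunks, channel_ack):
    """Send chunks one at a time, waiting for an explicit acknowledgement
    (via channel_ack) for each chunk before sending the next one.
    Returns the maximum number of chunks ever in flight at once."""
    in_flight = 0
    max_in_flight = 0
    for chunk in chunks:
        in_flight += 1
        max_in_flight = max(max_in_flight, in_flight)
        if not channel_ack(chunk):  # block until the peer acknowledges
            raise RuntimeError("peer rejected chunk")
        in_flight -= 1
    return max_in_flight

# Regardless of how many chunks are queued, at most one is in flight:
print(lockstep_send([b"a" * 100] * 5, lambda c: True))  # prints 1
```

Because each chunk waits on its own acknowledgement, the window is effectively one, which is the property the quoted text relies on to rule out congestion-causing bursts.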
Thank you for positioning this as Experimental and for writing sections 2 and 3: they cleared up any concerns I had. In section 3 you say: Instead, the CoA client MUST send a CoA-Request packet containing session identification attributes, along with Service-Type = Additional-Authorization, and a State attribute. Implementations not supporting fragmentation will respond with a CoA-NAK, and an Error-Cause of Unsupported-Service. Since this is not new behaviour (i.e. an implementation that is not part of this experiment will follow this behaviour according to previous specifications), a reference would be nice. Perhaps... s/the CoA client MUST/according to [RFCfoo] the CoA client will/
A well-written document, and thank you for making it Experimental. Looking forward to reports from the wild on how well or badly it works.
I have no concerns with the technical content of the document; I did a quick review, and nothing causes concern. But I do want to briefly DISCUSS a procedural point. I am quite sure I will clear the DISCUSS on the call. The issue of "Updates" worries me a bit. Here's the part that concerned me: Note that if this experiment does not succeed, the "T" flag allocation would not persist, as it is tightly associated to this document. That's true, but using Updates will mean that RFC 6929 will *always* have an "Updated By" pointer pointing to this Experimental RFC. That itself could cause serious confusion, and would likely result in the "T" allocation never being able to be used again. I really don't think this document needs to, or should, update the other documents. 2865 and 6158 are only being updated because this document violates a MUST and a SHOULD. But it's an experiment. Any implementation of 2865 or 6158 not participating in the experiment is going to need to honor those MUSTs and SHOULDs. There's no real reason to call out this update to people who are only looking at those two documents. 6929 is a bit trickier, because of the issue of other folks trying to use the reserved value, but as you say in the document: "not such a great number of specifications extending that field are expected." If/when this document goes to Standards Track, that's the time to let people know. If there's a real fear of this experiment interfering with other implementations, it really is time to make the registry. Finally, I'm concerned about the precedent of Experimental documents updating Standards Track and BCP docs, especially on the grounds provided. Perhaps there's a case for that to happen once in a while, but we don't want pointers to possibly failed experiments to be in the meta-data forever. Like I said, we can sort this on the call pretty quickly. But I want to make sure we understand the implications here.
In the security considerations section, I didn't see a discussion of packet reassembly and associated security issues such as overlapping fragments. If there is a reference that already covers that in the draft, I missed it and would appreciate you pointing me to it. In a quick search, I found a couple of references that may be useful for some text/reference on the details of this attack type, though they are at the IP layer. The draft already prevents other attack types involving the ordering and size of fragments (with lengths included), so this is just meant to ask about overlapping fragments. Section 4 of: https://tools.ietf.org/html/rfc1858 https://tools.ietf.org/html/draft-ietf-6man-overlap-fragment-03 For example, fragments might overlap so as to change any part of the AAA data once reassembled.
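To make the overlapping-fragment concern concrete (cf. RFC 1858 section 4, but at the application layer): with a naive "last write wins" reassembler, a later fragment whose range overlaps an earlier one silently rewrites already-received data. The explicit offsets below are hypothetical, purely to sketch the generic attack class being asked about:

```python
def naive_reassemble(fragments, total_len):
    """fragments: list of (offset, bytes) pairs; later fragments
    overwrite earlier ones -- a deliberately naive reassembly policy."""
    buf = bytearray(total_len)
    for offset, data in fragments:
        buf[offset:offset + len(data)] = data
    return bytes(buf)

legit = [(0, b"User=alice;Role=user;")]
# An overlapping fragment rewrites the role inside already-received data:
attack = legit + [(11, b"Role=admin")]
print(naive_reassemble(legit, 21))   # b"User=alice;Role=user;"
print(naive_reassemble(attack, 21))  # b"User=alice;Role=admin"
```

This is the kind of post-reassembly tampering the comment asks the security considerations to address or rule out.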
Thanks for your work on this draft. I also agree that it is well written and well placed as Experimental.
This seems a very well written document, and was easy to read. In particular, thanks very much for Section 2: it's one of the best of that sort of explanation I've seen, and should be a model for other such specs. Version -11 addresses all the minor comments I had; thanks.
Bert Wijnen's OPS-DIR review, which I think was adequately discussed. I did the OPS-DIR review for document draft-ietf-radext-radius-fragmentation-09. Such OPS-DIR reviews focus on operational (operator) and management aspects. I notice that there is a section: 12. Operational considerations But the considerations do not (if I understand it correctly) describe any considerations for operating or managing the Internet network. They have to do with the process of knowing/assigning new values or flags in the "Reserved" field of the "Long Extended Type" attribute [RFC6929]. So that seems to be more of an IETF-process consideration than an operational consideration. Since we often use a section name of "Operational Considerations" to describe considerations that network operators must understand and/or be aware of, it may be confusing for readers of this document. Maybe a solution is to rename section 12 to "Future IETF document considerations" or some such?? nit and/or minor question: I wondered about: 2. Status of this document This document is an Experimental RFC. It defines a proposal to allow sending and receiving data exceeding the 4096 octet limit in RADIUS packets imposed by [RFC2865], without requiring the modification of intermediary proxies. I thought one would never state inside a document that it is Experimental, do we? The status, I thought, was an external tag, no? Anyway, given the content of section 2, it is fine. That text will need to be changed anyway if this ever advances onto the standards track. Bert Wijnen
I wish I'd had time to read this properly and ballot yes, but I didn't, sorry;-) This is however good and useful work.
Please move Appendix A into section 1.3, as it would be better to have all terms, symbols, and variables used in the draft defined in the terminology section. Russ Housley noticed this, and I agree with him that it would be good to fix. In 1.4, should this include key sizes as well, since they are not discussed? I see the explanation in section 5 and am just wondering if the procedures are the same when key properties change, as opposed to expiration and revocation, which are both mentioned in the draft. The SecDir review found a few nits you should probably fix as well: https://www.ietf.org/mail-archive/web/secdir/current/msg05318.html
This being outside my domain of focus, I'm curious about the choice to publish this document now. There seem to be a number of places where it is suggested that a currently unspecified interface will be defined by I2RS, although I gather that this document is not setting requirements for what I2RS will produce. This jumped out at me especially in the case of the ABNO control interface, which seems like a central component. I was also wondering whether ALTO is already being used in the ways described in this document, and Section 3.9 also caught my eye, as it seemed odd to go ahead with what are essentially placeholder use cases to be filled in later. I realize there is a trade-off between describing a high-level architecture to drive specification of the components and identifying how existing components can be fit together, but the combination of all of the above items made me wonder about the utility of writing this architecture down now, when it could perhaps change depending on how the missing pieces get specified, implemented, and used. The security considerations seem to emphasize the sensitivity of the network data involved in ABNO and the corresponding need to protect it, but couldn't the application data involved be equally sensitive and deserving of protection from unauthorized access? That point seems to be missing from the text.