IESG Narrative Minutes
Narrative Minutes of the IESG Teleconference on 2012-04-12. These are not an official record of the meeting.
Narrative scribe: John Leslie (The scribe was sometimes uncertain who was speaking.)
Corrections from: Barry, Pete, Benoit
2. Protocol Actions
2.1 WG Submissions
2.1.1 New Items
2.1.2 Returning Items
2.2 Individual Submissions
2.2.1 New Items
2.2.2 Returning Items
3. Document Actions
3.1 WG Submissions
3.1.1 New Items
3.1.2 Returning Items
3.2 Individual Submissions Via AD
3.2.1 New Items
3.2.2 Returning Items
3.3 IRTF and Independent Submission Stream Documents
3.3.1 New Items
3.3.2 Returning Items
1254 EDT break
1300 EDT back
4 Working Group Actions
4.1 WG Creation
4.1.1 Proposed for IETF Review
4.1.2 Proposed for Approval
4.2 WG Rechartering
4.2.1 Under evaluation for IETF Review
4.2.2 Proposed for Approval
5. IAB News We can use
6. Management Issues
7. Agenda Working Group News
Barry: wrong charter sent; best thing is to send version we intended to approve; does it need to go on another telechat; changes not insignificant, not sure whether IESG has seen the correct version
Russ: post correct version, diffs, ask if anyone wants to discuss again
Barry: there is a correct version, I'll sort it out
1356 EDT Adjourned
(at 2012-04-12 07:30:04 PDT)
It is not clear to me why it is necessary to create the protocol specific variant of the RFC5226 Review process described in section 11.1, 11.2, 11.3, 11.4. Creating new variants of the IANA process creates confusion, and unless there is a good reason specific to this protocol, one of the standard IANA processes should be called out. If the plan is to have a list review followed by an expert review of the list discussion, the timetable needs to call out time for the list to do a review and then a time for the expert to do their review.
I read: This specification replaces and obsoletes the OAuth 1.0 protocol described in RFC 5849. I was not familiar with OAuth, and one question bothered me: why should I implement/upgrade to OAuth 2.0, compared to 1.0? It's not mentioned in the draft. I had to search elsewhere to find the answer: in the current charter, which says: In April 2010 the OAuth 1.0 specification, documenting pre-IETF work, was published as an informational document (RFC 5849). The working group has since been developing OAuth 2.0, a standards-track version that will reflect IETF consensus. Version 2.0 will consider the implementation experience with version 1.0, a discovered security vulnerability (session fixation attack), the use cases and functionality proposed with OAuth WRAP [draft-hardt-oauth-01] and will * improve the terminology used, * consider broader use cases, * embody good security practices, * improve interoperability, and * provide guidelines for extensibility. Adding at least the first two sentences (or something similar), plus one about the "discovered security vulnerability", would make sense, at least to me... unless this is specified in a different document (maybe I-D.ietf-oauth-v2-threatmodel?)
I came to a similar conclusion as Benoit: readers of this document would benefit from a one-paragraph summary of the reasons for the development of OAuth 2.0. A summary or overview of technical differences would be helpful as well, if it's not too lengthy. I also agree with Stewart's DISCUSS regarding the adoption of modified RFC 5226 review processes rather than reusing existing processes.
In section 1, right before 1.1 begins, HTTP is called a transport protocol. While this tends to happen, it still isn't correct. It would be better to reword the sentence replacing: "with any other transport protocol" to something more like: "over any other protocol"
I should like to see a statement along the lines of "OAuth 2.0 is not intended to be backward compatible with OAuth 1.0. The protocol versions may co-exist in the network and implementations may choose to support both. However, it is the intention of this document that new implementations support OAuth 2.0 as specified in this document, and that OAuth 1.0 is used only to support existing deployed implementations." --- It would be useful to include a concise section titled "Changes from OAuth 1.0 (RFC 5849)". This would help implementers moving from 1.0 to 2.0 (and would help reviewers as well :-)
Can't Appendix A be folded into Section 12? Perhaps make it 12.1?
4.3.2 says that the authorization server MUST "validate the resource owner password credentials", but it doesn't say exactly how one might do that. For example, it doesn't say whether to compare things case-sensitively or (and this is the reason it even occurred to me) whether one should be normalizing the UTF-8. I'm fine with that being left as an exercise to the reader if this is the common practice in security protocols. And UTF-8 doesn't make this special; even comparing US-ASCII has its quirks. The UTF-8 just made it noticeable to me. 8: Just confirming that you are OK with the following legal ABNF productions: type-name and param-name could each be "...---..." response-name could be "_____" error-code could be "z...---..." Those all OK productions?
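For what it's worth, a quick sanity check (a Python sketch, approximating the draft's name-char grammar as I read it; the regex is my own rendering, not text from the draft) confirms those strings are legal:

```python
import re

# Approximation of the draft's ABNF as I read it:
#   name-char = "-" / "." / "_" / DIGIT / ALPHA
#   type-name / param-name / response-name / error-code = 1*name-char
NAME_CHAR = re.compile(r'^[-._0-9A-Za-z]+$')

for s in ("...---...", "_____", "z...---..."):
    # All three are legal under the grammar as written.
    assert NAME_CHAR.match(s), s
```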
Please consider the following substitutions for the websites and email lists pointed to in section 11.1: http://www.iesg.org -> http://www.ietf.org/iesg firstname.lastname@example.org -> email@example.com
At v25, I appreciate that you must have slayed more than your fair share of dragons (and some more than once I bet). I appreciate your efforts. Just a couple of things I'd like to discuss: 0) General: I found the lack of ABNF somewhat disconcerting in that implementers would have to hunt through the spec to figure out all the values of a given field. For example grant_type has different values based on the different kinds of access_token requests - four to be more precise - but there's no ABNF for the field. There are many examples of this. It would greatly aid implementers if a) the ABNF for all fields were included in the draft and b) all the ABNF was collected in one place. I had individual discusses for each field that had missing ABNF, but it was getting out of hand so I'm just going to do this one general discuss on this topic. 1) General: If I buy your argument in s1.7 that this is a framework and you can leave bits needed to fully implement it out, then should this draft not have "protocol" in the title, or be tweaked to acknowledge it's not complete? I can hear you all groaning now, but it's truth in advertising. The other RFCs that have been oft quoted as frameworks, like PKIX and CMS, that would allow you to not pick MTI, like the token format, don't have "protocol" in the name. Adding something like ": Framework" after the title so that it's clear this ain't the whole shooting match would, I think, be truth in advertising. It's just a little misleading that the abstract/intro lead you to believe that if you implement this draft you'll be accessing these resources, but you have to dig in to s1.7 and s7 to know that the bits needed to actually determine access aren't defined in the draft.
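To illustrate the "collected in one place" point, here is a sketch (in Python; the set of values is my reading of the -25 draft's access-token request sections, and the helper function is hypothetical) of what a single consolidated definition of grant_type would give implementers:

```python
# Hypothetical helper: the four grant_type values defined across the draft,
# collected in one place as the review asks for (my reading of -25).
GRANT_TYPES = {
    "authorization_code",  # s4.1: authorization code grant
    "password",            # s4.3: resource owner password credentials
    "client_credentials",  # s4.4: client credentials grant
    "refresh_token",       # s6:   refreshing an access token
}

def valid_grant_type(value: str) -> bool:
    """True iff value is one of the grant_type values the draft defines."""
    return value in GRANT_TYPES
```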
2) Figure 1: Because so many of the later sections refer to the not-shown protocol flow I decided to make it a discuss (though I'm sure more than one person would say this is a comment): s1.2: Figure 1: If the preferred mechanism for the client request is to go indirectly through the Authorization Server it would be really good to depict that. You could just add that to Figure 1 or add a Figure 2. Further, shouldn't the out-of-scope bits also be shown: client registration, and the interaction between the authorization and resource servers, so we get the complete picture? You can mark them out-of-scope in the figure. And another thing, the bearer token picture shows client credentials in (C); shouldn't this also show them as optional? 3) s1.6/s2.3.1/s3.1: So some might consider this nit-picking, but when you say "Whenever TLS is required by this specification" do you mean "Whenever this specification requires TLS be used"? MTI doesn't mean mandatory to use, but in this case I think you do mean mandatory to use because it ships around cleartext passwords. This also comes up in s2.3.1 and 3.1 where the text indicates: The authorization server MUST require TLS as described in Section 1.6 when sending requests using password authentication. I'd just replace require with use in both places. Note that s22.214.171.124 seems to have it right: "require the use of TLS". 4) s1.7: When you say "authorization server capabilities" you're talking about the client discovering which token format is supported? I think the draft needs to be clear that without these underdefined things the protocol can only interop with the clients being configured a priori. 5) s1.7: Since you brought it up (and I thank you for being upfront about it) and you provided some examples, shouldn't the list of underdefined things be listed completely? That way if somebody wants to profile this for their use they know all the bits and pieces they need to write down.
6) s2: The protocol to register the client is out-of-scope, but are the directions for the client developer in scope? If so, shouldn't 2119 language be used here: When registering a client, the client developer: - MUST specify the client type… - SHOULD provide its client redirection … - MUST include any other … 7) s2.1: How is trust established? 8) s2.2: How unique is the client_id? Is it just for this server or universally unique? If it's the latter, how do you guarantee this? Is there some requirement for the length of the string? 9) s2.3.1: Where is this described: Since this client authentication method involves a password, the authorization server MUST protect any endpoint utilizing it against brute force attacks. 10) s3.1/s3.1.2/s3.2/etc.: Why the MUST NOT here, and what happens if a fragment is included: The endpoint URI MUST NOT include a fragment component. 11) s3.1/s3.2: What happens if they are included more than once - is it rejected or is the first one accepted?: Request and response parameters MUST NOT be included more than once. 12) s126.96.36.199: This section made me scratch my head a bit. In which of the scenarios' flows is the SHOULD intended (i.e., where do you think TLS won't be implemented)? Is it (D) in Figure 3? 13) s188.8.131.52: How does the authorization server warn the resource owner about the insecure endpoint? 14) s184.108.40.206/s220.127.116.11: Under what circumstances wouldn't you inform the resource owner of the error (i.e., why isn't that SHOULD a MUST)? 15) s18.104.22.168: Are there any security considerations that would result if the client includes third-party scripts? 16) s22.214.171.124: How is this done: If third-party scripts are included, the client MUST ensure that its own scripts (used to extract and remove the credentials from the URI) will execute first.
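On item 11, the draft doesn't say, but an implementer who chooses "reject" might write something like this (a Python sketch; the helper name and the reject-rather-than-accept-first choice are my own, not from the draft):

```python
from urllib.parse import parse_qs

def parse_request_params(query: str) -> dict:
    """Parse a request's query string, rejecting any parameter that is
    included more than once (one reading of the MUST NOT in s3.1/s3.2)."""
    parsed = parse_qs(query, keep_blank_values=True)
    duplicates = [name for name, values in parsed.items() if len(values) > 1]
    if duplicates:
        raise ValueError("parameter(s) included more than once: %s" % duplicates)
    return {name: values[0] for name, values in parsed.items()}
```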
17) s126.96.36.199/s188.8.131.52: The errors in these two sections have the same values but slightly different meanings: one refers to authorization codes and the other refers to access tokens. Is it wise to use the same name for the error values? This issue would go away if the error_description was required. 18) s4.1.2/s184.108.40.206/s4.2.2/s220.127.116.11: Don't you need to say which type of HTTP status code is returned, e.g., is it always 302 as shown in the examples? 19) How is the expiry time of the access token provided to the resource server? Is this supposed to be documented in the access token documents? 20) The bearer token spec contained character set restrictions on the error, error_description, and error_uri: Values for the "error" and "error_description" attributes MUST NOT include characters outside the set %x20-21 / %x23-5B / %x5D-7E. Values for the "error_uri" attribute MUST conform to the URI-Reference syntax, and thus MUST NOT include characters outside the set %x21 / %x23-5B / %x5D-7E. Do these apply here as well? This might get cleared up with some ABNF. 21) s10.3: Given Richard's point in the GEN-ART review on 10.3, I think it might be worth adding the text you suggested. 22) s10: About the parameters that require secure transmission/storage: would a compromise be to just list the ones that require secure transmission/storage? We often do/require this for protocols (e.g., SNMP, NETCONF).
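For item 20, the quoted bearer-spec character sets translate directly into checks like these (a Python sketch of my own; note both sets exclude '"' (0x22) and '\' (0x5C), and error_uri additionally excludes SP):

```python
import re

# The bearer spec's character sets, as quoted above:
#   error / error_description: %x20-21 / %x23-5B / %x5D-7E
#   error_uri:                 %x21 / %x23-5B / %x5D-7E
ERROR_TEXT = re.compile(r'^[\x20-\x21\x23-\x5B\x5D-\x7E]*$')
ERROR_URI = re.compile(r'^[\x21\x23-\x5B\x5D-\x7E]*$')

assert ERROR_TEXT.match("invalid_request")
assert not ERROR_TEXT.match('embedded "quote"')  # 0x22 is excluded
assert not ERROR_URI.match("http://x/a b")       # SP excluded from error_uri
```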
0) s2.1: Assume either you'll rev the draft or Stephen will add the text via an RFC editor note: A client may be implemented as a distributed set of components, each with a different client type and security context (e.g. a distributed client with both a confidential server-based component and a public browser-based component). If the authorization server does not provide support for such clients, or does not provide guidance with regard to their registration, the client SHOULD register each component as a separate client. 1) s1: r/created by passwords/inherent in passwords 2) s1: r/OAuth with any transport protocol other than HTTP is undefined./OAuth with any transport protocol other than HTTP is out-of-scope. 3) s1: General: Nice reference to RFC 4949 in s1.8, but that got me to wondering whether you should use the term "capability token" as opposed to "access token", where the definition of capability token is: (I) A token (usually an unforgeable data object) that gives the bearer or holder the right to access a system resource. Possession of the token is accepted by a system as proof that the holder has been authorized to access the resource indicated by the token. 4) s1.5: r/Issuing a refresh token is optional/Issuing a refresh token is OPTIONAL ? 5) s1.5: r/If the authorization server issues a refresh token, it is included when issuing an access token./If the authorization server issues a refresh token, it is included when issuing an access token (i.e., step (D) in Figure 1). 6) s1.6: Did you mean that additional transport-layer *security* mechanisms can be implemented? 7) ID-nits is coughing on your 2119 paragraph. Replace ' with " and it ought to go away.
8) Really, we're going to use: application/x-www-form-urlencoded? 9) s18.104.22.168: add a period to the end of: prior to utilizing the authorization endpoint 10) s22.214.171.124/s4.1.2/s4.2.2/s5.1: SHOULD in the following: The authorization server should document the size of any value it issues 11) s5.2: 1st sentence indicates 400 is the response, but in invalid_client it says it might be a 401. Should the 1st sentence use 4xx instead to indicate it's from the client error set of codes? 12) This might have been settled already: s11.4.1: A GEN-ART comment on the bearer token draft might have triggered a necessary change to the OAuth Extension Error Registry to allow bearer token errors to use the same registry. Would need to add a fourth usage location: "resource access error response" to be able to use this registry for bearer error types. From Richard's GEN-ART review: 13) On redirects: I think it might help to add somewhere that redirects and directs can be accomplished through HTTP redirects or through other implementation alternatives. 14) s2.3.1: I think it would help to add something that says this is only used with the Token Endpoint (3.2), which is limited to POST only.
In Section 1, I suggest changing: "for use with other transport protocols" to something more like: "for use over other protocols". HTTP is not a transport protocol.
Section 2.1 states : Clients SHOULD make authenticated requests with a bearer token using the "Authorization" request header field with the "Bearer" HTTP authorization scheme. Is the SHOULD simply to show a preference for the Authorization request approach over the methods defined in Sections 2.2 and 2.3? If so, in what type of situation would the Authorization request approach not be used?
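For context, the preferred Authorization header method looks like this from a client's side (a sketch; the URL and token value are illustrative only, not from the spec):

```python
import urllib.request

# Sketch of a client making an authenticated request using the
# "Authorization" request header with the "Bearer" scheme (Section 2.1).
# The resource URL and token value here are made-up examples.
req = urllib.request.Request("https://server.example.com/resource")
req.add_header("Authorization", "Bearer mF_9.B5f-4.1JqM")

assert req.get_header("Authorization") == "Bearer mF_9.B5f-4.1JqM"
```

The alternatives in Sections 2.2 and 2.3 instead carry the token in the form-encoded body or the URI query, which is presumably why the header method is only a SHOULD.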
The Gen-ART Review by Alexey Melnikov on 10-Apr-2012 reports that two major issues that were raised in an earlier review were not addressed. I have added my own thoughts in addition to those provided by Alexey. First, the "scope" attribute is a space-delimited list of scope values indicating the required scope of the access token for accessing the requested resource. In some cases, the "scope" value will be used when requesting a new access token with sufficient scope of access to make use of the protected resource. The "scope" attribute MUST NOT appear more than once. The "scope" value is intended for programmatic use and is not meant to be displayed to end users. In response to the previous review by Alexey, the document editor provided explanation in email; however, this response was not reflected in the subsequent update to the document. More information about the "scope" attribute is needed, especially about the manner in which it is used and the possible values. As this attribute is not meant to be displayed to end users, please indicate what values are possible and which entity can allocate them. Is there an IANA registry for possible attribute values? If so, what are the rules for assigning a new registry value? Second, Section 3.1 specifies Error Codes. Alexey suggested the use of an IANA registry for this field. Apparently there is already a registry created by draft-ietf-oauth-v2. However this document does not register values defined in this section in that registry. Please explain why the IANA registry is not leveraged by this document.
Mark Nottingham's Applications Area review <http://www.ietf.org/mail-archive/web /apps-discuss/current/msg03805.html> has a couple of comments that I think deserve further reply: * Section 1: Introduction The introduction explains OAuth, but it doesn't fully explain the relationship of this specification to OAuth 2.0. E.g., can it be used independently from the rest of OAuth? Likewise, the overview (section 1.3) seems more specific to the OAuth specification than this document. As I read it, this mechanism could be used for ANY bearer token, not just one generated through OAuth flows. If it is indeed more general, I'd recommend minimising the discussion of OAuth, perhaps even removing it from the document title. I agree that the title would be better simply as "HTTP Bearer Tokens", and then explain in the Abstract and Intro that the motivation and intended use of these Bearer Tokens is the OAuth 2.0 specification. A possibly useful side effect of this change might be that you can make OAuth 2.0 an informative (as against a normative) reference, and that these things could be reused for other purposes in the future. Not a huge deal, but I (like Mark) was unconvinced that the reference to OAuth in the title was necessary. * Section 3 The WWW-Authenticate Response Header Field The difference between a realm and a scope is not explained. Are they functionally equivalent, just a single value vs. a list? Some text, and probably an example, might help explain this a bit better. One of his comments asked for some additional review. I don't have a personal opinion whether this is needed, but perhaps you should pursue this: * General The draft currently doesn't mention whether Bearer is suitable for use as a proxy authentication scheme. I suspect it *may*; it would be worth discussing this with some proxy implementers to gauge their interest (e.g., Squid). Finally, there was his major issue.
I have not put this in a DISCUSS since, in all honesty, I don't fully understand the implications here. I intend to re-post to the apps-discuss list to see if we can get a better explanation of what the issue is. However, I strongly urge the AD, shepherd, and chairs, as well as the authors, to review this concern. If I get more information that makes the issue clear to me, I may ask the IESG to discuss: * Section 2.3 URI Query Parameter This section effectively reserves a URI query parameter for the draft's use. This should not be done lightly, since this would be a precedent for the IETF encroaching upon a server's URIs (done previously in RFC 5785, but in a much more limited fashion, as a tactic to prevent further, uncontrolled encroachment). Given that the draft already discourages the use of this mechanism, I'd recommend dropping it altogether. If the Working Group wishes it to remain, this issue should be vetted both through the APPS area and the W3C liaison. (The same criticism could be leveled at Section 2.2 Form-Encoded Body Parameter, but that at least isn't surfaced in an identifier)
While editing this I saw Mike's responses, so I just cut them in to see if we can't have one thread going on this draft for my discusses/comments. I added Mike's responses in between <mike> and </mike> #1 was updated based on input from Julian. #9 was updated based on Alexey's GEN-ART review. #13 is new. First off, I appreciate that you have likely slayed more than a few dragons working on this draft and I appreciate your efforts. Would just like to clear up a few things: 1) I'm hoping the answer to this one is "there's no problem" but I gotta ask and maybe the APPs ADs can confirm: Is there any issue with this specification using ABNF from [I-D.ietf-httpbis-p1-messaging] while OAUTH 2.0 uses [RFC5234]? <mike> > None that I'm aware of. Both specs are syntactically well-defined. </mike> From Julian: The ABNF from HTTPbis is a superset of RFC 5234 in that it defines a list rule for readability. I don't think that this rule is used anymore in the bearer spec, so it can just say it's using RFC 5234. So can this just reference 5234 for the ABNF? 2) I thought maybe this spec was going to explain how the resource server knows that the access token provided hasn't expired, but it didn't. How's that going to happen again? <mike> > That's out of scope for this specification, as the Bearer spec is, by > design, token type independent, but in scope for profiles for specific > token types such as draft-ietf-oauth-saml2-bearer and > draft-jones-oauth-jwt-bearer. In those profiles you'll find requirements > for expiration time assertions in the tokens used. </mike> Okay, I'll give you a pass on this one because this isn't really talking about the direct interaction between the authorization server and the resource server, but just for my own edification, where is that exchange defined - is there a draft about this interaction? 3) s1: Last para: Okay, isn't step (D) partially addressed too?
The access token format returned by the authorization server is defined in this specification - right? Further, in s5.2 there are recommendations for issuance of access tokens and that's covered in (D)? <mike> > You're correct that semantic requirements are placed upon the access > token communicated in step D. The protocol portion of D is solely within > the OAuth Core spec, however, whereas the protocol elements for steps E > and F are defined in the Bearer spec. If you think it's warranted, a > sentence something like "This document also imposes requirements upon > the access token returned in Step D" could be added at the end of > Section 1. Your thoughts? </mike> Yeah I think that's worth adding. Maybe I'm just being pedantic, but I think it's better to add this in. 4) s2: What happens if the client uses more than one method? <mike> > The spec says "Clients MUST NOT use more than one method to transmit the > token in each request." The behavior when violating a MUST is undefined. I was wondering if this was maybe an HTTP requirement? Anyway... oftentimes when you say MUST NOT we'd like to know what happens when the implementation doesn't follow the rules, and 2119 provides this helpful bit of advice: Document authors should take the time to elaborate the security implications of not following recommendations or requirements as most implementors will not have had the benefit of the experience and discussion that produced the specification. 5) s2.1: b64token is pretty forgiving in that it allows a whole bunch of different encodings. Is one the MTI? > None are MTI, again because this spec is, by design, token type > independent. Specific profiles using this spec will define particular > MTI encodings for particular token types. Okay this confuses me. Are you saying there are going to be different types of bearer tokens? Is there going to be a registry for them? 6) s3: What happens if realm, scope, and the error attributes appear more than once?
<mike> > The spec says “The realm attribute MUST NOT appear more than once”, “The > scope attribute MUST NOT appear more than once”, and “…includes one of > the following error codes in the response”. The behavior when violating > these normative requirements is undefined. </mike> see #4. 7) s3: Under what circumstances wouldn't you want an error returned? <mike> > The spec says “If the request lacks any authentication information (i.e. > the client was unaware authentication is necessary or attempted using an > unsupported authentication method), the resource server SHOULD NOT > include an error code or other error information.” This restriction is > in place to avoid leaking potentially useful information to an attacker. </mike> Should'a caught that - consider this one closed. 8) s3.1: Trying to figure out the error requirements. Are the shoulds in the three codes telling you that you could send other 4** codes than those listed or that if you can come up with a good reason you don't need to send one at all? <mike> > The SHOULDs are there because while the use of 400, 401, and 403 for > those cases are highly recommended, the working group found, in > consultation with Mark Nottingham and other experts, that sometimes in > practice different error codes are used under these same or similar > circumstances. For instance, some implementations may be returning 401 > (Unauthorized) for insufficient scope conditions, rather than 403 > (Forbidden). </mike> Consider this one closed. 9) s3.1: I thought scope was defined in draft-ietf-oauth-v2 shouldn't you just point there and then you can pick up the character set restrictions from the ABNF there? <mike> > The scope syntax is also defined in Section 3.3 of OAuth Core, but for >use in different, but semantically related, protocol contexts. (Core >uses it as a request parameter. Bearer uses it as an error response >parameter.) 
Yes, the syntax restrictions for scope values could be > included by reference to Core, rather than included in Bearer, but given > that other parameter syntax restrictions are also needed for error > response parameters (see your next question), it seemed simpler for > developers to include all of them in one place in the Bearer spec. </mike> and then I added: Additionally: If the "scope" attribute defined in draft-ietf-oauth-v2-bearer-18.txt is the same as in draft-ietf-oauth-v2-25.txt, then draft-ietf-oauth-v2-bearer-18.txt must reference Section 3.3 of draft-ietf-oauth-v2. Secondly, the definitions are a bit out of sync and the one in draft-ietf-oauth-v2 seems a bit better. This actually answers my question about who can allocate values. (See my Gen-Art review and associated threads on the OAUTH mailing list.) If the value contains multiple space-delimited strings, their order does not matter, and each string adds an additional access range to the requested scope. I think this is quite a valuable addition. Suggested updated text for draft-ietf-oauth-v2-bearer: The "scope" attribute is defined in Section 3.3 of [draft-ietf-oauth-v2]. The "scope" attribute is a space-delimited list of case sensitive scope values indicating the required scope of the access token for accessing the requested resource. "scope" values are implementation defined and there is no centralized registry for them; allowed values are defined by the authorization server. Note that the order of "scope" values is not significant. In some cases, the "scope" value will be used when requesting a new access token with sufficient scope of access to utilize the protected resource. Use of the "scope" attribute is OPTIONAL. The "scope" attribute MUST NOT appear more than once. The "scope" value is intended for programmatic use and is not meant to be displayed to end users. 10) s3.1: Shouldn't the character set restrictions on error, error_description, and error_uri be in draft-ietf-oauth-v2?
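The space-delimited, case-sensitive, order-insignificant behavior in the suggested text can be captured in a small sketch (a hypothetical Python helper of my own, just to pin down the semantics):

```python
def parse_scope(scope: str) -> frozenset:
    """Split a "scope" value into its space-delimited, case-sensitive
    scope values; per the suggested text, order is not significant."""
    return frozenset(scope.split(" "))

# Order doesn't matter...
assert parse_scope("read write") == parse_scope("write read")
# ...but case does.
assert parse_scope("Read") != parse_scope("read")
```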
<mike> > Yes, I believe these same restrictions should be present in the Core > spec. Unfortunately, they aren't at present, I think in part, because > the "error", "error_description", and "error_uri" errors there are used > in different protocol contexts where it's easier to use non-ASCII > characters. (The Core spec specifies UTF-8 encoding for > error_description, for instance, rather than limiting it to the ASCII > subset in the Bearer spec.) I believe it would increase consistency and > reduce confusion for developers if you filed an issue against the Core > spec requesting that the same character set restrictions be applied to > these related error values in Core as were already agreed to (after MUCH > working group discussion) for Bearer. (ASCII is sufficient because the > error_description is "to provide developers a human-readable explanation > that is not meant to be displayed to end users".) </mike> I already did file a discuss about this on the base spec :) 11) s5.2: TLS is required and that's great, but what I think this means is that if the redirection endpoint (defined in 3.1.2 of draft-ietf-oauth-v2) decides not to implement TLS (it's only a SHOULD) then this token format can't be used in that scenario? I think this needs to be very clearly documented - then again maybe I'm totally wrong. <mike> > I'm not sure how to be much more clear than the current statement that > "The authorization server MUST implement TLS". </mike> Fair enough, let me come up with some words. 12) s5.2: Do the two "issue" recommendations apply generally to all types of tokens? If they do, then shouldn't they be moved to the base spec? <mike> > (I assume you meant 5.3 here.) No, they do not. For instance, > proof-of-possession tokens (which require an additional protocol > exchange in general to use) have very different security > characteristics. The security considerations for each class of token can > be different (although sometimes admittedly overlapping).
> BTW, for security considerations of the Core spec, reviewers should also > be aware of the intentionally much more comprehensive > draft-ietf-oauth-v2-threatmodel document, which has completed working > group last call. </mike> I did mean s5.3. Consider this one closed based on the idea that the explanation to #5 all makes sense. 13) This one is more like a DISCUSS-DISCUSS (i.e., nothing for the authors to do at this time): Do we really want to define an HTTP authentication mechanism herein? Isn't the http* WG going to work on that?
1) Figure 1: I've made some suggested changes to Figure 1 in draft-ietf-oauth-v2 and you should keep the two aligned. <mike> > Sure. Please send these to me and keep me apprised about whether they > are adopted in the Core spec. </mike> I'll make sure to. There might not be any changes in the end. 2) s2.1: r/Resource servers MUST support this method./Resource servers compliant with this specification MUST support this method. <mike> > OK </mike> 3) s2.2/s2.3: r/Resource servers MAY support this method./Resource servers compliant with this specification MAY support this method. <mike> > OK </mike> 4) s5.2: You could point to the cookies document for security considerations on cookies: RFC 6265. <mike> > OK in principle. Specific proposed text would be welcomed. </mike> Fair enough. How about adding the following to the end of the para that starts ... Cookies are typically: See [RFC6265] for security considerations about cookies (aka HTTP state management). 5) s5.2: Peter's gone, but his document (RFC 6125) lives on. It discusses matching server Ids. Might add a reference to that draft in this draft. <mike> > There's history on this one. :-/ Per the history entries, a previous > reference to RFC 2818 was changed to RFC 6125 in draft 14 at the request > of Stephen Farrell. Then, in draft 17, the 6125 reference was removed in > favor of text referencing 2818 supplied as a result of the Gen-ART > review by Alexey Melnikov (and reviewed by Stephen). I'd love to do > whatever the right thing is here, but if a change is to be made, I'd > request that the new text be reviewed by all of Stephen, Alexey, and > Peter Saint-Andre before being changed in the draft. </mike> I'll take the action to coordinate text with the five of us. Should see a message shortly. 6) s5.3: r/SSL/TLS ;) <mike> > Sure </mike>
This DISCUSS is raised against the publication status of the document. No action is required from the authors until it is resolved. I expect to clear this DISCUSS after the telechat discussion of the document. The relationship between this document and RFC 5296, and the exact status of RFC 5296 should be clarified before this document is published. I've given an editorial example of the nature of the problem in my COMMENTs. Adrian has entered a COMMENT that I will expand to a DISCUSS to request a couple of sentences explaining why this document was written to accompany the summary of changes Adrian requested. This issue might be addressed by the RFC Editor note.
There are at least two instances of references to "new EAP codes" or "New EAP Packets" that should be updated to reflect that the EAP-Initiate and EAP-Finish Packet Codes are already defined, and add a citation to the appropriate IANA registry. This typo (missing " " in the line containing "cryptosuite") was copied forward from RFC 5296 (best read with fixed-width font):

  0                   1                   2                   3
  0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
 |     Code      |  Identifier   |            Length             |
 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
 |     Type      |R|B|L| Reserved|              SEQ              |
 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
 |     1 or more TVs or TLVs                                     ~
 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
 | cryptosuite  |         Authentication Tag                     ~
 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
I have no objection to the publication of this document. Please supply a short section "Changes from RFC 5296". Please check that all Errata have also been applied to this revision: http://www.rfc-editor.org/errata_search.php?rfc=5296
This document says there are no IANA actions. RFC 5296 did a number of things in the EAP registry - Registered Packet Codes 5 and 6 - Created the Message Types table - Created the Initiate and Finish Attributes table - Created the Re-authentication Cryptosuites table It also registered two values in the USRK Key Labels registry. Shouldn't the references in those IANA registries now all be changed to point to this new RFC, instead of the now-obsolete 5296?
Pedantic nits: 4.1 (and similarly in 4.3 and 4.6): The rIK Label is the 8-bit ASCII string: What is an "8-bit ASCII string"? Do you mean "the following string of US-ASCII characters encoded in octets" or something like that? 5.2 (also 5.3.2 and 5.3.3): a 16-bit sequence number Any issue about byte order or signed/unsigned with these? 5.3.3: The value field is a 32-bit field and contains the lifetime of the [...] in seconds. Again, any issue with byte order or signed/unsigned with these?
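For what it's worth, the ambiguity being flagged is real: the same two octets decode differently depending on byte order and signedness, so the draft should state both. A minimal illustration using Python's struct module (hypothetical field values, not text from the draft):

```python
import struct

seq = 500  # hypothetical 16-bit sequence number (0x01F4)

# Network (big-endian) and little-endian byte orders put the same
# value on the wire in a different octet order:
assert struct.pack("!H", seq) == b"\x01\xf4"
assert struct.pack("<H", seq) == b"\xf4\x01"

# Once the high bit is set, signed vs. unsigned interpretation of
# identical octets diverges:
octets = b"\xff\xfe"
assert struct.unpack("!H", octets)[0] == 65534  # unsigned
assert struct.unpack("!h", octets)[0] == -2     # signed
```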
Nice job folks. Only nits: s3.2: r/5shows/5 shows s4.1: 4th bullet r/must/MUST ?
I have some clarifying questions for points 1 and 2 that will determine whether these are real DISCUSSes or not. A 10-minute discussion with Al would help a lot. 1. Looking at the list of documents at http://datatracker.ietf.org/wg/ippm/, I see that the first set of metrics were IP or IPPM related: http://tools.ietf.org/html/rfc2680 -> A One-way Packet Loss Metric for IPPM http://tools.ietf.org/html/rfc2681 -> A Round-trip Delay Metric for IPPM ... then I see that IP (or IPPM to be more precise) is not included any longer: http://datatracker.ietf.org/doc/rfc4737/ -> Packet Reordering Metrics http://datatracker.ietf.org/doc/rfc5560/ -> A One-Way Packet Duplication Metric I would be interested to understand the change, and it might help with my next question. When reading this draft, I was wondering: 1. whether this metric is only for packets? I'm pretty sure it's the case, specifically when I see that section 3.4 speaks about packet loss. So should the title be "Round Trip Packet Loss Metrics", or even, to be fully in line with RFC2680, "Round Trip Packet Loss Metrics for IPPM"? 2. if the metric were not only for packets, but for application data (the abstract mentions "Many user applications"), then what would be the link with PMOL, RFC6390? Note: I believe that TWAMP doesn't deal with application data, but could be easily extended. A solution such as the Cisco IP SLA (http://tools.ietf.org/html/draft-cisco-sla-protocol-00) could do it. 2. Section 4.3 o the Dst sent a Type-P packet back to the Src as immediately as possible, and Why is it even useful to mention "as immediately as possible"? I mean: if you have to use round-trip packet loss (instead of one-way packet loss), it's because you're not able to install a "responder" application on the target device. Therefore, you have no control at all over that target device, and you are forced to use a protocol such as ICMP. 
So, why is it even useful to say "as immediately as possible" if you have no control over that target device? The sentence "the Dst sent a Type-P packet back to the Src as immediately as possible" only makes sense in the case of a one-way delay metric. I have the same issue with your new proposed text (discussed with Adrian) o the Dst sent a Type-P packet back to the Src as quickly as possible (certainly less than Tmax, and fast enough for the intended purpose), and I have the same issue with your new proposed text in section 4.4 (discussed with Adrian, AFAIK) We add the following guidance regarding the responder process to "send a Type-P packet back to the Src as quickly as possible". A response that was not generated within Tmax is inadequate for any realistic test, and the Src will discard such responses. A responder that serves typical round-trip loss testing (which is relevant to higher-layer application performance) SHOULD produce a response in 1 second or less. A responder that is unable to satisfy this requirement SHOULD log the fact so that an operator can adjust the load and priorities as necessary. Analysis of responder time-stamps [RFC5357] that finds responses are not generated in a timely fashion SHOULD result in operator notification, and the operator SHOULD suspend tests to the responder since it may be overloaded. Additional measurement considerations are described in Section 8, below. For example, "A responder that is unable to satisfy this requirement SHOULD log the fact so that an operator can adjust the load and priorities as necessary." I've been doing IP SLA measurements for years with Cisco boxes, and I would only use round-trip delay and loss metrics when I can't touch the target device. And here you're asking the target device to do a task for you in the case of round-trip loss... 
Note: the default configuration for SLA measurement is to put a responder on the target device, and to measure in both directions the one-way delay, the loss, and jitter. Or maybe the metric in this draft can only be used with the TWAMP protocol, which I believe requires some configuration on the target device? However, it appears it's not a requirement, as TWAMP is mentioned as one example in 8. Measurement Considerations and Calibration Prior to conducting this measurement, the participating hosts MUST be configured to send and receive test packets of the chosen Type-P. Standard measurement protocols are capable of this task [RFC5357], but any reliable method is sufficient. Next question: why do you mention "but any reliable method is sufficient."? Does it mean that the metric can't be used with ICMP? Anyway, it needs some clarification. 3. In section 4.3 Following the precedent of [RFC2681], we make the simplifying assertion: Type-P-Round-trip-Loss(Src->Dst) = Type-P-Round-trip-Loss(Dst->Src) While I could agree that Type-P-Round-trip(Src->Dst) = Type-P-Round-trip(Dst->Src), under some conditions, I disagree with the assertion that if you lose 50% of packets, you can conclude that you lost 25% in each direction.
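To put a number on that last objection (editor's arithmetic, assuming independent and equal loss in each direction, which is itself a strong assumption): a 50% round-trip loss would correspond to roughly 29.3% loss per direction, not 25%.

```python
import math

L = 0.50  # observed round-trip loss
# Under independent, equal per-direction loss p, a round trip survives
# with probability (1 - p)**2, so L = 1 - (1 - p)**2.
p = 1 - math.sqrt(1 - L)
assert abs(p - 0.2929) < 1e-3  # ~29.3% loss per direction, not 25%

# Conversely, 25% loss each way yields only 43.75% round-trip loss:
assert 1 - (1 - 0.25) ** 2 == 0.4375
```

And of course nothing forces the two directions to lose equally or independently, which is the heart of the objection.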
- section 1, introduction s/round-trip loss/round-trip packet loss Note: I'm sure there are multiple instances of this one. - section 1, introduction Also, the specifications of the One-way Loss metric [RFC2680] and the Round-trip Delay metric [RFC2681] are frequently referenced and modified to match the round-trip circumstances addressed here s/One-way Loss metric [RFC2680]/A One-way Packet Loss Metric for IPPM [RFC2680] s/Round-trip Delay metric [RFC2681]/A Round-trip Delay Metric for IPPM [RFC2681] - Section 3.3. Metric Definition This section is specific to each metric. What does this section add? Maybe it's part of a template? - Section 4.1. Name: Type-P-Round-trip-Loss I double-checked this name with RFC 2680 1. It should be Type-P-Round-trip-Packet-Loss throughout the document 2. I opened this errata on RFC 2680: http://www.rfc-editor.org/errata_search.php?eid=3186 - Section 7 As discussed above, packet reordering is always a possibility. In addition to the severe delay variation that usually accompanies it, reordering on the Src->Dst path will cause a mis-alignment of sequence numbers applied at the reflector when compared to the sender numbers. Measurement implementations SHOULD address this possible outcome. "reflector" is a term specific to TWAMP, and not found in RFC2330
Other comments coming from Dan Frost's review > 1. Although it's probably obvious to most readers, it would be helpful > to provide a brief informal definition of "round-trip loss" early in > the introduction. A mention of the venerable "ping" procedure would > also not be amiss. > 2. Most of the text seems to assume an "active" or test-based > measurement approach, but Section 9.2 refers to passive measurement. > It would be helpful to discuss the applicability of the latter > approach. > Nits: > > 1. The phrase "as immediately as possible" that appears a couple of > times in the text (and that seems to originate in RFC 5357) is a bit > unfortunate. "Immediately" or "as quickly as possible" are better. > > 2. Section 5.4, second paragraph: s/affects/effects/ > > 3. Section 8, second paragraph: s/Two key features ... is described/ > Two key features ... are described/ > > 4. Section 9.3, first paragraph: > OLD > it is possible to change the processing of the packets (e.g. > increasing or decreasing delay) that may distort the measured > performance. > NEW > it is possible to change the processing of the packets (e.g. > increasing or decreasing delay) in a way that may distort the > measured performance. > END
I had a discuss to check that Sandy Murphy's secdir review comments had been taken into account. I asked and wasn't told they hadn't been, so I've cleared.
In the last paragraph of Section 5, the document says: " ... (or other process, the details of which MUST be specified if used)." Specified how? Is an RFC required? Is a standards-track RFC required? This document already mentions the lack of an IANA registry. Will an IANA registry be needed to help locate these specifications?
Please consider the comments raised by the Gen-ART Review by Ben Campbell on 10-Apr-2012. The review can be found here: http://www.ietf.org/mail-archive/web/gen-art/current/msg07340.html
(1) "screwed up" in Section 6.5 is not very technical; please say what is really wrong (loss, corruption, reordering, etc.) (2) Section 1 lists a number of *BUGS* in implementations as the motivations for this. It starts by saying that a use case for this is people not implementing RFC 4585 dithering correctly, then says that another use case is that there are other poor designs causing implosions of FIRs. It seems silly to write this new RFC adding a new mechanism rather than just applying pressure to fix those implementations; it would be useful to discuss why that isn't the right answer: since receivers have to implement reactions to this new report anyway, they should be fixing their bugs. I think this is an especially relevant question given the lack of implementation noted in the writeup and Pete's ballot.
Just checking: there's no way that a 3rd party loss report could cause a flood of re-transmitted data (that hadn't actually been lost) to be (re-)sent to a target, is there? If so, that might constitute a new DoS vector. It's not clearly the case that that can't happen. If it could, then that'd be another reason to authenticate these messages. nits/typos: - s/to pose/pose/ - s/message,which/message, which/ - maybe s/the distribution source will not/if the distribution source will not/ in 6.1? (and some missing spaces there too) - I like "badly screwed up" as a descriptive phrase!
I just have a few questions on this draft: 1. The Protocol Overview section states : "Intermediaries in the network that receive a RTCP TPLR SHOULD NOT send their own additional Third-Party Loss Report messages for the same packet sequence numbers." Why is this not a MUST? Is it simply to handle intermediate devices that don't support this function? If there is another scenario where a device may send a TPLR that overlaps, it would be good to spell that out. 2. There are two places (Sections 4.1 & 4.2) where the length field in the feedback message is set to "2+1*N". Should I interpret that to mean the value is really just N+2? Or is there something I am missing?
The document writeup says, "There are not yet any reported implementations." Are you really saying that for a protocol that appears to have serious congestion control effects, nobody has written a line of code yet? Has there been any testing of this at all? Are there any planned implementations (perhaps by more than one independent implementer)? If not, perhaps this should be published as Experimental first.
Had the same question Stephen had.
Thanks for a well-written document.
I'm not sure if there are really no new security considerations here, but the difference may be relatively minor, (given how I understand these protocols are used, i.e. without any cryptographic authentication;-). Anyway, my questions: Which of the RFCs referred to in section 5 calls out that sending a spoofed wildcard message will have a bigger impact for lower cost for an attacker? Could it also be the case that an attacker able to inject one of these needs less information about the network to cause the same amount of damage compared to an attacker who could not send a wildcard message?
Only a nit: S2: R bit: r/Must/MUST
Given that: 1) this is of interest to 3GPP, 2) MPLS-TP seems to be a popular choice in mobile wireless backhaul, and 3) most service provider core networks use MPLS, should there not be a reference to RFC5129, and a note that ECN needs to be propagated from the tunnel to the payload? I am not sure how common MPLS ECN is, but it is not mentioned anywhere in the MPLS-TP specifications.
I don't object to the publication of this document, but it does bother me how much effort, time, and pages go into describing a protocol extension that no-one is apparently bothered to implement. What is the value of a standards track RFC in this case? How can we know whether the document or the protocol are right?
I've a couple of general comments and some nits. The former: - 55 pages to discuss two bits? something wrong there;-) - last para of section 3 (before 3.1), but a general question: you say ECN is set before congestion results in packet drops but I thought that was the point of PCN (the WG) which is just finishing. Are these things all sensible together? I assume a receiver/sender here can be within a PCN "domain" or whatever's the right term. Does all the ECN logic here work if the bits are actually set by a PCN conformant node? - Section 11: I don't get this sentence: "Secure RTP (SRTP) [RFC3711] does satisfy the requirement to protect this mechanism despite only providing authentication if a entity is within the security context or not." What's it mean? nits: - 2nd last para of section 3, maybe s/differences will/differences/ since you've presumably now figured it out? - p12, 2nd last para typo: s/mechanism/mechanisms/ in 2nd sentence. the leap-of-faith and ICE-based methods could do with references maybe - 3.3, 1st sentence seems odd, isn't it a tautology? - last para on p13, is that a 2119 MUST? looks like one - s/the are/they are/ on p36 - section 11 s/inferring/interfering/ in 3rd last para, and
I am also curious how this approach will interact with a PCN-conformant node (as asked by Stephen).
Some non-blocking comments -- though I would *really* like to see the IANA Considerations comment addressed. --- Section 3: ECN support is more important for RTP sessions than, for instance, is the case for TCP. This is because the impact of packet loss in real-time audio-visual media flows is highly visible to users. Effective ECN support for RTP flows running over UDP will allow real-time audio-visual applications to respond to the onset of congestion I'm not clear about what the first sentence is comparing, because RTP doesn't compare to TCP. Do you mean that ECN support is more important for RTP sessions over UDP than for RTP sessions over TCP? I don't think so. Do you mean that it's more important for RTP sessions than for *other applications over TCP*? I think that's it. But then what does TCP have to do with it? It seems that the point is that RTP is more sensitive to congestion issues than other applications are, regardless of the underlying transport protocol. In any case, please clarify that sentence. --- Section 3.1: Do we really need 2119 language in the requirements? I rather think that requirements would generate 2119 language in the protocol. --- Section 9: You explain that the situation with existing APIs is such that it makes "this specification difficult to implement portably." And that's all you say. Any words of wisdom here? Advice to implementors about how to handle the situation? --- Section 10.1: Following the guidelines in [RFC4566], the IANA is requested to register one new SDP attribute: I see a lot of SDP Parameters registries and tables, and it's not at all clear to me which one this gets registered in. Maybe it's clear to IANA, and maybe this is fine, but maybe also it should be made clearer here. Can you give the exact name of the registry and the table within the registry, to avoid mistakes? 
In general, the different subsections of Section 10 are inconsistent in how (and how specifically) they name the registries and tables you intend to update. I like the way 10.6 does it -- no chance for confusion at all there.
3.1: I don't understand what the 2119 words add. These are requirements for the protocol designers, not requirements for the protocol implementers. 6.1: qdtext = %x20-21 / %x23-7E / %x80-FF ; any 8-bit ASCII except <"> That makes me worried. You do not provide an escaping mechanism such that someone could put a quote in their quoted text. You do not specify the interpretation of the stuff from 0x80 through 0xFF (UTF-8? ISO-8859-1? uninterpreted octet?), and worse you call it "8-bit ASCII" which does not have a clear meaning. You also leave out 0x7F (not mentioned in the comment), and I have a guess as to why (it's not printable), but you don't say why. I understand you want this to be extensible, but I don't think the above is fully baked. Perhaps explain what you want to allow and I can recommend some alternatives. 10.1: This attribute defines the ability to negotiate the use of ECT (ECN capable transport) for RTP flows running over UDP/IP. This attribute should be put in the SDP offer if the offering party wishes to receive an ECT flow. The answering party should include the attribute in the answer if it wish to receive an ECT flow. If the answerer does not include the attribute then ECT MUST be disabled in both directions. I don't think it's a good idea to put protocol instructions into the IANA template. These are all already documented earlier in this document. Just put a pointer to [This document, section 6.1] and skip the last 3 sentences above. You don't want people trying to implement from the registry.
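A quick sketch makes both gaps in that qdtext rule visible: over the octets 0x20-0xFF, only DQUOTE (0x22) and DEL (0x7F) are excluded, there is no escape for a literal quote, and the whole 0x80-0xFF range is admitted with no stated character encoding.

```python
def is_qdtext(o: int) -> bool:
    # qdtext = %x20-21 / %x23-7E / %x80-FF  (rule as quoted above)
    return 0x20 <= o <= 0x21 or 0x23 <= o <= 0x7E or 0x80 <= o <= 0xFF

excluded = [o for o in range(0x20, 0x100) if not is_qdtext(o)]
assert excluded == [0x22, 0x7F]  # DQUOTE and DEL are the only gaps

# With no escaping mechanism, a value containing '"' is unrepresentable:
assert not all(is_qdtext(o) for o in b'say "hi"')
```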
Please run this document through the NIT checker before publication.
It would be helpful to those searching for information if the abstract noted that this document revises RFC 3517.
I have no objection to the publication of this document. One comment from Chris LILJENSTOLPE, part of the OPS-Directorate review. I wish the authors had selected some other state variable name other than DupAck for the multiple SACK counter. While it is well described in the draft, on first read it is really not a Duplicate ACK counter, but a multiple SACK counter (number of SACKs between covering ACKs). While useful, it would have been more intuitive to call it MultSack or some such. I do not propose editing the draft just for this purpose, but if another version of the draft is required, it may make the digestion of the material a little easier. I leave it up to you to act on his feedback. Regards, Benoit.
I have no objection to the publication of this document. Just a couple of nits. --- Isn't [PF01] rather old to be cited as "evidence that hosts are not using the SACK information when making retransmission and congestion control decisions"? I guess this was good evidence when 3517 was first written, but maybe a different form of words is called for now? Perhaps we don't even need the evidence to motivate this work since it is now established. --- Section 1 A summary of the changes between this document and [RFC3517] can be found in Section 9. Pardon my pedantry, but the changes are between 3517 and this document.
"Pipe" definition says "The algorithm" is often referred to as the pipe alg. That's a little unclear, maybe better to say "The algorithm defined here...." and if that is the case, to also put that in the abstract and intro just to make it easier for someone who does call it that to find the RFC.
Section 7 talks about the effectiveness of this approach when paired with TCP Reno, but I do not see any discussion of possible interactions with other TCP congestion control algorithms. Has this re-transmission algorithm been tested with other congestion control algorithms?
The Gen-ART Review by Ben Campbell on 4-Apr-2012 suggests some improvements. Please consider them. The review can be found here: http://www.ietf.org/mail-archive/web/gen-art/current/msg07319.html
This seems a good, clear document. Thanks for a thought-out Security Considerations section, as well. I have one question, as a non-expert on this topic: All four functions in section 4 are "SHOULD implement." Can a meaningful implementation really be done if NONE of them are included? If so, fine. If not, maybe a few more words in the first paragraph would be useful, explaining under what conditions it's important to include them or makes sense to leave them out.
An editorial: it is a relatively short document, but recent RFCs all seem to have a table of contents, which is missing in this draft.
I have no objection to the publication of this document, but I have a few comments that either represent my failure to grasp what you are doing, or would make useful improvements to the document. --- I would prefer that Section 3 did not include the format of the Reconfigure Message option. Rather than "update" the option with a full replacement, isn't it enough to say that msg-type may now additionally take the value 6 to indicate Rebind? --- Section 4 The server MUST include a Reconfigure Message option (as defined in Section 3) to select whether the client responds with a Renew message, a Rebind message or an Information-Request message. Include in what? --- Section 4 is headed "Server Behavior" The Reconfigure message causes the client to initiate a Renew/Reply, a Rebind/Reply message exchange or an Information-request/Reply message exchange. Seems to be describing the client behavior. At least give a forward pointer to Section 5. --- Section 4 The server interprets the receipt of a Renew, a Rebind or an Information-request message (whichever was specified in the original Reconfigure message) from the client as satisfying the Reconfigure message request. Presumably, only if the received message matches the msg-type in the Reconfigure Message option? What if there is a mismatch? Can the mismatch be caused by a race? --- Section 5 How is a legacy client going to handle a Reconfigure Message option with msg-type set to Rebind? Presumably it is going to run some 3315 logic to drop or nack the message as "msg-type unknown, unexpected, or unsupported". I believe you should mention this as it impacts on server behavior.
Since I know squat about DHCPv6 these may be cleared up really quickly: - Section 7 calls out a clear vulnerability and suggests use of the AUTH option from RFC 3315. I'm told that nobody ever uses the v4 equivalent functionality; is that the same for v6? If so, it would then seem that we have a vulnerability with no practical mitigation, which would seem like a bad thing. I'd hope to see at least an honest recognition of that, if it's in fact the case. - I don't see why the dhc-secure-dhcpv6 reference is non-normative since it's one of two possible ways to do a thing. - Should one of AUTH from 3315 or dhc-secure-dhcpv6 be mandatory to implement? If not, why not?
- 1st sentence of abstract seems odd, v. hard to read anyway and that's not so good usually. How does the "Reconfigure Message" extend "the Reconfigure Message"? (That's how I read it anyway)
Please consider the editorial suggestions in the Gen-ART Review by Francis Dupont on 7-Apr-2012. The review can be found here: http://www.ietf.org/mail-archive/web/gen-art/current/msg07344.html
The introduction motivates some of these changes with a use case of a network administrator who is preparing to shut down a dhcpv6 server causing clients to move to a different server. Is it possible (if so, how easy would it be) to misconfigure the servers involved to cause them to enter a rebind war with each other? If this is something a client might experience, is there guidance to give the client implementations on how to react when it happens?
1) I support Stephen's discuss. 2) s4: I was having some issues tracking exactly which paragraphs in 19.1-19.3 were being updated/replaced. Could you do the old/new so we knew which paragraphs were being replaced. Ex (assuming I got this bit right): 4.1 Updates to Section 19.1 OLD: A server sends a Reconfigure message to cause a client to initiate immediately a Renew/Reply or Information-request/Reply message exchange with the server. NEW: The server MUST include a Reconfigure Message option (as defined in Section 3) to select whether the client responds with a Renew message, a Rebind message or an Information-Request message. 3) s5: If the text replaces the text in s19.4 of RFC 3315 could you just say that? r/This section updates specific text in/This section replaces
Support Stephen's DISCUSS
I have no objection to the publication of this document, but I note that the Security Considerations section is flimsy. Surely there are security issues with how the mapping table at the AFTR is built. Although that is a "local matter", implementers and deployers need to be aware that this feature must be secured.
1. The security considerations section here appears to be way too brief. I'd like to have known when it is safe to use this, and especially when it is not safe, e.g. if the g/w is on the customer premises and the CID is an IPv4 address, could the customer (hacking the g/w) hijack someone else's (guessable) CID? (That may or may not be a real threat, but I found it hard-to-impossible to figure out based on this draft.) 2. RFC 6275's security considerations don't appear to apply to this in an obvious way, which part(s) of RFC6275 section 15 are relevant here? Same question applies to RFC 5213. 3. TS29060 seems like a normative reference, why is it not? Is version 9.1.0 the right version to reference? (there seem to be many) That document (on page 143 of 155) has a two line section 12 on security which is just a reference to something else. I don't know what is meant by referring to this from section 9 here.
- p7, what does "must have a proper understanding" mean? - p8, CE, PE and ECMP are not expanded (and maybe need a reference/definition, particularly ECMP) - Please consider the points raised in Tobias Gondrom's secdir review.   http://www.ietf.org/mail-archive/web/secdir/current/msg03029.html
Section 6 lists a set of abbreviations to describe the type of IPv4 addresses being used in a deployment. I understand all the possibilities, except for "nm" (described as non-meaningful/dummy). Is that just a diplomatic way of describing a network deployment that is squatting on someone's public IPv4 address space?
I second Stephen's DISCUSS and Pete's comment.
There is only one use of 2119 language, and I'm not convinced it's necessary: o The softwire between the Gateway and the AFTR MAY be created at system startup time OR dynamically established on-demand. Is this a protocol option that one or both sides needs to be aware of? That is, does the Gateway or the AFTR need to prepare itself for on-demand establishment, or to be prepared that on-demand might not be available? I suspect you can change it to "may" or "can" and delete the reference to 2119.
I support Stephen's discuss.
Please also consider the (very recent) comments from the secdir review.   http://www.ietf.org/mail-archive/web/secdir/current/msg03228.html My previous comments are below but from a quick glance seem to be addressed in -12. Two substantive comments and a bunch of nits, but this is good stuff. #1 The write-up talks about running code which is great. Did the implementers of both take a look at this version of the document? I don't recall any last-minute changes but no harm checking. #2 I was left wondering about PKCS#1.5 and Bleichenbacher's TLS attack and other side-channel attacks, e.g. based on timing or power. Those are not mentioned here, but are not things about which every coder would know. Is there a good document covering such side-channels against PGP, and/or ECC that could be added to section 13? (I'd bet there is, doesn't need to be an RFC.) I think that'd be a good addition. If there's no good document at least some mention of side channels as a security consideration would be good. Nits: - 1st para of section 5 reads as if the ECDH variant here is not interoperable with 6090, is that the case or not? If not (as I hope) then fixing that would be good. - the 2119 language at the end of section 6 is odd, better to say you MUST NOT use another format if there's any doubt that any recipient doesn't support the new format. - Does the 2119 language in section 7 mean that implementations MUST support all of sha-256, sha-384 and sha-512? I've no problem with that but making it clear would be better for interop. Section 12 sort of says otherwise but it's a little confusing. Maybe add a forward reference to section 12 from 7? (Is the section 13 forward reference there correct?) - start of p7 s/respecfully/respectively/ nice typo:-) same typo elsewhere as well - the pseudocode on p7 would be better as a figure so it can be referenced. 
- "the" is missing in various places, I skipped over a bunch until it got to me;-) that was in section 10: s/applying KDF/applying the KDF/ - section 11 could confuse a coder as to whether the truncated form or usual encoding of the OIDs is used in the protocol. Making that clearer would be good, e.g., by saying that the non-truncated form is never used in this protocol (but would be found in e.g., x.509 certs for keys concerned). - The reference to TripleDES in section 13 can I guess be deleted and probably refers to earlier text that's no longer present.
Thanks for addressing issues raised in the Gen-ART Review by Christer Holmberg on 19-Mar-2012. I suggest an update to the Abstract: This document defines an Elliptic Curve Cryptography extension to the OpenPGP public key format and specifies three Elliptic Curves that enjoy broad support by other standards, including standards published by the US National Institute of Standards and Technology. The document specifies the conventions for interoperability between compliant OpenPGP implementations that make use of this extension and these Elliptic Curves.
Some very minor comments [UPDATE: adequately addressed in -12]: Section 2: Any implementation MAY adhere to the format and methods specified in this document, in which case such an implementation is called a compliant application. That seems a bit of a silly use of 2119 language. I think what you really mean is this: Any implementation that adheres to the format and methods specified in this document is called a compliant application. The sentence after that seems silly as well: the normative language here only applies to applications that want it to apply to them. We don't lock people up if they don't comply with our specs. It's a small point, and I completely don't mind if you ignore me here, but I suggest removing the sentence.
(Thanks for addressing my other comment.) In section 8: o 20 octets representing the UTF-8 encoding of the string "Anonymous Sender ", where the space code point has the hexadecimal value 20. You would have been safer to say "the US-ASCII encoding of the string" instead of "the UTF-8 encoding". Given the goofiness of non-normalized encodings of characters in UTF-8, I still think it would probably be best to actually specify *all* of the octets to avoid some bonehead typing on a keyboard and getting it wrong: o 20 octets representing the UTF-8 encoding of the string "Anonymous Sender ", the specific octets as follows: 41 6E 6F 6E 79 6D 6F 75 73 20 53 65 6E 64 65 72 20 20 20 20 That way you're sure.
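The suggested octet listing is easy to sanity-check; a quick sketch (assuming the intended string is "Anonymous Sender" padded with spaces to 20 octets, which is what the listed octets decode to):

```python
# Verify that the 20 octets proposed in the review match the ASCII/UTF-8
# encoding of "Anonymous Sender" followed by four space octets.
listed = bytes.fromhex("416E6F6E796D6F75732053656E64657220202020")
assert listed == "Anonymous Sender    ".encode("utf-8")
assert len(listed) == 20
```

Listing the octets verbatim, as the comment suggests, makes the padding unambiguous in a way the prose description is not.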
This is a huge document and it did make me worry that so many pages are needed to describe an *extension*. But I didn't find anything that was superfluous or wordy, so I have no issue with its publication. --- I did expect to see a short piece of text about how implementations of this spec would interact with deployed 4791 implementations. Notwithstanding that this document updates 4791 (such that new 4791 implementations are presumably expected to include support for this document), we do have to worry about the deployed base. This would probably not take many words.
- Thanks for handling Klaas Wierenga's good secdir review so well and quickly!
- 126.96.36.199 says the server "MUST allow" but later says how the server can return errors if e.g. the client hasn't permission for the change requested. It might be better to say at the top that "The server MUST be able to allow Attendees to:"
- 3.2.3 says it's about HTTP methods, but uses WebDAV methods as well (e.g. COPY, MOVE), so maybe a reference to RFC 4918 would be useful at the start here? (Or wherever is best to go for those.)
- I guess this is maybe not too likely, but just to check. If a client guesses a UID to try to find out who's up to what, 188.8.131.52 says the server SHOULD return the URL if there is a collision. I wondered whether that URL might expose some information, in which case the question is whether such UIDs are easily guessed or not. If such UIDs can be guessable, then maybe say something to the effect that the server might want to not return URLs that might expose details of the events (if such exist) and might want to return an innocuous error. Or better might be to RECOMMEND that the UIDs (and URLs as well maybe) used for this be hard to guess. Note that the attack here (if it exists) could come from an authenticated client as well as from the Internet. The point here is to check that the UIDs don't allow me to get at information for which I'd get only a 403 if I sent a request to the URL. (I guess it's a separate question as to whether sending 403 gives away something that a 404 doesn't, but if so, that'd be for another day and draft.)
- In the 7.x sections you say clients MUST NOT include these parameters. Is there a need to say that servers MUST NOT accept messages from (bad) clients that do in fact contain these parameters? Might be easy enough to get wrong if the server developer didn't pay any attention to what the client developer might get or do wrong.
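On the hard-to-guess UID suggestion, a minimal sketch of what a server could do; this is purely illustrative (the helper name and domain are hypothetical, not from the draft):

```python
import secrets

def make_scheduling_uid(domain: str = "calendar.example.com") -> str:
    """Return an iCalendar UID with 128 bits of randomness, so that it
    cannot practically be discovered by a probing client."""
    return f"{secrets.token_urlsafe(16)}@{domain}"
```

With UIDs of this shape, a client that guesses UIDs gains nothing, and the server can return the collision URL without leaking anything about other users' events.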
Generally: I think the 2119 language could use a good scrub. I think you use it in places where there is no real option, or there is no real interoperability implication. Please review. Section 3.2.8: Servers MUST reset the "PARTSTAT" property parameter value of all "ATTENDEE" properties, except the one that corresponds to the Organizer, to "NEEDS-ACTION" when the Organizer reschedules an event. Don't you mean for all "ATTENDEE" properties *on each affected component*? I wouldn't have complained about this except for the MUST; if it's a requirement, you've got to be clear. If the change is for a recurrence instance that does not include that attendee, PARTSTAT shouldn't be reset, correct? (See section 3.2.6.)
This may be a DISCUSS-DISCUSS, but I would like to pose the following questions:
1) Can CONEX distinguish between congestion that occurs on the local network and congestion that occurs downstream? For example, assume that my ISP deploys CONEX. Assume also that a loss-prone link connects my PC to the CPE router in my kitchen. The TCP stack on my PC will report lots of loss. My ISP will detect this and when it congests, it will penalize me even further, even though I am not contributing to loss on the ISP's network?
2) Can CONEX distinguish between congestion that occurs on the local network and congestion that occurs upstream? For example, assume that my ISP deploys CONEX. Assume also that I subscribe to a stream that incurs loss before it hits my ISP's network. My ISP will detect this and when it congests, it will penalize me even further, even though I am not contributing to loss on the ISP's network?
3) Is the applicability of CONEX restricted to access networks, where it is possible to deploy per-user policers at the distant end of the network from the user?
4) Can CONEX markings be used as an attack vector?
5) How will CONEX behave in networks where incoming traffic can be characterized as follows: 90% is streaming UDP over IP multicast, 10% is TCP? In this example, assume that multicast traffic is responsible for 90% of the congestion and that the multicast receivers send traffic in the reverse direction very infrequently.
6) How will CONEX work in a transition scenario, when some transport layer stacks are CONEX-aware and others are not?
7) Does CONEX encourage traffic originators to falsify congestion markings?
In Section 3.1, please be specific about the policer counting IP-Layer-ConEx-Signals, and not Congestion-Feedback-Signals.
I could use a little help understanding the example in section 3.1. Do I have it right that ConEx is used to provide information about congestion in a flow to devices that are not directly experiencing the congestion? In the use case in section 3.1, the congestion policer is placed exactly at the point where the existence of congestion is known. Why is any signaling mechanism needed at all? In the first paragraph of section 3.2, how does ConEx specifically encourage the use of scavenger transport protocols, relative to other congestion policing mechanisms? Does the second paragraph of section 3.2 suggest that ConEx is used to actively affect traffic management in a way that is not directly related to congestion experienced at the user device? That is, the receiver uses artificially generated congestion signals to cause ConEx marking that affects its received traffic. This use case is fine, except that labeling the receiver->sender signaling as "congestion feedback" is no longer accurate.
A fine document that would have been enhanced by a short exposition of the ConEx references to live up to the "entry point" claim. --- Classic ASCII-art. Well done!
Section 2.4: what (if any) metric is used for rest-of-path and upstream-congestion? Is it volume? Either way, it would be good to say that.
In general, this is a good high-level description of what the community should expect in the coming CONEX drafts. I do have a few questions though... 1. The draft talks about attributing congestion-volume contributions. Shouldn't there be some description of how that would be done? That sounds like a lot of state to maintain when congestion begins to occur. 2. Conceptually, if a CONEX-aware device in a network sees 10 packets of varying size from a single source and all have these CONEX markings, how do I equate the congestion-volume contribution of that source? Are there assumptions made about the packets' characteristics *in the last RTT* based on the packets seen in the current RTT?
Interesting that congestion-volume can be measured in one of two ways. If I'm the user in s3.2 (or in the last para of s3.3) will I know which measurement technique was used? Is it up to the operator to decide which one to use?
s2.2: Maybe: Congestion-volume is a property of traffic, whereas congestion describes *a property of* a link or a path.
s3.1: 1st para: "manage" really means throttle, and "management" means throttling, right ;)
s3.1: I'm obviously not hard over on either of these: "monitor" is a much nicer term than "policer", but maybe monitor is overloaded. Also for "police traffic" maybe "manage misbehaving".
s3.3: Forgive the security guy, but "scavenger transports" refers to ... Vultured TCP (vTCP)?
Basically I've a bunch of nits for what seems like a good piece of documentation.
- There are a good few nitty English language issues, too many to list now. Better if those were fixed before the RFC Editor has to do it.
- Section 3, issue 3: what does "implementing DNS" mean? Which kinds of DNS node: stub resolver, recursive resolver, ...?
- Section 4: calling RFC 6144 "The" framework document seems a bit generic; suggest using the full title.
- Section 4: are there cases where a host can't distinguish which of the 6144 scenarios apply that might confuse matters here? Not sure.
- Section 4: is "IPv6 connection" the right term?
- Why are there no references for a bunch of I-Ds named here? It's ok to add informative references even to expired drafts IMO (I wonder if others disagree;-)
- There were changes agreed based on the secdir review. Some of those may overlap with the above (sorry, didn't have time to check properly): http://www.ietf.org/mail-archive/web/secdir/current/msg03233.html
This is a reasonably complete assessment of the problem space. I only have some comments/suggestions to put forth:
1. In paragraph 5 of section 1, I would suggest changing "... analyses all known solution proposals known ..." to "... analyzes all proposed solutions known ...".
2. In section 2, you reference WKP before you define the acronym.
3. I see several uses of the noun "analyses" used as a verb. I suggest you change those to "analyzes".
4. Section 4 has an expansion of WKP that is redundant with the expansion done earlier in section 2.
5. Throughout most of the solution description sub-sections, drafts are called out by name and author(s) without a direct reference. Is this being done simply to avoid having to publish those drafts?
6. Section 5.3 is an almost exact duplicate of Section 5.2; only the summary is different.
7. In 5.6.1, the acronyms ASM and SSM are used without expansion or context.
Just a small thing: Section 4, third bullet: Is this an attempt to avoid references to "work in progress"? There's no need to avoid it, and I'd rather see the references. Just make them informative, and they won't block this document. If they're dead (or dying) I-Ds that won't be completed, I'd still like to see the names (not just the titles) so I can find them in the archives. The same goes for Brian Carpenter's "referrals" draft, which you refer to later in that section, and other drafts mentioned in other sections.
Suggesting that elements hard-code an IPv4 address (see section 5.1.1) is perilous, and the draft referenced in that section doesn't seem to support the notion. Why is this suggestion here? Could it be removed?
I'm sure Wes saw that Jouni agreed to some additional text based on Sam Weiler jumping in with Alexey - https://www.ietf.org/mail-archive/web/secdir/current/msg03250.html - so this is just a reminder to include the text.
On the basis of a quick read and complete confidence that Benoit will work with the authors to make the draft perfect.
- Introduction: "The most significant performance parameter is the rate at which IP flows are created and expired in the network device's memory and exported to a collector" One rate, or multiple different rates? I guess different ones (but reading the document further will tell). So: "The most significant performance parameters are the rates at which IP flows are created, expired in the network device's memory, and exported to a collector" However, looking at the terminology section, it seems that you have only one benchmark metric: "Flow Monitoring Throughput". BMWG is about black-box testing, but that doesn't mean we don't have 3 different rates. Section 3.1 about "the Flow Monitoring Throughput" proves I'm right. Please improve the text.
- See email "No active/inactive timeout definitions in any IPFIX RFCs? Idle versus inactive terminology? (part of draft-ietf-bmwg-ipflow-meth-09 review)" sent to the IPFIX WG.
- Section 4.3.1: "The (*) in Figure 2 designates the Observation Points in the default configuration. Other DUT Observation Points might be configured depending on the specific measurement needs as follows: a. ingress port/ports only b. egress port/ports only c. both ingress and egress" If I refer to figure 2, there is no return traffic to the "traffic sender". Therefore, how could it be b. or c.? Am I dreaming, or did you have in the past a similar figure that explains that the return traffic could come back to the "traffic sender"? Figure 2 should be updated, or a new figure added, because, for egress, the traffic analysis must also happen on the "traffic sender".
- Section 4.3.3: "The Exporting Process SHOULD be configured with IPFIX [RFC5101] as the protocol to use to format the Flow Export data" You want a MUST here, as IP Flow = IPFIX at the IETF. Same remark for this sentence in 4.3: "The DUT MUST support the Flow monitoring architecture as specified by [RFC5470]. The DUT SHOULD support IPFIX [RFC5101] to allow meaningful results comparison due to the standardized export protocol." Same remark for this sentence in 4.4: "However if the Collector is also used to decode the Flow Export data then it SHOULD support IPFIX [RFC5101] for meaningful results" However, looking at figure 1, you mention NetFlow, others. So you want to add that an additional export mechanism MAY use the same benchmarking mechanism, i.e. NetFlow v9 [RFC3954].
- Introduction: "Monitoring of IP flows (Flow monitoring) is defined in the Architecture for IP Flow Information Export [RFC5470] and related IPFIX documents." Which documents? Do we expect the BMWG community to know about the relevant IPFIX documents? Please refer to them.
- The Abstract mentions: "This document provides a methodology and framework for quantifying the performance impact of monitoring of IP flows on a network device and export of this information to a collector. It identifies the rate at which the IP flows are created, expired, and successfully exported as a new performance metric in combination with traditional throughput. The metric is only applicable to the devices compliant with the Architecture for IP Flow Information Export [RFC5470]." However, the introduction mentions: "This document provides a methodology for measuring Flow monitoring performance so that network operators have a framework for measurements of impact on the network and network equipment." So if this document covers both "impact on the network and network equipment", it should be clearly mentioned in the abstract. Maybe this is what you mean by "export of this information to a collector", but it can be understood in different ways: impact on the Exporter, and/or Collector, and/or network.
- "A more complete understanding of the stress points of a particular device can be attained using this internal information and the tester MAY choose to gather this information during the measurement iterations." Replace "device" by "DUT".
- 2.1 Existing Terminology: I would refer to RFC5101 instead of RFC5470 when possible, because RFC5101 will be updated by RFC5101bis, and one important change in RFC5101bis will be the new "Flow" definition, from which "IP" will be removed. RFC5470 will most likely not have a bis version.
- 2.2.5 Flow Export Rate: "Definition: The number of Cache entries that expire from the Cache (as defined by the Flow Expiration term) and are exported to the Collector within a measurement time interval. There SHOULD NOT be any export filtering, so that all the expired cache entries are exported. If there is export filtering and it can't be disabled, this needs to be noted." If you use RFC2119 terms in the definition, be consistent: replace "this needs to be noted." by "this MUST be noted".
- 3.2 Device Applicability: "The Flow monitoring performance metric is applicable to network devices that implement [RFC5470] architecture." Replace the end with something similar to: "that implement RFC5101 and RFC5102, according to the [RFC5470] architecture." After reading the entire draft, I see the sentence I was looking for in section 4.3: "The DUT MUST support the Flow monitoring architecture as specified by [RFC5470]. The DUT SHOULD support IPFIX [RFC5101] to allow meaningful results comparison due to the standardized export protocol." You should have something similar in section 3.2.
- NetFlow is mentioned. At least refer to RFC3954.
- "The Cache entries are expired from the Cache depending on the Cache configuration (i.e., the Active and Inactive Timeouts, number of Cache entries and the Cache Size)" The number of Cache entries is not a configuration parameter.
- "The DUT's export interface (connecting the Collector) MUST NOT be used for forwarding the test traffic but only for the Flow Export data containing the Flow Records. In all measurements, the export interface MUST have enough bandwidth to transmit Flow Export data without congestion. In other words, the export interface MUST NOT be a bottleneck during the measurement." I guess that the Collector (interface) also MUST NOT be a bottleneck during the measurement, exactly like you wrote for the traffic receiver: "The traffic receiver MUST have sufficient resources to measure all test traffic transferred successfully by the DUT."
After reading section 4.4, I see this sentence: "The Collector MUST be capable of capturing the export packets sent from the DUT at the full rate without losing any of them." This is a source of confusion for me in this draft. It seems that the information is fragmented throughout the draft... Therefore, the reading flow is sometimes not easy...
- What is the difference between "Any such feature configuration MUST be part of the measurement report." and "All configurations MUST be fully documented."? Should the latter say "All configurations MUST be fully documented in the measurement report"?
- "The DUT configuration and any existing Cache MUST be erased before application of any new configuration for the currently executed measurement." Replace Cache by Cache entries?
- Section 4.3.2: "The Cache Size available to the DUT MUST be known and taken into account when designing the measurement as specified in section 5." What about Metering Process features that increase the cache size? Is this allowed? Should this be documented in the measurement report? Or should the cache size be set up to its maximum before starting the measurement?
- Section 4.3.2: "The configuration of the Metering Process MUST be recorded." -> MUST be included in the measurement report?
- Section 4.3.3: "The templates used by the tested implementations SHOULD be analysed and reported as part of the measurement report. Ideally only tests with same templates layout should be compared." "Template layout" = Template Record in RFC5101. Please add it to the terminology and use the term.
- Section 4.3.3: "Only benchmarks with the same transport layer protocol should be compared." should -> SHOULD
- Section 4.3.4: For all the examples such as the following, please include the IPFIX Information Element.
"Flow Keys: Source IP address, Destination IP address, MPLS label (for MPLS traffic type only), Transport layer source port, Transport layer destination port, IP protocol number (IPv6 next header), IP type of service (IPv6 traffic class)" Rationale:
- What does the MPLS label mean? In the IPFIX IANA registry, we report mplsTopLabelStackSection, which is a combination of "the Label, Exp, and S fields". Note: clarify this as well in section 4.3.6.
- Packet counters, byte counters: we have multiple IEs for those. Which one should the DUT choose?
- IP addresses: IPv4 and/or IPv6?
- 4.6 Frame Formats: "Flow monitoring itself is not dependent in any way on the media used on the input and output ports. Any media can be used as supported by the DUT and the test equipment." What about the export interface?
- Section 4.8: "The used packet size SHOULD be part of the measurement report" Why not a MUST?
- Section 5.1: I read the following sentences multiple times, and still could not understand the link between the first two. "The number of unique Flow Keys sets that the traffic generator (sender) provides should be multiple times larger than the Cache Size. This ensures that the existing Cache entries are never updated before Flow Expiration and Flow Export. The Cache Size MUST be known in order to define the measurement circumstances properly."
- Section 7 Flow Monitoring Accuracy: Don't you have to say a few words about the accuracy of a DUT that also does forwarding? I.e. the traffic analysis at the "traffic receiver" in figure 1 must also check that no packets were lost in the case of ingress monitoring. Note: there is also the case of egress monitoring. Interestingly, bidirectional is mentioned in "Appendix A: Recommended Report Format", but not a single time in the draft.
- Appendix A: it would be great to mention which entries are SHOULD and MUST. Note: not sure why it's an appendix, as there is normative text referring to each entry.
- Appendix A: it could be very helpful to poll the IPFIX-MIB (http://tools.ietf.org/html/draft-ietf-ipfix-rfc5815bis-03, currently in the RFC-Editor queue) and IPFIX-CONF (http://tools.ietf.org/html/draft-ietf-ipfix-configuration-model-10, currently in the RFC-Editor queue) as a MAY in the measurement report. Note: [IPFIX-CONF] is almost ready, simply waiting for PSAMP-MIB. I believe it makes sense to wait for this RFC.
- Appendix B: not sure whether the use of "SHOULD" in the appendix is appropriate. To be checked.
- Appendix B.6 Tests With Bidirectional Traffic: not sure why this is an appendix. The recommended report mentions "Traffic Direction: unidirectional, bidirectional; Direction: ingress, egress, both" ... and the bidirectionality is only mentioned in the appendix?
- Section 5.2 Traffic Generation: "The traffic generator needs to increment the Flow Keys values with each sent packet. This way each packet represents one Cache entry in the DUT Cache." Here is a comment I made to you years ago: you will find surprises if you increment the IP addresses by one in your generator, as opposed to random traffic, as the hash functions are not optimized for incremental IP addresses. You should say a few words about this.
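The incremental-address pitfall is easy to demonstrate; a small sketch follows (the hash here is deliberately simplistic and hypothetical, just to show how a hash that ignores the low-order bits collapses sequentially incremented addresses into one cache bucket, while random addresses spread out):

```python
import random
from collections import Counter

def bucket(ip: int, nbuckets: int = 256) -> int:
    # Hypothetical weak flow-cache hash that only mixes the upper 16 bits.
    # Real implementations differ; the skew effect is the point.
    return (ip >> 16) & (nbuckets - 1)

# Sequentially incremented source addresses, as a naive generator emits
sequential = [0x0A000000 + i for i in range(4096)]
# Randomized addresses for comparison
random.seed(0)
randomized = [random.getrandbits(32) for _ in range(4096)]

seq_buckets = Counter(bucket(ip) for ip in sequential)
rnd_buckets = Counter(bucket(ip) for ip in randomized)
# All 4096 sequential addresses land in one bucket; random traffic spreads out.
```

A sentence in section 5.2 warning that incremented Flow Keys can interact badly with the DUT's cache hash, and suggesting randomized keys as an alternative, would cover this.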
A statement like this in section 1 begs for a little more explanation: The most significant performance parameter is the rate at which IP flows are created and expired in the network device's memory and exported to a collector. Is there a reference or some other justification for this statement? Could this statement simply be elided without losing the importance of the document? And here are a couple of minor editorial or clarification suggestions: In section 3.4.2: Mainly the Flow Export Rate caused by the test traffic during an [RFC2544] measurement MUST be known and reported. s/Mainly/Most importantly/ ?? In section 4.9.1 and 4.9.2, is it intended that the destination IP address recycles to the address for stream 1 after stream 10000?
As usual, I find most of the uses of 2119 language in a document of this sort to be bizarre: I don't understand how a statement like "MUST be part of the measurement report" constitutes something "required for interoperation or to limit behavior which has the potential for causing harm", and is not simply trying "to impose a particular method on implementors where the method is not required for interoperability." I'm hard pressed to find any occurrence of 2119 language in this document that is used as 2119 intended.
Please don't include citations in the Abstract. --- I found just one use of RFC 2119 language in this document. I suggest fixing it to lower case and removing Section 1. --- It would be nice to have some citations in the Introduction and Terminology sections. --- I was slightly confused as to whether this is intended as an applicability statement (how you use ANCP for PON) or the definition of the extension of ANCP to PON (see Section 4). It might be nice to harmonize the language across the document. --- You are to be commended for your skill with ASCII-art. I will use your document as evidence that no other graphics tools are needed!
The IETF LC announcement appears to have been missing the IPR declaration, which is RAND with possibly a fee, and was only filed on March 27th, 3 days before the end of IETF LC. I think this one has to go around the loop again, or am I missing something?
I second Stephen's DISCUSS: Thomas Haag is both an editor on the document and an inventor on the disclosed patent. Also: The ToC and the section numbers appear to be confused: Section 9, Security Considerations, on page 31, comes after section 10, Access Loop Configuration. There's another Section 10 following it, and neither of those sections are in the ToC. Also, Section 13, Acknowledgments, is empty... that's OK if it's right, but is there really no one you want to acknowledge here?
I agree with Adrian's comment: 2119 language is unnecessary in this document and should be removed. I also agree with regard to Stephen's DISCUSS; this must be re-last-called with a pointer to the IPR disclosure. I must say that I'm of two minds about this document, neither of them good. On the one hand, the document seems to be applying ANCP to a particular technology (in this case PON), and I therefore don't understand why it isn't going for Standards Track. On the other hand, from up here in the nosebleed section of the layers, the entirety of this document looks like it is either all layer 2 stuff or is a big giant walking layer violation. I really don't understand why the IETF is devoting WG time to working on technology like this. I was sorely tempted to simply Abstain on this document. I don't see what it adds to our document series. Perhaps someone can explain.
1) It's entirely possible this is somewhere in the other ancp specs and I missed it. The draft claims: Fundamental to leveraging the broadcast capability on the PON for multicast delivery is the ability to assign a single encryption key for all PON frames carrying all multicast channels or a key per set of multicast channels that correspond to service packages, or none. Is this referring to the key used for IPsec/IKEv2 as required by RFC 6320? How are you distributing the keys to everybody? It seems (to my untrained eye) like stream access in ancp is done through a join request and white/grey/black list checking not based on any kind of key material.
s11: Maybe strike the first used and add protocol after signalling in the following: Here an appropriate mechanism to protect the used signalling needs to be used. NEW: Here an appropriate mechanism to protect the signalling protocol needs to be used.
Thanks for addressing my Discuss and Comments
What is the expectation for stability of the URI and URL elements of a registration? Should an expert disallow e.g. bit.ly or a blog URL? I think it'd be good to say something here. I don't care what you choose for any of the reasonable choices:-)
I agree with Pete's DISCUSSion and am also not a fan of 2119 keywords being used in this document. I would like to better understand if the concept of a LoA is consistent across the types of frameworks mentioned in this document. The introduction says that the registry will support LoAs from a variety of frameworks. However, the description of the Context Class in Section 3 talks about XML Schemas compliant with SAML 2.0. Is this registry limited to frameworks compliant with SAML 2.0? If so, this needs to be specified.
I am not sure why the second paragraph in the Introduction begins and ends with underscores. Are there any existing LoAs that should be pre-populated in this table? If so, they should be shown in section 6.
The Gen-ART Review by David Black on 1-Apr-2012 led to some discussion and agreement on some document updates. The updates have not appeared yet. The review can be found here: http://www.ietf.org/mail-archive/web/gen-art/current/msg07309.html
I agree with Pete's DISCUSS, as it refers to Section 5. I see no problem with using an Informational document to do what Section 4 does -- IANA will set up the registry as stated, and the terms specified here aren't meant to be used beyond this document, so Informational is fine. But Section 5 is telling implementors of *something* what they MUST and MUST NOT do, and Informational doesn't seem right for that. The second paragraph of Section 5 leaves me shaking my head. I'd like to see it be more clear about what one MUST NOT infer. It strikes me as a really wishy-washy statement as it is. Section 7 is missing something after "An implementor of". (I'd also hyphenate "level-of-assurance URIs", to make it clear that it's a compound modifier.) And I agree with Stephen's comment that the definition of "URI" in Section 3 should say something, one way or another, about what expectations do or don't exist on the lifetime of the URI. Also, is it acceptable/expected/to-be-avoided to have multiple URIs registered that define the same LoA profile? Given that the URI is the registry key, it seems important to expand a bit on this stuff here.
[I don't feel very strongly about this, so if everyone else is OK with it, I am happy to clear this DISCUSS. But I did think it was worthy of DISCUSSion.] If this was simply the creation of a registry, I wouldn't have thought twice about its status as Informational. But section 4 is giving a particular process and policy for additions to the registry and section 5 is attributing semantics for protocol users (both of them using 2119 language just to make the point). Doesn't that mean this should be a BCP since it's defining IETF policy and procedure?
I am not a fan of using 2119 language in the registration template. You are not giving instructions to implementers on interoperability or damage to the network; this is for registrants and IANA. And in all cases I can find, it is simply unnecessary. I suggest:
OLD: The following information MUST be provided with each registration:
NEW: The following information must be provided with each registration:
OLD: Informational URL: A URL containing auxilliary information. This URL MUST minimally reference contact information for the administrative authority of the level of assurance definition.
NEW: Informational URL: A URL containing auxilliary information. At a minimum this URL needs to reference contact information for the administrative authority of the level of assurance definition.
OLD: Note that it is not uncommon for a single XML Schema to contain definitions of multiple URIs. In that case the registration MUST be repeated for each URI. Both the name and the URI MUST uniquely identify the LoA.
NEW: Note that it is not uncommon for a single XML Schema to contain definitions of multiple URIs. In that case a separate registration is to be used for each URI. The name and the URI are to uniquely identify the LoA.
OLD: The name MUST fulfill the following ABNF:
NEW: Names are defined by the following ABNF:
OLD: The following ABNF productions represent reserved values and names matching any of these productions MUST NOT be present in any registration:
NEW: Names that correspond to the following ABNF productions are reserved values and are not to be registered:
Section 5 is confusing (it has drawn comments from several reviewers). Can it be reworded to avoid the confusion that's been expressed? Registries are often used to help implementations/deployments avoid accidentally using the same name for two different purposes. The description here does not seem to consider that part of the motivation for having a registry - in fact, it goes to some length to tell users to expect there to be names in use that aren't listed, and by inference, might collide. Discussing the implications of such a collision may help avoid them.
I agree with the comments posted about the second paragraph of section 5. If it needs to stay (even in a rewritten form), please consider providing an example of the kind of implied meaning that a user of the registry must not assume.
It's not clear to me that this needs to be documented as an RFC. Maybe a FAQ for meeting hosts?
re Mass transit: It would be good if there were an explicit note on safety. It would also be good if there were a note as to whether the signs include place names in a Western character set.
Section 3. Helpful information: There are a number of general categories of information listed below. Some of it, such as Sections 3.1 and 3.3, is necessary for travel; the rest can be considered nice-to-have. If you changed the Table of Contents (TOC) to make the current 3.3 into 3.2, then you would have a nice order in the TOC, sorted by importance/relevance, which you could stress in the document.

3. Helpful information
   3.1. Travel
      3.1.1. Transit between the airport or train station and primary hotels
         3.1.1.1. Taxi information
         3.1.1.2. Mass Transit
      3.1.2. Getting around near the conference venue
   3.2. Regional/International considerations
      3.2.1. Health and Safety
         3.2.1.1. Water availability
      3.2.2. Money
   3.3. Food
      3.3.1. Restaurants
      3.3.2. Other Food items
   3.4. Communications and electronics
   3.5. Weather
   3.6. Fitness
This seems more like a nice webpage to me than something that needs to be an RFC.
You might note somewhere that meeting-specific sites tend to go away, as has happened with ietf75.se, which is now something to do with poker (or at least the advert offered there when I looked was).
I support Stewart's DISCUSS. The distinction between this document and the other LISP documents, which are also EXPERIMENTAL, is subtle and likely to be lost on the reader.
I am putting a discuss on this because I think that the IESG needs to talk about this draft. I will clear the discuss on the call. I think that the document needs some text in the introduction making it clear that the purpose of this draft is to record some early thoughts on this subject by the author. Otherwise the RFC will be too easily confused with the ordinary output of the LISP WG. The approach described seems a viable way of running LISP and thus I am not sure why this is not being taken through the WG or as AD sponsored. I understand the history is that this work pre-dated the WG, but there is now a WG.
I am surprised that the author did not tackle the database version wrap problem by providing some really large number that could never wrap (128 bits springs to mind). Given the size of the payload, the size of the database header seems unlikely to be an issue.
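The intuition behind "a really large number that could never wrap" can be made concrete with a quick back-of-the-envelope calculation. The update rate below is a deliberately extreme hypothetical assumption, not taken from the draft:

```python
# Illustrative sketch (not from the draft): how long a monotonically
# increasing database version counter lasts before wrapping, assuming a
# hypothetical, deliberately extreme rate of one billion updates per second.
RATE = 10**9                     # assumed updates per second (hypothetical)
SECONDS_PER_YEAR = 365 * 24 * 3600

def years_until_wrap(bits: int, rate: int = RATE) -> float:
    """Years before a `bits`-wide counter overflows at `rate` updates/sec."""
    return (2 ** bits) / rate / SECONDS_PER_YEAR

print(f"32-bit:  {years_until_wrap(32):.3e} years")   # wraps within seconds
print(f"64-bit:  {years_until_wrap(64):.3e} years")   # several centuries
print(f"128-bit: {years_until_wrap(128):.3e} years")  # effectively never
```

Even at that implausible update rate, a 128-bit counter cannot wrap on any realistic timescale, which is the reviewer's point: the extra header bytes buy freedom from serial-number-arithmetic complexity.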
- I think a paragraph putting this into context (as per Eliot's mail) would be very valuable for the reader, who might otherwise think this is the "mainstream" experiment.
- Do you really want to refer to ITU-T X.509 rather than RFC 5280 for certificates?
- I think you could note that key roll-over, and key distribution generally, are for future study.
- You could even mention the potential for using DANE as a different PKI, as another possibility for future study.
- CMS is widely deployed (all S/MIME clients include it), but you could still say PKCS#7 is more widely supported by libraries and tools.
- There doesn't seem to be any way to limit an authority to certain EIDs and/or RLOCs, such as is done by SIDR. Might be worth noting?
- If you need revocation checks as part of signature validation, then you probably ought to say that that's not included in the analysis in Section 5.
I agree that this document should be published as a record of one way of doing the LISP mapping. The following commentary is really meant for the IESG and the ISE... Given that there does not appear to be any effort to actually implement this specification, does it make sense to publish it as Experimental? It would seem that Informational would be a fine way to document this approach. If I follow some of the arguments that Pete and Ron have made recently, I would even support the publication of this document as Historic, but I am not sure if the ISE can do that.