2.2.2 Improved Cross-Area Review (icar)

NOTE: This charter is a snapshot of the 59th IETF Meeting in Seoul, Korea. It may now be out-of-date.

Last Modified: 2004-02-11

Chair(s):
Joel Halpern <joel@stevecrocker.com>
Mark Allman <mallman@icir.org>
General Area Director(s):
Harald Alvestrand <harald@alvestrand.no>
General Area Advisor:
Harald Alvestrand <harald@alvestrand.no>
Mailing Lists:
General Discussion: icar@ietf.org
To Subscribe: icar-request@ietf.org
Archive: https://www1.ietf.org/mail-archive/working-groups/icar/current/maillist.html
Description of Working Group:
The WG will work out mechanisms for improved cross-functional review within the IETF, addressing the issues identified by the PROBLEM WG. This includes a better community review, as well as more structured pre-IESG review that may be used to improve scalability of the IESG review function.

Definitions:

o Cross-functional review: document review covering different aspects of Internet technology, including those represented by WGs within different IETF areas.

o Community review: document review performed by individual IETF participants and not caused by their responsibilities within the IETF management structure.

o Structured review: more formal, role-based document review performed by individuals assuming certain responsibilities (such as WG chairs, directorate members, and IESG members).

It is an explicit goal of the WG to come up with mechanisms encouraging early review of the documents while ideas are still in the formation stage. An early review is best for catching architectural problems while they're still relatively easy to solve. In particular, many cross-functional interactions can be spotted and dealt with, thus avoiding many "late surprises". A final review (currently done by the IESG) can catch remaining cross-functional interactions, as well as deal with overall quality issues.

The WG will cooperate with others in starting and evaluating experiments with early community and structured reviews. The evaluation of such experiments may be published as Informational RFCs if the group so desires.

The WG will also coordinate with other WGs involved in the IETF reform process on proposed changes to the IETF Working Group operations and Standards process if those are considered necessary.

Goals and Milestones:
Feb 04  Publish Internet Drafts on improved community and structured reviews
Sep 04  Submit draft on improved community review to the IESG for publication as BCP
Sep 04  Submit draft on improved structured review to the IESG for publication as BCP
Jan 05  Evaluate WG progress and potential; close or recharter
No Current Internet-Drafts
No Request For Comments

Current Meeting Report

ICAR (Improved Cross Area Review) WG
Minutes


Chairs:
Joel Halpern  jhalpern@megisto.com
Mark Allman mallman@icir.org
with Alex Zinin guest starring for Mark, who could not make it.


Minutes: April Marine (who apologizes for missing or messing up names)


IETF 59, Seoul, South Korea.  March 3, 2004 9:00-11:30 am


Welcome:


Joel: welcome. Co-chair Mark could not make it to this one, so Alex Zinin 
will help to ensure the chairs listen well.


What is icar?
 - provide a place for discussing cross area review
 - propose useful experiments in early and late stage review
 - select recommended BCPs on review techniques


We need to make recommendations on how to review early, so we can get on with 
trying things out.


WG email is icar@ietf.org.  Please send pointers to drafts the group 
should consider to that email list.  If you have a draft you think the 
group should consider, please use the naming convention: 
draft-<person>-icar-<name>-<nn>.txt.  You do not need approval to use that 
name, and it is an easy way to find relevant docs.  Once the group agrees on 
approaches to recommend, we will use the normal 
draft-ietf-icar-... naming convention.  At this point, we want to be aware of 
all docs, all ideas, half-baked or not.


The group will have an issues tracker.


Agenda:


First, existing experience. Reports from current efforts:
 - SIRS,  Crocker
 - MIB Doctors,  Wijnen
 - Security Area Directorate,  Bellovin
 - Routing Area Directorate, Zinin


As they speak, consider: What constitutes a good review?
How can we encourage the use of early review mechanisms?



SIRS - Dave Crocker


SIRS is based on an idea of senior IETF folks doing external reviews 
(reviews external to the WG).


Requests for review come from WGs or elsewhere. Contact:
 graybeards.net/sirs
 ietf-sirs@yahoogroups.com


SIRS did some reviews; they were quite informal but had some structure, given the 
limited resources.


We don't know whether early reviews would have reduced the need for later 
reviews.


There were no directions on what should be in a review and no evaluation of the 
results of a review.  For example, icar is explicitly looking for cross-area 
reviews, but SIRS did not sort out reviews by type.


Rumors started very early that SIRS had failed. Getting reviews of the 
nature and quality we need is an evolutionary process.


The fact that SIRS got early signups was an indicator of interest.


The criteria for who should/can review is a tough debate.


There was a low number of review requests, but that could be due to the rumor of 
failure.


Need a sustained effort.


Need a central point for tracking review efforts, accessing reviews, and 
seeing collections of them. Such infrastructure takes time and needs a 
real tracking system and someone to watch it.


Some types of reviews that need to happen:
 - cross area is one
 - review at design stage vs. later review


MIB Doctors - Bert Wijnen


psg.com/~bwijnen/iCarMib59.ppt


When Bert first became OPS AD, some MIBs could not even be run, so he 
decided that before approving any MIB document, it would be sanity checked.


So he started organizing some "MIB doctors" to check syntax and so on. 
www.ops.ietf.org/mib-doctors.html


They advise the AD on what to do regarding MIBs and can review generic IETF docs 
from a network management point of view. MIB doctors can also advise WGs.


Guidelines are available in ID 
draft-ietf-ops-mib-review-guidelines-02.txt.


Now, this review takes place too late in the process, which is 
problematic.


They try to do consistent MIB reviews so the result doesn't vary depending on 
which reviewer you get.


They send comments to the authors, the WG mailing list, and the relevant ADs so that 
the doctors can follow up on comments or fixes.


The focus is on MIB syntax and structure/reuse, not on the technology 
specifically, so you need to understand what the doctors do and don't do, and 
what they know and don't know, to understand what is and isn't reviewed.


Originally, this was just a volunteer effort, which didn't quite work. It's 
better to ask a specific doctor to review a specific MIB. However, that creates a 
scaling issue because currently the AD is looking for reviewers 
personally.



Security Directorate - Steve Bellovin


(no slides)


History of the security directorate:
 - first met for lunch to discuss issues; not so effective
 - then a mailing list/web page that listed docs and asked for 
volunteers to read them -- a big failure; no one volunteered, so Jeff 
(Schiller, former security AD) gave up.


Steve re-started the Directorate and has been asking specific people to 
review a specific doc. He is also tracking who responds with reviews and what the 
RTT is, and using this to adjust the membership of the security directorate.


It's not perfect - people don't always have the expertise and/or they get 
overloaded.


Russ Housley (co-SEC AD) adds that pointing to a particular person is the 
key, vs. just asking for general help. Since we've been doing that, we've gotten 
about 80% of the reviews we were asking for, vs. 5%.


Steve and Bert have to go to other WGs -- any questions for them now? None.


Bert: MIB Doctors haven't done much in early cross-area review. They once 
started looking at -00 docs, and that would be useful except that the 
doctors use several tools to analyze a MIB, so if you want a MIB doctor 
review early, then the doc version has to be mature enough for the tools to 
work.



Routing Directorate - Alex Zinin


The routing directorate has about 20 members today.  It was started by the 
current ADs, Bill and Alex.  Members are selected by the ADs and the 
group's official status is as an advisory body for the ADs.


rtg-dir@ietf.org  (closed list)


The Directorate is mainly used to review docs coming out of WGs to the 
IESG. Sometimes docs go from IESG to  the directorate when ADs need 
advice.


General appeals for help rarely work.


Each doc is assigned 2 reviewers, depending on topic and expertise of 
reviewers.  A deadline of usually 1 or 2 weeks is set.


Usually comments go to the WG list or authors, but sometimes they are 
discussed on the rtg-dir list first, or sometimes the AD proxies comments to the 
WG.


The ADs' comments/feedback help the directorate learn what the IESG is 
looking for.


They log comments in the tracker so nothing is lost and comments can be followed up on.


The directorate review usually happens during the AD review period. 
Recently though, they have asked the chairs to cc the routing 
directorate on the final WG last call, so there is less of a delay.


Sometimes Alex forwards the document part of the IESG agenda to the 
rtg-dir so that folks know what is coming up.


This is considered a "late" review, so they ask the directorate 
reviewers to look for show stoppers, plus ID-nits.


They separate technical comments from editorial comments.



Presentations done. Next, discussion:


What constitutes a good review?
 first, for early reviews
   - is the number of reviews significant?
   - is the range of skill sets among reviewers important?


Harald: what you get in a review depends on what you are aiming at. For 
example, directorates are helping the AD make a decision. SIRS is very 
informal, so the review you get varies, and you don't really know if you 
got what you wanted.


If you want a good review: figure out what that means. What is the 
purpose of the review?


How many reviews are needed?
Are several different people needed?
Is breadth or depth of skill set of reviewers important?


Dave Thaler: A review should include both problems and 
suggestions for improvement.  This is especially important in the early 
review process, as it is easier to make large changes at that stage.


Ole: early reviews are very important, for both quality and timing. They 
could speed up work on some things and save time on rat holes.  Broad 
reviews are also important.


Cathleen Moriarity: Don't get thrown in the wrong direction: 
a reviewer should fully understand the problem in order to make useful 
suggestions. Some suggestions could send a person in the wrong direction.


Joel: that raises an interesting question.  If we raise the bar too high for 
reviewers, then we might discourage reviews. But it is important for the 
reviewer to understand what problem is being addressed and the context of the 
problem.


Dave T: Bert, on jabber, regarding the question of whether the number of reviews is 
significant, says 2 are better than 1. Dave agrees that 2 is 
significantly better than 1, but after that the advantage drops.


Dimitri Papadimitriou: do reviews suggest areas that might come up in 
the future as problems in that space? e.g. technical content might generate 
further issues?  Joel: that sounds useful, if the reviewer can note future 
implications.  These might be things that people working on a topic don't see 
because they don't have the larger context that reviewers might have. It might 
be "obvious" to some that such a review is needed, but not to the folks 
working on the topic.


Loa Andersson, on the scope of review: if the reviewer has to fully 
understand the problem, you probably limit the number of reviewers to 
folks who are in close cooperation with the folks writing the docs. We might 
want to have expertise coming in from different areas to understand the 
scope of what you are doing.  e.g. if the MPLS folks ask for a security 
review, they need to understand whether the feedback that comes back is from 
someone who knows X vs. Y.


Harald: I have a review team experiment that is one month old. They read a doc 
and see if it is in good English.


We need docs that are technically sound, but we also have a 
responsibility to write docs that are comprehensible -- can you figure out the 
problem that they are trying to solve?  Useful feedback can come from 
folks who don't totally understand the technology of an area.


Andersson: good point.


Dave Crocker via jabber: in the near term, focus on the quality of issues, what 
topics should be covered, what sorts of comments are helpful, etc. Maybe a 
BCP?


Ole: supports Harald's point about language, because that is part of the 
experience needing to be quite broad; e.g. getting comments from folks who 
tried to understand what it is all about is useful too.


We'll have an interesting time writing a doc trying to capture depth vs. 
breadth in experience.


Harald: the IESG discovered that when folks start reviewing, they don't 
understand what to look for, but after a while it's much more obvious. 
Reviewers are made, not born.


Tove: a comment on review team names -- "graybeard" and "SIRS" seem to 
exclude some folks :)


Whatever the group recommends will have a neutral name. It is important not to 
exclude folks.


Greg Danly: Re the issue of readability: the first RFC is just like a unix man 
page! :) It is hard to write something that is technically correct but also 
approachable by tech professionals in the field -- a fairly wide 
audience.  If we want docs that are easy to read, then that should be part of the 
review process.


Alex: one addition: it is important to think about the role of the review 
process in the IETF consensus process.  We don't want a WG to ignore 
comments just because it already came to a consensus. We need a balance of WG 
ideas and input from other areas.


Any comment on late reviews?  Do comments also apply to late reviews?


Dimitri: early and late reviewers could be different people. Later 
reviews could be more implementor-focused.


Alex: early review needs as many eyes as possible. Later review is 
touchier; e.g. if the directorate feels something was addressed but the IESG 
doesn't agree, you have a problem.  It's important to have 
consistency with the IESG approval process.  AD delegation of review needs to be 
done well.


Greg D: I think that there is a conceptual difference between early and 
late review. A late review could show conceptual problems or changes if no 
early review was done. If an early review is done, then the late review can 
clarify or show cross-group consensus.


Dura..? (Nortel): Early review should be done in the WG, to catch syntax and 
language issues.  Late reviews should involve technical aspects as well as input 
from other areas; early review stays within the WG.


Ole: Don't completely agree. The real advantage is to have review 
outside of the WG.


Next agenda item: discuss docs that currently talk about early review:
 SIRS experience
 allman--problem-wg-revcomm-00
 draft-ietf-ops-mib-review-guidelines
 draft-carpenter-solutions-sirs-02.txt


Joel: How can we identify docs for cross-area review? Here "early" means 
early cross-area review, not the within-WG review.


Loa: agreed. If you say we should have folks not involved in the WG do early 
cross-area reviews, you really need someone with particular 
expertise. They may be in the WG or not; don't exclude people who are in the WG.


James Polk: Is there a repository, so if I write a draft I can post it and 
offer it for review? And tracking of progress?  Joel: there is not currently such a 
repository, or even a process.  This WG will figure out what needs to 
happen and what tools are needed.  James: I think that a posting process is a 
good idea, even though there is no way to ensure someone will review a 
draft.


Harald: on tools: given our limited-resource environment, we'd like to have 
some experimenting done before committing resources to the final choice. We 
encourage folks to experiment, because the experience of 10 people doing 10 
reviews, using any mechanism to check them, is valuable.  The "genauth" 
experiment is visible on a website, with a mailing list of reviews.  I know more about 
the tools needed after having tried a bit than I did before.


Jan?: what is the relation between the early review process and advisors for a WG?


Joel: good question, and the answer is probably up to the WG. We need to work with 
the ADs and directorates if we want to tie in to their advisors.


Russ: getting security advisors for WGs is really hard because the time 
commitment is greater than a single review.


Jabber, Dave Crocker: we should formulate a guide for reviews and add a 
Sunday tutorial. The BCP would be a separate output from the group. 
Also: the SIRS web site/list suffered from a lack of better tools, but had limited 
resources.


Joel: many WGs did not ask for SIRS reviews. Why? Did they need better 
incentives?  We need to make sure WGs pay attention to feedback.  Are 
there incentives or motivations for encouraging WGs to get early 
reviews?


Could this in some way speed up later IESG review? That could be an 
incentive.


Alex: the more eyes that see the doc, the faster it gets through the IESG.


Dave: I strongly suggest the review function be independent of the IETF 
administrative management team -- strictly for technical commentary.  Also, 
getting review requests and a BCP will help motivate WGs, as will an IETF 
management culture encouraging early reviews.  A WG should have a doc that is 
like a technical proposal, more than an idea but with less technical detail: "the boxes 
and arrows draft."


Harald: if you want to create a body of power and responsibility outside of 
the IETF management structure, be careful what you ask for. Creating such a 
body with real power outside of the IETF management structure would take 
longer than other approaches.


Dimitri: what about cross-body reviews, e.g. with the ITU? How do we deal with 
those reviews? Can that process be integrated with this group's 
recommendations?


Alex: We had a comment like that during charter review, to "improve 
coordination with other standards bodies or external standards body 
review" etc. But we should be careful about that, as there are a lot of 
details that could get in the way of the process.  I think we should 
concentrate on our internal review first, because we understand those 
processes much better, before we interact with other SDOs.  Joel: good 
point; that is not within the charter as written, but it is an important 
question.


Ole: focus on internal issues, but don't forget cross relations. If other 
SDOs have concerns or someone from there comes, address it.


Dave C's response to Harald: what power does a review group have? And what 
benefit is there in making the review group subject to IETF power? 
Harald: if review is to make any sense, it must address the quality, 
relevance, and timeliness of IETF output. The management of the IETF 
exists to foster that quality, relevance, and timeliness.


We can already ignore reviewers? Harald: if you have reviewers that make a 
difference, they have power. If you have reviewers that don't make a 
difference, they don't have power.  We have both kinds.


Some docs are already out there:


draft-zinin-early-review-00
draft-iesg-alvestrand-twolevel-00



Alex: Area Review Teams doc


The motivation for this doc was the same as what led to the creation of the ICAR working 
group.  We want to improve cross-area review in order to improve the quality of 
documents and to get that review before docs reach the IESG. Higher 
document quality should reduce IESG approval time. Another goal is to make reviews 
consistent with the type of review the IESG does later.


This doc assumes the IESG is still responsible for final approval.  ADs are 
personally responsible to the community for the quality of approved docs.


Each Area would have an Area Review Team with a well known mailing list 
such as <area>-review@ietf.org.


The team would include at least ADs and other experts selected by ADs.


ADs delegate review to team members.


Selection is arbitrary but may include an open call for 
nominations.


Two members assigned to each doc.


Other members are asked to review, but not required to.


The review is initiated by the WG chairs or by the AD early in the process, or 
automatically started for IETF Last Call.  Team member review requires AD 
approval.


The mechanism is an email request; the AD approves it and assigns 
reviewers from the ART.
Comments go to the WG or the authors, and the WG looks at the comments and addresses 
them as usual.


Then, as docs are updated, the same reviewers review again.


During IESG review, the AD can bring up comments that haven't been 
addressed or override a reviewer's comments.  This is consistent with the 
personal responsibility ADs have for the quality of the doc.


Starting the review during IETF last call means all areas' ARTs can look at
it if necessary.


Documenting review results is important and vital for follow-up.
WG chairs are responsible for issue tracking and summarizing resolutions.


For individual submissions, either the AD or an ART person tracks issues.


The model of trust, responsibility, and accountability does not change.
Delegating review does not delegate responsibility for content.


Motivation and credit for team members are important, but it is not clear how to provide them.


Questions:


Loa: if the team is picked personally by the AD, does the change of an AD cause a big 
change in the review team?  A new AD will probably look at the members and 
make changes or add some new people. That has to do with the personal 
responsibility of the AD.
Loa: so the review team is appointed by the ADs?
Alex: yes


Dimitri: what if docs have several WG last calls? Alex: some WGs use "last 
call" to really initiate early review, so WG chairs will use judgment as to 
when to request cross-area review.


Ole: I'm worried about how reviewers are selected. If it is only to offload 
AD work, then that may be good, but we also miss other views -- not just 
an extended AD.  It's important to see diversity of opinions on review 
teams, especially if their review is equal to the decision rather than just input.  
Alex: yes, there is a balance here.  Most ADs probably are interested in 
diverse opinions. Not sure how to require that, but it seems we have the same 
goal.


One idea was to have the ART members selected by the nomcom, but that has a 
bunch of drawbacks. Alex is working from the idea that the current structure stays 
the same.


Greg: there's a perception that the ART teams might be a small personal army 
for the ADs. However, when someone is asked to review a doc, they are 
using their own technical judgment, which may or may not align with the 
AD's. There is no guarantee of outcome. It is based on individual 
integrity.  So even hand-selecting the ART does not guarantee any 
particular result.  But getting folks who are qualified and know the area is 
the most important thing.


Alex: yes, it's a bi-directional process. The comments of reviewers 
influence the AD. Then there is also the process of educating each other.


Joel: this is one proposal we can consider for late-stage formal review.


Harald: also has a draft.  The relevant section in the draft is very short. 
Mainly it notes how the IESG works when reviewing docs, and that 
it doesn't have to be those people doing that review. It suggests we put 
together teams where the collective judgment of the team covers all the 
aspects that need to be covered for late review -- for when a WG thinks that 
it has covered everything.  In the IESG, people tend to learn from each 
other and adjust to each other's reviews, knowing what the other folks will 
usually catch.  Put together review teams that are cross-area from the 
beginning, vs. having one review team per area: vertical (Alex) vs. 
horizontal (Harald) approaches.  Harald suggested teams be able to 
approve documents, but that is probably out of scope for this WG. It 
might still be a worthwhile direction to experiment in.


Dave T: How is it determined which team approves a doc?  
Harald: Throw dice, except that if a group of docs refers to each other, the 
same team should cover them.

Dave:  but not all review teams will be equal in terms of quality or 
direction. If they have the power to approve, one team could say yes 
and another no to the same doc.  We need a central management point of 
ultimate responsibility.  A WG can't shop around now, but might be able to do so 
under Harald's approach. 
Harald: yeah, a valid concern. If we want to have judgment exercised by more 
than one group of people, we'll have to expect and deal with 
differences. We have to have a place in the system that can tell a review 
team it is out of line, so we need a central authority.


Greg Danly: we need to look at the cross-area aspect. It seems the ART proposal has 2 
reviewers per area, but that is not as much cross-area review.


(name missed): not sure how we can come up with a team and know whether or not 
it has the authority to pass docs.  By the way, I don't think the team 
should have authority to pass documents. Review teams should be 
helpers.


Harald: there are two possible outcomes for teams (3 if you count failure). 1) Docs get 
reviewed and the IESG disagrees -- the review wasn't thorough enough, in which 
case the review teams should be advisory. Or 2) we run the experiment and 
see the review team find all significant problems with the doc: the IESG 
sees the review, the team is happy, the WG deals with all comments, and the IESG 
finds nothing more. If that is the normal case, why not allow the team to 
approve docs?  We need to run the experiment first.


Finally, the AD has a question for the chairs: what are the next steps?


Joel:
 - put a web site up,
 - get a tracker set up,
 - get all relevant docs issued with the naming convention so they can be 
reviewed,
 - then have a discussion of those docs against the topics of early and late 
review as appropriate, with the goal of trying to
 - clarify what experiments can be done soon and
 - recommend those experiments.
 - Then start a BCP on what review contents and longer-term review 
procedures should be.


Harald hopes we have the energy to try it to the finish.

Slides

Agenda
SIRS
MIB Doctors Experiences
Experience with RTG-DIR
Area Review Teams