Thomas and Chris:

I have reviewed this document as part of the Operations Directorate's ongoing effort to review all IETF documents being processed by the IESG. These comments were written with the intent of improving the operational aspects of IETF drafts. Comments that are not addressed in last call may be included in AD reviews during the IESG review. Document editors and WG chairs should treat these comments just like any other last call comments.

Status of the draft: Not ready (publication-wise), Almost ready (technical basis)

Overall comment: First of all, as a long-term supporter of the OLSR work, I am very happy to see this experimental draft, which seeks to provide clear proof points for the MT-OLSRv2 work. I believe MT-OLSRv2 can provide substantial benefit to the industry.

My comments should be taken as a way to help the experiment provide real data points to the IETF and to the industry on its success. Real data will help the long-term debate over the AODVv2 and OLSRv2 work. Almost 10 years ago, Don Fedyk showed me some very limited theoretical analysis of where early versions of these protocols fit within the 802.11s work. Experimental work is critical to the expansion of this work, as new 802.11 standards and new mobile radios continue to make MANETs relevant for future networks.

Summary of comments: My comments raise 6 major issues, plus a set of editorial changes. Five of my major points have to do with adding enough detail to the draft to judge whether the experiment is valuable. One way to resolve these comments is to create a separate document providing details on the tests that will be run. A second way to resolve them is to provide additional high-level guidance in this document.

The 6th major technical point is that the IANA section does not match previous RFCs (RFC 7181, RFC 7188). I believe Barry Leiba and IANA have already noted this issue as well.
The format of the IANA section should be changed to include all the IANA registration issues.

I have other editorial comments, but since I have indicated a substantial section rewrite due to the major comments, I will be glad to review the document for final editorial comments in a second pass.

Technical summary: Basically, it is a good extension of existing work. It is useful for wireless and mobile MANET deployments. The authors did a good job of focusing on just the revisions. However, I recommend that more detail on the experiments be added; it will help the protocol and help to create knobs for configuration and management of the protocol.

Here are the main points of the review:

Status: Almost ready, but there are 5 major points that will help define the experiment as a success, and 1 major point about the IANA section.

Major 1: The exact definition of the tests that make the experiment a success. (p. 7, section 4, section 5, parameters)

This draft needs the clarity of setting expectations for when the experiment succeeds. The tests recommended in major concerns 1-4 could be specified in a separate draft. If so, this draft should reference that draft. A few things that should be in the tests are:

1) Topologies that prove that OLSRv2 and MT-OLSRv2 together "don't break" either protocol. As previous tests of link-state protocols have shown, it is the topologies and the changes between topologies that cause failures. Changes to topologies include migrating between topologies due to link failures or link flapping. In the OLSRv2 and MT-OLSRv2 world this is extremely important, as mobile nodes may have radios or Wi-Fi that fade in and out.

2) Scaling tests: what happens if the arrays run out of space? Does the route calculation for each metric type cause problems? Are there efficiencies that some implementations use to improve scaling?

3) Interoperability tests with previous versions of OLSRv2 (can you crash an older OLSRv2 implementation?).
These tests should involve not just two boxes, but topologies of nodes connected by Wi-Fi (802.11n, 802.11ac, 802.11k (if ready)) and other mobile radios (software-defined radios, military, and others).

4) Failure tests / error conditions (section 5). E.g., what happens when the parameter arrays have repetitions in IFACE_METRIC_TYPES? What happens if the ordering of LINK_METRIC_TYPE.metric type does not include ROUTER_METRIC_TYPE first?

Other tests should be described. All these tests should have topologies, parameters, and results. The OLSRv2 tests should pick up the theory from the Benchmarking WG's work on testing OSPF and IS-IS, which depends on the types of topologies that are given. For other authors, I would give more precise details. However, due to the expertise of the two authors, I give just this high-level guidance. I will be glad to work through scenarios with the authors.

Major 2: Should there be a negotiation of resource sizing, or just an overload bit flag when it fails?

We know that overload bits have problems. So, does this experiment try to fix this in MT-OLSRv2? If not, as the protocol goes from scalars to arrays, how will peers know when it fails in one MT topology versus another? Is this experiment restricted to fate-sharing among the MT-OLSRv2 topologies due to resources on a single router? Should the negotiation of the types in the HELLO have a resource constraint in the MPR_WILLING TLV? This negotiation for MT-OLSRv2 is different from the simple upgrade from OLSR to OLSRv2. It is important to know how the overload works in MT-OLSRv2 only, and in combined MT-OLSRv2 and OLSRv2. This overload should be tested in a variety of topologies, with parameters, topologies, and routing flows carefully detailed. Again, see the benchmarking drafts on OSPF and IS-IS.

Again, what I pose are questions that I feel the authors should consider in an experiment for this type of work.
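To make the tests above reproducible, each scenario could be captured in a machine-readable form that pins down the three things every experiment report should contain: topology, parameters, and expected results. The following is a minimal, hypothetical sketch in Python; the type name, field names, metric types, and all values are illustrative only and do not come from the draft.

```python
from dataclasses import dataclass


@dataclass
class MTOlsrv2TestCase:
    """Hypothetical record tying together topology, parameters, timed
    link events, and pass/fail criteria for one experiment scenario."""
    name: str
    topology: dict    # node -> list of (neighbor, metric_type, metric_value)
    parameters: dict  # per-node settings, e.g. IFACE_METRIC_TYPES
    events: list      # timed link events, e.g. (t_seconds, "down", ("A", "B"))
    expected: dict    # pass criteria, e.g. maximum convergence time


# Illustrative link-flap scenario across two metric topologies
# (test 1 above): the A-B link fails at t=1.0s and recovers at t=1.5s.
flap_test = MTOlsrv2TestCase(
    name="two-topology link flap",
    topology={
        "A": [("B", "delay", 10), ("B", "bandwidth", 5)],
        "B": [("A", "delay", 10), ("A", "bandwidth", 5)],
    },
    parameters={"A": {"IFACE_METRIC_TYPES": ["delay", "bandwidth"]}},
    events=[(1.0, "down", ("A", "B")), (1.5, "up", ("A", "B"))],
    expected={"max_convergence_s": 2.0, "routes_loop_free": True},
)
```

Recording scenarios in a uniform shape like this would make it straightforward to rerun the same flap, scaling, and interoperability cases against both an OLSRv2-only node and an MT-OLSRv2 node and compare results.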
These expert authors have the capability to place additional high-level guidance in this document (or more detail in a different document). Again, I will gladly provide the authors with feedback and review.

Major 3: It is not clear that the experiment considers whether the MPR calculation per TE metric will consume significant resources. Good benchmarking (see Major 1) would be useful. The theoretical assurance that:

   Each router may make its own decision as to whether or not to use a
   link metric, or link metrics, for flooding MPR calculation, and if
   so which and how. This decision MUST be made in a manner that
   ensures that flooded messages will reach the same symmetric 2-hop
   neighbors as would be the case for a router not supporting
   MT-OLSRv2 (section 10, paragraph)

is not really strongly supported. If this is experimental, it is important to test this point in the benchmarking; I do not really see it in the experiment. This applies to the 1-hop neighbors (both those not considered (symmetric links) and those considered) and to the 2-hop consideration.

The experiment should again set benchmarks for success on the MPR calculation for MT-OLSRv2, and for combined MT-OLSRv2 and OLSRv2, covering several topologies, careful parameter settings, and methods to record convergence time.

Major 4: The link between NHDP and this experiment's use of MPRs is not clear.

I expected some comment on how NHDP interacts with the MPR tests above. This may simply be a revision of the text for the use case for MPR calculation. I expected some comment because RFC 7466 states in its abstract:

   Neighborhood Discovery Protocol (NHDP) enables "ignoring" some 1-hop
   neighbors if the measured link quality from that 1-hop neighbor is
   below an acceptable threshold while still retaining the corresponding
   link information as acquired from the HELLO message exchange.
   This allows immediate reinstatement of the 1-hop neighbor if the
   link quality later improves sufficiently.

   NHDP also collects information about symmetric 2-hop neighbors.
   However, it specifies that if a link from a symmetric 1-hop neighbor
   ceases being symmetric, including while "ignored" (as described
   above), then corresponding symmetric 2-hop neighbors are removed.
   This may lead to symmetric 2-hop neighborhood information being
   permanently removed (until further HELLO messages are received) if
   the link quality of a symmetric 1-hop neighbor drops below the
   acceptable threshold, even if only for a moment.

Major 5: Experiments should drive the creation of operational guidelines for deployment, configuration knobs, and use cases (AODVv2, OLSRv2, MT-OLSRv2).

While these major issues are not all directly operational, early experiments will help operations people set the management variables in section 10:

1) Reasonable TE metrics,
2) Detecting that the MANET is sufficiently connected,
3) Providing guidance to deployments on the performance of route sets for Diffserv in different environments (Wi-Fi and other mobile radio nodes),
4) Determining how the mixture of OLSRv2 and MT-OLSRv2 works in different environments,
5) Determining how to design OLSRv2 and MT-OLSRv2 networks for better MPR assignment with NHDP.

Major 6: The IANA section does not answer all the IANA questions.

It has most of the information, but I think it is not up to the latest IANA format and information. Barry Leiba and others have noted that RFC 7181 and RFC 7188 do not match this IANA section. Rather than repeat those comments, I will simply state that the data needs to be consistent and the format must match IANA's comments.

Please let me know if I can help with any additional review of, or comments on, this exciting work.

Sue