LSR WG                                                           S. Peng
Internet-Draft                                                  Z. Zhang
Intended status: Standards Track                         ZTE Corporation
Expires: February 20, 2020                               August 19, 2019

                    IGP Flooding Optimization Methods
              draft-peng-lsr-igp-flooding-opt-methods-00

Abstract

This document describes a method to optimize IGP flooding by recording
visited nodes.  The visited-node information can be encapsulated in an
outer carrying header or carried as part of the IGP PDU itself.

Status of This Memo

This Internet-Draft is submitted in full conformance with the
provisions of BCP 78 and BCP 79.

Internet-Drafts are working documents of the Internet Engineering Task
Force (IETF).  Note that other groups may also distribute working
documents as Internet-Drafts.  The list of current Internet-Drafts is
at https://datatracker.ietf.org/drafts/current/.

Internet-Drafts are draft documents valid for a maximum of six months
and may be updated, replaced, or obsoleted by other documents at any
time.  It is inappropriate to use Internet-Drafts as reference material
or to cite them other than as "work in progress."

This Internet-Draft will expire on February 20, 2020.

Copyright Notice

Copyright (c) 2019 IETF Trust and the persons identified as the
document authors.  All rights reserved.

This document is subject to BCP 78 and the IETF Trust's Legal
Provisions Relating to IETF Documents
(https://trustee.ietf.org/license-info) in effect on the date of
publication of this document.  Please review these documents carefully,
as they describe your rights and restrictions with respect to this
document.  Code Components extracted from this document must include
Simplified BSD License text as described in Section 4.e of the Trust
Legal Provisions and are provided without warranty as described in the
Simplified BSD License.

Table of Contents

1. Introduction
2. Solutions for the First Established Phase
   2.1. BIER-based IGP Flooding
        2.1.1. Overview
        2.1.2. BIER Encapsulation Extensions
        2.1.3. IGP Capability Extensions
        2.1.4. Operations
               2.1.4.1. Locally Generated Link State Data
               2.1.4.2. Remotely Generated Link State Data
               2.1.4.3. Not Directly Connected Neighbors in Tier-based
                        Networks
               2.1.4.4. Error Correction
        2.1.5. Other Considerations
        2.1.6. Examples
               2.1.6.1. A Sparse Network Example
               2.1.6.2. A Tier-based Dense Network Example
               2.1.6.3. A Full-mesh Dense Network Example
   2.2. IGP Extensions to Record Visited Nodes
3. Solutions after the First Established Phase
4. Security Considerations
5. IANA Considerations
6. Normative References
Authors' Addresses

1. Introduction

The IGP flooding issue in dense networks such as spine-leaf, Clos, or
Fat Tree topologies has received increasing attention, and solutions
are being sought.  Conventional IS-IS, OSPFv2, and OSPFv3 all flood
information redundantly throughout the dense topology, overloading
control-plane inputs and thereby creating operational issues.
[I-D.ietf-lsr-dynamic-flooding] has analyzed these issues and described
a common solution that builds a sparse FT (Flooding Topology) dedicated
to link state packet flooding.  However, it is rather complex to cover
all scenarios when computing an optimal FT that reduces redundant
flooding; sometimes a rollback to the traditional flooding rules is
needed to guarantee correctness, at the cost of performance.
Implementors have to consider many types of events that may affect
FT-based flooding, each requiring careful special-case treatment.  For
example, in some cases a new FT and an old FT need to work together,
and in some cases temporary flooding on a non-FT link is needed.

Figure 1 illustrates a possible timing-sequence example under the FT
solution.  Although we believe it can be easily addressed, it indicates
the inherent complexity of this solution, which must be given adequate
care.

    [A]...........[B]          [A]...........[B]
                   |            |
                   |            |
                   |            |
                  [N]          [N]

     (a) FT on node A           (b) FT on node B

          Fig.1  FT inconsistency on multiple nodes

Suppose that at some moment node A has computed the FT as in Fig.1(a),
while node B has computed the FT as in Fig.1(b).  The inconsistency
would eventually be eliminated, but if a link state data packet needs
to be flooded along the FT at exactly this time, node A assumes node B
will propagate the data to N, while node B assumes node A will do so;
the result is that nobody propagates the data to N.

Note that the FT itself needs to be recomputed frequently, triggered by
any topology event, especially during the first established phase of
the network, where the final optimal FT can only be computed from the
full, stable topology database, which may be hard to obtain from the
fully redundant flooding.  The computation overhead may offset the
benefits.
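As a toy model (not part of any draft's procedure; the node names and
the set-based FT representation are purely illustrative), the Fig.1
race can be reproduced in a few lines: each node forwards along its
own view of the FT, and N is never reached.

```python
# Toy model of the Fig.1 inconsistency: each node floods along its OWN
# computed FT, so each assumes the other node will cover node N.

ft_on_A = {("A", "B"), ("B", "N")}   # A's view: B is responsible for N
ft_on_B = {("A", "B"), ("A", "N")}   # B's view: A is responsible for N

def next_hops(node, local_ft, sender=None):
    """Neighbors this node floods to, according to its local FT view,
    excluding the neighbor the packet arrived from."""
    out = set()
    for u, v in local_ft:
        if node in (u, v):
            other = v if u == node else u
            if other != sender:
                out.add(other)
    return out

# A originates an update: per its own FT, it sends only to B.
assert next_hops("A", ft_on_A) == {"B"}
# B relays per ITS own FT: the only edge touching B leads back to A.
assert next_hops("B", ft_on_B, sender="A") == set()
# Result: N never receives the update.
```

The same code with a consistent FT on both nodes would deliver the
update to N, which is exactly the convergence the FT solution relies
on.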
This document discusses some other possible methods to optimize IGP
flooding with little cost, simple logic, and implementation
friendliness.

The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
"SHOULD", "SHOULD NOT", "RECOMMENDED", "NOT RECOMMENDED", "MAY", and
"OPTIONAL" in this document are to be interpreted as described in BCP
14 [RFC2119] [RFC8174] when, and only when, they appear in all
capitals, as shown here.

2. Solutions for the First Established Phase

Network administrators expect to solve the redundant flooding problem
from the moment a dense network is powered on, in order to deploy
services quickly; a long wait for the network to stabilize cannot be
tolerated.

A possible solution is to record the potential visited nodes of a link
state data packet, so as to filter out nodes that have already been
visited.  We discuss two methods to record the visited nodes below.

2.1. BIER-based IGP Flooding

2.1.1. Overview

Bit Index Explicit Replication (BIER) [RFC8279] is an architecture
that provides optimal multicast forwarding without requiring
intermediate routers to maintain any per-flow state, by using a
multicast-specific BIER header.  [RFC8296] defines two types of BIER
encapsulation formats: one is the MPLS encapsulation, the other is the
non-MPLS encapsulation.  It is convenient to use BIER to record
visited nodes.  To fulfill IGP flooding optimization, some extensions
need to be applied to the BIER encapsulation.

For an IGP area/level, a BIER sub-domain is used to represent the IGP
topology.  Suppose that each node in the IGP area/level is BIER-
enabled; they then belong to the same BIER sub-domain.  Each node is
provisioned with a "BFR-id" that is unique within the sub-domain.  A
new "BIER Record" function is introduced to the BIER forwarding
mechanism defined in [RFC8279] and [RFC8296].  The "BIER Record"
function records which nodes a BIER packet carrying IGP link state
data, such as an ISIS LSP (Link State PDU) or OSPF LSU (Link State
Update), has already visited; i.e., the bit-string in the BIER header
of a "BIER Record" packet contains the BP (bit position) of every
visited node's BFR-id.
Once a node has received link state data contained in a "BIER Record"
packet, it never continues to flood the data toward neighbors that are
already present in the received bit-string.

2.1.2. BIER Encapsulation Extensions

[RFC8296] defines the BIER encapsulation format, in which the "Rsv"
field is currently unused.  A new bit (the rightmost bit) of the "Rsv"
field can be used as flag-R (Record); if set to 1, it indicates that
the BIER packet is a "BIER Record" packet, otherwise it is a
traditional BIER packet.  A "BIER Record" packet received on a node is
never forwarded again; the TTL field in a "BIER Record" packet MUST
always be set to 1.

The "Proto" field currently provides no value for encapsulating IGP
payloads.  IANA has assigned values 1~6 for the "Proto" field; a new
value (suggested 7) indicates that the encapsulated payload is an ISIS
LSP (Link State PDU), and a new value (suggested 8) indicates that the
encapsulated payload is an OSPF LSU (Link State Update).

2.1.3. IGP Capability Extensions

Each node inside the IGP area/level can be provisioned with the BIER-
based IGP flooding capability (or not) and advertises this router
capability to the other nodes.

A new flag (flag-B) is introduced in the Flags field of the IS-IS
Router Capability TLV-242 [RFC7981], as well as in the Informational
Capabilities field of the OSPF Router Informational Capabilities TLV
[RFC7770]; if set to 1, it indicates that the advertising node has the
BIER-based IGP flooding capability, otherwise it does not.

2.1.4. Operations

2.1.4.1. Locally Generated Link State Data

Suppose that a node A generates link state data, e.g., because a new
link has been inserted, and will flood the data (ISIS LSP or OSPF LSU)
to neighbor N.
If both A and N support the BIER-based IGP flooding capability, node A
can send the data contained in a "BIER Record" packet to node N; the
send-bitstring, i.e., the bit-string in the BIER header of the
outgoing "BIER Record" packet, will include the BPs of A and all its
neighbors (including N).  Note that if there are multiple links
between A and N, only one link is chosen to send the packet.

If either node A or N does not support the BIER-based IGP flooding
capability, node A falls back to the traditional flooding mechanism to
flood the data to N, i.e., the link state data is not encapsulated in
a BIER header but in the traditional L2 header (for ISIS) or IP header
(for OSPF).

The network administrator can configure a local policy on all nodes in
the network to force link state data to be sent in "BIER Record"
packets, provided it is certain that all nodes are really capable of
BIER-based IGP flooding.  This policy is useful to speed up
convergence during the early phase of network power-on.

2.1.4.2. Remotely Generated Link State Data

Node A can also receive remote link state data from a neighbor N; the
data may be originated by N itself or by a third node.  The data can
be received via the traditional IGP flooding mechanism or via a "BIER
Record" packet (we term the bit-string in the BIER header of the
received "BIER Record" packet the recv-bitstring).

In the former case, node A checks whether an entry with the same KEY
already exists in the local LSDB and compares which is newer.  If
there is no local entry, or the local entry is older, node A adds or
updates the data in the local LSDB and continues to flood it toward
all other neighbors except N.  If the local entry is newer, node A
simply floods the local entry back to neighbor N.  If the local entry
is identical to the received data, no processing is needed.
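The bitstring rules of Sections 2.1.4.1 through 2.1.4.2.1 can be
condensed into a short sketch (the function names are illustrative,
and a Python set of node identifiers stands in for the packed
bit-string of BFR-id bit positions):

```python
# Sketch of "BIER Record" flooding decisions (assumed helper names;
# a Python set of node IDs models the BIER bit-string).

def send_bitstring_local(node, neighbors):
    """Bit-string for locally generated data: the BP of the node
    plus all of its neighbors (Section 2.1.4.1)."""
    return {node} | set(neighbors)

def flood_targets(node, neighbors, from_nbr, recv_bitstring):
    """Neighbors that still need the data: everyone except the
    sender and the nodes already recorded as visited (2.1.4.2)."""
    return set(neighbors) - {from_nbr} - recv_bitstring

def send_bitstring_remote(node, neighbors, recv_bitstring):
    """Bit-string for continued flooding: the node itself, all of
    its neighbors, and everything in the recv-bitstring (2.1.4.2.1)."""
    return {node} | set(neighbors) | recv_bitstring

# Node 1 in Fig.2 originates data for link (1->X):
sb = send_bitstring_local(1, ["X", 2, 3, 4])
assert sb == {1, "X", 2, 3, 4}

# Node 2 (neighbors 1, 3, 5) receives it from node 1; neighbor 3 is
# already covered, so only node 5 still needs the data:
assert flood_targets(2, [1, 3, 5], 1, sb) == {5}
assert send_bitstring_remote(2, [1, 3, 5], sb) == {"X", 1, 2, 3, 4, 5}
```

The same three set operations drive every example later in Section
2.1.6; only the neighbor sets change.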
In the latter case, node A MUST drop the received data if it does not
have the BIER-based IGP flooding capability; otherwise it likewise
checks whether an entry with the same KEY already exists in the local
LSDB and compares which is newer.  If there is no local entry, or the
local entry is older, node A adds or updates the data in the local
LSDB and continues to flood it toward all other neighbors except N and
the neighbors contained in the recv-bitstring.  If the local entry is
newer, node A simply floods the local entry back to neighbor N.  If
the local entry is identical to the received data, no processing is
needed.

2.1.4.2.1. Continuous Flooding Procedure

In both of the above cases, if node A needs to continue flooding the
remote link state data to any neighbor, it checks whether both itself
and that neighbor support the BIER-based IGP flooding capability.  If
so, node A can send the data contained in a "BIER Record" packet to
the neighbor; the send-bitstring will include the BP of A, all
neighbors of A, and all nodes already contained in the recv-bitstring
(the last applying in particular to the latter case above).

If either node A or the neighbor does not support the BIER-based IGP
flooding capability, node A falls back to the traditional flooding
mechanism to flood the data to the neighbor, i.e., the link state data
is not encapsulated in a BIER header but in the traditional L2 header
(for ISIS) or IP header (for OSPF).

2.1.4.3. Not Directly Connected Neighbors in Tier-based Networks

Data centers often deploy a spine-leaf, Clos, or Fat Tree topology.
The key feature of this class of topology is that it is constructed
from several tiers: nodes in the same tier rarely have connections to
each other, but each node has full connectivity to all nodes in the
neighboring tier.

Although a node A within tier-x has no connections to other nodes in
the same tier-x, it can be configured with a local policy to preserve
these not-directly-connected (NDC) neighbors.
These NDC neighbors within the same tier can be explicitly inserted
into the send-bitstring of the "BIER Record" packets sent toward most
of the real neighbor nodes in the neighboring tier, while a very few
neighbors (in the extreme, a single neighbor) in the neighboring tier
receive the "BIER Record" packet without the NDC neighbors inserted.
This policy can significantly reduce redundant flooding.

2.1.4.4. Error Correction

Because a node decides whether or not to flood remote link state data
to a neighbor according to the recv-bitstring, the neighbors not
included in the recv-bitstring will receive the data, while the
neighbors included in the recv-bitstring are filtered out and do not
receive it.  In the extreme case, a filtered neighbor may in fact
never have received the data before, due to a link interruption.

The same issue can occur for local link state data: the data is sent
to all neighbors with a send-bitstring that includes all neighbors,
but one neighbor may in fact fail to receive the data because of a
link interruption at that moment.

To recover the lost data toward a neighbor, the normal database
synchronization mechanisms (i.e., OSPF DDP, IS-IS CSNP) would apply
between the local and remote nodes.  Traditionally, database
synchronization packets are sent periodically on broadcast links to
confirm that all nodes connected to the LAN have the same LSDB.  This
document extends this to any type of link.  As long as a node enables
the BIER-based IGP flooding capability, it applies the database
synchronization mechanism with its neighbor regardless of the type of
link between them.  For a broadcast link, a DR or DIS is elected to
send the database synchronization packets periodically.  For a P2P
link, a similar election method could be used to let the side with
higher priority send the database synchronization packets.  A long
period is recommended.
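As a sketch of the repair pass (the digest format and names here are
assumptions, loosely modeling CSNP/DDP behavior with a {key: sequence
number} summary), the elected sender periodically advertises a digest
of its LSDB, and the peer determines which entries it is missing or
holds stale:

```python
# Sketch of periodic database-summary repair (assumed model): the
# elected sender advertises {key: sequence number} digests of its
# LSDB; the receiver works out what it lacks or holds stale.

def missing_or_stale(local_lsdb, summary):
    """Keys for which the summarized LSDB is newer than (or absent
    from) the local one; these entries must be re-flooded toward us."""
    return {key for key, seq in summary.items()
            if key not in local_lsdb or local_lsdb[key] < seq}

# A node that missed one update and one whole entry due to a link
# interruption discovers both from the next periodic digest:
local = {"lsp-A": 4, "lsp-B": 7}
peer_summary = {"lsp-A": 5, "lsp-B": 7, "lsp-C": 1}
assert missing_or_stale(local, peer_summary) == {"lsp-A", "lsp-C"}
```

This comparison runs regardless of link type once the BIER-based IGP
flooding capability is enabled, which is what bounds the damage of a
wrongly filtered neighbor to one synchronization period.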
If, as a result of the database synchronization mechanism, any link
state data needs to be flooded from one side to the other, the
operations are the same as in Section 2.1.4.2.1 ("Continuous Flooding
Procedure").

2.1.5. Other Considerations

As defined in [RFC8401] and [RFC8444], the BFR-id is advertised within
prefix reachability; that would be too late for a node to obtain the
BFR-id information of all its neighbors when link state data is
launched.  A new advertisement method may be for each node to carry
its local BFR-id in IGP hello packets; if so, the non-MPLS BIER
encapsulation is suitable.

2.1.6. Examples

2.1.6.1. A Sparse Network Example

              [2]------[5]
             /  \      /  \
            /    \    /    \
    [X]---[1]-------[3]    [7]
            \    /    \    /
             \  /      \  /
              [4]------[6]

                Fig.2  A Sparse Network Example

Fig.2 shows a sparse network, originally constructed from nodes 1~7
and the corresponding links.  Now a new node X, with its link, is
added to the network.  Suppose that all nodes have the BIER-based IGP
flooding capability.

From the perspective of node 1, it creates a session with node X, and
local link state data for the unidirectional link (1->X) is generated;
node 1 sends it in a "BIER Record" packet with send-bitstring (X, 1,
2, 3, 4) toward each neighbor, i.e., X, 2, 3, 4.

Node 2 receives the "BIER Record" packet from node 1, extracts the
link state data from the packet, and stores it in its local LSDB.
Because the recv-bitstring already contains neighbor 3, node 2 only
continues to flood the data to neighbor 5; i.e., a new "BIER Record"
packet is produced with send-bitstring (X, 1, 2, 3, 4, 5), which
combines node 2 itself, all neighbors of node 2, and all nodes already
in the recv-bitstring.

Similarly, node 5 receives the data from node 2, stores it in its
local LSDB, and only continues to flood it to node 7.
Node 5 will receive the data from node 3 a second time, because node 3
also receives the data from node 1 with recv-bitstring (X, 1, 2, 3,
4), which does not contain 5.

Node 6 behaves much like node 5.

Note that node X will also generate local link state data for the
unidirectional link (X->1); this is not elaborated further.

Also note that the new node X will receive the existing link state
data from node 1 through the normal database synchronization triggered
immediately by the link coming up.

In this example, we can see that the redundant flooding behavior is
suppressed only to a limited extent; however, redundant flooding in a
sparse network is not serious in the first place.

2.1.6.2. A Tier-based Dense Network Example

             [X]
              |
              |
    Spine    [1]    [2]    [3]    [4]
            /:::::::::::::::::::::::::::\
           /:::::::::::::::::::::::::::::\
          /:::::::::::::::::::::::::::::::\
    Leaf [5] [6] [7] [8] [9] [10] [11] [12]

              Fig.3  A Tier-based Dense Network

Fig.3 shows a spine-leaf dense network, originally constructed from
nodes 1~12 and the corresponding links.  Nodes 1~4 are in the spine
tier, and nodes 5~12 are in the leaf tier.  Each spine node connects
to all leaf nodes, and vice versa.  Now a new node X, with its link,
is added to the network.  Suppose that all nodes have the BIER-based
IGP flooding capability.

From the perspective of node 1, it creates a session with node X, and
local link state data for the unidirectional link (1->X) is generated.
As mentioned above, although node 1 in the spine tier has no
connections to the other nodes (2, 3, 4) in the same tier, it can be
configured with a local policy to preserve these not-directly-
connected (NDC) neighbors.
So node 1 sends the link state data in "BIER Record" packets with
send-bitstring (X, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12), including
the NDC neighbors, toward most of its neighbors in the leaf tier,
i.e., 6~12, but sends a "BIER Record" packet with send-bitstring (X,
1, 5, 6, 7, 8, 9, 10, 11, 12) toward a single neighbor in the leaf
tier, i.e., 5.  Note that node 5 must be an active node; if it is not,
a new single neighbor in the leaf tier must be selected to receive the
data without the NDC neighbors inserted.

Node 5 receives the "BIER Record" packet from node 1, extracts the
link state data from the packet, and stores it in its local LSDB.
Node 5 continues to flood the data to neighbors 2, 3, and 4; i.e., a
new "BIER Record" packet is produced with send-bitstring (X, 1, 2, 3,
4, 5, 6, 7, 8, 9, 10, 11, 12), which combines node 5 itself, all
neighbors of node 5, and all nodes already in the recv-bitstring.

Node 2 receives the "BIER Record" packet from node 5, extracts the
link state data from the packet, and stores it in its local LSDB.
Because the recv-bitstring already contains all of its neighbors, node
2 does not continue to flood the data.

Nodes 3 and 4 behave like node 2.

Node 6 receives the "BIER Record" packet from node 1, extracts the
link state data from the packet, and stores it in its local LSDB.
Because the recv-bitstring also contains all of its neighbors (due to
the NDC neighbor insertion), node 6 does not continue to flood the
data.

Nodes 7~12 behave like node 6.

In this example, we can see that the redundant flooding behavior is
suppressed with a definite improvement; other dense tier-based
networks show the same optimization effect.

2.1.6.3. A Full-mesh Dense Network Example

            ________[3]_______
           /                  \
          /  ******************  \
         /                        \
       [2]*********************[4]
       /                          \
      /  **************************  \
     /                                \
    [X]---[1]******************************[5]
     \                                /
      \  ***************************  /
       \                            /
       [8]**********************[6]
         \                      /
          \  *****************  /
           \_______[7]_______/

              Fig.4  A Full-mesh Dense Network

Fig.4 shows a full-mesh dense network, originally constructed from
nodes 1~8 and the corresponding links.  Each node directly connects to
all other nodes.  Now a new node X, with its link, is added to the
network.  Suppose that all nodes have the BIER-based IGP flooding
capability.

The optimization works just as in the first example, but in this
example the local link state data generated on node 1 is never flooded
onward by any receiving node; the redundant flooding behavior is
suppressed completely.

2.2. IGP Extensions to Record Visited Nodes

Although BIER is a convenient way to carry the potential visited-node
information of link state data, some networks may not deploy BIER.  An
alternate method is to directly extend the ISIS or OSPF protocol to
carry visited-node information advertised with the ISIS LSP or OSPF
LSU.

The troublesome problem is that, according to the traditional ISIS LSP
or OSPF LSU packet processing rules, the content of these types of
packets cannot be changed by transit nodes; otherwise, multiple copies
with the same KEY but different content (e.g., different visited-node
information) received on a node would cause a checksum error.  So the
visited-node information MUST NOT be included in the checksum
computation and MUST NOT be stored in the LSDB for path computation;
it is used only for flooding control.

The detailed extensions for ISIS and OSPF will be discussed in the
next version of this document.

3.
Solutions after the First Established Phase

A network administrator may let the redundant flooding behavior go
unaddressed during the first established phase of network power-on,
but seek solutions that suppress the subsequent redundant flooding
after the network is stable.

Each node could have a waiting period during which it acts with the
traditional flooding behavior; when the waiting timer expires, it
switches to the enhanced flooding behavior.

The possible methods will be discussed in the next version of this
document.

4. Security Considerations

TBD

5. IANA Considerations

TBD

6. Normative References

[I-D.ietf-lsr-dynamic-flooding]
           Li, T., Psenak, P., Ginsberg, L., Chen, H., Przygienda, T.,
           Cooper, D., Jalil, L., and S. Dontula, "Dynamic Flooding on
           Dense Graphs", draft-ietf-lsr-dynamic-flooding-03 (work in
           progress), June 2019.

[RFC2119]  Bradner, S., "Key words for use in RFCs to Indicate
           Requirement Levels", BCP 14, RFC 2119,
           DOI 10.17487/RFC2119, March 1997,
           <https://www.rfc-editor.org/info/rfc2119>.

[RFC7770]  Lindem, A., Ed., Shen, N., Vasseur, JP., Aggarwal, R., and
           S. Shaffer, "Extensions to OSPF for Advertising Optional
           Router Capabilities", RFC 7770, DOI 10.17487/RFC7770,
           February 2016, <https://www.rfc-editor.org/info/rfc7770>.

[RFC7981]  Ginsberg, L., Previdi, S., and M. Chen, "IS-IS Extensions
           for Advertising Router Information", RFC 7981,
           DOI 10.17487/RFC7981, October 2016,
           <https://www.rfc-editor.org/info/rfc7981>.

[RFC8174]  Leiba, B., "Ambiguity of Uppercase vs Lowercase in RFC 2119
           Key Words", BCP 14, RFC 8174, DOI 10.17487/RFC8174, May
           2017, <https://www.rfc-editor.org/info/rfc8174>.

[RFC8279]  Wijnands, IJ., Ed., Rosen, E., Ed., Dolganow, A.,
           Przygienda, T., and S. Aldrin, "Multicast Using Bit Index
           Explicit Replication (BIER)", RFC 8279,
           DOI 10.17487/RFC8279, November 2017,
           <https://www.rfc-editor.org/info/rfc8279>.

[RFC8296]  Wijnands, IJ., Ed., Rosen, E., Ed., Dolganow, A., Tantsura,
           J., Aldrin, S., and I. Meilik, "Encapsulation for Bit Index
           Explicit Replication (BIER) in MPLS and Non-MPLS Networks",
           RFC 8296, DOI 10.17487/RFC8296, January 2018,
           <https://www.rfc-editor.org/info/rfc8296>.

[RFC8401]  Ginsberg, L., Ed., Przygienda, T., Aldrin, S., and Z.
           Zhang, "Bit Index Explicit Replication (BIER) Support via
           IS-IS", RFC 8401, DOI 10.17487/RFC8401, June 2018,
           <https://www.rfc-editor.org/info/rfc8401>.
[RFC8444]  Psenak, P., Ed., Kumar, N., Wijnands, IJ., Dolganow, A.,
           Przygienda, T., Zhang, J., and S. Aldrin, "OSPFv2
           Extensions for Bit Index Explicit Replication (BIER)",
           RFC 8444, DOI 10.17487/RFC8444, November 2018,
           <https://www.rfc-editor.org/info/rfc8444>.

Authors' Addresses

Shaofu Peng
ZTE Corporation
No.68 Zijinghua Road, Yuhuatai District
Nanjing  210012
China

Email: peng.shaofu@zte.com.cn

Zheng(Sandy) Zhang
ZTE Corporation
No.50 Software Avenue, Yuhuatai District
Nanjing  210012
China

Email: zzhang_ietf@hotmail.com