P2PSIP Working Group                                             J. Peng
Internet Draft                                                     Q. Yu
Intended status: Standards Track                            China Mobile
Expires: August 20, 2013                                           Y. Li
                                        Beijing University of Posts and
                                                     Telecommunications
                                                       February 16, 2013

             One Hop Lookups Algorithm Plugin for RELOAD
                  draft-peng-p2psip-one-hop-plugin-03

Abstract

   This document defines a specific Topology Plugin that uses a variant
   of the basic One Hop Lookups DHT algorithm, called ONE-HOP-RELOAD.
   In the One Hop Lookups algorithm, each peer maintains a full routing
   table containing information about every node in the overlay, so
   that RELOAD messages can be routed in one hop.  Compared with the
   CHORD-RELOAD algorithm, ONE-HOP-RELOAD improves routing efficiency
   and can maintain complete membership information with reasonable
   bandwidth requirements.
   This algorithm is able to handle frequent membership changes by
   superimposing a well-defined hierarchy on the system, which
   guarantees that notifications of topology disturbance events reach
   every peer in the overlay within a specified amount of time.  Some
   typical peer-to-peer storage systems have stringent latency
   requirements; for example, Amazon's Dynamo, which is built for
   latency-sensitive applications, uses a one-hop algorithm so that
   each node maintains enough routing information locally to route a
   request directly to the appropriate node.

Status of this Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF), its areas, and its working groups.  Note that
   other groups may also distribute working documents as Internet-
   Drafts.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."

   The list of current Internet-Drafts can be accessed at
   http://www.ietf.org/ietf/1id-abstracts.txt

   The list of Internet-Draft Shadow Directories can be accessed at
   http://www.ietf.org/shadow.html

   This Internet-Draft will expire on August 20, 2013.

Copyright Notice

   Copyright (c) 2013 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document.
   Code Components extracted from this document must include Simplified
   BSD License text as described in Section 4.e of the Trust Legal
   Provisions and are provided without warranty as described in the
   Simplified BSD License.

Table of Contents

   1. Introduction ................................................ 3
   2. Terminology ................................................. 4
   3. Hash function ............................................... 5
   4. Network Architecture ........................................ 5
   5. Peer data structure ......................................... 6
      5.1. Routing Table .......................................... 6
      5.2. Peer Type .............................................. 6
         5.2.1. Peer Type Structure ............................... 7
      5.3. Region ID .............................................. 8
      5.4. Special Identification Information ..................... 8
   6. Routing ..................................................... 9
      6.1. Next-Hop Selection Mechanism ........................... 9
      6.2. Fault Tolerance ....................................... 10
         6.2.1. Resource-ID Routing Failure ...................... 11
            6.2.1.1. Next-Hop Peer Leaving ....................... 11
            6.2.1.2. New Peer Joining ............................ 12
         6.2.2. Node-ID Routing Failure .......................... 13
            6.2.2.1. Next-Hop Peer Leaving ....................... 13
            6.2.2.2. Next-Hop Client Leaving ..................... 14
   7. Replica Placement Policy ................................... 14
   8. Joining .................................................... 15
      8.1. Joining Process ....................................... 15
      8.2. Joinreq Message Structure ............................. 18
   9. Leaving .................................................... 18
      9.1. Handling Neighbor Failures ............................ 18
      9.2. Leavereq Message Structure ............................ 20
   10.
       Updates .................................................. 22
      10.1. Topology Anomaly Detection ........................... 22
      10.2. Topology Disturbance Propagation Procedure ........... 22
      10.3. Updatereq Message Structure .......................... 23
         10.3.1. Update Types .................................... 23
         10.3.2. Routing Information ............................. 24
         10.3.3. Event notification .............................. 25
         10.3.4. ONE-HOP-RELOAD Update Data ...................... 27
   11. Security Considerations ................................... 28
   12. IANA Considerations ....................................... 28
   13. References ................................................ 28
      13.1. Normative References ................................. 28
      13.2. Informative References ............................... 28
   14. Acknowledgments ........................................... 29
   Authors' Addresses ............................................ 30

1. Introduction

   RELOAD [I-D.ietf-p2psip-base] is a peer-to-peer (P2P) signaling
   protocol for use on the Internet.  RELOAD is explicitly designed to
   work with a variety of overlay algorithms, each provided as a
   Topology Plugin, so that each overlay can select an appropriate
   overlay algorithm while relying on the common RELOAD core protocols
   and code.  In the RELOAD base protocol, the Topology Plugin is
   defined by a DHT based on Chord.  This specification defines a new
   DHT based on One Hop Lookups, which provides high routing efficiency
   and can handle frequent membership changes with reasonable bandwidth
   consumption.

   Structured peer-to-peer overlay networks such as Chord, Pastry, and
   CAN provide a substrate for building large-scale distributed
   applications.  These overlays allow applications to locate objects
   stored in the system in a limited number of overlay hops.
   These peer-to-peer lookup algorithms strive to maintain a small
   amount of per-node routing state, typically O(log N).  These
   O(log N) algorithms incur little maintenance traffic on large or
   high-churn networks but need about O(log N) steps to find a peer or
   resource.  If there are a large number of peers in the overlay,
   locating a peer or resource takes considerable time, which may
   adversely affect the application level of RELOAD.

   The experimental results [One-Hop-Lookups] show that a peer-to-peer
   system using the One Hop Lookups algorithm can route very
   efficiently even when the system is large and membership is changing
   rapidly.  The analytic results show that the total bandwidth
   required to maintain the whole routing table, including topology
   anomaly detection and topology disturbance propagation, is small
   enough for most participants in a system of 10^5 nodes, and that
   the load on a peer in the overlay increases linearly with the size
   of the system.  For example, in a system with 10^5 nodes, the load
   on an ordinary peer is 3.84 kbps and the upstream load on a slice
   leader is 35 kbps.  In the simulation experiments described in the
   paper, the dynamic membership behavior (i.e., nodes joining and
   leaving) of the peer-to-peer system is assumed to be as in Gnutella,
   which is representative of an open Internet environment.  From the
   study of Gnutella [Gnutella], we can conclude that there are about
   20 and 200 membership change events per second in systems with 10^5
   and 10^6 peers, respectively.

   Real-time communication places a high demand on routing efficiency;
   for example, VoIP usage at the application level of RELOAD depends
   on the lookup efficiency of the overlay topology.
   Thus, in a relatively small and stable network, such as a low-churn
   telecom core network running VoIP applications, an O(1) algorithm as
   a Topology Plugin of RELOAD is beneficial.

   Some typical peer-to-peer storage systems have stringent latency
   requirements; for example, Amazon's Dynamo, which is built for
   latency-sensitive applications, uses a one-hop algorithm so that
   each node in the overlay maintains enough routing information
   locally to route a request directly to the appropriate node.

   The algorithm described in this document is assigned the name ONE-
   HOP-RELOAD to indicate that it is an adaptation of the basic One Hop
   Lookups DHT algorithm.  This algorithm uses the core techniques of
   the original One Hop Lookups and has been adapted to the RELOAD
   protocol.  In the One Hop Lookups algorithm, each peer maintains a
   complete description of system membership and uses dissemination
   trees to propagate event information, updating all peers' routing
   tables within several seconds, so that peers can keep their own
   membership information accurate at low communication cost.

   This draft first describes the network architecture and the routing
   information that peers maintain in ONE-HOP-RELOAD.  It then states
   the replication and fault tolerance strategies.  Finally, it fills
   the overlay-specific data structures into the RELOAD framework and
   implements the procedures with topology messages defined for the
   RELOAD Topology Plugin.  All of these definitions and
   implementations are based on the requirements and methods provided
   by RELOAD.

2. Terminology

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
   document are to be interpreted as described in RFC 2119 [RFC2119].
   We use the terminology and definitions from the RELOAD Base Protocol
   [I-D.ietf-p2psip-base] extensively in this document.

3. Hash function

   In this One Hop Lookups based Topology Plugin, the size of the Node-
   ID and Resource-ID is 128 bits.  The hash of a Resource-ID MUST be
   computed with SHA-1 [RFC3174].

4. Network Architecture

   Like Chord, ONE-HOP-RELOAD uses a ring topology; the successor and
   predecessor of a peer with Node-ID i refer to the first peer
   clockwise and counterclockwise, respectively, from peer i in the
   ring.  A peer n is responsible for a particular Resource-ID k if k
   is less than or equal to n and k is greater than p, where p is the
   Node-ID of the predecessor of peer n.  Note that all arithmetic is
   modulo 2^128.

   On this basis, we construct a three-layered DHT to form the
   dissemination trees that are used to propagate topology disturbance
   event information, i.e., joins and leaves.  It is suggested that the
   scale of the peer-to-peer system be pre-configured manually.  We
   impose this hierarchy on the system with dynamic membership by
   dividing the 128-bit circular identifier space into k equal
   contiguous intervals called slices; the i-th slice contains all
   nodes currently in the overlay whose node identifiers lie in the
   range [i*2^128/k, (i+1)*2^128/k) [One-Hop-Lookups].

   Each slice has a slice leader, which in the One Hop Lookups
   algorithm is chosen dynamically as the successor of the mid-point of
   the slice identifier space; i.e., the slice leader of the i-th slice
   is the successor node of the key (i+1/2)*2^128/k.  However, frequent
   slice leader changes due to peers joining or leaving cause higher
   network traffic, because all the peers in the slice and the other
   slice leaders need to update their configuration and establish a new
   connection with the new slice leader.
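   The slice partitioning and slice-leader key just described can be
   sketched as follows.  This is an illustrative Python sketch, not
   part of the protocol; num_slices stands for the configured number of
   slices k:

```python
# Illustrative sketch only: slice index and slice-leader key in a
# 128-bit circular identifier space divided into k equal slices.
ID_BITS = 128
ID_SPACE = 1 << ID_BITS        # all arithmetic is modulo 2^128

def slice_index(node_id, num_slices):
    """Index i of the slice [i*2^128/k, (i+1)*2^128/k) holding node_id."""
    return node_id * num_slices // ID_SPACE

def slice_leader_key(i, num_slices):
    """Mid-point key (i+1/2)*2^128/k; the slice leader is its successor."""
    return (2 * i + 1) * ID_SPACE // (2 * num_slices)
```

   For instance, with k = 4 slices, the leader key of slice 0 is
   2^128/8, and every identifier maps to exactly one slice index.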
   Therefore, it is better to choose a peer with high reliability and
   availability as the slice leader and to assign that peer a fixed
   Node-ID that is the successor of the mid-point of the slice
   identifier space, in order to reduce dynamic slice leader
   replacement.

   Similarly, each slice is divided into equal-sized intervals called
   units.  Each unit has a unit leader, which is dynamically chosen as
   the successor of the mid-point of the unit identifier space.

   The choice of the number of levels in the hierarchy involves a
   tradeoff.  A large number of levels implies a larger delay in
   propagating the information, whereas a small number of levels
   generates a large load on the nodes in the upper levels
   [One-Hop-Lookups].  A three-level DHT is recommended, because it
   leads to reasonable bandwidth consumption and message propagation
   delay.

5. Peer data structure

   Each peer keeps track of a Whole Routing Table and a neighbor table.
   The Whole Routing Table of each node keeps the routing information
   of all nodes, so that after receiving a RELOAD request message the
   peer can reach the destination peer in just one hop, provided the
   routing table entries are correct and the peer is working properly.
   The neighbor table contains at least the three peers before and
   after this peer in the DHT ring.  It is used for topology
   maintenance and node anomaly detection.  There may not be three
   entries in all cases, e.g., in small rings or while the ring
   topology is changing.

5.1. Routing Table

   The routing table is the union of the neighbor table and the Whole
   Routing Table.

   Fundamentally, a neighbor table entry contains the Node-IDs of the
   predecessors and successors.  The Whole Routing Table (WRT)
   maintains the routing information of all the nodes in the overlay.
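   For illustration only, the neighbor table and the WRT could be
   modeled as follows.  This is a hypothetical Python sketch; the field
   names are assumptions of this sketch, not normative structures:

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class WrtEntry:
    """One Whole Routing Table entry (illustrative field names)."""
    node_id: int                     # 128-bit Node-ID
    addresses: List[str]             # IPv4/IPv6 addresses with ports
    resource_range: Tuple[int, int]  # responsible range (pred_id, node_id]

@dataclass
class PeerState:
    """Routing state kept by every peer: neighbor table plus WRT."""
    neighbors: List[int] = field(default_factory=list)      # >=3 preds/succs
    wrt: Dict[int, WrtEntry] = field(default_factory=dict)  # keyed by Node-ID
```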
   A WRT entry should contain at least the Node-ID, the address
   information (which may be IPv4 or IPv6 addresses and should include
   IP addresses and port numbers), and the Resource-ID range the peer
   is responsible for.

5.2. Peer Type

   In the One Hop Lookups algorithm, whenever a peer detects a change
   in membership (its successor failed or it has a new successor), it
   notifies its local slice leader by sending an event notification
   message as soon as possible.  The local slice leader collects all
   event notifications it receives from its own slice and disseminates
   them to the other slice leaders.  The slice leaders in turn
   aggregate the messages they receive for a short time period and then
   dispatch them to all unit leaders of their respective slices.  Unit
   leaders spread these messages to their own successor and predecessor
   within the unit.

   From the above, we can conclude that there are four kinds of peer
   type in One Hop Lookups, as follows:

   Unit Boundary:

      Peers at unit boundaries do not send topology disturbance event
      notification messages (defined in the Update message) to
      neighboring peers outside their unit.  This ensures that there is
      no redundancy in the communication: a peer gets an Update message
      only from the neighbor that is one step closer to its unit
      leader, so that Update messages are transferred only within the
      unit.  Within a unit, messages always flow from the unit leader
      to both ends of the unit.

   Unit Leader:

      A unit leader receives Update messages from its slice leader and
      spreads them to its successor and predecessor within the unit.

   Slice Leader:

      A slice leader is a special peer that is responsible for topology
      maintenance and management.  It collects Update messages from
      within its slice and from other slices and performs other
      management tasks.
      Any peer in the slice can send a topology disturbance event
      notification message (defined in the Update message) to its slice
      leader to report events.  Unlike the SPM mechanism in SandStone
      [SandStone], the slice leader in ONE-HOP-RELOAD participates in
      the resource location and discovery procedures, and it is best
      pre-configured.

   Ordinary Peer:

      An ordinary peer does not play any of the above three roles.  It
      receives Update messages from its successor or predecessor and
      spreads them forward in the same direction.  An ordinary peer in
      the overlay knows its unit leader and slice leader, and when it
      detects a change in membership (its predecessor failed or it has
      a new predecessor), it notifies its local slice leader.

   Each peer in the overlay needs to record its peer type.  Note that a
   unit leader may also be a unit boundary when the ring topology is
   small.

5.2.1. Peer Type Structure

      enum { reserved (0),
             ordinary_node (1), unit_boundary(2), unit_leader(3),
             slice_leader(4), (255)}
             OneHopPeerType;

   The OneHopPeerType enumeration identifies the different peer roles.
   When a peer is both a unit boundary and a unit leader, an array of
   OneHopPeerType containing the two elements unit_boundary and
   unit_leader can be used to identify its peer type.

5.3. Region ID

   A peer belonging to a specific unit of a specific slice should
   record its own location information.  In the One Hop Lookups
   algorithm, we use a Slice-ID and a Unit-ID to mark the peer
   location.  The scale of the peer-to-peer system, i.e., the number of
   hierarchy levels, the number of slices and units, and the Node-ID
   range of each slice and unit, is pre-configured manually.
   When a new peer prepares to join the overlay, it can obtain
   configuration information from the configuration server, which is
   responsible for assigning Node-IDs and providing the Node-ID range
   forms for each slice and each unit.  The joining peer can then
   compute its own Region ID using the routing information obtained
   from its neighbors during the procedure of joining the overlay.

      typedef opaque SliceId[SliceIdLength];

      typedef opaque UnitId[UnitIdLength];

      struct {
          SliceId slice_id;
          UnitId  unit_id;
      } RegionId;

   The RegionId structure identifies the peer location.  Both SliceId
   and UnitId are fixed-length structures represented as a series of
   bytes, with the most significant byte first.  The length is set on a
   per-overlay basis within the range of 16-20 bytes (128 to 160 bits).

5.4. Special Identification Information

   In the One Hop Lookups algorithm, each peer has its own peer type,
   such as ordinary node or unit leader.  In order to maintain the
   accuracy and stability of the overlay network architecture, each
   peer also needs to maintain additional identification information
   closely related to its peer type, called the special identification
   information.

   Ordinary Node / Unit Boundary:

      An ordinary node or unit boundary in the overlay should record
      the identification of its unit leader and slice leader.  In
      addition, it can also maintain the Node-ID range of each slice
      and each unit obtained from the configuration server when joining
      the overlay.

   Unit Leader:

      The identification information maintained by a unit leader is
      basically the same as that of an ordinary node, except that a
      unit leader does not need to maintain the Node-ID of a unit
      leader.

   Slice Leader:

      A slice leader collects Update messages from its own slice and
      other slices and spreads them through the network according to
      the given rules.
      Thus, each slice leader should record the identification
      information of all the slice leaders and of the unit leaders
      within the slice it is responsible for, in order to dispatch
      Update messages.

   Each peer needs to maintain the appropriate identification
   information according to its peer type.

6. Routing

6.1. Next-Hop Selection Mechanism

   The Next-Hop Selection mechanism defines how a peer that receives a
   RELOAD message carrying a destination ID, i.e., a Node-ID or
   Resource-ID, routes the message to the next-hop peer on the overlay
   according to its own routing information.  Two scenarios, Resource-
   ID routing and Node-ID routing, are analyzed here.

   Resource-ID routing:

      If the peer is not responsible for the target Resource-ID k but
      is directly connected to a node with Node-ID k, then it MUST
      route the message to that node.  Otherwise, it finds the node
      with the smallest Node-ID greater than or equal to k, which per
      Section 4 is responsible for Resource-ID k, and routes the
      message to that destination node.

   Node-ID routing:

      If the peer is not responsible for the target Node-ID k but is
      directly connected to a node with Node-ID k, then it MUST route
      the message to that node.  Otherwise, it finds the peer in the
      Whole Routing Table whose Node-ID equals k, which is the
      destination node k, and routes the request to that peer.

      If no such node is found, it finds the node with the smallest
      Node-ID greater than k, which should be the overlay access node
      of client k, and routes the message to that destination peer.
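   The Resource-ID selection rule above, combined with the modulo-2^128
   responsibility rule of Section 4, can be sketched as follows.  This
   is an illustrative Python sketch, not a normative algorithm; the WRT
   is modeled simply as a sorted list of Node-IDs:

```python
import bisect

ID_SPACE = 1 << 128  # all arithmetic is modulo 2^128

def responsible_node(wrt_ids, k):
    """Node-ID responsible for Resource-ID k: the smallest Node-ID >= k,
    wrapping around the ring when k exceeds every Node-ID.
    wrt_ids must be sorted in ascending order."""
    i = bisect.bisect_left(wrt_ids, k % ID_SPACE)
    return wrt_ids[i] if i < len(wrt_ids) else wrt_ids[0]

# On a small ring with Node-IDs 10..90, Resource-ID 65 falls to the
# node with the smallest Node-ID >= 65, and 95 wraps around to 10.
wrt = [10, 20, 30, 50, 60, 70, 80, 90]
```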
   Note that in the CHORD-RELOAD algorithm [I-D.ietf-p2psip-base], each
   peer maintains a finger table that contains only a small fraction of
   the routing information in the overlay, and during the joining
   process a peer establishes TLS connections with all the peers in its
   routing table, which is the union of the neighbor table and the
   finger table.  Thus, for a peer-to-peer system using the CHORD-
   RELOAD algorithm, the connection table maintained by the Link
   Management module of the RELOAD protocol is the set of nodes to
   which a node is directly connected, and all the peers in the routing
   table are on the connection table, but not vice versa.

   The ONE-HOP-RELOAD algorithm differs from the CHORD-RELOAD
   algorithm.  It requires each peer to maintain the Whole Routing
   Table, so when the system scales to millions of peers, the routing
   table becomes very large.  If a peer set up TLS connections with all
   the other peers in the overlay, it would consume a large amount of
   network resources.  Therefore, establishing full connections is not
   recommended.  In the ONE-HOP-RELOAD algorithm, a peer only needs to
   select some related peers to establish TLS connections with,
   according to the needs of the network architecture; thus the
   connection table is a subset of the Whole Routing Table.

   The consequence of changing the relationship between the routing
   table and the connection table is that the peer chosen by the Next-
   Hop Selection mechanism may not have an active TLS connection with
   the source node.  Therefore, the peer may need to set up a
   connection before routing the message to the destination node.

6.2. Fault Tolerance

   Topology disturbance, which is triggered by membership change
   events, i.e., joining and leaving, may lead to a one-hop routing
   failure.
   If the source peer that prepares to send a message to the next-hop
   peer does not update its routing information in time when a topology
   disturbance occurs, it may route messages to an incorrect peer or to
   a peer that has withdrawn from the overlay.

   Two scenarios, Resource-ID routing and Node-ID routing, are analyzed
   here.  In the Resource-ID routing scenario, the disturbances caused
   by a peer joining and by a peer leaving are analyzed separately.  In
   the Node-ID routing scenario, the disturbances caused by a peer
   leaving and by a client leaving are analyzed separately.  Figure 1
   shows a schematic of the peer-to-peer system.

   +--------+              +--------+              +--------+
   | Peer 10|--------------| Peer 20|--------------| Peer 30|
   +--------+              +--------+              +--------+
       |                                               |
       |                                               |
   +--------+                                      +--------+
   | Peer 90|                                      | Peer 50|
   +--------+                                      +--------+
       |                                               |
       |                                               |
   +--------+              +--------+              +--------+
   | Peer 80|--------------| Peer 70|--------------| Peer 60|
   +--------+              +--------+              +--------+
                               |
                               |
                          +-----------+
                          |Resource 65|
                          +-----------+

              Figure 1: peer-to-peer system schematic

6.2.1. Resource-ID Routing Failure

6.2.1.1. Next-Hop Peer Leaving

   When a peer leaves the peer-to-peer system, the Resource-ID k that
   the peer was responsible for is taken over by its successor.
   Suppose a source peer in the overlay wants to route a RELOAD message
   to the peer that manages Resource-ID k but has just left the
   network.  The message will be routed to a non-existent peer, so the
   source peer will receive a RELOAD error response whose Error Code is
   Error_Request_Timeout, meaning that the request timed out or the
   target peer is unreachable.

   For instance, suppose Peer 70 has left the overlay and its Resource
   65 has been taken over by Peer 80.
   Peer 20 wants to send a RELOAD message to the peer responsible for
   Resource 65, but its routing table has not yet been updated to
   reflect this topology change, so Peer 20 routes the RELOAD message
   to the old Peer 70, and this one-hop route to Peer 70 fails because
   Peer 70 has left.  Peer 20 then receives a RELOAD error response
   whose Error Code is Error_Request_Timeout.

   Peer 20, on receiving this error message, can take one of two
   response measures.  If it uses the reactive fault tolerance
   mechanism, it immediately resends the RELOAD message to Peer 80, the
   successor of the failed Peer 70.  Otherwise, it waits to receive a
   topology disturbance message, updates its own routing information,
   and then resends the RELOAD message.

   +--------+              +--------+              +--------+
   | Peer 10|--------------| Peer 20|--------------| Peer 30|
   +--------+              +--------+              +--------+
       |                                               |
       |                                               |
   +--------+                                      +--------+
   | Peer 90|                                      | Peer 50|
   +--------+                                      +--------+
       |                                               |
       |                                               |
   +--------+                                      +--------+
   | Peer 80|--------------------------------------| Peer 60|
   +--------+                                      +--------+
       |
       |
   +-----------+
   |Resource 65|
   +-----------+

6.2.1.2. New Peer Joining

   When a peer joins the peer-to-peer system, it takes over the
   Resource-ID k that its successor node was responsible for.  Suppose
   a source peer in the overlay wants to route a RELOAD message to the
   peer that managed Resource-ID k in the past and has just transferred
   it to the newly joined peer.  The message will then be routed to the
   old, incorrect peer, so the source peer may receive a RELOAD error
   response whose Error Code is Error_Not_Found, meaning that the
   target resource is not part of the destination peer's
   responsibility.
   Another way to deal with a new peer joining is for the old peer to
   forward the RELOAD message directly to the newly joined peer,
   although this mechanism increases the number of routing hops.

   For instance, suppose Peer 68 has just joined the overlay and taken
   over Resource 65 from its successor, Peer 70.  Peer 20 wants to send
   a RELOAD message to the peer responsible for Resource 65, but its
   routing table has not yet been updated to reflect this topology
   change, so Peer 20 routes the RELOAD message to the old Peer 70.

   Peer 70, on receiving this message, can take one of two response
   measures.  If the routing forwarding scheme is used, Peer 70
   directly forwards the message to the newly joined Peer 68, which is
   responsible for Resource 65.  Otherwise, Peer 70 returns an error
   response to Peer 20 whose Error Code is Error_Not_Found.

   +--------+              +--------+              +--------+
   | Peer 10|--------------| Peer 20|--------------| Peer 30|
   +--------+              +--------+              +--------+
       |                                               |
       |                                               |
   +--------+                                      +--------+
   | Peer 90|                                      | Peer 50|
   +--------+                                      +--------+
       |                                               |
       |                                               |
   +--------+      +--------+     +--------+       +--------+
   | Peer 80|------| Peer 70|-----| Peer 68|-------| Peer 60|
   +--------+      +--------+     +--------+       +--------+
                                      |
                                      |
                                 +-----------+
                                 |Resource 65|
                                 +-----------+

6.2.2. Node-ID Routing Failure

6.2.2.1. Next-Hop Peer Leaving

   When the source peer in the overlay routes a RELOAD message to the
   Node-ID m of a peer that has left the network, the message will be
   routed to a non-existent peer, so the source peer will receive a
   RELOAD error response whose Error Code is Error_Request_Timeout,
   meaning that the request timed out or the target peer is
   unreachable.

   For instance, suppose Peer 70 has left the overlay network.
At this time, Peer 20 wants to send a RELOAD message to Peer 70, but
   its routing table has not yet been updated to reflect this topology
   change, so Peer 20 routes the RELOAD message to the old Peer 70.
   This one-hop routing to Peer 70 fails because Peer 70 has left.
   Peer 20 then receives an error RELOAD response message whose Error
   Code name is Error_Request_Timeout.

   +--------+              +--------+              +--------+
   | Peer 10|--------------| Peer 20|--------------| Peer 30|
   +--------+              +--------+              +--------+
       |                                               |
       |                                               |
   +--------+                                      +--------+
   | Peer 90|                                      | Peer 50|
   +--------+                                      +--------+
       |                                               |
       |                                               |
   +--------+                                      +--------+
   | Peer 80|--------------------------------------| Peer 60|
   +--------+                                      +--------+
       |
       |
   +----------+
   | Client 65|
   +----------+

6.2.2.2. Next-Hop Client Leaving

   If the source peer in the overlay routes a RELOAD message to the
   Node-ID m of a client that has left the network, the message is
   routed to the Overlay Access Peer that was responsible for the
   leaving client m.

   This Overlay Access Peer, on receiving the RELOAD message, can take
   one of two response measures. If the reactive fault tolerance
   mechanism is used, it responds with an error RELOAD response message
   whose Error Code name is Error_Not_Found, which means that the
   target resource is not under the destination peer's management.
   Otherwise, the source peer will receive an error RELOAD response
   message whose Error Code name is Error_Request_Timeout, which means
   that the request has timed out or the target peer is unreachable.

7. Replica Placement Policy

   To achieve high availability and reliability, ONE-HOP-RELOAD
   replicates data on multiple peers, three by default. There are two
   replica placement schemes:

   1. The two backup copies are stored on the two successors.

   2.
The scheme defined in SandStone [SandStone] can also be used. The
      first copy is stored on the primary peer; the second replica is
      saved on a peer in a different unit but the same slice; and the
      third replica is saved on a peer in a different slice. The most
      important issue in this scheme is the choice of the replica
      peers. It is recommended that a two-level, strip-segmentation-
      based ID assignment mechanism be used to allocate Node-IDs and
      choose backup nodes.

   Guaranteeing consistency among multiple replicas is a difficult but
   unavoidable task. When a peer receives a Store request for
   Resource-ID k and it is responsible for Resource-ID k, it MUST store
   the data, return a success response, and then send Store requests to
   its backup nodes to keep the replicas consistent. This part is
   beyond the scope of the present document; specific details can be
   found in the referenced papers.

   A topology disturbance always changes the data backup relationships
   among the data storage peers. With a replica placement policy
   similar to SandStone's, the distance between the primary data
   storage peer and a backup peer may be large, so the data migration
   caused by the topology disturbance may not be triggered immediately.
   There are two strategies for triggering the data migration
   procedure:

   1. Reactive Notification: The joining or leaving peer, or the peer
      that detected the topology disturbance, takes the initiative to
      inform the affected backup nodes, thereby triggering the data
      migration.

   2. Passive Waiting: When a peer receives a topology change event
      notification message, it tests whether its own data backup
      relationships are affected. If they are, it triggers the data
      migration process.

8. Joining

8.1. Joining Process

   The joining process for a joining peer (JP) with Node-ID n is as
   follows:

   1.
JP MUST connect to its chosen or preconfigured bootstrap node.

   2. JP SHOULD send an Attach request to its admitting peer (AP) for
      Node-ID n. The "send_update" flag should be used to acquire the
      routing table and other information from AP.

   3. JP determines its peer type and Region-ID according to the
      routing information of AP. If JP is an ordinary peer, it should
      send Attach requests to initiate connections to each of the peers
      in the neighbor table. After establishing connections with all
      the peers in the neighbor table, JP MUST record all the peers it
      has contacted in the neighbor table. If JP is another type of
      peer, it performs the additional processing for that peer type,
      as follows:

      Unit Boundary:

         If JP becomes the Unit Boundary of the unit where AP is
         located, AP is the old unit boundary and JP replaces its role.
         JP then piggybacks its own peer type on the Update message to
         AP in order to tell AP to convert its peer role.

         If JP and AP are not in the same unit (or even the same
         slice), and JP and PP are in the same unit, JP replaces the
         role of PP, the old Unit Boundary of the unit where JP is
         located.

         If JP is the new Unit Leader and there is no other peer within
         the unit, JP is also the new Unit Boundary of that unit.

      Unit Leader:

         If JP is the new Unit Leader and AP is in the same unit as JP,
         AP is the old leader about to be replaced. JP tells AP to
         convert its peer type and to add the Node-ID of the new Unit
         Leader to AP's routing information. JP should then notify its
         slice leader to modify the unit leader identification
         information and trigger the topology disturbance procedure.
         If there is no other peer within the unit and JP is both the
         new Unit Leader and the new Unit Boundary, with a manually
         preconfigured Node-ID, JP should notify its slice leader to
         modify the unit leader identification information and trigger
         the topology disturbance procedure.

      Slice Leader:

         If JP is the new Slice Leader of the slice where AP is
         located, AP is the old leader about to be replaced. JP tells
         AP to convert its peer type and modify its special
         identification information. If the reactive scheme is used, JP
         should send Attach messages to establish TLS connections with
         all the other slice leaders and with all the peers in its
         slice.

         If there is no peer within the slice and JP is the new Slice
         Leader, with a manually preconfigured Node-ID, JP needs to
         obtain the special identification information of the Slice
         Leader from the configuration server and wait for peers to
         join its slice.

   4. If JP is not a new slice leader, it MUST send an Attach request
      to the slice leader of the slice where JP is located, in order to
      report topology change information to its slice leader in a
      timely manner.

   5. JP MUST send a Join request to AP, and AP sends the response to
      the Join.

   6. AP MUST issue a series of Store requests to JP to store the data
      that JP will be responsible for.

   7. AP MUST send JP an Update explicitly labeling JP as its
      predecessor. At this point, JP is part of the ring and
      responsible for a section of the overlay. AP can now forget any
      data which is assigned to JP and not to AP.

   8. JP piggybacks its routing information on the Update message to
      AP. AP should start the topology change event propagation process
      described in Section 10.2.
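The message sequence an ordinary joining peer produces in the steps above can be sketched as follows. Transport, the AP-side steps 6-7, and the special peer-type conversions are omitted, and every name below is illustrative rather than normative:

```python
def ordinary_join_messages(ap, neighbor_table, slice_leader):
    """Return, in order, the messages an ordinary joining peer sends
    per Section 8.1 (hedged sketch; tuples stand in for real messages)."""
    msgs = [("Attach", ap, "send_update")]                 # steps 1-2
    msgs += [("Attach", n, None) for n in neighbor_table]  # step 3
    msgs.append(("Attach", slice_leader, None))            # step 4
    msgs.append(("Join", ap, None))                        # step 5
    # Steps 6-7 are AP's side: Store requests and an Update labeling
    # JP as predecessor. Step 8 is JP's Update with its routing info.
    msgs.append(("Update", ap, "routing_info"))            # step 8
    return msgs

msgs = ordinary_join_messages(ap=70, neighbor_table=[60, 80],
                              slice_leader=90)
assert msgs[0] == ("Attach", 70, "send_update")
assert msgs[-1] == ("Update", 70, "routing_info")
```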
   If JP sends an Attach with send_update to other peers in the
   overlay, it will receive Update messages from those peers carrying
   their routing information, including each peer's type, its region,
   and its neighbor table. JP can construct its own routing information
   from this information and perform the related procedures to join the
   overlay and play its own role.

   Note that in the CHORD-RELOAD algorithm [I-D.ietf-p2psip-base], each
   peer keeps track of a finger table and a neighbor table, and during
   joining it establishes TLS connections with all the peers in its
   routing table, which is the union of the neighbor table and the
   finger table. The ONE-HOP-RELOAD algorithm differs from
   CHORD-RELOAD: each peer maintains the Whole Routing Table, whose
   routing information is so large that the peer cannot establish
   connections with all the peers in it. Therefore, in the joining
   process described above, the joining peer establishes connections
   only with selected peers in the overlay, according to its peer type.

8.2. Joinreq Message Structure

   Before sending a Joinreq message to AP, JP needs to construct its
   own routing information and send Attach messages to the related
   peers. If its peer type is special, it should also trigger the
   relevant peers to perform the peer type conversion procedure. After
   completing these tasks, JP sends a Joinreq message to AP, informing
   AP that it has completed the preparatory work to join the overlay
   and can start to take over the overlay data it is responsible for.
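The selection of Attach targets by peer type can be sketched as below; this is a hedged illustration of the selection rule described above (reactive-scheme slice leaders connect widely, everyone else connects narrowly), with hypothetical names throughout:

```python
def attach_targets(peer_type, neighbor_table, slice_leader,
                   other_slice_leaders=(), slice_members=()):
    """Peers a joining peer establishes TLS connections with (sketch).

    The Whole Routing Table is too large to connect to everyone, so
    targets are chosen by peer type, per Section 8.1.
    """
    if peer_type == "slice_leader":
        # Reactive scheme: all other slice leaders plus every peer
        # in the new leader's own slice.
        return list(other_slice_leaders) + list(slice_members)
    # Ordinary peers connect to their neighbor table and, per step 4,
    # to their slice leader.
    return list(neighbor_table) + [slice_leader]

assert attach_targets("ordinary_peer", [60, 80], 90) == [60, 80, 90]
assert attach_targets("slice_leader", [60, 80], None,
                      other_slice_leaders=[30],
                      slice_members=[50, 60]) == [30, 50, 60]
```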
   The overlay_specific_data field in the Joinreq message MUST contain
   the OneHopJoinData structure defined below:

   struct {
       OneHopPeerType   joining_peer_type;
       RegionId         region_id;
       IpAddressPort    joining_peer_address;
   } OneHopJoinData;

   The contents of this structure are as follows:

   joining_peer_type:

      The peer type of JP.

   region_id:

      The identification of the region where JP is located.

   joining_peer_address:

      The address information of JP, which may be an IPv4 or IPv6
      address. When AP receives a Joinreq message carrying the address
      information of JP, it should add this information to its own
      routing table.

   AP, on receiving a Joinreq message from JP, should check whether the
   message is correct and return a success response if it is.

9. Leaving

9.1. Handling Neighbor Failures

   Whenever a peer in the overlay finds that a neighboring peer has
   already left the network or is about to leave (as determined, for
   example, by connectivity pings or by a Leave message from the
   leaving peer), it MUST perform the following tasks:

   o Obtain the routing information of the leaving node. When the peer
     finds that its neighbor has left the network abnormally, it needs
     to derive the routing information of the leaving peer from its
     local routing information.

   o Update its routing information, including removing the entry from
     its neighbor table and replacing it with the best-matching peer in
     its own Whole Routing Table.

   o If the peer is the successor of the leaving peer, it should also
     start the topology disturbance propagation procedure to report and
     propagate the topology disturbance event, as described in Section
     10.2. It is also recommended that multiple relevant peers report
     the event to their slice leader simultaneously, to avoid the loss
     of topology disturbance information.
   o If a data management or backup relationship changes, trigger the
     corresponding data migration procedure.

   If the type of the leaving peer (LP) is special, some additional
   processing is performed per peer type, as follows:

   Unit Boundary:

      If LP is the Unit Boundary of the unit where its predecessor or
      successor is located, one of these two neighbors becomes the new
      unit boundary and replaces the role of LP.

   Unit Leader:

      If LP is the Unit Leader of the unit where its successor NP is
      located, NP becomes the new Unit Leader and replaces the role of
      LP. NP piggybacks the Unit Leader replacement information on the
      Update message to the slice leader. The slice leader that
      receives this Update message modifies the unit leader
      identification information.

   Slice Leader:

      If LP is the Slice Leader of the slice where its successor NP is
      located, NP becomes the new Slice Leader and replaces the role of
      LP. If the reactive scheme is used, the new slice leader NP
      should send Attach messages to establish TLS connections with all
      the other slice leaders and with all the peers in its own slice,
      and then send the topology disturbance event information to the
      peers in its own slice and to all the other slice leaders. Note
      that the successor NP MUST maintain a replica of the routing
      information of LP, which contains the identifications of all the
      slice leaders and of the unit leaders in the slice.

9.2. Leavereq Message Structure

   Peers SHOULD send a Leave request to all members of their neighbor
   table prior to exiting the Overlay Instance.
The overlay_specific_data field MUST contain the OneHopLeaveData
   structure defined below:

   struct {
       NodeId           predecessors<0..2^16-1>;
       NodeId           successors<0..2^16-1>;
   } Neighbors;

   struct {
       OneHopPeerType   leaving_peer_type;
       RegionId         region_id;
       Neighbors        neighbor_table;

       select (leaving_peer_type) {
           case ordinary_peer:
               NodeId   unit_leader;
               NodeId   slice_leader;

           case unit_leader:
               NodeId   slice_leader;

           case slice_leader:
               NodeId   unit_leader_list<0..N>;
               NodeId   slice_leader_list<0..N>;
       }
   } OneHopLeaveData;

   The contents of this structure are as follows:

   region_id:

      The identification of the region where the leaving peer is
      located.

   neighbor_table:

      The neighbor_table contains all the Node-IDs of the current
      entries in the neighbor table, except for the leaving one.

   The 'leaving_peer_type' field indicates the type of the leaving
   peer:

   If the type of the leaving peer is 'ordinary_peer', the contents
   will be:

   unit_leader

      The Node-ID of the Unit Leader of the unit where the leaving peer
      is located.

   slice_leader

      The Node-ID of the Slice Leader of the slice where the leaving
      peer is located.

   If the type of the leaving peer is 'unit_leader', the contents will
   be:

   slice_leader

      The Node-ID of the Slice Leader of the slice where the leaving
      peer is located.

   If the type of the leaving peer is 'slice_leader', the contents will
   be:

   unit_leader_list

      The Node-ID list of each Unit Leader in the slice which the
      leaving Slice Leader manages.

   slice_leader_list

      The Node-ID list of all the other Slice Leaders.

   When a peer is ready to leave the overlay, it takes the initiative
   to send a Leave message to all peers in its own neighbor table.
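Filling the variant part of OneHopLeaveData by leaving_peer_type mirrors the select arms above. A hedged sketch, with plain dicts standing in for the wire encoding and all names illustrative:

```python
def one_hop_leave_data(leaving_peer_type, region_id, neighbor_table,
                       **ids):
    """Build the OneHopLeaveData fields for the given peer type
    (sketch of the select arms in Section 9.2, not the wire format)."""
    data = {"leaving_peer_type": leaving_peer_type,
            "region_id": region_id,
            "neighbor_table": neighbor_table}
    if leaving_peer_type == "ordinary_peer":
        data["unit_leader"] = ids["unit_leader"]
        data["slice_leader"] = ids["slice_leader"]
    elif leaving_peer_type == "unit_leader":
        data["slice_leader"] = ids["slice_leader"]
    elif leaving_peer_type == "slice_leader":
        data["unit_leader_list"] = ids["unit_leader_list"]
        data["slice_leader_list"] = ids["slice_leader_list"]
    return data

d = one_hop_leave_data("unit_leader", region_id=3,
                       neighbor_table=[10, 30], slice_leader=90)
assert d["slice_leader"] == 90 and "unit_leader" not in d
```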
Any peer which receives a Leave for a peer in its neighbor set
   follows the procedures as if it had detected a peer failure, as
   described in Section 9.1.

10. Updates

10.1. Topology Anomaly Detection

   A peer MUST maintain connections with all the peers in its neighbor
   table. A peer MUST try to maintain at least three predecessors and
   three successors. In the CHORD-RELOAD algorithm
   [I-D.ietf-p2psip-base], it is RECOMMENDED that O(log N) predecessors
   and successors be maintained in the neighbor table.

   Every peer in the overlay, including the Slice Leaders, MUST
   periodically send a keep-alive message (carried in an Update
   message) to all the peers in its neighbor table. The purpose of this
   is to keep the predecessor and successor lists up to date and to
   detect failed peers. The default interval is about ten minutes. A
   peer SHOULD randomly offset these Update requests so they do not all
   occur at once.

   On detecting a topology anomaly, the peer follows the procedures
   described in Section 9.1.

10.2. Topology Disturbance Propagation Procedure

   The successor of a joining or leaving peer triggers the topology
   disturbance propagation procedure, which spreads the topology change
   event through the network in a given direction. The purpose of this
   is to keep the Whole Routing Table up to date and to adjust the
   network topology.

   The topology disturbance propagation procedure for the successor
   with Node-ID n that detects a topology change (i.e., a new
   predecessor joining or the old predecessor leaving) is as follows:

   1. The successor MUST send an Update message that piggybacks the
      topology change event information directly to its slice leader.

   2.
The slice leader MUST collect all topology change events it
      receives from the peers in its own slice and aggregate them over
      several seconds (by default about 20 seconds); it then sends
      Update messages to the other slice leaders to spread these
      events. It is best not to send the Update messages to all the
      other slice leaders at the same time.

   3. The slice leader waits for a short period (by default about 10
      seconds), aggregates all Update messages received during this
      period, and then dispatches the Update message to all the unit
      leaders in its own slice.

   4. A unit leader SHOULD forward the Update message to its successor
      and predecessor.

   5. Other peers propagate this Update message in one direction: if
      they receive it from their successor, they SHOULD send it to
      their predecessor, and vice versa. Note that the Unit Boundaries
      SHOULD NOT send Update messages to their neighbor peers outside
      their unit.

10.3. Updatereq Message Structure

   The purpose of the Update message is to piggyback the routing
   information of a peer or to spread topology disturbance event
   information over the overlay. Thus, an Update message can carry two
   kinds of content: routing information of a peer on the overlay, and
   topology disturbance event information, i.e., peer joining and peer
   leaving.

   The Update message for ONE-HOP-RELOAD is defined as follows.

10.3.1.
Update Types

   enum { reserved(0), routing_info(1), event_notification(2), (255) }
        UpdateType;

   The 'UpdateType' enumerates the Update message types, 'routing_info'
   and 'event_notification':

   routing_info

      The routing_info message piggybacks the routing information of
      the sending peer, including its routing table (which may be the
      union of the Whole Routing Table and the neighbor table, or the
      neighbor table only), its peer type, region ID, and special
      identification information.

   event_notification

      The event_notification message carries a list of topology
      disturbance events that indicate topology changes in the overlay
      and may trigger peers to modify their routing information.

10.3.2. Routing Information

   enum { reserved(0), full(1), peer_info(2), (255) }
        RoutingInfoType;

   The 'RoutingInfoType' enumerates the routing_info kinds, 'full' and
   'peer_info'; it identifies the different types of routing_info
   message:

   full

      This kind of routing_info message piggybacks the complete routing
      table of the sending peer, including the Whole Routing Table and
      the neighbor table.

   peer_info

      This kind of routing_info message piggybacks only the neighbor
      table of the sending peer and some other identification
      information.

   struct {
       NodeId           peer_id;
       IpAddressPort    address;
   } OneHopRoutingInfo;

   The 'OneHopRoutingInfo' structure is used to construct a Whole
   Routing Table entry. A WRT entry contains at least the Node-ID and
   the address information, which may be an IPv4 or IPv6 address:

   peer_id

      The Node-ID of the peer in the overlay.

   address

      The address information, which may be an IPv4 or IPv6 address.
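As an illustration of how a peer might use OneHopRoutingInfo entries locally, here is a hedged sketch (not part of the protocol; all names and the RFC 5737 example addresses are illustrative) of a Whole Routing Table that resolves a Resource-ID to its next hop in one overlay hop:

```python
import bisect

class WholeRoutingTable:
    """Sketch of a WRT: Node-ID -> address, sorted for successor lookup."""

    def __init__(self, entries):
        # entries: iterable of (peer_id, address) pairs.
        self.table = dict(entries)
        self.ids = sorted(self.table)

    def next_hop(self, resource_id):
        """Node-ID and address of the successor of resource_id on the
        ring: the smallest Node-ID >= resource_id, wrapping around."""
        i = bisect.bisect_left(self.ids, resource_id)
        peer_id = self.ids[i % len(self.ids)]  # wrap past the largest ID
        return peer_id, self.table[peer_id]

wrt = WholeRoutingTable([(20, "192.0.2.2"), (70, "192.0.2.7"),
                         (80, "192.0.2.8")])
assert wrt.next_hop(65) == (70, "192.0.2.7")
assert wrt.next_hop(95) == (20, "192.0.2.2")  # wraps around the ring
```

The local lookup is O(log N), but the overlay routing itself takes a single hop, which is the point of maintaining the full table.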
   struct {
       OneHopPeerType   peer_type;
       RegionId         region_id;
       Neighbors        neighbor_table;

       select (peer_type) {
           case ordinary_peer:
               NodeId   unit_leader;
               NodeId   slice_leader;

           case unit_leader:
               NodeId   slice_leader;

           case slice_leader:
               NodeId   unit_leader_list<0..N>;
               NodeId   slice_leader_list<0..N>;
       }

       RoutingInfoType  routing_info_type;

       select (routing_info_type) {
           case full:
               OneHopRoutingInfo  whole_routing_info<0..N>;

           case peer_info:
       }
   } RoutingInfo;

   The parameters of this structure are similar to those of the
   'OneHopLeaveData' structure described in Section 9.2, except that
   'RoutingInfo' can piggyback the Whole Routing Table.

   The 'routing_info_type' field indicates whether the message includes
   the WRT. If the type of the field is 'full', the contents will be:

   whole_routing_info

      This variable carries the peer's routing table, which contains
      the information of all the peers in the overlay.

   If the type of the field is 'peer_info', the message does not carry
   the WRT.

10.3.3. Event Notification

   enum { reserved(0), peer_joining(1), peer_leaving(2), (255) }
        EventNotificationType;

   The 'EventNotificationType' enumerates the kinds of topology
   disturbance events: peer joining and peer leaving.
   struct {
       EventNotificationType  event_notification_type;

       select (event_notification_type) {
           case peer_joining:
               OneHopRoutingInfo  joining_peer_info;
               OneHopPeerType     joining_peer_type;
               RegionId           region_id;

               select (joining_peer_type) {
                   case unit_leader:
                       RegionId   change_region_id;
                       NodeId     old_unit_leader;

                   case slice_leader:
                       RegionId   change_region_id;
                       NodeId     old_slice_leader;
               }

           case peer_leaving:
               OneHopRoutingInfo  leaving_peer_info;
               OneHopPeerType     leaving_peer_type;
               RegionId           region_id;

               select (leaving_peer_type) {
                   case unit_leader:
                       RegionId   change_region_id;
                       NodeId     new_unit_leader;

                   case slice_leader:
                       RegionId   change_region_id;
                       NodeId     new_slice_leader;
               }
       }
   } EventNotificationItem;

   The 'EventNotificationItem' structure is one item of the
   EventNotification message. A peer that receives this item should
   refresh its routing information and perform the other related
   topology management tasks.

   The contents of this structure are as follows:

   The 'event_notification_type' field indicates the type of the
   topology disturbance event, either 'peer_joining' or 'peer_leaving'.

   If the type of the event is 'peer_joining', the contents will be:

   joining_peer_info

      The entry corresponding to the Node-ID of the joining peer in the
      Whole Routing Table.

   region_id

      The identification of the region where the joining peer is
      located.

   The 'joining_peer_type' field indicates the peer type of a joining
   peer whose type is special, either 'slice_leader' or 'unit_leader'.

   If the type of the joining peer is 'unit_leader', the contents will
   be:

   change_region_id

      The identification of the region where the joining peer is
      located. The joining peer will take the place of the old Unit
      Leader.
   old_unit_leader

      The Node-ID of the old Unit Leader who has been replaced by the
      joining peer.

   If the type of the joining peer is 'slice_leader', the contents will
   be:

   change_region_id

      The identification of the region where the joining peer is
      located. The joining peer will take the place of the old Slice
      Leader.

   old_slice_leader

      The Node-ID of the old Slice Leader who has been replaced by the
      joining peer.

   If the type of the event is 'peer_leaving', the contents are similar
   to the parameters of the 'peer_joining' event, except that when the
   leaving peer is the old Unit Leader or Slice Leader, the
   corresponding parameters specify the Node-ID of the new Unit Leader
   or Slice Leader.

10.3.4. ONE-HOP-RELOAD Update Data

   struct {
       UpdateType  update_type;

       select (update_type) {
           case routing_info:
               RoutingInfo            routing_info;

           case event_notification:
               EventNotificationItem  event_notification_list<0..N>;
       }
   } OneHopUpdateData;

   This structure is the Update message body used in ONE-HOP-RELOAD.
   The Update message is composed of two kinds of data: the RoutingInfo
   structure and the EventNotificationItem structures. All the peers in
   the overlay can use this Update message to carry routing information
   or to spread topology disturbance event information.

11. Security Considerations

   There are no specific security considerations associated with this
   draft.

12. IANA Considerations

   There are no IANA considerations associated with this memo.

13. References

13.1. Normative References

   [RFC2119] Bradner, S., "Key words for use in RFCs to Indicate
             Requirement Levels", BCP 14, RFC 2119, March 1997.
   [I-D.ietf-p2psip-base]
             Jennings, C., Lowekamp, B., Rescorla, E., Baset, S., and
             H. Schulzrinne, "REsource LOcation And Discovery (RELOAD)
             Base Protocol", draft-ietf-p2psip-base-15 (work in
             progress), May 2011.

13.2. Informative References

   [RFC3174] Eastlake, D. and P. Jones, "US Secure Hash Algorithm 1
             (SHA1)", RFC 3174, September 2001.

   [One-Hop-Lookups]
             Gupta, A., Liskov, B., and R. Rodrigues, "One Hop Lookups
             for Peer-to-Peer Overlays", June 2003.

   [SandStone]
             Shi, G., Chen, J., Gong, H., Fan, L., Xue, H., Lu, Q.,
             and L. Liang, "SandStone: A DHT based Carrier Grade
             Distributed Storage System", September 2009.

   [Gnutella]
             Saroiu, S., Gummadi, P. K., and S. D. Gribble, "A
             Measurement Study of Peer-to-Peer File Sharing Systems",
             January 2002.

14. Acknowledgments

Authors' Addresses

   Jin Peng
   China Mobile
   Unit 2, 28 Xuanwumenxi Ave,
   Xuanwu District
   Beijing 100053
   P.R.China

   Email: pengjin@chinamobile.com

   Qing Yu
   China Mobile
   Unit 2, 28 Xuanwumenxi Ave,
   Xuanwu District
   Beijing 100053
   P.R.China

   Email: yuqing@chinamobile.com

   Yuan Li
   Beijing University of Posts and Telecommunications
   10 Xi Tu Cheng Rd.
   Haidian District
   Beijing 100876
   P.R.China

   Email: liyuan8903@139.com