P2PSIP Working Group                                          J. Maenpaa
Internet-Draft                                               G. Camarillo
Intended status: Standards Track                                 Ericsson
Expires: February 11, 2014                                August 10, 2013

     Self-tuning Distributed Hash Table (DHT) for REsource LOcation And
                           Discovery (RELOAD)
                    draft-ietf-p2psip-self-tuning-09.txt

Abstract

REsource LOcation And Discovery (RELOAD) is a peer-to-peer (P2P) signaling protocol that provides an overlay network service.  Peers in a RELOAD overlay network collectively run an overlay algorithm to organize the overlay, and to store and retrieve data.  This document describes how the default topology plugin of RELOAD can be extended to support self-tuning, that is, to adapt to changing operating conditions such as churn and network size.

Status of This Memo

This Internet-Draft is submitted in full conformance with the provisions of BCP 78 and BCP 79.

Internet-Drafts are working documents of the Internet Engineering Task Force (IETF).  Note that other groups may also distribute working documents as Internet-Drafts.  The list of current Internet-Drafts is at http://datatracker.ietf.org/drafts/current/.

Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time.  It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress."

This Internet-Draft will expire on February 11, 2014.

Copyright Notice

Copyright (c) 2013 IETF Trust and the persons identified as the document authors.  All rights reserved.

This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents (http://trustee.ietf.org/license-info) in effect on the date of publication of this document.  Please review these documents carefully, as they describe your rights and restrictions with respect to this document.
Code Components extracted from this document must include Simplified BSD License text as described in Section 4.e of the Trust Legal Provisions and are provided without warranty as described in the Simplified BSD License.

Table of Contents

   1.  Introduction
   2.  Terminology
   3.  Introduction to Stabilization in DHTs
     3.1.  Reactive vs. Periodic Stabilization
     3.2.  Configuring Periodic Stabilization
     3.3.  Adaptive Stabilization
   4.  Introduction to Chord
   5.  Extending Chord-reload to Support Self-tuning
     5.1.  Update Requests
     5.2.  Neighbor Stabilization
     5.3.  Finger Stabilization
     5.4.  Adjusting Finger Table Size
     5.5.  Detecting Partitioning
     5.6.  Leaving the Overlay
   6.  Self-tuning Chord Parameters
     6.1.  Estimating Overlay Size
     6.2.  Determining Routing Table Size
     6.3.  Estimating Failure Rate
       6.3.1.  Detecting Failures
     6.4.  Estimating Join Rate
     6.5.  Estimate Sharing
     6.6.  Calculating the Stabilization Interval
   7.  Overlay Configuration Document Extension
   8.  Security Considerations
   9.  IANA Considerations
     9.1.  Message Extensions
   10. Acknowledgments
   11. References
     11.1.  Normative References
     11.2.  Informative References
   Authors' Addresses

1.  Introduction

REsource LOcation And Discovery (RELOAD) [I-D.ietf-p2psip-base] is a peer-to-peer signaling protocol that can be used to maintain an overlay network, and to store data in and retrieve data from the overlay.  For interoperability reasons, RELOAD specifies one overlay algorithm, called chord-reload, that is mandatory to implement.  This document extends the chord-reload algorithm by introducing self-tuning behavior.

Distributed Hash Table (DHT) based overlay networks are self-organizing, scalable and reliable.  However, these features come at a cost: peers in the overlay network need to consume network bandwidth to maintain routing state.  Most DHTs use a periodic stabilization routine to counter the undesirable effects of churn on routing.  To configure the parameters of a DHT, some characteristics such as churn rate and network size need to be known in advance.
These characteristics are then used to configure the DHT in a static fashion by using fixed values for parameters such as the size of the successor set, size of the routing table, and rate of maintenance messages.  The problem with this approach is that it is not possible to achieve a low failure rate and a low communication overhead by using fixed parameters.  Instead, a better approach is to allow the system to take into account the evolution of network conditions and adapt to them.  This document extends the mandatory-to-implement chord-reload algorithm by making it self-tuning.  Two main advantages of self-tuning are that users no longer need to tune every DHT parameter correctly for a given operating environment and that the system adapts to changing operating conditions.

The remainder of this document is structured as follows: Section 2 provides definitions of terms used in this document.  Section 3 discusses alternative approaches to stabilization operations in DHTs, including reactive stabilization, periodic stabilization, and adaptive stabilization.  Section 4 gives an introduction to the Chord DHT algorithm.  Section 5 describes how this document extends the stabilization routine of the chord-reload algorithm.  Section 6 describes how the stabilization rate and routing table size are calculated in an adaptive fashion.

2.  Terminology

The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "NOT RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in RFC 2119 [RFC2119].

This document uses the terminology and definitions from the Concepts and Terminology for Peer to Peer SIP [I-D.ietf-p2psip-concepts] draft.

numBitsInNodeId:  Specifies the number of bits in a RELOAD Node-ID.

DHT:  Distributed Hash Tables (DHTs) are a class of decentralized distributed systems that provide a lookup service similar to a regular hash table.  Given a key, any peer participating in the system can retrieve the value associated with that key.  The responsibility for maintaining the mapping from keys to values is distributed among the peers.

Chord Ring:  The Chord DHT uses a ring topology and orders identifiers on an identifier circle of size 2^numBitsInNodeId.  This identifier circle is called the Chord ring.  On the Chord ring, the responsibility for a key k is assigned to the node whose identifier equals or immediately follows k.

Finger Table:  A data structure with up to (but typically fewer than) numBitsInNodeId entries maintained by each peer in a Chord-based overlay.  The ith entry in the finger table of peer n contains the identity of the first peer that succeeds n by at least 2^(numBitsInNodeId-i) on the Chord ring.  This peer is called the ith finger of peer n.  As an example, the first entry in the finger table of peer n contains a peer half-way around the Chord ring from peer n.  The purpose of the finger table is to accelerate lookups.

n.id:  An abbreviation used in this document to refer to the Node-ID of peer n.

O(g(n)):  Informally, saying that some equation f(n) = O(g(n)) means that f(n) is less than some constant multiple of g(n).

Omega(g(n)):  Informally, saying that some equation f(n) = Omega(g(n)) means that f(n) is more than some constant multiple of g(n).
173 Predecessor List: A data structure containing the first predecessors 174 of a peer on the Chord ring. 176 Successor List: A data structure containing the first r successors 177 of a peer on the Chord ring. 179 Neighborhood Set: A term used to refer to the set of peers included 180 in the successor and predecessor lists of a given peer. 182 Routing Table: Contents of a given peer's routing table include the 183 set of peers that the peer can use to route overlay messages. The 184 routing table is made up of the finger table, successor list and 185 predecessor list. 187 3. Introduction to Stabilization in DHTs 189 DHTs use stabilization routines to counter the undesirable effects of 190 churn on routing. The purpose of stabilization is to keep the 191 routing information of each peer in the overlay consistent with the 192 constantly changing overlay topology. There are two alternative 193 approaches to stabilization: periodic and reactive [rhea2004]. 194 Periodic stabilization can either use a fixed stabilization rate or 195 calculate the stabilization rate in an adaptive fashion. 197 3.1. Reactive vs. Periodic Stabilization 199 In reactive stabilization, a peer reacts to the loss of a peer in its 200 neighborhood set or to the appearance of a new peer that should be 201 added to its neighborhood set by sending a copy of its neighbor table 202 to all peers in the neighborhood set. Periodic recovery, in 203 contrast, takes place independently of changes in the neighborhood 204 set. In periodic recovery, a peer periodically shares its 205 neighborhood set with each or a subset of the members of that set. 207 The chord-reload algorithm [I-D.ietf-p2psip-base] supports both 208 reactive and periodic stabilization. It has been shown in [rhea2004] 209 that reactive stabilization works well for small neighborhood sets 210 (i.e., small overlays) and moderate churn. However, in large-scale 211 (e.g., 1000 peers or more [rhea2004]) or high-churn overlays, 212 reactive stabilization runs the risk of creating a positive feedback 213 cycle, which can eventually result in congestion collapse. In 214 [rhea2004], it is shown that a 1000-peer overlay under churn uses 215 significantly less bandwidth and has lower latencies when periodic 216 stabilization is used than when reactive stabilization is used. 217 Although in the experiments carried out in [rhea2004], reactive 218 stabilization performed well when there was no churn, its bandwidth 219 use was observed to jump dramatically under churn. At higher churn 220 rates and larger scale overlays, periodic stabilization uses less 221 bandwidth and the resulting lower contention for the network leads to 222 lower latencies. For this reason, most DHTs such as CAN [CAN], Chord 223 [Chord], Pastry [Pastry], Bamboo [rhea2004], etc. use periodic 224 stabilization [ghinita2006]. As an example, the first version of 225 Bamboo used reactive stabilization, which caused Bamboo to suffer 226 from degradation in performance under churn. To fix this problem, 227 Bamboo was modified to use periodic stabilization. 229 In Chord, periodic stabilization is typically done both for 230 successors and fingers. An alternative strategy is analyzed in 231 [krishnamurthy2008]. In this strategy, called the correction-on- 232 change maintenance strategy, a peer periodically stabilizes its 233 successors but does not do so for its fingers. Instead, finger 234 pointers are stabilized in a reactive fashion. 
The results obtained 235 in [krishnamurthy2008] imply that although the correction-on-change 236 strategy works well when churn is low, periodic stabilization 237 outperforms the correction-on-change strategy when churn is high. 239 3.2. Configuring Periodic Stabilization 241 When periodic stabilization is used, one faces the problem of 242 selecting an appropriate execution rate for the stabilization 243 procedure. If the execution rate of periodic stabilization is high, 244 changes in the system can be quickly detected, but at the 245 disadvantage of increased communication overhead. Alternatively, if 246 the stabilization rate is low and the churn rate is high, routing 247 tables become inaccurate and DHT performance deteriorates. Thus, the 248 problem is setting the parameters so that the overlay achieves the 249 desired reliability and performance even in challenging conditions, 250 such as under heavy churn. This naturally results in high cost 251 during periods when the churn level is lower than expected, or 252 alternatively, poor performance or even network partitioning in worse 253 than expected conditions. 255 In addition to selecting an appropriate stabilization interval, 256 regardless of whether periodic stabilization is used or not, an 257 appropriate size needs to be selected for the neighborhood set and 258 for the finger table. 260 The current approach is to configure overlays statically. This works 261 in situations where perfect information about the future is 262 available. In situations where the operating conditions of the 263 network are known in advance and remain static throughout the 264 lifetime of the system, it is possible to choose fixed optimal values 265 for parameters such as stabilization rate, neighborhood set size and 266 routing table size. However, if the operating conditions (e.g., the 267 size of the overlay and its churn rate) do not remain static but 268 evolve with time, it is not possible to achieve both a low lookup 269 failure rate and a low communication overhead by using fixed 270 parameters [ghinita2006]. 272 As an example, to configure the Chord DHT algorithm, one needs to 273 select values for the following parameters: size of successor list, 274 stabilization interval, and size of the finger table. To select an 275 appropriate value for the stabilization interval, one needs to know 276 the expected churn rate and overlay size. According to 277 [liben-nowell2002], a Chord network in a ring-like state remains in a 278 ring-like state as long as peers send Omega(log2^2(N)) messages 279 before N new peers join or N/2 peers fail. Thus, in a 500-peer 280 overlay churning at a rate such that one peer joins and one peer 281 leaves the network every 30 seconds, an appropriate stabilization 282 interval would be on the order of 93s. According to [Chord], the size 283 of the successor list and finger table should be on the order of 284 log2(N). Having a successor list of size O(log2(N)) makes it 285 unlikely that a peer will lose all of its successors, which would 286 cause the Chord ring to become disconnected. Thus, in a 500-peer 287 network each peer should maintain on the order of nine successors and 288 fingers. However, if the churn rate doubles and the network size 289 remains unchanged, the stabilization rate should double as well. 290 That is, the appropriate maintenance interval would now be on the 291 order of 46s. On the other hand, if the churn rate becomes e.g. 
six-fold and the size of the network grows to 2000 peers, on the order of eleven fingers and successors should be maintained and the stabilization interval should be on the order of 42s.  If one continued using the old values, this could result in inaccurate routing tables, network partitioning, and deteriorating performance.

3.3.  Adaptive Stabilization

A self-tuning DHT takes into consideration the continuous evolution of network conditions and adapts to them.  In a self-tuning DHT, each peer collects statistical data about the network and dynamically adjusts its stabilization rate, neighborhood set size, and finger table size based on the analysis of the data [ghinita2006].  Reference [mahajan2003] shows that by using self-tuning, it is possible to achieve high reliability and performance even in adverse conditions with low maintenance cost.  Adaptive stabilization has been shown to outperform periodic stabilization in terms of both lookup failures and communication overhead [ghinita2006].

4.  Introduction to Chord

Chord [Chord] is a structured P2P algorithm that uses consistent hashing to build a DHT out of several independent peers.  Consistent hashing assigns each peer and resource a fixed-length identifier.  Peers use SHA-1 as the base hash function to generate the identifiers.  As specified in RELOAD base, the length of the identifiers is numBitsInNodeId=128 bits.  The identifiers are ordered on an identifier circle of size 2^numBitsInNodeId.  On the identifier circle, key k is assigned to the first peer whose identifier equals or follows the identifier of k in the identifier space.  The identifier circle is called the Chord ring.

Different DHTs differ significantly in performance when bandwidth is limited.  It has been shown that when compared to other DHTs, the advantages of Chord include that it uses bandwidth efficiently and can achieve low lookup latencies at little cost [li2004].

A simple lookup mechanism could be implemented on a Chord ring by requiring each peer to know only how to contact its current successor on the identifier circle.  Queries for a given identifier could then be passed around the circle via the successor pointers until they encounter the first peer whose identifier is equal to or larger than the desired identifier.  Such a lookup scheme uses a number of messages that grows linearly with the number of peers.  To reduce the cost of lookups, Chord also maintains additional routing information; each peer n maintains a data structure with up to numBitsInNodeId entries, called the finger table.  The first entry in the finger table of peer n contains the peer half-way around the ring from peer n.  The second entry contains the peer that is 1/4th of the way around, the third entry the peer that is 1/8th of the way around, etc.  In other words, the ith entry in the finger table at peer n contains the identity of the first peer s that succeeds n by at least 2^(numBitsInNodeId-i) on the Chord ring.  This peer is called the ith finger of peer n.  The interval between two consecutive fingers is called a finger interval.  The ith finger interval of peer n covers the range [n.id + 2^(numBitsInNodeId-i), n.id + 2^(numBitsInNodeId-i+1)) on the Chord ring.  In an N-peer network, each peer maintains information about O(log2(N)) other peers in its finger table.
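As a non-normative illustration of the finger geometry just described, the following Python sketch computes the identifiers that a peer's fingers must succeed and the number of fingers worth maintaining for a given network size.  The constant and function names are ours and do not appear in RELOAD or [Chord].

   import math

   NUM_BITS = 128              # numBitsInNodeId in RELOAD base
   RING = 2 ** NUM_BITS        # size of the Chord identifier circle

   def finger_targets(node_id, num_fingers):
       # The ith finger of peer n is the first peer whose identifier
       # succeeds n.id by at least 2^(NUM_BITS - i) on the ring (i = 1, 2, ...).
       return [(node_id + 2 ** (NUM_BITS - i)) % RING
               for i in range(1, num_fingers + 1)]

   def useful_finger_count(network_size):
       # Beyond roughly log2(N) fingers, the target points lie closer to
       # n.id than the average inter-peer distance, so they all resolve to
       # the immediate successor and add no routing information.
       return math.ceil(math.log2(network_size))

   print(useful_finger_count(100000))   # prints 17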
As an example, if N=100000, it is sufficient to 351 maintain 17 fingers. 353 Chord needs all peers' successor pointers to be up to date in order 354 to ensure that lookups produce correct results as the set of 355 participating peers changes. To achieve this, peers run a 356 stabilization protocol periodically in the background. The 357 stabilization protocol of the original Chord algorithm uses two 358 operations: successor stabilization and finger stabilization. 359 However, the Chord algorithm of RELOAD base defines two additional 360 stabilization components, as will be discussed below. 362 To increase robustness in the event of peer failures, each Chord peer 363 maintains a successor list of size r, containing the peer's first r 364 successors. The benefit of successor lists is that if each peer 365 fails independently with probability p, the probability that all r 366 successors fail simultaneously is only p^r. 368 The original Chord algorithm maintains only a single predecessor 369 pointer. However, multiple predecessor pointers (i.e., a predecessor 370 list) can be maintained to speed up recovery from predecessor 371 failures. The routing table of a peer consists of the successor 372 list, finger table, and predecessor list. 374 5. Extending Chord-reload to Support Self-tuning 376 This section describes how the mandatory-to-implement chord-reload 377 algorithm defined in RELOAD base [I-D.ietf-p2psip-base] can be 378 extended to support self-tuning. 380 The chord-reload algorithm supports both reactive and periodic 381 recovery strategies. When the self-tuning mechanisms defined in this 382 document are used, the periodic recovery strategy MUST be used. 383 Further, chord-reload specifies that at least three predecessors and 384 three successors need to be maintained. When the self-tuning 385 mechanisms are used, the appropriate sizes of the successor list and 386 predecessor list are determined in an adaptive fashion based on the 387 estimated network size, as will be described in Section 6. 389 As specified in RELOAD base, each peer MUST maintain a stabilization 390 timer. When the stabilization timer fires, the peer MUST restart the 391 timer and carry out the overlay stabilization routine. Overlay 392 stabilization has four components in chord-reload: 394 1. Update the neighbor table. We refer to this as neighbor 395 stabilization. 397 2. Refreshing the finger table. We refer to this as finger 398 stabilization. 400 3. Adjusting finger table size. 402 4. Detecting partitioning. We refer to this as strong 403 stabilization. 405 As specified in RELOAD base [I-D.ietf-p2psip-base], a peer sends 406 periodic messages as part of the neighbor stabilization, finger 407 stabilization, and strong stabilization routines. In neighbor 408 stabilization, a peer periodically sends an Update request to every 409 peer in its Connection Table. The default time is every ten minutes. 410 In finger stabilization, a peer periodically searches for new peers 411 to include in its finger table. This time defaults to one hour. 412 This document specifies how the neighbor stabilization and finger 413 stabilization intervals can be determined in an adaptive fashion 414 based on the operating conditions of the overlay. The subsections 415 below describe how this document extends the four components of 416 stabilization. 418 5.1. Update Requests 420 As described in RELOAD base [I-D.ietf-p2psip-base], the neighbor and 421 finger stabilization procedures are implemented using Update 422 requests. 
RELOAD base defines three types of Update requests: 'peer_ready', 'neighbors', and 'full'.  Regardless of the type, all Update requests include an 'uptime' field.  Since the self-tuning extensions require information on the uptimes of peers in the routing table, the sender of an Update request MUST include its current uptime in seconds in the 'uptime' field.

When self-tuning is used, each peer decides independently the appropriate size for the successor list, predecessor list, and finger table.  Thus, the 'predecessors', 'successors', and 'fingers' fields included in RELOAD Update requests are of variable length.  As specified in RELOAD [I-D.ietf-p2psip-base], variable-length fields are preceded on the wire by length bytes.  In the case of the successor list, predecessor list, and finger table, there are two length bytes (allowing lengths up to 2^16-1).  The number of NodeId structures included in each field can be calculated based on the length bytes, since the size of a single NodeId structure is 16 bytes.  If a peer receives more entries than fit into its successor list, predecessor list, or finger table, the peer MUST ignore the extra entries.  If a peer receives fewer entries than it currently has in its own data structure, the peer MUST NOT drop the extra entries from its data structure.
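The following non-normative Python sketch shows how such a variable-length neighbor field might be decoded and how the rules above can be applied.  It assumes only what is stated in this section (two length bytes followed by a sequence of 16-byte NodeId values); the function names are illustrative.

   import struct

   NODE_ID_LEN = 16      # size of a single NodeId structure in bytes

   def decode_node_id_list(buf, offset=0):
       # Each of the 'predecessors', 'successors', and 'fingers' fields is
       # preceded on the wire by two length bytes (network byte order).
       (length,) = struct.unpack_from("!H", buf, offset)
       offset += 2
       if length % NODE_ID_LEN != 0:
           raise ValueError("length is not a multiple of the NodeId size")
       node_ids = [bytes(buf[offset + i:offset + i + NODE_ID_LEN])
                   for i in range(0, length, NODE_ID_LEN)]
       return node_ids, offset + length

   def accept_entries(received, local_capacity):
       # Entries beyond the local capacity are ignored; receiving fewer
       # entries is never a reason to drop entries already held locally.
       return received[:local_capacity]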
5.2.  Neighbor Stabilization

In the neighbor stabilization operation of chord-reload, a peer periodically sends an Update request to every peer in its Connection Table.  In a small, low-churn overlay, the amount of traffic this process generates is typically acceptable.  However, in a large-scale overlay churning at a moderate or high churn rate, the traffic load may no longer be acceptable since the size of the connection table is large and the stabilization interval relatively short.  The self-tuning mechanisms described in this document are especially designed for overlays of the latter type.  Therefore, when the self-tuning mechanisms are used, each peer MUST send a periodic Update request only to its first predecessor and first successor on the Chord ring.

The neighbor stabilization routine MUST be executed when the stabilization timer fires.  To begin the neighbor stabilization routine, a peer MUST send an Update request to its first successor and its first predecessor.  The type of the Update request MUST be 'neighbors'.  The Update request MUST include the successor and predecessor lists of the sender.  If a peer receiving such an Update request learns from the predecessor and successor lists included in the request that new peers can be included in its neighborhood set, it MUST send Attach requests to the new peers.

After a new peer has been added to the predecessor or successor list, an Update request of type 'peer_ready' MUST be sent to the new peer.  This allows the new peer to insert the sender into its neighborhood set.

5.3.  Finger Stabilization

Chord-reload specifies two alternative methods for searching for new peers for the finger table.  Both of the alternatives can be used with the self-tuning extensions defined in this document.

Immediately after a new peer has been added to the finger table, a Probe request MUST be sent to the new peer to fetch its uptime.  The requested_info field of the Probe request MUST be set to contain the ProbeInformationType 'uptime' defined in RELOAD base [I-D.ietf-p2psip-base].

5.4.  Adjusting Finger Table Size

The chord-reload algorithm defines how a peer can make sure that the finger table is appropriately sized to allow for efficient routing.  Since the self-tuning mechanisms specified in this document produce a network size estimate, this estimate can be directly used to calculate the optimal size for the finger table.  This mechanism MUST be used instead of the one specified by chord-reload.  A peer MUST use the network size estimate to determine whether it needs to adjust the size of its finger table each time the stabilization timer fires.  The way this is done is explained in Section 6.2.

5.5.  Detecting Partitioning

This document does not require any changes to the mechanism chord-reload uses to detect network partitioning.

5.6.  Leaving the Overlay

As specified in RELOAD base [I-D.ietf-p2psip-base], a leaving peer SHOULD send a Leave request to all members of its neighbor table prior to leaving the overlay.  The overlay_specific_data field MUST contain the ChordLeaveData structure.  The Leave requests that are sent to successors MUST contain the predecessor list of the leaving peer.  The Leave requests that are sent to the predecessors MUST contain the successor list of the leaving peer.  If a given successor can identify better predecessors than are already included in its predecessor list by investigating the predecessor list it receives from the leaving peer, it MUST send Attach requests to them.  Similarly, if a given predecessor identifies better successors by investigating the successor list it receives from the leaving peer, it MUST send Attach requests to them.

6.  Self-tuning Chord Parameters

This section specifies how to determine an appropriate stabilization rate and routing table size in an adaptive fashion.  The proposed mechanism is based on [mahajan2003], [liben-nowell2002], and [ghinita2006].  To calculate an appropriate stabilization rate, the values of three parameters MUST be estimated: overlay size N, failure rate U, and join rate L.  To calculate an appropriate routing table size, the estimated network size N can be used.  Peers in the overlay MUST re-calculate the values of the parameters to self-tune the chord-reload algorithm at the end of each stabilization period before re-starting the stabilization timer.

6.1.  Estimating Overlay Size

Techniques for estimating the size of an overlay network have been proposed for instance in [mahajan2003], [horowitz2003], [kostoulas2005], [binzenhofer2006], and [ghinita2006].  In Chord, the density of peer identifiers in the neighborhood set can be used to produce an estimate of the size of the overlay, N [mahajan2003].  Since peer identifiers are picked randomly with uniform probability from the numBitsInNodeId-bit identifier space, the average distance between peer identifiers in the successor set is (2^numBitsInNodeId)/N.

To estimate the overlay network size, a peer MUST compute the average inter-peer distance d between the successive peers starting from the most distant predecessor and ending at the most distant successor in the successor list.
The estimated network size MUST be calculated as:

                        2^numBitsInNodeId
                  N =  -------------------
                               d

This estimate has been found to be accurate within 15% of the real network size [ghinita2006].  Of course, the size of the neighborhood set affects the accuracy of the estimate.

During the join process, a joining peer fills its routing table by sending a series of Ping and Attach requests, as specified in RELOAD base [I-D.ietf-p2psip-base].  Thus, a joining peer immediately has enough information at its disposal to calculate an estimate of the network size.
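The following non-normative Python sketch implements the calculation above.  The argument names are ours, and the neighbor lists are assumed to be ordered from the closest to the most distant neighbor.

   NUM_BITS = 128
   RING = 2 ** NUM_BITS

   def ring_distance(a, b):
       # Clockwise distance from identifier a to identifier b on the ring.
       return (b - a) % RING

   def estimate_network_size(node_id, predecessors, successors):
       # Walk from the most distant predecessor to the most distant
       # successor and average the distances between successive peers.
       ordered = list(reversed(predecessors)) + [node_id] + list(successors)
       gaps = [ring_distance(ordered[i], ordered[i + 1])
               for i in range(len(ordered) - 1)]
       d = sum(gaps) / len(gaps)
       return RING / d

   # Toy check: 8 peers spread evenly over the ring yield an estimate of 8.
   peers = [i * (RING // 8) for i in range(8)]
   print(round(estimate_network_size(peers[2],
                                     [peers[1], peers[0]],   # closest first
                                     [peers[3], peers[4]])))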
6.2.  Determining Routing Table Size

As specified in RELOAD base, the finger table must contain at least 16 entries.  When the self-tuning mechanisms are used, the size of the finger table MUST be set to max(log2(N), 16) using the estimated network size N.

The size of the successor list MUST be set to log2(N).  An implementation MAY place a lower limit on the size of the successor list.  As an example, the implementation might require the size of the successor list to be always at least three.

A peer MAY choose to maintain a fixed-size predecessor list with only three entries, as specified in RELOAD base.  However, it is RECOMMENDED that a peer maintain log2(N) predecessors.

6.3.  Estimating Failure Rate

A typical approach is to assume that peers join the overlay according to a Poisson process with rate L and leave according to a Poisson process with rate parameter U [mahajan2003].  The value of U can be estimated using peer failures in the finger table and neighborhood set [mahajan2003].  If peers fail with rate U, a peer with M unique peer identifiers in its routing table should observe K failures in time K/(M*U).  Every peer in the overlay MUST maintain a history of the last K failures.  The current time MUST be inserted into the history when the peer joins the overlay.  The estimate of U MUST be calculated as:

                              k
                    U = ------------,
                           M * Tk

where M is the number of unique peer identifiers in the routing table, Tk is the time between the first and the last failure in the history, and k is the number of failures in the history.  If k is smaller than K, the estimate MUST be computed as if there was a failure at the current time.  It has been shown that an estimate calculated in a similar manner is accurate within 17% of the real value of U [ghinita2006].

The size of the failure history K affects the accuracy of the estimate of U.  One can increase the accuracy by increasing K.  However, this has the side effect of decreasing responsiveness to changes in the failure rate.  On the other hand, a small history size may cause a peer to overreact each time a new failure occurs.  In [ghinita2006], K is set to 25% of the routing table size.  Use of this approach is RECOMMENDED.

6.3.1.  Detecting Failures

A new failure MUST be inserted into the failure history in the following cases:

1.  A Leave request is received from a neighbor.

2.  A peer fails to reply to a Ping request sent in the situation explained below.  If no packets have been received on a connection during the past 2*Tr seconds (where Tr is the inactivity timer defined by ICE [RFC5245]), a RELOAD Ping request MUST be sent to the remote peer.  RELOAD mandates the use of STUN [RFC5389] for keepalives.  STUN keepalives take the form of STUN Binding Indication transactions.  As specified in ICE [RFC5245], a peer sends a STUN Binding Indication if there has been no packet sent on a connection for Tr seconds.  Tr is configurable and has a default of 15 seconds.  Although STUN Binding Indications do not generate a response, the fact that a peer has failed can be learned from the lack of packets (Binding Indications or application protocol packets) received from the peer.  If the remote peer fails to reply to the Ping request, the sender MUST consider the remote peer to have failed.

As an alternative to relying on STUN keepalives to detect peer failure, a peer could send additional, frequent RELOAD messages to every peer in its Connection Table.  These messages could be Update requests, in which case they would serve two purposes: detecting peer failure and stabilization.  However, as the cost of this approach can be very high in terms of bandwidth consumption and traffic load, especially in large-scale overlays experiencing churn, its use is NOT RECOMMENDED.

6.4.  Estimating Join Rate

Reference [ghinita2006] proposes that a peer can estimate the join rate based on the uptime of the peers in its routing table.  An increase in peer join rate will be reflected by a decrease in the average age of peers in the routing table.  Thus, each peer MUST maintain an array of the ages of the peers in its routing table sorted in increasing order.  Using this information, an estimate of the global peer join rate L MUST be calculated as:

                             N
                  L = ---------------,
                       Ages[rsize/2]

where Ages is an array containing the ages of the peers in the routing table sorted in increasing order and rsize is the size of the routing table.  It has been shown that the estimate obtained by using this method is accurate within 22% of the real join rate [ghinita2006].  Of course, the size of the routing table affects the accuracy.

In order for this mechanism to work, peers need to exchange information about the time they have been present in the overlay.  Peers receive the uptimes of their successors and predecessors during the stabilization operations since all Update requests carry uptime values.  A joining peer learns the uptime of the admitting peer since it receives an Update from the admitting peer during the join procedure.  Peers learn the uptimes of new fingers since they can fetch the uptime using a Probe request after having attached to the new finger.
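A non-normative sketch of the two estimators defined above follows.  The data structures (a list of failure timestamps seeded with the peer's join time, and a list of peer ages) and the function names are ours.

   import time

   def estimate_failure_rate(failure_times, history_size, num_unique_peers):
       # U = k / (M * Tk).  The history is seeded with the join time and,
       # if it holds fewer than K entries, a failure "now" is assumed.
       times = list(failure_times)
       if len(times) < history_size:
           times.append(time.time())
       k = len(times)
       tk = times[-1] - times[0]    # time between first and last entry
       return k / (num_unique_peers * tk)

   def estimate_join_rate(network_size_estimate, ages):
       # L = N / Ages[rsize/2]: the median age of the routing-table peers.
       ages_sorted = sorted(ages)
       return network_size_estimate / ages_sorted[len(ages_sorted) // 2]

With the recommendation above, history_size would be set to 25% of the routing table size.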
6.5.  Estimate Sharing

To improve the accuracy of network size, join rate, and leave rate estimates, peers MUST share their estimates.  When the stabilization timer fires, a peer MUST select number-of-peers-to-probe random peers from its finger table and send each of them a Probe request.  The targets of Probe requests are selected from the finger table rather than from the neighbor table since neighbors are likely to make similar errors when calculating their estimates.  number-of-peers-to-probe is a new element in the overlay configuration document.  It is defined in Section 7 and has a default value of 4.  Both the Probe request and the answer returned by the target peer MUST contain a new message extension whose MessageExtensionType is 'self_tuning_data'.  This extension type is defined in Section 9.1.  The extension_contents field of the MessageExtension structure MUST contain a SelfTuningData structure:

      struct {
        uint32  network_size;
        uint32  join_rate;
        uint32  leave_rate;
      } SelfTuningData;

The contents of the SelfTuningData structure are as follows:

network_size:  The latest network size estimate calculated by the sender.

join_rate:  The latest join rate estimate calculated by the sender.

leave_rate:  The latest leave rate estimate calculated by the sender.

The join and leave rates are expressed as joins or failures per 24 hours.  As an example, if the global join rate estimate a peer has calculated is 0.123 peers/s, it would include in the join_rate field the ceiling of the value 10627.2 (24*60*60*0.123 = 10627.2), that is, the value 10628.

The 'type' field of the MessageExtension structure MUST be set to contain the value 'self_tuning_data'.  The 'critical' field of the structure MUST be set to False.

A peer MUST store all estimates it receives in Probe requests and answers during a stabilization interval.  When the stabilization timer fires, the peer MUST calculate the estimates to be used during the next stabilization interval by taking the 75th percentile of a data set containing its own estimate and the received estimates.

6.6.  Calculating the Stabilization Interval

According to [liben-nowell2002], a Chord network in a ring-like state remains in a ring-like state as long as peers send Omega(log2^2(N)) messages before N new peers join or N/2 peers fail.  We can use the estimate of the peer failure rate, U, to calculate the time Tf in which N/2 peers fail:

                             1
                    Tf = ---------
                           2*U

Based on this estimate, a stabilization interval Tstab-1 MUST be calculated as:

                              Tf
                 Tstab-1 = -----------
                            log2^2(N)

On the other hand, the estimated join rate L can be used to calculate the time in which N new peers join the overlay.  Based on the estimate of L, a stabilization interval Tstab-2 MUST be calculated as:

                                 N
                 Tstab-2 = ---------------
                            L * log2^2(N)

Finally, the actual stabilization interval Tstab that MUST be used can be obtained by taking the minimum of Tstab-1 and Tstab-2.

The results obtained in [maenpaa2009] indicate that making the stabilization interval too small has the effect of making the overlay less stable (e.g., in terms of detected loops and path failures).  Thus, a lower limit should be used for the stabilization period.  Based on the results in [maenpaa2009], a lower limit of 15s is RECOMMENDED, since using a stabilization period smaller than this will with a high probability cause too much traffic in the overlay.
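The following non-normative Python sketch ties the estimates together as described in Sections 6.5 and 6.6.  The function names and the use of the nearest-rank percentile are our choices.

   import math

   MIN_STABILIZATION_INTERVAL = 15.0   # seconds, the RECOMMENDED lower limit

   def percentile_75(own_estimate, received_estimates):
       # Combine the peer's own estimate with those received in Probe
       # requests and answers by taking the 75th percentile (nearest rank).
       data = sorted([own_estimate] + list(received_estimates))
       return data[math.ceil(0.75 * len(data)) - 1]

   def stabilization_interval(n, failure_rate_u, join_rate_l):
       # Tstab = min(Tstab-1, Tstab-2), floored at the recommended 15 s.
       log2n_sq = math.log2(n) ** 2
       t_fail = 1.0 / (2.0 * failure_rate_u)     # time in which N/2 peers fail
       t_stab1 = t_fail / log2n_sq
       t_stab2 = n / (join_rate_l * log2n_sq)    # (N / L) spread over log2^2(N)
       return max(min(t_stab1, t_stab2), MIN_STABILIZATION_INTERVAL)

   # The 500-peer example of Section 3.2: one join and one failure every 30 s.
   print(round(stabilization_interval(500, 1.0 / (500 * 30), 1.0 / 30)))  # 93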
7.  Overlay Configuration Document Extension

This document extends the RELOAD overlay configuration document by adding one new element, "number-of-peers-to-probe", inside each "configuration" element.

self-tuning:number-of-peers-to-probe:  The number of fingers to which Probe requests are sent to obtain their network size, join rate, and leave rate estimates.  The default value is 4.

This new element is formally defined as follows:

   namespace self-tuning = "urn:ietf:params:xml:ns:p2p:self-tuning"

   parameter &= element self-tuning:number-of-peers-to-probe {
     xsd:unsignedInt }

This namespace is added into the <mandatory-extension> element in the overlay configuration file.

8.  Security Considerations

In the same way as malicious or compromised peers implementing the RELOAD base protocol [I-D.ietf-p2psip-base] can advertise false network metrics or distribute false routing table information, for instance in RELOAD Update messages, malicious peers implementing this specification may share false join rate, leave rate, and network size estimates.  For such attacks, the same security concerns apply as in the RELOAD base specification.  In addition, as long as the number of malicious peers in the overlay remains modest, the statistical mechanisms applied in Section 6.5 (i.e., the use of 75th percentiles) to process the shared estimates a peer obtains help to ensure that estimates that are clearly different from (i.e., larger or smaller than) other received estimates will not significantly influence the process of adapting the stabilization interval and routing table size.

9.  IANA Considerations

9.1.  Message Extensions

This document introduces one additional extension to the "RELOAD Extensions" Registry:

      +------------------+-------+---------------+
      | Extension Name   | Code  | Specification |
      +------------------+-------+---------------+
      | self_tuning_data |   3   | RFC-AAAA      |
      +------------------+-------+---------------+

The contents of the extension are defined in Section 6.5.

Note to RFC Editor: please replace AAAA with the RFC number for this specification.

10.  Acknowledgments

The authors would like to thank Jani Hautakorpi for his contributions to the document.  The authors would also like to thank Carlos Bernardos for his comments on the document.

11.  References

11.1.  Normative References

[I-D.ietf-p2psip-base]
           Jennings, C., Lowekamp, B., Rescorla, E., Baset, S., and H. Schulzrinne, "REsource LOcation And Discovery (RELOAD) Base Protocol", draft-ietf-p2psip-base-26 (work in progress), February 2013.

[RFC2119]  Bradner, S., "Key words for use in RFCs to Indicate Requirement Levels", BCP 14, RFC 2119, March 1997.

[RFC5245]  Rosenberg, J., "Interactive Connectivity Establishment (ICE): A Protocol for Network Address Translator (NAT) Traversal for Offer/Answer Protocols", RFC 5245, April 2010.

[RFC5389]  Rosenberg, J., Mahy, R., Matthews, P., and D. Wing, "Session Traversal Utilities for NAT (STUN)", RFC 5389, October 2008.

11.2.  Informative References

[CAN]      Ratnasamy, S., Francis, P., Handley, M., Karp, R., and S. Schenker, "A Scalable Content-Addressable Network", In Proceedings of the 2001 Conference on Applications, Technologies, Architectures and Protocols for Computer Communications, pp. 161-172, August 2001.

[Chord]    Stoica, I., Morris, R., Liben-Nowell, D., Karger, D., Kaashoek, M., Dabek, F., and H. Balakrishnan, "Chord: A Scalable Peer-to-peer Lookup Service for Internet Applications", IEEE/ACM Transactions on Networking, Volume 11, Issue 1, pp. 17-32, February 2003.

[I-D.ietf-p2psip-concepts]
           Bryan, D., Matthews, P., Shim, E., Willis, D., and S. Dawkins, "Concepts and Terminology for Peer to Peer SIP", draft-ietf-p2psip-concepts-05 (work in progress), July 2013.

[Pastry]   Rowstron, A. and P. Druschel, "Pastry: Scalable, Decentralized Object Location and Routing for Large-Scale Peer-to-Peer Systems", In Proceedings of the IFIP/ACM International Conference on Distributed Systems Platforms, pp. 329-350, November 2001.
[binzenhofer2006]
           Binzenhofer, A., Kunzmann, G., and R. Henjes, "A Scalable Algorithm to Monitor Chord-Based P2P Systems at Runtime", In Proceedings of the 20th IEEE International Parallel and Distributed Processing Symposium (IPDPS), pp. 1-8, April 2006.

[ghinita2006]
           Ghinita, G. and Y. Teo, "An Adaptive Stabilization Framework for Distributed Hash Tables", In Proceedings of the 20th IEEE International Parallel and Distributed Processing Symposium (IPDPS), pp. 29-38, April 2006.

[horowitz2003]
           Horowitz, K. and D. Malkhi, "Estimating Network Size from Local Information", Information Processing Letters, Volume 88, Issue 5, pp. 237-243, December 2003.

[kostoulas2005]
           Kostoulas, D., Psaltoulis, D., Gupta, I., Birman, K., and A. Demers, "Decentralized Schemes for Size Estimation in Large and Dynamic Groups", In Proceedings of the 4th IEEE International Symposium on Network Computing and Applications, pp. 41-48, July 2005.

[krishnamurthy2008]
           Krishnamurthy, S., El-Ansary, S., Aurell, E., and S. Haridi, "Comparing Maintenance Strategies for Overlays", In Proceedings of the 16th Euromicro Conference on Parallel, Distributed and Network-Based Processing, pp. 473-482, February 2008.

[li2004]   Li, J., Stribling, J., Gil, T., Morris, R., and M. Kaashoek, "Comparing the Performance of Distributed Hash Tables Under Churn", Peer-to-Peer Systems III, Volume 3279 of Lecture Notes in Computer Science, Springer, pp. 87-99, February 2005.

[liben-nowell2002]
           Liben-Nowell, D., Balakrishnan, H., and D. Karger, "Observations on the Dynamic Evolution of Peer-to-Peer Networks", In Proceedings of the 1st International Workshop on Peer-to-Peer Systems (IPTPS), pp. 22-33, March 2002.

[maenpaa2009]
           Maenpaa, J. and G. Camarillo, "A Study on Maintenance Operations in a Chord-Based Peer-to-Peer Session Initiation Protocol Overlay Network", In Proceedings of the 23rd IEEE International Parallel and Distributed Processing Symposium (IPDPS), pp. 1-9, May 2009.

[mahajan2003]
           Mahajan, R., Castro, M., and A. Rowstron, "Controlling the Cost of Reliability in Peer-to-Peer Overlays", In Proceedings of the 2nd International Workshop on Peer-to-Peer Systems (IPTPS), pp. 21-32, February 2003.

[rhea2004]
           Rhea, S., Geels, D., Roscoe, T., and J. Kubiatowicz, "Handling Churn in a DHT", In Proceedings of the USENIX Annual Technical Conference, pp. 127-140, June 2004.

Authors' Addresses

   Jouni Maenpaa
   Ericsson
   Hirsalantie 11
   Jorvas  02420
   Finland

   Email: Jouni.Maenpaa@ericsson.com

   Gonzalo Camarillo
   Ericsson
   Hirsalantie 11
   Jorvas  02420
   Finland

   Email: Gonzalo.Camarillo@ericsson.com