2 Network Working Group T. Dreibholz 3 Internet-Draft Simula@OsloMET 4 Intended status: Informational M. Tuexen 5 Expires: September 7, 2018 Muenster Univ. of App. Sciences 6 M. Shore 7 No Mountain Software 8 N.
Zong 9 Huawei Technologies 10 March 6, 2018 12 The Applicability of Reliable Server Pooling (RSerPool) for Virtual 13 Network Function Resource Pooling (VNFPOOL) 14 draft-dreibholz-vnfpool-rserpool-applic-07.txt 16 Abstract 18 This draft describes the application of Reliable Server 19 Pooling (RSerPool) for Virtual Network Function Resource 20 Pooling (VNFPOOL). 22 Status of This Memo 24 This Internet-Draft is submitted in full conformance with the 25 provisions of BCP 78 and BCP 79. 27 Internet-Drafts are working documents of the Internet Engineering 28 Task Force (IETF). Note that other groups may also distribute 29 working documents as Internet-Drafts. The list of current Internet- 30 Drafts is at http://datatracker.ietf.org/drafts/current/. 32 Internet-Drafts are draft documents valid for a maximum of six months 33 and may be updated, replaced, or obsoleted by other documents at any 34 time. It is inappropriate to use Internet-Drafts as reference 35 material or to cite them other than as "work in progress." 37 This Internet-Draft will expire on September 7, 2018. 39 Copyright Notice 41 Copyright (c) 2018 IETF Trust and the persons identified as the 42 document authors. All rights reserved. 44 This document is subject to BCP 78 and the IETF Trust's Legal 45 Provisions Relating to IETF Documents 46 (http://trustee.ietf.org/license-info) in effect on the date of 47 publication of this document. Please review these documents 48 carefully, as they describe your rights and restrictions with respect 49 to this document. Code Components extracted from this document must 50 include Simplified BSD License text as described in Section 4.e of 51 the Trust Legal Provisions and are provided without warranty as 52 described in the Simplified BSD License. 54 Table of Contents 56 1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . 2 57 1.1. Abbreviations . . . . . . . . . . . . . . . . . . . . . . 2 58 2. Virtual Network Function Resource Pooling . . . . . . . . . . 
3 59 3. Reliable Server Pooling . . . . . . . . . . . . . . . . . . . 3 60 3.1. Introduction . . . . . . . . . . . . . . . . . . . . . . 3 61 3.2. Registrar Operations . . . . . . . . . . . . . . . . . . 4 62 3.3. Pool Element Operations . . . . . . . . . . . . . . . . . 5 63 3.4. Takeover Procedure . . . . . . . . . . . . . . . . . . . 5 64 3.5. Pool User Operations . . . . . . . . . . . . . . . . . . 6 65 3.5.1. Handle Resolution and Response . . . . . . . . . . . 6 66 3.5.2. Pool Member Selection Policies . . . . . . . . . . . 6 67 3.5.3. Failure Reports . . . . . . . . . . . . . . . . . . . 6 68 3.6. Automatic Configuration . . . . . . . . . . . . . . . . . 7 69 3.7. State Synchronisation . . . . . . . . . . . . . . . . . . 7 70 3.7.1. Cookies . . . . . . . . . . . . . . . . . . . . . . . 7 71 3.7.2. Business Cards . . . . . . . . . . . . . . . . . . . 8 72 3.8. Protocol Stack . . . . . . . . . . . . . . . . . . . . . 8 73 3.9. Extensions . . . . . . . . . . . . . . . . . . . . . . . 9 74 3.10. Reference Implementation and Deployment . . . . . . . . . 9 75 4. Usage of Reliable Server Pooling . . . . . . . . . . . . . . 9 76 5. Security Considerations . . . . . . . . . . . . . . . . . . . 10 77 6. IANA Considerations . . . . . . . . . . . . . . . . . . . . . 10 78 7. Testbed Platform . . . . . . . . . . . . . . . . . . . . . . 10 79 8. Acknowledgments . . . . . . . . . . . . . . . . . . . . . . . 10 80 9. References . . . . . . . . . . . . . . . . . . . . . . . . . 10 81 9.1. Normative References . . . . . . . . . . . . . . . . . . 10 82 9.2. Informative References . . . . . . . . . . . . . . . . . 11 83 Authors' Addresses . . . . . . . . . . . . . . . . . . . . . . . 14 85 1. Introduction 87 1.1. Abbreviations 89 o PE: Pool Element 91 o PR: Pool Registrar 93 o PU: Pool User 95 o RSerPool: Reliable Server Pooling 96 o SCTP: Stream Control Transmission Protocol 98 o VNFPOOL: Virtual Network Function Resource Pooling 100 2.
Virtual Network Function Resource Pooling 102 A Virtualised Network Function (VNF) (e.g. vFW, vLB) -- as introduced 103 in more detail in [12] -- provides the same function as the 104 equivalent network function (e.g. FW, LB), but is deployed as 105 software instances running on general-purpose servers via a 106 virtualisation platform. The main features of VNF include the 107 following aspects: 109 1. A service consists of a sequence of topologically distributed VNF 110 instances where the data connections are preferably directly 111 established between the instances. 113 2. There are potentially more factors that may cause VNF instance 114 transition or even failure than for a conventional network 115 function. 117 Virtualisation technology allows network function virtualisation 118 operators to build a reliable VNF by pooling the underlying 119 resources, such as CPU, storage and networking, to form a cluster 120 of VNF instances. A VNF pool refers to a cluster or group of VNF 121 instances providing the same network function. Each VNF pool has a Pool 122 Manager (PM) to manage the VNF instances, e.g. for instance selection 123 and monitoring. A redundancy mechanism for the PM itself is needed 124 to achieve a reliable VNF. More details on VNF pools can be found in 125 [12]. 127 3. Reliable Server Pooling 129 3.1. Introduction 130 +---------------+ 131 | Pool User | 132 +---------------+ 133 ^ 134 | ASAP 135 V 136 +---------------+ ENRP +---------------+ 137 | Registrar |<-------->| Registrar | 138 +---------------+ +---------------+ 139 ^ 140 | ASAP 141 V 142 +------------------------------------------------------------+ 143 | +--------------+ +--------------+ +--------------+ | 144 | | Pool Element | | Pool Element | ... ... 
| Pool Element | | 145 | +--------------+ +--------------+ +--------------+ | 146 | Server Pool | 147 +------------------------------------------------------------+ 149 Figure 1 151 An overview of the RSerPool framework -- which is defined as RFC in 152 [2] -- is provided in Figure 1. There are three types of components: 154 o Pool Element (PE) denotes a server in a pool. PEs in the same 155 pool provide the same service. 157 o Pool User (PU) denotes a client using the service of a pool. 159 o Pool Registrar (PR) is the management component for the pools. 161 The set of all pools within an operation scope (for example: an 162 organisation, a company or a department) is denoted as handlespace. 163 Clearly, a single PR would be a single point of failure. Therefore, 164 PRs also have to be redundant. Within the handlespace, each pool is 165 identified by a unique pool handle (PH). 167 3.2. Registrar Operations 169 The PRs of an operation scope synchronise their view of the 170 handlespace by using the Endpoint haNdlespace Redundancy 171 Protocol (ENRP, defined as RFCs in [4], [5]). In contrast to for 172 instance the Domain Name System (DNS), an operation scope is 173 restricted to a single administrative domain. That is, all of its 174 components are under the control of the same authority (for example: 175 a company). This property leads to small management overhead, which 176 also allows for RSerPool usage on devices having only limited memory 177 and CPU resources (for example: telecommunications equipment). 178 Nevertheless, PEs may be distributed globally to continue their 179 service even in case of localised disasters (like for example an 180 earthquake). Each PR in the operation scope is identified by a PR 181 ID, which is a randomly chosen 32-bit number. 183 3.3. 
Pool Element Operations 185 Within their operation scope, the PEs may choose an arbitrary PR to 186 register into a pool by using the Aggregate Server Access 187 Protocol (ASAP, defined as RFCs in [3], [5]). The registration is 188 performed by using an ASAP_REGISTRATION message. Within its pool, a 189 PE is characterised by its PE ID, which is a randomly chosen 32-bit 190 number. Upon registration at a PR, the chosen PR becomes the Home- 191 PR (PR-H) of the newly registered PE. A PR-H is responsible for 192 monitoring the availability of its PEs by ASAP_ENDPOINT_KEEP_ALIVE 193 messages (to be acknowledged by a PE via an 194 ASAP_ENDPOINT_KEEP_ALIVE_ACK message within a configured timeout). 195 The PR-H propagates the information about its PEs to the other PRs of 196 the operation scope via ENRP_UPDATE messages. 198 PEs re-register regularly, in an interval denoted as the registration 199 lifetime, as well as whenever their information needs to be updated. Similar to the registration, a 200 re-registration is performed by using another ASAP_REGISTRATION 201 message. PEs may intentionally deregister from the pool by using an 202 ASAP_DEREGISTRATION message. As for the registration, the 203 PR-H makes the deregistration known to the other PRs within the 204 operation scope by using an ENRP_UPDATE message. 206 3.4. Takeover Procedure 208 As soon as a PE detects the failure of its PR-H (that is: its request 209 is not answered within a given timeout), it simply tries another PR 210 of the operation scope for its registration and deregistration 211 requests. However, as a double safeguard, the remaining PRs also 212 negotiate a takeover of the PEs managed by a dead PR. This ensures 213 that each PE again gets a working PR-H as soon as possible. The PRs 214 of an operation scope monitor the availability of each other PR by 215 using ENRP_PRESENCE messages, which are transmitted regularly. 
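This mutual monitoring can be sketched as follows; the timeout value and the data structure are illustrative assumptions for this sketch only (ENRP defines its own timers and message formats):

```python
import time

PRESENCE_TIMEOUT = 5.0  # seconds; illustrative value, not taken from the RFCs

class PeerTable:
    """Tracks the last ENRP_PRESENCE heard from each peer PR."""

    def __init__(self, now=time.monotonic):
        self.now = now
        self.last_heard = {}  # PR ID -> timestamp of last ENRP_PRESENCE

    def on_presence(self, pr_id):
        # Every received ENRP_PRESENCE refreshes the peer's timestamp.
        self.last_heard[pr_id] = self.now()

    def dead_peers(self):
        # A peer is assumed dead once it has been silent for longer
        # than the configured timeout.
        t = self.now()
        return [pr for pr, seen in self.last_heard.items()
                if t - seen > PRESENCE_TIMEOUT]
```

A PR would periodically call dead_peers() and initiate the takeover procedure described in this section for each PR found dead.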
If 216 there is no ENRP_PRESENCE within a given timeout, the peer is assumed 217 to be dead and a so-called takeover procedure (see also [21] for more 218 details) is initiated for the PEs managed by the dead PR: from all 219 PRs having started this takeover procedure, the PR with the highest 220 PR ID takes over the ownership of these PEs. The PEs are informed 221 about being taken over by their new PR-H via an 222 ASAP_ENDPOINT_KEEP_ALIVE with Home-flag set. The PEs are requested 223 to adopt the sender of this Home-flagged message as their new PR-H. 225 3.5. Pool User Operations 227 3.5.1. Handle Resolution and Response 229 In order to access the service of a pool given by its PH, a PU 230 requests a PE selection from an arbitrary PR of the operation scope, 231 again by using ASAP. This selection procedure is denoted as handle 232 resolution. Upon reception of a so-called ASAP_HANDLE_RESOLUTION 233 message the PR selects the requested list of PE identities and 234 returns them in an ASAP_HANDLE_RESOLUTION_RESPONSE message. 236 3.5.2. Pool Member Selection Policies 238 The pool-specific selection rule is denoted as pool member selection 239 policy or shortly as pool policy. Two classes of load distribution 240 policies are supported: non-adaptive and adaptive strategies (a 241 detailed overview is provided by [16], [18], [23], [19]). While 242 adaptive strategies base their selections on the current PE state 243 (which requires up-to-date information), non-adaptive algorithms do 244 not need such data. A basic set of adaptive and non-adaptive pool 245 policies is defined as RFC in [7]. 247 Defined in [7] are the non-adaptive policies Round Robin (RR), 248 Random (RAND) and Priority (PRIO) as well as the adaptive policies 249 Least Used (LU) and Least Used with Degradation (LUD). While RR/RAND 250 select PEs in turn/randomly, PRIO selects one of the PEs having the 251 highest priority. PRIO can for example be used to realise a master/ 252 backup PE setup. 
Only if there are no master PEs left, a backup PE 253 is selected. Round-robin selection is applied among PEs having the 254 same priority. LU selects the least-used PE, according to up-to-date 255 application-specific load information. Round-robin selection is 256 applied among multiple least-loaded PEs. LUD, which is evaluated by 257 [20], furthermore introduces a load decrement constant that is added 258 to the actual load each time a PE is selected. It is used to 259 compensate for inaccurate load states due to delayed updates. An update 260 resets the load to the actual load value. 262 3.5.3. Failure Reports 264 PEs may fail, for example due to hardware or network failures. Since 265 there is a certain latency between the actual failure of a PE and the 266 removal of its entry from the handlespace -- depending on the 267 interval and timeout for the ASAP_ENDPOINT_KEEP_ALIVE monitoring -- 268 the PUs may report unreachable PEs to a PR by using an 269 ASAP_ENDPOINT_UNREACHABLE message. A PR locally counts these reports 270 for each PE and, when the threshold MAX-BAD-PE-REPORT 271 (default: 3, as defined in [3]) is reached, the PR may decide to 272 remove the PE from the handlespace. The counter of a PE is reset 273 upon its re-registration. More details on this threshold and 274 guidelines for its configuration can be found in [22]. 276 3.6. Automatic Configuration 278 RSerPool components need to know the PRs of their operation scope. 279 While it is of course possible to configure a list of PRs into each 280 component, RSerPool also provides an auto-configuration feature: PRs 281 may send so-called announces, that is, ASAP_ANNOUNCE and 282 ENRP_PRESENCE messages which are regularly sent over UDP via IP 283 multicast. Unlike broadcasts, multicast messages can also be 284 transported over routers (at least, this is easily possible within 285 LANs). 
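As an illustration, a component could join the announce multicast group and track announcing PRs roughly as sketched below; the multicast group, port number and in-memory message representation are assumptions of this sketch and are not taken from the RSerPool specifications:

```python
import socket
import struct

# Illustrative values only: group, port and message representation
# below are assumptions of this sketch, not defined by RSerPool.
ANNOUNCE_GROUP = "239.0.0.1"
ANNOUNCE_PORT = 3863

def join_announce_group(group=ANNOUNCE_GROUP, port=ANNOUNCE_PORT):
    """Open a UDP socket and join the multicast group used for announces."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    s.bind(("", port))
    mreq = struct.pack("4s4s", socket.inet_aton(group),
                       socket.inet_aton("0.0.0.0"))
    s.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    return s

def update_pr_list(prs, announcement, sender):
    """Record the announcing PR; both ASAP_ANNOUNCE and ENRP_PRESENCE
    advertise an available PR."""
    msg_type, pr_id = announcement
    if msg_type in ("ASAP_ANNOUNCE", "ENRP_PRESENCE"):
        prs[pr_id] = sender
    return prs
```

A receive loop would decode each datagram and feed it into update_pr_list(), so that the component always holds a current list of PRs.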
The announces of the PRs can be heard by the other 286 components, which can maintain a list of currently available PRs. 287 That is, RSerPool components are usually just turned on and 288 everything works automatically. 290 3.7. State Synchronisation 292 RSerPool has been explicitly designed to be application-independent. 293 Therefore, RSerPool intentionally does not define special state 294 synchronisation mechanisms for RSerPool-based applications. Such 295 state synchronisation mechanisms are considered tasks of the 296 applications themselves. However, RSerPool defines two mechanisms to 297 at least support the implementation of more sophisticated strategies: 298 Cookies and Business Cards. Details on these mechanisms can also be 299 found in Subsection 3.9.5 of [16]. 301 3.7.1. Cookies 303 ASAP provides the mechanism of Client-Based State Sharing as 304 introduced in [17]. Whenever useful, the PE may package its state in 305 the form of a state cookie and send it -- by an ASAP_COOKIE message -- to 306 the PU. The PU stores the latest state cookie received from the PE. 307 Upon PE failure, this stored cookie is sent in an ASAP_COOKIE_ECHO to 308 the newly chosen PE. This PE may then restore the state. A shared 309 secret known by all PEs of a pool may be used to protect the state 310 from being manipulated or read by the PU. 312 While Client-Based State Sharing is very simple, it may be 313 inefficient when the state changes too frequently, is too large (the 314 size limit of an ASAP_COOKIE/ASAP_COOKIE_ECHO is 64 KiB) or if a PU 315 must be prevented from sending a state cookie to multiple PEs in 316 order to duplicate its sessions. 318 3.7.2. Business Cards 320 Depending on the application, there may be constraints restricting 321 the set of PEs usable for failover. The ASAP_BUSINESS_CARD message 322 is used to inform peer components about such constraints. 
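On the PU side, such a constraint can be kept, for instance, as a per-session candidate list, as in the following sketch (class and method names are illustrative, not the rsplib API; the pool policy is simplified to a random choice):

```python
import random

class Session:
    """PU-side bookkeeping of failover candidates announced via
    ASAP_BUSINESS_CARD messages (illustrative sketch only)."""

    def __init__(self, pool_handle):
        self.pool_handle = pool_handle
        self.failover_candidates = []  # latest announced list wins

    def on_business_card(self, pe_identities):
        # A newly received Business Card replaces the stored list.
        self.failover_candidates = list(pe_identities)

    def choose_failover_pe(self, rng=random):
        # If candidates were announced, select among them instead of
        # performing a regular handle resolution at a PR.
        if self.failover_candidates:
            return rng.choice(self.failover_candidates)
        return None  # no constraint: fall back to handle resolution
```

Returning None here stands for the unconstrained case, in which the PU would perform a regular handle resolution for its pool handle.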
324 The first case to use a Business Card is if only a restricted set of 325 PEs in the pool may be used for failover. For example, in a large 326 pool, each PE can share its complete set of session states with a few 327 other PEs only. This keeps the system scalable. That is, a PE in a 328 pool of n servers does not have to synchronise all session states 329 with the other n-1 PEs. In this case, a PE has to tell its PU the 330 set of PE identities being candidates for a failover using an 331 ASAP_BUSINESS_CARD message. A PE may update the list of possible 332 failover candidates at any time by sending another Business Card. 333 The PU has to store the latest list of failover candidates. Of 334 course, if a failover becomes necessary, the PU has to select from 335 this list using the appropriate pool policy -- instead of performing 336 the regular PE selection by handle resolution at a PR. Therefore, 337 some literature also denotes the Business Card by the more expressive 338 term "last will". 340 In symmetric scenarios, where a PU is also a PE of another pool, the 341 PU has to tell this fact to its PE. This is realised by sending an 342 ASAP_BUSINESS_CARD message to the PE, providing the PH of its pool. 343 Optionally, also specific PE identities for failover may be provided. 344 The format remains the same as explained in the previous paragraph. 345 If the PE detects a failure of its PU, the PE may -- now in the role 346 of a PU -- use the provided PH for a handle resolution to find a new 347 PE or use the provided PE identities to select one. After that, it 348 can perform a failover to that PE. 350 3.8. Protocol Stack 352 The protocol stack of a PR provides ENRP and ASAP services to PRs and 353 PEs/PUs respectively. But between PU and PE, ASAP provides a Session 354 Layer protocol in the OSI model. From the perspective of the 355 Application Layer, the PU side establishes a session with a pool. 
ASAP takes care of selecting a PE of the pool, initiating and 357 maintaining the underlying transport connection and triggering a 358 failover procedure when the PE becomes unavailable. 360 The Transport Layer protocol is by default SCTP (as defined in [1]) 361 -- except for the UDP-based automatic configuration announces (see 362 Section 3.6) -- over possibly multi-homed IPv4 and/or IPv6. SCTP has 363 been chosen due to its support of multi-homing and its reliability 364 features (see also [26]). 366 3.9. Extensions 368 A couple of extensions to RSerPool exist: The Handle Resolution 369 Option defined in [9] improves the PE selection by letting the PU 370 tell the PR its required number of PEs to be selected. The ENRP Takeover 371 Suggestion introduced in [11] ensures load balancing among PRs. [10] 372 defines a delay-sensitive pool policy. [8] defines an SNMP MIB for 373 RSerPool. 375 3.10. Reference Implementation and Deployment 377 RSPLIB is the Open Source reference implementation of RSerPool. It 378 is currently -- as of February 2016 -- available for Linux, FreeBSD, 379 MacOS and Solaris. It is actively maintained. In particular, it is 380 also included in Ubuntu Linux as well as in the FreeBSD ports 381 collection. RSPLIB can be downloaded from [14]. Further details on 382 the implementation are available in [16], [24]. 384 RSerPool with RSPLIB is deployed in a couple of Open Source projects, 385 including the SimProcTC Simulation Processing Tool-Chain for 386 distributing simulation runs in a compute pool (see [25] as well as 387 the simulation run distribution project explained in [26] for a 388 practical example) as well as for service infrastructure management 389 in the NorNet Core research testbed (see [27], [28]). 391 4. Usage of Reliable Server Pooling 393 **** TO BE DISCUSSED! **** 395 The following features of RSerPool can be used for VNFPOOL: 397 o Pool management. 399 o PE selection with pool policies. 
401 o Session management with the help of ASAP_BUSINESS_CARD. 403 The following features have to be added to RSerPool itself: 405 o Support of TCP including MPTCP as additional/alternative transport 406 protocols. 408 o Possibly add some special pool policies? 410 o See also [13] for ideas on a next generation of RSerPool. 412 The following features have to be provided outside of RSerPool: 414 o State synchronisation for VNFPOOL. 416 o Pool Manager functionality as an RSerPool-based service. 418 5. Security Considerations 420 Security considerations for RSerPool can be found in [6]. 421 Furthermore, [23] examines the robustness of RSerPool systems against 422 attacks. 424 6. IANA Considerations 426 This document introduces no additional considerations for IANA. 428 7. Testbed Platform 430 A large-scale and realistic Internet testbed platform with support 431 for Reliable Server Pooling and the underlying SCTP protocol is 432 NorNet. A description of and introduction to NorNet is provided in 433 [28], [29], [30]. Further information can be found on the project 434 website [15] at https://www.nntb.no. 436 8. Acknowledgments 438 The authors would like to thank Xing Zhou for the friendly support. 440 9. References 442 9.1. Normative References 444 [1] Stewart, R., Ed., "Stream Control Transmission Protocol", 445 RFC 4960, DOI 10.17487/RFC4960, September 2007, 446 . 448 [2] Lei, P., Ong, L., Tuexen, M., and T. Dreibholz, "An 449 Overview of Reliable Server Pooling Protocols", RFC 5351, 450 DOI 10.17487/RFC5351, September 2008, . 453 [3] Stewart, R., Xie, Q., Stillman, M., and M. Tuexen, 454 "Aggregate Server Access Protocol (ASAP)", RFC 5352, 455 DOI 10.17487/RFC5352, September 2008, . 458 [4] Xie, Q., Stewart, R., Stillman, M., Tuexen, M., and A. 459 Silverton, "Endpoint Handlespace Redundancy Protocol 460 (ENRP)", RFC 5353, DOI 10.17487/RFC5353, September 2008, 461 . 463 [5] Stewart, R., Xie, Q., Stillman, M., and M. 
Tuexen, 464 "Aggregate Server Access Protocol (ASAP) and Endpoint 465 Handlespace Redundancy Protocol (ENRP) Parameters", 466 RFC 5354, DOI 10.17487/RFC5354, September 2008, 467 . 469 [6] Stillman, M., Ed., Gopal, R., Guttman, E., Sengodan, S., 470 and M. Holdrege, "Threats Introduced by Reliable Server 471 Pooling (RSerPool) and Requirements for Security in 472 Response to Threats", RFC 5355, DOI 10.17487/RFC5355, 473 September 2008, . 475 [7] Dreibholz, T. and M. Tuexen, "Reliable Server Pooling 476 Policies", RFC 5356, DOI 10.17487/RFC5356, September 2008, 477 . 479 [8] Dreibholz, T. and J. Mulik, "Reliable Server Pooling MIB 480 Module Definition", RFC 5525, DOI 10.17487/RFC5525, April 481 2009, . 483 [9] Dreibholz, T., "Handle Resolution Option for ASAP", draft- 484 dreibholz-rserpool-asap-hropt-21 (work in progress), 485 August 2017. 487 [10] Dreibholz, T. and X. Zhou, "Definition of a Delay 488 Measurement Infrastructure and Delay-Sensitive Least-Used 489 Policy for Reliable Server Pooling", draft-dreibholz- 490 rserpool-delay-20 (work in progress), August 2017. 492 [11] Dreibholz, T. and X. Zhou, "Takeover Suggestion Flag for 493 the ENRP Handle Update Message", draft-dreibholz-rserpool- 494 enrp-takeover-18 (work in progress), August 2017. 496 [12] Zong, N., Dunbar, L., Shore, M., Lopez, D., and G. 497 Karagiannis, "Virtualized Network Function (VNF) Pool 498 Problem Statement", draft-zong-vnfpool-problem- 499 statement-06 (work in progress), July 2014. 501 [13] Dreibholz, T., "Ideas for a Next Generation of the 502 Reliable Server Pooling Framework", draft-dreibholz- 503 rserpool-nextgen-ideas-08 (work in progress), August 2017. 505 9.2. Informative References 507 [14] Dreibholz, T., "Thomas Dreibholz's RSerPool Page", 508 Online: https://www.uni-due.de/~be0001/rserpool/, 2017, 509 . 511 [15] Dreibholz, T., "NorNet -- A Real-World, Large-Scale Multi- 512 Homing Testbed", Online: https://www.nntb.no/, 2017, 513 . 
515 [16] Dreibholz, T., "Reliable Server Pooling - Evaluation, 516 Optimization and Extension of a Novel IETF Architecture", 517 March 2007, . 521 [17] Dreibholz, T., "An Efficient Approach for State Sharing in 522 Server Pools", Proceedings of the 27th IEEE Local Computer 523 Networks Conference (LCN) Pages 348-349, 524 ISBN 0-7695-1591-6, DOI 10.1109/LCN.2002.1181806, November 525 2002, . 529 [18] Dreibholz, T. and E. Rathgeb, "On the Performance of 530 Reliable Server Pooling Systems", Proceedings of the IEEE 531 Conference on Local Computer Networks (LCN) 30th 532 Anniversary Pages 200-208, ISBN 0-7695-2421-4, 533 DOI 10.1109/LCN.2005.98, November 2005, 534 . 537 [19] Dreibholz, T. and E. Rathgeb, "An Evaluation of the Pool 538 Maintenance Overhead in Reliable Server Pooling Systems", 539 SERSC International Journal on Hybrid Information 540 Technology (IJHIT) Number 2, Volume 1, Pages 17-32, 541 ISSN 1738-9968, April 2008, . 545 [20] Zhou, X., Dreibholz, T., and E. Rathgeb, "A New Server 546 Selection Strategy for Reliable Server Pooling in Widely 547 Distributed Environments", Proceedings of the 2nd IEEE 548 International Conference on Digital Society (ICDS) Pages 549 171-177, ISBN 978-0-7695-3087-1, DOI 10.1109/ICDS.2008.12, 550 February 2008, . 554 [21] Zhou, X., Dreibholz, T., Fa, F., Du, W., and E. Rathgeb, 555 "Evaluation and Optimization of the Registrar Redundancy 556 Handling in Reliable Server Pooling Systems", Proceedings 557 of the IEEE 23rd International Conference on Advanced 558 Information Networking and Applications (AINA) Pages 559 256-262, ISBN 978-0-7695-3638-5, DOI 10.1109/AINA.2009.25, 560 May 2009, . 564 [22] Dreibholz, T. and E. Rathgeb, "Overview and Evaluation of 565 the Server Redundancy and Session Failover Mechanisms in 566 the Reliable Server Pooling Framework", International 567 Journal on Advances in Internet Technology (IJAIT) Number 568 1, Volume 2, Pages 1-14, ISSN 1942-2652, June 2009, 569 . 
572 [23] Dreibholz, T., Zhou, X., Becke, M., Pulinthanath, J., 573 Rathgeb, E., and W. Du, "On the Security of Reliable 574 Server Pooling Systems", International Journal on 575 Intelligent Information and Database 576 Systems (IJIIDS) Number 6, Volume 4, Pages 552-578, 577 ISSN 1751-5858, DOI 10.1504/IJIIDS.2010.036894, December 578 2010, . 581 [24] Dreibholz, T. and M. Becke, "The RSPLIB Project - From 582 Research to Application", Demo Presentation at the IEEE 583 Global Communications Conference (GLOBECOM), December 584 2010, . 587 [25] Dreibholz, T. and E. Rathgeb, "A Powerful Tool-Chain for 588 Setup, Distributed Processing, Analysis and Debugging of 589 OMNeT++ Simulations", Proceedings of the 1st ACM/ICST 590 International Workshop on OMNeT++ ISBN 978-963-9799-20-2, 591 DOI 10.4108/ICST.SIMUTOOLS2008.2990, March 2008, 592 . 595 [26] Dreibholz, T., "Evaluation and Optimisation of Multi-Path 596 Transport using the Stream Control Transmission Protocol", 597 Habilitation Treatise, March 2012, 598 . 602 [27] Gran, E., Dreibholz, T., and A. Kvalbein, "NorNet Core - A 603 Multi-Homed Research Testbed", Computer Networks, Special 604 Issue on Future Internet Testbeds Volume 61, Pages 75-87, 605 ISSN 1389-1286, DOI 10.1016/j.bjp.2013.12.035, March 2014, 606 . 608 [28] Dreibholz, T. and E. Gran, "Design and Implementation of 609 the NorNet Core Research Testbed for Multi-Homed Systems", 610 Proceedings of the 3rd International Workshop on Protocols 611 and Applications with Multi-Homing Support (PAMS) Pages 612 1094-1100, ISBN 978-0-7695-4952-1, 613 DOI 10.1109/WAINA.2013.71, March 2013, 614 . 618 [29] Dreibholz, T., "NorNet at NICTA - An Introduction to the 619 NorNet Testbed", Invited Talk at National Information 620 Communications Technology Australia (NICTA), January 2016, 621 . 
624 [30] Dreibholz, T., "An Experiment Tutorial for the NorNet Core 625 Testbed at the Universidad de Castilla-La Mancha", 626 Tutorial at the Universidad de Castilla-La Mancha, 627 Instituto de Investigacion Informatica de Albacete, 628 February 2017, . 631 Authors' Addresses 633 Thomas Dreibholz 634 Simula Centre for Digital Engineering 635 Martin Linges vei 17 636 1364 Fornebu, Akershus 637 Norway 639 Phone: +47-6782-8200 640 Fax: +47-6782-8201 641 Email: dreibh@simula.no 642 URI: https://www.uni-due.de/~be0001/ 643 Michael Tuexen 644 Muenster University of Applied Sciences 645 Stegerwaldstrasse 39 646 48565 Steinfurt, Nordrhein-Westfalen 647 Germany 649 Email: tuexen@fh-muenster.de 651 Melinda Shore 652 No Mountain Software 653 PO Box 16271 654 Two Rivers, Alaska 99716 655 U.S.A. 657 Phone: +1-907-322-9522 658 Email: melinda.shore@nomountain.net 659 URI: https://www.linkedin.com/pub/melinda-shore/9/667/236 661 Ning Zong 662 Huawei Technologies 663 101 Software Avenue 664 Nanjing, Jiangsu 210012 665 China 667 Email: zongning@huawei.com 668 URI: https://cn.linkedin.com/pub/ning-zong/15/737/490