Network Working Group                                       T. Dreibholz
Internet-Draft                                                 SimulaMet
Intended status: Informational                                 M. Tuexen
Expires: August 5, 2019                 Muenster Univ. of Appl. Sciences
                                                                M. Shore
                                                    No Mountain Software
                                                                 N. Zong
                                                     Huawei Technologies
                                                        February 1, 2019


   The Applicability of Reliable Server Pooling (RSerPool) for Virtual
              Network Function Resource Pooling (VNFPOOL)
                draft-dreibholz-vnfpool-rserpool-applic-08

Abstract

   This draft describes the application of Reliable Server Pooling
   (RSerPool) for Virtual Network Function Resource Pooling (VNFPOOL).

Status of This Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF).  Note that other groups may also distribute
   working documents as Internet-Drafts.  The list of current Internet-
   Drafts is at https://datatracker.ietf.org/drafts/current/.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."

   This Internet-Draft will expire on August 5, 2019.

Copyright Notice

   Copyright (c) 2019 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (https://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document.  Code Components extracted from this
   document must include Simplified BSD License text as described in
   Section 4.e of the Trust Legal Provisions and are provided without
   warranty as described in the Simplified BSD License.

Table of Contents

   1. Introduction
     1.1. Abbreviations
   2. Virtual Network Function Resource Pooling
   3. Reliable Server Pooling
     3.1. Introduction
     3.2. Registrar Operations
     3.3. Pool Element Operations
     3.4. Takeover Procedure
     3.5. Pool User Operations
       3.5.1. Handle Resolution and Response
       3.5.2. Pool Member Selection Policies
       3.5.3. Unreachability Reports
     3.6. Automatic Configuration
     3.7. State Synchronisation
       3.7.1. Cookies
       3.7.2. Business Cards
     3.8. Protocol Stack
     3.9. Extensions
     3.10. Reference Implementation and Deployment
   4. Usage of Reliable Server Pooling
   5. Security Considerations
   6. IANA Considerations
   7. Testbed Platform
   8. Acknowledgments
   9. References
     9.1. Normative References
     9.2. Informative References
   Authors' Addresses

1. Introduction

1.1. Abbreviations

   o  PE: Pool Element

   o  PR: Pool Registrar

   o  PU: Pool User

   o  RSerPool: Reliable Server Pooling

   o  SCTP: Stream Control Transmission Protocol

   o  VNFPOOL: Virtual Network Function Resource Pooling

2. Virtual Network Function Resource Pooling
   A Virtualised Network Function (VNF) (e.g. vFW, vLB) -- as
   introduced in more detail in [I-D.zong-vnfpool-problem-statement] --
   provides the same function as the equivalent physical network
   function (e.g. FW, LB), but is deployed as software instances
   running on general-purpose servers via a virtualisation platform.
   The main features of VNF include the following aspects:

   1. A service consists of a sequence of topologically distributed VNF
      instances, where the data connections are preferably established
      directly between the instances.

   2. There are potentially more factors that can cause a VNF instance
      to transition or even fail.

   Virtualisation technology allows network function virtualisation
   operators to build a reliable VNF by pooling the underlying
   resources, such as CPU, storage, networking, etc., to form a cluster
   of VNF instances.  A VNF pool refers to a cluster or group of VNF
   instances providing the same network function.  Each VNF pool has a
   Pool Manager (PM) to manage the VNF instances, including instance
   selection, monitoring, etc.  A redundancy mechanism for the PM
   itself is needed to achieve a reliable VNF.  More details on VNF
   pools can be found in [I-D.zong-vnfpool-problem-statement].

3. Reliable Server Pooling

3.1. Introduction

                              +---------------+
                              |   Pool User   |
                              +---------------+
                                      ^
                                      | ASAP
                                      V
   +---------------+   ENRP   +---------------+
   |   Registrar   |<-------->|   Registrar   |
   +---------------+          +---------------+
                                      ^
                                      | ASAP
                                      V
   +------------------------------------------------------------+
   | +--------------+ +--------------+         +--------------+ |
   | | Pool Element | | Pool Element | ... ... | Pool Element | |
   | +--------------+ +--------------+         +--------------+ |
   |                        Server Pool                         |
   +------------------------------------------------------------+

                                Figure 1

   An overview of the RSerPool framework -- which is defined as RFC in
   [RFC5351] -- is provided in Figure 1.  There are three types of
   components:

   o  Pool Element (PE) denotes a server in a pool.  PEs in the same
      pool provide the same service.

   o  Pool User (PU) denotes a client using the service of a pool.

   o  Pool Registrar (PR) is the management component for the pools.

   The set of all pools within an operation scope (for example: an
   organisation, a company or a department) is denoted as the
   handlespace.  Clearly, a single PR would be a single point of
   failure.  Therefore, PRs also have to be redundant.  Within the
   handlespace, each pool is identified by a unique pool handle (PH).

3.2. Registrar Operations

   The PRs of an operation scope synchronise their view of the
   handlespace by using the Endpoint haNdlespace Redundancy Protocol
   (ENRP, defined as RFCs in [RFC5353], [RFC5354]).  In contrast to,
   for instance, the Domain Name System (DNS), an operation scope is
   restricted to a single administrative domain.  That is, all of its
   components are under the control of the same authority (for
   example: a company).  This property leads to small management
   overhead, which also allows for RSerPool usage on devices having
   only limited memory and CPU resources (for example:
   telecommunications equipment).  Nevertheless, PEs may be distributed
   globally to continue their service even in case of localised
   disasters (like, for example, an earthquake).  Each PR in the
   operation scope is identified by a PR ID, which is a randomly chosen
   32-bit number.

3.3. Pool Element Operations
   Within their operation scope, the PEs may choose an arbitrary PR to
   register into a pool by using the Aggregate Server Access Protocol
   (ASAP, defined as RFCs in [RFC5352], [RFC5354]).  The registration
   is performed by using an ASAP_REGISTRATION message.  Within its
   pool, a PE is characterised by its PE ID, which is a randomly chosen
   32-bit number.  Upon registration at a PR, the chosen PR becomes the
   Home-PR (PR-H) of the newly registered PE.  A PR-H is responsible
   for monitoring the availability of its PEs by
   ASAP_ENDPOINT_KEEP_ALIVE messages (to be acknowledged by a PE via an
   ASAP_ENDPOINT_KEEP_ALIVE_ACK message within a configured timeout).
   The PR-H propagates the information about its PEs to the other PRs
   of the operation scope via ENRP_UPDATE messages.

   PEs re-register regularly, in an interval denoted as the
   registration lifetime, as well as for information updates.  Like the
   initial registration, a re-registration is performed by using
   another ASAP_REGISTRATION message.  PEs may intentionally deregister
   from the pool by using an ASAP_DEREGISTRATION message.  Also as for
   the registration, the PR-H makes the deregistration known to the
   other PRs within the operation scope by using an ENRP_UPDATE
   message.

3.4. Takeover Procedure

   As soon as a PE detects the failure of its PR-H (that is: its
   request is not answered within a given timeout), it simply tries
   another PR of the operation scope for its registration and
   deregistration requests.  However, as a double safeguard, the
   remaining PRs also negotiate a takeover of the PEs managed by a dead
   PR.  This ensures that each PE again gets a working PR-H as soon as
   possible.  The PRs of an operation scope monitor each other's
   availability by using ENRP_PRESENCE messages, which are transmitted
   regularly.
   If there is no ENRP_PRESENCE within a given timeout, the peer is
   assumed to be dead and a so-called takeover procedure (see also
   [AINA2009] for more details) is initiated for the PEs managed by the
   dead PR: from all PRs having started this takeover procedure, the PR
   with the highest PR ID takes over the ownership of these PEs.  The
   PEs are informed about being taken over by their new PR-H via an
   ASAP_ENDPOINT_KEEP_ALIVE message with the Home flag set.  The PEs
   are requested to adopt the sender of this Home-flagged message as
   their new PR-H.

3.5. Pool User Operations

3.5.1. Handle Resolution and Response

   In order to access the service of a pool given by its PH, a PU
   requests a PE selection from an arbitrary PR of the operation scope,
   again by using ASAP.  This selection procedure is denoted as handle
   resolution.  Upon reception of a so-called ASAP_HANDLE_RESOLUTION
   message, the PR selects the requested list of PE identities and
   returns them in an ASAP_HANDLE_RESOLUTION_RESPONSE message.

3.5.2. Pool Member Selection Policies

   The pool-specific selection rule is denoted as pool member selection
   policy or, shortly, as pool policy.  Two classes of load
   distribution policies are supported: non-adaptive and adaptive
   strategies (a detailed overview is provided by [Dre2006], [LCN2005],
   [IJIIDS2010], [IJHIT2008]).  While adaptive strategies base their
   selections on the current PE state (which requires up-to-date
   information), non-adaptive algorithms do not need such data.  A
   basic set of adaptive and non-adaptive pool policies is defined as
   RFC in [RFC5356].

   Defined in [RFC5356] are the non-adaptive policies Round Robin (RR),
   Random (RAND) and Priority (PRIO) as well as the adaptive policies
   Least Used (LU) and Least Used with Degradation (LUD).  While RR and
   RAND select PEs in turn and randomly, respectively, PRIO selects one
   of the PEs having the highest priority.
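   The semantics of these selection rules can be sketched in a few
   lines of Python (an illustrative sketch of the policy behaviour
   only, not of the on-wire protocol; all class, function and field
   names here are invented for illustration):

```python
import random
from dataclasses import dataclass


@dataclass
class PoolElement:
    pe_id: int          # randomly chosen 32-bit PE ID
    priority: int = 0   # used by PRIO
    load: float = 0.0   # used by LU/LUD (application-specific load)


def select_rand(pool):
    """RAND: select a PE uniformly at random."""
    return random.choice(pool)


class RoundRobin:
    """RR: select the PEs in turn."""
    def __init__(self):
        self.next_index = 0

    def select(self, pool):
        pe = pool[self.next_index % len(pool)]
        self.next_index += 1
        return pe


def select_prio(pool):
    """PRIO: select one of the PEs having the highest priority (the
    real policy applies round robin among equal priorities; a random
    choice is used here for brevity)."""
    best = max(pe.priority for pe in pool)
    return random.choice([pe for pe in pool if pe.priority == best])


def select_lu(pool):
    """LU: select the least-used PE according to its reported load."""
    least = min(pe.load for pe in pool)
    return random.choice([pe for pe in pool if pe.load == least])


def select_lud(pool, load_decrement=0.05):
    """LUD: like LU, but a load decrement constant is added to the
    stored load on each selection; a load update from the PE resets
    it to the actual value."""
    pe = min(pool, key=lambda p: p.load)
    pe.load += load_decrement
    return pe
```

   In a real deployment these decisions are made by the PR during
   handle resolution, based on the policy information registered by
   the PEs.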
   PRIO can, for example, be used to realise a master/backup PE setup:
   only if there are no master PEs left is a backup PE selected.
   Round-robin selection is applied among PEs having the same priority.
   LU selects the least-used PE, according to up-to-date
   application-specific load information.  Round-robin selection is
   applied among multiple least-loaded PEs.  LUD, which is evaluated in
   [ICDS2008-LUD], furthermore introduces a load decrement constant
   that is added to the actual load each time a PE is selected.  It is
   used to compensate for inaccurate load states due to delayed
   updates.  An update resets the load to the actual load value.

3.5.3. Unreachability Reports

   PEs may fail, for example due to hardware or network failures.
   Since there is a certain latency between the actual failure of a PE
   and the removal of its entry from the handlespace -- depending on
   the interval and timeout of the ASAP_ENDPOINT_KEEP_ALIVE monitoring
   -- the PUs may report unreachable PEs to a PR by using an
   ASAP_ENDPOINT_UNREACHABLE message.  A PR locally counts these
   reports for each PE, and when the threshold MAX-BAD-PE-REPORT
   (default 3, as defined in the RFC [RFC5352]) is reached, the PR may
   decide to remove the PE from the handlespace.  The counter of a PE
   is reset upon its re-registration.  More details on this threshold
   and guidelines for its configuration can be found in [IJAIT2009].

3.6. Automatic Configuration

   RSerPool components need to know the PRs of their operation scope.
   While it is of course possible to configure a list of PRs into each
   component, RSerPool also provides an auto-configuration feature: PRs
   may send so-called announces, that is, ASAP_ANNOUNCE and
   ENRP_PRESENCE messages which are regularly sent over UDP via IP
   multicast.  Unlike broadcasts, multicast messages can also be
   transported over routers (at least, this is easily possible within
   LANs).
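   The bookkeeping that such periodic announces enable -- a list of
   currently available PRs that expires when announces stop arriving --
   can be sketched as follows (an illustrative sketch; the timeout
   value and the class/method names are assumptions for this sketch,
   not taken from the RSerPool specifications):

```python
import time


class RegistrarList:
    """Track the currently available PRs, learned from periodic
    announces (ASAP_ANNOUNCE/ENRP_PRESENCE).  A PR not heard from
    within `timeout` seconds is considered gone and dropped."""

    def __init__(self, timeout=30.0):
        self.timeout = timeout
        self.last_heard = {}  # PR ID -> timestamp of last announce

    def announce_heard(self, pr_id, now=None):
        """Record an announce from the PR with the given PR ID."""
        self.last_heard[pr_id] = time.monotonic() if now is None else now

    def available_prs(self, now=None):
        """Return the IDs of all PRs heard from recently enough."""
        now = time.monotonic() if now is None else now
        # Forget PRs whose announces have stopped arriving.
        self.last_heard = {pr: t for pr, t in self.last_heard.items()
                           if now - t <= self.timeout}
        return sorted(self.last_heard)
```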
   The announces of the PRs can be heard by the other components, which
   can maintain a list of currently available PRs.  That is, RSerPool
   components are usually just turned on and everything works
   automatically.

3.7. State Synchronisation

   RSerPool has been explicitly designed to be application-independent.
   Therefore, RSerPool does not intend to define special state
   synchronisation mechanisms for RSerPool-based applications.  Such
   state synchronisation mechanisms are considered tasks of the
   applications themselves.  However, RSerPool defines two mechanisms
   to at least support the implementation of more sophisticated
   strategies: Cookies and Business Cards.  Details on these mechanisms
   can also be found in Subsection 3.9.5 of [Dre2006].

3.7.1. Cookies

   ASAP provides the mechanism of Client-Based State Sharing, as
   introduced in [LCN2002].  Whenever useful, the PE may package its
   state in the form of a state cookie and send it -- in an ASAP_COOKIE
   message -- to the PU.  The PU stores the latest state cookie
   received from the PE.  Upon PE failure, this stored cookie is sent
   in an ASAP_COOKIE_ECHO to the newly chosen PE.  This PE may then
   restore the state.  A shared secret known by all PEs of a pool may
   be used to protect the state from being manipulated or read by the
   PU.

   While Client-Based State Sharing is very simple, it may be
   inefficient when the state changes too frequently, when it is too
   large (the size limit of an ASAP_COOKIE/ASAP_COOKIE_ECHO is 64 KiB),
   or if it must be prevented that a PU sends a state cookie to
   multiple PEs in order to duplicate its sessions.

3.7.2. Business Cards

   Depending on the application, there may be constraints restricting
   the set of PEs usable for failover.  The ASAP_BUSINESS_CARD message
   is used to inform peer components about such constraints.
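   The Client-Based State Sharing of Subsection 3.7.1 can be sketched
   as follows (illustrative only: the JSON serialisation and the HMAC
   over the payload are assumptions made for this sketch, not the ASAP
   wire format; protecting the state from being *read* by the PU would
   additionally require encryption, which is omitted here for brevity):

```python
import hashlib
import hmac
import json

# Shared secret known by all PEs of the pool (illustrative value).
POOL_SECRET = b"shared secret known by all PEs of the pool"


def make_cookie(state: dict) -> bytes:
    """PE side: pack the session state into a state cookie,
    authenticated with the pool's shared secret so that the PU
    cannot manipulate it unnoticed."""
    payload = json.dumps(state, sort_keys=True).encode()
    tag = hmac.new(POOL_SECRET, payload, hashlib.sha256).digest()
    return tag + payload


def restore_from_cookie(cookie: bytes) -> dict:
    """Newly chosen PE after failover: verify and unpack the cookie
    echoed by the PU (as in ASAP_COOKIE_ECHO) to restore the state."""
    tag, payload = cookie[:32], cookie[32:]
    expected = hmac.new(POOL_SECRET, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("state cookie has been tampered with")
    return json.loads(payload)
```

   The PU only stores and echoes the opaque cookie bytes; it never
   needs to understand their content.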
   The first case for using a Business Card is when only a restricted
   set of PEs in the pool may be used for failover.  For example, in a
   large pool, each PE can share its complete set of session states
   with only a few other PEs.  This keeps the system scalable: a PE in
   a pool of n servers does not have to synchronise all session states
   with the other n-1 PEs.  In this case, a PE has to tell its PU the
   set of PE identities being candidates for a failover, using an
   ASAP_BUSINESS_CARD message.  A PE may update the list of possible
   failover candidates at any time by sending another Business Card.
   The PU has to store the latest list of failover candidates.  Of
   course, if a failover becomes necessary, the PU has to select from
   this list using the appropriate pool policy -- instead of performing
   the regular PE selection by handle resolution at a PR.  Therefore,
   some literature also denotes the Business Card by the more
   expressive term "last will".

   In symmetric scenarios, where a PU is also a PE of another pool, the
   PU has to tell this fact to its PE.  This is realised by sending an
   ASAP_BUSINESS_CARD message to the PE, providing the PH of its pool.
   Optionally, specific PE identities for failover may also be
   provided.  The format remains the same as explained in the previous
   paragraph.  If the PE detects a failure of its PU, the PE may -- now
   in the role of a PU -- use the provided PH for a handle resolution
   to find a new PE, or use the provided PE identities to select one.
   After that, it can perform a failover to that PE.

3.8. Protocol Stack

   The protocol stack of a PR provides ENRP and ASAP services to PRs
   and PEs/PUs, respectively.  Between PU and PE, ASAP provides a
   Session Layer protocol in the sense of the OSI model.  From the
   perspective of the Application Layer, the PU side establishes a
   session with a pool.
   ASAP takes care of selecting a PE of the pool, of initiating and
   maintaining the underlying transport connection, and of triggering a
   failover procedure when the PE becomes unavailable.

   The Transport Layer protocol is by default SCTP (as defined in
   [RFC4960]) -- except for the UDP-based automatic configuration
   announces (see Section 3.6) -- over possibly multi-homed IPv4 and/or
   IPv6.  SCTP has been chosen due to its support of multi-homing and
   its reliability features (see also [Dre2012]).

3.9. Extensions

   A couple of extensions to RSerPool exist: the Handle Resolution
   Option defined in [I-D.dreibholz-rserpool-asap-hropt] improves the
   PE selection by letting the PU tell the PR its required number of
   PEs to be selected.  The ENRP Takeover Suggestion introduced in
   [I-D.dreibholz-rserpool-enrp-takeover] ensures load balancing among
   PRs.  [I-D.dreibholz-rserpool-delay] defines a delay-sensitive pool
   policy.  [RFC5525] defines an SNMP MIB for RSerPool.

3.10. Reference Implementation and Deployment

   RSPLIB is the Open Source reference implementation of RSerPool.  It
   is currently -- as of February 2016 -- available for Linux, FreeBSD,
   MacOS and Solaris, and it is actively maintained.  In particular, it
   is also included in Ubuntu Linux as well as in the FreeBSD Ports
   Collection.  RSPLIB can be downloaded from [RSerPoolPage].  Further
   details on the implementation are available in [Dre2006] and
   [Globecom2010-Demo].

   RSerPool with RSPLIB is deployed in a couple of Open Source
   projects, including the SimProcTC Simulation Processing Tool-Chain
   for distributing simulation runs in a compute pool (see
   [OMNeTWorkshop2008] as well as the simulation run distribution
   project explained in [Dre2012] for a practical example) and for
   service infrastructure management in the NorNet Core research
   testbed (see [ComNets2013-Core], [PAMS2013-NorNet]).

4. Usage of Reliable Server Pooling
   **** TO BE DISCUSSED! ****

   The following features of RSerPool can be used for VNFPOOL:

   o  Pool management.

   o  PE selection with pool policies.

   o  Session management with the help of ASAP_BUSINESS_CARD.

   The following features have to be added to RSerPool itself:

   o  Support of TCP, including MPTCP, as additional/alternative
      transport protocols.

   o  Possibly add some special pool policies?

   o  See also [I-D.dreibholz-rserpool-nextgen-ideas] for ideas on a
      next generation of RSerPool.

   The following features have to be provided outside of RSerPool:

   o  State synchronisation for VNFPOOL.

   o  Pool Manager functionality as an RSerPool-based service.

5. Security Considerations

   Security considerations for RSerPool can be found in [RFC5355].
   Furthermore, [IJIIDS2010] examines the robustness of RSerPool
   systems against attacks.

6. IANA Considerations

   This document introduces no additional considerations for IANA.

7. Testbed Platform

   A large-scale and realistic Internet testbed platform with support
   for Reliable Server Pooling and the underlying SCTP protocol is
   NorNet.  A description of and introduction to NorNet is provided in
   [PAMS2013-NorNet], [NICTA2016-Presentation] and
   [UCLM2017-NorNet-Tutorial].  Further information can be found on the
   project website [NorNet-Website] at https://www.nntb.no.

8. Acknowledgments

   The authors would like to thank Xing Zhou for the friendly support.

9. References

9.1. Normative References

   [I-D.dreibholz-rserpool-asap-hropt]
              Dreibholz, T., "Handle Resolution Option for ASAP",
              draft-dreibholz-rserpool-asap-hropt-23 (work in
              progress), September 2018.

   [I-D.dreibholz-rserpool-delay]
              Dreibholz, T. and X. Zhou,
              "Definition of a Delay Measurement Infrastructure and
              Delay-Sensitive Least-Used Policy for Reliable Server
              Pooling", draft-dreibholz-rserpool-delay-22 (work in
              progress), September 2018.

   [I-D.dreibholz-rserpool-enrp-takeover]
              Dreibholz, T. and X. Zhou, "Takeover Suggestion Flag for
              the ENRP Handle Update Message", draft-dreibholz-
              rserpool-enrp-takeover-20 (work in progress), September
              2018.

   [I-D.dreibholz-rserpool-nextgen-ideas]
              Dreibholz, T., "Ideas for a Next Generation of the
              Reliable Server Pooling Framework", draft-dreibholz-
              rserpool-nextgen-ideas-10 (work in progress), September
              2018.

   [I-D.zong-vnfpool-problem-statement]
              Zong, N., Dunbar, L., Shore, M., Lopez, D., and G.
              Karagiannis, "Virtualized Network Function (VNF) Pool
              Problem Statement", draft-zong-vnfpool-problem-
              statement-06 (work in progress), July 2014.

   [RFC4960]  Stewart, R., Ed., "Stream Control Transmission Protocol",
              RFC 4960, DOI 10.17487/RFC4960, September 2007.

   [RFC5351]  Lei, P., Ong, L., Tuexen, M., and T. Dreibholz, "An
              Overview of Reliable Server Pooling Protocols", RFC 5351,
              DOI 10.17487/RFC5351, September 2008.

   [RFC5352]  Stewart, R., Xie, Q., Stillman, M., and M. Tuexen,
              "Aggregate Server Access Protocol (ASAP)", RFC 5352,
              DOI 10.17487/RFC5352, September 2008.

   [RFC5353]  Xie, Q., Stewart, R., Stillman, M., Tuexen, M., and A.
              Silverton, "Endpoint Handlespace Redundancy Protocol
              (ENRP)", RFC 5353, DOI 10.17487/RFC5353, September 2008.

   [RFC5354]  Stewart, R., Xie, Q., Stillman, M., and M. Tuexen,
              "Aggregate Server Access Protocol (ASAP) and Endpoint
              Handlespace Redundancy Protocol (ENRP) Parameters",
              RFC 5354, DOI 10.17487/RFC5354, September 2008.

   [RFC5355]  Stillman, M., Ed., Gopal, R., Guttman, E., Sengodan, S.,
              and M. Holdrege,
              "Threats Introduced by Reliable Server Pooling (RSerPool)
              and Requirements for Security in Response to Threats",
              RFC 5355, DOI 10.17487/RFC5355, September 2008.

   [RFC5356]  Dreibholz, T. and M. Tuexen, "Reliable Server Pooling
              Policies", RFC 5356, DOI 10.17487/RFC5356, September
              2008.

   [RFC5525]  Dreibholz, T. and J. Mulik, "Reliable Server Pooling MIB
              Module Definition", RFC 5525, DOI 10.17487/RFC5525, April
              2009.

9.2. Informative References

   [AINA2009] Zhou, X., Dreibholz, T., Fa, F., Du, W., and E. Rathgeb,
              "Evaluation and Optimization of the Registrar Redundancy
              Handling in Reliable Server Pooling Systems", Proceedings
              of the IEEE 23rd International Conference on Advanced
              Information Networking and Applications (AINA), Pages
              256-262, ISBN 978-0-7695-3638-5,
              DOI 10.1109/AINA.2009.25, May 2009.

   [ComNets2013-Core]
              Gran, E., Dreibholz, T., and A. Kvalbein, "NorNet Core -
              A Multi-Homed Research Testbed", Computer Networks,
              Special Issue on Future Internet Testbeds, Volume 61,
              Pages 75-87, ISSN 1389-1286,
              DOI 10.1016/j.bjp.2013.12.035, March 2014.

   [Dre2006]  Dreibholz, T., "Reliable Server Pooling - Evaluation,
              Optimization and Extension of a Novel IETF Architecture",
              March 2007.

   [Dre2012]  Dreibholz, T., "Evaluation and Optimisation of Multi-Path
              Transport using the Stream Control Transmission
              Protocol", Habilitation Treatise, March 2012.

   [Globecom2010-Demo]
              Dreibholz, T. and M. Becke, "The RSPLIB Project - From
              Research to Application", Demo Presentation at the IEEE
              Global Communications Conference (GLOBECOM), December
              2010.

   [ICDS2008-LUD]
              Zhou, X., Dreibholz, T., and E. Rathgeb,
              "A New Server Selection Strategy for Reliable Server
              Pooling in Widely Distributed Environments", Proceedings
              of the 2nd IEEE International Conference on Digital
              Society (ICDS), Pages 171-177, ISBN 978-0-7695-3087-1,
              DOI 10.1109/ICDS.2008.12, February 2008.

   [IJAIT2009]
              Dreibholz, T. and E. Rathgeb, "Overview and Evaluation of
              the Server Redundancy and Session Failover Mechanisms in
              the Reliable Server Pooling Framework", International
              Journal on Advances in Internet Technology (IJAIT),
              Number 1, Volume 2, Pages 1-14, ISSN 1942-2652, June
              2009.

   [IJHIT2008]
              Dreibholz, T. and E. Rathgeb, "An Evaluation of the Pool
              Maintenance Overhead in Reliable Server Pooling Systems",
              SERSC International Journal on Hybrid Information
              Technology (IJHIT), Number 2, Volume 1, Pages 17-32,
              ISSN 1738-9968, April 2008.

   [IJIIDS2010]
              Dreibholz, T., Zhou, X., Becke, M., Pulinthanath, J.,
              Rathgeb, E., and W. Du, "On the Security of Reliable
              Server Pooling Systems", International Journal on
              Intelligent Information and Database Systems (IJIIDS),
              Number 6, Volume 4, Pages 552-578, ISSN 1751-5858,
              DOI 10.1504/IJIIDS.2010.036894, December 2010.

   [LCN2002]  Dreibholz, T., "An Efficient Approach for State Sharing
              in Server Pools", Proceedings of the 27th IEEE Local
              Computer Networks Conference (LCN), Pages 348-349,
              ISBN 0-7695-1591-6, DOI 10.1109/LCN.2002.1181806,
              November 2002.

   [LCN2005]  Dreibholz, T. and E. Rathgeb, "On the Performance of
              Reliable Server Pooling Systems", Proceedings of the IEEE
              Conference on Local Computer Networks (LCN) 30th
              Anniversary, Pages 200-208, ISBN 0-7695-2421-4,
              DOI 10.1109/LCN.2005.98, November 2005.
   [NICTA2016-Presentation]
              Dreibholz, T., "NorNet at NICTA - An Introduction to the
              NorNet Testbed", Invited Talk at National Information
              Communications Technology Australia (NICTA), January
              2016.

   [NorNet-Website]
              Dreibholz, T., "NorNet -- A Real-World, Large-Scale
              Multi-Homing Testbed", Online: https://www.nntb.no/,
              2017.

   [OMNeTWorkshop2008]
              Dreibholz, T. and E. Rathgeb, "A Powerful Tool-Chain for
              Setup, Distributed Processing, Analysis and Debugging of
              OMNeT++ Simulations", Proceedings of the 1st ACM/ICST
              International Workshop on OMNeT++,
              ISBN 978-963-9799-20-2,
              DOI 10.4108/ICST.SIMUTOOLS2008.2990, March 2008.

   [PAMS2013-NorNet]
              Dreibholz, T. and E. Gran, "Design and Implementation of
              the NorNet Core Research Testbed for Multi-Homed
              Systems", Proceedings of the 3rd International Workshop
              on Protocols and Applications with Multi-Homing Support
              (PAMS), Pages 1094-1100, ISBN 978-0-7695-4952-1,
              DOI 10.1109/WAINA.2013.71, March 2013.

   [RSerPoolPage]
              Dreibholz, T., "Thomas Dreibholz's RSerPool Page",
              Online: https://www.uni-due.de/~be0001/rserpool/, 2017.

   [UCLM2017-NorNet-Tutorial]
              Dreibholz, T., "An Experiment Tutorial for the NorNet
              Core Testbed at the Universidad de Castilla-La Mancha",
              Tutorial at the Universidad de Castilla-La Mancha,
              Instituto de Investigacion Informatica de Albacete,
              February 2017.
Authors' Addresses

   Thomas Dreibholz
   Simula Metropolitan Centre for Digital Engineering
   Martin Linges vei 17
   1364 Fornebu, Akershus
   Norway

   Phone: +47-6782-8200
   Fax:   +47-6782-8201
   Email: dreibh@simula.no
   URI:   https://www.uni-due.de/~be0001/

   Michael Tuexen
   Muenster University of Applied Sciences
   Stegerwaldstrasse 39
   48565 Steinfurt, Nordrhein-Westfalen
   Germany

   Email: tuexen@fh-muenster.de
   URI:   https://www.fh-muenster.de/fb2/personen/professoren/tuexen/

   Melinda Shore
   No Mountain Software
   PO Box 16271
   Two Rivers, Alaska 99716
   U.S.A.

   Phone: +1-907-322-9522
   Email: melinda.shore@nomountain.net
   URI:   https://www.linkedin.com/pub/melinda-shore/9/667/236

   Ning Zong
   Huawei Technologies
   101 Software Avenue
   Nanjing, Jiangsu 210012
   China

   Email: zongning@huawei.com
   URI:   https://cn.linkedin.com/pub/ning-zong/15/737/490