Network Working Group                                             P. Lei
Internet-Draft                                       Cisco Systems, Inc.
Intended status: Informational                                    L. Ong
Expires: November 7, 2008                              Ciena Corporation
                                                               M. Tuexen
                                      Muenster Univ. of Applied Sciences
                                                            T. Dreibholz
                                            University of Duisburg-Essen
                                                            May 06, 2008

            An Overview of Reliable Server Pooling Protocols
                  draft-ietf-rserpool-overview-06.txt

Status of this Memo

   By submitting this Internet-Draft, each author represents that any
   applicable patent or other IPR claims of which he or she is aware
   have been or will be disclosed, and any of which he or she becomes
   aware will be disclosed, in accordance with Section 6 of BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF), its areas, and its working groups.
   Note that other groups may also distribute working documents as
   Internet-Drafts.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."

   The list of current Internet-Drafts can be accessed at
   http://www.ietf.org/ietf/1id-abstracts.txt.

   The list of Internet-Draft Shadow Directories can be accessed at
   http://www.ietf.org/shadow.html.

   This Internet-Draft will expire on November 7, 2008.

Abstract

   The Reliable Server Pooling effort (abbreviated "RSerPool") provides
   an application-independent set of services and protocols for
   building fault-tolerant and highly available client/server
   applications.  This document provides an overview of the protocols
   and mechanisms in the Reliable Server Pooling suite.

Table of Contents

   1.  Introduction
   2.  Aggregate Server Access Protocol (ASAP) Overview
     2.1.  Pool Initialization
     2.2.  Pool Entity Registration
     2.3.  Pool Entity Selection
     2.4.  Endpoint Keep-Alive
     2.5.  Failover Services
       2.5.1.  Cookie Mechanism
       2.5.2.  Business Card Mechanism
   3.  Endpoint Handlespace Redundancy Protocol (ENRP) Overview
     3.1.  Initialization
     3.2.  Server Discovery and Home Server Selection
     3.3.  Failure Detection, Handlespace Audit and Synchronization
     3.4.  Server Takeover
   4.  Example Scenarios
     4.1.  Example Scenario using RSerPool Resolution Service
     4.2.  Example Scenario using RSerPool Session Services
   5.  Reference Implementation
   6.  Security Considerations
   7.  IANA Considerations
   8.  Acknowledgements
   9.  References
     9.1.  Normative References
     9.2.  Informative References
   Authors' Addresses
   Intellectual Property and Copyright Statements

1.  Introduction

   The Reliable Server Pooling (RSerPool) protocol suite is designed to
   provide client applications ("pool users") with the ability to
   select a server (a "pool element") from among a group of servers
   providing equivalent service (a "pool").  The protocols are
   currently targeted for the Experimental track.

   The RSerPool architecture supports high availability and load
   balancing by enabling a pool user to identify the most appropriate
   server from the server pool at a given time.
   The architecture is defined to support a set of basic goals:

   o  application-independent protocol mechanisms

   o  separation of server naming from IP addressing

   o  use of the end-to-end principle to avoid dependencies on
      intermediate equipment

   o  separation of session availability/failover functionality from
      the application itself

   o  support for different server selection policies

   o  support for a set of application-independent failover
      capabilities

   o  a peer-to-peer structure

   The basic components of the RSerPool architecture are shown in
   Figure 1 below:

                                   .......................
       ______          ______      .      +-------+      .
      / ENRP \        / ENRP \     .      |       |      .
      |Server| <----> |Server|<----.----->| PE 1  |      .
      \______/  ENRP  \______/     .      |       |      .
         ^               ASAP(1)   .      +-------+      .
         |                         .                     .
         | ASAP(2)                 .     Server Pool     .
         V                         .                     .
      +-------+                    .      +-------+      .
      |       |                    .      |       |      .
      |  PU   |<------------------>.      | PE 2  |      .
      |       |     PU to PE       .      |       |      .
      +-------+                    .      +-------+      .
                                   .                     .
                                   .      +-------+      .
                                   .      |       |      .
                                   .      | PE 3  |      .
                                   .      |       |      .
                                   .      +-------+      .
                                   .......................

                                Figure 1

   A server pool is defined as a set of one or more servers providing
   the same application functionality.  The servers are called Pool
   Elements (PEs).  Multiple PEs in a server pool can be used to
   provide fault tolerance or load sharing, for example.  The PEs
   register into and deregister out of the pool at an entity called the
   Endpoint haNdlespace Redundancy Protocol (ENRP) server, using the
   Aggregate Server Access Protocol (ASAP) [I-D.ietf-rserpool-asap]
   (this association is labeled ASAP(1) in the figure).

   Each server pool is identified by a unique byte string called the
   pool handle (PH).  The pool handle allows a mapping from the pool to
   a specific PE located by its IP address (both IPv4 and IPv6 PE
   addresses are supported) and port number.  The pool handle is what
   is specified by the Pool User (PU) when it attempts to access a
   server in the pool.  To resolve the pool handle to the address
   necessary to access a PE, the PU consults an ENRP server using ASAP
   (this association is labeled ASAP(2) in the figure).  The space of
   pool handles is assumed to be a flat space with limited operational
   scope (see RFC 3237 [RFC3237]).  Administration of pool handles is
   not addressed by the RSerPool protocol drafts at this time.  The
   protocols used between PU and PE are application-specific.  It is
   assumed that the PU and PE are configured to support a common set of
   protocols for application-layer communication, independent of the
   RSerPool mechanisms.

   RSerPool provides a number of tools to aid client migration between
   servers on server failure: it allows the client to identify
   alternative servers, either on initial discovery or in real time; it
   also allows the original server to provide a state cookie to the
   client that can be forwarded to an alternative server to convey
   application-specific state information.  This information is
   exchanged between PE and PU directly, over the association labeled
   "PU to PE" in the figure.

   It is envisioned that ENRP servers provide a fully distributed and
   fault-tolerant registry service.  They use ENRP
   [I-D.ietf-rserpool-enrp] to keep the data concerning the pool handle
   mapping space synchronized.  From the perspective of PUs and PEs,
   all ENRP servers are functionally identical: due to the
   synchronization provided by ENRP, a PE can contact an arbitrary ENRP
   server for registration or deregistration, and a PU can contact an
   arbitrary one for PH resolution.
   An illustration containing three ENRP servers is provided in
   Figure 2 below:

                  ______          ______
        ...      / ENRP \        / ENRP \        ...
      PEs/PUs <---->|Server| <----> |Server|<----> PEs/PUs
        ...   ASAP  \______/  ENRP  \______/  ASAP  ...
                       ^                ^
                       |                |
                       |    / ENRP \    |
                       +--->|Server|<---+
                      ENRP  \______/  ENRP
                                ^
                                | ASAP
                                v
                               ...
                             PEs/PUs
                               ...

                                Figure 2

   The requirements for the Reliable Server Pooling framework are
   defined in RFC 3237 [RFC3237].  It is worth noting that the
   requirements on RSerPool in the area of load balancing partially
   overlap with those of GRID computing/high-performance computing.
   However, the scopes of the two areas are quite different: GRID and
   high-performance computing also cover topics like the management of
   different administrative domains, data locking and synchronization,
   inter-session communication and resource accounting for powerful
   computation services, whereas the intention of RSerPool is simply a
   lightweight realization of load distribution and session management.

   In particular, these functionalities are intended to be usable even
   on systems with small memory and CPU resources.  Any further
   functionality is out of the scope of RSerPool and can -- if
   necessary -- be provided by the application itself.

   This document provides an overview of the RSerPool protocol suite,
   specifically the Aggregate Server Access Protocol (ASAP)
   [I-D.ietf-rserpool-asap] and the Endpoint Handlespace Redundancy
   Protocol (ENRP) [I-D.ietf-rserpool-enrp].  In addition to the
   protocol specifications, there is a common parameter format
   specification [I-D.ietf-rserpool-common-param] for both protocols, a
   definition of server selection rules ("pool policies")
   [I-D.ietf-rserpool-policies], as well as a security threat analysis
   [I-D.ietf-rserpool-threats].

2.  Aggregate Server Access Protocol (ASAP) Overview

   ASAP defines a straightforward set of mechanisms necessary to
   support the creation and maintenance of pools of redundant servers.
   These mechanisms include:

   o  registration of a new server into a server pool

   o  deregistration of an existing server from a pool

   o  resolution of a pool handle to a server or list of servers

   o  liveness detection for servers in a pool

   o  failover mechanisms for handling a server failure

2.1.  Pool Initialization

   Pools come into existence when a PE registers the first instance of
   the pool handle with an ENRP server.  They disappear when the last
   PE deregisters.  In other words, starting the first PE of a pool
   causes the creation of that pool as soon as the registration reaches
   the ENRP server.

   It is assumed that information needed for RSerPool, such as the
   address of an ENRP server to contact, is configured into the PE
   beforehand.  Methods of automating this configuration process are
   not addressed at this time.

2.2.  Pool Entity Registration

   A new server joins an existing pool by sending a Registration
   message via ASAP to an ENRP server, indicating the pool handle of
   the pool that it wishes to join, a PE identifier for itself (chosen
   randomly), information about its lifetime in the pool, and the
   transport protocols and selection policy it supports.  The ENRP
   server that the PE first contacts is called its Home ENRP server; it
   maintains a list of the registrations made by the PE and performs
   periodic audits to confirm that the PE is still responsive.

   Similar procedures are used by a server to deregister itself from
   the pool.  Alternatively, the server may simply let the lifetime
   that it previously registered with expire, after which it is
   gracefully removed from the pool.
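   As a concrete, non-normative illustration, the following Python
   sketch shows the kind of information a PE hands to ASAP when
   registering.  The dictionary layout and all field names here are
   invented for illustration only; the actual on-the-wire encoding
   uses the TLV parameter formats of [I-D.ietf-rserpool-common-param]:

      import secrets

      def build_registration(pool_handle, addresses, port, policy):
          # Collect the data carried in an ASAP Registration message:
          # pool handle, randomly chosen PE identifier, registration
          # lifetime, user transport endpoints and selection policy.
          return {
              "pool_handle": pool_handle,             # flat byte string
              "pe_identifier": secrets.randbits(32),  # chosen randomly
              "registration_life": 300,               # seconds in pool
              "user_transport": [("sctp", a, port) for a in addresses],
              "pool_policy": policy,                  # e.g. round-robin
          }

      reg = build_registration(b"echo-pool",
                               ["10.1.1.1", "10.2.2.2"], 7,
                               "round-robin")

   Deregistration would carry just the pool handle and the PE
   identifier; letting the registration lifetime expire has the same
   effect.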
2.3.  Pool Entity Selection

   When an endpoint wishes to be connected to a server in the pool, it
   generates an ASAP Handle Resolution message and sends it to its Home
   ENRP server.  The ENRP server resolves the handle based on its
   knowledge of the pool's servers and returns a Handle Resolution
   Response via ASAP.  The response contains a list of the IP addresses
   of one or more servers in the pool that can be contacted.  The
   process by which this list of servers is created may involve a
   number of policies for server selection.  The RSerPool protocol
   suite defines a few basic policies and allows the use of external
   server selection input for more complex policies.

2.4.  Endpoint Keep-Alive

   ENRP servers monitor the status of pool elements using the ASAP
   Endpoint Keep-Alive message.  A PE responds to the ASAP Endpoint
   Keep-Alive message with an Endpoint Keep-Alive Ack response.

   In addition, a PU can notify its Home ENRP server that the PE it
   used has become unresponsive by sending an ASAP Endpoint Unreachable
   message to the ENRP server.

2.5.  Failover Services

   While maintaining application independence, the RSerPool protocol
   suite provides some simple hooks for supporting failover of an
   individual session with a pool element.  Generally, mechanisms for
   failover that rely on application state or transaction status cannot
   be defined without more specific knowledge of the application being
   supported.  However, some simple mechanisms supported by RSerPool
   allow a level of failover that any application can use.

2.5.1.  Cookie Mechanism

   Cookies may optionally be generated by the ASAP layer and
   periodically sent from the PE to the PU.  The PU stores only the
   last received cookie.  In case of failover, the PU sends this last
   received cookie to the new PE.  This method provides a simple way of
   sharing state between the PEs.  Note that the old PE should sign the
   cookie and the receiving PE should verify that signature.  To the
   PU, the cookie has no structure; it is only stored and transmitted
   to the new PE.
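   The cookie format is opaque to RSerPool itself.  As a non-normative
   sketch, the sign-and-verify step could be realized with a keyed HMAC
   over the serialized state, under the assumption that all PEs of the
   pool share a key (the key handling and all names below are
   hypothetical):

      import hashlib, hmac, json

      POOL_KEY = b"key shared by all PEs of the pool"   # assumption

      def make_cookie(state):
          # Old PE: serialize application state, append an HMAC tag.
          blob = json.dumps(state).encode()
          return blob + hmac.new(POOL_KEY, blob, hashlib.sha256).digest()

      def accept_cookie(cookie):
          # New PE: verify the signature before trusting the state.
          blob, sig = cookie[:-32], cookie[-32:]
          good = hmac.new(POOL_KEY, blob, hashlib.sha256).digest()
          if not hmac.compare_digest(sig, good):
              raise ValueError("cookie signature check failed")
          return json.loads(blob)

      # The PU never looks inside: it stores the newest cookie and
      # forwards it unchanged to the new PE on failover.
      state = accept_cookie(make_cookie({"next_sequence_number": 42}))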
2.5.2.  Business Card Mechanism

   A PE can send a business card to its peer (PE or PU), containing its
   pool handle and guidance concerning which other PEs the peer should
   use for failover.  This gives a PE a means of telling a PU which PE
   it identifies as the "next best" one to use in case of failure,
   which may be based on pool considerations, such as load balancing,
   or user considerations, such as which PEs have the most up-to-date
   state information.

3.  Endpoint Handlespace Redundancy Protocol (ENRP) Overview

   A set of server pools, denoted as a handlespace, is managed by ENRP
   servers.  Pools are not valid in the whole Internet, but only in
   smaller domains called operational scopes.  The ENRP servers use the
   ENRP protocol to maintain a distributed, fault-tolerant, real-time
   registry service.  ENRP servers communicate with each other to
   exchange information such as pool membership changes, handlespace
   data synchronization, etc.

3.1.  Initialization

   Each ENRP server initially generates a 32-bit server ID, which it
   uses in subsequent messaging and which remains unchanged over the
   lifetime of the server.  It then attempts to learn all of the other
   ENRP servers within the scope of the server pool, either by using a
   pre-defined Mentor server or by sending out Presence messages on a
   well-known multicast channel in order to determine other ENRP
   servers from the responses and select one of them as Mentor.  A
   Mentor can be any peer ENRP server.  The most current handlespace
   data is requested from the Mentor using Handle Table Request
   messages.  The answer, received in the form of Handle Table Response
   messages, is unpacked into the local database.  After that, the ENRP
   server is ready to provide ENRP services.

3.2.  Server Discovery and Home Server Selection

   PEs can now register their presence with the newly functioning ENRP
   server by using ASAP messages.  They discover the new ENRP server
   after the server sends out an ASAP Server Announce message on the
   well-known ASAP multicast channel.  PEs only have to register with
   one ENRP server, as other ENRP servers supporting the pool will
   synchronize their knowledge about pool elements using the ENRP
   protocol.

   The PE may have a configured list of ENRP servers to talk to, in the
   form of a list of IP addresses, in which case it will start to set
   up associations with some number of them and assign the first one
   that responds to it as its Home ENRP server.

   Alternatively, it can listen on the multicast channel for a set
   period and, when it hears an ENRP server, start an association.  The
   first server with which it successfully sets up an association can
   then become its Home ENRP server.

3.3.  Failure Detection, Handlespace Audit and Synchronization

   ENRP servers send ENRP Presence messages to all of their peers in
   order to show their liveness.  These Presence messages also include
   a checksum computed over the identities of all PEs for which the
   ENRP server is in the role of Home ENRP server.  Each ENRP server
   maintains an up-to-date list of its peers and can also compute the
   checksum expected from a certain peer, according to its local
   handlespace database.  By comparing the expected sum with the sum
   reported by a peer (a procedure denoted as handlespace audit), an
   inconsistency can be detected.  In such a case, the handlespace --
   restricted to the PEs owned by that peer -- can be requested for
   synchronization, analogously to the procedure described in
   Section 3.1.
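   The checksum computation itself is specified in
   [I-D.ietf-rserpool-enrp]; the following Python sketch only
   illustrates the audit logic, with zlib.crc32 standing in for the
   actual checksum algorithm and all data structures invented for the
   example:

      import zlib

      def ownership_checksum(handlespace, peer_id):
          # Checksum over the identities of all PEs whose Home ENRP
          # server is the given peer, computed from the local database.
          acc = 0
          for pool_handle, pes in sorted(handlespace.items()):
              for pe_id, home_server in sorted(pes):
                  if home_server == peer_id:
                      acc = zlib.crc32(
                          pool_handle + pe_id.to_bytes(4, "big"), acc)
          return acc

      def audit(local_view, peer_id, sum_reported_in_presence):
          # Handlespace audit: a mismatch means the local view of the
          # peer's PEs is stale and triggers re-synchronization via
          # Handle Table Request, analogous to Section 3.1.
          return (ownership_checksum(local_view, peer_id)
                  == sum_reported_in_presence)

      view = {b"echo-pool": [(0x1234, 1), (0x5678, 2)]}
      assert audit(view, 1, ownership_checksum(view, 1))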
3.4.  Server Takeover

   If the unresponsiveness of an ENRP server is detected, the remaining
   ENRP servers negotiate which of them takes over the Home ENRP server
   role for the PEs of the failed peer.  After reaching a consensus on
   the takeover, the ENRP server taking over these PEs sends a
   notification to its peers (via ENRP) as well as to the PEs being
   taken over (via ASAP).

4.  Example Scenarios

4.1.  Example Scenario using RSerPool Resolution Service

   RSerPool can be used in a 'standalone' manner, where the application
   uses RSerPool to determine the address of a primary server in the
   pool and then interacts directly with that server without further
   use of RSerPool services.  If the initial server fails, the
   application uses RSerPool again to find the next server in the pool.

   For pool user ("client") applications, if an ASAP implementation is
   available on the client system, there are typically only three
   modifications required to the application source code:

   1.  Instead of specifying the hostnames of primary, secondary,
       tertiary servers, etc., the application user specifies a pool
       handle.

   2.  Instead of using a DNS-based service (e.g., the Unix library
       function getaddrinfo()) to translate from a hostname to an IP
       address, the application invokes an RSerPool service primitive
       provisionally named GETPRIMARYSERVER that takes a pool handle as
       input and returns the IP address of the primary server.  The
       application then uses that IP address just as it would have used
       the IP address returned by the DNS in the previous scenario.

   3.  Without the use of additional RSerPool services, failure
       detection and failover procedures must be designed into each
       application.  However, when a failure is detected on the primary
       server, instead of invoking DNS translation again on the
       hostname of a secondary server, the application invokes a
       service primitive provisionally named GETNEXTSERVER, which
       performs two functions in a single operation:

       1.  First, it indicates to the RSerPool layer the failure of the
           server returned by a previous GETPRIMARYSERVER or
           GETNEXTSERVER call.

       2.  Second, it provides the IP address of the next server that
           should be contacted, according to the best information
           available to the RSerPool layer at the present time (e.g.,
           the set of available pool elements, the pool element policy
           in effect for the pool, etc.).

   Note: at the time of writing, a full API for use with the RSerPool
   protocols has not yet been defined.  A control-flow sketch using
   placeholder functions appears at the end of this section.

   For pool element ("server") applications where an ASAP
   implementation is available, two changes are required to the
   application source code:

   1.  The server should invoke the REGISTER service primitive upon
       startup to add itself to the server pool using an appropriate
       pool handle.  This registration also includes the address(es),
       protocol or mapping ID, port (if required by the mapping), and
       pool policy(ies).

   2.  The server should invoke the DEREGISTER service primitive to
       remove itself from the server pool when shutting down.

   When using these RSerPool services, RSerPool provides benefits that
   are limited (as compared to utilizing all of its services) but
   nevertheless quite useful as compared to not using RSerPool at all.
   First, the client user need only supply a single string, i.e., the
   pool handle, rather than a list of servers.  Second, the decision as
   to which server is to be used can be made dynamically by the server
   selection mechanism (i.e., a "pool policy" performed by ASAP; see
   [I-D.ietf-rserpool-asap]).  Finally, when failures occur, they are
   reported to the pool via signalling present in ASAP
   [I-D.ietf-rserpool-asap] and ENRP [I-D.ietf-rserpool-enrp], so other
   clients will eventually know (once the failure is confirmed by other
   elements of the RSerPool architecture) that this server has failed.
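   The following Python sketch shows the client-side control flow
   described above.  Since the API is not yet defined,
   getprimaryserver(), getnextserver() and talk_to_server() are
   placeholders invented for this example, not real library calls:

      def getprimaryserver(pool_handle):
          # Placeholder for the provisional GETPRIMARYSERVER primitive:
          # resolve the pool handle to the primary PE's address.
          return ("10.1.1.1", 7)

      def getnextserver(pool_handle):
          # Placeholder for GETNEXTSERVER: report the previously
          # returned PE as failed and obtain the next PE according to
          # the pool policy in effect.
          return ("10.2.2.2", 7)

      def talk_to_server(address, request):
          # Application-specific exchange with the chosen PE (stubbed).
          return "response from %s:%d" % address

      def run_request(pool_handle, request, max_attempts=3):
          address = getprimaryserver(pool_handle)
          for _ in range(max_attempts):
              try:
                  return talk_to_server(address, request)
              except ConnectionError:
                  # Failure detection is the application's job in this
                  # scenario; on failure, ask RSerPool for the next PE.
                  address = getnextserver(pool_handle)
          raise RuntimeError("no responsive PE found")

      print(run_request(b"echo-pool", "PING"))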
4.2.  Example Scenario using RSerPool Session Services

   When the full suite of RSerPool services is used, all communication
   between the pool user and the pool element is mediated by the
   RSerPool framework, including session establishment and teardown as
   well as the sending and receiving of data.  Accordingly, it is
   necessary to modify the application to use the service primitives
   (i.e., the API) provided by RSerPool, rather than the transport
   layer primitives provided by TCP, SCTP, or whatever transport
   protocol is being used.

   As in the previous case, sessions (rather than connections or
   associations) are established, and the destination endpoint is
   specified as a pool handle rather than as a list of IP addresses
   with a port number.  However, failover from one pool element to
   another is fully automatic and can be transparent to the application
   (as long as the application has saved enough state in a state
   cookie):

   The RSerPool framework control channel provides maintenance
   functions to keep pool element lists, policies, etc. current.

   Since the application data (i.e., the data channel) is managed by
   the RSerPool framework, unsent data (data not yet submitted by
   RSerPool to the underlying transport protocol) is automatically
   redirected to the newly selected pool element upon failover.  If the
   underlying transport layer supports retrieval of unsent data (as
   SCTP does), retrieved unsent data can also be automatically re-sent
   to the newly selected pool element.

   An application server (pool element) can provide a state cookie
   (described in Section 2.5.1) that is automatically passed on to
   another pool element (by the ASAP layer at the pool user) in the
   event of a failover.  This state cookie can be used to assist the
   application at the new pool element in recreating whatever state is
   needed to continue a session or transaction that was interrupted by
   a failure in the communication between the pool user and the
   original pool element.

   The application client (pool user) can provide a callback function
   that is invoked on the pool user side in the case of a failover.
   This callback function can execute any application-specific failover
   code, such as generating a special message (or sequence of messages)
   that helps the new pool element construct any state needed to
   continue an in-process session.

   Suppose that in a particular peer-to-peer application, PU A is
   communicating with PE B, and it so happens that PU A is also a PE in
   pool X.  PU A can pass a "business card" to PE B identifying it as a
   member of pool X.  In the event of a failure at A, or a failure in
   the communication link between A and B, PE B can use the information
   in the business card to contact a PE from pool X that is equivalent
   to PU A.

   Additionally, if the application at PU A is aware of some particular
   PEs of pool X that would be preferred for B to contact in the event
   that A becomes unreachable from B, PU A can provide that list to the
   ASAP layer, and it will be included in A's business card (see
   Section 2.5.2).
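   The interplay of these mechanisms on the pool user side can be
   sketched as follows.  This is a toy model of a session-level API;
   none of the class or method names below are taken from an actual
   specification, since that API is not yet defined:

      class Session:
          # Toy model of the PU-side ASAP session service.

          def __init__(self, pool_handle, failover_callback=None):
              self.pool_handle = pool_handle
              self.failover_callback = failover_callback
              self.peer = self._resolve()   # ASAP handle resolution
              self.last_cookie = None       # opaque; newest one wins
              self.unsent = []              # not yet given to transport

          def _resolve(self):
              # Stand-in for an ASAP Handle Resolution exchange.
              return ("10.1.1.1", 7)

          def send(self, data):
              # Data channel: queued until the transport takes it.
              self.unsent.append(data)

          def handle_peer_failure(self):
              # 1. Select another PE of the same pool.
              self.peer = self._resolve()
              # 2. Hand the last state cookie to the new PE (2.5.1).
              if self.last_cookie is not None:
                  self.send(self.last_cookie)
              # 3. Let the application run its own repair code.
              if self.failover_callback is not None:
                  self.failover_callback(self)
              # 4. Data still queued in self.unsent is now delivered
              #    to the newly selected PE instead of the failed one.

      s = Session(b"echo-pool",
                  failover_callback=lambda s: s.send("RESYNC"))
      s.last_cookie = b"opaque cookie from the old PE"
      s.handle_peer_failure()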
5.  Reference Implementation

   The reference implementation of RSerPool is available at
   [RSerPoolPage] and described in [Dre2006].

6.  Security Considerations

   This document does not identify security requirements beyond those
   already documented in the ENRP and ASAP protocol specifications.  A
   security threat analysis of RSerPool is provided in
   [I-D.ietf-rserpool-threats].

7.  IANA Considerations

   This document does not require additional IANA actions beyond those
   already identified in the ENRP and ASAP protocol specifications.

8.  Acknowledgements

   The authors wish to thank Maureen Stillman, Qiaobing Xie, Randall
   Stewart, Scott Bradner, and many others for their invaluable
   comments.

9.  References

9.1.  Normative References

   [RFC3237]  Tuexen, M., Xie, Q., Stewart, R., Shore, M., Ong, L.,
              Loughney, J., and M. Stillman, "Requirements for Reliable
              Server Pooling", RFC 3237, January 2002.

   [I-D.ietf-rserpool-asap]
              Stewart, R., Xie, Q., Stillman, M., and M. Tuexen,
              "Aggregate Server Access Protocol (ASAP)",
              draft-ietf-rserpool-asap-19 (work in progress),
              March 2008.

   [I-D.ietf-rserpool-enrp]
              Kim, D., Stewart, R., Stillman, M., Tuexen, M., and A.
              Silverton, "Endpoint Handlespace Redundancy Protocol
              (ENRP)", draft-ietf-rserpool-enrp-19 (work in progress),
              March 2008.

   [I-D.ietf-rserpool-common-param]
              Stewart, R., Xie, Q., Stillman, M., and M. Tuexen,
              "Aggregate Server Access Protocol (ASAP) and Endpoint
              Handlespace Redundancy Protocol (ENRP) Parameters",
              draft-ietf-rserpool-common-param-16 (work in progress),
              March 2008.

   [I-D.ietf-rserpool-policies]
              Tuexen, M. and T. Dreibholz, "Reliable Server Pooling
              Policies", draft-ietf-rserpool-policies-08 (work in
              progress), March 2008.

   [I-D.ietf-rserpool-threats]
              Stillman, M., Gopal, R., Guttman, E., Holdrege, M., and
              S. Sengodan, "Threats Introduced by RSerPool and
              Requirements for Security in Response to Threats",
              draft-ietf-rserpool-threats-11 (work in progress),
              April 2008.

9.2.  Informative References

   [RSerPoolPage]
              Dreibholz, T., "Thomas Dreibholz's RSerPool Page",
              URL: http://tdrwww.iem.uni-due.de/dreibholz/rserpool/.

   [Dre2006]  Dreibholz, T., "Reliable Server Pooling -- Evaluation,
              Optimization and Extension of a Novel IETF Architecture",
              Ph.D. Thesis, University of Duisburg-Essen, Faculty of
              Economics, Institute for Computer Science and Business
              Information Systems, March 2007, URL:
              http://duepublico.uni-duisburg-essen.de/servlets/
              DerivateServlet/Derivate-16326/Dre2006-final.pdf.

Authors' Addresses

   Peter Lei
   Cisco Systems, Inc.
   955 Happfield Dr.
   Arlington Heights, IL 60004
   US

   Phone: +1 773 695-8201
   Email: peterlei@cisco.com

   Lyndon Ong
   Ciena Corporation
   PO Box 308
   Cupertino, CA 95015
   US

   Email: Lyong@Ciena.com

   Michael Tuexen
   Muenster Univ. of Applied Sciences
   Stegerwaldstr. 39
   48565 Steinfurt
   Germany

   Email: tuexen@fh-muenster.de

   Thomas Dreibholz
   University of Duisburg-Essen, Institute for Experimental Mathematics
   Ellernstrasse 29
   45326 Essen, Nordrhein-Westfalen
   Germany

   Phone: +49 201 183-7637
   Fax:   +49 201 183-7673
   Email: dreibh@iem.uni-due.de
   URI:   http://www.iem.uni-due.de/~dreibh/

Full Copyright Statement

   Copyright (C) The IETF Trust (2008).

   This document is subject to the rights, licenses and restrictions
   contained in BCP 78, and except as set forth therein, the authors
   retain all their rights.
   This document and the information contained herein are provided on
   an "AS IS" basis and THE CONTRIBUTOR, THE ORGANIZATION HE/SHE
   REPRESENTS OR IS SPONSORED BY (IF ANY), THE INTERNET SOCIETY, THE
   IETF TRUST AND THE INTERNET ENGINEERING TASK FORCE DISCLAIM ALL
   WARRANTIES, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO ANY
   WARRANTY THAT THE USE OF THE INFORMATION HEREIN WILL NOT INFRINGE
   ANY RIGHTS OR ANY IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS
   FOR A PARTICULAR PURPOSE.

Intellectual Property

   The IETF takes no position regarding the validity or scope of any
   Intellectual Property Rights or other rights that might be claimed
   to pertain to the implementation or use of the technology described
   in this document or the extent to which any license under such
   rights might or might not be available; nor does it represent that
   it has made any independent effort to identify any such rights.
   Information on the procedures with respect to rights in RFC
   documents can be found in BCP 78 and BCP 79.

   Copies of IPR disclosures made to the IETF Secretariat and any
   assurances of licenses to be made available, or the result of an
   attempt made to obtain a general license or permission for the use
   of such proprietary rights by implementers or users of this
   specification can be obtained from the IETF on-line IPR repository
   at http://www.ietf.org/ipr.

   The IETF invites any interested party to bring to its attention any
   copyrights, patents or patent applications, or other proprietary
   rights that may cover technology that may be required to implement
   this standard.  Please address the information to the IETF at
   ietf-ipr@ietf.org.