SIMPLE                                                      J. Rosenberg
Internet-Draft                                                     Cisco
Intended status: Informational                         November 10, 2007
Expires: May 13, 2008

            Models for Intra-Domain Presence Federation
          draft-rosenberg-simple-intradomain-federation-00

Status of this Memo

By submitting this Internet-Draft, each author represents that any applicable patent or other IPR claims of which he or she is aware have been or will be disclosed, and any of which he or she becomes aware will be disclosed, in accordance with Section 6 of BCP 79.

Internet-Drafts are working documents of the Internet Engineering Task Force (IETF), its areas, and its working groups. Note that other groups may also distribute working documents as Internet-Drafts.

Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time.
It is inappropriate to use Internet-Drafts as reference 25 material or to cite them other than as "work in progress." 27 The list of current Internet-Drafts can be accessed at 28 http://www.ietf.org/ietf/1id-abstracts.txt. 30 The list of Internet-Draft Shadow Directories can be accessed at 31 http://www.ietf.org/shadow.html. 33 This Internet-Draft will expire on May 13, 2008. 35 Copyright Notice 37 Copyright (C) The IETF Trust (2007). 39 Abstract 41 Presence federation involves the sharing of presence information 42 across multiple presence systems. Most often, presence federation is 43 assumed to be between different organizations, such as between two 44 enterprises or between and enterprise and a service provider. 45 However, federation can occur within a single organization or domain. 46 This can be the result of a multi-vendor network, or a consequence of 47 a large organization that requires partitioning. This document 48 examines different use cases and models for intra-domain federation. 50 Table of Contents 52 1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . . 3 53 2. Partitioned . . . . . . . . . . . . . . . . . . . . . . . . . 5 54 2.1. Applicability . . . . . . . . . . . . . . . . . . . . . . 6 55 2.2. Routing . . . . . . . . . . . . . . . . . . . . . . . . . 7 56 2.2.1. Centralized Database . . . . . . . . . . . . . . . . . 8 57 2.2.2. Routing Proxy . . . . . . . . . . . . . . . . . . . . 9 58 2.2.3. Subdomaining . . . . . . . . . . . . . . . . . . . . . 10 59 2.2.4. Peer-to-Peer . . . . . . . . . . . . . . . . . . . . . 12 60 2.2.5. Forking . . . . . . . . . . . . . . . . . . . . . . . 12 61 2.2.6. Provisioned Routing . . . . . . . . . . . . . . . . . 12 62 2.3. Policy . . . . . . . . . . . . . . . . . . . . . . . . . . 12 63 2.4. Presence Data . . . . . . . . . . . . . . . . . . . . . . 12 64 3. Unioned . . . . . . . . . . . . . . . . . . . . . . . . . . . 13 65 3.1. Applicability . . . . . . . . . . . . . . . . . . . . . . 14 66 3.2. Hierarchical Model . . . . . . . . . . . . . . . . . . . . 17 67 3.2.1. Routing . . . . . . . . . . . . . . . . . . . . . . . 19 68 3.2.2. Policy and Identity . . . . . . . . . . . . . . . . . 20 69 3.2.2.1. Root Only . . . . . . . . . . . . . . . . . . . . 20 70 3.2.2.2. Distributed Provisioning . . . . . . . . . . . . . 22 71 3.2.2.3. Central Provisioning . . . . . . . . . . . . . . . 23 72 3.2.3. Presence Data . . . . . . . . . . . . . . . . . . . . 25 73 3.3. Peer Model . . . . . . . . . . . . . . . . . . . . . . . . 25 74 3.3.1. Routing . . . . . . . . . . . . . . . . . . . . . . . 27 75 3.3.2. Policy . . . . . . . . . . . . . . . . . . . . . . . . 28 76 3.3.3. Presence Data . . . . . . . . . . . . . . . . . . . . 28 77 4. Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . 28 78 5. Future Considerations . . . . . . . . . . . . . . . . . . . . 28 79 6. Acknowledgements . . . . . . . . . . . . . . . . . . . . . . . 29 80 7. Security Considerations . . . . . . . . . . . . . . . . . . . 29 81 8. IANA Considerations . . . . . . . . . . . . . . . . . . . . . 29 82 9. Informative References . . . . . . . . . . . . . . . . . . . . 29 83 Author's Address . . . . . . . . . . . . . . . . . . . . . . . . . 30 84 Intellectual Property and Copyright Statements . . . . . . . . . . 31 86 1. Introduction 88 Presence refers to the ability, willingness and desire to communicate 89 across differing devices, mediums and services [RFC2778]. 
Presence 90 is described using presence documents [RFC3863] [RFC4479], exchanged 91 using a SIP-based event package [RFC3856] 93 Presence federation refers to the sharing of presence information 94 across multiple presence systems. This interconnection involves 95 passing of subscriptions from one system to another, and then the 96 passing of notifications in the opposite direction. 98 Most often, presence federation is considered in the context of 99 interconnection between different domains, also known as inter-domain 100 presence federation 101 [I-D.ietf-speermint-consolidated-presence-im-usecases]. For example, 102 consider the network of Figure 1, which shows one model for inter- 103 domain federation. In this network, alice belongs to the example.org 104 domain, and Bob belongs to the example.com domain. Alice subscribes 105 to her buddy list on her presence server (which is also acting as her 106 Resource List Server (RLS) [RFC4662]), and that list includes 107 bob@example.com. Alice's presence server generates a back-end 108 subscription on the federated link between example.org and 109 example.com. The example.com presence server authorizes the 110 subscription, and if permitted, generates notifications back to 111 Alice's presence server, which are in turn passed to Alice. 113 ............................. .............................. 114 . . . . 115 . . . . 116 . alice@example.org . . bob@example.com . 117 . +------------+ SUB . . +------------+ . 118 . | | Bob . . | | . 119 . | Presence |------------------->| Presence | . 120 . | Server | . . | Server | . 121 . | | . . | | . 122 . | |<-------------------| | . 123 . | | NOTIFY . | | . 124 . +------------+ . . +------------+ . 125 . ^ | . . ^ . 126 . SUB | | . . |PUB . 127 . Buddy | |NOTIFY . . | . 128 . List | | . . | . 129 . | | . . | . 130 . | V . . | . 131 . +-------+ . . +-------+ . 132 . | | . . | | . 133 . | | . . | | . 134 . | | . . | | . 135 . +-------+ . . +-------+ . 136 . . . . 137 . Alice's . . Bob's . 138 . PC . . PC . 139 . . . . 140 ............................. .............................. 142 example.org example.com 144 Figure 1: Inter-Domain Model 146 However, federation can happen within a domain as well. We define 147 intra-domain federation as the interconnection of presence servers 148 within a single domain, where domain refers explicity to the right 149 hand side of the @-sign in the SIP URI. A single domain can have 150 multiple presence systems for several reasons: 152 o The domain may be very large, and for purposes of scale, requires 153 several different presence servers. However, intra-domain 154 federation is not the same as clustering, which is also done for 155 scale. Clustering involves a tight synchronization and 156 coordination across servers, always of the same vendor. Intra- 157 domain federation involves much looser coupling and can be between 158 vendors. 160 o The domain may be divided up organizationally in such a way that 161 different users are served by different parts of organization, and 162 each part of the organization manages its own presence servers. 164 o A domain has chosen multiple vendors for its presence 165 infrastructure, each having their own servers. 167 When considering architectures for intra-domain presence federation, 168 several issues need to be considered: 170 Routing: How are subscriptions routed to the right presence 171 server(s)? 
This issue is more complex in intra-domain models, 172 since the right hand side of the @-sign cannot be used to perform 173 this routing. 175 Policy and Identity: Where do user policies reside, and what 176 presence server(s) are responsible for executing that policy? 177 What identities does the user have in each system and how do they 178 relate? 180 Data Ownership: Which presence servers are responsible for which 181 pieces of presence information, and how are those pieces composed 182 to form a coherent and consistent view of user presence? 184 The sections below describe several different models for intra-domain 185 federation. Each model is driven by a set of use cases, which are 186 described in an applicability subsection for each model. Each model 187 description also discusses how routing, policy, and composition work. 189 2. Partitioned 191 In the partitioned model, a single domain has a multiplicity of 192 presence servers, each of which manages a non-overlapping set of 193 users. That is, for each user in the domain, their presence data and 194 policy reside on a single server. Each "single server" may in fact 195 be physically implemented on more than one box, for the purposes of 196 scale or high availability. However, the key definition of "single 197 server" here is that it represents an isolated presence functionality 198 that operates independently of any of the others. 200 Another important facet of the partitioned model is that, even though 201 users are partitioned across different servers, they each share the 202 same domain name in the right hand side of their URI, and this URI is 203 what those users use when communicating with other users both inside 204 and outside of the domain. There are many reasons why a domain would 205 want all of its users to share the same right-hand side of the @-sign 206 even though it is partitioned internally: 208 o The partitioning may reflect organizational or geographical 209 structures that a domain admistrator does not want to reflect 210 externally. 212 o If each partition had a separate domain name (i.e., 213 engineering.example.com and sales.example.com), if a user changed 214 organizations, this would necessitate a change in their URI. 216 o For reasons of vanity, users often like to have their URI (which 217 appear on business cards, email, and so on), to be brief and 218 short. 220 This model is illustrated in Figure 2. As the model shows, the 221 domain example.com has six users across three servers, each of which 222 is handling two of the users. 224 ..................................................................... 225 . . 226 . . 227 . . 228 . joe@example.com alice@example.com padma@example.com . 229 . bob@example.com zeke@example.com hannes@example.com . 230 . +-----------+ +-----------+ +-----------+ . 231 . | | | | | | . 232 . | Server | | Server | | Server | . 233 . | 1 | | 2 | | 3 | . 234 . | | | | | | . 235 . +-----------+ +-----------+ +-----------+ . 236 . . 237 . . 238 . . 239 . example.com . 240 ..................................................................... 242 Figure 2: Partitioned Model 244 2.1. Applicability 246 The partitioned model arises naturally in larger domains, such as an 247 enterprise or service provider, where issues of scale cause the 248 domain to be managed by a multiplicity of independent servers. 250 One common use case is a multi-national organization with regional IT 251 departments, each of which supports a particular set of 252 nationalities. 
It is very common for each regional IT department to 253 deploy and run its own servers for its own population. In that case, 254 the domain would end up being composed of the presence servers 255 deployed by each regional IT department. Indeed, in many 256 organizations, each regional IT department might end up using 257 different vendors. This can be a consequence of differing regional 258 requirements for features (such as compliance or localization 259 support), differing sales channels and markets in which vendors sell, 260 and so on. 262 Another common use case is an organization that is just very large, 263 and their size exceeds the capacity that a single "server" can 264 provide. So, instead, the domain breaks its users into partitions 265 (perhaps arbitrarily) and then uses intra-domain federation to allow 266 the overall system to scale up to arbitrary sizes. The word "server" 267 is used in quotes above because, in fact, each such "server" might be 268 a clustered set of servers from a particular vendor. Clustering is 269 not the same as intra-domain federation; clustering involves a 270 tightly coupled coordination between servers of the same vendor. 271 Intra-domain federation involves a looser coupling - only using 272 standards-based protocols and possibly involving multiple vendors. 274 Yet another common use case is an organization that requires multiple 275 vendors for each service, in order to avoid vendor lock in and drive 276 competition between its vendors. Since the servers will come from 277 different vendors, a natural way to deploy them is to partition the 278 users across them. Such multi-vendor networks are extremely common 279 in large service provider networks, many of which have hard 280 requirements for multiple vendors. 282 Another use case is one where certain vendors might specialize in 283 specific types of clients, or provide features or presence data that 284 are unique, such that a domain might wish to use clients from a 285 multiplicity of vendors, depending on the needs of its users. For 286 example, one vendor might provide a mobile client (but no desktop 287 client), while another provides a desktop client but no mobile 288 client. A domain might want some users to have a mobile client, 289 while others have the desktop client. This leads to the partitioned 290 model. 292 2.2. Routing 294 The partitioned intra-domain model works almost identically to an 295 inter-domain federated model, with the primary difference being 296 routing. In inter-domain federation, the domain part of the URI can 297 be used to route presence subscriptions from the watcher's domain to 298 the domain of the presentity. This is no longer the case in an 299 intra-domain model. Consider the case where Joe subscribes to his 300 buddy list, which is served by his presence server (server 1 in 301 Figure 2). Alice is a member of Joe's buddy list. How does server 1 302 know that the back-end subscription to Alice needs to get routed to 303 server 2? 305 2.2.1. Centralized Database 307 ..................................................................... 308 . +-----------+ . 309 . alice? | | . 310 . +---------------> | Database | . 311 . | server 2 | | . 312 . | +-------------| | . 313 . | | +-----------+ . 314 . | | . 315 . | | . 316 . | | . 317 . | | . 318 . | | . 319 . | | . 320 . | V . 321 . joe@example.com alice@example.com padma@example.com . 322 . bob@example.com zeke@example.com hannes@example.com . 323 . +-----------+ +-----------+ +-----------+ . 324 . | | | | | | . 
325 . | Server | | Server | | Server | . 326 . | 1 | | 2 | | 3 | . 327 . | | | | | | . 328 . +-----------+ +-----------+ +-----------+ . 329 . . 330 . . 331 . . 332 . example.com . 333 ..................................................................... 335 Figure 3: Centralized DB 337 One solution is to rely on a common, centralized database that 338 maintains mappings of users to specific servers, shown in Figure 3. 339 When Joe subscribes to his buddy list that contains Alice, server 1 340 would query this database, asking it which server is responsible for 341 alice@example.com. The database would indicate server 2, and then 342 server 1 would generate the backend SUBSCRIBE request towards server 343 2. This is a common technique in large email systems. It is often 344 implemented using internal sub-domains; so that the database would 345 return alice@central.example.com to the query, and server 1 would 346 modify the Request-URI in the SUBSCRIBE request to reflect this. 348 Routing database solutions have the problem that they require 349 standardization on a common schema and database protocol in order to 350 work in multi-vendor environments. For example, LDAP and SQL are 351 both possibilities. There is variety in LDAP schema; one possibility 352 is H.350.4, which could be adapted for usage here [RFC3944]. 354 2.2.2. Routing Proxy 356 ..................................................................... 357 . +-----------+ . 358 . SUB alice | | . 359 . +---------------> | Routing | . 360 . | | Proxy | . 361 . | | | . 362 . | +-----------+ . 363 . | | . 364 . | | . 365 . | | . 366 . | |SUB Alice . 367 . | | . 368 . | | . 369 . | V . 370 . joe@example.com alice@example.com padma@example.com . 371 . bob@example.com zeke@example.com hannes@example.com . 372 . +-----------+ +-----------+ +-----------+ . 373 . | | | | | | . 374 . | Server | | Server | | Server | . 375 . | 1 | | 2 | | 3 | . 376 . | | | | | | . 377 . +-----------+ +-----------+ +-----------+ . 378 . . 379 . . 380 . . 381 . example.com . 382 ..................................................................... 384 Figure 4: Routing Proxy 386 A similar solution is to rely on a routing proxy. Instead of a 387 centralized database, there would be a centralized SIP proxy farm. 388 Server 1 would send subscriptions for users it doesn't serve to this 389 server farm, and the servers would lookup the user in a database 390 (which is now accessed only by the routing proxy), and the resulting 391 subscriptions are sent to the correct server. A redirect server can 392 be used as well, in which case the flow is very much like that of a 393 centralized database. 395 Routing proxies have the benefit that they do not require a common 396 database schema and protocol, but they do require a centralized 397 server function that sees all subscriptions, which can be a scale 398 challenge. 400 2.2.3. Subdomaining 402 In this solution, each user is associated with a subdomain, and is 403 provisioned as part of their respective presence server using that 404 subdomain. Consequently, each presence server thinks it is its own, 405 separate domain. However, when a user adds a presentity to their 406 buddy list without the subdomain, they first consult a shared 407 database which returns the subdomained URI to subscribe to. This 408 sub-domained URI can be returned because the user provided a search 409 criteria, such as "Find Alice Chang", or provided the non-subdomained 410 URI (alice@example.com). 
This is shown in Figure 5 411 ..................................................................... 412 . +-----------+ . 413 . who is Alice? | | . 414 . +---------------------->| Database | . 415 . | alice@b.example.com | | . 416 . | +---------------------| | . 417 . | | +-----------+ . 418 . | | . 419 . | | . 420 . | | . 421 . | | . 422 . | | . 423 . | | . 424 . | | . 425 . | | joe@a.example.com alice@b.example.com padma@c.example.com . 426 . | | bob@a.example.com zeke@b.example.com hannes@c.example.com . 427 . | | +-----------+ +-----------+ +-----------+ . 428 . | | | | | | | | . 429 . | | | Server | | Server | | Server | . 430 . | | | 1 | | 2 | | 3 | . 431 . | | | | | | | | . 432 . | | +-----------+ +-----------+ +-----------+ . 433 . | | ^ . 434 . | | | . 435 . | | | . 436 . | | | . 437 . | | | . 438 . | | | . 439 . | | +-----------+ . 440 . | +-------------------->| | . 441 . | | Watcher | . 442 . | | | . 443 . +-----------------------| | . 444 . +-----------+ . 445 . . 446 . . 447 . . 448 . example.com . 449 ..................................................................... 451 Figure 5: Subdomaining 453 Subdomaining puts the burden of routing within the client. The 454 servers can be completely unaware that they are actually part of the 455 same domain, and integrate with each other exactly as they would in 456 an inter-domain model. However, the client is given the burden of 457 determining the subdomained URI from the original URI or buddy name, 458 and then subscribing directly to that server, or including the 459 subdomained URI in their buddylist. The client is also responsible 460 for hiding the subdomain structure from the user. 462 2.2.4. Peer-to-Peer 464 Another model is to utilize a peer-to-peer network amongst all of the 465 servers, and store URI to server mappings in the distributed hash 466 table it creates. This has some nice properties but does require a 467 standardized and common p2p protocol across vendors, which does not 468 exist today. 470 2.2.5. Forking 472 Yet another solution is to utilize forking. Each server is 473 provisioned with the domain names or IP addresses of the other 474 servers, but not with the mapping of users to each of those servers. 475 When a server needs to create a back-end subscription for a user it 476 doesn't have, it forks the SUBSCRIBE request to all of the other 477 servers. This request will be rejected with a 404 on the servers 478 which do not handle that user, and accepted on the one that does. 479 The approach assumes that presence servers can differentiate inbound 480 SUBSCRIBE requests from end users (which cause back-end subscriptions 481 to get forked) and from other servers (which do not cause back-end 482 subscriptions). This approach works very well in organizations with 483 a relatively small number of servers (say, two or three), and becomes 484 increasingly ineffective with more and more servers. 486 2.2.6. Provisioned Routing 488 Yet another solution is to provision each server with each user, but 489 for servers that don't actually serve the user, the provisioning 490 merely tells the server where to proxy the request. This solution 491 has extremely poor operational properties, requiring multiple points 492 of provisioning across disparate systems. 494 2.3. Policy 496 A fundamental characteristic of the partitioned model is that there 497 is a single point of policy enforcement (authorization rules and 498 composition policy) for each user. 500 2.4. 
Presence Data 502 Another fundamental characteristic of the partitioned model is that 503 the presence data for a user is managed authoritatively on a single 504 server. In the example of Figure 2, the presence data for Alice 505 lives on server 2 alone (recall that server two may be physically 506 implemented as a multiplicity of boxes from a single vendor, each of 507 which might have a portion of the presence data, but externally it 508 appears to behave as if it were a single server). A subscription 509 from Bob to Alice may cause a transfer of presence information from 510 server 2 to server 1, but server 2 remains authoritative and is the 511 single root source of all data for Alice. 513 3. Unioned 515 In the unioned model, each user is actually served by more than one 516 presence server. In this case, "served" implies two properties: 518 o A user is served by a server when that user is provisioned on that 519 server, and 521 o That server is authoritative for some piece of presence state 522 associated with that user 524 In essence, in the unioned model, a user's presence data is 525 distributed across many presence servers. In the partitioned model, 526 its centralized in a single presence server. 528 This definition speaks specifically to ownership of presence data as 529 the key property. This rules out several cases which involve a mix 530 of servers within the enterprise, but do not constitute intra-domain 531 unioned federation: 533 o A user utilizes an outbound SIP proxy from one vendor, which 534 connects to a presence server from another vendor. Even though 535 this will result in presence subscriptions and notifications 536 flowing between servers, and the user is potentially provisioned 537 on both, there is no authoritative presence state in the outbound 538 proxy, and so this is not intra-domain federation. 540 o A user utilizes a Resource List Server (RLS) from one vendor, 541 which holds their buddy list, and accesses presence data from a 542 presence server from another vendor. This case is actually the 543 partitioned case, not the unioned case. Effectively, the buddy 544 list itself is another "user", and it exists entirely on one 545 server (the RLS), while the actual users on the buddy list exist 546 entirely within another. Consequently, this case does not have 547 the property that a single presence resource exists on multiple 548 servers at the same time. 550 o A user subscribes to the presence of a presentity. This 551 subscription is first passed to their presence server, which acts 552 as a proxy, and instead sends the subscription to the UA of the 553 user, which acts as a presence edge server. In this model, it may 554 appear as if there are two presence servers for the user (the 555 actual server and their UA). However, the server is acting as a 556 proxy in this case. There is only one source of presence 557 information. 559 3.1. Applicability 561 The unioned models arise naturally for several reasons. 563 Firstly, it is often the case that specific client applications and 564 devices are designed to only work with their corresponding servers. 565 In an ideal world, clients would all implement to standards and this 566 would not happen, but in practice, the vast majority of presence 567 endpoints work only (or only work well) with the server from the same 568 vendor. 
In addition, certain vendors might specialize in specific 569 types of clients, or provide features that are unique, such that a 570 domain might wish to use clients from a multiplicity of vendors. For 571 example, one vendor might provide a mobile client (but no desktop 572 client), while another provides a desktop client but no mobile 573 client. A domain might want each user to have both a mobile client 574 and a desktop client, which will require servers from each vendor, 575 leading to the unioned case. This is shown in Figure 6. Another 576 example is where one vendor that provides a business telephone with 577 presence, but no desktop client, while another provides a deskop 578 client but no business telephone. 580 alice@example.com alice@example.com 581 +------------+ +------------+ 582 | | | | 583 | Presence | | Presence | 584 | Server |--------------| Server | 585 | 1 | | 2 | 586 | | | | 587 | | | | 588 +------------+ +------------+ 589 \ / 590 \ / 591 \ / 592 \ / 593 \ / 594 \ / 595 \...................../....... 596 \ / . 597 .\ / . 598 . \ | +--------+ . 599 . | |+------+| . 600 . +---+ || || . 601 . |+-+| || || . 602 . |+-+| |+------+| . 603 . | | +--------+ . 604 . | | /------ / . 605 . +---+ /------ / . 606 . --------/ . 607 . . 608 ............................. 610 Alice 612 Figure 6: Unioned Case 1 614 Secondly, presence can contain rich information, including activities 615 of the user (such as whether they are in a meeting or on the phone), 616 their geographic location, and their mood. This presence state is 617 can be determined manually (where the user enters and updates the 618 information), or automatically. Automatic determination of these 619 states is far preferable, since it put less burden on the user. 620 Determination of these presence states is done by taking "raw" data 621 about the user, and using it to generate corresponding presence 622 states. This raw data can come from any source that has information 623 about the user, including their calendaring server, their VoIP 624 infrastructure, their VPN server, their laptop operating system, and 625 so on. Each of these components is typically made by different 626 vendors, each of which is likely to integrate that data with their 627 presence servers. Consequently, presence servers from different 628 vendors are likely to specialize in particular pieces of presence 629 data, based on the other infrastructure they provide. 631 Consequently, though a user may have all of their devices connected 632 to and associated with a single presence server, that presence server 633 may have incomplete presence state about the user. Another presence 634 server in the enterprise, due to its access to state for that user, 635 has additional data which needs to be accessed by the first presence 636 server in order to provide a comprehensive view of presence data. 637 This is shown in Figure 7. 639 alice@example.com alice@example.com 640 +------------+ +------------+ 641 | | | | 642 | Presence | | Presence | 643 | Server |--------------| Server | 644 | 1 | | 2 | 645 | | | | 646 | | | | 647 +------------+ +------------+ 648 ^ | | 649 | | | 650 | | | 651 ///-------\\\ | | 652 ||| specialized ||| | | 653 || state || | | 654 \\\-------/// | | 655 ............................. 656 . | | . 657 . | | +--------+ . 658 . | |+------+| . 659 . +---+ || || . 660 . |+-+| || || . 661 . |+-+| |+------+| . 662 . | | +--------+ . 663 . | | /------ / . 664 . +---+ /------ / . 665 . --------/ . 666 . . 667 . . 668 ............................. 
669 Alice 671 Figure 7: Unioned Case 2 673 Another use case for unioned federation are subscriber moves. 674 Consider a domain which uses multiple presence servers, typically 675 running in a partitioned configuration. The servers are organized 676 regionally so that each user is served by a presence server handling 677 their region. A user is moving from one region to a new job in 678 another, while retaining their SIP URI. In order to provide a smooth 679 transition, ideally the system would provide a "make before break" 680 functionality, allowing the user to be added onto the new server 681 prior to being removed from the old. During the transition period, 682 especially if the user had multiple clients to be moved, they can end 683 up with presence state existing on both servers at the same time. 685 3.2. Hierarchical Model 687 The unioned intra-federation model can be realized in one of two ways 688 - using a hierarchical structure or a peer structure. 690 In the hierarchical model, presence subscriptions for the presentity 691 in question are always routed first to one of the servers - the root 692 - and then the root presence server subscribes to the next layer of 693 presence servers (which may, in turn, subscribe to the presence state 694 in other presence servers). Each presence server composes the 695 presence information it receives from its children, applying local 696 authorization and composition policies, and then passes the results 697 up to the higher layer. This is shown in Figure 8. 699 +-----------+ 700 *-----------* | | 701 |Auth and |---->| Presence | <--- root 702 |Composition| | Server | 703 *-----------* | | 704 | | 705 +-----------+ 706 / --- 707 / ---- 708 / ---- 709 / ---- 710 V -V 711 +-----------+ +-----------+ 712 | | | | 713 *-----------* | Presence | *-----------* | Presence | 714 |Auth and |-->| Server | |Auth and |-->| Server | 715 |Composition| | | |Composition| | | 716 *-----------* | | *-----------* | | 717 +-----------+ +-----------+ 718 | --- 719 | ----- 720 | ----- 721 | ----- 722 | ----- 723 | ----- 724 V --V 725 +-----------+ +-----------+ 726 | | | | 727 *-----------* | Presence | *-----------* | Presence | 728 |Auth and |-->| Server | |Auth and |-->| Server | 729 |Composition| | | |Composition| | | 730 *-----------* | | *-----------* | | 731 +-----------+ +-----------+ 733 Figure 8: Hierarchical Model 735 Its important to note that this hierarchy defines the sequence of 736 presence composition and policy application, and does not imply a 737 literal message flow. As an example, consider once more the use case 738 of Figure 6. Assume that presence server 1 is the root, and presence 739 server 2 is its child. When Bob's PC subscribes to Bob's buddy list 740 (on presence server 2), that subscription will first go to presence 741 server 2. However, that presence server knows that it is not the 742 root in the hierarchy, and despite the fact that it has presence 743 state for Alice (who is on Bob's buddy list), it creates a back-end 744 subscription to presence server 1. Presence server 1, as the root, 745 subscribes to Alice's state at presence server 2. Now, since this 746 subscription came from presence server 1 and not Bob directly, 747 presence server 2 provides the presence state. This is received at 748 presence server 1, which composes the data with its own state for 749 Alice, and then provides the results back to presence server 2, 750 which, having acted as an RLS, forwards the results back to Bob. 
Consequently, this flow, as a message sequence diagram, involves notifications passing from presence server 2, to server 1, and back to server 2. However, in terms of composition and policy, the processing was done first at the child node (presence server 2), and those results were then used at the parent node (presence server 1).

3.2.1. Routing

In the hierarchical model, each presence server needs to be provisioned with the root, its parent, and its children for each presentity it handles. These relationships could in fact be different on a presentity-by-presentity basis; however, this is complex to manage. In all likelihood, the parent and child relationships are identical for every presentity. The overall routing algorithm can be described as follows:

o If a SUBSCRIBE is received from the parent node for this presentity, perform subscriptions to each child node for this presentity, then take the results, apply composition and authorization policies, and propagate the outcome to the parent.

o If a SUBSCRIBE is received from a node that is not the parent node for this presentity, proxy the SUBSCRIBE to the parent node. This includes cases where the node that sent the SUBSCRIBE is a child node.

This routing rule is relatively simple, and in a two-server system is almost trivial to provision. Interestingly, it works in cases where some users are partitioned and some are unioned. When the users are partitioned, this routing algorithm devolves into the forking algorithm of Section 2.2.5. This points to the forking algorithm as the "natural" routing algorithm for partitioned models.

An important property of routing in the hierarchical model is that the sequence of composition and policy operations is identical for all watchers of a presentity, regardless of which presence server they are associated with. The result is that the overall presence state provided to a watcher is always consistent and independent of the server the watcher is connected to. We call this property the *consistency property*, and it is an important metric in assessing the correctness of a federated presence system.

3.2.2. Policy and Identity

Policy and identity are a clear challenge in the unioned model.

Firstly, since a user is provisioned on many servers, it is possible that the identifier they utilize could be different on each server. For example, on server 1, they could be joe@example.com, whereas on server 2, they are joe.smith@example.com. In cases where the identifiers are not equivalent, a mapping function needs to be provisioned. This ideally happens on the server performing the back-end subscription.

Secondly, the unioned model will result in back-end subscriptions extending from one presence server to another. These subscriptions, though made by the presence server, need to be made on behalf of the user that originally requested the presence state of the presentity. Since the presence server extending the back-end subscription will often not have credentials to claim the identity of the watcher, asserted identity using techniques like P-Asserted-Identity [RFC3325] is required, along with the associated trust relationships between servers.

The principal challenge in a unioned presence model is policy, including both authorization and composition policies.
There are 817 three potential solutions to the administration of policy in the 818 hierarchical model (only two of which apply in the peer model, as 819 we'll discuss below. These are root-only, distributed provisioned, 820 and central provisioned. 822 3.2.2.1. Root Only 824 In the root-only policy model, authorization policy and composition 825 policy are applied only at the root of the tree. This is shown in 826 Figure 9. 828 +-----------+ 829 *-----------* | | 830 |Auth and |---->| Presence | <--- root 831 |Composition| | Server | 832 *-----------* | | 833 | | 834 +-----------+ 835 / --- 836 / ---- 837 / ---- 838 / ---- 839 V -V 840 +-----------+ +-----------+ 841 | | | | 842 | Presence | | Presence | 843 | Server | | Server | 844 | | | | 845 | | | | 846 +-----------+ +-----------+ 847 | --- 848 | ----- 849 | ----- 850 | ----- 851 | ----- 852 | ----- 853 V --V 854 +-----------+ +-----------+ 855 | | | | 856 | Presence | | Presence | 857 | Server | | Server | 858 | | | | 859 | | | | 860 +-----------+ +-----------+ 862 Figure 9: Root Only 864 As long as the subscription request came from its parent, every child 865 presence server would automatically accept the subscription, and 866 provide notifications containing the full presence state it is aware 867 of. Any composition performed by a child presence server would need 868 to be lossless, in that it fully combines the source data without 869 loss of information, and also be done without any per-user 870 provisioning or configuration, operating in a default or 871 administrator-provisioned mode of operation. 873 The root-only model has the benefit that it requires the user to 874 provision policy in a single place (the root). However, it has the 875 drawback that the composition and policy processing may be performed 876 very poorly. Presumably, the purpose of the multiplicity of presence 877 servers is because each has access to and specializes in manipulation 878 of certain pieces of presence state. For example, if a child server 879 provides geolocation information, the root presence server may not 880 have sufficient authorization policy capabilities to allow the user 881 to manage how that geolocation information is provided to watchers. 883 3.2.2.2. Distributed Provisioning 885 The distributed provisioned model looks exactly like the diagram of 886 Figure 8. Each presence server is separately provisioned with its 887 own policies, including what users are allowed to watch, what 888 presence data they will get, and how it will be composed. 890 One immediate concern is whether the overall policy processing, when 891 performed independently at each server, is consistent, sane, and 892 provides reasonable degrees of privacy. It turns out that it can, if 893 some guidelines are followed. 895 Firstly, consider basic "yes/no" authorization policies. Lets say a 896 presentity, Alice, provides an authorization policy in server 1 where 897 Bob can see her presence, but on server 2, provides a policy where 898 Bob cannot. If presence server 1 is the root, the subscription is 899 accepted there, but the back-end subscription to presence server 2 900 would be rejected. As long as presence server 1 then rejects the 901 subscription, the system provides the correct behavior. This can be 902 turned into a more general rule: 904 o To guarantee privacy safety, if the back-end subscription 905 generated by a presence server is denied, that server must deny 906 the triggering subscription in turn, regardless of its own 907 authorization policies. 
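As a rough, non-normative sketch of this rule, the fragment below shows how a presence server handling a triggering subscription might propagate a denial received on one of its back-end subscriptions. The names used here (PresenceServer, backend_subscribe, SipResponse) are hypothetical and do not correspond to any specified API; a real server would apply the same logic inside its SUBSCRIBE processing.

   import dataclasses

   @dataclasses.dataclass
   class SipResponse:
       status: int
       reason: str = ""

   class PresenceServer:
       def __init__(self, children):
           # Child presence servers configured for this presentity.
           self.children = children

       def local_policy_allows(self, watcher, presentity):
           # Placeholder for this server's own authorization rules.
           return True

       def backend_subscribe(self, child, watcher, presentity):
           # Placeholder for the back-end SUBSCRIBE toward a child
           # server; a real server would send SIP and parse the response.
           return child.handle_subscribe(watcher, presentity)

       def handle_subscribe(self, watcher, presentity):
           # Fan out back-end subscriptions to each configured child.
           for child in self.children:
               response = self.backend_subscribe(child, watcher, presentity)
               if response.status == 403:
                   # A child denied the back-end subscription: deny the
                   # triggering subscription in turn, regardless of
                   # local policy.
                   return SipResponse(403, "Denied downstream")
           # Only then consult this server's own authorization policy.
           if not self.local_policy_allows(watcher, presentity):
               return SipResponse(403, "Denied by local policy")
           return SipResponse(200, "OK")

In this toy model a leaf server (one with no children) simply applies its own policy, while a denial anywhere below always propagates upward, which is what preserves privacy safety even when the servers' policies disagree.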
909 Things get more complicated when one considers authorization policies 910 whose job is to block access to specific pieces of information, as 911 opposed to blocking a user completely. For example, lets say Alice 912 wants to allow Bob to see her presence, but not her geolocation 913 information. She provisions a rule on server 1 that blocks 914 geolocation information, but grants it on server 2. The correct mode 915 of operation in this case is that the overall system will block 916 geolocation from Bob. But will it? In fact, it will, if a few 917 additional guidelines are followed: 919 o If a presence server adds any information to a presence document 920 beyond the information received from its children, it must provide 921 authorization policies that govern the access to that information. 923 o If a presence server does not understand a piece of presence data 924 provided by its child, it should not attempt to apply its own 925 authorization policies to access of that information. 927 o A presence server should not add information to a presence 928 document that overlaps with information that can be added by its 929 parent. 931 If these rules are followed, the overall system provides privacy 932 safety and the overall policy applied is reasonable. This is because 933 these rules effectively segment the application of policy based on 934 specific data, to the servers that own the corresponding data. For 935 example, consider once more the geolocation use case described above, 936 and assume server 2 is the root. If server 1 has access to, and 937 provides geolocation information in presence documents it produces, 938 then server 1 would be the only one to provide authorization policies 939 governing geolocation. Server 2 would receive presence documents 940 from server 1 containing (or not) geolocation, but since it doesn't 941 provide or control geolocation, it lets that information pass 942 through. Thus, the overall presence document provided to the watcher 943 will containg gelocation if Alice wanted it to, and not otherwise, 944 and the controls for access to geolocation would exist only on server 945 1. 947 The second major concern on distributed provisioning is that it is 948 confusing for users. However, in the model that is described here, 949 each server would necessarily be providing distinct rules, governing 950 the information it uniquely provides. Thus, server 2 would have 951 rules about who is allowed to see geolocation, and server 1 would 952 have rules about who is allowed to subscribe overall. Though not 953 ideal, there is certainly precedent for users configuring policies on 954 different servers based on the differing services provided by those 955 servers. Users today provision block and allow lists in email for 956 access to email servers, and separately in IM and presence 957 applications for access to IM. 959 3.2.2.3. Central Provisioning 961 The central provisioning model is a hybrid between root-only and 962 distributed provisioning. Each server does in fact execute its own 963 authorization and composition policies. However, rather than the 964 user provisioning them independently in each place, there is some 965 kind of central portal where the user provisions the rules, and that 966 portal generates policies for each specific server based on the data 967 that the corresponding server provides. This is shown in Figure 10. 969 +---------------------+ 970 |provisioning portal | 971 +---------------------+ 972 . . . . . 973 . . . . . 974 . . . . 
....................... 975 ........................... . . . . 976 . . . . . 977 . . . . . 978 . ........................... . ............. . 979 . . . . . 980 . . ...................... . . 981 . . V +-----------+ . . 982 . . *-----------* | | . . 983 . . |Auth and |---->| Presence | <--- root . . 984 . . |Composition| | Server | . . 985 . . *-----------* | | . . 986 . . | | . . 987 . . +-----------+ . . 988 . . | ---- . . 989 . . | ------- . . 990 . . | ------- . 991 . . | .------- . 992 . . V . ---V V 993 . . +-----------+ . +-----------+ 994 . . | | V | | 995 . . *-----------* | Presence | *-----------* | Presence | 996 . ....>|Auth and |-->| Server | |Auth and |-->| Server | 997 . |Composition| | | |Composition| | | 998 . *-----------* | | *-----------* | | 999 . +-----------+ +-----------+ 1000 . / -- 1001 . / ---- 1002 . / --- 1003 . / ---- 1004 . / --- 1005 . / ---- 1006 . V -V 1007 . +-----------+ +-----------+ 1008 V | | | | 1009 *-----------* | Presence | *-----------* | Presence | 1010 |Auth and |-->| Server | |Auth and |-->| Server | 1011 |Composition| | | |Composition| | | 1012 *-----------* | | *-----------* | | 1013 +-----------+ +-----------+ 1015 Figure 10: Central Provisioning 1017 Centralized provisioning brings the benefits of root-only (single 1018 point of user provisioning) with those of distributed provisioning 1019 (utilize full capabilities of all servers). Its principle drawback 1020 is that it requires another component - the portal - which can 1021 represent the union of the authorization policies supported by each 1022 server, and then delegate those policies to each corresponding 1023 server. 1025 For both the centralized and distributed provisioning approaches, the 1026 hierarchical model suffers overall from the fact that the root of the 1027 policy processing may not be tuned to the specific policy needs of 1028 the device that has subscribed. For example, in the use case of 1029 Figure 6, presence server 1 may be providing composition policies 1030 tuned to the fact that the device is wireless with limited display. 1031 Consequently, when Bob subscribes from his mobile device, is presence 1032 server 2 is the root, presence server 2 may add additional data and 1033 provide an overall presence document to the client which is not 1034 optimized for that device. This problem is one of the principal 1035 motivations for the peer model, described below. 1037 3.2.3. Presence Data 1039 The hierarhical model is based on the idea that each presence server 1040 in the chain contributes some unique piece of presence information, 1041 composing it with what it receives from its child, and passing it on. 1042 For the overall presence document to be reasonable, several 1043 guidelines need to be followed: 1045 o A presence server must be prepared to receive documents from its 1046 peer containing information that it does not understand, and to 1047 apply unioned composition policies that retain this information, 1048 adding to it the unique information it wishes to contribute. 1050 o A user interface rendering some presence document provided by its 1051 presence server must be prepared for any kind of presence document 1052 compliant to the presence data model, and must not assume a 1053 specific structure based on the limitations and implementation 1054 choices of the server to which it is paired. 
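As a rough illustration of the first guideline above, the sketch below shows one way a presence server might form a lossless union of a child server's PIDF document [RFC3863] with its own contributed elements, retaining child content even when it does not understand it. The function name is hypothetical, and namespace handling, person and device elements, and conflict resolution are deliberately ignored.

   import copy
   import xml.etree.ElementTree as ET

   PIDF_NS = "urn:ietf:params:xml:ns:pidf"

   def compose_union(child_pidf_xml, local_elements):
       # Parse the presence document received from the child server.
       child_root = ET.fromstring(child_pidf_xml)
       composed = ET.Element("{%s}presence" % PIDF_NS,
                             {"entity": child_root.get("entity", "")})
       # Retain every element the child provided, understood or not.
       for element in child_root:
           composed.append(copy.deepcopy(element))
       # Add this server's own contribution, which per the guidelines
       # above should not overlap with what the child already supplied.
       for element in local_elements:
           composed.append(element)
       return ET.tostring(composed, encoding="unicode")

The key point is simply that nothing from the child document is dropped; only elements this server itself owns are added.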
1056 If these basic rules are followed, the overall system provides 1057 functionality equivalent to the combination of the presence 1058 capabilities of the servers contained within it, which is highly 1059 desirable. 1061 3.3. Peer Model 1063 In the peer model, there is no one root. When a watcher subscribes 1064 to a presentity, that subscription is processed first by the server 1065 to which the watcher is connected (effectively acting as the root), 1066 and then the subscription is passed to other child presence servers. 1067 In essence, in the peer model, there is a per-watcher hierarchy, with 1068 the root being a function of the watcher. Consider the use case in 1069 Figure 6 If Bob has his buddy list on presence server 1, and it 1070 contains Alice, presence server 1 acts as the root, and then performs 1071 a back-end subscription to presence server 2. However, if Joe has 1072 his buddy list on presence server 2, and his buddy list contains 1073 Alice, presence server 2 acts as the root, and performs a back-end 1074 subscription to presence server 1. This is shown in Figure 11. 1076 alice@example.com alice@example.com 1077 +------------+ +------------+ 1078 | |<-------------| |<--------+ 1079 SUB | Presence | | Presence | | 1080 List w/| Server | | Server | SUB | 1081 Alice | 1 | | 2 | List w/| 1082 +---->| |------------->| | Alice | 1083 | | | | | | 1084 | +------------+ +------------+ | 1085 | \ / | 1086 | \ / | 1087 | \ / | 1088 | \ / | 1089 | \ / | 1090 | \ / | 1091 ...|........ \...................../....... .........|........ 1092 . . \ / . . . 1093 . . .\ / . . +--------+ . 1094 . | . . \ | +--------+ . . |+------+| . 1095 . | . . | |+------+| . . || || . 1096 . +---+ . . +---+ || || . . || || . 1097 . |+-+| . . |+-+| || || . . |+------+| . 1098 . |+-+| . . |+-+| |+------+| . . +--------+ . 1099 . | | . . | | +--------+ . . /------ / . 1100 . | | . . | | /------ / . . /------ / . 1101 . +---+ . . +---+ /------ / . . --------/ . 1102 . . . --------/ . . . 1103 . . . . . . 1104 ............ ............................. .................. 1106 Bob Alice Joe 1108 Figure 11: Peer Model 1110 Whereas the hierarchical model clearly provides the consistency 1111 property, it is not obvious whether a particular deployment of the 1112 peer model provides the consistency property. It ends up being a 1113 function of the composition policies of the individual servers. If 1114 Pi() represents the composition and authorization policies of server 1115 i, and takes as input one or more presence documents provided by its 1116 children, and outputs a presence document, the overall system 1117 provides consistency when: 1119 Pi(Pj()) = Pj(Pi()) 1121 which is effectively the commutativity property. 1123 3.3.1. Routing 1125 Routing in the peer model works similarly to the hierarchical model. 1126 Each presence server would be configured with the children it has 1127 when it acts as the root. The overall routing algorithm then works 1128 as follows: 1130 o If a presence server receives a subscription for a presentity from 1131 a particular watcher, and it already has a different subscription 1132 (as identified by dialog identifiers) for that presentity from 1133 that watcher, it rejects the second subscription with an 1134 indication of a loop. This algorithm does rule out the 1135 possibility of two watchers subscribing to the same presentity. 
o If a presence server receives a subscription for a presentity from a watcher and it does not yet have one for that pair, it processes it and generates back-end subscriptions to each configured child. If a back-end subscription generates an error due to a loop, it proceeds without that back-end input.

For example, consider Bob subscribing to Alice. Bob's client is supported by server 1. Server 1 has not seen this subscription before, so it acts as the root and passes it to server 2. Server 2 has not seen it before either, so it accepts it (now acting as the child) and sends the subscription to its own child, which is server 1. Server 1 has already seen the subscription, so it rejects it. Server 2 now knows it is the child, and so it generates documents with just its own data.

As in the hierarchical case, it is possible to intermix partitioned and peer models for different users. In the partitioned case, the routing devolves into the forking approach described in Section 2.2.5.

3.3.2. Policy

The policy considerations for the peer model are very similar to those of the hierarchical model. However, the root-only policy approach makes no sense in the peer model and cannot be utilized. The distributed and centralized provisioning approaches apply, and the guidelines described above produce correct results in the peer model as well.

In addition, the policy processing in the peer model eliminates the problem described in Section 3.2.2.3, namely that composition and authorization policies may be tuned to the needs of the specific device that is connected. In the hierarchical model, the wrong server for a particular device may be at the root, and the resulting presence document may be poorly suited to the consuming device. This problem is alleviated in the peer model: the server that is paired with, or tuned for, that particular user or device is always at the root of the tree, and its composition policies have the final say in how presence data is presented to the watcher on that device.

3.3.3. Presence Data

The considerations for presence data and composition in the hierarchical model apply in the peer model as well. The principal issue is consistency: whether the overall presence document for a watcher is the same regardless of which server the watcher connects from. As mentioned above, consistency is a property of commutativity of composition, which may or may not hold depending on the implementation.

Interestingly, in the use case of Figure 7, a particular user only ever has devices on a single server, and thus the peer and hierarchical models end up being the same, and consistency is provided.

4. Summary

This document does not make any recommendation as to which model is best. Each model has different areas of applicability and is appropriate in particular deployments.

5. Future Considerations

There are some additional concepts that can be considered, which have not yet been explored. One of them is routing of PUBLISH requests between systems. This can be used as part of the unioned models and requires further discussion.

6. Acknowledgements

The author would like to thank Paul Fullarton, David Williams and Paul Kyzivat for their comments.

7.
Security Considerations 1212 The principle issue in intra-domain federation is that of privacy. 1213 It is important that the system meets user expectations, and even in 1214 cases of user provisioning errors or inconsistencies, it provides 1215 appropriate levels of privacy. This is an issue in the unioned 1216 models, where user privacy policies can exist on multiple servers at 1217 the same time. The guidelines described here for authorization 1218 policies help ensure that privacy properties are maintained. 1220 8. IANA Considerations 1222 There are no IANA considerations associated with this specification. 1224 9. Informative References 1226 [RFC2778] Day, M., Rosenberg, J., and H. Sugano, "A Model for 1227 Presence and Instant Messaging", RFC 2778, February 2000. 1229 [RFC3863] Sugano, H., Fujimoto, S., Klyne, G., Bateman, A., Carr, 1230 W., and J. Peterson, "Presence Information Data Format 1231 (PIDF)", RFC 3863, August 2004. 1233 [RFC4479] Rosenberg, J., "A Data Model for Presence", RFC 4479, 1234 July 2006. 1236 [RFC3856] Rosenberg, J., "A Presence Event Package for the Session 1237 Initiation Protocol (SIP)", RFC 3856, August 2004. 1239 [RFC4662] Roach, A., Campbell, B., and J. Rosenberg, "A Session 1240 Initiation Protocol (SIP) Event Notification Extension for 1241 Resource Lists", RFC 4662, August 2006. 1243 [RFC3944] Johnson, T., Okubo, S., and S. Campos, "H.350 Directory 1244 Services", RFC 3944, December 2004. 1246 [RFC3325] Jennings, C., Peterson, J., and M. Watson, "Private 1247 Extensions to the Session Initiation Protocol (SIP) for 1248 Asserted Identity within Trusted Networks", RFC 3325, 1249 November 2002. 1251 [I-D.ietf-speermint-consolidated-presence-im-usecases] 1252 Houri, A., "Presence & Instant Messaging Peering Use 1253 Cases", 1254 draft-ietf-speermint-consolidated-presence-im-usecases-02 1255 (work in progress), July 2007. 1257 Author's Address 1259 Jonathan Rosenberg 1260 Cisco 1261 Edison, NJ 1262 US 1264 Phone: +1 973 952-5000 1265 Email: jdrosen@cisco.com 1266 URI: http://www.jdrosen.net 1268 Full Copyright Statement 1270 Copyright (C) The IETF Trust (2007). 1272 This document is subject to the rights, licenses and restrictions 1273 contained in BCP 78, and except as set forth therein, the authors 1274 retain all their rights. 1276 This document and the information contained herein are provided on an 1277 "AS IS" basis and THE CONTRIBUTOR, THE ORGANIZATION HE/SHE REPRESENTS 1278 OR IS SPONSORED BY (IF ANY), THE INTERNET SOCIETY, THE IETF TRUST AND 1279 THE INTERNET ENGINEERING TASK FORCE DISCLAIM ALL WARRANTIES, EXPRESS 1280 OR IMPLIED, INCLUDING BUT NOT LIMITED TO ANY WARRANTY THAT THE USE OF 1281 THE INFORMATION HEREIN WILL NOT INFRINGE ANY RIGHTS OR ANY IMPLIED 1282 WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. 1284 Intellectual Property 1286 The IETF takes no position regarding the validity or scope of any 1287 Intellectual Property Rights or other rights that might be claimed to 1288 pertain to the implementation or use of the technology described in 1289 this document or the extent to which any license under such rights 1290 might or might not be available; nor does it represent that it has 1291 made any independent effort to identify any such rights. Information 1292 on the procedures with respect to rights in RFC documents can be 1293 found in BCP 78 and BCP 79. 
1295 Copies of IPR disclosures made to the IETF Secretariat and any 1296 assurances of licenses to be made available, or the result of an 1297 attempt made to obtain a general license or permission for the use of 1298 such proprietary rights by implementers or users of this 1299 specification can be obtained from the IETF on-line IPR repository at 1300 http://www.ietf.org/ipr. 1302 The IETF invites any interested party to bring to its attention any 1303 copyrights, patents or patent applications, or other proprietary 1304 rights that may cover technology that may be required to implement 1305 this standard. Please address the information to the IETF at 1306 ietf-ipr@ietf.org. 1308 Acknowledgment 1310 Funding for the RFC Editor function is provided by the IETF 1311 Administrative Support Activity (IASA).