2 SIMPLE J. Rosenberg 3 Internet-Draft Cisco 4 Intended status: Informational A. Houri 5 Expires: August 24, 2008 IBM 6 February 21, 2008 8 Models for Intra-Domain Presence Federation 9 draft-ietf-simple-intradomain-federation-00 11 Status of this Memo 13 By submitting this Internet-Draft, each author represents that any 14 applicable patent or other IPR claims of which he or she is aware 15 have been or will be disclosed, and any of which he or she becomes 16 aware will be disclosed, in accordance with Section 6 of BCP 79. 18 Internet-Drafts are working documents of the Internet Engineering 19 Task Force (IETF), its areas, and its working groups. Note that 20 other groups may also distribute working documents as Internet- 21 Drafts. 23 Internet-Drafts are draft documents valid for a maximum of six months 24 and may be updated, replaced, or obsoleted by other documents at any 25 time. It is inappropriate to use Internet-Drafts as reference 26 material or to cite them other than as "work in progress." 28 The list of current Internet-Drafts can be accessed at 29 http://www.ietf.org/ietf/1id-abstracts.txt. 31 The list of Internet-Draft Shadow Directories can be accessed at 32 http://www.ietf.org/shadow.html. 34 This Internet-Draft will expire on August 24, 2008.
36 Copyright Notice 38 Copyright (C) The IETF Trust (2008). 40 Abstract 42 Presence federation involves the sharing of presence information 43 across multiple presence systems. Most often, presence federation is 44 assumed to be between different organizations, such as between two 45 enterprises or between an enterprise and a service provider. 46 However, federation can occur within a single organization or domain. 47 This can be the result of a multi-vendor network, or a consequence of 48 a large organization that requires partitioning. This document 49 examines different use cases and models for intra-domain federation. 51 Table of Contents 53 1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . . 4 54 2. Intra-Domain Federation vs. Clustering . . . . . . . . . . . . 5 55 3. Use Cases for Intra-Domain Federation . . . . . . . . . . . . 6 56 3.1. Scale . . . . . . . . . . . . . . . . . . . . . . . . . . 7 57 3.2. Organizational Structures . . . . . . . . . . . . . . . . 7 58 3.3. Multi-Vendor Requirements . . . . . . . . . . . . . . . . 7 59 3.4. Specialization . . . . . . . . . . . . . . . . . . . . . . 7 60 4. Considerations for Federation Models . . . . . . . . . . . . . 8 61 5. Partitioned . . . . . . . . . . . . . . . . . . . . . . . . . 9 62 5.1. Applicability . . . . . . . . . . . . . . . . . . . . . . 10 63 5.2. Routing . . . . . . . . . . . . . . . . . . . . . . . . . 10 64 5.2.1. Centralized Database . . . . . . . . . . . . . . . . . 11 65 5.2.2. Routing Proxy . . . . . . . . . . . . . . . . . . . . 12 66 5.2.3. Subdomaining . . . . . . . . . . . . . . . . . . . . . 13 67 5.2.4. Peer-to-Peer . . . . . . . . . . . . . . . . . . . . . 15 68 5.2.5. Forking . . . . . . . . . . . . . . . . . . . . . . . 15 69 5.2.6. Provisioned Routing . . . . . . . . . . . . . . . . . 15 70 5.3. Policy . . . . . . . . . . . . . . . . . . . . . . . . . . 15 71 5.4. Presence Data . . . . . . . . . . . . . . . . . . . . . . 16 72 6. Exclusive . . . . . . . . . . . .
. . . . . . . . . . . . . . 16 73 6.1. Routing . . . . . . . . . . . . . . . . . . . . . . . . . 17 74 6.1.1. Centralized Database . . . . . . . . . . . . . . . . . 17 75 6.1.2. Routing Proxy . . . . . . . . . . . . . . . . . . . . 18 76 6.1.3. Subdomaining . . . . . . . . . . . . . . . . . . . . . 18 77 6.1.4. Peer-to-Peer . . . . . . . . . . . . . . . . . . . . . 18 78 6.1.5. Forking . . . . . . . . . . . . . . . . . . . . . . . 18 79 6.2. Policy . . . . . . . . . . . . . . . . . . . . . . . . . . 19 80 6.3. Presence Data . . . . . . . . . . . . . . . . . . . . . . 19 81 7. Unioned . . . . . . . . . . . . . . . . . . . . . . . . . . . 19 82 7.1. Hierarchical Model . . . . . . . . . . . . . . . . . . . . 23 83 7.1.1. Routing . . . . . . . . . . . . . . . . . . . . . . . 25 84 7.1.2. Policy and Identity . . . . . . . . . . . . . . . . . 26 85 7.1.2.1. Root Only . . . . . . . . . . . . . . . . . . . . 26 86 7.1.2.2. Distributed Provisioning . . . . . . . . . . . . . 28 87 7.1.2.3. Central Provisioning . . . . . . . . . . . . . . . 29 88 7.1.3. Presence Data . . . . . . . . . . . . . . . . . . . . 31 89 7.2. Peer Model . . . . . . . . . . . . . . . . . . . . . . . . 31 90 7.2.1. Routing . . . . . . . . . . . . . . . . . . . . . . . 33 91 7.2.2. Policy . . . . . . . . . . . . . . . . . . . . . . . . 34 92 7.2.3. Presence Data . . . . . . . . . . . . . . . . . . . . 34 93 8. Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . 34 94 9. Future Considerations . . . . . . . . . . . . . . . . . . . . 35 95 10. Acknowledgements . . . . . . . . . . . . . . . . . . . . . . . 35 96 11. Security Considerations . . . . . . . . . . . . . . . . . . . 35 97 12. IANA Considerations . . . . . . . . . . . . . . . . . . . . . 35 98 13. Informative References . . . . . . . . . . . . . . . . . . . . 35 99 Authors' Addresses . . . . . . . . . . . . . . . . . . . . . . . . 36 100 Intellectual Property and Copyright Statements . . . . . . . . . . 38 102 1. 
Introduction 104 Presence refers to the ability, willingness and desire to communicate 105 across differing devices, mediums and services [RFC2778]. Presence 106 is described using presence documents [RFC3863] [RFC4479], exchanged 107 using a SIP-based event package [RFC3856]. 109 Presence federation refers to the sharing of presence information 110 across multiple presence systems. This interconnection involves 111 passing of subscriptions from one system to another, and then the 112 passing of notifications in the opposite direction. 114 Most often, presence federation is considered in the context of 115 interconnection between different domains, also known as inter-domain 116 presence federation 117 [I-D.ietf-speermint-consolidated-presence-im-usecases]. For example, 118 consider the network of Figure 1, which shows one model for inter- 119 domain federation. In this network, Alice belongs to the example.org 120 domain, and Bob belongs to the example.com domain. Alice subscribes 121 to her buddy list on her presence server (which is also acting as her 122 Resource List Server (RLS) [RFC4662]), and that list includes 123 bob@example.com. Alice's presence server generates a back-end 124 subscription on the federated link between example.org and 125 example.com. The example.com presence server authorizes the 126 subscription, and if permitted, generates notifications back to 127 Alice's presence server, which are in turn passed to Alice. 129 ............................. .............................. 130 . . . . 131 . . . . 132 . alice@example.org . . bob@example.com . 133 . +------------+ SUB . . +------------+ . 134 . | | Bob . . | | . 135 . | Presence |------------------->| Presence | . 136 . | Server | . . | Server | . 137 . | | . . | | . 138 . | |<-------------------| | . 139 . | | NOTIFY . | | . 140 . +------------+ . . +------------+ . 141 . ^ | . . ^ . 142 . SUB | | . . |PUB . 143 . Buddy | |NOTIFY . . | . 144 . List | | . . | . 145 . | | . . | . 146 .
| V . . | . 147 . +-------+ . . +-------+ . 148 . | | . . | | . 149 . | | . . | | . 150 . | | . . | | . 151 . +-------+ . . +-------+ . 152 . . . . 153 . Alice's . . Bob's . 154 . PC . . PC . 155 . . . . 156 ............................. .............................. 158 example.org example.com 160 Figure 1: Inter-Domain Model 162 However, federation can happen within a domain as well. We define 163 intra-domain federation as the interconnection of presence servers 164 within a single domain, where domain refers explicitly to the right 165 hand side of the @-sign in the SIP URI. 167 2. Intra-Domain Federation vs. Clustering 169 Intra-domain federation is the interconnection of presence servers 170 within a single domain. This is very similar to clustering, which is 171 the tight coupling of a multiplicity of physical servers to realize 172 scale and/or high availability. Consequently, it is important to 173 clarify the differences. 175 Firstly, clustering implies a tight coupling of components. 176 Clustering usually involves proprietary information sharing, such as 177 database replication and state sharing, which in turn are tightly 178 bound with the internal implementation of the product. Intra-domain 179 federation, on the other hand, is a loose coupling. There is never 180 database replication or state replication across federated systems. 182 Secondly, clustering always occurs amongst components from the same 183 vendor. This is due to the tight coupling described above. Intra- 184 domain federation, on the other hand, can occur between servers from 185 different vendors. As described below, this is one of the chief use 186 cases for intra-domain federation. 188 Thirdly, clustering is almost always invisible to users. 189 Communications between users within the same cluster almost always 190 have identical functionality to communications between users on the 191 same server within the cluster.
The cluster boundaries are 192 invisible; indeed the purpose of a cluster is to build a system which 193 behaves as if it were a single monolithic entity, even though it is 194 not. Federation, on the other hand, is often visible to users. 195 There will frequently be loss of functionality when crossing a 196 federation boundary. Though this is not a hard and fast rule, it is a common 197 differentiator. 199 Fourthly, connections between federated systems almost always involve 200 standards, whereas communications within a cluster often involve 201 proprietary mechanisms. Standards are needed for federation because 202 the federated systems can be from different vendors, and thus 203 agreement is needed to enable interoperation. 205 Finally, a cluster will often have an upper bound on its size and 206 capacity, due to some kind of constraint on the coupling between 207 nodes in the cluster. However, there is typically no limit, or a 208 much larger limit, on the number of federated systems that can be put 209 into a domain. This is a consequence of their loose coupling. 211 Though these rules are not hard and fast, they give general 212 guidelines on the differences between clustering and intra-domain 213 federation. 215 3. Use Cases for Intra-Domain Federation 217 There are several use cases that drive intra-domain federation. 219 3.1. Scale 221 One common use case for federation is an organization that is just 222 very large, and its size exceeds the capacity that a single server 223 or cluster can provide. So, instead, the domain breaks its users 224 into partitions (perhaps arbitrarily) and then uses intra-domain 225 federation to allow the overall system to scale up to arbitrary 226 sizes. This is common practice today for service providers and large 227 enterprises. 229 3.2.
Organizational Structures 231 Another use case for intra-domain federation is a multi-national 232 organization with regional IT departments, each of which supports a 233 particular set of nationalities. It is very common for each regional 234 IT department to deploy and run its own servers for its own 235 population. In that case, the domain would end up being composed of 236 the presence servers deployed by each regional IT department. 237 Indeed, in many organizations, each regional IT department might end 238 up using different vendors. This can be a consequence of differing 239 regional requirements for features (such as compliance or 240 localization support), differing sales channels and markets in which 241 vendors sell, and so on. 243 3.3. Multi-Vendor Requirements 245 Another use case for intra-domain federation is an organization that 246 requires multiple vendors for each service, in order to avoid vendor 247 lock-in and drive competition between its vendors. Since the servers 248 will come from different vendors, a natural way to deploy them is to 249 partition the users across them. Such multi-vendor networks are 250 extremely common in large service provider networks, many of which 251 have hard requirements for multiple vendors. 253 Typically, the vendors are split along geographies, often run by 254 different local IT departments. As such, this case is similar to the 255 organizational division above. 257 3.4. Specialization 259 Another use case is where certain vendors might specialize in 260 specific types of clients. For example, one vendor might provide a 261 mobile client (but no desktop client), while another provides a 262 desktop client but no mobile client. It is often the case that 263 specific client applications and devices are designed to only work 264 with their corresponding servers.
In an ideal world, clients would 265 all implement to standards and this would not happen, but in 266 practice, the vast majority of presence endpoints work only (or only 267 work well) with the server from the same vendor. A domain might want 268 each user to have both a mobile client and a desktop client, which 269 will require servers from each vendor, leading to intra-domain 270 federation. 272 Similarly, presence can contain rich information, including 273 activities of the user (such as whether they are in a meeting or on 274 the phone), their geographic location, and their mood. This presence 275 state can be determined manually (where the user enters and 276 updates the information), or automatically. Automatic determination 277 of these states is far preferable, since it puts less burden on the 278 user. Determination of these presence states is done by taking "raw" 279 data about the user, and using it to generate corresponding presence 280 states. This raw data can come from any source that has information 281 about the user, including their calendaring server, their VoIP 282 infrastructure, their VPN server, their laptop operating system, and 283 so on. Each of these components is typically made by different 284 vendors, each of which is likely to integrate that data with their 285 presence servers. Consequently, presence servers from different 286 vendors are likely to specialize in particular pieces of presence 287 data, based on the other infrastructure they provide. The overall 288 network will need to contain servers from both vendors in order to 289 combine the benefits of both. This results in intra-domain 290 federation. 292 4. Considerations for Federation Models 294 When considering architectures for intra-domain presence federation, 295 several issues need to be considered: 297 Routing: How are subscriptions routed to the right presence 298 server(s)?
This issue is more complex in intra-domain models, 299 since the right hand side of the @-sign cannot be used to perform 300 this routing. 302 Policy and Identity: Where do user policies reside, and what 303 presence server(s) are responsible for executing that policy? 304 What identities does the user have in each system and how do they 305 relate? 307 Data Ownership: Which presence servers are responsible for which 308 pieces of presence information, and how are those pieces composed 309 to form a coherent and consistent view of user presence? 311 The sections below describe several different models for intra-domain 312 federation. Each model is driven by a set of use cases, which are 313 described in an applicability subsection for each model. Each model 314 description also discusses how routing, policy, and composition work. 316 5. Partitioned 318 In the partitioned model, a single domain has a multiplicity of 319 presence servers, each of which manages a non-overlapping set of 320 users. That is, for each user in the domain, their presence data and 321 policy reside on a single server. Each "single server" may in fact 322 be a cluster. 324 Another important facet of the partitioned model is that, even though 325 users are partitioned across different servers, they each share the 326 same domain name in the right hand side of their URI, and this URI is 327 what those users use when communicating with other users both inside 328 and outside of the domain. There are many reasons why a domain would 329 want all of its users to share the same right-hand side of the @-sign 330 even though it is partitioned internally: 332 o The partitioning may reflect organizational or geographical 333 structures that a domain administrator does not want to reflect 334 externally. 336 o If each partition had a separate domain name (e.g., 337 engineering.example.com and sales.example.com), a change in a 338 user's organization would necessitate a change in their URI.
340 o For reasons of vanity, users often like to have their URI (which 341 appear on business cards, email, and so on), to be brief and 342 short. 344 o If a watcher wants to add a presentity based on username and does 345 not want to know, or does not know, which subdomain or internal 346 department the presentity belongs to, a single domain is needed. 348 This model is illustrated in Figure 2. As the model shows, the 349 domain example.com has six users across three servers, each of which 350 is handling two of the users. 352 ..................................................................... 353 . . 354 . . 355 . . 356 . joe@example.com alice@example.com padma@example.com . 357 . bob@example.com zeke@example.com hannes@example.com . 358 . +-----------+ +-----------+ +-----------+ . 359 . | | | | | | . 360 . | Server | | Server | | Server | . 361 . | 1 | | 2 | | 3 | . 362 . | | | | | | . 363 . +-----------+ +-----------+ +-----------+ . 364 . . 365 . . 366 . . 367 . example.com . 368 ..................................................................... 370 Figure 2: Partitioned Model 372 5.1. Applicability 374 The partitioned model arises naturally in larger domains, such as an 375 enterprise or service provider, where issues of scale, organizational 376 structure, or multi-vendor requirements cause the domain to be 377 managed by a multiplicity of independent servers. 379 In cases where each user has an AoR that directly points to its 380 partition (for example, us.example.com), that model becomes identical 381 to the inter-domain federated model and is not treated here further. 382 [[OPEN ISSUE: there are differences though, such as access to a 383 centralized database. Is it worth adding this as a FOURTH model?]] 385 5.2. Routing 387 The partitioned intra-domain model works almost identically to an 388 inter-domain federated model, with the primary difference being 389 routing.
In inter-domain federation, the domain part of the URI can 390 be used to route presence subscriptions from the watcher's domain to 391 the domain of the presentity. This is no longer the case in an 392 intra-domain model. Consider the case where Joe subscribes to his 393 buddy list, which is served by his presence server (server 1 in 394 Figure 2). Alice is a member of Joe's buddy list. How does server 1 395 know that the back-end subscription to Alice needs to get routed to 396 server 2? 398 5.2.1. Centralized Database 400 ..................................................................... 401 . +-----------+ . 402 . alice? | | . 403 . +---------------> | Database | . 404 . | server 2 | | . 405 . | +-------------| | . 406 . | | +-----------+ . 407 . | | . 408 . | | . 409 . | | . 410 . | | . 411 . | | . 412 . | | . 413 . | V . 414 . joe@example.com alice@example.com padma@example.com . 415 . bob@example.com zeke@example.com hannes@example.com . 416 . +-----------+ +-----------+ +-----------+ . 417 . | | | | | | . 418 . | Server | | Server | | Server | . 419 . | 1 | | 2 | | 3 | . 420 . | | | | | | . 421 . +-----------+ +-----------+ +-----------+ . 422 . . 423 . . 424 . . 425 . example.com . 426 ..................................................................... 428 Figure 3: Centralized DB 430 One solution is to rely on a common, centralized database that 431 maintains mappings of users to specific servers, shown in Figure 3. 432 When Joe subscribes to his buddy list that contains Alice, server 1 433 would query this database, asking it which server is responsible for 434 alice@example.com. The database would indicate server 2, and then 435 server 1 would generate the backend SUBSCRIBE request towards server 436 2. This is a common technique in large email systems. 
It is often 437 implemented using internal sub-domains; so that the database would 438 return alice@central.example.com to the query, and server 1 would 439 modify the Request-URI in the SUBSCRIBE request to reflect this. 441 Routing database solutions have the problem that they require 442 standardization on a common schema and database protocol in order to 443 work in multi-vendor environments. For example, LDAP and SQL are 444 both possibilities. There is variety in LDAP schema; one possibility 445 is H.350.4, which could be adapted for usage here [RFC3944]. 447 5.2.2. Routing Proxy 449 ..................................................................... 450 . +-----------+ . 451 . SUB alice | | . 452 . +---------------> | Routing | . 453 . | | Proxy | . 454 . | | | . 455 . | +-----------+ . 456 . | | . 457 . | | . 458 . | | . 459 . | |SUB Alice . 460 . | | . 461 . | | . 462 . | V . 463 . joe@example.com alice@example.com padma@example.com . 464 . bob@example.com zeke@example.com hannes@example.com . 465 . +-----------+ +-----------+ +-----------+ . 466 . | | | | | | . 467 . | Server | | Server | | Server | . 468 . | 1 | | 2 | | 3 | . 469 . | | | | | | . 470 . +-----------+ +-----------+ +-----------+ . 471 . . 472 . . 473 . . 474 . example.com . 475 ..................................................................... 477 Figure 4: Routing Proxy 479 A similar solution is to rely on a routing proxy. Instead of a 480 centralized database, there would be a centralized SIP proxy farm. 481 Server 1 would send subscriptions for users it doesn't serve to this 482 server farm, and the servers would lookup the user in a database 483 (which is now accessed only by the routing proxy), and the resulting 484 subscriptions are sent to the correct server. A redirect server can 485 be used as well, in which case the flow is very much like that of a 486 centralized database. 
488 Routing proxies have the benefit that they do not require a common 489 database schema and protocol, but they do require a centralized 490 server function that sees all subscriptions, which can be a scale 491 challenge. 493 5.2.3. Subdomaining 495 In this solution, each user is associated with a subdomain, and is 496 provisioned as part of their respective presence server using that 497 subdomain. Consequently, each presence server thinks it is its own, 498 separate domain. However, when a user adds a presentity to their 499 buddy list without the subdomain, they first consult a shared 500 database which returns the subdomained URI to subscribe to. This 501 sub-domained URI can be returned because the user provided a search 502 criterion, such as "Find Alice Chang", or provided the non-subdomained 503 URI (alice@example.com). This is shown in Figure 5. 504 ..................................................................... 505 . +-----------+ . 506 . who is Alice? | | . 507 . +---------------------->| Database | . 508 . | alice@b.example.com | | . 509 . | +---------------------| | . 510 . | | +-----------+ . 511 . | | . 512 . | | . 513 . | | . 514 . | | . 515 . | | . 516 . | | . 517 . | | . 518 . | | joe@a.example.com alice@b.example.com padma@c.example.com . 519 . | | bob@a.example.com zeke@b.example.com hannes@c.example.com . 520 . | | +-----------+ +-----------+ +-----------+ . 521 . | | | | | | | | . 522 . | | | Server | | Server | | Server | . 523 . | | | 1 | | 2 | | 3 | . 524 . | | | | | | | | . 525 . | | +-----------+ +-----------+ +-----------+ . 526 . | | ^ . 527 . | | | . 528 . | | | . 529 . | | | . 530 . | | | . 531 . | | | . 532 . | | +-----------+ . 533 . | +-------------------->| | . 534 . | | Watcher | . 535 . | | | . 536 . +-----------------------| | . 537 . +-----------+ . 538 . . 539 . . 540 . . 541 . example.com . 542 .....................................................................
544 Figure 5: Subdomaining 546 Subdomaining puts the burden of routing on the client. The 547 servers can be completely unaware that they are actually part of the 548 same domain, and integrate with each other exactly as they would in 549 an inter-domain model. However, the client is given the burden of 550 determining the subdomained URI from the original URI or buddy name, 551 and then subscribing directly to that server, or including the 552 subdomained URI in their buddylist. The client is also responsible 553 for hiding the subdomain structure from the user and storing the 554 mapping information locally for extended periods of time. In cases 555 where users have buddy list subscriptions, the client will need to 556 resolve the buddy name into the sub-domained version before adding to 557 their buddy list. 559 5.2.4. Peer-to-Peer 561 Another model is to utilize a peer-to-peer network amongst all of the 562 servers, and store URI to server mappings in the distributed hash 563 table it creates. This has some nice properties but does require a 564 standardized and common p2p protocol across vendors, which does not 565 exist today. 567 5.2.5. Forking 569 Yet another solution is to utilize forking. Each server is 570 provisioned with the domain names or IP addresses of the other 571 servers, but not with the mapping of users to each of those servers. 572 When a server needs to create a back-end subscription for a user it 573 doesn't have, it forks the SUBSCRIBE request to all of the other 574 servers. This request will be rejected with a 404 on the servers 575 which do not handle that user, and accepted on the one that does. 576 The approach assumes that presence servers can differentiate inbound 577 SUBSCRIBE requests from end users (which cause back-end subscriptions 578 to get forked) and from other servers (which do not cause back-end 579 subscriptions).
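The forking behavior just described can be sketched as follows. The peer host names and the send_subscribe stand-in for a real SIP stack are illustrative assumptions; a real implementation would also fork in parallel rather than sequentially, and would retain the dialog with the accepting server rather than merely recording it.

```python
# Hypothetical sketch of the forking approach of Section 5.2.5:
# fork the back-end SUBSCRIBE to every peer server; the one server
# that handles the user accepts (2xx), the others answer 404.

PEER_SERVERS = ["server1.internal", "server2.internal", "server3.internal"]

def fork_backend_subscription(presentity, send_subscribe):
    """Send the back-end SUBSCRIBE for 'presentity' to each peer and
    return the peer that accepted it, or None if nobody serves the
    user. send_subscribe(peer, presentity) returns a status code."""
    for peer in PEER_SERVERS:
        status = send_subscribe(peer, presentity)
        if 200 <= status < 300:
            return peer          # found the authoritative server
    return None                  # user not provisioned on any server

# With three servers, of which only server2 serves alice:
responses = {"server1.internal": 404, "server2.internal": 200,
             "server3.internal": 404}
home = fork_backend_subscription(
    "alice@example.com", lambda peer, user: responses[peer])
print(home)  # server2.internal
```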
This approach works very well in organizations with 580 a relatively small number of servers (say, two or three), and becomes 581 increasingly ineffective with more and more servers. 583 5.2.6. Provisioned Routing 585 Yet another solution is to provision each server with each user, but 586 for servers that don't actually serve the user, the provisioning 587 merely tells the server where to proxy the request. This solution 588 has extremely poor operational properties, requiring multiple points 589 of provisioning across disparate systems. 591 5.3. Policy 593 A fundamental characteristic of the partitioned model is that there 594 is a single point of policy enforcement (authorization rules and 595 composition policy) for each user. 597 5.4. Presence Data 599 Another fundamental characteristic of the partitioned model is that 600 the presence data for a user is managed authoritatively on a single 601 server. In the example of Figure 2, the presence data for Alice 602 lives on server 2 alone (recall that server 2 may be physically 603 implemented as a multiplicity of boxes from a single vendor, each of 604 which might have a portion of the presence data, but externally it 605 appears to behave as if it were a single server). A subscription 606 from Bob to Alice may cause a transfer of presence information from 607 server 2 to server 1, but server 2 remains authoritative and is the 608 single root source of all data for Alice. 610 6. Exclusive 612 In the static partitioned model described above, the mapping of a 613 user to a specific server is done by some off-line configuration 614 means. The configuration assigns a user to a specific server; in 615 order to use a different server, the user needs to change the 616 configuration (or ask the administrator to do so). 618 In some environments, this restriction of a user to a particular 619 server may be a limitation.
Instead, it is desirable to allow users 620 to freely move back and forth between systems, though using only a 621 single one at a time. This is called Exclusive Federation. 623 Some use cases where this can happen are: 625 o The organization is using multiple systems where each system has 626 its own characteristics. For example, one server is tailored to 627 work with some CAD (Computer Aided Design) system and provides 628 presence and IM functionality along with the CAD system. The 629 other server is the default presence and IM server of the 630 organization. Users wish to be able to work with either system 631 as they choose; they also wish to be able to see presence and 632 exchange IM with their buddies no matter which system their buddies are 633 currently using. 635 o An enterprise wishes to test presence servers from two different 636 vendors. In order to do so, they wish to install a server from 637 each vendor and see which of the servers is better. In the static 638 partitioned model, a user would have to be statically assigned to a 639 particular server and could not compare the features of the two 640 servers. In the dynamic partitioned model, a user may choose on 641 a whim which of the servers being tested to use. They can 642 move back and forth in case of problems. 644 o An enterprise is currently using servers from one vendor, but has 645 decided to add a second. They would like to gradually migrate 646 users from one to the other. In order to make a smooth 647 transition, users can move back and forth over a period of a few 648 weeks until they are finally required to stop going back, and get 649 deleted from their old system. 651 o A domain is using multiple clusters from the same vendor. To 652 simplify administration, users can connect to any of the clusters, 653 perhaps one local to their site. To accomplish this, the clusters 654 are connected using exclusive federation. 656 6.1.
Routing 658 Due to its nature, routing in the Exclusive federation model is more 659 complex than the routing in the partitioned model. 661 The association of a user with a server cannot be known until the user 662 publishes a presence document to a specific server or registers with 663 that server. Therefore, when Alice subscribes to Bob's presence 664 information, the server that serves the subscription cannot know on 665 which server Bob's presence information may be published. 667 In addition, a server may get a subscription to a user who 668 is not yet on any server. The server should respond with an empty 669 NOTIFY and wait for the user to appear on one of the servers. Once 670 the user appears on one of the servers, the server should send the 671 subscription to that server. 673 A user may use two servers at the same time and have his/her 674 presence information on two servers. This should be regarded as a 675 conflict, and one of the presence clients should be terminated or 676 redirected to the other server. 678 Fortunately, most of the routing approaches described for partitioned 679 federation, excepting provisioned routing, can be adapted for 680 exclusive federation. 682 6.1.1. Centralized Database 684 A centralized database can be used, but will need to support a test- 685 and-set functionality. With it, servers can check if a user is 686 already on a specific server, and bind the user to a server if the 687 user is not on another server. If the user is already on another 688 server, a redirect (or some other error message) will be sent to that 689 user. 691 6.1.2. Routing Proxy 693 The routing proxy mechanism can be used. However, it requires 694 signaling from each presence server to the routing proxy to indicate 695 that the user is now located on that server. This can be done by 696 having each server send a REGISTER request to the routing proxy for 697 that user, setting the contact to itself.
The routing proxy 698 would have a rule which requires only a single registered contact per 699 user. Using the registration event package [RFC3680], each presence 700 server subscribes to the registration state for each user it is 701 managing. If the routing proxy sees a duplicate registration, it 702 allows it and then uses a reg-event notification to the other 703 presence server to de-register the user. 705 6.1.3. Subdomaining 707 Subdomaining is just a variation on the centralized database. 708 Assuming the database supports a test-and-set mechanism, it can be 709 used for exclusive federation. 711 6.1.4. Peer-to-Peer 713 Peer-to-peer routing is particularly well suited for exclusive 714 federation. Essentially, it provides a distributed registrar 715 function that maps each AoR to the particular server the user is 716 currently registered against. When a UA registers to a particular 717 server, that registration is written into the P2P network, such that 718 queries for that user are directed to that presence server. 720 6.1.5. Forking 722 When a subscription is received by a server and the server does not 723 already know that it serves the user, the server will fork the 724 subscription request to all the servers. If a response is not 725 received within a certain timeout, the server knows that the user 726 is not served by any server yet and should keep the subscription 727 waiting until the user appears on one of the servers. 729 In order for the servers that now have a "subscription in 730 waiting" for the user to learn that the user is now available on one of 731 the servers, there needs to be some broadcast or subscription mechanism 732 between the servers announcing the 733 appearance of the user on one of the servers. This can be done using 734 the registration event package. 736 6.2. Policy 738 In the exclusive federation model, policy becomes more complicated.
739 In the partitioned model, a user has their presence managed by the 740 same server all of the time. Thus, their policy can be provisioned 741 and executed there. With exclusive federation, a user can freely 742 move back and forth between servers. Consequently, their presence 743 will be managed by only a single server at one time, but that server 744 can change. 746 The simplest solution is just to require the user to separately 747 provision and manage policies on each server. In many of the use 748 cases above, exclusive federation is a transient situation that 749 eventually settles into partitioned federation. Thus, it may not be 750 unreasonable to require the user to manage both policies during the 751 transition. It is also possible that each server provides different 752 capabilities, and thus a user will receive different service 753 depending on which server they are connected to. Again, this may be 754 an acceptable limitation for the use cases it supports. 756 6.3. Presence Data 758 As with the partitioned model, in the exclusive model, the presence 759 data for a user resides on a single server at any given time. This 760 server owns all composition policies and procedures for collecting 761 and distributing presence data. 763 7. Unioned 765 In the unioned model, each user is actually served by more than one 766 presence server. In this case, "served" implies two properties: 768 o A user is served by a server when that user is provisioned on that 769 server, and 771 o That server is authoritative for some piece of presence state 772 associated with that user 774 In essence, in the unioned model, a user's presence data is 775 distributed across many presence servers, while in the partitioned 776 and exclusive models, it is centralized in a single presence server. 777 Furthermore, it is possible that the user is provisioned with 778 different identifiers on each server.
780 This definition speaks specifically to ownership of presence data as 781 the key property. This rules out several cases which involve a mix 782 of servers within the enterprise, but do not constitute intra-domain 783 unioned federation: 785 o A user utilizes an outbound SIP proxy from one vendor, which 786 connects to a presence server from another vendor. Even though 787 this will result in presence subscriptions and notifications 788 flowing between servers, and the user is potentially provisioned 789 on both, there is no authoritative presence state in the outbound 790 proxy, and so this is not intra-domain federation. 792 o A user utilizes a Resource List Server (RLS) from one vendor, 793 which holds their buddy list, and accesses presence data from a 794 presence server from another vendor. This case is actually the 795 partitioned case, not the unioned case. Effectively, the buddy 796 list itself is another "user", and it exists entirely on one 797 server (the RLS), while the actual users on the buddy list exist 798 entirely within another. Consequently, this case does not have 799 the property that a single presence resource exists on multiple 800 servers at the same time. 802 o A user subscribes to the presence of a presentity. This 803 subscription is first passed to their presence server, which acts 804 as a proxy, and instead sends the subscription to the UA of the 805 user, which acts as a presence edge server. In this model, it may 806 appear as if there are two presence servers for the user (the 807 actual server and their UA). However, the server is acting as a 808 proxy in this case. There is only one source of presence 809 information. 811 The unioned model arises naturally when a user is using devices from 812 different vendors, each of which has its own server, or 813 when a user is using different servers for different parts of their 814 presence state.
For example, Figure 6 shows the case where a single 815 user has a mobile client connected to presence server one and a 816 desktop client connected to presence server two. 818 alice@example.com alice@example.com 819 +------------+ +------------+ 820 | | | | 821 | Presence | | Presence | 822 | Server |--------------| Server | 823 | 1 | | 2 | 824 | | | | 825 | | | | 826 +------------+ +------------+ 827 \ / 828 \ / 829 \ / 830 \ / 831 \ / 832 \ / 833 \...................../....... 834 \ / . 835 .\ / . 836 . \ | +--------+ . 837 . | |+------+| . 838 . +---+ || || . 839 . |+-+| || || . 840 . |+-+| |+------+| . 841 . | | +--------+ . 842 . | | /------ / . 843 . +---+ /------ / . 844 . --------/ . 845 . . 846 ............................. 848 Alice 850 Figure 6: Unioned Case 1 852 As another example, a user may have two devices from the same vendor, 853 both of which are associated with a single presence server, but that 854 presence server has incomplete presence state about the user. 855 Another presence server in the enterprise, due to its access to state 856 for that user, has additional data which needs to be accessed by the 857 first presence server in order to provide a comprehensive view of 858 presence data. This is shown in Figure 7. 860 alice@example.com alice@example.com 861 +------------+ +------------+ 862 | | | | 863 | Presence | | Presence | 864 | Server |--------------| Server | 865 | 1 | | 2 | 866 | | | | 867 | | | | 868 +------------+ +------------+ 869 ^ | | 870 | | | 871 | | | 872 ///-------\\\ | | 873 ||| specialized ||| | | 874 || state || | | 875 \\\-------/// | | 876 ............................. 877 . | | . 878 . | | +--------+ . 879 . | |+------+| . 880 . +---+ || || . 881 . |+-+| || || . 882 . |+-+| |+------+| . 883 . | | +--------+ . 884 . | | /------ / . 885 . +---+ /------ / . 886 . --------/ . 887 . . 888 . . 889 .............................
890 Alice 892 Figure 7: Unioned Case 2 894 Another use case for unioned federation is subscriber moves. 895 Consider a domain which uses multiple presence servers, typically 896 running in a partitioned configuration. The servers are organized 897 regionally so that each user is served by a presence server handling 898 their region. A user is moving from one region to a new job in 899 another, while retaining their SIP URI. In order to provide a smooth 900 transition, ideally the system would provide a "make before break" 901 functionality, allowing the user to be added onto the new server 902 prior to being removed from the old. During the transition period, 903 especially if the user had multiple clients to be moved, they can end 904 up with presence state existing on both servers at the same time. 906 Another use case for unioned federation is multiple providers. 908 Consider a user in an enterprise, alice@example.com. Example.com has 909 a presence server deployed for all of its users. In addition, Alice 910 uses a public IM and presence provider. Alice would like users 911 who connect to the public provider to see presence state that comes from 912 example.com, and vice versa. Interestingly, this use case isn't 913 intra-domain federation at all, but rather, unioned inter-domain 914 federation. 916 7.1. Hierarchical Model 918 The unioned intra-domain federation model can be realized in one of two ways 919 - using a hierarchical structure or a peer structure. 921 In the hierarchical model, presence subscriptions for the presentity 922 in question are always routed first to one of the servers - the root 923 - and then the root presence server subscribes to the next layer of 924 presence servers (which may, in turn, subscribe to the presence state 925 in other presence servers).
Each presence server composes the 926 presence information it receives from its children, applying local 927 authorization and composition policies, and then passes the results 928 up to the higher layer. This is shown in Figure 8. 930 +-----------+ 931 *-----------* | | 932 |Auth and |---->| Presence | <--- root 933 |Composition| | Server | 934 *-----------* | | 935 | | 936 +-----------+ 937 / --- 938 / ---- 939 / ---- 940 / ---- 941 V -V 942 +-----------+ +-----------+ 943 | | | | 944 *-----------* | Presence | *-----------* | Presence | 945 |Auth and |-->| Server | |Auth and |-->| Server | 946 |Composition| | | |Composition| | | 947 *-----------* | | *-----------* | | 948 +-----------+ +-----------+ 949 | --- 950 | ----- 951 | ----- 952 | ----- 953 | ----- 954 | ----- 955 V --V 956 +-----------+ +-----------+ 957 | | | | 958 *-----------* | Presence | *-----------* | Presence | 959 |Auth and |-->| Server | |Auth and |-->| Server | 960 |Composition| | | |Composition| | | 961 *-----------* | | *-----------* | | 962 +-----------+ +-----------+ 964 Figure 8: Hierarchical Model 966 It's important to note that this hierarchy defines the sequence of 967 presence composition and policy application, and does not imply a 968 literal message flow. As an example, consider once more the use case 969 of Figure 6. Assume that presence server 1 is the root, and presence 970 server 2 is its child. When Bob's PC subscribes to Bob's buddy list 971 (on presence server 2), that subscription will first go to presence 972 server 2. However, that presence server knows that it is not the 973 root in the hierarchy, and despite the fact that it has presence 974 state for Alice (who is on Bob's buddy list), it creates a back-end 975 subscription to presence server 1. Presence server 1, as the root, 976 subscribes to Alice's state at presence server 2. Now, since this 977 subscription came from presence server 1 and not Bob directly, 978 presence server 2 provides the presence state.
This is received at 979 presence server 1, which composes the data with its own state for 980 Alice, and then provides the results back to presence server 2, 981 which, having acted as an RLS, forwards the results back to Bob. 982 Consequently, this flow, as a message sequence diagram, involves 983 notifications passing from presence server 2, to server 1, back to 984 server 2. However, in terms of composition and policy, it was done 985 first at the child node (presence server 2), and then those results 986 used at the parent node (presence server 1). 988 7.1.1. Routing 990 In the hierarchical model, each presence server needs to be 991 provisioned with the root, its parent and its children presence 992 servers for each presentity it handles. These relationships could in 993 fact be different on a presentity-by-presentity basis; however, this 994 is complex to manage. In all likelihood, the parent and child 995 relationships are identical for each presentity. The overall 996 routing algorithm can be described as follows: 998 o If a SUBSCRIBE is received from the parent node for this 999 presentity, perform subscriptions to each child node for this 1000 presentity, and then take the results, apply composition and 1001 authorization policies, and propagate to the parent. 1003 o If a SUBSCRIBE is received from a node that is not the parent node 1004 for this presentity, proxy the SUBSCRIBE to the parent node. This 1005 includes cases where the node that sent the SUBSCRIBE is a child 1006 node. 1008 This routing rule is relatively simple, and in a two-server system is 1009 almost trivial to provision. Interestingly, it works in cases where 1010 some users are partitioned and some are unioned. When the users are 1011 partitioned, this routing algorithm devolves into the forking 1012 algorithm of Section 5.2.5. This points to the forking algorithm as 1013 a good choice, since it can be used for both partitioned and 1014 unioned federation.
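The two routing rules above can be sketched in a few lines. This is a hypothetical illustration (the server names and the Python form are assumptions, not part of the draft):

```python
def route_subscribe(sender, parent, children):
    """Sketch of the hierarchical routing rule: a SUBSCRIBE arriving
    from the parent node fans out to every child node; a SUBSCRIBE
    from anyone else (a watcher, or even a child node) is proxied up
    to the parent. The root has no parent, so it always fans out."""
    if parent is None or sender == parent:
        return ("fan_out", list(children))
    return ("proxy_to_parent", parent)

# The root (no parent) fans a watcher's SUBSCRIBE out to its children.
assert route_subscribe("bob-ua", None, ["ps2"]) == ("fan_out", ["ps2"])
# A non-root server proxies a watcher's SUBSCRIBE up to its parent,
# even if the sender happens to be one of its own children.
assert route_subscribe("bob-ua", "ps1", []) == ("proxy_to_parent", "ps1")
# A SUBSCRIBE arriving from the parent fans out downward.
assert route_subscribe("ps1", "ps1", ["ps3"]) == ("fan_out", ["ps3"])
```

Note how, when a server has no children, the "fan out" step is empty and the server simply answers with its own state, which is how the partitioned case devolves into forking.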
1016 An important property of the routing in the hierarchical model is 1017 that the sequence of composition and policy operations is identical 1018 for all watchers to that presentity, regardless of which presence 1019 server they are associated with. The result is that the overall 1020 presence state provided to a watcher is always consistent and 1021 independent of the server the watcher is connected to. We call this 1022 property the *consistency property*, and it is an important metric in 1023 assessing the correctness of a federated presence system. 1025 7.1.2. Policy and Identity 1027 Policy and identity are a clear challenge in the unioned model. 1029 Firstly, since a user is provisioned on many servers, it is possible 1030 that the identifier they utilize could be different on each server. 1031 For example, on server 1, they could be joe@example.com, whereas on 1032 server 2, they are joe.smith@example.com. In cases where the 1033 identifiers are not equivalent, a mapping function needs to be 1034 provisioned. This ideally happens on the server performing the back- 1035 end subscription. 1037 Secondly, the unioned model will result in back-end subscriptions 1038 extending from one presence server to another presence server. These 1039 subscriptions, though made by the presence server, need to be made 1040 on behalf of the user that originally requested the presence state of 1041 the presentity. Since the presence server extending the back-end 1042 subscription will not often have credentials to claim the identity of the 1043 watcher, asserted identity using techniques like P-Asserted-Identity 1044 [RFC3325] are required, along with the associated trust relationships 1045 between servers. Optimizations, such as view sharing 1046 [I-D.rosenberg-simple-view-sharing], can help improve performance. 1048 The principal challenge in a unioned presence model is policy, 1049 including both authorization and composition policies.
There are 1050 three potential solutions to the administration of policy in the 1051 hierarchical model (only two of which apply in the peer model, as 1052 we'll discuss below). These are root-only, distributed provisioning, 1053 and central provisioning. 1055 7.1.2.1. Root Only 1057 In the root-only policy model, authorization policy and composition 1058 policy are applied only at the root of the tree. This is shown in 1059 Figure 9. 1061 +-----------+ 1062 *-----------* | | 1063 |Auth and |---->| Presence | <--- root 1064 |Composition| | Server | 1065 *-----------* | | 1066 | | 1067 +-----------+ 1068 / --- 1069 / ---- 1070 / ---- 1071 / ---- 1072 V -V 1073 +-----------+ +-----------+ 1074 | | | | 1075 | Presence | | Presence | 1076 | Server | | Server | 1077 | | | | 1078 | | | | 1079 +-----------+ +-----------+ 1080 | --- 1081 | ----- 1082 | ----- 1083 | ----- 1084 | ----- 1085 | ----- 1086 V --V 1087 +-----------+ +-----------+ 1088 | | | | 1089 | Presence | | Presence | 1090 | Server | | Server | 1091 | | | | 1092 | | | | 1093 +-----------+ +-----------+ 1095 Figure 9: Root Only 1097 As long as the subscription request came from its parent, every child 1098 presence server would automatically accept the subscription, and 1099 provide notifications containing the full presence state it is aware 1100 of. Any composition performed by a child presence server would need 1101 to be lossless, in that it fully combines the source data without 1102 loss of information, and also be done without any per-user 1103 provisioning or configuration, operating in a default or 1104 administrator-provisioned mode of operation. 1106 The root-only model has the benefit that it requires the user to 1107 provision policy in a single place (the root). However, it has the 1108 drawback that the composition and policy processing may be performed 1109 very poorly. Presumably, there are multiple presence servers in the 1110 first place because each of them has a particular speciality.
That 1111 speciality may be lost in the root-only model. For example, if a 1112 child server provides geolocation information, the root presence 1113 server may not have sufficient authorization policy capabilities to 1114 allow the user to manage how that geolocation information is provided 1115 to watchers. 1117 7.1.2.2. Distributed Provisioning 1119 The distributed provisioned model looks exactly like the diagram of 1120 Figure 8. Each presence server is separately provisioned with its 1121 own policies, including what users are allowed to watch, what 1122 presence data they will get, and how it will be composed. 1124 One immediate concern is whether the overall policy processing, when 1125 performed independently at each server, is consistent, sane, and 1126 provides reasonable degrees of privacy. It turns out that it can, if 1127 some guidelines are followed. 1129 Firstly, consider basic "yes/no" authorization policies. Let's say a 1130 presentity, Alice, provides an authorization policy in server 1 where 1131 Bob can see her presence, but on server 2, provides a policy where 1132 Bob cannot. If presence server 1 is the root, the subscription is 1133 accepted there, but the back-end subscription to presence server 2 1134 would be rejected. As long as presence server 1 then rejects the 1135 subscription, the system provides the correct behavior. This can be 1136 turned into a more general rule: 1138 o To guarantee privacy safety, if the back-end subscription 1139 generated by a presence server is denied, that server must deny 1140 the triggering subscription in turn, regardless of its own 1141 authorization policies. This means that a presence server cannot 1142 send notifications on its own until it has confirmed subscriptions 1143 from downstream servers. 1145 Things get more complicated when one considers authorization policies 1146 whose job is to block access to specific pieces of information, as 1147 opposed to blocking a user completely.
For example, let's say Alice 1148 wants to allow Bob to see her presence, but not her geolocation 1149 information. She provisions a rule on server 1 that blocks 1150 geolocation information, but grants it on server 2. The correct mode 1151 of operation in this case is that the overall system will block 1152 geolocation from Bob. But will it? In fact, it will, if a few 1153 additional guidelines are followed: 1155 o If a presence server adds any information to a presence document 1156 beyond the information received from its children, it must provide 1157 authorization policies that govern the access to that information. 1159 o If a presence server does not understand a piece of presence data 1160 provided by its child, it should not attempt to apply its own 1161 authorization policies to access of that information. 1163 o A presence server should not add information to a presence 1164 document that overlaps with information that can be added by its 1165 parent. Of course, it is very hard for a presence server to know 1166 whether this information overlaps. Consequently, provisioned 1167 composition rules will be required to realize this. 1169 If these rules are followed, the overall system provides privacy 1170 safety and the overall policy applied is reasonable. This is because 1171 these rules effectively segment the application of policy based on 1172 specific data, to the servers that own the corresponding data. For 1173 example, consider once more the geolocation use case described above, 1174 and assume server 2 is the root. If server 1 has access to, and 1175 provides geolocation information in presence documents it produces, 1176 then server 1 would be the only one to provide authorization policies 1177 governing geolocation. Server 2 would receive presence documents 1178 from server 1 containing (or not) geolocation, but since it doesn't 1179 provide or control geolocation, it lets that information pass 1180 through.
Thus, the overall presence document provided to the watcher 1181 will contain geolocation if Alice wanted it to, and not otherwise, and 1182 the controls for access to geolocation would exist only on server 1. 1184 The second major concern with distributed provisioning is that it is 1185 confusing for users. However, in the model that is described here, 1186 each server would necessarily be providing distinct rules, governing 1187 the information it uniquely provides. Thus, server 1 would have 1188 rules about who is allowed to see geolocation, and server 2 would 1189 have rules about who is allowed to subscribe overall. Though not 1190 ideal, there is certainly precedent for users configuring policies on 1191 different servers based on the differing services provided by those 1192 servers. Users today provision block and allow lists in email for 1193 access to email servers, and separately in IM and presence 1194 applications for access to IM. 1196 7.1.2.3. Central Provisioning 1198 The central provisioning model is a hybrid between root-only and 1199 distributed provisioning. Each server does in fact execute its own 1200 authorization and composition policies. However, rather than the 1201 user provisioning them independently in each place, there is some 1202 kind of central portal where the user provisions the rules, and that 1203 portal generates policies for each specific server based on the data 1204 that the corresponding server provides. This is shown in Figure 10. 1206 +---------------------+ 1207 |provisioning portal | 1208 +---------------------+ 1209 . . . . . 1210 . . . . . 1211 . . . . ....................... 1212 ........................... . . . . 1213 . . . . . 1214 . . . . . 1215 . ........................... . ............. . 1216 . . . . . 1217 . . ...................... . . 1218 . . V +-----------+ . . 1219 . . *-----------* | | . . 1220 . . |Auth and |---->| Presence | <--- root . . 1221 . . |Composition| | Server | . . 1222 . .
*-----------* | | . . 1223 . . | | . . 1224 . . +-----------+ . . 1225 . . | ---- . . 1226 . . | ------- . . 1227 . . | ------- . 1228 . . | .------- . 1229 . . V . ---V V 1230 . . +-----------+ . +-----------+ 1231 . . | | V | | 1232 . . *-----------* | Presence | *-----------* | Presence | 1233 . ....>|Auth and |-->| Server | |Auth and |-->| Server | 1234 . |Composition| | | |Composition| | | 1235 . *-----------* | | *-----------* | | 1236 . +-----------+ +-----------+ 1237 . / -- 1238 . / ---- 1239 . / --- 1240 . / ---- 1241 . / --- 1242 . / ---- 1243 . V -V 1244 . +-----------+ +-----------+ 1245 V | | | | 1246 *-----------* | Presence | *-----------* | Presence | 1247 |Auth and |-->| Server | |Auth and |-->| Server | 1248 |Composition| | | |Composition| | | 1249 *-----------* | | *-----------* | | 1250 +-----------+ +-----------+ 1252 Figure 10: Central Provisioning 1254 Centralized provisioning combines the benefits of root-only (a single 1255 point of user provisioning) with those of distributed provisioning 1256 (utilizing the full capabilities of all servers). Its principal drawback 1257 is that it requires another component - the portal - which can 1258 represent the union of the authorization policies supported by each 1259 server, and then delegate those policies to each corresponding 1260 server. 1262 For both the centralized and distributed provisioning approaches, the 1263 hierarchical model suffers overall from the fact that the root of the 1264 policy processing may not be tuned to the specific policy needs of 1265 the device that has subscribed. For example, in the use case of 1266 Figure 6, presence server 1 may be providing composition policies 1267 tuned to the fact that the device is wireless with limited display.
1268 Consequently, when Bob subscribes from his mobile device, if presence 1269 server 2 is the root, presence server 2 may add additional data and 1270 provide an overall presence document to the client which is not 1271 optimized for that device. This problem is one of the principal 1272 motivations for the peer model, described below. 1274 7.1.3. Presence Data 1276 The hierarchical model is based on the idea that each presence server 1277 in the chain contributes some unique piece of presence information, 1278 composing it with what it receives from its child, and passing it on. 1279 For the overall presence document to be reasonable, several 1280 guidelines need to be followed: 1282 o A presence server must be prepared to receive documents from its 1283 peer containing information that it does not understand, and to 1284 apply unioned composition policies that retain this information, 1285 adding to it the unique information it wishes to contribute. 1287 o A user interface rendering some presence document provided by its 1288 presence server must be prepared for any kind of presence document 1289 compliant to the presence data model, and must not assume a 1290 specific structure based on the limitations and implementation 1291 choices of the server to which it is paired. 1293 If these basic rules are followed, the overall system provides 1294 functionality equivalent to the combination of the presence 1295 capabilities of the servers contained within it, which is highly 1296 desirable. 1298 7.2. Peer Model 1300 In the peer model, there is no one root. When a watcher subscribes 1301 to a presentity, that subscription is processed first by the server 1302 to which the watcher is connected (effectively acting as the root), 1303 and then the subscription is passed to other child presence servers. 1304 In essence, in the peer model, there is a per-watcher hierarchy, with 1305 the root being a function of the watcher.
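The per-watcher root selection can be sketched very simply. This is a hypothetical illustration (server names assumed, not from the draft):

```python
def peer_route(watcher_home, all_servers):
    """Peer model sketch: the server the watcher is connected to
    acts as the root for that watcher, and issues back-end
    subscriptions to every other peer server that may hold
    presence state for the presentity."""
    root = watcher_home
    backends = [s for s in all_servers if s != root]
    return root, backends

# Bob's buddy list lives on server 1, so server 1 is his root.
assert peer_route("ps1", ["ps1", "ps2"]) == ("ps1", ["ps2"])
# Joe's buddy list lives on server 2, so server 2 is his root.
assert peer_route("ps2", ["ps1", "ps2"]) == ("ps2", ["ps1"])
```

The two assertions mirror the Bob and Joe cases of Figure 11: the same presentity is reached through a different root depending on where the watcher's subscription enters the system.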
Consider the use case in 1306 Figure 6. If Bob has his buddy list on presence server 1, and it 1307 contains Alice, presence server 1 acts as the root, and then performs 1308 a back-end subscription to presence server 2. However, if Joe has 1309 his buddy list on presence server 2, and his buddy list contains 1310 Alice, presence server 2 acts as the root, and performs a back-end 1311 subscription to presence server 1. This is shown in Figure 11. 1313 alice@example.com alice@example.com 1314 +------------+ +------------+ 1315 | |<-------------| |<--------+ 1316 SUB | Presence | | Presence | | 1317 List w/| Server | | Server | SUB | 1318 Alice | 1 | | 2 | List w/| 1319 +---->| |------------->| | Alice | 1320 | | | | | | 1321 | +------------+ +------------+ | 1322 | \ / | 1323 | \ / | 1324 | \ / | 1325 | \ / | 1326 | \ / | 1327 | \ / | 1328 ...|........ \...................../....... .........|........ 1329 . . \ / . . . 1330 . . .\ / . . +--------+ . 1331 . | . . \ | +--------+ . . |+------+| . 1332 . | . . | |+------+| . . || || . 1333 . +---+ . . +---+ || || . . || || . 1334 . |+-+| . . |+-+| || || . . |+------+| . 1335 . |+-+| . . |+-+| |+------+| . . +--------+ . 1336 . | | . . | | +--------+ . . /------ / . 1337 . | | . . | | /------ / . . /------ / . 1338 . +---+ . . +---+ /------ / . . --------/ . 1339 . . . --------/ . . . 1340 . . . . . . 1341 ............ ............................. .................. 1343 Bob Alice Joe 1345 Figure 11: Peer Model 1347 Whereas the hierarchical model clearly provides the consistency 1348 property, it is not obvious whether a particular deployment of the 1349 peer model provides the consistency property. It ends up being a 1350 function of the composition policies of the individual servers.
If 1351 Pi() represents the composition and authorization policies of server 1352 i, and takes as input one or more presence documents provided by its 1353 children, and outputs a presence document, the overall system 1354 provides consistency when: 1356 Pi(Pj()) = Pj(Pi()) 1358 which is effectively the commutativity property. 1360 7.2.1. Routing 1362 Routing in the peer model works similarly to the hierarchical model. 1363 Each presence server would be configured with the children it has 1364 when it acts as the root. The overall routing algorithm then works 1365 as follows: 1367 o If a presence server receives a subscription for a presentity from 1368 a particular watcher, and it already has a different subscription 1369 (as identified by dialog identifiers) for that presentity from 1370 that watcher, it rejects the second subscription with an 1371 indication of a loop. This algorithm does rule out the 1372 possibility of two instances of the same watcher subscribing to 1373 the same presentity. 1375 o If a presence server receives a subscription for a presentity from 1376 a watcher and it doesn't have one yet for that pair, it processes 1377 it and generates back-end subscriptions to each configured child. 1378 If a back-end subscription generates an error due to a loop, it 1379 proceeds without that back-end input. 1381 For example, consider Bob subscribing to Alice. Bob's client is 1382 supported by server 1. Server 1 has not seen this subscription 1383 before, so it acts as the root and passes it to server 2. Server 2 1384 hasn't seen it before, so it accepts it (now acting as the child), 1385 and sends the subscription to ITS child, which is server 1. Server 1 1386 has already seen the subscription, so it rejects it. Now server 2 1387 knows it's the child, and so it generates documents with 1388 just its own data. 1390 As in the hierarchical case, it is possible to intermix partitioned 1391 and peer models for different users.
In the partitioned case, the routing for the hierarchical model
devolves into the forking routing described in Section 5.2.5.
However, intermixing peer and exclusive federation for different
users is challenging.  [[OPEN ISSUE: need to think about this
more.]]

7.2.2.  Policy

The policy considerations for the peer model are very similar to
those of the hierarchical model.  However, the root-only policy
approach is nonsensical in the peer model, and cannot be utilized.
The distributed and centralized provisioning approaches apply, and
the rules described above for generating correct results apply in
the peer model as well.

In addition, the policy processing in the peer model eliminates the
problem described in Section 7.1.2.3.  That problem is that
composition and authorization policies may be tuned to the needs of
the specific device that is connected.  In the hierarchical model,
the wrong server for a particular device may be at the root, and the
resulting presence document may be poorly suited to the consuming
device.  This problem is alleviated in the peer model: the server
that is paired or tuned for that particular user or device is always
at the root of the tree, and its composition policies have the final
say in how presence data is presented to the watcher on that device.

7.2.3.  Presence Data

The considerations for presence data and composition in the
hierarchical model apply in the peer model as well.  The principal
issue is consistency: whether the overall presence document for a
watcher is the same regardless of which server the watcher connects
from.  As mentioned above, consistency is a property of
commutativity of composition, which may or may not hold depending on
the implementation.
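Whether a given pair of composition policies satisfies the condition
Pi(Pj()) = Pj(Pi()) can be checked mechanically for simple cases.
The sketch below is purely illustrative: the status values, the
ranking, and the policy functions are invented, and real PIDF
composition is far richer.  It shows one policy pair that commutes
(and hence yields a consistent view) and one that does not.

```python
# Illustrative check of the consistency condition Pi(Pj()) = Pj(Pi()).
# Presence "documents" are reduced to a single status string here.

RANK = {"closed": 0, "away": 1, "open": 2}

def most_available(own, child):
    # Composition policy: report the most available status (commutative).
    return max(own, child, key=lambda status: RANK[status])

def prefer_own(own, child):
    # Composition policy: always trust the local document (not commutative).
    return own

def consistent(p1, p2, doc1, doc2):
    """True if the watcher sees the same document whether server 1
    or server 2 acts as the root of the policy tree."""
    return p1(doc1, doc2) == p2(doc2, doc1)

# Commutative policies on both servers yield a consistent view...
assert consistent(most_available, most_available, "away", "open")
# ...but a non-commutative policy on one server breaks consistency.
assert not consistent(prefer_own, most_available, "away", "open")
```

In the second case the watcher sees "away" when server 1 is the root
but "open" when server 2 is the root, which is exactly the
inconsistency the commutativity condition rules out.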
Interestingly, in the use case of Figure 7, a particular user only
ever has devices on a single server, and thus the peer and
hierarchical models end up being the same, and consistency is
provided.

8.  Summary

This document does not make any recommendation as to which model is
best.  Each model has different areas of applicability and is
appropriate in a particular deployment.

9.  Future Considerations

There are some additional concepts that can be considered, which
have not yet been explored.  One of them is routing of PUBLISH
requests between systems.  This can be used as part of the unioned
models and requires further discussion.

It is also worth considering IM in these different models.  For
example, the issues for routing IM (and any session in general) are
identical to those for presence routing.  This may merit a separate
document.

Another big issue is data federation.  For the unioned models in
particular, there is typically a desire to be able to add a buddy on
one system and have it appear on another, or to add a user to a
whitelist on one system and have that reflected in the other.  This
requires some kind of standardized data interfaces and is for
further consideration.

10.  Acknowledgements

The authors would like to thank Paul Fullarton, David Williams,
Sanjay Sinha, and Paul Kyzivat for their comments.

11.  Security Considerations

The principal issue in intra-domain federation is that of privacy.
It is important that the system meets user expectations, and that
even in cases of user provisioning errors or inconsistencies, it
provides appropriate levels of privacy.  This is an issue in the
unioned models, where user privacy policies can exist on multiple
servers at the same time.  The guidelines described here for
authorization policies help ensure that privacy properties are
maintained.

12.
IANA Considerations 1473 There are no IANA considerations associated with this specification. 1475 13. Informative References 1477 [RFC2778] Day, M., Rosenberg, J., and H. Sugano, "A Model for 1478 Presence and Instant Messaging", RFC 2778, February 2000. 1480 [RFC3863] Sugano, H., Fujimoto, S., Klyne, G., Bateman, A., Carr, 1481 W., and J. Peterson, "Presence Information Data Format 1482 (PIDF)", RFC 3863, August 2004. 1484 [RFC4479] Rosenberg, J., "A Data Model for Presence", RFC 4479, 1485 July 2006. 1487 [RFC3856] Rosenberg, J., "A Presence Event Package for the Session 1488 Initiation Protocol (SIP)", RFC 3856, August 2004. 1490 [RFC4662] Roach, A., Campbell, B., and J. Rosenberg, "A Session 1491 Initiation Protocol (SIP) Event Notification Extension for 1492 Resource Lists", RFC 4662, August 2006. 1494 [RFC3944] Johnson, T., Okubo, S., and S. Campos, "H.350 Directory 1495 Services", RFC 3944, December 2004. 1497 [RFC3325] Jennings, C., Peterson, J., and M. Watson, "Private 1498 Extensions to the Session Initiation Protocol (SIP) for 1499 Asserted Identity within Trusted Networks", RFC 3325, 1500 November 2002. 1502 [RFC3680] Rosenberg, J., "A Session Initiation Protocol (SIP) Event 1503 Package for Registrations", RFC 3680, March 2004. 1505 [I-D.ietf-speermint-consolidated-presence-im-usecases] 1506 Houri, A., "Presence & Instant Messaging Peering Use 1507 Cases", 1508 draft-ietf-speermint-consolidated-presence-im-usecases-03 1509 (work in progress), November 2007. 1511 [I-D.rosenberg-simple-view-sharing] 1512 Rosenberg, J., Donovan, S., and K. McMurry, "Optimizing 1513 Federated Presence with View Sharing", 1514 draft-rosenberg-simple-view-sharing-00 (work in progress), 1515 November 2007. 
1517 Authors' Addresses 1519 Jonathan Rosenberg 1520 Cisco 1521 Edison, NJ 1522 US 1524 Phone: +1 973 952-5000 1525 Email: jdrosen@cisco.com 1526 URI: http://www.jdrosen.net 1527 Avshalom Houri 1528 IBM 1529 Science Park, Rehovot 1530 Israel 1532 Email: avshalom@il.ibm.com 1534 Full Copyright Statement 1536 Copyright (C) The IETF Trust (2008). 1538 This document is subject to the rights, licenses and restrictions 1539 contained in BCP 78, and except as set forth therein, the authors 1540 retain all their rights. 1542 This document and the information contained herein are provided on an 1543 "AS IS" basis and THE CONTRIBUTOR, THE ORGANIZATION HE/SHE REPRESENTS 1544 OR IS SPONSORED BY (IF ANY), THE INTERNET SOCIETY, THE IETF TRUST AND 1545 THE INTERNET ENGINEERING TASK FORCE DISCLAIM ALL WARRANTIES, EXPRESS 1546 OR IMPLIED, INCLUDING BUT NOT LIMITED TO ANY WARRANTY THAT THE USE OF 1547 THE INFORMATION HEREIN WILL NOT INFRINGE ANY RIGHTS OR ANY IMPLIED 1548 WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. 1550 Intellectual Property 1552 The IETF takes no position regarding the validity or scope of any 1553 Intellectual Property Rights or other rights that might be claimed to 1554 pertain to the implementation or use of the technology described in 1555 this document or the extent to which any license under such rights 1556 might or might not be available; nor does it represent that it has 1557 made any independent effort to identify any such rights. Information 1558 on the procedures with respect to rights in RFC documents can be 1559 found in BCP 78 and BCP 79. 1561 Copies of IPR disclosures made to the IETF Secretariat and any 1562 assurances of licenses to be made available, or the result of an 1563 attempt made to obtain a general license or permission for the use of 1564 such proprietary rights by implementers or users of this 1565 specification can be obtained from the IETF on-line IPR repository at 1566 http://www.ietf.org/ipr. 
1568 The IETF invites any interested party to bring to its attention any 1569 copyrights, patents or patent applications, or other proprietary 1570 rights that may cover technology that may be required to implement 1571 this standard. Please address the information to the IETF at 1572 ietf-ipr@ietf.org. 1574 Acknowledgment 1576 Funding for the RFC Editor function is provided by the IETF 1577 Administrative Support Activity (IASA).