2 SIMPLE J. Rosenberg 3 Internet-Draft Cisco 4 Intended status: Informational A. Houri 5 Expires: January 15, 2009 IBM 6 C. Smyth 7 Avaya 8 July 14, 2008 10 Models for Intra-Domain Presence and Instant Messaging (IM) Federation 11 draft-ietf-simple-intradomain-federation-01 13 Status of this Memo 15 By submitting this Internet-Draft, each author represents that any 16 applicable patent or other IPR claims of which he or she is aware 17 have been or will be disclosed, and any of which he or she becomes 18 aware will be disclosed, in accordance with Section 6 of BCP 79. 20 Internet-Drafts are working documents of the Internet Engineering 21 Task Force (IETF), its areas, and its working groups. Note that 22 other groups may also distribute working documents as Internet- 23 Drafts. 25 Internet-Drafts are draft documents valid for a maximum of six months 26 and may be updated, replaced, or obsoleted by other documents at any 27 time. It is inappropriate to use Internet-Drafts as reference 28 material or to cite them other than as "work in progress." 
30 The list of current Internet-Drafts can be accessed at 31 http://www.ietf.org/ietf/1id-abstracts.txt. 33 The list of Internet-Draft Shadow Directories can be accessed at 34 http://www.ietf.org/shadow.html. 36 This Internet-Draft will expire on January 15, 2009. 38 Copyright Notice 40 Copyright (C) The IETF Trust (2008). 42 Abstract 44 Presence and Instant Messaging (IM) federation involves the sharing 45 of presence information and exchange of IM across multiple systems. 46 Most often, presence and IM federation is assumed to be between 47 different organizations, such as between two enterprises or between 48 an enterprise and a service provider. However, federation can occur 49 within a single organization or domain. This can be the result of a 50 multi-vendor network, or a consequence of a large organization that 51 requires partitioning. This document examines different use cases 52 and models for intra-domain presence and IM federation. 54 Table of Contents 56 1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . . 4 57 2. Intra-Domain Federation vs. Clustering . . . . . . . . . . . . 7 58 3. Use Cases for Intra-Domain Federation . . . . . . . . . . . . 8 59 3.1. Scale . . . . . . . . . . . . . . . . . . . . . . . . . . 8 60 3.2. Organizational Structures . . . . . . . . . . . . . . . . 8 61 3.3. Multi-Vendor Requirements . . . . . . . . . . . . . . . . 8 62 3.4. Specialization . . . . . . . . . . . . . . . . . . . . . . 9 63 4. Considerations for Federation Models . . . . . . . . . . . . . 9 64 5. Partitioned . . . . . . . . . . . . . . . . . . . . . . . . . 10 65 5.1. Applicability . . . . . . . . . . . . . . . . . . . . . . 11 66 5.2. Routing . . . . . . . . . . . . . . . . . . . . . . . . . 11 67 5.2.1. Centralized Database . . . . . . . . . . . . . . . . . 12 68 5.2.2. Routing Proxy . . . . . . . . . . . . . . . . . . . . 13 69 5.2.3. Subdomaining . . . . . . . . . . . . . . . . . . . . . 14 70 5.2.4. Peer-to-Peer . . . . . . . . . . . . 
. . . . . . . 16 71 5.2.5. Forking . . . . . . . . . . . . . . . . . . . . . . . 16 72 5.2.6. Provisioned Routing . . . . . . . . . . . . . . . . . 16 73 5.3. Policy . . . . . . . . . . . . . . . . . . . . . . . . . . 16 74 5.4. Presence Data . . . . . . . . . . . . . . . . . . . . . . 17 75 5.5. Conversation Consistency . . . . . . . . . . . . . . . . . 17 76 6. Exclusive . . . . . . . . . . . . . . . . . . . . . . . . . . 17 77 6.1. Routing . . . . . . . . . . . . . . . . . . . . . . . . . 18 78 6.1.1. Centralized Database . . . . . . . . . . . . . . . . . 19 79 6.1.2. Routing Proxy . . . . . . . . . . . . . . . . . . . . 19 80 6.1.3. Subdomaining . . . . . . . . . . . . . . . . . . . . . 19 81 6.1.4. Peer-to-Peer . . . . . . . . . . . . . . . . . . . . . 20 82 6.1.5. Forking . . . . . . . . . . . . . . . . . . . . . . . 20 83 6.2. Policy . . . . . . . . . . . . . . . . . . . . . . . . . . 21 84 6.3. Presence Data . . . . . . . . . . . . . . . . . . . . . . 21 85 6.4. Conversation Consistency . . . . . . . . . . . . . . . . . 21 86 7. Unioned . . . . . . . . . . . . . . . . . . . . . . . . . . . 22 87 7.1. Hierarchical Model . . . . . . . . . . . . . . . . . . . . 25 88 7.1.1. Routing . . . . . . . . . . . . . . . . . . . . . . . 27 89 7.1.2. Policy and Identity . . . . . . . . . . . . . . . . . 28 90 7.1.2.1. Root Only . . . . . . . . . . . . . . . . . . . . 28 91 7.1.2.2. Distributed Provisioning . . . . . . . . . . . . . 30 92 7.1.2.3. Central Provisioning . . . . . . . . . . . . . . . 32 93 7.1.2.4. Centralized PDP . . . . . . . . . . . . . . . . . 34 94 7.1.3. Presence Data . . . . . . . . . . . . . . . . . . . . 36 95 7.1.4. Conversation Consistency . . . . . . . . . . . . . . . 36 97 7.2. Peer Model . . . . . . . . . . . . . . . . . . . . . . . . 37 98 7.2.1. Routing . . . . . . . . . . . . . . . . . . . . . . . 39 99 7.2.2. Policy . . . . . . . . . . . . . . . . . . . . . . . . 40 100 7.2.3. Presence Data . . . . . . . . . . . . . . . . . . . . 
40 101 7.2.4. Conversation Consistency . . . . . . . . . . . . . . . 40 102 8. Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . 41 103 9. Future Considerations . . . . . . . . . . . . . . . . . . . . 41 104 10. Acknowledgements . . . . . . . . . . . . . . . . . . . . . . . 41 105 11. Security Considerations . . . . . . . . . . . . . . . . . . . 41 106 12. IANA Considerations . . . . . . . . . . . . . . . . . . . . . 41 107 13. Informative References . . . . . . . . . . . . . . . . . . . . 41 108 Authors' Addresses . . . . . . . . . . . . . . . . . . . . . . . . 43 109 Intellectual Property and Copyright Statements . . . . . . . . . . 44 111 1. Introduction 113 Presence refers to the ability, willingness and desire to communicate 114 across differing devices, mediums and services [RFC2778]. Presence 115 is described using presence documents [RFC3863] [RFC4479], exchanged 116 using a Session Initiation Protocol (SIP) [RFC3261] based event 117 package [RFC3856] 119 Presence federation refers to the sharing of presence information 120 across multiple presence systems. This interconnection involves 121 passing of subscriptions from one system to another, and then the 122 passing of notifications in the opposite direction. 124 Similarly, instant messaging refers to the exchange of real-time 125 text-oriented messaging between users. SIP defines two mechanisms 126 for IM - pager mode [RFC3428] and session mode [RFC4975]. IM 127 federation refers to the exchange of IM between users in different IM 128 systems. 130 Most often, presence and IM federation is considered in the context 131 of interconnection between different domains, also known as inter- 132 domain federation 133 [I-D.ietf-speermint-consolidated-presence-im-usecases]. For example, 134 consider the network of Figure 1, which shows one model for inter- 135 domain presence federation. In this network, Alice belongs to the 136 example.org domain, and Bob belongs to the example.com domain. 
Alice 137 subscribes to her buddy list on her presence server (which is also 138 acting as her Resource List Server (RLS) [RFC4662]), and that list 139 includes bob@example.com. Alice's presence server generates a back- 140 end subscription on the federated link between example.org and 141 example.com. The example.com presence server authorizes the 142 subscription, and if permitted, generates notifications back to 143 Alice's presence server, which are in turn passed to Alice. 145 ............................. .............................. 146 . . . . 147 . . . . 148 . alice@example.org . . bob@example.com . 149 . +------------+ SUB . . +------------+ . 150 . | | Bob . . | | . 151 . | Presence |------------------->| Presence | . 152 . | Server | . . | Server | . 153 . | | . . | | . 154 . | |<-------------------| | . 155 . | | NOTIFY . | | . 156 . +------------+ . . +------------+ . 157 . ^ | . . ^ . 158 . SUB | | . . |PUB . 159 . Buddy | |NOTIFY . . | . 160 . List | | . . | . 161 . | | . . | . 162 . | V . . | . 163 . +-------+ . . +-------+ . 164 . | | . . | | . 165 . | | . . | | . 166 . | | . . | | . 167 . +-------+ . . +-------+ . 168 . . . . 169 . Alice's . . Bob's . 170 . PC . . PC . 171 . . . . 172 ............................. .............................. 174 example.org example.com 176 Figure 1: Inter-Domain Presence Model 178 Similarly, inter-domain IM federation would look like the model shown 179 in Figure 2: 181 ............................. .............................. 182 . . . . 183 . . . . 184 . alice@example.org . . bob@example.com . 185 . +------------+ INV . . +------------+ . 186 . | | Bob . . | | . 187 . | |------------------->| | . 188 . | IM | . . | IM | . 189 . | Server | . . | Server | . 190 . | |<------------------>| | . 191 . | | IM | | . 192 . +------------+ Content +------------+ . 193 . ^ ^ . . ^ | . 194 . INVITE | | . . IM | |INV . 195 . Bob | | IM . . Content| |Bob . 196 . | | Content . . | | . 197 . | | . . | | . 198 . | V . . 
V V . 199 . +-------+ . . +-------+ . 200 . | | . . | | . 201 . | | . . | | . 202 . | | . . | | . 203 . +-------+ . . +-------+ . 204 . . . . 205 . Alice's . . Bob's . 206 . PC . . PC . 207 . . . . 208 ............................. .............................. 210 example.org example.com 212 Figure 2: Inter-Domain IM Model 214 In this model, example.org and example.com both have an "IM server". 215 This would typically be a SIP proxy or B2BUA responsible for handling 216 both the signaling and the IM content (as these are separate in the 217 case of session mode). The IM server would handle routing of the IM 218 along with application of IM policy. 220 Though both of these pictures show federation as being between 221 domains, it can happen within a domain as well. We define intra- 222 domain federation as the interconnection of presence and IM servers 223 within a single domain, where domain refers explicitly to the right 224 hand side of the @-sign in the SIP URI. 226 This document considers the architectural models and different 227 problems that arise when performing intra-domain presence and IM 228 federation. Though presence and IM are quite distinct functions, 229 this document considers both since the architectural models and 230 issues are common between the two. The document first clarifies the 231 distinction between intra-domain federation and clustering. It 232 defines the primary issues that arise in intra-domain presence and IM 233 federation, and then goes on to define the three primary models for 234 it - partitioned, unioned and exclusive. 236 2. Intra-Domain Federation vs. Clustering 238 Intra-domain federation is the interconnection of servers within a 239 single domain. This is very similar to clustering, which is the 240 tight coupling of a multiplicity of physical servers to realize scale 241 and/or high availability. Consequently, it is important to clarify 242 the differences. 244 Firstly, clustering implies a tight coupling of components. 
245 Clustering usually involves proprietary information sharing, such as 246 database replication and state sharing, which in turn are tightly 247 bound with the internal implementation of the product. Intra-domain 248 federation, on the other hand, is a loose coupling. There is never 249 database replication or state replication across federated systems 250 (though a database and DB replication might be used within a 251 component providing routing functions to facilitate federation). 253 Secondly, clustering always occurs amongst components from the same 254 vendor. This is due to the tight coupling described above. Intra- 255 domain federation, on the other hand, can occur between servers from 256 different vendors. As described below, this is one of the chief use 257 cases for intra-domain federation. 259 Thirdly, clustering is almost always invisible to users. 260 Communications between users within the same cluster almost always 261 have identical functionality to communications between users on the 262 same server within the cluster. The cluster boundaries are 263 invisible; indeed the purpose of a cluster is to build a system which 264 behaves as if it were a single monolithic entity, even though it is 265 not. Federation, on the other hand, is often visible to users. 266 There will frequently be loss of functionality when crossing 267 between federated servers. Though this is not a hard and fast rule, 268 it is a common differentiator. 270 Fourthly, connections between federated systems almost always involve 271 standards, whereas communications within a cluster often involve 272 proprietary mechanisms. Standards are needed for federation because 273 the federated systems can be from different vendors, and thus 274 agreement is needed to enable interoperation. 276 Finally, a cluster will often have an upper bound on its size and 277 capacity, due to some kind of constraint on the coupling between 278 nodes in the cluster. 
However, there is typically no limit, or a 279 much larger limit, on the number of federated systems that can be put 280 into a domain. This is a consequence of their loose coupling. 282 Though these rules are not hard and fast, they give general 283 guidelines on the differences between clustering and intra-domain 284 federation. 286 3. Use Cases for Intra-Domain Federation 288 There are several use cases that drive intra-domain federation. 290 3.1. Scale 292 One common use case for federation is an organization that is just 293 very large, and its size exceeds the capacity that a single server 294 or cluster can provide. So, instead, the domain breaks its users 295 into partitions (perhaps arbitrarily) and then uses intra-domain 296 federation to allow the overall system to scale up to arbitrary 297 sizes. This is common practice today for service providers and large 298 enterprises. 300 3.2. Organizational Structures 302 Another use case for intra-domain federation is a multi-national 303 organization with regional IT departments, each of which supports a 304 particular set of nationalities. It is very common for each regional 305 IT department to deploy and run its own servers for its own 306 population. In that case, the domain would end up being composed of 307 the presence servers deployed by each regional IT department. 308 Indeed, in many organizations, each regional IT department might end 309 up using different vendors. This can be a consequence of differing 310 regional requirements for features (such as compliance or 311 localization support), differing sales channels and markets in which 312 vendors sell, and so on. 314 3.3. Multi-Vendor Requirements 316 Another use case for intra-domain federation is an organization that 317 requires multiple vendors for each service, in order to avoid vendor 318 lock-in and drive competition between its vendors. 
Since the servers 319 will come from different vendors, a natural way to deploy them is to 320 partition the users across them. Such multi-vendor networks are 321 extremely common in large service provider networks, many of which 322 have hard requirements for multiple vendors. 324 Typically, the vendors are split along geographies, often run by 325 different local IT departments. As such, this case is similar to the 326 organizational division above. 328 3.4. Specialization 330 Another use case is where certain vendors might specialize in 331 specific types of clients. For example, one vendor might provide a 332 mobile client (but no desktop client), while another provides a 333 desktop client but no mobile client. It is often the case that 334 specific client applications and devices are designed to only work 335 with their corresponding servers. In an ideal world, clients would 336 all implement to standards and this would not happen, but in 337 practice, the vast majority of presence and IM endpoints work only 338 (or only work well) with the server from the same vendor. A domain 339 might want each user to have both a mobile client and a desktop 340 client, which will require servers from each vendor, leading to 341 intra-domain federation. 343 Similarly, presence can contain rich information, including 344 activities of the user (such as whether they are in a meeting or on 345 the phone), their geographic location, and their mood. This presence 346 state can be determined manually (where the user enters and updates 347 the information), or automatically. Automatic determination of these 348 states is far preferable, since it puts less burden on the user. 349 Determination of these presence states is done by taking "raw" data 350 about the user, and using it to generate corresponding presence 351 states. 
This raw data can come from any source that has information 352 about the user, including their calendaring server, their VoIP 353 infrastructure, their VPN server, their laptop operating system, and 354 so on. Each of these components is typically made by different 355 vendors, each of which is likely to integrate that data with their 356 presence servers. Consequently, presence servers from different 357 vendors are likely to specialize in particular pieces of presence 358 data, based on the other infrastructure they provide. The overall 359 network will need to contain servers from those vendors, composing 360 together the various sources of information, in order to combine 361 their benefits. This use case is specific to presence, and results 362 in intra-domain federation. 364 4. Considerations for Federation Models 366 When considering architectures for intra-domain presence and IM 367 federation, several issues need to be considered. The first two of 368 these apply to both IM and presence (and indeed to any intra-domain 369 communications, including voice). The latter two are specific to 370 presence and IM respectively: 372 Routing: How are subscriptions and IMs routed to the right presence 373 and IM server(s)? This issue is more complex in intra-domain 374 models, since the right hand side of the @-sign cannot be used to 375 perform this routing. 377 Policy and Identity: Where do user policies reside, and what 378 presence and IM server(s) are responsible for executing that 379 policy? What identities does the user have in each system and how 380 do they relate? 382 Presence Data Ownership: Which presence servers are responsible for 383 which pieces of presence information, and how are those pieces 384 composed to form a coherent and consistent view of user presence? 
386 Conversation Consistency: When considering instant messaging, if IM 387 can be delivered to multiple servers, how do we make sure that the 388 overall conversation is coherent to the user? 390 The sections below describe several different models for intra-domain 391 federation. Each model is driven by a set of use cases, which are 392 described in an applicability subsection for each model. Each model 393 description also discusses how routing, policy, presence data 394 ownership and conversation consistency work. 396 5. Partitioned 398 In the partitioned model, a single domain has a multiplicity of 399 servers, each of which manages a non-overlapping set of users. That 400 is, for each user in the domain, their presence data, policy and IM 401 handling reside on a single server. Each "single server" may in fact 402 be a cluster. 404 Another important facet of the partitioned model is that, even though 405 users are partitioned across different servers, they each share the 406 same domain name in the right hand side of their URI, and this URI is 407 what those users use when communicating with other users both inside 408 and outside of the domain. There are many reasons why a domain would 409 want all of its users to share the same right-hand side of the @-sign 410 even though it is partitioned internally: 412 o The partitioning may reflect organizational or geographical 413 structures that a domain administrator does not want to reflect 414 externally. 416 o If each partition had a separate domain name (e.g., 417 engineering.example.com and sales.example.com), a user who changed 418 organizations would need a change in their URI. 420 o For reasons of vanity, users often like to have their URI (which 421 appears on business cards, email, and so on) to be brief and 422 short. 
424 o If a watcher wants to add a presentity based on username and does 425 not want to know, or does not know, which subdomain or internal 426 department the presentity belongs to, a single domain is needed. 428 This model is illustrated in Figure 3. As the model shows, the 429 domain example.com has six users across three servers, each of which 430 is handling two of the users. 432 ..................................................................... 433 . . 434 . . 435 . . 436 . joe@example.com alice@example.com padma@example.com . 437 . bob@example.com zeke@example.com hannes@example.com . 438 . +-----------+ +-----------+ +-----------+ . 439 . | | | | | | . 440 . | Server | | Server | | Server | . 441 . | 1 | | 2 | | 3 | . 442 . | | | | | | . 443 . +-----------+ +-----------+ +-----------+ . 444 . . 445 . . 446 . . 447 . example.com . 448 ..................................................................... 450 Figure 3: Partitioned Model 452 5.1. Applicability 454 The partitioned model arises naturally in larger domains, such as an 455 enterprise or service provider, where issues of scale, organizational 456 structure, or multi-vendor requirements cause the domain to be 457 managed by a multiplicity of independent servers. 459 In cases where each user has an AoR that directly points to its 460 partition (for example, us.example.com), that model becomes identical 461 to the inter-domain federated model and is not treated here further. 463 5.2. Routing 465 The partitioned intra-domain model works almost identically to an 466 inter-domain federated model, with the primary difference being 467 routing. In inter-domain federation, the domain part of the URI can 468 be used to route presence subscriptions and IM messages from one 469 domain to the other. This is no longer the case in an intra-domain 470 model. Consider the case where Joe subscribes to his buddy list, 471 which is served by his presence server (server 1 in Figure 3). 
Alice 472 is a member of Joe's buddy list. How does server 1 know that the 473 back-end subscription to Alice needs to get routed to server 2? 475 There are several techniques that can be used to solve this problem, 476 which are outlined in the subsections below. 478 5.2.1. Centralized Database 480 ..................................................................... 481 . +-----------+ . 482 . alice? | | . 483 . +---------------> | Database | . 484 . | server 2 | | . 485 . | +-------------| | . 486 . | | +-----------+ . 487 . | | . 488 . | | . 489 . | | . 490 . | | . 491 . | | . 492 . | | . 493 . | V . 494 . joe@example.com alice@example.com padma@example.com . 495 . bob@example.com zeke@example.com hannes@example.com . 496 . +-----------+ +-----------+ +-----------+ . 497 . | | | | | | . 498 . | Server | | Server | | Server | . 499 . | 1 | | 2 | | 3 | . 500 . | | | | | | . 501 . +-----------+ +-----------+ +-----------+ . 502 . . 503 . . 504 . . 505 . example.com . 506 ..................................................................... 508 Figure 4: Centralized DB 510 One solution is to rely on a common, centralized database that 511 maintains mappings of users to specific servers, shown in Figure 4. 512 When Joe subscribes to his buddy list that contains Alice, server 1 513 would query this database, asking it which server is responsible for 514 alice@example.com. The database would indicate server 2, and then 515 server 1 would generate the backend SUBSCRIBE request towards server 516 2. Similarly, when Joe sends an INVITE to establish an IM session 517 with Padma, he would send the IM to his IM server, and it would query 518 the database to find out that Padma is supported on server 3. This 519 is a common technique in large email systems. It is often 520 implemented using internal sub-domains, so that the database would 521 return alice@central.example.com to the query, and server 1 would 522 modify the Request-URI in the request to reflect this. 
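The database lookup and Request-URI rewrite described above can be sketched as follows. This is an illustrative sketch only, not part of the draft: the mapping mirrors Figure 4, but the per-server internal sub-domains (s1/s2/s3.example.com) and all function names are invented for the example.

```python
# Hypothetical sketch of the centralized routing database of Figure 4.
# The user-to-server mapping mirrors the figure; the internal
# sub-domain names are assumptions made for illustration.

USER_TO_SERVER = {
    "joe@example.com": "server1",
    "bob@example.com": "server1",
    "alice@example.com": "server2",
    "zeke@example.com": "server2",
    "padma@example.com": "server3",
    "hannes@example.com": "server3",
}

SERVER_TO_SUBDOMAIN = {
    "server1": "s1.example.com",
    "server2": "s2.example.com",
    "server3": "s3.example.com",
}

def route(request_uri):
    """Return (responsible server, rewritten Request-URI) for a user's AoR."""
    server = USER_TO_SERVER[request_uri]          # the "database query"
    user_part = request_uri.split("@", 1)[0]
    # Rewrite the Request-URI to the internal sub-domain form, as the
    # text notes is common practice in large email systems.
    rewritten = "%s@%s" % (user_part, SERVER_TO_SUBDOMAIN[server])
    return server, rewritten

# Server 1 handling Joe's back-end subscription to Alice:
# route("alice@example.com") -> ("server2", "alice@s2.example.com")
```

In a real deployment the dictionary would be a shared database (e.g., LDAP or SQL, as the next paragraph discusses), queried by each server rather than held in memory.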
524 Routing database solutions have the problem that they require 525 standardization on a common schema and database protocol in order to 526 work in multi-vendor environments. For example, LDAP and SQL are 527 both possibilities. There is variety in LDAP schema; one possibility 528 is H.350.4, which could be adapted for usage here [RFC3944]. 530 5.2.2. Routing Proxy 532 ..................................................................... 533 . +-----------+ . 534 . SUB/INV alice | | . 535 . +---------------> | Routing | . 536 . | | Proxy | . 537 . | | | . 538 . | +-----------+ . 539 . | | . 540 . | | . 541 . | | . 542 . | |SUB/INV alice . 543 . | | . 544 . | | . 545 . | V . 546 . joe@example.com alice@example.com padma@example.com . 547 . bob@example.com zeke@example.com hannes@example.com . 548 . +-----------+ +-----------+ +-----------+ . 549 . | | | | | | . 550 . | Server | | Server | | Server | . 551 . | 1 | | 2 | | 3 | . 552 . | | | | | | . 553 . +-----------+ +-----------+ +-----------+ . 554 . . 555 . . 556 . . 557 . example.com . 558 ..................................................................... 560 Figure 5: Routing Proxy 562 A similar solution is to rely on a routing proxy or B2BUA. Instead 563 of a centralized database, there would be a centralized SIP proxy 564 farm. Server 1 would send requests (SUBSCRIBE, INVITE, etc.) for 565 users it doesn't serve to this server farm, and the servers would 566 lookup the user in a database (which is now accessed only by the 567 routing proxy), and the resulting requests are sent to the correct 568 server. A redirect server can be used as well, in which case the 569 flow is very much like that of a centralized database, but uses SIP. 571 Routing proxies have the benefit that they do not require a common 572 database schema and protocol, but they do require a centralized 573 server function that sees all subscriptions and IM requests, which 574 can be a scale challenge. 
For IM, a centralized proxy is very 575 challenging when using pager mode, since each and every IM is 576 processed by the central proxy. For session mode, the scale is 577 better, since the proxy handles only the initial INVITE. 579 5.2.3. Subdomaining 581 In this solution, each user is associated with a subdomain, and is 582 provisioned as part of their respective server using that subdomain. 583 Consequently, each server thinks it is its own, separate domain. 584 However, when a user adds a presentity to their buddy list without 585 the subdomain, they first consult a shared database which returns the 586 subdomained URI to subscribe or IM to. This sub-domained URI can be 587 returned because the user provided a search criteria, such as "Find 588 Alice Chang", or provided the non-subdomained URI 589 (alice@example.com). This is shown in Figure 6 590 ..................................................................... 591 . +-----------+ . 592 . who is Alice? | | . 593 . +---------------------->| Database | . 594 . | alice@b.example.com | | . 595 . | +---------------------| | . 596 . | | +-----------+ . 597 . | | . 598 . | | . 599 . | | . 600 . | | . 601 . | | . 602 . | | . 603 . | | . 604 . | | joe@a.example.com alice@b.example.com padma@c.example.com . 605 . | | bob@a.example.com zeke@b.example.com hannes@c.example.com . 606 . | | +-----------+ +-----------+ +-----------+ . 607 . | | | | | | | | . 608 . | | | Server | | Server | | Server | . 609 . | | | 1 | | 2 | | 3 | . 610 . | | | | | | | | . 611 . | | +-----------+ +-----------+ +-----------+ . 612 . | | ^ . 613 . | | | . 614 . | | | . 615 . | | | . 616 . | | | . 617 . | | | . 618 . | | +-----------+ . 619 . | +-------------------->| | . 620 . | | Client | . 621 . | | | . 622 . +-----------------------| | . 623 . +-----------+ . 624 . . 625 . . 626 . . 627 . example.com . 628 ..................................................................... 
630 Figure 6: Subdomaining 632 Subdomaining puts the burden of routing on the client. The 633 servers can be completely unaware that they are actually part of the 634 same domain, and integrate with each other exactly as they would in 635 an inter-domain model. However, the client is given the burden of 636 determining the subdomained URI from the original URI or buddy name, 637 and then subscribing or IMing directly to that server, or including 638 the subdomained URI in their buddylist. The client is also 639 responsible for hiding the subdomain structure from the user and 640 storing the mapping information locally for extended periods of time. 641 In cases where users have buddy list subscriptions, the client will 642 need to resolve the buddy name into the sub-domained version before 643 adding to their buddy list. 645 5.2.4. Peer-to-Peer 647 Another model is to utilize a peer-to-peer network amongst all of the 648 servers, and store URI-to-server mappings in the distributed hash 649 table it creates. This has some nice properties but does require a 650 standardized and common p2p protocol across vendors, which does not 651 exist today. 653 5.2.5. Forking 655 Yet another solution is to utilize forking. Each server is 656 provisioned with the domain names or IP addresses of the other 657 servers, but not with the mapping of users to each of those servers. 658 When a server needs to handle a request for a user it doesn't serve, 659 it forks the request to all of the other servers. This request will 660 be rejected with a 404 on the servers which do not handle that user, 661 and accepted on the one that does. The approach assumes that servers 662 can differentiate inbound requests from end users (which need to get 663 passed on to other servers - for example via a back-end subscription) 664 and from other servers (which do not get passed on). 
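The forking approach just described can be sketched as below. All names here are hypothetical, and a real server would fork the SIP request to the peers in parallel rather than calling them in a loop; the point is only the fan-out-and-keep-the-non-404 logic.

```python
# Hypothetical sketch of the forking approach: a server that does not
# serve a user forks the request to all provisioned peers and keeps
# the one answer that is not a 404.

from typing import Optional

# Peers provisioned on server 1: addresses only, no user-to-server map.
PEERS = ["server2.internal.example.com", "server3.internal.example.com"]

# Stand-in for each peer's behavior: 404 unless the peer serves the user.
PEER_USERS = {
    "server2.internal.example.com": {"alice@example.com", "zeke@example.com"},
    "server3.internal.example.com": {"padma@example.com", "hannes@example.com"},
}

def send_to_peer(peer, target):
    """Stand-in for sending a SUBSCRIBE/INVITE to a peer; returns a status."""
    return 200 if target in PEER_USERS[peer] else 404

def fork(target) -> Optional[str]:
    """Fork the request to every peer; return the peer that accepted it."""
    accepted = None
    for peer in PEERS:
        if send_to_peer(peer, target) != 404:
            accepted = peer   # in the partitioned model, exactly one peer
    return accepted           # None: no peer serves this user
```

As the text notes, this scales poorly: every request for a non-local user costs one message per peer, which is why the approach suits only two or three servers.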
This approach 665 works very well in organizations with a relatively small number of 666 servers (say, two or three), and becomes increasingly ineffective 667 with more and more servers. 669 5.2.6. Provisioned Routing 671 Yet another solution is to provision each server with each user, but 672 for servers that don't actually serve the user, the provisioning 673 merely tells the server where to proxy the request. This solution 674 has extremely poor operational properties, requiring multiple points 675 of provisioning across disparate systems. 677 5.3. Policy 679 A fundamental characteristic of the partitioned model is that there 680 is a single point of policy enforcement (authorization rules and 681 composition policy) for each user. 683 5.4. Presence Data 685 Another fundamental characteristic of the partitioned model is that 686 the presence data for a user is managed authoritatively on a single 687 server. In the example of Figure 3, the presence data for Alice 688 lives on server 2 alone (recall that server 2 may be physically 689 implemented as a multiplicity of boxes from a single vendor, each of 690 which might have a portion of the presence data, but externally it 691 appears to behave as if it were a single server). A subscription 692 from Bob to Alice may cause a transfer of presence information from 693 server 2 to server 1, but server 2 remains authoritative and is the 694 single root source of all data for Alice. 696 5.5. Conversation Consistency 698 Since the IMs for a particular user are always delivered through a 699 particular server that handles the user, it is relatively easy to 700 achieve conversation consistency. That server receives all of the 701 messages and readily passes them on to the user for rendering. 702 Furthermore, a coherent view of message history can be assembled by 703 the server, since it sees all messages.
If a user has multiple 704 devices, there are challenges in constructing a consistent view of 705 the conversation with pager mode IM. However, those issues exist in 706 general with pager mode and are not worsened by intra-domain 707 federation. 709 6. Exclusive 711 In the former (static) partitioned model, the mapping of a user to a 712 specific server is done by some off-line configuration means. The 713 configuration assigns a user to a specific server, and in order to use 714 a different server, the user needs to change the configuration (or 715 request the administrator to do so). 717 In some environments, this restriction of a user to a particular 718 server may be a limitation. Instead, it is desirable to allow users 719 to freely move back and forth between systems, though using only a 720 single one at a time. This is called Exclusive Federation. 722 Some use cases where this can happen are: 724 o The organization is using multiple systems where each system has 725 its own characteristics. For example, one server is tailored to 726 work with some CAD (Computer Aided Design) system and provides 727 presence and IM functionality along with the CAD system. The 728 other server is the default presence and IM server of the 729 organization. Users wish to be able to work with either system 730 when they wish to, and they also wish to be able to see the presence 731 of, and IM with, their buddies no matter which system their buddies are 732 currently using. 734 o An enterprise wishes to test presence servers from two different 735 vendors. In order to do so, they wish to install a server from 736 each vendor and see which of the servers is better. In the static 737 partitioned model, a user would have to be statically assigned to a 738 particular server and could not compare the features of the two 739 servers. In the dynamic partitioned model, a user may choose on a 740 whim which of the servers being tested to use.
They can 741 move back and forth in case of problems. 743 o An enterprise is currently using servers from one vendor, but has 744 decided to add a second. They would like to gradually migrate 745 users from one to the other. In order to make a smooth 746 transition, users can move back and forth over a period of a few 747 weeks until they are finally required to stop going back, and get 748 deleted from their old system. 750 o A domain is using multiple clusters from the same vendor. To 751 simplify administration, users can connect to any of the clusters, 752 perhaps one local to their site. To accomplish this, the clusters 753 are connected using exclusive federation. 755 6.1. Routing 757 Due to its nature, routing in the exclusive federation model is more 758 complex than routing in the partitioned model. 760 The association of a user to a server cannot be known until the user 761 publishes a presence document to a specific server or registers to 762 that server. Therefore, when Alice subscribes to Bob's presence 763 information, or sends him an IM, Alice's server will not easily know 764 the server that has Bob's presence and is handling his IM. 766 In addition, a server may get a subscription to a user, or an IM 767 targeted at a user, but the user may not be connected to any server 768 yet. In the case of presence, once the user appears on one of the 769 servers, the subscription should be sent to that server. 771 A user may use two servers at the same time and have his or her 772 presence information on both servers. This should be regarded as a 773 conflict, and one of the presence clients should be terminated or 774 redirected to the other server. 776 Fortunately, most of the routing approaches described for partitioned 777 federation, excepting provisioned routing, can be adapted for 778 exclusive federation. 780 6.1.1. Centralized Database 782 A centralized database can be used, but will need to support a test- 783 and-set functionality.
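As a purely illustrative sketch, such a test-and-set primitive might look like the following. The in-memory dictionary, the lock, and the class name are all hypothetical; a real deployment would use a replicated database with an atomic compare-and-set operation rather than process-local state.

```python
import threading

# Hypothetical sketch of the test-and-set primitive the centralized
# database needs for exclusive federation: bind a user to a server only
# if the user is not already bound elsewhere.

class UserLocationDB:
    def __init__(self):
        self._lock = threading.Lock()
        self._binding = {}  # user URI -> serving server

    def test_and_set(self, user, server):
        """Atomically try to bind 'user' to 'server'.

        Returns (True, server) on success, or (False, current_server)
        when the user is already bound to a different server, so the
        caller can redirect the client there."""
        with self._lock:
            current = self._binding.get(user)
            if current is None or current == server:
                self._binding[user] = server
                return (True, server)
            return (False, current)

    def clear(self, user, server):
        """Remove the binding when the user disconnects from 'server'."""
        with self._lock:
            if self._binding.get(user) == server:
                del self._binding[user]
```

The (False, current_server) return models the redirect (or error) that is sent when a second server attempts to claim a user who is already active elsewhere.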
With it, a server can check whether a user is 784 already bound to a specific server, and bind the user to itself if the 785 user is not on another server. If the user is already on another 786 server, a redirect (or some other error message) will be sent to that 787 user. 789 When a client sends a subscription request for some target user, and 790 the target user is not associated with a server yet, the subscription 791 must be 'held' on the server of the watcher. Once the target user 792 connects and becomes bound to a server, the database needs to send a 793 change notification to the watching server, so that the 'held' 794 subscription can be extended to the server which is now handling 795 presence for the user. 797 6.1.2. Routing Proxy 799 The routing proxy mechanism can be used for exclusive federation as 800 well. However, it requires signaling from each server to the routing 801 proxy to indicate that the user is now located on that server. This 802 can be done by having each server send a REGISTER request to the 803 routing proxy for that user, setting the contact to itself. The 804 routing proxy would have a rule which allows only a single registered 805 contact per user. Using the registration event package [RFC3680], 806 each server subscribes to the registration state at the routing proxy 807 for each user it is managing. If the routing proxy sees a duplicate 808 registration, it allows it, and then uses a reg-event notification to 809 the other server to de-register the user. Once the user is de- 810 registered from that server, that server would terminate any subscriptions in 811 place for that user, causing the watching server to reconnect the 812 subscription to the new server. Something similar can be done for 813 in-progress IM sessions; however, this may have the effect of causing 814 a disruption in ongoing sessions. 816 6.1.3. Subdomaining 818 Subdomaining is just a variation on the centralized database.
819 Assuming the database supports a test-and-set mechanism, it can be 820 used for exclusive federation. 822 However, the principal challenge in applying subdomaining to 823 exclusive federation is database change notifications. When a user 824 moves from one server to another, that change needs to be propagated 825 to all clients which have ongoing sessions (presence and IM) with 826 that user. This requires a large-scale change notification mechanism 827 - to each client in the network. 829 6.1.4. Peer-to-Peer 831 Peer-to-peer routing can be used for routing in exclusive federation. 832 Essentially, it provides a distributed registrar function that maps 833 each AoR to the particular server that they are currently registered 834 against. When a UA registers to a particular server, that 835 registration is written into the P2P network, such that queries for 836 that user are directed to that presence server. 838 However, change notifications can be troublesome. When a user 839 registered on server 1 now registers on server 2, server 2 needs to 840 query the p2p network, discover that server 1 is handling the user, 841 and then tell server 1 that the user has moved. Server 1 then needs 842 to terminate its ongoing subscriptions and transfer them to server 2. 844 Furthermore, P2P networks do not inherently provide a test-and-set 845 primitive, and consequently, it is possible for race conditions to 846 occur where there is an inconsistent view on where the user is 847 currently registered. 849 6.1.5. Forking 851 The forking model can be applied to exclusive federation. When a 852 user registers with a server or publishes a presence document to a 853 server, and that server is not serving the user yet, that server 854 begins serving the user. Furthermore, it needs to propagate a change 855 notification to all of the other servers.
This can be done using a 856 registration event package; basically, each server would subscribe to 857 every other server for reg-event notifications for users they serve. 859 When a subscription or IM request is received at a server, and that 860 server doesn't serve the target user, it forks the subscription or IM 861 to all other servers. If the user is currently registered somewhere, 862 one will accept, and the others will reject with a 404. If the user 863 is registered nowhere, all others generate a 404. If the request is 864 a subscription, the server that received it would 'hold' the 865 subscription, and then subscribe for the reg-event package on every 866 other server for the target user. Once the target user registers 867 somewhere, the server holding the subscription gets a notification 868 and can propagate it to the new target server. 870 Like the P2P solution, the forking solution lacks an effective test- 871 and-set mechanism, and it is therefore possible that there could be 872 inconsistent views on which server is handling a user. 874 6.2. Policy 876 In the exclusive federation model, policy becomes more complicated. 877 In the partitioned model, a user had their presence and IM managed by 878 the same server all of the time. Thus, their policy can be 879 provisioned and executed there. With exclusive federation, a user 880 can freely move back and forth between servers. Consequently, the 881 policy for a particular user may need to execute on multiple 882 different servers over time. 884 The simplest solution is just to require the user to separately 885 provision and manage policies on each server. In many of the use 886 cases above, exclusive federation is a transient situation that 887 eventually settles into partitioned federation. Thus, it may not be 888 unreasonable to require the user to manage both policies during the 889 transition.
It is also possible that each server provides different 890 capabilities, and thus a user will receive different service 891 depending on which server they are connected to. Again, this may be 892 an acceptable limitation for the use cases it supports. 894 6.3. Presence Data 896 As with the partitioned model, in the exclusive model, the presence 897 data for a user resides on a single server at any given time. This 898 server owns all composition policies and procedures for collecting 899 and distributing presence data. 901 6.4. Conversation Consistency 903 Because a user receives all of their IMs on a single server at a time, 904 there are no issues with seeing a coherent conversation for the 905 duration that a user is associated with that server. 907 However, if a user has sessions in progress while they move from one 908 server to another, it is possible that IMs can be misrouted, 909 dropped, or delivered out of order. Fortunately, this is a transient 910 event, and given that it is unlikely that a user would actually have 911 in-progress IM sessions when they change servers, this may be an 912 acceptable limitation. 914 However, conversation history may be more troubling. IM message 915 history is often stored both in clients (for context of past 916 conversations, search, etc.) and in servers (for the same reasons, in 917 addition to legal requirements for data retention). If a user 918 changes servers, some of their past conversations will be stored on 919 one server, and some on another. Any kind of search or query 920 facility provided amongst the server-stored messages would need to 921 search amongst all of the servers to find the data. 923 7. Unioned 925 In the unioned model, each user is actually served by more than one 926 presence server at a time.
In this case, "served" implies two 927 properties: 929 o A user is served by a server when that user is provisioned on that 930 server, and 932 o That server is authoritative for some piece of presence state 933 associated with that user, or responsible for some piece of 934 registration state associated with that user, for the purposes of 935 IM delivery. 937 In essence, in the unioned model, a user's presence and registration 938 data is distributed across many presence servers, while in the 939 partitioned and exclusive models, it is centralized in a single server. 940 Furthermore, it is possible that the user is provisioned with 941 different identifiers on each server. 943 This definition speaks specifically to ownership of dynamic data - 944 presence and registration state - as the key property. This rules 945 out several cases which involve a mix of servers within the 946 enterprise, but do not constitute intra-domain unioned federation: 948 o A user utilizes an outbound SIP proxy from one vendor, which 949 connects to a presence server from another vendor. Even though 950 this will result in presence subscriptions, notifications, and IM 951 requests flowing between servers, and the user is potentially 952 provisioned on both, there is no authoritative presence or 953 registration state in the outbound proxy, and so this is not 954 intra-domain federation. 956 o A user utilizes a Resource List Server (RLS) from one vendor, 957 which holds their buddy list, and accesses presence data from a 958 presence server from another vendor. This case is actually the 959 partitioned case, not the unioned case. Effectively, the buddy 960 list itself is another "user", and it exists entirely on one 961 server (the RLS), while the actual users on the buddy list exist 962 entirely within another. Consequently, this case does not have 963 the property that a single presence resource exists on multiple 964 servers at the same time.
966 o A user subscribes to the presence of a presentity. This 967 subscription is first passed to their presence server, which acts 968 as a proxy, and instead sends the subscription to the UA of the 969 user, which acts as a presence edge server. In this model, it may 970 appear as if there are two presence servers for the user (the 971 actual server and their UA). However, the server is acting as a 972 proxy in this case - there is only one source of presence 973 information. For IM, there is only one source of registration 974 state - the server. Thus, this model is partitioned, but with 975 different servers owning IM and presence. 977 The unioned model arises naturally when a user is using devices from 978 different vendors, each of which has its own respective server, or 979 when a user is using different servers for different parts of their 980 presence state. For example, Figure 7 shows the case where a single 981 user has a mobile client connected to server 1 and a desktop client 982 connected to server 2. 984 alice@example.com alice@example.com 985 +------------+ +------------+ 986 | | | | 987 | | | | 988 | Server |--------------| Server | 989 | 1 | | 2 | 990 | | | | 991 | | | | 992 +------------+ +------------+ 993 \ / 994 \ / 995 \ / 996 \ / 997 \ / 998 \ / 999 \...................../....... 1000 \ / . 1001 .\ / . 1002 . \ | +--------+ . 1003 . | |+------+| . 1004 . +---+ || || . 1005 . |+-+| || || . 1006 . |+-+| |+------+| . 1007 . | | +--------+ . 1008 . | | /------ / . 1009 . +---+ /------ / . 1010 . --------/ . 1011 . . 1012 ............................. 1014 Alice 1016 Figure 7: Unioned Case 1 1018 As another example, a user may have two devices from the same vendor, 1019 both of which are associated with a single presence server, but that 1020 presence server has incomplete presence state about the user.
1021 Another presence server in the enterprise, due to its access to state 1022 for that user, has additional data which needs to be accessed by the 1023 first presence server in order to provide a comprehensive view of 1024 presence data. This is shown in Figure 8. This use case tends to be 1025 specific to presence. 1027 alice@example.com alice@example.com 1028 +------------+ +------------+ 1029 | | | | 1030 | Presence | | Presence | 1031 | Server |--------------| Server | 1032 | 1 | | 2 | 1033 | | | | 1034 | | | | 1035 +------------+ +------------+ 1036 ^ | | 1037 | | | 1038 | | | 1039 ///-------\\\ | | 1040 ||| specialized ||| | | 1041 || state || | | 1042 \\\-------/// | | 1043 ............................. 1044 . | | . 1045 . | | +--------+ . 1046 . | |+------+| . 1047 . +---+ || || . 1048 . |+-+| || || . 1049 . |+-+| |+------+| . 1050 . | | +--------+ . 1051 . | | /------ / . 1052 . +---+ /------ / . 1053 . --------/ . 1054 . . 1055 . . 1056 ............................. 1057 Alice 1059 Figure 8: Unioned Case 2 1061 Another use case for unioned federation is subscriber moves. 1062 Consider a domain which uses multiple servers, typically running in a 1063 partitioned configuration. The servers are organized regionally so 1064 that each user is served by a server handling their region. A user 1065 is moving from one region to a new job in another, while retaining 1066 their SIP URI. In order to provide a smooth transition, ideally the 1067 system would provide a "make before break" functionality, allowing 1068 the user to be added onto the new server prior to being removed from 1069 the old. During the transition period, especially if the user had 1070 multiple clients to be moved, they can end up with state existing on 1071 both servers at the same time. 1073 Another use case for unioned federation is multiple providers. 1074 Consider a user in an enterprise, alice@example.com. Example.com has 1075 a presence server deployed for all of its users.
In addition, Alice 1076 uses a public IM and presence provider. Alice would like users 1077 who connect to the public provider to see presence state that comes from 1078 example.com, and vice versa. Interestingly, this use case isn't 1079 intra-domain federation at all, but rather, unioned inter-domain 1080 federation. 1082 7.1. Hierarchical Model 1084 The unioned intra-domain federation model can be realized in one of two ways 1085 - using a hierarchical structure or a peer structure. 1087 In the hierarchical model, presence subscriptions and IM requests for 1088 the target are always routed first to one of the servers - the root. 1089 In the case of presence, the root has the final say on the structure 1090 of the presence document delivered to watchers. It collects presence 1091 data from its child presence servers (through notifications or 1092 PUBLISH requests received from them) and composes them into the final 1093 presence document. In the case of IM, the root applies IM policy and 1094 then passes the IM on to the children for delivery. There can be 1095 multiple layers in the hierarchical model. This is shown in Figure 9 1096 for presence.
1098 +-----------+ 1099 *-----------* | | 1100 |Auth and |---->| Presence | <--- root 1101 |Composition| | Server | 1102 *-----------* | | 1103 | | 1104 +-----------+ 1105 / --- 1106 / ---- 1107 / ---- 1108 / ---- 1109 V -V 1110 +-----------+ +-----------+ 1111 | | | | 1112 *-----------* | Presence | *-----------* | Presence | 1113 |Auth and |-->| Server | |Auth and |-->| Server | 1114 |Composition| | | |Composition| | | 1115 *-----------* | | *-----------* | | 1116 +-----------+ +-----------+ 1117 | --- 1118 | ----- 1119 | ----- 1120 | ----- 1121 | ----- 1122 | ----- 1123 V --V 1124 +-----------+ +-----------+ 1125 | | | | 1126 *-----------* | Presence | *-----------* | Presence | 1127 |Auth and |-->| Server | |Auth and |-->| Server | 1128 |Composition| | | |Composition| | | 1129 *-----------* | | *-----------* | | 1130 +-----------+ +-----------+ 1132 Figure 9: Hierarchical Model 1134 It is important to note that this hierarchy defines the sequence of 1135 presence composition and policy application, and does not imply a 1136 literal message flow. As an example, consider once more the use case 1137 of Figure 7. Assume that presence server 1 is the root, and presence 1138 server 2 is its child. When Bob's PC subscribes to Bob's buddy list 1139 (on presence server 2), that subscription will first go to presence 1140 server 2. However, that presence server knows that it is not the 1141 root in the hierarchy, and despite the fact that it has presence 1142 state for Alice (who is on Bob's buddy list), it creates a back-end 1143 subscription to presence server 1. Presence server 1, as the root, 1144 subscribes to Alice's state at presence server 2. Now, since this 1145 subscription came from presence server 1 and not Bob directly, 1146 presence server 2 provides the presence state.
This is received at 1147 presence server 1, which composes the data with its own state for 1148 Alice, and then provides the results back to presence server 2, 1149 which, having acted as an RLS, forwards the results back to Bob. 1150 Consequently, this flow, as a message sequence diagram, involves 1151 notifications passing from presence server 2, to server 1, back to 1152 server 2. However, in terms of composition and policy, it was done 1153 first at the child node (presence server 2), and then those results 1154 were used at the parent node (presence server 1). 1156 7.1.1. Routing 1158 In the hierarchical model, each server needs to be provisioned with 1159 the root, its parent and its children servers for each user it 1160 handles. These relationships could in fact be different on a user- 1161 by-user basis; however, this is complex to manage. In all 1162 likelihood, the parent and child relationships are identical for each 1163 user. The overall routing algorithm can be described as follows: 1165 o If a SUBSCRIBE is received from the parent node for this 1166 presentity, perform subscriptions to each child node for this 1167 presentity, and then take the results, apply composition and 1168 authorization policies, and propagate to the parent. If a node is 1169 the root, the logic here applies regardless of where the request 1170 came from. 1172 o If an IM request is received from the parent node for a user, 1173 perform IM processing and then proxy the request to each child IM 1174 server for this user. If a node is the root, the logic here 1175 applies regardless of where the request came from. 1177 o If a request is received from a node that is not the parent node 1178 for this presentity, proxy the request to the parent node. This 1179 includes cases where the node that sent the request is a child 1180 node. 1182 This routing rule is relatively simple, and in a two-server system is 1183 almost trivial to provision.
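As a purely illustrative sketch (not a normative algorithm), the routing rules above can be expressed as a single recursive dispatch. The `Node` class, its `compose` stand-in, and the returned list recording the order of policy application are all hypothetical; real servers exchange SUBSCRIBE, NOTIFY, and MESSAGE transactions rather than function calls.

```python
# Hypothetical sketch of the hierarchical routing rules: requests from
# the parent (or any request at the root) are processed and fanned out
# to children; everything else is proxied up to the parent.

class Node:
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent
        self.children = []
        if parent:
            parent.children.append(self)

    def compose(self, request, child_results):
        # Stand-in for composition/authorization: record the order in
        # which policy was applied (children first, then this node).
        return [r for res in child_results for r in res] + [self.name]

def route(node, request, came_from="client"):
    is_root = node.parent is None
    if is_root or came_from is node.parent:
        # Rules 1 and 2: a request from the parent node, or any
        # request at the root: process here, then fan out to each
        # child node and compose the results.
        child_results = [route(c, request, came_from=node)
                         for c in node.children]
        return node.compose(request, child_results)
    # Rule 3: a request from a client or a child node is proxied to
    # the parent, which restarts processing from the root downward.
    return route(node.parent, request, came_from=node)
```

Note how a subscription arriving at a child is first proxied up to the root and then fanned back down, so composition happens at the children before their results are used at the parent, matching the walkthrough of Figure 7.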
Interestingly, it works in cases where 1184 some users are partitioned and some are unioned. When the users are 1185 partitioned, this routing algorithm devolves into the forking 1186 algorithm of Section 5.2.5. This points to the forking algorithm as 1187 a good choice, since it can be used for both partitioned and unioned federation. 1189 An important property of the routing in the hierarchical model is 1190 that the sequence of composition and policy operations for any IM or 1191 presence session is identical, regardless of the watcher or sender of 1192 the IM. The result is that the overall presence state provided to a 1193 watcher, and overall IM behavior, is always consistent and 1194 independent of the server the client is connected to. We call this 1195 property the *consistency property*, and it is an important metric in 1196 assessing the correctness of a federated presence and IM system. 1198 7.1.2. Policy and Identity 1200 Policy and identity are a clear challenge in the unioned model. 1202 Firstly, since a user is provisioned on many servers, it is possible 1203 that the identifier they utilize could be different on each server. 1204 For example, on server 1, they could be joe@example.com, whereas on 1205 server 2, they are joe.smith@example.com. In cases where the 1206 identifiers are not equivalent, a mapping function needs to be 1207 provisioned. This mapping ideally happens on the root server. 1209 Secondly, the unioned model will result in back-end subscriptions 1210 extending from one presence server to another presence server. These 1211 subscriptions, though made by the presence server, need to be made 1212 on behalf of the user that originally requested the presence state of 1213 the presentity.
Since the presence server extending 1214 the back-end subscription will not often have credentials to claim the 1215 identity of the watcher, asserted identity using techniques like P-Asserted-Identity 1216 [RFC3325] is required, along with the associated trust relationships 1217 between servers. Optimizations, such as view sharing 1218 [I-D.ietf-simple-view-sharing], can help improve performance. The 1219 same considerations apply for IM. 1221 The principal challenge in a unioned model is policy, including both 1222 authorization and composition policies. There are three potential 1223 solutions to the administration of policy in the hierarchical model 1224 (only two of which apply in the peer model, as we'll discuss below). 1225 These are root-only, distributed provisioning, and central 1226 provisioning. 1228 7.1.2.1. Root Only 1230 In the root-only policy model, authorization policy, IM policy, and 1231 composition policy are applied only at the root of the tree. This is 1232 shown in Figure 10. 1234 +-----------+ 1235 *-----------* | | 1236 | |---->| | <--- root 1237 | Policy | | Server | 1238 *-----------* | | 1239 | | 1240 +-----------+ 1241 / --- 1242 / ---- 1243 / ---- 1244 / ---- 1245 V -V 1246 +-----------+ +-----------+ 1247 | | | | 1248 | | | | 1249 | Server | | Server | 1250 | | | | 1251 | | | | 1252 +-----------+ +-----------+ 1253 | --- 1254 | ----- 1255 | ----- 1256 | ----- 1257 | ----- 1258 | ----- 1259 V --V 1260 +-----------+ +-----------+ 1261 | | | | 1262 | | | | 1263 | Server | | Server | 1264 | | | | 1265 | | | | 1266 +-----------+ +-----------+ 1268 Figure 10: Root Only 1270 As long as a subscription request came from its parent, every child 1271 presence server would automatically accept the subscription, and 1272 provide notifications containing the full presence state it is aware 1273 of. Similarly, any IM received from a parent would be simply 1274 propagated onwards towards children.
Any composition performed by a 1275 child presence server would need to be lossless, in that it fully 1276 combines the source data without loss of information, and also be 1277 done without any per-user provisioning or configuration, operating in 1278 a default or administrator-provisioned mode of operation. 1280 The root-only model has the benefit that it requires the user to 1281 provision policy in a single place (the root). However, it has the 1282 drawback that the composition and policy processing may be performed 1283 very poorly. Presumably, there are multiple presence servers in the 1284 first place because each of them has a particular speciality. That 1285 speciality may be lost in the root-only model. For example, if a 1286 child server provides geolocation information, the root presence 1287 server may not have sufficient authorization policy capabilities to 1288 allow the user to manage how that geolocation information is provided 1289 to watchers. 1291 7.1.2.2. Distributed Provisioning 1293 The distributed provisioning model looks exactly like the diagram of 1294 Figure 9. Each server is separately provisioned with its own 1295 policies, including what users are allowed to watch, what presence 1296 data they will get, how it will be composed, what IMs get blocked, 1297 and so on. 1299 One immediate concern is whether the overall policy processing, when 1300 performed independently at each server, is consistent, sane, and 1301 provides reasonable degrees of privacy. It turns out that it can be, if 1302 some guidelines are followed. 1304 For presence, consider basic "yes/no" authorization policies. Let's 1305 say a presentity, Alice, provides an authorization policy on server 1 1306 where Bob can see her presence, but on server 2, provides a policy 1307 where Bob cannot. If presence server 1 is the root, the subscription 1308 is accepted there, but the back-end subscription to presence server 2 1309 would be rejected.
As long as presence server 1 then rejects the 1310 subscription, the system provides the correct behavior. This can be 1311 turned into a more general rule: 1313 o To guarantee privacy safety, if the back-end subscription 1314 generated by a presence server is denied, that server must deny 1315 the triggering subscription in turn, regardless of its own 1316 authorization policies. This means that a presence server cannot 1317 send notifications on its own until it has confirmed subscriptions 1318 from downstream servers. 1320 For IM, basic yes/no authorization policies work in a similar way. 1321 If any one of the servers has a policy that says to block an IM, the 1322 IM is not propagated further down the chain. Whether the overall 1323 system blocks IMs from a sender depends on the topology. If there is 1324 no forking in the hierarchy, the system has the property that, if a 1325 sender is blocked at any server, the sender is blocked overall. 1326 However, in tree structures where there are multiple children, it is 1327 possible that an IM could be delivered to some downstream clients, 1328 and not others. 1330 Things get more complicated when one considers presence authorization 1331 policies whose job is to block access to specific pieces of 1332 information, as opposed to blocking a user completely. For example, 1333 let's say Alice wants to allow Bob to see her presence, but not her 1334 geolocation information. She provisions a rule on server 1 that 1335 blocks geolocation information, but grants it on server 2. The 1336 correct mode of operation in this case is that the overall system 1337 will block geolocation from Bob. But will it? In fact, it will, if a 1338 few additional guidelines are followed: 1340 o If a presence server adds any information to a presence document 1341 beyond the information received from its children, it must provide 1342 authorization policies that govern the access to that information.
1344 o If a presence server does not understand a piece of presence data 1345 provided by its child, it should not attempt to apply its own 1346 authorization policies to that information. 1348 o A presence server should not add information to a presence 1349 document that overlaps with information that can be added by its 1350 parent. Of course, it is very hard for a presence server to know 1351 whether this information overlaps. Consequently, provisioned 1352 composition rules will be required to realize this. 1354 If these rules are followed, the overall system provides privacy 1355 safety, and the overall policy applied is reasonable. This is because 1356 these rules effectively segment the application of policy, based on 1357 specific data, to the servers that own the corresponding data. For 1358 example, consider once more the geolocation use case described above, 1359 and assume server 2 is the root. If server 1 has access to, and 1360 provides, geolocation information in presence documents it produces, 1361 then server 1 would be the only one to provide authorization policies 1362 governing geolocation. Server 2 would receive presence documents 1363 from server 1 containing (or not) geolocation, but since it doesn't 1364 provide or control geolocation, it lets that information pass 1365 through. Thus, the overall presence document provided to the watcher 1366 will contain geolocation if Alice wants it to, and not otherwise, and 1367 the controls for access to geolocation would exist only on server 1. 1369 The second major concern with distributed provisioning is that it is 1370 confusing for users. However, in the model that is described here, 1371 each server would necessarily be providing distinct rules, governing 1372 the information it uniquely provides. Thus, server 1 would have 1373 rules about who is allowed to see geolocation, and server 2 would 1374 have rules about who is allowed to subscribe overall.
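As a purely illustrative sketch, the data-segmentation guidelines above amount to a simple filtering discipline: each server applies policy only to the presence elements it owns, and passes everything else through untouched. The element names and the rule format below are hypothetical, not taken from any presence data model.

```python
# Hypothetical sketch of the segmentation guidelines: a server filters
# a presence document (modeled as a dict) by applying its own
# authorization rules only to the elements it provides, and passing
# through elements it does not understand or control.

def filter_document(document, owned_elements, allow):
    """Apply this server's policy to a presence document.

    'owned_elements' lists the elements this server provides and
    therefore governs; 'allow' maps an owned element to True/False for
    the current watcher. Elements this server does not own pass
    through untouched, leaving their policy to the owning server."""
    result = {}
    for element, value in document.items():
        if element in owned_elements:
            if allow.get(element, False):
                result[element] = value
            # else: the element is stripped by this server's policy
        else:
            # Not ours: apply no local policy, just pass it through.
            result[element] = value
    return result
```

Under this discipline, the controls for each piece of data exist only on the server that produces it, which is what keeps the independently provisioned policies from contradicting one another.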
Though not 1375 ideal, there is certainly precedent for users configuring policies on 1376 different servers based on the differing services provided by those 1377 servers. Users today provision block and allow lists in email for 1378 access to email servers, and separately in IM and presence 1379 applications for access to IM. 1381 7.1.2.3. Central Provisioning 1383 The central provisioning model is a hybrid between root-only and 1384 distributed provisioning. Each server does in fact execute its own 1385 authorization and composition policies. However, rather than the 1386 user provisioning them independently in each place, there is some 1387 kind of central portal where the user provisions the rules, and that 1388 portal generates policies for each specific server based on the data 1389 that the corresponding server provides. This is shown in Figure 11. 1391 +---------------------+ 1392 |provisioning portal | 1393 +---------------------+ 1394 . . . . . 1395 . . . . . 1396 . . . . ....................... 1397 ........................... . . . . 1398 . . . . . 1399 . . . . . 1400 . ........................... . ............. . 1401 . . . . . 1402 . . ...................... . . 1403 . . V +-----------+ . . 1404 . . *-----------* | | . . 1405 . . |Auth and |---->| Presence | <--- root . . 1406 . . |Composition| | Server | . . 1407 . . *-----------* | | . . 1408 . . | | . . 1409 . . +-----------+ . . 1410 . . | ---- . . 1411 . . | ------- . . 1412 . . | ------- . 1413 . . | .------- . 1414 . . V . ---V V 1415 . . +-----------+ . +-----------+ 1416 . . | | V | | 1417 . . *-----------* | Presence | *-----------* | Presence | 1418 . ....>|Auth and |-->| Server | |Auth and |-->| Server | 1419 . |Composition| | | |Composition| | | 1420 . *-----------* | | *-----------* | | 1421 . +-----------+ +-----------+ 1422 . / -- 1423 . / ---- 1424 . / --- 1425 . / ---- 1426 . / --- 1427 . / ---- 1428 . V -V 1429 . 
+-----------+ +-----------+ 1430 V | | | | 1431 *-----------* | Presence | *-----------* | Presence | 1432 |Auth and |-->| Server | |Auth and |-->| Server | 1433 |Composition| | | |Composition| | | 1434 *-----------* | | *-----------* | | 1435 +-----------+ +-----------+ 1437 Figure 11: Central Provisioning 1439 Centralized provisioning combines the benefits of root-only (single 1440 point of user provisioning) with those of distributed provisioning 1441 (utilize full capabilities of all servers). Its principal drawback 1442 is that it requires another component - the portal - which can 1443 represent the union of the authorization policies supported by each 1444 server, and then delegate those policies to each corresponding 1445 server. 1447 The other drawback of centralized provisioning is that it assumes 1448 completely consistent policy decision making on each server. There 1449 is a rich set of possible policy decisions that can be taken by 1450 servers, and this is often an area of differentiation. 1452 7.1.2.4. Centralized PDP 1454 The centralized provisioning model assumes that there is a single 1455 point of policy administration, but that there is independent 1456 decision making at each presence and IM server. This only works in 1457 cases where the decision function - the policy decision point - is 1458 identical in each server. 1460 An alternative model is to utilize a single point of policy 1461 administration and a single point of policy decision making. Each 1462 presence server acts solely as an enforcement point, asking the 1463 policy server (through a policy protocol of some sort) how to handle 1464 the presence or IM. The policy server then comes back with a policy 1465 decision - whether to proceed with the subscription or IM, and how to 1466 filter and process it. This is shown in Figure 12.
1468 +------------+ +---------------+ 1469 |Provisioning|=====>|Policy Decision| 1470 | Portal | | Point (PDP) | 1471 +------------+ +---------------+ 1472 # # # # # 1473 ################### # # # ########################### 1474 # # # # # 1475 # ######## # #################### # 1476 # # +-----------+ # # 1477 # # | | # # 1478 # # | | .... root # # 1479 # # | Server | # # 1480 # # | | # # 1481 # # | | # # 1482 # # +-----------+ # # 1483 # # / --- # # 1484 # # / ---- # # 1485 # # / ---- # # 1486 # # / ---- # # 1487 # # V -V# # 1488 # +-----------+ +-----------+ # 1489 # | | | | # 1490 # | | | | # 1491 # | Server | | Server | # 1492 # | | | | # 1493 # | | | | # 1494 # +-----------+ +-----------+ # 1495 # | --- # 1496 # | ----- # 1497 # | ----- # 1498 # | ----- # 1499 # | ----- # 1500 # | ----- # 1501 # V --V # 1502 # +-----------+ +-----------+ # 1503 # | | | | # 1504 #######| | | | # 1505 | Server | | Server |### 1506 | | | | 1507 | | | | 1508 +-----------+ +-----------+ 1510 ===== Provisioning Protocol 1512 ##### Policy Protocol 1514 ----- SIP 1516 Figure 12: Central PDP 1518 The centralized PDP has the benefits of central provisioning and 1519 consistent policy operation, and decouples policy decision making 1520 from presence and IM processing. This decoupling allows for multiple 1521 presence and IM servers, but still allows for a single policy 1522 function overall. The individual presence and IM servers don't need 1523 to know about the policies themselves, or even know when they change. 1524 Of course, if a server is caching the results of a policy decision, 1525 change notifications are required from the PDP to the server, 1526 informing it of the change (alternatively, traditional TTL-based 1527 expirations can be used if delays in updates are acceptable).
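The enforcement-point/decision-point split with TTL-based caching can be sketched as follows. This is a minimal illustration only: the class names, the (watcher, presentity) rule format, and the decide() interface are assumptions for the sketch, not part of any standardized policy protocol.

```python
import time

class PolicyDecisionPoint:
    """Central PDP: the single place where authorization rules live."""

    def __init__(self):
        self.rules = {}  # (watcher, presentity) -> "allow" | "deny"

    def provision(self, watcher, presentity, decision):
        self.rules[(watcher, presentity)] = decision

    def decide(self, watcher, presentity):
        # Default-deny keeps the system privacy safe when no rule exists.
        return self.rules.get((watcher, presentity), "deny")

class PresenceServer:
    """Enforcement point: asks the PDP and caches the answer with a TTL."""

    def __init__(self, pdp, ttl=60.0):
        self.pdp = pdp
        self.ttl = ttl
        self.cache = {}  # (watcher, presentity) -> (decision, expiry)

    def authorize(self, watcher, presentity, now=None):
        now = time.monotonic() if now is None else now
        cached = self.cache.get((watcher, presentity))
        if cached and cached[1] > now:
            return cached[0]  # cached decision still fresh
        decision = self.pdp.decide(watcher, presentity)
        self.cache[(watcher, presentity)] = (decision, now + self.ttl)
        return decision
```

Here a rule change on the PDP is not visible at the enforcement point until the cached decision expires, which is exactly the tradeoff noted above; a change-notification interface from PDP to server would remove that delay.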
1529 For the centralized and distributed provisioning approaches, and the 1530 centralized decision approach, the hierarchical model suffers overall 1531 from the fact that the root of the policy processing may not be tuned 1532 to the specific policy needs of the device that has subscribed. For 1533 example, in the use case of Figure 7, presence server 1 may be 1534 providing composition policies tuned to the fact that the device is 1535 wireless with limited display. Consequently, when Bob subscribes 1536 from his mobile device, if presence server 2 is the root, presence 1537 server 2 may add additional data and provide an overall presence 1538 document to the client which is not optimized for that device. This 1539 problem is one of the principal motivations for the peer model, 1540 described below. 1542 7.1.3. Presence Data 1544 The hierarchical model is based on the idea that each presence server 1545 in the chain contributes some unique piece of presence information, 1546 composing it with what it receives from its child, and passing it on. 1547 For the overall presence document to be reasonable, several 1548 guidelines need to be followed: 1550 o A presence server must be prepared to receive documents from its 1551 child containing information that it does not understand, and to 1552 apply unioned composition policies that retain this information, 1553 adding to it the unique information it wishes to contribute. 1555 o A user interface rendering some presence document provided by its 1556 presence server must be prepared for any kind of presence document 1557 compliant with the presence data model, and must not assume a 1558 specific structure based on the limitations and implementation 1559 choices of the server to which it is paired. 1561 If these basic rules are followed, the overall system provides 1562 functionality equivalent to the combination of the presence 1563 capabilities of the servers contained within it, which is highly 1564 desirable. 1566 7.1.4.
Conversation Consistency 1568 Unioned federation introduces a particular challenge for conversation 1569 consistency. A user with multiple devices attached to multiple 1570 servers could potentially try to participate in the conversation on 1571 multiple devices at once. 1572 There are really two approaches that produce a sensible user 1573 experience. 1575 The first approach simulates the "phone experience" with IM. When a 1576 user (say Alice) sends an IM to Bob, and Bob is a unioned user with 1577 two devices on two servers, Bob receives that IM on both devices. 1578 However, when he "answers" by typing a reply from one of those 1579 devices, the conversation continues only on that device. The other 1580 device on the other server receives no further IMs for this session - 1581 either from Alice or from Bob. Indeed, the IM window on Bob's 1582 unanswered device may even disappear to emphasize this fact. 1584 This mode of operation, which we'll call uni-device IM, is only 1585 feasible with session mode IM, and its realization using traditional 1586 SIP signaling is described in [RFC4975]. 1588 The second mode of operation, called multi-device IM, is more of a 1589 conferencing experience. The initial IM from Alice is delivered to 1590 both of Bob's devices. When Bob answers on one, that response is shown 1591 to Alice but is also rendered on Bob's other device. Effectively, we 1592 have set up an IM conference where each of Bob's devices is an 1593 independent participant in the conference. This model is feasible 1594 with both session and pager mode IM; however, conferencing works much 1595 better overall with session mode. 1597 A related challenge is conversation history. In the uni-device IM 1598 mode, the past history of a user's conversation may be distributed 1599 amongst the different servers, depending on which clients and servers 1600 were involved in the conversation.
As with the exclusive model, IM 1601 search and retrieval services may need to access all of the servers 1602 on which a user might be located. This is easier for the unioned 1603 case than the exclusive one, since in the unioned case, the user's 1604 location is on a fixed number of servers based on provisioning. This 1605 problem is even more complicated in IM page mode when multiple 1606 devices are present, due to the limitations of page mode in these 1607 configurations. 1609 7.2. Peer Model 1611 In the peer model, there is no one root. When a watcher subscribes 1612 to a presentity, that subscription is processed first by the server 1613 to which the watcher is connected (effectively acting as the root), 1614 and then the subscription is passed to other child presence servers. 1615 The same goes for IM; when a client sends an IM, the IM is processed 1616 first by the server associated with the sender (effectively acting as 1617 the root), and then the IM is passed to the child IM servers. In 1618 essence, in the peer model, there is a per-client hierarchy, with the 1619 root being a function of the client. Consider the use case in 1620 Figure 7. If Bob has his buddy list on presence server 1, and it 1621 contains Alice, presence server 1 acts as the root, and then performs 1622 a back-end subscription to presence server 2. However, if Joe has 1623 his buddy list on presence server 2, and his buddy list contains 1624 Alice, presence server 2 acts as the root, and performs a back-end 1625 subscription to presence server 1. Similarly, if Bob sends an IM to 1626 Alice, it is processed first by server 1 and then server 2. If Joe 1627 sends an IM to Alice, it is first processed by server 2 and then 1628 server 1. This is shown in Figure 13.
1630 alice@example.com alice@example.com 1631 +------------+ +------------+ 1632 | |<-------------| |<--------+ 1633 | | | | | 1634 Connect | Server | | Server | | 1635 Alice | 1 | | 2 | Connect | 1636 +---->| |------------->| | Alice | 1637 | | | | | | 1638 | +------------+ +------------+ | 1639 | \ / | 1640 | \ / | 1641 | \ / | 1642 | \ / | 1643 | \ / | 1644 | \ / | 1645 ...|........ \...................../....... .........|........ 1646 . . \ / . . . 1647 . . .\ / . . +--------+ . 1648 . | . . \ | +--------+ . . |+------+| . 1649 . | . . | |+------+| . . || || . 1650 . +---+ . . +---+ || || . . || || . 1651 . |+-+| . . |+-+| || || . . |+------+| . 1652 . |+-+| . . |+-+| |+------+| . . +--------+ . 1653 . | | . . | | +--------+ . . /------ / . 1654 . | | . . | | /------ / . . /------ / . 1655 . +---+ . . +---+ /------ / . . --------/ . 1656 . . . --------/ . . . 1657 . . . . . . 1658 ............ ............................. .................. 1660 Bob Alice Joe 1662 Figure 13: Peer Model 1664 Whereas the hierarchical model clearly provides the consistency 1665 property, it is not obvious whether a particular deployment of the 1666 peer model provides the consistency property. When policy decision 1667 making is distributed amongst the servers, it ends up being a 1668 function of the composition policies of the individual servers. If 1669 Pi() represents the composition and authorization policies of server 1670 i, and takes as input one or more presence documents provided by its 1671 children, and outputs a presence document, the overall system 1672 provides consistency when: 1674 Pi(Pj()) = Pj(Pi()) 1676 which is effectively the commutativity property. 1678 7.2.1. Routing 1680 Routing in the peer model works similarly to the hierarchical model. 1681 Each server would be configured with the children it has when it acts 1682 as the root. 
The overall presence routing algorithm then works as 1683 follows: 1685 o If a presence server receives a subscription for a presentity from 1686 a particular watcher, and it already has a different subscription 1687 (as identified by dialog identifiers) for that presentity from 1688 that watcher, it rejects the second subscription with an 1689 indication of a loop. This algorithm does, however, rule out the 1690 possibility of two instances of the same watcher subscribing to 1691 the same presentity. 1693 o If a presence server receives a subscription for a presentity from 1694 a watcher and it doesn't have one yet for that pair, it processes 1695 it and generates back-end subscriptions to each configured child. 1696 If a back-end subscription generates an error due to loop, it 1697 proceeds without that back-end input. 1699 The algorithm for IM routing works almost identically. 1701 For example, consider Bob subscribing to Alice. Bob's client is 1702 supported by server 1. Server 1 has not seen this subscription 1703 before, so it acts as the root and passes it to server 2. Server 2 1704 hasn't seen it before, so it accepts it (now acting as the child), 1705 and sends the subscription to its child, which is server 1. Server 1 1706 has already seen the subscription, so it rejects it. Now server 2 1707 knows it is the child, and so it generates documents with 1708 just its own data. 1710 As in the hierarchical case, it is possible to intermix partitioned 1711 and peer models for different users. In the partitioned case, the 1712 routing for the hierarchical model devolves into the forking routing described 1713 in Section 5.2.5. However, intermixing peer and exclusive federation 1714 for different users is challenging. [[OPEN ISSUE: need to think 1715 about this more.]] 1717 7.2.2. Policy 1719 The policy considerations for the peer model are very similar to 1720 those of the hierarchical model.
However, the root-only policy 1721 approach is nonsensical in the peer model, and cannot be utilized. 1722 The distributed and centralized provisioning approaches apply, and 1723 the rules described above for generating correct results apply 1724 in the peer model as well. 1726 The centralized PDP model works particularly well in concert 1727 with the peer model. It allows for consistent policy processing 1728 regardless of the type of rules, and has the benefit of having a 1729 single point of provisioning. At the same time, it avoids the need 1730 for defining and having a single root; indeed there is little benefit 1731 in utilizing the hierarchical model when a centralized PDP is used. 1733 In addition, the distributed processing model in the peer model 1734 eliminates the problem described in Section 7.1.2.3. The problem is 1735 that composition and authorization policies may be tuned to the needs 1736 of the specific device that is connected. In the hierarchical model, 1737 the wrong server for a particular device may be at the root, and the 1738 resulting presence document poorly suited to the consuming device. 1739 This problem is alleviated in the peer model. The server that is 1740 paired or tuned for that particular user or device is always at the 1741 root of the tree, and its composition policies have the final say in 1742 how presence data is presented to the watcher on that device. 1744 7.2.3. Presence Data 1746 The considerations for presence data and composition in the 1747 hierarchical model apply in the peer model as well. The principal 1748 issue is consistency, and whether the overall presence document for a 1749 watcher is the same regardless of which server the watcher connects 1750 from. As mentioned above, consistency is a property of commutativity 1751 of composition, which may or may not be true depending on the 1752 implementation.
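The commutativity condition Pi(Pj()) = Pj(Pi()) can be made concrete with a toy model. Purely for illustration, presence documents are modeled here as flat dictionaries, and each server's composition/authorization policy filters some fields and adds its own; real policies operate on full PIDF documents, and the field names are assumptions of the sketch.

```python
def make_policy(added_fields, blocked_fields):
    """Build a policy P for one server: filter blocked fields, add its own."""
    def policy(doc):
        out = {k: v for k, v in doc.items() if k not in blocked_fields}
        out.update(added_fields)
        return out
    return policy

def commutes(p1, p2, doc):
    """True when the two policies yield the same document in either order."""
    return p1(p2(dict(doc))) == p2(p1(dict(doc)))
```

When each server governs only the data it uniquely provides (one adds geolocation, the other adds a note, neither blocks the other's field), the policies commute and the watcher sees the same document regardless of which server is the root; a server that blocks a field another server adds breaks commutativity, and hence consistency.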
1754 Interestingly, in the use case of Figure 8, a particular user only 1755 ever has devices on a single server, and thus the peer and 1756 hierarchical models end up being the same, and consistency is 1757 provided. 1759 7.2.4. Conversation Consistency 1761 The hierarchical and peer models have no impact on the issue of 1762 conversation consistency; the problem exists identically for both 1763 approaches. 1765 8. Summary 1767 This document doesn't make any recommendation as to which model is 1768 best. Each model has different areas of applicability and is 1769 appropriate in particular deployments. 1771 9. Future Considerations 1773 There are some additional concepts that can be considered, which have 1774 not yet been explored. One of them is routing of PUBLISH requests 1775 between systems. This can be used as part of the unioned models and 1776 requires further discussion. 1778 Another big issue is data federation. For the unioned models in 1779 particular, there is typically a desire to be able to add a buddy on 1780 one system and have it appear on another, or to add a user to a 1781 whitelist on one system and have that reflected in the other. This 1782 requires some kind of standardized data interfaces and is for further 1783 consideration. 1785 10. Acknowledgements 1787 The authors would like to thank Paul Fullarton, David Williams, Sanjay 1788 Sinha, and Paul Kyzivat for their comments. 1790 11. Security Considerations 1792 The principal issue in intra-domain federation is that of privacy. 1793 It is important that the system meet user expectations, and even in 1794 cases of user provisioning errors or inconsistencies, it provides 1795 appropriate levels of privacy. This is an issue in the unioned 1796 models, where user privacy policies can exist on multiple servers at 1797 the same time. The guidelines described here for authorization 1798 policies help ensure that privacy properties are maintained. 1800 12.
IANA Considerations 1802 There are no IANA considerations associated with this specification. 1804 13. Informative References 1806 [RFC2778] Day, M., Rosenberg, J., and H. Sugano, "A Model for 1807 Presence and Instant Messaging", RFC 2778, February 2000. 1809 [RFC3863] Sugano, H., Fujimoto, S., Klyne, G., Bateman, A., Carr, 1810 W., and J. Peterson, "Presence Information Data Format 1811 (PIDF)", RFC 3863, August 2004. 1813 [RFC4479] Rosenberg, J., "A Data Model for Presence", RFC 4479, 1814 July 2006. 1816 [RFC3856] Rosenberg, J., "A Presence Event Package for the Session 1817 Initiation Protocol (SIP)", RFC 3856, August 2004. 1819 [RFC4662] Roach, A., Campbell, B., and J. Rosenberg, "A Session 1820 Initiation Protocol (SIP) Event Notification Extension for 1821 Resource Lists", RFC 4662, August 2006. 1823 [RFC3944] Johnson, T., Okubo, S., and S. Campos, "H.350 Directory 1824 Services", RFC 3944, December 2004. 1826 [RFC3325] Jennings, C., Peterson, J., and M. Watson, "Private 1827 Extensions to the Session Initiation Protocol (SIP) for 1828 Asserted Identity within Trusted Networks", RFC 3325, 1829 November 2002. 1831 [RFC3680] Rosenberg, J., "A Session Initiation Protocol (SIP) Event 1832 Package for Registrations", RFC 3680, March 2004. 1834 [RFC3428] Campbell, B., Rosenberg, J., Schulzrinne, H., Huitema, C., 1835 and D. Gurle, "Session Initiation Protocol (SIP) Extension 1836 for Instant Messaging", RFC 3428, December 2002. 1838 [RFC4975] Campbell, B., Mahy, R., and C. Jennings, "The Message 1839 Session Relay Protocol (MSRP)", RFC 4975, September 2007. 1841 [RFC3261] Rosenberg, J., Schulzrinne, H., Camarillo, G., Johnston, 1842 A., Peterson, J., Sparks, R., Handley, M., and E. 1843 Schooler, "SIP: Session Initiation Protocol", RFC 3261, 1844 June 2002. 
1846 [I-D.ietf-speermint-consolidated-presence-im-usecases] 1847 Houri, A., "Presence & Instant Messaging Peering Use 1848 Cases", 1849 draft-ietf-speermint-consolidated-presence-im-usecases-04 1850 (work in progress), February 2008. 1852 [I-D.ietf-simple-view-sharing] 1853 Rosenberg, J., Donovan, S., and K. McMurry, "Optimizing 1854 Federated Presence with View Sharing", 1855 draft-ietf-simple-view-sharing-00 (work in progress), 1856 February 2008. 1858 Authors' Addresses 1860 Jonathan Rosenberg 1861 Cisco 1862 Edison, NJ 1863 US 1865 Phone: +1 973 952-5000 1866 Email: jdrosen@cisco.com 1867 URI: http://www.jdrosen.net 1869 Avshalom Houri 1870 IBM 1871 Science Park, Rehovot 1872 Israel 1874 Email: avshalom@il.ibm.com 1876 Colm Smyth 1877 Avaya 1878 Dublin 18, Sandyford Business Park 1879 Ireland 1881 Email: smythc@avaya.com 1883 Full Copyright Statement 1885 Copyright (C) The IETF Trust (2008). 1887 This document is subject to the rights, licenses and restrictions 1888 contained in BCP 78, and except as set forth therein, the authors 1889 retain all their rights. 1891 This document and the information contained herein are provided on an 1892 "AS IS" basis and THE CONTRIBUTOR, THE ORGANIZATION HE/SHE REPRESENTS 1893 OR IS SPONSORED BY (IF ANY), THE INTERNET SOCIETY, THE IETF TRUST AND 1894 THE INTERNET ENGINEERING TASK FORCE DISCLAIM ALL WARRANTIES, EXPRESS 1895 OR IMPLIED, INCLUDING BUT NOT LIMITED TO ANY WARRANTY THAT THE USE OF 1896 THE INFORMATION HEREIN WILL NOT INFRINGE ANY RIGHTS OR ANY IMPLIED 1897 WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. 
1899 Intellectual Property 1901 The IETF takes no position regarding the validity or scope of any 1902 Intellectual Property Rights or other rights that might be claimed to 1903 pertain to the implementation or use of the technology described in 1904 this document or the extent to which any license under such rights 1905 might or might not be available; nor does it represent that it has 1906 made any independent effort to identify any such rights. Information 1907 on the procedures with respect to rights in RFC documents can be 1908 found in BCP 78 and BCP 79. 1910 Copies of IPR disclosures made to the IETF Secretariat and any 1911 assurances of licenses to be made available, or the result of an 1912 attempt made to obtain a general license or permission for the use of 1913 such proprietary rights by implementers or users of this 1914 specification can be obtained from the IETF on-line IPR repository at 1915 http://www.ietf.org/ipr. 1917 The IETF invites any interested party to bring to its attention any 1918 copyrights, patents or patent applications, or other proprietary 1919 rights that may cover technology that may be required to implement 1920 this standard. Please address the information to the IETF at 1921 ietf-ipr@ietf.org. 1923 Acknowledgment 1925 Funding for the RFC Editor function is provided by the IETF 1926 Administrative Support Activity (IASA).