SIMPLE                                                      J. Rosenberg
Internet-Draft                                               jdrosen.net
Intended status: Informational                                  A. Houri
Expires: November 13, 2010                                           IBM
                                                                C. Smyth
                                                                   Avaya
                                                                F. Audet
                                                                   Skype
                                                            May 12, 2010

  Models for Intra-Domain Presence and Instant Messaging (IM) Bridging
              draft-ietf-simple-intradomain-federation-05

Abstract

   Presence and Instant Messaging (IM) bridging involves the sharing of
   presence information and exchange of IM across multiple systems
   within a single domain.  As such, it is a close cousin to presence
   and IM federation, which involves the sharing of presence and IM
   across differing domains.  Presence and IM bridging can be the
   result of a multi-vendor network, or a consequence of a large
   organization that requires partitioning.  This document examines
   different use cases and models for intra-domain presence and IM
   bridging.  It is meant to provide a framework for defining
   requirements and specifications for presence and IM bridging.  The
   document assumes SIP as the underlying protocol, but many of the
   models can fit other protocols as well.
Status of this Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF).  Note that other groups may also distribute
   working documents as Internet-Drafts.  The list of current Internet-
   Drafts is at http://datatracker.ietf.org/drafts/current/.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."

   This Internet-Draft will expire on November 13, 2010.

Copyright Notice

   Copyright (c) 2010 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document.  Code Components extracted from this
   document must include Simplified BSD License text as described in
   Section 4.e of the Trust Legal Provisions and are provided without
   warranty as described in the Simplified BSD License.

   This document may contain material from IETF Documents or IETF
   Contributions published or made publicly available before November
   10, 2008.  The person(s) controlling the copyright in some of this
   material may not have granted the IETF Trust the right to allow
   modifications of such material outside the IETF Standards Process.
   Without obtaining an adequate license from the person(s) controlling
   the copyright in such materials, this document may not be modified
   outside the IETF Standards Process, and derivative works of it may
   not be created outside the IETF Standards Process, except to format
   it for publication as an RFC or to translate it into languages other
   than English.

Table of Contents

   1.  Introduction
   2.  Intra-Domain Bridging vs. Clustering
   3.  Use Cases for Intra-Domain Bridging
     3.1.  Scale
     3.2.  Organizational Structures
     3.3.  Multi-Vendor Requirements
     3.4.  Specialization
   4.  Considerations for Bridging Models
   5.  Overview of the Models
   6.  Partitioned
     6.1.  Applicability
     6.2.  Routing
       6.2.1.  Centralized Database
       6.2.2.  Routing Proxy
       6.2.3.  Subdomaining
       6.2.4.  Peer-to-Peer
       6.2.5.  Forking
       6.2.6.  Provisioned Routing
     6.3.  Policy
     6.4.  Presence Data
     6.5.  Conversation Consistency
   7.  Exclusive
     7.1.  Routing
       7.1.1.  Centralized Database
       7.1.2.  Routing Proxy
       7.1.3.  Subdomaining
       7.1.4.  Peer-to-Peer
       7.1.5.  Forking
     7.2.  Policy
     7.3.  Presence Data
     7.4.  Conversation Consistency
   8.  Unioned
     8.1.  Hierarchical Model
       8.1.1.  Routing
       8.1.2.  Policy and Identity
         8.1.2.1.  Root Only
         8.1.2.2.  Distributed Provisioning
         8.1.2.3.  Central Provisioning
         8.1.2.4.  Centralized PDP
       8.1.3.  Presence Data
       8.1.4.  Conversation Consistency
     8.2.  Peer Model
       8.2.1.  Routing
       8.2.2.  Policy
       8.2.3.  Presence Data
       8.2.4.  Conversation Consistency
   9.  More about Policy
   10. Acknowledgements
   11. Security Considerations
   12. IANA Considerations
   13. Informative References
   Authors' Addresses

1.  Introduction

   Presence refers to the ability, willingness and desire to
   communicate across differing devices, media and services [RFC2778].
   Presence is described using presence documents [RFC3863] [RFC4479],
   exchanged using a Session Initiation Protocol (SIP) [RFC3261] based
   event package [RFC3856].  Similarly, instant messaging refers to
   the exchange of real-time text-oriented messages between users.
   SIP defines two mechanisms for IM - pager mode [RFC3428] and
   session mode [RFC4975].

   Presence and Instant Messaging (IM) bridging involves the sharing
   of presence information and exchange of IM across multiple systems
   within a single domain.  As such, it is a close cousin to presence
   and IM federation [RFC5344], which involves the sharing of presence
   and IM across differing domains.

   For example, consider the network of Figure 1, which shows one
   model for inter-domain presence federation.  In this network, Alice
   belongs to the example.org domain, and Bob belongs to the
   example.com domain.  Alice subscribes to her buddy list on her
   presence server (which is also acting as her Resource List Server
   (RLS) [RFC4662]), and that list includes bob@example.com.
   Alice's presence server generates a back-end subscription on the
   federated link between example.org and example.com.  The
   example.com presence server authorizes the subscription, and if
   permitted, generates notifications back to Alice's presence server,
   which are in turn passed to Alice.

   .............................    ..............................
   .                           .    .                            .
   .  alice@example.org        .    .       bob@example.com      .
   .   +------------+   SUB    .    .   +------------+           .
   .   |            |   Bob    .    .   |            |           .
   .   |  Presence  |------------------>|  Presence  |           .
   .   |   Server   |          .    .   |   Server   |           .
   .   |            |          .    .   |            |           .
   .   |            |<------------------|            |           .
   .   |            |  NOTIFY  .    .   |            |           .
   .   +------------+          .    .   +------------+           .
   .      ^      |             .    .        ^                   .
   .  SUB |      |             .    .        |PUB                .
   .  Buddy      |NOTIFY       .    .        |                   .
   .  List|      |             .    .        |                   .
   .      |      V             .    .        |                   .
   .   +-------+               .    .      +-------+             .
   .   |       |               .    .      |       |             .
   .   |       |               .    .      |       |             .
   .   |       |               .    .      |       |             .
   .   +-------+               .    .      +-------+             .
   .                           .    .                            .
   .        Alice's            .    .         Bob's              .
   .          PC               .    .          PC                .
   .                           .    .                            .
   .............................    ..............................

            example.org                      example.com

              Figure 1: Inter-Domain Presence Model

   Similarly, inter-domain IM federation would look like the model
   shown in Figure 2:

   .............................    ..............................
   .                           .    .                            .
   .  alice@example.org        .    .       bob@example.com      .
   .   +------------+   INV    .    .   +------------+           .
   .   |            |   Bob    .    .   |            |           .
   .   |            |------------------>|            |           .
   .   |     IM     |          .    .   |     IM     |           .
   .   |   Server   |          .    .   |   Server   |           .
   .   |            |<----------------->|            |           .
   .   |            |    IM    .    .   |            |           .
   .   +------------+  Content .    .   +------------+           .
   .      ^      ^             .    .        ^    |              .
   . INVITE      |             .    .    IM  |    |INV           .
   .  Bob |      | IM          .    . Content|    |Bob           .
   .      |      | Content     .    .        |    |              .
   .      |      V             .    .        V    V              .
   .   +-------+               .    .      +-------+             .
   .   |       |               .    .      |       |             .
   .   |       |               .    .      |       |             .
   .   |       |               .    .      |       |             .
   .   +-------+               .    .      +-------+             .
   .                           .    .                            .
   .        Alice's            .    .         Bob's              .
   .          PC               .    .          PC                .
   .                           .    .                            .
   .............................    ..............................

            example.org                      example.com

            Figure 2: Inter-Domain SIP based IM Model

   In this model, example.org and example.com both have an "IM
   server".  This would typically be a SIP proxy or B2BUA responsible
   for handling both the signaling and the IM content (as these are
   separate in the case of session mode).  The IM server would handle
   routing of the IM along with application of IM policy.

   Though both of these pictures show federation between domains, a
   similar interconnection - presence and IM bridging - can happen
   within a domain as well.  We define intra-domain bridging as the
   interconnection of presence and IM servers within a single
   administrative domain.  Typically, a single administrative domain
   means the same DNS domain on the right-hand side of the @-sign in
   the SIP URI.

   This document considers the architectural models and different
   problems that arise when performing intra-domain presence and IM
   bridging.  Though presence and IM are quite distinct functions,
   this document considers both, since the architectural models and
   issues are common to the two.  The document first clarifies the
   distinction between intra-domain bridging and clustering.  It
   defines the primary issues that arise in intra-domain presence and
   IM bridging, and then goes on to define the three primary models
   for it - partitioned, unioned and exclusive.
   This document doesn't make any recommendation as to which model is
   best.  Each model has different areas of applicability and is
   appropriate in particular deployments.  The intent is to provide
   informative material and ideas on how this can be done.

2.  Intra-Domain Bridging vs. Clustering

   Intra-domain bridging is the interconnection of servers within a
   single domain.  This is very similar to clustering, which is the
   tight coupling of a multiplicity of physical servers to realize
   scale and/or high availability.  Consequently, it is important to
   clarify the differences.  Note that clustering is out of scope for
   this document.

   Firstly, clustering implies a tight coupling of components.
   Clustering usually involves proprietary information sharing, such
   as state sharing, which is in turn tightly bound to the internal
   implementation of the product.  Intra-domain bridging, on the other
   hand, is a loose coupling.  Although database replication may be
   used across bridged systems, it is not at the same level of state
   replication that usually occurs in clusters.

   Secondly, clustering most usually occurs amongst components from
   the same vendor.  This is due to the tight coupling described
   above.  Intra-domain bridging, on the other hand, can occur between
   servers from different vendors.  As described below, this is one of
   the chief use cases for intra-domain bridging.

   Thirdly, clustering is almost always invisible to users.
   Communications between users within the same cluster almost always
   have identical functionality to communications between users on the
   same server within the cluster.  The cluster boundaries are
   invisible; indeed, the purpose of a cluster is to build a system
   which behaves as if it were a single monolithic entity, even though
   it is not.  Bridging, on the other hand, is often visible to users.
   There will frequently be a loss of functionality when crossing
   between bridged systems.  Though this is not a hard and fast rule,
   it is a common differentiator.

   Fourthly, connections between federated and bridged systems almost
   always involve standards, whereas communications within a cluster
   often involve proprietary mechanisms.  Standards are needed for
   bridging because the systems can be from different vendors, and
   thus agreement is needed to enable interoperation.

   Finally, a cluster will often have an upper bound on its size and
   capacity, due to some kind of constraint on the coupling between
   nodes in the cluster.  However, there is typically no limit, or a
   much larger limit, on the number of bridged systems that can be put
   into a domain.  This is a consequence of their loose coupling.

   Though these rules are not hard and fast, they give general
   guidelines on the differences between clustering and intra-domain
   bridging.

3.  Use Cases for Intra-Domain Bridging

   There are several use cases that drive intra-domain bridging.

3.1.  Scale

   One common use case for bridging is an organization that is simply
   very large, such that its size exceeds the capacity that a single
   server or cluster can provide.  So, instead, the domain breaks its
   users into partitions (perhaps arbitrarily) and then uses
   intra-domain bridging to allow the overall system to scale up to
   arbitrary sizes.  This is common practice today for service
   providers and large enterprises.

3.2.  Organizational Structures
   Another use case for intra-domain bridging is a multi-national
   organization with regional IT departments, each of which supports a
   particular set of nationalities.  It is very common for each
   regional IT department to deploy and run its own servers for its
   own population.  In that case, the domain would end up being
   composed of the presence servers deployed by each regional IT
   department.  Indeed, in many organizations, each regional IT
   department might end up using different vendors.  This can be a
   consequence of differing regional requirements for features (such
   as compliance or localization support), differing sales channels
   and markets in which vendors sell, and so on.

3.3.  Multi-Vendor Requirements

   Another use case for intra-domain bridging is an organization that
   requires multiple vendors for each service, in order to avoid
   vendor lock-in and drive competition between its vendors.  Since
   the servers will come from different vendors, a natural way to
   deploy them is to partition the users across them.  Such
   multi-vendor networks are extremely common in large service
   provider networks, many of which have hard requirements for
   multiple vendors.

   Typically, the vendors are split along geographies, often run by
   different local IT departments.  As such, this case is similar to
   the organizational division above.

3.4.  Specialization

   Another use case is where certain vendors might specialize in
   specific types of clients.  For example, one vendor might provide a
   mobile client (but no desktop client), while another provides a
   desktop client but no mobile client.  It is often the case that
   specific client applications and devices are designed to work only
   with their corresponding servers.  In an ideal world, clients would
   all implement the standards and this would not happen, but in
   current practice, the vast majority of presence and IM endpoints
   work only (or only work well) with the server from the same vendor.
   A domain might want each user to have both a mobile client and a
   desktop client, which will require servers from each vendor,
   leading to intra-domain bridging.

   Similarly, presence can contain rich information, including
   activities of the user (such as whether they are in a meeting or on
   the phone), their geographic location, and their mood.  This
   presence state can be determined manually (where the user enters
   and updates the information) or automatically.  Automatic
   determination of these states is far preferable, since it puts less
   burden on the user.  Determination of these presence states is done
   by taking "raw" data about the user, and using it to generate
   corresponding presence states.  This raw data can come from any
   source that has information about the user, including their
   calendaring server, their VoIP infrastructure, their VPN server,
   their laptop operating system, and so on.  Each of these components
   is typically made by a different vendor, each of which is likely to
   integrate that data with its own presence servers.  Consequently,
   presence servers from different vendors are likely to specialize in
   particular pieces of presence data, based on the other
   infrastructure they provide.  The overall network will need to
   contain servers from those vendors, composing together the various
   sources of information, in order to combine their benefits.
   This use case is specific to presence, and results in intra-domain
   bridging.

4.  Considerations for Bridging Models

   When considering architectures for intra-domain presence and IM
   bridging, several issues need to be considered.  The first two of
   these apply to both IM and presence (and indeed to any intra-domain
   communications, including voice).  The latter two are specific to
   presence and IM, respectively:

   Routing:  How are subscriptions and IMs routed to the right
      presence and IM server(s)?  This issue is more complex in
      intra-domain models, since the right-hand side of the @-sign
      cannot be used to perform this routing.

   Policy and Identity:  Where do user policies reside, and which
      presence and IM server(s) are responsible for executing that
      policy?  What identities does the user have in each system, and
      how do they relate?

   Presence Data Ownership:  Which presence servers are responsible
      for which pieces of presence information, and how are those
      pieces composed to form a coherent and consistent view of user
      presence?

   Conversation Consistency:  When considering instant messaging, if
      IMs can be delivered to multiple servers, how do we make sure
      that the overall conversation is coherent to the user?

   The sections below describe several different models for
   intra-domain bridging.  Each model is driven by a set of use cases,
   which are described in an applicability subsection for each model.
   Each model description also discusses how routing, policy, presence
   data ownership and conversation consistency work.

5.  Overview of the Models

   There are three models for intra-domain bridging.  These are
   partitioned, exclusive, and unioned.  They can be explained
   relative to each other via a decision tree.

           +--------------+
           |              |
           |  Is a user   |  one
           |  managed by  |-----> PARTITIONED
           |  one system  |
           | or more than |
           |     one?     |
           +--------------+
                  |
                  | more
                  V
           +-----------------+
           |  Can the user   |
           |  be managed by  |  no
           |  more than one  |---> EXCLUSIVE
           |  system at the  |
           |  same time?     |
           +-----------------+
                  |
                  | yes
                  |
                  V
               UNIONED

                    Figure 3: Decision Tree

   The first question is whether any particular user is "managed" by
   just one system or by more than one.  Here, "managed" means that
   the user is provisioned on the system, and can use it for some kind
   of presence and IM services.  In the partitioned model, the answer
   is just one - a user is on only one system.  In that way,
   partitioned bridging is analogous to an inter-domain model where a
   user is handled by a single domain.

   If a user is "managed" by more than one system, is it more than one
   at the same time, or only one at a time?  In the exclusive model,
   it is one at a time.  The user can log into one system, log out,
   and then log into the other.  For example, a user might have a PC
   client connected to system one, and a different PC client connected
   to system two.  They can use one or the other, but not both.  In
   unioned bridging, they can be connected to more than one at the
   same time.  For example, a user might have a mobile client
   connected to one system, and a PC client connected to another.

6.  Partitioned

   In the partitioned model, a single domain has a multiplicity of
   servers, each of which manages a non-overlapping set of users.
   That is, for each user in the domain, their presence data, policy
   and IM handling reside on a single server.  Each "single server"
   may in fact be a cluster.

   Another important facet of the partitioned model is that, even
   though users are partitioned across different servers, they all
   share the same domain name on the right-hand side of their URI, and
   this URI is what those users use when communicating with other
   users both inside and outside of the domain.  There are many
   reasons why a domain would want all of its users to share the same
   right-hand side of the @-sign even though it is partitioned
   internally:

   o  The partitioning may reflect organizational or geographical
      structures that a domain administrator does not want to reflect
      externally.

   o  If each partition had a separate domain name (e.g.,
      engineering.example.com and sales.example.com), a user who
      changed organizations would have to change their URI.

   o  For reasons of vanity, users often like their URI (which appears
      on business cards, in email, and so on) to be brief and short.

   o  If a watcher wants to add a presentity based on username and
      does not want to know, or does not know, which subdomain or
      internal department the presentity belongs to, a single domain
      is needed.

   This model is illustrated in Figure 4.  As the figure shows, the
   domain example.com has six users across three servers, each of
   which is handling two of the users.

   ...................................................................
   .                                                                 .
   .  joe@example.com      alice@example.com     padma@example.com   .
   .  bob@example.com      zeke@example.com      hannes@example.com  .
   .  +-----------+        +-----------+         +-----------+       .
   .  |           |        |           |         |           |       .
   .  |  Server   |        |  Server   |         |  Server   |       .
   .  |    1      |        |    2      |         |    3      |       .
   .  |           |        |           |         |           |       .
   .  +-----------+        +-----------+         +-----------+       .
   .                                                                 .
   .                          example.com                            .
   ...................................................................

                       Figure 4: Partitioned Model

6.1.  Applicability

   The partitioned model arises naturally in larger domains, such as
   an enterprise or service provider, where issues of scale,
   organizational structure, or multi-vendor requirements cause the
   domain to be managed by a multiplicity of independent servers.

   In cases where each user has an AoR (Address of Record) that
   directly points to their partition (for example, us.example.com),
   the model becomes identical to the inter-domain federated model and
   is not treated here further.

6.2.  Routing

   The partitioned intra-domain model works almost identically to an
   inter-domain federated model, with the primary difference being
   routing.  In inter-domain federation, the domain part of the URI
   can be used to route presence subscriptions and IM messages from
   one domain to the other.  This is no longer the case in an
   intra-domain model.  Consider the case where Joe subscribes to his
   buddy list, which is served by his presence server (server 1 in
   Figure 4).  Alice is a member of Joe's buddy list.  How does
   server 1 know that the back-end subscription to Alice needs to be
   routed to server 2?

   There are several techniques that can be used to solve this
   problem, which are outlined in the subsections below.

6.2.1.  Centralized Database
   ...................................................................
   .                                 +-----------+                   .
   .              alice?             |           |                   .
   .      +------------------------->| Database  |                   .
   .      |         server 2         |           |                   .
   .      |  +-----------------------|           |                   .
   .      |  |                       +-----------+                   .
   .      |  |                                                       .
   .      |  V                                                       .
   .  joe@example.com      alice@example.com     padma@example.com   .
   .  bob@example.com      zeke@example.com      hannes@example.com  .
   .  +-----------+        +-----------+         +-----------+       .
   .  |           |        |           |         |           |       .
   .  |  Server   |        |  Server   |         |  Server   |       .
   .  |    1      |        |    2      |         |    3      |       .
   .  |           |        |           |         |           |       .
   .  +-----------+        +-----------+         +-----------+       .
   .                                                                 .
   .                          example.com                            .
   ...................................................................

                        Figure 5: Centralized DB

   One solution is to rely on a common, centralized database that
   maintains mappings of users to specific servers, as shown in
   Figure 5.  When Joe subscribes to his buddy list that contains
   Alice, server 1 queries this database, asking it which server is
   responsible for alice@example.com.  The database indicates
   server 2, and then server 1 generates the back-end SUBSCRIBE
   request towards server 2.  Similarly, when Joe sends an INVITE to
   establish an IM session with Padma, he sends the IM to his IM
   server, and it queries the database to find out that Padma is
   supported on server 3.  This is a common technique in large email
   systems.  It is often implemented using internal subdomains, so
   that the database would return alice@central.example.com to the
   query, and server 1 would modify the Request-URI in the request to
   reflect this.

   Routing database solutions have the problem that they require
   standardization on a common schema and database protocol in order
   to work in multi-vendor environments.  For example, LDAP and SQL
   are both possibilities.  There is variety in LDAP schemas; one
   possibility is H.350.4 [RFC3944], which could be adapted for use
   here.

6.2.2.  Routing Proxy

   ...................................................................
   .                                 +-----------+                   .
   .          SUB/INV alice          |           |                   .
   .      +------------------------->|  Routing  |                   .
   .      |                          |   Proxy   |                   .
   .      |                          |           |                   .
   .      |                          +-----------+                   .
   .      |                             |                            .
   .      |                             | SUB/INV alice              .
   .      |                             |                            .
   .      |                             V                            .
   .  joe@example.com      alice@example.com     padma@example.com   .
   .  bob@example.com      zeke@example.com      hannes@example.com  .
   .  +-----------+        +-----------+         +-----------+       .
   .  |           |        |           |         |           |       .
   .  |  Server   |        |  Server   |         |  Server   |       .
   .  |    1      |        |    2      |         |    3      |       .
   .  |           |        |           |         |           |       .
   .  +-----------+        +-----------+         +-----------+       .
   .                                                                 .
   .                          example.com                            .
   ...................................................................

                        Figure 6: Routing Proxy

   A similar solution is to rely on a routing proxy or B2BUA.  Instead
   of a centralized database, there would be a centralized SIP proxy
   farm.  Server 1 sends requests (SUBSCRIBE, INVITE, etc.) for users
   it doesn't serve to this server farm; the proxies look the user up
   in a database (which is now accessed only by the routing proxy) and
   send the resulting requests to the correct server.  A redirect
   server can be used as well, in which case the flow is very much
   like that of a centralized database, but uses SIP.

   Routing proxies have the benefit that they do not require a common
   database schema and protocol, but they do require a centralized
   server function that sees all subscriptions and IM requests, which
   can be a scaling challenge.  For IM, a centralized proxy is very
   challenging when using pager mode, since each and every IM is
   processed by the central proxy.  For session mode, the scale is
   better, since the proxy handles only the initial INVITE.
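   The redirect variant can be illustrated with a short sketch.  The
   following Python fragment is purely illustrative and not part of
   any specification; the user-to-server table, function names and
   message representation are hypothetical stand-ins for the routing
   database and the SIP stack.

   <CODE BEGINS>
   # Illustrative sketch of a redirect-based routing function.  The
   # central function looks the target user up in the routing
   # database and answers with a 302 whose Contact points at the
   # server that actually serves the user.  USER_TO_SERVER stands in
   # for the database; a real deployment would query LDAP, SQL, etc.

   USER_TO_SERVER = {
       "alice@example.com": "server2.example.com",
       "padma@example.com": "server3.example.com",
   }

   def handle_request(method, target_uri):
       """Handle a SUBSCRIBE or INVITE arriving at the redirect farm."""
       user = target_uri.removeprefix("sip:")
       server = USER_TO_SERVER.get(user)
       if server is None:
           return "404 Not Found", None
       # Redirect the requesting server; it retries the request at
       # the returned Contact.  A proxy variant would instead forward
       # the request itself toward this target.
       contact = "sip:" + user.split("@")[0] + "@" + server
       return "302 Moved Temporarily", contact
   <CODE ENDS>

   For example, handle_request("SUBSCRIBE", "sip:alice@example.com")
   would return a 302 pointing at server 2, after which the
   originating server retries the subscription there.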
For IM, a centralized proxy is very 646 challenging when using pager mode, since each and every IM is 647 processed by the central proxy. For session mode, the scale is 648 better, since the proxy handles only the initial INVITE. 650 6.2.3. Subdomaining 652 In this solution, each user is associated with a subdomain, and is 653 provisioned as part of their respective server using that subdomain. 654 Consequently, each server thinks it is its own, separate domain. 655 However, when a user adds a presentity to their buddy list without 656 the subdomain, they first consult a shared database which returns the 657 subdomained URI to subscribe or IM to. This sub-domained URI can be 658 returned because the user provided a search criteria, such as "Find 659 Alice Chang", or provided the non-subdomained URI 660 (alice@example.com). This is shown in Figure 7 661 ..................................................................... 662 . +-----------+ . 663 . who is Alice? | | . 664 . +---------------------->| Database | . 665 . | alice@b.example.com | | . 666 . | +---------------------| | . 667 . | | +-----------+ . 668 . | | . 669 . | | . 670 . | | . 671 . | | . 672 . | | . 673 . | | . 674 . | | . 675 . | | joe@example.com alice@example.com padma@example.com . 676 . | | bob@example.com zeke@example.com hannes@example.com . 677 . | | +-----------+ +-----------+ +-----------+ . 678 . | | | | | | | | . 679 . | | | Server | | Server | | Server | . 680 . | | | 1 | | 2 | | 3 | . 681 . | | | | | | | | . 682 . | | +-----------+ +-----------+ +-----------+ . 683 . | | ^ . 684 . | | | . 685 . | | | . 686 . | | | . 687 . | | | . 688 . | | | . 689 . | | +-----------+ . 690 . | +-------------------->| | . 691 . | | Client | . 692 . | | | . 693 . +-----------------------| | . 694 . +-----------+ . 695 . . 696 . . 697 . . 698 . example.com . 699 ..................................................................... 701 Figure 7: Subdomaining 703 Subdomaining puts the burden of routing within the client. The 704 servers can be completely unaware that they are actually part of the 705 same domain, and integrate with each other exactly as they would in 706 an inter-domain model. However, the client is given the burden of 707 determining the subdomained URI from the original URI or buddy name, 708 and then subscribing or IMing directly to that server, or including 709 the subdomained URI in their buddylist. The client is also 710 responsible for hiding the subdomain structure from the user and 711 storing the mapping information locally for extended periods of time. 712 In cases where users have buddy list subscriptions, the client will 713 need to resolve the buddy name into the sub-domained version before 714 adding to their buddy list. 716 Subdmaining can be done via different databases. In order to provide 717 consistent interface to clients, a front-end of a SIP redirect 718 proxies can be implemented. A client would send the SIP request to 719 one of the redirect proxies and the redirect proxy will reply with 720 the right domain after consulting the database in whatever protocol 721 the databases exposes. 723 6.2.4. Peer-to-Peer 725 Another model is to utilize a peer-to-peer network amongst all of the 726 servers, and store URI to server mappings in the distributed hash 727 table it creates. This has some nice properties but does require a 728 standardized and common peer-to-peer protocol across vendors. 730 6.2.5. Forking 732 Yet another solution is to utilize forking. 
6.2.6.  Provisioned Routing

   Yet another solution is to provision each server with each user,
   but for servers that don't actually serve the user, the
   provisioning merely tells the server where to proxy the request.
   This solution has extremely poor operational properties, requiring
   multiple points of provisioning across disparate systems.

6.3.  Policy

   A fundamental characteristic of the partitioned model is that there
   is a single point of policy enforcement (authorization rules and
   composition policy) for each user.

   For more discussion regarding policy, see Section 9.

6.4.  Presence Data

   Another fundamental characteristic of the partitioned model is that
   the presence data for a user is managed authoritatively on a single
   server.  In the example of Figure 4, the presence data for Alice
   lives on server 2 alone (recall that server 2 may be physically
   implemented as a multiplicity of boxes from a single vendor, each
   of which might have a portion of the presence data, but externally
   it behaves as if it were a single server).  A subscription from Bob
   to Alice may cause a transfer of presence information from server 2
   to server 1, but server 2 remains authoritative and is the single
   root source of all data for Alice.

6.5.  Conversation Consistency

   Since the IMs for a particular user are always delivered through
   the particular server that handles that user, it is relatively easy
   to achieve conversation consistency.  That server receives all of
   the messages and readily passes them on to the user for rendering.
   Furthermore, a coherent view of message history can be assembled by
   the server, since it sees all messages.  If a user has multiple
   devices, there are challenges in constructing a consistent view of
   the conversation with pager mode IM.  However, those issues exist
   in general with pager mode and are not worsened by intra-domain
   bridging.

7.  Exclusive

   In the static partitioned model above, the mapping of a user to a
   specific server is done by some off-line configuration means.  The
   configuration assigns a user to a specific server; in order to use
   a different server, the user needs to change the configuration (or
   request that the administrator do so).

   In some environments, this restriction of a user to a particular
   server may be a limitation.
   Instead, it is desirable to allow users to freely move back and
   forth between systems, though using only a single one at a time.
   This is called Exclusive Bridging.

   Some use cases where this can happen are:

   o  The organization is using multiple systems where each system has
      its own characteristics.  For example, one server is tailored to
      work with some CAD (Computer Aided Design) system and provides
      presence and IM functionality along with the CAD system, while
      the other server is the default presence and IM server of the
      organization.  Users wish to be able to work with either system
      as they choose; they also wish to be able to see presence and
      exchange IMs with their buddies no matter which system their
      buddies are currently using.

   o  An enterprise wishes to test presence servers from two different
      vendors.  In order to do so, they wish to install a server from
      each vendor and see which of the servers is better.  In the
      static partitioned model, a user would have to be statically
      assigned to a particular server and could not compare the
      features of the two servers.  In the exclusive model, a user may
      choose on a whim which of the servers under test to use.  They
      can move back and forth in case of problems.

   o  An enterprise is currently using servers from one vendor, but
      has decided to add a second.  They would like to gradually
      migrate users from one to the other.  In order to make a smooth
      transition, users can move back and forth over a period of a few
      weeks until they are finally required to stop going back, at
      which point they are deleted from their old system.

   o  A domain is using multiple clusters from the same vendor.  To
      simplify administration, users can connect to any of the
      clusters, perhaps one local to their site.  To accomplish this,
      the clusters are connected using exclusive bridging.

7.1.  Routing

   Due to its nature, routing in the exclusive bridging model is more
   complex than routing in the partitioned model.

   The association of a user with a server cannot be known until the
   user publishes a presence document to a specific server or
   registers with that server.  Therefore, when Alice subscribes to
   Bob's presence information, or sends him an IM, Alice's server will
   not easily know which server has Bob's presence and is handling his
   IM.

   In addition, a server may get a subscription to a user, or an IM
   targeted at a user, while the user is not yet connected to any
   server.  In the case of presence, once the user appears on one of
   the servers, the subscription should be sent to that server.

   A user may attempt to use two servers at the same time, ending up
   with presence information on both.  This should be regarded as a
   conflict, and one of the presence clients should be terminated or
   redirected to the other server.

   Fortunately, most of the routing approaches described for
   partitioned bridging, excepting provisioned routing, can be adapted
   for exclusive bridging.

7.1.1.  Centralized Database

   A centralized database can be used, but it will need to support
   test-and-set functionality.  With it, servers can check whether a
   user is already on a specific server, and bind the user to
   themselves if the user is not on another server.  If the user is
   already on another server, a redirect (or some other error message)
   is sent to that user.
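   A sketch of the test-and-set check is shown below, assuming, purely
   for illustration, a Redis-style database whose SETNX ("set if not
   exists") operation atomically sets a key only when it does not
   already exist; the key naming and host are hypothetical.

   <CODE BEGINS>
   # Illustrative sketch of test-and-set user binding.  SETNX
   # atomically binds the user to this server only when no binding
   # exists; otherwise the existing binding is returned so that the
   # request (or client) can be redirected there.

   import redis

   db = redis.Redis(host="routingdb.example.com")

   def bind_user(user, this_server):
       key = "binding:" + user
       if db.setnx(key, this_server):
           return this_server           # this server now owns the user
       return db.get(key).decode()      # already owned; redirect there
   <CODE ENDS>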
   When a client sends a subscription request for some target user,
   and the target user is not yet associated with a server, the
   subscription must be "held" on the server of the watcher.  Once the
   target user connects and becomes bound to a server, the database
   needs to send a change notification to the watching server, so that
   the "held" subscription can be extended to the server which is now
   handling presence for the user.

   Note that this approach actually moves the scaling problem of the
   routing mechanism to the database, especially when the percentage
   of the community that is offline is large.

7.1.2.  Routing Proxy

   The routing proxy mechanism can be used for exclusive bridging as
   well.  However, it requires signaling from each server to the
   routing proxy to indicate that the user is now located on that
   server.  This can be done by having each server send a REGISTER
   request for that user to the routing proxy, setting the contact to
   itself.  The routing proxy would have a rule which allows only a
   single registered contact per user.  Using the registration event
   package [RFC3680], each server subscribes to the registration state
   at the routing proxy for each user it is managing.  If the routing
   proxy sees a duplicate registration, it allows it, and then uses a
   reg-event notification to the other server to de-register the user.
   Once the user is de-registered from that server, it terminates any
   subscriptions in place for that user, causing the watching server
   to reconnect the subscription to the new server.  Something similar
   can be done for in-progress IM sessions; however, this may have the
   effect of causing a disruption in ongoing sessions.

   Note that this approach actually moves the scaling problem of the
   routing mechanism to the registrar, especially when the percentage
   of the community that is offline is large.

7.1.3.  Subdomaining

   Subdomaining is just a variation on the centralized database.
   Assuming the database supports a test-and-set mechanism, it can be
   used for exclusive bridging.

   However, the principal challenge in applying subdomaining to
   exclusive bridging is database change notifications.  When a user
   moves from one server to another, that change needs to be
   propagated to all clients which have ongoing sessions (presence and
   IM) with that user.  This requires a large-scale change
   notification mechanism - to each client in the network.

7.1.4.  Peer-to-Peer

   Peer-to-peer routing can be used for routing in exclusive bridging.
   Essentially, it provides a distributed registrar function that maps
   each AoR to the particular server against which the user is
   currently registered.  When a UA registers with a particular
   server, that registration is written into the P2P (Peer-to-Peer)
   network, such that queries for that user are directed to that
   presence server.

   However, change notifications can be troublesome.  When a user
   registered on server 1 now registers on server 2, server 2 needs to
   query the P2P network, discover that server 1 is handling the user,
   and then tell server 1 that the user has moved.  Server 1 then
   needs to terminate its ongoing subscriptions and send them to
   server 2.
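   The move sequence just described might look like the following
   sketch; dht_get(), dht_put() and notify_moved() are hypothetical
   wrappers for the DHT operations and the server-to-server signaling
   (e.g., reg-event notifications).

   <CODE BEGINS>
   # Illustrative sketch of registration handover over a P2P network.

   def register_user(user, this_server, dht_get, dht_put, notify_moved):
       previous = dht_get(user)      # who serves the user right now?
       dht_put(user, this_server)    # claim the user in the DHT
       if previous and previous != this_server:
           # Tell the old server, so that it terminates its ongoing
           # subscriptions and re-extends them toward this server.
           notify_moved(user, previous, this_server)
   <CODE ENDS>

   Note that the read and the write in this sketch are not atomic;
   that gap is precisely the missing test-and-set primitive discussed
   next.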
   Furthermore, P2P networks do not inherently provide a test-and-set
   primitive, and consequently, it is possible for race conditions to
   occur in which there is an inconsistent view of where the user is
   currently registered.

7.1.5.  Forking

   The forking model can be applied to exclusive bridging.  When a
   user registers with a server, or publishes a presence document to a
   server, and that server is not yet serving the user, that server
   begins serving the user.  Furthermore, it needs to propagate a
   change notification to all of the other servers.  This can be done
   using the registration event package; basically, each server would
   subscribe to every other server for reg-event notifications for the
   users it serves.

   When a subscription or IM request is received at a server, and that
   server doesn't serve the target user, it forks the subscription or
   IM to all other servers.  If the user is currently registered
   somewhere, one server will accept, and the others will reject with
   a 404.  If the user is registered nowhere, all of the servers
   generate a 404.  If the request is a subscription, the server that
   received it would "hold" the subscription, and then subscribe for
   the reg-event package on every other server for the target user.
   Once the target user registers somewhere, the server holding the
   subscription gets a notification and can propagate the subscription
   to the new target server.

   Like the P2P solution, the forking solution lacks an effective
   test-and-set mechanism, and it is therefore possible that there
   could be inconsistent views of which server is handling a user.
   One possible scenario in which multiple servers think they are
   serving the user arises when a subscription request is forked and
   reaches multiple servers, each of which then believes that it
   serves the user.

7.2.  Policy

   Unless policy is somehow managed in a common database accessed by
   all of the servers, policy becomes more complicated in the
   exclusive bridging model.  In the partitioned model, a user has
   their presence and IM managed by the same server all of the time.
   Thus, their policy can be provisioned and executed there.  With
   exclusive bridging, a user can freely move back and forth between
   servers.  Consequently, the policy for a particular user may need
   to execute on multiple different servers over time.

   The simplest solution is just to require the user to separately
   provision and manage policies on each server.  In many of the use
   cases above, exclusive bridging is a transient situation that
   eventually settles into partitioned bridging.  Thus, it may not be
   unreasonable to require the user to manage both policies during the
   transition.  It is also possible that each server provides
   different capabilities, and thus a user will receive different
   service depending on which server they are connected to.  Again,
   this may be an acceptable limitation for the use cases it supports.

   For more discussion regarding policy, see Section 8.1.2 and
   Section 9.

7.3.  Presence Data

   As with the partitioned model, in the exclusive model the presence
   data for a user resides on a single server at any given time.  This
   server owns all composition policies and procedures for collecting
   and distributing presence data.

7.4.  Conversation Consistency
   Because a user receives all of their IMs on a single server at a
   time, there are no issues with seeing a coherent conversation for
   the duration that the user is associated with that server.

   However, if a user has sessions in progress while they move from
   one server to another, it is possible that IMs can be misrouted,
   dropped, or delivered out of order.  Fortunately, this is a
   transient event, and given that it is unlikely that a user would
   actually have in-progress IM sessions when they change servers,
   this may be an acceptable limitation.

   However, conversation history may be more troubling.  IM message
   history is often stored both in clients (for context of past
   conversations, search, etc.) and in servers (for the same reasons,
   in addition to legal requirements for data retention).  If a user
   changes servers, some of their past conversations will be stored on
   one server, and some on another.  Any kind of search or query
   facility provided amongst the server-stored messages would need to
   search across all of the servers to find the data.

8.  Unioned

   In the unioned model, each user is actually served by more than one
   presence server at a time.  In this case, "served" implies two
   properties:

   o  A user is served by a server when that user is provisioned on
      that server, and

   o  That server is authoritative for some piece of presence state
      associated with that user, or responsible for some piece of
      registration state associated with that user for the purposes of
      IM delivery.

   In essence, in the unioned model, a user's presence and
   registration data is distributed across many presence servers,
   while in the partitioned and exclusive models, it is centralized in
   a single server.  Furthermore, it is possible that the user is
   provisioned with different identifiers on each server.

   This definition speaks specifically to ownership of dynamic data -
   presence and registration state - as the key property.  This rules
   out several cases which involve a mix of servers within the
   enterprise, but do not constitute intra-domain unioned bridging:

   o  A user utilizes an outbound SIP proxy from one vendor, which
      connects to a presence server from another vendor.  Even though
      this will result in presence subscriptions, notifications, and
      IM requests flowing between servers, and the user is potentially
      provisioned on both, there is no authoritative presence or
      registration state in the outbound proxy, and so this is not
      intra-domain bridging.

   o  A user utilizes a Resource List Server (RLS) from one vendor,
      which holds their buddy list, and accesses presence data from a
      presence server from another vendor.  This case is actually the
      partitioned case, not the unioned case.  Effectively, the buddy
      list itself is another "user", and it exists entirely on one
      server (the RLS), while the actual users on the buddy list exist
      entirely within another.  Consequently, this case does not have
      the property that a single presence resource exists on multiple
      servers at the same time.

   o  A user subscribes to the presence of a presentity.  This
      subscription is first passed to their presence server, which
      acts as a proxy, and instead sends the subscription to the UA of
      the user, which acts as a presence edge server.
      In this model, it may appear as if there are two presence
      servers for the user (the actual server and their UA).  However,
      the server is acting as a proxy in this case - there is only one
      source of presence information.  For IM, there is only one
      source of registration state - the server.  Thus, this model is
      partitioned, but with different servers owning IM and presence.

   The unioned model arises naturally when a user is using devices
   from different vendors, each of which has its own respective
   server, or when a user is using different servers for different
   parts of their presence state.  For example, Figure 8 shows the
   case where a single user has a mobile client connected to server 1
   and a desktop client connected to server 2.

       alice@example.com            alice@example.com
        +------------+               +------------+
        |            |               |            |
        |            |               |            |
        |   Server   |---------------|   Server   |
        |     1      |               |     2      |
        |            |               |            |
        |            |               |            |
        +------------+               +------------+
               \                          /
                \                        /
                 \                      /
                  \                    /
                   \                  /
            ........\................/.........
            .        \              /         .
            .         \            /          .
            .        +---+     +--------+     .
            .        |+-+|     |+------+|     .
            .        |+-+|     ||      ||     .
            .        |   |     ||      ||     .
            .        |   |     |+------+|     .
            .        +---+     +--------+     .
            .                  /------ /      .
            .                 /------ /       .
            .                --------/        .
            .                                 .
            ...................................

                           Alice

                   Figure 8: Unioned Case 1

   As another example, a user may have two devices from the same
   vendor, both of which are associated with a single presence server,
   but that presence server has incomplete presence state about the
   user.  Another presence server in the enterprise, due to its access
   to state for that user, has additional data which needs to be
   accessed by the first presence server in order to provide a
   comprehensive view of presence data.  This is shown in Figure 9.
   This use case tends to be specific to presence.

       alice@example.com            alice@example.com
        +------------+               +------------+
        |            |               |            |
        |  Presence  |               |  Presence  |
        |   Server   |---------------|   Server   |
        |     1      |               |     2      |
        |            |               |            |
        +------------+               +------------+
             ^                          |     |
             |                          |     |
             |                          |     |
      ///-----------\\\                 |     |
     ||| specialized |||                |     |
     ||     state    ||                 |     |
      \\\-----------///                 |     |
                            ............|.....|...........
                            .           |     |          .
                            .           |     |          .
                            .        +---+  +--------+   .
                            .        |+-+|  |+------+|   .
                            .        |+-+|  ||      ||   .
                            .        |   |  |+------+|   .
                            .        |   |  +--------+   .
                            .        +---+  /------ /    .
                            .              /------ /     .
                            .             --------/      .
                            .                             .
                            ...............................

                                    Alice

                         Figure 9: Unioned Case 2

   Another use case for unioned bridging is subscriber moves.
   Consider a domain which uses multiple servers, typically running in
   a partitioned configuration.  The servers are organized regionally
   so that each user is served by a server handling their region.  A
   user is moving from one region to a new job in another, while
   retaining their SIP URI.  In order to provide a smooth transition,
   ideally the system would provide "make before break"
   functionality, allowing the user to be added to the new server
   prior to being removed from the old one.  During the transition
   period, especially if the user has multiple clients to be moved,
   they can end up with state existing on both servers at the same
   time.

8.1.  Hierarchical Model
   The unioned intra-domain bridging model can be realized in one of
   two ways - using a hierarchical structure or a peer structure.

   In the hierarchical model, presence subscriptions and IM requests
   for the target are always routed first to one of the servers - the
   root.  In the case of presence, the root has the final say on the
   structure of the presence document delivered to watchers.  It
   collects presence data from its child presence servers (through
   notifications or publishes received from them) and composes it into
   the final presence document.  In the case of IM, the root applies
   IM policy and then passes the IM on to the children for delivery.
   There can be multiple layers in the hierarchical model.  This is
   shown in Figure 10 for presence.

                                        +-----------+
                         *-----------*  |           |
                         |Auth and   |->| Presence  |  <--- root
                         |Composition|  |  Server   |
                         *-----------*  |           |
                                        +-----------+
                                         /         \
                                        /           \
                                       V             V
                     +-----------+                    +-----------+
      *-----------*  |           |     *-----------*  |           |
      |Auth and   |->| Presence  |     |Auth and   |->| Presence  |
      |Composition|  |  Server   |     |Composition|  |  Server   |
      *-----------*  |           |     *-----------*  |           |
                     +-----------+                    +-----------+
                      /         \
                     /           \
                    V             V
                     +-----------+                    +-----------+
      *-----------*  |           |     *-----------*  |           |
      |Auth and   |->| Presence  |     |Auth and   |->| Presence  |
      |Composition|  |  Server   |     |Composition|  |  Server   |
      *-----------*  |           |     *-----------*  |           |
                     +-----------+                    +-----------+

                      Figure 10: Hierarchical Model

   It is important to note that this hierarchy defines the sequence of
   presence composition and policy application, and does not imply a
   literal message flow.  As an example, consider once more the use
   case of Figure 8.  Assume that presence server 1 is the root, and
   presence server 2 is its child.  When Bob's PC subscribes to Bob's
   buddy list (on presence server 2), that subscription will first go
   to presence server 2.  However, that presence server knows that it
   is not the root in the hierarchy, and despite the fact that it has
   presence state for Alice (who is on Bob's buddy list), it creates a
   back-end subscription to presence server 1.  Presence server 1, as
   the root, subscribes to Alice's state at presence server 2.  Now,
   since this subscription came from presence server 1 and not from
   Bob directly, presence server 2 provides the presence state.  This
   is received at presence server 1, which composes the data with its
   own state for Alice, and then provides the results back to presence
   server 2, which, having acted as an RLS, forwards the results back
   to Bob.  Consequently, this flow, as a message sequence diagram,
   involves notifications passing from presence server 2, to server 1,
   and back to server 2.  However, in terms of composition and policy,
   composition was performed first at the child node (presence
   server 2), and those results were then used at the parent node
   (presence server 1).

   Note that we are assuming that presence servers subscribe to each
   other.  It is also possible that, given knowledge of the configured
   hierarchy, a presence server could send PUBLISH messages to other
   presence servers instead.  However, sending PUBLISH messages from
   one presence server to another would actually make the sending
   presence server a presence source, and not a presence server, from
   the point of view of the receiving presence server.  Therefore, we
   assume that presence servers subscribe to each other, though
   PUBLISH messages could be used instead of subscriptions if
   preferred.
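   In terms of data flow, the composition order just described can be
   summarized in a short sketch.  This is illustrative only and
   assumes a tree of nodes whose children, own_state() and compose()
   attributes are hypothetical placeholders for the configured
   hierarchy, the node's local presence state, and its composition
   policy.

   <CODE BEGINS>
   # Illustrative sketch of bottom-up composition in the hierarchy.
   # Each node composes the documents produced by its children
   # (obtained via back-end subscriptions) with its own state, so
   # that the root has the final say on the document delivered to
   # watchers.

   def composed_document(node, presentity):
       child_docs = [composed_document(child, presentity)
                     for child in node.children]
       return node.compose([node.own_state(presentity)] + child_docs)
   <CODE ENDS>

   Because every watcher's document is produced by the same bottom-up
   traversal, the result is independent of where the watcher connects,
   which is the consistency property discussed in the next section.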
However, a presence server that sends PUBLISH messages appears, from
the point of view of the receiving presence server, to be a presence
source rather than a presence server.  We therefore assume that
presence servers subscribe to each other, although PUBLISH messages
could be used instead of subscriptions if preferred.

8.1.1. Routing

In the hierarchical model, the servers need to collectively be
provisioned with the topology of the network.  This topology defines
the root and the parent/child relationships.  These relationships
could in fact be different on a user-by-user basis; however, this is
complex to manage.  In all likelihood, the parent and child
relationships are identical for each user.  The overall routing
algorithm can be described as follows:

o  If a SUBSCRIBE is received from the parent node for this
   presentity, perform subscriptions to each child node for this
   presentity, and then take the results, apply composition and
   authorization policies, and propagate to the parent.  If a node
   is the root, the logic here applies regardless of where the
   request came from.

o  If an IM request is received from the parent node for a user,
   perform IM processing and then proxy the request to each child IM
   server for this user.  If a node is the root, the logic here
   applies regardless of where the request came from.

o  If a request is received from a node that is not the parent node
   for this presentity, proxy the request to the parent node.  This
   includes cases where the node that sent the request is a child
   node.  Note that if the node that receives the request can send
   the request directly to the root, it should do so, thus reducing
   the traffic in the system.

This routing rule is relatively simple, and in a two-server system
is almost trivial to provision.  Interestingly, it works in cases
where some users are partitioned and some are unioned.  When the
users are partitioned, this routing algorithm devolves into the
forking algorithm of Section 6.2.5.  This points to the forking
algorithm as a good choice, since it can be used for both
partitioned and unioned deployments.

An important property of the routing in the hierarchical model is
that the sequence of composition and policy operations for any IM or
presence session is identical, regardless of the watcher or the
sender of the IM.  The result is that the overall presence state
provided to a watcher, and the overall IM behavior, is always
consistent and independent of the server the client is connected to.
We call this property the *consistency property*, and it is an
important metric in assessing the correctness of a bridged presence
and IM system.
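To illustrate the routing rule above, the following is a minimal,
non-normative sketch in Python.  Direct method calls stand in for
SIP SUBSCRIBE/NOTIFY traffic, dictionary merging stands in for
composition, and all names are illustrative assumptions rather than
protocol elements.

   <CODE BEGINS>
   class Server:
       def __init__(self, name, children=()):
           self.name = name
           self.parent = None
           self.children = list(children)
           for child in self.children:
               child.parent = self
           self.local_state = {}          # presentity -> partial doc

       def root(self):
           return self if self.parent is None else self.parent.root()

       def handle_subscribe(self, presentity, sender=None):
           # A request from anyone other than our parent is sent to
           # the root - directly, to reduce traffic in the system.
           if self.parent is not None and sender is not self.parent:
               return self.root().handle_subscribe(presentity)
           # From the parent, or at the root: subscribe to each
           # child, compose the results with local state, and pass
           # them up.  (Authorization would also be applied here.)
           doc = {}
           for child in self.children:
               doc.update(child.handle_subscribe(presentity,
                                                 sender=self))
           doc.update(self.local_state.get(presentity, {}))
           return doc

   # Figure 8 scenario: server 1 is the root, server 2 its child.
   s2 = Server("2")
   s2.local_state["alice"] = {"mobile": "open"}
   s1 = Server("1", children=[s2])
   s1.local_state["alice"] = {"desktop": "open"}

   # Bob's subscription arrives at server 2 and is routed via the
   # root, yielding the composed view of Alice's state.
   print(s2.handle_subscribe("alice"))
   # -> {'mobile': 'open', 'desktop': 'open'}
   <CODE ENDS>

Whichever server the subscription arrives at, the same composition
order results, which is exactly the consistency property described
above.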
8.1.2. Policy and Identity

Policy and identity are a clear challenge in the unioned model.

Firstly, since a user is provisioned on many servers, it is possible
that the identifier they utilize could be different on each server.
For example, on server 1, they could be joe@example.com, whereas on
server 2, they are joe.smith@example.com.  In cases where the
identifiers are not equivalent, a mapping function needs to be
provisioned.  This mapping ideally happens at the root server.

Secondly, the unioned model will result in back-end subscriptions
extending from one presence server to another presence server.
These subscriptions, though made by the presence server, need to be
made on behalf of the user that originally requested the presence
state of the presentity.  Since the presence server extending the
back-end subscription will often not have credentials to claim the
identity of the watcher, asserted identity techniques such as
P-Asserted-Identity [RFC3325] or authenticated identity [RFC4474]
are required, along with the associated trust relationships between
servers.  Optimizations, such as view sharing
[I-D.ietf-simple-view-sharing], can help improve performance.  The
same considerations apply for IM.

The principal challenge in a unioned model is policy, including both
authorization and composition policies.  There are three potential
solutions to the administration of policy in the hierarchical model
(only two of which apply in the peer model, as discussed below).
These are root-only, distributed provisioned, and central
provisioned.

8.1.2.1. Root Only

In the root-only policy model, authorization policy, IM policy, and
composition policy are applied only at the root of the tree.  This
is shown in Figure 11.

                           +-----------+
            *-----------*  |           |
            |           |->|           |  <--- root
            |  Policy   |  |  Server   |
            *-----------*  |           |
                           |           |
                           +-----------+
                            /        ---
                           /            ----
                          /                 ----
                         /                      ----
                        V                          -V
                +-----------+                  +-----------+
                |           |                  |           |
                |           |                  |           |
                |  Server   |                  |  Server   |
                |           |                  |           |
                |           |                  |           |
                +-----------+                  +-----------+
                 |        ---
                 |           -----
                 |                -----
                 |                     -----
                 |                          -----
                 V                             --V
                +-----------+                  +-----------+
                |           |                  |           |
                |           |                  |           |
                |  Server   |                  |  Server   |
                |           |                  |           |
                |           |                  |           |
                +-----------+                  +-----------+

                      Figure 11: Root Only

As long as a subscription request came from its parent, every child
presence server would automatically accept the subscription and
provide notifications containing the full presence state it is aware
of.  Similarly, any IM received from a parent would simply be
propagated onwards towards the children.  Any composition performed
by a child presence server would need to be lossless, in that it
fully combines the source data without loss of information, and it
would need to be done without any per-user provisioning or
configuration, operating in a default or administrator-provisioned
mode of operation.

The root-only model has the benefit that it requires the user to
provision policy in a single place (the root).  However, it has the
drawback that the composition and policy processing may be performed
very poorly.  Presumably, there are multiple presence servers in the
first place because each of them has a particular specialty.  That
specialty may be lost in the root-only model.  For example, if a
child server provides geolocation information, the root presence
server may not have sufficient authorization policy capabilities to
allow the user to manage how that geolocation information is
provided to watchers.
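The child-server behavior implied by this model can be sketched
non-normatively as follows.  The dict-based servers and the lossless
composition function are assumptions for illustration; the point is
simply that a child accepts anything its parent asks for and filters
nothing.

   <CODE BEGINS>
   def lossless_compose(docs):
       # Retain every value from every source; nothing is filtered,
       # since only the root applies policy in this model.
       merged = {}
       for doc in docs:
           for key, value in doc.items():
               values = value if isinstance(value, list) else [value]
               merged.setdefault(key, []).extend(values)
       return merged

   def child_answer(server, presentity, sender):
       if sender is not server["parent"]:
           raise PermissionError("non-parent requests go to the root")
       docs = [child_answer(c, presentity, server)
               for c in server["children"]]
       docs.append(server["state"].get(presentity, {}))
       return lossless_compose(docs)

   leaf = {"parent": None, "children": [],
           "state": {"alice": {"geoloc": "HQ"}}}
   mid = {"parent": None, "children": [leaf],
          "state": {"alice": {"status": "open"}}}
   leaf["parent"] = mid
   root = {"parent": None, "children": [mid], "state": {}}
   mid["parent"] = root

   print(child_answer(mid, "alice", root))
   # -> {'geoloc': ['HQ'], 'status': ['open']}
   <CODE ENDS>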
8.1.2.2. Distributed Provisioning

The distributed provisioned model looks exactly like the diagram of
Figure 10.  Each server is separately provisioned with its own
policies, including what users are allowed to watch, what presence
data they will get, how it will be composed, what IMs get blocked,
and so on.

One immediate concern is whether the overall policy processing, when
performed independently at each server, is consistent, sane, and
provides reasonable degrees of privacy.  It turns out that it can
be, if some guidelines are followed.

For presence, consider basic "yes/no" authorization policies.  Let's
say a presentity, Alice, provides an authorization policy on server
1 where Bob can see her presence, but on server 2 provides a policy
where Bob cannot.  If presence server 1 is the root, the
subscription is accepted there, but the back-end subscription to
presence server 2 would be rejected.  As long as presence server 1
then rejects the subscription, the system provides the correct
behavior.  This can be turned into a more general rule:

o  To guarantee privacy safety, if the back-end subscription
   generated by a presence server is denied, that server must deny
   the triggering subscription in turn, regardless of its own
   authorization policies.  This means that a presence server cannot
   send notifications on its own until it has confirmed
   subscriptions from downstream servers.

For IM, basic yes/no authorization policies work in a similar way.
If any one of the servers has a policy that says to block an IM, the
IM is not propagated further down the chain.  Whether the overall
system blocks IMs from a sender depends on the topology.  If there
is no forking in the hierarchy, the system has the property that, if
a sender is blocked at any server, the user is blocked overall.
However, in tree structures where there are multiple children, it is
possible that an IM could be delivered to some downstream clients
and not others.

Things get more complicated when one considers presence
authorization policies whose job is to block access to specific
pieces of information, as opposed to blocking a user completely.
For example, let's say Alice wants to allow Bob to see her presence,
but not her geolocation information.  She provisions a rule on
server 1 that blocks geolocation information, but grants it on
server 2.  The correct mode of operation in this case is that the
overall system will block geolocation from Bob.  But will it?  In
fact, it will, if a few additional guidelines are followed (a sketch
follows the list):

o  If a presence server adds any information to a presence document
   beyond the information received from its children, it must
   provide authorization policies that govern the access to that
   information.

o  If a presence server does not understand a piece of presence data
   provided by its child, it should not attempt to apply its own
   authorization policies to access of that information.

o  A presence server should not add information to a presence
   document that overlaps with information that can be added by its
   parent.  Of course, it is very hard for a presence server to know
   whether this information overlaps.  Consequently, provisioned
   composition rules will be required to realize this.
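The following non-normative sketch illustrates the effect of these
guidelines: each server applies its authorization rules only to the
presence elements it owns, and passes everything else through
untouched.  The element names and the rule format are illustrative
assumptions.

   <CODE BEGINS>
   def apply_local_policy(owned, rules, watcher, doc):
       out = {}
       for element, value in doc.items():
           if element in owned:
               # Our data: local rules decide whether this watcher
               # may see the element.
               if rules.get((watcher, element), False):
                   out[element] = value
           else:
               # Not our data: do not attempt to apply our own
               # policies to it; pass it through unchanged.
               out[element] = value
       return out

   # Server 1 owns geolocation and blocks it for Bob; server 2 (the
   # root) owns basic status and allows it.
   doc_from_server1 = apply_local_policy(
       owned={"geolocation"},
       rules={("bob", "geolocation"): False},
       watcher="bob",
       doc={"geolocation": "HQ", "status": "open"})

   doc_to_bob = apply_local_policy(
       owned={"status"},
       rules={("bob", "status"): True},
       watcher="bob",
       doc=doc_from_server1)

   print(doc_to_bob)  # -> {'status': 'open'}; geolocation blocked
   <CODE ENDS>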
If these rules are followed, the overall system provides privacy
safety, and the overall policy applied is reasonable.  This is
because these rules effectively segment the application of policy
for specific data to the servers that own the corresponding data.
For example, consider once more the geolocation use case described
above, and assume server 2 is the root.  If server 1 has access to,
and provides, geolocation information in the presence documents it
produces, then server 1 would be the only one to provide
authorization policies governing geolocation.  Server 2 would
receive presence documents from server 1 containing (or not)
geolocation, but since it doesn't provide or control geolocation, it
lets that information pass through.  Thus, the overall presence
document provided to the watcher will contain geolocation if Alice
wanted it to, and not otherwise, and the controls for access to
geolocation would exist only on server 1.

For more discussion regarding policy see Section 9.

8.1.2.3. Central Provisioning

The central provisioning model is a hybrid between root-only and
distributed provisioning.  Each server does in fact execute its own
authorization and composition policies.  However, rather than the
user provisioning them independently in each place, there is some
kind of central portal where the user provisions the rules, and that
portal generates policies for each specific server based on the data
that the corresponding server provides.  This is shown in Figure 12.

  +---------------------+
  | provisioning portal |.......................................
  +---------------------+                                      .
     .                                                         .
     .    *-----------*  +-----------+                         .
     .....|Auth and   |->| Presence  |  <--- root              .
     .    |Composition|  |  Server   |                         .
     .    *-----------*  |           |                         .
     .                   +-----------+                         .
     .                    /         ----                       .
     .                   /              ----                   .
     .                  V                   -V                 .
     .    *-----------*  +-----------+   +-----------+  *-----------*
     .....|Auth and   |->| Presence  |   | Presence  |<-|Auth and   |
     .    |Composition|  |  Server   |   |  Server   |  |Composition|
     .    *-----------*  |           |   |           |  *-----------*
     .                   +-----------+   +-----------+         .
     .                    /         ----                       .
     .                   /              ----                   .
     .                  V                   -V                 .
     .    *-----------*  +-----------+   +-----------+  *-----------*
     .....|Auth and   |->| Presence  |   | Presence  |<-|Auth and   |
          |Composition|  |  Server   |   |  Server   |  |Composition|
          *-----------*  |           |   |           |  *-----------*
                         +-----------+   +-----------+         .
                                                               .
      (dotted lines: provisioning of per-server policies).......

                Figure 12: Central Provisioning

Centralized provisioning combines the benefits of root-only (a
single point of user provisioning) with those of distributed
provisioning (utilizing the full capabilities of all servers).  Its
principal drawback is that it requires another component - the
portal - which can represent the union of the authorization policies
supported by each server, and then delegate those policies to each
corresponding server.
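As a rough illustration, the portal's job can be thought of as a
projection of one user-provisioned rule set onto per-server policy
documents, keyed by the data each server owns.  The rule format and
the capability table below are assumptions; a real deployment would
need a standardized policy representation.

   <CODE BEGINS>
   USER_RULES = {
       ("bob", "status"): True,
       ("bob", "geolocation"): False,
       ("carol", "status"): True,
   }

   SERVER_OWNS = {
       "server1": {"geolocation"},     # the geolocation specialist
       "server2": {"status"},          # the basic presence server
   }

   def project_policies(rules, server_owns):
       per_server = {name: {} for name in server_owns}
       for (watcher, element), allowed in rules.items():
           for name, owned in server_owns.items():
               if element in owned:
                   # Delegate the rule to the server owning the data.
                   per_server[name][(watcher, element)] = allowed
       return per_server

   for name, rules in project_policies(USER_RULES, SERVER_OWNS).items():
       print(name, rules)
   # server1 {('bob', 'geolocation'): False}
   # server2 {('bob', 'status'): True, ('carol', 'status'): True}
   <CODE ENDS>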
The other drawback of centralized provisioning is that it assumes
completely consistent policy decision making on each server.  There
is a rich set of possible policy decisions that can be taken by
servers, and this is often an area of differentiation.

8.1.2.4. Centralized PDP

The centralized provisioning model assumes that there is a single
point of policy administration, but that there is independent
decision making at each presence and IM server.  This only works in
cases where the decision function - the policy decision point - is
identical in each server.

An alternative model is to utilize a single point of policy
administration and a single point of policy decision making.  Each
presence server acts solely as an enforcement point, asking the
policy server (through a policy protocol of some sort) how to handle
the presence or IM.  The policy server then comes back with a policy
decision - whether to proceed with the subscription or IM, and how
to filter and process it.  This is shown in Figure 13.

    +------------+      +---------------+
    |Provisioning|=====>|Policy Decision|
    |  Portal    |      |  Point (PDP)  |
    +------------+      +---------------+
                         #   #   #   #   #
     ####################   #   #   #   #################
     #        ###############   #   ##########          #
     #        #        +-----------+        #           #
     #        #        |           |        #           #
     #        #        |           | .... root          #
     #        #        |  Server   |        #           #
     #        #        |           |        #           #
     #        #        |           |        #           #
     #        #        +-----------+        #           #
     #        #         /        ---        #           #
     #        #        /            ----    #           #
     #        #       V                 -V  #           #
     #        # +-----------+        +-----------+      #
     #        # |           |        |           |      #
     #        ##|           |        |           |#######
     #          |  Server   |        |  Server   |      #
     #          |           |        |           |      #
     #          |           |        |           |      #
     #          +-----------+        +-----------+      #
     #           |        ---                           #
     #           |           -----                      #
     #           |                -----                 #
     #           V                    --V               #
     #    +-----------+        +-----------+            #
     #    |           |        |           |            #
     #####|           |        |           |#############
          |  Server   |        |  Server   |
          |           |        |           |
          |           |        |           |
          +-----------+        +-----------+

     ===== Provisioning Protocol
     ##### Policy Protocol
     ----- SIP

                 Figure 13: Central PDP

The centralized PDP model provides the benefits of central
provisioning and consistent policy operation, and it decouples
policy decision making from presence and IM processing.  This
decoupling allows for multiple presence and IM servers, while still
allowing for a single policy function overall.  The individual
presence and IM servers don't need to know about the policies
themselves, or even know when they change.  Of course, if a server
is caching the results of a policy decision, change notifications
are required from the PDP to the server, informing it of the change
(alternatively, traditional TTL-based expirations can be used if
delay in updates is acceptable).

It is also possible to move the decision making process into each
server.  In that case, there is still a centralized policy portal
and a centralized repository of the policy data.  The interface
between the servers and the repository then becomes some kind of
standardized database interface.
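The division of labor between enforcement and decision points,
including the TTL-based caching of decisions mentioned above, can be
sketched non-normatively as follows.  The in-process method call
stands in for a real policy protocol, and all names are illustrative
assumptions.

   <CODE BEGINS>
   import time

   class DecisionPoint:
       """Single point of decision, provisioned via the portal."""
       def __init__(self, rules):
           self.rules = rules

       def decide(self, watcher, presentity, kind):
           allowed = self.rules.get((watcher, presentity, kind), False)
           return {"allow": allowed, "expires": time.time() + 300.0}

   class EnforcementPoint:
       """A presence/IM server holding no policy of its own."""
       def __init__(self, pdp):
           self.pdp = pdp
           self.cache = {}

       def check(self, watcher, presentity, kind):
           key = (watcher, presentity, kind)
           hit = self.cache.get(key)
           if hit is None or hit["expires"] < time.time():
               hit = self.cache[key] = self.pdp.decide(*key)
           return hit["allow"]

   pdp = DecisionPoint({("bob", "alice", "subscribe"): True})
   server1, server2 = EnforcementPoint(pdp), EnforcementPoint(pdp)
   print(server1.check("bob", "alice", "subscribe"))   # True
   print(server2.check("bob", "alice", "im"))          # False
   <CODE ENDS>

Because both servers consult the same decision point, policy
processing is consistent regardless of which server handles the
request.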
For the centralized and distributed provisioning approaches, and for
the centralized decision approach, the hierarchical model suffers
overall from the fact that the root of the policy processing may not
be tuned to the specific policy needs of the device that has
subscribed.  For example, in the use case of Figure 8, presence
server 1 may be providing composition policies tuned to the fact
that the device is wireless with a limited display.  Consequently,
when Bob subscribes from his mobile device and presence server 2 is
the root, presence server 2 may add additional data and provide an
overall presence document to the client which is not optimized for
that device.  This problem is one of the principal motivations for
the peer model, described below.

For more discussion regarding policy see Section 9.

8.1.3. Presence Data

The hierarchical model is based on the idea that each presence
server in the chain contributes some unique piece of presence
information, composing it with what it receives from its child, and
passing it on.  For the overall presence document to be reasonable,
several guidelines need to be followed:

o  A presence server must be prepared to receive documents from its
   children containing information that it does not understand, and
   to apply unioned composition policies that retain this
   information, adding to it the unique information it wishes to
   contribute.

o  A user interface rendering a presence document provided by its
   presence server must be prepared for any kind of presence
   document compliant to the presence data model, and must not
   assume a specific structure based on the limitations and
   implementation choices of the server to which it is paired.

If these basic rules are followed, the overall system provides
functionality equivalent to the combination of the presence
capabilities of the servers contained within it, which is highly
desirable.

8.1.4. Conversation Consistency

Unioned bridging introduces a particular challenge for conversation
consistency.  A user with multiple devices attached to multiple
servers could potentially try to participate in a conversation on
multiple devices at once.  This would clearly pose a challenge.
There are really two approaches that produce a sensible user
experience.

The first approach simulates the "phone experience" with IM.  When a
user (say Alice) sends an IM to Bob, and Bob is a unioned user with
two devices on two servers, Bob receives that IM on both devices.
However, when he "answers" by typing a reply from one of those
devices, the conversation continues only on that device.  The other
device on the other server receives no further IMs for this session
- either from Alice or from Bob.  Indeed, the IM window on Bob's
unanswered device may even disappear to emphasize this fact.

This mode of operation, which we'll call uni-device IM, is only
feasible with session mode IM, and its realization using traditional
SIP signaling is described in [RFC4975].

The second mode of operation, called multi-device IM, is more of a
conferencing experience.  The initial IM from Alice is delivered to
both of Bob's devices.  When Bob answers on one, that response is
shown to Alice but is also rendered on Bob's other device.
Effectively, we have set up an IM conference where each of Bob's
devices is an independent participant in the conference.  This model
is feasible with both session and pager mode IM; however,
conferencing works much better overall with session mode.
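The uni-device behavior described above can be sketched as a session
object that forks the initial IM to every device and binds the
session to whichever device answers first, dismissing the others.
This non-normative sketch is an assumption about one way to realize
the behavior, not a description of a standardized mechanism.

   <CODE BEGINS>
   class Device:
       def __init__(self, name):
           self.name = name

       def render(self, peer, text):
           print(f"[{self.name}] {peer}: {text}")

       def dismiss(self, peer):
           print(f"[{self.name}] window with {peer} closed")

   class IMSession:
       def __init__(self, peer, devices):
           self.peer = peer
           self.branches = list(devices)  # forked, unanswered legs
           self.answered = None           # device owning the session

       def deliver_initial(self, text):
           for device in self.branches:
               device.render(self.peer, text)

       def answer(self, device, reply):
           if self.answered is None:
               self.answered = device
               for other in self.branches:
                   if other is not device:
                       other.dismiss(self.peer)  # no further IMs
               self.branches = [device]
           print(f"to {self.peer}: {reply}")     # stand-in transport

   pc, mobile = Device("pc"), Device("mobile")
   session = IMSession("alice", [pc, mobile])
   session.deliver_initial("hi Bob")    # rendered on both devices
   session.answer(mobile, "hi Alice")   # pc's branch is dismissed
   <CODE ENDS>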
A related challenge is conversation history.  In the uni-device IM
mode, the past history for a user's conversation may be distributed
amongst the different servers, depending on which clients and
servers were involved in the conversation.  As with the exclusive
model, IM search and retrieval services may need to access all of
the servers on which a user might be located.  This is easier for
the unioned case than the exclusive one, since in the unioned case,
the user's location is on a fixed number of servers based on
provisioning.  This problem is even more complicated in IM page mode
when multiple devices are present, due to the limitations of page
mode in these configurations.

8.2. Peer Model

In the peer model, there is no single root.  When a watcher
subscribes to a presentity, that subscription is processed first by
the server to which the watcher is connected (effectively acting as
the root), and then the subscription is passed to other child
presence servers.  The same goes for IM; when a client sends an IM,
the IM is processed first by the server associated with the sender
(effectively acting as the root), and then the IM is passed to the
child IM servers.  In essence, in the peer model, there is a
per-client hierarchy, with the root being a function of the client.
Consider the use case in Figure 8.  If Bob has his buddy list on
presence server 1, and it contains Alice, presence server 1 acts as
the root, and then performs a back-end subscription to presence
server 2.  However, if Joe has his buddy list on presence server 2,
and his buddy list contains Alice, presence server 2 acts as the
root, and performs a back-end subscription to presence server 1.
Similarly, if Bob sends an IM to Alice, it is processed first by
server 1 and then server 2.  If Joe sends an IM to Alice, it is
first processed by server 2 and then server 1.  This is shown in
Figure 14.

           alice@example.com         alice@example.com
             +------------+          +------------+
             |            |<---------|            |<--------+
             |            |          |            |         |
   Connect   |   Server   |          |   Server   |         |
   Alice     |     1      |          |     2      | Connect |
   +-------->|            |--------->|            | Alice   |
   |         |            |          |            |         |
   |         +------------+          +------------+         |
   |               |                       |                |
   |               |                       |                |
   |               |                       |                |
 ..|......    .....|.......................|......    ......|.......
 . |     .    .    |                       |     .    .     |      .
 . |     .    .  +---+               +--------+  .    . +--------+ .
 .+---+  .    .  |+-+|               |+------+|  .    . |+------+| .
 .|+-+|  .    .  |+-+|               ||      ||  .    . ||      || .
 .|+-+|  .    .  |   |               ||      ||  .    . ||      || .
 .|   |  .    .  |   |               |+------+|  .    . |+------+| .
 .|   |  .    .  +---+               +--------+  .    . +--------+ .
 .+---+  .    .                       /------ /  .    .  /------ / .
 .       .    .                      /------ /   .    . /------ /  .
 .........    .                     --------/    .    . --------/  .
              .                                  .    .            .
              ....................................    ..............

     Bob                    Alice                         Joe

                     Figure 14: Peer Model

Whereas the hierarchical model clearly provides the consistency
property, it is not obvious whether a particular deployment of the
peer model provides the consistency property.  When policy decision
making is distributed amongst the servers, it ends up being a
function of the composition policies of the individual servers.  If
Pi() represents the composition and authorization policies of server
i, taking as input one or more presence documents provided by its
children and outputting a presence document, the overall system
provides consistency when:

      Pi(Pj(d)) = Pj(Pi(d))

for any input d, which is effectively the commutativity property.
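Whether a given pair of servers commutes can at least be
spot-checked over representative documents, as in the following
non-normative sketch.  The document model and the two toy policy
functions are assumptions for illustration.

   <CODE BEGINS>
   def p1(doc):
       # Server 1 contributes calendar state, filters geolocation.
       out = {k: v for k, v in doc.items() if k != "geolocation"}
       out["calendar"] = "in-meeting"
       return out

   def p2(doc):
       # Server 2 contributes basic status, touches nothing else.
       out = dict(doc)
       out["status"] = "open"
       return out

   samples = [{}, {"geolocation": "HQ"}, {"status": "closed"}]
   print(all(p1(p2(d)) == p2(p1(d)) for d in samples))   # True
   <CODE ENDS>

If the check fails for some document, watchers connected to
different servers may see different views of the same presentity.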
8.2.1. Routing

Routing in the peer model works similarly to the hierarchical model.
Each server would be configured with the children it has when it
acts as the root.  The overall presence routing algorithm then works
as follows:

o  If a presence server receives a subscription for a presentity
   from a particular watcher, and it already has a different
   subscription (as identified by dialog identifiers) for that
   presentity from that watcher, it rejects the second subscription
   with an indication of a loop.  Note that this algorithm rules out
   the possibility of two instances of the same watcher subscribing
   to the same presentity.

o  If a presence server receives a subscription for a presentity
   from a watcher and it doesn't have one yet for that pair, it
   processes it and generates back-end subscriptions to each
   configured child.  If a back-end subscription generates an error
   due to a loop, it proceeds without that back-end input.

The algorithm for IM routing works almost identically.

For example, consider Bob subscribing to Alice.  Bob's client is
supported by server 1.  Server 1 has not seen this subscription
before, so it acts as the root and passes it to server 2.  Server 2
hasn't seen it before, so it accepts it (now acting as the child)
and sends the subscription to its child, which is server 1.  Server
1 has already seen the subscription, so it rejects it.  Now server 2
basically knows it is the child, and so it generates documents with
just its own data.

As in the hierarchical case, it is possible to intermix partitioned
and peer models for different users.  In the partitioned case, the
routing for the peer model devolves into the forking routing
described in Section 6.2.5.  However, intermixing peer and exclusive
bridging for different users is challenging and is out of scope.
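The example above can be reproduced with a small non-normative
sketch.  A set of seen (watcher, presentity) pairs stands in for the
dialog-identifier check of a real server, and direct method calls
stand in for SIP traffic; all structures are illustrative.

   <CODE BEGINS>
   class LoopDetected(Exception):
       pass

   class PeerServer:
       def __init__(self, name):
           self.name = name
           self.children = []       # peers used when acting as root
           self.active = set()      # subscriptions already seen
           self.state = {}

       def subscribe(self, watcher, presentity):
           key = (watcher, presentity)
           if key in self.active:
               raise LoopDetected() # reject the second subscription
           self.active.add(key)
           doc = dict(self.state.get(presentity, {}))
           for peer in self.children:
               try:
                   doc.update(peer.subscribe(watcher, presentity))
               except LoopDetected:
                   pass             # proceed without that input
           return doc

   s1, s2 = PeerServer("1"), PeerServer("2")
   s1.children, s2.children = [s2], [s1]
   s1.state["alice"] = {"desktop": "open"}
   s2.state["alice"] = {"mobile": "open"}

   # Bob (on server 1) subscribes to Alice; server 2's back-end
   # subscription to server 1 is rejected as a loop, exactly as in
   # the example above.
   print(s1.subscribe("bob", "alice"))
   # -> {'desktop': 'open', 'mobile': 'open'}
   <CODE ENDS>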
8.2.2. Policy

The policy considerations for the peer model are very similar to
those of the hierarchical model.  However, the root-only policy
approach is nonsensical in the peer model, and cannot be utilized.
The distributed and centralized provisioning approaches both apply,
and the rules described above for generating correct results apply
in the peer model as well.

The centralized PDP model works particularly well in concert with
the peer model.  It allows for consistent policy processing
regardless of the type of rules, and has the benefit of a single
point of provisioning.  At the same time, it avoids the need to
define and maintain a single root; indeed, there is little benefit
to utilizing the hierarchical model when a centralized PDP is used.

The distributed processing model in the peer model, meanwhile,
eliminates the problem described in Section 8.1.2.3: composition and
authorization policies may be tuned to the needs of the specific
device that is connected.  In the hierarchical model, the wrong
server for a particular device may be at the root, and the resulting
presence document poorly suited to the consuming device.  This
problem is alleviated in the peer model.  The server that is paired
or tuned for that particular user or device is always at the root of
the tree, and its composition policies have the final say in how
presence data is presented to the watcher on that device.

For more discussion regarding policy see Section 9.

8.2.3. Presence Data

The considerations for presence data and composition in the
hierarchical model apply in the peer model as well.  The principal
issue is consistency, and whether the overall presence document for
a watcher is the same regardless of which server the watcher
connects from.  As mentioned above, consistency is a property of the
commutativity of composition, which may or may not hold depending on
the implementation.

Interestingly, in the use case of Figure 9, a particular user only
ever has devices on a single server, and thus the peer and
hierarchical models end up being the same, and consistency is
provided.

8.2.4. Conversation Consistency

The hierarchical and peer models have no impact on the issue of
conversation consistency; the problem exists identically for both
approaches.

9. More about Policy

Several of the models described in this document may create
ambiguity about what the subscribing user will see when policy is
not managed in a centralized way (Section 8.1.2.3, Section 8.1.2.4):

o  Exclusive model - One server may have a certain policy and the
   other server may have a different policy.  Bob, subscribing one
   day via server 1, will not be able to see Alice's presence, while
   he will be able to see it when he subscribes via server 2.

o  Hierarchical unioned model - Since only the root provides the
   presence information, all watchers will see the same presence
   information.  However, if there are contradicting rules on the
   servers, what the subscriber sees will in most cases be the
   strictest, most minimal view, and it will depend on the hierarchy
   of the servers.

o  Peer unioned model - One peer server may have a certain policy
   and the other server may have a different policy.  Bob,
   subscribing one day via peer server 1, will not be able to see
   Alice's presence, while he will be able to see it when he
   subscribes via peer server 2.

There are several reasons why a distributed policy model may be
needed:

o  Adapting policy to the presence server type - A particular
   presence server may be adapted to a specific type of presence
   information and devices.  It may be hard or infeasible to provide
   a centralized policy for all the types of presence servers and
   devices.

o  A presence server that is part of the intra-domain bridging may
   not be able to use centralized policy provisioning simply because
   it does not support this feature.

It is probable that, although confusing for users, distributed
provisioning will be used at least in the initial deployments of
intra-domain bridging, until standards for central policy
provisioning are developed and implemented by the various presence
servers.

10. Acknowledgements

The authors would like to thank Paul Fullarton, David Williams,
Sanjay Sinha, and Paul Kyzivat for their comments.  Thanks to Adam
Roach, Ben Campbell, Michael Froman, Alan Johnston, and Christer
Holmberg for their dedicated review.

11. Security Considerations

While the normal protocol mechanisms (such as those of SIP) for
securing presence and IM should be used, the principal issue in
intra-domain bridging is that of privacy.
It is important that the system meets user expectations, and even in
cases of user provisioning errors or inconsistencies, it provides
appropriate levels of privacy.  This is an issue in the unioned
models, where user privacy policies can exist on multiple servers at
the same time.  The guidelines described here for authorization
policies help ensure that privacy properties are maintained.

12. IANA Considerations

There are no IANA considerations associated with this specification.

13. Informative References

[RFC2778]   Day, M., Rosenberg, J., and H. Sugano, "A Model for
            Presence and Instant Messaging", RFC 2778,
            February 2000.

[RFC3261]   Rosenberg, J., Schulzrinne, H., Camarillo, G., Johnston,
            A., Peterson, J., Sparks, R., Handley, M., and E.
            Schooler, "SIP: Session Initiation Protocol", RFC 3261,
            June 2002.

[RFC3265]   Roach, A., "Session Initiation Protocol (SIP)-Specific
            Event Notification", RFC 3265, June 2002.

[RFC3325]   Jennings, C., Peterson, J., and M. Watson, "Private
            Extensions to the Session Initiation Protocol (SIP) for
            Asserted Identity within Trusted Networks", RFC 3325,
            November 2002.

[RFC3428]   Campbell, B., Rosenberg, J., Schulzrinne, H., Huitema,
            C., and D. Gurle, "Session Initiation Protocol (SIP)
            Extension for Instant Messaging", RFC 3428,
            December 2002.

[RFC3680]   Rosenberg, J., "A Session Initiation Protocol (SIP)
            Event Package for Registrations", RFC 3680, March 2004.

[RFC3856]   Rosenberg, J., "A Presence Event Package for the Session
            Initiation Protocol (SIP)", RFC 3856, August 2004.

[RFC3863]   Sugano, H., Fujimoto, S., Klyne, G., Bateman, A., Carr,
            W., and J. Peterson, "Presence Information Data Format
            (PIDF)", RFC 3863, August 2004.

[RFC3903]   Niemi, A., "Session Initiation Protocol (SIP) Extension
            for Event State Publication", RFC 3903, October 2004.

[RFC3944]   Johnson, T., Okubo, S., and S. Campos, "H.350 Directory
            Services", RFC 3944, December 2004.

[RFC4474]   Peterson, J. and C. Jennings, "Enhancements for
            Authenticated Identity Management in the Session
            Initiation Protocol (SIP)", RFC 4474, August 2006.

[RFC4479]   Rosenberg, J., "A Data Model for Presence", RFC 4479,
            July 2006.

[RFC4662]   Roach, A., Campbell, B., and J. Rosenberg, "A Session
            Initiation Protocol (SIP) Event Notification Extension
            for Resource Lists", RFC 4662, August 2006.

[RFC4975]   Campbell, B., Mahy, R., and C. Jennings, "The Message
            Session Relay Protocol (MSRP)", RFC 4975,
            September 2007.

[RFC5344]   Houri, A., Aoki, E., and S. Parameswar, "Presence and
            Instant Messaging Peering Use Cases", RFC 5344,
            October 2008.

[I-D.ietf-simple-view-sharing]
            Rosenberg, J., Donovan, S., and K. McMurry, "Optimizing
            Federated Presence with View Sharing",
            draft-ietf-simple-view-sharing-02 (work in progress),
            November 2008.

Authors' Addresses

Jonathan Rosenberg
jdrosen.net
Monmouth, NJ
US

Email: jdrosen@jdrosen.net
URI:   http://www.jdrosen.net

Avshalom Houri
IBM
Science Park, Rehovot
Israel

Email: avshalom@il.ibm.com

Colm Smyth
Avaya
Dublin 18, Sandyford Business Park
Ireland

Email: smythc@avaya.com

Francois Audet
Skype

Email: francois.audet@skype.net