2 SIMPLE J. Rosenberg 3 Internet-Draft Cisco 4 Intended status: Informational A. Houri 5 Expires: January 14, 2010 IBM 6 C. Smyth 7 Avaya 8 F. Audet 9 Nortel 10 July 13, 2009 12 Models for Intra-Domain Presence and Instant Messaging (IM) Bridging 13 draft-ietf-simple-intradomain-federation-04 15 Status of this Memo 17 This Internet-Draft is submitted to IETF in full conformance with the 18 provisions of BCP 78 and BCP 79. This document may contain material 19 from IETF Documents or IETF Contributions published or made publicly 20 available before November 10, 2008. The person(s) controlling the 21 copyright in some of this material may not have granted the IETF 22 Trust the right to allow modifications of such material outside the 23 IETF Standards Process.
Without obtaining an adequate license from 24 the person(s) controlling the copyright in such materials, this 25 document may not be modified outside the IETF Standards Process, and 26 derivative works of it may not be created outside the IETF Standards 27 Process, except to format it for publication as an RFC or to 28 translate it into languages other than English. 30 Internet-Drafts are working documents of the Internet Engineering 31 Task Force (IETF), its areas, and its working groups. Note that 32 other groups may also distribute working documents as Internet- 33 Drafts. 35 Internet-Drafts are draft documents valid for a maximum of six months 36 and may be updated, replaced, or obsoleted by other documents at any 37 time. It is inappropriate to use Internet-Drafts as reference 38 material or to cite them other than as "work in progress." 40 The list of current Internet-Drafts can be accessed at 41 http://www.ietf.org/ietf/1id-abstracts.txt. 43 The list of Internet-Draft Shadow Directories can be accessed at 44 http://www.ietf.org/shadow.html. 46 This Internet-Draft will expire on January 14, 2010. 48 Copyright Notice 49 Copyright (c) 2009 IETF Trust and the persons identified as the 50 document authors. All rights reserved. 52 This document is subject to BCP 78 and the IETF Trust's Legal 53 Provisions Relating to IETF Documents in effect on the date of 54 publication of this document (http://trustee.ietf.org/license-info). 55 Please review these documents carefully, as they describe your rights 56 and restrictions with respect to this document. 58 Abstract 60 Presence and Instant Messaging (IM) bridging involves the sharing of 61 presence information and exchange of IM across multiple systems 62 within a single domain. As such, it is a close cousin to presence 63 and IM federation, which involves the sharing of presence and IM 64 across differing domains. Presence and IM bridging can be the result 65 of a multi-vendor network, or a consequence of a large organization 66 that requires partitioning. This document examines different use 67 cases and models for intra-domain presence and IM bridging. It is 68 meant to provide a framework for defining requirements and 69 specifications for presence and IM bridging. 71 Table of Contents 73 1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . . 5 74 2. Intra-Domain Bridging vs. Clustering . . . . . . . . . . . . . 8 75 3. Use Cases for Intra-Domain Bridging . . . . . . . . . . . . . 9 76 3.1. Scale . . . . . . . . . . . . . . . . . . . . . . . . . . 9 77 3.2. Organizational Structures . . . . . . . . . . . . . . . . 9 78 3.3. Multi-Vendor Requirements . . . . . . . . . . . . . . . . 10 79 3.4. Specialization . . . . . . . . . . . . . . . . . . . . . . 10 80 4. Considerations for Bridging Models . . . . . . . . . . . . . . 11 81 5. Overview of the Models . . . . . . . . . . . . . . . . . . . . 11 82 6. Partitioned . . . . . . . . . . . . . . . . . . . . . . . . . 13 83 6.1. Applicability . . . . . . . . . . . . . . . . . . . . . . 14 84 6.2. Routing . . . . . . . . . . . . . . . . . . . . . . . . . 14 85 6.2.1. Centralized Database . . . . . . . . . . . . . . . . . 15 86 6.2.2. Routing Proxy . . . . . . . . . . . . . . . . . . . . 16 87 6.2.3. Subdomaining . . . . . . . . . . . . . . . . . . . . . 17 88 6.2.4. Peer-to-Peer . . . . . . . . . . . . . . . . . . . . . 19 89 6.2.5. Forking . . . . . . . . . . . . . . . . . . . . . . . 19 90 6.2.6. Provisioned Routing . . . . . . . . . . . . . . . . . 19 91 6.3. Policy . . . . . . . . . . 
. . . . . . . . . . . . . . . . 20 92 6.4. Presence Data . . . . . . . . . . . . . . . . . . . . . . 20 93 6.5. Conversation Consistency . . . . . . . . . . . . . . . . . 20 94 7. Exclusive . . . . . . . . . . . . . . . . . . . . . . . . . . 20 95 7.1. Routing . . . . . . . . . . . . . . . . . . . . . . . . . 21 96 7.1.1. Centralized Database . . . . . . . . . . . . . . . . . 22 97 7.1.2. Routing Proxy . . . . . . . . . . . . . . . . . . . . 22 98 7.1.3. Subdomaining . . . . . . . . . . . . . . . . . . . . . 23 99 7.1.4. Peer-to-Peer . . . . . . . . . . . . . . . . . . . . . 23 100 7.1.5. Forking . . . . . . . . . . . . . . . . . . . . . . . 23 101 7.2. Policy . . . . . . . . . . . . . . . . . . . . . . . . . . 24 102 7.3. Presence Data . . . . . . . . . . . . . . . . . . . . . . 24 103 7.4. Conversation Consistency . . . . . . . . . . . . . . . . . 25 104 8. Unioned . . . . . . . . . . . . . . . . . . . . . . . . . . . 25 105 8.1. Hierarchical Model . . . . . . . . . . . . . . . . . . . . 29 106 8.1.1. Routing . . . . . . . . . . . . . . . . . . . . . . . 31 107 8.1.2. Policy and Identity . . . . . . . . . . . . . . . . . 32 108 8.1.2.1. Root Only . . . . . . . . . . . . . . . . . . . . 33 109 8.1.2.2. Distributed Provisioning . . . . . . . . . . . . . 34 110 8.1.2.3. Central Provisioning . . . . . . . . . . . . . . . 36 111 8.1.2.4. Centralized PDP . . . . . . . . . . . . . . . . . 38 112 8.1.3. Presence Data . . . . . . . . . . . . . . . . . . . . 40 113 8.1.4. Conversation Consistency . . . . . . . . . . . . . . . 41 114 8.2. Peer Model . . . . . . . . . . . . . . . . . . . . . . . . 41 115 8.2.1. Routing . . . . . . . . . . . . . . . . . . . . . . . 43 116 8.2.2. Policy . . . . . . . . . . . . . . . . . . . . . . . . 44 117 8.2.3. Presence Data . . . . . . . . . . . . . . . . . . . . 44 118 8.2.4. Conversation Consistency . . . . . . . . . . . . . . . 45 120 9. More about Policy . . . . . . . . . . . . . . . . . . . . . . 45 121 10. Acknowledgements . . . . . . . . . . . . . . . . . . . . . . . 46 122 11. Security Considerations . . . . . . . . . . . . . . . . . . . 46 123 12. IANA Considerations . . . . . . . . . . . . . . . . . . . . . 46 124 13. Informative References . . . . . . . . . . . . . . . . . . . . 46 125 Authors' Addresses . . . . . . . . . . . . . . . . . . . . . . . . 47 127 1. Introduction 129 Presence refers to the ability, willingness and desire to communicate 130 across differing devices, mediums and services [RFC2778]. Presence 131 is described using presence documents [RFC3863] [RFC4479], exchanged 132 using a Session Initiation Protocol (SIP) [RFC3261] based event 133 package [RFC3856]. Similarly, instant messaging refers to the 134 exchange of real-time text-oriented messaging between users. SIP 135 defines two mechanisms for IM - pager mode [RFC3428] and session mode 136 [RFC4975]. 138 Presence and Instant Messaging (IM) bridging involves the sharing of 139 presence information and exchange of IM across multiple systems 140 within a single domain. As such, it is a close cousin to presence 141 and IM federation [RFC5344], which involves the sharing of presence 142 and IM across differing domains. 144 For example, consider the network of Figure 1, which shows one model 145 for inter-domain presence federation. In this network, Alice belongs 146 to the example.org domain, and Bob belongs to the example.com domain. 
147 Alice subscribes to her buddy list on her presence server (which is 148 also acting as her Resource List Server (RLS) [RFC4662]), and that 149 list includes bob@example.com. Alice's presence server generates a 150 back-end subscription on the federated link between example.org and 151 example.com. The example.com presence server authorizes the 152 subscription, and if permitted, generates notifications back to 153 Alice's presence server, which are in turn passed to Alice. 155 ............................. .............................. 156 . . . . 157 . . . . 158 . alice@example.org . . bob@example.com . 159 . +------------+ SUB . . +------------+ . 160 . | | Bob . . | | . 161 . | Presence |------------------->| Presence | . 162 . | Server | . . | Server | . 163 . | | . . | | . 164 . | |<-------------------| | . 165 . | | NOTIFY . | | . 166 . +------------+ . . +------------+ . 167 . ^ | . . ^ . 168 . SUB | | . . |PUB . 169 . Buddy | |NOTIFY . . | . 170 . List | | . . | . 171 . | | . . | . 172 . | V . . | . 173 . +-------+ . . +-------+ . 174 . | | . . | | . 175 . | | . . | | . 176 . | | . . | | . 177 . +-------+ . . +-------+ . 178 . . . . 179 . Alice's . . Bob's . 180 . PC . . PC . 181 . . . . 182 ............................. .............................. 184 example.org example.com 186 Figure 1: Inter-Domain Presence Model 188 Similarly, inter-domain IM federation would look like the model shown 189 in Figure 2: 191 ............................. .............................. 192 . . . . 193 . . . . 194 . alice@example.org . . bob@example.com . 195 . +------------+ INV . . +------------+ . 196 . | | Bob . . | | . 197 . | |------------------->| | . 198 . | IM | . . | IM | . 199 . | Server | . . | Server | . 200 . | |<------------------>| | . 201 . | | IM | | . 202 . +------------+ Content +------------+ . 203 . ^ ^ . . ^ | . 204 . INVITE | | . . IM | |INV . 205 . Bob | | IM . . Content| |Bob . 206 . | | Content . . | | . 207 . | | . . | | . 208 . | V . . V V . 209 . +-------+ . . +-------+ . 210 . | | . . | | . 211 . | | . . | | . 212 . | | . . | | . 213 . +-------+ . . +-------+ . 214 . . . . 215 . Alice's . . Bob's . 216 . PC . . PC . 217 . . . . 218 ............................. .............................. 220 example.org example.com 222 Figure 2: Inter-Domain IM Model 224 In this model, example.org and example.com both have an "IM server". 225 This would typically be a SIP proxy or B2BUA responsible for handling 226 both the signaling and the IM content (as these are separate in the 227 case of session mode). The IM server would handle routing of the IM 228 along with application of IM policy. 230 Though both of these pictures show federation between domains, a 231 similar interconnection - presence and IM bridging - can happen 232 within a domain as well. We define intra-domain bridging as the 233 interconnection of presence and IM servers within a single 234 administrative domain. Typically, a single administrative domain 235 often means the same DNS domain within the right hand side of the 236 @-sign in the SIP URI. 238 This document considers the architectural models and different 239 problems that arise when performing intra-domain presence and IM 240 bridging. Though presence and IM are quite distinct functions, this 241 document considers both since the architectural models and issues are 242 common between the two. The document first clarifies the distinction 243 between intra-domain bridging and clustering. 
It defines the primary 244 issues that arise in intra-domain presence and IM bridging, and then 245 goes on to define the three primary models for it - partitioned, 246 unioned and exclusive. 248 This document doesn't make any recommendation as to which model is 249 best. Each model has different areas of applicability and is 250 appropriate in particular deployments. The intent is to provide 251 informative material and ideas on how this can be done. 253 2. Intra-Domain Bridging vs. Clustering 255 Intra-domain bridging is the interconnection of servers within a 256 single domain. This is very similar to clustering, which is the 257 tight coupling of a multiplicity of physical servers to realize scale 258 and/or high availability. Consequently, it is important to clarify 259 the differences. 261 Firstly, clustering implies a tight coupling of components. 262 Clustering usually involves proprietary information sharing, such as 263 database replication and state sharing, which in turn are tightly 264 bound with the internal implementation of the product. Intra-domain 265 bridging, on the other hand, is a loose coupling. Database 266 replication or state replication across federated systems is 267 extremely rare (though a database and DB replication might be used 268 within a component providing routing functions to facilitate 269 bridging). 271 Secondly, clustering most usually occurs amongst components from the 272 same vendor. This is due to the tight coupling described above. 273 Intra-domain bridging, on the other hand, can occur between servers 274 from different vendors. As described below, this is one of the chief 275 use cases for intra-domain bridging. 277 Thirdly, clustering is almost always invisible to users. 278 Communications between users within the same cluster almost always 279 have identical functionality to communications between users on the 280 same server within the cluster. The cluster boundaries are 281 invisible; indeed the purpose of a cluster is to build a system which 282 behaves as if it were a single monolithic entity, even though it is 283 not. Bridging, on the other hand, is often visible to users. There 284 will frequently be loss of functionality when crossing a bridge. 285 Though this is not a hard and fast rule, it is a common 286 differentiator. 288 Fourthly, connections between federated and bridged systems almost 289 always involve standards, whereas communications within a cluster 290 often involve proprietary mechanisms. Standards are needed for 291 bridging because the systems can be from different vendors, and thus 292 agreement is needed to enable interoperation. 294 Finally, a cluster will often have an upper bound on its size and 295 capacity, due to some kind of constraint on the coupling between 296 nodes in the cluster. However, there is typically no limit, or a 297 much larger limit, on the number of bridged systems that can be put 298 into a domain. This is a consequence of their loose coupling. 300 Though these rules are not hard and fast, they give general 301 guidelines on the differences between clustering and intra-domain 302 bridging. 304 3. Use Cases for Intra-Domain Bridging 306 There are several use cases that drive intra-domain bridging. 308 3.1. Scale 310 One common use case for bridging is an organization that is just very 311 large, whose size exceeds the capacity that a single server or 312 cluster can provide.
So, instead, the domain breaks its users into 313 partitions (perhaps arbitrarily) and then uses intra-domain bridging 314 to allow the overall system to scale up to arbitrary sizes. This is 315 common practice today for service providers and large enterprises. 317 3.2. Organizational Structures 319 Another use case for intra-domain bridging is a multi-national 320 organization with regional IT departments, each of which supports a 321 particular set of nationalities. It is very common for each regional 322 IT department to deploy and run its own servers for its own 323 population. In that case, the domain would end up being composed of 324 the presence servers deployed by each regional IT department. 325 Indeed, in many organizations, each regional IT department might end 326 up using different vendors. This can be a consequence of differing 327 regional requirements for features (such as compliance or 328 localization support), differing sales channels and markets in which 329 vendors sell, and so on. 331 3.3. Multi-Vendor Requirements 333 Another use case for intra-domain bridging is an organization that 334 requires multiple vendors for each service, in order to avoid vendor 335 lock in and drive competition between its vendors. Since the servers 336 will come from different vendors, a natural way to deploy them is to 337 partition the users across them. Such multi-vendor networks are 338 extremely common in large service provider networks, many of which 339 have hard requirements for multiple vendors. 341 Typically, the vendors are split along geographies, often run by 342 different local IT departments. As such, this case is similar to the 343 organizational division above. 345 3.4. Specialization 347 Another use case is where certain vendors might specialize in 348 specific types of clients. For example, one vendor might provide a 349 mobile client (but no desktop client), while another provides a 350 desktop client but no mobile client. It is often the case that 351 specific client applications and devices are designed to only work 352 with their corresponding servers. In an ideal world, clients would 353 all implement to standards and this would not happen, but in current 354 practice, the vast majority of presence and IM endpoints work only 355 (or only work well) with the server from the same vendor. A domain 356 might want each user to have both a mobile client and a desktop 357 client, which will require servers from each vendor, leading to 358 intra-domain bridging. 360 Similarly, presence can contain rich information, including 361 activities of the user (such as whether they are in a meeting or on 362 the phone), their geographic location, and their mood. This presence 363 state can be determined manually (where the user enters and updates 364 the information), or automatically. Automatic determination of these 365 states is far preferable, since it puts less burden on the user. 366 Determination of these presence states is done by taking "raw" data 367 about the user, and using it to generate corresponding presence 368 states. This raw data can come from any source that has information 369 about the user, including their calendaring server, their VoIP 370 infrastructure, their VPN server, their laptop operating system, and 371 so on. Each of these components is typically made by different 372 vendors, each of which is likely to integrate that data with their 373 presence servers. 
Consequently, presence servers from different 374 vendors are likely to specialize in particular pieces of presence 375 data, based on the other infrastructure they provide. The overall 376 network will need to contain servers from those vendors, composing 377 together the various sources of information, in order to combine 378 their benefits. This use case is specific to presence, and results 379 in intra-domain bridging. 381 4. Considerations for Bridging Models 383 When considering architectures for intra-domain presence and IM 384 bridging, several issues need to be considered. The first two of 385 these apply to both IM and presence (and indeed to any intra-domain 386 communications, including voice). The latter two are specific to 387 presence and IM respectively: 389 Routing: How are subscriptions and IMs routed to the right presence 390 and IM server(s)? This issue is more complex in intra-domain 391 models, since the right hand side of the @-sign cannot be used to 392 perform this routing. 394 Policy and Identity: Where do user policies reside, and what 395 presence and IM server(s) are responsible for executing that 396 policy? What identities does the user have in each system and how 397 do they relate? 399 Presence Data Ownership: Which presence servers are responsible for 400 which pieces of presence information, and how are those pieces 401 composed to form a coherent and consistent view of user presence? 403 Conversation Consistency: When considering instant messaging, if IM 404 can be delivered to multiple servers, how do we make sure that the 405 overall conversation is coherent to the user? 407 The sections below describe several different models for intra-domain 408 bridging. Each model is driven by a set of use cases, which are 409 described in an applicability subsection for each model. Each model 410 description also discusses how routing, policy, presence data 411 ownership and conversation consistency work. 413 5. Overview of the Models 415 There are three models for intra-domain bridging. These are 416 partitioned, exclusive, and unioned. They can be explained relative 417 to each other via a decision tree. 419 +--------------+ 420 | | 421 | Is a user | one 422 | managed by |-----> PARTITIONED 423 | one system | 424 | or more than| 425 | one? | 426 +--------------+ 427 | 428 | more 429 V 431 +-----------------+ 432 | Can the user | 433 | be managed by | no 434 | more than one |---> EXCLUSIVE 435 | system at the | 436 | same time? | 437 +-----------------+ 438 | 439 |yes 440 | 441 V 443 UNIONED 445 Figure 3: Decision Tree 447 The first question is whether any particular user is 'managed' by 448 just one system, or by more than one. Here, 'managed' means that the 449 user is provisioned on the system, and can use it for some kind of 450 presence and IM services. In the partitioned model, the answer is 451 one - a user is on only one system. In that way, partitioned federation 452 is analogous to an inter-domain model where a user is handled by a 453 single domain. 455 If a user is 'managed' by more than one system, is it more than one 456 at the same time, or only one at a time? In the exclusive model, it is 457 one at a time. The user can log into one system, log out, and then 458 log into the other. For example, a user might have a PC client 459 connected to system one, and a different PC client connected to 460 system two. They can use one or the other, but not both. In unioned 461 federation, they can be connected to more than one at the same time. 462 For example, a user might have a mobile client connected to one 463 system, and a PC client connected to another.
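The decision tree of Figure 3 is easy to capture in code. The sketch below is a Python illustration only - the function and its arguments are not defined by any specification - but it encodes exactly the two questions in the figure:

   def bridging_model(systems_managing_user, concurrent_use_allowed):
       # Figure 3, first question: is the user managed by one system?
       if systems_managing_user == 1:
           return "partitioned"
       # Second question: more than one system, but only one at a time?
       if not concurrent_use_allowed:
           return "exclusive"
       # More than one system at the same time.
       return "unioned"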
465 6. Partitioned 467 In the partitioned model, a single domain has a multiplicity of 468 servers, each of which manages a non-overlapping set of users. That 469 is, for each user in the domain, their presence data, policy and IM 470 handling reside on a single server. Each "single server" may in fact 471 be a cluster. 473 Another important facet of the partitioned model is that, even though 474 users are partitioned across different servers, they each share the 475 same domain name in the right hand side of their URI, and this URI is 476 what those users use when communicating with other users both inside 477 and outside of the domain. There are many reasons why a domain would 478 want all of its users to share the same right-hand side of the @-sign 479 even though it is partitioned internally: 481 o The partitioning may reflect organizational or geographical 482 structures that a domain administrator does not want to reflect 483 externally. 485 o If each partition had a separate domain name (e.g., 486 engineering.example.com and sales.example.com), a user who changed 487 organizations would need to change their URI. 489 o For reasons of vanity, users often like their URIs (which 490 appear on business cards, email, and so on) to be brief and 491 short. 493 o If a watcher wants to add a presentity based on username and does 494 not want to know, or does not know, which subdomain or internal 495 department the presentity belongs to, a single domain is needed. 497 This model is illustrated in Figure 4. As the model shows, the 498 domain example.com has six users across three servers, each of which 499 is handling two of the users. 501 ..................................................................... 502 . . 503 . . 504 . . 505 . joe@example.com alice@example.com padma@example.com . 506 . bob@example.com zeke@example.com hannes@example.com . 507 . +-----------+ +-----------+ +-----------+ . 508 . | | | | | | . 509 . | Server | | Server | | Server | . 510 . | 1 | | 2 | | 3 | . 511 . | | | | | | . 512 . +-----------+ +-----------+ +-----------+ . 513 . . 514 . . 515 . . 516 . example.com . 517 ..................................................................... 519 Figure 4: Partitioned Model 521 6.1. Applicability 523 The partitioned model arises naturally in larger domains, such as an 524 enterprise or service provider, where issues of scale, organizational 525 structure, or multi-vendor requirements cause the domain to be 526 managed by a multiplicity of independent servers. 528 In cases where each user has an AoR that directly points to its 529 partition (for example, us.example.com), that model becomes identical 530 to the inter-domain federated model and is not treated here further. 532 6.2. Routing 534 The partitioned intra-domain model works almost identically to an 535 inter-domain federated model, with the primary difference being 536 routing. In inter-domain federation, the domain part of the URI can 537 be used to route presence subscriptions and IM messages from one 538 domain to the other. This is no longer the case in an intra-domain 539 model. Consider the case where Joe subscribes to his buddy list, 540 which is served by his presence server (server 1 in Figure 4). Alice 541 is a member of Joe's buddy list. How does server 1 know that the 542 back-end subscription to Alice needs to get routed to server 2?
544 There are several techniques that can be used to solve this problem, 545 which are outlined in the subsections below. 547 6.2.1. Centralized Database 549 ..................................................................... 550 . +-----------+ . 551 . alice? | | . 552 . +---------------> | Database | . 553 . | server 2 | | . 554 . | +-------------| | . 555 . | | +-----------+ . 556 . | | . 557 . | | . 558 . | | . 559 . | | . 560 . | | . 561 . | | . 562 . | V . 563 . joe@example.com alice@example.com padma@example.com . 564 . bob@example.com zeke@example.com hannes@example.com . 565 . +-----------+ +-----------+ +-----------+ . 566 . | | | | | | . 567 . | Server | | Server | | Server | . 568 . | 1 | | 2 | | 3 | . 569 . | | | | | | . 570 . +-----------+ +-----------+ +-----------+ . 571 . . 572 . . 573 . . 574 . example.com . 575 ..................................................................... 577 Figure 5: Centralized DB 579 One solution is to rely on a common, centralized database that 580 maintains mappings of users to specific servers, shown in Figure 5. 581 When Joe subscribes to his buddy list that contains Alice, server 1 582 would query this database, asking it which server is responsible for 583 alice@example.com. The database would indicate server 2, and then 584 server 1 would generate the backend SUBSCRIBE request towards server 585 2. Similarly, when Joe sends an INVITE to establish an IM session 586 with Padma, he would send the IM to his IM server, and it would query 587 the database to find out that Padma is supported on server 3. This 588 is a common technique in large email systems. It is often 589 implemented using internal sub-domains, so that the database would 590 return alice@central.example.com to the query, and server 1 would 591 modify the Request-URI in the request to reflect this. 593 Routing database solutions have the problem that they require 594 standardization on a common schema and database protocol in order to 595 work in multi-vendor environments. For example, LDAP and SQL are 596 both possibilities. There is variety in LDAP schema; one possibility 597 is H.350.4, which could be adapted for usage here [RFC3944]. 599 6.2.2. Routing Proxy 601 ..................................................................... 602 . +-----------+ . 603 . SUB/INV alice | | . 604 . +---------------> | Routing | . 605 . | | Proxy | . 606 . | | | . 607 . | +-----------+ . 608 . | | . 609 . | | . 610 . | | . 611 . | |SUB/INV alice . 612 . | | . 613 . | | . 614 . | V . 615 . joe@example.com alice@example.com padma@example.com . 616 . bob@example.com zeke@example.com hannes@example.com . 617 . +-----------+ +-----------+ +-----------+ . 618 . | | | | | | . 619 . | Server | | Server | | Server | . 620 . | 1 | | 2 | | 3 | . 621 . | | | | | | . 622 . +-----------+ +-----------+ +-----------+ . 623 . . 624 . . 625 . . 626 . example.com . 627 ..................................................................... 629 Figure 6: Routing Proxy 631 A similar solution is to rely on a routing proxy or B2BUA. Instead 632 of a centralized database, there would be a centralized SIP proxy 633 farm. Server 1 would send requests (SUBSCRIBE, INVITE, etc.) for 634 users it doesn't serve to this server farm, and those servers would 635 look up the user in a database (which is now accessed only by the 636 routing proxy), and the resulting requests are sent to the correct 637 server. A redirect server can be used as well, in which case the 638 flow is very much like that of a centralized database, but uses SIP.
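As an illustration, the core behavior of such a routing proxy reduces to a lookup followed by a forward. The following Python sketch is hypothetical - the mapping table and the helper functions stand in for the database access and the SIP transaction layer - but it shows the shape of the logic:

   # Hypothetical user-to-server mapping, held in the proxy's database.
   SERVER_BY_USER = {
       "alice@example.com": "server2.example.com",
       "padma@example.com": "server3.example.com",
   }

   def proxy_to(request, server):
       print("proxying", request["method"], "to", server)  # placeholder

   def respond(request, status):
       print("responding with", status)                    # placeholder

   def route(request):
       server = SERVER_BY_USER.get(request["target"])
       if server is None:
           respond(request, 404)      # no server in the domain has this user
       else:
           proxy_to(request, server)  # a redirect server would instead
                                      # answer 302 with this server as
                                      # the Contact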
640 Routing proxies have the benefit that they do not require a common 641 database schema and protocol, but they do require a centralized 642 server function that sees all subscriptions and IM requests, which 643 can be a scale challenge. For IM, a centralized proxy is very 644 challenging when using pager mode, since each and every IM is 645 processed by the central proxy. For session mode, the scale is 646 better, since the proxy handles only the initial INVITE. 648 6.2.3. Subdomaining 650 In this solution, each user is associated with a subdomain, and is 651 provisioned as part of their respective server using that subdomain. 652 Consequently, each server thinks it is its own, separate domain. 653 However, when a user adds a presentity to their buddy list without 654 the subdomain, they first consult a shared database which returns the 655 subdomained URI to subscribe or IM to. This sub-domained URI can be 656 returned because the user provided a search criterion, such as "Find 657 Alice Chang", or provided the non-subdomained URI 658 (alice@example.com). This is shown in Figure 7. 659 ..................................................................... 660 . +-----------+ . 661 . who is Alice? | | . 662 . +---------------------->| Database | . 663 . | alice@b.example.com | | . 664 . | +---------------------| | . 665 . | | +-----------+ . 666 . | | . 667 . | | . 668 . | | . 669 . | | . 670 . | | . 671 . | | . 672 . | | . 673 . | | joe@example.com alice@example.com padma@example.com . 674 . | | bob@example.com zeke@example.com hannes@example.com . 675 . | | +-----------+ +-----------+ +-----------+ . 676 . | | | | | | | | . 677 . | | | Server | | Server | | Server | . 678 . | | | 1 | | 2 | | 3 | . 679 . | | | | | | | | . 680 . | | +-----------+ +-----------+ +-----------+ . 681 . | | ^ . 682 . | | | . 683 . | | | . 684 . | | | . 685 . | | | . 686 . | | | . 687 . | | +-----------+ . 688 . | +-------------------->| | . 689 . | | Client | . 690 . | | | . 691 . +-----------------------| | . 692 . +-----------+ . 693 . . 694 . . 695 . . 696 . example.com . 697 ..................................................................... 699 Figure 7: Subdomaining 701 Subdomaining puts the burden of routing on the client. The 702 servers can be completely unaware that they are actually part of the 703 same domain, and integrate with each other exactly as they would in 704 an inter-domain model. However, the client is given the burden of 705 determining the subdomained URI from the original URI or buddy name, 706 and then subscribing or IMing directly to that server, or including 707 the subdomained URI in their buddylist. The client is also 708 responsible for hiding the subdomain structure from the user and 709 storing the mapping information locally for extended periods of time. 710 In cases where users have buddy list subscriptions, the client will 711 need to resolve the buddy name into the sub-domained version before 712 adding to their buddy list. 714 Subdomaining can be done via different databases. In order to provide 715 a consistent interface to clients, a front-end of SIP redirect 716 proxies can be implemented. A client would send the SIP request to 717 one of the redirect proxies, and the redirect proxy would reply with 718 the right domain after consulting the database in whatever protocol 719 the database exposes.
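The database interactions of Section 6.2.1 and Section 6.2.3 differ mainly in who performs the query and what is returned. The following Python sketch is illustrative only; a hypothetical in-memory table stands in for the LDAP or SQL store discussed above:

   # Hypothetical central directory:
   # canonical URI -> (handling server, subdomained URI).
   DIRECTORY = {
       "alice@example.com": ("server2.example.com", "alice@b.example.com"),
   }

   def server_for(uri):
       # Section 6.2.1: a server asks which peer handles the user, then
       # sends the back-end SUBSCRIBE or IM request to that server.
       return DIRECTORY[uri][0]

   def subdomained_uri(uri):
       # Section 6.2.3: a client asks for the subdomained form, then
       # subscribes or IMs directly to it (and caches the mapping).
       return DIRECTORY[uri][1]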
721 6.2.4. Peer-to-Peer 723 Another model is to utilize a peer-to-peer network amongst all of the 724 servers, and store URI to server mappings in the distributed hash 725 table it creates. This has some nice properties but does require a 726 standardized and common p2p protocol across vendors, which is being 727 worked on in the P2PSIP IETF working group but still does not exist 728 today. 730 6.2.5. Forking 732 Yet another solution is to utilize forking. Each server is 733 provisioned with the domain names or IP addresses of the other 734 servers, but not with the mapping of users to each of those servers. 735 When a server needs to handle a request for a user it doesn't have, 736 it forks the request to all of the other servers. This request will 737 be rejected with a 404 on the servers which do not handle that user, 738 and accepted on the one that does. The approach assumes that servers 739 can differentiate inbound requests from end users (which need to get 740 passed on to other servers - for example via a back-end subscription) 741 and from other servers (which do not get passed on). This approach 742 works very well in organizations with a relatively small number of 743 servers (say, two or three), and becomes increasingly ineffective 744 with more and more servers. Indeed, if multiple servers exist for 745 the purposes of achieving scale, this approach can defeat the very 746 reason those additional servers were deployed.
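The differentiation between requests from end users and requests from peer servers is the crux of the approach, since it is what keeps forked requests from looping. A minimal Python sketch (the peer list, user set, and helper functions are all hypothetical placeholders):

   PEERS = ["server2.example.com", "server3.example.com"]
   LOCAL_USERS = {"joe@example.com", "bob@example.com"}

   def deliver(request):       print("delivering locally")    # placeholder
   def reject(request, code):  print("rejecting with", code)  # placeholder
   def fork(request, server):  print("forking to", server)    # placeholder

   def handle(request):
       if request["target"] in LOCAL_USERS:
           deliver(request)          # this server serves the user
       elif request["from_peer"]:
           reject(request, 404)      # from a peer: never fork onward,
                                     # or requests would loop forever
       else:
           for server in PEERS:      # from an end user: fork to all
               fork(request, server) # peers; one accepts, the rest 404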
748 6.2.6. Provisioned Routing 750 Yet another solution is to provision each server with each user, but 751 for servers that don't actually serve the user, the provisioning 752 merely tells the server where to proxy the request. This solution 753 has extremely poor operational properties, requiring multiple points 754 of provisioning across disparate systems. 756 6.3. Policy 758 A fundamental characteristic of the partitioned model is that there 759 is a single point of policy enforcement (authorization rules and 760 composition policy) for each user. 762 For more discussion regarding policy see Section 9. 764 6.4. Presence Data 766 Another fundamental characteristic of the partitioned model is that 767 the presence data for a user is managed authoritatively on a single 768 server. In the example of Figure 4, the presence data for Alice 769 lives on server 2 alone (recall that server two may be physically 770 implemented as a multiplicity of boxes from a single vendor, each of 771 which might have a portion of the presence data, but externally it 772 appears to behave as if it were a single server). A subscription 773 from Bob to Alice may cause a transfer of presence information from 774 server 2 to server 1, but server 2 remains authoritative and is the 775 single root source of all data for Alice. 777 6.5. Conversation Consistency 779 Since the IMs for a particular user are always delivered through a 780 particular server that handles the user, it is relatively easy to 781 achieve conversation consistency. That server receives all of the 782 messages and readily passes them on to the user for rendering. 783 Furthermore, a coherent view of message history can be assembled by 784 the server, since it sees all messages. If a user has multiple 785 devices, there are challenges in constructing a consistent view of 786 the conversation with page mode IM. However, those issues exist in 787 general with page mode and are not worsened by intra-domain bridging. 789 7. Exclusive 791 In the former (static) partitioned model, the mapping of a user to a 792 specific server is done by some off-line configuration means. The 793 configuration assigns a user to a specific server, and in order to use 794 a different server, the user needs to change the configuration (or ask 795 the administrator to do so). 797 In some environments, this restriction of a user to a particular 798 server may be a limitation. Instead, it is desirable to allow users 799 to freely move back and forth between systems, though using only a 800 single one at a time. This is called Exclusive Bridging. 802 Some use cases where this can happen are: 804 o The organization is using multiple systems where each system has 805 its own characteristics. For example, one server is tailored to 806 work with a CAD (Computer Aided Design) system and provides 807 presence and IM functionality along with the CAD system. The 808 other server is the default presence and IM server of the 809 organization. Users wish to be able to work with either system 810 whenever they like; they also wish to be able to exchange presence 811 and IM with their buddies no matter which system their buddies are 812 currently using. 814 o An enterprise wishes to test presence servers from two different 815 vendors. In order to do so, they wish to install a server from 816 each vendor and see which of the servers is better. In the static 817 partitioned model, a user would have to be statically assigned to a 818 particular server and would not be able to compare the features of 819 the two servers. In the dynamic partitioned model, a user may choose 820 on a whim which of the servers being tested to use. They can 821 move back and forth in case of problems. 823 o An enterprise is currently using servers from one vendor, but has 824 decided to add a second. They would like to gradually migrate 825 users from one to the other. In order to make a smooth 826 transition, users can move back and forth over a period of a few 827 weeks until they are finally required to stop going back, and get 828 deleted from their old system. 830 o A domain is using multiple clusters from the same vendor. To 831 simplify administration, users can connect to any of the clusters, 832 perhaps one local to their site. To accomplish this, the clusters 833 are connected using exclusive bridging. 835 7.1. Routing 837 Due to its nature, routing in the exclusive bridging model is more 838 complex than routing in the partitioned model. 840 The association of a user with a server cannot be known until the user 841 publishes a presence document to a specific server or registers to 842 that server. Therefore, when Alice subscribes to Bob's presence 843 information, or sends him an IM, Alice's server will not easily know 844 the server that has Bob's presence and is handling his IM. 846 In addition, a server may get a subscription to a user, or an IM 847 targeted at a user, but the user may not be connected to any server 848 yet. In the case of presence, once the user appears on one of the 849 servers, the subscription should be sent to that server. 851 A user may use two servers at the same time and have his/her 852 presence information on two servers. This should be regarded as a 853 conflict, and one of the presence clients should be terminated or 854 redirected to the other server. 856 Fortunately, most of the routing approaches described for partitioned 857 bridging, excepting provisioned routing, can be adapted for exclusive 858 bridging.
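Several of the adaptations described in the following subsections hinge on an atomic test-and-set binding of a user to a server. A minimal Python sketch of that primitive, with an in-memory table standing in for the shared database or registrar (all names here are illustrative):

   import threading

   _lock = threading.Lock()
   _current_server = {}   # user URI -> server currently claiming the user

   def try_claim(user, server):
       # Atomically bind the user to a server, unless bound elsewhere.
       with _lock:
           holder = _current_server.get(user)
           if holder is None or holder == server:
               _current_server[user] = server
               return True    # claim succeeded
           return False       # user is on another server; the caller
                              # redirects (or rejects) the new client

   def release(user, server):
       with _lock:
           if _current_server.get(user) == server:
               del _current_server[user]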
860 7.1.1. Centralized Database 862 A centralized database can be used, but will need to support test- 863 and-set functionality. With it, servers can check whether a user is 864 already on a specific server, and bind the user to themselves only if 865 the user is not on another server. If the user is already on another 866 server, a redirect (or some other error message) will be sent to that 867 user. 869 When a client sends a subscription request for some target user, and 870 the target user is not associated with a server yet, the subscription 871 must be 'held' on the server of the watcher. Once the target user 872 connects and becomes bound to a server, the database needs to send a 873 change notification to the watching server, so that the 'held' 874 subscription can be extended to the server which is now handling 875 presence for the user. 877 Note that this approach actually moves the scaling problem of the 878 routing mechanism to the database, especially when the percentage of 879 the community that is offline is large. 881 7.1.2. Routing Proxy 883 The routing proxy mechanism can be used for exclusive bridging as 884 well. However, it requires signaling from each server to the routing 885 proxy to indicate that the user is now located on that server. This 886 can be done by having each server send a REGISTER request to the 887 routing proxy for that user, setting the contact to itself. The 888 routing proxy would have a rule which allows only a single registered 889 contact per user. Using the registration event package [RFC3680], 890 each server subscribes to the registration state at the routing proxy 891 for each user it is managing. If the routing proxy sees a duplicate 892 registration, it allows it, and then uses a reg-event notification to 893 the other server to de-register the user. Once the user is de- 894 registered from that server, it would terminate any subscriptions in 895 place for that user, causing the watching server to reconnect the 896 subscription to the new server. Something similar can be done for 897 in-progress IM sessions; however, this may have the effect of causing 898 a disruption in ongoing sessions. 900 Note that this approach actually moves the scaling problem of the 901 routing mechanism to the registrar, especially when the percentage of 902 the community that is offline is large. 904 7.1.3. Subdomaining 906 Subdomaining is just a variation on the centralized database. 907 Assuming the database supports a test-and-set mechanism, it can be 908 used for exclusive bridging. 910 However, the principal challenge in applying subdomaining to 911 exclusive bridging is database change notifications. When a user 912 moves from one server to another, that change needs to be propagated 913 to all clients which have ongoing sessions (presence and IM) with 914 that user. This requires a large-scale change notification mechanism 915 - to each client in the network. 917 7.1.4. Peer-to-Peer 919 Peer-to-peer routing can be used for routing in exclusive bridging. 920 Essentially, it provides a distributed registrar function that maps 921 each AoR to the particular server that the user is currently registered 922 against. When a UA registers to a particular server, that 923 registration is written into the P2P network, such that queries for 924 that user are directed to that presence server. 926 However, change notifications can be troublesome.
When a user 927 registered on server 1 now registers on server 2, server 2 needs to 928 query the p2p network, discover that server 1 is handling the user, 929 and then tell server 1 that the user has moved. Server 1 then needs 930 to terminate its ongoing subscriptions and send them to server 2. 932 Furthermore, P2P networks do not inherently provide a test-and-set 933 primitive, and consequently, it is possible for race conditions to 934 occur where there is an inconsistent view on where the user is 935 currently registered. 937 7.1.5. Forking 939 The forking model can be applied to exclusive bridging. When a user 940 registers with a server or publishes a presence document to a server, 941 and that server is not serving the user yet, that server begins 942 serving the user. Furthermore, it needs to propagate a change 943 notification to all of the other servers. This can be done using a 944 registration event package; basically, each server would subscribe to 945 every other server for reg-event notifications for the users they serve. 947 When a subscription or IM request is received at a server, and that 948 server doesn't serve the target user, it forks the subscription or IM 949 to all other servers. If the user is currently registered somewhere, 950 one will accept, and the others will reject with a 404. If the user 951 is registered nowhere, all others generate a 404. If the request is 952 a subscription, the server that received it would 'hold' the 953 subscription, and then subscribe for the reg-event package on every 954 other server for the target user. Once the target user registers 955 somewhere, the server holding the subscription gets a notification 956 and can propagate it to the new target server. 958 Like the P2P solution, the forking solution lacks an effective test- 959 and-set mechanism, and it is therefore possible that there could be 960 inconsistent views on which server is handling a user. One possible 961 scenario in which multiple servers will think that they are serving the 962 user is when a subscription request is forked and reaches 963 multiple servers, each of which then thinks that it serves the user. 965 7.2. Policy 967 Unless policy is somehow managed in a shared database accessed 968 by all of the servers, policy becomes more 969 complicated in the exclusive bridging model. In the partitioned 970 model, a user has their presence and IM managed by the same server 971 all of the time. Thus, their policy can be provisioned and executed 972 there. With exclusive bridging, a user can freely move back and 973 forth between servers. Consequently, the policy for a particular 974 user may need to execute on multiple different servers over time. 976 The simplest solution is just to require the user to separately 977 provision and manage policies on each server. In many of the use 978 cases above, exclusive bridging is a transient situation that 979 eventually settles into partitioned bridging. Thus, it may not be 980 unreasonable to require the user to manage both policies during the 981 transition. It is also possible that each server provides different 982 capabilities, and thus a user will receive different service 983 depending on which server they are connected to. Again, this may be 984 an acceptable limitation for the use cases it supports. 986 For more discussion regarding policy see Section 8.1.2 and Section 9.
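The "provision separately on each server" approach described above amounts to plain duplication, which a one-function Python sketch makes plain (the server object and its store_policy method are hypothetical):

   def provision_policy(user, policy, servers):
       # Exclusive bridging, simplest policy model: copy the same
       # authorization and composition rules to every server the user
       # might move to, so the policy survives a move between servers.
       for server in servers:
           server.store_policy(user, policy)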
988 7.3. Presence Data 990 As with the partitioned model, in the exclusive model the presence 991 data for a user resides on a single server at any given time. This 992 server owns all composition policies and procedures for collecting 993 and distributing presence data. 995 7.4. Conversation Consistency 997 Because a user receives all of their IM on a single server at a time, 998 there are no issues with seeing a coherent conversation for the 999 duration that a user is associated with that server. 1001 However, if a user has sessions in progress while they move from one 1002 server to another, it is possible that IMs can be misrouted or 1003 dropped, or delivered out of order. Fortunately, this is a transient 1004 event, and given that it is unlikely that a user would actually have 1005 in-progress IM sessions when they change servers, this may be an 1006 acceptable limitation. 1008 However, conversation history may be more troubling. IM message 1009 history is often stored both in clients (for context of past 1010 conversations, search, etc.) and in servers (for the same reasons, in 1011 addition to legal requirements for data retention). If a user 1012 changes servers, some of their past conversations will be stored on 1013 one server, and some on another. Any kind of search or query 1014 facility provided amongst the server-stored messages would need to 1015 search amongst all of the servers to find the data. 1017 8. Unioned 1019 In the unioned model, each user is actually served by more than one 1020 presence server at a time. In this case, "served" implies two 1021 properties: 1023 o A user is served by a server when that user is provisioned on that 1024 server, and 1026 o That server is authoritative for some piece of presence state 1027 associated with that user, or responsible for some piece of 1028 registration state associated with that user, for the purposes of 1029 IM delivery. 1031 In essence, in the unioned model, a user's presence and registration 1032 data is distributed across many presence servers, while in the 1033 partitioned and exclusive models, it is centralized in a single server. 1034 Furthermore, it is possible that the user is provisioned with 1035 different identifiers on each server. 1037 This definition speaks specifically to ownership of dynamic data - 1038 presence and registration state - as the key property. This rules 1039 out several cases which involve a mix of servers within the 1040 enterprise, but do not constitute intra-domain unioned bridging: 1042 o A user utilizes an outbound SIP proxy from one vendor, which 1043 connects to a presence server from another vendor. Even though 1044 this will result in presence subscriptions, notifications, and IM 1045 requests flowing between servers, and the user is potentially 1046 provisioned on both, there is no authoritative presence or 1047 registration state in the outbound proxy, and so this is not 1048 intra-domain bridging. 1050 o A user utilizes a Resource List Server (RLS) from one vendor, 1051 which holds their buddy list, and accesses presence data from a 1052 presence server from another vendor. This case is actually the 1053 partitioned case, not the unioned case. Effectively, the buddy 1054 list itself is another "user", and it exists entirely on one 1055 server (the RLS), while the actual users on the buddy list exist 1056 entirely within another. Consequently, this case does not have 1057 the property that a single presence resource exists on multiple 1058 servers at the same time.
1060 o A user subscribes to the presence of a presentity. This 1061 subscription is first passed to their presence server, which acts 1062 as a proxy, and instead sends the subscription to the UA of the 1063 user, which acts as a presence edge server. In this model, it may 1064 appear as if there are two presence servers for the user (the 1065 actual server and their UA). However, the server is acting as a 1066 proxy in this case - there is only one source of presence 1067 information. For IM, there is only one source of registration 1068 state - the server. Thus, this model is partitioned, but with 1069 different servers owning IM and presence. 1071 The unioned model arises naturally when a user is using devices from 1072 different vendors, each of which has its own respective servers, or 1073 when a user is using different servers for different parts of their 1074 presence state. For example, Figure 8 shows the case where a single 1075 user has a mobile client connected to server one and a desktop client 1076 connected to server two. 1078 alice@example.com alice@example.com 1079 +------------+ +------------+ 1080 | | | | 1081 | | | | 1082 | Server |--------------| Server | 1083 | 1 | | 2 | 1084 | | | | 1085 | | | | 1086 +------------+ +------------+ 1087 \ / 1088 \ / 1089 \ / 1090 \ / 1091 \ / 1092 \ / 1093 \...................../....... 1094 \ / . 1095 .\ / . 1096 . \ | +--------+ . 1097 . | |+------+| . 1098 . +---+ || || . 1099 . |+-+| || || . 1100 . |+-+| |+------+| . 1101 . | | +--------+ . 1102 . | | /------ / . 1103 . +---+ /------ / . 1104 . --------/ . 1105 . . 1106 ............................. 1108 Alice 1110 Figure 8: Unioned Case 1 1112 As another example, a user may have two devices from the same vendor, 1113 both of which are associated with a single presence server, but that 1114 presence server has incomplete presence state about the user. 1115 Another presence server in the enterprise, due to its access to state 1116 for that user, has additional data which needs to be accessed by the 1117 first presence server in order to provide a comprehensive view of 1118 presence data. This is shown in Figure 9. This use case tends to be 1119 specific to presence. 1121 alice@example.com alice@example.com 1122 +------------+ +------------+ 1123 | | | | 1124 | Presence | | Presence | 1125 | Server |--------------| Server | 1126 | 1 | | 2 | 1127 | | | | 1128 | | | | 1129 +------------+ +------------+ 1130 ^ | | 1131 | | | 1132 | | | 1133 ///-------\\\ | | 1134 ||| specialized ||| | | 1135 || state || | | 1136 \\\-------/// | | 1137 ............................. 1138 . | | . 1139 . | | +--------+ . 1140 . | |+------+| . 1141 . +---+ || || . 1142 . |+-+| || || . 1143 . |+-+| |+------+| . 1144 . | | +--------+ . 1145 . | | /------ / . 1146 . +---+ /------ / . 1147 . --------/ . 1148 . . 1149 . . 1150 ............................. 1151 Alice 1153 Figure 9: Unioned Case 2 1155 Another use case for unioned bridging is subscriber moves. Consider 1156 a domain which uses multiple servers, typically running in a 1157 partitioned configuration. The servers are organized regionally so 1158 that each user is served by a server handling their region. A user 1159 is moving from one region to a new job in another, while retaining 1160 their SIP URI. In order to provide a smooth transition, ideally the 1161 system would provide a "make before break" functionality, allowing 1162 the user to be added onto the new server prior to being removed from 1163 the old.
During the transition period, especially if the user had 1164 multiple clients to be moved, they can end up with state existing on 1165 both servers at the same time. 1167 8.1. Hierarchical Model 1169 The unioned intra-domain bridging model can be realized in one of two 1170 ways - using a hierarchical structure or a peer structure. 1172 In the hierarchical model, presence subscriptions and IM requests for 1173 the target are always routed first to one of the servers - the root. 1174 In the case of presence, the root has the final say on the structure 1175 of the presence document delivered to watchers. It collects presence 1176 data from its child presence servers (through notifications or 1177 publishes received from them) and composes them into the final 1178 presence document. In the case of IM, the root applies IM policy and 1179 then passes the IM on to the children for delivery. There can be 1180 multiple layers in the hierarchical model. This is shown in 1181 Figure 10 for presence. 1183 +-----------+ 1184 *-----------* | | 1185 |Auth and |---->| Presence | <--- root 1186 |Composition| | Server | 1187 *-----------* | | 1188 | | 1189 +-----------+ 1190 / --- 1191 / ---- 1192 / ---- 1193 / ---- 1194 V -V 1195 +-----------+ +-----------+ 1196 | | | | 1197 *-----------* | Presence | *-----------* | Presence | 1198 |Auth and |-->| Server | |Auth and |-->| Server | 1199 |Composition| | | |Composition| | | 1200 *-----------* | | *-----------* | | 1201 +-----------+ +-----------+ 1202 | --- 1203 | ----- 1204 | ----- 1205 | ----- 1206 | ----- 1207 | ----- 1208 V --V 1209 +-----------+ +-----------+ 1210 | | | | 1211 *-----------* | Presence | *-----------* | Presence | 1212 |Auth and |-->| Server | |Auth and |-->| Server | 1213 |Composition| | | |Composition| | | 1214 *-----------* | | *-----------* | | 1215 +-----------+ +-----------+ 1217 Figure 10: Hierarchical Model 1219 It is important to note that this hierarchy defines the sequence of 1220 presence composition and policy application, and does not imply a 1221 literal message flow. As an example, consider once more the use case 1222 of Figure 8. Assume that presence server 1 is the root, and presence 1223 server 2 is its child. When Bob's PC subscribes to Bob's buddy list 1224 (on presence server 2), that subscription will first go to presence 1225 server 2. However, that presence server knows that it is not the 1226 root in the hierarchy, and despite the fact that it has presence 1227 state for Alice (who is on Bob's buddy list), it creates a back-end 1228 subscription to presence server 1. Presence server 1, as the root, 1229 subscribes to Alice's state at presence server 2. Now, since this 1230 subscription came from presence server 1 and not Bob directly, 1231 presence server 2 provides the presence state. This is received at 1232 presence server 1, which composes the data with its own state for 1233 Alice, and then provides the results back to presence server 2, 1234 which, having acted as an RLS, forwards the results back to Bob. 1235 Consequently, this flow, as a message sequence diagram, involves 1236 notifications passing from presence server 2, to server 1, back to 1237 server 2. However, in terms of composition and policy, it was done 1238 first at the child node (presence server 2), and then those results 1239 were used at the parent node (presence server 1). 1241 Note that we are assuming that presence servers will subscribe to 1242 each other.
It is also possible, given knowledge of the configured hierarchy, for
a presence server to send PUBLISH messages to other presence servers.
However, a presence server that sends PUBLISH messages to another
presence server becomes, from the point of view of the receiving
presence server, a presence source rather than a presence server.
Therefore, we assume here that presence servers subscribe to each
other, though PUBLISH messages could be used instead of subscriptions
where preferred.

8.1.1.  Routing

In the hierarchical model, the servers need to collectively be
provisioned with the topology of the network.  This topology defines
the root and the parent/child relationships.  These relationships
could in fact be different on a user-by-user basis; however, this is
complex to manage.  In all likelihood, the parent and child
relationships are identical for each user.  The overall routing
algorithm can be described as follows (a non-normative sketch follows
the list):

o  If a SUBSCRIBE is received from the parent node for this
   presentity, perform subscriptions to each child node for this
   presentity, and then take the results, apply composition and
   authorization policies, and propagate to the parent.  If a node is
   the root, the logic here applies regardless of where the request
   came from.

o  If an IM request is received from the parent node for a user,
   perform IM processing and then proxy the request to each child IM
   server for this user.  If a node is the root, the logic here
   applies regardless of where the request came from.

o  If a request is received from a node that is not the parent node
   for this presentity, proxy the request to the parent node.  This
   includes cases where the node that sent the request is a child
   node.  Note that if the node that receives the request can send
   the request directly to the root, it should do so, thus reducing
   the traffic in the system.
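The following sketch illustrates these routing rules for presence.
It is written in Python purely for illustration; the class, method,
and server names are invented for this example and do not correspond
to any protocol element.

   <CODE BEGINS>
   # Non-normative sketch of the hierarchical routing rules above.
   # The topology and data structures are illustrative assumptions.

   class Server:
       def __init__(self, name, parent=None):
           self.name = name
           self.parent = parent      # None if this server is the root
           self.children = []

       def is_root(self):
           return self.parent is None

       def handle_subscribe(self, presentity, sender):
           # A request from anywhere other than our parent is routed
           # toward the root (directly, if the root is reachable).
           if not self.is_root() and sender is not self.parent:
               return self.parent.handle_subscribe(presentity, self)
           # We are the root, or the request came from our parent:
           # subscribe to each child, then compose and propagate.
           docs = [child.handle_subscribe(presentity, self)
                   for child in self.children]
           docs.append({"source": self.name})
           # Placeholder for composition and authorization policy.
           return {"composed_by": self.name, "inputs": docs}

   # Two-server system: server 1 is the root, server 2 its child.
   s1 = Server("server1")
   s2 = Server("server2", parent=s1)
   s1.children = [s2]

   # Bob's client, connected to server 2, subscribes to Alice:
   print(s2.handle_subscribe("alice@example.com", sender="bob"))
   <CODE ENDS>

Running the sketch reproduces the flow described for Figure 8: the
subscription arriving at server 2 is routed to the root, which in
turn generates a back-end subscription to server 2, and the final
document is composed at the root.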
This routing rule is relatively simple, and in a two-server system is
almost trivial to provision.  Interestingly, it works in cases where
some users are partitioned and some are unioned.  When the users are
partitioned, this routing algorithm devolves into the forking
algorithm of Section 6.2.5.  This points to the forking algorithm as
a good choice, since it can be used for both the partitioned and
unioned models.

An important property of the routing in the hierarchical model is
that the sequence of composition and policy operations for any IM or
presence session is identical, regardless of the watcher or sender of
the IM.  The result is that the overall presence state provided to a
watcher, and overall IM behavior, is always consistent and
independent of the server the client is connected to.  We call this
property the *consistency property*, and it is an important metric in
assessing the correctness of a federated presence and IM system.

8.1.2.  Policy and Identity

Policy and identity are a clear challenge in the unioned model.

Firstly, since a user is provisioned on many servers, it is possible
that the identifier they utilize could be different on each server.
For example, on server 1, they could be joe@example.com, whereas on
server 2, they are joe.smith@example.com.  In cases where the
identifiers are not equivalent, a mapping function needs to be
provisioned.  This ideally happens on the root server.

Secondly, the unioned model will result in back-end subscriptions
extending from one presence server to another presence server.  These
subscriptions, though made by the presence server, need to be made on
behalf of the user that originally requested the presence state of
the presentity.  Since the presence server extending the back-end
subscription will not often have credentials to claim the identity of
the watcher, asserted identity using techniques like
P-Asserted-Identity [RFC3325] or authenticated identity [RFC4474] is
required, along with the associated trust relationships between
servers.  Optimizations, such as view sharing
[I-D.ietf-simple-view-sharing], can help improve performance.  The
same considerations apply for IM.
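The following non-normative sketch shows how a server might construct
such a back-end subscription.  The mapping table, function names, and
header handling are assumptions made for this example; a real
implementation would assert identity per [RFC3325] within a trust
domain, or sign it per [RFC4474].

   <CODE BEGINS>
   # Non-normative sketch: a back-end SUBSCRIBE extended on behalf
   # of the original watcher, with identifier mapping.

   IDENTITY_MAP = {
       # identifier on this server -> identifier on the peer server
       "joe@example.com": "joe.smith@example.com",
   }

   def backend_subscribe(watcher, presentity, peer):
       # Translate when the peer knows the user by another name.
       peer_presentity = IDENTITY_MAP.get(presentity, presentity)
       headers = {
           "To": "<sip:%s>" % peer_presentity,
           "From": "<sip:%s>" % watcher,
           # Assert the watcher's identity toward the trusted peer,
           # since we lack the watcher's own credentials.
           "P-Asserted-Identity": "<sip:%s>" % watcher,
           "Event": "presence",
       }
       return peer, headers

   print(backend_subscribe("bob@example.com", "joe@example.com",
                           peer="server2.example.com"))
   <CODE ENDS>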
The principal challenge in a unioned model is policy, including both
authorization and composition policies.  There are three potential
solutions to the administration of policy in the hierarchical model
(only two of which apply in the peer model, as we'll discuss below).
These are root-only, distributed provisioning, and centralized
provisioning.

8.1.2.1.  Root Only

In the root-only policy model, authorization policy, IM policy, and
composition policy are applied only at the root of the tree.  This is
shown in Figure 11.

                                 +-----------+
                  *-----------*  |           |
                  |           |->|           | <--- root
                  |  Policy   |  |  Server   |
                  *-----------*  |           |
                                 |           |
                                 +-----------+
                                /         ---
                               /             ----
                              /                  ----
                             V                       -V
                    +-----------+                +-----------+
                    |           |                |           |
                    |           |                |           |
                    |  Server   |                |  Server   |
                    |           |                |           |
                    |           |                |           |
                    +-----------+                +-----------+
                          |       ---
                          |          -----
                          |               -----
                          V                    --V
                    +-----------+                +-----------+
                    |           |                |           |
                    |           |                |           |
                    |  Server   |                |  Server   |
                    |           |                |           |
                    |           |                |           |
                    +-----------+                +-----------+

                          Figure 11: Root Only

As long as a subscription request came from its parent, every child
presence server would automatically accept the subscription, and
provide notifications containing the full presence state it is aware
of.  Similarly, any IM received from a parent would simply be
propagated onwards towards the children.  Any composition performed
by a child presence server would need to be lossless, in that it
fully combines the source data without loss of information, and also
be done without any per-user provisioning or configuration, operating
in a default or administrator-provisioned mode of operation.

The root-only model has the benefit that it requires the user to
provision policy in a single place (the root).  However, it has the
drawback that the composition and policy processing may be performed
very poorly.  Presumably, there are multiple presence servers in the
first place because each of them has a particular speciality.  That
speciality may be lost in the root-only model.  For example, if a
child server provides geolocation information, the root presence
server may not have sufficient authorization policy capabilities to
allow the user to manage how that geolocation information is provided
to watchers.

8.1.2.2.  Distributed Provisioning

The distributed provisioning model looks exactly like the diagram of
Figure 10.  Each server is separately provisioned with its own
policies, including what users are allowed to watch, what presence
data they will get, how it will be composed, what IMs get blocked,
and so on.

One immediate concern is whether the overall policy processing, when
performed independently at each server, is consistent, sane, and
provides reasonable degrees of privacy.  It turns out that it can be,
if some guidelines are followed.

For presence, consider basic "yes/no" authorization policies.  Let's
say a presentity, Alice, provides an authorization policy on server 1
where Bob can see her presence, but on server 2, provides a policy
where Bob cannot.  If presence server 1 is the root, the subscription
is accepted there, but the back-end subscription to presence server 2
would be rejected.  As long as presence server 1 then rejects the
subscription, the system provides the correct behavior.  This can be
turned into a more general rule (illustrated by the sketch below):

o  To guarantee privacy safety, if the back-end subscription
   generated by a presence server is denied, that server must deny
   the triggering subscription in turn, regardless of its own
   authorization policies.  This means that a presence server cannot
   send notifications on its own until it has confirmed subscriptions
   from downstream servers.
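The following non-normative sketch shows this rule in isolation; the
function and variable names are invented for the example.

   <CODE BEGINS>
   # Non-normative sketch of the privacy-safety rule above: accept a
   # triggering subscription, and send notifications, only once every
   # back-end subscription has been confirmed.

   def handle_subscription(local_policy_allows, backend_results):
       # backend_results: one True/False entry per downstream server.
       if not local_policy_allows:
           return "deny"
       if not all(backend_results):
           # A downstream server denied its back-end subscription,
           # so we deny the triggering subscription regardless of
           # our own authorization decision.
           return "deny"
       return "accept"   # only now may notifications be sent

   # Server 1 allows Bob, but downstream server 2 denies him:
   print(handle_subscription(True, [False]))   # -> deny
   <CODE ENDS>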
For IM, basic yes/no authorization policies work in a similar way.
If any one of the servers has a policy that says to block an IM, the
IM is not propagated further down the chain.  Whether the overall
system blocks IMs from a sender depends on the topology.  If there is
no forking in the hierarchy, the system has the property that, if a
sender is blocked at any server, the sender is blocked overall.
However, in tree structures where there are multiple children, it is
possible that an IM could be delivered to some downstream clients,
and not others.

Things get more complicated when one considers presence authorization
policies whose job is to block access to specific pieces of
information, as opposed to blocking a user completely.  For example,
let's say Alice wants to allow Bob to see her presence, but not her
geolocation information.  She provisions a rule on server 1 that
blocks geolocation information, but grants it on server 2.  The
correct mode of operation in this case is that the overall system
will block geolocation from Bob.  But will it?  In fact, it will, if
a few additional guidelines are followed:

o  If a presence server adds any information to a presence document
   beyond the information received from its children, it must provide
   authorization policies that govern the access to that information.

o  If a presence server does not understand a piece of presence data
   provided by its child, it should not attempt to apply its own
   authorization policies to access of that information.

o  A presence server should not add information to a presence
   document that overlaps with information that can be added by its
   parent.  Of course, it is very hard for a presence server to know
   whether this information overlaps.  Consequently, provisioned
   composition rules will be required to realize this.

If these rules are followed, the overall system provides privacy
safety, and the overall policy applied is reasonable.  This is
because these rules effectively segment the application of policy for
a specific piece of data to the server that owns the corresponding
data.  For example, consider once more the geolocation use case
described above, and assume server 2 is the root.  If server 1 has
access to, and provides, geolocation information in presence
documents it produces, then server 1 would be the only one to provide
authorization policies governing geolocation.  Server 2 would receive
presence documents from server 1 containing (or not) geolocation,
but since it doesn't provide or control geolocation, it lets that
information pass through.  Thus, the overall presence document
provided to the watcher will contain geolocation if Alice wanted it
to, and not otherwise, and the controls for access to geolocation
would exist only on server 1.

For more discussion regarding policy see Section 9.

8.1.2.3.  Central Provisioning

The central provisioning model is a hybrid between root-only and
distributed provisioning.  Each server does in fact execute its own
authorization and composition policies.  However, rather than the
user provisioning them independently in each place, there is some
kind of central portal where the user provisions the rules, and that
portal generates policies for each specific server based on the data
that the corresponding server provides.  This is shown in Figure 12.

   +---------------------+
   | provisioning portal |.........................................
   +---------------------+                       .                .
     .             .                             .                .
     .             .                             .                .
     .             V                             .                .
     .   *-----------*  +-----------+            .                .
     .   |Auth and   |->| Presence  | <--- root  .                .
     .   |Composition|  |  Server   |            .                .
     .   *-----------*  |           |            .                .
     .                  |           |            .                .
     .                  +-----------+            .                .
     .                 /             \           .                .
     .                /               \          V                .
     .   *-----------*  +-----------+   *-----------*  +-----------+
     ....|Auth and   |->| Presence  |   |Auth and   |->| Presence  |
         |Composition|  |  Server   |   |Composition|  |  Server   |
         *-----------*  |           |   *-----------*  |           |
                        +-----------+                  +-----------+
                       /             \
                      /               \
                     V                 V
     *-----------*  +-----------+   +-----------+  *-----------*
  ...|Auth and   |->| Presence  |   | Presence  |<-|Auth and   |...
  .  |Composition|  |  Server   |   |  Server   |  |Composition|  .
  .  *-----------*  |           |   |           |  *-----------*  .
  .                 +-----------+   +-----------+                 .
  .                                                               .
  (dotted lines: provisioning of each server's policy module)

                   Figure 12: Central Provisioning

Centralized provisioning brings together the benefits of root-only (a
single point of user provisioning) with those of distributed
provisioning (utilizing the full capabilities of all servers).  Its
principal drawback is that it requires another component - the portal
- which can represent the union of the authorization policies
supported by each server, and then delegate those policies to each
corresponding server.

The other drawback of centralized provisioning is that it assumes
completely consistent policy decision making on each server.  There
is a rich set of possible policy decisions that can be taken by
servers, and this is often an area of differentiation.
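The following non-normative sketch illustrates the portal's role.
The rule format, the data-ownership table, and all names are
assumptions invented for this example.

   <CODE BEGINS>
   # Non-normative sketch: a provisioning portal splits one
   # user-authored rule set into per-server policies, according to
   # which server owns which piece of presence data.

   DATA_OWNER = {
       "geolocation": "server1",
       "basic-status": "server2",
   }

   def delegate_policies(rules):
       """rules: list of (watcher, data-kind, action) tuples."""
       per_server = {}
       for watcher, data_kind, action in rules:
           owner = DATA_OWNER[data_kind]
           per_server.setdefault(owner, []).append(
               (watcher, data_kind, action))
       return per_server   # pushed by the portal to each server

   # Alice's rules, entered once at the portal:
   print(delegate_policies([
       ("bob@example.com", "geolocation", "block"),
       ("bob@example.com", "basic-status", "allow"),
   ]))
   <CODE ENDS>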
8.1.2.4.  Centralized PDP

The centralized provisioning model assumes that there is a single
point of policy administration, but that there is independent
decision making at each presence and IM server.  This only works in
cases where the decision function - the policy decision point - is
identical in each server.

An alternative model is to utilize a single point of policy
administration and a single point of policy decision making.  Each
presence server acts solely as an enforcement point, asking the
policy server (through a policy protocol of some sort) how to handle
the presence or IM.  The policy server then comes back with a policy
decision - whether to proceed with the subscription or IM, and how to
filter and process it.  This is shown in Figure 13.

      +------------+      +---------------+
      |Provisioning|=====>|Policy Decision|
      |  Portal    |      | Point (PDP)   |
      +------------+      +---------------+
                            #   #   #   #
        #####################   #   #   ######################
        #                       #   #                        #
        #          ##############   ##################       #
        #          #   +-----------+                 #       #
        #          #   |           |                 #       #
        #          #   |           | .... root       #       #
        #          #   |  Server   |                 #       #
        #          #   |           |                 #       #
        #          #   |           |                 #       #
        #          #   +-----------+                 #       #
        #          #  /             \                #       #
        #          # /               \               #       #
        #          #V                 V              #       #
        #     +-----------+      +-----------+       #       #
        #     |           |      |           |       #       #
        #     |           |      |           |########       #
        #     |  Server   |      |  Server   |               #
        #     |           |      |           |               #
        #     |           |      |           |               #
        #     +-----------+      +-----------+               #
        #           |       \                                #
        #           |        \                               #
        #           V         V                              #
        #     +-----------+      +-----------+               #
        #     |           |      |           |               #
        ######|           |      |           |################
              |  Server   |      |  Server   |
              |           |      |           |
              |           |      |           |
              +-----------+      +-----------+

      =====  Provisioning Protocol

      #####  Policy Protocol

      -----  SIP

                      Figure 13: Central PDP

The centralized PDP has the benefits of central provisioning and
consistent policy operation, and it decouples policy decision making
from presence and IM processing.  This decoupling allows for multiple
presence and IM servers, but still allows for a single policy
function overall.  The individual presence and IM servers don't need
to know about the policies themselves, or even know when they change.
Of course, if a server is caching the results of a policy decision,
change notifications are required from the PDP to the server,
informing it of the change (alternatively, traditional TTL-based
expirations can be used if delays in updates are acceptable).

It is also possible to move the decision-making process into each
server.  In that case, there is still a centralized policy portal and
a centralized repository of the policy data.  The interface between
the servers and the repository then becomes some kind of standardized
database interface.
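The following non-normative sketch shows a presence server acting
purely as an enforcement point.  The policy protocol is abstracted as
a local function call, and the cache TTL and all names are
assumptions made for this example.

   <CODE BEGINS>
   # Non-normative sketch: an enforcement point queries a central
   # PDP and caches the decision, honoring change notifications.

   import time

   CACHE_TTL = 60.0      # seconds; tolerable staleness of decisions
   _cache = {}           # (watcher, presentity) -> (decision, when)

   def pdp_decide(watcher, presentity):
       # Stand-in for the policy protocol exchange with the PDP.
       return {"action": "accept", "filter": ["basic-status"]}

   def enforce(watcher, presentity):
       key = (watcher, presentity)
       decision, when = _cache.get(key, (None, 0.0))
       if decision is None or time.time() - when > CACHE_TTL:
           decision = pdp_decide(watcher, presentity)
           _cache[key] = (decision, time.time())
       return decision

   def on_policy_change(watcher, presentity):
       # Change notification from the PDP: drop the cached decision
       # so it does not outlive the policy change.
       _cache.pop((watcher, presentity), None)

   print(enforce("bob@example.com", "alice@example.com"))
   <CODE ENDS>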
For the centralized and distributed provisioning approaches, and for
the centralized decision approach, the hierarchical model suffers
overall from the fact that the root of the policy processing may not
be tuned to the specific policy needs of the device that has
subscribed.  For example, in the use case of Figure 8, presence
server 1 may be providing composition policies tuned to the fact that
the device is wireless with a limited display.  Consequently, when
Bob subscribes from his mobile device, if presence server 2 is the
root, presence server 2 may add additional data and provide an
overall presence document to the client which is not optimized for
that device.  This problem is one of the principal motivations for
the peer model, described below.

For more discussion regarding policy see Section 9.

8.1.3.  Presence Data

The hierarchical model is based on the idea that each presence server
in the chain contributes some unique piece of presence information,
composing it with what it receives from its child, and passing it on.
For the overall presence document to be reasonable, several
guidelines need to be followed (a sketch of such a composition
follows the list):

o  A presence server must be prepared to receive documents from its
   children containing information that it does not understand, and
   to apply composition policies that retain this information, adding
   to it the unique information it wishes to contribute.

o  A user interface rendering some presence document provided by its
   presence server must be prepared for any kind of presence document
   compliant with the presence data model, and must not assume a
   specific structure based on the limitations and implementation
   choices of the server to which it is paired.

If these basic rules are followed, the overall system provides
functionality equivalent to the combination of the presence
capabilities of the servers contained within it, which is highly
desirable.
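The following non-normative sketch shows a composition step that
obeys the first guideline; the element names are invented, and
presence documents are abstracted as dictionaries.

   <CODE BEGINS>
   # Non-normative sketch: merge the document received from a child
   # with this server's own data, retaining elements this server
   # does not understand.

   def compose(child_doc, own_doc):
       merged = dict(child_doc)   # keep everything from the child,
                                  # including unknown elements
       for key, value in own_doc.items():
           # Contribute only data this server uniquely owns, rather
           # than overwriting overlapping data added by the child.
           merged.setdefault(key, value)
       return merged

   child = {"basic-status": "open", "vendor-x-mood": "happy"}
   own = {"geolocation": "Iselin, NJ"}
   print(compose(child, own))
   # -> basic-status and the unknown vendor-x-mood element are
   #    retained, and geolocation is added.
   <CODE ENDS>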
8.1.4.  Conversation Consistency

Unioned bridging introduces a particular challenge for conversation
consistency.  A user with multiple devices attached to multiple
servers could potentially try to participate in the same conversation
on multiple devices at once.  This would clearly pose a challenge.
There are really two approaches that produce a sensible user
experience.

The first approach simulates the "phone experience" with IM.  When a
user (say Alice) sends an IM to Bob, and Bob is a unioned user with
two devices on two servers, Bob receives that IM on both devices.
However, when he "answers" by typing a reply from one of those
devices, the conversation continues only on that device.  The other
device on the other server receives no further IMs for this session -
either from Alice or from Bob.  Indeed, the IM window on Bob's
unanswered device may even disappear to emphasize this fact.

This mode of operation, which we'll call uni-device IM, is only
feasible with session mode IM, and its realization using traditional
SIP signaling is described in [RFC4975].

The second mode of operation, called multi-device IM, is more of a
conferencing experience.  The initial IM from Alice is delivered to
both of Bob's devices.  When Bob answers on one, that response is
shown to Alice but is also rendered on Bob's other device.
Effectively, we have set up an IM conference where each of Bob's
devices is an independent participant in the conference.  This model
is feasible with both session and pager mode IM; however,
conferencing works much better overall with session mode.

A related challenge is conversation history.  In the uni-device IM
mode, the past history for a user's conversation may be distributed
amongst the different servers, depending on which clients and servers
were involved in the conversation.  As with the exclusive model, IM
search and retrieval services may need to access all of the servers
on which a user might be located.  This is easier in the unioned case
than in the exclusive one, since in the unioned case the user's
location is on a fixed number of servers based on provisioning.  This
problem is even more complicated in IM page mode when multiple
devices are present, due to the limitations of page mode in these
configurations.

8.2.  Peer Model

In the peer model, there is no single root.  When a watcher
subscribes to a presentity, that subscription is processed first by
the server to which the watcher is connected (effectively acting as
the root), and then the subscription is passed to the other, child
presence servers.  The same goes for IM; when a client sends an IM,
the IM is processed first by the server associated with the sender
(effectively acting as the root), and then the IM is passed to the
child IM servers.  In essence, in the peer model, there is a
per-client hierarchy, with the root being a function of the client.
Consider the use case in Figure 8.  If Bob has his buddy list on
presence server 1, and it contains Alice, presence server 1 acts as
the root, and then performs a back-end subscription to presence
server 2.  However, if Joe has his buddy list on presence server 2,
and his buddy list contains Alice, presence server 2 acts as the
root, and performs a back-end subscription to presence server 1.
Similarly, if Bob sends an IM to Alice, it is processed first by
server 1 and then server 2.  If Joe sends an IM to Alice, it is first
processed by server 2 and then server 1.  This is shown in Figure 14.

       alice@example.com             alice@example.com
         +------------+               +------------+
         |            |<--------------|            |<--------+
         |            |               |            |         |
Connect  |   Server   |               |   Server   |         |
Alice    |     1      |               |     2      | Connect |
 +------>|            |-------------->|            | Alice   |
 |       |            |               |            |         |
 |       +------------+               +------------+         |
 |              \                         /                  |
 |               \                       /                   |
 |                \                     /                    |
 |                 \                   /                     |
 |                  \                 /                      |
 |                   \               /                       |
...........    .......\............./.........    ...........
.         .    .       \           /         .    .         .
.         .    .        \         |          .    .+-------+.
.  +---+  .    .  +---+    +--------+        .    ||+-----+||
.  |+-+|  .    .  |+-+|    |+------+|        .    |||     |||
.  |+-+|  .    .  |+-+|    ||      ||        .    |||     |||
.  |   |  .    .  |   |    ||      ||        .    ||+-----+||
.  |   |  .    .  |   |    |+------+|        .    |+-------+|
.  +---+  .    .  +---+    +--------+        .    |/------ /|
.         .    .           /------ /         .    |--------/|
...........    .          /------ /          .    +---------+
               .          --------/          .
               .                             .
               ...............................

    Bob                  Alice                     Joe

                      Figure 14: Peer Model

Whereas the hierarchical model clearly provides the consistency
property, it is not obvious whether a particular deployment of the
peer model provides the consistency property.  When policy decision
making is distributed amongst the servers, consistency ends up being
a function of the composition policies of the individual servers.  If
Pi() represents the composition and authorization policies of server
i, taking as input one or more presence documents provided by its
children and producing a presence document as output, the overall
system provides consistency when:

   Pi(Pj()) = Pj(Pi())

which is effectively the commutativity property.
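The following non-normative sketch makes the commutativity check
concrete.  The two toy composition policies are invented for this
example; they commute because each contributes a disjoint piece of
data without overwriting its input.

   <CODE BEGINS>
   # Non-normative sketch: the consistency property in the peer
   # model requires that composition policies commute.

   def p1(doc):                  # server 1's composition policy
       out = dict(doc)
       out.setdefault("geolocation", "unknown")
       return out

   def p2(doc):                  # server 2's composition policy
       out = dict(doc)
       out.setdefault("basic-status", "open")
       return out

   doc = {"note": "in a meeting"}
   print(p1(p2(doc)) == p2(p1(doc)))   # True: order-independent
   <CODE ENDS>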
8.2.1.  Routing

Routing in the peer model works similarly to the hierarchical model.
Each server would be configured with the children it has when it acts
as the root.  The overall presence routing algorithm then works as
follows:

o  If a presence server receives a subscription for a presentity from
   a particular watcher, and it already has a different subscription
   (as identified by dialog identifiers) for that presentity from
   that watcher, it rejects the second subscription with an
   indication of a loop.  This algorithm does, however, rule out the
   possibility of two instances of the same watcher subscribing to
   the same presentity.

o  If a presence server receives a subscription for a presentity from
   a watcher and it doesn't have one yet for that pair, it processes
   it and generates back-end subscriptions to each configured child.
   If a back-end subscription generates an error due to a loop, it
   proceeds without that back-end input.

The algorithm for IM routing works almost identically.

For example, consider Bob subscribing to Alice.  Bob's client is
supported by server 1.  Server 1 has not seen this subscription
before, so it acts as the root and passes it to server 2.  Server 2
hasn't seen it before, so it accepts it (now acting as the child),
and sends the subscription to its child, which is server 1.  Server 1
has already seen the subscription, so it rejects it.  Now server 2
knows that it is the child, and so it generates documents with just
its own data.
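The following non-normative sketch implements the loop handling in
the example above; class and field names are invented for the
illustration.

   <CODE BEGINS>
   # Non-normative sketch of peer-model routing: each server
   # remembers (watcher, presentity) pairs it has in progress, and
   # rejects a repeated subscription as a loop.

   class PeerServer:
       def __init__(self, name):
           self.name = name
           self.peers = []       # children when this server is root
           self.active = set()   # subscriptions already in progress

       def subscribe(self, watcher, presentity):
           key = (watcher, presentity)
           if key in self.active:
               return None       # already seen: reject as a loop
           self.active.add(key)
           docs = [p.subscribe(watcher, presentity)
                   for p in self.peers]
           # Branches that looped contribute no input.
           return {"source": self.name,
                   "inputs": [d for d in docs if d is not None]}

   s1, s2 = PeerServer("server1"), PeerServer("server2")
   s1.peers, s2.peers = [s2], [s1]

   # Bob, supported by server 1, subscribes to Alice: server 2's
   # back-end subscription to server 1 is rejected as a loop, so
   # server 2 contributes a document with just its own data.
   print(s1.subscribe("bob@example.com", "alice@example.com"))
   <CODE ENDS>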
As in the hierarchical case, it is possible to intermix the
partitioned and peer models for different users.  In the partitioned
case, the routing, as in the hierarchical case, devolves into the
forking routing described in Section 6.2.5.  However, intermixing
peer and exclusive bridging for different users is challenging.
[[OPEN ISSUE: need to think about this more.]]

8.2.2.  Policy

The policy considerations for the peer model are very similar to
those of the hierarchical model.  However, the root-only policy
approach is nonsensical in the peer model, and cannot be utilized.
The distributed and centralized provisioning approaches apply, and
the rules described above for generating correct results apply in the
peer model as well.

The centralized PDP model works particularly well in concert with the
peer model.  It allows for consistent policy processing regardless of
the type of rules, and has the benefit of a single point of
provisioning.  At the same time, it avoids the need for defining and
having a single root; indeed, there is little benefit to utilizing
the hierarchical model when a centralized PDP is used.

The distributed processing model in the peer model also eliminates
the problem described in Section 8.1.2.3.  The problem is that
composition and authorization policies may be tuned to the needs of
the specific device that is connected.  In the hierarchical model,
the wrong server for a particular device may be at the root, and the
resulting presence document poorly suited to the consuming device.
This problem is alleviated in the peer model.  The server that is
paired or tuned for that particular user or device is always at the
root of the tree, and its composition policies have the final say in
how presence data is presented to the watcher on that device.

For more discussion regarding policy see Section 9.

8.2.3.  Presence Data

The considerations for presence data and composition in the
hierarchical model apply in the peer model as well.  The principal
issue is consistency, and whether the overall presence document for a
watcher is the same regardless of which server the watcher connects
from.  As mentioned above, consistency follows from commutativity of
composition, which may or may not hold depending on the
implementation.

Interestingly, in the use case of Figure 9, a particular user only
ever has devices on a single server, and thus the peer and
hierarchical models end up being the same, and consistency is
provided.

8.2.4.  Conversation Consistency

The hierarchical and peer models have no impact on the issue of
conversation consistency; the problem exists identically for both
approaches.

9.  More about Policy

Several of the models described in this document may create some
ambiguity regarding what the subscribing user will see when policy is
not managed in a centralized way (Section 8.1.2.3, Section 8.1.2.4):

o  Exclusive model - In this case one server may have a certain
   policy and the other server may have a different policy.  Bob,
   subscribing one day via server A, will not be able to see Alice's
   presence, while he will be able to see her presence when he
   subscribes via server B.

o  Hierarchical unioned model - Since only the root provides the
   presence information, all users will see the same presence
   information.  However, if there are contradicting rules on the
   servers, the subscriber will in most cases see the strictest and
   most minimal view, which will depend on the hierarchy of the
   servers in the hierarchical model.

o  Peer unioned model - In this case one peer server may have a
   certain policy and the other server may have a different policy.
   Bob, subscribing one day via peer server A, will not be able to
   see Alice's presence, while he will be able to see her presence
   when he subscribes via peer server B.

There are several reasons why a distributed policy model may be
needed:

o  Adapting policy to the presence server type - A particular
   presence server may be adapted to a specific type of presence
   information and devices.  It may be hard or infeasible to provide
   a centralized policy for all types of presence servers and
   devices.

o  A presence server that is part of the intra-domain bridging may
   not be able to use centralized policy provisioning, simply because
   it does not support this feature.

It is probable that, although confusing for users, distributed
provisioning will be used at least in the initial deployments of
intra-domain bridging, until standards for central policy
provisioning are developed and implemented by the various presence
servers.

10.  Acknowledgements

The authors would like to thank Paul Fullarton, David Williams,
Sanjay Sinha, and Paul Kyzivat for their comments.  Thanks to Adam
Roach and Ben Campbell for their dedicated review.

11.  Security Considerations

The principal issue in intra-domain bridging is that of privacy.  It
is important that the system meets user expectations, and even in
cases of user provisioning errors or inconsistencies, it provides
appropriate levels of privacy.
This is an issue in the unioned models, where user privacy policies
can exist on multiple servers at the same time.  The guidelines
described here for authorization policies help ensure that privacy
properties are maintained.

12.  IANA Considerations

There are no IANA considerations associated with this specification.

13.  Informative References

[RFC2778]  Day, M., Rosenberg, J., and H. Sugano, "A Model for
           Presence and Instant Messaging", RFC 2778, February 2000.

[RFC3863]  Sugano, H., Fujimoto, S., Klyne, G., Bateman, A., Carr,
           W., and J. Peterson, "Presence Information Data Format
           (PIDF)", RFC 3863, August 2004.

[RFC4479]  Rosenberg, J., "A Data Model for Presence", RFC 4479,
           July 2006.

[RFC3856]  Rosenberg, J., "A Presence Event Package for the Session
           Initiation Protocol (SIP)", RFC 3856, August 2004.

[RFC4662]  Roach, A., Campbell, B., and J. Rosenberg, "A Session
           Initiation Protocol (SIP) Event Notification Extension
           for Resource Lists", RFC 4662, August 2006.

[RFC3944]  Johnson, T., Okubo, S., and S. Campos, "H.350 Directory
           Services", RFC 3944, December 2004.

[RFC3325]  Jennings, C., Peterson, J., and M. Watson, "Private
           Extensions to the Session Initiation Protocol (SIP) for
           Asserted Identity within Trusted Networks", RFC 3325,
           November 2002.

[RFC4474]  Peterson, J. and C. Jennings, "Enhancements for
           Authenticated Identity Management in the Session
           Initiation Protocol (SIP)", RFC 4474, August 2006.

[RFC3680]  Rosenberg, J., "A Session Initiation Protocol (SIP) Event
           Package for Registrations", RFC 3680, March 2004.

[RFC3428]  Campbell, B., Rosenberg, J., Schulzrinne, H., Huitema,
           C., and D. Gurle, "Session Initiation Protocol (SIP)
           Extension for Instant Messaging", RFC 3428,
           December 2002.

[RFC4975]  Campbell, B., Mahy, R., and C. Jennings, "The Message
           Session Relay Protocol (MSRP)", RFC 4975, September 2007.

[RFC3261]  Rosenberg, J., Schulzrinne, H., Camarillo, G., Johnston,
           A., Peterson, J., Sparks, R., Handley, M., and E.
           Schooler, "SIP: Session Initiation Protocol", RFC 3261,
           June 2002.

[RFC5344]  Houri, A., Aoki, E., and S. Parameswar, "Presence and
           Instant Messaging Peering Use Cases", RFC 5344,
           October 2008.

[I-D.ietf-simple-view-sharing]
           Rosenberg, J., Donovan, S., and K. McMurry, "Optimizing
           Federated Presence with View Sharing",
           draft-ietf-simple-view-sharing-02 (work in progress),
           November 2008.

Authors' Addresses

Jonathan Rosenberg
Cisco
Iselin, NJ
US

Email: jdrosen@cisco.com
URI:   http://www.jdrosen.net

Avshalom Houri
IBM
Science Park, Rehovot
Israel

Email: avshalom@il.ibm.com

Colm Smyth
Avaya
Dublin 18, Sandyford Business Park
Ireland

Email: smythc@avaya.com

Francois Audet
Nortel
4655 Great America Parkway
Santa Clara, CA 95054
USA

Email: audet@nortel.com