2 SIMPLE J. Rosenberg 3 Internet-Draft Cisco 4 Intended status: Informational A. Houri 5 Expires: May 7, 2009 IBM 6 C. Smyth 7 Avaya 8 F. Audet 9 Nortel 10 November 3, 2008 12 Models for Intra-Domain Presence and Instant Messaging (IM) Bridging 13 draft-ietf-simple-intradomain-federation-02 15 Status of this Memo 17 By submitting this Internet-Draft, each author represents that any 18 applicable patent or other IPR claims of which he or she is aware 19 have been or will be disclosed, and any of which he or she becomes 20 aware will be disclosed, in accordance with Section 6 of BCP 79. 22 Internet-Drafts are working documents of the Internet Engineering 23 Task Force (IETF), its areas, and its working groups. Note that 24 other groups may also distribute working documents as Internet- 25 Drafts.
27 Internet-Drafts are draft documents valid for a maximum of six months 28 and may be updated, replaced, or obsoleted by other documents at any 29 time. It is inappropriate to use Internet-Drafts as reference 30 material or to cite them other than as "work in progress." 32 The list of current Internet-Drafts can be accessed at 33 http://www.ietf.org/ietf/1id-abstracts.txt. 35 The list of Internet-Draft Shadow Directories can be accessed at 36 http://www.ietf.org/shadow.html. 38 This Internet-Draft will expire on May 7, 2009. 40 Copyright Notice 42 Copyright (C) The IETF Trust (2008). 44 Abstract 46 Presence and Instant Messaging (IM) bridging involves the sharing of 47 presence information and exchange of IM across multiple systems 48 within a single domain. As such, it is a close cousin to presence 49 and IM federation, which involves the sharing of presence and IM 50 across differing domains. Presence and IM bridging can be the result 51 of a multi-vendor network, or a consequence of a large organization 52 that requires partitioning. This document examines different use 53 cases and models for intra-domain presence and IM bridging. 55 Table of Contents 57 1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . . 4 58 2. Intra-Domain Bridging vs. Clustering . . . . . . . . . . . . . 7 59 3. Use Cases for Intra-Domain Bridging . . . . . . . . . . . . . 8 60 3.1. Scale . . . . . . . . . . . . . . . . . . . . . . . . . . 8 61 3.2. Organizational Structures . . . . . . . . . . . . . . . . 8 62 3.3. Multi-Vendor Requirements . . . . . . . . . . . . . . . . 8 63 3.4. Specialization . . . . . . . . . . . . . . . . . . . . . . 9 64 4. Considerations for Bridging Models . . . . . . . . . . . . . . 10 65 5. Partitioned . . . . . . . . . . . . . . . . . . . . . . . . . 10 66 5.1. Applicability . . . . . . . . . . . . . . . . . . . . . . 11 67 5.2. Routing . . . . . . . . . . . . . . . . . . . . . . . . . 12 68 5.2.1. Centralized Database . . . . . . . . . . . . . . . . . 12 69 5.2.2. Routing Proxy . . . . . . . . . . . . . . . . . . . . 14 70 5.2.3. Subdomaining . . . . . . . . . . . . . . . . . . . . . 15 71 5.2.4. Peer-to-Peer . . . . . . . . . . . . . . . . . . . . . 17 72 5.2.5. Forking . . . . . . . . . . . . . . . . . . . . . . . 17 73 5.2.6. Provisioned Routing . . . . . . . . . . . . . . . . . 17 74 5.3. Policy . . . . . . . . . . . . . . . . . . . . . . . . . . 17 75 5.4. Presence Data . . . . . . . . . . . . . . . . . . . . . . 18 76 5.5. Conversation Consistency . . . . . . . . . . . . . . . . . 18 77 6. Exclusive . . . . . . . . . . . . . . . . . . . . . . . . . . 18 78 6.1. Routing . . . . . . . . . . . . . . . . . . . . . . . . . 19 79 6.1.1. Centralized Database . . . . . . . . . . . . . . . . . 20 80 6.1.2. Routing Proxy . . . . . . . . . . . . . . . . . . . . 20 81 6.1.3. Subdomaining . . . . . . . . . . . . . . . . . . . . . 20 82 6.1.4. Peer-to-Peer . . . . . . . . . . . . . . . . . . . . . 21 83 6.1.5. Forking . . . . . . . . . . . . . . . . . . . . . . . 21 84 6.2. Policy . . . . . . . . . . . . . . . . . . . . . . . . . . 22 85 6.3. Presence Data . . . . . . . . . . . . . . . . . . . . . . 22 86 6.4. Conversation Consistency . . . . . . . . . . . . . . . . . 22 87 7. Unioned . . . . . . . . . . . . . . . . . . . . . . . . . . . 23 88 7.1. Hierarchical Model . . . . . . . . . . . . . . . . . . . . 26 89 7.1.1. Routing . . . . . . . . . . . . . . . . . . . . . . . 28 90 7.1.2. Policy and Identity . . . . . . . . . . . . . . . . . 29 91 7.1.2.1. Root Only . . . . . . . . 
. . . . . . . . . . . . 29 92 7.1.2.2. Distributed Provisioning . . . . . . . . . . . . . 31 93 7.1.2.3. Central Provisioning . . . . . . . . . . . . . . . 33 94 7.1.2.4. Centralized PDP . . . . . . . . . . . . . . . . . 35 95 7.1.3. Presence Data . . . . . . . . . . . . . . . . . . . . 37 96 7.1.4. Conversation Consistency . . . . . . . . . . . . . . . 37 97 7.2. Peer Model . . . . . . . . . . . . . . . . . . . . . . . . 38 98 7.2.1. Routing . . . . . . . . . . . . . . . . . . . . . . . 40 99 7.2.2. Policy . . . . . . . . . . . . . . . . . . . . . . . . 41 100 7.2.3. Presence Data . . . . . . . . . . . . . . . . . . . . 41 101 7.2.4. Conversation Consistency . . . . . . . . . . . . . . . 41 102 8. Acknowledgements . . . . . . . . . . . . . . . . . . . . . . . 42 103 9. Security Considerations . . . . . . . . . . . . . . . . . . . 42 104 10. IANA Considerations . . . . . . . . . . . . . . . . . . . . . 42 105 11. Informative References . . . . . . . . . . . . . . . . . . . . 42 106 Authors' Addresses . . . . . . . . . . . . . . . . . . . . . . . . 43 107 Intellectual Property and Copyright Statements . . . . . . . . . . 45 109 1. Introduction 111 Presence refers to the ability, willingness and desire to communicate 112 across differing devices, mediums and services [RFC2778]. Presence 113 is described using presence documents [RFC3863] [RFC4479], exchanged 114 using a Session Initiation Protocol (SIP) [RFC3261] based event 115 package [RFC3856]. Similarly, instant messaging refers to the 116 exchange of real-time text-oriented messaging between users. SIP 117 defines two mechanisms for IM - pager mode [RFC3428] and session mode 118 [RFC4975]. 120 Presence and Instant Messaging (IM) bridging involves the sharing of 121 presence information and exchange of IM across multiple systems 122 within a single domain. As such, it is a close cousin to presence 123 and IM federation 124 [I-D.ietf-speermint-consolidated-presence-im-usecases], which 125 involves the sharing of presence and IM across differing domains. 127 For example, consider the network of Figure 1, which shows one model 128 for inter-domain presence federation. In this network, Alice belongs 129 to the example.org domain, and Bob belongs to the example.com domain. 130 Alice subscribes to her buddy list on her presence server (which is 131 also acting as her Resource List Server (RLS) [RFC4662]), and that 132 list includes bob@example.com. Alice's presence server generates a 133 back-end subscription on the federated link between example.org and 134 example.com. The example.com presence server authorizes the 135 subscription, and if permitted, generates notifications back to 136 Alice's presence server, which are in turn passed to Alice. 138 ............................. .............................. 139 . . . . 140 . . . . 141 . alice@example.org . . bob@example.com . 142 . +------------+ SUB . . +------------+ . 143 . | | Bob . . | | . 144 . | Presence |------------------->| Presence | . 145 . | Server | . . | Server | . 146 . | | . . | | . 147 . | |<-------------------| | . 148 . | | NOTIFY . | | . 149 . +------------+ . . +------------+ . 150 . ^ | . . ^ . 151 . SUB | | . . |PUB . 152 . Buddy | |NOTIFY . . | . 153 . List | | . . | . 154 . | | . . | . 155 . | V . . | . 156 . +-------+ . . +-------+ . 157 . | | . . | | . 158 . | | . . | | . 159 . | | . . | | . 160 . +-------+ . . +-------+ . 161 . . . . 162 . Alice's . . Bob's . 163 . PC . . PC . 164 . . . . 165 ............................. .............................. 
167 example.org example.com 169 Figure 1: Inter-Domain Presence Model 171 Similarly, inter-domain IM federation would look like the model shown 172 in Figure 2: 174 ............................. .............................. 175 . . . . 176 . . . . 177 . alice@example.org . . bob@example.com . 178 . +------------+ INV . . +------------+ . 179 . | | Bob . . | | . 180 . | |------------------->| | . 181 . | IM | . . | IM | . 182 . | Server | . . | Server | . 183 . | |<------------------>| | . 184 . | | IM | | . 185 . +------------+ Content +------------+ . 186 . ^ ^ . . ^ | . 187 . INVITE | | . . IM | |INV . 188 . Bob | | IM . . Content| |Bob . 189 . | | Content . . | | . 190 . | | . . | | . 191 . | V . . V V . 192 . +-------+ . . +-------+ . 193 . | | . . | | . 194 . | | . . | | . 195 . | | . . | | . 196 . +-------+ . . +-------+ . 197 . . . . 198 . Alice's . . Bob's . 199 . PC . . PC . 200 . . . . 201 ............................. .............................. 203 example.org example.com 205 Figure 2: Inter-Domain IM Model 207 In this model, example.org and example.com both have an "IM server". 208 This would typically be a SIP proxy or B2BUA responsible for handling 209 both the signaling and the IM content (as these are separate in the 210 case of session mode). The IM server would handle routing of the IM 211 along with application of IM policy. 213 Though both of these pictures show federation between domains, a 214 similar interconnection - presence and IM bridging - can happen 215 within a domain as well. We define intra-domain bridging as the 216 interconnection of presence and IM servers within a single domain, 217 where domain refers explicitly to the right hand side of the @-sign in 218 the SIP URI. 220 This document considers the architectural models and different 221 problems that arise when performing intra-domain presence and IM 222 bridging. Though presence and IM are quite distinct functions, this 223 document considers both since the architectural models and issues are 224 common between the two. The document first clarifies the distinction 225 between intra-domain bridging and clustering. It defines the primary 226 issues that arise in intra-domain presence and IM bridging, and then 227 goes on to define the three primary models for it - partitioned, 228 unioned and exclusive. 230 This document doesn't make any recommendation as to which model is 231 best. Each model has different areas of applicability and is 232 appropriate in particular deployments. The intent is to provide 233 informative material and ideas on how this can be done. 235 2. Intra-Domain Bridging vs. Clustering 237 Intra-domain bridging is the interconnection of servers within a 238 single domain. This is very similar to clustering, which is the 239 tight coupling of a multiplicity of physical servers to realize scale 240 and/or high availability. Consequently, it is important to clarify 241 the differences. 243 Firstly, clustering implies a tight coupling of components. 244 Clustering usually involves proprietary information sharing, such as 245 database replication and state sharing, which in turn are tightly 246 bound with the internal implementation of the product. Intra-domain 247 bridging, on the other hand, is a loose coupling. There is never 248 database replication or state replication across federated systems 249 (though a database and DB replication might be used within a 250 component providing routing functions to facilitate bridging).
252 Secondly, clustering always occurs amongst components from the same 253 vendor. This is due to the tight coupling described above. Intra- 254 domain bridging, on the other hand, can occur between servers from 255 different vendors. As described below, this is one of the chief use 256 cases for intra-domain bridging. 258 Thirdly, clustering is almost always invisible to users. 259 Communications between users within the same cluster almost always 260 have identical functionality to communications between users on the 261 same server within the cluster. The cluster boundaries are 262 invisible; indeed the purpose of a cluster is to build a system which 263 behaves as if it were a single monolithic entity, even though it is 264 not. Bridging, on the other hand, is often visible to users. There 265 will frequently be loss of functionality when crossing between bridged systems. 266 Though this is not a hard and fast rule, it is a common 267 differentiator. 269 Fourthly, connections between federated and bridged systems almost 270 always involve standards, whereas communications within a cluster 271 often involve proprietary mechanisms. Standards are needed for 272 bridging because the systems can be from different vendors, and thus 273 agreement is needed to enable interoperation. 275 Finally, a cluster will often have an upper bound on its size and 276 capacity, due to some kind of constraint on the coupling between 277 nodes in the cluster. However, there is typically no limit, or a 278 much larger limit, on the number of bridged systems that can be put 279 into a domain. This is a consequence of their loose coupling. 281 Though these rules are not hard and fast, they give general 282 guidelines on the differences between clustering and intra-domain 283 bridging. 285 3. Use Cases for Intra-Domain Bridging 287 There are several use cases that drive intra-domain bridging. 289 3.1. Scale 291 One common use case for bridging is an organization that is just very 292 large, whose size exceeds the capacity that a single server or 293 cluster can provide. So, instead, the domain breaks its users into 294 partitions (perhaps arbitrarily) and then uses intra-domain bridging 295 to allow the overall system to scale up to arbitrary sizes. This is 296 common practice today for service providers and large enterprises. 298 3.2. Organizational Structures 300 Another use case for intra-domain bridging is a multi-national 301 organization with regional IT departments, each of which supports a 302 particular set of nationalities. It is very common for each regional 303 IT department to deploy and run its own servers for its own 304 population. In that case, the domain would end up being composed of 305 the presence servers deployed by each regional IT department. 306 Indeed, in many organizations, each regional IT department might end 307 up using different vendors. This can be a consequence of differing 308 regional requirements for features (such as compliance or 309 localization support), differing sales channels and markets in which 310 vendors sell, and so on. 312 3.3. Multi-Vendor Requirements 314 Another use case for intra-domain bridging is an organization that 315 requires multiple vendors for each service, in order to avoid vendor 316 lock-in and drive competition between its vendors. Since the servers 317 will come from different vendors, a natural way to deploy them is to 318 partition the users across them.
Such multi-vendor networks are 319 extremely common in large service provider networks, many of which 320 have hard requirements for multiple vendors. 322 Typically, the vendors are split along geographies, often run by 323 different local IT departments. As such, this case is similar to the 324 organizational division above. 326 3.4. Specialization 328 Another use case is where certain vendors might specialize in 329 specific types of clients. For example, one vendor might provide a 330 mobile client (but no desktop client), while another provides a 331 desktop client but no mobile client. It is often the case that 332 specific client applications and devices are designed to only work 333 with their corresponding servers. In an ideal world, clients would 334 all implement to standards and this would not happen, but in 335 practice, the vast majority of presence and IM endpoints work only 336 (or only work well) with the server from the same vendor. A domain 337 might want each user to have both a mobile client and a desktop 338 client, which will require servers from each vendor, leading to 339 intra-domain bridging. 341 Similarly, presence can contain rich information, including 342 activities of the user (such as whether they are in a meeting or on 343 the phone), their geographic location, and their mood. This presence 344 state can be determined manually (where the user enters and updates 345 the information), or automatically. Automatic determination of these 346 states is far preferable, since it puts less burden on the user. 347 Determination of these presence states is done by taking "raw" data 348 about the user, and using it to generate corresponding presence 349 states. This raw data can come from any source that has information 350 about the user, including their calendaring server, their VoIP 351 infrastructure, their VPN server, their laptop operating system, and 352 so on. Each of these components is typically made by a different 353 vendor, each of which is likely to integrate that data with its own 354 presence server. Consequently, presence servers from different 355 vendors are likely to specialize in particular pieces of presence 356 data, based on the other infrastructure they provide. The overall 357 network will need to contain servers from those vendors, composing 358 together the various sources of information, in order to combine 359 their benefits. This use case is specific to presence, and results 360 in intra-domain bridging. 362 4. Considerations for Bridging Models 364 When considering architectures for intra-domain presence and IM 365 bridging, several issues need to be considered. The first two of 366 these apply to both IM and presence (and indeed to any intra-domain 367 communications, including voice). The latter two are specific to 368 presence and IM respectively: 370 Routing: How are subscriptions and IMs routed to the right presence 371 and IM server(s)? This issue is more complex in intra-domain 372 models, since the right hand side of the @-sign cannot be used to 373 perform this routing. 375 Policy and Identity: Where do user policies reside, and what 376 presence and IM server(s) are responsible for executing that 377 policy? What identities does the user have in each system and how 378 do they relate? 380 Presence Data Ownership: Which presence servers are responsible for 381 which pieces of presence information, and how are those pieces 382 composed to form a coherent and consistent view of user presence?
384 Conversation Consistency: When considering instant messaging, if IM 385 can be delivered to multiple servers, how do we make sure that the 386 overall conversation is coherent to the user? 388 The sections below describe several different models for intra-domain 389 bridging. Each model is driven by a set of use cases, which are 390 described in an applicability subsection for each model. Each model 391 description also discusses how routing, policy, presence data 392 ownership and conversation consistency work. 394 5. Partitioned 396 In the partitioned model, a single domain has a multiplicity of 397 servers, each of which manages a non-overlapping set of users. That 398 is, for each user in the domain, their presence data, policy and IM 399 handling reside on a single server. Each "single server" may in fact 400 be a cluster. 402 Another important facet of the partitioned model is that, even though 403 users are partitioned across different servers, they each share the 404 same domain name in the right hand side of their URI, and this URI is 405 what those users use when communicating with other users both inside 406 and outside of the domain. There are many reasons why a domain would 407 want all of its users to share the same right-hand side of the @-sign 408 even though it is partitioned internally: 410 o The partitioning may reflect organizational or geographical 411 structures that a domain administrator does not want to reflect 412 externally. 414 o If each partition had a separate domain name (e.g., 415 engineering.example.com and sales.example.com), a user who changed 416 organizations would need to change their URI. 418 o For reasons of vanity, users often like their URI (which appears 419 on business cards, in email, and so on) to be short. 422 o If a watcher wants to add a presentity based on username and does 423 not want to know, or does not know, which subdomain or internal 424 department the presentity belongs to, a single domain is needed. 426 This model is illustrated in Figure 3. As the model shows, the 427 domain example.com has six users across three servers, each of which 428 is handling two of the users. 430 ..................................................................... 431 . . 432 . . 433 . . 434 . joe@example.com alice@example.com padma@example.com . 435 . bob@example.com zeke@example.com hannes@example.com . 436 . +-----------+ +-----------+ +-----------+ . 437 . | | | | | | . 438 . | Server | | Server | | Server | . 439 . | 1 | | 2 | | 3 | . 440 . | | | | | | . 441 . +-----------+ +-----------+ +-----------+ . 442 . . 443 . . 444 . . 445 . example.com . 446 ..................................................................... 448 Figure 3: Partitioned Model 450 5.1. Applicability 452 The partitioned model arises naturally in larger domains, such as an 453 enterprise or service provider, where issues of scale, organizational 454 structure, or multi-vendor requirements cause the domain to be 455 managed by a multiplicity of independent servers. 457 In cases where each user has an AoR that directly points to its 458 partition (for example, us.example.com), that model becomes identical 459 to the inter-domain federated model and is not treated here further. 461 5.2. Routing 463 The partitioned intra-domain model works almost identically to an 464 inter-domain federated model, with the primary difference being 465 routing.
In inter-domain federation, the domain part of the URI can 466 be used to route presence subscriptions and IM messages from one 467 domain to the other. This is no longer the case in an intra-domain 468 model. Consider the case where Joe subscribes to his buddy list, 469 which is served by his presence server (server 1 in Figure 3). Alice 470 is a member of Joe's buddy list. How does server 1 know that the 471 back-end subscription to Alice needs to get routed to server 2? 473 There are several techniques that can be used to solve this problem, 474 which are outlined in the subsections below. 476 5.2.1. Centralized Database 477 ..................................................................... 478 . +-----------+ . 479 . alice? | | . 480 . +---------------> | Database | . 481 . | server 2 | | . 482 . | +-------------| | . 483 . | | +-----------+ . 484 . | | . 485 . | | . 486 . | | . 487 . | | . 488 . | | . 489 . | | . 490 . | V . 491 . joe@example.com alice@example.com padma@example.com . 492 . bob@example.com zeke@example.com hannes@example.com . 493 . +-----------+ +-----------+ +-----------+ . 494 . | | | | | | . 495 . | Server | | Server | | Server | . 496 . | 1 | | 2 | | 3 | . 497 . | | | | | | . 498 . +-----------+ +-----------+ +-----------+ . 499 . . 500 . . 501 . . 502 . example.com . 503 ..................................................................... 505 Figure 4: Centralized DB 507 One solution is to rely on a common, centralized database that 508 maintains mappings of users to specific servers, shown in Figure 4. 509 When Joe subscribes to his buddy list that contains Alice, server 1 510 would query this database, asking it which server is responsible for 511 alice@example.com. The database would indicate server 2, and then 512 server 1 would generate the backend SUBSCRIBE request towards server 513 2. Similarly, when Joe sends an INVITE to establish an IM session 514 with Padma, he would send the IM to his IM server, and it would query 515 the database to find out that Padma is supported on server 3. This 516 is a common technique in large email systems. It is often 517 implemented using internal sub-domains, so that the database would 518 return alice@central.example.com to the query, and server 1 would 519 modify the Request-URI in the request to reflect this. 521 Routing database solutions have the problem that they require 522 standardization on a common schema and database protocol in order to 523 work in multi-vendor environments. For example, LDAP and SQL are 524 both possibilities. There is variety in LDAP schemas; one possibility 525 is H.350.4, which could be adapted for use here [RFC3944].
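The following sketch shows, in Python, how a server might consult such a mapping and rewrite the Request-URI using an internal sub-domain. It is illustrative only: the table contents and function names are hypothetical, and a real deployment would query a shared LDAP or SQL database rather than an in-memory table.

<CODE BEGINS>
# Illustrative sketch of centralized-database routing in the
# partitioned model (hypothetical schema and names).

USER_TO_SERVER = {
    "joe@example.com":   "a.example.com",   # server 1
    "alice@example.com": "b.example.com",   # server 2
    "padma@example.com": "c.example.com",   # server 3
}

def route_request(request_uri: str) -> str:
    """Rewrite a SUBSCRIBE/INVITE Request-URI to the internal
    sub-domain of the server authoritative for the target user."""
    user = request_uri.removeprefix("sip:")
    server = USER_TO_SERVER.get(user)
    if server is None:
        raise LookupError("404 Not Found: " + user)
    return "sip:" + user.split("@", 1)[0] + "@" + server

# Server 1 resolving Joe's back-end subscription to Alice:
print(route_request("sip:alice@example.com"))  # sip:alice@b.example.com
<CODE ENDS>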
Instead 560 of a centralized database, there would be a centralized SIP proxy 561 farm. Server 1 would send requests (SUBSCRIBE, INVITE, etc.) for 562 users it doesn't serve to this server farm, and the servers would 563 lookup the user in a database (which is now accessed only by the 564 routing proxy), and the resulting requests are sent to the correct 565 server. A redirect server can be used as well, in which case the 566 flow is very much like that of a centralized database, but uses SIP. 568 Routing proxies have the benefit that they do not require a common 569 database schema and protocol, but they do require a centralized 570 server function that sees all subscriptions and IM requests, which 571 can be a scale challenge. For IM, a centralized proxy is very 572 challenging when using pager mode, since each and every IM is 573 processed by the central proxy. For session mode, the scale is 574 better, since the proxy handles only the initial INVITE. 576 5.2.3. Subdomaining 578 In this solution, each user is associated with a subdomain, and is 579 provisioned as part of their respective server using that subdomain. 580 Consequently, each server thinks it is its own, separate domain. 581 However, when a user adds a presentity to their buddy list without 582 the subdomain, they first consult a shared database which returns the 583 subdomained URI to subscribe or IM to. This sub-domained URI can be 584 returned because the user provided a search criteria, such as "Find 585 Alice Chang", or provided the non-subdomained URI 586 (alice@example.com). This is shown in Figure 6 587 ..................................................................... 588 . +-----------+ . 589 . who is Alice? | | . 590 . +---------------------->| Database | . 591 . | alice@b.example.com | | . 592 . | +---------------------| | . 593 . | | +-----------+ . 594 . | | . 595 . | | . 596 . | | . 597 . | | . 598 . | | . 599 . | | . 600 . | | . 601 . | | joe@a.example.com alice@b.example.com padma@c.example.com . 602 . | | bob@a.example.com zeke@b.example.com hannes@c.example.com . 603 . | | +-----------+ +-----------+ +-----------+ . 604 . | | | | | | | | . 605 . | | | Server | | Server | | Server | . 606 . | | | 1 | | 2 | | 3 | . 607 . | | | | | | | | . 608 . | | +-----------+ +-----------+ +-----------+ . 609 . | | ^ . 610 . | | | . 611 . | | | . 612 . | | | . 613 . | | | . 614 . | | | . 615 . | | +-----------+ . 616 . | +-------------------->| | . 617 . | | Client | . 618 . | | | . 619 . +-----------------------| | . 620 . +-----------+ . 621 . . 622 . . 623 . . 624 . example.com . 625 ..................................................................... 627 Figure 6: Subdomaining 629 Subdomaining puts the burden of routing within the client. The 630 servers can be completely unaware that they are actually part of the 631 same domain, and integrate with each other exactly as they would in 632 an inter-domain model. However, the client is given the burden of 633 determining the subdomained URI from the original URI or buddy name, 634 and then subscribing or IMing directly to that server, or including 635 the subdomained URI in their buddylist. The client is also 636 responsible for hiding the subdomain structure from the user and 637 storing the mapping information locally for extended periods of time. 638 In cases where users have buddy list subscriptions, the client will 639 need to resolve the buddy name into the sub-domained version before 640 adding to their buddy list. 642 5.2.4. 
642 5.2.4. Peer-to-Peer 644 Another model is to utilize a peer-to-peer network amongst all of the 645 servers, and store URI to server mappings in the distributed hash 646 table it creates. This has some nice properties but does require a 647 standardized and common p2p protocol across vendors, which does not 648 exist today. 650 5.2.5. Forking 652 Yet another solution is to utilize forking. Each server is 653 provisioned with the domain names or IP addresses of the other 654 servers, but not with the mapping of users to each of those servers. 655 When a server needs to handle a request for a user it doesn't have, 656 it forks the request to all of the other servers. This request will 657 be rejected with a 404 on the servers which do not handle that user, 658 and accepted on the one that does. The approach assumes that servers 659 can differentiate inbound requests from end users (which need to get 660 passed on to other servers - for example via a back-end subscription) 661 from inbound requests from other servers (which do not get passed on). This approach 662 works very well in organizations with a relatively small number of 663 servers (say, two or three), and becomes increasingly ineffective 664 with more and more servers.
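A sketch of this forking behavior follows (Python; the peer list and the 'send' callback are hypothetical stand-ins for provisioning and the SIP transaction layer, respectively):

<CODE BEGINS>
# Illustrative sketch of the forking approach: fork to every peer
# and keep the one answer that is not a 404.

PEER_SERVERS = ["a.example.com", "b.example.com", "c.example.com"]

def fork(target: str, send) -> str:
    """Fork a SUBSCRIBE/INVITE for 'target' to all peers; exactly one
    server is expected to accept, all others answer 404."""
    for server in PEER_SERVERS:
        if send(server, target) != 404:
            return server           # this server handles the user
    raise LookupError("404: no server handles " + target)

# Toy transaction layer: only b.example.com serves Alice.
def send(server, target):
    return 200 if (server, target) == ("b.example.com",
                                       "alice@example.com") else 404

print(fork("alice@example.com", send))   # b.example.com
<CODE ENDS>

As the text notes, the cost of this approach grows with the number of servers, since every request for an unknown target generates a request to every peer.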
666 5.2.6. Provisioned Routing 668 Yet another solution is to provision each server with each user, but 669 for servers that don't actually serve the user, the provisioning 670 merely tells the server where to proxy the request. This solution 671 has extremely poor operational properties, requiring multiple points 672 of provisioning across disparate systems. 674 5.3. Policy 676 A fundamental characteristic of the partitioned model is that there 677 is a single point of policy enforcement (authorization rules and 678 composition policy) for each user. 680 5.4. Presence Data 682 Another fundamental characteristic of the partitioned model is that 683 the presence data for a user is managed authoritatively on a single 684 server. In the example of Figure 3, the presence data for Alice 685 lives on server 2 alone (recall that server two may be physically 686 implemented as a multiplicity of boxes from a single vendor, each of 687 which might have a portion of the presence data, but externally it 688 appears to behave as if it were a single server). A subscription 689 from Bob to Alice may cause a transfer of presence information from 690 server 2 to server 1, but server 2 remains authoritative and is the 691 single root source of all data for Alice. 693 5.5. Conversation Consistency 695 Since the IMs for a particular user are always delivered through the 696 particular server that handles that user, it is relatively easy to 697 achieve conversation consistency. That server receives all of the 698 messages and readily passes them on to the user for rendering. 699 Furthermore, a coherent view of message history can be assembled by 700 the server, since it sees all messages. If a user has multiple 701 devices, there are challenges in constructing a consistent view of 702 the conversation with page mode IM. However, those issues exist in 703 general with page mode and are not worsened by intra-domain bridging. 705 6. Exclusive 707 In the former (static) partitioned model, the mapping of a user to a 708 specific server is done by some off-line configuration means. The 709 configuration assigns a user to a specific server and in order to use 710 a different server, the user needs to change the configuration (or 711 request that the administrator do so). 713 In some environments, this restriction of a user to a particular 714 server may be a limitation. Instead, it is desirable to allow users 715 to freely move back and forth between systems, though using only a 716 single one at a time. This is called Exclusive Bridging. 718 Some use cases where this can happen are: 720 o The organization is using multiple systems where each system has 721 its own characteristics. For example, one server is tailored to 722 work with some CAD (Computer Aided Design) system and provides 723 presence and IM functionality along with the CAD system. The 724 other server is the default presence and IM server of the 725 organization. Users wish to be able to work with either system 726 whenever they choose; they also wish to be able to see presence 727 and exchange IMs with their buddies no matter which system their buddies are 728 currently using. 730 o An enterprise wishes to test presence servers from two different 731 vendors. In order to do so they wish to install a server from 732 each vendor and see which of the servers is better. In the static 733 partitioned model, a user would have to be statically assigned to a 734 particular server and could not compare the features of the two 735 servers. In the dynamic partitioned model, a user may choose on 736 a whim which of the servers being tested to use. They can 737 move back and forth in case of problems. 739 o An enterprise is currently using servers from one vendor, but has 740 decided to add a second. They would like to gradually migrate 741 users from one to the other. In order to make a smooth 742 transition, users can move back and forth over a period of a few 743 weeks until they are finally required to stop going back, and get 744 deleted from their old system. 746 o A domain is using multiple clusters from the same vendor. To 747 simplify administration, users can connect to any of the clusters, 748 perhaps one local to their site. To accomplish this, the clusters 749 are connected using exclusive bridging. 751 6.1. Routing 753 Due to its nature, routing in the exclusive bridging model is more 754 complex than routing in the partitioned model. 756 The association of a user with a server cannot be known until the user 757 publishes a presence document to a specific server or registers with 758 that server. Therefore, when Alice subscribes to Bob's presence 759 information, or sends him an IM, Alice's server will not easily know 760 the server that has Bob's presence and is handling his IM. 762 In addition, a server may get a subscription to a user, or an IM 763 targeted at a user, but the user may not be connected to any server 764 yet. In the case of presence, once the user appears on one of the 765 servers, the subscription should be sent to that server. 767 A user may use two servers at the same time and have their 768 presence information on both servers at once. This should be regarded as a 769 conflict, and one of the presence clients should be terminated or 770 redirected to the other server. 772 Fortunately, most of the routing approaches described for partitioned 773 bridging, excepting provisioned routing, can be adapted for exclusive 774 bridging. 776 6.1.1. Centralized Database 778 A centralized database can be used, but will need to support a test- 779 and-set functionality. With it, servers can check whether a user is 780 already bound to a specific server, and bind the user to themselves only if the 781 user is not on another server. If the user is already on another 782 server, a redirect (or some other error message) will be sent to that 783 user. 785 When a client sends a subscription request for some target user, and 786 the target user is not associated with a server yet, the subscription 787 must be 'held' on the server of the watcher. Once the target user 788 connects and becomes bound to a server, the database needs to send a 789 change notification to the watching server, so that the 'held' 790 subscription can be extended to the server which is now handling 791 presence for the user.
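The following sketch shows the test-and-set primitive together with the 'held' subscription behavior just described (Python; illustrative only - in a real deployment the atomicity would live inside the shared database, and the change notification would be a protocol message rather than a callback):

<CODE BEGINS>
# Illustrative sketch of test-and-set binding for exclusive
# bridging, with held subscriptions released on first binding.

import threading

class LocationDB:
    def __init__(self):
        self.lock = threading.Lock()  # stands in for database atomicity
        self.binding = {}             # user -> serving server
        self.held = {}                # user -> watching servers waiting

    def hold(self, user, watching_server):
        """Record a subscription 'held' until the user appears."""
        self.held.setdefault(user, []).append(watching_server)

    def test_and_set(self, user, server):
        """Atomically bind 'user' to 'server' unless already bound
        elsewhere; returns (success, current_holder)."""
        with self.lock:
            holder = self.binding.setdefault(user, server)
            if holder == server:
                for watcher in self.held.pop(user, []):
                    # Change notification: extend the held subscription.
                    print("notify %s: %s is now on %s" % (watcher, user, server))
            return holder == server, holder

db = LocationDB()
db.hold("bob@example.com", "a.example.com")
print(db.test_and_set("bob@example.com", "b.example.com"))  # (True, ...)
print(db.test_and_set("bob@example.com", "c.example.com"))  # (False, ...): redirect
<CODE ENDS>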
793 6.1.2. Routing Proxy 795 The routing proxy mechanism can be used for exclusive bridging as 796 well. However, it requires signaling from each server to the routing 797 proxy to indicate that the user is now located on that server. This 798 can be done by having each server send a REGISTER request to the 799 routing proxy, for that user, and setting the contact to itself. The 800 routing proxy would have a rule which allows only a single registered 801 contact per user. Using the registration event package [RFC3680], 802 each server subscribes to the registration state at the routing proxy 803 for each user it is managing. If the routing proxy sees a duplicate 804 registration, it allows it, and then uses a reg-event notification to 805 the other server to de-register the user. Once the user is de- 806 registered from that server, it would terminate any subscriptions in 807 place for that user, causing the watching server to reconnect the 808 subscription to the new server. Something similar can be done for 809 in-progress IM sessions; however, this may have the effect of causing 810 a disruption in ongoing sessions.
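As an illustration of the single-contact rule, the sketch below models a newer registration evicting the older one (Python; the 'notify' callback is only a stand-in for the reg-event NOTIFY, and all names are hypothetical):

<CODE BEGINS>
# Illustrative sketch of the routing proxy's one-contact-per-user
# rule for exclusive bridging.

class RoutingProxyRegistrar:
    def __init__(self):
        self.contact = {}    # user -> serving server

    def register(self, user, server, notify):
        """Accept a REGISTER from 'server' for 'user'; if another
        server held the registration, tell it the user moved."""
        old = self.contact.get(user)
        self.contact[user] = server
        if old is not None and old != server:
            notify(old, user)   # old server then terminates its
                                # subscriptions for this user

def on_moved(old_server, user):
    print("NOTIFY %s: %s de-registered; release subscriptions" % (old_server, user))

registrar = RoutingProxyRegistrar()
registrar.register("bob@example.com", "server1", on_moved)
registrar.register("bob@example.com", "server2", on_moved)
# -> NOTIFY server1: bob@example.com de-registered; release subscriptions
<CODE ENDS>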
812 6.1.3. Subdomaining 814 Subdomaining is just a variation on the centralized database. 815 Assuming the database supports a test-and-set mechanism, it can be 816 used for exclusive bridging. 818 However, the principal challenge in applying subdomaining to 819 exclusive bridging is database change notifications. When a user 820 moves from one server to another, that change needs to be propagated 821 to all clients which have ongoing sessions (presence and IM) with 822 that user. This requires a large-scale change notification mechanism 823 - to each client in the network. 825 6.1.4. Peer-to-Peer 827 Peer-to-peer routing can be used for routing in exclusive bridging. 828 Essentially, it provides a distributed registrar function that maps 829 each AoR to the particular server against which it is currently 830 registered. When a UA registers to a particular server, that 831 registration is written into the P2P network, such that queries for 832 that user are directed to that presence server. 834 However, change notifications can be troublesome. When a user 835 registered on server 1 now registers on server 2, server 2 needs to 836 query the p2p network, discover that server 1 is handling the user, 837 and then tell server 1 that the user has moved. Server 1 then needs 838 to terminate its ongoing subscriptions and send them to server 2. 840 Furthermore, P2P networks do not inherently provide a test-and-set 841 primitive, and consequently, it is possible for race conditions to 842 occur where there is an inconsistent view on where the user is 843 currently registered. 845 6.1.5. Forking 847 The forking model can be applied to exclusive bridging. When a user 848 registers with a server or publishes a presence document to a server, 849 and that server is not serving the user yet, that server begins 850 serving the user. Furthermore, it needs to propagate a change 851 notification to all of the other servers. This can be done using a 852 registration event package; basically each server would subscribe to 853 every other server for reg-event notifications for users they serve. 855 When a subscription or IM request is received at a server, and that 856 server doesn't serve the target user, it forks the subscription or IM 857 to all other servers. If the user is currently registered somewhere, 858 one will accept, and the others will reject with a 404. If the user 859 is registered nowhere, all of the servers generate a 404. If the request is 860 a subscription, the server that received it would 'hold' the 861 subscription, and then subscribe for the reg-event package on every 862 other server for the target user. Once the target user registers 863 somewhere, the server holding the subscription gets a notification 864 and can propagate it to the new target server. 866 Like the P2P solution, the forking solution lacks an effective test- 867 and-set mechanism, and it is therefore possible that there could be 868 inconsistent views on which server is handling a user. 870 6.2. Policy 872 In the exclusive bridging model, policy becomes more complicated. In 873 the partitioned model, a user had their presence and IM managed by 874 the same server all of the time. Thus, their policy can be 875 provisioned and executed there. With exclusive bridging, a user can 876 freely move back and forth between servers. Consequently, the policy 877 for a particular user may need to execute on multiple different 878 servers over time. 880 The simplest solution is just to require the user to separately 881 provision and manage policies on each server. In many of the use 882 cases above, exclusive bridging is a transient situation that 883 eventually settles into partitioned bridging. Thus, it may not be 884 unreasonable to require the user to manage both policies during the 885 transition. It is also possible that each server provides different 886 capabilities, and thus a user will receive different service 887 depending on which server they are connected to. Again, this may be 888 an acceptable limitation for the use cases it supports. 890 6.3. Presence Data 892 As with the partitioned model, in the exclusive model, the presence 893 data for a user resides on a single server at any given time. This 894 server owns all composition policies and procedures for collecting 895 and distributing presence data. 897 6.4. Conversation Consistency 899 Because a user receives all of their IM on a single server at a time, 900 there aren't issues with seeing a coherent conversation for the 901 duration that a user is associated with that server. 903 However, if a user has sessions in progress while they move from one 904 server to another, it is possible that IMs can be misrouted or 905 dropped, or delivered out of order. Fortunately, this is a transient 906 event, and given that it is unlikely that a user would actually have 907 in-progress IM sessions when they change servers, this may be an 908 acceptable limitation. 910 However, conversation history may be more troubling. IM message 911 history is often stored both in clients (for context of past 912 conversations, search, etc.) and in servers (for the same reasons, in 913 addition to legal requirements for data retention).
If a user 914 changes servers, some of their past conversations will be stored on 915 one server, and some on another. Any kind of search or query 916 facility provided amongst the server-stored messages would need to 917 search amongst all of the servers to find the data. 919 7. Unioned 921 In the unioned model, each user is actually served by more than one 922 presence server at a time. In this case, "served" implies two 923 properties: 925 o A user is served by a server when that user is provisioned on that 926 server, and 928 o That server is authoritative for some piece of presence state 929 associated with that user or responsible for some piece of 930 registration state associated with that user, for the purposes of 931 IM delivery 933 In essence, in the unioned model, a user's presence and registration 934 data is distributed across many presence servers, while in the 935 partitioned and exclusive models, it is centralized in a single server. 936 Furthermore, it is possible that the user is provisioned with 937 different identifiers on each server. 939 This definition speaks specifically to ownership of dynamic data - 940 presence and registration state - as the key property. This rules 941 out several cases which involve a mix of servers within the 942 enterprise, but do not constitute intra-domain unioned bridging: 944 o A user utilizes an outbound SIP proxy from one vendor, which 945 connects to a presence server from another vendor. Even though 946 this will result in presence subscriptions, notifications, and IM 947 requests flowing between servers, and the user is potentially 948 provisioned on both, there is no authoritative presence or 949 registration state in the outbound proxy, and so this is not 950 intra-domain bridging. 952 o A user utilizes a Resource List Server (RLS) from one vendor, 953 which holds their buddy list, and accesses presence data from a 954 presence server from another vendor. This case is actually the 955 partitioned case, not the unioned case. Effectively, the buddy 956 list itself is another "user", and it exists entirely on one 957 server (the RLS), while the actual users on the buddy list exist 958 entirely within another. Consequently, this case does not have 959 the property that a single presence resource exists on multiple 960 servers at the same time. 962 o A user subscribes to the presence of a presentity. This 963 subscription is first passed to their presence server, which acts 964 as a proxy, and instead sends the subscription to the UA of the 965 user, which acts as a presence edge server. In this model, it may 966 appear as if there are two presence servers for the user (the 967 actual server and their UA). However, the server is acting as a 968 proxy in this case - there is only one source of presence 969 information. For IM, there is only one source of registration 970 state - the server. Thus, this model is partitioned, but with 971 different servers owning IM and presence. 973 The unioned model arises naturally when a user is using devices from 974 different vendors, each of which has its own respective server, or 975 when a user is using different servers for different parts of their 976 presence state. For example, Figure 7 shows the case where a single 977 user has a mobile client connected to server one and a desktop client 978 connected to server two.
980 alice@example.com alice@example.com 981 +------------+ +------------+ 982 | | | | 983 | | | | 984 | Server |--------------| Server | 985 | 1 | | 2 | 986 | | | | 987 | | | | 988 +------------+ +------------+ 989 \ / 990 \ / 991 \ / 992 \ / 993 \ / 994 \ / 995 \...................../....... 996 \ / . 997 .\ / . 998 . \ | +--------+ . 999 . | |+------+| . 1000 . +---+ || || . 1001 . |+-+| || || . 1002 . |+-+| |+------+| . 1003 . | | +--------+ . 1004 . | | /------ / . 1005 . +---+ /------ / . 1006 . --------/ . 1007 . . 1008 ............................. 1010 Alice 1012 Figure 7: Unioned Case 1 1014 As another example, a user may have two devices from the same vendor, 1015 both of which are associated with a single presence server, but that 1016 presence server has incomplete presence state about the user. 1017 Another presence server in the enterprise, due to its access to state 1018 for that user, has additional data which needs to be accessed by the 1019 first presence server in order to provide a comprehensive view of 1020 presence data. This is shown in Figure 8. This use case tends to be 1021 specific to presence. 1023 alice@example.com alice@example.com 1024 +------------+ +------------+ 1025 | | | | 1026 | Presence | | Presence | 1027 | Server |--------------| Server | 1028 | 1 | | 2 | 1029 | | | | 1030 | | | | 1031 +------------+ +------------+ 1032 ^ | | 1033 | | | 1034 | | | 1035 ///-------\\\ | | 1036 ||| specialized ||| | | 1037 || state || | | 1038 \\\-------/// | | 1039 ............................. 1040 . | | . 1041 . | | +--------+ . 1042 . | |+------+| . 1043 . +---+ || || . 1044 . |+-+| || || . 1045 . |+-+| |+------+| . 1046 . | | +--------+ . 1047 . | | /------ / . 1048 . +---+ /------ / . 1049 . --------/ . 1050 . . 1051 . . 1052 ............................. 1053 Alice 1055 Figure 8: Unioned Case 2 1057 Another use case for unioned bridging is subscriber moves. Consider 1058 a domain which uses multiple servers, typically running in a 1059 partitioned configuration. The servers are organized regionally so 1060 that each user is served by a server handling their region. A user 1061 is moving from one region to a new job in another, while retaining 1062 their SIP URI. In order to provide a smooth transition, ideally the 1063 system would provide a "make before break" functionality, allowing 1064 the user to be added onto the new server prior to being removed from 1065 the old. During the transition period, especially if the user had 1066 multiple clients to be moved, they can end up with state existing on 1067 both servers at the same time. 1069 Another use case for unioned bridging is multiple providers. 1070 Consider a user in an enterprise, alice@example.com. Example.com has 1071 a presence server deployed for all of its users. In addition, Alice 1072 uses a public IM and presence provider. Alice would like users 1073 who connect to the public provider to see presence state that comes from 1074 example.com, and vice versa. Interestingly, this use case isn't 1075 intra-domain bridging at all, but rather, unioned inter-domain 1076 federation. 1078 7.1. Hierarchical Model 1080 The unioned intra-domain bridging model can be realized in one of two ways - 1081 using a hierarchical structure or a peer structure. 1083 In the hierarchical model, presence subscriptions and IM requests for 1084 the target are always routed first to one of the servers - the root.
1085 In the case of presence, the root has the final say on the structure 1086 of the presence document delivered to watchers. It collects presence 1087 data from its child presence servers (through notifications or 1088 publishes received from them) and composes them into the final 1089 presence document. In the case of IM, the root applies IM policy and 1090 then passes the IM onto the children for delivery. There can be 1091 multiple layers in the hierarchical model. This is shown in Figure 9 1092 for presence. 1094 +-----------+ 1095 *-----------* | | 1096 |Auth and |---->| Presence | <--- root 1097 |Composition| | Server | 1098 *-----------* | | 1099 | | 1100 +-----------+ 1101 / --- 1102 / ---- 1103 / ---- 1104 / ---- 1105 V -V 1106 +-----------+ +-----------+ 1107 | | | | 1108 *-----------* | Presence | *-----------* | Presence | 1109 |Auth and |-->| Server | |Auth and |-->| Server | 1110 |Composition| | | |Composition| | | 1111 *-----------* | | *-----------* | | 1112 +-----------+ +-----------+ 1113 | --- 1114 | ----- 1115 | ----- 1116 | ----- 1117 | ----- 1118 | ----- 1119 V --V 1120 +-----------+ +-----------+ 1121 | | | | 1122 *-----------* | Presence | *-----------* | Presence | 1123 |Auth and |-->| Server | |Auth and |-->| Server | 1124 |Composition| | | |Composition| | | 1125 *-----------* | | *-----------* | | 1126 +-----------+ +-----------+ 1128 Figure 9: Hierarchical Model 1130 It is important to note that this hierarchy defines the sequence of 1131 presence composition and policy application, and does not imply a 1132 literal message flow. As an example, consider once more the use case 1133 of Figure 7. Assume that presence server 1 is the root, and presence 1134 server 2 is its child. When Bob's PC subscribes to Bob's buddy list 1135 (on presence server 2), that subscription will first go to presence 1136 server 2. However, that presence server knows that it is not the 1137 root in the hierarchy, and despite the fact that it has presence 1138 state for Alice (who is on Bob's buddy list), it creates a back-end 1139 subscription to presence server 1. Presence server 1, as the root, 1140 subscribes to Alice's state at presence server 2. Now, since this 1141 subscription came from presence server 1 and not Bob directly, 1142 presence server 2 provides the presence state. This is received at 1143 presence server 1, which composes the data with its own state for 1144 Alice, and then provides the results back to presence server 2, 1145 which, having acted as an RLS, forwards the results back to Bob. 1146 Consequently, this flow, as a message sequence diagram, involves 1147 notifications passing from presence server 2, to server 1, back to 1148 server 2. However, in terms of composition and policy, it was done 1149 first at the child node (presence server 2), and then those results 1150 used at the parent node (presence server 1). 1152 7.1.1. Routing 1154 In the hierarchical model, each server needs to be provisioned with 1155 the root, its parent and its child servers for each user it 1156 handles. These relationships could in fact be different on a user- 1157 by-user basis; however, this is complex to manage. In all 1158 likelihood, the parent and child relationships are identical for each 1159 user.
The overall routing algorithm can be described as follows: 1161 o If a SUBSCRIBE is received from the parent node for this 1162 presentity, perform subscriptions to each child node for this 1163 presentity, and then take the results, apply composition and 1164 authorization policies, and propagate to the parent. If a node is 1165 the root, the logic here applies regardless of where the request 1166 came from. 1168 o If an IM request is received from the parent node for a user, 1169 perform IM processing and then proxy the request to each child IM 1170 server for this user. If a node is the root, the logic here 1171 applies regardless of where the request came from. 1173 o If a request is received from a node that is not the parent node 1174 for this presentity, proxy the request to the parent node. This 1175 includes cases where the node that sent the request is a child 1176 node. 1178 This routing rule is relatively simple, and in a two-server system is 1179 almost trivial to provision. Interestingly, it works in cases where 1180 some users are partitioned and some are unioned. When the users are 1181 partitioned, this routing algorithm devolves into the forking 1182 algorithm of Section 5.2.5. This points to the forking algorithm as 1183 a good choice since it can be used for both partitioned and unioned deployments. 1185 An important property of the routing in the hierarchical model is 1186 that the sequence of composition and policy operations for any IM or 1187 presence session is identical, regardless of the watcher or sender of 1188 the IM. The result is that the overall presence state provided to a 1189 watcher, and overall IM behavior, is always consistent and 1190 independent of the server the client is connected to. We call this 1191 property the *consistency property*, and it is an important metric in 1192 assessing the correctness of a federated presence and IM system.
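The routing rule above can be rendered as a small sketch (Python; illustrative only - the node names and interface are hypothetical, and this document defines no such API):

<CODE BEGINS>
# Illustrative sketch of hierarchical routing: a node hearing from
# its parent (or the root, hearing from anyone) processes the
# request and fans it down; any other node proxies the request up.

class Node:
    def __init__(self, name, parent=None):
        self.name, self.parent, self.children = name, parent, []
        if parent:
            parent.children.append(self)

    def next_hops(self, came_from):
        """Where a SUBSCRIBE or IM received from 'came_from' goes next."""
        if self.parent is None or came_from is self.parent:
            return self.children      # apply policy/composition, go down
        return [self.parent]          # otherwise, proxy toward the root

root = Node("server1")
child = Node("server2", parent=root)

# Bob's client sends to server2, which proxies to the root ...
print([n.name for n in child.next_hops(came_from="client")])  # ['server1']
# ... and the root processes the request and fans out to its children.
print([n.name for n in root.next_hops(came_from=child)])      # ['server2']
<CODE ENDS>

This matches the message flow described for Figure 7: the child proxies up, and the root's fan-out brings the request back down to the child that holds part of the state.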
   The principal challenge in a unioned model is policy, including
   both authorization and composition policies.  There are three
   potential solutions to the administration of policy in the
   hierarchical model (only two of which apply in the peer model, as
   we'll discuss below): root-only, distributed provisioning, and
   central provisioning.

7.1.2.1.  Root Only

   In the root-only policy model, authorization policy, IM policy,
   and composition policy are applied only at the root of the tree.
   This is shown in Figure 10.

                                +-----------+
                 *-----------*  |           |
                 |           |->|           |  <--- root
                 |  Policy   |  |  Server   |
                 *-----------*  |           |
                                |           |
                                +-----------+
                                /          ---
                               /              ----
                              /                   ----
                             /                        ----
                            V                             -V
                 +-----------+                 +-----------+
                 |           |                 |           |
                 |           |                 |           |
                 |  Server   |                 |  Server   |
                 |           |                 |           |
                 |           |                 |           |
                 +-----------+                 +-----------+
                       |                   ---
                       |                      -----
                       |                           -----
                       V                                --V
                 +-----------+                 +-----------+
                 |           |                 |           |
                 |           |                 |           |
                 |  Server   |                 |  Server   |
                 |           |                 |           |
                 |           |                 |           |
                 +-----------+                 +-----------+

                         Figure 10: Root Only

   As long as a subscription request came from its parent, every
   child presence server would automatically accept the subscription
   and provide notifications containing the full presence state it is
   aware of.  Similarly, any IM received from a parent would simply
   be propagated onward toward the children.  Any composition
   performed by a child presence server would need to be lossless, in
   that it fully combines the source data without loss of
   information, and would also need to be done without any per-user
   provisioning or configuration, operating in a default or
   administrator-provisioned mode of operation.

   The root-only model has the benefit that the user provisions
   policy in a single place (the root).  However, it has the drawback
   that composition and policy processing may be performed very
   poorly.  Presumably, there are multiple presence servers in the
   first place because each of them has a particular speciality.
   That speciality may be lost in the root-only model.  For example,
   if a child server provides geolocation information, the root
   presence server may not have sufficient authorization policy
   capabilities to allow the user to manage how that geolocation
   information is provided to watchers.

7.1.2.2.  Distributed Provisioning

   The distributed provisioning model looks exactly like the diagram
   of Figure 9.  Each server is separately provisioned with its own
   policies, including which users are allowed to watch, what
   presence data they will get, how it will be composed, which IMs
   get blocked, and so on.

   One immediate concern is whether the overall policy processing,
   when performed independently at each server, is consistent, sane,
   and provides reasonable degrees of privacy.  It turns out that it
   can be, if some guidelines are followed.

   For presence, consider basic "yes/no" authorization policies.  Say
   a presentity, Alice, provides an authorization policy on server 1
   in which Bob can see her presence, but provides a policy on
   server 2 in which Bob cannot.  If presence server 1 is the root,
   the subscription is accepted there, but the back-end subscription
   to presence server 2 is rejected.  As long as presence server 1
   then rejects the original subscription, the system provides the
   correct behavior.  This can be turned into a more general rule:

   o  To guarantee privacy safety, if the back-end subscription
      generated by a presence server is denied, that server must deny
      the triggering subscription in turn, regardless of its own
      authorization policies.  This means that a presence server
      cannot send notifications on its own until it has confirmed
      subscriptions from downstream servers.
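   A minimal, non-normative sketch of this rule (all names are
   hypothetical):

      # Non-normative sketch of the privacy-safety rule: any
      # downstream denial forces this server to deny the triggering
      # subscription, regardless of its own authorization decision.

      def decide(local_allows, backend_decisions):
          """local_allows: this server's own yes/no decision.
          backend_decisions: results of the back-end subscriptions."""
          if not local_allows or "deny" in backend_decisions:
              return "deny"
          # Only now may this server accept the subscription and
          # begin sending NOTIFY requests of its own.
          return "accept"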
   For IM, basic yes/no authorization policies work in a similar way.
   If any one of the servers has a policy that says to block an IM,
   the IM is not propagated further down the chain.  Whether the
   overall system blocks IMs from a sender depends on the topology.
   If there is no forking in the hierarchy, the system has the
   property that, if a sender is blocked at any server, the sender is
   blocked overall.  However, in tree structures where there are
   multiple children, it is possible that an IM is delivered to some
   downstream clients and not others.

   Things get more complicated when one considers presence
   authorization policies whose job is to block access to specific
   pieces of information, as opposed to blocking a user completely.
   For example, say Alice wants to allow Bob to see her presence, but
   not her geolocation information.  She provisions a rule on
   server 1 that blocks geolocation information, but grants it on
   server 2.  The correct mode of operation is that the overall
   system blocks geolocation from Bob.  But will it?  In fact, it
   will, if a few additional guidelines are followed:

   o  If a presence server adds any information to a presence
      document beyond the information received from its children, it
      must provide authorization policies that govern access to that
      information.

   o  If a presence server does not understand a piece of presence
      data provided by its child, it should not attempt to apply its
      own authorization policies to that information.

   o  A presence server should not add information to a presence
      document that overlaps with information that can be added by
      its parent.  Of course, it is very hard for a presence server
      to know whether information overlaps.  Consequently,
      provisioned composition rules will be required to realize this.

   If these rules are followed, the overall system provides privacy
   safety, and the overall policy applied is reasonable.  This is
   because the rules effectively segment the application of policy
   for a specific piece of data to the server that owns that data.
   For example, consider once more the geolocation use case described
   above, and assume server 2 is the root.  If server 1 has access
   to, and provides, geolocation information in the presence
   documents it produces, then server 1 would be the only one to
   provide authorization policies governing geolocation.  Server 2
   would receive presence documents from server 1 containing (or not
   containing) geolocation, but since it doesn't provide or control
   geolocation, it lets that information pass through.  Thus, the
   overall presence document provided to the watcher will contain
   geolocation if Alice wanted it to, and not otherwise, and the
   controls for access to geolocation would exist only on server 1.

   The second major concern with distributed provisioning is that it
   is confusing for users.  However, in the model described here,
   each server necessarily provides distinct rules, governing the
   information it uniquely provides.  Thus, server 1 would have rules
   about who is allowed to see geolocation, and server 2 would have
   rules about who is allowed to subscribe overall.
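   A non-normative sketch of this segmentation of rules by data
   ownership (the rule tables are hypothetical):

      # Non-normative sketch: each server applies authorization
      # rules only to presence data it owns, and passes through
      # anything it does not understand or control.

      OWNED = {"server1": {"geolocation"},
               "server2": {"basic-status"}}
      RULES = {"server1": {("geolocation", "bob"): "deny",
                           ("geolocation", "carol"): "allow"},
               "server2": {("basic-status", "bob"): "allow"}}

      def filter_document(server, watcher, document):
          out = {}
          for component, value in document.items():
              if component in OWNED[server]:
                  # Owned data: apply the local rule (deny by
                  # default if no rule is provisioned).
                  if RULES[server].get((component, watcher)) == \
                          "allow":
                      out[component] = value
              else:
                  out[component] = value   # not ours: pass through
          return out

   Here server 1 strips Bob's geolocation, and server 2 passes
   whatever it receives through untouched, so the document Bob
   ultimately sees contains no geolocation.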
   Though not ideal, there is certainly precedent for users
   configuring policies on different servers based on the differing
   services provided by those servers.  Users today provision block
   and allow lists on email servers for access to email, and
   separately in IM and presence applications for access to IM.

7.1.2.3.  Central Provisioning

   The central provisioning model is a hybrid between root-only and
   distributed provisioning.  Each server does in fact execute its
   own authorization and composition policies.  However, rather than
   the user provisioning them independently in each place, there is
   some kind of central portal where the user provisions the rules,
   and that portal generates policies for each specific server based
   on the data that the corresponding server provides.  This is shown
   in Figure 11.

                  +---------------------+
                  | provisioning portal |
                  +---------------------+
                    .    .    .    .    .
                    .    .    .    .    .
          (a dotted provisioning link runs from the portal
           to each Auth and Composition module below)

                                +-----------+
                 *-----------*  |           |
                 |Auth and   |->| Presence  |  <--- root
                 |Composition|  |  Server   |
                 *-----------*  |           |
                                |           |
                                +-----------+
                                /          ---
                               /              ----
                              /                   ----
                             /                        ----
                            V                             -V
                 +-----------+                 +-----------+
                 |           |                 |           |
  *-----------*  | Presence  |  *-----------*  | Presence  |
  |Auth and   |->|  Server   |  |Auth and   |->|  Server   |
  |Composition|  |           |  |Composition|  |           |
  *-----------*  |           |  *-----------*  |           |
                 +-----------+                 +-----------+
                       |                   ---
                       |                      -----
                       |                           -----
                       V                                --V
                 +-----------+                 +-----------+
                 |           |                 |           |
  *-----------*  | Presence  |  *-----------*  | Presence  |
  |Auth and   |->|  Server   |  |Auth and   |->|  Server   |
  |Composition|  |           |  |Composition|  |           |
  *-----------*  |           |  *-----------*  |           |
                 +-----------+                 +-----------+

                    Figure 11: Central Provisioning

   Central provisioning combines the benefit of root-only (a single
   point of user provisioning) with that of distributed provisioning
   (use of the full capabilities of all servers).  Its principal
   drawback is that it requires another component - the portal -
   which can represent the union of the authorization policies
   supported by each server, and then delegate those policies to each
   corresponding server.

   The other drawback of central provisioning is that it assumes
   completely consistent policy decision making on each server.
   There is a rich set of possible policy decisions that can be taken
   by servers, and this is often an area of differentiation.

7.1.2.4.  Centralized PDP

   The central provisioning model assumes that there is a single
   point of policy administration, but independent decision making at
   each presence and IM server.  This only works in cases where the
   decision function - the policy decision point (PDP) - is identical
   in each server.

   An alternative model is to utilize a single point of policy
   administration and a single point of policy decision making.
Each 1458 presence server acts solely as an enforcement point, asking the 1459 policy server (through a policy protocol of some sort) how to handle 1460 the presence or IM. The policy server then comes back with a policy 1461 decision - whether to proceed with the subscription or IM, and how to 1462 filter and process it. This is shown in Figure 12. 1464 +------------+ +---------------+ 1465 |Provisioning|=====>|Policy Decision| 1466 | Portal | | Point (PDP) | 1467 +------------+ +---------------+ 1468 # # # # # 1469 ################### # # # ########################### 1470 # # # # # 1471 # ######## # #################### # 1472 # # +-----------+ # # 1473 # # | | # # 1474 # # | | .... root # # 1475 # # | Server | # # 1476 # # | | # # 1477 # # | | # # 1478 # # +-----------+ # # 1479 # # / --- # # 1480 # # / ---- # # 1481 # # / ---- # # 1482 # # / ---- # # 1483 # # V -V# # 1484 # +-----------+ +-----------+ # 1485 # | | | | # 1486 # | | | | # 1487 # | Server | | Server | # 1488 # | | | | # 1489 # | | | | # 1490 # +-----------+ +-----------+ # 1491 # | --- # 1492 # | ----- # 1493 # | ----- # 1494 # | ----- # 1495 # | ----- # 1496 # | ----- # 1497 # V --V # 1498 # +-----------+ +-----------+ # 1499 # | | | | # 1500 #######| | | | # 1501 | Server | | Server |### 1502 | | | | 1503 | | | | 1504 +-----------+ +-----------+ 1506 ===== Provisioning Protocol 1508 ##### Policy Protocol 1510 ----- SIP 1512 Figure 12: Central PDP 1514 The centralized PDP has the benefits of central provisioning, and 1515 consistent policy operation, and decouples policy decision making 1516 from presence and IM processing. This decoupling allows for multiple 1517 presence and IM servers, but still allows for a single policy 1518 function overall. The individual presence and IM servers don't need 1519 to know about the policies themselves, or even know when they change. 1520 Of course, if a server is caching the results of a policy decision, 1521 change notifications are required from the PDP to the server, 1522 informing it of the change (alternatively, traditional TTL-based 1523 expirations can be used if delay in updates are acceptable). 1525 For the centralized and distributed provisioning approaches, and the 1526 centralized decision approach, the hierarchical model suffers overall 1527 from the fact that the root of the policy processing may not be tuned 1528 to the specific policy needs of the device that has subscribed. For 1529 example, in the use case of Figure 7, presence server 1 may be 1530 providing composition policies tuned to the fact that the device is 1531 wireless with limited display. Consequently, when Bob subscribes 1532 from his mobile device, is presence server 2 is the root, presence 1533 server 2 may add additional data and provide an overall presence 1534 document to the client which is not optimized for that device. This 1535 problem is one of the principal motivations for the peer model, 1536 described below. 1538 7.1.3. Presence Data 1540 The hierarhical model is based on the idea that each presence server 1541 in the chain contributes some unique piece of presence information, 1542 composing it with what it receives from its child, and passing it on. 
   For the overall presence document to be reasonable, several
   guidelines need to be followed:

   o  A presence server must be prepared to receive documents from
      its peer containing information that it does not understand,
      and to apply unioned composition policies that retain this
      information, adding to it the unique information it wishes to
      contribute.

   o  A user interface rendering a presence document provided by its
      presence server must be prepared for any kind of presence
      document compliant to the presence data model, and must not
      assume a specific structure based on the limitations and
      implementation choices of the server to which it is paired.

   If these basic rules are followed, the overall system provides
   functionality equivalent to the combination of the presence
   capabilities of the servers contained within it, which is highly
   desirable.

7.1.4.  Conversation Consistency

   Unioned bridging introduces a particular challenge for
   conversation consistency.  A user with multiple devices attached
   to multiple servers could potentially try to participate in a
   conversation on multiple devices at once.  This clearly poses a
   challenge.  There are really two approaches that produce a
   sensible user experience.

   The first approach simulates the "phone experience" with IM.  When
   a user (say, Alice) sends an IM to Bob, and Bob is a unioned user
   with two devices on two servers, Bob receives that IM on both
   devices.  However, when he "answers" by typing a reply from one of
   those devices, the conversation continues only on that device.
   The other device on the other server receives no further IMs for
   this session - either from Alice or from Bob.  Indeed, the IM
   window on Bob's unanswered device may even disappear to emphasize
   this fact.

   This mode of operation, which we'll call uni-device IM, is only
   feasible with session mode IM; its realization using traditional
   SIP signaling is described in [RFC4975].

   The second mode of operation, called multi-device IM, is more of a
   conferencing experience.  The initial IM from Alice is delivered
   to both of Bob's devices.  When Bob answers on one, that response
   is shown to Alice but is also rendered on Bob's other device.
   Effectively, we have set up an IM conference where each of Bob's
   devices is an independent participant in the conference.  This
   model is feasible with both session and pager mode IM; however,
   conferencing works much better overall with session mode.

   A related challenge is conversation history.  In the uni-device IM
   mode, the past history of a user's conversations may be
   distributed amongst the different servers, depending on which
   clients and servers were involved in each conversation.  As with
   the exclusive model, IM search and retrieval services may need to
   access all of the servers on which a user might be located.  This
   is easier in the unioned case than in the exclusive one, since in
   the unioned case the user's location is on a fixed set of servers
   based on provisioning.  The problem is even more complicated in IM
   page mode when multiple devices are present, due to the
   limitations of page mode in these configurations.

7.2.  Peer Model

   In the peer model, there is no single root.
   When a watcher subscribes to a presentity, that subscription is
   processed first by the server to which the watcher is connected
   (effectively acting as the root), and then the subscription is
   passed to the other presence servers, which act as children.  The
   same goes for IM: when a client sends an IM, the IM is processed
   first by the server associated with the sender (effectively acting
   as the root), and then the IM is passed to the child IM servers.
   In essence, in the peer model there is a per-client hierarchy,
   with the root being a function of the client.  Consider the use
   case in Figure 7.  If Bob has his buddy list on presence server 1,
   and it contains Alice, presence server 1 acts as the root and
   performs a back-end subscription to presence server 2.  However,
   if Joe has his buddy list on presence server 2, and his buddy list
   contains Alice, presence server 2 acts as the root and performs a
   back-end subscription to presence server 1.  Similarly, if Bob
   sends an IM to Alice, it is processed first by server 1 and then
   by server 2.  If Joe sends an IM to Alice, it is processed first
   by server 2 and then by server 1.  This is shown in Figure 13.

       alice@example.com            alice@example.com
        +------------+              +------------+
        |            |<-------------|            |<------+
        |            |              |            |       |
Connect |   Server   |              |   Server   |       |
Alice   |     1      |              |     2      |Connect|
  +---->|            |------------->|            |Alice  |
  |     |            |              |            |       |
  |     +------------+              +------------+       |
  |            \                        /                |
  |             \                      /                 |
  |              \                    /                  |
..|.........   ...\................../.....   ...........|....
.  |        .  .    \               /      .  .          |    .
.  |        .  .     \             /       .  .   +--------+  .
. +---+     .  .      \           /        .  .   |+------+|  .
. |+-+|     .  .    +---+    +--------+    .  .   ||      ||  .
. |+-+|     .  .    |+-+|    |+------+|    .  .   ||      ||  .
. |   |     .  .    |+-+|    ||      ||    .  .   |+------+|  .
. |   |     .  .    |   |    |+------+|    .  .   +--------+  .
. +---+     .  .    |   |    +--------+    .  .    /------ /  .
.           .  .    +---+     /------ /    .  .   /------ /   .
.           .  .              --------/    .  .   --------/   .
.............  .............................  ................

     Bob                   Alice                     Joe

                        Figure 13: Peer Model

   Whereas the hierarchical model clearly provides the consistency
   property, it is not obvious whether a particular deployment of the
   peer model does.  When policy decision making is distributed
   amongst the servers, consistency ends up being a function of the
   composition policies of the individual servers.  If Pi()
   represents the composition and authorization policies of server i,
   taking as input one or more presence documents provided by its
   children and producing a presence document as output, the overall
   system provides consistency when:

      Pi(Pj()) = Pj(Pi())

   which is effectively the commutativity property.

7.2.1.  Routing

   Routing in the peer model works similarly to the hierarchical
   model.  Each server is configured with the children it has when it
   acts as the root.  The overall presence routing algorithm then
   works as follows:

   o  If a presence server receives a subscription for a presentity
      from a particular watcher, and it already has a different
      subscription (as identified by its dialog identifiers) for that
      presentity from that watcher, it rejects the second
      subscription with an indication of a loop.
      This algorithm does, however, rule out the possibility of two
      instances of the same watcher subscribing to the same
      presentity.

   o  If a presence server receives a subscription for a presentity
      from a watcher and does not yet have one for that pair, it
      processes the subscription and generates back-end subscriptions
      to each configured child.  If a back-end subscription generates
      an error due to a loop, the server proceeds without that
      back-end input.

   The algorithm for IM routing works almost identically.

   For example, consider Bob subscribing to Alice.  Bob's client is
   supported by server 1.  Server 1 has not seen this subscription
   before, so it acts as the root and passes it to server 2.
   Server 2 hasn't seen it before either, so it accepts it (now
   acting as the child) and sends the subscription to its own child,
   which is server 1.  Server 1 has already seen the subscription, so
   it rejects it.  Server 2 now knows that it is the child, and so it
   generates documents with just its own data.

   As in the hierarchical case, it is possible to intermix the
   partitioned and peer models for different users.  In the
   partitioned case, the routing algorithm devolves into the forking
   routing described in Section 5.2.5.  However, intermixing peer and
   exclusive bridging for different users is challenging.  [[OPEN
   ISSUE: need to think about this more.]]

7.2.2.  Policy

   The policy considerations for the peer model are very similar to
   those of the hierarchical model.  However, the root-only policy
   approach makes no sense in the peer model and cannot be utilized.
   The distributed and centralized provisioning approaches apply, and
   the guidelines described above for generating correct results hold
   in the peer model as well.

   The centralized PDP model works particularly well in concert with
   the peer model.  It allows for consistent policy processing
   regardless of the type of rules, and it has the benefit of a
   single point of provisioning.  At the same time, it avoids the
   need to define a single root; indeed, there is little benefit to
   utilizing the hierarchical model when a centralized PDP is used.

   However, the distributed processing model in the peer model
   eliminates the problem described in Section 7.1.2.3: composition
   and authorization policies may be tuned to the needs of the
   specific device that is connected, and in the hierarchical model
   the wrong server for a particular device may be at the root,
   leaving the resulting presence document poorly suited to the
   consuming device.  This problem is alleviated in the peer model.
   The server that is paired or tuned for a particular user or device
   is always at the root of the tree, and its composition policies
   have the final say in how presence data is presented to the
   watcher on that device.

7.2.3.  Presence Data

   The considerations for presence data and composition in the
   hierarchical model apply in the peer model as well.  The principal
   issue is consistency: whether the overall presence document for a
   watcher is the same regardless of the server from which the
   watcher connects.  As mentioned above, consistency is a property
   of commutativity of composition, which may or may not hold
   depending on the implementation.
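   Using the same toy document model as the earlier sketches, the
   commutativity check can be illustrated as follows (non-normative;
   the sample policies are hypothetical):

      # Non-normative sketch: two composition/authorization policies
      # preserve consistency when they commute on the documents of
      # interest, i.e. Pi(Pj(d)) == Pj(Pi(d)).

      def commutes(p_i, p_j, sample_documents):
          return all(p_i(p_j(d)) == p_j(p_i(d))
                     for d in sample_documents)

      # Hypothetical policies: one strips geolocation, the other
      # adds a note tuple.
      strip_geo = lambda d: {k: v for k, v in d.items()
                             if k != "geo"}
      add_note = lambda d: dict(d, note="in a meeting")

      print(commutes(strip_geo, add_note,
                     [{"geo": "HQ", "status": "open"}]))   # True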
   Interestingly, in the use case of Figure 8, a particular user only
   ever has devices on a single server; thus, the peer and
   hierarchical models end up being the same, and consistency is
   provided.

7.2.4.  Conversation Consistency

   The hierarchical and peer models have no impact on the issue of
   conversation consistency; the problem exists identically for both
   approaches.

8.  Acknowledgements

   The authors would like to thank Paul Fullarton, David Williams,
   Sanjay Sinha, and Paul Kyzivat for their comments.

9.  Security Considerations

   The principal issue in intra-domain bridging is privacy.  It is
   important that the system meet user expectations and, even in
   cases of user provisioning errors or inconsistencies, provide
   appropriate levels of privacy.  This is an issue in the unioned
   models, where user privacy policies can exist on multiple servers
   at the same time.  The guidelines described here for authorization
   policies help ensure that privacy properties are maintained.

10.  IANA Considerations

   There are no IANA considerations associated with this
   specification.

11.  Informative References

   [RFC2778]  Day, M., Rosenberg, J., and H. Sugano, "A Model for
              Presence and Instant Messaging", RFC 2778,
              February 2000.

   [RFC3863]  Sugano, H., Fujimoto, S., Klyne, G., Bateman, A., Carr,
              W., and J. Peterson, "Presence Information Data Format
              (PIDF)", RFC 3863, August 2004.

   [RFC4479]  Rosenberg, J., "A Data Model for Presence", RFC 4479,
              July 2006.

   [RFC3856]  Rosenberg, J., "A Presence Event Package for the
              Session Initiation Protocol (SIP)", RFC 3856,
              August 2004.

   [RFC4662]  Roach, A., Campbell, B., and J. Rosenberg, "A Session
              Initiation Protocol (SIP) Event Notification Extension
              for Resource Lists", RFC 4662, August 2006.

   [RFC3944]  Johnson, T., Okubo, S., and S. Campos, "H.350 Directory
              Services", RFC 3944, December 2004.

   [RFC3325]  Jennings, C., Peterson, J., and M. Watson, "Private
              Extensions to the Session Initiation Protocol (SIP) for
              Asserted Identity within Trusted Networks", RFC 3325,
              November 2002.

   [RFC3680]  Rosenberg, J., "A Session Initiation Protocol (SIP)
              Event Package for Registrations", RFC 3680, March 2004.

   [RFC3428]  Campbell, B., Rosenberg, J., Schulzrinne, H., Huitema,
              C., and D. Gurle, "Session Initiation Protocol (SIP)
              Extension for Instant Messaging", RFC 3428,
              December 2002.

   [RFC4975]  Campbell, B., Mahy, R., and C. Jennings, "The Message
              Session Relay Protocol (MSRP)", RFC 4975,
              September 2007.

   [RFC3261]  Rosenberg, J., Schulzrinne, H., Camarillo, G.,
              Johnston, A., Peterson, J., Sparks, R., Handley, M.,
              and E. Schooler, "SIP: Session Initiation Protocol",
              RFC 3261, June 2002.

   [I-D.ietf-speermint-consolidated-presence-im-usecases]
              Houri, A., "Presence & Instant Messaging Peering Use
              Cases", draft-ietf-speermint-consolidated-presence-im-
              usecases-05 (work in progress), July 2008.

   [I-D.ietf-simple-view-sharing]
              Rosenberg, J., Donovan, S., and K. McMurry, "Optimizing
              Federated Presence with View Sharing",
              draft-ietf-simple-view-sharing-01 (work in progress),
              July 2008.
Authors' Addresses

   Jonathan Rosenberg
   Cisco
   Iselin, NJ
   US

   Email: jdrosen@cisco.com
   URI:   http://www.jdrosen.net

   Avshalom Houri
   IBM
   Science Park, Rehovot
   Israel

   Email: avshalom@il.ibm.com

   Colm Smyth
   Avaya
   Dublin 18, Sandyford Business Park
   Ireland

   Email: smythc@avaya.com

   Francois Audet
   Nortel
   4655 Great America Parkway
   Santa Clara, CA  95054
   USA

   Email: audet@nortel.com

Full Copyright Statement

   Copyright (C) The IETF Trust (2008).

   This document is subject to the rights, licenses and restrictions
   contained in BCP 78, and except as set forth therein, the authors
   retain all their rights.

   This document and the information contained herein are provided on
   an "AS IS" basis and THE CONTRIBUTOR, THE ORGANIZATION HE/SHE
   REPRESENTS OR IS SPONSORED BY (IF ANY), THE INTERNET SOCIETY, THE
   IETF TRUST AND THE INTERNET ENGINEERING TASK FORCE DISCLAIM ALL
   WARRANTIES, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO ANY
   WARRANTY THAT THE USE OF THE INFORMATION HEREIN WILL NOT INFRINGE
   ANY RIGHTS OR ANY IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS
   FOR A PARTICULAR PURPOSE.

Intellectual Property

   The IETF takes no position regarding the validity or scope of any
   Intellectual Property Rights or other rights that might be claimed
   to pertain to the implementation or use of the technology
   described in this document or the extent to which any license
   under such rights might or might not be available; nor does it
   represent that it has made any independent effort to identify any
   such rights.  Information on the procedures with respect to rights
   in RFC documents can be found in BCP 78 and BCP 79.

   Copies of IPR disclosures made to the IETF Secretariat and any
   assurances of licenses to be made available, or the result of an
   attempt made to obtain a general license or permission for the use
   of such proprietary rights by implementers or users of this
   specification can be obtained from the IETF on-line IPR repository
   at http://www.ietf.org/ipr.

   The IETF invites any interested party to bring to its attention
   any copyrights, patents or patent applications, or other
   proprietary rights that may cover technology that may be required
   to implement this standard.  Please address the information to the
   IETF at ietf-ipr@ietf.org.

Acknowledgment

   Funding for the RFC Editor function is provided by the IETF
   Administrative Support Activity (IASA).