PIM                                                      C. Bestler, Ed.
Internet-Draft                                                   Nexenta
Intended status: Experimental                                  R. Novack
Expires: September 24, 2015                               March 23, 2015

              Creation of Transactional Multicast Groups
               draft-bestler-transactional-multicast-00

Abstract

This memo presents techniques for controlling the membership of multicast groups which are constrained to be a subset of a pre-existing multicast group, where such subset groups are used only for short-duration transactions which are multicast to a subset of the larger multicast group.  Further, the membership of these transactional groups is pushed by the sender rather than being pulled by the receiver.  These groups could be called Transactional Subset Push Multicast Groups, but that label would be a bit long.

Editor's Note

The proper working group for this draft has not yet been determined.  Alternate working groups include TSVWG and INT.

Nexenta has been developing a multicast-based transport/storage protocol for Object Clusters.  This protocol applies multicast datagrams to the creation and replication of Objects such as those supported by the Amazon Simple Storage Service ("S3") protocol or the OpenStack Object Storage service ("Swift").
Creating replicas of object payload on multiple servers is an inherent part of any storage cluster, which makes multicast addressing very inviting.  There are issues of congestion control and reliability to settle, but new Layer 2 capabilities such as DCB (Data Center Bridging) make this doable.

However, we found that the existing standard protocols for controlling multicast group membership (IGMP and MLD) are not suitable for our storage application.  The Authors doubt that this is unique to a single application.  It should apply to many clusters that need to distribute transactional messages to dynamically selected subsets of known recipients within a larger group in the cluster.

Computational clusters using MPI are also potential users of transactional multicasting.  Inter-server replication in a pNFS cluster is another.

These are just examples of synchronizing cluster data where the synchronization does not replicate all of the shared data across the entire cluster.  But these are merely initial hunches; working group feedback is expected to refine the characterization of the applicability of transactional multicast groups.

This submission, and the ensuing discussion of this draft and its successors, will make reference to specific applications, including the Nexenta Replicast protocol for multicast replication in Nexenta's Cloud Copy-on-Write (CCOW) Object Cluster used in the NexentaEdge product.  Such examples are merely for illustrative purposes.  Any IETF standardization of the Replicast storage protocols would be done via the Storm or NFS groups, and would require adoption of a definition of Object Storage as a service before standardizing any specific protocol for providing Object Storage services.

At this stage in drafting, message formats have not yet been set for the standardized version of the protocol.  The pre-standard version was limited to a single L2 physical network, which would be an inappropriate limitation for an IETF standard.  Working Group feedback on the format of these messages will be sought during the consensus building process.

Status of This Memo

This Internet-Draft is submitted in full conformance with the provisions of BCP 78 and BCP 79.

Internet-Drafts are working documents of the Internet Engineering Task Force (IETF).  Note that other groups may also distribute working documents as Internet-Drafts.  The list of current Internet-Drafts is at http://datatracker.ietf.org/drafts/current/.

Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time.  It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress."

This Internet-Draft will expire on September 24, 2015.

Copyright Notice

Copyright (c) 2015 IETF Trust and the persons identified as the document authors.  All rights reserved.

This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents (http://trustee.ietf.org/license-info) in effect on the date of publication of this document.  Please review these documents carefully, as they describe your rights and restrictions with respect to this document.
Code Components extracted from this document must include Simplified BSD License text as described in Section 4.e of the Trust Legal Provisions and are provided without warranty as described in the Simplified BSD License.

Table of Contents

   1. Introduction
      1.1. Requirements Notation
   2. Motivation
   3. An Example Application
   4. Generalized Usage of Transactional Multicast Groups
   5. Transactional Multicast Groups
      5.1. Definition
           5.1.1. Dynamic Specification versus Dynamic Selection
           5.1.2. Push vs. Join
      5.2. Applicability
           5.2.1. How is the Group Selected?
           5.2.2. What are the endpoints that receive the messages?
           5.2.3. What is the duration of the group?
           5.2.4. Who are the members of the group?
           5.2.5. How much latency does the application tolerate?
           5.2.6. What must be done to maintain the Group?
   6. Forwarding Control Agent
      6.1. Network Topology
      6.2. Isolated VLANs Strategy
   7. Forwarding Control Agent Methods
      7.1. Dynamically Pushed Transactional Groups
      7.2. Persistent Transactional Groups
   8. Relationship to Existing Multicast Membership Protocols
   9. Control Protocol
   10. Forwarding Control Agent Methods
       10.1. Create Transactional Multicast Address Block
       10.2. Release Transactional Multicast Address Block
       10.3. Set Dynamic Transactional Multicast Group Membership IPV6
       10.4. Set Dynamic Transactional Multicast Group Membership IPV4
       10.5. Set Persistent Transactional Multicast Groups IPv6
       10.6. Set Persistent Transactional Multicast Groups IPv4
       10.7. Refresh Persistent Transactional Multicast Group
   11. Operating With Just Dynamic Selection
   12. Security Considerations
   13. IANA Considerations
   14. Summary
   15. References
       15.1. Informative References
       15.2. Normative References
   Authors' Addresses

1. Introduction

Existing standards for controlling the membership of multicast groups can be characterized as being Join-driven.  These include [RFC3376], [RFC3810], [RFC4541] and [RFC4604].  Due to their inherent latency these techniques prove to be unsuitable for maintaining large sets of related multicast groups.
This memo details a new method of maintaining such large sets of related multicast groups when they are all subsets of a single master reference group.  This is not a restriction for most cluster-oriented applications which could use transactional multicasting.

Transactional Multicasting defines techniques that extend existing control of a reference multicast group to a potentially large set of multicast addresses used within a VLAN in each local subnet that the reference multicast group reaches.

This specification makes no modifications to the forwarding of multicast packets nor to the communications between mrouters.  New methods are defined to set Layer 2 multicast forwarding rules on switches within each of the relevant Layer 2 subnets.

1.1. Requirements Notation

The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in RFC 2119 [RFC2119].

2. Motivation

Transactional Multicast groups are maintained within each VLAN.  A 'Forwarding Control Agent' is defined within each VLAN that is responsible for applying the forwarding information known for a reference multicast group to efficiently set Layer 2 multicast forwarding rules within each local network.

The functionality of the Forwarding Control Agent is best understood as extending the functionality of IGMP/MLD Snooping (see [RFC4541]).

An IGMP/MLD snooper interprets IGMP (see [RFC3376]) or MLD (see [RFC3810]) messages to translate their Layer 3 objectives into Layer 2 multicast forwarding rules.

A Forwarding Control Agent translates the new messages defined in this specification, for a newly defined class of transactional multicast groups, into the same Layer 2 multicast forwarding rules as used by existing IGMP/MLD snoopers.  Strategies for implementing Forwarding Control Agents include extending existing IGMP/MLD snooping implementations or building the Forwarding Control Agent external to the existing L2 switch software.

The per-transaction costs of using such groups are far lower than with the existing methods.  The ongoing maintenance work for multicast forwarding elements is limited to the reference multicast group; the work is not replicated for each of the subset transactional multicast groups.

3. An Example Application

The Replicast (see [Replicast]) usage of transactional multicasting involves:

o  Taking a Cryptographic Hash of each chunk to be stored.  This "hash id" is used with a distributed hash table to determine a conventional multicast group which will be used to negotiate placement of the chunk.  This is the reference multicast group.  Replicast refers to it as a "Negotiating Group".

o  Multicasting a request to put the chunk to the reference multicast group.  Receiving storage nodes will respond with a bid on when they could store that chunk, or an indication that they already have that chunk stored.  Each of the storage nodes making a bid is offering a provisional reservation of its input capacity for a specific time window.

o  Assuming that the chunk is not already stored, selecting the best responses to make a transactional group.
   Determination of 'best' is typically driven by the earliest possible completion of the transaction, but may also factor in the current available storage capacity on each of the storage nodes.

o  Form or select a "rendezvous group" which will be used to multicast the chunk.  When the core network is non-blocking, the transfer will be able to proceed at close to full wire speed at the reserved time because each of the selected storage nodes has reserved its input capacity for bulk payload exclusively.  A multicast message to the reference group informs both those selected and those not selected for the rendezvous transfer.  Those not selected will release the provisional reservation.

o  At the designated time, multicast the chunk payload to the transactional multicast group.

o  Each recipient validates the cryptographic hash of the received data, and unicasts a positive or negative acknowledgement to the sender.

o  If sufficient valid copies have been positively acknowledged, the transaction is complete.  Otherwise it is retried.

Replicast can further apply the techniques described in this document to form the Negotiating Groups themselves, because they too are subsets of a cluster-wide reference multicast group.  This is an optional optimization, however, as the required speed for forming Negotiating Groups does not preclude the use of conventional IGMP/MLD techniques.

4. Generalized Usage of Transactional Multicast Groups

Beyond any specific application, the generalized potential for dramatic savings is that transactional messaging within a cluster is a radically different use-case from traditional multicast.

The set of factors that differentiates this class of applications can be examined through a series of questions:

o  How is the group selected?  (Section 5.2.1)

o  What are the endpoints that receive the messages?  (Section 5.2.2)

o  What is the duration of the group?  (Section 5.2.3)

o  Who are the potential members of the group?  (Section 5.2.4)

o  How much latency does the application tolerate?  (Section 5.2.5)

o  What must be done to maintain the group?  (Section 5.2.6)

5. Transactional Multicast Groups

5.1. Definition

A Transactional Multicast Group is a multicast group which:

o  Is derived from a pre-existing multicast group created by means independent of this standard.  These means include SNMP management of multicast forwarding elements as well as IGMP/MLD methods.  The membership of this derived group is a subset of the existing reference multicast group.

o  Has a multicast group address which is part of a block allocated for transactional multicast groups.  This block only needs to be allocated for use within a single VLAN.

o  Will only be used for the duration of a transaction.  A network failure or re-configuration during the transaction will require an upper layer retry of the transaction.  Transactional Multicast groups are not suitable for streaming of content.  Transactional multicast groups may be persistent, in that the same group continues to exist and be used for a series of transactions, but each datagram sent to the group is part of a single short-duration transaction.  Retransmission, if any, is the responsibility of the application layer.
   Typically, the transport layer will not support identifying which part of a transaction was not received, but there may be a checksum or fingerprint of the entire message covering payload encoded in multiple datagrams.

5.1.1. Dynamic Specification versus Dynamic Selection

There are two basic strategies for managing the membership of transactional multicast groups:

o  Dynamic Specification: The selected members join a group that has been dynamically configured for the transaction.

o  Dynamic Selection: A pre-existing group is selected to match the subset desired.  That group is allocated for this purpose and used for the transaction.

These two strategies can also be combined to form a hybrid strategy.  If there is a pre-existing group for the desired membership list it is allocated and used; otherwise an available group is allocated and re-configured to have the required membership.

5.1.2. Push vs. Join

Existing methods for managing the membership of a multicast group can be characterized as Join protocols.  The receivers may join the group, or subscribe to a specific source within a group, but the receivers of multicast messages control their reception of multicast messages.

This model is well suited for multimedia transmission where the sender does not necessarily know the full set of endpoints receiving its multicast content.  In many cluster applications the sender has determined the set of receivers.  Requiring the sender to communicate with the recipients so that they can Join the group adds latency to the entire transaction.

However, there would be a serious security concern if sender-pushed group membership were not constrained.  Requiring that every member of a subset multicast group already be a member of a reference multicast group ensures that no new method of sending traffic is being created.  Without this guarantee a denial-of-service attacker could simply push a multicast group membership listing 1000 members, then flood that multicast group.  The amount of traffic delivered to the aggregate destinations would be multiplied by a factor of 1000.

Transactional multicasting is defined to eliminate the latency required for Join-directed multicast group membership, while avoiding creating a new attack vector for denial-of-service flooding.

5.2. Applicability

Transactional Multicast Groups are applicable for applications that want to reduce overall latency by reducing the number of round-trips required for their transactions when identical content must be delivered to multiple cluster members, but the selected members are a dynamically selected subset of a larger group.

Parallel processing of payload and/or storage of payload are the primary examples of such a pattern of communications.

Examples of such applications include:

o  Computational Clusters, particularly those using MPI (see [MPI])

o  Storage applications, including:

   *  pNFS (see [RFC5661]).

   *  Amazon Simple Storage Service (S3) (see [AmazonS3]).

   *  OpenStack Object Storage (Swift) (see [Swift]).

Dynamic selection of subsets ultimately enables multiple concurrent transfers to occur, which would not have been possible if the message had been sent to the entire reference multicast group.
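
As a purely illustrative sketch of how a sender might combine the two strategies of Section 5.1.1 when dynamically selecting a subset, consider the following Python fragment.  The 'persistent_index', 'allocator' and 'fca' objects are hypothetical stand-ins for application state and the Forwarding Control Agent interface; they are not defined by this specification.

   <CODE BEGINS>
   # Illustrative only; 'persistent_index', 'allocator' and 'fca' are
   # hypothetical objects, not part of this specification.
   def group_for_subset(targets, persistent_index, allocator, fca):
       key = tuple(sorted(targets))
       if key in persistent_index:
           # Dynamic Selection: a pre-enumerated group already matches
           # the desired membership.
           return persistent_index[key]
       # Dynamic Specification: allocate a dynamic group and push its
       # membership to the Forwarding Control Agent.
       group = allocator.allocate_dynamic_group()
       fca.push_membership(group, targets)
       return group
   <CODE ENDS>
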
Applications with relatively small payloads to be multicast may find it easier to use simple multicast and slightly over-deliver the message.

Transactional Multicast Groups simplify the problem to be solved, compared to existing multicast protocols, in that they are tailored for use within a fully known cluster with a finite number of receivers that is known prior to, and is unchanged for, the duration of a transaction.

5.2.1. How is the Group Selected?

In Join-directed multicasting the membership of a multicast group is controlled by the listeners joining and leaving the group.  The sender does not control or even know the recipients.  This matches the multicast streaming use-case very well.  However it does not match a cluster that needs to distribute a transactional message to an enumerated subset of a known cluster.

The target group is also assumed to be stable for a long sequence of packets, such as sending large chunks of a file.  The targeted applications instead direct transactions to a subset of a stable group.

One example of the need to distribute a transactional message to a subset of a known cluster is replication of data within an object storage cluster.  A set of targets has been selected through a higher layer protocol.  Join-directed group setup here adds excessive latency to the process.  The targets must be informed of their selection, and they must execute IGMP joins and confirm their joining to the source before the multicast delivery can begin.  While this does not greatly reduce the bandwidth available through a network, it adds considerable latency for any given transfer.  Only replication of large storage assets can tolerate this setup penalty.

A distributed computation may similarly have data that is relevant to a specific set of recipients within the cluster.  Performing the distribution serially to each target over unicast point-to-point connections uses excessive bandwidth and increases the transaction's latency.  It is also undesirable to incur the latency of Join-driven multicast group setup.

This specification creates two methods for a sender to form or select a multicast group for transactional purposes.  With these methods no further transmissions are required from the selected targets until the full transfer is complete.

The restriction that the targeted group must be a subset of an existing multicast group is necessary to prevent a denial-of-service flooding attack.  Transactional multicast groups that were not restricted to being a subset of an existing multicast group could be used to flood a large number of targets that were unprepared to process incoming multicast datagrams.

5.2.2. What are the endpoints that receive the messages?

The endpoints of the transactional messages may be higher layer entities, where each network endpoint supports multiple instances of the higher layer entities.  For example, a storage application may have IP addresses associated with specific virtual drives, as opposed to an IP address associated with a server that hosts multiple virtual drives.

Having an IP address for each drive makes migrating control over that drive to a new server easier, and allows the servers to direct incoming payload to the correct drive.

5.2.3. What is the duration of the group?

Join-directed multicasting is well designed for the multicast streaming use-case.  A group has an indefinite lifespan, and members come and go at any time during this lifespan without requiring any action by the transmitter.  The duration of the transmission might be measured in minutes, hours or days.

Transactional multicasting is designed to support applications where a transaction lasts for microseconds or milliseconds (possibly even seconds).  Transactional multicasting seeks to identify a multicast group for the duration of sending a set of multicast datagrams related to a specific transaction.  Recipients either receive the entire set of datagrams or they do not.  Multicast streaming frequently transmits error-tolerant content, such as MPEG encoded material.  Transactional multicasting will typically transmit data with some form of validating signature and transaction identifier that allows each recipient to confirm full reception of the transaction.

This obviously needs to be combined with applicable congestion control strategies deployed by the upper layer protocols.  The Nexenta Replicast protocol only does bulk transfers against reserved bandwidth, but there are probably as many solutions to this problem as there are applications.  Replicast relies upon IEEE 802.1 Data Center Bridging (DCB) protocols such as Priority Flow Control and Congestion Notification to provide no-drop service.  The DCB protocols deal with the fine timing of congestion avoidance, but require higher layer transport or application protocols to keep the sustained traffic rates below the sustained capacity.  Creating explicit reservations for bulk transfers is the main method for accomplishing this.

The relevant DCB protocols include:

o  Congestion Notification [IEEE.802.1Qau-2011]

o  Enhanced Transmission Selection [IEEE.802.1Qaz-2011]

o  Priority Flow Control [IEEE.802.1Qbb-2011]

The important distinction between Replicast and conventional multicast applications is that there is no need to dynamically adjust multicast forwarding tables during the lifespan of a transaction, while IGMP and MLD are designed to allow the addition and deletion of members while a multicast group is in use.  This distinction is not unique to any single storage application.  Transactional replication is a common element in cluster protocol design.

The limited duration of a transactional multicast group implies that there is no need for the multicast forwarding element to rebuild its forwarding tables after it restarts.  Any transaction in progress will have failed, and been retried by the higher-layer protocol.  Merely limiting the rate at which it fails and restarts is all that is required of each forwarding element.

Another implication is that there is no need for the forwarding elements to rebuild the membership list of a transactional multicast group after the forwarding element has been reset.  The transactions using the forwarding element will all fail, and be retried by a higher layer transport or application protocol.  Assuming that forwarding elements do not reset multiple times a minute, this will have very limited impact on overall application throughput.

The duration of a transaction is application specific, but inherently limited.
A failed transaction will be retried at the application layer, so it obviously has a duration measured in seconds at the longest.

5.2.4. Who are the members of the group?

Join-directed multicasting allows any number of recipients to join or leave a group at will.

Transactional multicasting requires that the group be identified as a small subset of a pre-existing multicast group.

Building forwarding rules that are a subset of the forwarding rules for an existing multicast group can be done substantially faster than creating forwarding rules to arbitrary and potentially previously unknown destinations.

Some applications, including Object Clusters, benefit from considering the members to be higher layer entities (such as virtual drives) rather than simply being the base IP addresses of the servers that host the higher layer entities.  Doing so allows groups to be defined for each set of logical endpoints, not merely sets of physical endpoints.  An Object Cluster, for example, could have two different groups ([A,B,C] vs [A,B,D]) even when the destinations resolve to the same Layer 2 MAC addresses (i.e., C and D are hosted by the same server).  This allows the server hosting both C and D to distinguish which entity is addressed using the Destination IP Address.

5.2.5. How much latency does the application tolerate?

While no application likes latency, multicast streaming is very tolerant of setup latency.  If the end application is viewing or listening to media, the number of milliseconds required to subscribe to the group will not have a measurable impact on the end user.

For transactions in a cluster, however, every millisecond delays forward progress.  The time it takes to do an IGMP join would be a significant addition to the latency of storing an object in an object cluster using a relatively fast storage technology (such as SSD, Flash or Memristor).

5.2.6. What must be done to maintain the Group?

The Join-directed multicast protocols specify methods for the required maintenance of multicast groups.  Multicast forwarders, switches or mrouters, must deal with new routes and new locations for endpoints.

The reference multicast group will still be maintained by the existing Join-directed multicast group protocols.  The existing IGMP/MLD snooping procedures will keep the L2 multicast forwarding rules updated as changes in the network topology are detected.  Nothing in this specification changes the handling of the reference multicast group.

Transactional multicast groups are defined to be used only for short transactions, allowing them to piggy-back on the maintenance of the reference multicast group.

6. Forwarding Control Agent

The Forwarding Control Agent is responsible for translating forwarding control messages as defined in Section 7 into Layer 2 multicast forwarding for one or more subnets associated with a single physical Layer 2 subnet.

Each Forwarding Control Agent can be thought of as extending the IGMP/MLD snooping capabilities of an L2 forwarding element.  It translates the forwarding control agent messages into configuration of L2 multicast forwarding just as an IGMP/MLD snooper translates IGMP/MLD messages into configuration of Layer 2 multicast forwarding.  This MAY be done external to the existing implementation, or it may be integrated with the IGMP/MLD snooper implementation.
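
The translation can be pictured with a short, non-normative sketch.  The 'switch' handle below is hypothetical; it stands in for whatever interface an implementation has to the controlled forwarding element.  The sketch reflects the two-step translation described in Section 7.1 (IP address to MAC address, then MAC address to forwarding port) and the requirement that the resulting port set remain a subset of the reference group's ports.

   <CODE BEGINS>
   # Illustrative sketch only.  'switch' is a hypothetical handle to
   # the controlled L2 forwarding element within one VLAN.
   def apply_pushed_membership(switch, vlan, txn_group, ref_group,
                               targets):
       # Ports already forwarding the reference multicast group.
       ref_ports = switch.multicast_ports(vlan, ref_group)
       ports = set()
       for ip in targets:
           mac = switch.ip_to_mac(vlan, ip)       # IP -> MAC address
           if mac is None:
               continue                           # unknown target: ignored
           port = switch.unicast_port(vlan, mac)  # MAC -> egress port
           if port in ref_ports:                  # stay a subset of the
               ports.add(port)                    # reference group
       # Install the L2 multicast forwarding rule for the subset group.
       switch.set_multicast_ports(vlan, txn_group, ports)
   <CODE ENDS>
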
Each Forwarding Control Agent:

o  MUST accept authenticated forwarding control agent messages controlling the creation and membership of Transactional Multicast Groups within the context of a specified VLAN.

o  MUST support at least one VLAN.

o  MAY support multiple VLANs.

o  MUST update the controlled Layer 2 forwarding element's multicast forwarding rules to reflect the subset specified for the group.

o  MUST update the controlled L2 forwarding element's multicast forwarding rules to reflect changes in the mapping of IP addresses to L2 MAC addresses between transactions for persistent transactional subset multicast groups when informed of a prior transaction failure with a Refresh Membership message (see Figure 7).

o  MAY refresh the Layer 2 multicast forwarding rules at any time.

6.1. Network Topology

Forwarding Control Agents are applicable for networks which consist of one or more local subnets which have direct links with each other.

6.2. Isolated VLANs Strategy

Transactional Multicast groups define a very large number of multicast addresses which must be delivered within a closed set of IP subnets without having to dynamically co-ordinate allocation of these multicast addresses with a wider network.

This MAY be accomplished using an "Isolated VLANs Strategy" where the reference multicast group and all transactional multicast groups derived from it are used strictly inside of a single VLAN or a set of interconnected VLANs which route these multicast groups solely within this closed set.

Specifically, an implementation using the Isolated VLANs Strategy:

o  MUST include only a pre-defined set of subnets, each enforced with a VLAN.

o  MUST provide for routing or forwarding of all packets using the reference multicast group and all transactional multicast groups derived from it amongst these subnets.

o  MUST NOT allow any packet using the reference multicast group or any transactional multicast groups derived from it to be routed to any subnet that is not part of the identified Isolated VLAN set.

o  MAY guard the confidentiality of multicast packets routed between subnets that transit subnets that are not part of the Isolated VLAN set.

Applications MAY use the Isolated VLAN Strategy.  Virtually all applications will elect to do so, because allocating a very large block of adjacent multicast addresses would be very difficult without the restriction of the Isolated VLAN strategy.  Confining usage of these addresses to a single VLAN is highly desirable.

Direct connections between the VLANs hosting Forwarding Control Agents are required because the Transactional Multicast Groups are not known to any intermediate multicast routers that would implement indirect links.  Co-locating Forwarding Control Agents with RBridges [RFC6325] MAY be a solution.

7. Forwarding Control Agent Methods

7.1. Dynamically Pushed Transactional Groups

Each Pushed Transactional Membership command MUST contain the following:

o  Reference Multicast Group: All forwarding rules created must be a subset of the forwarding rules for this group.  That is, all Targets listed in the Target List must be reachable by the Reference Multicast Group.

o  Transactional Multicast Group: The group multicast address that is to have its multicast forwarding rules updated.
   This address must be within a block of Transactional Multicast Groups previously created using the Create Transactional Multicast Address Block command (Section 10.1).

o  Target List: List of IP Addresses which are to be the targets of this group.  These addresses are intended to be members of the reference group.  When formulating the list, non-members MUST NOT be included.  However there is no transaction lock placed upon the group, and therefore there may be changes in the group membership before the message is received.  Therefore the Forwarding Control Agent MUST ignore any listed target that is not a member of the reference group.

This sets the multicast forwarding rules for pre-existing multicast forwarding address X to be the subset of the forwarding rules for existing group Y required to reach the specified member list.

This is done by communicating the same instruction (above) to each multicast forwarding network element.  This can be done by unicast addressing with each of them, or by multicasting the instructions.

Each multicast forwarder will modify its multicast forwarding port set to be the union of the unicast forwarding ports it has for the listed members, but the result must be a subset of the forwarding ports for the Reference Multicast Group (Y in the example).

For example, consider an instruction to modify a transactional multicast group I, which is a subset of multicast group J, to reach addresses A, B and C.

Addresses A and B are attached directly to multicast forwarder Q, while C is attached to multicast forwarder R.

On forwarder Q the forwarding rule for the new group I contains:

o  The forwarding port for A.

o  The forwarding port for B.

o  The forwarding port to forwarder R (a hub link).  This eventually leads to C.

While on forwarder R the forwarding rule for the new group I will contain:

o  The forwarding port to forwarder Q (a hub link).  This eventually leads to A and B.

o  The forwarding port for C.

The Forwarding Control Agent MUST perform a two-step translation: first from IP Address to MAC Address, and then from MAC Address to forwarding port.  For typical applications of Transactional Multicasting, all of the referenced IP Addresses will have been involved in recent messaging, and therefore will typically already be cached.

Many Ethernet switches already support command line and/or SNMP methods of setting these multicast forwarding rules, but it is challenging for an application to reliably apply the same changes using multiple vendor-specific methods.  Having a standardized method of pushing the membership of a multicast group from the sender would be desirable.

A Forwarding Control Agent MAY accept a request where the Target List is expressed as a list of destination L2 MAC addresses.

The Target List MAY list both IPV4 and IPV6 target addresses.  However, since any given datagram will be either an IPV6 or an IPV4 UDP datagram, it is unlikely that any application would have a need to specify the Target List with a mixed set of addresses.  The sender intends to multicast either IPV4 or IPV6 datagrams.

7.2. Persistent Transactional Groups

There is a large set of pre-configured multicast groups which are an enumeration of the possible subsets of a master group.  This will be a specific class of subsets, such as all combinations of 3 members of multicast group X.
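
As a purely illustrative sketch (in Python, with a made-up base group number), the enumeration and selection described in the remainder of this section can be pictured as follows:

   <CODE BEGINS>
   # Illustrative only; group numbers are hypothetical.
   from itertools import combinations

   members = ["M%d" % i for i in range(1, 21)]   # M1 .. M20
   BASE = 0x010000                               # hypothetical base group number

   group_of = {}
   for offset, subset in enumerate(combinations(members, 3)):
       group_of[subset] = BASE + offset          # 1140 groups: BASE .. BASE+1139

   # A sender holding reservations from M4, M5 and M19 simply selects
   # the pre-assigned group for that subset.
   rendezvous_group = group_of[("M4", "M5", "M19")]
   <CODE ENDS>
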
These groups are enumerated and assigned successive multicast addresses within a block.

The sender first obtains exclusive permission to utilize a portion of the reception capacity of each desired target, and then selects the multicast address that will reach that group.  Having first secured exclusive rights to transmit towards a finite reception capacity for each target, the sender will have effectively claimed exclusive access to the multicast group collecting multiple such targets.

In a straightforward enumeration of 3 members out of a group of 20, there are 20*19*18/(3*2*1), or 1140, possible groups.  Typically the higher layer protocol will have negotiated the right to send the transaction with each member prior to selecting the multicast group.  In making the final selection, the actual multicast group is selected and some offered targets are declined.

Those 1140 possible groups can be enumerated in order (starting with M1, M2 and M3 and ending with M18, M19 and M20) and assigned multicast addresses from N to N+1139.

When the transaction requires reaching M4, M5 and M19, the sender simply selects that group.  Because exclusive rights to use multicasting to M4, M5 and M19 have already been obtained through the higher layer protocol, the group [M4,M5,M19] is already exclusively claimed.

These 1140 groups may be set up through any of the following means:

o  Traditional IGMP/MLD joining/leaving.

o  Setting static forwarding rules using SNMP MIBs and/or switch-specific command line interfaces.  Note that the wide-spread existence of command line interfaces to custom-set multicast forwarding rules is an indicator that there are existing applications that find the existing IGMP/MLD protocols inadequate to fulfill their needs.

o  The Dynamically Pushed Multicast Group method.  See Section 7.1.

8. Relationship to Existing Multicast Membership Protocols

Transactional Multicast Groups are not a replacement for Join-based management of Multicast Groups.  Rather they extend the group maintenance performed by the Join-based multicast control protocols from the reference group to an entire set of multicast addresses that are subsets of it.

This extension requires no modification to the existing data-plane multicast forwarding protocols or implementations.  Transactional Multicast groups may be implemented solely in the sender, the receivers and the Forwarding Control Agents associated with each multicast forwarder supporting the reference group.

The maintenance work of the Join-based multicast protocols performed on the reference multicast group is leveraged to allow maintenance of a potentially large number of derived Transactional Multicast groups.  This allows identification of a large number of subsets of the reference group, without requiring a matching increase in the maintenance traffic which would have been required had the derived groups been formed with a Join-based protocol.

9. Control Protocol

Note: the pre-standard protocol relies on multicasting of commands within a single secure VLAN.  More general usage of these techniques will require transmitting Forwarding Control Agent instructions between subnets, where they may be subject to interception and even alteration.  Therefore a more secure method of delivering Forwarding Control Agent instructions is required.

The methods standardized by KARP (Keying and Authentication for Routing Protocols) are, in the Authors' opinion, fully applicable to this protocol.  See [RFC6518].  Working Group feedback is sought as to how to expand this section, whether to split the Control Protocol into a separate document, or other methods of dealing with the control protocol.

The following requirements apply to any Control Protocol used:

o  Each request MUST be uniquely identified.  This identification MUST include the source IP address of the requester.

o  The message MUST be authenticated.

o  WG discussion is needed to reach a consensus as to whether the message contents need to be kept confidential, or whether preventing alteration is sufficient.

o  The sender MUST NOT be required to transmit the command more than once, other than as required for retries.  For example, requiring SSH connections with each Forwarding Control Agent is not acceptable.

o  Barring network errors, the message MUST be delivered to all Forwarding Control Agents that can receive the reference master group.

10. Forwarding Control Agent Methods

10.1. Create Transactional Multicast Address Block

TBD: This section will define the fields required for the command to create a block of transactional multicast addresses within a specific VLAN.  The command defined here is delivered within a control protocol.

     0                   1                   2                   3
     0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    |              Opcode=CreateTransactionalMulticast              |
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    |                  Base Multicast Group Number                  |
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    |             Number of Addresses required in Block             |
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

      Figure 1: Create Transactional Multicast Address Block Message

The Multicast Group Number is the 24-bit L2 Multicast MAC address.  This matches both the IPV4 and IPV6 addresses which map to it.  A given UDP datagram is sent using either an IPV4 or an IPV6 address, so the membership of a Multicast Group is either IPV4 endpoints or IPV6 endpoints at any given instant.

This command does not allow creating a numerically scattered group of addresses.  Doing so would have made the job of each Forwarding Control Agent more complex, and would be of no benefit in the recommended Isolated VLANs strategy (see Section 6.2).

10.2. Release Transactional Multicast Address Block

     0                   1                   2                   3
     0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    |             Opcode=ReleaseTransactionalMulticast              |
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    |                  Base Multicast Group Number                  |
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

      Figure 2: Release Transactional Multicast Address Block Message

10.3. Set Dynamic Transactional Multicast Group Membership IPV6

     0                   1                   2                   3
     0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    |      Opcode=PushTransactionalMulticastMembershipIPV6 or       |
    |           AddTransactionalMulticastMembershipsIPV6            |
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    |   reserved    |            Multicast Group Number             |
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    |   # members   |       Reference Multicast Group Number        |
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    |                  IPV6 Address of 1st Member                   |
    |                                                               |
    |                                                               |
    |                                                               |
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    ...

      Figure 3: Set Dynamic Transactional Multicast Group Membership
                                 Message

# members: 8-bit unsigned number of IPV6 addresses that are to be the targets of the specified Multicast Group Number.

When this message is formed the unicast IPV6 addresses MUST be members of the reference multicast group.  The Multicast Group Number must be a transactional multicast address derived from the reference multicast group.

10.4. Set Dynamic Transactional Multicast Group Membership IPV4

     0                   1                   2                   3
     0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    |        Opcode=PushTransactionalMulticastMembershipIPV4        |
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    |   # members   |            Multicast Group Number             |
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    |                  IPV4 Address of 1st member                   |
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    ...

      Figure 4: Set Dynamic Transactional Multicast Group Membership
                                 Message

# members: 8-bit unsigned number of IPV4 addresses that are to be the targets of the specified Multicast Group Number.

10.5. Set Persistent Transactional Multicast Groups IPv6

     0                   1                   2                   3
     0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    |         Opcode=PushPersistentMulticastMembershipIPV6          |
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    | Reserved - MBZ|     Base Multicast Group Number to be set     |
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    |   # members   |         Reference Multicast Group Num         |
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    |                  IPV6 Address of 1st Member                   |
    |                                                               |
    |                                                               |
    |                                                               |
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    ...

      Figure 5: Set Persistent Transactional Multicast Groups Message
                                   IPV6

# members: 8-bit unsigned number of Members that are to be included in each Transactional Group set by this command.

Base Multicast Group Number to be set.

# Members in the following list of IPV6 addresses.  These must all be members of the Reference Multicast Group.

Reference Multicast Group Num: 24-bit L2 Multicast Group Number.

The motivation for supplying the list of IP addresses is to avoid race conditions where an IGMP or MLD join is in progress.  If there were a method to refer to a specific generation of a multicast group membership then it would be possible to omit this list.

Note: Working Group suggestions are encouraged on this topic.

10.6. Set Persistent Transactional Multicast Groups IPv4

     0                   1                   2                   3
     0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    |         Opcode=PushPersistentMulticastMembershipIPV4          |
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    | Reserved - MBZ|     Base Multicast Group Number to be set     |
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    |   # members   |         Reference Multicast Group Num         |
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    |                  IPV4 Address of 1st Member                   |
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    ...

      Figure 6: Set Persistent Transactional Multicast Groups Message
                                   IPv4

# members: 8-bit unsigned number of Members that are to be included in each Transactional Group set by this command.

Base Multicast Group Number to be set.

# Members in the following list of IPV4 addresses.  These must all be members of the Reference Multicast Group.

Reference Multicast Group Num: 24-bit L2 Multicast Group Number.

10.7. Refresh Persistent Transactional Multicast Group

     0                   1                   2                   3
     0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    |               Opcode=RefreshMulticastMembership               |
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    | Reserved - MBZ|    Multicast Group Number to be Refreshed     |
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    |   reserved    |         Reference Multicast Group Num         |
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

     Figure 7: Refresh Persistent Transactional Multicast Groups Message

The existing Join-directed multicast group control protocols maintain delivery of a multicast group to the subscribers independent of network topology changes at either Layer 2 or Layer 3.  If a unicast IP datagram to a member would be delivered, then the multicast forwarding can be expected to also be current.

Transactional multicast groups do not require the same effort for maintenance.  For a given transaction the entire set of datagrams is either delivered or it is not.  There is no benefit to the application that the Forwarding Control Agent can achieve by promptly updating the L2 multicast forwarding tables after a network topology change.  The current transaction will miss at least one datagram, and therefore does not care if it misses multiple datagrams.

However, a Persistent Transactional Multicast Group is used for a sequence of transactions targeting the same group.  The upper layer protocol sender must have obtained exclusive rights to use the group for the period of time that it will be sending the transaction.

One method that it MAY use is to obtain the exclusive right to send the specific type of transaction to each of the members of the targeted group during negotiations conducted prior to use of the transactional group.  For example, a reservation on inbound bandwidth may have been granted.

The Forwarding Control Agent MAY refresh its mapping from member IP addresses to L2 MAC addresses and then to L2 forwarding ports at any time.  However it MUST do so after receipt of a Refresh Transactional Multicast Group message for the group.
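
As an illustration of the layout in Figure 7, the following sketch packs a Refresh message into its three 32-bit words.  The numeric opcode value is a placeholder, since opcode assignments have not yet been made (see Section 13).

   <CODE BEGINS>
   # Illustrative encoding of Figure 7; the opcode value is a
   # hypothetical placeholder.
   import struct

   OP_REFRESH_MULTICAST_MEMBERSHIP = 0x07        # placeholder value

   def encode_refresh(group_number, reference_group_number):
       # Each group number occupies the low-order 24 bits of a 32-bit
       # word; the high-order 8 bits are reserved and must be zero.
       return struct.pack("!III",
                          OP_REFRESH_MULTICAST_MEMBERSHIP,
                          group_number & 0x00FFFFFF,
                          reference_group_number & 0x00FFFFFF)
   <CODE ENDS>
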
The sender of a transaction SHOULD send a Refresh Transactional Multicast Group message after it fails to receive acknowledgement of an attempted transaction.

11. Operating With Just Dynamic Selection

When all Transactional Multicast groups are selected with Dynamic Selection there is no need for a Forwarding Control Agent.  All groups can be created with traditional IGMP/MLD protocols.  Local algorithms can select the correct group based on shared rules without requiring dynamic collaboration.

When the Forwarding Control Agent is available it SHOULD be used to pre-create groups for Dynamic Selection in preference to using IGMP/MLD, as groups configured using the Forwarding Control Agent will require less maintenance.

12. Security Considerations

The methods described here do not enable any sender to multicast messages to a destination that was not already addressable by it.  Therefore no new security vulnerabilities are enabled by these techniques.

Because authentication of subset commands is kept lightweight, there is an implicit trust within the application that transactional subset groups will be formed or selected in accordance with application layer expectations.  The transport layer lacks sufficient information to enforce application layer expectations.  If a malicious actor deliberately creates a transactional subset multicast group with an incorrect membership it may adversely impact the operation of the specific upper layer application.  However in no case can it be used to launch a denial-of-service attack on targets that have not already voluntarily joined the reference group.

The protocol does not currently provide any mechanism to guard against selecting an existing but unrelated multicast group as a reference multicast group.  Explicitly enabling use of an existing multicast group as a reference group would not solve the problem, because the existing management of multicast groups is not aware of any need to explicitly forbid creation of derived multicast groups based upon a multicast group that it creates.

13. IANA Considerations

Note: a set of opcodes is defined.  It is not yet known whether these extend an existing set of opcodes or whether they form a new set of numbers to be defined.  This should be corrected before this document reaches working group last call.

14. Summary

This proposal provides two new methods to manage multicast group membership.  These are simple techniques, but they provide a cohesive cluster-wide approach to transactional multicasting.  These techniques are better suited for transactional multicasting than the existing methods, IGMP and MLD, which are oriented to streaming use-cases.

15. References

15.1. Informative References

[Replicast]  Bestler, C., "White Paper: Nexenta Replicast", November 2013, <http://info.nexenta.com/rs/nexenta/images/Nexenta_Replicast_White_Paper.pdf>.

[MPI]        MPI Forum, "Message Passing Interface", 2012.

[AmazonS3]   Amazon, "Amazon Simple Storage Service (S3)", 2014, <http://aws.amazon.com/s3/>.

[Swift]      OpenStack, "OpenStack Object Storage (Swift)", 2014, <http://docs.openstack.org/developer/swift/>.

[IEEE.802.1Qau-2011]  IEEE, "IEEE Standard for Local and Metropolitan Area Networks: Virtual Bridged Local Area Networks - Amendment 10: Congestion Notification", IEEE Std 802.1Qau, 2011.

[IEEE.802.1Qaz-2011]  IEEE, "IEEE Standard for Local and Metropolitan Area Networks: Virtual Bridged Local Area Networks - Amendment 18: Enhanced Transmission Selection", IEEE Std 802.1Qaz, 2011.

[IEEE.802.1Qbb-2011]  IEEE, "IEEE Standard for Local and Metropolitan Area Networks: Virtual Bridged Local Area Networks - Amendment 17: Priority-based Flow Control", IEEE Std 802.1Qbb, 2011.

[RFC5661]    Shepler, S., Eisler, M., and D. Noveck, "Network File System (NFS) Version 4 Minor Version 1 Protocol", RFC 5661, January 2010.

15.2. Normative References

[RFC2119]    Bradner, S., "Key words for use in RFCs to Indicate Requirement Levels", BCP 14, RFC 2119, March 1997.

[RFC3376]    Cain, B., Deering, S., Kouvelas, I., Fenner, B., and A. Thyagarajan, "Internet Group Management Protocol, Version 3", RFC 3376, October 2002.

[RFC3810]    Vida, R. and L. Costa, "Multicast Listener Discovery Version 2 (MLDv2) for IPv6", RFC 3810, June 2004.

[RFC4541]    Christensen, M., Kimball, K., and F. Solensky, "Considerations for Internet Group Management Protocol (IGMP) and Multicast Listener Discovery (MLD) Snooping Switches", RFC 4541, May 2006.

[RFC4604]    Holbrook, H., Cain, B., and B. Haberman, "Using Internet Group Management Protocol Version 3 (IGMPv3) and Multicast Listener Discovery Protocol Version 2 (MLDv2) for Source-Specific Multicast", RFC 4604, August 2006.

[RFC6325]    Perlman, R., Eastlake, D., Dutt, D., Gai, S., and A. Ghanwani, "Routing Bridges (RBridges): Base Protocol Specification", RFC 6325, July 2011.

[RFC6518]    Lebovitz, G. and M. Bhatia, "Keying and Authentication for Routing Protocols (KARP) Design Guidelines", RFC 6518, February 2012.

Authors' Addresses

   Caitlin Bestler (editor)
   Nexenta Systems
   451 El Camino Real
   Santa Clara, CA
   US

   Email: caitlin.bestler@nexenta.com, cait@asomi.com

   Robert Novack

   Email: sailinfool@gmail.com