Internet Engineering Task Force                             R. G. Cole
INTERNET-DRAFT                                              D. H. Shur
draft-ietf-ipatm-framework-doc-06               AT&T Bell Laboratories
                                                          C. Villamizar
                                                                    ANS
                                                        October 2, 1995

                  IP over ATM: A Framework Document

Status of this Memo

   This document is an Internet Draft.  Internet Drafts are working
   documents of the Internet Engineering Task Force (IETF), its Areas,
   and its Working Groups.  Note that other groups may also distribute
   working documents as Internet Drafts.

   Internet Drafts are draft documents valid for a maximum of six
   months.  Internet Drafts may be updated, replaced, or obsoleted
   by other documents at any time.  It is not appropriate to
   use Internet Drafts as reference material or to cite them other
   than as a ``working draft'' or ``work in progress''.  Please
   check the 1id-abstracts.txt listing contained in the internet-drafts
   shadow directories on nic.ddn.mil, nnsc.nsf.net, nic.nordu.net,
   ftp.nisc.sri.com, or munnari.oz.au to learn the current status of any
   Internet Draft.

Abstract

   The discussions of the IP over ATM working group over the last several
   years have produced a diverse set of proposals, some of which are
   no longer under active consideration.  A categorization is provided
   for the purpose of focusing discussion on the various proposals
   for IP over ATM deemed of primary interest by the IP over ATM
   working group.  The intent of this framework is to help clarify the
   differences between proposals and identify common features in order to
   promote convergence to a smaller and more mutually compatible set of
   standards.  In summary, it is hoped that this document, in classifying
   ATM approaches and issues, will help to focus the IP over ATM working
   group's direction.
42 1 Introduction 44 The IP over ATM Working Group of the Internet Engineering Task 45 Force (IETF) is chartered to develop standards for routing and 46 forwarding IP packets over ATM sub-networks. This document provides 47 a classification/taxonomy of IP over ATM options and issues and then 48 describes various proposals in these terms. 50 The remainder of this memorandum is organized as follows: 52 o Section 2 defines several terms relating to networking and 53 internetworking. 55 o Section 3 discusses the parameters for a taxonomy of the 56 different ATM models under discussion. 58 o Section 4 discusses the options for low level encapsulation. 60 o Section 5 discusses tradeoffs between connection oriented and 61 connectionless approaches. 63 o Section 6 discusses the various means of providing direct 64 connections across IP subnet boundaries. 66 o Section 7 discusses the proposal to extend IP routing to better 67 accommodate direct connections across IP subnet boundaries. 69 o Section 8 identifies several prominent IP over ATM proposals that 70 have been discussed within the IP over ATM Working Group and 71 their relationship to the framework described in this document. 73 o Section 9 addresses the relationship between the documents 74 developed in the IP over ATM and related working groups and the 75 various models discussed. 77 2 Definitions and Terminology 79 We define several terms: 81 A Host or End System: A host delivers/receives IP packets to/from 82 other systems, but does not relay IP packets. 84 A Router or Intermediate System: A router delivers/receives IP 85 packets to/from other systems, and relays IP packets among 86 systems. 88 IP Subnet: In an IP subnet, all members of the subnet are able to 89 transmit packets to all other members of the subnet directly, 90 without forwarding by intermediate entities. No two subnet 91 members are considered closer in the IP topology than any other. 92 From an IP routing and IP forwarding standpoint a subnet is 93 atomic, though there may be repeaters, hubs, bridges, or switches 94 between the physical interfaces of subnet members. 96 Bridged IP Subnet: A bridged IP subnet is one in which two or 97 more physically disjoint media are made to appear as a single IP 98 subnet. There are two basic types of bridging, media access 99 control (MAC) level, and proxy ARP (see section 6). 101 A Broadcast Subnet: A broadcast network supports an arbitrary 102 number of hosts and routers and additionally is capable of 103 transmitting a single IP packet to all of these systems. 105 A Multicast Capable Subnet: A multicast capable subnet supports 106 a facility to send a packet which reaches a subset of the 107 destinations on the subnet. Multicast setup may be sender 108 initiated, or leaf initiated. ATM UNI 3.0 [4] and UNI 3.1 109 support only sender initiated while IP supports leaf initiated 110 join. UNI 4.0 will support leaf initiated join. 112 A Non-Broadcast Multiple Access (NBMA) Subnet: An NBMA supports 113 an arbitrary number of hosts and routers but does not 114 natively support a convenient multi-destination connectionless 115 transmission facility, as does a broadcast or multicast capable 116 subnetwork. 118 An End-to-End path: An end-to-end path consists of two hosts which 119 can communicate with one another over an arbitrary number of 120 routers and subnets. 
An internetwork: An internetwork (small ``i'') is the concatenation
of networks, often of various different media and lower level
encapsulations, to form an integrated larger network supporting
communication between any of the hosts on any of the component
networks.  The Internet (big ``I'') is a specific well known
global concatenation of (over 40,000 at the time of writing)
component networks.

IP forwarding: IP forwarding is the process of receiving a packet
and using a very low overhead decision process to determine how
to handle the packet.  The packet may be delivered locally
(for example, management traffic) or forwarded externally.  For
traffic that is forwarded externally, the IP forwarding process
also determines which interface the packet should be sent out on,
and if necessary, either removes one media layer encapsulation
and replaces it with another, or modifies certain fields in the
media layer encapsulation.

IP routing: IP routing is the exchange of information that takes
place in order to have available the information necessary to
make a correct IP forwarding decision.

IP address resolution: A quasi-static mapping exists between IP
addresses on the local IP subnet and media addresses on the local
subnet.  This mapping is known as IP address resolution.
An address resolution protocol (ARP) is a protocol supporting
address resolution.

In order to support end-to-end connectivity, two techniques are used.
One involves allowing direct connectivity across classic IP subnet
boundaries supported by certain NBMA media, which include ATM.  The
other involves IP routing and IP forwarding.  In essence, the former
technique extends IP address resolution beyond the boundaries of
the IP subnet, while the latter interconnects IP subnets.

Large internetworks, and in particular the Internet, are unlikely to
be composed of a single media, or a star topology with a single media
at the center.  Within a large network supporting a common media,
typically any large NBMA such as ATM, IP routing and IP forwarding
must always be accommodated if the internetwork is larger than the
NBMA, particularly if there are multiple points of interconnection
with the NBMA and/or redundant, diverse interconnections.

Routing information exchange in a very large internetwork can be quite
dynamic due to the high probability that some network elements are
changing state.  The address resolution space consumption and resource
consumption due to state change, or maintenance of state information,
are rarely a problem in classic IP subnets.  They can become a problem
in large bridged networks or in proposals that attempt to extend
address resolution beyond the IP subnet.  Scaling properties of address
resolution and routing proposals, with respect to state information
and state change, must be considered.
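The relationship between IP forwarding, IP routing, and IP address
resolution can be made concrete with a small sketch.  The following
Python fragment is a minimal illustration only; the table names, the
LIS prefix, and the ``ATM-ADDR'' strings are hypothetical placeholders
and do not correspond to any protocol or address format specified in
the documents discussed here.

    import ipaddress

    # Hypothetical tables for one node on a logical IP subnet (LIS).
    LIS_PREFIX    = ipaddress.ip_network("192.0.2.0/24")
    ARP_CACHE     = {"192.0.2.7": "ATM-ADDR-A",      # IP -> media (ATM) address
                     "192.0.2.1": "ATM-ADDR-RTR"}
    ROUTING_TABLE = [(ipaddress.ip_network("0.0.0.0/0"), "192.0.2.1")]

    def resolve(ip):
        """IP address resolution: map an on-subnet IP address to a media address."""
        return ARP_CACHE.get(ip)   # a real ARP client would query before giving up

    def forward(dst_ip):
        """IP forwarding: pick a next hop, then resolve it to a media address.
        IP routing is whatever filled ROUTING_TABLE ahead of time."""
        if ipaddress.ip_address(dst_ip) in LIS_PREFIX:
            next_hop = dst_ip                 # destination is on the local subnet
        else:
            matches  = [r for r in ROUTING_TABLE
                        if ipaddress.ip_address(dst_ip) in r[0]]
            next_hop = max(matches, key=lambda r: r[0].prefixlen)[1]
        return resolve(next_hop)

    print(forward("192.0.2.7"))     # delivered directly on the subnet
    print(forward("198.51.100.9"))  # handed to the subnet's router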
3 Parameters Common to IP Over ATM Proposals

In some discussions of IP over ATM, distinctions have been made between
local area networks (LANs) and wide area networks (WANs) that do
not necessarily hold.  The distinction between a LAN, MAN and WAN
is a matter of geographic dispersion.  Geographic dispersion affects
performance due to increased propagation delay.

LANs are used for network interconnections at the major Internet
traffic interconnect sites.  Such LANs have multiple administrative
authorities, currently exclusively support routers providing transit
to multihomed internets, currently rely on PVCs and static address
resolution, and rely heavily on IP routing.  Such a configuration
differs from the typical LANs used to interconnect computers in
corporate or campus environments, and emphasizes the point that prior
characterizations of LANs do not necessarily hold.  Similarly, WANs
such as those under consideration by numerous large IP providers
do not conform to prior characterizations of ATM WANs in that they
have a single administrative authority and a small number of nodes
aggregating large flows of traffic onto single PVCs, and rely on IP
routers to avoid forming congestion bottlenecks within ATM.

The following characteristics of the IP over ATM internetwork may be
independent of geographic dispersion (LAN, MAN, or WAN):

o The size of the IP over ATM internetwork (number of nodes).

o The size of ATM IP subnets (LIS) in the ATM internetwork.

o Single IP subnet vs multiple IP subnet ATM internetworks.

o Single or multiple administrative authority.

o Presence of routers providing transit to multihomed internets.

o The presence or absence of dynamic address resolution.

o The presence or absence of an IP routing protocol.

IP over ATM should therefore be characterized by:

o Encapsulations below the IP level.

o Degree to which a connection oriented lower level is available
  and utilized.

o Type of address resolution at the IP subnet level (static or
  dynamic).

o Degree to which address resolution is extended beyond the IP
  subnet boundary.

o The type of routing (if any) supported above the IP level.

ATM-specific attributes of particular importance include:

o The different types of services provided by the ATM Adaptation
  Layers (AAL).  These specify the Quality-of-Service, the
  connection-mode, etc.  The models discussed within this document
  assume an underlying connection-oriented service.

o The type of virtual circuits used, i.e., PVCs versus SVCs.  The
  PVC environment requires the use of either static tables for
  ATM-to-IP address mapping or the use of inverse ARP, while the
  SVC environment requires ARP functionality to be provided.

o The type of support for multicast services.  If point-to-point
  services only are available, then a server for IP multicast is
  required.  If point-to-multipoint services are available, then
  IP multicast can be supported via meshes of point-to-multipoint
  connections (although use of a server may still be necessary,
  either due to limits on the number of multipoint VCs able to be
  supported or to maintain the leaf initiated join semantics).

o The presence of logical link identifiers (VPI/VCIs) and the
  various information element (IE) encodings within the ATM SVC
  signaling specification, i.e., the ATM Forum UNI version 3.1.
  This allows a VC originator to specify a range of ``layer''
  entities as the destination ``AAL User''.  The AAL specifications
  do not prohibit any particular ``layer X'' from attaching
  directly to a local AAL service.  Taken together these points
  imply a range of methods for encapsulation of upper layer
  protocols over ATM.  For example, while LLC/SNAP encapsulation is
  one approach (the default), it is also possible to bind virtual
  circuits to higher level entities in the TCP/IP protocol stack.
260 Some examples of the latter are single VC per protocol binding, 261 TULIP, and TUNIC, discussed further in Section 4. 263 o The number and type of ATM administrative domains/networks, and 264 type of addressing used within an administrative domain/network. 265 In particular, in the single domain/network case, all attached 266 systems may be safely assumed to be using a single common 267 addressing format, while in the multiple domain case, attached 268 stations may not all be using the same common format, 269 with corresponding implications on address resolution. (See 270 Appendix A for a discussion of some of the issues that arise 271 when multiple ATM address formats are used in the same logical 272 IP subnet (LIS).) Also security/authentication is much more of a 273 concern in the multiple domain case. 275 IP over ATM proposals do not universally accept that IP routing over 276 an ATM network is required. Certain proposals rely on the following 277 assumptions: 279 o The widespread deployment of ATM within premises-based networks, 280 private wide-area networks and public networks, and 282 o The definition of interfaces, signaling and routing protocols 283 among private ATM networks. 285 The above assumptions amount to ubiquitous deployment of a seamless 286 ATM fabric which serves as the hub of a star topology around which 287 all other media is attached. There has been a great deal of 288 discussion over when, if ever, this will be a realistic assumption for 289 very large internetworks, such as the Internet. Advocates of such 290 approaches point out that even if these are not relevant to very large 291 internetworks such as the Internet, there may be a place for such 292 models in smaller internetworks, such as corporate networks. 294 The NHRP protocol (Section 8.2), not necessarily specific to ATM, 295 would be particularly appropriate for the case of ubiquitous ATM 296 deployment. NHRP supports the establishment of direct connections 297 across IP subnets in the ATM domain. The use of NHRP does not require 298 ubiquitous ATM deployment, but currently imposes topology constraints 299 to avoid routing loops (see Section 7). Section 8.2 describes NHRP in 300 greater detail. 302 The Peer Model assumes that internetwork layer addresses can be mapped 303 onto ATM addresses and vice versa, and that reachability information 304 between ATM routing and internetwork layer routing can be exchanged. 305 This approach has limited applicability unless ubiquitous deployment 306 of ATM holds. The peer model is described in Section 8.4. 308 The Integrated Model proposes a routing solution supporting an 309 exchange of routing information between ATM routing and higher level 310 routing. This provides timely external routing information within 311 the ATM routing and provides transit of external routing information 312 through the ATM routing between external routing domains. Such 313 proposals may better support a possibly lengthy transition during 314 which assumptions of ubiquitous ATM access do not hold. The 315 Integrated Model is described in Section 8.5. 317 The Multiprotocol over ATM (MPOA) Sub-Working Group was formed by 318 the ATM Forum to provide multiprotocol support over ATM. The MPOA 319 effort is at an early stage at the time of this writing. An MPOA 320 baseline document has been drafted, which provides terminology for 321 further discussion of the architecture. 
This document is available from the FTP server ftp.atmforum.com in
pub/contributions as the file atm95-0824.ps or atm95-0824.txt.

4 Encapsulations and Lower Layer Identification

Data encapsulation, and the identification of VC endpoints, constitute
two important issues that are somewhat orthogonal to the issues of
network topology and routing.  The relationship between these two
issues is also a potential source of confusion.  In conventional
LAN technologies the 'encapsulation' wrapped around a packet of
data typically defines the (de)multiplexing path within source and
destination nodes (e.g. the Ethertype field of an Ethernet packet).
Choice of the protocol endpoint within the packet's destination node
is essentially carried 'in-band'.

As the multiplexing is pushed towards ATM and away from the LLC/SNAP
mechanism, a greater burden will be placed upon the call setup and
teardown capacity of the ATM network.  This may raise questions
regarding the scalability of these lower level multiplexing options.

With the ATM Forum UNI version 3.1 service, the choice of endpoint
within a destination node is made 'out of band' - during the Call
Setup phase.  This is quite independent of any in-band encapsulation
mechanisms that may be in use.  The B-LLI Information Element allows
Layer 2 or Layer 3 entities to be specified as a VC's endpoint.  When
faced with an incoming SETUP message the Called Party will search
locally for an AAL User that claims to provide the service of the
layer specified in the B-LLI.  If one is found then the VC will be
accepted (assuming other conditions such as QoS requirements are also
met).

An obvious approach for IP environments is to simply specify the
Internet Protocol layer as the VC's endpoint, and place IP packets into
AAL--SDUs for transmission.  This is termed 'VC multiplexing' or 'Null
Encapsulation', because it involves terminating a VC (through an AAL
instance) directly on a layer 3 endpoint.  However, this approach
has limitations in environments that need to support multiple layer 3
protocols between the same two ATM level endpoints.  Each pair of
layer 3 protocol entities that wish to exchange packets requires its
own VC.

RFC--1483 [6] notes that VC multiplexing is possible, but focuses
on describing an alternative termed 'LLC/SNAP Encapsulation'.  This
allows any set of protocols that may be uniquely identified by an
LLC/SNAP header to be multiplexed onto a single VC.  Figure 1 shows
how this works for IP packets - the first 3 bytes indicate that
the payload is a Routed Non-ISO PDU, and the Organizationally Unique
Identifier (OUI) of 0x00-00-00 indicates that the Protocol Identifier
(PID) is derived from the EtherType associated with IP packets
(0x800).  ARP packets are multiplexed onto a VC by using a PID of
0x806 instead of 0x800.

Figure 1: IP packet encapsulated in an AAL5 SDU
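As an informal illustration of the LLC/SNAP encapsulation described
above, the following Python sketch builds and parses the 8-byte
LLC/SNAP header that precedes an IP or ARP packet in an AAL5 SDU.  The
LLC value 0xAA-AA-03 used for routed non-ISO PDUs comes from
RFC--1483 [6]; the function names are illustrative only.

    # Minimal sketch of RFC--1483 LLC/SNAP encapsulation (routed, non-ISO PDUs).
    LLC_ROUTED_NON_ISO = bytes([0xAA, 0xAA, 0x03])  # LLC header: SNAP follows
    OUI_ETHERTYPE      = bytes([0x00, 0x00, 0x00])  # OUI 0x00-00-00: PID is an EtherType
    ETHERTYPE_IP  = 0x0800
    ETHERTYPE_ARP = 0x0806

    def encapsulate(payload: bytes, ethertype: int) -> bytes:
        """Prefix a packet with the LLC/SNAP header before handing it to AAL5."""
        return LLC_ROUTED_NON_ISO + OUI_ETHERTYPE + ethertype.to_bytes(2, "big") + payload

    def demultiplex(aal5_sdu: bytes):
        """Recover (ethertype, packet) from an LLC/SNAP encapsulated AAL5 SDU."""
        if aal5_sdu[:6] != LLC_ROUTED_NON_ISO + OUI_ETHERTYPE:
            raise ValueError("not a routed non-ISO PDU carrying an EtherType PID")
        return int.from_bytes(aal5_sdu[6:8], "big"), aal5_sdu[8:]

    sdu = encapsulate(b"<IP packet>", ETHERTYPE_IP)
    assert demultiplex(sdu) == (ETHERTYPE_IP, b"<IP packet>")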
Whatever layer terminates a VC carrying LLC/SNAP encapsulated traffic
must know how to parse the AAL--SDUs in order to retrieve the packets.
The recently approved signalling standards for IP over ATM are more
explicit, noting that the default SETUP message used to establish IP
over ATM VCs must carry a B-LLI specifying an ISO 8802/2 Layer 2
(LLC) entity as each VC's endpoint.  More significantly, there is no
information carried within the SETUP message about the identity of
the layer 3 protocol that originated the request - until the packets
begin arriving the terminating LLC entity cannot know which one or
more higher layers are packet destinations.

Taken together, this means that hosts require a protocol entity to
register with the host's local UNI 3.1 management layer as being an
LLC entity, and this same entity must know how to handle and generate
LLC/SNAP encapsulated packets.  The LLC entity will also require
mechanisms for attaching to higher layer protocols such as IP and ARP.
Figure 2 attempts to show this, and also highlights the fact that
such an LLC entity might support many more than just IP and ARP.  In
fact the combination of RFC 1483 LLC/SNAP encapsulation, LLC entities
terminating VCs, and suitable choice of LLC/SNAP values, can go a long
way towards providing an integrated approach to building multiprotocol
networks over ATM.

Figure 2: LLC/SNAP encapsulation allows more than just IP or ARP per
VC.

The processes of actually establishing AAL Users, and identifying them
to the local UNI 3.1 management layers, are still undefined and are
likely to be very dependent on operating system environments.

Two encapsulations have been discussed within the IP over ATM working
group which differ from those given in RFC--1483 [6].  These
have the characteristic of largely or totally eliminating IP header
overhead.  These models were discussed in the July 1993 IETF meeting
in Amsterdam, but have not been fully defined by the working group.

TULIP and TUNIC assume single hop reachability between IP entities.
Following name resolution, address resolution, and SVC signaling, an
implicit binding is established between entities in the two hosts.  In
this case full IP headers (and in particular source and destination
addresses) are not required in each data packet.

o The first model is ``TCP and UDP over Lightweight IP'' (TULIP)
  in which only the IP protocol field is carried in each packet,
  everything else being bound at call set-up time.  In this
  case the implicit binding is between the IP entities in each
  host.  Since there is no further routing problem once the binding
  is established, since AAL5 can indicate packet size, since
  fragmentation cannot occur, and since ATM signaling will handle
  exception conditions, the absence of all other IP header fields
  and of ICMP should not be an issue.  Entry to TULIP mode would
  occur as the last stage in SVC signaling, by a simple extension
  to the encapsulation negotiation described in RFC--1755 [10].

  TULIP changes nothing in the abstract architecture of the IP
  model, since each host or router still has an IP address which is
  resolved to an ATM address.  It simply uses the point-to-point
  property of VCs to allow the elimination of some per-packet
  overhead.  The use of TULIP could in principle be negotiated on a
  per-SVC basis or configured on a per-PVC basis.

o The second model is ``TCP and UDP over a Nonexistent IP
  Connection'' (TUNIC).  In this case no network-layer information
  is carried in each packet, everything being bound at virtual
  circuit set-up time.
  The implicit binding is between two applications using either TCP
  or UDP directly over AAL5 on a dedicated VC.  If this can be
  achieved, the IP protocol field has no useful dynamic function.
  However, in order to achieve binding between two applications,
  the use of a well-known port number in classical IP or in TULIP
  mode may be necessary during call set-up.  This is a subject for
  further study and would require significant extensions to the use
  of SVC signaling described in RFC--1755 [10].

TULIP/TUNIC can be presented as being on one end of a continuum
opposite the SNAP/LLC encapsulation, with various forms of null
encapsulation somewhere in the middle.  The continuum is simply a
matter of how much is moved from in-stream demultiplexing to call
setup demultiplexing.  The various encapsulation types are presented
in Table 1.

 Encapsulation   In setup message             Demultiplexing
 -------------+--------------------------+------------------------
 SNAP/LLC     _  nothing                 _  source and destination
              _                          _  address, protocol
              _                          _  family, protocol, ports
              _                          _
 NULL encaps  _  protocol family         _  source and destination
              _                          _  address, protocol, ports
              _                          _
 TULIP        _  source and destination  _  protocol, ports
              _  address, protocol family_
              _                          _
 TUNIC - A    _  source and destination  _  ports
              _  address, protocol family_
              _  protocol                _
              _                          _
 TUNIC - B    _  source and destination  _  nothing
              _  address, protocol family_
              _  protocol, ports         _

 Table 1: Summary of Encapsulation Types

Encapsulations such as TULIP and TUNIC make assumptions with regard
to the desirability to support connection oriented flow.  The
tradeoffs between connection oriented and connectionless are discussed
in Section 5.
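Table 1 can be read as a statement of which demultiplexing fields are
fixed at call setup and which must still be recovered from each
arriving packet.  The following Python sketch makes that split
explicit; the encapsulation names follow Table 1, while the field
list and the function are illustrative only and not part of any of
the cited specifications.

    # Fields identifying a flow: some are bound at SVC setup, the rest
    # must be parsed in-stream.  Derived informally from Table 1.
    ALL_FIELDS = ["src/dst address", "protocol family", "protocol", "ports"]

    SETUP_BOUND = {
        "SNAP/LLC":    [],
        "NULL encaps": ["protocol family"],
        "TULIP":       ["src/dst address", "protocol family"],
        "TUNIC - A":   ["src/dst address", "protocol family", "protocol"],
        "TUNIC - B":   ALL_FIELDS,
    }

    def per_packet_demux(encapsulation: str):
        """Fields a receiver must still recover from each arriving packet."""
        bound = SETUP_BOUND[encapsulation]
        return [f for f in ALL_FIELDS if f not in bound]

    for name in SETUP_BOUND:
        print(f"{name:12s} -> parse per packet: {per_packet_demux(name)}")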
5 Connection Oriented and Connectionless Tradeoffs

The connection oriented and connectionless approaches each offer
advantages and disadvantages.  In the past, strong advocates of pure
connection oriented and pure connectionless architectures have argued
intensely.  IP over ATM does not need to be purely connectionless or
purely connection oriented.

ATM with basic AAL 5 service is connection oriented.  The IP layer
above ATM is connectionless.  On top of IP much of the traffic is
supported by TCP, a reliable end-to-end connection oriented protocol.
A fundamental question is to what degree it is beneficial to map
different flows above IP into separate connections below IP.  There is
a broad spectrum of opinion on this.

As stated in Section 4, at one end of the spectrum, IP would remain
highly connectionless and set up single VCs between routers which are
adjacent on an IP subnet and for which there is active traffic flow.
All traffic between such routers would be multiplexed on a single
ATM VC.  At the other end of the spectrum, a separate ATM VC would
be created for each identifiable flow.  For every unique TCP or UDP
address and port pair encountered, a new VC would be required.  Part of
the intensity of early arguments has been over failure to recognize
that there is a middle ground.

ATM offers QoS and traffic management capabilities that are well
suited for certain types of services.  It may be advantageous to use
a separate ATM VC for such services.  Other IP services, such as DNS,
are ill suited for connection oriented delivery due to their normally
very short duration (typically one packet in each direction).  Short
duration transactions, even many using TCP, may also be poorly suited
for a connection oriented model due to setup and state overhead.
ATM QoS and traffic management capabilities may be poorly suited for
elastic traffic.

Work in progress is addressing how QoS requirements might be expressed
and how the local decisions might be made as to whether those
requirements are best and/or most cost effectively accomplished using
ATM or IP capabilities.  Table 2, Table 3, and Table 4 describe
typical treatment of various types of traffic using a pure connection
oriented approach, a middle ground approach, and a pure connectionless
approach.

The above qualitative descriptions of connection oriented vs
connectionless service serve only as examples to illustrate differing
approaches.  Work in the area of an integrated service model, QoS
and resource reservation is related to but outside the scope of
the IP over ATM Work Group.  This work falls under the Integrated
Services Work Group (int-serv) and Reservation Protocol Work Group
(rsvp), and will ultimately determine when direct connections will be
established.  The IP over ATM Work Group can make more rapid progress
by concentrating solely on how direct connections are established.

 APPLICATION        Pure Connection Oriented Approach
 ----------------+-------------------------------------------------
 General          _  Always set up a VC
                  _
 Short Duration   _  Set up a VC.  Either hold the packet during VC
 UDP (DNS)        _  setup or drop it and await a retransmission.
                  _  Teardown on a timer basis.
                  _
 Short Duration   _  Set up a VC.  Either hold packet(s) during VC
 TCP (SMTP)       _  setup or drop them and await retransmission.
                  _  Teardown on detection of FIN-ACK or on a timer
                  _  basis.
                  _
 Elastic (TCP)    _  Set up a VC same as above.  No clear method to
 Bulk Transfer    _  set QoS parameters has emerged.
                  _
 Real Time        _  Set up a VC.  QoS parameters are assumed to
 (audio, video)   _  precede traffic in RSVP or be carried in some
                  _  form within the traffic itself.

 Table 2: Connection Oriented vs. Connectionless - a) a pure
 connection oriented approach

 APPLICATION        Middle Ground
 ----------------+-------------------------------------------------
 General          _  Use RSVP or other indications which clearly
                  _  indicate a VC is needed and what QoS parameters
                  _  are appropriate.
                  _
 Short Duration   _  Forward hop by hop.  RSVP is unlikely to precede
 UDP (DNS)        _  this type of traffic.
                  _
 Short Duration   _  Forward hop by hop unless RSVP indicates
 TCP (SMTP)       _  otherwise.  RSVP is unlikely to precede this
                  _  type of traffic.
                  _
 Elastic (TCP)    _  By default hop by hop forwarding is used.
 Bulk Transfer    _  However, RSVP information, local configuration
                  _  about TCP port number usage, or a locally
                  _  implemented method for passing QoS information
                  _  from the application to the IP/ATM driver may
                  _  allow/suggest the establishment of direct VCs.
                  _
 Real Time        _  Forward hop by hop unless RSVP indicates
 (audio, video)   _  otherwise.  RSVP will indicate QoS requirements.
                  _  It is assumed RSVP will generally be used for
                  _  this case.  A local decision can be made as to
                  _  whether the QoS is better served by a separate
                  _  VC.

 Table 3: Connection Oriented vs. Connectionless - b) a middle ground
 approach

 APPLICATION        Pure Connectionless Approach
 ----------------+-------------------------------------------------
 General          _  Always forward hop by hop.  Use queueing
                  _  algorithms implemented at the IP layer to
                  _  support reservations such as those specified by
                  _  RSVP.
                  _
 Short Duration   _  Forward hop by hop.
 UDP (DNS)        _
                  _
 Short Duration   _  Forward hop by hop.
 TCP (SMTP)       _
                  _
 Elastic (TCP)    _  Forward hop by hop.  Assume ability of TCP to
 Bulk Transfer    _  share bandwidth (within a VBR VC) works as well
                  _  or better than ATM traffic management.
                  _
 Real Time        _  Forward hop by hop.  Assume that queueing
 (audio, video)   _  algorithms at the IP level can be designed to
                  _  work with sufficiently good performance
                  _  (e.g., due to support for predictive
                  _  reservation).

 Table 4: Connection Oriented vs. Connectionless - c) a pure
 connectionless approach
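The middle ground of Table 3 amounts to a local policy decision made
per flow.  The sketch below, in Python, is one hypothetical way such a
policy could be expressed; the flow attributes and the decision rule
are illustrative and are not taken from any of the working group
documents.

    from dataclasses import dataclass

    @dataclass
    class Flow:
        has_rsvp_indication: bool       # RSVP (or similar) preceded the flow
        local_config_suggests_vc: bool  # e.g. configuration keyed on TCP port usage

    def use_direct_vc(flow: Flow) -> bool:
        """Middle-ground policy (after Table 3): open a direct ATM VC only when
        RSVP or local configuration clearly indicates one is needed; otherwise
        forward hop by hop over the default VC between adjacent routers."""
        return flow.has_rsvp_indication or flow.local_config_suggests_vc

    print(use_direct_vc(Flow(False, False)))  # short DNS/SMTP exchange: hop by hop
    print(use_direct_vc(Flow(True, False)))   # reserved real-time flow: direct VC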
6 Crossing IP Subnet Boundaries

A single IP subnet will not scale well to a large size.  Techniques
which extend the size of an IP subnet in other media include MAC layer
bridging and proxy ARP bridging.

MAC layer bridging alone does not scale well.  Protocols such
as ARP rely on the media broadcast to exchange address resolution
information.  Most bridges improve scaling characteristics by
capturing ARP packets and retaining the content, and distributing the
information among bridging peers.  The ARP information gathered from
ARP replies is broadcast only where explicit ARP requests are made.
This technique is known as proxy ARP.

Proxy ARP bridging improves scaling by reducing broadcast traffic, but
still suffers scaling problems.  If the bridged IP subnet is part of a
larger internetwork, a routing protocol is required to indicate what
destinations are beyond the IP subnet unless a statically configured
default route is used.  A default route is only applicable to a very
simple topology with respect to the larger internet and creates a
single point of failure.  Because internets of enormous size create
scaling problems for routing protocols, the component networks of such
large internets are often partitioned into areas, autonomous systems
or routing domains, and routing confederacies.

The scaling limits of the simple IP subnet require a large network to
be partitioned into smaller IP subnets.  For NBMA media like ATM,
there are advantages to creating direct connections across the entire
underlying NBMA network.  This leads to the need to create direct
connections across IP subnet boundaries.

For example, Figure 3 shows an end-to-end configuration consisting
of four components, three of which are ATM technology based, while
the fourth is a standard IP subnet based on non-ATM technology.
End-systems (either hosts or routers) attached to the ATM-based
networks may communicate either using the Classical IP model or
directly via ATM (subject to policy constraints).  Such nodes may
communicate directly at the IP level without necessarily needing
an intermediate router, even if end-systems do not share a common
IP-level network prefix.  Communication with end-systems on the
non-ATM-based Classical IP subnet takes place via a router, following
the Classical IP model (see Section 8.1 below).

Many of the problems and issues associated with creating such direct
connections across subnet boundaries were originally being addressed
in the IETF's IPLPDN working group and the IP over ATM working group.
661 This area is now being addressed in the Routing over Large Clouds 662 working group. Examples of work performed in the IPLPDN working 663 group include short-cut routing (proposed by P. Tsuchiya) and directed 664 ARP RFC--1433 [5] over SMDS networks. The ROLC working group has 665 produced the distributed ARP server architectures and the NBMA Address 666 Resolution Protocol (NARP) [7]. The Next Hop Resolution Protocol 667 (NHRP) is still work in progress, though the ROLC WG is considering 668 advancing the current draft. Questions/issues specifically related to 669 defining a capability to cross IP subnet boundaries include: 671 o How can routing be optimized across multiple logical IP subnets 672 over both a common ATM based and a non-ATM based infrastructure. 673 For example, in Figure 3, there are two gateways/routers between 674 the non-ATM subnet and the ATM subnets. The optimal path 675 from end-systems on any ATM-based subnet to the non ATM-based 676 subnet is a function of the routing state information of the two 677 routers. 679 o How to incorporate policy routing constraints. 681 o What is the proper coupling between routing and address 682 resolution particularly with respect to off-subnet communication. 684 Figure 3: A configuration with both ATM-based and non-ATM based 685 subnets. 687 o What are the local procedures to be followed by hosts and 688 routers. 690 o Routing between hosts not sharing a common IP-level (or L3) 691 network prefix, but able to be directly connected at the NBMA 692 media level. 694 o Defining the details for an efficient address resolution 695 architecture including defining the procedures to be followed by 696 clients and servers (see RFC--1433 [5], RFC--1735 [7] and NHRP). 698 o How to identify the need for and accommodate special purpose SVCs 699 for control or routing and high bandwidth data transfers. 701 For ATM (unlike other NBMA media), an additional complexity in 702 supporting IP routing over these ATM internets lies in the 703 multiplicity of address formats in UNI 3.0 [4]. NSAP modeled address 704 formats only are supported on ``private ATM'' networks, while either 705 1) E.164 only, 2) NSAP modeled formats only, or 3) both are supported 706 on ``public ATM'' networks. Further, while both the E.164 and NSAP 707 modeled address formats are to be considered as network points of 708 attachment, it seems that E.164 only networks are to be considered 709 as subordinate to ``private networks'', in some sense. This leads 710 to some confusion in defining an ARP mechanism in supporting all 711 combinations of end-to-end scenarios (refer to the discussion in 712 Appendix A on the possible scenarios to be supported by ARP). 714 Figure 4: A Routing Loop Due to Lost PV Routing Attributes. 716 7 Extensions to IP Routing 718 RFC--1620 [3] describes the problems and issues associated with direct 719 connections across IP subnet boundaries in greater detail, as well as 720 possible solution approaches. The ROLC WG has identified persistent 721 routing loop problems that can occur if protocols which lose 722 information critical to path vector routing protocol loop suppression 723 are used to accomplish direct connections across IP subnet boundaries. 724 The problems may arise when a destination network which is not on the 725 NBMA network is reachable via different routers attached to the NBMA 726 network. 
This problem occurs with proposals that attempt to carry
reachability information but do not carry the full path attributes
(for path vector routing) needed for inter-AS path suppression, or the
full metrics (for distance vector or link state routing, even if path
vector routing is not used) needed for intra-AS routing.

There are many potential scenarios for routing loops.  An example
is given in Figure 4.  It is possible to produce a simpler example
where a loop can form.  The example in Figure 4 illustrates a loop
which will persist even if the protocol on the NBMA supports redirects
or can invalidate any route which changes in any way, but does not
support the communication of full metrics or path attributes.

In the example in Figure 4, Host 1 is sending traffic toward Host 2.
In practice, host routes would not be used, so the destination for
the purpose of routing would be Subnet 3.  The traffic travels by
way of Router 1, which establishes a ``cut-through'' SVC to the NBMA
next-hop, shown here as Router 2.  Router 2 forwards traffic destined
for Subnet 3 through Subnet 2 to Router 3.  Traffic from Host 1 would
then reach Host 2.

Router 1's cut-through routing implementation caches an association
between Host 2's IP address (or more likely all of Subnet 3) and
Router 2's NBMA address.  While the cut-through SVC is still up,
Link 1 fails.  Router 5 loses its preferred route through Router 3
and must direct traffic in the other direction.  Router 2 loses
a route through Router 3, but picks up an alternate route through
Router 5.  Router 1 is still directing traffic toward Router 2 and
advertising a means of reaching Subnet 3 to Subnet 1.  Router 5 and
Router 2 will each still see a route to Subnet 3, creating a loop.

This loop would not form if path information normally carried by
interdomain routing protocols such as BGP and IDRP were retained
across the NBMA.  Router 2 would reject the initial route from Router 5
due to the path information.  When Router 2 declares the route to
Subnet 3 unreachable, Router 1 withdraws the route from routing at
Subnet 1, leaving the route through Router 4, which would then reach
Router 5, and would reach Router 2 through both Router 1 and Router 5.
Similarly, a link state protocol would not form such a loop.
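The path-attribute check that prevents the loop above can be stated
compactly.  The following Python sketch shows the standard path vector
rule, assuming a BGP- or IDRP-style AS path is carried with each route
advertisement; the AS numbers and route records are hypothetical and
only stand in for the routers of Figure 4.

    def accept_route(my_as: int, advertisement: dict) -> bool:
        """Path vector loop suppression: reject any route whose AS path
        already contains the receiving router's own AS."""
        return my_as not in advertisement["as_path"]

    # Router 2 (here AS 65002) receiving Subnet 3 back from Router 5 after
    # Link 1 fails.  If the path attributes survived the trip across the
    # NBMA, the looped advertisement still records AS 65002 and is rejected.
    looped = {"prefix": "Subnet 3", "as_path": [65005, 65001, 65002]}
    fresh  = {"prefix": "Subnet 3", "as_path": [65005, 65006]}

    print(accept_route(65002, looped))  # False - would re-import its own path
    print(accept_route(65002, fresh))   # True  - a genuinely new path (e.g. via Router 6)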
Two proposals for breaking this form of routing loop have been
discussed.  A redirect in this example would have no effect, since
Router 2 still has a route, just with different path attributes.  A
second proposal is that when a route changes in any way, the
advertising NBMA cut-through router invalidates the advertisement for
some time period.  This is similar to the notion of Poison Reverse
in distance vector routing protocols.  In this example, Router 2
would eventually readvertise a route since a route through Router 6
exists.  When Router 1 discovers this route, it will advertise it to
Subnet 1 and form the loop.  Without path information, Router 1 cannot
distinguish between a loop and restoration of normal service through
Link 1.

The loop in Figure 4 can be prevented by configuring Router 4 or
Router 5 to refuse to use the reverse path.  This would break backup
connectivity through Router 8 if L1 and L3 failed.  The loop can also
be broken by configuring Router 2 to refuse to use the path through
Router 5 unless it could not reach the NBMA.  Special configuration
of Router 2 would work as long as Router 2 was not distanced from
Router 3 and Router 5 by additional subnets such that it could not
determine which path was in use.  If Subnet 1 is in a different AS or
RD than Subnet 2 or Subnet 4, then the decision at Router 2 could be
based on path information.

In order for loops to be prevented by special configuration at
the NBMA border router, that router would need to know all paths
that could lead back to the NBMA.  The same argument, that special
configuration could overcome loss of path information, was posed in
favor of retaining the use of the EGP protocol defined in the now
historic RFC--904 [11].  This turned out to be unmanageable, with
routing problems occurring when topology was changed elsewhere.

8 IP Over ATM Proposals

8.1 The Classical IP Model

The Classical IP Model was suggested at the Spring 1993 IETF meeting
[8] and retains the classical IP subnet architecture.  This model
simply consists of cascading instances of IP subnets with IP-level (or
L3) routers at IP subnet borders.  An example realization of this
model consists of a concatenation of three IP subnets.  This is shown
in Figure 5.  Forwarding IP packets over this Classical IP model is
straightforward using already well established routing techniques and
protocols.

Figure 5: The Classical IP model as a concatenation of three separate
ATM IP subnets.

SVC-based ATM IP subnets are simplified in that:

o The number of hosts which must be directly connected at any given
  time is limited to those that may actually exchange traffic.

o The ATM network is capable of setting up connections between
  any pair of hosts.  Consistent with the standard IP routing
  algorithm [2], connectivity to the ``outside'' world is achieved
  only through a router, which may provide firewall functionality
  if so desired.

o The IP subnet supports an efficient mechanism for address
  resolution.

Issues addressed by the IP Over ATM Working Group, and some of the
resolutions, for this model are:

o Methods of encapsulation and multiplexing.  This issue is
  addressed in RFC--1483 [6], in which two methods of encapsulation
  are defined, an LLC/SNAP and a per-VC multiplexing option.

o The definition of an address resolution server (defined in
  RFC--1577).

o Defining the default MTU size.  This issue is addressed in
  RFC--1626 [1], which proposes the use of the MTU discovery
  protocol (RFC--1191 [9]).

o Support for IP multicasting.  In the summer of 1994, work began
  on the issue of supporting IP multicasting over the SVC LATM
  model.  The proposal for IP multicasting is currently defined by
  a set of IP over ATM WG Internet Drafts, referred to collectively
  as the IPMC drafts.  In order to support IP multicasting the
  ATM subnet must either support point-to-multipoint SVCs, or
  multicast servers, or both.

o Defining interim SVC parameters, such as QoS parameters and
  time-out values.

o Signaling and negotiations of parameters such as MTU size
  and method of encapsulation.  RFC--1755 [10] describes an
  implementation agreement for routers signaling the ATM network
  to establish SVCs initially based upon the ATM Forum's UNI
  version 3.0 specification [4], and eventually to be based
  upon the ATM Forum's UNI version 3.1 and later specifications.
859 Topics addressed in RFC--1755 include (but are not limited to) 860 VC management procedures, e.g., when to time-out SVCs, QOS 861 parameters, service classes, explicit setup message formats for 862 various encapsulation methods, node (host or router) to node 863 negotiations, etc. 865 RFC-1577 is also applicable to PVC-based subnets. Full mesh PVC 866 connectivity is required. 868 For more information see RFC--1577 [8]. 870 8.2 The ROLC NHRP Model 872 The Next Hop Resolution Protocol (NHRP), currently a draft defined 873 by the Routing Over Large Clouds Working Group (ROLC), performs 874 address resolution to accomplish direct connections across IP subnet 875 boundaries. NHRP can supplement RFC--1577 ARP. There has been recent 876 discussion of replacing RFC--1577 ARP with NHRP. NHRP can also perform 877 a proxy address resolution to provide the address of the border router 878 serving a destination off of the NBMA which is only served by a 879 single router on the NBMA. NHRP as currently defined cannot be used 880 in this way to support addresses learned from routers for which the 881 same destinations may be heard at other routers, without the risk of 882 creating persistent routing loops. 884 8.3 ``Conventional'' Model 886 The ``Conventional Model'' assumes that a router can relay IP packets 887 cell by cell, with the VPI/VCI identifying a flow between adjacent 888 routers rather than a flow between a pair of nodes. A latency 889 advantage can be provided if cell interleaving from multiple IP 890 packets is allowed. Interleaving frames within the same VCI requires 891 an ATM AAL such as AAL3/4 rather than AAL5. Cell forwarding is 892 accomplished through a higher level mapping, above the ATM VCI layer. 894 The conventional model is not under consideration by the IP/ATM WG. 895 The COLIP WG has been formed to develop protocols based on the 896 conventional model. 898 8.4 The Peer Model 900 The Peer Model places IP routers/gateways on an addressing peer basis 901 with corresponding entities in an ATM cloud (where the ATM cloud 902 may consist of a set of ATM networks, inter-connected via UNI or 903 P-NNI interfaces). ATM network entities and the attached IP hosts 904 or routers exchange call routing information on a peer basis by 905 algorithmically mapping IP addressing into the NSAP space. Within the 906 ATM cloud, ATM network level addressing (NSAP-style), call routing and 907 packet formats are used. 909 In the Peer Model no provision is made for selection of primary path 910 and use of alternate paths in the event of primary path failure 911 in reaching multihomed non-ATM destinations. This will limit the 912 topologies for which the peer model alone is applicable to only those 913 topologies in which non-ATM networks are singly homed, or where loss 914 of backup connectivity is not an issue. The Peer Model may be used to 915 avoid the need for an address resolution protocol and in a proxy-ARP 916 mode for stub networks, in conjunction with other mechanisms suitable 917 to handle multihomed destinations. 919 During the discussions of the IP over ATM working group, it was felt 920 that the problems with the end-to-end peer model were much harder than 921 any other model, and had more unresolved technical issues. While 922 encouraging interested individuals/companies to research this area, it 923 was not an initial priority of the working group to address these 924 issues. The ATM Forum Network Layer Multiprotocol Working Group has 925 reached a similar conclusion. 
927 8.5 The PNNI and the Integrated Models 929 The Integrated model (proposed and under study within the 930 Multiprotocol group of ATM Forum) considers a single routing protocol 931 to be used for both IP and for ATM. A single routing information 932 exchange is used to distribute topological information. The routing 933 computation used to calculate routes for IP will take into account the 934 topology, including link and node characteristics, of both the IP and 935 ATM networks and calculates an optimal route for IP packets over the 936 combined topology. 938 The PNNI is a hierarchical link state routing protocol with multiple 939 link metrics providing various available QoS parameters given current 940 loading. Call route selection takes into account QoS requirements. 941 Hysteresis is built into link metric readvertisements in order 942 to avoid computational overload and topological hierarchy serves 943 to subdivide and summarize complex topologies, helping to bound 944 computational requirements. 946 Integrated Routing is a proposal to use PNNI routing as an IP routing 947 protocol. There are several sets of technical issues that need to be 948 addressed, including the interaction of multiple routing protocols, 949 adaptation of PNNI to broadcast media, support for NHRP, and others. 950 These are being investigated. However, the ATM Forum MPOA group is 951 not currently performing this investigation. Concerned individuals 952 are, with an expectation of bringing the work to the ATM Forum and the 953 IETF. 955 PNNI has provisions for carrying uninterpreted information. While not 956 yet defined, a compatible extension of the base PNNI could be used to 957 carry external routing attributes and avoid the routing loop problems 958 described in Section 7. 960 Figure 6: The ATM transition model assuming the presence of gateways 961 or routers between the ATM networks and the ATM peer networks. 963 8.6 Transition Models 965 Finally, it is useful to consider transition models, lying somewhere 966 between the Classical IP Models and the Peer and Integrated Models. 967 Some possible architectures for transition models have been suggested 968 by Fong Liaw. Others are possible, for example Figure 6 showing a 969 Classical IP transition model which assumes the presence of gateways 970 between ATM networks and ATM Peer networks. 972 Some of the models described in the prior sections, most notably 973 the Integrated Model, anticipate the need for mixed environment with 974 complex routing topologies. These inherently support transition 975 (possibly with an indefinite transition period). Models which provide 976 no transition support are primarily of interest to new deployments 977 which make exclusive, or near exclusive use of ATM or deployments 978 capable of wholesale replacement of existing networks or willing to 979 retain only non-ATM stub networks. 981 For some models, most notably the Peer Model, the ability to attach 982 to a large non-ATM or mixed internetwork is infeasible without routing 983 support at a higher level, or at best may pose interconnection 984 topology constraints (for example: single point of attachment and a 985 static default route). If a particular model requires routing support 986 at a higher level a large deployment will need to be subdivided 987 to provide scalability at the higher level, which for some models 988 degenerates back to the Classical model. 
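As a toy illustration of the single routing computation described in
Section 8.5, the following Python sketch runs one shortest path
calculation over a graph holding both ATM and IP adjacencies.  The
topology, node names, and metrics are entirely hypothetical, and
PNNI's actual computation is QoS-constrained and hierarchical; this
only shows what ``an optimal route over the combined topology'' means.

    import heapq

    # One graph holding both ATM and IP adjacencies; the weights stand in
    # for the link metrics an integrated computation would consider.
    COMBINED = {
        "Host A":   {"ATM Sw 1": 1},
        "ATM Sw 1": {"ATM Sw 2": 1, "Router X": 2},
        "ATM Sw 2": {"Router Y": 1},
        "Router X": {"Router Y": 5},
        "Router Y": {"Host B": 1},
    }

    def shortest_path(graph, src, dst):
        """Single shortest-path computation over the combined IP/ATM topology."""
        queue, seen = [(0, src, [src])], set()
        while queue:
            cost, node, path = heapq.heappop(queue)
            if node == dst:
                return cost, path
            if node in seen:
                continue
            seen.add(node)
            for nbr, weight in graph.get(node, {}).items():
                heapq.heappush(queue, (cost + weight, nbr, path + [nbr]))
        return None

    print(shortest_path(COMBINED, "Host A", "Host B"))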
990 9 Application of the Working Group's and Related Documents 992 The IP Over ATM Working Group has generated several Internet-Drafts 993 and RFCs. This section identifies the relationship of these and other 994 related documents to the various IP Over ATM Models identified in this 995 document. The Drafts and RFCs produced to date are the following 996 references, RFC--1483 [6], RFC--1577 [8], RFC--1626 [1], RFC--1755 997 [10] and the IPMC drafts. The ROLC WG has produced the NHRP draft. 998 Table 5 gives a summary of these documents and their relationship to 999 the various IP Over ATM Models. 1001 Acknowledgments: 1003 This draft is the direct result of the numerous discussions of the 1004 IP over ATM Working Group of the Internet Engineering Task Force. 1005 The authors also had the benefit of several private discussions 1006 with H. Nguyen of AT&T Bell Laboratories. Brian Carpenter of CERN 1007 was kind enough to contribute the TULIP and TUNIC sections to this 1008 draft. Grenville Armitage of Bellcore was kind enough to contribute 1009 the sections on VC binding, encapsulations and the use of B-LLI 1010 information elements to signal such bindings. The text of Appendix A 1011 was pirated liberally from Anthony Alles' of Cisco posting on the IP 1012 over ATM discussion list (and modified at the authors' discretion). 1013 M. Ohta provided a description of the Conventional Model (again which 1014 the authors modified at their discretion). This draft also has 1015 benefitted from numerous suggestions from John T. Amenyo of ANS, Joel 1016 Halpern of Newbridge, and Andy Malis of Ascom-Timplex. 1018 Authors' Addresses: 1020 Robert G. Cole 1021 AT&T Bell Laboratories 1022 101 Crawfords Corner Road, Rm. 3L-533 1023 Holmdel, NJ 07733 1024 Phone: (908) 949-1950 1025 Fax: (908) 949-8887 1026 Email: rgc@qsun.att.com 1028 David H. Shur 1029 AT&T Bell Laboratories 1030 101 Crawfords Corner Road, Rm. 1F-338 1031 Holmdel, NJ 07733 1032 Phone: (908) 949-6719 1033 Documents Summary 1034 ----------------+------------------------------------------------- 1035 RFC-1483 _ How to identify/label multiple 1036 _ packet/frame-based protocols multiplexed over 1037 _ ATM AAL5. Applies to any model dealing with IP 1038 _ over ATM AAL5. 1039 _ 1040 RFC-1577 _ Model for transporting IP and ARP over ATM AAL5 1041 _ in an IP subnet where all nodes share a common 1042 _ IP network prefix. Includes ARP server/Inv-ARP 1043 _ packet formats and procedures for SVC/PVC 1044 _ subnets. 1045 _ 1046 RFC-1626 _ Specifies default IP MTU size to be used with 1047 _ ATM AAL5. Requires use of PATH MTU discovery. 1048 _ Applies to any model dealing with IP over ATM 1049 _ AAL5 1050 _ 1051 RFC-1755 _ Defines how implementations of IP over ATM 1052 _ should use ATM call control signaling 1053 _ procedures, and recommends values of mandatory 1054 _ and optional IEs focusing particularly on the 1055 _ Classical IP model. 1056 _ 1057 IPMC _ Defines how to support IP multicast in Classical 1058 _ IP model using either (or both) meshes of 1059 _ point-to-multipoint ATM VCs, or multicast 1060 _ server(s). IPMC is work in progress. 1061 _ 1062 NHRP _ Describes a protocol that can be used by hosts 1063 _ and routers to determine the NBMA next hop 1064 _ address of a destina- 1065 tion in ``NBMA connectivity'' 1066 _ of the sending node. If the destination is not 1067 _ connected to the NBMA fabric, the IP and NBMA 1068 _ addresses of preferred egress points are 1069 _ returned. NHRP is work in progress (ROLC WG). 
Curtis Villamizar
ANS
100 Clearbrook Road
Elmsford, NY 10523
Email: curtis@ans.net

References

[1]  R. Atkinson.  Default IP MTU for use over ATM AAL5.  Request for
     Comments (Proposed Standard) RFC 1626, Internet Engineering Task
     Force, May 1994.

[2]  R. Braden and J. Postel.  Requirements for Internet gateways.
     Request for Comments (Standard) RFC 1009, Internet Engineering
     Task Force, June 1987.  Obsoletes RFC-985.

[3]  R. Braden, J. Postel, and Y. Rekhter.  Internet Architecture
     Extensions for Shared Media.  Request for Comments
     (Informational) RFC 1620, Internet Engineering Task Force, May
     1994.

[4]  ATM Forum.  ATM User-Network Interface Specification.  Prentice
     Hall, September 1993.

[5]  J. Garrett, J. Hagan, and J. Wong.  Directed ARP.  Request for
     Comments (Experimental) RFC 1433, Internet Engineering Task
     Force, March 1993.

[6]  J. Heinanen.  Multiprotocol Encapsulation over ATM Adaptation
     Layer 5.  Request for Comments (Proposed Standard) RFC 1483,
     Internet Engineering Task Force, July 1993.

[7]  J. Heinanen and R. Govindan.  NBMA Address Resolution Protocol
     (NARP).  Request for Comments (Experimental) RFC 1735, Internet
     Engineering Task Force, December 1994.

[8]  M. Laubach.  Classical IP and ARP over ATM.  Request for
     Comments (Proposed Standard) RFC 1577, Internet Engineering Task
     Force, January 1994.

[9]  J. Mogul and S. Deering.  Path MTU discovery.  Request for
     Comments (Draft Standard) RFC 1191, Internet Engineering Task
     Force, November 1990.

[10] M. Perez, F. Liaw, D. Grossman, A. Mankin, and A. Hoffman.  ATM
     signalling support for IP over ATM.  Request for Comments
     (Proposed Standard) RFC 1755, Internet Engineering Task Force,
     January 1995.

[11] D. Mills.  Exterior Gateway Protocol formal specification.
     Request for Comments (Historic) STD 18, RFC 904, Internet
     Engineering Task Force, April 1984.

A Potential Interworking Scenarios to be Supported by ARP

The architectural model of the VC routing protocol being defined by
the Private Network-to-Network Interface (P-NNI) working group of the
ATM Forum categorizes ATM networks into two types:

o  Those that participate in the VC routing protocols and use
   NSAP-modeled addresses (UNI 3.0 [4]), referred to as private
   networks for short, and

o  Those that do not participate in the VC routing protocol.
   Typically, but possibly not in all cases, public ATM networks that
   use native mode E.164 addresses (UNI 3.0 [4]) will fall into this
   latter category.

The issue for ARP, then, is to know what information must be returned
to allow connectivity within and between these two types of network.
Consider the following scenarios:

o  Private host to private host, no intervening public transit
   network(s): This clearly requires that ARP return only the
   NSAP-modeled address of the end host.

o  Private host to private host, through intervening public networks:
   In this case, the connection setup from host A to host B must
   transit the public network(s).
   This requires that, at each ingress point to the public network, a
   routing decision be made as to which is the correct egress point
   from that public network to the next hop private ATM switch, and
   that the native E.164 address of that egress point be found
   (finding this is a VC routing problem, probably requiring
   configuration of the public network links and connectivity
   information).  ARP should return, at least, the NSAP address of
   the endpoint, in which case the mapping of the NSAP addresses to
   the E.164 address, as specified in [4], is the responsibility of
   the ingress switch to the public network.

o  Private network host to public network host: Connectivity between
   the public node and the private nodes requires the same kind of
   routing information discussed above; namely, the directly attached
   public network needs to know the (NSAP format) ATM address of the
   private station, and the native E.164 address of the egress point
   from the public network to that private network (or to that of an
   intervening transit private network, etc.).  There is some
   argument that the ARP mechanism could return this egress point's
   native E.164 address, but it may be considered inconsistent for
   ARP to return what is, to some, clearly routing information and,
   to others, required signaling information.

In the opposite direction, the private network node can use, and
should only get, the E.164 address of the directly attached public
node.  In what format should this information be carried?  This
question is answered clearly by Note 9 of Annex A of UNI 3.0 [4],
viz:

   ``A call originated on a Private UNI destined for a host which
   only has a native (non-NSAP) E.164 address (i.e., a system
   directly attached to a public network supporting the native E.164
   format) will code the Called Party Number information element in
   the (NSAP) E.164 private ATM Address Format, with the RD, AREA,
   and ESI fields set to zero.  The Called Party Subaddress
   information element is not used.''

Hence, in this case, ARP should return the E.164 address of the
public ATM station in NSAP format.  This essentially implies an
algorithmic resolution between the native E.164 and NSAP addresses of
directly attached public stations.

o  Public network host to public network host, no intervening private
   network: In this case, the Q.2931 requests would clearly use
   native E.164 address formats.

o  Public network host to public network host, intervening private
   network: Same as the case immediately above, since getting to and
   through the private network is a VC routing issue, not an
   addressing issue.

So several issues arise for ARP in supporting arbitrary connections
between hosts on private and public networks.  One is how to
distinguish between a native E.164 address and an E.164 address
encoded in an NSAP-modeled address.  Another is what information
should be supplied by ARP: for example, in the public-to-private
scenario, should ARP return only the private NSAP-modeled address, or
both an E.164 address for a point of attachment between the public
and private networks and the private NSAP-modeled address?
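To make the encoding implied by Note 9 concrete, the following sketch
builds the 20-octet address that ARP might return for a public ATM
station: the AFI identifies the E.164 format, the native E.164 number
is carried in the next eight octets, and the RD, AREA, ESI (and
selector) fields are left at zero.  This is an illustration only; the
AFI value, the field layout, and the BCD padding rule assumed here
(leading zeros to 15 digits plus a trailing filler semi-octet) should
be checked against [4].

   /* Illustrative, non-normative sketch of encoding a native E.164
    * number into the ``E.164 NSAP'' ATM address format, with RD,
    * AREA, and ESI set to zero as described in Note 9 of Annex A. */
   #include <stdio.h>
   #include <string.h>

   #define AFI_E164 0x45   /* assumed AFI value for the E.164 format */

   /* Encode an E.164 number (ASCII digits) into a 20-octet address.
    * Returns 0 on success, -1 if the number exceeds 15 digits.      */
   static int e164_to_nsap(const char *e164, unsigned char addr[20])
   {
       char digits[15];
       size_t len = strlen(e164), i;

       if (len > 15)
           return -1;

       /* Pad with leading zeros to 15 digits. */
       memset(digits, '0', 15);
       memcpy(digits + (15 - len), e164, len);

       memset(addr, 0, 20);     /* RD, AREA, ESI, SEL remain zero    */
       addr[0] = AFI_E164;

       /* Pack 15 BCD digits plus a 0xF filler into octets 1..8.     */
       for (i = 0; i < 8; i++) {
           unsigned hi = (unsigned)(digits[2 * i] - '0');
           unsigned lo = (2 * i + 1 < 15)
                             ? (unsigned)(digits[2 * i + 1] - '0')
                             : 0xF;
           addr[1 + i] = (unsigned char)((hi << 4) | lo);
       }
       return 0;
   }

   int main(void)
   {
       unsigned char addr[20];
       size_t i;

       if (e164_to_nsap("16145551234", addr) == 0) {
           for (i = 0; i < 20; i++)
               printf("%02X%s", addr[i], i < 19 ? "." : "\n");
       }
       return 0;
   }

For the hypothetical number 16145551234, the sketch produces
45.00.00.16.14.55.51.23.4F followed by eleven zero octets; an ARP
service following this convention would hand back the full 20-octet
value, leaving the E.164-to-NSAP mapping algorithmic rather than
table-driven.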