Network Working Group                                    F. Templin, Ed.
Internet-Draft                             Boeing Research & Technology
Intended status: Informational                         November 30, 2011
Expires: June 2, 2012

              The Internet Routing Overlay Network (IRON)
                       draft-templin-ironbis-09.txt

Abstract

   Since the Internet must continue to support escalating growth due to
   increasing demand, it is clear that current routing architectures and
   operational practices must be updated.  This document proposes an
   Internet Routing Overlay Network (IRON) architecture that supports
   sustainable growth while requiring no changes to end systems and no
   changes to the existing routing system.  In addition to routing
   scaling, IRON further addresses other important issues including
   mobility management, mobile networks, multihoming, traffic
   engineering and NAT traversal.  While business considerations are an
   important determining factor for widespread adoption, they are out of
   scope for this document.

Status of this Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF).  Note that other groups may also distribute
   working documents as Internet-Drafts.  The list of current Internet-
   Drafts is at http://datatracker.ietf.org/drafts/current/.

   Internet-Drafts are draft documents valid for a maximum of six months
   and may be updated, replaced, or obsoleted by other documents at any
   time.  It is inappropriate to use Internet-Drafts as reference
   material or to cite them other than as "work in progress."

   This Internet-Draft will expire on June 2, 2012.

Copyright Notice

   Copyright (c) 2011 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.
Please review these documents 50 carefully, as they describe your rights and restrictions with respect 51 to this document. Code Components extracted from this document must 52 include Simplified BSD License text as described in Section 4.e of 53 the Trust Legal Provisions and are provided without warranty as 54 described in the Simplified BSD License. 56 Table of Contents 58 1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . . 5 59 2. Terminology . . . . . . . . . . . . . . . . . . . . . . . . . 6 60 3. The Internet Routing Overlay Network . . . . . . . . . . . . . 8 61 3.1. IRON Client . . . . . . . . . . . . . . . . . . . . . . . 10 62 3.2. IRON Serving Router . . . . . . . . . . . . . . . . . . . 11 63 3.3. IRON Relay Router . . . . . . . . . . . . . . . . . . . . 11 64 4. IRON Organizational Principles . . . . . . . . . . . . . . . . 12 65 5. IRON Control Plane Operation . . . . . . . . . . . . . . . . . 14 66 5.1. IRON Client Operation . . . . . . . . . . . . . . . . . . 14 67 5.2. IRON Server Operation . . . . . . . . . . . . . . . . . . 15 68 5.3. IRON Relay Operation . . . . . . . . . . . . . . . . . . . 15 69 6. IRON Forwarding Plane Operation . . . . . . . . . . . . . . . 16 70 6.1. IRON Client Operation . . . . . . . . . . . . . . . . . . 16 71 6.2. IRON Server Operation . . . . . . . . . . . . . . . . . . 17 72 6.3. IRON Relay Operation . . . . . . . . . . . . . . . . . . . 18 73 7. IRON Reference Operating Scenarios . . . . . . . . . . . . . . 18 74 7.1. Both Hosts within Same IRON Instance . . . . . . . . . . . 18 75 7.1.1. EUNs Served by Same Server . . . . . . . . . . . . . . 19 76 7.1.2. EUNs Served by Different Servers . . . . . . . . . . . 20 77 7.2. Mixed IRON and Non-IRON Hosts . . . . . . . . . . . . . . 23 78 7.2.1. From IRON Host A to Non-IRON Host B . . . . . . . . . 23 79 7.2.2. From Non-IRON Host B to IRON Host A . . . . . . . . . 25 80 7.3. Hosts within Different IRON Instances . . . . . . . . . . 26 81 8. 
Mobility, Multiple Interfaces, Multihoming, and Traffic 82 Engineering . . . . . . . . . . . . . . . . . . . . . . . . . 26 83 8.1. Mobility Management and Mobile Networks . . . . . . . . . 27 84 8.2. Multiple Interfaces and Multihoming . . . . . . . . . . . 27 85 8.3. Traffic Engineering . . . . . . . . . . . . . . . . . . . 28 86 9. Renumbering Considerations . . . . . . . . . . . . . . . . . . 28 87 10. NAT Traversal Considerations . . . . . . . . . . . . . . . . . 28 88 11. Multicast Considerations . . . . . . . . . . . . . . . . . . . 29 89 12. Nested EUN Considerations . . . . . . . . . . . . . . . . . . 29 90 12.1. Host A Sends Packets to Host Z . . . . . . . . . . . . . . 31 91 12.2. Host Z Sends Packets to Host A . . . . . . . . . . . . . . 31 92 13. Implications for the Internet . . . . . . . . . . . . . . . . 32 93 14. Additional Considerations . . . . . . . . . . . . . . . . . . 33 94 15. Related Initiatives . . . . . . . . . . . . . . . . . . . . . 33 95 16. IANA Considerations . . . . . . . . . . . . . . . . . . . . . 34 96 17. Security Considerations . . . . . . . . . . . . . . . . . . . 34 97 18. Acknowledgements . . . . . . . . . . . . . . . . . . . . . . . 35 98 19. References . . . . . . . . . . . . . . . . . . . . . . . . . . 35 99 19.1. Normative References . . . . . . . . . . . . . . . . . . . 35 100 19.2. Informative References . . . . . . . . . . . . . . . . . . 36 101 Appendix A. IRON Operation over Internetworks with Different 102 Address Families . . . . . . . . . . . . . . . . . . 38 103 Appendix B. Scaling Considerations . . . . . . . . . . . . . . . 39 104 Author's Address . . . . . . . . . . . . . . . . . . . . . . . . . 41 106 1. Introduction 108 Growth in the number of entries instantiated in the Internet routing 109 system has led to concerns regarding unsustainable routing scaling 110 [RFC4984][RADIR]. 
   Operational practices such as the increased use of
   multihoming with Provider-Independent (PI) addressing are resulting
   in more and more de-aggregated prefixes being injected into the
   routing system from more and more end user networks.  Furthermore,
   depletion of the public IPv4 address space has raised concerns for
   both increased de-aggregation (leading to yet further routing system
   entries) and an impending address space run-out scenario.  At the
   same time, the IPv6 routing system is beginning to see growth
   [BGPMON], which must be managed in order to avoid the same routing
   scaling issues the IPv4 Internet now faces.  Since the Internet must
   continue to scale to accommodate increasing demand, it is clear that
   new methodologies and operational practices are needed.

   Several related works have investigated routing scaling issues.
   Virtual Aggregation (VA) [GROW-VA] and Aggregation in Increasing
   Scopes (AIS) [EVOLUTION] are global routing proposals that introduce
   routing overlays with Virtual Prefixes (VPs) to reduce the number of
   entries required in each router's Forwarding Information Base (FIB)
   and Routing Information Base (RIB).  Routing and Addressing in
   Networks with Global Enterprise Recursion (RANGER) [RFC5720] examines
   recursive arrangements of enterprise networks that can apply to a
   very broad set of use-case scenarios [RFC6139].  IRON specifically
   adopts the RANGER Non-Broadcast, Multiple Access (NBMA) tunnel
   virtual-interface model, and uses Virtual Enterprise Traversal (VET)
   [INTAREA-VET], the Subnetwork Encapsulation and Adaptation Layer
   (SEAL) [INTAREA-SEAL], and Asymmetric Extended Route Optimization
   (AERO) [AERO] as its functional building blocks.
138 This document proposes an Internet Routing Overlay Network (IRON) 139 architecture with goals of supporting scalable routing and addressing 140 while requiring no changes to the Internet's Border Gateway Protocol 141 (BGP) interdomain routing system [RFC4271]. IRON observes the 142 Internet Protocol standards [RFC0791][RFC2460], while other network- 143 layer protocols that can be encapsulated within IP packets (e.g., 144 OSI/CLNP (Connectionless Network Protocol) [RFC1070], etc.) are also 145 within scope. 147 IRON borrows concepts from VA and AIS, and further borrows concepts 148 from the Internet Vastly Improved Plumbing (Ivip) [IVIP-ARCH] 149 architecture proposal along with its associated Translating Tunnel 150 Router (TTR) mobility extensions [TTRMOB]. Indeed, the TTR model to 151 a great degree inspired the IRON mobility architecture design 152 discussed in this document. The Network Address Translator (NAT) 153 traversal techniques adapted for IRON were inspired by the Simple 154 Address Mapping for Premises Legacy Equipment (SAMPLE) proposal 155 [SAMPLE]. 157 IRON is a global virtual routing system comprising Virtual Service 158 Provider (VSP) overlay networks that service Aggregated Prefixes 159 (APs) from which more-specific Client Prefixes (CPs) are delegated. 160 IRON is motivated by a growing end user demand for mobility 161 management, mobile networks, multihoming and traffic engineering 162 while using stable addressing to minimize dependence on network 163 renumbering [RFC4192][RFC5887]. IRON VSP overlay network instances 164 use the existing IPv4 and IPv6 Internets as virtual NBMA links for 165 tunneling inner network layer packets within outer network layer 166 headers (see Section 3). Each IRON instance requires deployment of a 167 small number of relays and servers in the Internet, as well as client 168 devices that connect End User Networks (EUNs). No modifications to 169 hosts, and no modifications to existing routers, are required. 
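The AP-to-CP delegation relationship described above can be sketched in a few lines of Python.  This is purely illustrative and not part of any IRON specification: a documentation /24 stands in for the (normally much shorter) AP, the /28 CP size follows the example sizes given in the terminology, and the client names are invented.

```python
import ipaddress

# Hedged sketch: a VSP owns an Aggregated Prefix (AP) and delegates
# more-specific Client Prefixes (CPs) from it.  The prefix values and
# client identifiers below are assumptions for the example only.
ap = ipaddress.ip_network("192.0.2.0/24")

# Carve the AP into /28 CPs and delegate them to Clients in order.
cp_pool = ap.subnets(new_prefix=28)
delegations = {}  # client identifier -> delegated CP

for client in ("client-a", "client-b", "client-c"):
    delegations[client] = next(cp_pool)

print(delegations["client-a"])  # → 192.0.2.0/28
```

End users would then assign addresses from their CP (the CPAs) to interfaces within their EUNs; every delegated CP remains aggregatable under the AP that the VSP's Relays advertise into the Internet routing system.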
The 170 following sections discuss details of the IRON architecture. 172 2. Terminology 174 This document makes use of the following terms: 176 Aggregated Prefix (AP): 177 a short network-layer prefix (e.g., an IPv4 /16, an IPv6 /20, an 178 OSI Network Service Access Protocol (NSAP) prefix, etc.) that is 179 owned and managed by a Virtual Service Provider (VSP). The term 180 "Aggregated Prefix (AP)" used in this document is the equivalent 181 to the term "Virtual Prefix (VP)" used in Virtual Aggregation (VA) 182 [GROW-VA]. 184 Client Prefix (CP): 185 a more-specific network-layer prefix (e.g., an IPv4 /28, an IPv6 186 /56, etc.) derived from an AP and delegated to a client end user 187 network. 189 Client Prefix Address (CPA): 190 a network-layer address belonging to a CP and assigned to an 191 interface in an End User Network (EUN). 193 End User Network (EUN): 194 an edge network that connects an end user's devices (e.g., 195 computers, routers, printers, etc.) to the Internet. IRON EUNs 196 are mobile networks, and can change their ISP attachments without 197 having to renumber. 199 Internet Routing Overlay Network (IRON): 200 the union of all VSP overlay network instances. Each such IRON 201 instance supports routing within the overlay through encapsulation 202 of inner packets within outer headers. Each IRON instance appears 203 as a virtual enterprise network, and connects to the global 204 Internet the same as for any Autonomous System (AS). 206 IRON Client Router/Host ("Client"): 207 a customer device that logically connects EUNs to an IRON instance 208 via an NBMA tunnel virtual interface. The device is normally a 209 router, but may instead be a host if the "EUN" is a singleton end 210 system. 212 IRON Serving Router ("Server"): 213 a VSP's IRON instance router that provides forwarding and mapping 214 services for Clients. 216 IRON Relay Router ("Relay"): 217 a VSP's router that acts as a relay between the IRON instance and 218 the (native) Internet. 
220 IRON Agent (IA): 221 generically refers to any of an IRON Client/Server/Relay. 223 IRON Instance: 224 a set of IRON Agents deployed by a VSP to service EUNs through 225 automatic tunneling over the Internet. 227 Internet Service Provider (ISP): 228 a service provider that connects an IA to the Internet. In other 229 words, an ISP is responsible for providing IAs with data link 230 services for basic Internet connectivity. 232 Locator: 233 an IP address assigned to the interface of a router or end system 234 connected to a public or private network over which tunnels are 235 formed. Locators taken from public IP prefixes are routable on a 236 global basis, while locators taken from private IP prefixes 237 [RFC1918] are made public via Network Address Translation (NAT). 239 Routing and Addressing in Networks with Global Enterprise Recursion 240 (RANGER): 241 an architectural examination of virtual overlay networks applied 242 to enterprise network scenarios, with implications for a wider 243 variety of use cases. 245 Subnetwork Encapsulation and Adaptation Layer (SEAL): 246 an encapsulation sublayer that provides extended identification 247 fields and control messages to ensure deterministic network-layer 248 feedback. 250 Virtual Enterprise Traversal (VET): 251 a method for discovering border routers and forming dynamic 252 tunnel-neighbor relationships over enterprise networks (or sites) 253 with varying properties. 255 Virtual Service Provider (VSP): 256 a company that owns and manages a set of APs from which it 257 delegates CPs to EUNs. 259 VSP Overlay Network: 260 the same as defined above for IRON Instance. 262 3. The Internet Routing Overlay Network 264 The Internet Routing Overlay Network (IRON) is the union of all 265 Virtual Service Provider (VSP) overlay networks (also known as "IRON 266 instances"). 
   IRON provides a number of important services to End
   User Networks (EUNs) that are not well supported in the current
   Internet architecture, including routing scaling, mobility
   management, mobile networks, multihoming, traffic engineering and NAT
   traversal.  While the principles presented in this document are
   discussed within the context of the public global Internet, they can
   also be applied to any other form of autonomous internetwork (e.g.,
   corporate enterprise networks, civil aviation networks, tactical
   military networks, etc.).  Hence, the terms "Internet" and
   "internetwork" are used interchangeably within this document.

   Each IRON instance consists of IRON Agents (IAs) that automatically
   tunnel the packets of end-to-end communication sessions within
   encapsulating headers used for Internet routing.  IAs use the Virtual
   Enterprise Traversal (VET) [INTAREA-VET] virtual NBMA link model in
   conjunction with the Subnetwork Encapsulation and Adaptation Layer
   (SEAL) [INTAREA-SEAL] to encapsulate inner network-layer packets
   within outer network layer headers, as shown in Figure 1.

                                        +-------------------------+
                                        |    Outer headers with   |
                                        ~    locator addresses    ~
                                        |      (IPv4 or IPv6)     |
                                        +-------------------------+
                                        |       SEAL Header       |
    +-------------------------+         +-------------------------+
    |   Inner Packet Header   |  -->    |   Inner Packet Header   |
    ~    with CPA addresses   ~  -->    ~    with CPA addresses   ~
    | (IPv4, IPv6, OSI, etc.) |  -->    | (IPv4, IPv6, OSI, etc.) |
    +-------------------------+         +-------------------------+
    |                         |  -->    |                         |
    ~    Inner Packet Body    ~  -->    ~    Inner Packet Body    ~
    |                         |  -->    |                         |
    +-------------------------+         +-------------------------+
                                        |       SEAL Trailer      |
                                        +-------------------------+

       Inner packet before                  Outer packet after
          encapsulation                       encapsulation

     Figure 1: Encapsulation of Inner Packets within Outer IP Headers

   VET specifies automatic tunneling and tunnel neighbor coordination
   mechanisms, where IAs appear as neighbors on an NBMA tunnel virtual
   link.  SEAL specifies the format and usage of the SEAL encapsulating
   header and trailer.  Additionally, Asymmetric Extended Route
   Optimization (AERO) [AERO] specifies the method for reducing routing
   path stretch.  Together, these documents specify elements of a SEAL
   Control Message Protocol (SCMP) used to deterministically exchange
   and authenticate neighbor discovery messages, route redirections,
   indications of Path Maximum Transmission Unit (PMTU) limitations,
   destination unreachables, etc.

   Each IRON instance comprises a set of IAs distributed throughout the
   Internet to provide internetworking services for a set of Aggregated
   Prefixes (APs).  (The APs may be owned either by the VSP, or by an
   enterprise network customer that hires the VSP to manage its APs.)
   VSPs delegate sub-prefixes from APs, which they provide to end users
   as Client Prefixes (CPs).  In turn, end users assign CPs to Client
   IAs, which connect their End User Networks (EUNs) to the VSP IRON
   instance.

   VSPs may have no affiliation with the ISP networks from which end
   users obtain their basic Internet connectivity.  In that case, the
   VSP can service its end users without the need to coordinate its
   activities with ISPs or other VSPs.  Further details on VSP business
   considerations are out of scope for this document.

   IRON requires no changes to end systems or to existing routers.
   Instead, IAs are deployed either as new platforms or as modifications
   to existing platforms.  IAs may be deployed incrementally without
   disturbing the existing Internet routing system, and act as waypoints
   (or "cairns") for navigating VSP overlay networks.  The functional
   roles for IAs are described in the following sections.

3.1.  IRON Client

   An IRON Client (or, simply, "Client") is a router or host that
   logically connects EUNs to the VSP's IRON instance via tunnels, as
   shown in Figure 2.  Clients obtain CPs from their VSPs and use them
   to number subnets and interfaces within the EUNs.

   Each Client connects to one or more Servers in the IRON instance
   which serve as default routers.  The Servers in turn consider this
   class of Clients as "connected" Clients.  Clients also dynamically
   discover destination-specific Servers through the receipt of Redirect
   messages.  These destination-specific Servers in turn consider this
   class of Clients as "foreign" Clients.

   A Client can be deployed on the same physical platform that also
   connects EUNs to the end user's ISPs, but it may also be deployed as
   a separate router within the EUN.  (This model applies even if the
   EUN connects to the ISP via a Network Address Translator (NAT) -- see
   Section 10.)  Finally, a Client may also be a simple end system that
   connects a singleton EUN and exhibits the outward appearance of a
   host.

                                            .-.
                                         ,-(  _)-.
    +--------+                        .-(_    (_  )-.
    | Client |----------------------(_      ISP      )
    +---+----+                        `-(______)-'
        |     <= T                        \            .-.
       .-.       u                         \        ,-(  _)-.
    ,-(  _)-.    n                       .-(_    (-  )-.
 .-(_    (_  )-. n                      (_    Internet   )
(_     EUN      )e                       `-(______)-'
 `-(______)-'    l                             ___
        |        s =>                        (:::)-.
    +----+---+                           .-(::::::::)
    |  Host  |                       .-(:::  IRON  :::)-.
    +--------+                      (::::  Instance ::::)
                                     `-(::::::::::::)-'
                                         `-(::::::)-'

         Figure 2: IRON Client Connecting EUN to IRON Instance

3.2.
IRON Serving Router 384 An IRON serving router (or, simply, "Server") is a VSP's router that 385 provides forwarding and mapping services within the IRON instance for 386 the CPs that have been delegated to end user Clients. In typical 387 deployments, a VSP will deploy many Servers for the IRON instance in 388 a globally distributed fashion (e.g., as depicted in Figure 3) around 389 the Internet so that Clients can discover those that are nearby. 391 +--------+ +--------+ 392 | Boston | | Tokyo | 393 | Server | | Server | 394 +--+-----+ ++-------+ 395 +--------+ \ / 396 | Seattle| \ ___ / 397 | Server | \ (:::)-. +--------+ 398 +------+-+ .-(::::::::)------+ Paris | 399 \.-(::: IRON :::)-. | Server | 400 (:::: Instance ::::) +--------+ 401 `-(::::::::::::)-' 402 +--------+ / `-(::::::)-' \ +--------+ 403 | Moscow + | \--- + Sydney | 404 | Server | +----+---+ | Server | 405 +--------+ | Cairo | +--------+ 406 | Server | 407 +--------+ 409 Figure 3: IRON Server Global Distribution Example 411 Each Server acts as a tunnel-endpoint router. The Server forms 412 bidirectional tunnel-neighbor relationships with each of its 413 connected Clients, and also serves as the unidirectional tunnel- 414 neighbor egress for dynamically discovered foreign Clients. Each 415 Server also forms bidirectional tunnel-neighbor relationships with a 416 set of Relays that can forward packets from the IRON instance out to 417 the native Internet and vice versa, as discussed in the next section. 419 3.3. IRON Relay Router 421 An IRON Relay Router (or, simply, "Relay") is a router that connects 422 the VSP's IRON instance to the Internet as an Autonomous System (AS). 423 The Relay therefore also serves as an Autonomous System Border Router 424 (ASBR) that is owned and managed by the VSP. 426 Each VSP configures one or more Relays that advertise the VSP's APs 427 into the IPv4 and/or IPv6 global Internet routing systems. 
Each 428 Relay associates with the VSP's IRON instance Servers, e.g., via 429 tunnel virtual links over the IRON instance, via a physical 430 interconnect such as an Ethernet cable, etc. The Relay role is 431 depicted in Figure 4. 433 .-. 434 ,-( _)-. 435 .-(_ (_ )-. 436 (_ Internet ) 437 `-(______)-' | +--------+ 438 | |--| Server | 439 +----+---+ | +--------+ 440 | Relay |----| +--------+ 441 +--------+ |--| Server | 442 _|| | +--------+ 443 (:::)-. (Physical Interconnects) 444 .-(::::::::) 445 +--------+ .-(::: IRON :::)-. +--------+ 446 | Server |=(:::: Instance ::::)=| Server | 447 +--------+ `-(::::::::::::)-' +--------+ 448 `-(::::::)-' 449 || (Tunnels) 450 +--------+ 451 | Server | 452 +--------+ 454 Figure 4: IRON Relay Router Connecting IRON Instance to Native 455 Internet 457 4. IRON Organizational Principles 459 The IRON consists of the union of all VSP overlay networks configured 460 over the Internet. Each such IRON instance represents a distinct 461 "patch" on the underlying Internet "quilt", where the patches are 462 stitched together by standard Internet routing. When a new IRON 463 instance is deployed, it becomes yet another patch on the quilt and 464 coordinates its internal routing system independently of all other 465 patches. 467 Each IRON instance connects to the Internet as an AS in the Internet 468 routing system using a public BGP Autonomous System Number (ASN). 469 The IRON instance maintains a set of Relays that serve as ASBRs as 470 well as a set of Servers that provide routing and addressing services 471 to Clients. Figure 5 depicts the logical arrangement of Relays, 472 Servers, and Clients in an IRON instance. 474 .-. 475 ,-( _)-. 476 .-(_ (_ )-. 477 (__ Internet _) 478 `-(______)-' 480 <------------ Relays ------------> 481 ________________________ 482 (::::::::::::::::::::::::)-. 483 .-(:::::::::::::::::::::::::::::) 484 .-(:::::::::::::::::::::::::::::::::)-. 
485 (::::::::::: IRON Instance :::::::::::::) 486 `-(:::::::::::::::::::::::::::::::::)-' 487 `-(::::::::::::::::::::::::::::)-' 489 <------------ Servers ------------> 490 .-. .-. .-. 491 ,-( _)-. ,-( _)-. ,-( _)-. 492 .-(_ (_ )-. .-(_ (_ )-. .-(_ (_ )-. 493 (__ ISP A _) (__ ISP B _) ... (__ ISP x _) 494 `-(______)-' `-(______)-' `-(______)-' 495 <----------- NATs ------------> 497 <----------- Clients and EUNs -----------> 499 Figure 5: IRON Organization 501 Each Relay connects the IRON instance directly to the underlying IPv4 502 and/or IPv6 Internets via external BGP (eBGP) peerings with 503 neighboring ASes. It also advertises the IPv4 APs managed by the VSP 504 into the IPv4 Internet routing system and advertises the IPv6 APs 505 managed by the VSP into the IPv6 Internet routing system. Relays 506 will therefore receive packets with CPA destination addresses sent by 507 end systems in the Internet and forward them to a Server that 508 connects the Client to which the corresponding CP has been delegated. 509 Finally, the IRON instance Relays maintain synchronization by running 510 interior BGP (iBGP) between themselves the same as for ordinary 511 ASBRs. 513 In a simple VSP overlay network arrangement, each Server can be 514 configured as an ASBR for a stub AS using a private ASN [RFC1930] to 515 peer with each IRON instance Relay the same as for an ordinary eBGP 516 neighbor. (The Server and Relay functions can instead be deployed 517 together on the same physical platform as a unified gateway.) Each 518 Server maintains a working set of connected Clients for which it 519 caches CP-to-Client mappings in its forwarding table. Each Server 520 also, in turn, propagates the list of CPs in its working set to its 521 neighboring Relays via eBGP. 
   Therefore, each Server only needs to
   track the CPs for its current working set of Clients, while each
   Relay will maintain a full CP-to-Server forwarding table that
   represents reachability information for all CPs in the IRON instance.

   Each Client obtains its basic Internet connectivity from ISPs, and
   connects to Servers to attach its EUNs to the IRON instance.  Each
   EUN can further connect to the IRON instance via multiple Clients as
   long as the Clients coordinate with one another, e.g., to mitigate
   EUN partitions.  Unlike Relays and Servers, Clients may use private
   addresses behind one or several layers of NATs.  Each Client
   initially discovers a list of nearby Servers, then forms a
   bidirectional tunnel-neighbor relationship with one or more Servers
   through an initial exchange followed by periodic keepalives.

   After a Client connects to Servers, it forwards initial outbound
   packets from its EUNs by tunneling them to a Server, which may, in
   turn, forward them to a nearby Relay within the IRON instance.  The
   Client may subsequently receive Redirect messages informing it of a
   more direct route through a different Server within the IRON instance
   that serves the final destination EUN.  This foreign Server in turn
   provides the Client with a unidirectional tunnel-neighbor egress for
   route optimization purposes.

   IRON can also be used to support APs of network-layer address
   families that cannot be routed natively in the underlying
   Internetwork (e.g., OSI/CLNP over the public Internet, IPv6 over
   IPv4-only Internetworks, IPv4 over IPv6-only Internetworks, etc.).
   Further details for the support of IRON APs of one address family
   over Internetworks based on different address families are discussed
   in Appendix A.

5.  IRON Control Plane Operation

   Each IRON instance supports routing through the control plane startup
   and runtime dynamic routing operation of IAs.
   The following sub-sections discuss control plane considerations for
   initializing and maintaining the IRON instance routing system.

5.1.  IRON Client Operation

   Each Client obtains one or more CPs in a secured exchange with the
   VSP as part of the initial end user registration.  Upon startup, the
   Client discovers a list of nearby VSP Servers via, e.g., a location
   broker, a well-known website, a static map, etc.

   After the Client obtains a list of nearby Servers, it initiates short
   transactions to connect to one or more Servers, e.g., via secured TCP
   connections.  During the transaction, each Server provides the Client
   with a CP and a symmetric secret key that the Client will use to sign
   and authenticate messages.  The Client in turn provides the Server
   with a set of link identifiers ("LINK_ID"s) that represent the
   Client's ISP connections.  The protocol details of the transaction
   are specific to the VSP, and hence out of scope for this document.

   After the Client connects to Servers, it configures default routes
   that list the Servers as next hops on the tunnel virtual interface.
   The Client may subsequently discover more-specific routes through
   receipt of Redirect messages.

5.2.  IRON Server Operation

   In a simple VSP overlay network arrangement, each IRON Server is
   provisioned with the locators for Relays within the IRON instance.
   The Server is further configured as an ASBR for a stub AS and uses
   eBGP with a private ASN to peer with each Relay.

   Upon startup, the Server reports the list of CPs it is currently
   serving to the overlay network Relays.  The Server then actively
   listens for Clients that register their CPs as part of their
   connection establishment procedure.  When a new Client connects, the
   Server announces the new CP routes to its neighboring Relays; when an
   existing Client disconnects, the Server withdraws its CP
   announcements.
This process can often be accommodated through 595 standard router configurations, e.g., on routers that can announce 596 and withdraw prefixes based on kernel route additions and deletions. 598 5.3. IRON Relay Operation 600 Each IRON Relay is provisioned with the list of APs that it will 601 serve, as well as the locators for Servers within the IRON instance. 602 The Relay is also provisioned with eBGP peerings with neighboring 603 ASes in the Internet -- the same as for any ASBR. 605 In a simple VSP overlay network arrangement, each Relay connects to 606 each Server via IRON instance-internal eBGP peerings for the purpose 607 of discovering CP-to-Server mappings, and connects to all other 608 Relays using iBGP either in a full mesh or using route reflectors. 609 (The Relay only uses iBGP to announce those prefixes it has learned 610 from AS peerings external to the IRON instance, however, since all 611 Relays will already discover all CPs in the IRON instance via their 612 eBGP peerings with Servers.) The Relay then engages in eBGP routing 613 exchanges with peer ASes in the IPv4 and/or IPv6 Internets the same 614 as for any ASBR. 616 After this initial synchronization procedure, the Relay advertises 617 the APs to its eBGP peers in the Internet. In particular, the Relay 618 advertises the IPv6 APs into the IPv6 Internet routing system and 619 advertises the IPv4 APs into the IPv4 Internet routing system, but it 620 does not advertise the full list of the IRON overlay's CPs to any of 621 its eBGP peers. The Relay further advertises "default" via eBGP to 622 its associated Servers, then engages in ordinary packet-forwarding 623 operations. 625 6. IRON Forwarding Plane Operation 627 Following control plane initialization, IAs engage in the cooperative 628 process of receiving and forwarding packets. 
IAs forward 629 encapsulated packets over the IRON instance using the mechanisms of 630 VET [INTAREA-VET], AERO [AERO] and SEAL [INTAREA-SEAL], while Relays 631 additionally forward packets to and from the native IPv6 and/or IPv4 632 Internets. IAs also use SCMP to coordinate with other IAs, including 633 the process of sending and receiving Redirect messages, error 634 messages, etc. Each IA operates as specified in the following sub- 635 sections. 637 6.1. IRON Client Operation 639 After connecting to Servers as specified in Section 5.1, the Client 640 registers its active ISP connections with each Server. Thereafter, 641 the Client sends periodic beacons (e.g., cryptographically signed SRS 642 messages) to the Server via each ISP connection to maintain tunnel- 643 neighbor address mapping state. The beacons should be sent at no 644 more than 60 second intervals (subject to a small random delay) so 645 that state in NATs on the path as well as on the Server itself is 646 refreshed regularly. Although the Client may connect via multiple 647 ISPs (each represented by a different LINK_ID), the CP itself is used 648 to represent the bidirectional Client-to-Server tunnel neighbor 649 association. The CP therefore names this "bundle" of ISP 650 connections. 652 If the Client ceases to receive acknowledgements from a Server via a 653 specific ISP connection, it marks the Server as unreachable from that 654 ISP. (The Client should also inform the Server of this outage via 655 one of its working ISP connections.) If the Client ceases to receive 656 acknowledgements from the Server via multiple ISP connections, it 657 disconnects from the failing Server and connects to a new nearby 658 Server. The act of disconnecting from old servers and connecting to 659 new servers will soon propagate the appropriate routing information 660 among the IRON instance's Relays. 
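The Client's beacon scheduling and Server-failover logic described above can be sketched as follows. This is an illustrative, non-normative example; the class and function names, the 5-second jitter bound, and the two-connection failure threshold are assumptions made for the sketch, not values defined by this document.

```python
import random

BEACON_INTERVAL = 60.0   # maximum seconds between beacons (per the text)

def next_beacon_time(now, jitter=5.0):
    # Beacons are sent at no more than 60-second intervals, minus a
    # small random delay, so NAT state and Server state stay refreshed.
    return now + BEACON_INTERVAL - random.uniform(0.0, jitter)

class ClientLinkState:
    """Per-ISP reachability of one Server (illustrative only)."""
    def __init__(self, link_ids):
        self.reachable = {lid: True for lid in link_ids}

    def ack_missed(self, link_id):
        # Mark the Server unreachable from this ISP connection; the
        # Client should also report the outage via a working connection.
        self.reachable[link_id] = False

    def should_change_server(self):
        # Disconnect and choose a new nearby Server once the Server is
        # unreachable via multiple ISP connections.
        failed = sum(1 for ok in self.reachable.values() if not ok)
        return failed >= 2
```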
662 When an end system in an EUN sends a flow of packets to a 663 correspondent in a different network, the packets are forwarded 664 through the EUN via normal routing until they reach the Client, which 665 then tunnels the initial packets to a Server as its default router. 666 In particular, the Client encapsulates each packet in an outer header 667 with its locator as the source address and the locator of the Server 668 as the destination address. 670 The Client uses the mechanisms specified in VET, SEAL and AERO to 671 encapsulate each packet to be forwarded. The Client further accepts 672 SCMP protocol messages from its Servers, including neighbor 673 coordination exchanges, indications of PMTU limitations, Redirects 674 and other control messages. When the Client is redirected to a 675 foreign Server that serves a destination CP, it forms a 676 unidirectional tunnel neighbor association with the foreign Server as 677 the new next hop toward the CP. 679 Note that Client-to-Client tunneling is not accommodated, since this 680 could result in communication failures when one or both Clients are 681 located behind a NAT, or when one or both Clients are mobile. 682 Therefore, Client-to-Client mobility binding updates are not required 683 in the IRON model. 685 6.2. IRON Server Operation 687 After the Server associates with nearby Relays, it accepts Client 688 connections and authenticates the SRS messages it receives from its 689 already-connected Clients. The Server discards any SRS messages that 690 fail authentication, and responds to authentic SRS messages by 691 returning signed SRAs. 693 When the Server receives a SEAL-encapsulated data packet from one of 694 its connected Clients, it uses normal longest-prefix-match rules to 695 locate a forwarding table entry that matches the packet's inner 696 destination address.
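The longest-prefix-match lookup over the forwarding table can be sketched as follows. This is an illustrative, non-normative example using a linear scan for clarity; a production Server would use a radix trie, and the table contents shown in any usage are hypothetical.

```python
import ipaddress

def longest_prefix_match(table, inner_dst):
    """Normal longest-prefix-match over a CP forwarding table.

    `table` maps prefix strings to next hops.  Illustrative only:
    a real Server would use a trie rather than scanning every entry.
    """
    dst = ipaddress.ip_address(inner_dst)
    best = None
    for prefix, next_hop in table.items():
        net = ipaddress.ip_network(prefix)
        # Keep the matching entry with the longest prefix length.
        if dst in net and (best is None or net.prefixlen > best[0]):
            best = (net.prefixlen, next_hop)
    return best[1] if best else None
```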
The Server then re-encapsulates the packet 697 (i.e., it removes the outer header and replaces it with a new outer 698 header), sets the outer destination address to the locator address of 699 the next hop and forwards the packet to the next hop. 701 When the Server receives a SEAL-encapsulated data packet from a 702 foreign Client, it accepts the packet only if the packet's signature 703 is correct; otherwise, it silently drops the packet. The Server then 704 locates a forwarding table entry that matches the packet's inner 705 destination address. If the destination does not correspond to one 706 of the Server's connected Clients, the Server silently drops the 707 packet. Otherwise, the Server re-encapsulates the packet and 708 forwards it to the correct connected Client. If the Client is in the 709 process of disconnecting (e.g., due to mobility), the Server also 710 returns a Redirect message listing a NULL next hop to inform the 711 foreign Client that the connected Client has moved. 713 When the Server receives a SEAL-encapsulated data packet from a 714 Relay, it again locates a forwarding table entry that matches the 715 packet's inner destination. If the destination does not correspond 716 to one of the Server's connected Clients, the Server drops the packet 717 and sends a destination unreachable message. Otherwise, the Server 718 re-encapsulates the packet and forwards it to the correct connected 719 Client. 721 The permissible data flow paths for tunneled packets that flow 722 through a Server are shown diagrammatically in Section 7. 724 6.3. IRON Relay Operation 726 After each Relay has synchronized its APs (see Section 5.3) it 727 advertises them in the IPv4 and/or IPv6 Internet routing systems. 
728 These APs will be represented as ordinary routing information in the 729 interdomain routing system, and any packets originating from the IPv4 730 or IPv6 Internet destined to an address covered by one of the APs 731 will be forwarded to one of the VSP's Relays. 733 When a Relay receives a packet from the Internet destined to a CPA 734 covered by one of its APs, it behaves as an ordinary IP router. 735 Specifically, the Relay looks in its forwarding table to discover a 736 locator of a Server that serves the CP covering the destination 737 address. The Relay then simply forwards the packet to the Server, 738 e.g., via SEAL encapsulation over a tunnel virtual link, via a 739 physical interconnect, etc. 741 When a Relay receives a packet from a Server destined to a CPA 742 serviced by a different Server, the Relay forwards the packet toward 743 the correct Server while also sending a "predirect" indication as the 744 initial leg in the AERO redirection procedure. When the target 745 Server returns a Redirect message, the Relay proxies the Redirect by 746 re-encapsulating it and forwarding it to the previous hop. 748 7. IRON Reference Operating Scenarios 750 IRON supports communications when one or both hosts are located 751 within CP-addressed EUNs. The following sections discuss the 752 reference operating scenarios. 754 7.1. Both Hosts within Same IRON Instance 756 When both hosts are within EUNs served by the same IRON instance, it 757 is sufficient to consider the scenario in a unidirectional fashion, 758 i.e., by tracing packet flows only in the forward direction from 759 source host to destination host. The reverse direction can be 760 considered separately and incurs the same considerations as for the 761 forward direction. The simplest case occurs when the EUNs that 762 service the source and destination hosts are connected to the same 763 Server, while the general case occurs when the EUNs are connected to 764 different Servers.
The two cases are discussed in the following 765 sections. 767 7.1.1. EUNs Served by Same Server 769 In this scenario, the packet flow from the source host is forwarded 770 through the EUN to the source's IRON Client. The Client then tunnels 771 the packets to the Server, which simply re-encapsulates and forwards 772 the tunneled packets to the destination's Client. The destination's 773 Client then removes the packets from the tunnel and forwards them 774 over the EUN to the destination. Figure 6 depicts the sustained flow 775 of packets from Host A to Host B within EUNs serviced by the same 776 Server via a "hairpinned" route: 778 ________________________________________ 779 .-( )-. 780 .-( )-. 781 .-( )-. 782 .( ). 783 .( ). 784 .( +------------+ ). 785 ( +===================>| Server(S) |=====================+ ) 786 ( // +------------+ \\ ) 787 ( // .-. .-. \\ ) 788 ( //,-( _)-. ,-( _)-\\ ) 789 ( .||_ (_ )-. .-(_ (_ ||. ) 790 ((_|| ISP A .) (__ ISP B ||_)) 791 ( ||-(______)-' `-(______)|| ) 792 ( || | | vv ) 793 ( +-----+-----+ +-----+-----+ ) 794 | Client(A) | | Client(B) | 795 +-----+-----+ VSP IRON Instance +-----+-----+ 796 ^ | ( (Overlaid on the Native Internet) ) | | 797 | .-. .-( .-) .-. | 798 | ,-( _)-. .-(________________________)-. ,-( _)-. | 799 .|(_ (_ )-. .-(_ (_ )| 800 (_| EUN A ) (_ EUN B |) 801 |`-(______)-' `-(______)-| 802 | | Legend: | | 803 | +---+----+ ----> == Native +----+---+ | 804 +-| Host A | ====> == Tunnel | Host B |<+ 805 +--------+ +--------+ 807 Figure 6: Sustained Packet Flow via Hairpinned Route 809 With reference to Figure 6, Host A sends packets destined to Host B 810 via its network interface connected to EUN A. 
Routing within EUN A 811 will direct the packets to Client(A) as a default router for the EUN, 812 which then encapsulates them in outer IP/SEAL/* headers with its 813 locator address as the outer source address, the locator address of 814 Server(S) as the outer destination address, and the identifying 815 information associated with its tunnel-neighbor state as the 816 identity. Client(A) then simply forwards the encapsulated packets 817 into the ISP network connection that provided its locator. The ISP 818 will forward the encapsulated packets into the Internet without 819 filtering since the (outer) source address is topologically correct. 820 Once the packets have been forwarded into the Internet, routing will 821 direct them to Server(S). 823 Server(S) will receive the encapsulated packets from Client(A) then 824 check its forwarding table to discover an entry that covers 825 destination address B with Client(B) as the next hop. Server(S) then 826 re-encapsulates the packets in a new outer header that uses the 827 source address, destination address, and identification parameters 828 associated with the tunnel-neighbor state for Client(B). Server(S) 829 then forwards these re-encapsulated packets into the Internet, where 830 routing will direct them to Client(B). Client(B) will, in turn, 831 decapsulate the packets and forward the inner packets to Host B via 832 EUN B. 834 7.1.2. EUNs Served by Different Servers 836 In this scenario, the initial packets of a flow produced by a source 837 host within an EUN connected to the IRON instance by a Client must 838 flow through both the Server of the source host and a nearby Relay, 839 but route optimization can eliminate these elements from the path for 840 subsequent packets in the flow. Figure 7 shows the flow of initial 841 packets from Host A to Host B within EUNs of the same IRON instance: 843 ________________________________________ 844 .-( )-. 845 .-( +------------+ )-. 846 .-( +======>| Relay(R) |=======+ )-. 
847 .( || +*--*--*--*-*+ || ). 848 .( || * * vv ). 849 .( +--------++--+* *+--++--------+ ). 850 ( +==>| Server(A) *| | Server(B) |====+ ) 851 ( // +----------*-+ +------------+ \\ ) 852 ( // .-. * .-. \\ ) 853 ( //,-( _)-. * ,-( _)-\\ ) 854 ( .||_ (_ )-. * .-(_ (_ ||. ) 855 ((_|| ISP A .) * (__ ISP B ||_)) 856 ( ||-(______)-' * `-(______)|| ) 857 ( || | * | vv ) 858 ( +-----+-----+ * +-----+-----+ ) 859 | Client(A) |<* | Client(B) | 860 +-----+-----+ VSP IRON Instance +-----+-----+ 861 ^ | ( (Overlaid on the Native Internet) ) | | 862 | .-. .-( .-) .-. | 863 | ,-( _)-. .-(________________________)-. ,-( _)-. | 864 .|(_ (_ )-. .-(_ (_ )| 865 (_| EUN A ) (_ EUN B |) 866 |`-(______)-' `-(______)-| 867 | | Legend: | | 868 | +---+----+ ----> == Native +----+---+ | 869 +-| Host A | ====> == Tunnel | Host B |<+ 870 +--------+ <**** == Redirect +--------+ 872 Figure 7: Initial Packet Flow Before Redirects 874 With reference to Figure 7, Host A sends packets destined to Host B 875 via its network interface connected to EUN A. Routing within EUN A 876 will direct the packets to Client(A) as a default router for the EUN, 877 which then encapsulates them in outer IP/SEAL/* headers that use the 878 source address, destination address, and identification parameters 879 associated with the tunnel-neighbor state for Server(A). Client(A) 880 then forwards the encapsulated packets into the ISP network 881 connection that provided its locator, which will forward the 882 encapsulated packets into the Internet where routing will direct them 883 to Server(A). 885 Server(A) receives the encapsulated packets from Client(A) and 886 consults its forwarding table to determine that the most-specific 887 matching route is via Relay(R) as the next hop. 
Server(A) then re- 888 encapsulates the packets in outer headers that use the source 889 address, destination address, and identification parameters 890 associated with Relay(R), and forwards them into the Internet where 891 routing will direct them to Relay(R). (Note that the Server could 892 instead forward the packets directly to the Relay without 893 encapsulation when the Relay is directly connected, e.g., via a 894 physical interconnect.) 896 Relay(R) receives the forwarded packets from Server(A) then checks 897 its forwarding table to discover a CP entry that covers inner 898 destination address B with Server(B) as the next hop. Relay(R) then 899 sends a "predirect" indication forward to Server(B) to inform the 900 Server that a Redirect message must be returned (the "predirect" may 901 be either a separate control message or an indication setting on the 902 data packet itself). Relay(R) finally re-encapsulates the packets in 903 outer headers that use the source address, destination address, and 904 identification parameters associated with Server(B), then forwards 905 them into the Internet where routing will direct them to Server(B). 906 (Note again that the Relay could instead forward the packets directly 907 to the Server, e.g., via a physical interconnect.) 909 Server(B) receives the "predirect" indication and forwarded packets 910 from Relay(R), then checks its forwarding table to discover a CP 911 entry that covers destination address B with Client(B) as the next 912 hop. Server(B) returns a Redirect message to Relay(R), which proxies 913 the message back to Server(A), which then proxies the message back to 914 Client(A). 916 Server(B) then re-encapsulates the packets in outer headers that use 917 the source address, destination address, and identification 918 parameters associated with Client(B), then forwards them into the 919 Internet where routing will direct them to Client(B).
Client(B) 920 will, in turn, decapsulate the packets and forward the inner packets 921 to Host B via EUN B. 923 After the initial flow of packets, Client(A) will have received one 924 or more Redirect messages listing Server(B) as a better next hop, and 925 will establish unidirectional tunnel-neighbor state listing Server(B) 926 as the next hop toward the CP that covers Host B. Client(A) 927 thereafter forwards its encapsulated packets directly to the locator 928 address of Server(B) without involving either Server(A) or Relay(R), 929 as shown in Figure 8. 931 ________________________________________ 932 .-( )-. 933 .-( )-. 934 .-( )-. 935 .( ). 936 .( ). 937 .( +------------+ ). 938 ( +====================================>| Server(B) |====+ ) 939 ( // +------------+ \\ ) 940 ( // .-. .-. \\ ) 941 ( //,-( _)-. ,-( _)-\\ ) 942 ( .||_ (_ )-. .-(_ (_ ||. ) 943 ((_|| ISP A .) (__ ISP B ||_)) 944 ( ||-(______)-' `-(______)|| ) 945 ( || | | vv ) 946 ( +-----+-----+ +-----+-----+ ) 947 | Client(A) | | Client(B) | 948 +-----+-----+ IRON Instance +-----+-----+ 949 ^ | ( (Overlaid on the Native Internet) ) | | 950 | .-. .-( .-) .-. | 951 | ,-( _)-. .-(________________________)-. ,-( _)-. | 952 .|(_ (_ )-. .-(_ (_ )| 953 (_| EUN A ) (_ EUN B |) 954 |`-(______)-' `-(______)-| 955 | | Legend: | | 956 | +---+----+ ----> == Native +----+---+ | 957 +-| Host A | ====> == Tunnel | Host B |<+ 958 +--------+ +--------+ 960 Figure 8: Sustained Packet Flow After Redirects 962 7.2. Mixed IRON and Non-IRON Hosts 964 The cases in which one host is within an IRON EUN and the other is in 965 a non-IRON EUN (i.e., one that connects to the native Internet 966 instead of the IRON) are described in the following sub-sections. 968 7.2.1. From IRON Host A to Non-IRON Host B 970 Figure 9 depicts the IRON reference operating scenario for packets 971 flowing from Host A in an IRON EUN to Host B in a non-IRON EUN. 973 _________________________________________ 974 .-( )-. )-.
975 .-( +-------)----+ )-. 976 .-( | Relay(A) |--------------------------+ )-. 977 .( +------------+ \ ). 978 .( +=======>| Server(A) | \ ). 979 .( // +--------)---+ \ ). 980 ( // ) \ ) 981 ( // IRON ) \ ) 982 ( // .-. Instance ) .-. \ ) 983 ( //,-( _)-. ) ,-( _)-. \ ) 984 ( .||_ (_ )-. ) The Native Internet .- _ (_ )-| ) 985 ( _|| ISP A ) ) (_ ISP B |)) 986 ( ||-(______)-' ) `-(______)-' | ) 987 ( || | )-. | v ) 988 ( +-----+ ----+ )-. +-----+-----+ ) 989 | Client(A) |)-. | Router(B) | 990 +-----+-----+ +-----+-----+ 991 ^ | ( ) | | 992 | .-. .-( .-) .-. | 993 | ,-( _)-. .-(________________________)-. ,-( _)-. | 994 .|(_ (_ )-. .-(_ (_ )| 995 (_| EUN A ) ( EUN B |) 996 |`-(______)-' `-(______)-| 997 | | Legend: | | 998 | +---+----+ ----> == Native +----+---+ | 999 +-| Host A | ====> == Tunnel | Host B |<+ 1000 +--------+ +--------+ 1002 Figure 9: From IRON Host A to Non-IRON Host B 1004 In this scenario, Host A sends packets destined to Host B via its 1005 network interface connected to IRON EUN A. Routing within EUN A will 1006 direct the packets to Client(A) as a default router for the EUN, 1007 which then encapsulates them and forwards them into the Internet 1008 routing system where they will be directed to Server(A). 1010 Server(A) receives the encapsulated packets from Client(A) then 1011 forwards them to Relay(A), which simply forwards the unencapsulated 1012 packets into the Internet. Once the packets are released into the 1013 Internet, routing will direct them to the final destination B. (Note 1014 that for simplicity Server(A) and Relay(A) are depicted in Figure 9 1015 as two concatenated "half-routers", and the forwarding between the 1016 two halves is via encapsulation, via a physical interconnect, via a 1017 shared memory operation when the two halves are within the same 1018 physical platform, etc.) 1020 7.2.2. 
From Non-IRON Host B to IRON Host A 1022 Figure 10 depicts the IRON reference operating scenario for packets 1023 flowing from Host B in a non-IRON EUN to Host A in an IRON EUN. 1025 _________________________________________ 1026 .-( )-. )-. 1027 .-( +-------)----+ )-. 1028 .-( | Relay(A) |<-------------------------+ )-. 1029 .( +------------+ \ ). 1030 .( +========| Server(A) | \ ). 1031 .( // +--------)---+ \ ). 1032 ( // ) \ ) 1033 ( // IRON ) \ ) 1034 ( // .-. Instance ) .-. \ ) 1035 ( //,-( _)-. ) ,-( _)-. \ ) 1036 ( .||_ (_ )-. ) The Native Internet .- _ (_ )-| ) 1037 ( _|| ISP A ) ) (_ ISP B |)) 1038 ( ||-(______)-' ) `-(______)-' | ) 1039 ( vv | )-. | | ) 1040 ( +-----+ ----+ )-. +-----+-----+ ) 1041 | Client(A) |)-. | Router(B) | 1042 +-----+-----+ +-----+-----+ 1043 | | ( ) | | 1044 | .-. .-( .-) .-. | 1045 | ,-( _)-. .-(________________________)-. ,-( _)-. | 1046 .|(_ (_ )-. .-(_ (_ )| 1047 (_| EUN A ) ( EUN B |) 1048 |`-(______)-' `-(______)-| 1049 | | Legend: | | 1050 | +---+----+ <---- == Native +----+---+ | 1051 +>| Host A | <==== == Tunnel | Host B |-+ 1052 +--------+ +--------+ 1054 Figure 10: From Non-IRON Host B to IRON Host A 1056 In this scenario, Host B sends packets destined to Host A via its 1057 network interface connected to non-IRON EUN B. Internet routing will 1058 direct the packets to Relay(A), which then forwards them to 1059 Server(A). 1061 Server(A) will then check its forwarding table to discover an entry 1062 that covers destination address A with Client(A) as the next hop. 1063 Server(A) then (re-)encapsulates the packets and forwards them into 1064 the Internet, where routing will direct them to Client(A). Client(A) 1065 will, in turn, decapsulate the packets and forward the inner packets 1066 to Host A via its network interface connected to IRON EUN A. 1068 7.3.
Hosts within Different IRON Instances 1070 Figure 11 depicts the IRON reference operating scenario for packets 1071 flowing between Host A in an IRON instance A and Host B in a 1072 different IRON instance B. In that case, forwarding between hosts A 1073 and B always involves the Servers and Relays of both IRON instances, 1074 i.e., the scenario is no different than if one of the hosts was 1075 serviced by an IRON EUN and the other was serviced by a non-IRON EUN. 1076 _________________________________________ 1077 .-( )-. .-( )-. 1078 .-( +-------)----+ +---(--------+ )-. 1079 .-( | Relay(A) | <---> | Relay(B) | )-. 1080 .( +------------+ +------------+ ). 1081 .( +=======>| Server(A) | | Server(B) |<======+ ). 1082 .( // +--------)---+ +---(--------+ \\ ). 1083 ( // ) ( \\ ) 1084 ( // IRON ) ( IRON \\ ) 1085 ( // .-. Instance A ) ( Instance B .-. \\ ) 1086 ( //,-( _)-. ) ( ,-( _). || ) 1087 ( .||_ (_ )-. ) ( .-'_ (_ )|| ) 1088 ( _|| ISP A ) ) ( (_ ISP B ||)) 1089 ( ||-(______)-' ) ( '-(______)-|| ) 1090 ( vv | )-. .-( | vv ) 1091 ( +-----+ ----+ )-. .-( +-----+-----+ ) 1092 | Client(A) |)-. .-(| Client(B) | 1093 +-----+-----+ The Native Internet +-----+-----+ 1094 ^ | ( ) | ^ 1095 | .-. .-( .-) .-. | 1096 | ,-( _)-. .-(________________________)-. ,-( _)-. | 1097 .|(_ (_ )-. .-(_ (_ )| 1098 (_| EUN A ) (_ EUN B |) 1099 |`-(______)-' `-(______)-| 1100 | | Legend: | | 1101 | +---+----+ <---> == Native +----+---+ | 1102 +>| Host A | <===> == Tunnel | Host B |<+ 1103 +--------+ +--------+ 1105 Figure 11: Hosts within Different IRON Instances 1107 8. Mobility, Multiple Interfaces, Multihoming, and Traffic Engineering 1109 While IRON Servers and Relays are typically arranged as fixed 1110 infrastructure, Clients may need to move between different network 1111 points of attachment, connect to multiple ISPs, or explicitly manage 1112 their traffic flows. The following sections discuss mobility, 1113 multihoming, and traffic engineering considerations for IRON Clients. 
1115 8.1. Mobility Management and Mobile Networks 1117 When a Client changes its network point of attachment (e.g., due to a 1118 mobility event), it configures one or more new locators. If the 1119 Client has not moved far away from its previous network point of 1120 attachment, it simply informs its Server of any locator changes. 1121 This operation is performance sensitive and should be conducted 1122 immediately to avoid packet loss. This aspect of mobility can be 1123 classified as a "localized mobility event". 1125 If the Client has moved far away from its previous network point of 1126 attachment, however, it re-issues the Server discovery procedure 1127 described in Section 5.1. If the Client's current Server is no 1128 longer close by, the Client may wish to move to a new Server in order 1129 to reduce routing stretch. This operation is not performance 1130 critical, and therefore can be conducted over a matter of seconds/ 1131 minutes instead of milliseconds/microseconds. This aspect of 1132 mobility can be classified as a "global mobility event". 1134 To move to a new Server, the Client first engages in the CP 1135 registration process with the new Server, as described in Section 1136 5.1. The Client then informs its former Server that it has departed; 1137 again, via a VSP-specific secured reliable transport connection. The 1138 former Server will then withdraw its CP advertisements from the IRON 1139 instance routing system and retain the (stale) forwarding table 1140 entries until their lifetime expires. In the interim, the former 1141 Server continues to deliver packets to the Client's last-known 1142 locator addresses for the short term while informing any 1143 unidirectional tunnel-neighbors that the Client has moved. 1145 Note that the Client may be either a mobile host or a mobile router.
1146 In the case of a mobile router, the Client's EUN becomes a mobile 1147 network, and can continue to use the Client's CPs without renumbering 1148 even as it moves between different network attachment points. 1150 8.2. Multiple Interfaces and Multihoming 1152 A Client may register multiple ISP connections with each Server such 1153 that multiple interfaces are naturally supported. This feature 1154 results in the Client "harnessing" its multiple ISP connections into 1155 a "bundle" that is represented as a single entity at the network 1156 layer, and therefore allows for ISP independence at the link-layer. 1158 A Client may further register with multiple Servers for fault 1159 tolerance and reduced routing stretch. In that case, the Client 1160 should register its full bundle of ISP connections with each of its 1161 Servers unless it has a way of carefully coordinating its ISP-to- 1162 Server mappings. 1164 Client registration with multiple Servers results in "pseudo- 1165 multihoming", in which the multiple homes are within the same VSP 1166 IRON instance and hence share fate with the health of the IRON 1167 instance itself. 1169 8.3. Traffic Engineering 1171 A Client can dynamically adjust its ISP-to-Server mappings in order 1172 to influence inbound traffic flows. It can also change between 1173 Servers when multiple Servers are available, but should strive for 1174 stability in its Server selection in order to limit VSP network 1175 routing churn. 1177 A Client can select outgoing ISPs, e.g., based on current Quality-of- 1178 Service (QoS) considerations such as minimizing delay or variance. 1180 9. Renumbering Considerations 1182 As new link-layer technologies and/or service models emerge, end 1183 users will be motivated to select their basic Internet connectivity 1184 solutions through healthy competition between ISPs. 
If an end user's 1185 network-layer addresses are tied to a specific ISP, however, they may 1186 be forced to undergo a painstaking renumbering event if they wish to 1187 change to a different ISP [RFC4192][RFC5887]. 1189 When an end user Client obtains CPs from a VSP, it can change between 1190 ISPs seamlessly and without need to renumber the CPs. IRON therefore 1191 provides ISP independence at the link layer. If the end user is 1192 later compelled to change to a different VSP, however, it would be 1193 obliged to abandon its CPs and obtain new ones from the new VSP. In 1194 that case, the Client would again be required to engage in a 1195 painstaking renumbering event. 1197 In order to avoid all future renumbering headaches, a Client that is 1198 part of a cooperative collective (e.g., a large enterprise network) 1199 could join together with the collective to obtain a suitably large PI 1200 prefix and then hire a VSP to manage the prefix on behalf of the 1201 collective. If the collective later decides to switch to a new VSP, 1202 it simply revokes its PI prefix registration with the old VSP and 1203 activates its registration with the new VSP. 1205 10. NAT Traversal Considerations 1207 The Internet today consists of a global public IPv4 routing and 1208 addressing system with non-IRON EUNs that use either public or 1209 private IPv4 addressing. The latter class of EUNs connect to the 1210 public Internet via Network Address Translators (NATs). When an IRON 1211 Client is located behind a NAT, it selects Servers using the same 1212 procedures as for Clients with public addresses and can then send SRS 1213 messages to Servers in order to get SRA messages in return. The only 1214 requirement is that the Client must configure its encapsulation 1215 format to use a transport protocol that supports NAT traversal, e.g., 1216 UDP, TCP, etc.
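As a non-normative illustration of UDP-based NAT traversal, the Client simply prepends a UDP header to each encapsulated packet so that NATs on the path can translate and track the flow. The port number below is a hypothetical placeholder chosen for the sketch; this document does not assign one.

```python
import struct

# Hypothetical tunnel port, for illustration only; no port number is
# standardized by this document.
TUNNEL_PORT = 8060

def udp_encapsulate(inner_packet, src_port, dst_port=TUNNEL_PORT):
    """Wrap an encapsulated packet in a UDP header (non-normative sketch).

    The UDP header is source port, destination port, length (header
    plus payload), and checksum, each 16 bits in network byte order.
    The checksum is elided (zero) in this sketch.
    """
    length = 8 + len(inner_packet)
    header = struct.pack("!HHHH", src_port, dst_port, length, 0)
    return header + inner_packet
```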
1218 Since the Server maintains state about its connected Clients, it can 1219 discover locator information for each Client by examining the 1220 transport port number and IP address in the outer headers of the 1221 Client's encapsulated packets. When there is a NAT in the path, the 1222 transport port number and IP address in each encapsulated packet will 1223 correspond to state in the NAT box and might not correspond to the 1224 actual values assigned to the Client. The Server can then 1225 encapsulate packets destined to hosts in the Client's EUN within 1226 outer headers that use this IP address and transport port number. 1227 The NAT box will receive the packets, translate the values in the 1228 outer headers, then forward the packets to the Client. In this 1229 sense, the Server's "locator" for the Client consists of the 1230 concatenation of the IP address and transport port number. 1232 In order to keep NAT and Server connection state alive, the Client 1233 sends periodic beacons to the Server, e.g., by sending an SRS message 1234 to elicit an SRA message from the Server. IRON does not otherwise 1235 introduce any new complications for NAT traversal or for applications 1236 that embed address referrals in their payloads. 1238 11. Multicast Considerations 1240 IRON Servers and Relays are topologically positioned to provide 1241 Internet Group Management Protocol (IGMP) / Multicast Listener 1242 Discovery (MLD) proxying for their Clients [RFC4605]. Further 1243 multicast considerations for IRON (e.g., interactions with multicast 1244 routing protocols, traffic scaling, etc.) are out of scope and will 1245 be discussed in a future document. 1247 12. Nested EUN Considerations 1249 Each Client configures a locator that may be taken from an ordinary 1250 non-CPA address assigned by an ISP or from a CPA address taken from a 1251 CP assigned to another Client.
In that case, the Client is said to 1252 be "nested" within the EUN of another Client, and recursive nestings 1253 of multiple layers of encapsulations may be necessary. 1255 For example, in the network scenario depicted in Figure 12, Client(A) 1256 configures a locator CPA(B) taken from the CP assigned to EUN(B). 1257 Client(B) in turn configures a locator CPA(C) taken from the CP 1258 assigned to EUN(C). Finally, Client(C) configures a locator ISP(D) 1259 taken from a non-CPA address delegated by an ordinary ISP(D). 1261 Using this example, the "nested-IRON" case must be examined in which 1262 a Host A, which configures the address CPA(A) within EUN(A), 1263 exchanges packets with Host Z located elsewhere in a different IRON 1264 instance EUN(Z). 1266 .-. 1267 ISP(D) ,-( _)-. 1268 +-----------+ .-(_ (_ )-. 1269 | Client(C) |--(_ ISP(D) ) 1270 +-----+-----+ `-(______)-' 1271 | <= T \ .-. 1272 .-. u \ ,-( _)-. 1273 ,-( _)-. n .-(_ (- )-. 1274 .-(_ (_ )-. n (_ Internet ) 1275 (_ EUN(C) ) e `-(______)-' 1276 `-(______)-' l ___ 1277 | CPA(C) s => (:::)-. 1278 +-----+-----+ .-(::::::::) 1279 | Client(B) | .-(: Multiple :)-. +-----------+ 1280 +-----+-----+ (:::::: IRON ::::::) | Relay(Z) | 1281 | `-(: Instances:)-' +-----------+ 1282 .-. `-(::::::)-' +-----------+ 1283 ,-( _)-. | Server(Z) | 1284 .-(_ (_ )-. +---------------+ +-----------+ 1285 (_ EUN(B) ) |Relay/Server(C)| +-----------+ 1286 `-(______)-' +---------------+ | Client(Z) | 1287 | CPA(B) +---------------+ +-----------+ 1288 +-----+-----+ |Relay/Server(B)| | 1289 | Client(A) | +---------------+ .-. 1290 +-----------+ +---------------+ ,-( _)-. 1291 | |Relay/Server(A)| .-(_ (_ )-. 1292 .-. +---------------+ (_ EUN(Z) ) 1293 ,-( _)-. CPA(A) `-(______)-' 1294 .-(_ (_ )-. 
+--------+ +--------+ 1295 (_ EUN(A) )---| Host A | | Host Z | 1296 `-(______)-' +--------+ +--------+ 1298 Figure 12: Nested EUN Example 1300 The two cases of Host A sending packets to Host Z, and Host Z sending 1301 packets to Host A, must be considered separately, as described below. 1303 12.1. Host A Sends Packets to Host Z 1305 Host A first forwards a packet with source address CPA(A) and 1306 destination address Z into EUN(A). Routing within EUN(A) will direct 1307 the packet to Client(A), which encapsulates it in an outer header 1308 with CPA(B) as the outer source address and Server(A) as the outer 1309 destination address then forwards the once-encapsulated packet into 1310 EUN(B). 1312 Routing within EUN(B) will direct the packet to Client(B), which 1313 encapsulates it in an outer header with CPA(C) as the outer source 1314 address and Server(B) as the outer destination address then forwards 1315 the twice-encapsulated packet into EUN(C). Routing within EUN(C) 1316 will direct the packet to Client(C), which encapsulates it in an 1317 outer header with ISP(D) as the outer source address and Server(C) as 1318 the outer destination address. Client(C) then sends this triple- 1319 encapsulated packet into the ISP(D) network, where it will be routed 1320 via the Internet to Server(C). 1322 When Server(C) receives the triple-encapsulated packet, it forwards 1323 it to Relay(C) which removes the outer layer of encapsulation and 1324 forwards the resulting twice-encapsulated packet into the Internet to 1325 Server(B). Next, Server(B) forwards the packet to Relay(B) which 1326 removes the outer layer of encapsulation and forwards the resulting 1327 once-encapsulated packet into the Internet to Server(A). Next, 1328 Server(A) forwards the packet to Relay(A), which decapsulates it and 1329 forwards the resulting inner packet via the Internet to Relay(Z). 
   Relay(Z), in turn, forwards the packet to Server(Z), which
   encapsulates and forwards the packet to Client(Z), which
   decapsulates it and forwards the inner packet to Host Z.

12.2. Host Z Sends Packets to Host A

   When Host Z sends a packet to Host A, forwarding in EUN(Z) directs
   it to Client(Z), which encapsulates and forwards the packet to
   Server(Z).  Server(Z) forwards the packet to Relay(Z), which then
   decapsulates and forwards the inner packet into the Internet.
   Internet routing conveys the packet to Relay(A) as the next hop
   toward CPA(A), which then forwards it to Server(A).

   Server(A) encapsulates the packet and forwards it to Relay(B) as
   the next hop toward CPA(B) (i.e., the locator for CPA(A)).
   Relay(B) then forwards the packet to Server(B), which encapsulates
   it a second time and forwards it to Relay(C) as the next hop toward
   CPA(C) (i.e., the locator for CPA(B)).  Relay(C) then forwards the
   packet to Server(C), which encapsulates it a third time and
   forwards it to Client(C).

   Client(C) then decapsulates the packet and forwards the resulting
   twice-encapsulated packet via EUN(C) to Client(B).  Client(B) in
   turn decapsulates the packet and forwards the resulting once-
   encapsulated packet via EUN(B) to Client(A).  Client(A) finally
   decapsulates and forwards the inner packet to Host A.

13. Implications for the Internet

   The IRON architecture envisions a hybrid routing/mapping system
   that benefits from both the shortest-path routing afforded by pure
   dynamic routing systems and the routing-scaling suppression
   afforded by pure mapping systems.  IRON therefore targets the
   elusive "sweet spot" that pure routing and pure mapping systems
   alone cannot satisfy.

   The IRON system requires the VSP to deploy new routers/servers
   throughout the Internet to maintain well-balanced virtual overlay
   networks.  These routers/servers can be deployed incrementally,
   without disruption to the existing Internet infrastructure, as long
   as they are appropriately managed to provide acceptable service
   levels to end users.

   End-to-end traffic that traverses an IRON instance may experience
   delay variance between the initial and subsequent packets of a
   flow.  This is because the IRON system allows a longer path stretch
   for initial packets, followed by timely route optimizations that
   select better next-hop routers/servers for subsequent packets.

   IRON instances work seamlessly with existing and emerging services
   within the native Internet.  In particular, end users serviced by
   an IRON instance receive the same service enjoyed by end users
   serviced by non-IRON service providers.  Internet services already
   deployed within the native Internet likewise need not make any
   changes to accommodate IRON end users.

   The IRON system operates between IAs within the Internet and EUNs.
   Within these networks, the underlying paths traversed by the
   virtual overlay networks may comprise links that accommodate
   varying MTUs.  While the IRON system imposes an additional
   per-packet overhead that may cause packets to become slightly
   larger than the underlying path can accommodate, IAs have a method
   for naturally detecting and tuning out instances of path MTU
   underruns.  In some cases, these MTU underruns may need to be
   reported back to the originating hosts; however, the system also
   allows MTUs much larger than those typically available in current
   Internet paths to be discovered and utilized as more links with
   larger MTUs are deployed.
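To illustrate the per-packet overhead concern, the following sketch checks whether an inner packet still fits the underlying path once tunnel headers are added.  The header size and path MTU are illustrative assumptions (a 20-byte outer IPv4 header, a 1500-byte path), not values taken from the IRON specification.

```python
# Illustrative only: header size and path MTU are assumptions,
# not values from the IRON specification.
IPV4_HEADER = 20      # bytes per outer IPv4 header (no options)
PATH_MTU = 1500       # assumed MTU of the underlying path

def fits_path(inner_size, nest_depth, path_mtu=PATH_MTU):
    """True if a packet encapsulated nest_depth times fits the path."""
    return inner_size + nest_depth * IPV4_HEADER <= path_mtu

# A 1480-byte inner packet still fits after one encapsulation...
assert fits_path(1480, 1)       # 1480 + 20 = 1500 -> fits
# ...but a full 1500-byte packet does not: a path MTU underrun
# that must be tuned out or reported back to the original host.
assert not fits_path(1500, 1)   # 1520 > 1500 -> underrun
# The triple encapsulation of Section 12.1 costs 60 bytes:
assert fits_path(1440, 3)
```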
   Finally, and perhaps most importantly, the IRON system provides
   built-in mobility management, mobile networks, multihoming, and
   traffic engineering capabilities that allow end-user devices and
   networks to move about freely while imparting minimal oscillations
   in the routing system and maintaining generally shortest-path
   routes.  This mobility management is afforded through the very
   nature of the IRON service model and therefore requires no adjunct
   mechanisms.  The mobility management and multihoming capabilities
   are further supported by forward-path reachability detection that
   provides "hints of forward progress" in the same spirit as IPv6
   Neighbor Discovery (ND).

14. Additional Considerations

   Considerations for the scalability of Internet routing due to
   multihoming, traffic engineering, and provider-independent
   addressing are discussed in [RADIR].  Other scaling considerations
   specific to IRON are discussed in Appendix B.

   Route optimization considerations for mobile networks are found in
   [RFC5522].

   In order to ensure acceptable end-user service levels, the VSP
   should conduct a traffic scaling analysis and distribute sufficient
   Relays and Servers for the IRON instance globally throughout the
   Internet.

15. Related Initiatives

   IRON builds upon the concepts of the RANGER architecture [RFC5720]
   and therefore inherits the same set of related initiatives.  The
   Internet Research Task Force (IRTF) Routing Research Group (RRG)
   mentions IRON in its recommendation for a routing architecture
   [RFC6115].

   Virtual Aggregation (VA) [GROW-VA] and Aggregation in Increasing
   Scopes (AIS) [EVOLUTION] provide the basis for the Virtual Prefix
   concepts.

   Internet Vastly Improved Plumbing (Ivip) [IVIP-ARCH] has
   contributed valuable insights, including the use of real-time
   mapping.
   The use of Servers as mobility anchor points is directly influenced
   by Ivip's associated TTR mobility extensions [TTRMOB].

   [RO-CR] discusses a route optimization approach using a
   Correspondent Router (CR) model.  The IRON Server construct is
   similar to the CR concept described in that work; however, the
   manner in which Clients coordinate with Servers is different and is
   based on the NBMA virtual link model [RFC5214].

   Numerous publications have proposed NAT traversal techniques.  The
   NAT traversal techniques adapted for IRON were inspired by the
   Simple Address Mapping for Premises Legacy Equipment (SAMPLE)
   proposal [SAMPLE].

   The IRON Client-Server relationship is managed in essentially the
   same way as in the Tunnel Broker model [RFC3053].  Numerous
   existing tunnel broker provider networks (e.g., Hurricane Electric,
   SixXS, freenet6, etc.) provide existence proofs that IRON-like
   overlay network services can be deployed and managed on a global
   basis [BROKER].

16. IANA Considerations

   There are no IANA considerations for this document.

17. Security Considerations

   Security considerations that apply to tunneling in general are
   discussed in [RFC6169].  Additional considerations that also apply
   to IRON are discussed in RANGER [RFC5720], VET [INTAREA-VET], and
   SEAL [INTAREA-SEAL].

   The IRON system further depends on mutual authentication of IRON
   Clients to Servers and of Servers to Relays.  As for all Internet
   communications, the IRON system also depends on Relays acting with
   integrity and not injecting false advertisements into the Internet
   routing system (e.g., to mount traffic-siphoning attacks).

   IRON Servers must perform source address verification on the
   packets they accept from IRON Clients.
   Clients must therefore include a signature on each packet that the
   Server can use to verify that the Client is authorized to use the
   source address.  Source address verification considerations are
   discussed in [I-D.ietf-savi-framework].

   IRON Servers must ensure that any changes in a Client's locator
   addresses are communicated only through an authenticated exchange
   that is not subject to replay.  For this reason, Clients
   periodically send digitally signed SRS messages to the Server.  If
   the Client's locator address stays the same, the Server can accept
   the SRS message without verifying the signature.  If the Client's
   locator address changes, the Server must verify the SRS message's
   signature before accepting the message.  Once the message has been
   authenticated, the Server updates the Client's locator address to
   the new address.

   Each IRON instance requires a means for assuring the integrity of
   the interior routing system so that all Relays and Servers in the
   overlay have a consistent view of CP<->Server bindings.  Also,
   Denial-of-Service (DoS) attacks on IRON Relays and Servers can
   occur when packets with spoofed source addresses arrive at high
   data rates.  However, this issue is no different than for any
   border router in the public Internet today.

   Middleboxes can interfere with tunneled packets within an IRON
   instance in various ways.  For example, a middlebox may alter a
   packet's contents, change a packet's locator addresses, inject
   spurious packets, replay old packets, etc.  These issues are no
   different than for middlebox interactions with ordinary Internet
   communications.  If man-in-the-middle attacks are a matter for
   concern in certain deployments, however, IRON Agents can use IPsec
   [RFC4301] or TLS/SSL [RFC5246] to protect the authenticity,
   integrity, and (if necessary) privacy of their tunneled packets.

18. Acknowledgements

   The ideas behind this work have benefited greatly from discussions
   with colleagues, some of which appear on the RRG and other
   IRTF/IETF mailing lists.  Robin Whittle and Steve Russert
   co-authored the TTR mobility architecture, which strongly
   influenced IRON.  Eric Fleischman pointed out the opportunity to
   leverage anycast for discovering topologically close Servers.
   Thomas Henderson recommended a quantitative analysis of scaling
   properties.

   The following individuals provided essential review input: Jari
   Arkko, Mohamed Boucadair, Stewart Bryant, John Buford, Ralph Droms,
   Wesley Eddy, Adrian Farrel, Dae Young Kim, and Robin Whittle.

   Discussions with colleagues following the publication of RFC 6179
   have provided useful insights that have resulted in significant
   improvements to this, the Second Edition of IRON.

19. References

19.1. Normative References

   [RFC0791]  Postel, J., "Internet Protocol", STD 5, RFC 791,
              September 1981.

   [RFC2460]  Deering, S. and R. Hinden, "Internet Protocol, Version 6
              (IPv6) Specification", RFC 2460, December 1998.

19.2. Informative References

   [AERO]     Templin, F., Ed., "Asymmetric Extended Route
              Optimization (AERO)", Work in Progress, June 2011.

   [BGPMON]   BGPmon.net, "BGPmon.net - Monitoring Your Prefixes,
              http://bgpmon.net/stat.php", June 2010.

   [BROKER]   Wikipedia, "List of IPv6 Tunnel Brokers,
              http://en.wikipedia.org/wiki/List_of_IPv6_tunnel_brokers",
              August 2011.

   [EVOLUTION]
              Zhang, B., Zhang, L., and L. Wang, "Evolution Towards
              Global Routing Scalability", Work in Progress,
              October 2009.

   [GROW-VA]  Francis, P., Xu, X., Ballani, H., Jen, D., Raszuk, R.,
              and L. Zhang, "FIB Suppression with Virtual
              Aggregation", Work in Progress, February 2011.

   [I-D.ietf-savi-framework]
              Wu, J., Bi, J., Bagnulo, M., Baker, F., and C. Vogt,
              "Source Address Validation Improvement Framework",
              draft-ietf-savi-framework-05 (work in progress),
              July 2011.

   [INTAREA-SEAL]
              Templin, F., Ed., "The Subnetwork Encapsulation and
              Adaptation Layer (SEAL)", Work in Progress,
              February 2011.

   [INTAREA-VET]
              Templin, F., Ed., "Virtual Enterprise Traversal (VET)",
              Work in Progress, January 2011.

   [IVIP-ARCH]
              Whittle, R., "Ivip (Internet Vastly Improved Plumbing)
              Architecture", Work in Progress, March 2010.

   [RADIR]    Narten, T., "On the Scalability of Internet Routing",
              Work in Progress, February 2010.

   [RFC1070]  Hagens, R., Hall, N., and M. Rose, "Use of the Internet
              as a subnetwork for experimentation with the OSI network
              layer", RFC 1070, February 1989.

   [RFC1918]  Rekhter, Y., Moskowitz, R., Karrenberg, D., Groot, G.,
              and E. Lear, "Address Allocation for Private Internets",
              BCP 5, RFC 1918, February 1996.

   [RFC1930]  Hawkinson, J. and T. Bates, "Guidelines for creation,
              selection, and registration of an Autonomous System
              (AS)", BCP 6, RFC 1930, March 1996.

   [RFC3053]  Durand, A., Fasano, P., Guardini, I., and D. Lento,
              "IPv6 Tunnel Broker", RFC 3053, January 2001.

   [RFC4192]  Baker, F., Lear, E., and R. Droms, "Procedures for
              Renumbering an IPv6 Network without a Flag Day",
              RFC 4192, September 2005.

   [RFC4271]  Rekhter, Y., Li, T., and S. Hares, "A Border Gateway
              Protocol 4 (BGP-4)", RFC 4271, January 2006.

   [RFC4301]  Kent, S. and K. Seo, "Security Architecture for the
              Internet Protocol", RFC 4301, December 2005.

   [RFC4548]  Gray, E., Rutemiller, J., and G. Swallow, "Internet Code
              Point (ICP) Assignments for NSAP Addresses", RFC 4548,
              May 2006.

   [RFC4605]  Fenner, B., He, H., Haberman, B., and H. Sandick,
              "Internet Group Management Protocol (IGMP) / Multicast
              Listener Discovery (MLD)-Based Multicast Forwarding
              ("IGMP/MLD Proxying")", RFC 4605, August 2006.

   [RFC4984]  Meyer, D., Zhang, L., and K. Fall, "Report from the IAB
              Workshop on Routing and Addressing", RFC 4984,
              September 2007.

   [RFC5214]  Templin, F., Gleeson, T., and D. Thaler, "Intra-Site
              Automatic Tunnel Addressing Protocol (ISATAP)",
              RFC 5214, March 2008.

   [RFC5246]  Dierks, T. and E. Rescorla, "The Transport Layer
              Security (TLS) Protocol Version 1.2", RFC 5246,
              August 2008.

   [RFC5522]  Eddy, W., Ivancic, W., and T. Davis, "Network Mobility
              Route Optimization Requirements for Operational Use in
              Aeronautics and Space Exploration Mobile Networks",
              RFC 5522, October 2009.

   [RFC5720]  Templin, F., "Routing and Addressing in Networks with
              Global Enterprise Recursion (RANGER)", RFC 5720,
              February 2010.

   [RFC5887]  Carpenter, B., Atkinson, R., and H. Flinck, "Renumbering
              Still Needs Work", RFC 5887, May 2010.

   [RFC6115]  Li, T., "Recommendation for a Routing Architecture",
              RFC 6115, February 2011.

   [RFC6139]  Russert, S., Fleischman, E., and F. Templin, "Routing
              and Addressing in Networks with Global Enterprise
              Recursion (RANGER) Scenarios", RFC 6139, February 2011.

   [RFC6169]  Krishnan, S., Thaler, D., and J. Hoagland, "Security
              Concerns with IP Tunneling", RFC 6169, April 2011.

   [RO-CR]    Bernardos, C., Calderon, M., and I. Soto, "Correspondent
              Router based Route Optimisation for NEMO (CRON)", Work
              in Progress, July 2008.

   [SAMPLE]   Carpenter, B. and S. Jiang, "Legacy NAT Traversal for
              IPv6: Simple Address Mapping for Premises Legacy
              Equipment (SAMPLE)", Work in Progress, June 2010.

   [TTRMOB]   Whittle, R. and S. Russert, "TTR Mobility Extensions for
              Core-Edge Separation Solutions to the Internet's Routing
              Scaling Problem,
              http://www.firstpr.com.au/ip/ivip/TTR-Mobility.pdf",
              August 2008.

Appendix A. IRON Operation over Internetworks with Different Address
            Families

   The IRON architecture leverages the routing system by providing
   generally shortest-path routing for packets with CPA addresses from
   APs that match the address family of the underlying Internetwork.
   When the APs are of an address family that is not routable within
   the underlying Internetwork, however (e.g., when OSI/NSAP [RFC4548]
   APs are used over an IPv4 Internetwork), a global Master AP mapping
   database (MAP) is required.  The MAP allows the Relays of the local
   IRON instance to map APs belonging to other IRON instances to
   addresses taken from companion prefixes of address families that
   are routable within the Internetwork.  For example, an IPv6 AP
   (e.g., 2001:DB8::/32) could be paired with one or more companion
   IPv4 prefixes (e.g., 192.0.2.0/24) so that encapsulated IPv6
   packets can be forwarded over IPv4-only Internetworks.  (In the
   limiting case, the companion prefixes could themselves be singleton
   addresses, e.g., 192.0.2.1/32.)

   The MAP is maintained by a globally managed authority, e.g., in the
   same manner as the Internet Assigned Numbers Authority (IANA)
   currently maintains the master list of all top-level IPv4 and IPv6
   delegations.  The MAP can be replicated across multiple servers for
   load balancing using common Internetworking server hierarchies,
   e.g., DNS caching resolvers, FTP mirror servers, etc.

   Upon startup, each Relay advertises IPv4 companion prefixes (e.g.,
   192.0.2.0/24) into the IPv4 Internetwork routing system and/or IPv6
   companion prefixes (e.g., 2001:DB8::/64) into the IPv6 Internetwork
   routing system for the IRON instance that it serves.
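The MAP's role can be sketched as a simple lookup from a foreign AP to a routable companion prefix.  The table contents below reuse the example pairing given above (2001:DB8::/32 with 192.0.2.0/24); the function and variable names are illustrative assumptions, not part of any specification.

```python
import ipaddress

# Hypothetical MAP contents: foreign IPv6 APs paired with IPv4
# companion prefixes, per the example pairing in the text above.
MAP = {
    ipaddress.ip_network("2001:db8::/32"):
        ipaddress.ip_network("192.0.2.0/24"),
}

def companion_prefix(cpa):
    """Return the IPv4 companion prefix covering a foreign CPA."""
    addr = ipaddress.ip_address(cpa)
    for ap, companion in MAP.items():
        if addr in ap:
            return companion
    return None  # destination AP is not listed in the MAP

# A CPA under 2001:db8::/32 maps to the 192.0.2.0/24 companion, so
# the Relay can encapsulate using an IPv4 outer header; a CPA under
# an unlisted AP has no companion and cannot be forwarded this way.
assert companion_prefix("2001:db8::1") is not None
assert companion_prefix("2001:db9::1") is None
```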
   The Relay then selects singleton host numbers within the IPv4
   companion prefixes (e.g., 192.0.2.1) and/or IPv6 companion prefixes
   (e.g., 2001:DB8::0) and assigns the resulting addresses to its
   Internetwork interfaces.  (When singleton companion prefixes are
   used (e.g., 192.0.2.1/32), the Relay does not advertise the
   companion prefixes but instead simply assigns them to its
   Internetwork interfaces and allows standard Internet routing to
   direct packets to the interfaces.)

   The Relay then discovers the APs for other IRON instances by
   reading the MAP, either a priori or on demand when data packets
   addressed to other AP destinations arrive.  The Relay reads the MAP
   from a nearby MAP server and periodically checks the server for
   deltas since the database was last read.  The Relay can then
   forward packets toward CPAs belonging to other IRON instances by
   encapsulating them in an outer header of the companion-prefix
   address family and using the Relay anycast address as the outer
   destination address.

   Possible encapsulations in this model include IPv6-in-IPv4,
   IPv4-in-IPv6, OSI/CLNP-in-IPv6, OSI/CLNP-in-IPv4, etc.  Details of
   how the DNS can be used as a MAP are given in Section 5.4 of VET
   [INTAREA-VET].

Appendix B. Scaling Considerations

   Scaling aspects of the IRON architecture have strong implications
   for its applicability in practical deployments.  Scaling must be
   considered along multiple vectors, including interdomain core
   routing scaling, scaling to accommodate large numbers of EUNs,
   traffic scaling, state requirements, etc.

   In terms of routing scaling, each VSP will advertise one or more
   APs into the global Internet routing system, from which CPs are
   delegated to end users.  Routing scaling will therefore be
   minimized when each AP covers many CPs.  For example, the IPv6
   prefix 2001:DB8::/32 contains 2^24 ::/56 CP prefixes for assignment
   to EUNs; therefore, the VSP could accommodate 2^32 ::/56 CPs with
   only 2^8 ::/32 APs advertised in the interdomain routing core.
   (When even longer CP prefixes are used, e.g., /64s assigned to
   individual handsets in a cellular provider network, many more EUNs
   can be represented within a single AP.)

   In terms of traffic scaling for Relays, each Relay represents an
   ASBR of a "shell" enterprise network that simply directs arriving
   traffic packets with CPA destination addresses toward the Servers
   that service the corresponding Clients.  Moreover, the Relay sheds
   traffic destined to CPAs through redirection, which removes it from
   the path for the majority of traffic packets between Clients within
   the same IRON instance.  On the other hand, each Relay must handle
   all traffic packets forwarded between the CPs it manages and the
   rest of the Internet.  The scaling concerns for this latter class
   of traffic are no different than for ASBRs that connect large
   enterprise networks to the Internet.

   In terms of traffic scaling for Servers, each Server services a set
   of CPs.  The Server services all traffic packets destined to its
   own CPs, but only the initial packets of flows initiated from its
   own CPs and destined to other CPs.  Therefore, traffic scaling for
   CPA-addressed traffic is an asymmetric consideration and is
   proportional to the number of CPs each Server serves.

   In terms of state requirements for Relays, each Relay maintains a
   list of the Servers in the IRON instance as well as forwarding
   table entries for the CPs that each Server handles.  This Relay
   state is therefore dominated by the total number of CPs handled by
   the Relay's associated Servers.
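The routing-scaling arithmetic given earlier in this appendix can be verified with a short calculation; this is simply a check of the numbers stated in the text.

```python
# Each ::/32 AP contains 2^(56-32) = 2^24 ::/56 CP prefixes.
cps_per_ap = 2 ** (56 - 32)
assert cps_per_ap == 2 ** 24

# 2^8 such APs therefore cover 2^8 * 2^24 = 2^32 CPs, as stated.
aps = 2 ** 8
assert aps * cps_per_ap == 2 ** 32

# With longer /64 CPs (e.g., one per cellular handset), a single
# /32 AP covers 2^(64-32) = 2^32 EUNs on its own.
assert 2 ** (64 - 32) == 2 ** 32
```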
   Keeping in mind that current-day core router technologies are only
   capable of handling fast-path FIB cache sizes of O(1M) entries, a
   large-scale deployment may require that the total CP database for
   the VSP overlay be spread across the FIBs of a mesh of Relays
   rather than fully resident in the FIB of each Relay.  In that case,
   the techniques of Virtual Aggregation (VA) may be useful in
   bridging together the mesh of Relays.  Alternatively, each Relay
   could elect to keep some or all CP prefixes out of the FIB and
   maintain them only in a slow-path forwarding table.  In that case,
   considerably more CP entries could be kept in each Relay, at the
   cost of incurring slow-path processing for the initial packets of a
   flow.

   In terms of state requirements for Servers, each Server maintains
   state only for the CPs it serves, and not for the CPs handled by
   other Servers in the IRON instance.  Finally, neither Relays nor
   Servers need keep state for the final destinations of outbound
   traffic.

   Clients source and sink all traffic packets originating from or
   destined to the CP.  Therefore, traffic scaling considerations for
   Clients are the same as for any site border router.  Clients also
   retain unidirectional tunnel-neighbor state for the Servers serving
   the final destinations of outbound traffic flows.  This can be
   managed as soft state, since stale entries purged from the cache
   will be refreshed when new traffic packets are sent.

Author's Address

   Fred L. Templin (editor)
   Boeing Research & Technology
   P.O. Box 3707 MC 7L-49
   Seattle, WA  98124
   USA

   EMail: fltemplin@acm.org