2 GROW Working Group N. Hilliard 3 Internet-Draft INEX 4 Intended status: Informational E. Jasinska 5 Expires: March 12, 2015 Netflix, Inc 6 R. Raszuk 7 NTT I3 8 N. Bakker 9 Akamai Technologies B.V. 10 September 8, 2014 12 Internet Exchange Route Server Operations 13 draft-ietf-grow-ix-bgp-route-server-operations-03 15 Abstract 17 The popularity of Internet exchange points (IXPs) brings new 18 challenges to interconnecting networks. 
While bilateral eBGP 19 sessions between exchange participants were historically the most 20 common means of exchanging reachability information over an IXP, the 21 overhead associated with this interconnection method causes serious 22 operational and administrative scaling problems for IXP participants. 24 Multilateral interconnection using Internet route servers can 25 dramatically reduce the administrative and operational overhead of 26 IXP participation, and these systems are used by many IXP participants as 27 a preferred means of exchanging routing information. 29 This document describes operational considerations for multilateral 30 interconnections at IXPs. 32 Status of This Memo 34 This Internet-Draft is submitted in full conformance with the 35 provisions of BCP 78 and BCP 79. 37 Internet-Drafts are working documents of the Internet Engineering 38 Task Force (IETF). Note that other groups may also distribute 39 working documents as Internet-Drafts. The list of current Internet- 40 Drafts is at http://datatracker.ietf.org/drafts/current/. 42 Internet-Drafts are draft documents valid for a maximum of six months 43 and may be updated, replaced, or obsoleted by other documents at any 44 time. It is inappropriate to use Internet-Drafts as reference 45 material or to cite them other than as "work in progress." 47 This Internet-Draft will expire on March 12, 2015. 49 Copyright Notice 51 Copyright (c) 2014 IETF Trust and the persons identified as the 52 document authors. All rights reserved. 54 This document is subject to BCP 78 and the IETF Trust's Legal 55 Provisions Relating to IETF Documents 56 (http://trustee.ietf.org/license-info) in effect on the date of 57 publication of this document. Please review these documents 58 carefully, as they describe your rights and restrictions with respect 59 to this document. 
Code Components extracted from this document must 60 include Simplified BSD License text as described in Section 4.e of 61 the Trust Legal Provisions and are provided without warranty as 62 described in the Simplified BSD License. 64 Table of Contents 66 1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . 3 67 1.1. Notational Conventions . . . . . . . . . . . . . . . . . 3 68 2. Bilateral BGP Sessions . . . . . . . . . . . . . . . . . . . 3 69 3. Multilateral Interconnection . . . . . . . . . . . . . . . . 4 70 4. Operational Considerations for Route Server Installations . . 5 71 4.1. Path Hiding . . . . . . . . . . . . . . . . . . . . . . . 5 72 4.2. Route Server Scaling . . . . . . . . . . . . . . . . . . 6 73 4.2.1. Tackling Scaling Issues . . . . . . . . . . . . . . . 6 74 4.2.1.1. View Merging and Decomposition . . . . . . . . . 7 75 4.2.1.2. Destination Splitting . . . . . . . . . . . . . . 7 76 4.2.1.3. NEXT_HOP Resolution . . . . . . . . . . . . . . . 8 77 4.3. Prefix Leakage Mitigation . . . . . . . . . . . . . . . . 8 78 4.4. Route Server Redundancy . . . . . . . . . . . . . . . . . 8 79 4.5. AS_PATH Consistency Check . . . . . . . . . . . . . . . . 9 80 4.6. Export Routing Policies . . . . . . . . . . . . . . . . . 9 81 4.6.1. BGP Communities . . . . . . . . . . . . . . . . . . . 9 82 4.6.2. Internet Routing Registry . . . . . . . . . . . . . . 9 83 4.6.3. Client-accessible Databases . . . . . . . . . . . . . 10 84 4.7. Layer 2 Reachability Problems . . . . . . . . . . . . . . 10 85 4.8. BGP NEXT_HOP Hijacking . . . . . . . . . . . . . . . . . 10 86 5. Security Considerations . . . . . . . . . . . . . . . . . . . 12 87 6. IANA Considerations . . . . . . . . . . . . . . . . . . . . . 12 88 7. Acknowledgments . . . . . . . . . . . . . . . . . . . . . . . 12 89 8. References . . . . . . . . . . . . . . . . . . . . . . . . . 12 90 8.1. Normative References . . . . . . . . . . . . . . . . . . 12 91 8.2. Informative References . . . . . . . . . . . 
. . . . . . 12 92 Authors' Addresses . . . . . . . . . . . . . . . . . . . . . . . 13 94 1. Introduction 96 Internet exchange points (IXPs) provide IP data interconnection 97 facilities for their participants, typically using shared Layer-2 98 networking media such as Ethernet. The Border Gateway Protocol (BGP) 99 [RFC4271] is normally used to facilitate exchange of network 100 reachability information over these media. 102 As bilateral interconnection between IXP participants requires 103 operational and administrative overhead, BGP route servers 104 [I-D.ietf-idr-ix-bgp-route-server] are often deployed by IXP 105 operators to provide a simple and convenient means of interconnecting 106 IXP participants with each other. A route server redistributes 107 prefixes received from its BGP clients to other clients according to 108 a pre-specified policy, and it can be viewed as similar to an eBGP 109 equivalent of an iBGP [RFC4456] route reflector. 111 Route servers at IXPs require careful management and it is important 112 for route server operators to thoroughly understand both how they 113 work and what their limitations are. In this document, we discuss 114 several issues of operational relevance to route server operators and 115 provide recommendations to help route server operators provision a 116 reliable interconnection service. 118 1.1. Notational Conventions 120 The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", 121 "SHOULD", "SHOULD NOT", "RECOMMENDED", "NOT RECOMMENDED", "MAY", and 122 "OPTIONAL" in this document are to be interpreted as described in 123 [RFC2119]. 125 2. Bilateral BGP Sessions 127 Bilateral interconnection is a method of interconnecting routers 128 using individual BGP sessions between each participant router on an 129 IXP, in order to exchange reachability information. If an IXP 130 participant wishes to implement an open interconnection policy - i.e. 
131 a policy of interconnecting with as many other IXP participants as 132 possible - it is necessary for the participant to liaise with each of 133 their intended interconnection partners. Interconnection can then be 134 implemented bilaterally by configuring a BGP session on both 135 participants' routers to exchange network reachability information. 136 If each exchange participant interconnects with each other 137 participant, a full mesh of BGP sessions is needed, as shown in 138 Figure 1. 140 ___ ___ 141 / \ / \ 142 ..| AS1 |..| AS2 |.. 143 : \___/____\___/ : 144 : | \ / | : 145 : | \ / | : 146 : IXP | \/ | : 147 : | /\ | : 148 : | / \ | : 149 : _|_/____\_|_ : 150 : / \ / \ : 151 ..| AS3 |..| AS4 |.. 152 \___/ \___/ 154 Figure 1: Full-Mesh Interconnection at an IXP 156 Figure 1 depicts an IXP platform with four connected routers, 157 administered by four separate exchange participants, each of them 158 with a locally unique autonomous system number: AS1, AS2, AS3 and 159 AS4. Each of these four participants wishes to exchange traffic with 160 all other participants; this is accomplished by configuring a full 161 mesh of BGP sessions on each router connected to the exchange, 162 resulting in 6 BGP sessions across the IXP fabric. 164 The number of BGP sessions at an exchange has an upper bound of 165 n*(n-1)/2, where n is the number of routers at the exchange. As many 166 exchanges have large numbers of participating networks, the amount of 167 administrative and operational overhead required to implement an open 168 interconnection policy scales quadratically. New participants to an IXP 169 require significant initial resourcing in order to gain value from 170 their IXP connection, while existing exchange participants need to 171 commit ongoing resources in order to benefit from interconnecting 172 with these new participants. 174 3. 
Multilateral Interconnection 176 Multilateral interconnection is implemented using a route server 177 configured to use BGP to distribute network layer reachability 178 information (NLRI) among all client routers. The route server 179 preserves the BGP NEXT_HOP attribute from all received NLRI UPDATE 180 messages, and passes these messages with unchanged NEXT_HOP to its 181 route server clients, according to its configured routing policy, as 182 described in [I-D.ietf-idr-ix-bgp-route-server]. Using this method 183 of exchanging NLRI messages, an IXP participant router can receive an 184 aggregated list of prefixes from all other route server clients using 185 a single BGP session to the route server instead of depending on BGP 186 sessions with each other router at the exchange. This reduces the 187 overall number of BGP sessions at an Internet exchange from n*(n-1)/2 188 to n, where n is the number of routers at the exchange. 190 Although a route server uses BGP to exchange reachability information 191 with each of its clients, it does not forward traffic itself and is 192 therefore not a router. 194 In practical terms, this allows dense interconnection between IXP 195 participants with low administrative overhead and significantly 196 simpler and smaller router configurations. In particular, new IXP 197 participants benefit from immediate and extensive interconnection, 198 while existing route server participants receive reachability 199 information from these new participants without necessarily having to 200 modify their configurations. 202 ___ ___ 203 / \ / \ 204 ..| AS1 |..| AS2 |.. 205 : \___/ \___/ : 206 : \ / : 207 : \ / : 208 : \__/ : 209 : IXP / \ : 210 : | RS | : 211 : \____/ : 212 : / \ : 213 : / \ : 214 : __/ \__ : 215 : / \ / \ : 216 ..| AS3 |..| AS4 |.. 
217 \___/ \___/ 219 Figure 2: IXP-based Interconnection with Route Server 221 As illustrated in Figure 2, each router on the IXP fabric requires 222 only a single BGP session to the route server, from which it can 223 receive reachability information for all other routers on the IXP 224 which also connect to the route server. 226 4. Operational Considerations for Route Server Installations 228 4.1. Path Hiding 230 "Path hiding" is a term used in [I-D.ietf-idr-ix-bgp-route-server] to 231 describe the process whereby a route server may mask individual paths 232 by applying conflicting routing policies to its Loc-RIB. When this 233 happens, route server clients receive incomplete information from the 234 route server about network reachability. 236 There are several approaches which may be used to mitigate 237 the effects of path hiding; these are described in 238 [I-D.ietf-idr-ix-bgp-route-server]. However, the only method which 239 does not require explicit support from the route server client is for 240 the route server itself to maintain an individual Loc-RIB for each 241 client which is the subject of conflicting routing policies. 243 4.2. Route Server Scaling 245 While deployment of multiple Loc-RIBs on the route server presents a 246 simple way to avoid the path hiding problem noted in Section 4.1, 247 this approach requires significantly more computing resources on the 248 route server than where a single Loc-RIB is deployed for all clients. 249 As the [RFC4271] BGP decision process must be applied to all Loc-RIBs 250 deployed on the route server, both CPU and memory requirements on the 251 host computer scale approximately according to O(P * N), where P is 252 the total number of unique paths received by the route server and N 253 is the number of route server clients which require a unique Loc-RIB. 
254 As this is a super-linear scaling relationship, large route servers 255 may derive benefit from deploying per-client Loc-RIBs only where they 256 are required. 258 Regardless of whether any Loc-RIB optimization technique is implemented, the 259 route server's control plane bandwidth requirements will scale 260 according to O(P * N), where P is the total number of unique paths 261 received by the route server and N is the total number of route 262 server clients. In the case where P_avg (the arithmetic mean number 263 of unique paths received per route server client) remains roughly 264 constant even as the number of connected clients increases, this 265 relationship can be rewritten as O((P_avg * N) * N) or O(N^2). This 266 quadratic upper bound on the network traffic requirements indicates 267 that the route server model will not scale to arbitrarily large 268 sizes. 270 This scaling analysis presents problems in three key areas: route 271 processor CPU overhead associated with BGP decision process 272 calculations, the memory requirements for handling many different BGP 273 path entries, and the network traffic bandwidth required to 274 distribute these prefixes from the route server to each route server 275 client. 277 4.2.1. Tackling Scaling Issues 279 The network traffic scaling issue presents significant difficulties 280 with no clear solution - ultimately, each client must receive an 281 UPDATE for each unique prefix received by the route server. However, 282 there are several potential methods for dealing with the CPU and 283 memory resource requirements of route servers. 285 4.2.1.1. View Merging and Decomposition 287 View merging and decomposition, outlined in [RS-ARCH], describes a 288 method of optimizing memory and CPU requirements where multiple route 289 server clients are subject to exactly the same routing policies. In 290 this situation, the multiple Loc-RIB views required by each client 291 are merged into a single view. 
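The grouping step of view merging can be sketched in a few lines of Python. This is a hypothetical illustration only: the client names, the string representation of an export policy, and the function name are all invented for the example, and a real route server would merge views inside its own RIB data structures.

```python
from collections import defaultdict

def merge_views(clients):
    """Group route server clients whose outbound routing policies are
    identical, so that each group can share one merged Loc-RIB view.

    `clients` maps a client name to a hashable summary of its export
    policy (a hypothetical representation for illustration)."""
    views = defaultdict(list)
    for client, policy in clients.items():
        views[policy].append(client)
    return list(views.values())

# Three clients with the same open policy collapse into one shared
# view; the client with a unique policy keeps its own Loc-RIB.
groups = merge_views({
    "AS1": "announce-all",
    "AS2": "announce-all",
    "AS3": "announce-all",
    "AS4": "announce-all-except-AS1",
})
```

With this example input, four per-client Loc-RIBs collapse into two merged views, which is the memory and CPU saving the technique aims for.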
293 There are several variations of this approach. If the route server 294 operator has prior knowledge of interconnection relationships between 295 route server clients, then the operator may configure separate Loc- 296 RIBs only for route server clients with unique outbound routing 297 policies. As this approach requires prior knowledge of 298 interconnection relationships, the route server operator must depend 299 on each client sharing their interconnection policies, either in an 300 internal provisioning database controlled by the operator, or else in 301 an external data store such as an Internet Routing Registry Database. 303 Conversely, the route server implementation itself may implement 304 internal view decomposition by creating virtual Loc-RIBs based on a 305 single in-memory master Loc-RIB, with delta differences for each 306 prefix subject to different routing policies. This allows a more 307 granular and flexible approach to the problem of Loc-RIB scaling, at 308 the expense of requiring a more complex in-memory Loc-RIB structure. 310 Whatever method of view merging and decomposition is chosen on a 311 route server, pathological edge cases can be created whereby they 312 will scale no better than fully non-optimized per-client Loc-RIBs. 313 However, as most route server clients connect to a route server for 314 the purposes of reducing overhead, rather than implementing complex 315 per-client routing policies, edge cases tend not to arise in 316 practice. 318 4.2.1.2. Destination Splitting 320 Destination splitting, also described in [RS-ARCH], describes a 321 method for route server clients to connect to multiple route servers 322 and to send non-overlapping sets of prefixes to each route server. 323 As each route server computes the best path for its own set of 324 prefixes, the quadratic scaling requirement operates on multiple 325 smaller sets of prefixes. 
This reduces the overall computational and 326 memory requirements for managing multiple Loc-RIBs and performing the 327 best-path calculation on each. In order for this method to perform 328 well, destination splitting would require significant co-ordination 329 between the route server operator and each route server client. In 330 practice, this level of close co-ordination between IXP operators and 331 their participants tends not to occur, suggesting that the approach 332 is unlikely to be of any real use on production IXPs. 334 4.2.1.3. NEXT_HOP Resolution 336 As route servers are usually deployed at IXPs which use flat layer 2 337 networks, recursive resolution of the NEXT_HOP attribute is generally 338 not required, and can be replaced by a simple check to ensure that 339 the NEXT_HOP value for each prefix is a network address on the IXP 340 LAN's IP address range. 342 4.3. Prefix Leakage Mitigation 344 Prefix leakage occurs when a BGP client unintentionally distributes 345 NLRI UPDATE messages to one or more neighboring BGP routers. Prefix 346 leakage of this form to a route server can cause serious connectivity 347 problems at an IXP if each route server client is configured to 348 accept all prefix UPDATE messages from the route server. It is 349 therefore RECOMMENDED when deploying route servers that, due to the 350 potential for collateral damage caused by NLRI leakage, route server 351 operators deploy prefix leakage mitigation measures in order to 352 prevent unintentional prefix announcements or else limit the scale of 353 any such leak. Although not foolproof, per-client inbound prefix 354 limits can restrict the damage caused by prefix leakage in many 355 cases. Per-client inbound prefix filtering on the route server is a 356 more deterministic and usually more reliable means of preventing 357 prefix leakage, but requires more administrative resources to 358 maintain properly. 
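The two mitigation measures described above can be sketched as follows. This is a minimal illustration, not an implementation: the per-client limits, the filter table, and the function name are hypothetical, and in practice both mechanisms run inside the route server's BGP implementation rather than in external code.

```python
import ipaddress

# Hypothetical per-client configuration: a maximum accepted prefix
# count and an explicit inbound filter list, as recommended above.
MAX_PREFIXES = {"AS64500": 100}
INBOUND_FILTER = {"AS64500": [ipaddress.ip_network("192.0.2.0/24")]}

def accept_announcements(client, prefixes):
    """Apply a per-client inbound prefix limit, then per-client inbound
    prefix filtering.  Exceeding the limit is treated as a likely leak,
    so all announcements from that client are rejected; otherwise only
    prefixes covered by the client's filter list are accepted."""
    if len(prefixes) > MAX_PREFIXES.get(client, 0):
        return []
    allowed = INBOUND_FILTER.get(client, [])
    return [p for p in prefixes
            if any(p.subnet_of(net) for net in allowed)]
```

The prefix limit bounds the damage of a large leak without knowing the client's policy, while the filter list is the deterministic check that requires ongoing maintenance, mirroring the trade-off described above.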
360 If a route server operator implements per-client inbound prefix 361 filtering, then it is RECOMMENDED that the operator also build in 362 mechanisms to automatically compare the Adj-RIB-In received from each 363 client with the inbound prefix lists configured for those clients. 364 Naturally, it is the responsibility of the route server client to 365 ensure that their stated prefix list is compatible with what they 366 announce to an IXP route server. However, many network operators do 367 not carefully manage their published routing policies and it is not 368 uncommon to see significant variation between the two sets of 369 prefixes. Route server operator visibility into this discrepancy can 370 provide significant advantages to both operator and client. 372 4.4. Route Server Redundancy 374 As the purpose of an IXP route server implementation is to provide a 375 reliable reachability brokerage service, it is RECOMMENDED that 376 exchange operators who implement route server systems provision 377 multiple route servers on each shared Layer-2 domain. There is no 378 requirement to use the same BGP implementation or operating system 379 for each route server on the IXP fabric; however, it is RECOMMENDED 380 that where an operator provisions more than a single server on the 381 same shared Layer-2 domain, each route server implementation be 382 configured equivalently and in such a manner that the path 383 reachability information from each system is identical. 385 4.5. AS_PATH Consistency Check 387 [RFC4271] requires that every BGP speaker which advertises a route to 388 another external BGP speaker prepends its own AS number as the last 389 element of the AS_PATH sequence. Therefore the leftmost AS in an 390 AS_PATH attribute should be equal to the autonomous system number of 391 the BGP speaker which sent the UPDATE message. 
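The consistency check described above reduces to a one-line comparison, sketched here in Python with private-use AS numbers. The function name is hypothetical; real BGP implementations expose this check as a configuration option rather than as user code.

```python
def as_path_is_consistent(peer_as, as_path):
    """eBGP sanity check: the leftmost AS in a received AS_PATH should
    equal the AS number of the peer that sent the UPDATE."""
    return bool(as_path) and as_path[0] == peer_as

# Bilateral session with AS 64500: the peer prepended its own AS,
# so the check passes.
assert as_path_is_consistent(64500, [64500, 64496])

# Session with a route server in AS 64999: the route server does not
# prepend its own AS, so the leftmost AS belongs to the originating
# client and the check fails.
assert not as_path_is_consistent(64999, [64500, 64496])
```

The second case shows why a route server client that leaves this check enabled would reject every route it receives from the route server.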
393 As [I-D.ietf-idr-ix-bgp-route-server] suggests that route servers 394 should not modify the AS_PATH attribute, a consistency check on the 395 AS_PATH of an UPDATE received by a route server client would normally 396 fail. It is therefore RECOMMENDED that route server clients disable 397 the AS_PATH consistency check towards the route server. 399 4.6. Export Routing Policies 401 Policy filtering is commonly implemented on route servers to provide 402 prefix distribution control mechanisms for route server clients. A 403 route server "export" policy is a policy which affects prefixes sent 404 from the route server to a route server client. Several different 405 strategies are commonly used for implementing route server export 406 policies. 408 4.6.1. BGP Communities 410 Prefixes sent to the route server are tagged with specific [RFC1997] 411 or [RFC4360] BGP community attributes, based on pre-defined values 412 agreed between the operator and all clients. Based on these community 413 tags, prefixes may be propagated to all other clients, a subset of 414 clients, or none. This mechanism allows route server clients to 415 instruct the route server to implement per-client export routing 416 policies. 418 As both standard and extended BGP community values are restricted 419 to 6 octets, the route server operator should take care to ensure 420 that the predefined BGP community values mechanism used on their 421 route server is compatible with [RFC4893] 4-octet autonomous system 422 numbers. 424 4.6.2. Internet Routing Registry 426 Internet Routing Registry databases (IRRDBs) may be used by route 427 server operators to construct per-client routing policies. 428 [RFC2622] Routing Policy Specification Language (RPSL) provides a 429 comprehensive grammar for describing interconnection relationships, 430 and several toolsets exist which can be used to translate RPSL policy 431 descriptions into route server configurations. 433 4.6.3. 
Client-accessible Databases 435 Should the route server operator not wish to use either BGP community 436 tags or the public IRRDBs for implementing client export policies, 437 they may implement their own routing policy database system for 438 managing their clients' requirements. A database of this form SHOULD 439 allow a route server client operator to update their routing policy 440 and provide a mechanism for allowing the client to specify whether 441 they wish to exchange all their prefixes with any other route server 442 client. Optionally, the implementation may allow a client to specify 443 unique routing policies for individual prefixes over which they have 444 routing policy control. 446 4.7. Layer 2 Reachability Problems 448 Layer 2 reachability problems on an IXP can cause serious operational 449 problems for IXP participants which depend on route servers for 450 interconnection. Ethernet switch forwarding bugs have occasionally 451 been observed to cause non-transitive reachability. For example, 452 given a route server and two IXP participants, A and B, if the two 453 participants can reach the route server but cannot reach each other, 454 then traffic between the participants may be dropped until such time 455 as the layer 2 forwarding problem is resolved. This situation does 456 not tend to occur in bilateral interconnection arrangements, as the 457 routing control path between the two hosts is usually (but not 458 always, due to IXP inter-switch connectivity load balancing 459 algorithms) the same as the data path between them. 461 Problems of this form can be dealt with using [RFC5881] Bidirectional 462 Forwarding Detection (BFD). However, as this is a bilateral protocol 463 configured between routers, and as there is currently no means for 464 automatic configuration of BFD between route server clients, BFD does 465 not currently provide an optimal means of handling the problem. 467 4.8. 
BGP NEXT_HOP Hijacking 469 Section 5.1.3(2) of [RFC4271] allows eBGP speakers to change the 470 NEXT_HOP address of an NLRI update to be a different internet address 471 on the same subnet. This is the mechanism which allows route servers 472 to operate on a shared layer 2 IXP network. However, the mechanism 473 can be abused by route server clients to redirect traffic for their 474 prefixes to other IXP participant routers. 476 ____ 477 / \ 478 | AS99 | 479 \____/ 480 / \ 481 / \ 482 __/ \__ 483 / \ / \ 484 ..| AS1 |..| AS2 |.. 485 : \___/ \___/ : 486 : \ / : 487 : \ / : 488 : \__/ : 489 : IXP / \ : 490 : | RS | : 491 : \____/ : 492 : : 493 .................... 495 Figure 3: BGP NEXT_HOP Hijacking using a Route Server 497 For example in Figure 3, if AS1 and AS2 both announce prefixes for 498 AS99 to the route server, AS1 could set the NEXT_HOP address for 499 AS99's prefixes to be the address of AS2's router, thereby diverting 500 traffic for AS99 via AS2. This may override the routing policies of 501 AS99 and AS2. 503 Worse still, if the route server operator does not use inbound prefix 504 filtering, AS1 could announce any arbitrary prefix to the route 505 server with a NEXT_HOP address of any other IXP participant. This 506 could be used as a denial of service mechanism against either the 507 users of the address space being announced by illicitly diverting 508 their traffic, or the other IXP participant by overloading their 509 network with traffic which would not normally be sent there. 511 This problem is not specific to route servers: NEXT_HOP hijacking can 512 also be carried out over bilateral peering sessions. However, the potential 513 damage is amplified by route servers because a single BGP session can 514 be used to affect many networks simultaneously. 516 Route server operators SHOULD check that the BGP NEXT_HOP attribute 517 for NLRIs received from a route server client matches the interface 518 address of the client. 
If the route server receives an NLRI where 519 these addresses are different and where the announcing route server 520 client is in a different autonomous system from the route server client 521 which uses the next hop address, the NLRI SHOULD be dropped. 523 5. Security Considerations 525 On route server installations which do not employ path hiding 526 mitigation techniques, the path hiding problem outlined in 527 Section 4.1 can be used in certain circumstances to proactively block 528 third party prefix announcements from other route server clients. 530 If the route server operator does not implement prefix leakage 531 mitigation as described in Section 4.3, it is trivial for 532 route server clients to implement denial of service attacks against 533 arbitrary Internet networks using a route server. 535 Route server installations SHOULD be secured against BGP NEXT_HOP 536 hijacking, as described in Section 4.8. 538 6. IANA Considerations 540 There are no IANA considerations. 542 7. Acknowledgments 544 The authors would like to thank Chris Hall, Ryan Bickhart, Steven 545 Bakker and Eduardo Ascenco Reis for their valuable input. 547 In addition, the authors would like to acknowledge the developers of 548 BIRD, OpenBGPD and Quagga, whose open source BGP implementations 549 include route server capabilities which are compliant with this 550 document. 552 8. References 554 8.1. Normative References 556 [I-D.ietf-idr-ix-bgp-route-server] 557 Jasinska, E., Hilliard, N., Raszuk, R., and N. Bakker, 558 "Internet Exchange Route Server", draft-ietf-idr-ix-bgp- 559 route-server-05 (work in progress), June 2014. 561 [RFC2119] Bradner, S., "Key words for use in RFCs to Indicate 562 Requirement Levels", BCP 14, RFC 2119, March 1997. 564 8.2. Informative References 566 [RFC1997] Chandrasekeran, R., Traina, P., and T. Li, "BGP 567 Communities Attribute", RFC 1997, August 1996. 
569 [RFC2622] Alaettinoglu, C., Villamizar, C., Gerich, E., Kessens, D., 570 Meyer, D., Bates, T., Karrenberg, D., and M. Terpstra, 571 "Routing Policy Specification Language (RPSL)", RFC 2622, 572 June 1999. 574 [RFC4271] Rekhter, Y., Li, T., and S. Hares, "A Border Gateway 575 Protocol 4 (BGP-4)", RFC 4271, January 2006. 577 [RFC4360] Sangli, S., Tappan, D., and Y. Rekhter, "BGP Extended 578 Communities Attribute", RFC 4360, February 2006. 580 [RFC4456] Bates, T., Chen, E., and R. Chandra, "BGP Route 581 Reflection: An Alternative to Full Mesh Internal BGP 582 (IBGP)", RFC 4456, April 2006. 584 [RFC4893] Vohra, Q. and E. Chen, "BGP Support for Four-octet AS 585 Number Space", RFC 4893, May 2007. 587 [RFC5881] Katz, D. and D. Ward, "Bidirectional Forwarding Detection 588 (BFD) for IPv4 and IPv6 (Single Hop)", RFC 5881, June 589 2010. 591 [RS-ARCH] Govindan, R., Alaettinoglu, C., Varadhan, K., and D. 592 Estrin, "A Route Server Architecture for Inter-Domain 593 Routing", 1995, 594 . 596 Authors' Addresses 598 Nick Hilliard 599 INEX 600 4027 Kingswood Road 601 Dublin 24 602 IE 604 Email: nick@inex.ie 606 Elisa Jasinska 607 Netflix, Inc 608 100 Winchester Circle 609 Los Gatos, CA 95032 610 USA 612 Email: elisa@netflix.com 613 Robert Raszuk 614 NTT I3 615 101 S Ellsworth Avenue Suite 350 616 San Mateo, CA 94401 617 US 619 Email: robert@raszuk.net 621 Niels Bakker 622 Akamai Technologies B.V. 623 Kingsfordweg 151 624 Amsterdam 1043 GR 625 NL 627 Email: nbakker@akamai.com