INTAREA                                                        R. Bonica
Internet-Draft                                          Juniper Networks
Updates: 6296 (if approved)                                     F. Baker
Intended status: Experimental                              Cisco Systems
Expires: October 15, 2012                                   M. Wasserman
                                                       Painless Security
                                                               G. Miller
                                                                 Verizon
                                                               W. Kumari
                                                            Google, Inc.
                                                          April 13, 2012


    Multihoming with IPv6-to-IPv6 Network Prefix Translation (NPTv6)
                      draft-bonica-v6-multihome-03

Abstract

   RFC 6296 introduces IPv6-to-IPv6 Network Prefix Translation (NPTv6).
   By deploying NPTv6, a site can achieve addressing independence
   without contributing to excessive routing table growth.
   Section 2.4 of RFC 6296 proposes an NPTv6 architecture for sites
   that are homed to multiple upstream providers.  By deploying the
   proposed architecture, a multihomed site can achieve access
   redundancy and load balancing, in addition to addressing
   independence.

   This memo proposes an alternative NPTv6 architecture for sites that
   are homed to multiple upstream providers.  The architecture
   described herein provides transport-layer survivability, in
   addition to the benefits mentioned above.  In order to provide
   transport-layer survivability, the architecture described herein
   requires a small amount of additional configuration.

   This memo updates RFC 6296.

Status of this Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF).  Note that other groups may also distribute
   working documents as Internet-Drafts.  The list of current Internet-
   Drafts is at http://datatracker.ietf.org/drafts/current/.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."

   This Internet-Draft will expire on October 15, 2012.

Copyright Notice

   Copyright (c) 2012 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document.
   Code Components extracted from this document must include Simplified
   BSD License text as described in Section 4.e of the Trust Legal
   Provisions and are provided without warranty as described in the
   Simplified BSD License.

Table of Contents

   1.  Introduction
     1.1.  Terminology
   2.  NPTv6 Deployment
     2.1.  Topology
     2.2.  Addressing
       2.2.1.  Upstream Provider Addressing
       2.2.2.  Site Addressing
     2.3.  Address Translation
     2.4.  Domain Name System (DNS)
     2.5.  Routing
     2.6.  Failure Detection and Recovery
     2.7.  Load Balancing
   3.  Discussion
   4.  IANA Considerations
   5.  Security Considerations
   6.  Acknowledgments
   7.  References
     7.1.  Normative References
     7.2.  Informative References
   Authors' Addresses

1.  Introduction

   [RFC3582] establishes the following goals for IPv6 site multihoming:

      Redundancy - A site's ability to remain connected to the
      Internet, even when connectivity through one or more of its
      upstream providers fails.
      Transport-Layer Survivability - A site's ability to maintain
      transport-layer sessions across failover and restoration events.
      During a failover/restoration event, the transport-layer session
      may detect packet loss or reordering, but neither of these causes
      the transport-layer session to fail.

      Load Sharing - The ability of a site to distribute both inbound
      and outbound traffic across its upstream providers.

   [RFC3582] notes that a multihoming solution may require interactions
   with the routing subsystem.  However, multihoming solutions must be
   simple and scalable.  They must not require excessive operational
   effort and must not cause excessive routing table expansion.

   [RFC6296] explains how a site can achieve address independence using
   IPv6-to-IPv6 Network Prefix Translation (NPTv6).  In order to
   achieve address independence, the site assigns an inside address to
   each of its resources (e.g., hosts).  Nodes outside of the site
   identify those same resources using a corresponding Provider
   Allocated (PA) address.

   The site resolves this addressing dichotomy by deploying an NPTv6
   translator between itself and its upstream provider.  The NPTv6
   translator maintains a static, one-to-one mapping between each
   inside address and its corresponding PA address.  That mapping
   persists across flows and over time.

   If the site disconnects from one upstream provider and connects to
   another, it may lose its PA assignment.  However, the site will not
   need to renumber its resources.  It will only need to reconfigure
   the mapping rules on its local NPTv6 translator.

   Section 2.4 of [RFC6296] describes an NPTv6 architecture for sites
   that are homed to multiple upstream providers.  While that
   architecture fulfills many of the goals identified by [RFC3582], it
   does not achieve transport-layer survivability.
   Transport-layer survivability is not achieved because, in this
   architecture, a PA address is usable only when the multihomed site
   is directly connected to the allocating provider.

   This memo describes an alternative architecture for multihomed
   sites that require transport-layer survivability.  It updates
   Section 2.4 of [RFC6296].  In this architecture, PA addresses
   remain usable, even when the multihomed site loses its direct
   connection to the allocating provider.

   The architecture described in this document can be deployed in
   sites that are served by two or more upstream providers.  For the
   purpose of example, this document demonstrates how the architecture
   can be deployed in a site that is served by two upstream providers.

1.1.  Terminology

   The following terms are used in this document:

      inbound packet - A packet that is destined for the multihomed
      site

      outbound packet - A packet that originates at the multihomed
      site and is destined for a point outside of the multihomed site

      NPTv6 inside interface - An interface that connects an NPTv6
      translator to the site

      NPTv6 outside interface - An interface that connects an NPTv6
      translator to an upstream provider

2.  NPTv6 Deployment

   This section demonstrates how NPTv6 can be deployed in order to
   achieve the goals of [RFC3582].

2.1.  Topology

           Upstream                          Upstream
          Provider #1                       Provider #2
          /        \                        /        \
         /          \                      /          \
        /        +------+              +------+        \
    +------+     |Backup|              |Backup|     +------+
    |  PE  |     |  PE  |.            .|  PE  |     |  PE  |
    |  #1  |     |  #1  | .          . |  #2  |     |  #2  |
    +------+     +------+  .        .  +------+     +------+
       |                    .      .                   |
       |. . . . . . . . . . .. . . . . . . . . . . . . |
       |                                               |
    +------+                                        +------+
    |NPTv6 |                                        |NPTv6 |
    |  #1  |                                        |  #2  |
    +------+                                        +------+
       |                                               |
       |                                               |
    -----------------------------------------------------
                      Internal Network

               Figure 1: NPTv6 Multihomed Topology

   In Figure 1, a site attaches all of its assets, including two NPTv6
   translators, to an Internal Network.  NPTv6 #1 is connected to
   Provider Edge (PE) Router #1, which is maintained by Upstream
   Provider #1.  Likewise, NPTv6 #2 is connected to PE Router #2,
   which is maintained by Upstream Provider #2.

   Each upstream provider also maintains a Backup PE Router.  A
   forwarding tunnel connects the loopback interface of Backup PE
   Router #1 to the outside interface of NPTv6 #2.  Another forwarding
   tunnel connects Backup PE Router #2 to NPTv6 #1.  Network operators
   can select from many encapsulation techniques (e.g., GRE) to
   realize the forwarding tunnels.  Tunnels are depicted as dotted
   lines in Figure 1.

   In the figure, NPTv6 #1 and NPTv6 #2 are depicted as separate
   boxes.  While vendors may produce a separate box to support the
   NPTv6 function, they may also integrate the NPTv6 function into a
   router.

   During periods of normal operation, the Backup PE routers are very
   lightly loaded.  Therefore, a single Backup PE router may back up
   multiple PE routers.  Furthermore, the Backup PE router may be used
   for other purposes (e.g., primary PE router for another customer).

2.2.  Addressing

2.2.1.  Upstream Provider Addressing

   A Regional Internet Registry (RIR) allocates Provider Address Block
   (PAB) #1 to Upstream Provider #1.  From PAB #1, Upstream Provider
   #1 allocates two sub-blocks, using them as follows.

   Upstream Provider #1 uses the first sub-block to number its
   internal interfaces.  It also uses that sub-block to number the
   interfaces that connect it to its customers.

   Upstream Provider #1 uses the second sub-block for customer address
   assignments.
   We refer to a particular assignment from this sub-block as a
   Customer Network Block (CNB).  In this case, Upstream Provider #1
   assigns CNB #1 to the multihomed site.  For the purpose of example,
   assume that CNB #1 is a /59.

   In a similar fashion, a Regional Internet Registry (RIR) allocates
   PAB #2 to Upstream Provider #2.  Upstream Provider #2, in turn,
   assigns CNB #2 to the multihomed site.  For the purpose of example,
   assume that CNB #2 is a /60.

   The multihomed site does not number any of its interfaces from CNB
   #1 or CNB #2.  Section 2.3 describes how the multihomed site uses
   CNB #1 and CNB #2.

2.2.2.  Site Addressing

   The site obtains a Site Address Block (SAB), either from Unique
   Local Address (ULA) [RFC4193] space, or by some other means.  For
   the purpose of example, assume that the site obtains a /48 from ULA
   space.

   The site then draws a /59 prefix and a /60 prefix from the SAB.  In
   this document, we call those prefixes SAB #1 and SAB #2.  Note that
   SAB #1 and CNB #1 are both /59 prefixes.  Likewise, SAB #2 and CNB
   #2 are both /60 prefixes.  In Section 2.3, the site will map SAB #1
   to CNB #1 and SAB #2 to CNB #2.  Mapped prefixes must be of
   identical size.

   The site then numbers its resources from SAB #1 and SAB #2.  SAB #1
   and SAB #2 are the only usable portions of the SAB, because they
   are the only prefixes that will be mapped to CNB addresses.

   During periods of normal operation, hosts that are numbered from
   SAB #1 receive inbound traffic from Upstream Provider #1.  Hosts
   that are numbered from SAB #2 receive inbound traffic from Upstream
   Provider #2.  Selected hosts receive inbound traffic from both
   upstream providers, balancing the load between them.  These hosts
   have multiple addresses, with at least one address being drawn from
   SAB #1 and at least one address being drawn from SAB #2.

   Section 2.7 explains how load balancing is achieved.

2.3.  Address Translation

   Both NPTv6 translators are configured with the following rules:

      For outbound packets, if the first 59 bits of the source
      address identify SAB #1, overwrite those 59 bits with the 59
      bits that identify CNB #1

      For outbound packets, if the first 60 bits of the source
      address identify SAB #2, overwrite those 60 bits with the 60
      bits that identify CNB #2

      For outbound packets, if none of the conditions above are met,
      silently discard the packet

      For inbound packets, if the first 59 bits of the destination
      address identify CNB #1, overwrite those 59 bits with the 59
      bits that identify SAB #1

      For inbound packets, if the first 60 bits of the destination
      address identify CNB #2, overwrite those 60 bits with the 60
      bits that identify SAB #2

      For inbound packets, if none of the conditions above are met,
      silently discard the packet

   Due to the nature of the rules described above, NPTv6 translation
   is stateless.  Therefore, traffic flows do not need to be symmetric
   across NPTv6 translators.  Furthermore, a traffic flow can shift
   from one NPTv6 translator to another without causing transport-
   layer session failure.

2.4.  Domain Name System (DNS)

   In order to make all site resources reachable by domain name
   [RFC1034], the site publishes AAAA records [RFC3596] associating
   each resource with all of its CNB addresses.  While this DNS
   architecture is sufficient, it is suboptimal.  Traffic that both
   originates and terminates within the site traverses NPTv6
   translators needlessly.  Several optimizations are available.
   These optimizations are well understood and have been applied to
   [RFC1918] networks for many years.

2.5.  Routing

   Upstream Provider #1 uses an Interior Gateway Protocol to flood
   topology information throughout its domain.  It also uses BGP
   [RFC4271] to distribute customer and peer reachability information.
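   The prefix-overwrite rules of Section 2.3 above can be sketched in
   Python.  The prefix values below are invented placeholders (this
   document does not assign concrete values to SAB #1/#2 or CNB
   #1/#2), and the sketch omits the checksum-neutrality adjustment
   that [RFC6296] additionally requires of a real NPTv6 translator:

   ```python
   import ipaddress

   # Hypothetical prefix values; the text only says that SAB #1/CNB #1
   # are /59s and SAB #2/CNB #2 are /60s.  Mapped prefixes must be of
   # identical size.
   OUTBOUND = [  # (SAB prefix, CNB prefix) pairs, applied to sources
       (ipaddress.ip_network("fd00:1:2:40::/59"),
        ipaddress.ip_network("2001:db8:a:40::/59")),
       (ipaddress.ip_network("fd00:1:2:60::/60"),
        ipaddress.ip_network("2001:db8:b:60::/60")),
   ]
   INBOUND = [(cnb, sab) for sab, cnb in OUTBOUND]  # inverse mapping

   def translate(addr_str, rules):
       """Overwrite the first N bits of the address per the first
       matching rule; return None ("silently discard") otherwise."""
       addr = ipaddress.ip_address(addr_str)
       for match, repl in rules:
           if addr in match:
               host_bits = 128 - match.prefixlen
               host_part = int(addr) & ((1 << host_bits) - 1)
               return ipaddress.ip_address(
                   int(repl.network_address) | host_part)
       return None
   ```

   Because the mapping is a pure function of the address, two
   translators configured with the same rules produce identical
   results, which is what allows a traffic flow to shift between them
   without a transport-layer session failure.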
   PE #1 acquires a route to CNB #1 with NEXT-HOP equal to the outside
   interface of NPTv6 #1.  PE #1 can either learn this route from a
   single-hop eBGP session with NPTv6 #1, or acquire it through static
   configuration.  In either case, PE #1 overwrites the NEXT-HOP of
   this route with its own loopback address and distributes the route
   throughout Upstream Provider #1 using iBGP.  The LOCAL PREF for
   this route is set high, so that the route will be preferred to
   alternative routes to CNB #1.  Upstream Provider #1 does not
   distribute this route to CNB #1 outside of its own borders because
   it is part of the larger aggregate PAB #1, which is itself
   advertised.

   NPTv6 #1 acquires a default route with NEXT-HOP equal to the
   directly connected interface on PE #1.  NPTv6 #1 can either learn
   this route from a single-hop eBGP session with PE #1, or acquire it
   through static configuration.

   Similarly, Backup PE #1 acquires a route to CNB #1 with NEXT-HOP
   equal to the outside interface of NPTv6 #2.  Backup PE #1 can
   either learn this route from a multi-hop eBGP session with NPTv6
   #2, or acquire it through static configuration.  In either case,
   Backup PE #1 overwrites the NEXT-HOP of this route with its own
   loopback address and distributes the route throughout Upstream
   Provider #1 using iBGP.  Distribution procedures are defined in
   [I-D.ietf-idr-best-external].  The LOCAL PREF for this route is set
   low, so that the route will not be preferred to alternative routes
   to CNB #1.  Upstream Provider #1 does not distribute this route to
   CNB #1 outside of its own borders.

   Even if Backup PE #1 maintains an eBGP session with NPTv6 #2, it
   does not advertise the default route through that eBGP session.
   Therefore, during failures, Backup PE #1 does not attract outbound
   traffic to itself.

   PE #2 acquires a route to CNB #1 with NEXT-HOP equal to the outside
   interface of NPTv6 #2.
   PE #2 can either learn this route from a single-hop eBGP session
   with NPTv6 #2, or acquire it through static configuration.  PE #2
   uses this route to enforce source address filtering [RFC2827] on
   the interface through which it is connected to NPTv6 #2.  PE #2
   does not advertise this route to CNB #1 to any of its routing
   peers.

   Finally, the Autonomous System Border Routers (ASBR) contained by
   Upstream Provider #1 maintain eBGP sessions with their peers.  The
   ASBRs advertise only PAB #1 through those eBGP sessions.  Upstream
   Provider #1 does not advertise any of the following to its eBGP
   peers:

      any prefix that is contained by PAB #1 (i.e., more specific)

      PAB #2 or any part thereof

      the SAB or any part thereof

   Upstream Provider #2 is configured in a manner analogous to that
   described above.

   Because both NPTv6 gateways are configured with identical
   translation rules, and because both PE routers maintain routes to
   CNB #1 and CNB #2, outbound packets can traverse either NPTv6
   gateway.  Outbound routing is controlled by the site and,
   therefore, is beyond the scope of this document.

2.6.  Failure Detection and Recovery

   When PE #1 loses its route to CNB #1, it withdraws its iBGP
   advertisement for that prefix from Upstream Provider #1.  The route
   advertised by Backup PE #1 remains, and Backup PE #1 attracts
   traffic bound for CNB #1 to itself.  Backup PE #1 forwards that
   traffic through the tunnel to NPTv6 #2.  NPTv6 #2 performs
   translations and delivers the traffic to the Internal Network.

   Likewise, when NPTv6 #1 loses its default route, it makes itself
   unavailable as a gateway for other hosts on the Internal Network.
   NPTv6 #2 attracts all outbound traffic to itself and forwards that
   traffic through Upstream Provider #2.
   Because PE #2 maintains routes to both CNB #1 and CNB #2, it does
   not discard any traffic from CNB #1 or CNB #2 as a result of source
   address filtering.  The mechanism by which NPTv6 #1 makes itself
   unavailable as a gateway is beyond the scope of this document.

   If PE #1 maintains a single-hop eBGP session with NPTv6 #1, the
   failure of that eBGP session will cause both routes mentioned above
   to be lost.  Otherwise, another failure detection mechanism, such
   as BFD [RFC5881], is required.

   Regardless of the failure detection mechanism, inbound traffic
   traverses the tunnel only during failure periods, and outbound
   traffic never traverses the tunnel.  Furthermore, restoration is
   localized.  As soon as the advertisement for CNB #1 is withdrawn
   throughout Upstream Provider #1, restoration is complete.

   Transport-layer connections survive failure/recovery events because
   both NPTv6 translators implement identical translation rules.  When
   a traffic flow shifts from one translator to another, neither the
   source address nor the destination address changes.

2.7.  Load Balancing

   Outbound load balancing is controlled by the site and is beyond the
   scope of this document.

   For inbound traffic, addressing determines load balancing.  If a
   host is numbered from SAB #1, its address is mapped into CNB #1,
   which is announced only by Upstream Provider #1 (as part of PAB
   #1).  Therefore, during periods of normal operation, all traffic
   bound for that host traverses Upstream Provider #1 and NPTv6 #1.
   Likewise, if a host is numbered from SAB #2, its address is mapped
   into CNB #2, which is announced only by Upstream Provider #2 (as
   part of PAB #2).  Therefore, during periods of normal operation,
   all traffic bound for that host traverses Upstream Provider #2 and
   NPTv6 #2.

   Selected hosts receive inbound traffic from both upstream
   providers, balancing load between them.
   These hosts have multiple addresses, with at least one address
   being drawn from SAB #1 and at least one address being drawn from
   SAB #2.  The number of addresses drawn from each range determines
   how connections originating outside of the multihomed site
   distribute inbound load.

   Recall that all CNB addresses associated with a host are published
   in DNS.  When a remote host initiates a TCP connection, it selects
   from among these addresses in a relatively random manner.  If it
   selects an address from CNB #1, inbound packets belonging to that
   connection will traverse Upstream Provider #1 and NPTv6 #1.  If it
   selects an address from CNB #2, inbound packets belonging to that
   connection will traverse Upstream Provider #2 and NPTv6 #2.

   When the multiply addressed host initiates a connection, it
   associates one of its own addresses with the connection.  If the
   address that it chooses is from SAB #1, that address will be mapped
   to a CNB #1 address and return traffic will traverse Upstream
   Provider #1 and NPTv6 #1.  Alternatively, if the host selects an
   address from SAB #2, that address will be mapped to a CNB #2
   address and return traffic will traverse Upstream Provider #2 and
   NPTv6 #2.

3.  Discussion

   When compared to the multihoming architecture described in Section
   2.4 of [RFC6296], the proposed architecture achieves transport-
   layer survivability at the cost of backup PE hardware and
   additional configuration.  The cost of backup PE hardware is
   minimal, because backup PE routers are very lightly loaded during
   periods of normal operation.
   However, in the example provided above, Upstream Provider #1 must
   configure the following additional items:

   o  an interface to the multihomed site on Backup PE #1

   o  a forwarding tunnel connecting that interface to NPTv6 #2

   o  either a multi-hop eBGP session between Backup PE #1 and NPTv6
      #2, or a static route to CNB #1 on Backup PE #1

   Furthermore, if PE #1 does not maintain an eBGP session with NPTv6
   #1, Upstream Provider #1 must configure a static route to CNB #2
   (as well as CNB #1) on PE #1.  However, if PE #1 does maintain an
   eBGP session with NPTv6 #1, Upstream Provider #1 must configure
   policy on that session causing it to accept, but not readvertise, a
   path to CNB #2.

4.  IANA Considerations

   This document requires no IANA actions.

5.  Security Considerations

   As with any architecture that modifies source and destination
   addresses, the operation of access control lists, firewalls and
   intrusion detection systems may be impacted.  Also, many users may
   confuse NPTv6 translation with a NAT.  Two limitations of NAT are
   that a) it does not support incoming connections without special
   configuration and b) it requires symmetric routing across the NAT
   device.  Many users understand these limitations to be security
   features.  Because NPTv6 has neither of these limitations, it also
   offers neither of these features.

6.  Acknowledgments

   Thanks to Holger Zuleger, John Scudder and Yakov Rekhter for their
   helpful comments, encouragement and support.  Special thanks to
   Johann Jonsson, James Piper, Ravinder Wali, Ashte Collins, Inga
   Rollins and an anonymous donor, without whom this memo would not
   have been written.

7.  References

7.1.  Normative References

   [RFC1034]  Mockapetris, P., "Domain names - concepts and
              facilities", STD 13, RFC 1034, November 1987.

   [RFC1918]  Rekhter, Y., Moskowitz, R., Karrenberg, D., Groot, G.,
              and E.
              Lear, "Address Allocation for Private Internets", BCP 5,
              RFC 1918, February 1996.

   [RFC2827]  Ferguson, P. and D. Senie, "Network Ingress Filtering:
              Defeating Denial of Service Attacks which employ IP
              Source Address Spoofing", BCP 38, RFC 2827, May 2000.

   [RFC3582]  Abley, J., Black, B., and V. Gill, "Goals for IPv6 Site-
              Multihoming Architectures", RFC 3582, August 2003.

   [RFC3596]  Thomson, S., Huitema, C., Ksinant, V., and M. Souissi,
              "DNS Extensions to Support IP Version 6", RFC 3596,
              October 2003.

   [RFC4193]  Hinden, R. and B. Haberman, "Unique Local IPv6 Unicast
              Addresses", RFC 4193, October 2005.

   [RFC4271]  Rekhter, Y., Li, T., and S. Hares, "A Border Gateway
              Protocol 4 (BGP-4)", RFC 4271, January 2006.

   [RFC5881]  Katz, D. and D. Ward, "Bidirectional Forwarding
              Detection (BFD) for IPv4 and IPv6 (Single Hop)",
              RFC 5881, June 2010.

   [RFC6296]  Wasserman, M. and F. Baker, "IPv6-to-IPv6 Network Prefix
              Translation", RFC 6296, June 2011.

7.2.  Informative References

   [I-D.ietf-idr-best-external]
              Marques, P., Fernando, R., Chen, E., Mohapatra, P., and
              H. Gredler, "Advertisement of the best external route in
              BGP", draft-ietf-idr-best-external-05 (work in
              progress), January 2012.

Authors' Addresses

   Ron Bonica
   Juniper Networks
   Sterling, Virginia  20164
   USA

   Email: rbonica@juniper.net


   Fred Baker
   Cisco Systems
   Santa Barbara, California  93117
   USA

   Email: fred@cisco.com


   Margaret Wasserman
   Painless Security
   356 Abbott Street
   North Andover, Massachusetts  01845
   USA

   Phone: +1 781 405 7464
   Email: mrw@painless-security.com
   URI:   http://www.painless-security.com


   Gregory J. Miller
   Verizon
   Ashburn, Virginia  20147
   USA

   Email: gregory.j.miller@verizon.com


   Warren Kumari
   Google, Inc.
   Mountain View, California  94043

   Email: warren@kumari.net