v6ops                                                          D. Lopez
Internet-Draft                                           Telefonica I+D
Intended status: Informational                                  Z. Chen
Expires: January 15, 2014                                  China Telecom
                                                                 T. Tsou
                                               Huawei Technologies (USA)
                                                                 C. Zhou
                                                     Huawei Technologies
                                                                A. Servin
                                                                   LACNIC
                                                            July 14, 2013

              IPv6 Operational Guidelines for Datacenters
                       draft-lopez-v6ops-dc-ipv6-05

Abstract

   This document is intended to provide operational guidelines for
   datacenter operators planning to deploy IPv6 in their
   infrastructures.  It aims to offer a reference framework for
   evaluating different products and architectures, and therefore it
   is also addressed to manufacturers and solution providers, so they
   can use it to gauge their solutions.  We believe this will
   translate into a smoother and faster IPv6 transition for these
   infrastructures.

   The document focuses on the DC infrastructure itself, its
   operation, and the aspects related to DC interconnection through
   IPv6.  It does not consider the particular mechanisms for making
   Internet services provided by applications hosted in the DC
   available through IPv6, beyond the specific aspects related to
   their deployment on the Data Center (DC) infrastructure.

   Apart from facilitating the transition to IPv6, the mechanisms
   outlined here are intended to make this transition as transparent
   as possible (if not completely transparent) to applications and
   services running on the DC infrastructure, as well as to take
   advantage of IPv6 features to simplify DC operations, internally
   and across the Internet.

Status of this Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF).  Note that other groups may also distribute
   working documents as Internet-Drafts.  The list of current
   Internet-Drafts is at http://datatracker.ietf.org/drafts/current/.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other
   documents at any time.  It is inappropriate to use Internet-Drafts
   as reference material or to cite them other than as "work in
   progress."

   This Internet-Draft will expire on January 15, 2014.

Copyright Notice

   Copyright (c) 2013 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document.  Code Components extracted from this
   document must include Simplified BSD License text as described in
   Section 4.e of the Trust Legal Provisions and are provided without
   warranty as described in the Simplified BSD License.

Table of Contents

   1.  Introduction
   2.  Architecture and Transition Stages
     2.1.  General Architecture
     2.2.  Experimental Stage. Native IPv4 Infrastructure
       2.2.1.  Off-shore v6 Access
     2.3.  Dual Stack Stage. Internal Adaptation
       2.3.1.  Dual-stack at the Aggregation Layer
       2.3.2.  Dual-stack Extended OS/Hypervisor
     2.4.  IPv6-Only Stage. Pervasive IPv6 Infrastructure
   3.  Other Operational Considerations
     3.1.  Addressing
     3.2.  Management Systems and Applications
     3.3.  Monitoring and Logging
     3.4.  Costs
   4.  Security Considerations
     4.1.  Neighbor Discovery Protocol attacks
     4.2.  Addressing
     4.3.  Edge filtering
     4.4.  Final Security Remarks
   5.  IANA Considerations
   6.  Acknowledgements
   7.  Informative References
   Authors' Addresses

1.  Introduction

   The need for considering the aspects related to IPv4-to-IPv6
   transition for all devices and services connected to the Internet
   has been widely discussed elsewhere, and it is not our intention
   to make an additional call on it.  Let us just note that many of
   those services are already or will soon be located in Data Centers
   (DCs), which makes the issues associated with DC infrastructure
   transition a key aspect both for these infrastructures themselves
   and for providing a simpler and clearer path to service
   transition.

   All issues discussed here are related to DC infrastructure
   transition, and are intended to be orthogonal to the particular
   mechanisms for making the services hosted in the DC available
   through IPv6, beyond the specific aspects related to their
   deployment on the infrastructure.  General mechanisms related to
   service transition have been discussed in depth elsewhere (see,
   for example, [I-D.ietf-v6ops-icp-guidance] and
   [I-D.ietf-v6ops-enterprise-incremental-ipv6]) and are considered
   independent of the goal of this discussion.  The applicability of
   these general mechanisms for service transition will, in many
   cases, depend on the characteristics of the supporting DC
   infrastructure.  However, this document intends to keep both
   problems (service vs. infrastructure transition) separate.

   Furthermore, the combination of the regularity and controlled
   management of a DC interconnection fabric with IPv6 universal
   end-to-end addressing should translate into simpler and faster VM
   migrations, either intra- or inter-DC, and even inter-provider.

2.  Architecture and Transition Stages

   This document presents a transition framework structured along
   transition stages and operational guidance associated with the
   degree of penetration of IPv6 into the DC communication fabric.
   It is worth noting that we use these stages as a classification
   mechanism: they are not to be understood as a succession of steps
   from a v4-only infrastructure to a full-fledged v6 one, but as a
   framework that operators, users, and even manufacturers can use to
   assess their plans and products.

   There is no (explicit or implicit) requirement to start at the
   stage described first, nor to follow the stages in successive
   order.  According to their needs and the available solutions, DC
   operators can choose to start or remain at a certain stage, and
   freely move from one to another as they see fit, without
   contravening this document.  In this respect, the classification
   is intended to support planning in aspects such as the adaptation
   of the different transition stages to the evolution of traffic
   patterns, or risk assessment related to deploying new components
   and incorporating change control, integration, and testing in
   highly complex multi-vendor infrastructures.

   Three main transition stages can be considered when analyzing IPv6
   deployment in the DC infrastructure, all compatible with the
   availability of services running in the DC through IPv6:

   o  Experimental.  The DC keeps a native IPv4 infrastructure, with
      gateway routers (or even application gateways when services
      require so) performing the adaptation of requests arriving from
      the IPv6 Internet.

   o  Dual stack.  Native IPv6 and IPv4 are present in the
      infrastructure, up to whatever layer in the interconnection
      scheme applies L3 packet forwarding.

   o  IPv6-Only.  The DC has a fully pervasive IPv6 infrastructure,
      including full IPv6 hypervisors, which perform the appropriate
      tunneling or NAT if required by internal applications running
      IPv4.

2.1.  General Architecture

   The diagram in Figure 1 depicts a generalized interconnection
   schema in a DC.

          |                        |
    +-----+-----+            +-----+-----+
    |  Gateway  |            |  Gateway  |   Internet / Remote Access
    +-----+-----+            +-----+-----+            Modules
          |                        |
          +---+-----------+--------+
              |           |
          +---+---+   +---+---+
          | Core0 |   | CoreN |              Core
          +---+---+   +---+---+
           /     \     /    /
          /       \---+----/
         /     /--/    \
    +--------+        +--------+
   +/-------+|       +/-------+|
   | Aggr01 |+-------| AggrN1 |+             Aggregation
   +---+----+/       +---+----+/
      /    \            /    \
     /      \          /      \
  +-----+  +-----+  +-----+  +-----+
  | T11 |..| T1x |  | T21 |..| T2y |         Access
  +-----+  +-----+  +-----+  +-----+
  | HyV |  | HyV |  | HyV |  | HyV |         Physical Servers
  +:::::+  +:::::+  +:::::+  +:::::+
  | VMs |  | VMs |  | VMs |  | VMs |         Virtual Machines
  +-----+  +-----+  +-----+  +-----+
   . . .    . . .    . . .    . . .
  +-----+  +-----+  +-----+  +-----+
  | HyV |  | HyV |  | HyV |  | HyV |
  +:::::+  +:::::+  +:::::+  +:::::+
  | VMs |  | VMs |  | VMs |  | VMs |
  +-----+  +-----+  +-----+  +-----+

                Figure 1: DC Interconnection Schema

   o  Hypervisors provide connection services (among others) to
      virtual machines running on physical servers.

   o  Access elements provide connectivity directly to/from physical
      servers.  The access elements are typically placed either
      top-of-rack (ToR) or end-of-row (EoR).

   o  Aggregation elements group several (many) physical racks to
      achieve local integration and provide as much structure as
      possible to data paths.

   o  Core elements connect all aggregation elements, acting as the
      DC backbone.

   o  One or several gateways connect the DC to the Internet, branch
      offices, partners, third parties, and/or other DCs.  The
      interconnectivity to other DCs may be in the form of VPNs, WAN
      links, metro links, or any other form of interconnection.

   In many actual deployments, depending on DC size and design
   decisions, some of these elements may be combined (core and
   gateway functions provided by the same routers, or hypervisors
   acting as access elements) or virtualized to some extent, but this
   layered schema is the one that best accommodates the different
   options to use L2 or L3 at any of the DC interconnection layers,
   and will help us in the discussion throughout the document.

2.2.  Experimental Stage. Native IPv4 Infrastructure

   This transition stage corresponds to the first step that many
   datacenters may take (or have taken) to make their external
   services initially accessible from the IPv6 Internet and/or to
   evaluate the possibilities around it.  It corresponds to IPv6
   traffic patterns originated entirely outside the DC or its
   tenants, and representing a small percentage of the total external
   requests.  At this stage, the DC network scheme and addressing
   require few changes, if any.

   It is important to remark that in no case can this be considered a
   permanent stage in the transition, or even a long-term solution
   for incorporating IPv6 into the DC infrastructure.  This stage is
   only recommended for experimentation or early evaluation purposes.

   The translation of IPv6 requests into the internal infrastructure
   addressing format occurs at the outermost level of the DC Internet
   connection.  This is typically achieved at the DC gateway routers,
   which support the appropriate address translation mechanisms for
   those services that have to be reachable through native IPv6
   requests.  The policies for applying this adaptation can range
   from performing it only for a limited set of specified services to
   providing a general translation service for all public services.
   More granular mechanisms, based on address ranges or more
   sophisticated dynamic policies, are also possible, as they are
   applied by a limited set of control elements.  These provide an
   additional level of control over the usage of IPv6 routable
   addresses in the DC environment, which can be especially
   significant in the experimentation or early deployment phases this
   stage is intended for.

   Even at this stage, some implicit advantages of IPv6 come into
   play, although they can only be applied at the ingress elements:

   o  Flow labels can be applied to enhance load balancing, as
      described in [I-D.ietf-6man-flow-ecmp].  If the incoming IPv6
      requests are adequately labeled, the gateway systems can use
      the flow labels as a hint for applying load-balancing
      mechanisms when translating the requests towards the IPv4
      internal network (see the sketch after this list).

   o  During VM migration (intra- or even inter-DC), Mobile IP
      mechanisms can be applied to preserve service availability
      during the transient state.
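
   As a rough illustration of the first point, the following sketch
   (in Python, purely for readability) shows how a gateway or load
   balancer might derive an equal-cost next hop from the IPv6
   {source address, destination address, flow label} 3-tuple, in the
   spirit of [I-D.ietf-6man-flow-ecmp].  The hash function, the
   next-hop list, and the addresses are illustrative assumptions, not
   a description of any particular product.

      # Sketch: pick an ECMP next hop from the IPv6 3-tuple
      # {source address, destination address, flow label}.
      import hashlib
      import ipaddress

      def ecmp_next_hop(src, dst, flow_label, next_hops):
          # Equal 3-tuples always hash to the same next hop, so all
          # packets of a flow follow the same path.
          key = (ipaddress.IPv6Address(src).packed +
                 ipaddress.IPv6Address(dst).packed +
                 flow_label.to_bytes(3, "big"))  # flow label: 20 bits
          digest = hashlib.sha256(key).digest()
          return next_hops[int.from_bytes(digest[:4], "big")
                           % len(next_hops)]

      hops = ["192.0.2.1", "192.0.2.2"]   # illustrative IPv4 next hops
      print(ecmp_next_hop("2001:db8:1::10", "2001:db8:2::80",
                          0x12345, hops))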

2.2.1.  Off-shore v6 Access

   This model can also be applied in an "off-shore" mode by the
   service provider connecting the DC infrastructure to the Internet,
   as described in [I-D.sunq-v6ops-contents-transition].

   When this off-shore mode is applied, the original source address
   will be hidden from the DC infrastructure, and therefore
   identification techniques based on it, such as geolocation or
   reputation evaluation, will be hampered.  Unless there is a
   specific trust link between the DC operator and the ISP, and the
   DC operator is able to access equivalent identification interfaces
   provided by the ISP as an additional service, the off-shore
   experimental stage cannot be considered applicable when source
   address identification is required.

2.3.  Dual Stack Stage. Internal Adaptation

   This stage requires dual-stack elements in some internal parts of
   the DC infrastructure.  It brings some degree of partition into
   the infrastructure, either horizontal (when data paths or
   management interfaces are migrated, or left in IPv4 while the rest
   migrates), vertical (per tenant or service group), or both.

   Although it may seem an artificial case, situations requiring this
   stage can arise from differing requirements in the user base, the
   need for technology changes at different points of the
   infrastructure, or even the goal of experimenting with new
   solutions in a controlled, real-operations environment, at the
   price of the additional complexity of dealing with a double
   protocol stack, as noted in [I-D.ietf-v6ops-icp-guidance] and
   elsewhere.

   This transition stage can accommodate different traffic patterns,
   both internal and external, though it fits best in scenarios with
   a clear differentiation of traffic types (external vs. internal,
   data vs. management...) and/or a more or less even distribution of
   external requests.  A common scenario would combine native
   dual-stack servers for certain services with single-stack ones for
   others (web servers in dual stack and database servers only
   supporting v4, for example).

   At this stage, the advantages outlined above for load balancing
   based on flow labels and for Mobile IP mechanisms are applicable
   to any L3-based mechanism (intra- as well as inter-DC).  They
   translate into enhanced VM mobility, more effective load
   balancing, and higher service availability.  Furthermore, the
   simpler integration that IPv6 provides between the flat L2 space
   and the structured L3 one can be applied to achieve simpler
   deployments, as well as to alleviate encapsulation and
   fragmentation issues when traversing between L2 and L3 spaces.
   With appropriate prefix management, automatic address assignment,
   discovery, and renumbering can be applied not only to public
   service interfaces, but most notably to data and management paths.

   Other potential advantages include the application of multicast
   scopes to limit broadcast floods, and the usage of specific
   security headers to enhance tenant differentiation.

   On the other hand, this stage requires much more careful planning
   of addressing schemas (please refer to [RFC5375]) and access
   control, according to security levels.  While the experimental
   stage implies relatively few globally routable addresses, this one
   brings the advantages and risks of using different kinds of
   addresses at each point of the IPv6-aware infrastructure.

2.3.1.  Dual-stack at the Aggregation Layer

             +---------------------+
             | Internet / External |
             +---------+-----------+
                       |
                  +----+-----+
                  | Gateway  |
                  +----+-----+
                       .
                       .        Core Level
                       .
                    +--+--+
                    | FW  |
                    +--+--+
                       |         Aggregation Level
                    +--+--+
                    | LB  |
                    +--+--+
                    _/   \_
                   /       \
               +--+--+   +--+--+
               | Web | ...| Web |
               +--+--+   +--+--+
                  | \__ _ _/  |
                  |   /   \   |
               +--+--+   +--+--+
               |Cache|   | DB  |
               +-----+   +-----+

           Figure 2: Data Center Application Scheme

   An initial approach corresponding to this transition stage relies
   on taking advantage of specific elements at the aggregation layer
   described in Figure 1, making them able to provide dual-stack
   gatewaying to the IPv4-based servers and data infrastructure.

   Typically, firewalls (FW) are deployed as the security edge of the
   whole service domain, providing access control towards other
   functional domains.  In addition, application optimization and
   security devices (e.g., load balancers, SSL VPN gateways, IPS) may
   be deployed at the aggregation level to alleviate the burden on
   the servers and to provide an additional layer of security, as
   shown in Figure 2.

   The load balancer (LB), or some other device at this level, can be
   upgraded to support IPv6 data transmission.  There are two main
   ways to achieve this at the edge of the DC: encapsulation and NAT.
   In the encapsulation case, the LB carries the IPv6 traffic over
   IPv4 using an encapsulation (IPv6-in-IPv4).  In the NAT case,
   several technologies already address the problem; for example, a
   DNS64 server and a NAT device can be combined for IPv4/IPv6
   translation when an IPv6 host needs to reach IPv4 servers.
   However, this may require the concatenation of multiple network
   devices, which means the NAT tables need to be synchronized across
   different devices.  As described below, a simplified IPv4/IPv6
   translation model can be applied instead, which can be implemented
   in the LB device.  The IPv4/IPv6 mapping information is generated
   automatically from information the LB already holds, and the host
   IP address is translated without port translation.
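
   The following minimal sketch (Python, for readability) illustrates
   the kind of stateless address mapping described here and detailed
   after Figure 3: the IPv4 virtual service address (VSIPv4) is
   embedded in the low-order 32 bits of an IPv6 address (VSIPv6)
   under a prefix allocated to the DC, in the spirit of [RFC6052].
   The prefix length, the prefix itself, and the addresses are
   illustrative assumptions; an actual LB would apply the equivalent
   logic in its forwarding path.

      # Sketch: stateless VSIPv4 <-> VSIPv6 mapping under a
      # DC-allocated /96 (illustrative, not the well-known prefix).
      import ipaddress

      PREFIX = ipaddress.IPv6Network("2001:db8:ffff::/96")

      def vsipv6_from_vsipv4(vsipv4):
          # Embed the IPv4 virtual service address in the low 32 bits.
          v4 = int(ipaddress.IPv4Address(vsipv4))
          return ipaddress.IPv6Address(int(PREFIX.network_address) + v4)

      def vsipv4_from_vsipv6(vsipv6):
          # Recover the embedded IPv4 virtual service address.
          v6 = int(ipaddress.IPv6Address(vsipv6))
          return ipaddress.IPv4Address(v6 & 0xFFFFFFFF)

      print(vsipv6_from_vsipv4("192.0.2.10"))
      # -> 2001:db8:ffff::c000:20a
      print(vsipv4_from_vsipv6("2001:db8:ffff::c000:20a"))
      # -> 192.0.2.10

   Because the mapping is purely algorithmic, no per-session state
   and no synchronization with external systems are needed for this
   part of the translation.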

                     +----------+------------------------------+
                     |Dual Stack| IPv4-only                    |
                     |          |                 +----------+ |
                     |          |            +----|Web Server| |
                     |   +------|------+    /     +----------+ |
+--------+ +-------+ |   |      |      |   /                   |
|Internet|-|Gateway|-|---+Load-Balancer+---                    |
|        | |       | |   |      |      |   \      +----------+ |
+--------+ +-------+ |   +------|------+    +-----|Web Server| |
                     |          |                 +----------+ |
                     +----------+------------------------------+

                 Figure 3: Dual Stack LB mechanism

   As shown in Figure 3, the LB can be considered to be divided into
   two parts: the dual-stack part facing the external border, and the
   IPv4-only part, which contains the traditional LB functions.  The
   IPv4 DC is allocated an IPv6 prefix, used for the VSIPv6 (Virtual
   Service IPv6 Address).  We suggest not using the well-known
   prefix, in order to avoid the IPv4 routes of the services in
   different DCs spreading into the IPv6 network.  The VSIPv4
   (Virtual Service IPv4 Address) is embedded in the VSIPv6 using the
   allocated IPv6 prefix.  In this way, the LB has a stateless IP
   address mapping between VSIPv6 and VSIPv4, and no synchronization
   is required between the LB and the DNS64 server.

   The dual-stack part of the LB has a private IPv4 address pool.
   When IPv6 packets arrive, the dual-stack part performs a
   one-to-one SIP (source IP address) mapping (as defined in
   [I-D.sunq-v6ops-contents-transition]) between a private IPv4
   address and the IPv6 SIP.  Because there would be too many UDP/TCP
   sessions between the DC and the Internet, the IPv6/IPv4 address
   binding tables are not session-based but SIP-based.  Thus, the
   dual-stack part of the LB builds stateful binding tables between
   host IPv6 addresses and private IPv4 addresses from the pool.
   When subsequent IPv6 packets from that host arrive from the
   Internet, the dual-stack part performs the address translation, so
   that the IPv6 packets are translated into IPv4 packets and handed
   to the IPv4-only part of the LB.

2.3.2.  Dual-stack Extended OS/Hypervisor

   Another option for deploying an infrastructure at the dual-stack
   stage brings dual stack much closer to the application servers, by
   requiring hypervisors, VMs, and applications in the v6-capable
   zone of the DC to be able to operate in dual stack.  This way,
   incoming connections are dealt with seamlessly, while for outgoing
   ones an OS-specific replacement for system calls like
   gethostbyname() and getaddrinfo() would accept a character string
   (an IPv4 literal, an IPv6 literal, or a domain name) and return a
   connected socket or an error, having executed a Happy Eyeballs
   algorithm ([RFC6555]).

   If these hypothetical system call replacements were smart enough,
   they would allow the transparent interoperation of DCs with
   different levels of v6 penetration, either horizontal (internal
   data paths are not migrated, for example) or vertical (per tenant
   or service group).  This approach requires, on the other hand, all
   the involved DC infrastructure to become dual stack, as well as
   some degree of explicit application adaptation.
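
   To make the intended behavior of such a connect-by-name helper
   concrete, the sketch below (Python, for readability) accepts an
   IPv4 literal, an IPv6 literal, or a domain name and returns a
   connected socket, preferring IPv6 and falling back to IPv4.  It is
   a deliberately simplified, sequential approximation: a real Happy
   Eyeballs implementation ([RFC6555]) races the address families in
   parallel, giving IPv6 a short head start.  The function name and
   timeout are illustrative.

      # Simplified connect-by-name sketch (sequential, not a full
      # Happy Eyeballs implementation).
      import socket

      def connect_by_name(host, port, timeout=2.0):
          last_error = None
          candidates = socket.getaddrinfo(host, port,
                                          proto=socket.IPPROTO_TCP)
          # Try IPv6 candidates first, then IPv4.
          candidates.sort(
              key=lambda ai: 0 if ai[0] == socket.AF_INET6 else 1)
          for family, socktype, proto, _name, sockaddr in candidates:
              sock = socket.socket(family, socktype, proto)
              sock.settimeout(timeout)
              try:
                  sock.connect(sockaddr)
                  return sock          # the caller closes the socket
              except OSError as exc:
                  last_error = exc
                  sock.close()
          raise last_error or OSError("no usable address for " + host)

      # Example usage (assumes the name is resolvable and reachable):
      # sock = connect_by_name("www.example.com", 80)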

2.4.  IPv6-Only Stage. Pervasive IPv6 Infrastructure

   We consider a DC infrastructure to be at this final stage when all
   network layer elements, including hypervisors, are IPv6-aware and
   use IPv6 by default.  Conversely to the experimental stage, access
   from the IPv4 Internet is achieved, when required, by protocol
   translation performed at the edge infrastructure elements, or even
   supplied by the service provider as an additional network service.

   Different drivers could motivate DC managers to transition to this
   stage.  In principle, the scarcity of IPv4 addresses may require
   reclaiming IPv4 resources from portions of the network
   infrastructure that no longer need them.  Furthermore, the
   unavailability of IPv4 addresses would make dual-stack
   environments impossible, and careful assessments would have to be
   performed to decide where to use the remaining IPv4 resources.

   Another important motivation to move DC operations from dual stack
   to IPv6-only is to save the costs and operational activities that
   managing two stacks entails in comparison with managing a single
   one.  Today, besides learning to manage two different stacks,
   network and system administrators have to duplicate tasks such as
   IP address management, firewall configuration, system security
   hardening, and monitoring, among others.  These activities are not
   just costly for DC management; they may also lead to configuration
   errors and security holes.

   This stage can also be of interest for new deployments aiming at a
   fresh start aligned with future widespread IPv6 usage, when a
   relevant amount of requests is expected to use IPv6, or to take
   advantage of any of the potential benefits that an IPv6-supporting
   infrastructure can provide.  Other, and probably more compelling,
   drivers for this stage may be either a lack of sufficient IPv4
   resources (whether private or globally unique) or the need to
   reclaim IPv4 resources from portions of the network that no longer
   need them.  In these circumstances, a careful evaluation of what
   still needs to speak IPv4 and what does not is required to ensure
   judicious use of the remaining IPv4 resources.

   The potential advantages mentioned for the previous stages (load
   balancing based on flow labels, mobility mechanisms for transient
   states in VM or data migration, controlled multicast, and better
   mapping of the flat L2 space onto L3 constructs) can be applied at
   any layer, and even tailored to individual services.  Obviously,
   the need for careful planning of the address space is even
   stronger here, though the centralized protocol translation
   services should reduce the risk of translation errors causing
   disruptions or security breaches.

   [V6DCS] proposes an approach to next-generation DC deployment,
   already demonstrated in practice, and argues for adopting this
   stage from the beginning, providing some rationale based on
   simplifying the transition process.  It relies on stateless NAT64
   ([RFC6052], [RFC6145]) to enable access from the IPv4 Internet.

3.  Other Operational Considerations

   In this section we review some operational considerations related
   to addressing and management in a V6 DC infrastructure.

3.1.  Addressing

   There are several considerations related to IPv6 addressing in a
   DC.
   Many of these considerations are already documented in a variety
   of IETF documents, and in general the recommendations and best
   practices mentioned in them apply to IPv6 DC environments.
   However, we would like to point out some topics that we consider
   particularly important.

   The first question that DC managers often have is which type of
   IPv6 address to use: Provider Aggregated (PA), Provider
   Independent (PI), or Unique Local IPv6 Addresses (ULAs) [RFC4193].
   Regarding the use of PA vs. PI, we concur with
   [I-D.ietf-v6ops-icp-guidance] and
   [I-D.ietf-v6ops-enterprise-incremental-ipv6] that, while PI
   provides independence from the ISP and decreases renumbering
   issues, it may bring other considerations such as an allocation
   fee, a request process, and allocation maintenance with the
   Regional Internet Registry.  In this respect, there is no specific
   recommendation to use either PI or PA, as the choice also depends
   on business and management factors rather than purely technical
   ones.

   ULAs should only be used in DC infrastructure that does not
   require access to the public Internet; such devices may include
   database servers, application servers, and the management
   interfaces of web servers and network devices, among others.  This
   practice may reduce renumbering issues when PA addressing is used,
   as only publicly facing devices would require an address change.
   Also, we would like to note that although ULAs may provide some
   security, the main motivation for their use should be address
   management.

   Another topic to discuss is the length of prefixes within the DC.
   In general, we recommend the use of /64 subnets for each VLAN or
   network segment used in the DC.  Although subnets with prefixes
   longer than 64 bits may work, the reader should understand that
   this breaks stateless address autoconfiguration, so at least
   manual address configuration must be employed.  For details please
   read [RFC5375].

   Address plans should be hierarchical and able to aggregate address
   space.  We recommend at least a /48 for each datacenter.  If the
   DC provides services that require sub-assignment of address space,
   we do not offer a single recommendation (e.g., requesting a /40
   prefix from an RIR or ISP and assigning /48 prefixes to
   customers), as this may depend on non-technical factors; instead,
   we refer the reader to [RFC6177].

   For point-to-point links, please refer to the recommendations in
   [RFC6164].
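
   As a minimal illustration of such an address plan, the sketch
   below (Python, for readability) carves an illustrative,
   documentation-prefix /48 into /64 segments and reserves one /64 to
   be sliced into /127s for inter-router links, following the
   recommendations above and in [RFC6164].  The prefix and segment
   names are assumptions for the example only; a real plan would add
   hierarchy per pod, room, or tenant.

      # Sketch: carving a per-DC /48 into /64 segments and /127
      # point-to-point links.
      import ipaddress

      DC_PREFIX = ipaddress.IPv6Network("2001:db8:aaaa::/48")

      segments = list(DC_PREFIX.subnets(new_prefix=64))  # 65536 /64s
      vlan_plan = {
          "mgmt":     segments[0],
          "web-tier": segments[1],
          "db-tier":  segments[2],
      }

      # Reserve the last /64 and slice it into /127 router links.
      p2p_links = segments[-1].subnets(new_prefix=127)

      for name, net in vlan_plan.items():
          print(name, net)
      print("first p2p link:", next(p2p_links))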

3.2.  Management Systems and Applications

   Datacenters may use IP address management (IPAM) software,
   provisioning systems, and a variety of other software for
   documentation and operation.  It is important that these systems
   are prepared, and possibly modified, to support IPv6 in their data
   models.  In general, if IPv6 support has not been added to these
   applications before, the changes may take some time, as they may
   involve not just adding more space to input fields but also
   modifying data models and migrating data.

3.3.  Monitoring and Logging

   Monitoring and logging are critical operations in any network
   environment, and they should be carried out at the same level for
   IPv6 and IPv4.  Monitoring and management operations in a V6 DC
   are by no means different from those in any other IPv6 network
   environment.  It is important to consider that the transport used
   to collect information from network devices is orthogonal to the
   information collected.  For example, it is possible to collect
   data from IPv6 MIBs using IPv4 transport; similarly, it is
   possible to collect IPv6 data generated by NetFlow v9/IPFIX agents
   over IPv4 transport.  The important issue to address is that the
   agents (i.e., the network devices) are able to collect data
   specific to IPv6.

   As a final note on monitoring, although IPv6 MIBs are supported by
   SNMP versions 1 and 2, we recommend using SNMP version 3 instead.

3.4.  Costs

   Moving from a single-stack datacenter infrastructure to any of the
   IPv6 stages described in this document is likely to incur capital
   expenditures, including but not confined to routers, load
   balancers, firewalls, and software upgrades.  However, the cost
   that most concerns us is operational.  Moving DC infrastructure
   operations from single stack to dual stack may bring a variety of
   extra costs, such as application development and testing,
   operational troubleshooting, and service deployment.  Conversely,
   this extra cost may be seen as a saving when moving from a
   dual-stack DC to an IPv6-only DC.

   Depending on the complexity of the DC network, provisioning, and
   other factors, we estimate that the extra costs (and later
   savings) may be between 15 and 20%.

4.  Security Considerations

   A thorough collection of operational security aspects for IPv6
   networks is made in [I-D.ietf-opsec-v6].  Most of them, with the
   probable exception of those specific to residential users, are
   applicable to the environment we consider in this document.

4.1.  Neighbor Discovery Protocol attacks

   The first important issue that a V6 DC manager should be aware of
   is attacks against the Neighbor Discovery Protocol [RFC6583].
   These attacks are similar to ARP attacks [RFC4732] in IPv4, but
   are exacerbated by the fact that the common size of an IPv6 subnet
   is a /64.  In principle, an attacker can fill the Neighbor Cache
   of the local router and starve its memory and processing resources
   by sending multiple ND packets requesting information about
   non-existing hosts.  The result is the inability of the router to
   respond to ND requests, to update its Neighbor Cache, and even to
   forward packets.  The attack does not need to be launched with
   malicious intent; it could simply be the result of bad stack
   implementation behavior.

   [RFC6583] mentions some options to mitigate the effects of attacks
   against NDP: for example, filtering unused space, minimizing
   subnet size where possible, tuning rate limits on the NDP queue,
   and relying on router vendor implementations to better handle
   resources and to prioritize NDP requests.

4.2.  Addressing

   Other important security considerations in a V6 DC are related to
   addressing.  Because of the large address space, it is commonly
   thought that IPv6 is not vulnerable to reconnaissance techniques
   such as scanning.  Although that may be true for brute-force
   scans, [I-D.ietf-opsec-ipv6-host-scanning] shows some techniques
   that may be employed to speed up and improve the results of
   discovering IPv6 addresses in a subnet.  The use of virtual
   machines and SLAAC aggravates this problem, since they tend to use
   automatically generated MAC addresses with well-known patterns.

   To mitigate address-scanning attacks, it is recommended to avoid
   using SLAAC; if it is used, stable privacy-enhanced addresses
   [I-D.ietf-6man-stable-privacy-addresses] should be the method of
   address generation.  Also, for manually assigned addresses, try to
   avoid low-byte IIDs (i.e., values from 0 to 255), IPv4-based
   addresses, and wordy addresses, especially for infrastructure
   without a fully qualified domain name.
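
   For illustration only, the sketch below (Python, for readability)
   shows the general idea behind such stable, privacy-enhanced
   addresses: the interface identifier is derived from a hash of the
   prefix, an interface name, and a host-specific secret key, so it
   is stable per prefix yet hard to predict from outside.  This is a
   simplified approximation of the scheme in
   [I-D.ietf-6man-stable-privacy-addresses], not a replacement for a
   host stack implementation; names and the key are illustrative.

      # Sketch: stable, opaque interface identifiers derived from
      # prefix + interface + secret (simplified).
      import hashlib
      import ipaddress

      def stable_iid_address(prefix, if_name, secret, dad_counter=0):
          net = ipaddress.IPv6Network(prefix)
          data = (net.network_address.packed[:8] +   # the /64 prefix
                  if_name.encode() +
                  dad_counter.to_bytes(1, "big") +
                  secret)
          iid = int.from_bytes(hashlib.sha256(data).digest()[:8],
                               "big")
          return ipaddress.IPv6Address(int(net.network_address) | iid)

      print(stable_iid_address("2001:db8:aaaa:1::/64", "eth0",
                               b"per-host secret key"))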

   Although manually assigned addresses are the preferred method for
   a V6 DC, SLAAC and DHCPv6 may also be used for specific reasons.
   However, we recommend paying special attention to rogue RA
   [RFC6104] and rogue DHCP [I-D.gont-opsec-dhcpv6-shield] attacks.
   In these kinds of attacks, the attacker deploys rogue routers
   sending RA messages, or rogue DHCP servers, to inject bogus
   information and possibly perform a man-in-the-middle attack.  To
   mitigate this problem, it is necessary to apply techniques in the
   access switches, such as RA-Guard [RFC6105] at least.

   Another addressing-related topic we would like to mention is the
   use of ULAs.  As previously noted, although ULAs may be used to
   hide hosts from the outside world, we do not recommend relying on
   them as a security tool, but rather as a tool to make renumbering
   easier.

4.3.  Edge filtering

   To avoid being used as a source of amplification attacks, it is
   important to follow the ingress filtering rules of BCP 38.  At the
   same time, it is important to filter at the network border all
   unicast traffic and routing announcements that should not be
   routed on the Internet, commonly known as "bogon prefixes".

4.4.  Final Security Remarks

   Finally, let us emphasize the need for careful configuration of
   access control rules at the translation points.  This is
   especially sensitive in infrastructures at the dual-stack stage,
   as the translation points are potentially distributed, and when
   protocol translation is offered as an external service, since
   there can be operational mismatches.

5.  IANA Considerations

   None.

6.  Acknowledgements

   We would like to thank Tore Anderson, Wes George, Ray Hunter, Joel
   Jaeggli, Fred Baker, Lorenzo Colitti, Dan York, Carlos Martinez,
   Lee Howard, Alejandro Acosta, Alexis Munoz, Nicolas Fiumarelli,
   Santiago Aggio and Hans Velez for their questions, suggestions,
   reviews and comments.

7.  Informative References

   [I-D.gont-opsec-dhcpv6-shield]
              Gont, F. and W. Liu, "DHCPv6-Shield: Protecting Against
              Rogue DHCPv6 Servers", draft-gont-opsec-dhcpv6-shield-01
              (work in progress), October 2012.

   [I-D.ietf-6man-flow-ecmp]
              Carpenter, B. and S. Amante, "Using the IPv6 flow label
              for equal cost multipath routing and link aggregation
              in tunnels", draft-ietf-6man-flow-ecmp-05 (work in
              progress), July 2011.

   [I-D.ietf-6man-stable-privacy-addresses]
              Gont, F., "A method for Generating Stable Privacy-
              Enhanced Addresses with IPv6 Stateless Address
              Autoconfiguration (SLAAC)",
              draft-ietf-6man-stable-privacy-addresses-10 (work in
              progress), June 2013.

   [I-D.ietf-opsec-ipv6-host-scanning]
              Gont, F. and T. Chown, "Network Reconnaissance in IPv6
              Networks", draft-ietf-opsec-ipv6-host-scanning-01 (work
              in progress), April 2013.

   [I-D.ietf-opsec-v6]
              Chittimaneni, K., Kaeo, M., and E. Vyncke, "Operational
              Security Considerations for IPv6 Networks",
              draft-ietf-opsec-v6-02 (work in progress), February
              2013.

   [I-D.ietf-v6ops-enterprise-incremental-ipv6]
              Chittimaneni, K., Chown, T., Howard, L., Kuarsingh, V.,
              Pouffary, Y., and E. Vyncke, "Enterprise IPv6 Deployment
              Guidelines",
              draft-ietf-v6ops-enterprise-incremental-ipv6-03 (work
              in progress), July 2013.

   [I-D.ietf-v6ops-icp-guidance]
              Carpenter, B. and S. Jiang, "IPv6 Guidance for Internet
              Content and Application Service Providers",
              draft-ietf-v6ops-icp-guidance-05 (work in progress),
              January 2013.

   [I-D.sunq-v6ops-contents-transition]
              Sun, Q., Liu, D., Zhao, Q., Liu, Q., Xie, C., Li, X.,
              and J. Qin, "Rapid Transition of IPv4 contents to be
              IPv6-accessible",
              draft-sunq-v6ops-contents-transition-03 (work in
              progress), March 2012.

   [RFC4193]  Hinden, R. and B. Haberman, "Unique Local IPv6 Unicast
              Addresses", RFC 4193, October 2005.

   [RFC4732]  Handley, M., Rescorla, E., and IAB, "Internet Denial-
              of-Service Considerations", RFC 4732, December 2006.

   [RFC5375]  Van de Velde, G., Popoviciu, C., Chown, T., Bonness,
              O., and C. Hahn, "IPv6 Unicast Address Assignment
              Considerations", RFC 5375, December 2008.

   [RFC6052]  Bao, C., Huitema, C., Bagnulo, M., Boucadair, M., and
              X. Li, "IPv6 Addressing of IPv4/IPv6 Translators",
              RFC 6052, October 2010.

   [RFC6104]  Chown, T. and S. Venaas, "Rogue IPv6 Router
              Advertisement Problem Statement", RFC 6104, February
              2011.

   [RFC6105]  Levy-Abegnoli, E., Van de Velde, G., Popoviciu, C., and
              J. Mohacsi, "IPv6 Router Advertisement Guard", RFC 6105,
              February 2011.

   [RFC6145]  Li, X., Bao, C., and F. Baker, "IP/ICMP Translation
              Algorithm", RFC 6145, April 2011.

   [RFC6164]  Kohno, M., Nitzan, B., Bush, R., Matsuzaki, Y.,
              Colitti, L., and T. Narten, "Using 127-Bit IPv6
              Prefixes on Inter-Router Links", RFC 6164, April 2011.

   [RFC6177]  Narten, T., Huston, G., and L. Roberts, "IPv6 Address
              Assignment to End Sites", BCP 157, RFC 6177, March 2011.

   [RFC6555]  Wing, D. and A. Yourtchenko, "Happy Eyeballs: Success
              with Dual-Stack Hosts", RFC 6555, April 2012.

   [RFC6583]  Gashinsky, I., Jaeggli, J., and W. Kumari, "Operational
              Neighbor Discovery Problems", RFC 6583, March 2012.

   [V6DCS]    "The case for IPv6-only data centres", .

Authors' Addresses

   Diego R. Lopez
   Telefonica I+D
   Don Ramon de la Cruz, 84
   Madrid  28006
   Spain

   Phone: +34 913 129 041
   Email: diego@tid.es

   Zhonghua Chen
   China Telecom
   P.R. China

   Phone:
   Email: 18918588897@189.cn

   Tina Tsou
   Huawei Technologies (USA)
   2330 Central Expressway
   Santa Clara, CA  95050
   USA

   Phone: +1 408 330 4424
   Email: Tina.Tsou.Zouting@huawei.com

   Cathy Zhou
   Huawei Technologies
   Bantian, Longgang District
   Shenzhen  518129
   P.R. China

   Phone:
   Email: cathy.zhou@huawei.com

   Arturo Servin
   LACNIC
   Rambla Republica de Mexico 6125
   Montevideo  11300
   Uruguay

   Phone: +598 2604 2222
   Email: aservin@lacnic.net