<?xml version="1.0" encoding="US-ASCII"?>
<!DOCTYPE rfc SYSTEM "rfc2629.dtd" [
<!ENTITY I-D.sunq-v6ops-contents-transition SYSTEM "http://xml.resource.org/public/rfc/bibxml3/reference.I-D.sunq-v6ops-contents-transition">
<!ENTITY I-D.ietf-6man-flow-ecmp SYSTEM "http://xml.resource.org/public/rfc/bibxml3/reference.I-D.ietf-6man-flow-ecmp">
<!ENTITY I-D.ietf-opsec-dhcpv6-shield SYSTEM "http://xml.resource.org/public/rfc/bibxml3/reference.I-D.ietf-opsec-dhcpv6-shield">
<!ENTITY I-D.ietf-opsec-v6 SYSTEM "http://xml.resource.org/public/rfc/bibxml3/reference.I-D.ietf-opsec-v6">
<!ENTITY I-D.ietf-v6ops-enterprise-incremental-ipv6 SYSTEM "http://xml.resource.org/public/rfc/bibxml3/reference.I-D.ietf-v6ops-enterprise-incremental-ipv6">
<!ENTITY I-D.ietf-nvo3-overlay-problem-statement SYSTEM "http://xml.resource.org/public/rfc/bibxml3/reference.I-D.ietf-nvo3-overlay-problem-statement">
<!ENTITY I-D.ietf-sfc-problem-statement SYSTEM "http://xml.resource.org/public/rfc/bibxml3/reference.I-D.ietf-sfc-problem-statement">
<!ENTITY I-D.ietf-opsec-ipv6-host-scanning SYSTEM "http://xml.resource.org/public/rfc/bibxml3/reference.I-D.ietf-opsec-ipv6-host-scanning">
<!ENTITY I-D.ietf-6man-stable-privacy-addresses SYSTEM "http://xml.resource.org/public/rfc/bibxml3/reference.I-D.ietf-6man-stable-privacy-addresses">
<!ENTITY RFC4193 SYSTEM "http://xml.resource.org/public/rfc/bibxml/reference.RFC.4193.xml">
<!ENTITY RFC4732 SYSTEM "http://xml.resource.org/public/rfc/bibxml/reference.RFC.4732.xml">
<!ENTITY RFC5375 SYSTEM "http://xml.resource.org/public/rfc/bibxml/reference.RFC.5375.xml">
<!ENTITY RFC6052 SYSTEM "http://xml.resource.org/public/rfc/bibxml/reference.RFC.6052.xml">
<!ENTITY RFC6104 SYSTEM "http://xml.resource.org/public/rfc/bibxml/reference.RFC.6104.xml">
<!ENTITY RFC6105 SYSTEM "http://xml.resource.org/public/rfc/bibxml/reference.RFC.6105.xml">
<!ENTITY RFC6145 SYSTEM "http://xml.resource.org/public/rfc/bibxml/reference.RFC.6145.xml">
<!ENTITY RFC6164 SYSTEM "http://xml.resource.org/public/rfc/bibxml/reference.RFC.6164.xml">
<!ENTITY RFC6177 SYSTEM "http://xml.resource.org/public/rfc/bibxml/reference.RFC.6177.xml">
<!ENTITY RFC6294 SYSTEM "http://xml.resource.org/public/rfc/bibxml/reference.RFC.6294.xml">
<!ENTITY RFC6555 SYSTEM "http://xml.resource.org/public/rfc/bibxml/reference.RFC.6555.xml">
<!ENTITY RFC6583 SYSTEM "http://xml.resource.org/public/rfc/bibxml/reference.RFC.6583.xml">
<!ENTITY RFC6883 SYSTEM "http://xml.resource.org/public/rfc/bibxml/reference.RFC.6883.xml">
<!ENTITY RFC7098 SYSTEM "http://xml.resource.org/public/rfc/bibxml/reference.RFC.7098.xml">
]>
<?rfc toc="yes"?>
<?rfc tocompact="yes"?>
<?rfc tocdepth="3"?>
<?rfc tocindent="yes"?>
<?rfc symrefs="yes"?>
<?rfc sortrefs="yes"?>
<?rfc comments="yes"?>
<?rfc inline="yes"?>
<?rfc compact="yes"?>
<?rfc subcompact="no"?>
<rfc category="info" docName="draft-ietf-v6ops-dc-ipv6-01" ipr="trust200902">
  <front>
    <title abbrev="IPv6 Op Guidelines for DCs">IPv6 Operational Guidelines for Datacenters</title>   
    
    <author fullname="Diego R. Lopez" initials="D.R." surname="Lopez">
      <organization>Telefonica I+D</organization>
      <address>
        <postal>
          <street>Don Ramon de la Cruz, 84</street>
          <city>Madrid</city>
          <code>28006</code>
          <country>Spain</country>
        </postal>
        <phone>+34 913 129 041</phone>
        <email>diego@tid.es</email>
      </address>
    </author>
    
    <author fullname="Zhonghua Chen" initials="Z." surname="Chen">
      <organization>China Telecom</organization>
      <address>
        <postal>
          <street></street>
          <city></city>
          <region></region>
          <code></code>
          <country>P.R.China</country>
        </postal>
        <phone></phone>
        <email>18918588897@189.cn</email>
      </address>
    </author>
    
    <author fullname="Tina Tsou" initials="T." surname="Tsou">
      <organization>Huawei Technologies (USA)</organization>
      <address>
        <postal>
          <street>2330 Central Expressway</street>
          <city>Santa Clara</city>
          <region>CA</region>
          <code>95050</code>
          <country>USA</country>
        </postal>
        <phone>+1 408 330 4424</phone>
        <email>Tina.Tsou.Zouting@huawei.com</email>
      </address>
    </author>
    
    <author fullname="Cathy Zhou" initials="C." surname="Zhou">
      <organization>Huawei Technologies</organization>
      <address>
        <postal>
          <street>Bantian, Longgang District</street>
          <city>Shenzhen</city>
          <code>518129</code>
          <country>P.R. China</country>
        </postal>
        <phone></phone>
        <email>cathy.zhou@huawei.com</email>
      </address>
    </author>
    
    <author fullname="Arturo Servin" initials="A." surname="Servin">
      <organization>LACNIC</organization>
      <address>
        <postal>
          <street>Rambla Republica de Mexico 6125</street>
          <city>Montevideo</city>
          <code>11300</code>
          <country>Uruguay</country>
        </postal>
        <phone>+598 2604 2222</phone>
        <email>aservin@lacnic.net</email>
      </address>
    </author>

    <date day="3" month="February" year="2014" />
    
    <workgroup>v6ops</workgroup>

    <abstract>
      <t>This document is intended to provide operational guidelines
      for datacenter operators planning to deploy IPv6 in their 
      infrastructures. It aims to offer a reference framework for 
      evaluating different products and architectures, and therefore
      it is also addressed to manufacturers and solution providers, 
      so they can use it to gauge their solutions. We believe this
      will translate into a smoother and faster IPv6 transition for
      datacenter infrastructures.</t>
      <t>The document focuses on the DC infrastructure itself, its 
      operation, and the aspects related to DC interconnection through
      IPv6. It does not consider the particular mechanisms for making 
      Internet services provided by applications hosted in the DC 
      available through IPv6, beyond the specific aspects related to 
      their deployment on the Data Center (DC) infrastructure.</t>
      <t>Apart from facilitating the transition to IPv6, the 
      mechanisms outlined here are intended to make this transition as 
      transparent as possible (if not completely transparent) to 
      applications and services running on the DC infrastructure, as 
      well as to take advantage of IPv6 features to simplify DC 
      operations, internally and across the Internet.</t>
    </abstract>
  </front>

  <middle>
    <section title="Introduction">
      <t>The need for considering the aspects related to IPv4-to-IPv6 
      transition for all devices and services connected to the Internet 
      has been widely discussed elsewhere, and it is not our intention
      to repeat that call here. Let us simply note that many of 
      those services are already, or will soon be, located in Data Centers 
      (DC), which makes the issues associated with DC 
      infrastructure transition a key aspect both for these
      infrastructures themselves and for providing a simpler and clearer 
      path to service transition.</t>
      <t>All issues discussed here are related to DC infrastructure 
      transition, and are intended to be orthogonal to the particular 
      mechanisms chosen for making the services hosted in the DC
      available through IPv6, beyond the specific aspects related to 
      their deployment on the infrastructure. General mechanisms related
      to service transition have been discussed in depth elsewhere (see, for
      example, <xref target="RFC6883"/> and 
      <xref target="I-D.ietf-v6ops-enterprise-incremental-ipv6"/>) and
      are considered to be independent of the goal of this discussion.
      The applicability of these general mechanisms for service transition
      will, in many cases, depend on the supporting DC infrastructure's
      characteristics. However, this document intends to keep both problems
      (service vs. infrastructure transition) as separate issues.</t>
      <t>Furthermore, the combination of the regularity and controlled
      management in a DC interconnection fabric with IPv6 universal
      end-to-end addressing should translate into simpler and faster 
      VM migrations, whether intra-DC, inter-DC, or even inter-provider.
      </t>
         
    </section>

    <section title="Architecture and Transition Stages">
      <t>This document presents a transition framework structured along
      transition stages and operational guidance associated
      with the degree of penetration
      of IPv6 into the DC communication fabric. It is worth noting that we use
      these stages as a classification mechanism: they need not be
      read as a succession of steps from a v4-only infrastructure
      to full-fledged v6, but rather provide a framework that operators, users,
      and even manufacturers can use to assess their plans and 
      products.</t>
      <t>There is no (explicit or implicit) requirement to start at 
      the stage described first, nor to follow the stages in successive
      order. According to their needs and the available solutions, DC
      operators can choose to start or remain at a certain stage, and
      freely move from one to another as they see fit, without contravening
      this document. In this respect, the classification intends to support
      the planning in aspects such as the adaptation of the different
      transition stages to the evolution of traffic patterns, or risk
      assessment in what relates to deploying new components and
      incorporating change control, integration and testing in
      highly-complex multi-vendor infrastructures.</t> 
      <t>Three main transition stages can be considered when analyzing IPv6
      deployment in the DC infrastructure, all compatible with the 
      availability of services running in the DC through IPv6:
      <list style="symbols">
        <t>Experimental. The DC keeps a native IPv4 infrastructure,
        with gateway routers (or even application gateways when services
        require so) performing the adaptation to requests arriving from
        the IPv6 Internet.</t>
        <t>Dual stack. Native IPv6 and IPv4 are both present in the
        infrastructure, up to whatever layer in the interconnection 
        scheme applies L3 packet forwarding.</t>
        <t>IPv6-Only. The DC has a fully pervasive IPv6 infrastructure,
        including full IPv6 hypervisors, which perform the appropriate
        tunneling or NAT if required by internal applications running
        IPv4.</t>
      </list></t>
      
      <section title="General Architecture">
        <t>The diagram in Figure 1 depicts a generalized interconnection
      schema in a DC.</t>
<figure anchor="fig_DCInterconnect" title="DC Interconnection Schema">
  <artwork><![CDATA[
          |               |              
    +-----+-----+   +-----+-----+
    |  Gateway  |   |  Gateway  |          Internet / Remote Access
    +-----+-----+   +-----+-----+                  Modules
          |               | 
          +---+-----------+ 
              |     |    
      +---+---+     +---+---+
      | Core0 |     | CoreN |              Core
      +---+---+     +---+---+
            /  \    /      / 
           /    \-----\   /
          /   /---/    \ /
        +--------+       +--------+
      +/-------+ |     +/-------+ | 
      | Aggr01 | +-----| AggrN1 | +        Aggregation
      +---+---+/       +--------+/
       /     \         /      \
      /       \       /        \
+-----+    +-----+   +-----+    +-----+
| T11 |... | T1x |   | T21 |... | T2y |    Access
+-----+    +-----+   +-----+    +-----+ 
| HyV |    | HyV |   | HyV |    | HyV |    Physical Servers
+:::::+    +:::::+   +:::::+    +:::::+
| VMs |    | VMs |   | VMs |    | VMs |    Virtual Machines
+-----+    +-----+   +-----+    +-----+    
. . . .    . . . .   . . . .    . . . .
+-----+    +-----+   +-----+    +-----+ 
| HyV |    | HyV |   | HyV |    | HyV |
+:::::+    +:::::+   +:::::+    +:::::+
| VMs |    | VMs |   | VMs |    | VMs |
+-----+    +-----+   +-----+    +-----+
  ]]></artwork>
</figure>
      <t>
        <list style="symbols">
          <t>Hypervisors provide connection services (among others) to 
          virtual machines running on physical servers.</t>
          <t>Access elements provide connectivity directly to/from 
          physical servers. The access elements are typically placed 
          either top-of-rack (ToR) or end-of-row (EoR).</t>
          <t>Aggregation elements group several (many) physical racks
          to achieve local integration and provide as much structure as
          possible to data paths.</t>
          <t>Core elements connect all aggregation elements acting as 
          the DC backbone.</t>
          <t>One or several gateways connect the DC to the Internet,
          branch offices, partners, third parties,
          and/or other DCs. The interconnectivity to other DCs may take the
          form of VPNs, WAN links, metro links, or any other form of
          interconnection.</t>
        </list>
      In many actual deployments, depending on DC size and design 
      decisions, some of these elements may be combined (core and
      gateways provided by the same routers, or hypervisors acting as
      access elements) or virtualized to some extent, but this layered 
      schema is the one that best accommodates the different options to 
      use L2 or L3 at any of the different DC interconnection layers, 
      and will guide the discussion throughout the document.</t> 
      
      </section>

      <section title="Experimental Stage. Native IPv4 Infrastructure">
        <t>This transition stage corresponds to the first step that many
        datacenters may take (or have taken) in order to make their
        external services initially accessible from the IPv6 Internet
        and/or to evaluate the possibilities around it, and corresponds
        to IPv6 traffic patterns originated entirely outside the DC or its
        tenants, and representing a small percentage of the total external
        requests. At this stage, the DC network scheme and addressing
        require little change, if any.</t>
        <t>It is important to remark that in no case can this be considered
        a permanent stage in the transition, or even a long-term solution
        for incorporating IPv6 into the DC infrastructure. This stage is
        only recommended for experimentation or early evaluation purposes.</t>
        <t>The translation of IPv6 requests into the internal infrastructure
        addressing format occurs at the outermost level of the DC Internet
        connection. This is typically achieved at the DC gateway routers,
        which support the appropriate address translation mechanisms for those
        services required to be accessible through native IPv6 requests. The
        policies for applying adaptation can range from performing it only for
        a limited set of specified services to providing a general translation
        service for all public services. More granular mechanisms, based on
        address ranges or more sophisticated dynamic policies, are also possible,
        as they are applied by a limited set of control elements. These provide
        an additional level of control over the usage of IPv6 routable addresses
        in the DC environment, which can be especially significant in the
        experimentation or early deployment phases this stage is applicable
        to.</t>
        <t>Even at this stage, some advantages of applying IPv6
        come into play, even if they can only be applied at the ingress
        elements:
        <list style="symbols">
          <t>Flow labels can be applied to enhance load distribution, as 
          described in <xref target="RFC7098"/>. If the incoming
          IPv6 requests are adequately labeled, the gateway systems can use the
          flow labels as a hint for applying load-balancing mechanisms when 
          translating the requests towards the IPv4 internal network.</t>
          <t>During VM migration (intra- or even inter-DC), Mobile IPv6
          mechanisms can be applied to keep service availability during the 
          transient state.</t>
        </list></t>
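        <t>As an illustration of the first point, flow-label-assisted load
        distribution can be sketched as follows. This is a hypothetical
        Python fragment in the spirit of <xref target="RFC7098"/>, not part
        of any product; the backend names and hashing scheme are invented
        for the example.</t>
<figure><artwork><![CDATA[
```python
import ipaddress

# Hypothetical sketch of flow-label-assisted load balancing at the
# gateway. Backend pool names and the hashing scheme are illustrative.
BACKENDS = ["v4-backend-a", "v4-backend-b", "v4-backend-c"]

def select_backend(src_addr: str, flow_label: int) -> str:
    """Pick a backend from the (source address, flow label) pair.

    All packets of a flow carry the same 20-bit label, so they map
    to the same backend without inspecting transport-layer headers.
    """
    src = int(ipaddress.IPv6Address(src_addr))
    key = hash((src, flow_label & 0xFFFFF))  # flow label is 20 bits
    return BACKENDS[key % len(BACKENDS)]
```
]]></artwork></figure>
        <t>Because the hash input does not include transport ports, such a
        scheme keeps working even when upper-layer headers are encrypted or
        carried in a later fragment.</t>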
        <section title="Off-shore v6 Access">
          <t>This model can also be applied in an "off-shore"
          mode by the service provider connecting the DC infrastructure to 
          the Internet, as described in
          <xref target="I-D.sunq-v6ops-contents-transition"/>.</t>
          <t>When this off-shore mode is applied, the original source
          address will be hidden from the DC infrastructure, and therefore
          identification techniques based on it, such as geolocation or
          reputation evaluation, will be hampered.
          Unless there is a specific trust link between the DC operator and
          the ISP, and the DC operator is able to access equivalent
          identification interfaces provided by the ISP as an additional
          service, the off-shore experimental stage cannot be considered
          applicable when source address identification is required.</t>
        </section>
      </section>

      <section title="Dual Stack Stage. Internal Adaptation">
        <t>This stage requires dual-stack elements in some internal
        parts of the DC infrastructure. This brings some degree of partitioning
        into the infrastructure, either horizontal (when data paths or
        management interfaces are migrated or left in IPv4 while the rest
        migrate), vertical (per tenant or service group), or even both.</t>
        <t>Although it may seem an artificial case, situations requiring this
        stage can arise from different requirements from the user base,
        or the need for technology changes at different points of the
        infrastructure, or even the goal of having the possibility of
        experimenting with new solutions in a controlled real-operations
        environment, at the price of the additional complexity of dealing
        with a double protocol stack, as noted in
        <xref target="RFC6883"/> and elsewhere.</t>
        <t>This transition stage can accommodate different traffic patterns,
        both internal and external, though it best fits scenarios with a
        clear differentiation among types of traffic (external vs.
        internal, data vs. management...), and/or a more or less even
        distribution of external requests. A common scenario would include
        native dual-stack servers for certain services combined with
        single-stack ones for others (web servers in dual stack and database
        servers only supporting v4, for example).</t>
        <t>At this stage, the advantages outlined above on load balancing
        based on flow labels and Mobile IP mechanisms are applicable to any 
        L3-based mechanism (intra- as well as inter-DC). They will 
        translate into enhanced VM mobility, more effective load
        balancing, and higher service availability. Furthermore,
        the simpler integration provided by IPv6 to and from the L2 flat 
        space to the structured L3 one can be applied to achieve
        simpler deployments, as well as alleviating encapsulation and
        fragmentation issues when traversing between L2 and L3 spaces.
        With an appropriate prefix management, automatic address 
        assignment, discovery, and renumbering can be applied not 
        only to public service interfaces, but most notably to data 
        and management paths. Other potential advantages include the application 
        of multicast scopes to limit broadcast floods, and the usage of specific
        security headers to enhance tenant differentiation.</t>
        <t>In general, all these advantages are especially significant for 
        overlay techniques applied to support multi-tenancy and inter-DC 
        operation.</t>
        <t>On the other hand, this stage requires a much more careful
        planning of addressing schemas (please refer to
        <xref target="RFC5375"/>) and access control, according to
        security levels. While the experimental stage implies relatively few
        globally routable addresses, this one brings the advantages and risks of
        using different kinds of addresses at each point of the IPv6-aware
        infrastructure.</t>
		<section title="Dual-stack at the Aggregation Layer">
<figure anchor="fig_DCApplication" title="Data Center Application Scheme">
  <artwork><![CDATA[
         
 +---------------------+        
 | Internet / External |
 +---------+-----------+        
           |
     +-----+----+     
     |  Gateway |     
     +-----+----+
           .
           .           Core Level
           .      
        +--+--+  
        | FW  | 
        +--+--+  
           |           Aggregation Level   
        +--+--+  
        | MB  | 
        +--+--+
        _ / \_
       /       \   
 +--+--+     +--+--+       
 | Web | ... | Web | 
 +--+--+     +--+--+ 
    | \ __ _ _/ | 
    | /       \ |   
 +--+--+     +--+--+       
 |Cache|     | DB  | 
 +-----+     +-----+       
  ]]></artwork>
</figure>
          <t>An initial approach corresponding to this transition stage
          relies on taking advantage of specific elements at the
          aggregation layer described in <xref target="fig_DCInterconnect"/>,
          making them able to provide dual-stack gatewaying to the
          IPv4-based servers and data infrastructure.</t>
          <t>Typically, firewalls (FW) are deployed as the security edge
          of the whole service domain and provide safe access control of
          this service domain from other function domains. In addition,
          some application-optimization and security devices (generally
          known as middleboxes, e.g., load balancers, SSL VPNs, IPSs, etc.)
          may be deployed at the aggregation level to alleviate 
          the burden on the servers and to guarantee deep security, as shown in 
          <xref target="fig_DCApplication"/>. The choice of a particular kind 
          of middlebox for this dual-stack approach should be based on the nature 
          of the services and the deployment of the middleboxes in the DC 
          infrastructure.</t>
          <t>The middlebox can be upgraded to support IPv6 data transmission. 
          There are two main ways to achieve this at the edge of the DC: 
          encapsulation and NAT. In the encapsulation case, the middlebox 
          function carries the IPv6 traffic over IPv4 using an encapsulation 
          (IPv6-in-IPv4). In the NAT case, some technologies already exist 
          to solve this problem. For example, DNS and NAT devices could be 
          concatenated for IPv4/IPv6 translation if an IPv6 host needs to visit 
          IPv4 servers. However, this may require the concatenation of multiple 
          network devices, which means the NAT tables need to be synchronized 
          across different devices. As described below, a simplified IPv4/IPv6
          translation model can be applied, which could be implemented 
          in a single device. The mapping information between IPv4 and IPv6 is
          generated automatically based on the information of the middlebox.
          The host IP address is translated without port translation.</t>
<figure anchor="fig_LBDualStack" title="Dual Stack middlebox (Load-Balancer) mechanism">
  <artwork><![CDATA[

                        +----------+------------------------------+ 
                        |Dual Stack| IPv4-only       +----------+ |
                        |          |            +----|Web Server| |
                        |   +------|------+    /     +----------+ |
+--------+  +-------+   |   |      |      |   /                   |
|Internet|--|Gateway|---|---+Load-Balancer+-- \                   |
|        |  |       |   |   |      |      |    \     +----------+ |
+--------+  +-------+   |   +------|------+     +----|Web Server| |
                        |          |                 +----------+ |    
                        +----------+------------------------------+
  ]]></artwork>
</figure>  
          <t>As shown in <xref target="fig_LBDualStack"/>, the middlebox (a 
          load-balancer, LB, in this case) can be considered as divided into two 
          parts: the dual-stack part facing the external border, and the 
          IPv4-only part, which contains the traditional LB functions. The IPv4 
          DC is allocated an IPv6 prefix used for the VSIPv6 (Virtual 
          Service IPv6 Address). We suggest that this IPv6 prefix not be the 
          well-known prefix, in order to avoid the IPv4 routes of the services 
          in different DCs spreading into the IPv6 network. The VSIPv4 (Virtual 
          Service IPv4 Address) is embedded in the VSIPv6 using the allocated
          IPv6 prefix.
          In this way, the LB has a stateless IP address mapping between
          VSIPv6 and VSIPv4, and no synchronization is required between the
          LB and a DNS64 server.</t>
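          <t>A minimal sketch of this stateless embedding, assuming an
          <xref target="RFC6052"/>-style /96 prefix, is shown below. The
          documentation prefix used here is chosen arbitrarily for
          illustration; it is deliberately not the well-known prefix.</t>
<figure><artwork><![CDATA[
```python
import ipaddress

# Illustrative sketch of the stateless VSIPv4 <-> VSIPv6 mapping,
# following the RFC 6052 embedding idea with a /96 prefix. The prefix
# below is a documentation prefix, not the well-known 64:ff9b::/96.
DC_PREFIX = ipaddress.IPv6Network("2001:db8:64::/96")

def vsip4_to_vsip6(vsip4: str) -> ipaddress.IPv6Address:
    """Embed the 32-bit VSIPv4 into the low 32 bits of the DC prefix."""
    v4 = int(ipaddress.IPv4Address(vsip4))
    return ipaddress.IPv6Address(int(DC_PREFIX.network_address) | v4)

def vsip6_to_vsip4(vsip6: str) -> ipaddress.IPv4Address:
    """Recover the embedded VSIPv4 -- stateless, no table lookup."""
    low32 = int(ipaddress.IPv6Address(vsip6)) & 0xFFFFFFFF
    return ipaddress.IPv4Address(low32)
```
]]></artwork></figure>
          <t>Since the mapping is purely arithmetic, each LB can compute it
          independently without any shared state.</t>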
          <t>The dual-stack part of the LB has a private IPv4 address pool.
          When IPv6 packets arrive, the dual-stack part performs a one-to-one
          SIP (source IP address) mapping (as defined in 
          <xref target="I-D.sunq-v6ops-contents-transition"/>) between a
          private IPv4 address and the IPv6 SIP. Because there would be too
          many UDP/TCP sessions between the DC and the Internet, the
          IPv6-to-IPv4 address binding tables are not
          session-based, but SIP-based. Thus, the dual-stack part of the LB 
          builds stateful binding tables between the host IPv6 address
          and a private IPv4 address from the pool. When subsequent IPv6
          packets from the host arrive from the Internet at the LB, the
          dual-stack part performs the IP address translation for the packets.
          Thus, the IPv6 packets are translated into IPv4 packets and sent 
          to the IPv4-only part of the LB.</t>
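          <t>The SIP-based binding logic can be sketched as follows. This is
          an illustrative model only; the class shape and pool addresses are
          assumptions, not the implementation described in
          <xref target="I-D.sunq-v6ops-contents-transition"/>.</t>
<figure><artwork><![CDATA[
```python
import ipaddress

# Illustrative model of a SIP-based binding table: each external IPv6
# source address is bound to one private IPv4 address from the pool,
# rather than keeping one entry per TCP/UDP session.
class SipBindingTable:
    def __init__(self, pool):
        self.free = [str(ipaddress.IPv4Address(a)) for a in pool]
        self.v6_to_v4 = {}

    def bind(self, src_v6: str) -> str:
        """Return the pooled IPv4 address bound to this IPv6 SIP,
        allocating one on first sight."""
        if src_v6 not in self.v6_to_v4:
            if not self.free:
                raise RuntimeError("private IPv4 pool exhausted")
            self.v6_to_v4[src_v6] = self.free.pop(0)
        return self.v6_to_v4[src_v6]

table = SipBindingTable(["10.0.0.1", "10.0.0.2"])
# All sessions from the same host share one binding entry.
assert table.bind("2001:db8::a") == table.bind("2001:db8::a")
```
]]></artwork></figure>
          <t>Keeping one entry per source host rather than per session is
          what bounds the table size against the large number of concurrent
          sessions mentioned above.</t>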
        </section>
        <section title="Dual-stack Extended OS/Hypervisor">
          <t>Another option for deploying an infrastructure at the dual-stack
          stage would bring dual stack much closer to the application servers,
          by requiring hypervisors, VMs, and applications in the v6-capable
          zone of the DC to be able to operate in dual stack. This way,
          incoming connections would be dealt with seamlessly, while
          for outgoing ones an OS-specific replacement for system calls like
          gethostbyname() and getaddrinfo() would accept a character string (an 
          IPv4 literal, an IPv6 literal, or a domain name) and would return a 
          connected socket or an error message, having executed a happy 
          eyeballs algorithm (<xref target="RFC6555"/>).</t>
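          <t>Such a hypothetical replacement call could look roughly like the
          following sketch. Note that this is a simplified, sequential
          approximation: a real Happy Eyeballs implementation
          (<xref target="RFC6555"/>) races IPv6 and IPv4 connection attempts
          in parallel rather than trying them in turn.</t>
<figure><artwork><![CDATA[
```python
import socket

# Hedged sketch of a connect-by-name replacement for gethostbyname()/
# getaddrinfo(): prefer IPv6 destinations and fall back to IPv4 on
# failure (sequential approximation of RFC 6555 Happy Eyeballs).
def connect_by_name(host: str, port: int, timeout: float = 0.3):
    """Accept a host name or an IPv4/IPv6 literal; return a connected
    socket, trying IPv6 destinations before IPv4 ones."""
    infos = socket.getaddrinfo(host, port, type=socket.SOCK_STREAM)
    # Prefer IPv6 results, keeping resolver order within each family.
    infos.sort(key=lambda i: 0 if i[0] == socket.AF_INET6 else 1)
    last_err = None
    for family, stype, proto, _, addr in infos:
        try:
            s = socket.socket(family, stype, proto)
            s.settimeout(timeout)
            s.connect(addr)
            return s
        except OSError as err:
            last_err = err
    raise last_err or OSError("no addresses for %s" % host)
```
]]></artwork></figure>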
          <t>If these hypothetical system call replacements were smart
          enough, they would allow the transparent interoperation of DCs 
          with different levels of v6 penetration, either horizontal 
          (internal data paths are not migrated, for example) or vertical
          (per tenant or service group). This approach requires, on the
          other hand, all the involved DC infrastructure to become
          dual-stack, as well as some degree of explicit application
          adaptation.</t>
        </section>
      </section>

      <section title="IPv6-Only Stage. Pervasive IPv6 Infrastructure">
        <t>We can consider a DC infrastructure at the final 
        stage when all network layer elements, including hypervisors, are
        IPv6-aware and apply it by default. In contrast with the experimental
        stage, access from the IPv4 Internet is achieved, when required,
        by protocol translation performed at the edge infrastructure elements,
        or even supplied by the service provider as an additional network
        service.</t>
        <t>There are different drivers that could motivate DC managers to
        transition to this stage. In principle, the scarcity of IPv4 addresses
        may require reclaiming IPv4 resources from portions of the network
        infrastructure which no longer need them. Furthermore, the 
        unavailability of IPv4 addresses would make dual-stack environments no 
        longer possible, and careful assessments will have to be performed to 
        determine where to use the remaining IPv4 resources.</t>
        <t>Another important motivation to move DC operations from dual stack to
        IPv6-only is the savings in costs and operational activities that managing
        a single-stack network brings in comparison with managing two stacks. 
        Today, besides learning to manage two different stacks, network and 
        system administrators need to duplicate other tasks such as IP 
        address management, firewall configuration, system security hardening, 
        and monitoring, among others. These activities are not just costly for 
        the DC management; they may also lead to configuration errors and 
        security holes. In particular, a few activities have special impact on 
        costs for dual-stacked infrastructures:
        	<list style="symbols">
          		<t>Development. When a new device or application version is
          		released, it must be tested three times: IPv4, dual stack, and
          		IPv6-only. Though this does not imply triple the effort once the 
          		development environment is set up, a general estimate is that it 
          		implies a 10% additional cost.</t>
          		<t>Testing. Every QA procedure must be performed at least 
          		twice, and in many cases three times, with an estimated 10% 
          		incremental effort.</t>
          		<t>Operation and troubleshooting. While for L1/L2 problems we 
          		would be talking of a 1% incremental effort (in a few words, once 
          		ping6 works, checking ping is very little effort), for L3
          		problems a rough estimate would be an increment of 5%.</t>
          		<t>Application development. Many applications would require
          		keeping two branches, with a 10-30% additional cost. The estimate 	
          		here spans a wider range, as applications cover a wide 
          		variety of cases.</t>
          		<t>Addition of new L3 devices, which should handle both IPv4 and
          		IPv6 flows and provide higher performance to deal with both at the 
          		same time. This comes with a cost increment of 5-10%.</t>
          		<t>Network management. Managing two L3 
          		network planes would come at around a 10% incremental cost.</t>	 
          	</list>
        In summary, a full dual-stack datacenter would come at a 5-10% higher 
        operating cost than a single-stack one.</t>
        <t>This stage can also be of interest for new deployments willing
        to apply a fresh start aligned with future IPv6 widespread usage,
        when a relevant share of requests is expected to be using IPv6,
        or to take advantage of any of the potential benefits that an IPv6 
        support infrastructure can provide. Other, and probably more compelling
        in many cases, drivers for this stage may be either a lack of enough
        IPv4 resources (whether private or globally unique) or a need to
        reclaim IPv4 resources from portions of the network which no longer
        need them. In these circumstances, a careful evaluation of what still
        needs to speak IPv4 and what does not will need to happen to ensure
        judicious use of the remaining IPv4 resources.</t>
        <t>The potential advantages mentioned for the previous stages (load
        distribution based on flow labels, mobility mechanisms for transient 
        states in VM or data migration, controlled multicast, and better 
        mapping of the flat L2 space onto L3 constructs) can be applied at 
        any layer, or even tailored specifically for individual services. 
        Obviously, the need for careful planning of the address space is 
        even stronger here, though the centralized protocol translation 
        services should reduce the risk of translation errors causing 
        disruptions or security breaches.</t>
        <t><xref target="V6DCS"/> proposes an approach to a next-generation
        DC deployment, already demonstrated in practice, and claims the 
        advantages of adopting this stage from the beginning, providing
        some rationale based on simplifying the transition process.
        It relies on stateless NAT64 (<xref target="RFC6052"/>,
        <xref target="RFC6145"/>) to enable access from the IPv4 Internet.</t>
        <section title="Overlay and Chaining Support">
          <t>A DC infrastructure in this final stage is in a position to
          provide much better support for requirements that have been 
          recently formulated, mostly in the scope of newly created
          IETF working groups.</t>
          <t>In particular, it can support highly scalable VPNs and 
          multi-tenancy according to the key requirements defined in
          <xref target="I-D.ietf-nvo3-overlay-problem-statement"/>:
          <list style="symbols">
            <t>Traffic isolation, so that a tenant's traffic is not visible to 
            any other tenant.</t>
            <t>Address independence, so that one tenant's addressing scheme 
            does not collide with other tenants' addressing schemes or with 
            addresses used within the data center itself.</t> 
            <t>Support for the placement and migration of VMs anywhere within
            the data center, without being limited by DC network constraints
            such as the IP subnet boundaries of the underlying DC network.</t>
          </list></t>
          <t>With a pervasive IPv6 infrastructure, these goals can be achieved 
          by means of native addressing and direct interaction of the 
          applications with the network infrastructure of the datacenter, and 
          across multiple datacenters connected via WAN links. Virtual 
          networks can be constructed as a natural consequence of addressing 
          rules, traffic isolation can be guaranteed by routing mechanisms, 
          and migration can be directly supported by signaling protocols.</t>
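          <t>As a non-normative sketch of "virtual networks as a consequence
          of addressing rules", the fragment below maps a hypothetical
          16-bit tenant identifier onto the subnet field of an illustrative
          /48 documentation prefix; both the prefix and the mapping are
          assumptions for illustration only, not a prescribed scheme:</t>
          <figure><artwork><![CDATA[
```python
import ipaddress

# Hypothetical DC-wide prefix (documentation range); the /48 and the
# tenant-ID-to-subnet mapping are illustrative assumptions only.
DC_PREFIX = ipaddress.IPv6Network("2001:db8:100::/48")

def tenant_network(tenant_id: int) -> ipaddress.IPv6Network:
    """Map a 16-bit tenant ID onto the subnet field of the /48,
    yielding one /64 per tenant; isolation can then be enforced
    purely by routing/filtering on these prefixes."""
    if not 0 <= tenant_id < 2**16:
        raise ValueError("tenant ID must fit in 16 bits")
    base = int(DC_PREFIX.network_address)
    return ipaddress.IPv6Network((base | (tenant_id << 64), 64))

print(tenant_network(0x00a5))  # 2001:db8:100:a5::/64
```
]]></artwork></figure>
          <t>Traffic isolation then follows from routing and filtering on
          the resulting per-tenant prefixes.</t>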
          <t>On the other hand, service chaining is becoming established as a 
          technique for dynamically structuring network services, adapting 
          them to user requirements, provider policies, and network state. In 
          this model, service functions, whether physical or virtualized, are 
          not required to reside on the direct data path; traffic is instead 
          steered through the required service functions, wherever they are 
          deployed <xref target="I-D.ietf-sfc-problem-statement"/>.</t>
          <t>Service function chaining requires that packets in a flow 
          intended to follow a particular path be tagged by a classifier, so  
          that intermediate service nodes in the path can route them 
          accordingly. The use of flow labels can greatly simplify this 
          classification and allow a much simpler deployment of service 
          function chains. Furthermore, it offers much richer possibilities 
          both to network architects building chains and the paths inside 
          them and to application developers wishing to take advantage of 
          service chaining, since it makes it possible to provide rich 
          metadata for any given flow, generalizing the use cases described 
          in <xref target="RFC6294"/> and <xref target="RFC7098"/>.</t>        
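          <t>As a non-normative illustration of the classification step, the
          fragment below extracts the 20-bit Flow Label from the fixed IPv6
          header, the field a classifier or service node could key on; the
          sample header value is an assumption for illustration:</t>
          <figure><artwork><![CDATA[
```python
import struct

def flow_label(ipv6_header: bytes) -> int:
    """Return the 20-bit Flow Label from the first 4 octets of a
    fixed IPv6 header (version 4 bits, traffic class 8 bits,
    flow label 20 bits)."""
    (first_word,) = struct.unpack("!I", ipv6_header[:4])
    return first_word & 0xFFFFF

# Illustrative first word: version=6, traffic class=0, label=0x12345
hdr = struct.pack("!I", (6 << 28) | 0x12345)
print(hex(flow_label(hdr)))  # 0x12345
```
]]></artwork></figure>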
        </section>
    </section>
    
    <section title="Other Operational Considerations">
     <t>In this section we review some operational considerations related
     to addressing and management in a V6 DC infrastructure.</t>
     
     <section title="Addressing">
     <t>There are several considerations related to IPv6 addressing in the 
     DC. Many of these considerations are already documented in a variety of 
     IETF documents, and in general the recommendations and best practices 
     mentioned in them apply to IPv6 DC environments. However, we would like 
     to point out some topics that we consider important to mention.</t>
     <t>The first question DC managers often face is which type of IPv6
     address to use: Provider Aggregated (PA), Provider Independent (PI), 
     or Unique Local IPv6 Addresses (ULAs) <xref target="RFC4193"/>. 
     Regarding PA vs. PI, we concur with <xref target="RFC6883"/> and 
     <xref target="I-D.ietf-v6ops-enterprise-incremental-ipv6"/> that while 
     PI provides independence from the ISP and reduces renumbering issues, 
     it may bring other considerations such as an allocation fee, a request 
     process, and allocation maintenance with the Regional Internet Registry.
     In this respect, there is no specific recommendation to use either PI 
     or PA, as the choice depends on business and management factors as much 
     as on purely technical ones.</t>
     <t>ULAs should be used only for DC infrastructure that does not require 
     access to the public Internet; such devices may include database 
     servers, application servers, and the management interfaces of web 
     servers and network devices, among others. This practice may reduce 
     renumbering issues when PA addressing is used, as only publicly facing 
     devices would require an address change. We would also like to note 
     that although ULAs may provide some security, the main motivation for 
     their use should be address management.</t>
     <t>Another topic to discuss is the length of prefixes within the DC. In 
     general, we recommend the use of 64-bit subnet prefixes for each VLAN or 
     network segment used in the DC. Although subnets with prefixes longer 
     than 64 bits may work, the reader should understand that this may break 
     stateless address autoconfiguration, and at least manual configuration 
     must be employed. For details, please read <xref target="RFC5375"/>.</t>
     <t>Address plans should follow the principles of being hierarchical and 
     allowing address space aggregation. We recommend at least a /48 for 
     each data center. If the DC provides services that require 
     sub-assignment of address space, we do not offer a single recommendation 
     (e.g., requesting a /40 prefix from an RIR or ISP and assigning /48 
     prefixes to customers), as this may depend on non-technical factors as 
     well. Instead, we refer the reader to <xref target="RFC6177"/>.</t>
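     <t>The recommendations above (a /48 per data center, 64-bit subnet
     prefixes per segment) can be sketched with the Python standard
     ipaddress module; the prefix comes from the documentation range and is
     illustrative only:</t>
     <figure><artwork><![CDATA[
```python
import ipaddress

# Documentation prefix standing in for one data center's /48.
dc = ipaddress.IPv6Network("2001:db8:abcd::/48")

# A /48 yields 2**16 = 65536 possible /64 segments (VLANs, racks, ...).
segments = dc.subnets(new_prefix=64)
first = next(segments)
second = next(segments)
print(first, second)             # 2001:db8:abcd::/64 2001:db8:abcd:1::/64
print(2 ** (64 - dc.prefixlen))  # 65536
```
]]></artwork></figure>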
     <t>For point-to-point links please refer to the recommendations in <xref 
     target="RFC6164"/>.</t>
     </section>
     
     <section title="Management Systems and Applications">
     <t>Data-centers may use Internet Protocol address management (IPAM) 
     software, provisioning systems, and a variety of other software for 
     documentation and operations. It is important that these systems be 
     ready, and possibly modified, to support IPv6 in their data models. In 
     general, if IPv6 support has not previously been added to these 
     applications, changes may take some time, as they may involve not just 
     widening input fields but also modifying data models and migrating 
     data.</t>
     </section>
     
     <section title="Monitoring and Logging">
      <t>Monitoring and logging are critical operations in any network 
      environment, and they should be carried out at the same level for IPv6 
      and IPv4. Monitoring and management operations in a V6 DC are by no 
      means different from those in any other IPv6 network environment. It 
      is important to consider that the transport used to collect 
      information from network devices is orthogonal to the information 
      collected. For example, it is possible to collect data from IPv6 MIBs 
      using IPv4 transport; similarly, it is possible to collect IPv6 data 
      generated by NetFlow v9/IPFIX agents over IPv4 transport. Thus, the 
      important issue to address is that agents (i.e., network devices) are 
      able to collect data specific to IPv6.</t>
      
      <t>As a final note on monitoring, although IPv6 MIBs are supported by 
      SNMP versions 1 and 2, we recommend using SNMP version 3 instead.</t>
     </section>
     
     <section title="Costs">
      <t>It is quite possible that moving from a single-stack data-center 
      infrastructure to any of the IPv6 stages described in this document 
      will incur capital expenditures. These may include, but are not 
      confined to, routers, load balancers, firewalls, and software 
      upgrades, among others. However, the cost that concerns us most is 
      operational. Moving DC infrastructure operations from single-stack to 
      dual-stack may bring a variety of extra costs, such as application 
      development and testing, operational troubleshooting, and service 
      deployment. At the same time, these extra costs may be seen as savings 
      when moving from a dual-stack DC to an IPv6-only DC.</t>
      <t>Depending on the complexity of the DC network, provisioning, and 
      other factors, we estimate that the extra costs (and later savings) 
      may be between 15 and 20%.</t>
     </section>
      
    </section>

    <section title="Security Considerations">
      <t>A thorough collection of operational security aspects for IPv6
      networks is provided in <xref target="I-D.ietf-opsec-v6"/>. Most of 
      them, with the probable exception of those specific to residential 
      users, are applicable to the environment considered in this 
      document.</t>
      
      <section title="Neighbor Discovery Protocol attacks">
        <t>The first important issue that a V6 DC manager should be aware of 
        is attacks against the Neighbor Discovery Protocol 
        <xref target="RFC6583"/>. These attacks are similar to ARP attacks 
        <xref target="RFC4732"/> in IPv4, but exacerbated by the fact that 
        the common size of an IPv6 subnet is /64. In principle, an attacker 
        can fill the Neighbor Cache of the local router and starve its 
        memory and processing resources by sending multiple ND packets 
        requesting information about non-existent hosts. The result would be 
        the inability of the router to respond to ND requests, to update its 
        Neighbor Cache, and even to forward packets. The attack does not 
        need to be launched with malicious intent; it could simply be the 
        result of bad stack implementation behavior.</t>
        <t>R<xref target="RFC6583"/> mentions some options to mitigate the 
        effects of the attacks against NDP. For example filtering unused space, 
        minimizing subnet size when possible, tuning rate limits in the NDP 
        queue and to rely in router vendor implementations to better handle 
        resources and to prioritize NDP requests. </t>
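        <t>A back-of-envelope calculation illustrates the asymmetry behind
        this attack; the probe rate and cache size below are illustrative
        assumptions, not measured figures:</t>
        <figure><artwork><![CDATA[
```python
# A /64 leaves 2**64 possible interface identifiers.
addresses = 2 ** 64

# At an assumed probe rate of 1 million packets/s (illustrative),
# exhaustively sweeping the subnet is hopeless...
rate = 1_000_000
years = addresses / rate / (3600 * 24 * 365)
print(f"{years:.0f} years to sweep a /64")  # 584942 years

# ...yet the same rate trivially overwhelms a router whose neighbor
# cache holds, say, 64k INCOMPLETE entries: each probe to a
# non-existent host creates one entry until limits kick in.
cache_slots = 64 * 1024
print(f"cache filled in {cache_slots / rate:.3f} s")  # 0.066 s
```
]]></artwork></figure>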
      </section>
      
      <section title="Addressing">
        <t>Other important security considerations in a V6 DC are related to 
        addressing. Because of the large address space, it is commonly 
        thought that IPv6 is not vulnerable to reconnaissance techniques 
        such as scanning. Although that may be true for brute-force attacks, 
        <xref target="I-D.ietf-opsec-ipv6-host-scanning"/> shows some 
        techniques that may be employed to speed up and improve results when 
        discovering IPv6 addresses in a subnet. The use of virtual machines 
        and SLAAC aggravates this problem, due to the fact that they tend to 
        use automatically generated MAC addresses with well-known 
        patterns.</t>
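        <t>The predictability comes from the Modified EUI-64 construction,
        sketched below; the example MAC address (carrying a virtualization
        vendor's OUI) is illustrative:</t>
        <figure><artwork><![CDATA[
```python
def eui64_iid(mac: str) -> str:
    """Modified EUI-64 IID: insert ff:fe in the middle of the MAC
    and flip the universal/local bit of the first octet."""
    octets = [int(b, 16) for b in mac.split(":")]
    octets[0] ^= 0x02  # flip U/L bit
    iid = octets[:3] + [0xFF, 0xFE] + octets[3:]
    return ":".join(f"{iid[i] << 8 | iid[i+1]:x}" for i in range(0, 8, 2))

# VMs often share a vendor OUI (first 3 octets, here an illustrative
# virtualization vendor prefix), so only the last 24 bits vary --
# a far smaller search space than 2**64.
print(eui64_iid("00:50:56:12:34:56"))  # 250:56ff:fe12:3456
```
]]></artwork></figure>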
        <t>To mitigate address-scanning attacks, it is recommended to avoid 
        SLAAC; if it is used, stable privacy-enhanced addresses
        <xref target="I-D.ietf-6man-stable-privacy-addresses"/>
        should be the method of address generation. Also, for manually 
        assigned addresses, try to avoid low-byte IIDs (i.e., from 0 to 
        256), IPv4-based addresses, and wordy addresses, especially for 
        infrastructure without a fully qualified domain name.</t>
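        <t>A minimal sketch of the stable, privacy-enhanced generation
        method follows, with SHA-256 standing in for the PRF F() and all
        inputs illustrative; the actual method is specified in
        <xref target="I-D.ietf-6man-stable-privacy-addresses"/>:</t>
        <figure><artwork><![CDATA[
```python
import hashlib
import ipaddress

def stable_iid(prefix: str, if_name: str, secret: bytes,
               dad_counter: int = 0) -> str:
    """Sketch of F(Prefix, Net_Iface, DAD_Counter, Secret_Key):
    the same inputs always give the same IID (stable per subnet),
    but the IID reveals nothing about the MAC address."""
    net = ipaddress.IPv6Network(prefix)
    data = (net.network_address.packed + if_name.encode()
            + dad_counter.to_bytes(4, "big") + secret)
    iid = int.from_bytes(hashlib.sha256(data).digest()[:8], "big")
    return str(ipaddress.IPv6Address(int(net.network_address) | iid))

secret = b"local-secret-key"  # per-host secret, illustrative
a = stable_iid("2001:db8:1::/64", "eth0", secret)
b = stable_iid("2001:db8:2::/64", "eth0", secret)
print(a != b, a == stable_iid("2001:db8:1::/64", "eth0", secret))  # True True
```
]]></artwork></figure>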
        <t>Although manually assigned addresses are the preferred method for 
        a V6 DC, SLAAC and DHCPv6 may also be used for special reasons. 
        However, we recommend paying special attention to RA <xref 
        target="RFC6104"/> and DHCP <xref 
        target="I-D.ietf-opsec-dhcpv6-shield"/> hijack attacks. In these 
        kinds of attacks, the attacker deploys rogue routers sending RA 
        messages, or rogue DHCP servers, to inject bogus information and 
        possibly perform a man-in-the-middle attack. To mitigate this 
        problem, it is necessary to apply techniques in access switches such 
        as, at least, RA-Guard <xref target="RFC6105"/>.</t>
        <t>Another addressing-related topic we would like to mention is the 
        use of ULAs. As previously mentioned, although ULAs may be used to 
        hide hosts from the outside world, we do not recommend relying on 
        them as a security tool, but rather as a tool to make renumbering 
        easier.</t>
      </section>
      
      <section title="Edge filtering">
        <t>To avoid being used as a source of amplification attacks, it is 
        important to follow the rules of BCP 38 on ingress filtering. At the 
        same time, it is important to filter out at the network border all 
        unicast traffic and routing announcements that should not be routed 
        on the Internet, commonly known as "bogon prefixes".</t>
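        <t>A non-normative sketch of both checks, with illustrative prefixes
        and a deliberately abbreviated bogon list:</t>
        <figure><artwork><![CDATA[
```python
import ipaddress

# Illustrative customer-facing interface: only sources inside the
# delegated prefix may enter (BCP 38 / ingress filtering).
CUSTOMER_PREFIX = ipaddress.IPv6Network("2001:db8:abcd::/48")

# A few well-known non-routable ranges ("bogons"); a real list
# would be much larger and kept up to date.
BOGONS = [
    ipaddress.IPv6Network("::/128"),    # unspecified
    ipaddress.IPv6Network("fc00::/7"),  # unique local
    ipaddress.IPv6Network("fe80::/10"), # link-local
]

def accept_ingress(src: str) -> bool:
    """BCP 38: accept only sources within the delegated prefix."""
    return ipaddress.IPv6Address(src) in CUSTOMER_PREFIX

def accept_at_border(src: str) -> bool:
    """Border filter: drop traffic sourced from bogon prefixes."""
    addr = ipaddress.IPv6Address(src)
    return not any(addr in net for net in BOGONS)

print(accept_ingress("2001:db8:abcd::10"))  # True
print(accept_ingress("2001:db8:ffff::10"))  # False
print(accept_at_border("fc00::1"))          # False
```
]]></artwork></figure>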
      </section>
        
      <section title="Final Security Remarks">
      <t>Finally, let us emphasize the need for careful configuration of 
      access control rules at the translation points. This is especially 
      sensitive in infrastructures at the dual-stack stage, as the 
      translation points are potentially distributed, and when protocol 
      translation is offered as an external service, since there can be 
      operational mismatches.</t>
      </section>
    </section>

    <section anchor="iana" title="IANA Considerations">
      <t>None.</t>
    </section>

    <section title="Acknowledgements">
      <t>We would like to thank Tore Anderson, Wes George, Ray Hunter,
      Joel Jaeggli, Fred Baker, Lorenzo Colitti, Dan York, Carlos Martinez,
      Lee Howard, Alejandro Acosta, Alexis Munoz, Nicolas Fiumarelli, Santiago 
      Aggio and Hans Velez for their questions, suggestions, reviews and 
      comments.</t>
    </section>
    </section>
  </middle>

  <back>
    <references title="Informative References">
      &I-D.sunq-v6ops-contents-transition;
      &I-D.ietf-opsec-v6;
      &I-D.ietf-v6ops-enterprise-incremental-ipv6;
      &I-D.ietf-nvo3-overlay-problem-statement;
      &I-D.ietf-sfc-problem-statement;
      &I-D.ietf-opsec-ipv6-host-scanning;
      &I-D.ietf-6man-stable-privacy-addresses;
      &I-D.ietf-opsec-dhcpv6-shield;
      <reference anchor="V6DCS" target="https://ripe64.ripe.net/presentations/67-20120417-RIPE64-The_Case_for_IPv6_Only_Data_Centres.pdf">
        <front>
          <title>The case for IPv6-only data centres</title>
          <author fullname="Tore Anderson"><organization></organization></author>
        </front>
      </reference>
      &RFC4193;
      &RFC4732;
      &RFC5375;
      &RFC6052;
      &RFC6104;
      &RFC6105;
      &RFC6145;
      &RFC6164;
      &RFC6177;
      &RFC6294;
      &RFC6555;
      &RFC6583;
      &RFC6883;
      &RFC7098;
    </references>
  </back>
</rfc>