Internet Draft                                                   Huawei
Category: Informational                                          M. Toy
                                                                Comcast
                                                               A. Isaac
                                                              Bloomberg
                                                              V. Manral
                                                        Hewlett-Packard
                                                              L. Dunbar
                                                                 Huawei
Expires: January 2014                                     July 11, 2013

           Use Cases for DC Network Virtualization Overlays

                       draft-ietf-nvo3-use-case-02
Abstract

   This document describes the DC Network Virtualization (NVO3) use
   cases that may be deployed in various data centers and apply to
   different applications.
Status of this Memo

   This Internet-Draft is submitted to IETF in full conformance with
   the provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF), its areas, and its working groups. Note that
   other groups may also distribute working documents as Internet-
   Drafts.
   months and may be updated, replaced, or obsoleted by other documents
   at any time. It is inappropriate to use Internet-Drafts as reference
   material or to cite them other than as "work in progress."

   The list of current Internet-Drafts can be accessed at
   http://www.ietf.org/ietf/1id-abstracts.txt.

   The list of Internet-Draft Shadow Directories can be accessed at
   http://www.ietf.org/shadow.html.
   This Internet-Draft will expire in January 2014.
Copyright Notice

   Copyright (c) 2013 IETF Trust and the persons identified as the
   document authors. All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document. Please review these documents
   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
   document are to be interpreted as described in RFC-2119 [RFC2119].
Table of Contents

   1. Introduction
      1.1. Contributors
      1.2. Terminology
   2. Basic Virtual Networks in a Data Center
   3. Interconnecting DC Virtual Network and External Networks
      3.1. DC Virtual Network Access via Internet
      3.2. DC VN and Enterprise Sites interconnected via SP WAN
   4. DC Applications Using NVO3
      4.1. Supporting Multi Technologies and Applications in a DC
      4.2. Tenant Network with Multi-Subnets or across multi DCs
      4.3. Virtual Data Center (vDC)
   5. OAM Considerations
   6. Summary
   7. Security Considerations
   8. IANA Considerations
   9. Acknowledgements
   10. References
      10.1. Normative References
      10.2. Informative References
   Authors' Addresses
1. Introduction
   Server virtualization has changed the IT industry in terms of
   efficiency, cost, and the speed of providing new applications and/or
   services. However, the problems in today's data center networks
   hinder the support of cloud applications and multi-tenant networks
   [NVO3PRBM]. The goal of DC Network Virtualization Overlays, i.e.
   NVO3, is to decouple the communication among tenant systems from DC
   physical networks and to allow one physical network infrastructure
   to provide: 1) traffic isolation among tenant virtual networks over
   the same physical network; 2) an independent address space in each
   virtual network and address isolation from the infrastructure's; 3)
   flexible VM placement and movement from one server to another
   without VM address and configuration change. These characteristics
   will help address the issues in today's cloud applications
   [NVO3PRBM].
   Although NVO3 enables a true network virtualization environment, an
   NVO3 solution has to address the communication between a virtual
   network and a physical network. This is because: 1) many DCs that
   need to provide network virtualization are currently running over
   physical networks, and the migration will happen in steps; 2) many
   DC applications are served to Internet users and run directly on
   physical networks; 3) some applications are CPU bound, such as Big
   Data analytics, and may not need the virtualization capability.
   This document describes general NVO3 use cases that apply to
   various data centers. Three types of use cases are described here:
   o  Basic virtual networks in a DC. A virtual network connects many
      tenant systems in a Data Center site (or more) and forms one L2
      or L3 communication domain. Many virtual networks run over the
      same DC physical network. This case may be used for DC internal
      applications that constitute the DC East-West traffic.

   o  DC virtual network access from external networks. A DC provider
      offers a secure DC service to an enterprise customer and/or
      Internet users. An enterprise customer may use a traditional VPN
      provided by a carrier or an IPsec tunnel over the Internet to
      connect to a virtual network within a provider DC site. This
      mainly constitutes DC North-South traffic.

   o  DC applications or services that may use NVO3. Three scenarios
      are described: 1) use NVO3 and other network technologies to
      build a tenant network; 2) construct several virtual networks as
      a tenant network; 3) apply NVO3 to a virtual DC (vDC) service.
   The document uses the architecture reference model defined in
   [NVO3FRWK] to describe the use cases.
1.1. Contributors

   Vinay Bannai
   PayPal
   2211 N. First St,
   San Jose, CA 95131
   Email: ramk@brocade.com
1.2. Terminology

   This document uses the terminologies defined in [NVO3FRWK] and
   [RFC4364]. Some additional terms used in the document are listed
   here.
   CPE: Customer Premises Equipment
   DMZ: Demilitarized Zone. A computer or small subnetwork that sits
   between a trusted internal network, such as a corporate private
   LAN, and an untrusted external network, such as the public
   Internet.
   DNS: Domain Name Service

   NAT: Network Address Translation

   VIRB: Virtual Integrated Routing/Bridging
   Note that a virtual network in this document is an overlay virtual
   network instance.
2. Basic Virtual Networks in a Data Center
   A virtual network may exist within a DC. The network enables
   communication among Tenant Systems (TSs) that are in a Closed User
   Group (CUG). A TS may be a physical server/device or a virtual
   machine (VM) on a server. A network virtual edge (NVE) may co-exist
   with Tenant Systems, i.e. on the same end-device, or exist on a
   different device, e.g. a top of rack switch (ToR). A virtual
   network has a unique virtual network identifier (which may be
   locally or globally unique) for an NVE to properly differentiate it
   from other virtual networks.
   The TSs attached to the same NVE may belong to the same or
   different virtual networks. Multiple CUGs can be constructed in
   such a way that the policies are enforced when the TSs in one CUG
   communicate with the TSs in other CUGs. An NVE provides the
   reachability for Tenant Systems in a CUG, and may also have the
   policies and provide the reachability for Tenant Systems in
   different CUGs (see Section 4.2). Furthermore, in a DC, operators
   may construct many tenant networks that have no communication
   between them at all. In this case, each tenant network may use its
   own address space. One tenant network may have one or more virtual
   networks.
   A Tenant System may also be configured with multiple addresses and
   participate in multiple virtual networks, i.e. use a different
   address in each virtual network. For example, a TS may be a NAT GW
   or a firewall for multiple CUGs.
   Network Virtualization Overlay in this context means that a virtual
   network is implemented as an overlay, i.e. traffic from one NVE to
   another is sent via a tunnel [NVO3FRWK]. This architecture
   decouples the tenant system address scheme and configuration from
   the infrastructure's, which brings great flexibility for VM
   placement and mobility. It also makes the transit nodes in the
   infrastructure unaware of the existence of the virtual networks.
   One tunnel may carry the traffic belonging to different virtual
   networks; a virtual network identifier is used for traffic
   demultiplexing.
   A virtual network may be an L2 or L3 domain. The TSs attached to an
   NVE may belong to different virtual networks, each of which may be
   L2 or L3. A virtual network may carry unicast traffic and/or
   broadcast/multicast/unknown (BUM) traffic from/to tenant systems.
   There are several ways to transport BUM traffic [NVO3MCAST].
   It is worth mentioning two distinct cases here. The first is that
   the TSs and the NVE are co-located on the same end device, which
   means that the NVE can be made aware of the TS state at any time
   via an internal API. The second is that the TSs and the NVE are
   remotely connected, i.e. connected via a switched network or a
   point-to-point link. In this case, a protocol is necessary for the
   NVE to learn the TS state.
   One virtual network may connect many TSs that attach to many
   different NVEs. TS dynamic placement and mobility result in
   frequent changes of the TS and NVE bindings. The TS reachability
   update mechanism needs to be fast enough not to cause any service
   interruption. The capability of supporting many TSs in a virtual
   network and many more virtual networks in a DC is critical for an
   NVO3 solution.
   If a virtual network spans multiple DC sites, one design is to
   allow the network to seamlessly span the sites without DC gateway
   routers' termination. In this case, the tunnel between a pair of
   NVEs may in turn be tunneled over other intermediate tunnels over
   the Internet or other WANs, or the intra-DC and inter-DC tunnels
   may be stitched together to form an end-to-end virtual network
   across DCs.
3. Interconnecting DC Virtual Network and External Networks
   Customers (an enterprise or individuals) who utilize the DC
   provider's compute and storage resources to run their applications
   need to access their systems hosted in a DC through the Internet or
   Service Providers' WANs. A DC provider may construct a virtual
   network that connects all the resources designated for a customer
   and allow the customer to access the resources via a virtual
   gateway (vGW). This, in turn, becomes the case of interconnecting a
   DC virtual network and the network at the customer site(s) via the
   Internet or WANs. Two cases are described here.
3.1. DC Virtual Network Access via Internet
   A customer can connect to a DC virtual network via the Internet in
   a secure way. Figure 1 illustrates this case. A virtual network is
   configured on NVE1 and NVE2, and the two NVEs are connected via an
   L3 tunnel in the Data Center. A set of tenant systems are attached
   to NVE1 on a server. NVE2 resides on a DC Gateway device. NVE2
   terminates the tunnel and uses the VNID on the packet to pass the
   packet to the corresponding vGW entity on the DC GW. A customer can
   access their systems, i.e. TS1 or TSn, in the DC via the Internet
   by using an IPsec tunnel [RFC4301]. The IPsec tunnel is configured
   between the vGW and the customer gateway at the customer site.
   Either static routes or BGP may be used for peer routes. The vGW
   provides IPsec functionality such as the authentication scheme and
   encryption. Note that: 1) some vGW functions such as firewall and
   load balancer may also be performed by locally attached network
   appliance devices; 2) the virtual network in the DC may use a
   different address space than external users, in which case the vGW
   needs to provide the NAT function; 3) more than one IPsec tunnel
   can be configured for redundancy; 4) the vGW may be implemented on
   a server or VM. In this case, IP tunnels or IPsec tunnels may be
   used over the DC infrastructure.
      Server+---------------+
            |   TS1 TSn     |
            |    |...|      |
            |  +-+---+-+    |          Customer Site
            |  | NVE1  |    |            +-----+
            |  +---+---+    |            | CGW |
            +------+--------+            +--+--+
                   |                        *
              L3 Tunnel                     *
                   |                        *
      DC GW +------+---------+    .--.   .--.
            |  +---+---+     |   (    '*    '.--.
            |  | NVE2  |     | .-.'     *        )
            |  +---+---+     |(     *  Internet  )
            |  +---+---+.    | (    *           /
            |  | vGW   | * * * * * * * *  '-'  '-'
            |  +-------+ |   |  IPsec   \../ \.--/'
            |  +--------+    |  Tunnel
            +----------------+
              DC Provider Site

         Figure 1 DC Virtual Network Access via Internet
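   Note 2 above (overlapping address space handled by NAT at the vGW)
   can be sketched as a per-VNID one-to-one prefix translation applied
   before traffic enters the IPsec tunnel. The VNID, prefixes, and
   peer address below are hypothetical:

```python
import ipaddress

# Hypothetical per-tenant vGW state on the DC gateway: each VNID maps
# to a vGW entity holding its IPsec peer and a NAT prefix pair.
VGW_TABLE = {
    5001: {
        "ipsec_peer": "203.0.113.10",
        "inside": ipaddress.ip_network("10.0.0.0/24"),       # tenant space
        "outside": ipaddress.ip_network("198.51.100.0/24"),  # external space
    },
}


def nat_outbound(vnid: int, src_ip: str) -> str:
    """Translate a tenant-internal source address to the external
    prefix before the packet enters the IPsec tunnel (1:1 NAT)."""
    entry = VGW_TABLE[vnid]
    # Offset of the host within the inside prefix, re-applied outside.
    host = int(ipaddress.ip_address(src_ip)) - int(
        entry["inside"].network_address
    )
    return str(entry["outside"].network_address + host)
```

   A stateful NAT (per-flow port translation) would work equally well
   here; the 1:1 prefix form just keeps the sketch short.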
3.2. DC VN and Enterprise Sites interconnected via SP WAN
   An enterprise company may lease the VM and storage resources hosted
   in a 3rd party DC to run its applications. For example, the company
   may run its web applications at 3rd party sites but run backend
   applications in its own DCs. The web applications and backend
   applications need to communicate privately. The 3rd party DC may
   construct one or more virtual networks to connect all the VMs and
   storage running the enterprise web applications. The company may
   buy a p2p private tunnel such as VPWS from an SP to interconnect
   its site and the virtual network at the 3rd party site. A protocol
   is necessary for exchanging the reachability between the two
   peering points, and the traffic is carried over the tunnel. If an
   enterprise has multiple sites, it may buy multiple p2p tunnels to
   form a mesh interconnection among the sites and the 3rd party site.
   This requires each site to peer with all other sites for route
   distribution.
| Another way to achieve multi-site interconnection is to use Service | Another way to achieve multi-site interconnection is to use Service | |||
| Provider (SP) VPN services, in which each site only peers with SP PE | Provider (SP) VPN services, in which each site only peers with SP PE | |||
| site. A DC Provider and VPN SP may build a NVO3 network (VN) and VPN | site. A DC Provider and VPN SP may build a DC virtual network (VN) | |||
| independently. The VN provides the networking for all the related | and VPN independently. The VPN interconnects several enterprise | |||
| TSes within the provider DC. The VPN interconnects several | sites and the DC virtual network at DC site, i.e. VPN site. The DC | |||
| enterprise sites, i.e. VPN sites. The DC provider and VPN SP further | VN and SP VPN interconnect via a local link or a tunnel. The control | |||
| connect the VN and VPN at the DC GW/ASBR and SP PE/ASBR. Several | plan interconnection options are described in RFC4364 [RFC4364]. In | |||
| options for the interconnection of the VN and VPN are described in | Option A with VRF-LITE [VRF-LITE], both DC GW and SP PE maintain a | |||
| RFC4364 [RFC4364]. In Option A with VRF-LITE [VRF-LITE], both DC GW | routing/forwarding table, and perform the table lookup in forwarding. | |||
| and SP PE maintain the routing/forwarding table, and perform the | In Option B, DC GW and SP PE do not maintain the forwarding table, | |||
| table lookup in forwarding. In Option B, DC GW and SP PE do not | it only maintains the VN and VPN identifier mapping, and swap the | |||
| maintain the forwarding table, it only maintains the VN and VPN | identifier on the packet in the forwarding process. Both option A | |||
| identifier mapping, and exchange the identifier on the packet in the | and B requires tunnel termination. In option C, DC GW and SP PE use | |||
| forwarding process. In option C, DC GW and SP PE use the same | the same identifier for VN and VPN, and just perform the tunnel | |||
| identifier for VN and VPN, and just perform the tunnel stitching, | stitching, i.e. change the tunnel end points. Each option has | |||
| i.e. change the tunnel end points. Each option has pros/cons (see | pros/cons (see RFC4364) and has been deployed in SP networks | |||
| RFC4364) and has been deployed in SP networks depending on the | depending on the applications. The BGP protocols may be used in | |||
| applications. The BGP protocols may be used in these options for | these options for route distribution. Note that if the provider DC | |||
| route distribution. Note that if the provider DC is the SP Data | is the SP Data Center, the DC GW and PE in this case may be on one | |||
| Center, the DC GW and PE in this case may be on one device. | device. | |||
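The Option B behavior above, keeping only an identifier mapping and swapping it in the forwarding path, can be sketched roughly as follows. The table entries, VNID values, and function names are all illustrative, not taken from RFC4364 or this draft:

```python
# Hypothetical Option B border node: no full forwarding table, only a
# VN-to-VPN identifier mapping that is swapped on each forwarded packet.
VN_TO_VPN = {5001: 1001, 5002: 1002}             # NVO3 VNID -> SP VPN label (invented)
VPN_TO_VN = {v: k for k, v in VN_TO_VPN.items()}  # reverse direction

def swap_identifier(packet: dict, toward_vpn: bool) -> dict:
    """Rewrite the virtual-network identifier as the packet crosses the border."""
    table = VN_TO_VPN if toward_vpn else VPN_TO_VN
    ident = packet["vn_id"]
    if ident not in table:
        raise KeyError(f"no VN/VPN mapping for identifier {ident}")
    return {**packet, "vn_id": table[ident]}

# A packet leaving the DC VN toward the SP VPN gets its identifier rewritten:
out = swap_identifier({"vn_id": 5001, "payload": b"..."}, toward_vpn=True)
```

The same dictionary pair also illustrates why Option B scales better than Option A at the border: state grows with the number of virtual networks, not with the number of routes inside them.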
| This configuration allows the enterprise networks to communicate | This configuration allows the enterprise networks to communicate |
| with the tenant systems attached to the VN in a provider DC without | with the tenant systems attached to the VN in a provider DC without |
| interfering with DC provider underlying physical networks and other | interfering with DC provider underlying physical networks and other | |||
| virtual networks in the DC. The enterprise may use its own address | virtual networks in the DC. The enterprise may use its own address | |||
| space on the tenant systems attached to the VN. The DC provider can | space on the tenant systems in the VN. The DC provider can manage | |||
| manage the VMs and storage attachment to the VN for the enterprise | which VMs and storage attach to the VN. The enterprise customer |
| customer. The enterprise customer can determine and run their | manages what applications to run on the VMs in the VN. See Section 4 | |||
| applications on the VMs. See section 4 for more. | for more. | |||
| The interesting feature in this use case is that the VN and compute | The interesting feature in this use case is that the VN and compute | |||
| resource are managed by the DC provider. The DC operator can place | resource are managed by the DC provider. The DC operator can place | |||
| them at any location without notifying the enterprise and WAN SP | them at any server without notifying the enterprise and WAN SP | |||
| because the DC physical network is completely isolated from the | because the DC physical network is completely isolated from the | |||
| carrier and enterprise network. Furthermore, the DC operator may | carrier and enterprise network. Furthermore, the DC operator may | |||
| move the VMs assigned to the enterprise from one server to another in | move the VMs assigned to the enterprise from one server to another in |
| the DC without the enterprise customer's awareness, i.e. no impact on | the DC without the enterprise customer's awareness, i.e. no impact on |
| the enterprise 'live' applications running on these resources. Such | the enterprise 'live' applications running on these resources. Such |
| advanced features bring DC providers great benefits in serving these | advanced features bring DC providers great benefits in serving cloud | |||
| kinds of applications but also add some requirements for NVO3 | applications but also add some requirements for NVO3 [NVO3PRBM]. | |||
| [NVO3PRBM]. | ||||
| 4. DC Applications Using NVO3 | 4. DC Applications Using NVO3 | |||
| NVO3 brings DC operators the flexibility in designing and deploying | NVO3 brings DC operators the flexibility in designing and deploying | |||
| different applications in an end-to-end virtualization environment, | different applications in an end-to-end virtualization overlay | |||
| where the operators not need worry about the constraints of the | environment, where the operators no longer need to worry about the | |||
| physical network configuration in the Data Center. DC provider may | constraints of the DC physical network configuration when creating | |||
| use NVO3 in various ways and also use it in conjunction with | VMs and configuring a virtual network. A DC provider may use NVO3 in |
| physical networks in DC for many reasons. This section highlights | various ways and also use it in conjunction with physical |
| some use cases, but is not limited to them. | use cases. |
| use cases. | ||||
| 4.1. Supporting Multi Technologies and Applications in a DC | 4.1. Supporting Multi Technologies and Applications in a DC | |||
| Servers deployed in a large data center are most likely rolled in at | Servers deployed in a large data center are most likely rolled in at |
| different times and may have different capacities/features. Some | different times and may have different capacities/features. Some | |||
| servers may be virtualized, some may not; some may be equipped with | servers may be virtualized, some may not; some may be equipped with | |||
| virtual switches, some may not. For the ones equipped with | virtual switches, some may not. For the ones equipped with | |||
| hypervisor based virtual switches, some may support VxLAN [VXLAN] | hypervisor based virtual switches, some may support VxLAN [VXLAN] | |||
| encapsulation, some may support NVGRE encapsulation [NVGRE], and | encapsulation, some may support NVGRE encapsulation [NVGRE], and | |||
| some may not support any types of encapsulation. To construct a | some may not support any types of encapsulation. To construct a | |||
| tenant virtual network among these servers and the ToR switches, the | tenant network among these servers and the ToR switches, the operator |
| operator may construct one virtual network overlay and one virtual | may construct one virtual network and one traditional VLAN network; or |
| network w/o overlay, or two virtual network overlays with different | two virtual networks, one using VxLAN encapsulation and the other |
| implementations. For example, one virtual network overlay uses VxLAN | NVGRE. |
| encapsulation and another virtual network w/o overlay uses | ||||
| traditional VLAN or another virtual network overlay uses NVGRE. | ||||
| The gateway device or virtual gateway on a device may be used. The | In these cases, a gateway device or virtual GW is used to | |||
| gateway participates in both virtual networks. It performs the | participate in multiple virtual networks. It performs the packet |
| packet encapsulation/decapsulation and may also perform address | encapsulation/decapsulation and may also perform address mapping or | |||
| mapping or translation, etc. | translation, etc. |
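As a concrete illustration of the encapsulation/decapsulation step such a gateway performs, the sketch below builds and parses the 8-byte VxLAN header (flags byte plus 24-bit VNI) defined in the [VXLAN] draft. The outer UDP/IP headers and any NVGRE translation are omitted for brevity; this is a sketch, not a complete data-plane implementation:

```python
import struct

VXLAN_FLAGS = 0x08000000  # "I" bit set: the VNI field is valid

def vxlan_encap(vni: int, inner_frame: bytes) -> bytes:
    """Prepend the 8-byte VxLAN header (flags word + VNI<<8) to an L2 frame."""
    return struct.pack("!II", VXLAN_FLAGS, vni << 8) + inner_frame

def vxlan_decap(payload: bytes) -> tuple[int, bytes]:
    """Strip the VxLAN header, returning (VNI, inner L2 frame)."""
    flags, vni_field = struct.unpack("!II", payload[:8])
    if not flags & VXLAN_FLAGS:
        raise ValueError("VNI-valid flag not set")
    return vni_field >> 8, payload[8:]
```

A gateway bridging a VxLAN segment and a non-overlay VLAN segment would call `vxlan_decap` in one direction and `vxlan_encap` in the other, mapping the VNI to and from a VLAN ID.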
| A data center may be also constructed with multi-tier zones. Each | A data center may be also constructed with multi-tier zones. Each | |||
| zone has different access permissions and runs different applications. | zone has different access permissions and runs different applications. |
| For example, the three-tier zone design has a front zone (Web tier) | For example, the three-tier zone design has a front zone (Web tier) | |||
| with Web applications, a mid zone (application tier) with service | with Web applications, a mid zone (application tier) with service | |||
| applications such as payment and booking, and a back zone (database | applications such as payment and booking, and a back zone (database | |||
| tier) with data. External users are only able to communicate with | tier) with data. External users are only able to communicate with |
| the web application in the front zone. In this case, the | the Web application in the front zone. In this case, the | |||
| communication between the zones MUST pass through the security | communication between the zones MUST pass through the security | |||
| GW/firewall. The network virtualization may be used in each zone. If | GW/firewall. One virtual network may be configured in each zone and | |||
| individual zones use the different implementations, the GW needs to | a GW is used to interconnect two virtual networks. If individual | |||
| support these implementations as well. | zones use the different implementations, the GW needs to support | |||
| these implementations as well. | ||||
| 4.2. Tenant Network with Multi-Subnets or across multi DCs | 4.2. Tenant Network with Multi-Subnets or across multi DCs | |||
| A tenant network may contain multiple subnets. DC operators may | A tenant network may contain multiple subnets. The DC physical | |||
| construct multiple tenant networks. The access policy for inter- | network needs to support connectivity for many tenant networks. The |
| subnets is often necessary. To benefit the policy management, the | inter-subnets policies may be placed at some designated gateway | |||
| policies may be placed at some designated gateway devices only. Such | devices only. Such design requires the inter-subnet traffic to be | |||
| design requires the inter-subnet traffic MUST be sent to one of the | sent to one of the gateways first for the policy checking, which may | |||
| gateways first for the policy checking. However this may cause | cause traffic hairpin at the gateway in a DC. It is desirable that | |||
| traffic hairpin on the gateway in a DC. It is desirable that an NVE | an NVE can hold some policies and be able to forward inter-subnet | |||
| can hold some policy and be able to forward inter-subnet traffic | traffic directly. To reduce NVE burden, the hybrid design may be | |||
| directly. To reduce NVE burden, the hybrid design may be deployed, | deployed, i.e. an NVE can perform forwarding for the selected inter- | |||
| i.e. an NVE can perform forwarding for the selected inter-subnets | subnets and the designated GW performs for the rest. For example, | |||
| and the designated GW performs for the rest. For example, each NVE | each NVE performs inter-subnet forwarding for a tenant, and the | |||
| performs inter-subnet forwarding for a tenant, and the designated GW | designated GW is used for inter-subnet traffic from/to the different | |||
| is used for inter-subnet traffic from/to the different tenant | tenant networks. | |||
| networks. | ||||
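The hybrid design above, where an NVE routes selected inter-subnet pairs itself and hands the rest to the designated gateway, can be summarized in a short sketch. The subnet pairs and gateway name are invented for illustration:

```python
# Hypothetical NVE forwarding decision for inter-subnet traffic.
# Pairs this NVE is allowed to route directly (same tenant, local policy):
LOCAL_POLICY = {
    ("10.1.1.0/24", "10.1.2.0/24"),
    ("10.1.2.0/24", "10.1.1.0/24"),
}
DESIGNATED_GW = "gw-1"  # handles everything else, e.g. inter-tenant traffic

def next_hop(src_subnet: str, dst_subnet: str) -> str:
    """Return where this NVE sends an inter-subnet packet."""
    if (src_subnet, dst_subnet) in LOCAL_POLICY:
        return "local-route"   # NVE applies policy and forwards directly
    return DESIGNATED_GW       # traffic hairpins via the designated gateway
```

The trade-off is exactly the one in the text: the more pairs appear in the NVE's local policy set, the less hairpinning at the gateway, at the cost of policy state on every NVE.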
| A tenant network may span across multiple Data Centers in distance. | A tenant network may span across multiple Data Centers in distance. | |||
| DC operators may want an L2VN within each DC and L3VN between DCs | DC operators may configure an L2 VN within each DC and an L3 VN | |||
| for a tenant network. L2 bridging has the simplicity and endpoint | between DCs for a tenant network. For this configuration, the | |||
| awareness while L3 routing has advantages in policy based routing, | virtual L2/L3 gateway can be implemented on the DC GW device. Figure 2 |
| aggregation, and scalability. For this configuration, the virtual | ||||
| L2/L3 gateway can be implemented on DC GW device. Figure 2 | ||||
| illustrates this configuration. | illustrates this configuration. | |||
| Figure 2 depicts two DC sites. The site A constructs an L2VN with | Figure 2 depicts two DC sites. The site A constructs one L2 VN, say | |||
| NVE1, NVE2, and NVE5. NVE1 and NVE2 reside on the servers where the | L2VNa, on NVE1, NVE2, and NVE5. NVE1 and NVE2 reside on the servers |
| tenant systems are created. NVE5 resides on the DC GW device. The | which host multiple tenant systems. NVE5 resides on the DC GW device. |
| site Z has a similar configuration with NVE3 and NVE4 on the servers | The site Z has a similar configuration with L2VNz on NVE3, NVE4, and |
| and NVE6 on the DC GW. An L3VN is configured between the NVE5 at | NVE6. One L3 VN, say L3VNx, is configured on the NVE5 at site A and | |||
| site A and the NVE6 at site Z. An internal Virtual Integrated | the NVE6 at site Z. An internal Virtual Interface of Routing and | |||
| Routing and Bridging (VIRB) is used between L2VNI and L3VNI on NVE5 | Bridging (VIRB) is used between L2VNI and L3VNI on NVE5 and NVE6, | |||
| and NVE6. The L2VNI is the MAC/NVE mapping table and the L3VNI is | respectively. The L2VNI is the MAC/NVE mapping table and the L3VNI | |||
| the IP prefix/NVE mapping table. A packet to the NVE5 from L2VN will | is the IP prefix/NVE mapping table. A packet to the NVE5 from L2VNa | |||
| be decapsulated and converted into an IP packet and then | will be decapsulated and converted into an IP packet and then | |||
| encapsulated and sent to the site Z. | encapsulated and sent to the site Z. The policies can be checked at | |||
| VIRB. | ||||
| Note that both the L2VNs and L3VN in Figure 2 are encapsulated and | Note that the L2VNa, L2VNz, and L3VNx in Figure 2 are overlay | |||
| carried over within DC and across WAN networks, respectively. | virtual networks. | |||
| NVE5/DCGW+------------+ +-----------+NVE6/DCGW | NVE5/DCGW+------------+ +-----------+NVE6/DCGW | |||
| | +-----+ | '''''''''''''''' | +-----+ | | | +-----+ | '''''''''''''''' | +-----+ | | |||
| | |L3VNI+----+' L3VN '+---+L3VNI| | | | |L3VNI+----+' L3VNx '+---+L3VNI| | | |||
| | +--+--+ | '''''''''''''''' | +--+--+ | | | +--+--+ | '''''''''''''''' | +--+--+ | | |||
| | |VIRB | | VIRB| | | | |VIRB | | VIRB| | | |||
| | +--+---+ | | +---+--+ | | | +--+---+ | | +---+--+ | | |||
| | |L2VNIs| | | |L2VNIs| | | | |L2VNIs| | | |L2VNIs| | | |||
| | +--+---+ | | +---+--+ | | | +--+---+ | | +---+--+ | | |||
| +----+-------+ +------+----+ | +----+-------+ +------+----+ | |||
| ''''|'''''''''' ''''''|''''''' | ''''|'''''''''' ''''''|''''''' | |||
| ' L2VN ' ' L2VN ' | ' L2VNa ' ' L2VNz ' | |||
| NVE1/S ''/'''''''''\'' NVE2/S NVE3/S'''/'''''''\'' NVE4/S | NVE1/S ''/'''''''''\'' NVE2/S NVE3/S'''/'''''''\'' NVE4/S | |||
| +-----+---+ +----+----+ +------+--+ +----+----+ | +-----+---+ +----+----+ +------+--+ +----+----+ | |||
| | +--+--+ | | +--+--+ | | +---+-+ | | +--+--+ | | | +--+--+ | | +--+--+ | | +---+-+ | | +--+--+ | | |||
| | |L2VNI| | | |L2VNI| | | |L2VNI| | | |L2VNI| | | | |L2VNI| | | |L2VNI| | | |L2VNI| | | |L2VNI| | | |||
| | ++---++ | | ++---++ | | ++---++ | | ++---++ | | | ++---++ | | ++---++ | | ++---++ | | ++---++ | | |||
| +--+---+--+ +--+---+--+ +--+---+--+ +--+---+--+ | +--+---+--+ +--+---+--+ +--+---+--+ +--+---+--+ | |||
| |...| |...| |...| |...| | |...| |...| |...| |...| | |||
| Tenant Systems Tenant Systems | Tenant Systems Tenant Systems | |||
| DC Site A DC Site Z | DC Site A DC Site Z | |||
| Figure 2 Tenant Virtual Network with Bridging/Routing | Figure 2 Tenant Virtual Network with Bridging/Routing | |||
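A toy model of the VIRB forwarding step in Figure 2: a frame decapsulated from L2VNa is converted to an IP packet and routed via the L3VNI's IP prefix/NVE mapping toward site Z. All table entries below are invented for illustration:

```python
import ipaddress

# L3VNI at NVE5: IP prefix -> NVE mapping (illustrative entries only).
L3VNI = {
    ipaddress.ip_network("192.0.2.0/24"): "NVE6",     # tenant subnet at site Z
    ipaddress.ip_network("198.51.100.0/24"): "NVE5",  # tenant subnet at site A
}

def route_via_l3vni(dst_ip: str) -> str:
    """Longest-prefix match over the L3VNI to select the egress NVE."""
    addr = ipaddress.ip_address(dst_ip)
    matches = [net for net in L3VNI if addr in net]
    if not matches:
        raise LookupError(f"no L3VNI route for {dst_ip}")
    return L3VNI[max(matches, key=lambda net: net.prefixlen)]
```

In the figure's terms, a packet arriving at NVE5 from L2VNa with a site-Z destination would match the first entry, be re-encapsulated, and be tunneled to NVE6; policy checks would sit at the VIRB between the two lookups.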
| 4.3. Virtual Data Center (vDC) | 4.3. Virtual Data Center (vDC) | |||
| Enterprise DCs today may often use several routers, switches, and | Enterprise DCs today may deploy routers, switches, and network |
| network appliance devices to construct its internal network, DMZ, | appliance devices to construct its internal network, DMZ, and | |||
| and external network access. A DC Provider may offer a virtual DC | external network access and have many servers and storage running | |||
| service to an enterprise customer and run enterprise applications | various applications. A DC Provider may offer a virtual DC service | |||
| such as website/emails as well. Instead of using many hardware | to enterprise customers. A vDC provides the same capability as a | |||
| devices to do it, with the network virtualization overlay | physical DC. A customer manages which applications to run in |
| technology, DC operators may build such vDCs on top of a common | the vDC and how. Instead of using many hardware devices, with the |
| network infrastructure for many such customers and run network | network virtualization overlay technology, DC operators may build | |||
| service applications per a vDC basis. The net service applications | such vDCs on top of a common DC infrastructure for many such | |||
| such as firewall, DNS, load balancer can be designed per vDC. The | customers and run network service application per vDC. The network | |||
| network virtualization overlay further enables potential for vDC | service applications may include firewall, DNS, load balancer, | |||
| mobility when customer moves to different locations because tenant | gateway, etc. The network virtualization overlay further enables | |||
| systems and net appliances configuration can be completely decouple | potential for vDC mobility when a customer moves to different | |||
| from the infrastructure network. | locations because vDC configuration is decouple from the | |||
| infrastructure network. | ||||
| Figure 3 below illustrates one scenario. For the simple | Figure 3 below illustrates one scenario. For the simple | |||
| illustration, it only shows the L3VN or L2VN as virtual and overlay | illustration, it only shows the L3 VN or L2 VN as virtual routers or | |||
| routers or switches. In this case, DC operators construct several L2 | switches. In this case, DC operators create several L2 VNs (L2VNx, | |||
| VNs (L2VNx, L2VNy, L2VNz) in Figure 3 to group the end tenant | L2VNy, L2VNz) in Figure 3 to group the tenant systems together per | |||
| systems together per application basis, create an L3VNa for the | application basis, create one L3 VN, e.g. L3VNa, for the internal |
| internal routing. A net device (may be a VM or server) runs | routing. A net device (may be a VM or server) runs firewall/gateway | |||
| firewall/gateway applications and connects to the L3VNa and | applications and connects to the L3VNa and Internet. A load balancer | |||
| Internet. A load Balancer (LB) is used in L2VNx. A VPWS p2p tunnel | (LB) is used in L2 VNx. A VPWS p2p tunnel is also built between the | |||
| is also built between the gateway and enterprise router. The design | gateway and enterprise router. The enterprise customer runs |
| runs Enterprise Web/Mail/Voice applications at the provider DC site; | Web/Mail/Voice applications at the provider DC site; lets the users |
| lets the users at the Enterprise site access the applications via the | at the Enterprise site access the applications via the VPN tunnel and |
| VPN tunnel and Internet via a gateway at the Enterprise site; and lets | Internet via a gateway at the Enterprise site; and lets Internet |
| Internet users access the applications via the gateway in the | users access the applications via the gateway in the provider DC. |
| provider DC. | ||||
| The Enterprise customer decides which applications are accessed by | The customer decides which applications are accessed by intranet | |||
| intranet only and which by both intranet and extranet; DC operators | only and which by both intranet and extranet, and configures the |
| then design and configure the proper security policy and gateway | proper security policy and gateway function. Furthermore, a customer |
| function. Furthermore, DC operators may use multiple zones in a vDC for | may want multiple zones in a vDC for security and/or set different |
| the security and/or set different QoS levels for the different | QoS levels for the different applications. | |||
| applications based on customer applications. | ||||
| This use case requires the NVO3 solution to provide the DC operator | This use case requires the NVO3 solution to provide the DC operator | |||
| an easy way to create a VN and NVEs for any design and to quickly | an easy way to create a VN and NVEs for any design and to quickly | |||
| assign TSs to a VNI on an NVE they attach to, to easily set up a | assign TSs to VNIs on an NVE they attach to, to easily set up a virtual |
| virtual topology and place or configure policies on an NVE or VMs | topology and place or configure policies on an NVE or VMs that run | |||
| that run net services, and support VM mobility. Furthermore, DC | net services, and support VM mobility. Furthermore, a DC operator |
| operator needs to view the tenant network topology and know the | and/or customer should be able to view the tenant network topology | |||
| tenant node capability and is able to configure a net service on the | and configure the tenant network functions. DC provider may further | |||
| tenant node. DC provider may further let a tenant to manage the vDC | let a tenant to manage the vDC itself. | |||
| itself. | ||||
| Internet ^ Internet | Internet ^ Internet | |||
| | | | | |||
| ^ +-+----+ | ^ +--+---+ | |||
| | | GW | | | | GW | | |||
| | +--+---+ | | +--+---+ | |||
| | | | | | | |||
| +-------+--------+ +-+----+ | +-------+--------+ +--+---+ | |||
| |FireWall/Gateway+--- VPWS/MPLS---+Router| | |FireWall/Gateway+--- VPN-----+router| | |||
| +-------+--------+ +-+--+-+ | +-------+--------+ +-+--+-+ | |||
| | | | | | | | | |||
| ...+... |..| | ...+.... |..| | |||
| +-----: L3VNa :--------+ LANs | +-------: L3 VNa :---------+ LANs | |||
| +-+-+ ....... | | +-+-+ ........ | | |||
| |LB | | | Enterprise Site | |LB | | | Enterprise Site | |||
| +-+-+ | | | +-+-+ | | | |||
| ...+... ...+... ...+... | ...+... ...+... ...+... | |||
| : L2VNx : : L2VNy : : L2VNz : | : L2VNx : : L2VNy : : L2VNz : |
| ....... ....... ....... | ....... ....... ....... | |||
| |..| |..| |..| | |..| |..| |..| | |||
| | | | | | | | | | | | | | | |||
| Web Apps Mail Apps VoIP Apps | Web Apps Mail Apps VoIP Apps | |||
| Provider DC Site | Provider DC Site | |||
| firewall/gateway and Load Balancer (LB) may run on a server or VMs | firewall/gateway and Load Balancer (LB) may run on a server or VMs | |||
| Figure 3 Virtual Data Center by Using NVO3 | Figure 3 Virtual Data Center by Using NVO3 | |||
| 5. OAM Considerations | 5. OAM Considerations | |||
| NVO3 brings the ability for a DC provider to segregate tenant | NVO3 brings the ability for a DC provider to segregate tenant | |||
| skipping to change at page 13, line 47 ¶ | skipping to change at page 13, line 41 ¶ | |||
| configuration, and helps deployment of higher level services over | configuration, and helps deployment of higher level services over | |||
| the application. | the application. | |||
| NVO3's underlying network provides the tunneling between NVEs so | NVO3's underlying network provides the tunneling between NVEs so | |||
| that two NVEs appear as one hop to each other. Many tunneling | that two NVEs appear as one hop to each other. Many tunneling | |||
| technologies can serve this function. The tunneling may in turn be | technologies can serve this function. The tunneling may in turn be | |||
| tunneled over other intermediate tunnels over the Internet or other | tunneled over other intermediate tunnels over the Internet or other | |||
| WANs. It is also possible that intra DC and inter DC tunnels are | WANs. It is also possible that intra DC and inter DC tunnels are | |||
| stitched together to form an end-to-end tunnel between two NVEs. | stitched together to form an end-to-end tunnel between two NVEs. | |||
| A DC virtual network may be accessed via an external network in a | A DC virtual network may be accessed by external users in a secure | |||
| secure way. Many existing technologies can help achieve this. | way. Many existing technologies can help achieve this. | |||
| NVO3 implementation may vary. Some DC operators prefer to use a | NVO3 implementations may vary. Some DC operators prefer to use a |
| centralized controller to manage tenant system reachability in a | centralized controller to manage tenant system reachability in a |
| tenant network, others prefer to use distributed protocols to | tenant network, others prefer to use distributed protocols to |
| advertise the tenant system location, i.e. attached NVEs. For the | advertise the tenant system location, i.e. associated NVEs. For the | |||
| migration and special requirements, different solutions may apply | migration and special requirements, different solutions may apply |
| to one tenant network in a DC. When a tenant network spans across | to one tenant network in a DC. When a tenant network spans across | |||
| multiple DCs and WANs, each network administration domain may use | multiple DCs and WANs, each network administration domain may use | |||
| different methods to distribute the tenant system locations. Both | different methods to distribute the tenant system locations. Both | |||
| control plane and data plane interworking are necessary. | control plane and data plane interworking are necessary. | |||
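The centralized-controller option described above amounts to a mapping service that NVEs update and query for tenant-system locations. This minimal sketch assumes an invented API shape, not any NVO3-specified interface:

```python
# Toy tenant-system location service for the centralized-controller option.
class MappingService:
    def __init__(self):
        # (TS address, VNID) -> NVE name; VNID keeps tenants' address
        # spaces independent of each other.
        self._loc = {}

    def register(self, ts_addr, vnid, nve):
        """Called by an NVE when a TS attaches (or re-called after a VM move)."""
        self._loc[(ts_addr, vnid)] = nve

    def lookup(self, ts_addr, vnid):
        """Called by an ingress NVE to find the egress NVE for a TS."""
        return self._loc.get((ts_addr, vnid))

svc = MappingService()
svc.register("10.0.0.5", 5001, "NVE2")
svc.register("10.0.0.5", 5001, "NVE4")  # VM migrated: the mapping simply updates
```

The distributed-protocol option replaces this single service with route advertisement among NVEs; the lookup semantics seen by the data plane are the same, which is what makes the two approaches interchangeable per domain.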
| 7. Security Considerations | 7. Security Considerations | |||
| Security is a concern. DC operators need to provide a tenant a | Security is a concern. DC operators need to provide a tenant a | |||
| secured virtual network, which means one tenant's traffic isolated | secured virtual network, which means one tenant's traffic isolated | |||
| skipping to change at page 15, line 23 ¶ | skipping to change at page 15, line 17 ¶ | |||
| [RFC5880] Katz, D. and Ward, D., "Bidirectional Forwarding Detection | [RFC5880] Katz, D. and Ward, D., "Bidirectional Forwarding Detection | |||
| (BFD)", rfc5880, June 2010. | (BFD)", rfc5880, June 2010. | |||
| 10.2. Informative References | 10.2. Informative References | |||
| [NVGRE] Sridharan, M., "NVGRE: Network Virtualization using Generic | [NVGRE] Sridharan, M., "NVGRE: Network Virtualization using Generic | |||
| Routing Encapsulation", draft-sridharan-virtualization- | Routing Encapsulation", draft-sridharan-virtualization- | |||
| nvgre-02, work in progress. | nvgre-02, work in progress. | |||
| [NVO3PRBM] Narten, T., et al., "Problem Statement: Overlays for Network | [NVO3PRBM] Narten, T., et al., "Problem Statement: Overlays for |
| Virtualization", draft-ietf-nvo3-overlay-problem- | Network Virtualization", draft-ietf-nvo3-overlay-problem- | |||
| statement-02, work in progress. | statement-03, work in progress. | |||
| [NVO3FRWK] Lasserre, M., Morin, T., et al., "Framework for DC | [NVO3FRWK] Lasserre, M., Morin, T., et al., "Framework for DC |
| Network Virtualization", draft-ietf-nvo3-framework-02, | Network Virtualization", draft-ietf-nvo3-framework-03, | |||
| work in progress. | work in progress. | |||
| [NVO3MCAST] Ghanwani, A., "Multicast Issues in Networks Using NVO3", | [NVO3MCAST] Ghanwani, A., "Multicast Issues in Networks Using NVO3", | |||
| draft-ghanwani-nvo3-mcast-issues-00, work in progress. | draft-ghanwani-nvo3-mcast-issues-00, work in progress. | |||
| [VRF-LITE] Cisco, "Configuring VRF-lite", http://www.cisco.com | [VRF-LITE] Cisco, "Configuring VRF-lite", http://www.cisco.com | |||
| [VXLAN] Mahalingam, M., Dutt, D., et al., "VXLAN: A Framework for | [VXLAN] Mahalingam, M., Dutt, D., et al., "VXLAN: A Framework for |
| Overlaying Virtualized Layer 2 Networks over Layer 3 | Overlaying Virtualized Layer 2 Networks over Layer 3 | |||
| Networks", draft-mahalingam-dutt-dcops-vxlan-03.txt, work | Networks", draft-mahalingam-dutt-dcops-vxlan-03.txt, work | |||
| End of changes. 53 change blocks. | ||||
| 290 lines changed or deleted | 282 lines changed or added | |||
This html diff was produced by rfcdiff 1.48. The latest version is available from http://tools.ietf.org/tools/rfcdiff/ | ||||