idnits 2.17.1

draft-kong-sdnrg-routing-optimization-sdn-in-dc-06.txt:

  Checking boilerplate required by RFC 5378 and the IETF Trust (see
  https://trustee.ietf.org/license-info):
  ----------------------------------------------------------------------------

     No issues found here.

  Checking nits according to https://www.ietf.org/id-info/1id-guidelines.txt:
  ----------------------------------------------------------------------------

     No issues found here.

  Checking nits according to https://www.ietf.org/id-info/checklist :
  ----------------------------------------------------------------------------

     No issues found here.

  Miscellaneous warnings:
  ----------------------------------------------------------------------------

  == The copyright year in the IETF Trust and authors Copyright Line does not
     match the current year

  == The document doesn't use any RFC 2119 keywords, yet seems to have RFC
     2119 boilerplate text.

  -- The document date (April 05, 2019) is 1841 days in the past.  Is this
     intentional?

  Checking references for intended status: Informational
  ----------------------------------------------------------------------------

     No issues found here.

     Summary: 0 errors (**), 0 flaws (~~), 2 warnings (==), 1 comment (--).

     Run idnits with the --verbose option for more detailed information about
     the items above.

--------------------------------------------------------------------------------

SDNRG                                                            Q. Kong
Internet Draft                                                     T. Gao
Intended status: Informational                                       BUPT
Expires: October 2019                                             D. Wang
                                                                  Z. Wang
                                                                  J. Wang
                                                                      ZTE
                                                                   B. Guo
                                                                 S. Huang
                                                                     BUPT
                                                           April 05, 2019

         Routing Optimization with SDN in Data Center Networks
           draft-kong-sdnrg-routing-optimization-sdn-in-dc-06

Status of this Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF), its areas, and its working groups.  Note that
   other groups may also distribute working documents as
   Internet-Drafts.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."

   The list of current Internet-Drafts can be accessed at
   http://www.ietf.org/ietf/1id-abstracts.txt

   The list of Internet-Draft Shadow Directories can be accessed at
   http://www.ietf.org/shadow.html

   This Internet-Draft will expire on October 05, 2019.

Copyright Notice

   Copyright (c) 2019 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document.

Abstract

   With its open, standard programmatic interfaces and its flexibility
   in controlling the network, Software Defined Network (SDN)
   technology can significantly simplify and integrate operation and
   business support systems.  Adopting SDN is therefore a good option
   for satisfying the rising switching demand in data center networks.
   In addition, the current architecture of data center networks is
   far from ideal, which results in low utilization of bandwidth
   resources.
   For example, mice flows cannot be served effectively in a
   conventional Wavelength Division Multiplexing (WDM) optical network
   with a spectrum interval of at least 50 GHz.  From a data center
   network perspective, it is necessary to further improve resource
   utilization efficiency and the flexibility of coping with different
   traffic.

   This document describes an optical data center interconnect that
   comprises both fixed and flexible grid transceivers.  A traffic
   monitor is implemented in the SDN-based data center network to
   evaluate incoming traffic demands and allocate appropriate spectrum
   for each request.  For instance, mice flows can be served by fixed
   grid transceivers, while elephant flows can be transmitted by
   flexi-grid transceivers using multiple subcarriers to form a
   superchannel.  Thus, spectrum efficiency is optimized and bandwidth
   utilization is improved dramatically.

Table of Contents

   1. Introduction ................................................ 3
   2. Conventions used in this document ........................... 3
   3. Required Technology ......................................... 4
   4. Data center interconnect .................................... 5
   5. Traffic-Monitor based routing in data center networks ....... 6
   6. Dynamic traffic demand recognition scheme ................... 7
   7. Security Considerations ..................................... 8
   8. IANA Considerations ......................................... 8
   9. Conclusions and Use Cases ................................... 8
   10. References ................................................. 9
      10.1. Normative References .................................. 9
      10.2. Informative References ................................ 9

1. Introduction

   The bandwidth bottleneck and growing power requirements have become
   central challenges for high-performance data center network (DCN)
   interconnects.  The current fat-tree topology causes communication
   bottlenecks in the server interaction process and relies on
   power-hungry O-E-O conversions that limit the minimum latency and
   the power efficiency of these systems.  Various optical
   interconnects [KT12] have been proposed to take advantage of the
   high bandwidth capacity and low power consumption offered by
   optical switching.  An optical data center interconnect also
   provides an interface to the control plane for network control and
   operation.  This opens the opportunity to implement enhanced
   network functions, with all components running under a centralized
   Software Defined Networking (SDN) controller through SDN agents.
   With the advantages of flexible network control and private network
   operation, the concept of SDN has been rapidly adopted in data
   centers.  SDN technology is mature enough for commercial deployment
   in data centers; most notably, Google has interconnected its data
   centers through two intercontinental backbone networks.  From a
   data center network perspective, research on further improving
   resource utilization efficiency and the flexibility of coping with
   different traffic demands remains relevant.

   This document describes a data center interconnect under SDN
   control that can support both fine and coarse granularity switching
   requirements.
   By implementing traffic monitoring in the SDN-based data center
   network to allocate appropriate bandwidth to either a fixed or a
   flexible grid channel, spectrum efficiency is optimized and
   bandwidth utilization is greatly increased.  To realize both fixed
   grid and flexible grid transmission, multiple Small Form-Factor
   Pluggable (SFP) and Single-Carrier Frequency-Division-Multiplexed
   (SCFDM) transceivers are attached to cascaded Micro-Electro-
   Mechanical System (MEMS) switches, which are in charge of the
   communication between ToRs in different clusters.  We also propose
   a module, named the Mux/Demux & SSS module, that uses optical
   components to provide flexible switching functionality.

2. Conventions used in this document

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in
   this document are to be interpreted as described in RFC 2119
   [RFC2119].

   This document makes use of the following acronyms:

   SDN:   Software Defined Network

   WDM:   Wavelength Division Multiplexing

   MEMS:  Micro-Electro-Mechanical System

   ToR:   Top-of-Rack (switch)

   SFP:   Small Form-Factor Pluggable

   SCFDM: Single-Carrier Frequency-Division-Multiplexed

   SSS:   Spectrum Selective Switch

   AWG:   Arrayed Waveguide Grating

   MIMO:  Multi-Input Multi-Output

3. Required Technology

   With the wide deployment of cloud computing and other kinds of
   applications, traffic switching between and within data center
   networks is drawing more and more attention.  Nevertheless, despite
   the commercial deployment of SDN technology in data centers, the
   architecture of current data center networks is still far from
   ideal.

   On the one hand, in conventional WDM optical networks, a traffic
   demand is supported by a wavelength channel that occupies 50 GHz of
   spectrum.  When the traffic demand between the end nodes is smaller
   than the capacity of the wavelength channel, spectrum is wasted
   because of the fixed and coarse granularity.  To address this
   issue, both flexible and fixed grid transceivers can be adopted in
   data center networks.  On the other hand, with the advantage of
   open interfaces and programmability, an SDN-enabled network can
   implement the control methods required to optimize bandwidth
   efficiency.

   To satisfy the requirement of fast switching as well as to improve
   bandwidth efficiency in data center networks, a traffic monitor is
   embedded in the ToR to monitor the bandwidth a demand is likely to
   require and to modulate the traffic onto either a fixed or a
   flexible grid channel.  By monitoring traffic before it arrives,
   mice and elephant flows can be served by allocating appropriate
   flexible or fixed grid bandwidth rather than a uniform fixed 50 GHz
   bandwidth.  Thus, spectrum usage is optimized and bandwidth
   utilization is improved.

4. Data center interconnect

   As shown in Fig.1, we employ cascaded MEMS switches.  The inter-
   cluster MEMS in the core is in charge of the communication between
   ToRs in different clusters.  Multiple SFP and SCFDM transceivers
   are implemented to realize mixed transmission whose bandwidth
   demand is either fixed grid or flexible grid.  To provide flexible
   switching functionality, we propose the module named Mux/Demux &
   SSS, which is illustrated in Fig.2.  A minimal sketch of how
   traffic demands can be mapped onto these two transceiver types is
   given below.
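   The following is a minimal sketch of how a monitored demand could
   be mapped onto one of the two transceiver types.  The mice/elephant
   threshold, the 12.5 GHz flexi-grid slot granularity, the per-
   subcarrier capacity, and all names are illustrative assumptions,
   not values defined by this document.

   # Sketch: decide fixed-grid (SFP) vs. flexi-grid (SCFDM) allocation
   # for a traffic demand, following the mice/elephant policy above.
   from dataclasses import dataclass
   import math

   FIXED_GRID_SLOT_GHZ = 50.0       # conventional WDM channel spacing
   FLEXI_SLOT_GHZ = 12.5            # assumed flexi-grid granularity
   SUBCARRIER_CAPACITY_GBPS = 10.0  # assumed capacity per subcarrier
   MICE_THRESHOLD_GBPS = 10.0       # assumed mice/elephant boundary

   @dataclass
   class Demand:
       src_tor: str
       dst_tor: str
       rate_gbps: float

   def classify(demand: Demand) -> str:
       """Label a demand as a mice or elephant flow by requested rate."""
       return "mice" if demand.rate_gbps <= MICE_THRESHOLD_GBPS else "elephant"

   def allocate_spectrum(demand: Demand) -> dict:
       """Return the grid type and spectrum width to request.

       Mice flows fit into a single fixed-grid channel; elephant flows
       are mapped onto enough flexi-grid subcarriers to form a
       superchannel.
       """
       if classify(demand) == "mice":
           return {"grid": "fixed", "width_ghz": FIXED_GRID_SLOT_GHZ,
                   "subcarriers": 1}
       subcarriers = math.ceil(demand.rate_gbps / SUBCARRIER_CAPACITY_GBPS)
       return {"grid": "flexible",
               "width_ghz": subcarriers * FLEXI_SLOT_GHZ,
               "subcarriers": subcarriers}

   # Example: a 4 Gbps request stays on one fixed 50 GHz channel, while
   # a 400 Gbps request becomes a 40-subcarrier (500 GHz) superchannel.
   print(allocate_spectrum(Demand("ToR1", "ToR2", 4)))
   print(allocate_spectrum(Demand("ToR1", "ToR3", 400)))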
   Optical components such as couplers, Spectrum Selective Switches
   (SSS), Arrayed Waveguide Gratings (AWG), and circulators are
   attached to a backplane to further increase the flexibility of
   coping with different traffic demands.  In Fig.2, the symbol "@"
   represents a circulator, a passive non-reciprocal three-port device
   in which an optical signal entering any port is transmitted only to
   the next port in rotation.  The coupler is a passive device used to
   split and combine signals in the optical network and can have
   multiple inputs and outputs.  The SSS is typically a 1xN optical
   component that can partition the spectrum of an input signal onto
   different ports.  The AWG is a passive, data-rate-independent
   optical device that routes each wavelength of an input to a
   different output.  Using this module, traffic can be deliberately
   added and dropped through these components; it can be merged and
   switched to the same destination through the AWG or coupler, or
   separated by the SSS and switched to different output ports,
   realizing Multi-Input Multi-Output (MIMO) switching.  At the same
   time, each ToR has both SFP and SCFDM transceivers, which can
   realize fixed or flexible grid traffic switching.  Thus, each rack
   can communicate with multiple racks simultaneously, and high
   interconnect efficiency can be achieved because arbitrary traffic
   between or within ToRs can be switched using fine-granularity
   bandwidth rather than fixed grid bandwidth.

   +----------------+  +----------------+  +----------------+
   |Mux/Demux &SSS 1|  |Mux/Demux &SSS 2|  |Mux/Demux &SSS 3|  ...
   +----------------+  +----------------+  +----------------+
       |  |  |              |  |  |              |  |  |
       |  |  |              |  |  |              |  |  |
   +----------------------------------------------------------------+
   |                         Optical OXC                            |
   +----------------------------------------------------------------+
        |   |                   |   |                   |   |
        |   |                   |   |                   |   |
     +--------------+       +--------------+       +--------------+
     | SFP|BV-TX/RX |       | SFP|BV-TX/RX |       | SFP|BV-TX/RX |
     |     ToR 1    |       |     ToR 2    |       |     ToR 3    |
     +--------------+       +--------------+       +--------------+

        Figure 1: Schematic of the architecture in a data center

   +------------------------------------------------------------------+
   |                                                                  |
   +------------------------------------------------------------------+
     A      A                  A       A        |  |  |      |
   +-|------|------------------|-------|--------|--|--|------|--------+
   | |      V                  |       |        |  |  |      |        |
   | |      +--------------@   V       |        |  +---------+        |
   | |      |  +----------A--------@   V        |  |   AWG   |        |
   | |      |  |  +-----|---------A------@      |  +---------+        |
   | |      |  |  |     |         |      A      |     |  |            |
   | |      V  V  V     |         |      |      |     +--------+      |
   | |      +---------+ +---------------------+                       |
   | +------| Coupler | |          SSS        |                       |
   |        +---------+ +---------------------+                       |
   +------------------------------------------------------------------+

                        Figure 2: Mux/Demux & SSS

5. Traffic-Monitor based routing in data center networks

   The proposed architecture, which is based on SDN technology, is
   shown in Fig.3.  The Resource Computation Element (RCE) is
   responsible for allocating available port resources and configuring
   the backplane to serve each new request, based on the resource
   information provided by the Resource Management Element (RME).  The
   RME stores all of the port and spectrum information.  Both the RCE
   and the RME are controlled by an SDN controller.  A minimal sketch
   of these two elements is given below.
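   The following is a minimal sketch of this division of labor,
   assuming a per-link flexi-grid slot model and a first-fit spectrum
   assignment.  The class and method names are illustrative
   assumptions, not an interface defined by this document.

   # Sketch: RME holds port/spectrum state; RCE queries it, assigns
   # spectrum first-fit along a path, and buffers unserved requests.
   class ResourceManagementElement:
       """Stores port and spectrum state; configured by the controller."""

       def __init__(self, links, slots_per_link=64):
           # One boolean per flexi-grid slot on each link: True = free.
           self.spectrum = {link: [True] * slots_per_link for link in links}

       def free_slots(self, link):
           return [i for i, free in enumerate(self.spectrum[link]) if free]

       def reserve(self, path, slots):
           for link in path:
               for s in slots:
                   self.spectrum[link][s] = False

   class ResourceComputationElement:
       """Computes a spectrum assignment using RME information."""

       def __init__(self, rme):
           self.rme = rme
           self.buffer = []  # requests that cannot be served right away

       def serve(self, path, slots_needed):
           # First-fit: lowest block of contiguous slots that is free on
           # every link of the path (continuity and contiguity).
           common = set(self.rme.free_slots(path[0]))
           for link in path[1:]:
               common &= set(self.rme.free_slots(link))
           free = sorted(common)
           for i in range(len(free) - slots_needed + 1):
               block = free[i:i + slots_needed]
               if block[-1] - block[0] == slots_needed - 1:
                   self.rme.reserve(path, block)
                   return block
           self.buffer.append((path, slots_needed))  # hold until freed
           return None

   # Example: a 4-slot superchannel over two hops, then an oversized
   # request that ends up buffered.
   rme = ResourceManagementElement(["ToR1-OXC", "OXC-ToR2"], slots_per_link=8)
   rce = ResourceComputationElement(rme)
   print(rce.serve(["ToR1-OXC", "OXC-ToR2"], slots_needed=4))  # [0, 1, 2, 3]
   print(rce.serve(["ToR1-OXC", "OXC-ToR2"], slots_needed=8))  # None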
   In particular, the RCE can implement an algorithm for routing and
   allocating spectrum optimally, and the RME can also be configured
   by the SDN controller.

   When new traffic arrives from the ToRs, the RCE queries the RME for
   the available port resources and other information in order to
   compute the most suitable route and allocate appropriate spectrum.
   If no resource is available at that moment, the request is stored
   in a buffer.  The traffic monitor provides information on all
   traffic requests, both arriving and buffered, in order to evaluate
   the type of the traffic, and then passes this information to the
   RME to execute the processing scheme discussed later in this
   document.

   After the optimized route has been computed, the optical switching
   module is configured through an agent to allocate appropriate
   bandwidth for the request.  With bandwidth variable components and
   the capability of both fixed and flexible grid switching, the
   optical backplane can be instructed to allocate exactly the
   appropriate bandwidth for incoming demands.  As a consequence, the
   requests from the ToRs are satisfied with an optimized route and
   high resource utilization.

                          +----------------------------------------------+
        ---+---+---+      |               SDN controller                 |
    +-->|   |   |   |--+--->+--------------+      +--------------+       |
    |   ---+---+---+   |    |   Resource   |----->|   Resource   |       |
    |      Buffer      |    |  Computation |      |  Management  |       |
    |   ---+---+---+   |    |   Element    |<-----|   Element    |       |
    | +>|   |   |   |--+--->+--------------+      +--------------+       |
    | |  ---+---+---+       +----------A--------------------|------------+
    | |                                |                    |
    | |                                |                    v
    | |                                |         +--------------------+------+
    | |  +---------+   +----+   +-----+          |                    | Agent|
    | +--| Request |-->|ToRs|-->|Tx/Rx|--------> +------+------+------+------+
    |    +---------+   +----+   +-----+          |       Optical             |
    |    +---------+   +----+   +-----+          |      Switching            |
    +----| Request |-->|ToRs|-->|Tx/Rx|--------> |       Module              |
         +---------+   +----+   +-----+          +---------------------------+
           +------------+              A
           |  Traffic   |              |
           |  Monitor   |--------------+
           +------------+

      Figure 3: Architecture with the Traffic Monitor implemented

6. Dynamic traffic demand recognition scheme

   With the traffic monitor in place, the proposed architecture can
   support new switching requirements by executing a dynamic traffic
   demand recognition scheme through the RME described above.  We
   monitor traffic before it arrives, evaluate the type of traffic
   demand, and then allocate appropriate bandwidth according to the
   request.  When traffic arrives, the RME arbitrates whether it is a
   flexible grid signal in order to determine where it goes.  A
   flexible grid signal is transferred to the SCFDM transceiver and
   then arbitrated as to whether it is an intra-data-center request;
   if it is, optical components such as the SSS and coupler are
   configured to set up or reuse a connection.  Similarly, a fixed
   grid signal is transferred to the SFP module and arbitrated as to
   whether it is an intra-cluster request in order to determine where
   it is transferred next.  Thus, bandwidth with fine granularity can
   be allocated to satisfy dynamic traffic demands in the data center
   network.  For instance, a mice flow can be served directly by being
   modulated onto an SCFDM transmitter, while an elephant flow can be
   divided into fixed and flexible grid signals.  Fixed grid signals
   can be switched to the WDM SFP transceivers, which support 2.5 Gbps
   and 10 Gbps transmission.
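   The following is a minimal sketch of the recognition procedure just
   described.  The field names, decision flags, and returned actions
   are illustrative assumptions rather than an interface defined by
   this document; only the decision order (grid type first, then
   scope) follows the prose above.

   # Sketch: map a monitored demand onto a transceiver and a switching
   # action, per the dynamic traffic demand recognition scheme.
   def recognize(demand):
       """Return (transceiver, action) for a monitored demand.

       `demand` is assumed to carry a grid type ("fixed" or "flexible")
       decided by the traffic monitor, plus intra-cluster and
       intra-data-center scope flags.
       """
       if demand["grid"] == "flexible":
           # Flexible grid traffic goes to an SCFDM transceiver.
           if demand["intra_dc"]:
               return ("SCFDM", "set up or reuse a connection via SSS/coupler")
           return ("SCFDM", "hand off to the inter-data-center interface")
       # Fixed grid traffic goes to a WDM SFP transceiver (2.5/10 Gbps).
       if demand["intra_cluster"]:
           return ("SFP", "switch locally within the cluster")
       return ("SFP", "forward through the inter-cluster MEMS")

   # Example: an intra-data-center elephant flow versus a small
   # inter-cluster request.
   print(recognize({"grid": "flexible", "intra_dc": True,
                    "intra_cluster": False}))
   print(recognize({"grid": "fixed", "intra_dc": True,
                    "intra_cluster": False}))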
   Flexible grid traffic demands, correspondingly, can be served by
   the SCFDM transceivers.  Such a scheme can allocate optimized
   bandwidth to each potential request.  Thus, both mice and elephant
   flows can be served either by reusing an existing connection or by
   setting up a new route, which avoids frequent reconfiguration of
   the optical backplane.

7. Security Considerations

   Security of the communication between ToRs through the optical
   backplane in the data center network still needs to be addressed.
   The security of the architecture described in this document depends
   largely on the security of the communication mechanisms themselves,
   such as the communication protocols and processing procedures.
   However, the architecture that implements the traffic monitor can
   improve the security of switching in the data center network by
   evaluating the type of incoming traffic.

8. IANA Considerations

   This document includes no request to IANA.

9. Conclusions and Use Cases

   Data centers have received more and more attention as a result of
   the increasing demand for storing and switching large volumes of
   data.  With the advantages of open programmatic interfaces and
   private operation, SDN is increasingly applied to data centers in
   order to improve spectrum efficiency and bandwidth utilization.

   This document describes an architecture in which a traffic monitor
   is implemented and bandwidth variable components are adopted.  By
   monitoring traffic before it arrives, we can evaluate the type of
   each request and query the RME, whose function is to record whether
   each port is occupied or released.  Based on the available resource
   information obtained from the RME, the RCE then allocates
   appropriate bandwidth, fixed or flexible grid, for the request.
   Rather than allocating bandwidth with rigid and coarse granularity,
   the new switching requirements are supported to satisfy dynamic
   traffic demands in data center networks.  As a consequence,
   spectrum efficiency is optimized and bandwidth utilization is
   increased dramatically.

   With the ability to switch traffic using both fixed and flexible
   grid bandwidth, the proposed architecture can be adopted in various
   network structures, especially in data center networks.  For
   example, it is suited both to scenarios where data flows are large
   and long-lived, such as overnight data migration, and to scenarios
   where data flows are small and short-lived, such as Web requests.

10. References

10.1. Normative References

   [RFC2119] Bradner, S., "Key words for use in RFCs to Indicate
             Requirement Levels", BCP 14, RFC 2119, March 1997.

10.2. Informative References

   [KT12]    Kachris, C. and Tomkos, I., "A Survey on Optical
             Interconnects for Data Centers", 2012.

Authors' Addresses

   Qian Kong
   Beijing University of Posts and Telecommunications

   Email: kongqian@bupt.edu.cn

   Tao Gao
   Beijing University of Posts and Telecommunications

   Email: taogao@bupt.edu.cn

   Dajiang Wang
   ZTE Corporation

   Email: wang.dajiang@zte.com.cn

   Zhenyu Wang
   ZTE Corporation

   Email: wang.zhenyu1@zte.com.cn

   Jiayu Wang
   ZTE Corporation

   Email: wang.jiayu1@zte.com.cn

   Bingli Guo
   Beijing University of Posts and Telecommunications

   Email: guobingli@bupt.edu.cn

   Shanguo Huang
   Beijing University of Posts and Telecommunications

   Email: shghuang@bupt.edu.cn