2 Internet Engineering Task Force W. Eddy 3 Internet-Draft G. Clark 4 Intended status: Informational J. Dailey 5 Expires: February 12, 2016 MTI Systems 6 August 11, 2015 8 Customer-Controlled Filtering Using SDN 9 draft-eddy-sdnrg-customer-filters-01 11 Abstract 13 In order to reduce unwanted traffic and make efficient use of limited 14 access link capacity or other network resources, it is advantageous 15 to filter traffic upstream of the end-networks that the packets are 16 destined to. This document describes filtering within access 17 Internet Service Provider (ISP) networks.
The ISP's end-network 18 customers are given control over ISP filtering of traffic destined to 19 their own prefixes, since each customer's definition of desirable 20 versus undesirable traffic may change over time (e.g. as new network 21 services and protocols are introduced). In this document, we 22 describe an SDN-based means for customers to express flow definitions 23 to their ISPs in order to distinguish between desirable and 24 undesirable inbound traffic. These rules can be dynamically and 25 securely updated within the running ISP network, with full automation. 26 One use case for this capability is in mitigating denial of service 27 attacks. Even if such filtering is only implemented in an ISP's 28 access network, it preserves capacity on the customer access links 29 for desirable traffic. If implemented at the ISP's edge connections 30 to other providers, or prior to ingress to their core, it can also 31 preserve the ISP's own network capacity and other resources that may 32 be threatened by attacks. 34 Status of This Memo 36 This Internet-Draft is submitted in full conformance with the 37 provisions of BCP 78 and BCP 79. 39 Internet-Drafts are working documents of the Internet Engineering 40 Task Force (IETF). Note that other groups may also distribute 41 working documents as Internet-Drafts. The list of current Internet- 42 Drafts is at http://datatracker.ietf.org/drafts/current/. 44 Internet-Drafts are draft documents valid for a maximum of six months 45 and may be updated, replaced, or obsoleted by other documents at any 46 time. It is inappropriate to use Internet-Drafts as reference 47 material or to cite them other than as "work in progress." 48 This Internet-Draft will expire on February 12, 2016. 50 Copyright Notice 52 Copyright (c) 2015 IETF Trust and the persons identified as the 53 document authors. All rights reserved.
55 This document is subject to BCP 78 and the IETF Trust's Legal 56 Provisions Relating to IETF Documents 57 (http://trustee.ietf.org/license-info) in effect on the date of 58 publication of this document. Please review these documents 59 carefully, as they describe your rights and restrictions with respect 60 to this document. Code Components extracted from this document must 61 include Simplified BSD License text as described in Section 4.e of 62 the Trust Legal Provisions and are provided without warranty as 63 described in the Simplified BSD License. 65 Table of Contents 67 1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . 2 68 1.1. Requirements Language . . . . . . . . . . . . . . . . . . 4 69 2. Perceived Requirements . . . . . . . . . . . . . . . . . . . 4 70 3. Architecture . . . . . . . . . . . . . . . . . . . . . . . . 5 71 4. Customer Network . . . . . . . . . . . . . . . . . . . . . . 8 72 5. ISP Network . . . . . . . . . . . . . . . . . . . . . . . . . 10 73 5.1. Sub-Controller Configuration . . . . . . . . . . . . . . 10 74 5.2. Sub-Controller Operation . . . . . . . . . . . . . . . . 11 75 6. Discussion . . . . . . . . . . . . . . . . . . . . . . . . . 17 76 7. Acknowledgements . . . . . . . . . . . . . . . . . . . . . . 19 77 8. IANA Considerations . . . . . . . . . . . . . . . . . . . . . 19 78 9. Security Considerations . . . . . . . . . . . . . . . . . . . 19 79 10. References . . . . . . . . . . . . . . . . . . . . . . . . . 20 80 10.1. Normative References . . . . . . . . . . . . . . . . . . 20 81 10.2. Informative References . . . . . . . . . . . . . . . . . 20 82 Authors' Addresses . . . . . . . . . . . . . . . . . . . . . . . 21 84 1. Introduction 86 At the edge of the Internet, end-network customers purchase access 87 through Internet Service Providers (ISPs). 
The ISPs offer a limited 88 amount of access link capacity to each customer, and have their own 89 capacity limitations within their access networks, core networks, and 90 peering to other providers. Traffic coming in from other networks to 91 the ISP's customers is normally forwarded through the ISP's 92 infrastructure and to the customer access links based only on the 93 destination IP addresses within packets. 95 Customers generally require reliable Internet access, and critical 96 business operations functions rely on availability of the ISP's 97 network resources. However, in many cases, customers are also 98 receiving substantial amounts of undesirable traffic, including port- 99 scans, intrusion attempts on vulnerable services in their networks, 100 denial-of-service (DoS), and distributed DoS (DDoS) attacks. These 101 undesirable flows are able to use up network capacity and disrupt or 102 interfere with desirable flows. 104 A normal end-network customer only requires a limited set of traffic 105 to be forwarded to them from the outside, and all other traffic can 106 safely be filtered. In fact, a common and highly recommended 107 practice that many end-networks already employ is to firewall 108 undesirable incoming traffic as it comes in from the ISP's access 109 link. This protects the end-network, but still leaves the access 110 link itself and capacity within the ISP's network subject to abuse. 111 Since customer networks already execute logic to distinguish between 112 desirable and undesirable traffic, there is benefit to both them 113 and the ISPs in sharing that logic and pushing it upstream to 114 increase the scope of protected resources. This allows the ISP to 115 only devote its resources to packets that have potential value to its 116 customers, and to quickly discard undesirable traffic.
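The firewall logic described above can be thought of as an ordered list of match/action rules. The following is an illustrative sketch only, not part of this document's architecture; all names and field choices are hypothetical, and real upstream filters would match on whatever fields the ISP's switches support.

```python
# Illustrative sketch only (not from this document): a customer's
# firewall logic expressed as an ordered match/action rule list, the
# same form it would take if shared with an upstream ISP.
from dataclasses import dataclass
from typing import Optional

@dataclass
class FilterRule:
    action: str                      # "drop" or "accept"
    ip_proto: Optional[int] = None   # e.g. 6 = TCP, 17 = UDP; None = any
    dst_port: Optional[int] = None   # transport destination port; None = any

    def matches(self, ip_proto: int, dst_port: int) -> bool:
        return ((self.ip_proto is None or self.ip_proto == ip_proto) and
                (self.dst_port is None or self.dst_port == dst_port))

def classify(rules, ip_proto, dst_port, default="drop"):
    """First matching rule wins; unmatched traffic is dropped by default."""
    for rule in rules:
        if rule.matches(ip_proto, dst_port):
            return rule.action
    return default

# A customer that only wants inbound HTTPS and DNS traffic:
rules = [FilterRule("accept", ip_proto=6, dst_port=443),
         FilterRule("accept", ip_proto=17, dst_port=53)]
```

With the same rule list handed to both the local firewall and the upstream filters, a packet that would have been dropped at the customer edge is instead dropped before it consumes access link capacity.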
118 During a DoS or DDoS attack, these upstream filters can be 119 implemented at points of ingress from other networks and save the 120 ISP's core and access network resources from being wasted or abused 121 while carrying the attack traffic. Since a high-volume attack on one 122 of its customers can have collateral impact on the ISP and its other 123 customers, this filtering benefits the ISP and its other 124 customers, in addition to the intended victim. For instance, in 2010 125 an attack on a company attempting to shut down the Mariposa botnet 126 took down not only that company, but also several other networks 127 using the same ISP, including a Canadian university and a few 128 government agencies in Ottawa, according to news articles. This 129 could have been avoided with upstream defenses. 131 As Software Defined Networking (SDN) technology becomes more 132 prevalent in both customer and ISP networks, among many other 133 capabilities, it enables the type of upstream filtering that would be 134 beneficial in these cases. An interdomain usage of SDN allows the 135 filter rules to be managed by the customers themselves, dynamically, 136 and to be distributed to the ISP through automation without human 137 intervention. 139 This document describes a usage of SDN that enables a customer to 140 have all the necessary control over filtering functionality 141 implemented within a service provider network. This description 142 includes: 144 o Perceived Requirements 146 o Functional Architecture including ISP and Customer Components 148 o Configuration and Operation of ISP Components 150 o Configuration and Operation of Customer Components 152 This is an informational document, and not a standard.
The intention 153 of the document is to engage with the Internet community as SDN 154 technology continues to transfer from research to increased 155 operational use, and to discuss new ways of utilizing SDN-based 156 network infrastructure to enhance defensive capabilities for DDoS 157 mitigation. 159 We attempt to use the SDN terminology previously defined in the IRTF 160 SDNRG [RFC7426] [RFC7149]. 162 One of the interesting areas of SDN research is in inter-network, 163 inter-domain, and inter-provider SDN concepts, use cases, and 164 mechanisms. This is a challenging area because of the differing 165 requirements between cooperating networks and their separate 166 administrative domains and trust relationships. The configurations 167 described in this document implement one use-case for inter-domain 168 SDN based on OpenFlow signaling, and do so in a way that respects 169 both customer and provider requirements on the interface. 171 A key goal of the architecture described in this document is for ISPs 172 to be able to securely delegate control of their network filtering 173 configurations without creating new weaknesses or exposing any 174 additional information or capabilities beyond the filtering 175 configuration. This means that an individual customer's insight and 176 abilities must be limited to only traffic destined for their own 177 prefix(es). 179 1.1. Requirements Language 181 The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", 182 "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this 183 document are to be interpreted as described in RFC 2119 [RFC2119]. 185 2. Perceived Requirements 187 The requirements in this section are only applicable to ISPs that 188 choose to offer a service for customers to control upstream packet 189 filters. They are not meant to apply to other ISPs. 
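Central to such a service is that the ISP can verify mechanically that a customer-submitted filter affects only traffic destined to that customer's own delegated prefixes (the first two requirements below). A minimal sketch of such a check, using Python's standard ipaddress module; the function and variable names are illustrative, not part of any specification:

```python
# Illustrative sketch only: accept a filter rule only when its
# destination match falls entirely within a prefix delegated to the
# requesting customer.  Names are hypothetical.
import ipaddress

def rule_allowed(dst_prefix: str, customer_prefixes) -> bool:
    dst = ipaddress.ip_network(dst_prefix, strict=False)
    # subnet_of() requires matching address families, so compare
    # versions first to skip prefixes of the other family.
    return any(dst.version == p.version and dst.subnet_of(p)
               for p in customer_prefixes)

# Prefixes the ISP has delegated to one customer (documentation ranges):
customer = [ipaddress.ip_network("198.51.100.0/24"),
            ipaddress.ip_network("2001:db8:100::/48")]
```

Under this check, a request to drop traffic toward a sub-range of the customer's own /24 would pass, while a rule naming any prefix outside the customer's delegations would be refused before reaching any switch.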
191 The following requirements are what we perceive to be constraints on 192 configuration of provider and customer network devices, in order to 193 support powerful customer-controlled filtering behavior within the 194 ISP network while still keeping the administrative boundary of the 195 provider network intact: 197 1. The customer MUST be able to control traffic destined for their 198 own prefix(es). 200 2. The customer MUST NOT be able to control any traffic destined for 201 other prefixes outside their end-network. 203 3. The customer's ability to control filtering within ISP equipment 204 MUST NOT change their level of access to the provider's network 205 (e.g. it should not offer them additional insight into traffic 206 passing through the provider that would not be available 207 otherwise, should not expose internal or sensitive details of the 208 provider's network architecture and internal configuration, etc). 210 4. The customer SHOULD be able to specify logic based on packet 211 contents that leads to a "drop" or "accept" decision on inbound 212 traffic, but very little or no more. An "accept" decision should 213 lead to provider-defined forwarding logic to select the ultimate 214 output ports, tunnels, or functions applied to traffic towards 215 the customer; the customer should not have control over selecting 216 these details. 218 5. The customer MUST be able to access statistics on its own traffic 219 or the filtering logic it controls from the view of the 220 provider's network (e.g. per-filter packet match counters). This 221 allows the customer to manage its rule set over time and remove 222 or replace logic that is no longer needed or effective. 224 6. The customer MUST NOT be able to access statistics on other flows 225 or aspects of the ISP network not related to the customer's own 226 flows. 228 3. 
Architecture 230 Functionally, the entities involved in defining and implementing 231 customer-specific filters can be divided into: 233 Filter management application - run by the customer to determine 234 desired filtering rules at a given point in time. The desired 235 filtering rules are dynamic and change as new information becomes 236 available. For instance as attacks are detected or subside, 237 filter rules may be added or deleted. Also, as details of an 238 attack become known, filters may be revised to either generate 239 fewer false positives (desirable packets lost) or false negatives 240 (undesirable packets not dropped). 242 Customer switching - SDN hardware and software switches within the 243 end-network that the customer has decided to implement ingress 244 filtering policies within. 246 Customer controller - SDN controller software within the end-network 247 that instantiates filter rules provided by the filter management 248 application. This controller is capable of managing switches 249 within the customer end-network, at least. This controller must 250 convey relevant limitations of any managed hardware or software 251 switches to the filter management application so that rules can be 252 expressed at the proper granularity given the available resources 253 for implementing filter logic. 255 ISP switching - SDN hardware and software switches within an ISP 256 network, at least some of which are made available to the customer 257 for implementation of customer-controlled filtering rules. 259 ISP controller - SDN controller software in an ISP's network may be 260 responsible for handling all switch configuration within the ISP 261 network, other than the filtering rules provided by the customer. 262 For instance, this controller may implement the ISP's initial 263 ingress policies and actions, switching decisions within the ISP's 264 network, QoS policies, and other egress functions and policies 265 desired by the ISP.
267 ISP sub-controller - SDN software in an ISP's network that acts as 268 an intermediary between switches below it, and instances of other 269 controller software above it. The sub-controller is responsible 270 for presenting appropriate views of the underlying network to each 271 controller above, and for enforcing policy rules on the types of 272 actions that each controller can perform on the underlying 273 network. 275 These are all functional elements, and in some cases might be 276 combined with each other or with other concrete elements. For instance, the 277 customer controller and filter management application could be run on 278 the same host(s) and be implemented within a single codebase. 280 The ISP sub-controller is the most important element for enabling 281 inter-domain SDN, and the majority of this document describes it. 283 There may be additional elements that are part of a fully detailed 284 real-world system. For instance, detection and classification of 285 attack traffic can be a complex task, and may involve multiple other 286 systems before input is provided to the filter management 287 application. This may involve monitoring, analysis, and other 288 functions provided by software or appliances in the customer network, 289 coordination with cloud services and other mitigation techniques, 290 operator alerting and possibly operator interaction and approval of a 291 suggested response. In a basic implementation, however, the filter 292 management could be a manual, operator-driven process. None of this 293 specifically matters to this architecture, as ultimately filtering 294 rules will be defined, monitored, and managed over time, no matter 295 how those rules are determined or coordinated with other facets of 296 system operation.
298 || 299 Customer Network || ISP Network 300 || 301 +-------------+ || 302 | Filter | || +------------+ 303 | Management | || | ISP | 304 | Application | || | Controller | 305 +-------------+ || +------------+ 306 A A || A 307 | | Restricted OpenFlow | 308 Controller | +------------------------+ +----------+ 309 API | || | | | 310 V || V V | 311 +------------+ || +----------------+ | Normal 312 | Customer | || | ISP | | OpenFlow 313 | Controller | || | Sub-Controller | | 314 +------------+ || +----------------+ | 315 A || A | 316 | Normal || | Normal | 317 | OpenFlow || | OpenFlow | 318 V || V V 319 +-----------+ Filtered +-------------------+ Un-Filtered 320 | Customer | Data Flow | ISP | Data Flow 321 | Switching |<------------------| Switching |<------------- 322 +-----------+ || +-------------------+ 323 || 325 Figure 1 327 As illustrated in Figure 1, interfaces between elements of this 328 architecture are fundamentally supported using the OpenFlow protocol. 329 This is a key factor in making it possible to implement using 330 existing customer and ISP switching elements, as OpenFlow has become 331 commonly available. However, OpenFlow is generally used only within 332 an administrative domain, and not inter-network between different 333 administrative domains. Including the sub-controller element in this 334 architecture enables OpenFlow to be utilized inter-network. Without 335 requiring SDN protocol modifications, the sub-controller defined here 336 provides one method to advance from the current single-domain state- 337 of-practice in SDN, towards giving SDN software a wider interdomain 338 reach and scope of view and influence. This will enable better 339 global optimization of traffic flows, while continuing to respect the 340 policies desired by each network's operators. 
342 We refer to the northbound interface of the ISP Sub-Controller as 343 "Restricted OpenFlow", because the vocabulary of OpenFlow messages 344 that can be passed on this interface is limited to only those that 345 conform with the requirements listed previously. Restricted OpenFlow 346 is not a new protocol or standard, but just a subset of OpenFlow that 347 is safe to use for implementing customer-controlled filtering 348 functionality in an interdomain scenario. The specific subset and 349 per-message restrictions are detailed later in this document. 351 Within its own network, a Filter Management Application can use any 352 controller API that local Customer Controllers make available in 353 order to program and monitor filter rules within the Customer 354 Switching elements. 356 In the figure, we show "Normal OpenFlow" being used between many 357 elements, but this could actually be any southbound interface 358 protocol supported between controllers and switching elements in an 359 SDN architecture, and does not have to be OpenFlow specifically. It 360 is simply shown here because at the present time, it is widely 361 available and has been used in early implementations of this 362 architecture. 364 It is not illustrated in the diagram, but as an alternative, a 365 Customer Controller element could interface with the ISP Sub- 366 Controller using Restricted OpenFlow, instead of the Filter 367 Management Application making this control connection into the ISP's 368 network. Depending on the specific design of the Filter Management 369 Application, this may be desirable to simplify its codebase and 370 operations. 372 The following sections of this document discuss configuration and 373 operation of the elements in the ISP and customer networks 374 respectively. 376 4. Customer Network 378 End-network architectures vary widely, for instance from SOHO to 379 enterprise campuses. 
Some may have multiple sub-networks, be multi- 380 homed to different ISPs, support multiple address families (e.g. 381 IPv4 and IPv6), use VPNs and tunnels to other networks, etc. The 382 filter management application within the end-network may need to 383 understand these aspects of the network configuration, but they should 384 not be relevant to other entities in the architecture. 386 The filter management application interfaces with both controller 387 software within the customer end-network and the sub- 388 controller within an ISP (the "upstream" inter-domain interface). 390 The purpose of the filter management application is to take a series 391 of rules or logic that identifies undesirable traffic (as input from 392 other software or an operator) and synthesize them into a list of 393 filters for upstream implementation, and another list of filters for 394 local use within the customer network. These lists may be the same, 395 or different, based on the capabilities of the switches, such as the 396 number of rules they can simultaneously support or their ability to 397 match certain packet fields. 399 The filter management application maintains connections with sub- 400 controllers at upstream ISPs using a restricted variant of OpenFlow. 401 It uses this to add, remove, and modify upstream filtering logic. 403 Managing filters within both the customer and ISP network switches 404 may be useful, for instance, to have coarse-grained logic that 405 removes the bulk of unwanted traffic upstream, and fine-grained logic 406 that further sifts traffic at ingress to the customer network. It 407 may also be possible to exploit more functionality within the end- 408 network to do stateful filtering or employ other techniques that may 409 not be possible or available in upstream ISP filters. However, it is 410 possible that the customer network includes legacy equipment and does 411 not support SDN at all.
In this case, the filter management 412 application only controls upstream filters. 414 The filter management application (or SDN controller) within the 415 customer network listens for restricted OpenFlow connections from the 416 sub-controllers in the ISP networks. For security reasons, 417 connections should only be accepted from designated sub-controller 418 addresses, otherwise an attacker might be able to extract filter 419 logic and changes from the application and determine ways to 420 dynamically evade filtering. Operators will need to exchange IP 421 address (and possibly port number) information to be used for these 422 restricted OpenFlow connections so that they can be made securely, 423 allowed through firewalls, etc. Connecting to multiple customer 424 filter management applications (e.g. with OpenFlow master/slave 425 roles) is possible in order to support redundancy and failover, and 426 is not complicated, but details are not discussed in this document. 428 5. ISP Network 430 There is a wide range of sizes and architectures for ISP networks, 431 for instance serving different types of customers (residential, 432 commercial, government, etc), and with different types of access 433 networks and technologies (e.g. cable, fiber, wireless/cellular, 434 satellite, etc). The specific way that an individual ISP configures, 435 manages, and controls its switching elements is intended to be 436 unaffected by implementation of customer-controlled filtering rules 437 as described here. We provide a generic description of how the sub- 438 controller element can be introduced and coexist with other control 439 structures and systems within the ISP network, either based on SDN or 440 other technologies. 442 There are many different specific ways the technology can be 443 instantiated, but the same general rules apply.
For instance, an ISP 444 may create virtual routers for each customer and dedicate resources 445 of those customer-specific virtual routers for instantiating 446 filtering rules. In this case, there could be one sub-controller per 447 customer and the sub-controller would be able to work almost directly 448 with the designated virtual routers for that customer. 449 Alternatively, another ISP might have a single sub-controller that 450 all customers connect to, and that sub-controller could be 451 responsible for limiting customer filtering rules to only occupy some 452 fraction of the hardware flow table resources available within ISP 453 switching elements common to all customers. The same concepts and 454 configuration rules apply in all cases, even though there is a range 455 of complexity, scaling, and performance requirements for the sub- 456 controller element in different ISP networks. 458 5.1. Sub-Controller Configuration 460 Since a sub-controller proxies for the switching elements, it needs 461 to behave like them and initiate OpenFlow connections upward to 462 controllers that it has been configured to connect to. 464 The sub-controller has different security considerations on its 465 intradomain OpenFlow connections with switches versus its connections 466 to other controllers that may be interdomain. Intradomain 467 connections to switches and other intradomain controllers are likely 468 to stay entirely within the ISP's infrastructure and may be on 469 portions of the network engineered for isolation from all customer 470 traffic and logically or physically separated from customer 471 accessible resources. In this case, sub-controller to switching 472 component connections can be configured in any way that meets the 473 ISP's security requirements. 475 For interdomain customer controller connections, packets may traverse 476 sections of the network that are shared or could be visible to other 477 parties.
The OpenFlow specification [OF1.3] notes "The OpenFlow 478 channel is usually encrypted using TLS, but may be run directly over 479 TCP". If an ISP permits the sub-controller to use TCP without TLS when 480 making connections to customer controllers, it may be exposing both 481 parties to privacy risks [RFC6973] and other types of attacks. TLS 482 SHOULD be used for all interdomain sub-controller to controller 483 connections. 485 It is not expected that client certificates are useful for either 486 switches or for the sub-controller when acting in its client role, 487 however, they might be used in any cases deemed appropriate. 489 Since updates to rule sets are latency sensitive and may happen in a 490 relatively hostile setting (e.g. a customer trying to push drop rules 491 to a sub-controller while suffering from a large DDoS attack), it is 492 beneficial for sub-controller communications to be out-of-band from 493 normal data services to customers. For instance, this could be via a 494 separately provisioned (e.g. assured forwarding) service class using 495 some codepoint agreed between the provider and its customers. If 496 the interdomain control traffic is forwarded in-band with data 497 traffic, some types of attacks will be capable of disrupting the 498 control flow and could prohibit mitigations from being effectively 499 triggered.
The customer-designated flow tables can be configured 509 with default drop or forwarding rules, as desired, so that a match is 510 always found, and no "packet in" events need to be generated to 511 customer controllers. 513 5.2. Sub-Controller Operation 515 When starting, an ISP sub-controller needs to establish OpenFlow 516 connections upwards to controllers above it. These controllers may 517 belong to the ISP itself, as well as multiple customers. The number 518 of connections may be relatively large, compared to typical SDN 519 usages, since an ISP may have hundreds or thousands of customers that 520 it is enabling to control filtering. A sub-controller implementation 521 SHOULD include features to rate-limit outgoing connections to 522 customer controllers. This can be under operator control. 524 Intradomain connections to ISP controllers may be made 525 preferentially, prior to customer connections. 527 An ISP sub-controller will accept OpenFlow connections from the ISP's 528 switching elements. It will maintain an internal view of the state 529 of those switch resources, as directly reported to it. 531 When processing events coming up from ISP switching elements, the 532 sub-controller determines whether to handle the event internally, or 533 route the event through to an upper ISP controller. Events can be 534 routed directly to the ISP controllers without modification by the 535 sub-controller, or they may be modified in order to provide a 536 simplified or virtualized view of the underlying resources to the ISP 537 controllers, depending on ISP desires and configuration. 539 Note that the sub-controller SHOULD NOT forward events to the 540 customer OpenFlow connections.
This prevents switching elements in 541 the ISP network from performing poorly due to waiting upon responses 542 from the customer, and helps to alleviate concerns about a customer's 543 controller being able to degrade performance in the ISP network or 544 impact the customer's own SLAs (e.g. for latency), either 545 inadvertently or on purpose. 547 There are different possible implementations of a sub-controller in 548 terms of the number of switches that it can manage. A simple sub- 549 controller might manage only a single switch, and in this case it 550 will act only as a proxy and filter for OpenFlow messages. It may 551 even modify and relay most messages directly without performing its 552 own queries, inventing its own responses, etc. More complex sub- 553 controllers will manage multiple switches, and will have more complex 554 logic for processing messages, because simple manipulation and 555 relaying will not be possible with multiple switches. For instance, 556 an OpenFlow echo request message expects a single reply, not multiple 557 replies. 559 Specific message types in the OpenFlow 1.3 specification and the 560 proper responses to them on sub-controller reception either from a 561 switch or a controller are described below. These are all written for 562 the more complex situation of a sub-controller that manages multiple 563 switches, as the single-switch situation is trivial. 565 Immutable Messages 567 OFPT_HELLO (0) - This message is processed by the sub- 568 controller and not relayed. 570 OFPT_ERROR (1) - This message is processed by the sub- 571 controller and not relayed. 573 OFPT_ECHO_REQUEST (2) - This message is replied to with an echo 574 reply whenever it is received, and it is never passed through. 576 OFPT_ECHO_REPLY (3) - This message is processed by the sub- 577 controller and not relayed. 579 OFPT_EXPERIMENTER (4) - Treatment and generation of these 580 messages is left to be determined by each implementation.
582 Switch Configuration Messages 584 OFPT_FEATURES_REQUEST (5) - This message should not be received 585 from a switch. When received from a controller, a reply is 586 generated indicating the capabilities of the sub-controller, 587 and the functionalities that it can synthesize across the set 588 of switches that it manages. The sub-controller creates its 589 own datapath ID representing the sub-controller, and does not 590 feed through the datapath IDs or features of the individual 591 switches, but only a synthesized view of them. 593 OFPT_FEATURES_REPLY (6) - This message should not be received 594 from a controller. When received from a switch, it is 595 processed by the sub-controller and not relayed to other 596 controllers above it. 598 OFPT_GET_CONFIG_REQUEST (7) - This message should not be 599 received from a customer controller, though it might be 600 received from an ISP controller. If the sub-controller 601 homogenizes configuration across managed switches, then it can 602 generate a proper reply to an ISP controller for this message. 604 OFPT_GET_CONFIG_REPLY (8) - This message may be sent to an ISP 605 controller and provide an indication of a homogeneous configuration 606 across the managed switches below the sub-controller. 608 OFPT_SET_CONFIG (9) - This message should not be received from 609 a customer controller, though it might be received from an ISP 610 controller. If the sub-controller homogenizes configuration 611 across managed switches, then it can take appropriate action 612 and generate a proper reply to an ISP controller for this 613 message. 615 Asynchronous Messages 617 OFPT_PACKET_IN (10) - These messages may be received from 618 switches, if the sub-controller has configured the switches to 619 generate them. They can be relayed to ISP controllers, but 620 should not be relayed to customer controllers (e.g. according 621 to the "master", "equal", or "slave" OpenFlow controller 622 roles).
624 OFPT_FLOW_REMOVED (11) - These messages may be received from 625 switches, if the sub-controller has configured the switches to 626 generate them. They can be relayed to ISP controllers, but 627 should not be relayed to customer controllers (e.g. according 628 to the "master", "equal", or "slave" OpenFlow controller 629 roles). 631 OFPT_PORT_STATUS (12) - These messages may be received from 632 switches, if the sub-controller has configured the switches to 633 generate them. They can be relayed to ISP controllers, but 634 should not be relayed to customer controllers (e.g. according 635 to the "master", "equal", or "slave" OpenFlow controller 636 roles). 638 Controller Command Messages 640 OFPT_PACKET_OUT (13) 642 OFPT_FLOW_MOD (14) - These are the primary messages from a 643 customer controller that will be validated, translated, and 644 conveyed into additional messages to managed switches. When 645 received from an ISP controller, on the other hand, they may be 646 passed through without change. In either case, they may be 647 passed through by replicating each message to all managed 648 switches, or by conveying specific messages that translate the 649 contents of the message as appropriate for each individual 650 switch. 652 OFPT_GROUP_MOD (15) - These messages are treated in the same 653 way as the OFPT_FLOW_MOD messages. 655 OFPT_PORT_MOD (16) - These messages should not be received from 656 a customer controller, and when received from an ISP controller 657 require special processing by the sub-controller in translating 658 the port view that the ISP controller has to particular ports 659 across the set of managed switches. 661 OFPT_TABLE_MOD (17) - This message does not seem to be 662 particularly useful, and we do not expect it to be received 663 from either ISP or customer controllers.
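The OFPT_FLOW_MOD fan-out described above might be sketched as follows. This is a purely illustrative fragment; `relay_flow_mod`, the `translate` helper, and the message representation are hypothetical stand-ins for the validation and per-switch rewriting an implementation would perform:

```python
# Hypothetical sketch of OFPT_FLOW_MOD handling in a sub-controller
# that manages multiple switches.  `translate` stands in for the
# customer-rule validation/rewriting described in this document.

def relay_flow_mod(msg, sender, switches, translate):
    """Return the list of (switch, message) pairs to emit.

    Messages from an ISP controller are replicated unchanged to all
    managed switches; messages from a customer controller are first
    validated and translated for each individual switch.
    """
    out = []
    for sw in switches:
        if sender == "isp":
            out.append((sw, msg))                  # pass through unchanged
        else:
            out.append((sw, translate(msg, sw)))   # per-switch rewrite
    return out

# Example: a customer flow-mod is rewritten per switch by a trivial
# stand-in translator that assigns a customer-specific table name.
pairs = relay_flow_mod({"match": "10.0.0.0/24"}, "customer",
                       ["sw1", "sw2"],
                       lambda m, sw: dict(m, table=f"{sw}-cust"))
```

Replicating versus translating per switch is the design choice the text leaves open; the sketch simply parameterizes it through the `translate` callback.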
665 Multipart Messages 667 OFPT_MULTIPART_REQUEST (18) - These messages might be received 668 from any type of controller, and their contents should be 669 processed according to the other guidelines in this list. 670 These messages may be generated by the sub-controller in any 671 case where a multipart request is needed to convey information 672 to switches. 674 OFPT_MULTIPART_REPLY (19) - These messages might be received 675 from switches, and their contents should be processed according 676 to the other guidelines in this list. These messages may be 677 generated by the sub-controller in any case where they are needed 678 to convey information that it generates internally for 679 controllers. 681 Barrier Messages 683 OFPT_BARRIER_REQUEST (20) - This can be generated for 684 operations going to switches from the sub-controller. It might 685 be received from either customer or ISP controllers and applies 686 to the indicated messages in either case. 688 OFPT_BARRIER_REPLY (21) - This message might be received by the 689 sub-controller from switches. It can also be generated by the 690 sub-controller in response to ISP or customer controllers' 691 barrier requests. 693 Queue Configuration Messages 695 OFPT_QUEUE_GET_CONFIG_REQUEST (22) - Queue configuration 696 setting is outside of the OpenFlow protocol scope, and may be 697 difficult to synthesize across multiple managed switches, so 698 these messages might not be accepted or processed by a sub- 699 controller. They may be generated by the sub-controller itself 700 in learning about the queue configuration of managed switches. 702 OFPT_QUEUE_GET_CONFIG_REPLY (23) - These messages may be 703 received by the sub-controller from switches, but it does not 704 seem useful to create a way to generate them from the sub- 705 controller to either ISP or customer controllers. 707 Controller Role Change Request Messages 709 OFPT_ROLE_REQUEST (24) - Role requests may be received from 710 either customer or ISP controllers.
In either case, the sub- 711 controller should process them normally, with the exception 712 that the customer controller roles influence only their 713 capabilities within the envelope of control delegated to the 714 particular customer, and not a global role. The sub-controller 715 may generate role request messages to managed switches in order 716 to control its role relative to other sub-controllers or ISP 717 controllers accessing the same switches. Generally, only an 718 "equal" or "master" role is sensible for a sub-controller, 719 since it would not be able to carry out obligations to customer 720 or ISP controllers otherwise. 722 OFPT_ROLE_REPLY (25) - This message is received from managed 723 switches and also generated in response to role requests from 724 ISP or customer controllers. 726 Asynchronous Message Configuration 728 OFPT_GET_ASYNC_REQUEST (26) - There does not seem to be a need 729 to accept or process these messages from customer controllers, 730 though they might be received from an ISP controller. They can 731 be generated by the sub-controller for managed switches. 733 OFPT_GET_ASYNC_REPLY (27) - This message might be generated in 734 response to an ISP controller's request, or received from a 735 switch in response to a request from the sub-controller. 737 OFPT_SET_ASYNC (28) - This message might be received from an 738 ISP controller, though it does not seem useful to process it from 739 customer controllers. It can be generated by the sub- 740 controller and sent to managed switches. 742 Meters and Rate Limiters Configuration Messages 744 OFPT_METER_MOD (29) - Meter control may be useful for both 745 customer and ISP controllers, and the sub-controller should 746 translate these into appropriate messages to the managed 747 switches.
The view of meters provided to controllers from the 748 sub-controller will need to be synthesized from an aggregated view 749 of meters on the managed switches, and this requires specific 750 logic in the sub-controller. 752 Flow table modifications from customer controllers first need to be 753 validated by checking them against the particular flow table 754 resources designated for modification by the particular customer, or 755 transformed in some way to customer-specific flow table resources on 756 the individual managed switches. After this, the content of the 757 modification needs to be checked in order to make sure that the 758 format of the rule only supports a Drop action or a Goto-Table action 759 for a table designated by the ISP to that customer as a Stage-3 flow 760 table (described below). The sub-controller may ensure this by 761 transforming the indicated Goto-Table target into values it controls 762 and tracks for the individual switches being managed by it. 764 In OpenFlow, rules are organized into flow tables, and sets of rules 765 across flow tables are linked via Goto-Table actions. 767 In order to ensure that the ISP retains capabilities to control 768 traffic through its network, and that the only thing being granted to 769 the customer is an ability to specify a drop preference on offending 770 traffic, the sub-controller views the underlying switch flow-table 771 organization as three stages. 773 Stage-1 (ISP Controlled): These contain the rules initially 774 applied to packets ingressing the ISP's network, and are where the 775 ISP implements its own ingress filtering and other actions. At the 776 end of the Stage-1 flow tables, entries based on the destination 777 IP address are matched against customer-specific prefixes, and 778 Goto-Table actions specify the Stage-2 flow tables to be checked 779 for customer-specified filtering rules. 781 Stage-2 (Customer Controlled): Customers have access to these flow 782 tables.
They MUST contain only Drop, Forward, or Goto-Table 783 actions. The Goto-Table actions MUST indicate a Stage-3 784 table selected by the ISP. Each Stage-2 flow table corresponds to 785 a particular customer and is dynamically managed by that 786 customer. These can be safely managed without interaction with 787 the ISP. 789 Stage-3 (ISP Controlled): These flow tables are used to implement 790 the ISP's additional forwarding logic and other processing needed 791 before an accepted packet is placed at an egress port. Since 792 these flow tables are not reached unless Stage-2 logic has already 793 indicated the traffic is desirable, the functionality accessed 794 through Stage-3 tables (e.g. tunnels) can be protected 795 from DDoS attacks once they are detected and distinguished from 796 desirable traffic. 798 The sub-controller permits only Stage-2 rules to be modified by the 799 customer controllers, limits the flow table resources used in Stage-2 800 by each customer, and limits the format of rules permitted to be 801 included in Stage-2. 803 6. Discussion 805 The FlowVisor architecture [SGY09] is similar to the one described in 806 this document, but has some key distinctions. FlowVisor allows for 807 the partitioning of a network into a series of logical "slices" via a 808 set of flow classification rules. The rules are managed 809 by a customized OpenFlow controller running the FlowVisor software, 810 which acts as a transparent proxy for OpenFlow network traffic. Each 811 unique slice has one or more OpenFlow controllers associated with it 812 (called "guests"). Messages to or from all guests are routed through 813 the FlowVisor proxy, which is then responsible for examining OpenFlow 814 messages it observes and rewriting (or discarding) them as necessary 815 to ensure the messages only affect the pieces of the network assigned 816 to be part of that slice.
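The Stage-2 rule restriction described in the previous section can be sketched as a small validator. This is purely illustrative: the action encoding and the function name are hypothetical, and a real sub-controller would operate on OpenFlow instruction structures rather than tuples:

```python
# Hypothetical validator enforcing the Stage-2 constraints: customer
# rules may contain only Drop, Forward, or Goto-Table actions, and
# any Goto-Table target is rewritten to the Stage-3 table the ISP
# designated for that customer.  Encodings are illustrative only.

def validate_stage2_actions(actions, customer_stage3_table):
    """Return the rewritten action list, or None if the rule is rejected."""
    out = []
    for act in actions:
        kind = act[0]
        if kind in ("drop", "forward"):
            out.append(act)
        elif kind == "goto_table":
            # Never trust the customer-supplied table id; substitute
            # the ISP-controlled Stage-3 table for this customer.
            out.append(("goto_table", customer_stage3_table))
        else:
            return None  # any other action type is rejected

    return out

# A customer rule with a forged Goto-Table target is rewritten;
# a rule with any other action type is rejected outright.
ok = validate_stage2_actions([("drop",), ("goto_table", 99)], 7)
bad = validate_stage2_actions([("output", 1)], 7)
```

Rewriting the Goto-Table target rather than merely checking it is what keeps the Stage-3 tables entirely under ISP control even when a customer controller misbehaves.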
818 FlowVisor could potentially be used to implement the ideas defined in 819 this document. The caveats to this include: 821 Scaling the Proxy - In order for the rules to be properly applied in 822 the FlowVisor system, all messages must pass through a single 823 proxy, which may have scaling implications. In contrast, the more 824 specific sub-controller functions we described can be 825 distributed even to per-customer sub-controllers, because 826 customers do not require a unified, consistent view of the ISP 827 switching elements, nor do their directives interact with one 828 another, since the Stage-2 flow tables are isolated from one 829 another. 831 Hierarchical Considerations - While FlowVisor supports hierarchical 832 deployment through partitioning or overlap of flowspaces, applying 833 rules in the presence of certain types of network designs (e.g. 834 multiple layers of NAT) could be a challenge. 836 Generic - FlowVisor provides a superset of the functionality 837 required to support upstream DDoS filtering. The additional 838 functionality may be useful in many cases, but is more heavyweight 839 than a focused solution for DDoS filtering. 841 Isolation - Rewritten rules need to be carefully audited to ensure 842 the transformations have no adverse effects at any level of the 843 managed network(s). 845 When there are multiple SDN applications controlling flow table 846 entries, there is a potential for conflict, as dealt with by systems 847 like FortNOX [PSY12]. By creating the three-stage approach to the 848 flow table composition, and only allowing customer access to 849 individual Stage-2 flow tables (or entries), we have largely avoided 850 the types of conflicts that require more complex analysis and 851 enforcement systems. 853 While this document focuses on the sub-controller as a hierarchical 854 SDN approach to filtering traffic, the idea could be extended to 855 support other applications as well.
For example, quality of service 856 or traffic-shaping applications could be requested at the upstream 857 provider in the same way as (D)DoS filtering mechanisms. Such an 858 extension may be possible with only moderate additional sub-controller 859 complexity, following a similar three-stage design for the flow table 860 configuration. 862 The focus in this document is on implementation of DDoS mitigations 863 between an attack target and its direct ISPs. Pushing defenses 864 further upstream, to higher-tier ISPs or to ISPs closer to the attack 865 traffic sources, is something we consider to be a separate problem 866 that other protocol mechanisms can deal with [EDC15]. 868 Several position papers from the 2015 IAB CARIS workshop discuss 869 inter-domain collaboration for DDoS defense and possible application 870 of SDN technology towards this goal. [BJD15] notes that standards 871 are needed for threat feed exchanges, service requirement profiles, 872 and dynamic negotiation with service providers. The system described 873 in this document shows that, at least for immediate ISPs of an attack 874 target, mitigations can be implemented without new protocols or 875 standards. The delegation of limited flow table entry management to 876 an ISP's customers could be viewed as a simple, fast, and effective 877 means of providing the dynamic negotiation functionality. [XHH15] discusses 878 use of "big data analysis" working in conjunction with an SDN-based 879 network. Use of any type of attack identification or 880 characterization technique is fully compatible with the architecture 881 described in this document, including high-performance traffic 882 analysis appliances and other on-premises equipment.
Identification 883 and characterization of an attack is performed by a functional entity 884 that provides input into the filter management application that we 885 describe, and the way this input is determined is orthogonal to 886 whether SDN or some other mechanism is used to signal and effect a 887 mitigation function. 889 7. Acknowledgements 891 Work on some of the material discussed in this document was sponsored 892 by the United States Department of Homeland Security (contract 893 HSHQDC-15-C-00017), but it does not necessarily reflect the position 894 or the policy of the Government, and no official endorsement should be 895 inferred. Support and feedback from Dan Massey was helpful in 896 assessing and describing this usage of SDN for DDoS mitigation. 898 8. IANA Considerations 900 This memo includes no request to IANA. 902 9. Security Considerations 904 DDoS mitigation is a security functionality. The architecture 905 described in this document improves on the current ability of end- 906 networks to survive attacks and of ISP networks to accurately filter 907 DDoS traffic. 909 Security of the system described in this document depends heavily on 910 security of the OpenFlow control connections themselves, which is 911 discussed in the body of the document as part of configuring the 912 switches, sub-controllers, and controllers. 914 Security flaws in implementations or applications of this 915 architecture could be used to attack an end network by rerouting or 916 dropping its traffic within its ISPs' networks; however, such attacks 917 would likely need to leverage underlying flaws in OpenFlow that would have 918 other implications more serious than this. 920 10. References 922 10.1. Normative References 924 [RFC2119] Bradner, S., "Key words for use in RFCs to Indicate 925 Requirement Levels", BCP 14, RFC 2119, 926 DOI 10.17487/RFC2119, March 1997, 927 . 929 10.2. Informative References 931 [BJD15] Boucadair, M., Jacquenet, C., and L.
Dunbar, "Integrating 932 Hosted Security Functions with on Premises Security 933 Functions - Joint Force to Mitigate Internet Attacks", 934 2015. 936 2015 IAB CARIS workshop 938 [EDC15] Eddy, W., Dailey, J., and G. Clark, "BGP Flow 939 Specification Validation Using the Resource Public Key 940 Infrastructure", 2015. 942 [OF1.3] Open Networking Foundation, "OpenFlow Switch Specification 943 Version 1.3.0", June 2012, 944 . 948 [PSY12] Porras, P., Shin, S., Yegneswaran, V., Fong, M., Tyson, 949 M., and G. Gu, "A Security Enforcement Kernel for OpenFlow 950 Networks", August 2012, 951 . 954 [RFC6973] Cooper, A., Tschofenig, H., Aboba, B., Peterson, J., 955 Morris, J., Hansen, M., and R. Smith, "Privacy 956 Considerations for Internet Protocols", RFC 6973, 957 DOI 10.17487/RFC6973, July 2013, 958 . 960 [RFC7149] Boucadair, M. and C. Jacquenet, "Software-Defined 961 Networking: A Perspective from within a Service Provider 962 Environment", RFC 7149, DOI 10.17487/RFC7149, March 2014, 963 . 965 [RFC7426] Haleplidis, E., Ed., Pentikousis, K., Ed., Denazis, S., 966 Hadi Salim, J., Meyer, D., and O. Koufopavlou, "Software- 967 Defined Networking (SDN): Layers and Architecture 968 Terminology", RFC 7426, DOI 10.17487/RFC7426, January 969 2015, . 971 [SGY09] Sherwood, R., Gibb, G., Yap, K., Appenzeller, G., Casado, 972 M., McKeown, N., and G. Parulkar, "FlowVisor: A Network 973 Virtualization Layer", October 2009, 974 . 977 [XHH15] Xia, L., Fu, T., He, C., Gondrom, T., and D. He, "An 978 Elastic and Adaptive Anti-DDoS Architecture Based on Big 979 Data Analysis and SDN for Operators", 2015. 981 2015 IAB CARIS workshop 983 Authors' Addresses 985 Wesley Eddy 986 MTI Systems 988 Email: wes@mti-systems.com 990 Gilbert Clark 991 MTI Systems 993 Email: gclark@mti-systems.com 995 Justin Dailey 996 MTI Systems 998 Email: justin@mti-systems.com