idnits 2.17.1

draft-white-i2rs-use-case-00.txt:

  Checking boilerplate required by RFC 5378 and the IETF Trust (see https://trustee.ietf.org/license-info):
  ----------------------------------------------------------------------------
     No issues found here.

  Checking nits according to https://www.ietf.org/id-info/1id-guidelines.txt:
  ----------------------------------------------------------------------------
     No issues found here.

  Checking nits according to https://www.ietf.org/id-info/checklist :
  ----------------------------------------------------------------------------
  ** The document seems to lack a Security Considerations section.
  ** The document seems to lack an IANA Considerations section.  (See Section 2.2 of https://www.ietf.org/id-info/checklist for how to handle the case when there are no actions for IANA.)

  Miscellaneous warnings:
  ----------------------------------------------------------------------------
  == The copyright year in the IETF Trust and authors Copyright Line does not match the current year.
  == The document seems to lack the recommended RFC 2119 boilerplate, even if it appears to use RFC 2119 keywords.  (The document does seem to have the reference to RFC 2119 which the ID-Checklist requires.)
  -- The document date (February 18, 2013) is 4056 days in the past.  Is this intentional?

  Checking references for intended status: Informational
  ----------------------------------------------------------------------------
  == Missing Reference: 'IRS' is mentioned on line 94, but not defined
  == Missing Reference: 'BGP' is mentioned on line 133, but not defined
  == Unused Reference: 'RFC2119' is defined on line 606, but no explicit reference was found in the text

     Summary: 2 errors (**), 0 flaws (~~), 5 warnings (==), 1 comment (--).

     Run idnits with the --verbose option for more detailed information about the items above.

--------------------------------------------------------------------------------

Routing Area Working Group                                       R. White
Internet-Draft                                                   Verisign
Intended status: Informational                                   S. Hares
Expires: August 22, 2013                          Hickory Hill Consulting
                                                              R. Fernando
                                                             Cisco Systems
                                                         February 18, 2013

          Use Cases for an Interface to the Routing System
                     draft-white-i2rs-use-case-00

Abstract

Programmatic interfaces that provide control over individual forwarding devices in a network promise to reduce operational costs while improving scaling, control, and visibility into the operation of large scale networks.  To this end, several programmatic interfaces have been proposed.  OpenFlow, for instance, provides a mechanism to replace the dynamic control plane processes on individual forwarding devices throughout a network with off box processes that interact with the forwarding tables on each device.  Another example is NETCONF, which provides a fast and flexible mechanism to interact with device configuration and policy.

There is, however, no proposal which provides an interface to all aspects of the routing system as a system.  Such a system would not interact with the forwarding system on individual devices, but rather with the control plane processes already used to discover the best path to any given destination through the network, as well as with the routing information base (RIB), which feeds the forwarding table the information needed to actually switch traffic at a local level.

This document describes a set of use cases such a system could fulfill.
It is designed to provide underlying support for the framework, policy, and other drafts describing the Interface to the Routing System (IRS).

Status of this Memo

This Internet-Draft is submitted in full conformance with the provisions of BCP 78 and BCP 79.

Internet-Drafts are working documents of the Internet Engineering Task Force (IETF).  Note that other groups may also distribute working documents as Internet-Drafts.  The list of current Internet-Drafts is at http://datatracker.ietf.org/drafts/current/.

Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time.  It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress."

This Internet-Draft will expire on August 22, 2013.

Copyright Notice

Copyright (c) 2013 IETF Trust and the persons identified as the document authors.  All rights reserved.

This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents (http://trustee.ietf.org/license-info) in effect on the date of publication of this document.  Please review these documents carefully, as they describe your rights and restrictions with respect to this document.  Code Components extracted from this document must include Simplified BSD License text as described in Section 4.e of the Trust Legal Provisions and are provided without warranty as described in the Simplified BSD License.

Table of Contents

   1.  Introduction . . . . . . . . . . . . . . . . . . . . . . . . .  4
   2.  Optimized Exit Control . . . . . . . . . . . . . . . . . . . .  4
   3.  Distributed Reaction to Network Based Attacks . . . . . . . .   7
   4.  Remote Service Routing . . . . . . . . . . . . . . . . . . . .  8
   5.  Within Data Center Routing . . . . . . . . . . . . . . . . . . 10
   6.  Temporary Overlays between Data Centers . . . . . . . . . . .  12
   7.  Central membership computation for MPLS based VPNs . . . . . . 13
   8.  Normative References . . . . . . . . . . . . . . . . . . . . . 14
   Authors' Addresses . . . . . . . . . . . . . . . . . . . . . . . . 15

1.  Introduction

The Interface to the Routing System Framework [IRS] describes a mechanism where the distributed control plane can be augmented by an outside control plane through an open, accessible interface, including the Routing Information Base (RIB), in individual devices.  This represents a "halfway point" between completely replacing the traditional distributed control plane and directly configuring devices to distribute policy or modifications to routing through off-board processes.  This draft proposes a set of use cases that explain where the work described in [IRS] will be useful.  The goal is not only to inform the community's understanding of where IRS fits in the larger scheme of SDN proposals, but also to inform the requirements, framework, and specification of IRS so that they best fit the purposes for which this type of programmatic interface makes the most sense.

Toward this end, the authors have gathered a number of different use cases, representing not only complex modifications of the control plane, including interaction with applications and network conditions, but also simpler use cases.
The array of use cases presented here should provide the reader with a solid understanding of the power of an SDN solution that augments, rather than replaces, traditional distributed control planes.

Each use case is presented in its own section.

2.  Optimized Exit Control

At edges where traffic exits along two or more possible paths, it is often desirable to choose a path based on more information than the dynamic control plane provides.  For instance, a network operator may want to take into account factors such as:

o  Cost per unit of data sent, including time of day variations, surcharges over a specific amount of data transmitted, and surcharges for transmitting data to specific types of destinations.

o  Urgency of data traffic or flow.

o  Exit point performance, including historical jitter, delay, and available bandwidth, possibly on a per destination basis.

o  Availability of a specific destination through a given link on a per destination basis (more specific than the routing protocol provides).

A number of possible solutions have been proposed or deployed in the past.  For instance, the necessary metrics could be added to [BGP], or any other routing protocol, to provide the necessary information, and fine-tuned algorithms could be developed and deployed.  Massive changes to well known and understood distributed control plane protocols to resolve a single use case, however, are not likely to be productive for the community as a whole.  It is often difficult to justify the added complexity in the databases and algorithms of routing protocols to solve what is considered a point case.

Another alternative has been the development of specific appliances designed to monitor the information necessary to provide an optimal edge decision, and then to use some automated configuration mechanism to transmit the decision to the edge routers.  An example is illustrated in the figure below.

      |-----------------R1-----------|
      |                 |            |
   Internal Network  Controller  External Network
      |                 |            |
      |-----------------R2-----------|

The controller in this network must:

o  Discover the topology of the network from R1 and R2.

o  Compare the current traffic flow information to policies set administratively by the network operator.

o  Monitor the flow of traffic from the perspective of R1 and R2.

o  Inject forwarding information to directly impact the traffic flow at the edge devices, or modify the policy of the existing distributed (dynamic) control plane already running in the network.

Many of these steps are challenging for currently available solutions.

To discover the topology at the edge routers, the controller can either participate in the control plane or walk the local routing table using a network management protocol.  Neither of these options is optimal in this case, because the controlling process cannot interact dynamically with the local topology information in near real time through such mechanisms.
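The following minimal sketch (in Python, written against an entirely hypothetical IRS-style event format; no such encoding is defined by this document) illustrates the kind of near real time interaction a controller would need: it keeps a local copy of an edge router's RIB current as install and remove notifications arrive, rather than periodically re-walking the routing table.

      # Hypothetical sketch: maintain a near real time copy of an edge
      # router's RIB from a stream of IRS-style change notifications.
      # The RibRoute fields and the event names are assumptions made
      # only for illustration.

      from dataclasses import dataclass

      @dataclass(frozen=True)
      class RibRoute:
          prefix: str        # destination prefix (NLRI)
          table_id: int      # forwarding instance identifier
          next_hop: str
          metric: int
          preference: int
          installer: str     # identifier of the installing process

      class RibView:
          """Controller-side copy of one device's RIB, fed by events."""
          def __init__(self):
              self.routes = {}   # (prefix, table_id) -> RibRoute

          def apply_event(self, event_type, route):
              key = (route.prefix, route.table_id)
              if event_type == "install":
                  self.routes[key] = route
              elif event_type == "remove":
                  self.routes.pop(key, None)

      # Usage sketch: one RibView per edge router, fed by whatever
      # transport IRS eventually defines (in-memory events shown here).
      r1 = RibView()
      r1.apply_event("install",
                     RibRoute("192.0.2.0/24", 0, "198.51.100.1", 10, 100, "bgp"))
      r1.apply_event("remove",
                     RibRoute("192.0.2.0/24", 0, "198.51.100.1", 10, 100, "bgp"))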
Injecting forwarding information directly into the RIB on the individual devices in this network is possible today through the configuration of static routes using some external mechanism, such as SNMP, NETCONF, or direct external interaction with the devices' CLI.  None of these options is attractive, because:

o  They modify the actual configuration of the device (unlike a dynamic routing process).

o  They are too persistent (routes installed through static configuration persist across device reboots).

o  The controller cannot interact with the routing table in parallel with other routing processes.  For instance, when a routing process attempts to install a new route in the routing table, there is often a callback or other notification to the other routing processes running on the same device; this notification provides important information the controller can take into account in its view of the current state of the device's routing table.  Interface level events also often trigger notifications from the RIB to local routing processes; these notifications would be invaluable to the controller for modifying injected routing state in reaction to network topology events.

o  Routes installed by an off box controller through the CLI or an XML interface are difficult to redistribute into other protocols to draw traffic to a specific exit point, and it can be difficult to fine tune how these injected routes interact with routes learned through other routing processes.

IRS can resolve these issues by providing an open interface to the local RIB on each device, allowing the controller to interact with the RIB just as a local routing process would.  This would allow the controlling process to see the topology information in the RIB dynamically, receiving near real time updates for route removals, installs, and other events, without relying on static configuration to inject forwarding information each device can use.  A sketch of an exit selection decision built on these capabilities follows the summary list below.

Summary of IRS Capabilities and Interactions:

o  IRS should provide the ability to read the local RIB of each forwarding device, including the destination prefix (NLRI), a table identifier (if the forwarding device has multiple forwarding instances), the metric of each installed route, a route preference, and an identifier indicating the installing process.

o  The ability to monitor the available routes installed in the RIB of each forwarding device, including near real time notification of route installation and removal.  This information must include the destination prefix (NLRI), a table identifier (if the forwarding device has multiple forwarding instances), the metric of the installed route, and an identifier indicating the installing process.

o  The ability to install destination based routes in the local RIB of each forwarding device.  This must include the ability to supply the destination prefix (NLRI), a table identifier (if the forwarding device has multiple forwarding instances), a route preference, a route metric, a next hop, an outbound interface, and a route process identifier.

o  The ability to interact with various policies configured on the forwarding devices, in order to inform the policies implemented by the dynamic routing processes.  This interaction SHOULD be through existing configuration mechanisms, such as NETCONF, and SHOULD be recorded in the configuration of the local device so operators are aware of the full policy implemented in the network from the running configuration.

o  The ability to interact with traffic flow and other network traffic level measurement protocols and systems, in order to determine path performance, top talkers, and other information required to make an informed path decision based on locally configured policy.
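The following sketch (Python; the exit statistics, policy weights, and route record layout are illustrative assumptions, not part of any defined IRS interface) shows how a controller might combine measured exit performance with operator policy to choose an exit for a destination and express that choice as a destination based route of the kind described in the list above.

      # Hypothetical sketch: choose an exit router for a destination
      # based on measured cost, delay, and jitter, and describe the
      # route the controller would then ask IRS to install.  Names and
      # weights are illustrative only.

      from dataclasses import dataclass

      @dataclass
      class ExitStats:
          router: str
          next_hop: str
          cost_per_gb: float   # monetary cost from the operator's contract
          delay_ms: float      # measured delay toward the destination
          jitter_ms: float     # measured jitter toward the destination

      def score(stats, urgency):
          # Lower is better; urgent traffic weights performance over cost.
          return ((1.0 - urgency) * stats.cost_per_gb
                  + urgency * (stats.delay_ms + stats.jitter_ms))

      def choose_exit(prefix, exits, urgency):
          best = min(exits, key=lambda e: score(e, urgency))
          # Mirrors the fields the capability list above requires.
          return {"device": best.router, "prefix": prefix, "table_id": 0,
                  "next_hop": best.next_hop, "metric": 10,
                  "preference": 200, "process_id": "irs-controller"}

      exits = [ExitStats("R1", "203.0.113.1", 0.8, 40.0, 3.0),
               ExitStats("R2", "203.0.113.5", 0.2, 90.0, 12.0)]
      route = choose_exit("198.51.100.0/24", exits, urgency=0.9)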
3.  Distributed Reaction to Network Based Attacks

Quickly modifying the control plane to reroute traffic for one destination while leaving a standard configuration in place (filters, metrics, and other policy mechanisms) is a challenge, but this is precisely the challenge faced by a network engineer attempting to deal with a network incursion.  The ability to redirect specific flows of information or specific classes of traffic into, through, and back out of traffic analyzers on the fly is crucial in these situations.  The following network diagram provides an illustration of the problem.

   Valid Source---\    /--R2--------------------\
                   R1                            R3---Valid Destination
   Attack Source--/    \--Monitoring Device-----/

Modifying the cost of the link between R1 and R2 to draw the attack traffic through the monitoring device in the distributed control plane will, of necessity, also draw the valid traffic through the monitoring device.  Drawing valid traffic through a monitoring device introduces delay, jitter, and other quality of service issues, as well as posing a problem for the monitoring device itself in terms of traffic load and management.

An IRS controller could stand between the detection of the attack and the control plane to facilitate the rapid modification of the control and forwarding planes to either block the traffic or redirect it to analysis devices connected to the network.

Summary of IRS Capabilities and Interactions:

o  The ability to monitor the available routes installed in the RIB of each forwarding device, including near real time notification of route installation and removal.  This information must include the destination prefix (NLRI), a table identifier (if the forwarding device has multiple forwarding instances), the metric of the installed route, and an identifier indicating the installing process.

o  The ability to install source and destination based routes in the local RIB of each forwarding device.  This must include the ability to supply the destination prefix (NLRI), the source prefix (NLRI), a table identifier (if the forwarding device has multiple forwarding instances), a route preference, a route metric, a next hop, an outbound interface, and a route process identifier (a sketch of such a route appears after this list).

o  The ability to install a route to a null destination, effectively filtering traffic to this destination.

o  The ability to interact with various policies configured on the forwarding devices, in order to inform the policies implemented by the dynamic routing processes.  This interaction SHOULD be through existing configuration mechanisms, such as NETCONF, and SHOULD be recorded in the configuration of the local device so operators are aware of the full policy implemented in the network from the running configuration.

o  The ability to interact with traffic flow and other network traffic level measurement protocols and systems, in order to determine path performance, top talkers, and other information required to make an informed path decision based on locally configured policy.
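As an illustration of the source and destination based and null route capabilities above, the following sketch (Python; the route record layout and next hop values are assumptions for illustration only) builds the route that would steer traffic from the attack source through the monitoring device, and the null route that would simply discard traffic to the destination under attack.

      # Hypothetical sketch: describe the two mitigation routes
      # discussed in this section.  Field names mirror the capability
      # list above; nothing here is a defined IRS encoding.

      def redirect_route(src_prefix, dst_prefix, monitor_next_hop):
          """Steer traffic from src_prefix toward dst_prefix via the
          monitoring device, leaving other sources on the normal path."""
          return {"src_prefix": src_prefix, "dst_prefix": dst_prefix,
                  "table_id": 0, "next_hop": monitor_next_hop,
                  "metric": 1, "preference": 250,
                  "process_id": "irs-mitigation"}

      def null_route(dst_prefix):
          """Discard all traffic to dst_prefix by routing it to a null
          destination (note this also drops traffic from valid sources)."""
          return {"dst_prefix": dst_prefix, "table_id": 0,
                  "next_hop": "null", "metric": 1, "preference": 250,
                  "process_id": "irs-mitigation"}

      # Redirect only the attack source for analysis; fall back to a
      # null route if the traffic should simply be dropped.
      mitigation = [redirect_route("192.0.2.66/32", "198.51.100.10/32", "10.0.0.9"),
                    null_route("198.51.100.10/32")]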
4.  Remote Service Routing

In hub and spoke overlay networks, there is always an issue with balancing between the information held in the spoke routing table, optimal routing through the network underlying the overlay, and mobility.  Most solutions in this space use some form of centralized route server that acts as a directory of all reachable destinations and next hops, a protocol by which spoke devices and this route server communicate, and caches at the remote sites.

An IRS solution would use the same elements, but with a different control plane.  Remote sites would register (or advertise, through some standard routing protocol such as BGP) the reachable destinations at each site, along with the address of the router (or other device) used to reach each destination.  These would, as always, be stored in a route server (or several redundant route servers) at a central location.

When a remote site sends a set of packets to the central location that are eventually destined to some other remote site, the central location can forward this traffic and, at the same time, directly insert the correct routing information into the remote site's routing table.  If the location of the destination changes, the route server can directly modify the routing information at the remote site as needed.

An interesting aspect of this solution is that no new and specialized protocols are needed between the remote sites and the centralized route server(s).  Normal routing protocols can be used to notify the centralized route server(s) of modifications in reachability information, and the route server(s) can respond as needed, based on local algorithms optimized for a particular application or network.  For instance, short lived flows might be allowed to simply pass through the hub site with no reaction, while longer lived flows might warrant a specific route to be installed in the remote router.  Algorithms can also be developed to optimize traffic flow through the overlay, and to remove routing entries from remote devices when they are no longer needed, based on far greater intelligence than simple non-use for some period of time.  A minimal sketch of such a route server decision appears at the end of this section.

Summary of IRS Capabilities and Interactions:

o  The ability to read the local RIB of each forwarding device, including the destination prefix (NLRI), a table identifier (if the forwarding device has multiple forwarding instances), the metric of each installed route, a route preference, and an identifier indicating the installing process.

o  The ability to monitor the available routes installed in the RIB of each forwarding device, including near real time notification of route installation and removal.  This information must include the destination prefix (NLRI), a table identifier (if the forwarding device has multiple forwarding instances), the metric of the installed route, and an identifier indicating the installing process.

o  The ability to install destination based routes in the local RIB of each forwarding device.  This must include the ability to supply the destination prefix (NLRI), a table identifier (if the forwarding device has multiple forwarding instances), a route preference, a route metric, a next hop, an outbound interface, and a route process identifier.
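The sketch below (Python; the flow record, the threshold, and the route fields are illustrative assumptions) shows the kind of local algorithm the hub route server might run: short lived flows simply continue to pass through the hub, while a long lived flow toward another remote site causes a specific route to be pushed into the originating spoke's RIB.

      # Hypothetical sketch: hub-side decision for remote service
      # routing.  A flow that stays active past a threshold triggers a
      # direct route at the originating spoke; everything else keeps
      # passing through the hub.  Names and thresholds are assumptions.

      from dataclasses import dataclass

      @dataclass
      class Flow:
          src_site: str         # spoke that originated the traffic
          dst_prefix: str       # destination prefix at another spoke
          active_seconds: int   # how long the flow has been seen at the hub

      # Directory built from normal routing advertisements at the hub:
      # prefix -> address of the device used to reach that prefix.
      directory = {"198.51.100.0/24": "192.0.2.20"}

      LONG_LIVED = 30   # seconds; purely illustrative

      def routes_to_push(flows):
          """Return the specific routes the hub would install at spokes."""
          pushes = []
          for flow in flows:
              if flow.active_seconds >= LONG_LIVED and flow.dst_prefix in directory:
                  pushes.append({"device": flow.src_site,
                                 "prefix": flow.dst_prefix,
                                 "next_hop": directory[flow.dst_prefix],
                                 "preference": 200, "metric": 10,
                                 "process_id": "irs-route-server"})
          return pushes

      pushes = routes_to_push([Flow("spoke-a", "198.51.100.0/24", 45),
                               Flow("spoke-c", "198.51.100.0/24", 5)])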
5.  Within Data Center Routing

Data Centers have evolved into massive topologies with thousands of server racks and millions of hosts.  Data Centers use BGP with ECMP, ISIS (with multiple LAGs), or other protocols to tie the data center together.  Data centers are currently designed around a three or four tier structure consisting of servers, top-of-rack switches, aggregation switches, and routers interfacing the data center to the Internet.  Microsoft's usage of BGP in the data center, described in [Lapukh-BGP], examines many of these elements of data center design.

One key element of these Data Center routing infrastructures is the ability to quickly read topology information and execute configuration from a centralized location.  Key to this environment is the tight feedback loop between learning about topology changes or loading changes and instantiating new routing policy.  Without IRS, many Data Centers are using extra physical or logical topologies to work around the lack of such a feedback loop.

For example, Microsoft's network uses BGP because the topology state could be read from BGP implementations in a consistent fashion; routers offered a good BGP interface to topology information.  Microsoft might have chosen a different routing protocol (such as ISIS) if that protocol's state had been as easy to obtain.

An IRS solution would use the same elements, but with a different control plane.  The IRS enabled control plane could provide the Data Center's three or four tier infrastructure with the quick access to topology and data flow information needed for traffic flow optimization.  Changes to the Data Center infrastructure made via IRS could then have a tight feedback loop.

Again, this solution would reduce the need for new and specialized protocols while giving the Data Center the control it desires.  The IRS routing interface could also be extended to virtual routers.

Summary of IRS Capabilities and Interactions:

o  The ability to read the local RIB of each forwarding device, including the destination prefix (NLRI), a table identifier (if the forwarding device has multiple forwarding instances), the metric of each installed route, a route preference, and an identifier indicating the installing process.

o  The ability to monitor the available routes installed in the RIB of each forwarding device, including near real time notification of route installation and removal.  This information must include the destination prefix (NLRI), a table identifier (if the forwarding device has multiple forwarding instances), the metric of the installed route, and an identifier indicating the installing process.

o  The ability to install destination based routes in the local RIB of each forwarding device.  This must include the ability to supply the destination prefix (NLRI), a table identifier (if the forwarding device has multiple forwarding instances), a route preference, a route metric, a next hop, an outbound interface, and a route process identifier.

o  The ability to read the tables of other local protocol processes running on the device.  This reading action SHOULD be supported through an import/export interface which can present the information in a consistent manner across all protocol implementations, rather than using a protocol specific model for each type of available process (see the sketch following this list).

o  The ability to inject information directly into the local tables of other protocol processes running on the forwarding device.  This injection SHOULD be supported through an import/export interface which can inject routing information in a consistent manner across all protocol implementations, rather than using a protocol specific model for each type of available process.

o  The ability to interact with various policies configured on the forwarding devices, in order to inform the policies implemented by the dynamic routing processes.  This interaction SHOULD be through existing configuration mechanisms, such as NETCONF, and SHOULD be recorded in the configuration of the local device so operators are aware of the full policy implemented in the network from the running configuration.

o  The ability to interact with traffic flow and other network traffic level measurement protocols and systems, in order to determine path performance, top talkers, and other information required to make an informed path decision based on locally configured policy.
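The import/export interface described in the "read" and "inject" items above is sketched here (Python; the per-protocol record layouts and the normalized form are assumptions chosen only for illustration): routes exported by different protocol implementations are presented to the controller in one consistent shape, regardless of which process produced them.

      # Hypothetical sketch: normalize routes exported by different
      # protocol processes into one consistent record, so the
      # controller does not need a protocol specific model per process.
      # Field names are illustrative assumptions.

      from dataclasses import dataclass

      @dataclass(frozen=True)
      class NormalizedRoute:
          prefix: str
          next_hop: str
          protocol: str      # identifier of the exporting process
          preference: int
          metric: int

      def from_bgp(entry):
          # Assumed shape of a BGP export record.
          return NormalizedRoute(entry["nlri"], entry["next_hop"], "bgp",
                                 preference=20, metric=entry.get("med", 0))

      def from_isis(entry):
          # Assumed shape of an ISIS export record.
          return NormalizedRoute(entry["prefix"], entry["adjacency"], "isis",
                                 preference=115, metric=entry["wide_metric"])

      exported = [from_bgp({"nlri": "10.1.0.0/16", "next_hop": "10.0.0.1",
                            "med": 50}),
                  from_isis({"prefix": "10.2.0.0/16", "adjacency": "10.0.0.2",
                             "wide_metric": 30})]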
6.  Temporary Overlays between Data Centers

Data Centers within one organization may operate as a single entity even though the Data Centers are geographically distributed.  Applications are load balanced within Data Centers and between data centers to take advantage of cost economics in power, storage, and server availability for compute resources.  Applications are also transferred to alternate data centers in case of failures within a data center.  To reduce recovery time during a failure, Data Centers often replicate user storage between two or more data centers.  During the transfer of stored information prior to a Data Center to Data Center move, the Data Center controllers need to dynamically acquire a large amount of inter-data center bandwidth through an overlay network, often during off hours.

IRS could provide the connection between the overlay network configuration, local policies, and the control plane to dynamically bring a large bandwidth inter-data center overlay or channel into use, and then to remove it from use when the data transfer is completed.

Similarly, during a fail-over, a control process within the data centers interacts with a group host process and the network to seamlessly move the processing to another data center.  During the fail-over, additional process state may need to be moved as well to restart the system.  The difference in this case is the immediate and urgent need to move systems.  If an application (such as a medical or banking service) pays for this type of fail-over, it is likely the service will also pay for preemption of network bandwidth.  IRS can allow the Data Center network and the network connecting the data centers to preempt other best-effort traffic to send this priority data flow.  After the high priority data flow has finished, the networks can return to their previous condition.
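A time bounded change of this kind might look like the following sketch (Python; the overlay route record, the window times, and the automatic rollback behavior are assumptions for illustration, corresponding to the time based interaction listed in the summary below): the controller computes both the change and its expiry, so the network returns to its previous condition without further intervention.

      # Hypothetical sketch: bring a high bandwidth inter data center
      # overlay into use for a bounded window, with an explicit expiry
      # so the change rolls back automatically.  Names and times are
      # illustrative assumptions.

      from dataclasses import dataclass
      from datetime import datetime, timedelta

      @dataclass
      class TimedOverlayRoute:
          prefix: str            # remote data center prefix via the overlay
          overlay_next_hop: str
          preference: int
          not_before: datetime   # when the route may be installed
          not_after: datetime    # when it must be withdrawn (rollback)

      def overlay_window(prefix, next_hop, start, hours):
          return TimedOverlayRoute(prefix, next_hop, preference=200,
                                   not_before=start,
                                   not_after=start + timedelta(hours=hours))

      def active(route, now):
          """True while the route should remain installed."""
          return route.not_before <= now < route.not_after

      # Off hours bulk replication window: 02:00 to 06:00 local time.
      window = overlay_window("203.0.113.0/24", "192.0.2.200",
                              datetime(2013, 2, 19, 2, 0), hours=4.0)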
Summary of IRS Capabilities and Interactions:

o  The ability to read the local RIB of each forwarding device, including the destination prefix (NLRI), a table identifier (if the forwarding device has multiple forwarding instances), the metric of each installed route, a route preference, and an identifier indicating the installing process.

o  The ability to monitor the available routes installed in the RIB of each forwarding device, including near real time notification of route installation and removal.  This information must include the destination prefix (NLRI), a table identifier (if the forwarding device has multiple forwarding instances), the metric of the installed route, and an identifier indicating the installing process.

o  The ability to install destination based routes in the local RIB of each forwarding device.  This must include the ability to supply the destination prefix (NLRI), a table identifier (if the forwarding device has multiple forwarding instances), a route preference, a route metric, a next hop, an outbound interface, and a route process identifier.

o  The ability to interact with various policies configured on the forwarding devices, in order to inform the policies implemented by the dynamic routing processes.  This interaction SHOULD be through existing configuration mechanisms, such as NETCONF, and SHOULD be recorded in the configuration of the local device so operators are aware of the full policy implemented in the network from the running configuration.

o  The ability to interact with policies and configurations on the forwarding devices using time based processing, either through timed auto-rollback or some other mechanism.  This interaction SHOULD be through existing configuration mechanisms, such as NETCONF, and SHOULD be recorded in the configuration of the local device so operators are aware of the full policy implemented in the network from the running configuration.

o  The ability to interact with traffic flow and other network traffic level measurement protocols and systems, in order to determine path performance, top talkers, and other information required to make an informed path decision based on locally configured policy.

7.  Central membership computation for MPLS based VPNs

MPLS based VPNs use route target extended communities to express membership information.  Every PE router holds incoming BGP NLRI, processes them to determine membership, and then imports the NLRI into the appropriate MPLS/VPN routing tables.  This consumes both memory and compute resources on each of the PE devices.

An alternative approach is to monitor routing updates on every PE from the attached CEs and then compute membership in a central manner.  Once computed, the routes are pushed to the VPN RIBs of the participating PEs.

This centralization of membership control has a few advantages.

o  The membership mechanism (route targets) need not be configured on each of the PEs and can be expressed once, centrally.

o  No resources on the PEs need to be spent categorizing routes into the VRF tables to which they belong and filtering out unwanted state.

o  Doing the computation centrally means almost unlimited compute capacity is available for computing membership, so it can be done in a scalable manner.

o  More sophisticated routing policies and filters can be applied during the central import/export process than can be expressed and performed using the traditional route target mechanism (see the sketch following this list).

o  Routes can be selectively pushed only to the participating PEs, further reducing the memory load on the individual routers in the network.  This further obviates the need for distributed mechanisms such as RT constraints to reduce unnecessary path state in the routers.
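The sketch below (Python; the route and VRF records are simplified assumptions, and a real deployment would also carry route distinguishers, labels, and policy hooks) shows the core of the central computation: matching the route targets carried on CE-learned routes against each PE VRF's import policy and producing the per-PE set of routes to push.

      # Hypothetical sketch: central route target membership
      # computation for MPLS/VPN.  The central process matches the
      # export route targets on each route against each VRF's import
      # route targets and builds a per-PE push list.  Record layouts
      # are simplified for illustration.

      from dataclasses import dataclass, field

      @dataclass(frozen=True)
      class VpnRoute:
          prefix: str
          originating_pe: str
          export_rts: frozenset   # route targets attached at the ingress PE

      @dataclass
      class Vrf:
          pe: str
          name: str
          import_rts: set = field(default_factory=set)

      def compute_membership(routes, vrfs):
          """Return {(pe, vrf_name): [routes to push to that VRF]}."""
          pushes = {}
          for vrf in vrfs:
              selected = [r for r in routes
                          if r.export_rts & vrf.import_rts      # RT match
                          and r.originating_pe != vrf.pe]       # no self-push
              if selected:
                  pushes[(vrf.pe, vrf.name)] = selected
          return pushes

      routes = [VpnRoute("10.1.1.0/24", "pe1", frozenset({"65000:1"})),
                VpnRoute("10.2.2.0/24", "pe2", frozenset({"65000:2"}))]
      vrfs = [Vrf("pe2", "cust-a", {"65000:1"}), Vrf("pe1", "cust-b", {"65000:2"})]
      membership = compute_membership(routes, vrfs)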
Note that central computation of membership can be applied to other scenarios as well, such as VPLS, MVPNs, MAC VPNs, and so on.  Depending on the scenario, what gets monitored from the CE might vary.  Central computation will especially help VPLS, where multi-homing and load balancing using distributed techniques have been particularly challenging.

Also note that one of the biggest promises of central route computation is the simplification and reduction of computation and memory load on all devices in the network.  This use case is just one example that illustrates these benefits of central computation very well.

Summary of IRS Capabilities and Interactions:

o  The ability to read the loc-RIB-In BGP table that holds all the routes the CE has provided to a PE router.

o  The ability to install destination based routes in the local RIB of the PE devices.  This must include the ability to supply the destination prefix (NLRI), a table identifier, a route preference, a route metric, and a next-hop tunnel through which traffic would be carried.

8.  Normative References

   [RFC2119]  Bradner, S., "Key words for use in RFCs to Indicate
              Requirement Levels", BCP 14, RFC 2119, March 1997.

Authors' Addresses

   Russ White
   Verisign
   12061 Bluemont Way
   Reston, VA  20190
   USA

   Email: riwhite@verisign.com

   Susan Hares
   Hickory Hill Consulting
   7453 Hickory Hill
   Saline, MI  48176
   USA

   Email: shares@ndzh.com

   Rex E. Fernando
   Cisco Systems
   170 W Tasman Dr
   San Jose, CA  95134
   USA

   Email: rex@cisco.com