idnits 2.17.1 draft-ietf-sacm-use-cases-04.txt: Checking boilerplate required by RFC 5378 and the IETF Trust (see https://trustee.ietf.org/license-info): ---------------------------------------------------------------------------- No issues found here. Checking nits according to https://www.ietf.org/id-info/1id-guidelines.txt: ---------------------------------------------------------------------------- No issues found here. Checking nits according to https://www.ietf.org/id-info/checklist : ---------------------------------------------------------------------------- No issues found here. Miscellaneous warnings: ---------------------------------------------------------------------------- == The copyright year in the IETF Trust and authors Copyright Line does not match the current year == Line 79 has weird spacing: '... new secur...' -- The document date (October 21, 2013) is 3833 days in the past. Is this intentional? Checking references for intended status: Informational ---------------------------------------------------------------------------- == Missing Reference: 'Optional' is mentioned on line 429, but not defined == Unused Reference: 'RFC2119' is defined on line 800, but no explicit reference was found in the text == Unused Reference: 'RFC2865' is defined on line 805, but no explicit reference was found in the text Summary: 0 errors (**), 0 flaws (~~), 5 warnings (==), 1 comment (--). Run idnits with the --verbose option for more detailed information about the items above. -------------------------------------------------------------------------------- 2 Security Automation and Continuous Monitoring WG D. Waltermire 3 Internet-Draft NIST 4 Intended status: Informational D. 
Harrington 5 Expires: April 24, 2014 Effective Software 6 October 21, 2013 8 Endpoint Security Posture Assessment - Enterprise Use Cases 9 draft-ietf-sacm-use-cases-04 11 Abstract 13 This memo documents a sampling of use cases for securely aggregating 14 configuration and operational data and evaluating that data to 15 determine an organization's security posture. From these operational 16 use cases, we can derive common functional capabilities and 17 requirements to guide development of vendor-neutral, interoperable 18 standards for aggregating and evaluating data relevant to security 19 posture. 21 Status of This Memo 23 This Internet-Draft is submitted in full conformance with the 24 provisions of BCP 78 and BCP 79. 26 Internet-Drafts are working documents of the Internet Engineering 27 Task Force (IETF). Note that other groups may also distribute 28 working documents as Internet-Drafts. The list of current Internet- 29 Drafts is at http://datatracker.ietf.org/drafts/current/. 31 Internet-Drafts are draft documents valid for a maximum of six months 32 and may be updated, replaced, or obsoleted by other documents at any 33 time. It is inappropriate to use Internet-Drafts as reference 34 material or to cite them other than as "work in progress." 36 This Internet-Draft will expire on April 24, 2014. 38 Copyright Notice 40 Copyright (c) 2013 IETF Trust and the persons identified as the 41 document authors. All rights reserved. 43 This document is subject to BCP 78 and the IETF Trust's Legal 44 Provisions Relating to IETF Documents 45 (http://trustee.ietf.org/license-info) in effect on the date of 46 publication of this document. Please review these documents 47 carefully, as they describe your rights and restrictions with respect 48 to this document. 
Code Components extracted from this document must 49 include Simplified BSD License text as described in Section 4.e of 50 the Trust Legal Provisions and are provided without warranty as 51 described in the Simplified BSD License. 53 Table of Contents 55 1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . 3 56 2. Endpoint Posture Assessment . . . . . . . . . . . . . . . . . 3 57 2.1. Definition and Publication of Automatable Configuration 58 Guides . . . . . . . . . . . . . . . . . . . . . . . . . 5 59 2.2. Automated Checklist Verification . . . . . . . . . . . . 6 60 2.3. Organizational Software Policy Compliance . . . . . . . . 7 61 2.4. Detection of Posture Deviations . . . . . . . . . . . . . 7 62 2.5. Search for Signs of Infection . . . . . . . . . . . . . . 7 63 2.6. Remediation and Mitigation . . . . . . . . . . . . . . . 8 64 2.7. Endpoint Information Analysis and Reporting . . . . . . . 8 65 2.8. Asynchronous Compliance/Vulnerability Assessment at Ice 66 Station Zebra . . . . . . . . . . . . . . . . . . . . . . 9 67 2.9. Vulnerable Endpoint Identification . . . . . . . . . . . 10 68 2.10. Compromised Endpoint Identification . . . . . . . . . . . 10 69 2.11. Suspicious Endpoint Behavior . . . . . . . . . . . . . . 10 70 2.12. Traditional endpoint assessment with stored results . . . 11 71 2.13. NAC/NAP connection with no stored results using an 72 endpoint evaluator . . . . . . . . . . . . . . . . . . . 11 73 2.14. NAC/NAP connection with no stored results using a third- 74 party evaluator . . . . . . . . . . . . . . . . . . . . . 11 75 2.15. Repository Interaction - A Full Assessment . . . . . . . 12 76 2.16. Repository Interaction - Filtered Delta Assessment . . . 12 77 2.17. Direct Human Retrieval of Ancillary Materials. . . . . . 12 78 2.18. Register with repository for immediate notification of 79 new security vulnerability content that match a 80 selection filter. . . . . . . . . . . . . . . . . . . . . 12 81 2.19. Others... . . . . . . . 
. . . . . . . . . . . . . . . . . 12 82 3. IANA Considerations . . . . . . . . . . . . . . . . . . . . . 12 83 4. Security Considerations . . . . . . . . . . . . . . . . . . . 13 84 5. Acknowledgements . . . . . . . . . . . . . . . . . . . . . . 13 85 6. Change Log . . . . . . . . . . . . . . . . . . . . . . . . . 13 86 6.1. -03- to -04- . . . . . . . . . . . . . . . . . . . . . . 13 87 6.2. -02- to -03- . . . . . . . . . . . . . . . . . . . . . . 13 88 6.3. -01- to -02- . . . . . . . . . . . . . . . . . . . . . . 14 89 6.4. -00- to -01- . . . . . . . . . . . . . . . . . . . . . . 14 90 6.5. draft-waltermire-sacm-use-cases-05 to draft-ietf-sacm- 91 use-cases-00 . . . . . . . . . . . . . . . . . . . . . . 15 92 6.6. waltermire -04- to -05- . . . . . . . . . . . . . . . . . 16 93 7. References . . . . . . . . . . . . . . . . . . . . . . . . . 17 94 7.1. Normative References . . . . . . . . . . . . . . . . . . 17 95 7.2. Informative References . . . . . . . . . . . . . . . . . 17 96 Authors' Addresses . . . . . . . . . . . . . . . . . . . . . . . 18 98 1. Introduction 100 Our goal with this document is to improve our agreement on which 101 problems we're trying to solve. We need to start with short, simple 102 problem statements and discuss those by email and in person. Once we 103 agree on which problems we're trying to solve, we can move on to 104 propose various solutions and decide which ones to use. 106 This document describes example use cases for endpoint posture 107 assessment for enterprises. It provides a sampling of use cases for 108 securely aggregating configuration and operational data and 109 evaluating that data to determine the security posture of individual 110 endpoints, and, in the aggregate, the security posture of an 111 enterprise. 113 These use cases cross many IT security information domains. 
From 114 these operational use cases, we can derive common concepts, common 115 information expressions, functional capabilities and requirements to 116 guide development of vendor-neutral, interoperable standards for 117 aggregating and evaluating data relevant to security posture. 119 Using this standard data, tools can analyze the state of endpoints, 120 user activities and behaviour, and evaluate the security posture of 121 an organization. Common expression of information should enable 122 interoperability between tools (whether customized, commercial, or 123 freely available), and the ability to automate portions of security 124 processes to gain efficiency, react to new threats in a timely 125 manner, and free up security personnel to work on more advanced 126 problems. 128 The goal is to enable organizations to make informed decisions that 129 support organizational objectives, to enforce policies for hardening 130 systems, to prevent network misuse, to quantify business risk, and to 131 collaborate with partners to identify and mitigate threats. 133 It is expected that use cases for enterprises and for service 134 providers will largely overlap, but there are additional 135 complications for service providers, especially in handling 136 information that crosses administrative domains. 138 The output of endpoint posture assessment is expected to feed into 139 additional processes, such as policy-based enforcement of acceptable 140 state, verification and monitoring of security controls, and 141 compliance to regulatory requirements. 143 2. Endpoint Posture Assessment 144 Endpoint posture assessment involves orchestrating and performing 145 data collection and evaluating the posture of a given endpoint. 146 Typically, endpoint posture information is gathered and then 147 published to appropriate data repositories to make collected 148 information available for further analysis supporting organizational 149 security processes. 
151 Endpoint posture assessment typically includes: 153 o Collecting the attributes of a given endpoint; 155 o Making the attributes available for evaluation and action; and 157 o Verifying that the endpoint's posture is in compliance with 158 enterprise standards and policy. 160 As part of these activities it is often necessary to identify and 161 acquire any supporting content that is needed to drive data 162 collection and analysis. 164 The following is a typical workflow scenario for assessing endpoint 165 posture: 167 1. Some type of trigger initiates the workflow. For example, an 168 operator or an application might trigger the process with a 169 request, or the endpoint might trigger the process using an 170 event-driven notification. 172 QUESTION: Since this is about security automation, can we drop 173 the User and just use Application? Is there a better term to 174 use here? Once the policy is selected, the rest seems like 175 something we definitely would want to automate, so I dropped 176 the User part. 178 2. A user/application selects a target endpoint to be assessed. 180 3. A user/application selects which policies are applicable to the 181 target. 183 4. The application determines which (sets of) posture attributes 184 need to be collected for evaluation. 186 QUESTION: It was suggested that mentioning several common 187 acquisition methods, such as local API, WMI, Puppet, DCOM, 188 SNMP, CMDB query, and NEA, without forcing any specific method 189 would be good. I have concerns this could devolve into a 190 "what about my favorite?" contest. OTOH, the charter does 191 specifically call for use of existing standards where 192 applicable, so the use cases document might be a good neutral 193 location for such information, and might force us to consider 194 what types of external interfaces we might need to support 195 when we consider the requirements. 
It appears that the 196 generic workflow sequence would be a good place to mention 197 such common acquisition methods. 199 5. The application might retrieve previously collected information 200 from a cache or data store, such as a data store populated by an 201 asset management system. 203 6. The application might establish communication with the target, 204 mutually authenticate identities and authorizations, and collect 205 posture attributes from the target. 207 7. The application might establish communication with one or more 208 intermediary/agents, mutually authenticate their identities and 209 determine authorizations, and collect posture attributes about 210 the target from the intermediary/agents. Such agents might be 211 local or external. 213 8. The application communicates target identity and (sets of) 214 collected attributes to an evaluator, possibly an external 215 process or external system. 217 9. The evaluator compares the collected posture attributes with 218 expected values as expressed in policies. 220 QUESTION: Evaluator generates a report or log or notification 221 of some type? 223 The following subsections detail specific use cases for data 224 collection, analysis, and related operations pertaining to the 225 publication and use of supporting content. 227 2.1. Definition and Publication of Automatable Configuration Guides 229 A vendor manufactures a number of specialized endpoint devices. They 230 also develop and maintain an operating system for these devices that 231 enables end-user organizations to configure a number of security and 232 operational settings. As part of their customer support activities, 233 they publish a number of secure configuration guides that provide 234 minimum security guidelines for configuring their devices. 
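As a sketch of how a customer might query such a public content repository for applicable guides, consider the following; the repository interface, guide metadata fields, and matching rule are purely illustrative assumptions, since this document does not prescribe any particular data format or API:

```python
# Hypothetical published-guide metadata, keyed the way the vendor scopes
# each guide: device model plus operating system version.  These records
# and field names are illustrative only.
GUIDES = [
    {"model": "XR-100", "os": "4.2", "title": "XR-100 v4.2 baseline hardening"},
    {"model": "XR-100", "os": "4.3", "title": "XR-100 v4.3 baseline hardening"},
    {"model": "XR-200", "os": "4.3", "title": "XR-200 v4.3 baseline hardening"},
]

def applicable_guides(model, os_version):
    """Return the titles of published guides that apply to a deployed
    endpoint, matching on device model and operating system version."""
    return [g["title"] for g in GUIDES
            if g["model"] == model and g["os"] == os_version]
```

A real repository would likely also carry applicability metadata for installed add-on modules and licensed features, as described below, so that customers retrieve only the specialized configurations relevant to each device.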
236 Each guide they produce applies to a specific model of device and 237 version of the operating system and provides a number of specialized 238 configurations depending on the device's intended function and what 239 add-on hardware modules and software licenses are installed on the 240 device. To enable their customers to evaluate the security posture 241 of their devices to ensure that all appropriate minimal security 242 settings are enabled, they publish an automatable configuration 243 checklist using a popular data format that defines what settings to 244 collect using a network management protocol and appropriate values 245 for each setting. They publish these guides to a public content 246 repository that customers can query to retrieve applicable guides for 247 their deployed enterprise network infrastructure endpoints. 249 Guides could also come from sources other than a device vendor, such 250 as industry groups or regulatory authorities, or enterprises could 251 develop their own checklists. 253 2.2. Automated Checklist Verification 255 A financial services company operates a heterogeneous IT environment. 256 In support of their risk management program, they utilize vendor- 257 provided automatable security configuration checklists for each 258 operating system and application used within their IT environment. 259 Multiple checklists are used from different vendors to ensure 260 adequate coverage of all IT assets. 262 To identify what checklists are needed, they use automation to gather 263 an inventory of the software versions utilized by all IT assets in 264 the enterprise. This data gathering will involve querying existing 265 data stores of previously collected endpoint software inventory 266 posture data and actively collecting data from reachable endpoints as 267 needed utilizing network and systems management protocols.
268 Previously collected data may be provided by periodic data 269 collection, network connection-driven data collection, or ongoing 270 event-driven monitoring of endpoint posture changes. 272 Using the gathered software inventory data and associated asset 273 management data indicating the organizationally defined functions of 274 each endpoint, they locate and query each vendor's content repository 275 for the appropriate checklists. These checklists are cached locally 276 to reduce the need to download the checklist multiple times. 278 Driven by the setting data provided in the checklist, a combination 279 of existing configuration data stores and data collection methods is 280 used to gather the appropriate posture information from each 281 endpoint. Specific data is gathered based on the defined enterprise 282 function and software inventory of each endpoint. The data 283 collection paths used to collect software inventory posture will be 284 used again for this purpose. Once the data is gathered, the actual 285 state is evaluated against the expected state criteria in each 286 applicable checklist. Deficiencies are identified and reported to 287 the appropriate endpoint operators for remediation. 289 Checklists could also come from sources other than the application or 290 OS vendor, such as industry groups or regulatory authorities, or 291 enterprises could develop their own checklists. 293 2.3. Organizational Software Policy Compliance 295 Example Corporation, in support of compliance requirements, has 296 identified a number of secure baselines for different endpoint types 297 that exist across their enterprise IT environment. Determining which 298 baseline applies to a given endpoint is based on the organizationally 299 defined function of the device. 301 Each baseline, defined using an automatable standardized data format, 302 identifies the expected hardware, software and patch inventory, and 303 software configuration item values for each endpoint type.
As part 304 of their compliance activities, they require that all endpoints 305 connecting to their network meet the appropriate baselines. The 306 configuration settings of each endpoint are collected and compared 307 to the appropriate baseline whenever the endpoint connects to the 308 network and at least 309 once a day thereafter. These daily compliance checks evaluate the 310 posture of each endpoint and report on its compliance with the 311 appropriate baseline. 313 [TODO: Need to speak to how the baselines are identified for a given 314 endpoint connecting to the network.] 316 2.4. Detection of Posture Deviations 318 Example Corporation has established secure configuration baselines 319 for each different type of endpoint within their enterprise, 320 including network infrastructure, mobile, client, and server 321 computing platforms. These baselines define an approved list of 322 hardware, software (i.e., operating system, applications, and 323 patches), and associated required configurations. When an endpoint 324 connects to the network, the appropriate baseline configuration is 325 communicated to the endpoint based on its location in the network, 326 the expected function of the device, and other asset management data. 327 The endpoint is then checked for compliance with the baseline, and any 328 deviations are reported to the device's operators. Once the baseline has been 329 established, the endpoint is monitored for any change events 330 pertaining to the baseline on an ongoing basis. When a change occurs 331 to posture defined in the baseline, updated posture information is 332 exchanged, allowing operators to be notified and/or automated action 333 to be taken. 335 2.5. Search for Signs of Infection 336 The Example Corporation carefully manages endpoint security with 337 tools that implement the SACM standards. One day, the endpoint 338 security team at Example Corporation learns about a stealthy malware 339 package.
This malware has just been discovered but has already 340 spread widely around the world. Certain signs of infection have been 341 identified (e.g., the presence of certain files). The security team 342 would like to know which endpoints owned by the Example Corporation 343 have been infected with this malware. They use their tools to search 344 for the signs of infection and generate a list of infected endpoints. 346 The search for infected endpoints may be performed by gathering new 347 endpoint posture information regarding the presence of the signs of 348 infection. However, this might miss finding endpoints that were 349 previously infected but where the infection has now erased itself. 350 Such previously infected endpoints may be detected by searching a 351 database of previously gathered posture information for the signs of 352 infection. However, this will not work if the malware hides its 353 presence carefully or if the signs of infection were not included in 354 previous posture assessments. In those cases, the database may be 355 used to at least detect which endpoints previously had software 356 vulnerable to infection by the malware. 358 2.6. Remediation and Mitigation 360 When Example Corporation discovers that one of its endpoints is 361 vulnerable to infection, a process of mitigation and remediation is 362 triggered. The first step is mitigating the impact of the 363 vulnerability, perhaps by placing the endpoint into a safe network or 364 blocking network traffic that could infect the endpoint. The second 365 step is remediation: fixing the vulnerability. In some cases, these 366 steps may happen automatically and rapidly. In other cases, they may 367 require human intervention, either to decide what response is most 368 appropriate or to complete the steps, which are sometimes complex.
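The mitigate-then-remediate sequence described above can be sketched as a simple dispatcher; the action names, the severity-based containment rule, and the human-approval flag are illustrative assumptions, not part of any SACM specification:

```python
def respond_to_vulnerability(endpoint, severity, auto_ok):
    """Return the ordered response actions for a vulnerable endpoint.

    Step 1 (mitigation) limits exposure immediately; step 2 (remediation)
    fixes the vulnerability.  When auto_ok is False, remediation is
    deferred to a human operator instead of being applied automatically.
    """
    actions = []
    # Mitigation: contain first, based on how severe the exposure is.
    if severity == "critical":
        actions.append(("quarantine", endpoint))     # move to a safe network
    else:
        actions.append(("block-traffic", endpoint))  # filter risky traffic
    # Remediation: fix automatically, or escalate to a human when the
    # response is too complex or consequential to automate.
    if auto_ok:
        actions.append(("apply-patch", endpoint))
    else:
        actions.append(("notify-operator", endpoint))
    return actions
```

The same two-step structure applies when an infection (rather than a vulnerability) is discovered, as the next paragraph notes.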
370 These same steps of mitigation and remediation may be used when 371 Example Corporation discovers that one of its endpoints has become 372 infected with some malware. Alternatively, the infected endpoint may 373 simply be monitored or even placed into a honeynet or similar 374 environment to observe the malware's behavior and lead the attackers 375 astray. 377 QUESTION: Are remediation and mitigation within the scope of the WG, 378 and should the use case be included here? 380 2.7. Endpoint Information Analysis and Reporting 382 Freed from the drudgery of manual endpoint compliance monitoring, one 383 of the security administrators at Example Corporation notices (not 384 using SACM standards) that five endpoints have been uploading lots of 385 data to a suspicious server on the Internet. The administrator 386 queries the SACM database of endpoint posture to see what software is 387 installed on those endpoints and finds that they all have a 388 particular program installed. She then searches the database to see 389 which other endpoints have that program installed. All these 390 endpoints are monitored carefully (not using SACM standards), which 391 allows the administrator to detect that the other endpoints are also 392 infected. 394 This is just one example of the useful analysis that a skilled 395 analyst can do using the database of endpoint posture that SACM can 396 provide. 398 2.8. Asynchronous Compliance/Vulnerability Assessment at Ice Station 399 Zebra 401 A university team receives a grant to do research at a government 402 facility in the Arctic. The only network communications will be via 403 an intermittent, low-speed, high-latency, high-cost satellite link. 404 During their extended expedition they will need to show continued 405 compliance with the security policies of the university, the 406 government, and the provider of the satellite network, as well as keep 407 current on vulnerability testing.
Interactive assessments are 408 therefore not reliable, and since the researchers have very limited 409 funding they need to minimize how much money they spend on network 410 data. 412 Prior to departure they register all equipment with an asset 413 management system owned by the university, which will also initiate 414 and track assessments. 416 On a periodic basis -- either after a maximum time delta or when the 417 content repository has received a threshold level of new 418 vulnerability definitions -- the university uses the information in 419 the asset management system to put together a collection request for 420 all of the deployed assets that encompasses the minimal set of 421 artifacts necessary to evaluate all three security policies as well 422 as vulnerability testing. 424 In the case of new critical vulnerabilities, this collection request 425 consists only of the artifacts necessary for those vulnerabilities, 426 and collection is only initiated for those assets that could 427 potentially have a new vulnerability. 429 [Optional] Asset artifacts are cached in a local CMDB. When new 430 vulnerabilities are reported to the content repository, a request to 431 the live asset is made only if the artifacts in the CMDB are 432 incomplete and/or not current enough. 434 The collection request is queued for the next window of connectivity. 435 The deployed assets eventually receive the request, fulfill it, and 436 queue the results for the next return opportunity. 438 The collected artifacts eventually make it back to the university, 439 where the level of compliance and vulnerability exposure is calculated 440 and asset characteristics are compared to what is in the asset 441 management system for accuracy and completeness. 443 2.9. Vulnerable Endpoint Identification 445 Typically, vulnerability reports identify an executable or library 446 that is vulnerable or, in the worst case, only the software that is vulnerable.
447 This information is used to determine if an organization has one or 448 more endpoints that have exposure to a vulnerability (i.e., what 449 endpoints are vulnerable?). It is often necessary to know where you 450 are running vulnerable code and what configurations are in place on 451 the endpoint and upstream devices (e.g., IDS, firewall) that may 452 limit the exposure. All of this information, along with details on 453 the severity and impact of a vulnerability, is necessary to 454 prioritize remedies. 456 2.10. Compromised Endpoint Identification 458 Along with knowing if one or more endpoints are vulnerable, it is 459 also important to know if you have been compromised. Indicators of 460 compromise provide details that can be used to identify malware 461 (e.g., file hashes), identify malicious activity (e.g., command and 462 control traffic), and detect the presence of unauthorized/malicious 463 configuration items, among other indicators. While important, this goes beyond 464 determining organizational exposure. 466 2.11. Suspicious Endpoint Behavior 468 This use case describes the collaboration between 469 participants in an information security system when detecting 470 a connection attempt to a known-bad Internet host by a botnet zombie 471 that has made its way onto an organization's Information Technology 472 systems. The primary human actor is the Security Operations Center 473 Analyst, and the primary software actor is the configuration 474 assessment tool. Note, however, the dependencies on other tools, 475 such as asset management, intrusion detection, and messaging. 477 2.12. Traditional endpoint assessment with stored results 479 An external trigger initiates an assessment of an endpoint. The 480 Controller uses the data in the Datastore to look up authentication 481 information for the endpoint and passes that along with the 482 assessment request details to the Evaluator.
The Evaluator uses the 483 Endpoint information to request taxonomy information from the 484 Collector on the endpoint, which responds with those attributes. The 485 Evaluator uses that taxonomy information along with the information 486 in the original request from the Controller to request the 487 appropriate content from the Content Repository. The Evaluator uses 488 the content to derive the minimal set of endpoint attributes needed 489 to perform the assessment and makes that request. The Evaluator uses 490 the Collector response to do the assessment and returns the results 491 to the Controller. The Controller puts the results in the Datastore. 493 2.13. NAC/NAP connection with no stored results using an endpoint 494 evaluator 496 A mobile endpoint makes a VPN connection request. The NAC/NAP broker 497 requests the results of the VPN connection assessment from the 498 Controller. The Controller requests the VPN attributes from a 499 Content Repository. The Controller requests an evaluation of the 500 collected attributes from the Evaluator on the endpoint. The 501 endpoint performs the assessment and returns the results. The 502 Controller completes the original assessment request by returning the 503 results to the NAC/NAP broker, which uses them to set the level of 504 network access allowed to the endpoint. 506 QUESTION: I edited these from Gunnar's email of 9/11, to try to 507 reduce the use of "assessment", to focus on collection and 508 evaluation, and deal with use cases rather than architecture. I am 509 not sure I got all the concepts properly identified. 511 2.14. NAC/NAP connection with no stored results using a third-party 512 evaluator 514 A mobile endpoint makes a VPN connection request. The NAC/NAP broker 515 requests the results of the VPN connection assessment from the 516 Controller. The Controller requests the VPN attributes from a 517 Content Repository. 
The Controller requests an evaluation of the 518 collected attributes from an Evaluator in the network (rather than 519 trusting an evaluator on the endpoint). The Evaluator performs the 520 evaluation and returns the results. The Controller completes the 521 original assessment request by returning the results to the NAC/NAP 522 broker, which uses them to set the level of network access allowed to 523 the endpoint. 525 QUESTION: I edited these from Gunnar's email of 9/11, to try to 526 reduce the use of "assessment", to focus on collection and 527 evaluation, and deal with use cases rather than architecture. I am 528 not sure I got all the concepts properly identified. 530 2.15. Repository Interaction - A Full Assessment 532 An auditor at a health care provider needs to know the current 533 compliance level of her network, including enumeration of known 534 vulnerabilities, so she initiates a full enterprise-wide assessment. 535 For each endpoint on the network, after determining its taxonomical 536 classification, the assessment system queries the content repository 537 for all materials that apply to that endpoint. 539 2.16. Repository Interaction - Filtered Delta Assessment 541 Before heading out on a road trip, a rep checks out an iOS tablet 542 computer from the IT department. Before turning over the tablet, the 543 IT administrator first initiates a quick assessment to see if any new 544 vulnerabilities that potentially yield remote access or local 545 privilege escalation have been identified for that device type since 546 the device last had a full assessment. 548 2.17. Direct Human Retrieval of Ancillary Materials. 550 Preceding a HIPAA assessment, the local SSO wants to review the HIPAA 551 regulations to determine which assets do or do not fall under the 552 regulation. Following the assessment, he again queries the content 553 repository for more information about remediation strategies and 554 employee training materials. 556 2.18.
Register with repository for immediate notification of new 557 security vulnerability content that match a selection filter. 559 Interested in reducing the exposure time to new vulnerabilities and 560 compliance policy changes, the IT administrator registers with his 561 subscribed content repositories to receive immediate notification of 562 any changes to the vulnerability and compliance content that apply to 563 his managed assets. Receipt of notifications triggers an immediate 564 delta assessment against those assets that potentially match. 566 2.19. Others... 568 Additional use cases will be identified as we work through other 569 domains. 571 3. IANA Considerations 572 This memo includes no request to IANA. 574 4. Security Considerations 576 This memo documents, for Informational purposes, use cases for 577 security automation. While it is about security, it does not affect 578 security. 580 5. Acknowledgements 582 The National Institute of Standards and Technology (NIST) and/or the 583 MITRE Corporation have developed specifications under the general 584 term "Security Automation" including languages, protocols, 585 enumerations, and metrics. 587 Adam Montville edited early versions of this draft. 589 Kathleen Moriarty and Stephen Hanna contributed text describing the 590 scope of the document. 592 Steve Hanna provided use cases for Search for Signs of Infection, 593 Remediation and Mitigation, and Endpoint Information Analysis and 594 Reporting. 596 Gunnar Engelbach provided the use case about Ice Station Zebra, and 597 use cases regarding the content repository. 599 6. Change Log 601 6.1. -03- to -04- 603 Added four new use cases regarding the content repository. 605 6.2. -02- to -03- 607 Expanded the workflow description based on ML input. 609 Changed the ambiguous "assess" to better separate data collection 610 from evaluation. 612 Added use case for Search for Signs of Infection. 614 Added use case for Remediation and Mitigation.
616 Added use case for Endpoint Information Analysis and Reporting. 618 Added use case for Asynchronous Compliance/Vulnerability Assessment 619 at Ice Station Zebra. 621 Added use case for Traditional endpoint assessment with stored 622 results. 624 Added use case for NAC/NAP connection with no stored results using an 625 endpoint evaluator. 627 Added use case for NAC/NAP connection with no stored results using a 628 third-party evaluator. 630 Added use case for Compromised Endpoint Identification. 632 Added use case for Suspicious Endpoint Behavior. 634 Added use case for Vulnerable Endpoint Identification. 636 Updated Acknowledgements. 638 6.3. -01- to -02- 640 Changed title. 642 Removed section 4, expecting it will be moved into the requirements 643 document. 645 Removed the list of proposed capabilities from section 3.1. 647 Added empty sections for Search for Signs of Infection, Remediation 648 and Mitigation, and Endpoint Information Analysis and Reporting. 650 Removed the Requirements Language section and the RFC 2119 reference. 652 Removed unused references (which ended up being all references). 654 6.4. -00- to -01- 656 o Work on this revision has been focused on document content 657 relating primarily to use of asset management data and functions. 659 o Made significant updates to section 3 including: 661 * Reworked introductory text. 663 * Replaced the single example with multiple use cases that focus 664 on more discrete uses of asset management data to support 665 hardware and software inventory, and configuration management 666 use cases. 668 * For one of the use cases, added a mapping to the functional 669 capabilities used. If popular, this will be added to the other 670 use cases as well. 672 * Additional use cases will be added in the next revision 673 capturing additional discussion from the list.
675 o Made significant updates to section 4 including: 677 * Renamed the section heading from "Use Cases" to "Functional 678 Capabilities" since use cases are covered in section 3. This 679 section now extrapolates specific functions that are needed to 680 support the use cases. 682 * Started work to flatten the section, moving select subsections 683 up from under asset management. 685 * Removed the subsections for: Asset Discovery, Endpoint 686 Components and Asset Composition, Asset Resources, and Asset 687 Life Cycle. 689 * Renamed the subsection "Asset Representation Reconciliation" to 690 "Deconfliction of Asset Identities". 692 * Expanded the subsections for: Asset Identification, Asset 693 Characterization, and Deconfliction of Asset Identities. 695 * Added a new subsection for Asset Targeting. 697 * Moved remaining sections to "Other Unedited Content" for future 698 updating. 700 6.5. draft-waltermire-sacm-use-cases-05 to draft-ietf-sacm-use-cases-00 702 o Transitioned from individual I/D to WG I/D based on WG consensus 703 call. 705 o Fixed a number of spelling errors. Thank you Erik! 707 o Added keywords to the front matter. 709 o Removed the terminology section from the draft. Terms have been 710 moved to: draft-dbh-sacm-terminology-00 712 o Removed requirements to be moved into a new I/D. 714 o Extracted the functionality from the examples and made the 715 examples less prominent. 717 o Renamed "Functional Capabilities and Requirements" section to "Use 718 Cases". 720 * Reorganized the "Asset Management" sub-section. Added new text 721 throughout. 723 + Renamed a few sub-section headings. 725 + Added text to the "Asset Characterization" sub-section. 727 o Renamed "Security Configuration Management" to "Endpoint 728 Configuration Management". Not sure if the "security" distinction 729 is important. 731 * Added new sections, partially integrated existing content. 733 * Additional text is needed in all of the sub-sections. 
735 o Changed "Security Change Management" to "Endpoint Posture Change 736 Management". Added new skeletal outline sections for future 737 updates. 739 6.6. waltermire -04- to -05- 741 o Are we including user activities and behavior in the scope of this 742 work? That seems to be layer 8 stuff, appropriate to an IDS/IPS 743 application, not Internet stuff. 745 o I removed the references to what the WG will do because this 746 belongs in the charter, not the (potentially long-lived) use cases 747 document. I removed mention of charter objectives because the 748 charter may go through multiple iterations over time; there is a 749 website for hosting the charter; this document is not the correct 750 place for that discussion. 752 o I moved the discussion of NIST specifications to the 753 acknowledgements section. 755 o Removed the portion of the introduction that describes the 756 chapters; we have a table of concepts, and the existing text 757 seemed redundant. 759 o Removed marketing claims to focus on the technical concepts and 760 technical analysis that would enable subsequent engineering 761 effort. 763 o Removed (commented out in XML) UC2 and UC3, and eliminated some 764 text that referred to these use cases. 766 o Modified the IANA and Security Considerations sections. 768 o Moved Terms to the front, so we can use them in the subsequent 769 text. 771 o Removed the "Key Concepts" section, since the concepts of ORM and 772 IRM were not otherwise mentioned in the document. They would seem 773 more appropriate to the architecture document than to the use cases. 775 o Removed role=editor from David Waltermire's info, since there are 776 three editors on the document. The editor role matters most when 777 one person writes a document that represents the work of 778 multiple people. With three editors, this role marking 779 isn't necessary.
781 o Modified text to describe that this work is specific to enterprises, 782 that it is expected to overlap with service provider use 783 cases, and the context of this scoped work within the 784 larger effort of policy enforcement and verification. 786 o The document had asset management, but the charter mentioned 787 asset, change, configuration, and vulnerability management, so I 788 added sections for each of those categories. 790 o Added text to the Introduction explaining the goal of the document. 792 o Added sections on various example use cases for asset management, 793 config management, change management, and vulnerability 794 management. 809 Authors' Addresses 811 David Waltermire 812 National Institute of Standards and Technology 813 100 Bureau Drive 814 Gaithersburg, Maryland 20877 815 USA 817 Email: david.waltermire@nist.gov 819 David Harrington 820 Effective Software 821 50 Harding Rd 822 Portsmouth, NH 03801 823 USA 825 Email: ietfdbh@comcast.net