Security Automation and Continuous Monitoring WG           D. Waltermire
Internet-Draft                                                      NIST
Intended status: Informational                             D. Harrington
Expires: April 22, 2014                               Effective Software
                                                        October 19, 2013

      Endpoint Security Posture Assessment - Enterprise Use Cases
                      draft-ietf-sacm-use-cases-03

Abstract

   This memo documents a sampling of use cases for securely aggregating
   configuration and operational data and evaluating that data to
   determine an organization's security posture. From these operational
   use cases, we can derive common functional capabilities and
   requirements to guide development of vendor-neutral, interoperable
   standards for aggregating and evaluating data relevant to security
   posture.

Status of This Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF). Note that other groups may also distribute
   working documents as Internet-Drafts. The list of current Internet-
   Drafts is at http://datatracker.ietf.org/drafts/current/.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time. It is inappropriate to use Internet-Drafts as reference
   material or to cite them other than as "work in progress."

   This Internet-Draft will expire on April 22, 2014.

Copyright Notice

   Copyright (c) 2013 IETF Trust and the persons identified as the
   document authors. All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document. Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document.
   Code Components extracted from this document must include Simplified
   BSD License text as described in Section 4.e of the Trust Legal
   Provisions and are provided without warranty as described in the
   Simplified BSD License.

Table of Contents

   1.  Introduction
   2.  Endpoint Posture Assessment
     2.1.  Definition and Publication of Automatable Configuration
           Guides
     2.2.  Automated Checklist Verification
     2.3.  Organizational Software Policy Compliance
     2.4.  Detection of Posture Deviations
     2.5.  Search for Signs of Infection
     2.6.  Remediation and Mitigation
     2.7.  Endpoint Information Analysis and Reporting
     2.8.  Asynchronous Compliance/Vulnerability Assessment at Ice
           Station Zebra
     2.9.  Vulnerable Endpoint Identification
     2.10. Compromised Endpoint Identification
     2.11. Suspicious Endpoint Behavior
     2.12. Traditional endpoint assessment with stored results
     2.13. NAC/NAP connection with no stored results using an endpoint
           evaluator
     2.14. NAC/NAP connection with no stored results using a third-
           party evaluator
     2.15. Repository Interaction
     2.16. Others...
   3.  IANA Considerations
   4.  Security Considerations
   5.  Acknowledgements
   6.  Change Log
     6.1.  -02- to -03-
     6.2.  -01- to -02-
     6.3.  -00- to -01-
     6.4.  draft-waltermire-sacm-use-cases-05 to draft-ietf-sacm-
           use-cases-00
     6.5.  waltermire -04- to -05-
   Authors' Addresses

1.  Introduction

   Our goal with this document is to improve our agreement on which
   problems we're trying to solve. We need to start with short, simple
   problem statements and discuss those by email and in person. Once we
   agree on which problems we're trying to solve, we can move on to
   propose various solutions and decide which ones to use.

   This document describes example use cases for endpoint posture
   assessment for enterprises. It provides a sampling of use cases for
   securely aggregating configuration and operational data and
   evaluating that data to determine the security posture of individual
   endpoints, and, in the aggregate, the security posture of an
   enterprise.

   These use cases cross many IT security information domains.
   From these operational use cases, we can derive common concepts,
   common information expressions, functional capabilities, and
   requirements to guide development of vendor-neutral, interoperable
   standards for aggregating and evaluating data relevant to security
   posture.

   Using this standard data, tools can analyze the state of endpoints
   and of user activities and behavior, and evaluate the security
   posture of an organization. Common expression of information should
   enable interoperability between tools (whether customized,
   commercial, or freely available), and the ability to automate
   portions of security processes to gain efficiency, react to new
   threats in a timely manner, and free up security personnel to work
   on more advanced problems.

   The goal is to enable organizations to make informed decisions that
   support organizational objectives, to enforce policies for hardening
   systems, to prevent network misuse, to quantify business risk, and
   to collaborate with partners to identify and mitigate threats.

   It is expected that use cases for enterprises and for service
   providers will largely overlap, but there are additional
   complications for service providers, especially in handling
   information that crosses administrative domains.

   The output of endpoint posture assessment is expected to feed into
   additional processes, such as policy-based enforcement of acceptable
   state, verification and monitoring of security controls, and
   compliance with regulatory requirements.

2.  Endpoint Posture Assessment

   Endpoint posture assessment involves orchestrating and performing
   data collection and evaluating the posture of a given endpoint.
   Typically, endpoint posture information is gathered and then
   published to appropriate data repositories to make collected
   information available for further analysis supporting organizational
   security processes.

   Endpoint posture assessment typically includes:

   o  Collecting the attributes of a given endpoint;

   o  Making the attributes available for evaluation and action; and

   o  Verifying that the endpoint's posture is in compliance with
      enterprise standards and policy.

   As part of these activities it is often necessary to identify and
   acquire any supporting content that is needed to drive data
   collection and analysis.

   The following is a typical workflow scenario for assessing endpoint
   posture; a non-normative sketch of this workflow appears after the
   list:

   1.  Some type of trigger initiates the workflow. For example, an
       operator or an application might trigger the process with a
       request, or the endpoint might trigger the process using an
       event-driven notification.

       QUESTION: Since this is about security automation, can we drop
       the User and just use Application? Is there a better term to use
       here? Once the policy is selected, the rest seems like something
       we definitely would want to automate, so I dropped the User
       part.

   2.  A user/application selects a target endpoint to be assessed.

   3.  A user/application selects which policies are applicable to the
       target.

   4.  The application determines which (sets of) posture attributes
       need to be collected for evaluation.

       QUESTION: It was suggested that mentioning several common
       acquisition methods, such as local API, WMI, Puppet, DCOM, SNMP,
       CMDB query, and NEA, without forcing any specific method would
       be good. I have concerns this could devolve into a "what about
       my favorite?" contest. OTOH, the charter does specifically call
       for use of existing standards where applicable, so the use cases
       document might be a good neutral location for such information,
       and might force us to consider what types of external interfaces
       we might need to support when we consider the requirements. It
       appears that the generic workflow sequence would be a good place
       to mention such common acquisition methods.

   5.  The application might retrieve previously collected information
       from a cache or data store, such as a data store populated by an
       asset management system.

   6.  The application might establish communication with the target,
       mutually authenticate identities and authorizations, and collect
       posture attributes from the target.

   7.  The application might establish communication with one or more
       intermediaries/agents, mutually authenticate their identities
       and determine authorizations, and collect posture attributes
       about the target from the intermediaries/agents. Such agents
       might be local or external.

   8.  The application communicates the target identity and (sets of)
       collected attributes to an evaluator, possibly an external
       process or external system.

   9.  The evaluator compares the collected posture attributes with
       expected values as expressed in policies.

       QUESTION: Evaluator generates a report or log or notification of
       some type?
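   The following Python sketch illustrates steps 2 through 9 of the
   workflow above. It is non-normative: the names (Policy, assess, the
   cache dictionary, and the collect callback) are hypothetical stand-
   ins, not interfaces defined by SACM or this document.

      # Non-normative sketch of the generic assessment workflow; all
      # names here are hypothetical, not defined by any SACM standard.

      class Policy:
          def __init__(self, name, expected):
              self.name = name
              self.expected = expected          # attribute -> expected value

          def required_attributes(self):        # step 4
              return list(self.expected)

      def assess(target, policy, cache, collect):
          """Steps 5-9: gather the needed attributes, then evaluate."""
          needed = policy.required_attributes()
          attrs = {a: cache[target][a] for a in needed
                   if a in cache.get(target, {})}   # step 5: reuse stored data
          missing = [a for a in needed if a not in attrs]
          if missing:
              attrs.update(collect(target, missing))  # steps 6-7: collection
          # steps 8-9: hand off to an evaluator; here, a simple comparison
          # of collected posture attributes against the expected values
          return {a: attrs.get(a) == v for a, v in policy.expected.items()}

      # Example: evaluate one endpoint against a minimal hypothetical policy.
      cache = {"host1": {"os_version": "6.2"}}
      policy = Policy("baseline", {"os_version": "6.2", "min_pw_len": 12})
      print(assess("host1", policy, cache,
                   lambda tgt, miss: {a: 12 for a in miss}))  # stub collector

   In practice, steps 8 and 9 might be performed by an external
   evaluator process or system; the inline comparison above simply
   stands in for that handoff.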
   The following subsections detail specific use cases for data
   collection, analysis, and related operations pertaining to the
   publication and use of supporting content.

2.1.  Definition and Publication of Automatable Configuration Guides

   A vendor manufactures a number of specialized endpoint devices. They
   also develop and maintain an operating system for these devices that
   enables end-user organizations to configure a number of security and
   operational settings. As part of their customer support activities,
   they publish a number of secure configuration guides that provide
   minimum security guidelines for configuring their devices.

   Each guide they produce applies to a specific model of device and
   version of the operating system and provides a number of specialized
   configurations depending on the device's intended function and what
   add-on hardware modules and software licenses are installed on the
   device. To enable their customers to evaluate the security posture
   of their devices to ensure that all appropriate minimal security
   settings are enabled, they publish an automatable configuration
   checklist using a popular data format that defines what settings to
   collect using a network management protocol and appropriate values
   for each setting. They publish these guides to a public content
   repository that customers can query to retrieve applicable guides
   for their deployed enterprise network infrastructure endpoints.

   Guides could also come from sources other than a device vendor, such
   as industry groups or regulatory authorities, or enterprises could
   develop their own checklists.
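   To make the shape of such a guide concrete, the fragment below shows
   one plausible structure for the checklist content. It is purely
   illustrative: the field names, device model, and settings are
   hypothetical, and no particular data format, existing or otherwise,
   is implied.

      # Hypothetical structure of a published configuration checklist;
      # all field names and values are illustrative, not a defined
      # data format.
      checklist = {
          "platform": {"model": "XR-400", "os_version": "6.2"},  # applicability
          "profiles": {
              # specialized configurations per intended device function
              "branch-router": [
                  # each rule names a setting to collect (e.g., via a
                  # network management protocol) and the compliant value
                  {"setting": "telnet.enabled", "expected": False},
                  {"setting": "ssh.version",    "expected": 2},
                  {"setting": "login.banner",
                   "expected": "AUTHORIZED USE ONLY"},
              ],
          },
      }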
2.2.  Automated Checklist Verification

   A financial services company operates a heterogeneous IT
   environment. In support of their risk management program, they
   utilize vendor-provided automatable security configuration
   checklists for each operating system and application used within
   their IT environment. Multiple checklists from different vendors are
   used to ensure adequate coverage of all IT assets.

   To identify which checklists are needed, they use automation to
   gather an inventory of the software versions utilized by all IT
   assets in the enterprise. This data gathering involves querying
   existing data stores of previously collected endpoint software
   inventory posture data and actively collecting data from reachable
   endpoints as needed, utilizing network and systems management
   protocols. Previously collected data may be provided by periodic
   data collection, network connection-driven data collection, or
   ongoing event-driven monitoring of endpoint posture changes.

   Using the gathered software inventory data and associated asset
   management data indicating the organizationally defined functions of
   each endpoint, they locate and query each vendor's content
   repository for the appropriate checklists. These checklists are
   cached locally to avoid downloading the same checklist multiple
   times.

   Driven by the setting data provided in the checklist, a combination
   of existing configuration data stores and data collection methods is
   used to gather the appropriate posture information from each
   endpoint. Specific data is gathered based on the defined enterprise
   function and software inventory of each endpoint. The data
   collection paths used to collect software inventory posture are used
   again for this purpose. Once the data is gathered, the actual state
   is evaluated against the expected-state criteria in each applicable
   checklist. Deficiencies are identified and reported to the
   appropriate endpoint operators for remedy.

   Checklists could also come from sources other than the application
   or OS vendor, such as industry groups or regulatory authorities, or
   enterprises could develop their own checklists.
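   A minimal sketch of this verification flow follows, assuming
   hypothetical stand-ins for the inventory store, the vendor content
   repository, and the collection mechanism; none of these interfaces
   is defined by this document.

      # Non-normative sketch of automated checklist verification; the
      # inventory, repository, and collection interfaces are stand-ins.

      def verify(endpoints, inventory, fetch_checklist, collect):
          """Match endpoints to checklists, collect settings, report gaps."""
          cache = {}          # checklists cached locally, fetched once per key
          deficiencies = {}
          for ep in endpoints:
              sw = inventory[ep]                   # software inventory posture
              key = (sw["vendor"], sw["product"], sw["version"])
              if key not in cache:
                  cache[key] = fetch_checklist(*key)
              rules = cache[key]
              actual = collect(ep, [r["setting"] for r in rules])
              failed = [r["setting"] for r in rules
                        if actual.get(r["setting"]) != r["expected"]]
              if failed:
                  deficiencies[ep] = failed        # report for remedy
          return deficiencies

      # Stub example: one endpoint, one rule, deliberately non-compliant.
      print(verify(
          ["host1"],
          {"host1": {"vendor": "ACME", "product": "RouterOS",
                     "version": "6.2"}},
          lambda v, p, ver: [{"setting": "telnet.enabled",
                              "expected": False}],
          lambda ep, settings: {"telnet.enabled": True}))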
2.3.  Organizational Software Policy Compliance

   Example Corporation, in support of compliance requirements, has
   identified a number of secure baselines for different endpoint types
   that exist across their enterprise IT environment. Determining which
   baseline applies to a given endpoint is based on the
   organizationally defined function of the device.

   Each baseline, defined using an automatable standardized data
   format, identifies the expected hardware, software and patch
   inventory, and software configuration item values for each endpoint
   type. As part of their compliance activities, they require that all
   endpoints connecting to their network meet the appropriate
   baselines. The configuration settings of each endpoint are collected
   and compared to the appropriate baseline whenever the endpoint
   connects to the network and at least once a day thereafter. These
   daily compliance checks evaluate the posture of each endpoint and
   report on its compliance with the appropriate baseline.

   [TODO: Need to speak to how the baselines are identified for a given
   endpoint connecting to the network.]

2.4.  Detection of Posture Deviations

   Example Corporation has established secure configuration baselines
   for each different type of endpoint within their enterprise,
   including network infrastructure, mobile, client, and server
   computing platforms. These baselines define an approved list of
   hardware, software (i.e., operating system, applications, and
   patches), and associated required configurations. When an endpoint
   connects to the network, the appropriate baseline configuration is
   communicated to the endpoint based on its location in the network,
   the expected function of the device, and other asset management
   data. The endpoint is checked for compliance with the baseline, and
   any deviations are indicated to the device's operators. Once the
   baseline has been established, the endpoint is monitored on an
   ongoing basis for any change events pertaining to the baseline. When
   a change occurs to posture defined in the baseline, updated posture
   information is exchanged, allowing operators to be notified and/or
   automated action to be taken.

2.5.  Search for Signs of Infection

   The Example Corporation carefully manages endpoint security with
   tools that implement the SACM standards. One day, the endpoint
   security team at Example Corporation learns about a stealthy malware
   package. This malware has just been discovered but has already
   spread widely around the world. Certain signs of infection have been
   identified (e.g., the presence of certain files). The security team
   would like to know which endpoints owned by the Example Corporation
   have been infected with this malware. They use their tools to search
   for the signs of infection and generate a list of infected
   endpoints.

   The search for infected endpoints may be performed by gathering new
   endpoint posture information regarding the presence of the signs of
   infection. However, this might miss endpoints that were previously
   infected but where the infection has since erased itself. Such
   previously infected endpoints may be detected by searching a
   database of previously gathered posture information for the signs of
   infection. However, this will not work if the malware hides its
   presence carefully or if the signs of infection were not included in
   previous posture assessments. In those cases, the database may be
   used to at least detect which endpoints previously had software
   vulnerable to infection by the malware.
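   As an illustration of the second approach, the non-normative sketch
   below searches a store of previously collected posture records for
   file-hash indicators. The record layout and the truncated hash
   values are hypothetical.

      # Non-normative sketch: search previously collected posture data
      # for known signs of infection; the record layout and hash values
      # are hypothetical.

      INDICATORS = {"9f2b...", "a41c..."}  # known-bad file hashes (truncated)

      def find_infected(posture_db):
          """Return endpoints whose recorded file inventory matches a sign."""
          hits = set()
          for record in posture_db:        # one record per endpoint snapshot
              if INDICATORS & set(record["file_hashes"]):
                  hits.add(record["endpoint"])
          return hits

      posture_db = [
          {"endpoint": "host1", "file_hashes": {"9f2b...", "77aa..."}},
          {"endpoint": "host2", "file_hashes": {"0c0c..."}},
      ]
      print(find_infected(posture_db))     # -> {'host1'}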
2.6.  Remediation and Mitigation

   When Example Corporation discovers that one of its endpoints is
   vulnerable to infection, a process of mitigation and remediation is
   triggered. The first step is mitigating the impact of the
   vulnerability, perhaps by placing the endpoint into a safe network
   or blocking network traffic that could infect the endpoint. The
   second step is remediation: fixing the vulnerability. In some cases,
   these steps may happen automatically and rapidly. In other cases,
   they may require human intervention, either to decide what response
   is most appropriate or to complete the steps, which are sometimes
   complex.

   These same steps of mitigation and remediation may be used when
   Example Corporation discovers that one of its endpoints has become
   infected with some malware. Alternatively, the infected endpoint may
   simply be monitored or even placed into a honeynet or similar
   environment to observe the malware's behavior and lead the attackers
   astray.

   QUESTION: Is remediation and mitigation within the scope of the WG,
   and should the use case be included here?

2.7.  Endpoint Information Analysis and Reporting

   Freed from the drudgery of manual endpoint compliance monitoring,
   one of the security administrators at Example Corporation notices
   (not using SACM standards) that five endpoints have been uploading
   lots of data to a suspicious server on the Internet. The
   administrator queries the SACM database of endpoint posture to see
   what software is installed on those endpoints and finds that they
   all have a particular program installed. She then searches the
   database to see which other endpoints have that program installed.
   All these endpoints are monitored carefully (not using SACM
   standards), which allows the administrator to detect that the other
   endpoints are also infected.

   This is just one example of the useful analysis that a skilled
   analyst can do using the database of endpoint posture that SACM can
   provide.

2.8.  Asynchronous Compliance/Vulnerability Assessment at Ice Station
      Zebra

   A university team receives a grant to do research at a government
   facility in the arctic. The only network communications will be via
   an intermittent, low-speed, high-latency, high-cost satellite link.
   During their extended expedition they will need to show continued
   compliance with the security policies of the university, the
   government, and the provider of the satellite network, as well as
   keep current on vulnerability testing. Interactive assessments are
   therefore not reliable, and since the researchers have very limited
   funding they need to minimize how much money they spend on network
   data.

   Prior to departure they register all equipment with an asset
   management system owned by the university, which will also initiate
   and track assessments.

   On a periodic basis -- either after a maximum time delta or when the
   content repository has received a threshold level of new
   vulnerability definitions -- the university uses the information in
   the asset management system to put together a collection request for
   all of the deployed assets that encompasses the minimal set of
   artifacts necessary to evaluate all three security policies as well
   as vulnerability testing.

   In the case of new critical vulnerabilities, this collection request
   consists only of the artifacts necessary for those vulnerabilities,
   and collection is only initiated for those assets that could
   potentially have a new vulnerability.

   [Optional] Asset artifacts are cached in a local CMDB. When new
   vulnerabilities are reported to the content repository, a request to
   the live asset is only made if the artifacts in the CMDB are
   incomplete and/or not current enough.

   The collection request is queued for the next window of
   connectivity. The deployed assets eventually receive the request,
   fulfill it, and queue the results for the next return opportunity.

   The collected artifacts eventually make it back to the university,
   where the level of compliance and vulnerability exposure is
   calculated and asset characteristics are compared to what is in the
   asset management system for accuracy and completeness.
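   The store-and-forward pattern this scenario relies on might look
   like the following non-normative sketch; the function names and the
   JSON message shape are hypothetical.

      # Non-normative sketch of the store-and-forward pattern in this
      # use case: collection requests (and, symmetrically, results) are
      # queued until the intermittent satellite link is available.

      import json
      from collections import deque

      outbound = deque()  # requests awaiting the next connectivity window

      def request_collection(assets, artifacts):
          """Queue the minimal artifact set needed by the pending policies."""
          outbound.append(json.dumps({"assets": assets,
                                      "artifacts": artifacts}))

      def on_link_up(send):
          """Drain the queue when the link comes up; results return later."""
          while outbound:
              send(outbound.popleft())

      request_collection(["sensor-07"],
                         ["os_patch_level", "installed_software"])
      on_link_up(print)   # stand-in for the high-cost satellite transport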
2.9.  Vulnerable Endpoint Identification

   Typically, vulnerability reports identify an executable or library
   that is vulnerable or, in the worst case, only the software product
   that is vulnerable. This information is used to determine whether an
   organization has one or more endpoints that have exposure to a
   vulnerability (i.e., which endpoints are vulnerable?). It is often
   necessary to know where vulnerable code is running and what
   configurations are in place on the endpoint and upstream devices
   (e.g., IDS, firewall) that may limit the exposure. All of this
   information, along with details on the severity and impact of a
   vulnerability, is necessary to prioritize remedies.

2.10.  Compromised Endpoint Identification

   Along with knowing whether one or more endpoints are vulnerable, it
   is also important to know whether any have been compromised.
   Indicators of compromise provide details that can be used to
   identify malware (e.g., file hashes), identify malicious activity
   (e.g., command and control traffic), detect the presence of
   unauthorized or malicious configuration items, and more. While
   important, this goes beyond determining organizational exposure.

2.11.  Suspicious Endpoint Behavior

   This use case describes the collaboration among specific
   participants in an information security system upon detecting a
   connection attempt to a known-bad Internet host by a botnet zombie
   that has made its way onto an organization's Information Technology
   systems. The primary human actor is the Security Operations Center
   analyst, and the primary software actor is the configuration
   assessment tool. Note, however, the dependencies on other tools,
   such as asset management, intrusion detection, and messaging.

2.12.  Traditional endpoint assessment with stored results

   An external trigger initiates an assessment of an endpoint. The
   Controller uses the data in the Datastore to look up authentication
   information for the endpoint and passes that, along with the
   assessment request details, to the Evaluator. The Evaluator uses the
   endpoint information to request taxonomy information from the
   Collector on the endpoint, which responds with those attributes. The
   Evaluator uses that taxonomy information, along with the information
   in the original request from the Controller, to request the
   appropriate content from the Content Repository. The Evaluator uses
   the content to derive the minimal set of endpoint attributes needed
   to perform the assessment and makes that request. The Evaluator uses
   the Collector response to do the assessment and returns the results
   to the Controller. The Controller puts the results in the Datastore.

2.13.  NAC/NAP connection with no stored results using an endpoint
       evaluator

   A mobile endpoint makes a VPN connection request. The NAC/NAP broker
   requests the results of the VPN connection assessment from the
   Controller. The Controller requests the VPN attributes from a
   Content Repository. The Controller requests an evaluation of the
   collected attributes from the Evaluator on the endpoint. The
   endpoint performs the assessment and returns the results. The
   Controller completes the original assessment request by returning
   the results to the NAC/NAP broker, which uses them to set the level
   of network access allowed to the endpoint.

   QUESTION: I edited these from Gunnar's email of 9/11, to try to
   reduce the use of "assessment", to focus on collection and
   evaluation, and deal with use cases rather than architecture. I am
   not sure I got all the concepts properly identified.
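   Compressing the Controller interactions, the broker side of this use
   case and the next might look like the following non-normative
   sketch; every role, parameter, and result shape here is a
   hypothetical stand-in.

      # Non-normative sketch of the connection-time flow in Sections
      # 2.13 and 2.14; roles, parameters, and results are hypothetical.

      def handle_vpn_request(endpoint, fetch_attributes, evaluate, grant,
                             quarantine):
          """NAC/NAP broker: request an evaluation, then set access."""
          policy = fetch_attributes("vpn-connection")  # content repository
          result = evaluate(endpoint, policy)  # evaluator on the endpoint
                                               # (2.13) or in the network
                                               # (2.14)
          return (grant(endpoint) if result["compliant"]
                  else quarantine(endpoint))

      # Stub example granting full access to a compliant endpoint.
      print(handle_vpn_request(
          "laptop-42",
          lambda name: {"min_av_version": 5},
          lambda ep, pol: {"compliant": True},
          lambda ep: "full-access",
          lambda ep: "remediation-vlan"))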
2.14.  NAC/NAP connection with no stored results using a third-party
       evaluator

   A mobile endpoint makes a VPN connection request. The NAC/NAP broker
   requests the results of the VPN connection assessment from the
   Controller. The Controller requests the VPN attributes from a
   Content Repository. The Controller requests an evaluation of the
   collected attributes from an Evaluator in the network (rather than
   trusting an evaluator on the endpoint). The Evaluator performs the
   evaluation and returns the results. The Controller completes the
   original assessment request by returning the results to the NAC/NAP
   broker, which uses them to set the level of network access allowed
   to the endpoint.

   QUESTION: I edited these from Gunnar's email of 9/11, to try to
   reduce the use of "assessment", to focus on collection and
   evaluation, and deal with use cases rather than architecture. I am
   not sure I got all the concepts properly identified.

2.15.  Repository Interaction

   Additional use cases will be identified as we work through other
   domains.

2.16.  Others...

   Additional use cases will be identified as we work through other
   domains.

3.  IANA Considerations

   This memo includes no request to IANA.

4.  Security Considerations

   This memo documents, for Informational purposes, use cases for
   security automation. While it is about security, it does not affect
   security.

5.  Acknowledgements

   The National Institute of Standards and Technology (NIST) and/or the
   MITRE Corporation have developed specifications under the general
   term "Security Automation", including languages, protocols,
   enumerations, and metrics.

   Adam Montville edited early versions of this draft.

   Kathleen Moriarty and Stephen Hanna contributed text describing the
   scope of the document.

   Steve Hanna provided use cases for Search for Signs of Infection,
   Remediation and Mitigation, and Endpoint Information Analysis and
   Reporting.

   Gunnar Engelbach provided the use case about Ice Station Zebra.

6.  Change Log

6.1.  -02- to -03-

   Expanded the workflow description based on ML input.

   Changed the ambiguous "assess" to better separate data collection
   from evaluation.

   Added use case for Search for Signs of Infection.

   Added use case for Remediation and Mitigation.

   Added use case for Endpoint Information Analysis and Reporting.

   Added use case for Asynchronous Compliance/Vulnerability Assessment
   at Ice Station Zebra.

   Added use case for Traditional endpoint assessment with stored
   results.

   Added use case for NAC/NAP connection with no stored results using
   an endpoint evaluator.

   Added use case for NAC/NAP connection with no stored results using a
   third-party evaluator.

   Added use case for Compromised Endpoint Identification.

   Added use case for Suspicious Endpoint Behavior.

   Added use case for Vulnerable Endpoint Identification.

   Updated Acknowledgements.

6.2.  -01- to -02-

   Changed title.

   Removed section 4, expecting it will be moved into the requirements
   document.

   Removed the list of proposed capabilities from section 3.1.

   Added empty sections for Search for Signs of Infection, Remediation
   and Mitigation, and Endpoint Information Analysis and Reporting.

   Removed Requirements Language section and RFC 2119 reference.

   Removed unused references (which ended up being all references).
6.3.  -00- to -01-

   o  Work on this revision has been focused on document content
      relating primarily to use of asset management data and functions.

   o  Made significant updates to section 3, including:

      *  Reworked introductory text.

      *  Replaced the single example with multiple use cases that focus
         on more discrete uses of asset management data to support
         hardware and software inventory, and configuration management
         use cases.

      *  For one of the use cases, added a mapping to the functional
         capabilities used. If popular, this will be added to the other
         use cases as well.

      *  Additional use cases will be added in the next revision
         capturing additional discussion from the list.

   o  Made significant updates to section 4, including:

      *  Renamed the section heading from "Use Cases" to "Functional
         Capabilities", since use cases are covered in section 3. This
         section now extrapolates specific functions that are needed to
         support the use cases.

      *  Started work to flatten the section, moving select subsections
         up from under asset management.

      *  Removed the subsections for: Asset Discovery, Endpoint
         Components and Asset Composition, Asset Resources, and Asset
         Life Cycle.

      *  Renamed the subsection "Asset Representation Reconciliation"
         to "Deconfliction of Asset Identities".

      *  Expanded the subsections for: Asset Identification, Asset
         Characterization, and Deconfliction of Asset Identities.

      *  Added a new subsection for Asset Targeting.

      *  Moved remaining sections to "Other Unedited Content" for
         future updating.

6.4.  draft-waltermire-sacm-use-cases-05 to draft-ietf-sacm-use-
      cases-00

   o  Transitioned from individual I-D to WG I-D based on WG consensus
      call.

   o  Fixed a number of spelling errors. Thank you, Erik!

   o  Added keywords to the front matter.

   o  Removed the terminology section from the draft. Terms have been
      moved to draft-dbh-sacm-terminology-00.

   o  Removed requirements, to be moved into a new I-D.

   o  Extracted the functionality from the examples and made the
      examples less prominent.

   o  Renamed "Functional Capabilities and Requirements" section to
      "Use Cases".

      *  Reorganized the "Asset Management" sub-section. Added new text
         throughout.

         +  Renamed a few sub-section headings.

         +  Added text to the "Asset Characterization" sub-section.

   o  Renamed "Security Configuration Management" to "Endpoint
      Configuration Management". Not sure if the "security" distinction
      is important.

      *  Added new sections, partially integrated existing content.

      *  Additional text is needed in all of the sub-sections.

   o  Changed "Security Change Management" to "Endpoint Posture Change
      Management". Added new skeletal outline sections for future
      updates.

6.5.  waltermire -04- to -05-

   o  Are we including user activities and behavior in the scope of
      this work? That seems to be layer 8 stuff, appropriate to an IDS/
      IPS application, not Internet stuff.

   o  I removed the references to what the WG will do, because this
      belongs in the charter, not the (potentially long-lived) use
      cases document. I removed mention of charter objectives because
      the charter may go through multiple iterations over time; there
      is a website for hosting the charter; this document is not the
      correct place for that discussion.

   o  I moved the discussion of NIST specifications to the
      acknowledgements section.
   o  Removed the portion of the introduction that describes the
      chapters; we have a table of contents, and the existing text
      seemed redundant.

   o  Removed marketing claims, to focus on technical concepts and
      technical analysis that would enable subsequent engineering
      effort.

   o  Removed (commented out in XML) UC2 and UC3, and eliminated some
      text that referred to these use cases.

   o  Modified IANA and Security Considerations sections.

   o  Moved Terms to the front, so we can use them in the subsequent
      text.

   o  Removed the "Key Concepts" section, since the concepts of ORM and
      IRM were not otherwise mentioned in the document. This would seem
      more appropriate to the architecture document than the use cases.

   o  Removed role=editor from David Waltermire's info, since there are
      three editors on the document. The editor role is most important
      when one person writes the document that represents the work of
      multiple people. When there are three editors, this role marking
      isn't necessary.

   o  Modified text to describe that this document is specific to
      enterprises and is expected to overlap with service provider use
      cases, and described the context of this scoped work within a
      larger context of policy enforcement and verification.

   o  The document had asset management, but the charter mentioned
      asset, change, configuration, and vulnerability management, so I
      added sections for each of those categories.

   o  Added text to the Introduction explaining the goal of the
      document.

   o  Added sections on various example use cases for asset management,
      configuration management, change management, and vulnerability
      management.

Authors' Addresses

   David Waltermire
   National Institute of Standards and Technology
   100 Bureau Drive
   Gaithersburg, Maryland 20877
   USA

   Email: david.waltermire@nist.gov

   David Harrington
   Effective Software
   50 Harding Rd
   Portsmouth, NH 03801
   USA

   Email: ietfdbh@comcast.net