Security Automation and Continuous Monitoring WG             D. Waltermire
Internet-Draft                                                         NIST
Intended status: Informational                                D. Harrington
Expires: October 31, 2014                                Effective Software
                                                             April 29, 2014

      Endpoint Security Posture Assessment - Enterprise Use Cases
                       draft-ietf-sacm-use-cases-07

Abstract

This memo documents a sampling of use cases for securely aggregating configuration and operational data and evaluating that data to determine an organization's security posture. From these operational use cases, we can derive common functional capabilities and requirements to guide development of vendor-neutral, interoperable standards for aggregating and evaluating data relevant to security posture.

Status of This Memo

This Internet-Draft is submitted in full conformance with the provisions of BCP 78 and BCP 79.

Internet-Drafts are working documents of the Internet Engineering Task Force (IETF). Note that other groups may also distribute working documents as Internet-Drafts. The list of current Internet-Drafts is at http://datatracker.ietf.org/drafts/current/.

Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress."

This Internet-Draft will expire on October 31, 2014.

Copyright Notice

Copyright (c) 2014 IETF Trust and the persons identified as the document authors. All rights reserved.

This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents (http://trustee.ietf.org/license-info) in effect on the date of publication of this document. Please review these documents carefully, as they describe your rights and restrictions with respect to this document. Code Components extracted from this document must include Simplified BSD License text as described in Section 4.e of the Trust Legal Provisions and are provided without warranty as described in the Simplified BSD License.

Table of Contents

1. Introduction . . . . . . . . . . . . . . . . . . . . . . . .
2 56 2. Endpoint Posture Assessment . . . . . . . . . . . . . . . . . 3 57 2.1. Use Cases . . . . . . . . . . . . . . . . . . . . . . . . 5 58 2.1.1. Define, Publish, Query and Retrieve Security 59 Automation Data . . . . . . . . . . . . . . . . . . . 5 60 2.1.2. Endpoint Identification and Assessment Planning . . . 9 61 2.1.3. Endpoint Posture Attribute Value Collection . . . . . 10 62 2.1.4. Posture Attribute Evaluation . . . . . . . . . . . . 11 63 2.2. Usage Scenarios . . . . . . . . . . . . . . . . . . . . . 12 64 2.2.1. Definition and Publication of Automatable 65 Configuration Checklists . . . . . . . . . . . . . . 12 66 2.2.2. Automated Checklist Verification . . . . . . . . . . 13 67 2.2.3. Detection of Posture Deviations . . . . . . . . . . . 16 68 2.2.4. Endpoint Information Analysis and Reporting . . . . . 17 69 2.2.5. Asynchronous Compliance/Vulnerability Assessment at 70 Ice Station Zebra . . . . . . . . . . . . . . . . . . 17 71 2.2.6. Identification and Retrieval of Guidance . . . . . . 19 72 2.2.7. Guidance Change Detection . . . . . . . . . . . . . . 20 73 2.2.8. Others... . . . . . . . . . . . . . . . . . . . . . . 20 74 3. IANA Considerations . . . . . . . . . . . . . . . . . . . . . 21 75 4. Security Considerations . . . . . . . . . . . . . . . . . . . 21 76 5. Acknowledgements . . . . . . . . . . . . . . . . . . . . . . 21 77 6. Change Log . . . . . . . . . . . . . . . . . . . . . . . . . 21 78 6.1. -06- to -07- . . . . . . . . . . . . . . . . . . . . . . 21 79 6.2. -05- to -06- . . . . . . . . . . . . . . . . . . . . . . 21 80 6.3. -04- to -05- . . . . . . . . . . . . . . . . . . . . . . 22 81 6.4. -03- to -04- . . . . . . . . . . . . . . . . . . . . . . 23 82 6.5. -02- to -03- . . . . . . . . . . . . . . . . . . . . . . 23 83 6.6. -01- to -02- . . . . . . . . . . . . . . . . . . . . . . 24 84 6.7. -00- to -01- . . . . . . . . . . . . . . . . . . . . . . 24 85 6.8. draft-waltermire-sacm-use-cases-05 to draft-ietf-sacm- 86 use-cases-00 . . . . . . . . . . . . . . . . . . . . . . 25 87 6.9. waltermire -04- to -05- . . . . . . . . . . . . . . . . . 26 88 7. Informative References . . . . . . . . . . . . . . . . . . . 27 89 Authors' Addresses . . . . . . . . . . . . . . . . . . . . . . . 27 91 1. Introduction 93 This document describes the core set of use cases for endpoint 94 posture assessment for enterprises. It provides a discussion of 95 these use cases and associated building block capabilities that 96 support securely aggregating configuration and operational data and 97 evaluating that data to determine the security posture of individual 98 endpoints, and, in the aggregate, the security posture of an 99 enterprise. Additionally, this document describes a set of usage 100 scenarios that provide examples for using the use cases and 101 associated building blocks to address a variety of operational 102 functions. 104 These use cases and usage scenarios cross many IT security 105 information domains. From these operational use cases, we can derive 106 common concepts, common information expressions, functional 107 capabilities and requirements to guide development of vendor-neutral, 108 interoperable standards for aggregating and evaluating data relevant 109 to security posture. 111 Using this standard data, tools can analyze the state of endpoints, 112 user activities and behaviour, and evaluate the security posture of 113 an organization. 
Common expression of information should enable 114 interoperability between tools (whether customized, commercial, or 115 freely available), and the ability to automate portions of security 116 processes to gain efficiency, react to new threats in a timely 117 manner, and free up security personnel to work on more advanced 118 problems. 120 The goal is to enable organizations to make informed decisions that 121 support organizational objectives, to enforce policies for hardening 122 systems, to prevent network misuse, to quantify business risk, and to 123 collaborate with partners to identify and mitigate threats. 125 It is expected that use cases for enterprises and for service 126 providers will largely overlap, but there are additional 127 complications for service providers, especially in handling 128 information that crosses administrative domains. 130 The output of endpoint posture assessment is expected to feed into 131 additional processes, such as policy-based enforcement of acceptable 132 state, verification and monitoring of security controls, and 133 compliance to regulatory requirements. 135 2. Endpoint Posture Assessment 137 Endpoint posture assessment involves orchestrating and performing 138 data collection and evaluating the posture of a given endpoint. 139 Typically, endpoint posture information is gathered and then 140 published to appropriate data repositories to make collected 141 information available for further analysis supporting organizational 142 security processes. 144 Endpoint posture assessment typically includes: 146 o Collecting the attributes of a given endpoint; 148 o Making the attributes available for evaluation and action; and 150 o Verifying that the endpoint's posture is in compliance with 151 enterprise standards and policy. 153 As part of these activities it is often necessary to identify and 154 acquire any supporting security automation data that is needed to 155 drive and feed data collection and evaluation processes. 157 The following is a typical workflow scenario for assessing endpoint 158 posture: 160 1. Some type of trigger initiates the workflow. For example, an 161 operator or an application might trigger the process with a 162 request, or the endpoint might trigger the process using an 163 event-driven notification. 165 2. An operator/application selects one or more target endpoints to 166 be assessed. 168 3. An operator/application selects which policies are applicable to 169 the targets. 171 4. For each target: 173 A. The application determines which (sets of) posture attributes 174 need to be collected for evaluation. Implementations should 175 be able to support (possibly mixed) sets of standardized and 176 proprietary attributes. 178 B. The application might retrieve previously collected 179 information from a cache or data store, such as a data store 180 populated by an asset management system. 182 C. The application might establish communication with the 183 target, mutually authenticate identities and authorizations, 184 and collect posture attributes from the target. 186 D. The application might establish communication with one or 187 more intermediary/agents, mutually authenticate their 188 identities and determine authorizations, and collect posture 189 attributes about the target from the intermediary/agents. 190 Such agents might be local or external. 192 E. The application communicates target identity and (sets of) 193 collected attributes to an evaluator, possibly an external 194 process or external system. 196 F. 
The evaluator compares the collected posture attributes with expected values as expressed in policies.

G. The evaluator reports the evaluation result for the requested assessment, in a standardized or proprietary format, such as a report, a log entry, a database entry, or a notification.

2.1. Use Cases

The following subsections detail specific use cases for assessment planning, data collection, analysis, and related operations pertaining to the publication and use of supporting data. Each use case is defined by a short summary containing a simple problem statement, followed by a discussion of related concepts, and a listing of associated building blocks which represent the capabilities needed to support the use case. These use cases and building blocks identify separate units of functionality that may be supported by different components of an architectural model.

2.1.1. Define, Publish, Query and Retrieve Security Automation Data

This use case describes the need for security automation data to be defined and published to one or more data stores, as well as queried and retrieved from these data stores for the explicit use of posture collection and evaluation.

Security automation data is a general concept that refers to any data expression that may be generated and/or used as part of the process of collecting and evaluating endpoint posture. Different types of security automation data will generally fall into one of three categories:

Guidance: Instructions and related metadata that guide the attribute collection and evaluation processes. The purpose of this data is to allow implementations to be data-driven, enabling their behavior to be customized without requiring changes to deployed software.

This type of data tends to change in units of months and days. In cases where assessments are made more dynamic, it may be necessary to handle changes on the order of hours or minutes. This data will typically be provided by large organizations, product vendors, and some 3rd-parties. Thus, it will tend to be shared across large enterprises and customer communities.

In some cases access may be controlled to specific authenticated users. In other cases, the data may be provided broadly with little to no access control.

This includes:

* Listings of attribute identifiers for which values may be collected and evaluated.

* Lists of attributes that are to be collected along with metadata that includes: when to collect a set of attributes based on a defined interval or event, the duration of collection, and how to go about collecting a set of attributes.

* Guidance that specifies how old collected data may be and still be used for evaluation.

* Policies that define how to target and perform the evaluation of a set of attributes for different kinds or groups of endpoints and the assets they are composed of. In some cases it may be desirable to maintain hierarchies of policies as well.

* References to human-oriented data that provide technical, organizational, and/or policy context. This might include references to: best practices documents, legal guidance and legislation, and instructional materials related to the automation data in question.
Attribute Data: Data collected through automated and manual mechanisms describing organizational and posture details pertaining to specific endpoints and the assets that they are composed of (e.g., hardware, software, accounts). The purpose of this type of data is to characterize an endpoint (e.g., endpoint type, organizationally expected function/role) and to provide actual and expected state data pertaining to one or more endpoints. This data is used to determine what posture attributes to collect from which endpoints and to feed one or more evaluations.

This type of data tends to change in units of days, minutes, or seconds, with posture attribute values typically changing more frequently than endpoint characterizations. This data tends to be organizationally and endpoint specific, with specific operational groups of endpoints tending to exhibit similar attribute profiles. This data will generally not be shared outside an organizational boundary and will generally require authentication with specific access controls.

This includes:

* Endpoint characterization data that describes the endpoint type, organizationally expected function/role, etc.

* Collected endpoint posture attribute values and related context including: time of collection, tools used for collection, etc.

* Organizationally defined expected posture attribute values targeted to specific evaluation guidance and endpoint characteristics. This allows a common set of guidance to be parameterized for use with different groups of endpoints.

Processing Artifacts: Data that is generated by and is specific to an individual assessment process. This data may be used as part of the interactions between architectural components to drive and coordinate collection and evaluation activities. Its lifespan will be bounded by the lifespan of the assessment. It may also be exchanged and stored to provide historic context around an assessment activity so that individual assessments can be grouped, evaluated, and reported in an enterprise context.

This includes:

* The identified set of endpoints for which an assessment should be performed.

* The identified set of posture attributes that need to be collected from specific endpoints to perform an evaluation.

* The resulting data generated by an evaluation process including the context of what was assessed, what it was assessed against, what collected data was used, when it was collected, and when the evaluation was performed.

The information model for security automation data must support a variety of different data types as described above, along with the associated metadata that is needed to support publication, query, and retrieval operations. It is expected that multiple data models will be used to express specific data types, requiring specialized or extensible security automation data repositories. The different temporal characteristics, access patterns, and access control dimensions of each data type may also require different protocols and data models to be supported, furthering the potential requirement for specialized data repositories. See [RFC3444] for a description and discussion of the distinction between an information model and a data model.
It is likely that additional kinds of data will be identified through the process of defining requirements and an architectural model. Implementations supporting this building block will need to be extensible to accommodate the addition of new types of data, whether proprietary or (preferably) expressed in a standard format.

The building blocks of this use case are:

Data Definition: Security automation data will guide and inform collection and evaluation processes. This data may be designed by a variety of roles: application implementers may build security automation data into their applications; administrators may define guidance based on organizational policies; operators may define guidance and attribute data as needed for evaluation at runtime, and so on. Data producers may choose to reuse data from existing stores of security automation data and may create new data. Data producers may develop data based on available standardized or proprietary data models, such as those used for network management and/or host management.

Data Publication: The capability to enable data producers to publish data to a security automation data store for further use. Published data may be made publicly available, or access may be based on an authorization decision using authenticated credentials. As a result, the visibility of specific security automation data to an operator or application may be public, enterprise-scoped, private, or controlled within any other scope.

Data Query: An operator or application should be able to query a security automation data store using a set of specified criteria. The result of the query is a listing of data matching those criteria. The query result listing may contain publication metadata (e.g., create date, modified date, publisher, etc.) and/or the full data, a summary, a snippet, or the location to retrieve the data.

Data Retrieval: A user, operator, or application acquires one or more specific security automation data entries. The location of the data may be known a priori, or may be determined based on decisions made using information from a previous query.

Data Change Detection: An operator or application needs to know when security automation data they are interested in has been published to, updated in, or deleted from a security automation data store that they are authorized to access.

These building blocks are used to enable acquisition of various instances of security automation data based on specific data models that are used to drive assessment planning (see section 2.1.2), posture attribute value collection (see section 2.1.3), and posture evaluation (see section 2.1.4).

2.1.2. Endpoint Identification and Assessment Planning

This use case describes the process of discovering endpoints, understanding their composition, identifying the desired state to assess against, and calculating what posture attributes to collect to enable evaluation. This process may be a set of manual, automated, or hybrid steps that are performed for each assessment.

The building blocks of this use case are:

Endpoint Discovery: To determine the current or historic presence of endpoints in the environment that are available for posture assessment.
Endpoints are identified in support of discovery using previously obtained information or by using other collection mechanisms to gather identification and characterization data. Previously obtained data may originate from sources such as network authentication exchanges.

Endpoint Characterization: The act of acquiring, through automated collection or manual input, and organizing the attributes associated with an endpoint (e.g., type, organizationally expected function/role, hardware/software versions).

Identify Endpoint Targets: Determine the candidate endpoint target(s) against which to perform the assessment. Depending on the assessment trigger, a single endpoint or multiple endpoints may be targeted based on characterized endpoint attributes. Guidance describing the assessment to be performed may contain instructions or references used to determine the applicable assessment targets. In this case the Data Query and/or Data Retrieval building blocks (see section 2.1.1) may be used to acquire this data.

Endpoint Component Inventory: To determine what applicable desired states should be assessed, it is first necessary to acquire the inventory of software, hardware, and accounts associated with the targeted endpoint(s). If the assessment of the endpoint is not dependent on these details, then this capability is not required for use in performing the assessment. This process can be treated as a collection use case for specific posture attributes. In this case the building blocks for Endpoint Posture Attribute Value Collection (see section 2.1.3) can be used.

Posture Attribute Identification: Once the endpoint targets and their associated asset inventory are known, it is then necessary to calculate what posture attributes are required to be collected to perform the desired evaluation. When available, existing posture data is queried for suitability using the Data Query building block (see section 2.1.1). Such posture data is suitable if it is complete and current enough for use in the evaluation. Any unsuitable posture data is identified for collection.

If this is driven by guidance, then the Data Query and/or Data Retrieval building blocks (see section 2.1.1) may be used to acquire this data.

At this point the set of posture attribute values to use for evaluation is known, and these values can be collected if necessary (see section 2.1.3).

2.1.3. Endpoint Posture Attribute Value Collection

This use case describes the process of collecting a set of posture attribute values related to one or more endpoints. This use case can be initiated by a variety of triggers, including:

1. A posture change or significant event on the endpoint.

2. A network event (e.g., endpoint connects to a network/VPN, specific netflow is detected).

3. A scheduled or ad hoc collection task.

The building blocks of this use case are:

Collection Guidance Acquisition: If guidance is required to drive the collection of posture attribute values, this capability is used to acquire this data from one or more security automation data stores. Depending on the trigger, the specific guidance to acquire might be known. If not, it may be necessary to determine the guidance to use based on the component inventory or other assessment criteria.
The Data Query and/or Data Retrieval building blocks (see section 2.1.1) may be used to acquire this guidance.

Posture Attribute Value Collection: The accumulation of posture attribute values. This may be based on collection guidance that is associated with the posture attributes.

Once the posture attribute values are collected, they may be persisted for later use or they may be immediately used for posture evaluation.

2.1.4. Posture Attribute Evaluation

This use case represents the action of analyzing collected posture attribute values as part of an assessment. The primary focus of this use case is to support evaluation of actual endpoint state against the expected state selected for the assessment.

This use case can be initiated by a variety of triggers, including:

1. A posture change or significant event on the endpoint.

2. A network event (e.g., endpoint connects to a network/VPN, specific netflow is detected).

3. A scheduled or ad hoc evaluation task.

The building blocks of this use case are:

Collected Posture Change Detection: An operator or application has a mechanism to detect the availability of new, or changes to existing, posture attribute values. The timeliness of detection may vary from immediate to on-demand. Having the ability to filter what changes are detected will allow the operator to focus on the changes that are relevant to their use and will enable evaluation to occur dynamically based on detected changes.

Posture Attribute Value Query: If previously collected posture attribute values are needed, the appropriate data stores are queried to retrieve them using the Data Query building block (see section 2.1.1). If all posture attribute values are provided directly for evaluation, then this capability may not be needed.

Evaluation Guidance Acquisition: If guidance is required to drive the evaluation of posture attribute values, this capability is used to acquire this data from one or more security automation data stores. Depending on the trigger, the specific guidance to acquire might be known. If not, it may be necessary to determine the guidance to use based on the component inventory or other assessment criteria. The Data Query and/or Data Retrieval building blocks (see section 2.1.1) may be used to acquire this guidance.

Posture Attribute Evaluation: The comparison of posture attribute values against their expected values as expressed in the specified guidance. The result of this comparison is output as a set of posture evaluation results. Such results include the metadata required to provide a level of assurance with respect to the posture attribute data and, therefore, to the evaluation results. Examples of such metadata include provenance and/or availability data.

While the primary focus of this use case is enabling the comparison of expected versus actual state, the same building blocks can support other analysis techniques that are applied to collected posture attribute data (e.g., trending, historic analysis).

Completion of this process represents a complete assessment cycle as defined in Section 2.

2.2. Usage Scenarios

In this section, we describe a number of usage scenarios that utilize aspects of endpoint posture assessment. These are examples of common problems that can be solved with the building blocks defined above.
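As a concrete, non-normative illustration of how the building blocks above might compose into the assessment cycle described in Section 2, the following minimal Python sketch compares collected posture attribute values against expected values taken from guidance. All attribute names, values, and function names are invented for illustration only; no SACM data model, protocol, or interface is implied.

   # Hypothetical example: guidance supplies expected values; collected
   # posture attribute values supply the actual endpoint state.
   guidance = {
       "ssh.protocol_version": "2",
       "password.min_length": "12",
   }

   collected = {
       "ssh.protocol_version": "2",
       "password.min_length": "8",
   }

   def evaluate(guidance, collected):
       """Posture Attribute Evaluation (section 2.1.4): compare each
       collected value with its expected value from the guidance."""
       results = {}
       for attribute, expected in guidance.items():
           actual = collected.get(attribute)  # None if never collected
           results[attribute] = {
               "expected": expected,
               "actual": actual,
               "compliant": actual == expected,
           }
       return results

   for attribute, result in evaluate(guidance, collected).items():
       print(attribute, result)

A real implementation would also carry the metadata discussed in section 2.1.4 (e.g., collection time and provenance) alongside each value so that evaluation results can convey a level of assurance.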
2.2.1. Definition and Publication of Automatable Configuration Checklists

A vendor manufactures a number of specialized endpoint devices. They also develop and maintain an operating system for these devices that enables end-user organizations to configure a number of security and operational settings. As part of their customer support activities, they publish a number of secure configuration guides that provide minimum security guidelines for configuring their devices.

Each guide they produce applies to a specific model of device and version of the operating system and provides a number of specialized configurations depending on the device's intended function and what add-on hardware modules and software licenses are installed on the device. To enable their customers to evaluate the security posture of their devices to ensure that all appropriate minimal security settings are enabled, they publish automatable configuration checklists using a popular data format that defines what settings to collect using a network management protocol and appropriate values for each setting. They publish these checklists to a public security automation data store that customers can query to retrieve applicable checklists for their deployed specialized endpoint devices.

Automatable configuration checklists could also come from sources other than a device vendor, such as industry groups or regulatory authorities, or enterprises could develop their own checklists.

This usage scenario employs the following building blocks defined in Section 2.1.1 above:

Data Definition: To allow guidance to be defined using standardized or proprietary data models that will drive Collection and Evaluation.

Data Publication: Providing a mechanism to publish created guidance to a security automation data store.

Data Query: To locate and select existing guidance that may be reused.

Data Retrieval: To retrieve specific guidance from a security automation data store for editing.

While each building block can be used in a manual fashion by a human operator, it is also likely that these capabilities will be implemented together in some form of a guidance editor or generator application.

2.2.2. Automated Checklist Verification

A financial services company operates a heterogeneous IT environment. In support of their risk management program, they utilize vendor-provided automatable security configuration checklists for each operating system and application used within their IT environment. Multiple checklists are used from different vendors to ensure adequate coverage of all IT assets.

To identify what checklists are needed, they use automation to gather an inventory of the software versions utilized by all IT assets in the enterprise. This data gathering will involve querying existing data stores of previously collected endpoint software inventory posture data and actively collecting data from reachable endpoints as needed, utilizing network and systems management protocols. Previously collected data may be provided by periodic data collection, network connection-driven data collection, or ongoing event-driven monitoring of endpoint posture changes.
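The following minimal sketch (in Python, with all names invented for illustration and no particular management protocol or interface implied) shows the "query previously collected data, collect live only when needed" pattern described above for gathering software inventory posture.

   import time

   MAX_AGE = 24 * 60 * 60  # example freshness policy: one day, in seconds

   def get_software_inventory(endpoint_id, data_store, collect_live):
       """Return software inventory posture for an endpoint, preferring
       previously collected data over a new live collection."""
       record = data_store.get(endpoint_id)
       if record and (time.time() - record["collected_at"]) < MAX_AGE:
           return record["inventory"]         # cached data is current enough
       inventory = collect_live(endpoint_id)  # e.g., via a management protocol
       data_store[endpoint_id] = {"collected_at": time.time(),
                                  "inventory": inventory}
       return inventory

   store = {}
   inventory = get_software_inventory(
       "host-17", store, lambda endpoint: ["exampled 1.2.3", "libfoo 0.9"])

Whether cached data is "current enough" corresponds to the guidance described in Section 2.1.1 that specifies how old collected data may be and still be used for evaluation.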
Using the collected hardware and software inventory data and associated asset characterization data that may indicate the organizationally defined functions of each endpoint, the appropriate checklist guidance is queried, located, and downloaded from the relevant vendor and third-party security automation data stores. This guidance is cached locally to reduce the need to retrieve the data multiple times.

Driven by the setting data provided in the checklist, a combination of existing configuration data stores and data collection methods is used to gather the appropriate posture attributes from (or pertaining to) each endpoint. Specific posture attribute values are gathered based on the defined enterprise function and software inventory of each endpoint. The collection mechanisms used to collect software inventory posture will be used again for this purpose. Once the data is gathered, the actual state is evaluated against the expected state criteria defined in each applicable checklist. The results of this evaluation are provided to appropriate operators and applications to drive additional business logic.

Checklists could include searching for indicators of compromise on the endpoint (e.g., file hashes); identifying malicious activity (e.g., command-and-control traffic); detecting the presence of unauthorized/malicious software, hardware, and configuration items; and other indicators.

A checklist can be assessed as a whole, or a specific subset of the checklist can be assessed, resulting in partial data collection and evaluation.

Checklists could also come from sources other than the application or OS vendor, such as industry groups or regulatory authorities, or enterprises could develop their own checklists.

While specific applications for checklist results are out of scope for current SACM efforts, how the data is used may illuminate specific latency and bandwidth requirements. Uses of checklist assessment results may include, but are not limited to:

o Detecting endpoint posture deviations as part of a change management program, including changes to hardware and software inventory (including patches), changes to configuration items, and other posture aspects.

o Determining compliance with organizational policies governing endpoint posture.

o Searching for current and historic signs of infection by malware and determining the scope of infection within an enterprise.

o Informing configuration management, patch management, and vulnerability mitigation and remediation decisions.

o Detecting performance, attack, and vulnerable conditions that warrant additional network diagnostics, monitoring, and analysis.

o Informing network access control decision making for wired, wireless, or VPN connections.

This usage scenario employs the following building blocks defined in Section 2.1 above:

Endpoint Discovery: The purpose of discovery is to determine the type of endpoint whose posture is to be assessed.

Identify Endpoint Targets: To identify the potential endpoint targets that the checklist should apply to, based on organizational policies.

Endpoint Component Inventory: Collecting and consuming the software and hardware inventory for the target endpoints.
Posture Attribute Identification: To determine what data needs to be collected to support evaluation, the checklist is evaluated against the component inventory and other endpoint metadata to determine the set of posture attribute values that are needed.

Collection Guidance Acquisition: Based on the identified posture attributes, the application will query appropriate security automation data stores to find the applicable collection guidance for each endpoint in question.

Posture Attribute Value Collection: For each endpoint, the values for the required posture attributes are collected.

Posture Attribute Value Query: If previously collected posture attribute values are used, they are queried from the appropriate data stores for the target endpoint(s).

Evaluation Guidance Acquisition: Any guidance that is needed to support evaluation is queried and retrieved.

Posture Attribute Evaluation: The resulting posture attribute values from previous Collection processes are evaluated using the evaluation guidance to provide a set of posture results.

2.2.3. Detection of Posture Deviations

Example Corporation has established secure configuration baselines for each different type of endpoint within their enterprise, including network infrastructure, mobile, client, and server computing platforms. These baselines define an approved list of hardware, software (i.e., operating system, applications, and patches), and associated required configurations. When an endpoint connects to the network, the appropriate baseline configuration is communicated to the endpoint based on its location in the network, the expected function of the device, and other asset management data. The endpoint is checked for compliance with the baseline, and any deviations are indicated to the device's operators. Once the baseline has been established, the endpoint is monitored for any change events pertaining to the baseline on an ongoing basis. When a change occurs to posture defined in the baseline, updated posture information is exchanged, allowing operators to be notified and/or automated action to be taken.

Like the Automated Checklist Verification usage scenario (see section 2.2.2), this usage scenario supports assessment based on automatable checklists. It differs from that scenario by monitoring for specific endpoint posture changes on an ongoing basis. When the endpoint detects a posture change, an alert is generated identifying the specific changes in posture, allowing assessment of the delta to be performed instead of the full assessment required in the previous scenario. This usage scenario employs the same building blocks as Automated Checklist Verification (see section 2.2.2). It differs slightly in how it uses the following building blocks (a brief illustrative sketch of the resulting delta evaluation follows the list):

Endpoint Component Inventory: Additionally, changes to the hardware and software inventory are monitored, with changes causing alerts to be issued.

Posture Attribute Value Collection: After the initial assessment, posture attributes are monitored for changes. If any of the selected posture attribute values change, an alert is issued.

Posture Attribute Value Query: The previous state of posture attributes is tracked, allowing changes to be detected.

Posture Attribute Evaluation: After the initial assessment, a partial evaluation is performed based on changes to specific posture attributes.
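The following minimal Python sketch illustrates that delta evaluation: when a change event reports that specific posture attribute values have changed, only the corresponding baseline checks are re-evaluated. All names and values are invented for illustration and do not represent any SACM data model or protocol.

   # Expected values from the secure configuration baseline (illustrative).
   baseline = {
       "firewall.enabled": "true",
       "telnet.service": "disabled",
       "os.patch_level": "2014-04",
   }

   def on_posture_change(changed, previous, current):
       """Partial evaluation: check only the attributes reported as
       changed and return any deviations from the baseline."""
       deviations = []
       for attribute in changed:
           expected = baseline.get(attribute)
           if expected is None:
               continue  # change is outside the scope of the baseline
           if current.get(attribute) != expected:
               deviations.append((attribute, previous.get(attribute),
                                  current.get(attribute), expected))
       return deviations

   # Example change event: the telnet service was enabled on the endpoint.
   print(on_posture_change({"telnet.service"},
                           {"telnet.service": "disabled"},
                           {"telnet.service": "enabled"}))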
This usage scenario highlights the need to query a data store to prepare a compliance report for a specific endpoint and also the need for a change in endpoint state to trigger Collection and Evaluation.

2.2.4. Endpoint Information Analysis and Reporting

Freed from the drudgery of manual endpoint compliance monitoring, one of the security administrators at Example Corporation notices (not using SACM standards) that five endpoints have been uploading lots of data to a suspicious server on the Internet. The administrator queries data stores for specific endpoint posture to see what software is installed on those endpoints and finds that they all have a particular program installed. She then queries the appropriate data stores to see which other endpoints have that program installed. All these endpoints are monitored carefully (not using SACM standards), which allows the administrator to detect that the other endpoints are also infected.

This is just one example of the useful analysis that a skilled analyst can do using data stores of endpoint posture.

This usage scenario employs the following building blocks defined in Section 2.1 above:

Posture Attribute Value Query: Previously collected posture attribute values are queried from the appropriate data stores for the target endpoint(s).

This usage scenario highlights the need to query a repository for attributes to see which attributes certain endpoints have in common.

2.2.5. Asynchronous Compliance/Vulnerability Assessment at Ice Station Zebra

A university team receives a grant to do research at a government facility in the Arctic. The only network communications will be via an intermittent, low-speed, high-latency, high-cost satellite link. During their extended expedition they will need to show continued compliance with the security policies of the university, the government, and the provider of the satellite network, as well as keep current on vulnerability testing. Interactive assessments are therefore not reliable, and since the researchers have very limited funding, they need to minimize how much money they spend on network data.

Prior to departure they register all equipment with an asset management system owned by the university, which will also initiate and track assessments.

On a periodic basis -- either after a maximum time delta or when the security automation data store has received a threshold level of new vulnerability definitions -- the university uses the information in the asset management system to put together a collection request for all of the deployed assets that encompasses the minimal set of artifacts necessary to evaluate all three security policies as well as vulnerability testing.

In the case of new critical vulnerabilities, this collection request consists only of the artifacts necessary for those vulnerabilities, and collection is initiated only for those assets that could potentially have a new vulnerability.

[Optional] Asset artifacts are cached in a local CMDB. When new vulnerabilities are reported to the security automation data store, a request to the live asset is made only if the artifacts in the CMDB are incomplete and/or not current enough.

The collection request is queued for the next window of connectivity.
The deployed assets eventually receive the request, fulfill it, and queue the results for the next return opportunity.

The collected artifacts eventually make it back to the university, where the level of compliance and vulnerability exposure is calculated and asset characteristics are compared to what is in the asset management system for accuracy and completeness.

Like the Automated Checklist Verification usage scenario (see section 2.2.2), this usage scenario supports assessment based on checklists. It differs from that scenario in how guidance, collected posture attribute values, and evaluation results are exchanged due to bandwidth limitations and availability. This usage scenario employs the same building blocks as Automated Checklist Verification (see section 2.2.2). It differs slightly in how it uses the following building blocks:

Endpoint Component Inventory: It is likely that the component inventory will not change. If it does, this information will need to be batched and transmitted during the next communication window.

Collection Guidance Acquisition: Due to intermittent communication windows and bandwidth constraints, changes to collection guidance will need to be batched and transmitted during the next communication window. Guidance will need to be cached locally to avoid the need for remote communications.

Posture Attribute Value Collection: The specific posture attribute values to be collected are identified remotely and batched for collection during the next communication window. If a delay is introduced for collection to complete, results will need to be batched and transmitted.

Posture Attribute Value Query: Previously collected posture attribute values will be stored in a remote data store for use at the university.

Evaluation Guidance Acquisition: Due to intermittent communication windows and bandwidth constraints, changes to evaluation guidance will need to be batched and transmitted during the next communication window. Guidance will need to be cached locally to avoid the need for remote communications.

Posture Attribute Evaluation: Due to the caching of posture attribute values and evaluation guidance, evaluation may be performed at both the university campus and the satellite site.

This usage scenario highlights the need to support low-bandwidth, intermittent, or high-latency links.

2.2.6. Identification and Retrieval of Guidance

In preparation for performing an assessment, an operator or application will need to identify one or more security automation data stores that contain the guidance entries necessary to perform data collection and evaluation tasks. The location of a given guidance entry will either be known a priori, or known security automation data stores will need to be queried to retrieve applicable guidance.

To query guidance, it will be necessary to define a set of search criteria. These criteria will often utilize a logical combination of publication metadata (e.g., publishing identity, creation time, modification time) and guidance data-specific criteria elements. Once the criteria are defined, one or more security automation data stores will need to be queried, generating a result set.
Depending on how the results are used, it may be desirable to return the matching guidance directly, a snippet of the guidance matching the query, or a resolvable location from which to retrieve the data at a later time. The guidance matching the query will be restricted based on the requester's authorized level of access.

If the location of guidance is identified in the query result set, the guidance will be retrieved when needed using one or more data retrieval requests. A variation on this approach would be to maintain a local cache of previously retrieved data. In this case, only guidance that is determined to be stale by some measure will be retrieved from the remote data store.

Alternatively, guidance can be discovered by iterating over data published with a given context within a security automation data store. Specific guidance can be selected and retrieved as needed.

This usage scenario employs the following building blocks defined in Section 2.1.1 above:

Data Query: Enables an operator or application to query one or more security automation data stores for guidance using a set of specified criteria.

Data Retrieval: If data locations are returned in the query result set, then specific guidance entries can be retrieved and possibly cached locally.

2.2.7. Guidance Change Detection

An operator or application may need to identify new, updated, or deleted guidance in a security automation data store that they are authorized to access. This may be achieved by querying or iterating over guidance in a security automation data store, or through a notification mechanism that alerts them to changes made to a security automation data store.

Once guidance changes have been determined, data collection and evaluation activities may be triggered.

This usage scenario employs the following building blocks defined in Section 2.1.1 above:

Data Change Detection: Allows an operator or application to identify guidance changes in a security automation data store that they are authorized to access.

Data Retrieval: If data locations are provided by the change detection mechanism, then specific guidance entries can be retrieved and possibly cached locally.

2.2.8. Others...

Additional usage scenarios will be identified as we work through other domains.

3. IANA Considerations

This memo includes no request to IANA.

4. Security Considerations

This memo documents, for Informational purposes, use cases for security automation. While it is about security, it does not affect security.

5. Acknowledgements

The National Institute of Standards and Technology (NIST) and/or the MITRE Corporation have developed specifications under the general term "Security Automation", including languages, protocols, enumerations, and metrics.

Adam Montville edited early versions of this draft.

Kathleen Moriarty and Stephen Hanna contributed text describing the scope of the document.

Gunnar Engelbach, Steve Hanna, Chris Inacio, Kent Landfield, Lisa Lorenzin, Adam Montville, Kathleen Moriarty, Nancy Cam-Winget, and Aron Woland provided use case text for various revisions of this draft.

6. Change Log

6.1. -06- to -07-

A number of edits were made to section 2 to resolve open questions in the draft based on meeting and mailing list discussions.
989 Section 2.1.5 was merged into section 2.1.4. 991 6.2. -05- to -06- 993 Updated the "Introduction" section to better reflect the use case, 994 building block, and usage scenario structure changes from previous 995 revisions. 997 Updated most uses of the terms "content" and "content repository" to 998 use "guidance" and "security automation data store" respectively. 1000 In section 2.1.1, added a discussion of different data types and 1001 renamed "content" to "data" in the building block names. 1003 In section 2.1.2, separated out the building block concepts of 1004 "Endpoint Discovery" and "Endpoint Characterization" based on mailing 1005 list discussions. 1007 Addressed some open questions throughout the draft based on consensus 1008 from mailing list discussions and the two virtual interim meetings. 1010 Changed many section/sub-section names to better reflect their 1011 content. 1013 6.3. -04- to -05- 1015 Changes in this revision are focused on section 2 and the subsequent 1016 subsections: 1018 o Moved existing use cases to a subsection titled "Usage Scenarios". 1020 o Added a new subsection titled "Use Cases" to describe the common 1021 use cases and building blocks used to address the "Usage 1022 Scenarios". The new use cases are: 1024 * Define, Publish, Query and Retrieve Content 1026 * Endpoint Identification and Assessment Planning 1028 * Endpoint Posture Attribute Value Collection 1030 * Posture Evaluation 1032 * Mining the Database 1034 o Added a listing of building blocks used for all usage scenarios. 1036 o Combined the following usage scenarios into "Automated Checklist 1037 Verification": "Organizational Software Policy Compliance", 1038 "Search for Signs of Infection", "Vulnerable Endpoint 1039 Identification", "Compromised Endpoint Identification", 1040 "Suspicious Endpoint Behavior", "Traditional endpoint assessment 1041 with stored results", "NAC/NAP connection with no stored results 1042 using an endpoint evaluator", and "NAC/NAP connection with no 1043 stored results using a third-party evaluator". 1045 o Created new usage scenario "Identification and Retrieval of 1046 Repository Content" by combining the following usage scenarios: 1047 "Repository Interaction - A Full Assessment" and "Repository 1048 Interaction - Filtered Delta Assessment" 1050 o Renamed "Register with repository for immediate notification of 1051 new security vulnerability content that match a selection filter" 1052 to "Content Change Detection" and generalized the description to 1053 be neutral to implementation approaches. 1055 o Removed out-of-scope usage scenarios: "Remediation and Mitigation" 1056 and "Direct Human Retrieval of Ancillary Materials" 1058 Updated acknowledgements to recognize those that helped with editing 1059 the use case text. 1061 6.4. -03- to -04- 1063 Added four new use cases regarding content repository. 1065 6.5. -02- to -03- 1067 Expanded the workflow description based on ML input. 1069 Changed the ambiguous "assess" to better separate data collection 1070 from evaluation. 1072 Added use case for Search for Signs of Infection. 1074 Added use case for Remediation and Mitigation. 1076 Added use case for Endpoint Information Analysis and Reporting. 1078 Added use case for Asynchronous Compliance/Vulnerability Assessment 1079 at Ice Station Zebra. 1081 Added use case for Traditional endpoint assessment with stored 1082 results. 1084 Added use case for NAC/NAP connection with no stored results using an 1085 endpoint evaluator. 
1087 Added use case for NAC/NAP connection with no stored results using a 1088 third-party evaluator. 1090 Added use case for Compromised Endpoint Identification. 1092 Added use case for Suspicious Endpoint Behavior. 1094 Added use case for Vulnerable Endpoint Identification. 1096 Updated Acknowledgements 1098 6.6. -01- to -02- 1100 Changed title 1102 removed section 4, expecting it will be moved into the requirements 1103 document. 1105 removed the list of proposed capabilities from section 3.1 1107 Added empty sections for Search for Signs of Infection, Remediation 1108 and Mitigation, and Endpoint Information Analysis and Reporting. 1110 Removed Requirements Language section and rfc2119 reference. 1112 Removed unused references (which ended up being all references). 1114 6.7. -00- to -01- 1116 o Work on this revision has been focused on document content 1117 relating primarily to use of asset management data and functions. 1119 o Made significant updates to section 3 including: 1121 * Reworked introductory text. 1123 * Replaced the single example with multiple use cases that focus 1124 on more discrete uses of asset management data to support 1125 hardware and software inventory, and configuration management 1126 use cases. 1128 * For one of the use cases, added mapping to functional 1129 capabilities used. If popular, this will be added to the other 1130 use cases as well. 1132 * Additional use cases will be added in the next revision 1133 capturing additional discussion from the list. 1135 o Made significant updates to section 4 including: 1137 * Renamed the section heading from "Use Cases" to "Functional 1138 Capabilities" since use cases are covered in section 3. This 1139 section now extrapolates specific functions that are needed to 1140 support the use cases. 1142 * Started work to flatten the section, moving select subsections 1143 up from under asset management. 1145 * Removed the subsections for: Asset Discovery, Endpoint 1146 Components and Asset Composition, Asset Resources, and Asset 1147 Life Cycle. 1149 * Renamed the subsection "Asset Representation Reconciliation" to 1150 "Deconfliction of Asset Identities". 1152 * Expanded the subsections for: Asset Identification, Asset 1153 Characterization, and Deconfliction of Asset Identities. 1155 * Added a new subsection for Asset Targeting. 1157 * Moved remaining sections to "Other Unedited Content" for future 1158 updating. 1160 6.8. draft-waltermire-sacm-use-cases-05 to draft-ietf-sacm-use-cases-00 1162 o Transitioned from individual I/D to WG I/D based on WG consensus 1163 call. 1165 o Fixed a number of spelling errors. Thank you Erik! 1167 o Added keywords to the front matter. 1169 o Removed the terminology section from the draft. Terms have been 1170 moved to: draft-dbh-sacm-terminology-00 1172 o Removed requirements to be moved into a new I/D. 1174 o Extracted the functionality from the examples and made the 1175 examples less prominent. 1177 o Renamed "Functional Capabilities and Requirements" section to "Use 1178 Cases". 1180 * Reorganized the "Asset Management" sub-section. Added new text 1181 throughout. 1183 + Renamed a few sub-section headings. 1185 + Added text to the "Asset Characterization" sub-section. 1187 o Renamed "Security Configuration Management" to "Endpoint 1188 Configuration Management". Not sure if the "security" distinction 1189 is important. 1191 * Added new sections, partially integrated existing content. 1193 * Additional text is needed in all of the sub-sections. 
1195 o Changed "Security Change Management" to "Endpoint Posture Change 1196 Management". Added new skeletal outline sections for future 1197 updates. 1199 6.9. waltermire -04- to -05- 1201 o Are we including user activities and behavior in the scope of this 1202 work? That seems to be layer 8 stuff, appropriate to an IDS/IPS 1203 application, not Internet stuff. 1205 o Removed the references to what the WG will do because this belongs 1206 in the charter, not the (potentially long-lived) use cases 1207 document. I removed mention of charter objectives because the 1208 charter may go through multiple iterations over time; there is a 1209 website for hosting the charter; this document is not the correct 1210 place for that discussion. 1212 o Moved the discussion of NIST specifications to the 1213 acknowledgements section. 1215 o Removed the portion of the introduction that describes the 1216 chapters; we have a table of concepts, and the existing text 1217 seemed redundant. 1219 o Removed marketing claims, to focus on technical concepts and 1220 technical analysis, that would enable subsequent engineering 1221 effort. 1223 o Removed (commented out in XML) UC2 and UC3, and eliminated some 1224 text that referred to these use cases. 1226 o Modified IANA and Security Consideration sections. 1228 o Moved Terms to the front, so we can use them in the subsequent 1229 text. 1231 o Removed the "Key Concepts" section, since the concepts of ORM and 1232 IRM were not otherwise mentioned in the document. This would seem 1233 more appropriate to the arch doc rather than use cases. 1235 o Removed role=editor from David Waltermire's info, since there are 1236 three editors on the document. The editor is most important when 1237 one person writes the document that represents the work of 1238 multiple people. When there are three editors, this role marking 1239 isn't necessary. 1241 o Modified text to describe that this was specific to enterprises, 1242 and that it was expected to overlap with service provider use 1243 cases, and described the context of this scoped work within a 1244 larger context of policy enforcement, and verification. 1246 o The document had asset management, but the charter mentioned 1247 asset, change, configuration, and vulnerability management, so I 1248 added sections for each of those categories. 1250 o Added text to Introduction explaining goal of the document. 1252 o Added sections on various example use cases for asset management, 1253 config management, change management, and vulnerability 1254 management. 1256 7. Informative References 1258 [RFC3444] Pras, A. and J. Schoenwaelder, "On the Difference between 1259 Information Models and Data Models", RFC 3444, January 1260 2003. 1262 Authors' Addresses 1264 David Waltermire 1265 National Institute of Standards and Technology 1266 100 Bureau Drive 1267 Gaithersburg, Maryland 20877 1268 USA 1270 Email: david.waltermire@nist.gov 1272 David Harrington 1273 Effective Software 1274 50 Harding Rd 1275 Portsmouth, NH 03801 1276 USA 1278 Email: ietfdbh@comcast.net