2 Security Automation and Continuous Monitoring WG D. Waltermire 3 Internet-Draft NIST 4 Intended status: Informational D. Harrington 5 Expires: September 4, 2014 Effective Software 6 March 3, 2014 8 Endpoint Security Posture Assessment - Enterprise Use Cases 9 draft-ietf-sacm-use-cases-06 11 Abstract 13 This memo documents a sampling of use cases for securely aggregating 14 configuration and operational data and evaluating that data to 15 determine an organization's security posture. From these operational 16 use cases, we can derive common functional capabilities and 17 requirements to guide development of vendor-neutral, interoperable 18 standards for aggregating and evaluating data relevant to security 19 posture. 21 Status of This Memo 23 This Internet-Draft is submitted in full conformance with the 24 provisions of BCP 78 and BCP 79. 26 Internet-Drafts are working documents of the Internet Engineering 27 Task Force (IETF). Note that other groups may also distribute 28 working documents as Internet-Drafts. The list of current Internet- 29 Drafts is at http://datatracker.ietf.org/drafts/current/. 31 Internet-Drafts are draft documents valid for a maximum of six months 32 and may be updated, replaced, or obsoleted by other documents at any 33 time. It is inappropriate to use Internet-Drafts as reference 34 material or to cite them other than as "work in progress." 36 This Internet-Draft will expire on September 4, 2014. 38 Copyright Notice 40 Copyright (c) 2014 IETF Trust and the persons identified as the 41 document authors. All rights reserved. 43 This document is subject to BCP 78 and the IETF Trust's Legal 44 Provisions Relating to IETF Documents 45 (http://trustee.ietf.org/license-info) in effect on the date of 46 publication of this document. Please review these documents 47 carefully, as they describe your rights and restrictions with respect 48 to this document. Code Components extracted from this document must 49 include Simplified BSD License text as described in Section 4.e of 50 the Trust Legal Provisions and are provided without warranty as 51 described in the Simplified BSD License. 53 Table of Contents 55 1. Introduction . . . . . . . . . . . . . . . . . . . . . . . .
2 56 2. Endpoint Posture Assessment . . . . . . . . . . . . . . . . . 3 57 2.1. Use Cases . . . . . . . . . . . . . . . . . . . . . . . . 5 58 2.1.1. Define, Publish, Query and Retrieve Security 59 Automation Data . . . . . . . . . . . . . . . . . . . 5 60 2.1.2. Endpoint Identification and Assessment Planning . . . 9 61 2.1.3. Endpoint Posture Attribute Value Collection . . . . . 10 62 2.1.4. Posture Evaluation . . . . . . . . . . . . . . . . . 11 63 2.1.5. Mining the Database . . . . . . . . . . . . . . . . . 12 64 2.2. Usage Scenarios . . . . . . . . . . . . . . . . . . . . . 12 65 2.2.1. Definition and Publication of Automatable 66 Configuration Checklists . . . . . . . . . . . . . . 13 67 2.2.2. Automated Checklist Verification . . . . . . . . . . 14 68 2.2.3. Detection of Posture Deviations . . . . . . . . . . . 16 69 2.2.4. Endpoint Information Analysis and Reporting . . . . . 17 70 2.2.5. Asynchronous Compliance/Vulnerability Assessment at 71 Ice Station Zebra . . . . . . . . . . . . . . . . . . 18 72 2.2.6. Identification and Retrieval of Guidance . . . . . . 20 73 2.2.7. Guidance Change Detection . . . . . . . . . . . . . . 21 74 2.2.8. Others... . . . . . . . . . . . . . . . . . . . . . . 21 75 3. IANA Considerations . . . . . . . . . . . . . . . . . . . . . 21 76 4. Security Considerations . . . . . . . . . . . . . . . . . . . 21 77 5. Acknowledgements . . . . . . . . . . . . . . . . . . . . . . 21 78 6. Change Log . . . . . . . . . . . . . . . . . . . . . . . . . 22 79 6.1. -05- to -06- . . . . . . . . . . . . . . . . . . . . . . 22 80 6.2. -04- to -05- . . . . . . . . . . . . . . . . . . . . . . 22 81 6.3. -03- to -04- . . . . . . . . . . . . . . . . . . . . . . 23 82 6.4. -02- to -03- . . . . . . . . . . . . . . . . . . . . . . 23 83 6.5. -01- to -02- . . . . . . . . . . . . . . . . . . . . . . 24 84 6.6. -00- to -01- . . . . . . . . . . . . . . . . . . . . . . 24 85 6.7. draft-waltermire-sacm-use-cases-05 to draft-ietf-sacm- 86 use-cases-00 . . . . . . . . . . . . . . . . . . . . . . 25 87 6.8. waltermire -04- to -05- . . . . . . . . . . . . . . . . . 26 88 7. Informative References . . . . . . . . . . . . . . . . . . . 27 89 Authors' Addresses . . . . . . . . . . . . . . . . . . . . . . . 27 91 1. Introduction 93 This document describes the core set of use cases for endpoint 94 posture assessment for enterprises. It provides a discussion of 95 these use cases and associated building block capabilities that 96 support securely aggregating configuration and operational data and 97 evaluating that data to determine the security posture of individual 98 endpoints, and, in the aggregate, the security posture of an 99 enterprise. Additionally, this document describes a set of usage 100 scenarios that provide examples for using the use cases and 101 associated building blocks to address a variety of operational 102 functions. 104 These use cases and usage scenarios cross many IT security 105 information domains. From these operational use cases, we can derive 106 common concepts, common information expressions, functional 107 capabilities, and requirements to guide development of vendor-neutral, 108 interoperable standards for aggregating and evaluating data relevant 109 to security posture. 111 Using this standardized data, tools can analyze the state of endpoints, 112 user activities and behavior, and evaluate the security posture of 113 an organization.
Common expression of information should enable 114 interoperability between tools (whether customized, commercial, or 115 freely available), and the ability to automate portions of security 116 processes to gain efficiency, react to new threats in a timely 117 manner, and free up security personnel to work on more advanced 118 problems. 120 The goal is to enable organizations to make informed decisions that 121 support organizational objectives, to enforce policies for hardening 122 systems, to prevent network misuse, to quantify business risk, and to 123 collaborate with partners to identify and mitigate threats. 125 It is expected that use cases for enterprises and for service 126 providers will largely overlap, but there are additional 127 complications for service providers, especially in handling 128 information that crosses administrative domains. 130 The output of endpoint posture assessment is expected to feed into 131 additional processes, such as policy-based enforcement of acceptable 132 state, verification and monitoring of security controls, and 133 compliance with regulatory requirements. 135 2. Endpoint Posture Assessment 137 Endpoint posture assessment involves orchestrating and performing 138 data collection and evaluating the posture of a given endpoint. 139 Typically, endpoint posture information is gathered and then 140 published to appropriate data repositories to make collected 141 information available for further analysis supporting organizational 142 security processes. 144 Endpoint posture assessment typically includes: 146 o Collecting the attributes of a given endpoint; 148 o Making the attributes available for evaluation and action; and 150 o Verifying that the endpoint's posture is in compliance with 151 enterprise standards and policy. 153 As part of these activities, it is often necessary to identify and 154 acquire any supporting security automation data that is needed to 155 drive and feed data collection and evaluation processes. 157 The following is a typical workflow scenario for assessing endpoint 158 posture: 160 1. Some type of trigger initiates the workflow. For example, an 161 operator or an application might trigger the process with a 162 request, or the endpoint might trigger the process using an 163 event-driven notification. 165 2. An operator/application selects one or more target endpoints to 166 be assessed. 168 3. An operator/application selects which policies are applicable to 169 the targets. 171 4. For each target: 173 A. The application determines which (sets of) posture attributes 174 need to be collected for evaluation. Implementations should 175 be able to support (possibly mixed) sets of standardized and 176 proprietary attributes. 178 B. The application might retrieve previously collected 179 information from a cache or data store, such as a data store 180 populated by an asset management system. 182 C. The application might establish communication with the 183 target, mutually authenticate identities and determine 184 authorizations, and collect posture attributes from the target. 186 D. The application might establish communication with one or 187 more intermediaries/agents, mutually authenticate their 188 identities and determine authorizations, and collect posture 189 attributes about the target from the intermediaries/agents. 190 Such agents might be local or external. 192 E. The application communicates target identity and (sets of) 193 collected attributes to an evaluator, possibly an external 194 process or external system. 196 F.
The evaluator compares the collected posture attributes with 197 expected values as expressed in policies. 199 G. The evaluator reports the evaluation result for the requested 200 assessment, in a standardized or proprietary format, such as 201 a report, a log entry, a database entry, or a notification. 203 2.1. Use Cases 205 The following subsections detail specific use cases for assessment 206 planning, data collection, analysis, and related operations 207 pertaining to the publication and use of supporting data. Each use 208 case is defined by a short summary containing a simple problem 209 statement, followed by a discussion of related concepts, and a 210 listing of associated building blocks that represent the 211 capabilities needed to support the use case. These use cases and 212 building blocks identify separate units of functionality that may be 213 supported by different components of an architectural model. 215 2.1.1. Define, Publish, Query and Retrieve Security Automation Data 217 This use case describes the need for security automation data to be 218 defined and published to one or more data stores, as well as queried 219 and retrieved from these data stores for the explicit use of posture 220 collection and evaluation. 222 Security automation data is a general concept that refers to any data 223 expression that may be generated and/or used as part of the process 224 of collecting and evaluating endpoint posture. Different types of 225 security automation data will generally fall into one of three 226 categories: 228 Guidance: Instructions and related metadata that guide the attribute 229 collection and evaluation processes. The purpose of this data 230 is to allow implementations to be data-driven, enabling their 231 behavior to be customized without requiring changes to deployed 232 software. 234 This type of data tends to change in units of months and days. 235 In cases where assessments are made more dynamic, it may be 236 necessary to handle changes on the scale of hours or minutes. 237 This data will typically be provided by large organizations, 238 product vendors, and some third parties. Thus, it will tend to 239 be shared across large enterprises and customer communities. 241 In some cases, access may be restricted to specific 242 authenticated users. In other cases, the data may be provided 243 broadly with little to no access control. 245 This includes: 247 * Listings of attribute identifiers for which values may be 248 collected and evaluated. 250 * Lists of attributes that are to be collected along with 251 metadata that includes: when to collect a set of attributes 252 based on a defined interval or event, the duration of 253 collection, and how to go about collecting a set of 254 attributes. 256 * Guidance that specifies how old collected data may be and 257 still be used for evaluation. 259 * Policies that define how to target and perform the 260 evaluation of a set of attributes for different kinds or 261 groups of endpoints and the assets they are composed of. In 262 some cases it may be desirable to maintain hierarchies of 263 policies as well. 265 * References to human-oriented data that provide technical, 266 organizational, and/or policy context. This might include 267 references to: best practices documents, legal guidance and 268 legislation, and instructional materials related to the 269 automation data in question.
271 Attribute Data: Data collected through automated and manual 272 mechanisms describing organizational and posture details 273 pertaining to specific endpoints and the assets that they are 274 composed of (e.g., hardware, software, accounts). The purpose 275 of this type of data is to characterize an endpoint (e.g., 276 endpoint type, organizationally expected function/role) and to 277 provide actual and expected state data pertaining to one or 278 more endpoints. This data is used to determine what posture 279 attributes to collect from which endpoints and to feed one or 280 more evaluations. 282 This type of data tends to change in units of days, minutes, or 283 seconds, with posture attribute values typically changing more 284 frequently than endpoint characterizations. This data tends to 285 be organization- and endpoint-specific, with specific 286 operational groups of endpoints tending to exhibit similar 287 attribute profiles. This data will generally not be shared 288 outside an organizational boundary and will generally require 289 authentication with specific access controls. 291 This includes: 293 * Endpoint characterization data that describes the endpoint 294 type, organizationally expected function/role, etc. 296 * Collected endpoint posture attribute values and related 297 context including: time of collection, tools used for 298 collection, etc. 300 * Organizationally defined expected posture attribute values 301 targeted to specific evaluation guidance and endpoint 302 characteristics. This allows a common set of guidance to be 303 parameterized for use with different groups of endpoints. 305 Processing Artifacts: Data that is generated by and is specific to 306 an individual assessment process. This data may be used as 307 part of the interactions between architectural components to 308 drive and coordinate collection and evaluation activities. Its 309 lifespan will be bounded by the lifespan of the assessment. It 310 may also be exchanged and stored to provide historic context 311 around an assessment activity so that individual assessments 312 can be grouped, evaluated, and reported in an enterprise 313 context. 315 This includes: 317 * The identified set of endpoints for which an assessment 318 should be performed. 320 * The identified set of posture attributes that need to be 321 collected from specific endpoints to perform an evaluation. 323 * The resulting data generated by an evaluation process 324 including the context of what was assessed, what it was 325 assessed against, what collected data was used, when it was 326 collected, and when the evaluation was performed. 328 The information model for security automation data must support a 329 variety of different data types as described above, along with the 330 associated metadata that is needed to support publication, query, and 331 retrieval operations. It is expected that multiple data models will 332 be used to express specific data types, requiring specialized or 333 extensible security automation data repositories. The different 334 temporal characteristics, access patterns, and access control 335 dimensions of each data type may also require different protocols and 336 data models to be supported, furthering the potential requirement for 337 specialized data repositories. See [RFC3444] for a description and 338 discussion of the distinctions between an information model and a data model.
It 339 is likely that additional kinds of data will be identified through 340 the process of defining requirements and an architectural model. 341 Implementations supporting this building block will need to be 342 extensible to accommodate the addition of new types of data, whether 343 proprietary or (preferably) expressed in a standard format. 345 The building blocks of this use case are: 347 Data Definition: Security automation data will guide and inform 348 collection and evaluation processes. This data may be designed 349 by a variety of roles - application implementers may build 350 security automation data into their applications; 351 administrators may define guidance based on organizational 352 policies; operators may define guidance and attribute data as 353 needed for evaluation at runtime; and so on. Data producers 354 may choose to reuse data from existing stores of security 355 automation data and may create new data. Data producers may 356 develop data based on available standardized or proprietary 357 data models, such as those used for network management and/or 358 host management. 360 Data Publication: The capability to enable data producers to publish 361 data to a security automation data store for further use. 362 Published data may be made publicly available, or access may be 363 based on an authorization decision using authenticated 364 credentials. As a result, the visibility of specific security 365 automation data to an operator or application may be public, 366 enterprise-scoped, private, or controlled within any other 367 scope. 369 Data Query: An operator or application should be able to query a 370 security automation data store using a set of specified 371 criteria. The result of the query will be a listing of entries matching 372 the query criteria. The query result listing may contain publication 373 metadata (e.g., create date, modified date, publisher, etc.) 374 and/or the full data, a summary, a snippet, or the location to 375 retrieve the data. 377 Data Retrieval: A user, operator, or application acquires one or 378 more specific security automation data entries. The location 379 of the data may be known a priori, or may be determined based 380 on decisions made using information from a previous query. 382 Data Change Detection: An operator or application needs to know when 383 security automation data they are interested in has been published 384 to, updated in, or deleted from a security automation data 385 store that they are authorized to access. 387 These building blocks are used to enable acquisition of various 388 instances of security automation data based on specific data models 389 that are used to drive assessment planning (see section 2.1.2), 390 posture attribute value collection (see section 2.1.3), and posture 391 evaluation (see section 2.1.4). 393 2.1.2. Endpoint Identification and Assessment Planning 395 This use case describes the process of discovering endpoints, 396 understanding their composition, identifying the desired state to 397 assess against, and calculating what posture attributes to collect to 398 enable evaluation. This process may be a set of manual, automated, 399 or hybrid steps that are performed for each assessment. 401 The building blocks of this use case are: 403 Endpoint Discovery: To determine the current or historic presence of 404 endpoints in the environment that are available for posture 405 assessment.
407 Endpoint Characterization: The act of acquiring, through automated 408 collection or manual input, and organizing attributes 409 associated with an endpoint (e.g., type, organizationally 410 expected function/role, hardware/software versions). 412 Identify Endpoint Targets: Determine the candidate endpoint 413 target(s) against which to perform the assessment. Depending 414 on the assessment trigger, a single endpoint or multiple 415 endpoints may be targeted based on characterized endpoint 416 attributes. Guidance describing the assessment to be performed 417 may contain instructions or references used to determine the 418 applicable assessment targets. In this case, the Data Query 419 and/or Data Retrieval building blocks (see section 2.1.1) may be 420 used to acquire this data. 422 QUESTION: Should this include authentication of the target? 424 Endpoint Component Inventory: To determine what applicable desired 425 states should be assessed, it is first necessary to acquire the 426 inventory of software, hardware, and accounts associated with 427 the targeted endpoint(s). If the assessment of the endpoint is 428 not dependent on the component inventory, then this capability 429 is not required for use in performing the assessment. This 430 process can be treated as a collection use case for specific 431 posture attributes. In this case, the building blocks for 432 Endpoint Posture Attribute Value Collection (see section 2.1.3) 433 can be used. 435 Posture Attribute Identification: Once the endpoint targets and 436 component inventory are known, it is then necessary to calculate 437 which posture attributes need to be collected to perform 438 the evaluation. If this is driven by guidance, then the Data 439 Query and/or Data Retrieval building blocks (see section 2.1.1) 440 may be used to acquire this data. 442 QUESTION: Are we missing a building block that determines what 443 previously collected data, if any, is suitable for evaluation and 444 what data needs to be actually collected? Should a building block be 445 identified that evaluates existing data to determine if it is current 446 enough for use in the evaluation or if current data should be 447 collected anyway according to a policy? 449 COMMENT(DR): Probably yes, taking into account usage scenarios like 450 2.2.2 and 2.2.3, which rely on historical data. 452 At this point the set of posture attribute values to use for 453 evaluation is known, and the values can be collected if necessary (see 454 section 2.1.3). 456 2.1.3. Endpoint Posture Attribute Value Collection 458 This use case describes the process of collecting a set of posture 459 attribute values related to one or more endpoints. This use case can 460 be initiated by a variety of triggers, including: 462 1. A posture change or significant event on the endpoint. 464 2. A network event (e.g., endpoint connects to a network/VPN, 465 specific netflow is detected). 467 3. A scheduled or ad hoc collection task. 469 The building blocks of this use case are: 471 Collection Guidance Acquisition: If guidance is required to drive 472 the collection of posture attribute values, this capability is 473 used to acquire this data from one or more security automation 474 data stores. Depending on the trigger, the specific guidance 475 to acquire might be known. If not, it may be necessary to 476 determine the guidance to use based on the component inventory 477 or other assessment criteria.
The Data Query and/or Data 478 Retrieval building blocks (see section 2.1.1) may be used to 479 acquire this guidance. 481 Posture Attribute Value Collection: The accumulation of posture 482 attribute values. This may be based on collection guidance 483 that is associated with the posture attributes. 485 Once the posture attribute values are collected, they may be 486 persisted for later use or they may be immediately used for posture 487 evaluation. 489 2.1.4. Posture Evaluation 491 This use case describes the process of evaluating collected posture 492 attribute values representing actual endpoint state against the 493 expected state selected for the assessment. This use case can be 494 initiated by a variety of triggers, including: 496 1. A posture change or significant event on the endpoint. 498 2. A network event (e.g., endpoint connects to a network/VPN, 499 specific netflow is detected). 501 3. A scheduled or ad hoc evaluation task. 503 The building blocks of this use case are: 505 Posture Attribute Value Query: If previously collected posture 506 attribute values are needed, the appropriate data stores are 507 queried to retrieve them. If all posture attribute values are 508 provided directly for evaluation, then this capability may not 509 be needed. 511 Evaluation Guidance Acquisition: If guidance is required to drive 512 the evaluation of posture attribute values, this capability is 513 used to acquire this data from one or more security automation 514 data stores. Depending on the trigger, the specific guidance 515 to acquire might be known. If not, it may be necessary to 516 determine the guidance to use based on the component inventory 517 or other assessment criteria. The Data Query and/or Data 518 Retrieval building blocks (see section 2.1.1) may be used to 519 acquire this guidance. 521 Posture Attribute Evaluation: The comparison of posture attribute 522 values against their expected values as expressed in the 523 specified guidance. The result of this comparison is output as 524 a set of posture evaluation results. 526 QUESTION: What if data is unavailable or is not current enough 527 to support the evaluation? This could happen if collection 528 did not occur (for some reason) and the previously collected data 529 was too old. 531 Completion of this process represents a complete assessment cycle as 532 defined in Section 2. 534 QUESTION: Since this indicates completion of the section 2 process, I 535 would expect section 3 to follow. But the section continues with 2.1.5? 537 2.1.5. Mining the Database 539 This use case describes the need to analyze previously collected 540 posture attribute values from one or more endpoints. This is an 541 alternate use case to Posture Evaluation (see section 2.1.4) that 542 uses collected posture attribute values for analysis processes that 543 may do more than evaluate expected vs. actual state(s). 545 The building blocks of this use case are: 547 Query: Query a data store for specific posture attribute values. 549 Change Detection: An operator should have a mechanism to detect the 550 availability of new posture attribute values or changes to existing 551 values. The timeliness of detection may vary from immediate to 552 on demand. Having the ability to filter what changes are 553 detected will allow the operator to focus on the changes that 554 are relevant to their use. 556 QUESTION: Does this warrant a separate use case, or should this be 557 incorporated into the previous use case?
559 COMMENT(DBH): I think the 2.1.5 use case is a subset of the 2.1.4 use 560 case; specifically, the query of existing data is covered in Posture 561 Attribute Value Query, condition 1. I think Posture Attribute Value 562 Query should be modified to include the change detection, as part of 563 establishing what needs to be queried. 565 2.2. Usage Scenarios 567 In this section, we describe a number of usage scenarios that utilize 568 aspects of endpoint posture assessment. These are examples of common 569 problems that can be solved with the building blocks defined above. 571 COMMENT(DBH): I don't see "Search for Signs of Infection", 572 "Vulnerable Endpoint Identification", "Compromised Endpoint 573 Identification", and "Suspicious Endpoint Behavior", which were in 574 -04-. They were moved into "Automated Checklist Verification". But 575 the original usage scenarios did not mention checklists. Are we now 576 limiting SACM to a checklist-driven approach? Do the authors of 577 the text in -04- agree that their use cases/usage scenarios are 578 adequately captured in -05-? 580 2.2.1. Definition and Publication of Automatable Configuration 581 Checklists 583 A vendor manufactures a number of specialized endpoint devices. They 584 also develop and maintain an operating system for these devices that 585 enables end-user organizations to configure a number of security and 586 operational settings. As part of their customer support activities, 587 they publish a number of secure configuration guides that provide 588 minimum security guidelines for configuring their devices. 590 Each guide they produce applies to a specific model of device and 591 version of the operating system and provides a number of specialized 592 configurations depending on the device's intended function and what 593 add-on hardware modules and software licenses are installed on the 594 device. To enable their customers to evaluate the security posture 595 of their devices to ensure that all appropriate minimal security 596 settings are enabled, they publish automatable configuration 597 checklists using a popular data format that defines what settings to 598 collect using a network management protocol and appropriate values 599 for each setting. They publish these checklists to a public security 600 automation data store that customers can query to retrieve applicable 601 checklists for their deployed specialized endpoint devices. 603 Automatable configuration checklists could also come from sources 604 other than a device vendor, such as industry groups or regulatory 605 authorities, or enterprises could develop their own checklists. 607 This usage scenario employs the following building blocks defined in 608 Section 2.1.1 above: 610 Data Definition: To allow guidance to be defined using standardized 611 or proprietary data models that will drive Collection and 612 Evaluation. 614 Data Publication: Providing a mechanism to publish created guidance 615 to a security automation data store. 617 Data Query: To locate and select existing guidance that may be 618 reused. 620 Data Retrieval: To retrieve specific guidance from a security 621 automation data store for editing. 623 While each building block can be used in a manual fashion by a human 624 operator, it is also likely that these capabilities will be 625 implemented together in some form of a guidance editor or generator 626 application. 628 2.2.2. Automated Checklist Verification 630 A financial services company operates a heterogeneous IT environment.
631 In support of their risk management program, they utilize vendor-provided 632 automatable security configuration checklists for each 633 operating system and application used within their IT environment. 634 Multiple checklists are used from different vendors to ensure 635 adequate coverage of all IT assets. 637 To identify what checklists are needed, they use automation to gather 638 an inventory of the software versions utilized by all IT assets in 639 the enterprise. This data gathering will involve querying existing 640 data stores of previously collected endpoint software inventory 641 posture data and actively collecting data from reachable endpoints as 642 needed, utilizing network and systems management protocols. 643 Previously collected data may be provided by periodic data 644 collection, network connection-driven data collection, or ongoing 645 event-driven monitoring of endpoint posture changes. 647 Using the collected hardware and software inventory data and 648 associated asset characterization data that may indicate the 649 organizationally defined functions of each endpoint, the appropriate checklist guidance 650 is queried, located, and downloaded from vendor and 651 third-party security automation data stores. 652 This guidance is cached locally to reduce the need to 653 retrieve the data multiple times. 655 Driven by the setting data provided in the checklist, a combination 656 of existing configuration data stores and data collection methods is 657 used to gather the appropriate posture attributes from (or pertaining 658 to) each endpoint. Specific posture attribute values are gathered 659 based on the defined enterprise function and software inventory of 660 each endpoint. The collection mechanisms used to collect software 661 inventory posture will be used again for this purpose. Once the data 662 is gathered, the actual state is evaluated against the expected state 663 criteria defined in each applicable checklist. The results of this 664 evaluation are provided to appropriate operators and applications to 665 drive additional business logic. 667 Checklists could include searching for indicators of compromise on 668 the endpoint (e.g., file hashes); identifying malicious activity 669 (e.g., command and control traffic); detecting the presence of 670 unauthorized/malicious software, hardware, and configuration items; 671 and other indicators. 673 A checklist can be assessed as a whole, or a specific subset of the 674 checklist can be assessed, resulting in partial data collection and 675 evaluation. 677 Checklists could also come from sources other than the application or 678 OS vendor, such as industry groups or regulatory authorities, or 679 enterprises could develop their own checklists. 681 While specific applications of checklist results are out of scope 682 for current SACM efforts, how the data is used may illuminate 683 specific latency and bandwidth requirements. For this purpose, uses of 684 checklist assessment results may include, but are not limited to: 686 o Detecting endpoint posture deviations as part of a change 687 management program, including changes to hardware and software 688 inventory (including patches), changes to configuration items, and 689 other posture aspects. 691 o Determining compliance with organizational policies governing 692 endpoint posture. 694 o Searching for current and historic signs of infection by malware 695 and determining the scope of infection within an enterprise.
697 o Informing configuration management, patch management, and 698 vulnerability mitigation and remediation decisions. 700 o Detecting performance, attack, and vulnerable conditions that 701 warrant additional network diagnostics, monitoring, and analysis. 703 o Informing network access control decision making for wired, 704 wireless, or VPN connections. 706 This usage scenario employs the following building blocks defined in 707 Section 2.1.1 above: 709 Endpoint Discovery: The purpose of discovery is to determine the 710 type of endpoint to be posture assessed. 712 Identify Endpoint Targets: To identify what potential endpoint 713 targets the checklist should apply to, based on organizational 714 policies. 716 Endpoint Component Inventory: Collecting and consuming the software 717 and hardware inventory for the target endpoints. 719 Posture Attribute Identification: To determine what data needs to be 720 collected to support evaluation, the checklist is evaluated 721 against the component inventory and other endpoint metadata to 722 determine the set of posture attribute values that are needed. 724 Collection Guidance Acquisition: Based on the identified posture 725 attributes, the application will query appropriate security 726 automation data stores to find the applicable collection 727 guidance for each endpoint in question. 729 Posture Attribute Value Collection: For each endpoint, the values 730 for the required posture attributes are collected. 732 Posture Attribute Value Query: If previously collected posture 733 attribute values are used, they are queried from the 734 appropriate data stores for the target endpoint(s). 736 Evaluation Guidance Acquisition: Any guidance that is needed to 737 support evaluation is queried and retrieved. 739 Posture Attribute Evaluation: The resulting posture attribute values 740 from the previous Collection processes are evaluated using the 741 evaluation guidance to provide a set of posture results. 743 2.2.3. Detection of Posture Deviations 745 Example Corporation has established secure configuration baselines 746 for each different type of endpoint within their enterprise, 747 including network infrastructure, mobile, client, and server 748 computing platforms. These baselines define an approved list of 749 hardware, software (i.e., operating system, applications, and 750 patches), and associated required configurations. When an endpoint 751 connects to the network, the appropriate baseline configuration is 752 communicated to the endpoint based on its location in the network, 753 the expected function of the device, and other asset management data. 754 The endpoint is checked for compliance with the baseline, and any 755 deviations are indicated to the device's operators. Once the baseline has been 756 established, the endpoint is monitored for any change events 757 pertaining to the baseline on an ongoing basis. When a change occurs 758 to posture defined in the baseline, updated posture information is 759 exchanged, allowing operators to be notified and/or automated action 760 to be taken. 762 Like the Automated Checklist Verification usage scenario (see section 763 2.2.2), this usage scenario supports assessment based on automatable 764 checklists. It differs from that scenario by monitoring for specific 765 endpoint posture changes on an ongoing basis.
When the endpoint 766 detects a posture change, an alert is generated identifying the 767 specific changes in posture, allowing an assessment of the delta to be 768 performed instead of the full assessment required in the previous scenario. This 769 usage scenario employs the same building blocks as 770 Automated Checklist Verification (see section 2.2.2). It differs 771 slightly in how it uses the following building blocks: 773 Endpoint Component Inventory: Additionally, changes to the hardware 774 and software inventory are monitored, with changes causing 775 alerts to be issued. 777 Posture Attribute Value Collection: After the initial assessment, 778 posture attributes are monitored for changes. If any of the 779 selected posture attribute values change, an alert is issued. 781 Posture Attribute Value Query: The previous state of posture 782 attributes is tracked, allowing changes to be detected. 784 Posture Attribute Evaluation: After the initial assessment, a 785 partial evaluation is performed based on changes to specific 786 posture attributes. 788 This usage scenario highlights the need to query a data store to 789 prepare a compliance report for a specific endpoint and also the need 790 for a change in endpoint state to trigger Collection and Evaluation. 792 2.2.4. Endpoint Information Analysis and Reporting 794 Freed from the drudgery of manual endpoint compliance monitoring, one 795 of the security administrators at Example Corporation notices (not 796 using SACM standards) that five endpoints have been uploading large amounts of 797 data to a suspicious server on the Internet. The administrator 798 queries data stores for specific endpoint posture to see what 799 software is installed on those endpoints and finds that they all have 800 a particular program installed. She then queries the appropriate 801 data stores to see which other endpoints have that program installed. 802 All these endpoints are monitored carefully (not using SACM 803 standards), which allows the administrator to detect that the other 804 endpoints are also infected. 806 This is just one example of the useful analysis that a skilled 807 analyst can do using data stores of endpoint posture. 809 This usage scenario employs the following building blocks defined in 810 Section 2.1.1 above: 812 Posture Attribute Value Query: Previously collected posture 813 attribute values are queried from the appropriate data stores 814 for the target endpoint(s). 816 QUESTION: Should we include other building blocks here? 818 This usage scenario highlights the need to query a repository for 819 attributes to see which attributes certain endpoints have in common. 821 2.2.5. Asynchronous Compliance/Vulnerability Assessment at Ice Station 822 Zebra 824 A university team receives a grant to do research at a government 825 facility in the Arctic. The only network communications will be via 826 an intermittent, low-speed, high-latency, high-cost satellite link. 827 During their extended expedition, they will need to show continued 828 compliance with the security policies of the university, the 829 government, and the provider of the satellite network, as well as keep 830 current on vulnerability testing. Interactive assessments are 831 therefore not reliable, and since the researchers have very limited 832 funding, they need to minimize how much money they spend on network 833 data. 835 Prior to departure, they register all equipment with an asset 836 management system owned by the university, which will also initiate 837 and track assessments.
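The following Python sketch is purely illustrative of this pre-departure registration step and of the batched, queued collection described in the remainder of this scenario; the asset management system, its methods, and all data structures shown are hypothetical assumptions made for the example and do not represent any SACM data model or protocol.

   from dataclasses import dataclass, field

   @dataclass
   class Asset:
       asset_id: str
       platform: str
       policies: list          # security policies this asset must satisfy

   @dataclass
   class AssetManagementSystem:
       # Toy asset management system owned by the university (hypothetical).
       assets: dict = field(default_factory=dict)
       outbound_queue: list = field(default_factory=list)  # held until connectivity

       def register(self, asset):
           # Pre-departure registration of deployed equipment.
           self.assets[asset.asset_id] = asset

       def queue_collection_request(self, artifacts, target_ids=None):
           # Build one batched request covering the minimal artifact set and
           # hold it until the next satellite connectivity window.
           targets = target_ids or list(self.assets)
           self.outbound_queue.append({"targets": targets, "artifacts": artifacts})

       def connectivity_window(self):
           # Transmit everything queued when the satellite link becomes available.
           sent, self.outbound_queue = self.outbound_queue, []
           return sent

   if __name__ == "__main__":
       ams = AssetManagementSystem()
       ams.register(Asset("laptop-01", "linux", ["university", "government", "satellite"]))
       ams.register(Asset("sensor-07", "rtos", ["government"]))

       # Periodic trigger: request the minimal artifact set for all registered assets.
       ams.queue_collection_request(["os-version", "patch-level", "installed-software"])
       sent_requests = ams.connectivity_window()  # delivered at the next window

The point of the sketch is only that requests and results are accumulated locally and exchanged when the link is available, rather than interactively.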
839 On a periodic basis -- either after a maximum time delta or when the 840 security automation data store has received a threshold level of new 841 vulnerability definitions -- the university uses the information in 842 the asset management system to put together a collection request for 843 all of the deployed assets that encompasses the minimal set of 844 artifacts necessary to evaluate all three security policies as well 845 as vulnerability testing. 847 In the case of new critical vulnerabilities, this collection request 848 consists only of the artifacts necessary for those vulnerabilities, 849 and collection is initiated only for those assets that could 850 potentially have a new vulnerability. 852 Optionally, asset artifacts are cached in a local CMDB. When new 853 vulnerabilities are reported to the security automation data store, a 854 request to the live asset is made only if the artifacts in the CMDB 855 are incomplete and/or not current enough. 857 The collection request is queued for the next window of connectivity. 858 The deployed assets eventually receive the request, fulfill it, and 859 queue the results for the next return opportunity. 861 The collected artifacts eventually make it back to the university, 862 where the level of compliance and vulnerability exposure is calculated 863 and asset characteristics are compared to what is in the asset 864 management system for accuracy and completeness. 866 Like the Automated Checklist Verification usage scenario (see section 867 2.2.2), this usage scenario supports assessment based on checklists. 868 It differs from that scenario in how guidance, collected posture 869 attribute values, and evaluation results are exchanged, due to 870 bandwidth limitations and availability. This usage scenario employs 871 the same building blocks as Automated Checklist Verification (see 872 section 2.2.2). It differs slightly in how it uses the following 873 building blocks: 875 Endpoint Component Inventory: It is likely that the component 876 inventory will not change. If it does, this information will 877 need to be batched and transmitted during the next 878 communication window. 880 Collection Guidance Acquisition: Due to intermittent communication 881 windows and bandwidth constraints, changes to collection 882 guidance will need to be batched and transmitted during the next 883 communication window. Guidance will need to be cached locally 884 to avoid the need for remote communications. 886 Posture Attribute Value Collection: The specific posture attribute 887 values to be collected are identified remotely and batched for 888 collection during the next communication window. If a delay is 889 introduced for collection to complete, results will need to be 890 batched and transmitted in the same way. 892 COMMENT(DBH): Why "in the same way"? Maybe results could be 893 handled in a different way. 895 Posture Attribute Value Query: Previously collected posture 896 attribute values will be stored in a remote data store for use 897 at the university. 899 Evaluation Guidance Acquisition: Due to intermittent communication 900 windows and bandwidth constraints, changes to evaluation 901 guidance will need to be batched and transmitted during the next 902 communication window. Guidance will need to be cached locally 903 to avoid the need for remote communications.
905 Posture Attribute Evaluation: Due to the caching of posture 906 attribute values and evaluation guidance, evaluation may be 907 performed at both the university campus and the 908 satellite site. 910 This usage scenario highlights the need to support low-bandwidth, 911 intermittent, or high-latency links. 913 2.2.6. Identification and Retrieval of Guidance 915 In preparation for performing an assessment, an operator or 916 application will need to identify one or more security automation 917 data stores that contain the guidance entries necessary to perform 918 data collection and evaluation tasks. The location of a given 919 guidance entry will either be known a priori, or known security 920 automation data stores will need to be queried to retrieve applicable 921 guidance. 923 To query guidance, it will be necessary to define a set of search 924 criteria. These criteria will often utilize a logical combination of 925 publication metadata (e.g., publishing identity, create time, 926 modification time) and guidance data-specific criteria elements. 927 Once the criteria are defined, one or more security automation data 928 stores will need to be queried, generating a result set. Depending on 929 how the results are used, it may be desirable to return the matching 930 guidance directly, a snippet of the guidance matching the query, or a 931 resolvable location to retrieve the data at a later time. The 932 guidance matching the query will be restricted based on the authorized 933 level of access allowed to the requester. 935 If the location of guidance is identified in the query result set, 936 the guidance will be retrieved when needed using one or more data 937 retrieval requests. A variation on this approach would be to 938 maintain a local cache of previously retrieved data. In this case, 939 only guidance that is determined to be stale by some measure will be 940 retrieved from the remote data store. 942 Alternatively, guidance can be discovered by iterating over data 943 published with a given context within a security automation data 944 store. Specific guidance can be selected and retrieved as needed. 946 This usage scenario employs the following building blocks defined in 947 Section 2.1.1 above: 949 Data Query: Enables an operator or application to query one or more 950 security automation data stores for guidance using a set of 951 specified criteria. 953 Data Retrieval: If data locations are returned in the query result 954 set, then specific guidance entries can be retrieved and 955 possibly cached locally. 957 2.2.7. Guidance Change Detection 959 An operator or application may need to identify new, updated, or 960 deleted guidance in a security automation data store that they 961 are authorized to access. This may be achieved by querying or 962 iterating over guidance in a security automation data store, or 963 through a notification mechanism that provides alerts when changes are made to a 964 security automation data store. 966 Once guidance changes have been determined, data collection and 967 evaluation activities may be triggered. 969 This usage scenario employs the following building blocks defined in 970 Section 2.1.1 above: 972 Data Change Detection: Allows an operator or application to identify 973 guidance changes in a security automation data store that they 974 have been authorized to access. 976 Data Retrieval: If data locations are provided by the change 977 detection mechanism, then specific guidance entries can be 978 retrieved and possibly cached locally.
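To make the Data Query, Data Retrieval, and Data Change Detection building blocks used in the two preceding usage scenarios more concrete, the following Python sketch models a toy, in-memory security automation data store; the class, its methods (publish, query, retrieve, changes_since), and the sample guidance entries are hypothetical illustrations only and imply nothing about an eventual SACM data model or protocol.

   import time

   class GuidanceStore:
       # Toy in-memory security automation data store (hypothetical).

       def __init__(self):
           self.entries = {}  # guidance identifier -> entry with publication metadata

       def publish(self, entry_id, data, publisher):
           # Data Publication: store guidance along with publication metadata.
           self.entries[entry_id] = {"id": entry_id, "data": data,
                                     "publisher": publisher, "modified": time.time()}

       def query(self, **criteria):
           # Data Query: return publication metadata for entries matching all criteria.
           return [{"id": e["id"], "publisher": e["publisher"], "modified": e["modified"]}
                   for e in self.entries.values()
                   if all(e.get(k) == v for k, v in criteria.items())]

       def retrieve(self, entry_id):
           # Data Retrieval: fetch the full guidance entry by identifier.
           return self.entries[entry_id]["data"]

       def changes_since(self, timestamp):
           # Data Change Detection: identifiers published or updated after a given time.
           return [e["id"] for e in self.entries.values() if e["modified"] > timestamp]

   if __name__ == "__main__":
       store = GuidanceStore()
       store.publish("chk-001", {"settings": {"telnet": "disabled"}}, publisher="vendor-a")
       last_check = time.time()
       store.publish("chk-002", {"settings": {"ssh-v1": "disabled"}}, publisher="vendor-a")

       # Query by publication metadata, then retrieve the matching guidance (2.2.6).
       for match in store.query(publisher="vendor-a"):
           guidance = store.retrieve(match["id"])

       # Poll for guidance added or updated since the last check (2.2.7).
       new_or_updated = store.changes_since(last_check)  # expected: ["chk-002"]

A real deployment would, of course, involve authenticated network protocols, access control, and far richer publication metadata than this toy store provides.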
980 2.2.8. Others... 982 Additional usage scenarios will be identified as we work through 983 other domains. 985 3. IANA Considerations 987 This memo includes no request to IANA. 989 4. Security Considerations 991 This memo documents, for Informational purposes, use cases for 992 security automation. While it is about security, it does not affect 993 security. 995 5. Acknowledgements 997 The National Institute of Standards and Technology (NIST) and/or the 998 MITRE Corporation have developed specifications under the general 999 term "Security Automation" including languages, protocols, 1000 enumerations, and metrics. 1002 Adam Montville edited early versions of this draft. 1004 Kathleen Moriarty, and Stephen Hanna contributed text describing the 1005 scope of the document. 1007 Gunnar Engelbach, Steve Hanna, Chris Inacio, Kent Landfield, Lisa 1008 Lorenzin, Adam Montville, Kathleen Moriarty, Nancy Cam-Winget, and 1009 Aron Woland provided use cases text for various revisions of this 1010 draft. 1012 6. Change Log 1014 6.1. -05- to -06- 1016 Updated the "Introduction" section to better reflect the use case, 1017 building block, and usage scenario structure changes from previous 1018 revisions. 1020 Updated most uses of the terms "content" and "content repository" to 1021 use "guidance" and "security automation data store" respectively. 1023 In section 2.1.1, added a discussion of different data types and 1024 renamed "content" to "data" in the building block names. 1026 In section 2.1.2, separated out the building block concepts of 1027 "Endpoint Discovery" and "Endpoint Characterization" based on mailing 1028 list discussions. 1030 Addressed some open questions throughout the draft based on consensus 1031 from mailing list discussions and the two virtual interim meetings. 1033 Changed many section/sub-section names to better reflect their 1034 content. 1036 6.2. -04- to -05- 1038 Changes in this revision are focused on section 2 and the subsequent 1039 subsections: 1041 o Moved existing use cases to a subsection titled "Usage Scenarios". 1043 o Added a new subsection titled "Use Cases" to describe the common 1044 use cases and building blocks used to address the "Usage 1045 Scenarios". The new use cases are: 1047 * Define, Publish, Query and Retrieve Content 1049 * Endpoint Identification and Assessment Planning 1051 * Endpoint Posture Attribute Value Collection 1052 * Posture Evaluation 1054 * Mining the Database 1056 o Added a listing of building blocks used for all usage scenarios. 1058 o Combined the following usage scenarios into "Automated Checklist 1059 Verification": "Organizational Software Policy Compliance", 1060 "Search for Signs of Infection", "Vulnerable Endpoint 1061 Identification", "Compromised Endpoint Identification", 1062 "Suspicious Endpoint Behavior", "Traditional endpoint assessment 1063 with stored results", "NAC/NAP connection with no stored results 1064 using an endpoint evaluator", and "NAC/NAP connection with no 1065 stored results using a third-party evaluator". 1067 o Created new usage scenario "Identification and Retrieval of 1068 Repository Content" by combining the following usage scenarios: 1069 "Repository Interaction - A Full Assessment" and "Repository 1070 Interaction - Filtered Delta Assessment" 1072 o Renamed "Register with repository for immediate notification of 1073 new security vulnerability content that match a selection filter" 1074 to "Content Change Detection" and generalized the description to 1075 be neutral to implementation approaches. 
1077 o Removed out-of-scope usage scenarios: "Remediation and Mitigation" 1078 and "Direct Human Retrieval of Ancillary Materials" 1080 Updated acknowledgements to recognize those that helped with editing 1081 the use case text. 1083 6.3. -03- to -04- 1085 Added four new use cases regarding content repository. 1087 6.4. -02- to -03- 1089 Expanded the workflow description based on ML input. 1091 Changed the ambiguous "assess" to better separate data collection 1092 from evaluation. 1094 Added use case for Search for Signs of Infection. 1096 Added use case for Remediation and Mitigation. 1098 Added use case for Endpoint Information Analysis and Reporting. 1100 Added use case for Asynchronous Compliance/Vulnerability Assessment 1101 at Ice Station Zebra. 1103 Added use case for Traditional endpoint assessment with stored 1104 results. 1106 Added use case for NAC/NAP connection with no stored results using an 1107 endpoint evaluator. 1109 Added use case for NAC/NAP connection with no stored results using a 1110 third-party evaluator. 1112 Added use case for Compromised Endpoint Identification. 1114 Added use case for Suspicious Endpoint Behavior. 1116 Added use case for Vulnerable Endpoint Identification. 1118 Updated Acknowledgements 1120 6.5. -01- to -02- 1122 Changed title 1124 removed section 4, expecting it will be moved into the requirements 1125 document. 1127 removed the list of proposed capabilities from section 3.1 1129 Added empty sections for Search for Signs of Infection, Remediation 1130 and Mitigation, and Endpoint Information Analysis and Reporting. 1132 Removed Requirements Language section and rfc2119 reference. 1134 Removed unused references (which ended up being all references). 1136 6.6. -00- to -01- 1138 o Work on this revision has been focused on document content 1139 relating primarily to use of asset management data and functions. 1141 o Made significant updates to section 3 including: 1143 * Reworked introductory text. 1145 * Replaced the single example with multiple use cases that focus 1146 on more discrete uses of asset management data to support 1147 hardware and software inventory, and configuration management 1148 use cases. 1150 * For one of the use cases, added mapping to functional 1151 capabilities used. If popular, this will be added to the other 1152 use cases as well. 1154 * Additional use cases will be added in the next revision 1155 capturing additional discussion from the list. 1157 o Made significant updates to section 4 including: 1159 * Renamed the section heading from "Use Cases" to "Functional 1160 Capabilities" since use cases are covered in section 3. This 1161 section now extrapolates specific functions that are needed to 1162 support the use cases. 1164 * Started work to flatten the section, moving select subsections 1165 up from under asset management. 1167 * Removed the subsections for: Asset Discovery, Endpoint 1168 Components and Asset Composition, Asset Resources, and Asset 1169 Life Cycle. 1171 * Renamed the subsection "Asset Representation Reconciliation" to 1172 "Deconfliction of Asset Identities". 1174 * Expanded the subsections for: Asset Identification, Asset 1175 Characterization, and Deconfliction of Asset Identities. 1177 * Added a new subsection for Asset Targeting. 1179 * Moved remaining sections to "Other Unedited Content" for future 1180 updating. 1182 6.7. draft-waltermire-sacm-use-cases-05 to draft-ietf-sacm-use-cases-00 1184 o Transitioned from individual I/D to WG I/D based on WG consensus 1185 call. 
1187 o Fixed a number of spelling errors. Thank you Erik! 1189 o Added keywords to the front matter. 1191 o Removed the terminology section from the draft. Terms have been 1192 moved to: draft-dbh-sacm-terminology-00 1194 o Removed requirements to be moved into a new I/D. 1196 o Extracted the functionality from the examples and made the 1197 examples less prominent. 1199 o Renamed "Functional Capabilities and Requirements" section to "Use 1200 Cases". 1202 * Reorganized the "Asset Management" sub-section. Added new text 1203 throughout. 1205 + Renamed a few sub-section headings. 1207 + Added text to the "Asset Characterization" sub-section. 1209 o Renamed "Security Configuration Management" to "Endpoint 1210 Configuration Management". Not sure if the "security" distinction 1211 is important. 1213 * Added new sections, partially integrated existing content. 1215 * Additional text is needed in all of the sub-sections. 1217 o Changed "Security Change Management" to "Endpoint Posture Change 1218 Management". Added new skeletal outline sections for future 1219 updates. 1221 6.8. waltermire -04- to -05- 1223 o Are we including user activities and behavior in the scope of this 1224 work? That seems to be layer 8 stuff, appropriate to an IDS/IPS 1225 application, not Internet stuff. 1227 o I removed the references to what the WG will do because this 1228 belongs in the charter, not the (potentially long-lived) use cases 1229 document. I removed mention of charter objectives because the 1230 charter may go through multiple iterations over time; there is a 1231 website for hosting the charter; this document is not the correct 1232 place for that discussion. 1234 o I moved the discussion of NIST specifications to the 1235 acknowledgements section. 1237 o Removed the portion of the introduction that describes the 1238 chapters; we have a table of concepts, and the existing text 1239 seemed redundant. 1241 o Removed marketing claims, to focus on technical concepts and 1242 technical analysis, that would enable subsequent engineering 1243 effort. 1245 o Removed (commented out in XML) UC2 and UC3, and eliminated some 1246 text that referred to these use cases. 1248 o Modified IANA and Security Consideration sections. 1250 o Moved Terms to the front, so we can use them in the subsequent 1251 text. 1253 o Removed the "Key Concepts" section, since the concepts of ORM and 1254 IRM were not otherwise mentioned in the document. This would seem 1255 more appropriate to the arch doc rather than use cases. 1257 o Removed role=editor from David Waltermire's info, since there are 1258 three editors on the document. The editor is most important when 1259 one person writes the document that represents the work of 1260 multiple people. When there are three editors, this role marking 1261 isn't necessary. 1263 o Modified text to describe that this was specific to enterprises, 1264 and that it was expected to overlap with service provider use 1265 cases, and described the context of this scoped work within a 1266 larger context of policy enforcement, and verification. 1268 o The document had asset management, but the charter mentioned 1269 asset, change, configuration, and vulnerability management, so I 1270 added sections for each of those categories. 1272 o Added text to Introduction explaining goal of the document. 1274 o Added sections on various example use cases for asset management, 1275 config management, change management, and vulnerability 1276 management. 1278 7. Informative References 1280 [RFC3444] Pras, A. 
and J. Schoenwaelder, "On the Difference between 1281 Information Models and Data Models", RFC 3444, January 1282 2003. 1284 Authors' Addresses 1286 David Waltermire 1287 National Institute of Standards and Technology 1288 100 Bureau Drive 1289 Gaithersburg, Maryland 20877 1290 USA 1292 Email: david.waltermire@nist.gov 1293 David Harrington 1294 Effective Software 1295 50 Harding Rd 1296 Portsmouth, NH 03801 1297 USA 1299 Email: ietfdbh@comcast.net