2 Security Automation and Continuous Monitoring WG D. Waltermire 3 Internet-Draft NIST 4 Intended status: Informational D. Harrington 5 Expires: August 30, 2015 Effective Software 6 February 26, 2015 8 Endpoint Security Posture Assessment - Enterprise Use Cases 9 draft-ietf-sacm-use-cases-08 11 Abstract 13 This memo documents a sampling of use cases for securely aggregating 14 configuration and operational data and evaluating that data to 15 determine an organization's security posture. From these operational 16 use cases, we can derive common functional capabilities and 17 requirements to guide development of vendor-neutral, interoperable 18 standards for aggregating and evaluating data relevant to security 19 posture. 21 Status of This Memo 23 This Internet-Draft is submitted in full conformance with the 24 provisions of BCP 78 and BCP 79. 26 Internet-Drafts are working documents of the Internet Engineering 27 Task Force (IETF). Note that other groups may also distribute 28 working documents as Internet-Drafts. The list of current Internet- 29 Drafts is at http://datatracker.ietf.org/drafts/current/. 31 Internet-Drafts are draft documents valid for a maximum of six months 32 and may be updated, replaced, or obsoleted by other documents at any 33 time. It is inappropriate to use Internet-Drafts as reference 34 material or to cite them other than as "work in progress." 36 This Internet-Draft will expire on August 30, 2015. 38 Copyright Notice 40 Copyright (c) 2015 IETF Trust and the persons identified as the 41 document authors. All rights reserved. 43 This document is subject to BCP 78 and the IETF Trust's Legal 44 Provisions Relating to IETF Documents 45 (http://trustee.ietf.org/license-info) in effect on the date of 46 publication of this document. Please review these documents 47 carefully, as they describe your rights and restrictions with respect 48 to this document. Code Components extracted from this document must 49 include Simplified BSD License text as described in Section 4.e of 50 the Trust Legal Provisions and are provided without warranty as 51 described in the Simplified BSD License. 53 Table of Contents 55 1. Introduction . . . . . . . . . . . . . . . . . . . . . . 
. 2 56 2. Endpoint Posture Assessment . . . . . . . . . . . . . . . . . 4 57 2.1. Use Cases . . . . . . . . . . . . . . . . . . . . . . . . 5 58 2.1.1. Define, Publish, Query and Retrieve Security 59 Automation Data . . . . . . . . . . . . . . . . . . . 5 60 2.1.2. Endpoint Identification and Assessment Planning . . . 9 61 2.1.3. Endpoint Posture Attribute Value Collection . . . . . 10 62 2.1.4. Posture Attribute Evaluation . . . . . . . . . . . . 11 63 2.2. Usage Scenarios . . . . . . . . . . . . . . . . . . . . . 12 64 2.2.1. Definition and Publication of Automatable 65 Configuration Checklists . . . . . . . . . . . . . . 12 66 2.2.2. Automated Checklist Verification . . . . . . . . . . 13 67 2.2.3. Detection of Posture Deviations . . . . . . . . . . . 16 68 2.2.4. Endpoint Information Analysis and Reporting . . . . . 17 69 2.2.5. Asynchronous Compliance/Vulnerability Assessment at 70 Ice Station Zebra . . . . . . . . . . . . . . . . . . 18 71 2.2.6. Identification and Retrieval of Guidance . . . . . . 20 72 2.2.7. Guidance Change Detection . . . . . . . . . . . . . . 21 73 3. IANA Considerations . . . . . . . . . . . . . . . . . . . . . 21 74 4. Security Considerations . . . . . . . . . . . . . . . . . . . 21 75 5. Acknowledgements . . . . . . . . . . . . . . . . . . . . . . 21 76 6. Change Log . . . . . . . . . . . . . . . . . . . . . . . . . 22 77 6.1. -07- to -08- . . . . . . . . . . . . . . . . . . . . . . 22 78 6.2. -06- to -07- . . . . . . . . . . . . . . . . . . . . . . 22 79 6.3. -05- to -06- . . . . . . . . . . . . . . . . . . . . . . 22 80 6.4. -04- to -05- . . . . . . . . . . . . . . . . . . . . . . 23 81 6.5. -03- to -04- . . . . . . . . . . . . . . . . . . . . . . 24 82 6.6. -02- to -03- . . . . . . . . . . . . . . . . . . . . . . 24 83 6.7. -01- to -02- . . . . . . . . . . . . . . . . . . . . . . 24 84 6.8. -00- to -01- . . . . . . . . . . . . . . . . . . . . . . 25 85 6.9. draft-waltermire-sacm-use-cases-05 to draft-ietf-sacm- 86 use-cases-00 . . . . . . . . . . . . . . . . . . . . . . 26 87 6.10. waltermire -04- to -05- . . . . . . . . . . . . . . . . . 27 88 7. Informative References . . . . . . . . . . . . . . . . . . . 28 89 Authors' Addresses . . . . . . . . . . . . . . . . . . . . . . . 28 91 1. Introduction 93 This document describes the core set of use cases for endpoint 94 posture assessment for enterprises. It provides a discussion of 95 these use cases and associated building block capabilities. The 96 described use cases support: 98 o securely collecting and aggregating configuration and operational 99 data, and 101 o evaluating that data to determine the security posture of 102 individual endpoints. 104 Additionally, this document describes a set of usage scenarios that 105 provide examples for using the use cases and associated building 106 blocks to address a variety of operational functions. 108 These operational use cases and related usage scenarios cross many IT 109 security domains. The use cases enable the derivation of common: 111 o concepts that are expressed as building blocks in this document, 113 o characteristics to inform development of a requirements document, 115 o information concepts to inform development of an information model 116 document, and 118 o functional capabilities to inform development of an architecture 119 document. 121 Together these ideas will be used to guide development of vendor- 122 neutral, interoperable standards for collecting, aggregating, and 123 evaluating data relevant to security posture. 
125 Using this standard data, tools can analyze the state of endpoints, 126 user activities and behavior, and evaluate the security posture of 127 an organization. Common expression of information should enable 128 interoperability between tools (whether customized, commercial, or 129 freely available), and the ability to automate portions of security 130 processes to gain efficiency, react to new threats in a timely 131 manner, and free up security personnel to work on more advanced 132 problems. 134 The goal is to enable organizations to make informed decisions that 135 support organizational objectives, to enforce policies for hardening 136 systems, to prevent network misuse, to quantify business risk, and to 137 collaborate with partners to identify and mitigate threats. 139 It is expected that use cases for enterprises and for service 140 providers will largely overlap, but there are additional 141 complications for service providers, especially in handling 142 information that crosses administrative domains. 144 The output of endpoint posture assessment is expected to feed into 145 additional processes, such as policy-based enforcement of acceptable 146 state, verification and monitoring of security controls, and 147 compliance with regulatory requirements. 149 2. Endpoint Posture Assessment 151 Endpoint posture assessment involves orchestrating and performing 152 data collection and evaluating the posture of a given endpoint. 153 Typically, endpoint posture information is gathered and then 154 published to appropriate data repositories to make collected 155 information available for further analysis supporting organizational 156 security processes. 158 Endpoint posture assessment typically includes: 160 o Collecting the attributes of a given endpoint; 162 o Making the attributes available for evaluation and action; and 164 o Verifying that the endpoint's posture is in compliance with 165 enterprise standards and policy. 167 As part of these activities it is often necessary to identify and 168 acquire any supporting security automation data that is needed to 169 drive and feed data collection and evaluation processes. 171 The following is a typical workflow scenario for assessing endpoint 172 posture: 174 1. Some type of trigger initiates the workflow. For example, an 175 operator or an application might trigger the process with a 176 request, or the endpoint might trigger the process using an 177 event-driven notification. 179 2. An operator/application selects one or more target endpoints to 180 be assessed. 182 3. An operator/application selects which policies are applicable to 183 the targets. 185 4. For each target: 187 A. The application determines which (sets of) posture attributes 188 need to be collected for evaluation. Implementations should 189 be able to support (possibly mixed) sets of standardized and 190 proprietary attributes. 192 B. The application might retrieve previously collected 193 information from a cache or data store, such as a data store 194 populated by an asset management system. 196 C. The application might establish communication with the 197 target, mutually authenticate identities and determine 198 authorizations, and collect posture attributes from the target. 200 D. The application might establish communication with one or 201 more intermediary/agents, mutually authenticate their 202 identities and determine authorizations, and collect posture 203 attributes about the target from the intermediary/agents. 204 Such agents might be local or external. 206 E. 
The application communicates target identity and (sets of) 207 collected attributes to an evaluator, possibly an external 208 process or external system. 210 F. The evaluator compares the collected posture attributes with 211 expected values as expressed in policies. 213 G. The evaluator reports the evaluation result for the requested 214 assessment, in a standardized or proprietary format, such as 215 a report, a log entry, a database entry, or a notification. 217 2.1. Use Cases 219 The following subsections detail specific use cases for assessment 220 planning, data collection, analysis, and related operations 221 pertaining to the publication and use of supporting data. Each use 222 case is defined by a short summary containing a simple problem 223 statement, followed by a discussion of related concepts, and a 224 listing of associated building blocks that represent the 225 capabilities needed to support the use case. These use cases and 226 building blocks identify separate units of functionality that may be 227 supported by different components of an architectural model. 229 2.1.1. Define, Publish, Query and Retrieve Security Automation Data 231 This use case describes the need for security automation data to be 232 defined and published to one or more data stores, as well as queried 233 and retrieved from these data stores for the explicit use of posture 234 collection and evaluation. 236 Security automation data is a general concept that refers to any data 237 expression that may be generated and/or used as part of the process 238 of collecting and evaluating endpoint posture. Different types of 239 security automation data will generally fall into one of three 240 categories: 242 Guidance: Instructions and related metadata that guide the attribute 243 collection and evaluation processes. The purpose of this data 244 is to allow implementations to be data-driven, enabling their 245 behavior to be customized without requiring changes to deployed 246 software. 248 This type of data tends to change in units of months and days. 249 In cases where assessments are made more dynamic, it may be 250 necessary to handle changes on the order of hours or minutes. 251 This data will typically be provided by large organizations, 252 product vendors, and some third parties. Thus, it will tend to 253 be shared across large enterprises and customer communities. 254 In some cases access may be restricted to specific 255 authenticated users. In other cases, the data may be provided 256 broadly with little to no access control. 258 This includes: 260 * Listings of attribute identifiers for which values may be 261 collected and evaluated. 263 * Lists of attributes that are to be collected along with 264 metadata that includes: when to collect a set of attributes 265 based on a defined interval or event, the duration of 266 collection, and how to go about collecting a set of 267 attributes. 269 * Guidance that specifies how old collected data may be and 270 still be used for evaluation. 272 * Policies that define how to target and perform the 273 evaluation of a set of attributes for different kinds or 274 groups of endpoints and the assets they are composed of. In 275 some cases it may be desirable to maintain hierarchies of 276 policies as well. 278 * References to human-oriented data that provide technical, 279 organizational, and/or policy context. 
This might include 280 references to: best practices documents, legal guidance and 281 legislation, and instructional materials related to the 282 automation data in question. 284 Attribute Data: Data collected through automated and manual 285 mechanisms describing organizational and posture details 286 pertaining to specific endpoints and the assets that they are 287 composed of (e.g., hardware, software, accounts). The purpose 288 of this type of data is to characterize an endpoint (e.g., 289 endpoint type, organizationally expected function/role) and to 290 provide actual and expected state data pertaining to one or 291 more endpoints. This data is used to determine what posture 292 attributes to collect from which endpoints and to feed one or 293 more evaluations. 295 This type of data tends to change in units of days, minutes, or 296 seconds, with posture attribute values typically changing more 297 frequently than endpoint characterizations. This data tends to 298 be organization- and endpoint-specific, with specific 299 operational groups of endpoints tending to exhibit similar 300 attribute profiles. This data will generally not be shared 301 outside an organizational boundary and will generally require 302 authentication with specific access controls. 304 This includes: 306 * Endpoint characterization data that describes the endpoint 307 type, organizationally expected function/role, etc. 309 * Collected endpoint posture attribute values and related 310 context including: time of collection, tools used for 311 collection, etc. 313 * Organizationally defined expected posture attribute values 314 targeted to specific evaluation guidance and endpoint 315 characteristics. This allows a common set of guidance to be 316 parameterized for use with different groups of endpoints. 318 Processing Artifacts: Data that is generated by and is specific to 319 an individual assessment process. This data may be used as 320 part of the interactions between architectural components to 321 drive and coordinate collection and evaluation activities. Its 322 lifespan will be bounded by the lifespan of the assessment. It 323 may also be exchanged and stored to provide historic context 324 around an assessment activity so that individual assessments 325 can be grouped, evaluated, and reported in an enterprise 326 context. 328 This includes: 330 * The identified set of endpoints for which an assessment 331 should be performed. 333 * The identified set of posture attributes that need to be 334 collected from specific endpoints to perform an evaluation. 336 * The resulting data generated by an evaluation process 337 including the context of what was assessed, what it was 338 assessed against, what collected data was used, when it was 339 collected, and when the evaluation was performed. 341 The information model for security automation data must support a 342 variety of different data types as described above, along with the 343 associated metadata that is needed to support publication, query, and 344 retrieval operations. It is expected that multiple data models will 345 be used to express specific data types, requiring specialized or 346 extensible security automation data repositories. The different 347 temporal characteristics, access patterns, and access control 348 dimensions of each data type may also require different protocols and 349 data models to be supported, furthering the potential requirement for 350 specialized data repositories. 
See [RFC3444] for a description and 351 discussion of the distinction between an information model and a 352 data model. It is likely that additional kinds of data will be identified through 353 the process of defining requirements and an architectural model. 354 Implementations supporting this building block will need to be 355 extensible to accommodate the addition of new types of data, whether 356 proprietary or (preferably) expressed in a standard format. 358 The building blocks of this use case are: 360 Data Definition: Security automation data will guide and inform 361 collection and evaluation processes. This data may be designed 362 by a variety of roles - application implementers may build 363 security automation data into their applications; 364 administrators may define guidance based on organizational 365 policies; operators may define guidance and attribute data as 366 needed for evaluation at runtime; and so on. Data producers 367 may choose to reuse data from existing stores of security 368 automation data and may create new data. Data producers may 369 develop data based on available standardized or proprietary 370 data models, such as those used for network management and/or 371 host management. 373 Data Publication: The capability to enable data producers to publish 374 data to a security automation data store for further use. 375 Published data may be made publicly available, or access may be 376 based on an authorization decision using authenticated 377 credentials. As a result, the visibility of specific security 378 automation data to an operator or application may be public, 379 enterprise-scoped, private, or controlled within any other 380 scope. 382 Data Query: An operator or application should be able to query a 383 security automation data store using a set of specified 384 criteria. The result of the query will be a listing of entries 385 matching the query. The query result listing may contain publication 386 metadata (e.g., create date, modified date, publisher, etc.) 387 and/or the full data, a summary, a snippet, or the location to 388 retrieve the data. 390 Data Retrieval: A user, operator, or application acquires one or 391 more specific security automation data entries. The location 392 of the data may be known a priori, or may be determined based 393 on decisions made using information from a previous query. 395 Data Change Detection: An operator or application needs to know when 396 security automation data they are interested in has been published 397 to, updated in, or deleted from a security automation data 398 store that they are authorized to access. 400 These building blocks are used to enable acquisition of various 401 instances of security automation data based on specific data models 402 that are used to drive assessment planning (see section 2.1.2), 403 posture attribute value collection (see section 2.1.3), and posture 404 evaluation (see section 2.1.4). 406 2.1.2. Endpoint Identification and Assessment Planning 408 This use case describes the process of discovering endpoints, 409 understanding their composition, identifying the desired state to 410 assess against, and calculating what posture attributes to collect to 411 enable evaluation. This process may be a set of manual, automated, 412 or hybrid steps that are performed for each assessment. 414 The building blocks of this use case are: 416 Endpoint Discovery: To determine the current or historic presence of 417 endpoints in the environment that are available for posture 418 assessment. 
Endpoints are identified in support of discovery 419 using information previously obtained or by using other 420 collection mechanisms to gather identification and 421 characterization data. Previously obtained data may originate 422 from sources such as network authentication exchanges. 424 Endpoint Characterization: The act of acquiring, through automated 425 collection or manual input, and organizing attributes 426 associated with an endpoint (e.g., type, organizationally 427 expected function/role, hardware/software versions). 429 Identify Endpoint Targets: Determine the candidate endpoint 430 target(s) against which to perform the assessment. Depending 431 on the assessment trigger, a single endpoint or multiple 432 endpoints may be targeted based on characterized endpoint 433 attributes. Guidance describing the assessment to be performed 434 may contain instructions or references used to determine the 435 applicable assessment targets. In this case the Data Query 436 and/or Data Retrieval building blocks (see section 2.1.1) may 437 be used to acquire this data. 439 Endpoint Component Inventory: To determine what applicable desired 440 states should be assessed, it is first necessary to acquire the 441 inventory of software, hardware, and accounts associated with 442 the targeted endpoint(s). If the assessment of the endpoint is 443 not dependent on these details, then this capability is not 444 required for use in performing the assessment. This process 445 can be treated as a collection use case for specific posture 446 attributes. In this case the building blocks for 447 Endpoint Posture Attribute Value Collection (see section 2.1.3) 448 can be used. 450 Posture Attribute Identification: Once the endpoint targets and 451 their associated asset inventory are known, it is then necessary 452 to calculate what posture attributes are required to be 453 collected to perform the desired evaluation. When available, 454 existing posture data is queried for suitability using the Data 455 Query building block (see section 2.1.1). Such posture data is 456 suitable if it is complete and current enough for use in the 457 evaluation. Any unsuitable posture data is identified for 458 collection. 460 If this is driven by guidance, then the Data Query and/or Data 461 Retrieval building blocks (see section 2.1.1) may be used to 462 acquire this data. 464 At this point the set of posture attribute values to use for 465 evaluation is known, and the values can be collected if necessary (see 466 section 2.1.3). 468 2.1.3. Endpoint Posture Attribute Value Collection 470 This use case describes the process of collecting a set of posture 471 attribute values related to one or more endpoints. This use case can 472 be initiated by a variety of triggers including: 474 1. A posture change or significant event on the endpoint. 476 2. A network event (e.g., endpoint connects to a network/VPN, 477 specific netflow is detected). 479 3. A scheduled or ad hoc collection task. 481 The building blocks of this use case are: 483 Collection Guidance Acquisition: If guidance is required to drive 484 the collection of posture attribute values, this capability is 485 used to acquire this data from one or more security automation 486 data stores. Depending on the trigger, the specific guidance 487 to acquire might be known. If not, it may be necessary to 488 determine the guidance to use based on the component inventory 489 or other assessment criteria. 
The Data Query and/or Data 490 Retrieval building blocks (see section 2.1.1) may be used to 491 acquire this guidance. 493 Posture Attribute Value Collection: The accumulation of posture 494 attribute values. This may be based on collection guidance 495 that is associated with the posture attributes. 497 Once the posture attribute values are collected, they may be 498 persisted for later use or they may be immediately used for posture 499 evaluation. 501 2.1.4. Posture Attribute Evaluation 503 This use case represents the action of analyzing collected posture 504 attribute values as part of an assessment. The primary focus of this 505 use case is to support evaluation of actual endpoint state against 506 the expected state selected for the assessment. 508 This use case can be initiated by a variety of triggers including: 510 1. A posture change or significant event on the endpoint. 512 2. A network event (e.g., endpoint connects to a network/VPN, 513 specific netflow is detected). 515 3. A scheduled or ad hoc evaluation task. 517 The building blocks of this use case are: 519 Collected Posture Change Detection: An operator or application has a 520 mechanism to detect the availability of new, or changes to 521 existing, posture attribute values. The timeliness of 522 detection may vary from immediate to on-demand. Having the 523 ability to filter what changes are detected will allow the 524 operator to focus on the changes that are relevant to their use 525 and will enable evaluation to occur dynamically based on 526 detected changes. 528 Posture Attribute Value Query: If previously collected posture 529 attribute values are needed, the appropriate data stores are 530 queried to retrieve them using the Data Query building block 531 (see section 2.1.1). If all posture attribute values are 532 provided directly for evaluation, then this capability may not 533 be needed. 535 Evaluation Guidance Acquisition: If guidance is required to drive 536 the evaluation of posture attribute values, this capability is 537 used to acquire this data from one or more security automation 538 data stores. Depending on the trigger, the specific guidance 539 to acquire might be known. If not, it may be necessary to 540 determine the guidance to use based on the component inventory 541 or other assessment criteria. The Data Query and/or Data 542 Retrieval building blocks (see section 2.1.1) may be used to 543 acquire this guidance. 545 Posture Attribute Evaluation: The comparison of posture attribute 546 values against their expected values as expressed in the 547 specified guidance. The result of this comparison is output as 548 a set of posture evaluation results. Such results include 549 metadata required to provide a level of assurance with respect 550 to the posture attribute data and, therefore, evaluation 551 results. Examples of such metadata include provenance and/or 552 availability data. 554 While the primary focus of this use case is enabling the 555 comparison of expected versus actual state, the same building blocks can 556 support other analysis techniques that are applied to collected 557 posture attribute data (e.g., trending, historic analysis). 559 Completion of this process represents a complete assessment cycle as 560 defined in Section 2. 562 2.2. Usage Scenarios 564 In this section, we describe a number of usage scenarios that utilize 565 aspects of endpoint posture assessment. These are examples of common 566 problems that can be solved with the building blocks defined above. 
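Before turning to the individual scenarios, the following non-normative sketch illustrates how an implementation might compose the building blocks from Section 2.1 into a single assessment cycle. The sketch is written in Python purely for illustration; every interface shown (data_store, collector, evaluator, required_attributes, expected_values, and the 24-hour freshness window) is a hypothetical assumption of this example and is not defined by SACM or by this document.

   # Non-normative sketch: one assessment cycle composed from the
   # Section 2.1 building blocks.  All classes, methods, and parameters
   # are hypothetical illustrations, not SACM-defined interfaces.

   from datetime import datetime, timedelta

   MAX_AGE = timedelta(hours=24)  # assumed freshness window for cached values

   def assess(targets, data_store, collector, evaluator):
       results = []
       for endpoint in targets:
           # Data Query / Data Retrieval: obtain applicable guidance.
           guidance = data_store.query(endpoint_type=endpoint.type,
                                       role=endpoint.role)
           # Posture Attribute Identification: attributes needed for evaluation.
           needed = guidance.required_attributes()
           # Posture Attribute Value Query: reuse suitably fresh stored values.
           cached = data_store.posture_values(endpoint.id, needed)
           fresh = {name: value for name, value in cached.items()
                    if datetime.utcnow() - value.collected_at < MAX_AGE}
           # Posture Attribute Value Collection: collect what is missing.
           missing = [name for name in needed if name not in fresh]
           collected = collector.collect(endpoint, missing)
           # Posture Attribute Evaluation: compare actual and expected state.
           results.append(evaluator.evaluate(endpoint,
                                             {**fresh, **collected},
                                             guidance.expected_values()))
       return results

An actual implementation would substitute standardized data models and transport protocols for these hypothetical interfaces and would include the mutual authentication and authorization steps described in the workflow in Section 2.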
568 2.2.1. Definition and Publication of Automatable Configuration 569 Checklists 571 A vendor manufactures a number of specialized endpoint devices. They 572 also develop and maintain an operating system for these devices that 573 enables end-user organizations to configure a number of security and 574 operational settings. As part of their customer support activities, 575 they publish a number of secure configuration guides that provide 576 minimum security guidelines for configuring their devices. 578 Each guide they produce applies to a specific model of device and 579 version of the operating system and provides a number of specialized 580 configurations depending on the device's intended function and what 581 add-on hardware modules and software licenses are installed on the 582 device. To enable their customers to evaluate the security posture 583 of their devices to ensure that all appropriate minimal security 584 settings are enabled, they publish automatable configuration 585 checklists using a popular data format that defines what settings to 586 collect using a network management protocol and appropriate values 587 for each setting. They publish these checklists to a public security 588 automation data store that customers can query to retrieve applicable 589 checklists for their deployed specialized endpoint devices. 591 Automatable configuration checklists could also come from sources 592 other than a device vendor, such as industry groups or regulatory 593 authorities, or enterprises could develop their own checklists. 595 This usage scenario employs the following building blocks defined in 596 Section 2.1.1 above: 598 Data Definition: To allow guidance to be defined using standardized 599 or proprietary data models that will drive Collection and 600 Evaluation. 602 Data Publication: Providing a mechanism to publish created guidance 603 to a security automation data store. 605 Data Query: To locate and select existing guidance that may be 606 reused. 608 Data Retrieval: To retrieve specific guidance from a security 609 automation data store for editing. 611 While each building block can be used in a manual fashion by a human 612 operator, it is also likely that these capabilities will be 613 implemented together in some form of a guidance editor or generator 614 application. 616 2.2.2. Automated Checklist Verification 618 A financial services company operates a heterogeneous IT environment. 619 In support of their risk management program, they utilize vendor- 620 provided automatable security configuration checklists for each 621 operating system and application used within their IT environment. 623 Multiple checklists are used from different vendors to ensure 624 adequate coverage of all IT assets. 626 To identify what checklists are needed, they use automation to gather 627 an inventory of the software versions utilized by all IT assets in 628 the enterprise. This data gathering will involve querying existing 629 data stores of previously collected endpoint software inventory 630 posture data and actively collecting data from reachable endpoints as 631 needed utilizing network and systems management protocols. 632 Previously collected data may be provided by periodic data 633 collection, network connection-driven data collection, or ongoing 634 event-driven monitoring of endpoint posture changes. 636 Appropriate checklists are queried, located, and downloaded from the 637 relevant guidance data stores. 
The specific data stores queried and 638 the specifics of each query may be driven by data including: 640 o collected hardware and software inventory data, and 642 o associated asset characterization data that may indicate the 643 organizationally defined functions of each endpoint. 645 Checklists may be sourced from guidance data stores maintained by an 646 application or OS vendor, an industry group, a regulatory authority, 647 or directly by the enterprise. 649 The retrieved guidance is cached locally to reduce the need to 650 retrieve the data multiple times. 652 Driven by the setting data provided in the checklist, a combination 653 of existing configuration data stores and data collection methods are 654 used to gather the appropriate posture attributes from (or pertaining 655 to) each endpoint. Specific posture attribute values are gathered 656 based on the defined enterprise function and software inventory of 657 each endpoint. The collection mechanisms used to collect software 658 inventory posture will be used again for this purpose. Once the data 659 is gathered, the actual state is evaluated against the expected state 660 criteria defined in each applicable checklist. 662 A checklist can be assessed as a whole, or a specific subset of the 663 checklist can be assessed, resulting in partial data collection and 664 evaluation. 666 The results of checklist evaluation are provided to appropriate 667 operators and applications to drive additional business logic. 668 Specific applications for checklist evaluation results are out of 669 scope for current SACM efforts. Irrespective of specific 670 applications, the availability, timeliness, and liveness of results 671 are often of general concern. Network latency and available bandwidth 672 often create operational constraints that require trade-offs between 673 these concerns and need to be considered. 675 Uses of checklists and associated evaluation results may include, but 676 are not limited to: 678 o Detecting endpoint posture deviations as part of a change 679 management program to: 681 * identify missing required patches, 683 * detect unauthorized changes to hardware and software inventory, and 685 * detect unauthorized changes to configuration items. 687 o Determining compliance with organizational policies governing 688 endpoint posture. 690 o Informing configuration management, patch management, and 691 vulnerability mitigation and remediation decisions. 693 o Searching for current and historic indicators of compromise. 695 o Detecting current and historic infection by malware and 696 determining the scope of infection within an enterprise. 698 o Detecting performance, attack, and vulnerable conditions that 699 warrant additional network diagnostics, monitoring, and analysis. 701 o Informing network access control decision making for wired, 702 wireless, or VPN connections. 704 This usage scenario employs the following building blocks defined in 705 Section 2.1.1 above: 707 Endpoint Discovery: The purpose of discovery is to determine the 708 type of endpoint to be posture assessed. 710 Identify Endpoint Targets: To identify what potential endpoint 711 targets the checklist should apply to based on organizational 712 policies. 714 Endpoint Component Inventory: Collecting and consuming the software 715 and hardware inventory for the target endpoints. 
717 Posture Attribute Identification: To determine what data needs to be 718 collected to support evaluation, the checklist is evaluated 719 against the component inventory and other endpoint metadata to 720 determine the set of posture attribute values that are needed. 722 Collection Guidance Acquisition: Based on the identified posture 723 attributes, the application will query appropriate security 724 automation data stores to find the "applicable" collection 725 guidance for each endpoint in question. 727 Posture Attribute Value Collection: For each endpoint, the values 728 for the required posture attributes are collected. 730 Posture Attribute Value Query: If previously collected posture 731 attribute values are used, they are queried from the 732 appropriate data stores for the target endpoint(s). 734 Evaluation Guidance Acquisition: Any guidance that is needed to 735 support evaluation is queried and retrieved. 737 Posture Attribute Evaluation: The resulting posture attribute values 738 from previous Collection processes are evaluated using the 739 evaluation guidance to provide a set of posture results. 741 2.2.3. Detection of Posture Deviations 743 Example Corporation has established secure configuration baselines 744 for each different type of endpoint within their enterprise 745 including: network infrastructure, mobile, client, and server 746 computing platforms. These baselines define an approved list of 747 hardware, software (i.e., operating system, applications, and 748 patches), and associated required configurations. When an endpoint 749 connects to the network, the appropriate baseline configuration is 750 communicated to the endpoint based on its location in the network, 751 the expected function of the device, and other asset management data. 752 The endpoint is checked for compliance with the baseline, and any 753 deviations are indicated to the device's operators. Once the baseline has been 754 established, the endpoint is monitored for any change events 755 pertaining to the baseline on an ongoing basis. When a change occurs 756 to posture defined in the baseline, updated posture information is 757 exchanged, allowing operators to be notified and/or automated action 758 to be taken. 760 Like the Automated Checklist Verification usage scenario (see section 761 2.2.2), this usage scenario supports assessment based on automatable 762 checklists. It differs from that scenario by monitoring for specific 763 endpoint posture changes on an ongoing basis. When the endpoint 764 detects a posture change, an alert is generated identifying the 765 specific changes in posture, allowing assessment of the delta to be 766 performed instead of the full assessment described in the previous scenario. This 767 usage scenario employs the same building blocks as 768 Automated Checklist Verification (see section 2.2.2). It differs 769 slightly in how it uses the following building blocks: 771 Endpoint Component Inventory: Additionally, changes to the hardware 772 and software inventory are monitored, with changes causing 773 alerts to be issued. 775 Posture Attribute Value Collection: After the initial assessment, 776 posture attributes are monitored for changes. If any of the 777 selected posture attribute values change, an alert is issued. 779 Posture Attribute Value Query: The previous state of posture 780 attributes is tracked, allowing changes to be detected. 782 Posture Attribute Evaluation: After the initial assessment, a 783 partial evaluation is performed based on changes to specific 784 posture attributes. 
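As a purely illustrative, non-normative sketch (assuming the same hypothetical interfaces as the earlier example), a change-driven monitor supporting this scenario might trigger a partial evaluation roughly as follows:

   # Non-normative sketch of change-triggered partial evaluation.
   # All names and interfaces are hypothetical, not SACM-defined.

   def on_posture_change(event, data_store, evaluator, notify):
       # Collected Posture Change Detection: the event identifies the
       # endpoint and the posture attributes whose values changed.
       endpoint = event.endpoint_id
       changed = event.changed_attributes

       # Posture Attribute Value Query: fetch prior values so the delta
       # can be reported alongside the new values.
       previous = data_store.posture_values(endpoint, changed)

       # Record the new values so later queries reflect current state.
       data_store.update_posture_values(endpoint, event.new_values)

       # Posture Attribute Evaluation: re-evaluate only the baseline
       # rules that reference the changed attributes.
       rules = data_store.baseline_rules(endpoint, attributes=changed)
       delta_results = evaluator.evaluate(endpoint, event.new_values, rules)

       # Alert operators and/or automation about any new deviations.
       for result in delta_results:
           if not result.compliant:
               notify(endpoint, result, previous.get(result.attribute))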
786 This usage scenario highlights the need to query a data store to 787 prepare a compliance report for a specific endpoint and also the need 788 for a change in endpoint state to trigger Collection and Evaluation. 790 2.2.4. Endpoint Information Analysis and Reporting 792 Freed from the drudgery of manual endpoint compliance monitoring, one 793 of the security administrators at Example Corporation notices (not 794 using SACM standards) that five endpoints have been uploading large 795 amounts of data to a suspicious server on the Internet. The administrator 796 queries data stores for specific endpoint posture to see what 797 software is installed on those endpoints and finds that they all have 798 a particular program installed. She then queries the appropriate 799 data stores to see which other endpoints have that program installed. 800 All these endpoints are monitored carefully (not using SACM 801 standards), which allows the administrator to detect that the other 802 endpoints are also infected. 804 This is just one example of the useful analysis that a skilled 805 analyst can do using data stores of endpoint posture. 807 This usage scenario employs the following building blocks defined in 808 Section 2.1.1 above: 810 Posture Attribute Value Query: Previously collected posture 811 attribute values for the target endpoint(s) are queried from 812 the appropriate data stores using a standardized method. 814 This usage scenario highlights the need to query a repository for 815 attributes to see which attributes certain endpoints have in common. 817 2.2.5. Asynchronous Compliance/Vulnerability Assessment at Ice Station 818 Zebra 820 A university team receives a grant to do research at a government 821 facility in the arctic. The only network communications will be via 822 an intermittent, low-speed, high-latency, high-cost satellite link. 823 During their extended expedition they will need to show continued 824 compliance with the security policies of the university, the 825 government, and the provider of the satellite network, as well as keep 826 current on vulnerability testing. Interactive assessments are 827 therefore not reliable, and since the researchers have very limited 828 funding, they need to minimize how much money they spend on network 829 data. 831 Prior to departure they register all equipment with an asset 832 management system owned by the university, which will also initiate 833 and track assessments. 835 On a periodic basis -- either after a maximum time delta or when the 836 security automation data store has received a threshold level of new 837 vulnerability definitions -- the university uses the information in 838 the asset management system to put together a collection request for 839 all of the deployed assets that encompasses the minimal set of 840 artifacts necessary to evaluate all three security policies as well 841 as vulnerability testing. 843 In the case of new critical vulnerabilities, this collection request 844 consists only of the artifacts necessary for those vulnerabilities, 845 and collection is only initiated for those assets that could 846 potentially have a new vulnerability. 848 Optionally, asset artifacts are cached in a local CMDB. When new 849 vulnerabilities are reported to the security automation data store, a 850 request to the live asset is made only if the artifacts in the CMDB 851 are incomplete and/or not current enough. 853 The collection request is queued for the next window of connectivity. 
854 The deployed assets eventually receive the request, fulfill it, and 855 queue the results for the next return opportunity. 857 The collected artifacts eventually make it back to the university, 858 where the level of compliance and vulnerability exposure is calculated 859 and asset characteristics are compared to what is in the asset 860 management system for accuracy and completeness. 862 Like the Automated Checklist Verification usage scenario (see section 863 2.2.2), this usage scenario supports assessment based on checklists. 864 It differs from that scenario in how guidance, collected posture 865 attribute values, and evaluation results are exchanged due to 866 bandwidth limitations and availability. This usage scenario employs 867 the same building blocks as Automated Checklist Verification (see 868 section 2.2.2). It differs slightly in how it uses the following 869 building blocks: 871 Endpoint Component Inventory: It is likely that the component 872 inventory will not change. If it does, this information will 873 need to be batched and transmitted during the next 874 communication window. 876 Collection Guidance Acquisition: Due to intermittent communication 877 windows and bandwidth constraints, changes to collection 878 guidance will need to be batched and transmitted during the next 879 communication window. Guidance will need to be cached locally 880 to avoid the need for remote communications. 882 Posture Attribute Value Collection: The specific posture attribute 883 values to be collected are identified remotely and batched for 884 collection during the next communication window. If a delay is 885 introduced for collection to complete, results will need to be 886 batched and transmitted. 888 Posture Attribute Value Query: Previously collected posture 889 attribute values will be stored in a remote data store for use 890 at the university. 892 Evaluation Guidance Acquisition: Due to intermittent communication 893 windows and bandwidth constraints, changes to evaluation 894 guidance will need to be batched and transmitted during the next 895 communication window. Guidance will need to be cached locally 896 to avoid the need for remote communications. 898 Posture Attribute Evaluation: Due to the caching of posture 899 attribute values and evaluation guidance, evaluation may be 900 performed at both the university campus and the 901 satellite site. 903 This usage scenario highlights the need to support low-bandwidth, 904 intermittent, or high-latency links. 906 2.2.6. Identification and Retrieval of Guidance 908 In preparation for performing an assessment, an operator or 909 application will need to identify one or more security automation 910 data stores that contain the guidance entries necessary to perform 911 data collection and evaluation tasks. The location of a given 912 guidance entry will either be known a priori, or known security 913 automation data stores will need to be queried to retrieve applicable 914 guidance. 916 To query guidance, it will be necessary to define a set of search 917 criteria. These criteria will often utilize a logical combination of 918 publication metadata (e.g., publishing identity, create time, 919 modification time) and guidance data-specific criteria elements. 920 Once the criteria are defined, one or more security automation data 921 stores will need to be queried, generating a result set. 
Depending on 922 how the results are used, it may be desirable to return the matching 923 guidance directly, a snippet of the guidance matching the query, or a 924 resolvable location to retrieve the data at a later time. The 925 guidance matching the query will be restricted based on the level of 926 access authorized for the requester. 928 If the location of guidance is identified in the query result set, 929 the guidance will be retrieved when needed using one or more data 930 retrieval requests. A variation on this approach would be to 931 maintain a local cache of previously retrieved data. In this case, 932 only guidance that is determined to be stale by some measure will be 933 retrieved from the remote data store. 935 Alternately, guidance can be discovered by iterating over data 936 published with a given context within a security automation data 937 store. Specific guidance can be selected and retrieved as needed. 939 This usage scenario employs the following building blocks defined in 940 Section 2.1.1 above: 942 Data Query: Enables an operator or application to query one or more 943 security automation data stores for guidance using a set of 944 specified criteria. 946 Data Retrieval: If data locations are returned in the query result 947 set, then specific guidance entries can be retrieved and 948 possibly cached locally. 950 2.2.7. Guidance Change Detection 952 An operator or application may need to identify new, updated, or 953 deleted guidance in a security automation data store that they are 954 authorized to access. This may be achieved by querying or 955 iterating over guidance in a security automation data store, or 956 through a notification mechanism that provides alerts when changes are 957 made to a security automation data store. 959 Once guidance changes have been determined, data collection and 960 evaluation activities may be triggered. 962 This usage scenario employs the following building blocks defined in 963 Section 2.1.1 above: 965 Data Change Detection: Allows an operator or application to identify 966 guidance changes in a security automation data store that they 967 are authorized to access. 969 Data Retrieval: If data locations are provided by the change 970 detection mechanism, then specific guidance entries can be 971 retrieved and possibly cached locally. 973 3. IANA Considerations 975 This memo includes no request to IANA. 977 4. Security Considerations 979 This memo documents, for Informational purposes, use cases for 980 security automation. Specific security considerations will be 981 provided in related documents (e.g., requirements, architecture, 982 information model, data model, protocol) as appropriate to the 983 function described in each related document. 985 5. Acknowledgements 987 Adam Montville edited early versions of this draft. 989 Kathleen Moriarty and Stephen Hanna contributed text describing the 990 scope of the document. 992 Gunnar Engelbach, Steve Hanna, Chris Inacio, Kent Landfield, Lisa 993 Lorenzin, Adam Montville, Kathleen Moriarty, Nancy Cam-Winget, and 994 Aron Woland provided use cases text for various revisions of this 995 draft. 997 6. Change Log 999 6.1. -07- to -08- 1001 Reworked long sentences throughout the document by shortening or 1002 using bulleted lists. 1004 Re-ordered and condensed text in the "Automated Checklist 1005 Verification" sub-section to improve the conceptual presentation and 1006 to clarify longer sentences. 
1008 Clarified that the "Posture Attribute Value Query" building block 1009 represents a standardized interface in the context of SACM. 1011 Removed the "others" sub-section within the "usage scenarios" 1012 section. 1014 Updated the "Security Considerations" section to identify that actual 1015 SACM security considerations will be discussed in the appropriate 1016 related documents. 1018 6.2. -06- to -07- 1020 A number of edits were made to section 2 to resolve open questions in 1021 the draft based on meeting and mailing list discussions. 1023 Section 2.1.5 was merged into section 2.1.4. 1025 6.3. -05- to -06- 1027 Updated the "Introduction" section to better reflect the use case, 1028 building block, and usage scenario structure changes from previous 1029 revisions. 1031 Updated most uses of the terms "content" and "content repository" to 1032 use "guidance" and "security automation data store" respectively. 1034 In section 2.1.1, added a discussion of different data types and 1035 renamed "content" to "data" in the building block names. 1037 In section 2.1.2, separated out the building block concepts of 1038 "Endpoint Discovery" and "Endpoint Characterization" based on mailing 1039 list discussions. 1041 Addressed some open questions throughout the draft based on consensus 1042 from mailing list discussions and the two virtual interim meetings. 1044 Changed many section/sub-section names to better reflect their 1045 content. 1047 6.4. -04- to -05- 1049 Changes in this revision are focused on section 2 and the subsequent 1050 subsections: 1052 o Moved existing use cases to a subsection titled "Usage Scenarios". 1054 o Added a new subsection titled "Use Cases" to describe the common 1055 use cases and building blocks used to address the "Usage 1056 Scenarios". The new use cases are: 1058 * Define, Publish, Query and Retrieve Content 1060 * Endpoint Identification and Assessment Planning 1062 * Endpoint Posture Attribute Value Collection 1064 * Posture Evaluation 1066 * Mining the Database 1068 o Added a listing of building blocks used for all usage scenarios. 1070 o Combined the following usage scenarios into "Automated Checklist 1071 Verification": "Organizational Software Policy Compliance", 1072 "Search for Signs of Infection", "Vulnerable Endpoint 1073 Identification", "Compromised Endpoint Identification", 1074 "Suspicious Endpoint Behavior", "Traditional endpoint assessment 1075 with stored results", "NAC/NAP connection with no stored results 1076 using an endpoint evaluator", and "NAC/NAP connection with no 1077 stored results using a third-party evaluator". 1079 o Created new usage scenario "Identification and Retrieval of 1080 Repository Content" by combining the following usage scenarios: 1081 "Repository Interaction - A Full Assessment" and "Repository 1082 Interaction - Filtered Delta Assessment" 1084 o Renamed "Register with repository for immediate notification of 1085 new security vulnerability content that match a selection filter" 1086 to "Content Change Detection" and generalized the description to 1087 be neutral to implementation approaches. 1089 o Removed out-of-scope usage scenarios: "Remediation and Mitigation" 1090 and "Direct Human Retrieval of Ancillary Materials" 1092 Updated acknowledgements to recognize those that helped with editing 1093 the use case text. 1095 6.5. -03- to -04- 1097 Added four new use cases regarding content repository. 1099 6.6. -02- to -03- 1101 Expanded the workflow description based on ML input. 
1103 Changed the ambiguous "assess" to better separate data collection 1104 from evaluation. 1106 Added use case for Search for Signs of Infection. 1108 Added use case for Remediation and Mitigation. 1110 Added use case for Endpoint Information Analysis and Reporting. 1112 Added use case for Asynchronous Compliance/Vulnerability Assessment 1113 at Ice Station Zebra. 1115 Added use case for Traditional endpoint assessment with stored 1116 results. 1118 Added use case for NAC/NAP connection with no stored results using an 1119 endpoint evaluator. 1121 Added use case for NAC/NAP connection with no stored results using a 1122 third-party evaluator. 1124 Added use case for Compromised Endpoint Identification. 1126 Added use case for Suspicious Endpoint Behavior. 1128 Added use case for Vulnerable Endpoint Identification. 1130 Updated Acknowledgements 1132 6.7. -01- to -02- 1134 Changed title 1136 removed section 4, expecting it will be moved into the requirements 1137 document. 1139 removed the list of proposed capabilities from section 3.1 1140 Added empty sections for Search for Signs of Infection, Remediation 1141 and Mitigation, and Endpoint Information Analysis and Reporting. 1143 Removed Requirements Language section and rfc2119 reference. 1145 Removed unused references (which ended up being all references). 1147 6.8. -00- to -01- 1149 o Work on this revision has been focused on document content 1150 relating primarily to use of asset management data and functions. 1152 o Made significant updates to section 3 including: 1154 * Reworked introductory text. 1156 * Replaced the single example with multiple use cases that focus 1157 on more discrete uses of asset management data to support 1158 hardware and software inventory, and configuration management 1159 use cases. 1161 * For one of the use cases, added mapping to functional 1162 capabilities used. If popular, this will be added to the other 1163 use cases as well. 1165 * Additional use cases will be added in the next revision 1166 capturing additional discussion from the list. 1168 o Made significant updates to section 4 including: 1170 * Renamed the section heading from "Use Cases" to "Functional 1171 Capabilities" since use cases are covered in section 3. This 1172 section now extrapolates specific functions that are needed to 1173 support the use cases. 1175 * Started work to flatten the section, moving select subsections 1176 up from under asset management. 1178 * Removed the subsections for: Asset Discovery, Endpoint 1179 Components and Asset Composition, Asset Resources, and Asset 1180 Life Cycle. 1182 * Renamed the subsection "Asset Representation Reconciliation" to 1183 "Deconfliction of Asset Identities". 1185 * Expanded the subsections for: Asset Identification, Asset 1186 Characterization, and Deconfliction of Asset Identities. 1188 * Added a new subsection for Asset Targeting. 1190 * Moved remaining sections to "Other Unedited Content" for future 1191 updating. 1193 6.9. draft-waltermire-sacm-use-cases-05 to draft-ietf-sacm-use-cases-00 1195 o Transitioned from individual I/D to WG I/D based on WG consensus 1196 call. 1198 o Fixed a number of spelling errors. Thank you Erik! 1200 o Added keywords to the front matter. 1202 o Removed the terminology section from the draft. Terms have been 1203 moved to: draft-dbh-sacm-terminology-00 1205 o Removed requirements to be moved into a new I/D. 1207 o Extracted the functionality from the examples and made the 1208 examples less prominent. 
1210 o Renamed "Functional Capabilities and Requirements" section to "Use 1211 Cases". 1213 * Reorganized the "Asset Management" sub-section. Added new text 1214 throughout. 1216 + Renamed a few sub-section headings. 1218 + Added text to the "Asset Characterization" sub-section. 1220 o Renamed "Security Configuration Management" to "Endpoint 1221 Configuration Management". Not sure if the "security" distinction 1222 is important. 1224 * Added new sections, partially integrated existing content. 1226 * Additional text is needed in all of the sub-sections. 1228 o Changed "Security Change Management" to "Endpoint Posture Change 1229 Management". Added new skeletal outline sections for future 1230 updates. 1232 6.10. waltermire -04- to -05- 1234 o Are we including user activities and behavior in the scope of this 1235 work? That seems to be layer 8 stuff, appropriate to an IDS/IPS 1236 application, not Internet stuff. 1238 o Removed the references to what the WG will do because this belongs 1239 in the charter, not the (potentially long-lived) use cases 1240 document. I removed mention of charter objectives because the 1241 charter may go through multiple iterations over time; there is a 1242 website for hosting the charter; this document is not the correct 1243 place for that discussion. 1245 o Moved the discussion of NIST specifications to the 1246 acknowledgements section. 1248 o Removed the portion of the introduction that describes the 1249 chapters; we have a table of concepts, and the existing text 1250 seemed redundant. 1252 o Removed marketing claims, to focus on technical concepts and 1253 technical analysis, that would enable subsequent engineering 1254 effort. 1256 o Removed (commented out in XML) UC2 and UC3, and eliminated some 1257 text that referred to these use cases. 1259 o Modified IANA and Security Consideration sections. 1261 o Moved Terms to the front, so we can use them in the subsequent 1262 text. 1264 o Removed the "Key Concepts" section, since the concepts of ORM and 1265 IRM were not otherwise mentioned in the document. This would seem 1266 more appropriate to the arch doc rather than use cases. 1268 o Removed role=editor from David Waltermire's info, since there are 1269 three editors on the document. The editor is most important when 1270 one person writes the document that represents the work of 1271 multiple people. When there are three editors, this role marking 1272 isn't necessary. 1274 o Modified text to describe that this was specific to enterprises, 1275 and that it was expected to overlap with service provider use 1276 cases, and described the context of this scoped work within a 1277 larger context of policy enforcement, and verification. 1279 o The document had asset management, but the charter mentioned 1280 asset, change, configuration, and vulnerability management, so I 1281 added sections for each of those categories. 1283 o Added text to Introduction explaining goal of the document. 1285 o Added sections on various example use cases for asset management, 1286 config management, change management, and vulnerability 1287 management. 1289 7. Informative References 1291 [RFC3444] Pras, A. and J. Schoenwaelder, "On the Difference between 1292 Information Models and Data Models", RFC 3444, January 1293 2003. 
1295 Authors' Addresses 1297 David Waltermire 1298 National Institute of Standards and Technology 1299 100 Bureau Drive 1300 Gaithersburg, Maryland 20877 1301 USA 1303 Email: david.waltermire@nist.gov 1305 David Harrington 1306 Effective Software 1307 50 Harding Rd 1308 Portsmouth, NH 03801 1309 USA 1311 Email: ietfdbh@comcast.net