2 Security Automation and Continuous Monitoring WG D. Waltermire 3 Internet-Draft NIST 4 Intended status: Informational D. Harrington 5 Expires: January 2, 2016 Effective Software 6 July 1, 2015 8 Endpoint Security Posture Assessment - Enterprise Use Cases 9 draft-ietf-sacm-use-cases-10 11 Abstract 13 This memo documents a sampling of use cases for securely aggregating 14 configuration and operational data and evaluating that data to 15 determine an organization's security posture. From these operational 16 use cases, we can derive common functional capabilities and 17 requirements to guide development of vendor-neutral, interoperable 18 standards for aggregating and evaluating data relevant to security 19 posture. 21 Status of This Memo 23 This Internet-Draft is submitted in full conformance with the 24 provisions of BCP 78 and BCP 79. 26 Internet-Drafts are working documents of the Internet Engineering 27 Task Force (IETF). Note that other groups may also distribute 28 working documents as Internet-Drafts. The list of current Internet- 29 Drafts is at http://datatracker.ietf.org/drafts/current/. 31 Internet-Drafts are draft documents valid for a maximum of six months 32 and may be updated, replaced, or obsoleted by other documents at any 33 time. It is inappropriate to use Internet-Drafts as reference 34 material or to cite them other than as "work in progress." 36 This Internet-Draft will expire on January 2, 2016. 38 Copyright Notice 40 Copyright (c) 2015 IETF Trust and the persons identified as the 41 document authors. All rights reserved. 43 This document is subject to BCP 78 and the IETF Trust's Legal 44 Provisions Relating to IETF Documents 45 (http://trustee.ietf.org/license-info) in effect on the date of 46 publication of this document. Please review these documents 47 carefully, as they describe your rights and restrictions with respect 48 to this document. Code Components extracted from this document must 49 include Simplified BSD License text as described in Section 4.e of 50 the Trust Legal Provisions and are provided without warranty as 51 described in the Simplified BSD License. 53 Table of Contents 55 1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . 2 56 2. Endpoint Posture Assessment . . . . . . . . . . . . . .
. . . . 4 57 2.1. Use Cases . . . . . . . . . . . . . . . . . . . . . . . . 5 58 2.1.1. Define, Publish, Query and Retrieve Security 59 Automation Data . . . . . . . . . . . . . . . . . . . 5 60 2.1.2. Endpoint Identification and Assessment Planning . . . 9 61 2.1.3. Endpoint Posture Attribute Value Collection . . . . . 10 62 2.1.4. Posture Attribute Evaluation . . . . . . . . . . . . 11 63 2.2. Usage Scenarios . . . . . . . . . . . . . . . . . . . . . 12 64 2.2.1. Definition and Publication of Automatable 65 Configuration Checklists . . . . . . . . . . . . . . 12 66 2.2.2. Automated Checklist Verification . . . . . . . . . . 13 67 2.2.3. Detection of Posture Deviations . . . . . . . . . . . 16 68 2.2.4. Endpoint Information Analysis and Reporting . . . . . 17 69 2.2.5. Asynchronous Compliance/Vulnerability Assessment at 70 Ice Station Zebra . . . . . . . . . . . . . . . . . . 18 71 2.2.6. Identification and Retrieval of Guidance . . . . . . 20 72 2.2.7. Guidance Change Detection . . . . . . . . . . . . . . 21 73 3. IANA Considerations . . . . . . . . . . . . . . . . . . . . . 21 74 4. Security Considerations . . . . . . . . . . . . . . . . . . . 21 75 5. Acknowledgements . . . . . . . . . . . . . . . . . . . . . . 22 76 6. Change Log . . . . . . . . . . . . . . . . . . . . . . . . . 22 77 6.1. -08- to -09- . . . . . . . . . . . . . . . . . . . . . . 22 78 6.2. -07- to -08- . . . . . . . . . . . . . . . . . . . . . . 22 79 6.3. -06- to -07- . . . . . . . . . . . . . . . . . . . . . . 23 80 6.4. -05- to -06- . . . . . . . . . . . . . . . . . . . . . . 23 81 6.5. -04- to -05- . . . . . . . . . . . . . . . . . . . . . . 23 82 6.6. -03- to -04- . . . . . . . . . . . . . . . . . . . . . . 24 83 6.7. -02- to -03- . . . . . . . . . . . . . . . . . . . . . . 24 84 6.8. -01- to -02- . . . . . . . . . . . . . . . . . . . . . . 25 85 6.9. -00- to -01- . . . . . . . . . . . . . . . . . . . . . . 25 86 6.10. draft-waltermire-sacm-use-cases-05 to draft-ietf-sacm- 87 use-cases-00 . . . . . . . . . . . . . . . . . . . . . . 26 88 6.11. waltermire -04- to -05- . . . . . . . . . . . . . . . . . 27 89 7. Informative References . . . . . . . . . . . . . . . . . . . 28 90 Authors' Addresses . . . . . . . . . . . . . . . . . . . . . . . 28 92 1. Introduction 94 This document describes the core set of use cases for endpoint 95 posture assessment for enterprises. It provides a discussion of 96 these use cases and associated building block capabilities. The 97 described use cases support: 99 o securely collecting and aggregating configuration and operational 100 data, and 102 o evaluating that data to determine the security posture of 103 individual endpoints. 105 Additionally, this document describes a set of usage scenarios that 106 provide examples for using the use cases and associated building 107 blocks to address a variety of operational functions. 109 These operational use cases and related usage scenarios cross many IT 110 security domains. The use cases enable the derivation of common: 112 o concepts that are expressed as building blocks in this document, 114 o characteristics to inform development of a requirements document 116 o information concepts to inform development of an information model 117 document, and 119 o functional capabilities to inform development of an architecture 120 document. 122 Together these ideas will be used to guide development of vendor- 123 neutral, interoperable standards for collecting, aggregating, and 124 evaluating data relevant to security posture. 
126 Using this standard data, tools can analyze the state of endpoints, 127 user activities and behaviour, and evaluate the security posture of 128 an organization. Common expression of information should enable 129 interoperability between tools (whether customized, commercial, or 130 freely available), and the ability to automate portions of security 131 processes to gain efficiency, react to new threats in a timely 132 manner, and free up security personnel to work on more advanced 133 problems. 135 The goal is to enable organizations to make informed decisions that 136 support organizational objectives, to enforce policies for hardening 137 systems, to prevent network misuse, to quantify business risk, and to 138 collaborate with partners to identify and mitigate threats. 140 It is expected that use cases for enterprises and for service 141 providers will largely overlap. When considering this overlap, there 142 are additional complications for service providers, especially in 143 handling information that crosses administrative domains. 145 The output of endpoint posture assessment is expected to feed into 146 additional processes, such as policy-based enforcement of acceptable 147 state, verification and monitoring of security controls, and 148 compliance to regulatory requirements. 150 2. Endpoint Posture Assessment 152 Endpoint posture assessment involves orchestrating and performing 153 data collection and evaluating the posture of a given endpoint. 154 Typically, endpoint posture information is gathered and then 155 published to appropriate data repositories to make collected 156 information available for further analysis supporting organizational 157 security processes. 159 Endpoint posture assessment typically includes: 161 o Collecting the attributes of a given endpoint; 163 o Making the attributes available for evaluation and action; and 165 o Verifying that the endpoint's posture is in compliance with 166 enterprise standards and policy. 168 As part of these activities, it is often necessary to identify and 169 acquire any supporting security automation data that is needed to 170 drive and feed data collection and evaluation processes. 172 The following is a typical workflow scenario for assessing endpoint 173 posture: 175 1. Some type of trigger initiates the workflow. For example, an 176 operator or an application might trigger the process with a 177 request, or the endpoint might trigger the process using an 178 event-driven notification. 180 2. An operator/application selects one or more target endpoints to 181 be assessed. 183 3. An operator/application selects which policies are applicable to 184 the targets. 186 4. For each target: 188 A. The application determines which (sets of) posture attributes 189 need to be collected for evaluation. Implementations should 190 be able to support (possibly mixed) sets of standardized and 191 proprietary attributes. 193 B. The application might retrieve previously collected 194 information from a cache or data store, such as a data store 195 populated by an asset management system. 197 C. The application might establish communication with the 198 target, mutually authenticate identities and authorizations, 199 and collect posture attributes from the target. 201 D. The application might establish communication with one or 202 more intermediary/agents, mutually authenticate their 203 identities and determine authorizations, and collect posture 204 attributes about the target from the intermediary/agents. 
205 Such agents might be local or external. 207 E. The application communicates target identity and (sets of) 208 collected attributes to an evaluator, possibly an external 209 process or external system. 211 F. The evaluator compares the collected posture attributes with 212 expected values as expressed in policies. 214 G. The evaluator reports the evaluation result for the requested 215 assessment, in a standardized or proprietary format, such as 216 a report, a log entry, a database entry, or a notification. 218 2.1. Use Cases 220 The following subsections detail specific use cases for assessment 221 planning, data collection, analysis, and related operations 222 pertaining to the publication and use of supporting data. Each use 223 case is defined by a short summary containing a simple problem 224 statement, followed by a discussion of related concepts, and a 225 listing of associated building blocks which represent the 226 capabilities needed to support the use case. These use cases and 227 building blocks identify separate units of functionality that may be 228 supported by different components of an architectural model. 230 2.1.1. Define, Publish, Query and Retrieve Security Automation Data 232 This use case describes the need for security automation data to be 233 defined and published to one or more data stores, as well as queried 234 and retrieved from these data stores for the explicit use of posture 235 collection and evaluation. 237 Security automation data is a general concept that refers to any data 238 expression that may be generated and/or used as part of the process 239 of collecting and evaluating endpoint posture. Different types of 240 security automation data will generally fall into one of three 241 categories: 243 Guidance: Instructions and related metadata that guide the attribute 244 collection and evaluation processes. The purpose of this data 245 is to allow implementations to be data-driven enabling their 246 behavior to be customized without requiring changes to deployed 247 software. 249 This type of data tends to change in units of months and days. 250 In cases where assessments are made more dynamic, it may be 251 necessary to handle changes in the scope of hours or minutes. 252 This data will typically be provided by large organizations, 253 product vendors, and some 3rd-parties. Thus, it will tend to 254 be shared across large enterprises and customer communities. 255 In some cases access may be controlled to specific 256 authenticated users. In other cases, the data may be provided 257 broadly with little to no access control. 259 This includes: 261 * Listings of attribute identifiers for which values may be 262 collected and evaluated 264 * Lists of attributes that are to be collected along with 265 metadata that includes: when to collect a set of attributes 266 based on a defined interval or event, the duration of 267 collection, and how to go about collecting a set of 268 attributes. 270 * Guidance that specifies how old collected data can be to be 271 used for evaluation. 273 * Policies that define how to target and perform the 274 evaluation of a set of attributes for different kinds or 275 groups of endpoints and the assets they are composed of. In 276 some cases it may be desirable to maintain hierarchies of 277 policies as well. 279 * References to human-oriented data that provide technical, 280 organizational, and/or policy context. 
This might include 281 references to: best practices documents, legal guidance and 282 legislation, and instructional materials related to the 283 automation data in question. 285 Attribute Data: Data collected through automated and manual 286 mechanisms describing organizational and posture details 287 pertaining to specific endpoints and the assets that they are 288 composed of (e.g., hardware, software, accounts). The purpose 289 of this type of data is to characterize an endpoint (e.g., 290 endpoint type, organizationally expected function/role) and to 291 provide actual and expected state data pertaining to one or 292 more endpoints. This data is used to determine what posture 293 attributes to collect from which endpoints and to feed one or 294 more evaluations. 296 This type of data tends to change in units of days, minutes, or 297 seconds, with posture attribute values typically changing more 298 frequently than endpoint characterizations. This data tends to 299 be organizationally and endpoint specific, with specific 300 operational groups of endpoints tending to exhibit similar 301 attribute profiles. This data will generally not be shared 302 outside an organizational boundary and will generally require 303 authentication with specific access controls. 305 This includes: 307 * Endpoint characterization data that describes the endpoint 308 type, organizationally expected function/role, etc. 310 * Collected endpoint posture attribute values and related 311 context including: time of collection, tools used for 312 collection, etc. 314 * Organizationally defined expected posture attribute values 315 targeted to specific evaluation guidance and endpoint 316 characteristics. This allows a common set of guidance to be 317 parameterized for use with different groups of endpoints. 319 Processing Artifacts: Data that is generated by, and is specific to, 320 an individual assessment process. This data may be used as 321 part of the interactions between architectural components to 322 drive and coordinate collection and evaluation activities. Its 323 lifespan will be bounded by the lifespan of the assessment. It 324 may also be exchanged and stored to provide historic context 325 around an assessment activity so that individual assessments 326 can be grouped, evaluated, and reported in an enterprise 327 context. 329 This includes: 331 * The identified set of endpoints for which an assessment 332 should be performed. 334 * The identified set of posture attributes that need to be 335 collected from specific endpoints to perform an evaluation. 337 * The resulting data generated by an evaluation process 338 including the context of what was assessed, what it was 339 assessed against, what collected data was used, when it was 340 collected, and when the evaluation was performed. 342 The information model for security automation data must support a 343 variety of different data types as described above, along with the 344 associated metadata that is needed to support publication, query, and 345 retrieval operations. It is expected that multiple data models will 346 be used to express specific data types, requiring specialized or 347 extensible security automation data repositories. The different 348 temporal characteristics, access patterns, and access control 349 dimensions of each data type may also require different protocols and 350 data models to be supported, furthering the potential requirement for 351 specialized data repositories.
See [RFC3444] for a description and 352 discussion of the distinctions between an information model and a data model. It 353 is likely that additional kinds of data will be identified through 354 the process of defining requirements and an architectural model. 355 Implementations supporting this building block will need to be 356 extensible to accommodate the addition of new types of data, whether 357 proprietary or (preferably) expressed using a standard format. 359 The building blocks of this use case are: 361 Data Definition: Security automation data will guide and inform 362 collection and evaluation processes. This data may be designed 363 by a variety of roles - application implementers may build 364 security automation data into their applications; 365 administrators may define guidance based on organizational 366 policies; operators may define guidance and attribute data as 367 needed for evaluation at runtime, and so on. Data producers 368 may choose to reuse data from existing stores of security 369 automation data and/or may create new data. Data producers may 370 develop data based on available standardized or proprietary 371 data models, such as those used for network management and/or 372 host management. 374 Data Publication: The capability to enable data producers to publish 375 data to a security automation data store for further use. 376 Published data may be made publicly available or access may be 377 based on an authorization decision using authenticated 378 credentials. As a result, the visibility of specific security 379 automation data to an operator or application may be public, 380 enterprise-scoped, private, or controlled within any other 381 scope. 383 Data Query: An operator or application should be able to query a 384 security automation data store using a set of specified 385 criteria. The result of the query will be a listing matching 386 the query. The query result listing may contain publication 387 metadata (e.g., create date, modified date, publisher, etc.) 388 and/or the full data, a summary, snippet, or the location to 389 retrieve the data. 391 Data Retrieval: A user, operator, or application acquires one or 392 more specific security automation data entries. The location 393 of the data may be known a priori, or may be determined based 394 on decisions made using information from a previous query. 396 Data Change Detection: An operator or application needs to know when 397 security automation data they are interested in has been published 398 to, updated in, or deleted from a security automation data 399 store that they are authorized to access. 401 These building blocks are used to enable acquisition of various 402 instances of security automation data based on specific data models 403 that are used to drive assessment planning (see section 2.1.2), 404 posture attribute value collection (see section 2.1.3), and posture 405 evaluation (see section 2.1.4). 407 2.1.2. Endpoint Identification and Assessment Planning 409 This use case describes the process of discovering endpoints, 410 understanding their composition, identifying the desired state to 411 assess against, and calculating what posture attributes to collect to 412 enable evaluation. This process may be a set of manual, automated, 413 or hybrid steps that are performed for each assessment. 415 The building blocks of this use case are: 417 Endpoint Discovery: To determine the current or historic presence of 418 endpoints in the environment that are available for posture 419 assessment.
Endpoints are identified in support of discovery 420 using information previously obtained or by using other 421 collection mechanisms to gather identification and 422 characterization data. Previously obtained data may originate 423 from sources such as network authentication exchanges. 425 Endpoint Characterization: The act of acquiring, through automated 426 collection or manual input, and organizing attributes 427 associated with an endpoint (e.g., type, organizationally 428 expected function/role, hardware/software versions). 430 Identify Endpoint Targets: Determine the candidate endpoint 431 target(s) against which to perform the assessment. Depending 432 on the assessment trigger, a single endpoint or multiple 433 endpoints may be targeted based on characterized endpoint 434 attributes. Guidance describing the assessment to be performed 435 may contain instructions or references used to determine the 436 applicable assessment targets. In this case the Data Query 437 and/or Data Retrieval building blocks (see section 2.1.1) may 438 be used to acquire this data. 440 Endpoint Component Inventory: To determine what applicable desired 441 states should be assessed, it is first necessary to acquire the 442 inventory of software, hardware, and accounts associated with 443 the targeted endpoint(s). If the assessment of the endpoint is 444 not dependent on these details, then this capability is not 445 required for use in performing the assessment. This process 446 can be treated as a collection use case for specific posture 447 attributes. In this case the building blocks for 448 Endpoint Posture Attribute Value Collection (see section 2.1.3) 449 can be used. 451 Posture Attribute Identification: Once the endpoint targets and 452 their associated asset inventory are known, it is then necessary 453 to calculate what posture attributes are required to be 454 collected to perform the desired evaluation. When available, 455 existing posture data is queried for suitability using the Data 456 Query building block (see section 2.1.1). Such posture data is 457 suitable if it is complete and current enough for use in the 458 evaluation. Any unsuitable posture data is identified for 459 collection. 461 If this is driven by guidance, then the Data Query and/or Data 462 Retrieval building blocks (see section 2.1.1) may be used to 463 acquire this data. 465 At this point, the set of posture attribute values to use for 466 evaluation is known, and these values can be collected if necessary (see 467 section 2.1.3). 469 2.1.3. Endpoint Posture Attribute Value Collection 471 This use case describes the process of collecting a set of posture 472 attribute values related to one or more endpoints. This use case can 473 be initiated by a variety of triggers including: 475 1. A posture change or significant event on the endpoint. 477 2. A network event (e.g., endpoint connects to a network/VPN, 478 specific netflow is detected). 480 3. A scheduled or ad hoc collection task. 482 The building blocks of this use case are: 484 Collection Guidance Acquisition: If guidance is required to drive 485 the collection of posture attribute values, this capability is 486 used to acquire this data from one or more security automation 487 data stores. Depending on the trigger, the specific guidance 488 to acquire might be known. If not, it may be necessary to 489 determine the guidance to use based on the component inventory 490 or other assessment criteria.
The Data Query and/or Data 491 Retrieval building blocks (see section 2.1.1) may be used to 492 acquire this guidance. 494 Posture Attribute Value Collection: The accumulation of posture 495 attribute values. This may be based on collection guidance 496 that is associated with the posture attributes. 498 Once the posture attribute values are collected, they may be 499 persisted for later use or they may be immediately used for posture 500 evaluation. 502 2.1.4. Posture Attribute Evaluation 504 This use case represents the action of analyzing collected posture 505 attribute values as part of an assessment. The primary focus of this 506 use case is to support evaluation of actual endpoint state against 507 the expected state selected for the assessment. 509 This use case can be initiated by a variety of triggers including: 511 1. A posture change or significant event on the endpoint. 513 2. A network event (e.g., endpoint connects to a network/VPN, 514 specific netflow is detected). 516 3. A scheduled or ad hoc evaluation task. 518 The building blocks of this use case are: 520 Collected Posture Change Detection: An operator or application has a 521 mechanism to detect the availability of new, or changes to 522 existing, posture attribute values. The timeliness of 523 detection may vary from immediate to on-demand. Having the 524 ability to filter what changes are detected will allow the 525 operator to focus on the changes that are relevant to their use 526 and will enable evaluation to occur dynamically based on 527 detected changes. 529 Posture Attribute Value Query: If previously collected posture 530 attribute values are needed, the appropriate data stores are 531 queried to retrieve them using the Data Query building block 532 (see section 2.1.1). If all posture attribute values are 533 provided directly for evaluation, then this capability may not 534 be needed. 536 Evaluation Guidance Acquisition: If guidance is required to drive 537 the evaluation of posture attribute values, this capability is 538 used to acquire this data from one or more security automation 539 data stores. Depending on the trigger, the specific guidance 540 to acquire might be known. If not, it may be necessary to 541 determine the guidance to use based on the component inventory 542 or other assessment criteria. The Data Query and/or Data 543 Retrieval building blocks (see section 2.1.1) may be used to 544 acquire this guidance. 546 Posture Attribute Evaluation: The comparison of posture attribute 547 values against their expected values as expressed in the 548 specified guidance. The result of this comparison is output as 549 a set of posture evaluation results. Such results include 550 metadata required to provide a level of assurance with respect 551 to the posture attribute data and, therefore, evaluation 552 results. Examples of such metadata include provenance and/or 553 availability data. 555 While the primary focus of this use case is on enabling the 556 comparison of expected vs. actual state, the same building blocks can 557 support other analysis techniques that are applied to collected 558 posture attribute data (e.g., trending, historic analysis). 560 Completion of this process represents a complete assessment cycle as 561 defined in Section 2. 563 2.2. Usage Scenarios 565 In this section, we describe a number of usage scenarios that utilize 566 aspects of endpoint posture assessment. These are examples of common 567 problems that can be solved with the building blocks defined above. 569 2.2.1.
Definition and Publication of Automatable Configuration 570 Checklists 572 A vendor manufactures a number of specialized endpoint devices. They 573 also develop and maintain an operating system for these devices that 574 enables end-user organizations to configure a number of security and 575 operational settings. As part of their customer support activities, 576 they publish a number of secure configuration guides that provide 577 minimum security guidelines for configuring their devices. 579 Each guide they produce applies to a specific model of device and 580 version of the operating system and provides a number of specialized 581 configurations depending on the device's intended function and what 582 add-on hardware modules and software licenses are installed on the 583 device. To enable their customers to evaluate the security posture 584 of their devices to ensure that all appropriate minimal security 585 settings are enabled, they publish automatable configuration 586 checklists using a popular data format that defines what settings to 587 collect using a network management protocol and appropriate values 588 for each setting. They publish these checklists to a public security 589 automation data store that customers can query to retrieve applicable 590 checklist(s) for their deployed specialized endpoint devices. 592 Automatable configuration checklists could also come from sources 593 other than a device vendor, such as industry groups or regulatory 594 authorities, or enterprises could develop their own checklists. 596 This usage scenario employs the following building blocks defined in 597 Section 2.1.1 above: 599 Data Definition: To allow guidance to be defined using standardized 600 or proprietary data models that will drive collection and 601 evaluation. 603 Data Publication: Providing a mechanism to publish created guidance 604 to a security automation data store. 606 Data Query: To locate and select existing guidance that may be 607 reused. 609 Data Retrieval: To retrieve specific guidance from a security 610 automation data store for editing. 612 While each building block can be used in a manual fashion by a human 613 operator, it is also likely that these capabilities will be 614 implemented together in some form of a guidance editor or generator 615 application. 617 2.2.2. Automated Checklist Verification 619 A financial services company operates a heterogeneous IT environment. 620 In support of their risk management program, they utilize vendor- 621 provided automatable security configuration checklists for each 622 operating system and application used within their IT environment. 624 Multiple checklists are used from different vendors to ensure 625 adequate coverage of all IT assets. 627 To identify what checklists are needed, they use automation to gather 628 an inventory of the software versions utilized by all IT assets in 629 the enterprise. This data gathering will involve querying existing 630 data stores of previously collected endpoint software inventory 631 posture data and actively collecting data from reachable endpoints as 632 needed utilizing network and systems management protocols. 633 Previously collected data may be provided by periodic data 634 collection, network connection-driven data collection, or ongoing 635 event-driven monitoring of endpoint posture changes. 637 Appropriate checklists are queried, located, and downloaded from the 638 relevant guidance data stores.
The specific data stores queried and 639 the specifics of each query may be driven by data including: 641 o collected hardware and software inventory data, and 643 o associated asset characterization data that may indicate the 644 organizational defined functions of each endpoint. 646 Checklists may be sourced from guidance data stores maintained by an 647 application or OS vendor, an industry group, a regulatory authority, 648 or directly by the enterprise. 650 The retrieved guidance is cached locally to reduce the need to 651 retrieve the data multiple times. 653 Driven by the setting data provided in the checklist, a combination 654 of existing configuration data stores and data collection methods are 655 used to gather the appropriate posture attributes from (or pertaining 656 to) each endpoint. Specific posture attribute values are gathered 657 based on the defined enterprise function and software inventory of 658 each endpoint. The collection mechanisms used to collect software 659 inventory posture will be used again for this purpose. Once the data 660 is gathered, the actual state is evaluated against the expected state 661 criteria defined in each applicable checklist. 663 A checklist can be assessed as a whole, or a specific subset of the 664 checklist can be assessed resulting in partial data collection and 665 evaluation. 667 The results of checklist evaluation are provided to appropriate 668 operators and applications to drive additional business logic. 669 Specific applications for checklist evaluation results are out-of- 670 scope for current SACM efforts. Irrespective of specific 671 applications, the availability, timeliness, and liveness of results 672 is often of general concern. Network latency and available bandwidth 673 often create operational constraints that require trade-offs between 674 these concerns and need to be considered. 676 Uses of checklists and associated evaluation results may include, but 677 are not limited to: 679 o Detecting endpoint posture deviations as part of a change 680 management program to: 682 * identify missing required patches, 684 * unauthorized changes to hardware and software inventory, and 686 * unauthorized changes to configuration items. 688 o Determining compliance with organizational policies governing 689 endpoint posture. 691 o Informing configuration management, patch management, and 692 vulnerability mitigation and remediation decisions. 694 o Searching for current and historic indicators of compromise. 696 o Detecting current and historic infection by malware and 697 determining the scope of infection within an enterprise. 699 o Detecting performance, attack and vulnerable conditions that 700 warrant additional network diagnostics, monitoring, and analysis. 702 o Informing network access control decision making for wired, 703 wireless, or VPN connections. 705 This usage scenario employs the following building blocks defined in 706 Section 2.1.1 above: 708 Endpoint Discovery: The purpose of discovery is to determine the 709 type of endpoint to be posture assessed. 711 Identify Endpoint Targets: To identify what potential endpoint 712 targets the checklist should apply to based on organizational 713 policies. 715 Endpoint Component Inventory: Collecting and consuming the software 716 and hardware inventory for the target endpoints. 
718 Posture Attribute Identification: To determine what data needs to be 719 collected to support evaluation, the checklist is evaluated 720 against the component inventory and other endpoint metadata to 721 determine the set of posture attribute values that are needed. 723 Collection Guidance Acquisition: Based on the identified posture 724 attributes, the application will query appropriate security 725 automation data stores to find the "applicable" collection 726 guidance for each endpoint in question. 728 Posture Attribute Value Collection: For each endpoint, the values 729 for the required posture attributes are collected. 731 Posture Attribute Value Query: If previously collected posture 732 attribute values are used, they are queried from the 733 appropriate data stores for the target endpoint(s). 735 Evaluation Guidance Acquisition: Any guidance that is needed to 736 support evaluation is queried and retrieved. 738 Posture Attribute Evaluation: The resulting posture attribute values 739 from previous collection processes are evaluated using the 740 evaluation guidance to provide a set of posture results. 742 2.2.3. Detection of Posture Deviations 744 Example corporation has established secure configuration baselines 745 for each different type of endpoint within their enterprise 746 including: network infrastructure, mobile, client, and server 747 computing platforms. These baselines define an approved list of 748 hardware, software (i.e., operating system, applications, and 749 patches), and associated required configurations. When an endpoint 750 connects to the network, the appropriate baseline configuration is 751 communicated to the endpoint based on its location in the network, 752 the expected function of the device, and other asset management data. 753 It is checked for compliance with the baseline indicating any 754 deviations to the device's operators. Once the baseline has been 755 established, the endpoint is monitored for any change events 756 pertaining to the baseline on an ongoing basis. When a change occurs 757 to posture defined in the baseline, updated posture information is 758 exchanged, allowing operators to be notified and/or automated action 759 to be taken. 761 Like the Automated Checklist Verification usage scenario (see section 762 2.2.2), this usage scenario supports assessment based on automatable 763 checklists. It differs from that scenario by monitoring for specific 764 endpoint posture changes on an ongoing basis. When the endpoint 765 detects a posture change, an alert is generated identifying the 766 specific changes in posture allowing assessment of the delta to be 767 performed instead of a full assessment in the previous case. This 768 usage scenario employs the same building blocks as 769 Automated Checklist Verification (see section 2.2.2). It differs 770 slightly in how it uses the following building blocks: 772 Endpoint Component Inventory: Additionally, changes to the hardware 773 and software inventory are monitored, with changes causing 774 alerts to be issued. 776 Posture Attribute Value Collection: After the initial assessment, 777 posture attributes are monitored for changes. If any of the 778 selected posture attribute values change, an alert is issued. 780 Posture Attribute Value Query: The previous state of posture 781 attributes are tracked, allowing changes to be detected. 783 Posture Attribute Evaluation: After the initial assessment, a 784 partial evaluation is performed based on changes to specific 785 posture attributes. 
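The following sketch, written in Python, is purely illustrative and is not part of any SACM specification or data model; the attribute names, data structures, and alerting mechanism are assumptions made for the example. It shows how an implementation might compare newly collected posture attribute values against the last known state to detect a deviation, and then evaluate only the changed attributes against the expected baseline values rather than repeating a full assessment.

   # Illustrative only; attribute names and structures are hypothetical.
   def detect_changes(last_known, collected):
       """Identify posture attributes whose values differ from the
       last known state for this endpoint."""
       return {attr: value for attr, value in collected.items()
               if last_known.get(attr) != value}

   def evaluate_partial(changed, expected):
       """Evaluate only the changed attributes against the expected
       values defined in the applicable baseline."""
       return {attr: {"actual": value,
                      "expected": expected.get(attr),
                      "compliant": value == expected.get(attr)}
               for attr, value in changed.items() if attr in expected}

   # Last known state, newly collected values, and the approved baseline.
   last_known = {"os_patch_level": "2015-06", "ssh_root_login": False}
   collected  = {"os_patch_level": "2015-04", "ssh_root_login": False}
   baseline   = {"os_patch_level": "2015-06", "ssh_root_login": False}

   deviations = detect_changes(last_known, collected)
   if deviations:
       # Alert operators and/or trigger automated action on the delta.
       print("Posture deviation detected:", deviations)
       print("Partial evaluation:", evaluate_partial(deviations, baseline))

Such a delta-driven flow corresponds to the partial evaluation described above; a full assessment, as in the Automated Checklist Verification scenario, would evaluate the complete set of posture attributes instead.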
787 This usage scenario highlights the need to query a data store to 788 prepare a compliance report for a specific endpoint and also the need 789 for a change in endpoint state to trigger Collection and Evaluation. 791 2.2.4. Endpoint Information Analysis and Reporting 793 Freed from the drudgery of manual endpoint compliance monitoring, one 794 of the security administrators at Example Corporation notices (not 795 using SACM standards) that five endpoints have been uploading lots of 796 data to a suspicious server on the Internet. The administrator 797 queries data stores for specific endpoint posture to see what 798 software is installed on those endpoints and finds that they all have 799 a particular program installed. She then queries the appropriate 800 data stores to see which other endpoints have that program installed. 801 All these endpoints are monitored carefully (not using SACM 802 standards), which allows the administrator to detect that the other 803 endpoints are also infected. 805 This is just one example of the useful analysis that a skilled 806 analyst can do using data stores of endpoint posture. 808 This usage scenario employs the following building blocks defined in 809 Section 2.1.1 above: 811 Posture Attribute Value Query: Previously collected posture 812 attribute values for the target endpoint(s) are queried from 813 the appropriate data stores using a standardized method. 815 This usage scenario highlights the need to query a repository for 816 attributes to see which attributes certain endpoints have in common. 818 2.2.5. Asynchronous Compliance/Vulnerability Assessment at Ice Station 819 Zebra 821 A university team receives a grant to do research at a government 822 facility in the Arctic. The only network communications will be via 823 an intermittent, low-speed, high-latency, high-cost satellite link. 824 During their extended expedition, they will need to show continued 825 compliance with the security policies of the university, the 826 government, and the provider of the satellite network as well as keep 827 current on vulnerability testing. Interactive assessments are 828 therefore not reliable, and since the researchers have very limited 829 funding, they need to minimize how much money they spend on network 830 data. 832 Prior to departure they register all equipment with an asset 833 management system owned by the university, which will also initiate 834 and track assessments. 836 On a periodic basis -- either after a maximum time delta or when the 837 security automation data store has received a threshold level of new 838 vulnerability definitions -- the university uses the information in 839 the asset management system to put together a collection request for 840 all of the deployed assets that encompasses the minimal set of 841 artifacts necessary to evaluate all three security policies as well 842 as vulnerability testing. 844 In the case of new critical vulnerabilities, this collection request 845 consists only of the artifacts necessary for those vulnerabilities, 846 and collection is only initiated for those assets that could 847 potentially have a new vulnerability. 849 (Optional) Asset artifacts are cached in a local CMDB. When new 850 vulnerabilities are reported to the security automation data store, a 851 request to the live asset is made only if the artifacts in the CMDB 852 are incomplete and/or not current enough. 854 The collection request is queued for the next window of connectivity.
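As a rough illustration only, the caching and batching behavior described above might look like the following Python sketch; the artifact names, the seven-day freshness threshold, and the queue structure are assumptions made for the example and are not defined by SACM or by the scenario.

   from datetime import datetime, timedelta

   MAX_ARTIFACT_AGE = timedelta(days=7)  # assumed freshness policy

   def build_collection_request(needed_artifacts, cmdb_cache, now):
       """Request only artifacts that are missing from, or stale in,
       the local CMDB cache."""
       return [artifact for artifact in needed_artifacts
               if artifact not in cmdb_cache
               or now - cmdb_cache[artifact]["collected_at"] > MAX_ARTIFACT_AGE]

   outbound_queue = []  # held until the satellite link is next available

   def queue_for_next_window(artifacts, now):
       """Batch a non-empty request for transmission during the next
       connectivity window."""
       if artifacts:
           outbound_queue.append({"queued_at": now, "artifacts": artifacts})

   cmdb = {"pkg_inventory":  {"collected_at": datetime(2015, 6, 1)},
           "kernel_version": {"collected_at": datetime(2015, 6, 28)}}
   now = datetime(2015, 7, 1)
   request = build_collection_request(
       ["pkg_inventory", "kernel_version", "open_ports"], cmdb, now)
   queue_for_next_window(request, now)  # ["pkg_inventory", "open_ports"]

The same batching applies in the other direction: collected artifacts and any guidance updates are queued at the remote site and transferred when the link next becomes available.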
855 The deployed assets eventually receive the request, fulfill it, and 856 queue the results for the next return opportunity. 858 The collected artifacts eventually make it back to the university 859 where the level of compliance and vulnerability exposure is calculated 860 and asset characteristics are compared to what is in the asset 861 management system for accuracy and completeness. 863 Like the Automated Checklist Verification usage scenario (see section 864 2.2.2), this usage scenario supports assessment based on checklists. 865 It differs from that scenario in how guidance, collected posture 866 attribute values, and evaluation results are exchanged due to 867 bandwidth limitations and availability. This usage scenario employs 868 the same building blocks as Automated Checklist Verification (see 869 section 2.2.2). It differs slightly in how it uses the following 870 building blocks: 872 Endpoint Component Inventory: It is likely that the component 873 inventory will not change. If it does, this information will 874 need to be batched and transmitted during the next 875 communication window. 877 Collection Guidance Acquisition: Due to intermittent communication 878 windows and bandwidth constraints, changes to collection 879 guidance will need to be batched and transmitted during the next 880 communication window. Guidance will need to be cached locally 881 to avoid the need for remote communications. 883 Posture Attribute Value Collection: The specific posture attribute 884 values to be collected are identified remotely and batched for 885 collection during the next communication window. If a delay is 886 introduced for collection to complete, results will need to be 887 batched and transmitted. 889 Posture Attribute Value Query: Previously collected posture 890 attribute values will be stored in a remote data store for use 891 at the university. 893 Evaluation Guidance Acquisition: Due to intermittent communication 894 windows and bandwidth constraints, changes to evaluation 895 guidance will need to be batched and transmitted during the next 896 communication window. Guidance will need to be cached locally 897 to avoid the need for remote communications. 899 Posture Attribute Evaluation: Due to the caching of posture 900 attribute values and evaluation guidance, evaluation may be 901 performed at both the university campus and the 902 satellite site. 904 This usage scenario highlights the need to support low-bandwidth, 905 intermittent, or high-latency links. 907 2.2.6. Identification and Retrieval of Guidance 909 In preparation for performing an assessment, an operator or 910 application will need to identify one or more security automation 911 data stores that contain the guidance entries necessary to perform 912 data collection and evaluation tasks. The location of a given 913 guidance entry will either be known a priori, or known security 914 automation data stores will need to be queried to retrieve applicable 915 guidance. 917 To query guidance, it will be necessary to define a set of search 918 criteria. These criteria will often utilize a logical combination of 919 publication metadata (e.g., publishing identity, create time, 920 modification time) and guidance data-specific criteria elements. 921 Once the criteria are defined, one or more security automation data 922 stores will need to be queried, generating a result set.
Depending on 923 how the results are used, it may be desirable to return the matching 924 guidance directly, a snippet of the guidance matching the query, or a 925 resolvable location to retrieve the data at a later time. The 926 guidance matching the query will be restricted based on the authorized 927 level of access allowed to the requester. 929 If the location of guidance is identified in the query result set, 930 the guidance will be retrieved when needed using one or more data 931 retrieval requests. A variation on this approach would be to 932 maintain a local cache of previously retrieved data. In this case, 933 only guidance that is determined to be stale by some measure will be 934 retrieved from the remote data store. 936 Alternatively, guidance can be discovered by iterating over data 937 published with a given context within a security automation data 938 store. Specific guidance can be selected and retrieved as needed. 940 This usage scenario employs the following building blocks defined in 941 Section 2.1.1 above: 943 Data Query: Enables an operator or application to query one or more 944 security automation data stores for guidance using a set of 945 specified criteria. 947 Data Retrieval: If data locations are returned in the query result 948 set, then specific guidance entries can be retrieved and 949 possibly cached locally. 951 2.2.7. Guidance Change Detection 953 An operator or application may need to identify new, updated, or 954 deleted guidance in a security automation data store that they 955 are authorized to access. This may be achieved by querying or 956 iterating over guidance in a security automation data store, or 957 through a notification mechanism that alerts them to changes made to a 958 security automation data store. 960 Once guidance changes have been determined, data collection and 961 evaluation activities may be triggered. 963 This usage scenario employs the following building blocks defined in 964 Section 2.1.1 above: 966 Data Change Detection: Allows an operator or application to identify 967 guidance changes in a security automation data store that they 968 are authorized to access. 970 Data Retrieval: If data locations are provided by the change 971 detection mechanism, then specific guidance entries can be 972 retrieved and possibly cached locally. 974 3. IANA Considerations 976 This memo includes no request to IANA. 978 4. Security Considerations 980 This memo documents, for informational purposes, use cases for 981 security automation. Specific security and privacy considerations 982 will be provided in related documents (e.g., requirements, 983 architecture, information model, data model, protocol) as appropriate 984 to the function described in each related document. 986 One consideration for security automation is that a malicious actor 987 could use the security automation infrastructure and related 988 collected data to gain access to an item of interest. This may 989 include personal data, private keys, software and configuration state 990 that can be used to inform an attack against the network and 991 endpoints, and other sensitive information. It is important that 992 security and privacy considerations in the related documents identify 993 methods to both detect and prevent such activity. 995 For consideration are means for protecting the communications as well 996 as the systems that store the information.
For communications 997 between the varying SACM components there should be considerations 998 for protecting the confidentiality, data integrity and peer entity 999 authentication. For exchanged information, there should be a means 1000 to authenticate the origin of the information. This is important 1001 where tracking the provenance of data is needed. Also, for any 1002 systems that store information that could be used for unauthorized or 1003 malicious purposes, methods to identify and protect against 1004 unauthorized usage, inappropriate usage, and denial of service need 1005 to be considered. 1007 5. Acknowledgements 1009 Adam Montville edited early versions of this draft. 1011 Kathleen Moriarty, and Stephen Hanna contributed text describing the 1012 scope of the document. 1014 Gunnar Engelbach, Steve Hanna, Chris Inacio, Kent Landfield, Lisa 1015 Lorenzin, Adam Montville, Kathleen Moriarty, Nancy Cam-Winget, and 1016 Aron Woland provided use cases text for various revisions of this 1017 draft. 1019 6. Change Log 1021 6.1. -08- to -09- 1023 Fixed a number of gramatical nits throughout the draft identified by 1024 the SECDIR review. 1026 Added additional text to the security considerations about malicious 1027 actors. 1029 6.2. -07- to -08- 1031 Reworked long sentences throughout the document by shortening or 1032 using bulleted lists. 1034 Re-ordered and condensed text in the "Automated Checklist 1035 Verification" sub-section to improve the conceptual presentation and 1036 to clarify longer sentences. 1038 Clarified that the "Posture Attribute Value Query" building block 1039 represents a standardized interface in the context of SACM. 1041 Removed the "others" sub-section within the "usage scenarios" 1042 section. 1044 Updated the "Security Considerations" section to identify that actual 1045 SACM security considerations will be discussed in the appropriate 1046 related documents. 1048 6.3. -06- to -07- 1050 A number of edits were made to section 2 to resolve open questions in 1051 the draft based on meeting and mailing list discussions. 1053 Section 2.1.5 was merged into section 2.1.4. 1055 6.4. -05- to -06- 1057 Updated the "Introduction" section to better reflect the use case, 1058 building block, and usage scenario structure changes from previous 1059 revisions. 1061 Updated most uses of the terms "content" and "content repository" to 1062 use "guidance" and "security automation data store" respectively. 1064 In section 2.1.1, added a discussion of different data types and 1065 renamed "content" to "data" in the building block names. 1067 In section 2.1.2, separated out the building block concepts of 1068 "Endpoint Discovery" and "Endpoint Characterization" based on mailing 1069 list discussions. 1071 Addressed some open questions throughout the draft based on consensus 1072 from mailing list discussions and the two virtual interim meetings. 1074 Changed many section/sub-section names to better reflect their 1075 content. 1077 6.5. -04- to -05- 1079 Changes in this revision are focused on section 2 and the subsequent 1080 subsections: 1082 o Moved existing use cases to a subsection titled "Usage Scenarios". 1084 o Added a new subsection titled "Use Cases" to describe the common 1085 use cases and building blocks used to address the "Usage 1086 Scenarios". 
The new use cases are: 1088 * Define, Publish, Query and Retrieve Content 1090 * Endpoint Identification and Assessment Planning 1092 * Endpoint Posture Attribute Value Collection 1094 * Posture Evaluation 1095 * Mining the Database 1097 o Added a listing of building blocks used for all usage scenarios. 1099 o Combined the following usage scenarios into "Automated Checklist 1100 Verification": "Organizational Software Policy Compliance", 1101 "Search for Signs of Infection", "Vulnerable Endpoint 1102 Identification", "Compromised Endpoint Identification", 1103 "Suspicious Endpoint Behavior", "Traditional endpoint assessment 1104 with stored results", "NAC/NAP connection with no stored results 1105 using an endpoint evaluator", and "NAC/NAP connection with no 1106 stored results using a third-party evaluator". 1108 o Created new usage scenario "Identification and Retrieval of 1109 Repository Content" by combining the following usage scenarios: 1110 "Repository Interaction - A Full Assessment" and "Repository 1111 Interaction - Filtered Delta Assessment" 1113 o Renamed "Register with repository for immediate notification of 1114 new security vulnerability content that match a selection filter" 1115 to "Content Change Detection" and generalized the description to 1116 be neutral to implementation approaches. 1118 o Removed out-of-scope usage scenarios: "Remediation and Mitigation" 1119 and "Direct Human Retrieval of Ancillary Materials" 1121 Updated acknowledgements to recognize those that helped with editing 1122 the use case text. 1124 6.6. -03- to -04- 1126 Added four new use cases regarding content repository. 1128 6.7. -02- to -03- 1130 Expanded the workflow description based on ML input. 1132 Changed the ambiguous "assess" to better separate data collection 1133 from evaluation. 1135 Added use case for Search for Signs of Infection. 1137 Added use case for Remediation and Mitigation. 1139 Added use case for Endpoint Information Analysis and Reporting. 1141 Added use case for Asynchronous Compliance/Vulnerability Assessment 1142 at Ice Station Zebra. 1144 Added use case for Traditional endpoint assessment with stored 1145 results. 1147 Added use case for NAC/NAP connection with no stored results using an 1148 endpoint evaluator. 1150 Added use case for NAC/NAP connection with no stored results using a 1151 third-party evaluator. 1153 Added use case for Compromised Endpoint Identification. 1155 Added use case for Suspicious Endpoint Behavior. 1157 Added use case for Vulnerable Endpoint Identification. 1159 Updated Acknowledgements 1161 6.8. -01- to -02- 1163 Changed title 1165 removed section 4, expecting it will be moved into the requirements 1166 document. 1168 removed the list of proposed capabilities from section 3.1 1170 Added empty sections for Search for Signs of Infection, Remediation 1171 and Mitigation, and Endpoint Information Analysis and Reporting. 1173 Removed Requirements Language section and rfc2119 reference. 1175 Removed unused references (which ended up being all references). 1177 6.9. -00- to -01- 1179 o Work on this revision has been focused on document content 1180 relating primarily to use of asset management data and functions. 1182 o Made significant updates to section 3 including: 1184 * Reworked introductory text. 1186 * Replaced the single example with multiple use cases that focus 1187 on more discrete uses of asset management data to support 1188 hardware and software inventory, and configuration management 1189 use cases. 
1191 * For one of the use cases, added mapping to functional 1192 capabilities used. If popular, this will be added to the other 1193 use cases as well. 1195 * Additional use cases will be added in the next revision 1196 capturing additional discussion from the list. 1198 o Made significant updates to section 4 including: 1200 * Renamed the section heading from "Use Cases" to "Functional 1201 Capabilities" since use cases are covered in section 3. This 1202 section now extrapolates specific functions that are needed to 1203 support the use cases. 1205 * Started work to flatten the section, moving select subsections 1206 up from under asset management. 1208 * Removed the subsections for: Asset Discovery, Endpoint 1209 Components and Asset Composition, Asset Resources, and Asset 1210 Life Cycle. 1212 * Renamed the subsection "Asset Representation Reconciliation" to 1213 "Deconfliction of Asset Identities". 1215 * Expanded the subsections for: Asset Identification, Asset 1216 Characterization, and Deconfliction of Asset Identities. 1218 * Added a new subsection for Asset Targeting. 1220 * Moved remaining sections to "Other Unedited Content" for future 1221 updating. 1223 6.10. draft-waltermire-sacm-use-cases-05 to draft-ietf-sacm-use- 1224 cases-00 1226 o Transitioned from individual I/D to WG I/D based on WG consensus 1227 call. 1229 o Fixed a number of spelling errors. Thank you Erik! 1231 o Added keywords to the front matter. 1233 o Removed the terminology section from the draft. Terms have been 1234 moved to: draft-dbh-sacm-terminology-00 1236 o Removed requirements to be moved into a new I/D. 1238 o Extracted the functionality from the examples and made the 1239 examples less prominent. 1241 o Renamed "Functional Capabilities and Requirements" section to "Use 1242 Cases". 1244 * Reorganized the "Asset Management" sub-section. Added new text 1245 throughout. 1247 + Renamed a few sub-section headings. 1249 + Added text to the "Asset Characterization" sub-section. 1251 o Renamed "Security Configuration Management" to "Endpoint 1252 Configuration Management". Not sure if the "security" distinction 1253 is important. 1255 * Added new sections, partially integrated existing content. 1257 * Additional text is needed in all of the sub-sections. 1259 o Changed "Security Change Management" to "Endpoint Posture Change 1260 Management". Added new skeletal outline sections for future 1261 updates. 1263 6.11. waltermire -04- to -05- 1265 o Are we including user activities and behavior in the scope of this 1266 work? That seems to be layer 8 stuff, appropriate to an IDS/IPS 1267 application, not Internet stuff. 1269 o Removed the references to what the WG will do because this belongs 1270 in the charter, not the (potentially long-lived) use cases 1271 document. I removed mention of charter objectives because the 1272 charter may go through multiple iterations over time; there is a 1273 website for hosting the charter; this document is not the correct 1274 place for that discussion. 1276 o Moved the discussion of NIST specifications to the 1277 acknowledgements section. 1279 o Removed the portion of the introduction that describes the 1280 chapters; we have a table of concepts, and the existing text 1281 seemed redundant. 1283 o Removed marketing claims, to focus on technical concepts and 1284 technical analysis, that would enable subsequent engineering 1285 effort. 1287 o Removed (commented out in XML) UC2 and UC3, and eliminated some 1288 text that referred to these use cases. 
1290 o Modified IANA and Security Consideration sections. 1292 o Moved Terms to the front, so we can use them in the subsequent 1293 text. 1295 o Removed the "Key Concepts" section, since the concepts of ORM and 1296 IRM were not otherwise mentioned in the document. This would seem 1297 more appropriate to the arch doc rather than use cases. 1299 o Removed role=editor from David Waltermire's info, since there are 1300 three editors on the document. The editor is most important when 1301 one person writes the document that represents the work of 1302 multiple people. When there are three editors, this role marking 1303 isn't necessary. 1305 o Modified text to describe that this was specific to enterprises, 1306 and that it was expected to overlap with service provider use 1307 cases, and described the context of this scoped work within a 1308 larger context of policy enforcement, and verification. 1310 o The document had asset management, but the charter mentioned 1311 asset, change, configuration, and vulnerability management, so I 1312 added sections for each of those categories. 1314 o Added text to Introduction explaining goal of the document. 1316 o Added sections on various example use cases for asset management, 1317 config management, change management, and vulnerability 1318 management. 1320 7. Informative References 1322 [RFC3444] Pras, A. and J. Schoenwaelder, "On the Difference between 1323 Information Models and Data Models", RFC 3444, January 1324 2003. 1326 Authors' Addresses 1328 David Waltermire 1329 National Institute of Standards and Technology 1330 100 Bureau Drive 1331 Gaithersburg, Maryland 20877 1332 USA 1334 Email: david.waltermire@nist.gov 1335 David Harrington 1336 Effective Software 1337 50 Harding Rd 1338 Portsmouth, NH 03801 1339 USA 1341 Email: ietfdbh@comcast.net