idnits 2.17.1 draft-wandw-sacm-information-model-00.txt: Checking boilerplate required by RFC 5378 and the IETF Trust (see https://trustee.ietf.org/license-info): ---------------------------------------------------------------------------- No issues found here. Checking nits according to https://www.ietf.org/id-info/1id-guidelines.txt: ---------------------------------------------------------------------------- No issues found here. Checking nits according to https://www.ietf.org/id-info/checklist : ---------------------------------------------------------------------------- No issues found here. Miscellaneous warnings: ---------------------------------------------------------------------------- == The copyright year in the IETF Trust and authors Copyright Line does not match the current year -- The document date (July 3, 2014) is 3582 days in the past. Is this intentional? Checking references for intended status: Informational ---------------------------------------------------------------------------- -- Looks like a reference, but probably isn't: '1' on line 3137 == Outdated reference: A later version (-16) exists of draft-ietf-sacm-terminology-04 == Outdated reference: A later version (-10) exists of draft-ietf-sacm-use-cases-07 Summary: 0 errors (**), 0 flaws (~~), 3 warnings (==), 2 comments (--). Run idnits with the --verbose option for more detailed information about the items above. -------------------------------------------------------------------------------- 2 Internet Engineering Task Force D. Waltermire, Ed. 3 Internet-Draft NIST 4 Intended status: Informational K. Watson 5 Expires: January 4, 2015 DHS 6 July 3, 2014 8 Information Model for Endpoint Assessment 9 draft-wandw-sacm-information-model-00 11 Abstract 13 This document proposes a draft information model for endpoint posture 14 assessment. It describes the information needed to perform certain 15 assessment activities. 
17 Status of This Memo 19 This Internet-Draft is submitted in full conformance with the 20 provisions of BCP 78 and BCP 79. 22 Internet-Drafts are working documents of the Internet Engineering 23 Task Force (IETF). Note that other groups may also distribute 24 working documents as Internet-Drafts. The list of current Internet- 25 Drafts is at http://datatracker.ietf.org/drafts/current/. 27 Internet-Drafts are draft documents valid for a maximum of six months 28 and may be updated, replaced, or obsoleted by other documents at any 29 time. It is inappropriate to use Internet-Drafts as reference 30 material or to cite them other than as "work in progress." 32 This Internet-Draft will expire on January 4, 2015. 34 Copyright Notice 36 Copyright (c) 2014 IETF Trust and the persons identified as the 37 document authors. All rights reserved. 39 This document is subject to BCP 78 and the IETF Trust's Legal 40 Provisions Relating to IETF Documents 41 (http://trustee.ietf.org/license-info) in effect on the date of 42 publication of this document. Please review these documents 43 carefully, as they describe your rights and restrictions with respect 44 to this document. Code Components extracted from this document must 45 include Simplified BSD License text as described in Section 4.e of 46 the Trust Legal Provisions and are provided without warranty as 47 described in the Simplified BSD License. 49 Table of Contents 51 1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . 3 52 2. Problem Statement . . . . . . . . . . . . . . . . . . . . . . 5 53 2.1. Problem Scope . . . . . . . . . . . . . . . . . . . . . . 5 54 2.2. Mapping to SACM Use Cases . . . . . . . . . . . . . . . . 6 55 3. Conventions used in this document . . . . . . . . . . . . . . 7 56 3.1. Requirements Language . . . . . . . . . . . . . . . . . . 7 57 4. Terms and Definitions . . . . . . . . . . . . . . . . . . . . 7 58 4.1. Pre-defined and Modified Terms . . . . . . . . . . . . . 7 59 4.2. New Terms . . 
. . . . . . . . . . . . . . . . . . . . . . 8 60 5. Foundational Concepts . . . . . . . . . . . . . . . . . . . . 8 61 5.1. Core Principles . . . . . . . . . . . . . . . . . . . . . 8 62 5.2. Architecture Assumptions . . . . . . . . . . . . . . . . 9 63 6. Endpoint Management . . . . . . . . . . . . . . . . . . . . . 13 64 6.1. Core Information Need . . . . . . . . . . . . . . . . . . 13 65 6.2. Process Area Description . . . . . . . . . . . . . . . . 13 66 6.3. Endpoint Management Process Operations . . . . . . . . . 14 67 6.4. Information Model Requirements . . . . . . . . . . . . . 14 68 6.4.1. Enroll Operation . . . . . . . . . . . . . . . . . . 14 69 7. Software Management . . . . . . . . . . . . . . . . . . . . . 15 70 7.1. Core Information Needs . . . . . . . . . . . . . . . . . 15 71 7.2. Process Area Description . . . . . . . . . . . . . . . . 16 72 7.3. Software Management Process Operations . . . . . . . . . 18 73 7.4. Information Model Requirements . . . . . . . . . . . . . 18 74 7.4.1. Define Guidance . . . . . . . . . . . . . . . . . . . 19 75 7.4.2. Collect Inventory . . . . . . . . . . . . . . . . . . 19 76 7.4.3. Evaluate Software Inventory Posture . . . . . . . . . 20 77 7.4.4. Report Evaluation Results . . . . . . . . . . . . . . 21 78 8. Configuration Management . . . . . . . . . . . . . . . . . . 21 79 8.1. Core Information Needs . . . . . . . . . . . . . . . . . 22 80 8.2. Process Area Description . . . . . . . . . . . . . . . . 22 81 8.2.1. The Existence of Configuration Item Guidance . . . . 23 82 8.2.2. Configuration Collection Guidance . . . . . . . . . . 24 83 8.2.3. Configuration Evaluation Guidance . . . . . . . . . . 24 84 8.2.4. Local Configuration Management Process . . . . . . . 24 85 8.3. Configuration Management Operations . . . . . . . . . . . 26 86 8.4. Information Model Requirements . . . . . . . . . . . . . 26 87 8.4.1. Define Guidance . . . . . . . . . . . . . . . . . . . 26 88 8.4.2. Collect Posture Attributes Operation . . 
. . . . . . 28 89 8.4.3. Evaluate Posture Attributes Operation . . . . . . . . 29 90 8.4.4. Report Evaluation Results Operation . . . . . . . . . 30 91 9. Vulnerability Management . . . . . . . . . . . . . . . . . . 31 92 9.1. Core Information Needs . . . . . . . . . . . . . . . . . 31 93 9.2. Process Area Description . . . . . . . . . . . . . . . . 31 94 9.3. Vulnerability Management Process Operations . . . . . . . 32 95 9.4. Information Model Requirements . . . . . . . . . . . . . 32 96 9.4.1. Collect Vulnerability Reports . . . . . . . . . . . . 32 97 9.4.2. Evaluate Vulnerability Posture . . . . . . . . . . . 33 98 9.4.3. Report Evaluation Results . . . . . . . . . . . . . . 35 99 10. From Information Needs to Information Elements . . . . . . . 35 100 11. Information Model Elements . . . . . . . . . . . . . . . . . 36 101 11.1. Asset Identifiers . . . . . . . . . . . . . . . . . . . 37 102 11.1.2. Endpoint Identification . . . . . . . . . . . . . . 40 103 11.1.3. Software Identification . . . . . . . . . . . . . . 41 104 11.1.4. Hardware Identification . . . . . . . . . . . . . . 44 105 11.2. Other Identifiers . . . . . . . . . . . . . . . . . . . 44 106 11.2.1. Platform Configuration Item Identifier . . . . . . . 44 107 11.2.2. Configuration Item Identifier . . . . . . . . . . . 50 108 11.2.3. Vulnerability Identifier . . . . . . . . . . . . . . 52 109 11.3. Endpoint characterization . . . . . . . . . . . . . . . 52 110 11.4. Posture Attribute Expression . . . . . . . . . . . . . . 56 111 11.4.2. Platform Configuration Attributes . . . . . . . . . 56 112 11.5. Actual Value Representation . . . . . . . . . . . . . . 58 113 11.5.1. Software Inventory . . . . . . . . . . . . . . . . . 58 114 11.5.2. Collected Platform Configuration Posture Attributes 59 115 11.6. Evaluation Guidance . . . . . . . . . . . . . . . . . . 60 116 11.6.1. Configuration Evaluation Guidance . . . . . . . . . 60 117 11.7. Evaluation Result Reporting . . . . . . . . . . . . . . 
62 118 11.7.1. Configuration Evaluation Results . . . . . . . . . . 62 119 11.7.2. Software Inventory Evaluation Results . . . . . . . 64 120 12. Acknowledgements . . . . . . . . . . . . . . . . . . . . . . 64 121 13. IANA Considerations . . . . . . . . . . . . . . . . . . . . . 65 122 14. Security Considerations . . . . . . . . . . . . . . . . . . . 65 123 15. References . . . . . . . . . . . . . . . . . . . . . . . . . 65 124 15.1. Normative References . . . . . . . . . . . . . . . . . . 65 125 15.2. Informative References . . . . . . . . . . . . . . . . . 65 126 15.3. URIs . . . . . . . . . . . . . . . . . . . . . . . . . . 69 127 Authors' Addresses . . . . . . . . . . . . . . . . . . . . . . . 69 129 1. Introduction 131 The posture of an endpoint is the status of an endpoint's assets with 132 respect to the security policies and risk models of the organization. 134 A system administrator needs to be able to determine which of the 135 collection of assets that constitute an endpoint have a security 136 problem and which do not conform to the organization's security 137 policies. The CIO needs to be able to determine whether endpoints 138 have security postures that conform to the organization's policies to 139 ensure that the organization is complying with its fiduciary and 140 regulatory responsibilities. The regulator or auditor needs to be 141 able to assess the level of due diligence being achieved by an 142 organization to ensure that all regulations and due diligence 143 expectations are being met. The operator needs to understand which 144 assets have deviated from organizational policies so that those 145 assets can be remedied. 147 Operators will focus on which endpoints are composed of specific 148 assets with problems. CIOs and auditors need a characterization of 149 how an organization is performing as a whole to manage the posture of 150 its endpoints.
All of these actors need deployed capabilities that 151 implement security automation standards in the form of data formats, 152 interfaces, and protocols to be able to assess, in a timely and 153 secure fashion, all assets on all endpoints within their enterprise. 154 This information model provides a basis to identify the desirable 155 characteristics of data models to support these scenarios. Other 156 SACM specifications, such as the SACM Architecture, will describe the 157 potential components of an interoperable system solution based on the 158 SACM information model to address the requirements for scalability, 159 timeliness, and security. 161 This draft was developed in response to the Call for Contributions 162 for the SACM Information Model sent to NIST 163 [IM-LIAISON-STATEMENT-NIST]. This draft proposes a notional 164 information model for endpoint posture assessment. It describes the 165 information needed to perform certain assessment activities and 166 relevant work that may be used as a basis for the development of 167 specific data models. The terms information model and data model 168 loosely align with the terms defined in RFC3444 [RFC3444]. 170 The four primary activities to support this information model are: 172 1. Endpoint Identification 174 2. Endpoint Characterization 176 3. Endpoint Attribute Expression/Representation 178 4. Policy evaluation expression and results reporting 180 These activities are aimed at the level of the technology that 181 performs operations to support collection, evaluation, and reporting. 183 Review of the SACM Use Case [I-D.ietf-sacm-use-cases] usage scenarios 184 shows a common set of business process areas that are critical to 185 understanding endpoint posture such that appropriate policies, 186 security capabilities, and decisions can be developed and 187 implemented.
189 For this information model we have chosen to focus on the following 190 business process areas: 192 o Endpoint Management 194 o Software Management 196 o Configuration Management 198 o Vulnerability Management 200 These management process areas are a way to connect the SACM use 201 cases and building blocks [I-D.ietf-sacm-use-cases] to the 202 organizational needs such that the definition of information 203 requirements has a clearly understood context. 205 2. Problem Statement 207 Scalable and sustainable collection, expression, and evaluation of 208 endpoint information is foundational to SACM's objectives. To secure 209 and defend one's network one must reliably determine what devices are 210 on the network, how those devices are configured from a hardware 211 perspective, what software products are installed on those devices, 212 and how those products are configured. We need to be able to 213 determine, share, and use this information in a secure, timely, 214 consistent, and automated manner to perform endpoint posture 215 assessments. 217 This represents a large and broad set of mission and business 218 processes, and to make the most effective use of technology, the 219 same data must support multiple processes. The activities and 220 processes described within this memo tend to build on each other 221 to enable more complex characterization and assessment. In an effort 222 to create an information model that serves a common set of management 223 processes represented by the usage scenarios in the SACM Use Cases 224 document, we have narrowed down the scope of this model. 226 2.1. Problem Scope 228 The goal of this first iteration of the information model is to 229 define the information needs for an organization to effectively 230 manage the endpoints operating on its network, the software 231 installed on those endpoints, and the configuration of that software.
232 Once we have those three business processes in place, we can then 233 identify vulnerable endpoints in a very efficient manner. 235 The four business process areas represent a large set of tasks that 236 support endpoint posture assessment. In an effort to address the 237 most basic and foundational needs, we have also narrowed down the 238 scope within each of the business processes to a set of defined 239 tasks that strive to achieve specific results in the operational 240 environment and the organization. These tasks are: 242 1. Define the assets. This is what we want to know about an asset. 243 For instance, organizations will want to know what software is 244 installed and its many critical security attributes such as patch 245 level. 247 2. Resolve what assets actually compose an endpoint. This requires 248 populating the data elements and attributes needed to exchange 249 information pertaining to the assets composing an endpoint. 251 3. Express the expected values for the data elements and attributes 252 that need to be evaluated against the actual collected instances of 253 asset data. This is how an organization can express its policy 254 for an acceptable data element or attribute value. A system 255 administrator can also identify specific data elements and 256 attributes that represent problems, such as vulnerabilities, that 257 need to be detected on an endpoint. 259 4. Evaluate the collected instances of the asset data against those 260 expressed in the policy. 262 5. Report the results of the evaluation. 264 2.2. Mapping to SACM Use Cases 266 This information model directly corresponds to all four use cases 267 defined in the SACM Use Cases draft [I-D.ietf-sacm-use-cases]. It 268 uses these use cases in coordination to achieve a small set of well- 269 defined tasks. 271 Sections 6 through 9 address each of the process areas.
For each 272 process area, a "Process Area Description" sub-section represents an 273 end state that is consistent with all the General Requirements and 274 many of the Use Case Requirements identified in the requirements 275 draft [I-D.camwinget-sacm-requirements]. 277 The management process areas and supporting operations defined in 278 this memo directly support REQ004 (Endpoint Discovery), REQ005-006 279 (Attribute and Information Based Queries), and REQ007 (Asynchronous 280 Publication). 282 In addition, the operations defined for each business process in 283 this memo directly correlate with the typical workflow identified in 284 the SACM Use Case document. 286 3. Conventions used in this document 288 3.1. Requirements Language 290 The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", 291 "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this 292 document are to be interpreted as described in RFC 2119 [RFC2119]. 294 4. Terms and Definitions 296 This section describes terms that have been defined by other RFCs and 297 Internet Drafts, as well as new terms introduced in this document. 299 4.1. Pre-defined and Modified Terms 301 This section contains pre-defined terms that are sourced from other 302 IETF RFCs and Internet Drafts. Descriptions of terms in this section 303 will reference the original source of the term and will provide 304 additional specific context for the use of each term in SACM. For 305 the sake of brevity, terms from [I-D.ietf-sacm-terminology] are not 306 repeated here unless the original meaning has been changed in this 307 document. 309 Asset For this Information Model it is necessary to change the 310 scope of the definition of asset from the one provided in 311 [I-D.ietf-sacm-terminology].
Originally defined in [RFC4949] 312 and referenced in [I-D.ietf-sacm-terminology] as "a system 313 resource that is (a) required to be protected by an 314 information system's security policy, (b) intended to be 315 protected by a countermeasure, or (c) required for a system's 316 mission." This definition generally relates to an "IT 317 Asset", which in the context of this document is overly 318 limiting. For use in this document, a broader definition of 319 the term is needed to represent non-IT asset types as well. 321 In [NISTIR-7693] an asset is defined as "anything that has 322 value to an organization, including, but not limited to, 323 another organization, person, computing device, information 324 technology (IT) system, IT network, IT circuit, software 325 (both an installed instance and a physical instance), virtual 326 computing platform (common in cloud and virtualized 327 computing), and related hardware (e.g., locks, cabinets, 328 keyboards)." This definition aligns better with common 329 dictionary definitions of the term and better fits the needs 330 of this document. 332 4.2. New Terms 334 IT Asset Originally defined in [RFC4949] as "a system resource that 335 is (a) required to be protected by an information system's 336 security policy, (b) intended to be protected by a 337 countermeasure, or (c) required for a system's mission." 339 Security Content Automation Protocol (SCAP) According to SP800-126, 340 SCAP, pronounced "ess-cap", is "a suite of specifications 341 that standardize the format and nomenclature by which 342 software flaw and security configuration information is 343 communicated, both to machines and humans." SP800-117 344 revision 1 [SP800-117] provides a general overview of SCAP 345 1.2. The 11 specifications that comprise SCAP 1.2 are 346 synthesized by a master specification, SP800-126 revision 2 347 [SP800-126], that addresses integration of the specifications 348 into a coherent whole. 
The use of "protocol" in its name is 349 a misnomer, as SCAP defines only data models. SCAP has been 350 adopted by a number of operating system and security tool 351 vendors. 353 5. Foundational Concepts 355 5.1. Core Principles 357 This information model is built on the following core principles: 359 o Collection and Evaluation are separate tasks. 361 o Collection and Evaluation can be performed on the endpoint, at a 362 local server that communicates directly with the endpoint, or 363 based on data queried from a back end data store that does not 364 communicate directly with any endpoints. 366 o Every entity (human or machine) that notifies, queries, or 367 responds to any guidance, collection, or evaluation task must have 368 a way of identifying itself and/or presenting credentials. 369 Authentication is a key step in all of the processes, and while 370 needed to support the business processes, information needs to 371 support authentication are not highlighted in this information 372 model. There is already a large amount of existing work that 373 defines information needs for authentication. 375 o Policies are reflected in guidance for collection, evaluation, and 376 reporting. 378 o Guidance will often be generated by humans or through the use of 379 transformations on existing automation data. In some cases, 380 guidance will be generated dynamically based on shared information 381 or current operational needs. As guidance is created, it will be 382 published to an appropriate guidance data store allowing guidance 383 to be managed in and retrieved from convenient locations. 385 o Operators of a continuous monitoring or security automation system 386 will need to make decisions when defining policies about what 387 guidance to use or reference. The guidance used may be directly 388 associated with policy or may be queried dynamically based on 389 associated metadata. 391 o Guidance can be gathered from multiple data stores.
It may be 392 retrieved at the point of use or may be packaged and forwarded for 393 later use. Guidance may be retrieved in the event of a collection or 394 evaluation trigger or it may be gathered ahead of time and stored 395 locally for use/reference during collection and evaluation 396 activities. 398 5.2. Architecture Assumptions 400 This information model will focus on WHAT information needs to be 401 exchanged to support the business process areas. The architecture 402 document is the best place to represent HOW and WHERE this 403 information is used. In an effort to ensure that the data models 404 derived from this information model scale to the architecture, four 405 core architectural components need to be defined. They are 406 producers, consumers, capabilities, and repositories. These elements 407 are defined as follows: 409 o Producers (e.g., Evaluation Producer) collect, aggregate, and/or 410 derive information items and provide them to consumers. For this 411 model there are Collection, Evaluation, and Results Producers. 412 There may or may not be Guidance Producers. 414 o Consumers (e.g., Collection Consumer) request and/or receive 415 information items from producers for their own use. For this 416 model there are Collection, Evaluation, and Results Consumers. 417 There may or may not be Guidance Consumers. 419 o Capabilities (e.g., Posture Evaluation Capability) take the input 420 from one or more producers and perform some function on or with 421 that information. For this model there are Collection Guidance, 422 Collection, Evaluation Guidance, Evaluation, Reporting Guidance, 423 and Results Reporting Capabilities. 425 o Repositories (e.g., Enterprise Repository) store information items 426 that are input to or output from Capabilities, Producers, and 427 Consumers. For this model we refer to generic Enterprise and 428 Guidance Repositories.
430 Information that needs to be communicated by or made available to any 431 of these components will be specified in each of the business process 432 areas. 434 In the most trivial example, illustrated in Figure 1, Consumers 435 either request information from, or are notified by, Producers. 437 +----------+ Request +----------+ 438 | <-----------------+ | 439 | Producer | | Consumer | 440 | +-----------------> | 441 +----------+ Response +----------+ 443 +----------+ +----------+ 444 | | Notify | | 445 | Producer +-----------------> Consumer | 446 | | | | 447 +----------+ +----------+ 449 Figure 1: Example Producer/Consumer Interactions 451 As illustrated in Figure 2, writing and querying from data 452 repositories are one way in which this interaction can occur in an 453 asynchronous fashion. 455 +----------+ +----------+ 456 | | | | 457 | Producer | | Consumer | 458 | | | | 459 +-----+----+ +----^-----+ 460 | | 461 Write | +------------+ | Query 462 | | | | 463 +-----> Repository +-------+ 464 | | 465 +------------+ 467 Figure 2: Producer/Consumer Repository Interaction 469 To perform an assessment, these elements are chained together. The 470 diagram below is illustrative of this process, and is meant to 471 demonstrate WHAT basic information exchanges need to occur, while 472 trying to maintain flexibility in HOW and WHERE they occur. 474 For example: 476 o the collection capability can reside on the endpoint or not. 478 o the collection producer can be part of the collection capability 479 or not. 481 o a repository can be directly associated with a producer and/or an 482 evaluator or stand on its own. 484 o there can be multiple "levels" of producers and consumers.
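The interaction styles in Figures 1 and 2 can be sketched in code. The following Python fragment is illustrative only; the class and method names (Producer, Consumer, Repository, respond, notify, publish) are assumptions of this example and are not defined by SACM or this information model.

```python
# Hypothetical sketch of the producer/consumer interactions in
# Figures 1 and 2; names are illustrative, not SACM-defined.

class Repository:
    """Mediates asynchronous exchange (Figure 2): producers write
    information items, consumers query them at a later time."""
    def __init__(self):
        self._items = []

    def write(self, item):
        self._items.append(item)

    def query(self, kind):
        return [i for i in self._items if i.get("kind") == kind]

class Producer:
    """Collects, aggregates, and/or derives information items."""
    def __init__(self, repository=None):
        self._repository = repository

    def respond(self, request):
        # Figure 1 (top): the consumer requests, the producer responds.
        return {"kind": request["kind"], "value": "collected-posture-data"}

    def notify(self, consumer):
        # Figure 1 (bottom): the producer notifies the consumer directly.
        consumer.receive({"kind": "event", "value": "inventory-changed"})

    def publish(self, item):
        # Figure 2: the producer writes to a repository instead of
        # communicating with any consumer directly.
        self._repository.write(item)

class Consumer:
    """Requests and/or receives information items for its own use."""
    def __init__(self):
        self.received = []

    def receive(self, item):
        self.received.append(item)

    def request_from(self, producer, kind):
        self.receive(producer.respond({"kind": kind}))
```

As in Figure 3, such components could be chained: one component's Consumer feeds a Capability whose output is handed to the next Producer.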
486 +-------------+ 487 |Evaluation | 488 +-------------+ |Guidance +--+ 489 |Endpoint | |Capability | | 490 +-------+ | +-------------+ | 491 | | | | 492 | +-------+-----+ +-----v-------+ 493 | Collection | |Evaluation | 494 +-> Capability +--+--------+ |Capability | 495 | | |Collection | +-----------+ +----------+ 496 | +------------+Producer | | |---| | 497 | | | |Collection | |Evaluation| 498 | | | |Consumer | |Producer | 499 | +----+------+ +----^------+ +---+------+ 500 ++---------+ | | | 501 |Collection| +-----v------+ +---+--------+ | 502 |Guidance | | | |Collection | | 503 |Capability| |Collection | |Producer | | 504 | | |Consumer |-----| | | 505 +----------+ +------------+ +------------+ | 506 | Collection | | 507 | Repository | | 508 +------------+ | 509 | 510 +--------------+ +---------------+ | 511 |Evaluation | |Evaluation | | 512 |Results | |Consumer <-----+ 513 |Producer |-----------| | 514 +-----+--------+ +---------------+ 515 | |Results Reporting| 516 | |Capability | 517 | +------------^----+ 518 | | 519 +-----v--------+ +----+------+ 520 |Evaluation | |Reporting | 521 |Results | |Guidance | 522 |Consumer | |Repository | 523 +---+----------+ +-----------+ +-------------+ 524 | | Results | 525 +-----------------------------> Repository | 526 | | 527 +-------------+ 529 Figure 3: Producer/Consumer Complex Example 531 This illustrative example in Figure 3 provides a set of information 532 exchanges that need to occur to perform a posture assessment. The 533 rest of this information model uses this set of exchanges, based 534 on these core architectural components, as the basis for determining 535 information elements. 537 6. Endpoint Management 539 6.1. Core Information Need 541 Unique Endpoint Identifier: The organization needs to uniquely 542 identify and label an endpoint, regardless of whether it is known 543 a priori or discovered in the operational 544 environment. 546 6.2.
Process Area Description 548 The authors envisage a common "lifecycle" for all endpoints in an 549 enterprise network. Each endpoint's lifecycle begins with an 550 "enrollment" operation, where a description of the endpoint is added 551 to the enterprise repository of "known endpoints." The enrollment 552 operation may be performed manually, in advance of an endpoint's 553 first connection request, or automatically at the time of an 554 endpoint's first connection request. 556 Manual enrollment is typically done in situations where endpoint 557 devices are issued by the enterprise (as contrasted with "bring your 558 own device" situations), and must first be configured by the 559 enterprise's Information Technology (IT) department before they are 560 allowed to connect to the network. When enrollment is performed in 561 this manner, administrators typically know a lot about the physical 562 endpoint, such as any persistent identifying characteristics (e.g., 563 its primary MAC address), its assigned IP address and planned 564 physical location within the network, its role within the network 565 (e.g., end-user workstation, database server, webserver, etc.), and 566 the responsible parties (e.g., asset owner, device maintainer). 567 These data elements may be associated with the endpoint when the 568 endpoint characteristics are entered into an enterprise repository as 569 part of the enrollment process. 571 For networks with fewer access restrictions (e.g., guest wireless 572 networks, and "bring your own device" networks), enrollment may occur 573 automatically when an endpoint device first attempts to connect to 574 the network. In these situations, enrollment typically happens at 575 machine speed, without a human administrator in the loop. As a 576 result, much less may be known about the endpoint that is being 577 enrolled, and thus only minimal data elements may be available for 578 automatic entry into an enterprise repository. 580 6.3. 
Endpoint Management Process Operations 582 The SACM terminology draft [I-D.ietf-sacm-terminology] defines an 583 endpoint as "any physical or virtual device that may have a network address". 584 Endpoint management encompasses all security automation processes 585 involved in the tracking and monitoring of endpoints as devices that 586 are connected (even if only transiently) to an enterprise network. 587 Based on this vision, we have identified the following operations for 588 Endpoint Management: 590 1. Enroll/Expel: Add an endpoint to (or remove an endpoint from) the 591 list of known endpoints that may be allowed to access network 592 resources. 594 6.4. Information Model Requirements 596 In this section we describe the data that enterprises will need in 597 order to carry out each endpoint management operation. 599 6.4.1. Enroll Operation 601 We allow for two modes of the "Enroll" operation: (1) manual, and (2) 602 automatic. When the "enroll" operation is performed in the "manual" 603 mode, enrollment occurs before the enrolled endpoint first attempts 604 to connect to the enterprise network. The result of manual 605 enrollment is that the enterprise repository is updated with records 606 to indicate that the endpoint is enrolled and thus is "known". 607 During enrollment, credentials may be issued (e.g., host or user 608 certificates) and placed on the endpoint, to be furnished each time 609 the endpoint attempts to connect to the network. We assume that as a 610 byproduct of manual enrollment, the enterprise repository will 611 contain a persistent unique identifier for the enrolled endpoint. 613 In the automatic mode, enrollment occurs as a side effect of a 614 connection request. Here, the endpoint requesting access has not 615 previously been manually enrolled and is thus "unknown". If policy 616 permits, unknown endpoints may still be allowed to connect to the 617 network and may be given limited access to resources.
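The two enrollment modes described above might be sketched as follows. This Python fragment is a hypothetical illustration; the record fields mirror the information elements discussed in this section, but the names (EndpointRecord, enroll_manual, enroll_automatic) are assumptions of the example, not SACM-defined constructs.

```python
from dataclasses import dataclass, field
from typing import List, Optional
import uuid

# Hypothetical sketch of the "Enroll" operation's two modes; all
# names here are illustrative assumptions, not SACM-defined.

@dataclass
class EndpointRecord:
    unique_endpoint_id: str                # persistent unique identifier
    device_role: Optional[str] = None      # e.g., workstation, server
    asset_owner: Optional[str] = None      # responsible party
    other_identifiers: List[str] = field(default_factory=list)  # MAC, IP
    known: bool = True

def enroll_manual(repository, mac, role, owner):
    """Manual mode: performed by IT before the endpoint's first
    connection request, so much is known about the endpoint."""
    record = EndpointRecord(
        unique_endpoint_id=str(uuid.uuid4()),
        device_role=role,
        asset_owner=owner,
        other_identifiers=[mac],
    )
    repository.append(record)
    return record

def enroll_automatic(repository, mac):
    """Automatic mode: a side effect of a connection request from an
    unknown endpoint, so only minimal data elements are available."""
    record = EndpointRecord(
        unique_endpoint_id=str(uuid.uuid4()),
        other_identifiers=[mac],
    )
    repository.append(record)
    return record
```

Either way, the repository ends up holding a persistent unique identifier for the endpoint, which later operations (such as software inventory collection) can reference.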
619 To complete the enrollment operation, the following information 620 elements may be needed ('M' indicates 'mandatory' and 'O' indicates 621 'optional'): 623 o (M) Unique Endpoint Identifier: a persistent unique identifier for 624 the endpoint 626 o (O) Device Role: the organization needs to identify the intended 627 use of the device (e.g., workstation, server, router). 629 o (O) Asset Ownership: the organization needs to know what person 630 and/or organization is responsible for maintaining the device. 632 o (O) Other Identifying Characteristics: the organization needs to know 633 what other identifiers can be mapped to the Unique Endpoint 634 Identifier (e.g., IP address, MAC address). 636 While many of these elements may be automatically collected, data 637 pertaining to device role and ownership often requires manual entry. 639 7. Software Management 641 This section presents an information model for managing information 642 about software installed on endpoints. Software management 643 encompasses the subset of tasks within security automation and 644 continuous monitoring that play a role in compiling inventories of 645 software products installed on endpoints and transmitting those 646 inventories to software inventory data consumers. Software inventory 647 data consumers may store the software inventory data and/or perform 648 enterprise-level software inventory-related security posture 649 assessments. While software enforcement policies that are invoked 650 and enforced at the time of installation or execution are out of the 651 scope of SACM, they require the same software guidance information to 652 be produced and exchanged. For that reason, the first operation is 653 to Define Guidance, making it available for posture 654 evaluation during any operation. 656 7.1. Core Information Needs 658 Unique Endpoint Identifier: Organizations need to be able to relate 659 an instance of software to the endpoint on which it is 660 installed.
This should be consistent with the identification 661 of the endpoint when it is enrolled (see Section 6.4.1). 663 Unique Software Identifier: Organizations need to be able to 664 uniquely identify and label software installed on an 665 endpoint. Specifically, they need to know the name, 666 publisher, unique ID, and version; and any related patches. 667 In some cases the software's identity might be known a priori 668 by the organization; in other cases, a software identity 669 might be first detected by an organization when the software 670 is first inventoried in an operational environment. Due to 671 this, it is important that an organization have a stable and 672 consistent means to identify software found during 673 collection. 675 7.2. Process Area Description 677 The authors envisage an automated capability that will collect 678 software inventory data on endpoints and transmit that data to 679 interested software inventory data consumers. "Inventory data" is 680 information about software products (operating systems, 681 end-user applications, or other software-based systems and services) 682 that have been added to, removed from, or modified on endpoints. The 683 collection and transmission of inventory data from a given endpoint 684 to an inventory data consumer may be scheduled, event-driven, or on 685 demand based on the requested collection policy. That is, endpoints 686 may be configured to collect and transmit inventory data on a 687 predefined schedule (scheduled), in response to a change in inventory 688 state (event-driven), or whenever an inventory consumer requests it (on 689 demand). 691 On any occasion when inventory data is collected and transmitted, the 692 data may be "complete" or be an "update" to previously-transmitted 693 data based on the requested collection policy.
Inventory data is 694 considered to be "complete" when it is intended to reflect a 695 comprehensive and up-to-date enumeration of all software products 696 that are believed to be installed on a given endpoint as of the time 697 the inventory is taken. Inventory data is considered to be an 698 "update" when the data is limited to documenting only the changes 699 that have occurred since the last complete inventory or update. Ad 700 hoc inventory data requests should also be supported; that is, an 701 inventory data consumer should be able to issue ad hoc queries to an 702 endpoint regarding specific identified products. Endpoints should be 703 able to indicate whether or not an identified product is installed, 704 and should be able to answer various questions about an installed 705 software product, including: the date/time it was most recently 706 installed, removed or patched/updated, which patches are installed, 707 and the names and properties (e.g., versions, hashes) of files 708 associated with the product. Queries concerning the configuration of 709 installed software products are addressed separately in Section 8. 711 The authors' vision rests on a model of the basic processes involved 712 in software product installation and maintenance. We use the term 713 "inventory event" to refer generically to any of three possible 714 events involving software which may occur on endpoints: (1) 715 installation (adding a software product to an endpoint's inventory), 716 (2) modification (changing any files associated with a previously- 717 installed product), and (3) removal (eliminating a software product 718 from an endpoint's inventory). 720 Under this model, each endpoint may support (but is not required to) 721 a resident "inventory manager".
If present, the inventory manager is 722 an installed software product which provides a standard interface to 723 "product installers", which are specialized software applications 724 that are designed to install, modify or remove other software 725 products on endpoints. Product installers are generally expected to 726 interact with the resident inventory manager, if one is present, but 727 are not required to. By interacting with the inventory manager, 728 product installers notify the inventory manager of any inventory 729 events they are generating, and supply data values needed to 730 characterize the event. When a product installer interacts correctly 731 with a resident inventory manager, we say that it has generated a 732 "conforming inventory event", meaning it has installed, modified or 733 removed a software product in a manner that conforms to the inventory 734 manager's expectations, as defined by its interface. When product 735 installers fail to interact properly with a resident inventory 736 manager, bypass it altogether, or when an inventory manager is not 737 resident, we say that the resulting inventory event is "non- 738 conforming". Additionally, in the non-conforming case, a resident 739 inventory manager may monitor the filesystem or other installation 740 contexts to detect changes to software and characterize the nature of 741 the change. 743 On Linux systems, RPM and DPKG are examples of inventory managers. 744 Each provides a standard product installer application which parses 745 specially-formatted package files, updates the RPM/DPKG database, and 746 copies files to their intended locations. 748 We require that when a product installer generates a conforming 749 inventory event, the resident inventory manager shall update a local 750 inventory data store on the endpoint. The local inventory data store 751 must maintain an up-to-date record of all software products installed 752 on the endpoint.
The local inventory data store should also maintain 753 a record of all inventory events, including product modifications and 754 removals for later collection. The resident inventory manager should 755 have the ability to provide event-driven notification to other 756 software systems, to support reporting of inventory change events as 757 soon as they occur. 759 Because our model allows for non-conforming inventory events, as well 760 as for situations in which an endpoint does not support a resident 761 inventory manager, we allow for some number of "endpoint scanners" to 762 access endpoints either directly (by being resident on the endpoint 763 and by having the privileges necessary to inspect all areas of the 764 endpoint where installed software may be present) or indirectly 765 (e.g., by monitoring network traffic between the endpoint and other 766 devices), and attempt to infer inventory events which may have 767 happened at some point in the past. 769 Compiling a complete and accurate inventory of software products 770 installed on an endpoint thus involves collecting information about 771 conforming as well as non-conforming inventory events. Information 772 about conforming inventory events is obtained from the resident 773 inventory manager, if present. Information about non-conforming 774 inventory events is obtained by running one or more endpoint 775 scanners. These tasks are performed by an "inventory producer", 776 which may or may not reside on an endpoint, but which is able to 777 interact with any resident inventory manager, as well as to initiate 778 scans using available endpoint scanners. Inventory producers 779 transmit collected inventory data to one or more inventory consumers, 780 which may store that data in repositories for later assessment, 781 perform assessments on that data directly, or both. 
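The flow described above can be sketched as follows, using hypothetical names; in particular, the merge below processes events in a single sequence and ignores modification events, both of which a real inventory producer would have to handle more carefully.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class InventoryEvent:
    kind: str          # "installation", "modification", or "removal"
    software_id: str   # unique software identifier
    conforming: bool   # True if reported via the resident inventory manager

def compile_inventory(manager_events, scanner_events):
    """An inventory producer merges conforming events (obtained from the
    resident inventory manager, if present) with non-conforming events
    inferred by endpoint scanners, yielding the set of software products
    believed to be installed on the endpoint."""
    installed = set()
    for event in list(manager_events) + list(scanner_events):
        if event.kind == "installation":
            installed.add(event.software_id)
        elif event.kind == "removal":
            installed.discard(event.software_id)
        # "modification" events change files, not the installed set
    return installed
```

The resulting set is what a "complete" inventory transmission to an inventory consumer would enumerate.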
783 The collection and transmission of software inventory data is needed 784 to enable assessment of security posture attributes associated with 785 software inventory. For example, an enterprise may need to assess 786 compliance with "whitelists" (lists of software products authorized 787 for use on network devices) and "blacklists" (lists of specifically 788 prohibited products). For another example, an enterprise may need to 789 assess whether a software product with a publicly disclosed 790 vulnerability is installed on any endpoint within the network. 792 7.3. Software Management Process Operations 794 The following operations are necessary to carry out activities within 795 the Software Management domain: 797 1. Define Guidance: Add software to or remove software from one of 798 three lists for an endpoint. Those lists are software allowed to 799 be installed, software prohibited from being installed, and 800 software that is mandatory for installation. 802 2. Collect Inventory: Prepare and deliver a "complete" or "update" 803 inventory report to one or more interested inventory data 804 consumers, or respond to an ad hoc request for inventory data 805 about one or more software products. 807 3. Evaluate Software Inventory Posture: Based on guidance, evaluate 808 the current software inventory and determine compliance with 809 applicable security policies or identify conditions of interest. 811 4. Report Evaluation Results: Based on Guidance, report evaluation 812 results to interested report consumers. 814 7.4. Information Model Requirements 816 In this section we describe the data that enterprises will need to 817 carry out each Software Management operation. 819 7.4.1. Define Guidance 821 The "Define Guidance" operation involves the Software Inventory 822 Collection and Evaluation Guidance Capabilities. 
824 The Collection Guidance Capability generates or maintains the 825 guidance related to when Software Inventory should be collected 826 (e.g., periodic or when the inventory changes on an endpoint) and 827 what type of collection (partial or full) should occur at that time. 829 The Evaluation Guidance Capability generates or maintains the 830 guidance associated with software items where each item is 831 categorized as one of "mandatory", "optional", or "prohibited" for a 832 set of endpoints. A product is "mandatory" if it must be installed 833 on every compatible endpoint. A product is "optional" if it is 834 allowed to be installed on compatible endpoints. A product is 835 "prohibited" if it must not be installed on any compatible endpoints. 837 The Collection and Evaluation Guidance Capabilities have the 838 following information needs: 840 o (M) Unique Software Identifier: the software product which is the 841 subject of the request must be identified. 843 o (M) Authorization Status: the authorization status of the product 844 (one of 'mandatory', 'optional', or 'prohibited') must be 845 supplied. 847 o (O) Software Footprint: hashes of the software footprint (or a 848 pointer to those values) may be used to determine if software is 849 corrupted or tampered with. 851 This operation results in a change to the Collection or Evaluation 852 Guidance. This may, but need not, trigger an automatic enterprise- 853 wide assessment. 855 7.4.2. 
Collect Inventory 857 The "Collect Inventory" operation involves the following architecture 858 components: 860 o Software Inventory Collection Capability 862 o Software Inventory Collection Producers and Consumers 864 o Software Inventory Collection Guidance Capability 865 The Software Inventory Collection Capability has the following 866 information needs: 868 o (M) Collection Guidance: the requester must indicate when to 869 perform a collection (e.g., at a set time, in response to a change 870 in inventory) and provide any other relevant guidance. 872 o (M) Request Type: the requester must indicate whether a "complete" 873 or "update" inventory should be performed and transmitted. 875 o (M) Unique Endpoint Identifier: the endpoint whose software 876 inventory data is to be collected must be identified. 878 o (M) Endpoint Software Inventory: a description of the current list 879 of software on the endpoint must be supplied. 881 At the completion of the Collect Inventory operation the Software 882 Inventory Producer will send an enumeration of installed software to 883 the appropriate Software Inventory Collection Consumer(s) or 884 Repositories. 886 7.4.3. Evaluate Software Inventory Posture 888 The "Evaluate Software Inventory Posture" operation involves the 889 following architecture components: 891 o Software Inventory Evaluation Capability 893 o Software Inventory Collection Consumers 895 o Software Inventory Evaluation Producers and Consumers 897 o Software Inventory Evaluation Guidance Capability 899 The Software Inventory Evaluation Capability is the component which 900 compares endpoint inventory information to current security guidance, 901 and notes any deviations from what is expected. 903 The Software Inventory Evaluation Capability has the following 904 information needs: 906 o (M) Endpoint Identifier: the endpoint whose inventory posture is 907 to be assessed must be identified. 
909 o (M) Endpoint Software Inventory: a description of the current 910 software inventory of the endpoint must be supplied. 912 o (M) Software Inventory Evaluation Guidance: all guidance pertinent 913 to performing an evaluation or assessment of software inventory 914 must be supplied. 916 o (M) Software Inventory Evaluation Results: the results from the 917 evaluation of the software inventory against the guidance. 919 The outcome of this operation is that the Software Inventory 920 Evaluation Capability identifies any deviations from guidance related 921 to the current inventory of software products installed on the 922 endpoint. 924 7.4.4. Report Evaluation Results 926 The "Report Evaluation Results" operation involves the following 927 architecture components: 929 o Software Inventory Results Report Capability 931 o Software Inventory Results Producers and Consumers 933 o Software Inventory Evaluation Consumer 935 o Software Inventory Reporting Guidance Capability 937 The Software Inventory Results Report Capability has the following 938 information needs: 940 o (M) Endpoint Identifier: the endpoint whose inventory posture 941 assessment is to be reported must be identified. 943 o (M) Software Inventory Reporting Guidance: all guidance pertinent 944 to generating and reporting software inventory assessment results 945 must be supplied. 947 o (M) Software Inventory Evaluation Results: the results from the 948 evaluation of the software inventory must be supplied. 950 o (M) Software Inventory Results Report: the report generated by 951 applying the reporting guidance to the evaluation results. 953 8. Configuration Management 955 This section presents an information model for the collection of 956 software configuration posture attributes.
Software configuration 957 collection encompasses the subset of tasks within security automation 958 and continuous monitoring involved in the collection of important 959 settings from an endpoint and transmitting those settings to posture 960 attribute data consumers that store the data and/or perform 961 enterprise-level security posture assessments. 963 For example, an operating system may enforce configurable password 964 complexity policies. As part of assessing the current complexity 965 requirements of a network of endpoints using this operating system, 966 a SACM tool will interact with the operating systems using a 967 standardized protocol to retrieve the value of the minimum password 968 length setting. This tool will then verify that each endpoint's 969 minimum password length setting meets (or potentially exceeds) the 970 organizational requirement, and will then report inconsistent 971 endpoints to the responsible administrator for action. 973 8.1. Core Information Needs 975 Unique Endpoint Identifier: We need to be able to relate the posture 976 attribute data and assessment results with an endpoint. 978 Configuration Item Identifier: We need to be able to uniquely 979 identify high-level, cross-platform configuration statements 980 that can be interpreted and mapped to low-level, platform- 981 specific configuration settings by primary source vendors for 982 their platforms. 984 Platform Configuration Item Identifier(s): We need to know what low- 985 level configuration items map to the high-level configuration 986 items in order to collect posture attribute data from 987 specific platforms. 989 Posture Attributes: We need to be able to represent posture 990 attribute data collected from an endpoint for use in 991 assessments. 993 8.2.
Process Area Description 995 In addition to the compilation of endpoint inventory data, there is a 996 need to compile and assess posture attribute values from an endpoint 997 to ensure that software on an endpoint has been configured correctly. 998 The configuration management security automation domain encompasses a 999 wide range of information needs to define, collect, and evaluate 1000 posture attributes across a myriad of operating systems, 1001 applications, and endpoint types. Configuration Management requires 1002 that there is guidance about how and what to collect. The 1003 establishment of all of this guidance is something that needs to be 1004 done before the assessment trigger event, and needs to be done in a 1005 way that can scale and be sustained over the lifecycle of the 1006 applicable software products. 1008 8.2.1. The Existence of Configuration Item Guidance 1010 The model for software configuration collection relies on two main 1011 components: (1) the identification of software configuration items 1012 and (2) the representation of posture attribute data from an 1013 endpoint. For the identification of software configuration items, 1014 the primary objective is for the security community to develop a 1015 high-level, cross-platform identifier known as a "configuration item" 1016 that can then be used by primary source vendors to map to low-level 1017 configuration settings called "platform configuration items" for 1018 their software. The benefits of this are that a single organization 1019 is not responsible for maintaining the entire set of configuration 1020 items for all platforms and the primary source vendors are given the 1021 flexibility to determine what a particular configuration item means 1022 for their software. From a practical perspective, this will likely 1023 require a set of federated registries for both "configuration items" 1024 and "platform configuration items". 
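Such a pair of federated registries might be sketched as below; the identifiers are hypothetical, and the minimum-password-length setting echoes the example given at the start of this section.

```python
# High-level, cross-platform "configuration items" (shared registry).
CONFIGURATION_ITEMS = {
    "ci:min-password-length": "Passwords must meet a minimum length",
}

# Low-level "platform configuration items", mapped per platform by the
# primary source vendor for that platform (federated registries).
PLATFORM_CONFIGURATION_ITEMS = {
    ("ci:min-password-length", "os-a"): "pci:os-a:minimum_password_length",
    ("ci:min-password-length", "os-b"): "pci:os-b:passwd.min_len",
}

def resolve(configuration_item, platform):
    """Map a cross-platform configuration item to the vendor-defined
    platform configuration item used for collection on one platform."""
    return PLATFORM_CONFIGURATION_ITEMS[(configuration_item, platform)]
```

Because each vendor maintains only its own mappings, no single organization has to maintain the entire set of configuration items for all platforms.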
An example is the "configuration 1025 item" that code should not automatically run when introduced into a 1026 system without some entity intentionally invoking that operation. 1027 One associated "platform configuration item" for Windows could be to 1028 disable autorun. 1030 With regard to the model that represents posture attribute data from 1031 an endpoint, there are three components: (1) the linking of posture 1032 attribute data to the specific endpoint from which it was collected, 1033 (2) a generic posture attribute that can be extended by primary 1034 source vendors, and (3) the actual extensions of this generic posture 1035 attribute by primary source vendors for their platforms. The first 1036 component, known as "collected posture attributes", associates the 1037 posture attribute data collected from an endpoint with that endpoint 1038 through the use of the Unique Endpoint Identifier previously 1039 mentioned in this model. It may also include other metadata about 1040 the collection such as the timestamp and what Posture Attribute 1041 Producer was used. This model can be used as the payload for 1042 messages of standardized protocols that are responsible for 1043 transmitting and receiving posture attribute data between Posture 1044 Attribute Producers and Consumers. The second component is the 1045 "posture attribute" which provides metadata common to all posture 1046 attributes regardless of how they are extended to meet the needs of a 1047 particular platform. Finally, the third component is the "platform 1048 posture attribute" which is left as an extension point for primary 1049 source vendors to fulfill the posture attribute data needs for their 1050 platforms. With this model, they can not only define the structure 1051 of the posture attribute value data and the data type, but they can 1052 also specify any additional metadata about the posture attribute that 1053 they feel is relevant for posture attribute data consumers. 1055 8.2.2.
Configuration Collection Guidance 1057 Collection guidance provides instructions on how to collect posture 1058 attributes from an endpoint and may include information such as a 1059 list of posture attributes that may be collected, a list of posture 1060 attributes to collect, and metadata that provides details on when and 1061 how to collect posture attributes on an endpoint. The development of 1062 this guidance is best performed by the primary source vendor who is 1063 the most authoritative source of information for a specific platform. 1064 However, if necessary, organizations can translate that information 1065 into actionable collection guidance. 1067 8.2.3. Configuration Evaluation Guidance 1069 With the ability to identify and collect configuration items on an 1070 endpoint, the next logical step is to assess the collected posture 1071 attribute data against some known state or to check for some specific 1072 conditions of interest. This will require the creation of evaluation 1073 guidance. Evaluation guidance provides instructions on how to 1074 evaluate collected posture attributes from an endpoint against a 1075 known state or condition of interest. These instructions will 1076 express policies and conditions of interest using logical constructs, 1077 requirements for the data used during evaluation (age of data, source 1078 of data, etc.), and references to human-oriented data that provides 1079 technical, organizational, and other policy information. 1081 The evaluation guidance will need to capture what posture attributes 1082 to collect, how to collect the posture attributes (e.g., retrieve it 1083 from a configuration database, a published source, or collect it by 1084 leveraging collection guidance), any requirements on the usability of 1085 the posture attribute data, and instructions on how to evaluate the 1086 collected posture attribute data against defined policies or check 1087 for a condition of interest. 
It may also include instructions on the 1088 granularity of the results. 1090 8.2.4. Local Configuration Management Process 1092 Once an endpoint has been targeted for assessment, the first step 1093 involves understanding what collection and evaluation guidance is 1094 applicable to the endpoint. 1096 After the applicable guidance has been retrieved, the collection of 1097 posture attributes can begin. This operation may result in the 1098 collection of a subset of posture attributes on the endpoint or all 1099 of the posture attributes on the endpoint using a variety of 1100 collection mechanisms. 1102 The three primary collection mechanisms include: (1) retrieving 1103 posture attribute data from an enterprise repository, (2) software 1104 products publishing their posture attribute data, and (3) collecting the 1105 posture attribute data from the endpoint or some intermediary. When 1106 executing collection guidance, it may be necessary to use some 1107 combination of these mechanisms to get all of the required data and 1108 it may be necessary to authenticate with the endpoint, CMDB, or some 1109 other data store. 1111 In this model, a posture attribute producer may compile and transmit 1112 posture attribute data to a posture attribute consumer on any of 1113 these occasions: (1) upon request by a posture attribute consumer; 1114 (2) on a predefined schedule (e.g. daily, weekly, etc.); or (3) after 1115 some event has occurred (e.g. posture attribute value change 1116 detected, etc.). 1118 Once the posture attributes have been collected, evaluation guidance 1119 is used to assess how the collected posture attribute data compares 1120 with the predefined policies or whether or not the endpoint contains 1121 conditions of interest. This operation may occur locally on the 1122 endpoint or as part of an application that interacts with an 1123 intermediate or back-end server.
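The evaluation step just described might be sketched as follows. Real evaluation guidance uses richer logical constructs; here each policy is reduced to a hypothetical per-attribute predicate, with the minimum password length check from the example earlier in this section.

```python
def evaluate_posture(collected, guidance):
    """Compare collected posture attribute values against evaluation
    guidance, noting per-attribute compliance. 'collected' maps attribute
    names to collected values; 'guidance' maps attribute names to
    predicates expressing the policy for that attribute."""
    results = {}
    for attribute, predicate in guidance.items():
        value = collected.get(attribute)
        # An attribute that was never collected cannot be shown compliant.
        results[attribute] = value is not None and predicate(value)
    return results
```

An endpoint whose collected minimum password length falls below the policy threshold would thus evaluate to a deviation for that attribute, which the reporting step can then transmit at the appropriate granularity.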
1125 Once completed, the results of the evaluation can be transmitted to 1126 the designated locations with the appropriate level of granularity 1127 (e.g. results for individual "platform configuration items" or rolled 1128 up results for a "configuration item", etc.). Depending on the 1129 sensitivity of the evaluation results and collected posture attribute 1130 data, it may be necessary for locations receiving the results to 1131 authenticate with the sending endpoint and potentially even utilize a 1132 secure communication channel. 1134 Last in the configuration management lifecycle is the "adjudicate 1135 further action" operation where the results are processed and it is 1136 determined if follow up actions are necessary and by which parties. 1137 Further actions could include modifying configuration settings, 1138 expelling or quarantining an endpoint from the enterprise network, 1139 removing or upgrading installed software, generating an alert, 1140 documenting a compliance deviation, or performing some mitigation, 1141 among other things. This may also require re-initiating the 1142 assessment process to ensure follow up actions were completed 1143 successfully. 1145 By clearly marking the line between collection and evaluation, tools 1146 are free to implement these steps in the way that best suits the 1147 needs of the end users and that allows for flexibility and 1148 scalability across organizations of all sizes and shapes. 1150 This model allows software and hardware vendors to publish posture 1151 attributes in both proactive and reactive manners to centralized 1152 repositories for evaluation. This type of flexibility is crucial for 1153 scalable security automation in a large and diverse enterprise 1154 environment. Finally, with this described model, key stakeholders 1155 will be able to quickly and dynamically construct and execute new or 1156 updated policy to enable fast and accurate posture evaluation.
1158 Further, evaluation need not be constrained to a single repository of 1159 information (be it an endpoint or a central repository). Evaluation 1160 can occur across multiple repositories of information to reach an 1161 aggregated decision on security posture. 1163 8.3. Configuration Management Operations 1165 The Configuration Management security automation domain includes all 1166 of the processes involved in monitoring the configuration of 1167 endpoints on an enterprise network. We have defined the following 1168 operations within the configuration management domain: 1170 1. Define Guidance: Define or acquire cross-platform configuration 1171 item guidance, platform-specific configuration item guidance, 1172 collection guidance, and evaluation guidance as applicable for 1173 the endpoints that need to be assessed. This may also include 1174 verifying the integrity of the guidance. 1176 2. Collect Posture Attributes: Gather the needed posture attributes 1177 from the endpoint and report them to one or more interested 1178 posture attribute consumers. The collection of posture 1179 attributes can be initiated by a number of triggers and can be 1180 gathered using a variety of collection mechanisms. 1182 3. Evaluate Posture Attributes: Based on guidance, assess the 1183 collected posture attributes from the endpoint to determine 1184 compliance with applicable security policies or identify 1185 conditions of interest. 1187 4. Report Evaluation Results: Based on guidance, transmit the 1188 assessment results to interested report consumers with the 1189 appropriate level of granularity. 1191 8.4. Information Model Requirements 1193 8.4.1. Define Guidance 1195 The "Define Guidance" operation relies on the existence and 1196 population of three types of guidance data: (1) configuration item 1197 guidance (cross-platform guidance and platform-specific guidance), 1198 (2) collection guidance, and (3) evaluation guidance. 
1200 Therefore, the "Define Guidance" operation involves the Configuration 1201 Item, Collection, Evaluation, and Reporting Guidance Capabilities 1202 architecture components. 1204 These components generate or store information about configuration 1205 items and posture attributes, including when and how to collect them. 1206 They also include how to evaluate the collected attributes and rules 1207 around reporting the results for the desired configuration posture 1208 assessments on applicable endpoints. Configuration Guidance 1209 Capabilities can initiate requests to acquire guidance from existing 1210 data stores or have the information manually added, modified, or 1211 deleted. 1213 To express configuration item guidance, the following information is 1214 needed: 1216 o (M) Configuration Item Identifier: A persistent, unique identifier 1217 for the configuration item 1219 o (M) Configuration Item Description: A high-level description of 1220 the configuration item 1222 To express platform configuration item guidance, the following 1223 information may be needed ('M' indicates 'mandatory' and 'O' 1224 indicates 'optional'): 1226 o (M) Platform Configuration Item Identifier: A persistent, unique 1227 identifier assigned by the primary source vendor 1229 o (O) A reference to the unique, persistent configuration item 1230 identifier 1232 o (M) Platform Configuration Item Identifier Description: A low- 1233 level description of the configuration item for the specific 1234 platform 1236 o (M) Posture Attributes: A list of posture attributes that 1237 correspond to the platform configuration item 1239 o (M) References that provide additional details about the platform 1240 configuration item 1242 o (O) Additional metadata that the primary source vendor feels is 1243 relevant 1245 o (O) References to collection and evaluation guidance that they may 1246 have developed or that someone else has developed on their behalf 1248 To express "collection" guidance, the 
following information is 1249 needed: 1251 o (M) Listings of posture attribute identifiers for which values may 1252 be collected and evaluated 1254 o (M) Lists of attributes that are to be collected 1256 o (O) Metadata that includes when to collect attributes (e.g. based 1257 on interval, event, duration of collection), how and where to 1258 collect the posture attribute data (e.g. CMDB, publish, collect 1259 from endpoint or other data source, etc.) 1261 To express "evaluation" guidance, the following information is 1262 needed: 1264 o (M) Logical constructs to express policies and conditions of 1265 interest as well as the ability to ask different questions such as 1266 "what is the value?", "is a configuration item compliant with a 1267 policy?", etc. 1269 o (M) Data requirements including the age of the data and the source 1270 of the data among other things 1272 o (O) References to human-oriented data that provides technical, 1273 organizational, and other policy information 1275 This operation results in a change in posture assessment guidance and 1276 may, but need not, trigger an automatic enterprise-wide assessment. 1278 8.4.2. Collect Posture Attributes Operation 1280 The "Collect Posture Attributes" operation involves the following 1281 architectural components: 1283 o Configuration Collection Producers and Consumers 1285 o Configuration Collection Capability 1287 o Configuration Collection Guidance Capability 1289 The Configuration Collection Capability has the following information 1290 needs: 1292 o (M) Collection Guidance: the requester must indicate when to 1293 perform a collection (e.g., at a set time, in response to a change 1294 in inventory) and provide any other relevant guidance. This 1295 guidance may include information such as what posture attributes 1296 to collect, how to collect the posture attributes, and whether or 1297 not the posture attribute data should be persisted for later use. 
1299 o (M) Request Type: the requester must indicate whether a "full" or 1300 "partial" posture attribute collection should be performed. 1302 o (M) Unique Endpoint Identifier: the endpoint whose posture 1303 attribute data is to be collected must be identified. 1305 o (M) Collected Posture Attributes: the collected posture attribute 1306 data to include any required metadata. 1308 At the completion of the Collect Posture Attributes operation the 1309 Configuration Collection Producer will send the posture attributes 1310 and their values to the appropriate Configuration Collection Consumer(s) or 1311 Repositories. 1313 8.4.3. Evaluate Posture Attributes Operation 1315 The "Evaluate Posture Attributes" operation involves the following 1316 architecture components: 1318 o Configuration Evaluation Capability 1320 o Configuration Collection Consumers 1322 o Configuration Evaluation Producers and Consumers 1324 o Configuration Evaluation Guidance Capability 1326 The Configuration Evaluation Capability is the component which 1327 compares collected posture attribute data to current evaluation 1328 guidance, and notes any deviations. 1330 The Configuration Evaluation Capability has the following information 1331 needs: 1333 o (M) Unique Endpoint Identifier: the endpoint whose configuration 1334 posture is to be assessed must be identified. 1336 o (M) Collected Posture Attributes: the collected posture attribute 1337 data to assess must be supplied or retrieved prior to performing 1338 the assessment. 1340 o (M) Configuration Evaluation Guidance: all guidance pertinent to 1341 performing an evaluation of posture attribute data must be 1342 supplied. 1344 o (M) Configuration Evaluation Results: the results from the 1345 evaluation of the collected posture attributes against the 1346 guidance. 1348 8.4.4.
Report Evaluation Results Operation 1350 The "Report Evaluation Results" operation involves the following 1351 architectural components: 1353 o Configuration Results Report Capability 1355 o Configuration Results Producers and Consumers 1357 o Configuration Evaluation Consumers 1359 o Configuration Reporting Guidance Capability 1361 The Configuration Results Report Capability has the following 1362 information needs: 1364 o (M) Unique Endpoint Identifier: the endpoint whose posture 1365 attribute evaluation results are to be reported must be 1366 identified. 1368 o (M) Posture Assessment Reporting Guidance: all guidance pertinent 1369 to generating and reporting posture attribute evaluation results 1370 must be supplied. This includes information such as the level of 1371 granularity provided within the report (e.g., rolled up to the 1372 "configuration item" level, at the "platform configuration" level, 1373 raw posture attribute data, or the results of evaluating the 1374 posture attribute data against a known state), assurance of the 1375 evaluation results, etc. 1377 o (M) Configuration Evaluation Results: the results from the 1378 evaluation of the collected posture attributes against the 1379 guidance. 1381 o (M) Configuration Results Report: the report generated by applying 1382 the reporting guidance to the evaluation results. 1384 The outcome of this operation is that the Configuration Results 1385 Producer reports posture assessment results to interested 1386 Configuration Results Consumers. 1388 9. Vulnerability Management 1390 This section presents an information model for discovering extant 1391 vulnerabilities within an enterprise network due to the presence of 1392 installed software with publicly disclosed vulnerabilities. 1393 Successful vulnerability management builds on the foundation laid by 1394 endpoint management, software management, and configuration 1395 management, discussed earlier. 
We limit the scope of vulnerability 1396 management to the identification of vulnerable software. We do not 1397 currently consider mitigation or remediation of identified 1398 vulnerabilities within scope; these are important topics that deserve 1399 careful attention. Furthermore, we recognize that the mere presence 1400 of installed software with publicly disclosed vulnerabilities does 1401 not necessarily mean that an enterprise network is vulnerable to 1402 attack, as other defensive layers may effectively preclude exploits. 1403 Such defensive layers will also need to be considered within the scope of an 1404 information model that supports mitigation/remediation operations. 1406 9.1. Core Information Needs 1408 Unique Endpoint Identifiers: Each endpoint within the enterprise 1409 network must have a unique identifier so we can relate 1410 instances of vulnerable software to the endpoint(s) on which 1411 they are installed. 1413 Unique Software Identifiers: Organizations need to be able to 1414 uniquely identify and label each software product installed 1415 on an endpoint. This label must identify software to the 1416 version/patch level and must have enough fidelity to be able 1417 to associate it with other authoritative data (e.g., a 1418 listing of the hashes of the executables that are associated 1419 with the software). 1421 Unique Vulnerability Identifiers: Organizations need to be able to 1422 uniquely identify and label publicly disclosed software 1423 vulnerabilities, and associate those labels with the unique 1424 software identifiers of the software product(s) containing 1425 those vulnerabilities. 1427 9.2. Process Area Description 1429 The authors envisage that software publishers produce "vulnerability 1430 reports" about their products. They also envisage that each 1431 "vulnerability report" is disseminated in a standard format suitable 1432 for machine consumption and processing. 
Each vulnerability report 1433 shall include, at a minimum, a unique identifier of the 1434 vulnerability, and a machine-readable "applicability statement" which 1435 associates the vulnerability with one or more software products and 1436 any pertinent product configuration information. Ideally, it will also be 1437 possible for a machine to authenticate the source of a vulnerability 1438 report and to confirm that the report has not been tampered with. 1440 Each enterprise network shall support automated processes to 1441 recognize when new vulnerability reports are available from software 1442 publishers, and to retrieve and process those reports within the 1443 context of the enterprise network. 1445 The outcome of vulnerability management processes is the generation 1446 of reports and/or alerts that identify vulnerable endpoints. 1448 9.3. Vulnerability Management Process Operations 1450 We have identified the following operations necessary to carry out 1451 activities within the Vulnerability Management domain: 1453 1. Collect Vulnerability Reports: Retrieve newly published 1454 vulnerability reports from software publishers. 1456 2. Evaluate Vulnerability Posture: Based on guidance, assess the 1457 current software inventory and configuration data and determine 1458 compliance with applicable security policies or identify 1459 conditions of interest. 1461 3. Report Evaluation Results: Based on guidance, report evaluation 1462 results to interested report consumers. 1464 9.4. Information Model Requirements 1466 In this section we describe the data that enterprises will need to 1467 carry out each Vulnerability Management operation. 1469 9.4.1. 
Collect Vulnerability Reports 1471 The "Collect Vulnerability Reports" operation involves the following 1472 architectural components: 1474 o Vulnerability Collection Guidance Capability 1476 o Vulnerability Evaluation Guidance Capability 1478 The Vulnerability Collection Guidance Capability maintains a 1479 repository of guidance describing which information and 1480 attributes must be collected and which can be reused from other 1481 posture assessment collection operations. 1483 The Vulnerability Evaluation Guidance Capability creates the 1484 evaluation guidance associated with individual vulnerability reports. 1485 Each vulnerability report documents a publicly disclosed software 1486 vulnerability, and associates a unique vulnerability identifier with 1487 one or more unique identifiers of affected software products. 1489 The Vulnerability Evaluation Guidance Capability may be maintained by 1490 a software publisher or it may be a repository that aggregates 1491 vulnerability reports across multiple software publishers. 1493 If it is an aggregate repository, it will also maintain a list of 1494 publishers of vulnerability reports. 1496 The Vulnerability Evaluation Guidance Capability may operate in 1497 "push" mode, "pull" mode, or both. In "push" mode, publishers 1498 of vulnerability reports initiate contact whenever a new report 1499 becomes available. In "pull" mode, the Vulnerability Evaluation 1500 Guidance Capability initiates contact with publishers, either on a 1501 scheduled basis or in response to a request originating within the 1502 enterprise network. 1504 The Vulnerability Evaluation Guidance Capability has the following 1505 information needs: 1507 o (O) Publisher List: a new list of publishers may be supplied. 1509 o (O) Update Schedule: a new schedule for pull-mode operations may 1510 be supplied. 1512 This operation may result in a change to the enterprise's collection 1513 of vulnerability reports. 
This may, but need not, trigger an 1514 automatic enterprise-wide vulnerability posture assessment. 1516 9.4.2. Evaluate Vulnerability Posture 1518 The "Evaluate Vulnerability Posture" operation involves the following 1519 architectural components: 1521 o Vulnerability Evaluation Capability 1523 o Vulnerability Collection Guidance Capability 1525 o Vulnerability Evaluation Guidance Capability 1527 o Software Inventory Collection Producers 1529 o Configuration Evaluation or Reporting Producers (when appropriate) 1530 o Vulnerability Evaluation Consumers 1532 The Vulnerability Evaluation Capability is the component which 1533 compares information about publicly disclosed software 1534 vulnerabilities with current information about software installed 1535 within the enterprise network. When a vulnerability can be mitigated 1536 by a particular configuration, evaluation and/or reporting results 1537 can be used to determine the vulnerability posture of the endpoint. 1539 A vulnerability posture assessment may be triggered in response to 1540 any of (a) the arrival of a new vulnerability report, (b) a change in 1541 software inventory on any enterprise endpoint, or (c) a change in the 1542 configuration of any software product installed on any enterprise 1543 endpoint. This information is managed by the Vulnerability 1544 Collection Guidance Capability. 1546 The Vulnerability Evaluation Capability has the following information 1547 needs: 1549 o (M) Unique Vulnerability Identifier(s): the unique identifier of 1550 the vulnerability to be reported on must be supplied. 1552 o (M) Unique Endpoint Identifiers: the endpoints being assessed for 1553 vulnerabilities must be identified. 1555 o (M) Vulnerability Collection Guidance: all guidance pertinent to 1556 determining what previously collected posture data to use must be 1557 supplied. 1559 o (M) Endpoint Software Inventory: a description of the current 1560 software inventory of the endpoint must be supplied. 
1562 o (M) Configuration Evaluation Results or Configuration Results 1563 Report: either the results from the evaluation of the collected 1564 posture attributes against the guidance, or the report generated 1565 by applying the reporting guidance to the evaluation 1566 results. This information is required only when the 1567 vulnerability can be mitigated by configuring an 1568 installed instance of software in a particular manner. 1570 o (M) Vulnerability Evaluation Guidance: all guidance pertinent to 1571 performing an evaluation of posture data must be supplied. 1573 o (M) Vulnerability Evaluation Results: the results from the 1574 evaluation of the collected posture data against the guidance. 1576 The outcome of this operation is that a vulnerability report is 1577 delivered to Vulnerability Evaluation Consumers. 1579 9.4.3. Report Evaluation Results 1581 The "Report Evaluation Results" operation involves the following 1582 architectural components: 1584 o Vulnerability Results Report Capability 1586 o Vulnerability Results Producers and Consumers 1588 o Vulnerability Evaluation Consumers 1590 o Vulnerability Reporting Guidance Capability 1592 The Vulnerability Results Report Capability has the following 1593 information needs: 1595 o (M) Vulnerability Evaluation Results: the results from the 1596 evaluation of the collected posture data against the guidance. 1598 o (M) Unique Endpoint Identifiers: the vulnerable endpoints must be 1599 identified. 1601 o (M) Vulnerability Assessment Reporting Guidance: all guidance 1602 pertinent to generating and reporting vulnerability assessment 1603 results must be supplied (e.g., whether an alert should be generated). 1605 o (M) Vulnerability Results Report: the report generated by applying 1606 the reporting guidance to the evaluation results. 
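The core of the "Evaluate Vulnerability Posture" operation, matching vulnerability reports against endpoint software inventories to produce per-endpoint findings, can be sketched as follows. This is an illustrative Python fragment, not part of the information model; the type names, field names, and identifier strings (VulnerabilityReport, Endpoint, "VULN-0001") are all hypothetical.

```python
# Illustrative sketch only; all names are hypothetical, not from the
# SACM information model.
from dataclasses import dataclass

@dataclass
class VulnerabilityReport:
    vulnerability_id: str      # unique vulnerability identifier
    affected_software: set     # unique software identifiers affected

@dataclass
class Endpoint:
    endpoint_id: str           # unique endpoint identifier
    software_inventory: set    # unique software identifiers installed

def evaluate_vulnerability_posture(reports, endpoints):
    """Compare each vulnerability report's affected-software list
    against each endpoint's software inventory and return a list of
    (endpoint_id, vulnerability_id) findings for reporting."""
    results = []
    for endpoint in endpoints:
        for report in reports:
            # A non-empty intersection means vulnerable software is
            # installed on this endpoint.
            if endpoint.software_inventory & report.affected_software:
                results.append((endpoint.endpoint_id,
                                report.vulnerability_id))
    return results
```

A real implementation would additionally consult configuration evaluation results, since (as noted above) a vulnerability may be mitigated by a particular configuration of the installed software.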
1608 The outcome of this operation may be a report, an alert, or a set of 1609 reports and/or alerts, identifying any vulnerable endpoints within 1610 the enterprise network. 1612 10. From Information Needs to Information Elements 1614 The previous sections highlighted information needs for a set of 1615 management process areas that use posture assessment to achieve 1616 organizational security goals. A single information need may be made 1617 up of multiple information elements. Some information elements may 1618 be required for two different process areas, resulting in two 1619 different requirements. To support the core goal of 1620 collecting data once and reusing it to support multiple processes, we try 1621 to define a singular set of information elements that will support 1622 all the associated information needs. 1624 11. Information Model Elements 1626 Traditionally, one would use the SACM architecture to define 1627 interfaces that require information exchanges. Identified 1628 information elements would then be based on those exchanges. Because 1629 the SACM architecture document is still in the personal draft stage, 1630 this information model uses a different approach to the 1631 identification of information elements. First, it lists the four main 1632 endpoint posture assessment activities. Then it identifies 1633 management process areas that use endpoint posture assessment to 1634 achieve organizational security objectives. These process areas are 1635 then broken down into operations that mirror the typical workflow 1636 from the SACM Use Cases draft [I-D.ietf-sacm-use-cases]. These 1637 operations identify architectural components and their information 1638 needs. In this section, information elements derived from those 1639 information needs are mapped back to the four main activities listed 1640 above. 
1642 The original liaison statement [IM-LIAISON-STATEMENT-NIST] requested 1643 contributions for the SACM information model in the four areas 1644 described below. Based on the capabilities defined previously in 1645 this document, the requested areas alone do not provide a sufficient 1646 categorization of the necessary information model elements. 1647 The following sub-sections directly address the requested areas as 1648 follows: 1650 1. Endpoint Identification 1652 A. Section 11.1 Asset Identifiers: Describes identification of 1653 many different asset types including endpoints. 1655 2. Endpoint Characterization 1657 A. Section 11.3 Endpoint characterization: This directly maps to 1658 the requested area. 1660 3. Endpoint Attribute Expression/Representation 1662 A. Section 11.4 Posture Attribute Expression: This corresponds 1663 to the first part of "Endpoint Attribute Expression/ 1664 Representation." 1666 B. Section 11.5 Actual Value Representation: This corresponds to 1667 the second part of "Endpoint Attribute Expression/ 1668 Representation." 1670 4. Policy evaluation expression and results reporting 1671 A. Section 11.6 Evaluation Guidance: This corresponds to the 1672 first part of "Policy evaluation expression and results 1673 reporting." 1675 B. Section 11.7 Evaluation Result Reporting: This corresponds to the 1676 second part of "Policy evaluation expression and results 1677 reporting." 1679 Additionally, Section 11.2 Other Identifiers describes other 1680 important identification concepts that were not directly requested by 1681 the liaison statement. 1683 Per the liaison statement, each subsection references related work 1684 that provides a basis for potential data models. Some analysis is 1685 also included for each area of related work on how directly 1686 applicable the work is to the SACM efforts. 
In general, much of the 1687 related work does not fully address the general or use case-based 1688 requirements for SACM, but it does contain parts that can be 1689 used as the basis for data models that correspond to the information 1690 model elements. In these cases, additional work will be required by 1691 the WG to adapt the specification. In some cases, existing work can 1692 largely be used in an unmodified fashion. This is also indicated in 1693 the analysis. Due to time constraints, the work in this section is 1694 heavily biased toward previous work supported by the authors and does not 1695 reflect a comprehensive listing. An attempt has been made where 1696 possible to reference existing IETF work. Additional research and 1697 discussion is needed to include other related work in standards and 1698 technology communities that could and should be listed here. The 1699 authors intend to continue this work in subsequent revisions of this 1700 draft. 1702 Where possible, when selecting and developing data models in support 1703 of these information model elements, extension points and IANA 1704 registries SHOULD be used to provide for extensibility, which will 1705 allow future data models to be addressed. 1707 11.1. Asset Identifiers 1709 In this context an "asset" refers to "anything that has value to an 1710 organization" (see [NISTIR-7693]). This use of the term "asset" is 1711 broader than the current definition in [I-D.ietf-sacm-terminology]. 1712 To support SACM use cases, a number of different asset types will 1713 need to be addressed. For each type of asset, one or more types of asset 1714 identifier will be needed for use in establishing contextual 1715 relationships within the SACM information model. The following asset 1716 types are referenced or implied by the SACM use cases: 1718 Endpoint: Identifies an individual endpoint for which posture is 1719 collected and evaluated. 
1721 Hardware: Identifies a given type of hardware that may be installed 1722 within an endpoint. 1724 Software: Identifies a given type of software that may be installed 1725 within an endpoint. 1727 Network: Identifies a network to which a given endpoint may be 1728 connected or request a connection. 1730 Organization: Identifies an organizational unit. 1732 Person: Identifies an individual, often within an organizational 1733 context. 1735 11.1.1. Related Work 1737 11.1.1.1. Asset Identification 1739 The Asset Identification specification [NISTIR-7693] is an XML-based 1740 data model that "provides the necessary constructs to uniquely 1741 identify assets based on known identifiers and/or known information 1742 about the assets." Asset identification plays an important role in 1743 an organization's ability to quickly correlate different sets of 1744 information about assets. 1747 Asset Identification provides a relatively flat and extensible model 1748 for capturing the identifying information about one or more assets, 1749 and also provides a way to represent relationships between assets. 1751 The model is organized using an inheritance hierarchy of specialized 1752 asset types/classes (see Figure 4), providing for extension at any 1753 level of abstraction. For a given asset type, a number of properties 1754 are defined that provide for capturing identifying characteristics 1755 and the referencing of namespace qualified asset identifiers, called 1756 "synthetic IDs." 1757 The following figure illustrates the class hierarchy defined by the 1758 Asset Identification specification. 
1760 asset 1761 +-it-asset 1762 | +-circuit 1763 | +-computing-device 1764 | +-database 1765 | +-network 1766 | +-service 1767 | +-software 1768 | +-system 1769 | +-website 1770 +-data 1771 +-organization 1772 +-person 1774 Figure 4: Asset Identification Class Hierarchy 1776 This table presents a mapping of notional SACM asset types to those 1777 asset types provided by the Asset Identification specification. 1779 +--------------+------------------+---------------------------------+ 1780 | SACM Asset | Asset | Notes | 1781 | Type | Identification | | 1782 | | Type | | 1783 +--------------+------------------+---------------------------------+ 1784 | Endpoint | computing-device | This is not a direct mapping | 1785 | | | since a computing device is not | 1786 | | | required to have network- | 1787 | | | connectivity. Extension will be | 1788 | | | needed to define a directly | 1789 | | | aligned endpoint asset type. | 1790 +--------------+------------------+---------------------------------+ 1791 | Hardware | Not Applicable | The concept of hardware is not | 1792 | | | addressed by the asset | 1793 | | | identification specification. | 1794 | | | An extension can be created | 1795 | | | based on the it-asset class to | 1796 | | | address this concept. | 1797 +--------------+------------------+---------------------------------+ 1798 | Software | software | Direct mapping. | 1799 +--------------+------------------+---------------------------------+ 1800 | Network | network | Direct mapping. | 1801 +--------------+------------------+---------------------------------+ 1802 | Organization | organization | Direct mapping. | 1803 +--------------+------------------+---------------------------------+ 1804 | Person | person | Direct mapping. | 1805 +--------------+------------------+---------------------------------+ 1807 Table 1: Mapping of SACM to Asset Identification Asset Types 1809 This specification has been adopted by a number of SCAP validated 1810 products. 
It can be used to address asset identification and 1811 categorization needs within SACM with minor modification. 1813 11.1.2. Endpoint Identification 1815 A unique name for an endpoint. This is a foundational piece of 1816 information that will enable collected posture attributes to be 1817 related to the endpoint from which they were collected. It is 1818 important that this name either be created from, provide, or be 1819 associated with operational information (e.g., MAC address, hardware 1820 certificate) that is discoverable from the endpoint or its 1821 communications on the network. It is also important to have a method 1822 of endpoint identification that can persist across network sessions 1823 to allow for correlation of collected data over time. 1825 11.1.2.1. Related Work 1827 The previously introduced Asset Identification specification (see 1828 Section 11.1.1.1) provides a basis for endpoint identification using 1829 the "computing-device" class. While the meaning of this class is 1830 broader than the current definition of an endpoint in the SACM 1831 terminology [I-D.ietf-sacm-terminology], either that class or an 1832 appropriate sub-class extension can be used to capture identification 1833 information for various endpoint types. 1835 11.1.3. Software Identification 1837 A unique name for a unit of installable software. Software names 1838 should generally represent a unique release or installable version of 1839 software. Identification approaches should allow for identification 1840 of commercially available, open source, and organizationally 1841 developed custom software. As new software releases are created, a 1842 new software identifier should be created by the releasing party 1843 (e.g., software creator, publisher, licensor). Such an identifier is 1844 useful to: 1846 o Relate metadata that describes the characteristics of the unit of 1847 software, potentially stored in a repository of software 1848 information. 
Typically, the software identifier would be used as 1849 an index into such a repository. 1851 o Indicate the presence of the software unit on a given endpoint. 1853 o Determine which endpoints are the targets for an assessment 1854 based on the software installed on those endpoints. 1856 o Define guidance related to a software unit that represents 1857 collection, evaluation, or other automatable policies. 1859 In general, an extensible method of software identification is needed 1860 to provide for adequate coverage and to address legacy identification 1861 approaches. Use of an IANA registry supporting multiple software 1862 identification methods would be an ideal way forward. 1864 11.1.3.1. Related Work 1866 While we are not aware of a one-size-fits-all solution for software 1867 identification, there are two existing specifications that should be 1868 considered as part of the solution set. They are described in the 1869 following subsections. 1871 11.1.3.1.1. Common Platform Enumeration 1873 11.1.3.1.1.1. Background 1875 The Common Platform Enumeration (CPE) [CPE-WEBSITE] is composed of a 1876 family of four specifications that are layered to build on lower-level 1877 functionality. The following describes each specification: 1879 1. CPE Naming: A standard machine-readable format [NISTIR-7695] for 1880 encoding names of IT products and platforms. This defines the 1881 notation used to encode the vendor, software name, edition, 1882 version, and other related information for each platform or 1883 product. With the 2.3 version of CPE, a second, more advanced 1884 notation was added to the original colon-delimited notation for 1885 CPE naming. 1887 2. CPE Matching: A set of procedures [NISTIR-7696] for comparing 1888 names. This describes how to compare two CPE names to one 1889 another. It describes a logical method that ensures that 1890 automated systems comparing two CPE names would arrive at the 1891 same conclusion. 1893 3. 
CPE Applicability Language: An XML-based language [NISTIR-7698] 1894 for constructing "applicability statements" that combine CPE 1895 names with simple logical operators. 1897 4. CPE Dictionary: An XML-based catalog format [NISTIR-7697] that 1898 enumerates CPE Names and associated metadata. It details how to 1899 encode the information found in a CPE Dictionary, thereby 1900 allowing multiple organizations to maintain compatible CPE 1901 Dictionaries. 1903 The primary use case of CPE is for exchanging software inventory 1904 data, as it allows the usage of unique names to identify software 1905 platforms and products present on an endpoint. NIST currently 1906 maintains and updates a dictionary of all agreed-upon CPE names, and 1907 is responsible for ongoing maintenance of the standard. Many of the 1908 names in the CPE dictionary have been provided by vendors and other 1909 third parties. 1911 While the effort has seen wide adoption, most notably within the US 1912 Government, a number of critical flaws have been identified. The 1913 most significant issues associated with the effort are: 1915 o Because there is no requirement for vendors to publish their own 1916 official CPE names, CPE necessarily requires one or more 1917 organizations for curation. This centralized curation requirement 1918 means that the effort has difficulty scaling. 1920 o Not enough primary source vendors provide platform and product 1921 naming information. As a result, too much of the 1922 effort is pushed out onto third-party groups and non-authoritative 1923 organizations. This exacerbates the ambiguity in names used for 1924 identical platforms and products and further reduces the utility 1925 of the effort. 1927 11.1.3.1.1.2. Applicability to Software Identification 1929 The Common Platform Enumeration (CPE) Naming specification version 1930 2.3 defines a scheme for human-readable standardized identifiers of 1931 hardware and software products. 
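As a rough illustration of the CPE 2.3 naming scheme, a "formatted string" binding carries eleven colon-separated attributes after the "cpe:2.3" prefix (part, vendor, product, version, update, edition, language, sw_edition, target_sw, target_hw, other). The following Python sketch splits such a name into its attributes; it deliberately ignores the specification's character-escaping rules (e.g., escaped colons), so it is a teaching aid rather than a conformant parser.

```python
# Simplified CPE 2.3 formatted-string split; does NOT handle the
# escaping rules defined by the CPE Naming specification.
CPE23_ATTRIBUTES = (
    "part", "vendor", "product", "version", "update", "edition",
    "language", "sw_edition", "target_sw", "target_hw", "other",
)

def parse_cpe23(name):
    """Split a CPE 2.3 formatted string into a dict of attributes."""
    prefix, spec_version, *values = name.split(":")
    if prefix != "cpe" or spec_version != "2.3" or len(values) != 11:
        raise ValueError("not a CPE 2.3 formatted string")
    return dict(zip(CPE23_ATTRIBUTES, values))
```

For example, parsing "cpe:2.3:a:example:widget:1.0:\*:\*:\*:\*:\*:\*:\*" (a hypothetical name, where "\*" is the ANY wildcard) yields vendor "example", product "widget", and version "1.0".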
1933 CPE names are the identifier format for software and hardware 1934 products used in SCAP 1.2 and are currently adopted by a number of 1935 SCAP product vendors. 1937 CPE names can be directly referenced in the Asset Identification 1938 software class (see Section 11.1.1.1). 1940 Although relevant, CPE has an unsustainable maintenance "tail" due to 1941 the need for centralized curation and naming-consistency enforcement. 1942 It is mentioned in this document because of the historic inclusion of 1943 CPE in SCAP and the implementation of the specification in a 1944 number of security processes and products. Going forward, software 1945 identification (SWID) tags are recommended as a replacement for CPE. 1946 To this end, work has been started to align both efforts to provide 1947 translation for software units identified using SWID tags to CPE 1948 Names. This translation would allow tools that currently use CPE- 1949 based identifiers to map to SWID identifiers during a transition 1950 period. 1952 11.1.3.1.2. Software Identification (SWID) Tags 1954 The software identification tag specification [ISO.19770-2] is an 1955 XML-based data model that is used to describe a unit of installable 1956 software. A SWID tag contains data elements that: 1958 o Identify a specific unit of installable software, 1960 o Enable categorization of the software (e.g., edition, bundle), 1962 o Identify and hash software artifacts (e.g., 1963 executables, shared libraries), 1965 o Reference related software and dependencies, and 1967 o Include extensible metadata. 1969 SWID tags can be associated with software installation media, 1970 installed software, software updates (e.g., service packs, patches, 1971 hotfixes), and redistributable components. SWID tags also provide 1972 a mechanism to relate these concepts to each other. 
For example, 1973 installed software can be related back to the original installation 1974 media, patches can be related to the software that they patch, and 1975 software dependencies can be described for required redistributable 1976 components. SWID tags are ideally created at build time by the 1977 software creator, publisher, or licensor; are bundled with software 1978 installers; and are deployed to an endpoint during software 1979 installation. 1981 SWID tags should be considered for two primary uses: 1983 1. As the data format for exchanging descriptive information about 1984 software products, and 1986 2. As the source of unique identifiers for installed software. 1988 In addition to usage for software identification, a SWID tag can 1989 provide the necessary data needed to target guidance based on 1990 included metadata, and to support verification of installed software 1991 and software media using cryptographic hashes. This added 1992 information increases the value of using SWID tags as part of the 1993 larger security automation and continuous monitoring solution space. 1995 11.1.4. Hardware Identification 1997 Due to time constraints, research into information elements and 1998 related work for identifying hardware is not included in this 1999 revision of the information model. 2001 11.2. Other Identifiers 2003 In addition to identifying core asset types, it is also necessary to 2004 have stable, globally unique identifiers to represent other core 2005 concepts pertaining to posture attribute collection and evaluation. 2006 The concept of "global uniqueness" ensures that identifiers provided 2007 by multiple organizations do not collide. This may be handled by a 2008 number of different mechanisms (e.g., use of namespaces). 2010 11.2.1. Platform Configuration Item Identifier 2012 A name for a low-level, platform-dependent configuration mechanism as 2013 determined by the authoritative primary source vendor. 
New 2014 identifiers will be created when the source vendor makes changes to 2015 the underlying platform capabilities (e.g., adding new settings, 2016 replacing old settings with new settings). Once created, each 2017 identifier should remain consistent with regard to what it 2018 represents. Generally, a change in meaning would constitute the 2019 creation of a new identifier. 2021 For example, if the configuration item is for "automatic execution of 2022 code", then the platform vendor would name the low-level mechanism 2023 for their platform (e.g., autorun for mounted media). 2025 11.2.1.1. Related Work 2027 11.2.1.1.1. Common Configuration Enumeration 2029 The Common Configuration Enumeration (CCE) [CCE] is an effort managed 2030 by NIST. CCE provides a unique identifier for platform-specific 2031 configuration items that facilitates fast and accurate correlation of 2032 configuration items across multiple information sources and tools. 2033 CCE does this by providing an identifier, a human-readable 2034 description of the configuration control, parameters needed to 2035 implement the configuration control, various technical mechanisms 2036 that can be used to implement the configuration control, and 2037 references to documentation that describe the configuration control 2038 in more detail. 2040 By vendor request, NIST issues new blocks of CCE identifiers. 2041 Vendors then populate the required fields and provide the details 2042 back to NIST for publication in the "CCE List", a consolidated 2043 listing of assigned CCE identifiers and associated data. Many 2044 vendors also include references to these identifiers in web pages, 2045 SCAP content, and prose configuration guides they produce. 2047 CCE is the identifier format for platform-specific configuration items 2048 in SCAP and is currently adopted by a number of SCAP product vendors. 
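To make the CCE data elements described above concrete, the following Python sketch models a CCE entry and the correlation use case that CCE identifiers serve. The field names, the placeholder identifier strings, and the correlate helper are illustrative assumptions, not the actual schema of the CCE List.

```python
# Illustrative sketch; field names and identifiers are hypothetical,
# not the actual CCE List schema.
from dataclasses import dataclass, field

@dataclass
class CCEEntry:
    cce_id: str                 # the assigned CCE identifier string
    description: str            # human-readable configuration control
    parameters: list = field(default_factory=list)
    technical_mechanisms: list = field(default_factory=list)
    references: list = field(default_factory=list)

def correlate(sources):
    """Group configuration findings from multiple tools or data
    sources by their shared CCE identifier, illustrating the
    cross-source correlation that CCE enables."""
    by_id = {}
    for source_name, findings in sources.items():
        for cce_id, value in findings:
            by_id.setdefault(cce_id, []).append((source_name, value))
    return by_id
```

For example, findings keyed by the same (hypothetical) identifier from a scanner and a CMDB would be grouped together, allowing a consumer to compare the values each source reports for that configuration item.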
2050 While CCE is largely supported as a crowd-sourced effort, it does 2051 rely on a central point of coordination for assignment of new CCE 2052 identifiers. This approach to assignment requires a single 2053 organization, currently NIST, to manage allocations of CCE 2054 identifiers, which does not scale well and introduces sustainability 2055 challenges for large volumes of identifier assignment. If this 2056 approach is used going forward by SACM, a namespaced approach is 2057 recommended for identifier assignment that allows vendors to manage 2058 their own namespace of CCE identifiers. This change would require 2059 additional work to specify and implement. 2061 11.2.1.1.2. Open Vulnerability and Assessment Language 2063 11.2.1.1.2.1. Background 2065 The Open Vulnerability and Assessment Language (OVAL(R)) is an XML 2066 schema-based data model developed as part of a public-private 2067 information security community effort to standardize how to assess 2068 and report upon the security posture of endpoints. OVAL provides an 2069 established framework for making assertions about an endpoint's 2070 posture by standardizing the three main steps of the assessment 2071 process: 2073 1. representing the current endpoint posture; 2075 2. analyzing the endpoint for the presence of the specified posture; 2076 and 2078 3. representing the results of the assessment. 2080 OVAL facilitates collaboration and information sharing among the 2081 information security community and interoperability among tools. 2082 OVAL is used internationally and has been implemented by a number of 2083 operating system and security tools vendors. 2085 The following figure illustrates the OVAL data model.
2087 +------------+ 2088 +-----------------+ | Variables | 2089 | Common <---+ | 2090 +--------> | +------------+ 2091 | | | +------------+ 2092 | | <---+ Directives | 2093 | +--------^----^---+ | | 2094 | | | +--------+---+ 2095 | | +-----+ | 2096 | | | | 2097 | +--------+--------+ | | 2098 | | System | | | 2099 | | Characteristics | | | 2100 +------+------+ | | | +--------v---+ 2101 | Definitions | | | | | Results | 2102 | | +--------^--------+ +-+ | 2103 | | | | | 2104 | | +------------+ | 2105 +------^------+ +-------+----+ 2106 | | 2107 +--------------------------------------+ 2109 Note: The direction of the arrows indicates a model dependency 2111 Figure 5: The OVAL Data Model 2113 The OVAL data model [OVAL-LANGUAGE], visualized in Figure 5, is 2114 composed of a number of different components. The components are: 2116 o Common: Constructs, enumerations, and identifier formats that are 2117 used throughout the other model components. 2119 o Definitions: Constructs that describe assertions about system 2120 state. This component also includes constructs for internal 2121 variable creation and manipulation through a variety of functions. 2122 The core elements are: 2124 * Definition: A collection of logical statements that are 2125 combined to form an assertion based on endpoint state. 2127 * Test (platform specific): A generalized construct that is 2128 extended in platform schema to describe the evaluation of 2129 expected against actual state. 2131 * Object (platform specific): A generalized construct that is 2132 extended in platform schema to describe a collectable aspect of 2133 endpoint posture. 2135 * State (platform specific): A generalized construct that is 2136 extended in platform schema to describe a set of criteria for 2137 evaluating posture attributes. 2139 o Variables: Constructs that allow for the parameterization of the 2140 elements used in the Definitions component based on externally 2141 provided values.
2143 o System Characteristics: Constructs that represent collected 2144 posture from one or more endpoints. This element may be embedded 2145 within the Results component, or may be exchanged separately to 2146 allow for separate collection and evaluation. The core elements 2147 of this component are: 2149 * CollectedObject: Provides a mapping of collected Items to 2150 Objects defined in the Definitions component. 2152 * Item (platform specific): A generalized construct that is 2153 extended in platform schema to describe specific posture 2154 attributes pertaining to an aspect of endpoint state. 2156 o Results: Constructs that represent the result of evaluating 2157 expected state (state elements) against actual state (item 2158 elements). It includes the true/false evaluation result for each 2159 evaluated Definition and Test. System characteristics are 2160 embedded as well to provide low-level posture details. 2162 o Directives: Constructs that enable result reporting detail to be 2163 declared, allowing result production to be customized. 2165 End-user organizations and vendors create assessment guidance using 2166 OVAL by creating XML instances based on the XML schema implementation 2167 of the OVAL Definitions model. The OVAL Definitions model defines a 2168 structured identifier format for each of the Definition, Test, 2169 Object, State, and Item elements. Each instantiation of these 2170 elements in OVAL XML instances is assigned a unique identifier based 2171 on that element's identifier syntax. These XML instances are 2172 used by tools that support OVAL to drive collection and evaluation of 2173 endpoint posture. When posture collection is performed, an OVAL 2174 Systems Characteristics XML instance is generated based on the 2175 collected posture attributes. When this collected posture is 2176 evaluated, an OVAL Result XML instance is generated that contains the 2177 results of the evaluation.
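The structured identifier format mentioned above follows the pattern oval:&lt;namespace&gt;:&lt;type&gt;:&lt;integer&gt;, where the type code distinguishes definitions (def), tests (tst), objects (obj), states (ste), and variables (var). A minimal validation sketch, assuming this reading of the OVAL identifier syntax:

```python
import re

# OVAL identifier pattern: "oval:<namespace>:<type>:<id>"; the namespace is a
# reverse-DNS-style string and the numeric id is assigned by the content author.
OVAL_ID = re.compile(r"^oval:[A-Za-z0-9._\-]+:(def|tst|obj|ste|var):[1-9][0-9]*$")

def is_oval_id(identifier: str) -> bool:
    """Return True if the string matches the OVAL identifier syntax sketched here."""
    return OVAL_ID.match(identifier) is not None

# For example:
#   is_oval_id("oval:org.mitre.oval:def:12345")  -> True
#   is_oval_id("oval:org.mitre.oval:rule:1")     -> False (unknown type code)
```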
In most implementations, the collection 2178 and evaluation are performed at the same time. 2180 Many of the elements in the OVAL model (i.e., Test, Object, State, 2181 Item) are abstract, requiring a platform-specific schema 2182 implementation, called a "Component Model" in OVAL. These platform 2183 schema implementations are where platform-specific posture attributes 2184 are defined. For each aspect of platform posture, a specialized OVAL 2185 Object, which appears in the OVAL Definitions model, provides a 2186 format for expressing what posture attribute data to collect from an 2187 endpoint through the specification of a datatype, operation, and 2188 value(s) on entities that uniquely identify a platform configuration 2189 item. For example, a hive, key, and name are used to identify a 2190 registry key on a Windows endpoint. Each specialized OVAL Object has 2191 a corresponding specialized State, which represents the posture 2192 attributes that can be evaluated, and an Item which represents the 2193 specific posture attributes that can be collected. Additionally, a 2194 specialized Test exists that allows collected Items corresponding to 2195 a CollectedObject to be evaluated against one or more specialized 2196 States of the same posture type. 2198 The OVAL language provides a generalized approach suitable for 2199 posture collection and evaluation. While this approach does provide 2200 for a degree of extensibility, there are some concerns that should be 2201 addressed in order to make OVAL a viable basis for SACM's use. These 2202 concerns include: 2204 o Platform Schema Creation and Maintenance: In OVAL platform schema, 2205 the OVAL data model maintains a tight binding between the Test, 2206 Object, State, and Item elements used to assess an aspect of 2207 endpoint posture. Creating a new platform schema or adding a new 2208 posture aspect to an existing platform schema can be a very labor-2209 intensive process.
Doing so often involves researching and 2210 understanding system APIs and can be prone to issues with 2211 inconsistency within and between platforms. To simplify platform 2212 schema creation and maintenance, the model needs to be evolved to 2213 generalize the Test, Object, and State elements, requiring only 2214 the definition of an Item representation. 2216 o Given an XML instance based on the Definitions model, it is not 2217 clear in the specification how incremental collection and 2218 evaluation can occur. Because of this, typically, OVAL 2219 assessments are performed on a periodic basis. The OVAL 2220 specification needs to be enhanced to include specifications for 2221 performing event-based and incremental assessment in addition to 2222 full periodic collection. 2224 o Defining new functions for manipulating variable values is currently 2225 handled in the Definitions schema. This requires revision to the 2226 core language to add new functions. The OVAL specification needs 2227 to be evolved to provide for greater extensibility in this area, 2228 allowing extension schema to define new functions. 2230 o The current process for releasing a new version of OVAL bundles 2231 releases of the core language with releases of community-recognized 2232 platform schemas. The revision processes for the core and platform 2233 schema need to be decoupled. Each platform schema should use some 2234 mechanism to declare which core language version it relies on. 2236 If adopted by SACM, these issues will need to be addressed as part of 2237 the SACM engineering work to make OVAL more broadly adoptable as a 2238 general purpose data model for posture collection and evaluation. 2240 11.2.1.1.2.2. Applicability to Platform Configuration Item 2241 Identification 2243 Each OVAL Object is identified by a globally unique identifier.
This 2244 globally unique identifier could be used by the SACM community to 2245 identify platform-specific configuration items and at the same time 2246 serve as collection guidance. If used in this manner, OVAL Objects 2247 would likely need to undergo changes in order to decouple them from 2248 evaluation guidance and to provide more robust collection 2249 capabilities to support the needs of the SACM community. 2251 11.2.2. Configuration Item Identifier 2253 An identifier for a high-level, platform-independent configuration 2254 control. This identification concept is necessary to allow similar 2255 configuration item concepts to be comparable across platforms. For 2256 example, a configuration item might be created for the minimum 2257 password length configuration control, which may then have a number 2258 of different platform-specific configuration settings. Without this 2259 type of identification, it will be difficult to perform evaluation of 2260 expected versus actual state in a platform-neutral way. 2262 High-level configuration items tend to change much less frequently 2263 than the platform-specific configuration items (see Section 11.2.1) 2264 that might be associated with them. To provide for the greatest 2265 amount of sustainability, collections of configuration item 2266 identifiers are best defined by specific communities of interest, 2267 while platform-specific identifiers are best defined by the source 2268 vendor of the platform. Under this model, the primary source vendors 2269 would map their platform-specific configuration controls to the 2270 appropriate platform-independent item, allowing end-user organizations 2271 to make use of these relationships. 2273 To support different communities of interest, it may be necessary to 2274 support multiple methods for identification of configuration items 2275 and for associating related metadata.
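The vendor-maintained mapping described above, from a platform-independent configuration item to the platform-specific identifiers that implement it, can be sketched as a simple lookup. All identifiers below are hypothetical placeholders:

```python
# Hypothetical mapping published by primary source vendors: one platform-neutral
# configuration item -> the platform-specific identifiers that implement it.
CONFIG_ITEM_MAP = {
    "ci:minimum-password-length": {
        "vendor-a-os": ["a-cfg:password.min_length"],
        "vendor-b-os": ["b-cfg:auth/pw-minlen"],
    },
}

def platform_specific_ids(config_item: str, platform: str) -> list:
    """Resolve a platform-neutral configuration item for one platform."""
    return CONFIG_ITEM_MAP.get(config_item, {}).get(platform, [])
```

A consumer can then evaluate "minimum password length" on either platform without hard-coding either vendor's low-level identifier.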
Use of an IANA registry 2276 supporting multiple configuration item identification methods would 2277 be an ideal way forward. To the extent possible, a small number of 2278 configuration item identification approaches is desirable, to 2279 maximize uptake by vendors who would maintain mappings of 2280 platform-specific configuration identifiers to the more general 2281 platform-neutral configuration identifiers. 2283 11.2.2.1. Related Work 2285 11.2.2.1.1. Control Correlation Identifier 2287 The Control Correlation Identifier (CCI) [CCI] is developed and 2288 managed by the United States Department of Defense (US-DoD) Defense 2289 Information Systems Agency (DISA). According to their website, CCI 2290 "provides a standard identifier and description for each of the 2291 singular, actionable statements that comprise an information 2292 assurance (IA) control or IA best practice. CCI bridges the gap 2293 between high-level policy expressions and low-level technical 2294 implementations. CCI allows a security requirement that is expressed 2295 in a high-level policy framework to be decomposed and explicitly 2296 associated with the low-level security setting(s) that must be 2297 assessed to determine compliance with the objectives of that specific 2298 security control. This ability to trace security requirements from 2299 their origin (e.g., regulations, IA frameworks) to their low-level 2300 implementation allows organizations to readily demonstrate compliance 2301 to multiple IA compliance frameworks. CCI also provides a means to 2302 objectively roll-up and compare related compliance assessment results 2303 across disparate technologies." 2305 It is recommended that this approach be analyzed as a potential 2306 candidate for use as a configuration item identifier method. 2308 Note: This reference to CCI is for informational purposes.
Since the 2309 editors do not represent DISA's interests, its inclusion in this 2310 document does not indicate the presence or lack of desire to 2311 contribute aspects of this effort to SACM. 2313 11.2.2.1.2. A Potential Alternate Approach 2315 There will likely be a desire by different communities to create 2316 different collections of configuration item identifiers. This 2317 fracturing may be caused by: 2319 o Different requirements for levels of abstraction, 2321 o Varying needs for timely maintenance of the collection, and 2322 o Differing scopes of technological needs. 2324 Due to these and other potential needs, it will be difficult to 2325 standardize around a single collection of configuration identifiers. 2326 A workable solution will be one that is scalable and usable for a 2327 broad population of end-user organizations. An alternate approach 2328 that should be considered is the definition of a data model that 2329 contains a common set of metadata attributes, perhaps supported by an 2330 extensible taxonomy, that can be assigned to platform-specific 2331 configuration items. If defined at a necessary level of granularity, 2332 it may be possible to query collections of platform-specific 2333 configuration items provided by vendors to create groupings at 2334 various levels of abstraction. By utilizing data provided by 2335 vendors, technological needs and the timeliness of information can be 2336 addressed based on customer requirements. 2338 SACM should consider this and other approaches to satisfy the need 2339 for configuration item roll-up in a way that provides the broadest 2340 benefit, while achieving a sensible degree of scalability and 2341 sustainability. 2343 11.2.3. Vulnerability Identifier 2345 A unique name for a known software flaw that exists in specific 2346 versions of one or more units of software.
One use of a 2347 vulnerability identifier in the SACM context is to associate a given 2348 flaw with the vulnerable software using software identifiers. For 2349 this reason, at a minimum, software identifiers should identify a 2350 software product to the patch or version level, and not just to the 2351 level that the product is licensed. 2353 11.2.3.1. Related Work 2355 11.2.3.1.1. Common Vulnerabilities and Exposures 2357 Common Vulnerabilities and Exposures (CVE) [CVE-WEBSITE] is a MITRE-led 2358 effort to assign common identifiers to publicly known security 2359 vulnerabilities in software to facilitate the sharing of information 2360 related to the vulnerabilities. CVE is the industry standard by 2361 which software vendors, tools, and security professionals identify 2362 vulnerabilities and could be used to address SACM's need for a 2363 vulnerability identifier. 2365 11.3. Endpoint Characterization 2367 Target when policies (collection, evaluated, guidance) apply 2369 Collection can be used to further characterize 2370 Also human input 2372 Information required to characterize an endpoint is used to determine 2373 which endpoints are the target of a posture assessment. It is also 2374 used to determine the collection, evaluation, and/or reporting 2375 policies and the associated guidance that apply to the assessment. 2376 Endpoint characterization information may be populated by: 2378 o A manual input process and entered into records associated with 2379 the endpoint, or 2381 o Using information collected and evaluated by an assessment. 2383 Regardless of the method of collection, it will be necessary to query 2384 and exchange endpoint characterization information as part of the 2385 assessment planning workflow. 2387 11.3.1. Related Work 2389 11.3.1.1. Extensible Configuration Checklist Description Format 2391 11.3.1.1.1.
Background 2393 The Extensible Configuration Checklist Description Format (XCCDF) is 2394 a specification that provides an XML-based format for expressing 2395 security checklists. The XCCDF 1.2 specification is published by the 2396 International Organization for Standardization (ISO) [ISO.18180]. 2397 XCCDF contains multiple components and capabilities, and various 2398 components align with different elements of this information model. 2400 This specification was originally published by NIST [NISTIR-7275]. 2401 When contributed to ISO Joint Technical Committee 1 (JTC 1), a 2402 comment was introduced indicating an interest in the IETF becoming 2403 the maintenance organization for this standard. If the SACM working 2404 group is interested in taking on engineering work pertaining to 2405 XCCDF, a contribution through a national body can be made to create a 2406 ballot resolution for transition of this standard to the IETF for 2407 maintenance. 2409 11.3.1.1.2. Applicability to Endpoint Characterization 2411 The target component of XCCDF provides a mechanism for capturing 2412 characteristics about an endpoint including the fully qualified 2413 domain name, network address, references to external identification 2414 information (e.g., Asset Identification), and is extensible to 2415 support other useful information (e.g., MAC address, globally unique 2416 identifier, certificate, etc.). XCCDF may serve as a good starting 2417 point for understanding the types of information that should be used 2418 to identify an endpoint. 2420 11.3.1.2. Asset Reporting Format 2422 11.3.1.2.1. Background 2424 The Asset Reporting Format (ARF) [NISTIR-7694] is a data model to 2425 express information about assets, and the relationships between 2426 assets and reports. It facilitates the reporting, correlating, and 2427 fusing of asset information within and between organizations.
ARF is 2428 vendor and technology neutral, flexible, and suited for a wide 2429 variety of reporting applications. 2431 There are four major sub-components of ARF: 2433 o Asset: The asset component element includes asset identification 2434 information for one or more assets. It simply houses assets 2435 independent of their relationships to reports. The relationship 2436 section can then link the report section to specific assets. 2438 o Report: The report component element contains one or more asset 2439 reports. An asset report is composed of content (or a link to 2440 content) about one or more assets. 2442 o Report-Request: The report-request component element contains the 2443 asset report requests, which can give context to asset reports 2444 captured in the report section. The report-request section simply 2445 houses asset report requests independent of the report which was 2446 subsequently generated. 2448 o Relationship: The relationship component element links assets, 2449 reports, and report requests together with well-defined 2450 relationships. Each relationship is defined as {subject} 2451 {predicate} {object}, where {subject} is the asset, report 2452 request, or report of interest, {predicate} is the relationship 2453 type being established, and {object} is one or more assets, report 2454 requests, or reports. 2456 11.3.1.2.2. Relationship to Endpoint Characterization 2458 For Endpoint Characterization, ARF can be used in multiple ways due 2459 to its flexibility. ARF supports the use of the Asset Identification 2460 specification (more in Section 11.3.1.2.3) to embed the 2461 representation of one or more assets as well as relationships between 2462 those assets. It also allows the inclusion of report-requests, which 2463 can provide details on what data was required for an assessment. 
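The {subject} {predicate} {object} relationships described above can be sketched as simple triples; the identifiers and predicate names here are hypothetical illustrations, not ARF's defined vocabulary:

```python
from collections import namedtuple

# An ARF relationship links a subject (asset, report, or report request)
# to one or more objects through a predicate naming the relationship type.
Relationship = namedtuple("Relationship", ["subject", "predicate", "objects"])

relationships = [
    Relationship("report-1", "is-about", ["asset-1", "asset-2"]),
    Relationship("report-1", "created-for", ["report-request-1"]),
]

def objects_for(subject: str, predicate: str, rels) -> list:
    """Collect every object linked to `subject` by `predicate`."""
    return [obj
            for r in rels
            if r.subject == subject and r.predicate == predicate
            for obj in r.objects]
```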
2465 ARF is agnostic to the data formats of the collected posture 2466 attributes and therefore can be used within the SACM Architecture to 2467 provide Endpoint Characterization without dictating data formats for 2468 the encoding of posture attributes. The embedded Asset 2469 Identification data model (see Section 11.1.1.1) can be used to 2470 characterize one or more endpoints to allow targeting for collection, 2471 evaluation, etc. Additionally, the report-request model can dictate 2472 the type of reporting that has been requested, thereby providing 2473 context as to which endpoints the guidance applies. 2475 11.3.1.2.3. Asset Identification 2477 Described earlier 2479 In the context of Endpoint Characterization, the Asset Identification 2480 data model could be used to encode information that identifies 2481 specific endpoints and/or classes of endpoints to which a particular 2482 assessment is relevant. The flexibility in the Asset Identification 2483 specification allows usage of various endpoint identifiers as defined 2484 by the SACM engineering work. 2486 As stated in Section 11.3.1.2.2, the Asset Identification 2487 specification is included within the Asset Reporting Format (ARF) 2488 and therefore can be used in concert with that specification as well. 2490 11.3.1.3. The CPE Applicability Language 2492 CPE described earlier 2494 Applicability in CPE is defined as an XML language [NISTIR-7698] for 2495 using CPE names to create applicability statements using logical 2496 expressions. These expressions can be used to create applicability 2497 statements that drive decisions about assets, such as whether 2498 to collect data, report data, or execute policy 2499 compliance checks. 2501 It is recommended that SACM evolve the CPE Applicability Language 2502 through engineering work to allow it to better fit into the security 2503 automation vision laid out by the Use Cases and Architecture for 2504 SACM.
This should include de-coupling the identification part of the 2505 language from the logical expressions, making it such that the 2506 language is agnostic to the method by which assets are identified. 2507 This will allow SWID tags, CPE names, or other identifiers to be 2508 used, perhaps supported by an IANA registry of identifier types. 2510 The other key aspect that should be evolved is the ability to make 2511 use of the Applicability Language against a centralized repository of 2512 collected posture attributes. The language should be able to make 2513 applicability statements against previously collected posture 2514 attributes, such that an enterprise can quickly query the correct set 2515 of applicable endpoints in an automated and scalable manner. 2517 11.4. Posture Attribute Expression 2519 Discuss the catalog concept. Listing of things that can be chosen 2520 from. Things we can know about. Vendors define catalogs. Ways for 2521 users to get vendor-provided catalogs. 2523 To support the collection of posture attributes, there needs to be a 2524 way for operators to identify and select from a set of platform-specific 2525 attribute(s) to collect. The same attributes 2526 will also need to be identified post-collection to associate each 2527 collected value with the endpoint as it was 2528 configured at the time of collection. To provide for 2529 extensibility, the need exists to support a variety of possible 2530 identification approaches. It is also necessary to enable vendors of 2531 software to provide operators with a listing, or catalog, of the posture 2532 attributes that can be collected. Ideally, a federated 2533 approach will be used to allow organizations to identify the location 2534 for a repository containing catalogs of posture attributes provided 2535 by authoritative primary source vendors.
By querying these 2536 repositories, operators will be able to acquire the appropriate 2537 listings of available posture attributes for their deployed assets. 2538 One or more posture attribute expressions are needed to support these 2539 exchanges. 2541 11.4.1. Related Work 2543 The ATOM Syndication Format [RFC4287] provides an extensible, 2544 flexible XML-based expression for organizing a collection of data 2545 feeds consisting of entries. This standard can be used to express 2546 one or more catalogs of posture attributes represented as data feeds. 2547 Groupings of posture attributes would be represented as entries. 2548 These entries could be defined using the data models described in the 2549 "Related Work" sections below. Additionally, this approach can also 2550 be used more generally for guidance repositories, allowing other forms 2551 of security automation guidance to be exchanged using the same 2552 format. 2554 11.4.2. Platform Configuration Attributes 2556 A low-level, platform-dependent posture attribute as determined by 2557 the authoritative primary source vendor. Collection guidance will be 2558 derived from catalogs of platform-specific posture attributes. 2560 For example, a primary source vendor would create a platform-specific 2561 posture attribute that best models the posture attribute data for 2562 their platform. 2564 11.4.2.1. Related Work 2566 11.4.2.1.1. Open Vulnerability and Assessment Language 2568 A general overview of OVAL was provided previously in 2569 Section 11.2.1.1.2.1. The OVAL System Characteristics platform 2570 extension models provide a catalog of the posture attributes that can 2571 be collected from an endpoint. In OVAL these posture attributes are 2572 further grouped into logical constructs called OVAL Items. For 2573 example, the passwordpolicy_item that is part of the Windows platform 2574 extension groups together all the posture attributes related to 2575 passwords for an endpoint running Windows (e.g.,
maximum password 2576 age, minimum password length, password complexity, etc.). The 2577 various OVAL Items defined in the OVAL System Characteristics may 2578 serve as a good starting point for the types of posture attribute data 2579 that need to be collected from endpoints. 2581 OVAL platform extension models may be shared using the ATOM 2582 Syndication Format. 2584 11.4.2.1.2. Network Configuration Protocol and YANG Data Modeling 2585 Language 2587 The Network Configuration Protocol (NETCONF) [RFC6241] defines a 2588 mechanism for managing and retrieving posture attribute data from 2589 network infrastructure endpoints. The posture attribute data that 2590 can be collected from a network infrastructure endpoint is highly 2591 extensible and can be defined using the YANG modeling language 2592 [RFC6020]. Models exist for common datatypes, interfaces, and 2593 routing subsystem information, among other subjects. The YANG 2594 modeling language may be useful in providing an extensible framework 2595 for the SACM community to define one or more catalogs of posture 2596 attribute data that can be collected from network infrastructure 2597 endpoints. 2599 Custom YANG modules may also be shared using the ATOM Syndication 2600 Format. 2602 11.4.2.1.3. Simple Network Management Protocol and Management 2603 Information Base Entry 2605 The Simple Network Management Protocol (SNMP) [RFC3411] defines a protocol 2606 for managing and retrieving posture attribute data from endpoints on a 2607 network. The posture attribute data that can be collected from an 2608 endpoint and retrieved by SNMP is defined by the Management 2609 Information Base (MIB) [RFC3418], which is a hierarchical collection of 2610 information that is referenced using Object Identifiers. Given this, 2611 MIBs may provide an extensible way for the SACM community to define a 2612 catalog of posture attribute data that can be collected from 2613 endpoints using SNMP. 2615 MIBs may be shared using the ATOM Syndication Format.
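The catalog-as-feed idea that recurs in this section can be sketched with a minimal Atom document consumed using standard XML tooling; the feed contents below are hypothetical, not a published vendor catalog:

```python
import xml.etree.ElementTree as ET

ATOM = "{http://www.w3.org/2005/Atom}"

# A hypothetical posture attribute catalog published as an Atom feed; each
# entry names one group of collectable posture attributes (cf. OVAL Items).
FEED = """\
<feed xmlns="http://www.w3.org/2005/Atom">
  <title>Example Posture Attribute Catalog</title>
  <entry><title>passwordpolicy_item</title></entry>
  <entry><title>registry_item</title></entry>
</feed>"""

def catalog_entries(feed_xml: str) -> list:
    """Return the attribute-group names listed in a catalog feed."""
    root = ET.fromstring(feed_xml)
    return [entry.findtext(ATOM + "title") for entry in root.iter(ATOM + "entry")]

# catalog_entries(FEED) -> ["passwordpolicy_item", "registry_item"]
```

The same feed/entry structure would apply whether the entries carry OVAL platform extension models, YANG modules, or MIB definitions.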
2617 11.5. Actual Value Representation 2619 Discuss instance concept. 2621 The actual value of a posture attribute is collected or published 2622 from an endpoint. The identifiers discussed previously provide names 2623 for the posture attributes (i.e., software or configuration item) 2624 that can be the subject of an assessment. The information items 2625 listed below are the actual values collected during the assessment 2626 and are all associated with a specific endpoint. 2628 11.5.1. Software Inventory 2630 A software inventory is a list of software identifiers (or content) 2631 associated with a specific endpoint. Software inventories are 2632 maintained in some organized fashion so that entities can interact 2633 with them. Just having software publish identifiers onto an endpoint 2634 is not enough; there needs to be an organized listing of all those 2635 identifiers associated with that endpoint. 2637 11.5.1.1. Related Work 2639 11.5.1.1.1. Asset Summary Reporting 2641 The Asset Summary Reporting (ASR) specification [NISTIR-7848] 2642 provides a format for capturing summary information about one or more 2643 assets. Specifically, it provides the ability to express a 2644 collection of records from some defined data source and map them to 2645 some set of assets. As a result, this specification may be useful 2646 for capturing the software installed on an endpoint, its relevant 2647 attributes, and associating it with a particular endpoint. 2649 11.5.1.1.2. Software Identification Tags 2651 SWID tags were previously introduced in Section 11.1.3.1.2. As stated 2652 before, SWID tags are ideally deployed to an endpoint during software 2653 installation. In the less ideal case, they may also be generated 2654 based on information retrieved from a proprietary software 2655 installation data store. At a minimum, a SWID tag must contain an 2656 identifier for each unit of installed software.
Given this, SWID 2657 tags may be a viable way for SACM to express detailed information 2658 about the software installed on an endpoint. 2660 11.5.2. Collected Platform Configuration Posture Attributes 2662 Configurations associated with a software instance associated with an 2663 endpoint 2665 A list of the configuration posture attributes associated with the 2666 actual values collected from the endpoint during the assessment as 2667 required/expressed by any related guidance. Additionally, each 2668 configuration posture attribute is associated with the installed 2669 software instance it pertains to. 2671 11.5.2.1. Related Work 2673 11.5.2.1.1. Open Vulnerability and Assessment Language 2675 A general overview of OVAL was provided previously in 2676 Section 11.2.1.1.2.1. As mentioned earlier, the OVAL System 2677 Characteristics platform extensions provide a catalog of the posture 2678 attributes that can be collected and assessed in the form of OVAL 2679 Items. These OVAL Items also serve as a model for representing 2680 posture attribute data and associated values that are collected from 2681 an endpoint. Furthermore, the OVAL System Characteristics model 2682 provides a system_info construct that captures information that 2683 identifies and characterizes the endpoint from which the posture 2684 attribute data was collected. Specifically, it includes operating 2685 system name, operating system version, endpoint architecture, 2686 hostname, network interfaces, and an extensible construct to support 2687 arbitrary additional information that may be useful in identifying 2688 the endpoint in an enterprise, such as information captured in Asset 2689 Identification constructs. The OVAL System Characteristics model 2690 could serve as a useful starting point for representing posture 2691 attribute data collected from an endpoint, although it may need to 2692 undergo some changes to satisfy the needs of the SACM community. 2694 11.5.2.1.2.
NETCONF-Based Collection 2696 Introduced earlier in Section 11.4.2.1.2, NETCONF defines a protocol 2697 for managing and retrieving posture attribute data from network 2698 infrastructure endpoints. NETCONF provides the <get-config> and <get> 2699 operations to retrieve the configuration data, and 2700 configuration data and state data respectively, from a network 2701 infrastructure endpoint. Upon successful completion of these 2702 operations, the current posture attribute data of the network 2703 infrastructure endpoint will be made available. NETCONF also 2704 provides a variety of filtering mechanisms (XPath, subtree, content 2705 matching, etc.) to trim down the posture attribute data that is 2706 collected from the endpoint. Given that NETCONF is widely adopted by 2707 network infrastructure vendors, it may be useful to consider this 2708 protocol as a standardized mechanism for collecting posture attribute 2709 data from network infrastructure endpoints. 2711 As a side note, members of the OVAL Community have also developed a 2712 proposal to extend the OVAL Language to support the assessment of 2713 NETCONF configuration data [1]. The proposal leverages XPath to 2714 extract the posture attribute data of interest from the XML data 2715 returned by NETCONF. The collected posture attribute data can then 2716 be evaluated using OVAL Definitions, and the results of the evaluation 2717 can be expressed as OVAL Results. While this proposal is not 2718 currently part of the OVAL Language, it may be worth considering. 2720 11.5.2.1.3. SNMP-Based Collection 2722 SNMP, previously introduced in Section 11.4.2.1.3, defines a 2723 protocol for managing and retrieving posture attribute data from 2724 endpoints on a network [RFC3411]. SNMP provides three protocol 2725 operations [RFC3416] (GetRequest, GetNextRequest, and GetBulkRequest) 2726 for retrieving posture attribute data defined by MIB objects.
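The retrieval semantics of these operations can be illustrated without a real SNMP stack: a MIB can be modeled as a lexicographically ordered map of OIDs, with GetNextRequest returning the first object after a given OID, which is what makes a subtree "walk" possible. The toy agent below is a sketch only, not an SNMP implementation; the OIDs shown are the standard sysDescr.0 and sysName.0 objects from the MIB-2 system group, but the values are invented.

```python
# Toy model of SNMP retrieval semantics (in the spirit of RFC 3416); not a
# real agent. OIDs are tuples of integers, kept in lexicographic order.
import bisect

class ToyAgent:
    def __init__(self, objects):
        self.objects = dict(objects)
        self.oids = sorted(self.objects)  # sorted OID list supports GetNext

    def get(self, oid):
        """GetRequest: exact-match retrieval of one MIB object."""
        return self.objects.get(oid)

    def get_next(self, oid):
        """GetNextRequest: first object lexicographically after oid."""
        i = bisect.bisect_right(self.oids, oid)
        if i == len(self.oids):
            return None
        return self.oids[i], self.objects[self.oids[i]]

agent = ToyAgent({
    (1, 3, 6, 1, 2, 1, 1, 1, 0): "example router",   # sysDescr.0 (value invented)
    (1, 3, 6, 1, 2, 1, 1, 5, 0): "gw1.example.com",  # sysName.0 (value invented)
})

# Walk the system subtree by repeated GetNext, as a manager would.
oid, walked = (1, 3, 6, 1, 2, 1, 1), []
while (step := agent.get_next(oid)) is not None:
    oid, value = step
    walked.append(value)
print(walked)  # ['example router', 'gw1.example.com']
```

GetBulkRequest amounts to batching several such GetNext steps into one exchange, which is why it is preferred for retrieving large tables of posture attribute data.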
Upon 2727 successful completion of these operations, the requested posture 2728 attribute data of the endpoint will be made available to the 2729 requesting application. Given that SNMP is widely adopted by 2730 vendors, and the MIBs that define posture attribute data on an 2731 endpoint are highly extensible, it may be useful to consider this 2732 protocol as a standardized mechanism for collecting posture attribute 2733 data from endpoints in an enterprise. 2735 11.6. Evaluation Guidance 2737 11.6.1. Configuration Evaluation Guidance 2739 The evaluation guidance is applied by evaluators during posture 2740 assessment of an endpoint. This guidance must be able to reference 2741 or be associated with the following previously defined information 2742 elements: 2744 o configuration item identifiers, 2746 o platform configuration identifiers, and 2748 o collected Platform Configuration Posture Attributes. 2750 11.6.1.1. Related Work 2752 11.6.1.1.1. Open Vulnerability and Assessment Language 2754 A general overview of OVAL was provided previously in 2755 Section 11.2.1.1.2.1. The OVAL Definitions model provides an 2756 extensible framework for making assertions about the state of posture 2757 attribute data collected from an endpoint. Guidance written against 2758 this model consists of one or more OVAL Tests that can be logically 2759 combined, where each OVAL Test defines what posture attributes should 2760 be collected from an endpoint (as OVAL Objects) and optionally 2761 defines the expected state of the posture attributes (as OVAL 2762 States). While the OVAL Definitions model may be a useful starting 2763 point for evaluation guidance, it will likely require some changes to 2764 decouple collection and evaluation concepts to satisfy the needs of 2765 the SACM community. 2767 11.6.1.1.2. XCCDF Rule 2769 A general description of XCCDF was provided in Section 11.3.1.1.1.
2770 As noted there, an XCCDF document represents a checklist of items 2771 against which a given endpoint's state is compared and evaluated. An 2772 XCCDF Rule represents one assessed item in this checklist. A Rule 2773 contains a prose description of the assessed item (either for 2774 presentation to the user in a tool's user interface or for rendering 2775 into a prose checklist for human consumption) and can also contain 2776 instructions to support automated evaluation of the assessed item, if 2777 such automated assessment is possible. Automated assessment 2778 instructions can be provided either within the XCCDF Rule itself or 2779 by providing a reference to instructions expressed in other 2780 languages, such as OVAL. 2782 In order to support greater flexibility in XCCDF, checklists can be 2783 tailored to meet certain needs. One way to do this is to enable or 2784 disable certain rules that are appropriate or inappropriate to a 2785 given endpoint, respectively. For example, a single XCCDF checklist 2786 might contain check items to evaluate the configuration of an 2787 endpoint's operating system. An endpoint deployed in an enterprise's 2788 DMZ might need to be locked down more than a common internal 2789 endpoint, due to its greater exposure to attack. In this case, some 2790 operating system configuration requirements for the DMZ endpoint 2791 might be unnecessary for the internal endpoint. Nonetheless, most 2792 configuration requirements would probably remain applicable to both 2793 environments (providing a common baseline for configuration of the 2794 given operating system) and thus be common to the checking 2795 instructions for both types of endpoints. XCCDF supports this by 2796 allowing a single checklist to be defined, but then tailored to the 2797 needs of the assessed endpoint.
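A minimal sketch of this tailoring pattern, in pseudo-XCCDF terms rather than real XCCDF syntax: one shared checklist, with per-role profiles that enable or disable rules and override parameter values. The rule identifiers, profile names, and values below are hypothetical.

```python
# Toy sketch of XCCDF-style tailoring -- not real XCCDF syntax. The rule ids,
# profile names, and parameter values are hypothetical.
CHECKLIST = {
    # A parameterized rule (its expected value can be tailored per profile).
    "rule-min-password-length": {"param": "min_len", "default": 8},
    # A non-parameterized lockdown rule, relevant only to exposed endpoints.
    "rule-disable-root-ssh": {},
}

PROFILES = {
    # Internal baseline: common rules only, default parameter values.
    "internal": {"selected": {"rule-min-password-length"}, "values": {}},
    # DMZ profile: extra lockdown rule enabled, stricter parameter value.
    "dmz": {"selected": {"rule-min-password-length", "rule-disable-root-ssh"},
            "values": {"min_len": 12}},
}

def tailor(profile_name):
    """Resolve one profile against the shared checklist: which rules are
    enabled, and with which (possibly overridden) parameter value.
    A value of None means the rule takes no parameter."""
    profile = PROFILES[profile_name]
    tailored = {}
    for rule_id in sorted(profile["selected"]):
        rule = CHECKLIST[rule_id]
        param = rule.get("param")
        tailored[rule_id] = profile["values"].get(param, rule.get("default"))
    return tailored

print(tailor("internal"))  # {'rule-min-password-length': 8}
print(tailor("dmz"))  # {'rule-disable-root-ssh': None, 'rule-min-password-length': 12}
```

The dictionaries play the roles of XCCDF constructs loosely: the per-profile `values` map stands in for XCCDF Values, and each profile's rule selection stands in for an XCCDF Profile's select/deselect instructions.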
In the previous example, some Rules 2798 that apply only to the DMZ endpoint would be disabled during the 2799 assessment of an internal endpoint and would not be exercised during 2800 the assessment or count towards the assessment results. To 2801 accomplish this, XCCDF uses the CPE Applicability Language. If this 2802 applicability language is enhanced to support other aspects of 2803 endpoint characterization (see Section 11.3.1.3), XCCDF will 2804 benefit from those enhancements as well. 2806 In addition, XCCDF Rules support parameterization, allowing 2807 customization of the expected value for a given check item. For 2808 example, the DMZ endpoint might require a password of at least 12 2809 characters, while an internal endpoint may only need 8 or more 2810 characters in its password. By employing parameterization of the 2811 XCCDF Rule, the same Rule can be used when assessing either type of 2812 endpoint, and simply be provided with a different target parameter 2813 each time to reflect the different role-based requirements. Sets of 2814 customizations can be stored within the XCCDF document itself: XCCDF 2815 Values store parameter values that can be used in tailoring, while 2816 XCCDF Profiles store sets of tailoring instructions, including 2817 selection of certain Values as parameters and the enabling and 2818 disabling of certain Rules. The tailoring capabilities supported by 2819 XCCDF allow a single XCCDF document to encapsulate configuration 2820 evaluation guidance that applies to a broad range of endpoint roles. 2822 11.7. Evaluation Result Reporting 2824 11.7.1. Configuration Evaluation Results 2826 Evaluation guidance is applied during posture assessment of an 2827 endpoint to customize the behavior of evaluators. Guidance can be 2828 used to define specific result output formats or to select the 2829 level-of-detail for the generated results.
This guidance must be able to 2830 reference or be associated with the following previously defined 2831 information elements: 2833 o configuration item identifiers, 2835 o platform configuration identifiers, and 2837 o collected Platform Configuration Posture Attributes. 2839 11.7.1.1. Related Work 2841 11.7.1.1.1. XCCDF TestResults 2843 A general description of the eXtensible Configuration Checklist 2844 Description Format (XCCDF) was provided in 2845 Section 11.3.1.1.1. The XCCDF TestResult structure captures the 2846 outcome of assessing a single endpoint against the assessed items 2847 (i.e., XCCDF Rules) contained in an XCCDF instance document. XCCDF 2848 TestResults capture a number of important pieces of information about 2849 the assessment, including: 2851 o The identity of the assessed endpoint. See Section 11.3.1.1.2 for 2852 more about XCCDF structures used for endpoint identification. 2854 o Any tailoring of the checklist that might have been employed. See 2855 Section 11.6.1.1.2 for more on how XCCDF supports tailoring. 2857 o The individual results of the assessment of each enabled XCCDF 2858 Rule in the checklist. See Section 11.6.1.1.2 for more on XCCDF 2859 Rules. 2861 The individual results for a given XCCDF Rule capture only whether 2862 the rule "passed", "failed", or experienced some exceptional 2863 condition, such as if an error was encountered during assessment. 2864 XCCDF 1.2 Rule results do not capture the actual state of the 2865 endpoint. For example, an XCCDF Rule result might indicate that an 2866 endpoint failed to pass a requirement that passwords be of a length 2867 greater than or equal to 8, but it would not capture that the 2868 endpoint was, in fact, only requiring passwords of 4 or more 2869 characters. It may, however, be possible for a user to discover this 2870 information via other means.
For example, if the XCCDF Rule uses an 2871 OVAL Definition to effect the Rule's evaluation, then the actual 2872 endpoint state may be captured in the corresponding OVAL System 2873 Characteristics file. 2875 The XCCDF TestResult structure provides a useful structure for 2876 understanding the overall assessment that was conducted and the 2877 results thereof. The ability to quickly determine the Rules that are 2878 not complied with on a given endpoint allows administrators to quickly 2879 identify where remediation needs to occur. 2881 11.7.1.1.2. Open Vulnerability and Assessment Language 2883 A general overview of OVAL was provided previously in 2884 Section 11.2.1.1.2.1. OVAL Results provides a model for expressing 2885 the results of the assessment of the actual state of the posture 2886 attribute values collected from an endpoint (represented as an OVAL 2887 System Characteristics document) against the expected posture 2888 attribute values (defined in an OVAL Definitions document). Using 2889 OVAL Directives, the granularity of OVAL Results can also be 2890 specified. The OVAL Results model may be useful in providing a 2891 format for capturing the results of an assessment. 2893 11.7.1.1.3. Asset Summary Reporting 2895 A general overview of ASR was provided previously in 2896 Section 11.5.1.1.1. As ASR provides a way to report summary 2897 information about assets, it can be used within the SACM Architecture 2898 to provide a way to aggregate asset information for later use. It 2899 makes no assertions about the data formats used by the assessment, 2900 but rather provides an XML-based, record-oriented way to collect aggregated 2901 information about assets. 2903 By using ASR to collect this summary information within the SACM 2904 Architecture, one can provide a way to encode the details used by 2905 various reporting requirements, including user-definable reports. 2907 11.7.1.1.4.
ARF 2909 A general overview of ARF was provided previously in 2910 Section 11.3.1.2.1. Because ARF is data model agnostic, it can 2911 provide a flexible format for exchanging collection and evaluation 2912 information from endpoints. It additionally provides a way to encode 2913 relationships between guidance and assets, and as such, can be used 2914 to associate assessment results with guidance. This could be the 2915 guidance that directly triggered the assessment, or guidance that 2916 is run against collected posture attributes located in a central 2917 repository. 2919 11.7.2. Software Inventory Evaluation Results 2921 These are the results of an evaluation of an endpoint's software inventory 2922 against an authorized software list. The authorized software list 2923 represents the policy for what software units are allowed, 2924 prohibited, and mandatory for an endpoint. 2926 12. Acknowledgements 2928 Many of the specifications in this document have been developed in a 2929 public-private partnership with vendors and end-users. The hard work 2930 of the SCAP community is appreciated in advancing these efforts to 2931 their current level of adoption. 2933 Over the course of developing the initial draft, Brant Cheikes, Matt 2934 Hansbury, Daniel Haynes, and Charles Schmidt have contributed text to 2935 many sections of this document. 2937 13. IANA Considerations 2939 This memo includes no request to IANA. 2941 14. Security Considerations 2943 Posture Assessments need to be performed in a safe and secure manner. 2944 In that regard, there are multiple aspects of security that apply to 2945 the communications between components as well as the capabilities 2946 themselves. Due to time constraints, this information model only 2947 contains an initial listing of items that need to be considered with 2948 respect to security. This list is not exhaustive, and will need to 2949 be augmented as the model continues to be developed/refined.
2951 The initial list of security considerations includes: 2953 Authentication: Every component and asset needs to be able to 2954 identify itself and verify the identity of other components 2955 and assets. 2957 Confidentiality: Communications between components need to be 2958 protected from eavesdropping or unauthorized collection. 2959 Some communications between components and assets may need to 2960 be protected as well. 2962 Integrity: The information exchanged between components needs to be 2963 protected from modification. Some exchanges between assets 2964 and components will also have this requirement. 2966 Restricted Access: Access to the information collected, evaluated, 2967 reported, and stored should be restricted to 2968 authenticated and authorized entities. 2970 15. References 2972 15.1. Normative References 2974 [RFC2119] Bradner, S., "Key words for use in RFCs to Indicate 2975 Requirement Levels", BCP 14, RFC 2119, March 1997. 2977 15.2. Informative References 2979 [CCE] The National Institute of Standards and Technology, 2980 "Common Configuration Enumeration", 2014, 2981 . 2983 [CCI] United States Department of Defense Defense Information 2984 Systems Agency, "Control Correlation Identifier", 2014, 2985 . 2987 [CPE-WEBSITE] 2988 The National Institute of Standards and Technology, 2989 "Common Platform Enumeration", 2014, 2990 . 2992 [CVE-WEBSITE] 2993 The MITRE Corporation, "Common Vulnerabilities and 2994 Exposures", 2014, . 2996 [I-D.camwinget-sacm-requirements] 2997 Cam-Winget, N., "Secure Automation and Continuous 2998 Monitoring (SACM) Requirements", draft-camwinget-sacm- 2999 requirements-04 (work in progress), June 2014. 3001 [I-D.ietf-sacm-terminology] 3002 Waltermire, D., Montville, A., Harrington, D., and N. Cam- 3003 Winget, "Terminology for Security Assessment", draft-ietf- 3004 sacm-terminology-04 (work in progress), May 2014. 3006 [I-D.ietf-sacm-use-cases] 3007 Waltermire, D. and D.
Harrington, "Endpoint Security 3008 Posture Assessment - Enterprise Use Cases", draft-ietf- 3009 sacm-use-cases-07 (work in progress), April 2014. 3011 [IM-LIAISON-STATEMENT-NIST] 3012 Montville, A., "Liaison Statement: Call for Contributions 3013 for the SACM Information Model to NIST", May 2014, 3014 . 3016 [ISO.18180] 3017 "Information technology -- Specification for the 3018 Extensible Configuration Checklist Description Format 3019 (XCCDF) Version 1.2", ISO/IEC 18180, 2013, 3020 . 3023 [ISO.19770-2] 3024 "Information technology -- Software asset management -- 3025 Part 2: Software identification tag", ISO/IEC 19770-2, 3026 2009. 3028 [NISTIR-7275] 3029 Waltermire, D., Schmidt, C., Scarfone, K., and N. Ziring, 3030 "Specification for the Extensible Configuration Checklist 3031 Description Format (XCCDF) Version 1.2", NISTIR 7275r4, 3032 March 2013, . 3035 [NISTIR-7693] 3036 Wunder, J., Halbardier, A., and D. Waltermire, 3037 "Specification for Asset Identification 1.1", NISTIR 7693, 3038 June 2011, 3039 . 3042 [NISTIR-7694] 3043 Halbardier, A., Waltermire, D., and M. Johnson, 3044 "Specification for the Asset Reporting Format 1.1", NISTIR 3045 7694, June 2011, 3046 . 3049 [NISTIR-7695] 3050 Cheikes, B., Waltermire, D., and K. Scarfone, "Common 3051 Platform Enumeration: Naming Specification Version 2.3", 3052 NISTIR 7695, August 2011, 3053 . 3056 [NISTIR-7696] 3057 Parmelee, M., Booth, H., Waltermire, D., and K. Scarfone, 3058 "Common Platform Enumeration: Name Matching Specification 3059 Version 2.3", NISTIR 7696, August 2011, 3060 . 3063 [NISTIR-7697] 3064 Cichonski, P., Waltermire, D., and K. Scarfone, "Common 3065 Platform Enumeration: Dictionary Specification Version 3066 2.3", NISTIR 7697, August 2011, 3067 . 3070 [NISTIR-7698] 3071 Waltermire, D., Cichonski, P., and K. Scarfone, "Common 3072 Platform Enumeration: Applicability Language Specification 3073 Version 2.3", NISTIR 7698, August 2011, 3074 . 
3077 [NISTIR-7848] 3078 Davidson, M., Halbardier, A., and D. Waltermire, 3079 "Specification for the Asset Summary Reporting Format 3080 1.0", NISTIR 7848, May 2012, 3081 . 3084 [OVAL-LANGUAGE] 3085 Baker, J., Hansbury, M., and D. Haynes, "The OVAL Language 3086 Specification version 5.10.1", January 2012, 3087 . 3089 [RFC3411] Harrington, D., Presuhn, R., and B. Wijnen, "An 3090 Architecture for Describing Simple Network Management 3091 Protocol (SNMP) Management Frameworks", STD 62, RFC 3411, 3092 December 2002. 3094 [RFC3416] Presuhn, R., "Version 2 of the Protocol Operations for the 3095 Simple Network Management Protocol (SNMP)", STD 62, RFC 3096 3416, December 2002. 3098 [RFC3418] Presuhn, R., "Management Information Base (MIB) for the 3099 Simple Network Management Protocol (SNMP)", STD 62, RFC 3100 3418, December 2002. 3102 [RFC3444] Pras, A. and J. Schoenwaelder, "On the Difference between 3103 Information Models and Data Models", RFC 3444, January 3104 2003. 3106 [RFC4287] Nottingham, M., Ed. and R. Sayre, Ed., "The Atom 3107 Syndication Format", RFC 4287, December 2005. 3109 [RFC4949] Shirey, R., "Internet Security Glossary, Version 2", RFC 3110 4949, August 2007. 3112 [RFC6020] Bjorklund, M., "YANG - A Data Modeling Language for the 3113 Network Configuration Protocol (NETCONF)", RFC 6020, 3114 October 2010. 3116 [RFC6241] Enns, R., Bjorklund, M., Schoenwaelder, J., and A. 3117 Bierman, "Network Configuration Protocol (NETCONF)", RFC 3118 6241, June 2011. 3120 [SP800-117] 3121 Quinn, S., Scarfone, K., and D. Waltermire, "Guide to 3122 Adopting and Using the Security Content Automation 3123 Protocol (SCAP) Version 1.2", SP 800-117, January 2012, 3124 . 3127 [SP800-126] 3128 Waltermire, D., Quinn, S., Scarfone, K., and A. 3129 Halbardier, "The Technical Specification for the Security 3130 Content Automation Protocol (SCAP): SCAP Version 1.2", SP 3131 800-126, September 2011, 3132 . 3135 15.3. 
URIs 3137 [1] https://github.com/OVALProject/Sandbox/blob/master/x-netconf- 3138 definitions-schema.xsd 3140 Authors' Addresses 3142 David Waltermire (editor) 3143 National Institute of Standards and Technology 3144 100 Bureau Drive 3145 Gaithersburg, Maryland 20877 3146 USA 3148 Email: david.waltermire@nist.gov 3150 Kim Watson 3151 United States Department of Homeland Security 3152 DHS/CS&C/FNR 3153 245 Murray Ln. SW, Bldg 410 3154 MS0613 3155 Washington, DC 20528 3156 USA 3158 Email: kimberly.watson@hq.dhs.gov