Network Working Group                                 D. Waltermire, Ed.
Internet-Draft                                                      NIST
Intended status: Informational                              A. Montville
Expires: March 11, 2013                                               TW
                                                       September 7, 2012


  Analysis of Security Automation and Continuous Monitoring (SACM) Use
                                  Cases
                   draft-waltermire-sacm-use-cases-02

Abstract

   This document identifies foundational use cases, derived functional
   capabilities and requirements, architectural components, and the
   supporting standards needed to define an interoperable automation
   infrastructure required to support timely, accurate, and actionable
   situational awareness over an organization's IT systems.  Automation
   tools implementing a continuous monitoring approach will utilize
   this infrastructure together with existing and emerging event,
   incident, and network management standards to provide visibility
   into the state of assets, user activities, and network behavior.
   Stakeholders will be able to use these tools to aggregate and
   analyze relevant security and operational data to understand the
   organization's security posture, quantify business risk, and make
   informed decisions that support organizational objectives while
   protecting critical information.  Organizations will be able to use
   these tools to augment and automate information sharing activities
   to collaborate with partners to identify and mitigate threats.
   Other automation tools will be able to integrate with these
   capabilities to enforce policies based on human decisions to harden
   systems, prevent misuse, and reduce the overall attack surface.

Status of this Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF).
   Note that other groups may also distribute working documents as
   Internet-Drafts.  The list of current Internet-Drafts is at
   http://datatracker.ietf.org/drafts/current/.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."

   This Internet-Draft will expire on March 11, 2013.

Copyright Notice

   Copyright (c) 2012 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document.  Code Components extracted from this
   document must include Simplified BSD License text as described in
   Section 4.e of the Trust Legal Provisions and are provided without
   warranty as described in the Simplified BSD License.

Table of Contents

   1.  Introduction
     1.1.  Requirements Language
   2.  Key Concepts
   3.  Use Cases
     3.1.  UC1: System State Assessment
       3.1.1.  Goal
       3.1.2.  Main Success Scenario
       3.1.3.  Extensions
     3.2.  UC2: Enforcement of Acceptable State
       3.2.1.  Goal
       3.2.2.  Main Success Scenario
       3.2.3.  Extensions
     3.3.  UC3: Security Control Verification and Monitoring
       3.3.1.  Goal
       3.3.2.  Main Success Scenario
       3.3.3.  Extensions
   4.  Functional Capabilities
     4.1.  Capabilities Supporting UC1
       4.1.1.  Asset Management
       4.1.2.  Data Collection
         4.1.2.1.  Security Configuration Management
         4.1.2.2.  Vulnerability Management
       4.1.3.  Assessment Result Analysis
       4.1.4.  Content Management
     4.2.  Capabilities Supporting UC2
       4.2.1.  Assessment Query and Transport
       4.2.2.  Acceptable State Enforcement
     4.3.  Capabilities Supporting UC3
       4.3.1.  Tasking and Scheduling
       4.3.2.  Data Aggregation and Reporting
   5.  Functional Components
     5.1.  Asset Management
       5.1.1.  Discovery
       5.1.2.  Characterization
         5.1.2.1.  Logical
         5.1.2.2.  Security
       5.1.3.  Asset Identification
     5.2.  Security Configuration Management
       5.2.1.  Configuration Assessment
         5.2.1.1.  Non-technical Assessment
         5.2.1.2.  Technical Assessment
     5.3.  Vulnerability Management
       5.3.1.  Non-technical Vulnerability Assessment
       5.3.2.  Technical Vulnerability Assessment
     5.4.  Content Management
       5.4.1.  Control Frameworks
       5.4.2.  Configuration Standards
       5.4.3.  Scoring Models
       5.4.4.  Vulnerability Information
       5.4.5.  Patch Information
       5.4.6.  Asset Information
     5.5.  Assessment Result Analysis
       5.5.1.  Comparing Actual to Expected State
       5.5.2.  Scoring Comparison Results
       5.5.3.  Relating Comparison Results to Requirements
       5.5.4.  Relating Requirements to Control Frameworks
     5.6.  Tasking and Scheduling
       5.6.1.  Selection of Assessment Criteria
       5.6.2.  Defining In-scope Assets
       5.6.3.  Defining Periodic Assessments
       5.6.4.  Defining Assessment Triggers
     5.7.  Data Aggregation and Reporting
       5.7.1.  By Asset Characterization
       5.7.2.  By Assessment Criteria
       5.7.3.  By Control Framework
       5.7.4.  By Benchmark
       5.7.5.  By Ad Hoc/Extended Properties
   6.  Data Exchange Models and Communications Protocols
     6.1.  Data Exchange Models
       6.1.1.  Control Expression
         6.1.1.1.  Technical Control Expression
         6.1.1.2.  Non-technical Control Expression
       6.1.2.  Control Frameworks
         6.1.2.1.  Logical Expression and Syntactic Binding(s)
         6.1.2.2.  Relationships
         6.1.2.3.  Substantiation (Control Requirement)
         6.1.2.4.  Reporting
       6.1.3.  Asset Expressions
         6.1.3.1.  Asset Identification
         6.1.3.2.  Asset Classification (Type)
         6.1.3.3.  Asset Attributes
         6.1.3.4.  Information Expression (non-identifying)
         6.1.3.5.  Reporting
       6.1.4.  Benchmark/Checklist Expression
         6.1.4.1.  Logical Expression and Bindings
         6.1.4.2.  Checking Systems
         6.1.4.3.  Results and Scoring
         6.1.4.4.  Reporting
       6.1.5.  Check Language
         6.1.5.1.  Logical Expression and Syntactic Binding(s)
         6.1.5.2.  Reporting
       6.1.6.  Targeting Expression
         6.1.6.1.  Information Owner
         6.1.6.2.  System Owner
         6.1.6.3.  Assessor
         6.1.6.4.  Computing Device
         6.1.6.5.  Targeting Extensibility
     6.2.  Communication Protocols
       6.2.1.  Asset Management Interface
   7.  IANA Considerations
   8.  Security Considerations
   9.  Acknowledgements
   10. References
     10.1.  Normative References
     10.2.  Informative References
   Appendix A.  Additional Stuff
   Authors' Addresses

1.  Introduction

   This document addresses foundational use cases in security
   automation.  These use cases may be considered when establishing a
   charter for the Security Automation and Continuous Monitoring (SACM)
   working group within the IETF.  This working group will address many
   of the standards needed to define an interoperable automation
   infrastructure required to support timely, accurate, and actionable
   situational awareness over an organization's IT systems.  This
   document enumerates use cases and breaks down related concepts that
   cross many IT security information domains.

   Sections 2, 3, 4, and 5 of this document respectively focus on:

      Defining the key concepts and terminology used within the
      document, providing a common frame of reference;

      Identifying foundational use cases that represent classes of
      stakeholders, goals, and usage scenarios;

      A set of derived functional capabilities and associated
      requirements that are needed to support the use cases; and

      A breakdown of architectural components that address one or more
      functional capabilities and that can be used in various
      combinations to support the use cases.

   The concepts identified in this document provide a foundation for
   creating interoperable automation tools and continuous monitoring
   solutions that provide visibility into the state of assets, user
   activities, and network behavior.  Stakeholders will be able to use
   these tools to aggregate and analyze relevant security and
   operational data to understand the organization's security posture,
   quantify business risk, and make informed decisions that support
   organizational objectives while protecting critical information.
   Organizations will be able to use these tools to augment and
   automate information sharing activities to collaborate with partners
   to identify and mitigate threats.
   Other automation tools will be able to integrate with these
   capabilities to enforce policies based on human decisions to harden
   systems, prevent misuse, and reduce the overall attack surface.

1.1.  Requirements Language

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
   document are to be interpreted as described in RFC 2119 [RFC2119].

2.  Key Concepts

   The operational methods we use within the bounds of our present
   realities are failing us - we are falling behind.  We have begun to
   recognize that the evolution of threat agents, increasing system
   complexity, rapidly changing security situations, and scarce
   resources are detrimental to our success.  There have been efforts
   to remedy our circumstance, and these efforts are generally known as
   "Security Automation."

   Security Automation is a general term used to reference standards
   and specifications originally created by the National Institute of
   Standards and Technology (NIST) and/or the MITRE Corporation.
   Security Automation generally includes languages, protocols
   (prescribed ways by which specification collections are used),
   enumerations, and metrics.

   These specifications have provided an opportunity for tool vendors
   and enterprises building customized solutions to take the
   appropriate steps toward enabling Security Automation by defining
   common information expressions.  In effect, common expression of
   information enables interoperability between tools (whether
   customized, commercial, or freely available).  Another important
   capability common expression provides is the ability to automate
   portions of security processes to gain efficiency, react to new
   threats in a timely manner, and free up security personnel to work
   on more advanced problems within the processes in which they
   participate.

   +---------------------------------------+    +-------------+
   |                                       |    |             |
   |     Operational Risk Management       |    |             |
   |                                       |    |             |
   +---------------------------------------+    |             |
                                                |             |
   +---------------------------------------+    |             |
   |                                       |    |             |
   |     Information Risk Management       |    |  Policy     |
   |                                       |    |  Process    |
   +---------------------------------------+    |  Procedure  |
                                                |             |
   +---------------------------------------+    |             |
   |                                       |    |             |
   |          Control Frameworks           |    |             |
   |                                       |    |             |
   +---------------------------------------+    |             |
                                                |             |
   +---------------------------------------+    |             |
   |                                       |    |             |
   |               Controls                |    |             |
   |                                       |    |             |
   +---------------------------------------+    +-------------+

                                Figure 1

   The figure above provides some context for our focus area.
   Organizations of all sizes will have a more or less formal risk
   management program, depending upon their maturity and organization-
   specific needs.  A small business with only a few employees may not
   have a formally recognized risk management program, but they still
   lock the doors at night.  Typically, financial entities and
   governments sit at the other end of the spectrum with often large,
   laborious risk frameworks.  The point is that all organizations
   practice, to some degree, Operational Risk Management (ORM).  An
   Information Risk Management (IRM) program is most likely a
   constituent of Operational Risk Management (another constituent
   might be Financial Risk Management).  In the Information Risk
   Management domain, we often use Control Frameworks to provide
   guidance for organizations practicing ORM in an information context,
   and these Control Frameworks define a variety of Controls.

   From ORM, IRM, Control Frameworks, and the Controls themselves,
   organizations derive a set of organization-specific policies,
   processes, and procedures.  Such policies, processes, and procedures
   make use of a library of supporting information commonly stipulated
   by the organization (e.g., enterprise acceptable use policies), but
   often prescribed by external entities (e.g., the Payment Card
   Industry Data Security Standard, Sarbanes-Oxley, or the EU Data
   Privacy Directive).  The focus of this document spans Controls,
   certain aspects of policy, process, and procedure, and Control
   Frameworks.

3.  Use Cases

   This document addresses three use cases: System State Assessment,
   Enforcement of Acceptable State, and Security Control Verification
   and Monitoring.

3.1.  UC1: System State Assessment

3.1.1.  Goal

   Assess the security state of a given system for compliance with
   enterprise standards and, therefore, ensure alignment with
   enterprise policy.

3.1.2.  Main Success Scenario

   1.  Define the target system to be assessed

   2.  Select acceptable state policies to apply to the defined target

   3.  Collect actual state values from the target

   4.  Compare the actual state values collected from the target with
       the expected state values expressed in the acceptable state
       policies
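   As a concrete illustration only, the following minimal Python sketch
   walks through these four steps as a single assessment pass.  All
   names used here (collect_actual_state, the layout of the policy
   dictionary) are hypothetical examples invented for this document;
   they are not drawn from any SACM specification or existing tool.

      # Hypothetical sketch of the UC1 main success scenario: collect
      # the actual state of a target and compare it with the expected
      # state expressed in an acceptable state policy.

      def collect_actual_state(target, items):
          """Stand-in for a real collection tool querying the target."""
          observed = {"min_password_length": 8, "firewall_enabled": True}
          return {item: observed.get(item) for item in items}

      def assess(target, policy):
          actual = collect_actual_state(target, policy.keys())
          return {item: actual[item] == expected
                  for item, expected in policy.items()}

      policy = {"min_password_length": 12, "firewall_enabled": True}
      print(assess("host-1.example.com", policy))
      # {'min_password_length': False, 'firewall_enabled': True}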
3.1.3.  Extensions

   None.

3.2.  UC2: Enforcement of Acceptable State

3.2.1.  Goal

   Allow or deny access to a desired resource based on the compliance
   of the system's characteristics with enterprise policy.

3.2.2.  Main Success Scenario

   1.  An entity (a user on a system or the system itself) requests
       access to a given resource (e.g., a network connection)

   2.  Assessment of system state is performed as described in
       Section 3.1

   3.  Based on the assessment results (e.g., the level of compliance
       with enterprise policy), either

       A.  the system is allowed access to the requested resource, or

       B.  the system is denied access to the requested resource

3.2.3.  Extensions

   None.

3.3.  UC3: Security Control Verification and Monitoring

3.3.1.  Goal

   Continuously assess the implementation and effectiveness of security
   controls based on machine-processable content.

3.3.2.  Main Success Scenario

   1.  Define the set of targets to be assessed

   2.  Select acceptable state policies to apply to the set of targets

   3.  Define an assessment trigger based on either a

       A.  time period, or

       B.  system/enterprise event

   4.  Define result reporting/alerting criteria

   5.  Enable continuous assessment
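   As an informal illustration of step 3, the sketch below models an
   assessment task that fires either on a fixed period or when a
   matching enterprise event is observed.  The class and field names
   are hypothetical; a real implementation would attach such tasks to a
   scheduler or event bus.

      # Hypothetical sketch of UC3 assessment triggers: a task runs
      # periodically or in response to a matching enterprise event.

      from dataclasses import dataclass

      @dataclass
      class AssessmentTask:
          targets: list                 # set of in-scope assets
          policies: list                # acceptable state policies
          period_seconds: int = 0       # 0 means event-driven only
          event_filter: str = ""        # e.g. "new-device-connected"

          def should_run(self, seconds_since_last_run, event=None):
              periodic = (self.period_seconds > 0 and
                          seconds_since_last_run >= self.period_seconds)
              triggered = event is not None and event == self.event_filter
              return periodic or triggered

      task = AssessmentTask(["host-1"], ["baseline-v2"],
                            period_seconds=86400,
                            event_filter="new-device-connected")
      print(task.should_run(3600, event="new-device-connected"))  # True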
3.3.3.  Extensions

   None.

4.  Functional Capabilities

   In general, the activities of managing assets, configurations, and
   vulnerabilities are common across the use cases.  UC2 uses these
   activities to either grant or deny an entity access to a requested
   resource.  UC3 uses these activities in support of compliance
   measurement on a periodic basis.

   At the most basic level, an enterprise needing to satisfy these use
   cases will need certain capabilities to be met.  Specifically, we
   are talking about risk management capabilities.  This is the central
   problem domain, so it makes sense to be able to convey information
   about technical and non-technical controls, benchmarks, control
   requirements, control frameworks, and other concepts in a common
   way.

4.1.  Capabilities Supporting UC1

   As described in Section 3.1, the required capabilities need to
   support assessing host and/or network state in an automated manner.
   This is, essentially, a configuration assessment check before
   allowing a full connection to the network.

4.1.1.  Asset Management

   Effective Asset Management is a critical foundation upon which all
   else in risk management is based.  There are two important facets to
   asset management: 1) understanding coverage (how many assets are
   under control) and 2) understanding specific asset details.
   Coverage is fairly straightforward - assessing 80% of the enterprise
   is better than assessing 50% of the enterprise.  Getting asset
   details is comparatively subtle - if an enterprise does not have a
   precise understanding of its assets, then all acquired data and
   consequent actions are considered suspect.  Assessing assets
   (managed and unmanaged) requires that we see and properly
   characterize our assets at the outset and over time.

   What we need to do initially is discover and characterize our
   assets, and then identify them in a common way.  Characterization
   may take the form of logical characterization or security
   characterization, where logical characterization may include
   business context not otherwise related to security, but which may be
   used as information in support of decision making later in risk
   management workflows.

   The following list details the requisite Asset Management
   capabilities (later described in Section 5):

   o  Discover assets in the enterprise

   o  Characterize assets according to security and non-security asset
      properties

   o  Identify and describe assets using a common vocabulary between
      implementations

   o  Reconcile asset representations originating from disparate tools

   o  Manage asset information throughout the asset's life cycle

4.1.2.  Data Collection

   Related to managing assets, and central to any automated assessment
   solution, is the ability to collect data from target hosts (some
   might call this "harvesting").  Of particular interest are data
   representing the security state of a target, be it a computing
   device, network hardware, operating system, or application.  The
   primary interest of the activities demanding data collection is
   centered on object state collection, where objects may be file
   attributes, operating system and/or application configuration items,
   and network device configuration items, among others.

4.1.2.1.  Security Configuration Management

   There are many valid perspectives to take when considering required
   capabilities, but the industry seems to have roughly settled upon
   the notion of "Security Configuration Management" (there are
   variants of the term).  Security Configuration Management (SCM) is a
   simple way to reference several supporting capabilities involving
   technical and non-technical assessment of systems.

   The following capabilities support SCM:

   o  Target Assessment

      *  Collect the state of non-technical controls, commonly called
         administrative controls (e.g., policy, process, procedure)

      *  Collect the state of technical controls including, but not
         necessarily limited to:

         +  Target configuration items

         +  Target patch level

         +  Target object state

4.1.2.2.  Vulnerability Management

   SCM is only part of the solution, as it deals exclusively with the
   configuration of computing devices, including software
   vulnerabilities (by testing for patch levels).  A comprehensive risk
   management program must address all vulnerabilities, a superset of
   software vulnerabilities.  Thus, the capability of assessing non-
   software vulnerabilities applicable to the in-scope system is
   required.

   The following capabilities support Vulnerability Management:

   1.  Assessment

       *  Non-technical Vulnerability Assessment (i.e., interrogative)

       *  Technical Vulnerability Assessment

4.1.3.  Assessment Result Analysis

   At the most basic level, the data collected needs to be analyzed for
   compliance with a standard stipulated by the enterprise.  Such
   standards vary between enterprises, but commonly take a similar
   form.

   The following capabilities support the analysis of assessment
   results:

   o  Comparing actual state to expected state

   o  Scoring/weighting individual comparison results

   o  Relating specific comparisons to benchmark-level requirements

   o  Relating benchmark-level requirements to one or more control
      frameworks
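   As a minimal sketch of the first two capabilities, assuming a
   hypothetical weighting scheme (the item names and weights are
   invented for illustration):

      # Hypothetical sketch of assessment result analysis: compare
      # actual to expected state per item, then roll the results up
      # into a single weighted compliance score.

      def compare(actual, expected):
          return {item: actual.get(item) == value
                  for item, value in expected.items()}

      def score(results, weights):
          total = sum(weights[item] for item in results)
          passed = sum(weights[item] for item, ok in results.items() if ok)
          return 100.0 * passed / total if total else 0.0

      expected = {"min_password_length": 12, "firewall_enabled": True}
      actual = {"min_password_length": 8, "firewall_enabled": True}
      weights = {"min_password_length": 3, "firewall_enabled": 1}

      print(score(compare(actual, expected), weights))
      # 25.0 -- only the lower-weighted check passed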
4.1.4.  Content Management

   It should be clear by now that the capabilities required to support
   risk management state measurement will yield volumes of content.
   The efficacy of risk management state measurement depends directly
   on the stability of the driving content and, subsequently, on the
   ability to change content according to enterprise needs.

   Capabilities supporting Content Management should provide the
   ability to create/define or modify content, as well as store and
   retrieve said content, of at least the following types:

   o  Configuration Standards

   o  Scoring Models

   o  Vulnerability Information

   o  Patch Information

   o  Asset Characterization

   Note that the ability to modify content is in direct support of
   tailoring content for enterprise-specific needs.

4.2.  Capabilities Supporting UC2

   UC2 is dependent upon UC1 and, therefore, includes all of the
   capabilities described in Section 4.1.  UC2 describes the ability to
   make a resource access decision based on an assessment of the
   requesting system (either by the system itself or on behalf of a
   user operating that system).  There are two chief capabilities
   required to meet the needs expressed in Section 3.2: Assessment
   Query and Transport, and Acceptable State Enforcement.

4.2.1.  Assessment Query and Transport

   Under certain circumstances, the system requesting access may be
   unknown, which can make querying the system problematic (consider a
   case where a system is connecting to the network and has no
   assessment software installed).  Note that the Network Endpoint
   Assessment (NEA) protocols (PA-TNC [RFC5792], PB-TNC [RFC5793],
   PT-TLS [I-D.ietf-nea-pt-tls], and PT-EAP [I-D.ietf-nea-pt-eap]) may
   be used to query and transport the things to be measured.

4.2.2.  Acceptable State Enforcement

   Once the assessment has been performed, a decision to allow or deny
   access to the requested resource can be made.  Making this decision
   is a necessary but insufficient condition for enforcement of
   acceptable state; an implementation must also have the ability to
   actively allow or deny access to the requested resource.  For
   example, network enforcement may be implemented with RADIUS
   [RFC2865] or Diameter [RFC3588].
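   The sketch below illustrates the decision step only, assuming a
   hypothetical numeric compliance score produced by the assessment;
   translating the decision into an actual RADIUS Access-Accept or
   Access-Reject is left to the enforcement point.

      # Hypothetical sketch of acceptable state enforcement: a policy
      # decision point maps an assessment's compliance score to an
      # allow/deny decision for the requested resource.

      from enum import Enum

      class Decision(Enum):
          ALLOW = "allow"
          DENY = "deny"

      def access_decision(compliance_score, threshold=80.0):
          """Allow access only when the endpoint meets the threshold."""
          if compliance_score >= threshold:
              return Decision.ALLOW
          return Decision.DENY

      print(access_decision(25.0))   # Decision.DENY
      print(access_decision(95.0))   # Decision.ALLOW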
4.3.  Capabilities Supporting UC3

   Recall that UC3 is dependent upon UC1 and therefore includes all of
   the capabilities described in Section 4.1.  The difference in UC3 is
   the notion of when to assess rather than what to assess.  Therefore,
   the capabilities described in this section are relevant only to the
   "when" and not to the "what."

4.3.1.  Tasking and Scheduling

   The ability to task and schedule assessments is requisite for any
   effective risk management program.  Tasking refers to the ability to
   create a set of instructions to be conveyed at a later time via
   scheduling.  Tasking, therefore, involves selecting a set of
   assessment criteria, assigning that set to a group of assets, and
   expressing that information in a manner that can be consumed by a
   collection tool.  Scheduling comes into play when the enterprise
   determines when to perform a specific assessment task (or set of
   tasks).  Scheduling may be expressed in a way that constrains tasks
   to execute only during defined periods, can be ad hoc, or may be
   triggered by the analysis of previous assessment results or events
   detected in the enterprise.

   The following capabilities support Tasking and Scheduling:

   o  Selection of assessment criteria

   o  Defining in-scope assets (i.e., targeting)

   o  Defining periodic assessments for a given set of tasks

   o  Defining assessment triggers for a given set of tasks

4.3.2.  Data Aggregation and Reporting

   Assessment results are produced for every asset assessed, and these
   results must be reported not only individually, but in the
   aggregate, and in accordance with enterprise needs.  Enterprises
   should be able to aggregate and report on the data their assessments
   produce in a number of different ways in order to support different
   levels of decision making.  At times, security operations personnel
   may be interested in understanding where the most critical risks
   exist in their enterprise so as to focus their remediation efforts
   in the most effective way (in terms of cost and return).  At other
   times, only aggregated scores will matter, as might be the case when
   reporting to an information security manager or other executive-
   level role.

   It is not the position of these capabilities to provide explicit
   details about how reports should be formatted for presentation, but
   only what information they should contain for a particular purpose.
   Furthermore, it is quite easy to imagine the need for a capability
   providing extensibility to aggregation and reporting.

   The following capabilities support Data Aggregation and Reporting:

   o  By asset characterization

   o  By assessment criteria

   o  By control framework

   o  By benchmark

   o  By other attributes/properties of assessment characteristics

   o  Extensible aggregation and reporting
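   As an illustration of the first aggregation, the following sketch
   groups per-asset scores by an arbitrary asset property; the records
   and property names are hypothetical.

      # Hypothetical sketch of data aggregation: average per-asset
      # compliance scores by an asset characterization property.

      from collections import defaultdict
      from statistics import mean

      assessments = [
          {"asset": "db-1",  "criticality": "high", "score": 62.0},
          {"asset": "db-2",  "criticality": "high", "score": 88.0},
          {"asset": "kiosk", "criticality": "low",  "score": 45.0},
      ]

      def aggregate_by(records, prop):
          groups = defaultdict(list)
          for record in records:
              groups[record[prop]].append(record["score"])
          return {value: mean(scores) for value, scores in groups.items()}

      print(aggregate_by(assessments, "criticality"))
      # {'high': 75.0, 'low': 45.0}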
5.  Functional Components

   This section describes the functional components alluded to in
   Section 4.  In keeping with the organization of that section, the
   following high-level functional capabilities are decomposed herein:
   Asset Management, Security Configuration Management, Vulnerability
   Management, Content Management, Assessment Result Analysis, Tasking
   and Scheduling, and Data Aggregation and Reporting.

5.1.  Asset Management

   As previously mentioned, asset management is a critically important
   component of any risk management program.  If you stop to consider
   the different tools used to support a risk management program (e.g.,
   IDS/IPS, firewalls, NAC devices, WAFs, SCM, and so on), they all
   need, to some degree, an element of asset management.  In this
   context, asset management is defined as the maintenance of necessary
   and accurate asset characteristics.  Management of assets requires
   the ability to discover, characterize, and subsequently identify
   assets across enterprise tools.  The components described herein
   support Section 4.1.1.

5.1.1.  Discovery

5.1.2.  Characterization

5.1.2.1.  Logical

5.1.2.2.  Security

5.1.3.  Asset Identification

5.2.  Security Configuration Management

   The components described herein support Section 4.1.2.1.

5.2.1.  Configuration Assessment

5.2.1.1.  Non-technical Assessment

5.2.1.2.  Technical Assessment

5.2.1.2.1.  Configuration Assessment

5.2.1.2.2.  Patch Assessment

5.2.1.2.3.  Object State Assessment

5.3.  Vulnerability Management

   The components described herein support Section 4.1.2.2.

5.3.1.  Non-technical Vulnerability Assessment

5.3.2.  Technical Vulnerability Assessment

5.4.  Content Management

   The components described herein support Section 4.1.4.

5.4.1.  Control Frameworks

5.4.2.  Configuration Standards

5.4.3.  Scoring Models

5.4.4.  Vulnerability Information

5.4.5.  Patch Information

5.4.6.  Asset Information

5.5.  Assessment Result Analysis

   The components described herein support Section 4.1.3.

5.5.1.  Comparing Actual to Expected State

5.5.2.  Scoring Comparison Results

5.5.3.  Relating Comparison Results to Requirements

5.5.4.  Relating Requirements to Control Frameworks

5.6.  Tasking and Scheduling

   The components described herein support Section 4.3.1.

5.6.1.  Selection of Assessment Criteria

5.6.2.  Defining In-scope Assets

5.6.3.  Defining Periodic Assessments

5.6.4.  Defining Assessment Triggers

5.7.  Data Aggregation and Reporting

   The components described herein support Section 4.3.2.

5.7.1.  By Asset Characterization

5.7.2.  By Assessment Criteria

5.7.3.  By Control Framework

5.7.4.  By Benchmark

5.7.5.  By Ad Hoc/Extended Properties

6.  Data Exchange Models and Communications Protocols

   Document where existing work exists, what is currently defined by
   SDOs, and any gaps that should be addressed.  Point to existing
   event, incident, and network management standards when available.
   Describe emerging efforts that may be used for the creation of new
   standards.  For gaps, provide insight into what would be a good fit
   for SACM or other IETF working groups.

   This will help us to identify what is needed for SACM to be
   successful.  This section will help determine which of the
   specifications can be normatively referenced and what needs to be
   addressed in the IETF.  This should help us determine any protocol
   or guidance documentation we will need to generate to support the
   described use cases.

   Things to address:

      For IETF-related efforts, discuss work in the NEA and MILE
      working groups.  Address SNMP, NETCONF, and other efforts as
      needed.

      Reference any Security Automation work that is applicable.

6.1.  Data Exchange Models

   The functional capabilities described in Section 4 require a
   significant number of models to be selected or defined in order to
   meet the needs of the three use cases presented in Section 3.  A
   "model" in this sense is a logical arrangement of information that
   may have more than one syntactic binding.  For the purpose of this
   document, only the logical data model is considered.  However, where
   appropriate, example data models that may have well-defined
   syntactic expressions may be referenced.
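   To make the distinction concrete, the sketch below shows one logical
   arrangement of asset information rendered in two different syntactic
   bindings (JSON and a simple XML form).  The field names are invented
   for illustration and are not drawn from any SACM data model.

      # Hypothetical sketch: one logical model, two syntactic bindings.

      import json
      from dataclasses import dataclass, asdict

      @dataclass
      class AssetRecord:
          asset_id: str      # common identifier shared across tools
          asset_type: str    # classification, e.g. "computing-device"
          criticality: str   # security attribute used in scoring

      def to_json(asset):
          return json.dumps(asdict(asset))

      def to_xml(asset):
          fields = "".join("<%s>%s</%s>" % (k, v, k)
                           for k, v in asdict(asset).items())
          return "<asset>%s</asset>" % fields

      record = AssetRecord("asset-42", "computing-device", "high")
      print(to_json(record))
      print(to_xml(record))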
6.1.1.  Control Expression

   For each control expression we need an identification method, a
   logical expression, and one or more syntactic bindings to that
   expression.  For some, we may wish to associate a method of risk
   scoring.

6.1.1.1.  Technical Control Expression

6.1.1.2.  Non-technical Control Expression

6.1.1.2.1.  Configuration Controls

6.1.1.2.2.  Patches

6.1.1.2.3.  Vulnerabilities

6.1.1.2.4.  Object (Non-security) State

6.1.2.  Control Frameworks

6.1.2.1.  Logical Expression and Syntactic Binding(s)

6.1.2.2.  Relationships

6.1.2.3.  Substantiation (Control Requirement)

6.1.2.4.  Reporting

6.1.3.  Asset Expressions

6.1.3.1.  Asset Identification

6.1.3.2.  Asset Classification (Type)

6.1.3.3.  Asset Attributes

6.1.3.3.1.  Criticality

6.1.3.3.2.  Classification (security)

6.1.3.3.3.  Owner

6.1.3.4.  Information Expression (non-identifying)

6.1.3.5.  Reporting

6.1.4.  Benchmark/Checklist Expression

6.1.4.1.  Logical Expression and Bindings

6.1.4.2.  Checking Systems

6.1.4.3.  Results and Scoring

6.1.4.4.  Reporting

6.1.5.  Check Language

6.1.5.1.  Logical Expression and Syntactic Binding(s)

6.1.5.1.1.  Technical

6.1.5.1.2.  Non-technical

6.1.5.2.  Reporting

6.1.6.  Targeting Expression

6.1.6.1.  Information Owner

6.1.6.2.  System Owner

6.1.6.2.1.  Computing Device(s)

6.1.6.2.2.  Network(s)

6.1.6.3.  Assessor

6.1.6.4.  Computing Device

6.1.6.5.  Targeting Extensibility

6.2.  Communication Protocols

6.2.1.  Asset Management Interface

7.  IANA Considerations

   This memo includes no request to IANA.

   All drafts are required to have an IANA considerations section (see
   RFC 5226 [RFC5226] for a guide).  If the draft does not require IANA
   to do anything, the section contains an explicit statement that this
   is the case (as above).  If there are no requirements for IANA, the
   section will be removed during conversion into an RFC by the RFC
   Editor.

8.  Security Considerations

   All drafts are required to have a security considerations section.
   See RFC 3552 [RFC3552] for a guide.

9.  Acknowledgements

   The authors would like to thank Kathleen Moriarty and Stephen Hanna
   for contributing text to this document.  The authors would also like
   to acknowledge the members of the SACM mailing list for their keen
   and insightful feedback on the concepts and text within this
   document.

10.  References

10.1.  Normative References

   [RFC2119]  Bradner, S., "Key words for use in RFCs to Indicate
              Requirement Levels", BCP 14, RFC 2119, March 1997.

10.2.  Informative References

   [I-D.ietf-nea-pt-eap]
              Cam-Winget, N. and P. Sangster, "PT-EAP: Posture
              Transport (PT) Protocol For EAP Tunnel Methods",
              draft-ietf-nea-pt-eap-02 (work in progress), May 2012.
   [I-D.ietf-nea-pt-tls]
              Sangster, P., Cam-Winget, N., and J. Salowey, "PT-TLS: A
              TCP-based Posture Transport (PT) Protocol",
              draft-ietf-nea-pt-tls-05 (work in progress), May 2012.

   [RFC2865]  Rigney, C., Willens, S., Rubens, A., and W. Simpson,
              "Remote Authentication Dial In User Service (RADIUS)",
              RFC 2865, June 2000.

   [RFC3552]  Rescorla, E. and B. Korver, "Guidelines for Writing RFC
              Text on Security Considerations", BCP 72, RFC 3552,
              July 2003.

   [RFC3588]  Calhoun, P., Loughney, J., Guttman, E., Zorn, G., and J.
              Arkko, "Diameter Base Protocol", RFC 3588,
              September 2003.

   [RFC5226]  Narten, T. and H. Alvestrand, "Guidelines for Writing an
              IANA Considerations Section in RFCs", BCP 26, RFC 5226,
              May 2008.

   [RFC5792]  Sangster, P. and K. Narayan, "PA-TNC: A Posture Attribute
              (PA) Protocol Compatible with Trusted Network Connect
              (TNC)", RFC 5792, March 2010.

   [RFC5793]  Sahita, R., Hanna, S., Hurst, R., and K. Narayan,
              "PB-TNC: A Posture Broker (PB) Protocol Compatible with
              Trusted Network Connect (TNC)", RFC 5793, March 2010.

Appendix A.  Additional Stuff

   This becomes an Appendix if needed.

Authors' Addresses

   David Waltermire (editor)
   National Institute of Standards and Technology
   100 Bureau Drive
   Gaithersburg, Maryland 20877
   USA

   Email: david.waltermire@nist.gov

   Adam W. Montville
   Tripwire, Inc.
   101 SW Main Street, Suite 1500
   Portland, Oregon 97204
   USA

   Email: amontville@tripwire.com