Network Working Group                                 D. Waltermire, Ed.
Internet-Draft                                                      NIST
Intended status: Informational                              A. Montville
Expires: July 23, 2013                                                TW
                                                        January 19, 2013


 Analysis of Security Automation and Continuous Monitoring (SACM) Use
                                 Cases
                  draft-waltermire-sacm-use-cases-03

Abstract

   This document identifies foundational use cases, derived functional
   capabilities, and requirements needed to provide a foundation for
   creating interoperable automation tools and continuous monitoring
   solutions that provide visibility into the state of assets, user
   activities, and network behavior.  Stakeholders will be able to use
   these tools to aggregate and analyze relevant security and
   operational data to understand the organization's security posture,
   quantify business risk, and make informed decisions that support
   organizational objectives while protecting critical information.
   Organizations will be able to use these tools to augment and
   automate information sharing activities to collaborate with partners
   to identify and mitigate threats.  Other automation tools will be
   able to integrate with these capabilities to enforce policies based
   on human decisions to harden systems, prevent misuse, and reduce the
   overall attack surface.

Status of this Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF).  Note that other groups may also distribute
   working documents as Internet-Drafts.  The list of current Internet-
   Drafts is at http://datatracker.ietf.org/drafts/current/.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."

   This Internet-Draft will expire on July 23, 2013.

Copyright Notice

   Copyright (c) 2013 IETF Trust and the persons identified as the
   document authors.  All rights reserved.
   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document.  Code Components extracted from this
   document must include Simplified BSD License text as described in
   Section 4.e of the Trust Legal Provisions and are provided without
   warranty as described in the Simplified BSD License.

Table of Contents

   1.  Introduction
     1.1.  Requirements Language
   2.  Key Concepts
   3.  Use Cases
     3.1.  UC1: System State Assessment
       3.1.1.  Goal
       3.1.2.  Main Success Scenario
     3.2.  UC2: Enforcement of Acceptable State
       3.2.1.  Goal
       3.2.2.  Main Success Scenario
     3.3.  UC3: Security Control Verification and Monitoring
       3.3.1.  Goal
       3.3.2.  Main Success Scenario
   4.  Functional Capabilities and Requirements
     4.1.  Capabilities Supporting UC1
       4.1.1.  Asset Management
         4.1.1.1.  Concepts
         4.1.1.2.  Requirements
       4.1.2.  Data Collection
         4.1.2.1.  Concepts
         4.1.2.2.  Requirements
       4.1.3.  Assessment Result Analysis
         4.1.3.1.  Concepts
         4.1.3.2.  Requirements
       4.1.4.  Content Management
         4.1.4.1.  Concepts
         4.1.4.2.  Requirements
     4.2.  Capabilities Supporting UC2
       4.2.1.  Assessment Query and Transport
       4.2.2.  Acceptable State Enforcement
     4.3.  Capabilities Supporting UC3
       4.3.1.  Tasking and Scheduling
       4.3.2.  Data Aggregation and Reporting
   5.  IANA Considerations
   6.  Security Considerations
   7.  Acknowledgements
   8.  References
     8.1.  Normative References
     8.2.  Informative References
   Authors' Addresses

1.  Introduction

   This document addresses foundational use cases in security
   automation.
   These use cases may be considered when establishing a charter for
   the Security Automation and Continuous Monitoring (SACM) working
   group within the IETF.  This working group will address many of the
   standards needed to define an interoperable automation
   infrastructure required to support timely, accurate, and actionable
   situational awareness over an organization's IT systems.  This
   document enumerates use cases and breaks down related concepts that
   cross many IT security information domains.

   Sections 2, 3, and 4 of this document respectively focus on:

      Defining the key concepts and terminology used within the
      document, providing a common frame of reference;

      Identifying foundational use cases that represent classes of
      stakeholders, goals, and usage scenarios; and

      Describing a set of derived functional capabilities and
      associated requirements that are needed to support the use cases.

   The concepts identified in this document provide a foundation for
   creating interoperable automation tools and continuous monitoring
   solutions that provide visibility into the state of assets, user
   activities, and network behavior.  Stakeholders will be able to use
   these tools to aggregate and analyze relevant security and
   operational data to understand the organization's security posture,
   quantify business risk, and make informed decisions that support
   organizational objectives while protecting critical information.
   Organizations will be able to use these tools to augment and
   automate information sharing activities to collaborate with partners
   to identify and mitigate threats.  Other automation tools will be
   able to integrate with these capabilities to enforce policies based
   on human decisions to harden systems, prevent misuse, and reduce the
   overall attack surface.

1.1.  Requirements Language

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
   document are to be interpreted as described in RFC 2119 [RFC2119].

2.  Key Concepts

   The operational methods we use within the bounds of our present
   realities are failing us - we are falling behind.  We have begun to
   recognize that the evolution of threat agents, increasing system
   complexity, rapid situational security change, and scarce resources
   are detrimental to our success.  There have been efforts to remedy
   our circumstance, and these efforts are generally known as "Security
   Automation."

   Security Automation is a general term used to reference standards
   and specifications originally created by the National Institute of
   Standards and Technology (NIST) and/or the MITRE Corporation.
   Security Automation generally includes languages, protocols
   (prescribed ways by which specification collections are used),
   enumerations, and metrics.

   These specifications have provided an opportunity for tool vendors
   and enterprises building customized solutions to take the
   appropriate steps toward enabling Security Automation by defining
   common information expressions.  In effect, common expression of
   information enables interoperability between tools (whether
   customized, commercial, or freely available).
   Another important capability common expression provides is the
   ability to automate portions of security processes to gain
   efficiency, react to new threats in a timely manner, and free up
   security personnel to work on more advanced problems within the
   processes in which they participate.

      +---------------------------------------+   +-------------+
      |                                       |   |             |
      |     Operational Risk Management       |   |             |
      |                                       |   |             |
      +---------------------------------------+   |             |
                                                  |             |
      +---------------------------------------+   |             |
      |                                       |   |             |
      |     Information Risk Management       |   |  Policy     |
      |                                       |   |  Process    |
      +---------------------------------------+   |  Procedure  |
                                                  |             |
      +---------------------------------------+   |             |
      |                                       |   |             |
      |          Control Frameworks           |   |             |
      |                                       |   |             |
      +---------------------------------------+   |             |
                                                  |             |
      +---------------------------------------+   |             |
      |                                       |   |             |
      |               Controls                |   |             |
      |                                       |   |             |
      +---------------------------------------+   +-------------+

                                 Figure 1

   The figure above provides some context for our focus area.
   Organizations of all sizes will have a more or less formal risk
   management program, depending upon their maturity and organization-
   specific needs.  A small business with only a few employees may not
   have a formally recognized risk management program, but they still
   lock the doors at night.  Typically, financial entities and
   governments sit at the other end of the spectrum, often with large,
   laborious risk frameworks.  The point is that all organizations
   practice, to some degree, Operational Risk Management (ORM).  An
   Information Risk Management (IRM) program is most likely a
   constituent of Operational Risk Management (another constituent
   might be Financial Risk Management).  In the Information Risk
   Management domain, we often use Control Frameworks to provide
   guidance for organizations practicing ORM in an information context,
   and these Control Frameworks define a variety of Controls.

   From ORM, IRM, Control Frameworks, and the Controls themselves,
   organizations derive a set of organization-specific policies,
   processes, and procedures.  Such policies, processes, and procedures
   make use of a library of supporting information commonly stipulated
   by the organization (e.g., enterprise acceptable use policies), but
   often prescribed by external entities (e.g., the Payment Card
   Industry Data Security Standard, Sarbanes-Oxley, or the EU Data
   Privacy Directive).  The focus of this document spans Controls,
   certain aspects of policy, process, and procedure, and Control
   Frameworks.

3.  Use Cases

   This document addresses three use cases: System State Assessment,
   Enforcement of Acceptable State, and Security Control Verification
   and Monitoring.  Currently, the first use case, System State
   Assessment, is being pursued under the SACM charter.  The additional
   use cases are included to provide broader context to this work and
   represent additional work that may be considered by SACM or another
   IETF working group in the future.

3.1.  UC1: System State Assessment

3.1.1.  Goal

   To assess whether the security state of a given system complies with
   enterprise standards and, therefore, to ensure alignment with
   enterprise policy.

3.1.2.  Main Success Scenario

   1.  Define the target system to be assessed

   2.  Select acceptable state policies to apply to the defined target

   3.  Identify the target being assessed

   4.  Collect actual state values from the target

   5.  Communicate the target identity and collected state values to an
       external system for evaluation

   6.  Compare actual state values collected from the target with
       expected state values as expressed in acceptable state policies
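   The following is a minimal, non-normative sketch of this scenario in
   Python.  The data structures, the collect_state callable, and the
   shape of the acceptable state policy are assumptions made for
   illustration only; no SACM data model or protocol is implied.

      # Non-normative sketch of the UC1 flow.  All names and structures
      # are illustrative assumptions; no SACM data model is implied.

      def assess(target_id, collect_state, acceptable_state):
          """Collect actual state from a target (step 4) and compare it
          with the expected values in an acceptable state policy
          (step 6)."""
          actual = collect_state(target_id)   # step 4: collect actual state
          report = {"target": target_id, "results": []}
          for attribute, expected in acceptable_state.items():
              observed = actual.get(attribute)
              report["results"].append({
                  "attribute": attribute,
                  "expected": expected,
                  "actual": observed,
                  "compliant": observed == expected,
              })
          # Step 5 (communicating the report to an external evaluation
          # system) is out of scope for this sketch.
          return report

   The report structure shown here is reused, with the same caveats, by
   the sketches in Section 4.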
3.2.  UC2: Enforcement of Acceptable State

3.2.1.  Goal

   Allow or deny access to a desired resource based on whether system
   characteristics comply with enterprise policy.

3.2.2.  Main Success Scenario

   1.  An entity (a user on a system or the system itself) requests
       access to a given resource (e.g., a network connection)

   2.  Assessment of system state is performed as described in
       Section 3.1

   3.  Based on assessment results (e.g., the level of compliance with
       enterprise policy):

       A.  The system is allowed access to the requested resource, or

       B.  The system is denied access to the requested resource

3.3.  UC3: Security Control Verification and Monitoring

3.3.1.  Goal

   Continuous assessment of the implementation and effectiveness of
   security controls based on machine-processable content.

3.3.2.  Main Success Scenario

   1.  Define the set of targets to be assessed

   2.  Select acceptable state policies to apply to the set of targets

   3.  Define an assessment trigger based on either a:

       A.  Time period, or

       B.  System/enterprise event

   4.  Define result reporting/alerting criteria

   5.  Enable continuous assessment

4.  Functional Capabilities and Requirements

   In general, the activities of managing assets, configurations, and
   vulnerabilities are common between UC1, UC2, and UC3.  UC2 uses
   these activities to either grant or deny an entity access to a
   requested resource.  UC3 uses these activities in support of
   compliance measurement on a periodic basis.

   At the most basic level, an enterprise needing to satisfy these use
   cases will need certain capabilities.  Specifically, we are talking
   about risk management capabilities.  This is the central problem
   domain, so it makes sense to be able to convey information about
   technical and non-technical controls, benchmarks, control
   requirements, control frameworks, and other concepts in a common
   way.

4.1.  Capabilities Supporting UC1

   The capabilities in this section support assessing host and/or
   network state in an automated manner as described in Section 3.1.

4.1.1.  Asset Management

   Effective Asset Management is a critical foundation upon which all
   else in risk management is based.  There are two important facets to
   asset management: 1) understanding coverage (how many assets are
   under control) and 2) understanding specific asset details.
   Coverage is fairly straightforward - assessing 80% of the enterprise
   is better than assessing 50% of the enterprise.  Getting asset
   details is comparatively subtle - if an enterprise does not have a
   precise understanding of its assets, then all acquired data and
   consequent actions are considered suspect.  Assessing assets
   (managed and unmanaged) requires that we see and properly
   characterize our assets at the outset and over time.

4.1.1.1.  Concepts

   What we need to do initially is discover and characterize our
   assets, and then identify them in a common way.  Characterization
   may take the form of logical characterization or security
   characterization, where logical characterization may include
   business context not otherwise related to security, but which may be
   used as information in support of decision making later in risk
   management workflows.
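   As a rough, non-normative illustration of such a characterization,
   an asset record might combine a unique identifier with security and
   logical (business context) properties, as sketched below; the field
   names and values are assumptions made for this document only and do
   not represent a standardized vocabulary.

      # Illustrative asset record; field names are assumptions rather
      # than a standardized SACM vocabulary.
      asset = {
          "asset_id": "urn:example:asset:0001",  # unique enterprise identity
          "discovered_by": ["network-scan", "endpoint-agent"],
          "security_properties": {
              "criticality": "high",
              "function": "database-server",
          },
          "logical_properties": {        # business context not otherwise
              "owner": "finance",        # related to security
              "location": "datacenter-2",
          },
      }

   Reconciling records like this one when they originate from disparate
   tools is one of the capabilities listed below.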
   The following list details the requisite Asset Management
   capabilities:

   o  Discover assets in the enterprise

   o  Characterize assets according to security and non-security asset
      properties

   o  Identify and describe assets using a common vocabulary between
      implementations

   o  Reconcile asset representations originating from disparate tools

   o  Manage asset information throughout the asset's life cycle

4.1.1.2.  Requirements

   A method MUST be provided for identifying a target system (asset
   identification) as a unique entity within the enterprise.

   A method MUST be provided for classifying a target system (asset
   classification) based on a set of organizationally relevant
   properties (e.g. organizational affiliation, criticality, function).

4.1.2.  Data Collection

   Related to managing assets, and central to any automated assessment
   solution, is the ability to collect data from (or related to) a
   target device (some might call this "harvesting").  Of particular
   interest is data representing the security state of a target, be it
   a computing device, network hardware, operating system, or
   application.  The primary interest of the activities demanding data
   collection is centered on object state collection, where attributes
   may include installed software, file properties, operating system
   and/or application configuration items, and network device
   configuration items, among others.

4.1.2.1.  Concepts

   There are many valid perspectives to take when considering required
   data collection capabilities.  The nature of data collected relating
   to assets supports a variety of information domains, including
   security configuration management (SCM) and vulnerability
   management.  SCM deals with the configuration of computing and
   infrastructure devices, including the software installed and in use
   on the device.  Vulnerability management involves identifying the
   patch level of software installed on the device and the
   identification of insecure custom code (e.g. web vulnerabilities).
   All vulnerabilities, a set broader than software vulnerabilities
   alone, need to be addressed as part of a comprehensive risk
   management program.  Thus, the capability of assessing non-software
   vulnerabilities applicable to the in-scope system is required.
   Additionally, it may be necessary to support assessment of non-
   technical data relating to assets, such as aspects of operational
   and management controls.

   The following assessment capabilities support SCM relative to a
   target asset:

   o  Collect the state of technical controls including, but not
      necessarily limited to:

      *  Software inventory (e.g. operating system, applications,
         patches)

      *  Configuration settings

   o  Collect the state of non-technical controls, commonly called
      administrative controls (i.e., policy, process, and procedure)
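   As a non-normative illustration of what collected state might look
   like, the sketch below shows a hypothetical collection result for a
   single target; the function, field names, and example settings are
   assumptions for this document and do not correspond to any existing
   collection format.

      # Hypothetical collection result for one target; the structure
      # and setting names are assumptions for illustration only.
      def collect_technical_state(asset_id):
          return {
              "asset_id": asset_id,       # ties results to the asset (4.1.1)
              "collection_method": "authenticated-agent",
              "software_inventory": [
                  {"name": "example-webserver", "version": "2.2",
                   "type": "package"},
              ],
              "configuration_settings": {
                  "ssh:permit_root_login": "no",
                  "password:minimum_length": "12",
              },
          }

   The asset_id field ties the result back to an asset record such as
   the one sketched in Section 4.1.1.1.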
4.1.2.2.  Requirements

   One or more data formats MUST be provided to describe the
   instructions and data collection methods (e.g. technical,
   interrogative) that drive data collection.

   A method MUST be provided for retrieving data collection
   instructions from a remote host (see Section 4.1.4).

   A data format MUST be provided to capture the results of data
   collection.

   A mechanism MUST be provided to identify the device the results
   pertain to (see Section 4.1.1).

   A mechanism MUST be provided to identify the software inventory of a
   device.

   A mechanism MUST be provided to associate configuration setting
   values with the associated software.

   A mechanism MUST be provided to identify additional collected
   attribute/value pairs related to the device, installed software, or
   other controls.

   A mechanism MUST be provided to associate the data collection method
   with the collected value.

   A method of communicating data collection results to another system
   for further analysis MUST be identified.

4.1.3.  Assessment Result Analysis

   At the most basic level, the data collected needs to be analyzed for
   compliance with a standard stipulated by the enterprise.  Analysis
   methods may vary between enterprises, but commonly take a similar
   form.

4.1.3.1.  Concepts

   The following capabilities support the analysis of assessment
   results:

   o  Comparing actual state to expected state

   o  Scoring/weighting individual comparison results

   o  Relating specific comparisons to benchmark-level requirements

   o  Relating benchmark-level requirements to one or more control
      frameworks

4.1.3.2.  Requirements

   A method MUST be provided for selecting acceptable state policy
   (test expression).

   A method MUST be provided for comparing collected data to expected
   state values (test evaluation).

4.1.4.  Content Management

   It should be clear by now that the capabilities required to support
   risk management state measurement will yield volumes of content.
   The efficacy of risk management state measurement depends directly
   on the stability of the driving content and, subsequently, on the
   ability to change content according to enterprise needs.

4.1.4.1.  Concepts

   Capabilities supporting Content Management should provide the
   ability to create/define or modify content, as well as store and
   retrieve said content, of at least the following types:

   o  Configuration Standards

   o  Scoring Models

   o  Vulnerability Information

   o  Patch Information

   o  Asset Characterization

   Note that the ability to modify content is in direct support of
   tailoring content for enterprise-specific needs.

4.1.4.2.  Requirements

   A protocol MUST be identified for retrieving SACM content from a
   content repository.

   A protocol MUST be identified for querying SACM content held in a
   content repository.  The protocol MUST support querying content by
   applicability to asset characteristics.

   A protocol MUST be identified for curating SACM content in a content
   repository.  Note: This might be an area where we can limit the
   scope of work relative to the initial SACM charter.

4.2.  Capabilities Supporting UC2

   UC2 is dependent upon UC1 and, therefore, includes all of the
   capabilities described in Section 4.1.  UC2 describes the ability to
   make a resource access decision based on an assessment of the
   requesting system (either by the system itself or on behalf of a
   user operating that system).  There are two chief capabilities
   required to meet the needs expressed in Section 3.2: Assessment
   Query and Transport, and Acceptable State Enforcement.
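   The sketch below shows, in non-normative form, how a resource access
   decision might be derived from the assessment report produced by the
   UC1 sketch in Section 3.1.2; the threshold-based policy is purely an
   assumption made to keep the example small.

      # Non-normative sketch of an access decision driven by assessment
      # results; the threshold policy is only one possible model.
      def access_decision(assessment_report, required_compliance=1.0):
          results = assessment_report["results"]
          if not results:
              return "deny"                 # no evidence, no access
          compliant = sum(1 for r in results if r["compliant"])
          level = compliant / len(results)  # fraction of passing checks
          return "allow" if level >= required_compliance else "deny"

   An enforcement point would then act on the returned decision, for
   example via the RADIUS or Diameter mechanisms noted in
   Section 4.2.2.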
4.2.1.  Assessment Query and Transport

   Under certain circumstances, the system requesting access may be
   unknown, which can make querying the system problematic (consider a
   case where a system is connecting to the network and has no
   assessment software installed).  Note that the Network Endpoint
   Assessment (NEA) protocols (PA-TNC [RFC5792], PB-TNC [RFC5793],
   PT-TLS [I-D.ietf-nea-pt-tls], and PT-EAP [I-D.ietf-nea-pt-eap]) may
   be used to query for and transport the attributes to be measured.

4.2.2.  Acceptable State Enforcement

   Once the assessment has been performed, a decision to allow or deny
   access to the requested resource can be made.  Making this decision
   is a necessary but insufficient condition for enforcement of
   acceptable state, and an implementation must have the ability to
   actively allow or deny access to the requested resource.  For
   example, network enforcement may be implemented with RADIUS
   [RFC2865] or Diameter [RFC6733].

4.3.  Capabilities Supporting UC3

   Recall that UC3 is dependent upon UC1 and therefore includes all of
   the capabilities described in Section 4.1.  The difference in UC3 is
   the notion of when to assess rather than what to assess.  Therefore,
   the capabilities described in this section are relevant only to the
   "when" and not to the "what."

4.3.1.  Tasking and Scheduling

   The ability to task and schedule assessments is requisite for any
   effective risk management program.  Tasking refers to the ability to
   create a set of instructions to be conveyed at a later time via
   scheduling.  Tasking, therefore, involves selecting a set of
   assessment criteria, assigning that set to a group of assets, and
   expressing that information in a manner that can be consumed by a
   collection tool.  Scheduling comes into play when the enterprise
   determines when to perform a specific assessment task (or set of
   tasks).  Scheduling may be expressed in a way that constrains tasks
   to execute only during defined periods, can be ad hoc, or may be
   triggered by the analysis of previous assessment results or events
   detected in the enterprise.

   The following capabilities support Tasking and Scheduling:

   o  Selection of assessment criteria

   o  Defining in-scope assets (i.e. targeting)

   o  Defining periodic assessments for a given set of tasks

   o  Defining assessment triggers for a given set of tasks

4.3.2.  Data Aggregation and Reporting

   Assessment results are produced for every asset assessed, and these
   results must be reported not only individually, but in the
   aggregate, and in accordance with enterprise needs.  Enterprises
   should be able to aggregate and report on the data their assessments
   produce in a number of different ways in order to support different
   levels of decision making.  At times, security operations personnel
   may be interested in understanding where the most critical risks
   exist in their enterprise so as to focus their remediation efforts
   in the most effective way (in terms of cost and return).  At other
   times, only aggregated scores will matter, as might be the case when
   reporting to an information security manager or other executive-
   level role.

   It is not the purpose of these capabilities to provide explicit
   details about how reports should be formatted for presentation, but
   only what information they should contain for a particular purpose.
   Furthermore, it is quite easy to imagine the need for a capability
   providing extensibility to aggregation and reporting.

   The ability to aggregate assessment results in the following ways
   supports Data Aggregation and Reporting:

   o  By asset characterization

   o  By assessment criteria

   o  By control framework

   o  By benchmark

   o  By other attributes/properties of assessment characteristics

   o  Extensible aggregation and reporting
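   As a rough, non-normative illustration of aggregation by asset
   characterization, the sketch below groups per-asset assessment
   reports (as produced by the UC1 sketch in Section 3.1.2) by a
   property taken from asset records like the one sketched in
   Section 4.1.1.1; the grouping key and the summary fields are
   assumptions made for this example.

      # Non-normative sketch: aggregate per-asset assessment reports by
      # an arbitrary asset property (e.g., criticality or owner).
      from collections import defaultdict

      def aggregate(reports, assets, group_by="criticality"):
          # 'assets' is assumed to map a target identifier to an asset
          # record like the one in Section 4.1.1.1.
          groups = defaultdict(
              lambda: {"assets": 0, "checks": 0, "compliant": 0})
          for report in reports:
              props = assets[report["target"]].get(
                  "security_properties", {})
              summary = groups[props.get(group_by, "unknown")]
              summary["assets"] += 1
              summary["checks"] += len(report["results"])
              summary["compliant"] += sum(
                  1 for r in report["results"] if r["compliant"])
          return dict(groups)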
5.  IANA Considerations

   This memo includes no request to IANA.

   All drafts are required to have an IANA considerations section (see
   RFC 5226 [RFC5226] for a guide).  If the draft does not require IANA
   to do anything, the section contains an explicit statement that this
   is the case (as above).  If there are no requirements for IANA, the
   section will be removed during conversion into an RFC by the RFC
   Editor.

6.  Security Considerations

   All drafts are required to have a security considerations section.
   See RFC 3552 [RFC3552] for a guide.

7.  Acknowledgements

   The authors would like to thank Kathleen Moriarty and Stephen Hanna
   for contributing text to this document.  The authors would also like
   to acknowledge the members of the SACM mailing list for their keen
   and insightful feedback on the concepts and text within this
   document.

8.  References

8.1.  Normative References

   [RFC2119]  Bradner, S., "Key words for use in RFCs to Indicate
              Requirement Levels", BCP 14, RFC 2119, March 1997.

8.2.  Informative References

   [I-D.ietf-nea-pt-eap]
              Cam-Winget, N. and P. Sangster, "PT-EAP: Posture Transport
              (PT) Protocol For EAP Tunnel Methods",
              draft-ietf-nea-pt-eap-06 (work in progress),
              December 2012.

   [I-D.ietf-nea-pt-tls]
              Sangster, P., Cam-Winget, N., and J. Salowey, "PT-TLS: A
              TLS-based Posture Transport (PT) Protocol",
              draft-ietf-nea-pt-tls-08 (work in progress), October 2012.

   [RFC2865]  Rigney, C., Willens, S., Rubens, A., and W. Simpson,
              "Remote Authentication Dial In User Service (RADIUS)",
              RFC 2865, June 2000.

   [RFC3552]  Rescorla, E. and B. Korver, "Guidelines for Writing RFC
              Text on Security Considerations", BCP 72, RFC 3552,
              July 2003.

   [RFC5226]  Narten, T. and H. Alvestrand, "Guidelines for Writing an
              IANA Considerations Section in RFCs", BCP 26, RFC 5226,
              May 2008.

   [RFC5792]  Sangster, P. and K. Narayan, "PA-TNC: A Posture Attribute
              (PA) Protocol Compatible with Trusted Network Connect
              (TNC)", RFC 5792, March 2010.

   [RFC5793]  Sahita, R., Hanna, S., Hurst, R., and K. Narayan,
              "PB-TNC: A Posture Broker (PB) Protocol Compatible with
              Trusted Network Connect (TNC)", RFC 5793, March 2010.

   [RFC6733]  Fajardo, V., Arkko, J., Loughney, J., and G. Zorn,
              "Diameter Base Protocol", RFC 6733, October 2012.

Authors' Addresses

   David Waltermire (editor)
   National Institute of Standards and Technology
   100 Bureau Drive
   Gaithersburg, Maryland 20877
   USA

   Email: david.waltermire@nist.gov


   Adam W. Montville
   Tripwire, Inc.
   101 SW Main Street, Suite 1500
   Portland, Oregon 97204
   USA

   Email: amontville@tripwire.com