Internet Engineering Task Force                                 K. Paine
Internet-Draft                                               Splunk Inc.
Intended status: Informational                             O. Whitehouse
Expires: 16 July 2022                                          NCC Group
                                                             J. Sellwood
                                                                  Twilio
                                                                 A. Shaw
                                       UK National Cyber Security Centre
                                                         12 January 2022

    Indicators of Compromise (IoCs) and Their Role in Attack Defence
              draft-paine-smart-indicators-of-compromise-04

Abstract

   Cyber defenders frequently rely on Indicators of Compromise (IoCs) to
   identify, trace, and block malicious activity in networks or on
   endpoints.
This draft reviews the fundamentals, opportunities, 20 operational limitations, and best practices of IoC use. It 21 highlights the need for IoCs to be detectable in implementations of 22 Internet protocols, tools, and technologies - both for the IoCs' 23 initial discovery and their use in detection - and provides a 24 foundation for new approaches to operational challenges in network 25 security. 27 Status of This Memo 29 This Internet-Draft is submitted in full conformance with the 30 provisions of BCP 78 and BCP 79. 32 Internet-Drafts are working documents of the Internet Engineering 33 Task Force (IETF). Note that other groups may also distribute 34 working documents as Internet-Drafts. The list of current Internet- 35 Drafts is at https://datatracker.ietf.org/drafts/current/. 37 Internet-Drafts are draft documents valid for a maximum of six months 38 and may be updated, replaced, or obsoleted by other documents at any 39 time. It is inappropriate to use Internet-Drafts as reference 40 material or to cite them other than as "work in progress." 42 This Internet-Draft will expire on 16 July 2022. 44 Copyright Notice 46 Copyright (c) 2022 IETF Trust and the persons identified as the 47 document authors. All rights reserved. 49 This document is subject to BCP 78 and the IETF Trust's Legal 50 Provisions Relating to IETF Documents (https://trustee.ietf.org/ 51 license-info) in effect on the date of publication of this document. 52 Please review these documents carefully, as they describe your rights 53 and restrictions with respect to this document. Code Components 54 extracted from this document must include Revised BSD License text as 55 described in Section 4.e of the Trust Legal Provisions and are 56 provided without warranty as described in the Revised BSD License. 58 This document may not be modified, and derivative works of it may not 59 be created, except to format it for publication as an RFC or to 60 translate it into languages other than English. 
62 Table of Contents 64 1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . 3 65 1.1. Requirements Language . . . . . . . . . . . . . . . . . . 3 66 2. Terminology . . . . . . . . . . . . . . . . . . . . . . . . . 3 67 3. IoC Fundamentals . . . . . . . . . . . . . . . . . . . . . . 4 68 3.1. IoC Types and the Pyramid of Pain . . . . . . . . . . . . 4 69 3.2. IoC Lifecycle . . . . . . . . . . . . . . . . . . . . . . 8 70 3.2.1. Discovery . . . . . . . . . . . . . . . . . . . . . . 8 71 3.2.2. Assessment . . . . . . . . . . . . . . . . . . . . . 9 72 3.2.3. Sharing . . . . . . . . . . . . . . . . . . . . . . . 9 73 3.2.4. Deployment . . . . . . . . . . . . . . . . . . . . . 10 74 3.2.5. Detection . . . . . . . . . . . . . . . . . . . . . . 10 75 3.2.6. Reaction . . . . . . . . . . . . . . . . . . . . . . 10 76 3.2.7. End of Life . . . . . . . . . . . . . . . . . . . . . 10 77 4. Using IoCs Effectively . . . . . . . . . . . . . . . . . . . 10 78 4.1. Opportunities . . . . . . . . . . . . . . . . . . . . . . 11 79 4.1.1. IoCs underpin and enable multiple layers of the modern 80 defence-in-depth strategy . . . . . . . . . . . . . . 11 81 4.1.2. IoCs can be used even with limited resources . . . . 12 82 4.1.3. IoCs have a multiplier effect on attack defence 83 effort . . . . . . . . . . . . . . . . . . . . . . . 12 84 4.1.4. IoCs are easily shared . . . . . . . . . . . . . . . 13 85 4.1.5. IoCs can provide significant time savings . . . . . . 13 86 4.1.6. IoCs allow for discovery of historic attacks . . . . 14 87 4.1.7. IoCs can be attributed to specific threats . . . . . 14 88 4.2. Case Studies . . . . . . . . . . . . . . . . . . . . . . 14 89 4.2.1. Introduction . . . . . . . . . . . . . . . . . . . . 14 90 4.2.2. Cobalt Strike . . . . . . . . . . . . . . . . . . . . 15 91 4.2.2.1. Overall TTP . . . . . . . . . . . . . . . . . . . 15 92 4.2.2.2. IoCs . . . . . . . . . . . . . . . . . . . . . . 15 93 4.2.3. APT33 . . . . . . . . . . . . . . . . . . . . . . . 
. 16 94 4.2.3.1. Overall TTP . . . . . . . . . . . . . . . . . . . 16 95 4.2.3.2. IoCs . . . . . . . . . . . . . . . . . . . . . . 17 96 5. Operational Limitations . . . . . . . . . . . . . . . . . . . 17 97 5.1. Time and Effort . . . . . . . . . . . . . . . . . . . . . 17 98 5.1.1. Fragility . . . . . . . . . . . . . . . . . . . . . . 17 99 5.1.2. Discoverability . . . . . . . . . . . . . . . . . . . 18 100 5.2. Precision . . . . . . . . . . . . . . . . . . . . . . . . 19 101 5.2.1. Specificity . . . . . . . . . . . . . . . . . . . . . 19 102 5.2.2. Dual and Compromised Use . . . . . . . . . . . . . . 20 103 5.3. Privacy . . . . . . . . . . . . . . . . . . . . . . . . . 20 104 5.4. Automation . . . . . . . . . . . . . . . . . . . . . . . 21 105 6. Best Practice . . . . . . . . . . . . . . . . . . . . . . . . 22 106 6.1. Comprehensive Coverage and Defence-in-Depth . . . . . . . 22 107 6.2. Security Considerations . . . . . . . . . . . . . . . . . 24 108 7. Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . 24 109 8. IANA Considerations . . . . . . . . . . . . . . . . . . . . . 25 110 9. Acknowledgements . . . . . . . . . . . . . . . . . . . . . . 25 111 10. Informative References . . . . . . . . . . . . . . . . . . . 25 112 Authors' Addresses . . . . . . . . . . . . . . . . . . . . . . . 27 114 1. Introduction 116 This draft describes the various types of Indicator of Compromise 117 (IoC) and how they are used effectively in attack defence (often 118 called cyber defence). It introduces concepts such as the Pyramid of 119 Pain [PoP] and the IoC lifecycle to highlight how IoCs may be used to 120 provide a broad range of defences. This draft provides best practice 121 for implementers of controls based on IoCs, as well as potential 122 operational limitations. Two case studies which demonstrate the 123 usefulness of IoCs for detecting and defending against real world 124 attacks are included. 
One case study involves an intrusion set (a
   collection of indicators for a specific attack) known as APT33 and
   the other an attack tool called Cobalt Strike.  This document is not
   a comprehensive report on APT33 or Cobalt Strike and is intended to
   be read alongside publicly available reports (referred to as open
   source material among intelligence practitioners) on these threats
   (for example, [Symantec] and [NCCGroup], respectively).

1.1.  Requirements Language

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
   document are to be interpreted as described in RFC 2119 [RFC2119].

2.  Terminology

   Attack defence: the activity of providing cyber security to an
   environment through the prevention of, detection of, and response to
   attempted and successful cyber intrusions.  Successful defence is
   achieved by blocking, monitoring, and responding to adversarial
   activity at the network, endpoint, or application level.

   Command and control (C2) server: an attacker-controlled server used
   to communicate with, send commands to, and receive data from
   compromised machines.  Communication between a C2 server and
   compromised hosts is called command and control traffic.

   Domain Generation Algorithm (DGA): an algorithm used in malware
   strains to generate domain names periodically.  Adversaries may use
   DGAs to dynamically identify a destination for C2 traffic, rather
   than relying on a list of static IP addresses or domains that can be
   blocked more easily.

   Kill chain: a model for conceptually breaking down a cyber intrusion
   to allow defenders to think about, discuss, plan for, and implement
   controls to defend discrete phases of an attacker's activity
   [KillChain].
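The DGA behaviour defined above can be illustrated with a toy, deliberately simplistic sketch in Python.  This is not any real malware family's algorithm; the seed value and ".example" suffix are hypothetical:

```python
import hashlib
from datetime import date

def toy_dga(seed: str, day: date, count: int = 3) -> list[str]:
    """Derive pseudo-random candidate C2 domains from a seed and a date.

    Because the algorithm is deterministic, a defender who recovers the
    seed can pre-compute the same domains and block or sinkhole them.
    """
    domains = []
    for i in range(count):
        material = f"{seed}-{day.isoformat()}-{i}".encode()
        digest = hashlib.sha256(material).hexdigest()
        domains.append(digest[:12] + ".example")
    return domains

# Malware and defender derive identical candidate domains for a given day,
# and the candidates change when the date changes.
today = toy_dga("k3y", date(2022, 1, 12))
assert today == toy_dga("k3y", date(2022, 1, 12))
assert today != toy_dga("k3y", date(2022, 1, 13))
```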
162 Tactics, Techniques, and Procedures (TTPs): the way an adversary 163 undertakes activities in the kill chain - the choices made, methods 164 followed, tools and infrastructure used, protocols employed, and 165 commands executed. If they are distinct enough, aspects of an 166 attacker's TTPs can form specific Indicators of Compromise (IoCs), as 167 if they were a fingerprint. 169 3. IoC Fundamentals 171 3.1. IoC Types and the Pyramid of Pain 173 Indicators of Compromise (IoCs) are observable artefacts relating to 174 an attacker or their activities, such as their tactics, techniques, 175 procedures, and associated tooling and infrastructure. These 176 indicators can be observed at network or endpoint (host) levels and 177 can, with varying degrees of confidence, help network defenders (blue 178 teams) to pro-actively block malicious traffic or code execution, 179 determine a cyber intrusion occurred, or associate discovered 180 activity to a known intrusion set and thereby potentially identify 181 additional avenues for investigation. Examples of protocol-related 182 IoCs can include: 184 * IPv4 and IPv6 addresses in network traffic. 186 * DNS domain names in network traffic, resolver caches or logs. 188 * TLS Server Name Indication values in network traffic. 190 * Code signing certificates in binaries or TLS certificate 191 information (such as SHA256 hashes) in network traffic. 193 * Cryptographic hashes (e.g. MD5, SHA1 or SHA256) of malicious 194 binaries or scripts when calculated from network traffic or file 195 system artefacts. 197 * Attack tools (such as Mimikatz [Mimikatz]) and their code 198 structure and execution characteristics. 200 * Attack techniques, such as Kerberos golden tickets [GoldenTicket] 201 which can be observed in network traffic or system artefacts. 203 The common types of IoC form a 'Pyramid of Pain' [PoP] that informs 204 prevention, detection, and mitigation strategies. 
Each IoC type's
   place in the pyramid represents how much 'pain' a typical adversary
   experiences as part of changing the activity that produces that
   artefact.  The greater the pain an adversary experiences (towards the
   top), the less likely they are to change those aspects of their
   activity and the longer the IoC is likely to reflect the attacker's
   intrusion set - i.e., the less fragile those IoCs will be from a
   defender's perspective.  The layers of the PoP commonly range from
   hashes up to TTPs, with the pain ranging from simply recompiling code
   to creating a whole new attack strategy.  Other types of IoC do exist
   and could be included in an extended version of the PoP should that
   assist the defender to understand and discuss the intrusion sets most
   relevant to them.

                          /\
                         /  \                             MORE PAIN
                        /    \                            LESS FRAGILE
                       /      \                           LESS PRECISE
                      /  TTPs  \
                     /          \                         / \
                    ==============                         |
                   /              \                        |
                  /     Tools      \                       |
                 /                  \                      |
                ======================                     |
               /                      \                    |
              / network/host artefacts \                   |
             /                          \                  |
            ==============================                 |
           /                              \                |
          /          domain names          \               |
         /                                  \              |
        ======================================             |
       /                                      \            |
      /              IP addresses              \           |
     /                                          \         \ /
    ==============================================
   /                                              \       LESS PAIN
  /                  Hash values                   \      MORE FRAGILE
 /                                                  \     MORE PRECISE
======================================================

                          Figure 1

   On the lowest (and least painful) level are hashes of malicious
   files.  These are easy for a defender to gather and can be deployed
   to firewalls or endpoint protection to block malicious downloads or
   prevent code execution.  While IoCs aren't the only way for defenders
   to do this kind of blocking, they are a quick, convenient, and
   unintrusive method.  Hashes are precise detections for individual
   files based on their binary content.
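The precision and fragility of hash-value IoCs can be shown in a short Python sketch; the byte strings below are hypothetical stand-ins for file content, not real malware:

```python
import hashlib

def sha256_ioc(data: bytes) -> str:
    """Return the SHA-256 hex digest of file content, used as a hash IoC."""
    return hashlib.sha256(data).hexdigest()

# Illustrative bytes only; real IoCs would be computed from captured files.
malware_v1 = b"MZ\x90\x00example-payload"
malware_v2 = malware_v1 + b"\x00"  # a trivially modified variant

# Precise: identical content always yields the identical indicator.
assert sha256_ioc(malware_v1) == sha256_ioc(malware_v1)
# Fragile: a one-byte change yields an unrelated digest, so the old
# hash IoC no longer matches the new file.
assert sha256_ioc(malware_v1) != sha256_ioc(malware_v2)
```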
To subvert this defence,
   however, an adversary need only recompile the code, or otherwise make
   some trivial modification to the file content, to change the hash
   value.

   The next two levels are IP addresses and domain names.  Interactions
   with these may be blocked, with varying false positive rates
   (misidentifying non-malicious traffic as malicious; see Section 5),
   and these IoCs are often more painful for an adversary to subvert
   than file hashes.  The adversary may have to change IP ranges, find a
   new provider, and change their code (e.g., if the IP address is hard-
   coded, rather than resolved).  Domain names are more specific than IP
   addresses (as multiple domain names may be associated with a single
   IP address) and are more painful for an adversary to change.

   Network and endpoint artefacts, such as a malware's beaconing pattern
   on the network or the modified timestamps of files touched on an
   endpoint, are harder still to change as they relate specifically to
   the attack taking place and, in some cases, may not be under the
   direct control of the attacker.  However, more sophisticated
   attackers use TTPs or tooling that provide flexibility at this level
   (such as Cobalt Strike's malleable command and control [COBALT]) or a
   means by which some artefacts can be masked (see [Timestomp]).

   Tools and TTPs form the top two levels of the pyramid; these levels
   describe a threat actor's methodology - the way they perform the
   attack.  The tools level refers specifically to the software (and
   less frequently hardware) used to conduct the attack, whereas the
   TTPs level picks up on all the other aspects of the attack strategy.
IoCs at these levels are more complex - for example, they can include
   the details of how an attacker deploys malicious code that performs
   reconnaissance of a victim's network, pivots laterally to a valuable
   endpoint, and then downloads a ransomware payload.  TTPs and tools
   take intensive effort on the defender's part to diagnose, but they
   are fundamental to the attacker and campaign and hence incredibly
   painful for the adversary to change.

   The variation in discoverability of IoCs is indicated by the numbers
   of IoCs in the open threat intelligence community AlienVault
   [ALIENVAULT].  As of June 2021, AlienVault contained:

   *  Groups (i.e., combinations of TTPs): 441

   *  Malware families (i.e., tools): ~24,000

   *  URLs: 1,976,224

   *  Domain names: 34,959,787

   *  IPv4 addresses: 4,305,036

   *  SHA256 hash values: 4,767,891

   The number of domain names appears out of sync with the other counts,
   which decrease on the way up the PoP.  This discrepancy warrants
   further research; however, a contributing factor may be that threat
   actors use domain names to masquerade as legitimate organisations and
   so have an added incentive to create new domain names as existing
   ones are identified and confiscated.

3.2.  IoC Lifecycle

   To be of use to defenders, IoCs must first be discovered, assessed,
   shared, and deployed.  When a logged activity is identified and
   correlated to an IoC, this detection triggers a reaction by the
   defender, which may include an investigation, potentially leading to
   more IoCs being discovered, assessed, shared, and deployed.  This
   cycle continues until the IoC is determined to no longer be relevant,
   at which point it is removed from the control space.

3.2.1.  Discovery

   IoCs are often discovered initially through manual investigation or
   automated analysis.
They can be discovered in a range of sources, 327 including in networks and at endpoints. They must either be 328 extracted from logs monitoring protocol runs, code execution or 329 system operations (in the case of hashes, IP addresses, domain names, 330 and network or endpoint artefacts), or be determined through analysis 331 of attack activity or tooling. In some cases, discovery may be a 332 reactive process, where IoCs from past or current attacks are 333 identified from the traces left behind. However, discovery may also 334 result from proactive hunting for potential future IoCs extrapolated 335 from knowledge of past events (such as from identifying attacker 336 infrastructure by monitoring domain name registration patterns). 338 Crucially, for an IoC to be discovered, the indicator must be 339 extractable from the internet protocol, tool, or technology it is 340 associated with. Identifying a particular protocol run related to an 341 attack is of limited benefit if indicators cannot be extracted and 342 subsequently associated with a later related run of the same, or a 343 different, protocol. If it is not possible to tell the source or 344 destination of malicious attack traffic, it will not be possible to 345 identify and block subsequent attack traffic either. 347 3.2.2. Assessment 349 Defenders may treat different IoCs differently, depending on the 350 IoCs' quality and the defender's needs and capabilities. Defenders 351 may, for example, place differing trust in IoCs depending on their 352 source, freshness, confidence level, or the associated threat. These 353 decisions rely on associated contextual information recovered at the 354 point of discovery or provided when the IoC was shared. 356 An IoC without context is not much use for network defence. 
On the
   other hand, an IoC delivered with context (for example, the threat
   actor it relates to, its role in an attack, the last time it was seen
   in use, its expected lifetime, or other related IoCs) allows a
   network defender to make an informed choice on how to use it to
   protect their network - for example, whether to simply log it,
   actively monitor it, or outright block it.

3.2.3.  Sharing

   Once discovered and assessed, IoCs are most helpful when shared at
   scale so that many individuals and organisations can defend
   themselves.  An IoC may be shared individually (with appropriate
   context) in an unstructured manner or may be packaged alongside many
   other IoCs in a standardised format, such as Structured Threat
   Information Expression [STIX], for distribution via a structured
   feed, such as one implementing Trusted Automated Exchange of
   Intelligence Information [TAXII], or through a Malware Information
   Sharing Platform [MISP].

   While some security companies and some membership-based groups (often
   dubbed Information Sharing and Analysis Centres (ISACs)) provide paid
   intel feeds containing IoCs, there are various free IoC sources
   available, from individual security researchers up through small
   trust groups to national governmental cyber security organisations
   and international Computer Emergency Response Teams (CERTs).  Whoever
   they are, sharers commonly indicate the extent to which receivers may
   further distribute IoCs using the Traffic Light Protocol [TLP].  At
   its simplest, this indicates that the receiver may share with anyone
   (TLP WHITE), share within the defined sharing community (TLP GREEN),
   share within their organisation (TLP AMBER), or not share with anyone
   outside the original specific IoC exchange (TLP RED).

3.2.4.
Deployment 390 For IoCs to provide defence-in-depth (see Section 6.1), which is one 391 of their key strengths, and so cope with different points of failure, 392 they should be deployed in controls monitoring networks and endpoints 393 through solutions that have sufficient privilege to act on them. 394 Wherever IoCs exist they need to be made available to security 395 controls and associated apparatus to ensure they can be deployed 396 quickly and widely. While IoCs may be manually assessed after 397 discovery or receipt, significant advantage may be gained by 398 automatically ingesting, processing, assessing, and deploying IoCs 399 from logs or intel feeds to the appropriate security controls. 401 3.2.5. Detection 403 Security controls with deployed IoCs monitor their relevant control 404 space and trigger a generic or specific reaction upon detection of 405 the IoC in monitored logs. 407 3.2.6. Reaction 409 The reaction to an IoC's detection may differ depending on factors 410 such as the capabilities and configuration of the control it is 411 deployed in, the assessment of the IoC, and the properties of the log 412 source in which it was detected. For example, a connection to a 413 known botnet C2 server may indicate a problem but does not guarantee 414 it, particularly if the server is a compromised host still performing 415 some other legitimate functions. Common reactions include event 416 logging, triggering alerts, and blocking or terminating the source of 417 the activity. 419 3.2.7. End of Life 421 How long an IoC remains useful varies and is dependent on factors 422 including initial confidence level, fragility, and precision of the 423 IoC (discussed further in Section 5). In some cases, IoCs may be 424 automatically 'aged' based on their initial characteristics and so 425 will reach end of life at a predetermined time. 
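A minimal sketch of such automatic ageing, assuming (hypothetically) that each IoC record carries a deployment date and a lifetime assigned at assessment:

```python
from datetime import datetime, timedelta

# Hypothetical records; real deployments would carry richer context.
IOCS = [
    {"value": "203.0.113.7", "type": "ipv4",
     "deployed": datetime(2022, 1, 1), "lifetime_days": 30},
    {"value": "bad.example", "type": "domain",
     "deployed": datetime(2021, 6, 1), "lifetime_days": 90},
]

def expire_iocs(iocs, now):
    """Split IoCs into (live, expired) based on their assigned lifetime."""
    live, expired = [], []
    for ioc in iocs:
        cutoff = ioc["deployed"] + timedelta(days=ioc["lifetime_days"])
        (live if now < cutoff else expired).append(ioc)
    return live, expired

# The stale domain indicator is withdrawn to reduce false positives,
# while the fresher IP address indicator remains deployed.
live, expired = expire_iocs(IOCS, datetime(2022, 1, 12))
assert [i["value"] for i in expired] == ["bad.example"]
```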
In other cases, IoCs
   may become invalidated due to a shift in the threat actor's TTPs
   (e.g., resulting from a new development or their discovery) or due to
   remediation action taken by a defender.  End of life may also come
   about due to activity unrelated to attack or defence, such as when a
   third-party service used by the attacker changes or goes offline.
   Whatever the cause, IoCs should be removed from detection at the end
   of their life to reduce the likelihood of false positives.

4.  Using IoCs Effectively

4.1.  Opportunities

   IoCs offer a variety of opportunities to cyber defenders as part of a
   modern defence-in-depth strategy.  No matter the size of an
   organisation, IoCs can provide an effective, scalable, and efficient
   defence mechanism against classes of attack from the latest threats
   or specific intrusion sets which may have struck in the past.

4.1.1.  IoCs underpin and enable multiple layers of the modern defence-
        in-depth strategy

   Firewalls, Intrusion Detection Systems (IDS), and Intrusion
   Prevention Systems (IPS) all employ IoCs to identify and mitigate
   threats across networks.  Anti-Virus (AV) and Endpoint Detection and
   Response (EDR) products deploy IoCs via catalogues or libraries to
   all supported client endpoints.  Security Information and Event
   Management (SIEM) platforms compare IoCs against aggregated logs from
   various sources - network, endpoint, and application.  Of course,
   IoCs do not address all attack defence challenges - but they form a
   vital tier of any organisation's layered defence.  Some types of IoC
   may be present across all those controls while others may be deployed
   only in certain layers.  Further, IoCs relevant to a specific kill
   chain may only reflect activity performed during a certain phase and
   so need to be combined with other IoCs or mechanisms for complete
   coverage of the kill chain as part of an intrusion set.
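The kind of layering this section describes can be reduced to a toy sketch: two independent IoC checks, either of which can stop an attack on its own.  All indicator values below are hypothetical:

```python
# Hypothetical indicator sets; in practice these come from threat
# intelligence feeds and are deployed to separate controls.
BLOCKED_HASHES = {"deadbeef" * 8}       # endpoint layer: file-hash IoCs
BLOCKED_DOMAINS = {"c2.bad.example"}    # network layer: domain IoCs

def endpoint_blocks(file_hash: str) -> bool:
    """Endpoint control: block execution of known-bad binaries by hash."""
    return file_hash in BLOCKED_HASHES

def network_blocks(domain: str) -> bool:
    """Network control: block DNS look-ups for known C2 domains."""
    return domain in BLOCKED_DOMAINS

# A recompiled binary has a new hash and evades the endpoint layer...
assert not endpoint_blocks("cafef00d" * 8)
# ...but is still caught when it resolves its hard-coded C2 domain.
assert network_blocks("c2.bad.example")
```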
As an example, open source malware can be deployed by many different
   actors, each using their own TTPs and infrastructure.  However, if
   the actors use the same executable, the hash remains the same and
   this IoC can be deployed in endpoint protection to block execution
   regardless of individual actor, infrastructure, or other TTPs.
   Should this defence fail in a specific case, for example if an actor
   recompiles the executable binary producing a unique hash, other
   defences can prevent them from progressing further through their
   attack - for instance, by blocking known malicious domain name
   look-ups and thereby preventing the malware calling out to its C2
   infrastructure.

   Alternatively, another malicious actor may regularly change the tools
   and infrastructure (and thus the indicators in their intrusion set)
   deployed across different campaigns, but their access vectors may
   remain consistent and well-known.  In this case, this access TTP can
   be recognised and proactively defended against even while there is
   uncertainty about the intended subsequent activity.  For example, if
   their access vector consistently exploits a vulnerability in
   software, regular and estate-wide patching can prevent the attack
   from taking place.  Should these pre-emptive measures fail, however,
   other IoCs observed across multiple campaigns may be able to prevent
   the attack at later stages in the kill chain.

4.1.2.  IoCs can be used even with limited resources

   IoCs are inexpensive, scalable, and easy to deploy, making their use
   particularly beneficial for smaller entities, especially where they
   are exposed to a significant threat.  For example, a small
   manufacturing subcontractor in a supply chain producing a critical,
   highly specialised component may represent an attractive target
   because there would be disproportionate impact on both the supply
   chain and the prime contractor if it were compromised.
It may be
   reasonable to assume that this small manufacturer will have only
   basic security (whether internal or outsourced) and, while it is
   likely to have comparatively fewer resources to manage the risks it
   faces than its larger partners, it can still leverage IoCs to great
   effect.  Small entities like this can deploy IoCs to provide baseline
   protection against known threats without having access to a
   well-resourced, mature defensive team and the threat intelligence
   relationships necessary to perform resource-intensive investigations.
   One reason for this is that use of IoCs does not require the same
   intensive training as needed for more subjective controls, such as
   those based on manual analysis of tipped machine learning events.  In
   this way, a major part of the appeal of IoCs is that they can afford
   some level of protection to organisations across spectrums of
   resource capability, maturity, and sophistication.

4.1.3.  IoCs have a multiplier effect on attack defence effort

   Individual IoCs can provide widespread protection that scales
   effectively for defenders.  Within a single organisation, simply
   blocking one IoC may protect thousands of users, and that blocking
   may be performed (depending on the IoC type) across multiple security
   controls monitoring numerous different types of activity within
   networks, endpoints, and applications.  While discovering one IoC can
   be intensive, once shared via well-established routes (as discussed
   in Section 3.2.3), that individual IoC may further protect thousands
   of organisations and so all of their users.  The prime contractor
   from our earlier example can supply IoCs to the small subcontractor
   and so further uplift that smaller entity's defensive capability
   while at the same time protecting itself and its interests.
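The multiplier effect can be illustrated with a toy sketch: one shared indicator, distributed once, updates the blocklist of every subscribing organisation.  The organisation names and address below are hypothetical:

```python
# Each subscriber maintains its own blocklist (here, a set of indicators).
SUBSCRIBERS = {
    "prime-contractor": set(),
    "small-subcontractor": set(),
    "another-supplier": set(),
}

def share_ioc(ioc: str, subscribers: dict) -> None:
    """Distribute a single indicator to every subscriber's blocklist."""
    for blocklist in subscribers.values():
        blocklist.add(ioc)

# One discovery effort now protects every subscribing organisation.
share_ioc("198.51.100.23", SUBSCRIBERS)
assert all("198.51.100.23" in bl for bl in SUBSCRIBERS.values())
```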
Not only may multiple organisations benefit from directly receiving
   shared IoCs, but they may also benefit from the IoCs' application in
   services they utilise.  In the case of an ongoing email phishing
   campaign, IoCs can be monitored, discovered, and deployed quickly and
   easily by individual organisations.  However, if they are instead
   deployed via a mechanism such as a protective DNS filtering service,
   they can be more effective still - an email campaign may be mitigated
   before some organisations' recipients ever click the link or before
   some malicious payloads can call out for instructions.  Through such
   approaches, other parties can be protected without additional effort.

4.1.4.  IoCs are easily shared

   There is significant benefit to be had from the sharing of IoCs, and
   they can be easily shared for two main reasons: firstly, indicators
   are easy to distribute as they are textual, and so in small numbers
   are frequently exchanged in emails, blog posts, or technical reports;
   secondly, standards such as MISP Core [MISPCORE], OpenIOC [OPENIOC],
   and STIX [STIX] provide well-defined formats for sharing large
   collections or regular sets of IoCs along with all the associated
   context.  Quick and easy sharing of IoCs gives blanket coverage for
   organisations and allows widespread mitigation in a timely fashion -
   they can be shared with systems administrators, from small to large
   organisations and from large teams to single individuals, allowing
   them all to implement defences on their networks.

4.1.5.  IoCs can provide significant time savings

   Not only are there time savings from sharing IoCs, avoiding
   duplication of investigation effort, but deploying them automatically
   at scale is seamless for many enterprises.
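The automatic deployment referred to here can be sketched as routing each indicator from a structured feed to the control able to act on its type.  The feed format and values below are a hypothetical simplification, not STIX:

```python
import json

# A hypothetical feed fragment; real feeds typically use formats such as
# STIX delivered over TAXII.
FEED = json.loads("""
[
  {"type": "sha256", "value": "ab12cd34ef56ab12cd34ef56ab12cd34", "tlp": "GREEN"},
  {"type": "domain", "value": "c2.bad.example", "tlp": "AMBER"},
  {"type": "ipv4", "value": "203.0.113.7", "tlp": "GREEN"}
]
""")

# Each control acts on one IoC type: e.g., EDR on hashes, a protective
# DNS filter on domains, a firewall on IP addresses.
CONTROLS = {"sha256": set(), "domain": set(), "ipv4": set()}

for ioc in FEED:
    CONTROLS[ioc["type"]].add(ioc["value"])

assert "c2.bad.example" in CONTROLS["domain"]
assert "203.0.113.7" in CONTROLS["ipv4"]
```

In a real pipeline the routing step would also apply the assessment and TLP-handling decisions described in Sections 3.2.2 and 3.2.3 before deployment.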
Where automatic deployment of IoCs is
   working well, organisations and users get blanket protection with
   minimal human intervention and minimal effort, a key goal of attack
   defence.  The ability to do this at scale and at pace is often vital
   when responding to agile threat actors that may change their
   intrusion set frequently, with the relevant IoCs changing as a
   result.  Conversely, protecting a complex network without automatic
   deployment of IoCs could mean manually updating every single endpoint
   or network device consistently and reliably to the same security
   state.  The work this entails (including locating assets and devices,
   polling for logs and system information, and manually checking patch
   levels) introduces complexity and a need for skilled analysts and
   engineers.  While it is still necessary to invest effort to eliminate
   false positives when widely deploying IoCs, the cost and effort
   involved can be far smaller than the work entailed in reliably and
   manually updating all endpoint and network devices - especially on
   legacy systems that may be particularly complicated, or even
   impossible, to update.

4.1.6.  IoCs allow for discovery of historic attacks

   A network defender can use recently acquired IoCs in conjunction with
   historic data, such as logged DNS queries or email attachment hashes,
   to hunt for signs of past compromise.  Not only can this technique
   help to build up a clear picture of past attacks, but it also allows
   for retrospective mitigation of the effects of any previous
   intrusion.  This opportunity relies on the historic data not having
   been compromised itself, by a technique such as Timestomp
   [Timestomp], and not being incomplete due to data retention policies,
   but it is nonetheless valuable for detecting and remediating past
   attacks.

4.1.7.
IoCs can be attributed to specific threats 588 Deployment of various modern security controls, such as firewall 589 filtering or EDR, comes with an inherent trade-off between breadth of 590 protection and various costs, including the risk of false positives 591 (see Section 5.2), staff time, and pure financial costs. 592 Organisations can use threat modelling and information assurance to 593 assess and prioritise risk from identified threats and to determine 594 how they will mitigate or accept each of them. Contextual 595 information tying IoCs to specific threats or actors and shared 596 alongside the IoCs enables organisations to focus their defences 597 against particular risks and so allows them the technical freedom and 598 capability to choose their risk posture and defence methods. 599 Producing this contextual information before sharing IoCs can take 600 intensive analytical effort as well as specialist tools and training. 601 At its simplest it can involve documenting sets of IoCs from multiple 602 instances of the same attack campaign, say from multiple unique 603 payloads (and therefore with distinct file hashes) from the same 604 source and connecting to the same C2 server. A more complicated 605 approach is to cluster similar combinations of TTPs seen across 606 multiple campaigns over a period of time. This can be used alongside 607 detailed malware reverse engineering and target profiling, overlaid 608 on a geopolitical and criminal backdrop, to infer attribution to a 609 single threat actor. 611 4.2. Case Studies 613 4.2.1. Introduction 615 The following two case studies illustrate how IoCs may be identified 616 in relation to threat actor tooling (in the first) and a threat actor 617 campaign (in the second). The case studies further highlight how 618 these IoCs may be used by cyber defenders. 620 4.2.2.
Cobalt Strike 622 Cobalt Strike [COBALT] is a commercial attack framework that consists 623 of an implant framework (beacon), network protocol, and a C2 server. 624 The beacon and network protocol are highly malleable, meaning the 625 protocol representation 'on the wire' can be easily changed by an 626 attacker to blend in with legitimate traffic. The proprietary beacon 627 supports TLS encryption overlaid with a custom encryption scheme 628 based on a public-private keypair. The product also supports other 629 techniques, such as domain fronting [DFRONT], in an attempt to avoid 630 obvious passive detection by static network signatures. 632 4.2.2.1. Overall TTP 634 A beacon configuration describes how the implant should operate and 635 communicate with its C2 server. This configuration also provides 636 ancillary information such as the Cobalt Strike user's licence 637 watermark. 639 4.2.2.2. IoCs 641 Tradecraft has been developed that allows the fingerprinting of C2 642 servers based on their responses to specific requests. This allows 643 the servers to be identified and then their beacon configurations to 644 be downloaded and the associated infrastructure addresses extracted 645 as IoCs. 647 The resulting mass IoCs for Cobalt Strike are: 649 * IP addresses of the C2 servers 651 * domain names used 653 Whilst these IoCs need to be refreshed regularly (due to the ease 654 with which they can be changed), the authors' experience of protecting 655 public sector organisations shows these IoCs are effective for 656 disrupting threat actor operations that use Cobalt Strike. 658 These IoCs can be used to check historical data for evidence of past 659 compromise, as well as being deployed to detect or block future infection 660 in a timely manner, thereby contributing to preventing the loss of 661 user and system data. 663 4.2.3.
APT33 665 In contrast to the first case study, this one describes a current 666 campaign by the threat actor APT33, also known as Elfin and Refined 667 Kitten (see [Symantec]). APT33 has been assessed by industry to be a 668 state-sponsored group [FireEye2], yet in this case study, IoCs still 669 gave defenders an effective tool against such a powerful adversary. 670 The group has been active since at least 2015 and is known to target 671 a range of sectors including petrochemical, government, engineering, 672 and manufacturing. Activity has been seen in countries across the 673 globe, but predominantly in the USA and Saudi Arabia. 675 4.2.3.1. Overall TTP 677 The techniques employed by this actor exhibit a relatively low level 678 of sophistication considering it is a state-sponsored group; 679 typically, APT33 performs spear phishing (sending targeted malicious 680 emails to a limited number of pre-selected recipients) with document 681 lures that imitate legitimate publications. User interaction with 682 these lures executes the initial payload and enables APT33 to gain 683 initial access. Once inside a target network, APT33 attempts to 684 pivot to other machines to gather documents and gain access to 685 administrative credentials. In some cases, users are tricked into 686 providing credentials that are then used with RULER, a freely 687 available tool that allows exploitation of an email client. The 688 attacker, in possession of a target's password, uses RULER to access 689 the target's mail account and embeds a malicious script which will be 690 triggered when the mail client is next opened, resulting in the 691 execution of malicious code (often additional malware retrieved from 692 the Internet) (see [FireEye]). 694 APT33 sometimes deploys a destructive tool which overwrites the 695 master boot record (MBR) of the hard drives in as many PCs as 696 possible.
This type of tool, known as a wiper, results in data loss 697 and renders devices unusable until the operating system is 698 reinstalled. In some cases, the actor uses administrator credentials 699 to invoke execution across a large swathe of a company's IT estate at 700 once; where this isn't possible, the actor may attempt to spread the 701 wiper first, either manually or by using worm-like capabilities against 702 unpatched vulnerabilities on the networked computers. 704 4.2.3.2. IoCs 706 As a result of investigations by a partnership of industry and the 707 UK's National Cyber Security Centre (NCSC), a set of IoCs was 708 compiled and shared with both public and private sector organisations 709 so network defenders could search for them in their networks. 710 Detection of these IoCs is likely indicative of APT33 targeting and 711 could indicate potential compromise and subsequent use of destructive 712 malware. Network defenders could also initiate processes to block 713 these IoCs to foil future attacks. This set of IoCs comprised: 715 * 9 hashes and email subject lines 717 * 5 IP addresses 719 * 7 domain names 721 5. Operational Limitations 723 The different IoC types inherently embody a set of trade-offs for 724 defenders between the risk of false positives (misidentifying 725 non-malicious traffic as malicious) and the risk of failing to identify 726 attacks. The attacker's relative pain of modifying attacks to 727 subvert known IoCs, as discussed using the Pyramid of Pain (PoP) in 728 Section 3.1, inversely correlates with the fragility of the IoC and 729 with the precision with which the IoC identifies an attack. Research 730 is needed to elucidate the exact nature of these trade-offs between 731 pain, fragility, and precision. 733 5.1. Time and Effort 735 5.1.1. Fragility 737 As alluded to in Section 3.1, the Pyramid of Pain can be thought of 738 in terms of fragility for the defender as well as pain for the 739 attacker.
The less painful it is for the attacker to change an IoC, 740 the more fragile that IoC is as a defence tool. It is relatively 741 simple to determine the hash value for various malicious file 742 attachments observed as lures in a phishing campaign and to deploy 743 these through AV or an email gateway security control. However, 744 those hashes are fragile and can (and often will) be changed between 745 campaigns. Malicious IP addresses and domain names can also be 746 changed between campaigns, but this happens less frequently due to 747 the greater pain of managing infrastructure compared to altering 748 files, and so IP addresses and domain names provide a less fragile 749 detection capability. 751 This does not mean the more fragile IoC types are worthless. 752 Firstly, there is no guarantee a fragile IoC will change, and if a 753 known IoC isn't changed by the attacker but also isn't blocked, then the 754 defender has missed an opportunity to halt an attack in its tracks. 755 Secondly, even within one IoC type, there is variation in the 756 fragility depending on the context of the IoC. The file hash of a 757 phishing lure document (with a particular theme and containing a 758 specific staging server link) may be more fragile than the file hash 759 of a remote access trojan payload the attacker uses after initial 760 access. That in turn may be more fragile than the file hash of an 761 attacker-controlled post-exploitation reconnaissance tool that 762 doesn't connect directly to the attacker's infrastructure. Thirdly, 763 some threats and actors are more capable or inclined to change than 764 others, and so the fragility of an IoC for one actor may be very 765 different from that of an IoC of the same type for another. 767 Ultimately, fragility is a defender's concern that impacts the 768 ongoing efficacy of each IoC and will factor into decisions about end 769 of life.
However, it should not prevent adoption of individual IoCs 770 unless there are sufficiently strict resource constraints that 771 demand down-selection of IoCs for deployment. More usually, 772 defenders researching threats will attempt to identify IoCs of 773 varying fragilities for a particular kill chain to provide the 774 greatest chances of ongoing detection given available investigative 775 effort (see Section 5.1.2) while still maintaining precision (see 776 Section 5.2). 778 Finally, it is worth noting that fragility can apply to an entire 779 class of IoCs for a range of reasons; for example, IPv4 addresses are 780 becoming increasingly fragile due to addresses growing scarce, 781 widespread use of cloud services, and the ease with which domain 782 names can be moved from one hosting provider to another (thus 783 changing IP range). 785 5.1.2. Discoverability 787 To be used in attack defence, IoCs must first be discovered through 788 proactive hunting or reactive investigation. As noted in 789 Section 3.1, IoCs in the tools and TTPs levels of the PoP require 790 intensive effort and research to discover. However, it is not just 791 an IoC's type that impacts its discoverability. The sophistication 792 of the actor, their TTPs, and their tooling play a significant role, 793 as does whether the IoC is retrieved from logs after the attack or 794 extracted from samples or infected systems earlier. 796 For example, on an infected endpoint it may be possible to identify a 797 malicious payload and then extract relevant IoCs, such as the file 798 hash and its C2 server address. If the attacker used the same static 799 payload throughout the attack, this single file hash value will cover 800 all instances. If, however, the attacker diversified their payloads, 801 that hash can be more fragile and other hashes may need to be 802 discovered from other samples used on other infected endpoints.
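The effect of payload diversification on hash IoCs can be illustrated with a short sketch (the sample bytes below are placeholders, not real malware):

```python
import hashlib

def file_hash_ioc(data: bytes) -> str:
    """SHA-256 of a sample's contents - the usual file-hash IoC."""
    return hashlib.sha256(data).hexdigest()

# Two payloads from the same campaign that differ by even a single
# byte (e.g. a diversified build) yield entirely different hash IoCs,
# so each sample must be collected and hashed separately.
sample_a = b"MZ...payload..."
sample_b = b"MZ...payload..!"
hash_a = file_hash_ioc(sample_a)
hash_b = file_hash_ioc(sample_b)
```

This is precisely why a single recovered hash may not cover a campaign whose payloads are rebuilt per victim.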
803 Concurrently, the attacker may have simply hard-coded configuration 804 data into the payload, in which case the C2 server address can be 805 easy to recover. Alternatively, the address can be stored in an 806 obfuscated persistent configuration either within the payload (e.g., 807 within its source code or associated resource) or the infected 808 endpoint's filesystem (e.g., using alternate data streams [ADS]), 809 thus requiring more effort to discover. Further, the attacker 810 may be storing the configuration in memory only or relying on a 811 domain generation algorithm (DGA) to generate C2 server addresses on 812 demand. In this case, extracting the C2 server address can require a 813 memory dump or the execution or reverse engineering of the DGA, all 814 of which increase the effort still further. 816 If the malicious payload has already communicated with its C2 server, 817 then it may be possible to discover that C2 server address IoC from 818 network traffic logs more easily. However, once again multiple 819 factors can make discoverability more challenging, such as the 820 increasing adoption of HTTPS for malicious traffic - meaning C2 821 communications blend in with legitimate traffic and can be 822 complicated to identify. Further, some malware obfuscates its 823 intended destinations by using alternative DNS resolution services 824 (e.g., OpenNIC [OPENNIC]) or by performing transformation operations 825 on resolved IP addresses to determine the real C2 server address 826 encoded in the DNS response [LAZARUS]. 828 5.2. Precision 830 5.2.1. Specificity 832 Alongside pain and fragility, the PoP's levels can also be considered 833 in terms of how precise the defence can be, with the false positive 834 rate usually increasing as we move up the pyramid to less specific 835 IoCs.
A hash value identifies a particular file, such as an 836 executable binary, and given a suitable cryptographic hash function, 837 the false positives are effectively nil; by suitable we mean one with 838 preimage resistance and strong collision resistance. In comparison, 839 IoCs in the upper levels (such as some network artefacts or tool 840 fingerprints) may apply to various malicious binaries, and even 841 benign software may share the same identifying characteristics. For 842 example, threat actor tools making web requests may be identified by 843 the user-agent string specified in the request header. However, this 844 value may be the same as that used by legitimate software, either by the 845 attacker's choice or through use of a common library. 847 It should come as no surprise that the more specific an IoC, the more 848 fragile it is - as things change, they move outside of that specific 849 focus. While less fragile IoCs may be desirable for their robustness 850 and longevity, this must be balanced with the increased chance of 851 false positives from their broadness. One way in which this balance 852 is achieved is by grouping indicators and using them in combination. 853 While two low-specificity IoCs for a particular attack may each have 854 chances of false positives, when observed together they may provide 855 greater confidence of an accurate detection of the relevant kill 856 chain. 858 5.2.2. Dual and Compromised Use 860 As noted in Section 3.2.2, the context of an IoC, such as the way in 861 which the attacker uses it, may equally impact the precision with 862 which that IoC detects an attack. An IP address representing an 863 attacker's staging server, from which their attack chain downloads 864 subsequent payloads, offers a precise IP address for attacker-owned 865 infrastructure.
However, it will be less precise if that IP address 866 is associated with a cloud hosting provider and it is regularly 867 reassigned from one user to another; and it will be less precise 868 still if the attacker compromised a legitimate web server and is 869 abusing the IP address alongside the ongoing legitimate use. 871 In a similar manner, a file hash representing an attacker's custom 872 remote access trojan will be very precise; however, a file hash 873 representing a common enterprise remote administration tool will be 874 more or less precise depending on whether the defender organisation usually 875 uses that tool for legitimate systems administration or not. 876 Notably, such dual use indicators are context specific both in 877 whether they are usually used legitimately and in the way they are 878 used in a particular circumstance. Use of the remote administration 879 tool may be legitimate for support staff during working hours, but 880 not generally for non-support staff, particularly if observed outside 881 of that employee's usual working hours. 883 It is for reasons such as these that context is so important when sharing 884 and using IoCs. 886 5.3. Privacy 888 As noted in Section 3.2.2, context is critical to effective detection 889 using IoCs. However, at times, defenders may feel there are privacy 890 concerns with how much to share about a cyber intrusion, and with 891 whom. For example, defenders may generalise the IoCs' description of 892 the attack by removing context to facilitate sharing. This 893 generalisation can result in an incomplete set of IoCs being shared 894 or IoCs being shared without clear indication of what they represent 895 and how they are involved in an attack. The sharer will consider the 896 privacy trade-off when generalising the IoC, and should bear in mind 897 that the loss of context can greatly reduce the utility of the IoC 898 for those they share with.
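A sketch of such generalisation, assuming a purely illustrative record layout (these field names are not drawn from STIX, MISP, or any other standard):

```python
# Hypothetical indicator record; the field names are illustrative only.
ioc = {
    "value": "198.51.100.7",
    "type": "ipv4",
    "role": "staging server",     # context: how the attacker used it
    "campaign": "2022 phishing",  # context: which intrusion it belongs to
    "victim_sector": "energy",    # context the sharer may deem sensitive
}

SENSITIVE = {"victim_sector", "campaign"}

def generalise(record, sensitive=SENSITIVE):
    """Strip fields the sharer considers sensitive before sharing.
    Note the trade-off: the recipient loses context that would have
    aided triage and deployment."""
    return {k: v for k, v in record.items() if k not in sensitive}

shared = generalise(ioc)
```

The recipient of `shared` still has a deployable indicator, but can no longer tell which campaign it belongs to or who was targeted.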
900 Self-censoring by sharers appears more prevalent and more extensive 901 when sharing IoCs into groups with more members, into groups with a 902 broader range of perceived member expertise (particularly the further 903 the lower bound extends below the sharer's perceived own expertise), 904 and into groups that do not maintain strong intermember trust. Trust 905 within such groups often appears strongest where members: interact 906 regularly; have common backgrounds, expertise, or challenges; conform 907 to behavioural expectations (such as by following defined handling 908 requirements and not misrepresenting material they share); and 909 reciprocate the sharing and support they receive. Research 910 opportunities exist to determine how IoC sharing groups' requirements 911 for trust and members' interaction strategies vary and whether 912 sharing can be optimised or incentivised, such as by using game 913 theoretic approaches. 915 5.4. Automation 917 While IoCs can be effectively utilised by organisations of various 918 sizes and resource constraints, as discussed in Section 4.1.2, 919 automation of IoC ingestion, processing, assessment, and deployment 920 is critical for managing them at scale. Manual oversight and 921 investigation may be necessary intermittently, but a reliance on 922 manual processing and searching only works at small scale or for 923 occasional cases. 925 The adoption of automation can also enable faster and easier 926 correlation of IoC detections across log sources, time, and space. 927 The response can thereby be tailored to reflect the number and 928 overlap of detections from a particular intrusion set, and the 929 necessary context can be presented alongside the detection when 930 generating any alerts for defender review.
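A minimal sketch of this correlation step, assuming hypothetical log sources, indicator values, and intrusion-set labels:

```python
from collections import defaultdict

# Hypothetical detections from different log sources:
# (log source, indicator value, attributed intrusion set).
detections = [
    ("dns",      "bad.example",   "actor-A"),
    ("proxy",    "bad.example",   "actor-A"),
    ("endpoint", "deadbeef.exe",  "actor-A"),
    ("dns",      "other.example", "actor-B"),
]

def correlate(events):
    """Group IoC hits by intrusion set so an alert can reflect how many
    distinct sources and indicators corroborate the same activity."""
    by_set = defaultdict(lambda: {"sources": set(), "iocs": set()})
    for source, ioc, intrusion_set in events:
        by_set[intrusion_set]["sources"].add(source)
        by_set[intrusion_set]["iocs"].add(ioc)
    return by_set

summary = correlate(detections)
```

Here a single pass shows that the hits attributed to one intrusion set are corroborated by three sources and two indicators, supporting a higher-confidence alert than a lone detection would.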
While manual processing 931 and searching may be no less accurate (although IoC transcription 932 errors are a common problem during busy incidents), the correlation 933 and cross-referencing necessary to provide the same degree of 934 situational awareness is much more time consuming. 936 A third important consideration when performing manual processing is 937 the longer-term monitoring and adjustment necessary to effectively 938 age out IoCs as they become irrelevant or, more crucially, 939 inaccurate. Manual implementations must often simply include or 940 exclude an IoC, as anything more granular is time consuming and 941 complicated to manage. In contrast, automation can support a 942 gradual reduction in confidence scoring, enabling IoCs to contribute 943 but not individually disrupt a detection as their specificity 944 reduces. 946 6. Best Practice 948 6.1. Comprehensive Coverage and Defence-in-Depth 950 IoCs provide the defender with a range of options across the Pyramid 951 of Pain's (PoP) layers, enabling them to balance precision and 952 fragility to give high confidence detections that are practical and 953 useful. Broad coverage of the PoP is important as it allows the 954 defender to cycle between high precision but high fragility options 955 and more robust but less precise indicators. As fragile indicators 956 are changed, the more robust IoCs allow for continued detection and 957 faster rediscovery. For this reason, it's important to collect as 958 many IoCs as possible across the whole PoP. 960 At the top of the PoP, TTPs identified through anomaly detection and 961 machine learning are more likely to have false positives, which gives 962 lower confidence and, vitally, requires better trained analysts to 963 understand and implement the defences. However, these are very 964 painful for attackers to change and so when tuned appropriately 965 provide a robust detection.
Hashes, at the bottom, are precise and 966 easy to deploy but are fragile and easily changed within and across 967 campaigns by malicious actors. 969 Endpoint Detection and Response (EDR) or Anti-Virus (AV) are often 970 the first port of call for protection from intrusion, but endpoint 971 solutions aren't a panacea. One issue is that there are many 972 environments where it is not possible to keep them updated, or in 973 some cases, deploy them at all. For example, the Owari botnet, a 974 Mirai variant [Owari], exploited Internet of Things (IoT) devices 975 where such solutions could not be deployed. It is because of such 976 gaps, where endpoint solutions can't be relied on (see [EVOLVE]), 977 that a defence-in-depth approach is commonly advocated, blending 978 both network and endpoint defences. 980 If an attack happens, the hope is that an endpoint solution will pick it 981 up. If it doesn't, it could be for many good reasons: the endpoint 982 solution could be quite conservative and aim for a low false-positive 983 rate; it might not have ubiquitous coverage; or it might only be able 984 to defend the initial step of the kill chain [KillChain]. In the 985 worst cases, the attack specifically disables the endpoint solution 986 or the malware is brand new and so won't be recognised. 988 In the middle of the pyramid, IoCs related to network information 989 (such as domains and IP addresses) can be particularly useful. They 990 allow for broad coverage, without requiring each and every endpoint 991 security solution to be updated, as they may be detected and enforced 992 in a more centralised manner at network choke points (such as proxies 993 and gateways). This makes them particularly useful in contexts where 994 ensuring endpoint security isn't possible, such as "Bring Your Own 995 Device" (BYOD), Internet of Things (IoT), and legacy environments.
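A toy illustration of such a centralised choke-point check - here a DNS-level blocklist lookup (the blocked names are placeholders) - shows how one lookup can cover every endpoint behind the resolver, including subdomains of a blocked domain:

```python
BLOCKED = {"malicious.example", "c2.invalid"}

def is_blocked(qname: str, blocklist=BLOCKED) -> bool:
    """Return True if the queried name, or any parent domain of it,
    appears on the blocklist. One central check at the resolver
    protects every endpoint behind it, managed or not."""
    labels = qname.lower().rstrip(".").split(".")
    # Test the name itself and each parent domain suffix.
    return any(".".join(labels[i:]) in blocklist for i in range(len(labels)))
```

Because the check runs at the choke point rather than on the device, it applies equally to managed enterprise systems, BYOD, and IoT equipment that can run no agent at all.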
996 It's important to note that these network-level IoCs can also protect 997 against compromised endpoints when these IoCs are used to detect the 998 attack in network traffic, even if the compromise passes unnoticed. 999 For example, in a BYOD environment, enforcing security policies on 1000 the device can be difficult, so non-endpoint IoCs and solutions are 1001 needed to allow detection of compromise even with no endpoint 1002 coverage. 1004 One example of how IoCs provide a layer of a defence-in-depth 1005 solution is Protective DNS (PDNS), a free and voluntary DNS filtering 1006 service provided by the UK NCSC for UK public sector organisations 1007 [PDNS]. In 2018, this service blocked access to 57.4 million DNS 1008 queries for 118,527 unique reasons (out of 68.7 billion total 1009 queries) for the organisations signed up to the service [ACD2019]. 28 1010 million of them were for domain generation algorithms (DGAs) [DGAs], 1011 including 15 known DGAs, which are a type of TTP. 1013 IoCs such as malicious domains can be put on PDNS straight away and 1014 can then be used to prevent access to those known malicious domains 1015 across the entire estate of over 460 separate public sector entities 1016 that use NCSC's PDNS [Annual2019]. Coverage can be patchy with 1017 endpoints, as the roll-out of protections isn't uniform or 1018 necessarily fast - but if the IoC is on PDNS, a consistent defence is 1019 maintained. This offers protection, regardless of whether the 1020 context is a BYOD environment or a managed enterprise system. Other 1021 IoCs, like Server Name Indication (SNI) values in TLS or the server 1022 certificate information, also provide protection. 1024 Similar to the AV scenario, large scale services face risk decisions 1025 around balancing threat against business impact from false positives. 1026 Organisations need to retain the ability to be more 1027 conservative with their own defences, while still benefiting from 1028 them.
For instance, a commercial DNS filtering service is intended 1029 for broad deployment, so will have a risk tolerance similar to AV 1030 products; whereas DNS filtering intended for government users (e.g. 1031 PDNS) can be more conservative, but will still have a relatively 1032 broad deployment if intended for the whole of government. A 1033 government department or specific company, on the other hand, might 1034 accept the risk of disruption and arrange firewalls or other network 1035 protection devices to completely block anything related to particular 1036 threats, regardless of the confidence, but rely on a DNS filtering 1037 service for everything else. 1039 Other network defences, like middlebox mitigations, proxy defences, 1040 and application layer firewalls, can also make use of this blanket 1041 coverage from IoCs, but are out of scope for this draft. Note too that 1042 DNS traffic passes through firewalls and proxies, and possibly on to a 1043 DNS filtering service; it doesn't have to be unencrypted, but these 1044 appliances must be able to decrypt it to do anything useful with it, 1045 like blocking queries for known bad domains. 1047 Covering a broad range of IoCs gives defenders a wide range of 1048 benefits: they are easy to deploy; they provide a high enough 1049 confidence to be effective; at least some will be painful for 1050 attackers to change; their distribution around the infrastructure 1051 allows for different points of failure, and so overall they enable 1052 the defenders to disrupt bad actors. The combination of these 1053 factors cements IoCs as a particularly valuable tool for defenders 1054 with limited resources.
While IoCs preserve privacy 1061 on a macro scale (by preventing data breaches), research could be 1062 done to investigate the impact on privacy from sharing IoCs, and 1063 improvements could be made to minimise any impact found. The 1064 creation of a privacy-preserving IoC sharing method, that still 1065 allows both network and endpoint defences to provide security and 1066 layered defences, would be an interesting proposal. 1068 7. Conclusions 1070 IoCs are versatile and powerful. IoCs underpin and enable multiple 1071 layers of the modern defence-in-depth strategy. IoCs are easy to 1072 share, providing a multiplier effect on attack defence effort and 1073 they save vital time. Network-level IoCs offer protection, 1074 especially valuable when an endpoint-only solution isn't sufficient. 1075 These properties, along with their ease of use, make IoCs a key 1076 component of any attack defence strategy and particularly valuable 1077 for defenders with limited resources. 1079 For IoCs to be useful, they don't have to be unencrypted or visible 1080 in networks - but crucially they do need to be made available, along 1081 with their context, to entities that need them. It is also important 1082 that this availability and eventual usage copes with multiple points 1083 of failure, as per the defence-in-depth strategy, of which IoCs are a 1084 key part. 1086 8. IANA Considerations 1088 This draft does not require any IANA action. 1090 9. Acknowledgements 1092 Thanks to all those who have been involved with improving cyber 1093 defence in the IETF and IRTF communities. 1095 10. Informative References 1097 [ACD2019] Levy, I. and M. S, "Active Cyber Defence - The Second 1098 Year", 2019, . 1101 [ADS] Microsoft, "File Streams (Local File Systems)", 2018, 1102 . 1105 [ALIENVAULT] 1106 AlienVault, "AlienVault", 2021, 1107 . 1109 [Annual2019] 1110 NCSC, "Annual Review 2019", 2019, 1111 . 
1114 [COBALT] Cobalt Strike, "Cobalt Strike", 2021, 1116 . 1118 [DFRONT] InfoSec Resources, "Domain Fronting", 2017, 1119 . 1122 [DGAs] MITRE, "Dynamic Resolution: Domain Generation Algorithms", 1123 2020, . 1125 [EVOLVE] McFadden, M., "Evolution of Endpoint Security - An 1126 Operational Perspective", 2021, 1127 . 1130 [FireEye] O'Leary, J., Kimble, J., Vanderlee, K., and N. Fraser, 1131 "Insights into Iranian Cyber Espionage: APT33 Targets 1132 Aerospace and Energy Sectors and has Ties to Destructive 1133 Malware", 2017, . 1137 [FireEye2] FireEye, "OVERRULED: Containing a Potentially Destructive 1138 Adversary", 2018, . 1142 [GoldenTicket] 1143 Soria-Machado, M., Abolins, D., Boldea, C., and K. Socha, 1144 "Kerberos Golden Ticket Protection", 2014, 1145 . 1149 [KillChain] 1150 Lockheed Martin, "The Cyber Kill Chain", 2020, 1151 . 1154 [LAZARUS] Kaspersky Lab, "Lazarus Under The Hood", 2018, 1155 . 1159 [Mimikatz] Mulder, J., "Mimikatz Overview, Defenses and Detection", 1160 2016, . 1164 [MISP] MISP, "MISP", 2019, . 1166 [MISPCORE] MISP, "MISP Core", 2020, . 1169 [NCCGroup] Jansen, W., "Abusing cloud services to fly under the 1170 radar", 2021, . 1173 [OPENIOC] Gibb, W., "OpenIOC: Back to the Basics", 2013, 1174 . 1177 [OPENNIC] OpenNIC Project, "OpenNIC Project", 2021, 1178 . 1180 [Owari] NCSC, "Owari botnet own-goal takeover", 2018, 1181 . 1184 [PDNS] NCSC, "Protective DNS", 2019, 1185 . 1187 [PoP] Bianco, D.J., "The Pyramid of Pain", 2014, 1188 . 1191 [RFC2119] Bradner, S., "Key words for use in RFCs to Indicate 1192 Requirement Levels", BCP 14, RFC 2119, 1193 DOI 10.17487/RFC2119, March 1997, 1194 . 1196 [STIX] OASIS Cyber Threat Intelligence, "STIX", 2019, 1197 . 1200 [Symantec] Symantec, "Elfin: Relentless", 2019, 1201 . 1204 [TAXII] OASIS Cyber Threat Intelligence, "TAXII", 2021, 1205 . 1208 [Timestomp] 1209 MITRE, "Timestomp", 2019, 1210 .
1212 [TLP] FIRST, "Traffic Light Protocol", 2021, 1213 . 1215 Authors' Addresses 1217 Kirsty Paine 1218 Splunk Inc. 1220 Email: kirsty.ietf@gmail.com 1221 Ollie Whitehouse 1222 NCC Group 1224 Email: ollie.whitehouse@nccgroup.com 1226 James Sellwood 1227 Twilio 1229 Email: jsellwood@twilio.com 1231 Andrew Shaw 1232 UK National Cyber Security Centre 1234 Email: andrew.s2@ncsc.gov.uk