OPSEC                                                           K. Paine
Internet-Draft                                               Splunk Inc.
Intended status: Informational                             O. Whitehouse
Expires: 2 January 2023                                        NCC Group
                                                             J. Sellwood
                                                                  Twilio
                                                                 A. Shaw
                                       UK National Cyber Security Centre
                                                             1 July 2022

   Indicators of Compromise (IoCs) and Their Role in Attack Defence
             draft-ietf-opsec-indicators-of-compromise-01

Abstract

Cyber defenders frequently rely on Indicators of Compromise (IoCs) to
identify, trace, and block malicious activity in networks or on
endpoints.  This draft reviews the fundamentals, opportunities,
operational limitations, and best practices of IoC use.
It highlights the need for IoCs to be detectable in implementations
of Internet protocols, tools, and technologies - both for the IoCs'
initial discovery and their use in detection - and provides a
foundation for new approaches to operational challenges in network
security.

Status of This Memo

This Internet-Draft is submitted in full conformance with the
provisions of BCP 78 and BCP 79.

Internet-Drafts are working documents of the Internet Engineering
Task Force (IETF).  Note that other groups may also distribute
working documents as Internet-Drafts.  The list of current
Internet-Drafts is at https://datatracker.ietf.org/drafts/current/.

Internet-Drafts are draft documents valid for a maximum of six months
and may be updated, replaced, or obsoleted by other documents at any
time.  It is inappropriate to use Internet-Drafts as reference
material or to cite them other than as "work in progress."

This Internet-Draft will expire on 2 January 2023.

Copyright Notice

Copyright (c) 2022 IETF Trust and the persons identified as the
document authors.  All rights reserved.

This document is subject to BCP 78 and the IETF Trust's Legal
Provisions Relating to IETF Documents
(https://trustee.ietf.org/license-info) in effect on the date of
publication of this document.  Please review these documents
carefully, as they describe your rights and restrictions with respect
to this document.  Code Components extracted from this document must
include Revised BSD License text as described in Section 4.e of the
Trust Legal Provisions and are provided without warranty as described
in the Revised BSD License.

This document may not be modified, and derivative works of it may not
be created, except to format it for publication as an RFC or to
translate it into languages other than English.

Table of Contents

1.  Introduction
2.  Terminology
3.  IoC Fundamentals
  3.1.  IoC Types and the Pyramid of Pain
  3.2.  IoC Lifecycle
    3.2.1.  Discovery
    3.2.2.  Assessment
    3.2.3.  Sharing
    3.2.4.  Deployment
    3.2.5.  Detection
    3.2.6.  Reaction
    3.2.7.  End of Life
4.  Using IoCs Effectively
  4.1.  Opportunities
    4.1.1.  IoCs underpin and enable multiple layers of the modern
            defence-in-depth strategy
    4.1.2.  IoCs can be used even with limited resources
    4.1.3.  IoCs have a multiplier effect on attack defence effort
    4.1.4.  IoCs are easily shared
    4.1.5.  IoCs can provide significant time savings
    4.1.6.  IoCs allow for discovery of historic attacks
    4.1.7.  IoCs can be attributed to specific threats
  4.2.  Case Studies
    4.2.1.  Introduction
    4.2.2.  Cobalt Strike
      4.2.2.1.  Overall TTP
      4.2.2.2.  IoCs
    4.2.3.  APT33
      4.2.3.1.  Overall TTP
      4.2.3.2.  IoCs
5.  Operational Limitations
  5.1.  Time and Effort
    5.1.1.  Fragility
    5.1.2.  Discoverability
  5.2.  Precision
    5.2.1.  Specificity
    5.2.2.  Dual and Compromised Use
  5.3.  Privacy
  5.4.  Automation
6.  Best Practice
  6.1.  Comprehensive Coverage and Defence-in-Depth
  6.2.  Security Considerations
7.  Conclusions
8.  IANA Considerations
9.  Acknowledgements
10. Informative References
Authors' Addresses

1.  Introduction

This draft describes the various types of Indicator of Compromise
(IoC) and how they are used effectively in attack defence (often
called cyber defence).  It introduces concepts such as the Pyramid of
Pain [PoP] and the IoC lifecycle to highlight how IoCs may be used to
provide a broad range of defences.  This draft provides best practice
for implementers of controls based on IoCs, as well as potential
operational limitations.  Two case studies which demonstrate the
usefulness of IoCs for detecting and defending against real world
attacks are included.  One case study involves an intrusion set (a
collection of indicators for a specific attack) known as APT33 and
the other an attack tool called Cobalt Strike.
This document is not a comprehensive report of APT33 or Cobalt Strike
and is intended to be read alongside publicly published reports
(referred to as open source material among intelligence
practitioners) on these threats (for example, [Symantec] and
[NCCGroup], respectively).

2.  Terminology

Attack defence: the activity of providing cyber security to an
environment through the prevention and detection of, and response to,
attempted and successful cyber intrusions.  Successful defence is
achieved through blocking, monitoring and responding to adversarial
activity at the network, endpoint or application level.

Command and control (C2) server: an attacker-controlled server used
to communicate with, send commands to and receive data from
compromised machines.  Communication between a C2 server and
compromised hosts is called command and control traffic.

Domain Generation Algorithm (DGA): an algorithm used in malware
strains to generate domain names periodically.  Adversaries may use
DGAs to dynamically identify a destination for C2 traffic, rather
than relying on a list of static IP addresses or domains that can be
blocked more easily.

Kill chain: a model for conceptually breaking down a cyber intrusion
to allow defenders to think about, discuss, plan for, and implement
controls to defend discrete phases of an attacker's activity
[KillChain].

Tactics, Techniques, and Procedures (TTPs): the way an adversary
undertakes activities in the kill chain - the choices made, methods
followed, tools and infrastructure used, protocols employed, and
commands executed.  If they are distinct enough, aspects of an
attacker's TTPs can form specific Indicators of Compromise (IoCs), as
if they were a fingerprint.

3.  IoC Fundamentals

3.1.  IoC Types and the Pyramid of Pain

Indicators of Compromise (IoCs) are observable artefacts relating to
an attacker or their activities, such as their tactics, techniques,
procedures, and associated tooling and infrastructure.  These
indicators can be observed at network or endpoint (host) levels and
can, with varying degrees of confidence, help network defenders (blue
teams) to pro-actively block malicious traffic or code execution,
determine that a cyber intrusion has occurred, or associate
discovered activity with a known intrusion set and thereby
potentially identify additional avenues for investigation.  Examples
of protocol-related IoCs can include:

*  IPv4 and IPv6 addresses in network traffic.

*  DNS domain names in network traffic, resolver caches or logs.

*  TLS Server Name Indication values in network traffic.

*  Code signing certificates in binaries or TLS certificate
   information (such as SHA256 hashes) in network traffic.

*  Cryptographic hashes (e.g. MD5, SHA1 or SHA256) of malicious
   binaries or scripts when calculated from network traffic or file
   system artefacts.

*  Attack tools (such as Mimikatz [Mimikatz]) and their code
   structure and execution characteristics.

*  Attack techniques, such as Kerberos golden tickets [GoldenTicket],
   which can be observed in network traffic or system artefacts.

The common types of IoC form a 'Pyramid of Pain' [PoP] that informs
prevention, detection, and mitigation strategies.  Each IoC type's
place in the pyramid represents how much 'pain' a typical adversary
experiences as part of changing the activity that produces that
artefact.  The greater the pain an adversary experiences (towards the
top), the less likely they are to change those aspects of their
activity and the longer the IoC is likely to reflect the attacker's
intrusion set - i.e., the less fragile those IoCs will be from a
defender's perspective.
The layers of the PoP commonly range from hashes up to TTPs, with the
pain ranging from simply recompiling code to creating a whole new
attack strategy.  Other types of IoC do exist and could be included
in an extended version of the PoP should that assist the defender to
understand and discuss the intrusion sets most relevant to them.

                       /\
                      /  \               MORE PAIN
                     /    \              LESS FRAGILE
                    /      \             LESS PRECISE
                   /  TTPs  \                /\
                  /          \                |
                  ============                |
                 /            \               |
                /    Tools     \              |
               /                \             |
               ==================             |
              /                  \            |
             /    network/host    \           |
            /      artefacts       \          |
            ========================          |
           /                        \         |
          /       domain names       \        |
         /                            \       |
         ==============================       |
        /                              \      |
       /          IP addresses          \     |
      /                                  \    \/
      ====================================
     /                                    \  LESS PAIN
    /             Hash values              \ MORE FRAGILE
   /                                        \ MORE PRECISE
   ==========================================

                               Figure 1

On the lowest (and least painful) level are hashes of malicious
files.  These are easy for a defender to gather and can be deployed
to firewalls or endpoint protection to block malicious downloads or
prevent code execution.  While IoCs aren't the only way for defenders
to do this kind of blocking, they are a quick, convenient, and
unintrusive method.  Hashes are precise detections for individual
files based on their binary content.  To subvert this defence,
however, an adversary need only recompile code, or otherwise make
some trivial modification to the file content, to change the hash
value.

The next two levels are IP addresses and domain names.  Interactions
with these may be blocked, with varying false positive rates
(misidentifying non-malicious traffic as malicious, see Section 5),
and often cause more pain to an adversary to subvert than file
hashes.
The adversary may have to change IP ranges, find a new provider, and
change their code (e.g., if the IP address is hard-coded, rather than
resolved).  Domain names are more specific than IP addresses (as
multiple domain names may be associated with a single IP address) and
are more painful for an adversary to change.

Network and endpoint artefacts, such as a malware's beaconing pattern
on the network or the modified timestamps of files touched on an
endpoint, are harder still to change as they relate specifically to
the attack taking place and, in some cases, may not be under the
direct control of the attacker.  However, more sophisticated
attackers use TTPs or tooling that provide flexibility at this level
(such as Cobalt Strike's malleable command and control [COBALT]) or a
means by which some artefacts can be masked (see [Timestomp]).

Tools and TTPs form the top two levels of the pyramid; these levels
describe a threat actor's methodology - the way they perform the
attack.  The tools level refers specifically to the software (and
less frequently hardware) used to conduct the attack, whereas the
TTPs level picks up on all the other aspects of the attack strategy.
IoCs at these levels are more complex - for example, they can include
the details of how an attacker deploys malicious code that performs
reconnaissance of a victim's network, pivots laterally to a valuable
endpoint, and then downloads a ransomware payload.  TTPs and tools
take intensive effort on the part of the defender to diagnose, but
they are fundamental to the attacker and campaign and hence
incredibly painful for the adversary to change.

The variation in discoverability of IoCs is indicated by the numbers
of IoCs in the open threat intelligence community Alienvault
[ALIENVAULT].
As of June 2021, Alienvault contained:

*  Groups (i.e., combinations of TTPs): 441

*  Malware families (i.e., tools): ~24,000

*  URLs: 1,976,224

*  Domain names: 34,959,787

*  IPv4 addresses: 4,305,036

*  SHA256 hash values: 4,767,891

The number of domain names appears out of sync with the other counts,
which decrease towards the top of the PoP.  This discrepancy warrants
further research; however, a contributing factor may be that threat
actors use domain names to masquerade as legitimate organisations and
so have an added incentive to create new domain names as existing
ones are identified and confiscated.

3.2.  IoC Lifecycle

To be of use to defenders, IoCs must first be discovered, assessed,
shared, and deployed.  When a logged activity is identified and
correlated to an IoC, this detection triggers a reaction by the
defender which may include an investigation, potentially leading to
more IoCs being discovered, assessed, shared, and deployed.  This
cycle continues until the IoC is determined to be no longer relevant,
at which point it is removed from the control space.

3.2.1.  Discovery

IoCs are often discovered initially through manual investigation or
automated analysis.  They can be discovered in a range of sources,
including in networks and at endpoints.  They must either be
extracted from logs monitoring protocol runs, code execution or
system operations (in the case of hashes, IP addresses, domain names,
and network or endpoint artefacts), or be determined through analysis
of attack activity or tooling.  In some cases, discovery may be a
reactive process, where IoCs from past or current attacks are
identified from the traces left behind.
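Such reactive extraction can be sketched in a few lines.  The log
format, field layout, and values below are hypothetical, chosen
purely for illustration:

```python
import re

# Hypothetical proxy log lines; format and values are invented for
# illustration and do not represent real threat data.
LOG_LINES = [
    "2021-06-14T10:02:11Z ALLOW 198.51.100.23 GET http://files.example-bad.test/payload.bin",
    "2021-06-14T10:02:12Z ALLOW 203.0.113.7 GET http://intranet.example.test/index.html",
]

IPV4 = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")
HOST = re.compile(r"https?://([a-z0-9.-]+)")

def extract_candidates(lines):
    """Pull IP addresses and host names out of log lines as candidate IoCs.

    These are only candidates: assessment must still separate
    malicious indicators from benign ones.
    """
    candidates = set()
    for line in lines:
        candidates.update(IPV4.findall(line))
        candidates.update(HOST.findall(line))
    return candidates
```

Note that every extracted value (including benign ones such as
intranet.example.test above) still needs assessment before it is
treated as an IoC.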
However, discovery may also result from proactive hunting for
potential future IoCs extrapolated from knowledge of past events
(such as identifying attacker infrastructure by monitoring domain
name registration patterns).

Crucially, for an IoC to be discovered, the indicator must be
extractable from the Internet protocol, tool, or technology it is
associated with.  Identifying a particular protocol run related to an
attack is of limited benefit if indicators cannot be extracted and
subsequently associated with a later related run of the same, or a
different, protocol.  If it is not possible to tell the source or
destination of malicious attack traffic, it will not be possible to
identify and block subsequent attack traffic either.

3.2.2.  Assessment

Defenders may treat different IoCs differently, depending on the
IoCs' quality and the defender's needs and capabilities.  Defenders
may, for example, place differing trust in IoCs depending on their
source, freshness, confidence level, or the associated threat.  These
decisions rely on associated contextual information recovered at the
point of discovery or provided when the IoC was shared.

An IoC without context is not much use for network defence.  On the
other hand, an IoC delivered with context (for example, the threat
actor it relates to, its role in an attack, the last time it was seen
in use, its expected lifetime, or other related IoCs) allows a
network defender to make an informed choice on how to use it to
protect their network - for example, whether to simply log it,
actively monitor it, or outright block it.

3.2.3.  Sharing

Once discovered and assessed, IoCs are most helpful when shared at
scale so that many individuals and organisations can defend
themselves.
An IoC may be shared individually (with appropriate context) in an
unstructured manner or may be packaged alongside many other IoCs in a
standardised format, such as Structured Threat Information Expression
[STIX], for distribution via a structured feed, such as one
implementing Trusted Automated Exchange of Intelligence Information
[TAXII], or through a Malware Information Sharing Platform [MISP].

While some security companies and some membership-based groups (often
dubbed Information Sharing and Analysis Centres (ISACs)) provide paid
intel feeds containing IoCs, there are various free IoC sources
available, from individual security researchers up through small
trust groups to national governmental cyber security organisations
and international Computer Emergency Response Teams (CERTs).  Whoever
they are, sharers commonly indicate the extent to which receivers may
further distribute IoCs using the Traffic Light Protocol [TLP].  At
its simplest, this indicates that the receiver may share with anyone
(TLP WHITE), share within the defined sharing community (TLP GREEN),
share within their organisation (TLP AMBER), or not share with anyone
outside the original specific IoC exchange (TLP RED).

3.2.4.  Deployment

For IoCs to provide defence-in-depth (see Section 6.1), which is one
of their key strengths, and so cope with different points of failure,
they should be deployed in controls monitoring networks and endpoints
through solutions that have sufficient privilege to act on them.
Wherever IoCs exist they need to be made available to security
controls and associated apparatus to ensure they can be deployed
quickly and widely.  While IoCs may be manually assessed after
discovery or receipt, significant advantage may be gained by
automatically ingesting, processing, assessing, and deploying IoCs
from logs or intel feeds to the appropriate security controls.
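As a sketch of such automated ingestion, the fragment below parses a
single STIX 2.1-style indicator and turns it into a blocklist entry.
The identifier and domain are invented, and only the simplest
single-comparison pattern is handled; production pipelines use full
STIX/TAXII libraries and far richer patterns:

```python
import json
import re

# A minimal, hypothetical STIX 2.1 indicator as it might arrive on a
# feed; the id and domain are illustrative, not real threat data.
STIX_INDICATOR = json.dumps({
    "type": "indicator",
    "spec_version": "2.1",
    "id": "indicator--11111111-1111-4111-8111-111111111111",
    "created": "2021-06-01T00:00:00.000Z",
    "modified": "2021-06-01T00:00:00.000Z",
    "pattern": "[domain-name:value = 'c2.example-bad.test']",
    "pattern_type": "stix",
    "valid_from": "2021-06-01T00:00:00Z",
})

# Matches only a single domain-name comparison, the simplest STIX
# pattern form; anything more complex is rejected below.
PATTERN = re.compile(r"\[domain-name:value\s*=\s*'([^']+)'\]")

def to_blocklist_entry(raw):
    """Extract a domain from a simple single-comparison STIX pattern,
    or return None if the object is not a recognisable indicator."""
    obj = json.loads(raw)
    if obj.get("type") != "indicator" or obj.get("pattern_type") != "stix":
        return None
    match = PATTERN.fullmatch(obj.get("pattern", ""))
    return match.group(1) if match else None

print(to_blocklist_entry(STIX_INDICATOR))  # prints c2.example-bad.test
```

The extracted domain would then be pushed to whichever security
control (DNS filter, proxy, firewall) is able to act on it.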
3.2.5.  Detection

Security controls with deployed IoCs monitor their relevant control
space and trigger a generic or specific reaction upon detection of
the IoC in monitored logs.

3.2.6.  Reaction

The reaction to an IoC's detection may differ depending on factors
such as the capabilities and configuration of the control it is
deployed in, the assessment of the IoC, and the properties of the log
source in which it was detected.  For example, a connection to a
known botnet C2 server may indicate a problem but does not guarantee
it, particularly if the server is a compromised host still performing
some other legitimate functions.  Common reactions include event
logging, triggering alerts, and blocking or terminating the source of
the activity.

3.2.7.  End of Life

How long an IoC remains useful varies and is dependent on factors
including the initial confidence level, fragility, and precision of
the IoC (discussed further in Section 5).  In some cases, IoCs may be
automatically 'aged' based on their initial characteristics and so
will reach end of life at a predetermined time.  In other cases, IoCs
may become invalidated due to a shift in the threat actor's TTPs
(e.g., resulting from a new development or their discovery) or due to
remediation action taken by a defender.  End of life may also come
about due to activity unrelated to attack or defence, such as when a
third-party service used by the attacker changes or goes offline.
Whatever the cause, IoCs should be removed from detection at the end
of their life to reduce the likelihood of false positives.

4.  Using IoCs Effectively

4.1.  Opportunities

IoCs offer a variety of opportunities to cyber defenders as part of a
modern defence-in-depth strategy.
No matter the size of an organisation, IoCs can provide an effective,
scalable, and efficient defence mechanism against classes of attack
from the latest threats or specific intrusion sets which may have
struck in the past.

4.1.1.  IoCs underpin and enable multiple layers of the modern
        defence-in-depth strategy

Firewalls, Intrusion Detection Systems (IDS), and Intrusion
Prevention Systems (IPS) all employ IoCs to identify and mitigate
threats across networks.  Anti-Virus (AV) and Endpoint Detection and
Response (EDR) products deploy IoCs via catalogues or libraries to
all supported client endpoints.  Security Information and Event
Management (SIEM) platforms compare IoCs against aggregated logs from
various sources - network, endpoint, and application.  Of course,
IoCs do not address all attack defence challenges - but they form a
vital tier of any organisation's layered defence.  Some types of IoC
may be present across all those controls while others may be deployed
only in certain layers.  Further, IoCs relevant to a specific kill
chain may only reflect activity performed during a certain phase and
so need to be combined with other IoCs or mechanisms for complete
coverage of the kill chain as part of an intrusion set.

As an example, open source malware can be deployed by many different
actors, each using their own TTPs and infrastructure.  However, if
the actors use the same executable, the hash remains the same and
this IoC can be deployed in endpoint protection to block execution
regardless of individual actor, infrastructure, or other TTPs.
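In outline, such hash-based blocking is no more than a digest
computation and a set lookup.  The deny-list below holds a single
illustrative entry (the SHA256 of an empty file) rather than a real
malware hash:

```python
import hashlib

# Hypothetical deny-list of SHA256 hashes of known-bad executables.
# The sample entry is simply the SHA256 of an empty file, included
# only so the sketch is self-contained.
KNOWN_BAD_SHA256 = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def sha256_of(path):
    """Hash a file in chunks so large binaries need not fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def should_block(path):
    """True if the file's digest appears on the deny-list."""
    return sha256_of(path) in KNOWN_BAD_SHA256
```

Any change to the file content yields a different digest and so
escapes this lookup, which is why hashes sit at the fragile base of
the Pyramid of Pain.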
Should this defence fail in a specific case, for example if an actor
recompiles the executable binary producing a unique hash, other
defences can prevent them progressing further through their attack -
for instance, by blocking known malicious domain name look-ups and
thereby preventing the malware calling out to its C2 infrastructure.

Alternatively, another malicious actor may regularly change their
tools and infrastructure (and thus the indicator intrusion set)
deployed across different campaigns, but their access vectors may
remain consistent and well-known.  In this case, this access TTP can
be recognised and proactively defended against even while there is
uncertainty about the intended subsequent activity.  For example, if
their access vector consistently exploits a vulnerability in
software, regular and estate-wide patching can prevent the attack
from taking place.  Should these pre-emptive measures fail, however,
other IoCs observed across multiple campaigns may be able to prevent
the attack at later stages in the kill chain.

4.1.2.  IoCs can be used even with limited resources

IoCs are inexpensive, scalable, and easy to deploy, making their use
particularly beneficial for smaller entities, especially where they
are exposed to a significant threat.  For example, a small
manufacturing subcontractor in a supply chain producing a critical,
highly specialised component may represent an attractive target
because there would be disproportionate impact on both the supply
chain and the prime contractor if it were compromised.  It may be
reasonable to assume that this small manufacturer will have only
basic security (whether internal or outsourced) and, while it is
likely to have comparatively fewer resources to manage the risks it
faces than its larger partners, it can still leverage IoCs to great
effect.
Small entities like this can deploy IoCs to give a baseline
protection against known threats without having access to a
well-resourced, mature defensive team and the threat intelligence
relationships necessary to perform resource-intensive investigations.
One reason for this is that use of IoCs does not require the same
intensive training as needed for more subjective controls, such as
those based on manual analysis of tipped machine learning events.  In
this way, a major part of the appeal of IoCs is that they can afford
some level of protection to organisations across the spectrum of
resource capability, maturity, and sophistication.

4.1.3.  IoCs have a multiplier effect on attack defence effort

Individual IoCs can provide widespread protection that scales
effectively for defenders.  Within a single organisation, simply
blocking one IoC may protect thousands of users, and that blocking
may be performed (depending on the IoC type) across multiple security
controls monitoring numerous different types of activity within
networks, endpoints, and applications.  While discovering one IoC can
be intensive, once shared via well-established routes (as discussed
in Section 3.2.3) that individual IoC may further protect thousands
of organisations and so all of their users.  The prime contractor
from our earlier example can supply IoCs to the small subcontractor
and so further uplift that smaller entity's defensive capability,
while at the same time protecting itself and its interests.

Not only may multiple organisations benefit through directly
receiving shared IoCs, but they may also benefit through the IoCs'
application in services they utilise.  In the case of an ongoing
email phishing campaign, IoCs can be monitored, discovered, and
deployed quickly and easily by individual organisations.
However, if they are deployed quickly via a mechanism such as a
protective DNS filtering service, they can be more effective still -
an email campaign may be mitigated before some organisations'
recipients ever click the link or before some malicious payloads can
call out for instructions.  Through such approaches other parties can
be protected without additional effort.

4.1.4.  IoCs are easily shared

There is significant benefit to be had from the sharing of IoCs, and
they can be easily shared for two main reasons: firstly, indicators
are easy to distribute as they are textual and so in small numbers
are frequently exchanged in emails, blog posts, or technical reports;
secondly, standards such as MISP Core [MISPCORE], OpenIOC [OPENIOC],
and STIX [STIX] provide well-defined formats for sharing large
collections or regular sets of IoCs along with all the associated
context.  Quick and easy sharing of IoCs gives blanket coverage for
organisations and allows widespread mitigation in a timely fashion -
they can be shared with systems administrators, from small to large
organisations and from large teams to single individuals, allowing
them all to implement defences on their networks.

4.1.5.  IoCs can provide significant time savings

Not only are there time savings from sharing IoCs, saving duplication
of investigation effort, but deploying them automatically at scale is
seamless for many enterprises.  Where automatic deployment of IoCs is
working well, organisations and users get blanket protection with
minimal human intervention and minimal effort, a key goal of attack
defence.  The ability to do this at scale and at pace is often vital
when responding to agile threat actors that may change their
intrusion set frequently, such that the relevant IoCs also change.
Conversely, protecting a complex network without automatic deployment
of IoCs could mean manually updating every single endpoint or network
device consistently and reliably to the same security state.  The
work this entails (including locating assets and devices, polling for
logs and system information, and manually checking patch levels)
introduces complexity and a need for skilled analysts and engineers.
While it is still necessary to invest effort to eliminate false
positives when widely deploying IoCs, the cost and effort involved
can be far smaller than the work entailed in reliably and manually
updating all endpoint and network devices - especially on legacy
systems that may be particularly complicated, or even impossible, to
update.

4.1.6.  IoCs allow for discovery of historic attacks

A network defender can use recently acquired IoCs in conjunction with
historic data, such as logged DNS queries or email attachment hashes,
to hunt for signs of past compromise.  Not only can this technique
help to build up a clear picture of past attacks, but it also allows
for retrospective mitigation of the effects of any previous
intrusion.  This opportunity is reliant on the historic data not
having been compromised itself, by a technique such as Timestomp
[Timestomp], and not being incomplete due to data retention policies,
but it is nonetheless valuable for detecting and remediating past
attacks.

4.1.7.  IoCs can be attributed to specific threats

Deployment of various modern security controls, such as firewall
filtering or EDR, comes with an inherent trade-off between breadth of
protection and various costs, including the risk of false positives
(see Section 5.2), staff time, and pure financial costs.
585 Organisations can use threat modelling and information assurance to 586 assess and prioritise risk from identified threats and to determine 587 how they will mitigate or accept each of them. Contextual 588 information tying IoCs to specific threats or actors and shared 589 alongside the IoCs enables organisations to focus their defences 590 against particular risks and so allows them the technical freedom and 591 capability to choose their risk posture and defence methods. 592 Producing this contextual information before sharing IoCs can take 593 intensive analytical effort as well as specialist tools and training. 594 At its simplest it can involve documenting sets of IoCs from multiple 595 instances of the same attack campaign, say from multiple unique 596 payloads (and therefore with distinct file hashes) from the same 597 source and connecting to the same C2 server. A more complicated 598 approach is to cluster similar combinations of TTPs seen across 599 multiple campaigns over a period of time. This can be used alongside 600 detailed malware reverse engineering and target profiling, overlaid 601 on a geopolitical and criminal backdrop, to infer attribution to a 602 single threat actor. 604 4.2. Case Studies 606 4.2.1. Introduction 608 The following two case studies illustrate how IoCs may be identified 609 in relation to threat actor tooling (in the first) and a threat actor 610 campaign (in the second). The case studies further highlight how 611 these IoCs may be used by cyber defenders. 613 4.2.2. Cobalt Strike 615 Cobalt Strike [COBALT] is a commercial attack framework that consists 616 of an implant framework (beacon), network protocol, and a C2 server. 617 The beacon and network protocol are highly malleable, meaning the 618 protocol representation 'on the wire' can be easily changed by an 619 attacker to blend in with legitimate traffic. 
The proprietary beacon 620 supports TLS encryption overlaid with a custom encryption scheme 621 based on a public-private keypair. The product also supports other 622 techniques, such as domain fronting [DFRONT], in an attempt to avoid 623 obvious passive detection by static network signatures. 625 4.2.2.1. Overall TTP 627 A beacon configuration describes how the implant should operate and 628 communicate with its C2 server. This configuration also provides 629 ancillary information such as the Cobalt Strike user's licence 630 watermark. 632 4.2.2.2. IoCs 634 Tradecraft has been developed that allows the fingerprinting of C2 635 servers based on their responses to specific requests. This allows 636 the servers to be identified and then their beacon configurations to 637 be downloaded and the associated infrastructure addresses extracted 638 as IoCs. 640 The resulting mass IoCs for Cobalt Strike are: 642 * IP addresses of the C2 servers 644 * domain names used 646 Whilst these IoCs need to be refreshed regularly (due to the ease 647 with which they can be changed), the authors' experience of protecting 648 public sector organisations shows these IoCs are effective for 649 disrupting threat actor operations that use Cobalt Strike. 651 These IoCs can be used to check historical data for evidence of past 652 compromise, as well as being deployed to detect or block future infection 653 in a timely manner, thereby contributing to preventing the loss of 654 user and system data. 656 4.2.3. APT33 658 In contrast to the first case study, this describes a current 659 campaign by the threat actor APT33, also known as Elfin and Refined 660 Kitten (see [Symantec]). APT33 has been assessed by industry to be a 661 state-sponsored group [FireEye2], yet in this case study, IoCs still 662 gave defenders an effective tool against such a powerful adversary.
663 The group has been active since at least 2015 and is known to target 664 a range of sectors including petrochemical, government, engineering, 665 and manufacturing. Activity has been seen in countries across the 666 globe, but predominantly in the USA and Saudi Arabia. 668 4.2.3.1. Overall TTP 670 The techniques employed by this actor exhibit a relatively low level 671 of sophistication considering it is a state-sponsored group; 672 typically, APT33 performs spear phishing (sending targeted malicious 673 emails to a limited number of pre-selected recipients) with document 674 lures that imitate legitimate publications. User interaction with 675 these lures executes the initial payload and enables APT33 to gain 676 initial access. Once inside a target network, APT33 attempts to 677 pivot to other machines to gather documents and gain access to 678 administrative credentials. In some cases, users are tricked into 679 providing credentials that are then used with RULER, a freely 680 available tool that allows exploitation of an email client. The 681 attacker, in possession of a target's password, uses RULER to access 682 the target's mail account and embeds a malicious script which will be 683 triggered when the mail client is next opened, resulting in the 684 execution of malicious code (often additional malware retrieved from 685 the Internet) (see [FireEye]). 687 APT33 sometimes deploys a destructive tool which overwrites the 688 master boot record (MBR) of the hard drives in as many PCs as 689 possible. This type of tool, known as a wiper, results in data loss 690 and renders devices unusable until the operating system is 691 reinstalled. In some cases, the actor uses administrator credentials 692 to invoke execution across a large swathe of a company's IT estate at 693 once; where this isn't possible the actor may attempt to spread the 694 wiper first manually or by using worm-like capabilities against 695 unpatched vulnerabilities on the networked computers. 
697 4.2.3.2. IoCs 699 As a result of investigations by a partnership of industry and the 700 UK's National Cyber Security Centre (NCSC), a set of IoCs was 701 compiled and shared with both public and private sector organisations 702 so network defenders could search for them in their networks. 703 Detection of these IoCs is likely indicative of APT33 targeting and 704 could indicate potential compromise and subsequent use of destructive 705 malware. Network defenders could also initiate processes to block 706 these IoCs to foil future attacks. This set of IoCs comprised: 708 * 9 hashes and email subject lines 710 * 5 IP addresses 712 * 7 domain names 714 In November 2021, a joint advisory concerning APT33 [CISA] was issued 715 by the Federal Bureau of Investigation (FBI), the Cybersecurity and 716 Infrastructure Security Agency (CISA), the Australian Cyber Security 717 Centre (ACSC), and the NCSC. This outlined recent exploitation of 718 Fortinet vulnerabilities by APT33, providing a thorough overview of 719 observed TTPs, as well as sharing further IoCs: 721 * 8 hashes of malicious executables 723 * 3 IP addresses 725 5. Operational Limitations 727 The different IoC types inherently embody a set of trade-offs for 728 defenders between the risk of false positives (misidentifying 729 non-malicious traffic as malicious) and the risk of failing to identify 730 attacks. The attacker's relative pain of modifying attacks to 731 subvert known IoCs, as discussed using the Pyramid of Pain (PoP) in 732 Section 3.1, inversely correlates with the fragility of the IoC and 733 with the precision with which the IoC identifies an attack. Research 734 is needed to elucidate the exact nature of these trade-offs between 735 pain, fragility, and precision. 737 5.1. Time and Effort 738 5.1.1. Fragility 740 As alluded to in Section 3.1, the Pyramid of Pain can be thought of 741 in terms of fragility for the defender as well as pain for the 742 attacker.
The less painful it is for the attacker to change an IoC, 743 the more fragile that IoC is as a defence tool. It is relatively 744 simple to determine the hash value for various malicious file 745 attachments observed as lures in a phishing campaign and to deploy 746 these through AV or an email gateway security control. However, 747 those hashes are fragile and can (and often will) be changed between 748 campaigns. Malicious IP addresses and domain names can also be 749 changed between campaigns, but this happens less frequently due to 750 the greater pain of managing infrastructure compared to altering 751 files, and so IP addresses and domain names provide a less fragile 752 detection capability. 754 This does not mean the more fragile IoC types are worthless. 755 Firstly, there is no guarantee a fragile IoC will change, and if a 756 known IoC isn't changed by the attacker but also isn't blocked, the 757 defender has missed an opportunity to halt an attack in its tracks. 758 Secondly, even within one IoC type, there is variation in the 759 fragility depending on the context of the IoC. The file hash of a 760 phishing lure document (with a particular theme and containing a 761 specific staging server link) may be more fragile than the file hash 762 of a remote access trojan payload the attacker uses after initial 763 access. That in turn may be more fragile than the file hash of an 764 attacker-controlled post-exploitation reconnaissance tool that 765 doesn't connect directly to the attacker's infrastructure. Thirdly, 766 some threats and actors are more capable or inclined to change than 767 others, and so the fragility of an IoC for one may be very different 768 from that of an IoC of the same type for another actor. 770 Ultimately, fragility is a defender's concern that impacts the 771 ongoing efficacy of each IoC and will factor into decisions about end 772 of life.
However, it should not prevent adoption of individual IoCs 773 unless resource constraints are so severe that they 774 demand down-selection of IoCs for deployment. More usually, 775 defenders researching threats will attempt to identify IoCs of 776 varying fragilities for a particular kill chain to provide the 777 greatest chances of ongoing detection given available investigative 778 effort (see Section 5.1.2) while still maintaining precision (see 779 Section 5.2). 781 Finally, it is worth noting that fragility can apply to an entire 782 class of IoCs for a range of reasons; for example, IPv4 addresses are 783 becoming increasingly fragile due to addresses growing scarce, 784 widespread use of cloud services, and the ease with which domain 785 names can be moved from one hosting provider to another (thus 786 changing IP range). 788 5.1.2. Discoverability 790 To be used in attack defence, IoCs must first be discovered through 791 proactive hunting or reactive investigation. As noted in 792 Section 3.1, IoCs in the tools and TTPs levels of the PoP require 793 intensive effort and research to discover. However, it is not just 794 an IoC's type that impacts its discoverability. The sophistication 795 of the actor, their TTPs, and their tooling play a significant role, 796 as does whether the IoC is retrieved from logs after the attack or 797 extracted from samples or infected systems earlier. 799 For example, on an infected endpoint it may be possible to identify a 800 malicious payload and then extract relevant IoCs, such as the file 801 hash and its C2 server address. If the attacker used the same static 802 payload throughout the attack, this single file hash value will cover 803 all instances. If, however, the attacker diversified their payloads, 804 that hash can be more fragile and other hashes may need to be 805 discovered from other samples used on other infected endpoints.
806 Concurrently, the attacker may have simply hard-coded configuration 807 data into the payload, in which case the C2 server address can be 808 easy to recover. Alternatively, the address can be stored in an 809 obfuscated persistent configuration either within the payload (e.g., 810 within its source code or associated resource) or on the infected 811 endpoint's filesystem (e.g., using alternative data streams [ADS]), 812 thus requiring more effort to discover. Further, the attacker 813 may be storing the configuration in memory only or relying on a 814 domain generation algorithm (DGA) to generate C2 server addresses on 815 demand. In this case, extracting the C2 server address can require a 816 memory dump or the execution or reverse engineering of the DGA, all 817 of which increase the effort still further. 819 If the malicious payload has already communicated with its C2 server, 820 then it may be possible to discover that C2 server address IoC from 821 network traffic logs more easily. However, once again multiple 822 factors can make discoverability more challenging, such as the 823 increasing adoption of HTTPS for malicious traffic, meaning C2 824 communications blend in with legitimate traffic and can be 825 complicated to identify. Further, some malware families obfuscate their 826 intended destinations by using alternative DNS resolution services 827 (e.g., OpenNIC [OPENNIC]) or by performing transformation operations 828 on resolved IP addresses to determine the real C2 server address 829 encoded in the DNS response [LAZARUS]. 831 5.2. Precision 833 5.2.1. Specificity 835 Alongside pain and fragility, the PoP's levels can also be considered 836 in terms of how precise the defence can be, with the false positive 837 rate usually increasing as we move up the pyramid to less specific 838 IoCs.
A hash value identifies a particular file, such as an 839 executable binary, and given a suitable cryptographic hash function, 840 the false positives are effectively nil; by suitable we mean one with 841 preimage resistance and strong collision resistance. In comparison, 842 IoCs in the upper levels (such as some network artefacts or tool 843 fingerprints) may apply to various malicious binaries, and even 844 benign software may share the same identifying characteristics. For 845 example, threat actor tools making web requests may be identified by 846 the user-agent string specified in the request header. However, this 847 value may be the same as that used by legitimate software, either by the 848 attacker's choice or through use of a common library. 850 It should come as no surprise that the more specific an IoC, the more 851 fragile it is - as things change, they move outside of that specific 852 focus. While less fragile IoCs may be desirable for their robustness 853 and longevity, this must be balanced with the increased chance of 854 false positives from their broadness. One way in which this balance 855 is achieved is by grouping indicators and using them in combination. 856 While two low-specificity IoCs for a particular attack may each have 857 chances of false positives, when observed together they may provide 858 greater confidence of an accurate detection of the relevant kill 859 chain. 861 5.2.2. Dual and Compromised Use 863 As noted in Section 3.2.2, the context of an IoC, such as the way in 864 which the attacker uses it, may equally impact the precision with 865 which that IoC detects an attack. An IP address representing an 866 attacker's staging server, from which their attack chain downloads 867 subsequent payloads, precisely identifies 868 attacker-owned infrastructure.
However, it will be less precise if that IP address 869 is associated with a cloud hosting provider and it is regularly 870 reassigned from one user to another; and it will be less precise 871 still if the attacker compromised a legitimate web server and is 872 abusing the IP address alongside the ongoing legitimate use. 874 In a similar manner, a file hash representing an attacker's custom 875 remote access trojan will be very precise; however, a file hash 876 representing a common enterprise remote administration tool will be 877 less precise depending on whether the defender organisation usually 878 uses that tool for legitimate systems administration or not. 879 Notably, such dual-use indicators are context-specific, both in 880 whether they are usually used legitimately and in the way they are 881 used in a particular circumstance. Use of the remote administration 882 tool may be legitimate for support staff during working hours, but 883 not generally by non-support staff, particularly if observed outside 884 of that employee's usual working hours. 886 It is for reasons such as these that context is so important when sharing 887 and using IoCs. 889 5.3. Privacy 891 As noted in Section 3.2.2, context is critical to effective detection 892 using IoCs. However, at times, defenders may feel there are privacy 893 concerns with how much to share about a cyber intrusion, and with 894 whom. For example, defenders may generalise the IoCs' description of 895 the attack by removing context to facilitate sharing. This 896 generalisation can result in an incomplete set of IoCs being shared 897 or IoCs being shared without clear indication of what they represent 898 and how they are involved in an attack. The sharer will consider the 899 privacy trade-off when generalising the IoC, and should bear in mind 900 that the loss of context can greatly reduce the utility of the IoC 901 for those they share with.
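The generalisation trade-off described in this section can be sketched as follows. The record layout and field names are hypothetical and not drawn from any sharing standard; the point is only that stripping context leaves the recipient with the bare indicator.

```python
# Hypothetical IoC record: the indicator itself plus its context.
full_ioc = {
    "value": "203.0.113.7",
    "type": "ipv4-addr",
    "role": "C2 server",
    "campaign": "example-campaign",
    "first_seen": "2022-05-01",
}

# Context fields the sharer considers too revealing to release.
SENSITIVE = frozenset({"role", "campaign", "first_seen"})

def generalise(ioc, sensitive=SENSITIVE):
    """Drop context fields before sharing the IoC more widely."""
    return {k: v for k, v in ioc.items() if k not in sensitive}

shared = generalise(full_ioc)
# The recipient now knows *what* to look for but not *why*, which
# limits prioritisation and precise deployment of the IoC.
```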
903 Self-censoring by sharers appears more prevalent and more extensive 904 when sharing IoCs into groups with more members, into groups with a 905 broader range of perceived member expertise (particularly the further 906 the lower bound extends below the sharer's perceived own expertise), 907 and into groups that do not maintain strong intermember trust. Trust 908 within such groups often appears strongest where members: interact 909 regularly; have common backgrounds, expertise, or challenges; conform 910 to behavioural expectations (such as by following defined handling 911 requirements and not misrepresenting material they share); and 912 reciprocate the sharing and support they receive. Research 913 opportunities exist to determine how IoC sharing groups' requirements 914 for trust and members' interaction strategies vary and whether 915 sharing can be optimised or incentivised, such as by using 916 game-theoretic approaches. 918 5.4. Automation 920 While IoCs can be effectively utilised by organisations of various 921 sizes and resource constraints, as discussed in Section 4.1.2, 922 automation of IoC ingestion, processing, assessment, and deployment 923 is critical for managing them at scale. Manual oversight and 924 investigation may be necessary intermittently, but a reliance on 925 manual processing and searching only works at small scale or for 926 occasional cases. 928 The adoption of automation can also enable faster and easier 929 correlation of IoC detections across log sources, time, and space. 930 The response can thereby be tailored to reflect the number and 931 overlap of detections from a particular intrusion set, and the 932 necessary context can be presented alongside the detection when 933 generating any alerts for defender review.
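One possible shape for that correlation step is sketched below; the detections, log sources, and field names are all hypothetical.

```python
from collections import defaultdict

# Hypothetical IoC detections drawn from different log sources; in a
# real deployment these would come from DNS, proxy, and endpoint logs.
detections = [
    {"ioc": "bad.example", "source": "dns", "host": "pc-1"},
    {"ioc": "203.0.113.7", "source": "firewall", "host": "pc-1"},
    {"ioc": "bad.example", "source": "proxy", "host": "pc-2"},
]

def correlate(detections):
    """Group detections by host; overlapping hits from independent
    sources raise confidence, so such hosts are listed first."""
    by_host = defaultdict(list)
    for d in detections:
        by_host[d["host"]].append(d)
    return sorted(by_host,
                  key=lambda h: -len({d["source"] for d in by_host[h]}))

# "pc-1" ranks first: it was seen by two independent log sources.
```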
While manual processing 934 and searching may be no less accurate (although IoC transcription 935 errors are a common problem during busy incidents), the correlation 936 and cross-referencing necessary to provide the same degree of 937 situational awareness is much more time-consuming. 939 A third important consideration when performing manual processing is 940 the longer-term monitoring and adjustment necessary to effectively 941 age out IoCs as they become irrelevant or, more crucially, 942 inaccurate. Manual implementations must often simply include or 943 exclude an IoC, as anything more granular is time-consuming and 944 complicated to manage. In contrast, automations can support a 945 gradual reduction in confidence scoring, enabling IoCs to contribute 946 to, but not individually trigger, a detection as their specificity 947 reduces. 949 6. Best Practice 950 6.1. Comprehensive Coverage and Defence-in-Depth 952 IoCs provide the defender with a range of options across the layers 953 of the Pyramid of Pain (PoP), enabling them to balance precision and 954 fragility to give high-confidence detections that are practical and 955 useful. Broad coverage of the PoP is important as it allows the 956 defender to cycle between high-precision but high-fragility options 957 and more robust but less precise indicators. As fragile indicators 958 are changed, the more robust IoCs allow for continued detection and 959 faster rediscovery. For this reason, it's important to collect as 960 many IoCs as possible across the whole PoP. 962 At the top of the PoP, TTPs identified through anomaly detection and 963 machine learning are more likely to have false positives, which gives 964 lower confidence and, vitally, requires better-trained analysts to 965 understand and implement the defences. However, these are very 966 painful for attackers to change and so, when tuned appropriately, 967 provide a robust detection.
Hashes, at the bottom, are precise and 968 easy to deploy but are fragile and easily changed within and across 969 campaigns by malicious actors. 971 Endpoint Detection and Response (EDR) or Anti-Virus (AV) are often 972 the first port of call for protection from intrusion, but endpoint 973 solutions aren't a panacea. One issue is that there are many 974 environments where it is not possible to keep them updated, or in 975 some cases, deploy them at all. For example, the Owari botnet, a 976 Mirai variant [Owari], exploited Internet of Things (IoT) devices 977 where such solutions could not be deployed. It is because of such 978 gaps, where endpoint solutions can't be relied on (see [EVOLVE]), 979 that a defence-in-depth strategy is commonly advocated, using a 980 blended approach that includes both network and endpoint defences. 982 If an attack happens, then you hope an endpoint solution will pick it 983 up. If it doesn't, it could be for many good reasons: the endpoint 984 solution could be quite conservative and aim for a low false-positive 985 rate; it might not have ubiquitous coverage; or it might only be able 986 to defend the initial step of the kill chain [KillChain]. In the 987 worst cases, the attack specifically disables the endpoint solution 988 or the malware is brand new and so won't be recognised. 990 In the middle of the pyramid, IoCs related to network information 991 (such as domains and IP addresses) can be particularly useful. They 992 allow for broad coverage, without requiring each and every endpoint 993 security solution to be updated, as they may be detected and enforced 994 in a more centralised manner at network choke points (such as proxies 995 and gateways). This makes them particularly useful in contexts where 996 ensuring endpoint security isn't possible, such as "Bring Your Own 997 Device" (BYOD), IoT, and legacy environments.
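A resolver-side blocklist check of the kind a protective DNS service performs can be sketched as follows. The blocklist contents and function are hypothetical and are not a description of any particular service's implementation.

```python
# Hypothetical blocklist fed by shared domain-name IoCs.
BLOCKED_DOMAINS = frozenset({"bad.example", "c2.invalid"})

def resolve(qname, blocked=BLOCKED_DOMAINS):
    """Refuse resolution at the choke point for known-bad domains."""
    labels = qname.lower().rstrip(".").split(".")
    # Match the query name and every parent domain against the blocklist,
    # so subdomains of a blocked domain are also caught.
    for i in range(len(labels)):
        if ".".join(labels[i:]) in blocked:
            return None  # a real service would sinkhole or return NXDOMAIN
    return f"resolved:{qname}"  # hand off to normal resolution
```

Because the check happens at the resolver choke point, every client of the service is protected at once, whether or not its endpoint defences are deployed or up to date.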
999 It's important to note that these network-level IoCs can also protect 1000 against compromised endpoints when these IoCs are used to detect the 1001 attack in network traffic, even if the compromise passes unnoticed. 1002 For example, in a BYOD environment, enforcing security policies on 1003 the device can be difficult, so non-endpoint IoCs and solutions are 1004 needed to allow detection of compromise even with no endpoint 1005 coverage. 1007 One example of how IoCs provide a layer of a defence-in-depth 1008 solution is Protective DNS (PDNS), a free and voluntary DNS filtering 1009 service provided by the UK NCSC for UK public sector organisations 1010 [PDNS]. In 2018, this service blocked access to 57.4 million DNS 1011 queries for 118,527 unique reasons (out of 68.7 billion total 1012 queries) for the organisations signed up to the service [ACD2019]. 28 1013 million of them were for domain generation algorithms (DGAs) [DGAs], 1014 including 15 known DGAs, which are a type of TTP. 1016 IoCs such as malicious domains can be put on PDNS straight away and 1017 can then be used to prevent access to those known malicious domains 1018 across the entire estate of over 925 separate public sector entities 1019 that use NCSC's PDNS [Annual2021]. Coverage can be patchy with 1020 endpoints, as the roll-out of protections isn't uniform or 1021 necessarily fast - but if the IoC is on PDNS, a consistent defence is 1022 maintained. This offers protection, regardless of whether the 1023 context is a BYOD environment or a managed enterprise system. Other 1024 IoCs, like Server Name Indication (SNI) values in TLS or the server 1025 certificate information, also provide IoC protections. 1027 Similar to the AV scenario, large-scale services face risk decisions 1028 around balancing threat against business impact from false positives. 1029 Organisations need to retain the ability to be more 1030 conservative with their own defences, while still benefiting from 1031 them.
For instance, a commercial DNS filtering service is intended 1032 for broad deployment, so will have a risk tolerance similar to AV 1033 products; whereas DNS filtering intended for government users (e.g. 1034 PDNS) can be more conservative, but will still have a relatively 1035 broad deployment if intended for the whole of government. A 1036 government department or specific company, on the other hand, might 1037 accept the risk of disruption and arrange firewalls or other network 1038 protection devices to completely block anything related to particular 1039 threats, regardless of the confidence, but rely on a DNS filtering 1040 service for everything else. 1042 Other network defences, such as middlebox mitigations, proxy defences, 1043 and application layer firewalls, can also make use of this blanket 1044 coverage from IoCs, but are out of scope for this draft. Note too that 1045 DNS goes through firewalls, proxies, and possibly to a DNS filtering 1046 service; it doesn't have to be unencrypted, but these appliances must 1047 be able to decrypt it to do anything useful with it, such as blocking 1048 queries for known bad domains. 1050 Covering a broad range of IoCs gives defenders a wide range of 1051 benefits: they are easy to deploy; they provide a high enough 1052 confidence to be effective; at least some will be painful for 1053 attackers to change; and their distribution around the infrastructure 1054 means there is no single point of failure, so overall they enable 1055 the defenders to disrupt bad actors. The combination of these 1056 factors cements IoCs as a particularly valuable tool for defenders 1057 with limited resources. 1059 6.2. Security Considerations 1061 This draft is all about system security. However, when poorly 1062 deployed, IoCs can lead to over-blocking, which may present an 1063 availability concern for some systems.
While IoCs preserve privacy 1064 on a macro scale (by preventing data breaches), research could be 1065 done to investigate the impact on privacy from sharing IoCs, and 1066 improvements could be made to minimise any impact found. The 1067 creation of a privacy-preserving IoC sharing method, that still 1068 allows both network and endpoint defences to provide security and 1069 layered defences, would be an interesting proposal. 1071 7. Conclusions 1073 IoCs are versatile and powerful. IoCs underpin and enable multiple 1074 layers of the modern defence-in-depth strategy. IoCs are easy to 1075 share, providing a multiplier effect on attack defence effort and 1076 they save vital time. Network-level IoCs offer protection, 1077 especially valuable when an endpoint-only solution isn't sufficient. 1078 These properties, along with their ease of use, make IoCs a key 1079 component of any attack defence strategy and particularly valuable 1080 for defenders with limited resources. 1082 For IoCs to be useful, they don't have to be unencrypted or visible 1083 in networks - but crucially they do need to be made available, along 1084 with their context, to entities that need them. It is also important 1085 that this availability and eventual usage copes with multiple points 1086 of failure, as per the defence-in-depth strategy, of which IoCs are a 1087 key part. 1089 8. IANA Considerations 1091 This draft does not require any IANA action. 1093 9. Acknowledgements 1095 Thanks to all those who have been involved with improving cyber 1096 defence in the IETF and IRTF communities. 1098 10. Informative References 1100 [ACD2019] Levy, I. and M. S, "Active Cyber Defence - The Second 1101 Year", 2019, . 1104 [ADS] Microsoft, "File Streams (Local File Systems)", 2018, 1105 . 1108 [ALIENVAULT] 1109 AlienVault, "AlienVault", 2021, 1110 . 1112 [Annual2021] 1113 NCSC, "Annual Review 2021", 2021, 1114 . 
1117 [CISA] CISA, "Iranian Government-Sponsored APT Cyber Actors 1118 Exploiting Microsoft Exchange and Fortinet Vulnerabilities 1119 in Furtherance of Malicious Activities", 2021, 1120 . 1122 [COBALT] Cobalt Strike, "Cobalt Strike", 2021, 1123 . 1125 [DFRONT] InfoSec Resources, "Domain Fronting", 2017, 1126 . 1129 [DGAs] MITRE, "Dynamic Resolution: Domain Generation Algorithms", 1130 2020, . 1132 [EVOLVE] McFadden, M., "Evolution of Endpoint Security - An 1133 Operational Perspective", 2021, 1134 . 1137 [FireEye] O'Leary, J., Kimble, J., Vanderlee, K., and N. Fraser, 1138 "Insights into Iranian Cyber Espionage: APT33 Targets 1139 Aerospace and Energy Sectors and has Ties to Destructive 1140 Malware", 2017, . 1144 [FireEye2] FireEye, "OVERRULED: Containing a Potentially Destructive 1145 Adversary", 2018, . 1149 [GoldenTicket] 1150 Soria-Machado, M., Abolins, D., Boldea, C., and K. Socha, 1151 "Kerberos Golden Ticket Protection", 2014, 1152 . 1156 [KillChain] 1157 Lockheed Martin, "The Cyber Kill Chain", 2020, 1158 . 1161 [LAZARUS] Kaspersky Lab, "Lazarus Under The Hood", 2018, 1162 . 1166 [Mimikatz] Mulder, J., "Mimikatz Overview, Defenses and Detection", 1167 2016, . 1171 [MISP] MISP, "MISP", 2019, . 1173 [MISPCORE] MISP, "MISP Core", 2020, . 1176 [NCCGroup] Jansen, W., "Abusing cloud services to fly under the 1177 radar", 2021, . 1180 [OPENIOC] Gibb, W., "OpenIOC: Back to the Basics", 2013, 1181 . 1184 [OPENNIC] OpenNIC Project, "OpenNIC Project", 2021, 1185 . 1187 [Owari] NCSC, "Owari botnet own-goal takeover", 2018, 1188 . 1191 [PDNS] NCSC, "Protective DNS", 2019, 1192 . 1194 [PoP] Bianco, D.J., "The Pyramid of Pain", 2014, 1195 . 1198 [STIX] OASIS Cyber Threat Intelligence, "STIX", 2019, 1199 . 1202 [Symantec] Symantec, "Elfin: Relentless", 2019, 1203 . 1206 [TAXII] OASIS Cyber Threat Intelligence, "TAXII", 2021, 1207 . 1210 [Timestomp] 1211 OASIS Cyber Threat Intelligence, "Timestomp", 2019, 1212 . 1214 [TLP] FIRST, "Traffic Light Protocol", 2021, 1215 . 
1217 Authors' Addresses 1219 Kirsty Paine 1220 Splunk Inc. 1221 Email: kirsty.ietf@gmail.com 1223 Ollie Whitehouse 1224 NCC Group 1225 Email: ollie.whitehouse@nccgroup.com 1227 James Sellwood 1228 Twilio 1229 Email: jsellwood@twilio.com 1230 Andrew Shaw 1231 UK National Cyber Security Centre 1232 Email: andrew.s2@ncsc.gov.uk