Internet Engineering Task Force                             J. Livingood
Internet-Draft                                                   N. Mody
Intended status: Informational                              M. O'Reirdan
Expires: April 28, 2012                                          Comcast
                                                        October 26, 2011

      Recommendations for the Remediation of Bots in ISP Networks
                 draft-oreirdan-mody-bot-remediation-17

Abstract

   This document contains recommendations on how Internet Service Providers can manage the effects of computers used by their subscribers, which have been infected with malicious bots, via various remediation techniques.  Internet users with infected computers are exposed to risks such as loss of personal data, as well as increased susceptibility to online fraud and/or phishing.  Such computers can also become an inadvertent participant in or component of an online crime network, spam network, and/or phishing network, as well as be used as a part of a distributed denial of service attack.  Mitigating the effects of and remediating the installations of malicious bots will make it more difficult for botnets to operate and could reduce the level of online crime on the Internet in general and/or on a particular Internet Service Provider's network.

Status of this Memo

   This Internet-Draft is submitted in full conformance with the provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering Task Force (IETF).  Note that other groups may also distribute working documents as Internet-Drafts.  The list of current Internet-Drafts is at http://datatracker.ietf.org/drafts/current/.

   Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time.  It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress."

   This Internet-Draft will expire on April 28, 2012.

Copyright Notice

   Copyright (c) 2011 IETF Trust and the persons identified as the document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents (http://trustee.ietf.org/license-info) in effect on the date of publication of this document.  Please review these documents carefully, as they describe your rights and restrictions with respect to this document.  Code Components extracted from this document must include Simplified BSD License text as described in Section 4.e of the Trust Legal Provisions and are provided without warranty as described in the Simplified BSD License.

   This document may contain material from IETF Documents or IETF Contributions published or made publicly available before November 10, 2008.  The person(s) controlling the copyright in some of this material may not have granted the IETF Trust the right to allow modifications of such material outside the IETF Standards Process.  Without obtaining an adequate license from the person(s) controlling the copyright in such materials, this document may not be modified outside the IETF Standards Process, and derivative works of it may not be created outside the IETF Standards Process, except to format it for publication as an RFC or to translate it into languages other than English.

Table of Contents

   1.  Key Terminology . . . . . . . . . . . . . . . . . . . . . . . .  4
   2.  Introduction and Problem Statement  . . . . . . . . . . . . . .  6
   3.  Important Notice of Limitations and Scope . . . . . . . . . . .  8
   4.  Detection of Bots . . . . . . . . . . . . . . . . . . . . . . .  9
   5.  Notification to Internet Users  . . . . . . . . . . . . . . . . 13
   6.  Remediation of Hosts Infected with a Bot  . . . . . . . . . . . 19
     6.1.  Guided Remediation Process  . . . . . . . . . . . . . . . . 21
     6.2.  Professionally-Assisted Remediation Process . . . . . . . . 23
   7.  Failure or Refusal to Remediate . . . . . . . . . . . . . . . . 23
   8.  Sharing of Data from the User to the ISP  . . . . . . . . . . . 23
   9.  Security Considerations . . . . . . . . . . . . . . . . . . . . 24
   10. Privacy Considerations  . . . . . . . . . . . . . . . . . . . . 24
   11. IANA Considerations . . . . . . . . . . . . . . . . . . . . . . 25
   12. Acknowledgements  . . . . . . . . . . . . . . . . . . . . . . . 25
   13. Informative references  . . . . . . . . . . . . . . . . . . . . 26
   Appendix A.  Examples of Third Party Malware Lists  . . . . . . . . 28
   Appendix B.  Document Change Log  . . . . . . . . . . . . . . . . . 28
   Appendix C.  Open Issues  . . . . . . . . . . . . . . . . . . . . . 33
   Authors' Addresses  . . . . . . . . . . . . . . . . . . . . . . . . 33

1.  Key Terminology

   This section defines the key terms used in this document.

1.1.  Malicious Bots, or Bots

   A malicious or potentially malicious "bot" (derived from the word "robot", hereafter simply referred to as a "bot") refers to a program that is installed on a system in order to enable that system to automatically (or semi-automatically) perform a task or set of tasks, typically under the command and control of a remote administrator, or "bot master".  Bots are also known as "zombies".  Such bots may have been installed surreptitiously, without the user's full understanding of what the bot will do once installed, unknowingly as part of another software installation, under false pretenses, and/or in a variety of other possible ways.

   It is important to note that there are 'good', or benign, bots.  Such benign bots are often found in environments such as gaming and Internet Relay Chat (IRC) [RFC1459], where a continual, interactive presence can be a requirement for participating in the games or interacting with a computing resource.  Since such benign bots are performing useful, lawful, and non-disruptive functions, there is no reason for a provider to monitor for their presence and/or alert users to their presence.

   Thus, while there may be benign, or harmless, bots, for the purposes of this document all mention of bots shall assume that the bots involved are malicious or potentially malicious in nature.  Such malicious bots shall generally be assumed to have been deployed without the permission or conscious understanding of a particular Internet user.  Thus, without a user's knowledge, bots may transform the user's computing device into a platform from which malicious activities can be conducted.  In addition, included explicitly in this category are potentially malicious bots, which may initially appear neutral but may simply be waiting for remote instructions to transform and/or otherwise begin engaging in malicious behavior.  In general, installation of a malicious bot without user knowledge and consent is considered in most regions to be unlawful, and the activities of malicious bots typically involve unlawful or other maliciously disruptive activities.

1.2.  Bot Networks, or Botnets

   These are defined as concerted networks of bots capable of acting on instructions generated remotely.  The malicious activities either focus on the information on the local machine or act to provide services for remote machines.  Bots are highly customizable, so they can be programmed to do many things.  The major malicious activities include but are not limited to: identity theft, spam, spim (spam over instant messaging), spit (spam over Internet telephony), email address harvesting, distributed denial of service (DDoS) attacks, key-logging, fraudulent DNS pharming (redirection), hosting proxy services, fast flux (see Section 1.5) hosting, hosting of illegal content, use in man-in-the-middle attacks, and click fraud.

   Infection vectors include un-patched operating systems, software vulnerabilities (which include so-called zero-day vulnerabilities where no patch yet exists), weak/non-existent passwords, malicious websites, un-patched browsers, malware, vulnerable helper applications, inherently insecure protocols, protocols implemented without security features switched on, and social engineering techniques used to gain access to the user's computer.

   The detection and destruction of bots is an ongoing issue and also a constant battle between the Internet security community, network security engineers, and bot developers.

   Initially, some bots used IRC to communicate but were easy to shut down if the command and control server was identified and deactivated.  Newer command and control methods have evolved, such that those currently employed by bot masters make them much more resistant to deactivation.  With the introduction of P2P, HTTP, and other resilient communication protocols, along with the widespread adoption of encryption, bots are considerably more difficult to identify and isolate from typical network usage.  As a result, increased reliance is being placed on anomaly detection and behavioral analysis, both locally and remotely, to identify bots.

1.3.  Host

   An end user's host, or computer, as used in the context of this document, is intended to refer to a computing device that connects to the Internet.  This encompasses devices used by Internet users such as personal computers, including laptops, desktops, and netbooks, as well as mobile phones, smart phones, home gateway devices, and other end user computing devices that are connected or can connect to the public Internet and/or private IP networks.

   Increasingly, other household systems and devices contain embedded hosts which are connected to or can connect to the public Internet and/or private IP networks.  However, these devices may not be under the interactive control of the Internet user, as may be the case with various smart home and smart grid devices.

1.4.  Malware

   This is short for malicious software.  In this case, malicious bots are considered a subset of malware.  Other forms of malware could include viruses and other similar types of software.  Internet users can sometimes cause their hosts to be infected with malware, which may include a bot or cause a bot to install itself, via inadvertently accessing a specific website, downloading a file, or other activities.

   In other cases, Internet-connected hosts may become infected with malware through externally initiated malicious activities such as the exploitation of vulnerabilities or the brute force guessing of access credentials.

1.5.  Fast Flux

   Domain Name System (DNS) Fast Fluxing occurs when a domain is bound in DNS using A records to multiple IP addresses, each of which has a very short Time To Live (TTL) value associated with it.  This means that the domain resolves to varying IP addresses over a short period of time.

   DNS Fast Flux is typically used in conjunction with proxies that then route the web requests to the real host which serves the data being sought.  The proxies are normally run on compromised user hosts.  The effect of this is to make the detection of the real host much more difficult and to ensure that the backend or hidden site remains up for as long as possible.

2.  Introduction and Problem Statement

   Hosts used by Internet users, which in this case are customers of an Internet Service Provider (ISP), can be infected with malware that may contain and/or install one or more bots on a host.  Such infected hosts can present a major problem for an ISP for a number of reasons (not to mention, of course, the problems created for users).

   First, these bots can be used to send spam, in some cases very large volumes of spam [Spamalytics].  This spam can result in extra cost for the ISPs in terms of wasted network, server, and/or personnel resources, among many other potential costs and side effects.  Such spam can also negatively affect the reputation of the ISP, their customers, and the email reputation of the IP address space used by the ISP (often referred to simply as 'IP reputation').  A further potential complication is that IP space compromised by bad reputation may continue to carry this bad reputation even when used for entirely innocent purposes following re-assignment of that IP space.

   In addition, these bots can act as platforms for directing, participating in, or otherwise conducting attacks on critical Internet infrastructure [Threat-Report].  Bots are frequently used as part of coordinated Distributed Denial of Service (DDoS) attacks for criminal, political, or other motivations [Gh0st] [Dragon] [DDoS].  For example, bots have been used to attack Internet resources and infrastructure ranging from web sites to email servers and DNS servers, as well as the critical Internet infrastructure of entire countries [Estonia] [Combat-Zone].  Motivations for such coordinated DDoS attacks can range from criminal extortion attempts through to online protesting and nationalistic fervor [Whiz-Kid].  DDoS attacks may also be motivated by simple personal vendettas or by persons simply seeking a cheap thrill at the expense of others.

   There is good evidence to suggest that bots are being used in the corporate environment for purposes of corporate espionage, including the exfiltration of corporate financial data and intellectual property.  This also extends to the possibility of bots being used for state-sponsored purposes such as espionage.

   While any computing device can be infected with bots, the majority of bot infections affect the personal computers used by Internet end users.  As a result of the role of ISPs in providing IP connectivity, among many other services, to Internet users, these ISPs are in a unique position to be able to attempt to detect and observe botnets operating in their networks.  Furthermore, ISPs may also be in a unique position to be able to notify their customers of actual, potential, or likely infection by bots or other malware.

   From end users' perspectives, being notified that they may have an infected computer on their network is important information.  Once they know this, they can take steps to remove the bots, resolve any problems which may stem from the bot infection, and protect themselves against future threats.  Given that bots can consume vast amounts of local computing and network resources, enable theft of personal information (including personal financial information), enable the host to be used for criminal activities (that may result in the Internet user being legally culpable), and destroy the host or leave it in an unrecoverable state via 'kill switch' bot technologies, it is important to notify the user that they may be infected with a bot.

   As a result, the intent of this document is to provide guidance to ISPs and other organizations for the remediation of hosts infected with bots, so as to reduce the size of botnets and minimize the potential harm that bots can inflict upon Internet infrastructure generally, as well as on individual Internet users.  Efforts by ISPs and other organizations can, over time, reduce the pool of hosts infected with bots on the Internet, which in turn could result in smaller botnets with less capability for disruption.

   The potential mitigation of bots is accomplished through a process of detection, notification to Internet users, and remediation of bot infections with a variety of tools, as described later in this document.

3.  Important Notice of Limitations and Scope

   The techniques described in this document in no way guarantee the remediation of all bots.  Bot removal is potentially a task requiring specialized knowledge, skills, and tools, and may be beyond the ability of average users.  Attempts at bot removal may frequently be unsuccessful, or only partially successful, leaving the user's system in an unstable and unsatisfactory state or even in a state where it is still infected.  Attempts at bot removal can result in side effects ranging from a loss of data to partial or complete loss of system usability.

   In general, the only way a user can be sure they have removed some of today's increasingly sophisticated malware is by 'nuking-and-paving' the system: reformatting the drive, reinstalling the operating system and applications (including all patches) from scratch, and then restoring user files from a known clean backup.  However, the introduction of persistent-memory-based malware may mean that, in some cases, this may not be enough and may prove to be more than any end user can reasonably be expected to resolve [BIOS].  Experienced users would have to re-flash or re-image persistent memory sections or components of their hosts in order to remove persistent-memory-based malware.  However, in some cases, not even 'nuking-and-paving' the system will solve the problem, which calls for hard drive replacement and/or complete replacement of the host.

   Devices with embedded operating systems, such as video gaming consoles and smart home appliances, will most likely be beyond a user's capability to remediate by themselves, and could therefore require the aid of vendor-specific advice, updates, and tools.  However, in some cases, such devices will have a function or switch to enable the user to reset that device to a factory default configuration, which may in some cases enable the user to remediate the infection.  Care should be taken when imparting remediation advice to Internet users given the increasingly wide array of computing devices that can be, or could be, infected by bots in the future.

   This document is not intended to address the issues relating to the prevention of bots on an end user device.  This is out of scope for this document.

4.  Detection of Bots

   An ISP must first identify that an Internet user, in this case a user that is assumed to be their customer or otherwise connected to the ISP's network, is infected, or likely to have been infected, with a bot.

   The ISP should attempt to detect the presence of bots using methods, processes, and tools that maintain the privacy of the personally identifiable information (PII) of their customers.  The ISP should not block legitimate traffic in the course of bot detection, and should instead employ detection methods, tools, and processes that seek to be non-disruptive and transparent to Internet users and end-user applications.

   Detection methods, tools, and processes may include analysis of specific network and/or application traffic flows (such as traffic to an email server), analysis of aggregate network and/or application traffic data, data feeds received from other ISPs and organizations (such as lists of the ISP's IP addresses which have been reported to have sent spam), feedback from the ISP's customers or other Internet users, as well as a wide variety of other possibilities.  In practice, it has proven effective to confirm a bot infection through the use of a combination of multiple bot detection data points.  This can help to corroborate information of varying dependability or consistency, as well as to avoid or minimize the possibility of false positive identification of hosts.  Detection should also, where possible and feasible, attempt to classify the specific bot infection type in order to confirm that it is malicious in nature, estimate the variety and severity of threats it may pose (such as spam bot, key-logging bot, file distribution bot, etc.), and determine potential methods for eventual remediation.  However, given the dynamic nature of botnet management and the criminal incentives to seek quick financial rewards, botnets frequently update or change their core capabilities.  As a consequence, botnets that are initially detected and classified by the ISP as one particular type of bot need to be continuously monitored and tracked in order to correctly identify the threat the botnet poses at any particular point in time.

   Detection is also time-sensitive.  If complex analysis is required and multiple confirmations are needed to verify that a bot is indeed present, then it is possible that the bot may cause some damage (to either the infected host or a remotely targeted system) before it can be stopped.  This means that an ISP needs to balance the desire or need to definitively classify and/or confirm the presence of a bot, which may take an extended period of time, with the ability to predict the likelihood of a bot in a very short period of time.  Such determinations must have a relatively low false positive rate in order to maintain the trust of users.  This 'definitive-vs-likely' challenge is difficult and, when in doubt, ISPs should err on the side of caution by communicating that a bot infection has taken place.  This also means that Internet users may benefit from the installation of client-based security software on their host.  This can enable rapid heuristically-based detection of bot activity, such as the detection of a bot as it starts to communicate with other botnets and execute commands.  Any bot detection system should also be capable of adapting, either via manual intervention or automatically, in order to cope with a rapidly evolving threat.
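
   The 'definitive-vs-likely' trade-off described above can be illustrated with a small, non-normative sketch.  The following Python fragment is purely illustrative and is not a recommendation of any particular scoring model; the signal names, weights, and thresholds are assumptions invented for this example, and an operational system would derive its own signals and values from real detection data.

# Illustrative only: combine several bot detection data points of
# varying dependability into a single confidence score, so that a
# "likely infected" notification can be sent quickly while a
# "confirmed" classification waits for stronger corroboration.
# All signal names, weights, and thresholds are hypothetical.

SIGNAL_WEIGHTS = {
    "spam_feedback_loop": 0.4,  # abuse report naming this IP address
    "sinkhole_hit": 0.5,        # host contacted a sinkholed C&C name
    "flow_anomaly": 0.2,        # unusual traffic pattern observed
    "user_complaint": 0.3,      # complaint received at abuse@
}

LIKELY_THRESHOLD = 0.5     # enough evidence to notify the subscriber
CONFIRMED_THRESHOLD = 0.8  # enough evidence to classify the bot type


def classify(observed_signals):
    """Return 'confirmed', 'likely', or 'clean' for a set of signal names."""
    score = sum(SIGNAL_WEIGHTS.get(name, 0.0) for name in observed_signals)
    if score >= CONFIRMED_THRESHOLD:
        return "confirmed"
    if score >= LIKELY_THRESHOLD:
        return "likely"
    return "clean"


if __name__ == "__main__":
    # A sinkhole hit corroborated by a feedback loop report crosses the
    # "confirmed" threshold; a lone flow anomaly triggers nothing.
    print(classify({"sinkhole_hit", "spam_feedback_loop"}))  # confirmed
    print(classify({"flow_anomaly"}))                        # clean

   In this sketch, requiring at least two independent data points before notifying a user is one way, consistent with the guidance earlier in this section, to keep the false positive rate low while still acting quickly.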

   As noted above, detection methods, tools, and processes should ensure that the privacy of customers' personally identifiable information (PII) is maintained.  This protection afforded to PII should also extend to third parties processing data on behalf of ISPs.  While bot detection methods, tools, and processes are similar to spam and virus defenses deployed by the ISP for the benefit of their customers (and may be directly related to those defenses), attempts to detect bots should take into account the need for an ISP to ensure that any PII collected or incidentally detected is properly protected.  This is important because, just as spam defenses may involve scanning the content of email messages, which may contain PII, so too may bot defenses come into incidental contact with PII.  The definition of PII varies from one jurisdiction to the next, so proper care should be taken to ensure that any actions taken comply with legislation and good practice in the jurisdiction in which the PII is gathered.  Finally, depending upon the geographic region within which an ISP operates, certain methods relating to bot detection may need to be included in relevant terms of service documents or other documents which are available to the customers of a particular ISP.

   There are several bot detection methods, tools, and processes that an ISP may choose to utilize, as noted in the list below.  It is important to note that the technical solutions available are relatively immature and are likely to change over time, evolving rapidly in the coming years.  While these items are described in relation to ISPs, they may also be applicable to organizations operating other networks, such as campus networks and enterprise networks.

   a.  Where it is not legally proscribed and is an accepted industry practice in a particular market region, an ISP may in some manner "scan" its IP space in order to detect un-patched or otherwise vulnerable hosts, or to detect the signs of infection.  This may provide the ISP with the opportunity to easily identify Internet users who appear to already be infected or are at great risk of being infected with a bot.  ISPs should note that some types of port scanning may leave network services in a hung state or render them unusable due to common frailties, and that many modern firewall and host-based intrusion detection implementations may alert the Internet user to the scan.  As a result, the scan may be interpreted as a malicious attack against the host.  Vulnerability scanning has a higher probability of leaving accessible network services and applications in a damaged state and will often result in a higher probability of detection by the Internet user and subsequent interpretation as a targeted attack.  Depending upon the vulnerability for which an ISP may be scanning, some automated methods of vulnerability checking may result in data being altered or created afresh on the Internet user's host, which can be a problem in many legal environments.  It should also be noted that, due to the prevalence of Network Address Translation devices, Port Address Translation devices, and/or firewall devices in user networks, network-based vulnerability scanning may be of limited value.

   Thus, while we note that this is one technique that may be utilized, it is unlikely to be particularly effective and it has problematic side effects, which leads the authors to recommend against the use of this particular method.

   b.  An ISP may also communicate and share selected data, via feedback loops or other mechanisms, with various third parties.  Feedback loops are consistently formatted feeds of real-time (or nearly real-time) abuse reports offered by threat data clearinghouses, security alert organizations, other ISPs, and other organizations.  The formats for feedback loops include those defined in both ARF [RFC5965] and IODEF [RFC5070].  The data may include, but is not limited to: IP addresses of hosts that appear to be either definitely or probably infected; IP addresses, domain names, or fully qualified domain names (FQDNs) known to host malware and/or be involved in the command and control of botnets; recently tested or discovered techniques for detecting or remediating bot infections; new threat vectors; and other relevant information.  A few good examples of data sharing are noted in Appendix A.

   c.  An ISP may use NetFlow [RFC3954] or other similar passive network monitoring to identify network anomalies that may be indicative of botnet attacks or bot communications.  For example, an ISP may be able to identify compromised hosts by identifying traffic destined to IP addresses associated with the command and control of botnets, or destined to the combination of an IP address and control port associated with a command and control network (sometimes command and control traffic comes from a host which also has legitimate traffic).  In addition, bots may be identified when a remote host is under a DDoS attack, because hosts participating in the attack will likely be infected by a bot, as is frequently observed at network borders (though ISPs should beware of source IP address spoofing techniques used to avoid or confuse detection).

   d.  An ISP may use DNS-based techniques to perform detection.  For example, a given classified bot may be known to query a specific list of domain names at specific times or on specific dates, as in the example of the so-called "Conficker" bot (see [Conficker]), often by matching DNS queries to a well-known list of domains associated with malware.  In many cases such lists are distributed by or shared using third parties, such as threat data clearinghouses.

   e.  User complaints: Because hosts infected by bots are frequently used to send spam or participate in DDoS attacks, the ISP servicing those hosts will normally receive complaints about the malicious network traffic.  Those complaints may be sent to RFC2142-specified [RFC2142] role accounts, such as abuse@, or to other relevant addresses, such as abuse or security addresses specified by the site as part of its WHOIS (or other) contact data.

   f.  ISPs may also discover likely bot-infected hosts located on other networks.  Thus, when legally permissible in a particular market region, it may be worthwhile for ISPs to share information relating to those compromised hosts with the relevant remote network operator, with security researchers, and with blocklist operators.

   g.  ISPs may operate or subscribe to services that provide 'sinkholing' or 'honeynet' capabilities.

   This may enable the ISP to obtain near-real-time lists of bot-infected hosts as they attempt to join a larger botnet or propagate to other hosts on a network.

   h.  ISP industry associations should examine the possibility of collating data from ISP members in order to provide good statistics about bot infections based on real ISP data.

   i.  An Intrusion Detection System (IDS) can be a useful tool to help identify the malware.  An IDS tool such as SNORT (an open source IDS platform; see [Snort]) can be placed in a walled garden and used to analyze end user traffic to confirm the malware type.  This will help with remediation of the infected device.

5.  Notification to Internet Users

   Once an ISP has detected a bot, or the strong likelihood of a bot, steps should be undertaken to inform the Internet user that they may have a bot-related problem.  Depending upon a range of factors, including the technical capabilities of the ISP, the technical attributes of its network, financial considerations, available server resources, available organizational resources, the number of likely infected hosts detected at any given time, and the severity of any possible threats, among other things, an ISP should decide the most appropriate method or methods for providing notification to one or more of their customers or Internet users.  Such notification methods may include one or more of the methods described in the following subsections, as well as other possible methods not described below.

   It is important to note that none of these methods are guaranteed to be one hundred percent successful, and that each has its own set of limitations.  In addition, in some cases, an ISP may determine that a combination of two or more methods is most appropriate and effective, and reduces the chance that malware may block a notification.  As such, the authors recommend the use of multiple notification methods.  Finally, notification is also considered time-sensitive; if the user does not receive or view the notification in a timely fashion, then a particular bot could launch an attack, exploit the user, or cause other harm.  If possible, an ISP should establish a preferred means of communication when the subscriber first signs up for service.  As a part of the notification process, ISPs should maintain a record of the allocation of IP addresses to subscribers for a period long enough to allow any commonly used bot detection technology to be able to accurately link an infected IP address to a subscriber.  This record should only be maintained for the period of time necessary to support bot detection, but no longer, in order to protect the privacy of the individual subscriber.

   One important factor to bear in mind is that notification to end users needs to be resistant to potential spoofing.  This should be done to protect, as reasonably as possible, against the potential of legitimate notifications being spoofed and/or used by parties with intent to perform additional malicious attacks against victims of malware, or even to deliver additional malware.

   It should be possible for the end user to indicate the preferred means of notification on an opt-in basis for that notification method.  It is recommended that the end user not be allowed to opt out of notification entirely.
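
   The time-limited record of IP address allocations described earlier in this section can be illustrated with a small, non-normative sketch.  The following Python fragment is an assumption-laden example rather than a recommendation: the 30-day retention window, the data structure, and the function names are invented for illustration, and the appropriate retention period is a policy and legal question for each ISP.

from datetime import datetime, timedelta

# Illustrative only: keep just enough IP-address-to-subscriber history
# to link an infection report (IP address plus timestamp) to a
# subscriber for notification, and purge anything older than the
# retention window.  The window below is a placeholder value.
RETENTION = timedelta(days=30)


class AllocationLog:
    def __init__(self):
        # Each entry is (ip, start, end, subscriber_id); end is None
        # while the allocation is still active.
        self._entries = []

    def record(self, ip, subscriber_id, start, end=None):
        self._entries.append((ip, start, end, subscriber_id))

    def purge(self, now):
        """Drop allocations that ended before the retention window."""
        cutoff = now - RETENTION
        self._entries = [e for e in self._entries
                         if e[2] is None or e[2] >= cutoff]

    def subscriber_at(self, ip, when):
        """Return the subscriber using `ip` at time `when`, if still known."""
        for entry_ip, start, end, subscriber_id in self._entries:
            if entry_ip == ip and start <= when and (end is None or when <= end):
                return subscriber_id
        return None


if __name__ == "__main__":
    log = AllocationLog()
    log.record("192.0.2.10", "subscriber-42", datetime(2011, 10, 1))
    # An abuse report naming 192.0.2.10 on 20 October 2011 can be linked
    # back to the subscriber who should be notified.
    print(log.subscriber_at("192.0.2.10", datetime(2011, 10, 20)))

   In such a sketch, a report naming an IP address observed at a given time can be resolved to the subscriber to be notified, while the purge step keeps the mapping no longer than bot detection requires.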

   When users are notified, an ISP should endeavour to give the end user as much information regarding the bot detection methods employed by the ISP as is consonant with not providing information that would enable those creating or deploying the bots to avoid detection.

5.1.  Email Notification

   This is a common form of notification used by ISPs.  One drawback of using email is that it is not guaranteed to be viewed within a reasonable time frame, if at all.  The user may be using a different primary email address than the one they have provided to the ISP.  In addition, some ISPs do not provide an email account at all as part of a bundle of Internet services, and/or do not have a need for or method by which to request or retain the primary email addresses of Internet users of their networks.  Another possibility is that the user, their email client, and/or their email servers could determine or classify such a notification as spam, with the result that the message is deleted or filed in an email folder that the user may not check on a regular and/or timely basis.  Bot masters have also been known to impersonate the ISP or another trusted sender and send fraudulent emails to the users.  This technique of social engineering often leads to new bot infestations.  Finally, if the user's email credentials are compromised, then a hacker and/or a bot could simply access the user's email account and delete the email before it is read by the user.

5.2.  Telephone Call Notification

   A telephone call may be an effective means of communication in particularly high-risk situations.  However, telephone calls may not be feasible due to the cost of making a large number of calls, as measured in time, money, organizational resources, server resources, or some other way.  In addition, there is no guarantee that the user will answer their phone.  To the extent that the telephone number called by the ISP can be answered by the infected computing device, the bot on that host may be able to disconnect, divert, or otherwise interfere with an incoming call.  Users may also interpret such a telephone notification as a telemarketing call and as such not welcome it, or not accept the call at all.  Finally, even if a representative of the ISP is able to connect with and speak to a user, that user is very likely to lack the necessary technical expertise to understand or be able to effectively deal with the threat.

5.3.  Postal Mail Notification

   This form of notification is probably the least popular and least effective means of communication, due to preparation time, delivery time, the cost of printing and paper, and the cost of postage.

5.4.  Walled Garden Notification

   Placing a user in a walled garden is another approach that ISPs may take to notify users.  A walled garden refers to an environment that controls the information and services that a subscriber is allowed to utilize and what network access permissions are granted.  A walled garden implementation can range from strict to leaky.  In a strict walled garden environment, access to most Internet resources is typically limited by the ISP.  In contrast, a leaky walled garden environment permits access to all Internet resources, except those deemed malicious, and ensures access to those that can be used to notify users of infections.
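
   The core policy decision of a leaky walled garden can be sketched in a few lines.  The following Python fragment is purely illustrative: the domain names, the notification URL, and the redirect rate are placeholders invented for this example rather than recommended values, and a real implementation would enforce the policy in the network rather than in application code.

import random

# Illustrative only: destinations needed for remediation (operating
# system and anti-virus vendors) are always allowed, known-malicious
# destinations are always blocked, and a fraction of other web
# requests are redirected to the ISP's notification page.  All names
# and values below are hypothetical placeholders.
REMEDIATION_WHITELIST = {"update.example-os.com", "av.example-vendor.com"}
KNOWN_MALICIOUS = {"cnc.example-botnet.net"}
NOTIFICATION_URL = "https://notify.isp.example/bot-warning"
REDIRECT_RATE = 0.2  # redirect roughly one in five other web requests


def handle_web_request(destination_host):
    """Return 'allow', 'block', or a redirect URL for one web request."""
    if destination_host in REMEDIATION_WHITELIST:
        return "allow"           # keep patching and A/V updates reachable
    if destination_host in KNOWN_MALICIOUS:
        return "block"           # cut off command and control traffic
    if random.random() < REDIRECT_RATE:
        return NOTIFICATION_URL  # occasionally show the notification
    return "allow"               # otherwise let the traffic through

   A strict walled garden would differ only in the final branch, redirecting or blocking everything that is not on the whitelist.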

   Walled gardens are effective because it is possible to notify the user and simultaneously block all communication between the bot and the command and control channel.  While in many cases the user is almost guaranteed to view the notification message and take any appropriate remediation actions, this approach can pose other challenges.  For example, it is not always the case that a user is actively using a host that uses a web browser or that has a web browser actively running on it, or that uses another application whose ports are redirected to the walled garden.  In one example, a user could be playing a game online, via the use of a dedicated, Internet-connected game console.  In another example, the user may not be using a host with a web browser when they are placed in the walled garden and may instead be in the course of a telephone conversation, or may be expecting to receive a call, using a Voice Over IP (VoIP) device of some type.  As a result, the ISP may feel the need to maintain a potentially lengthy white list of domains that are not subject to the typical restrictions of a walled garden, which could well prove to be an onerous task from an operational perspective.

   For these reasons, the implementation of a leaky walled garden makes more sense, but a leaky walled garden has a different set of drawbacks.  The ISP has to assume that the user will eventually use a web browser to acknowledge the notification; otherwise, the user will remain in the walled garden without knowing it.  If the intent of the leaky walled garden is solely to notify the user about the bot infection, then the leaky walled garden is not ideal because notification is time-sensitive and the user may not receive the notification until the user invokes a request for the targeted service and/or resource.  This means the bot can potentially do more damage.  Additionally, the ISP has to identify which services and/or resources to restrict for the purposes of notification.  This does not have to be resource-specific and can be time-based and/or policy-based.  For example, notification on a timed basis could involve showing the notification for all HTTP requests every 10 minutes, or showing the notification for one in five HTTP requests.

   The ISP has several options to determine when to let the user out of the walled garden.  One approach may be to let the user determine when to exit.  This option is suggested when the primary purpose of the walled garden is to notify users and provide information on remediation only, particularly since notification is not a guarantee of successful remediation.  It could also be the case that, for whatever reason, the user makes the judgment that they cannot then take the time to remediate their host and that other online activities which they would like to resume are more important.  Exit from the walled garden may also involve a process to verify that it is indeed the user who is requesting exit from the walled garden and not the bot.

   Once the user acknowledges the notification, they may decide to either remediate and exit the walled garden or to exit the walled garden without remediating the issue.

   Another approach may be to enforce a stricter policy and require the user to clean the host prior to permitting the user to exit the walled garden, though this may not be technically feasible depending upon the type of bot, obfuscation techniques employed by a bot, and/or a range of other factors.  Thus, the ISP may also need to support tools to scan the infected host (in the style of a virus scan, rather than a port scan) and determine whether it is still infected, or rely on user judgment that the bot has been disabled or removed.  One challenge with this approach is that the user might have multiple hosts sharing a single IP address, such as via a common home gateway device which performs Network Address Translation (NAT).  In such a case, the ISP may need to determine from user feedback, or other means, that all affected hosts have been remediated, which may or may not be technically feasible.

   Finally, when a walled garden is used, a list of well-known addresses for both operating system vendors and security vendors should be created and maintained in a white list which permits access to these sites.  This can be important for allowing access from the walled garden by end users in search of operating system and application patches.  It is recommended that walled gardens be seriously considered as a method of notification, as they are easy to implement and have proven to be effective as a means of getting end users' attention.

5.5.  Instant Message Notification

   Instant messaging provides the ISP with a simple means to communicate with the user.  There are several advantages to using Instant Messaging (IM) that make it an attractive option.  If the ISP provides IM service and the user subscribes to it, then the user can be notified easily.  IM-based notification can be a cost-effective means to communicate with users automatically from an IM alert system or by a manual process involving the ISP's support staff.  Ideally, the ISP should allow the user to register their IM identity in an ISP account management system and grant permission to be contacted via this means.  If the IM service provider supports off-line messaging, then the user can be notified regardless of whether they are currently logged into the IM system.

   There are several drawbacks with this communications method.  There is a high probability that the subscriber may interpret the communication as spim and, as such, ignore it.  Also, not every user uses IM and/or the user may not provide their IM identity to the ISP, so some alternative means have to be used.  Even in those cases where a user does have an IM address, they may not be signed onto that IM system when the notification is attempted.  There may be a privacy concern on the part of users when such an IM notification must be transmitted over a third-party network and/or IM service.  As such, should this method be used, the notification should be discreet and not include any PII in the notification itself.

5.6.  Short Message Service (SMS) Notification

   SMS allows the ISP to send a brief description of the problem to notify the user of the issue, typically to a mobile device such as a mobile phone or smart phone.

   Ideally, the ISP should allow the user to register their mobile number and/or SMS address in an ISP account management system and grant permission to be contacted via this means.  The primary advantage of SMS is that users are familiar with receiving text messages and are likely to read them.  However, users may not act on the notification immediately if they are not in front of their host at the time of the SMS notification.

   One disadvantage is that ISPs may have to follow up with an alternate means of notification if not all of the necessary information can be conveyed in one message, given constraints on the number of characters in an individual message (typically 140 characters).  Another disadvantage with SMS is the cost associated with it.  The ISP has to either build its own SMS gateway to interface with the various wireless network service providers or use a third-party SMS clearinghouse (relay) to notify users.  In both cases, an ISP may incur fees related to SMS notifications, depending upon the method used to send the notifications.  An additional downside is that SMS messages sent to a user may result in a charge to the user by their wireless provider, depending upon the plan to which they subscribe and the country in which the user resides.  Another minor disadvantage is that it is possible to notify the wrong user if the intended user changes their mobile number but forgets to update it with the ISP.

   There are several other drawbacks with this communications method.  There is a high probability that the subscriber may interpret the communication as spam and, as such, ignore it.  Also, not every user uses SMS and/or the user may not provide their SMS address or mobile number to the ISP.  Even in those cases where a user does have an SMS address or mobile number, their device may not be powered on or otherwise available on a wireless network when the notification is attempted.  There may also be a privacy concern on the part of users when such an SMS notification must be transmitted over a third-party network and/or SMS clearinghouse.  As such, should this method be used, the notification should be discreet and not include any PII in the notification itself.

5.7.  Web Browser Notification

   Near real-time notification to the user's web browser is another technique that may be utilized for notifying the user [RFC6108], though how such a system might operate is outside the scope of this document.  Such a notification could have a comparative advantage over a walled garden notification, in that it does not restrict traffic to a specified list of destinations in the same way that a walled garden by definition would.  However, as with a walled garden notification, there is no guarantee that a user is at any given time making use of a web browser, though such a system could certainly provide a notification when such a browser is eventually used.  Compared to a walled garden, a web browser notification is probably preferred from the perspective of Internet users, as it does not have the risk of disrupting non-web sessions, such as online games, VoIP calls, etc. (as noted in Section 5.4).

   There are alternative methods of web browser notification offered commercially by a number of vendors.

   Many of the techniques used are proprietary, and it is not within the scope of this document to describe how they are implemented.  These techniques have been successfully implemented at several ISPs.

   It should be noted that web notification is intended only to notify devices running a web browser.

5.8.  Considerations for Notification to Public Network Locations

   Delivering a notification to a location that provides a shared public network, such as a train station, public square, coffee shop, or similar location, may be of low value since the users connecting to such networks are typically highly transient and generally not known to site or network administrators.  For example, a system may detect that a host on such a network has a bot, but by the time a notification is generated that user has departed from the network and moved elsewhere.

5.9.  Considerations for Notification to Network Locations Using a Shared IP Address

   Delivering a notification to a location where access to the Internet is routed through one or more shared public IP addresses may be of low value since it may be quite difficult to differentiate between users when providing a notification.  For example, on a business network of 500 users, all sharing one public IP address, it may be sub-optimal to provide a notification to all 500 users if only one specific user needs to be notified and take action.  As a result, such networks may find value in establishing a localized bot detection and notification system, just as they are likely to also establish other localized systems for security, file sharing, email, and so on.

   However, should an ISP implement some form of notification to such networks, it may be better to simply send notifications to a designated network administrator at the site.  In such a case, the local network administrator may wish to receive additional information in such a notification, such as a date and timestamp, the source port of the infected system, and malicious sites and ports that may have been visited.

5.10.  Notification and End User Expertise

   The ultimate effectiveness of any of the aforementioned forms of notification is heavily dependent upon both the expertise of the end user and the wording of any such notification.  For example, while a user may receive and acknowledge a notification, that user may lack the necessary technical expertise to understand or be able to deal effectively with the threat.  As a result, it is important that such notifications use clear and easily understood language, so that the majority of users (who are non-technical) may understand the notification.  In addition, a notification should provide easily understood guidance on how to remediate a threat as described in Section 6, potentially with one path for technical users to take and another for non-technical users.

6.  Remediation of Hosts Infected with a Bot

   This section covers the different options available to remediate a host, which means to remove, disable, or otherwise render a bot harmless.  Prior to this step, an ISP has detected the bot, notified the user that one of their hosts is infected with a bot, and now may provide some recommended means to clean the host.

   The generally recommended approach is to provide the necessary tools and education to the user so that they may perform bot remediation themselves, particularly given the risks and difficulties inherent in attempting to remove a bot.

   For example, this may include the creation of a special web site with security-oriented content that is dedicated for this purpose.  This should be a well-publicized security web site to which a user with a bot infection can be directed for remediation.  This security web site should clearly explain why the user was notified and may include an explanation of what bots are and the threats that they pose.  There should be a clear explanation of the steps that the user should take in order to attempt to clean their host, as well as information on how users can keep the host free of future infections.  The security web site should also have a guided process that takes non-technical users through the remediation process, on an easily understood, step-by-step basis.

   In terms of the text used to explain what bots are and the threats that they pose, something simple such as this may suffice:

      "What is a bot?  A bot is a piece of software, generally installed on your machine without your knowledge, which either sends spam or tries to steal your personal information.  Bots can be very difficult to spot, though you may have noticed that your computer is running much more slowly than usual or you may notice regular disk activity even when you are not doing anything.  Ignoring this problem is risky to you and your personal information.  Thus, bots need to be removed to protect your data and your personal information."

   Many bots are designed to work in a very stealthy manner, and as such there may be a need to make sure that the Internet user understands the magnitude of the threat faced despite the stealthy nature of the bot.

   It is also important to note that it may not be immediately apparent to the Internet user precisely which devices have been infected with a particular bot.  This may be due to the user's home network configuration, which may encompass several hosts behind a home gateway that performs Network Address Translation (NAT) to share a single public IP address.  Any one of these devices could therefore be infected with a bot.  The consequence of this for an ISP is that remediation advice may not be immediately actionable by the Internet user, as that user may need to perform additional investigation within their own home network.

   An added complication is that the user may have a bot infection on a device such as a video console, multimedia system, appliance, or other end-user computing device which does not have a typical desktop computing interface.  As a result, where possible, the ISP should take care to identify and communicate the specific nature of the device that has been infected with a bot and to provide appropriate remediation advice.  If the ISP cannot pin down the device or identify its type, then it should make it clear to the user that any initial advice given is generic and that further advice can be given (or is available) once the type of infected device is known.

   There are a number of forums that exist online to provide security-related support to end users.

   These forums are staffed by volunteers and are often focused on the use of a common tool set to help end users to remediate hosts infected with malware.  It may be advantageous for ISPs to foster a relationship with one or more forums, perhaps by offering free hosting or other forms of sponsorship.

   It is also important to keep in mind that not all users will be technically adept, as noted in Section 5.10.  As a result, it may be more effective to provide a range of suggested options for remediation.  This may include, for example, a very detailed "do it yourself" approach for experts, a simpler guided process for the average user, and even assisted remediation, as described in Section 6.2.

6.1.  Guided Remediation Process

   Minimally, the Guided Remediation Process should include the following goals, with options and/or recommendations for achieving them:

   1.  Back up personal files.  For example: "Before you start, make sure to back up all of your important data.  (You should do this on a regular basis anyway.)  You can back up your files manually or using a system backup software utility, which may be part of your Operating System (OS).  You can back up your files to a USB Thumb Drive (aka USB Key), a writeable CD/DVD-ROM, an external hard drive, a network file server, or an Internet-based backup service."  It may be advisable to suggest that the backup be performed onto separate backup media or devices if a bot infection is suspected.

   2.  Download OS patches and Anti-Virus (A/V) software updates.  For example, links could be provided to Microsoft Windows updates as well as to Apple MacOS updates, or to other major operating systems which are relevant to users and their devices.

   3.  Configure the host to automatically install updates for the OS, A/V software, and common web browsers such as Microsoft Internet Explorer, Mozilla Firefox, Apple Safari, Opera, and Google Chrome.

   4.  Get professional assistance if they are unable to remove the bots themselves.  If purchasing professional assistance, then the user should be encouraged to pre-determine how much they are willing to pay for that help.  If the host that is being remediated is old and can easily be replaced with a new, faster, larger, and more reliable system for a certain cost, then it makes no sense to spend more than that cost to fix the old host, for example.  On the other hand, if the customer has a brand new host, it might make perfect sense to spend the money to attempt to remediate it.

   5.  To continue, regardless of whether the user or a knowledgeable technical assistant is working on remediating the host, their first task should be to determine which of multiple potentially-infected machines may be the one that needs attention (in the common case of multiple hosts in a home network).  Sometimes, as in cases where there is only a single directly-attached host or where the user has been noticing problems with one of their hosts, this can be easy.  Other times, it may be more difficult, especially if there are no clues as to which host is infected.  If the user is behind a home gateway/router, then the first task may be to ascertain which of the machines is infected.  In some cases the user may have to check all machines to identify the infected one.
1008 6.2. Professionally-Assisted Remediation Process 1010 It should be acknowledged that, based on the current state of 1011 remediation tools and the technical abilities of end users, many 1012 users may be unable to remediate on their own. As a result, it is 1013 recommended that users have the option for professional assistance. 1014 This may entail online or telephone assistance for remediation, as 1015 well as working face to face with a professional who has training and 1016 expertise in the removal of malware. It should be made clear at the 1017 time of offering this service that this service is intended for those 1018 who do not have the skills or confidence to attempt remediation and 1019 is not intended as an upsell by the ISP. 1021 7. Failure or Refusal to Remediate 1023 ISP systems should track the bot infection history of hosts in order 1024 to detect when users consistently fail to remediate or refuse to take 1025 any steps to remediate. In such cases, ISPs may need to consider 1026 taking additional steps to protect their network, other users and 1027 hosts on that network, and other networks. Such steps may include a 1028 progression of actions up to and including account termination. 1029 Refusal to remediate can be viewed as a business issue and, as such, no 1030 technical recommendation is possible.
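As a non-normative illustration only, such infection-history tracking could be as simple as the following sketch. The time window, escalation step names, and the notion of a subscriber identifier are hypothetical; any actual escalation policy is, as noted above, a business decision for each ISP:

   # Non-normative sketch: per-subscriber bot-infection history with a
   # simple escalation policy.  Thresholds, action names, and the
   # subscriber identifier are hypothetical examples.

   from collections import defaultdict
   from datetime import datetime, timedelta

   ESCALATION = ["notify", "walled-garden", "review-for-termination"]

   class InfectionHistory:
       def __init__(self, window_days=90):
           self.window = timedelta(days=window_days)
           self.events = defaultdict(list)   # subscriber id -> timestamps

       def record_infection(self, subscriber_id, when=None):
           self.events[subscriber_id].append(when or datetime.utcnow())

       def recommended_action(self, subscriber_id, now=None):
           """Escalate as repeat infections accumulate in the window."""
           now = now or datetime.utcnow()
           recent = [t for t in self.events[subscriber_id]
                     if now - t <= self.window]
           if not recent:
               return None
           step = min(len(recent), len(ESCALATION)) - 1
           return ESCALATION[step]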
1032 8. Sharing of Data from the User to the ISP 1034 As an additional consideration, it may be useful to create a process 1035 by which users could choose, at their option and with their express 1036 consent, to share data regarding their bot infections with their ISP 1037 and/or another authorized third party. Such third parties may 1038 include governmental entities that aggregate threat data, such as the 1039 Internet Crime Complaint Center referred to earlier in this document, 1040 as well as academic institutions and/or security researchers. While in many 1041 cases the information shared with the user's ISP or designated third 1042 parties will only be used for aggregated statistical analysis, it is 1043 also possible that certain research needs may be best met with more 1044 detailed data. Thus, any such data sharing from a user to the ISP or 1045 authorized third party may contain some type of personally 1046 identifiable information, either by design or inadvertently. As a 1047 result, any such data sharing should be enabled on an opt-in basis, 1048 where users review and approve of the data being shared and the 1049 parties with which it is to be shared, unless the ISP is already 1050 required to share such data in order to comply with local laws and in 1051 accordance with those laws and applicable regulations. 1053 9. Security Considerations 1055 This document describes in detail the numerous security risks and 1056 concerns relating to botnets. As such, it has been appropriate to 1057 include specific information about security in each section above. 1058 This document describes the security risks related to malicious bot 1059 infections themselves, such as enabling identity theft, theft of 1060 authentication credentials, and the use of a host to unwittingly 1061 participate in a DDoS attack, among many other risks. Finally, the 1062 document also describes security risks which may relate to the 1063 particular methods of communicating a notification to Internet users. 1064 Bot networks and bot infections pose extremely serious security risks 1065 and any reader should review this document carefully. 1067 In addition, regarding notifications, as described in Section 5, care 1068 should be taken to assure users that notifications have been provided 1069 by a trustworthy site and/or party, so that the notification is more 1070 difficult for phishers and/or malicious parties using social 1071 engineering tactics to mimic, so that the user has some level of 1072 trust that the notification is valid, and/or so that the user has some 1073 way to verify, via some other mechanism or step, that the notification 1074 is valid. 1076 10. Privacy Considerations 1078 This document describes at a high level the activities to which ISPs 1079 should be sensitive, where the collection or communication of PII may 1080 be possible. In addition, when performing notifications to end users, 1081 as described in Section 5, those notifications should not include PII. 1083 As noted in Section 8, any sharing of data from the user to the ISP 1084 and/or authorized third parties should be done on an opt-in basis. 1085 Additionally, the ISP and/or authorized third parties should clearly 1086 state what data will be shared and with whom the data will be shared. 1088 Lastly, as noted in some other sections, there may be legal 1089 requirements in particular legal jurisdictions concerning how long 1090 any subscriber-related or other data is retained, of which an ISP 1091 operating in such a jurisdiction should be aware and with which an 1092 ISP should comply.
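As a non-normative illustration of the opt-in sharing described in Section 8 and the PII considerations above, a report could be constructed only when the user has opted in, with detailed data included only when expressly approved. The field names and consent model below are hypothetical examples; any real implementation would be governed by the user's consent and by applicable law:

   # Non-normative sketch: build an opt-in bot-infection report for
   # sharing with the ISP or an authorized third party.  Field names
   # and the consent model are hypothetical examples.

   import hashlib

   def build_shared_report(infection, consent):
       """Return a report only if the user has opted in; include
       detailed fields only when expressly approved."""
       if not consent.get("opt_in"):
           return None
       report = {
           "bot_family": infection.get("bot_family"),
           "first_seen": infection.get("first_seen"),
           # Identify the subscriber only by a one-way hash so that
           # aggregated statistics do not carry PII by default.
           "subscriber_token": hashlib.sha256(
               infection["subscriber_id"].encode()).hexdigest(),
       }
       if consent.get("share_detailed_data"):
           report["detail"] = infection.get("detail")
       return report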
1094 11. IANA Considerations 1096 There are no IANA considerations in this document. 1098 12. Acknowledgements 1100 The authors wish to acknowledge the following individuals and groups 1101 for performing a detailed review of this document and/or providing 1102 comments and feedback that helped to improve and evolve this 1103 document: 1105 Mark Baugher 1107 Richard Bennett 1109 James Butler 1111 Vint Cerf 1113 Alissa Cooper 1115 Jonathan Curtis 1117 Jeff Chan 1119 Roland Dobbins 1121 Dave Farber 1123 Stephen Farrell 1125 Eliot Gillum 1127 Joel Halpern 1129 Joel Jaeggli 1131 Scott Keoseyan 1133 Murray S. Kucherawy 1135 The Messaging Anti-Abuse Working Group (MAAWG) 1136 Jose Nazario 1138 Gunter Ollmann 1140 David Reed 1142 Roger Safian 1144 Donald Smith 1146 Joe Stewart 1148 Forrest Swick 1150 Sean Turner 1152 Robb Topolski 1154 Maxim Weinstein 1156 Eric Ziegast 1158 13. Informative references 1160 [BIOS] Sacco, A. and A. Ortega, "Persistent BIOS Infection", 1161 March 2009, . 1164 [Combat-Zone] 1165 Alshech, E., "Cyberspace as a Combat Zone: The Phenomenon 1166 of Electronic Jihad", February 2007, . 1169 [Conficker] 1170 Porras, P., Saidi, H., and V. Yegneswaran, "An Analysis of 1171 Conficker's Logic and Rendezvous Points", March 2009, 1172 . 1174 [DDoS] Saafan, A., "Distributed Denial of Service Attacks: 1175 Explanation, Classification and Suggested Solutions", 1176 March 2009, . 1178 [Dragon] Nagaraja, S. and R. Anderson, "The snooping dragon: 1179 social-malware surveillance of the Tibetan movement", 1180 March 2009, 1181 . 1183 [Estonia] Evron, G., "Battling Botnets and Online Mobs: Estonia's 1184 Defense Efforts during the Internet War", May 2005, . 1190 [Gh0st] Vallentin, M., Whiteaker, J., and Y. Ben-David, "The Gh0st 1191 in the Shell: Network Security in the Himalayas", 1192 February 2010, . 1195 [RFC1459] Oikarinen, J. and D. Reed, "Internet Relay Chat Protocol", 1196 RFC 1459, May 1993. 1198 [RFC2142] Crocker, D., "MAILBOX NAMES FOR COMMON SERVICES, ROLES AND 1199 FUNCTIONS", RFC 2142, May 1997. 1201 [RFC3954] Claise, B., "Cisco Systems NetFlow Services Export Version 1202 9", RFC 3954, October 2004. 1204 [RFC5070] Danyliw, R., Meijer, J., and Y. Demchenko, "The Incident 1205 Object Description Exchange Format", RFC 5070, 1206 December 2007. 1208 [RFC5965] Shafranovich, Y., Levine, J., and M. Kucherawy, "An 1209 Extensible Format for Email Feedback Reports", RFC 5965, 1210 August 2010. 1212 [RFC6108] Chung, C., Kasyanov, A., Livingood, J., Mody, N., and B. 1213 Van Lieu, "Comcast's Web Notification System Design", 1214 RFC 6108, February 2011. 1216 [Snort] Roesch, M., "Snort Home Page", March 2009, 1217 . 1219 [Spamalytics] 1220 Kanich, C., Kreibich, C., Levchenko, K., Enright, B., 1221 Voelker, G., Paxson, V., and S. Savage, "Spamalytics: An 1222 Empirical Analysis of Spam Marketing Conversion", 1223 October 2008, . 1226 [Threat-Report] 1227 Ahamad, M., Amster, D., Barret, M., Cross, T., Heron, G., 1228 Jackson, D., King, J., Lee, W., Naraine, R., Ollman, G., 1229 Ramsey, J., Schmidt, H., and P. Traynor, "Emerging Cyber 1230 Threats Report for 2009: Data, Mobility and Questions of 1231 Responsibility will Drive Cyber Threats in 2009 and 1232 Beyond", October 2008, . 1235 [Whiz-Kid] 1236 Berinato, S., "Case Study: How a Bookmaker and a Whiz Kid 1237 Took On a DDOS-based Online Extortion Attack", May 2005, < 1238 http://www.csoonline.com/article/220336/ 1239 How_a_Bookmaker_and_a_Whiz_Kid_Took_On_a_DDOS_based_Online 1240 _Extortion_Attack>. 1242 Appendix A.
Examples of Third Party Malware Lists 1244 As noted in Section 4, there are many potential third parties which 1245 may be willing to share lists of infected hosts. This list is for 1246 example purposes only, is not intended to be either exclusive or 1247 exhaustive, and is subject to change over time. 1249 o Arbor - Atlas, see http://atlas.arbor.net/ 1251 o Internet Systems Consortium - Secure Information Exchange (SIE), 1252 see https://sie.isc.org/ 1254 o Microsoft - Smart Network Data Services (SNDS), see 1255 https://postmaster.live.com/snds/ 1257 o SANS Institute / Internet Storm Center - DShield Distributed 1258 Intrusion Detection System, see http://www.dshield.org/about.html 1260 o ShadowServer Foundation, see http://www.shadowserver.org/ 1262 o Spamhaus - Policy Block List (PBL), see 1263 http://www.spamhaus.org/pbl/ 1265 o Spamhaus - Exploits Block List (XBL), see 1266 http://www.spamhaus.org/xbl/ 1268 o Team Cymru - Community Services, see http://www.team-cymru.org/ 1270 Appendix B. Document Change Log 1272 [RFC Editor: This section is to be removed before publication] 1274 -17 version: 1276 o various copy editing 1278 o briefly discuss IP reputation issues 1280 o briefly discuss corporate espionage threat 1282 o add references for ARF and IODEF, Snort, and Conficker 1284 -16 version: 1286 o Section 6.1.6 Substituted unable for able 1288 -15 version: 1290 o Issue of quiet bots addressed 1292 o Section 5.4 substitute "may be" for maybe 1294 o Section 5.4 Added reference to country of residence 1296 o Section 5.8 Corrected spelling error 1298 o Section 5.10 Corrected spelling error 1300 o Section 6 Corrected spelling errors 1302 -14 version: 1304 o Minor errors rectified, spelling errors addressed 1306 o ALL open issues are now closed! 1308 -13 version: 1310 o All changes below per Stephen Farrell except where indicated 1312 o Section 1.2 Added reference to fast flux definition 1314 o Section 1.2 Included reference to insecure protocols 1316 o Section 4 Cleared ambiguity 1318 o Section 4 Substituted "must have" 1320 o Section 4 Substituted "to" for "too" 1322 o Section 4 Addressed PII issue for 3rd parties 1323 o Section 4 Addressed issue around blocking of traffic during bot 1324 detection process 1326 o Section 5 Per Max Weinstein Included a number of comments and 1327 addressed issues of detection transparency 1329 o Section 5 Addressed issue by recommending that users should be 1330 allowed to opt in to their desired method of notification 1332 o Section 5.4 Addressed issue around timing of notification 1334 o Section 5.4 Addressed Walled Garden issue by recommending that 1335 Walled Gardens are to be used as a notification method 1337 o Section 5.7 Noted that there are alternative methods to that 1338 outlined in RFC6108 1340 o Section 5.7 Noted that web notification is only intended for 1341 devices running a web browser 1343 o Section 5.9 Fixed typo 1345 o Section 6.1 Noted that ISPs should be clear when offering paid 1346 remediation services that these are aimed at those without skills 1347 to remediate or lacking confidence to do so 1349 o Section 7 Noted that refusal to remediate is a business issue and 1350 not subject to technical recommendation. 1352 o ALL open issues are now closed!
1354 -12 version: 1356 o Shortened reference names (non-RFC references) 1358 o Closed Open Issue #1 and #4, as leaky walled gardens are covered 1359 in Section 5.4 1361 o Closed Open Issue #2 and #6, by adding a section on users that 1362 fail to mitigate, including account termination 1364 o Closed Open Issue #3, by adding a Privacy Considerations section 1365 to address PII 1367 o Closed Open Issue #5, with no action taken 1369 o Closed Open Issue #7, by leaving as Informational (the IETF can 1370 assess this later) 1372 o Closed Open Issue #8, by generalizing the guided remediation 1373 section via the removal of specific links, etc. 1375 o Closed Open Issue #9, by reviewing and updating remediation steps 1377 o Changed some 'must' statements to 'should' statements (even though 1378 there is no RFC 2119 language in the document) 1380 o ALL open issues are now closed! 1382 -11 version: 1384 o Added reference to RFC 6108 1386 o Per Sean Turner, removed RFC 2119 reference and section 1388 o Per Donald Smith, externalized the reference to 3rd party data 1389 sources, now Appendix A 1391 o Per Donald Smith, moved basic notification challenges into a new 1392 section at the end of the Notifications section. 1394 -10 version: 1396 o Minor refresh to keep doc from expiring. Several large updates 1397 planned in a Dec/Jan revision 1399 -09 version: 1401 o Corrected nits pointed out by Sean 1403 o Removed occurrences of double spacing 1405 o Grammar and spelling corrections in many sections 1407 o Added text for leaky walled garden 1409 -08 version: 1411 o Corrected a reference error in Section 10. 1413 o Added a new informative reference 1415 o Change to Section 5.a., to note additional port scanning 1416 limitations 1418 o Per Joel Jaeggli, change computer to host, to conform to IETF 1419 document norms 1421 o Several other changes suggested by Joel Jaeggli and Donald Smith 1422 on the OPSEC mailing list 1424 o Incorporated other feedback received privately 1426 o Because Jason is so very dedicated, he worked on this revision 1427 while on vacation ;-) 1429 -07 version: 1431 o Corrected various spelling and grammatical errors, pointed out by 1432 additional reviewers. Also added a section on information flowing 1433 from the user. Lastly, updated the reviewer list to include all 1434 those who either were kind enough to review for us or who provided 1435 interesting, insightful, and/or helpful feedback. 1437 -06 version: 1439 o Corrected an error in the version change log, and added some extra 1440 information on user remediation. Also added an informational 1441 reference to BIOS infection. 1443 -05 version: 1445 o Minor tweaks made by Jason - ready for wider review and next 1446 steps. Also cleared open issues. Lastly, added 2nd paragraph to 1447 security section and added sections on limitations relating to 1448 public and other shared network sites. Added a new section on 1449 professional remediation. 1451 -04 version: 1453 o Updated reference to BIOS based malware, added wording on PII and 1454 local jurisdictions, added suggestion that industry body produce 1455 bot stats, added suggestion that ISPs use volunteer forums 1457 -03 version: 1459 o all updates from Jason - now ready for wider external review 1461 -02 version: 1463 o all updates from Jason - still some open issues but we're now at a 1464 place where we can solicit more external feedback 1466 -01 version: 1468 o -01 version published 1470 Appendix C. Open Issues 1472 No open issues.
1474 Authors' Addresses 1476 Jason Livingood 1477 Comcast Cable Communications 1478 One Comcast Center 1479 1701 John F. Kennedy Boulevard 1480 Philadelphia, PA 19103 1481 US 1483 Email: jason_livingood@cable.comcast.com 1484 URI: http://www.comcast.com 1486 Nirmal Mody 1487 Comcast Cable Communications 1488 One Comcast Center 1489 1701 John F. Kennedy Boulevard 1490 Philadelphia, PA 19103 1491 US 1493 Email: nirmal_mody@cable.comcast.com 1494 URI: http://www.comcast.com 1496 Mike O'Reirdan 1497 Comcast Cable Communications 1498 One Comcast Center 1499 1701 John F. Kennedy Boulevard 1500 Philadelphia, PA 19103 1501 US 1503 Email: michael_oreirdan@cable.comcast.com 1504 URI: http://www.comcast.com