idnits 2.17.1 draft-oreirdan-mody-bot-remediation-14.txt: Checking boilerplate required by RFC 5378 and the IETF Trust (see https://trustee.ietf.org/license-info): ---------------------------------------------------------------------------- No issues found here. Checking nits according to https://www.ietf.org/id-info/1id-guidelines.txt: ---------------------------------------------------------------------------- No issues found here. Checking nits according to https://www.ietf.org/id-info/checklist : ---------------------------------------------------------------------------- No issues found here. Miscellaneous warnings: ---------------------------------------------------------------------------- == The copyright year in the IETF Trust and authors Copyright Line does not match the current year == The document seems to contain a disclaimer for pre-RFC5378 work, but was first submitted on or after 10 November 2008. The disclaimer is usually necessary only for documents that revise or obsolete older RFCs, and that take significant amounts of text from those RFCs. If you can contact all authors of the source material and they are willing to grant the BCP78 rights to the IETF Trust, you can and should remove the disclaimer. Otherwise, the disclaimer is needed and you can ignore this comment. (See the Legal Provisions document at https://trustee.ietf.org/license-info for more information.) -- The document date (September 1, 2011) is 4620 days in the past. Is this intentional? Checking references for intended status: Informational ---------------------------------------------------------------------------- No issues found here. Summary: 0 errors (**), 0 flaws (~~), 2 warnings (==), 1 comment (--). Run idnits with the --verbose option for more detailed information about the items above. -------------------------------------------------------------------------------- 2 Internet Engineering Task Force J. Livingood 3 Internet-Draft N. Mody 4 Intended status: Informational M. O'Reirdan 5 Expires: March 4, 2012 Comcast 6 September 1, 2011 8 Recommendations for the Remediation of Bots in ISP Networks 9 draft-oreirdan-mody-bot-remediation-14 11 Abstract 13 This document contains recommendations on how Internet Service 14 Providers can manage the effects of computers used by their 15 subscribers, which have been infected with malicious bots, via 16 various remediation techniques. Internet users with infected 17 computers are exposed to risks such as loss of personal data, as well 18 as increased susceptibility to online fraud and/or phishing. Such 19 computers can also become an inadvertent participant in or component 20 of an online crime network, spam network, and/or phishing network, as 21 well as be used as a part of a distributed denial of service attack. 22 Mitigating the effects of and remediating the installations of 23 malicious bots will make it more difficult for botnets to operate and 24 could reduce the level of online crime on the Internet in general 25 and/or on a particular Internet Service Provider's network. 27 Status of this Memo 29 This Internet-Draft is submitted in full conformance with the 30 provisions of BCP 78 and BCP 79. 32 Internet-Drafts are working documents of the Internet Engineering 33 Task Force (IETF). Note that other groups may also distribute 34 working documents as Internet-Drafts. The list of current Internet- 35 Drafts is at http://datatracker.ietf.org/drafts/current/. 
37 Internet-Drafts are draft documents valid for a maximum of six months 38 and may be updated, replaced, or obsoleted by other documents at any 39 time. It is inappropriate to use Internet-Drafts as reference 40 material or to cite them other than as "work in progress." 42 This Internet-Draft will expire on March 4, 2012. 44 Copyright Notice 46 Copyright (c) 2011 IETF Trust and the persons identified as the 47 document authors. All rights reserved. 49 This document is subject to BCP 78 and the IETF Trust's Legal 50 Provisions Relating to IETF Documents 51 (http://trustee.ietf.org/license-info) in effect on the date of 52 publication of this document. Please review these documents 53 carefully, as they describe your rights and restrictions with respect 54 to this document. Code Components extracted from this document must 55 include Simplified BSD License text as described in Section 4.e of 56 the Trust Legal Provisions and are provided without warranty as 57 described in the Simplified BSD License. 59 This document may contain material from IETF Documents or IETF 60 Contributions published or made publicly available before November 61 10, 2008. The person(s) controlling the copyright in some of this 62 material may not have granted the IETF Trust the right to allow 63 modifications of such material outside the IETF Standards Process. 64 Without obtaining an adequate license from the person(s) controlling 65 the copyright in such materials, this document may not be modified 66 outside the IETF Standards Process, and derivative works of it may 67 not be created outside the IETF Standards Process, except to format 68 it for publication as an RFC or to translate it into languages other 69 than English. 71 Table of Contents 73 1. Key Terminology . . . . . . . . . . . . . . . . . . . . . . . 4 74 2. Introduction and Problem Statement . . . . . . . . . . . . . . 6 75 3. Important Notice of Limitations and Scope . . . . . . . . . . 7 76 4. Detection of Bots . . . . . . . . . . . . . . . . . . . . . . 8 77 5. Notification to Internet Users . . . . . . . . . . . . . . . . 12 78 6. Remediation of Hosts Infected with a Bot . . . . . . . . . . . 19 79 6.1. Guided Remediation Process . . . . . . . . . . . . . . . . 21 80 6.2. Professionally-Assisted Remediation Process . . . . . . . 22 81 7. Failure or Refusal to Remediate . . . . . . . . . . . . . . . 22 82 8. Sharing of Data from the User to the ISP . . . . . . . . . . . 23 83 9. Security Considerations . . . . . . . . . . . . . . . . . . . 23 84 10. Privacy Considerations . . . . . . . . . . . . . . . . . . . . 24 85 11. IANA Considerations . . . . . . . . . . . . . . . . . . . . . 24 86 12. Acknowledgements . . . . . . . . . . . . . . . . . . . . . . . 24 87 13. Informative references . . . . . . . . . . . . . . . . . . . . 25 88 Appendix A. Examples of Third Party Malware Lists . . . . . . . . 27 89 Appendix B. Document Change Log . . . . . . . . . . . . . . . . . 27 90 Appendix C. Open Issues . . . . . . . . . . . . . . . . . . . . . 31 91 Authors' Addresses . . . . . . . . . . . . . . . . . . . . . . . . 31 93 1. Key Terminology 95 This section defines the key terms used in this document. 97 1.1. 
Malicious Bots, or Bots 99 A malicious or potentially malicious "bot" (derived from the word 100 "robot", hereafter simply referred to as a "bot") refers to a program 101 that is installed on a system in order to enable that system to 102 automatically (or semi-automatically) perform a task or set of tasks 103 typically under the command and control of a remote administrator, or 104 "bot master". Bots are also known as "zombies". Such bots may have 105 been installed surreptitiously, without the user's full understanding 106 of what the bot will do once installed, unknowingly as part of 107 another software installation, under false pretenses, and/or in a 108 variety of other possible ways. 110 It is important to note that there are 'good', or benign, bots. Such 111 benign bots are often found in environments such as gaming and 112 Internet Relay Chat (IRC) [RFC1459], where a continual, interactive 113 presence can be a requirement for participating in the games or for 114 interacting with a computing resource. Since such benign bots are 115 performing useful, lawful, and non-disruptive functions, there is no 116 reason for a provider to monitor for their presence and/or alert 117 users to their presence. 119 Thus, while there may be benign, or harmless, bots, for the purposes 120 of this document all mention of bots shall assume that the bots 121 involved are malicious or potentially malicious in nature. Such 122 malicious bots shall generally be assumed to have been deployed 123 without the permission or conscious understanding of a particular 124 Internet user. Thus, without a user's knowledge, bots may transform 125 the user's computing device into a platform from which malicious 126 activities can be conducted. In addition, included explicitly in 127 this category are potentially malicious bots, which may initially 128 appear neutral but may simply be waiting for remote instructions to 129 transform and/or otherwise begin engaging in malicious behavior. In 130 general, installation of a malicious bot without user knowledge and 131 consent is considered in most regions to be unlawful, and the 132 activities of malicious bots typically involve unlawful or other 133 maliciously disruptive activities. 135 1.2. Bot Networks, or Botnets 137 These are defined as concerted networks of bots capable of acting on 138 instructions generated remotely. The malicious activities either 139 focus on the information on the local machine or act to provide 140 services for remote machines. Bots are highly customizable, so they 141 can be programmed to do many things. The major malicious activities 142 include but are not limited to: identity theft, spam, spim (spam over 143 instant messaging), spit (spam over Internet telephony), email 144 address harvesting, distributed denial of service (DDoS) attacks, 145 key-logging, fraudulent DNS pharming (redirection), hosting proxy 146 services, fast flux (see Section 1.5) hosting, hosting of illegal 147 content, use in man-in-the-middle attacks, and click fraud. 149 Infection vectors include un-patched operating systems, software 150 vulnerabilities (which include so-called zero-day vulnerabilities 151 where no patch yet exists), weak/non-existent passwords, malicious 152 websites, un-patched browsers, malware, vulnerable helper 153 applications, inherently insecure protocols, protocols implemented 154 without security features switched on, and social engineering 155 techniques to gain access to the user's computer.
The detection and 156 destruction of bots is an ongoing issue and also a constant battle 157 between the Internet security community, network security engineers, 158 and bot developers. 160 Initially, some bots used IRC to communicate but were easy to 161 shut down if the command and control server was identified and 162 deactivated. Newer command and control methods have evolved, such 163 that those currently employed by bot masters make them much more 164 resistant to deactivation. With the introduction of P2P, HTTP, and 165 other resilient communication protocols, along with the widespread 166 adoption of encryption, bots are considerably more difficult to 167 identify and isolate from typical network usage. As a result, 168 increased reliance is being placed on anomaly detection and 169 behavioral analysis, both locally and remotely, to identify bots. 171 1.3. Host 173 An end user's host, or computer, as used in the context of this 174 document, is intended to refer to a computing device that connects to 175 the Internet. This encompasses devices used by Internet users such 176 as personal computers, including laptops, desktops, and netbooks, as 177 well as mobile phones, smart phones, home gateway devices, and other 178 end user computing devices which are connected or can connect to the 179 public Internet and/or private IP networks. 181 Increasingly, other household systems and devices contain embedded 182 hosts which are connected to or can connect to the public Internet 183 and/or private IP networks. However, these devices may not be under 184 the interactive control of the Internet user, such as may be the case 185 with various smart home and smart grid devices. 187 1.4. Malware 189 This is short for malicious software. In this case, malicious bots 190 are considered a subset of malware. Other forms of malware could 191 include viruses and other similar types of software. Internet users 192 can sometimes cause their host to be infected with malware, which may 193 include a bot or cause a bot to install itself, via inadvertently 194 accessing a specific website, downloading a file, or other 195 activities. 197 In other cases, Internet-connected hosts may become infected with 198 malware through externally initiated malicious activities such as the 199 exploitation of vulnerabilities or the brute force guessing of access 200 credentials. 202 1.5. Fast Flux 204 DNS Fast Fluxing occurs when a domain is bound in DNS using A records 205 to multiple IP addresses, each of which has a very short Time To Live 206 (TTL) value associated with it. This means that the domain resolves 207 to varying IP addresses over a short period of time. 209 DNS Fast Flux is typically used in conjunction with proxies, which 210 then route the web requests to the real host which serves the data 211 being sought. The proxies are normally run on compromised user 212 hosts. The effect of this is to make the detection of the real host 213 much more difficult and to ensure that the backend or hidden site 214 remains up for as long as possible.
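As a rough, purely illustrative sketch of the fast-flux behavior described above (and not a recommendation of this document), the following example repeatedly resolves a suspect domain and flags it when the A records carry very short TTLs and the set of returned addresses keeps changing. It assumes the third-party dnspython library; the domain name, thresholds, and sampling interval are hypothetical.

   # Hypothetical fast-flux check: short TTLs plus a large, changing set of
   # A records across several samples.  Assumes the "dnspython" library.
   import time
   import dns.resolver  # pip install dnspython

   def looks_like_fast_flux(domain, samples=5, interval=60,
                            ttl_threshold=300, address_threshold=10):
       addresses = set()
       for _ in range(samples):
           answer = dns.resolver.resolve(domain, "A")
           if answer.rrset.ttl > ttl_threshold:
               return False  # long TTLs are not consistent with fast flux
           addresses.update(rdata.address for rdata in answer)
           time.sleep(interval)
       # Many distinct, short-lived addresses over a short period of time
       # is the pattern described in Section 1.5.
       return len(addresses) >= address_threshold

   print(looks_like_fast_flux("suspect.example"))  # hypothetical domain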
216 2. Introduction and Problem Statement 218 Hosts used by Internet users, which in this case are customers of an 219 Internet Service Provider (ISP), can be infected with malware which 220 may contain and/or install one or more bots on a host. They can 221 present a major problem for an ISP for a number of reasons (not to 222 mention of course the problems created for users). First, these bots 223 can be used to send spam, in some cases very large volumes of spam 224 [Spamalytics]. This spam can result in extra cost for the ISPs in 225 terms of wasted network, server, and/or personnel resources, among 226 many other potential costs and side effects. Such spam can also 227 negatively affect the reputation of the ISP, their customers, and the 228 email reputation of the IP address space used by the ISP (often 229 referred to simply as 'IP reputation'). 231 In addition, these bots can act as platforms for directing, 232 participating in, or otherwise conducting attacks on critical 233 Internet infrastructure [Threat-Report]. Bots are frequently used as 234 part of coordinated Distributed Denial of Service (DDoS) attacks for 235 criminal, political, or other motivations [Gh0st] [Dragon] [DDoS]. 236 For example, bots have been used to attack Internet resources and 237 infrastructure ranging from web sites to email servers and DNS 238 servers, as well as the critical Internet infrastructure of entire 239 countries [Estonia] [Combat-Zone]. Motivations for such coordinated 240 DDoS attacks can range from criminal extortion attempts through to 241 online protesting and nationalistic fervor [Whiz-Kid]. 243 While any computing device can be infected with bots, the majority of 244 bot infections affect the personal computers used by Internet end 245 users. As a result of the role of ISPs in providing IP connectivity, 246 among many other services, to Internet users, these ISPs are in a 247 unique position to be able to attempt to detect and observe bot nets 248 operating in their networks. Furthermore, ISPs may also be in a 249 unique position to be able to notify their customers of actual, 250 potential, or likely infection by bots or other malware. 252 From the end user's perspective, being notified that they may have an 253 infected computer on their network is important information. Once 254 they know this, they can take steps to remove the bots, resolve any 255 problems which may stem from the bot infection, and protect 256 themselves against future threats. Given that bots can consume vast 257 amounts of local computing and network resources, enable theft of 258 personal information (including personal financial information), 259 enable the host to be used for criminal activities (that may result 260 in the Internet user being legally culpable), and destroy or leave the 261 host in an unrecoverable state via 'kill switch' bot technologies, it 262 is important to notify the user that they may be infected with a bot. 264 As a result, the intent of this document is to provide guidance to 265 ISPs and other organizations for the remediation of hosts infected 266 with bots, so as to reduce the size of bot nets and minimize the 267 potential harm that bots can inflict upon Internet infrastructure 268 generally, as well as on individual Internet users. Efforts by ISPs 269 and other organizations can, over time, reduce the pool of hosts 270 infected with bots on the Internet, which in turn could result in 271 smaller bot nets with less capability for disruption. 273 The potential mitigation of bots is accomplished through a process of 274 detection, notification to Internet users, and remediation of bot 275 infections with a variety of tools, as described later in this 276 document. 278 3. Important Notice of Limitations and Scope 280 The techniques described in this document in no way guarantee the 281 remediation of all bots. Bot removal is potentially a task requiring 282 specialized knowledge, skills, and tools, and may be beyond the 283 ability of average users.
Attempts at bot removal may frequently be 284 unsuccessful, or only partially successful, leaving the user's system 285 in an unstable and unsatisfactory state or even in a state where it 286 is still infected. Attempts at bot removal can result in side 287 effects ranging from a loss of data to partial or complete loss of 288 system usability. 290 In general, the only way a user can be sure they have removed some of 291 today's increasingly sophisticated malware is by 'nuking-and-paving' 292 the system: reformatting the drive, reinstalling the operating system 293 and applications (including all patches) from scratch, and then 294 restoring user files from a known clean backup. However, the 295 introduction of persistent memory based malware may mean that, in 296 some cases, this may not be enough and may prove to be more than any 297 end user can be reasonably expected to resolve [BIOS]. Experienced 298 users would have to re-flash or re-image persistent memory sections 299 or components of their hosts in order to remove persistent memory 300 based malware. However, in some cases, not even 'nuking-and-paving' 301 the system will solve the problem, and hard drive 302 replacement and/or complete replacement of the host may be required. 304 Devices with embedded operating systems, such as video gaming 305 consoles and smart home appliances, will most likely be beyond a 306 user's capability to remediate by themselves, and could therefore 307 require the aid of vendor-specific advice, updates, and tools. 308 However, in some cases, such devices will have a function or switch 309 to enable the user to reset that device to a factory default 310 configuration, which may in some cases enable the user to remediate 311 the infection. Care should be taken when imparting remediation 312 advice to Internet users given the increasingly wide array of 313 computing devices that can be, or could be, infected by bots in the 314 future. 316 This document is not intended to address the prevention of bots on an 317 end user device; prevention is out of scope for 318 this document. 320 4. Detection of Bots 322 An ISP must first identify that an Internet user, in this case a user 323 that is assumed to be their customer or otherwise connected to the 324 ISP's network, is infected, or likely to have been infected, with a 325 bot. The ISP should attempt to detect the presence of bots using 326 methods, processes, and tools which maintain the privacy of the 327 personally identifiable information (PII) of their customers. The 328 ISP should not block legitimate traffic in the course of bot 329 detection, and should instead employ detection methods, tools, and 330 processes which seek to be non-disruptive and transparent to 331 Internet users and end-user applications. 333 Detection methods, tools, and processes may include analysis of 334 specific network and/or application traffic flows (such as traffic to 335 an email server), analysis of aggregate network and/or application 336 traffic data, data feeds received from other ISPs and organizations 337 (such as lists of the ISP's IP addresses which have been reported to 338 have sent spam), feedback from the ISP's customers or other Internet 339 users, as well as a wide variety of other possibilities. In 340 practice, it has proven effective to validate a bot infection through 341 the use of a combination of multiple bot detection data points.
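To make the idea of combining multiple detection data points concrete, the following is a minimal, purely illustrative sketch (not a recommendation of this document): each independent signal contributes a weight, and a host is treated as likely infected only when the combined score crosses a threshold. The signal names, weights, and threshold are hypothetical.

   # Illustrative corroboration of several independent bot detection data
   # points; signal names, weights, and the threshold are hypothetical.
   SIGNAL_WEIGHTS = {
       "spam_feedback_loop_report": 0.4,   # abuse report from another ISP
       "dns_query_to_known_cnc": 0.5,      # DNS query matching a malware domain list
       "netflow_ddos_participation": 0.6,  # flows consistent with DDoS participation
       "honeynet_contact": 0.7,            # host contacted an ISP-operated sinkhole
   }

   def likely_infected(observed_signals, threshold=0.8):
       """Return True when the combined evidence crosses the threshold."""
       score = sum(SIGNAL_WEIGHTS.get(name, 0.0) for name in observed_signals)
       return score >= threshold

   # A single weak signal is not enough; two corroborating signals are.
   print(likely_infected({"spam_feedback_loop_report"}))                      # False
   print(likely_infected({"spam_feedback_loop_report", "honeynet_contact"}))  # True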
Combining multiple data points in this way 342 can help to corroborate information of varying dependability or 343 consistency, as well as to avoid or minimize the possibility of false 344 positive identification of hosts. Detection should also, where 345 possible and feasible, attempt to classify the specific bot infection 346 type in order to confirm that it is malicious in nature, to estimate the 347 variety and severity of threats it may pose (such as spam bot, key- 348 logging bot, file distribution bot, etc.), and to determine potential 349 methods for eventual remediation. However, given the dynamic nature 350 of botnet management and the criminal incentives to seek quick 351 financial rewards, botnets frequently update or change their core 352 capabilities. As a consequence, botnets that are initially detected 353 and classified by the ISP as one particular type of bot need to be 354 continuously monitored and tracked in order to correctly identify the 355 threat the botnet poses at any particular point in time. 357 Detection is also time-sensitive. If complex analysis is required 358 and multiple confirmations are needed to verify that a bot is indeed 359 present, then it is possible that the bot may cause some damage (to 360 either the infected host or a remotely targeted system) before it can 361 be stopped. This means that an ISP needs to balance the desire or 362 need to definitively classify and/or confirm the presence of a bot, 363 which may take an extended period of time, with the ability to 364 predict the likelihood of a bot in a very short period of time. Such 365 determinations must have a relatively low false positive rate in 366 order to maintain the trust of users. This 'definitive-vs-likely' 367 challenge is difficult and, when in doubt, ISPs should err on the 368 side of caution by communicating that a bot infection has taken 369 place. This also means that Internet users may benefit from the 370 installation of client-based software on their host. This can enable 371 rapid, heuristically-based detection of bot activity, 372 such as the detection of a bot as it starts to communicate with other 373 botnets and execute commands. Any bot detection system should also 374 be capable of adapting, either via manual intervention or 375 automatically, in order to cope with a rapidly evolving threat. 377 As noted above, detection methods, tools, and processes should ensure 378 that the privacy of customers' PII is maintained. This protection 379 afforded to PII should also extend to third parties processing data 380 on behalf of ISPs. While bot detection methods, tools, and processes 381 are similar to spam and virus defenses deployed by the ISP for the 382 benefit of their customers (and may be directly related to those 383 defenses), attempts to detect bots should take into account the ISP's 384 need to ensure that any PII collected or incidentally 385 detected is properly protected. This is important: just as spam 386 defenses may involve scanning the content of email messages, which 387 may contain PII, so too may bot defenses come into 388 incidental contact with PII. The definition of PII varies from one 389 jurisdiction to the next, so proper care should be taken to ensure 390 that any actions taken comply with legislation and good practice in 391 the jurisdiction in which the PII is gathered.
Finally, depending 392 upon the geographic region within which an ISP operates, certain 393 methods relating to bot detection may need to be included in relevant 394 terms of service documents or other documents which are available to 395 the customers of a particular ISP. 397 There are several bot detection methods, tools, and processes that an 398 ISP may choose to utilize, as noted in the list below. It is 399 important to note that the technical solutions available are 400 relatively immature, and are likely to change over time, evolving 401 rapidly in the coming years. While these items are described in 402 relation to ISPs, they may also be applicable to organizations 403 operating other networks, such as campus networks and enterprise 404 networks. 406 a. Where legally permissible or otherwise an industry accepted 407 practice in a particular market region, an ISP may in some manner 408 "scan" their IP space in order to detect un-patched or otherwise 409 vulnerable hosts, or to detect the signs of infection. This may 410 provide the ISP with the opportunity to easily identify Internet 411 users who appear to already be infected or are at great risk of 412 being infected with a bot. ISPs should note that some types of 413 port scanning may leave network services in a hung state or 414 render them unusable due to common frailties, and that many 415 modern firewall and host-based intrusion detection 416 implementations may alert the Internet user to the scan. As a 417 result the scan may be interpreted as a malicious attack against 418 the host. Vulnerability scanning has a higher probability of 419 leaving accessible network services and applications in a damaged 420 state and will often result in a higher probability of detection 421 by the Internet user and subsequent interpretation as a targeted 422 attack. Depending upon the vulnerability for which an ISP may be 423 scanning, some automated methods of vulnerability checking may 424 result in data being altered or created afresh on the Internet 425 user's host which can be a problem in many legal environments. 426 It should also be noted that due to the prevalence of Network 427 Address Translation devices, Port Address Translation devices, 428 and/or firewall devices in user networks, network-based 429 vulnerability scanning may be of limited value. Thus, while we 430 note that this is one technique which may be utilized, it is 431 unlikely to be particularly effective and it has problematic side 432 effects, which leads the authors to recommend against the use of 433 this particular method. 435 b. An ISP may also communicate and share selected data, via feedback 436 loops or other mechanisms, with various third parties. Feedback 437 loops are consistently formatted feeds of real-time (or nearly 438 real-time) abuse reports offered by threat data clearinghouses, 439 security alert organizations, other ISPs, and other 440 organizations. The data may include, but is not limited to, IP 441 addresses of hosts which have or are likely infected, IP 442 addresses, domain names or fully qualified domain names (FQDNs) 443 known to host malware and/or be involved in the command and 444 control of botnets, recently tested or discovered techniques for 445 detecting or remediating bot infections, new threat vectors, and 446 other relevant information. A few good examples of data sharing 447 are noted in Appendix A. 449 c. 
An ISP may use Netflow [RFC3954] or other similar passive network 450 monitoring to identify network anomalies that may be indicative 451 of botnet attacks or bot communications. For example, an ISP may 452 be able to identify compromised hosts by identifying traffic 453 destined to IP addresses associated with the command and control 454 of botnets, or destined to the combination of an IP address and 455 control port associated with a command and control network 456 (sometimes command and control traffic comes from a host which 457 also has legitimate traffic). In addition, bots may be identified 458 when a remote host is under a DDoS attack, because hosts 459 participating in the attack will likely be infected by a bot, 460 frequently observed at network borders (though ISPs should 461 beware of source IP address spoofing techniques used to avoid or 462 confuse detection). 464 d. An ISP may use DNS-based techniques to perform detection. For 465 example, a given classified bot may be known to query a specific 466 list of domain names at specific times or on specific dates (as in 467 the example of the so-called "Conficker" bot), often by matching 468 DNS queries to a well-known list of domains associated with 469 malware (a simple illustration appears after this list). In many 470 cases such lists are distributed by or shared using third parties, 471 such as threat data clearinghouses. 472 e. User complaints: Because hosts infected by bots are frequently 473 used to send spam or participate in DDoS attacks, the ISP 474 servicing those hosts will normally receive complaints about the 475 malicious network traffic. Those complaints may be sent to 476 RFC2142-specified [RFC2142] role accounts, such as abuse@, or to 477 other relevant addresses such as the abuse or security addresses 478 specified by the site as part of its WHOIS (or other) contact 479 data. 481 f. ISPs may also discover likely bot infected hosts located on other 482 networks. Thus, when legally permissible in a particular market 483 region, it may be worthwhile for ISPs to share information 484 relating to those compromised hosts with the relevant remote 485 network operator, with security researchers, and with blocklist 486 operators. 488 g. ISPs may operate or subscribe to services that provide 489 'sinkholing' or 'honeynet' capabilities. This may enable the ISP 490 to obtain near-real-time lists of bot infected hosts as they 491 attempt to join a larger botnet or propagate to other hosts on a 492 network. 494 h. ISP industry associations should examine the possibility of 495 collating statistics from ISP members in order to provide good 496 statistics about bot infections based on real ISP data. 498 i. An Intrusion Detection System (IDS) can be a useful tool to 499 help identify the malware. An IDS tool such as SNORT 500 (an open source IDS platform) can be placed in a Walled Garden and 501 used to analyze end user traffic to confirm the malware type. This 502 will help with remediation of the infected device.
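As a rough illustration of the DNS-based technique in item d, the sketch below checks a passive log of DNS queries against a third-party malware domain list of the kind noted in Appendix A. This is a minimal, hypothetical example only; the log format (timestamp, subscriber IP, queried name) and the file names are assumptions, not part of this document's recommendations.

   # Hypothetical matching of a passive DNS query log against a third-party
   # list of domains associated with malware (see Appendix A).
   import csv

   def load_malware_domains(path):
       with open(path) as f:
           return {line.strip().lower() for line in f if line.strip()}

   def find_likely_infections(query_log_path, malware_domains):
       """Yield (timestamp, subscriber_ip, domain) for queries matching the list."""
       with open(query_log_path) as f:
           for timestamp, subscriber_ip, qname in csv.reader(f):
               if qname.rstrip(".").lower() in malware_domains:
                   yield timestamp, subscriber_ip, qname

   domains = load_malware_domains("malware-domains.txt")      # hypothetical feed
   for hit in find_likely_infections("dns-queries.csv", domains):
       print(hit)  # a candidate for corroboration with other data points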
504 5. Notification to Internet Users 506 Once an ISP has detected a bot, or the strong likelihood of a bot, 507 steps should be undertaken to inform the Internet user that they may 508 have a bot-related problem. Depending upon a range of factors, from 509 the technical capabilities of the ISP, to the technical attributes of 510 their network, financial considerations, available server resources, 511 available organizational resources, the number of likely infected 512 hosts detected at any given time, and the severity of any possible 513 threats, among other things, an ISP should decide the most 514 appropriate method or methods for providing notification to one or 515 more of their customers or Internet users. Such notification methods 516 may include one or more of the following, as well as other possible 517 methods not described below. 519 It is important to note that none of these methods is guaranteed to 520 be one-hundred percent successful, and that each has its own set of 521 limitations. In addition, in some cases, an ISP may determine that a 522 combination of two or more methods is most appropriate and effective, 523 and reduces the chance that malware may block a notification. As 524 such, the authors recommend the use of multiple notification methods. 525 Finally, notification is also considered time-sensitive; if the user 526 does not receive or view the notification on a timely basis, then a 527 particular bot could launch an attack, exploit the user, or cause 528 other harm. If possible, an ISP should establish a preferred means 529 of communication when the subscriber first signs up for service. As 530 a part of the notification process, ISPs should maintain a record of 531 the allocation of IP addresses to subscribers for such a period as 532 allows any commonly used bot detection technology to 533 accurately link an infected IP address to a subscriber. This record 534 should only be maintained for as long as is necessary, in 535 order to protect the privacy of individual 536 subscribers. 538 One important factor to bear in mind is that notification to end 539 users needs to be resistant to potential spoofing. This should be 540 done to protect, as reasonably as possible, against the potential of 541 legitimate notifications being spoofed and/or used by parties with 542 intent to perform additional malicious attacks against victims of 543 malware, or even to deliver additional malware. 545 It should be possible for the end user to indicate the preferred 546 means of notification on an opt-in basis for that notification 547 method. It is recommended that the end user not be allowed to 548 opt out of notification entirely. 550 When users are notified, an ISP should endeavour to give the end user 551 as much information about the bot detection methods employed at 552 the ISP as is consonant with not providing information that would 553 enable those creating or deploying the bots to avoid detection. 555 5.1. Email Notification 557 This is a common form of notification used by ISPs. One drawback of 558 using email is that it is not guaranteed to be viewed within a 559 reasonable time frame, if at all. The user may be using a different 560 primary email address than that which they have provided to the ISP. 561 In addition, some ISPs do not provide an email account at all as 562 part of a bundle of Internet services, and/or do not have a need for 563 or method by which to request or retain the primary email addresses 564 of Internet users of their networks.
Another possibility is that the 565 user, their email client, and/or their email servers could determine 566 or classify such a notification as spam, which could result in the 567 message being deleted or filed in an email folder that the user may not 568 check on a regular and/or timely basis. Bot masters have also been 569 known to impersonate the ISP or a trusted sender and send fraudulent 570 emails to the users. This technique of social engineering often 571 leads to new bot infestations. Finally, if the user's email 572 credentials are compromised, then a hacker and/or a bot could simply 573 access the user's email account and delete the email before it is 574 read by the user. 576 5.2. Telephone Call Notification 578 A telephone call may be an effective means of communication in 579 particularly high-risk situations. However, telephone calls may not 580 be feasible due to the cost of making a large number of calls, as 581 measured in either time, money, organizational resources, server 582 resources, or some other means. In addition, there is no guarantee 583 that the user will answer their phone. To the extent that the 584 telephone number called by the ISP can be answered by the infected 585 computing device, the bot on that host may be able to disconnect, 586 divert, or otherwise interfere with an incoming call. Users may also 587 interpret such a telephone notification as a telemarketing call and 588 as such not welcome it, or not accept the call at all. Finally, even 589 if a representative of the ISP is able to connect with and speak to a 590 user, that user is very likely to lack the necessary technical 591 expertise to understand or be able to effectively deal with the 592 threat. 594 5.3. Postal Mail Notification 596 This form of notification is probably the least popular and least effective 597 means of communication, due to preparation time, delivery time, 598 and the costs of printing, paper, and postage. 600 5.4. Walled Garden Notification 602 Placing a user in a walled garden is another approach that ISPs may 603 take to notify users. A walled garden refers to an environment that 604 controls the information and services that a subscriber is allowed to 605 utilize and what network access permissions are granted. A walled 606 garden implementation can range from strict to leaky. In a strict 607 walled garden environment, access to most Internet resources is 608 typically limited by the ISP. In contrast, a leaky walled garden 609 environment permits access to all Internet resources except those 610 deemed malicious or those services and resources that can be used to notify 611 the users. 613 Walled gardens are effective because it is possible to notify the 614 user and simultaneously block all communication between the bot and 615 the command and control channel. While in many cases the user is 616 almost guaranteed to view the notification message and take any 617 appropriate remediation actions, this approach can pose other 618 challenges. For example, it is not always the case that a user is 619 actively using a host that uses a web browser or which has a web 620 browser actively running on it, or that uses another application 621 which uses ports which are redirected to the walled garden. In one 622 example, a user could be playing a game online, via the use of a 623 dedicated, Internet-connected game console.
In another example, the 624 user may not be using a host with a web browser when they are placed 625 in the walled garden and may instead be in the course of a telephone 626 conversation, or may be expecting to receive a call, using a Voice 627 Over IP (VoIP) device of some type. As a result, the ISP may feel 628 the need to maintain a potentially lengthy white list of domains 629 which are not subject to the typical restrictions of a walled garden, 630 which could well prove to be an onerous task from an operational 631 perspective. 633 For these reasons, the implementation of a leaky walled garden makes 634 more sense, but a leaky walled garden has a different set of drawbacks. 635 The ISP has to assume that the user will eventually use a web browser 636 to acknowledge the notification; otherwise the user will remain in 637 the walled garden and not know it. If the intent of the leaky walled 638 garden is solely to notify the user about the bot infection, then the 639 leaky walled garden is not ideal, because notification is time- 640 sensitive and the user may not receive the notification until the 641 user invokes a request for the targeted service and/or resource. 642 This means the bot can potentially do more damage. Additionally, the 643 ISP has to identify which services and/or resources to restrict for 644 the purposes of notification. This does not have to be resource- 645 specific and can be time-based and/or policy-based. For example, 646 notification on a timed basis could involve showing the 647 notification for all HTTP requests every 10 minutes, or showing the 648 notification for one in five HTTP requests (a simple sketch of such a 649 policy appears later in this section). 650 The ISP has several options to determine when to let the user out of 651 the walled garden. One approach may be to let the user determine 652 when to exit. This option is suggested when the primary purpose of 653 the walled garden is to notify users and provide information on 654 remediation only, particularly since notification is not a guarantee 655 of successful remediation. It could also be the case that, for 656 whatever reason, the user makes the judgment that they cannot then 657 take the time to remediate their host and that other online 658 activities which they would like to resume are more important. Exit 659 from the walled garden may also involve a process to verify that it 660 is indeed the user who is requesting exit from the walled garden and 661 not the bot. 663 Once the user acknowledges the notification, they may decide 664 either to remediate and exit the walled garden or to exit the walled 665 garden without remediating the issue. Another approach may be to 666 enforce a stricter policy and require the user to clean the host 667 prior to permitting the user to exit the walled garden, though this 668 may not be technically feasible depending upon the type of bot, 669 obfuscation techniques employed by a bot, and/or a range of other 670 factors. Thus, the ISP may also need to support tools to scan the 671 infected host (in the style of a virus scan, rather than a port scan) 672 and determine whether it is still infected, or rely on user judgment 673 that the bot has been disabled or removed.
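As a purely illustrative sketch of the time-based or policy-based leaky walled garden notification policy described above, the logic below sends an HTTP request to a notification page either when 10 minutes have passed since the subscriber was last notified or for one in every five requests, and otherwise lets the request leak through. The state handling, names, and notification URL are hypothetical and not drawn from this document.

   # Hypothetical leaky walled garden notification policy: notify once every
   # 10 minutes, or for one in five HTTP requests; otherwise pass through.
   import time
   import random

   NOTIFICATION_URL = "https://security.isp.example/bot-notice"  # hypothetical
   NOTIFY_INTERVAL_SECONDS = 10 * 60
   NOTIFY_ONE_IN_N = 5

   last_notified = {}  # subscriber identifier -> time of last notification

   def redirect_target(subscriber_id, requested_url):
       """Return the URL the subscriber's HTTP request should be sent to."""
       now = time.time()
       due = now - last_notified.get(subscriber_id, 0) >= NOTIFY_INTERVAL_SECONDS
       sampled = random.randrange(NOTIFY_ONE_IN_N) == 0
       if due or sampled:
           last_notified[subscriber_id] = now
           return NOTIFICATION_URL  # show the notification page
       return requested_url         # leaky: let the request through untouched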
One challenge with this approach arises when the user has multiple 674 hosts sharing a single IP 675 address, such as via a common home gateway device which performs 676 Network Address Translation (NAT). In such a case, the ISP may need 677 to determine from user feedback, or other means, that all affected 678 hosts have been remediated, which may or may not be technically 679 feasible. 681 Finally, when a walled garden is used, a list of well-known addresses 682 for both operating system vendors and security vendors should be 683 created and maintained in a white list which permits access to these 684 sites. This can be important for allowing access from the walled 685 garden by end users in search of operating system and application 686 patches. It is recommended that walled gardens be seriously 687 considered as a method of notification as they are easy to implement 688 and proven to be effective as a means of getting end user attention. 690 5.5. Instant Message Notification 692 Instant messaging provides the ISP with a simple means to communicate 693 with the user. There are several advantages to using Instant 694 Messaging (IM) which make it an attractive option. If the ISP 695 provides IM service and the user subscribes to it, then the user can 696 be notified easily. IM-based notification can be a cost-effective 697 means to communicate with users automatically from an IM alert system 698 or via a manual process, by the ISP's support staff. Ideally, the 699 ISP should allow the user to register their IM identity in an ISP 700 account management system and grant permission to be contacted via 701 this means. If the IM service provider supports off-line messaging, 702 then the user can be notified regardless of whether they are 703 currently logged into the IM system. 705 There are several drawbacks with this communications method. There 706 is a high probability that the subscriber may interpret the communication 707 to be spim, and as such ignore it. Also, not every user uses IM 708 and/or the user may not provide their IM identity to the ISP, so some 709 alternative means have to be used. Even in those cases where a user 710 does have an IM address, they may not be signed onto that IM system 711 when the notification is attempted. There may be a privacy concern 712 on the part of users, when such an IM notification must be 713 transmitted over a third-party network and/or IM service. As such, 714 should this method be used, the notification should be discreet and 715 not include any PII in the notification itself. 717 5.6. Short Message Service (SMS) Notification 719 SMS allows the ISP to send a brief description of the problem to notify 720 the user of the issue, typically to a mobile device such as a mobile 721 phone or smart phone. Ideally, the ISP should allow the user to 722 register their mobile number and/or SMS address in an ISP account 723 management system and grant permission to be contacted via this 724 means. The primary advantage of SMS is that users are familiar with 725 receiving text messages and are likely to read them. However, users 726 may not act on the notification immediately if they are not in front 727 of their host at the time of the SMS notification. 729 One disadvantage is that ISPs may have to follow up with an alternate 730 means of notification if not all of the necessary information can be 731 conveyed in one message, given constraints on the number of 732 characters in an individual message (typically 140 characters). 733 Another disadvantage with SMS is the cost associated with it.
The 734 ISP has to either build its own SMS gateway to interface with the 735 various wireless network service providers or use a third-party SMS 736 clearinghouse (relay) to notify users. In both cases, an ISP may 737 incur fees related to SMS notifications, depending upon the method 738 used to send the notifications. An additional downside is that SMS 739 messages sent to a user may result in a charge to the user by their 740 wireless provider, depending upon the plan to which they subscribe. 741 Another minor disadvantage is that it is possible to notify the wrong 742 user if the intended user changes their mobile number but forgets to 743 update it with the ISP. 745 There are several other drawbacks with this communications method. 746 There is a high probability that the subscriber may interpret the 747 communication to be spam, and as such ignore it. Also, not every 748 user uses SMS and/or the user may not provide their SMS address or 749 mobile number to the ISP. Even in those cases where a user does have 750 an SMS address or mobile number, their device may not be powered on 751 or otherwise available on a wireless network when the notification is 752 attempted. There may also be a privacy concern on the part of 753 users, when such an SMS notification must be transmitted over a 754 third-party network and/or SMS clearinghouse. As such, should this 755 method be used, the notification should be discreet and not include 756 any PII in the notification itself. 758 5.7. Web Browser Notification 760 Near real-time notification to the user's web browser is another 761 technique that may be utilized for notifying the user [RFC6108], 762 though how such a system might operate is outside the scope of this 763 document. Such a notification could have a comparative advantage 764 over a walled garden notification, in that it does not restrict 765 traffic to a specified list of destinations in the same way that a 766 walled garden by definition would. However, as with a walled garden 767 notification, there is no guarantee that a user is at any given time 768 making use of a web browser, though such a system could certainly 769 provide a notification when such a browser is eventually used. 770 Compared to a walled garden, a web browser notification is probably 771 preferred from the perspective of Internet users, as it does not have 772 the risk of disrupting non-web sessions, such as online games, VoIP 773 calls, etc. (as noted in Section 5.4). 775 There are alternative methods of web browser notification offered 776 commercially by a number of vendors. Many of the techniques used are 777 proprietary, and it is not within the scope of this document to 778 describe how they are implemented. These techniques have been 779 successfully implemented at several ISPs. 781 It should be noted that web notification is only intended to notify 782 devices running a web browser. 784 5.8. Considerations for Notification to Public Network Locations 786 Delivering a notification to a location that provides a shared public 787 network, such as a train station, public square, coffee shop, or 788 similar location, may be of low value since the users connecting to 789 such networks are typically highly transient and generally not known 790 to site or network administrators. For example, a system may detect 791 that a host on such a network has a bot, but by the time a 792 notification is generated that user has departed from the network and 793 moved elsewhere. 795 5.9.
Considerations for Notification to Network Locations Using a 796 Shared IP Address 798 Delivering a notification to a location that accesses the Internet 799 routed through one or more shared public IP addresses may be of low 800 value since it may be quite difficult to differentiate between users 801 when providing a notification. For example, on a business network of 802 500 users, all sharing one public IP address, it may be sub-optimal 803 to provide a notification to all 500 users if you only need one 804 specific user to be notified and take action. As a result, such 805 networks may find value in establishing a localized bot detection and 806 notification system, just as they are likely to also establish other 807 localized systems for security, file sharing, email, and so on. 809 However, should an ISP implement some form of notification to such 810 networks, it may be better to simply send notifications to a 811 designated network administrator at the site. In such a case, the 812 local network administrator may like to receive additional 813 information in such a notification, such as a date and timestamp, the 814 source port of the infected system, and malicious sites and ports 815 that may have been visited. 817 5.10. Notification and End User Expertise 819 The ultimate effectiveness of any of the aforementioned forms of 820 notification is heavily dependent upon both the expertise of the end 821 user and the wording of any such notification. For example, while a 822 user may receive and acknowledge a notification, that user may lack 823 the necessary technical expertise to understand or be able to 824 effectively deal with the threat. As a result, it is important that 825 such notifications use clear and easily understood language, so that 826 the majority of users (who are non-technical) may understand the 827 notification. In addition, a notification should provide easily 828 understood guidance on how to remediate a threat (see Section 6), 829 potentially with one path for technical users to take and another for 830 non-technical users. 832 6. Remediation of Hosts Infected with a Bot 834 This section covers the different options available to remediate a 835 host, which means to remove, disable, or otherwise render a bot 836 harmless. Prior to this step, an ISP has detected the bot, notified 837 the user that one of their hosts is infected with a bot, and now may 838 provide some recommended means to clean the host. The generally 839 recommended approach is to provide the necessary tools and education 840 to the user so that they may perform bot remediation themselves, 841 particularly given the risks and difficulties inherent in attempting 842 to remove a bot. 844 For example, this may include the creation of a special web site with 845 security-oriented content that is dedicated to this purpose. This 846 should be a well-publicized security web site to which a user with a 847 bot infection can be directed for remediation. This security web 848 site should clearly explain why the user was notified and may include 849 an explanation of what bots are, and the threats that they pose. 850 There should be a clear explanation of the steps that the user should 851 take in order to attempt to clean their host, and the site should provide information 852 on how users can keep the host free of future infections. The 853 security web site should also have a guided process that takes non- 854 technical users through the remediation process, on an easily 855 understood, step-by-step basis.
857 In terms of the text used to explain what bots are and the threats 858 that they pose, something simple such as this may suffice: 860 "What is a bot? A bot is a piece of software, generally 861 installed on your machine without your knowledge, which either 862 sends spam or tries to steal your personal information. Bots 863 can be very difficult to spot, though you may have noticed that 864 your computer is running much more slowly than usual or you 865 notice regular disk activity even when you are not doing 866 anything. Ignoring this problem is risky to you and your 867 personal information. Thus, bots need to be removed to protect 868 your data and your personal information." 870 It is also important to note that it may not be immediately apparent 871 to the Internet user precisely which devices have been infected with 872 a particular bot. This may be due to the user's home network 873 configuration, which may encompass several hosts, where a home 874 gateway which performs Network Address Translation (NAT) to share a 875 single public IP address has been used. Therefore, any of these 876 devices can be infected with a bot. The consequence of this for an 877 ISP is that remediation advice may not ultimately be immediately 878 actionable by the Internet user, as that user may need to perform 879 additional investigation within their own home network. 881 An added complication is that the user may have a bot infection on a 882 device such as a video console, multimedia system, appliance, or 883 other end-user computing device which does not have a typical Windows 884 or Macintosh user interface. As a result, diligence needs to be 885 taken by the ISP where possible such that they can identify and 886 communicate the specific nature of the device that has been infected 887 with a bot, and further provide appropriate remediation advice. 889 There are a number of forums that exist online to provide security- 890 related support to end users. These forums are staffed by volunteers 891 and are often focused on the use of a common tool set to help 892 end users to remediate hosts infected with malware. It may be 893 advantageous to ISPs to foster a relationship with one or more such 894 forums, perhaps by offering free hosting or other forms of 895 sponsorship. 897 It is also important to keep in mind that not all users will be 898 technically adept (see Section 5.10). As a result, it may be more 899 effective to provide a range of suggested options for remediation. 900 This may include, for example, a very detailed "do it yourself" 901 approach for experts, a simpler guided process for the average user, 902 and even assisted remediation (Section 6.2). 904 6.1. Guided Remediation Process 906 Minimally, the Guided Remediation Process should include options 907 and/or recommendations on how a user should: 909 1. Back up personal files. For example: "Before you start, make sure 910 to back up all of your important data. (You should do this on a 911 regular basis anyway.) You can back up your files manually or 912 using a system back-up software utility, which may be part of 913 your Operating System (OS). You can back your files up to a USB 914 Thumb Drive (aka USB Key), a writeable CD/DVD-ROM, an external 915 hard drive, a network file server, or an Internet-based backup 916 service." 918 2. Download OS patches and Anti-Virus (A/V) software updates.
For 919 example, links could be provided to Microsoft Windows updates as 920 well as to Apple MacOS updates, or to updates for other major operating 921 systems which are relevant to users and their devices. 923 3. Explain how to configure the host to automatically install 924 updates for the OS, A/V software, and common Web Browsers such as 925 Microsoft Internet Explorer, Mozilla Firefox, Apple Safari, 926 Opera, and Google Chrome. 928 4. The flow should also have the option for users to get 929 professional assistance if they are unable to remove the bots 930 themselves. If purchasing professional assistance, then the user 931 should be encouraged to pre-determine how much they are willing 932 to pay for that help. If the host that is being remediated is 933 old and can easily be replaced with a new, faster, larger, and 934 more reliable system for a certain cost, then it makes no sense to 935 spend more than that cost to fix the old host, for example. On 936 the other hand, if the customer has a brand new host, it might 937 make perfect sense to spend the money to attempt to remediate it. 939 5. To continue, regardless of whether the user or a knowledgeable 940 technical assistant is working on remediating the host, their 941 first task should be to determine which of multiple potentially- 942 infected machines may be the one that needs attention (in the 943 common case of multiple hosts in a home network). Sometimes, as 944 in cases where there is only a single directly-attached host, or 945 the user has been noticing problems with one of their hosts, this 946 can be easy. Other times, it may be more difficult, especially if 947 there are no clues as to which host is infected. If the user is 948 behind a home gateway/router, then the first task may be to 949 ascertain which of the machines is infected. In some cases the 950 user may have to check all machines to identify the infected one. 952 6. User surveys to solicit feedback on whether the notification and 953 remediation process is effective and what recommended changes 954 could be made in order to improve the ease, understandability, 955 and effectiveness of the remediation process. 957 7. If the user is interested in reporting his or her host's bot 958 infection to an applicable law enforcement authority, then the 959 host effectively becomes a cyber "crime scene" and should not be 960 mitigated unless or until law enforcement has collected the 961 necessary evidence. For individuals in this situation, the ISP 962 may wish to provide links to local, state, federal, or other 963 relevant computer crime offices. (Note: Some "minor" incidents, 964 even if highly traumatic to the user, may not be sufficiently 965 serious for law enforcement to commit some of their limited 966 resources to an investigation.) In addition, individual regions 967 may have other, specialized computer crime organizations to which 968 these incidents can be reported. For example, in the United 969 States, that organization is the Internet Crime Complaint Center, 970 at http://www.ic3.gov. 972 8. Users may also be interested in links to security expert forums, 973 where other users can assist them. 975 6.2. Professionally-Assisted Remediation Process 977 It should be acknowledged that, based on the current state of 978 remediation tools and the technical abilities of end users, many 979 users may be unable to remediate on their own. As a result, it is 980 recommended that users have the option for professional assistance.
6.2.  Professionally-Assisted Remediation Process

It should be acknowledged that, based on the current state of remediation tools and the technical abilities of end users, many users may be unable to remediate on their own.  As a result, it is recommended that users have the option of professional assistance.  This may entail online or telephone assistance for remediation, as well as working face to face with a professional who has training and expertise in the removal of malware.  It should be made clear, at the time this service is offered, that it is intended for those who do not have the skills or confidence to attempt remediation and is not intended as an upsell by the ISP.

7.  Failure or Refusal to Remediate

ISP systems should track the bot infection history of hosts in order to detect when users consistently fail to remediate or refuse to take any steps to remediate.  In such cases, ISPs may need to consider taking additional steps to protect their network, other users and hosts on that network, and other networks.  Such steps may include a progression of actions up to and including account termination.

Refusal to remediate can be viewed as a business issue, and as such no technical recommendation is possible.
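The tracking recommended above can be illustrated with a minimal sketch, assuming a simple in-memory record per subscriber and an escalation policy chosen purely for illustration.  The action names, thresholds, class and field names, and the example subscriber identifier are all assumptions; a production system would need persistent storage, careful handling of any personally identifiable information (see Section 10), and an escalation policy consistent with the ISP's own terms of service and applicable law.

   # Illustrative only: per-subscriber infection history with a simple
   # escalation policy.  Action names and thresholds are assumptions.
   from dataclasses import dataclass, field
   from datetime import datetime, timezone
   from typing import List

   # Escalating responses applied on the 1st, 2nd, 3rd, ... report.
   ESCALATION = ["notify", "walled_garden", "account_review", "terminate"]

   @dataclass
   class InfectionHistory:
       subscriber_id: str
       reports: List[datetime] = field(default_factory=list)

       def record_report(self) -> str:
           """Record an unremediated infection report and return the
           response to apply next."""
           self.reports.append(datetime.now(timezone.utc))
           step = min(len(self.reports) - 1, len(ESCALATION) - 1)
           return ESCALATION[step]

   # Example: a subscriber who repeatedly fails to remediate.
   history = InfectionHistory("example-subscriber")
   for _ in range(5):
       print(history.record_report())

Running the example records five successive unremediated infection reports and prints "notify", "walled_garden", "account_review", "terminate", "terminate", illustrating one possible progression of actions up to and including account termination.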
8.  Sharing of Data from the User to the ISP

As an additional consideration, it may be useful to create a process by which users could choose, at their option and with their express consent, to share data regarding their bot infection with their ISP and/or another authorized third party.  Such third parties may include governmental entities that aggregate threat data (such as the Internet Crime Complaint Center referred to earlier in this document), academic institutions, and/or security researchers.  While in many cases the information shared with the user's ISP or designated third parties will only be used for aggregated statistical analysis, it is also possible that certain research needs may be best met with more detailed data.  Thus, any such data sharing from a user to the ISP or authorized third party may contain some type of personally identifiable information, either by design or inadvertently.  As a result, any such data sharing should be enabled on an opt-in basis, where users review and approve of the data being shared and the parties with which it is to be shared, unless the ISP is already required to share such data in order to comply with local laws and applicable regulations.

9.  Security Considerations

This document describes in detail the numerous security risks and concerns relating to botnets.  As such, it has been appropriate to include specific information about security in each section above.  This document describes the security risks related to malicious bot infections themselves, such as enabling identity theft, theft of authentication credentials, and the use of a host to unwittingly participate in a DDoS attack, among many other risks.  Finally, the document also describes security risks that may relate to the particular methods of communicating a notification to Internet users.  Bot networks and bot infections pose extremely serious security risks, and any reader should review this document carefully.

In addition, regarding the notifications described in Section 5, care should be taken to assure users that notifications have been provided by a trustworthy site and/or party, so that the notification is more difficult for phishers or other malicious parties using social engineering tactics to mimic, so that the user has some level of trust that the notification is valid, and/or so that the user has some way to verify the notification via some other mechanism or step.

10.  Privacy Considerations

This document describes at a high level the activities where ISPs should be sensitive to the possible collection or communication of PII.  In addition, when sending notifications to end users (Section 5), those notifications should not include PII.

As noted in Section 8, any sharing of data from the user to the ISP and/or authorized third parties should be done on an opt-in basis.  Additionally, the ISP and/or authorized third parties should clearly state what data will be shared and with whom it will be shared.

Lastly, as noted in some other sections, there may be legal requirements in particular legal jurisdictions concerning how long any subscriber-related or other data is retained; an ISP operating in such a jurisdiction should be aware of and comply with those requirements.

11.  IANA Considerations

There are no IANA considerations in this document.

12.  Acknowledgements

The authors wish to acknowledge the following individuals and groups for performing a detailed review of this document and/or providing comments and feedback that helped to improve and evolve this document:

Mark Baugher

Richard Bennett

James Butler

Vint Cerf

Alissa Cooper

Jonathan Curtis

Jeff Chan

Roland Dobbins

Dave Farber

Stephen Farrell

Eliot Gillum

Joel Halpern

Joel Jaeggli

Scott Keoseyan

The Messaging Anti-Abuse Working Group (MAAWG)

Jose Nazario

Gunter Ollmann

David Reed

Roger Safian

Donald Smith

Joe Stewart

Forrest Swick

Sean Turner

Robb Topolski

Maxim Weinstein

Eric Ziegast

13.  Informative references

[BIOS]  Sacco, A. and A. Ortega, "Persistent BIOS Infection", March 2009, .

[Combat-Zone]  Alshech, E., "Cyberspace as a Combat Zone: The Phenomenon of Electronic Jihad", February 2007, .

[DDoS]  Saafan, A., "Distributed Denial of Service Attacks: Explanation, Classification and Suggested Solutions", March 2009, .

[Dragon]  Nagaraja, S. and R. Anderson, "The snooping dragon: social-malware surveillance of the Tibetan movement", March 2009, .

[Estonia]  Evron, G., "Battling Botnets and Online Mobs: Estonia's Defense Efforts during the Internet War", May 2005, .

[Gh0st]  Vallentin, M., Whiteaker, J., and Y. Ben-David, "The Gh0st in the Shell: Network Security in the Himalayas", February 2010, .

[RFC1459]  Oikarinen, J. and D. Reed, "Internet Relay Chat Protocol", RFC 1459, May 1993.

[RFC2142]  Crocker, D., "MAILBOX NAMES FOR COMMON SERVICES, ROLES AND FUNCTIONS", RFC 2142, May 1997.

[RFC3954]  Claise, B., "Cisco Systems NetFlow Services Export Version 9", RFC 3954, October 2004.

[RFC6108]  Chung, C., Kasyanov, A., Livingood, J., Mody, N., and B. Van Lieu, "Comcast's Web Notification System Design", RFC 6108, February 2011.
[Spamalytics]  Kanich, C., Kreibich, C., Levchenko, K., Enright, B., Voelker, G., Paxson, V., and S. Savage, "Spamalytics: An Empirical Analysis of Spam Marketing Conversion", October 2008, .

[Threat-Report]  Ahamad, M., Amster, D., Barret, M., Cross, T., Heron, G., Jackson, D., King, J., Lee, W., Naraine, R., Ollman, G., Ramsey, J., Schmidt, H., and P. Traynor, "Emerging Cyber Threats Report for 2009: Data, Mobility and Questions of Responsibility will Drive Cyber Threats in 2009 and Beyond", October 2008, .

[Whiz-Kid]  Berinato, S., "Case Study: How a Bookmaker and a Whiz Kid Took On a DDOS-based Online Extortion Attack", May 2005, <http://www.csoonline.com/article/220336/How_a_Bookmaker_and_a_Whiz_Kid_Took_On_a_DDOS_based_Online_Extortion_Attack>.

Appendix A.  Examples of Third Party Malware Lists

As noted in Section 4, there are many potential third parties which may be willing to share lists of infected hosts.  This list is for example purposes only, is not intended to be either exclusive or exhaustive, and is subject to change over time.

o  Arbor - Atlas, see http://atlas.arbor.net/

o  Internet Systems Consortium - Secure Information Exchange (SIE), see https://sie.isc.org/

o  Microsoft - Smart Network Data Services (SNDS), see https://postmaster.live.com/snds/

o  SANS Institute / Internet Storm Center - DShield Distributed Intrusion Detection System, see http://www.dshield.org/about.html

o  ShadowServer Foundation, see http://www.shadowserver.org/

o  Spamhaus - Policy Block List (PBL), see http://www.spamhaus.org/pbl/

o  Spamhaus - Exploits Block List (XBL), see http://www.spamhaus.org/xbl/

o  Team Cymru - Community Services, see http://www.team-cymru.org/

Appendix B.  Document Change Log

[RFC Editor: This section is to be removed before publication]

-13 version:

o  All changes below per Sean Farrell except where indicated

o  Section 1.2 Added reference to fast flux definition

o  Section 1.2 Included reference to insecure protocols

o  Section 4 Cleared ambiguity

o  Section 4 Substituted "must have"

o  Section 4 Substituted "to" for "too"

o  Section 4 Addressed PII issue for 3rd parties

o  Section 4 Addressed issue around blocking of traffic during bot detection process

o  Section 5 Per Max Weinstein, included a number of comments and addressed issues of detection transparency

o  Section 5 Addressed issue by recommending that users should be allowed to opt in to their desired method of notification

o  Section 5.4 Addressed issue around timing of notification

o  Section 5.4 Addressed Walled Garden issue by recommending that Walled Gardens are to be used as a notification method

o  Section 5.7 Noted that there are alternative methods to that outlined in RFC6108

o  Section 5.7 Noted that web notification is only intended for devices running a web browser

o  Section 5.9 Fixed typo

o  Section 6.1 Noted that ISPs should be clear when offering paid remediation services that these are aimed at those without skills to remediate or lacking confidence to do so

o  Section 7 Noted that refusal to remediate is a business issue and not subject to technical recommendation.

o  ALL open issues are now closed!
-12 version:

o  Shortened reference names (non-RFC references)

o  Closed Open Issues #1 and #4, as leaky walled gardens are covered in Section 5.4

o  Closed Open Issues #2 and #6, by adding a section on users that fail to mitigate, including account termination

o  Closed Open Issue #3, by adding a Privacy Considerations section to address PII

o  Closed Open Issue #5, with no action taken

o  Closed Open Issue #7, by leaving as Informational (the IETF can assess this later)

o  Closed Open Issue #8, by generalizing the guided remediation section via the removal of specific links, etc.

o  Closed Open Issue #9, by reviewing and updating remediation steps

o  Changed some 'must' statements to 'should' statements (even though there is no RFC 2119 language in the document)

o  ALL open issues are now closed!

-11 version:

o  Added reference to RFC 6108

o  Per Sean Turner, removed RFC 2119 reference and section

o  Per Donald Smith, externalized the reference to 3rd party data sources, now Appendix A

o  Per Donald Smith, moved basic notification challenges into a new section at the end of the Notifications section.

-10 version:

o  Minor refresh to keep doc from expiring.  Several large updates planned in a Dec/Jan revision

-09 version:

o  Corrected nits pointed out by Sean

o  Removed occurrences of double spacing

o  Grammar and spelling corrections in many sections

o  Added text for leaky walled garden

-08 version:

o  Corrected a reference error in Section 10.

o  Added a new informative reference

o  Change to Section 5.a., to note additional port scanning limitations

o  Per Joel Jaeggli, change computer to host, to conform to IETF document norms

o  Several other changes suggested by Joel Jaeggli and Donald Smith on the OPSEC mailing list

o  Incorp. other feedback received privately

o  Because Jason is so very dedicated, he worked on this revision while on vacation ;-)

-07 version:

o  Corrected various spelling and grammatical errors, pointed out by additional reviewers.  Also added a section on information flowing from the user.  Lastly, updated the reviewer list to include all those who either were kind enough to review for us or who provided interesting, insightful, and/or helpful feedback.

-06 version:

o  Corrected an error in the version change log, and added some extra information on user remediation.  Also added an informational reference to BIOS infection.

-05 version:

o  Minor tweaks made by Jason - ready for wider review and next steps.  Also cleared open issues.  Lastly, added 2nd paragraph to security section and added sections on limitations relating to public and other shared network sites.  Added a new section on professional remediation.

-04 version:

o  Updated reference to BIOS based malware, added wording on PII and local jurisdictions, added suggestion that industry body produce bot stats, added suggestion that ISPs use volunteer forums

-03 version:

o  all updates from Jason - now ready for wider external review

-02 version:

o  all updates from Jason - still some open issues but we're now at a place where we can solicit more external feedback

-01 version:

o  -01 version published

Appendix C.  Open Issues

No open issues.
Authors' Addresses

Jason Livingood
Comcast Cable Communications
One Comcast Center
1701 John F. Kennedy Boulevard
Philadelphia, PA 19103
US

Email: jason_livingood@cable.comcast.com
URI: http://www.comcast.com

Nirmal Mody
Comcast Cable Communications
One Comcast Center
1701 John F. Kennedy Boulevard
Philadelphia, PA 19103
US

Email: nirmal_mody@cable.comcast.com
URI: http://www.comcast.com

Mike O'Reirdan
Comcast Cable Communications
One Comcast Center
1701 John F. Kennedy Boulevard
Philadelphia, PA 19103
US

Email: michael_oreirdan@cable.comcast.com
URI: http://www.comcast.com