INTERNET-DRAFT                                            Garth Gibson
Expires: January 2005                               Panasas Inc. & CMU
                                                         Peter Corbett
                                                Network Appliance, Inc.
Document: draft-gibson-pnfs-problem-statement-01.txt         July 2004

                         pNFS Problem Statement

Status of this Memo

By submitting this Internet-Draft, I certify that any applicable patent or other IPR claims of which I am aware have been disclosed, or will be disclosed, and any of which I become aware will be disclosed, in accordance with RFC 3668.

Internet-Drafts are working documents of the Internet Engineering Task Force (IETF), its areas, and its working groups. Note that other groups may also distribute working documents as Internet-Drafts.

Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress."

The list of current Internet-Drafts can be accessed at http://www.ietf.org/1id-abstracts.html

The list of Internet-Draft Shadow Directories can be accessed at http://www.ietf.org/shadow.html

Copyright Notice

Copyright (C) The Internet Society (2004). All Rights Reserved.

Abstract

This draft considers the problem of limited bandwidth to NFS servers. The bandwidth limitation exists because an NFS server has limited network, CPU, memory and disk I/O resources. Yet access to any one file system through the NFSv4 protocol requires that a single server be accessed. While NFSv4 allows file system migration, it does not provide a mechanism that supports multiple servers simultaneously exporting a single writable file system.

This problem has become aggravated in recent years with the advent of very cheap and easily expanded clusters of application servers that are also NFS clients. The aggregate bandwidth demands of such clustered clients, typically working on a shared data set preferentially stored in a single file system, can increase much more quickly than the bandwidth of any single server. The proposed solution is to provide for the parallelization of file services by enhancing NFSv4 in a minor version.

Table of Contents

   1.  Introduction
   2.  Bandwidth Scaling in Clusters
   3.  Clustered Applications
   4.  Existing File Systems for Clusters
   5.  Eliminating the Bottleneck
   6.  Separated Control and Data Access Techniques
   7.  Security Considerations
   8.  Informative References
   9.  Acknowledgments
   10. Authors' Addresses
   11. Full Copyright Statement
1. Introduction

The storage I/O bandwidth requirements of clients are rapidly outstripping the ability of network file servers to supply them. Increasingly, this problem is being encountered in installations running the NFS protocol. The problem can be solved by increasing the server bandwidth. This draft suggests that an effort be mounted to enable NFS file service to scale with its clusters of clients. The proposed approach is to increase the aggregate bandwidth possible to a single file system by parallelizing the file service, resulting in multiple network connections to multiple server endpoints participating in the transfer of requested data. This should be achievable within the framework of NFS, possibly in a minor version of the NFSv4 protocol.

In many application areas, single-system servers are rapidly being replaced by clusters of inexpensive commodity computers. As clustering technology has improved, the barriers to running application codes on very large clusters have been lowered. Examples of application areas that are seeing the rapid adoption of scalable client clusters are data-intensive applications such as genomics, seismic processing, data mining, content and video distribution, and high performance computing. The aggregate storage I/O requirements of a cluster can scale proportionally to the number of computers in the cluster. It is not unusual for clusters today to make bandwidth demands that far outstrip the capabilities of traditional file servers. A natural solution to this problem is to enable file service to scale as well, by increasing the number of server nodes that are able to serve a single file system to a cluster of clients.

Scalable bandwidth can be obtained by simply adding multiple independent servers to the network. Unfortunately, this leaves to file system users the task of spreading data across these independent servers. Because the data processed by a given data-intensive application is usually logically associated, users routinely co-locate this data in a single file system, directory or even a single file. The NFSv4 protocol currently requires that all the data in a single file system be accessible through a single exported network endpoint, constraining access to be through a single NFS server.

A better way of increasing the bandwidth to a single file system is to enable access to be provided through multiple endpoints in a coordinated or coherent fashion. Separation of control and data flows provides a straightforward framework to accomplish this, by allowing transfers of data to proceed in parallel from many clients to many data storage endpoints. Control and file management operations, inherently more difficult to parallelize, can remain the province of a single NFS server, preserving the simple management of today's NFS file service, while offloading data transfer operations allows bandwidth to scale. Data transfer may be done using NFS or other protocols, such as iSCSI.
While NFS is a widely used network file system protocol, most of the world's data resides in data stores that are not accessible through NFS. Much of this data is stored in Storage Area Networks, accessible by SCSI's Fibre Channel Protocol (FCP) or, increasingly, by iSCSI. Storage Area Networks routinely provide much higher data bandwidths than do NFS file servers. Unfortunately, the simple array-of-blocks interface into Storage Area Networks does not lend itself to controlling multiple clients that are simultaneously reading and writing the blocks of the same or different files, a workload usually referred to as data sharing. NFS file service, with its hierarchical namespace of separately controlled files, offers simpler and more cost-effective management. One might conclude that users must choose between high bandwidth and data sharing. Not only is this conclusion false, but it should also be possible to allow data stored in SAN devices, whether FCP or iSCSI, to be accessed under the control of an NFS server. Such an approach protects the industry's large investment in NFS, since the bandwidth bottleneck no longer needs to drive users to adopt a proprietary alternative solution, and it leverages SAN storage infrastructures, all within a common architectural framework.

2. Bandwidth Scaling in Clusters

When applied to data-intensive applications, clusters can generate unprecedented demand for storage bandwidth. At present, each node in the cluster is likely to be a dual-processor machine, with each processor running at multiple GHz and with gigabytes of DRAM. Depending on the specific application, each node is capable of sustaining a demand of 10s to 100s of MB/s of data from storage. In addition, the number of nodes in a cluster is commonly in the 100s, with many instances of 1000s to 10,000s of nodes. The result is that storage systems may be called upon to provide an aggregate bandwidth of GB/s, ranging upwards toward TB/s.

The performance of a single NFS server has been improving, but it is not able to keep pace with cluster demand. Directly connected storage devices behind an NFS server have given way to disk arrays and networked disk arrays, making it now possible for an NFS server to directly access 100s to 1000s of disk drives whose aggregate capacity reaches upwards to PBs and whose raw bandwidths range upwards to 10s of GB/s.

An NFS server is interposed between the scalable storage subsystem and the scalable client cluster. Multiple NIC endpoints help network bandwidth keep up with DRAM bandwidth. However, the rate of improvement of NFS server performance is not faster than the rate of improvement in each client node. As long as an NFS file system is associated with a single client-side network endpoint, the aggregate capability of a single NFS server to move data between storage networks and client networks will not be able to keep pace with the aggregate demand of clustered clients and large disk subsystems.
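As a rough illustration of the scaling argument above, the following back-of-the-envelope sketch multiplies an assumed per-node rate by a few assumed node counts; the specific figures are illustrative examples, not measurements from any particular cluster:

   # Back-of-the-envelope aggregate storage demand for a client cluster.
   # The per-node rate and the node counts are illustrative assumptions.
   per_node_mb_per_s = 100                  # ~100 MB/s sustained per node
   for nodes in (100, 1000, 10000):
       aggregate_gb_per_s = nodes * per_node_mb_per_s / 1000
       print(f"{nodes:>6} nodes -> ~{aggregate_gb_per_s:,.0f} GB/s aggregate")

At the assumed 100 MB/s per node, a 10,000-node cluster generates roughly 1 TB/s of aggregate demand, consistent with the GB/s-to-TB/s range described above.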
3. Clustered Applications

Large datasets, and high bandwidth processing of large datasets, are increasingly common in a wide variety of applications. As most computer users can affirm, the size of everyday presentations, pictures and programs seems to grow continuously, and in fact average file size does grow with time [Ousterhout85, Baker91]. Simple copying, viewing, archiving and sharing of even this baseline use of growing files in day-to-day business and personal computing drives up the bandwidth demand on servers.

Some applications, however, make much larger demands on file and file system capacity and bandwidth. Databases of DNA sequences, used in bioinformatics search, range up to tens of GBs and are often in use by all cluster users at the same time [NIH03]. These huge files may experience bursts of many concurrent clients loading the whole file independently.

Bioinformatics is one example of extensive search in a scientific application, but extensive search is much broader than science. Wall Street has taken to collecting long-term transaction record histories. Looking for patterns of unbilled transactions, fraud or predictable market trends is a growing financial opportunity [Agarwal95, Senator95].

Security and authentication are driving a need for image search, such as face recognition [Flickner95]. Databasing the faces of approved or suspected individuals and searching through many camera feeds involves huge data volumes and bandwidths. Traditional database indexing in these high-dimensional data structures often fails to avoid full database scans of these huge files [Berchtold97].

With huge storage repositories and fast computers available, the capture of huge sensor datasets is increasingly common in many applications. Consumer digital photography fits this model, with photo touch-up and slide show generation tools driving bandwidth, although much more demanding applications are not unusual.

Medical test imagery is being captured at very high resolution, and tools are being developed for automatic preliminary diagnosis, for example [Afework98]. In the science world, even larger datasets are captured from satellites, telescopes, and atom-smashers, for example [Greiman97]. Preliminary processing of a sky survey suggests that thousand-node clusters may sustain GB/s storage bandwidths [Gray03]. Seismic trace data, often measured in helicopter loads, commands large clusters for days to months [Knott03].

At the high end of scientific applications, accurate physical simulation, along with its visualization and fault-tolerance checkpointing, has been estimated to need 10 GB/s of bandwidth and 100 TB of capacity for every thousand nodes in a cluster [SGPFS01].

Most of these applications make heavy use of shared data across many clients, users and applications, have limited budgets available to fund aggressive computational goals, and have technical or scientific users with strong preferences for file systems and no patience for tuning storage. NFS file service, appropriately scaled up in capacity and bandwidth, is highly desired.

In addition to these search, sensor and science applications, traditional database applications are increasingly employing NFS servers. These applications often have hotspot tables, leading to high bandwidth storage demands. Yet SAN-based solutions are sometimes harder to manage than NFS-based solutions, especially in databases with a large number of tables. NFS servers with scalable bandwidth would accelerate the adoption of NFS for database applications.

These examples suggest that there is no shortage of applications frustrated by the limitations of a single network endpoint on a single NFS server exporting a single file system or single huge file.

4. Existing File Systems for Clusters

The server bottleneck has induced various vendors to develop proprietary alternatives to NFS.
Known variously as asymmetric, out-of-band, clustered or SAN file systems, these proprietary alternatives exploit the scalability of storage networks by attaching all nodes in the client cluster to the storage network. Then, by reorganizing client and server code functionality to separate data traffic from control traffic, client nodes are able to access storage devices directly rather than requesting all data from the same single network endpoint in the file server that handles control traffic.

Most proprietary alternative solutions have been tailored to storage area networks based on the fixed-sized block SCSI storage device command set and its Fibre Channel SCSI transport. Examples in this class include EMC's High Road (www.emc.com); IBM's TotalStorage SAN FS, SANergy and GPFS (www.ibm.com); Sistina/Red Hat's GFS (www.redhat.com); SGI's CXFS (www.sgi.com); Veritas' SANPoint Direct and CFS (www.veritas.com); and Sun's QFS (www.sun.com). The Fibre Channel SCSI transport used in these systems may soon be replaceable by a TCP/IP SCSI transport, iSCSI, enabling these proprietary alternatives to operate on the same equipment and IETF protocols commonly used by NFS servers.

While fixed-sized block SCSI storage devices are used in most file systems with separated data and control paths, this is not the only alternative available today. SCSI's newly emerging command set, the Object Storage Device (OSD) command set, transmits variable-length storage objects over SCSI transports [T10-03]. Panasas' ActiveScale storage cluster employs a proto-OSD command set over iSCSI on its separated data path (www.panasas.com). IBM's research is also demonstrating a variant of their TotalStorage SAN FS employing proto-OSD commands [Azagury02].

Even more distinctive is Zforce's File Switch technology (www.zforce.com). Zforce virtualizes a CIFS file server, spreading the contents of a file share over many backend CIFS storage servers, and places the control path functionality inside a network switch in order to have some of the properties of both separated and non-separated data and control paths. However, striping files over multiple file-based storage servers is not a new concept. Berkeley's Zebra file system, the successor to the log-structured file system developed for RAID storage, had a separated data and control path with file protocols to both [Hartman95].

5. Eliminating the Bottleneck

The restriction to a single network endpoint results from the way NFS associates file servers and file systems. Essentially, each client machine "mounts" each exported file system; these mount operations bind a network endpoint to all files in the exported file system, instructing the client to address that network endpoint with all requests associated with all files in that file system. Mechanisms intended primarily for failover have been established for giving clients a list of network endpoints associated with a given file system.

Multiple NFS servers can be used instead of a single NFS server, and many cluster administrators, programmers and end-users have experimented with this alternative. The principal compromise involved in exploiting multiple NFS servers is that a single file or single file system is decomposed into multiple files or file systems, respectively. For instance, a single file can be decomposed into many files, each located in a part of the namespace that is exported by a different NFS server; or the files of a single directory can be linked to files in directories located in file systems exported by different NFS servers. Because this decomposition is done without NFS server support, the work of decomposing and recomposing, and the implications of the decomposition on capacity and load balancing, backup consistency, error recovery, and namespace management, all fall to the customer. Moreover, the additional statefulness of NFSv4 makes correct semantics for files decomposed over multiple servers without NFS support much more complex. Such extra work and extra problems are usually referred to as storage management costs, and are blamed for causing a high total cost of ownership for storage.
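To make the customer-side burden concrete, the following sketch shows an application manually striping a single large file across several independently exported NFS file systems and reassembling it on read. The mount points and stripe size are hypothetical; nothing in NFS defines or coordinates this layout, so all of the bookkeeping, and all recovery if any one server is unavailable, falls on the user:

   import os

   # Hypothetical mount points of three independent NFS servers.
   MOUNTS = ["/mnt/nfs0", "/mnt/nfs1", "/mnt/nfs2"]
   STRIPE = 1 << 20   # 1 MB stripe unit (arbitrary choice)

   def write_striped(name, data):
       """Split 'data' round-robin across the mounts; the mapping
       (stripe size, mount order) must be remembered by the user."""
       for i in range(0, len(data), STRIPE):
           mount = MOUNTS[(i // STRIPE) % len(MOUNTS)]
           with open(os.path.join(mount, f"{name}.{i // STRIPE}"), "wb") as f:
               f.write(data[i:i + STRIPE])

   def read_striped(name):
       """Reassemble the file; a missing stripe or an inconsistent
       backup of any one server breaks the whole logical file."""
       chunks, n = [], 0
       while True:
           mount = MOUNTS[n % len(MOUNTS)]
           path = os.path.join(mount, f"{name}.{n}")
           if not os.path.exists(path):
               break
           with open(path, "rb") as f:
               chunks.append(f.read())
           n += 1
       return b"".join(chunks)

Load balancing, capacity planning, and consistent backup of the pieces are entirely manual in this arrangement, which is exactly the storage management cost described above.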
Preserving the relative ease of use of NFS storage systems requires solutions to the bandwidth bottleneck that do not decompose files and directories in the file subtree namespace. A solution to this problem should continue to use the existing single network endpoint for control traffic, including namespace manipulations. Decompositions of individual files and file systems over multiple network endpoints can then be provided via the separated data paths, without separating the control and metadata paths.

6. Separated Control and Data Access Techniques

Separating storage data flow from file system control flow effectively moves the bottleneck away from the single endpoint of an NFS server and distributes it across the bisectional bandwidth of the storage network between the cluster nodes and the storage devices. Since switch bandwidths of upwards of terabits per second are available today, this bottleneck is at least two orders of magnitude better than that of an NFS server network endpoint.

In an architecture that separates the storage data path from the NFS control path, there are choices of protocol for the data path. One straightforward answer is to extend the NFS protocol so that it can be used on both the control path and the separated data paths. Another straightforward answer is to adopt the separated data path already dominant in the market, fixed-sized block SCSI storage. A third alternative is the emerging object storage SCSI command set, OSD, which is appearing in new products with separate data and control paths.

A solution that accommodates all of these approaches provides the broadest applicability for NFS. Specifically, NFS extensions should make minimal assumptions about the storage data server access protocol. The clients in such an extended NFS system should be compatible with the current NFSv4 protocol, and should be compatible with earlier versions of NFS as well. A solution should be capable of providing both asymmetric data access, with the data path connected via NFS or other protocols and transports, and symmetric parallel access to servers that run NFS on each server node. Specifically, it is desirable to enable NFS to manage asymmetric access to storage attached via iSCSI and Fibre Channel SCSI storage area networks.

As previously discussed, the root cause of the NFS server bottleneck is the binding between one network endpoint and all the files in a file system. NFS extensions can allow the association of additional network endpoints with specific files. These associations could be represented as layout maps [Gibson98]. NFS clients could be extended to have the ability to retrieve and use these layout maps.

NFSv4 provides an excellent foundation for this. We may be able to extend the current notion of file delegations to include the ability to retrieve and utilize a file layout map. A number of ideas have been proposed for storing, accessing, and acting upon layout information stored by NFS servers to allow separate access to file data over separate data paths. Data access can be supported over multiple protocols, including NFSv4, iSCSI, and OSD.
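The sketch below illustrates the layout-map idea in the abstract. The structures and calls are hypothetical: this draft defines no wire format, and an actual NFSv4 extension would carry this information in protocol-defined operations. The point is only that, once a client knows which data server holds which byte range, transfers can proceed in parallel while the single control server retains namespace and metadata authority:

   from concurrent.futures import ThreadPoolExecutor
   from dataclasses import dataclass

   @dataclass
   class Stripe:
       data_server: str   # endpoint address (NFSv4, iSCSI, or OSD target)
       offset: int        # starting byte offset within the file
       length: int        # number of bytes held by this data server

   @dataclass
   class LayoutMap:
       file_handle: bytes
       stripes: list      # list of Stripe, obtained from the control server

   def get_layout(control_server, file_handle):
       """Placeholder for a control-path request to the single NFS
       (metadata) server that returns the file's layout map."""
       raise NotImplementedError

   def read_stripe(stripe):
       """Placeholder for a data-path read issued directly to one
       data server over whatever data protocol is in use."""
       raise NotImplementedError

   def parallel_read(control_server, file_handle):
       layout = get_layout(control_server, file_handle)
       with ThreadPoolExecutor(max_workers=max(1, len(layout.stripes))) as pool:
           parts = list(pool.map(read_stripe, layout.stripes))
       return b"".join(parts)

Unlike the manual striping shown earlier, the decomposition here is owned and coordinated by the server, so capacity, load balancing, backup and recovery remain the server's responsibility.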
7. Security Considerations

Bandwidth scaling solutions that employ separation of control and data paths will introduce new security concerns. For example, the data access methods will require authentication and access control mechanisms that are consistent with the primary mechanisms on the NFSv4 control paths. Object storage employs revocable cryptographic restrictions on each object, which can be created and revoked in the control path. With iSCSI access methods, iSCSI security capabilities are available, but they do not provide NFS access control. Fibre Channel based SCSI access methods have less sophisticated security than iSCSI; these access methods typically rely on private networks to provide security.

Any proposed solution must be analyzed for security threats, and any such threats must be addressed. The IETF and the NFS working group have significant expertise in this area.
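As one illustration of how a separated data path can enforce decisions made on the control path, the sketch below shows a capability-style check in the spirit of the object storage model mentioned above: the control server issues a time-limited, cryptographically signed permission, and the data server verifies it before serving I/O. The field names and the use of HMAC here are illustrative assumptions, not the T10 OSD wire format or any specific NFS extension:

   import hashlib
   import hmac
   import time

   # Assumed secret shared by the control server and the data server.
   SHARED_KEY = b"example shared secret"

   def issue_capability(object_id, rights, lifetime_s=300):
       """Control path: grant 'rights' (e.g. "READ") on an object until
       'expires', signed so that the data server can verify it."""
       expires = int(time.time()) + lifetime_s
       msg = f"{object_id}:{rights}:{expires}".encode()
       tag = hmac.new(SHARED_KEY, msg, hashlib.sha256).hexdigest()
       return {"object": object_id, "rights": rights,
               "expires": expires, "tag": tag}

   def verify_capability(cap, object_id, requested_right):
       """Data path: recompute the signature and check rights and expiry
       before performing the requested I/O."""
       msg = f"{cap['object']}:{cap['rights']}:{cap['expires']}".encode()
       tag = hmac.new(SHARED_KEY, msg, hashlib.sha256).hexdigest()
       return (hmac.compare_digest(tag, cap["tag"])
               and cap["object"] == object_id
               and requested_right in cap["rights"]
               and time.time() < cap["expires"])

In such a scheme, revocation is a control-path action: changing the shared key, or embedding a policy version in the signed message, invalidates outstanding capabilities without any change on the data path.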
Tull, "High-Speed Distributed Data Handling for HENP," 442 Computing in High Energy Physics, April, 1997. Berlin, Germany. 444 [Hartman95] John H. Hartman and John K. Ousterhout, "The Zebra 445 Striped Network File System," ACM Transactions on Computer Systems 446 13, 3, August 1995. 448 [Knott03] Knott, T., "Computing colossus," BP Frontiers magazine, 449 Issue 6, April 2003, http://www.bp.com/frontiers. 451 [NIH03] "Easy Large-Scale Bioinformatics on the NIH Biowulf 452 Supercluster," http://biowulf.nih.gov/easy.html, 2003. 454 [Ousterhout85] Ousterhout, J.K., DaCosta, H., Harrison, D., Kunze, 455 J.A., Kupfer, M. and Thompson, J.G. "A Trace Drive Analysis of the 456 UNIX 4.2 BSD FIle System" SOSP, December 1985. 458 [Senator95] Senator, T.E., Goldberg, H.G., Wooten, J., Cottini, M.A., 459 Khan, A.F.U., Klinger, C.D., Llamas, W.M., Marrone, M.P. and Wong, 460 R.W.H. "The Financial Crimes Enforcement Network AI System (FAIS): 461 Identifying potential money laundering from reports of large cash 462 transactions" AIMagazine 16 (4), Winter 1995. 464 [SGPFS01] SGS File System RFP, DOE NNCA and DOD NSA, April 25, 2001. 466 [T10-03] Draft OSD Standard, T10 Committee, Storage Networking 467 Industry Association(SNIA), 468 ftp://www.t10.org/ftp/t10/drafts/osd/osd-r08.pdf 470 9. Acknowledgments 472 David Black, Gary Grider, Benny Halevy, Dean Hildebrand, Dave Noveck, 473 Julian Satran, Tom Talpey, and Brent Welch contributed to the 474 development of this problem statement. 476 10. Author's Addresses 478 Garth Gibson 479 Panasas Inc, and Carnegie Mellon University 480 1501 Reedsdale Street 481 Pittsburgh, PA 15233 USA 482 Phone: +1 412 323 3500 483 Email: ggibson@panasas.com 485 Peter Corbett 486 Network Appliance Inc. 487 375 Totten Pond Road 488 Waltham, MA 02451 USA 489 Phone: +1 781 768 5343 490 Email: peter@pcorbett.net 492 11. Full Copyright Statement 494 Copyright (C) The Internet Society (2004). This document is subject 495 to the rights, licenses and restrictions contained in BCP 78, and 496 except as set forth therein, the authors retain all their rights. 498 This document and the information contained herein are provided on an 499 "AS IS" basis and THE CONTRIBUTOR, THE ORGANIZATION HE/SHE REPRESENTS 500 OR IS SPONSORED BY (IF ANY), THE INTERNET SOCIETY AND THE INTERNET 501 ENGINEERING TASK FORCE DISCLAIM ALL WARRANTIES, EXPRESS OR IMPLIED, 502 INCLUDING BUT NOT LIMITED TO ANY WARRANTY THAT THE USE OF THE 503 INFORMATION HEREIN WILL NOT INFRINGE ANY RIGHTS OR ANY IMPLIED 504 WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. 506 Intellectual Property 508 The IETF takes no position regarding the validity or scope of any 509 Intellectual Property Rights or other rights that might be claimed to 510 pertain to the implementation or use of the technology described in 511 this document or the extent to which any license 512 under such rights might or might not be available; nor does it 513 represent that it has made any independent effort to identify any 514 such rights. Information on the procedures with respect to rights in 515 RFC documents can be found in BCP 78 and BCP 79. 517 Copies of IPR disclosures made to the IETF Secretariat and any 518 assurances of licenses to be made available, or the result of an 519 attempt made to obtain a general license or permission for the use of 520 such proprietary rights by implementers or users of this 521 specification can be obtained from the IETF on-line IPR repository at 522 http://www.ietf.org/ipr. 
The IETF invites any interested party to bring to its attention any copyrights, patents or patent applications, or other proprietary rights that may cover technology that may be required to implement this standard. Please address the information to the IETF at ietf-ipr@ietf.org.

Acknowledgement

Funding for the RFC Editor function is currently provided by the Internet Society.