DECADE                                                      N. Zong, Ed.
Internet-Draft                                                   X. Chen
Intended status: Informational                                  Z. Huang
Expires: September 13, 2012                          Huawei Technologies
                                                                 L. Chen
                                                                 HP Labs
                                                                  H. Liu
                                                         Yale University
                                                          March 12, 2012

                 Integration Examples of DECADE System
                draft-ietf-decade-integration-example-03

Abstract

   The Decoupled Application Data Enroute (DECADE) system is an
   in-network storage infrastructure that is under discussion and
   standardization in the IETF DECADE WG.  This document presents two
   detailed examples of how to integrate such an in-network storage
   infrastructure into peer-to-peer (P2P) applications to achieve more
   efficient content distribution, as well as an example of combining
   it with an Application Layer Traffic Optimization (ALTO) system to
   build a content distribution platform for Content Providers (CPs).

Status of this Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF).  Note that other groups may also distribute
   working documents as Internet-Drafts.  The list of current Internet-
   Drafts is at http://datatracker.ietf.org/drafts/current/.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."
   This Internet-Draft will expire on September 13, 2012.

Copyright Notice

   Copyright (c) 2012 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document.  Code Components extracted from this
   document must include Simplified BSD License text as described in
   Section 4.e of the Trust Legal Provisions and are provided without
   warranty as described in the Simplified BSD License.

   This document may contain material from IETF Documents or IETF
   Contributions published or made publicly available before November
   10, 2008.  The person(s) controlling the copyright in some of this
   material may not have granted the IETF Trust the right to allow
   modifications of such material outside the IETF Standards Process.
   Without obtaining an adequate license from the person(s) controlling
   the copyright in such materials, this document may not be modified
   outside the IETF Standards Process, and derivative works of it may
   not be created outside the IETF Standards Process, except to format
   it for publication as an RFC or to translate it into languages other
   than English.

Table of Contents

   1.  Introduction
   2.  Terminology
     2.1.  INS Server
     2.2.  INS Client
     2.3.  INS Operations
     2.4.  INS System
     2.5.  INS Client API
     2.6.  INS-enabled Application Client
     2.7.  INS Service Provider
     2.8.  INS Portal
   3.  INS Client API
   4.  Integration of P2P Live Streaming and INS System
     4.1.  Integration Architecture
       4.1.1.  Data Access Messages
       4.1.2.  Control Messages
       4.1.3.  Object Naming Scheme
     4.2.  Design Considerations
       4.2.1.  Improve Efficiency for Each Connection
       4.2.2.  Reduce Control Latency
   5.  Integration of P2P File Sharing and INS System
     5.1.  Integration Architecture
     5.2.  Message Flow
   6.  Integration of ALTO and INS System for File Distribution
     6.1.  Architecture Design
     6.2.  CP Uploading Procedure
     6.3.  End User Downloading Procedure
   7.  Test Environment and Settings
     7.1.  Test Settings
     7.2.  Test Environment for P2P Live Streaming Example
       7.2.1.  INS Server
       7.2.2.  P2P Live Streaming Client
       7.2.3.  Tracker
       7.2.4.  Streaming Source Server
       7.2.5.  Test Controller
     7.3.  Test Environment for P2P File Sharing Example
       7.3.1.  INS Server
       7.3.2.  Vuze Client
       7.3.3.  Tracker
       7.3.4.  Test Controller
       7.3.5.  HTTP Server
       7.3.6.  PlanetLab Manager
     7.4.  Test Environment for Combined ALTO and INS File
           Distribution System
   8.  Performance Analysis
     8.1.  Performance Metrics
       8.1.1.  P2P Live Streaming
       8.1.2.  P2P File Sharing
       8.1.3.  Integration of ALTO and INS System for File
               Distribution
     8.2.  Results and Analysis
       8.2.1.  P2P Live Streaming
       8.2.2.  P2P File Sharing
       8.2.3.  Integrated ALTO and INS System for File Distribution
   9.  Short Conclusion
   10.  Security Considerations
   11.  IANA Considerations
   12.  References
     12.1.  Normative References
     12.2.  Informative References
   Authors' Addresses

1.  Introduction

   The Decoupled Application Data Enroute (DECADE) system is an
   in-network storage infrastructure that is under discussion and
   standardization in the IETF DECADE WG.  We implemented such an
   in-network storage infrastructure to simulate the DECADE system,
   including DECADE servers, DECADE clients and DECADE protocols
   [I-D.ietf-decade-arch].  Throughout this draft, we therefore use the
   terms in-network storage (INS) system, INS server, INS client, INS
   operations, etc.

   This draft introduces some examples of integrating the INS system
   with existing applications.  In our example systems, the core
   components are the INS server and the INS-enabled application
   client.  An INS server stores data inside the network and manages
   both the stored data and access to that data.  An INS-enabled
   application client, consisting of an INS client and a native
   application client, uses a set of Application Programming Interfaces
   (APIs) that let the native application client invoke INS operations
   such as data get, data put and storage status query.

   This draft presents two detailed examples of how to integrate the
   INS system into peer-to-peer (P2P) applications, i.e. live streaming
   and file sharing, as well as an example integration of Application
   Layer Traffic Optimization (ALTO) [I-D.ietf-alto-protocol] and the
   INS system to support file distribution.
   We first show how to extend native P2P applications by designing
   INS-enabled P2P clients and describing the corresponding flows of
   INS-enabled data transmission.  Then we introduce the functional
   architecture and working flows of the integrated ALTO and INS system
   for file distribution by Content Providers (CPs).  Finally, we
   illustrate the performance gains for P2P applications and the more
   efficient content distribution achieved by effectively leveraging
   the INS system.  We only show the feasibility of the integrated ALTO
   and INS system, without comparing it with other content distribution
   systems at this time.  More information will be provided after
   further experiments.

   Please note that the P2P applications mentioned in this draft
   represent only a few cases out of a large number of P2P
   applications; the INS system itself can support a variety of other
   applications.  Moreover, the set of APIs used in our integration
   examples is an experimental implementation, which is not a standard
   and is still under development.  The INS system described in this
   draft is only a preliminary functional set of an in-network storage
   infrastructure for applications.  It is designed to test the pros
   and cons of an INS system utilized by P2P applications and to verify
   the feasibility of utilizing an INS system to support content
   distribution.  We hope our examples will be useful for further
   standard protocol design; they are not intended as a solution for
   standardization.

2.  Terminology

   The following terms are used in this document.

2.1.  INS Server

   A server that simulates the DECADE server defined in
   [I-D.ietf-decade-arch].

2.2.  INS Client

   A client that simulates the DECADE client defined in
   [I-D.ietf-decade-arch].

2.3.  INS Operations

   A set of communications between INS server and INS client that
   simulate the DECADE protocols defined in [I-D.ietf-decade-arch].

2.4.  INS System

   A system comprising INS servers, INS clients, and INS operations.

2.5.  INS Client API

   A set of APIs that enable a native application client to utilize INS
   operations.

2.6.  INS-enabled Application Client

   An INS-enabled application client consists of an INS client and a
   native application client communicating through the INS client API.

2.7.  INS Service Provider

   An INS service provider deploys an INS system and provides INS
   service to applications/end users.  It can be an Internet Service
   Provider (ISP) or another party.

2.8.  INS Portal

   A simulated portal operated by an INS service provider to offer
   applications/end users a portal to access (e.g. upload, download)
   files stored in INS servers.

3.  INS Client API

   In order to simplify the integration of the INS system with P2P
   applications, we provide the INS client API to native P2P clients
   for performing INS operations such as data get and data put.  On top
   of the INS client API, a native P2P client can develop its own
   application-specific control and data distribution flows.

   We have currently developed the following five basic interfaces; a
   sketch of what such an API could look like in code follows the list.

   o Get_Object: Get a data object from an INS server with an
     authorized token.

   o Put_Object: Store a data object into an INS server with an
     authorized token.

   o Delete_Object: Delete a data object in an INS server explicitly
     with an authorized token.  Note that a data object can also be
     deleted implicitly by setting a Time-To-Live (TTL) value.

   o Status_Query: Query the current status of the application itself,
     including listing stored data objects, resource (e.g. storage
     space) usage, etc.

   o Generate_Token: Generate an authorization token.  The token can be
     passed from one INS client to other INS clients to authorize them
     to access data objects from its INS storage.  In our P2P live
     streaming example, the token is generated by the INS client.  In
     the example combining the ALTO and INS systems, the token is
     generated by the CP.
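   As a concrete illustration of these five interfaces, the following
   is a minimal Python sketch of an INS client API wrapper.  It is not
   the draft's actual implementation: the class and method names, the
   representation of requests as dictionaries, and the HMAC-based token
   format are all assumptions made for illustration.

   import hashlib
   import hmac
   import os

   class INSClient:
       """Illustrative wrapper for the five basic INS interfaces.

       The method names mirror the list above; the dictionary message
       encoding and the HMAC token scheme are assumptions, not the
       draft's protocol.
       """

       def __init__(self, server):
           self.server = server          # INS server address
           self.secret = os.urandom(16)  # per-client key for tokens

       def generate_token(self, object_name):
           # Generate_Token: a token another INS client can present
           # to access this client's stored data objects.
           return hmac.new(self.secret, object_name.encode(),
                           hashlib.sha256).hexdigest()

       def get_object(self, name, token):
           # Get_Object: fetch a data object with an authorized token.
           return {"op": "GET", "server": self.server,
                   "object": name, "token": token}

       def put_object(self, name, data, token, ttl=None):
           # Put_Object: store a data object; an optional TTL lets the
           # server delete it implicitly later.
           return {"op": "PUT", "server": self.server, "object": name,
                   "token": token, "ttl": ttl, "size": len(data)}

       def delete_object(self, name, token):
           # Delete_Object: explicit deletion with an authorized token.
           return {"op": "DELETE", "server": self.server,
                   "object": name, "token": token}

       def status_query(self):
           # Status_Query: list stored data objects and resource usage.
           return {"op": "STATUS", "server": self.server}

   A native P2P client would call these methods instead of opening
   direct peer-to-peer data connections; everything above the API (peer
   selection, piece scheduling) remains application specific.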
4.  Integration of P2P Live Streaming and INS System

   We integrate an INS client into a P2P live streaming client so that
   the P2P live streaming application can easily leverage the INS
   system for data transmission.

4.1.  Integration Architecture

   The architecture of the integration of the P2P live streaming
   application and the INS system is shown in Figure 1.

   +------------------+                     +------------------+
   |   INS-enabled    |                     |   INS-enabled    |
   |P2P Live Streaming|                     |P2P Live Streaming|
   |     Client       |                     |     Client       |
   |+----------------+|  +---------------+  |+----------------+|
   ||      INS       |+--|  INS Server   |--+|      INS       ||
   ||     Client     ||  +---------------+  ||     Client     ||
   ||                |+---------------------+|                ||
   |+------+---------+|                     |+------+---------+|
   |       | API      |                     |       | API      |
   |+------+---------+|                     |+------+---------+|
   || Native Client  |+---------------------+| Native Client  ||
   |+----------------+|                     |+----------------+|
   +------------------+                     +------------------+

                               Figure 1

   An INS-enabled P2P live streaming client uses the INS client to
   communicate with the INS server and to transmit data between itself
   and the INS server.  It remains compatible with the original P2P
   live streaming signaling messages such as peer discovery, data
   availability announcement, etc.

4.1.1.  Data Access Messages

   The INS client API is called whenever an INS-enabled P2P live
   streaming client wants to get data objects from (or put data objects
   into) the INS server.  Every data object transferred between the
   application client and the INS server goes through the INS client.
   A data object is the unit of data transfer between the INS server
   and the application client; its size can be customized by the
   application according to its performance requirements or
   sensitivity factors (e.g. low latency).

4.1.2.  Control Messages

   The control protocols used between the native P2P live streaming
   clients are modified BitTorrent-like protocols; please refer to [BT]
   for a detailed description of the BitTorrent protocols.  The native
   P2P live streaming client uses BitTorrent-like protocols for
   metadata exchange.  The INS-enabled P2P live streaming client adds
   one message on top of the BitTorrent-like protocols for token
   distribution.  By exchanging authorization tokens, the application
   clients can retrieve data objects from, and store data objects into,
   the INS servers.

4.1.3.  Object Naming Scheme

   We use the hash of a data object's content as the name of the data
   object.  In our example, the name of a data object is generated and
   distributed by the streaming source server.  An INS-enabled P2P live
   streaming client uses the name of the data object as the ID to
   request and retrieve data.
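   The naming scheme is simple enough to state in a few lines of code.
   The sketch below assumes SHA-256; the draft does not specify which
   hash function the source server uses.

   import hashlib

   def object_name(data: bytes) -> str:
       # Name a data object by the hash of its content (Section 4.1.3).
       # SHA-256 is an assumption; the draft leaves the hash
       # unspecified.
       return hashlib.sha256(data).hexdigest()

   # The streaming source server would split the stream into objects,
   # name each one, and distribute the names; clients then use a name
   # as the ID to request the corresponding object.
   chunk = b"example streaming data"
   print(object_name(chunk))

   A side benefit of content-based names is that they are
   self-certifying: a client can re-hash a received object and compare
   the result against the requested name to detect corruption.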
4.2.  Design Considerations

   One essential objective of the integration is to improve the
   performance of the P2P live streaming application.  To achieve this
   goal, we made some important design decisions that should also be
   helpful to future protocol development work.

4.2.1.  Improve Efficiency for Each Connection

   In a native P2P system, a peer can establish tens or hundreds of
   concurrent connections with other peers.  On the other hand, it may
   be expensive for an INS server to maintain that many connections for
   a large number of INS clients.  Typically, each INS server may only
   allocate and maintain M connections (M=1 in our examples) with each
   INS client at a time.  We therefore adopted the following designs to
   improve the efficiency of each connection between INS server and INS
   client and to achieve satisfactory download performance.

   o Batch Request: In order to fully utilize the connection bandwidth
     of the INS server and to reduce overhead, an application client
     may combine multiple requests into a single request to the INS
     server.

   o Larger Data Object: Data objects in existing P2P live streaming
     applications may be small, incurring large control overhead and
     low transport utilization.  A larger data object may be needed to
     utilize the data connection between INS server and INS client more
     efficiently.

4.2.2.  Reduce Control Latency

   In a native P2P system, a serving peer sends data objects to the
   requesting peer directly.  In an INS system, however, the serving
   client typically only replies with an authorization token to the
   requesting client, and the requesting client then uses this token to
   fetch the data objects from the INS server.  This process introduces
   additional control latency compared with the native P2P system,
   which is even more serious in latency-sensitive applications such as
   P2P live streaming.  We therefore need to consider how to reduce
   this control latency.

   o Range Token: One way to reduce control latency is to use a range
     token.  An INS-enabled P2P live streaming client may piggyback a
     range token when announcing data availability to its neighbor
     clients, indicating that all available data objects are accessible
     via this token.  Then, instead of requesting a specific data
     object and waiting for the response, a neighbor client can use the
     range token to access all available data objects in the INS
     server.  A sketch of such an announcement follows.
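   The sketch below illustrates the range-token idea under two
   assumptions that the draft does not pin down: availability is
   described by a contiguous range of object sequence numbers, and the
   token is an HMAC over that range.

   import hashlib
   import hmac

   def make_range_announcement(secret: bytes, server: str,
                               first: int, last: int) -> dict:
       # One token authorizes every object in [first, last], so a
       # neighbor can fetch any of them from the INS server without a
       # per-object request/response exchange with the serving peer.
       scope = ("%d-%d" % (first, last)).encode()
       token = hmac.new(secret, scope, hashlib.sha256).hexdigest()
       return {"type": "HAVE_RANGE", "range": (first, last),
               "server": server, "token": token}

   This trades token granularity for latency: a single token covers
   many objects, so access to one object in the range cannot be revoked
   without invalidating the whole token.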
5.  Integration of P2P File Sharing and INS System

   We integrate an INS client into Vuze, a BitTorrent-based file
   sharing application client, to leverage the INS system for data
   transmission.

5.1.  Integration Architecture

   The architecture of the integration of Vuze and the INS system is
   shown in Figure 2.

   +------------------+                     +------------------+
   |   INS-enabled    |                     |   INS-enabled    |
   |   Vuze Client    |                     |   Vuze Client    |
   |+----------------+|  +---------------+  |+----------------+|
   ||      INS       |+--|  INS Server   |--+|      INS       ||
   ||     Client     ||  +---------------+  ||     Client     ||
   ||                |+---------------------+|                ||
   |+------+---------+|                     |+------+---------+|
   |       | API      |                     |       | API      |
   |+------+---------+|                     |+------+---------+|
   || Native Client  |+---------------------+| Native Client  ||
   |+----------------+|                     |+----------------+|
   +------------------+                     +------------------+

                               Figure 2

   An INS-enabled Vuze client uses the INS client to communicate with
   the INS server and to transmit data between itself and the INS
   server.  It remains compatible with the original Vuze signaling
   messages such as peer discovery, data availability announcement,
   etc.

   In our design, the INS client inserts itself into the Vuze client by
   intercepting certain BitTorrent messages and adjusting their
   handling to send/receive data using INS operations instead.

   In our example, the file to be shared is divided into many objects,
   each named "filename_author_partn", where author is the original
   author of the file or the user who uploads it, and n is the sequence
   number of the object.  We will also support a hash-based naming
   scheme in the next version of our implementation.

5.2.  Message Flow

   For a better comparison, we first briefly show the diagram of the
   native Vuze message exchange, and then the corresponding diagram
   including the INS system.

   +--------+                              +--------+
   |  Vuze  |                              |  Vuze  |
   | Client1|                              | Client2|
   +--------+                              +--------+
       |                                        |
       |               HandShake                |
       |<-------------------------------------->|
       |           Azureus HandShake            |
       |<-------------------------------------->|
       |              BT_BitField               |
       |<-------------------------------------->|
       |              BT_Request                |
       |--------------------------------------->|
       |               BT_Piece                 |
       |<---------------------------------------|
       |                                        |

                               Figure 3

   In the above diagram, one can see that the key messages for data
   sharing in native Vuze are "BT_BitField", "BT_Request" and
   "BT_Piece".  Vuze client1 and client2 exchange "BT_BitField"
   messages to announce their available data objects to each other.  If
   Vuze client1 wants to get a certain data object from client2, it
   sends a "BT_Request" message to client2.  Vuze client2 then returns
   the requested data object to client1 in a "BT_Piece" message.
   Please refer to [Vuze] for a detailed description of the Vuze
   messages.

    ________    __________    __________    ________    _________
   |  Vuze  |  |   INS    |  |   INS    |  |  Vuze  |  |   INS   |
   | Client1|  | Client1  |  | Client2  |  | Client2|  |  Server |
   |________|  |__________|  |__________|  |________|  |_________|
       |            |             |            |            |
       |            |  HandShake  |            |            |
       |<-----------|-------------|----------->|            |
       |       Azureus HandShake  |            |            |
       |<-----------|-------------|----------->|            |
       |            | BT_BitField |            |            |
       |<-----------|-------------|----------->|            |
       |            | BT_Request  |            |            |
       |------------|------------>|            |            |
       |            |             |            |            |
       |            |  Redirect   |            |            |
       |            |<------------|            |            |
       |            |             |  Get Data  |            |
       |            |--------------------------------------->|
       |            |             | Data Object|            |
       |            |<---------------------------------------|
       |            |             |            |            |
       |  BT_Piece  |             |            |            |
       |<-----------|             |            |            |
       |            |             |            |            |

                               Figure 4

   o Vuze client1 sends a "BT_Request" message to Vuze client2 to
     request a data object as usual.

   o INS client2, embedded in Vuze client2, intercepts the incoming
     "BT_Request" message and replies with a "Redirect" message that
     includes the INS server's address and an authorization token (see
     the sketch after this list).

   o INS client1 receives the "Redirect" message and sends the INS
     message "Get Data" to the INS server to request the data object.

   o The INS server receives the "Get Data" message and, after the
     token check, sends the requested data object back to INS client1.

   o INS client1 encapsulates the received data object in a "BT_Piece"
     message and sends it to Vuze client1.
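   A hedged Python sketch of the interception logic in Figure 4
   follows.  The message representation, the field names and the
   fetch_from_server callback are hypothetical, and ins_client is
   assumed to offer the generate_token method from the Section 3
   sketch; the real implementation hooks into Vuze's message-handling
   plumbing instead.

   def on_incoming_message(msg, ins_client):
       # INS client2 side: intercept BT_Request and answer with a
       # Redirect carrying the INS server address and a token
       # authorizing access to the requested object.
       if msg["type"] == "BT_REQUEST":
           return {"type": "REDIRECT",
                   "server": ins_client.server,
                   "object": msg["object"],
                   "token": ins_client.generate_token(msg["object"])}
       return None  # other BitTorrent messages pass through unchanged

   def on_redirect(msg, fetch_from_server):
       # INS client1 side: follow the Redirect, fetch the object from
       # the INS server ("Get Data"), and hand the result to the
       # native client as an ordinary BT_Piece.
       data = fetch_from_server(msg["server"], msg["object"],
                                msg["token"])
       return {"type": "BT_PIECE", "object": msg["object"],
               "payload": data}

   Because the redirect is resolved inside the INS client, the native
   Vuze code on both sides still sees only standard BitTorrent
   messages.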
6.  Integration of ALTO and INS System for File Distribution

   The objective of the ALTO service is to give applications/end users
   guidance about which content servers to select in order to optimize
   downloading performance in an ISP network-friendly way (e.g.
   reducing bandwidth consumption).  The core component of the ALTO
   service is the ALTO server, which generates the guidance based on
   ISP network information.  The ALTO protocol conveys this guidance
   from the ALTO server to the applications/end users.  A detailed
   description of the ALTO protocol can be found in
   [I-D.ietf-alto-protocol].

   In this example, we integrate the ALTO and INS systems to build a
   content distribution platform for CPs.

6.1.  Architecture Design

   The integrated ALTO and INS system allows CPs to upload files to INS
   servers, and end users to download files from the optimal INS
   servers suggested by the ALTO service.  Specifically, three key
   components are developed, as follows.

   o INS Servers: operated by an INS service provider to store files
     from CPs.

   o INS Portal: operated by an INS service provider to 1) offer CPs a
     portal site to upload files; 2) provide the ALTO service to direct
     end users to the optimal INS servers to download files.

   o CP Portal: operated by a CP to publish the URLs of the uploaded
     files for end users to download.

   The architecture is shown below.

      __________                  __________
     | End User |                | End User |
     |__________|                |__________|
            \                        /
             \    _____________     /
              \  |  CP Portal  |   /
                 |_____________|
                        |
    ____________________|_____________________
   | INS           _____|______               |   +--------+
   | Service      |    INS     |              |   |  ALTO  |
   | Provider     |   Portal   |--------------+---| Server |
   |             /|____________|\             |   +--------+
   |            /    |      |    \            |
   |   ________/  ___|____  |_____\_______    |
   |  |  INS   | |  INS   | |  INS  | |  INS  | |
   |  | Server1| | Server2| | Server3| | Servern| |
   |  |________| |________| |_______| |_______| |
   |____________________________________________|

                               Figure 5

6.2.  CP Uploading Procedure

   The CP first uploads the files to INS servers, then obtains the URLs
   of the uploaded files and publishes the URLs on the CP portal for
   end users to download.  The flow is shown below.

    _________           _________           _________
   |         |         |   INS   |         |   INS   |
   |   CP    |         |  Portal |         |  Server |
   |_________|         |_________|         |_________|
        |                   |                   |
        |     HTTP_POST     |                   |
        |------------------>|                   |
        |                   |     Put Data      |
        |                   |------------------>|
        |                   |     Response      |
        |                   |<------------------|
        |       URLs        |                   |
        |<------------------|                   |
        |                   |                   |

                               Figure 6

   o The CP uploads the file to the INS portal site via an HTTP_POST
     message.

   o The INS portal distributes the file to the designated INS servers
     using the INS message "Put Data".  Note that the data distribution
     policies (e.g. how many copies of the data go to which INS
     servers) can be specified by the CP.  The designated INS servers
     can also be decided by the INS service provider based on policies
     or system status (e.g. INS server load).  These issues are out of
     the scope of this draft.

     In our example, the data stored in an INS server is divided into
     many objects, each named "filename_CPname_partn", where CPname is
     the name of the CP that uploads the file and n is the sequence
     number of the object (see the sketch after this list).  We will
     also support a hash-based naming scheme in the next version of our
     implementation.

   o When the file has been uploaded successfully, the CP portal lists
     the URLs of the file for end users to download.
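   Only the naming pattern in the sketch below comes from the draft;
   the 1 MiB object size, the function names and the example file are
   illustrative assumptions.

   def split_and_name(filename, cp_name, data, obj_size=1 << 20):
       # Divide an uploaded file into objects named
       # "filename_CPname_partn" (Section 6.2); the 1 MiB object size
       # is an assumption.
       parts = [data[i:i + obj_size]
                for i in range(0, len(data), obj_size)]
       return [("%s_%s_part%d" % (filename, cp_name, n), part)
               for n, part in enumerate(parts, start=1)]

   # The INS portal would issue one "Put Data" per named object,
   # replicated to the INS servers chosen by the provider's
   # distribution policy.
   for name, part in split_and_name("movie.mp4", "ExampleCP",
                                    b"x" * (3 << 20)):
       print(name, len(part))
       # movie.mp4_ExampleCP_part1 1048576 ... and so on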
6.3.  End User Downloading Procedure

   End users can visit the CP portal web pages and click the URLs to
   download the desired files.  The flow is shown below.

    _________   ____________    _________    _________    _________
   |         | |            |  |   INS   |  |  ALTO   |  |   INS   |
   | End User| | CP Portal  |  |  Portal |  |  Server |  |  Server |
   |_________| |____________|  |_________|  |_________|  |_________|
        |            |              |            |            |
        |  HTTP_Get  |              |            |            |
        |----------->|              |            |            |
        |   Token    |              |            |            |
        |<-----------|              |            |            |
        |            |              |            |            |
        |          HTTP_Get         |            |            |
        |-------------------------->|            |            |
        |            |              |  ALTO Req  |            |
        |            |              |----------->|            |
        |            |              |  ALTO Resp |            |
        |            |              |<-----------|            |
        | Optimal INS Server address|            |            |
        |<--------------------------|            |            |
        |            |              |            |            |
        |            |           Get Data        |            |
        |------------------------------------------------------>|
        |            |              |            |            |
        |            |          Data Object      |            |
        |<------------------------------------------------------|
        |            |              |            |            |

                               Figure 7

   o The end user visits a CP portal web page and finds the URLs of the
     desired file.

   o The end user clicks the hyperlink; the CP portal returns a token
     to the end user and redirects the end user to the INS portal via
     an HTTP_Get message.

   o The INS portal consults the ALTO server to find the optimal INS
     server storing the requested file (a sketch of this selection step
     follows the list).

   o The INS portal returns the optimal INS server's address to the end
     user.

   o The end user connects to the optimal INS server and, after the
     token check, gets the data via the INS message "Get Data".
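   The server-selection step can be pictured as a lookup in an ALTO
   cost map.  The sketch below is an assumption about how the INS
   portal could apply the guidance; the server names, cost values and
   map shape are invented for illustration.

   def choose_ins_server(alto_costs, candidate_servers):
       # The INS portal asks the ALTO server for the cost between the
       # end user and each INS server holding the file, then redirects
       # the user to the lowest-cost server.
       return min(candidate_servers, key=lambda s: alto_costs[s])

   # Example: three INS servers hold the file; ALTO reports the costs
   # from the end user's network location to each of them.
   costs = {"ins1.example.net": 10,
            "ins2.example.net": 3,
            "ins3.example.net": 7}
   best = choose_ins_server(costs, list(costs))
   print(best)  # ins2.example.net -- returned for the "Get Data" step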
7.  Test Environment and Settings

   We conducted tests to show the results of our integration examples.
   For a better performance comparison, we ran the experiments (i.e.
   INS-integrated P2P application vs. native P2P application) in the
   same environment with the same settings.

7.1.  Test Settings

   Our tests ran over a wide area and on diverse platforms, including a
   well-known commercial cloud platform, Amazon EC2 [EC2], and a
   well-known test bed, PlanetLab [PL].  The experimental settings are
   as follows.

   o Amazon EC2: We set up INS servers in the Amazon EC2 cloud in four
     regions around the world: US East, US West, Europe and Asia.

   o PlanetLab: We ran our P2P live streaming clients and P2P file
     sharing clients (both INS-enabled and native) on widely spread
     PlanetLab nodes.

   o Flash crowd: A flash crowd is an important scenario in a P2P live
     streaming system due to its live nature, i.e. a large number of
     users join the live channel during the startup period of an event.
     We therefore conducted experiments to test system performance
     under flash crowd in our P2P live streaming example.

   o Total supply bandwidth: The total supply bandwidth is the sum of
     the bandwidth capacity used to serve the streaming/file content,
     from both the servers (including source servers and INS servers)
     and the P2P clients.  For a fair comparison, we set the total
     supply bandwidth to be the same in the tests of the native and the
     INS-enabled P2P applications.

7.2.  Test Environment for P2P Live Streaming Example

   In the tests, several functional components ran on different
   platforms: INS servers, P2P live streaming clients (INS-enabled or
   native), a native P2P live streaming tracker, a streaming source
   server and a test controller, as shown in Figure 8.

          +------------+        +------------+
          |    INS     |--------|    INS     |
          |   Server   |        |   Server   |
          +-----+------+        +------+-----+    Amazon EC2
   _____________|______________________|___________________
                |                      |
          +-----+------+        +------+-----+
          | Streaming  |--------| Streaming  |
          |   Client   |\      /|   Client   |
          +------+-----+ \    / +------+-----+    PlanetLab
   ______________|________\__/________|____________________
                 |        /    \      |
    +------------+--+  +-+------+-+  ++-----------+
    |  Streaming    |  |          |  |    Test    |
    | Source Server |  | Tracker  |  | Controller |
    +---------------+  +----------+  +------------+  Yale Lab

                               Figure 8

7.2.1.  INS Server

   The INS servers ran on Amazon EC2.

7.2.2.  P2P Live Streaming Client

   Both the INS-enabled and the native P2P live streaming clients ran
   on PlanetLab.  Each INS-enabled P2P live streaming client connects
   to the closest INS server according to its geographic distance to
   the INS servers.  INS-enabled P2P live streaming clients use their
   INS servers to upload streaming content to neighbor clients.

7.2.3.  Tracker

   A native P2P live streaming tracker ran at Yale's laboratory and
   served both the INS-enabled and the native P2P live streaming
   clients during the test.

7.2.4.  Streaming Source Server

   A streaming source server ran at Yale's laboratory and served both
   the INS-enabled and the native P2P live streaming clients during the
   test.

7.2.5.  Test Controller

   The test controller is a manager that controls the behavior of all
   machines in both Amazon EC2 and PlanetLab during the test.

7.3.  Test Environment for P2P File Sharing Example

   The functional components include the Vuze client (with and without
   the INS client), INS servers, a native Vuze tracker, an HTTP server,
   a PlanetLab manager and a test controller, as shown in Figure 9.

     +-------------+          +-------------+
     |             |          |             |
     | INS Server  |   ...    | INS Server  |
     |             |          |             |
     +-------------+          +-------------+
           /    \                    \
          /      \                    \
         /        \                    \
   +-----------+ +-----------+ +-----------+   +-----------+
   |   Vuze    | |   Vuze    | |   Vuze    |   |           |
   |  Client   | |  Client   | |  Client   |---|  Tracker  |
   +-----------+ +-----------+ +-----------+   +-----------+
          \            |            /
           \           |           /
            \          |          /
   +-------------+ +-------------+ +-------------+
   |  PlanetLab  | |    Test     | |             |
   |   Manager   | |  Controller | | HTTP Server |
   +-------------+ +-------------+ +-------------+

                               Figure 9

7.3.1.  INS Server

   The INS servers ran on Amazon EC2.

7.3.2.  Vuze Client

   Both the INS-enabled and the native Vuze clients ran on PlanetLab.
   The INS client embedded in the Vuze client was loaded and run
   automatically after Vuze client startup.  The Vuze clients were
   divided into one seeding client and multiple leechers.  The seeding
   client ran on a Windows Server 2003 machine.

7.3.3.  Tracker

   The Vuze client provides tracker capability, so we did not deploy
   our own tracker.  The tracker was enabled when the BitTorrent file
   was made; the seeding client also acted as the tracker in our test.

7.3.4.  Test Controller

   Similar to the test controller in the P2P live streaming case, the
   test controller in the Vuze example controls the behavior of all
   machines in Amazon EC2 and PlanetLab.  For example, it lists all
   Vuze clients via a GUI and directs them to download a specific
   BitTorrent file.  It ran on the same Windows Server 2003 machine as
   the seeding client.
7.3.5.  HTTP Server

   The BitTorrent file was put on the HTTP server, and the leechers
   retrieved the BitTorrent file from the HTTP server after receiving
   the download command from the test controller.  We used Apache
   Tomcat as the HTTP server.

7.3.6.  PlanetLab Manager

   The PlanetLab manager is a tool developed by the University of
   Washington.  It presents a simple GUI to control PlanetLab nodes and
   perform common tasks such as: 1) selecting nodes for your slice; 2)
   choosing nodes for your experiment based on information about the
   nodes; 3) reliably deploying your experiment files; 4) executing
   commands on every node in parallel; 5) monitoring the progress of
   the experiment as a whole, as well as viewing console output from
   the nodes.

7.4.  Test Environment for Combined ALTO and INS File Distribution
      System

   For the integration of the ALTO and INS systems to support file
   distribution by CPs, we built 6 Linux virtual machines (VMs) running
   the Fedora 13 operating system.  The ALTO server, the INS portal,
   the CP portal and two INS servers ran on these VMs.  Each VM has a
   4G CPU, 2G of memory and 10G of disk.  The CP uploaded files to the
   INS servers via the INS portal.  End users can choose a desired file
   through the CP portal and download it from the optimal INS server
   chosen by the INS portal using the ALTO service.

8.  Performance Analysis

   We illustrate the performance gains for P2P applications and the
   more efficient content distribution achieved by effectively
   leveraging the INS system.  Note that for the example of integrating
   the ALTO and INS systems to support file distribution by CPs, we
   only show the feasibility of the integration, without comparing the
   performance of our implementation with other content distribution
   systems.

8.1.  Performance Metrics

8.1.1.  P2P Live Streaming

   To measure the performance of a P2P live streaming application, we
   mainly employed the following four metrics.

   o Startup delay: The duration from the moment a peer joins the
     streaming channel to the moment it starts to play.

   o Piece missed rate: The number of pieces a peer misses during
     playback over the total number of pieces.

   o Freeze times: The number of times a peer re-buffers during
     playback.

   o Average peer uploading rate: The average uploading bandwidth of a
     peer.

8.1.2.  P2P File Sharing

   To measure the performance of a P2P file sharing application, we
   mainly employed the following three metrics.

   o Download traffic: The total amount of traffic (MByte) representing
     the network downlink resource usage.

   o Upload traffic: The total amount of traffic (MByte) representing
     the network uplink resource usage.

   o Network resource efficiency: The ratio of the P2P system download
     rate to the total network (downlink) bandwidth.

8.1.3.  Integration of ALTO and INS System for File Distribution

   At this time we only consider some common capacity metrics for a
   content distribution system, i.e. the bandwidth usage of each INS
   server and the total number of online users supported by each INS
   server.  More comprehensive metrics will be provided after further
   experiments.
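   To make the two ratio metrics of Sections 8.1.1 and 8.1.2 concrete,
   here is a small Python sketch.  The input numbers are illustrative
   values chosen to echo the results reported in Section 8.2, not
   measured data.

   def piece_missed_rate(missed, total):
       # Pieces a peer misses during playback over total pieces
       # (Section 8.1.1).
       return missed / total

   def network_resource_efficiency(download_rate, downlink_bw):
       # P2P system download rate over total network (downlink)
       # bandwidth (Section 8.1.2); both values in the same unit.
       return download_rate / downlink_bw

   print("%.2f%%" % (100 * piece_missed_rate(2, 10000)))           # 0.02%
   print("%.0f%%" % (100 * network_resource_efficiency(880, 1000)))  # 88%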
8.2.  Results and Analysis

8.2.1.  P2P Live Streaming

   o Startup delay: In the test, INS-enabled P2P live streaming clients
     started up in around 35~40 seconds, and some of them in around 10
     seconds.  Native P2P live streaming clients started up in around
     110~120 seconds, and fewer than 20% of them started within 100
     seconds.

   o Piece missed rate: In the test, both the INS-enabled and the
     native P2P live streaming clients achieved a good piece missed
     rate.  Only about 0.02% of all pieces were missed in both cases.

   o Freeze times: In the test, native P2P live streaming clients
     suffered 40% more freezes than INS-enabled P2P live streaming
     clients.

   o Average peer uploading rate: In the test, according to our
     settings, INS-enabled P2P live streaming clients had no data
     upload in their "last mile" access network, while in the native
     P2P live streaming system most peers uploaded streaming data to
     serve other peers.  In other words, the INS system can shift
     uploading traffic from the clients' "last mile" to in-network
     devices, which saves a lot of expensive bandwidth on access links.

8.2.2.  P2P File Sharing

   The test result is shown in Figure 10.  We can see that there is
   very little upload traffic from the INS-enabled Vuze clients, while
   in the native Vuze case the upload traffic from the Vuze clients is
   the same as the download traffic.  Network resource usage in the
   "last mile" is thus reduced in the INS-enabled Vuze case.  This
   result also verifies that the INS system can shift uploading traffic
   from the clients' "last mile" to in-network devices.

   +--------------------+--------------------+--------------------+
   |                    |                    |                    |
   |                    |  Download Traffic  |   Upload Traffic   |
   |                    |                    |                    |
   +--------------------+--------------------+--------------------+
   |                    |                    |                    |
   |  INS-enabled Vuze  |       480MB        |        12MB        |
   |                    |                    |                    |
   +--------------------+--------------------+--------------------+
   |                    |                    |                    |
   |    Native Vuze     |       430MB        |       430MB        |
   |                    |                    |                    |
   +--------------------+--------------------+--------------------+

                               Figure 10

   We also found higher network resource efficiency in the INS-enabled
   Vuze case, where network resource efficiency is defined as the ratio
   of the P2P system download rate to the total network (downlink)
   bandwidth: the network resource efficiency of native Vuze is 65%,
   while that of INS-enabled Vuze is 88%.

8.2.3.  Integrated ALTO and INS System for File Distribution

   Each INS server can use at most 94% of its network interface card
   bandwidth (e.g. a server with a 1000 Mbps interface card can supply
   at most 940 Mbps).  Each INS server can support about 400
   simultaneous online users downloading files.

9.  Short Conclusion

   This document presents two examples of integrating an INS system
   into P2P applications (i.e. P2P live streaming and Vuze) by
   developing an INS client API for native P2P clients.  To better
   adopt the INS system, we identified some important design
   considerations, including the efficiency of INS connections and the
   control latency caused by INS operations, and developed mechanisms
   to address them.  We ran tests of our integration examples,
   deploying the INS servers on Amazon EC2 and the clients on
   PlanetLab.  Our test results show that integrating an INS system
   into native P2P applications can achieve performance gains for P2P
   applications and more network-efficient content distribution.

   Note that for the example of integrating the ALTO and INS systems to
   support file distribution by CPs, we only show the feasibility of
   the integration, without comparing it with other content
   distribution systems at this time.  More information will be
   provided after further experiments.
10.  Security Considerations

   A token can be passed from one INS client to other INS clients to
   authorize them to access data objects from its INS storage.
   Detailed mechanisms of token-based authentication and authorization
   can be found in [I-D.ietf-decade-arch].

11.  IANA Considerations

   This document does not have any IANA considerations.

12.  References

12.1.  Normative References

   [I-D.ietf-decade-arch]
              Alimi, R., Yang, Y., Rahman, A., Kutscher, D., and H.
              Liu, "DECADE Architecture", draft-ietf-decade-arch-04
              (work in progress), October 2011.

   [I-D.ietf-alto-protocol]
              Alimi, R., Penno, R., and Y. Yang, "ALTO Protocol",
              draft-ietf-alto-protocol-10 (work in progress),
              October 2011.

12.2.  Informative References

   [BT]       "http://www.bittorrent.org"

   [Vuze]     "http://www.vuze.com"

   [EC2]      "http://aws.amazon.com/ec2/"

   [PL]       "http://www.planet-lab.org/"

Authors' Addresses

   Ning Zong (editor)
   Huawei Technologies

   Email: zongning@huawei.com

   Xiaohui Chen
   Huawei Technologies

   Email: risker.chen@huawei.com

   Zhigang Huang
   Huawei Technologies

   Email: andy.huangzhigang@huawei.com

   Lijiang Chen
   HP Labs

   Email: lijiang.chen@hp.com

   Hongqiang Liu
   Yale University

   Email: hongqiang.liu@yale.edu