DECADE                                                      N. Zong, Ed.
Internet-Draft                                                   X. Chen
Intended status: Informational                                  Z. Huang
Expires: December 21, 2012                           Huawei Technologies
                                                                 L. Chen
                                                                 HP Labs
                                                                  H. Liu
                                                         Yale University
                                                            July 9, 2012

                 Integration Examples of DECADE System
               draft-ietf-decade-integration-example-05

Abstract

   The Decoupled Application Data Enroute (DECADE) system is an
   in-network storage infrastructure that is still under discussion in
   the IETF.  This document presents two detailed examples of how to
   integrate such an in-network storage infrastructure into peer-to-peer
   (P2P) applications to achieve more efficient content distribution,
   and into an Application Layer Traffic Optimization (ALTO) system to
   build a content distribution platform for Content Providers (CPs).

Status of this Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF).  Note that other groups may also distribute
   working documents as Internet-Drafts.  The list of current Internet-
   Drafts is at http://datatracker.ietf.org/drafts/current/.

   Internet-Drafts are draft documents valid for a maximum of six months
   and may be updated, replaced, or obsoleted by other documents at any
   time.  It is inappropriate to use Internet-Drafts as reference
   material or to cite them other than as "work in progress."

   This Internet-Draft will expire on December 21, 2012.

Copyright Notice

   Copyright (c) 2012 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with respect
   to this document.  Code Components extracted from this document must
   include Simplified BSD License text as described in Section 4.e of
   the Trust Legal Provisions and are provided without warranty as
   described in the Simplified BSD License.

   This document may contain material from IETF Documents or IETF
   Contributions published or made publicly available before November
   10, 2008.  The person(s) controlling the copyright in some of this
   material may not have granted the IETF Trust the right to allow
   modifications of such material outside the IETF Standards Process.
   Without obtaining an adequate license from the person(s) controlling
   the copyright in such materials, this document may not be modified
   outside the IETF Standards Process, and derivative works of it may
   not be created outside the IETF Standards Process, except to format
   it for publication as an RFC or to translate it into languages other
   than English.

Table of Contents

   1.  Introduction
   2.  Terminology
     2.1.  Native Application Client
     2.2.  INS Server
     2.3.  INS Client
     2.4.  INS Operations
     2.5.  INS System
     2.6.  INS Client API
     2.7.  INS-enabled Application Client
     2.8.  INS Service Provider
     2.9.  INS Portal
   3.  INS Client API
   4.  Integration of P2P File Sharing and INS System
     4.1.  Integration Architecture
       4.1.1.  Message Flow
     4.2.  Concluding Remarks
   5.  Integration of P2P Live Streaming and INS System
     5.1.  Integration Architecture
       5.1.1.  Data Access Messages
       5.1.2.  Control Messages
     5.2.  Design Considerations
       5.2.1.  Improve Efficiency for Each Connection
       5.2.2.  Reduce Control Latency
   6.  Integration of ALTO and INS System for File Distribution
     6.1.  Architecture
       6.1.1.  CP Uploading Procedure
       6.1.2.  End User Downloading Procedure
   7.  Test Environment and Settings
     7.1.  Test Settings
     7.2.  Test Environment for P2P Live Streaming Example
       7.2.1.  INS Server
       7.2.2.  P2P Live Streaming Client
       7.2.3.  Tracker
       7.2.4.  Streaming Source Server
       7.2.5.  Test Controller
     7.3.  Test Environment for P2P File Sharing Example
       7.3.1.  INS Server
       7.3.2.  Vuze Client
       7.3.3.  Tracker
       7.3.4.  Test Controller
       7.3.5.  HTTP Server
       7.3.6.  PlanetLab Manager
     7.4.  Test Environment for Combined ALTO and INS File
           Distribution System
   8.  Performance Analysis
     8.1.  Performance Metrics
       8.1.1.  P2P Live Streaming
       8.1.2.  P2P File Sharing
       8.1.3.  Integration of ALTO and INS System for File
               Distribution
     8.2.  Results and Analysis
       8.2.1.  P2P Live Streaming
       8.2.2.  P2P File Sharing
       8.2.3.  Integrated ALTO and INS System for File Distribution
   9.  Conclusion
   10. Security Considerations
   11. IANA Considerations
   12. References
     12.1. Normative References
     12.2. Informative References
   Authors' Addresses

1.  Introduction

   The Decoupled Application Data Enroute (DECADE) system is an
   in-network storage infrastructure that is still under discussion in
   the IETF.  We implemented such an in-network storage infrastructure
   to simulate the DECADE system, including DECADE servers, DECADE
   clients and DECADE protocols [I-D.ietf-decade-arch].  Throughout
   this draft we therefore use the terms in-network storage (INS)
   system, INS server, INS client, INS operations, etc.

   This draft introduces some examples of integrating the INS system
   with existing applications.
   In our example systems, the core components include the INS server
   and the INS-enabled application client.  An INS server stores data
   inside the network, and thereafter manages both the stored data and
   access to that data.  An INS-enabled application client, comprising
   an INS client and a native application client, uses a set of
   Application Programming Interfaces (APIs) to enable the native
   application client to utilize INS operations such as data get, data
   put, storage status query, etc.

   This draft presents two detailed examples of how to integrate the
   INS system into peer-to-peer (P2P) applications, i.e. live streaming
   and file sharing, as well as an example integration of Application
   Layer Traffic Optimization (ALTO) [I-D.ietf-alto-protocol] and the
   INS system to support file distribution.  We show how to extend
   native P2P applications by designing the INS-enabled P2P clients and
   describing the corresponding flows of INS-enabled data transmission.
   Then we introduce the functional architecture and working flows of
   the integrated ALTO and INS system for file distribution of Content
   Providers (CPs).  Finally we illustrate the performance gain to P2P
   applications and the more efficient content distribution achieved by
   effectively leveraging the INS system.

   Please note that the P2P applications mentioned in this draft
   represent only a few cases out of a large number of P2P
   applications, while the INS system itself can support a variety of
   other applications.  Moreover, the set of APIs used in our
   integration examples is an experimental implementation, which is not
   a standard and is still under development.  The INS system described
   in this draft is only a preliminary functional set of in-network
   storage infrastructure for applications.  It is designed to test the
   pros and cons of an INS system utilized by P2P applications and to
   verify the feasibility of utilizing an INS system to support content
   distribution.  We hope our examples will be useful for further
   standard protocol design, rather than present a solution for
   standardization purposes.

2.  Terminology

   The following terms will be used in this document.

2.1.  Native Application Client

   A client running original application operations, including control
   and data messages defined by applications.

2.2.  INS Server

   A server simulating the DECADE server defined in
   [I-D.ietf-decade-arch].

2.3.  INS Client

   A client simulating the DECADE client defined in
   [I-D.ietf-decade-arch].

2.4.  INS Operations

   A set of communications between INS server and INS client simulating
   the DECADE protocols defined in [I-D.ietf-decade-arch].

2.5.  INS System

   A system including INS servers, INS clients, and INS operations.

2.6.  INS Client API

   A set of APIs enabling a native application client to utilize INS
   operations.

2.7.  INS-enabled Application Client

   An INS-enabled application client includes an INS client and a
   native application client communicating through the INS client API.

2.8.  INS Service Provider

   An INS service provider deploys the INS system and provides INS
   service to applications/end users.  It can be an Internet Service
   Provider (ISP) or another party.

2.9.  INS Portal

   A functional entity operated by an INS service provider to offer
   applications/end users a point to access (e.g. upload, download)
   files stored in INS servers.

3.  INS Client API

   In order to simplify the integration of the INS system with P2P
   applications, we provide the INS client API to native P2P clients
   for accomplishing INS operations such as data get, data put, etc.
   On top of the INS client API, a native P2P client can develop its
   own application-specific control and data distribution flows.

   We have currently developed the following five basic interfaces.

   o Generate_Token: Generate an authorization token.
   An authorization token is usually generated by an entity that is
   trusted by the INS client sharing its data, and is passed to the
   other INS clients for data access control.  Please see
   [I-D.ietf-decade-arch] for more details.

   o Get_Object: Get a data object from an INS server with an
   authorization token.

   o Put_Object: Store a data object into an INS server with an
   authorization token.

   o Delete_Object: Delete a data object in an INS server explicitly
   with an authorization token.  Note that a data object can also be
   deleted implicitly by setting a Time-To-Live (TTL) value.

   o Status_Query: Query the current status of an application itself,
   including listing stored data objects, resource (e.g. storage space)
   usage, etc.

4.  Integration of P2P File Sharing and INS System

   We integrate an INS client into Vuze, a BitTorrent-based file
   sharing application [VzApp].

4.1.  Integration Architecture

   The architecture of the integration of Vuze and the INS system is
   shown in Figure 1.  An INS-enabled Vuze client uses the INS client
   to communicate with the INS server and to transmit data between
   itself and the INS server.  It is also compatible with the original
   Vuze signaling messages such as peer discovery, data availability
   announcement, etc.  Note that the same architecture applies to the
   other example, the integration of P2P live streaming and the INS
   system.
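   The five interfaces of Section 3 can be illustrated with a small,
   self-contained sketch.  The following Python mock is an assumption
   made for clarity, not the experimental implementation described in
   this draft: the class names, method signatures, TTL handling, and
   the in-memory stand-in for an INS server are all invented here;
   only the five operation names come from Section 3.

   ```python
   # Hypothetical in-memory sketch of the five INS client API calls.
   # Only the operation names are taken from the draft; everything
   # else (classes, signatures, token scheme) is an assumption.
   import secrets
   import time


   class FakeINSServer:
       """Stand-in for an INS server: named objects plus token checks."""

       def __init__(self):
           self.objects = {}        # name -> (data, expiry_time or None)
           self.valid_tokens = set()

       def check(self, token):
           if token not in self.valid_tokens:
               raise PermissionError("invalid authorization token")


   class INSClient:
       """Sketch of the INS client API (operation names per Section 3)."""

       def __init__(self, server):
           self.server = server

       def Generate_Token(self):
           # In the draft a trusted entity generates the token; here
           # the fake server simply records it as valid.
           token = secrets.token_hex(8)
           self.server.valid_tokens.add(token)
           return token

       def Put_Object(self, name, data, token, ttl=None):
           # Store a data object; an optional TTL allows implicit deletion.
           self.server.check(token)
           expiry = time.time() + ttl if ttl is not None else None
           self.server.objects[name] = (data, expiry)

       def Get_Object(self, name, token):
           # Fetch a data object after the token check.
           self.server.check(token)
           data, expiry = self.server.objects[name]
           if expiry is not None and time.time() > expiry:
               del self.server.objects[name]   # TTL elapsed: implicit delete
               raise KeyError(name)
           return data

       def Delete_Object(self, name, token):
           # Explicit deletion with an authorization token.
           self.server.check(token)
           del self.server.objects[name]

       def Status_Query(self, token):
           # List stored objects and resource (storage space) usage.
           self.server.check(token)
           return {
               "objects": sorted(self.server.objects),
               "bytes_used": sum(len(d) for d, _ in
                                 self.server.objects.values()),
           }


   server = FakeINSServer()
   client = INSClient(server)
   t = client.Generate_Token()
   client.Put_Object("file_author_part1", b"abc", t)
   print(client.Status_Query(t))
   ```

   In a sharing scenario, the client holding the data would call
   Generate_Token and Put_Object, pass the token to a peer, and the
   peer would call Get_Object against the same INS server, matching
   the message flows described in the following sections.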
   +------------------+                       +------------------+
   |   INS-enabled    |                       |   INS-enabled    |
   |      Client      |                       |      Client      |
   |+----------------+|   +---------------+   |+----------------+|
   ||      INS       |+---|  INS Server   |---+|      INS       ||
   ||     Client     ||   +---------------+   ||     Client     ||
   ||                |+-----------------------+|                ||
   |+------+---------+|                       |+------+---------+|
   |  API  |          |                       |  API  |          |
   |+------+---------+|                       |+------+---------+|
   ||  Native Client |+-----------------------+|  Native Client ||
   |+----------------+|                       |+----------------+|
   +------------------+                       +------------------+

                               Figure 1

4.1.1.  Message Flow

   For a better comparison, we first show the diagram of the native
   Vuze message exchange below, and then show the corresponding diagram
   including the INS system.

   +--------+                              +--------+
   |  Vuze  |                              |  Vuze  |
   | Client1|                              | Client2|
   +--------+                              +--------+
        |                                       |
        |               HandShake               |
        |<------------------------------------->|
        |              BT_BitField              |
        |<------------------------------------->|
        |              BT_Request               |
        |-------------------------------------->|
        |               BT_Piece                |
        |<--------------------------------------|
        |                                       |

                               Figure 2

   In the above diagram, one can see that the key messages for data
   sharing in native Vuze are "BT_BitField", "BT_Request" and
   "BT_Piece".  Vuze client1 and client2 exchange "BT_BitField"
   messages to announce their available data objects to each other.  If
   Vuze client1 wants to get a certain data object from client2, it
   sends a "BT_Request" message to client2.  Vuze client2 then returns
   the requested data object to client1 in a "BT_Piece" message.
   Please refer to [VzMsg] for the detailed description of Vuze
   messages.

   As shown in the below diagram, in the integration of Vuze and the
   INS system, the INS client inserts itself into the Vuze client by
   intercepting certain Vuze messages and adjusting their handling to
   send/receive data using the INS operations instead.
    ________     __________     __________     ________     _________
   | Vuze   |   |   INS    |   |   INS    |   | Vuze   |   |   INS   |
   | Client1|   | Client1  |   | Client2  |   | Client2|   | Server  |
   |________|   |__________|   |__________|   |________|   |_________|
       |             |              |             |             |
       |             |  HandShake   |             |             |
       |<------------|--------------|------------>|             |
       |             | BT_BitField  |             |             |
       |<------------|--------------|------------>|             |
       |             | BT_Request   |             |             |
       |-------------|------------->|             |             |
       |             |              |             |             |
       |             |   Redirect   |             |             |
       |             |<-------------|             |             |
       |             |              |   Get Data  |             |
       |             |---------------------------------------->|
       |             |              | Data Object |             |
       |             |<----------------------------------------|
       |             |              |             |             |
       |   BT_Piece  |              |             |             |
       |<------------|              |             |             |
       |             |              |             |             |

                               Figure 3

   o Vuze client1 sends a "BT_Request" message to Vuze client2 to
   request a data object as usual.

   o INS client2, embedded in Vuze client2, intercepts the incoming
   "BT_Request" message and then replies with a "Redirect" message,
   which includes the INS server's address and an authorization token.

   o INS client1 receives the "Redirect" message and then sends an INS
   message "Get Data" to the INS server to request the data object.

   o The INS server receives the "Get Data" message and, after the
   token check, sends the requested data object back to INS client1.

   o INS client1 encapsulates the received data object into a
   "BT_Piece" message and sends it to Vuze client1.

   In this example, the file to be shared is divided into many objects,
   with each object named "filename_author_partn", where author is the
   original author of the file or the user who uploaded the file, and n
   is the sequence number of the object.

4.2.  Concluding Remarks

   In this example, we feel that the INS system can effectively improve
   file sharing efficiency for the following reasons: 1) utilizing
   in-network storage as the data location of the peer achieves a
   statistical multiplexing gain in data sharing; 2) a shorter data
   delivery path based on in-network storage can not only improve
   application performance, but also avoid potential bottlenecks in the
   ISP network.

5.  Integration of P2P Live Streaming and INS System

   We integrate an INS client into a P2P live streaming application.

5.1.  Integration Architecture

   The architecture of the integration of the P2P live streaming
   application and the INS system is shown in Figure 1.  An INS-enabled
   P2P live streaming client uses the INS client to communicate with
   the INS server and to transmit data between itself and the INS
   server.

5.1.1.  Data Access Messages

   The INS client API is called whenever an INS-enabled P2P live
   streaming client wants to get data objects from (or put data objects
   into) the INS server.  Each data object transferred between the
   application client and the INS server should go through the INS
   client.  Each data object can be a variable-sized block to cater to
   different application requirements (e.g. latency and throughput).

   We use the hash of a data object's content as the name of the data
   object.  The name of a data object is generated and distributed by
   the streaming source server in this example.

5.1.2.  Control Messages

   We used a lab-based P2P live streaming system for research purposes
   only.  The basic control messages between the native P2P live
   streaming clients are similar to the Vuze control protocols in the
   sense that data piece information is exchanged between the peers.
   The INS-enabled P2P live streaming client adds an additional control
   message for authorization token distribution, shown as the line
   between the INS clients in Figure 1.  In this example, the
   authorization token is generated by the INS client that is sharing
   its data.  By exchanging authorization tokens, the application
   clients can retrieve the data objects from the INS servers.

5.2.  Design Considerations

   One essential objective of the integration is to improve the
   performance of the P2P live streaming application.  In order to
   achieve this goal, we have some important design considerations that
   should be helpful to future protocol development work.

5.2.1.  Improve Efficiency for Each Connection

   In a native P2P system, a peer can establish tens or hundreds of
   concurrent connections with other peers.  On the other hand, it may
   be expensive for an INS server to maintain many connections for a
   large number of INS clients.  Typically, each INS server may only
   allocate and maintain M connections (in our examples, M=1) with each
   INS client at a time.  Therefore, we have the following design
   considerations to improve the efficiency of each connection between
   INS server and INS client in order to achieve satisfying data
   downloading performance.

   o Batch Request: In order to fully utilize the connection bandwidth
   of the INS server and reduce the overhead, an application client may
   request a batch of data objects in a single request.

   o Larger Data Object: The data object size in existing P2P live
   streaming applications may be small and thus incur large control
   overhead and low transport utilization.  A larger data object may be
   needed to utilize the data connection between INS server and INS
   client more efficiently.

5.2.2.  Reduce Control Latency

   In a native P2P system, a serving peer sends data objects to the
   requesting peer directly.
   Nevertheless, in an INS system, the serving client typically only
   replies with an authorization token to the requesting client, and
   then the requesting client uses this token to fetch the data objects
   from the INS server.  This process introduces additional control
   latency compared with the native P2P system.  This is even more
   serious in latency-sensitive applications such as P2P live
   streaming.  Therefore, we need to consider how to reduce such
   control latency.

   o Range Token: One way to reduce control latency is to use a range
   token.  An INS-enabled P2P live streaming client may piggyback a
   range token when announcing data availability to other peers,
   indicating that all available data objects are accessible via this
   range token.  Then, instead of requesting a specific data object and
   waiting for the response, a peer can use this range token to access
   all the available data objects in the INS server that it is
   permitted to access.

6.  Integration of ALTO and INS System for File Distribution

   The objective of the ALTO service is to give guidance to
   applications about which content servers to select in order to
   improve content distribution performance in an ISP-friendly way
   (e.g. reducing network usage within the ISP).  The core component of
   the ALTO service is the ALTO server, which generates the guidance
   based on ISP network information.  The ALTO protocol conveys such
   guidance from the ALTO server to the applications.  A detailed
   description of the ALTO protocol can be found in
   [I-D.ietf-alto-protocol].

   In this example, we integrate ALTO and the INS system to build a
   content distribution platform for CPs.

6.1.  Architecture

   The integrated system allows CPs to upload files to INS servers, and
   guides end users to download files from the INS servers suggested by
   the ALTO service.  The architecture diagram is shown below.  Note
   that this diagram shows only a basic set of connections between the
   components.  Some redirections, such as the INS portal redirecting
   end users to the INS servers, can also happen between the
   components.

     ________________________________________________
    |  ________    ________    ________    ________  |
    | |  INS   |  |  INS   |  |  INS   |  |  INS   | |
    | |Server1 |  |Server2 |  |Server3 |  |Servern | |
    | |________|  |________|  |________|  |________| |
    |      \          |           |          /       |
    |       \        _|___________|_        /        |   +--------+
    |  INS   \      |      INS      |      /         |   |  ALTO  |
    | Service       |     Portal    |----------------+---| Server |
    | Provider      |_______________|                |   +--------+
    |____________________|___________________________|
                         |
                  _______|______
                 |  CP Portal   |
                 |______________|
                   /          \
        __________/            \__________
       | End User |            | End User |
       |__________|            |__________|

                               Figure 4

   Four key components are defined as follows.

   o INS Servers: operated by an INS service provider to store files
   from CPs.

   o INS Portal: operated by an INS service provider to 1) upload files
   from CPs to the dedicated INS servers; 2) direct end users to the
   INS servers suggested by the ALTO service to download files.

   o CP Portal: operated by a CP to publish the URLs of the uploaded
   files for end user downloading.

   o End User: End users use a standard web browser with INS extensions
   such that the INS client APIs can be called for fetching the data
   from INS servers.

6.1.1.  CP Uploading Procedure

   The CP uploads the files into INS servers first, then gets the URLs
   of the uploaded files and publishes the URLs on the CP portal for
   end user downloading.  The flow is shown below.
    _________           _________           _________
   |         |         |   INS   |         |   INS   |
   |   CP    |         | Portal  |         | Server  |
   |_________|         |_________|         |_________|
        |                   |                   |
        |     HTTP POST     |                   |
        |------------------>|                   |
        |                   |     Put Data      |
        |                   |------------------>|
        |                   |     Response      |
        |                   |<------------------|
        |       URLs        |                   |
        |<------------------|                   |
        |                   |                   |

                               Figure 5

   o The CP uploads the file to the INS portal site via an HTTP POST
   message.

   o The INS portal distributes the file to the dedicated INS servers
   using the INS message "Put Data".  Note that the data distribution
   policies (e.g. how many copies of the data go to which INS servers)
   can be specified by the CP.  The dedicated INS servers can also be
   decided by the INS service provider based on policies or system
   status (e.g. INS server load).  These issues are out of the scope of
   this draft.

   In this example, the data stored in the INS server is divided into
   many objects, with each object named "filename_CPname_partn", where
   CPname is the name of the CP who uploads the file, and n is the
   sequence number of the object.

   o When the file is uploaded successfully, the CP portal will list
   the URLs of the file for end user downloading.

6.1.2.  End User Downloading Procedure

   End users can visit the CP portal web pages and click the URLs to
   download the desired files.  The flow is shown below.
    _________    ____________    _________    _________    _________
   |         |  |            |  |   INS   |  |  ALTO   |  |   INS   |
   | End User|  | CP Portal  |  | Portal  |  | Server  |  | Server  |
   |_________|  |____________|  |_________|  |_________|  |_________|
        |             |              |            |            |
        |  HTTP Get   |              |            |            |
        |------------>|              |            |            |
        |    Token    |              |            |            |
        |<------------|              |            |            |
        |             |              |            |            |
        |          HTTP Get          |            |            |
        |--------------------------->|            |            |
        |             |              |  ALTO Req  |            |
        |             |              |----------->|            |
        |             |              |  ALTO Resp |            |
        |             |              |<-----------|            |
        | Optimal INS Server address |            |            |
        |<---------------------------|            |            |
        |             |              |            |            |
        |             |          Get Data         |            |
        |------------------------------------------------------>|
        |             |              |            |            |
        |             |         Data Object       |            |
        |<------------------------------------------------------|
        |             |              |            |            |

                               Figure 6

   o The end user visits a CP portal web page and finds the URLs for
   the desired file.

   o When the end user clicks the hyperlink, the CP portal returns an
   authorization token to the end user and redirects the end user to
   the INS portal via an HTTP Get message.

   o The INS portal communicates with the ALTO server to get the
   suggested INS server storing the requested file.  In this example,
   the ALTO server simply selects the INS server within the same IP
   subnet as the end user.  Please see [I-D.ietf-alto-protocol] for
   more details on how ALTO selects content servers.

   o The INS portal returns the INS server address suggested by the
   ALTO service to the end user.

   o The end user connects to the suggested INS server and, after the
   token check, gets the data via the INS message "Get Data".

7.  Test Environment and Settings

   We conducted some tests to show the results of our integration
   examples.  For a better performance comparison, we ran the
   experiments (i.e. INS-integrated P2P application vs. native P2P
   application) in the same environment using the same settings.

7.1.  Test Settings

   Our tests ran over a wide-spread area and on diverse platforms,
   including a well-known commercial platform, Amazon EC2 [EC2], and a
   well-known test-bed, PlanetLab [PL].  The experimental settings are
   as follows.

   o Amazon EC2: We set up INS servers on the Amazon EC2 platform in
   four regions around the world: US east, US west, Europe and Asia.

   o PlanetLab: We ran our P2P live streaming clients and P2P file
   sharing clients (both INS-enabled and native clients) on PlanetLab
   over a wide-spread area.

   o Flash-crowd: Flash-crowd is an important scenario in a P2P live
   streaming system due to its live nature, i.e. a large number of
   users join the live channel during the startup period of the event.
   Therefore, we conducted experiments to test the system performance
   under flash-crowd conditions in our P2P live streaming example.

   o Total supply bandwidth: Total supply bandwidth is the sum of the
   bandwidth capacity used to serve the streaming/file content, from
   both the servers (including source servers and INS servers) and the
   P2P clients.  For a fair comparison, we set the total supply
   bandwidth to be the same in the tests of both native and INS-enabled
   P2P applications.

7.2.  Test Environment for P2P Live Streaming Example

   In the tests, we have several functional components running on
   different platforms, including INS servers, P2P live streaming
   clients (INS-enabled or native), a native P2P live streaming
   tracker, a streaming source server and a test controller, as shown
   in the figure below.
657 +------------+ +------------+ 658 | INS |----| INS | 659 | Server | | Server | 660 +-----+------+ +------+-----+ Amazon EC2 661 ______________________|__________________|_________________ 662 | | 663 +-----+------+ +------+-----+ 664 | Streaming |----| Streaming | 665 | Client |\ /| Client | 666 +------+-----+ \/ +------+-----+ PlanetLab 667 _______________________|_______/\________|_________________ 668 | / \ | Yale Lab 669 +--------------+ +------+-----+ +------+-----+ 670 | Streaming | | Tracker | | Test | 671 | Source Server| | | | Controller | 672 +--------------+ +------------+ +------------+ 674 Figure 7 676 7.2.1. INS Server 678 INS servers ran on Amazon EC2. 680 7.2.2. P2P Live Streaming Client 682 Both INS-enabled and native P2P live streaming clients ran on 683 PlanetLab. Each INS-enabled P2P live streaming client connects to 684 the dedicated INS server. In this example, we decide which client 685 connects to which server based on the IP address. So, it is roughly 686 region-based and still coarse. Each INS-enabled P2P live streaming 687 client uses its INS server to share streaming content to other peers. 689 7.2.3. Tracker 691 A native P2P live streaming tracker ran at Yale's laboratory and 692 served both INS-enabled and native P2P live streaming clients during 693 the test. 695 7.2.4. Streaming Source Server 697 A streaming source server ran at Yale's laboratory and served both 698 INS-enabled and native P2P live streaming clients during the test. 700 7.2.5. Test Controller 702 Test controller is a manager running at Yale's laboratory to control 703 all machines' behaviors in both Amazon EC2 and PlanetLab during the 704 test. 706 7.3. Test Environment for P2P File Sharing Example 708 Functional components include Vuze client (with and without INS 709 client), INS servers, native Vuze tracker, HTTP server, PlanetLab 710 manager and test controller, as shown in below figure. 
711 +-----------+ +-----------+ 712 | INS |----| INS | 713 | Server | | Server | 714 +-----+-----+ +-----+-----+ Amazon EC2 715 ______________________|________________|_________________ 716 | | 717 +-----+-----+ +-----+-----+ 718 | Vuze |----| Vuze | 719 | Client |\ /| Client | 720 +-----+-----+ \/ +-----+-----+ PlanetLab 721 ______________________|_______/\_______|_________________ 722 | / \ | Yale Lab 723 +-------------+ +------+-----+ +-----+------+ +-----------+ 724 | HTTP Server | | Tracker | | Test | | PlanetLab | 725 | | | | | Controller | | Manager | 726 +-------------+ +------------+ +------------+ +-----------+ 728 Figure 8 730 7.3.1. INS Server 732 INS servers ran on Amazon EC2. 734 7.3.2. Vuze Client 736 Vuze clients were divided into one seeding client and multiple 737 leechers. The seeding client ran on a Windows 2003 server at Yale's 738 laboratory. Both INS-enabled and native Vuze clients (leechers) ran 739 on PlanetLab. The INS client embedded in the Vuze client was automatically 740 loaded and ran after the Vuze client started up. 742 7.3.3. Tracker 744 The Vuze software includes a tracker implementation, so we did not deploy 745 our own tracker. The tracker ran at Yale's laboratory and was enabled 746 when the BitTorrent file was created. It ran on the same Windows 2003 747 server as the seeding client. 749 7.3.4. Test Controller 751 Similar to the test controller in the P2P live streaming case, the test 752 controller in the Vuze example also controls the behavior of all machines 753 in Amazon EC2 and PlanetLab. For example, it lists all the Vuze 754 clients via a GUI and directs them to download a specific BitTorrent 755 file. The test controller ran on the same Windows 2003 server as the 756 seeding client. 758 7.3.5. HTTP Server 760 The BitTorrent file was placed on the HTTP server, and the leechers retrieved 761 it from the HTTP server after receiving the 762 download command from the test controller. We used Apache Tomcat 763 as the HTTP server. 765 7.3.6.
PlanetLab Manager 767 The PlanetLab manager is a tool developed by the University of Washington. 768 It presents a simple GUI to control PlanetLab nodes and perform 769 common tasks such as: 1) selecting nodes for your slice; 2) choosing 770 nodes for your experiment based on information about the nodes; 771 3) reliably deploying your experiment files; 4) executing commands on 772 every node in parallel; and 5) monitoring the progress of the experiment 773 as a whole, as well as viewing console output from the nodes. 775 7.4. Test Environment for Combined ALTO and INS File Distribution 776 System 778 For the integration of the ALTO and INS systems to support file 779 distribution for CPs, we built 6 Linux virtual machines (VMs) running the 780 Fedora 13 operating system. The ALTO server, INS portal, CP portal and 781 two INS servers ran on these VMs. Each VM was allocated 4 cores 782 of a 16-core 1 GHz CPU, 2 GB of memory and 10 GB of disk 783 space. The CP uploaded files to the INS servers via the INS portal. End users 784 can choose a desired file through the CP portal and download it from 785 the optimal INS server, chosen by the INS portal using the ALTO service. 787 8. Performance Analysis 789 We illustrate the performance gains to P2P applications and the more 790 efficient content distribution achieved by effectively leveraging the INS 791 system. For the example of integrating the ALTO and INS systems to 792 support file distribution for CPs, we show the feasibility of such 793 integration. 795 8.1. Performance Metrics 797 8.1.1. P2P Live Streaming 799 To measure the performance of a P2P live streaming application, we 800 mainly employed the following four metrics. 802 o Startup delay: The duration from the moment a peer joins the streaming channel 803 to the moment it starts to play. 805 o Piece missed rate: The number of pieces a peer misses during playback, 806 divided by the total number of pieces. 808 o Freeze times: The number of times a peer re-buffers during playback.
810 o Average peer uploading rate: The average uploading bandwidth of a peer. 812 8.1.2. P2P File Sharing 814 To measure the performance of a P2P file sharing application, we 815 mainly employed the following three metrics. 817 o Download traffic: The total amount of download traffic, representing the 818 network downlink resource usage. 820 o Upload traffic: The total amount of upload traffic, representing the 821 network uplink resource usage. 823 o Network resource efficiency: The ratio of the P2P system download rate 824 to the total network (downlink) bandwidth. 826 8.1.3. Integration of ALTO and INS System for File Distribution 828 We consider some common capacity metrics for a content distribution 829 system, i.e., the bandwidth usage of each INS server and the total 830 number of online users supported by each INS server. 832 8.2. Results and Analysis 834 8.2.1. P2P Live Streaming 836 o Startup delay: In the test, INS-enabled P2P live streaming clients 837 started up in around 35 to 40 seconds, and some of them started up in around 10 838 seconds. Native P2P live streaming clients started up in around 110 to 120 839 seconds, and fewer than 20% of them started up within 100 seconds. 841 o Piece missed rate: In the test, both INS-enabled P2P live streaming 842 clients and native P2P live streaming clients achieved good 843 piece missed rate performance: only about 0.02% of all pieces 844 were missed in both cases. 846 o Freeze times: In the test, native P2P live streaming clients 847 experienced 40% more freezes than INS-enabled P2P live streaming 848 clients. 850 o Average peer uploading rate: In the test, according to our 851 settings, INS-enabled P2P live streaming clients uploaded no data 852 in their "last mile" access network, while in the native P2P live 853 streaming system, most peers uploaded streaming data to serve 854 other peers.
In other words, the INS system can shift uploading traffic 855 from clients' "last mile" to in-network devices, which saves a large amount of 856 expensive bandwidth on access links. 858 8.2.2. P2P File Sharing 860 The test result is illustrated in the figure below. We can see that 861 there is very little upload traffic from the INS-enabled Vuze clients, 862 while in the native Vuze case, the upload traffic from the Vuze clients 863 equals the download traffic. Network resource usage is thus 864 reduced in the "last mile" in the INS-enabled Vuze case. This result 865 also verifies that the INS system can shift uploading traffic from 866 clients' "last mile" to in-network devices. Note that because not 867 all clients finished the downloading process, the total 868 download traffic differs between the independent tests, as shown in the figure below. 869 +--------------------+--------------------+--------------------+ 870 | | | | 871 | | Download Traffic | Upload Traffic | 872 | | | | 873 +--------------------+--------------------+--------------------+ 874 | | | | 875 | INS-Enabled Vuze | 480MB | 12MB | 876 | | | | 877 +--------------------+--------------------+--------------------+ 878 | | | | 879 | Native Vuze | 430MB | 430MB | 880 | | | | 881 +--------------------+--------------------+--------------------+ 883 Figure 9 885 We also found higher network resource efficiency in the INS-enabled 886 Vuze case, where network resource efficiency is defined as the 887 ratio of the P2P system download rate to the total network (downlink) 888 bandwidth. In the test, the network resource efficiency 889 of native Vuze was 65%, while that of INS-enabled Vuze was 88%. A 890 possible reason for the higher network resource efficiency is that 891 the INS server can always serve content to the peers, while in 892 traditional P2P applications, a peer has to finish downloading content 893 before sharing it with other peers. 895 8.2.3.
Integrated ALTO and INS System for File Distribution 897 Each INS server can utilize at most 94% of the bandwidth of its 898 network interface card (NIC); e.g., a server with a 1 Gbps NIC can supply 899 at most 940 Mbps. We ran tests with 100 Mbps and 1 Gbps NICs 900 and obtained the same result of 94% bandwidth usage. 902 Each INS server can support about 400 online users downloading files 903 simultaneously. When we tried 450 concurrent online 904 users, 50 users did not start downloading on time, but waited for the 905 other 400 users to finish downloading. 907 9. Conclusion 909 This document presents two examples of integrating the INS system into 910 P2P applications (i.e., P2P live streaming and Vuze) by developing 911 an INS client API for native P2P clients. In adopting the INS system, 912 we identified some important design considerations, including INS 913 connection efficiency and control latency caused by INS operations, and 914 developed mechanisms to address them. We ran tests of 915 our integration examples, deploying INS servers on Amazon EC2 and clients on 916 PlanetLab, respectively. It can be 917 observed from our test results that integrating the INS system into 918 native P2P applications can improve P2P application performance 919 and achieve more network-efficient content distribution. For 920 the example of integrating the ALTO and INS systems to support file 921 distribution for CPs, we have shown the feasibility of such 922 integration. 924 10. Security Considerations 926 An authorization token can be passed from one INS client to other 927 INS clients to authorize them to access data objects 928 from its INS storage. Detailed mechanisms of token-based 929 authentication and authorization can be found in [I-D.ietf-decade- 930 arch]. 932 11. IANA Considerations 934 This document does not have any IANA considerations. 936 12. References 937 12.1.
Normative References 939 [I-D.ietf-decade-arch] Alimi, R., Yang, Y., Rahman, A., Kutscher, D., 940 and H. Liu, "DECADE Architecture", draft-ietf-decade-arch-07 (work in 941 progress), June 2012. 943 [I-D.ietf-alto-protocol] Alimi, R., Penno, R., and Y. Yang, "ALTO 944 Protocol", draft-ietf-alto-protocol-11 (work in progress), March 945 2012. 947 12.2. Informative References 949 [VzApp] "http://www.vuze.com" 951 [VzMsg] "http://wiki.vuze.com/w/Azureus_messaging_protocol" 953 [EC2] "http://aws.amazon.com/ec2/" 955 [PL] "http://www.planet-lab.org/" 957 Authors' Addresses 959 Ning Zong (editor) 960 Huawei Technologies 962 Email: zongning@huawei.com 964 Xiaohui Chen 965 Huawei Technologies 967 Email: risker.chen@huawei.com 969 Zhigang Huang 970 Huawei Technologies 972 Email: andy.huangzhigang@huawei.com 974 Lijiang Chen 975 HP Labs 977 Email: lijiang.chen@hp.com 978 Hongqiang Liu 979 Yale University 981 Email: hongqiang.liu@yale.edu