Network Working Group                                      N. Zong, Ed.
Internet-Draft                                                  X. Chen
Intended status: Informational                                 Z. Huang
Expires: March 03, 2014                             Huawei Technologies
                                                                L. Chen
                                                                HP Labs
                                                                 H. Liu
                                                        Yale University
                                                        August 30, 2013

                Integration Examples of DECADE System
                  draft-zong-integration-example-02

Abstract

   The Decoupled Application Data Enroute (DECADE) system is an
   in-network storage infrastructure that was discussed in the IETF.
   This document presents two detailed examples of how to integrate
   such an in-network storage infrastructure into peer-to-peer (P2P)
   applications to achieve more efficient content distribution, as
   well as an example of combining it with an Application Layer
   Traffic Optimization (ALTO) system to build a content distribution
   platform for Content Providers (CPs).

Status of This Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF).  Note that other groups may also distribute
   working documents as Internet-Drafts.  The list of current
   Internet-Drafts is at http://datatracker.ietf.org/drafts/current/.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other
   documents at any time.  It is inappropriate to use Internet-Drafts
   as reference material or to cite them other than as "work in
   progress."

   This Internet-Draft will expire on March 03, 2014.

Copyright Notice

   Copyright (c) 2013 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document.
   Code Components extracted from this document must include
   Simplified BSD License text as described in Section 4.e of the
   Trust Legal Provisions and are provided without warranty as
   described in the Simplified BSD License.

Table of Contents

   1.  Introduction
   2.  Terminology
     2.1.  Native Application Client
     2.2.  INS Server
     2.3.  INS Client
     2.4.  INS Operations
     2.5.  INS System
     2.6.  INS Client API
     2.7.  INS-enabled Application Client
     2.8.  INS Service Provider
     2.9.  INS Portal
   3.  INS Client API
   4.  Integration of P2P File Sharing and INS System
     4.1.  Integration Architecture
       4.1.1.  Message Flow
     4.2.  Concluding Remarks
   5.  Integration of P2P Live Streaming and INS System
     5.1.  Integration Architecture
       5.1.1.  Data Access Messages
       5.1.2.  Control Messages
     5.2.  Design Considerations
       5.2.1.  Improve Efficiency for Each Connection
       5.2.2.  Reduce Control Latency
   6.  Integration of ALTO and INS System for File Distribution
     6.1.  Architecture
       6.1.1.  CP Uploading Procedure
       6.1.2.  End User Downloading Procedure
   7.  Test Environment and Settings
     7.1.  Test Settings
     7.2.  Test Environment for P2P Live Streaming Example
       7.2.1.  INS Server
       7.2.2.  P2P Live Streaming Client
       7.2.3.  Tracker
       7.2.4.  Streaming Source Server
       7.2.5.  Test Controller
     7.3.  Test Environment for P2P File Sharing Example
       7.3.1.  INS Server
       7.3.2.  Vuze Client
       7.3.3.  Tracker
       7.3.4.  Test Controller
       7.3.5.  HTTP Server
       7.3.6.  PlanetLab Manager
     7.4.  Test Environment for Combined ALTO and INS File
           Distribution System
   8.  Performance Analysis
     8.1.  Performance Metrics
       8.1.1.  P2P Live Streaming
       8.1.2.  P2P File Sharing
       8.1.3.  Integration of ALTO and INS System for File
               Distribution
     8.2.  Results and Analysis
       8.2.1.  P2P Live Streaming
       8.2.2.  P2P File Sharing
       8.2.3.  Integrated ALTO and INS System for File Distribution
   9.  Conclusion and Next Step
   10. Security Considerations
   11. IANA Considerations
   12. References
     12.1.  Normative References
     12.2.  Informative References
   Authors' Addresses

1.  Introduction

   The Decoupled Application Data Enroute (DECADE) system is an
   in-network storage infrastructure that was discussed in the IETF.
   We implemented such an in-network storage infrastructure to
   simulate the DECADE system, including DECADE servers, DECADE
   clients and DECADE protocols [I-D.alimi-decade].  Throughout this
   draft, we therefore use the terms in-network storage (INS) system,
   INS server, INS client, INS operations, etc.

   This draft introduces examples of integrating the INS system with
   existing applications.  In our example systems, the core components
   are the INS server and the INS-enabled application client.  An INS
   server stores data inside the network, and thereafter manages both
   the stored data and access to that data.  An INS-enabled
   application client consists of an INS client and a native
   application client; it uses a set of Application Programming
   Interfaces (APIs) to enable the native application client to
   perform INS operations such as data get, data put and storage
   status query.

   This draft presents two detailed examples of how to integrate the
   INS system into peer-to-peer (P2P) applications, i.e. live
   streaming and file sharing, as well as an example integration of
   Application Layer Traffic Optimization (ALTO)
   [I-D.ietf-alto-protocol] and the INS system to support file
   distribution.  We show how to extend native P2P applications by
   designing INS-enabled P2P clients and describing the corresponding
   flows of INS-enabled data transmission.  Then we introduce the
   functional architecture and working flows of the integrated ALTO
   and INS system for file distribution of Content Providers (CPs).
   Finally, we illustrate the performance gains that P2P applications
   and content distribution can achieve by effectively leveraging the
   INS system.

   Please note that the P2P applications mentioned in this draft only
   represent a few cases out of a large number of P2P applications,
   while the INS system itself can support a variety of other
   applications.  Moreover, the set of APIs used in our integration
   examples is an experimental implementation, which is not standard
   and is still under development.  The INS system described in this
   draft is only a preliminary functional set of an in-network storage
   infrastructure for applications.  It is designed to test the pros
   and cons of an INS system utilized by P2P applications and to
   verify the feasibility of utilizing an INS system to support
   content distribution.
   We hope that our examples will be useful for further standard
   protocol design; they are not intended to present a solution for
   standardization purposes.

2.  Terminology

   The following terms are used in this document.

2.1.  Native Application Client

   A client running original application operations, including control
   and data messages defined by the application.

2.2.  INS Server

   A server that simulates the DECADE server defined in
   [I-D.alimi-decade].

2.3.  INS Client

   A client that simulates the DECADE client defined in
   [I-D.alimi-decade].

2.4.  INS Operations

   A set of communications between the INS server and the INS client
   that simulate the DECADE protocols defined in [I-D.alimi-decade].

2.5.  INS System

   A system including INS servers, INS clients, and INS operations.

2.6.  INS Client API

   A set of APIs that enable a native application client to utilize
   INS operations.

2.7.  INS-enabled Application Client

   An INS-enabled application client includes an INS client and a
   native application client communicating through the INS client
   API.

2.8.  INS Service Provider

   An INS service provider deploys the INS system and provides INS
   service to applications/end users.  It can be an Internet Service
   Provider (ISP) or another party.

2.9.  INS Portal

   A functional entity operated by an INS service provider to offer
   applications/end users a point to access (e.g. upload, download)
   files stored in INS servers.

3.  INS Client API

   In order to simplify the integration of the INS system with P2P
   applications, we provide the INS client API to native P2P clients
   for accomplishing INS operations such as data get and data put.  On
   top of the INS client API, a native P2P client can develop its own
   application-specific control and data distribution flows.

   We have currently developed the following five basic interfaces; an
   illustrative sketch is given after the list.

   o  Generate_Token: Generate an authorization token.  An
      authorization token is usually generated by an entity that is
      trusted by the INS client sharing its data, and is passed to
      other INS clients for data access control.  Please see
      [I-D.alimi-decade] for more details.

   o  Get_Object: Get a data object from an INS server with an
      authorization token.

   o  Put_Object: Store a data object into an INS server with an
      authorization token.

   o  Delete_Object: Delete a data object in an INS server explicitly
      with an authorization token.  Note that a data object can also
      be deleted implicitly by setting a Time-To-Live (TTL) value.

   o  Status_Query: Query the current status of the application
      itself, including listing stored data objects, resource (e.g.
      storage space) usage, etc.
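   To make the shape of this API concrete, the following minimal
   sketch (in Python) renders the five interfaces against an in-memory
   stand-in for an INS server.  It is illustrative only: the class
   names, parameters and token format are assumptions made for this
   sketch, and do not reflect the experimental implementation used in
   our tests or the DECADE protocol itself.

      # Illustrative sketch only.  Names, parameters and the token
      # format are assumptions; this is not the experimental API.
      import secrets
      import time

      class InMemoryINSServer:
          """Stand-in for an INS server: stores objects, checks tokens."""
          def __init__(self):
              self.objects = {}   # name -> (data, expiry time or None)
              self.tokens = {}    # token -> set of object names covered

          def authorize(self, names):
              token = secrets.token_hex(8)
              self.tokens[token] = set(names)
              return token

          def check(self, token, name):
              if name not in self.tokens.get(token, set()):
                  raise PermissionError("token does not cover " + name)

      class INSClient:
          """Client-side wrapper exposing the five basic interfaces."""
          def __init__(self, server):
              self.server = server

          def generate_token(self, names):
              # Generate_Token: authorize access to the listed objects.
              return self.server.authorize(names)

          def put_object(self, name, data, token, ttl=None):
              # Put_Object: store a data object, optionally with a TTL
              # so that it can later be deleted implicitly.
              self.server.check(token, name)
              expiry = time.time() + ttl if ttl else None
              self.server.objects[name] = (data, expiry)

          def get_object(self, name, token):
              # Get_Object: fetch a data object after the token check.
              self.server.check(token, name)
              data, expiry = self.server.objects[name]
              if expiry is not None and expiry < time.time():
                  raise KeyError(name + " has expired")
              return data

          def delete_object(self, name, token):
              # Delete_Object: explicit deletion after the token check.
              self.server.check(token, name)
              self.server.objects.pop(name, None)

          def status_query(self):
              # Status_Query: list stored objects and storage usage.
              used = sum(len(d) for d, _ in self.server.objects.values())
              return {"objects": list(self.server.objects),
                      "bytes_used": used}

   In a typical exchange under these assumptions, a sharing client
   would generate a token covering its object names, put the objects,
   and pass the token to a peer, which then calls get_object against
   the same INS server.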
4.  Integration of P2P File Sharing and INS System

   We integrate an INS client into Vuze - a BitTorrent based file
   sharing application [VzApp].

4.1.  Integration Architecture

   The architecture of the integration of Vuze and the INS system is
   shown in Figure 1.  An INS-enabled Vuze client uses the INS client
   to communicate with the INS server and to transmit data between
   itself and the INS server.  It is also compatible with the original
   Vuze signaling messages such as peer discovery, data availability
   announcement, etc.  Note that the same architecture applies to the
   other example of integration of P2P live streaming and the INS
   system.

   +------------------+                        +------------------+
   |   INS-enabled    |                        |   INS-enabled    |
   |      Client      |                        |      Client      |
   |+----------------+|    +---------------+   |+----------------+|
   ||      INS       |+----|  INS Server   |---+|      INS       ||
   ||     Client     ||    +---------------+   ||     Client     ||
   ||                |+------------------------+|                ||
   |+------+---------+|                        |+------+---------+|
   |  API  |          |                        |  API  |          |
   |+------+---------+|                        |+------+---------+|
   ||  Native Client |+------------------------+|  Native Client ||
   |+----------------+|                        |+----------------+|
   +------------------+                        +------------------+

                               Figure 1

4.1.1.  Message Flow

   For a better comparison, we first show the diagram of the native
   Vuze message exchange below, and then show the corresponding
   diagram including the INS system.

          +--------+                           +--------+
          |  Vuze  |                           |  Vuze  |
          | Client1|                           | Client2|
          +--------+                           +--------+
              |                                     |
              |              HandShake              |
              |<----------------------------------->|
              |             BT_BitField             |
              |<----------------------------------->|
              |             BT_Request              |
              |------------------------------------>|
              |              BT_Piece               |
              |<------------------------------------|
              |                                     |

                               Figure 2

   In the above diagram, one can see that the key messages for data
   sharing in native Vuze are "BT_BitField", "BT_Request" and
   "BT_Piece".  Vuze client1 and client2 exchange "BT_BitField"
   messages to announce the available data objects to each other.  If
   Vuze client1 wants to get a certain data object from client2, it
   sends a "BT_Request" message to client2.  Vuze client2 then returns
   the requested data object to client1 in a "BT_Piece" message.
   Please refer to [VzMsg] for the detailed description of Vuze
   messages.

   As shown in the diagram below, in the integration of Vuze and the
   INS system, the INS client inserts itself into the Vuze client by
   intercepting certain Vuze messages and adjusting their handling to
   send/receive data using INS operations instead.

    ________    __________    __________    ________    _________
   | Vuze   |  |   INS    |  |   INS    |  | Vuze   |  |   INS   |
   |Client1 |  | Client1  |  | Client2  |  |Client2 |  | Server  |
   |________|  |__________|  |__________|  |________|  |_________|
       |            |             |            |            |
       |            |  HandShake  |            |            |
       |<-----------|-------------|----------->|            |
       |            | BT_BitField |            |            |
       |<-----------|-------------|----------->|            |
       |            | BT_Request  |            |            |
       |------------|------------>|            |            |
       |            |             |            |            |
       |            |  Redirect   |            |            |
       |            |<------------|            |            |
       |            |             |  Get Data  |            |
       |            |---------------------------------------->|
       |            |             | Data Object|            |
       |            |<----------------------------------------|
       |            |             |            |            |
       |  BT_Piece  |             |            |            |
       |<-----------|             |            |            |
       |            |             |            |            |

                               Figure 3

   o  Vuze client1 sends a "BT_Request" message to Vuze client2 to
      request a data object as usual.

   o  INS client2, embedded in Vuze client2, intercepts the incoming
      "BT_Request" message and replies with a "Redirect" message that
      includes the INS server's address and an authorization token.

   o  INS client1 receives the "Redirect" message and then sends an
      INS message "Get Data" to the INS server to request the data
      object.

   o  The INS server receives the "Get Data" message and, after the
      token check, sends the requested data object back to INS
      client1.

   o  INS client1 encapsulates the received data object into a
      "BT_Piece" message and sends it to Vuze client1.  A sketch of
      this interception logic is given below.
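   The interception described above can be sketched as follows.  The
   handler names, the dictionary-based "Redirect" message and the INS
   client interface assumed here (generate_token, get_object,
   server_address) are illustrative assumptions, not the actual
   Vuze/Azureus message formats or the experimental INS client API.

      # Illustrative sketch of the interception logic in Figure 3.
      # Message shapes and handler names are assumptions.

      def on_bt_request(ins_client2, object_name, reply):
          # Serving side: INS client2 intercepts an incoming
          # BT_Request and answers with a Redirect instead of a
          # BT_Piece.
          token = ins_client2.generate_token([object_name])
          reply({"type": "Redirect",
                 "ins_server": ins_client2.server_address,
                 "token": token,
                 "object": object_name})

      def on_redirect(ins_client1, redirect, deliver_bt_piece):
          # Requesting side: INS client1 fetches the object from the
          # INS server named in the Redirect, then hands it to Vuze
          # client1 as a BT_Piece.
          data = ins_client1.get_object(redirect["object"],
                                        redirect["token"])
          deliver_bt_piece(redirect["object"], data)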
   In this example, the file to be shared is divided into many
   objects, with each object named "filename_author_partn", where
   "author" is the original author of the file or the user who
   uploaded the file, and "n" is the sequence number of the object.

4.2.  Concluding Remarks

   In this example, we feel that the INS system can effectively
   improve file sharing efficiency for the following reasons: 1)
   utilizing in-network storage as the data location of a peer
   achieves a statistical multiplexing gain in data sharing; 2) a
   shorter data delivery path based on in-network storage can not only
   improve application performance, but also avoid potential
   bottlenecks in the ISP network.

5.  Integration of P2P Live Streaming and INS System

   We integrate an INS client into a P2P live streaming application.

5.1.  Integration Architecture

   The architecture of the integration of the P2P live streaming
   application and the INS system is shown in Figure 1.  An
   INS-enabled P2P live streaming client uses the INS client to
   communicate with the INS server and to transmit data between itself
   and the INS server.

5.1.1.  Data Access Messages

   The INS client API is called whenever an INS-enabled P2P live
   streaming client wants to get data objects from (or put data
   objects into) the INS server.  Each data object transferred between
   the application client and the INS server should go through the INS
   client.  Each data object can be a variable-sized block to cater to
   different application requirements (e.g. latency and throughput).

   We use the hash of a data object's content as the name of the data
   object, as sketched below.  The name of a data object is generated
   and distributed by the streaming source server in this example.
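   As a minimal illustration of such content-based naming, the sketch
   below derives an object name from the object's bytes.  The choice
   of SHA-1 and hexadecimal encoding is an assumption made for this
   sketch; the draft does not prescribe a particular hash algorithm or
   encoding.

      # Illustrative sketch: name a data object by the hash of its
      # content (algorithm and encoding are assumptions).
      import hashlib

      def object_name(data: bytes) -> str:
          return hashlib.sha1(data).hexdigest()

      # The resulting hex string is the object name that the
      # streaming source server would generate and distribute for
      # each block.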
5.1.2.  Control Messages

   We used a lab-based P2P live streaming system for research purposes
   only.  The basic control messages between the native P2P live
   streaming clients are similar to the Vuze control protocols in the
   sense that data piece information is exchanged between the peers.
   The INS-enabled P2P live streaming client adds an additional
   control message for authorization token distribution, shown as the
   line between the INS clients in Figure 1.  In this example, the
   authorization token is generated by the INS client that is sharing
   its data.  By exchanging authorization tokens, the application
   clients can retrieve the data objects from the INS servers.

5.2.  Design Considerations

   One essential objective of the integration is to improve the
   performance of the P2P live streaming application.  To achieve this
   goal, we have some important design considerations that may be
   helpful to future protocol development work.

5.2.1.  Improve Efficiency for Each Connection

   In a native P2P system, a peer can establish tens or hundreds of
   concurrent connections with other peers.  On the other hand, it may
   be expensive for an INS server to maintain many connections for a
   large number of INS clients.  Typically, each INS server may only
   allocate and maintain M connections (in our examples, M=1) with
   each INS client at a time.  Therefore, we have the following design
   considerations to improve the efficiency of each connection between
   the INS server and the INS client and to achieve satisfactory data
   downloading performance.

   o  Batch Request: In order to fully utilize the connection
      bandwidth of the INS server and reduce overhead, an application
      client may request a batch of data objects in a single request.

   o  Larger Data Object: The data object size in existing P2P live
      streaming applications may be small, incurring large control
      overhead and low transport utilization.  A larger data object
      may be needed to utilize the data connection between the INS
      server and the INS client more efficiently.

5.2.2.  Reduce Control Latency

   In a native P2P system, a serving peer sends data objects to the
   requesting peer directly.  In an INS system, however, the serving
   client typically only replies with an authorization token to the
   requesting client, and the requesting client then uses this token
   to fetch the data objects from the INS server.  This process
   introduces additional control latency compared with the native P2P
   system.  It is even more serious in latency-sensitive applications
   such as P2P live streaming.  Therefore, we need to consider how to
   reduce such control latency.

   o  Range Token: One way to reduce control latency is to use a range
      token.  An INS-enabled P2P live streaming client may piggyback a
      range token when announcing data availability to other peers,
      indicating that all available data objects are accessible via
      this range token.  Then, instead of requesting a specific data
      object and waiting for the response, a peer can use this range
      token to access all available data objects that it is permitted
      to access in the INS server.  A sketch of this idea follows.
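   A range token can be thought of as an ordinary authorization token
   whose scope is a contiguous range of object sequence numbers rather
   than a single object.  The sketch below assumes, purely for
   illustration, sequence-numbered object names in the style of the
   "filename_author_partn" scheme of Section 4 and an opaque signing
   callback; neither the token structure nor the naming is a defined
   INS or DECADE format.

      # Illustrative sketch of a range token covering objects whose
      # sequence number falls in [first, last].

      def make_range_token(prefix, first, last, sign):
          # Issued by the sharing client when announcing availability.
          claim = {"prefix": prefix, "first": first, "last": last}
          return {"claim": claim, "sig": sign(claim)}

      def covers(token, object_name):
          # Checked (conceptually by the INS server) before serving a
          # request for object_name under this token.
          prefix, _, seq = object_name.rpartition("_part")
          claim = token["claim"]
          return (prefix == claim["prefix"] and seq.isdigit()
                  and claim["first"] <= int(seq) <= claim["last"])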
6.  Integration of ALTO and INS System for File Distribution

   The objective of the ALTO service is to give guidance to
   applications about which content servers to select in order to
   improve content distribution performance in an ISP-friendly way
   (e.g. reducing network usage within the ISP).  The core component
   of the ALTO service is the ALTO server, which generates the
   guidance based on ISP network information.  The ALTO protocol
   conveys such guidance from the ALTO server to the applications.  A
   detailed description of the ALTO protocol can be found in
   [I-D.ietf-alto-protocol].

   In this example, we integrate the ALTO and INS systems to build a
   content distribution platform for CPs.

6.1.  Architecture

   The integrated system allows CPs to upload files to INS servers,
   and guides end users to download files from the INS servers
   suggested by the ALTO service.  The architecture diagram is shown
   below.  Note that this diagram only shows a basic set of
   connections between the components.  Some redirections, such as the
   INS portal redirecting end users to the INS servers, can also
   happen between the components.

        __________                        __________
       | End User |                      | End User |
       |__________|                      |__________|
                 \                       /
                  \    _____________    /
                      |  CP Portal  |
                      |_____________|
                             |
        _____________________|_________________________
       | INS            _____|______                   |   +--------+
       | Service       |    INS     |                  |   |  ALTO  |
       | Provider      |   Portal   |------------------+---| Server |
       |              /|____________|\                 |   +--------+
       |             /    |      |    \                |
       |    ________/  ___|____  |_____\________       |
       |   |  INS   | |  INS   | |  INS   | |  INS   | |
       |   |Server1 | |Server2 | |Server3 | |Servern | |
       |   |________| |________| |________| |________| |
       |________________________________________________|

                               Figure 4

   Four key components are defined as follows.

   o  INS Servers: operated by an INS service provider to store files
      from CPs.

   o  INS Portal: operated by an INS service provider to 1) upload
      files from CPs to the dedicated INS servers; 2) direct end users
      to the INS servers suggested by the ALTO service to download
      files.

   o  CP Portal: operated by a CP to publish the URLs of the uploaded
      files for end user downloading.

   o  End User: end users use a standard web browser with INS
      extensions so that the INS client API can be called to fetch
      data from INS servers.

6.1.1.  CP Uploading Procedure

   The CP first uploads the files to INS servers, then gets the URLs
   of the uploaded files and publishes the URLs on the CP portal for
   end user downloading.  The flow is shown below.

    _________          _________          _________
   |         |        |   INS   |        |   INS   |
   |   CP    |        | Portal  |        | Server  |
   |_________|        |_________|        |_________|
        |                  |                  |
        |    HTTP POST     |                  |
        |----------------->|                  |
        |                  |     Put Data     |
        |                  |----------------->|
        |                  |     Response     |
        |                  |<-----------------|
        |       URLs       |                  |
        |<-----------------|                  |
        |                  |                  |

                               Figure 5

   o  The CP uploads the file to the INS portal site via an HTTP POST
      message.

   o  The INS portal distributes the file to the dedicated INS servers
      using the INS message "Put Data".  Note that the data
      distribution policies (e.g. how many copies of the data go to
      which INS servers) can be specified by the CP.  The dedicated
      INS servers can also be decided by the INS service provider
      based on policies or system status (e.g. INS server load).
      These issues are out of the scope of this draft.

   In this example, the data stored in the INS servers is divided into
   many objects, with each object named "filename_CPname_partn", where
   "CPname" is the name of the CP that uploads the file, and "n" is
   the sequence number of the object.

   o  When the file has been uploaded successfully, the CP portal
      lists the URLs of the file for end user downloading.

6.1.2.  End User Downloading Procedure

   End users can visit the CP portal web pages and click the URLs to
   download the desired files.  The flow is shown below.

    _________   ____________   _________   _________   _________
   |         | |            | |   INS   | |  ALTO   | |   INS   |
   | End User| | CP Portal  | | Portal  | | Server  | | Server  |
   |_________| |____________| |_________| |_________| |_________|
        |            |             |           |           |
        |  HTTP Get  |             |           |           |
        |----------->|             |           |           |
        |   Token    |             |           |           |
        |<-----------|             |           |           |
        |            |             |           |           |
        |        HTTP Get          |           |           |
        |------------------------->|           |           |
        |            |             | ALTO Req  |           |
        |            |             |---------->|           |
        |            |             | ALTO Resp |           |
        |            |             |<----------|           |
        | Optimal INS Server address           |           |
        |<-------------------------|           |           |
        |            |             |           |           |
        |            |          Get Data       |           |
        |-------------------------------------------------->|
        |            |             |           |           |
        |            |         Data Object     |           |
        |<--------------------------------------------------|
        |            |             |           |           |

                               Figure 6

   o  The end user visits the CP portal web page and finds the URLs
      for the desired file.

   o  The end user clicks the hyperlink; the CP portal returns an
      authorization token to the end user and redirects the end user
      to the INS portal via an HTTP Get message.

   o  The INS portal communicates with the ALTO server to get the
      suggested INS server storing the requested file.  In this
      example, the ALTO server simply selects the INS server within
      the same IP subnet as the end user.  Please see
      [I-D.ietf-alto-protocol] for more details on how ALTO guidance
      can be used to select content servers.

   o  The INS portal returns the INS server address suggested by the
      ALTO service to the end user.

   o  The end user connects to the suggested INS server and gets the
      data via the INS message "Get Data" after the token check.  A
      sketch of the server selection step is given below.
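   The INS portal's use of ALTO guidance in Figure 6 can be sketched
   as follows.  Here "alto_rank" stands in for the ALTO
   request/response exchange; the actual ALTO messages (network and
   cost maps) are defined in [I-D.ietf-alto-protocol] and are not
   reproduced here.  The function names, the candidate list and the
   subnet-based tie-break are assumptions of this sketch.

      # Illustrative sketch of the INS portal choosing an INS server
      # for an end user based on ALTO-style guidance.
      import ipaddress

      def choose_ins_server(user_ip, candidates, alto_rank):
          # candidates: INS server addresses holding the requested
          #             file.
          # alto_rank:  callable (user_ip, candidates) -> candidates
          #             ordered from most to least preferred, e.g.
          #             backed by an ALTO cost map lookup.
          ranked = alto_rank(user_ip, candidates)
          return ranked[0] if ranked else None

      def same_subnet_first(user_ip, candidates, prefix_len=24):
          # Toy stand-in for the guidance used in this example:
          # prefer INS servers in the same subnet as the end user.
          net = ipaddress.ip_network(user_ip + "/" + str(prefix_len),
                                     strict=False)
          return sorted(candidates,
                        key=lambda ip:
                            ipaddress.ip_address(ip) not in net)

   For instance, under these assumptions,
   choose_ins_server("10.1.2.3", servers, same_subnet_first) would
   return a server in 10.1.2.0/24 whenever one holds the file.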
7.  Test Environment and Settings

   We conducted a number of tests to show the results of our
   integration examples.  For a better performance comparison, we ran
   the experiments (i.e. INS-integrated P2P application vs. native P2P
   application) in the same environment using the same settings.

7.1.  Test Settings

   Our tests ran over a wide geographic area and on diverse platforms,
   including a well-known commercial platform - Amazon EC2 [EC2] - and
   a well-known test-bed - PlanetLab [PL].  The experimental settings
   are as follows.

   o  Amazon EC2: We set up INS servers on the Amazon EC2 platform in
      four regions around the world - US East, US West, Europe and
      Asia.

   o  PlanetLab: We ran our P2P live streaming clients and P2P file
      sharing clients (both INS-enabled and native clients) on
      PlanetLab over a wide geographic area.

   o  Flash-crowd: Flash-crowd is an important scenario in P2P live
      streaming systems due to their live nature, i.e. a large number
      of users join the live channel during the startup period of the
      event.  Therefore, we conducted experiments to test the system
      performance under flash-crowd conditions in our P2P live
      streaming example.

   o  Total supply bandwidth: Total supply bandwidth is the sum of the
      bandwidth capacity used to serve the streaming/file content,
      from both servers (including source servers and INS servers) and
      P2P clients.  For a fair comparison, we set the total supply
      bandwidth to be the same in the tests of both native and
      INS-enabled P2P applications.

7.2.  Test Environment for P2P Live Streaming Example

   In the tests, we have functional components running on different
   platforms, including INS servers, P2P live streaming clients
   (INS-enabled or native), a native P2P live streaming tracker, a
   streaming source server and a test controller, as shown in the
   figure below.

        +------------+        +------------+
        |    INS     |--------|    INS     |
        |   Server   |        |   Server   |
        +-----+------+        +------+-----+          Amazon EC2
   ___________|______________________|_____________________________
              |                      |
        +-----+------+        +------+-----+
        | Streaming  |--------| Streaming  |
        |  Client    |\      /|  Client    |
        +------+-----+ \    / +------+-----+          PlanetLab
   ____________|________\__/_________|_____________________________
               |        /    \       |                Yale Lab
    +--------------+  +------------+  +------------+
    |  Streaming   |  |  Tracker   |  |    Test    |
    | Source Server|  |            |  | Controller |
    +--------------+  +------------+  +------------+

                               Figure 7

7.2.1.  INS Server

   The INS servers ran on Amazon EC2.

7.2.2.  P2P Live Streaming Client

   Both INS-enabled and native P2P live streaming clients ran on
   PlanetLab.  Each INS-enabled P2P live streaming client connects to
   a dedicated INS server.  In this example, we decide which client
   connects to which server based on IP address, so the assignment is
   roughly region-based and still coarse.  Each INS-enabled P2P live
   streaming client uses its INS server to share streaming content
   with other peers.

7.2.3.  Tracker

   A native P2P live streaming tracker ran at Yale's laboratory and
   served both INS-enabled and native P2P live streaming clients
   during the test.
7.2.4.  Streaming Source Server

   A streaming source server ran at Yale's laboratory and served both
   INS-enabled and native P2P live streaming clients during the test.

7.2.5.  Test Controller

   The test controller is a manager running at Yale's laboratory to
   control the behavior of all machines in both Amazon EC2 and
   PlanetLab during the test.

7.3.  Test Environment for P2P File Sharing Example

   The functional components include the Vuze client (with and without
   the INS client), INS servers, the native Vuze tracker, an HTTP
   server, the PlanetLab manager and a test controller, as shown in
   the figure below.

        +-----------+       +-----------+
        |    INS    |-------|    INS    |
        |   Server  |       |   Server  |
        +-----+-----+       +-----+-----+             Amazon EC2
   ___________|___________________|________________________________
              |                   |
        +-----+-----+       +-----+-----+
        |   Vuze    |-------|   Vuze    |
        |  Client   |\     /|  Client   |
        +-----+-----+ \   / +-----+-----+             PlanetLab
   ___________|________\_/________|________________________________
              |         /\        |                   Yale Lab
   +-------------+ +------------+ +------------+ +-----------+
   | HTTP Server | |  Tracker   | |    Test    | | PlanetLab |
   |             | |            | | Controller | |  Manager  |
   +-------------+ +------------+ +------------+ +-----------+

                               Figure 8

7.3.1.  INS Server

   The INS servers ran on Amazon EC2.

7.3.2.  Vuze Client

   The Vuze clients were divided into one seeding client and multiple
   leechers.  The seeding client ran on a Windows 2003 server at
   Yale's laboratory.  Both INS-enabled and native Vuze clients
   (leechers) ran on PlanetLab.  The INS client embedded in the Vuze
   client was loaded and run automatically after the Vuze client
   started up.

7.3.3.  Tracker

   The Vuze software includes a tracker implementation, so we did not
   deploy our own tracker.  The tracker ran at Yale's laboratory and
   was enabled when making a BitTorrent file.  It ran on the same
   Windows 2003 server as the seeding client.

7.3.4.  Test Controller

   Similar to the test controller in the P2P live streaming case, the
   test controller in the Vuze example can also control the behavior
   of all machines in Amazon EC2 and PlanetLab.  For example, it lists
   all the Vuze clients via a GUI and controls them to download a
   specific BitTorrent file.  The test controller ran on the same
   Windows 2003 server as the seeding client.

7.3.5.  HTTP Server

   The BitTorrent file was put on the HTTP server, and the leechers
   retrieved the BitTorrent file from the HTTP server after receiving
   the download command from the test controller.  We used Apache
   Tomcat as the HTTP server.

7.3.6.  PlanetLab Manager

   The PlanetLab manager is a tool developed by the University of
   Washington.  It presents a simple GUI to control PlanetLab nodes
   and perform common tasks such as: 1) selecting nodes for your
   slice; 2) choosing nodes for your experiment based on information
   about the nodes; 3) reliably deploying your experiment files; 4)
   executing commands on every node in parallel; 5) monitoring the
   progress of the experiment as a whole, as well as viewing console
   output from the nodes.

7.4.  Test Environment for Combined ALTO and INS File Distribution
      System

   For the integration of the ALTO and INS systems to support file
   distribution for CPs, we built 6 Linux virtual machines (VMs)
   running the Fedora 13 operating system.  The ALTO server, the INS
   portal, the CP portal and two INS servers ran on these VMs.  Each
   VM was allocated 4 cores of a 16-core 1 GHz CPU, 2 GB of memory and
   10 GB of disk space.
   The CP uploaded files to the INS servers via the INS portal.  End
   users can choose the desired file through the CP portal and
   download it from the optimal INS server chosen by the INS portal
   using the ALTO service.

8.  Performance Analysis

   We illustrate the performance gains that P2P applications and
   content distribution can achieve by effectively leveraging the INS
   system.  For the example of integrating the ALTO and INS systems to
   support file distribution for CPs, we show the feasibility of such
   an integration.

8.1.  Performance Metrics

8.1.1.  P2P Live Streaming

   To measure the performance of the P2P live streaming application,
   we mainly employed the following four metrics.

   o  Startup delay: The duration from the moment a peer joins the
      streaming channel to the moment it starts to play.

   o  Piece missed rate: The number of pieces a peer misses during
      playback over the total number of pieces.

   o  Freeze times: The number of times a peer re-buffers during
      playback.

   o  Average peer uploading rate: The average uploading bandwidth of
      a peer.

8.1.2.  P2P File Sharing

   To measure the performance of the P2P file sharing application, we
   mainly employed the following three metrics.

   o  Download traffic: The total amount of traffic representing the
      network downlink resource usage.

   o  Upload traffic: The total amount of traffic representing the
      network uplink resource usage.

   o  Network resource efficiency: The ratio of the P2P system
      download rate to the total network (downlink) bandwidth.

8.1.3.  Integration of ALTO and INS System for File Distribution

   We consider some common capacity metrics for a content distribution
   system, i.e. the bandwidth usage of each INS server and the total
   number of online users supported by each INS server.
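   For clarity, the sketch below shows how the ratio metrics defined
   above can be computed from raw counters collected in a test run.
   The variable names are assumptions and do not refer to any
   particular test harness.

      # Illustrative sketch of the ratio metrics in Section 8.1.
      def piece_missed_rate(pieces_missed, pieces_total):
          # Pieces a peer missed during playback over the total
          # number of pieces.
          return pieces_missed / pieces_total

      def network_resource_efficiency(system_download_rate,
                                      total_downlink_bandwidth):
          # Ratio of the P2P system download rate to the total
          # network (downlink) bandwidth, both in the same unit.
          return system_download_rate / total_downlink_bandwidth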
8.2.  Results and Analysis

8.2.1.  P2P Live Streaming

   o  Startup delay: In the test, INS-enabled P2P live streaming
      clients started up in around 35~40 seconds, and some of them
      started up in around 10 seconds.  Native P2P live streaming
      clients started up in around 110~120 seconds, and less than 20%
      of them started up within 100 seconds.

   o  Piece missed rate: In the test, both INS-enabled and native P2P
      live streaming clients achieved good performance in piece missed
      rate.  Only about 0.02% of the total pieces were missed in both
      cases.

   o  Freeze times: In the test, native P2P live streaming clients
      suffered 40% more freezes than INS-enabled P2P live streaming
      clients.

   o  Average peer uploading rate: In the test, according to our
      settings, INS-enabled P2P live streaming clients had no data
      upload in their "last mile" access network, while in the native
      P2P live streaming system, most peers uploaded streaming data to
      serve other peers.  In other words, the INS system can shift
      uploading traffic from the clients' "last mile" to in-network
      devices, which saves a lot of expensive bandwidth on access
      links.

8.2.2.  P2P File Sharing

   The test result is illustrated in the figure below.  We can see
   that there is very little upload traffic from the INS-enabled Vuze
   clients, while in the native Vuze case the upload traffic from the
   Vuze clients is the same as the download traffic.  Network resource
   usage in the "last mile" is thus reduced in the INS-enabled Vuze
   case.  This result also verifies that the INS system can shift
   uploading traffic from the clients' "last mile" to in-network
   devices.  Note that because not all clients finished the
   downloading process, the total download traffic differs between the
   independent tests, as shown in the figure below.

   +--------------------+--------------------+--------------------+
   |                    |  Download Traffic  |   Upload Traffic   |
   +--------------------+--------------------+--------------------+
   |  INS-Enabled Vuze  |       480MB        |        12MB        |
   +--------------------+--------------------+--------------------+
   |    Native Vuze     |       430MB        |       430MB        |
   +--------------------+--------------------+--------------------+

                               Figure 9

   We also found higher network resource efficiency in the INS-enabled
   Vuze case, where network resource efficiency is defined as the
   ratio of the P2P system download rate to the total network
   (downlink) bandwidth.  The test result is that the network resource
   efficiency of native Vuze is 65%, while that of INS-enabled Vuze is
   88%.  A possible reason for the higher network resource efficiency
   is that the INS server can always serve content to the peers, while
   in traditional P2P applications a peer has to finish downloading
   content before sharing it with other peers.

8.2.3.  Integrated ALTO and INS System for File Distribution

   Each INS server can supply at most 94% of the bandwidth of its
   network interface card (NIC) - e.g. a server with a 1 Gbps NIC can
   supply at most 940 Mbps.  We ran tests with 100 Mbps and 1 Gbps
   NICs and got the same result of 94% bandwidth usage.

   Each INS server can support about 400 online users downloading
   files simultaneously.  When we tried 450 concurrent online users,
   50 users did not start downloading on time, but had to wait for the
   other 400 users to finish downloading.

9.  Conclusion and Next Step

   This document presents two examples of integrating the INS system
   into P2P applications (i.e. P2P live streaming and Vuze) by
   developing an INS client API for native P2P clients.  To better
   adopt the INS system, we identified some important design
   considerations, including the efficiency of INS connections and the
   control latency caused by INS operations, and developed mechanisms
   to address them.  We ran tests of our integration examples using
   Amazon EC2 and PlanetLab to deploy INS servers and clients,
   respectively.  It can be observed from our test results that
   integrating the INS system into native P2P applications can achieve
   performance gains for P2P applications and more network-efficient
   content distribution.  For the example of integrating the ALTO and
   INS systems to support file distribution for CPs, we have shown the
   feasibility of such an integration.

   Our next step is to continue implementing more features of INS
   servers and clients based on the protocol ideas developed in the
   protocol document [I-D.alimi-decade], such as OAuth-based
   authorization and the naming scheme for data objects.

10.  Security Considerations

   An authorization token can be passed from one INS client to other
   INS clients to authorize them to access data objects from its INS
   storage.  Detailed mechanisms of token-based authentication and
   authorization can be found in [I-D.alimi-decade].

11.  IANA Considerations

   This document does not have any IANA considerations.

12.  References

12.1.  Normative References
   [I-D.alimi-decade]        Alimi, R., Rahman, A., Kutscher, D.,
                             Yang, Y., Song, H., and K. Pentikousis,
                             "DECoupled Application Data Enroute
                             (DECADE)", draft-alimi-decade-03 (work in
                             progress), August 2013.

   [I-D.ietf-alto-protocol]  Alimi, R., Penno, R., and Y. Yang, "ALTO
                             Protocol", draft-ietf-alto-protocol-11
                             (work in progress), March 2012.

12.2.  Informative References

   [VzApp]                   Vuze, http://www.vuze.com

   [VzMsg]                   Vuze messaging protocol,
                             http://wiki.vuze.com/w/Azureus_messaging_protocol

   [EC2]                     Amazon EC2, http://aws.amazon.com/ec2/

   [PL]                      PlanetLab, http://www.planet-lab.org/

Authors' Addresses

   Ning Zong (editor)
   Huawei Technologies

   Email: zongning@huawei.com

   Xiaohui Chen
   Huawei Technologies

   Email: risker.chen@huawei.com

   Zhigang Huang
   Huawei Technologies

   Email: andy.huangzhigang@huawei.com

   Lijiang Chen
   HP Labs

   Email: lijiang.chen@hp.com

   Hongqiang Liu
   Yale University

   Email: hongqiang.liu@yale.edu