PPSP                                                               Y. Gu
Internet-Draft                                              Unaffiliated
Intended status: Informational                              N. Zong, Ed.
Expires: April 30, 2015                                           Huawei
                                                                Y. Zhang
                                                                 Coolpad
                                                            China Mobile
                                                              F. Piccolo
                                                                   Cisco
                                                                 S. Duan
                                                                    CATR
                                                        October 27, 2014

                 Survey of P2P Streaming Applications
                       draft-ietf-ppsp-survey-09

Abstract

   This document presents a survey of some of the most popular Peer-to-
   Peer (P2P) streaming applications on the Internet.  The main
   selection criteria have been popularity and availability of
   information on operation details at the time of writing.
   The selected applications are not reviewed as a whole; rather, the
   review focuses on the signaling and control protocols used to
   establish and maintain overlay connections among peers and to
   advertise and download streaming content.

Status of This Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF).  Note that other groups may also distribute
   working documents as Internet-Drafts.  The list of current Internet-
   Drafts is at http://datatracker.ietf.org/drafts/current/.

   Internet-Drafts are draft documents valid for a maximum of six months
   and may be updated, replaced, or obsoleted by other documents at any
   time.  It is inappropriate to use Internet-Drafts as reference
   material or to cite them other than as "work in progress."

   This Internet-Draft will expire on April 30, 2015.

Copyright Notice

   Copyright (c) 2014 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with respect
   to this document.  Code Components extracted from this document must
   include Simplified BSD License text as described in Section 4.e of
   the Trust Legal Provisions and are provided without warranty as
   described in the Simplified BSD License.

Table of Contents

   1.  Introduction
   2.  Terminology and Concepts
   3.  Classification of P2P Streaming Applications Based on Overlay
       Topology
   4.  Mesh-based P2P Streaming Applications
     4.1.  Octoshape
     4.2.  PPLive
     4.3.  Zattoo
     4.4.  PPStream
     4.5.  Tribler
     4.6.  QQLive
   5.  Tree-based P2P Streaming Systems
     5.1.  End System Multicast (ESM)
   6.  Hybrid P2P Streaming Applications
     6.1.  New Coolstreaming
   7.  Security Considerations
   8.  IANA Considerations
   9.  Author List
   10. Acknowledgments
   11. Informative References
   Authors' Addresses

1.  Introduction

   An ever-increasing number of multimedia streaming systems have
   adopted the Peer-to-Peer (P2P) paradigm to stream audio and video
   content from a source to a large number of end users.  This is the
   reference scenario of this document, which presents a survey of some
   of the most popular P2P streaming applications available on today's
   Internet.

   The presented survey does not aim to be exhaustive.  The reviewed
   applications have been selected mainly on the basis of their
   popularity and of the information publicly available on their P2P
   operation details at the time of writing.
   In addition, the level of detail of the provided descriptions may
   vary from application to application; this simply reflects the
   amount of information that was available at the time of writing.

   Moreover, the selected applications are not reviewed as a whole;
   rather, the review focuses on the signaling and control protocols
   used to construct and maintain the overlay connections among peers
   and to advertise and download multimedia content.  More precisely,
   we assume throughout the document the high level system model shown
   in Figure 1.

   +---------------------------------------------------------+
   |        +--------------------------------+               |
   |        |            Tracker             |               |
   |        |                                |               |
   |        |   Information on multimedia    |               |
   |        |     content and peer set       |               |
   |        +--------------------------------+               |
   |            ^  |                 ^  |                    |
   |            |  |                 |  |                    |
   |            |  | Tracker         |  | Tracker            |
   |            |  | Protocol        |  | Protocol           |
   |            |  |                 |  |                    |
   |            |  V                 |  V                    |
   |      +-------------+       +------------+               |
   |      |   Peer 1    |<------|   Peer 2   |               |
   |      |             |------>|            |               |
   |      +-------------+       +------------+               |
   |                Peer Protocol                            |
   |                                                         |
   +---------------------------------------------------------+

     Figure 1: High level architecture of P2P streaming systems,
            assumed as reference model throughout the document

   As Figure 1 shows, every P2P streaming system comprises two main
   types of entity: peers and trackers.  Peers represent end users,
   which join the system dynamically to send and receive streamed media
   content, whereas trackers are well-known nodes, stably connected to
   the system, that provide peers with metadata about the streamed
   content and the set of active peers.
   According to this model, it is possible to distinguish between two
   different control/signaling protocols:

   -  the "tracker protocol", for the interaction between trackers and
      peers;

   -  the "peer protocol", for the interaction among peers.

   Hence, whenever possible, we identify the tracker and peer protocols
   of each application and provide the corresponding details.

   This document is organized as follows.  Section 2 introduces the
   terminology and concepts used throughout the survey.  Since the
   overlay topology built on the connections among peers affects some
   aspects of the tracker and peer protocols, Section 3 classifies P2P
   streaming applications according to their overlay topology: mesh-
   based, tree-based and hybrid.  Section 4 then presents some of the
   most popular mesh-based P2P streaming applications: Octoshape,
   PPLive, Zattoo, PPStream, Tribler and QQLive.  Likewise, Section 5
   presents End System Multicast as an example of a tree-based P2P
   streaming application, whereas Section 6 presents New Coolstreaming
   as an example of a hybrid-topology P2P streaming application.
   Finally, Section 7 provides some security considerations.

2.  Terminology and Concepts

   The reader is referred to RFC 6972 [RFC6972] for concepts such as
   chunk, live streaming, video-on-demand (VoD), peer, tracker and
   swarm, which are used extensively throughout the document.

   In addition, the reader can refer to this section for the following
   concepts.

   CHANNEL: A CHANNEL denotes a TV channel from which live streaming
   content is transmitted in a P2P streaming application.

   PEER PROTOCOL: PEER PROTOCOL denotes the control and signaling
   protocol for the interaction among peers.

   PULL: PULL denotes a transmission of multimedia content that is
   initiated by the receiving peer.

   PUSH: PUSH denotes a transmission of multimedia content that is not
   initiated by the receiving peer.
   TRACKER PROTOCOL: TRACKER PROTOCOL denotes the control and signaling
   protocol for the interaction between peers and trackers.

3.  Classification of P2P Streaming Applications Based on Overlay
    Topology

   Depending on the topology of the overlay connections among peers, it
   is possible to distinguish the following general types of P2P
   streaming application:

   -  mesh-based: peers are organized in a randomly connected overlay
      network, and multimedia content delivery is pull-based.  This is
      why these systems are also referred to as "data-driven".  Due to
      their unstructured nature, mesh-based P2P streaming applications
      are very resilient to peer churn and guarantee high network
      resource utilization.  On the other hand, the cost of maintaining
      the overlay topology may limit performance in terms of delay, and
      pull-based data delivery calls for large buffers to store chunks;

   -  tree-based: peers are organized to form a tree-shaped overlay
      network rooted at the streaming source, and multimedia content
      delivery is push-based.  Peers that forward data are called
      parent nodes, and peers that receive it are called children
      nodes.  Due to their structured nature, tree-based P2P streaming
      applications guarantee both topology maintenance at very low cost
      and good delay performance.  On the other hand, they are not very
      resilient to peer churn, which may be very high in a P2P
      environment;

   -  hybrid: this category includes all the P2P applications that
      cannot be classified as simply mesh-based or tree-based and that
      present characteristics of both categories.

4.  Mesh-based P2P Streaming Applications

   In mesh-based P2P streaming applications, peers self-organize into a
   randomly connected overlay graph where each peer interacts with a
   limited subset of other peers (its neighbors) and explicitly
   requests the chunks it needs (pull-based, or data-driven, delivery).
   This type of content delivery may be associated with high overhead,
   not only because peers formulate requests in order to download the
   chunks they need, but also because in some applications peers
   exchange chunk availability information in the form of buffer-maps
   (a sort of bit map with a "1" in correspondence of each chunk stored
   in the local buffer).  On the one hand, the main advantage of this
   kind of application is that a peer does not rely on a single peer to
   retrieve multimedia content.  Hence, these applications are very
   resilient to peer churn.  On the other hand, overlay connections are
   highly dynamic and not persistent (being driven by content
   availability), which makes content distribution efficiency
   unpredictable.  In fact, different chunks may be retrieved via
   different network paths, which may result for end users in playback
   quality degradation ranging from low bit rates to long start-up
   delays to frequent playback freezes.  Moreover, peers have to
   maintain large buffers to increase the probability of satisfying the
   chunk requests received from neighbors.

4.1.  Octoshape

   Octoshape [Octoshape] is a P2P plug-in developed by the Danish
   company of the same name; it became popular when CNN [CNN] adopted
   it to broadcast live streaming content.  Octoshape helped CNN serve
   a peak of more than a million simultaneous viewers, thanks not only
   to the P2P content distribution paradigm, but also to several
   innovative delivery technologies such as loss-resilient transport,
   adaptive bit rate, adaptive path optimization and adaptive proximity
   delivery.
   Figure 2 depicts the architecture of the Octoshape system.

       +------------+     +--------+
       |   Peer 1   |-----| Peer 2 |
       +------------+     +--------+
          |    \             /  |
          |     \           /   |
          |      \         /    |
          |       \       /     |
          |        \     /      |
          |         \   /       |
          |          \ /        |
          |          / \        |
          |         /   \       |
       +--------------+  +-------------+
       |    Peer 4    |--|   Peer 3    |
       +--------------+  +-------------+
       ***************************************
                         |
                         |
                 +---------------+
                 | Content Server|
                 +---------------+

           Figure 2: Architecture of the Octoshape system

   As the figure shows, there are no trackers, and consequently no
   tracker protocol is necessary.  The content server itself plays the
   role of tracker: while streaming the live content, it transmits, in
   the form of metadata, information on the peers that have already
   joined the channel.

   As regards the peer protocol, each peer maintains a sort of Address
   Book with the information necessary to contact the other peers that
   are watching the same channel.

   Regarding the data distribution strategy, in the Octoshape solution
   the original stream is split into a number K of smaller, equal-sized
   data streams, but a number N > K of unique data streams is actually
   constructed, in such a way that a peer receiving any K of the N
   available data streams is able to play the original stream.  For
   instance, if the original live stream is a 400 kbit/s signal, with
   K=4 and N=12, 12 unique data streams are constructed, and a peer
   that downloads any 4 of the 12 data streams is able to play the live
   stream.  Each peer sends requests for data streams to some selected
   peers, and it receives positive or negative answers depending on the
   availability of upload capacity at the requested peers.  In case of
   negative answers, a peer keeps sending requests until it finds K
   peers willing to upload the minimum number of data streams needed to
   display the original live stream.  This allows a flexible use of
   bandwidth at end users.
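   The negotiation described above can be sketched as follows.  This is
   a simplified, hypothetical model (the peer records, the "free_slots"
   capacity field and the substream identifiers are illustrative, not
   part of the published Octoshape protocol): a peer walks through
   candidate senders until K distinct substreams have been promised.

```python
K = 4    # substreams needed to play the original stream
N = 12   # unique substreams actually constructed (N > K)

def request_substreams(candidates, k=K):
    """Keep asking candidate peers until k distinct substreams have
    been promised; a peer answers positively only if it still has
    spare upload capacity (a negative answer makes us try the next)."""
    granted = {}                              # substream id -> peer name
    for peer in candidates:
        if len(granted) == k:
            break
        offered = [s for s in peer["streams"] if s not in granted]
        if peer["free_slots"] > 0 and offered:
            granted[offered[0]] = peer["name"]
            peer["free_slots"] -= 1
    return granted

# Toy neighborhood: peer "c" has no spare upload capacity and is skipped.
candidates = [
    {"name": "a", "streams": [0, 1], "free_slots": 1},
    {"name": "b", "streams": [1, 2], "free_slots": 1},
    {"name": "c", "streams": [2, 3], "free_slots": 0},
    {"name": "d", "streams": [3, 4], "free_slots": 1},
    {"name": "e", "streams": [0, 4], "free_slots": 1},
]
print(request_substreams(candidates))   # {0: 'a', 1: 'b', 3: 'd', 4: 'e'}
```

   Note that the download succeeds even though peer "c" refuses: any K
   distinct substreams are as good as any other K, which is what makes
   the scheme resilient to individual refusals.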
   Since the original stream is split into smaller data streams, a peer
   that does not have enough upload capacity to transmit the whole
   original stream can still transmit a number of smaller data streams
   that fits its actual upload capacity.

   In order to mitigate the impact of peer loss, the Address Book is
   also used at each peer to derive the so-called Standby List, which
   Octoshape peers use to probe other peers and make sure that they are
   ready to take over if one of the current senders leaves or becomes
   congested.

   Finally, in order to optimize bandwidth utilization, Octoshape
   leverages peers within a network to minimize external bandwidth
   usage and to select the most reliable and "closest" source for each
   viewer.  It also chooses the best matching available codecs and
   players, and it scales the bit rate up and down according to the
   available Internet connection.

4.2.  PPLive

   PPLive [PPLive] was first developed at Huazhong University of
   Science and Technology in 2004, and it is one of the earliest and
   most popular pieces of P2P streaming software in China.  To give an
   idea, the PPLive website served 50 million visitors during the
   opening ceremony of the Beijing 2008 Olympics, and the dedicated
   Olympics channel attracted 221 million viewers in two weeks.

   Even though PPLive was renamed PPTV in 2010, we continue to use the
   old name PPLive throughout this document.
   The PPLive system includes the following main components:

   -  video streaming server, which acts as the source of video content
      and copes with content coding issues;

   -  peer, also called node or client, which is the PPLive entity
      downloading video content from other peers and uploading video
      content to other peers;

   -  channel server, which provides the list of available channels
      (live TV or VoD content) to a PPLive peer as soon as the peer
      joins the system;

   -  tracker server, which provides a PPLive peer with the list of
      online peers that are watching the same channel as the one the
      joining peer is interested in.

   Figure 3 illustrates the high level diagram of the PPLive system.

       +------------+          +------------+
       |   Peer 2   |----------|   Peer 3   |
       +------------+          +------------+
          |   |                    |   |
          |   |                    |   |
          |   |  +--------------+  |   |
          |   +--|    Peer 1    |--+   |
          |      +--------------+      |
          |             |              |
          |             |              |
       +-------------------------------+
       |                               |
       |   +-----------------------+   |
       |   |Video Streaming Server |   |
       |   +-----------------------+   |
       |   |    Channel Server     |   |
       |   +-----------------------+   |
       |   |    Tracker Server     |   |
       |   +-----------------------+   |
       |                               |
       +-------------------------------+

     Figure 3: High level overview of the PPLive system architecture

   As regards the tracker protocol, as soon as a PPLive peer joins the
   system and selects the channel to watch, it retrieves from the
   tracker server a list of peers that are watching the same channel.

   As regards the peer protocol, it controls both the peer discovery
   and the chunk distribution process.  More specifically, peer
   discovery is implemented by a kind of gossip-like mechanism.
   After retrieving from the tracker server the list of active peers
   watching a specific channel, a PPLive peer sends out probes to
   establish active peer connections, and some of the probed peers may
   also return their own lists of active peers to help the new peer
   discover more peers in the initial phase.  The chunk distribution
   process is mainly based on buffer-map exchange to advertise the
   availability of cached chunks.  In more detail, the PPLive software
   client exploits two local buffers to cache chunks: the PPLive TV
   engine buffer and the media player buffer.  The main reason behind
   the double buffer structure is to absorb the download rate
   variations experienced when downloading chunks from the PPLive
   network.  In fact, received chunks are first buffered and
   reassembled in the PPLive TV engine buffer; as soon as the number of
   consecutive chunks in the PPLive TV engine buffer exceeds a
   predefined threshold, the media player buffer downloads chunks from
   the PPLive TV engine buffer; finally, when the media player buffer
   fills up to the required level, the actual video playback starts.

   Since the protocols and algorithms of PPLive are proprietary, most
   of the known details have been derived from measurement studies.
   Specifically, it seems that:

   -  the number of peers from which a PPLive node downloads live TV
      chunks is constant and relatively low, and the top ten peers
      contribute a major part of the download traffic, as shown in
      [P2PIPTVMEA];

   -  PPLive can provide satisfactory performance for popular live TV
      and VoD channels.  For unpopular live TV channels, performance
      may severely degrade, whereas for unpopular VoD channels this
      problem rarely happens, as shown in [CNSR].  The authors of
      [CNSR] also demonstrate that the workload in most VoD channels is
      well balanced, whereas for live TV channels the workload
      distribution is unbalanced, and a small number of peers provide
      most of the video data.
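   The double buffer structure described above can be sketched as
   follows.  The threshold and fill-level values are assumptions for
   illustration only (PPLive's actual parameters are proprietary); the
   point is the handoff rule: only a sufficiently long run of
   consecutive chunks moves from the TV engine buffer to the media
   player buffer, which in turn starts playback only once filled.

```python
from collections import deque

ENGINE_THRESHOLD = 8   # consecutive chunks required before handoff (assumed)
PLAYER_LEVEL = 4       # player buffer level that starts playback (assumed)

class DoubleBuffer:
    """Sketch of PPLive's two-buffer structure: chunks downloaded from
    the network land in the TV engine buffer, possibly out of order;
    runs of consecutive chunks move on to the media player buffer."""

    def __init__(self):
        self.engine = {}        # chunk id -> payload, may contain gaps
        self.player = deque()   # in-order chunks awaiting playback
        self.next_needed = 0    # first chunk not yet handed to the player
        self.playing = False

    def receive(self, chunk_id, payload):
        self.engine[chunk_id] = payload
        # length of the run of consecutive chunks starting at next_needed
        run = 0
        while self.next_needed + run in self.engine:
            run += 1
        if run >= ENGINE_THRESHOLD:          # handoff to the player buffer
            for _ in range(run):
                self.player.append(self.engine.pop(self.next_needed))
                self.next_needed += 1
        if len(self.player) >= PLAYER_LEVEL:
            self.playing = True              # playback starts

buf = DoubleBuffer()
for cid in [0, 2, 3, 1, 4, 5, 7, 6]:         # out-of-order arrivals
    buf.receive(cid, b"chunk")
print(buf.playing, len(buf.player))          # True 8
```

   Out-of-order arrivals are thus hidden from the player: nothing is
   handed over until the run starting at the playback position is long
   enough, which is what smooths out download rate variations.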
4.3.  Zattoo

   Zattoo [Zattoo] is a P2P live streaming system that was launched in
   Switzerland in 2006, coinciding with the UEFA European Football
   Championship, and in a few years attracted almost 10 million
   registered users in several European countries.

   Figure 4 depicts the high level architecture of the Zattoo system.
   The main reference for the information provided in this document is
   [IMC09].

   +-----------------------------------+
   | --------------------------------- |
   | |       Broadcast Server        | |         +------+
   | --------------------------------- |---------|Peer 1|-----+
   | |     Authentication Server     | |         +------+     |
   | --------------------------------- |               +-------------+
   | |       Rendezvous Server       | |               |Repeater node|
   | --------------------------------- |               +-------------+
   | |  Bandwidth Estimation Server  | |         +------+     |
   | --------------------------------- |---------|Peer 2|-----+
   | |         Other Servers         | |         +------+
   | --------------------------------- |
   +-----------------------------------+

     Figure 4: High level overview of the Zattoo system architecture

   The broadcast server is in charge of capturing, encoding, encrypting
   and sending the TV channel to the Zattoo network.  A number N of
   logical sub-streams is derived from the original stream, and packets
   of the same order in the sub-streams are grouped together into so-
   called segments.  Each segment is then coded via a Reed-Solomon
   error correcting code, in such a way that any number k < N of
   received packets in the segment is enough to reconstruct the whole
   segment.

   The authentication server is the first point of contact for a peer
   that joins the system, and it authenticates Zattoo users.  The user
   then contacts the rendezvous server and specifies the TV channel of
   interest.  The rendezvous server returns a list of Zattoo peers that
   have already joined the requested channel.  Hence, the rendezvous
   server plays the role of tracker.
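   The any-k-of-N property of the segment coding can be illustrated
   with a deliberately simplified stand-in: a single XOR parity packet,
   i.e. the N = k+1 special case.  Real Reed-Solomon coding, as used by
   Zattoo, tolerates more than one loss per segment and is considerably
   more involved; the sketch below only demonstrates the recovery
   principle.

```python
from functools import reduce

def _xor(packets):
    """Byte-wise XOR of a list of equal-length byte strings."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), packets)

def encode_segment(packets):
    """Append one XOR parity packet, so any k = len(packets) of the
    k+1 coded packets suffice to recover the segment."""
    return packets + [_xor(packets)]

def decode_segment(received, k):
    """received: list of (index, packet) pairs, any k of the k+1 coded
    packets.  Returns the k original data packets of the segment."""
    idx = {i for i, _ in received}
    data = {i: p for i, p in received if i < k}
    if len(data) < k:                  # exactly one data packet missing:
        missing = (set(range(k)) - idx).pop()
        data[missing] = _xor([p for _, p in received])  # parity recovers it
    return [data[i] for i in range(k)]

pkts = [b"AAAA", b"BBBB", b"CCCC"]     # k = 3 packets of one segment
coded = encode_segment(pkts)           # 4 coded packets incl. parity
# packet 2 was lost in transit, yet the segment is fully reconstructed:
print(decode_segment([(0, coded[0]), (1, coded[1]), (3, coded[3])], k=3) == pkts)
```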
   At this point the direct interaction between peers starts, using the
   peer protocol.

   A new Zattoo user contacts the peers returned by the rendezvous
   server in order to identify a set of neighboring peers covering the
   full set of sub-streams of the TV channel.  This process is known in
   Zattoo jargon as Peer Division Multiplexing (PDM).  To ease the
   identification of neighboring peers, each contacted peer also
   provides the list of its own known peers, so that a new Zattoo user
   can, if needed, contact peers beyond the ones indicated by the
   rendezvous server.  In selecting which peers to establish
   connections with, a peer adopts the criterion of topological
   closeness.  The topological location of a peer is defined in Zattoo
   as (in order of preference) its subnet number, its autonomous system
   number and its country code, and it is provided to each peer by the
   authentication server.

   The Zattoo peer protocol also provides a mechanism to make the PDM
   process adapt to bandwidth fluctuations.  First of all, a peer
   controls the admission of new connections based on its available
   uplink bandwidth.  This is estimated i) at the beginning, with each
   peer sending probe messages to the Bandwidth Estimation server, and
   ii) while forwarding sub-streams to other peers, based on the
   quality-of-service feedback received from those peers.  A quality-
   of-service feedback is sent from the receiver to the sender only
   when the quality of the received sub-stream falls below a given
   threshold.  So if a quality-of-service feedback is received, a
   Zattoo peer decrements its estimate of the available uplink
   bandwidth, and if this drops below the amount needed to support the
   current connections, an appropriate number of connections is closed.
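   The resulting estimator behaves much like a TCP congestion window.
   The sketch below assumes unit-sized steps and a single threshold,
   both illustrative rather than Zattoo's actual values: a quality-of-
   service complaint decrements the estimate, while a quiet interval
   grows it, doubling below the threshold and linearly above it.

```python
def update_uplink_estimate(estimate, got_qos_complaint, threshold):
    """One adaptation step of the (assumed) uplink bandwidth estimate.
    Complaint -> decrement; quiet interval -> TCP-congestion-window-
    style growth: doubling below the threshold, +1 above it."""
    if got_qos_complaint:
        return estimate - 1.0       # back off: a receiver saw bad quality
    if estimate < threshold:
        return estimate * 2.0       # fast growth while far from capacity
    return estimate + 1.0           # cautious linear probing near capacity

# A possible trajectory: three quiet intervals, one complaint, one quiet.
e = 4.0
for complaint in [False, False, False, True, False]:
    e = update_uplink_estimate(e, complaint, threshold=16.0)
print(e)   # 17.0
```

   A Zattoo peer would additionally close connections whenever the
   estimate drops below what the currently admitted connections need.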
   On the other hand, if no quality-of-service feedback is received for
   a given time interval, a Zattoo peer increments its estimate of the
   available uplink bandwidth according to a mechanism very similar to
   that of the TCP congestion window (doubling or linear increase,
   depending on whether the estimate is below or above a given
   threshold).

   Figure 4 also shows that there exist two classes of Zattoo nodes:
   simple peers, whose behavior has already been presented, and
   repeater nodes, which implement the same peer protocol as simple
   peers but are, in addition, high-bandwidth peers able to forward any
   sub-stream.  In this way, repeater nodes serve as bandwidth
   multipliers.

4.4.  PPStream

   PPStream [PPStream] is a very popular piece of P2P streaming
   software in China and in many other countries of East Asia.

   The system architecture of PPStream is very similar to that of
   PPLive.  When a PPStream peer joins the system, it retrieves the
   list of channels from the channel list server.  After selecting the
   channel to watch, a PPStream peer retrieves from the peer list
   server the identifiers of the peers that are watching the selected
   channel, and it establishes connections that are used first of all
   to exchange buffer-maps.  In more detail, a PPStream chunk is
   identified by the play time offset, which is encoded by the
   streaming source, and it is subdivided into sub-chunks.  Buffer-maps
   in PPStream therefore carry the play time offset information and are
   strings of bits that indicate the availability of sub-chunks.  After
   receiving the buffer-maps from the connected peers, a PPStream peer
   selects the peers from which to download sub-chunks according to a
   rate-based algorithm, which maximizes the utilization of uplink and
   downlink bandwidth.

4.5.  Tribler

   Tribler [Tribler] is a BitTorrent [Bittorrent] client that has gone
   well beyond the BitTorrent model, thanks among other things to its
   support for video streaming.  Initially developed by a team of
   researchers at Delft University of Technology, Tribler was able both
   i) to attract attention from other universities and media companies
   and ii) to receive European Union research funding (the P2P-Next and
   QLectives projects).

   Differently from BitTorrent, where a tracker server centrally
   coordinates the peers' uploads/downloads of chunks and peers
   interact directly with each other only when they actually upload/
   download chunks to/from each other, there is no tracker server in
   Tribler and, as a consequence, there is no need for a tracker
   protocol.

   This is also illustrated in Figure 5, which depicts the high level
   architecture of Tribler.

                     +------------+
                     | Superpeer  |
                     +------------+
                       /        \
                      /          \
       +------------+             +------------+
       |   Peer 2   |-------------|   Peer 3   |
       +------------+             +------------+
          /    |    \
         /     |     \
        /   +--------------+
       /    |    Peer 1    |
      /     +--------------+
     /        /        \
   +------------+       +--------------+
   |   Peer 4   |       |    Peer 5    |
   +------------+       +--------------+
         \                  /
          \                /
           \        +------------+
    +------------+  | Superpeer  |
    | Superpeer  |  +------------+
    +------------+

     Figure 5: High level overview of the Tribler system architecture

   Regarding the peer protocol and the organization of the overlay
   mesh, the Tribler bootstrap process consists of preloading well-
   known superpeer addresses into the peer's local cache, in such a way
   that a joining peer randomly selects a superpeer from which to
   retrieve a random list of already active peers to establish overlay
   connections with.  A gossip-like mechanism called BuddyCast allows
   Tribler peers to exchange their preference lists, that is, the lists
   of their downloaded files, and to build the so-called Preference
   Cache.
   This cache is used to calculate similarity levels among peers and to
   identify the so-called "taste buddies" as the peers with the highest
   similarity.  Thanks to this mechanism, each peer maintains two lists
   of peers: i) a list of its top-N taste buddies along with their
   current preference lists, and ii) a list of random peers.  A peer
   alternately selects a peer from one of the two lists and sends it
   its preference list, its taste-buddy list and a selection of random
   peers.  The goal behind the propagation of this kind of information
   is to support the remote search function, a completely decentralized
   search service that consists of querying the Preference Caches of
   taste buddies in order to find the torrent file associated with a
   file of interest.  If no torrent is found in this way, Tribler users
   may alternatively resort to a web-based torrent collector server
   available for BitTorrent clients.

   Tribler supports video streaming in two different forms: video on
   demand and live streaming.

   As regards video on demand, a peer first of all keeps its neighbors
   informed about the chunks it has.  On the one hand, it then applies
   a suitable chunk-picking policy to establish the order in which to
   request the chunks it wants to download.  This policy aims to ensure
   that chunks reach the media player in order while, at the same time,
   overall chunk availability is maximized.  To this end, the chunk-
   picking policy distinguishes among high, mid and low priority
   chunks, depending on their closeness to the playback position.  High
   priority chunks are requested first and in strict order.  When there
   are no more high priority chunks to request, mid priority chunks are
   requested according to a rarest-first policy.  Finally, when there
   are no more mid priority chunks to request, low priority chunks are
   requested according to a rarest-first policy as well.
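   The chunk-picking policy above can be sketched as follows.  The band
   widths are assumed values for illustration (Tribler's actual band
   sizes are not specified here); "rarity" stands for the number of
   copies of a chunk seen among neighbors, as learned from their
   availability announcements.

```python
HIGH_PRIORITY = 10   # chunks this close to playback: strict order (assumed)
MID_PRIORITY = 40    # width of the mid-priority band (assumed)

def next_chunk_to_request(playback_pos, missing, rarity):
    """Sketch of Tribler's VoD chunk picking: high-priority chunks
    (closest to the playback position) strictly in order, then the
    mid- and low-priority bands, each rarest-first."""
    high = [c for c in missing if c < playback_pos + HIGH_PRIORITY]
    if high:
        return min(high)                    # strict playback order
    mid = [c for c in missing
           if c < playback_pos + HIGH_PRIORITY + MID_PRIORITY]
    band = mid if mid else list(missing)    # fall through to low priority
    return min(band, key=lambda c: (rarity[c], c))  # rarest first

missing = {3, 15, 60, 61}
rarity = {3: 5, 15: 1, 60: 1, 61: 3}       # copies seen among neighbors
print(next_chunk_to_request(0, missing, rarity))            # 3: high priority
print(next_chunk_to_request(5, {15, 60, 61}, rarity))       # 15: mid, rarest
```

   Near the playback position, timeliness beats rarity; farther out,
   rarest-first keeps overall chunk availability in the swarm high.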
   On the other hand, Tribler peers follow the give-to-get policy to
   establish which peer neighbors are allowed to request chunks (to be
   "unchoked", in BitTorrent jargon).  In more detail, time is
   subdivided into periods, and after each period Tribler peers first
   sort their neighbors by decreasing number of chunks the neighbors
   have forwarded to other peers, counting only the chunks they
   originally received from the sorting peer.  In case of a tie,
   Tribler sorts the neighbors by decreasing total number of chunks
   they have forwarded to other peers.  A Tribler peer then unchokes
   the three highest-ranked neighbors and, in order to saturate its
   upload bandwidth without decreasing the performance of individual
   connections, it further unchokes a limited number of neighbors.
   Moreover, in order to search for better neighbors, every two periods
   a Tribler peer randomly selects a new peer among the remaining
   neighbors and optimistically unchokes it.

   As regards live streaming, differently from the video on demand
   scenario, the number of chunks cannot be known in advance.  As a
   consequence, a sliding window of fixed width is used to identify the
   chunks of interest: every chunk that falls outside the sliding
   window is considered outdated, is locally deleted and is considered
   deleted by the peer's neighbors as well.  In this way, when a peer
   joins the network, it learns which chunks its neighbors possess and
   identifies the most recent one.  This is taken as the beginning of
   the sliding window at the joining peer, which starts downloading and
   uploading chunks as described for the video on demand scenario.

4.6.  QQLive

   QQLive [QQLive] is large-scale video broadcast software covering
   streaming media encoding, distribution and broadcasting.
   Its client runs on the web, as a desktop program or in other
   environments, and provides rich interactive functions in order to
   meet the viewing requirements of different kinds of users.

   QQLive adopts a combined Content Delivery Network (CDN) [CDN] and
   P2P architecture for video distribution, which sets it apart from
   other popular P2P streaming applications.  QQLive provides video
   through source servers and a CDN, and the video content can be
   pushed by the CDN to every region throughout China.  Within each
   region, QQLive adopts P2P technology for video content distribution.

   One of the main aims of QQLive is to use the simplest possible
   architecture to provide the best user experience.  QQLive therefore
   relies on a few servers to support P2P file distribution.  There are
   two kinds of server in QQLive: the STUN server [RFC5389] and the
   Tracker server.  The STUN server is responsible for NAT traversal,
   while the Tracker server is responsible for providing content
   address information.  A group of these two kinds of server provides
   the service.  There are no superpeers in QQLive.

   The working flow of QQLive includes a startup stage and a play
   stage.

   -  The startup stage includes only interactions between peers and
      Tracker servers.  There is a built-in URL in the QQLive client
      software.  When the client starts up and connects to the network,
      it obtains the Tracker's address through DNS and tells the
      Tracker which video content it owns.

   -  The play stage includes interactions between peers, or between
      peers and the CDN.  Generally, the client downloads the video
      content from the CDN during the first 30 seconds and then gets
      content from other peers.  If no peer owns the content, the
      client falls back to getting the content from the CDN.

   As the client plays the video, it also stores the video on the hard
   disk.  The default storage space is one GByte.  If the storage space
   is full, the client deletes the oldest content.
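   The local store described above amounts to an oldest-first eviction
   cache with a one-GByte default cap.  The sketch below is a minimal
   model of that behavior (the class and field names are illustrative,
   not QQLive's):

```python
from collections import OrderedDict

CACHE_LIMIT = 1 * 1024**3   # default storage space: one GByte

class DiskCache:
    """Sketch of QQLive's local video store: watched content is saved
    to disk and, once the storage space fills up, the oldest stored
    content is deleted first."""

    def __init__(self, limit=CACHE_LIMIT):
        self.limit = limit
        self.items = OrderedDict()   # name -> size; insertion order = age

    def store(self, name, size):
        self.items[name] = size
        while sum(self.items.values()) > self.limit:
            self.items.popitem(last=False)   # evict the oldest content

cache = DiskCache(limit=100)                 # tiny limit for the example
for show, size in [("ep1", 40), ("ep2", 40), ("ep3", 40)]:
    cache.store(show, size)
print(list(cache.items))    # ['ep2', 'ep3']: 'ep1' was evicted
```

   Keeping watched content on disk is what lets VCR operations be
   served locally, without contacting other peers or the CDN.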
When the client performs VCR operations, if the video content is stored on the hard disk, the client does not interact with other peers or the CDN. If messages or video content are lost, the client requests retransmission; the retransmission interval depends on network conditions. Thanks to the CDN support, QQLive does not need an elaborate transmission and chunk selection strategy; its strategy is simple and quite different from BitTorrent's.

5. Tree-based P2P Streaming Systems

In tree-based P2P streaming applications peers self-organize into a tree-shaped overlay network, where peers do not ask for specific chunks but simply receive them from their so-called "parent" node. Such a content delivery model is denoted as push-based. Receiving peers are denoted as children, whereas sending nodes are denoted as parents. The overhead to maintain the overlay topology is usually lower for tree-based streaming applications than for mesh-based ones, and the delay performance is usually better. On the other hand, the greatest drawback of this type of application is that each node depends on a single node, its parent in the overlay tree, to receive the streamed content. Thus, tree-based streaming applications suffer from the peer churn phenomenon more than mesh-based ones do.

5.1. End System Multicast (ESM)

Even though the End System Multicast (ESM) project has ended and the ESM infrastructure is not currently deployed anywhere, we decided to include it in this survey for two reasons. First, it was probably the first and most significant research work proposing to implement multicast functionality at end hosts in a P2P fashion. Second, the ESM research group at Carnegie Mellon University developed the first P2P live streaming system in the world, and some of its members later founded the Conviva [conviva] live platform.
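As a concrete illustration of the push-based model described at the beginning of this section, the following toy sketch (ours, not taken from ESM or any other surveyed system) shows how a node in a tree overlay forwards every chunk received from its parent to all of its children, with no per-chunk requests:

```python
class TreeNode:
    """Minimal node in a push-based tree overlay. Illustrative only."""

    def __init__(self, name):
        self.name = name
        self.children = []
        self.received = []

    def add_child(self, child):
        self.children.append(child)

    def push(self, chunk):
        # Receive a chunk from the parent (or the source) and
        # immediately push it down the tree; children never ask
        # for specific chunks.
        self.received.append(chunk)
        for child in self.children:
            child.push(chunk)

# A small example tree: source -> {a, b}, a -> {c}.
source = TreeNode("source")
a, b, c = TreeNode("a"), TreeNode("b"), TreeNode("c")
source.add_child(a)
source.add_child(b)
a.add_child(c)

for seq in range(3):  # the source pushes chunks 0, 1, 2
    source.push(seq)
```

The sketch also makes the drawback noted above visible: if node "a" failed, node "c" would receive nothing until it found a new parent.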
The main property of ESM is that it constructs the multicast tree in a two-step process. The first step constructs a mesh among the participating peers, whereas the second step constructs data delivery trees rooted at the stream source. Therefore a peer participates in two types of topology management structures: a control structure that guarantees peers are always connected in a mesh, and a data delivery structure that guarantees data is delivered over an overlay multicast tree.

There exist two versions of ESM.

The first version of the ESM architecture [ESM1] was conceived for small-scale multi-source conferencing applications. Regarding the mesh construction phase, when a new member wants to join the group, an out-of-band bootstrap mechanism provides the new member with a list of some group members. The new member randomly selects a few group members as peer neighbors. The number of selected neighbors never exceeds a given bound, which reflects the bandwidth of the peer's connection to the Internet. Each peer periodically emits a refresh message with a monotonically increasing sequence number, which is propagated across the mesh in such a way that each peer can maintain a list of all the other peers in the system. When a peer leaves, either it notifies its neighbors and the information is propagated across the mesh to all the participating peers, or the peer's neighbors detect the abrupt departure and propagate the information through the mesh. To improve mesh/tree quality, on the one hand peers constantly and randomly probe each other to add new links; on the other hand, peers continually monitor existing links in order to drop the ones that are not perceived as good-quality links.
This is achieved by evaluating a utility function and a cost function, which are designed to guarantee that the shortest overlay delay between any pair of peers is comparable to the unicast delay between them. Regarding the multicast tree construction phase, peers run a distance-vector protocol on top of the mesh and use latency as the routing metric. In this way, data delivery trees can be constructed from the reverse shortest path between source and recipients.

The second and subsequent version of the ESM architecture [ESM2] was conceived for an operational large-scale single-source Internet broadcast system. As regards the mesh construction phase, a node joins the system by contacting the source and retrieving a random list of already connected nodes. Information about active participating peers is maintained through a gossip protocol: each peer periodically advertises to a randomly selected neighbor a subset of the nodes it knows and the last timestamp it has heard for each known node. The main difference from the first version is that the second version constructs and maintains the data delivery tree in a completely distributed manner according to the following criteria: i) each node maintains a degree bound on the maximum number of children it can accept, depending on its uplink bandwidth; ii) the tree is optimized mainly for bandwidth and secondarily for delay. To this end, a parent selection algorithm identifies, among the neighbors, the one that guarantees the best performance in terms of throughput and delay. The same algorithm is also applied when a parent leaves the system or when a node experiences poor performance (in terms of both bandwidth and packet loss). As a loop prevention mechanism, each node also keeps information about the hosts on the path between the source and its parent node.
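The parent selection criteria above (degree bound, bandwidth-first optimization, and path-based loop prevention) can be sketched as follows. The field names and data layout are our own assumptions; ESM's actual algorithm is more elaborate than this sketch.

```python
def select_parent(candidates, self_id):
    """Illustrative sketch of the described parent selection criteria:
    discard candidates that are saturated (at their degree bound) or
    whose path to the source already contains this node (which would
    create a loop), then prefer higher throughput and, on ties,
    lower delay. Not ESM code; field names are hypothetical."""
    eligible = [
        c for c in candidates
        if c["children"] < c["degree_bound"]  # has spare child capacity
        and self_id not in c["source_path"]   # loop prevention
    ]
    if not eligible:
        return None
    # Bandwidth is the primary criterion, delay the secondary one.
    return max(eligible,
               key=lambda c: (c["throughput_kbps"], -c["delay_ms"]))

candidates = [
    {"id": "p1", "children": 4, "degree_bound": 4,
     "source_path": ["s"], "throughput_kbps": 900, "delay_ms": 10},
    {"id": "p2", "children": 1, "degree_bound": 4,
     "source_path": ["s", "me"], "throughput_kbps": 800, "delay_ms": 10},
    {"id": "p3", "children": 1, "degree_bound": 4,
     "source_path": ["s"], "throughput_kbps": 700, "delay_ms": 50},
    {"id": "p4", "children": 1, "degree_bound": 4,
     "source_path": ["s"], "throughput_kbps": 700, "delay_ms": 20},
]
best = select_parent(candidates, "me")
```

Here "p1" is rejected as saturated, "p2" would create a loop, and between the remaining equal-bandwidth candidates the lower-delay one is chosen.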
This second ESM prototype is also able to cope with receiver heterogeneity and with the presence of NATs/firewalls. In more detail, the audio stream is kept separate from the video stream, and multiple bit-rate video streams are encoded at the source and broadcast in parallel through the overlay tree. Audio is always prioritized over video streams, and lower quality video is always prioritized over higher quality video. In this way, the system can dynamically select the most suitable video stream according to receiver bandwidth and network congestion level. Moreover, in order to account for the presence of hosts behind NATs/firewalls, the tree is structured in such a way that public hosts use hosts behind NATs/firewalls as parents.

6. Hybrid P2P streaming applications

This type of application aims at integrating the main advantages of the mesh-based and tree-based approaches. To this end, the overlay topology is a mixed mesh-tree, and the content delivery model is push-pull.

6.1. New Coolstreaming

Coolstreaming, first released in summer 2004 with a mesh-based structure, arguably represented the first successful large-scale P2P live streaming system. Nevertheless, it suffered from poor delay performance and from the high overhead associated with each video block transmission. In an attempt to overcome these limitations, New Coolstreaming [NEWCOOLStreaming] adopts a hybrid mesh-tree overlay structure and a hybrid push-pull content delivery mechanism.

As in the old Coolstreaming, a newly joined node contacts a special bootstrap node and retrieves a partial list of active nodes in the system.

The interaction with the bootstrap node is the only one related to the tracker protocol; the rest of the New Coolstreaming interactions are related to the peer protocol.

The newly joined node then establishes a partnership with a few active nodes by periodically exchanging information on content availability.
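As detailed in the following paragraphs, New Coolstreaming groups the chunks of the stream into multiple sub-streams. A minimal sketch of such a grouping, assuming a round-robin assignment of sequence numbers to sub-streams (the exact rule is our assumption, not taken from the New Coolstreaming specification):

```python
def substream_of(seq_num, num_substreams):
    """Hypothetical round-robin mapping from a chunk's sequence
    number (playback order) to a sub-stream index. Illustrative
    only; the real assignment rule may differ."""
    return seq_num % num_substreams

# With 4 sub-streams, consecutive chunks rotate across sub-streams,
# so each sub-stream carries every 4th chunk of the playback order.
K = 4
substreams = {i: [seq for seq in range(12) if substream_of(seq, K) == i]
              for i in range(K)}
```

Under this assumption, a peer that pushes one whole sub-stream to a partner delivers every K-th chunk of the playback order, which is why the overlay contains as many trees as there are sub-streams.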
In New Coolstreaming, streaming content is divided into equal-size blocks or chunks, which are unambiguously associated with sequence numbers that represent the playback order. Chunks are then grouped to form multiple sub-streams.

As in most P2P streaming applications, information on content availability is exchanged in the form of buffer-maps. However, New Coolstreaming buffer-maps differ from the usual format of bit strings where each bit represents the availability of one chunk. Buffer-maps in New Coolstreaming instead consist of two vectors. The first vector reports the sequence number of the last chunk received for each sub-stream. The second vector is used to explicitly request chunks from partner peers. In more detail, the second vector has as many bits as there are sub-streams, and a peer receiving a bit set to "1" for a given sub-stream is being requested by the sending peer to upload the chunks belonging to that sub-stream. Since chunks are explicitly requested, data delivery may be regarded as pull-based. However, data delivery is push-based as well, since every time a node is requested to upload chunks, it uploads all chunks of that sub-stream starting from the one indicated in the first vector of the received buffer-map. Hence, the overall overlay topology is mesh-based, but it is also possible to identify as many overlay trees as there are sub-streams.

In order to improve the quality of the mesh-tree overlay, each node continuously monitors the quality of its active connections in terms of the mutual delay between sub-streams. If such quality drops below a predefined threshold, a New Coolstreaming node selects a new parent among its partners. Parent re-selection is also triggered for a peer when its previous parent leaves.

7. Security Considerations

Security in P2P streaming applications may be addressed at two different levels: at the control protocol level on the one hand, and at the streamed multimedia content level on the other.

In PPLive and PPStream, control protocol messages are sent over HTTP, UDP and TCP mostly in plain text. This allows malicious users to interfere with the normal operation of the system and can lead to attacks that render key components of the system ineffective.

In Zattoo, an authentication server authenticates Zattoo users and assigns them a ticket with a limited lifetime. A user then presents the ticket received from the authentication server to the rendezvous server. Provided that the presented ticket is valid, the rendezvous server returns a list of Zattoo peers that have already joined the requested channel, together with a signed channel ticket.

In Tribler, authentication of peers is based on secure, permanent peer identifiers called PermIDs. A PermID maps to a single IP address and port number and is initially used to identify users. The idea is to assign each Tribler user a public/private key pair based on Elliptic Curve Cryptography (ECC), where the public key acts as the user's PermID. Users distribute their PermIDs to their friends out-of-band to establish trusted friend relationships. When two peers connect as part of a download, they authenticate each other using the standard ISO/IEC 9798-3 [ISO/IEC 9798-3] challenge/response identification protocol. If a peer is successfully authenticated but is not a friend of the user (i.e., does not appear in the list of friends' PermIDs), the Tribler client allows it to request non-privileged operations, such as exchanging file preferences. If the peer is a friend, it may also request privileged operations, such as coordinating a friends-assisted download.
Moreover, Tribler provides security at the streamed content level too. In the video on demand scenario, torrent files include a hash for each chunk in order to prevent malicious attackers from corrupting data. In the live streaming scenario, torrent files include the public key of the stream source. Each chunk is then assigned an absolute sequence number and a timestamp, and is signed with the source's private key. This mechanism allows Tribler peers to use the public key included in the torrent file to verify the integrity of each chunk.

In QQLive, both the tracker and peer protocols are fully proprietary and encrypt whole messages. The tracker protocol uses UDP, and the port for the tracker server is fixed. As for the streamed content, if the client obtains the stream from the CDN, it uses HTTP on port 80 without encryption. If the client obtains the stream from other peers, it uses UDP (rather than RTP/RTCP) to transfer the encrypted media stream.

8. IANA Considerations

This document has no actions for IANA.

9. Author List

Other authors of this document are listed below.

Hui Zhang, NEC Labs America.

Jun Lei, University of Goettingen.

Gonzalo Camarillo, Ericsson.

Yong Liu, Polytechnic University.

Delfin Montuno, Huawei.

Lei Xie, Huawei.

10. Acknowledgments

We would like to acknowledge Jiang Xingfeng for providing good ideas for this document.

11. Informative References

[RFC6972] RFC 6972, "Problem Statement and Requirements of the Peer-to-Peer Streaming Protocol (PPSP)".

[Octoshape] Alstrup, Stephen, et al., "Introducing Octoshape - a new technology for large-scale streaming over the Internet".

[CNN] CNN web site, http://www.cnn.com

[PPLive] PPLive web site, http://www.pplive.com

[P2PIPTVMEA] Silverston, Thomas, et al., "Measuring P2P IPTV Systems", June 2007.
[CNSR] Li, Ruixuan, et al., "Measurement Study on PPLive Based on Channel Popularity", May 2011.

[Zattoo] Zattoo web site, http://www.zattoo.com

[IMC09] Chang, Hyunseok, et al., "Live streaming performance of the Zattoo network", November 2009.

[PPStream] PPStream web site, http://www.ppstream.com

[Tribler] Tribler Protocol Specification, January 2009, available online at http://svn.tribler.org/bt2-design/proto-spec-unified/trunk/proto-spec-current.pdf

[Bittorrent] BitTorrent web site, http://www.bittorrent.com

[QQLive] QQLive web site, http://v.qq.com

[CDN] CDN wiki, http://en.wikipedia.org/wiki/Content_delivery_network

[RFC5389] RFC 5389, "Session Traversal Utilities for NAT (STUN)".

[conviva] Conviva web site, http://www.conviva.com

[ESM1] Chu, Yang-hua, et al., "A Case for End System Multicast", June 2000. (http://esm.cs.cmu.edu/technology/papers/Sigmetrics.CaseForESM.2000.pdf)

[ESM2] Chu, Yang-hua, et al., "Early Experience with an Internet Broadcast System Based on Overlay Multicast", June 2004. (http://static.usenix.org/events/usenix04/tech/general/full_papers/chu/chu.pdf)

[NEWCOOLStreaming] Li, Bo, et al., "Inside the New Coolstreaming: Principles, Measurements and Performance Implications", April 2008.
[ISO/IEC 9798-3] ISO web site, http://www.iso.org/iso/catalogue_detail.htm?csnumber=29062

Authors' Addresses

Yingjie Gu
Unaffiliated

Email: guyingjie@gmail.com

Ning Zong (editor)
Huawei
101 Software Avenue
Nanjing 210012
China

Phone: +86-25-56624760
Fax: +86-25-56624702
Email: zongning@huawei.com

Yunfei Zhang
Coolpad
China Mobile

Email: hishigh@gmail.com

Francesca Lo Piccolo
Cisco
Via del Serafico 200
Rome 00142
Italy

Phone: +39-06-51645136
Email: flopicco@cisco.com

Shihui Duan
CATR
No.52 HuaYuan BeiLu
Beijing 100191
P.R.China

Phone: +86-10-62300068
Email: duanshihui@catr.cn