PPSP                                                        Seok-Kap Ko
Internet Draft                                              Seung-Hun Oh
Intended status: Informational                             Byung-Tak Lee
Expires: April 18, 2010                                            ETRI
                                                        October 18, 2009

               Introduction of Olive and Grid Delivery
          draft-softgear-ppsp-olive-griddelivery-intro-00.txt

Status of this Memo

   This Internet-Draft is submitted to IETF in full conformance with
   the provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF), its areas, and its working groups.  Note that
   other groups may also distribute working documents as Internet-
   Drafts.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."

   The list of current Internet-Drafts can be accessed at
   http://www.ietf.org/ietf/1id-abstracts.txt.

   The list of Internet-Draft Shadow Directories can be accessed at
   http://www.ietf.org/shadow.html.

   This Internet-Draft will expire on April 18, 2010.

Copyright Notice

   Copyright (c) 2009 IETF Trust and the persons identified as the
   document authors.  All rights reserved.
   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents in effect on the date of
   publication of this document (http://trustee.ietf.org/license-info).
   Please review these documents carefully, as they describe your
   rights and restrictions with respect to this document.

Abstract

   This draft briefly introduces the Olive distributed media platform
   and the Grid Delivery solution.  The Olive platform, built by ETRI,
   provides a live peer-to-peer streaming service.  The Grid Delivery
   solution, built by Peering Portal, is one of the most popular peer-
   to-peer solutions for VoD, AoD, live streaming, and file download
   services in Korea.

Table of Contents

   1. Introduction
   2. P2P Streaming Architectures
      2.1. Simple tracker based architecture
      2.2. Server controlled architecture
      2.3. Fully distributed architecture
   3. ETRI Olive platform
      3.1. Overview
      3.2. Basic Operation
         3.2.1. Channel Creation
         3.2.2. Channel Watching
      3.3. Messages
   4. Peering Portal Grid Delivery solution
      4.1. Overview
      4.2. Features
         4.2.1. Caching
         4.2.2. Indexing
         4.2.3. Cache state update
         4.2.4. Segmentation
         4.2.5. Meta information
         4.2.6. Protocol
      4.3. Operation
   5. Security Considerations
   6. Acknowledgments
   7. References
      7.1. Normative References
      7.2. Informative References

1. Introduction

   This document introduces two existing P2P streaming architectures.
   The Olive distributed media platform was developed by ETRI
   (Electronics and Telecommunications Research Institute, Korea) as an
   experimental P2P streaming system.  The platform has been deployed
   to several enterprise sites.

   Grid Delivery is one of the most popular P2P solutions for VoD, AoD,
   live streaming, and download services in Korea.  The Grid Delivery
   solution was developed by Peering Portal [PeeringPortal] and has
   been deployed to 14 major video streaming services.  This document
   shares their experience of running a commercial P2P service.  The
   Grid Delivery solution reduced server network traffic by 80% and
   improved overall system performance several times over.

   This introduction may help the reader understand how P2P streaming
   services work and compare various P2P streaming architectures.  This
   draft could serve as an "architectural survey" document of the PPSP
   working group in the future.
2. P2P Streaming Architectures

   P2P IPTV architectures can be categorized according to how much
   work a central server performs.  The terminology in this document
   may not match that of the PPSP BOF [PPSP.WG].

2.1. Simple tracker based architecture

   The simple tracker based P2P streaming architecture is the most
   basic one.  The following figure shows this type of architecture.

   +--------+
   | Server |
   +--------+
       ^
       |       +---------+
       |  +--->| Tracker | (peer list)
       |  |    +---------+
       |  |      all/random peer list
       v  v                 +--------+
   +--------+  +----------->| Peer A |
   | Peer J |--+            +--------+
   +--------+  |            +--------+  Gossip  +--------+
               +----------->| Peer B |--------->| Peer D |
               |            +--------+          +--------+
               |            +--------+
               +----------->| Peer C |
                            +--------+

   A server provides channel information.  This server gives a new
   joining peer the tracker information for a certain channel,
   including the tracker's IP address.  The new joining peer (Peer J in
   the figure) connects to the tracker and asks for a peer list.  The
   tracker stores the complete peer list, which contains the IP address
   and port number of every peer attending the channel.  The content
   provider of the channel is also included in this peer list.  After
   the joining peer (Peer J) receives the complete or a random peer
   list from the tracker, it checks which of the peers in the list
   (Peers A, B, and C in the figure) is the best one; there are several
   methods to find the best peer.  The joining peer can also obtain
   more peers from the initial peers by gossiping.  If Peer J decides
   that Peer A is the best peer in the list, Peer J connects to Peer A
   and asks it to send the media.  Once Peer J receives the media
   stream from Peer A, Peer J registers itself with the tracker and is
   added to the tracker's peer list.  The defining characteristic of
   this architecture is that the tracker stores only a peer list per
   channel, and this list holds only coarse information about each
   peer, such as its IP address and port.  The initial setup time of
   this architecture is long because of initial optimization and
   gossiping.
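
   As an illustration only, the following Python sketch outlines the
   join flow described above.  The tracker and peer objects and their
   methods (get_peer_list, gossip_peer_list, request_media, register)
   are hypothetical; the draft does not define a concrete wire protocol
   for this architecture, and the "best peer" metric is left open.

      def join_channel(tracker, channel_id, probe):
          # 1. Ask the tracker for the complete (or a random) peer list.
          peers = tracker.get_peer_list(channel_id)

          # 2. Widen the view by gossiping with the initial peers.
          for peer in list(peers):
              peers.extend(peer.gossip_peer_list(channel_id))

          # 3. Probe the candidates and pick the "best" one; probe()
          #    stands in for whatever metric is used (e.g. RTT).
          best = min(peers, key=probe)

          # 4. Ask the best peer to send the media, then register with
          #    the tracker so later joiners can find us.
          best.request_media(channel_id)
          tracker.register(channel_id)
          return best

   The long initial setup time mentioned above corresponds to steps 2
   and 3: gossiping and probing happen before any media flows.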
2.2. Server controlled architecture

   In contrast to the tracker based architecture, the server controlled
   P2P streaming architecture uses a central control server.  This
   server stores all information about the channel, including the peer
   list and the virtual link information of the overlay.  This
   architecture normally supports the Tree-Push model.  The following
   figure shows this architecture.

         +----------------+
   +---->| Control Server |  (overlay topology)
   |     +----------------+
   |  best candidate peer
   |                        +--------+
   v          +------------>| Peer A |
   +--------+ |             +--------+
   | Peer J |-+             +--------+
   +--------+ +------------>| Peer B |
                            +--------+

   A new joining peer (Peer J) connects to the control server.  Peer J
   finds the channel on the server and asks it for the peer list.  The
   control server calculates virtual distances from Peer J to the
   existing peers in the overlay topology and finds the best candidate
   peers for Peer J.  In contrast to the tracker based architecture,
   the control server does not give the complete peer list to the
   joining peer (Peer J).  Instead, the control server returns a very
   small list containing only the best candidates (Peers A and B in the
   figure).  Peer J selects the best peer among the candidates (Peer A
   or B).  If Peer J selects Peer A as the best peer, Peer J notifies
   the control server of its selection, and the control server makes
   Peer A send the media to Peer J.  The control server knows
   everything in this channel, i.e., who sends to whom and who receives
   from whom.

   The setup delay of this architecture can be very small because the
   central control server gives optimal initial information to the
   joining peer.  This architecture is also well suited to CDN-style
   P2P streaming.  However, if there are too many peers in one channel,
   it can become a burden for the central server.

2.3. Fully distributed architecture

   The fully distributed architecture replaces the control server with
   a DHT (distributed hash table).  P2PSIP is one DHT implementation
   [I-D.ietf-p2psip-base].  PPSP chunk discovery [I-D.chunk-discovery]
   and the IPTV usage for RELOAD [I-D.p2psip-iptv] belong to this
   architecture.  This architecture normally supports the Mesh-Pull
   model.  The following figure shows this architecture.

         +----------------+
   +---->|  DHT (P2PSIP)  |  (peer list)
   |     +----------------+
   |  Who has chunk#11 in channel#9?
   |                        +--------+
   v          +------------>| Peer A |
   +--------+ |             +--------+
   | Peer J |-+             +--------+
   +--------+ +------------>| Peer B |
                            +--------+

   The DHT stores a peer list for each channel.  As in the server
   controlled architecture, the DHT gives the peer list and additional
   information to the joining peer.  The joining peer can use the
   additional information to select the best peer to connect to.

   In an example scenario, Peer J sends a channel ID to the DHT, and
   the DHT returns the peer list to Peer J.  Peer J selects the best
   peer in the list and asks that peer (Peer A) to send the media.
   After that, Peer J registers its own information in the DHT as a
   peer in the channel.

   This architecture supports serverless operation and can be very
   scalable.  However, frequent queries may load the overlay, and a
   single responsible peer cannot handle or store the whole peer list
   if there are too many peers in one channel.  The IPTV usage for
   RELOAD [I-D.p2psip-iptv] offers one solution to this problem.
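
   The following Python sketch, again purely illustrative, shows the
   DHT variant of the same join.  The dht object with get()/put(), the
   key format, and the probe() metric are assumptions; RELOAD
   [I-D.ietf-p2psip-base] and PPSP chunk discovery
   [I-D.chunk-discovery] define the real mechanisms.

      def fetch_chunk(dht, channel_id, chunk_no, self_addr, probe):
          # The DHT key names a (channel, chunk) pair, as in the
          # figure's query "Who has chunk#11 in channel#9?".
          key = "channel#%d/chunk#%d" % (channel_id, chunk_no)

          # Look up the peers currently advertising that chunk and
          # pick the best one using the returned extra information.
          candidates = dht.get(key)
          best = min(candidates, key=probe)
          data = best.request_chunk(channel_id, chunk_no)

          # Advertise ourselves as a holder of the chunk so that later
          # requests need not touch the original source.
          dht.put(key, self_addr)
          return data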
3. ETRI Olive platform

3.1. Overview

   The Olive platform is a peer-to-peer live streaming service platform
   built by ETRI.  It saves CAPEX and OPEX of a streaming service by
   combining CDN and P2P technology.  The Olive platform has been
   deployed to several enterprise sites.

   The following figure shows the Olive platform architecture.  The
   platform consists of a central control server, relays, channel
   providers, and viewers.  The Olive software allows a user to select
   the node's operating mode.  Olive is a server controlled
   architecture as described in Section 2.  In the control plane, all
   other nodes work as clients: they communicate with the central
   server through HTTP and a TCP based proprietary protocol, and they
   do not communicate with any node other than the central server.  In
   the data plane, they send and receive the actual media with each
   other.

   The central server controls all nodes and interworks with EPG
   (electronic program guide) servers.  The central server collects the
   configuration and bandwidth of all nodes.  All other nodes report
   their capacity, bandwidth usage, and P2P streaming configuration to
   the central server.  The central server has enough memory and
   computing power to handle all nodes.  One logical central server may
   consist of multiple servers for performance and availability.

                 +---------+
                 | Central |   +--------+
                 | Server  |---|  EPG   |
                 +---------+   | Server |
                      |        +--------+
       +---------+----+---+--------+--------+
       |         |        |        |        |
   +--------+ +-----+  +-----+  +-----+  +------+
   |Channel | |Relay|  |Relay|  |Relay|  |Viewer|
   |Provider|=|     |==|     |==|     |==|      |
   +--------+ +-----+  +-----+  +-----+  +------+

3.2. Basic Operation

   This section describes an example operation of Olive.  The peer
   selection algorithm can be replaced by administrative configuration.

3.2.1. Channel Creation

   The following figure shows the channel creation process.

   First, a channel provider connects to the EPG server through HTTP.
   The EPG server is a Web server which interworks with the central
   server.  The EPG server provides the channel provider with the
   connection information needed to establish a control connection to
   the central server.  This connection information consists of the
   protocol version and the IP address and port number of the central
   server.

   Second, the channel provider connects to the central server after
   logging in to the EPG server.  The channel provider sends an INIT
   message which includes its operating mode, capacity, and bandwidth.
   The capacity describes how many streams this node can accept on
   input and output.

   Third, the channel provider gets a new media ID from the web server.
   The channel provider uploads the media description to the web server
   before getting the media ID.

   Fourth, the channel provider sends a REG message to the central
   server.  This message consists of the media ID, the sender's IP
   address and port, and the stream bandwidth.  When the central server
   accepts this REG message, it updates the EPG by inserting the new
   channel.

   Channel                  Central                    EPG
   Provider                 Server                     Server
      |                        |                          |
      |----(HTTP) Login ---------------------------------->|
      |<---(HTTP) response/con.info ----------------------|
      |--(TCP establish)------>|                          |
      |---INIT(mode)---------->|                          |
      |     [channel creation] |                          |
      |----(HTTP) GET new media ID ---------------------->|
      |<---(HTTP) response -------------------------------|
      |---CTRL:REG(mediaID)--->|                          |
      |                        |-------->[insert program] |
      |                        |                          |
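
   To make the message flow concrete, the following Python sketch
   builds the INIT and REG messages of this process using the
   colon-separated formats defined in Section 3.3.  The field values
   and the node name are examples only; transport (TCP/TLS),
   encryption, and the HTTP exchanges with the EPG server are elided.

      def init_message(bandwidth, name, mode, maxin, maxout):
          # CTRL:INIT:bandwidth:name:mode:maxin:maxout
          # (mode 0 = channel provider, see Section 3.3)
          return "CTRL:INIT:%d:%s:%d:%d:%d" % (
              bandwidth, name, mode, maxin, maxout)

      def reg_message(media_id, source, sender, cast, bandwidth):
          # CTRL:REG:mediaId:sourceIP:sourcePort:senderIP:senderPort:
          #          cast:bandwidth
          return "CTRL:REG:%s:%s:%d:%s:%d:%s:%d" % (
              (media_id,) + source + sender + (cast, bandwidth))

      # Example: a provider registering a 500 kbps unicast channel.
      print(init_message(10000, "provider-01", 0, 1, 8))
      print(reg_message("M42", ("10.0.0.5", 5004),
                        ("10.0.0.5", 5004), "UCAST", 500))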
3.2.2. Channel Watching

   The following figure shows the channel watching process.  The relay
   selection strategy may be changed according to the situation.

   Channel       EPG     Central     Relay    Relay
   Provider    Server      Server      (R1)     (R2)      Viewer
      |           |           |          |        |          |
      |           |<-------------- (HTTP) Login -------------|
      |           |-------- (HTTP) response/con.info ------->|
      |           |           |<------ (TCP establish) ------|
      |           |           |<--------- INIT(mode) --------|
      |           |<------------ (HTTP) GET EPG -------------|
      |           |---------- (HTTP) response/EPG ---------->|
      |           |           |          |        |  [channel select]
      |           |<-------- (HTTP) GET channel info --------|
      |           |---- (HTTP) response/relay candidates --->|
      |           |           |          |<--(RTT measure)---|
      |           |           |          |        |<-(RTT m.)|
      |           |           |<------ REQ(media, R2) -------|
      |           |           |--RTT(R1,CP)------>|          |
      |<--------------(RTT measure)---------------|          |
      |           |           |          |<-(R.m)-|          |
      |           |           |<------RRTT--------|          |
      |<-------OUT+(R2)-------|          |        |          |
      |           |           |---IN+(CP)-------->|          |
      |           |           |-OUT+(Viewer)----->|          |
      |===========================================>|=========>|
      |           |           |          |        |          |

   First, a viewer logs in to the EPG server and connects to the
   central server.  The viewer sends an INIT message with the viewer
   mode.

   Second, the viewer gets the program (channel) list from the EPG
   server over HTTP.  Each channel has a media ID as its
   identification.

   Third, when the viewer selects a channel, the viewer sends a GET
   request for the media ID to the EPG server.  The viewer receives the
   initial relay candidates.

   Fourth, the viewer measures the RTT to each relay candidate.
   Currently, the viewer uses a ping (ICMP Echo request) to measure the
   RTT.  The viewer selects the best relay from the result of the RTT
   measurement.

   Fifth, the viewer sends a REQ message to the server.  This message
   includes the media ID, the selected relay's IP address and port, and
   the viewer's IP address and port.

   Sixth, the central server tries to form a better media delivery
   tree.  The central server orders a relay to measure the RTT to other
   relays or to the channel provider.  In this example, relay R2
   receives an RTT request from the server.  The RTT request message
   includes the other relay set (R1) and the channel provider's IP
   address.  Relay R2 measures the RTT to relay R1 and to the channel
   provider CP.  When the measurements are done, relay R2 replies to
   the server with an RRTT (response to the RTT request) identifying
   the best candidate.

   Seventh, the central server asks the channel provider to send the
   media to relay R2 using an OUT+ message.  The central server also
   asks relay R2 to receive the media from the channel provider and to
   send the media to the viewer, using IN+ and OUT+ messages.

   Finally, the viewer can watch (receive) the media.

3.3. Messages

   This section describes the proprietary messages of the ETRI Olive
   platform.  All messages are passed over TCP or TLS.  An Olive
   message is originally text based, and the elements in a message are
   separated by colons.  The message is encrypted when it is
   transmitted.

   Initialization

   - This message is used to register a node with the central server.

      CTRL:INIT:bandwidth:name:mode:maxin:maxout

   o  bandwidth : the total bandwidth of this node.  The minimum
      bandwidth is used on an asymmetric network.

   o  name : the node name (Olive node ID).  This name must be
      authorized by the web server.

   o  mode : the node's operating mode.  0: channel provider,
      1: client (viewer), 2: relayer, 3: relayer+viewer.

   o  maxin : the maximum number of incoming media streams.

   o  maxout : the maximum number of outgoing media streams.

   Channel Registration

   - This message is used by a channel provider to register its own
     channel with the central server.

      CTRL:REG:mediaId:sourceIP:sourcePort:senderIP:senderPort:cast:
           bandwidth

   o  mediaId : the media ID.  This value is allocated by the central
      server and can be obtained from the EPG server.

   o  sourceIP : the original source IP address.  A relay can register
      the media as another channel; in that case, sourceIP differs from
      senderIP.

   o  sourcePort : the source UDP port number.

   o  senderIP : this node's IP address for this media.

   o  senderPort : this node's UDP port number for this media.

   o  cast : the original media source may be multicast or unicast.
      This must be "UCAST" or "MCAST".  This supports hybrid multicast,
      for example, IP multicast - overlay multicast - IP multicast.

   o  bandwidth : this media's bandwidth (kbps).

   Channel Request

   - This message is used by a viewer to watch a certain channel.

      CTRL:REQ:mediaId:relayIP:relayPort:viewerIP:viewerPort:bandwidth

   o  mediaId : the media ID which the viewer wants to watch.

   o  relayIP : the neighboring relay's IP address.

   o  relayPort : the neighboring relay's port number.

   o  viewerIP : this node's IP address for this media.

   o  viewerPort : this node's UDP port number for this media.

   o  bandwidth : the bandwidth which the viewer is allowed.
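
   As a rough sketch of the receiving side, the following Python
   fragment splits a CTRL:REQ message into its fields.  The field
   names follow the definition above; note that a naive colon split
   like this works for IPv4 addresses but would break for IPv6
   literals, which the draft does not discuss.

      REQ_FIELDS = ("mediaId", "relayIP", "relayPort",
                    "viewerIP", "viewerPort", "bandwidth")

      def parse_req(line):
          # CTRL:REQ:mediaId:relayIP:relayPort:viewerIP:viewerPort:
          #          bandwidth
          parts = line.split(":")
          if (parts[:2] != ["CTRL", "REQ"]
                  or len(parts) != 2 + len(REQ_FIELDS)):
              raise ValueError("not a well-formed CTRL:REQ message")
          return dict(zip(REQ_FIELDS, parts[2:]))

      msg = "CTRL:REQ:M42:10.0.0.9:5004:10.0.1.7:6000:500"
      print(parse_req(msg)["relayIP"])    # -> 10.0.0.9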
   RTT Measure Request

   - This message is used to ask another node to perform RTT
     measurements.  The central server sends it to a client node, which
     may be a relay, a viewer, or a channel provider.

      RTT:numCandidate:candidate1IP:candidate1Port:candidate2IP:
           candidate2Port:...:mediaId:clientIP:clientPort:bandwidth

   o  numCandidate : the number of candidates in this message.  A
      candidate may be a relay or a channel provider.

   o  candidate*IP : the candidate's IP address.

   o  candidate*Port : the candidate's port number.

   o  mediaId : the media for which the RTT measurement is requested.

   o  clientIP : the node which performs the RTT measurement; the
      receiver IP address of this request.

   o  clientPort : the receiver port of this request.

   o  bandwidth : the bandwidth for this media.

   RTT Measure Response

   - This message carries the result of an RTT measure request.

      RRTT:candidateIP:candidatePort:mediaId:bandwidth

   o  candidateIP : the IP address of the best candidate, i.e., the
      one with the smallest RTT to this node.

   o  candidatePort : the best candidate's port number.

   o  mediaId : the media for which the RTT measurement was performed.

   o  bandwidth : the bandwidth for this media.

   Media Output Start

   - This message is used to make a node send the media to another
     node.

      OUT+:cast:mediaId:receiverIP:receiverPort

   o  cast : this must be "UCAST" for unicast or "MCAST" for multicast.
      The node sends the media in this way.

   o  mediaId : the media ID which this node should send.

   o  receiverIP : the destination IP address for this media.

   o  receiverPort : the destination port number for this media.

   Media Input Start

   - This message is used to make a node receive the media from
     another node.

      IN+:cast:mediaId:senderIP:senderPort

   o  cast : this must be "UCAST" for unicast or "MCAST" for multicast.

   o  mediaId : the media ID which this node will receive.

   o  senderIP : the sender's IP address for this media.

   o  senderPort : the sender's port number for this media.

   Media Output Stop

   - This message is used to close an outgoing stream.

      OUT-:mediaId:receiverIP:receiverPort

   Media Input Stop

   - This message is used to close an incoming stream.

      IN-:mediaId:senderIP:senderPort
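
   The RTT exchange above can be summarized in a short Python sketch:
   the client measures the RTT to every candidate in the RTT message
   and reports the closest one back in an RRTT message.  The
   measure_rtt() helper (e.g. an ICMP echo, as in Section 3.2.2) is
   assumed rather than defined here.

      def handle_rtt_request(candidates, media_id, bandwidth,
                             measure_rtt):
          # candidates: list of (ip, port) pairs from the RTT message.
          best_ip, best_port = min(candidates,
                                   key=lambda c: measure_rtt(c[0]))
          # RRTT:candidateIP:candidatePort:mediaId:bandwidth
          return "RRTT:%s:%d:%s:%d" % (best_ip, best_port,
                                       media_id, bandwidth)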
4. Peering Portal Grid Delivery solution

4.1. Overview

   Peering Portal is one of the biggest providers of P2P based
   streaming solutions for multimedia portal providers in Korea.  Its
   solution, named "Grid Delivery", is built on the following key
   idea.  In a media streaming service, the contents stored on servers
   are reused very frequently.  Under Grid Delivery, contents that a
   user has requested once are therefore also stored somewhere other
   than the servers.  When another user requests the same contents,
   they are served not by the servers but by the node that stored them
   temporarily.  As a result, the heavy load on the servers is
   relieved.  This reduces the cost of providing a media streaming
   service far below that of server-client solutions, while providing
   the same service.

   The following figure shows the node architecture of the Grid
   Delivery solution.  This node architecture supports multiple
   applications through a client interface layer.

                  +----------++-----------++----------++-----------+
   Presentation   |Web based ||Stand-alone||AdobeFlash||Silverlight|
   Layer          |player    ||Application||player    ||player     |
                  +----------++-----------++----------++-----------+
   Client         |ActiveX   ||DirectShow ||AdobeFlash||Silverlight|
   Interface      |plug-in   ||COM+ intfce||Component ||Component  |
   Layer          +----------++-----------++----------++-----------+

   Grid Delivery  +----------------+  +---------+   +-------+
   Peer           | Peer selection |  | Cache   |---| Cache |
   Communication  | & scheduling   |  | Manager |   +-------+
   Layer          +----------------+  +---------+
                  +----------------+  +----------------+
                  | Network client |  | Network server |
                  +----------------+  +----------------+

4.2. Features

4.2.1. Caching

   There are two types of caching:

   - temporarily storing, in advance, contents that the clients
     (peers) are likely to request, and

   - temporarily storing contents that a user has requested and used
     once.

   Grid Delivery adopts the second approach.  Because contents are
   stored in caches only when a user requests them, there is no
   additional redundant traffic, which the first approach can create,
   on either the server side or the network.  Also, the more often
   contents are requested, the more they are cached, which naturally
   improves the efficiency of the system.

4.2.2. Indexing

   To receive media data from a client rather than the server, we need
   to know which clients are able to provide the content that we need.
   Indexing is the process that builds the information needed to know
   the list of clients having the needed data.  Indexing can be
   managed either by a central server or by the distributed peers.
   Grid Delivery adopts server-based indexing: a client receives the
   information about which clients can provide the requested data
   directly from the server, and thus can make its decision easily and
   quickly.

   When a peer requests data, the server uses the index to return the
   list of candidate peers that cache the requested data.  This allows
   the requesting peer to receive the media stream from a peer
   selected among the candidates.

4.2.3. Cache state update

   For the server to perform indexing, the state of the peers must be
   updated periodically.  In order to find the appropriate peers that
   can provide the necessary data to requesting peers, the indexing
   server needs to monitor whether each peer is online.  Therefore, an
   online peer must send the server information about its cached data,
   together with any additional data necessary for indexing.

4.2.4. Segmentation

   For more efficient reception from widely distributed multiple data
   sources, it is necessary to change the data source dynamically
   according to the situation.  For this, the client must track the
   situation not on an entire-file basis but on a partial-data basis
   (we henceforth call such a part a segment), so that it can request
   each segment from the appropriate data source.  The process of
   dividing a file into several segments is called segmentation.

   The size of a segment could be variable, but in Grid Delivery it is
   fixed, which removes the overhead of managing variable-size
   segments.  In practice, a size between 64KB and 512KB is adopted
   according to the characteristics of the target services and
   applications.

4.2.5. Meta information

   Under Grid Delivery, the validity of a received segment needs to be
   checked, because the segment is not transmitted from the original
   server but comes from the cache of a peer.  Meta information is
   used for this purpose.

   Basing Grid Delivery on this meta information has the following
   advantages:

   - it prohibits the spread of infected data in the caches,

   - it prevents the distribution of data malformed by a malicious
     user,

   - it controls the term of data validity, and

   - it can even revoke data which has already been stored in a cache
     but is wrong data inserted by mistake.
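
   The following Python sketch ties Sections 4.2.4 and 4.2.5 together:
   a file is cut into fixed-size segments, and each received segment
   is checked against per-segment validation data carried in the meta
   information.  The draft does not name the hash algorithm; SHA-1 is
   used here only as a placeholder.

      import hashlib

      SEGMENT_SIZE = 256 * 1024      # fixed per service, 64KB..512KB

      def segment_index(offset):
          # With a fixed segment size, finding the segment that holds
          # a byte offset is a single division -- one benefit of
          # avoiding variable-size segments.
          return offset // SEGMENT_SIZE

      def make_meta(data):
          # Per-segment digests, as the original server would publish.
          return [hashlib.sha1(data[i:i + SEGMENT_SIZE]).hexdigest()
                  for i in range(0, len(data), SEGMENT_SIZE)]

      def validate_segment(meta, index, segment):
          # Reject infected, malformed, or stale data before it is
          # played or re-cached.
          return hashlib.sha1(segment).hexdigest() == meta[index]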
4.2.6. Protocol

   In order to request a wanted file and transmit the requested file,
   a protocol is needed between peers and between a peer and a server.
   The proprietary protocol designed by Peering Portal is based on
   HTTP, so that it works well in a network where the data source
   changes dynamically.

4.3. Operation

   The following figure shows the Grid Delivery operation briefly.

   Client                                  Server       Peer
      |---- 1. Server Configuration Req ---->|           |
      |<--- 2. Server Configuration Rsp -----|           |
      |---- 3. Media Meta Info Req ---------->|           |
      |<--- 4. Media Meta Info Rsp -----------|           |
      |---- 5. Media File Request ----------->|           |
      |<--- 6. Peer List ---------------------|           |
      |---- 7. Media Chunk Request ---------->|---------->|
      |<--- 8. Media Chunk -------------------|<----------|
      |---- 9. Statistics ------------------->|           |

   Step 1-2.  A client gets the P2P network configuration information
   from the server.  This configuration consists of property
   parameters, encryption keys, a server list, and so on.

   Step 3-4.  The client chooses a media/content item which it wants
   to get.  The client then gets the meta information for the content.
   This meta information consists of validation data for the integrity
   of each segment (chunk), the expiry time, the segment size,
   permissions, and so on.

   Step 5-6.  The client sends a File Request message to the server.
   The server gives the peer list back to the client.  This peer list
   consists of other peers' IP addresses and ports, and may include
   additional information such as their bandwidth or capacity.

   Step 7-8.  The client sends Media Chunk Request messages to the
   server or to peers in the peer list.  The peer selection algorithm
   is based on IP location and also considers network bandwidth and
   peer capability.  Because P2P streaming is a time-critical service,
   the response time of a Media Chunk Request is critical as well: if
   the response comes too late, the client tries another peer or the
   server.

   Step 9.  The client sends statistics information to the server
   periodically.
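
   The latency-sensitive part of steps 7-8 can be sketched as follows
   in Python.  The request_chunk() helper, the timeout value, and the
   pre-ranked peer list are assumptions; the real protocol is the
   HTTP-based one of Section 4.2.6.

      def fetch_chunk(chunk_id, peers, server, request_chunk,
                      timeout=0.5):
          # peers is assumed to be ordered by the selection algorithm
          # (IP locality, bandwidth, capability).
          for source in peers + [server]:
              try:
                  return request_chunk(source, chunk_id, timeout)
              except TimeoutError:
                  continue    # too late for streaming; try the next
          raise RuntimeError("chunk %s unavailable" % chunk_id)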
5. Security Considerations

   TODO - fill in

6. Acknowledgments

   TODO - fill in

7. References

7.1. Normative References

   [I-D.ietf-p2psip-base]
             Jennings, C., Lowekamp, B., Rescorla, E., Baset, S., and
             H. Schulzrinne, "REsource LOcation And Discovery (RELOAD)
             Base Protocol", draft-ietf-p2psip-base-04 (work in
             progress), October 2009.

   [I-D.chunk-discovery]
             Zong, N., "Chunk Discovery for P2P Streaming",
             draft-zong-ppsp-chunk-discovery-00 (work in progress),
             June 2009.

   [I-D.p2psip-iptv]
             Ko, S., "IPTV usage for RELOAD",
             draft-softgear-p2psip-iptv-01 (work in progress),
             July 2009.

7.2. Informative References

   [PPSP.WG]
             https://www.ietf.org/mailman/listinfo/ppsp,
             http://trac.tools.ietf.org/area/tsv/trac/wiki/PPSP

   [PeeringPortal]
             http://www.peeringportal.com

Authors' Addresses

   Seok-Kap Ko
   ETRI
   1000-6 Oryong-dong, Buk-gu, Gwangju, 500-480,
   Korea

   Phone: +82-62-970-6677
   Email: softgear@etri.re.kr

   Seung-Hun Oh
   ETRI
   1000-6 Oryong-dong, Buk-gu, Gwangju, 500-480,
   Korea

   Phone: +82-62-970-6655
   Email: osh93@etri.re.kr

   Byung-Tak Lee
   ETRI
   1000-6 Oryong-dong, Buk-gu, Gwangju, 500-480,
   Korea

   Phone: +82-62-970-6624
   Email: bytelee@etri.re.kr