DISPATCH WG                                                   A. Romanow
Internet-Draft                                                     Cisco
Intended status: Informational                                 S. Botzko
Expires: January 13, 2011                                   M. Duckworth
                                                                 Polycom
                                                                 R. Even
                                                            Gesher Erove
                                                              T. Eubanks
                                                 Iformata Communications
                                                           July 12, 2010

              Use Cases for Telepresence Multi-streams
          draft-romanow-dispatch-telepresence-use-cases-01.txt

Abstract

   Telepresence conferencing systems seek to create the sense of really
   being present.  A number of techniques for handling audio and video
   streams are used to create this experience.  When these techniques
   differ between systems, interoperability is difficult at best, and
   often not possible.  Conveying information about the relationships
   between multiple streams of media would enable senders and receivers
   to make choices that allow telepresence systems to interwork.  This
   memo describes the most typical and important use cases for sending
   multiple streams in a telepresence conference.

Status of this Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF).  Note that other groups may also distribute
   working documents as Internet-Drafts.  The list of current
   Internet-Drafts is at http://datatracker.ietf.org/drafts/current/.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."

   This Internet-Draft will expire on January 13, 2011.

Copyright Notice

   Copyright (c) 2010 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.
   Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document.  Code Components extracted from this
   document must include Simplified BSD License text as described in
   Section 4.e of the Trust Legal Provisions and are provided without
   warranty as described in the Simplified BSD License.

Table of Contents

   1.  Introduction
   2.  Terminology
   3.  Telepresence Scenarios Overview
   4.  Use Case Scenarios
     4.1.  Point to point meeting: symmetric
     4.2.  Point to point meeting: asymmetric
     4.3.  Multipoint meeting
     4.4.  Presentation
     4.5.  Multipoint Education Usage
     4.6.  Other
   5.  Acknowledgements
   6.  IANA Considerations
   7.  Security Considerations
   8.  Informative References
   Authors' Addresses

1.  Introduction

   Telepresence applications try to provide a "being there" experience
   for conversational video conferencing.  Such an application is
   often described as "immersive telepresence" to distinguish it from
   traditional video conferencing, and from other forms of remote
   presence not related to conversational video conferencing, such as
   avatars and robots.  The salient characteristics of telepresence
   are often described as: full-sized, immersive video, preserving
   interpersonal interaction, and allowing non-verbal communication.

   Although telepresence systems are based on open standards such as
   RTP [RFC3550], SIP [RFC3261], H.264, and the H.323 suite of
   protocols, they cannot easily interoperate with each other without
   operator assistance and expensive additional equipment that
   translates from one vendor's formats to another's.  A standard way
   of describing the multiple streams constituting the media flows,
   and the fundamental aspects of their behavior, would allow
   telepresence systems to interwork.

   This draft presents a set of use cases describing typical
   scenarios.  Requirements will be derived from these use cases in a
   separate document.  The use cases are described from the viewpoint
   of the users and illustrate the user experience that needs to be
   supported.  It is possible to implement these use cases in a
   variety of different ways.  A problem statement draft describes the
   difficulties that arise when one participant's equipment takes a
   different approach than another's.

   Many different scenarios need to be supported.  Our strategy in
   this document is to describe the most common and basic use cases in
   detail; these cover most of the requirements.  Additional scenarios
   that bring new features and requirements will be added over time.

   We look at telepresence conferences that are point-to-point and
   multipoint.  In some settings the number of displays is the same at
   all sites; in others the number of displays differs from site to
   site.  Both cases are considered.
   Also included is a use case
   describing the display of presentations or content.

   The document structure is as follows: Section 2 presents the
   document terminology, Section 3 gives an overview of the scenarios,
   and Section 4 describes the use cases.

2.  Terminology

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in
   this document are to be interpreted as described in RFC 2119
   [RFC2119].

3.  Telepresence Scenarios Overview

   This section describes the general characteristics of the use cases
   and what the scenarios are intended to show.  The typical setting
   is a business conference, which was the initial focus of
   telepresence.  Recently, consumer products have also been
   developed.  We specifically do not include in our scenarios the
   infrastructure aspects of telepresence, such as room construction,
   layout, and decoration.

   Telepresence systems are typically composed of one or more video
   cameras and encoders and one or more large display monitors (around
   60").  Microphones pick up sound, and audio codec(s) produce one or
   more audio streams.  We will call the cameras used to present the
   telepresence users "participant cameras" (and likewise for
   displays).  There may also be other cameras, such as for document
   display.  These will be referred to as presentation or content
   cameras; they generally have different formats, aspect ratios, and
   frame rates from the participant cameras.  The presentation videos
   may be shown on the participant screens, or on auxiliary display
   screens.  A user's computer may also serve as a virtual content
   camera, generating an animation or playing back a video for display
   to the remote participants.

   We describe such a telepresence system as sending M video streams,
   N audio streams, and D content streams to the remote system(s).
   (Note that the number of audio streams is generally not the same as
   the number of video streams.)

   The fundamental parameters describing today's typical telepresence
   scenario include:

   1.   The number of participating sites

   2.   The number of visible seats at a site

   3.   The number of cameras

   4.   The number of audio channels

   5.   The screen size

   6.   The display capabilities - such as resolution, frame rate,
        aspect ratio

   7.   The arrangement of the displays in relation to each other

   8.   Similar or dissimilar number of primary screens at all sites

   9.   Type and number of presentation displays

   10.  Multipoint conference display strategies - for example, the
        camera-to-display mappings may be static or dynamic

   11.  The camera viewpoint

   12.  The camera fields of view and how they do or do not overlap
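   Purely as an illustrative sketch, and not part of any protocol or
   standard, the following hypothetical Python data model shows how an
   endpoint might summarize these parameters when describing itself;
   all class and field names are invented for this example:

      # Illustrative only: a hypothetical description of one
      # telepresence site in terms of the parameters listed above.
      from dataclasses import dataclass, field
      from typing import List, Tuple

      @dataclass
      class Display:
          diagonal_inches: float        # e.g. a 60" participant screen
          resolution: Tuple[int, int]   # (width, height) in pixels
          frame_rate: float             # maximum frames per second
          aspect_ratio: str             # e.g. "16:9"

      @dataclass
      class Camera:
          field_of_view_deg: float      # horizontal field of view
          viewpoint: str                # e.g. "left", "center", "right"

      @dataclass
      class SiteDescription:
          visible_seats: int
          cameras: List[Camera] = field(default_factory=list)
          displays: List[Display] = field(default_factory=list)
          audio_channels: int = 2       # e.g. stereo
          presentation_displays: int = 0

      # Example: a three-camera, three-screen room with stereo sound.
      room = SiteDescription(
          visible_seats=6,
          cameras=[Camera(35.0, v) for v in ("left", "center", "right")],
          displays=[Display(60.0, (1920, 1080), 30.0, "16:9")
                    for _ in range(3)],
          audio_channels=2,
          presentation_displays=1,
      )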
   The basic features that give telepresence its distinctive
   characteristics are implemented in disparate ways in different
   systems.  Currently, telepresence systems from diverse vendors
   interoperate to some extent, but this is not supported in a
   standards-based fashion.  Interworking requires that translation
   and transcoding devices be included in the architecture.  Such
   devices increase latency, reducing the quality of interpersonal
   interaction.  Use of these devices is often not automatic; it
   frequently requires substantial manual configuration and a detailed
   understanding of the nature of the underlying audio and video
   streams.  This state of affairs is not acceptable for the continued
   growth of telepresence - we believe telepresence systems should
   have the same ease of interoperability as telephones.

   There is no agreed-upon way to adequately describe the semantics of
   how streams of various media types relate to each other.  Without a
   standard for stream semantics that describes the particular roles
   and activities of each stream in the conference, interoperability
   is cumbersome at best.

   In a multiple-screen conference, the video and audio streams sent
   from remote participants must be understood by receivers so that
   they can be presented in a coherent and life-like manner.  This
   includes the ability to present remote participants at their true
   size for their apparent distance, while maintaining correct eye
   contact and gesticular cues, and simultaneously providing a spatial
   audio sound stage that is consistent with the video presentation.

   The receiving device that decides how to display incoming
   information needs to understand a number of variables, such as the
   spatial position of the speaker, the field of view of the cameras,
   the camera zoom, and which media stream is related to each of the
   displays.  It is not simply that individual streams must be
   adequately described - to a large extent this already exists - but
   rather that the semantics of the relationships between the streams
   must be communicated.  Note that all of this is still required even
   if the basic aspects of the streams, such as the bit rate, frame
   rate, and aspect ratio, are known.  Thus, this problem has aspects
   considerably beyond those encountered in the interoperation of
   single-node video conferencing units.
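   To make the distinction concrete, here is a purely hypothetical
   sketch (all names invented for illustration) contrasting per-stream
   attributes, which existing signaling can largely convey, with the
   inter-stream relationship semantics that currently have no standard
   description:

      # Illustrative only: per-stream attributes vs. inter-stream
      # relationships.  Nothing here reflects an actual protocol.
      from dataclasses import dataclass

      @dataclass
      class StreamAttributes:
          # Largely describable with existing signaling today:
          media_type: str     # "video" or "audio"
          bit_rate_kbps: int
          frame_rate: float
          aspect_ratio: str

      @dataclass
      class StreamSemantics:
          # The missing piece: how this stream relates to the others.
          role: str           # e.g. "participant", "presentation"
          spatial_index: int  # 0 = leftmost capture, increasing right
          camera_fov_deg: float
          paired_audio: int   # index of the audio channel covering
                              # the same part of the room, if any

      # A receiver needs both in order to render a coherent scene:
      left_video = (
          StreamAttributes("video", 4000, 30.0, "16:9"),
          StreamSemantics("participant", 0, 35.0, paired_audio=0),
      )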
4.  Use Case Scenarios

   Our development of use cases is staged, initially focusing on what
   is currently typical and important.  Use cases that add future or
   more specialized features will be added later as needed.  Also,
   there are a number of possible variants for these use cases; for
   example, the audio supported may differ at the endpoints (such as
   mono or stereo versus surround sound).  These issues will be
   discussed in more depth in the problem statement document.

   The use cases here are intended to be hierarchical, in that the
   earlier use cases describe basics of telepresence that will also be
   used by later use cases.

   Many of these systems offer a full conference room solution, where
   local participants sit on one side of a table and remote
   participants are displayed as if they were sitting on the other
   side of the table.  The cameras and screens are typically arranged
   to provide a panoramic (left to right) view of the remote room.

   The sense of immersion and non-verbal communication is fostered by
   a number of technical features, such as:

   1.  Good eye contact, which is achieved by careful placement of
       participants, cameras, and screens.

   2.  Camera fields of view and screen sizes are matched so that the
       images of the remote room appear to be full size.

   3.  The left side of each room is presented on the right display at
       the far end; similarly, the right side of the room is presented
       on the left display.  The effect of this is that participants
       at each site appear to be sitting across the table from each
       other.  If two participants at the same site glance at each
       other, all participants can observe it.  Likewise, if a
       participant at one site gestures to a participant at the other
       site, all participants observe the gesture itself and the
       participants it includes.

4.1.  Point to point meeting: symmetric

   In this case each of the two sites has an identical number of
   screens, with cameras having fixed fields of view, and one camera
   for each screen.  The sound type is the same at each end.  As an
   example, there could be 3 cameras and 3 screens in each room, with
   stereo sound being sent and received at each end.

   The important thing here is that each of the 2 sites has the same
   number of screens.  Each screen is paired with a corresponding
   camera.  Each camera/screen pair is typically connected to a
   separate codec, producing an encoded video stream for transmission
   to the remote site, and receiving a similarly encoded stream from
   the remote site.

   Each system has one or multiple microphones for capturing audio.
   In some cases stereophonic microphones are employed.  In other
   systems, a microphone may be placed in front of each participant
   (or pair of participants).  In typical systems all the microphones
   are connected to a single codec that sends and receives the audio
   streams as either stereo or surround sound.  The number of
   microphones and the number of audio channels are often not the same
   as the number of cameras.  Also, the number of microphones is often
   not the same as the number of loudspeakers.

   The audio may be transmitted as multi-channel (stereo/surround
   sound) or as distinct and separate monophonic streams.  Audio
   levels should be matched, so that the sound levels at both sites
   are identical.  Loudspeaker and microphone placements are chosen so
   that the sound "stage" (orientation of apparent audio sources) is
   coordinated with the video.  That is, if a participant at one site
   speaks, the participants at the remote site perceive her voice as
   originating from her visual image.  In order to accomplish this,
   the audio needs to be mapped at the receiving site in the same
   fashion as the video.  That is, audio received from the right side
   of the room needs to be output from loudspeaker(s) on the left side
   at the remote site, and vice versa.
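   As a minimal sketch of this left/right mirroring (illustrative
   only, using an invented convention that channels are indexed left
   to right from the capturing room's perspective), consider:

      # Illustrative only: mirroring the sound stage at the receiving
      # site.  Convention (invented for this sketch): channels are
      # indexed left to right from the *capturing* room's perspective.
      def map_to_loudspeakers(received_channels):
          """Return the loudspeaker feed order for received audio.

          Because the far end is rendered as if seated across the
          table, the capturing room's right side must play out of the
          receiving room's left loudspeakers, and vice versa - so the
          channel order is simply reversed.
          """
          return list(reversed(received_channels))

      # A 3-channel example: far-end left/center/right becomes
      # near-end right/center/left.
      assert map_to_loudspeakers(["L", "C", "R"]) == ["R", "C", "L"]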
4.2.  Point to point meeting: asymmetric

   In this case, each site has a different number of screens and
   cameras than the other site.  The important characteristic of this
   scenario is that the number of displays differs between the two
   sites.  This creates challenges that are handled differently by
   different telepresence systems.

   This use case builds on the basic scenario of 3 screens to 3
   screens.  Here, we use the common case of 3 screens and 3 cameras
   at one site, and 1 screen and 1 camera at the other site, connected
   by a point to point call.  The display sizes and camera fields of
   view at both sites are basically similar, such that each camera
   view is designed to show two people sitting side by side.  Thus the
   1 screen room has up to 2 people seated at the table, while the 3
   screen room may have up to 6 people at the table.

   The basic considerations of defining left and right and indicating
   the relative placement of the multiple audio and video streams are
   the same as in the 3-3 use case.  However, handling the mismatch in
   the number of displays and cameras between the two sites requires
   more complicated maneuvers.

   For the video sent from the 1 camera room to the 3 screen room,
   usually what is done is to simply use 1 of the 3 displays and keep
   the second and third displays inactive, or put up the date, for
   example.  This maintains the "full size" image of the remote side.

   For the other direction, the 3 camera room sending video to the 1
   screen room, there are more complicated variations to consider.
   Here are several possible ways in which the video streams can be
   handled.

   1.  The 1 screen system might simply show only 1 of the 3 camera
       images, since the receiving side has only 1 screen.  Two people
       are seen at full size, but 4 people are not seen at all.  The
       choice of which 1 of the 3 streams to display could be fixed,
       or could be selected by the users.  It could also be made
       automatically based on who is speaking in the 3 screen room,
       such that the people in the 1 screen room always see the person
       who is speaking.  If the automatic selection is done at the
       sender, the transmission of streams that are not displayed
       could be suppressed, which would avoid wasting bandwidth.

   2.  The 1 screen system might be capable of receiving and decoding
       all 3 streams from all 3 cameras.  The 1 screen system could
       then compose the 3 streams into 1 local image for display on
       the single screen.  All six people would be seen, but smaller
       than full size.  This could be done in conjunction with
       reducing the image resolution of the streams, such that
       encode/decode resources and bandwidth are not wasted on streams
       that will be downsized for display anyway.

   3.  The 3 screen system might be capable of including all 6 people
       in a single stream to send to the 1 screen system.  For
       example, it could use PTZ (Pan Tilt Zoom) cameras to physically
       adjust the cameras such that 1 camera captures the whole room
       of six people.  Or it could recompose the 3 camera images into
       1 encoded stream to send to the remote site.  These variations
       also show all six people, but at a reduced size.

   4.  Or, there could be a combination of these approaches, such as
       simultaneously showing the speaker at full size with a
       composite of all 6 participants at a smaller size.

   The receiving telepresence system needs to have information about
   the content of the streams it receives to make any of these
   decisions.  If the systems are capable of supporting more than one
   strategy, there needs to be some negotiation between the two sites
   to figure out which of the possible variations they will use in a
   specific point to point call.
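   Purely to illustrate the kind of choice involved - this is not any
   actual signaling mechanism, and the strategy names and preference
   order are invented - a hypothetical negotiation might look like:

      # Illustrative only: choosing a video-handling strategy for an
      # asymmetric call from each side's advertised capabilities.
      SPEAKER_ONLY = "show_current_speaker_full_size"     # option 1
      RECEIVER_COMPOSE = "receiver_composes_all_streams"  # option 2
      SENDER_COMPOSE = "sender_sends_single_composition"  # option 3

      PREFERENCE = [SENDER_COMPOSE, RECEIVER_COMPOSE, SPEAKER_ONLY]

      def negotiate(sender_caps, receiver_caps):
          """Pick the first mutually supported strategy."""
          common = set(sender_caps) & set(receiver_caps)
          for strategy in PREFERENCE:
              if strategy in common:
                  return strategy
          raise ValueError("no common strategy for this call")

      # The 3-screen room can compose or switch to the speaker; the
      # 1-screen room can only decode a single incoming stream.
      chosen = negotiate(
          sender_caps={SENDER_COMPOSE, SPEAKER_ONLY},
          receiver_caps={SENDER_COMPOSE, SPEAKER_ONLY},
      )
      assert chosen == SENDER_COMPOSE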
4.3.  Multipoint meeting

   In a multipoint telepresence conference, there are more than two
   sites participating.  Additional complexity is required to enable
   media streams from each participant to show up on the displays of
   the other participants.

   Clearly, there are a great number of topologies that can be used to
   display the streams from the multiple sites participating in a
   conference.

   One major objective for telepresence is to preserve the "being
   there" user experience.  However, in multi-site conferences it is
   often (in fact usually) not possible to simultaneously provide full
   size video, eye contact, and a common perception of gestures and
   gaze by all participants.  Several policies can be used for stream
   distribution and display: all provide good results, but they all
   make different compromises.

   One common policy is called site switching.  Let's say the speaker
   is at site A and everyone else is at a "remote" site.  When the
   room at site A is shown, all the camera images from site A are
   forwarded to the remote sites.  Therefore, at each receiving remote
   site, all the screens display camera images from site A.  This can
   be used to preserve full size image display, and also to provide
   full visual context of the displayed far end, site A.  In site
   switching there is a fixed relation between the cameras in each
   room and the displays in remote rooms.  The room or participants
   being shown is switched from time to time based on who is speaking
   or by manual control, e.g., from site A to site B.

   Segment switching is another policy choice.  Still using site A as
   where the speaker is, and "remote" to refer to all the other sites,
   in segment switching, rather than sending all the images from site
   A, only the speaker at site A is shown.  The camera images of the
   current speaker and previous speakers (if any) are forwarded to the
   other sites in the conference.  Therefore, the screens at each site
   are usually displaying images from different remote sites - the
   current speaker at site A and the previous ones.  This strategy can
   be used to preserve full size image display, and also to capture
   the non-verbal communication between the speakers.  In segment
   switching, the display depends on the activity in the remote rooms
   (generally, but not necessarily, based on audio/speech detection).

   A third possibility is to reduce the image size so that multiple
   camera views can be composited onto one or more screens.  This does
   not preserve full size image display, but it provides the most
   visual context (since more sites or segments can be seen).
   Typically in this case the display mapping is static, i.e., each
   part of each room is shown in the same location on the display
   screens throughout the conference.

   Other policies and combinations are also possible.  For example,
   there can be a static display of all screens from all remote rooms,
   with part or all of one screen being used to show the current
   speaker at full size.
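   To contrast the two switching policies, here is a hedged,
   illustrative Python sketch; the data model (a mapping from site to
   its list of camera streams, plus a speaker history) is invented for
   this example and does not reflect any real conference bridge:

      # Illustrative only: contrasting site switching and segment
      # switching stream selection.
      def site_switching(cameras_by_site, speaker_site):
          # All screens at every remote site show the speaker's room.
          return cameras_by_site[speaker_site]

      def segment_switching(cameras_by_site, speaker_history,
                            screens=3):
          # Screens show the current and most recent previous
          # speakers, one camera segment each, possibly from
          # different sites.
          segments = []
          for site, camera_index in reversed(speaker_history):
              segments.append(cameras_by_site[site][camera_index])
              if len(segments) == screens:
                  break
          return segments

      cams = {"A": ["A-left", "A-center", "A-right"],
              "B": ["B-left", "B-center", "B-right"]}

      # Site switching: the whole of room A is forwarded.
      assert site_switching(cams, "A") == \
          ["A-left", "A-center", "A-right"]

      # Segment switching: the current speaker (site A, center
      # camera) plus the two previous speakers, regardless of room.
      history = [("B", 0), ("A", 2), ("B", 1), ("A", 1)]
      assert segment_switching(cams, history) == \
          ["A-center", "B-center", "A-right"]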
4.4.  Presentation

   In addition to the video and audio streams showing the
   participants, additional streams are used for presentations.

   In systems available today, generally only one additional video
   stream is available for presentations.  Often this presentation
   stream is half-duplex in nature, with presenters taking turns.  The
   presentation video may be captured from a PC screen, or it may come
   from a multimedia source such as a document camera, camcorder, or
   DVD.  In a multipoint meeting, the presentation streams for the
   currently active presentation are always distributed to all sites
   in the meeting, so that the presentations are viewed by all.

   Some systems display the presentation video on a screen that is
   mounted either above or below the three participant screens.  Other
   systems provide monitors on the conference table for observing
   presentations.  If multiple presentation monitors are used, they
   generally display identical content.  There is considerable
   variation in the placement, number, and size of presentation
   displays.

   In some systems presentation audio is pre-mixed with the room
   audio.  In others, a separate presentation audio stream is provided
   (if the presentation includes audio).

   In H.323 systems, H.239 is typically used to control the video
   presentation stream.  In SIP systems, similar control mechanisms
   can be provided with BFCP [RFC4582].  These mechanisms are suitable
   for managing a single presentation stream.

   Although today's systems remain limited to a single video
   presentation stream, there are obvious uses for multiple
   presentation streams.

   1.  Frequently the meeting convener is following a meeting agenda,
       and it is useful for her to be able to show that agenda to all
       participants during the meeting.  Other participants at various
       remote sites are able to make presentations during the meeting,
       with the presenters taking turns.  The presentations and the
       agenda are both shown, either on separate displays, or perhaps
       re-scaled and shown on a single display.

   2.  A single multimedia presentation can itself include multiple
       video streams that should be shown together.  For instance, a
       presenter may be discussing the fairness of media coverage.  In
       addition to slides that support the presenter's conclusions,
       she also has video excerpts from various news programs which
       she shows to illustrate her findings.  She uses a DVD player
       for the video excerpts so that she can pause and reposition the
       video as needed.  Another example is an educator who is
       presenting a multi-screen slide show.  This show requires that
       the placement of the images on the multiple displays at each
       site be consistent.

   There are many other examples where multiple presentation streams
   are useful.
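   A minimal sketch, assuming invented roles and fields, of the kind
   of per-stream labeling a receiver would need in order to lay out
   multiple presentation streams consistently across sites:

      # Illustrative only: labeling presentation streams so receivers
      # can place them consistently.  Roles and fields are invented.
      from dataclasses import dataclass

      @dataclass
      class PresentationStream:
          role: str        # e.g. "agenda", "slides", "video-excerpt"
          position: int    # intended placement order, left to right
          has_audio: bool  # whether separate audio accompanies it

      streams = [
          PresentationStream("agenda", position=0, has_audio=False),
          PresentationStream("slides", position=1, has_audio=False),
          PresentationStream("video-excerpt", position=2,
                             has_audio=True),
      ]

      # A single-display receiver might rescale and tile all of these;
      # a multi-display receiver would honor the positions so that the
      # educator's multi-screen slide show appears identically at
      # every site.
      layout = sorted(streams, key=lambda s: s.position)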
4.5.  Multipoint Education Usage

   The importance of this example is that the multiple video streams
   are not used to create an immersive conferencing experience with
   panoramic views at all the sites.  Instead, the multiple streams
   are used dynamically to enable full participation of remote
   students in a university class.  In some instances the same video
   stream is displayed on multiple displays in the room; in other
   instances an available stream is not displayed at all.

   The main site is a university auditorium which is equipped with
   three cameras.  One camera is focused on the professor at the
   podium.  A second camera is mounted on the wall behind the
   professor and captures the class in its entirety.  The third camera
   is co-located with the second, and is designed to capture a close
   up view of a questioner in the audience.  It automatically zooms in
   on that student using sound localization.

   Although the auditorium is equipped with three cameras, it is only
   equipped with two screens.  One is a large screen located at the
   front so that the class can see it.  The other is located at the
   rear so the professor can see it.  When someone asks a question,
   the front screen shows the questioner.  Otherwise it shows the
   professor (ensuring everyone can easily see her).

   The remote sites are typical immersive telepresence rooms with
   three camera/screen pairs.

   All remote sites display the professor on the center screen at full
   size.  A second screen shows the entire classroom view when the
   professor is speaking.  However, when a student asks a question,
   the second screen shows the close up view of the student at full
   size.  Sometimes the student is in the auditorium; sometimes the
   speaking student is at another remote site.  The remote systems
   never display the students that are actually in their own room.

   If someone at a remote site asks a question, then the screen in the
   auditorium will show the remote student at full size (as if they
   were present in the auditorium itself).  The display in the rear
   also shows this questioner, allowing the professor to see and
   respond to the student without needing to turn her back on the main
   class.

   When no one is asking a question, the screen in the rear briefly
   shows a full-room view of each remote site in turn, allowing the
   professor to monitor the entire class (remote and local students).
   The professor can also use a control on the podium to see a
   particular site - she can choose either a full-room view or a
   single camera view.

   Realization of this use case does not require any negotiation
   between the participating sites.  Endpoint devices (and an MCU, if
   present) need to know who is speaking and which video stream
   includes the view of that speaker.  The remote systems need some
   knowledge of which stream should be placed in the center.  The
   ability of the professor to see specific sites (or for the system
   to show all the sites in turn) would also require the auditorium
   system to know what sites are available, and to be able to request
   a particular view of any site.  Bandwidth is optimized if video
   that is not being shown at a particular site is not distributed to
   that site.

4.6.  Other

   Additional use cases will be added in the future.

   Add a typical case with a mixture of immersive telepresence and
   legacy systems, including telephony-only participants.

5.  Acknowledgements

   The draft has benefited from input from a number of people,
   including Alex Eleftheriadis, Tommy Andre Nyquist, Mark Gorzynski,
   Charles Eckel, Nermeen Ismail, Mary Barnes, and Jim Cole.

6.  IANA Considerations

   This document contains no IANA considerations.

7.  Security Considerations

   While there are likely to be security considerations for any
   solution for telepresence interoperability, this document has no
   security considerations.

8.  Informative References

   [RFC2119]  Bradner, S., "Key words for use in RFCs to Indicate
              Requirement Levels", BCP 14, RFC 2119, March 1997.

   [RFC3261]  Rosenberg, J., Schulzrinne, H., Camarillo, G., Johnston,
              A., Peterson, J., Sparks, R., Handley, M., and E.
              Schooler, "SIP: Session Initiation Protocol", RFC 3261,
              June 2002.

   [RFC3550]  Schulzrinne, H., Casner, S., Frederick, R., and V.
              Jacobson, "RTP: A Transport Protocol for Real-Time
              Applications", STD 64, RFC 3550, July 2003.

   [RFC4582]  Camarillo, G., Ott, J., and K. Drage, "The Binary Floor
              Control Protocol (BFCP)", RFC 4582, November 2006.

Authors' Addresses

   Allyn Romanow
   Cisco
   San Jose, CA  95134
   US

   Email: allyn@cisco.com

   Stephen Botzko
   Polycom
   Andover, MA  01810
   US

   Email: stephen.botzko@polycom.com

   Mark Duckworth
   Polycom
   Andover, MA  01810
   US

   Email: mark.duckworth@polycom.com

   Roni Even
   Gesher Erove
   Tel Aviv
   Israel

   Email: ron.even.tlv@gmail.com

   Marshall Eubanks
   Iformata Communications
   Dayton, Ohio  45402
   US

   Email: marshall.eubanks@iformata.com