Internet Engineering Task Force                              SIPPING WG
Internet Draft                                         Charlton/Gasson/
Document: draft-ietf-sipping-deaf-req-01.txt            Gybels/Spanner/
April 2002                                                     van Wijk
Expires: October 2002                                     RNID/Ericsson
Category: Informational

      User Requirements for SIP in support of deaf, hard of hearing
                     and speech-impaired individuals

Status of this Memo

   This document is an Internet-Draft and is subject to all provisions
   of Section 10 of RFC2026[1].

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF), its areas, and its working groups. Note that
   other groups may also distribute working documents as
   Internet-Drafts.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time. It is inappropriate to use Internet-Drafts as reference
   material or to cite them other than as "work in progress."

   The list of current Internet-Drafts can be accessed at
   http://www.ietf.org/1id-abstracts.html

   The list of Internet-Draft Shadow Directories can be accessed at
   http://www.ietf.org/shadow.html.

Table of Contents

   1. Abstract.........................................................2
   2. Terminology and Conventions Used in this Document................2
   3. Introduction.....................................................3
   4. Purpose and Scope................................................3
   5. Background.......................................................3
   6. Deaf, Hard of Hearing and Speech-impaired Requirements for SIP...4
      Introduction.....................................................4
      6.1 Connection without Difficulty................................4
      6.2 User Profile.................................................5
      6.3 Intelligent Gateways.........................................5
      6.4 Inclusive Design.............................................5
      6.5 Resource Management..........................................6
      6.6 Confidentiality and Security.................................6
   7. Some Real World Scenarios........................................6
      7.1 Transcoding Service..........................................6
      7.2 Media Service Provider.......................................7
      7.3 Sign Language Interface......................................7
      7.4 Synthetic Lip-speaking Support for Voice Calls...............8
      7.5 Voice-Activated Menu Systems.................................8
      7.6 Conference Call..............................................9
   8. Some Suggestions for Service Providers and User Agent
      Manufacturers...................................................10
   9. Acknowledgements................................................11
   10. Authors' Addresses.............................................11
   References and Notes...............................................12

1. Abstract

   This document presents a set of SIP user requirements that support
   communications for deaf, hard of hearing and speech-impaired
   individuals. These user requirements address the current
   difficulties such individuals face in using communications
   facilities, while acknowledging the multi-functional potential of
   SIP-based communications.

   A number of issues related to these user requirements are also
   raised in this document.

   Also included are some real world scenarios and some technical
   requirements that demonstrate, at a concept level, the robustness of
   these requirements.

2. Terminology and Conventions Used in this Document

   In this document, the key words "MUST", "MUST NOT", "REQUIRED",
   "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY",
   and "OPTIONAL" are to be interpreted as described in RFC2119[2] and
   indicate requirement levels for compliant SIP implementations.

   For the purposes of this document, the following terms are
   considered to have these meanings:

   Abilities: A person's capacity for communicating, which may or may
   not be affected by a hearing or speech impairment.

   Preferences: A person's choice of communication mode. This could
   include any combination of media streams, e.g., text, audio, video.

   The terms Abilities and Preferences apply to both caller and
   call-recipient.

   Relay Service: A third party or intermediary that enables
   communications between deaf, hard of hearing and speech-impaired
   people, and people without a hearing or speech impairment. Relay
   Services form a subset of the activities of Transcoding Services
   (see definition).

   Transcoding Service: A human or automated third party acting as an
   intermediary in a session between two other User Agents (being a
   User Agent itself), and transcoding one stream into another (e.g.,
   voice to text or vice versa).

   Textphone: Sometimes called a TTY (teletypewriter), TDD
   (telecommunications device for the deaf) or a minicom, a textphone
   enables a deaf, hard of hearing or speech-impaired person to place a
   call to a telephone or to another textphone. Some textphones use the
   V.18[3] protocol, which provides interworking with the textphone
   communication protocols in use world-wide.

   User: A deaf, hard of hearing or speech-impaired individual. A user
   is otherwise referred to as a person or individual, and users are
   referred to as people.
   Note: For the purposes of this document, a deaf, hard of hearing or
   speech-impaired person is an individual who chooses to use SIP
   because it can minimize or eliminate constraints in using common
   communication devices. Because SIP promises a total communication
   solution for any kind of person, regardless of ability and
   preference, this document makes no attempt to define deaf, hard of
   hearing or speech-impaired more specifically.

3. Introduction

   The background for this document is the recent development of SIP
   and SIP-based communications, and a growing awareness of deaf, hard
   of hearing and speech-impaired issues in the technical community.

   SIP's capacity to simplify setting up, managing and tearing down
   communication sessions between all kinds of User Agents has specific
   implications for deaf, hard of hearing and speech-impaired
   individuals.

   As SIP enables multiple sessions with translation between multiple
   types of media, these requirements aim to provide the standard for
   recognising and enabling these interactions, and for a
   communications model that includes all types of SIP-networking
   abilities and preferences.

4. Purpose and Scope

   The scope of this document is first to present a current set of user
   requirements for deaf, hard of hearing and speech-impaired
   individuals using SIP-enabled communications. These are followed by
   some real world scenarios in SIP communications that could be used
   in a test environment, and some concepts on how these requirements
   can be developed by service providers and User Agent manufacturers.

   These recommendations make explicit the needs of a currently often
   disadvantaged user group and attempt to match them with the capacity
   of SIP. It is not the intention here to prioritise the needs of the
   deaf, hard of hearing and speech-impaired in a way that would
   penalise other individuals.
   These requirements aim to encourage developers and manufacturers
   world-wide to consider the specific needs of deaf, hard of hearing
   and speech-impaired individuals. This document presents a vision of
   a world where deafness, hard of hearing or speech impairment is no
   longer a barrier to communication.

5. Background

   Deaf, hard of hearing and speech-impaired people are currently
   sometimes unable to use commonly available communication devices.
   Although this is documented[4], developers and manufacturers are not
   always aware of it. Communication devices for the deaf, hard of
   hearing and speech-impaired (e.g., textphones) are currently often
   primitive in design, expensive, and incompatible with the
   progressively designed, cheaper and more adaptable communication
   devices available to other individuals. For example, even though
   some textphones use the V.18 protocol to enable communications with
   other textphones across national and regional areas, many are
   limited in their compatibility with other communication devices in
   their particular country or region.

   Additionally, non-technical human communication methods (sign
   languages, lip-reading) are not standardised around the world.

   There are intermediary or third-party relay services (e.g.,
   transcoding services) that facilitate communications, uni- or
   bi-directional, for the deaf, hard of hearing and speech-impaired.
   Currently, relay services are mostly operator-assisted (manual),
   although methods of partial automation are being implemented in some
   areas. These services enable full access to modern facilities and
   conveniences for the deaf, hard of hearing and speech-impaired.
   Although these services are somewhat limited, their value is
   undeniable compared to their previous complete unavailability.

   Meanwhile, communication methods have proliferated in recent
   decades: email, mobile phones, video streaming, etc. These methods
   represent an advance in the development of data transfer
   technologies between devices.

   Developers and advocates of SIP agree that this protocol not only
   anticipates the growth in communications between convergent networks
   in real time, but also fulfils the potential of the Internet as a
   communications and information forum. Further, these developments
   allow a standard of communication that can be applied throughout all
   networking communities, regardless of abilities and preferences.

6. Deaf, Hard of Hearing and Speech-impaired Requirements for SIP

   Introduction

   The user requirements in this section are provided for the benefit
   of service providers, User Agent manufacturers and any other party
   interested in the development of products and services for the deaf,
   hard of hearing and speech-impaired.

   The user requirements are as follows:

6.1 Connection without Difficulty

   This requirement states:

   Whatever the preferences and abilities of the user and User Agent,
   there SHOULD be no difficulty in setting up SIP sessions. These
   sessions could include multiple proxies, call routing decisions,
   transcoding services, e.g., the relay service TypeTalk[5] or other
   media processing, and could include multiple simultaneous or
   alternative media streams.

   This means that any User Agent in the conversation (including
   transcoding services) MUST be able to add or remove a media stream
   from the call without having to tear it down and re-establish it.

6.2 User Profile

   This requirement states:

   Deaf, hard of hearing and speech-impaired user abilities and
   preferences (i.e., the user profile) MUST be communicable by SIP,
   and these abilities and preferences MUST determine the handling of
   the session.
   The User Profile for a deaf, hard of hearing and speech-impaired
   person might include:

   - How media streams are received and transmitted (text, voice,
     video, or any combination, uni- or bi-directional)
   - Redirecting specific media streams through a transcoding service
     (e.g., the relay service TypeTalk)
   - Roaming (e.g., a deaf person accessing their User Profile from a
     web interface at an Internet cafe)
   - Anonymity, that is, not revealing that a deaf person is calling,
     even through a transcoding service (e.g., some relay services like
     TextDirect inform the call-recipient that there is an incoming
     text call without saying that a deaf person is calling).

   Part of this requirement is to ensure that deaf, hard of hearing and
   speech-impaired people can keep their preferences and abilities
   confidential from others, to avoid possible discrimination or
   prejudice, while still being able to establish a SIP session.

6.3 Intelligent Gateways

   This requirement states:

   SIP SHOULD support a class of User Agents that perform as gateways
   for legacy systems designed for deaf, hard of hearing and
   speech-impaired people.

   For example, an individual could have a SIP User Agent acting as a
   gateway to a legacy PSTN textphone.

6.4 Inclusive Design

   This requirement states:

   Where applicable, design concepts for communications (devices,
   applications, etc.) MUST include the abilities and preferences of
   deaf, hard of hearing and speech-impaired people.

   Transcoding services and User Agents MUST be able to connect with
   each other regardless of the provider or manufacturer. This means
   that new User Agents MUST be able to support legacy protocols
   through appropriate gateways.
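   As an illustration of how a user profile (Section 6.2) might drive
   session handling, the following Python sketch models abilities and
   preferences as sets of media types and derives which streams can
   flow directly between two parties and which would need a transcoding
   service. This is not part of the requirements: the profile structure
   and all names are invented for illustration, and a real SIP system
   would carry such information in signalling (e.g., session
   descriptions), not in this form.

   ```python
   # Hypothetical sketch: a user profile as sets of media types.
   # Field names and the planning logic are illustrative assumptions,
   # not anything defined by SIP or by this document.
   from dataclasses import dataclass, field

   @dataclass
   class UserProfile:
       receives: set = field(default_factory=set)  # media the user can receive
       sends: set = field(default_factory=set)     # media the user can send
       anonymous: bool = False  # keep abilities hidden from the far end

   def plan_session(caller: UserProfile, callee: UserProfile):
       """Split the caller's outgoing streams into those the callee can
       receive directly and those needing a transcoding service."""
       direct = caller.sends & callee.receives
       # Streams the callee cannot receive must pass through a
       # transcoder (e.g., voice -> text via a relay service).
       needs_transcoding = caller.sends - callee.receives
       return direct, needs_transcoding

   deaf_user = UserProfile(receives={"text", "video"}, sends={"text"},
                           anonymous=True)
   hearing_user = UserProfile(receives={"audio", "text"}, sends={"audio"})

   direct, transcode = plan_session(hearing_user, deaf_user)
   print(direct)     # streams both sides handle natively
   print(transcode)  # streams that must pass through a transcoder
   ```

   In this sketch the hearing caller's voice stream needs transcoding
   toward the deaf callee, while the deaf callee's text reaches the
   hearing caller directly; the `anonymous` flag stands in for the
   confidentiality aspect of the requirement.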
6.5 Resource Management

   This requirement states:

   User Agents SHOULD be able to identify the content of a media stream
   in order to obtain such information as the cost of the media stream,
   whether a transcoding service can support it, etc.

   User Agents SHOULD be able to choose among transcoding services and
   similar services based on their capabilities (e.g., whether a
   transcoding service carries a particular media stream) and any
   policy constraints they impose (e.g., charging for use). It SHOULD
   be possible for User Agents to discover the availability of
   alternative media streams and to choose from them.

6.6 Confidentiality and Security

   This requirement states:

   All third parties or intermediaries (transcoding services) employed
   in a session for deaf, hard of hearing and speech-impaired people
   MUST offer a confidentiality policy. All information exchanged in
   this type of session SHOULD be secure, that is, erased before
   confidentiality can be breached, unless otherwise required.

   This means that transcoding services (e.g., interpretation,
   translation) MUST publish their confidentiality and security
   policies.

7. Some Real World Scenarios

   These scenarios are intended to show some of the various types of
   media streams that would be initiated, managed, directed and
   terminated in a SIP-enabled network, and to show how some resources
   might be managed between SIP-enabled networks, transcoding services
   and service providers.

   To illustrate the communications dynamic of these kinds of
   scenarios, each one specifically mentions the kind of media streams
   transmitted, and whether User Agents and Transcoding Services are
   involved.

7.1 Transcoding Service

   In this scenario, a hearing person calls the household of a deaf
   person and a hearing person.

   1. A voice conversation is initiated between the hearing
      participants:

      ( Phone A ) <-----Voice-----> ( Phone B )

   2. During the conversation, the hearing person asks to talk with the
      deaf person, while keeping the voice connection open so that
      voice-to-voice communications can continue if required.
   3. A Relay Service is invited into the conversation.
   4. The Relay Service transcodes the hearing person's words into
      text.
   5. Text from the hearing person's voice appears on the display of
      the deaf person's User Agent.
   6. The deaf person types a response.
   7. The Relay Service receives the text and reads it to the hearing
      person:

      (         ) <------------------Voice----------------> (         )
      ( Phone A ) -----Voice---> ( Voice To Text  ) -Text-> ( Phone B )
      (         ) <----Voice---- (Service Provider) <-Text- (         )

   8. The hearing person asks to talk with the hearing person on the
      other side.
   9. The Relay Service withdraws from the call.

7.2 Media Service Provider

   In this scenario, a deaf person wishes to listen to a radio program
   through a text stream of the program's audio stream.

   1. The deaf person attempts to establish a connection to the radio
      broadcast, with User Agent preferences set to receive the audio
      stream as text.
   2. The User Agent of the deaf person queries the radio station User
      Agent on whether a text stream is available in addition to the
      audio stream.
   3. However, the radio station has no text stream available for a
      deaf listener, and responds in the negative.
   4. As no text stream is available, the deaf person's User Agent
      requests a voice-to-text transcoding service (e.g., a real-time
      captioning service like TypeTalk) to come into the conversation
      space.
   5. The transcoding service User Agent identifies the audio stream as
      a radio broadcast. However, the policy of the transcoding service
      is that it does not accept radio broadcasts, because they would
      overload its resources far too quickly.
   6. In this case, the connection fails.
   Alternatively, continuing from 2 above:

   3. The radio station does provide text with its audio streams.
   4. The deaf person receives a text stream of the radio program.

   Note: To support the deaf, hard of hearing and speech-impaired,
   service providers are encouraged to provide text with audio streams.

7.3 Sign Language Interface

   In this scenario, a deaf person enables a signing avatar (e.g.,
   VisiCAST[6]) by setting up a User Agent to receive audio streams as
   XML data that operates a sign-language avatar. For outgoing
   communications, the deaf person types text that is transcoded into
   an audio stream for the other conversation participant.

   For example:

   (         )-Voice->(Voice To Synthetic Signing)--XML Data-->(        )
   ( hearing )        (                          )             ( deaf   )
   ( Party A )<-Voice-(      Text To Voice       )<---Text-----(Party B )
   (         )        (     Service Provider     )             (        )

7.4 Synthetic Lip-speaking Support for Voice Calls

   In order to receive voice calls, a hard of hearing person uses
   synthetic lip-speaking avatar software (e.g., Synface[7]) on a PC.
   The synthetic lip-speaking software processes voice (audio) stream
   data and displays an animated face that a hard of hearing person may
   be able to lip-read. During a conversation, the hard of hearing
   person uses the synthetic lip-speaking software as support for
   understanding the audio stream.

   For example:

   (         ) <-----Voice-----> ( hard of ) -Voice-> ( PC with      )
   ( hearing )                   ( hearing )          ( synthetic    )
   ( Party A )                   ( Party B )          ( lip-speaking )
   (         )                   (         )          ( software     )

7.5 Voice-Activated Menu Systems

   In this scenario, a deaf person wishes to book cinema tickets with a
   credit card, using a textphone to place the call. The cinema employs
   a voice-activated menu system for film titles and showing times.

   1. The deaf person places a call to the cinema with a textphone:

      (Textphone) <-----Text-----> (Voice-activated System)

   2. The cinema's voice-activated menu requests an auditory response
      to continue.
   3. A Relay Service is invited into the conversation.
   4. The Relay Service transcodes the prompts of the voice-activated
      menu into text.
   5. Text from the voice-activated menu appears on the display of the
      deaf person's textphone.
   6. The deaf person types a response.
   7. The Relay Service receives the text and reads it to the
      voice-activated system:

      (          )         (Relay Service )          (               )
      (Textphone ) -Text-> (Provider      ) -Voice-> (Voice-Activated)
      (          ) <-Text- (Text To Voice ) <-Voice- (System         )

   8. The transaction is finalised with a confirmed booking time.
   9. The Relay Service withdraws from the call.

7.6 Conference Call

   A conference call is scheduled between five persons:

   - Person A listens and types text (hearing, no speech)
   - Person B recognises sign language and signs back (deaf, no speech)
   - Person C reads text and speaks (deaf or hearing impaired)
   - Person D listens and speaks
   - Person E recognises sign language, reads text and signs

   A conference call server calls the five people and, based on their
   preferences, sets up the different transcoding services required.
   Assuming English is the base language for the call, the following
   intermediate transcoding services are invoked:

   - A transcoding service (English speech to English text)
   - An English text to sign language service
   - A sign language to English text service
   - An English text to English speech service

   Note: In order to translate from English speech to sign language, a
   chain of intermediate transcoding services is used (speech-to-text
   transcoding followed by English text to sign language), because no
   speech-to-sign-language service was available for direct
   translation. The same applies to the translation from sign language
   to English speech.
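   The chaining described in the note above can be sketched as a small
   search over the available transcoders. The registry, the media-type
   labels and the service names below are invented for illustration; a
   real conference server would discover and invite services through
   SIP mechanisms rather than consult a static table.

   ```python
   # Hypothetical sketch: finding a chain of transcoding services when
   # no direct translation exists. The table of available services is
   # an illustrative assumption, not a defined registry.
   AVAILABLE = {
       ("speech", "text"): "speech-to-text service",
       ("text", "speech"): "text-to-speech service",
       ("text", "sign"): "text-to-sign avatar",
       ("sign", "text"): "sign-to-text service",
   }

   def find_chain(src, dst, seen=None):
       """Depth-first search for a chain of transcoders from src to
       dst; returns the list of service names, or None if no chain
       exists."""
       if src == dst:
           return []
       seen = (seen or set()) | {src}
       for (a, b), name in AVAILABLE.items():
           if a == src and b not in seen:
               rest = find_chain(b, dst, seen)
               if rest is not None:
                   return [name] + rest
       return None

   # Speech reaches a sign-language user only via two chained services.
   print(find_chain("speech", "sign"))
   ```

   With the table above, speech-to-sign resolves to the two-service
   chain the note describes, and the reverse direction chains
   sign-to-text with text-to-speech.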
   (Person A) ----- Text ----> ( Text-to-SL  ) ---- Video ----> (Person B)
              ---------------------- Text --------------------> (Person C)
              ----- Text ----> (Text-to-Speech) --- Voice ----> (Person D)
              ---------------------- Text --------------------> (Person E)
              ----- Text ----> ( Text-to-SL  ) ---- Video ----> (Person E)

   (Person B) -Video-> (SL-to-Text) -Text-> (Text-to-Speech) -> (Person A)
              ---- Video ----> ( SL-to-Text  ) ---- Text ----> (Person C)
              -Video-> (SL-to-Text) -Text-> (Text-to-Speech) -> (Person D)
              --------------------- Video --------------------> (Person E)
              ---- Video ----> ( SL-to-Text  ) ---- Text ----> (Person E)

   (Person C) --------------------- Voice --------------------> (Person A)
              Voice->(Speech-to-Text)-Text->(Text-to-SL)-Video->(Person B)
              --------------------- Voice --------------------> (Person D)
              ---- Voice ----> (Speech-to-Text) ---- Text ----> (Person E)
              Voice->(Speech-to-Text)-Text->(Text-to-SL)-Video->(Person E)

   (Person D) --------------------- Voice --------------------> (Person A)
              Voice->(Speech-to-Text)-Text->(Text-to-SL)-Video->(Person B)
              ---- Voice ----> (Speech-to-Text) ---- Text ----> (Person C)
              ---- Voice ----> (Speech-to-Text) ---- Text ----> (Person E)
              Voice->(Speech-to-Text)-Text->(Text-to-SL)-Video->(Person E)

   (Person E) -Video-> (SL-to-Text) -Text-> (Text-to-Speech) -> (Person A)
              --------------------- Video --------------------> (Person B)
              ---- Video ----> ( SL-to-Text  ) ---- Text ----> (Person C)
              -Video-> (SL-to-Text) -Text-> (Text-to-Speech) -> (Person D)

   Remarks:

   - Some services might be shared by users and/or other services.
   - Person E uses two parallel streams (SL and English text). The User
     Agent might perform time synchronisation when displaying the
     streams. However, this would require synchronisation information
     to be present on the streams.
   - The session protocols might support optional buffering of media
     streams, so that users and/or intermediate services could go back
     to previous content or invoke a transcoding service for content
     they just missed.
   - Hearing-impaired users might still receive audio as well, which
     they can use to drive visual indicators so that they can better
     see, for instance, where the pauses in the conversation are.

8. Some Suggestions for Service Providers and User Agent Manufacturers

   This section is included to encourage service providers and User
   Agent manufacturers to develop products and services that can be
   used by as wide a range of individuals as possible, including the
   deaf, hard of hearing and speech-impaired.

   - Service providers and User Agent manufacturers can offer a deaf,
     hard of hearing or speech-impaired person the possibility of
     hiding their specific preferences and abilities from being made
     public in any transaction.
   - If a User Agent performs auditory signalling, for example a pager,
     it could also provide another signalling method: visual (flashing
     light) or tactile (vibration).
   - Service providers who allow the user to store specific settings or
     preferences and abilities (i.e., a user profile) might consider
     storing these settings in a central repository, accessible
     regardless of the location of the user and of the User Agent used
     at that time or location.
   - If several transcoding services are available, the User Agent can
     be set to select the most economical or highest-quality service.
   - The service provider can show the cost per minute and any minimum
     charge of a transcoding service call before a session starts,
     allowing the user the choice of engaging the service or not.
   - Service providers are encouraged to offer an alternative stream
     with audio streams, for example, text or data streams that operate
     avatars, etc.
   - Service providers are encouraged to provide a text alternative to
     voice-activated menus, e.g., answering and voice mail systems.
   - Manufacturers of voice-activated software are encouraged to
     provide an alternative visual format for software prompts, menus,
     messages and status information.
   - Manufacturers of mobile phones are encouraged to develop equipment
     designs in which radio wave signals do not interfere with hearing
     aids.
   - All services for interpreting, transliterating, or facilitating
     communications for deaf, hard of hearing and speech-impaired
     people are required to:
     - Keep information exchanged during the transaction strictly
       confidential
     - Enable information exchange literally and simply, without
       deviating from or compromising the content
     - Facilitate communication without bias, prejudice or opinion
     - Match skill sets to the requirements of the individual
     - Behave in a professional and appropriate manner
     - Be fair in claiming compensation for services
     - Strive to improve the skill sets used for their services.
   - Conference calling services might consider ways to allow users who
     employ transcoding services (which usually introduce a delay) to
     have real-time information sufficient to identify gaps in the
     conversation so that they can inject comments, as well as ways to
     raise their hand, vote and carry out other activities where the
     timing of their response relative to the real-time conversation is
     important.

9. Acknowledgements

   The authors would like to thank the following individuals for their
   contributions to this draft:

   David R. Oran, Cisco
   Mark Watson, Nortel Networks
   Brian Grover, RNID
   Anthony Rabin, RNID
   Michael Hammer, Cisco
   Henry Sinnreich, Worldcom
   Rohan Mahy, Cisco
   Julian Branston, Cedalion Hosting Services
   Judy Harkins, Gallaudet University, Washington, D.C.
   Cary Barbin, Gallaudet University, Washington, D.C.
   Gregg Vanderheiden, Trace R&D Center, University of
   Wisconsin-Madison
   Gottfried Zimmerman, Trace R&D Center, University of
   Wisconsin-Madison

10. Authors' Addresses

   Nathan Charlton, RNID, nathan.charlton@rnid.org.uk
   Mick Gasson, RNID, mike.gasson@rnid.org.uk
   Guido Gybels, RNID, Guido.Gybels@rnid.org.uk
   Mike Spanner, RNID, mike.spanner@rnid.org.uk
   19-23 Featherstone Street
   London EC1Y 8SL
   Tel: +44-20 7296 8000
   Textphone: +44-20 7296 8001
   Fax: +44-20 7296 8199

   Arnoud van Wijk
   Ericsson EuroLab Netherlands BV
   P.O. Box 8
   5120 AA Rijen
   The Netherlands
   Fax: +31-161-247569
   Email: Arnoud.van.Wijk@eln.ericsson.se

   Comments can be sent to the SIPPING mailing list.

References and Notes

   1. Bradner, S., "The Internet Standards Process -- Revision 3",
      BCP 9, RFC 2026, October 1996.
   2. Bradner, S., "Key words for use in RFCs to Indicate Requirement
      Levels", BCP 14, RFC 2119, March 1997.
   3. ITU-T Recommendation V.18, "Operational and interworking
      requirements for DCEs operating in the text telephone mode",
      November 2000.
   4. Moore, Matthew, et al., "For Hearing People Only: Answers to Some
      of the Most Commonly Asked Questions About the Deaf Community,
      Its Culture, and the Deaf Reality", MSM Productions Ltd., 2nd
      Edition, September 1993.
   5. TypeTalk is a UK-based Relay Service with human operators
      performing text-to-voice and voice-to-text transcoding services.
   6. For more information on signing avatars, see www.visicast.co.uk.
   7. For more information on synthetic lip-reading software, see
      www.speech.kth.se/teleface.

Full Copyright Statement

   Copyright (C) The Internet Society (2002). All Rights Reserved.

   This document and translations of it may be copied and furnished to
   others, and derivative works that comment on or otherwise explain it
   or assist in its implementation may be prepared, copied, published
   and distributed, in whole or in part, without restriction of any
   kind, provided that the above copyright notice and this paragraph
   are included on all such copies and derivative works. However, this
   document itself may not be modified in any way, such as by removing
   the copyright notice or references to the Internet Society or other
   Internet organizations, except as needed for the purpose of
   developing Internet standards in which case the procedures for
   copyrights defined in the Internet Standards process must be
   followed, or as required to translate it into languages other than
   English.

   The limited permissions granted above are perpetual and will not be
   revoked by the Internet Society or its successors or assigns.

   This document and the information contained herein is provided on an
   "AS IS" basis and THE INTERNET SOCIETY AND THE INTERNET ENGINEERING
   TASK FORCE DISCLAIMS ALL WARRANTIES, EXPRESS OR IMPLIED, INCLUDING
   BUT NOT LIMITED TO ANY WARRANTY THAT THE USE OF THE INFORMATION
   HEREIN WILL NOT INFRINGE ANY RIGHTS OR ANY IMPLIED WARRANTIES OF
   MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.

   Charlton/Gasson/Gybels/Spanner/van Wijk               [Page 13 of 13]