Speechsc                                                         D. Oran
Internet-Draft                                       Cisco Systems, Inc.
Expires: November 11, 2005                                  May 10, 2005

      Requirements for Distributed Control of ASR, SI/SV and TTS
                               Resources
                      draft-ietf-speechsc-reqts-07

Status of this Memo

   By submitting this Internet-Draft, each author represents that any
   applicable patent or other IPR claims of which he or she is aware
   have been or will be disclosed, and any of which he or she becomes
   aware will be disclosed, in accordance with Section 6 of BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF), its areas, and its working groups.  Note that
   other groups may also distribute working documents as Internet-
   Drafts.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."

   The list of current Internet-Drafts can be accessed at
   http://www.ietf.org/ietf/1id-abstracts.txt.

   The list of Internet-Draft Shadow Directories can be accessed at
   http://www.ietf.org/shadow.html.

   This Internet-Draft will expire on November 11, 2005.

Copyright Notice

   Copyright (C) The Internet Society (2005).

Abstract

   This document outlines the needs and requirements for a protocol to
   control distributed speech processing of audio streams.  By speech
   processing, this document specifically means automatic speech
   recognition (ASR), speaker recognition - which includes both speaker
   identification (SI) and speaker verification (SV) - and text-to-
   speech (TTS).  Other IETF protocols, such as SIP and RTSP, address
   rendezvous and control for generalized media streams.  However,
   speech processing presents additional requirements that none of the
   extant IETF protocols address.

Table of Contents

   1.  Introduction
   2.  SPEECHSC Framework
       2.1   TTS Example
       2.2   Automatic speech recognition example
       2.3   Speaker Identification example
   3.  General Requirements
       3.1   Reuse Existing Protocols
       3.2   Maintain Existing Protocol Integrity
       3.3   Avoid Duplicating Existing Protocols
       3.4   Efficiency
       3.5   Invocation of services
       3.6   Location and Load Balancing
       3.7   Multiple services
       3.8   Multiple media sessions
       3.9   Users with disabilities
       3.10  Identification of process which produced media or
             control output
   4.  TTS Requirements
       4.1   Requesting Text Playback
       4.2   Text Formats
             4.2.1  Plain Text
             4.2.2  SSML
             4.2.3  Text in Control Channel
             4.2.4  Document Type Indication
       4.3   Control Channel
       4.4   Media origination/termination by control elements
       4.5   Playback Controls
       4.6   Session Parameters
       4.7   Speech Markers
   5.  ASR Requirements
       5.1   Requesting Automatic Speech Recognition
       5.2   XML
       5.3   Grammar Requirements
             5.3.1  Grammar Specification
             5.3.2  Explicit Indication of Grammar Format
             5.3.3  Grammar Sharing
       5.4   Session Parameters
       5.5   Input Capture
   6.  Speaker Identification and Verification Requirements
       6.1   Requesting SI/SV
       6.2   Identifiers for SI/SV
       6.3   State for multiple utterances
       6.4   Input Capture
       6.5   SI/SV functional extensibility
   7.  Duplexing and Parallel Operation Requirements
       7.1   Full Duplex operation
       7.2   Multiple services in parallel
       7.3   Combination of services
   8.  Additional Considerations (non-normative)
   9.  Security Considerations
       9.1   SPEECHSC protocol security
       9.2   Client and server implementation and deployment
       9.3   Use of SPEECHSC for security functions
   10. Acknowledgements
   11. References
       11.1  Normative References
       11.2  Informative References
   Author's Address
   Intellectual Property and Copyright Statements

1. Introduction

   There are multiple IETF protocols for establishment and termination
   of media sessions (SIP [5]), low-level media control (MGCP [6] and
   MEGACO [7]), and media record and playback (RTSP [8]).  This
   document focuses on requirements for one or more protocols to
   support the control of network elements that perform Automated
   Speech Recognition (ASR), speaker identification or verification
   (SI/SV), and rendering text into audio, also known as Text-to-
   Speech (TTS).
   Many multimedia applications can benefit from having automatic
   speech recognition (ASR) and text-to-speech (TTS) processing
   available as a distributed, network resource.  This requirements
   document limits its focus to the distributed control of ASR, SI/SV,
   and TTS servers.

   A broad range of systems can benefit from a unified approach to the
   control of TTS, ASR, and SI/SV.  These include environments such as
   VoIP gateways to the PSTN, IP telephones, media servers, and
   wireless mobile devices that obtain speech services via servers on
   the network.

   To date, there are a number of proprietary ASR and TTS APIs, as
   well as two IETF drafts that address this problem [12], [13].
   However, there are serious deficiencies in the existing drafts.  In
   particular, they mix the semantics of existing protocols, yet are
   close enough to those protocols to confuse implementers.

   This document sets forth requirements for protocols to support
   distributed speech processing of audio streams.  For simplicity,
   and to remove confusion with existing protocol proposals, this
   document presents the requirements as being for a "framework" that
   addresses the distributed control of speech resources.  It refers
   to such a framework as "SPEECHSC", for Speech Services Control.

   Discussion of this and related documents is on the speechsc mailing
   list.  To subscribe, send the message "subscribe speechsc" to
   speechsc-request@ietf.org.  The public archive is at
   http://www.ietf.org/mail-archive/workinggroups/speechsc/current/
   maillist.html

2. SPEECHSC Framework

   Figure 1 below shows the SPEECHSC framework for speech processing.

                 +-------------+
                 | Application |
                 |   Server    |\
                 +-------------+ \  SPEECHSC
          SIP, VoiceXML,  /       \
               etc.      /         \
         +------------+ /           \   +-------------+
         |   Media    |/ SPEECHSC    \--| ASR, SI/SV  |
         | Processing |-----------------| and/or TTS  |
     RTP |   Entity   |      RTP        |   Server    |
    =====|            |=================|             |
         +------------+                 +-------------+

                    Figure 1: SPEECHSC Framework

   The "Media Processing Entity" is a network element that processes
   media.  It may be either a pure media handler, or it may also have
   an associated SIP user agent, VoiceXML browser, or other control
   entity.  The "ASR, SI/SV and/or TTS Server" is a network element
   that performs the back-end speech processing.  It may generate an
   RTP stream as output based on text input (TTS) or return
   recognition results in response to an RTP stream as input (ASR,
   SI/SV).  The "Application Server" is a network element that
   instructs the Media Processing Entity on what transformations to
   make to the media stream.  Those instructions may be established
   via a session protocol such as SIP, or provided via a client/server
   exchange such as VoiceXML.  The framework allows either the Media
   Processing Entity or the Application Server to control the ASR or
   TTS server using SPEECHSC as a control protocol, which accounts for
   SPEECHSC appearing twice in the diagram.

   Physically, the entities may each reside on a separate platform, or
   several entities may be combined in a single physical instance.
   For example, a VoiceXML [10] gateway may combine the ASR and TTS
   functions on the same platform as the Media Processing Entity.
   Note that VoiceXML gateways themselves are outside the scope of
   this protocol.
   Likewise, one can combine the Application Server and Media
   Processing Entity, as would be the case in an interactive voice
   response (IVR) platform.

   One can also decompose the Media Processing Entity into an entity
   that controls media endpoints and entities that process media
   directly.  Such would be the case with a decomposed gateway using
   MGCP or MEGACO.  However, this decomposition is again orthogonal to
   the scope of SPEECHSC.  The following subsections provide a number
   of example use cases of the SPEECHSC framework, one each for TTS,
   ASR, and SI/SV.  They are intended to be illustrative only, and not
   to imply any restriction on the scope of the framework or to limit
   the decomposition or configuration to that shown in the examples.

2.1 TTS Example

   This example illustrates a simple usage of SPEECHSC to provide a
   text-to-speech service for playing announcements to a user on a
   phone with no display for textual error messages.  The example
   scenario is shown below in Figure 2.  In the figure, the VoIP
   gateway acts as both the Media Processing Entity and the
   Application Server of the SPEECHSC framework in Figure 1.

                                  +---------+
                                 _|   SIP   |
                               _/ |  Server |
              +-----------+ SIP/  +---------+
              |           |  _/
   +-------+  |   VoIP    |_/
   | POTS  |__|  Gateway  |   RTP   +---------+
   | Phone |  | (SIP UA)  |=========|         |
   +-------+  |           |\_       | SPEECHSC|
              +-----------+  \      |   TTS   |
                              \__   |  Server |
                       SPEECHSC  \  |         |
                                  \_|         |
                                    +---------+

              Figure 2: Text-to-speech example of SPEECHSC

   The POTS phone on the left attempts to make a phone call.  The VoIP
   gateway, acting as a SIP UA, tries to establish a SIP session to
   complete the call, but gets an error, such as a SIP "486 Busy Here"
   response.  Without SPEECHSC, the gateway would most likely just
   output a busy signal to the POTS phone.  However, with SPEECHSC
   access to a TTS server, it can provide a spoken error message.  The
   VoIP gateway therefore constructs a text error string using
   information from the SIP messages, such as "Your call to
   978-555-1212 did not go through because the called party was busy".
   It can then use SPEECHSC to establish an association with a
   SPEECHSC server, open an RTP stream between itself and the server,
   and issue a TTS request for the error message, which will be played
   to the user on the POTS phone.
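   For concreteness, the text to be spoken could be conveyed to the
   TTS server as an SSML [1] document (see Section 4.2.2).  The
   following fragment is purely illustrative and non-normative; in
   particular, the "say-as" interpretation values are defined outside
   of SSML 1.0 itself:

      <?xml version="1.0"?>
      <speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis"
             xml:lang="en-US">
        Your call to
        <say-as interpret-as="telephone">978-555-1212</say-as>
        did not go through because the called party was busy.
      </speak>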
2.2 Automatic speech recognition example

   This example illustrates a VXML-enabled Media Processing Entity and
   associated Application Server using the SPEECHSC framework to
   supply an ASR-based user interface through an interactive voice
   response (IVR) system.  The example scenario is shown below in
   Figure 3.  The VXML client corresponds to the "Media Processing
   Entity", while the IVR application server corresponds to the
   "Application Server" of the SPEECHSC framework of Figure 1.

                                   +------------+
                                   |    IVR     |
                                  _|Application |
                            VXML_/ +------------+
                +-----------+ __/
                |           |_/
   PSTN Trunk   |   VoIP    | SPEECHSC|            |
   =============|  Gateway  |---------|  SPEECHSC  |
                |(VXML voice|         |    ASR     |
                | browser)  |=========|   Server   |
                +-----------+   RTP   +------------+

             Figure 3: Automatic speech recognition example

   In this example, users call into the service in order to obtain
   stock quotes.  The VoIP gateway answers their PSTN call.  An IVR
   application feeds VXML scripts to the gateway to drive the user
   interaction.  The VXML interpreter on the gateway directs the
   user's media stream to the SPEECHSC ASR server and uses SPEECHSC to
   control the ASR server.

   When, for example, the user speaks the name of a stock in response
   to an IVR prompt, the SPEECHSC ASR server attempts recognition of
   the name and returns the results to the VXML gateway.  The VXML
   gateway, following standard VXML mechanisms, informs the IVR
   application of the recognized result.  The IVR application can then
   do the appropriate information lookup.  The answer, of course, can
   be sent back to the user using text-to-speech.  This example does
   not show this scenario, but it would work analogously to the
   scenario shown in Section 2.1.
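   The grammar that the ASR server applies in this example could be
   supplied by value or by reference (see Section 5.3.1).  As a purely
   illustrative, non-normative sketch, a minimal SRGS [2] grammar
   covering a few stock names might look as follows:

      <?xml version="1.0"?>
      <grammar version="1.0" xmlns="http://www.w3.org/2001/06/grammar"
               xml:lang="en-US" mode="voice" root="stock">
        <rule id="stock" scope="public">
          <one-of>
            <item>cisco systems</item>
            <item>general electric</item>
            <item>consolidated widgets</item>
          </one-of>
        </rule>
      </grammar>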
2.3 Speaker Identification example

   This example illustrates using speaker identification to allow
   voice-actuated login to an IP phone.  The example scenario is shown
   below in Figure 4.  In the figure, the IP phone acts as both the
   "Media Processing Entity" and the "Application Server" of the
   SPEECHSC framework in Figure 1.

      +-----------+          +---------+
      |           |   RTP    |         |
      |    IP     |==========| SPEECHSC|
      |   Phone   |          |  SI/SV  |
      |           |__________|  Server |
      |           | SPEECHSC |         |
      +-----------+          +---------+

             Figure 4: Speaker identification example

   In this example, a user speaks into a SIP phone in order to get
   "logged in" to that phone to make and receive phone calls using his
   identity and preferences.  The IP phone uses the SPEECHSC framework
   to set up an RTP stream between the phone and the SPEECHSC SI/SV
   server and to request verification.  The SV server verifies the
   user's identity and returns the result, including the necessary
   login credentials, to the phone via SPEECHSC.  The IP phone may use
   the identity directly to identify the user in outgoing calls, to
   fetch the user's preferences from a configuration server, or to
   request authorization from a AAA server, in any combination.  Since
   this example uses SPEECHSC to perform a security-related function,
   be sure to note the associated material in Section 9.

3. General Requirements

3.1 Reuse Existing Protocols

   To the extent feasible, the SPEECHSC framework SHOULD use existing
   protocols.

3.2 Maintain Existing Protocol Integrity

   In meeting the requirement of Section 3.1, the SPEECHSC framework
   MUST NOT redefine the semantics of an existing protocol.  Said
   differently, we will not break existing protocols or cause
   backward-compatibility problems.

3.3 Avoid Duplicating Existing Protocols

   To the extent feasible, SPEECHSC SHOULD NOT duplicate the
   functionality of existing protocols.  For example, network
   announcements using SIP [11] and RTSP [8] already define how to
   request playback of audio.  The focus of SPEECHSC is new
   functionality not addressed by existing protocols, or extending
   existing protocols within the strictures of the requirement in
   Section 3.2.  Where an existing protocol can be gracefully extended
   to support SPEECHSC requirements, such extensions are acceptable
   alternatives for meeting the requirements.

   As a corollary to this, SPEECHSC should not require a separate
   protocol to perform functions that could be easily added into the
   SPEECHSC protocol (such as redirecting media streams or discovering
   capabilities), unless it is similarly easy to embed that protocol
   directly into the SPEECHSC framework.

3.4 Efficiency

   The SPEECHSC framework SHOULD employ protocol elements known to
   result in efficient operation.  Techniques to be considered
   include:
   o  Re-use of transport connections across sessions
   o  Piggybacking of responses on requests in the reverse direction
   o  Caching of state across requests

3.5 Invocation of services

   The SPEECHSC framework MUST be compliant with the IAB OPES [3]
   framework.  The applicability of the SPEECHSC protocol will
   therefore be specified as occurring between clients and servers, at
   least one of which is operating directly on behalf of the user
   requesting the service.

3.6 Location and Load Balancing

   To the extent feasible, the SPEECHSC framework SHOULD exploit
   existing schemes for supporting service location and load
   balancing, such as the Service Location Protocol [12] or DNS SRV
   records [13].  Where such facilities are not deemed adequate, the
   SPEECHSC framework MAY define additional load-balancing techniques.

3.7 Multiple services

   The SPEECHSC framework MUST permit multiple services to operate on
   a single media stream so that either the same or different servers
   may be performing speech recognition, speaker identification or
   verification, etc., in parallel.

3.8 Multiple media sessions

   The SPEECHSC framework MUST allow a 1:N mapping between session and
   RTP channels.  For example, a single session may include an
   outbound RTP channel for TTS, an inbound one for ASR, and a
   different inbound one for SI/SV (e.g., if processed by different
   elements on the Media Resource Element).  Note: all of these can be
   described via SDP, so if SDP is utilized for media channel
   description, this requirement is met "for free".
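   For illustration only (this sketch is non-normative, and the host
   names, ports, and payload types are placeholders), an SDP
   description of such a session, seen from the Media Processing
   Entity, might carry three audio streams:

      v=0
      o=mpe 2890844526 2890844526 IN IP4 mpe.example.com
      s=SPEECHSC media session
      c=IN IP4 mpe.example.com
      t=0 0
      m=audio 49170 RTP/AVP 0
      a=recvonly
      m=audio 49172 RTP/AVP 0
      a=sendonly
      m=audio 49174 RTP/AVP 0
      a=sendonly

   Here the "recvonly" stream would carry TTS output toward the
   client, while the two "sendonly" streams would feed ASR and SI/SV
   processing, respectively, possibly on different servers.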
3.9 Users with disabilities

   The SPEECHSC framework must have sufficient capabilities to address
   the critical needs of people with disabilities.  In particular, the
   set of requirements set forth in RFC 3351 [4] MUST be taken into
   account by the framework.  It is also important that implementers
   of SPEECHSC clients and servers be cognizant that some interaction
   modalities of SPEECHSC may be inconvenient, or simply
   inappropriate, for disabled users.  Hearing-impaired individuals
   may find TTS of limited utility.  Speech-impaired users may be
   unable to make use of ASR or SI/SV capabilities.  Therefore,
   systems employing SPEECHSC MUST provide alternative interaction
   modes or avoid the use of speech processing entirely.

3.10 Identification of process which produced media or control output

   The client of a SPEECHSC operation SHOULD be able to ascertain via
   the SPEECHSC framework which speech process produced the output.
   For example, an RTP stream containing the spoken output of TTS
   should be identifiable as TTS output, and the recognized utterance
   of ASR should be identifiable as having been produced by ASR
   processing.

4. TTS Requirements

4.1 Requesting Text Playback

   The SPEECHSC framework MUST allow a Media Processing Entity or
   Application Server, using a control protocol, to request the TTS
   server to play back text as voice in an RTP stream.

4.2 Text Formats

4.2.1 Plain Text

   The SPEECHSC framework MAY assume that all TTS servers are capable
   of reading plain text.  For reading plain text, the framework MUST
   allow the language and voicing to be indicated via session
   parameters.  For finer control over such properties, see [1].

4.2.2 SSML

   The SPEECHSC framework MUST support SSML [1] basics, and SHOULD
   support other SSML tags.  The framework assumes all TTS servers are
   capable of reading SSML-formatted text.  Internationalization of
   TTS in the SPEECHSC framework, including multi-lingual output
   within a single utterance, is accomplished via SSML xml:lang tags.

4.2.3 Text in Control Channel

   The SPEECHSC framework assumes all TTS servers accept text over the
   SPEECHSC connection for reading over the RTP connection.  The
   framework assumes the server can accept text either "by value"
   (embedded in the protocol) or "by reference" (e.g., by
   de-referencing a URI embedded in the protocol).

4.2.4 Document Type Indication

   A document type specifies the syntax in which the text to be read
   is encoded.  The SPEECHSC framework MUST be capable of explicitly
   indicating the document type of the text to be processed, as
   opposed to forcing the server to infer the content type by other
   means.

4.3 Control Channel

   The SPEECHSC framework MUST be capable of establishing the control
   channel between the client and server on a per-session basis, where
   a session is loosely defined to be associated with a single "call"
   or "dialog".  The protocol SHOULD be capable of maintaining a
   long-lived control channel for multiple sessions serially, and MAY
   be capable of shorter time horizons as well, including as short as
   for the processing of a single utterance.

4.4 Media origination/termination by control elements

   The SPEECHSC framework MUST NOT require the controlling element
   (Application Server, Media Processing Entity) to accept or
   originate media streams.  Media streams MAY originate from and
   terminate at the controlled element (ASR, TTS, etc.).

4.5 Playback Controls

   The SPEECHSC framework MUST support "VCR controls" for controlling
   the playout of streaming media output from SPEECHSC processing, and
   MUST allow for servers with varying capabilities to accommodate
   such controls.  The protocol SHOULD allow clients to state which
   controls they wish to use, and servers to report which ones they
   honor.  These capabilities include:
   o  The ability to jump in time to the location of a specific
      marker.
   o  The ability to jump in time, forwards or backwards, by a
      specified amount of time.  Valid time units MUST include
      seconds, words, paragraphs, sentences, and markers.
   o  The ability to increase and decrease playout speed.
   o  The ability to fast-forward and fast-rewind the audio, where
      snippets of audio are played as the server moves forwards or
      backwards in time.
   o  The ability to pause and resume playout.
   o  The ability to increase and decrease playout volume.
   These controls SHOULD be made easily available to users through the
   client user interface and through per-user customization
   capabilities of the client.  This is particularly important for
   hearing-impaired users, who will likely desire settings and control
   regimes different from those that would be acceptable for
   non-impaired users.

4.6 Session Parameters

   The SPEECHSC framework MUST support the specification of session
   parameters, such as language, prosody, and voicing.

4.7 Speech Markers

   The SPEECHSC framework MUST accommodate speech markers, with
   capability at least as flexible as that provided in SSML [1].  The
   framework MUST further provide an efficient mechanism for reporting
   that a marker has been reached during playout.
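   As a purely illustrative, non-normative sketch, the SSML [1]
   fragment below embeds two markers (the marker names are
   hypothetical); the server would report over SPEECHSC as playout
   passes each "mark" element, and the xml:lang attribute likewise
   illustrates the internationalization mechanism of Section 4.2.2:

      <?xml version="1.0"?>
      <speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis"
             xml:lang="en-US">
        Your flight departs at <mark name="time"/> seven thirty PM
        from gate <mark name="gate"/> B twelve.
      </speak>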
5. ASR Requirements

5.1 Requesting Automatic Speech Recognition

   The SPEECHSC framework MUST allow a Media Processing Entity or
   Application Server to request the ASR server to perform automatic
   speech recognition on an RTP stream, returning the results over
   SPEECHSC.

5.2 XML

   The SPEECHSC framework assumes that all ASR servers support the
   VoiceXML speech recognition grammar specification (SRGS) for speech
   recognition [2].

5.3 Grammar Requirements

5.3.1 Grammar Specification

   The SPEECHSC framework assumes all ASR servers are capable of
   accepting grammar specifications either "by value" (embedded in the
   protocol) or "by reference" (e.g., by de-referencing a URI embedded
   in the protocol).  The latter MUST allow the indication of a
   grammar already known to, or otherwise "built in" to, the server.
   The framework and protocol further SHOULD exploit the ability to
   store and later retrieve by reference large grammars that were
   originally supplied by the client.

5.3.2 Explicit Indication of Grammar Format

   The SPEECHSC framework protocol MUST be able to explicitly convey
   the grammar format in which the grammar is encoded and MUST be
   extensible to allow for conveying new grammar formats as they are
   defined.

5.3.3 Grammar Sharing

   The SPEECHSC framework SHOULD exploit the sharing of grammars
   across sessions for servers that are capable of doing so.  This
   supports applications with large grammars for which dynamic
   loading is unrealistic.  An example is a city-country grammar for
   a weather service.

5.4 Session Parameters

   The SPEECHSC framework MUST accommodate at a minimum all of the
   protocol parameters currently defined in MRCP [9].  In addition,
   there SHOULD be a capability to reset parameters within a session.

5.5 Input Capture

   The SPEECHSC framework MUST support a method of directing the ASR
   server to capture the input media stream for later analysis and
   tuning of the ASR engine.

6. Speaker Identification and Verification Requirements

6.1 Requesting SI/SV

   The SPEECHSC framework MUST allow a Media Processing Entity to
   request the SI/SV server to perform speaker identification or
   verification on an RTP stream, returning the results over SPEECHSC.

6.2 Identifiers for SI/SV

   The SPEECHSC framework MUST accommodate an identifier for each
   verification resource and permit control of that resource by ID,
   because voiceprint format and contents are vendor specific.

6.3 State for multiple utterances

   The SPEECHSC framework MUST work with SI/SV servers that maintain
   state in order to handle multi-utterance verification.

6.4 Input Capture

   The SPEECHSC framework MUST support a method for capturing the
   input media stream for later analysis and tuning of the SI/SV
   engine.  The framework may assume that all servers are capable of
   doing so.  In addition, the framework assumes that the captured
   stream contains enough timestamp context (e.g., the NTP time range
   from the RTCP packets that corresponds to the RTP timestamps of the
   captured input) to ascertain after the fact exactly when the
   verification was requested.
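   For example (with illustrative values only), if an RTCP sender
   report maps NTP time 10:00:00.000 to RTP timestamp 160000, and the
   codec uses an 8000 Hz RTP clock, then a captured sample bearing RTP
   timestamp 168000 was generated at 10:00:00.000 plus
   (168000 - 160000)/8000 seconds, i.e., at 10:00:01.000.  It is this
   context that allows the captured input to be aligned with the
   wall-clock time at which verification was requested.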
6.5 SI/SV functional extensibility

   The SPEECHSC framework SHOULD be extensible to additional functions
   associated with SI/SV, such as prompting, utterance verification,
   and retraining.

7. Duplexing and Parallel Operation Requirements

   A very important requirement for an interactive speech-driven
   system is that user perception of the quality of the interaction
   depends strongly on the ability of the user to interrupt a prompt
   or rendered TTS with speech.  Interrupting, or barging in on, the
   speech output requires more than energy detection from the user's
   direction.  Many advanced systems halt the media towards the user
   by employing the ASR engine to decide whether an utterance is
   likely to be real speech, as opposed to, for example, a cough.

7.1 Full Duplex operation

   To achieve low latency between utterance detection and the halting
   of playback, many implementations combine the speaking and ASR
   functions.  The SPEECHSC framework MUST support such full-duplex
   implementations.

7.2 Multiple services in parallel

   Good spoken user interfaces typically depend upon the ease with
   which the user can accomplish his or her task.  When making use of
   speaker identification or verification technologies, user-
   interface improvements often come from the combination of the
   different technologies: simultaneous identity claim and
   verification (on the same utterance), or simultaneous knowledge and
   voice verification (using ASR and verification simultaneously).
   Using ASR and verification on the same utterance is in fact the
   only way to support rolling or dynamically generated challenge
   phrases (e.g., "say 51723").  The SPEECHSC framework MUST support
   such parallel service implementations.

7.3 Combination of services

   It is optionally of interest that the SPEECHSC framework support
   more complex remote combination and control of speech engines:
   o  Combination in series of engines that may then act on the input
      or output of ASR, TTS, or speaker recognition engines.  The
      control MAY then extend beyond such engines to include other
      audio input and output processing and natural language
      processing.
   o  Intermediate exchanges and coordination between engines.
   o  Remote specification of flows between engines.
   These capabilities MAY benefit from service discovery mechanisms
   (e.g., discovery of engines, properties, and states).

8. Additional Considerations (non-normative)

   The framework assumes that SDP will be used to describe media
   sessions and streams.  The framework further assumes RTP carriage
   of media.  However, since SDP can be used to describe other media
   transport schemes (e.g., ATM), these could be used if they provide
   the necessary elements (e.g., explicit timestamps).

   The working group will not be defining distributed speech
   recognition (DSR) methods, as exemplified by the ETSI Aurora
   project.  The working group will not be recreating functionality
   available in other protocols, such as SIP or SDP.

   TTS looks very much like playing back a file.  Extending RTSP looks
   promising for when one requires VCR controls or markers in the text
   to be spoken.  When one does not require VCR controls, SIP in a
   framework such as Network Announcements [11] works directly without
   modification.

   ASR has an entirely different set of characteristics.  For barge-in
   support, ASR requires real-time return of intermediate results.
   Barring the discovery of a good reuse model for an existing
   protocol, this will most likely become the focus of SPEECHSC.

9. Security Considerations

   Protocols relating to speech processing must take security and
   privacy into account.  Many applications of speech technology deal
   with sensitive information, such as the use of text-to-speech to
   read financial information.  Likewise, popular uses for automatic
   speech recognition include executing financial transactions and
   shopping.

   There are at least three aspects of speech processing security that
   intersect with the SPEECHSC requirements: securing the SPEECHSC
   protocol itself, implementing and deploying the servers that run
   the protocol, and ensuring that utilization of the technology for
   providing security functions is appropriate.  Each of these aspects
   is discussed in the following sub-sections.  While some of these
   considerations are, strictly speaking, out of the scope of the
   protocol itself, they will be carefully considered and accommodated
   during protocol design, and will be called out as part of the
   applicability statement accompanying the protocol specification(s).
   Privacy considerations are discussed as well.

9.1 SPEECHSC protocol security

   The SPEECHSC protocol MUST in all cases support authentication,
   authorization, and integrity, and SHOULD support confidentiality.
   For privacy-sensitive applications, the protocol MUST support
   confidentiality.  We envision that, rather than providing
   protocol-specific security mechanisms in SPEECHSC itself, the
   resulting protocol will employ the security machinery of either a
   containing protocol or the transport on which it runs.  For
   example, we will consider solutions such as using TLS for securing
   the control channel, and SRTP for securing the media channel.
   Third-party dependencies necessitating transitive trust will be
   minimized or explicitly dealt with through the authentication and
   authorization aspects of the protocol design.

9.2 Client and server implementation and deployment

   Given the possibly sensitive nature of the information carried,
   SPEECHSC clients and servers need to take steps to ensure the
   confidentiality and integrity of the data and its transformations
   to and from spoken form.  In addition to these general
   considerations, certain SPEECHSC functions, such as speaker
   verification and identification, employ voiceprints whose privacy,
   confidentiality, and integrity must be maintained.  Similarly, the
   requirement to support input capture for analysis and tuning can
   represent a privacy vulnerability, because user utterances are
   recorded and could be either revealed or replayed inappropriately.
   Implementers must take care to prevent the exploitation of any
   centralized voiceprint database and of the recorded material from
   which such voiceprints may be derived.  Specific actions that are
   recommended to minimize these threats include:
   o  End-to-end authentication, confidentiality, and integrity
      protection (like TLS) of access to the database to minimize the
      exposure to external attack.
   o  Database protection measures, such as read/write access control
      and local login authentication, to minimize the exposure to
      insider threats.
   o  Protection of copies of the database, especially ones that are
      maintained at off-site locations, equivalent to that of the
      operational database.
   Inappropriate disclosure of this data does not, as of the date of
   this document, represent an exploitable threat, but quite possibly
   might in the future.  Specific vulnerabilities that might become
   feasible are discussed in the next sub-section.  It is prudent to
   take measures such as encrypting the voiceprint database and
   permitting access only through programming interfaces enforcing
   adequate authorization machinery.

9.3 Use of SPEECHSC for security functions

   Either speaker identification or verification can be used directly
   as an authentication technology.  Authorization decisions can be
   coupled with speaker verification in a direct fashion through
   challenge-response protocols, or indirectly with speaker
   identification through the use of access control lists or other
   identity-based authorization mechanisms.  When SI/SV is so
   employed, there are additional security concerns that need to be
   addressed through the use of protocol security mechanisms for
   clients and servers.  For example, the ability to manipulate the
   media stream of a speaker verification request could
   inappropriately permit or deny access based on impersonation, or on
   simple garbling via noise injection, making it critical to properly
   secure both the control and data channels, as recommended above.
   The following issues specific to the use of SI/SV for
   authentication should be carefully considered:

   1.  Theft of voiceprints or of the recorded samples used to
       construct them represents a future threat against the use of
       speaker identification/verification as a biometric
       authentication technology.  A plausible attack vector (not
       feasible today) is to use the voiceprint information as
       parametric input to a text-to-speech synthesis system that
       could mimic the user's voice accurately enough to match the
       voiceprint.  Since it is not very difficult to surreptitiously
       record reasonably large corpora of voice samples, the ability
       to construct voiceprints for input to this attack would render
       the security of voice-based biometric authentication, even
       using advanced challenge-response techniques, highly
       vulnerable.  Users of speaker verification for authentication
       should closely monitor technological developments in this area
       for such future vulnerabilities (much as users of other
       authentication technologies should monitor advances in
       factoring as a way to break asymmetric keying systems).

   2.  As with other biometric authentication technologies, a downside
       to the use of speaker identification is that revocation is not
       possible.  Once compromised, the biometric information can be
       used in identification and authentication to other independent
       systems.

   3.  Enrollment procedures can be vulnerable to impersonation if not
       protected both by protocol security mechanisms and by some
       independent proof of identity.  (Proof of identity may not be
       needed in systems that only need to verify continuity of
       identity since enrollment, as opposed to association with a
       particular individual.)

   Further discussion of the use of SI/SV as an authentication
   technology, and some recommendations concerning its advantages and
   vulnerabilities, can be found in Chapter 5 of [14].

10. Acknowledgements

   Eric Burger wrote the original draft of these requirements and has
   continued to contribute actively throughout their development.
   He is a co-author in all but formal authorship, and is instead
   acknowledged here, as it is preferable that working group co-chairs
   have non-conflicting roles with respect to the progression of
   documents.

11. References

11.1 Normative References

   [1]   Walker, M., Burnett, D., and A. Hunt, "Speech Synthesis
         Markup Language (SSML) Version 1.0", W3C REC
         REC-speech-synthesis-20040907, September 2004.

   [2]   McGlashan, S. and A. Hunt, "Speech Recognition Grammar
         Specification Version 1.0", W3C REC
         REC-speech-grammar-20040316, March 2004.

   [3]   Floyd, S. and L. Daigle, "IAB Architectural and Policy
         Considerations for Open Pluggable Edge Services", RFC 3238,
         January 2002.

   [4]   Charlton, N., Gasson, M., Gybels, G., Spanner, M., and A.
         van Wijk, "User Requirements for the Session Initiation
         Protocol (SIP) in Support of Deaf, Hard of Hearing and
         Speech-impaired Individuals", RFC 3351, August 2002.

11.2 Informative References

   [5]   Rosenberg, J., Schulzrinne, H., Camarillo, G., Johnston, A.,
         Peterson, J., Sparks, R., Handley, M., and E. Schooler,
         "SIP: Session Initiation Protocol", RFC 3261, June 2002.

   [6]   Andreasen, F. and B. Foster, "Media Gateway Control Protocol
         (MGCP) Version 1.0", RFC 3435, January 2003.

   [7]   Groves, C., Pantaleo, M., Anderson, T., and T. Taylor,
         "Gateway Control Protocol Version 1", RFC 3525, June 2003.

   [8]   Schulzrinne, H., Rao, A., and R. Lanphier, "Real Time
         Streaming Protocol (RTSP)", RFC 2326, April 1998.

   [9]   Shanmugham, S., Monaco, P., and B. Eberman, "MRCP: Media
         Resource Control Protocol", Work in Progress,
         draft-shanmugham-mrcp-04, May 2003.

   [10]  World Wide Web Consortium, "Voice Extensible Markup Language
         (VoiceXML) Version 2.0", W3C Working Draft, April 2002.

   [11]  Burger, E., Van Dyke, J., and A. Spitzer, "Basic Network
         Media Services with SIP", Work in Progress,
         draft-burger-sipping-netann-11, February 2005.

   [12]  Guttman, E., Perkins, C., Veizades, J., and M. Day, "Service
         Location Protocol, Version 2", RFC 2608, June 1999.

   [13]  Gulbrandsen, A., Vixie, P., and L. Esibov, "A DNS RR for
         specifying the location of services (DNS SRV)", RFC 2782,
         February 2000.

   [14]  Committee on Authentication Technologies and Their Privacy
         Implications, National Research Council, "Who Goes There?:
         Authentication Through the Lens of Privacy", Computer
         Science and Telecommunications Board (CSTB), 2003.

Author's Address

   David R Oran
   Cisco Systems, Inc.
   7 Ladyslipper Lane
   Acton, MA
   USA

   Email: oran@cisco.com

Intellectual Property Statement

   The IETF takes no position regarding the validity or scope of any
   Intellectual Property Rights or other rights that might be claimed
   to pertain to the implementation or use of the technology described
   in this document or the extent to which any license under such
   rights might or might not be available; nor does it represent that
   it has made any independent effort to identify any such rights.
   Information on the procedures with respect to rights in RFC
   documents can be found in BCP 78 and BCP 79.
   Copies of IPR disclosures made to the IETF Secretariat and any
   assurances of licenses to be made available, or the result of an
   attempt made to obtain a general license or permission for the use
   of such proprietary rights by implementers or users of this
   specification can be obtained from the IETF on-line IPR repository
   at http://www.ietf.org/ipr.

   The IETF invites any interested party to bring to its attention any
   copyrights, patents or patent applications, or other proprietary
   rights that may cover technology that may be required to implement
   this standard.  Please address the information to the IETF at
   ietf-ipr@ietf.org.

Disclaimer of Validity

   This document and the information contained herein are provided on
   an "AS IS" basis and THE CONTRIBUTOR, THE ORGANIZATION HE/SHE
   REPRESENTS OR IS SPONSORED BY (IF ANY), THE INTERNET SOCIETY AND
   THE INTERNET ENGINEERING TASK FORCE DISCLAIM ALL WARRANTIES,
   EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO ANY WARRANTY THAT
   THE USE OF THE INFORMATION HEREIN WILL NOT INFRINGE ANY RIGHTS OR
   ANY IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A
   PARTICULAR PURPOSE.

Copyright Statement

   Copyright (C) The Internet Society (2005).  This document is
   subject to the rights, licenses and restrictions contained in
   BCP 78, and except as set forth therein, the authors retain all
   their rights.

Acknowledgment

   Funding for the RFC Editor function is currently provided by the
   Internet Society.