Last Modified: 2003-10-01
The speechsc Working Group will develop protocols to support distributed media processing of audio streams. The focus of this working group is to develop protocols to support automatic speech recognition (ASR), text-to-speech (TTS), and speaker verification (SV). The working group will focus only on the secure distributed control of these servers.
The working group will develop an informational RFC detailing the architecture and requirements for distributed speechsc control. In addition, the requirements document will describe the use cases driving these requirements. The working group will then examine existing media-related protocols, especially RTSP, for suitability as a protocol for carriage of speechsc server control. The working group will then propose extensions to existing protocols or the development of new protocols, as appropriate, to meet the requirements specified in the informational RFC.
The protocol will assume RTP carriage of media. Assuming session-oriented media transport, the protocol will use SDP to describe the session.
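As an illustration of the SDP usage the paragraph above assumes, a minimal session description for a single RTP audio stream might look like the following (the addresses, port, and codec are illustrative examples, not taken from the charter):

```
v=0
o=client 2890844526 2890842807 IN IP4 192.0.2.10
s=speechsc media session
c=IN IP4 192.0.2.10
t=0 0
m=audio 49170 RTP/AVP 0
a=rtpmap:0 PCMU/8000
```

The `m=` line declares an RTP/AVP audio stream on port 49170, and the `a=rtpmap` attribute binds payload type 0 to G.711 mu-law at 8 kHz.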
The working group will not be investigating distributed speech recognition (DSR), as exemplified by the ETSI Aurora project. The working group will not be recreating functionality available in other protocols, such as SIP or SDP. The working group will offer changes to existing protocols, with the possible exception of RTSP, to the appropriate IETF working group for consideration. This working group will explore modifications to RTSP, if required.
It is expected that we will coordinate our work in the IETF with the W3C Multimodal Interaction Working Group; the ITU-T Study Group 16 Working Party 3/16 on SG 16 Question 15/16; the 3GPP TSG SA WG1; and the ETSI Aurora STQ.
Once the current set of milestones is completed, the speechsc charter may be expanded, with IESG approval, to cover additional uses of the technology, such as the orchestration of multiple ASR/TTS/SV servers, the accommodation of additional types of servers such as simultaneous translation servers, etc.
| Date | Milestone |
|------|-----------|
| Done | Requirements ID submitted to IESG for publication (informational) |
| Done | Submit Internet-Draft(s) analyzing existing protocols (informational) |
| Done | Submit Internet-Draft describing new protocol (if required) (standards track) |
| Oct 03 | Submit drafts to IESG for publication |
SPEECHSC Minutes

Dave: The requirements document passed review; it is in editorial review now.

Sarvi: Dan Burnett's speaker identification/verification draft is out now. It is geared toward MRCP v1 and will be evolved into MRCP v2.

Sarvi: Open issues:

* Proxy support: Call flows are needed. Currently, we are looking at using a relay to front-end requests.
* When to start/stop media: The recognizer should expect the media to start flowing when it receives the recognize request, and should not buffer anything it receives beforehand.
* Recording audio: Two types (definitions from Dan Burnett):
  - resource-related: everything the recognizer hears and/or everything it thinks is speech
  - time-based: record some period of the conversation, independent of the recognition.
  It was agreed that recording the conversation is outside the scope of MRCP. It is, however, desirable to have a "record" resource which takes audio input from the client, "puts a handle on it," and makes it available to the client, possibly applying some "speechish" operations (endpointing, etc.).
* Resource types: There is potentially a need to identify/classify resources for allocation (e.g., this "recognizer" can only recognize DTMF input, not speech, or this "TTS engine" can only play audio, it doesn't do synthesis). SIP callee capabilities will be investigated/discussed on the mailing list to determine whether they are sufficient.
* NLSML versus EMMA: As we won't know the status of the EMMA specification until the time we publish, we will leave a placeholder in our document and make a decision when it is time to publish.
* Multiple media streams: There is a need for only one media line.
* Multiple speak requests: Is it desirable to be able to pause an active speak request, execute a new speak request, and then resume the original request?
Yes, it is potentially useful, but it can be accomplished on the client side by allocating two separate TTS resources, pausing one, starting the other, and then resuming the first when the second finishes.

Dan Burnett on speaker identification and verification:
- A joint proposal from Nuance and Intervoice was submitted recently. In addition to SI/SV, the document covers:
  - speaker-enrolled grammars: use recorded audio to make a grammar; well suited for voice-dialing applications
  - hotword recognition: the recognizer listens for hotword(s) in a conversation, doing nothing until it actually recognizes something (as opposed to timing out, throwing a "nomatch", etc.)

SI/SV discussion. Two questions so far:
1. Why buffering? Can the audio from a captured recognizer session (when recognition is done with save-waveform=true) be used for verification, by passing the verification engine handle(s) to the recorded audio? We should then be able to eliminate the pause/resume methods.
2. Is there a need for some sort of registry for returned info? Some verifier/identifier might return gender information or language information; common categories would be beneficial.

Milestones: Slightly behind schedule currently. A draft will be submitted sometime after the next IETF meeting (March 2004?).

Jeff Kusnitz,
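For context on the NLSML-versus-EMMA open issue above: NLSML is the markup in which a recognizer returns its results to the client. A minimal, illustrative result might look like the following (the grammar reference, utterance, and instance element names are invented for this example, not taken from the minutes):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<result grammar="session:request1@form-level.store">
  <interpretation confidence="0.92">
    <!-- what the recognizer heard -->
    <input mode="speech">call Jeff</input>
    <!-- the semantic interpretation extracted from the utterance -->
    <instance>
      <action>call</action>
      <callee>Jeff</callee>
    </instance>
  </interpretation>
</result>
```

The placeholder strategy agreed above would let this element structure be swapped for an EMMA result if the W3C EMMA specification matures in time.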