Nathaniel opened the meeting with introductions and the Note Well.
Chris presented an overview of DMSP, followed by a short demonstration.

Q&A during the meeting:
Dave: Is this only about synchronizing multiple streams to one recipient?
Chris: DMSP synchronizes multiple input/output modalities for a single user.
Ted: Are there candidate enablers being developed in OMA as well?
Chris: The ID mentions the OMA reference architecture, which enumerates the required enablers. The OMA spec refers to a protocol to synchronize, but assumes it will be done in the IETF.
Ted: Recommends speaking to Dean Willis regarding OMA to set expectations.
Ted: Would really like to see the folks in OMA who are interested in this participate in the IETF in defining it.
David Oran (SPEECHSC chair): How are we going to approach a WG charter?
David: Outlines two approaches: 1) simply bless what is here and move forward, or 2) take what is done now as a starting point, but revisit the design decisions and go from there.

Jim Ferrans showed a video of a brief demo.

More Q&A:
Dave Crocker: Nice demo, but it doesn't explain DMSP at all! I have no idea what the components are and how they interact. Is that next?
Jim: Presents a viewgraph comparing WIDEX and DMSP.
Jim: DMSP and WIDEX are orthogonal.
Eric Burger: Paraphrases a discussion at lunch: WIDEX is about distributing a GUI, and DMSP is about distributing a VUI.
Eric Burger: Concurs with the architecture diagram, but suggests discussion on whether or not the boxes (DMSP/WIDEX) are indeed the same.
Vlad: The point is not how you render or the end result of the UI, but how you interact between the renderer and the server. If they are XML, they are the same thing.
Jim: In terms of the level of interaction, WIDEX is much lower level; DMSP is higher level. For example, a VoiceXML interpreter is instructed to load a document. This was deliberate, to minimize network traffic.
Chris: One of the goals of DMSP was to handle different types of UAs, including those that do not have a DOM. VoiceXML 2.0 does not have a DOM; the W3C goal for the next version of VoiceXML is to have one.
Eric: Explains that VoiceXML is a language.
Eric: VoiceXML 3.0 is completely different and will take years.
Nathaniel: VoiceXML 2.x will be out there for a long time.
Ted: Suggests relaxing WIDEX to allow interaction with VoiceXML; would that fold this into the WIDEX working group?
Eric: VoiceXML has no DOM, but there is a good understanding of the data model behind it.
Chris: There is a possibility that the DOM becomes the interface between WIDEX and DMSP.
Chris: Believes multimodal synchronization is orthogonal to WIDEX, which is about syncing a DOM element on a server with a DOM element on a client.
Eric: This is the disconnect: if the DOM is "what is the user's name", then whether it is typed or spoken the DOM is the same, and it can be rendered with speech or visually.
Chris: True in a one-to-one mapping such as you cite, but in multimodal you don't always have one-to-one mappings across modalities.
Vlad: In WIDEX you have one session, but the requirements define multiple renderers. WIDEX also uses MVC.
Nathaniel: If WIDEX could support VoiceXML, that may be one way. Sometimes a divide-and-conquer approach is more efficient.
Ted: It sounds like there is a fair amount of stuff out there right now (VoiceXML); where is the pain point?
Jim: My earlier comments might have been misleading. Deployed VoiceXML apps today are voice-only, not multimodal; information is presented to the user via speech/audio. With multimodal you can display the information as well.
Jim: Likewise for data entry; on a handset today, voice is easier than keying in data.
David: Doesn't see how that answers Ted's question.
Chris: Clarifies that the existing (deployed) VoiceXML market is voice-only; the multimodal market has yet to emerge.
Dave: Positive you guys are doing something useful, but still doesn't understand what it is. You need to change your vocabulary to bring this into the IETF. We need a common frame of reference: what is being synchronized with what?
Nathaniel: Understands the comment, but doesn't know if there is a language to translate to.
Dave Crocker: Not saying the IETF has a vocabulary for this. We have two cultures here; we need to find some vocabulary that is mutually comfortable.
David Oran: Struggled with the same thing in SPEECHSC. There is a lot you can talk about without understanding it.
Dave C: Agrees with Ted and David. I didn't make clear what I meant. Right now we are trying to arrive at a common framework, and I am distracted by repeated references to things that are not part of the framework but are details of what you have been working on for a long time.
Nathaniel: Suggests we dive into some more detail on DMSP to see if that helps.
Chris: Agrees in general with Dave's comment, and mentions the companies involved are already targeting multiple languages, but use specific languages as examples.
Ted: Where is the interoperability here?
Chris: Resumes the DMSP presentation. Presents four abstract interfaces: Command, Response, Event, and Signal. Explains the need for binary and XML bindings. (An illustrative sketch of these categories appears after the close of these minutes.)
Ted: Suggests using the Apple "movement API" as the third UI!
Nathaniel: Tries to get a sense of whether or not this work should be kept separate from WIDEX.
Vlad: Binary messages seem to suggest they are quite different.
Ted: Not clear how many people here think we are solving a problem that really belongs in the IETF. Maybe folks should get up and talk about this.
Chris: OMA had a very focused effort (18 months or so) on defining a multimodal reference architecture, with a significant requirements document.
Eric: W3C is really, really bad at protocols.
Eric: Is there a need for a working group, or is what is needed a well-reviewed spec?
Nathaniel: IBM/Motorola is not simply seeking an IETF rubber stamp; they wanted broader input.
Chris: Enumerates other companies involved in the OMA multimodal architecture (since they are not represented here): Nokia, Ericsson, Oracle.
Ted: Is there anybody in the room now interested in stepping up to the mic and expressing how they will support this effort? Nobody responds.
Thomas: Is it obvious this work is needed?
Nathaniel: Calls for a hum on this question.
Nathaniel: Calls for a hum from people "who don't know". Consensus: most don't know.

Jim Ferrans displays a slide illustrating how DMSP is used in an end-to-end architecture.
David O: I would expect the viewgraph to show that the voice server is running on my notebook computer. If it doesn't support that, it is flawed.
Chris: Confirms the use case David describes is indeed supported.
Barry: Points out there are two audio links, one up and one down.
Jim: Clarifies these would be whatever the native encoders on the handset are.
Nathaniel: Asks for guidance from the Area Directors on how to proceed.
Ted: Go to the mailing list and see who is willing to participate. Agrees with Dave's point that there seems to be a disconnect between this present work and the IETF. Also agrees with Nathaniel's earlier point that it is odd there is interest in WIDEX and not DMSP. Recommends reaching out via the mailing lists.
Nathaniel: Suggests an Informational RFC is an attractive option.
Nathaniel: Closes the meeting.
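
Illustrative sketch of the four abstract DMSP interface categories Chris presented (Command, Response, Event, Signal). This is a minimal sketch only; all type and field names below are hypothetical assumptions made for illustration, not taken from the DMSP Internet-Draft, and the binary and XML bindings Chris mentioned are not shown.

// Hypothetical sketch (TypeScript); names and fields are illustrative assumptions.

// A command asks a user agent (e.g. a VoiceXML interpreter) to act,
// such as loading a document.
interface DmspCommand {
  kind: "command";
  id: number;                      // correlates the command with its response
  target: string;                  // which modality/user agent is addressed
  action: string;                  // e.g. "loadDocument" (illustrative)
  args?: Record<string, string>;
}

// A response reports the outcome of a previously issued command.
interface DmspResponse {
  kind: "response";
  inResponseTo: number;            // id of the originating command
  status: "ok" | "error";
  detail?: string;
}

// An event carries asynchronous activity, e.g. a field filled by speech
// that must be reflected in the visual modality.
interface DmspEvent {
  kind: "event";
  source: string;                  // modality/user agent that produced it
  name: string;
  data?: Record<string, string>;
}

// A signal carries out-of-band, session-level control information.
interface DmspSignal {
  kind: "signal";
  name: string;
}

// Any message falls into one of the four abstract categories.
type DmspMessage = DmspCommand | DmspResponse | DmspEvent | DmspSignal;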