SLIM                                                          N. Rooney
Internet-Draft                                                     GSMA
Expires: October 7, 2016                                  April 5, 2016

                            SLIM Use Cases
                     draft-ietf-slim-use-cases-01

Abstract

   This document describes use cases for the selection of language for
   Internet media.

Status of This Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF).
   Note that other groups may also distribute working documents as
   Internet-Drafts.  The list of current Internet-Drafts is at
   http://datatracker.ietf.org/drafts/current/.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."

   This Internet-Draft will expire on October 7, 2016.

Copyright Notice

   Copyright (c) 2016 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document.  Code Components extracted from this
   document must include Simplified BSD License text as described in
   Section 4.e of the Trust Legal Provisions and are provided without
   warranty as described in the Simplified BSD License.

1.  Introduction

   The SLIM Working Group [SLIM] is developing standards for language
   selection for non-real-time and real-time communications.  There are
   a number of relevant use cases that could benefit from this
   functionality, including emergency-service real-time communications
   and customer service.  This document details the use cases for SLIM
   and gives some indication of the necessary requirements.  For each
   use case a 'Solution' is provided, indicating the implementability
   of the use case based on "Negotiating Human Language in Real-Time
   Communications" [NEGOTIATING-HUMAN-LANG].

2.  Use Cases

   The use cases are listed below.

2.1.  Single two-way language

   The simplest use case: one language in one modality, used both ways,
   in media described in SDP [RFC4566] as audio, video or text.
   Straightforward; works for spoken, written and signed languages.  An
   example is a user who makes a voice call and whose preferred
   language is specified in SDP, allowing the answerer to make
   decisions based on that specification.

   o  Solution: Possible

2.2.  Alternatives in the same modality

   Two or more language alternatives in the same modality: two or more
   languages both ways in media described in SDP as audio or video or
   text, but only in one modality.  Straightforward; works for spoken,
   written and signed languages.  The answering party selects.  A
   relative preference is expressed by the order of the languages, and
   the answering party can try to fulfill it in the best way.  An
   example is a user who makes a voice call and prefers French as their
   first language and German as their second; the answerer selects
   German because no French-speaking ability is available.

   o  Solution: Possible

2.3.  Fairly equal alternatives in different modalities

   Two or more modality alternatives: two or more languages in
   different modalities both ways in media described in SDP as audio or
   video or text.  An example is a hearing person who is also competent
   in sign language and declares both spoken language competence in
   audio and sign language competence in video.  This is fairly
   straightforward, as long as there is no strong difference in
   preference between the alternatives.  The indication of sign
   language competence is needed to avoid invoking relay services in
   calls with deaf users who indicate sign language only.

   o  Solution: Possible

2.4.  Last resort indication

   One language in different modalities.  Allows the user to indicate
   one last-resort language when no other is available.  For example, a
   hearing user has text capability but wants to use it only as a last
   resort.
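   Such an offer might be sketched in SDP as follows.  The sketch
   assumes media-level attributes along the lines of "hlang-send" and
   "hlang-recv" carrying language tags, as proposed in
   [NEGOTIATING-HUMAN-LANG]; the exact attribute names, ports and
   payload types here are illustrative only.

```
m=audio 49170 RTP/AVP 0
a=hlang-send:en
a=hlang-recv:en
m=text 45020 RTP/AVP 103
a=hlang-send:en
a=hlang-recv:en
```

   Both media streams indicate English, but nothing in the offer ranks
   audio above text.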
   (With current specifications, there is no way to describe a
   preference level between modalities and no way to describe an
   absolute preference.)

   o  Solution: An answering service has no guidance as to which
      modality is preferred and may select the modality that is the
      caller's last resort even if the preferred alternative is
      available.

   Another practical case is a sign language user with a small mobile
   terminal that offers only inconvenient means for texting, while
   sign language is strongly preferred.  In order not to miss any
   calls, an indication of text as a last resort would be desirable.

   o  Solution: Needs coding of an absolute preference (hi, med, lo)
      together with the language tag.

2.5.  Directional capabilities in different modalities

   Two or more language alternatives in different modalities.  For
   example, a hard-of-hearing user strongly prefers to talk and
   receive text back.  Spoken language input is appreciated.  This can
   be indicated by spoken language two ways in audio, and reception of
   written language in text.  (There is no current solution that says
   that the text path is important; the answering party may see it as
   an alternative.)

   o  Solution: Needs a preference indication per modality

2.5.1.  Fail gracefully?

   There are currently methods to indicate that the call shall fail if
   a language requirement is not met, but that may be too drastic for
   some users, including the one in the scenario above (Section 2.5).
   It may be important to be able to connect and just say something,
   or to use residual hearing to get something back when the voice is
   familiar.

   o  Possible solution: coding of an absolute preference together
      with the tag could solve this case if used together with the
      directional indications.  For example:

         "preference: hi, med, lo"

   Another solution would be to indicate required grouping of media;
   however, this raises the complexity level.

2.6.  Combination of modalities

   Similar to Section 2.5, two or more language alternatives in
   different modalities.  A person who is deaf-blind may have the
   highest preference for signing to the answerer and then receiving
   text in return.  This requires the indication of sign language
   output in video and text reception in text, using the current
   directional attributes.  An answering party may seek suitable
   modalities for each direction and find the only possible
   combination.

   o  Solution: Needs a preference indication per modality

2.7.  Person with speech disabilities who prefers speech-to-speech
      service

   One specific language for one specific modality with a
   speech-to-speech engine.  A person whose speech others may have
   some difficulty understanding may be used to the support of a
   speech-to-speech relay service that aids clear speech when needed
   for understanding.  Typically, only calls with close friends and
   family might be possible without the relay service.

   This user would indicate a preference for receiving spoken language
   in audio.  Text output can be indicated, but this user might want
   to use that method only as a last resort.  (There is no current
   coding for vague or unarticulated speech or other needs for a
   speech-to-speech service.)

   A possibility could be to indicate no preference for spoken
   language output, a coding of a proposed assisting service, and an
   indication of text output at a low absolute level.

   o  Solution: Needs a service indication and an absolute level of
      preference indication.

2.8.  Person with speech disabilities who prefers to type and hear

   Two or more language alternatives for multiple modalities.  A
   person who speaks in a way that may be hard to understand may be
   used to using text for output and listening to spoken language for
   input.  This user would indicate a preference for receiving spoken
   language in audio.
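   Such a preference might be sketched in SDP as follows, assuming a
   media-level attribute along the lines of "hlang-recv" carrying a
   language tag as proposed in [NEGOTIATING-HUMAN-LANG] (the exact
   attribute name, port and payload type are illustrative only):

```
m=audio 49170 RTP/AVP 0
a=hlang-recv:en
```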
   Text output modality can be indicated.

   If the answering party has text and audio capabilities, there is a
   match.  If only voice capabilities exist, there is a need to invoke
   a text relay service.

   o  Solution: Needs a service indication and an absolute level of
      preference indication.

2.9.  All Possibilities

   Multiple languages and multiple modalities.  For example, a
   tele-sales center calls out and wants to offer all kinds of
   possibilities so that the answering party can select.  The
   tele-sales center has competence in multiple spoken languages and
   can invoke relay services rapidly if needed.  So, it indicates in
   the call setup competence in a number of spoken languages in audio,
   a number of sign languages in video and a number of written
   languages in text.  This would allow, as a further example, a
   deaf-blind person who prefers to sign out and get text back to
   answer with only those capabilities.  The center can detect that
   and act accordingly.  This could work in the following ways:

   o  Solution Alternative 1: The center calls without SDP.  The
      deaf-blind user includes their SDP offer, and the center sees
      what is needed to fulfill the call.

   o  Solution Alternative 2: The center calls out indicating only the
      spoken language capabilities that the caller can handle.

   The deaf and/or sight-impaired person who answers, or their
   terminal or service provider, detects the difference compared to
   the capabilities of the answering party and adds a suitable relay
   service.  (This does not use the caller's full offered competence
   to pull in extra services, but is perhaps a more realistic
   description of what usually happens in practice.)

   o  Solution: Possible in the same way as the cases in Section 2.8.

3.  Final Comments

   The use cases identified here try to cover all cases in which users
   wish to communicate by text, voice or video using the language or
   set of languages in which they are able to speak, write or sign,
   and in which the receivers are also able to communicate.  Some of
   these use cases go even further, giving some users the ability to
   select multiple, different languages based on their abilities and
   needs.

   To fulfill all the use cases, the currently specified
   directionality will be needed, as well as an indication of absolute
   preference.  An indication of a suitable service and its spoken
   language is needed for the speech-to-speech case, but can be useful
   for other cases as well.  There seems to be no clear need for
   explicit grouping of modalities.

   Subsequent work in the Selection of Language for Internet Media
   Working Group [SLIM] will produce Internet-Drafts to support these
   use cases.

4.  Security Considerations

   Indications of a user's preferred language may reveal information
   about their nationality, background and abilities.  They may also
   give an indication of possible disabilities and of existing or
   ongoing health issues.

5.  IANA Considerations

   This document has no IANA actions.

6.  Informative References

   [RFC4566]  Handley, M., Jacobson, V., and C. Perkins, "SDP: Session
              Description Protocol", RFC 4566, DOI 10.17487/RFC4566,
              July 2006.

   [SLIM]     "SLIM Working Group", n.d.

   [NEGOTIATING-HUMAN-LANG]
              Gellens, R., "Negotiating Human Language in Real-Time
              Communications", 2016.

Appendix A.  Acknowledgments

   Gunnar Hellstrom's experience and knowledge in this area provided
   many of these use cases.  Thanks also go to Randall Gellens and
   Brian Rosen.

Author's Address

   Natasha Rooney
   GSMA

   Email: nrooney@gsma.com
   URI:   https://gsma.com