Network Working Group R. Gellens
Internet-Draft Core Technology Consulting
Intended status: Standards Track May 29, 2017
Expires: November 30, 2017

Negotiating Human Language in Real-Time Communications
draft-ietf-slim-negotiating-human-language-09

Abstract

Users have various human (natural) language needs, abilities, and preferences regarding spoken, written, and signed languages. This document adds new SDP media-level attributes so that when establishing interactive communication sessions ("calls"), it is possible to negotiate (communicate and match) the caller's language and media needs with the capabilities of the called party. This is especially important with emergency calls, where a call can be handled by a call taker capable of communicating with the user, or a translator or relay operator can be bridged into the call during setup, but this applies to non-emergency calls as well (as an example, when calling a company call center).

This document describes the need and a solution using new SDP media attributes.

Status of This Memo

This Internet-Draft is submitted in full conformance with the provisions of BCP 78 and BCP 79.

Internet-Drafts are working documents of the Internet Engineering Task Force (IETF). Note that other groups may also distribute working documents as Internet-Drafts. The list of current Internet-Drafts is at http://datatracker.ietf.org/drafts/current/.

Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress."

This Internet-Draft will expire on November 30, 2017.

Copyright Notice

Copyright (c) 2017 IETF Trust and the persons identified as the document authors. All rights reserved.

This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents (http://trustee.ietf.org/license-info) in effect on the date of publication of this document. Please review these documents carefully, as they describe your rights and restrictions with respect to this document. Code Components extracted from this document must include Simplified BSD License text as described in Section 4.e of the Trust Legal Provisions and are provided without warranty as described in the Simplified BSD License.


Table of Contents

   1.  Introduction
   2.  Terminology
   3.  Desired Semantics
   4.  The existing 'lang' attribute
   5.  Solution
     5.1.  Rationale
     5.2.  The 'hlang-send' and 'hlang-recv' attributes
     5.3.  No Language in Common
     5.4.  Undefined Combinations
     5.5.  Examples
   6.  IANA Considerations
     6.1.  att-field Table in SDP Parameters
     6.2.  Warn-Codes Sub-Registry of SIP Parameters
   7.  Security Considerations
   8.  Privacy Considerations
   9.  Changes from Previous Versions
   10. Contributors
   11. Acknowledgments
   12. References
     12.1.  Normative References
     12.2.  Informational References
   Author's Address

1. Introduction

A mutually comprehensible language is helpful for human communication. This document addresses the real-time, interactive side of the issue. A companion document on language selection in email [I-D.ietf-slim-multilangcontent] addresses the non-real-time side.

When setting up interactive communication sessions (using SIP or other protocols), human (natural) language and media modality (spoken, signed, written) negotiation may be needed. Unless the caller and callee know each other or there is contextual or out-of-band information from which the language(s) and media modalities can be determined, there is a need for spoken, signed, or written languages to be negotiated based on the caller's needs and the callee's capabilities. This need applies to both emergency and non-emergency calls. For various reasons, including the ability to establish multiple streams using different media (e.g., voice, text, video), it makes sense to use a per-stream negotiation mechanism, in this case, SDP.

This approach has a number of benefits, including that it is generic (applies to all interactive communications negotiated using SDP) and is not limited to emergency calls. In some cases such a facility isn't needed, because the language is known from the context (such as when a caller places a call to a sign language relay center, or to a friend or colleague). But it is clearly useful in many other cases. For example, it is helpful that someone calling a company call center or a Public Safety Answering Point (PSAP) be able to indicate preferred signed, written, and/or spoken languages, the callee be able to indicate its capabilities in this area, and the call proceed using the language(s) and media forms supported by both.

Since this is a protocol mechanism, the user equipment (UE client) needs to know the user's preferred languages; a reasonable technique could include a configuration mechanism with a default of the language of the user interface. In some cases, a UE could tie language and media preferences, such as a preference for a video stream using a signed language and/or a text or audio stream using a written/spoken language.

Including the user's human (natural) language preferences in the session establishment negotiation is independent of the use of a relay service and is transparent to a voice or other service provider. For example, assume a user within the United States who speaks Spanish but not English places a voice call. The call could be an emergency call or perhaps a call to an airline reservation desk. The language information is transparent to the voice service provider but is part of the session negotiation between the UE and the terminating entity. In the case of a call to, e.g., an airline, the call could be automatically handled by a Spanish-speaking agent. In the case of an emergency call, the Emergency Services IP network (ESInet) and the PSAP may choose to take the language and media preferences into account when determining how to process the call.

By treating language as another attribute that is negotiated along with other aspects of a media stream, it becomes possible to accommodate a range of users' needs and called party facilities. For example, some users may be able to speak several languages, but have a preference. Some called parties may support some of those languages internally but require the use of a translation service for others, or may have a limited number of call takers able to use certain languages. Another example would be a user who is able to speak but is deaf or hard-of-hearing and requires a voice stream plus a text stream. Making language a media attribute allows the standard session negotiation mechanism to handle this by providing the information and mechanism for the endpoints to make appropriate decisions.

Regarding relay services, in the case of an emergency call requiring sign language such as ASL, there are currently two common approaches: the caller initiates the call to a relay center, or the caller places the call to emergency services (e.g., 911 in the U.S. or 112 in Europe). (In a variant of the second case, the voice service provider invokes a relay service as well as emergency services.) In the former case, the language need is ancillary and supplemental. In the second case (without the variant), the ESInet and/or PSAP may take the need for sign language into account and bridge in a relay center. In this case, the ESInet and PSAP have all the standard information available (such as location) and are able to bridge in the relay earlier in the call processing.

By making this facility part of the end-to-end negotiation, the question of which entity provides or engages the relay service becomes separate from the call processing mechanics. If the caller directs the call to a relay service, the human language negotiation facility provides extra information to the relay service, but calls will still function without it. If the caller directs the call to emergency services, the ESInet/PSAP are able to take the user's human language needs into account, e.g., by assigning the call to a specific queue or call taker, or by bridging in a relay service or translator.

The term "negotiation" is used here rather than "indication" because human language (spoken/written/signed) is something that can be negotiated in the same way as which forms of media (audio/text/video) or which codecs. For example, if we think of non-emergency calls, such as a user calling an airline reservation center, the user may have a set of languages he or she speaks, with perhaps preferences for one or a few, while the airline reservation center will support a fixed set of languages. Negotiation should select the user's most preferred language that is supported by the call center. Both sides should be aware of which language was negotiated. This is conceptually similar to the way other aspects of each media stream are negotiated using SDP (e.g., media type and codecs).

To reduce the complexity of the solution, this document focuses on negotiating language per media stream; routing is out of scope.

2. Terminology

The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in RFC 2119 [RFC2119].

3. Desired Semantics

The desired solution is a media attribute (preferably per direction) that may be used within an offer to indicate the preferred language(s) of each (direction of a) media stream and within an answer to indicate the accepted language. The semantics of including multiple values for a media stream within an offer are that the languages are listed in order of preference.

(Negotiating multiple simultaneous languages within a media stream is out of scope of this document.)

4. The existing 'lang' attribute

RFC 4566 [RFC4566] specifies an attribute 'lang' which appears similar to what is needed here, but is not sufficiently specific or flexible for the needs of this document. In addition, 'lang' is not mentioned in [RFC3264] and there are no known implementations in SIP. Further, it is useful to be able to specify language per direction (sending and receiving). This document therefore defines two new attributes.

5. Solution

An SDP attribute (per direction) seems the natural choice for negotiating the human (natural) language of an interactive media stream, using the language tags of BCP 47 [RFC5646].

5.1. Rationale

The decision to base the proposal at the media negotiation level, and specifically to use SDP, came after significant debate and discussion. From an engineering standpoint, it is possible to meet the objectives using a variety of mechanisms, but none are perfect. None of the proposed alternatives was clearly better technically in enough ways to win over proponents of the others, and none were clearly so bad technically as to be easily rejected. As is often the case in engineering, choosing the solution is a matter of balancing trade-offs, and ultimately more a matter of taste than technical merit. The two main proposals were to use SDP and SIP. SDP has the advantage that the language is negotiated with the media to which it applies, while SIP has the issue that the languages expressed may not match the SDP media negotiated (for example, a session could negotiate a signed language at the SIP level but fail to negotiate a video media stream at the SDP layer).

The mechanism described here can be adapted to media negotiation protocols other than SDP.

5.2. The 'hlang-send' and 'hlang-recv' attributes

This document defines two media-level attributes starting with 'hlang' (short for "human interactive language") to negotiate which human language is used in each interactive media stream. There are two attributes, one ending in "-send" and the other in "-recv", registered in Section 6. Each can appear in an offer for a media stream.

In an offer, the 'hlang-send' value is a list of one or more languages the offerer is willing to use when sending using the media, and the 'hlang-recv' value is a list of one or more languages the offerer is willing to use when receiving using the media. The list of languages is in preference order (first is most preferred). When a media stream is intended for interactive communication using a language in one direction only (such as a user sending using text and receiving using audio), either 'hlang-send' or 'hlang-recv' MAY be omitted. When a media stream is not primarily intended for language (for example, a video or audio stream intended for background only), both SHOULD be omitted. Otherwise, both SHOULD have the same value. The two SHOULD NOT be set to languages that are difficult to match together (e.g., specifying a desire to send audio in Hungarian and receive audio in Portuguese will make it difficult to successfully complete the call).

In an answer, 'hlang-send' is the language the answerer will send when using the media (which in most cases is one of the languages in the offer's 'hlang-recv'), and 'hlang-recv' is the language the answerer expects to receive in the media (which in most cases is one of the languages in the offer's 'hlang-send').

Each value MUST be a list of one or more language tags per BCP 47 [RFC5646], separated by white space. BCP 47 describes mechanisms for matching language tags. Note that [RFC5646] Section 4.1 advises to "tag content wisely" and not include unnecessary subtags.

In an offer, an asterisk MAY be appended as the final element of either value. An asterisk appended to either value in an offer indicates a request by the caller that the callee not fail the call if there is no language in common. See Section 5.3 for more information and discussion.

When placing an emergency call, and in any other case where the language cannot be inferred from context, each media stream in an offer that is primarily intended for human language communication SHOULD specify both the 'hlang-send' and 'hlang-recv' attributes (or, for asymmetrical language use, one of them).

Note that while signed language tags are used with a video stream to indicate sign language, a spoken language tag for a video stream in parallel with an audio stream with the same spoken language tag indicates a request for a supplemental video stream to see the speaker.

Clients acting on behalf of end users are expected to set one or both of the 'hlang-send' and 'hlang-recv' attributes on each media stream primarily intended for human communication in an offer when placing an outgoing call, and to either ignore or take into consideration the attributes when receiving incoming calls, based on local configuration and capabilities. Systems acting on behalf of call centers and PSAPs are expected to take the attributes into account when processing inbound calls.

Note that media and language negotiation might result in more media streams being accepted than are needed by the users (e.g., if more preferred and less preferred combinations of media and language are all accepted). This is not a problem.

5.3. No Language in Common

A consideration with the ability to negotiate language is whether the call proceeds or fails if the callee does not support any of the languages requested by the caller. This document does not mandate either behavior, although it does provide a way for the caller to indicate a preference for the call succeeding when there is no language in common. It is OPTIONAL for the callee to honor this preference. For example, a PSAP is likely to attempt the call even without an indicated preference when there is no language in common, while a call center might choose to fail the call.

The mechanism for indicating this preference is that, in an offer, if the last value of either 'hlang-recv' or 'hlang-send' is an asterisk, this indicates a request to not fail the call. The called party MAY ignore the indication. For example, in the emergency services use case, a PSAP will likely not fail the call even in the absence of an asterisk, while some call centers might reject a call even if the offer contains an asterisk.

If the call is rejected due to lack of any language in common, it is suggested to use SIP response code 488 (Not Acceptable Here) or 606 (Not Acceptable) [RFC3261] and to include a Warning header field [RFC3261] in the SIP response. The Warning header field contains a warning code of [TBD: IANA VALUE, e.g., 308] and warning text indicating that there are no mutually supported languages; the text SHOULD also list the supported languages and media.

Example (illustrative only; the warning code shown is the placeholder value 308 mentioned above, and the warn-agent and language list are hypothetical):
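
   Warning: 308 proxy.example.com "Incompatible language specification:
     Requested languages not supported.  Supported languages and media
     are: es, en (audio); es, en (text)."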

5.4. Undefined Combinations

With the exception of the case mentioned in Section 5.2 (a spoken language tag for a video stream in parallel with an audio stream with the same spoken language tag), the behavior when specifying a spoken/written language tag for a video media stream, or a signed language tag for an audio or text media stream, is not defined.

5.5. Examples

Some examples are shown below. For clarity, only the most directly relevant portions of each SDP block are shown; in particular, the port numbers and payload types in the 'm=' lines are illustrative placeholders.

An offer or answer indicating spoken English both ways:
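
   m=audio 49170 RTP/AVP 0
   a=hlang-send:en
   a=hlang-recv:en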

An offer or answer indicating American Sign Language both ways, and requesting that the call proceed even if the callee does not support the language:
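
   m=video 51372 RTP/AVP 31
   a=hlang-send:ase *
   a=hlang-recv:ase *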

An offer requesting spoken Spanish both ways (most preferred), spoken Basque both ways (second preference), or spoken English both ways (third preference). The offer further requests that the call proceed even if the callee does not support any of the languages:
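
   m=audio 49170 RTP/AVP 0
   a=hlang-send:es eu en *
   a=hlang-recv:es eu en *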

An answer to the above offer indicating spoken Spanish both ways:
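
   m=audio 49170 RTP/AVP 0
   a=hlang-send:es
   a=hlang-recv:es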

An alternative answer to the above offer indicating spoken Italian both ways (as the callee does not support any of the requested languages but chose to proceed with the call):
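
   m=audio 49170 RTP/AVP 0
   a=hlang-send:it
   a=hlang-recv:it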

An offer or answer indicating written Greek both ways:
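
   m=text 45020 RTP/AVP 103
   a=hlang-send:el
   a=hlang-recv:el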

An offer requesting the following media streams: video for the caller to send using Argentine Sign Language, text for the caller to send using written Spanish (most preferred) or written Portuguese, audio for the caller to receive spoken Spanish (most preferred) or spoken Portuguese. The offer also requests that the call proceed even if the callee does not support any of the languages:
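
   m=video 51372 RTP/AVP 31
   a=hlang-send:aed *

   m=text 45020 RTP/AVP 103
   a=hlang-send:es pt *

   m=audio 49170 RTP/AVP 0
   a=hlang-recv:es pt *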

An answer for the above offer, indicating text in which the callee will receive written Spanish, and audio in which the callee will send spoken Spanish:
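
   m=text 45020 RTP/AVP 103
   a=hlang-recv:es

   m=audio 49170 RTP/AVP 0
   a=hlang-send:es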

An offer requesting the following media streams: text for the caller to send using written English (most preferred) or written Spanish, audio for the caller to receive spoken English (most preferred) or spoken Spanish, supplemental video. The offer also requests that the call proceed even if the callee does not support any of the languages:
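
   m=text 45020 RTP/AVP 103
   a=hlang-send:en es *

   m=audio 49170 RTP/AVP 0
   a=hlang-recv:en es *

   m=video 51372 RTP/AVP 31
   a=hlang-recv:en es *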

An answer for the above offer, indicating text in which the callee will receive written Spanish, audio in which the callee will send spoken Spanish, and supplemental video:
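
   m=text 45020 RTP/AVP 103
   a=hlang-recv:es

   m=audio 49170 RTP/AVP 0
   a=hlang-send:es

   m=video 51372 RTP/AVP 31
   a=hlang-send:es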

6. IANA Considerations

6.1. att-field Table in SDP Parameters

IANA is kindly requested to add two entries to the 'att-field (media level only)' table of the SDP parameters registry:

Attribute Name:
hlang-recv
Contact Name:
Randall Gellens
Contact Email Address:
rg+ietf@randy.pensive.org
Attribute Syntax:
hlang-value = Language-Tag *( SP Language-Tag ) [ SP asterisk ]
                ; Language-Tag as defined in BCP 47
asterisk    = "*"   ; an asterisk (ASCII %x2A) character
SP          = 1*" " ; one or more ASCII space (%x20) characters

Attribute Semantics:
Described in Section 5.2 of TBD: THIS DOCUMENT
Usage Level:
media
Mux Category:
NORMAL
Charset Dependent:
No
Purpose:
See Section 5.2 of TBD: THIS DOCUMENT
O/A Procedures:
See Section 5.2 of TBD: THIS DOCUMENT
Reference:
TBD: THIS DOCUMENT

Attribute Name:
hlang-send
Contact Name:
Randall Gellens
Contact Email Address:
rg+ietf@randy.pensive.org
Attribute Syntax:
hlang-value (as defined above for 'hlang-recv')
Attribute Semantics:
Described in Section 5.2 of TBD: THIS DOCUMENT
Usage Level:
media
Mux Category:
NORMAL
Charset Dependent:
No
Purpose:
See Section 5.2 of TBD: THIS DOCUMENT
O/A Procedures:
See Section 5.2 of TBD: THIS DOCUMENT
Reference:
TBD: THIS DOCUMENT

6.2. Warn-Codes Sub-Registry of SIP Parameters

IANA is requested to add a new value in the warn-codes sub-registry of SIP parameters in the 300 through 329 range that is allocated for indicating problems with keywords in the session description. The reference is to this document. The warn text is "Incompatible language specification: Requested languages not supported. Supported languages and media are: [list of supported languages and media]."

7. Security Considerations

The Security Considerations of BCP 47 [RFC5646] apply here. In addition, if the 'hlang-send' or 'hlang-recv' values are altered or deleted en route, the session could fail or languages incomprehensible to the caller could be selected; however, this is also a risk if any SDP parameters are modified en route.

8. Privacy Considerations

Language and media information can suggest a user's nationality, background, abilities, disabilities, etc.

9. Changes from Previous Versions

RFC EDITOR: Please remove this section prior to publication.

9.1. Changes from draft-ietf-slim-...-04 to draft-ietf-slim-...-06

9.2. Changes from draft-ietf-slim-...-02 to draft-ietf-slim-...-03

9.3. Changes from draft-ietf-slim-...-01 to draft-ietf-slim-...-02

9.4. Changes from draft-ietf-slim-...-00 to draft-ietf-slim-...-01

9.5. Changes from draft-gellens-slim-...-03 to draft-ietf-slim-...-00

9.6. Changes from draft-gellens-slim-...-02 to draft-gellens-slim-...-03

9.7. Changes from draft-gellens-slim-...-01 to draft-gellens-slim-...-02

9.8. Changes from draft-gellens-slim-...-00 to draft-gellens-slim-...-01

9.9. Changes from draft-gellens-mmusic-...-02 to draft-gellens-slim-...-00

9.10. Changes from draft-gellens-mmusic-...-01 to -02

9.11. Changes from draft-gellens-mmusic-...-00 to -01

9.12. Changes from draft-gellens-...-02 to draft-gellens-mmusic-...-00

9.13. Changes from draft-gellens-...-01 to -02

9.14. Changes from draft-gellens-...-00 to -01

10. Contributors

Gunnar Hellstrom deserves special mention for his reviews and assistance.

11. Acknowledgments

Many thanks to Bernard Aboba, Harald Alvestrand, Flemming Andreasen, Francois Audet, Eric Burger, Keith Drage, Doug Ewell, Christian Groves, Andrew Hutton, Hadriel Kaplan, Ari Keranen, John Klensin, Paul Kyzivat, John Levine, Alexey Melnikov, James Polk, Pete Resnick, Peter Saint-Andre, and Dale Worley for reviews, corrections, suggestions, and participating in in-person and email discussions.

12. References

12.1. Normative References

[RFC2119] Bradner, S., "Key words for use in RFCs to Indicate Requirement Levels", BCP 14, RFC 2119, DOI 10.17487/RFC2119, March 1997.
[RFC3261] Rosenberg, J., Schulzrinne, H., Camarillo, G., Johnston, A., Peterson, J., Sparks, R., Handley, M. and E. Schooler, "SIP: Session Initiation Protocol", RFC 3261, DOI 10.17487/RFC3261, June 2002.
[RFC4566] Handley, M., Jacobson, V. and C. Perkins, "SDP: Session Description Protocol", RFC 4566, DOI 10.17487/RFC4566, July 2006.
[RFC5646] Phillips, A. and M. Davis, "Tags for Identifying Languages", BCP 47, RFC 5646, DOI 10.17487/RFC5646, September 2009.

12.2. Informational References

[I-D.ietf-slim-multilangcontent] Tomkinson, N. and N. Borenstein, "Multiple Language Content Type", Work in Progress, draft-ietf-slim-multilangcontent-07, May 2017.
[RFC3264] Rosenberg, J. and H. Schulzrinne, "An Offer/Answer Model with Session Description Protocol (SDP)", RFC 3264, DOI 10.17487/RFC3264, June 2002.

Author's Address

Randall Gellens
Core Technology Consulting

EMail: rg+ietf@randy.pensive.org