
Special Issue of Computer Speech and Language

Special Issue of Computer Speech and Language on "Affective Speech in real-life interactions"

This decade has seen an upsurge of interest in affective computing. Speech and Language are among the main channels to communicate human affective states. Affective Speech and language processing can be used alone or coupled with other multimodal channels in many commercial applications such as call centers, robots, artificial animated agents for telephony, education, games, or medical applications.

In the computer science community, the widely used term “emotion” is often applied without distinction from the more generic term “affective state”, which may be viewed as more adequate for describing the complex emotional states of a person. This “affective state” encompasses a person's emotions, feelings, attitudes, moods, and interpersonal stances. However, most research has been carried out only on a subset of the “big six” basic emotions described by Paul Ekman. Furthermore, most studies have also been performed using prototypical acted data.

There is a significant gap between the affective states observed in artificial data (acted data or contrived data produced in laboratories) and those observed in real-life spontaneous data. In artificial data, the context is “removed” or “manipulated”, so we see much simpler “full-blown” affective states that are quite far from real affective states. The affective state of a person at any given time is a mixture of emotion, attitude, mood, and interpersonal stance, often with multiple trigger events (internal or external) occurring at different times. Thus, far from being as simple as a “basic emotion”, affective states in spontaneous data are a subtle blend of many more complex and often seemingly contradictory factors that are very relevant to human communication and that are perceived without any conscious effort by any native speaker of the language or member of the same cultural group.

If machines are to be made sensitive to more subtle and complex types of information, which we believe to be as important to human communication as (or perhaps even more so than) the propositional content, then we need more naturalistic corpora upon which to base our research. If the corpora are acted or contrived, then the resulting technology will be of little use; the more natural the data we collect, and the more complex the factors they contain, the closer we can come to understanding and modelling the mechanisms of human social communication. Expression of affective information by computers will also require first recognizing the human affective context as well as considering goals and predicting outcomes for each interaction. Affective corpora are therefore fundamental both to developing sound conceptual analyses and to training these 'affective-oriented systems' at all levels - to recognise user affect, to express appropriate affective states, to anticipate how a user in one state might respond to a possible kind of reaction from the machine, etc.

The motivation of this special issue is to report innovative work on the modelling and generation of affect in real-life speech and spoken interaction (including human-human, human-machine, and multi-party interaction) or in “realistic” interactions (including realistic fiction). Papers are expected to address one or more of the following topics:
  • Ontologies of affect in social interaction
  • Characterization of affect in spoken interaction
  • Recognition of affective information
  • Collection and annotation of realistic and representative data and corpora
  • Modelling of affective states in Human Communication
  • Use of affective information in Human Computer Interaction
  • Generation and synthesis of affect in speech
  • Representing affect in discourse and dialogue
  • Modelling of affect in robots and games
  • Comparison of natural versus simulated data
Papers limited to analysis of the acted or simulated big-six emotions will not be considered.

IMPORTANT DATES


Final Manuscript Due:          February 2010
Target Publication Date:       March 2010

SUBMISSIONS PROCEDURE


Prospective authors should follow the regular guidelines of Computer Speech and Language for electronic submission (http://ees.elsevier.com/csl/).

EDITORS


Laurence Devillers
Email Address: devil@limsi.fr
Primary Phone Number: +3369858062
Full Mailing Address: LIMSI-CNRS, BP 133, 91403 Orsay Cedex, France

Nick Campbell
Email Address: nick@tcd.ie
Primary Phone Number: +81 774 951 380
Full Mailing Address: ATR-SLC, Keihanna Science City, Kyoto 619-0288, Japan

SCIENTIFIC COMMITTEE


Elisabeth André, Univ. Augsburg, D
Anton Batliner, Univ. Erlangen, D
Nick Campbell, ATR, J
Roddy Cowie, QUB, UK
Laurence Devillers, LIMSI-CNRS, FR
Ellen Douglas-Cowie, QUB, UK
Jonathan Gratch, USC, USA
John Hansen, Univ. of Texas at Dallas, USA
Roger Moore, Univ. Sheffield, UK
Shrikanth Narayanan, USC Viterbi School of Engineering, USA
Catherine Pelachaud, Univ. Paris VIII, FR
Rosalind Picard, MIT, USA
Gerhard Rigoll, Univ. München, D
Izhak Shafran, Univ. Johns Hopkins, CSLP, USA
Marc Schröder, DFKI Saarbrücken, D
Klaus Scherer, Univ. Geneva, SUI
Elisabeth Shriberg, SRI and ICSI, USA