
EmoSPACE'15
The 3rd International Workshop on Emotion Representation, Analysis and Synthesis in Continuous Time and Space (EmoSPACE'15) will be held in conjunction with IEEE FG 2015 in Ljubljana, Slovenia, on 8 May 2015.

Download the Call for Papers.

Submit your paper

Scope

Building upon the success of the first two EmoSPACE workshops, held at IEEE FG'11 and IEEE FG'13, the third workshop in the series aims to (i) focus on continuity in input, analysis and synthesis, both in time and across affective, mental and social dimensions and phenomena, and (ii) discuss the issues and challenges pertinent to sensing, recognising and responding to continuous human affective and social behaviour from diverse communicative cues and modalities.

The key aim of EmoSPACE'15 is to present cutting-edge research and new challenges in automatic and continuous analysis and synthesis of human affective and social behaviour in time and/or space in an interdisciplinary forum of affective and behavioural scientists. More specifically, the workshop aims (i) to bring forth existing efforts and major accomplishments in modelling, analysis and synthesis of affective and social behaviour in continuous time and/or space, (ii) to encourage the design of novel applications in contexts as diverse as human-computer and human-robot interaction, clinical and biomedical studies, learning and driving environments, and entertainment technology, and (iii) to focus on current trends and future directions in the field.

Suggested workshop topics include, but are not limited to:

Cues for continuous affective, mental and social state recognition
  • facial expressions
  • head movements and gestures
  • body postures and gestures
  • audio (e.g., speech, non-linguistic vocalisations, etc.)
  • bio signals (e.g., heart, brain, thermal signals, etc.)

Automatic analysis and prediction
  • approaches for discretised and continuous prediction
  • identifying appropriate classification and prediction methods
  • introducing or identifying optimal strategies for fusion
  • techniques for modelling high inter-subject variation
  • approaches to determining duration of affective and social cues for automatic analysis

Data acquisition and annotation
  • elicitation of affective, mental and social states
  • individual variations (interpersonal and cognitive issues)
  • (multimodal) naturalistic data sets and annotations
  • (multimodal) annotation tools
  • modelling annotations from multiple raters and their reliability

Applications
  • interaction with robots, virtual agents, and games
  • mobile affective computing
  • smart environments & digital spaces (e.g., in a car, or digital artworks)
  • implicit (multimedia) tagging
  • clinical and biomedical studies (e.g., autism, depression, pain etc.)

Draft Programme

9.15 Opening
Session 1
9.30 Keynote: Nadia Berthouze
10.30 Coffee break
Session 2
11.00 Noam Amir, Reut Rubinstein, Adi Shlomov and Gary Diamond: 'Comparing categorical and dimensional ratings of emotional speech'
11.20 Ghadh Alzamzmi, Dmitry Goldgof, Rangachar Kasturi, Yu Sun, Terri Asmeade and Gabriel Ruiz: 'Pain Assessment in Infants: Towards Spotting the Pain Expression Based on the Facial Strain'
11.40 Kevin El Haddad, Stéphane Dupont, Nicolas D'Alessandro and Thierry Dutoit: 'An HMM-based Speech-smile Synthesis System: An Approach for Amusement Synthesis'
12.00 Lunch break
Session 3
13.30 Yona Falinie Binti Abd Gaus, Hongying Meng, Asim Jan, Saeed Turabzadeh and Fan Zhang: 'Automatic Affective Dimension Recognition from Naturalistic Facial Expressions Based on Wavelet Filtering and PLS Regression'
13.50 Asim Jan and Hongying Meng: 'Automatic 3D Facial Expression Recognition using Geometric and Textured Feature Fusion'
14.10 Shiro Kumano, Kazuhiro Otsuka, Ryo Ishii and Junji Yamato: 'Automatic Gaze Analysis in Multiparty Conversations based on Collective First-Person Vision'
14.30 Wenxuan Mou, Oya Celiktutan and Hatice Gunes: 'Group-level Arousal and Valence Recognition in Static Images: Face, Body and Context'
14.50 End


Keynote

From pain to laughter: how the body expresses what we feel.

Abstract: Whilst facial expression has been thoroughly investigated as a means of affective communication, body expression is yet to receive the attention it deserves. As affect-aware technology enters every corner of our lives, it is important that we understand how this modality can help us better understand how people feel, and that we design technology that can better support them. In this talk, I'll present the work done within the EPSRC Emo&Pain project to automatically recognise how a person with chronic pain feels while doing physical exercise, and how technology can help them build confidence in their capabilities. I will then present results from the EU-FP7 ILHAIRE project describing how body expressions contribute to discriminating between laughter types.

Speaker: Dr Nadia Berthouze is a Reader at the UCL Interaction Centre (UCLIC), UK. She received her PhD in computer science from the University of Milan. Her research focuses on body movement as a means to automatically recognise and modulate people's affective states. She has published more than 150 papers in affective computing, HCI and pattern recognition, and was invited to give a TEDxStMartin talk. She is PI of the EPSRC-funded Emo&Pain project, designing affect-aware technology to support rehabilitation in chronic pain; co-I of the EU-FP7 ILHAIRE project, investigating laughter in human-avatar interaction; and is involved in the EU-FP7 UBI-HEALTH project (Exchange of Excellence in Ubiquitous Computing Technologies to Address Healthcare Challenges).

Important Dates

Paper submission: 18 January 2015 (extended)
Notification of acceptance: 9 February 2015
Camera-ready paper: 18 February 2015
Workshop: 8 May 2015


Manuscripts should be prepared in the IEEE FG paper format; see the IEEE FG 2015 website for instructions and templates.
Manuscripts should be submitted via EasyChair.

Organisers

Hatice Gunes, Queen Mary University of London, UK
Björn Schuller, University of Passau, Germany and Imperial College London, UK
Maja Pantic, Imperial College London, UK
Roddy Cowie, Queen's University Belfast, UK


Program Committee

Iulian Benta, Technical University of Cluj-Napoca, Romania
Nadia Bianchi-Berthouze, University College London, UK
Carlos Busso, University of Texas at Dallas, USA
Ginevra Castellano, Uppsala University, Sweden
Mohamed Chetouani, UPMC, France
Marco Cristani, University of Verona, Italy
Oya Celiktutan, Queen Mary University of London, UK
Laurence Devillers, CNRS-LIMSI, France
Hazim K. Ekenel, Istanbul Technical University, Turkey
Julien Epps, University of New South Wales, Australia
Zakia Hammal, Carnegie Mellon University, USA
Dirk Heylen, University of Twente, The Netherlands
M. Ehsan Hoque, University of Rochester, USA
Shiro Kumano, Nippon Telegraph and Telephone Corporation, Japan
Gary McKeown, Queen's University Belfast, UK
Hongying Meng, Brunel University, UK
Catherine Pelachaud, TELECOM ParisTech, France
Stavros Petridis, Imperial College London, UK
Peter Robinson, University of Cambridge, UK
Albert A. Salah, Bogazici University, Turkey
Nicu Sebe, University of Trento, Italy
Mohammad Soleymani, University of Geneva, Switzerland
Jianhua Tao, Chinese Academy of Sciences, China
Khiet P. Truong, University of Twente, The Netherlands
Yan Tong, University of South Carolina, USA
Michel Valstar, University of Nottingham, UK
Yi-Hsuan Yang, Academia Sinica, Taiwan