EmoSPACE 2013

2nd International Workshop on Emotion Representation, Analysis and Synthesis in Continuous Time and Space (EmoSPACE)
In conjunction with the IEEE International Conference on Automatic Face and Gesture Recognition (FG 2013)
Shanghai, China, 26 April 2013



Call for Papers

Building upon the success of the first EmoSPACE workshop at IEEE FG’11, the second workshop in the EmoSPACE series aims to (i) focus on continuity in input, analysis and synthesis, both in time and across affective, mental and social dimensions and phenomena, and (ii) discuss the issues and challenges pertinent to sensing, recognizing and responding to continuous human affective and social behaviour from diverse communicative cues and modalities.

The key aim of EmoSPACE’13 is to present cutting-edge research and new challenges in the automatic and continuous analysis and synthesis of human affective and social behaviour in time and/or space in an interdisciplinary forum of affective and behavioural scientists. More specifically, the workshop aims (i) to bring forth existing efforts and major accomplishments in the modelling, analysis and synthesis of affective and social behaviour in continuous time and/or space, (ii) to encourage the design of novel applications in contexts as diverse as human-computer and human-robot interaction, clinical and biomedical studies, learning and driving environments, and entertainment technology, and (iii) to focus on current trends and future directions in the field.

Sponsors:


Social Signal Processing Network (SSPNet)


Preliminary Technical Program:


08.30-08.40 Opening
 
08.40-09.30 Keynote 1: Ursula Hess, Humboldt-Universität zu Berlin, Germany. The Face as Context in Emotion Recognition

Abstract: Faces are not simply blank canvases upon which facial expressions write their emotional messages. In fact, facial appearance and facial movement are both important social signaling systems in their own right. In my presentation I will provide multiple lines of evidence for the notion that the social signals derived from facial appearance on the one hand and facial movement on the other interact. As a consequence, the same expression, when shown by different people, may not have the same meaning.

Speaker: Ursula Hess is an internationally recognized author of over 100 chapters and articles in leading journals in the field. Her research is centered on the communication of emotions. In particular, she focuses on the social factors that influence this process, such as gender and intergroup relations. One line of research investigates the influence of facial appearance on the perception of emotions in men and women, as well as in individuals of different ages. Another line of research focuses on empathic processes, especially facial mimicry, and their modulation by social context. Her expertise is in the combination of behavioral research and psychophysiology to assess both explicit and automatic processes underlying social interactions.
 
Session 1: Data induction, acquisition & annotation - Chair: Ursula Hess
09.30-09.50 Gary McKeown, Will Curran, Ciaran McLoughlin, Harry Griffin and Nadia Berthouze. Laughter Induction Techniques Suitable for Generating Motion Capture Data of Laughter Associated Body Movements
09.50-10.10 Fabien Ringeval, Andreas Sonderegger, Juergen Sauer and Denis Lalanne. Introducing the RECOLA Multimodal Corpus of Remote Collaborative and Affective Interactions
10.10-10.30 Angeliki Metallinou and Shrikanth Narayanan. Annotation and Processing of Continuous Emotional Attributes: Challenges and Opportunities
 
10.30-11.00 Coffee break
 
Session 2: Facial action unit & expression recognition - Chair: Hatice Gunes
11.00-11.20 Laszlo Attila Jeni, Jeff Girard, Jeffrey Cohn and Fernando De La Torre. Continuous AU Intensity Estimation using Localized, Sparse Facial Feature Space
11.20-11.40 Xiao Zhang, Mohammad Mahoor and Richard Voyles. Facial Expression Recognition using HessianMKL based Multiclass-SVM
11.40-12.00 Fatma Guney, N. Murat Arar, Mika Fischer and Hazim Kemal Ekenel. Cross-pose Facial Expression Recognition
12.00-12.20 Jinkuang Cheng, Yangdong Deng, Hongying Meng and Zhihua Wang. A Facial Expression Based Continuous Emotional State Monitoring System with GPU Acceleration
 
12.20-13.50 Lunch break
 
13.50-14.40 Keynote 2: Beatrice de Gelder, Maastricht University, The Netherlands. Through the Eyes of the Body

Abstract: Communication with conspecifics is a central requirement in complex environments. Faces and whole bodies are the most frequently encountered and most salient information signals. While research on facial communication has a long history, the investigation of whole-body signaling has only just started. In this talk we review this new field and focus on some main issues. First, to what extent does signaling of fear and anger with the face and the whole body rely on similar affective structures or category-specific resources, and what are the distinctive functions of each? Second, what are the relative temporal dynamics of affective and category-specific information: do we first process the expression and then the category and exemplar identity, or the other way round, as is the standard assumption? Next, is there parallel processing of consciously and non-consciously perceived face and body signals, and what can we learn from patients with focal brain damage? Finally, studies are discussed showing how body expression is influenced by social and cultural context and by interaction with other agents.

 
Session 3: Affect analysis & recognition - Chair: Beatrice de Gelder
14.40-15.00 Shiro Kumano, Kazuhiro Otsuka, Masafumi Matsuda and Junji Yamato. Analyzing Perceived Empathy/Antipathy based on Reaction Time in Behavioral Coordination
15.00-15.20 Mojtaba Khomami Abadi, Seyed Mostafa Kia, Ramanathan Subramanian, Paolo Avesani and Nicu Sebe. Decoding affect in videos employing the MEG brain signal
15.20-15.40 Shizhi Chen and Yingli Tian. Margin-Constrained Multiple Kernel Learning Based Multi-Modal Fusion for Affect Recognition
15.40-16.00 Xiangning Liu, Katsuhito Akahane and Makoto Sato. Proposal on an Image Haptization System Based on Emotional Effects of Color

Workshop Organisers:



Program Committee:



  • Anton Batliner, Technische Universität München, Germany
  • Nadia Bianchi-Berthouze, University College London, UK
  • Felix Burkhardt, Deutsche Telekom, Germany
  • Carlos Busso, University of Texas at Dallas, USA
  • Antonio Camurri, University of Genova, Italy
  • George Caridakis, National Technical University of Athens, Greece
  • Ginevra Castellano, University of Birmingham, UK
  • Sidney D'Mello, University of Memphis, USA
  • Hazim Kemal Ekenel, Istanbul Technical University, Turkey
  • Dirk Heylen, University of Twente, The Netherlands
  • Eva Hudlicka, Psychometrix Associates, USA
  • Irene Kotsia, Queen Mary University of London, UK
  • Gary McKeown, Queen's University Belfast, UK
  • Louis-Philippe Morency, University of Southern California, USA
  • Anton Nijholt, University of Twente, The Netherlands
  • Peter Robinson, University of Cambridge, UK
  • Albert Ali Salah, Bogazici University, Turkey
  • Stefan Steidl, Friedrich-Alexander-Universität Erlangen-Nürnberg, Germany
  • Michel Valstar, University of Nottingham, UK
  • Dongrui Wu, GE Global Research, USA
  • Yi-Hsuan Yang, Academia Sinica, Taiwan
  • Stefanos Zafeiriou, Imperial College London, UK


Important Dates:


Paper Submission: 21 November 2012
Notification of Acceptance: 8 January 2013
Camera Ready Paper: 15 January 2013
Workshop: 26 April 2013




Submission Policy:


In submitting a manuscript to this workshop, the authors acknowledge that no paper substantially similar in content has been submitted to another conference or workshop.
Manuscripts should be in the IEEE FG paper format.
Authors should submit papers as a PDF file.
Papers accepted for the workshop will be allocated 6 pages in the proceedings, with the option of having up to 2 extra pages.
EmoSPACE reviewing is double-blind. Reviewing will be carried out by members of the program committee. Each paper will receive at least two reviews. Acceptance will be based on relevance to the workshop, novelty, and technical quality.
Submission and reviewing will be handled via EasyChair.


Suggested workshop topics include, but are by no means limited to:
  • Cues for continuous affective, mental and social state recognition
    • facial expressions
    • head movements and gestures
    • body postures and gestures
    • audio (e.g., speech, non-linguistic vocalisations, etc.)
    • bio signals (e.g., heart, brain, thermal signals, etc.)
  • Automatic analysis and prediction
    • approaches for discretised and continuous prediction
    • identifying appropriate classification and prediction methods
    • introducing or identifying optimal strategies for fusion
    • techniques for modelling high inter-subject variation
    • approaches to determining duration of affective and social cues for automatic analysis
  • Data acquisition and annotation
    • elicitation of affective, mental and social states
    • individual variations (interpersonal and cognitive issues)
    • (multimodal) naturalistic data sets and annotations
    • (multimodal) annotation tools
    • modelling annotations from multiple raters and their reliability
  • Applications
    • interaction with robots, virtual agents, and games (including tutoring)
    • mobile affective computing
    • smart environments & digital spaces (e.g., in a car, or digital artworks)
    • implicit (multimedia) tagging
    • clinical and biomedical studies (e.g., autism, depression, pain, etc.)


Please submit your paper at https://www.easychair.org/conferences/?conf=emospace2013.