
Bibliography

Emotion Classification

2015

Abdelwahab, M., & Busso, C. (2015). Supervised domain adaptation for emotion recognition from speech. International Conference on Acoustics, Speech, and Signal Processing (ICASSP 2015) (pp. 5058-5062). Brisbane, Australia.


Lotfian, R., & Busso, C. (2015). Emotion recognition using synthetic speech as neutral reference. International Conference on Acoustics, Speech, and Signal Processing (ICASSP 2015) (pp. 4759-4763). Brisbane, Australia.


Mariooryad, S., & Busso, C. (2015). Correcting time-continuous emotional labels by modeling the reaction lag of evaluators. IEEE Transactions on Affective Computing: 6 (2), 97-108.


2014

Abdelwahab, M., & Busso, C. (2014). Evaluation of syllable rate estimation in expressive speech and its contribution to emotion recognition. IEEE Spoken Language Technology Workshop (SLT) (pp. 472-477). South Lake Tahoe, CA, USA.


Arias, J., Busso, C., & Becerra Yoma, N. (2014). Shape-based modeling of the fundamental frequency contour for emotion detection in speech. Computer Speech and Language: 28 (1), 278-294.


Mariooryad, S., & Busso, C. (2014). Compensating for speaker or lexical variabilities in speech for emotion recognition. Speech Communication: 57, 1-12.


2013

Busso, C., Bulut, M., & Narayanan, S.S. (2013). Toward effective automatic recognition systems of emotion in speech. In J. Gratch and S. Marsella (Ed.), Social emotions in nature and artifact: emotions in human and human-computer interaction (pp. 110-127). New York, NY, USA: Oxford University Press.


Busso, C., Mariooryad, S., Metallinou, A., & Narayanan, S.S. (2013). Iterative feature normalization scheme for automatic emotion detection from speech. IEEE Transactions on Affective Computing: 4 (4), 386-397.


Ivonin, L., Chang, H., Chen, W., & Rauterberg, M. (2013). Measuring archetypal experiences with physiological sensors. SPIE Newsroom.


Ivonin, L., Chang, H., Chen, W., & Rauterberg, M. (2013). Automatic recognition of the unconscious reactions from physiological signals. In Andreas Holzinger, Martina Ziefle, Martin Hitz, Matjaž Debevc (Ed.), Human Factors in Computing and Informatics: Lecture Notes in Computer Science, 7946 (pp. 16-35). Berlin Heidelberg: Springer.


Ivonin, L., Chang, H., Chen, W., & Rauterberg, M. (2013). Unconscious emotions: quantifying and logging something we are not aware of. Personal and Ubiquitous Computing: 17 (4), 663-673.


Mariooryad, S., & Busso, C. (2013). Exploring cross-modality affective reactions for audiovisual emotion recognition. IEEE Transactions on Affective Computing: 4 (2), 183-196.


Mariooryad, S., & Busso, C. (2013). Feature and model level compensation of lexical content for facial emotion recognition. IEEE International Conference on Automatic Face and Gesture Recognition (FG 2013). Shanghai, China.


2012

Rahman, T., & Busso, C. (2012). A personalized emotion recognition system using an unsupervised feature adaptation scheme. International Conference on Acoustics, Speech, and Signal Processing (ICASSP 2012) (pp. 5117-5120). Kyoto, Japan.


2011

Kleinsmith, A., Bianchi-Berthouze, N., & Steed, A. (2011). Automatic recognition of non-acted affective postures. IEEE Transactions on Systems, Man, and Cybernetics, Part B.


2010

Schröder, M. (2010). The SEMAINE API: towards a standards-based framework for building emotion-oriented systems. Advances in Human-Computer Interaction: 2010 (319406).


2009

Busso, C., Lee, S., & Narayanan, S.S. (2009). Analysis of emotionally salient aspects of fundamental frequency for emotion detection. IEEE Transactions on Audio, Speech and Language Processing: 17 (4), 582-596.


Lee, C., Busso, C., Lee, S., & Narayanan, S. (2009). Modeling mutual influence of interlocutor emotion states in dyadic spoken interactions. Interspeech 2009 (pp. 1983-1986). Brighton, UK.


Lee, C., Mower, E., Busso, C., Lee, S., & Narayanan, S. (2009). Emotion recognition using a hierarchical binary decision tree approach. Interspeech 2009 (pp. 320-323). Brighton, UK.


Mower, E., Metallinou, A., Lee, C., Kazemzadeh, A., Busso, C., Lee, S., & Narayanan, S.S. (2009). Interpreting ambiguous emotional expressions. International Conference on Affective Computing and Intelligent Interaction (ACII 2009). Amsterdam, The Netherlands.


2008

Batliner, A., Schuller, B., Schäffler, S., & Steidl, S. (2008). Mothers, adults, children, pets - towards the acoustics of intimacy. International Conference on Acoustics, Speech, and Signal Processing (ICASSP 2008). Las Vegas, USA.


Batliner, A., Steidl, S., Hacker, C., & Nöth, E. (2008). Private emotions vs. social interaction - a data-driven approach towards analysing emotions in speech. User Modeling and User-Adapted Interaction, 175-206.


D'Urso, V., Cavicchio, F., & Magno Caldognetto, E. (2008). Le etichette lessicali nelle ricerche sperimentali sulle emozioni: problemi teorici e metodologici [Lexical labels in experimental research on emotions: theoretical and methodological problems]. In E. Magno Caldognetto, F. Cavicchio, P. Cosi (Ed.), Comunicazione parlata e manifestazione delle emozioni [Spoken communication and the manifestation of emotions]. Naples: Liguori.


Kim, J., & André, E. (2008). Multi-channel biosignal analysis for automatic emotion recognition. International Conf. on Bio-inspired Systems and Signal Processing (Biosignals 2008). Funchal, Madeira.


Kim, J., & André, E. (2008). Emotion recognition based on physiological changes in music listening. IEEE Transactions on Pattern Analysis and Machine Intelligence.

