Bibliography

Signal Analysis

This bibliography currently contains 154 entries.



2015

Mariooryad, S., & Busso, C. (2015). Correcting time-continuous emotional labels by modeling the reaction lag of evaluators. IEEE Transactions on Affective Computing: 6 (2), 97-108.


2014

Abdelwahab, M., & Busso, C. (2014). Evaluation of syllable rate estimation in expressive speech and its contribution to emotion recognition. IEEE Spoken Language Technology Workshop (SLT) (pp. 472-477). South Lake Tahoe, CA, USA.


2013

Ivonin, L., Chang, H., Chen, W., & Rauterberg, M. (2013). Automatic recognition of the unconscious reactions from physiological signals. In Andreas Holzinger, Martina Ziefle, Martin Hitz, & Matjaž Debevc (Eds.), Human Factors in Computing and Informatics: Lecture Notes in Computer Science, 7946 (pp. 16-35). Berlin Heidelberg: Springer.


Ivonin, L., Chang, H., Chen, W., & Rauterberg, M. (2013). Unconscious emotions: quantifying and logging something we are not aware of. Personal and Ubiquitous Computing: 17 (4), 663-673.


Mariooryad, S., & Busso, C. (2013). Analysis and compensation of the reaction lag of evaluators in continuous emotional annotations. Affective Computing and Intelligent Interaction (ACII 2013) (pp. 85-90). Geneva, Switzerland.


2012

Busso, C., & Rahman, T. (2012). Unveiling the acoustic properties that describe the valence dimension. Interspeech 2012 (pp. 1179-1182). Portland, OR, USA.


Mariooryad, S., & Busso, C. (2012). Factorizing speaker, lexical and emotional variabilities observed in facial expressions. IEEE International Conference on Image Processing (ICIP 2012) (pp. 2605-2608).


2010

Schröder, M. (2010). The SEMAINE API: towards a standards-based framework for building emotion-oriented systems. Advances in Human-Computer Interaction: 2010, Article ID 319406.


2009

Busso, C., Bulut, M., Lee, S., & Narayanan, S.S. (2009). Fundamental frequency analysis for speech emotion processing. In Sylvie Hancil (Ed.), The Role of Prosody in Affective Speech (pp. 309-337). Berlin, Germany: Peter Lang Publishing Group.


Busso, C., Lee, S., & Narayanan, S.S. (2009). Analysis of emotionally salient aspects of fundamental frequency for emotion detection. IEEE Transactions on Audio, Speech and Language Processing: 17 (4), 582-596.


2008

Batliner, A., Schuller, B., Schäffler, S., & Steidl, S. (2008). Mothers, adults, children, pets - towards the acoustics of intimacy. Proceedings of ICASSP 2008. Las Vegas, NV, USA.


Batliner, A., Steidl, S., Hacker, C., & Nöth, E. (2008). Private emotions vs. social interaction — a data-driven approach towards analysing emotions in speech. User Modeling and User-Adapted Interaction, 175-206.


Guerini, M., Strapparava, C., & Stock, O. (2008). Trusting politicians' words (for persuasive NLP). Proceedings of CICLing 2008. Haifa, Israel.


Kim, J., & André, E. (2008). Multi-channel biosignal analysis for automatic emotion recognition. International Conference on Bio-inspired Systems and Signal Processing (Biosignals 2008). Funchal, Madeira, Portugal.


Kim, J., & André, E. (2008). Emotion recognition based on physiological changes in music listening. IEEE Transactions on Pattern Analysis and Machine Intelligence.


Tooher, M., Yanushevskaya, I., & Gobl, C. (2008). Transformation of LF parameters for speech synthesis of emotion: regression trees. Speech Prosody 2008, Campinas, Brazil.


Yanushevskaya, I., Gobl, C., & Ní Chasaide, A. (2008). Voice quality and loudness in affect perception. Speech Prosody 2008, Campinas, Brazil.


2007

Abrilian, S. (2007). Représentation de comportements émotionnels multimodaux spontanés: perception, annotation et synthèse [Representation of spontaneous multimodal emotional behaviors: perception, annotation, and synthesis] (Ph.D. thesis). Paris XI.


Asteriadis, S., Tzouveli, P., Karpouzis, K., & Kollias, S. (2007). Non-verbal feedback on user interest based on gaze direction and head pose. Proceedings of the 2nd International Workshop on Semantic Media Adaptation and Personalization (SMAP'07). London, UK, December 17-18, 2007.


Batliner, A., Hacker, C., Kaiser, M., Mögele, H., & Nöth, E. (2007). Taking into account the user's focus of attention with the help of audio-visual information: towards less artificial human-machine communication. International Conference on Auditory-Visual Speech Processing 2007 (pp. 51-56).


Batliner, A., & Huber, R. (2007). Speaker characteristics and emotion classification. In Christian Müller (Ed.), Speaker Classification I: Fundamentals, Features, and Methods: LNAI (pp. 138-151). Berlin-Heidelberg: Springer.


Batliner, A., Steidl, S., & Nöth, E. (2007). Laryngealizations and emotions: how many babushkas? In Schröder, Marc; Batliner, Anton; d'Alessandro, Christophe (Eds.), Proceedings of the International Workshop on Paralinguistic Speech - between Models and Data (ParaLing'07, Saarbrücken, 3 August 2007) (pp. 17-22). DFKI.


Batliner, A., Steidl, S., Schuller, B., Seppi, D., Vogt, T., Devillers, L., Vidrascu, L., Amir, N., Kessous, L., & Aharonson, V. (2007). The impact of F0 extraction errors on the classification of prominence and emotion. Proceedings of the 16th International Congress of Phonetic Sciences (ICPhS 2007) (pp. 2201-2204). Saarbrücken.


Busso, C., & Narayanan, S. (2007). Interrelation between speech and facial gestures in emotional utterances: a single subject study. IEEE Transactions on Audio, Speech and Language Processing: 15 (8), 2331-2347.


Busso, C., Deng, Z., Grimm, M., Neumann, U., & Narayanan, S. (2007). Rigid head motion in expressive speech animation: analysis and synthesis. IEEE Transactions on Audio, Speech and Language Processing: 15 (3), 1075-1086.


 