
Bibliography

This bibliography contains 91 entries.

2015

Mower Provost, E., Shangguan, Y., & Busso, C. (2015). UMEME: University of Michigan emotional McGurk effect dataset. IEEE Transactions on Affective Computing, 6(4), 395-409.


2014

Mariooryad, S., Lotfian, R., & Busso, C. (2014). Building a naturalistic emotional speech corpus by retrieving expressive behaviors from existing speech corpora. Interspeech 2014 (pp. 238-242). Singapore.


2008

Batliner, A., Schuller, B., Schäffler, S., & Steidl, S. (2008). Mothers, adults, children, pets - towards the acoustics of intimacy. Proc. of ICASSP 2008. Las Vegas.


Busso, C., & Narayanan, S.S. (2008). Scripted dialogs versus improvisation: lessons learned about emotional elicitation techniques from the IEMOCAP database. Interspeech 2008 - Eurospeech (pp. 1670-1673). Brisbane, Australia.


Busso, C., & Narayanan, S.S. (2008). Recording audio-visual emotional databases from actors: a closer look. Second International Workshop on Emotion: Corpora for Research on Emotion and Affect, International conference on Language Resources and Evaluation (LREC 2008) (pp. 17-22). Marrakech, Morocco.


Busso, C., Bulut, M., Lee, C.C., Kazemzadeh, A., Mower, E., Kim, S., Chang, J.N., Lee, S., & Narayanan, S.S. (2008). IEMOCAP: interactive emotional dyadic motion capture database. Journal of Language Resources and Evaluation, 42(4), 335-359.


Poggi, I., Cavicchio, F., & Magno Caldognetto, E. (2008). Irony in a judicial debate: analyzing the subtleties of irony while testing the subtleties of an annotation scheme. JLRE.


Rehm, M., & André, E. (2008). From annotated multimodal corpora to simulated human-like behaviors. In Wachsmuth, I. & Knoblich, G. (Eds.), Modeling Communication for Robots and Virtual Humans. Springer.


2007

Amir, N., & Cohen, R. (2007). Characterizing emotion in the soundtrack of an animated film: credible or incredible? ACII 2007 (pp. 148-158).


Bänziger, T., & Scherer, K. (2007). Using actor portrayals to systematically study multimodal emotion expression: the GEMEP corpus. ACII 2007 (pp. 476-487).


Castellano, G., & Mancini, M. (2007). Analysis of emotional gestures from videos for the generation of expressive behaviour in an ECA. Proceedings of the 7th International Workshop on Gesture in Human-Computer Interaction and Simulation. Lisbon, Portugal.


Douglas-Cowie, E., Cowie, R., Sneddon, I., Cox, C., Lowry, O., McRorie, M., Martin, J.C., Devillers, L., & Batliner, A. (2007). The HUMAINE database: addressing the needs of the affective computing community. In Paiva, A., Prada, R., & Picard, R. (Eds.), 2nd International Conference on Affective Computing and Intelligent Interaction (ACII 2007), LNCS, vol. 4738 (pp. 488-500). Lisbon, Portugal.


Heylen, D., Bevacqua, E., Tellier, M., & Pelachaud, C. (2007). Searching for prototypical facial feedback signals. Proceedings of IVA'07: Intelligent Virtual Agents (pp. 147-153). Paris, France.


Izdebski, K. (2007). Emotions of the human voice. Plural Publishing.


Martin, J.C., Caridakis, G., Devillers, L., Karpouzis, K., & Abrilian, S. (2007). Manual annotation and automatic image processing of multimodal emotional behaviors: validating the annotation of TV interviews. Personal and Ubiquitous Computing, Springer: Special issue on Emerging Multimodal Interfaces.


Osherenko, A., & André, E. (2007). Lexical affect sensing: are affect dictionaries necessary to analyze affect? Second International Conference on Affective Computing and Intelligent Interaction (ACII 2007), Lecture Notes in Computer Science (pp. 230-241). Springer.


2006

Abrilian, S., Devillers, L., & Martin, J. (2006). Annotation of emotions in real-life video interviews: variability between coders. 5th Int. Conf. on Language Resources and Evaluation (LREC 2006). Genoa, Italy.


Amir, N., & Ron, S. (2006). Collection and evaluation of an emotional speech corpus using event recollection. Workshop on Corpora for Research on Emotion and Affect, LREC. Genoa, Italy.


Batliner, A., Biersack, S., & Steidl, S. (2006). The prosody of pet robot directed speech: evidence from children. In Hoffmann, R. & Mixdorff, H. (Eds.), Proc. Speech Prosody, 3rd International Conference (pp. 1-4). Dresden: TUDpress.


Batliner, A., Burkhardt, F., van Ballegooy, M., & Nöth, E. (2006). A taxonomy of applications that utilize emotional awareness. In Erjavec, T. & Gros, J. (Eds.), Language Technologies, IS-LTC 2006 (pp. 246-250). Ljubljana, Slovenia: Informacijska Druzba (Information Society).


Buisine, S., Abrilian, S., Niewiadomski, R., Martin, J.C., Devillers, L., & Pelachaud, C. (2006). Perception of blended emotions: from video corpus to expressive agent. 6th International Conference on Intelligent Virtual Agents (IVA 2006), Marina del Rey, USA, August 2006 (pp. 93-106). Springer.


Bänziger, T., Pirker, H., & Scherer, K. (2006). GEMEP - Geneva multimodal emotion portrayals: a corpus for the study of multimodal emotional expressions. In L. Devillers et al. (Eds.), Proceedings of LREC'06 Workshop on Corpora for Research on Emotion and Affect (pp. 15-19). Genoa, Italy.


Campbell, N., Devillers, L., Douglas-Cowie, E., Aubergé, V., Batliner, A., & Tao, J. (2006). Resources for the processing of affect in interactions. In ELRA (Ed.), Proceedings of the 4th International Conference of Language Resources and Evaluation (pp. XXV-XXVIII).


 