ANNEMO (ANNotating EMOtions) is a web-based tool for annotating affective and social behaviors in audiovisual data.
In this tool, the annotator logs in to a web-based annotation interface with a unique identifier, using the Google Chrome web browser. The interface is split vertically into two parts: a scrolling list of the audiovisual recordings is given on the left side as an HTML list, while the video and the annotation cursor are displayed one below the other on the right side of the window.
Two affective dimensions (arousal and valence) can be annotated separately and time-continuously, using a slider with values ranging from -1 to +1 in steps of 0.01. For convenience, annotators can use the video's control panel to pause playback and resume the annotation at a given instant. The social dimensions, in contrast, are rated once, after the annotation of the affective behaviors has been completed, using 7-point Likert scales on the following five dimensions: agreement, dominance, engagement, performance and rapport.
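The annotation scheme above can be summarized with a small data sketch. This is not the tool's actual storage format (which is not specified here); the record layout, field names, and quantization helper below are assumptions for illustration only.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class AffectiveSample:
    """One timestamped slider value for a time-continuous dimension."""
    time_s: float  # seconds from the start of the video
    value: float   # slider position in [-1, +1], step 0.01

@dataclass
class SequenceAnnotation:
    """Hypothetical record of one annotated sequence."""
    sequence_id: str
    arousal: List[AffectiveSample] = field(default_factory=list)
    valence: List[AffectiveSample] = field(default_factory=list)
    # One-shot social ratings on 7-point Likert scales (1..7), e.g.
    # {"agreement": 5, "dominance": 3, ...}
    social: Dict[str, int] = field(default_factory=dict)

def record_sample(trace: List[AffectiveSample], time_s: float, raw: float) -> None:
    """Clamp a raw slider reading to [-1, +1] and quantize to the 0.01 step."""
    value = round(max(-1.0, min(1.0, raw)) / 0.01) * 0.01
    trace.append(AffectiveSample(time_s, round(value, 2)))
```

For example, `record_sample(ann.arousal, 2.5, 0.456)` would store a sample with value 0.46, and an out-of-range reading such as 1.7 would be clamped to 1.0.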
Ratings can be automatically checked with a Matlab script throughout the data collection to ensure that: (i) the delay between the first annotation and the video start is less than a given constant, e.g., 5 s; (ii) the delay between two consecutive annotation samples of the same sequence (i.e., a "blank") is no longer than another given constant, e.g., 20 s; and (iii) the annotation of the social dimensions is performed after that of the two affective dimensions for each sequence. Sequences that fail any of these criteria should be considered for re-annotation. Additionally, several post-processing methods are available as Matlab scripts, such as binning of the continuous annotation values, mean-centering and synchronization, and statistical analysis of the data.
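The three criteria can be sketched in a few lines. The original checks are Matlab scripts; the Python function below is a minimal re-statement of the same logic, and its input format (a sorted list of sample timestamps plus a flag for the rating order) is an assumption.

```python
# Assumed thresholds, matching the examples given in the text.
MAX_START_DELAY_S = 5.0   # (i) max delay between video start and first sample
MAX_BLANK_S = 20.0        # (ii) max gap ("blank") between consecutive samples

def check_sequence(sample_times, social_rated_after_affective):
    """Return the list of failed criteria for one annotated sequence.

    sample_times: sorted timestamps (in seconds, relative to the video start)
    of the affective annotation samples.
    social_rated_after_affective: True if the social dimensions were rated
    after both affective dimensions (criterion iii).
    """
    failures = []
    # (i) the first annotation must arrive soon after the video starts
    if not sample_times or sample_times[0] >= MAX_START_DELAY_S:
        failures.append("start_delay")
    # (ii) no gap between consecutive samples may exceed the blank threshold
    gaps = [b - a for a, b in zip(sample_times, sample_times[1:])]
    if any(gap > MAX_BLANK_S for gap in gaps):
        failures.append("blank")
    # (iii) social ratings must follow the affective annotation
    if not social_rated_after_affective:
        failures.append("social_order")
    return failures  # non-empty list => consider the sequence for re-annotation
```

A sequence whose first sample arrives at 1.0 s but that then jumps from 5.0 s to 30.0 s would fail only the "blank" criterion, flagging it for re-annotation.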
F. Ringeval, A. Sonderegger, J. Sauer and D. Lalanne, "Introducing the RECOLA Multimodal Corpus of Remote Collaborative and Affective Interactions", in Proc. of the 2nd International Workshop on Emotion Representation, Analysis and Synthesis in Continuous Time and Space (EmoSPACE), held in conjunction with IEEE Face & Gestures 2013, Shanghai, China, April 22-26, 2013.