Demonstrators

This is the Demonstrators page in the HUMAINE know-how melting pot, a collaborative document editing environment destined to become a key source of information in the research area. As it uses wiki technology, all registered members can edit this set of pages and add new pages.


Demonstrators with websites:

ActAffAct
The ActAffAct system is an implementation of an appraisal-based agent architecture targeted at the generation of simple narratives. It has been extended with concepts from emotion regulation. Source code and more information are available from:
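
As a rough illustration of the appraisal idea (not ActAffAct's actual code; the event model, rules, and thresholds below are invented for the sketch), an appraisal-based agent evaluates events against its goals and derives an emotional reaction:

    # Minimal appraisal loop, loosely in the spirit of OCC-style appraisal.
    # All names and rules here are illustrative, not taken from ActAffAct.
    from dataclasses import dataclass

    @dataclass
    class Event:
        desirability: float   # -1 (blocks a goal) .. +1 (furthers a goal)
        likelihood: float     # 0 (unexpected) .. 1 (certain)

    def appraise(event: Event) -> tuple[str, float]:
        """Map appraisal variables to an (emotion, intensity) pair."""
        intensity = abs(event.desirability)
        if event.desirability > 0:
            emotion = "joy" if event.likelihood >= 0.5 else "hope"
        else:
            emotion = "distress" if event.likelihood >= 0.5 else "fear"
        return emotion, intensity

    if __name__ == "__main__":
        print(appraise(Event(desirability=-0.8, likelihood=0.9)))  # ('distress', 0.8)
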
Applet to manipulate a face
A Java applet for controlling elements of a 2D head.

CharToon
A facial animation system for designing 2D cartoon faces and for controlling a 2D or 3D face.

DialogSimulator
A tool to simulate persuasive dialogs with an Embodied Animated Agent.

eShowroom
The eShowroom demonstrator of the NECA project features animated car-sales dialogues between virtual characters, based on state-of-the-art 3D web animation technology and speech synthesis. You can choose the personalities of the virtual characters and the issues they should discuss. Based on your input, a dialogue will be generated for you.

Emotional filter
An emotional text-to-speech filter for MBROLA. It can be used to simulate emotional arousal in synthetic speech, especially in multilingual applications.
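
MBROLA reads .pho files in which each line carries a phoneme, a duration in milliseconds, and optional (position %, pitch Hz) targets. An arousal filter can therefore be sketched as a plain transformation of those numbers; the scaling factors below are placeholders, not the demonstrator's actual rules:

    # Sketch of an arousal filter over MBROLA .pho input.
    # Each .pho line: <phoneme> <duration_ms> [<pos_percent> <pitch_hz>]...
    # The factors are illustrative; the actual filter's rules may differ.
    import sys

    def filter_line(line: str, rate: float = 1.2, pitch_gain: float = 1.15) -> str:
        parts = line.split()
        if not parts or line.startswith(";"):        # keep comments/blank lines
            return line.rstrip("\n")
        phoneme, duration = parts[0], float(parts[1])
        out = [phoneme, str(round(duration / rate))]  # higher arousal -> faster
        targets = parts[2:]
        for pos, pitch in zip(targets[0::2], targets[1::2]):
            out += [pos, str(round(float(pitch) * pitch_gain))]  # higher pitch
        return " ".join(out)

    for line in sys.stdin:
        print(filter_line(line))

The filtered output can then be piped into MBROLA as a normal .pho stream, which is what makes the approach language-independent.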

Emotional Speech
A collection of samples of synthesized emotional speech.

EmotionDisc
A disc (or two squares) for generating a continuum of facial displays of emotion based on 2D or 4D control.
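
For illustration, 2D disc control can be read as: the angle selects an emotion category and the radius its intensity, with a neutral face at the centre. The emotion layout below is an assumption made for this sketch:

    # Sketch of 2D disc control: angle selects an emotion category, radius
    # its intensity. The emotion ordering is an assumption for illustration.
    import math

    EMOTIONS = ["happiness", "surprise", "fear", "anger", "disgust", "sadness"]

    def disc_to_emotion(x: float, y: float) -> tuple[str, float]:
        """Map a point in the unit disc to (emotion, intensity in [0, 1])."""
        radius = min(math.hypot(x, y), 1.0)          # centre = neutral face
        angle = math.atan2(y, x) % (2 * math.pi)
        sector = int(angle / (2 * math.pi / len(EMOTIONS))) % len(EMOTIONS)
        return EMOTIONS[sector], radius

    print(disc_to_emotion(0.5, 0.5))   # ('happiness', 0.707...)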

eMoto
A mobile messaging system using affective gestures as input and affective expressions, in terms of colours, shapes and animations, as output.

ERMIS
A prototype system for human-computer interaction that can interpret its users' attitude or emotional state (e.g., activation/interest, boredom, anger) from their speech and/or their facial gestures and expressions.

EyesWeb
The EyesWeb research project aims at exploring and developing models of interaction by extending music language toward gesture and visual languages, with a particular focus on the understanding of affect and expressive content in gesture.

The EyesWeb open platform was originally conceived to support research on multimodal expressive interfaces and multimedia interactive systems. It has also been widely employed for designing and developing real-time dance, music, and multimedia applications. It supports the user in experimenting with computational models of non-verbal expressive communication and in mapping gestures from different modalities (e.g., human full-body movement, music) onto multimedia output (e.g., sound, music, visual media).

FearNot!
An interactive system for education against bullying, featuring emotion-driven virtual characters in episodes involving bullying and interaction with a child user.

Greta
Greta is an Embodied Conversational Agent system that takes as input a text tagged with information on communicative functions and generates synchronized multimodal behaviours (gestures, facial expressions, gaze, and head movements) together with speech.

Jerk-O-Meter
The Jerk-O-Meter monitors attention (activity and stress) in a phone conversation, based on speech feature analysis.
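
As a rough illustration of the kind of speech-feature analysis involved (the features and thresholds below are assumptions for the sketch, not the Jerk-O-Meter's actual algorithm):

    # Crude sketch of conversation-activity features from an audio signal.
    import numpy as np

    def activity_features(signal: np.ndarray, sr: int = 8000,
                          frame_ms: int = 32) -> dict:
        """Return simple speaking-activity statistics for a mono signal."""
        frame = int(sr * frame_ms / 1000)
        n = len(signal) // frame
        frames = signal[: n * frame].reshape(n, frame)
        energy = (frames ** 2).mean(axis=1)
        threshold = 0.1 * energy.max()            # crude speech/silence split
        speaking = energy > threshold
        voiced = energy[speaking]
        variation = float(voiced.std() / (voiced.mean() + 1e-9)) if speaking.any() else 0.0
        return {"speaking_fraction": float(speaking.mean()),
                "energy_variation": variation}

    # Example with synthetic data: one second of noise standing in for speech.
    rng = np.random.default_rng(0)
    print(activity_features(rng.normal(0, 0.1, 8000)))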

The Maido and Gurby Experiment
The Maido and Gurby experiment shows how autonomous creatures can invent and share their own sound/syllable system through interactions coordinated by emotional speech expression and recognition. This demonstration shows how one can build cartoon-like emotional speech synthesis, and related algorithms to recognize emotion/attitude in speech. More info in this paper.

MARY German Emotional Text-to-Speech
A German Text-to-Speech synthesis system modelling emotions using emotion dimensions, developed in DFKI's Language Technology Lab.
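
For illustration, a dimensional approach can map values such as activation, evaluation, and power to global prosody settings; the ranges and coefficients below are invented for this sketch and are not MARY's actual rules:

    # Illustrative mapping from emotion dimensions to prosody adjustments.
    def prosody_from_dimensions(activation: float, evaluation: float,
                                power: float) -> dict:
        """Dimensions in [-100, 100] -> relative prosody adjustments (%)."""
        a, e, p = activation / 100, evaluation / 100, power / 100
        return {
            "pitch_shift_pct": 20 * a + 5 * e,   # aroused speech: higher pitch
            "pitch_range_pct": 30 * a,           # ...and wider pitch excursions
            "rate_pct": 15 * a - 5 * p,          # aroused: faster; dominant: slower
            "volume_pct": 10 * a + 10 * p,
        }

    print(prosody_from_dimensions(activation=80, evaluation=-20, power=10))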

NICE
The system enables users to interact through speech and 2D gestures with 3D characters that can display some basic emotional patterns.

Smartkom
Dialog-based human-technology interaction by coordinated analysis and generation of multiple modalities.

The Playground Experiment
The Playground Experiment aims at showing how a robot equipped with an intrinsic motivation system, and in particular artificial curiosity, can autonomously explore its environment and develop skills of increasing complexity that were not pre-specified, over an extended period of time. More info on this research project in developmental/epigenetic robotics.
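
One common way to formalise artificial curiosity is to reward learning progress, i.e. a decrease in prediction error over time. The toy sketch below (action names and numbers invented) selects the action whose error has been shrinking fastest:

    # Toy curiosity-driven action selection based on learning progress.
    from collections import defaultdict, deque

    class CuriousAgent:
        def __init__(self, actions, window=10):
            self.actions = actions
            self.errors = defaultdict(lambda: deque(maxlen=window))

        def learning_progress(self, action) -> float:
            errs = self.errors[action]
            if len(errs) < 4:
                return float("inf")          # try everything a few times first
            half = len(errs) // 2
            older = sum(list(errs)[:half]) / half
            recent = sum(list(errs)[half:]) / (len(errs) - half)
            return older - recent            # positive = error is shrinking

        def choose(self):
            return max(self.actions, key=self.learning_progress)

        def record(self, action, prediction_error: float):
            self.errors[action].append(prediction_error)

    agent = CuriousAgent(["bash", "grasp", "vocalize"])
    for err in [0.9, 0.8, 0.5, 0.3]:        # 'grasp' is becoming predictable
        agent.record("grasp", err)
    print(agent.choose())                    # untried actions are explored first
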
Socialite
The Socialite demonstrator of the NECA project shows simulated social interaction among embodied conversational agents. It is available in two versions:
  • as a Mockup
  • as part of the student community der Spittelberg (log in as "testagentA", password "testagentA" to see the generated dialogues):

Synthesis of Body Emotional Gestures
A Java applet developed in the framework of the IST-INTERFACE project (IST-1999-10036). The objective is to display emotional body gestures with different levels of intensity. Intensity levels depend on the geographical region of the virtual character; e.g., southern people are supposed to be much more expressive than northern ones.

Tools To Simulate Affective Monologs or Dialogs by ECAs
The purpose of these tools is to simulate monologs or pre-compiled dialogs between two Embodied Animated Agents with various personalities and expressive capabilities, in various domains.

Tool to simulate cognitive activation of mixed emotions in artificial agents
A tool to simulate cognitive activation of mixed emotions in artificial agents and their time decay, with output in graphical or tabular form or as an utterance pronounced by an ECA.
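
A minimal sketch of such activation-plus-decay dynamics (the half-life and update rule here are placeholders, not the tool's actual model):

    # Sketch of mixed-emotion activation with exponential time decay.
    import math

    class EmotionState:
        def __init__(self, half_life_s: float = 10.0):
            self.intensities: dict[str, float] = {}
            self.decay = math.log(2) / half_life_s

        def activate(self, emotion: str, amount: float):
            """Cognitive appraisal adds intensity (capped at 1.0)."""
            self.intensities[emotion] = min(1.0, self.intensities.get(emotion, 0.0) + amount)

        def tick(self, dt_s: float):
            """Let all active emotions decay towards zero."""
            factor = math.exp(-self.decay * dt_s)
            self.intensities = {e: i * factor for e, i in self.intensities.items()
                                if i * factor > 1e-3}

    state = EmotionState()
    state.activate("joy", 0.8)
    state.activate("fear", 0.3)     # mixed emotions can coexist
    state.tick(10.0)                # after one half-life
    print(state.intensities)        # {'joy': 0.4, 'fear': 0.15} approximately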

Wizard of Oz studies with ECAs
A tool to design and perform Wizard of Oz studies with ECAs.

Demonstrators with contact information:

Affective Wave
A fuzzy logic system that estimates basic human emotions from physiological signals (see the sketch below).
  • Petar Goulev: pgoulev@imperial.ac.uk
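
A toy illustration of fuzzy estimation from a physiological signal (the membership functions below are invented for this sketch, not those of Affective Wave):

    # Toy fuzzy-logic estimate of arousal from heart rate.
    def triangular(x: float, a: float, b: float, c: float) -> float:
        """Triangular membership function peaking at b."""
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

    def arousal_from_heart_rate(bpm: float) -> dict:
        """Fuzzy membership of a heart-rate reading in three arousal classes."""
        return {
            "calm":    triangular(bpm, 40, 60, 80),
            "neutral": triangular(bpm, 60, 80, 100),
            "aroused": triangular(bpm, 80, 110, 160),
        }

    print(arousal_from_heart_rate(95))   # mostly 'aroused', some 'neutral'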


Expressive virtual character
Character with personalised posture and expressive animations.
  • Arjan Egges: egges@miralab.unige.ch

GALA
Emotion modeller and generator for ECA systems. Emotion detection in texts and dialogues.
  • László Laufer: llaufer@aitia.ai

GGSTYLE
Markup Language to generate multimodal presentations of an ECA according to personality, culture, etc.
  • Zsófia Ruttkay: zsofi@cs.utwente.nl

GRACE
A virtual environment based on a multiagent system for meeting groups. The behavior of the agent is based on emotional states.
  • Alexis Nédélec: nedelec@enib.fr

PROMOTER
A prototype of a persuasive system for monological interaction with human users.
  • Marco Guerini: guerini@itc.it

Robotic expressive head
Dynamic emotional expressions presented by a non-humanoid robotic head, with emotional patterns close to those validated by Ekman's FACS.
  • Jacqueline Nadel: jnadel@ext.jussieu.fr

Robots with various or no names
Various robots with emotion-oriented architectures for action selection and learning.
  • Lola Canamero: l.canamero@herts.ac.uk

Max - an Articulated Communicator
Max (derived from "Multimodal Assembly eXpert") is a virtual humanoid able to generate human-like multimodal utterances.


