
This page aims to list all research projects related to HUMAINE - EU projects, national projects, etc. If your project is not listed, you should log in and edit this page to add it!

EU Specific Targeted Research Projects in FP7

SERA - Social Engagement with Robots and Agents

The project SERA studies social interaction between users and a robotic frontend to a smart room in real-life situations. On this basis, it generates theoretical and methodological knowledge and develops a reference architecture for social engagement, implemented in a showcase.

EU Networks of Excellence in FP6

SIMILAR - The European taskforce creating human-machine interfaces SIMILAR to human-human communication

SIMILAR will federate European fundamental research in multimodal interfaces. This R&D effort may be seen as a 3-D space with three axes: theoretical frameworks, interaction paradigms and application domains. The partners' competences are spread inside this virtual cube and tend to fill the space. Many of the small cubes inside the big cube contain the 'Multimodal Grand Challenges'.
SIMILAR will realise an international and interdisciplinary fusion of those efforts to create a unique European virtual research centre on multimodal interaction.

Enactive Interfaces

Enactive knowledge is not simply a multisensory mediated one, but knowledge stored in the form of motor responses and acquired by the act of "doing".
It is a form of cognition inherently tied to actions, as in the handcrafter’s way of knowing; it is an intuitive non-symbolic form of learning.

EU Integrated Projects in FP6

AMI - Augmented Multi-party Interaction

AMI targets computer enhanced multi-modal interaction in the context of meetings. The project aims at substantially advancing the state-of-the-art, within important underpinning technologies (such as human-human communication modeling, speech recognition, computer vision, multimedia indexing and retrieval). It will also produce tools for off-line and on-line browsing of multi-modal meeting data, including meeting structure analysis and summarizing functions. The project also makes recorded and annotated multimodal meeting data widely available for the European research community, thereby contributing to the research infrastructure in the field.


COSY - Cognitive Systems for Cognitive Assistants

The COSY project is inspired by the FP6 objective: "To construct physically instantiated ... systems that can perceive, understand ... and interact with their environment, and evolve in order to achieve human-like performance in activities requiring context-(situation and task) specific knowledge". It is assumed that this is far beyond the current state of the art and will remain so for many years. However, a set of intermediate targets has been devised based on that vision. The aim of the project is to advance the science of cognitive systems through a multi-disciplinary investigation of requirements, design options and trade-offs for human-like, autonomous, integrated, physical (e.g. robot) systems, including requirements for architectures, for forms of representation, for perceptual mechanisms, and for learning, planning, reasoning, motivation, action, and communication. The results of the investigation will provide the basis for a succession of increasingly ambitious working robot systems to test and demonstrate the ideas. Devising demanding but achievable test scenarios, including scenarios in which a machine not only performs some task but shows that it understands what it has done, and why, is one of the challenges to be addressed in the project.


CHIL - Computers in the Human Interaction Loop

CHIL is an EC FP6 Integrated Project that develops and explores a fundamental shift in the way we use computers. In contrast to building machines that have to be operated directly and explicitly, the project aims to realize computer services that are delivered to humans in an implicit, indirect and unobtrusive way. The CHIL concept aims to introduce Computers into the Human Interaction Loop, rather than condemning a human to operate in a loop of computers. For a machine to engage in an unobtrusive manner and to serve what a human really needs requires robust, multimodal perceptual user interfaces capable of tracking, identifying, recognizing and understanding the role, purpose and content of human communication, activities, state and environment. It is no longer just about what one person said, but about technologies that model the full breadth, the Who, Where, What, Why and How, of human activities and communication.




ICEA - Integrating Cognition, Emotion and Autonomy


ICEA is a four-year integrated project on bio-inspired cognitive robotics and embodied cognition. The primary aim is to develop a cognitive systems architecture integrating cognitive, emotional and bioregulatory (self-maintenance) processes, based on the architecture and physiology of the mammalian brain. The twofold hypothesis behind this research is that:

  • the emotional and bioregulatory mechanisms that come with the organismic embodiment of living cognitive systems also play a crucial role in the constitution of their high-level cognitive processes, and
  • models of these mechanisms can be usefully integrated in artificial cognitive systems architectures, which will constitute a significant step towards truly autonomous robotic cognitive systems that reason and behave in accordance with energy and other self-preservation requirements. 



CALLAS - Conveying Affectiveness in Leading-Edge Living Adaptive Systems


The CALLAS Integrated Project aims to design and develop a multimodal architecture that includes emotional aspects, to support applications in the new media business scenario within an "ambient intelligence" paradigm.
The general vision underlying the CALLAS project is that new media can effectively enhance people's participation in content, media and social interaction. Where old media are static in the user experience, new media are a dynamic process that increases interaction and communication between users and technology. This dynamism demands strong attention to all interface and interaction aspects. CALLAS aims to analyse, understand and advance this participation, including all the emotional aspects relevant to human communication processes.
At the same time, the CALLAS consortium believes that one of the interesting factors in human emotional interaction is the space in which the interaction takes place. For this reason, the scenarios chosen for CALLAS cover different typologies of space: theatres, homes, squares, festivals, etc. The goal of CALLAS is to bridge the gap between the emerging capacity to convey emotional aspects within multimodal interaction and the growing expectations of people for more natural and pervasive interaction with digital media applications in intelligent adaptive spaces.


EU Specific Targeted Research Projects in FP6

Aubade - A Wearable EMG Augmentation System for Robust Emotional Understanding

AUBADE aims to implement an intelligent, multisensorial wearable system that can ubiquitously monitor and classify the emotional condition of patients or people under extreme stress, such as car racing drivers. AUBADE started in January 2004 and is scheduled to run for a total of 24 months, due for completion by the end of December 2005.

INTREPID - A Virtual Reality Intelligent Multi-sensor Wearable System for Phobias' Treatment

Intrepid aims at developing an intelligent multi-sensor wearable system for the treatment of phobias and situational anxiety. The system will incorporate emotional intelligence - via a biosensor fusion system able to sense the underlying phobic states - and a virtual environment that, based on the machine's intelligent decisions, will virtually expose the patient to situations that help them overcome their phobia. In addition, it will communicate with a healthcare professional's site to provide decision support concerning the patient's therapy.

RASCALLI - Responsive Artificial Situated Cognitive Agents Living and Learning on the Internet

Rascalli represents a growing class of cooperative agents that do not have a physical presence, but are nevertheless equipped with major ingredients of cognition, including situated correlates of physical embodiment, so as to become adaptive, cooperative and self-improving in a certain environment (the Internet) given certain tasks. Their task-based processing of Web content requires an action-based model of interpretative perception. Because of the size and importance of their memory, special attention is paid to the associative structuring of the acquired information based on interests and experience, and to models of an active, permanently structure-creating and restructuring memory. With Rascalli we aim at artificial agents that are able to combine human and computer skills in such a way that both kinds of abilities can be optimally employed for the benefit of the human user. ...

ARTTS - Action Recognition and Tracking based on Time-of-Flight Sensors

Based on a new type of award-winning sensor technology, the time-of-flight (TOF) camera, ARTTS will develop algorithms and prototype systems to solve open computer-vision problems and enable new applications that involve multimodal interfaces and the sensing of people and their actions. Unlike a conventional video camera, the TOF camera delivers not only an intensity image but also a range map that contains a distance measurement at each pixel, obtained by measuring the time required by light to reach the object and return to the camera (time-of-flight principle).
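
As a rough illustration of the time-of-flight principle described above (a minimal sketch, not ARTTS project code), the following Python snippet converts a hypothetical per-pixel round-trip time measurement into a range map:

import numpy as np

SPEED_OF_LIGHT = 299_792_458.0  # metres per second

def range_map_from_round_trip_times(round_trip_times_s):
    """Convert per-pixel round-trip times (in seconds) into distances (in metres).

    Light travels to the object and back, so the one-way distance is half
    of the total distance covered during the measured round-trip time.
    """
    return SPEED_OF_LIGHT * round_trip_times_s / 2.0

# Example: round-trip times of about 10 nanoseconds correspond to objects
# roughly 1.5 metres from the camera.
times = np.array([[10e-9, 11e-9],
                  [9e-9, 12e-9]])
print(range_map_from_round_trip_times(times))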

Intelligent recognition and tracking solutions developed for TOF cameras will provide a crucial enabling technology for multimodal interfaces. The current state-of-the-art 3D TOF systems will be considerably improved so as to increase the depth resolution and quality of the signal and to reduce size, power consumption, and cost. A prototype of the improved system will be developed and released within the ARTTS project.

Advanced signal processing methods will increase the quality of the signal and derive the features needed for the subsequent development of algorithms that can track people and recognize their activity. Software toolboxes for object tracking and action recognition will be developed that can serve as a basis for various applications.

EU Coordination Actions in FP6

S2S2 - Sound to Sense, Sense to Sound

S2S2 (an acronym for «Sound to Sense, Sense to Sound», pronounced "s two - s square") is a FET-Open Coordination Action (contract number IST-2004-03773), running from June 2004 to May 2007. S2S2 has the following overall objective: to bring together state-of-the-art research in the sound domain and the proper combination of human sciences, technological research and neuropsychological sciences that relate to sound and sense. Reaching this objective can foster a new generation of research topics such as higher-level sound analysis, so-called "engaging" synthesis, an integrated sound-music research field, etc.


ENGAGE - Engineering Emotional Design

ENGAGE brings together industry, research and design in the field of affective engineering, to create a knowledge community and make the best use of both current and future knowledge. The ENGAGE consortium consists of 21 project partners from 9 European countries, working together to ensure all the ENGAGE objectives are successfully reached. To create a valuable knowledge community, everybody interested in Affective Design is invited to become a member. ENGAGE is funded by the European 6th Framework Programme. Read more at www.engage-design.org.


EU SSA (Specific Support Action) Projects in FP6

SWAMI - Safeguards in a World of Ambient Intelligence

SWAMI aims to identify and analyse the social, economic, legal, technological and ethical issues related to identity, privacy and security in the forecasted but not yet deployed Ambient Intelligence (AmI) environment.

Start Date: 2005-02-01; Duration: 18 months; End Date: 2006-07-31

EU CRAFT (Co-operative Research Action for Technology) Projects in FP6

MYSELF - Multimodal elearning System based on Simulations, Role-Playing, Automatic Coaching and Voice Recognition interaction for Affective Profiling

MYSELF is a project funded by the European Commission under the Co-operative Research (CRAFT) Programme (SME-2003-1-508258). The main aim of the project is to expand the potential of elearning through learning by doing (experiential and active learning), role playing via the web, situated collaborative learning, cognitive and affective profiling, mobile learning and multimodal human-machine interaction. To reach these goals, the project will design and develop two different tools: 1) the "MySelf platform", a web-based platform endowed with affective computing capabilities for collaborative learning simulations; this will be a flexible and reusable tool on which different target simulations can be implemented; 2) the "MySelf Sim modules", which will consist of interactive simulations for the training of communication and emotional skills and competences in medical and banking professional settings.


 

EU Projects in FP5

ERMIS - Emotionally Rich Man-Machine Interaction Systems

Scope: the development of a prototype system for human-computer interaction that can interpret its users' attitude or emotional state, e.g., activation/interest, boredom, and anger, in terms of their speech and/or their facial gestures and expressions.

Adopted technologies: linguistic and paralinguistic speech analysis and robust speech recognition, facial expression analysis, and interpretation of the user's emotional state using hybrid neurofuzzy techniques, in accordance with the MPEG-4 standard.
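
As a rough illustration only (a minimal sketch of a plain fuzzy-rule classifier, not the ERMIS system or its hybrid neurofuzzy models; feature names and thresholds are hypothetical), mapping two normalised speech features onto the emotional states mentioned above could look like this in Python:

def triangular(x, low, peak, high):
    """Triangular fuzzy membership function."""
    if x <= low or x >= high:
        return 0.0
    if x <= peak:
        return (x - low) / (peak - low)
    return (high - x) / (high - peak)

def classify_emotion(pitch_variation, energy):
    """Return fuzzy scores for a few emotional states from speech features
    assumed to be normalised to roughly the 0..1 range."""
    high_pitch = triangular(pitch_variation, 0.4, 1.0, 1.6)
    low_pitch = triangular(pitch_variation, -0.6, 0.0, 0.6)
    high_energy = triangular(energy, 0.4, 1.0, 1.6)
    low_energy = triangular(energy, -0.6, 0.0, 0.6)
    return {
        "activation/interest or anger": min(high_pitch, high_energy),
        "boredom": min(low_pitch, low_energy),
    }

# Example: flat, quiet speech scores highest for boredom.
print(classify_emotion(pitch_variation=0.1, energy=0.2))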

NECA - Net Environment for Embodied, Emotional Conversational Agents

NECA is a project funded by the European Commission under the Information Society Technologies Programme (IST-2000-28580). NECA promotes the concept of multi-modal communication with animated synthetic personalities. A particular focus in the project lies on communication between animated characters that exhibit credible personality traits and affective behavior. The key challenge of the project is the fruitful combination of different research strands including situation-based generation of natural language and speech, semiotics of non-verbal expression in situated social communication, and the modelling of emotions and personality.


PF-STAR - Preparing Future Multisensorial Interaction Research

PF-STAR intends to help establish future activities in the field of multisensorial and multilingual communication (interface technologies) on a firmer basis by providing technological baselines, comparative evaluations, and assessments of the prospects of core technologies on which future research and development efforts can build.
To this end, the project will address three crucial areas: technologies for speech-to-speech translation, the detection and expression of emotional states, and core speech technologies for children.
For each of them, promising technologies/approaches will be selected, further developed and aligned towards common baselines. The results will be assessed and evaluated with respect to both their performance and future prospects.

MEGA - Multisensory Expressive Gesture Applications (November 2000 - October 2003)

The MEGA project is centered on the modeling and communication of expressive and emotional content in non-verbal interaction through multi-sensory interfaces. In particular, the project focuses on music performance and full-body movement as primary conveyors of expressive and emotional content. Real-time, quantitative analysis and evaluation of the expressive content of different performances of the same musical score, or of different performances of the same dance fragment, are examples of research outputs from the project. The main research issues are analysis of expressive gestures (i.e., how to recognize the expressive content conveyed through full-body movement and musical gestures), synthesis of expressive gestures (i.e., how to communicate expressive content through computer-generated expressive gestures, such as music performances, movement of virtual as well as real robotic characters, and expressive use of visual media), mapping strategies (i.e., how to use data coming from analysis for real-time generation and processing of audio and visual content), and cross-modal integration (i.e., how to combine data coming from different channels in order to analyze expressive gestures).
A main output of the project is the MEGA System Environment, an environment for multimedia and performing arts applications in which different software modules for real-time expressive gesture analysis and synthesis are interconnected. Research results have been used in a number of artistic performances and multimedia events.

COST Actions

Cost287-ConGAS  - Gesture-Controlled Audio Systems

The Cost287-ConGAS Action intends to contribute to the advancement and development of musical gesture data analysis and to the capture of aspects connected to the control of digital sound and music processing. Cost287-ConGAS is a COST-TIST action.


COST 2102 - Cross-Modal Analysis of Verbal and Non-verbal Communication

The main objective of the Action is to develop an advanced acoustical, perceptual and psychological analysis of verbal and non-verbal communication signals originating in spontaneous face-to-face interaction, in order to identify algorithms and automatic procedures capable of identifying human emotional states.


National government-funded projects

Germany

Virtual Human
Combining research in computer graphics and multimodal user interfaces, this project develops virtual characters as personal dialogue partners. This can lead to a new quality in interactive systems.
