Project Duration: 2004 - 2006
The goal of the VirtualHuman project at the German Research Center for Artificial Intelligence (DFKI) was to develop solutions for efficient dialogue between humans and virtual characters. Experts from research and development worked on multimodal virtual agents that exhibit believable and emotional dialogue behaviour in voice, facial expression and gesture. The project was funded by the German Federal Ministry of Education and Research and conducted in cooperation with the Fraunhofer Institute for Intelligent Analysis and Information Systems, the Fraunhofer Institute for Graphical Data Processing, Charamel GmbH, New Media GmbH, OLTO VR Systeme GmbH and the Center for Graphical Data Processing e.V. Our work covered:
- Implementing several Swing components for easy configuration of the dialogue system and the virtual characters.
- Integrating the JSHOP planner into the dialogue system so that the behaviour of the characters can be planned automatically.
- Implementing an assertional box (ABox) for the virtual characters, which serves as the brain (memory) of the characters.
- Integrating OWL and RDF ontologies that describe the virtual world and the knowledge of the virtual characters.
- Automatically extending the characters' knowledge about the virtual environment.
- Autonomous, context-based reactions of the virtual characters, e.g. answering questions or pursuing a goal.
- Multimodal (speech, gesture, facial expression) interaction between the user and the virtual characters.
- Integrating a graphics engine to control and visualize the virtual characters.
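To illustrate the ABox idea mentioned above, here is a minimal sketch of an assertional memory that stores subject-predicate-object facts and answers triple-pattern queries, in the style of RDF. The class and method names are invented for this sketch and are not the project's actual API.

```java
import java.util.ArrayList;
import java.util.List;

// Minimal sketch of an assertional box (ABox) serving as a character's
// memory: it stores subject-predicate-object assertions and answers
// simple pattern queries. Illustrative only, not the project's real code.
public class ABox {
    private final List<String[]> facts = new ArrayList<>();

    // Add one assertion to the character's memory.
    public void assertFact(String subject, String predicate, String object) {
        facts.add(new String[] { subject, predicate, object });
    }

    // Query with null acting as a wildcard, as in RDF triple patterns.
    public List<String[]> query(String s, String p, String o) {
        List<String[]> result = new ArrayList<>();
        for (String[] f : facts) {
            if ((s == null || s.equals(f[0]))
                    && (p == null || p.equals(f[1]))
                    && (o == null || o.equals(f[2]))) {
                result.add(f);
            }
        }
        return result;
    }

    public static void main(String[] args) {
        ABox memory = new ABox();
        memory.assertFact("ball", "hasColor", "red");
        memory.assertFact("ball", "isOn", "table");
        // The character can now answer "Where is the ball?"
        for (String[] f : memory.query("ball", "isOn", null)) {
            System.out.println(f[0] + " " + f[1] + " " + f[2]);
        }
    }
}
```

A real ABox would additionally consult the TBox (the ontology's class and property definitions, here the OWL/RDF ontologies) to answer queries that require inference, not just stored facts.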
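The planning bullet can be sketched in the same spirit: JSHOP is a hierarchical task network (HTN) planner, which expands compound tasks via methods until only primitive actions remain. The task names and method table below are invented for illustration; JSHOP's actual input language and semantics (preconditions, state, backtracking) are richer.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Illustrative HTN-style decomposition, loosely in the spirit of the
// JSHOP planner: compound tasks are expanded via methods until only
// primitive actions remain. Names here are invented for this sketch.
public class BehaviourPlanner {
    // Maps a compound task to the ordered subtasks of its (single) method.
    private final Map<String, List<String>> methods = new HashMap<>();

    public void addMethod(String task, String... subtasks) {
        methods.put(task, Arrays.asList(subtasks));
    }

    // Depth-first expansion; tasks without a method are primitive actions.
    public List<String> plan(String task) {
        List<String> actions = new ArrayList<>();
        if (methods.containsKey(task)) {
            for (String sub : methods.get(task)) {
                actions.addAll(plan(sub));
            }
        } else {
            actions.add(task);
        }
        return actions;
    }

    public static void main(String[] args) {
        BehaviourPlanner p = new BehaviourPlanner();
        p.addMethod("greetUser", "turnToUser", "smile", "sayHello");
        p.addMethod("sayHello", "synthesizeSpeech", "playGesture");
        System.out.println(p.plan("greetUser"));
        // -> [turnToUser, smile, synthesizeSpeech, playGesture]
    }
}
```

The resulting primitive actions would then be handed to the animation and speech layers, i.e. the graphics engine and speech synthesis mentioned above.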
Used Languages & Technologies: