Multimodal Behavior Realization for Embodied Conversational Agents (CROSBI ID 160160)
Journal article | original scientific paper | international peer review
Authorship data
Čereković, Aleksandra ; Pandžić, Igor
English
Multimodal Behavior Realization for Embodied Conversational Agents
Applications featuring intelligent conversational virtual humans, called Embodied Conversational Agents (ECAs), seek to bring human-like abilities to machines and establish natural human-computer interaction. In this paper we discuss the realization of ECA multimodal behaviors, which include speech and nonverbal behaviors. We present RealActor, an open-source, multi-platform animation system for real-time multimodal behavior realization for ECAs. The system employs a novel solution for synchronizing gestures and speech using neural networks, as well as an adaptive facial animation model based on the Facial Action Coding System (FACS) to synthesize facial expressions. Our aim is to provide a generic animation system that helps researchers create believable and expressive ECAs.
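The FACS-based approach mentioned in the abstract decomposes a facial expression into weighted action units (AUs), whose per-vertex displacements are blended into a final face pose. The following is a minimal, hypothetical sketch of such blending; the names (`blend_expression`, the AU displacement tables) are illustrative assumptions, not RealActor's actual API or data.

```python
# Hypothetical sketch of FACS-style expression blending:
# each action unit (AU) contributes vertex displacements scaled
# by its activation weight in [0, 1]. Not RealActor's real API.

def blend_expression(au_weights, au_displacements):
    """Sum per-AU vertex displacements, each scaled by its AU weight."""
    pose = {}
    for au, weight in au_weights.items():
        for vertex, (dx, dy, dz) in au_displacements[au].items():
            px, py, pz = pose.get(vertex, (0.0, 0.0, 0.0))
            pose[vertex] = (px + weight * dx,
                            py + weight * dy,
                            pz + weight * dz)
    return pose

# Illustrative data: AU6 (cheek raiser) + AU12 (lip corner puller),
# a combination commonly associated with a smile.
aus = {"AU6": 0.7, "AU12": 0.9}
disp = {
    "AU6":  {"cheek_l": (0.0, 0.2, 0.0)},
    "AU12": {"lip_corner_l": (0.1, 0.3, 0.0), "cheek_l": (0.0, 0.1, 0.0)},
}
pose = blend_expression(aus, disp)
```

Because AUs combine additively here, a vertex influenced by several AUs (like `cheek_l` above) accumulates contributions from each, which is one simple way an adaptive model can compose expressions from a small AU basis.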
Embodied Conversational Agents; behavior realization; multimodal synchrony
Publication data
Volume (issue): 54 (1)
Year: 2011
Pages: 149-164
Status: published
ISSN: 1380-7501
DOI: 10.1007/s11042-010-0530-2