Greta Architecture
The Greta platform is a fully SAIBA-compliant system for the real-time generation and animation of an ECA's verbal and nonverbal behaviours. SAIBA adopts the strategy of specifying an agent's communicative function and its communicative behaviour through separate interfaces at two levels of abstraction: the functional level determines the agent's intent, that is, what it wants to communicate, while the behavioural level determines how the agent communicates by instantiating the intent as a particular multi-modal realization. This separation can be seen as two independent components, one representing the mind of the agent and the other its body.
The three main architectural components are: (1) the Intent Planner, which produces the communicative intentions and handles the emotional state of the agent; (2) the Behaviour Planner, which transforms the communicative intents it receives into multi-modal signals; and (3) the Behaviour Realizer, which produces the movements and rotations for the joints of the ECA.
Below is an image that gives an overview of the Greta architecture. In this architecture, the three main modules of the system, the Intent Planner, Behavior Planner and Behavior Realizer, are shared across agents; only the Animation Generator module is specific to an agent.
The Greta platform is packaged as a Java program, Modular.jar, which lets you assemble an agent without coding, using graphical modules instead. The modular architecture of the platform supports interconnection with external tools, such as the CereProc text-to-speech engine, enhancing an agent's detection and synthesis capabilities.
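The three-stage pipeline above can be sketched in Java. This is a minimal, hypothetical illustration of the SAIBA data flow (intent, then signals, then animation); the class and method names are illustrative and are not Greta's actual API.

```java
import java.util.List;

public class SaibaPipelineSketch {

    // Functional level: what the agent wants to communicate (FML-like intent).
    record Intent(String communicativeFunction) {}

    // Behavioral level: how the intent is realized (BML-like signal).
    record Signal(String modality, String value) {}

    // (1) Intent Planner: produces a communicative intention from the
    // agent's emotional state (hypothetical logic).
    static Intent planIntent(String emotionalState) {
        return new Intent("greet-with-" + emotionalState);
    }

    // (2) Behaviour Planner: transforms the intent into multi-modal signals.
    static List<Signal> planBehavior(Intent intent) {
        return List.of(
            new Signal("face", "smile"),
            new Signal("gesture", "wave"),
            new Signal("speech", intent.communicativeFunction()));
    }

    // (3) Behaviour Realizer: turns the signals into a (stand-in) animation
    // description; the real system would output joint movements and rotations.
    static String realize(List<Signal> signals) {
        StringBuilder animation = new StringBuilder();
        for (Signal s : signals) {
            animation.append(s.modality()).append(':')
                     .append(s.value()).append(';');
        }
        return animation.toString();
    }

    public static void main(String[] args) {
        Intent intent = planIntent("joy");
        List<Signal> signals = planBehavior(intent);
        System.out.println(realize(signals));
    }
}
```

Because each stage only consumes the previous stage's output, the mind (Intent Planner) and the body (Behaviour Planner and Realizer) can be swapped independently, which is the point of the SAIBA separation.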
Advanced
- Generating New Facial Expressions
- Generating New Gestures
- Generating New Hand Configurations
- Torso Editor Interface
- Creating an Instance for Interaction
- Creating a New Virtual Character
- Creating a Greta Module in Java
- Modular Application
- Basic Configuration
- Signal
- Feedbacks
- From Text to FML
- Expressivity Parameters
- Text-to-Speech (TTS)
- AUs from external sources
- Large language model (LLM)
- Automatic speech recognition (ASR)
- Extensions
- Integration examples