Greta Architecture

The Greta platform is a fully SAIBA-compliant system for the real-time generation and animation of an ECA's verbal and nonverbal behaviours. SAIBA specifies an agent's communicative function and its communicative behaviour through separate interfaces, at two levels of abstraction: the functional level determines the agent's intent, that is, what it wants to communicate, while the behavioural level determines how the agent communicates it, by instantiating the intent as a particular multi-modal realization. This separation can be seen as two independent components, one representing the mind of the agent and the other its body.

The three main architectural components are: (1) the Intent Planner, which produces the communicative intentions and handles the emotional state of the agent; (2) the Behaviour Planner, which transforms the communicative intentions it receives as input into multi-modal signals; and (3) the Behaviour Realizer, which produces the movements and rotations of the ECA's joints.
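
To make the data flow between these stages concrete, the sketch below shows one possible way the pipeline hands data from one stage to the next, and how the "mind" (intent planning) is kept separate from the "body" (behaviour planning and realization). The interface and class names are purely illustrative and are not Greta's actual Java API.

```java
import java.util.List;

// Illustrative sketch of the SAIBA pipeline; these types are hypothetical,
// not the real classes found in the Greta code base.
interface IntentPlanner {
    // Produces communicative intentions, taking the agent's emotional state into account.
    List<Intention> planIntentions(AgentState state);
}

interface BehaviourPlanner {
    // Maps communicative intentions to concrete multi-modal signals
    // (speech, gestures, facial expressions, gaze, ...).
    List<Signal> planSignals(List<Intention> intentions);
}

interface BehaviourRealizer {
    // Turns abstract signals into movements and rotations for the ECA's joints.
    Animation realize(List<Signal> signals);
}

final class SaibaPipeline {
    private final IntentPlanner intentPlanner;
    private final BehaviourPlanner behaviourPlanner;
    private final BehaviourRealizer behaviourRealizer;

    SaibaPipeline(IntentPlanner ip, BehaviourPlanner bp, BehaviourRealizer br) {
        this.intentPlanner = ip;
        this.behaviourPlanner = bp;
        this.behaviourRealizer = br;
    }

    // The "mind" and the "body" only communicate through these two hand-over points.
    Animation run(AgentState state) {
        List<Intention> intentions = intentPlanner.planIntentions(state);
        List<Signal> signals = behaviourPlanner.planSignals(intentions);
        return behaviourRealizer.realize(signals);
    }
}

// Opaque placeholder types, standing in for Greta's communicative intentions,
// multi-modal signals, agent state and animation data.
class Intention {}
class Signal {}
class AgentState {}
class Animation {}
```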

The image below gives an overview of the Greta architecture. In this architecture, the three main modules of the system, the Intent Planner, Behaviour Planner and Behaviour Realizer, are shared by all agents; only the Animation Generator module is specific to a given agent.

The Greta platform is packaged as a Java program, Modular.jar, which lets users assemble an agent without writing code, simply by connecting graphical modules. The modular architecture of the platform also supports interconnection with external tools, such as the CereProc text-to-speech engine, enhancing an agent's detection and synthesis capabilities.
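
Since Modular.jar is an ordinary executable Java archive, it can presumably be started like any other jar, e.g. with `java -jar Modular.jar`; the required working directory and any platform-specific startup scripts are described in the Quick start page of this wiki.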
