LLM DeepASR integration
A full demo configuration is available at Greta - Demo - Mistral - MM - ASR - Cereproc.yaml.
This demo allows communication with Mistral through Greta: the ASR module automatically listens and sends the request to the Mistral module (see below). The demo can also be used with automatic gestures by enabling either NVBG or Mistral.
- A DeepASR module can be linked to an LLM module; in this case, the transcribed speech is automatically sent to the LLM module and processed by it.
- A DeepASR module can be linked to a Feedback module; in this case, if automatic speech is enabled, the DeepASR module automatically listens until Greta starts talking and starts listening again when Greta is done talking (see the sketch after this list).
- The Feedback module can be replaced by any turn-taking module.
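The auto-listen behaviour can be pictured as a small loop that pauses the ASR while the agent is speaking and forwards each transcript to the LLM module. The Java sketch below only illustrates that logic; the class, callback, and parameter names (`AutoListenLoop`, `onAgentStartedSpeaking`, `listenOnce`, `llmRequest`, ...) are hypothetical placeholders, not Greta's actual modules or API.

```java
// Minimal sketch of the auto-listen / turn-taking logic described above.
// All names here are hypothetical placeholders, not Greta's actual API.
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.function.Function;
import java.util.function.Supplier;

public class AutoListenLoop {

    private final AtomicBoolean agentSpeaking = new AtomicBoolean(false);
    private final Object turnLock = new Object();

    /** Called by a (hypothetical) Feedback / turn-taking module. */
    public void onAgentStartedSpeaking() {
        agentSpeaking.set(true);                 // pause listening while the agent talks
    }

    public void onAgentStoppedSpeaking() {
        agentSpeaking.set(false);                // resume listening once the agent is done
        synchronized (turnLock) { turnLock.notifyAll(); }
    }

    /**
     * ASR loop: listen only while the agent is silent, then forward the
     * transcript to the LLM module (e.g. Mistral).
     */
    public void run(Supplier<String> listenOnce,          // DeepASR: one transcribed utterance
                    Function<String, String> llmRequest)  // LLM module: prompt -> reply
            throws InterruptedException {
        while (!Thread.currentThread().isInterrupted()) {
            synchronized (turnLock) {
                while (agentSpeaking.get()) {
                    turnLock.wait();             // block while the agent has the turn
                }
            }
            String transcript = listenOnce.get();          // transcribe the user's speech
            String reply = llmRequest.apply(transcript);   // send the request to the LLM module
            System.out.println("LLM reply: " + reply);     // in Greta this would feed speech/gesture generation
        }
    }
}
```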