Greta Furhat Interface
This page explains:
- how to integrate the Greta Furhat Interface module into an existing configuration of the Greta platform
- how to run Kotlin-coded Furhat skills from IntelliJ
- how to run the skills on the physical Furhat robot
Assuming you can run the Greta platform correctly, make sure the following requirements are satisfied:
- Have the Furhat SDK running if you are going to use the virtual Furhat
- Have Java 8 JDK or OpenJDK 8 installed on your system
- Have a configured microphone
- Have IntelliJ IDEA installed and running (the Community Edition is enough)
Note that you can start with either Step 1 or Step 2, but since the ActiveMQ servers live in Greta, it is better to start with Step 1. Once you begin a step, follow its instructions in order.
Step 1: Set up the Greta side
1. Run the main project Modular.jar.
2. In the Greta configurations folder, choose a configuration that contains the module "Face KeyFrames Performer" (or simply "Face"), e.g. Greta-Demo configuration - CereProc.xml.
3. In Modular.jar, add the Greta Furhat Interface via Add -> Network Connections -> Greta Furhat Interface -> GretaFurhatAUSenderGui.
4. Add the following connectors:
4.a. "Face" (or any Face keyframes performer) -> GretaFurhatAUSenderGui
4.b. "Face Viewer Library" -> GretaFurhatAUSenderGui. This connection is necessary only if you are going to use the AU sliders to modify the character's facial expression.
5. Once the module is added, click on it to open its user interface, then select "Activate" and "Send" to send the AUs on the ActiveMQ topic. Do not change the default server values! If you do, make sure to update them in the Furhat skill's Kotlin code as well (see the sketch after this list).
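On the skill side, the AUs are read from this ActiveMQ topic. As a rough illustration of what the skill's Kotlin code does, here is a minimal JMS consumer sketch; the broker URL (tcp://localhost:61616, ActiveMQ's usual default) and the topic name greta.furhat.AUs are assumptions, so use the values shown in GretaFurhatAUSenderGui and in the GretaMimic sources.

```kotlin
import org.apache.activemq.ActiveMQConnectionFactory
import javax.jms.Session
import javax.jms.TextMessage

fun main() {
    // Broker URL: ActiveMQ's default; must match the values shown in
    // GretaFurhatAUSenderGui (assumption, check the GUI).
    val factory = ActiveMQConnectionFactory("tcp://localhost:61616")
    val connection = factory.createConnection().apply { start() }
    val session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE)

    // Topic name is hypothetical; the real one is defined in the skill code.
    val consumer = session.createConsumer(session.createTopic("greta.furhat.AUs"))
    consumer.setMessageListener { msg ->
        // Each message is expected to carry one frame of Action Unit values.
        if (msg is TextMessage) println("AU frame: ${msg.text}")
    }
    readLine() // keep the process alive while listening
}
```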
Save the configuration for future use of the module.
An example configuration where "Greta Furhat Interface" is used can be found in the screenshot section.
Note: In the Greta Furhat Interface, the head rotation angles and the speech text are sent directly on their respective topics; nothing extra needs to be done. Their code is integrated in MPEG4Agent.java (auxiliary/OgrePlayer) and CereProcTTS.java (auxiliary/TTS/CereProc).
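For reference, publishing on such a topic boils down to a plain JMS producer, which is roughly what the Java code in those two classes does. A minimal Kotlin sketch, with the broker URL and an illustrative topic name as assumptions:

```kotlin
import org.apache.activemq.ActiveMQConnectionFactory
import javax.jms.Session

// Publish one text payload on an ActiveMQ topic, e.g. a speech string.
// Call as: publish("greta.furhat.speech", "Hello") -- topic name illustrative.
fun publish(topicName: String, text: String) {
    val factory = ActiveMQConnectionFactory("tcp://localhost:61616")
    val connection = factory.createConnection()
    try {
        connection.start()
        val session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE)
        val producer = session.createProducer(session.createTopic(topicName))
        producer.send(session.createTextMessage(text))
    } finally {
        connection.close()
    }
}
```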
Step 2: Import the skill in IntelliJ
Open IntelliJ IDEA on the same computer as the Greta platform (necessary for the local ActiveMQ servers to be reachable) and import the skill via File -> New -> Project from Existing Sources -> Import project from external model (select "Gradle" as the build system) -> Create.
Note: the GretaMimic skill is located at bin/Common/Data/GretaFurhatInterface
Create a run configuration
Normally, the run configuration is reconstructed automatically. If no configuration is found (or the skill cannot connect for some reason), create one with the following procedure:
- click on "current file" next to run -> Edit configuration
- Add new configuration -> Kotlin
- refer to image below (in screenshot section) to fill the parts (replace IP address with your robot IP address) And run the Main.kt to run the skill on robot (make sure Step1.5. is OK, all boxes selected)
Step 3: Run Main.kt
3.a. If you are using the SDK (virtual Furhat), simply click Run (make sure Step 1.5 is OK, with all boxes selected).
3.b. For the physical Furhat robot:
- Make sure the robot is connected to the same network as the Greta platform
- Open "Edit Run Configurations" in IntelliJ and copy the previously created run configuration (so that you have two configurations, one for the SDK and one for the robot)
- Add the JVM argument -Dfurhatos.skills.brokeraddress= with the IP of your robot, as in the example after this list
- Run Main.kt
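For example, if your robot's IP address were 192.168.1.23 (an illustrative address, replace it with your own), the VM options field of the robot run configuration would contain:

```
-Dfurhatos.skills.brokeraddress=192.168.1.23
```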
Finally, to run the whole pipeline:
- Launch the saved configuration with the module GretaFurhatInterfaceGui
- Select both "Activate" and "Send"
- Run the GretaMimic skill in IntelliJ
For more details, please see the source code of:
- the GretaMimic skill at GretaFurhatInterface/GretaMimic
- the Greta Furhat Interface at auxiliary/GretaFurhatInterface
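To give an idea of the shape of that code, below is a minimal Furhat skill skeleton in the style of the standard SDK template; the class and state names here are illustrative, and the real logic lives in the GretaMimic sources listed above.

```kotlin
import furhatos.flow.kotlin.*
import furhatos.skills.Skill

// Illustrative entry state: the real GretaMimic states listen to the
// ActiveMQ topics and map incoming AU frames onto the robot's face.
val Idle: State = state {
    onEntry {
        furhat.say("Connected to Greta") // placeholder utterance
    }
}

class GretaMimicSkill : Skill() {
    override fun start() {
        Flow().run(Idle)
    }
}

fun main(args: Array<String>) {
    Skill.main(args)
}
```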