GretaUnity
This page contains the instructions to set up and run the Unity3D project GretaUnity for development.
The GretaUnity project in Unity is connected to Greta via Thrift.
The BAP and FAP animation parameters produced in Greta are sent to Unity, where a script named CharacterAnimation.cs attached to the character animates the joints accordingly.
This is a simple starting point for future developments of the Greta-Unity integration. See below for a list of limitations that need to be addressed.
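As a rough illustration of what happens on the Unity side, the sketch below shows how a flat frame of BAP values could be mapped onto named joints, which is conceptually what CharacterAnimation.cs does for each frame received via Thrift. The index-to-joint mapping here is a hypothetical placeholder; the real indices are defined by the MPEG-4 BAP specification and Greta's Thrift IDL.

```python
# Hypothetical mapping from BAP index to a joint name. The real indices
# come from the MPEG-4 BAP specification; these three are placeholders.
BAP_TO_JOINT = {
    1: "l_hip",
    2: "r_hip",
    3: "l_knee",
}

def frame_to_joint_rotations(frame_values):
    """Turn a flat list of BAP values (one per index, starting at 1)
    into a {joint_name: value} dict, skipping unmapped indices."""
    rotations = {}
    for index, value in enumerate(frame_values, start=1):
        joint = BAP_TO_JOINT.get(index)
        if joint is not None:
            rotations[joint] = value
    return rotations

if __name__ == "__main__":
    print(frame_to_joint_rotations([10, 20, 30, 40]))
    # → {'l_hip': 10, 'r_hip': 20, 'l_knee': 30}
```

In the actual C# script, each mapped value drives the local rotation of the corresponding joint Transform in the character's rig.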
- Unity3D (version 4.6; currently not working with version 5): Unity Download
- GretaUnity https://github.com/isir/gretaUnity/
- MaryTTS
- Greta
Make sure that you can run Java programs (needed for running Greta) and that your system "PATH" variable includes the Java paths (so that Java can be run from the console).
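A quick way to verify the PATH requirement above is to check whether the commands resolve the way your console would. This small helper is just a convenience sketch, not part of GretaUnity:

```python
import shutil

def on_path(command):
    """Return True if `command` resolves via the system PATH,
    mirroring what the shell does when you type it in a console."""
    return shutil.which(command) is not None

if __name__ == "__main__":
    # Greta needs a working Java runtime reachable from the console.
    print("java found:", on_path("java"))
```

If this prints `java found: False`, fix your PATH before continuing with the setup.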
- Install Unity3D
- Download the unity project from our repository at the following link: https://github.com/isir/gretaUnity/
- Open the downloaded project with Unity (in the Unity dialog select the folder where you have downloaded the project)
You only need to run Modular and open the configuration file located at: ``/bin/GretaUnity-Simple.xml``
To run the demo, follow these steps in this exact order:
- Run MaryTTS: /bin/marytts-server.bat (if this is your first time using MaryTTS, see the wiki page Installing the speech synthesizer)
- Launch Modular (with the given configuration for Unity)
- Open and play the project in Unity3D.
- Add the character to the Unity scene by dragging and dropping it from the folder Asset/Prefabs/Character.
- Once the character is added, look at the Inspector on the right (click on the character in the scene if you don't see anything) and click Add_component/Script/CharacterAnimation_Single_Autodesk. A new panel will appear in the Inspector that lets you check the Thrift port numbers.
- Check that the Thrift port numbers for commandReceiver, audio, FAP and BAP are the same in both the Modular configuration you opened before and Unity. If not, you can change them in either Unity or Modular.
- Once the character is in the Unity scene and the Thrift port numbers match, click Play in Unity, send an FML or BML file to Greta, and watch the agent move.
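The port-matching step above boils down to comparing two sets of channel/port assignments. The sketch below illustrates that check; the channel names come from the text, but the port numbers and the helper itself are placeholders, not values taken from the actual configuration files:

```python
# Placeholder port assignments; the real values come from the Modular
# configuration file and the Unity Inspector panel.
MODULAR_PORTS = {"commandReceiver": 9090, "audio": 9091, "FAP": 9092, "BAP": 9093}
UNITY_PORTS   = {"commandReceiver": 9090, "audio": 9091, "FAP": 9092, "BAP": 9094}

def mismatched_ports(a, b):
    """Return the channel names whose port numbers differ
    between the two configurations."""
    return sorted(name for name in a if b.get(name) != a[name])

if __name__ == "__main__":
    print(mismatched_ports(MODULAR_PORTS, UNITY_PORTS))  # → ['BAP']
```

Any channel reported here must be aligned, in either Unity or Modular, before clicking Play.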
- WASD/Arrows: Movement
- Mouse: look around
- Q or E: Climb or Drop Camera
- Shift / CTRL: Move slower / faster
- End: Toggle cursor locking to screen
- ESC: quit (only when running the compiled version)
- R: reset camera position
Using the FMLSender in Modular you can play the following demo FML files located at ``/bin/Examples/DemoEN/``:
- 1-Welcome.xml
- 2-SeeYou.xml
- Idle movements generated in Greta are not blended with gestures; currently the character uses the Unity/Mecanim animation for this, and the legs are frozen (i.e. BAPs received from Greta are ignored and the idle animation is displayed instead).
- Speech is synchronized with lip movement, but the audio needs to be pre-created using Modular, and the corresponding audio file must be stored in the Unity3D project folder at /Assets/Resources. This means the character's voice is not streamed in real time via Thrift (as the animation parameters are); instead it must be pre-recorded, and when an FML file is sent to Unity an audio file with a matching name must be found.
- The only available characters are Camille and Alice, and currently only Camille is included in the assets. If a new character is needed, its rig must be set up to work with the CharacterAnimation.cs script so that the animation parameters received via Thrift are performed correctly.
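The name-matching rule for pre-recorded audio described above can be sketched as follows. The exact naming convention (same stem as the FML file, .wav extension) and the helper names are assumptions for illustration, not the actual lookup code:

```python
from pathlib import Path

def expected_audio_name(fml_name):
    """Name the pre-recorded audio is looked up under (assumed
    convention: same stem as the FML file, .wav extension)."""
    return Path(fml_name).stem + ".wav"

def find_audio(fml_name, resources_dir="Assets/Resources"):
    """Return the audio path for `fml_name` if it was pre-recorded
    into the Unity Resources folder, else None."""
    candidate = Path(resources_dir) / expected_audio_name(fml_name)
    return candidate if candidate.exists() else None

if __name__ == "__main__":
    print(expected_audio_name("1-Welcome.xml"))  # → 1-Welcome.wav
```

If no matching file exists under /Assets/Resources, the agent will animate without sound for that FML request.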
Advanced
- Generating New Facial expressions
- Generating New Gestures
- Generating new Hand configurations
- Torso Editor Interface
- Creating an Instance for Interaction
- Create a new virtual character
- Creating a Greta Module in Java
- Modular Application
- Basic Configuration
- Signal
- Feedbacks
- From text to FML
- Expressivity Parameters
- Text-to-speech, TTS
- AUs from external sources
- Large language model (LLM)
- Automatic speech recognition (ASR)
- Extensions
- Integration examples