OpenFace 2 integration
OpenFace 2, developed at the CMU MultiComp Lab, is a facial behavior analysis toolkit capable of real-time performance. The system performs a number of facial analysis tasks:
- Facial landmark detection
- Facial landmark and head pose tracking
- Facial action unit recognition
- Gaze tracking
- Facial feature extraction (aligned faces and HOG features)
This integration can forward all facial expressions, head pose, and gaze to any other software. It can either:
- read the standard output file of the standard OpenFace 2 program; reading is dynamic, synchronous with OpenFace writing, or
- connect via the ZeroMQ protocol on port 5000, using the OpenFaceOfflineZeroMQ application (built with this branch of OpenFace 2); a minimal listener sketch follows this list.
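As an illustration of the ZeroMQ mode, here is a minimal stand-alone listener in Java using the JeroMQ library. Only the port number (5000) comes from this page; the socket type (SUB) and the assumption that each message arrives as a plain-text frame are guesses about what OpenFaceOfflineZeroMQ sends.

```java
import org.zeromq.SocketType;
import org.zeromq.ZContext;
import org.zeromq.ZMQ;

// Hypothetical listener for the ZeroMQ input mode.
// Assumption: the sender publishes plain-text frames on a PUB socket;
// only port 5000 is taken from this page.
public final class OpenFaceZeroMQListener {
    public static void main(String[] args) {
        try (ZContext context = new ZContext()) {
            ZMQ.Socket socket = context.createSocket(SocketType.SUB);
            socket.connect("tcp://localhost:5000");
            socket.subscribe(ZMQ.SUBSCRIPTION_ALL); // no topic filtering
            while (!Thread.currentThread().isInterrupted()) {
                String frame = socket.recvStr();    // one OpenFace update
                System.out.println(frame);
            }
        }
    }
}
```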
In Greta, use the "OpenFace2 Output Stream Reader" module to listen to OpenFace input and connect it to the specific Greta modules handling AUs and head pose.
The module allows:
- OpenFace data input selection
- Signal processing
- Forwarding any information to a debug application using the OSC protocol (a minimal sending sketch follows this list)
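As a rough illustration of what such OSC forwarding can look like, here is a sketch using the JavaOSC library (com.illposed.osc). The address pattern /greta/au01, the destination port 9000, and the payload are illustrative assumptions, not values taken from this page or from the Greta implementation.

```java
import com.illposed.osc.OSCMessage;
import com.illposed.osc.OSCPortOut;

import java.io.IOException;
import java.net.InetAddress;
import java.util.Arrays;

// Illustrative OSC debug sender; address pattern and port are assumptions.
public final class OscDebugSender {
    public static void main(String[] args) throws IOException {
        OSCPortOut sender = new OSCPortOut(InetAddress.getByName("localhost"), 9000);
        // Forward one AU intensity value to the debug application.
        OSCMessage message = new OSCMessage("/greta/au01", Arrays.<Object>asList(0.42f));
        sender.send(message);
        sender.close();
    }
}
```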
OpenFace data input selection
The UI allows dynamic and easy selection of the OpenFace 2 information; for example, it can be used to keep only a specific set of AUs.
Signal processing
A facial action unit is composed of:
- a continuous signal: how much the feature is activated
- a discrete signal: whether the feature is detected or not
Raw signal:
Mask:
Hence the need for a signal processing filter.
Filtered signal (kernel size of 5, weight function with power 0.5):
The filter uses a dynamically sized kernel in which the most recent signal value occupies the last index. Each kernel weight is computed with the mathematical "pow" function, which conveniently grows from 0 to 1 for x in [0, 1], so the most recent values receive the most weight. A minimal sketch of this scheme is given below.
12 normalized kernel values for different power values
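The following is a minimal sketch of that weighting scheme in Java, not the actual Greta implementation: the class and method names are invented, and the exact normalization used by the module is an assumption. The kernel size (5) and power (0.5) defaults match the example above.

```java
import java.util.ArrayDeque;

// Sketch of a pow-weighted smoothing filter for AU signals.
// Weights w_i = ((i + 1) / n)^p, so the most recent sample (last index)
// gets weight 1 and older samples fade toward 0; names are hypothetical.
public final class PowKernelFilter {

    private final int kernelSize;                 // e.g. 5, as in the example above
    private final double power;                   // e.g. 0.5
    private final ArrayDeque<Double> window = new ArrayDeque<>();

    public PowKernelFilter(int kernelSize, double power) {
        this.kernelSize = kernelSize;
        this.power = power;
    }

    /** Pushes a new raw value and returns the filtered value. */
    public double filter(double rawValue) {
        window.addLast(rawValue);
        if (window.size() > kernelSize) {
            window.removeFirst();                 // kernel grows dynamically until full
        }
        int n = window.size();
        double weightedSum = 0.0;
        double weightSum = 0.0;
        int i = 0;
        for (double value : window) {
            double weight = Math.pow((i + 1) / (double) n, power);
            weightedSum += weight * value;
            weightSum += weight;
            i++;
        }
        return weightedSum / weightSum;           // normalized kernel
    }

    public static void main(String[] args) {
        PowKernelFilter filter = new PowKernelFilter(5, 0.5);
        for (double raw : new double[] {0.0, 0.9, 0.1, 0.8, 0.85, 0.9}) {
            System.out.printf("raw=%.2f filtered=%.3f%n", raw, filter.filter(raw));
        }
    }
}
```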
Demonstration videos