This GitHub repository contains MATLAB and Python code to facilitate the selection, loading, editing, analysis and saving of images and animations from the MF3D stimulus set. The images and animations themselves are hosted on figshare and must be downloaded separately for this code to be of use:
- MF3D Expression set: https://doi.org/10.6084/m9.figshare.8226029
- MF3D Identities set: https://doi.org/10.6084/m9.figshare.8226311
- MF3D Animations set: https://doi.org/10.6084/m9.figshare.8226317
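The figshare items above can also be queried programmatically. The sketch below is not part of the repository; it assumes figshare's public v2 REST API (`/v2/articles/{id}`) and its JSON field names, and relies on the fact that the numeric article ID is embedded in each figshare DOI:

```python
import json
import re
import urllib.request

# Assumption: figshare's public v2 "articles" endpoint
FIGSHARE_API = "https://api.figshare.com/v2/articles/{article_id}"

def doi_to_article_id(doi):
    """Extract the numeric article ID embedded in a figshare DOI,
    e.g. '10.6084/m9.figshare.8226029' -> '8226029'."""
    match = re.search(r"figshare\.(\d+)", doi)
    if match is None:
        raise ValueError("Not a figshare DOI: %s" % doi)
    return match.group(1)

def list_files(doi):
    """Return (name, download_url) pairs for the files attached to a
    figshare article (requires network access)."""
    url = FIGSHARE_API.format(article_id=doi_to_article_id(doi))
    with urllib.request.urlopen(url) as resp:
        article = json.load(resp)
    return [(f["name"], f["download_url"]) for f in article["files"]]

# Example (network access required):
# for name, url in list_files("10.6084/m9.figshare.8226029"):
#     print(name, url)
```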
The code in this repository is licensed under the GNU General Public License v3 (GPLv3), while the media provided in the MF3D R1 stimulus set are licensed under a Creative Commons Attribution-NonCommercial 4.0 International License (CC BY-NC 4.0). If you use any content from the stimulus set in your research, we ask that you cite the following publication:

Murphy AP, Leopold DA (2019). A parametric, 3D model of the Rhesus macaque face for investigating the visual processing of social cues. Journal of Neuroscience Methods.
The macaque face 3D (MF3D) stimulus set release #1 (R1) is the first database of computer-generated images of a parametrically controlled, anatomically accurate 3D avatar of the Rhesus macaque face and head. The database is intended for visual stimulation in behavioural and neuroscientific studies involving Rhesus macaque subjects. An overview of the contents of the stimulus set can be found here, while full details of how the images were generated are available in the associated publication cited above.
The following video animations demonstrate some of the parameters of the 3D macaque avatar that can be controlled (click images to open videos).
Facial expression, gaze and lighting
This video demonstrates how our macaque model of emotional facial expressions (for a single identity) can be continuously and parametrically varied to adjust appearance. The model was constructed using computed tomography (CT) data from a real Rhesus macaque, acquired under anesthesia, and was edited and rigged by a professional digital artist. In addition to various facial expressions, the model's head and eye gaze direction can be programmatically controlled, as can other variables such as environmental lighting and surface coloration.
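As a rough illustration of what continuous, parametric control of an expression means, the hypothetical sketch below linearly interpolates between a neutral and a target expression parameter vector. The parameter names and the linear blending rule are illustrative only, not the model's actual rig:

```python
def blend_expression(neutral, target, weight):
    """Linearly interpolate between two expression parameter vectors.

    weight = 0.0 returns the neutral face, weight = 1.0 the full target
    expression; intermediate weights produce graded expressions.
    """
    if not 0.0 <= weight <= 1.0:
        raise ValueError("weight must lie in [0, 1]")
    return [n + weight * (t - n) for n, t in zip(neutral, target)]

# Hypothetical parameters: [brow raise, jaw open, ear flap]
neutral = [0.0, 0.0, 0.0]
threat = [0.2, 1.0, 0.6]
half_threat = blend_expression(neutral, threat, 0.5)  # roughly [0.1, 0.5, 0.3]
```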
Facial dynamics estimation
To simulate naturalistic facial dynamics in our macaque avatar, we estimate the time courses of facial motion from video footage of real animals. By applying these time courses to the bones and shape keys of the model, we can mimic the facial motion of the original clip while retaining independent control over a wide range of other variables. The output animation can be rendered at a higher resolution and frame rate (using interpolation) than the input video. (Original video footage in the left panel is used with permission of Off The Fence™.)
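The frame-rate upsampling step can be pictured as resampling a per-frame parameter time course to a higher frame rate. A minimal sketch under stated assumptions: linear interpolation is used purely for illustration (the actual animation software may interpolate differently), and the time course is a plain list of per-frame values:

```python
def resample_timecourse(values, src_fps, dst_fps):
    """Linearly resample a per-frame parameter time course (e.g. jaw
    opening over time) from src_fps to dst_fps frames per second.
    Assumes at least two input frames."""
    duration = (len(values) - 1) / src_fps       # clip length in seconds
    n_out = int(round(duration * dst_fps)) + 1   # output frame count
    out = []
    for i in range(n_out):
        t = i * src_fps / dst_fps                # position in source frames
        j = min(int(t), len(values) - 2)         # lower source frame index
        frac = t - j                             # fractional offset in [0, 1]
        out.append(values[j] + frac * (values[j + 1] - values[j]))
    return out
```

For example, doubling the frame rate of a two-frame ramp inserts one interpolated frame between the originals.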
Identity morphing
This video demonstrates how our macaque model of individual variations in cranio-facial morphology (i.e. 3D shape) can be continuously and parametrically varied to adjust appearance. The statistical model was constructed through principal component analysis (PCA) of the 3D surface reconstructions of 23 real Rhesus monkeys from computed tomography (CT) data acquired under anesthesia. The 3D plot in the top right corner illustrates the first three principal components of this 'face-space', where the origin of the plot represents the sample average face.
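In a PCA face-space of this kind, any face can be expressed as the sample mean plus a weighted sum of principal components. A toy sketch of that reconstruction, with all data illustrative (shapes are flattened coordinate vectors, not the actual 3D surface meshes):

```python
def reconstruct_face(mean_shape, components, coefficients):
    """Reconstruct a face shape from a PCA face-space model:
    shape = mean + sum_i coefficient_i * component_i.
    The origin of the face-space (all coefficients zero) is the
    sample average face."""
    shape = list(mean_shape)
    for component, weight in zip(components, coefficients):
        shape = [s + weight * c for s, c in zip(shape, component)]
    return shape

# With all coefficients at zero, the average face is returned unchanged:
average = reconstruct_face([1.0, 2.0, 3.0], [[1.0, 0.0, 0.0]], [0.0])
```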
Animated sequences
This video demonstrates how animation clips from the MF3D R1 Animation stimulus set can be combined to form a longer continuous animation sequence for use in experiments that require more naturalistic dynamics. This example was generated using a Python script to control Blender's video sequence editor: MF3D_ConcatClips_Demo.py.
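The repository's script does this work inside Blender's video sequence editor; as a language-agnostic illustration of the frame bookkeeping that concatenation involves, here is a minimal hypothetical sketch (function name and the 1-indexed timeline convention are assumptions, though Blender timelines do start at frame 1 by default):

```python
def clip_start_frames(clip_lengths, first_frame=1):
    """Given the length (in frames) of each clip, return the timeline
    frame on which each clip should start so that the clips play
    back-to-back with no gaps."""
    starts = []
    frame = first_frame
    for length in clip_lengths:
        starts.append(frame)
        frame += length
    return starts

# Three clips of 100, 50 and 75 frames start on frames 1, 101 and 151:
# clip_start_frames([100, 50, 75]) -> [1, 101, 151]
```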