
Multiple Head Poses Generation Workflow

Figure 1: Complete diagram of the expected multiple head poses generation workflow.

Overall Workflow

  1. The module takes an input image and the desired face pose as an angle Θ.

  2. A 3D morphable face model is fitted to the input image to obtain a corresponding 3D model of the face.

  3. The generated 3D face model is rotated by the angle Θ and projected onto a 2D image.

  4. The resulting 2D image is inpainted to produce the final rotated 2D face.
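
The diagram maps onto a simple four-step interface. Below is a minimal sketch of what that interface could look like, assuming a Python implementation; the class and method names are hypothetical placeholders rather than the module's actual API.

```python
import numpy as np

class HeadPoseGenerator:
    """Hypothetical top-level interface mirroring the four workflow steps."""

    def fit(self, image: np.ndarray):
        """Step 2: fit the 3D morphable face model to the input image."""
        raise NotImplementedError

    def rotate_and_project(self, face_3d, theta: float) -> np.ndarray:
        """Step 3: rotate the fitted 3D face by theta and project it to a 2D image."""
        raise NotImplementedError

    def inpaint(self, projected: np.ndarray) -> np.ndarray:
        """Step 4: fill in the face regions exposed by the rotation."""
        raise NotImplementedError

    def generate(self, image: np.ndarray, theta: float) -> np.ndarray:
        """Full pipeline: input image and angle theta in, rotated 2D face out."""
        face_3d = self.fit(image)
        projected = self.rotate_and_project(face_3d, theta)
        return self.inpaint(projected)
```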

Face Fitting

  • This submodule fits a predefined 3D morphable face model to the 2D face image to obtain a 3D model of the face.

  • Face fitting is done with a CNN that estimates the change in the 3D model parameters.

  • The 3D face model and the fitting CNN will be implemented following this paper.
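
As a rough illustration, the PyTorch sketch below regresses a change (delta) to the morphable-model parameters from the input image and adds it to a mean parameter vector. The network layout, the parameter count, and the use of a mean starting point are illustrative assumptions, not the referenced paper's exact architecture.

```python
import torch
import torch.nn as nn

class ParamRegressor(nn.Module):
    """CNN that predicts a change to the 3DMM parameters from a face image."""

    def __init__(self, n_params: int = 62):  # assumed pose + shape + expression size
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, n_params)

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(image).flatten(1))

def fit_3dmm(image: torch.Tensor, regressor: ParamRegressor,
             mean_params: torch.Tensor) -> torch.Tensor:
    """Add the predicted parameter change to the mean (starting) parameters."""
    return mean_params + regressor(image)
```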

Face Rotation and Projection

  • After estimating the 3D model of the input 2D face, the model is rotated in 3D space by the input angle.

  • After rotation, the 3D face is projected onto a 2D image using the neural renderer.
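
A small PyTorch sketch of the rotation step is shown below. It uses a yaw rotation about the vertical axis and a plain orthographic projection as a stand-in for the neural renderer, which would rasterize the textured mesh instead; the axis convention and projection type are illustrative assumptions.

```python
import math
import torch

def rotate_vertices(vertices: torch.Tensor, theta: float) -> torch.Tensor:
    """Rotate (N, 3) mesh vertices by theta radians about the y (yaw) axis."""
    c, s = math.cos(theta), math.sin(theta)
    rot_y = torch.tensor([[  c, 0.0,   s],
                          [0.0, 1.0, 0.0],
                          [ -s, 0.0,   c]], dtype=vertices.dtype)
    return vertices @ rot_y.T

def project_orthographic(vertices: torch.Tensor) -> torch.Tensor:
    """Keep only x and y; the real pipeline rasterizes with the neural renderer."""
    return vertices[:, :2]
```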

Face Inpainting

  • After projecting the 3D face model onto a 2D image, a face inpainting step fills in the missing parts of the face using an image-to-image translation model such as CycleGAN or Pix2PixHD (both will be tried).

  • A synthesized dataset will be collected through self-supervision: the 3D face model is rotated back and forth by random angles and projected onto the 2D image plane, yielding pairs of projected faces and their ground-truth images, following the rotate-and-render approach described here.
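
A rough sketch of how such pairs could be generated is given below, reusing rotate_vertices from the previous sketch. The render and resample_texture arguments are placeholders for the neural-renderer projection and texture re-sampling; all names and the angle range are assumptions rather than the cited approach's exact procedure.

```python
import random

def make_training_pair(image, vertices, texture, render, resample_texture,
                       max_angle: float = 1.0):
    """Return a (degraded, ground_truth) image pair for the inpainting network."""
    theta = random.uniform(-max_angle, max_angle)  # random yaw angle in radians
    rotated = rotate_vertices(vertices, theta)
    # Render the face at the rotated pose; regions occluded there lose their texture.
    rotated_view = render(rotated, texture)
    # Re-sample the texture from that rendering, rotate back, and render again.
    degraded = render(rotate_vertices(rotated, -theta),
                      resample_texture(rotated, rotated_view))
    # The original image is the ground truth the inpainting model should reconstruct.
    return degraded, image
```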