Structurally and Semantically Coherent Deep Image Inpainting

Abstract. This paper presents an augmented method for image completion, particularly for images of human faces, by leveraging deep-learning-based inpainting techniques. Face completion tends to be a daunting task because of the relatively low uniformity of a face, which stems from structures such as the eyes, nose, etc. Here, understanding the top-level context is paramount for proper semantic completion. The presented method improves upon existing inpainting techniques that reduce context difference by locating the closest encoding of the damaged image in the latent space of a pretrained deep generator. However, these existing methods fail to consider key facial structures (eyes, nose, jawline, etc.) and their locations relative to each other. This paper mitigates this by introducing a facial landmark detector and a corresponding landmark loss. The landmark loss is added to the context loss between the damaged and generated image and the adversarial (prior) loss of the generative model. The model was trained on the CelebA dataset; tools such as pyamg, Pillow, and the OpenCV library were used for image manipulation and facial landmark detection. Three weighted parameters balance the effect of the three loss functions in this paper, namely the context loss, landmark loss, and prior loss. After several experiments, it can be concluded that the added landmark loss contributes to a better understanding of top-level context, and hence the model generates more visually appealing inpainted images. A minimal sketch of how the three weighted losses might be combined is shown below.
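The sketch below illustrates the weighted combination of the context, landmark, and prior losses described in the abstract. It assumes a PyTorch-style pretrained generator G and discriminator D; the function names (total_loss, detect_landmarks) and the lambda_* weights are hypothetical placeholders and are not taken from this repository.

import torch
import torch.nn.functional as F

def total_loss(z, G, D, damaged_img, mask, target_landmarks,
               detect_landmarks,
               lambda_ctx=1.0, lambda_lmk=0.1, lambda_prior=0.003):
    """Combine context, landmark, and prior losses for a latent code z.

    Weights and helper functions here are illustrative assumptions,
    not values from the paper or this repository.
    """
    generated = G(z)

    # Context loss: difference on the known (undamaged) pixels only,
    # selected by the binary mask.
    context = F.l1_loss(generated * mask, damaged_img * mask)

    # Landmark loss: distance between facial landmarks predicted on the
    # generated face and those detected on the damaged input face.
    pred_landmarks = detect_landmarks(generated)
    landmark = F.mse_loss(pred_landmarks, target_landmarks)

    # Prior (adversarial) loss: encourage the generated image to be
    # judged realistic by the pretrained discriminator.
    prior = -torch.log(D(generated) + 1e-8).mean()

    return lambda_ctx * context + lambda_lmk * landmark + lambda_prior * prior

In this formulation, the latent code z would be optimized (e.g., by gradient descent) to minimize the combined loss, and the completed image would be obtained by blending G(z) into the masked region of the damaged input.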
