[CVPR 2023] SadTalker: Learning Realistic 3D Motion Coefficients for Stylized Audio-Driven Single Image Talking Face Animation


Wenxuan Zhang *,1,2   Xiaodong Cun *,2   Xuan Wang 3   Yong Zhang 2   Xi Shen 2
Yu Guo 1   Ying Shan 2   Fei Wang 1

1 Xi'an Jiaotong University   2 Tencent AI Lab   3 Ant Group

CVPR 2023


TL;DR:       single portrait image 🙎‍♂️      +       audio 🎤       =       talking head video 🎞.


🔥 Highlight

  • 🔥 Updated the LICENSE to Apache 2.0 and removed the non-commercial restriction.

  • 🔥 SadTalker has been officially integrated into Discord, where you can use it for free in a Discord server by simply dropping in files; you can also generate high-quality videos from text prompts.

  • 🔥 The extension for the stable-diffusion-webui is online. Check out more details here.

(demo video: sadtalker-webui.mp4)
  • 🔥 Full image mode is online! Check out here for more details.
(comparison videos: still + enhancer in v0.0.1 vs. still + enhancer in v0.0.2, input image by @bagbag1815: still_e_n.mp4, full_body_2.bus_chinese_enhanced.mp4)
  • 🔥 Several new modes, e.g., still mode, reference mode, and resize mode, are online for better and more customizable applications.

  • 🔥 Happy to see more community demos on Bilibili, YouTube, and Twitter #sadtalker.

📋 Changelog (previous changelog can be found here)

  • [2023.06.12]: Added more new features to the WebUI extension; see the discussion here.

  • [2023.06.05]: Released a new 512 beta face model. Fixed some bugs and improved performance.

  • [2023.04.15]: Added an Automatic1111 Colab by @camenduru; thanks for this awesome Colab: sd webui-colab.

  • [2023.04.12]: Added a more detailed sd-webui installation document and fixed the reinstallation problem.

  • [2023.04.12]: Fixed sd-webui safety issues caused by third-party packages and optimized the output path in the sd-webui extension.

  • [2023.04.08]: ❗️❗️❗️ In v0.0.2, we added a logo watermark to the generated video to prevent abuse, since the results are very realistic.

  • [2023.04.08]: v0.0.2: added full image animation and a Baidu Netdisk link for downloading checkpoints, and optimized the enhancer logic.

🚧 TODO: See the Discussion OpenTalker#280

If you have any problems, please consult our FAQ before opening an issue.

⚙️ 1. Installation.

Tutorials from communities: Chinese Windows tutorial (中文windows教程) | Japanese tutorial (日本語コース)

Linux:

  1. Install Anaconda, Python, and Git.

  2. Create the environment and install the requirements.

git clone https://github.com/Winfredy/SadTalker.git

cd SadTalker 

conda create -n sadtalker python=3.8

conda activate sadtalker

pip install torch==1.12.1+cu113 torchvision==0.13.1+cu113 torchaudio==0.12.1 --extra-index-url https://download.pytorch.org/whl/cu113

conda install ffmpeg

pip install -r requirements.txt

### TTS is optional for the gradio demo.
### pip install TTS
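
As an optional sanity check (a minimal sketch, assuming the environment created above), you can verify that the pinned PyTorch build sees your GPU:

import torch

# Optional environment check: print the pinned PyTorch version and GPU visibility.
print(torch.__version__)           # expect 1.12.1+cu113
print(torch.cuda.is_available())   # True if the CUDA 11.3 build found a GPU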

Windows:

  1. Install Python 3.10.6, checking "Add Python to PATH".
  2. Install git manually (or via scoop: scoop install git).
  3. Install ffmpeg, following this instruction (or via scoop: scoop install ffmpeg).
  4. Download the SadTalker repository, for example by running git clone https://github.com/Winfredy/SadTalker.git.
  5. Download the checkpoints and gfpgan models below ↓.
  6. Run start.bat from Windows Explorer as a normal, non-administrator user; a gradio WebUI demo will start.

Macbook:

More tips about installation on Macbook and the Dockerfile can be found here.

📥 2. Download Trained Models.

You can run the following script to put all the models in the right place.

bash scripts/download_models.sh

Other alternatives:

We also provide an offline patch (gfpgan/), so no model will be downloaded when generating.

Google Drive: download our pre-trained model from this link (main checkpoints) and gfpgan (offline patch).

GitHub Release Page: download all the files from the latest GitHub release page, then put them in ./checkpoints.

Baidu Netdisk (百度云盘): we provide the models in checkpoints (extraction code: sadt) and gfpgan (extraction code: sadt).

Model Details

New version:

Model | Description
checkpoints/mapping_00229-model.pth.tar | Pre-trained MappingNet in SadTalker.
checkpoints/mapping_00109-model.pth.tar | Pre-trained MappingNet in SadTalker.
checkpoints/SadTalker_V0.0.2_256.safetensors | Packaged SadTalker checkpoints of the old version (256 face render).
checkpoints/SadTalker_V0.0.2_512.safetensors | Packaged SadTalker checkpoints of the old version (512 face render).
gfpgan/weights | Face detection and enhancement models used in facexlib and gfpgan.

Old version:

Model | Description
checkpoints/auido2exp_00300-model.pth | Pre-trained ExpNet in SadTalker.
checkpoints/auido2pose_00140-model.pth | Pre-trained PoseVAE in SadTalker.
checkpoints/mapping_00229-model.pth.tar | Pre-trained MappingNet in SadTalker.
checkpoints/mapping_00109-model.pth.tar | Pre-trained MappingNet in SadTalker.
checkpoints/facevid2vid_00189-model.pth.tar | Pre-trained face-vid2vid model from the unofficial reproduction of face-vid2vid.
checkpoints/epoch_20.pth | Pre-trained 3DMM extractor from Deep3DFaceReconstruction.
checkpoints/wav2lip.pth | Highly accurate lip-sync model from Wav2lip.
checkpoints/shape_predictor_68_face_landmarks.dat | Face landmark model used in dlib.
checkpoints/BFM | 3DMM library files.
checkpoints/hub | Face detection models used in face-alignment.
gfpgan/weights | Face detection and enhancement models used in facexlib and gfpgan.

The final folder layout should match the model tables above.
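As a quick sanity check (a minimal sketch; the file names come from the "New version" table above, and the list is not exhaustive), you can confirm the main checkpoints are in place:

import os

# Hedged sanity check: confirm the main new-version model files exist.
# Extend the list with the old-version files if you use them.
expected = [
    "checkpoints/SadTalker_V0.0.2_256.safetensors",
    "checkpoints/SadTalker_V0.0.2_512.safetensors",
    "checkpoints/mapping_00229-model.pth.tar",
    "checkpoints/mapping_00109-model.pth.tar",
]
for path in expected:
    print(path, "OK" if os.path.exists(path) else "MISSING")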

🔮 3. Quick Start (Best Practice).

WebUI Demos:

Online: Huggingface | SDWebUI-Colab | Colab

Local Automatic1111 stable-diffusion webui extension: please refer to the Automatic1111 stable-diffusion webui docs.

Local gradio demo (highly recommended!): A demo similar to our Hugging Face demo can be run by:

## You need to manually install TTS (https://github.com/coqui-ai/TTS) via `pip install TTS` in advance.
python app_sadtalker.py

Alternatively, start the local gradio demo with the provided launch scripts:

  • Windows: just double-click webui.bat; the requirements will be installed automatically.
  • Linux/macOS: run bash webui.sh to start the WebUI.

Manual usage:

Animating a portrait image with the default config:
python inference.py --driven_audio <audio.wav> \
                    --source_image <video.mp4 or picture.png> \
                    --enhancer gfpgan 

The results will be saved in results/$SOME_TIMESTAMP/*.mp4.
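
For scripted pipelines, here is a minimal sketch of invoking the same command from Python via subprocess; the audio and image paths are placeholders you must replace with your own files:

import subprocess

# Minimal sketch: run inference.py programmatically.
# The input paths below are placeholders; the flags match the command above.
subprocess.run([
    "python", "inference.py",
    "--driven_audio", "my_audio.wav",     # placeholder audio file
    "--source_image", "my_portrait.png",  # placeholder portrait image
    "--enhancer", "gfpgan",
], check=True)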

Full body/image Generation:

Use --still to generate a natural full-body video. You can add --enhancer to improve the quality of the generated video.

python inference.py --driven_audio <audio.wav> \
                    --source_image <video.mp4 or picture.png> \
                    --result_dir <a directory to store results> \
                    --still \
                    --preprocess full \
                    --enhancer gfpgan 

More examples, configurations, and tips can be found in the >>> best practice documents <<<.

🛎 Citation

If you find our work useful in your research, please consider citing:

@article{zhang2022sadtalker,
  title={SadTalker: Learning Realistic 3D Motion Coefficients for Stylized Audio-Driven Single Image Talking Face Animation},
  author={Zhang, Wenxuan and Cun, Xiaodong and Wang, Xuan and Zhang, Yong and Shen, Xi and Guo, Yu and Shan, Ying and Wang, Fei},
  journal={arXiv preprint arXiv:2211.12194},
  year={2022}
}

💗 Acknowledgements

The Facerender code borrows heavily from zhanglonghao's reproduction of face-vid2vid and from PIRender. We thank the authors for sharing their wonderful code. In the training process, we also use models from Deep3DFaceReconstruction and Wav2lip, and we thank them for their wonderful work.

See also these wonderful third-party libraries we use:

🥂 Extensions:

🥂 Related Works

📢 Disclaimer

This is not an official product of Tencent.

1. Please carefully read and comply with the open-source license applicable to this code before using it. 
2. Please carefully read and comply with the intellectual property declaration applicable to this code before using it.
3. This open-source code runs completely offline and does not collect any personal information or other data. If you use this code to provide services to end-users and collect related data, please take necessary compliance measures according to applicable laws and regulations (such as publishing privacy policies, adopting necessary data security strategies, etc.). If the collected data involves personal information, user consent must be obtained (if applicable). Any legal liabilities arising from this are unrelated to Tencent.
4. Without Tencent's written permission, you are not authorized to use the names or logos legally owned by Tencent, such as "Tencent." Otherwise, you may be liable for legal responsibilities.
5. This open-source code does not have the ability to directly provide services to end-users. If you need to use this code for further model training or demos, as part of your product to provide services to end-users, or for similar use, please comply with applicable laws and regulations for your product or service. Any legal liabilities arising from this are unrelated to Tencent.
6. It is prohibited to use this open-source code for activities that harm the legitimate rights and interests of others (including but not limited to fraud, deception, infringement of others' portrait rights, reputation rights, etc.), or other behaviors that violate applicable laws and regulations or go against social ethics and good customs (including providing incorrect or false information, spreading pornographic, terrorist, and violent information, etc.). Otherwise, you may be liable for legal responsibilities.

LOGO: color and font suggestion by ChatGPT; logo font: Montserrat Alternates.

All copyrights of the demo images and audio belong to community users or the content was generated by Stable Diffusion. Feel free to contact us if you have any concerns.
