Official implementation of 'Personalize Segment Anything Model with One Shot'.
💥 Try out the 🤗 web demo of PerSAM and PerSAM-F.
🎉 Try out the Colab tutorial notebooks on your own dataset. Many thanks to @NielsRogge!
🎆 Try out the online web demo of PerSAM on OpenXLab.
- Support MobileSAM 🔥 for a significant efficiency improvement. Thanks to its authors for their wonderful work!
- TODO: Release the PerSAM-assisted DreamBooth for better fine-tuning of Stable Diffusion 📌.
- We release the code of PerSAM and PerSAM-F 🔥. Check our video here!
- We release a new dataset for personalized segmentation, PerSeg 🔥.
How to customize SAM to automatically segment your pet dog in a photo album?
In this project, we propose a training-free personalization approach for the Segment Anything Model (SAM), termed PerSAM. Given only a single image with a reference mask, PerSAM can segment the specified visual concept, e.g., your pet dog, in other images or videos without any training. For better performance, we further present an efficient one-shot fine-tuning variant, PerSAM-F: we freeze the entire SAM and introduce two learnable mask weights, training only 2 parameters within 10 seconds.
Besides, our approach can assist DreamBooth in fine-tuning Stable Diffusion for personalized image synthesis. We adopt PerSAM to segment the target object in the user-provided few-shot images, which eliminates background disturbance and benefits the learning of the target representation.
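The training-free idea can be sketched in a few lines: mask-pool the reference image's features into a single target embedding, then take the peak of its cosine-similarity map over a new image's features as a positive point prompt for SAM. The sketch below uses plain NumPy arrays in place of real SAM encoder features; `locate_target` is an illustrative helper, not the repo's API.

```python
import numpy as np

def locate_target(ref_feat, ref_mask, test_feat):
    """Toy sketch of PerSAM-style training-free localization.

    ref_feat:  (H, W, C) features of the reference image
    ref_mask:  (H, W)    binary mask of the target concept
    test_feat: (H, W, C) features of a new image
    Returns the (row, col) of the most similar location, usable
    as a positive point prompt for SAM.
    """
    # Mask-pool the reference features into one target embedding.
    target = ref_feat[ref_mask > 0].mean(axis=0)
    target /= np.linalg.norm(target) + 1e-8

    # Cosine similarity between the target embedding and every
    # spatial location of the test image's feature map.
    feat = test_feat / (np.linalg.norm(test_feat, axis=-1, keepdims=True) + 1e-8)
    sim = feat @ target  # (H, W) confidence map

    return np.unravel_index(np.argmax(sim), sim.shape)
```

In the actual method, this location prompt (together with further cues such as the similarity map itself) is fed to SAM's prompt encoder to decode the personalized mask.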
Clone the repo and create a conda environment:
```bash
git clone https://github.com/ZrrSkywalker/Personalize-SAM.git
cd Personalize-SAM

conda create -n persam python=3.8
conda activate persam

pip install -r requirements.txt
```
Similar to Segment Anything, our code requires `pytorch>=1.7` and `torchvision>=0.8`. Please follow the instructions here to install both PyTorch and TorchVision dependencies.
Please download our constructed dataset PerSeg for personalized segmentation from Google Drive or Baidu Yun (code: 222k), and the pre-trained SAM weights from here. Then, unzip the dataset file and organize the files as follows:
```
data/
|–– Annotations/
|–– Images/
sam_vit_h_4b8939.pth
```
Please download the 480p TrainVal split of DAVIS 2017, then decompress it to `DAVIS/2017` and organize the files as follows:
```
DAVIS/
|––2017/
    |–– Annotations/
    |–– ImageSets/
    |–– JPEGImages/
```
For the training-free 🧊 PerSAM, just run:
```bash
python persam.py --outdir <output filename>
```
For 10-second fine-tuning of 🚀 PerSAM-F, just run:
```bash
python persam_f.py --outdir <output filename>
```
For MobileSAM with higher efficiency, just add `--sam_type vit_t`:
```bash
python persam.py --outdir <output filename> --sam_type vit_t
# or
python persam_f.py --outdir <output filename> --sam_type vit_t
```
For multi-object segmentation of the same category by PerSAM-F (many thanks to @mlzoo), just run:
```bash
python persam_f_multi_obj.py --sam_type <sam module type> --outdir <output filename>
```
After running, the output masks and visualizations will be stored in `outputs/<output filename>`.
Then, for mIoU evaluation, please run:
```bash
python eval_miou.py --pred_path <output filename>
```
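The reported metric is the mean of per-mask intersection-over-union. As a reference, IoU on binary masks reduces to a few NumPy lines (`mask_iou` below is an illustrative helper, not the script's actual function):

```python
import numpy as np

def mask_iou(pred, gt):
    """Intersection-over-union between two binary segmentation masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    # Two empty masks agree perfectly by convention.
    return inter / union if union else 1.0
```

`eval_miou.py` then averages this score over all masks predicted for the dataset.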
For training-free evaluation of 🧊 PerSAM on video, just run:
```bash
python persam_video.py --output_path <output filename>
```
For 10-second fine-tuning and evaluation of 🚀 PerSAM-F on video, just run:
```bash
python persam_video_f.py --output_path <output filename>
```
Our approach can enhance DreamBooth to better personalize Stable Diffusion for text-to-image generation.
Coming soon.
```bibtex
@article{zhang2023personalize,
  title={Personalize Segment Anything Model with One Shot},
  author={Zhang, Renrui and Jiang, Zhengkai and Guo, Ziyu and Yan, Shilin and Pan, Junting and Dong, Hao and Gao, Peng and Li, Hongsheng},
  journal={arXiv preprint arXiv:2305.03048},
  year={2023}
}
```
This repo benefits from Segment Anything and DreamBooth. Thanks for their wonderful work.
If you have any questions about this project, please feel free to contact [email protected].