
[Feature] Support API #21

Open
SpenserCai opened this issue Jun 22, 2023 · 17 comments

Comments

@SpenserCai

Can inpaint-anything support an API?

@Uminosachi
Owner

I'm focusing on UI-based image processing and not considering an external API now.

@SpenserCai
Author

I may be able to try adding the API part myself.

@SpenserCai
Author

Ideally, the API here would be callable through the SD WebUI API.

@Uminosachi
Owner

The process of Inpaint Anything involves several steps, including segmentation, pointing by sketch, and mask generation. If even one of these steps is missing, the whole process won't function. I'm concerned about how to implement these steps as APIs.

@SpenserCai
Author

I think it's possible to break each step down into a separate API (see the sketch after this list):

  1. Generate a segmentation map: the input is an image, and the output is the segmentation data / segmentation map.

  2. Generate a mask: the inputs are the image and the segmentation data/map, and the output is a mask image.

...
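A minimal sketch of what that split could look like, assuming a small FastAPI app sits in front of the existing code; the endpoint paths and the run_segmentation / run_mask helpers below are hypothetical placeholders, not functions provided by this extension:

import io

import numpy as np
from fastapi import FastAPI, File, UploadFile
from fastapi.responses import Response
from PIL import Image

app = FastAPI()


def run_segmentation(image: np.ndarray) -> np.ndarray:
    """Hypothetical wrapper around the SAM segmentation step."""
    raise NotImplementedError


def run_mask(image: np.ndarray, seg: np.ndarray, point: tuple) -> np.ndarray:
    """Hypothetical wrapper around the mask-generation step."""
    raise NotImplementedError


def to_png_response(array: np.ndarray) -> Response:
    # Encode a NumPy image array as a PNG HTTP response.
    buf = io.BytesIO()
    Image.fromarray(array).save(buf, format="PNG")
    return Response(content=buf.getvalue(), media_type="image/png")


@app.post("/segment")
async def segment(image: UploadFile = File(...)):
    # Step 1: image in, segmentation map out.
    img = np.array(Image.open(io.BytesIO(await image.read())))
    return to_png_response(run_segmentation(img))


@app.post("/mask")
async def mask(x: int, y: int, image: UploadFile = File(...), seg: UploadFile = File(...)):
    # Step 2: image + segmentation data + a selected point in, mask image out.
    img = np.array(Image.open(io.BytesIO(await image.read())))
    seg_map = np.array(Image.open(io.BytesIO(await seg.read())))
    return to_png_response(run_mask(img, seg_map, (x, y)))

Since the WebUI's extension API is itself FastAPI-based, the same handlers could also be registered via the on_app_started callback instead of running a standalone server.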

@Uminosachi
Owner

I've moved the SAM execution and mask generation code to separate library files, making it easier for other applications to utilize them.

https://github.com/Uminosachi/inpaint-anything/blob/main/README_DEV.md
https://github.com/Uminosachi/sd-webui-inpaint-anything/blob/main/README_DEV.md

@SpenserCai
Author

Thanks for completing this!

@SpenserCai
Author

Will DINO be supported? If so, I'd be willing to contribute the API-related code.

@Uminosachi
Owner

I'm not considering using DINO at the moment.

@nijiazhi


Thank you so much for everything you've done. By the way, I have a couple of questions:

  1. Will there be an API available for inpainting?
  2. After the masking step, is it possible to manually input coordinate points (the center points of boxes generated by SAM) and perform inpainting?

@Uminosachi
Owner

  1. Will there be an API available for inpainting?

The inpainting feature in this app uses the StableDiffusionInpaintPipeline class from the Python diffusers package (you can find the code at the link below). Therefore, I haven't provided a separate API.

https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion/inpaint
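For anyone who wants to drive that step directly, a minimal sketch of calling the pipeline outside the app (the model ID, file names, and prompt are only examples, not what this extension ships with):

import torch
from diffusers import StableDiffusionInpaintPipeline
from PIL import Image

# Any diffusers-compatible inpainting checkpoint works here.
pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-inpainting",
    torch_dtype=torch.float16,
).to("cuda")

init_image = Image.open("image.png").convert("RGB").resize((512, 512))
mask_image = Image.open("mask.png").convert("L").resize((512, 512))  # white = area to inpaint

result = pipe(prompt="background scenery", image=init_image, mask_image=mask_image).images[0]
result.save("inpainted.png")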

  2. After the masking step, is it possible to manually input coordinate points (the center points of boxes generated by SAM) and perform inpainting?

In the sample code, you can set the coordinates at the line provided below. By modifying (input_image.shape[1] // 2, input_image.shape[0] // 2), you can specify any point within the image. I haven't prepared an API that displays and allows you to select from multiple segment candidates by SAM.

https://github.com/Uminosachi/inpaint-anything/blob/main/README_DEV.md

draw.point((input_image.shape[1] // 2, input_image.shape[0] // 2), fill=(255, 255, 255))
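
As an illustration (not part of the README sample), building the point-prompt image at an arbitrary (x, y) instead of the image center could look like this; the file name and coordinates are placeholders:

import numpy as np
from PIL import Image, ImageDraw

input_image = np.array(Image.open("image.png"))
x, y = 320, 240  # e.g. the center of a box returned by SAM

# Black canvas with the same width/height as the input image,
# with a single white pixel marking the point to segment around.
point_image = Image.new("RGB", (input_image.shape[1], input_image.shape[0]))
draw = ImageDraw.Draw(point_image)
draw.point((x, y), fill=(255, 255, 255))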

@nijiazhi


Thanks for your reply!
I would like to know whether Lama Cleaner's API is used directly in the cleaner step, and whether you provide a separate API for this part.
My understanding is that the mask is fed directly into Lama Cleaner.

@Uminosachi
Owner

Uminosachi commented Oct 17, 2023

I would like to know whether Lama Cleaner's API is used directly in the cleaner step, and whether you provide a separate API for this part.
My understanding is that the mask is fed directly into Lama Cleaner.

Lama Cleaner can be installed as an individual package using pip, similar to diffusers.

pip install lama-cleaner

While I haven't created sample code, you should be able to write your own by referring to the run_cleaner function in the iasam_app.py file. In the code below, init_image and mask_image are PIL.Image objects.

import cv2
import numpy as np
import torch
from lama_cleaner.model_manager import ModelManager
from lama_cleaner.schema import Config, HDStrategy, LDMSampler, SDSampler
from PIL import Image

# Load the LaMa model on GPU if available, otherwise on CPU.
device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")
model = ModelManager(name="lama", device=device)

# init_image and mask_image are the PIL.Image inputs mentioned above;
# the mask is converted to a single-channel (grayscale) array.
init_image = np.array(init_image)
mask_image = np.array(mask_image.convert("L"))

config = Config(
    ldm_steps=20,
    ldm_sampler=LDMSampler.ddim,
    hd_strategy=HDStrategy.ORIGINAL,
    hd_strategy_crop_margin=32,
    hd_strategy_crop_trigger_size=512,
    hd_strategy_resize_limit=512,
    prompt="",
    sd_steps=20,
    sd_sampler=SDSampler.ddim
)

# Run inpainting; the model returns a BGR float array, so convert it back
# to RGB before wrapping it in a PIL image.
output_image = model(image=init_image, mask=mask_image, config=config)
output_image = cv2.cvtColor(output_image.astype(np.uint8), cv2.COLOR_BGR2RGB)
output_image = Image.fromarray(output_image)

@nijiazhi


import importlib

import numpy as np
from PIL import Image, ImageDraw

inpalib = importlib.import_module("inpaint-anything.inpalib")

ModuleNotFoundError: No module named 'inpaint-anything'

Hi, I have tried the code, but where can I find the "inpaint-anything.inpalib" module?

@Uminosachi
Owner

Before you proceed, please make sure you've cloned this repository to your current directory using the following command:

git clone https://github.com/Uminosachi/inpaint-anything.git
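
For reference, a minimal sketch of the layout that import assumes, with the clone sitting next to the script that uses it (paths are illustrative):

# Assumed layout after cloning into the current working directory:
#   ./your_script.py
#   ./inpaint-anything/
#       inpalib/ ...
import importlib

# "inpaint-anything" contains a hyphen, so it is not a valid Python identifier
# and has to be imported by its string name rather than with a plain import.
inpalib = importlib.import_module("inpaint-anything.inpalib")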

@sunatte-saad

I have built an API around this. I am returning the segmented image, a mask based on the selected points, and a merged image of the segments and the original.
Is there anything else you suggest I should add?

@sunatte-saad


change "inpaint-anything.inpalib" to "inpalib"
