Update (07/29/2024)
We fine-tuned BrushNet with our dataset. The new version can transfer inpainting ability to arbitrary SDXL models, and it is cityscape-friendly!!!
Have fun and make sure to add a ❤️ to receive future updates.
● This model is fine-tuned from diffusers/stable-diffusion-xl-1.0-inpainting-0.1 and can outpaint pictures by using a mask.
● The upload contains only the UNet.
● The showcase images are presented in the order output_result-raw_image.
Usage:
1. The input is a masked image (the source image expanded on the sides, with the expanded area filled with any value you like) and its mask (the expanded area set to 255 and the source-image area set to 0).
2. Use a diffusers pipeline (e.g. StableDiffusionXLInpaintPipeline) to automatically match the 9 input channels of the outpainting UNet.
3. Set the strength parameter to 1.0 (very important!!!).
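The steps above can be sketched as follows. The input preparation uses Pillow; the file names and the 256x256 source size are made up for illustration, and the pipeline call itself is left as a comment since it downloads several GB of weights.

```python
# Minimal sketch of usage steps 1-3. The source size (256x256) and the
# 1.25x width expansion are illustrative values, not requirements.
from PIL import Image

src = Image.new("RGB", (256, 256), "gray")   # stand-in for your source image

# Step 1: expand the canvas and build the masked image + mask.
new_w, new_h = 320, 256
masked = Image.new("RGB", (new_w, new_h), "black")  # fill value is up to you
masked.paste(src, (0, 0))

mask = Image.new("L", (new_w, new_h), 255)          # expanded area = 255
mask.paste(Image.new("L", src.size, 0), (0, 0))     # source area = 0

# Steps 2-3: feed both into the inpainting pipeline (not run here):
#
#   from diffusers import StableDiffusionXLInpaintPipeline
#   pipe = StableDiffusionXLInpaintPipeline.from_pretrained(
#       "diffusers/stable-diffusion-xl-1.0-inpainting-0.1")
#   out = pipe(prompt="", image=masked, mask_image=mask,
#              strength=1.0).images[0]    # strength=1.0 is required
```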
Recommended:
1. We recommend the diffusers pipeline. Automatic1111 does not support inpainting-XL models yet.
2. Sampling scheduler: DPM++ 2M SDE Karras; steps: 30; CFG: 3.
3. For the best results, the height expansion ratio should not exceed 1.3 and the width expansion ratio should not exceed 1.5.
4. Use a lower CFG to reduce the impact of an inaccurate prompt.
5. The model is more friendly to scenery images as input.
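As a concrete reading of recommendation 3, a small (hypothetical) helper can clamp a requested output size to the suggested per-axis limits:

```python
# Hypothetical helper illustrating recommendation 3: clamp a requested
# outpainting size so the expansion stays within the suggested limits
# (width <= 1.5x, height <= 1.3x the source size).
def clamp_expansion(src_w, src_h, target_w, target_h,
                    max_w_ratio=1.5, max_h_ratio=1.3):
    return (min(target_w, int(src_w * max_w_ratio)),
            min(target_h, int(src_h * max_h_ratio)))

print(clamp_expansion(1024, 768, 2048, 2048))  # -> (1536, 998)
```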
Attention:
1. An expansion ratio higher than recommended may generate repetitive content. For better results, expand one side at a time with a more suitable prompt, or use a progressive generation method.
2. A prompt is not necessary. If you use one, preferably describe the contents of the extended area you want rather than objects already in the image (e.g. cars), to avoid repetitive objects, especially at high expansion ratios.
3. We're working on the next version.
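One way to read "progressive generation" in note 1 above is to reach a large target size in several outpainting passes, each staying within the recommended ratios. A hypothetical sketch of such a size schedule:

```python
# Hypothetical sketch of progressive generation: split a large target
# size into passes that each expand by at most the recommended ratios
# (1.5x width, 1.3x height). Each returned size is one outpainting pass.
def size_schedule(src_w, src_h, target_w, target_h,
                  max_w_ratio=1.5, max_h_ratio=1.3):
    steps, w, h = [], src_w, src_h
    while w < target_w or h < target_h:
        w = min(target_w, int(w * max_w_ratio))
        h = min(target_h, int(h * max_h_ratio))
        steps.append((w, h))  # outpaint to this size, then repeat
    return steps

print(size_schedule(1000, 1000, 2000, 1500))
# -> [(1500, 1300), (2000, 1500)]
```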
Update:
(01/26/2024): uploaded an instruction for inference: https://civarchive.com/articles/3835
(For the model showcase, we use some real images as input and indicate the source of each image in the comments as much as possible. For infringement concerns, please contact us for deletion.)
Comments (7)
Could this be used in Fooocus? I have no idea how to make this work; I assume it's for Comfy only?
"1. Recommend diffusers pipeline. Automatic1111 does not support inpaint-XL model yet."
It seems it can only be used via diffusers source code. It's time to make a simple tutorial on how to use it!
Hi, the outpainting model is focused on the diffusers pipeline for now, and we just updated an instruction for inference: https://civitai.com/articles/3835 . Maybe we'll cover other platforms later. Have fun with this model❤️.
How can I load this ckpt with diffusers? .from_single_file() causes an error.
Because the model is too large, we only uploaded the UNet part. There is a quick solution for inference: first download diffusers/stable-diffusion-xl-1.0-inpainting-0.1 (https://huggingface.co/diffusers/stable-diffusion-xl-1.0-inpainting-0.1), and replace the safetensors model in the unet folder with our model. Then use StableDiffusionXLInpaintPipeline.from_pretrained(). We will write a detailed explanation later. Have fun with this model. If you like this work, don't forget to add a ❤️. Thank U❤️❤️❤️
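The quick solution above amounts to a file swap followed by a normal from_pretrained() load. A sketch of the swap (a temporary folder stands in for the downloaded repo, and all file names here are illustrative):

```python
# Sketch of the quick solution: swap the UNet weights inside a local
# copy of diffusers/stable-diffusion-xl-1.0-inpainting-0.1, then load
# it normally. A temp folder stands in for the real downloaded repo so
# the swap itself can be demonstrated; the paths are illustrative.
import shutil, tempfile
from pathlib import Path

repo = Path(tempfile.mkdtemp())                 # local snapshot of the base repo
unet_dir = repo / "unet"
unet_dir.mkdir()
(unet_dir / "diffusion_pytorch_model.safetensors").write_bytes(b"base-weights")

ours = repo / "outpainting_unet.safetensors"    # the UNet downloaded from this page
ours.write_bytes(b"outpainting-weights")

# Replace the base UNet weights with the outpainting UNet (keep the name).
shutil.copyfile(ours, unet_dir / "diffusion_pytorch_model.safetensors")

# Then load as usual (not run here; it needs the real weights):
#   from diffusers import StableDiffusionXLInpaintPipeline
#   pipe = StableDiffusionXLInpaintPipeline.from_pretrained(repo)
```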
Hi, we uploaded an instruction for inference: https://civitai.com/articles/3835 . Have fun with this model.