Introduction
This is an inpainting model of the excellent Juggernaut XL by @KandooAI -- thank you for your work, mate!
The whole purpose of an inpainting model is to give at least somewhat better results when inpainting or outpainting an image. When outpainting with a normal checkpoint, for example, you tend to get a very visible seam between the original part of the image and the newly extended part -- this model helps eliminate that seam! Also, using an inpainting model made from a specific checkpoint instead of the generic SDXL one tends to give more thematically consistent results.
Here is an example of a rather visible seam after outpainting:
The original model on the left, the inpainting model on the right. (Yes, I cherrypicked one of the worst examples just to demonstrate the point)
Disclaimer: you definitely can get good results even without one, but it's easier with an inpainting model. No need for any offensive comments about how I'm bad and should feel bad.
How to use:
Basically, the same general guidelines apply as for the original model, i.e. DPM++ 2M Karras as the sampler and 30+ steps for generally good results. I like to go all the way up to 80 steps myself, but YMMV and all that. I usually set the CFG scale between 2 and 6, and I usually start with a denoising strength of 0.8, though I may bring it down if I feel the model is being a little too creative in what it generates, or push it all the way up to 0.9. Denoising strength has a large effect on the outcome and may require a lot of playing around, so if you're not happy with your results, keep everything else the same and just give this setting a go.
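For anyone working outside of Automatic1111, here is a minimal, hypothetical sketch of the same settings using the diffusers library. It assumes the single-file checkpoint listed under Files below and your own image and mask; it only illustrates the sampler, step count, CFG and denoising-strength values mentioned above, not the author's own workflow.

import torch
from diffusers import StableDiffusionXLInpaintPipeline, DPMSolverMultistepScheduler
from diffusers.utils import load_image

# Load the single-file checkpoint; diffusers should infer the inpainting (9-channel) config.
# If it doesn't, loading it in Automatic1111 as described above is the safe route.
pipe = StableDiffusionXLInpaintPipeline.from_single_file(
    "juggernautXL_versionXInpaint.safetensors", torch_dtype=torch.float16
).to("cuda")

# DPM++ 2M Karras, as recommended above.
pipe.scheduler = DPMSolverMultistepScheduler.from_config(
    pipe.scheduler.config, use_karras_sigmas=True
)

image = load_image("input.png")   # your source image
mask = load_image("mask.png")     # white = area to repaint

result = pipe(
    prompt="your prompt here",
    image=image,
    mask_image=mask,
    num_inference_steps=30,   # 30+ recommended, up to ~80
    guidance_scale=4.0,       # CFG scale, roughly 2-6
    strength=0.8,             # denoising strength, try 0.8-0.9
).images[0]
result.save("out.png")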
Other stuff:
Created in Automatic1111 by merging the base SDXL inpainting model (by wangqyqq), Juggernaut XL (the non-lightning model), SDXL base and the fp16 fix VAE.
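The exact merge settings aren't stated, but that component list matches the classic "add difference" recipe (A1111 Checkpoint Merger: A + (B - C) at multiplier 1, where A is the inpainting base, B is Juggernaut XL and C is the SDXL base). Here is a rough, hypothetical sketch of that recipe, not necessarily the author's exact procedure -- the filenames are placeholders, and baking in the fp16-fix VAE is a separate step that the sketch omits.

from safetensors.torch import load_file, save_file

a = load_file("sdxl_inpainting_base.safetensors")   # A: SDXL inpainting base
b = load_file("juggernautXL.safetensors")           # B: Juggernaut XL
c = load_file("sdxl_base_1.0.safetensors")          # C: SDXL base

merged = {}
for key, ta in a.items():
    if key in b and key in c and b[key].shape == ta.shape and c[key].shape == ta.shape:
        # Add-difference: keep A's structure, add what B changed relative to C.
        merged[key] = ta + (b[key].to(ta.dtype) - c[key].to(ta.dtype))
    else:
        # Keys with mismatched shapes (e.g. the 9-channel inpainting input conv)
        # are simply kept from A here; A1111 handles that layer a bit more cleverly.
        merged[key] = ta

save_file(merged, "juggernaut_xl_inpainting_merge.safetensors")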
Description
Updated to match the just-released Juggernaut X RunDiffusion.
Comments (42)
What do you recommend for mask blur and the latent settings?
Are you talking about inpainting or outpainting? There is no single value I recommend; it all depends on what you're doing, and it's best to learn not to rely on one value in every situation.
If you're e.g. trying to get rid of an artifact with inpainting, I find that selecting quite a bit larger area than just the artifact and using "fill" instead of "latent noise" or "latent nothing" for the "Masked content" setting usually works better. But if you want to e.g. fix the fingers on one hand, mask only a small area around the fingers and use "original".
I'm still new to AI image generation stuff myself, but maybe I should write an article on how I use this model to outpaint and inpaint stuff. Perhaps that'd help other people.
If I see juggernaut, i think quality! :D
Were you aware that the filename for this inpainting model is 100% identical to the "normal" (non-inpainting) Juggernaut X model?
I had not realized that Civitai mangles the filename. I deliberately saved it as "juggernautXL_juggernautX.inpainting.safetensors" whereas the original model is "juggernautXL_juggernautX.safetensors", but apparently Civitai removes the ".inpainting." from the filename. How annoying.
I'll have to fix that, thanks for the report.
I'm re-uploading the model now with another filename. This Civitai-thing doesn't let me just rename it, I had to delete it and re-upload from scratch...
Well, that didn't work. It's still downloading it with the wrong filename.
@WereCatf I was just coming back to say that lol. Well, at least I snagged the whole thing this time before you removed it XD
@altoiddealer I am going to try to talk with Civitai's support in order to try and get this idiocy fixed. There really should be an option to just rename the already-uploaded files.
Still, thanks for letting me know. I wouldn't have noticed anything otherwise!
The filename issue has been resolved. It's not quite what I'd like, but at least the downloaded filename is no longer the same as the original Juggernaut XL :)
@WereCatf Cheers!
This is a great job. Could you please create an inpainting version for the Pony model? Thank you very much
I tried to, but there's something about the way those Pony models have been created that isn't compatible with my way of making inpainting models -- it only results in random noise. Maybe if I knew exactly how they're made, but I don't, so for now there's not much I can do.
Will this work in A1111?
That's where I am using it myself, so yes.
It doesn't work in Fooocus 🤔
Same here, this inpainting model won't work on Fooocus.
WARNING SHAPE MISMATCH diffusion_model.input_blocks.0.0.weight FOOOCUS WEIGHT NOT MERGED torch.Size([320, 4, 3, 3]) != torch.Size([320, 9, 3, 3])
[Fooocus Model Management] Moving model(s) has taken 3.97 seconds
Traceback (most recent call last):
File "E:\Fooocus_win64_2-1-831\Fooocus\modules\async_worker.py", line 973, in worker
handler(task)
File "E:\Fooocus_win64_2-1-831\python_embeded\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "E:\Fooocus_win64_2-1-831\python_embeded\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "E:\Fooocus_win64_2-1-831\Fooocus\modules\async_worker.py", line 865, in handler
imgs = pipeline.process_diffusion(
File "E:\Fooocus_win64_2-1-831\python_embeded\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "E:\Fooocus_win64_2-1-831\python_embeded\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "E:\Fooocus_win64_2-1-831\Fooocus\modules\default_pipeline.py", line 368, in process_diffusion
sampled_latent = core.ksampler(
File "E:\Fooocus_win64_2-1-831\python_embeded\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "E:\Fooocus_win64_2-1-831\python_embeded\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "E:\Fooocus_win64_2-1-831\Fooocus\modules\core.py", line 310, in ksampler
samples = ldm_patched.modules.sample.sample(model,
File "E:\Fooocus_win64_2-1-831\Fooocus\ldm_patched\modules\sample.py", line 100, in sample
samples = sampler.sample(noise, positive_copy, negative_copy, cfg=cfg, latent_image=latent_image, start_step=start_step, last_step=last_step, force_full_denoise=force_full_denoise, denoise_mask=noise_mask, sigmas=sigmas, callback=callback, disable_pbar=disable_pbar, seed=seed)
File "E:\Fooocus_win64_2-1-831\Fooocus\ldm_patched\modules\samplers.py", line 712, in sample
return sample(self.model, noise, positive, negative, cfg, self.device, sampler, sigmas, self.model_options, latent_image=latent_image, denoise_mask=denoise_mask, callback=callback, disable_pbar=disable_pbar, seed=seed)
File "E:\Fooocus_win64_2-1-831\python_embeded\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "E:\Fooocus_win64_2-1-831\python_embeded\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "E:\Fooocus_win64_2-1-831\Fooocus\modules\sample_hijack.py", line 107, in sample_hacked
positive = encode_model_conds(model.extra_conds, positive, noise, device, "positive", latent_image=latent_image, denoise_mask=denoise_mask)
File "E:\Fooocus_win64_2-1-831\Fooocus\ldm_patched\modules\samplers.py", line 498, in encode_model_conds
out = model_function(**params)
File "E:\Fooocus_win64_2-1-831\Fooocus\ldm_patched\modules\model_base.py", line 117, in extra_conds
if len(denoise_mask.shape) == len(noise.shape):
AttributeError: 'NoneType' object has no attribute 'shape'
Take it up with the Fooocus developers. Nothing I can do about it.
@Putkrapstukas Any news on the subject? I was planning to use this in fooocus, did the developers add support?
@Erke No inpaint model will work in Fooocus, because from what I've read, Fooocus has its own inpaint method that uses the regular model itself, like Juggernaut XL.
Don't use the inpaint model version with Fooocus; Fooocus uses a custom function to do inpainting with classic XL models.
@binauralhealing100139 I see... Thank you for the info...
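The shape mismatch in the traceback above is the giveaway: a dedicated inpainting UNet expects 9 input channels in its first convolution (4 latent + 4 masked-image latent + 1 mask) instead of the usual 4, which is what Fooocus refuses to load. A quick, hypothetical way to check whether any single-file checkpoint is an inpainting model -- the key name is taken straight from the warning above:

from safetensors import safe_open

def is_inpainting_checkpoint(path: str) -> bool:
    # The UNet's first conv weight has shape [out_ch, in_ch, 3, 3];
    # in_ch is 9 for inpainting models and 4 for regular ones.
    with safe_open(path, framework="pt", device="cpu") as f:
        for key in f.keys():
            if key.endswith("diffusion_model.input_blocks.0.0.weight"):
                return f.get_tensor(key).shape[1] == 9
    return False

print(is_inpainting_checkpoint("juggernautXL_versionXInpaint.safetensors"))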
Is this inpainting model blood-free? Couldn't add any blood no matter what.
This is an inpainting version of the original model; it doesn't change what the original model can or can't do.
Hello, I mainly use your model for face swapping, and it's the best one I've tested. However, there's an issue with the support for Chinese females in your model, such as brown eyes and hair. In some scenarios, like a photography studio with white lighting, the effect is not quite realistic. Additionally, the skin color tends to be too yellow after face swapping. I kindly request you to create a model specifically for face swapping, as you're the best model creator I know. Please consider this request.
Not working in Krita
Yes, me too. Should we put this in a different folder in Krita?
Excellent in the webui, but 'SetLatentNoiseMask' in ComfyUI cannot reproduce the webui results.
Don't I need ControlNet? I haven't had a very good experience using it. Can anyone share how they use it?
If you want some great tutorials on all this stuff, check out Pixaroma on YouTube -- he has dozens of tutorials, including one on using this checkpoint model.
controlnet inpainting is a nightmare
Great work! Is there going to be a version for Juggernaut XI?
When using the LayerDiffusion plugin, an error occurs when it reaches the sampler: "The new shape must be larger than the original tensor in all dimensions"
Hi! So if I download this version, I don't need to use the regular Juggernaut XL anymore?
Hello!
I am a humble indie solo developer. I am working on a dream game that will most likely never be released. However, I still don't want to violate anyone's rights...
Could you please tell me if I can use this model to edit images for my game?
PS. Of course, I will credit the use of this model in the game's credits if it ever gets released!
why u ask, just do it
If the game never appears, you don't have to worry anyway. If it appears but is not particularly successful, neither do you. And if it is successful, you'll have enough money to leave the worries to your lawyers.
CreativeML Open RAIL++-M License always means you do you boo
So how exactly do I use this? Any workflows? Does this go into the regular checkpoint folder, or does it have to be in an inpaint folder for an inpaint node?
It inpaints so poorly -- any prompt guidance?
Details
Files
juggernautXL_versionXInpaint.safetensors
Mirrors
juggernautXL_versionXInpaint.safetensors
inpaint_v10.safetensors
MyBack_SDXL_JuggerXL_inpaint_V10(version_X).safetensors
ip.safetensors
jxlvxi.safetensors