If you like my creations, please consider buying me a coffee on Ko-Fi. Thank you! :)
No models using any license other than the standard creativeml-openrail-m license were used in this merge.
I have added an inpainting model, based on my v3 model, which works very well with any version of the GTM_UltimateBlend, and with many other models as well. Give it a try!
IMPORTANT!! For some reason the filename changes on Civitai. Please ensure you rename it to "GTM_UltimateBlend_inpainting_v3.safetensors"
If you're not already using it, I do recommend this in/outpainting app. It integrates with AUTOMATIC1111 via its API; setup instructions are under the "Help" button.
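For anyone wiring up their own tool instead, a minimal sketch of what an inpainting call against the AUTOMATIC1111 API can look like is below. This assumes a local instance launched with the `--api` flag on the default port; the filenames and the parameter values (steps, denoising strength) are illustrative, not recommendations from this model's author.

```python
import base64

def encode_image(path):
    """Read an image file and return it base64-encoded, as the API expects."""
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode("utf-8")

def build_inpaint_payload(image_b64, mask_b64, prompt):
    """Assemble the JSON body for POST /sdapi/v1/img2img (inpainting)."""
    return {
        "init_images": [image_b64],      # the source image
        "mask": mask_b64,                # white = area to repaint
        "prompt": prompt,
        "denoising_strength": 0.75,      # how strongly the masked area is repainted
        "inpainting_fill": 1,            # 1 = "original" fill mode
        "inpaint_full_res": True,        # "only masked" resolution mode
        "steps": 30,
    }

# Sending it would look roughly like this (requires the `requests` package):
# requests.post("http://127.0.0.1:7860/sdapi/v1/img2img",
#               json=build_inpaint_payload(encode_image("photo.png"),
#                                          encode_image("mask.png"),
#                                          "a white t-shirt"))
```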
The correct inpainting VAE is now baked into the model.
If you want to help me out, or just have a look at the public images, then please hop over to my Patreon: https://www.patreon.com/galaxytimemachine
Some before and after inpainting and outpainting examples below.
Description
Now based on my v3 model and with correct inpainting VAE baked in.
Comments (16)
Hi, thanks a lot. Which one should I download: the first one at 7.6 GB, or the second one (v3) at 3.9 GB?
The latest one (v3) is the better one.
@galaxytimemachine could you also kindly provide the prompts that were used to create the samples? I tried and failed miserably!
@ShishoSama This is an inpainting model, not for model creation.
@galaxytimemachine Which model did you use for the examples before the inpainting? and what were the prompts for the girls (roughly)?
What an interesting model, but how do you actually prompt it? Like, removing the clothes or adding things...?
@galaxytimemachine What are the best inpaint settings you'd recommend? Sometimes it works flawlessly, and sometimes an image just stumps me.
This is a fantastic inpainting model, but there are situations it really struggles with. It has a hard time replacing anything black -- if you try to turn a black sweater into a white t-shirt for example, it isn't going to cooperate. You will just keep getting different black garments. I wish I understood why.
It also has difficulty removing/replacing things that go all the way to the edge of the frame. If a subject is wearing a yellow shirt and you want to make it a red shirt, it will probably work fine unless the yellow shirt goes all the way to the edge of the photo. If that happens you may need to change the shirt in little pieces, sending the output back to inpaint and slowly work at it over time on several successive inpaints. I don't know why this happens.
That being said it is a terrific model and apart from the specific situations described above, it is great at what it does.
Just for curiosity: have you tried also with sketch inpaint?
@settima_ai no I have not
@PlasteredDragon With "inpaint sketch" you give a great "suggestion" to Stable Diffusion: you can suggest the shape and colors you want to inpaint. Try it, it's very powerful.
@settima_ai I will, thank you!
Can you post some examples with the prompt command?
For NMKD SD GUI users: you need to put "v3" before "inpainting" and also add a "-" before "inpainting", so the filename ends with that; otherwise it isn't recognized. In other words, follow the naming of the 1.2 version of the inpainting file!
Pro tip: I've found that cutting out the elements you don't want in Photoshop, leaving them blank, and saving as a PNG works much better than the giant, unwieldy brush native to SD.
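A transparent PNG made that way can be converted into the black-and-white mask most inpainting tools expect. A minimal sketch using Pillow (an assumption; any image library with alpha-channel access works) that turns erased, transparent regions into white mask pixels:

```python
from PIL import Image

def mask_from_alpha(rgba_image):
    """Build an inpainting mask from a PNG's transparency:
    white (255) where the image was erased, black (0) elsewhere."""
    alpha = rgba_image.getchannel("A")
    # Fully transparent pixels (alpha == 0) become white in the mask.
    return alpha.point(lambda a: 255 if a == 0 else 0)

# Tiny demo: a 4x4 opaque image with one "erased" pixel.
img = Image.new("RGBA", (4, 4), (255, 0, 0, 255))
img.putpixel((1, 1), (0, 0, 0, 0))
mask = mask_from_alpha(img)
print(mask.getpixel((1, 1)), mask.getpixel((0, 0)))  # → 255 0
```

In practice you'd open your cut-out PNG with `Image.open(...)` and save the result as the mask image.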
Why do my pictures have extra colors after they come out?