NOTE! About 1.3b:
This might be the last version, as the HDD containing all my Stable Diffusion checkpoints and images broke a few days after I made this version. So I can't really remake it anymore, as the recipe was lost as well.
I have a separate instance for SD generation which contained some things, but most of it was lost.
NOTE! Added guide for inpainting in Forge to this info text with some tips if you are interested in it, see more below.
This is an Illustrious-based, sort of semi-realistic anime-style checkpoint merge.
There are plenty of other (even better) checkpoints, but they are heading in other directions. So I made a merge of Illustrious for personal use, with emphasis on certain styles, LoRAs, and things I generally like.
So there's no guarantee it does anything. Some things probably broke along the way too.
It has no "grand direction"; it's more like whatever I'm messing around with at the time.
Briefly about inpainting/denoising for 1.2b in Forge:
You can use a rather high denoising strength in Forge, like 0.4 or 0.5, to bring out details while still retaining the picture composition, if you mask items individually or clothing piece by piece (like a shirt or an armor piece on its own).
This works nicely on amulets or smaller detailed items.
0.1-0.2 denoising works for plain cloth or items which do not need intricate details.
This all depends on whether the prompt is stable and whether any LoRAs are in use.
The base model itself can do this fine with a stable prompt. The more LoRAs are added, the harder inpainting gets, as the weights start to fluctuate for a small inpainted area.
One example: if one LoRA has higher weights, e.g. for faces, it'll draw that instead of the item if high denoising is used and the piece has appropriate colors, etc. (much like what happens with ADetailer).
Of course you can adjust the prompt and remove parts or LoRAs while inpainting, then return to the full prompt and try another piece.
Longer guide for inpainting in Forge:
1a. Make an image with txt2img
You can make it with or without hires fix
For hires fix at 1.25x or 1.5x upscale, you can use 0.4 or 0.5 denoising (depends on the prompt and LoRAs in use)
After generation, below the image, choose the palette icon labeled "Send image and generation parameters to img2img inpaint tab."
1b. OR input image from PNG Info tab:
This can be any image you have made previously, as long as it has the generation parameters embedded in it
Drag and drop the image onto the "drag image here" source area, or click it to open a file dialog
Afterwards choose "Send to inpaint"
NOTE! Doing this will most likely reset the "Inpaint area" setting to "Whole picture" in the inpaint tab, so remember to change it back to "Only masked"; more on this below.
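As a side note, the PNG Info tab works because A1111-style UIs (Forge included) embed the generation parameters in a PNG tEXt chunk whose keyword is "parameters". A minimal stdlib-only sketch, under that assumption (the example prompt and settings are stand-in data):

```python
# Sketch: how an A1111-style PNG Info tab finds the generation parameters.
# Assumption: parameters live in a tEXt chunk keyed "parameters".
# Pure-stdlib demo: build a minimal PNG carrying such a chunk, then scan
# the chunk list to read the parameters back.
import struct
import zlib

def chunk(ctype: bytes, data: bytes) -> bytes:
    """Serialize one PNG chunk: length, type, data, CRC."""
    return (struct.pack(">I", len(data)) + ctype + data
            + struct.pack(">I", zlib.crc32(ctype + data)))

params_in = "1girl, armor\nSteps: 28, CFG scale: 5, Seed: 1234"

# Minimal 1x1 grayscale PNG with a tEXt "parameters" chunk (stand-in data).
png = (b"\x89PNG\r\n\x1a\n"
       + chunk(b"IHDR", struct.pack(">IIBBBBB", 1, 1, 8, 0, 0, 0, 0))
       + chunk(b"tEXt", b"parameters\x00" + params_in.encode("latin-1"))
       + chunk(b"IDAT", zlib.compress(b"\x00\x00"))  # filter byte + 1 pixel
       + chunk(b"IEND", b""))

def read_parameters(data: bytes) -> str:
    """Walk the chunks and return the 'parameters' tEXt payload, if any."""
    pos = 8  # skip the PNG signature
    while pos < len(data):
        (length,) = struct.unpack(">I", data[pos:pos + 4])
        ctype = data[pos + 4:pos + 8]
        body = data[pos + 8:pos + 8 + length]
        if ctype == b"tEXt" and body.startswith(b"parameters\x00"):
            return body[len(b"parameters\x00"):].decode("latin-1")
        pos += 12 + length  # 4 length + 4 type + data + 4 CRC
    return ""

print(read_parameters(png).splitlines()[0])  # -> 1girl, armor
```

This is also why editing an image in another program usually strips the metadata: the editor rewrites the file without that chunk, and "Send to inpaint" has nothing to read.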
2. Either way, the UI should automatically navigate to the img2img -> Generation -> Inpaint tab
3. A few things to check before doing any inpainting:
a. In general these should be filled in automatically, but do a quick double-check. They should all match the settings the image was generated with:
Sampling method
Schedule type
Sampling steps
CFG Scale
Seed
b. Inpaint Area setting:
Change it to "Only masked", to inpaint only the masked area and leave the rest of the picture untouched
This will enhance the details on the masked part (item/clothing piece/etc)
c. Resize to / Resize by setting:
Change the setting to "Resize by" with Scale: 1
This forces an upscale of the masked part, but does not change the overall dimensions of the image
This usually lets the details come out better when inpainting, and also allows several inpainting passes on one image, one after another
d. Additional settings. You shouldn't need to change these:
Resize mode: Just resize
Masked content: original
NOTE! If you have LaMa cleaner, you can choose it here to remove watermarks. Just don't use it for regular inpainting!
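The checklist above can also be expressed as an API request body, for anyone scripting this instead of clicking through the UI. A sketch assuming the A1111-compatible `/sdapi/v1/img2img` endpoint that Forge can expose with the `--api` flag; the field names follow the A1111 API schema and should be verified against your own install's `/docs` page:

```python
# Sketch: the inpaint settings above as an img2img API payload.
# Assumption: A1111-compatible /sdapi/v1/img2img schema (Forge with --api).

def inpaint_payload(prompt: str, init_image_b64: str, mask_b64: str,
                    denoising: float = 0.5, seed: int = -1) -> dict:
    """Build a payload mirroring the guide's recommended inpaint settings."""
    return {
        "prompt": prompt,
        "init_images": [init_image_b64],  # base64-encoded source image
        "mask": mask_b64,                 # base64 mask: white = inpaint here
        "denoising_strength": denoising,
        "inpaint_full_res": True,         # "Only masked"
        "inpaint_full_res_padding": 32,   # "Only masked padding, pixels"
        "inpainting_fill": 1,             # masked content: 1 = "original"
        "resize_mode": 0,                 # "Just resize"
        "seed": seed,                     # reuse the original image's seed
    }

# Placeholder strings stand in for real base64 image data.
payload = inpaint_payload("1girl, ornate amulet", "<base64 image>",
                          "<base64 mask>", denoising=0.4, seed=1234)
print(payload["inpaint_full_res"], payload["denoising_strength"])
```

The point is only to show which UI setting maps to which field; POSTing it to a live server is left out on purpose.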
4. Denoising strength, depending on what you want to do:
a. NOTE 1: This is most likely something you will be adjusting during inpainting, piece by piece or per masked area you want to change
b. NOTE 2: Sometimes the masked area also affects how the denoising works, so occasionally it may be better to just increase the size of the mask instead of changing the denoising strength
c. denoising values for different purposes:
0.5 - in general a good starting point when trying to bring out 1:1 increased details on a piece, but it might "hallucinate" new elements into it depending on the prompt. Might bring out the "original improved details" which were lacking from the basic txt2img generation.
0.4 - reduces "hallucination", but still brings up details
0.2 - flattens out cloth or items (like metal) and edges, without much "additional details" added into them
0.1 - just to adjust edges and slight smoothing, etc
EXTRA:
0.55-0.7 if you want to try to completely redesign a clothing piece or item (like a necklace/amulet/bracer). Also works for hands if you want to get rid of extra fingers, but this really depends on the masked-area size too (can be hectic at times).
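The denoising guidance above can be condensed into a small lookup helper. The values mirror the list; the purpose names are my own shorthand, not anything from Forge:

```python
# Sketch: the denoising-strength guidance as a lookup table.
# Purpose names are made up for this example; values are from the list above.
DENOISING = {
    "redesign": 0.6,     # ~0.55-0.7: completely redo a piece / fix hands
    "detail": 0.5,       # 1:1 detail boost, may hallucinate new elements
    "detail_safe": 0.4,  # still brings out details, less hallucination
    "flatten": 0.2,      # smooth plain cloth / metal without extra detail
    "touch_up": 0.1,     # edge adjustment and slight smoothing only
}

def pick_denoising(purpose: str) -> float:
    """Return a starting denoising strength for the given goal."""
    return DENOISING[purpose]

print(pick_denoising("detail_safe"))  # -> 0.4
```

These are starting points, not rules; as noted above, the prompt, LoRAs, and mask size all shift what actually works.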
5. Controls for the actual inpainting / masked-area editing in the visible image section:
Small control buttons appear at the top of the image when the mouse hovers over the image section:
Use left mouse button to mask an area for inpainting
Use right mouse button to move the image in the inpaint section
Undo button will remove last masked area
Redo will bring back last masked area, if it was removed with undo
Reset will clear all the masked areas (but pressing undo will bring the previous ones back)
Center position will reset the zoom and center the image
Mouse wheel to zoom in / zoom out
Remove will remove the image from inpainting (do not press unless really sure!)
Brush size slider, to increase or decrease the size of the brush used to mask the area
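Under the hood, the brush controls above just paint a grayscale mask image: white (255) where inpainting applies, black (0) elsewhere. A stdlib-only stand-in (a real UI writes an actual image file, not a nested list):

```python
# Sketch: what the mask brush produces under the hood.
# A mask is a grayscale image: 255 = inpaint this pixel, 0 = leave alone.
# Nested-list stand-in for illustration only.

def blank_mask(w: int, h: int) -> list[list[int]]:
    """All-black mask: nothing selected for inpainting yet."""
    return [[0] * w for _ in range(h)]

def brush(mask: list[list[int]], cx: int, cy: int, radius: int) -> None:
    """One circular brush dab, like a single click with the mask brush."""
    for y in range(len(mask)):
        for x in range(len(mask[0])):
            if (x - cx) ** 2 + (y - cy) ** 2 <= radius ** 2:
                mask[y][x] = 255

m = blank_mask(64, 64)
brush(m, 32, 32, 10)  # mask a small item in the middle of the image
masked = sum(v == 255 for row in m for v in row)
print(masked > 0, m[32][32], m[0][0])  # -> True 255 0
```

Undo/redo in the UI then amounts to keeping a history of these mask states, which is why undo can restore a cleared mask.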
6. Actual inpainting:
Zoom in into the area you want to focus on
Use mouse to mask the area with the brush
Limit the mask to the edges of the item, like a piece of clothing. This model likes to draw black outlines, which you can use as a guide
Occasionally it works better if you make the mask a bit larger than the actual piece; this helps the model retain the "idea" of the item it had during generation.
If it's a sword in a hand, you may want to mask some extra: the sword hilt, the hand, and part of the arm/sleeve as well, as otherwise the inpainting might do wacky things to the hand/fingers
E.g. for swords, you can do the blade and hilt separately. You may have to experiment depending on the item
For pieces of clothing/armor, it depends on the piece: either the complete piece at once, or section by section, like the center piece first, then the sleeves separately (or shoulder/elbow/etc. pieces one by one)
Generally experience helps: the more you do, the better you get at adjusting the mask for a given situation or type of content (2D/3D/semi-realistic, etc.).
Try to limit the size of the inpainted area; do not make it TOO large, or the quality will drop again and the details will not improve.
It might be better to do garments layer by layer: first the ones beneath, then the top ones, to improve quality. But sometimes it works better to mask them all at once, and the generation will fix seams and layers (like a shirt tucked under pants) to look better visually. Or you can inpaint just the seam itself, or something like corset strings.
If you're happy with the mask, click Generate and it'll inpaint the area under the mask
7. After inpainting:
If the actual size of the item changed (edges/hair/threads or finger positions), there may be "artifacts" left behind on the edges of the item/area. These can be removed by making the mask a bit larger on that edge and generating the picture again.
HOWEVER, this will also change the inpainting result itself (for better or worse), so check again whether the result is what you want (that's what the undo/redo buttons are for!).
If you are happy with the changed piece, below the result image choose the palette icon labeled "Send image and generation parameters to img2img inpaint tab.", and it will refresh the inpainting image to the one you just made. This lets you keep inpainting, and you can move on to the next piece/part you want to change.
As the "Resize by" scale is 1, the image size does not increase, so you can keep inpainting that same image over and over again.
If the result was not what you wanted, or you want to go back a few steps to a version which was better, go to the PNG Info tab, drag & drop the image you want, and send it to inpaint; this brings back that older version for inpainting.
NOTE! Doing this will most likely reset the "Inpaint area" setting to "Whole picture" -> change it back to "Only masked". Doing it this way retains the masked area you had previously in inpainting.
Also note that the LoRAs used in generation WILL affect the inpainting result, as they change the weights of the image generation.
If you have a lot of LoRAs with high weights, they WILL force themselves into the inpainting; with high denoising, the generation may create a completely new image in the masked area instead of improving the details of the original, as the LoRA weights skew what the result is supposed to be.
You can work around this by lowering the LoRA weights and adjusting the prompt in general during inpainting.
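Lowering the LoRA weights for an inpainting pass can even be scripted, assuming the common `<lora:name:weight>` prompt syntax that Forge/A1111 use. A small sketch; the helper name and the example prompt are made up:

```python
# Sketch: scale every LoRA weight in a prompt before an inpainting pass.
# Assumption: the <lora:name:weight> syntax used by Forge/A1111 prompts.
import re

def scale_loras(prompt: str, factor: float) -> str:
    """Multiply every <lora:...:w> weight by `factor`, leave the rest alone."""
    def repl(m: re.Match) -> str:
        return f"<lora:{m.group(1)}:{float(m.group(2)) * factor:.2f}>"
    return re.sub(r"<lora:([^:>]+):([0-9.]+)>", repl, prompt)

p = "1girl, armor, <lora:styleA:1.0>, <lora:faceB:0.8>"
print(scale_loras(p, 0.5))
# -> 1girl, armor, <lora:styleA:0.50>, <lora:faceB:0.40>
```

Halving the weights this way keeps the LoRA influence without letting it overwrite the masked area; you can restore the full-strength prompt afterwards.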
This inpainting process is not limited to this model; it also works with other models.
However, depending on the model/checkpoint, the details may not improve at all, or it may just produce garbage no matter what you do (just make sure the inpaint area really is "Only masked" and that it is not too large!).
It may simply be that the model is not suited for inpainting; this model should be able to do both generation and inpainting.
Description
Works on my machine™
This one is a bit more 2D-like than the later merges (which have more realism mixed in).
Comments (15)
It was pretty weird to see Wehrmacht crosses on a bikini, since I hadn't prompted anything like that:
https://civitai.com/images/67491388
so I wonder where it could have come from?
The only thing I can think of is the word "finnish", as those were used on world war army uniforms, and it might make some sort of association from it. I haven't done any training or merging related to those myself, so it comes from somewhere. Also, in your prompt you have a semicolon ; in "fringe;1.3)" instead of :, which will affect the prompt, as it does not increase strength.
Let me see if I can replicate that
I don't use ComfyUI, but I put the same prompt into Forge, without any LoRAs, and got the same effect. I removed the word "Finnish" and it doesn't do it anymore, even if I otherwise play with the prompt back and forth. So I have to assume it's making some sort of association with Finnish world war army uniforms for some reason, and that's causing it.
The model is not a pure anime model; it is semi-realistic, so it has some realistic parts/checkpoints merged into it (even base SDXL has knowledge of those). So using real-world words in an anime picture will have that kind of effect. "Finnish" is not in booru tagging for anime pictures, so using it triggers the "real-life knowledge portion", which makes associations based on that.
Maybe because it's not specified what kind of "finnish" items you want, it just guesses something in this context based on the other words in the prompt, or puts in something random since nothing is specified. Like "let's put in something Finnish", and some picture in the training data had a Finnish military uniform or world war decorations tagged as finnish, and here we are.
Ah, booru tagging also has "finnish_army" (and other finnish_* tags), which include military uniforms. These are also covered by the bare word "finnish" (when nothing else is specified).
They are related to some World War-esque anime series, which have old army uniforms in them, etc.
So it might pop up from there as well, instead of real-life material, but probably both, as even base SDXL knows that army context. I had this kind of problem in SD1.5 too, way back.
@Ongelmanratkoja Thank you for your research!
I also thought it might come from the "finnish" tag (it worked on some checkpoints to make a face a bit rounder, so it's kind of an inheritance from my previous images) with a relation to WWII; I also thought it could come from something like 'Girls und Panzer' =) Unfortunately I had no time for in-depth research yesterday, thanks again for your effort.
Generally I like the style of your checkpoint very much, so I will make some other images (without the "finnish" tag LOL) and post them if you don't mind :)
@Tyomanator Yeah, 'Girls und Panzer' had plenty of "finnish" tags on it. These models can do rather weird things occasionally when they try to put the pictures together.
Glad you like the style! You can post stuff, I don't mind... I like the style too!
I also experiment with different concepts a lot, so interesting to see what others can make!
@Ongelmanratkoja Actually I found the style amazing; I tried it a bit and posted, and will certainly make some more experiments over the weekend.
Just to keep things clear, I noticed there are almost no NSFW images in the gallery. Would you mind if I post some, or maybe you have some "red lines" on that matter? I try to respect checkpoint creators' preferences =)
@Tyomanator The model has only been here a few days, so there aren't many posts yet. Personally I don't really post/make NSFW stuff (maybe a few here or there occasionally).
I don't really mind it. It's just I don't really personally make that kind of stuff in general.
So no red lines, I just don't make them. So maybe someone else will :D
@Ongelmanratkoja Thanks, then I will certainly try making some =)
Amazing checkpoint, very clear and colourful images, certainly worth trying!
For me, in local ComfyUI, generations worked best with DPM++ Karras, even at 16 steps.
Very nice. Thank you for your work.
@evrenny Thanks, and nice pictures too! Hope you don't mind that I borrowed one of your sorceress minion prompts; I only tweaked it a bit: https://civitai.com/images/71464793
I've been trying to get something like that to work, but prompting for Illustrious has been different from SD1.5, where I got those working nicely.
@Ongelmanratkoja I think it looks great, the checkpoint you made is really good — it's one of my favorites!
@evrenny Glad you like it! Certainly has been fun playing around with.




