Introduction
Hi everyone! I wanted a simple Img2Img workflow that saves metadata and uses LoRA Manager. I also wanted one that could handle resizing images, because I hate fiddling with resolutions. I found an old workflow by Postpos and incorporated some genius Comfy spaghetti from Legendaer to get all the strings to arrange themselves into both the metadata and the conditioning ... and this simple workflow is the result. Presented as-is; if it doesn't work, let me know, but I might not be able to help you. This workflow should take any SDXL base checkpoint, including Pony and Illustrious, and seems to have no problem going from anime to realism and back. Remember to set your Denoise setting (~0.5 for stronger adherence to the input image, ~0.7 for more creativity).
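The denoise value behaves the way it does in most img2img implementations: it controls how much of the sampler's noise schedule actually runs, so ~0.5 keeps roughly half of the input image's structure while ~0.7 lets the model rework more of it. A minimal sketch of that common convention (the function name is mine, not anything from the workflow):

```python
def img2img_steps(total_steps: int, denoise: float) -> tuple[int, int]:
    """Map an img2img denoise strength to the portion of the schedule that runs.

    A common convention: with N sampler steps and denoise d, only the last
    round(N * d) steps are executed; earlier steps are skipped because the
    input image already provides that structure.
    """
    steps_run = round(total_steps * denoise)
    start_step = total_steps - steps_run
    return start_step, steps_run

# e.g. 20 steps at denoise 0.5 -> start at step 10, run 10 steps
```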
Base Workflow Features
Shortcuts by rgthree
Save metadata and LoRAs in a way that is CivitAI compatible
Toggle auto-resizing on or off
Natural Language and Tags prompting
LoRA Manager for ease-of-use
Workflow Embedding (toggle)
Before/After Image Comparer
Custom Node Requirements:
rgthree
Impact Pack
KJNodes
Easy-Use
ComfyMath
ComfyUI Image Saver
ComfyUI-LoRA-Manager
CG Use Everywhere - for Anything Everywhere
Efficiency-Nodes
Dynamic Thresholding
Skimmed_CFG
Comfyroll Studio
ComfyUI_essentials
Derfuu_ComfyUI_Modded Nodes
ComfyUI Impact Subpack
ComfyUI-mxToolkit - for the sliders
ComfyUI_Mira
I realize the number of nodes increases along with the version number. This is still a fairly basic workflow, should not be overwhelming, and remains very much like previous versions.
New with Version 3.0
I wanted to keep the workflow simple while providing a little more detailer power. Rather than the complexity of SEGS filtering, this workflow gives you previews of the SEGS: you simply get a preview image and can adjust the prompts based on it. The colors of the previews/prompts alternate so you don't get mixed up. I added a hands detailer, because we know we're gonna need it with AI. There is one new custom node requirement, CG Use Everywhere, for the Anything Everywhere node.
I added a LoRA slot specifically for each detailer that will hopefully really bring things into focus. Note that this simple node is inserted without a connection to your existing LoRA stack. If it interferes with your generation, remove it, or connect the Model output of your LoRA loader to the input of this node, depending on what you hope to accomplish.
New with Version 3.1
SEGS Detailer arrays upgraded to Module 1.1 (same as my Text2Img workflow): Replaced Image Previews with Image Comparer nodes.
New with Version 3.4
Color Matching after the upscaler, to counteract the slight tendency of upscalers to shift colors toward green or yellow. Works great; no need to mess with it.
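The workflow handles this with a color-match node; as a rough illustration of the underlying idea, here is a minimal Reinhard-style channel-statistics match — not the node's actual code, just the general technique of shifting the upscaled image's per-channel mean and spread back toward the pre-upscale original:

```python
import numpy as np

def match_color_stats(image: np.ndarray, reference: np.ndarray) -> np.ndarray:
    """Shift each channel of `image` so its mean/std match `reference`.

    Both arrays are float images in [0, 1] with shape (H, W, C). This is the
    classic Reinhard-style statistics transfer; real color-match nodes offer
    more sophisticated methods, but the goal is the same: undo a uniform
    color cast (e.g. the green/yellow drift some upscalers introduce).
    """
    out = image.astype(np.float64).copy()
    ref = reference.astype(np.float64)
    for c in range(out.shape[-1]):
        mu_i, sd_i = out[..., c].mean(), out[..., c].std()
        mu_r, sd_r = ref[..., c].mean(), ref[..., c].std()
        if sd_i > 0:
            # Rescale the channel's spread, then re-center on the reference mean.
            out[..., c] = (out[..., c] - mu_i) / sd_i * sd_r + mu_r
        else:
            out[..., c] = mu_r
    return np.clip(out, 0.0, 1.0)
```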
CFG Controls: CFG Skimming and Dynamic Thresholding, great nodes that allow more flexibility in your CFG setting and generally improve the overall output.
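For background, the Dynamic Thresholding node builds on an idea from the Imagen paper: after the classifier-free guidance combine, values that overshoot the valid range are percentile-clamped and rescaled rather than hard-clipped, which is why high CFG values stop blowing out the image. A minimal sketch of that base idea (the actual ComfyUI node is far more configurable):

```python
import numpy as np

def cfg_with_dynamic_threshold(uncond: np.ndarray, cond: np.ndarray,
                               scale: float, percentile: float = 99.5) -> np.ndarray:
    """Classifier-free guidance followed by Imagen-style dynamic thresholding.

    `uncond`/`cond` stand in for the model's unconditional and conditional
    predictions in [-1, 1] space. High `scale` pushes values out of range;
    dynamic thresholding rescales by a high percentile of |x| instead of
    hard-clipping, preserving relative detail.
    """
    x = uncond + scale * (cond - uncond)      # standard CFG combine
    s = np.percentile(np.abs(x), percentile)  # how far the bulk of values overshoot
    s = max(s, 1.0)                           # only rescale if we actually exceed [-1, 1]
    return np.clip(x, -s, s) / s
```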
New With Version 4.0
Changed the layout somewhat, creating an upper left-to-right workflow area and putting the utilities below.
Implemented sliders and put most of the important controls up front in the Input section. You may still want to tweak the detailer prompts etc. in their own areas.
New With Version 5.0
To bring my workflows up to speed and into alignment so that they work and handle similarly, many small changes were made to ensure the two workflows handle as a pair. Banners were added for the detailers and upscaler to improve user orientation, and the LoRA blockers now work with toggles instead of spaghetti strings. The spaghetti zone was moved out of the left-to-right flow, and the subgraph for resizing images was repackaged. The checkpoint loader now loads directly, allowing you to download a checkpoint and drop it into the native ComfyUI node without rebooting your server.
General:
Using Img2Img is fun, and with THIS workflow you have complete control over what faces look like. You may not be able to get the face detailer to work on all furries, but you can still get a lot done with this workflow. Due to the way the face detailer works, it really only makes sense to use it in I2I workflows. I2I approaches can obliterate the vanishing-point effect that Illustrious can create; they give you a predictable structure and framework, and the results ... well, you might just be surprised!
Whereas before you were at the mercy of randomness and checkpoint biases, with this detailer you can get ethnicity and expression right on the first shot. Details in the workflow!
Description
Image Preview
Toggles for:
Hires Fix
Preview Image Save
Face Detailer
Comments
Why the 2nd face detailer?
Good question! I forgot to explain that part. There is one detailer, as far as I recall: it detects every face in the image and repeats until all faces are done. It will even do a painting on the wall, etc. It can end up taking a while, which is why that preview image node is there, for making sure the rest of the image is satisfactory.
Gosh there can’t really be two can there?
@MGHerder A single detailer will do every face it detects, but with the same prompt, CLIP, and model settings. You have two of them in line, with no separate prompts or regions. You'd typically break it up to further refine with a different prompt or scheduler function. Plug a GITS scheduler in there, set your noise and CFG (coefficient) high on pass one and low on pass two. I think that will greatly improve what you are aiming to achieve.
Also, you have no control over the seed, so if you do go to make changes, it'll reprocess both.
@lonecatone23 Thank you very much for your sage advice. I really appreciate it. So far, the workflows I have used aren't that sophisticated, but it sounds like maybe face detailers have caught up to inpainting -- if we can have separate regions for them that actually work! I'll revise and put out a new version.
@MGHerder Take a look at what I do: I run eyes separately. There are also teeth and lips.
If you use a good hi-res fix first, 90% of the work is done for you.
Thanks for your help. I was unable to get the GITS scheduler to work, and I actually found it worrisome to even try: there's a line of code you have to change on your system to avoid getting an error, and while that worked, updating that code could break it again, which is just not good for a workflow intended for mass use. So I created a SEGS filter chain instead to get the results I was hoping for. Works like a charm and I've been having a blast. Feel free to take a look, senpai!!!111
@MGHerder Then this will blow your mind
https://www.youtube.com/watch?v=jgimrovxT3E&t=195s
@lonecatone23 what did I just watch? Did my whole day go up in smoke? It never ends, the learning.. 😭😵💫😮💨😵😵💫
@MGHerder Tell me it's not fun.
I was today years old when I found that node and scratched my head.
@lonecatone23 Yeah, for sure. I have to take a closer look into how he did that. Maybe his workflow doesn't do fresh gens; it just finds the SEGS (and displays them, what a cool node to have) and details them.
Yeah, you have to fix the seed for sure. It's part of the Impact Pack. I messed around with it for a minute today and got it to work. It does hands as well, which was cool.
@lonecatone23 Hands with this level of control and prompting is a total game changer, and since it's SEGS-based you don't need a separate detailer and filter chain; you can just change the bbox selector. 🤯
I just found that if you are working on just one face (I was getting a deformed result), you can keep the seed random and pass through the KSampler. It will still emit the latent image, which gets decoded by the VAE (there's nothing left to denoise; it's already a finished image), and it can then be worked on by the SEGS Detailer.
You should fix the seed and then use a different seed on the detailers. Also, you should use an Ultimate Upscaler as your hi-res fix.
I went and updated this yesterday so you could tear it apart. Look at how I use the detailers. There are benefits to controlling the noise and having individual prompts.
https://civitai.com/models/2217762/post-processing-and-detailing-workflow
@MGHerder Hey, did you ever finish messing around with that SEGS picker node or figure out multiple faces? I can't remember.
Thanks for your work. I've been waiting to find this kind of I2I workflow. Unfortunately, I had trouble using it because it relies on WAS Node Suite (Revised), which seems to need the old NumPy 2.3, and I don't want to downgrade because other workflows are using newer NumPy versions. So if you someday update it to work with a different custom node, I'm willing to test it.
I'm making some big changes and I'll take it under advisement.
Okay as of now, the WAS Node Suite has no usage in version 2.1! I'll be releasing it soon.
I disabled the WAS node suite on my system and found the resize portion used it. I'm really sorry if you ran into problems because of that. 🙂↕️🙇 I didn't properly validate my claims and that slipped by. With 3.2 that node suite is out.



