How to use: HERE
This workflow may require a relatively large amount of VRAM.
v2.0 - Image Sender nodes have been replaced with Load Image nodes.
Each character’s Detailer has been modified to work based on that character’s mask.
The conditioning has been connected so that it is input separately for each character.
v1.5 - Organized nodes, replaced the DWPose Estimator in Phase 2 with OpenPose Pose. A realistic base image is no longer required.
This workflow utilizes the Attention Couple node for regional conditioning.
It operates in two phases to generate images.
While I understand that not working in a single queue might not be ideal, this is an attempt to achieve better results.
If you have any ideas for improvements to this workflow, please let me know.
This workflow was inspired by: https://civarchive.com/models/541881/regional-conditioning-with-lora-support
The Attention Couple node was implemented using A8R8's nodes: https://github.com/ramyma/A8R8_ComfyUI_nodes
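For intuition, the idea behind mask-based regional conditioning can be sketched in a few lines of NumPy. This is an illustrative toy only, not the Attention Couple node's actual implementation: each prompt's conditioning influences only the pixels covered by that character's mask.

```python
import numpy as np

# Toy sketch of mask-based regional conditioning (illustrative, NOT the
# real Attention Couple code): each character's conditioning affects only
# the region covered by that character's mask.
h, w = 4, 8
mask_a = np.zeros((h, w))
mask_a[:, : w // 2] = 1.0          # character A occupies the left half
mask_b = 1.0 - mask_a              # character B occupies the right half

cond_a = np.full((h, w), 2.0)      # stand-in values for prompt A's conditioning
cond_b = np.full((h, w), 5.0)      # stand-in values for prompt B's conditioning

# Blend: each pixel takes the conditioning of whichever mask covers it.
combined = mask_a * cond_a + mask_b * cond_b
```

In the real node the blending happens inside the model's attention layers rather than on raw pixel values, which is why controlling the transition between regions is harder than this sketch suggests.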
Comments
Hi, I'm getting an error at the KSampler for Phase 2:
cannot access local variable 'xy_plot_image' where it is not associated with a value
https://imgur.com/BC542Fm
Little update: apparently my KSampler (Efficient) node was bugged or something? Fixed it by reinstalling the custom node.
I checked the image in the link you gave. denoise is set to NaN. Try setting it to 1 :)
@Likke0703 I managed to get it working, thank you. Do you know if there is any way to skip the face detailing on male faces? That process takes quite a lot of time, and most of the time males aren't really the main focus of the composition.
@cravenikke42360 There was a node that made the detailer work only on female faces, but at some point it stopped working while I was using it, so it was excluded from this workflow. If you're interested in that, look for the HF Transformers Classifier Provider and SEGS Classify nodes in the Impact Pack.
@Likke0703 Thank you, that's very helpful. I'll give it a shot.
@Likke0703 Sorry in case I'm bothering you. I'm trying to expand the workflow a bit; my question is, how do you change the input field names to make them grab a global value? For example, here:
https://imgur.com/mxrSM5B
If you prefer to have this discussion somewhere else, let me know.
@cravenikke42360 Aim at the input slot and right-click; a menu option called 'Rename Slot' appears. Select it and edit the name.
@Likke0703 Oh wow, that was easier than I thought; I had right-clicked about everything before asking... thanks!
@Likke0703 Sorry to revive this comment. Do you happen to also have workflows that use regional spatial control? (Generating entirely different characters in precise spots of the image without the need for masks.)
@Renes_stuff There used to be a Visual Area Conditioning node that others used in the past. However, it became difficult to use as updates for it stopped. Inspired by that, a new node was created.
GitHub - Fuwuffyi/ComfyUI-VisualArea-Nodes: A repository containing a couple of nodes made to help out with area prompting.
This new node allows areas to be designated without masks, but it was challenging to control the blending between the areas.
As a result, I have stuck with the workflow I uploaded.
I get a "node import failed" error with the UltimateUpscale and ArtVenture nodes. I tried reinstalling the nodes multiple times and updating ComfyUI, but it didn't work at all. Does anyone have the same problem? How do I fix it?
When I recently reinstalled ComfyUI, it seemed like there was an issue with installing Ultimate SD Upscale. I resolved it by installing it manually. There are also other nodes besides Ultimate SD Upscale that can handle upscaling tasks. (For example, see the upscaler group in v1.0.)
And, I can't think of anything related to the ArtVenture node. If you could let me know which part of my workflow is causing issues with ArtVenture, I will check it out.
@Likke0703 it said this
Error message occurred while importing the 'ComfyUI ArtVenture' module.
Traceback (most recent call last):
File "E:\ComfyUI_windows_portable\ComfyUI\nodes.py", line 2108, in load_custom_node
module_spec.loader.exec_module(module)
File "<frozen importlib._bootstrap_external>", line 995, in exec_module
File "<frozen importlib._bootstrap>", line 488, in _call_with_frames_removed
File "E:\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui-art-venture\__init__.py", line 13, in <module>
from .modules.nodes import NODE_CLASS_MAPPINGS, NODE_DISPLAY_NAME_MAPPINGS
File "E:\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui-art-venture\modules\nodes.py", line 12, in <module>
from .utility_nodes import (
File "E:\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui-art-venture\modules\utility_nodes.py", line 17, in <module>
from .utils import pil2tensor, tensor2pil, ensure_package, get_dict_attribute
File "E:\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui-art-venture\modules\utils.py", line 9, in <module>
import pkg_resources
File "E:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\pkg_resources\__init__.py", line 2191, in <module>
register_finder(pkgutil.ImpImporter, find_on_path)
^^^^^^^^^^^^^^^^^^^
AttributeError: module 'pkgutil' has no attribute 'ImpImporter'. Did you mean: 'zipimporter'?
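For context: pkgutil.ImpImporter was removed in Python 3.12, and the pkg_resources bundled with older setuptools versions still references it, which is exactly what this traceback shows. A quick diagnostic sketch (not a fix) to confirm whether your interpreter is affected:

```python
import pkgutil
import sys

# pkgutil.ImpImporter exists only on Python < 3.12; the pkg_resources
# shipped with outdated setuptools breaks when it is missing.
has_impimporter = hasattr(pkgutil, "ImpImporter")
print(sys.version_info[:2], "ImpImporter present:", has_impimporter)
```

If it is missing, upgrading setuptools inside the embedded Python is a commonly reported workaround, though replacing the ArtVenture VAE loader with the default one also sidesteps the issue.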
@RemTheBestWaifu Oh, the VAE loader node in my workflow is from ComfyUI-Art-Venture. Try replacing the current VAE loader with the default "Load VAE" node.
@Likke0703 This error appears when I import your workflow and install the missing custom nodes, though; I didn't use another VAE loader.
@RemTheBestWaifu It was my mistake to include the ArtVenture node in this workflow. If you are not using the ArtVenture node, you can delete it from the manager and run the workflow using the default VAE loader.
@Likke0703 okay, many thanks for your advice!
@RemTheBestWaifu I hope the issue is resolved, and you can successfully use this workflow.
Hello, I have seen your workflow and I want to create a character that includes multiple LoRAs. Unfortunately, when I imported your workflow into ComfyUI, it showed that many nodes and plugins were missing. Do you know where to add these missing things? I would greatly appreciate it if you could take the time to reply. Thank you!
Hi. I did everything correctly, but the Image Sender doesn't send the generated base image into the character boxes. Any idea?
After testing, it's the "Mask To Image" node that generates a black 64x64 picture. If I route the receiver into an image preview node, I get the picture.
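For anyone debugging the same symptom: a mask-to-image conversion essentially broadcasts a single-channel mask across the RGB channels, so an empty (all-zero) mask comes out as a solid black image. A rough NumPy sketch of that conversion (illustrative only; ComfyUI's own node operates on torch tensors):

```python
import numpy as np

def mask_to_image(mask: np.ndarray) -> np.ndarray:
    # Mask: (H, W) floats in [0, 1]; image: (H, W, 3) with the mask
    # value repeated across the three RGB channels.
    return np.repeat(mask[:, :, None], 3, axis=2)

# An empty mask yields an all-black 64x64 image, matching the symptom above.
img = mask_to_image(np.zeros((64, 64)))
```

So a black 64x64 output usually means the mask input is empty or disconnected, not that the node itself is broken.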
This is the only workflow for creating multiple characters from different LoRAs that I could get working!
But there is one problem: when it's on the KSampler (Efficient) step (Phase 2), it's very slow, easily >30 min to generate an image, even though the base image was generated in seconds. I have a 4060 (8GB VRAM). Did anyone else have a similar issue, or know how to speed things up?
For comparison, the Regional Prompter extension in AUTOMATIC1111, used for similar purposes, generates results very quickly for me.
Is it slow even without the upscaler turned on? When using the upscaler with 8GB of VRAM, the speed may be very slow.
@Likke0703 Even with the upscaler off, it's still >30 mins
@DizBabes I checked, and even with the upscaler turned off, the VRAM usage exceeds 8GB. Tasks that exceed VRAM run very slowly. If I add nodes to reduce VRAM usage in my workflow, it might solve the problem. However, I don't have a solution at the moment. I'm sorry I couldn't be of more help.
I recommend looking for ways to reduce VRAM usage.
@Likke0703 That's ok. I understand my rig is pretty budget-level. Thanks for your work on this anyway. I like the interface. Easy to understand and use.
@Likke0703 I tried Potatcat's "ComfyUi Regional Prompter Workflow" - it runs super fast for me. But it has a limitation of not being able to assign a specific lora to a specific mask. I wonder if that extra level of complication can cause the high VRAM usage? Well I couldn't begin to guess lol but just thought I'd mention it.
@DizBabes I have the same specs as you! Did you ever find out a low-vram solution?
@scottegg472795 nope
Hello, I really want to try out this workflow, but I keep getting this error:
Model in folder 'controlnet' with filename 'SDXL\t2i-adapter-depth-midas-sdxl-1.0.safetensors' not found.
Which is the right "controlnet" folder? Thanks a lot.
The path to controlnet is usually as follows:
'\models\controlnet'
If you don't have the depth-midas model, I recommend downloading it.
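To verify the model landed in the right place before re-running the workflow, a small path check helps. The install root below is an example; adjust it to your own ComfyUI directory.

```python
from pathlib import Path

# Example install root -- adjust to match your own ComfyUI directory.
root = Path("ComfyUI/models/controlnet")
model = root / "SDXL" / "t2i-adapter-depth-midas-sdxl-1.0.safetensors"
print("found" if model.is_file() else f"missing: {model}")
```

Note that the error message references an `SDXL` subfolder inside `controlnet`, so the file must sit in that subfolder, not directly in `models\controlnet`.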
Hello, I encountered some issues while using the workflow and received the following error: CLIPTextEncode.encode() missing 1 required positional argument: 'clip'. The cg-use-everywhere node seems to be malfunctioning. May I ask what version this node requires?
It seems that with recent ComfyUI updates, there have been significant compatibility changes—especially affecting nodes like Anything Everywhere (cg-use-everywhere). To be honest, this workflow was uploaded quite a while ago, and I can’t guarantee that it still works as expected with the current versions.
@Nyraen07 Hello, sorry for my bad English. I was interested in trying this workflow, but as you mentioned, the updates seem to cause problems with some of the custom nodes; even the Manager can't find all the missing nodes. So I was wondering if you had an updated version, or an idea as to how to update it.
@darkshark027535 In my case, I have updated the workflow for personal use and am currently using it. However, there are still some issues, such as preparing proper upload guidelines and optimizing the workflow, so I haven't decided yet whether to upload it.
@Nyraen07 please upload and maybe an explanation vid too 🙏🙏🙏🙏
@ikiru99percent I am preparing to upload the workflow, but I’m not sure when it will be posted. Please wait patiently.