CivArchive
    Regional Conditioning with AttentionCouple(LoRA supported) - v2.0
    NSFW

    How to use: HERE

    This workflow may require a relatively large amount of VRAM.

    v2.0 - Image Sender nodes have been replaced with Load Image nodes.

    Each character’s Detailer has been modified to work based on that character’s mask.

    The conditioning has been connected so that it is input separately for each character.


    v1.5 - Organized nodes, replaced the DWPose Estimator in Phase 2 with OpenPose Pose. A realistic base image is no longer required.


    This workflow utilizes the Attention Couple node for regional conditioning.

    It operates in two phases to generate images.

    I understand that needing more than a single queue run may not be ideal, but this is an attempt to achieve better results.
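    Conceptually, regional conditioning of this kind blends the influence of several prompts using per-character masks: each character's conditioning only affects the pixels its mask covers. Below is a minimal NumPy sketch of that idea, not the actual Attention Couple implementation; `blend_regional` and the per-region "effect" arrays are hypothetical stand-ins for conditioned denoising results.

    ```python
    import numpy as np

    def blend_regional(cond_effects, masks):
        """Blend per-character results by their region masks.

        cond_effects: list of HxW arrays, one per character prompt
        masks: list of HxW arrays in [0, 1], one region per character
        Overlapping or uncovered areas are normalized by the mask sum.
        """
        masks = [np.asarray(m, dtype=np.float32) for m in masks]
        total = np.maximum(sum(masks), 1e-6)  # avoid divide-by-zero where no mask covers
        out = np.zeros_like(np.asarray(cond_effects[0], dtype=np.float32))
        for effect, mask in zip(cond_effects, masks):
            out += np.asarray(effect, dtype=np.float32) * mask
        return out / total

    # Two characters, left/right halves of a tiny 4x4 "image"
    h, w = 4, 4
    left = np.zeros((h, w), dtype=np.float32)
    left[:, : w // 2] = 1.0
    right = 1.0 - left
    blended = blend_regional(
        [np.full((h, w), 1.0), np.full((h, w), 2.0)],  # character 1 vs character 2 "effects"
        [left, right],
    )
    # Left half carries character 1's value, right half character 2's.
    ```

    This is also why similar prompts for neighboring regions can bleed into each other: wherever masks overlap, the contributions are mixed rather than kept strictly separate.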

    If you have any ideas for improvements to this workflow, please let me know.

    This workflow was inspired by: https://civarchive.com/models/541881/regional-conditioning-with-lora-support
    The Attention Couple node was implemented using A8R8's node: https://github.com/ramyma/A8R8_ComfyUI_nodes


    Comments (24)

    DizBabes · Oct 6, 2025

    I have updated ComfyUI to v0.3.59 (2025-09-10) and started using the V2 version of this workflow.

    For the most part it is working well. However, it seems that Character 4 isn't working: the prompt and LoRA are not being used. For example, I put 'blonde hair' in the prompt and she comes out with black hair, and the prompts for the character's LoRA seem to be ignored.

    The first 3 characters are working just fine though.

    Nyraen07
    Author
    Oct 6, 2025

    It seems that when similar prompts are entered for multiple characters, one of them tends to be ignored.
    (For example, if you enter the same prompt like “dark-skinned woman” for two different characters, one may not be applied correctly.)
    You may need to run the queue several times until it applies properly, or try using the generated image as a base and re-run the workflow applying only the LoRA of the character that wasn’t applied correctly.

    DizBabes · Oct 7, 2025

    @Nyraen07 Hmm, I don't think it's that... all of the characters I used had pretty different prompts. No matter what, the problem seems unique to the 4th character slot for me.

    Copying the generated image to use as a base seems to just lose the characters that I don't repaint.

    Nyraen07
    Author
    Oct 7, 2025

    @DizBabes It seems that your situation with the 4th character is different from mine. You might want to try increasing the weight of the Attention Couple Region node for the 4th character group. Since I haven’t experienced this issue myself, I’m afraid I can’t suggest an exact fix.
    Still, I’m glad to hear that it’s working fine for the first three characters.

    To explain more specifically what I meant about using the generated image as a base:
    you could try lowering the denoise value of the KSampler in the “Phase2_Generate Regional image” node and then running the queue again.
    However, this method would probably only help if the problem is related to color or texture inconsistency in the character.

    That said, after thinking about it again, if the issue only affects one specific character, it might actually be better to just use the inpaint function instead.

    DizBabes · Oct 7, 2025

    @Nyraen07 
    Thinking about it more, it was probably that Seed Everywhere was set to Random, so the base image in Phase 1 had a different seed than the Phase 2 generation. After I realized that, it looks like it's respecting the prompts in all the character slots more.

    Anyway, I got the phase 2 results to where I'm happy with them. So i tried to run phase 3 with only the first four groups set to YES.
    - Detailer eyes
    - Upscaler
    - Detailers
    - CH1 Face Detailer

    This made a preview image show up in the upscaler's preview. For some reason it only shows what the upscaled result would look like when I mouse over the image.

    So it seemed like it got pretty far in the process, however, before it could save the upscaled result, this error popped up:

    Error Details
    - Node ID: 458
    - Node Type: ImpactSegsAndMask
    - Exception Type: ValueError
    - Exception Message: operands could not be broadcast together with shapes (1075,1118) (1075,542)

    Any idea what it means or what I could try?

    Nyraen07
    Author
    Oct 7, 2025

    @DizBabes Since the detailers are set to operate only within each character’s mask, the masked images from Phase 2 also need to be upscaled in order to use the Upscaler properly. The node related to this process is located in the Character 1 group.

    The current error seems to be caused by a mismatch between the size of the mask image and the upscaled image. Did you happen to bypass the Character 1 group?
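    The shapes in the error message point at exactly this kind of mismatch: NumPy refuses to combine a mask and an image whose widths differ. A minimal reproduction sketch, using the shapes from the error above (the arrays here are placeholders, not the node's actual tensors):

    ```python
    import numpy as np

    upscaled = np.zeros((1075, 1118), dtype=np.float32)  # upscaled image plane
    mask = np.zeros((1075, 542), dtype=np.float32)       # mask still at its pre-upscale width

    try:
        upscaled * mask  # element-wise ops need matching (or broadcastable) shapes
        broadcast_failed = False
    except ValueError:
        # "operands could not be broadcast together with shapes (1075,1118) (1075,542)"
        broadcast_failed = True

    # Resizing the mask to the image's resolution (the job of the upscale-related
    # node in the Character 1 group) makes the element-wise operation valid again.
    mask_resized = np.ones((1075, 1118), dtype=np.float32)  # stand-in for a properly resized mask
    result = upscaled * mask_resized
    ```

    In short: any path that upscales the image but leaves a Phase 2 mask at its original size will reproduce this error.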

    DizBabes · Oct 7, 2025

    @Nyraen07 No, all the character groups are on, and they're all appearing as prompted.

    I turned off CH1 Face Detailer and that made the error go away. I realized that the SEGS & Mask in that group feeds directly into the CH2 Face Detailer, which I had disabled because I felt I wouldn't need it, since the other characters weren't as important. I guess you need to enable the detailer for each character you use? Well, leaving it off seemed to produce a good enough result for me.

    Nyraen07
    Author
    Oct 7, 2025

    @DizBabes The detailers in Phase 3 are not mandatory.
    If you’re satisfied with the results without them, you don’t need to use them.

    Munihalla88 · Nov 10, 2025

    I'm using ComfyUI v0.3.62, and updated the Manager to 3.37.1. I started using V2 of this workflow because I liked the previous version and was hoping for more control when generating.

    When I load the workflow, the Models section, Checkpoint, and Load VAE nodes are greyed out, and cannot be adjusted.

    If I run the workflow, I get a mismatch error on the Checkpoint, because it's trying to select every checkpoint I have. I get a Save Image error because there is no image to save. I get more mismatch errors on Text Concatenate (JPS), CR SDXL Aspect Ratio, and the KSampler. All of these nodes error because their linked nodes return every single sampler at once.

    If I disable the "sampler_name" Anything Everywhere node, I'm able to adjust the greyed out nodes but the workflow doesn't run.

    I don't know what else could be happening, do I need to update a specific node?

    Nyraen07
    Author
    Nov 13, 2025

    1. Have you installed all the required custom nodes that weren’t installed through ComfyUI-Manager?

    2. Have you read the “How to use” guide I uploaded?

    Munihalla88 · Nov 13, 2025

    @Nyraen07 1. I can't find any missing custom nodes. And the error doesn't report a missing node either, just that node links are mismatched and can't be read.

    2. I have read the guide, I even tried setting it to lowvram. The problem is the workflow can't get started.

    3. Have you run this on ComfyUI v0.3.62? I only ask because an older workflow I made pre-v0.3.59 that has FaceDetailer stopped working, because a new node had to be inserted before the FaceDetailer node for it to operate.

    Nyraen07
    Author
    Dec 4, 2025

    @Munihalla88 My current ComfyUI version is v0.3.59, and I haven’t tested the workflow on the latest version yet.
    If the issue you’re experiencing is some kind of compatibility problem caused by version differences, then unfortunately I don’t have much I can say until I’ve had a chance to run the workflow on the latest version myself.

    Munihalla88 · Dec 5, 2025

    @Nyraen07 It took rebuilding your V1.5 to figure out where things were going wrong. An out-of-date custom node was breaking the workflow. Once that was fixed, it can at least run now.

    However, Save Image doesn't seem to work at all, the face detailer seems to only trigger on CH1, the hand detailer hasn't triggered, and I can't find the right file for the eyes bbox. And then I randomly get a negative integer while generating. Not always, just sometimes.

    linelinel · Nov 20, 2025

    Hey! Great workflow, I really love the results!
    I had one question: sometimes the ControlNet doesn't work.
    Most of the time, a value of 0.7 on the depth ControlNet works nicely and we get the pose of the base picture.
    But for maybe 40% of pictures there is just nothing in common, and I need to raise the value gradually until the ControlNet finally works.

    My point is: it's as if the ControlNet doesn't gradually increase its influence. It's either all or nothing, and the value needed for it to work seems a bit random depending on the image.

    Is it something you've experienced before?

    Thanks again for your workflow, i really enjoy it!

    Nyraen07
    Author
    Dec 4, 2025

    When the ControlNet strength is too low — and sometimes even at values that would normally be considered reasonable — it can behave as if it isn’t being applied at all. In the end, the only solution is to try multiple times until you get the result you want.

    10709959 · Dec 23, 2025

    Gave this a try earlier, and I have to say, thank you so much - finally found a regional conditioning workflow I can actually use!

    It took me a bit to figure out how to navigate the whole thing, swap in the needed checkpoints, LORAs and ControlNet components, but once I got it all set up, it didn't take me too long to work out how it all fit together.

    I haven't tried using the Detailer and Upscale components of it yet, nor the additional two optional character slots, but now that I have this on hand I'm definitely eager to give it a try!

    One thing I found interesting is that nowhere in the documentation was there any mention of the custom nodes required for this workflow to work. For reference, here are the ones that are needed:

    ComfyUI-Impact Pack

    ComfyUI-Impact Subpack

    Comfy-UI Essentials

    ComfyUI Custom Scripts

    ComfyUI WD14 Tagger

    ComfyUI KJNodes

    ComfyUI Image Saver

    ComfyUI ControlNet Aux

    ComfyRoll Custom Nodes

    rgthree-comfy

    frontend_only (may be default with ComfyUI)

    cg-use-everywhere

    All the same, this particular workflow does wonders, so thank you again for this!

    Nyraen07
    Author
    Dec 24, 2025

    Since I usually rely on ComfyUI-Manager, I didn't think to manually list all the required nodes. Anyway, I'm glad to hear that it's working.

    Caymerra · Dec 23, 2025

    Hey, love the workflow, it works as intended for sure, but I'm having a weird issue where my characters look absolutely ugly for some reason. I wrote the positives and negatives by hand, and I'm using the same stuff I normally use in simpler workflows, where I get great results.

    I don't know what step I'm missing in yours; it's like it's ignoring the base prompts. Thanks.

    Nyraen07
    Author
    Dec 24, 2025

    Does this issue occur before the detailer stage, or does it happen after going through the detailer?

    Caymerra · Dec 24, 2025

    @Nyraen07 Oh, I bypassed the detailers as I just wanted to test how it works, and I'm also not sure what models to use for the detailers, if there are any specific ones... I mostly get good results without detailers in other workflows; that's why I find this issue weird...

    Nyraen07
    Author
    Dec 25, 2025

    @Caymerra I’ve experienced cases where the image quality was poor or the result looked strange, but I’m not sure about situations where it specifically turns out “ugly.”
    If it’s not an issue related to the detailer, I can’t think of much else to suggest besides checking whether there are any problems with the basic settings (CFG, steps, sampler, scheduler, the LoRAs being used, etc.).

    Caymerra · Dec 25, 2025

    @Nyraen07 I've tested the entire workflow now. Everything does work as intended and the finished product looks super good, but the entire process took me over 15 minutes to finish completely, which I guess is an understandable compromise if I want specific details in the image... I still can't figure out why the base image (before the detailers) comes out ugly: really contorted facial expressions, bad fingers, simplistic art style and so on...
    I'll dig into it for a bit longer, but really, thank you for the workflow, and hopefully you can optimize it further.

    Caymerra · Dec 25, 2025

    @Nyraen07 Funny update: I removed the JPS nodes in Phase 1, replaced them with a normal CLIP Text Encode box, connected it appropriately, and now Phase 1 gives me great results. But when Phase 2 redraws the characters they come out ugly again, lol.
    It's already using CLIP Text Encode boxes, so I don't know what to change here...

    malaron · Apr 22, 2026

    This... is... amazing! I tried so many tutorials and other workflows that just did not work. This works great! One thing I noticed is that I seem to need to bump my LoRA strength up more than with basic txt2img, but that's probably me doing something wrong.

    ETA: One other thing I noticed: when generating 3+ characters, it does a poor job of creating the OpenPose from the images I provide. Since I generate the starter images with a separate OpenPose flow, I found it easier to just load my own OpenPose stick figures for Phase 2 :)

    Workflows
    Other

    Details

    Downloads
    1,272
    Platform
    CivitAI
    Platform Status
    Available
    Created
    9/15/2025
    Updated
    5/13/2026
    Deleted
    -

    Files

    regionalConditioningWith_v20.zip

    Mirrors

    regionalConditioningWith_v20.zip