Hello!
@superbeasts_ai and I (@midjourney_man) teamed up to create this ComfyUI workflow.
It is intended to upscale and enhance your input images.
We introduced a Freedom parameter that drives how much new detail is introduced in the upscaled image. FYI, values closer to 1 stick closer to your input image, while values closer to 10 allow more creative freedom but may introduce unwanted elements into your new image.
You will also notice an HDR effect that can be set for pass 1 and pass 2.
A couple of things to note:
There are two passes. Depending on the upscale size for pass 2, it may take a while to generate your final image. If you want to save time, you can disable pass 2 altogether with the built-in switch.
We always downscale the loaded image to the nearest SDXL resolution (e.g., 1024). As a result, the default scale values will produce 2K and 4K final images for pass 1 and pass 2, respectively.
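To make the sizing concrete, here is a rough Python sketch of that downscale-then-upscale math. This is not the workflow's actual Resolution Calculator node; the bucket list and the aspect-ratio matching rule are illustrative assumptions.

```python
# Hypothetical sketch of snapping an input to an SDXL-friendly base resolution,
# then scaling it per pass. The bucket list and the matching rule are assumptions
# for illustration, not the workflow's exact Resolution Calculator logic.
SDXL_BUCKETS = [
    (1024, 1024), (1152, 896), (896, 1152), (1216, 832), (832, 1216),
    (1344, 768), (768, 1344), (1536, 640), (640, 1536),
]

def snap_to_sdxl(width, height):
    """Pick the SDXL bucket whose aspect ratio is closest to the input's."""
    ratio = width / height
    return min(SDXL_BUCKETS, key=lambda wh: abs(wh[0] / wh[1] - ratio))

def pass_sizes(width, height, scale1=2, scale2=4):
    """Base resolution plus the pass 1 (~2K) and pass 2 (~4K) output sizes."""
    bw, bh = snap_to_sdxl(width, height)
    return (bw, bh), (bw * scale1, bh * scale1), (bw * scale2, bh * scale2)

# A square input of any size snaps to 1024x1024, so the default scale values
# give 2048x2048 after pass 1 and 4096x4096 after pass 2.
print(pass_sizes(2560, 2560))
```

Under these assumptions, even an already-large square input first drops to the 1024x1024 base before the passes scale it back up, which matches the behavior described above.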
Feel free to play with the settings to see what works best for you and your style. Do not be afraid to test other models and LoRAs. Last but not least, you can also use different CN models as needed (for example, OpenPose).
Enjoy! :)
Resources used in the workflow:
SDXL Model: https://civarchive.com/models/133005?modelVersionId=348913
LoRA: https://civarchive.com/models/122359/detail-tweaker-xl
ControlNet for Tile (Special thanks to @ttplanet): https://civarchive.com/models/330313/ttplanetsdxlcontrolnettilerealisticv1
CN Control LoRA Models: https://huggingface.co/stabilityai/control-lora/tree/main/control-LoRAs-rank256
Upscale Model: https://huggingface.co/gemasai/4x_NMKD-Siax_200k/tree/main
Description
Support for IPAdapter and various minor changes/optimizations
Comments (35)
Which IPAdapter Models do I need? I am getting errors related to it: IPAdapterModelLoader 468:
- Value not in list: ipadapter_file: 'None' not in []
I was still getting errors after installing the IPAdapter model, but figured out it was because I had installed the SDXL version instead of the 1.5 version. It's working now.
Now I am getting "Can't import color-matcher, did you install requirements.txt? Manual install: pip install color-matcher", but I have already run pip install color-matcher in \custom_nodes and \KJNodes.
OK, I fixed that by following https://github.com/hahnec/color-matcher :
download the "color_matcher" folder and put it into ..\python_embeded\Lib\site-packages.
Now I am getting the error: ImportError: cannot import name 'DEPTH_ANYTHING_MODEL_NAME' from 'controlnet_aux.util' , what Depth Anything Model should we be using?
If I give it an already large image (2560x2560), it downsizes it to 2048x2048 before upscaling to 4096x4096. Shouldn't it just scale it by 2x on the first pass? The input Width and Height in the Resolution Calculator are not changing from the default 1024 value.
FYI, if you click on the "Show More" link in the description you'll see that it says "We always downscale the loaded image to the nearest SDXL resolution (E.g. 1024). As a result, defaults scale values will result in 2K and 4K final images for pass 1 & pass 2, respectively."
@davz Ahh ok, thanks, I have been trying for ages to re-configure that part of the nodes, as that is not good for my current workflow, but I haven't figured out how it works yet.
@J1B yeah, it's a pretty complicated workflow.
@J1B Yeah, the reason we do this is that the workflow is intentionally designed to introduce new details in the process, but the whole pipeline relies on tiled upscaling.
This works really well at SDXL resolutions for the sizes we've given, and even at 2x those resolutions, but it becomes a problem going larger than that because there are even more tiles to denoise. More tiles mean more artifacts if the denoise values are too high.
The second pass is designed to denoise at a much lower value.
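A quick back-of-the-envelope sketch of that tile growth (the 1024px tile size and zero overlap here are illustrative assumptions, not the workflow's exact tiling settings):

```python
import math

def tile_count(width, height, tile=1024):
    """How many tiles a tiled upscaler must denoise at a given resolution,
    assuming a fixed tile size and no overlap (illustrative numbers only)."""
    return math.ceil(width / tile) * math.ceil(height / tile)

print(tile_count(2048, 2048))  # 4 tiles at 2K
print(tile_count(4096, 4096))  # 16 tiles at 4K
print(tile_count(8192, 8192))  # 64 tiles at 8K
```

Each doubling of resolution quadruples the tile count, which is why the later, larger passes have to keep the denoise value low to avoid artifacts.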
So you should be able to get this working by disabling the sampler for pass 1 and just connecting your input image into pass 2 directly with a Load Image node. Though with the get/set nodes I suspect you should set the pass 1 image as your input image. That way the comparison nodes should still work.
Good luck!! It is complex, so I hope this brief explanation provides something useful.
I would love to see a version that does three passes, first pass for 2k, second pass for 4k and then last pass for 8k.
I think I figured out how to make an 8K version (3 passes), but I am not sure. I just copied the 2nd pass group nodes, created a FreeU version for pass 3, a save group node for pass 3, etc., and renamed the set & get nodes for pass 3. Not sure if I did it right though.
The trouble is the 4k pass doesn't look as good as the 2K pass and it actually loses some details and flattens areas out even though it is a higher resolution.
@J1B Yes, I noticed this as well. If there were a way to implement tile blur with the tile ControlNet model, the results would be better on the 2nd and 3rd passes. It's what I use in Forge to squeeze out new details. He could also try implementing SUPIR for the 2nd and 3rd passes, as it would add features and details to the 2nd pass without removing anything.
@vigilence I just tried the SUPIR Workflow as a 2nd step from 2k to 4K and it looks fantastic! Thanks
@J1B Nice! I have been testing it myself. The workflow is a combination of this one and ursium ai: the initial pass by @midjourney_man, and then 2 full passes (denoise and pass) of the SUPIR upscale. The results look very nice and not washed out. The keywords are important for SUPIR, or it changes the look of the photo, especially the medium of the artwork.
I made no changes but I get this error:
Error occurred when executing IPAdapterEmbeds: insightface model is required for FaceID models

File "D:\Automatic1.1.1.1\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
File "D:\Automatic1.1.1.1\ComfyUI_windows_portable\ComfyUI\execution.py", line 81, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "D:\Automatic1.1.1.1\ComfyUI_windows_portable\ComfyUI\execution.py", line 74, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "D:\Automatic1.1.1.1\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_IPAdapter_plus\IPAdapterPlus.py", line 1018, in apply_ipadapter
    return ipadapter_execute(model.clone(), ipadapter_model, clip_vision, **ipa_args)
File "D:\Automatic1.1.1.1\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_IPAdapter_plus\IPAdapterPlus.py", line 183, in ipadapter_execute
    raise Exception("insightface model is required for FaceID models")
Have you tried to integrate technologies such as AYS and SAG?
I'm getting the following error:
Error occurred when executing ImageResize+: 'bool' object has no attribute 'startswith'

File "C:\Users\user\Documents\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
File "C:\Users\user\Documents\ComfyUI_windows_portable\ComfyUI\execution.py", line 81, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "C:\Users\user\Documents\ComfyUI_windows_portable\ComfyUI\execution.py", line 74, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "C:\Users\user\Documents\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_essentials\image.py", line 288, in execute
    elif method.startswith('fill'):
The only red node is Ultimate SD Upscale. Can someone help please?
Same issue
I think this Thread fixed it for me:
https://github.com/comfyanonymous/ComfyUI/issues/3463
Or that could have been a different issue; it's a complicated one to get going, and I just had to do a reinstall and got the same error.
I went with the last post:
"It seems to be a message from the ComfyUI-Universal-Styler node.
Here is the error output code:
https://github.com/KoreTeknology/ComfyUI-Universal-Styler/blob/main/naistyler_nodes.py#L106
The path mentioned in the error and the actual path it is trying to load are different, and it can be seen that the error message is misleading.
If you want to prevent the error, you can do the following:
Create a ComfyUI-NAI-styler directory under the custom_nodes directory.
Create an __init__.py file under the ComfyUI-NAI-styler directory with the following content:
__all__ = ['NODE_CLASS_MAPPINGS', 'NODE_DISPLAY_NAME_MAPPINGS']
NODE_CLASS_MAPPINGS = {}
NODE_DISPLAY_NAME_MAPPINGS = {}
Create a CSV directory under the ComfyUI-NAI-styler directory.
Create three empty files under the CSV directory:
naifilters.csv
naistyles.csv
naitypes.csv
After restarting ComfyUI, the error should no longer appear."
Ahh, I think the fix for this error is to change the Image Resize node's Method box from "True" to "stretch".
Error occurred when executing IPAdapterModelLoader: IPAdapterModelLoader.load_ipadapter_model() missing 1 required positional argument: 'ipadapter_file'
File "C:\AI\ComfyUI\ComfyUI\execution.py", line 151, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
File "C:\AI\ComfyUI\ComfyUI\execution.py", line 81, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "C:\AI\ComfyUI\ComfyUI\execution.py", line 69, in map_node_over_list
    results.append(getattr(obj, func)())
What's wrong with the IPAdapter file? The IPAdapter Model Loader node is marked red, and ipadapter_file is "null" or "undefined". How do I point it to the models folder, when it can find canny/depth/tile fine?
Hello. I'd love to try this out, but I'm getting the following error:
Error occurred when executing CLIPVisionLoader: 'NoneType' object has no attribute 'lower'

File "D:\ComfyUI\ComfyUI\execution.py", line 151, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
File "D:\ComfyUI\ComfyUI\execution.py", line 81, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "D:\ComfyUI\ComfyUI\execution.py", line 74, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "D:\ComfyUI\ComfyUI\nodes.py", line 874, in load_clip
    clip_vision = comfy.clip_vision.load(clip_path)
File "D:\ComfyUI\ComfyUI\comfy\clip_vision.py", line 113, in load
    sd = load_torch_file(ckpt_path)
File "D:\ComfyUI\ComfyUI\comfy\utils.py", line 13, in load_torch_file
    if ckpt.lower().endswith(".safetensors"):
How do I solve this? I appreciate your help.
I solved the problem! In the "IPAdapter" group in the workflow, there's a "Load CLIP Vision" node just behind the "IPAdapter Model Loader" node (you have to move it to the side to see it). I had to download the CLIP-ViT-bigG-14-laion2B-39B-b160k model and load it in this node: I went to https://huggingface.co/laion/CLIP-ViT-bigG-14-laion2B-39B-b160k/tree/main, downloaded the "open_clip_pytorch_model.safetensors" file, put it in ComfyUI\models\clip_vision, and renamed it to "CLIP-ViT-bigG-14-laion2B-39B-b160k.safetensors" to match the name in that node in the workflow. Now it works! I hope this helps other people facing this problem. 😊
First of all, this is a really good workflow, thank you for that, but I have a question:
all the photos posted here are very sharp and very beautiful, but when I upscale to 2K, it adds TOO MUCH sharpening and contrast, and it almost becomes a painting rather than a realistic photograph. How can I fix this issue?
Hi! You can try to decrease the freedom setting.
[Update] I fixed this by replacing the SimpleMath+ nodes with pysssss Math Expression nodes. Just copy and paste the math over, reconnect, and delete the SimpleMath+ nodes that no longer work with ComfyUI.
Thank you for this neat workflow. I believe the math nodes are broken with the latest version of ComfyUI. I receive an error for all the Essentials math nodes with the latest update, as follows:
got prompt
Failed to validate prompt for output 267:
* SimpleMath+ 264:
- Return type mismatch between linked nodes: a, FLOAT != INT,FLOAT
Output will be ignored
Failed to validate prompt for output 177:
* SimpleMath+ 385:
- Return type mismatch between linked nodes: a, FLOAT != INT,FLOAT
* SimpleMath+ 262:
- Return type mismatch between linked nodes: a, FLOAT != INT,FLOAT
* SimpleMath+ 263:
- Return type mismatch between linked nodes: a, FLOAT != INT,FLOAT
* SimpleMath+ 384:
- Return type mismatch between linked nodes: a, FLOAT != INT,FLOAT
...
Same issue here
I fixed this by replacing the broken SimpleMath+ nodes with Math Expression nodes from the pysssss set.
Very cool workflow! Can you explain a bit about what the IP adapter section and the noise types are doing? There are several noise nodes (shuffle, fade, gaussian) with only one enabled in the downloaded workflow. Just trying to understand the inner workings of this... Thanks!
After getting this working (yeah, that was fun), I think it overdoes things on default settings. Bringing the Freedom value down from 6.500 to 3.500 helped a lot. It also suffered from fewer ghosting issues.
It's another great hallucinative upscaler, but I really wish we had more accurate upscalers. Great job though 👍
Absolutely incredible!! thank you so much!!!
I don't understand why no one has mentioned node/Python compatibility. We can't switch Python versions for every workflow. Image Film Grain, Image Filter Adjustment, and the upscale model loader node are missing and can't be updated with the Manager.
Someone help me, please. Why doesn't it work?
Failed to validate prompt for output 103:
* DepthAnythingPreprocessor 366:
- Required input is missing: image
* CannyEdgePreprocessor 301:
- Required input is missing: image
* ControlNetApplyAdvanced 101:
- Required input is missing: image
* IPAdapterEncoder 474:
- Required input is missing: image
* Image Film Grain 521:
- Required input is missing: image
* UltimateSDUpscale 231:
- Required input is missing: image
Output will be ignored
Failed to validate prompt for output 510:
* ImageResize+ 295:
- Value not in list: method: 'True' not in ['stretch', 'keep proportion', 'fill / crop', 'pad']
Output will be ignored
Failed to validate prompt for output 177:
* ColorMatch 34:
- Required input is missing: image_ref
Output will be ignored
Failed to validate prompt for output 18:
Output will be ignored
Failed to validate prompt for output 512:
Output will be ignored
Prompt executed in 0.06 seconds

