Check the version description below (bottom right) for more info and add a ❤️ to receive future updates.
Do you like what I do? Consider leaving me a tip
Hi, I'm shikasensei, also known as u/Boring_Ad_914 on Reddit. With the hype around Magnific AI, I found myself drawn to achieving similar results. Initially, I was inspired by LDWorks' work; however, due to the limitations of their initial workflow, I decided to create my own, thus Ultra Upscale was born.
Currently, my focus has shifted more to tools like Clarity Upscaler and ultimately Magnific AI. However, moving the Automatic1111 workflow to ComfyUI has not been successful because the Tiled Diffusion nodes work differently. If anyone finds a way to achieve this, please let me know and I'll be happy to update the workflow.
In relation to the previous point, I recommend using Clarity Upscaler combined with tools like Upscayl, as this achieves much better results. However, I have updated the workflow because many people use it, and because I myself use ComfyUI in my day-to-day work and find it tedious to launch Automatic1111 just to upscale an image.
Please consider the following:
The workflow currently scales an image from 768x768 to 6144x6144. This doesn't mean it's limited to square images; rather, the workflow multiplies each dimension of your image by 8, so larger inputs take proportionally longer. One option is to first resize the image so its longest side is 768 or 1024 pixels.
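To make the sizing concrete, here is a small sketch (an illustration I wrote, not part of the workflow; the function name is hypothetical) of the 8x math and the suggested pre-resize:

```python
def plan_upscale(w, h, scale=8, target_long=768):
    """Return (input_w, input_h, output_w, output_h) after an optional pre-resize.

    If the longest side exceeds target_long, shrink the image to that size
    first (preserving aspect ratio), then report the scaled output size.
    """
    if max(w, h) > target_long:
        ratio = target_long / max(w, h)
        w, h = round(w * ratio), round(h * ratio)
    return w, h, w * scale, h * scale

print(plan_upscale(768, 768))   # the default case: 6144x6144 output
print(plan_upscale(1220, 813))  # a larger input is shrunk to 768x512 first
```

Without the pre-resize, a 1220x813 input would come out at 9760x6504, which explains the very long run times reported in the comments below.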
Before commenting on any issues with the workflow, make sure all necessary nodes are installed and that ComfyUI and all nodes are up to date.
IMPORTANT: To install the missing nodes, you will have to install ComfyUI Manager. Once it is installed, click "Install missing nodes" and install each node that appears.
Additional resources:
I hope you find this workflow helpful. Please let me know if you have any questions or suggestions.
Links to examples:
Comments (26)
For SDXL Lightning, it seems the VAE is not connected in "VAE Decode". Is there something I'm missing?
Done.
I get an error:
Error occurred when executing InsightFaceLoader: No module named 'insightface'
I solved the problem following the steps on this video https://www.youtube.com/watch?v=vCCVxGtCyho
I started with a 1220x813 image. It took 89 minutes to complete the workflow on my Nvidia RTX 3060 12GB. The upscale looks great! But it just takes waaaaay too long.
Yes, I understand, the workflow is more of a curiosity than anything else. Honestly, almost no one needs to upscale their images to that extent. Personally, I usually only upscale to 3k. You would just need to remove the "upscale image by" node from simple tiles.
If you don't need precise high resolution (printing products at 300 DPI, etc.), you can try chaiNNer (https://github.com/chaiNNer-org/chaiNNer) with standard models, which you can download from here (https://openmodeldb.info). It takes only seconds on my 3090.
Thank you very much for your nodes setup!
Please check: for SDXL Lightning, it seems that in the Detailer Inside pass, the Make Tile SEGS input should be connected to filter_out_segs_opt; otherwise it causes a 'cannot unpack non-iterable int object' error.
It fixes the error, but I don't think that's what's meant to be done for correct operation.
@omrio I’m also not sure that this is correct, I just had this error and I found such a quick solution
I get the same error for some images and for others it works like it should. I haven't looked too deeply into this issue, but do you think it could be related to the aspect ratio of the images? When I give it a square image it works.
@Gedrloov I'm not tech-savvy enough to grasp all the intricacies. It would be great to hear from the author for clarification, but unfortunately, he's not responding.
Regarding the resolution error, I sometimes encounter it when using non-standard resolutions (e.g., 906x588). However, in my case, it's a different error related to the TileMerge (Dynamic) node:
"Error occurred when executing DynamicTileMerge:
index 1 is out of bounds for dimension 0 with size 1"
In such instances, I either troubleshoot through trial and error or load a higher resolution image (e.g., 4K) while disabling the initial upscaling nodes.
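One plausible mechanism for this kind of index error (an assumption on my part, not taken from the node's actual code) is a mismatch between the number of tiles cut from the image and the number the merge step expects, which can only happen when the resolution is not an exact multiple of the tile size:

```python
import math

def tile_counts(w, h, tile=512):
    # Hypothetical illustration: tiles needed to fully cover the image
    # (ceiling division) vs. the count from floor division. These disagree
    # whenever a dimension is not an exact multiple of the tile size.
    covering = math.ceil(w / tile) * math.ceil(h / tile)
    floored = (w // tile) * (h // tile)
    return covering, floored

print(tile_counts(906, 588))    # the 906x588 case above: counts disagree
print(tile_counts(1024, 1024))  # exact multiple: both counts agree
```

Either way, padding or resizing the input to a multiple of the tile size, as suggested above, sidesteps the mismatch.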
Let me guess: there is no character or person in the image you were trying to upscale? I haven't tried the workflow yet (I'm quickly checking it on my phone, away from home), but it looks like the "person detector" node found nothing and therefore had nothing to feed the Make Tile SEGS node. That is why you get this error. I will try this tomorrow and look at it in more detail.
Thank you for sharing! Unfortunately I got this error towards the end:
[mask_to_segs] Empty mask.
# of Detected SEGS: 0
ERROR:root:!!! Exception during processing !!!
ERROR:root:Traceback (most recent call last):
File "I:\ai\ComfyUI_windows_portable\ComfyUI\execution.py", line 152, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
File "I:\ai\ComfyUI_windows_portable\ComfyUI\execution.py", line 82, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "I:\ai\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-0246\utils.py", line 381, in new_func
res_value = old_func(*final_args, **kwargs)
File "I:\ai\ComfyUI_windows_portable\ComfyUI\execution.py", line 75, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "I:\ai\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Impact-Pack\modules\impact\segs_nodes.py", line 1194, in doit
h, w = segs[0]
TypeError: cannot unpack non-iterable int object
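For what it's worth, the final TypeError is plain Python behavior, reproducible outside ComfyUI: `h, w = x` needs an iterable of exactly two items, and an integer (here presumably standing in for a missing or empty SEGS header when no person is detected) cannot be unpacked:

```python
segs_header = (768, 768)  # the kind of (h, w) pair the failing line expects
h, w = segs_header        # works: two values, two targets

bad = 0                   # an int where a pair was expected
try:
    h, w = bad
except TypeError as exc:
    print(exc)            # cannot unpack non-iterable int object
```

This is consistent with the suggestions in this thread: connecting filter_out_segs_opt or ensuring the detector actually finds a person both amount to making sure a real (h, w) pair reaches that line.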
In the 1.5 workflows you use cyberrealisticv33 with XDSLrender_v2_0.
These are not compatible with each other, so it's basically running the workflow without the LoRA.
It seems you are a little confused. SDXLRender is a LoRA for stylizing SD 1.5 generations with the detail and style of SDXL.
@shikasensei The ComfyUI debug log literally gives errors saying it can't load it, while it works fine when I load an SDXL model. I did read the SDXLRender model page; I'm just saying what ComfyUI is telling me.
Maybe I'm doing something wrong, but I wouldn't know what as I have all the nodes used in the json workflow file.
hello, how can I activate the second pass?
Error reported when running to KSampler
having the same issue
not working
SyntaxError: Unexpected non-whitespace character after JSON at position 4 (line 1)
insight face not detected
What would be the result if you used CNet/T2I instead or alongside simple detailer?
Is there any chance you'll be updating this with the new IPAdapter V2 nodes?
The end result is absolutely beautiful but it takes 10 minutes for 1 picture on a 3080ti....just doesn't make sense is there going to be an update to help with the processing speed?
I seem to have changed some settings and used a minimum 1024x1024 image, and it took around 7 minutes on a 4090. The end resolution was 16k x 16k, but without real 16k detail, so I had to halve it to 8k to get it to look sharp. I used one of my "customer sends an old render image that is meh and lacks detail" references, which I always use for comparison, to see how it performs for real-world use cases.
For anyone else.. try changing the upscaler to something that is 2x instead of 4x.
And reduce the resolution of the input image if you are using your own images.
i.e. try keeping the input image below 1024x1024.
If you keep the upscaler at 4x or 8x and input a high-resolution image, it WILL take absolutely ages to perform the upscale.

