v3.1.2 Update:
Added LoRA Stacker
Simplified Loader
More Aspect Ratios and Resolutions
I actually wanted to add the Super Resolution ControlNet, but the integration into ComfyUI isn't working quite right yet. So, this feature will come with the next update.
V3.1 Update:
The V3.1 update now supports the ComfyUI Cascade models.
Additionally, it now has a Cascade resolution and aspect-ratio node, a nice quality-of-life improvement. Many thanks to "bellamiss" for the node.
The Cascade resolutions custom node is included in the .zip file; just copy the node into the custom_nodes folder.
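Copying the node can be done from a terminal like this. This is just a sketch: the install path and the node's filename are assumptions, so substitute the actual file from the zip and your own ComfyUI location.

```shell
# Sketch: copy the Cascade resolutions node into ComfyUI's custom_nodes folder.
COMFYUI_DIR="$HOME/ComfyUI"          # assumption: adjust to your install location
NODE_FILE="CascadeResolutions.py"    # placeholder name: use the file from the zip

mkdir -p "$COMFYUI_DIR/custom_nodes"
cp "$NODE_FILE" "$COMFYUI_DIR/custom_nodes/"
```

ComfyUI picks up files in custom_nodes on the next start, so restart it after copying.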
ComfyUI Cascade Checkpoints Link:
V3 Update:
In V2, there was a bug that became apparent when rendering several hundred images, which affected the efficient sampler.
Therefore, I replaced it with the vanilla sampler in V3.
Many thanks for the help with testing to "erosdiffusion" from the L2 Discord.
Furthermore, V3 now has a face detailer that can edit up to 5 different faces.
Optionally, for images with larger faces, the quality of eyes can be improved with the eye detailer.
Eye Yolo Detector Model Link:
More Details LoRA SD 1.5 Link:
Update v3d (Diffusers):
After trying out numerous samplers and configurations, I've decided to go back to the unofficial DiffusersStableCascade sampler, because it consistently produces better, more creative images. Feel free to try the v3 version as well; maybe you'll find a sampler configuration that works consistently well.
Diffusers Stable Cascade ComfyUI Node
It's possible that you'll encounter errors during installation. Here are some possible fixes:
https://github.com/huggingface/diffusers/issues/1619#issuecomment-1345604389
https://huggingface.co/stabilityai/stable-cascade/discussions/17/files
V2 Update:
ComfyUI now supports Stable Cascade natively, so the Diffusers sampler from V1 is no longer needed. The workflow generates images with Stable Cascade and, if needed, improves face details with an SD 1.5 model.
The following resolutions work well: 1024x1024 / 1024x1904 / 1024x3808. If you know any other resolutions that work well, please post them here.
I've included a short guide on where to download the models and which folders to copy them into in the red notes of the workflow.
Description
A simple workflow for Cascade that utilizes the Diffusers sampler and a Face Detailer. For the Face Detailer, you can use both SD 1.5 and SDXL models.
If you're still having problems with the installation, here's a new fix. https://www.reddit.com/r/comfyui/comments/1arh2du/stable_cascade_working_yesterday_not_working/
Thanks for the heads-up, LaughterOnWater. ^^
To install, simply open a terminal in the custom_nodes folder and run "git clone https://github.com/kijai/ComfyUI-DiffusersStableCascade".
The models will then automatically load in ComfyUI upon first start.
I've gotten feedback that some folks are having issues after installation. Try going into the custom node's folder, opening a terminal there, and running "pip install -r requirements.txt".
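The whole install, including the dependency step, looks roughly like this in one go. The ComfyUI path is an assumption; adjust it to your own install.

```shell
# Clone the DiffusersStableCascade custom node into ComfyUI's custom_nodes
# folder, then install its Python dependencies.
cd "$HOME/ComfyUI/custom_nodes"      # assumption: adjust to your install location
git clone https://github.com/kijai/ComfyUI-DiffusersStableCascade
cd ComfyUI-DiffusersStableCascade
pip install -r requirements.txt      # fixes the post-install import errors
```

If ComfyUI runs inside a venv or an embedded Python (as in the Windows portable build), run the pip step with that interpreter, not your system Python.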
Comments
This was working yesterday. Updated everything via manager today, and now it's not working.
"embedding.1.weight expected shape tensor(..., device='meta', size=(320, 64, 1, 1)), but got torch.Size([320, 16, 1, 1])."
Ideas?
The same error for me.
I always get the error when I try to render with an unsupported resolution. 1024x1024 / 1024x1904 / 1904x1024 work well.
I don't get this error, even with the resolutions reported as not working.
Most likely it's linked to your Comfy install / venv setup.
My NVIDIA 4070 with 12 GB VRAM gives up after 671.5 seconds because it runs out of memory...
This has been resolved in this reddit thread:
https://www.reddit.com/r/comfyui/comments/1arh2du/stable_cascade_working_yesterday_not_working/
@denrakeiw According to the demo it should also work with 1536x1536: https://huggingface.co/spaces/multimodalart/stable-cascade
@denrakeiw Hmmm... not happening with me. Maybe it's a GPU memory thing?
Putting the ComfyUI Cascade checkpoints into the checkpoints folder worked for me.
How much VRAM do you need to run this?
I have 24GB of VRAM, but Cascade is supposed to need less than SDXL.
8 GB on my 6600 XT is enough.
Doesn't even use all of my VRAM on an RX 6700 XT, which has 12 GB.
Successfully running SC workflows on 3070Ti w/ 8GB VRAM as well.
4 GB is not enough in case anyone is wondering