video TBA
Update: v35 txt2img + Lora & Canny ControlNet
Update: v82-Cascade-Anyone
The checkpoint update has arrived!
A new checkpoint method was released, and all workflows were refactored.
https://huggingface.co/stabilityai/stable-cascade/tree/main/comfyui_checkpoints
Put both files inside /models/checkpoints/
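A minimal Python sketch to confirm both checkpoints landed in the right folder. The filenames below are my assumption based on the comfyui_checkpoints folder listing; verify them against the repo before relying on this:

```python
from pathlib import Path

# Assumed filenames from the comfyui_checkpoints folder on Hugging Face --
# check the repo listing if yours differ.
CHECKPOINTS = [
    "stable_cascade_stage_b.safetensors",
    "stable_cascade_stage_c.safetensors",
]

def missing_checkpoints(comfyui_root):
    """Return the checkpoint files not yet placed in models/checkpoints/."""
    ckpt_dir = Path(comfyui_root) / "models" / "checkpoints"
    return [name for name in CHECKPOINTS if not (ckpt_dir / name).is_file()]
```

Run it against your ComfyUI root before loading a workflow; an empty list means you are good to go.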
v30-txt2img
- updated workflow for new checkpoint method.
- Text 2 Image.
links at top
v32-txt2img-lora
- updated workflow for new checkpoint method.
- lora loader
- Text 2 Image.
links at top
v35-txt2img-canny
- updated workflow for new checkpoint method.
- lora loader
- ControlNet Canny
- Text 2 Image.
links at top
v40-img2img
- updated workflow for new checkpoint method.
- Image to Image with prompting, Image Variation by empty prompt.
links at top
v42-img2img-lora
- updated workflow for new checkpoint method.
- added Lora Loader for testing newly trained LoRAs
- Image to Image with prompting, Image Variation by empty prompt.
links at top
v45-img2img-canny
- updated workflow for new checkpoint method.
- Lora Loader
- canny support
- Image to Image with prompting, Image Variation by empty prompt.
links at top
v50-img2vision
- updated workflow for new checkpoint method.
- Image to CLIP Vision + Text Prompt.
links at top
v54-img2vision-lora
- updated workflow for new checkpoint method.
- added Lora Loader for testing newly trained LoRAs
- Image to CLIP Vision + Text Prompt.
links at top
v55-img2vision-canny
- updated workflow for new checkpoint method.
- Image to CLIP Vision + Text Prompt.
- adds canny support
links at top
v60-img2remix
- updated workflow for new checkpoint method.
- Multi-Image to CLIP Vision + Text Prompt.
links at top
v65-img2remix-canny
- updated workflow for new checkpoint method.
- Multi-Image to CLIP Vision + Text Prompt.
- adds canny support
links at top
v66-img2remix-lora
- updated workflow for new checkpoint method.
- added Lora Loader for testing newly trained LoRAs
- Multi-Image to CLIP Vision + Text Prompt.
links at top
v70-img2remix-faceswap
- updated workflow for new checkpoint method.
- Multi-Image to CLIP Vision + Text Prompt.
- Use an HD Face image with Reactor.
links at top
v75-img2faceswap-canny
- updated workflow for new checkpoint method.
- Multi-Image to CLIP Vision + Text Prompt.
- canny support added
- Use an HD Face image with Reactor.
links at top
v82-Cascade-Anyone
- Add a high quality face image with 4 character reference images using prompts.
- Built from v70 to approximate custom characters without training or ControlNet.
links at top
v85-Anyone-canny
- Add a high quality face image with 4 character reference images using prompts.
- Built from v70 to approximate custom characters without training or ControlNet.
- canny support added
links at top
v95-img2vision-canny
- Add 3 high quality reference images for Vision
- img2img with canny using the same image
- built from v85 to do complex remix variations
- canny control net and lora support added
links at top
UPDATE: removed Photomaker version, because it actually had no effect.
I want to stress that you MUST update your ComfyUI to the latest version. You should also update ALL your custom nodes, because there is no way to know which ones might affect the UNet, CLIP and VAE spaces that Cascade now uses to generate our images.
In addition, I have disabled a lot of custom nodes I did not need on that run. It's easy: just add ".disabled" to the folder name. This is what the button in the Manager does. It's very easy to "switch off" some custom nodes this way.
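The rename trick above can be sketched as a small helper. This is a minimal sketch assuming the default custom_nodes folder layout; the function name is mine, not ComfyUI's:

```python
from pathlib import Path

def toggle_custom_node(custom_nodes_dir, node_name):
    """Switch a ComfyUI custom node off (or back on) by renaming its
    folder -- the same trick the Manager's disable button uses."""
    base = Path(custom_nodes_dir)
    enabled = base / node_name
    disabled = base / (node_name + ".disabled")
    if enabled.is_dir():
        enabled.rename(disabled)   # folder ends in ".disabled" -> not loaded
        return "disabled"
    if disabled.is_dir():
        disabled.rename(enabled)   # strip the suffix to re-enable it
        return "enabled"
    raise FileNotFoundError(f"No folder named {node_name!r} in {base}")
```

Restart ComfyUI after toggling; node folders are only scanned at startup.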
~
Everything below applies to the early method of loading all the models from the official repo: https://huggingface.co/stabilityai/stable-cascade
~ I will leave the early method here for anyone wishing to use it :)
UltraBasic Stable Cascade Workflows for ComfyUI:
Article here: https://civarchive.com/articles/4161
IMG2IMG UPDATE:
These older workflows were deprecated on day 4 by a new method; however, they still work fine.
v10 = txt2img Stable Cascade here: https://civarchive.com/models/310409?modelVersionId=348385
v12 = v10 txt2img without custom nodes: https://civarchive.com/models/310409?modelVersionId=351470
v16 = img2img (stage C) Stable Cascade Workflow here: https://civarchive.com/models/310409?modelVersionId=351400
v17 = v16 img2img without custom nodes for scaling: https://civarchive.com/models/310409?modelVersionId=351464
v18 = v16 img2img (stage B and C ) now supported by new default node: https://civarchive.com/models/310409?modelVersionId=351658
You can squeeze it onto any GPU if you use the correct combination.
These notes are in the Workflow also ;)
Cascade Combos:
stage_b + stage_c ~ 22GB
stage_b_bf16 + stage_c_bf16 ~ 12GB
stage_b_lite + stage_c_lite ~ 8GB
stage_b_lite_bf16 + stage_c_lite_bf16 ~ 5GB
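The combo chart above can be turned into a quick picker. A minimal sketch using the approximate VRAM figures from the chart (real usage also depends on resolution and batch size):

```python
# Approximate VRAM needs (GB) from the combo chart above,
# ordered from heaviest/highest-quality to lightest.
COMBOS = [
    ("stage_b + stage_c", 22),
    ("stage_b_bf16 + stage_c_bf16", 12),
    ("stage_b_lite + stage_c_lite", 8),
    ("stage_b_lite_bf16 + stage_c_lite_bf16", 5),
]

def pick_combo(vram_gb):
    """Return the heaviest combo that fits the given VRAM, or None."""
    for name, need in COMBOS:
        if vram_gb >= need:
            return name
    return None
```

For example, a 12GB card gets the bf16 pair, while a 6GB card falls through to the lite bf16 pair.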
I put together the paths where you need to put all the models, in case you have to manually download each of them due to a poor connection or whatever :)
Hugging Face has the models we need; follow the chart below to find where each one goes.
https://huggingface.co/stabilityai/stable-cascade
Text Encoder
ComfyUI Path: models\clip\Stable-Cascade\
HF Filename: /text_encoder/model.safetensors
text encoder CLIP = 1.39GB
Stage C
ComfyUI Path: models\unet\Stable-Cascade\
HF Filename: stage_c.safetensors
stage_c = 14.4GB
stage_c_bf16 = 7.18GB
stage_c_lite = 4.12GB
stage_c_lite_bf16 = 2.06GB
Stage B
ComfyUI Path: models\unet\Stable-Cascade\
HF Filename: stage_b.safetensors
stage_b = 6.25GB
stage_b_bf16 = 3.13GB
stage_b_lite = 2.8GB
stage_b_lite_bf16 = 1.4GB
Stage A
ComfyUI Path: models\vae\Stable-Cascade\
HF Filename: stage_a.safetensors
stage_a = 73.7MB
Effnet Encoder
ComfyUI Path: models\vae\Stable-Cascade\
HF Filename: effnet_encoder.safetensors
img2img VAE encoder = 81.5MB
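The chart above boils down to a simple filename-to-folder mapping. A minimal sketch (the helper function is mine; it assumes the default ComfyUI folder layout and that variant names like stage_c_bf16 go next to their base file):

```python
import os

def destination_for(hf_filename, comfyui_root="."):
    """Full path where a downloaded Stable Cascade file belongs,
    following the chart above. Variants (e.g. stage_c_bf16) map to
    the same folder as their base file."""
    name = hf_filename.split("/")[-1]
    if name == "model.safetensors":                      # text encoder
        sub = os.path.join("models", "clip", "Stable-Cascade")
    elif name.startswith(("stage_b", "stage_c")):        # UNet stages
        sub = os.path.join("models", "unet", "Stable-Cascade")
    elif name.startswith(("stage_a", "effnet")):         # VAE / encoder
        sub = os.path.join("models", "vae", "Stable-Cascade")
    else:
        raise ValueError(f"Unknown Stable Cascade file: {name}")
    return os.path.join(comfyui_root, sub, name)
```

For example, `destination_for("stage_c_bf16.safetensors")` points at the unet folder, while the effnet encoder lands next to stage_a in the vae folder.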
Description
adds lora loading functionality to the img2vision workflow
FAQ
Comments (10)
After 2 generations something goes wrong; from then on it just says "Prompt executed in 0.00 seconds" and fails to generate. I don't get any error messages.
can I use it without the lora? or do I need it?
If you pick the -lora version and do not want to load a lora, you can click on the Lora Loader node and press Ctrl+B to bypass it. There are also versions without the lora loader available for download.
quick answer is no :)
Stable Cascade is too confusing, I'm going back to SDXL
@yofoton174609 SDXL is king, cascade is a research model
@driftjohnson true, I tried Playground (the closest competitor to SC), but it's not worth ditching all the LoRAs and embeds for Playground. It needs to be radically better for me to leave SDXL.
Is anyone else getting bad faces, hands and text? I'm using normal prompts but it keeps happening
You can try removing all negative prompts; the model card states that Cascade was trained not to need them. In some cases I find the quality actually improves with no negative prompt.
@driftjohnson thanks that seems to be improving things, I tried to use the lite version of the models but the workflow won't work, any reason it might be failing?
Help, please
To see the GUI go to: http://127.0.0.1:8188
FETCH DATA from: C:\Users\Belegante\Desktop\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Manager\extension-node-map.json
got prompt
'🔥 - 57 Nodes not included in prompt but is activated'
model_type STABLE_CASCADE
adm 0
clip missing: ['clip_g.logit_scale']
model_type STABLE_CASCADE
adm 0
clip missing: ['clip_g.logit_scale']
Requested to load StableCascadeClipModel
Loading 1 new model
ERROR:root:!!! Exception during processing !!!
ERROR:root:Traceback (most recent call last):
File "C:\Users\Belegante\Desktop\ComfyUI_windows_portable\ComfyUI\execution.py", line 152, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Belegante\Desktop\ComfyUI_windows_portable\ComfyUI\execution.py", line 82, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Belegante\Desktop\ComfyUI_windows_portable\ComfyUI\execution.py", line 75, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Belegante\Desktop\ComfyUI_windows_portable\ComfyUI\nodes.py", line 904, in encode
output = clip_vision.encode_image(image)
^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: 'NoneType' object has no attribute 'encode_image'
Prompt executed in 17.34 seconds


