What is this workflow?
This is my personal ComfyUI workflow built upon what I've learned and used in SD1.5, expanded to cover all possible use cases for myself. On a 4GB VRAM + 16GB RAM system, with everything active, it can still run and provide strong results (if you're willing to wait a while).
This workflow includes so many things that it's best to see it for yourself. The following is a small subset of its supported features:
v-prediction support + RescaleCFG (disabled by default)
Controllable CLIP Skip
ELLA + Ollama prompt upscale (SD1.5-exclusive feature)
Scalable ControlNet group (disabled by default)
Scalable IP-Adapter group (disabled by default)
Dynamic Thresholding (disabled by default)
2-pass txt2img
Perlin noise latent blend
Watermark removal using CLIPSeg + Lama cleaner
Scalable ADetailer group
Notifications and sounds
Preview chooser for batch images
Full Civitai metadata support
Prompt and LoRA scheduling
Multi-checkpoint setup (still in testing)
Wildcard (and wildcard file) support
...
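To illustrate the wildcard feature above: a minimal sketch of the two common wildcard forms, assuming the usual dynamic-prompts syntax (`{a|b|c}` picks one inline option, `__name__` picks a random line from the wildcard file `name`). Function and variable names here are illustrative, not actual nodes from the workflow.

```python
import random
import re

def expand_wildcards(prompt, wildcard_files, rng=random):
    """Expand {a|b|c} inline choices and __name__ file wildcards.

    wildcard_files maps a wildcard name to its list of lines.
    """
    # Inline {a|b|c} choices: pick one option
    prompt = re.sub(r"\{([^{}]*)\}",
                    lambda m: rng.choice(m.group(1).split("|")), prompt)
    # __name__ file wildcards: pick a random line from that file
    prompt = re.sub(r"__(\w+)__",
                    lambda m: rng.choice(wildcard_files[m.group(1)]), prompt)
    return prompt
```

Each queued generation re-rolls the wildcards, which is why batches produced with wildcards vary even at a fixed seed for the sampler.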
Despite being built first and foremost for SD1.5, this workflow can also easily use SDXL models or any all-in-one checkpoint supported by prompt-reader-node. In those cases, ELLA is completely useless, so do not enable it.
How to use this workflow?
Step 0: Get ComfyUI and ComfyUI-Manager
Step 1: Download the workflow file
Step 2: Import it into ComfyUI
Step 3: Download all missing nodes
Step 4: Grab all missing models (more details below)
Step 5: Have fun generating!
Model requirements
Basic
A checkpoint file (SD1.5, SDXL,...)
A VAE file (optional if the checkpoint has a baked-in VAE)
4x-AnimeSharp upscale model (or any other 4x upscale model of choice)
2x-AniScale2 upscale model (or any other upscale model of choice for the final upscale step)
ControlNet
All models are available inside ComfyUI-Manager under Model Manager. If you're using PonyXL or IllustriousXL, you will need to adapt your own solutions to them.
Depth ControlNet (using Depth-Anything preprocessor)
Lineart ControlNet (using AnyLine Lineart preprocessor)
OpenPose ControlNet (using DWPose preprocessor)
Others, depending on which ControlNet models you add
Other features
IP-Adapter: an IP-Adapter model and its related CLIP-G (CLIP Vision) model. They should all be available under Model Manager
ELLA: See https://github.com/TencentQQGYLab/ComfyUI-ELLA?tab=readme-ov-file#orange_book-models
ADetailer/FaceDetailer: Most models should be available under Model Manager, except Anzhc's face YOLO, which can be acquired at https://huggingface.co/Anzhc/Anzhcs_YOLOs. Place the downloaded file inside ultralytics/segm
Ollama: See https://ollama.com/download and follow their instructions. After installing ollama on your system, grab https://ollama.com/huihui_ai/llama3.2-abliterate:3b
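Once the model is pulled, the workflow talks to the local Ollama server over its HTTP API (by default on port 11434). A minimal sketch of what a prompt-upscale request looks like; the instruction text and function names here are illustrative, not the workflow's actual prompt.

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # default Ollama endpoint

def build_request(tags, model="huihui_ai/llama3.2-abliterate:3b"):
    # The rewrite instruction below is an illustrative example, not
    # the exact system prompt used by the workflow's Ollama node.
    return {
        "model": model,
        "prompt": f"Rewrite this Stable Diffusion prompt with richer detail: {tags}",
        "stream": False,
    }

def upsample_prompt(tags):
    data = json.dumps(build_request(tags)).encode()
    req = urllib.request.Request(OLLAMA_URL, data=data,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

If generation hangs at the Ollama node, check that `ollama serve` is running and that the model tag matches one you have pulled.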
Description
This is very much overdue at this point, but here's the updated, hopefully not broken (and insanely overkill) version.
Changelog: V2.0 - Restart
Switched from PowerNoiseSuite's Perlin noise sampler to the Restart sampler + Latent Noise Injection
WARNING: Restart sampling can and will stress your system out, so be careful
Switched from comfyui-prompt-reader-nodes back to ComfyUI Image Saver for better LoRA support and better caching
As part of the transition to Image Saver, the 2nd checkpoint is now also included in the metadata, and LoRA loading is handled through prompts. To best use this system, enable LoRA autocomplete in ComfyUI-Custom-Scripts by going to Settings -> pysssss -> Autocomplete -> Loras enabled
Merged IP-Adapter and ControlNet to one Image References section and condensed certain features (mostly IP-Adapter)
Added WD1.4 Tagger to make better use of reference images
Split certain sections to allow for more refined control in each section
Added more advanced model-changing settings (CFGZeroStar, RenormCFG, MaHiRo Guidance, ...) [probably mutually exclusive, but they could work together]
Added documentation for usability
(As of 24/12/2025: Slightly updated to fix certain issues like Schedule Selector being unavailable and converted some of the merged nodes into subgraphs)
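The LoRA-loading-through-prompts change above relies on the widely used `<lora:name:weight>` tag syntax (weight defaulting to 1.0 when omitted). A minimal sketch of how such tags can be split out of a prompt; the helper name is mine, not a node from the workflow.

```python
import re

# <lora:name:weight> or <lora:name>; weight defaults to 1.0
LORA_TAG = re.compile(r"<lora:([^:>]+)(?::([0-9.]+))?>")

def parse_lora_tags(prompt):
    """Split a prompt into clean text plus (lora_name, weight) pairs."""
    loras = [(name, float(weight) if weight else 1.0)
             for name, weight in LORA_TAG.findall(prompt)]
    clean = LORA_TAG.sub("", prompt)
    # Collapse the whitespace left behind by removed tags
    clean = re.sub(r"\s{2,}", " ", clean).strip()
    return clean, loras
```

Keeping the tags in the prompt is also what lets Image Saver write them into the Civitai-compatible metadata, since the saved prompt string still contains the original `<lora:...>` markers.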