ComfyUI Workflow (Image Generation)
Required Custom Nodes:
Ollama Registry: https://registry.ollama.ai/huihui_ai
To run qwen3.5, update the Ollama image (0.17.7): docker pull ollama/ollama:latest 2>&1
Remove the eclipse container: docker rm eclipse-ollama 2>&1
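The two commands above can be combined into one update sequence. The run flags below (GPU access, named volume, port) are an example only, not the original container's run command; adjust the names to your setup. With a named volume, downloaded models survive removing the container.

```shell
docker stop eclipse-ollama 2>&1        # stop the running container
docker rm eclipse-ollama 2>&1          # remove it (models in the named volume survive)
docker pull ollama/ollama:latest 2>&1  # fetch the updated image (0.17.7)
# Recreate the container; these flags are illustrative, not the original run command.
docker run -d --gpus=all -v ollama:/root/.ollama -p 11434:11434 \
  --name eclipse-ollama ollama/ollama
```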
If you need help installing Nunchaku or using the nodes, check the README files in the repo's readme folder on GitHub.
llama-cpp-python is required for the loading_method GGUF (llama-cpp-python) in the Smart LML node.
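A sketch of what such a loading-method switch looks like. The `Llama(model_path=...)` constructor is the real llama-cpp-python entry point; the dispatcher function, its name, and the `n_ctx` value are illustrative, not the Smart LML implementation:

```python
def load_language_model(model_path: str, loading_method: str):
    """Illustrative dispatcher -- not the actual Smart LML node code."""
    if loading_method == "GGUF (llama-cpp-python)":
        # Requires `pip install llama-cpp-python`; loads a quantized GGUF file.
        from llama_cpp import Llama
        return Llama(model_path=model_path, n_ctx=4096)
    raise ValueError(f"Unknown loading_method: {loading_method!r}")
```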
Nunchaku wheels:
Nunchaku model files:
Wildcards
Version 3 uses a new layout and replaces context pipes with KJ's Set/Get nodes plus 2 additional Eclipse nodes (GetFirst and GetAllActive).
Why the layout changed: In the old workflow, changing the model or LoRAs triggered a full re-run including SmartLML's image description, which is slow. The new layout decouples this — SmartLML runs once to describe the image, and that result is reused via Set/Get nodes. Subsequent runs skip the description step when the seed/index is fixed. When shuffling images (-4), SmartLML naturally re-runs since the image changes.
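The decoupling amounts to caching the slow description keyed by what actually changed. A minimal sketch of the idea (names and the counter are illustrative, not the node implementation):

```python
_description_cache = {}
calls = {"count": 0}

def describe_image(image_id, seed):
    # Placeholder for the slow SmartLML vision call.
    calls["count"] += 1
    return f"description of {image_id}"

def get_description(image_id, seed):
    # Re-run only when the (image, seed) pair changes: a fixed seed/index
    # reuses the cached result; shuffle mode (-4) changes image_id and re-runs.
    key = (image_id, seed)
    if key not in _description_cache:
        _description_cache[key] = describe_image(image_id, seed)
    return _description_cache[key]
```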
The basic guider is gone — the CFG guider is used for everything (set CFG to 1.0 for Flux, etc.).
Some rgthree nodes have been ported to Eclipse (Fast Bypasser/Muter, Fast Groups Bypasser/Muter, Repeater, Image Compare). They are converted to work with the V3 API and Nodes 2.0, using a different approach — no LiteGraph.registerNodeType interception or subclassing.
GetFirst & GetAllActive: Virtual frontend nodes that extend KJNodes' Set/Get system with priority-based variable resolution.
Get First resolves the first active (not muted/bypassed) SetNode from a prioritized list of variables — ideal for fallback chains (e.g., try LoRA model first, fall back to base model).
Get All Active outputs all active SetNode variables simultaneously, each on its own output slot — perfect for collecting multiple active components.
Both nodes feature:
Type filtering to show only matching SetNode variables
Automatic rename tracking when SetNodes are renamed
Reorder context menu (Move to Top / Up / Down / Bottom, Insert Above)
Green dot indicators showing which variables are currently active
Optional virtual link visualization
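In plain Python, the two resolution strategies behave roughly like this (a sketch of the behavior, not the frontend node code; the dict shape is assumed for illustration):

```python
def get_first(priority_list, set_nodes):
    """Return the value of the first active SetNode in priority order."""
    for name in priority_list:
        node = set_nodes.get(name)
        if node is not None and node["active"]:  # not muted/bypassed
            return node["value"]
    return None

def get_all_active(priority_list, set_nodes):
    """Return the values of all active SetNodes, one per output slot."""
    return [set_nodes[name]["value"]
            for name in priority_list
            if name in set_nodes and set_nodes[name]["active"]]
```

With a muted LoRA model, Get First falls through to the base model, while Get All Active collects whatever is currently enabled.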

Description
Requires ComfyUI_Eclipse version 2.4.29
Added a new node to use the prompt data from loaded images: Pipe IO Generation Data (gated).
Changed SmartLoader+ v1/v2 and its pipe-out node to populate configure_sampler.
Load Image From Folder has a new seed_input to freeze the currently loaded image until the seed changes, e.g. in mode -4 (shuffle).
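The freeze behavior can be sketched as follows: the image is re-picked only when the incoming seed differs from the last one. Class and method names are illustrative, not the node's code:

```python
import random

class FrozenFolderLoader:
    """Sketch of the seed_input freeze for Load Image From Folder."""

    def __init__(self, files):
        self.files = files
        self._last_seed = None
        self._current = None

    def load(self, seed):
        if seed != self._last_seed:     # new seed -> pick a new image
            rng = random.Random(seed)   # deterministic per seed (shuffle mode)
            self._current = rng.choice(self.files)
            self._last_seed = seed
        return self._current            # same seed -> image stays frozen
```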
Load Image (Metadata Pipe) now has a delete button to clean up ComfyUI's input folder from within the workflow.
Fixed Load Image From Folder / Read Prompt From Files re-execution.
What changed
Both load-image groups can load and use the prompt data from the image if available, selectable with a Fast Muter (sampler settings, positive prompt, negative prompt, and seed).
That is why the model loader also has to populate its sampler settings: they have a higher priority because of model-specific settings (when saved or enabled).
In the initial render group everything is collected and assigned (if not None).
To prevent errors, both prompt inputs from the image use dummies when nothing is provided, so the flow keeps running.
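The collect-and-assign step and the dummy fallback can be sketched like this (function names and the empty-string dummy are illustrative, not the actual node code):

```python
DUMMY_PROMPT = ""  # placeholder so downstream nodes never receive None

def assign_if_set(current, incoming):
    """Higher-priority sources (e.g. model-specific sampler settings)
    overwrite the collected value only when they actually provide one."""
    return incoming if incoming is not None else current

def prompt_or_dummy(prompt):
    """Keep the flow running even when the loaded image carries no prompt."""
    return prompt if prompt is not None else DUMMY_PROMPT
```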
Fixed the negative prompt being saved even when its option is set to No.
The negative prompts are collected inside the prompt group, together with "Input Prompts". Disable the negative prompt option in the load-image groups if you don't want to use the negative prompt from the loaded image.