This workflow allows you to harness the power of FLUX.1 [dev] within ComfyUI. Follow these steps to set up and run the model.
The true "simplest" version of a Flux workflow can be found here, but it uses the FP8 version which is slightly worse. For that reason, I do not use the "Load Checkpoint" node version of the workflow.
Note: To train LoRAs, see my video guide here: https://civarchive.com/articles/6533/how-to-train-a-lora-for-flux1-dev-with-simpletuner
Instructions:
Model Placement:
Place the FLUX.1 [dev] checkpoint in ComfyUI/models/unet (not in the checkpoints folder)
Download Required Files:
Original weights: flux1-dev.sft
FP8 version (for <24GB VRAM systems): flux1-dev-fp8.safetensors
Text encoders (place in ComfyUI/models/clip): flux_text_encoders
VAE (place in ComfyUI/models/vae): ae.sft
For lower RAM usage, download the FP8 T5XXL encoder (with some quality degradation)
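Assuming a default ComfyUI directory layout, the placement above can be sketched as a short shell session. The text-encoder filenames (clip_l.safetensors, t5xxl_fp16.safetensors) are the usual names from the flux_text_encoders download and are an assumption here; adjust them to match what you actually downloaded.

```shell
# Create the expected model folders (run from ComfyUI's parent directory):
mkdir -p ComfyUI/models/unet ComfyUI/models/clip ComfyUI/models/vae

# Then move each download into place, for example:
#   mv flux1-dev.sft ComfyUI/models/unet/                          # NOT models/checkpoints
#   mv clip_l.safetensors t5xxl_fp16.safetensors ComfyUI/models/clip/
#   mv ae.sft ComfyUI/models/vae/
```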
Low VRAM Setup:
Launch ComfyUI with the "--lowvram" argument (add to your .bat file) to offload the text encoder to CPU
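On the Windows portable build, the flag goes in the launch .bat. A typical edited launch line looks like the following; the exact script name and embedded Python path depend on your install, so treat this as a sketch:

```
.\python_embeded\python.exe -s ComfyUI\main.py --windows-standalone-build --lowvram
```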
Performance Notes:
Confirmed to run on:
RTX 3090 (24GB): 1.29s/it
RTX 4070 (12GB): 85s/it
Both tests used the FP8 quantized version. Note that the 4070 is significantly slower.
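To get a rough per-image time from those per-iteration speeds, multiply by the sampling step count. The 20 steps used below is an illustrative assumption, not part of the benchmark:

```python
# Rough per-image time = seconds per iteration * sampling steps
steps = 20  # assumed step count, for illustration only
benchmarks = {"RTX 3090 (24GB)": 1.29, "RTX 4070 (12GB)": 85.0}

for gpu, sec_per_it in benchmarks.items():
    print(f"{gpu}: ~{sec_per_it * steps:.0f} s per image at {steps} steps")
```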
Preview Setup:
Update your ComfyUI to the latest version
In the Manager menu, set "Preview Method" to "Auto"
Using the Workflow
Load the provided workflow file into ComfyUI
Adjust your prompts and parameters as desired
Run the workflow to generate images
Note: This workflow includes a custom node for metadata. It can run in vanilla ComfyUI, but you may need to adjust the workflow if you don't have this custom node installed.
Introduction to FLUX.1 [dev]
FLUX.1 [dev] is a groundbreaking 12 billion parameter rectified flow transformer for text-to-image generation, developed by Black Forest Labs. As part of the innovative FLUX.1 suite of models, it represents a significant leap forward in image synthesis technology, offering:
State-of-the-art image quality and prompt adherence
Competitive performance with closed-source alternatives
Enhanced efficiency through guidance distillation training
Support for diverse aspect ratios and resolutions (0.1 to 2.0 megapixels)
Black Forest Labs, founded by distinguished AI researchers and engineers, brings a wealth of experience from developing foundational generative AI models like VQGAN, Latent Diffusion, and Stable Diffusion. FLUX.1 [dev] showcases their commitment to pushing the boundaries of AI-generated imagery.
License and Usage
FLUX.1 [dev] is released under a non-commercial license. Please refer to the official FLUX.1 [dev] license for full terms and conditions. This model is intended for research, personal projects, and non-commercial use only.
Enjoy exploring the capabilities of FLUX.1 [dev] with this ComfyUI workflow!
Description
This version bypasses applying the LoRA to the text encoder. This is recommended since current LoRAs for FLUX do not train t5.
FAQ
Comments (21)
Very strange to read about the 4070: on my Linux system with a 12GB RTX 3060 I got generation speeds of around 15-18 sec/it. Much slower than the 3090, but somehow faster than a 4070 with 12GB?
How do I get this custom node, or how do I create one myself?
Change the custom nodes to the ones included with ComfyUI by default, or install the following:
comfy-image-saver
https://github.com/giriss/comfy-image-saver
Excellent work flow. The only one that produces high quality images and works with my 12 gb 3060. Thank you very much!
Doesn't work.
String Literal, Height and Width are in RED.
You have to install those custom nodes, just like with any other ComfyUI workflow. Do you have ComfyUI Manager installed? It makes the process much easier.
@markury yes. Was my bad.
I am completely new to ComfyUI having always used A1111 and Forge (and others like me will come along with the advent of Flux).
Thanks to you I discovered that the Manager can install missing nodes.
Obviously I have solved it. I thank you very much for your time.
@leclettico912 I am getting the same error. How did you fix it??
ComfyUI has released a new node to support flux1-bnb-nf4
https://github.com/comfyanonymous/ComfyUI_bitsandbytes_NF4
how to add negative prompt?
flux doesn't have negative conditioning
If you insert the SplitSigmas node between the output of the BasicScheduler and the input of the SamplerCustomAdvanced node, you don't need to set the --lowvram flag for cards with 12GB of VRAM - at least I didn't need to on my RTX 3060. The low_sigmas output of the SplitSigmas node goes to the sigmas input of the SamplerCustomAdvanced node. It takes a LONG time to cache on the first run, but once everything is loaded it is more than quick enough to play around with the Dev model, including the various LoRAs that are coming out for Flux on a daily basis.
One thing I would say is definitely don't run anything else on your PC while this is buffering. Not even a shared folder to another computer. If you ask the GPU or CPU to do literally anything else at all while the model is loading it will crash and you will have to force restart your entire PC. Ask me how I know.
I think there are a lot of us who are temporarily ditching Automatic1111 to play around with Flux, who are completely new to Comfy, or who tried it in the past and were put off by its steep learning curve. That certainly describes me. So if anyone out there knows an even more efficient way of loading Flux LoRAs inside Comfy for users with 12GB of VRAM or less, please take us through it step by step. Thanks.
Same here. I dropped Auto1111. I just ordered a laptop with an Intel i9, a 4090 with 16GB VRAM, 32GB RAM, and an external graphics card enclosure that connects to the laptop by a Thunderbolt 3 cable. I have an old 3060; I hope it works.
The three text input boxes failed to load up, so I had to add them back in manually for some reason, but other than that... great job.
One of the VERY few workflows I've downloaded and shortly after had up and running exactly as expected. Seems to do a simple job very well too.
Thank you :)
Nothing is loading in the comfy ui window. Tried both json and png file.
Update your ComfyUI and install ComfyUI manager to manage the custom nodes
NF4 doesn't work for me.
These links are dead and need replacing/updating:
Original weights: flux1-dev.sft
VAE (place ae.sft in ComfyUI/models/vae): ae.sft
Hi. Can you tell me what Nodes I need to install. It is telling me that it is missing Int Literal and String Literal
same
Invoke Community Edition also utilizes Workflows. Is it possible for this workflow to function on Invoke AI? Thank you.

