Full checkpoint with improved TE; do not load an additional CLIP/TE.
FLUX.1 (Base UNET) + Google FLAN
All uploaded models are sourced from the 65GB full-FP32 model.
Per the Apache 2.0 license, FLAN is attributed to Google.
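The 65GB FP32 figure is consistent with simple bytes-per-parameter arithmetic. A minimal sketch, assuming approximate parameter counts (~12B for the FLUX.1 transformer, ~4.8B for the T5-XXL/FLAN text encoder; these counts are assumptions, not stated on this page):

```python
# Rough disk/VRAM footprint per precision for a FLUX-style full checkpoint.
# Parameter counts below are assumptions: ~12B for the FLUX.1 transformer,
# ~4.8B for the T5-XXL (FLAN) text encoder; CLIP-L (~0.1B) is negligible.
PARAMS = {"flux_transformer": 12e9, "t5xxl_encoder": 4.8e9}

BYTES_PER_PARAM = {"FP32": 4.0, "BF16": 2.0, "FP8": 1.0, "NF4": 0.5}

def footprint_gb(precision: str) -> float:
    """Total checkpoint size in GB (1 GB = 1e9 bytes) at a given precision."""
    total_params = sum(PARAMS.values())
    return total_params * BYTES_PER_PARAM[precision] / 1e9

for precision in BYTES_PER_PARAM:
    print(f"{precision}: ~{footprint_gb(precision):.0f} GB")
```

Under these assumptions FP32 lands near the stated 65GB, and BF16 near the ~32GB a commenter reports below.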
Comments (17)
Beautifully done, my friend. I cannot wait to test this further with my new LoRAs.
Thank you for sharing this work of art. /Bow
Thank you
Beautiful images! Both BF16 and NF4 work very well with Forge on an RTX 3090 24GB.
Thanks for sharing them.
Thanks
Same card here; which do you prefer, BF16 or NF4?
Both models work well. BF16 certainly has more detail, but for normal use I personally think NF4 is fine. 32GB for BF16 is a lot, and it is a bit slower than NF4.
It depends on what you need to do, because I think the difference between the two models is not that big. I use NF4 + rescaling more often. It is worth making a reference prompt and testing it on both models, as I did.
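The BF16-versus-NF4 tradeoff discussed above can be seen in miniature. Below is a toy sketch of blockwise 4-bit quantization, not Forge's or bitsandbytes' actual code: real NF4 uses a fixed codebook spaced for normally distributed weights, while the uniform 16-level codebook here is a simplification for illustration.

```python
import numpy as np

# Illustrative blockwise 4-bit quantization, in the spirit of NF4.
# NOT the bitsandbytes implementation: real NF4 uses a codebook spaced
# for normally distributed weights; 16 uniform levels per block are used
# here purely to show where the small detail loss versus BF16 comes from.
LEVELS = np.linspace(-1.0, 1.0, 16)  # 16 codes -> 4 bits per weight

def quantize_4bit(w, block=64):
    """Return (4-bit codebook indices, per-block absmax scales)."""
    w = w.reshape(-1, block)
    absmax = np.abs(w).max(axis=1, keepdims=True)  # one FP scale per block
    normalized = w / absmax                        # values now in [-1, 1]
    idx = np.argmin(np.abs(normalized[:, :, None] - LEVELS), axis=2)
    return idx, absmax

def dequantize_4bit(idx, absmax):
    """Reconstruct approximate weights from indices and scales."""
    return LEVELS[idx] * absmax

rng = np.random.default_rng(0)
w = rng.standard_normal(4096).astype(np.float32)   # stand-in for a weight tensor
idx, absmax = quantize_4bit(w)
w_hat = dequantize_4bit(idx, absmax).reshape(-1)
err = float(np.abs(w - w_hat).max())
print(f"max round-trip error: {err:.4f}")
```

Each weight is stored as a 4-bit index plus a shared per-block scale, roughly a quarter of BF16's 16 bits per weight, which is why NF4 fits comfortably on a 24GB card while giving up a little fine detail.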
@jazara930 I got many errors with NF4; can you share the workflow for that model?
Hi, I can't help with the workflow because I don't use ComfyUI; it is too complicated for me, and I only do this for fun for now. I generally use Forge with these parameters: Euler/Beta, 20 steps or more with 1.5x rescaling, or 30-40 steps without the rescaler. Sometimes I use an SD1.5 model for rescaling with excellent results. I have not had problems with either NF4 or BF16 so far.
Any chance of an FP8 of your Schnell?
Also, what is the difference when using Google FLAN?
I will build the FP8 soon. The FLAN TE is a finetune of T5xxl done by Google; the exact details are not available, to my knowledge.
BF16 or FP16 Q8 GGUF version?
GGUF is here: https://civitai.com/models/1488958/flux-monte-carlo-full-fp32
FLAN FP32 Pruned is here: https://civitai.com/models/1489246/google-flan-t5xxl-pruned-for-comfy-or-forge
@Felldude I should have clarified... BF16 or FP16 Q8 GGUF. FP32 is only needed for training.
Changed the original comment to clarify.
Details
Files