Full checkpoint with improved TE. Do not load an additional CLIP/TE.
FLUX.1 (Base UNET) + Google FLAN
All uploaded models are sourced from the 65 GB full FP32 version.
Per the Apache 2.0 license, FLAN is attributed to Google.
Comments (12)
Jeepers she's hot. Too bad Flux costs so much buzz to use :(
Wonderful model, thank you! Quite slow, slower than any other FP8 I've tried, but I'm in no hurry 😊
If you're looking for speed, I'd suggest the NF4 Schnell in Forge: I get 4.5 seconds per iteration compared to 7.6 seconds per iteration on the full FP16 model, on an 8 GB card.
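The timings quoted above work out to a sizeable speedup. A quick sketch of the arithmetic, using only the two numbers from this comment:

```python
# Speedup math from the quoted timings: 4.5 s/it for NF4 Schnell in Forge
# vs 7.6 s/it for the full FP16 model, both on an 8 GB card.
fp16_s_per_it = 7.6
nf4_s_per_it = 4.5

speedup = fp16_s_per_it / nf4_s_per_it                 # ~1.69x faster per iteration
saved_pct = (1 - nf4_s_per_it / fp16_s_per_it) * 100   # ~41% less time per iteration

print(f"{speedup:.2f}x faster, {saved_pct:.0f}% less time per iteration")
```

So NF4 is roughly 1.7x faster per iteration here, at the cost of the heavier quantization.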
ComfyUI with my LoRAs gives much more interesting results. This particular model gives amazing detail and in general the images look more artistic. One of the best FLUX Dev, I think.
Well, the speed... I have a GTX 2060 with 6 GB. Nothing runs fast for me, though everything at least starts.
Absolutely brilliant, the best of all the FLUX Dev models, even surpassing the original.
@Felldude Could you make one with an Asian aesthetic? Since this model came out I haven't used any other model in the past two months. I really hope it keeps getting better. This is just a small suggestion; every new release is something to be happy about.
@libin771018395 I have some understanding of Asian women, but a full overhaul of the training would require a much larger dataset.
@Felldude I really admire the work you've put into this model. Looking forward to even better versions.
This is a really good model. I really like the images it generates; the quality is great.
I am running the FP16, but it seems to run out of VRAM on my 4090, so I am trying FP8 to see if that avoids it. I suspect the full FP16 would need a lot more VRAM, perhaps 48 GB.
Is there any way to get a GGUF Q8_0 port of this to reduce the VRAM, while maintaining quality?
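For a rough sense of why Q8_0 helps, here is a back-of-the-envelope sketch of the transformer weight sizes per precision. The ~12B parameter count for FLUX.1 is my assumption (it is not stated on this page); GGUF Q8_0 stores blocks of 32 int8 weights plus one fp16 scale, i.e. 34 bytes per 32 weights. TE, CLIP, and VAE add more on top.

```python
# Rough VRAM estimates for the transformer weights alone, assuming
# ~12B parameters (my assumption, not from this page).
params = 12e9

fp16_gb = params * 2 / 1e9            # 2 bytes per weight
fp8_gb = params * 1 / 1e9             # 1 byte per weight
q8_0_gb = params * (34 / 32) / 1e9    # GGUF Q8_0: 34 bytes per 32-weight block

print(f"FP16 ~= {fp16_gb:.1f} GB, FP8 ~= {fp8_gb:.1f} GB, Q8_0 ~= {q8_0_gb:.1f} GB")
```

By this estimate Q8_0 is only slightly larger than plain FP8 but tends to preserve quality better, since each block carries its own scale.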
It is the base Flux Schnell/Dev with this TE: https://civitai.com/models/1489246/google-flan-t5xxl-pruned-for-comfy-or-forge and the zer0int detail CLIP-L: https://civitai.com/models/1044804/clip-l-and-clip-g-full-fp32-zer0int-and-simulacrum
I have updated the Schnell NF4 model with FP32.