Flux.2 [Flex], [Dev], [Pro], & [Max] are live for Generation!
FLUX.2 [Flex] is the next leap in the FLUX model family, delivering unprecedented image quality and creative flexibility. FLUX.2 is a state-of-the-art image generation model with top-of-the-line prompt following, visual quality, image detail, and output diversity.
Original Flux.2 [Dev] files: https://huggingface.co/black-forest-labs/FLUX.2-dev
FP8 Quantized from ComfyUI: https://huggingface.co/Comfy-Org/flux2-dev/tree/main
Description
flux-2-klein-base-9b
Comments (78)
Holy... for Klein 9B, the CLIP file (qwen_3_8b) is required to guide this version, and it is also 16GB!
Yep, but Klein 4B uses the same text encoder as ZIT. So that's something.
@cupra But Z uses Flux.1's autoencoder unlike Flux.2
Updated workflow for this model here: https://civitai.com/models/2213699/flux-2-klein-2d-and-gguf-pro-grade-workflow-high-and-llow-vram
Links for the text encoders and other models included
Only Flux.2 Klein Base 9B is available here in the Civitai generator for now; the other versions should be available soon...
Is it still censored?
Yes, not to let us create porn :(
@qek Yeah they can keep it lmao. I'm not even into it for porn, but if it censors bodyparts, than anything slightly spicy? GTFO
@kingliam1995492 don't be sad, I tested it and it makes nudes anyway, even edits, hahaha
Let's see how training Klein 9B and 4B will work out first.
@cupra They are ok, but Qwen Image Edit has better prompt adherence anyway. I can't wait to see loras for Klein
It's not censored, but it wasn't trained for NSFW. You'll need to train it yourself.
@g1263495582 nor was ZIT but yeah it is censored.
VERY
Compared to what?
@kingliam1995492 It doesn't. Anyone claiming it does is blatantly lying lmao. ZIT isn't even as good at boobs as Qwen Image, also. Both are worse still than Hunyuan Image 2.1 at all forms of nudity.
@ZootAllures9111 It's funny that Hunyuan Image 2.1 is not featured on Civitai.
We need a new category in the filters for Flux.2 K (Klein).
Also, an FP8 version of the model would be nice (and the distilled versions too).
They're on Hugging Face. theally only posted one for the Civitai generator, not all of them for download.
I’m uploading the FP8 versions right now — it’ll just take a bit until everything is uploaded.
https://civitai.com/models/2311742?modelVersionId=2600878
@denrakeiw Ok, as I said, theally has been reposting models solely for the Civitai generator; that's why there is only Klein Base 9B.
@denrakeiw Please read my updated comment
FYI, Flux.2 [Dev] LoRAs do NOT work with the Klein version.
Because they were trained from scratch; the only thing that isn't different is that they all use the Flux.2 AE.
Did they fix it so I can run CLIP on default instead of CPU?
Did you mean it can crash ComfyUI if you do it?
@qek well it does on flux2 when I put it on default yeah.
@2thecurve keep in mind: RAM is slower than VRAM
@qek I feel ya. I have a 4090. And when I put it on default it crashes comfyui (CLIP)
Does the flux Klein model work with the Forge WebUI?
Nope
switch to swarm-ui
Also works in ComfyUI
Switch to ComfyUI.. it's 2026
@sarcastictofu Yes, ComfyUI added Klein support as soon as it was possible
Works with SDNext.
Yes, I use Forge Neo. I've been using it for a month with great results. ComfyUI is better, but only if you know what you're doing.
GGUFs for every version of Klein and both text encoders (Klein 9Bs need Qwen 8B, Klein 4Bs need Qwen 4B) are here BTW:
https://huggingface.co/unsloth/Qwen3-4B-GGUF
https://huggingface.co/unsloth/Qwen3-8B-GGUF
https://huggingface.co/unsloth/FLUX.2-klein-4B-GGUF
https://huggingface.co/unsloth/FLUX.2-klein-9B-GGUF
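When picking a quant from those repos, a back-of-the-envelope size estimate helps decide what fits in VRAM. A rough sketch (the bits-per-weight figures are approximate averages for common GGUF quant types, not exact numbers for these specific files, which also carry metadata and some mixed-precision tensors):

```python
# Rough GGUF file-size estimate: parameters * bits-per-weight / 8.
# Bit-widths below are approximate averages for common quant types.
QUANT_BITS = {"Q4_K_M": 4.5, "Q5_K_M": 5.5, "Q8_0": 8.5, "BF16": 16.0}

def gguf_size_gb(params_billions: float, quant: str) -> float:
    """Approximate file size in GB for a given quantization level."""
    bits = QUANT_BITS[quant]
    return params_billions * 1e9 * bits / 8 / 1e9

for model, params in [("Klein 4B", 4.0), ("Klein 9B", 9.0), ("Qwen3-8B", 8.0)]:
    for quant in ("Q4_K_M", "Q8_0"):
        print(f"{model} {quant}: ~{gguf_size_gb(params, quant):.1f} GB")
```

So a Q4 of Klein 9B lands around 5 GB, while Q8 of the Qwen3-8B text encoder is around 8.5 GB; add headroom for activations on top.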
Doesn't make ANY nudity; Z-Image is still king.
@210881175 ? Flux.2 still makes nudes, I think their words about the schizo censorship are partially false
@210881175 It does booba about as well as Z Image if you prompt correctly. Neither Klein nor Z Image is as good at it as Qwen Image 2512, though, IMO.
@qek I mean this seems pretty specific so maybe, but maybe not also
@ZootAllures9111 I prefer Qwen Edit 2511 over all Klein models for editing, and prefer Z Image Turbo over Klein for txt2img
Can we now train Flux.2 Klein 4B or 9B LORAs on CivitAI or use these in CivitAI's image generators??
Soon! We'll have Klein 4B/9B training and generation up as soon as possible - it's being worked on!
@theally Hopefully it's not gonna be like Z-Image where it always costs significantly more than Chroma does for the same dataset due to the forced "repeats"
How can I run the Full Model FP16 version (60.02 GB) of this model locally? When I search online for “Flux.2 D Full Model fp16,” nothing useful comes up... Could it be a mistake for “Full Model BF16”?
Hi friend, I don't know if there's already an answer, but the FLUX.2 DEV model, unfortunately, is professional-level. You'll only be able to use this version with a GPU rig, or you can use it privately on paid platforms that support uploading. Locally, use the GGUF versions.
On HF it is BF16, I believe. Or at least SDNext reports it as such. It runs locally on an Intel iGPU (Intel Ultra 9 285H) with 128GB of RAM: 23 minutes per image at 1024px, 50 steps.
@liutyi 23 min for one image!!! Crazy... but yes, it's true; unfortunately these new models have very high requirements. I have to pay an online provider to generate images with Flux.2, WAN, QWEN, etc., at no more than 1 minute per image. Of course, the ideal would be a 100% local and fast system, but the price of the hardware is impossible.
@ArtificialHeartAI The 285H is more or less OK (70 sec ZIT, 17|36 sec Klein 4B|9B). Crazy is when you generate on lower-end hardware, like an Intel Core Ultra 9 185H iGPU + 128GB RAM. Then it is something like 68 minutes/image (instead of 23 minutes). But anyway, it is kind of cool to be able to run almost any model locally. You can use Klein for relatively fast drafts, and then set up a night scheduler for FLUX.2 Dev or Qwen Image to proceed with the prompts you found to be good. There is an example test with 58 prompts at https://wiki.liutyi.info/display/AI/ChatGPT+Image+1.5 6 of which are censored by ChatGPT and 2 by Civitai. So you are kind of limited in what can be generated using online providers.
@liutyi thanks friend!!!👏👍👏
I have a 4060TI with 16GB and an additional 64GB of RAM. It works for me, but it takes half an hour for a high-resolution image, and it really fills up my RAM.
@ArtificialHeartAI @liutyi @uglyducklingariyoci111 Thank you so much for your kindness!!
This checkpoint is very good. Thank you for making the content available, and gratitude for the effort and work.
Is there an NVFP4 version?
absolutely fkn garbage. Flags content prompts for things that aren't even in the prompt.
Can somebody please explain to me the differences between "DEV", "FLEX", "PRO", "MAX", and Klein?
Flux.2 [max] (Elite): highest overall quality, best consistency, best prompt following, best editing. Quality: top (flagship).
Flux.2 [pro] (High-end): great balance of quality + speed. Quality: excellent.
Flux.2 [flex] (Advanced): best for text, fine details, control. Quality: very good.
Flux.2 [dev] (Open-weight): local running, fine-tuning, experimentation. Quality: very good.
Flux.2 [klein] (Fast/small): speed + cheap generations. Quality: good.
@Addictedd Hey. Thank you for your answer! I guess that only dev/klein are possible to run on a local machine?
@Haiko yes and kontext and schnell
What you can run at home from the FLUX.2 family is:
50-step models
- FLUX.2 Dev (32B) - Total parameters 56B
- FLUX.2 Klein base 9B - Total parameters 17B
- FLUX.2 Klein base 4B - Total parameters 8B
4-step models
- FLUX.2 Klein 9B
- FLUX.2 Klein 4B
- FLUX.2 Klein 9B KV
In general, 4B is a low-quality model, but for 4B it is kind of OK. 9B is good, but might have some issues with limbs, so sometimes more than one generation is needed to get a good result. KV is almost the same as non-KV; it is some performance tuning for editing that does not affect txt2img. FLUX.2 Dev is cool. Just cool. And nice. If you have the hardware and time to run it. A simple visual comparison helps more than articles of text, so below are 40 images with the same prompts, generated by all the models listed above, plus a couple of images from the Pro version:
https://wiki.liutyi.info/display/AI/FLUX.2+Dev+test+v2
https://wiki.liutyi.info/display/AI/FLUX.2+Klein+9B+test+v2
https://wiki.liutyi.info/display/AI/FLUX.2+Klein+9B+KV+test+v2
https://wiki.liutyi.info/display/AI/FLUX.2+Klein+base+9B+test+v2
https://wiki.liutyi.info/display/AI/FLUX.2+Klein+4B+test+v2
https://wiki.liutyi.info/display/AI/FLUX.2+Klein+base+4B+test+v2
Commercial:
https://wiki.liutyi.info/display/AI/FLUX.2+Pro+test+v2
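To translate the total parameter counts above into hardware needs, a quick weights-only memory estimate works as a first pass. This is a sketch, not a measurement: real usage is higher because of activations, the scheduler, and framework overhead, and the bytes-per-parameter figures for quantized formats are approximations.

```python
# Back-of-the-envelope weight memory for the FLUX.2 family,
# using the total parameter counts quoted above (DiT + text encoder + AE).
# Real usage is higher: activations and framework overhead add more.
BYTES_PER_PARAM = {"bf16": 2.0, "fp8": 1.0, "gguf_q4": 0.5625}  # Q4 ~4.5 bits/weight

def weights_gb(total_params_billions: float, dtype: str) -> float:
    """Approximate GB needed just to hold the weights."""
    return total_params_billions * 1e9 * BYTES_PER_PARAM[dtype] / 1e9

for name, params in [("FLUX.2 Dev", 56), ("Klein base 9B", 17), ("Klein base 4B", 8)]:
    line = ", ".join(f"{d}: ~{weights_gb(params, d):.0f} GB" for d in ("bf16", "fp8"))
    print(f"{name}: {line}")
```

This is why Dev at bf16 (~112 GB of weights alone) needs something like 128GB of unified memory, while Klein base 4B fits on a mid-range GPU once quantized.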
@liutyi Thank you, that help me a lot :-)
@Haiko https://wiki.liutyi.info/display/AI/FLUX.2+Max+test+v2 and the last one. Most expensive. Generated on BFL, because when you choose Max on Civitai, it generates Pro, not Max...
@liutyi So for local use is Flux.2 Dev the best choice?
@Haiko FLUX.2 Dev, yes and no. No, because it is very slow; for bf16 it requires 128GB of unified memory in the case of an Intel Arc iGPU. So yes, if you have enough hardware, it is definitely the best (from the FLUX family, for local use). But FLUX.2 Klein 9B (4-step) actually creates very good output, very fast, and now has a lot of LoRAs too. So I would say Klein 9B for 90% of images, and FLUX.2 Dev for the cases where Klein fails for some reason. But FLUX.2 is not the only option; Z Image Turbo and QWEN 2512 (with a Turbo LoRA) also exist. So the whole path of local generation is: try prompts with Klein 9B, then Z Image Turbo, then QWEN 2512 with the wuli turbo lora 4-step v3.0, then QWEN 2512 at the full 50 steps, then Z Image Base at 50 steps (usually worse than Turbo, but not always), and last FLUX.2 Dev (just before switching to commercial cloud-based models like Seedream, Qwen Image 2, Nano Banana, or GPT Image). There is not much room for FLUX Pro or Max, since their output is not that outstanding; if you pay per image, it is better to pay for Nano Banana 2 (at the moment).
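That escalation idea (cheap and fast models first, expensive ones only when needed) can be sketched as a simple ladder. The model names below are just labels from the comment above; `generate` and `looks_good` are hypothetical stand-ins for your actual pipeline call and your own quality check, manual or automated.

```python
# Escalation ladder: try the cheapest/fastest model first,
# move up only when the output isn't good enough.
LOCAL_LADDER = [
    "FLUX.2 Klein 9B (4-step)",
    "Z Image Turbo",
    "Qwen 2512 + turbo LoRA (4-step)",
    "Qwen 2512 (50-step)",
    "Z Image Base (50-step)",
    "FLUX.2 Dev (50-step)",
]

def first_acceptable(prompt, generate, looks_good):
    """Return (model, image) from the cheapest model whose output passes."""
    for model in LOCAL_LADDER:
        image = generate(model, prompt)
        if looks_good(image):
            return model, image
    return None, None  # past the ladder: fall back to a commercial API
```

The point of the ordering is simply that a 4-step Klein draft costs seconds, so failed attempts are cheap, and the 23-minute Dev run only happens for prompts that already survived the cheaper rungs.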
@liutyi "the whole path of local generation is trying prompts [...]"
ehhh, ZIT, Qwen 2512, Flux.2 and Nano Banana have rather different understandings of prompts. What is necessary for one model will only confuse another. For some prompts, ZIT can easily surpass Qwen 2512, and some particularly complex prompts will only work in NB; no other model will come even close to doing things the way you described.
Personally, I experiment with ZIT and Qwen 2512 separately, the same prompt never worked well for both of them without a lot of adjustments.
And NBPro is a whole different beast. The only use in trying your prompt in ZIT/Klein before NBPro is to make sure you didn't forget anything. If the prompt is complex, ZIT/Klein will probably generate some junk (while NBPro will still succeed later), but you can still notice oversights like "wait, why is he dark-haired, I wanted blo... oh, right, I forgot to specify it".
Klein - the smallest and cheapest, does some styles well but its main use is not generating, but editing. It's exceptionally good & fast when it comes to editing. Erasing objects, adding objects, changing poses and styles, transferring objects from 1 image to another, it can do it all with ease. https://civitai.com/images/118194502
Overall, Klein is the best, the most praiseworthy model in Flux.2 family imo. The rest of them are rather mediocre given their huge size.
Max - you're better off using NBPro/NB2.
Dev/Pro - "standard" choice, Pro is overall better but more costly.
Flex - weird middle brother.
@MV261 when person ask for difference on Klein and Dev doesn't it assume that it is not a level of prompts per model optimization? First prompts like "car", "fancy car", "1girl", "1girl, outdoor, masterpiece" can be executed on any of those..
@liutyi not necessarily? I've been toying with AI for a long time, but when the Flux.2 models started coming out I was also confused about the differences until I experimented with them, because I couldn't find a good comparison spreadsheet anywhere at the time. Also, idk who even writes prompts like "1girl, outdoor, masterpiece"; I never wrote such prompts even back in the days of SDXL. xD Proper-description prompts have always been my go-to since the start, with the only exception being that when I first started toying with AI, I would start my prompts with "draw [description]" rather than just "[description]".
@MV261 idk, "1girl, outdoor, masterpiece" gives such creative space to the model. I think it is OK to practice short prompts from time to time, at least once per new model. Like seeing how a model understands "Beautiful nature scenery" or "Cityscape". And 1girl is like the test prompt for all time: a prompt that <1B models are OK with and >20B models also understand.
What's the difference between the base model and the turbo models besides the step differences? Does the base model have better quality generations because of the 50 steps it needs to make it work, or does it just achieve the same results as the turbo model, just with more steps and waiting?
@AICuriousity22 In the case of Z Image, I can say Turbo is fast and nice, and Base supports more styles and somewhat more complex prompts, but in the case of Klein I am not so sure. Doing a direct image-by-image comparison of Klein base 9B test v2 and Klein 9B test v2, I am not yet sure how to describe the difference, or the need for the extra 46 steps...
@AICuriousity22
https://wiki.liutyi.info/display/AI/FLUX.2+Klein+base+9B+test+v2.20.gemini
https://wiki.liutyi.info/display/AI/FLUX.2+Klein+9B+test+2.20.gemini
https://wiki.liutyi.info/display/AI/FLUX.2+Klein+base+9B+test+2.20.gpt
https://wiki.liutyi.info/display/AI/FLUX.2+Klein+9B+test+2.20.gpt
4 more tests. Still the same "I do not know who should use Klein base, where, and why..."
Is it possible to generate sexual content with Flux.2 Dev? Do I need a LoRA or checkpoint? Could anyone please direct me to where I can get them? I cannot for the life of me find any.