CivArchive
    Flux.2 - Klein
    NSFW

    Flux.2 [Flex], [Dev], [Pro], & [Max] are live for Generation!

FLUX.2 [Flex] is the next leap in the FLUX model family, delivering unprecedented image quality and creative flexibility. FLUX.2 is a state-of-the-art image generation model with top-of-the-line prompt following, visual quality, image detail, and output diversity.

    Original Flux.2 [Dev] files: https://huggingface.co/black-forest-labs/FLUX.2-dev

    FP8 Quantized from ComfyUI: https://huggingface.co/Comfy-Org/flux2-dev/tree/main

    Description

    flux-2-klein-base-9b

    Comments (78)

    egpieper · Jan 15, 2026 · 3 reactions

    Holy — for Klein 9B, the CLIP file qwen_3_8b is required to guide this version, and it's also 16 GB!

    lonecatone23 · Jan 16, 2026

    Updated workflow for this model here: https://civitai.com/models/2213699/flux-2-klein-2d-and-gguf-pro-grade-workflow-high-and-llow-vram

    Links for the text encoders and other models are included.

    qek · Jan 16, 2026

    Only Flux.2 Klein Base 9B is available here for the Civitai generator; it should be available in the generator soon...

    kingliam1995492 · Jan 16, 2026

    Is it still censored?

    qek · Jan 16, 2026 · 3 reactions

    Yes, so as not to let us create porn :(

    kingliam1995492 · Jan 16, 2026 · 9 reactions

    @qek Yeah, they can keep it lmao. I'm not even into it for porn, but if it censors body parts, then anything slightly spicy? GTFO

    qek · Jan 16, 2026

    @kingliam1995492 don't be sad, I tested it and it makes nudes anyway, even edits, hahaha

    cupra · Jan 16, 2026 · 3 reactions

    Let's see how training Klein 9B and 4B will work out first.

    qek · Jan 16, 2026 · 1 reaction

    @cupra They are ok, but Qwen Image Edit has better prompt adherence anyway. I can't wait to see loras for Klein

    g1263495582 · Jan 16, 2026 · 2 reactions

    It's not censored, but it wasn't trained for NSFW. You'll need to train it yourself.

    kingliam1995492 · Jan 16, 2026 · 1 reaction

    @g1263495582 nor was ZIT, but yeah, it is censored.

    210881175 · Jan 18, 2026 · 1 reaction

    VERY

    ZootAllures9111 · Jan 22, 2026

    Compared to what?

    ZootAllures9111 · Jan 22, 2026

    @kingliam1995492 It doesn't. Anyone claiming it does is blatantly lying lmao. ZIT isn't even as good at boobs as Qwen Image, also. Both are worse still than Hunyuan Image 2.1 at all forms of nudity.

    210881175 · Jan 22, 2026

    @ZootAllures9111 It's funny that Hunyuan Image 2.1 is not featured on Civitai.

    flo11ok874 · Jan 16, 2026 · 4 reactions

    We need a new category in the filters for Flux.2 K (Klein).

    Also, an fp8 version of the model would be nice (and the distilled versions too).

    qek · Jan 16, 2026 · 1 reaction

    They're on HuggingFace. theally only posted the one for the Civitai generator, not all of them for downloading.

    denrakeiw · Jan 17, 2026 · 1 reaction

    I’m uploading the FP8 versions right now — it’ll just take a bit until everything is uploaded.
    https://civitai.com/models/2311742?modelVersionId=2600878

    qek · Jan 17, 2026 · 1 reaction

    @denrakeiw OK, as I said, theally has been reposting models solely for the Civitai generator; that is why there is only Klein Base 9B.

    qek · Jan 17, 2026

    @denrakeiw Please read my updated comment

    lonecatone23 · Jan 16, 2026 · 2 reactions

    FYI, Flux.2 [Dev] LoRAs do NOT work with the Klein version.

    qek · Jan 16, 2026

    Because the Klein models were trained from scratch; the only thing they share is the Flux.2 AE.

    2thecurve · Jan 16, 2026

    Did they fix it so I can run CLIP on default instead of CPU?

    qek · Jan 16, 2026

    Do you mean it can crash ComfyUI if you do that?

    2thecurve · Jan 16, 2026

    @qek well, it does on Flux.2 when I put it on default, yeah.

    qek · Jan 16, 2026

    @2thecurve keep in mind: RAM is slower than VRAM

    2thecurve · Jan 16, 2026

    @qek I feel ya. I have a 4090, and when I put CLIP on default it crashes ComfyUI.
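    For what it's worth, the crash pattern described here is consistent with VRAM exhaustion: the qwen_3_8b text encoder alone is around 16 GB, which rarely fits next to the Flux.2 diffusion model even on a 24 GB card. A minimal sketch of the placement decision (function name and thresholds are hypothetical, not ComfyUI's actual logic):

    ```python
    def pick_text_encoder_device(encoder_bytes: int, free_vram_bytes: int,
                                 headroom: float = 0.9) -> str:
        """Keep the text encoder on the GPU only if it fits comfortably
        in the VRAM left over after the diffusion model is loaded."""
        if encoder_bytes < free_vram_bytes * headroom:
            return "cuda"
        return "cpu"  # slower (weights live in system RAM), but no OOM crash

    GB = 1024**3
    # 4090 scenario from this thread: with Flux.2 weights already resident,
    # a 24 GB card has nowhere near 16 GB of VRAM free for the encoder.
    print(pick_text_encoder_device(16 * GB, free_vram_bytes=6 * GB))   # cpu
    print(pick_text_encoder_device(16 * GB, free_vram_bytes=22 * GB))  # cuda
    ```

    This is why forcing CLIP to "cpu" avoids the crash, at the cost of slower prompt encoding.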

    meraklizihin · Jan 16, 2026 · 1 reaction

    Does the flux Klein model work with the Forge WebUI?

    qek · Jan 16, 2026

    Nope

    sevenof9247 · Jan 17, 2026 · 2 reactions

    Switch to SwarmUI.

    qek · Jan 17, 2026

    Also works in ComfyUI

    sarcastictofu · Jan 20, 2026 · 5 reactions

    Switch to ComfyUI... it's 2026

    qek · Jan 20, 2026

    @sarcastictofu Yes, ComfyUI added Klein support as soon as it was possible

    liutyi · Feb 9, 2026

    Works with SDNext.

    thefonzchannel · Apr 1, 2026

    Yes, I use Forge Neo. I've been using it for a month with great results. ComfyUI is better, but only if you know what you're doing.

    ZootAllures9111 · Jan 17, 2026 · 13 reactions

    210881175 · Jan 18, 2026

    Don't make ANY nudity, z-image still king

    qek · Jan 18, 2026

    @210881175 ? Flux.2 still makes nudes; I think their words about the schizo censorship are partially false.

    ZootAllures9111 · Jan 18, 2026

    @210881175 It does booba about as well as Z Image if you prompt correctly. Neither Klein nor Z Image is as good at it as Qwen Image 2512, though, IMO.

    ZootAllures9111 · Jan 18, 2026

    @qek I mean, this seems pretty specific, so maybe, but maybe not.

    qek · Jan 19, 2026

    @ZootAllures9111 I prefer Qwen Edit 2511 over all Klein models for editing, and prefer Z Image Turbo over Klein for txt2img

    sarcastictofu · Jan 20, 2026 · 1 reaction

    Can we now train Flux.2 Klein 4B or 9B LORAs on CivitAI or use these in CivitAI's image generators??

    theally · Jan 20, 2026 · 3 reactions

    Soon! We'll have Klein 4B/9B training and generation up as soon as possible - it's being worked on!

    ZootAllures9111 · Jan 23, 2026

    @theally Hopefully it's not gonna be like Z-Image, where it always costs significantly more than Chroma for the same dataset due to the forced "repeats".

    Carmoicef · Feb 5, 2026 · 1 reaction

    How can I run the Full Model FP16 version (60.02 GB) of this model locally? When I search online for "Flux.2 D Full Model fp16," nothing useful comes up... Could it be a mistake for "Full Model BF16"?

    Hi friend, I don't know if there's already an answer, but the FLUX.2 Dev model, unfortunately, is professional-level. You'll only be able to run this version on a multi-GPU rig, or privately on paid platforms that support uploads. Locally, use the GGUF versions.

    liutyi · Feb 9, 2026 · 2 reactions

    On HF it is BF16, I believe; or at least SDNext reports it as such. It runs locally on an Intel iGPU (Intel Ultra 9 285H) with 128GB of RAM: 23 minutes per image at 1024px, 50 steps.
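    As a sanity check on those numbers (pure arithmetic, using the 23 minutes and 50 steps reported above):

    ```python
    # Per-step cost implied by the comment above:
    # 23 minutes per 1024px image at 50 steps on an Intel Ultra 9 285H iGPU.
    minutes_per_image = 23
    steps = 50

    seconds_per_step = minutes_per_image * 60 / steps
    print(f"~{seconds_per_step:.1f} s per diffusion step")  # ~27.6 s per step
    ```

    So a 4-step distilled model on the same hardware would land in the two-minute range, which matches the Klein timings quoted later in the thread.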

    ArtificialHeartAI · Feb 9, 2026 · 1 reaction

    @liutyi 23 min per image!!! Crazy... but yes, it's true; unfortunately these new models have very high requirements. I have to pay an online provider to generate images with Flux.2, WAN, Qwen, etc. at no more than 1 minute per image. Of course, the ideal would be a 100% local and fast system, but the price of the hardware is impossible.

    liutyi · Feb 10, 2026 · 2 reactions

    @ArtificialHeartAI The 285H is more or less OK (70 sec for ZIT, 17|36 sec for Klein 4B|9B). Crazy is when you generate on lower-end hardware, like an Intel Core Ultra 9 185H iGPU + 128GB RAM; then it is something like 68 minutes per image (instead of 23 minutes). But anyway, it is kind of cool to be able to run almost any model locally. So you can use Klein for relatively fast drafts, then set up a night scheduler for FLUX.2 Dev or Qwen Image to process the prompts you found to be good. There is an example of a test with 58 prompts at https://wiki.liutyi.info/display/AI/ChatGPT+Image+1.5, 6 of which were censored by ChatGPT and 2 by CivitAI. So you are kind of limited in what can be generated using online providers.

    ArtificialHeartAI · Feb 11, 2026

    @liutyi thanks friend!!!👏👍👏

    I have a 4060TI with 16GB and an additional 64GB of RAM. It works for me, but it takes half an hour for a high-resolution image, and it really fills up my RAM.

    Carmoicef · Apr 6, 2026

    @ArtificialHeartAI

    @liutyi

    @uglyducklingariyoci111

    Thank you so much for your kindness!!

    ArtificialHeartAI · Feb 6, 2026 · 2 reactions

    This checkpoint is very good, thank you for making the content available, gratitude for the effort and work.

    NapoInfr · Feb 18, 2026 · 1 reaction

    Is there an NVFP4 version?

    bougyakumahou · Mar 13, 2026 · 2 reactions

    Absolutely fkn garbage. It flags prompts for things that aren't even in the prompt.

    Haiko · Mar 29, 2026 · 3 reactions

    Can somebody please explain to me the differences between "DEV", "FLEX", "PRO", "MAX", and Klein?

    Addictedd · Apr 1, 2026 · 3 reactions

    Flux.2 [max]: Elite tier. Highest overall quality, best consistency, best prompt following, best editing. Rating: Top (flagship).

    Flux.2 [pro]: High-end tier. Great balance of quality and speed. Rating: Excellent.

    Flux.2 [flex]: Advanced tier. Best for text, fine details, control. Rating: Very good.

    Flux.2 [dev]: Open-weight. Local running, fine-tuning, experimentation. Rating: Very good.

    Flux.2 [klein]: Fast/small. Speed and cheap generations. Rating: Good.

    Haiko · Apr 1, 2026 · 1 reaction

    @Addictedd Hey, thank you for your answer! I guess only Dev/Klein are possible to run on a local machine?

    Addictedd · Apr 2, 2026

    @Haiko Yes, and also Kontext and Schnell.

    liutyi · Apr 6, 2026 · 3 reactions

    What you can run at home from the FLUX.2 family:

    50-step models

    - FLUX.2 Dev (32B) - total parameters 56B
    - FLUX.2 Klein base 9B - total parameters 17B
    - FLUX.2 Klein base 4B - total parameters 8B

    4-step models

    - FLUX.2 Klein 9B
    - FLUX.2 Klein 4B
    - FLUX.2 Klein 9B KV

    In general, 4B is a low-quality model, but for 4B it is kind of OK. 9B is good but might have some issues with limbs, so sometimes more than one generation is needed to get a good result. KV is almost the same as non-KV; it's a performance tuning for editing that doesn't affect txt2img. FLUX.2 Dev is cool. Just cool, and nice, if you have the hardware and time to run it. A simple visual comparison helps more than articles of text, so below are 40 images with the same prompts, generated by all the models listed above, plus a couple of images from the Pro version.

    https://wiki.liutyi.info/display/AI/FLUX.2+Dev+test+v2

    https://wiki.liutyi.info/display/AI/FLUX.2+Klein+9B+test+v2
    https://wiki.liutyi.info/display/AI/FLUX.2+Klein+9B+KV+test+v2
    https://wiki.liutyi.info/display/AI/FLUX.2+Klein+base+9B+test+v2

    https://wiki.liutyi.info/display/AI/FLUX.2+Klein+4B+test+v2
    https://wiki.liutyi.info/display/AI/FLUX.2+Klein+base+4B+test+v2

    Commercial:

    https://wiki.liutyi.info/display/AI/FLUX.2+Pro+test+v2
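    The "total parameters" figures above translate into rough weight-file sizes; a back-of-envelope sketch (assuming bf16, i.e. 2 bytes per parameter; activations and caches come on top of this):

    ```python
    BYTES_PER_PARAM_BF16 = 2  # bf16 stores each weight in 2 bytes

    # Total parameter counts (model + text encoder + AE) from the list above
    models = {
        "FLUX.2 Dev (32B)": 56e9,
        "FLUX.2 Klein base 9B": 17e9,
        "FLUX.2 Klein base 4B": 8e9,
    }

    def weight_gb(total_params: float) -> float:
        """Gigabytes needed just to hold the weights in bf16."""
        return total_params * BYTES_PER_PARAM_BF16 / 1e9

    for name, params in models.items():
        print(f"{name}: ~{weight_gb(params):.0f} GB of bf16 weights")
    ```

    The ~112 GB for Dev is consistent with the report elsewhere in the thread that running bf16 Dev needs about 128 GB of unified memory.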

    Haiko · Apr 6, 2026

    @liutyi Thank you, that helps me a lot :-)

    liutyi · Apr 6, 2026

    @Haiko https://wiki.liutyi.info/display/AI/FLUX.2+Max+test+v2 and the last one, the most expensive. Generated on BFL, because when you choose Max on Civitai it generates Pro, not Max.

    Haiko · Apr 6, 2026

    @liutyi So for local use, is FLUX.2 Dev the best choice?

    liutyi · Apr 7, 2026 · 2 reactions

    @Haiko FLUX.2 Dev: yes and no. No, because it is very slow; for bf16 it requires 128GB of unified memory in the case of an Intel Arc iGPU. So yes, if you have enough hardware it is definitely the best (from the FLUX family) for local use. But FLUX.2 Klein 9B (4-step) actually creates very good output, and very fast, and now also has a lot of LoRAs. So I would say Klein 9B for 90% of images and FLUX.2 Dev for the cases where Klein fails for some reason. But FLUX.2 isn't the only option: Z Image Turbo and Qwen 2512 (with the Turbo LoRA) also exist. So the whole path of local generation is: try prompts with Klein 9B, then Z Image Turbo, then Qwen 2512 with the wuli turbo 4-step LoRA v3.0, then the full 50-step Qwen 2512, then 50-step Z Image Base (usually worse than Turbo, but not always), and last FLUX.2 Dev, just before switching to commercial cloud models like Seedream, Qwen Image 2, Nano Banana, or GPT Image. There isn't much room for FLUX Pro or Max, since their output isn't that outstanding; if you pay per image, it is better to pay for Nano Banana 2 (at the moment).

    MV261 · Apr 8, 2026

    @liutyi "the whole path of local generation is trying prompts [...]"

    ehhh, ZIT, Qwen 2512, Flux.2, and Nano Banana have rather different understandings of prompts. What is necessary for one model will only confuse another. For some prompts, ZIT can easily surpass Qwen 2512, and some particularly complex prompts will only work in NB; no other model will come even close to doing things the way you described.

    Personally, I experiment with ZIT and Qwen 2512 separately; the same prompt never worked well for both of them without a lot of adjustments.

    And NBPro is a whole different beast. The only use in trying your prompt in ZIT/Klein before NBPro is to make sure you didn't forget anything. If the prompt is complex, ZIT/Klein will probably generate some junk (but NBPro will still succeed later), yet you can still notice oversights like "wait, why is he dark-haired, I wanted blo... oh, right, I forgot to specify it".

    MV261 · Apr 8, 2026

    Klein - the smallest and cheapest, does some styles well but its main use is not generating, but editing. It's exceptionally good & fast when it comes to editing. Erasing objects, adding objects, changing poses and styles, transferring objects from 1 image to another, it can do it all with ease. https://civitai.com/images/118194502

    Overall, Klein is the best, the most praiseworthy model in Flux.2 family imo. The rest of them are rather mediocre given their huge size.

    Max - you're better off using NBPro/NB2.

    Dev/Pro - "standard" choice, Pro is overall better but more costly.

    Flex - weird middle brother.

    liutyi · Apr 8, 2026

    @MV261 When a person asks for the difference between Klein and Dev, doesn't that assume we are not yet at the level of per-model prompt optimization? Simple prompts like "car", "fancy car", "1girl", "1girl, outdoor, masterpiece" can be executed on any of these.

    MV261 · Apr 8, 2026

    @liutyi Not necessarily? I've been toying with AI for a long time, but when the Flux.2 models started coming out I was also confused about the differences until I experimented with them, because I couldn't find a good comparison spreadsheet anywhere at the time. Also, idk who even writes prompts like "1girl, outdoor, masterpiece"; I never wrote such prompts even back in the days of SDXL. xD Proper descriptive prompts have always been my go-to from the start, with the only exception being that when I first started toying with AI, I would begin my prompts with "draw [description]" rather than just "[description]".

    liutyi · Apr 8, 2026

    @MV261 idk, "1girl, outdoor, masterpiece" gives the model such creative space. I think it is OK to practice short prompts from time to time, at least once per new model: see how a model understands "Beautiful nature scenery" or "Cityscape". And "1girl" is the test prompt for all time; a prompt that <1B models are OK with and >20B models also understand.

    AICuriousity22 · Apr 24, 2026

    What's the difference between the base model and the turbo models besides the step counts? Does the base model produce better-quality generations because of the 50 steps it needs, or does it just achieve the same results as the turbo model, only with more steps and waiting?

    liutyi · Apr 25, 2026

    @AICuriousity22 In the case of Z Image I can say Turbo is fast and nice, while Base supports more styles and somewhat more complex prompts; but in the case of Klein I am not so sure. Doing a direct image-by-image comparison of the Klein base 9B test v2 and Klein 9B test v2, I am not yet sure how to describe the difference, or the need for the extra 46 steps...

    grasshopper85116 · Apr 22, 2026

    Is it possible to generate sexual content with Flux.2 Dev? Do I need a LoRA or checkpoint? Could anyone please direct me to where I can get them? I cannot for the life of me find any.