CivArchive
    Niji_Expressive_flux_lora - N-Expressive_flux_lora_1

    All models are exclusive to Civitai! Anyone who publishes my models without my consent will be reported!

    Hello everyone!

    Trigger Word : NijiEX style

    I highly recommend loading my LoRA models in fp16 if possible.

    Another new model! 😅

    This time I go back to the source with a Niji expressive style model.

    The model goes very well with my MJanime model.

    Set MJanime to strength 0.5 and N-Expressive to 0.5 as well, and you will get some nice surprises.
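For anyone curious what "strength" actually does: a LoRA contributes a low-rank update ΔW = B·A to each base weight matrix, and the strength is just a scalar on that update, so combining two LoRAs at 0.5 each simply adds half of each update. A minimal numpy sketch — the shapes, ranks, and names here are illustrative stand-ins, not taken from these models:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical base weight matrix of one layer (shape is illustrative).
W = rng.standard_normal((64, 64)).astype(np.float32)

def lora_delta(rank, d=64):
    """Standard LoRA parameterisation: a low-rank update B @ A."""
    A = rng.standard_normal((rank, d)).astype(np.float32)
    B = rng.standard_normal((d, rank)).astype(np.float32)
    return B @ A

delta_mjanime = lora_delta(rank=16)   # stand-in for MJanime
delta_nijiex  = lora_delta(rank=16)   # stand-in for N-Expressive

# "Strength 0.5 for each" scales each update before adding it to the base:
W_combined = W + 0.5 * delta_mjanime + 0.5 * delta_nijiex
```

UIs like Forge or ComfyUI do this per layer at load time; the per-LoRA strength slider is exactly the 0.5 factor above.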


    Have fun 😉




    Comments (9)

    2728625 · Sep 2, 2024 · 2 reactions
    CivitAI

    Seconded on the lora combination mentioned in the description, half this half mjanime. Getting excellent sharp line anime results.

    Stan_Katayama
    Author
    Sep 2, 2024 · 1 reaction

    Yes! It takes the best of both models!

    The results are excellent!

    Stan_Katayama
    Author
    Sep 2, 2024 · 1 reaction

    I'll try to improve this for version 3 of the anime model.

    I think Flux has a bit of trouble with detailed flat 2D styles.

    Tetsuoo · Sep 11, 2024 · 2 reactions
    CivitAI

    Sorry, what does "load my lora in fp16" mean? Should the Flux model be fp16, or the CLIP? Both?

    Stan_Katayama
    Author
    Sep 11, 2024

    This is more for those who use Forge, for example.

    If there is no option to load your LoRA models in fp16, don't worry about it.

    Tetsuoo · Sep 11, 2024 · 1 reaction

    I use mostly ComfyUI, and Forge when I want to chill, lol. Also, for some reason I get lower quality with Flux in Comfy, no idea why. Anyway,
    I mean you HIGHLY recommend it, but ok, let's not worry about it, fine xD I'm tired.
    Oh, look what I found: "Cache FP16 weight for LoRA

    (Cache fp16 weight when enabling FP8, will increase the quality of LoRA. Use more system ram.)"
    No idea that option even existed, this is all so complicated with all the fp8-fp16-nf4bnf64bitAlphaTurboGGUF nonsense
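The Forge tooltip quoted above is about precision: a LoRA update is small relative to the base weights, so if it is merged into weights stored at fp8-like precision, much of the update is swallowed by rounding; caching the merged result in fp16 preserves it far better. A hedged numpy sketch of the effect — `fake_fp8` is a crude mantissa-truncation emulation I made up for illustration, not Forge's actual fp8 format:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical layer weights plus a small LoRA update (scales are illustrative).
W = rng.standard_normal((256, 256)).astype(np.float32)
delta = 0.01 * rng.standard_normal((256, 256)).astype(np.float32)
target = W + delta  # what full precision would give

def fake_fp8(x, mant_bits=3):
    """Crude stand-in for fp8 storage: keep each exponent, round the
    mantissa to `mant_bits` bits. Real fp8 formats (e4m3/e5m2) differ,
    but the order of magnitude of the precision loss is similar."""
    m, e = np.frexp(x)
    return np.ldexp(np.round(m * 2**mant_bits) / 2**mant_bits, e)

# Merging the LoRA into fp8-like storage vs. caching the merge in fp16:
merged_fp8  = fake_fp8(target)
merged_fp16 = target.astype(np.float16).astype(np.float32)

err_fp8  = np.abs(merged_fp8 - target).mean()
err_fp16 = np.abs(merged_fp16 - target).mean()
```

With ~3 mantissa bits the rounding error is comparable to the LoRA update itself, while fp16 (10 mantissa bits) keeps it orders of magnitude smaller, which is why the option "will increase the quality of LoRA" at the cost of extra RAM for the cached fp16 copy.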

    Stan_Katayama
    Author
    Sep 11, 2024· 1 reaction

    @Tetsuoo Ok, for Forge I just advise setting the low-bit diffusion option to "automatic (lora fp16)".

    This allows LoRA models to be fully supported.

    Tetsuoo · Sep 11, 2024 · 1 reaction

    @Stan_Katayama Thanks, I will try. The option I found sometimes makes a big difference in quality, sometimes no difference at all. I tried to make a few LoRAs for Flux lately, but it's still a mystery how to get it right; it's headache material lol

    Stan_Katayama
    Author
    Sep 11, 2024

    @Tetsuoo Yes. It depends on the training methods of the models.

    LORA
    Flux.1 D

    Details

    Downloads
    799
    Platform
    CivitAI
    Platform Status
    Available
    Created
    8/27/2024
    Updated
    5/12/2026
    Deleted
    -
    Trigger Words:
    NijiEX style

    Files

    N--Expressive_flux_lora_v1_000009900.safetensors