CivArchive
    Z-Image Turbo Distill Patch for ComfyUI - v1.0 ZIT

    I had almost given up on making LoRA for Z-Image Turbo until I found this gem.

    I couldn't find anyone else offering this LoRA, or I would be pointing to them instead of making a new post.

    The model I've posted here is reconditioned to work with ComfyUI.

    I'm showing a before and after in the example image, really only to have something to use as a stub, but it still demonstrates something that isn't immediately obvious to some: the crosswalk, which is fixed in the "After" shot. This was consistent across generations. The LoRA used is inconsequential, so it isn't mentioned here, but the patch LoRA seems to be doing what it's supposed to do.

    I didn't make this. I don't understand it, or why it became necessary. But without it, some of my LoRA are pretty much useless. It's likely you'll run into issues with other LoRA, if you haven't already, that this model can clear up for you. I'm using a weight of 1.0. For more information, read the model page.

    Without this "patch", no matter how I trained my LoRA, how long, how many steps, or what fancy, goofy, crazy thing I did to them, I would get body horror, specifically when I attempted to target areas that appear to be missing from the Z-Image model.

    Apache 2.0 license:

    https://modelscope.cn/models/DiffSynth-Studio/Z-Image-Turbo-DistillPatch

    UPDATE:

    After acquiring some meaningful information from others in the comments, I tested this model against the offered "de-distilled" model, and it works as described, though I wasn't able to produce results close enough to my previous process to be compelled to swap methods. I also understood that this model can be used during training, potentially having some value over the adapted standard model, though I wasn't able to get it working that way and wasn't highly motivated to track down a solution.

    So effectively, and I think this was pointed out in the comments as well, I'm doubling up on the "turbo" effect that the model already contains, and that would appear to be a mistake. I don't disagree that it CAN be a mistake, and it HAS been, which I've learned after a lot more testing. But, again, the artifacts aren't frequent enough for me to dismiss this method completely, and there are times, as I stated earlier, when the body horror from the adapted models simply won't go away without this patch.


    Comments (27)

    TigonTX · Dec 18, 2025

    How do you use it, just apply it as a lora at strength 1?

    Tarterbox
    Author
    Dec 18, 2025· 2 reactions

    Yea, that's all I needed to do, but I haven't tested with any other strength so if you do then let us know what you find, if you don't mind.
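
    For anyone wondering what "apply it as a LoRA at strength 1" actually does, below is a rough, library-agnostic sketch of merging a LoRA patch into base weights with plain safetensors/torch. The file names and the "lora_A"/"lora_B" key layout are assumptions on my part (trainers vary; some use "lora_up"/"lora_down"), so treat it as an illustration of the math rather than a drop-in script.

```python
# Minimal sketch: merge a LoRA "patch" into base weights at a fixed strength.
# File names and key layout are assumptions; adjust to the actual checkpoint.
import torch
from safetensors.torch import load_file, save_file

BASE = "z_image_turbo_bf16.safetensors"            # hypothetical base checkpoint
PATCH = "z_image_turbo_distill_patch.safetensors"  # hypothetical patch file
STRENGTH = 1.0                                     # the weight used in this post

base = load_file(BASE)
patch = load_file(PATCH)

for key in patch:
    if not key.endswith(".lora_A.weight"):
        continue
    prefix = key[: -len(".lora_A.weight")]
    A = patch[key].float()                          # [rank, in_features]
    B = patch[prefix + ".lora_B.weight"].float()    # [out_features, rank]
    rank = A.shape[0]
    alpha = float(patch.get(prefix + ".alpha", torch.tensor(float(rank))))
    target = prefix + ".weight"
    if target not in base:
        continue                                    # naming didn't line up; skip
    # Standard LoRA merge: W' = W + strength * (alpha / rank) * (B @ A)
    delta = STRENGTH * (alpha / rank) * (B @ A)
    base[target] = (base[target].float() + delta).to(base[target].dtype)

save_file(base, "z_image_turbo_plus_distill_patch.safetensors")
```

    In ComfyUI none of this is necessary: a regular Load LoRA node pointed at the patch file, with the model strength set to 1.0, applies the same delta on the fly.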

    canlaf84 · Dec 18, 2025

    People need to stop using "AI Toolkit" in light of this. It's broken on purpose, it seems... -.-

    aueki4g467 · Dec 18, 2025

    Note: this patch works for LoRA and finetuned models trained on the standard model. Many LoRA are likely based on the de-distilled version, which is why it doesn't work for them.

    Tarterbox
    Author
    Dec 18, 2025

    I'm a bit confused because I'm wondering if I'm experiencing something different.

    It seems like you're suggesting that LoRA trained on the distilled model (the one used for inference) will benefit from this patch, and that certainly makes sense.

    However, all of my LoRA made for Z-Image were adapted on the fly, using z_image_de_turbo_v1_bf16.safetensors, and I'm experiencing body horror, in ways that don't even seem possible, without this patch.

    I think it's important to clarify for others at least, because my method won't change any time soon but anyone passing over the option to use this patch may forget that it's even here later.

    Of course, not everything has to be "right" in the world, and maybe this patch works for both, but I have my theory as to why the adapter is causing issues with LoRA, and it's less than useless to bring it up since we won't have to deal with this scenario for long, one would hope, as the base should arrive soon *crosses toes*.

    aueki4g467 · Dec 19, 2025 · 1 reaction

    Correction: even with LoRA trained on the de-distilled version, the turbo effect and other effects may be lost at inference time.

    But the author of this LoRA says it is best to create a model based on the standard model, without using a de-distilled version or adapter, and then combine it with this LoRA at runtime. This LoRA is designed for that purpose.

    Both AI Toolkit and Musubi Tuner assume the use of a de-distilled version or adapter, so it is unclear whether this LoRA will be effective with the currently released models and LoRA.
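
    To make "combine it with this LoRA at runtime" concrete, here is a minimal sketch using diffusers' multi-adapter LoRA loading. Whether Z-Image is reachable through DiffusionPipeline is an assumption on my part, and the repo id and file names below are hypothetical, so adapt it to whatever loader you actually use; in ComfyUI the equivalent is simply two Load LoRA nodes chained, both at strength 1.0.

```python
# Sketch: stack a concept LoRA with the distill patch at inference time.
# Repo id, file names, and Z-Image support in diffusers are assumptions.
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "Tongyi-MAI/Z-Image-Turbo",      # hypothetical repo id
    torch_dtype=torch.bfloat16,
).to("cuda")

# Concept LoRA trained against the standard (distilled) Turbo checkpoint
pipe.load_lora_weights("my_concept_lora.safetensors", adapter_name="concept")
# The distill patch from this post, applied alongside it
pipe.load_lora_weights("z_image_turbo_distill_patch.safetensors", adapter_name="distill_patch")
pipe.set_adapters(["concept", "distill_patch"], adapter_weights=[1.0, 1.0])

image = pipe("a pedestrian crosswalk at dusk",
             num_inference_steps=8, guidance_scale=1.0).images[0]
image.save("out.png")
```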

    Tarterbox
    Author
    Dec 19, 2025

    @aueki4g467 
    Thanks for the update.

    sianosianz · Dec 18, 2025 · 2 reactions

    Looks like it works like a latent compression un-packer, or an ADD (Adversarial Diffusion Distillation) "reverser". So it expands/restores/(recreates??) compressed parts of the model, providing depth/space for LoRAs to breathe. Sounds like black magic/alchemy. Amazing stuff, thanks for posting!

    Tarterbox
    Author
    Dec 18, 2025

    You're welcome.

    Thank you for your input, I'm sure someone out there has the thinky power to appreciate what you explained, this is one area that's beyond my thinky abilities. I just like pretty pixels.

    Cheers.

    nymical · Dec 19, 2025 · 2 reactions

    @Tarterbox Not to be rude, but the present participle of 'think' is 'thinking' (think-ing).
    I understand that English might not be your first language (same for me), but we learn as we go. :)

    Jellai · Dec 18, 2025 · 3 reactions

    I definitely see more concept accuracy in the After picture, which is cool. However, it does seem like the realism suffers, as the new one has a bit of a 2.5D look. I suppose it could also be the luck of the seed. I'll do some tests on my own.

    Anyway, until we get the full base model, it's cool that people like Ostris are creating hacks for us to do what we couldn't do otherwise, and cool that people like you are releasing tools to further hone that. There are tradeoffs with all of it, but it's nice that people are working on letting us do what wasn't possible before.

    - Edited from my original comment, which wasn't really fair to what has been done here:

    "Weird. The After image looks like a 2.5D cartoon. Seems like the Before is far better."

    Tarterbox
    Author
    Dec 18, 2025· 2 reactions

    Then the patch did its job because the LoRA features being blocked by the adapter process were released by the patch.

    Again, as I stated, the LoRA being affected is meaningless, until you choose to assert quality as a parameter by which to quantify the usefulness of the patch. Unless I misunderstood you, your intent was to critique the usefulness of the patch, which I'm using right now, and have been for several hours, in order to generate example images of a LoRA I'm about to upload.

    However, if you're interested in contributing your critique to effect change, you'll do better by contacting the entity, or entities, responsible for creating the patch, to whom I am incredibly grateful, as they seem to have clearly identified a fault in the training process that I've been struggling with for many days.

    Jellai · Dec 18, 2025

    @Tarterbox I'm sure the lora patch is useful. I just find the ins and outs of Ostris's hack (and undoing it) to be interesting. I'm not saying this lora is worthless. Just trying to understand it more fully, and it seems to make the base rendering worse from the test here, even if it helps loras to become more accurate. Maybe other more thorough tests won't result in that. I'm just commenting on new information that I'm seeing.

    There are tradeoffs with using hacks, until we get the proper Omni model, and it doesn't make sense to get defensive about people noticing the tradeoffs (though I didn't handle my end in the best way either). And the tradeoffs are just that, trading for something else. As you have stated, you're trading for base concept coherence and lora accuracy, and I think that's great. I'm glad you found a way to pull that off. You seem to think my observation is worthless, and I get that feeling, but I just consider it part of the full conversation to plot out the pros and cons, and slowly build an accurate picture of how to make these tradeoff decisions.

    I honestly think it's cool that you've pulled this together, and I'm sorry I didn't lead with praise for that, really. I should have. I just jumped into observation mode and wrote out my mental note, from the limited perspective of it just having been released, and not having tested it with my loras yet. I apologize for that.

    Tarterbox
    Author
    Dec 18, 2025· 1 reaction

    @Jellai 
    Yea um, I guess I get people upset, I'm sort of logical and don't consider how people feel, it's autism.

    But um, well, I think you're trying to infer things that aren't true about me but if I'm specific and disagree with you then I worry that you'll make more words and I'm kind of old, retired, and I'm going to just try and have fun with my stuff and go back to my hobby, ok man?

    You know, have fun alright? No hard feelings, actually no feelings over on this side at all... take care now \o

    And um, whatever it is you said, you're right, I agree, maybe that makes things ok now.

    Jellai · Dec 18, 2025

    @Tarterbox I'm glad we talked it out. For what it's worth, I am also neurospicy, and that played a part in how I'm approaching this. But I also don't want to leave it at that. I do want to make it better to some degree. Would you like me to add context to my original comment? I really could acknowledge the strengths of what you've done ahead of my other observation. As I said before, I think it's important for people to see the full picture, and maybe I could give more of the full picture in my comment.

    You know what, I'm just going to do that.

    xto · Dec 18, 2025

    According to their article, this LoRA is intended for use in scenarios where instead of training a LoRA on an adapted/de-distilled checkpoint, you've done SFT on the distilled model, which destroys the distillation, then this LoRA is supposed to bring it back.

    xto · Dec 18, 2025

    (Can't edit my comment for some reason.)

    So presumably, using this on the regular distilled checkpoint will just apply double distillation. That's quite apparent to me from the comparison images you posted; they look posterized, like you'd get from running too few steps.

    I agree it's difficult to train Z-Image on new concepts but I think we just have to wait for the base model.

    Tarterbox
    Author
    Dec 18, 2025· 1 reaction

    @xto
    Hey there \o

    I'm glad someone understands this stuff, and I wish I understood what you said. I read the article, in fact I read it a few times, before choosing to post the item, though I tested it extensively prior. I like what it's doing, it saved me from tossing away multiple LoRA that I thought were damaged.

    You know, I've gotten a bit old O.o and now I think carefully before accepting any truth as part of my arsenal of ammunition; instead, I test everything.

    I even challenge assertions made by authoritative sources, and while I don't understand what the article says, if it reads as though I shouldn't be getting the results that I'm getting, I would still proceed as though I hadn't read it. Now, after all of that, I would very much like to know what you mean, in some simple terms. For instance, do you think I'm using it incorrectly and, if so, how might it be used properly? I bet other people would love to know that as well.

    Cheers!

    gunjinokanrei · Dec 18, 2025 · 3 reactions

    I completely agree with OP - you gotta test it for yourself to see if it does anything you might find useful, better, or whatever works for you. In my case, I tested it with my Grainfire LoRA (str 1 for both), and I like the results. So, cheers!

    juanpi · Dec 19, 2025

    Sorry for butting in where I wasn't asked, but just in case it helps someone: this LoRA could be similar to an acceleration LoRA for de-distilled models, which might be usable in training. For ZIT we have this model "https://civitai.com/models/2196015?modelVersionId=2472641", which can be accelerated for generation with this LoRA "https://civitai.com/models/2195651?modelVersionId=2472220". The de-distilled models raise the CFG (2-3) and increase the steps (20-30), and with the acceleration LoRA we can generate with CFG 1 and 8 steps.
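
    For anyone skimming, here are the two sampler configurations described above, written out as plain Python dicts; the numbers come straight from the comment and the field names are only illustrative, not tied to any particular UI or library.

```python
# Sampler settings implied by the comment above; field names are illustrative only.
DE_DISTILLED_ALONE = {       # de-distilled checkpoint by itself
    "cfg_scale": 2.5,        # roughly 2-3 per the comment
    "steps": 25,             # roughly 20-30 per the comment
}

DE_DISTILLED_PLUS_ACCEL = {  # same checkpoint + acceleration LoRA at strength 1.0
    "cfg_scale": 1.0,
    "steps": 8,
}
```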

    Tarterbox
    Author
    Dec 19, 2025· 1 reaction

    Thank you for the information, it allowed me to provide a proper testing facility for the patch in order to verify what some have alluded to. As a result of my testing I've updated the description.

    amazingbeauty · Dec 19, 2025

    Wow, you guys are saying great stuff... I'm still that user who's waiting for a 4-step solution.

    juanpi · Dec 19, 2025

    @amazingbeauty As far as I know, you have this 5-step solution. It's not 4, but it's close. Zimage Turbo by Stable Yogi - Lab Rat v0 | ZImageTurbo Checkpoint | Civitai

    amazingbeauty · Dec 22, 2025

    @juanpi That Stable Yogi... is a machine that makes AI even worse. I know he's trendy, but all his models look even more fake. Thank you for trying to help, though.

    yesfroggo · Jan 5, 2026 · 1 reaction

    I had no idea my post was pinned, thanks! I've been using this in everything since I tried it. It breathes a lot of life into images; you just have to control the chaos/noise that comes with it, haha.

    Tarterbox
    Author
    Jan 5, 2026

    Kewlies nods

    MaskmanBlade · Mar 13, 2026 · 1 reaction

    I didn't know I needed this, but thank god I saw this post. Thanks OP.

    Tags: LORA, ZImageTurbo

    Details

    Downloads: 1,120
    Platform: CivitAI
    Platform Status: Available
    Created: 12/18/2025
    Updated: 5/16/2026
    Deleted: -