CivArchive
    LTX 2.3 I2V / T2V (Base & GGUF) use your own 🔉 & SEED VR2 upscaler (Deprecated) - v1.1 upscale & no upscale
    NSFW

    This model has not been updated. Do not use.

    Note: If you have ANY issues with nodes not downloading, read the notes or reach out. None of the non-core nodes are anything special.

    ⛔⚠️🛑✋ Read the notes completely before using. The most common install and node problems are covered in the directions.

    Instagram: https://www.instagram.com/synth.studio.models/

    Buy me a☕ https://ko-fi.com/lonecatone

    This represents many hours of work. If you enjoy it, please 👍 like, 💬 comment, and feel free to ⚡ tip 😉

    Chinese instructions are also included.

    This workflow requires 12 GB of VRAM or more.

    Two separate versions: one with a latent upscaler and one without.

    • I like the one without, but the jury is still out. I'd love feedback on which settings work best.

    Features:

    • Does both T2V and I2V

    • Allows you to use your own audio track.

      • Note: Lip sync is really hit or miss on this one. I'm still working on adjustments.

    • Easy setup and use

    • Motion adjustment for more dynamic videos

    • Multimodal guider for fine tuning audio to video dynamics

    • Prompt generation from an image

    • Prompt enhancement

    • Handles both Normal and NSFW generation

    • Chinese instructions are also included.


    Comments (19)

    Agino · Mar 7, 2026
    CivitAI

    I don't think the LoRA is working with your workflow. I tested with a different workflow; it seems the LoRAs don't work in yours. I recommend using Daddy Lora. The loader is very good and fixes the annoying noise in the video.

    lonecatone23
    Author
    Mar 7, 2026

    What annoying noise? I've had zero issues. Be more specific.

    Agino · Mar 8, 2026

    @lonecatone23 Some LoRAs have this annoying static noise in the background. I just recommended using the Daddy Lora loader to remove it, since I think the LoRA loader in your workflow is working.

    lonecatone23
    Author
    Mar 8, 2026

    @Agino A LoRA loader is the same. It has nothing to do with it, especially as there are no LTX 2.3 LoRAs out yet.

    I really have no idea what you are talking about.

    Agino · Mar 8, 2026

    This LoRA loader - https://github.com/seanhan19911990-source/LTX2-Master-Loader/tree/main - and LoRAs from LTX-2 also work on LTX-2.3. I tested it and it works.

    lonecatone23
    Author
    Mar 8, 2026

    @Agino Are you crazy? Some random node with that stupid name? It's not in the ComfyUI registry.

    Also, the LoRA loader works absolutely fine.


    No thank you.

    m1ndth13v3 · Mar 10, 2026
    CivitAI
    cannot unpack non-iterable NoneType object

    I'm getting this error and can't get it fixed.

    lonecatone23
    Author
    Mar 10, 2026

    It's missing something. Without seeing your log, I have no idea. Did you try feeding the log through Grok or Claude?

    m1ndth13v3 · Mar 12, 2026

    @lonecatone23 I figured it out: the Get_vae node somehow isn't connected to the video VAE. When Checkpoint Model is enabled, it gets disabled once the GGUF model is turned off. BTW, excellent workflow. The prompt enhancer uses a lot of RAM, though.

    lonecatone23
    Author
    Mar 12, 2026

    @m1ndth13v3 Oh shit. Thanks for that. I'll revise.

    lonecatone23
    Author
    Mar 12, 2026

    @m1ndth13v3 Hold on. To clarify, you do not need the video VAE when you use a checkpoint model; it is baked in. However, you do need it for the GGUF. If you use a diffusion model, then that's a different story.

    Please do me a favor and check exactly which model you loaded. I can't make it error out.

    m1ndth13v3 · Mar 12, 2026

    @lonecatone23 Okay, now I understand. The checkpoint model I'm using is the dev_transformer_only_bf16. Should I be using something else?

    lonecatone23
    Author
    Mar 12, 2026 · 1 reaction

    @m1ndth13v3 Yeah, the transformer-only file is a diffusion model. You can use that, but use a diffusion model loader and pull the Video VAE out of the GGUF group so it doesn't turn off with the switch, then eliminate the "anyswitch" and hook it directly up to the SetNode.
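    The VAE routing described in this thread can be sketched in plain Python. This is an illustrative model of the logic only, not actual ComfyUI node code; the function and parameter names are hypothetical. It shows why a checkpoint model works without an external video VAE while GGUF or transformer-only (diffusion) models error out with a NoneType failure when the VAE group is switched off.

    ```python
    # Hypothetical sketch of the workflow's VAE selection logic.
    # "checkpoint" files carry a baked-in VAE; "gguf" and "diffusion"
    # (transformer-only) files need an external video VAE wired in
    # directly, not through a switch that disables with the GGUF group.

    def pick_video_vae(model_kind, baked_vae=None, external_vae=None):
        """Return the VAE the decoder should receive, or raise if missing."""
        if model_kind == "checkpoint":
            if baked_vae is None:
                raise ValueError("checkpoint should carry its own VAE")
            return baked_vae
        if model_kind in ("gguf", "diffusion"):
            if external_vae is None:
                # This is the failure mode behind the reported
                # "cannot unpack non-iterable NoneType object" error:
                # downstream nodes receive None instead of a VAE.
                raise ValueError("load an external video VAE for this model")
            return external_vae
        raise ValueError(f"unknown model kind: {model_kind!r}")
    ```

    With this framing, dev_transformer_only_bf16 falls in the "diffusion" branch, which is why the video VAE must stay enabled regardless of the GGUF switch.
    
    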

    iamaritomix545 · Mar 12, 2026
    CivitAI

    Hi everyone. Question... I have everything updated and all the models your WF requires. I can't get my image to look detailed. Any advice on how to avoid it looking plasticky and distorted? Thanks a lot.

    lonecatone23
    Author
    Mar 12, 2026

    I haven't figured that out yet. It's LTX, not the workflow. It's also why I made two separate workflows.

    ai_machine_learner · Mar 12, 2026
    CivitAI

    I'm having trouble with the Qwen Prompt Enhancer. I'm not sure which settings to use, since the workflow was pre-set to a setting that does not exist. I have a 4070 Ti Super 16GB. What should those settings be?

    lonecatone23
    Author
    Mar 12, 2026

    Hmmm, I just opened it and looked. It's set for low VRAM. It should be Qwen 4B Instruct and 4B RAM friendly. It's slooooow regardless. I like it and hate it at the same time.

    You don't really need it. Honestly better results are gained by feeding your prompt directly to an LLM

    ai_machine_learner · Mar 13, 2026

    @lonecatone23 Thanks! One other thing: the no-upscaler version has a section for upscaling. Is it just disabled? Or am I just dumb? (highly possible) Thanks for your work with this!

    lonecatone23
    Author
    Mar 13, 2026

    @ai_machine_learner Lol, no. My bad, I should have clarified: no LATENT upscaler.

    Workflows
    LTXV2

    Details

    Downloads
    666
    Platform
    CivitAI
    Platform Status
    Available
    Created
    3/7/2026
    Updated
    5/16/2026
    Deleted
    -

    Files

    ltx23I2VT2VBaseGGUFUseYourOwn_v11UpscaleNoUpscale.zip