CivArchive
    Rebels LTX-2.3 Dev (GGUF) - V1 - Base workflow I2V
    NSFW

    UPDATED TO VERSION 2

    VERSION 2 removes the textenhance node to rid the workflow of the terribly time-consuming encoding. It also includes a secondary workflow that gives the correct distilled model settings, all in one workflow!

    PRO-TIP: if you have at least 24 GB of COMBINED memory (e.g. 8 GB VRAM + 16 GB RAM), you can fit Kijai's fp8 scaled model. I tested it and it's WAY better than the GGUF formats. Worth a try!
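    If you're unsure whether you clear the 24 GB combined-memory bar, here's a rough sketch of how to check it. This is not part of the workflow; it's a best-effort helper that assumes a Linux-style os.sysconf for system RAM and reads VRAM via torch (which ships with ComfyUI) if available, counting it as 0 otherwise:

    ```python
    # Rough check for the "24 GB combined memory" pro-tip above.
    # Assumption: os.sysconf works (Linux/macOS); on Windows you'd need
    # a different RAM query. VRAM falls back to 0 if torch is missing.
    import os

    def combined_memory_gb() -> float:
        """Return system RAM + GPU VRAM in gigabytes (best effort)."""
        ram = os.sysconf("SC_PAGE_SIZE") * os.sysconf("SC_PHYS_PAGES")
        vram = 0
        try:
            import torch  # bundled with any ComfyUI install
            if torch.cuda.is_available():
                vram = torch.cuda.get_device_properties(0).total_memory
        except ImportError:
            pass
        return (ram + vram) / 1024**3

    def can_fit_fp8_scaled(threshold_gb: float = 24.0) -> bool:
        # e.g. 8 GB VRAM + 16 GB RAM meets the 24 GB bar mentioned above
        return combined_memory_gb() >= threshold_gb

    print(f"combined: {combined_memory_gb():.1f} GB, "
          f"fp8 scaled feasible: {can_fit_fp8_scaled()}")
    ```

    Combined memory is only a rough proxy: offloaded layers in system RAM are much slower than VRAM, so meeting the threshold means "it fits", not "it's fast".
    
    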

    I HIGHLY RECOMMEND UPDATING YOUR BAT FILE WITH THESE FLAGS:
    --lowvram --disable-xformers --use-pytorch-cross-attention --reserve-vram 2 --disable-smart-memory

    (these flags help keep the text encoder from sneaking back onto the GPU, and make ComfyUI prioritize burning through VRAM first)
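    For reference, here is roughly how those flags slot into the standard ComfyUI portable launcher (run_nvidia_gpu.bat); the python_embeded path and the --windows-standalone-build flag assume the default portable layout, so adjust if your install differs:

    ```bat
    @echo off
    REM run_nvidia_gpu.bat -- ComfyUI portable launcher with the low-VRAM flags above
    .\python_embeded\python.exe -s ComfyUI\main.py --windows-standalone-build --lowvram --disable-xformers --use-pytorch-cross-attention --reserve-vram 2 --disable-smart-memory
    pause
    ```

    Note that --reserve-vram takes the amount in GB, so "--reserve-vram 2" keeps 2 GB free for the OS and other apps.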

    __________________________________________________________________________________________________

    LOW VRAM workflow for the LTX-2.3 DEV GGUF!

    NOTE: the workflow is set to 20 steps by default with the distill LoRA. If you don't use the distill LoRA, you'll need 40 to 50 steps.

    You can remove the textenhance node if you don't want to wait for token generation!

    The workflow is set to run both VAEs and the text encoder on CPU. If this is too slow for you, swap the VAEs to main device and the text encoder to default; that will speed it up, but you may encounter OOMs because of it.

    WARNING! GENERATION TIMES WILL BE EXTREMELY LONG! The Gemma text encoder generates tokens, and this process takes VERY long, over an hour in some cases. You CAN run the text encoders on your GPU, but with anything less than 10 GB it'll probably OOM. I recommend chaining in the Gemma API node and generating an API key so you can save VRAM and time by encoding through the cloud. (IT'S FREE AND EASY TO SET UP)


    FILES:

    OPTIONAL: Kijai's fp8 scaled (requires the Load Diffusion Model node instead of the Unet Loader node, and replaces the GGUF entirely)

    https://huggingface.co/Kijai/LTX2.3_comfy/tree/main/diffusion_models

    DEV GGUF (distilled GGUFs are in the repo as well)

    https://huggingface.co/unsloth/LTX-2.3-GGUF/tree/main

    (Unsloth usually makes the best quants. They use a special quantization format that keeps precision high! Get the UD version.)

    Gemma 3 12B FP4 text encoder

    https://huggingface.co/Comfy-Org/ltx-2/blob/main/split_files/text_encoders/gemma_3_12B_it_fp4_mixed.safetensors

    Audio VAE

    https://huggingface.co/Kijai/LTX2.3_comfy/blob/main/vae/LTX23_audio_vae_bf16.safetensors

    Video VAE

    https://huggingface.co/Kijai/LTX2.3_comfy/blob/main/vae/LTX23_video_vae_bf16.safetensors

    Text Projection text encoder

    https://huggingface.co/Kijai/LTX2.3_comfy/tree/main/text_encoders

    Distill LoRA

    https://huggingface.co/Lightricks/LTX-2.3/blob/main/ltx-2.3-22b-distilled-lora-384.safetensors

    Spatial Upscaler (latent upscale models folder)

    https://huggingface.co/Lightricks/LTX-2.3/blob/main/ltx-2.3-spatial-upscaler-x1.5-1.0.safetensors

    Description

    Base workflow for the GGUF dev model.

    Workflows: LTXV2

    Details

    Downloads: 694
    Platform: CivitAI
    Platform Status: Available
    Created: 3/11/2026
    Updated: 4/27/2026
    Deleted: -