CivArchive
    (Video Tutorial Resources) LTX Sequencing workflow to create long video clips (new model!) - Alpha

    Comments (23)

    Aderek514 · Jan 2, 2025
    CivitAI

    Any examples of results?

    Grockster
    Author
    Jan 2, 2025 · 1 reaction

    Check out the video tutorial (it's in the first 30 seconds but the rest of the video is great too) :)

    Grockster
    Author
    Jan 3, 2025

    I just added a sample vid

    snap2887 · Jan 2, 2025 · 2 reactions
    CivitAI

    This looks promising, can't wait to test it. Best of luck with the progress.

    Grockster
    Author
    Jan 2, 2025

    Thank you!

    KAndyZZZ · Jan 2, 2025 · 1 reaction
    CivitAI

    What hardware is needed?

    Grockster
    Author
    Jan 3, 2025

    My hardware is a 4090 (24GB VRAM), but others with 16GB were able to run it too (I haven't tested below that, but it's possible)

    gorathan274 · Jan 9, 2025 · 1 reaction

    @Grockster will test in the next 2-3 days, have a project anyways

    CManzione · Jan 3, 2025 · 2 reactions
    CivitAI

    This is great thank you. Do you have a Vid2Vid version of this?

    Grockster
    Author
    Jan 3, 2025 · 1 reaction

    Not yet, but slowly making our way :)

    DirkBenedict · Jan 3, 2025 · 1 reaction
    CivitAI

    To save precious VRAM: in the OllamaGenerateAdvance nodes, set keep_alive to 0 so Ollama unloads the model after the prompt is generated

    Grockster
    Author
    Jan 3, 2025

    Thanks!
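    For reference, the keep_alive tip above corresponds to the `keep_alive` field in Ollama's `/api/generate` HTTP API. A minimal sketch of the request body (the model name and prompt here are placeholder assumptions, not values from the workflow):

```python
import json

def build_generate_payload(model: str, prompt: str) -> dict:
    """Build a request body for Ollama's /api/generate endpoint.

    keep_alive=0 asks the server to unload the model from memory as
    soon as the response finishes, freeing VRAM for the LTX sampling
    steps that follow.
    """
    return {
        "model": model,        # placeholder model name
        "prompt": prompt,
        "stream": False,
        "keep_alive": 0,       # unload immediately after generation
    }

# Example payload (built locally, not sent anywhere here):
payload = build_generate_payload("dolphin-mistral", "Describe the next shot")
print(json.dumps(payload, indent=2))
```

    The ComfyUI Ollama nodes build a payload like this internally; the sketch just shows which field the keep_alive setting maps to.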

    orange8745164 · Jan 7, 2025

    How much VRAM do I need for this?

    Grockster
    Author
    Jan 7, 2025

    @orange8745164 I've had people with 16GB VRAM who could run it; I haven't tested with less than that yet

    AdvOfJet · Jan 4, 2025
    CivitAI

    Whenever I try this workflow I get the following error.

    OllamaGenerateAdvance

    1 validation error for GenerateRequest model String should have at least 1 character [type=string_too_short, input_value='', input_type=str]

    I have ollama installed and the server address in the workflow is correct. Any ideas what could be wrong?

    DirkBenedict · Jan 4, 2025 · 1 reaction

    The model input is empty, probably because Ollama is running but no models have been downloaded (likely if you just installed it). Use "ollama run <model_name>" to download and run a model, then refresh the node definitions or restart ComfyUI (can't remember which I did) so the OllamaGenerateAdvance node can populate the model list. I'm not sure which model is best for prompt generation, but I used 'ollama run dolphin-mistral' and it has worked well so far.

    Grockster
    Author
    Jan 4, 2025

    Yup agreed with @DirkBenedict - you have to add at least one model
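    The diagnosis above can be verified against Ollama's `/api/tags` endpoint, which lists the locally downloaded models; if that list is empty, the node's model dropdown has nothing to populate. A hedged sketch (the server address is Ollama's default, and `model_names`/`installed_models` are helper names invented here):

```python
import json
from urllib.request import urlopen

OLLAMA_URL = "http://127.0.0.1:11434"  # Ollama's default local address

def model_names(tags_json: dict) -> list[str]:
    """Pull model names out of an /api/tags response body."""
    return [m["name"] for m in tags_json.get("models", [])]

def installed_models(base_url: str = OLLAMA_URL) -> list[str]:
    """Ask a running Ollama server which models are downloaded."""
    with urlopen(f"{base_url}/api/tags") as resp:
        return model_names(json.load(resp))

# An empty list here reproduces the string_too_short error above:
# the node has no model name to send. Running e.g.
# `ollama run dolphin-mistral` in a terminal downloads a model;
# then restart ComfyUI so the node refreshes its dropdown.
```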

    yikifooler · Jan 7, 2025
    CivitAI

    The node "Seconds per sequence" is not installed. Which node is that? I have installed ComfyUI-Logic too

    Grockster
    Author
    Jan 7, 2025

    It's just an Int node, part of the ComfyUI-Logic set

    rocky533 · Jan 19, 2025 · 3 reactions
    CivitAI

    Nice workflow. Glad I'm not the only one having issues with extending blurring the faces slightly each iteration. I have tried everything; it may just be an LTX limitation

    Grockster
    Author
    Jan 19, 2025 · 1 reaction

    Yup, once the model can figure out how to get a perfect end frame(s), then starting the next iteration will be MUCH cleaner... Here's to continued improvements :)

    tomgottsauner430 · Feb 19, 2026
    CivitAI

    Install Required: LTXVModelConfigurator

    Install Required: LTXVShiftSigmas

    Install Required: Int-🔬

    Install Required: LTXVLoader

    Can't find these nodes

    Grockster
    Author
    Feb 19, 2026

    This is for the previous version of LTX; I would look at using the newer LTX2 instead (and start with the Comfy templates, as they're really good and easy to use)

    Workflows
    LTXV

    Details

    Downloads
    1,120
    Platform
    CivitAI
    Platform Status
    Available
    Created
    1/2/2025
    Updated
    5/13/2026
    Deleted
    -

    Files

    VideoTutorialResourcesLTX_alpha.zip

    Mirrors