CivArchive

    Work in progress

    The models marked with 📦 are archived and I have yet to update them.

    Check the About this Version of the chosen workflow for a proper introduction.

    Question: how do I embed the nodes into the images or videos?

    If there is a way to load OmniGen/CogVideoX/LLM/t2a/AnimateDiff (GPU) through a CustomSamplerAdvanced node, please let me know.

    Description

    I generate the input image with a PDXL model, but you can use your favorite t2i model. I use v-prediction to maximize creativity. My favorite noise chain for t2i is: 1 step of fe_heun3, 1 step of SamplerSonarDPMPPSDE (student-t), then 2 steps of lcm (uniform). For a better first step you can use the SamplerDPMAdaptative node that is left unconnected; it's optimized to go fast, but you can play with it. For the second stage you can prolong the lcm (uniform) for smoother but less creative results, or add a SamplerRES_Momentumized (highres-pyramid) and finish with 2 steps of lcm (uniform). You can also try the ClownSampler node for step 2 to get a different result; the lcm (uniform) can also be swapped for the ClownSampler, but I really like what lcm (uniform) does.

    For LTX video, the sampler chain is not required, but if you want the best from the model, experimenting is your best bet. Also, the CFG modulates the movement, the consistency, and the artifacts, so you may as well experiment with a different CFG for each third of the generation. That is also a reason for the split sigmas, which improve the generation by a lot.
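    The split-sigmas idea above can be sketched in plain Python. This is a hedged conceptual illustration, not the ComfyUI API: the schedule, the split helper, and the per-segment CFG values are all illustrative assumptions, standing in for what nodes like SplitSigmas do in the actual workflow.

    ```python
    # Conceptual sketch only (plain Python, not ComfyUI code): split a noise
    # schedule into thirds so each segment can run with its own CFG, as the
    # description suggests for LTX video. All names/values are illustrative.

    def geometric_sigmas(n_steps, sigma_max=14.6, sigma_min=0.03):
        """Simple geometric noise schedule ending at 0 (n_steps + 1 values)."""
        ratio = (sigma_min / sigma_max) ** (1.0 / (n_steps - 1))
        return [sigma_max * ratio ** i for i in range(n_steps)] + [0.0]

    def split_at(sigmas, step):
        """Split a schedule at `step`, duplicating the boundary sigma so each
        segment can be handed to its own sampler pass."""
        return sigmas[: step + 1], sigmas[step:]

    steps = 9                          # the 9-step variant from the title
    sigmas = geometric_sigmas(steps)   # 10 sigma values for 9 steps
    third = steps // 3

    first, rest = split_at(sigmas, third)
    second, last = split_at(rest, third)

    # Assign a different CFG per third, e.g. stronger guidance early on;
    # each (segment, cfg) pair would drive its own sampler pass.
    segments = [(first, 7.0), (second, 5.0), (last, 3.0)]
    ```

    The duplicated boundary sigma is what lets the next sampler pick up exactly where the previous one stopped, which is how chained sampler stages stay consistent.
    
    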

    Workflows
    Other

    Details

    Downloads
    83
    Platform
    CivitAI
    Platform Status
    Available
    Created
    11/24/2024
    Updated
    9/28/2025
    Deleted
    -

    Files

    4Or9StepsSamplerChainsNoiseTypes_Ltx.zip