    Animation_hardcore-LTX2.3 - I2V_v1.0.9_2D
    NSFW

    Animation_hardcore-v1.0.0: Dynamic Lora, short videos

    Anime_Hardcore-1.0.9: Static Lora, short videos

    D_LORA-2D: Dynamic Lora, Long video

    R LORA-3D: Trained on Sulphur. The model is relocating.

    Animator_Artist_VideoTech: "Tag D_LORA-2D"

    Features: High dynamic range, enhanced spatial awareness, with a core focus on the subtle nuances of continuous NSFW interactions.

    Description: This LoRA is capable of generating stable, interactive NSFW videos lasting up to 30–45 seconds. Achieving this required massive VRAM and compute power during training. The key was strictly controlling the facial weights to prevent over-baking, which usually leads to facial collapse and motion blur.

    What this means for you: When generating standard 5–10 second clips, you will experience extremely fluid, continuous motion and deep interaction. You definitely won't be bored!

    Pros: Highly practical for extreme dynamic motion.

    Cons: The high dynamics result in a slight loss of fine facial detail.

    Optimization: For the perfect output, mix and fine-tune this alongside the Anime_Hardcore v1.0.9_i2v LoRA.

    Trigger Word: [Enter your LoRA name]

    Theory & Core Knowledge:

    Static LoRAs: Deliver high resolution and crisp texture details. (The downside: they produce stiff "moving slideshows.")

    Dynamic LoRAs: Deliver superior spatial awareness and fluid motion. (The downside: prone to facial collapse and blur.)

    To achieve high-level animation, you must categorize your LoRAs into Static and Dynamic, and blend them accordingly.

    Remember the golden formula:

    Static LoRA + Dynamic LoRA = Ultimate Video Quality.
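The golden formula can be sketched as a LoRA stack mapping names to strengths. This is purely illustrative: the strength values are assumptions, not settings from the model card, and the two LoRA names are taken from the version list above.

```python
# Illustrative sketch only: represent "Static + Dynamic" as a LoRA stack
# of name -> strength. The 0.6 / 0.8 strengths are assumed examples.
def blend_loras(static_strength, dynamic_strength):
    """Static LoRA carries texture and facial detail; dynamic LoRA carries
    motion and spatial awareness. Keeping combined strength moderate helps
    avoid the over-baked faces the card's training notes warn about."""
    return {
        "Anime_Hardcore-1.0.9": static_strength,  # static: crisp detail
        "D_LORA-2D": dynamic_strength,            # dynamic: fluid motion
    }

stack = blend_loras(0.6, 0.8)
```

In a ComfyUI workflow this corresponds to chaining two LoRA loader nodes with those strengths; tune the two numbers against each other rather than raising both.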

    Final Disclaimer (Read This):

    I know beginners are eager to master complex workflows and want to see mind-blowing animations immediately. But if you don't calm down, study patiently, and let genuine passion drive you, your frustration will just turn into toxic negativity aimed at others.

    Everyone has their own unique workflow. You don't have to copy mine perfectly—just work within your own limits. Everyone is different. The goal here is to have fun and be entertained!

    Complaining is human, but if you're going to complain, channel that energy into training your own model and sharing it with the community for fine-tuning. Everyone wants a top-tier model like Grok handed to them, but complaining keeps you exactly where you are.

    LTX is a god-tier model. Have some reverence and don't complain in front of the divine, or you might just face its wrath!

    Description

    LTX 2.3 is an absolute gift from God!

    Using LTX 2.3 isn't just super fast—once you try it, there's no going back!


    Comments (14)

    Ponder_Stibbons · Mar 30, 2026 · 4 reactions

    I agree with you 1000% on ditching the upscaler. That was a revelation for me too. Not only do you save all that memory (LTX being chunky already), the quality of a full-size, single-stage run is friggin amazing. The upscaler is useful for fixing up old stuff, but not needed for I2V.

    Ponder_Stibbons · Mar 30, 2026

    I just noticed you said quality is bad. That's not true at all. 704x1080 is awesome. Dev+304, pre-distilled, all of it. It's great. Maybe you're starting too small?

    artunoffical620 · Mar 30, 2026

    How many steps are you doing?

    Ponder_Stibbons · Mar 31, 2026

    @artunoffical620  With full dev model + 304 distill lora at 0.7 strength my sigma string is 1., 0.99375, 0.9875, 0.98125, 0.975, 0.909375, 0.725, 0.421875, 0.0. So that's 8 steps if using LTXVScheduler. Anything past 12, I'd start lowering the distillation strength. I know most of the WFs that have an upscale stage scale down the input image to 0.5, which will definitely give you garbage by itself. Honestly the only time I need to skip the distillation or crank it way down is when the subject is far away. Anything closeup is no problem.
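The sigma string above can be parsed and sanity-checked in a few lines. The step-count convention (sigmas as interval boundaries, so 9 values means 8 steps) is how LTXVScheduler-style manual schedules are usually read, stated here as an assumption:

```python
# Parse the manual sigma string from the comment into a schedule.
sigma_string = "1., 0.99375, 0.9875, 0.98125, 0.975, 0.909375, 0.725, 0.421875, 0.0"
sigmas = [float(s) for s in sigma_string.split(",")]

# Sigmas are step boundaries: 9 values -> 8 denoising steps.
steps = len(sigmas) - 1

# A valid schedule must strictly decrease and end at 0.
assert all(a > b for a, b in zip(sigmas, sigmas[1:]))
assert sigmas[-1] == 0.0
```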

    mr_Jack (Author) · Mar 31, 2026

    I highly recommend just setting the base resolution straight to 1536.

    etherloth · Mar 31, 2026 · 1 reaction

    I have totally ditched the Upscaler step and am using the RTX Video Super Resolution node instead.

    yajukun · Mar 31, 2026 · 1 reaction

    Can someone link me to a good LTX 2.3 WF that does NOT use the upscaler? Most of the posted ones seem to use it. Thanks.

    Ponder_Stibbons · Mar 31, 2026

    @yajukun I posted mine last night. Decode is set up for 720, if you go straight to 1080, change the tile size. It's A2V but you can just add an empty audio latent to change that. https://civitai.com/models/2506770/ltx-23-audio-to-video-for-semi-creative-slop

    Or you could do what I did, just delete the upscaler stage and don't resize the initial image.

    yajukun · Mar 31, 2026

    @Ponder_Stibbons Thanks, will give it a try!

    ILikeCreampies · Apr 1, 2026

    Just so I understand: instead of two stages (one with the upscaler) at 3–4 steps each, you're doing one stage with 8 steps?

    Ponder_Stibbons · Apr 1, 2026 · 1 reaction

    @ILikeCreampies Yes indeed. If you are using a straight-up distilled model you can stick with that, as you won't get much benefit from more steps. I prefer using a dev model + one of the distillation LoRAs. If you do that, you can increase steps as you decrease the LoRA strength. For example, strength at 1 @ 8 steps, strength .8 @ 10 steps, .5 @ 15 steps. That's just a rough example. I prefer to use manual sigmas, but you can use a schedule node and set a step number and that will calculate the schedule for you. I generally only up the steps when doing 1920x1088, full HD. It's sooooo much better than using the upscaler. Friggin model puts out 10 seconds of that in four minutes for me (with 8 steps and an i2v schedule). That's crazy.
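The rough strength-to-steps rule in that comment (1.0 at 8 steps, 0.8 at 10, 0.5 at 15) can be sketched as a piecewise-linear lookup. The interpolation between the stated points is my assumption; the commenter only gave the three sample values:

```python
def steps_for_strength(strength):
    """Rough rule of thumb from the comment: lower distillation LoRA
    strength -> more denoising steps. Interpolates linearly between the
    three stated sample points (an assumption, not a stated rule)."""
    points = [(0.5, 15), (0.8, 10), (1.0, 8)]  # (lora strength, steps)
    if strength <= points[0][0]:
        return points[0][1]
    if strength >= points[-1][0]:
        return points[-1][1]
    for (s0, n0), (s1, n1) in zip(points, points[1:]):
        if s0 <= strength <= s1:
            t = (strength - s0) / (s1 - s0)
            return round(n0 + t * (n1 - n0))
```

For example, `steps_for_strength(0.9)` lands between the 0.8 and 1.0 points at 9 steps.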

    ILikeCreampies · Apr 1, 2026

    @Ponder_Stibbons Cheers. I'm trying to increase the quality of the animations I have, as they're still fairly static and I'm trying to figure out what could work in 24G VRAM. I'm mostly using the distilled model but if you say the quality improves with dev and distilled lora I might try that instead. How do you even figure out manual sigmas? They look like random numbers to me. Also what's an "i2v schedule"?

    ILikeCreampies · Apr 1, 2026 · 2 reactions

    So first experiments look very promising. As suggested, I removed the upscaler, and am doing just a single pass at the suggested sigmas 1., 0.99375, 0.9875, 0.98125, 0.975, 0.909375, 0.725, 0.421875, 0.0

    Seems to work well, but I found that I need to remove the audio weights from the lora to avoid getting random action music in the clip.

    I'm currently still using only the I2V LoRA at strength 1 with some other of my existing ones. But you mentioned people should use both your T2V and your I2V together, or did I misunderstand you?

    mr_Jack (Author) · Apr 1, 2026

    I'm glad you understand.

    This is just some advice for people who aren't familiar with LoRAs!

    The best I can do to help them is offer some general advice!

    LTX has a lot of potential—take your time exploring it.