LTX Example Workflow
https://paaster.io/698559390a7a8c3988ee9e91#1NG7332gvq5L4A9yTnKw4INMZ00tgOFaCAEFqU8d-as
WTF?!
Why not ;)
I'm back with another experimental escapade that doesn't seem to be common on here: a custom LTX-2 LoRA (following my earlier CogVideo and Wan Video versions)!
Training (LTX-2)
This LoRA was trained on a dataset of 143 clips with hand-revised captions.
These clips came from 54 unique source videos:
Real Life (42 clips => 74 clips, ~50%): mainly amateur clips from Reddit/RedGifs/Pornhub, plus a couple of studio-shot videos
Anime (7 clips => 14 clips, ~10%): drawn/3D-animated clips
Furry (5 clips => 45 clips, ~40%): 3D animations
Around 80% of these clips had their own audio.
Trained for 4,000 steps (4.5 hours on a RunPod H200 SXM) using the official training script: https://github.com/Lightricks/LTX-2/blob/main/packages/ltx-trainer/docs/quick-start.md
Rank 32
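For a rough sense of throughput, the numbers above work out like this (a quick sanity check using only the step count and wall-clock time stated here, nothing from the trainer itself):

```python
# Rough throughput for the LTX-2 run: 4,000 steps in 4.5 hours on an H200 SXM.
steps = 4000
hours = 4.5

sec_per_step = hours * 3600 / steps   # ~4.05 seconds per training step
steps_per_hour = steps / hours        # ~889 steps per hour

print(f"{sec_per_step:.2f} s/step, {steps_per_hour:.0f} steps/hour")
```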
Training (Wan2.1 I2V [OLD])
This LoRA was trained on around 110 clips of up to 11s each, a mix of real-life amateur/porn videos and animations
990 steps (90 epochs)
Trained on 1 x A100 SXM4 using https://github.com/tdrussell/diffusion-pipe
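The step/epoch numbers above imply an effective batch size, assuming every clip is seen once per epoch (the ~110 clip count is approximate, so treat this as an estimate rather than the actual config value):

```python
# Wan2.1 run: 990 steps over 90 epochs on a dataset of roughly 110 clips.
steps = 990
epochs = 90
clips = 110  # "around 110" per the post, so this is approximate

steps_per_epoch = steps // epochs        # 11 steps per epoch
implied_batch = clips / steps_per_epoch  # ~10 clips per step (effective batch)

print(f"{steps_per_epoch} steps/epoch, effective batch ~= {implied_batch:.0f}")
```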
Training (CogVideo [OLD])
This LoRA was trained on 19 videos of solo male masturbation, mainly amateur clips.
4,000 steps
Trained on an H100 for around 18 hours, using https://github.com/a-r-r-o-w/cogvideox-factory
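The same arithmetic for this run gives a much slower per-step time than the LTX-2 run above (different model, hardware, and settings, so this is only a rough comparison):

```python
# CogVideo run: 4,000 steps in roughly 18 hours on an H100.
steps = 4000
hours = 18

sec_per_step = hours * 3600 / steps  # ~16.2 seconds per training step
print(f"{sec_per_step:.1f} s/step")
```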