So, this was my first try at creating a LoRA. The status is not even beta, it's delta, or something below that. I am a total n00b... but I am learning, so please don't expect too much.
The LoRA can be used to generate anilingus and/or rimming content,
BUT only for a specific position:
The giver has to be on the left side of the frame, with only the head and upper body visible. The taker must be on the right side of the frame, kneeling and bending slightly forward.
The classic facesitting positions are, unfortunately, problematic. I was lucky to generate one okay-ish video.
Please check the attached videos to see which positions work.
Have fun! ;-)
Description
V0.1, trained with this prompt:
666asslick, two people, person on the left is licking the butthole of another person, indoor scene, soft lighting, close-up shot, slight camera movement, realistic
Comments (16)
HOLYYYY THAAAAAANKS
;-) You are welcome!
When I generate videos using LoRAs, the faces always come out slightly blurry, even when I use different workflows. Why is that?
Works great for I2V. I don't even bother with T2V on LTX models. No reason to. Always better off just using Qwen or Chroma->LTX.
You could do one for cunnilingus... there aren't any, and since you've done anal, I don't think it'll be too difficult for you. If it's even possible, of course. Thanks.
That's a good idea! I will try it.
What tool do you use for training LTX2.3 LoRAs?
I am using PowerShell. ChatGPT was very helpful. Maybe there are better solutions... I will find out. What are you using for training?
Hmm, do you have a link to the app? Googling "PowerShell train lora" gives a number of different processes using PowerShell (a cross-platform task-based command-line shell).
I'm using diffusion-pipe https://github.com/tdrussell/diffusion-pipe, but it seems it doesn't support LTX2.3 for now.
@Ostap222 LTX LoRA Training – Quick Guide (Console)
1. Prepare video clips
short clips (2–4 seconds; see the ffmpeg sketch after this step if you still need to cut them)
same action
similar perspective
Example:
clip_001.mp4
clip_002.mp4
clip_003.mp4
Folder:
C:\Users\...\ltx_dataset_videos
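If you need to cut longer recordings down to 2–4 second clips, ffmpeg can do that from the same console (a sketch, assuming ffmpeg is installed and on your PATH; the file name and timestamps are just examples):
# Hypothetical example: take a 3-second clip starting at 00:00:05 from a longer recording
ffmpeg -i recording.mp4 -ss 00:00:05 -t 3 -c copy clip_001.mp4
Repeat with different start times to build up the dataset.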
2. Generate JSON file
Open PowerShell and run:
cd C:\Users\...\ltx_dataset_videos
Then:
# List all .mp4 files, build one JSON line per clip, write everything to dataset.jsonl
Get-ChildItem -Filter *.mp4 | Sort-Object Name | ForEach-Object {
    '{"file": "' + $_.Name + '", "text": "triggerword, description of action, indoor scene, soft lighting, realistic"}'
} | Set-Content dataset.jsonl
Afterwards, you can manually refine the dataset.jsonl.
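Each line pairs one clip with its caption. Using the trigger word and prompt of this LoRA as an example, a refined dataset.jsonl could look like this (the per-clip variations are illustrative):
{"file": "clip_001.mp4", "text": "666asslick, two people, person on the left is licking the butthole of another person, indoor scene, soft lighting, close-up shot, slight camera movement, realistic"}
{"file": "clip_002.mp4", "text": "666asslick, two people, person on the left is licking the butthole of another person, indoor scene, soft lighting, medium shot, static camera, realistic"}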
3. Preprocess dataset
cd C:\Users\...\LTX-2
Then:
python packages/ltx-trainer/scripts/process_dataset.py C:\Users\...\ltx_dataset_videos\dataset.jsonl `
    --resolution-buckets 768x768x17 `
    --model-path ... `
    --text-encoder-path ... `
    --output-dir C:\Users\...\ltx_dataset_preprocessed
This will generate:
video latents
text embeddings
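For illustration, the same command with hypothetical paths filled in (the model and text-encoder locations below are assumptions; point them at wherever your LTX-2 checkpoints actually live):
# All paths below are hypothetical, adjust to your setup
python packages/ltx-trainer/scripts/process_dataset.py C:\Users\me\ltx_dataset_videos\dataset.jsonl `
    --resolution-buckets 768x768x17 `
    --model-path C:\models\ltx-2\ltx-2.safetensors `
    --text-encoder-path C:\models\ltx-2\text_encoder `
    --output-dir C:\Users\me\ltx_dataset_preprocessed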
4. Adjust training config
In the YAML file:
preprocessed_data_root: "C:\\Users\\...\\ltx_dataset_preprocessed"
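Note the doubled backslashes: inside a double-quoted YAML string, \ is an escape character, so Windows paths need \\. A single-quoted YAML string takes backslashes literally instead (a sketch of the equivalent line, assuming the trainer accepts either form):
preprocessed_data_root: 'C:\Users\...\ltx_dataset_preprocessed'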
5. Start training
python -m accelerate.commands.launch packages/ltx-trainer/scripts/train.py configs/ltx2_av_lora.yaml
LoRA training will now begin.
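Training takes nearly all of the 32 GB on an RTX 5090 (see the comments below), so it can be worth watching GPU memory from a second PowerShell window while it runs (assuming the NVIDIA driver's nvidia-smi tool is on your PATH):
# Refresh GPU utilization and memory stats every 5 seconds
nvidia-smi -l 5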
6. Result
After training, the LoRA can be found here:
outputs/
checkpoints/
lora_weights_step_XXXX.safetensors
You can load this file directly in ComfyUI.
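To make it appear in ComfyUI's LoRA loader, copy it into the loras model folder (the ComfyUI install path below is a hypothetical example, and XXXX depends on your run):
# Hypothetical ComfyUI location, adjust to your install
Copy-Item outputs\checkpoints\lora_weights_step_XXXX.safetensors C:\ComfyUI\models\loras\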
In short
Workflow:
Video clips
→ dataset.jsonl
→ preprocess
→ training
→ LoRA
This is the complete console-based workflow.
@666m4ck1 ok:) it's probably from here https://huggingface.co/spaces/Lightricks/ltx-2/blob/main/packages/ltx-trainer/AGENTS.md
@666m4ck1 If you don't mind, how much VRAM do you use while training LoRAs for LTX2.3?
@meryruizk332 I'm using an RTX 5090 with 32 GB of VRAM. The training process took almost all of it.
@666m4ck1 Thank you so much for the reply and information