Optimal Settings for Best Results
To get the most out of this model, use the following settings:
Resolution: Use 768x512 for a balance between speed and quality.
Batch Size: Keep it at 1 for faster, more precise renders.
Steps: Set between 20 and 30 for smooth outputs with sufficient detail.
Sampler: Opt for DDIM or Euler a to maintain fast and realistic results.
Seed: For reproducibility, set a fixed seed like 42.
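The settings above can be collected into a single config mapping for quick reference. This is an illustrative sketch only; the key names are hypothetical (in ComfyUI these values live in node widgets, not a Python API):

```python
# Recommended settings from this guide, gathered in one place.
# Key names are hypothetical; map them onto your own pipeline
# or ComfyUI workflow widgets as appropriate.
settings = {
    "width": 768,          # 768x512 balances speed and quality
    "height": 512,
    "batch_size": 1,       # keep at 1 for faster, more precise renders
    "steps": 25,           # anywhere in the 20-30 range works well
    "sampler": "euler_a",  # DDIM or Euler a for fast, realistic results
    "seed": 42,            # fix the seed for reproducible renders
}
```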
Getting Started: Download the Model
The model is available on Hugging Face:
Download LTX-Video 2B v0.9
Sample Prompts for Testing
Here are some curated prompts designed to showcase the portrait capabilities of the model:
Prompt 1: A young boy with curly brown hair and large, curious green eyes stands at the edge of a meadow at dusk. His round face is illuminated by the soft glow of fireflies dancing around him, and his cheeks are flushed from running. He wears a simple white t-shirt with grass stains on the collar and a pair of denim overalls.
The camera captures a close-up of his face as he reaches out to cup a firefly in his small hands, his expression a mix of wonder and excitement. A faint smudge of dirt is visible on his right cheek, and his slightly chapped lips part as he lets out a quiet gasp.
In the background, blurred but softly lit, are tall wildflowers swaying in the breeze and the faint outline of a distant farmhouse. The audio features the gentle chirping of crickets and the soft rustle of leaves, enhancing the scene's magical, nostalgic quality.
Tips for Crafting Long Prompts
Be descriptive: Detail the subject's features, clothing, and environment.
Use cinematic language: Focus on lighting, camera angles, and background elements.
Highlight emotions: Describe the mood or expression to add depth to the image.
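The three tips above can be combined into a small helper that assembles a long prompt from its layers. An illustrative sketch only (build_prompt is not part of the model or any library):

```python
def build_prompt(subject: str, cinematic: str, emotion: str) -> str:
    """Assemble a long prompt from the three layers the tips describe:
    subject detail, cinematic framing, and mood. Illustrative only."""
    parts = (subject, cinematic, emotion)
    # Normalize each layer to end with exactly one period, then join.
    return " ".join(p.strip().rstrip(".") + "." for p in parts)

prompt = build_prompt(
    "A young boy with curly brown hair stands at the edge of a meadow at dusk",
    "The camera captures a close-up of his face in the soft glow of fireflies",
    "His expression is a mix of wonder and excitement",
)
```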
Performance Evaluation
In testing, this model:
Generated high-resolution, photorealistic portraits consistently.
Maintained detail in textures like hair, skin, and clothing.
Delivered results within 3-8 seconds per frame on mid-range GPUs.
Comments (11)
How do I use the image-to-vid part? It's pink and greyed out.
Right-click on the group (not a node) and select 'Set Group Nodes To Always'. Also do the same for text-to-vid, but select the 'Never' option. Switch them back when you need to.
Or you can do the same but select 'Bypass Nodes'. I forgot about that one.
@wideload Thanks, that worked, but now it's giving me red nodes as if something is wrong, and it says this:
Prompt outputs failed validation
CLIPTextEncode: Required input is missing: clip
VAEDecode: Required input is missing: vae
SamplerCustom: Required input is missing: model
LTXVImgToVideo: Required input is missing: vae
CLIPTextEncode: Required input is missing: clip
Download the CLIP model:
t5xxl_fp16.safetensors
Save the file in the following location:
C:\Users\YourUserName\ComfyUI\models\clip
Download the VAE model:
vae_diffusion_pytorch_model.safetensors
Rename the file:
Before moving the file, rename it to something descriptive, like LTX-video-vae.safetensors, so it's easier to identify later.
Move the file:
Place the renamed file in the folder:
C:\Users\YourUserName\ComfyUI\models\vae
Check that all required files are in the correct directories:
CLIP model: models/clip/t5xxl_fp16.safetensors
VAE model: models/vae/LTX-video-vae.safetensors (the renamed vae_diffusion_pytorch_model.safetensors)
Checkpoint: models/checkpoints/ltx-video-2b-v0.9.safetensors
@Cyberai99 Make sure you didn't bypass these 3 nodes when you want to work with image-to-video: load checkpoint, loading the CLIP, and Anything Everywhere3.
@Cyberai99 For red nodes you need to go into the Manager and use Install Missing Nodes.
@77ossam Thank you, this worked. I already had all the correct files, just needed to include these nodes.
Thanks for this workflow! It's very convenient to have both txt2vid and img2vid options together. I tried messing around with max and base shift as per the instructions, but the defaults of 1.5 and 0.3 give me better results.
Also, your 3-part prompt method works better than any other I've tried.
I looked into some of the models... Eh, so do you need 40+ GB of VRAM to generate video on local hardware?