Dual Light LoRA setup, 4X faster.
Who it's for: creators who want this pipeline in ComfyUI without assembling nodes from scratch. Not for: one-click results with zero tuning — you still choose inputs, prompts, and settings.
Open preloaded workflow on RunComfy
Why RunComfy first
- Fewer missing-node surprises — run the graph in a managed environment before you mirror it locally.
- Quick GPU tryout — useful if your local VRAM or install time is the bottleneck.
- Matches the published JSON — the zip follows the same runnable workflow you can open on RunComfy.
When downloading for local ComfyUI makes sense — you want full control over models on disk, batch scripting, or offline runs.
How to use (local ComfyUI)
1. Load inputs (images/video/audio) in the marked loader nodes.
2. Set prompts, resolution, and seeds; start with a short test run.
3. Export from the Save / Write nodes shown in the graph.
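Once the graph runs in the UI, the same local steps can be scripted against ComfyUI's HTTP API, which accepts a workflow exported with "Save (API Format)" at its `/prompt` endpoint. The sketch below is a minimal illustration, not part of the published workflow: the node id, field name, and file path are placeholder assumptions you would replace with the ids from your own export.

```python
import json
import urllib.request

COMFYUI_URL = "http://127.0.0.1:8188"  # default local ComfyUI address


def load_workflow(path):
    """Load a workflow exported via ComfyUI's 'Save (API Format)' option."""
    with open(path, "r", encoding="utf-8") as f:
        return json.load(f)


def set_input(workflow, node_id, field, value):
    """Patch one input field on a node, e.g. the positive prompt text."""
    workflow[node_id]["inputs"][field] = value
    return workflow


def queue_prompt(workflow):
    """Submit the patched workflow to the local ComfyUI server."""
    payload = json.dumps({"prompt": workflow}).encode("utf-8")
    req = urllib.request.Request(
        f"{COMFYUI_URL}/prompt",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

Typical use: `wf = load_workflow("wan22_lightx2v_api.json")`, patch the prompt and seed with `set_input`, then `queue_prompt(wf)` while the local server is running.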
Expectations — First run may pull large weights; cloud runs may require a free RunComfy account.
Overview
This ComfyUI workflow pairs the Wan 2.2 14B model with the Lightx2v V2 LoRA for fast image-to-video and text-to-video generation. The distilled LoRA lets you turn static images into dynamic videos, or generate video directly from text prompts, in just 8-14 sampling steps, dramatically cutting generation time while maintaining output quality. Three tuned configurations are included: 8 steps (maximum speed), 12 steps (balanced), and 14 steps (optimal quality), making the workflow well suited to rapid prototyping and iterative video creation.
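The three step presets above can be captured as a small lookup you apply to the sampler node before queuing a run. Only the step counts come from this page; the preset names and the idea of patching a KSampler field are illustrative assumptions.

```python
# Step counts are from the workflow description; preset names are
# illustrative labels, not names used in the graph itself.
PRESETS = {
    "speed":    {"steps": 8},   # maximum speed
    "balanced": {"steps": 12},  # balanced speed/quality
    "quality":  {"steps": 14},  # optimal quality
}


def preset_steps(name):
    """Return the step count to set on the sampler node for a preset."""
    return PRESETS[name]["steps"]
```

Starting with the 8-step preset for test runs, then re-rendering a chosen seed at 14 steps, keeps iteration cheap.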
Important nodes:
- EmptyHunyuanLatentVideo
- CLIP Text Encode (Positive Prompt)
- LoadImage
Notes
Wan 2.2 + Lightx2v V2 ComfyUI Workflow | Fast Image & Text to Video — see RunComfy page for the latest node requirements.
Description
Initial release — Wan22-Lightx2v-V2.
