🚀 Turn a single reference image + pose video into lifelike, identity-preserving character animation with Wan 2.2 Animate V2.
▶️ Run Directly in Cloud:
https://www.runcomfy.com/comfyui-workflows/wan-2-2-animate-v2-in-comfyui-pose-driven-animation-workflow?utm_source=civitai
💡 Overview
Wan 2.2 Animate V2 is a pose-driven video generation workflow that builds on V1 with higher fidelity, smoother motion, and better temporal consistency. It combines robust pre-processing (pose, face, and subject masking) with the Wan 2.2 model family and optional LoRAs, so you can dial in style, lighting, and background handling with confidence.
Designed for creators who want fast, reliable results for character animation, dance clips, and performance-driven storytelling.
✨ Key Features
Pose-Driven Control: ViTPose + YOLO detection extract dense body keypoints from a driving video, precisely guiding every motion of the generated character.
Identity Lock: CLIP Vision encodes your reference image so facial structure, clothing, and style remain consistent across all frames.
SAM 2 Subject Masking: Automatic foreground isolation preserves backgrounds and enables clean compositing.
LoRA Fine-Tuning: Lightx2v and Wan22 Relight LoRAs improve I2V stability, shading consistency, and texture detail.
Audio Sync: Original audio from the driving video is muxed into the final export for perfect timing.
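To give an intuition for the pose-driven control above, here is a minimal numpy sketch of the retargeting idea: keypoints detected in the driving frame are normalized within the driving subject's bounding box, then rescaled to the reference character's proportions before they condition generation. All names, shapes, and coordinates here are illustrative assumptions, not the workflow's actual nodes.

```python
import numpy as np

def retarget_keypoints(driving_kps, src_bbox, dst_bbox):
    """Map (N, 2) keypoints from the driving frame's subject box
    into the reference character's box, preserving relative pose.
    Hypothetical helper for illustration only."""
    sx0, sy0, sx1, sy1 = src_bbox
    dx0, dy0, dx1, dy1 = dst_bbox
    # Normalize to [0, 1] within the driving subject's box...
    norm = (driving_kps - [sx0, sy0]) / [sx1 - sx0, sy1 - sy0]
    # ...then rescale into the target character's box.
    return norm * [dx1 - dx0, dy1 - dy0] + [dx0, dy0]

kps = np.array([[120.0, 80.0], [160.0, 200.0]])  # e.g. nose, hip
out = retarget_keypoints(kps, src_bbox=(100, 50, 200, 250),
                         dst_bbox=(0, 0, 50, 100))
print(out)  # keypoints expressed in the target character's frame
```

The real workflow feeds dense keypoint maps into the model rather than raw coordinates, but the normalization step is the same idea: motion is described relative to the subject, so it transfers cleanly to a character with different proportions.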
🚀 Getting Started
Load a driving video: Import your source clip with pose/dance motion.
Provide a reference image: Upload a clear, well-lit portrait or full-body image of your target character.
Preprocessing runs automatically: YOLO + ViTPose extract keypoints; SAM 2 builds a foreground mask.
Generate: Wan 2.2 Animate 14B synthesizes the retargeted frames, which are then decoded and muxed with the original audio into a final MP4.
Click the "Run Directly" link above to bypass local setup and test this workflow immediately in your browser.
Description
Initial release.