🎬 Overview
MotionForge is an advanced ComfyUI workflow that combines multiple cutting-edge technologies to create high-quality image-to-video animations. This pipeline leverages the power of WAN2.2 models with Lightning-fast 4-step sampling and a sophisticated 5B refiner for exceptional video generation.
✨ Key Features
Multi-Stage Video Generation
A14B Base Generation: High-quality initial video creation using WAN2.2-I2V models
LightX2V 4-Step Acceleration: Lightning-fast sampling for efficient processing
5B Refiner Upscale: Advanced refinement and upscaling for superior quality
Frame Interpolation: RIFE VFI for smooth 32fps output
Technical Excellence
Dual Noise Handling: Separate high-noise and low-noise processing paths
GGUF Model Support: Efficient model loading with quantization
Advanced Sampling: UniPC sampler with beta57 scheduling
Multi-Resolution Output: 16fps and 32fps video options
🚀 Workflow Architecture
Stage 1: Model Preparation
GGUF Model Loading: WAN2.2-I2V A14B models in Q8_0 quantization
CLIP Text Encoding: Advanced prompt handling with umt5-xxl encoder
VAE Configuration: Wan2.1 VAE for optimal latent space processing
Stage 2: Core Video Generation
WanImageToVideo Node: Primary image-to-video conversion
Dual KSamplerAdvanced Setup: 4-step sampling pipeline
Lightning LoRA Integration: Fast inference with quality preservation
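The dual KSamplerAdvanced setup can be pictured as a step handoff between the two A14B experts. A minimal sketch, assuming the common WAN2.2 pattern of splitting the 4-step schedule evenly between the high-noise and low-noise models (the actual switch point in the workflow may differ):

```python
# Hypothetical sketch of how a dual KSamplerAdvanced pair splits a 4-step
# schedule between the high-noise and low-noise WAN2.2 experts.
# The 50/50 switch point is an assumption, not taken from the workflow itself.

def split_steps(total_steps: int, switch_at: int) -> tuple[range, range]:
    """Return the step ranges handled by the high- and low-noise models."""
    high = range(0, switch_at)           # high-noise expert: early, noisy steps
    low = range(switch_at, total_steps)  # low-noise expert: final denoising steps
    return high, low

high_steps, low_steps = split_steps(total_steps=4, switch_at=2)
print(list(high_steps), list(low_steps))  # [0, 1] [2, 3]
```

In ComfyUI terms, this corresponds to setting `start_at_step`/`end_at_step` on the two KSamplerAdvanced nodes so their ranges tile the full schedule without overlap.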
Stage 3: Refinement & Enhancement
5B Model Upscaling: Quality enhancement with Wan2.2-Fun-5B
RealESRGAN Upscaling: 2x resolution improvement
RIFE Frame Interpolation: Smooth motion from 16fps to 32fps
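The 16fps-to-32fps step is simple frame arithmetic: RIFE synthesizes one in-between frame for each pair of consecutive frames. A minimal sketch of that count, assuming interior-only interpolation (no frame is generated after the last one; exact behavior depends on the VFI node's settings):

```python
# Arithmetic sketch: RIFE-style VFI inserts (multiplier - 1) new frames into
# each gap between consecutive frames, roughly doubling the frame rate at 2x.
# Interior-only interpolation is an assumption about the node's behavior.

def interpolated_frame_count(frames: int, multiplier: int = 2) -> int:
    """Frames after VFI, counting only in-between frames (no trailing frame)."""
    return frames + (frames - 1) * (multiplier - 1)

# Example with a hypothetical 81-frame clip (not a value from this workflow):
print(interpolated_frame_count(81))  # 161
```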
🎯 Optimal Use Cases
Perfect For:
Character animation from still images
Short film and cinematic content creation
Social media video content
Experimental AI art videos
Motion transfer applications
Input Requirements:
Start Image: 560x560 resolution (automatically resized)
Positive Prompt: Descriptive motion and scene instructions
Negative Prompt: Comprehensive quality control prompts
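Since the workflow resizes arbitrary start images to a 560x560 square, non-square inputs are effectively center-cropped or letterboxed. A minimal sketch of the center-crop geometry, as one plausible way such a resize is done (the workflow's actual resize node may scale-and-pad instead):

```python
# Hypothetical center-crop-to-square calculation for a start image that will
# be resized to 560x560. This is illustrative geometry, not the workflow's
# actual resize node.

def center_crop_box(width: int, height: int) -> tuple[int, int, int, int]:
    """Return (left, top, right, bottom) of the largest centered square."""
    side = min(width, height)
    left = (width - side) // 2
    top = (height - side) // 2
    return left, top, left + side, top + side

# A 1920x1080 input would be cropped to a centered 1080x1080 square,
# then scaled down to 560x560:
print(center_crop_box(1920, 1080))  # (420, 0, 1500, 1080)
```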
⚙️ Technical Specifications
Performance Settings
Sampling Steps: 4 steps (Lightning fast)
Refinement Steps: 8 steps (Quality focus)
Frame Rates: 16fps base, 32fps interpolated
Output Resolution: Upscaled 2x from original
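The numbers above can be sanity-checked with basic arithmetic: a 560x560 start image upscaled 2x yields 1120x1120, and a fixed frame count plays for half the time at 32fps as at 16fps. A quick sketch (the 81-frame count below is a placeholder assumption, not a value from the workflow):

```python
# Back-of-the-envelope check of the quoted specs: 2x upscale of 560x560,
# and clip duration at the base vs. interpolated frame rates.

def upscaled_resolution(w: int, h: int, factor: int = 2) -> tuple[int, int]:
    """Output resolution after a uniform spatial upscale."""
    return w * factor, h * factor

def duration_seconds(frames: int, fps: int) -> float:
    """Playback length of a clip with a fixed frame count."""
    return frames / fps

print(upscaled_resolution(560, 560))  # (1120, 1120)
print(duration_seconds(81, 16))       # 5.0625 seconds at the base rate
```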
Model Configuration
Primary Models:
- Wan2.2-I2V-A14B-HighNoise-Q8_0.gguf
- Wan2.2-I2V-A14B-LowNoise-Q8_0.gguf
- Wan2.2-Fun-5B-InP-Q8_0.gguf (Refiner)

LoRA Enhancements:
- LightX2V 4-step acceleration
- Style and quality optimization

🛠️ Installation & Setup
Required Custom Nodes
ComfyUI-VideoHelperSuite: Video processing and combining
ComfyUI-Frame-Interpolation: RIFE VFI for smooth motion
ComfyUI-Easy-Use: Utility nodes and GPU management
GGUF Loaders: For quantized model support
Model Requirements
Download all specified GGUF models to appropriate directories
Ensure VAE and CLIP models are properly configured
LoRA files should be placed in the wan_loras directory
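Before the first run, it can save a failed queue to verify that the GGUF files actually landed in the folders ComfyUI expects. A minimal pre-flight sketch, assuming the conventional layout (diffusion models under `models/diffusion_models`, as confirmed for the Fun-5B refiner in the comments below; adjust paths to your installation):

```python
# Hypothetical pre-flight check that the three GGUF models are in place.
# The models/diffusion_models location is the conventional ComfyUI layout;
# your install root and folders may differ.
import os

EXPECTED = {
    "Wan2.2-I2V-A14B-HighNoise-Q8_0.gguf": "models/diffusion_models",
    "Wan2.2-I2V-A14B-LowNoise-Q8_0.gguf": "models/diffusion_models",
    "Wan2.2-Fun-5B-InP-Q8_0.gguf": "models/diffusion_models",
}

def missing_models(comfy_root: str) -> list[str]:
    """Return the model files not found under their expected directories."""
    return [
        name for name, folder in EXPECTED.items()
        if not os.path.isfile(os.path.join(comfy_root, folder, name))
    ]

# Prints the names of any models still missing under the given root:
print(missing_models("/path/to/ComfyUI"))
```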
💡 Usage Tips
Optimal Results:
Start with high-quality source images (560x560 recommended)
Use descriptive motion prompts for better animation control
Experiment with denoise settings (0.2 default works well)
Consider output purpose when choosing 16fps vs 32fps
Performance Optimization:
Utilizes GPU memory management nodes
Automatic cache clearing between stages
Efficient model loading and swapping
🎨 Creative Applications
This workflow excels at:
Character Animation: Bringing still characters to life
Style Transfer: Applying motion to various art styles
Experimental Art: Creating unique AI-generated videos
Content Creation: Producing engaging social media content
📊 Quality Output
Expected Results:
Smooth, coherent motion sequences
High-resolution video output (1120x1120 after upscale)
Temporal consistency across frames
Minimal artifacts and flickering
Experience the next generation of AI video generation with MotionForge – where speed meets quality in perfect harmony.
Comments (15)
where can I find Wan2.2-Fun-5B-InP-Q8_0.gguf?
Also, which folder do I place it in?
edit:
https://huggingface.co/QuantStack/Wan2.2-Fun-5B-InP-GGUF
Place it in models/diffusion_models
@zardozai Thank you!
Very cool workflow, thank you so much for sharing! I was wondering where I might find the "uni_pc" LoRA that you have plugged into the 5B latent upscaler part of your workflow. Thank you in advance!
uni_pc is a sampler and beta57 is a scheduler, not LoRAs you need to install.
Thanks for the help! I am loving what you shared with the community, thank you so much! For anyone else out there: I have found that the MotionForge workflow here really works wonders with a 3-sampler setup. The first high-noise LoRA can have the lightx2v setup with a strength of 3 to 5.5, depending on how much motion you want. Then chain that into a Wan MoeK sampler with seko high and low noise. Really gives you a nice combination of speed and quality!
I cannot seem to find the Latent Upscale node.
I updated ComfyUI core and updated everything, but I guess I might need to roll back to an earlier version of ComfyUI. It's saying that this node (1d52b5cc-d402-40ad-9300-9c257f5685df) cannot be found.
I took the time to learn what subgraph nodes are. Update, restart, reinstall nodes, etc. Looks like I am about to run it. Will check back soon.
What's the difference with your other WF: https://civitai.com/models/1957469/motionforge-wan22-fun-a14b-i2v-lightx2v-4step-reward-loras-5b-refiner-32fps?modelVersionId=2250899?
We use Wan2.2 Fun Inp 14B and Fun Rewards Loras.
When should I enable these nodes that came disabled by default?
https://i.imgur.com/sBgVdPv.png
And what do they do, do they add more motion?
And why use the Lightning 2.1 LoRA instead of 2.2? Wouldn't the latter be better?
Amazing workflow! Thank you for your time and effort!
I'm not having any luck with this workflow! Anyone have a video I can watch? There is literally nothing that tells me what I need to do in this workflow.
May I know how to load a character LoRA in the workflow? Thanks.
I'd appreciate it if you could make a YouTube tutorial teaching how to use this workflow, where to download all the related models, and where to place them. Thanks a lot!