✨ One-click Pod available on: ✨
🟣 RunPod ComfyUI 0.19.0 CUDA 12.8 for 5090
🟣 RunPod ComfyUI 0.19.0 CUDA 12.4 for 4090
🟡 VastAI ComfyUI 0.19.0 CUDA 13.0 for 5090
🟡 VastAI ComfyUI 0.19.0 CUDA 13.0 for 4090
Just click the link and choose a video card; the template will install everything you need, plus all my workflows.
Wan 2.2 models are not included; you can install them with Civicomfy or ComfyUI-HuggingFace directly inside ComfyUI.
OneClick-I2V-Story on RunPod Basic Tutorial
https://limewire.com/d/2TAty#No5eoN7WhU
☕️ buymeacoffee
IMPORTANT:
If you install the RES4LYF node, it will break the MoEKSampler; to sample with RES4LYF you have to use the KSampler included in that node.
🔥 02/28/26 UPDATE 🔥
✨ NEW NODES ✨
EASY MODEL DOWNLOAD FROM HUGGINGFACE
cd ComfyUI/custom_nodes
git clone https://github.com/huchukato/ComfyUI-HuggingFace.git
RIFE INTERPOLATION WITH TENSORRT
with Auto Install (CUDA 12/CUDA 13) and Auto Model Download
cd ComfyUI/custom_nodes
git clone https://github.com/huchukato/ComfyUI-RIFE-TensorRT-Auto.git
UPSCALER WITH TENSORRT
with Auto Install (CUDA 12/CUDA 13) and Auto Model Download
cd ComfyUI/custom_nodes
git clone https://github.com/huchukato/ComfyUI-Upscaler-TensorRT-Auto.git
✨ MY QWEN3-VL NODE IS FINALLY IN THE MANAGER ✨
Just search for "QwenVL-Mod" to install it

or download it from GitHub: QwenVL-Mod: Enhanced Vision-Language
ComfyUI-QwenVL-Mod - Enhanced Vision-Language with WAN 2.2
Version 2.2.4 (2026/03/13) - 🎬 Critical I2V Timeline Fixes & NSFW Presets Optimization
What is ComfyUI-QwenVL-Mod?
A powerful vision-language node for ComfyUI that combines Qwen3-VL models with professional WAN 2.2 video-generation workflows. It features multilingual support, visual style detection, and NSFW capabilities for professional AI content creation.
Think: "Your all-in-one solution for intelligent prompt enhancement and video generation with cutting-edge AI models!"
🎬 Key Features
WAN 2.2 Video Generation
Text-to-Video (T2V): Professional 5-second video generation
Image-to-Video (I2V): Advanced image animation with style detection
Story Generation: 20-second continuous videos with 4 narrative segments
Storyboard Workflows: Seamless storyboard-to-storyboard generation
Cinematic Video: Professional cinematography specifications
Enhanced Capabilities
Multilingual Support: Process prompts from any language (Italian, English, etc.)
Visual Style Detection: 12+ artistic styles (anime, 3D, pixel art, puppet animation, etc.)
Smart Prompt Caching: Performance optimization with Fixed Seed Mode
GGUF Backend: Efficient local model inference with quantization support
NSFW Support: Comprehensive content generation without restrictions
🔧 Intelligent Features
Auto-Prompt Enhancement: Automatically enhance user prompts for optimal generation
Professional Cinematography: Built-in specifications for lighting, camera angles, shot types
Timeline Structure: Precise 5-second timeline with frame-by-frame descriptions
Keep Last Prompt: Generate once, preserve results while changing inputs
🎯 What's New in v2.2.4 - CRITICAL I2V TIMELINE FIXES
🚨 Major I2V Timeline (20s) Fixes
✅ Style Coherence: Fixed AI changing anime→realism mid-sequence
✅ Character Stability: Fixed characters disappearing/appearing incorrectly
✅ Natural Lighting: Fixed AI adding artificial lights not in the image
✅ Timeline Structure: Numbering now restarts at 0-5 for each prompt instead of continuing (6, 7, 8...)
✅ Format Consistency: Fixed missing parentheses and unwanted labels
✅ Output Format: Each prompt starts directly with timeline markers
🔧 NSFW Presets Optimization
✅ Complete Specifications: All 8 NSFW presets now include full NSFW descriptions
✅ Emoji Display: Restored proper emoji rendering in preset names
✅ Clear Instructions: Removed confusing recommendations from presets
✅ User Guide: Token settings guide created for workflow optimization
Technical Improvements
✅ Timeline Markers: Correct (At X seconds: ...) format for all 4 prompts
✅ Character Continuity: Natural progression without forced artificial presence
✅ Lighting Rules: Logical progression instead of absolute prohibitions
✅ Style Detection: Consistent style application across all timeline segments
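To illustrate the corrected marker format, here is a hypothetical sketch (the helper name and segmentation are my own, not the node's actual code) of how a 20-second timeline can be emitted as four prompts that each restart at 0-5 seconds and carry no "Prompt N:" labels:

```python
def timeline_markers(descriptions):
    """Build four 5-second prompts, each restarting its timeline at 0 seconds.

    `descriptions` is a list of four lists of (second, text) pairs, relative
    to the start of each segment. Illustrative helper only.
    """
    prompts = []
    for segment in descriptions:
        # Each prompt starts directly with timeline markers, no labels.
        lines = [f"(At {sec} seconds: {text})" for sec, text in segment]
        prompts.append(" ".join(lines))
    return prompts

demo = timeline_markers([
    [(0, "a girl opens the door"), (3, "she steps inside")],
    [(0, "she looks around the room"), (4, "camera pans left")],
    [(0, "she sits by the window")],
    [(0, "sunlight fades"), (5, "fade to black")],
])
```

Each of the four outputs begins with "(At 0 seconds: ...)" rather than continuing the count from the previous segment.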
🎯 Model Recommendations
Qwen3-VL-8B: Recommended for I2V Timeline (20s) complex sequences
Qwen3-VL-4B: Sufficient for I2V Scene (5s) single prompts
Token Settings: 2048+ for 20s timeline, 1024+ for 5s prompts
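The token guidance above can be captured in a tiny helper (the function name is mine, purely illustrative):

```python
def min_tokens_for(duration_seconds):
    # Per the guide above: 2048+ tokens for the 20s timeline,
    # 1024+ for single 5s prompts. Hypothetical helper, not node code.
    return 2048 if duration_seconds >= 20 else 1024
```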
🎯 What's New in v2.2.3
CUDA 13 Compatibility: Fixed crashes caused by conflicting unload operations
Parameter Cleanup: Removed redundant unload_after_run from all nodes
Bug Fixes: Resolved "missing required positional argument" errors
Memory Management: Streamlined VRAM cleanup with VRAM Cleanup node
Documentation: Updated all README files with new memory features
Credits: Added community credits for feedback and testing
🎯 What's New in v2.2.2
Critical T2V/I2V Workflow Fixes
Batch Processing: Fixed critical T2V → GGUF issue with batch images
Frame Detection: Added automatic batch detection and individual frame processing
Video Support: Enhanced video frame processing with proper shape handling
Debug Enhanced: Comprehensive logging for batch processing troubleshooting
Same Model Reuse Fix
Conflict Resolution: Fixed crash when using same model between T2V and I2V nodes
Memory Management: Enhanced cleanup with CUDA synchronization and timing
Signature Mismatch: Resolved different signature patterns between nodes
Aggressive Cleanup: Forced complete VRAM cleanup before model reload
🔧 keep_model_loaded Enhancement
Missing Parameter: Added keep_model_loaded to PromptEnhancer node
Consistent Behavior: Both GGUF and PromptEnhancer now have identical memory management
Conditional Cleanup: Proper cleanup based on keep_model_loaded setting
User Control: Full control over memory usage vs performance
🚨 CRITICAL BUG FIXES - v2.2.4
🎬 I2V Timeline (20s) - COMPLETELY FIXED
Before v2.2.4:
❌ Anime style changed to realism mid-sequence
❌ Characters disappeared/appeared randomly
❌ AI added artificial lights not in the image
❌ Timeline numbering: 6, 7, 8... instead of restarting at 0-5
❌ Missing parentheses and unwanted "Prompt 1:" labels
After v2.2.4:
✅ Perfect Style Coherence: Anime stays anime, realism stays realism
✅ Character Stability: Same characters throughout all 4 prompts
✅ Natural Lighting: Only lights visible in the image, with logical progression
✅ Correct Timeline: Each prompt uses the 0-5 seconds format
✅ Clean Output: Proper (At X seconds: ...) format, no labels
🔥 NSFW Presets - ENHANCED & FIXED
✅ Complete Specifications: All 8 presets with full explicit descriptions
✅ Emoji Display: Proper preset icons (no more raw unicode codes)
✅ User-Friendly: Removed confusing technical recommendations
✅ Token Guide: Workflow note for optimal settings
🎯 Result: Perfect I2V Timeline generation every time!
🎬 WAN 2.2 Story Workflow - Revolutionary AI Storytelling
AI Story Generation
4-Segment Videos: Automatic 20-second videos (4 ร 5-second segments)
Narrative Continuity: Perfect story flow between segments
NSFW Support: Enhanced adult content generation
Timeline-Free: Natural storytelling without time markers
Smart Auto-Split
Story Split Node: Intelligent prompt separation technology
Auto-Detection: Handles any separator format automatically
4-Output Guarantee: Always produces exactly 4 prompts
Debug Mode: Built-in troubleshooting information
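As a sketch of what such an auto-split can do (this is not the Story Split node's actual implementation; the function, separator patterns, and fallback logic are assumptions), splitting on common separators and then merging or padding to guarantee exactly four outputs might look like:

```python
import re

def story_split(text, n=4):
    """Split a story prompt into exactly n segments.

    Tries blank-line / '---' separators first, falls back to sentences,
    then merges extras or pads with empty strings so the output length
    is always n. Illustrative sketch only, not the node's code.
    """
    parts = [p.strip() for p in re.split(r"\n\s*\n|\n-{3,}\n", text) if p.strip()]
    if len(parts) < n:
        # Fall back to sentence-level splitting if too few segments.
        sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
        if len(sentences) >= n:
            parts = sentences
    if len(parts) > n:
        # Merge the overflow into the last segment.
        parts = parts[:n - 1] + [" ".join(parts[n - 1:])]
    while len(parts) < n:
        parts.append("")
    return parts
```

The point of the guarantee is that downstream nodes can always wire up exactly four prompt outputs, whatever the input formatting.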
📦 Installation
Requirements
ComfyUI: v0.13.0+
GPU: 8GB+ VRAM (16GB+ recommended)
System: Windows/Linux/Mac
Python: 3.10+ (or use provided Docker environment)
Docker/Cloud Ready
RunPod: Pre-configured templates available
VastAI: Optimized instances ready
Local: Docker support included
Quick Install
Download: ComfyUI-QwenVL-Mod (latest version)
Extract to ComfyUI/custom_nodes/ComfyUI-QwenVL-Mod
Restart ComfyUI
Load included workflows
🎮 Usage Examples
Basic Image-to-Video
Load WAN2.2-I2V-AutoPrompt.json
Upload your image
Select model (HF or GGUF)
Generate enhanced video
Basic Text-to-Video
Load WAN2.2-T2V-AutoPrompt.json
Input your text prompt
Select model (HF or GGUF)
Generate enhanced video
Image-to-Video with Style
Load WAN2.2-I2V-AutoPrompt.json
Upload your image
Enable style detection
Generate animated video
AI Story Generation
Load WAN2.2-I2V-AutoPrompt-Story.json
Input your story idea
Auto-split into 4 segments
Generate 20-second story video
🔧 Technical Specifications
⚡ Performance
Context: 65,536 tokens (8B models)
Memory: Optimized VRAM usage
Stability: Crash-free operation
Speed: Fast generation times
🎨 Model Support
Qwen3-VL 4B: 7 GGUF variants (2.38GB-4.28GB)
Qwen3-VL 8B: 7 GGUF variants (4.8GB-8.71GB)
HF Models: Josiefed and official variants
Quantization: Q4_K_S, Q5_K_S for VRAM efficiency
Multilingual Capabilities
Input Languages: Any language supported
Auto-Translation: Automatic translation to optimized English
Style Detection: Works with multilingual prompts
Cultural Adaptation: Context-aware prompt enhancement
🎯 Included Workflows
🍿 WAN 2.2 Presets
🍿 Wan 2.2 I2V Timeline: Image-to-video with timeline structure
🎥 Wan 2.2 I2V Scene: Image-to-video with professional specs
🍿 Wan 2.2 T2V Timeline: Text-to-video with timeline structure
🎥 Wan 2.2 T2V Scene: Text-to-video with professional specs
🎬 Wan 2.2 I2V Timeline: 20-second multi-segment continuity
Wan 2.2 I2V Scene: 20-second single scene with cinematography
🔥 Advanced Features
NSFW Enhancement: Uncensored content generation
Professional Lighting: 8 light types + 9 qualities
Camera Control: 6 shot types + 5 compositions
Color Grading: 4 tone options
🎨 Visual Style Detection
Automatically detects and enhances:
Photorealistic style - Realistic lighting, natural textures, lifelike details
Anime style - Japanese animation aesthetics with vibrant colors
Cartoon style - Bold outlines, flat colors, exaggerated expressions
3D animation style - Computer-generated 3D rendering characteristics
Fantasy style - Magical elements, ethereal lighting, imaginative atmosphere
Artistic portrait style - Painterly qualities, artistic brushwork, stylized composition
🔥 NSFW Content Support
Enhanced Generation
Explicit Content: Uncensored adult descriptions
Detailed Scenes: 8-12 sentences per segment
Natural Progression: Smooth story flow
Style Adaptation: Automatic visual style matching
Quality: Consistent characters & scenes
Professional Applications
Adult Content: Industry-standard generation
Artistic Nudity: Classical art styles
Educational: Anatomy and artistic study
Creative: Artistic expression
Why Choose ComfyUI-QwenVL-Mod?
🎬 For Content Creators
Storytelling: Create compelling narratives
Efficiency: One prompt → complete video
Quality: Professional video output
Flexibility: Any genre, any style
🔥 For NSFW Content
Explicit: Uncensored generation
Detailed: Rich scene descriptions
Continuous: Smooth story flow
Natural: Realistic progression
⚡ For Power Users
Customizable: Easy to modify
Extendable: Add more segments
Integrable: Works with existing setups
Optimized: Maximum performance
What Makes This Special?
First: Complete AI story system with vision enhancement
Smart: Intelligent prompt splitting and enhancement
Complete: End-to-end solution from text to video
Optimized: Performance-tuned for professional use
Ready: Works out-of-the-box with included workflows
🎬 Create Amazing AI Videos Today!
Transform your ideas into stunning videos with the power of Qwen3-VL vision enhancement and WAN 2.2 video generation.
Perfect for creators, artists, and professionals looking for the ultimate AI video enhancement tool!
Built with ❤️ for the ComfyUI community
🎶 All the images used to create the videos in the gallery are generated with PimpMyPony 🎶
The new version is out today, with a brand new anime style ✨
WORKFLOWS TESTED ON:
ComfyUI 0.17.1
Python 3.12.12
Pytorch 2.9.1 + CUDA 13.0
🟣 Credits
These workflows are intended to be used with the models by taek75799, as they follow the Dynamic Prompts structure you can find under these models:
WAN 2.2 Enhanced NSFW | SVI | camera prompt adherence (Lightning Edition) I2V and T2V fp8 GGUF
🟣 Other tested models:
Smooth Mix Wan 2.2 14B (I2V/T2V) by DigitalPastel | GGUF versions by Santodan
Wan2.2-Remix (T2V&I2V) by FX_FeiHou | GGUF versions by Santodan
Thanks to all the users who are commenting and helping me improve the workflows ❤️
T2V-I2V AUTOPROMPT
Experimental WF
Start with a T2V prompt and extend the generated video with I2V
This workflow requires both T2V and I2V Wan 2.2 Models
SVI I2V AUTOPROMPT
Thanks to taek75799 for his models ❤️
SVI LORAS:
LIGHTX2V LoRAs are not included in the model
FULL I2V AUTOPROMPT
Complete workflow that includes:
Long Video Generation [from 5 to 20 seconds]
Auto Prompting [Qwen3-VL]
Upscale [2xLexicaRRDBNet and TensorRT]
Frame Interpolation [30fps and 60fps for img2vid | 24fps and 50fps for MMAudio]
MMAudio [NSFW Unlocked]
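For reference, RIFE-style interpolation at an integer factor inserts new frames between each consecutive pair, which is what takes the raw output up to the 30/60fps (or 24/50fps) targets listed above. A quick sketch of the arithmetic (my own helper, not part of the workflow):

```python
def interpolated_frames(frames_in, factor):
    # Inserting (factor - 1) new frames between each consecutive pair:
    # e.g. 81 source frames at 2x -> 161 output frames.
    return (frames_in - 1) * factor + 1

def target_fps(fps_in, factor):
    # The playback rate scales by the same factor: 30fps at 2x -> 60fps.
    return fps_in * factor
```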
AUTO PROMPT
Prompt Description Box [Multilingual]: just write your idea and the LLM will do the rest, formatting it in the dynamic-prompt format used by the Wan 2.2 models
Final Prompt Preview: shows the final prompt

🎶 QWEN3-VL NODE FOR GGUF MODELS 🎶

To use the Qwen3-VL GGUF quantized models you have to install llama-cpp-python
If you are not comfortable installing llama-cpp-python manually, just use the normal version inside Full-I2V-LongVideo
STOP COMFYUI
Activate the ComfyUI Virtual Environment
In your ComfyUI root installation folder type:
on Windows:
Command Prompt:
venv\Scripts\activate.bat
or PowerShell:
venv\Scripts\Activate.ps1
on Linux:
. venv/bin/activate
If you use ComfyUI Desktop:
Click on Console and then on Terminal
⬇️ Install llama-cpp-python
pip install --upgrade --force-reinstall --no-cache-dir "llama-cpp-python @ git+https://github.com/JamePeng/llama-cpp-python.git"
Restart ComfyUI and enjoy
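After installing, you can quickly verify that llama-cpp-python is importable from the same environment ComfyUI uses (a generic Python check, nothing specific to this workflow):

```python
import importlib.util

def llama_cpp_available():
    # True if the llama_cpp module can be found in the current environment.
    return importlib.util.find_spec("llama_cpp") is not None

print("llama-cpp-python installed:", llama_cpp_available())
```

Run it with the venv activated; if it prints False, the install went into a different Python environment.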
SWITCH CLIP
By default the WF uses the GGUF node to load the quantized CLIP; if you want to switch to the NSFW CLIP model, bypass the GGUF node and connect the other CLIP loader to "Set_CLIP"


ACCELERATION:
Triton is disabled by default; you can enable it by opening the first subgraph

Inside the workflow you will find all the links to download the models you will need
That's all, hope you enjoy ^^
Description
SVI Long Video Generation with Autoprompt up to 60 seconds
Comments (24)
AILab_QwenVL_GGUF_Advanced
Failed to load model from file: D:\ComfyUI_windows_portable\ComfyUI\models\llm\GGUF\noctrex\Qwen3-VL-4B-Instruct-Abliterated-GGUF\Huihui-Qwen3-VL-4B-Instruct-abliterated-Q8_0.gguf
Everything is loaded. I checked it 10 times. All models are in place. I downloaded it again by hand and put it in the right place. It's still an error.
Update my node, I made a lot of changes tonight
Updated it. It's still the same story.
Same here. file exists at the exact path from the error message but it fails to load somehow..
@redfox4491 I changed the Qwen models in the last updates, the huihui ones are not in the node anymore (due to problems with NSFW prompt adherence)
hello, it works now but the SVI quality is very bad and ugly compared to nsfw fast move v2 without svi. It looks like ltx2, which was announced to be a revolution. On the other hand, your workflow is clean. Great technical conception. I hope there will be upgrades in SVI safetensors or in loras to improve the quality in the future. I think I will definitely stop testing svi workflows.
Thanks for the nice workflow. A question:
How do I install Huihui-Qwen3-VL-4B-Instruct-abliterated and have it shown in the dropdown list?
in custom_nodes\ComfyUI-QwenVL-Mod\custom_nodes.json, put this (not everything is necessary, but it worked for me):
{
  "hf_models": {
    "Huihui-Qwen3-VL-4B-Instruct-abliterated": {
      "repo_id": "huihui-ai/Huihui-Qwen3-VL-4B-Instruct-abliterated",
      "default": false,
      "quantized": false,
      "vram_requirement": {
        "4bit": 2,
        "8bit": 4,
        "full": 8
      }
    }
  }
}
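If an added model does not appear in the dropdown, a quick way to rule out JSON mistakes is to check that the file parses and has the expected keys (the entry and the required keys here are taken from the comment above; treat them as assumptions about the node's format):

```python
import json

# Minimal sanity check for a custom hf_models entry like the one above.
entry = json.loads("""
{
  "hf_models": {
    "Huihui-Qwen3-VL-4B-Instruct-abliterated": {
      "repo_id": "huihui-ai/Huihui-Qwen3-VL-4B-Instruct-abliterated",
      "default": false,
      "quantized": false,
      "vram_requirement": {"4bit": 2, "8bit": 4, "full": 8}
    }
  }
}
""")

for name, model in entry["hf_models"].items():
    assert "repo_id" in model, f"{name}: missing repo_id"
```

Pointing `json.load` at the actual custom_nodes.json file instead of the inline string will catch stray commas or quoting errors immediately.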
Thanks m8. But it didn't work for me :/
You find the 4B and the 8B abliterated version in the dropdown menu in my custom node
@huchukato Yes, true. But they don't do NSFW for some reason :/
@kenpachi601674 O_O all the videos you see in the gallery are autoprompted with 4B and 8B abliterated, straaaaange
Seems to be hanging after generating the prompt on the QwenVL (Advanced) stage. It has generated the final prompt preview, but is progressing no further. This is in the 15s stage.
The "modded" node just adds the custom "wan 2.2" prompt profile, the input box, and the abliterated models to the list; these are json files in the node's folder. Just steal that and put it in the Qwen-vl folder. I ended up just copying the prompt to a string node and using a concat node. Add a second string to the concat node, make that one your "prompt" node, and connect it to the node subgraph on the left side. Also steal the hf_models json as well, because that will add the abliterated models. You can also add those manually to the qwen-vl hf_models json yourself.
Does QwenVL take into account the trigger words of the LoRAs before reinterpreting the prompts?
I deleted the trigger words text box because it caused problems with prompt adherence. I will work on it in the next days; for now, if you write a trigger word it just goes, together with your custom prompt, into Qwen3 to generate the final prompt.
@huchukato Thanks for the clarification, and great workflow!
@Natsu24 Just added a LoRa Trigger Words Text Box to ALL WF ;))
Installed according to the instructions, but every time I run it, it gets stuck on the [QwenVL] node on nvidia_gpu
[QwenVL] Attention backend selected: sdpa
Fetching 14 files: 0%| | 0/14 [00:00<?, ?it/s]. Tried reinstalling the nodes and the same problem still persists.
It should be downloading the model. It's slow and takes a while, but you can check your network usage in Task Manager to make sure it's downloading properly.
Hi, I love your workflow, but QwenVL is very slow on my PC. Is there any way I can bypass it? All the things I tried just remove the prompt window.
You tried the GGUF version? BTW I will make a WF without autoprompt if you want, SVI or normal?
@huchukato Yeah, I tried both and GGUF works, but it still slows down the gen times. Would be great if you just added a "toggle" or something in both SVI and normal workflows for the QwenVL :D
@huchukato Hi @huchukato, I'm a beginner, but I really love your workflow. I'm using the FP8 model, and QwenVL sometimes causes issues on my setup (and can slow things down).
If possible, could you please make an SVI version without Qwen/autoprompt (or add a simple toggle to disable QwenVL)? That would be hugely appreciated. Thank you!