My ComfyUI APP workflows
These are the workflows I use to create my art.
They are optimized for my checkpoints and built with my latest knowledge to improve the results.
"If this workflow leveled up your day, I'd purr-eciate a like!"
Versions & Information
Please read below and the file descriptions ("About this version") for more info.
Some WAN 2.2 versions use high+low checkpoints; others, like LTX23, use only a single checkpoint. Make sure to read the descriptions and use the correct checkpoints.
This is compatible with ComfyUI APP-Mode and Nodes 2.0.
What you get from these workflows:
- Easy controls
- As few dependencies as possible
- Detailed documentation
- Highly automated logic
- Optimized results
- Fully automated resolution logic
- Bookmark shortcuts on the number keys
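As a rough illustration of what automated resolution logic like this typically does, here is a minimal sketch. It is an assumption, not the workflows' actual node logic: the pixel budget of 832×480, the multiple-of-16 snapping, and the function name `fit_resolution` are all illustrative.

```python
import math

def fit_resolution(src_w, src_h, pixel_budget=832 * 480, multiple=16):
    """Scale (src_w, src_h) toward pixel_budget total pixels, keeping the
    aspect ratio and snapping both sides to a multiple (video models often
    require dimensions divisible by 8 or 16)."""
    scale = math.sqrt(pixel_budget / (src_w * src_h))
    w = max(multiple, round(src_w * scale / multiple) * multiple)
    h = max(multiple, round(src_h * scale / multiple) * multiple)
    return w, h

print(fit_resolution(1920, 1080))  # a 16:9 source lands near 848x480
```

The point of the snapping step is that diffusion video models usually reject dimensions that are not multiples of their latent block size, so the fit is always approximate.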
Types of workflows
DeepDream C-WLTX
- I2V (WAN22)
- FLF2V (WAN22)
- I2V2V (Image-to-Video-to-Video | WAN22 + LTX23) + Audio
- Automatic aspect-ratio calculation and fitting
- Multiple upscalers:
  - Torchlanc (very fast, color correct, low VRAM)
  - Upscale with Model (additional detail, high quality)
  - RTX Super Resolution (ultra fast, very accurate)
- Video resolution matching: fully automatic scaling and resolution calculations
- Length automation: fully automatic frame-count calculation
- Add audio through LTX23 (V2V)
- Watermark option
- Soundmark option
- Color match feature
- MiniMeme feature: create small GIFs
- NAG: negative prompting at CFG 1
- Double-FPS (latent) feature
- Interpolation feature
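The length automation above can be sketched in a few lines. This is a hedged illustration, not the workflows' actual calculation: WAN-family models commonly expect frame counts of the form 4n + 1 (e.g. 81 frames), and the hypothetical helper below simply rounds seconds × fps to the nearest valid count.

```python
def wan_frame_count(seconds, fps=16):
    """WAN-family video models typically expect frame counts of the form
    4n + 1 (e.g. 81). Round seconds * fps to the nearest valid count."""
    raw = round(seconds * fps)
    n = max(0, round((raw - 1) / 4))
    return 4 * n + 1

print(wan_frame_count(5, 16))  # 81 frames for a 5-second clip at 16 fps
```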

Known issues and advice
- Some workflows may be set to WebP/AV1 encoding (VHS node). If your computer/setup is missing the required drivers, switch to another codec such as H.265 or H.264!
- Install ffmpeg!
- Update ComfyUI and custom_nodes!
- Update PyTorch to 2.9+cu128 or higher.
- Make sure to read inside the workflow where files/models should be placed.
- Check that the file paths for model/clip/vae match your system (Linux/Windows).
- The plugin ComfyUI-DD-Translation can break node connections (avoid it).
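The PyTorch version advice above can be checked from Python. A minimal sketch, assuming a `torch.__version__`-style string such as `2.9.0+cu128`; the helper name and the comparison threshold are illustrative, not part of the workflows:

```python
def meets_minimum(version, minimum=(2, 9)):
    """Check a 'X.Y.Z+cuNNN'-style version string (like torch.__version__)
    against a (major, minor) minimum."""
    core = version.split("+", 1)[0]   # drop the local build tag, e.g. '+cu128'
    major, minor = (int(p) for p in core.split(".")[:2])
    return (major, minor) >= minimum

# e.g. with torch installed: meets_minimum(torch.__version__)
print(meets_minimum("2.9.0+cu128"))  # True
print(meets_minimum("2.4.1+cu121"))  # False
```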
All older versions are available in my GitHub repo.
YOU are responsible for your outputs, as always! If you create ToS-violating content and I become aware of it, I WILL report it.
Description
Requirements
[ComfyUI-VideoHelperSuite](https://github.com/Kosinkadink/ComfyUI-VideoHelperSuite)
[rgthree-comfy](https://github.com/rgthree/rgthree-comfy)
[Comfyui-WhiteRabbit](https://github.com/Artificial-Sweetener/comfyui-WhiteRabbit)
[ComfyUI-LTXVideo](https://github.com/Lightricks/ComfyUI-LTXVideo)
[ComfyUI-KJNodes](https://github.com/kijai/ComfyUI-KJNodes)
[ComfyUI-DaSiWa-Nodes](https://github.com/darksidewalker/ComfyUI-DaSiWa-Nodes)
[ffmpeg](https://www.ffmpeg.org/)
[Sage Attention](https://github.com/thu-ml/SageAttention) (optional)
[Upscaler model](https://openmodeldb.info/) (optional)
[Nvidia_RTX_Nodes_ComfyUI](https://github.com/Comfy-Org/Nvidia_RTX_Nodes_ComfyUI)
Recent changes
Added LoRA support
Added NAG
Enhanced V2V logic
Enhanced processing
FAQ
Comments (12)
Great simple workflow, but the option for LoRAs is missing.
I know. It's my first attempt at making an APP. It will be enhanced over time.
@darksidewalker keep up the good work; just to let you know, what you are doing is really appreciated.
LoRAs are added.
Hello! I really love your work; thank you for sharing so many great things.
I'm trying this workflow, but I'm missing two nodes:
- FrameInterpolate
- FrameInterpolationModelLoader
I wasn't able to find them anywhere.
Could you please guide me on how to install these nodes? Thank you.
They are ComfyUI-native nodes from the latest ComfyUI 0.20.x. If you are missing them, you may need to update or reinstall ComfyUI.
@darksidewalker is the portable version an option? I can't seem to get any updates working on that.
@Ohrios You just need the latest ComfyUI 0.20.x. My installer works; any other installation should do as long as it is 0.20+.
Is the goal here to let Wan2.2 drive the video generation then V2V the audio with LTX?
For the moment that is the goal; besides that, this is APP-Mode and Nodes 2.0.
Before this, I only used Wan2.2 and didn't really dig into the audio topic. If I want to do a V2V pass to add sound to a Wan2.2 video, do I need to generate the Wan2.2 video with a moving mouth? Or will V2V remake the video and add lip syncing if it wasn't there?
It will add lip sync.