A collection of workflows for video generation models.
WAN 2.2 V2V (fix)
based on https://github.com/kijai/ComfyUI-FramePackWrapper
Because the nodes have changed, use this version.
WAN_Native_Phantom_Lora
based on @vrgamedevgirl
That author's workflow is more complete.
FramePack F1 timesampled
based on https://github.com/kijai/ComfyUI-FramePackWrapper . Since kijai has not merged the relevant pull requests, to run this workflow you need to copy the .py file from the downloaded zip into [yourfile/ComfyUI/custom_nodes/ComfyUI-FramePackWrapper] to replace the existing file.
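The file-replacement step above can be sketched as follows. The sketch is sandboxed with temp directories, and the file name `nodes.py` is an assumption; in practice the source is the .py file extracted from the downloaded zip and the destination is your own ComfyUI/custom_nodes/ComfyUI-FramePackWrapper directory.

```shell
# Sandboxed sketch of overwriting the wrapper's .py file.
# src stands in for the unzipped download; dst stands in for the
# installed ComfyUI-FramePackWrapper directory. "nodes.py" is a
# hypothetical file name -- use whatever .py file the zip contains.
src=$(mktemp -d)
dst=$(mktemp -d)
echo "patched"  > "$src/nodes.py"        # file from the downloaded zip
echo "original" > "$dst/nodes.py"        # file currently installed
cp -f "$src/nodes.py" "$dst/nodes.py"    # overwrite the installed copy
cat "$dst/nodes.py"                      # prints: patched
```

After the real replacement, restart ComfyUI so the updated node file is reloaded.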
VACE_POSE V 1.0
Includes a BiyAir node, which can be replaced with Florence2 or JoyCaption.
Phantom workflow
Very important: in the ComfyUI-WanVideoWrapper node directory, run "git switch dev" to switch to the dev branch; otherwise the workflow will not work.
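The branch switch above is just two commands. The sketch below demonstrates them against a throwaway repo so it is self-contained; in a real install you would run the `git switch dev` line inside your ComfyUI/custom_nodes/ComfyUI-WanVideoWrapper checkout (which already has a dev branch).

```shell
# Sandboxed demo of the required branch switch.
# In a real install: cd ComfyUI/custom_nodes/ComfyUI-WanVideoWrapper && git switch dev
repo=$(mktemp -d)
git -C "$repo" init -q
git -C "$repo" -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "init"
git -C "$repo" branch dev                # stand-in for the wrapper's dev branch
git -C "$repo" switch -q dev             # the command the workflow requires
git -C "$repo" branch --show-current     # prints: dev
```

`git switch` needs Git 2.23 or later; on older Git, `git checkout dev` does the same thing.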
Comments (4)
You should specify the required VRAM. Most of these workflows use the larger model, which won't even fit in 16 GB of VRAM.
I'll try it later; it seems useful.
Works fine on a 4090 24 GB (~5 min); on a 3060 12 GB, ~5 sec takes more than 6 hr :)
If I change the seed to random, the videos lose outfit elements or face details, so is seed = 44 important?
torch.OutOfMemoryError: Allocation on device
Got an OOM, unloading all loaded models.
Prompt executed in 237.89 seconds
Ran out of VRAM.... 16 GB 4060 Ti with 128 GB system RAM. Is there a solution?
