Unlimited Length I2V — ComfyUI Workflow
Welcome to the Unlimited Length I2V workflow for ComfyUI, pushing the boundaries of video generation by leveraging the FramePack system to produce videos of virtually unlimited length (i.e. the frame count is no longer limited to 96, as in previous implementations of Hunyuan or even Wan).
Just a few weeks ago, this kind of output would have been impossible — now, it's a matter of a few nodes.
⚠️ This is a first working draft. Expect massive improvements soon (see below).
🚀 What It Does
This workflow uses FramePack to perform image-to-video (I2V) generation with long, coherent sequences. By combining the original FramePack I2V architecture with the modular flexibility of ComfyUI and support from native models, this setup opens new creative possibilities for animating images far beyond the usual frame count limitations.
It currently features:
experimental LoRA support!
end frame support
any input resolution accepted (the image is automatically resized to the nearest supported format; see the sketch after this list)
LLM-based image description
Teacache support
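To illustrate the resolution handling: the idea is to pick, from a fixed table of supported sizes, the one whose aspect ratio is closest to the input image, then resize to it. A minimal Python sketch; the bucket list below is purely illustrative, the wrapper ships its own table:

def nearest_bucket(width, height, buckets):
    # Pick the supported (w, h) pair whose aspect ratio is closest
    # to the input image's aspect ratio.
    target = width / height
    return min(buckets, key=lambda wh: abs(wh[0] / wh[1] - target))

buckets = [(704, 544), (544, 704), (640, 640), (448, 864), (864, 448)]  # example values only
print(nearest_bucket(720, 560, buckets))  # -> (704, 544)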
I also tried to explain each setting with a note directly on the workflow, so there's no need to keep this page open while using it!
🔧 Dependencies
To run this workflow, you need the following:
An Nvidia GPU from the RTX 30XX, 40XX, or 50XX series that supports fp16 and bf16. GTX 10XX/20XX cards are untested.
6 GB of VRAM (yes, only 6 GB! It can run on your laptop!)
🧩 Required ComfyUI Custom Node
Kijai’s FramePack Wrapper for ComfyUI
➜ https://github.com/kijai/ComfyUI-FramePackWrapper
At the time of writing, it is not available through the ComfyUI Manager's node list. You can install it through Manager > Install via git URL > https://github.com/kijai/ComfyUI-FramePackWrapper.git
FOR LORA SUPPORT: you need the "dev" branch, and to my knowledge this cannot be done from the GUI. Open PowerShell (or bash on Linux) and run:
cd ComfyUI/custom_nodes/ComfyUI-FramePackWrapper
git switch dev
git pull
📦 Model & Resource Downloads
1. Native models (text encoders, VAE, sigclip):
2. Transformer (FramePack) model:
🧠 Autodownload (recommended):
From HuggingFace: lllyasviel/FramePackI2V_HY
➜ Place in: ComfyUI/models/diffusers/lllyasviel/FramePackI2V_HY
🧠 Manual download (single safetensors files):
➜ Place in: ComfyUI/models/diffusion_models/
FramePackI2V_HY_fp8_e4m3fn.safetensors
(Optimized for low-memory GPUs, with FP8 reduced precision for better compatibility.)
FramePackI2V_HY_bf16.safetensors
(Better suited for high-memory GPUs, offering higher fidelity thanks to BF16 precision.)
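If you prefer to script the autodownload, here is a minimal sketch using the huggingface_hub Python package (an assumption on my side: it must be installed, e.g. via pip install huggingface_hub, and run from your ComfyUI root):

from huggingface_hub import snapshot_download

# Fetch the FramePack transformer repo into the folder the workflow
# expects (same location as the Autodownload option above).
snapshot_download(
    repo_id="lllyasviel/FramePackI2V_HY",
    local_dir="ComfyUI/models/diffusers/lllyasviel/FramePackI2V_HY",
)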
☕ Optional Feature: Teacache
Teacache is a smart caching system for diffusion models that stores intermediate computation states. This drastically speeds up generation, especially during iterative tweaking or when generating multiple video segments with similar inputs.
The workflow includes a switch to enable or disable Teacache, depending on your memory availability and whether you're prioritizing speed or full fresh runs.
Teacache boost: Up to 2x speed improvement on repeat runs
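One way to picture the idea (an illustrative Python sketch, not the wrapper's actual code): compare the model input between consecutive diffusion steps and reuse the stored state when the change is small, skipping the expensive forward pass.

import torch

def should_reuse_cache(prev_input, cur_input, threshold=0.15):
    # prev_input / cur_input: tensors fed to the model on consecutive steps.
    # Relative L1 change between them; below the threshold, the cached
    # output from the previous step is reused instead of recomputing.
    rel_change = (cur_input - prev_input).abs().mean() / prev_input.abs().mean()
    return rel_change.item() < threshold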
Update info
If you come from v0.1 or v0.2 of my workflow, you need to update kijai/ComfyUI-FramePackWrapper to the dev branch.
Go to ComfyUI/custom_nodes/ComfyUI-FramePackWrapper
In PowerShell / bash, type:
git switch dev
git pull
You will of course need git for this.
LoRA support is highly experimental at this point. Only HunyuanVideo LoRAs can be used, and the effect is quite... random. The likely explanation is that these LoRAs were trained on very short videos (due to the original limitations), which impacts high-frame-count videos like the ones FramePack generates. I'll try to improve this in the future (it is not a limitation of the workflow, though, but of the original FramePack implementation).
⚡ Benchmark Results
Tested on my "old" RTX 3090:
Resolution: 704x544
Length: 150 frames
Generation time: 11 minutes
Another test:
384x448, 600 frames generated in 15 minutes.
That works out to roughly 4.4 seconds/frame for the first run and 1.5 seconds/frame for the second. The original project claims that an RTX 4090 desktop generates at 2.5 seconds/frame (unoptimized) or 1.5 seconds/frame (with Teacache).
🧪 Current Status
This release is a second draft. It is mostly working and "straight to the point".
This is also my VERY FIRST WORKFLOW CONTRIBUTION on Civit.ai! Please be gentle in your comments.
Next steps are:
Upscaling (coming very soon)
Other ways to improve quality
📎 Original Project Attribution
FramePack was originally developed by lllyasviel. This workflow wraps it in ComfyUI thanks to Kijai's work, with additional optimizations and user-friendly features.
🧠 Credits
@lllyasviel for the original FramePack architecture
@Kijai for the ComfyUI node wrapper
Comfy-Org for the models and pipeline integration
Everyone in the ComfyUI community for testing and feedback
The default settings are based on my RTX 3090 (24 GB of VRAM). If you have less and run into memory issues, first switch the FramePack model to the fp8 version; if that is not enough, try lowering the VAE batch parameters.
Please post the videos you make with my workflow here, I really want to see what you are doing with it!
Description
Bugfixes:
FramePack model loader more visible (it was a collapsed node before)
Improvements:
Add End frame support
Handle all resolutions
Better negative prompt
Add "LLM image to prompt" feature (with QwenVL)
Use VHS VAE Encode Batched to lower memory usage at the start/end of the process (see the sketch after this list)
Add Torch Compile settings (disabled by default, see "other advanced tunables")
More notes!
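For context on the batched VAE encode change above: the idea is to encode the frame stack in chunks instead of all at once, which caps peak VRAM. A conceptual Python sketch; the function names here are hypothetical, not the VHS node's actual API:

import torch

def encode_in_batches(encode_fn, frames, batch_size=16):
    # encode_fn: any callable mapping a (B, C, H, W) frame tensor to latents.
    # Encoding chunk by chunk keeps peak memory proportional to batch_size
    # rather than to the total number of frames.
    chunks = [encode_fn(frames[i:i + batch_size])
              for i in range(0, frames.shape[0], batch_size)]
    return torch.cat(chunks, dim=0)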
Comments
Thank you, just what I was waiting for, hope you can sort out LoRAs soon!
It already works with the original FramePack implementation, but it is not implemented in Kijai's nodes yet at all.
@Ez4M Hi, thank you. How do I use a LoRA in the original implementation, is there a way to load one?
Hi, thank you. How do I use a LoRA in the original implementation, and will they work as normal?
@Jezz you can use this branch : https://github.com/neph1/FramePack/tree/pr-branch
@Ez4M thank you I'll try it out
@Jezz Hey, try v0.3 of my workflow, now with LoRA support (read the instructions)
Any suggestions for installing the FramePackLoraSelect node?
This works better than I expected... very nice!
Thanks for the workflow! I followed your instructions here and cannot see FramePackSample & LoadFramepackModel nodes. Could you please help me here?
With PowerShell:
cd ComfyUI/custom_nodes
git clone https://github.com/kijai/ComfyUI-FramePackWrapper.git
Then, restart ComfyUI.
You can also use Manager > Install via git URL > https://github.com/kijai/ComfyUI-FramePackWrapper.git
What does this workflow have in comparison to others? You said "unlimited length", but you can do that with any other FramePack workflow, am I wrong? If I set the workflow I'm using to a 20-second duration, the result will be the same. I thought you had made a loop of some kind that keeps generating by itself based on the last frame, "unlimited length" until you stop the generation; your workflow doesn't seem to do that?
Yes, you can do the same with other FramePack workflows. When I started this a few days ago, there were none. I thought it would still be a good objective to publish it, though.
@Ez4M I wanted to hook the end frame to another "run" in the workflow, but I'm still too much of a noob for Comfy xD I can tweak some workflows a bit, but that's it xD Thanks for sharing <3
@K3NK Already tried that, it's not possible AFAIK.
* FramePackSampler 50:
- Failed to convert an input value to a FLOAT value: denoise_strength, linear, could not convert string to float: 'linear'
Output will be ignored
Prompt executed in 0.01 seconds
denoise strength NaN
it only works at 320 resolution
You need to update the FramePack repository: in custom_nodes\ComfyUI-FramePackWrapper, type "git pull", or delete the folder and clone it again.
@trashkollector175 You need to update the FramePack repository: in custom_nodes\ComfyUI-FramePackWrapper, type "git pull", or delete the folder and clone it again.
Ok, got it working at the large size and with sage attn.. groovy
Great workflow. Works almost 1:1 with the FramePack GUI from my initial testing.
Thanks for sharing