CivArchive
    FramePack now supports first/last frames, by Lvmin Zhang, based on Hunyuan Video - Best practices with Kijai's nodes - first/last frames and keyframes
    NSFW

    Updated the first/last-frame and keyframe reference (ComfyUI supported as of 04/21)

    nirvash's repository for keyframe support (ComfyUI, no extra weights needed):

    nirvash/ComfyUI-FramePackWrapper

    [ The WEBP example images can be dragged directly into ComfyUI; they embed the workflow ]

    [ You can also download the attachment pack on the right; its example_workflows directory contains the workflows ]

    Feature

    • Set end frame

    • Assign weighted keyframes

    • Use different prompts per section (one prompt per FramePack section)

    based on kijai's ComfyUI-FramePackWrapper:

    https://github.com/kijai/ComfyUI-FramePackWrapper


    End-frame support in the PyTorch Gradio WebUI:

    FramePack_SE by TTPlanetPig, based on lllyasviel/FramePack


    Play with a large video model the way you play with image-generation models: lllyasviel's & Kijai's nodes

    Packing Input Frame Context in Next-Frame Prediction Models for Video Generation

    Algorithm team: Lvmin Zhang, Maneesh Agrawala

    Stanford University

    Paper Code

    ComfyUI Wrapper for FramePack by lllyasviel

    Best practice: ComfyUI nodes, kijai/ComfyUI-FramePackWrapper

    FramePack

    • Diffuse thousands of frames at full fps-30 with 13B models using 6GB laptop GPU memory.

    • Finetune 13B video model at batch size 64 on a single 8xA100/H100 node for personal/lab experiments.

    • Personal RTX 4090 generates at speed 2.5 seconds/frame (unoptimized) or 1.5 seconds/frame (teacache).

    • No timestep distillation.

    • Video diffusion, but feels like image diffusion.

    • lllyasviel's FramePack, built on Hunyuan Video diffusion, can continuously generate thousands of frames at full 30 fps with a 13B model on a 6 GB laptop GPU.

    • Finetune the 13B video model at batch size 64 on a single 8xA100/H100 node, for personal/lab experiments.

    • A personal RTX 4090 generates at 2.5 seconds/frame (unoptimized) or 1.5 seconds/frame (with TeaCache).

    • No timestep distillation. (CFG distillation only, for high image quality.)

    • Play with a large video model the way you play with image diffusion models!
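    A quick back-of-envelope check of the quoted speeds (a sketch using only the per-frame figures above) shows what they mean for a one-minute, 30 fps clip:

    ```python
    # Generation-time estimate from the figures above:
    # 2.5 s/frame unoptimized, 1.5 s/frame with TeaCache on an RTX 4090.

    def estimate_minutes(num_frames: int, seconds_per_frame: float) -> float:
        """Total wall-clock minutes to diffuse num_frames at a given speed."""
        return num_frames * seconds_per_frame / 60.0

    frames = 30 * 60  # 1 minute of video at 30 fps = 1800 frames

    print(estimate_minutes(frames, 2.5))  # unoptimized -> 75.0 minutes
    print(estimate_minutes(frames, 1.5))  # TeaCache    -> 45.0 minutes
    ```

    So even on a 4090, a full minute of video is on the order of an hour of compute; the laptop GPUs mentioned below are roughly 4x to 8x slower still.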

    Mostly working, took some liberties to make it run faster.

    Uses all the native models for text encoders, VAE and sigclip:

    https://huggingface.co/Comfy-Org/HunyuanVideo_repackaged/tree/main/split_files

    https://huggingface.co/Comfy-Org/sigclip_vision_384/tree/main

    And the transformer model itself is either autodownloaded from here:

    https://huggingface.co/lllyasviel/FramePackI2V_HY/tree/main

    to ComfyUI\models\diffusers\lllyasviel\FramePackI2V_HY

    Or from single file, in ComfyUI\models\diffusion_models:

    https://huggingface.co/Kijai/HunyuanVideo_comfy/blob/main/FramePackI2V_HY_fp8_e4m3fn.safetensors

    https://huggingface.co/Kijai/HunyuanVideo_comfy/blob/main/FramePackI2V_HY_bf16.safetensors
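    A small pre-flight sketch to check which of the two transformer locations above is populated (`COMFYUI_ROOT` is a placeholder assumption; point it at your actual install):

    ```python
    # Check the two expected locations for the FramePack transformer weights,
    # matching the autodownload path and the single-file path described above.
    from pathlib import Path

    COMFYUI_ROOT = Path("ComfyUI")  # placeholder: adjust to your install

    candidates = [
        COMFYUI_ROOT / "models" / "diffusers" / "lllyasviel" / "FramePackI2V_HY",
        COMFYUI_ROOT / "models" / "diffusion_models" / "FramePackI2V_HY_fp8_e4m3fn.safetensors",
        COMFYUI_ROOT / "models" / "diffusion_models" / "FramePackI2V_HY_bf16.safetensors",
    ]

    for path in candidates:
        status = "found" if path.exists() else "missing"
        print(f"{status}: {path}")
    ```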

    Requirements

    Note that this repo is a functional desktop software with minimal standalone high-quality sampling system and memory management.

    Start with this repo before you try anything else!

    lllyasviel/FramePack: Lets make video diffusion practical!

    Requirements:

    • Nvidia GPU in RTX 30XX, 40XX, 50XX series that supports fp16 and bf16. The GTX 10XX/20XX are not tested.

    • Linux or Windows operating system.

    • At least 6GB GPU memory.

    To generate 1-minute video (60 seconds) at 30fps (1800 frames) using 13B model, the minimal required GPU memory is 6GB. (Yes 6 GB, not a typo. Laptop GPUs are okay.)

    About speed, on my RTX 4090 desktop it generates at a speed of 2.5 seconds/frame (unoptimized) or 1.5 seconds/frame (teacache). On my laptops like 3070ti laptop or 3060 laptop, it is about 4x to 8x slower.

    In any case, you will directly see the generated frames since it is next-frame(-section) prediction. So you will get lots of visual feedback before the entire video is generated.
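    To illustrate the point, here is a toy loop (all names hypothetical, not FramePack's actual API) showing why next-frame-section prediction gives early visual feedback: each finished section can be previewed before the rest of the video exists.

    ```python
    # Toy illustration of section-by-section generation: frames accumulate
    # as each section finishes, so a preview is available long before the end.

    def preview(frames: list) -> None:
        """Stand-in for displaying/saving the frames generated so far."""
        print(f"preview: {len(frames)} frames ready")

    def generate_video(num_sections: int, frames_per_section: int) -> list:
        video = []
        for s in range(num_sections):
            # placeholder for the actual diffusion step of one section
            section = [f"frame_{s}_{i}" for i in range(frames_per_section)]
            video.extend(section)
            preview(video)  # the user already sees everything generated so far
        return video
    ```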

    Cite

    @article{zhang2025framepack,
        title={Packing Input Frame Contexts in Next-Frame Prediction Models for Video Generation},
        author={Lvmin Zhang and Maneesh Agrawala},
        journal={Arxiv},
        year={2025}
    }

    Kijai's Models Repository

    Kijai/HunyuanVideo_comfy · Hugging Face

    Description

    This repository is a test version for keyframe support, based on Kijai's original ComfyUI-FramePackWrapper: nirvash/ComfyUI-FramePackWrapper


    Original repository (kijai):
    https://github.com/kijai/ComfyUI-FramePackWrapper

    ---

    Feature

    • Set end frame

    • Assign weighted keyframes

    • Use different prompts per section

    FAQ

    Comments (22)

    meritrash6350Apr 21, 2025· 5 reactions
    CivitAI

    Can anyone suggest a way to actually do a sequence of events? I've seen a couple things that say they should work, but don't.

    Even something as simple as: character stands up, character throws ball, character jumps in the air.

    For me, it just seems to go straight to the end.

    tedbivApr 22, 2025· 2 reactions

    It's like a time-sliced sequence.

    tedbivApr 22, 2025· 2 reactions

    The prompt format is: [0s: The person waves hello] [2s: The person jumps up and down] [4s: The person does a spin]
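    A minimal sketch of how that bracketed format could be split into (start-time, prompt) pairs; the node's actual parsing may differ, this just demonstrates the structure:

    ```python
    # Parse "[0s: ...] [2s: ...]" timestamped prompts into ordered sections.
    import re

    SECTION_RE = re.compile(r"\[(\d+(?:\.\d+)?)s:\s*([^\]]+)\]")

    def parse_sections(prompt: str) -> list[tuple[float, str]]:
        """Return (start_seconds, text) pairs in order of appearance."""
        return [(float(t), text.strip()) for t, text in SECTION_RE.findall(prompt)]

    sections = parse_sections(
        "[0s: The person waves hello] [2s: The person jumps up and down]"
    )
    # -> [(0.0, 'The person waves hello'), (2.0, 'The person jumps up and down')]
    ```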

    HeartBuzzLoveApr 25, 2025· 5 reactions
    CivitAI

    I can't believe how well FramePack works.

    AiMetatron
    Author
    Apr 25, 2025· 2 reactions

    Lvmin is the king of the diffusion world, and Kijai is a god too.

    essence25Feb 23, 2026

    Do you have a workflow that uses this FramePack? Something for I2V?

    madnessdevilpurg8570Apr 25, 2025· 3 reactions
    CivitAI

    This workflow is the most comfortable AI animation setup I've used so far. A 3060 can generate without worry, and prompt sensitivity is better than Wan2.1; very pleasant to use!
    For more complex needs, though, background coherence falls well short of Kling, so it's best suited to simpler single-scene animation.

    AiMetatron
    Author
    Apr 28, 2025· 1 reaction

    A very fair assessment. Master Lvmin really is the most considerate of the Civit (community) crowd; no inexplicable barriers. Looking forward to a FramePack for Wan2.1, and to long-video LoRA training taking off~

    @METAFILM_Ai LoRAs are really needed; relying on prompts alone gives far too little control. So far I've almost never gotten camera movement to work, it's always a fixed shot, and getting a character to walk forward doesn't respond either, they just move in place 😭

    K3NKMay 3, 2025· 1 reaction
    CivitAI

    is this user @BlueChicken using hunyuan loras?

    https://civitai.com/images/73941433

    ofc he stripped the metadata .. im waiting for him to answer me...

    gambikules858May 11, 2025· 1 reaction
    CivitAI

    Any GGUF version? For a 12GB GPU plz

    DaShu999May 13, 2025

    My video card is a 12GB 3060, but the speed is really slow; you can run framepack_i2V_HY_fp8 or wan i2v14b480P

    DaShu999May 13, 2025
    CivitAI

    How to use Framepack Lora?? any workflow here?

    oldman169May 15, 2025

    I'm using a RTX 4080 12gb - sometimes it works and sometimes it doesn't.

    BigSad11May 29, 2025· 3 reactions
    CivitAI

    What is a good t2v workflow for this anyone? Thanks

    special_offer_ubikJun 8, 2025
    CivitAI

    I've recently started using ComfyUI and got a completely black output on my 2080ti 22g graphics card. Can anyone help me?

    thaddeuskJun 11, 2025

    Sounds likely to be using the wrong text encoder... or VAE decoder... maybe the wrong sampler? I dunno, I just stumble my way through everything. Best bet is to start with an existing workflow and make sure to use the exact same files and settings that the workflow came with.

    Often times, if you can find an example image or video you like, you can download it then drag it into comfyui to use the workflow it was generated with.

    special_offer_ubikJun 12, 2025

    @thaddeusk I use this workflow: github.com/nirvash/ComfyUI-FramePackWrapper/blob/main/example_workflows/framepack_hv_example.json

    VAE is hunyuan_video_vae_bf16, text encoder is llava_llama3_fp16, and sampler I just use that workflow's.

    I was suddenly able to generate correctly, but video memory usage went over 22GB, so a lot of time seemed to be wasted on swapping. Is this normal? Is it caused by the 2080 Ti not supporting bf16?

    thaddeuskJun 13, 2025

    @special_offer_ubik I'm not really sure. It might need to convert to fp16 while generating, which could slow it down a lot. 16-bit weights can fill up 22GB pretty quickly; even with 32GB I still use 8-bit models a lot of the time.

    soundgine998Nov 21, 2025
    CivitAI

    How do you use this with Framepack Studio?

    StarboarNov 27, 2025· 1 reaction

    This. I won't use comfy, so I need to know for Framepack Studio

    Checkpoint
    Hunyuan Video

    Details

    Downloads
    607
    Platform
    CivitAI
    Platform Status
    Available
    Created
    4/20/2025
    Updated
    5/14/2026
    Deleted
    -