CivArchive
    Hunyuan 12GB vram @1080p w/Upscale + Framegen + Wildcards - v1.0
    NSFW
    Preview 48139045

    This workflow is intended to allow video generation with the incredible Hunyuan model on 12GB VRAM (tested on a 4070). The workflow isn't fancy, as I hate fancy workflows. I prefer simple workflows you can easily adapt to your needs.

    With this workflow, you can generate your videos at a lower resolution (720x416 in my tests), then upscale to 1080p and generate new frames, resulting in a higher-resolution, longer video. It also includes the ability to use wildcards and combination prompts.
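    As a sanity check on those numbers: Remacri is a 4x upscaler, so a 720x416 generation upscales to 2880x1664 and then gets fit back down to the 1080p target. A small sketch of that math (the fit-to-1080p step is my assumption about how the workflow scales the result; adjust to your own nodes):

```python
def upscale_plan(width, height, model_scale=4, target_h=1080):
    """Upscale by the model's factor, then fit to the target height,
    keeping the aspect ratio (hypothetical helper mirroring the workflow)."""
    uw, uh = width * model_scale, height * model_scale  # raw 4x upscale
    if uh > target_h:  # scale back down to the 1080p target
        uw = round(uw * target_h / uh)
        uh = target_h
    return uw, uh

# 720x416 -> 2880x1664 after Remacri 4x -> roughly 1869x1080 at the target
print(upscale_plan(720, 416))
```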

    Required models

    Hunyuan BF16 or Hunyuan FP8 or Hunyuan Fast

    Hunyuan VAE

    Foolhardy Remacri 4x (or your favorite upscaler)

    The workflow includes an optional node to hook into a local instance of Ollama if you want to generate prompts with LLMs. (Disabled by default.)
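    For anyone wondering what the Ollama hookup does: the node essentially POSTs to a local Ollama server's /api/generate endpoint and uses the reply as the prompt. A minimal standalone sketch (the model name and instruction text here are placeholders, not what the workflow ships with):

```python
import json
from urllib import request

OLLAMA_URL = "http://127.0.0.1:11434/api/generate"  # Ollama's default local endpoint

def build_request(idea, model="llama3"):
    """Build the JSON body for Ollama's /api/generate endpoint."""
    return {
        "model": model,
        "prompt": f"Write a detailed, vivid video-generation prompt for: {idea}",
        "stream": False,  # ask for a single JSON reply instead of a stream
    }

def generate_prompt(idea, model="llama3"):
    """POST the request to a locally running Ollama and return its text reply."""
    data = json.dumps(build_request(idea, model)).encode()
    req = request.Request(OLLAMA_URL, data=data,
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

    Swapping in an uncensored model is just a matter of changing the model name to whatever you have pulled locally.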

    Workflow based on https://civarchive.com/models/1048302/hunyuanvideo-12gb-vram-workflow?modelVersionId=1176230

    Description

    Initial release

    FAQ

    Comments (86)

    crombobularDec 28, 2024· 1 reaction
    CivitAI

    nice, can't wait to try this.

    edit: is hunyuan fast the fp8 model but with the fast lora built in?

    TalesfromOurDigitalLives
    Author
    Dec 28, 2024

    I'm not sure. I would assume so?

    DomDomTomTomDec 28, 2024· 2 reactions
    CivitAI

    Great workflow, thank you for sharing. Just a side note for those who might wonder, "Shift" does not appear to work, at least with the fast model. I have not tested with other versions of the model but with the same seed, I keep getting the same gen regardless of the number. Same thing happened with the other 12GB workflow posted here. I've only gotten shift to function using the Hunyuan Video Wrapper nodes from Kijai.

    TalesfromOurDigitalLives
    Author
    Dec 28, 2024

    Interesting, I'll take a look. Have you seen improvement in the quality when "shift" is working as expected? As in, is it worth spending time on?

    DomDomTomTomDec 28, 2024· 1 reaction

    @TalesfromOurDigitalLives Hard to tell to be honest as I have low VRAM (12GB) and kinda gave up trying to figure it out. Even without shift, I'm still managing to get great outputs so I'm not mad. I have to say, your workflow's output quality is much higher than what I was using, even at low res, so thank you again for sharing! <3

    psspsspsspssspssDec 28, 2024· 5 reactions
    CivitAI

    Please include an HD screenshot of the workflow so we can see what it includes before we download.

    TalesfromOurDigitalLives
    Author
    Dec 28, 2024· 3 reactions

    Will do

    catfirmedDec 30, 2024
    CivitAI

    It is a really good workflow, but for me it is impossible to make it work with HunyuanVideo LoRAs.

    TalesfromOurDigitalLives
    Author
    Dec 30, 2024· 4 reactions

    Hey @catfirmed . Here is how you would do it. Note that I've mostly used the bf16 version, unsure about the others.

    https://ibb.co/gDXH9DM

    joesixpaqJan 1, 2025· 1 reaction

    You can chain your LoRAs like this:

    https://ibb.co/85kgpyP

    ginxDec 31, 2024· 1 reaction
    CivitAI

    the random/variation prompt is genius, thankyou. The upscaler also helps a lot

    jdavid82500Jan 3, 2025
    CivitAI

    I'm getting "Error while deserializing header: MetadataIncompleteBuffer" and Load Diffusion Model is circled in purple. So I ran sha256sum on hunyuan_video_720_cfgdistill_bf16.safetensors and it matches, so the model is not corrupt. I also downloaded the Fast version, but it cannot be selected from the node; looks like something is hardcoded somewhere... maybe? I'm running an RTX 4090 on Linux.

    tnil25Jan 6, 2025· 3 reactions
    CivitAI

    Just wanted to say this is probably the best workflow for Hunyuan right now, Thank you!

    contently_unmindfulJan 7, 2025· 1 reaction
    CivitAI

    This workflow works pretty well and fast for me, so thank you a lot!

    eFeRBeJan 8, 2025
    CivitAI

    It's a bit off topic, but what model do you use with ollama (uncensored ?) ?

    eastbluedudeJan 8, 2025· 1 reaction

    They seem to be using this one https://ollama.com/abedalswaity7/flux-prompt based on their workflow screenshot https://civitai.com/images/48139045

    contently_unmindfulJan 8, 2025
    CivitAI

    I have a question regarding prompts for the style of the generated video. A lot of times the generated characters have an anime/cartoon style even if I don't prompt for it. Is there a way to force a realistic style? So far I've tried prompts like "ultra-realistic", "photorealistic" or simply "real", but the result is still sort of anime-ish. Any tips, or do others experience the same?

    kapper_bearJan 8, 2025· 2 reactions
    CivitAI

    Well done! Takes about 6 minutes on my 4070 Ti Super.

    dkain76Jan 8, 2025· 1 reaction

    Same on my non-super 4070ti

    BadToxicJan 11, 2025

    Oh, then I guess it's not normal that it takes 3 hours on my 4080 Super. I hope I can find a solution for that. :(

    nicetry20010Jan 25, 2025

    @BadToxic did you find the issue and fix?

    BadToxicFeb 4, 2025

    @nicetry20010 Strangely, after a PC restart and closing absolutely everything, the time dropped to 6-8 min (and I guess I reduced the steps a little). It seems to depend on my RAM usage (not GPU or VRAM): if even a little bit more is in use, the duration increases dramatically. Closing Chrome did the most magic in my experiments. XD
    I ended up using ComfyUI in Edge now, so I can close Chrome when needed. ^^'

    lucasye2008672Jan 8, 2025· 1 reaction
    CivitAI

    Thanks. It's a great workflow which works pretty well on my MacBook.

    dkain76Jan 8, 2025
    CivitAI

    Works great on my 4070 Ti. Only question is when I prompt walking towards the camera, they start forward for 2sec then walk backwards

    TalesfromOurDigitalLives
    Author
    Jan 9, 2025· 2 reactions

    The combine node has pingpong enabled. Turn it off to avoid the video looping back on itself

    dkain76Jan 9, 2025

    @TalesfromOurDigitalLives Just found out about this on the Civitai Discord lol. I'm used to it being off by default when I load VHS Combine. Thank you!

    idkdudeJan 10, 2025
    CivitAI

    I'm getting
    "Cannot execute because a node is missing the class_type property: Node ID '#113'" in all the set/get model/vae nodes, and the manager doesn't find any missing nodes.

    radiantResistorJan 10, 2025
    CivitAI

    Nice work! However I'm getting an error at the interpolation step:

    "FILM VFI

    The following operation failed in the TorchScript interpreter. Traceback of TorchScript, serialized code (most recent call last): File "code/__torch__/interpolator.py", line 15[...].

    Any ideas? Thanks

    radiantResistorJan 10, 2025· 1 reaction

    False alarm- it worked the second time around. But the final VFI output is slowed. Any way to prevent that?

    TekHousEJan 11, 2025
    CivitAI

    How do I turn off the random prompt? It is largely useless. Is there a way to just write in a prompt like normal?

    TalesfromOurDigitalLives
    Author
    Jan 11, 2025· 2 reactions

    Just type your prompt in that box. If you don't use wildcards or combination prompts, it will act just like a regular prompt.
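    For reference, this is roughly what the random-prompt node does with combination prompts: a plain prompt passes through unchanged, while each {a|b|c} group picks one option per generation. A minimal sketch of the {...|...} part only (wildcards work similarly but read their options from __name__ text files; this is an illustration, not the node's actual code):

```python
import random
import re

def expand(prompt, rng=random):
    """Replace each non-nested {a|b|c} group with one randomly chosen option.
    A prompt with no groups is returned unchanged."""
    def pick(match):
        return rng.choice(match.group(1).split("|"))
    return re.sub(r"\{([^{}]+)\}", pick, prompt)

print(expand("a {red|blue|green} car at {dawn|dusk}"))  # e.g. "a blue car at dusk"
print(expand("a plain prompt"))  # unchanged
```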

    tedbivJan 11, 2025

    @TalesfromOurDigitalLives same problem. can't modify source node contents...

    tedbivJan 11, 2025

    @TalesfromOurDigitalLives i see, the random prompts node is editable, just not source. 

    TekHousEJan 12, 2025· 1 reaction

    @TalesfromOurDigitalLives tx mate..perfect

    tedbivJan 11, 2025· 2 reactions
    CivitAI

    great workflow, thanks for uploading.

    tedbivJan 11, 2025
    CivitAI

    Dumb question:

    New to ComfyUI. The workflow runs well. How can I get a longer video and/or increase the resolution? I tried changing a few items that I thought might make those changes, but they had no effect on the end video...

    scooter_deJan 11, 2025

    I believe right now, 5 seconds is the limit.

    tedbivJan 11, 2025

    @scooter_de I modified the EmptyHunyuanLatentVideo node to width=512, height=960, length=128 and got an 8 sec video in 15 minutes on an RTX 4090.

    tedbivJan 11, 2025

    @scooter_de attempting length=256 now. 

    scooter_deJan 11, 2025

    @tedbiv I saw the option too and increased the number as well. I have a 3060 with 12GB VRAM. Let's see if it can do it. It takes a while; the machine has only 32GB RAM and is 7 years old :-).

    tedbivJan 11, 2025

    @scooter_de length=256 crapped out. out of memory. i'll try dialing down resolution...

    tedbivJan 11, 2025

    @scooter_de 13 sec video, 32 minute initial processing, length=201... not too shabby.

    scooter_deJan 12, 2025

    @tedbiv I'm at 121 frames now. Still processing. What new resolution did you try?

    tedbivJan 12, 2025

    @scooter_de tried 480x960 and 640x480, went back to 512x960. seemed best. i'll have to mess around.
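    A rough rule of thumb for the durations in this thread: seconds = (latent length × interpolation factor) ÷ output frame rate. A tiny calculator (the 2x factor and 32 fps output are my assumptions; check your own FILM VFI multiplier and the frame_rate on the Video Combine node):

```python
def duration_s(length, interp_factor=2, fps=32):
    """Approximate clip duration after frame interpolation."""
    return length * interp_factor / fps

print(duration_s(128))  # 8.0 s, matching the 8 sec reported for length=128
print(duration_s(201))  # ~12.6 s, close to the 13 sec reported for length=201
```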

    scooter_deJan 11, 2025
    CivitAI

    I'm getting the warning(?)

    python[1200633]: clip missing: ['text_projection.weight']

    Anybody else the same? Any ideas on how to avoid it?

    robertsAmechEJan 12, 2025· 1 reaction

    I do not, and have stuff running for the night, but I think I know what would cause it.

    On the far left is the block [dual clip loaders]

    The first clip file is one you probably already have: clip_l.safetensors

    The second one is one I had to download online and add to the clip folder: llava_llama3_scaled.safetensors

    I think this is where I got it from....maybe...possible. I had to download quite a few items and had to restart my computer to get my GPU to play nice.

    https://huggingface.co/calcuis/hyvid/blob/afbd46ccd115066b4d7092b036c3f939d818c11f/llava_llama3_fp8_scaled.safetensors

    scooter_deJan 12, 2025

    @robertsAmechE I noticed this is not specific to this workflow, but happens also in others. I now remember that I read about it a while ago. It seems to be an issue with ComfyUI. So far I haven't seen any issues with it. But I'd like to know what this warning is about. :-D

    friendzcornerz810Jan 12, 2025· 1 reaction
    CivitAI

    Getting OOM on a 3090, though I am using the fp8 fast model.

    YW55Jan 13, 2025· 2 reactions
    CivitAI

    Upscaler and frame interpolation nodes are not compatible with AMD ROCm. Will use it as a template to find replacement nodes.

    crazypacoJan 18, 2025

    If you get it working, can you post?

    tedbivJan 13, 2025· 3 reactions
    CivitAI

    i think this is my new favorite toy... :)

    hook0rJan 13, 2025· 1 reaction
    CivitAI

    Hi, thanks a lot for sharing. Where do I have to put "film_net_fp32.pt" for "FILM VFI" node?

    TalesfromOurDigitalLives
    Author
    Jan 13, 2025

    It should download it automatically if I remember right

    MusigregJan 20, 2025

    @TalesfromOurDigitalLives What if it doesn't? Mine gives me error saying it can't download. I have the .pt, and I'm trying to find out where it is too...

    cd0001Feb 25, 2025

    @musigreg369 - I had the same issue. It had downloaded the version of the file from dajes, however, this was causing a padding error.

    Instead, I manually downloaded it from Huggingface and placed it into the ComfyUI/custom_nodes/comfyui-frame-interpolation/ckpts/film folder.

    But, I'm also experiencing an issue where ComfyUI stops working while processing the very last video (FILM VFI node), but produces no error message.

    dixytravian140Jan 14, 2025· 3 reactions
    CivitAI

    Very nice ...thank you for posting.

    StratDeCatJan 16, 2025· 2 reactions
    CivitAI

    Wow! Smooth as silk

    markharper80266Jan 16, 2025
    CivitAI

    This works great but I can't get the Loras to work. I'm new to Comfy and simply tried setting the lora node to always and tried on trigger but it never fires. Is there anything else I should do?

    GitarooManJan 17, 2025

    Did you try other loras and their trigger words, if applicable?

    markharper80266Jan 17, 2025

    @GitarooMan I did. Doesn't seem to be working for any of them. I'm new to ComfyUI; do I need to add a prompt to the LoRA itself or something?

    Xfile21xfile21Jan 19, 2025
    CivitAI

    Hi, I'm new to this. I loaded your workflow and it reports these nodes as missing:

    FILM VFI

    Display Any (rgthree)

    VHS_VideoCombine

    OllamaGenerate

    DPRandomGenerator

    Where do I get them?

    TalesfromOurDigitalLives
    Author
    Jan 19, 2025

    You should install the comfyui manager, which would make it easy to download missing nodes:
    https://github.com/ltdrdata/ComfyUI-Manager

    Xfile21xfile21Jan 20, 2025· 1 reaction

    @TalesfromOurDigitalLives Great :) Thank You very MUCH :)

    Enokk225Jan 20, 2025· 4 reactions
    CivitAI

    Hello, new to this. Can I use this workflow for image-to-video? (Maybe a stupid question.)

    Renes_stuffJan 23, 2025· 1 reaction
    CivitAI

    The only workflow that actually works for me, and I have a 4090 too.

    Psy_pmpJan 24, 2025
    CivitAI

    4080 mobile, 12GB. Doesn't work. No errors, it just crashes and that's all. Other workflows give allocation errors.

    Psy_pmpJan 24, 2025

    Hm. FastVideo works.

    cd0001Feb 25, 2025

    Where is it crashing? For me, it dies just as it's processing the final video. There was an issue with film_net_fp32.pt; downloading the version manually from Hugging Face and placing it in the appropriate folder fixed that error (relating to padding). However, now it just stops. This is in the FILM VFI process.

    okamishirosaki237Jan 26, 2025
    CivitAI

    Tested on an RTX 3080 with 10GB of VRAM and 48GB of RAM. Works fine, but randomly throws an "OutOfMemoryError: Allocation on device". I tested with 97 frames (the first generation is estimated at 2:30 hours, but if I stop it and try to generate another one, it runs in 20-30 minutes; times seem random 😂) and with 121 frames, which took 2 hours (I have not been able to reduce this time). The tested resolution is 416x720. I used the bf16 model and VAE, and an fp16 clip.

    VelouraFeb 13, 2025· 3 reactions

    Use the fp8 model or a GGUF. Why would you use bf16? That's optimized for 24GB VRAM.

    sotruJan 28, 2025
    CivitAI

    Good afternoon. Could you please tell me how to enable LoRA? What should be connected, and to what? I used another workflow where it was already connected, but I can't understand how to do it here.

    lug_LJan 29, 2025· 6 reactions
    BonescuadFeb 14, 2025· 1 reaction

    click on it, bypass, and bypass again worked for me.

    YikaPanicJan 30, 2025
    CivitAI

    "replication_pad3d_cuda" not implemented for 'BFloat16'
    Does anyone know how to solve this issue?

    lorderagonhh961Feb 4, 2025

    Reinstall ComfyUI. You could try updating PyTorch too.

    strubblesFeb 9, 2025

    update your pip and embedded pytorch

    skpManiacFeb 5, 2025
    CivitAI

    I'm loving this now that I've got things working, but would it be possible to add a TEXT box so it's not all random?
    Thank you for your work, it is appreciated :)

    idelamo288Feb 6, 2025· 1 reaction

    You can add it anytime; just add a simple CLIP text encoder on the "yellow" line.

    cd0001Feb 25, 2025· 1 reaction

    You can type whatever you need into the random prompts text box.

    civitstableFeb 8, 2025
    CivitAI

    Can it be used with 8GB VRAM?

    TalesfromOurDigitalLives
    Author
    Feb 8, 2025

    Didn't try, but I doubt it.

    VelouraFeb 11, 2025

    I wouldn't

    CetlhoMar 7, 2025
    CivitAI

    Is it possible to use more than one Lora at the same time? If possible, how do I do it?

    PixelsmaniaMar 11, 2025· 1 reaction

    Yes, just add more "Load LoRA" nodes and put them between "Load Diffusion Model" and "DualCLIPLoader", then connect everything in series. You can add as many as you like until your GPU reaches its limit.
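    Under the hood, chaining loaders in series just stacks weight patches: each LoRA adds its own strength-scaled low-rank delta to the base weights, W' = W + sum(strength_i * B_i @ A_i). A toy illustration with plain Python lists (shapes and values are made up; real nodes patch the model in place):

```python
def matmul(A, B):
    """Multiply two matrices given as nested lists."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def apply_loras(W, loras):
    """Add each LoRA's strength-scaled low-rank delta B @ A to the base weight W."""
    for strength, B, A in loras:
        delta = matmul(B, A)
        W = [[w + strength * d for w, d in zip(w_row, d_row)]
             for w_row, d_row in zip(W, delta)]
    return W

base = [[1.0, 0.0], [0.0, 1.0]]              # 2x2 base weight
lora1 = (0.5, [[1.0], [0.0]], [[0.0, 2.0]])  # (strength, B 2x1, A 1x2): rank-1 delta
lora2 = (1.0, [[0.0], [1.0]], [[3.0, 0.0]])
print(apply_loras(base, [lora1, lora2]))     # [[1.0, 1.0], [3.0, 1.0]]
```

    Since the deltas simply add, the order of a serial chain doesn't change the merged result here.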

    PixelsmaniaMar 11, 2025
    CivitAI

    Hey, I finally found something that works on my PC. I have the 4070 Super too and am getting really good results, thanks!

    One question: how do I load character models correctly? I tried a couple but they don't seem to work; only one with "Hunyuan" in the title worked. So I can't just use a normal LoRA, only Hunyuan LoRAs? They're not that common, sadly :(

    WarlockdiantJun 1, 2025
    CivitAI

    Is there some config to avoid hitting 100% GPU use with a 3090?

    Workflows
    Hunyuan Video

    Details

    Downloads
    8,894
    Platform
    CivitAI
    Platform Status
    Available
    Created
    12/28/2024
    Updated
    5/13/2026
    Deleted
    -

    Files

    hunyuan12GBVram1080pW_v10.zip

    Mirrors

    HuggingFace (1 mirror)

    fhdHunyuan12GBWUpscale_v10.zip

    Mirrors