CivArchive
    Img2Vid V2 ▪ Hunyuan ▪ LeapFusion Lora V2 - Hun I2V | lora

    HUNYUAN | Img 2 Vid LeapFusion



    Requirements: LeapFusion Lora v2 (544p) or v1 (320p)

    In short: it uses a special LoRA to do the trick.
    It works combined with the LoRAs available around. Prompting helps a lot, but it works even without.
    Raise the resolution for more consistency and similarity with the input image.
    *You may want to change the steps to suit your needs. I used few steps for testing.
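    One practical note on the resolution tip above: latent video models in this family typically require width and height to be multiples of a fixed block size (16 is a common constraint; treat that value as an assumption and check your loader's error messages). A minimal helper to snap a requested size:

    ```python
    def snap_dim(x: int, multiple: int = 16) -> int:
        """Round a width/height down to the nearest accepted multiple.

        multiple=16 is an assumption for Hunyuan-style latent video models;
        adjust it if your loader reports a different constraint.
        """
        return max(multiple, (x // multiple) * multiple)

    # e.g. a requested 960x550 becomes 960x544
    width, height = snap_dim(960), snap_dim(550)
    ```

    Snapping before generation avoids cryptic shape-mismatch errors deep in the sampler.
    
    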

    Bonus TIPS:


    Here's an article with all the tips and tricks I've been writing as I test this model since December:

    https://civarchive.com/articles/9584
    You will get a lot of precious quality-of-life tips for building and improving your Hunyuan experience.


    no need to buzz me, ty 💗 ...feedback is much more appreciated.


    Description

    FAQ

    Comments (23)

    zengrath · Jan 27, 2025 · 2 reactions
    CivitAI

    It's pretty cool how close we managed to get without official support. I heard they'll be releasing image-to-video later this week, so I'm curious to see whether the official implementation is way better or ends up with a similar outcome.

    LatentDream
    Author
    Jan 27, 2025

    We will see. Where did you read that it's later this week? Please paste that info here 🙏

    zengrath · Jan 27, 2025

    @LatentDream Try asking mckenna. The mod creator told me on the HUNYUAN TESLA OPTIMUS BY BIZARRO lora that they are releasing it on the Chinese New Year, and it'll likely be a day or so after that before it's available to use in ComfyUI, hopefully. I've been trying to find it on their Twitter but no luck so far; the handle is TXhunyuan.

    jamesmanx842 · Jan 27, 2025 · 5 reactions
    CivitAI

    It seems to be a no-go on a 3060 with 12GB of VRAM. After struggling to get through the CLIP loader, it doesn't have enough memory to get through the HYLoader step, and gives a "torch.OutOfMemoryError: Allocation on device.

    Got an OOM, unloading all loaded models."

    No fault of the workflow maker, and thank you anyway; I just wanted to help others with my card save some time!

    funscripter627 · Jan 28, 2025 · 1 reaction

    Same here. Try to get bitsandbytes working so you can offload the CLIP model. It takes up no RAM at all for me now. Make sure to set the right quantization.

    LatentDream
    Author
    Jan 28, 2025 · 1 reaction

    @funscripter627 I remember there was a node that allowed loading stuff on demand on the CPU; I used it with Flux a while ago... it may be the same case here? I could include this in my workflows to help low-VRAM users.

    funscripter627 · Jan 28, 2025

    @LatentDream I'm not sure about Flux specifically; all I know is that when I disable quantization of the hyvid_text_encoder, or set it to anything other than bnb_n4, all my RAM gets eaten by the encoder. bnb_n4 seems like a crazy good quantization option for people with low RAM and VRAM.
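    The tradeoff described above can be sketched as a simple policy: with little memory headroom, aggressively quantize the text encoder; with plenty, skip quantization. A hypothetical helper (only the bnb_n4 option name comes from the comment above; the threshold and the "disabled" name are illustrative assumptions, not the workflow's actual settings):

    ```python
    def pick_text_encoder_quant(free_vram_gb: float) -> str:
        """Pick a text-encoder quantization mode based on free VRAM.

        "bnb_n4" is the bitsandbytes 4-bit option mentioned in the thread;
        the 10 GB threshold and "disabled" are illustrative assumptions.
        """
        if free_vram_gb < 10:
            return "bnb_n4"   # 4-bit: the encoder barely touches RAM/VRAM
        return "disabled"     # enough headroom: load in full precision
    ```

    On a 12GB card like the 3060 discussed above, this would select the 4-bit path.
    
    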

    wqn999 · Jan 28, 2025
    CivitAI

    I think I need help. I get the following message when I run

    HyVideoModelLoader

    Can't import SageAttention: No module named 'sageattention'

    neuraiai9377 · Jan 30, 2025

    I did a full reinstall and followed this: https://ko-fi.com/post/Installing-Triton-and-Sage-Attention-Flash-Attenti-P5P8175434. It's working well now. It was a pain, but the fresh install was worth it.
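    If a full reinstall isn't an option, a defensive import in your own script avoids the hard failure by falling back to PyTorch's built-in attention. This is a generic sketch, not something the workflow's nodes necessarily expose as a switch:

    ```python
    # Prefer the SageAttention kernel when installed; otherwise fall back to
    # torch.nn.functional.scaled_dot_product_attention ("sdpa").
    try:
        from sageattention import sageattn  # optional speedup kernel
        attention_mode = "sageattn"
    except ImportError:
        attention_mode = "sdpa"
    print(attention_mode)
    ```

    The fallback is slower but avoids the "No module named 'sageattention'" crash entirely.
    
    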

    Gooodis · Feb 1, 2025

    @neuraiai9377 Yeah, but then what? How do I launch ComfyUI with micromamba, install all the dependencies, etc.?

    banditlevel200 · Jan 29, 2025 · 10 reactions
    CivitAI

    Holy shit the most annoying workflow I've ever tried to use, error after error after error. I give up

    LatentDream
    Author
    Jan 29, 2025

    Comfy is for tenacious and resourceful people, yeah, I know... 🤣

    wiluxshop172 · Jan 29, 2025 · 1 reaction

    Same shit 😂😂😂 I just fixed all the errors and now I can't understand why it's so slow (RTX 3080).

    dominic1336756 · Jan 31, 2025

    @wiluxshop172Β it's a catastrophe even

    fayer · Jan 29, 2025
    CivitAI

    The generated video is blurry. Is it because of the LoRA? Or do I need to change settings?

    fayer · Jan 29, 2025

    @LatentDream Ok, I'll try.

    vim_brigant · Jan 30, 2025 · 2 reactions
    CivitAI

    I must be missing something. Either it's denoised at 100% and I get a completely different video, or I denoise lower and get something with no motion. But others say it works so clearly I'm doing something wrong. Does anyone have a suggestion? I'd love to get it working.

    fancypantzzz · Feb 2, 2025

    Similar situation

    Boruga · Feb 4, 2025

    Same here. I tried with an input image of 400x400 and a detailed prompt describing the image, and I get something totally different.
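    The behaviour vim_brigant describes is the classic img2img tradeoff: at denoise 1.0 the sampler starts from pure noise and the input image is effectively ignored, while at very low denoise the result stays frozen near the input. A toy sketch of the starting-latent blend illustrates why (plain Python for clarity; this is the generic img2img mechanism, not necessarily how the LeapFusion LoRA injects the image):

    ```python
    def init_latent(image_latent, noise, denoise):
        """Blend the encoded input image with noise, img2img-style.

        denoise=1.0 -> pure noise (input image contributes nothing);
        denoise=0.0 -> pure image latent (nothing left to generate, no motion).
        """
        return [denoise * n + (1.0 - denoise) * v
                for n, v in zip(noise, image_latent)]

    # A middle value keeps the image's structure while leaving room for motion:
    start = init_latent([1.0, 2.0], [0.0, 0.0], denoise=0.6)
    ```

    So if either extreme fails for you, sweeping intermediate denoise values (and raising resolution, per the description above) is the usual way to find the working range.
    
    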

    Workflows
    Hunyuan Video

    Details

    Downloads
    174
    Platform
    CivitAI
    Platform Status
    Available
    Created
    1/27/2025
    Updated
    5/12/2026
    Deleted
    -

    Files

    img2vidHunyuan_hunI2VLora.zip

    Mirrors

    HuggingFace (1 mirror)