HUNYUAN | Img 2 Vid LeapFusion
Requirements: LeapFusion LoRA v2 (544p) or v1 (320p)
In short: it uses a special LoRA to do the trick.
It works in combination with the LoRAs available around. Prompting helps a lot, but it works even without.
Raise the resolution for more consistency and similarity with the input image.
*You may want to change the step count to suit your needs; I used few steps for testing. (See the sketch below for the rough idea behind the trick.)
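As a sidenote on how this kind of img2vid trick tends to work, here is a minimal, heavily hedged sketch of the first-frame latent-injection idea. All tensor shapes and the mask handling are assumptions for illustration, not the LoRA's actual code:

```python
import torch

# Sketch of the idea (an assumption, not the real implementation): the input
# image is VAE-encoded and placed as the first latent frame, the remaining
# frames start as pure noise, and the LoRA teaches the model to propagate
# that first frame forward during denoising.

# Hypothetical latent dimensions: 16 channels, F latent frames, H x W latent size.
C, F, H, W = 16, 13, 68, 120                 # e.g. roughly 544p at 8x spatial compression
image_latent = torch.randn(1, C, 1, H, W)    # stand-in for vae.encode(image)

noise = torch.randn(1, C, F, H, W)
latents = noise.clone()
latents[:, :, :1] = image_latent             # pin the first frame to the image latent

# A conditioning mask (1 = keep from image, 0 = generate) could then be used
# so the first frame is re-injected at every sampling step.
mask = torch.zeros(1, 1, F, H, W)
mask[:, :, :1] = 1.0
print(latents.shape, int(mask.sum().item()))
```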

Bonus TIPS:
Here's an article with all the tips and tricks I've been writing up as I've tested this model since December:
https://civarchive.com/articles/9584
You'll get a lot of precious quality-of-life tips to build and improve your Hunyuan experience.
No need to buzz me, ty. Feedback is much more appreciated.
Comments (23)
It's pretty cool how close we managed to get without official support. I heard they will be releasing image-to-video later this week, so I'm curious to see whether the official implementation is way better or ends up with a similar outcome.
We will see. Where did you read that it's coming later this week? Please paste that info here.
@LatentDream Try asking mckenna. The mod creator told me on the HUNYUAN TESLA OPTIMUS BY BIZARRO lora that they are releasing it on Chinese New Year, and it'll likely take a day or so after that before it's available to use in ComfyUI, hopefully. I'm trying to find it on their Twitter but no luck so far; the handle is TXhunyuan.
It seems to be a no-go on a 3060 with 12 GB of VRAM. After struggling to get through the CLIP loader, it doesn't have enough memory to get through the HyVideoModelLoader step and gives a "torch.OutOfMemoryError: Allocation on device. Got an OOM, unloading all loaded models."
No fault of the workflow maker, thank you anyway; I just wanted to help others with my card save time!
Same here. Try to get bitsandbytes working so you can offload the CLIP model. It takes up no RAM at all for me now. Make sure to set the right quantization.
@funscripter627 I remember there was a node that allowed loading stuff on demand on the CPU; I used it with Flux a while ago... could that apply here too? I could include it in my workflows to help low-VRAM users.
@LatentDream I'm not sure about Flux specifically. All I know is that when I disable quantization of the hyvid_text_encoder, or set it to anything other than bnb_nf4, all my RAM gets eaten by the encoder. bnb_nf4 seems like a crazy good quantization option for people with low RAM and VRAM.
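For reference, here's a minimal sketch of what NF4 4-bit quantization looks like with bitsandbytes through the transformers API. The model ID is a hypothetical stand-in, not what the ComfyUI node loads internally:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# NF4 4-bit quantization config: roughly what an option like "bnb_nf4"
# corresponds to in bitsandbytes terms.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# Hypothetical model ID for illustration; Hunyuan Video uses an LLM-based
# text encoder, but the loading pattern is the same for any causal LM.
model_id = "Qwen/Qwen2.5-7B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",  # lets accelerate place layers on GPU/CPU as needed
)
```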
I think I need help. I get the following message when I run
HyVideoModelLoader
Can't import SageAttention: No module named 'sageattention'
I did a full reinstall and followed this: https://ko-fi.com/post/Installing-Triton-and-Sage-Attention-Flash-Attenti-P5P8175434. Working well now. It was a pain, but the fresh install was worth it.
@neuraiai9377 Yeah, but then what? How do I launch ComfyUI with micromamba? Install all the dependencies, etc.?
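For context, the "Can't import SageAttention" message just means the optional sageattention package isn't installed. A loader can treat it as optional, roughly like this sketch (the fallback choice here is an assumption for illustration):

```python
# Optional-import pattern: SageAttention is an optional accelerated attention
# kernel, and a loader can fall back to PyTorch's built-in
# scaled_dot_product_attention when it is missing.
import torch.nn.functional as F

try:
    from sageattention import sageattn
    attention_fn = sageattn
    print("Using SageAttention")
except ImportError:
    # Fallback assumption: plain SDPA, slower but always available.
    attention_fn = F.scaled_dot_product_attention
    print("SageAttention not installed, falling back to SDPA")
```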
Holy shit, the most annoying workflow I've ever tried to use: error after error after error. I give up.
Comfy is for tenacious and resourceful people, yeah, I know...
Same shit. I just fixed all the errors and now I can't figure out why it's so slow (RTX 3080).
@wiluxshop172 It's a catastrophe even.
The generated video is blurry. Is it because of the LoRA, or do I need to change settings?
Have you tried this? https://civitai.com/images/53828793
@LatentDream Ok, I'll try.
Where can I get the LoRA?
I must be missing something. Either I denoise at 100% and get a completely different video, or I denoise lower and get something with no motion. But others say it works, so clearly I'm doing something wrong. Does anyone have a suggestion? I'd love to get it working.
Similar situation.
Same here. I tried with a 400x400 input image and a detailed prompt describing it, and I get something totally different.
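For what it's worth, the tradeoff described above matches standard img2img "strength" behavior. This sketch shows how a strength value typically maps to skipped sampler steps; it's generic diffusers-style logic, an assumption about how the denoise setting behaves, not this workflow's exact code:

```python
# Generic img2img "strength" math: strength decides how many sampler steps
# actually run on top of the input-image latent.
def steps_for_strength(num_inference_steps: int, strength: float):
    init_timestep = min(int(num_inference_steps * strength), num_inference_steps)
    t_start = num_inference_steps - init_timestep  # steps skipped up front
    return t_start, num_inference_steps - t_start  # (start index, steps run)

for s in (1.0, 0.85, 0.5, 0.2):
    start, run = steps_for_strength(30, s)
    print(f"strength={s}: start at step {start}, run {run} steps")

# strength=1.0 replaces the latent with fresh noise (a completely different
# video); very low strength barely denoises, so you get the image with almost
# no motion. Per the description above, keeping the resolution near the LoRA's
# target (544p for v2) also helps the output stay close to the input image.
```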
