Adds an Avatar/Pandora vibe to your Hunyuan video gens
Comments (9)
Trained with movies or just photos?
Trained with diverse images of Pandora ripped from Blu-ray. I haven't tried training on film clips yet. I'm open to safe-for-work suggestions/commissions for Hunyuan LoRAs.
Awesome work, btw. How do you train the LoRA? I mean, what software is it? Thanks.
This guide, for Windows Subsystem for Linux: https://civitai.com/articles/9798. The original project is the open-source diffusion-pipe, on GitHub: https://github.com/tdrussell/diffusion-pipe.
Thanks a lot! Is diffusion-pipe easy to use, and can a 4070 Ti train a LoRA?
@zczcg, I think 24 GB of VRAM is recommended, but check the thread under the article; less VRAM might work at a lower resolution, or with a quantized version of the model.
@noodles3211z_ Thanks a lot! But it's not easy to get a card with 24 GB...
Awesome work! What's your text prompt and strength when applying the LoRA?
During training, did you tag it as the "avatar" character in the .txt caption for each image?
I tried this LoRA + Hunyuan without mentioning "avatar" or "pandora", with strength = 1, and didn't get a similar result. Wondering if I missed anything here. Thank you in advance!
This was my first attempt at making a Hunyuan LoRA, based on this article: https://civitai.com/articles/9798/training-a-lora-for-hunyuan-video-on-windows. I didn't use a trigger word because the configuration example and article text did not suggest one was needed. In hindsight, maybe I should have. I did modify the 8-bit configuration example to use a 16-bit pipeline, and I tested the LoRA and posted examples using the Hunyuan-t2v_Native_Civitai workflow from this site.
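For anyone curious what the 8-bit-to-16-bit change looks like: in diffusion-pipe the model precision is set in the training TOML. This is a rough sketch from memory, not my exact file; key names and values may differ, so double-check against the example configs in the diffusion-pipe repo before using it.

```toml
# Sketch of the diffusion-pipe [model] section (keys assumed from the repo's
# Hunyuan example; verify against the current example config).
[model]
type = 'hunyuan-video'
# The 8-bit example keeps the transformer in fp8 to save VRAM, e.g.:
# transformer_dtype = 'float8'
# For a 16-bit pipeline, run everything in bf16 instead:
dtype = 'bfloat16'
transformer_dtype = 'bfloat16'
```

The trade-off is straightforward: bf16 roughly doubles the transformer's VRAM footprint versus fp8, which is why the 24 GB recommendation above matters more for a 16-bit run.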