Trained on 200+ highly detailed, high-resolution (min. 2000px) images plus 35 hi-res video clips of amateur female face portraits, with visible skin pores, fine facial hair, and blemishes/moles.
The LoRA is intended for higher resolutions, i.e. 768px and up (width and/or height). Training images and video frames were concentrated between 768 and 1152px, so this is where it performs best.
My posted samples were made with DPM++ 2M, beta scheduler, 28 steps, on the original Tencent bf16 models at 512x768 resolution @ 73 frames, LoRA strength 0.85. The quants/fpX/fastvid variants I tried all produced visibly different and degraded quality, IMHO.
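For reference, the sample settings above can be collected into a small config sketch. The keys are purely illustrative and not tied to any particular frontend's API:

```python
# Settings reported for the posted samples (illustrative key names,
# not a specific UI's parameters).
sample_settings = {
    "sampler": "DPM++ 2M",
    "scheduler": "beta",
    "steps": 28,
    "width": 512,
    "height": 768,
    "frames": 73,
    "lora_strength": 0.85,
    "model": "original Tencent bf16",  # quants/fpX/fastvid looked degraded
}
```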
V1.1 works better with other LoRAs. Set it to about 3/4 of the other LoRA's weight, e.g. this LoRA at 0.65 and the other LoRA (character etc.) at 0.9.
For SkyReels or Hunyuan Image-to-Video clips, lower the strength to about 0.4 for videos over 73 frames. The LoRA was only trained on snippets of <73 frames, and this seems to have a more visible (adverse) effect on longer videos with the I2V model than with T2V.
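The strength recommendations above can be sketched as a tiny helper. This is just a heuristic summary of the tips, not an official rule; the function name and parameters are made up for illustration:

```python
def recommended_strength(other_lora_strength=None, i2v=False, frames=73):
    """Pick a LoRA strength following the usage notes above (heuristic).

    - SkyReels / Hunyuan I2V clips longer than 73 frames: drop to ~0.4
    - combined with another LoRA: ~3/4 of that LoRA's weight
    - otherwise: 0.85, as used for the posted samples
    """
    if i2v and frames > 73:
        return 0.4
    if other_lora_strength is not None:
        return 0.75 * other_lora_strength
    return 0.85

# About 3/4 of 0.9, close to the suggested 0.65:
combined = recommended_strength(other_lora_strength=0.9)
```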
Description
Improved image quality also in far-away shots (not only in closeups)
Works better together with other LoRAs (see examples)
FAQ
Comments (11)
super!
Trigger word?
There is no specific trigger word, as it was trained on full descriptive captions (4-5 sentences). Most of the captions did include words like 'closeup', 'face', 'woman', 'laughing', etc., so those may help; but the LoRA really kicks in on most videos that include faces. :-)
How could this work with SkyReels?
You can use the native SkyReels workflow as posted; insert this LoRA at about 0.9 strength. You can also take a look at the gallery on this page, where I posted my workflow! It's the Miley Cyrus img2vid.
Can I use both i2v and t2v? (skyreels)
Yes for v1; v2 of SkyReels is Wan-based, so that's not going to work.
What a wonderful LoRA! And thanks for the tips! 💪☺️
thanks, happy generating!
I guess with Hunyuan 1.5 the LoRA is obsolete?
I have not tried, but now that you mention it, I will probably test whether it's compatible; I assume it's not.