This model depicts a couple engaging in doggystyle sex from a side-view perspective (profile).
It can still produce same-sex material.
The dataset contains side-view and explicit rear-view material.
Base prompt example (most of the captions look like this):
"Male and female have sex in doggystyle position on a couch. The female is bent over on all fours. The male is thrusting his hips back and forth rapidly, while pounding the female's ass. You have a clear view on the penis pussy penetration. The penis is penetrating the pussy vulva. The female breasts are swinging back and forth by the force of the pushing male."
Usage:
1.0 strength (double blocks)
Edit: I uploaded the WF that I used to create the examples:
https://civarchive.com/models/1283759/hvbasicadaptmultilorat2ti2vv2v10?modelVersionId=1448406
Quick Comment on the Sample Videos:
Guys, I need to confess: to create the sample videos I did not use
the base model but the following checkpoint merge:
https://civarchive.com/models/1237378/huncusvid?modelVersionId=1394503
Nevertheless, I hope this LoRA is of use to you.
If you have a workflow example showing how to successfully use my LoRA with the basic fp8 Hunyuan Video version, I'd be excited to see it.
Beginner's questions:
I used a learning rate of 1e-4.
I read about learning rates but could not figure out what to use, so I left it
at the said default. If you have any insight about it, please let me know in the comments.
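For context on where that learning rate goes: in musubi-tuner the rate is passed on the command line rather than in the dataset TOML. Below is a sketch of a typical invocation; the paths, network dim, and epoch count are placeholders, and the flag names are from my reading of the tuner's README, so verify them against the version you actually have.

```shell
# Hypothetical musubi-tuner training invocation (verify flags for your version).
accelerate launch --num_cpu_threads_per_process 1 --mixed_precision bf16 \
    hv_train_network.py \
    --dit path/to/hunyuan_dit_checkpoint.pt \
    --dataset_config dataset.toml \
    --sdpa --mixed_precision bf16 --fp8_base \
    --optimizer_type adamw8bit \
    --learning_rate 1e-4 \
    --gradient_checkpointing \
    --network_module networks.lora --network_dim 32 \
    --max_train_epochs 250 --save_every_n_epochs 25 --seed 42 \
    --output_dir output --output_name my_lora
```

The `--learning_rate 1e-4` here matches the default I used; lowering it (e.g. 5e-5) trains more slowly but can reduce burn-in on small datasets.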
About this LoRA: it is the 1st iteration.
3750 steps / 250 epochs. Trained on a 4070 Ti Super (16 GB) in 16 h 17 m.
The dataset consists of 20 video clips, each encoded with the following properties:
640x480 / 24 fps / 65 frames per clip
Trained with: musubi_Hunyuan_Lora_Tuner / toml file:
resolution = [340, 256]
batch_size = 1
enable_bucket = true
bucket_no_upscale = false
target_frames = [1, 25, 45]
frame_extraction = "head"
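For anyone wanting to reproduce this, the settings above live in a musubi-tuner dataset config TOML. Here is a sketch of how they might fit into a complete file; the directory paths are placeholders I made up, so check the dataset config docs of your tuner version for the exact keys.

```toml
# Settings shared by all datasets
[general]
resolution = [340, 256]
batch_size = 1
enable_bucket = true
bucket_no_upscale = false

# One video dataset (paths are hypothetical)
[[datasets]]
video_directory = "/path/to/clips"   # the 20 encoded 640x480/24fps clips
cache_directory = "/path/to/cache"   # latent/text-encoder cache location
target_frames = [1, 25, 45]          # frame counts to sample per clip
frame_extraction = "head"            # take frames from the start of each clip
```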
Comments (42)
Good, stable LoRA, and thanks for putting me on to HunCusVid.
Thx for the feedback. Plus, HunCusVid is far less frustrating for me to generate with, although I couldn't combine it with TeaCache. I hope people start cooking more merges with LoRAs soon. I'll try to figure it out some day.
Wait, "double blocks"? Is there any reason/logic behind that?
At this point I don't have an explanation. It was just an observation I made that clearer images were delivered when using double blocks instead of all/single. I don't know why, and at the moment I'm just happy that my LoRA works somehow. Until now, I could not figure out the configuration for most of the community's LoRAs, so implementing LoRAs was more of a trial-and-error procedure. I'm relatively new to training and not even familiar with all the terms involved in weight distribution. I'm sorry. Maybe one of our fellow CivitAIns can enlighten us in this comment section.
@Adaptalab0r Well, that was a solid answer tho hehe. I'm actually playing with the LoRA "block types" and yeah, I can't find a logical usage for that... Eventually I found something from the LoRA loader node's creator, and he says "Just use double blocks" hahaha. Anyways, thanks again, and I look forward to seeing your next uploads!
@Xavis Double blocks allow you to stack more LoRAs (about 2 or 3 more than without) while still maintaining relatively clean output; otherwise you will often get very blurry output and/or grid artifacts. I don't know exactly why, though :)
@NoArtifact Really? I've been playing around with the "all" setting and it is working kinda "good"... I'll keep testing those other block methods. Thanks!
@Xavis If you only run 1 to 3 LoRAs, the "all" setting might be enough, yes. If you notice very blurry or blocky artifacts in the output, try double blocks.
Great work, but can you share a WF? Thx.
I personally use the AIO 1.3 Ultra variant here, but you can try the advanced one:
Hunyuan 💥 AllInOne ▪ +Tips - ✔️AiO🟧Advanced☕1.4 | Hunyuan Video Workflows | Civitai
Thx for the feedback. I just finished one a few hours ago. I'll make sure to post it soon.
Great LoRA! However, it seems to be biased towards specific ethnicities. I try to prompt for a Korean girl, for instance, and always end up with a Caucasian one.
Any suggestions?
Put greater weight on the relevant part of your prompt, (a beautiful Korean woman:1.25) for example, and maybe lower the weight of the LoRA by 0.1 or 0.2.
@NoArtifact Thx for the hint.
@Melty1989 The dataset really wasn't that diverse. From what I can tell, it is easy to swap the actors, just like in the same-sex example, because there were no men-on-men samples in the dataset. So I hope you can get those results :-)
Thank you for your work! But when I use this LoRA I receive grainy / badly textured videos. Do you have any suggestions? (Weight is 1 and single blocks.)
I had the same issues you described. Then I changed to "double blocks" and it was fine. You could also try Sampler: DPM++ 2M / Scheduler: beta on HunCusVid (20 steps / 6-8 CFG).
@Adaptalab0r Thank you
Thanks so much for this LoRA, it looks amazing in the previews. However, I also get a kind of VHS texture on my outputs. I've tried several different checkpoints. Any suggestions?
Thanks!
Well, that's a common question I have not seen a clear answer to. The only suggestion I can offer is to use "double blocks". Soon I'll upload a sample WF; maybe this will help.
@Adaptalab0r amazing, will try that and thanks again!!! 😀
Female faces are often outside the camera, and faces or hands of the female characters get deformed and distorted when they are close to the edge of the screen. I think this LoRA needs a high resolution to ensure that both man and woman stay within the frame.
Also, my men and women don't move as smoothly as in the examples; for instance, the man stops moving while the woman is orgasming.
Yes, I know what you mean with the tendency to be outside the camera. This is due to the dataset material, in which I had to be careful about both the impairing watermark zones and the aspect ratio. I sometimes even left out the faces, as I (along with other people in the community I chatted with) think it might help against a bias toward showing specific faces.
To be honest, I did not have the distortion problem, nor did the faces often reach the boundaries of the screen. From what I can tell, if you specify the surroundings of the actors, you should be fine. For example, when I entered "The room was littered with PC hardware", it even gave me an ultra-wide shot with all the characters' features intact. I hope this helps.
@Adaptalab0r I tried many times, and I finally got the best results by setting the image to a 5:4 ratio. The problem I mentioned no longer seems to occur at this ratio.
If you used musubi, make sure you have a recent version; an early January version had a bug that gave a grid texture on LoRAs.
Thank you. I will!
No, thank you for sharing the things you make.
@Morser331
The latest version on GitHub is from Jan 20. I guess that might be after the fix then. The version I used was downloaded on Jan 1st, so the update was necessary.
The motion/position is good, but the quality is awful. I don't understand why it's generating at such terrible quality.
From the perspective of a vanilla Hunyuan fp16/fp8 user, I can understand that. Using the first-mentioned checkpoint of this community, I get terrible quality from, let's say, 75% of all LoRAs. That's the point at which I switched to a merge like HunCusVid, waiting for the devs and the community to finetune the model. I sort of gave up on the vanilla checkpoint, to be honest, as I only got 2-5% of the generations right even though I experimented a lot; but maybe I just straight out missed some key rules. Well, if you haven't tried it before, you could run the LoRA at 0.8-1 strength, double blocks, with Sampler: DPM++ 2M / Scheduler: beta on HunCusVid (20 steps / 6-8 CFG). I hope it helps.
@Adaptalab0r I haven't had that experience at all. This is the first LoRA I've used where it's just completely burned out / fuzzy / etc. It must be something to do with the LoRA itself.
@CapAndABull Someone told me that there is a bug in my version of musubi_Hunyuan_Lora_Tuner. If it is not that, I cannot tell. Thank you for your feedback anyway.
@Adaptalab0r I haven't heard of that particular trainer, but it very well could be. Just to be clear: you didn't train on the HunCusVid model, right? You trained it on the base Hunyuan one, right?
@CapAndABull Yes, that is true.
@Adaptalab0r He is right, something is totally wrong with this LoRA. double_blocks doesn't fix anything; it changed the pose entirely. Shame!
@Adaptalab0r I don't understand how you achieved the samples, though??
@azeli Well, you can try my workflow combined with HunCusVid. For me it works and spits out those samples, of course not 100% of the time, but there weren't many misses. Maybe I'll give this LoRA's dataset a try with the updated software setup. If anyone reading this knows a good musubi_Hunyuan_Lora_Tuner configuration template, please let me know.
Hi, isn't there any possibility of a POV version..?
Hi, nah, right now I'm not sure whether to keep doing Hunyuan Video stuff or switch to WAN. Both are amazing models. Plus, it's sad, but it seems the version of the tool that helped make this LoRA is bugged, and I'd have to start all over again. I'm sorry.
@Adaptalab0r Oh no..! Good luck setting it back up :) I tried both indeed, but Hunyuan is so much more cost-effective; WAN is very greedy on resources and based on a low fps.
@hboxgames132 Yeah, I agree, but with WAN I get 80% good results, while with Hunyuan I get 10% good results. So to me it is more time-effective :-)
@Adaptalab0r Oh I see !