Focuses on the booty jiggle physics of the twerk.
Example prompt
asstwerk, a close-up back view of a woman twerking, detailed jiggle physics on her buttocks. She is kneeling with her back turned towards the viewer. She is wearing a black dress and a black thong. Her buttocks are oiled up, causing the light to reflect and glisten on her skin. She is in a well-lit office. The camera is holding steady.
Trained on RunPod at 270x480 on 8x 3-second video clips. Examples generated at 272x368 on the fast Hunyuan model/LoRA, since my rig isn't the best.
Comments (28)
I haven't visited a single porn site ever since the sex models came out.
The future is now!!! And it's only going to get better!
We need a competitor to Nvidia with 128+ GB of VRAM for GPUs, since they keep drip-feeding a little more VRAM each release. I don't want to be an old man by the time we get 70 GB of VRAM.
fingers crossed.
Next 20 years will be interesting, simulations within simulations
@onbe Inception but you have to wade through the endless amounts of pornstars to get to the person having the dream lol.
Still rocking a 3090, no regrets.. the 5080 will only have 16 GB, how sad. I would have considered upgrading.
I think one of the main reasons AI works so well on Nvidia and not AMD is that most developers/programs use CUDA, which is hard-locked to Nvidia. It's not really a hardware issue: look at AMD, they have cheaper mid-range cards with more VRAM, but the issue is optimization and software development.
@d888 Nvidia is holding the world back
@Keroro_Gunso of course, they want to milk the cow... every. last. drop. lol.
Oh, our aching bank accounts. Imma still gonna nab a 5090 as soon as a decent 3rd-party one appears. (They better not put a 64 GB version out 5 mins later.)
@admiral_underpants I wish I could afford one. There are talks about a potential Titan or Ti release down the line, as the chip of the 5090 is slightly nerfed (as crazy as that sounds). I don't think they would release a non-Titan or workstation card with 64 GB; Nvidia makes too much money with workstation cards, it would hurt their bottom line more than anything.
@d888 The card isn't "slightly nerfed". The typical range for a 80ti card is 90-96% of a full 102 die. The 5090 is 88%. Meaning you are getting something inbetween 80-80ti class silicon (something that is normally $699) for $2000. Vram is famously very cheap so don't even go there.
These cards are a f*cking ripoff and you'd be a fool to support Nvidia by buying them.
@Keroro_Gunso You made my point for me; by 'slightly nerfed' I was trying to make the point that there is still more performance potential in that chip than what the standard 5090 offers. I don't agree with their pricing practices either, but what choice do we have right now? Crickets.
@d888 You could let the cards rot and let the horrible mess of the gpu market collapse but I know you won't.
There is performance left on the table because Nvidia doesn't feel the need to sell more silicon for less money when consumers are willing to accept less. Pair that with a business model built on selling VRAM more than anything, and it's abusive; it is legitimately harming the entire industry.
You remind me of the people who say they only do bumps on special occasions. Sure all the money gets funneled back to the Cartel but your individual contribution can't amount to much... right?
These are organisations that set the terms because they control supply. If we could just up and quit them, they would collapse under their own weight but no... You want what you want. Damn the consequences.
@Keroro_Gunso Why do you have to make things personal? Calling people fools, calling out irrelevant technicalities just to say 'you're wrong', and then indirectly insulting me isn't going to change anything.
@d888 Because I find your cavalier attitude extremely gross. The "what choice do we have" line of thinking is exactly why we are in this mess. This hellish timeline where gpus cost more than cars.
These cards aren't "slightly nerfed". That isn't an irrelevant technicality; that's a fact with a decade of historical precedent. This is some of the greediest behavior I've ever seen from a company. We are all here because we like AI. We want to see it advance and improve. Nvidia is singlehandedly stifling home compute, and people are lauding them for it.
They have no incentive to release a 90ti for the same reasons they didn't release one last gen. There is no high end competition and miners/datacenters will pay more than gamers will.
I'm not trying to personally attack you, but if you (or anyone) care about this hobby remaining somewhat affordable, you wouldn't support the people who are doing everything in their power to make it a toy for the wealthy. We could financially support developers who build libraries for AMD cards. We could contribute to the ZLUDA project now that it's back underway. I just HATE being a part of this space, where Nvidia is the only choice because people don't buy the other cards because no one develops for them, but the only reason no one develops for the other cards is that people don't buy them. We are stuck in this horrible catch-22, and the only way out of it is to just say "Enough!"
I'm sorry if I have offended you. Clearly this is something I am passionate about. I have spent my entire life building computers and unfortunately, I'm pretty sensitive about the state of my hobby.
Have a great rest of your day man. I am sorry for the rant.
@Keroro_Gunso Just because you are passionate about something doesn't justify you being disrespectful to others. Whatever, I'm not going to waste any more of my time on this senseless argument.
@d888 Honestly fair. As irritating as my passion is, it's apathy that's killing the hobby.
Tbf the main issue is there is no one to compete with Nvidia; most of the AI models utilise CUDA atm. When/if a new GPU manufacturer that specialises in stable diffusion enters the game, it will become cheaper again.
@onbe The AI models do not care one bit how or where you run them. The problem is CUDA, which has a long list of lock-in features that combine multiple commands or make small changes to ensure models trained using CUDA will run more slowly on other cards. There is no technical reason for that, but it is something Nvidia has been doing since the x87 PhysX days, and something they have been sued over in the past (https://www.realworldtech.com/physx87/, https://www.reddit.com/r/programming/comments/d87ac/why_physx_sucks_on_the_cpu_because_nvidia_wants/). They cannot compete on a fair footing, so they resort to tricks like this.
@ThalisAI oh damn, that's good info
You can try out the new Ryzen AI Max+ 395; it can assign up to 96 GB of its shared RAM as VRAM to load large models. It doesn't have the raw power of a 5090, but that is a lot of VRAM.
W jiggle physics
It seems as though I made the LoRA strength too low for my initial posts. I must investigate this immediately.. for science.. yes.. for science
@d888 Yeah, I couldn't find whether the OP put recommended strengths. I try to stick somewhere from 0.5-1 when it comes to movement LoRAs; the higher, the more jiggle.
61-73 frames with 0.7-1 strength seems to get the best results. 0.7 allows you to get very NSFW results if you add a second person at lower resolutions 😉
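For anyone scripting batch generations around the settings above, a tiny helper can keep the values in the recommended ranges. This is only an illustrative sketch: the "frame count must be 4k+1" rule is an assumption based on how HunyuanVideo-style models commonly chunk frames temporally, and `snap_frames` / `clamp_strength` are hypothetical helper names, not part of any tool mentioned here.

```python
def snap_frames(n: int, lo: int = 61, hi: int = 73) -> int:
    """Clamp a requested frame count into the 61-73 range suggested in the
    comments, then snap it to the nearest 4k+1 value (an assumption about
    HunyuanVideo-style temporal chunking)."""
    n = max(lo, min(hi, n))
    k = round((n - 1) / 4)  # nearest value of the form 4k + 1
    return 4 * k + 1

def clamp_strength(s: float, lo: float = 0.7, hi: float = 1.0) -> float:
    """Keep LoRA strength inside the 0.7-1.0 band suggested above."""
    return max(lo, min(hi, s))

if __name__ == "__main__":
    print(snap_frames(64))      # 65
    print(snap_frames(100))     # 73
    print(clamp_strength(0.5))  # 0.7
```

Handy when sweeping settings: out-of-range requests get pulled back into the band the commenters found to work, instead of silently producing broken clips.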
the future of booty is now...
Can we have one for Wan?
Yes, for WAN please
I had to come back for a new comment.. Awesome work, nice job
I am getting non-sharp vids