Join the NSFW AI community on Discord! With over 6,000 members, it's the place for developers, creators, and enthusiasts to share tips, discuss the latest models, and collaborate on the future of NSFW AI.
Major Update: The NSFW Wan 1.3B T2V model has been improved!
Based on community feedback, we've released a new experimental set of checkpoints (exp_e1 to exp_e14) to fix the quality degradation and "body horror" issues present in the original model.
This new version was trained with a revised, more stable process, simultaneously on a dataset of 30,000 explicit videos and 20,000 high-quality images, after an initial "priming" epoch of 400k images. The result is dramatically better anatomy and motion.
We strongly recommend all users switch to wan_1.3B_exp_e8.safetensors (v8 experimental) for the best results.
This new model can represent concepts across the full NSFW spectrum, generates quality video on its own, and is now an even more powerful and reliable base for training your own specialized LoRAs.
Find the new models and full details on the Hugging Face page: https://huggingface.co/NSFW-API/NSFW_Wan_1.3b
No complex prompting is needed; just describe what you want. We're excited to see what you create with the new and improved model!
Comments (18)
400k images and 15k videos? Good lord, my guy. I will assume it's all human, though, so I doubt anthro will get any sort of recognition in the model, haha.
You would be surprised!
I would say probably like 70% realism/human, with the rest split between hentai and 3D animation, both containing some anthropomorphic representation.
@BetterPorn Well, I'll definitely be the judge of that once I give it a go myself. Might even do a breakdown for the fellow furs here in Civit if this model does provide (good enough) furries. Won't convince anyone though since everyone's all over I2V n' such. Doesn't hurt to try, at least.
How does this work???
Use in any workflow in place of the normal Wan 1.3b model.
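If you're using ComfyUI with an exported API-format workflow JSON, the swap can even be scripted. A minimal sketch, assuming the loader node class is `UNETLoader` with a `unet_name` input (these names are assumptions; adjust them to whatever loader node your workflow actually uses):

```python
import json

# Checkpoint filename from the Hugging Face repo (the recommended epoch).
NEW_CKPT = "wan_1.3B_exp_e8.safetensors"

def swap_checkpoint(workflow: dict, new_name: str,
                    loader_class: str = "UNETLoader",
                    input_key: str = "unet_name") -> dict:
    """Return a copy of an API-format ComfyUI workflow with every
    matching loader node repointed at `new_name`."""
    patched = json.loads(json.dumps(workflow))  # deep copy via JSON round-trip
    for node in patched.values():
        if node.get("class_type") == loader_class:
            node["inputs"][input_key] = new_name
    return patched

# Example: a stripped-down workflow with a single loader node.
workflow = {
    "1": {"class_type": "UNETLoader",
          "inputs": {"unet_name": "wan2.1_t2v_1.3B_fp16.safetensors"}},
}
patched = swap_checkpoint(workflow, NEW_CKPT)
print(patched["1"]["inputs"]["unet_name"])  # wan_1.3B_exp_e8.safetensors
```

The original workflow dict is left untouched, so you can patch one exported workflow against several experimental epochs and compare.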
But what about the steps?? Thanks for the answer.
Is this the same as the model being trained here: NSFW-API (Andrew Snyder)?
If you are the model creator, I'm surprised not to see any settings, workflows or even a mention of the HuggingFace page where the new epochs can be found.
There's also no mention of the motion LoRA that comes with the model, which is pretty much needed to get decent results.
Hopefully this is not just a plug for your Discord server using someone else's work :)
Hi, yes I am the creator!
I honestly don't have ideal settings for this model yet; most of my work goes toward dataset processing and training, and I do very little inference beyond the basics. I'd trust anyone here to know how to utilize it better than I do, as I hardly use ComfyUI or any advanced techniques.
As for the helper LoRA, it was necessary for the first 10 epochs because the model was trained only on an image dataset, which destroyed its understanding of multi-frame outputs.
Since epoch 11, the image dataset has been replaced with videos, so there's no longer a need for that motion LoRA; it was only a stopgap to make those lower epochs usable.
That said, it might not hurt to keep using it; I haven't tested that. I highly encourage mixing LoRAs on top of this model to get the most out of it. The goal was to make it a more viable base, not necessarily a model that shines by itself.
A quick check shows it looks to be epoch 17 from that repo, and the Discord link matches the one on the HF model card. Given their listed license is "Steal this model!", it seems legit, haha.
@BetterPorn I would probably link to the repo properly, though, just to prevent confusion.
@BetterPorn Appreciate the information and clarification!
I was able to use this with these settings:
-420x960
-15 to 20 steps
-LCM or UniPC
-CausVid and MagCache
This is an incredible proof of concept; all of this trained on the 14B model would be incredible.
Thanks for sharing!
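For anyone scripting generations, the settings above can be collected into a plain config dict. A sketch only; the key names are made up for illustration and don't correspond to any real API:

```python
# Settings reported in the comment above; key names are illustrative,
# not tied to ComfyUI, diffusers, or any other real API.
WAN_13B_SETTINGS = {
    "width": 420,
    "height": 960,
    "steps": (15, 20),                  # reported working step range
    "samplers": ["LCM", "UniPC"],
    "speedups": ["CausVid", "MagCache"],
}

def pick_steps(quality: float, lo: int = 15, hi: int = 20) -> int:
    """Map a 0..1 quality knob onto the reported step range,
    clamping out-of-range inputs."""
    q = min(max(quality, 0.0), 1.0)
    return round(lo + q * (hi - lo))

print(pick_steps(0.0), pick_steps(1.0))  # 15 20
```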
Thanks for posting your settings, how did it work out for you? Would love to see any samples you're able to post
can you share your workflow please?
The model is amazing!
Beware: you need to use the motion helper LoRA (linked on Hugging Face).
@tbsmsks Check out the new experimental version, let me know what you think!
How do I install this model in WanGP? Author of the model, please give an explanation.
I've never used or even heard of WanGP, so here are my 2 cents:
Use ComfyUI or SwarmUI. (Swarm actually runs ComfyUI under the hood, but it also has a much simpler interface for generating images and videos, so it's the best of both worlds: it's easy to start with, and when you get more comfortable you can just download other people's ComfyUI workflows into it.)
To get decent speeds on older hardware (the main reason to use WanGP now, I think?), you can use a bunch of things like CausVid, xformers, torch tweaks, and more. There are many guides on Civitai that can teach you about these; just look for guides on Wan and you should find several for slower machines that go into detail about the things mentioned above.
You can find the UIs at these places:
https://github.com/mcmonkeyprojects/SwarmUI
or
https://github.com/comfyanonymous/ComfyUI
(again, the SwarmUI one is probably the better option if you're fairly new to all of this)
@baconmessenger Thanks for the answer :) WanGP is the bomb these days! No node "spaghetti": https://github.com/deepbeepmeep/Wan2GP