I started writing an article about how best to use older Wan 2.1 LoRAs in the new Wan 2.2 workflows, but every time I wrote anything it got so complicated that the article was nearly unreadable. In the end, I decided it was much simpler to just SHOW rather than EXPLAIN, so I made this workflow. The rough draft of this workflow has been my daily driver for Wan creation for a little while now, so it's been proven to work.
1.2 UPDATE: Version 1.2 is released. It's a very minor bugfix version that replaces a LogicUtils node from the old workflow. A few users have mentioned that they are unable to install the LogicUtils custom node package for some reason, so I swapped that node out for a similar one. This update should solve that installation problem.
UPDATE: I've published version 1.1 of this workflow. It's essentially the same but adds some extra guidance in notes for Wan 2.2 video creation and one new optional feature for long video generation. See the version notes on the right side of this page for more details.
This workflow is the result of a great deal of experimentation with running Wan 2.1 LoRAs inside a Wan 2.2 workflow for maximum effect and accuracy. Older LoRAs work best in the Low Noise model of Wan 2.2, which is the closest to the older Wan 2.1 model. However, the High Noise model is required to give Wan 2.2 videos the enhanced motion, camera control, and prompt adherence that are so much improved over Wan 2.1. This workflow is a compromise between those two competing interests and combines the best of both worlds. It provides all those Wan 2.2 advantages but also preserves the look and feel of Wan 2.1 LoRAs, particularly character and clothing models, with high accuracy.
This workflow is customized to work very well with the Wan 2.1 models created by darkroast175696 on Civitai but should also work for any other Wan 2.1 models.
If your video doesn't use any LoRAs at all, or only uses Wan 2.2 LoRAs, I recommend a 3-stage workflow with a 2-step introduction that uses no LoRAs or acceleration at all, followed by 6 to 10 steps of Lightning-enhanced High and Low Noise stages (3 to 5 steps of each).
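To make that 3-stage step split concrete, here's a small sketch. The `plan_steps` helper and the stage names are my own illustration, not nodes from the actual workflow; it just divides the run the way described above, assuming an even Lightning step count:

```python
# Hypothetical sketch of the 3-stage step split described above.
# "plan_steps" and the stage labels are illustrative, not part of
# the real ComfyUI workflow.

def plan_steps(lightning_steps: int) -> dict:
    """Split a run into three stages: a 2-step LoRA-free intro,
    then Lightning-accelerated High and Low Noise stages that
    divide the remaining steps evenly (3 to 5 each)."""
    if lightning_steps < 6 or lightning_steps > 10 or lightning_steps % 2:
        raise ValueError("expected an even step count between 6 and 10")
    half = lightning_steps // 2
    return {
        "intro (no LoRA, no acceleration)": 2,
        "high noise + Lightning": half,
        "low noise + Lightning": half,
    }
```

So an 8-step Lightning run becomes 2 intro steps plus 4 High Noise and 4 Low Noise steps.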
The workflow includes recommended settings for all of the Wan models I've published. Those settings may not always be the best for whatever video you're making, but they will make a good starting point and you can adjust from there. Some other features that are supported and optional:
smoothing/frame interpolation
dynamic prompts for wildcards and random generation
caption text overlay on final video output
watermark image added to final video output
I hope you can make good use of this workflow and make tons of awesome videos to share with the rest of us.
Description
Replaced the LogicUtils math node with an equivalent one from ComfyRoll. Also added a workaround for a Dynamic Prompts bug that prevents using the # symbol in a prompt: # is a comment character in Dynamic Prompts and there is no way to escape it, so it can't appear literally. See the notes about the positive prompt in the workflow for how to use the workaround.
Comments (7)
Thank you for the fabulous work. I've been painfully making my way toward Wan 2.2 myself, but there's one LoRA left that keeps many of my renders pinned to Wan 2.1... the fabulous 'Masturbation Cumshot' LoRA by 'definitelynotadog'.
Q: Do you think this amazing Lora would work as well in this Wan 2.2 workflow?
@BigJuggsAI Good question, I don't know. I'll try a quick image-to-video modification of this workflow and see if it plays well with a 2.1 I2V LoRA. If so, you should be all set. The LoRA you're talking about looks like it's pretty much exclusively I2V. I know I2V LoRAs often work OK in text-to-video workflows, but for something like this, where the anatomy is completely foreign to Wan's regular video generation, I think you'd be much better off sticking with I2V if that's how the LoRA was made. I'll run a quick test and get back to you.
Thanks so much for spending time on this... Yes all my work is on I2V. I know lots of people using that Lora that are waiting eagerly for a 2.2 release but so far no news so we're all hunting for solutions like this that can bridge the gap.
For your testing: I've been able to get fabulous results with it in W21 by 1) keeping the penis upright and 2) keeping videos under 5 seconds long, as demonstrated here. (If I can duplicate this in W22 with your solution, that would be amazing!)
https://www.deviantart.com/stash/0b9zu8b3t50
https://www.deviantart.com/stash/0jijpw09wby
@BigJuggsAI I modified the text-to-video workflow to do image-to-video as a quick test and it seems to work pretty well. I posted a video using a 2.1 image-to-video lora as an example so you can pull the workflow from there. It'll probably be a little bit before the video is visible, but it's here:
https://civitai.com/posts/23813088
I don't really do futa stuff myself, but I downloaded someone else's image and tried it out with the lora you mentioned. I had decent results - I think? - with a strength of about 1.5 in both high and low noise. But you will have to judge what looks good and what doesn't, and play with the strengths yourself. You can also try with and without the lightning lora in the high noise pipeline. Just remember to switch the high noise cfg to 1.0 when you use lightning, and a higher number like 4 or 5 when you're not using lightning.
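The rule of thumb in that reply can be sketched as a tiny helper. This is illustrative only: the function name is mine, and the numbers are just the starting points suggested in this thread, not fixed defaults:

```python
# Illustrative helper capturing the settings suggested above:
# cfg 1.0 with Lightning, ~4-5 without, and LoRA strength ~1.5
# in both the High and Low Noise pipelines. Tune from here.

def suggested_settings(use_lightning: bool) -> dict:
    return {
        "high_noise_cfg": 1.0 if use_lightning else 4.0,
        "lora_strength_high": 1.5,
        "lora_strength_low": 1.5,
    }
```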
If this workflow looks useful to people, I might publish it as an alternate version to the T2V one that's already here.
@darkroast175696 Wow, thank you SO much for this! I'll test it out and let you know how it compares with W21!
I noticed you are not using the WanVideo Sampler; you have the classic KSampler. Is there a specific reason? As far as I know, the WanVideo Sampler is better for Wan 2.2.
No special reason, just habit. I should give the other one a try. I didn't know the results were different.

