CivArchive
    Wan2.2 animate Change character or background (long video) - v1.0
    NSFW

    This workflow can do two things: it can make a character from a reference image copy the pose from a reference video (while keeping the original background), and it can swap that character into the reference video entirely, making them perform the same actions. Building on this idea, I also tested swapping just the character's head into the reference video, but I'm still debugging some issues with the results. If I find a good solution, I'll release that workflow too. For important notes on usage, please see the "Workflow Testing and Usage Instructions" below.

    💻I've already set up an online ➡️ workflow for you so you can quickly try out the effect.

    🎁Bonus: If you're signing up for the first time, you can get 1,000 free RH Coins by using my link and the invite code ➡️rh-v1182. Plus, you'll get another 100 RH Coins for logging in daily.

    🚀Workflow Testing and Usage Instructions:

    1. First, a big thank you to eddy for open-sourcing several LoRAs. Combined with KJ's Wan2.2 animate workflow, these LoRAs perform well for both character consistency and video motion. Note in particular that "lightx2v_elite_it2v_animate_face" already has the lightx2v acceleration built in, so you don't need any other speed-up LoRAs. This LoRA also helps maintain the reference character's consistency, so I recommend a strength between 1.0 and 1.2. For "WAN22_MoCap_fullbodyCOPY ED": if you need high consistency with the character in the reference image, I suggest a strength of 0.35-0.5; if you want to lean more toward the character in the reference video, a strength of 0.7-1.0 is better.

    2. Because this workflow loads both the Wan2.2 model and SAMSegment, it requires a lot of VRAM, which is why I've enabled WanVideo Block Swap by default. In my testing, the entire workflow can run on a 24 GB GPU, but I recommend the 48 GB GPUs on RunningHub for a much smoother experience.

    3. For different kinds of reference videos, I've preset two masking methods in the workflow; you only need to choose one of them. If your reference video has a single character, use the "Single-character usage" group. If it has multiple characters and you only want to mask certain areas, use the "Multi-role usage" group.

    4. This workflow can generate longer-than-usual videos, but I don't recommend using it for anything over 30 seconds. In my tests of a 20-second video, the character's consistency starts to decay around the 10-second mark, and the color tone also shifts slightly. I believe this is caused by the influence of the reference video during the context-looping process. So keep the videos you generate with this workflow to around 20 seconds and no longer than 30; going past 30 seconds can lead to unpredictable degradation or a serious loss of consistency.

    5. For more detailed instructions, please see the notes inside the workflow.


    Comments (12)

    debelllg · Oct 1, 2025 · 4 reactions

    Looks great!

    Where do you get these nodes?:

    OnnxDetectionModelLoader

    PoseAndFaceDetection

    DrawViTPose

    Gooodis · Oct 1, 2025

    Yeah I second that

    emilkwok
    Author
    Oct 1, 2025 · 2 reactions

    Hey there! You'll need to update Kijai's nodes to the latest version, then download and install ComfyUI-WanAnimatePreprocess from this link (https://github.com/kijai/ComfyUI-WanAnimatePreprocess). Once you do that, you'll have the "OnnxDetectionModelLoader" and "PoseAndFaceDetection" nodes.

    muhin976849 · Oct 1, 2025

    @emilkwok I think we need something more detailed) nothing ever showed up for me(((((

    emilkwok
    Author
    Oct 1, 2025

    @muhin976849 Which part exactly do you mean by "more detailed"?

    muhin976849 · Oct 2, 2025

    @emilkwok Yes, I've been poking around for a while, and I think I've figured it out. I don't know why, but sometimes you need to upgrade the "beautiful face" node yourself... without it, the Kijai nodes throw an error...

    BMTZZ · Oct 17, 2025

    @emilkwok Can I ask how? I don't see any nodes at the link you posted, just the detection models. My Kijai nodes are already on nightly and I still can't get the nodes to show up.

    emilkwok
    Author
    Oct 18, 2025

    @BMTZZ You might want to try reinstalling the "WanAnimatePreprocess" plugin by using git clone from KJ's webpage. After a complete installation, it should include the OnnxDetectionModelLoader and PoseAndFaceDetection nodes.
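For reference, the manual install described above generally looks like the following. The `ComfyUI/custom_nodes` path is an assumption — substitute the location of your own ComfyUI install:

```shell
# Clone the preprocess nodes into ComfyUI's custom_nodes folder
# (path is an assumption -- adjust to your actual ComfyUI location).
cd ComfyUI/custom_nodes
git clone https://github.com/kijai/ComfyUI-WanAnimatePreprocess
# Install Python dependencies if the repo ships a requirements.txt,
# then restart ComfyUI; the OnnxDetectionModelLoader and
# PoseAndFaceDetection nodes should then appear in the node search.
```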

    bsbo327 · Oct 1, 2025

    Thanks. How do you increase quality? The videos seem slightly blurry compared to a full Wan2.2 video.

    emilkwok
    Author
    Oct 1, 2025

    Yeah, exactly. The blurriness is especially noticeable when you transfer a character from a reference image into a reference video. I'm currently testing a couple of ways to fix it: either upscale the output and enhance the details to repair the blurry areas, or do a second pass with the Wan2.2 low-noise model at a low denoise strength. So far, I haven't found a better or faster method. If you're willing, I'd love to hear about the fix you came up with.

    bsbo327 · Oct 1, 2025

    @emilkwok I experimented with different schedulers and higher steps, and there seems to be less blurriness compared to the scheduler set in the workflow. Can you confirm? I could be wrong, though; it may depend on other factors such as the video and reference image.

    emilkwok
    Author
    Oct 2, 2025 · 1 reaction

    @bsbo327 Keeping the other parameters the same, I also tried different schedulers, but the blurriness was still there. Increasing the sampling steps does bring out better detail, but it doesn't seem to eliminate that blurry feeling. The trade-off between the time high sampling steps take and the image quality is also something I'm weighing (though, of course, if you don't care about time, you can definitely improve quality by cranking up the steps). I think this problem has to do with the resolution of Wan2.2 and the specific combination of LoRAs being used. The default resolution I'm using is 480x832, whereas Wan2.2's standard resolution is around 720p, where quality is much better. Because of this, I've tested two different upscaling methods on top of V1.0, and they've shown a positive improvement on the blurriness. You can try out this change in my upcoming V1.1 workflow.

    Workflows
    Wan Video 2.2 I2V-A14B

    Details

    Downloads
    421
    Platform
    CivitAI
    Platform Status
    Available
    Created
    10/1/2025
    Updated
    5/12/2026
    Deleted
    -

    Files

    wan22AnimateChange_v10.zip

    Mirrors

    CivitAI (1 mirror)