CivArchive
    Video Upscale or Enhancer using Wan FusionX Ingredients - Wan Vid2Vid Upscale
    NSFW

    🎬 Video Upscale & Enhancement Workflow ✨

    Enhance or upscale any video with flexible control over quality and creativity.


    🔧 Usage Tips:

    • 🆙 Just Upscaling?
      Leave the prompt empty, set Denoise: 0.3–0.5
      ➤ Preserves original look, improves clarity.

    • 🎨 Light Style Edit?
      Add a prompt, set Denoise: 0.7–0.8
      ➤ Subtle creative enhancement.

    • ⚠️ High Denoise Warning:
      0.85+ = generates new content from the prompt.
      No prompt? Expect random results 🎲
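The denoise ranges above can be read as the fraction of the sampling schedule that actually gets re-run. Here is a minimal sketch of that mapping, assuming the sampler converts denoise to a start step the way most diffusion vid2vid samplers do; the function name is illustrative, not this workflow's internals:

```python
def v2v_step_schedule(total_steps: int, denoise: float):
    """Map a vid2vid denoise value to the sampler steps actually run.

    Low denoise (0.3-0.5) re-runs only the tail of the schedule, so the
    input video's structure survives; denoise near 1.0 re-runs almost
    everything, i.e. generates new content from the prompt.
    """
    steps_to_run = max(1, round(total_steps * denoise))
    start_step = total_steps - steps_to_run
    return start_step, steps_to_run

# With this workflow's 4 steps: denoise 0.5 re-runs only the last
# 2 steps, while denoise 1.0 re-runs all 4 from pure noise.
```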


    🎯 Great For:

    • 📼 Low-res or blurry videos

    • 🖼️ CGI/3D Renders (e.g., Blender)

    • 🔁 Breathing life into old clips

    • 🖼️ Upscale and/or restyle old AI videos


    🔧 Notes: If you're processing more than 121 frames and you hit an OOM error, or if you're low on VRAM, you will want to use batch processing or context options. Enable block swapping if you need to.
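    Context options work by splitting a long clip into overlapping windows so the sampler never holds all frames at once. A rough sketch of the idea; the window/overlap numbers here are illustrative, not the node's defaults:

```python
def context_windows(num_frames: int, window: int = 81, overlap: int = 16):
    """Split `num_frames` into overlapping (start, end) windows.

    Each window is sampled separately, so peak VRAM scales with
    `window` instead of full clip length; the overlap region is
    blended to hide seams between windows.
    """
    if num_frames <= window:
        return [(0, num_frames)]
    stride = window - overlap
    windows = []
    start = 0
    while start + window < num_frames:
        windows.append((start, start + window))
        start += stride
    windows.append((num_frames - window, num_frames))  # final window, flush to the end
    return windows

# A 121-frame clip with 81-frame windows needs only two sampler passes.
```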


    ☕ Like what I do? Support me here: Buy Me A Coffee 💜
    Every coffee helps fuel more free LoRAs & workflows!


    If you need help please reach out.

    📢 Join The Community!

    👉 Click here to join the Discord!


    This workflow uses the LoRAs that are part of FusionX, but CausVid has been replaced with a new LoRA that allows you to use 4 steps. Going above 8 steps can cause an overcooked look, so beware; 4 steps works very well.


    Comments (74)

    chrisbraeuer41172035Jun 25, 2025
    CivitAI

    Does it change the resolution? I need to play around with this...

    vrgamedevgirl
    Author
    Jun 25, 2025

    You pick the resolution. Default is 1024x576; I have gone up to 1920x1080 but you need full block swap.

    Pirate_rusJul 1, 2025

    @vrgamedevgirl What GPU did you use? Trying 1280x720 on a 4080 16GB doesn't work, not enough VRAM.

    R3G4LJul 13, 2025

    @Pirate_rus Using a 4070 and 96GB here. It works with the 14B fp8 model at 1280x720, 81 frames, and 10 LoRAs + upscale to 2K in the same run (the upscale eats up the rest of the RAM, causing the PC to freeze for 20 sec). Took 4:34 at 1 CFG, 5 steps, lcm/beta scheduler (not including tiled VAE decoder time). If I make a prompt change I have to unload everything and then run the workflow, or it will OOM on the text encoder. The tiled decoder at the end is the slowest part. I don't tile the VAE on the ENCODER (unless I'm using more than 81 frames with the context node); that saves time and doesn't cause OOM. Full 40 block swap. Offload all LoRA models etc., anything that can be offloaded. Slap some VRAM nodes on here and there where they have helped on other workflows.

    I also attach the Wan Enhance Video node to the feta_args spot on the sampler for better video accuracy.

    lug_LJun 25, 2025
    CivitAI

    Hi, is there any way I could use it with my humble RTX 3080 10GB? When it gets to the WanVideo Sampler it throws a memory error. Is there anything I could reduce to make it work, if that's even possible? 😶

    skyrimer3dJun 25, 2025

    Have you tried increasing the blocks_to_swap? Try setting it to 30 or 40.

    lug_LJun 25, 2025

    @skyrimer3d No, it doesn't work either, not even with that. In fact, I still get the 'CUDA error: out of memory'.

    vrgamedevgirl
    Author
    Jun 25, 2025

    @lug_L What res are you using? You may be limited on resolution with your GPU.

    lug_LJun 26, 2025

    @vrgamedevgirl 480x848, the same resolution I use in the other workflows.

    vrgamedevgirl
    Author
    Jun 26, 2025

    Even with block swap at 40?

    lug_LJun 26, 2025

    @vrgamedevgirl As I mentioned above, even with 40 I still get the 'CUDA error: out of memory'😭. Is it possible to use a GGUF node?

    vrgamedevgirl
    Author
    Jun 26, 2025· 3 reactions

    @lug_L I don't have a workflow yet for native/gguf. I can try putting something together as soon as I have time though.

    lug_LJun 26, 2025

    @vrgamedevgirl Thank you.

    skyrimer3dJun 25, 2025
    CivitAI

    Still trying to make this work, but download link for Wan2.1-Fun-14B-InP-MPS_reward_lora_comfy.safetensors is not correct in the description.

    vrgamedevgirl
    Author
    Jun 25, 2025

    Where does it bring you? Maybe they changed the URL.

    skyrimer3dJun 26, 2025

    @vrgamedevgirl it takes me to https://huggingface.co/alibaba-pai/Wan2.1-Fun-Reward-LoRAs/blob/main/Wan2.1-Fun-14B-InP-MPS.safetensors instead of the link for "Wan2.1-Fun-14B-InP-MPS_reward_lora_comfy.safetensors".

    vrgamedevgirl
    Author
    Jun 26, 2025· 1 reaction

    @skyrimer3d That is the correct location... just download the model from that page.

    skyrimer3dJun 25, 2025
    CivitAI

    Getting two errors, the first with one of the LoRAs:

    Loading LoRA: Wan21_T2V_14B_MoviiGen_lora_rank32_fp16 with strength: 0.4000000000000001

    Loading LoRA: Wan2 with strength: 0.4000000000000001

    lora key not loaded: diffusion_model.blocks.0.cross_attn.k_img.alpha

    lora key not loaded: diffusion_model.blocks.0.cross_attn.k_img.lora_A.weight

    lora key not loaded: diffusion_model.blocks.0.cross_attn.k_img.lora_B.weight

    lora key not loaded: diffusion_model.blocks.0.cross_attn.v_img.alpha

    lora key not loaded: diffusion_model.blocks.0.cross_attn.v_img.lora_A.weight

    (...)

    Also this: !!! Exception during processing !!! cannot access local variable 'partial_add_cond' where it is not associated with a value

    vrgamedevgirl
    Author
    Jun 25, 2025

    The key errors can be ignored. Something is wrong with your settings; I would need to see them. Reach out on Discord.

    skyrimer3dJun 26, 2025

    @vrgamedevgirl Looks like reducing the length of the sample video, lowering the resolution on "Resize Image v2", plus 20 block swap did the trick. Very impressive results. In my case it changed the face a bit too much for my liking, but it's still one of the most impressive workflows I've seen for taking a meh, sub-par, plastic AI vid and giving it a huge semi-realistic quality bump. Amazing job. Now I'll try some of the tips in the description to preserve a bit more fidelity with the original vid.

    vrgamedevgirl
    Author
    Jun 26, 2025

    @skyrimer3d what denoise did you use? and did you use a prompt?

    skyrimer3dJun 26, 2025

    @vrgamedevgirl To be honest I gave up; I tried to use a vid that was close to 200 frames and it OOMed. In the end I tried that same vid in Topaz Video at 2x res using Starlight Mini and the results were fantastic, clearing all face distortions, which was what I intended to achieve with your workflow, and it only took 8 minutes. So until I get a hardware upgrade I'll use that instead for long vids.

    HighlandriseJun 25, 2025· 4 reactions
    CivitAI

    Great workflow, added this to my favourites now. Managed to fix/improve several videos. One of them I uploaded here, twice, since I ran it through the workflow several times, and each time it got better: from super blurry and washed out to crisp and sharp.

    I created that video a long time ago but didn't upload it because of the bad video quality. Thanks to this workflow I was able to fix it, and now I've uploaded it.

    I used lcm/beta as the scheduler with 0.6 denoise. I highly recommend using the original prompt that was used to create the video that needs to be fixed, AND I also recommend translating that prompt to Chinese first; it increases prompt adherence DRAMATICALLY (makes sense, both Wan and Hunyuan are Chinese models). But don't just use the first translation. What I did was:

    First translate your English prompt to Chinese (I used Google Translate), then translate the Chinese translation back to English and look for errors. For example, translating a HUGE or LARGE woman as "plump" will most likely result in a fat person, or sometimes when the original text says "woman sits on chair" the translation might end up as "woman sits under chair". If that's the case, grab the original English prompt, rephrase the badly translated parts, then translate to Chinese again and back to English. Once the translation is good, use the Chinese prompt and you will see great results. I'm doing that with all of my prompts now and the difference is night and day.

    Anyway, that was my 2 cents. Hope it helps, and a big thanks for this workflow again!
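    The round-trip check described above is easy to automate. A sketch, where `translate(text, src, dst)` is a placeholder for whatever translator you use (Google Translate, ChatGPT, ...), not a real API:

```python
def round_trip(prompt_en: str, translate) -> tuple[str, str]:
    """Translate an English prompt to Chinese and back again.

    Returns (chinese_prompt, back_translation); eyeball the
    back-translation against the original for meaning drift
    ("sits on chair" vs "sits under chair") before using the
    Chinese prompt in the workflow.
    """
    zh = translate(prompt_en, "en", "zh")
    back = translate(zh, "zh", "en")
    return zh, back

# Usage: zh, back = round_trip("a woman sits on a chair", my_translator)
```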

    vrgamedevgirl
    Author
    Jun 26, 2025

    Awesome!!! Glad it worked. Would love to see the video! Also, have you tried using ChatGPT to translate? I think it does a better job. Also, not using a prompt works as well and keeps the video very close to the original. Not sure if you tried that or not.

    HighlandriseJun 26, 2025· 2 reactions

    @vrgamedevgirl For all "not too spicy" stuff I use ChatGPT, for everything else Google Translate. Regarding the prompt: if the source video is somewhat "OK" and not too blurry/washed out, then you can get away with no prompt. But if the source video is really low quality, like the one I uploaded here, then the results will be subpar; some "weird things" will happen in the video since the video model/workflow has to "guess" the content of that blurry mess. With a prompt, on the other hand, I had no issues. Like I said, this is for really low-quality source material; for everything else, no prompt also did the job.

    Crucial was also running every subsequent improved version through the workflow again a few times; that way I went from the blurry mess seen in one of the videos to the sharp and crisp one. Thanks again for this. Most of the videos that I would normally delete because of issues like this, I can now just fix. Very well done 👍👍👍👍

    skyrimer3dJun 26, 2025· 1 reaction

    That's a very interesting take on prompts with these Chinese models. If I find a vid doing weird stuff (and it happens frequently lol) I'll take this advice.

    HighlandriseJun 26, 2025

    @skyrimer3d Enjoy your improved Videos 😃👍👍

    vAnN47Aug 1, 2025· 1 reaction

    Can you share your workflow? My output is not like the original figure, and I'm also getting blurry output :(

    My settings are exactly as the workflow came with :(

    dasstab736Jun 30, 2025
    CivitAI

    Hello, Can I use 1.3B model instead of 14B?

    vrgamedevgirl
    Author
    Jul 1, 2025

    You can, but then you will have to bypass all the LoRAs, and the results won't be very good, as 1.3B was trained on 480p videos. But go ahead and give it a shot.

    ArtDesignAwesomeJul 1, 2025· 1 reaction
    CivitAI

    how do I fix this?


    RuntimeError: The size of tensor a (64) must match the size of tensor b (240) at non-singleton dimension 3

    vrgamedevgirl
    Author
    Jul 1, 2025

    I would need more details... please join the Discord server and ping me.

    MarceloLopesOct 8, 2025
    The numbers in the resolution (Resize Image node) must be multiples of 8/16/32.

    Pirate_rusJul 1, 2025
    CivitAI

    A 4080 16GB doesn't work. I get an error while swapping blocks, at 18/40.

    PiratebayxJul 3, 2025· 1 reaction
    CivitAI

    Hey team, is anyone getting a message like this: "The size of tensor a (63) must match the size of tensor b (62) at non-singleton dimension 3"?

    It seems the original dimensions will generate, but once I change the height or width it comes up with this message.

    Thanks in Advance :)

    MarceloLopesOct 8, 2025
    The numbers in the resolution (Resize Image node) must be multiples of 8/16/32.

    mrw21jJul 5, 2025
    CivitAI

    Is there a way to do this with Native nodes and not Kijai wrapper?

    vrgamedevgirl
    Author
    Jul 5, 2025· 1 reaction

    Yes, I just have not had time to create that workflow. I have some other projects I'm working on. You could try to make one based off this, though.

    ClocksmithJul 6, 2025
    CivitAI

    Definitely an improvement.

    I think denoise 0.4 while staying at the same resolution as the input is giving me the best results in terms of clarity and consistency. And then I can upscale in Topaz Video AI.

    I notice a drastic drop in likeness to the input video and a jump in sharpness at denoise 0.5. My outputs for 0.49 and 0.5 usually differ significantly more than, e.g., 0.49 and 0.4.

    vrgamedevgirl
    Author
    Jul 6, 2025

    I wonder if running a second pass at a higher res would work to upscale while keeping likeness?

    Defect450Jul 7, 2025· 1 reaction
    CivitAI

    Oh man how I wish this had a GGUF workflow as well, I keep getting OOM errors regardless of using the various memory saving techniques even with my 4090!

    This workflow has the potential to be game changing for remastering some old space battle renders I made in Blender a few years ago, so I'll keep my fingers crossed for an update eventually

    vrgamedevgirl
    Author
    Jul 7, 2025· 1 reaction

    Oh yeah, this can be used with GGUF; I just need to move it to native. If you know Comfy well you may be able to just do it: open up one of the other GGUF text-to-video workflows, and use the Wan GGUF, NOT FusionX. Then look at the settings on this one and try to re-create it until I have time to put one together. If you join the Discord server I may be able to PM you one quickly before I'm able to fine-tune and publish one here.


    Rich245Jul 7, 2025
    CivitAI

    I would love to have it working, as it seems very powerful - but I keep getting OOM on a 3090.

    I even tried to load the text encoder on a 2nd graphics card but nothing I do works.

    vrgamedevgirl
    Author
    Jul 7, 2025

    This should work on that card. Did you enable block swapping? I would set it to 30; it won't slow things down. I accidentally had it on at 40 once and it did not slow things down at all (I don't need it at all). It will just prevent OOM.

    Rich245Jul 8, 2025

    I did test on 20, 30 and 40. No love.

    I'll test different settings. Thanks for your response.

    zoom83Jul 22, 2025

    Rich245 Same for me

    vrgamedevgirl
    Author
    Jul 23, 2025

    zoom83 what res? You may need to reduce the res. Also, how long is the input video?

    zoom83Jul 23, 2025

    vrgamedevgirl I tested with only a 1-frame load cap and 320x480...

    vrgamedevgirl
    Author
    Jul 23, 2025

    zoom83 and that didn't work? Sounds like a bug. Please join discord

    zoom83Jul 23, 2025

    vrgamedevgirl Maybe I'll find time to rebuild your workflow on top of mine, which works. I joined the Discord.

    vrgamedevgirl
    Author
    Jul 25, 2025

    zoom83 can you ping me in the support channel? vrgamedevgirl

    eriocaJul 12, 2025
    CivitAI

    Thanks for the workflow! This got me thinking: is it possible to do inpainting + upscale v2v using your workflow?

    For example: we generated a video, but the face is distorted/melted due to the low resolution, while the rest of the video is fine. Using the v2v workflow, could we use a lower denoise value to upscale only the face part, like how we do inpainting for images?

    Not sure if that's possible with a t2v model, or whether we should use VACE.

    vrgamedevgirl
    Author
    Jul 12, 2025

    I have not tested this, but you could use a mask? So mask everything but the face?

    qazxswJul 15, 2025
    CivitAI

    is it possible to add teacache/sageattention to the workflow to speed up the process?

    vrgamedevgirl
    Author
    Jul 16, 2025· 1 reaction

    It already has sage attention. TeaCache skips steps, so it won't work since we already use low steps. That was for when we had to use 20 or 30 steps with the base model.

    qazxswJul 16, 2025

    @vrgamedevgirl thank you

    umutgklpJul 21, 2025
    CivitAI

    I'm getting a "Can't import SageAttention: No module named 'sageattention'" error... is anyone else getting the same? I'm using ComfyUI 0.3.43 with a 4090. Any advice?

    syphinxero3411Aug 6, 2025· 2 reactions

    Unless you have installed Sage Attention, bypass it.

    umutgklpAug 25, 2025

    @syphinxero3411 thank you. I'll try it.

    confernoSep 19, 2025

    Ask ChatGPT to help with installation of the missing components.

    RandmeistJul 22, 2025· 1 reaction
    CivitAI

    mat1 and mat2 errors as well as constant OOMs on 24GB VRAM and 32GB RAM regardless of resolution, that's all I can say.

    greebaaaJul 25, 2025
    CivitAI

    That's crazy, vrgamedevgirl! But I lost the face of my actor. How do you keep it? Should I use a LoRA of his head?

    vrgamedevgirl
    Author
    Jul 25, 2025

    What did you set the denoise to? Did you use a prompt? I would need more details.

    lamentcounterbalanceAug 1, 2025
    CivitAI

    This looks incredible. Will you be adding support for Wan 2.2?

    mramer723Aug 2, 2025

    I think Wan 2.2 has backward compatibility, so it should work. Please try and confirm.

    KarrasouAug 25, 2025
    CivitAI

    Hello! I've tried your workflow and it seems good, but I fear I'm not understanding it well. It seems 4 steps should be enough, as you said, but if I leave it at 4, the output is just a draft of the original. Is that normal? I have to set it to at least 10 to get something close to the input movie.

    I've loaded all the models and LoRAs linked. I've not changed any default parameters except the height/width/length for the input movie, and the denoise to 0.4.

    Did I miss something?

    Thx for your work

    MarceloLopesOct 8, 2025

    same

    billysaltzman625Sep 18, 2025· 3 reactions
    CivitAI

    LOL, this turned my character from a Mongolian warrior in the snowy mountains into a skier with a ski helmet on, haha! I couldn't stop laughing. Is there some setting to make it not interpolate so wildly?

    SedidNeOct 3, 2025
    CivitAI

    Hi,
    Can I use this workflow for anime?
    If so, which lora should I deactivate please?

    MarceloLopesOct 8, 2025· 7 reactions
    CivitAI

    For those who are having problems with "The size of tensor a (xx) must match the size of tensor b (xx) at non-singleton dimension":
    the numbers in the resolution must be multiples of 8/16/32.
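    This constraint comes from the model's spatial downsampling, so any target size can be snapped to a safe value before it reaches the resize node. A small sketch; the multiple of 16 used here is an assumption, so match whatever your model actually requires:

```python
def snap_resolution(width: int, height: int, multiple: int = 16):
    """Round a target resolution to the nearest safe multiple.

    Dimensions that are not multiples of the model's downsampling
    factor produce mismatched latent shapes, which surfaces as the
    "size of tensor a must match size of tensor b" RuntimeError.
    """
    def snap(v: int) -> int:
        return max(multiple, round(v / multiple) * multiple)
    return snap(width), snap(height)

# snap_resolution(1023, 575) -> (1024, 576)
```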

    NemesisEnforcerNov 25, 2025· 1 reaction

    Thank you for posting, I had this error and was thinking it was an incorrect Node version or something. Much appreciated.

    MrSmith2025Oct 13, 2025· 3 reactions
    CivitAI

    Seems there is no way to get it running on a 16GB VRAM GPU. I'm only getting OOM errors, even with the "WanVideo Context Options" node activated as suggested. So it's completely useless.

    vrgamedevgirl
    Author
    Apr 6, 2026

    This has nothing to do with the workflow... it's the model, which I did not make. Maybe try meta batch.

    Workflows
    Wan Video 14B t2v

    Details

    Downloads
    5,747
    Platform
    CivitAI
    Platform Status
    Available
    Created
    6/25/2025
    Updated
    5/15/2026
    Deleted
    -

    Files

    videoUpscaleOrEnhancer_wanVid2vidUpscale.zip