
    Yet Another Workflow : easy t2v + i2v

I've aimed at a user-friendly UI for ComfyUI. There's a balance between complexity and ease of use, and this workflow aims to give you useful controls with clear guidance on what you need to care about. I hope it helps anyone struggling with quality and the general UI-isms of ComfyUI. I've taken the time to color code everything and add lots of notes. Please read the notes; I've tried to make them useful!

This is the workflow I use; it's not aimed at a particular skill level. It's designed to be easy to use and adjust, with some UI concessions and labeling so you can pilot it with less experience in a way that is more sophisticated than the official example workflows, which can be easy to break.

    The primary goal with this workflow is to give you a strong foundational place to generate either text to video (T2V) or image to video (I2V) outputs without having to fuss too much. Lightx2\ning is on by default. (It's an accelerator that trades variety for generation speed.)

    The green controls are the stuff you generally want to mess with.

    The secondary goal here is to provide a consistent interface to interact with different samplers.

    Versions

The "main" workflows (the ones without parenthetical version labels) support the basic KSampler node, but also include a toggle to enable the ClownsharKSampler and the TripleKSampler once you have some experience and want to experiment.

I generally recommend the main workflow. It's my daily driver. It offers the most control with the least fuss. Each version has its place, though!

Extremely new to Comfy and Wan? Consider using the MoE version. It removes a few nodes and options while providing mostly the same interface with slightly less visual complexity to help you get acclimated. Once you get comfortable with this, step up to the main version for more options.

Want better edge-case prompt adherence? I've created a version of the workflow that supports the WanVideo nodes. I don't recommend using this one until you're more comfortable with the standard version, as it has increased visual complexity. These nodes work completely differently from the other systems, and I hope to make them more accessible by providing you with the same interface to engage with them. WanVideo tends to produce completely different results, so it can be another interesting thing to explore.

Want more fluid motion and jiggle? I've also created a Smooth Mix version to support the Smooth Mix checkpoint. What is it? Like Stable Diffusion checkpoints, it merges many LoRAs into the base Wan 2.2 model to create a more opinionated model for making videos. This version follows the recommendations in the official Smooth Mix workflow, while offering you the improved YAW UI experience. I like this checkpoint for its detail and motion, but it is also more prone to motion artifacts. It also has some built-in support for anime styles. A self-forcing LoRA (Lightx2\ning) is baked in, so the sampler options are kept simple for this one. Please note that, due to the additional 80 GB of size, my RunPod template will only include this as an optional download. Also check out the LoRA version, which I find much more useful since you can adjust the strength of the effect.

Expect an update of the RunPod template to include the new workflows soon after.

As of v0.38, I'm revising this article, so patch notes have been removed for clarity. The changes are noted in the file details section (and in the templates themselves).

    Like it?

    Give it a like! Tag it as a Resource when you use it! Support on Patreon or a tip on Ko-fi are also welcome. Yellow Buzz will go towards promoting awareness here on Civit.

    Need help?

I like helping people get going with this stuff, so message me if you want help. If you want extended one-on-one help, there's an option on the Patreon. I'm happy to walk you through the details, answer your questions, and share some extra tips, tricks, and scripts. I've done this for a few folks, and I'll save you money and headaches.

I've also written an article here on getting it going with my RunPod template. The template vastly expedites and simplifies getting things up and running.

    General Advice

    • Make lots of videos! Post your videos! Don't fuss with the tech! Be smart about how you spend your time with this stuff. It's easy to burn out if you spend more time trying to get things to work than making videos you like. That's really why I'm posting this.

    • Use RunPod. Use the RTX 5090 or the H100 SXM. Use my Wan 2.2 template. If you've not used RunPod before, sign up with my link; we'll both get some free credit. See the article for more.

• If you use a service like RunPod and you're doing I2V, it can be smart to have your images ready in advance to make sure the server stays busy while you are paying for it.

• If you run this outside of RunPod, you'll need to install some custom nodes. To do that, click the "Manager" button at the top of the Comfy interface, then click "Install Missing Custom Nodes". Click "Install" on each one - I recommend doing so in order, waiting until each has installed. Don't bother restarting ComfyUI until they are all installed. The RunPod template has them preinstalled. (There's a manual patch for the LTXFilmGrain node here.)

    • If the wires bother you, there's a button in the bottom right on the floating UI that will hide them.

• What is Lightx2\ning? That's just my shorthand for referring to the Lightx2v and Lightning (which is just the Wan 2.2 version) self-forcing LoRAs.

    • I've made it easy to turn off Lightx2\ning as well, if you want to try without, but note that it's much slower! I really only recommend this with the H100 SXM. Do try though, especially with text-to-video! The full Wan 2.2 has some amazing capability.

• This workflow is set up for .safetensors models, but you can use GGUF if you make the necessary node changes.

• If having the Clownshark/TripleK sampler in the UI is distracting, you can delete the group with no negative consequences. (You could also delete the purple mute node for the sampler selection as well.)

    Costs?

I'm updating the data here to reflect additional testing: in case you are curious, the example videos take around 4.5 or 3 minutes each (720x1280), depending on GPU. (I don't normally use that resolution when I'm just making stuff and experimenting.) I can generally make nice-looking videos in 1-2 minutes. I'm generally running at either $0.93 or $2.69 per hour with the RTX 5090 or the faster but more expensive H100 SXM; in general I tend to see between 15 and 68 high-quality videos per hour, so about $0.02 - $0.13 per video, rounding up. (Session startup, loading the pod, probably adds a cent or so to that.) One to two minutes is probably my sweet spot for generation time, so it's either great or a bit over my ideal depending on resolution and scene complexity, but that's a cost consideration.
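If you want to run the same back-of-envelope math for your own setup, here's a tiny sketch. The hourly rates and throughput figures are the ones I quoted above; pairing the cheapest rate with the highest throughput (and vice versa) to get a best/worst range is my assumption, not a measurement.

```python
# Rough per-video cost from an hourly GPU rate and observed throughput.
# The numbers below are my own session figures, not official pricing.

def cost_per_video(rate_per_hour: float, videos_per_hour: float) -> float:
    """Dollars per video at a given hourly rate and videos-per-hour."""
    return rate_per_hour / videos_per_hour

# Best case: cheaper GPU ($0.93/hr RTX 5090) at high throughput.
low = cost_per_video(0.93, 68)
# Worst case: pricier GPU ($2.69/hr H100 SXM) at low throughput.
high = cost_per_video(2.69, 15)

print(f"${low:.2f} - ${high:.2f} per video")
```

Plug in your own rate and throughput; remember to amortize the pod's startup time across the session as well.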

    Troubleshooting

If a node is missing (a bright, thick red outline with a warning when you open the workflow), you can install it by going to Manager > Install Missing Custom Nodes and pressing Install on each of the nodes that show up there.

If you are getting errors related to a custom node, it's possible something has changed recently in the software. In these situations, it can be useful to roll back to the last "stable" build.

For example, the nightly build of WanVideoWrapper might introduce an error that wasn't there last time. With a workflow open, go to Manager > Custom Nodes in Workflow. This will show you all of the custom nodes. If you click Switch Ver, you can see all of the releases. Consider trying the first numbered one at the top of the list.

    If that doesn't work, or there seem to be more significant problems and you are using RunPod, you may have forgotten to select CUDA 12.8. Try restarting the server. If that doesn't work, terminate the pod, and make a new one. This will fix a surprising number of possible issues.

    Longer video generation support?

    One day. Probably.

I'm always looking for a good solution to this, but I've not yet found one that isn't very complex. To talk through the options a bit:

There are some specialized solutions like Wan Animate and Infinite Talk that achieve longer videos by utilizing other technology to specific ends (remapping motion / making a talking-head video), and while VACE is promising, it's very complex to set up and use and requires multiple steps. There are also techniques that involve making keyframes for your scene and using first/last frame to fill in the actual animations, and you can use interpolation as a post-processing step to blend those clips in a way that can hide seams. Most of this also requires color correction or IPAdapter to keep faces consistent.

The SVI LoRA is a newer technique. It stabilizes consistency across videos, but it lowers the base quality (everything gets less sharp), and scenes become volatile, prone to big changes, even as overall consistency across multiple videos improves. It's not perfect, and it cannot go infinite, but if you're dead set on longer videos, this is a decent technique. It doesn't meet my quality bar; I find the overall drop in fidelity disappointing.

    At the end of the day, it's either a ton of work to make a still-short video, or you've introduced a ton of compromise on what's already a compromise. That's not what I'm selling here.

    I see this as the biggest problem in the AI video space, whether you do this as a hobby, like most of us, or you're a company trying to figure out how to seriously use this stuff commercially. These problems are also not unique to Wan, though they vary from company to company. There's a technology problem for how to extend video, so I suspect that there's a lot of economic pressure and research effort that will probably lead to better videos that aren't "more VRAM", as that doesn't scale well.

To be clear: you can do this now by using the last frame of one clip as the first frame of the next, and v0.38 adds that capability. You'll generally get 2 or 3 decent extensions, but you take a quality hit each time, and any camera movement or motion may not look consistent between clips. (Using the same seed, sadly, does not ensure consistency.)
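If you'd rather grab that last frame outside the workflow, one common approach is ffmpeg's end-relative seek. This is just a sketch: it assumes ffmpeg is installed, and the file names are placeholders. The helper only builds the command, so you can inspect it before running anything.

```python
# Sketch: build an ffmpeg command that extracts the final frame of a clip
# so it can be fed back in as the I2V start image for an extension.
# Assumes ffmpeg is on PATH; file names here are hypothetical.
import subprocess

def last_frame_cmd(video_path: str, image_path: str) -> list[str]:
    """Return the ffmpeg argv that saves the last frame of video_path."""
    return [
        "ffmpeg", "-y",
        "-sseof", "-0.1",   # seek to ~0.1s before the end of the input
        "-i", video_path,
        "-frames:v", "1",   # keep a single frame
        image_path,
    ]

# Uncomment to actually run it:
# subprocess.run(last_frame_cmd("clip_001.mp4", "start_002.png"), check=True)
```

Feed the saved image into the I2V input, and keep in mind the quality hit compounds with each extension, as noted above.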

    Sound?

Once it gets much better. Sora 2 and the other private models can do amazing sound, but the available public models create audio that I really dislike. You can certainly add it yourself if you like, but I won't officially support it until it improves. LTX-2 can do decent sound and lipsync, but it has a lot of issues which I'll cover elsewhere.

    Description

    Lots of updates.

    • Revisions to notes and labels.

    • Minor UI adjustments

    • Added a bunch of technical notes from other workflows that might be of use to folks trying to understand things

• NAG support to improve negative prompt adherence.

    • Frame rate selection for interpolation.

• Changing the file naming scheme going forward to cite the version number rather than my personal date system.


    Workflows
    Wan Video 2.2 T2V-A14B

    Details

    Downloads
    759
    Platform
    CivitAI
    Platform Status
    Available
    Created
    10/30/2025
    Updated
    4/27/2026
    Deleted
    -

    Files

    yetAnotherWorkflowEasyT2v_v035Wanvideo.zip

    yetAnotherWorkflowEasyT2vI2v_v035Wanvideo.zip