
    HUNYUAN | AllInOne


    No need to buzz me. Feedback is much more appreciated. | last update: 06/03/2025


    ā¬‡ļøOFFICIAL Image To Video V2 Model is out!ā¬‡ļø COMFYUI UPDATE IS REQUIRED !
    Get files here:
    link 1 paste in: \models\clip_vision
    link 2 or Link 3 paste in: \models\diffusion_models (pick the one that works best for you)
    āš ļø I2V model got an update on 07/03/2025 āš ļø

    These workflows have evolved over time through various tests and refinements,
    thanks also to the huge contributions of this community.
    Requirements, special thanks, and credits below.


    Before commenting, please keep in mind:

    • The Advanced and Ultra workflows are intended for more experienced ComfyUI users.
      If you choose to install unfamiliar nodes, you take full responsibility.

    • I make these workflows for fun, in my free time.

      Most issues you might encounter have probably already been widely discussed and solved on Discord, Reddit, or GitHub, or addressed in the description of the workflow you're using, so please read carefully
      and consider doing some searching before commenting.

    • I started this alone, but now there's a small group of people who are contributing with their passion, experiments and cool findings. Credits below.
      Thanks to their contributions this small project continues to grow and improve
      for everyone's benefit.

    • Fast LoRA may work best when combined with other LoRAs, allowing you to reduce the number of steps.

      - Wave Speed can significantly reduce inference time but may introduce artifacts.

      - Achieving good results requires testing different settings. Default configurations may not always work, especially when using LoRAs, so experiment to find the settings that fit best. THERE ARE NO UNIVERSAL SETTINGS THAT WORK FOR EVERY CASE.

      - You can also try switching to a different sampler/scheduler and see which works best for your case: try UniPC simple, LCM simple, DDPM, DPMPP_2M beta, Euler normal/simple/beta, or the new "gradient_estimation"
      (Samplers/schedulers need to be set for each stage and mode; they are not settings found in the console)



    Legend to help you choose the right workflow:

    āœ”ļø Green check = UP TO DATE version for its category.
    Include latest settings, tricks, updated nodes and samplers, working on latest ComfyUI.

    🟩🟧🟪 Colors = Basic / Advanced / Ultra

    āŒ = Based on deprecated nodes, you'll have to fix it yourself if you really want to use


    Quick Tips:


    Low VRAM? Try this:

    and/or try using the GGUF models available here.



    RTX 4000? Use this:



    Want more tips?

    Check my article: https://civarchive.com/articles/9584


    All workflows available on this page are designed to prioritize efficiency, delivering high-quality results as quickly as possible.
    However, users can easily customize settings through intuitive, fast-access controls.

    For those seeking ultra-high-quality videos and the best output this model can achieve, adjustments may be necessary, such as increasing steps, modifying resolutions, reducing TeaCache / WaveSpeed influence, or disabling Fast LoRA entirely to enhance results.

    Personally, I aim for an optimal balance between quality and speed. All example videos I share follow this approach, utilizing the default settings provided in these workflows. While I may make minor adjustments to aspect ratio, resolution, or step count depending on the scene, these settings generally offer the best all-around performance.


    WORKFLOWS DESCRIPTION:


    🟩"I2V OFFICIAL"

    require:


    🟩"BASIC All In One"


    uses native Comfy nodes and has 3 modes of operation:

    • T2V

    • I2V (sort of: an image is multiplied *x frames and sent to the latent, with a denoising level balanced to preserve the structure, composition, and colors of the original image. I find this approach highly useful, as it saves inference time and allows for better guidance toward the desired result). Obviously this comes at the expense of general motion: lowering the denoise level too much causes the final result to become static, with minimal movement. The denoise threshold is up to you to decide based on your needs.

      There are other methods to achieve a more accurate image-to-video process, but they are slow. I didn't even include a negative prompt in the workflow because it doubles the waiting times.

    • V2V same concept as I2V above
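As a rough illustration of the pseudo-I2V idea above (not the actual ComfyUI nodes; the array shapes and the linear noise blend are simplifying assumptions), the repeat-then-partially-denoise trick looks something like this:

```python
import numpy as np

def pseudo_i2v_latent(image_latent: np.ndarray, num_frames: int,
                      denoise: float, seed: int = 0) -> np.ndarray:
    """Repeat one image latent across frames, then blend in noise.

    denoise near 1.0 -> mostly noise: free motion, structure lost.
    denoise near 0.0 -> mostly image: structure kept, video stays static.
    """
    rng = np.random.default_rng(seed)
    # The single image is multiplied *x frames and sent to the latent.
    frames = np.repeat(image_latent[np.newaxis, ...], num_frames, axis=0)
    noise = rng.standard_normal(frames.shape)
    # A balanced denoise preserves composition and colors while still
    # leaving the sampler room to introduce motion.
    return (1.0 - denoise) * frames + denoise * noise

latents = pseudo_i2v_latent(np.zeros((4, 40, 54)), num_frames=65, denoise=0.6)
print(latents.shape)  # (65, 4, 40, 54)
```

The denoise threshold is the whole trade-off: in the real workflow a sampler (not this linear blend) consumes the noise, but the balance works the same way.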

    require:
    https://github.com/chengzeyi/Comfy-WaveSpeed
    https://github.com/pollockjj/ComfyUI-MultiGPU


    🟧 "ADVANCED All In One TEA ☕"


    An improved version of the BASIC All In One TEA ☕, with additional methods to upscale faster, plus a lightweight captioning system for I2V and V2V that consumes only an additional ~100 MB of VRAM.

    Upscaling can be done in three ways:

    1. Upscaling using the model. Best Quality. Slower (Refine is optional)

    2. Upscale Classic + Refine. It uses a special video upscaling model that I selected after testing a crazy number of video upscaling models; it is one of the fastest and produces results with good contrast and well-defined lines. While it's certainly not the optimal choice when used alone, combined with the REFINE step it produces well-defined videos. This option is a middle ground in terms of timing between the first and third methods.

    3. Latent upscale + Refine. This is my favorite: the fastest, with decent results.
      This method is essentially the same as the first (basically V2V), but at slightly lower steps and denoise.

    Three different methods, more choices based on preferences.

    Requirements:

    -ClipVitLargePatch14
    download model.safetensors

    rename it to clip-vit-large-patch14_OPENAI.safetensors

    paste it in \models\clip

    -RealESR General x4 v3

    paste it in \models\ESRGAN\

    -LongCLIP-SAE-ViT-L-14
    -https://github.com/pollockjj/ComfyUI-MultiGPU
    -https://github.com/chengzeyi/Comfy-WaveSpeed

    Update Changelogs:

    |1.1|
    Faster upscaling

    Better settings

    |1.2|
    removed redundancies, better logic
    some errors fixed
    added an extra box for the ability to load a video and upscale it directly

    |1.3|

    • New prompting system.

      Now you can copy and paste any prompt you find online, and it will automatically modify the words you don't like and/or add additional random words.

    • Fixed some latent auto-switch bugs (these gave me serious headaches)

    • Fixed a seed issue; locking the seed now locks sampling

    • Some Ui cleaning

    |1.4|

    • Batch Video Processing – Huge Time Saver!

      You can now generate videos at the bare minimum quality and later queue them all for upscaling, refining, or interpolating in a single step.

      Just point it to the folder where the videos are saved, and the process will be done automatically.

    • Added Seed Picker for Each Stage (Upscale/Refine)

      You can now, for example, lock the seed during the initial generation, then randomize the seed for the upscale or refine stage.

    • More Room for Video Previews

      No more overlapping nodes when generating tall videos (don't exaggerate the ratio, obviously)

    • Expanded Space for Sampler Previews

      Enable preview methods in the manager to watch the generation progress in real time.

      This allows you to interrupt the process if you don't like where it's going.

      (I usually keep previews off, as enabling them takes slightly longer, but they can be helpful in some cases.)

    • Improved UI

      Cleaned up some connections (noodles), removed redundancies, and enhanced overall efficiency.

      All essential nodes are highlighted in blue and placed right below each corresponding video node, while everything else (backend: switches, logic, math, and things you shouldn't touch) has been moved further down. You can now change settings or replace nodes with ones you prefer much more easily.

    • Notifications

      All nodes related to the browser notifications sent when each step is completed, which some people find annoying, have been moved to the very bottom and highlighted in gray. So if they bother you, you can quickly find them, select them, and delete them.

    |1.5|

    • general improvements, some bug fixes

    NB:
    These two warnings in the console are completely fine; just ignore them:
    WARNING: DreamBigImageSwitch.IS_CHANGED() got an unexpected keyword argument 'select'
    WARNING: SystemNotification.IS_CHANGED() missing 1 required positional argument: 'self'


    🟪 "AIO | ULTRA"


    Embrace This Beast of Mass Video Production!
    This version is for the truly brave professionals and unlocks a lot of possibilities.
    Plus, it includes settings for higher quality, sharper videos, and even faster speed, all while being nearly glitch-free.
    All older workflows have also been updated to minimize glitches, as explained in my previous article.

    From Concept to Creation in Record Time!

    We are achieving world-record speed here, but at the cost of some complexity. These workflows are becoming increasingly intimidating despite efforts to keep them clean and hide all automations in the back-end as much as possible.

    That's why I call this workflow ULTRA: a powerhouse for tenacious Hunyuan users who want to achieve the best results in the shortest time possible, with all tools at their fingertips


    Key Features and Improvements:

    • Handy Console: Includes buttons to activate stages with no need to connect cables or navigate elsewhere. Everything is centralized in one place (Control Room), and functions can be accessed with ease.

    • T2V, I2V*, V2V, T2I, I2I Support: Seamless transitions between different workflows.

      *I2V: an image is multiplied into *x frames and sent to the latent. The official I2V model is not out yet; there's a temporary trick to do I2V here which requires Kijai's nodes.

    • Wildcards + Custom Prompting Options: Switch between Classic prompting with wildcards or add random words in a dedicated box, with automatic customizable word swapping or censoring.

    • Video Loading: Load videos directly into upscalers/refiners and skip the initial inference stage.

    • Batch Video Processing: Upscale or Refine multiple videos in sequence by loading them from a custom folder.

    • Interpolation: Smooth frame transitions for enhanced video quality.

    • Random Character LoRA Picker: Includes 9 LoRA nodes in addition to fixed LoRA loaders.

    • Upscaling Options: Supports upscaling, double upscaling, and downscaling processes.

    • Notifications: Receive notifications for each completed stage, organized in a separate section for easy removal if necessary.

    • Lightweight Captioning: Enables captioning for I2V and V2V with minimal additional VRAM usage (only 100MB).

    • Virtual Vram support.

      Use the GGUF model with Virtual VRAM to create longer videos or increase resolution.

    • Hunyuan/Skyreel (T2V) quick merges slider

    • Switch from Regular Model to Virtual Vram / GGUF with a slider

    • Latent preview to cut down upscaling process.

    • A dedicated LoRA line exclusively for upscalers, toggled via a dedicated button.

    • RF edit loom

    • Upscale using Multiplier or "set to longest size" target

    • a button to toggle Wave Speed and FastLoRA as needed for upscaling only.

    • UI improvements based on user feedback


    - Sequential Upscale Under 1x / Double Upscaling
    You can now downscale using the upscale process and then re-upscale with the refiner, or customize upscaler multipliers to upscale 2 times.

    • New Functionality:

      • The upscale value range now includes values as low as 0.5.

      • Two sliders are available: one for the initial upscale and another for the refiner (essentially another sampler, always V2V).

    • Applications:
      Upscale, Refine or combine the two

      • Upscale fast (latent resize + sampler) or accurate (resize + sampler)

      • Refine (works the same as upscale, can be used alone or as an auxiliary upscaler)

      • Double upscaling: Start small and upscale significantly in the final stage.

      • Downscale and re-upscale: Deconstruct at lower resolution and reconstruct at higher quality.

      • Combos: Upscale & Refine / Downscale & Upscale
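To see how the two multiplier sliders compose across stages, here is a toy resolution-chain calculation; the snap-to-a-multiple-of-16 alignment is my assumption about what video models expect, not a documented setting of the workflow:

```python
def chain_resolution(width: int, height: int, multipliers, snap: int = 16):
    """Run a resolution through a sequence of upscale/downscale multipliers,
    snapping each stage down to a multiple of `snap`."""
    for m in multipliers:
        width = int(width * m / snap) * snap
        height = int(height * m / snap) * snap
    return width, height

# Downscale-and-re-upscale combo: deconstruct at 0.5x, reconstruct at 2.0x.
print(chain_resolution(640, 480, [0.5, 2.0]))  # (640, 480)
# A single 1.5x upscale pass:
print(chain_resolution(432, 320, [1.5]))       # (640, 480)
```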



    - Skip Decoders/Encoders Option
    Save significant time by skipping raw decoding for each desired stage and going directly to the final result.

    • How It Works: If your prompt is likely to produce a good output and the preview method ("latent2RGB") is active in the manager, you can monitor the process in real-time. Skip encoding/decoding by working exclusively in the latent space, generating and sending latent data directly to the upscaler until the process completes.

    • Example:
      A typical medium/high-quality generation might involve:

      • Resolution: ~ 432x320

      • Frames: 65

      • One Upscale: 1.5x (to 640x480)

      • Total Time: 162 seconds

      In this example case, by activating the preview in the manager and skipping the first decoder (the preview before upscaling), you can save ~30 seconds. The process now takes 133 seconds instead of 162.
      Bypassing additional decoders (e.g., upscale further or refinement) can save even more time.
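The latent-only flow can be sketched with stand-in functions (illustrative stubs, not ComfyUI calls): the point is simply that intermediate stages exchange latents and the slow VAE decode happens once, at the very end.

```python
import numpy as np

decode_calls = 0

def sample(latent: np.ndarray, steps: int) -> np.ndarray:
    """Stand-in for a sampler pass (KSampler-style)."""
    return latent + 0.1 * steps

def upscale_latent(latent: np.ndarray, factor: int) -> np.ndarray:
    """Stand-in for a latent resize."""
    return np.kron(latent, np.ones((factor, factor)))

def vae_decode(latent: np.ndarray) -> np.ndarray:
    """Stand-in for the expensive VAE decode we want to skip."""
    global decode_calls
    decode_calls += 1
    return latent

# Stay in latent space between stages; decode a single time at the end.
lat = sample(np.zeros((4, 4)), steps=20)   # initial generation
lat = upscale_latent(lat, 2)               # latent upscale, no decode
lat = sample(lat, steps=10)                # refine pass, still latent
video = vae_decode(lat)                    # the only decode
print(decode_calls)  # 1
```

With the latent2RGB preview active in the manager you still get a rough visual of each stage without paying for the intermediate decodes.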


    - Image Generation (T2I and I2I)
    Explore the HUN latent space with these image generation capabilities.

    When the number of frames is set to 1, the image node activates automatically, allowing the image to be saved as a PNG.
    Use the settings shown here for the best results:

    - Structural Changes / Additional Features

    • Motion Guider for I2V
      This feature enhances motion for image-to-video workflows, lowering the chance of getting a static video as a result.

    • 9 Random Character Loras Loader: Previously limited to 5, now expanded to 9.

    • Random Character Lora Lock On/Off:

      • By default, each seed is set to correspond to a random Lora
        (e.g., seed n° 667 = Lora n° 7).

      • Now, you can unlock this "character Lora lock on seed" and regenerate the same video with a different random Lora while maintaining the main seed.
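The seed-to-LoRA lock can be illustrated with a tiny picker. The modulo mapping below is my guess at how a deterministic lock could work; the workflow's actual node may map seeds to LoRA slots differently:

```python
import random

def pick_character_lora(seed: int, num_loras: int = 9,
                        locked: bool = True) -> int:
    """Pick a character LoRA slot (1-based).

    locked=True : the same seed always yields the same LoRA, so a locked
                  seed regenerates the same character.
    locked=False: the LoRA is re-rolled independently of the main seed,
                  giving a new character while keeping the video's seed.
    """
    if locked:
        return seed % num_loras + 1
    return random.randint(1, num_loras)

print(pick_character_lora(667))  # deterministic: same slot every run
```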

    • Clarifications:
      Let’s call things by their real names:

      • "Refine" and "Upscale" are both samplers here. Each optimized for specific stages:

        • Upscale: Higher steps/denoise, fast results, balanced quality.

        • Refine: Lower steps/denoise, focused on fixing issues and enhancing details.

      • Refine can work alone, without upscaling, to address small issues or improve fine details.

    • UI Simplification:
      The "classic upscale" is now replaced by a faster and better-performing resize + sharpness operation and hidden in back-end to save space.

    • Frame Limit Issue (101+ Frames):
      Generating more than 101 frames with latent upscale can cause problems. To address this, I added an option to upscale videos before switching to latent processing.

    - Bug Fixes

    • Latent Upscale Change:
      Latent upscaling now uses bicubic interpolation instead of nearest-exact, which performs better based on testing.
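Why the interpolation mode matters can be seen even in a 1-D toy. NumPy has no built-in bicubic, so linear interpolation stands in for the "smooth" family here; the contrast with nearest is the same in spirit:

```python
import numpy as np

row = np.array([0.0, 1.0, 0.0, 1.0])          # one row of a latent channel
x_new = np.linspace(0, 3, 8)                  # upscale 4 -> 8 samples

# nearest-exact just duplicates values: hard steps in latent space.
nearest = row[np.round(x_new).astype(int)]
# a smooth mode (linear here, bicubic in the workflow) produces
# graded in-between values instead of blocky repeats.
smooth = np.interp(x_new, np.arange(4), row)

print(nearest)  # only 0.0s and 1.0s
print(smooth)   # includes intermediate values
```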

    • "Cliption" Bug Fixed

    • 201-Frame Fix:
      Generating 201-frame perfect loops caused artifacts with latent upscale. Switching to "resize" via the pink console buttons now resolves this issue.

    - Performance and other infos:

    Once you master it, you won’t want to go back. This workflow is designed to meet every need and handle every case, minimizing the need to move around the board too much. Everything is controlled from a central "Control Room."

    Traditionally, managing these functions would require connecting/disconnecting cables or loading various workflows. Here, however, everything is automated and executed with just a few button presses.

    Default settings (e.g., denoise, steps, resolution) are optimized for simplicity, but advanced users can easily adjust them to suit their needs.

    -Limitations:

    1. No Audio Integration:
      While I have an audio-capable workflow, it doesn’t make sense here. Audio should be processed separately for professional results.

    2. No Post-Production Effects:
      Effects like color correction, filmic grain, and other post-production enhancements are left to dedicated editing software or workflows. This workflow focuses on delivering a pure video product.

    3. Interpolation Considerations:
      Interpolation is included here. I set up the fastest one I could find, not necessarily the best. For the best results I typically use Topaz for both extra upscaling and interpolation after processing, but it's up to the user to choose their favourite interpolation method or final upscaling if needed.

    Requirements:

    ULTRA 1.2:
    -Tea cache

    -LongCLIP-SAE-ViT-L-14

    -ClipVitLargePatch14

    ULTRA 1.3:
    -UPDATE TO LATEST COMFY IS NEEDED!
    -Wave Speed

    -LongCLIP-SAE-ViT-L-14

    -ClipVitLargePatch14

    ULTRA 1.4 / 1.5:
    -UPDATE TO LATEST COMFY IS NEEDED!
    https://github.com/pollockjj/ComfyUI-MultiGPU
    https://github.com/chengzeyi/Comfy-WaveSpeed
    https://github.com/city96/ComfyUI-GGUF
    https://github.com/logtd/ComfyUI-HunyuanLoom
    https://github.com/kijai/ComfyUI-VideoNoiseWarp


    NB:
    The following warnings in the console are completely fine; just ignore them:
    WARNING: DreamBigImageSwitch.IS_CHANGED() got an unexpected keyword argument 'select'
    WARNING: SystemNotification.IS_CHANGED() missing 1 required positional argument: 'self'

    Update Changelogs:

    |1.1|
    Better color scheme to make it easier to understand how the upscaling stages work
    Check the images to understand

    |1.2|
    Wildcards.
    You can now switch from the Classic prompting system (with wildcards allowed)
    to the fancy one previously available

    |1.3|

    • An extra wavespeed boost kicks in for upscalers.

    • Changed samplers to native Comfy: no more TTP, no more interrupt error messages.

    • Tea cache is now a separate node.

    • Fixed a notification timing error and text again.

    • Replaced a node that was causing errors for some users: "if any" now swaps with "eden_comfy_pipelines."

    • Added SPICE, an extra-fast LoRA toggle that activates only in upscalers to speed up inference at lower steps and reduce noise.

    • Added Block Cache and Sage to the setup. Users who have them working can enable them.

    • Changed the default sampler from Euler Beta to the new "gradient_estimation" sampler introduced in the latest Comfy update.

    • Added a video info box for each stage (size, duration).

    • Removed "random lines."

    • Adjusted default values for general use.

    • Upscale 1 can now function as a refiner as well.

    • When pressing "Latent Resize" or "Resize," it will automatically activate the correct sampler.

    • A single-frame image is now displayed in other stages as well (when active).

      Thanks to all the users on Discord who contributed to these workflow improvements!

    |1.4|

    • Virtual Vram support

    • Hunyuan/Skyreel quick merges slider

    • Toggle to switch from Regular Model to Virtual Vram / GGUF

    • Longer vids / Higher Res / extreme upscaling now possible

    • Default resolution changed to 480x320, which seems like a balanced middle ground for low-res quick vids; most users should be fine with that.

    • Latent preview for skip preview mode

    • Switch toggle to enable/disable Exclusive LoRA for upscalers

    • RF edit loom

    • V2V loading time improved

    • Upscale to longest size target

    • Fixed slider upscale mismatch

    • info node moved

    • clean up and fixes

    • better settings for general use

    • upscale one can now use the optional "resize to longest size" slider

    • added extra wave speed toggle for upscalers

    • added exclusive loras line for upscalers

    • general fixes

    • Ui improvement based on users feedbacks

    • fixed fast lora string issue on bypass in upscalers

    • more cleaning

    • changed exclusive LoRAs for upscalers again; the main fast LoRA is NOT going to pass through that line, since it already has a separate toggle (upscale with extra fast LoRA), previously called SPICE FOR UPSCALING.

    • fixed output node size for videos

    • moved resize by "longest size" toggle in extra menu

    • added extra wave speed toggle

    • control room is finished... for now. I don't want to stress AIDoctor further; he already did a great job

    • lowered the fast LoRA default value to 0.4

    • fixed VIDEO BATCH LOADING

    |1.5|

    • general improvements, UI improvements, some bug fixes

    • Leap Fusion support

    • Go With The Flow support




    Bonus TIPS:


    Here's an article with all the tips and tricks I'm writing as I test this model:

    https://civarchive.com/articles/9584
    If you struggle to use my workflows for any reason, you can at least refer to the article above. You will get a lot of precious quality-of-life tips for building and improving your Hunyuan experience.

    All the workflows labeled with an ❌ are OLD and highly experimental; they rely on Kijai nodes that were released at a very early stage of development.
    If you want to explore those, you'll need to fix them yourself, which should be pretty easy.





    CREDITS

    Everything I do, I do in my free time for personal enjoyment.
    But if you want to contribute, there are people who deserve WAY more support than I do, like Kijai.
    I'll leave his link; if you're feeling generous, go support him.
    Thanks!

    Last but not least:
    Thanks to this community, especially those who have given me advice and experimented with my workflows, helping improve them for everyone.

    Special thanks to:
    https://civarchive.com/user/galaxytimemachine
    for their meticulous and precise method of operation in finding the best settings, and for all the tests conducted.

    https://civarchive.com/user/TheAIDoctor
    for his brilliance and for dedicating his time to creating and modifying special nodes for this workflow madness! Such an incredible person.

    and
    https://github.com/pollockjj/ComfyUI-MultiGPU

    Also special thanks to:
    Tr1dae
    for creating HunyClip, a handy tool for quick video trimming. If you work with heavy editing software like DaVinci Resolve or Premiere, you'll find this tool incredibly useful for fast operations without the need to open resource-intensive programs.

    Check it out here: [link]


    Have fun

    Comments (123)

    K3NK · Feb 6, 2025 · 1 reaction
    CivitAI

    im loving the ✔️AiO🟪Ultra☕1.3 so far 👍👍👍

    LatentDream
    Author
    Feb 6, 2025

    🤗 enjoy!

    dirtysem · Feb 6, 2025
    CivitAI

    Thank you so much for the update! Finally, we got rid of this annoying error. Especially pleased with SpiceLora. It would be great if it became possible to disable previous Loras when upscaling. That would be great.

    Another question: where is the wildcards folder located?

    LatentDream
    Author
    Feb 7, 2025

    I already considered providing an option to disable LoRAs before upscaling, but in my tests a lot of them went crazy or drifted too far from the initial video. I should do more tests! Thanks for the reminder.
    It's not that hard to do, in any case; join the Discord and I can explain in 2 minutes

    dirtysem · Feb 7, 2025

    Unfortunately, I do not fully understand the structure of nodes and how they connect and what goes behind what. It's useless to explain to me. Well, thanks anyway, I'll be waiting for your update.

    eclipsex · Feb 6, 2025 · 1 reaction
    CivitAI

    Any planned support for torch compile, first block cache, sage-attention via WaveSpeed and KJ's patch node?

    Limonobatono · Feb 6, 2025
    CivitAI

    Missing Node Types

    When loading the graph, the following node types were not found:

    ApplyFBCacheOnModel

    How can I get these nodes? Namely "WaveSpeed" and "Extra WaveSpeed for Upscale/Refine"

    oldmoonsonggames · Feb 8, 2025 · 2 reactions

    I had this same problem. The node is in Comfy-WaveSpeed. Search for it in manager.

    ziriuss1 · Feb 6, 2025
    CivitAI

    frame_amout does not have a visible slider and I cannot choose the number of frames for the video

    Limonobatono · Feb 6, 2025

    💻CONTROL ROOM💻 - FRAMES (General)

    ziriuss1 · Feb 6, 2025

    @Limonobatono my workflow doesn't have frame control, here's a pic: https://prnt.sc/yrn0fAD9Jq9c

    PepitoPalotes · Feb 6, 2025

    @ziriuss1 did you install all missing nodes? those pink boxes should have sliders in them.

    ziriuss1 · Feb 6, 2025

    @PepitoPalotes I don't have any missing nodes and it generates a video, but it lasts 2 seconds. If I right-click and choose properties I can manually edit the frames and save, but there is no slider bar to choose from.

    Limonobatono · Feb 6, 2025

    @ziriuss1 This is another version

    ziriuss1 · Feb 6, 2025

    @Limonobatono i use aio ultra 1.3

    PepitoPalotes · Feb 6, 2025

    @ziriuss1 that's the one I'm using and the bar is there. maybe we are on different versions of comfy?

    Human__Error · Feb 9, 2025

    @PepitoPalotes I love your username

    klosseszxcg492 · Feb 6, 2025 · 4 reactions
    CivitAI

    When generating a t2v with ultra workflow, I get the error connected with 'NoneType' object. It's either 'NoneType' object is not subscriptable or 'NoneType' object has no attribute 'copy'. Can somebody help me, please?

    klosseszxcg492 · Feb 6, 2025 · 3 reactions

    upd: I get this error in the first VAE Decode node, the one which is right on top of the RAW output section

    nomongking · Feb 8, 2025

    I also have the same symptoms. I can't find the cause and I'm upset because I wasted my time all day.

    PepitoPalotes · Feb 6, 2025 · 1 reaction
    CivitAI

    From my tests, ultra v1.3 seems to be more memory efficient. Now I'm able to push the resolution and length even further without getting OOM, while maintaining the same image quality. Amazing work, thank you for sharing it!

    falkenfr · Feb 6, 2025
    CivitAI

    Thanks a lot, but I have an error with Triton and Sage. They are too hard to install under Windows. Do you have a version of your template without these 2 add-ons plz?

    dominic1336756 · Feb 6, 2025 · 1 reaction

    you can do it

    LetTheBassDrop · Feb 7, 2025 · 2 reactions

    you can disable that node. ctrl-b or right click to bypass it.

    TheKnightsWhoSayNI · Feb 10, 2025

    I'm having problems with Sage on Windows too. If I disable it, will there be any complications in the final quality?

    zalist420 · Feb 6, 2025
    CivitAI

    Hello, I keep having to traverse the workflow going to and from the prompt area to the LoRAs. I added a powerloraloader/bypass to get prompts from civitai, but I can't remember the prompts needed and keep having to go back to the site.

    Please can you add a lora loader that connects to civitai 'and or' something that could make adding lora prompts smoother than having to go to the site? I need to go every minute or so to recheck what prompts I need due to brain damage.

    LetTheBassDrop · Feb 7, 2025 · 1 reaction
    CivitAI

    The Dream Project node isn't working. I have to manually connect the latent output to the upscaler or it gets a "None" type error otherwise.

    switch from nightly to stable

    PepitoPalotes · Feb 7, 2025
    CivitAI

    Hi! After updating comfy the ultra 1.3 version is broken. The sage attention patching node fails with this error: "module 'comfy.ldm' has no attribute 'cosmos'". I was using this workflow yesterday and it was working fine, but I just clicked update all and... BOOM! ☹️
    Be careful if you update comfy.

    PepitoPalotes · Feb 7, 2025 · 2 reactions

    Nevermind, it's the classical comfy problem of updates breaking everything. A clean comfy install solved all the problems. I don't know how many times I have had to do this because some random update broke everything... Just ComfyUI being ComfyUI. 🙃

    SIDK · Feb 7, 2025 · 5 reactions
    CivitAI

    The move from Ultra 1.2 to 1.3 wasn't that bad, it just needed the separate TeaCache custom node, and I just bypassed the KJ Sage attention module (I'll open that can of worms some day). I can confirm what others have said about faster generations. I did a side-by-side test, and with the fast LORA at .50, it was about 50-70 seconds faster for raw output with the same prompt, dimensions, steps, denoising, guidance, etc. All in all, it was worth updating. Thanks for pushing the envelope!

    DIhan · Feb 7, 2025
    CivitAI

    Ooooo a new one!

    Seems Triton only works on Linux. Need to setup a WSL2 but disabled the Sage node for now to work on windows portable.

    Have an issue that when Upscale 1 is set then the RAW output does not show

    triton also has a windows version. just google it. i use it on windows as well

    DIhan · Feb 10, 2025

    @FoxTheFoxToTheFox I keep seeing how tricky it was to set up, so I avoided it killing my ComfyUI, but I'll have a go.

    Trying to setup on the Runpod atm

    https://civitai.com/articles/11303

    r0adkill76 · Feb 7, 2025
    CivitAI

    Not able to find this exact checkpoint and all my vids are static

    Machine_Spirit · Feb 8, 2025
    CivitAI

    I get missing node type message on start of the advanced version:

    If ANY return A else B-šŸ”¬

    Anyone got a solution?

    DIhan · Feb 8, 2025 · 1 reaction

    do the Ultra versions work? they are more optimized

    nisse91 · Feb 8, 2025 · 1 reaction

    i had this. i just opened up more of the same workflows until one worked.

    Machine_Spirit · Feb 8, 2025

    @DIhan Thanks for the info, that one actually works without messages =) but I can't get sageattention to work, even though I installed it and there are options to choose from (PatchSageAttentionKJ

    No module named 'sageattention'), Anyone got a sageattention install tutorial?

    I disabled sage but then get error when i run:
    VAEDecodeTiled

    'NoneType' object is not subscriptable

    and fix for the vae error?

    The basic workflow is the only one for me that works well.

    DIhan · Feb 8, 2025

    @DUDE33 I can't get sageattention to work either. With the VAE there seem to be 2 of them in the world: one from the comfyui org repo and the other by Kijai, which worked for me. It has the same name: https://huggingface.co/Kijai/HunyuanVideo_comfy/tree/main

    7058476 · Feb 8, 2025
    CivitAI

    I'm able to use sage attention with kijai workflow but for some reason I can't select any sage attention option here

    r0adkill76 · Feb 8, 2025
    CivitAI

    Running into a problem with the Basic + Upscale. It makes the first pic, but sometimes (and it's random) it runs out of memory on the upscale; any idea how I fix this?

    trionaut · Feb 9, 2025

    Read the tips on this page for low vram. Also, lower the values in the node called VAE Encode (Tiled). Temporal Size (the number of frames processed at once) is a good starting value to try.

    r0adkill76 · Feb 12, 2025

    @trionaut I do not see VAE Encode on the Basic with Upscale workflow. I am using the tips for under 16 GB but I have 24 GB; is that goofing this up?

    pomamura · Feb 8, 2025
    CivitAI

    I’m struggling to faithfully recreate the original face using image-to-video (i2v) generation. Could you provide advice on the necessary settings and adjustment values to improve facial consistency and accuracy? Any tips would be greatly appreciated. Thank you

    galaxytimemachine · Feb 9, 2025 · 1 reaction

    Lower denoise will get closer to original, but this is only a workaround method until the official img2vid model is available.

    damiangFeb 14, 2025 · 2 reactions

    You will never achieve the likeness 100% with the pseudo image2video implementation we have now, but having a trained LoRA of the original face will help immensely.

    7058476Feb 9, 2025
    CivitAI

    There is something really buggy in this workflow, and I think it's because there are way too many nodes. One issue I often get is that a workflow randomly stops if I start working on another one... it never did that before.

    Is it because your VRAM is full, and your PC can't cope? I don't have any problems using it.

    7058476Feb 11, 2025

    @galaxytimemachine Well no, I have plenty of VRAM and there is no error; it just stops the workflow, and the weird part is that it's the only workflow doing that. But I found a way to reproduce it: basically I open the 1.2 or 1.3 workflow in two different tabs, then I start one workflow, and if I close the second tab, which is idling, it stops the first workflow. Either it's a bug in ComfyUI or there is a weird link between the TeaCache nodes.

    galaxytimemachineFeb 11, 2025

    @mobile306 It is a bug in ComfyUI, one that I reported months ago, and many others are reporting the same. It's not just this workflow 😊

    https://github.com/comfyanonymous/ComfyUI/issues/4014

    7058476Feb 11, 2025 · 1 reaction

    @galaxytimemachine Thanks! Weird that I didn't notice it before.

    ctf05Feb 9, 2025 · 3 reactions
    CivitAI

    All my videos are very grainy. I have tried the default workflow and changing some things. How can I fix this?

    BorugaFeb 27, 2025

    Same here. I tried different schedulers (euler, simple...) but still get the same result; not even Topaz AI can fix it. Did you manage to resolve it?

    ctf05Feb 9, 2025 · 1 reaction
    CivitAI

    What scheduler and sampler is best?

    biggerthanbigFeb 9, 2025
    CivitAI

    Have you looked into ComfyUI-MultiGPU? I don't know if it can be implemented into your workflow(s). I only dabble, so don't take my word for it, but from what I understand it is able to offload part of the process to system RAM (or a second GPU), freeing up precious GPU VRAM.

    jdavid82500Feb 9, 2025
    CivitAI

    Has anyone tested this on Mac?

    huggybearcatFeb 11, 2025

    Yes, he did a great job on this! I did have to disable (well, bypass) the Sage node (I only found Windows solutions for that) and set the weight on the model loader to default. I haven't tried most of the bells and whistles yet, but I will soon!

    FoxTheFoxToTheFoxFeb 9, 2025
    CivitAI

    Amazing workflows. Thank you very much :-)
    Do you have any sampler recommendations for the non-fast workflows? I've tried several so far, but I keep sticking with Euler.

    KurtcPFeb 9, 2025
    CivitAI

    Thanks for your workflows - appreciated - I have been experimenting with wrapper and native for quite a while, but it's nice to see so many options in one place. I am probably being thick - but where do I load an image to get an autocaption from? I can't see an image loader anywhere obvious? Ta AJO6268

    informatiqueFeb 10, 2025 · 1 reaction
    CivitAI

    The Discord link is dead. How can I fix ApplyFBCacheOnModel? When I try to use "install via Git URL", the security level doesn't allow me to do it. I'm stuck.

    421768Feb 11, 2025

    I came here for this. I tried to install that node manually too, I'm surprised more people aren't having this issue.

    421768Feb 11, 2025

    Got it! You have to go into Manager > Custom Nodes Manager, make sure 'all' is selected, then search for wavespeed and install Comfy-WaveSpeed, which contains the missing node.

    421768Feb 11, 2025

    Now I'm having a problem with the scheduler, which is, like, bizarre, because it's "simple". Why on earth that would be an issue is just beyond me.

    informatiqueFeb 11, 2025

    @hades6666969 Now I have this: Installation Error: Comfy-WaveSpeed install failed: With the current security level configuration, only custom nodes from the "default channel" can be installed.

    421768Feb 12, 2025

    @informatique It may work to edit your config.ini, found in the folder ComfyUI\user\default\ComfyUI-Manager. Open it with Notepad or Notepad++ and change the security setting to 'weak'; the line should read: security_level = weak. Then save. That may fix it, but if it doesn't, that's as much as I personally know. Good luck.
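
    For reference, after that edit the relevant part of config.ini should look something like the fragment below (the section header is an assumption and may differ between ComfyUI-Manager versions; keep whatever sections your file already has and change only the security_level line):

    ```
    [default]
    security_level = weak
    ```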

    IdelacioFeb 13, 2025 · 1 reaction

    @hades6666969 Thank you.

    421768Feb 11, 2025
    CivitAI

    I've got a problem with the basic scheduler node, which is weird because it's such a fundamental, uncomplicated node. The error is: dreambiglatentswitch.is_changed() got an unexpected keyword argument 'select'


    Machine_SpiritFeb 11, 2025 · 2 reactions
    CivitAI

    Newest ultra version:
    I get an error on the upscaler sampler 1

    LatentUpscale

    'NoneType' object has no attribute 'copy'

    when I deactivate the upscaler I get:

    VAEDecodeTiled

    'NoneType' object is not subscriptable

    pruizFeb 11, 2025 · 2 reactions
    CivitAI

    I cannot find "ApplyFBCacheOnModel" in Basic + Upscaler?

    galaxytimemachineFeb 11, 2025 · 1 reaction

    Are you using the manager to find uninstalled nodes?
    https://github.com/chengzeyi/Comfy-WaveSpeed

    vrilismFeb 11, 2025 · 1 reaction

    Update ComfyUI by going to the root of the folder and doing a git pull if WaveSpeed isn't in ComfyUI Manager. Or you can just cd into custom_nodes and do git clone https://github.com/chengzeyi/Comfy-WaveSpeed
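
    Spelled out as commands, that manual route looks roughly like this (the ComfyUI path is a placeholder for your own install location):

    ```shell
    cd /path/to/ComfyUI        # root of your ComfyUI install
    git pull                   # update ComfyUI itself
    cd custom_nodes
    git clone https://github.com/chengzeyi/Comfy-WaveSpeed
    # restart ComfyUI afterwards so the new node pack is loaded
    ```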

    pruizFeb 15, 2025

    @vrilism Where do I put the downloaded file?

    BasstheticsFeb 17, 2025

    @vrilism This didn't work for me; I keep getting an installation "fatal: refusing to merge unrelated histories" error

    Jaya1010Feb 13, 2025 · 1 reaction
    CivitAI

    Any idea why the Raw output doesn't show a preview after the initial gen? It only loads once the first-pass upscale shows.

    TequiilaFeb 14, 2025

    same here

    RumadayFeb 13, 2025 · 1 reaction
    CivitAI

    How do you do the Sage part? I've gone to the GitHub but I don't really know what to do.

    jobobby04Feb 13, 2025

    Same, I get
    ```
    PathchSageAttentionKJ

    No module named 'sageattention'
    ```
    With the ultra workflow

    jobobby04Feb 13, 2025 · 1 reaction

    I got it working, you need to install the sageattention pip package and then start ComfyUI with `--use-sage-attention`.
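
    In concrete terms, that is roughly the following (a sketch; run it inside whatever Python environment your ComfyUI actually uses, not the system-wide one):

    ```shell
    # with ComfyUI's venv/conda environment activated:
    pip install sageattention
    # then launch ComfyUI with the flag:
    python main.py --use-sage-attention
    ```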

    smaftFeb 14, 2025

    @jobobby04 I did, but it's still missing :/

    damiangFeb 14, 2025

    @smaft If you have venv or conda, make sure you install it in the environment that ComfyUI uses. Don't install it globally; activate the environment first.

    SpazAIFeb 16, 2025

    install the pip dependencies

    citizynkyng962Feb 16, 2025

    @SpazAI How?

    SpazAIFeb 16, 2025

    @citizynkyng962 As jobobby04 explained above: "I got it working, you need to install the sageattention pip package and then start ComfyUI with --use-sage-attention."

    prepstorm8424783Feb 13, 2025
    CivitAI

    I can't really get Img2Vid to work. It just renders out the first frame (which does not look too much like the image). Is there a way to fix this and to set the strength of the image guidance? Maybe I am doing something wrong. All I did, however, was switch out the model for GGUF with DisTorch (MultiGPU node).

    damiangFeb 14, 2025 · 1 reaction
    CivitAI

    What an AMAZING workflow. Everything works excellent. Thank you for the hard work, the quality and performance is absolutely amazing.

    azeliFeb 14, 2025
    CivitAI

    @LatentDream Simple question for you if you don't mind. I get a massive difference in quality generating multi-step latent upscales versus a single 544x960 gen, for example. Why is that, and can anything be improved, or is that the nature of latent upscaling?

    For me everything seems to go a bit wonky, less crisp and more artifacts. Generally I use denoise of 0.65, any lower and results are destroyed.

    ImSuckFeb 15, 2025
    CivitAI

    I've noticed changing to frames above 72 seems to break the workflow and nothing generates. Am I doing something wrong?

    SpazAIFeb 16, 2025

    Yes, you are. I can easily output frame counts above 201 (201 makes a perfect loop). Can you give more information on the error or the symptoms of the failed generation? This is very unclear and requires more information to troubleshoot.

    ImSuckFeb 16, 2025

    @SpazAI So there is no error; the process just stalls. It'll start generating and then get stuck at about 5-20%. Regardless of how long I leave it, it just stalls and never progresses past that.

    Ponder_StibbonsFeb 16, 2025

    @ImSuck You're running out of VRAM. Stick a purge node in there to free it up between frames.

    ImSuckFeb 17, 2025

    @Ponder_Stibbons I'd agree, but it also randomly happens at 60 frames sometimes as well, which I have successfully generated dozens of times. I am currently running a 4070 Ti Super, a 7700X CPU, and 64 GB of RAM.

    Ponder_StibbonsFeb 19, 2025

    @ImSuck RAM is useless. I've got 128 GB and it just sits there looking stupid. My 4090 with 24 GB VRAM will refuse to cooperate if I don't stick my finger down its throat every few frames, especially with stuff like interpolation when I feel like making a two-hour still image. It's the only thing that I know works for sure. There are a bunch of incantations and herb-burning ceremonies too, but honestly I don't think they help at all. Purge like a teenager who hates her parents.
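
    For what it's worth, the purge nodes being discussed boil down to something like this minimal Python sketch (purge_vram is a hypothetical name for illustration, not an actual node in the workflow):

    ```python
    import gc


    def purge_vram() -> None:
        """Roughly what a VRAM purge node does between stages: drop dead
        Python references, then ask PyTorch to hand its cached (but
        currently unused) VRAM back to the driver."""
        gc.collect()
        try:
            import torch
            if torch.cuda.is_available():
                # frees cached allocator blocks, not tensors still in use
                torch.cuda.empty_cache()
        except ImportError:
            pass  # torch not installed; nothing to purge
    ```

    Note this only releases memory that nothing still references, which is why heavy workflows also unload or swap models between stages.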

    sergiogmxFeb 23, 2025

    @SpazAI Where can you increase the amount of frames?

    SpazAIFeb 23, 2025

    @sergiogmx There's a FRAMES component containing an INT option, part of the comfy-image-saver...

    yaoleynikov991Feb 16, 2025 · 1 reaction
    CivitAI

    PathchSageAttentionKJ
    No module named 'sageattention'

    ```
    "C:\AI\ComfyUI\ComfyUI\custom_nodes\comfyui-kjnodes\nodes\model_optimization_nodes.py", line 24, in patchmodules
        from sageattention import sageattn
    ModuleNotFoundError: No module named 'sageattention'
    ```

    chrisw225Feb 16, 2025

    same here, did you fix it?

    7058476Feb 18, 2025

    You need to have the sageattention pip package installed from GitHub, in version 2. For SageAttention you need WSL.

    citizynkyng962Feb 16, 2025
    CivitAI

    Can't get it to work. Launching with the sageattention argument results in Stability Matrix telling me I need to install sageattention, regardless of whether I have it in the Stability Matrix assets or the ComfyUI venv. Is there a way to bypass this?

    KanginakFeb 16, 2025 · 4 reactions
    CivitAI

    It doesn't work. I've installed all the nodes, but it says: Missing node - ApplyFBCacheOnModel

    BasstheticsFeb 17, 2025
    CivitAI

    For the KSampler, I don't see gradient_estimation available.

    svayaFeb 17, 2025

    Had same issue, updated ComfyUI and now it's there.

    GlimmeringMoonFeb 17, 2025
    CivitAI

    I got it working but have a few issues with I2V. How do I retain the person's face while upscaling? It changes. Also, how do I increase the seconds?

    7058476Feb 18, 2025
    CivitAI

    Note for those wanting to use PathchSageAttentionKJ: you need to install sageattention from the GitHub repository, as of today version 2.1.1. By default, if you just use pip, it will install 1.0.6 and you will not be able to select anything.

    And of course you need WSL to install this. It's a whole process. Look on YouTube for "Install Hunyuan in WSL (w/Sage) for Incredible Results! Step-by-step Installation!"

    7058476Feb 18, 2025 · 1 reaction

    Also add --use-sage-attention to your launch parameters.
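
    Assuming the upstream repo is thu-ml/SageAttention on GitHub, a version-2 source install looks roughly like this; it compiles CUDA kernels, hence the WSL/CUDA-toolkit requirement:

    ```shell
    # inside WSL, with ComfyUI's Python environment active:
    git clone https://github.com/thu-ml/SageAttention
    cd SageAttention
    pip install -e .   # builds against your local CUDA toolkit
    ```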

    hellolacoFeb 18, 2025 · 4 reactions
    CivitAI

    do you have an updated discord invite?

    dizzyhazy958Feb 18, 2025 · 1 reaction
    CivitAI

    I'm fairly new to Comfy. I opened this workflow and with every missing item error, I installed nodes through the manager and downloaded missing files and put them in their corresponding folder. Now, with everything loaded, there are no error messages. When I try to generate I2V or T2V, the task is immediately moved to the History section and nothing has been generated. I'm using ComfyUI portable v.3.7. Anyone have any idea what might be the problem?

    ZUSIMOFeb 19, 2025

    "I am using ComfyUI portable" - that is your problem. Manually install ComfyUI (not Desktop). Portable is terrible for running workflows that require this many custom nodes.

    dizzyhazy958Feb 19, 2025

    @ZUSIMO Thank you, I will give that a shot.

    DIhanFeb 20, 2025
    CivitAI

    Anyone else having an issue with not finding the Easy Set node?

    LatentDream
    Author
    Feb 22, 2025

    weird

    DIhanFeb 22, 2025

    @LatentDream It's so odd, as I can get it to work in my home setup with all the latest updates, but can't get it to work on RunPod. I thought it was a Python issue, but I tried a few models and no luck. I ended up using the KJNodes version, which is pretty much the same thing and has no issues.

    I previously had an issue with Easy-Use where I'm sure someone messed up the InstantID node with an image of someone else, and all generations had this guy's face. I've lost trust in that node.

    Jaya1010Feb 20, 2025
    CivitAI

    Was working fine, and now all of a sudden today it's giving me this error:

    VAEDecodeTiled

    'NoneType' object is not subscriptable

    If I skip the initial decode and try to upscale I get the error:

    LatentUpscale

    'NoneType' object has no attribute 'copy'

    DIhanFeb 22, 2025

    Had a similar issue when using ComfyUI's VAE, but Kijai's one worked: https://huggingface.co/Kijai/HunyuanVideo_comfy/tree/main

    ygmdirFeb 21, 2025
    CivitAI
    Experiencing an error, can't use. RuntimeError: Error(s) in loading state_dict for CLIPTextModel: size mismatch for text_model.embeddings.position_embedding.weight: copying a param with shape torch.Size([77, 768]) from checkpoint, the shape in current model is torch.Size([248, 768]).

    Note this is happening on the BASIC model, under the DualCLIPLoader node. Yes, I'm using both Clip-vit-large and Long-Vit-L, and this happens whether I'm using Image2Image or Text2Image... AND DualCLIPLoader happens BEFORE any image is even accepted, I believe, so I don't think this is a dimension issue. Also not sure where the 77/768 or 248/768 numbers are even coming from.

    trickybarrel72984Feb 22, 2025
    CivitAI

    Can anyone help me get an api version of this working ?

    DIhanFeb 22, 2025

    serverless or using on a cloud gpu?

    I setup a template and guide to using this on Runpod. Also tested on novita.ai but seems to have better prices at runpod

    https://civitai.com/articles/11303

    DIhanFeb 22, 2025

    I read that wrong. You mean an API for the workflow?

    trickybarrel72984Feb 22, 2025

    @DIhan Yes, an API for the workflow. This one is overkill, tbh; I just need the t2v that works via API.

    trickybarrel72984Feb 23, 2025 · 1 reaction

    @DIhan It's a different workflow; it doesn't have the fast LoRA or torch compile. With Ultra 1.4 it's insane, you can generate videos in 1 min.

    Workflows
    Hunyuan Video

    Details

    Downloads
    15,936
    Platform
    CivitAI
    Platform Status
    Available
    Created
    2/6/2025
    Updated
    5/13/2026
    Deleted
    -

    Files

    hunyuanAllinone_AllVersions.zip

    Mirrors

    HuggingFace (1 mirror)