CivArchive
    Preview 50932272

    HUNYUAN | AllInOne


    No need to buzz me. Feedback is much more appreciated. | last update: 06/03/2025


    ⬇️ OFFICIAL Image To Video V2 Model is out! ⬇️ COMFYUI UPDATE IS REQUIRED!
    Get files here:
    link 1 paste in: \models\clip_vision
    link 2 or Link 3 paste in: \models\diffusion_models (pick the one that works best for you)
    ⚠️ I2V model got an update on 07/03/2025 ⚠️

    These workflows have evolved over time through various tests and refinements,
    thanks also to the huge contributions of this community.
    Requirements, special thanks, and credits below.


    Before commenting, please keep in mind:

    • The Advanced and Ultra workflows are intended for more experienced ComfyUI users.
      If you choose to install unfamiliar nodes, you take full responsibility.

    • I make these workflows for fun, in my free time.

      Most issues you might encounter have probably already been widely discussed and solved on Discord, Reddit, and GitHub, or addressed in the description of the workflow you're using, so please read carefully
      and consider doing some searching before commenting.

    • I started this alone, but now there's a small group of people contributing their passion, experiments, and cool findings. Credits below.
      Thanks to their contributions this small project continues to grow and improve
      for everyone's benefit.

    • The Fast LoRA may work best when combined with other LoRAs, allowing you to reduce the number of steps.

      - Wave Speed can significantly reduce inference time but may introduce artifacts.

      - Achieving good results requires testing different settings. Default configurations may not always work, especially when using LoRAs, so experiment to find the settings that fit best. THERE ARE NO UNIVERSAL SETTINGS THAT WORK FOR EVERY CASE.

      - You can also try switching to a different sampler/scheduler and see which works best for your case: try UniPC simple, LCM simple, DDPM, DPMPP_2M beta, Euler normal/simple/beta, or the new "gradient_estimation".
      (Samplers/schedulers need to be set for each stage and mode; they are not settings found in the console)



    Legend to help you choose the right workflow:

    ✔️ Green check = UP TO DATE version for its category.
    Includes the latest settings, tricks, updated nodes and samplers, and works on the latest ComfyUI.

    🟩🟧🟪 Colors = Basic / Advanced / Ultra

    ❌ = Based on deprecated nodes; you'll have to fix it yourself if you really want to use it.


    Quick Tips:


    Low Vram? Try this:

    and/or try using the GGUF models available here.



    RTX 4000 series? Use this:



    Want more tips?

    Check my article: https://civarchive.com/articles/9584


    All workflows available on this page are designed to prioritize efficiency, delivering high-quality results as quickly as possible.
    However, users can easily customize settings through intuitive, fast-access controls.

    For those seeking ultra-high-quality videos and the best output this model can achieve, adjustments may be necessary, such as increasing steps, modifying resolutions, reducing TeaCache/WaveSpeed influence, or disabling the Fast LoRA entirely.

    Personally, I aim for an optimal balance between quality and speed. All example videos I share follow this approach, utilizing the default settings provided in these workflows. While I may make minor adjustments to aspect ratio, resolution, or step count depending on the scene, these settings generally offer the best all-around performance.


    WORKFLOWS DESCRIPTION:


    🟩"I2V OFFICIAL"

    Requires:


    🟩"BASIC All In One"


    Uses native Comfy nodes; it has three modes of operation:

    • T2V

    • I2V (sort of: an image is duplicated across N frames and sent to the latent, with a denoising level balanced to preserve the structure, composition, and colors of the original image. I find this approach highly useful, as it saves inference time and allows better guidance toward the desired result). Obviously this comes at the expense of overall motion, as lowering the denoise level too much causes the final result to become static, with minimal movement. The denoise threshold is up to you to decide based on your needs.

      There are other methods to achieve a more accurate image-to-video process, but they are slow. I didn't even include a negative prompt in the workflow because it doubles the waiting time.

    • V2V (same concept as I2V above)
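    The I2V trick above (duplicate the image across frames, then denoise only partially) can be sketched in plain PyTorch. This is a minimal illustrative sketch, not the workflow's actual node graph; `pseudo_i2v`, the tensor layout, and the blend formula are assumptions:

```python
import torch

def pseudo_i2v(image_latent: torch.Tensor, num_frames: int, denoise: float) -> torch.Tensor:
    """Repeat one image latent across the time axis, then only partially
    noise it so sampling preserves structure, composition, and colors.

    image_latent: [B, C, 1, H, W] latent of the source image.
    denoise: lower = closer to the image (more static video);
             higher = more motion, but drifts from the source.
    """
    video_latent = image_latent.repeat(1, 1, num_frames, 1, 1)  # image * N frames
    noise = torch.randn_like(video_latent)
    # A sampler would start from this partially-noised latent
    # instead of pure noise.
    return (1.0 - denoise) * video_latent + denoise * noise
```

    The sampler then runs on the returned latent: the lower the denoise value, the more static the result, exactly the trade-off described above.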

    Requires:
    https://github.com/chengzeyi/Comfy-WaveSpeed
    https://github.com/pollockjj/ComfyUI-MultiGPU


    🟧 "ADVANCED All In One TEA ☕"


    An improved version of the BASIC All In One TEA ☕, with additional methods to upscale faster, plus a lightweight captioning system for I2V and V2V that consumes only an additional 100 MB of VRAM.

    Upscaling can be done in three ways:

    1. Upscaling using the model. Best Quality. Slower (Refine is optional)

    2. Upscale Classic + Refine. Uses a special video upscaling model that I selected from a crazy number of video upscaling models and tests; it is one of the fastest and gives results with good contrast and well-defined lines. It's certainly not the optimal choice when used alone, but when combined with the REFINE step it produces well-defined videos. This option is a middle ground in timing between the first and third methods.

    3. Latent upscale + Refine. This is my favorite: fastest, decent quality.
      This method is essentially the same as the first, which is basically V2V, but at slightly lower steps and denoise.

    Three different methods, more choices based on preferences.
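    As a rough sketch of what "latent upscale" means here: resize the latent itself, then re-sample at lower steps/denoise (the Refine pass). The function below is illustrative, not the workflow's node; it uses bicubic interpolation, the mode the Ultra workflow later switched to:

```python
import torch
import torch.nn.functional as F

def latent_upscale(latent: torch.Tensor, scale: float) -> torch.Tensor:
    """Spatially upscale a [B, C, T, H, W] video latent with bicubic
    interpolation, applied per frame (bicubic requires 4D input)."""
    b, c, t, h, w = latent.shape
    # Fold the time axis into the batch so each frame is a 4D image.
    frames = latent.permute(0, 2, 1, 3, 4).reshape(b * t, c, h, w)
    frames = F.interpolate(frames, scale_factor=scale,
                           mode="bicubic", align_corners=False)
    nh, nw = frames.shape[-2:]
    # Restore the [B, C, T, H, W] layout.
    return frames.reshape(b, t, c, nh, nw).permute(0, 2, 1, 3, 4)
```

    The resized latent would then go back through the sampler at reduced denoise, which is why this route is the fastest of the three.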

    Requirements:

    -ClipVitLargePatch14
    download model.safetensors

    rename it to "clip-vit-large-patch14_OPENAI.safetensors"

    paste it in \models\clip

    -RealESR General x4 v3

    paste it in \models\ESRGAN\

    -LongCLIP-SAE-ViT-L-14
    -https://github.com/pollockjj/ComfyUI-MultiGPU
    -https://github.com/chengzeyi/Comfy-WaveSpeed

    Update Changelogs:

    |1.1|
    Faster upscaling

    Better settings

    |1.2|
    removed redundancies, better logic
    some errors fixed
    added an extra box to load a video and directly upscale it

    |1.3|

    • New prompting system.

      Now you can copy and paste any prompt you find online, and it will automatically modify the words you don't like and/or add extra random words.

    • Fixed some latent auto-switch bugs (these gave me serious headaches)

    • Fixed seed issue, now locking seed will lock sampling

    • Some UI cleaning

    |1.4|

    • Batch Video Processing – Huge Time Saver!

      You can now generate videos at the bare minimum quality and later queue them all for upscaling, refining, or interpolating in a single step.

      Just point it to the folder where the videos are saved, and the process will be done automatically.

    • Added Seed Picker for Each Stage (Upscale/Refine)

      You can now, for example, lock the seed during the initial generation, then randomize the seed for the upscale or refine stage.

    • More Room for Video Previews

      No more overlapping nodes when generating tall videos (don't exaggerate with the ratio, obviously)

    • Expanded Space for Sampler Previews

      Enable preview methods in the manager to watch the generation progress in real time.

      This allows you to interrupt the process if you don't like where it's going.

      (I usually keep previews off, as enabling them takes slightly longer, but they can be helpful in some cases.)

    • Improved UI

      Cleaned up some connections (noodles), removed redundancies, and enhanced overall efficiency.

      All essential nodes are highlighted in blue and emphasized right below each corresponding video node, while everything else (backend) like switches, logic, mathematics, and things you shouldn't touch have been moved further down. You can now change settings or replace nodes with those you prefer way more easily.

    • Notifications

      All nodes related to the browser notifications sent when each step is completed, which some people find annoying, have been moved to the very bottom and highlighted in gray. So, if they bother you, you can quickly find them, select them, and delete them.

    |1.5|

    • general improvements, some bug fixes

    NB:
    These two warnings in the console are completely fine; just ignore them.
    WARNING: DreamBigImageSwitch.IS_CHANGED() got an unexpected keyword argument 'select'
    WARNING:
    SystemNotification.IS_CHANGED() missing 1 required positional argument: 'self'
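    For context, a guess at the likely cause (not a confirmed fix): ComfyUI calls a node's IS_CHANGED with the node's input values, so a signature that doesn't accept them triggers exactly these warnings, which are noisy but harmless. A tolerant signature looks like this hypothetical node:

```python
class ExampleSwitchNode:
    # Hypothetical node, illustrating the IS_CHANGED convention only:
    # ComfyUI passes the node's inputs (e.g. 'select') to IS_CHANGED,
    # so a signature that doesn't accept them logs warnings like the
    # ones quoted above.
    @classmethod
    def IS_CHANGED(cls, *args, **kwargs):
        # Return a value that differs whenever the node should re-run;
        # NaN never equals itself, so this forces re-execution every time.
        return float("nan")
```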


    🟪 "AIO | ULTRA "


    Embrace This Beast of Mass Video Production!
    This version is for truly brave professionals and unlocks a lot of possibilities.
    Plus, it includes settings for higher quality, sharper videos, and even faster speed, all while being nearly glitch-free.
    All older workflows have also been updated to minimize glitches, as explained in my previous article.

    From Concept to Creation in Record Time!

    We are achieving world-record speed here, but at the cost of some complexity. These workflows are becoming increasingly intimidating despite efforts to keep them clean and hide all automations in the back-end as much as possible.

    That's why I call this workflow ULTRA: a powerhouse for tenacious Hunyuan users who want to achieve the best results in the shortest time possible, with all tools at their fingertips.


    Key Features and Improvements:

    • Handy Console: Includes buttons to activate stages with no need to connect cables or navigate elsewhere. Everything is centralized in one place (Control Room), and functions can be accessed with ease.

    • T2V, I2V*,V2V, T2I, I2I Support: Seamless transitions between different workflows.

      *I2V: an image is duplicated across N frames and sent to the latent. The official I2V model is not out yet; there's a temporary trick to do I2V here which requires Kijai's nodes.

    • Wildcards + Custom Prompting Options: Switch between Classic prompting with wildcards or add random words in a dedicated box, with automatic customizable word swapping or censoring.

    • Video Loading: Load videos directly into upscalers/refiners and skip the initial inference stage.

    • Batch Video Processing: Upscale or Refine multiple videos in sequence by loading them from a custom folder.

    • Interpolation: Smooth frame transitions for enhanced video quality.

    • Random Character LoRA Picker: Includes 9 LoRA nodes in addition to fixed LoRA loaders.

    • Upscaling Options: Supports upscaling, double upscaling, and downscaling processes.

    • Notifications: Receive notifications for each completed stage, organized in a separate section for easy removal if necessary.

    • Lightweight Captioning: Enables captioning for I2V and V2V with minimal additional VRAM usage (only 100MB).

    • Virtual Vram support.

      Use the GGUF model with Virtual VRAM to create longer videos or increase resolution.

    • Hunyuan/Skyreel (T2V) quick merges slider

    • Switch from Regular Model to Virtual Vram / GGUF with a slider

    • Latent preview to cut down the upscaling process.

    • A dedicated LoRA line exclusively for upscalers, toggled via a dedicated button.

    • RF edit loom

    • Upscale using Multiplier or "set to longest size" target

    • a button to toggle Wave Speed and FastLoRA as needed for upscaling only.

    • UI improvements based on user feedback


    - Sequential Upscale Under 1x / Double Upscaling
    You can now downscale using the upscale process and then re-upscale with the refiner, or customize upscaler multipliers to upscale 2 times.

    • New Functionality:

      • The upscale value range now includes values as low as 0.5.

      • Two sliders are available: one for the initial upscale and another for the refiner (essentially another sampler, always V2V).

    • Applications:
      Upscale, Refine or combine the two

      • Upscale fast (latent resize + sampler) or accurate (resize + sampler)

      • Refine (works the same as upscale, can be used alone or as an auxiliary upscaler)

      • Double upscaling: Start small and upscale significantly in the final stage.

      • Downscale and re-upscale: Deconstruct at lower resolution and reconstruct at higher quality.

      • Combos: Upscale & Refine / Downscale & Upscale



    - Skip Decoders/Encoders Option
    Save significant time by skipping raw decoding for each desired stage and going directly to the final result.

    • How It Works: If your prompt is likely to produce a good output and the preview method ("latent2RGB") is active in the manager, you can monitor the process in real-time. Skip encoding/decoding by working exclusively in the latent space, generating and sending latent data directly to the upscaler until the process completes.

    • Example:
      A typical medium/high-quality generation might involve:

      • Resolution: ~ 432x320

      • Frames: 65

      • One Upscale: 1.5x (to 640x480)

      • Total Time: 162 seconds

      In this example case, by activating the preview in the manager and skipping the first decoder (the preview before upscaling), you can save ~30 seconds. The process now takes 133 seconds instead of 162.
      Bypassing additional decoders (e.g., upscale further or refinement) can save even more time.
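    The decoder-skip idea can be pictured like this (hypothetical function names, not the workflow's actual nodes; the point is that the expensive VAE decode happens only once, at the end):

```python
def generate(prompt, sampler, upscaler, vae):
    """Two-stage generation that stays in latent space until the end."""
    latent = sampler(prompt)     # stage 1: base generation
    # No vae.decode(latent) here: judge the intermediate result with the
    # cheap "latent2RGB" preview in the manager instead of a full decode.
    latent = upscaler(latent)    # stage 2: upscale purely in latent space
    return vae.decode(latent)    # decode once, only for the final video
```

    Each decode you skip between stages is time saved, which is where the ~30 seconds in the example above comes from.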


    - Image Generation (T2I and I2I)
    Explore the HUN latent space with these image generation capabilities.

    When the number of frames is set to 1, the image node activates automatically, allowing the image to be saved as a PNG.
    Use the settings shown here for the best results:

    - Structural Changes / Additional Features

    • Motion Guider for I2V
      This feature enhances motion for image-to-video workflows, lowering the chances of getting a static video as the result.

    • 9 Random Character Loras Loader: Previously limited to 5, now expanded to 9.

    • Random Character Lora Lock On/Off:

      • By default, each seed corresponds to a random LoRA
        (e.g., seed n° 667 = LoRA n° 7).

      • Now, you can unlock this "character Lora lock on seed" and regenerate the same video with a different random Lora while maintaining the main seed.

    • Clarifications:
      Let’s call things by their real names:

      • "Refine" and "Upscale" are both samplers here. Each optimized for specific stages:

        • Upscale: Higher steps/denoise, fast results, balanced quality.

        • Refine: Lower steps/denoise, focused on fixing issues and enhancing details.

      • Refine can work alone, without upscaling, to address small issues or improve fine details.

    • UI Simplification:
      The "classic upscale" is now replaced by a faster and better-performing resize + sharpness operation and hidden in the back-end to save space.

    • Frame Limit Issue (101+ Frames):
      Generating more than 101 frames with latent upscale can cause problems. To address this, I added an option to upscale videos before switching to latent processing.
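    The "character LoRA lock on seed" behaviour described above can be pictured with this sketch (NUM_LORAS, the reroll parameter, and the seed mapping are illustrative assumptions, not the workflow's actual node logic):

```python
import random

NUM_LORAS = 9  # the workflow ships 9 random-character LoRA slots

def pick_lora(seed: int, locked: bool = True, reroll: int = 0) -> int:
    """Map a seed to a LoRA slot in 1..NUM_LORAS.

    locked=True : the same seed always selects the same LoRA slot.
    locked=False: mix in an independent 'reroll' counter, so the main
                  seed (and thus the video) stays fixed while the
                  character LoRA can change between runs.
    """
    effective = seed if locked else seed + reroll
    return random.Random(effective).randint(1, NUM_LORAS)
```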

    - Bug Fixes

    • Latent Upscale Change:
      Latent upscaling now uses bicubic interpolation instead of nearest-exact, which performs better based on testing.

    • "Cliption" Bug Fixed

    • 201-Frame Fix:
      Generating 201-frame perfect loops caused artifacts with latent upscale. Switching to "resize" via the pink console buttons now resolves this issue.

    - Performance and other infos:

    Once you master it, you won’t want to go back. This workflow is designed to meet every need and handle every case, minimizing the need to move around the board too much. Everything is controlled from a central "Control Room."

    Traditionally, managing these functions would require connecting/disconnecting cables or loading various workflows. Here, however, everything is automated and executed with just a few button presses.

    Default settings (e.g., denoise, steps, resolution) are optimized for simplicity, but advanced users can easily adjust them to suit their needs.

    -Limitations:

    1. No Audio Integration:
      While I have an audio-capable workflow, it doesn’t make sense here. Audio should be processed separately for professional results.

    2. No Post-Production Effects:
      Effects like color correction, filmic grain, and other post-production enhancements are left to dedicated editing software or workflows. This workflow focuses on delivering a pure video product.

    3. Interpolation Considerations:
      Interpolation is included here. I set up the fastest one I could find, not necessarily the best. For best results, I typically use Topaz for both extra upscaling and interpolation after processing, but it's up to the user to choose their favourite interpolation method or final upscaling if needed.

    Requirements:

    ULTRA 1.2:
    -Tea cache

    -LongCLIP-SAE-ViT-L-14

    -ClipVitLargePatch14

    ULTRA 1.3:
    -UPDATE TO LATEST COMFY IS NEEDED!
    -Wave Speed

    -LongCLIP-SAE-ViT-L-14

    -ClipVitLargePatch14

    ULTRA 1.4 / 1.5:
    -UPDATE TO LATEST COMFY IS NEEDED!
    https://github.com/pollockjj/ComfyUI-MultiGPU
    https://github.com/chengzeyi/Comfy-WaveSpeed
    https://github.com/city96/ComfyUI-GGUF
    https://github.com/logtd/ComfyUI-HunyuanLoom
    https://github.com/kijai/ComfyUI-VideoNoiseWarp


    NB:
    The following warnings in the console are completely fine; just ignore them:
    WARNING: DreamBigImageSwitch.IS_CHANGED() got an unexpected keyword argument 'select'
    WARNING:
    SystemNotification.IS_CHANGED() missing 1 required positional argument: 'self'

    Update Changelogs:

    |1.1|
    Better color scheme to make it easier to understand how the upscaling stages work.
    Check the images to understand.

    |1.2|
    Wildcards.
    You can now switch from the classic prompting system (with wildcards allowed)
    to the fancy one previously available.

    |1.3|

    • An extra wavespeed boost kicks in for upscalers.

    • Changed samplers to native Comfy: no more TTP, no more interrupt error messages.

    • Tea cache is now a separate node.

    • Fixed a notification timing error and text again.

    • Replaced a node that was causing errors for some users: "if any" now swaps with "eden_comfy_pipelines."

    • Added SPICE, an extra-fast LoRA toggle that activates only in upscalers to speed up inference at lower steps and reduce noise.

    • Added Block Cache and Sage to the setup. Users who have them working can enable them.

    • Changed the default sampler from Euler Beta to the new "gradient_estimation" sampler introduced in the latest Comfy update.

    • Added a video info box for each stage (size, duration).

    • Removed "random lines."

    • Adjusted default values for general use.

    • Upscale 1 can now function as a refiner as well.

    • When pressing "Latent Resize" or "Resize," it will automatically activate the correct sampler.

    • A single-frame image is now displayed in other stages as well (when active).

      Thanks to all users who contributed on Discord to these workflow improvements!

    |1.4|

    • Virtual Vram support

    • Hunyuan/Skyreel quick merges slider

    • Toggle to switch from Regular Model to Virtual Vram / GGUF

    • Longer vids / Higher Res / extreme upscaling now possible

    • Default res changed to 480x320, which seems like a balanced middle ground for low-res quick vids; most users should be OK with that.

    • Latent preview for skip preview mode

    • Switch toggle to enable/disable Exclusive LoRA for upscalers

    • RF edit loom

    • V2V loading time improved

    • Upscale to longest size target

    • Fixed slider upscale mismatch

    • info node moved

    • clean up and fixes

    • better settings for general use

    • Upscale 1 can now use the optional "resize to longest size" slider

    • added extra wave speed toggle for upscalers

    • added exclusive loras line for upscalers

    • general fixes

    • Ui improvement based on users feedbacks

    • fixed fast lora string issue on bypass in upscalers

    • more cleaning

    • Changed exclusive LoRAs for upscalers again; the main Fast LoRA will NOT pass through that line, since it already has a separate toggle (upscale with extra Fast LoRA), previously called SPICE FOR UPSCALING.

    • fixed output node size for videos

    • moved resize by "longest size" toggle in extra menu

    • added extra wave speed toggle

    • Control room is finished.. for now. I don't want to stress AIDoctor further; he already did a great job

    • Lowered the Fast LoRA default value to 0.4

    • fixed VIDEO BATCH LOADING

    |1.5|

    • general improvements, UI improvements, some bug fixes

    • Leap Fusion support

    • Go With The Flow support




    Bonus TIPS:


    Here's an article with all the tips and tricks I'm writing as I test this model:

    https://civarchive.com/articles/9584
    If you struggle to use my workflows for any reason, you can at least refer to the article above. You will get a lot of precious quality-of-life tips for building and improving your Hunyuan experience.

    All the workflows labeled with an ❌ are OLD and highly experimental; they rely on Kijai nodes that were released at a very early stage of development.
    If you want to explore those, you'll need to fix them yourself, which should be pretty easy.





    CREDITS

    Everything I do, I do in my free time for personal enjoyment.
    But if you want to contribute, there are people who deserve WAY more support than I do, like Kijai.
    I'll leave his link; if you're feeling generous, go support him.
    Thanks!

    Last but not least:
    Thanks to this community, especially those who have given me advice and experimented with my workflows, helping improve them for everyone.

    Special thanks to:
    https://civarchive.com/user/galaxytimemachine
    for their particular and precise method of finding the best settings, and for all the tests conducted.

    https://civarchive.com/user/TheAIDoctor
    for his brilliance and for dedicating his time to create and modify special nodes for this workflow madness! such an incredible person.

    and
    https://github.com/pollockjj/ComfyUI-MultiGPU

    Also special thanks to:
    Tr1dae
    for creating HunyClip, a handy tool for quick video trimming. If you work with heavy editing software like DaVinci Resolve or Premiere, you'll find this tool incredibly useful for fast operations without the need to open resource-intensive programs.

    Check it out here: [link]


    Have fun


    Comments (29)

    RojerDodgerJan 11, 2025· 1 reaction
    CivitAI

    Everyday you put out some real gold.. Thanks for the new updates :)

    LatentDream
    Author
    Jan 11, 2025· 1 reaction

    thanks!

    0101ARTJan 11, 2025· 1 reaction
    CivitAI

    Awesome 👑

    half_realJan 11, 2025· 1 reaction
    CivitAI

    For img2vid you say "There are other methods to achieve a more accurate image-to-video process, but they are slow. I didn’t even included a negative prompt in the workflow because it doubles the waiting times."

    What are they? I can't really find anything beyond your method and using a different model that supports i2v and enhancing it with hunyuan vid2vid. I wonder if it is possible to, at every iteration, replace the first frame of the latent frames with an encoded and appropriately noised version of the image you are trying to make a video out of, thereby forcing the final video to conform to the first frame.

    LatentDream
    Author
    Jan 12, 2025

    You need to install Kijai's nodes and play with the RF inversion workflow available here: https://github.com/kijai/ComfyUI-HunyuanVideoWrapper/tree/main/examples
    plus add some other nodes for curves that I haven't really figured out; I don't think anyone really has.
    Kijai just wrote a comment saying that it's tricky but possible,
    talking about curves or something, then the conversation ended there.
    Good luck.. you'll need it 🤣

    zanshing1630Jan 11, 2025
    CivitAI

    As much as I appreciate the power of ComfyUI, it's still a HUGE pain in places it doesn't need to be. There is no reason to have to manually search for and install missing model files. This could be automated much like missing custom nodes.

    For example, it took me 20+ minutes to find and install the missing models for this workflow, and ~30s for the missing custom nodes. Does that seem right?

    Keroro_GunsoJan 11, 2025

    No. Install ComfyUI Manager, click "search for missing nodes". It's automated.

    TurboCoomerJan 11, 2025

    it is possible, I used bunch of nodes that downloaded the models they intended to work with automatically
    dont know whats your problem with googling model name and instantly getting link to github/huggingface tho

    zanshing1630Jan 11, 2025· 2 reactions

    @TurboCoomer The point is there's no built-in, comprehensive feature to download models used by all devs like there is for custom nodes. My "problem" as you put it is that I find it tedious that I have to spend time and attention on something which should be automated. I don't like the errors and time-wasting that come from putting models in the wrong directories with NO guidance at all from workflow developers.

    If I were to publish a workflow for ComfyUI there would be conspicuous notes on where to download each model and which directory they go in if not including custom nodes which do the downloading automatically (if they were robust enough to not just provide another annoyance)

    Keroro_GunsoJan 12, 2025· 1 reaction

    @zanshing1630 Welcome to comfy. lol.

    half_realJan 12, 2025· 1 reaction

    @zanshing1630 A lot of nodes automatically download the weights they need if they don't find them. It's up to the ComfyUI extension creators to design their custom nodes that way. Everything else, I would expect the workflow writers to explain, yes.

    joesixpaqJan 12, 2025

    @zanshing1630 All you need is this
    https://github.com/ltdrdata/ComfyUI-Manager

    nerfmeJan 12, 2025

    A lot of the nodes on recent workflows forced download on the various models on me even when I had a better one or alternate one fml, and right after clicking generate, which is super annoying because my ram/vram is fairly stuffed. I have to remember to look for the 'node-creator/' folder name prefix lol. It catches me every time.

    LatentDream
    Author
    Jan 13, 2025· 1 reaction

    @nerfme ... yeah, that catches me every time I try workflows and rush to run them without checking what's inside. I wish there were a notification popup asking "do you want to download this or that?" instead of everything being automatic.

    TurboCoomerJan 11, 2025
    CivitAI

    Workflow throws an error that TeaCacheHunyuanVideoSampler is missing while it's installed. I also installed both TeaCache repos from the manager, still not working. And without it the workflow refuses to load at all.

    AicushJan 12, 2025· 2 reactions

    I had the same issue, comfy would not allow me to download it, I realised its part of the comfy TTP tool set - https://github.com/TTPlanetPig/Comfyui_TTP_Toolset link is provided in authors instructions. I manually deleted the TTP node from my custom nodes and then git cloned the TTP and it worked for me. Good luck

    TurboCoomerJan 12, 2025

    @Aicush thanks bro, now it seems that all nodes are in place, but workflow is still blank somehow :(

    Agent_SmthJan 12, 2025

    @TurboCoomer worked here after installed those ttp nodes

    LatentDream
    Author
    Jan 12, 2025

    I'm using these specific nodes because, unlike the ones available through the manager and automatically downloadable, they allow for extra speed that is probably not included in the standard nodes, for reasons I'll explain here:
    those available through the manager allow 2x as the maximum speed, likely because beyond that threshold the TeaCache method generally produces less acceptable results, or completely blurry, unusable ones.
    However, what I discovered is that when used ONLY during the upscaling process,
    this extra speed becomes useful: there are more pixels to work with, and some time can be saved with good results.
    Evidently this detail was overlooked, and they decided to remove speeds beyond 2x in all the other available nodes. Yet the nodes I suggest allow 4.4x and 3.2x speeds, or even a custom multiplier.

    TurboCoomerJan 12, 2025

    @LatentDream can you please post hires screenshot of your workflow so I can recreate it manually, because I cant load it for some reason. Right click on empty space -> workflow image -> export

    LatentDream
    Author
    Jan 13, 2025

    @TurboCoomer ok. check second image

    Agent_SmthJan 12, 2025· 1 reaction
    CivitAI

    Your workflows are truly valuable. After trying countless settings, I always find yours very similar to mine or even better. Thank you for taking the time to share your discoveries

    tiggletattleJan 12, 2025
    CivitAI

    I am getting these errors, any ideas?


    ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-VideoHelperSuite\videohelpersuite\nodes.py:96: RuntimeWarning: invalid value encountered in cast

    return tensor_to_int(tensor, 8).astype(np.uint8)

    Prompt executed in 125.95 seconds

    got prompt

    Failed to validate prompt for output 298:

    * Upscale Model Loader 396:

    - Value not in list: model_name: '4x_foolhardy_Remacri.pth' not in []

    Output will be ignored

    Failed to validate prompt for output 305:

    Output will be ignored

    Failed to validate prompt for output 202:

    Output will be ignored

    Failed to validate prompt for output 404:

    Output will be ignored

    Failed to validate prompt for output 300:

    Output will be ignored

    Failed to validate prompt for output 401:

    Output will be ignored

    WARNING: DreamBigLatentSwitch.IS_CHANGED() got an unexpected keyword argument 'select'

    WARNING: DreamBigLatentSwitch.IS_CHANGED() got an unexpected keyword argument 'select'

    WARNING: DreamBigImageSwitch.IS_CHANGED() got an unexpected keyword argument 'select'

    WARNING: DreamBigImageSwitch.IS_CHANGED() got an unexpected keyword argument 'select'

    WARNING: SystemNotification.IS_CHANGED() missing 1 required positional argument: 'self'

    WARNING: SystemNotification.IS_CHANGED() missing 1 required positional argument: 'self'

    WARNING: SystemNotification.IS_CHANGED() missing 1 required positional argument: 'self'

    WARNING: SystemNotification.IS_CHANGED() missing 1 required positional argument: 'self'

    New prompt: philippine

    young beautiful woman,

    with medium auburn hair

    and a happy face expression,

    mouth open.

    The background is indoor

    Requested to load HunyuanVideoClipModel_

    loaded completely 9.5367431640625e+25 7894.8529052734375 True

    Requested to load HunyuanVideo

    loaded partially 8754.875 8754.7490234375 313

    100%|████████████████████████████████████████████████████████████████████████████████████| 9/9 [00:43<00:00, 4.81s/it]

    Requested to load AutoencoderKL

    0 models unloaded.

    loaded completely 9.5367431640625e+25 470.1210079193115 True

    ...ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-VideoHelperSuite\videohelpersuite\nodes.py:96: RuntimeWarning: invalid value encountered in cast

    return tensor_to_int(tensor, 8).astype(np.uint8)

    Prompt executed in 64.77 seconds

    LatentDream
    Author
    Jan 12, 2025

    download this file: https://openmodeldb.info/models/4x-Remacri

    paste it in \models\ESRGAN

    tiggletattleJan 12, 2025

    @LatentDream Thanks, it's working! But I get this error when I run the hunyuan upscaler, do you know a fix? is it to do with dimensions?

    TeaCacheHunyuanVideoSampler

    Sampling failed: shape '[1, 16, 18, 34, 16, 1, 2, 2]' is invalid for input of size 645120

    antey3195Jan 12, 2025
    CivitAI

    please tell me where I can download this model clip-vit-large-patch14_OPENAI.safetensors?

    My models won't fit? (CLIP-ViT-H-14-laion2B-s32B-b79K.safetensors and CLIP-ViT-bigG-14-laion2B-39B-b160K.safetensors)

    LatentDream
    Author
    Jan 12, 2025· 3 reactions

    link: https://huggingface.co/openai/clip-vit-large-patch14/tree/main

    get the "model.safetensors" file

    rename it to "clip-vit-large-patch14.safetensors" or whatever name you prefer

    paste it in "\models\clip_vision\"

    joesixpaqJan 12, 2025
    CivitAI

    When I load "All in One Tea" (https://civitai.com/models/1007385?modelVersionId=1268856), the local server asks to allow notifications. Which node is doing this? It does not happen with the basic workflow, nor did it with other workflows. Is it safe, and what is it for? Tried this in Brave and Vivaldi.

    LatentDream
    Author
    Jan 12, 2025· 2 reactions

    It's a light-grey node called Notifications. Nothing suspicious.
    It just gives you a simple textual notification whenever a process is finished.
    I gave it a light grey color because I knew someone was going to ask one day or another 🤣🤣🤣
    If it bugs you, just remove it; there are 5 of those, one for each process,
    at the bottom, center to right.


    Looks like we don't have an active mirror for this file right now.


    Details

    Downloads
    116
    Platform
    CivitAI
    Platform Status
    Deleted
    Created
    1/11/2025
    Updated
    8/9/2025
    Deleted
    8/7/2025

    Files

    HunyuanAllinoneFastT2VV2V_AIOAdvancedTea.zip