
    HUNYUAN | AllInOne


    No need to buzz me. Feedback is much more appreciated. | last update: 06/03/2025


    ā¬‡ļøOFFICIAL Image To Video V2 Model is out!ā¬‡ļø COMFYUI UPDATE IS REQUIRED !
    Get files here:
    link 1 paste in: \models\clip_vision
    link 2 or Link 3 paste in: \models\diffusion_models (pick the one that works best for you)
    āš ļø I2V model got an update on 07/03/2025 āš ļø

    These workflows have evolved over time through various tests and refinements,
    thanks also to the huge contributions of this community.
    Requirements, special thanks, and credits below.


    Before commenting, please keep in mind:

    • The Advanced and Ultra workflows are intended for more experienced ComfyUI users.
      If you choose to install unfamiliar nodes, you take full responsibility.

    • I make these workflows for fun, randomly, in my free time.

      Most issues you might encounter have probably already been widely discussed and solved on Discord, Reddit, and GitHub, or addressed in the description of the workflow you're using. So please read carefully
      and consider doing some searching before commenting.

    • I started this alone, but now there's a small group of people who are contributing with their passion, experiments and cool findings. Credits below.
      Thanks to their contributions this small project continues to grow and improve
      for everyone's benefit.

    • The Fast LoRA may work best when combined with other LoRAs, allowing you to reduce the number of steps.

      - Wave Speed can significantly reduce inference time but may introduce artifacts.

      - Achieving good results requires testing different settings. Default configurations may not always work, especially when using LoRAs, so experiment to find the settings that fit best. THERE ARE NO UNIVERSAL SETTINGS THAT WORK FOR EVERY CASE.

      - You can also try switching to a different sampler/scheduler and see which works best for your case: try UniPC simple, LCM simple, DDPM, DPMPP_2M beta, Euler normal/simple/beta, or the new "gradient_estimation"
      (Samplers/schedulers need to be set for each stage and mode; they are not settings found in the console)



    Legend to help you choose the right workflow:

    āœ”ļø Green check = UP TO DATE version for its category.
    Include latest settings, tricks, updated nodes and samplers, working on latest ComfyUI.

    🟩🟧🟪 Colors = Basic / Advanced / Ultra

    āŒ = Based on deprecated nodes, you'll have to fix it yourself if you really want to use


    Quick Tips:


    Low Vram? Try this:

    and/or try using the GGUF models available here.



    RTX 4000? Use this:



    Want more tips?

    Check my article: https://civarchive.com/articles/9584


    All workflows available on this page are designed to prioritize efficiency, delivering high-quality results as quickly as possible.
    However, users can easily customize settings through intuitive, fast-access controls.

    For those seeking ultra-high-quality videos and the best output this model can achieve, adjustments may be necessary, such as increasing steps, modifying resolutions, reducing the TeaCache / WaveSpeed influence, or disabling the Fast LoRA entirely to enhance results.

    Personally, I aim for an optimal balance between quality and speed. All example videos I share follow this approach, utilizing the default settings provided in these workflows. While I may make minor adjustments to aspect ratio, resolution, or step count depending on the scene, these settings generally offer the best all-around performance.


    WORKFLOWS DESCRIPTION:


    🟩"I2V OFFICIAL"

    Requires:


    🟩"BASIC All In One"


    uses native Comfy nodes and has three modes of operation:

    • T2V

    • I2V (sort of: an image is multiplied into x frames and sent to the latent, with a denoising level balanced to preserve the structure, composition, and colors of the original image. I find this approach highly useful, as it saves inference time and allows for better guidance toward the desired result). Obviously this comes at the expense of general motion: lowering the denoise level too much causes the final result to become static, with minimal movement. The denoise threshold is up to you to decide based on your needs.

      There are other methods to achieve a more accurate image-to-video process, but they are slow. I didn't even include a negative prompt in the workflow because it doubles the waiting time.

    • V2V (same concept as I2V above)
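    Conceptually, the pseudo-I2V trick above amounts to tiling one latent across the time axis and letting the sampler start from only partial noise. A plain-NumPy sketch, not the workflow's actual node graph; `pseudo_i2v_latent` and its parameters are my own names:

```python
import numpy as np

def pseudo_i2v_latent(image_latent: np.ndarray, num_frames: int,
                      denoise: float, seed: int = 0) -> np.ndarray:
    """Tile one image latent across the time axis, then inject only a
    fraction of the usual starting noise. A lower denoise preserves the
    original structure/colors but yields a more static video."""
    rng = np.random.default_rng(seed)
    frames = np.repeat(image_latent[None, ...], num_frames, axis=0)
    noise = rng.standard_normal(frames.shape).astype(frames.dtype)
    return frames + denoise * noise  # the sampler then denoises from here

# a fake 4-channel latent standing in for the encoded source image
img_lat = np.zeros((4, 40, 54), dtype=np.float32)
video_lat = pseudo_i2v_latent(img_lat, num_frames=33, denoise=0.75)
```

    At `denoise=0` every frame is an exact copy of the source (fully static); higher values trade fidelity for motion, which is the threshold the text says you must pick yourself.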

    Requires:
    https://github.com/chengzeyi/Comfy-WaveSpeed
    https://github.com/pollockjj/ComfyUI-MultiGPU


    🟧 "ADVANCED All In One TEA ā˜•"


    an improved version of the BASIC All In One TEA ☕, with additional methods to upscale faster, plus a lightweight captioning system for I2V and V2V that consumes only an additional 100 MB of VRAM.

    Upscaling can be done in three ways:

    1. Upscaling using the model. Best Quality. Slower (Refine is optional)

    2. Upscale Classic + Refine. It uses a special video upscaling model that I selected from a crazy number of video upscaling models and tests; it is one of the fastest and produces results with good contrast and well-defined lines. It's certainly not the optimal choice when used alone, but combined with the REFINE step it produces well-defined videos. This option is a middle ground in terms of timing between the first and third methods.

    3. Latent Upscale + Refine. This is my favorite. Fastest; decent quality.
      This method is essentially the same as the first, which is basically V2V, but at slightly lower steps and denoise.

    Three different methods, more choices based on preferences.

    Requirements:

    -ClipVitLargePatch14
    download model.safetensors

    rename it as "clip-vit-large-patch14_OPENAI.safetensors"

    paste it in \models\clip

    -RealESR General x4 v3

    paste it in \models\ESRGAN\

    -LongCLIP-SAE-ViT-L-14
    -https://github.com/pollockjj/ComfyUI-MultiGPU
    -https://github.com/chengzeyi/Comfy-WaveSpeed

    Update Changelogs:

    |1.1|
    Faster upscaling

    Better settings

    |1.2|
    removed redundancies, better logic
    some error fixed
    added extra box for the ability to load a video and directly upscale it

    |1.3|

    • New prompting system.

      Now you can copy and paste any prompt you find online, and it will automatically modify the words you don't like and/or add additional random words.

    • Fixed some latent auto-switch bugs (these gave me serious headaches)

    • Fixed seed issue, now locking seed will lock sampling

    • Some UI cleaning

    |1.4|

    • Batch Video Processing – Huge Time Saver!

      You can now generate videos at the bare minimum quality and later queue them all for upscaling, refining, or interpolation in a single step.

      Just point it to the folder where the videos are saved, and the process will be done automatically.
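    Under the hood, a batch step like this amounts to scanning the save folder for finished videos and queuing each one in turn. A minimal folder-scan sketch; the extension list and the function name are my assumptions, not the workflow's internals:

```python
from pathlib import Path

# common video container extensions (assumed, adjust to your output format)
VIDEO_EXTS = {".mp4", ".webm", ".mkv", ".mov"}

def collect_batch(folder: str) -> list:
    """Return every video in `folder`, sorted by name, ready to be
    queued for upscale/refine/interpolation in one pass."""
    return sorted(p for p in Path(folder).iterdir()
                  if p.suffix.lower() in VIDEO_EXTS)
```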

    • Added Seed Picker for Each Stage (Upscale/Refine)

      You can now, for example, lock the seed during the initial generation, then randomize the seed for the upscale or refine stage.

    • More Room for Video Previews

      No more overlapping nodes when generating tall videos (don't exaggerate with the ratio, obviously)

    • Expanded Space for Sampler Previews

      Enable preview methods in the manager to watch the generation progress in real time.

      This allows you to interrupt the process if you don't like where it's going.

      (I usually keep previews off, as enabling them takes slightly longer, but they can be helpful in some cases.)

    • Improved UI

      Cleaned up some connections (noodles), removed redundancies, and enhanced overall efficiency.

      All essential nodes are highlighted in blue and emphasized right below each corresponding video node, while everything else (backend) like switches, logic, mathematics, and things you shouldn't touch have been moved further down. You can now change settings or replace nodes with those you prefer way more easily.

    • Notifications

      All nodes related to the browser notifications sent when each step is completed, which some people find annoying, have been moved to the very bottom and highlighted in gray. So, if they bother you, you can quickly find them, select them, and delete them.

    |1.5|

    • general improvements, some bug fixes

    NB:
    These two warnings in the console are completely fine. Just ignore them:
    WARNING: DreamBigImageSwitch.IS_CHANGED() got an unexpected keyword argument 'select'
    WARNING:
    SystemNotification.IS_CHANGED() missing 1 required positional argument: 'self'


    🟪 "AIO | ULTRA "


    Embrace This Beast of Mass Video Production!
    This version is for the truly brave professionals and unlocks a lot of possibilities.
    Plus, it includes settings for higher quality, sharper videos, and even faster speed, all while being nearly glitch-free.
    All older workflows have also been updated to minimize glitches, as explained in my previous article.

    From Concept to Creation in Record Time!

    We are achieving world-record speed here, but at the cost of some complexity. These workflows are becoming increasingly intimidating despite efforts to keep them clean and hide all automations in the back-end as much as possible.

    That's why I call this workflow ULTRA: a powerhouse for tenacious Hunyuan users who want to achieve the best results in the shortest time possible, with all tools at their fingertips


    Key Features and Improvements:

    • Handy Console: Includes buttons to activate stages with no need to connect cables or navigate elsewhere. Everything is centralized in one place (Control Room), and functions can be accessed with ease.

    • T2V, I2V*, V2V, T2I, I2I Support: Seamless transitions between different workflows.

      *I2V: an image is multiplied into x frames and sent to the latent. The official I2V model is not out yet; there's a temporary trick to do I2V here which requires Kijai's nodes.

    • Wildcards + Custom Prompting Options: Switch between Classic prompting with wildcards or add random words in a dedicated box, with automatic customizable word swapping or censoring.

    • Video Loading: Load videos directly into upscalers/refiners and skip the initial inference stage.

    • Batch Video Processing: Upscale or Refine multiple videos in sequence by loading them from a custom folder.

    • Interpolation: Smooth frame transitions for enhanced video quality.

    • Random Character LoRA Picker: Includes 9 LoRA nodes in addition to fixed LoRA loaders.

    • Upscaling Options: Supports upscaling, double upscaling, and downscaling processes.

    • Notifications: Receive notifications for each completed stage, organized in a separate section for easy removal if necessary.

    • Lightweight Captioning: Enables captioning for I2V and V2V with minimal additional VRAM usage (only 100MB).

    • Virtual Vram support.

      Use the GGUF model with Virtual VRAM to create longer videos or increase resolution.

    • Hunyuan/Skyreel (T2V) quick merges slider

    • Switch from Regular Model to Virtual Vram / GGUF with a slider

    • Latent preview to cut down upscaling process.

    • A dedicated LoRA line exclusively for upscalers, toggled via a dedicated button.

    • RF edit loom

    • Upscale using Multiplier or "set to longest size" target

    • a button to toggle Wave Speed and FastLoRA as needed for upscaling only.

    • UI improvements based on user feedback


    - Sequential Upscale Under 1x / Double Upscaling
    You can now downscale using the upscale process and then re-upscale with the refiner, or customize the upscaler multipliers to upscale twice.

    • New Functionality:

      • The upscale value range now includes values as low as 0.5.

      • Two sliders are available: one for the initial upscale and another for the refiner (essentially another sampler, always V2V).

    • Applications:
      Upscale, Refine or combine the two

      • Upscale fast (latent resize + sampler) or accurate (resize + sampler)

      • Refine (works the same as upscale, can be used alone or as an auxiliary upscaler)

      • Double upscaling: Start small and upscale significantly in the final stage.

      • Downscale and re-upscale: Deconstruct at lower resolution and reconstruct at higher quality.

      • Combos: Upscale & Refine / Downscale & Upscale



    - Skip Decoders/Encoders Option
    Save significant time by skipping raw decoding for each desired stage and going directly to the final result.

    • How It Works: If your prompt is likely to produce a good output and the preview method ("latent2RGB") is active in the manager, you can monitor the process in real-time. Skip encoding/decoding by working exclusively in the latent space, generating and sending latent data directly to the upscaler until the process completes.

    • Example:
      A typical medium/high-quality generation might involve:

      • Resolution: ~ 432x320

      • Frames: 65

      • One Upscale: 1.5x (to 640x480)

      • Total Time: 162 seconds

      In this example case, by activating the preview in the manager and skipping the first decoder (the preview before upscaling), you can save ~30 seconds. The process now takes 133 seconds instead of 162.
      Bypassing additional decoders (e.g., further upscaling or refinement) can save even more time.
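    The resolution arithmetic in that example works out if each stage snaps its output to a multiple of 16, which is a typical latent-size constraint; the snapping rule here is my assumption, not a documented part of the workflow:

```python
def chain_resolution(w: int, h: int, multipliers) -> tuple:
    """Apply upscale/refine multipliers in sequence, snapping each
    result to a multiple of 16 (typical latent-size constraint)."""
    for m in multipliers:
        w = round(w * m / 16) * 16
        h = round(h * m / 16) * 16
    return w, h

# the example above: 432x320 upscaled 1.5x lands on 640x480
chain_resolution(432, 320, [1.5])
```

    The same helper covers the downscale-then-reconstruct combo from the feature list (e.g. 0.5x followed by 2.0x).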


    - Image Generation (T2I and I2I)
    Explore HUN latent space with these image generation capabilities.

    When the number of frames is set to 1, the image node activates automatically, allowing the image to be saved as a PNG.
    Use the settings shown here for the best results:

    - Structural Changes / Additional Features

    • Motion Guider for I2V
      This feature enhances motion for image-to-video workflows, lowering the chances of getting a static video as a result.

    • 9 Random Character Loras Loader: Previously limited to 5, now expanded to 9.

    • Random Character Lora Lock On/Off:

      • By default, each seed corresponds to a random LoRA
        (e.g., seed n° 667 = LoRA n° 7).

      • Now you can unlock this "character LoRA lock on seed" and regenerate the same video with a different random LoRA while maintaining the main seed.
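    The lock can be pictured as a deterministic seed-to-LoRA mapping. The exact rule inside the workflow isn't documented; a last-digit rule reproduces the example above, so this is a hypothetical sketch and `lora_for_seed` is my own name:

```python
def lora_for_seed(seed: int, num_loras: int = 9) -> int:
    """Hypothetical seed -> character-LoRA mapping (1..num_loras).
    Uses the seed's last digit, wrapping digits outside the range,
    so the same seed always picks the same LoRA."""
    return (seed % 10 - 1) % num_loras + 1

lora_for_seed(667)  # -> 7, matching the example above
```

    "Unlocking" then simply means drawing the LoRA index from a second, independent seed instead of the main one.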

    • Clarifications:
      Let’s call things by their real names:

      • "Refine" and "Upscale" are both samplers here. Each optimized for specific stages:

        • Upscale: Higher steps/denoise, fast results, balanced quality.

        • Refine: Lower steps/denoise, focused on fixing issues and enhancing details.

      • Refine can work alone, without upscaling, to address small issues or improve fine details.

    • UI Simplification:
      The "classic upscale" is now replaced by a faster and better-performing resize + sharpness operation and hidden in back-end to save space.

    • Frame Limit Issue (101+ Frames):
      Generating more than 101 frames with latent upscale can cause problems. To address this, I added an option to upscale videos before switching to latent processing.

    - Bug Fixes

    • Latent Upscale Change:
      Latent upscaling now uses bicubic interpolation instead of nearest-exact, which performs better based on testing.

    • "Cliption" Bug Fixed

    • 201-Frame Fix:
      Generating 201-frame perfect loops caused artifacts with latent upscale. Switching to "resize" via the pink console buttons now resolves this issue.

    - Performance and other infos:

    Once you master it, you won’t want to go back. This workflow is designed to meet every need and handle every case, minimizing the need to move around the board too much. Everything is controlled from a central "Control Room."

    Traditionally, managing these functions would require connecting/disconnecting cables or loading various workflows. Here, however, everything is automated and executed with just a few button presses.

    Default settings (e.g., denoise, steps, resolution) are optimized for simplicity, but advanced users can easily adjust them to suit their needs.

    -Limitations:

    1. No Audio Integration:
      While I have an audio-capable workflow, it doesn’t make sense here. Audio should be processed separately for professional results.

    2. No Post-Production Effects:
      Effects like color correction, filmic grain, and other post-production enhancements are left to dedicated editing software or workflows. This workflow focuses on delivering a pure video product.

    3. Interpolation Considerations:
      Interpolation is included here. I set up the fastest one I could find, not necessarily the best one. For the best results I typically use Topaz for both extra upscaling and interpolation after processing, but it's up to the user to choose their favourite interpolation method or final upscaling if needed.

    Requirements:

    ULTRA 1.2:
    -Tea cache

    -LongCLIP-SAE-ViT-L-14

    -ClipVitLargePatch14

    ULTRA 1.3:
    -UPDATE TO LATEST COMFY IS NEEDED!
    -Wave Speed

    -LongCLIP-SAE-ViT-L-14

    -ClipVitLargePatch14

    ULTRA 1.4 / 1.5:
    -UPDATE TO LATEST COMFY IS NEEDED!
    https://github.com/pollockjj/ComfyUI-MultiGPU
    https://github.com/chengzeyi/Comfy-WaveSpeed
    https://github.com/city96/ComfyUI-GGUF
    https://github.com/logtd/ComfyUI-HunyuanLoom
    https://github.com/kijai/ComfyUI-VideoNoiseWarp


    NB:
    The following warnings in the console are completely fine. Just ignore them:
    WARNING: DreamBigImageSwitch.IS_CHANGED() got an unexpected keyword argument 'select'
    WARNING:
    SystemNotification.IS_CHANGED() missing 1 required positional argument: 'self'

    Update Changelogs:

    |1.1|
    Better color scheme to easily understand how the upscaling stages work
    Check the images to understand

    |1.2|
    Wildcards.
    You can now switch from the Classic Prompting system (with wildcards allowed)
    to the fancy one previously available

    |1.3|

    • An extra wavespeed boost kicks in for upscalers.

    • Changed samplers to native Comfy ones: no more TTP, no more interrupt error messages.

    • Tea cache is now a separate node.

    • Fixed a notification timing error and text again.

    • Replaced a node that was causing errors for some users: "if any" now swaps with "eden_comfy_pipelines."

    • Added SPICE, an extra-fast LoRA toggle that activates only in upscalers to speed up inference at lower steps and reduce noise.

    • Added Block Cache and Sage to the setup. Users who have them working can enable them.

    • Changed the default sampler from Euler Beta to the new "gradient_estimation" sampler introduced in the latest Comfy update.

    • Added a video info box for each stage (size, duration).

    • Removed "random lines."

    • Adjusted default values for general use.

    • Upscale 1 can now function as a refiner as well.

    • When pressing "Latent Resize" or "Resize," it will automatically activate the correct sampler.

    • A single-frame image is now displayed in other stages as well (when active).

      Thanks to all the users who contributed on Discord to these workflow improvements!

    |1.4|

    • Virtual Vram support

    • Hunyuan/Skyreel quick merges slider

    • Toggle to switch from Regular Model to Virtual Vram / GGUF

    • Longer vids / Higher Res / extreme upscaling now possible

    • Default res changed to 480x320, which looks like a balanced middle ground for low-res quick vids; most users should be OK with that.

    • Latent preview for skip preview mode

    • Switch toggle to enable/disable Exclusive LoRA for upscalers

    • RF edit loom

    • V2V loading time improved

    • Upscale to longest size target

    • Fixed slider upscale mismatch

    • info node moved

    • clean up and fixes

    • better settings for general use

    • Upscale 1 can now use the optional "resize to longest size" slider

    • added extra wave speed toggle for upscalers

    • added exclusive loras line for upscalers

    • general fixes

    • UI improvements based on user feedback

    • fixed fast lora string issue on bypass in upscalers

    • more cleaning

    • Changed exclusive LoRAs for upscalers again: the main fast LoRA is NOT going to pass through that line, since it already has a separate toggle ("upscale with extra fast lora", previously called SPICE FOR UPSCALING).

    • fixed output node size for videos

    • moved the resize by "longest size" toggle to the extra menu

    • added extra wave speed toggle

    • The Control Room is finished... for now. I don't want to stress AIDoctor further; he already did a great job

    • lowered the fast LoRA default value to 0.4

    • fixed VIDEO BATCH LOADING

    |1.5|

    • general improvements, UI improvements, some bug fixes

    • leap fusion support

    • Go With The Flow support




    Bonus TIPS:


    Here's an article with all the tips and tricks I'm writing as I test this model:

    https://civarchive.com/articles/9584
    If you struggle to use my workflows for any reason, at least you can refer to the article above. You'll get a lot of precious quality-of-life tips for building and improving your Hunyuan experience.

    All the workflows labeled with a ❌ are OLD and highly experimental; they rely on Kijai nodes that were released at a very early stage of development.
    If you want to explore them, you'll need to fix them yourself, which should be pretty easy.





    CREDITS

    Everything I do, I do in my free time for personal enjoyment.
    But if you want to contribute,

    there are people who deserve WAY more support than I do,
    like Kijai.
    I’ll leave
    his link,
    if you’re feeling generous go support him.
    Thanks!

    Last but not least:
    Thanks to this community, especially those who gave me advice and experimented with my workflows, helping improve them for everyone.

    Special thanks to:
    https://civarchive.com/user/galaxytimemachine
    for their peculiar and precise method of operation in finding the best settings and for all the tests conducted.

    https://civarchive.com/user/TheAIDoctor
    for his brilliance and for dedicating his time to create and modify special nodes for this workflow madness! such an incredible person.

    and
    https://github.com/pollockjj/ComfyUI-MultiGPU

    Also special thanks to:
    Tr1dae
    for creating HunyClip, a handy tool for quick video trimming. If you work with heavy editing software like DaVinci Resolve or Premiere, you'll find this tool incredibly useful for fast operations without the need to open resource-intensive programs.

    Check it out here: [link]


    Have fun


    Comments (126)

    Baka_Oppai · Jan 27, 2025 · 4 reactions

    be nice if there was any real instruction to set this up

    LatentDream
    Author
    Jan 27, 2025 · 1 reaction

    I get it...
    I should probably quit sharing this madness because it's becoming too intimidating.
    It would need a user manual or some tutorials.

    Try playing with it a bit;
    I pasted some images that can help with understanding the functions.

    Aicush · Jan 27, 2025 · 4 reactions

    As always, I had a hernia just from looking at this workflow X-D. Looking forward to trying this out. Slight problem: I struggled with this the last couple of versions; the "If any return a else b" node is causing me issues. My only solution was manually adding them myself and replacing the ones you put. Anyone have any idea about this?

    Aicush · Jan 27, 2025 · 1 reaction

    Nevermind, figured it out X-D. Seems like it was updated or I missed something; all is working now. I'll see you in a week when I've had a play :-) As always, thanks for all you do to keep improving the Hunyuan video space

    Dalleno · Jan 27, 2025 · 1 reaction

    can you share the workaround? I'm facing the same issue with (If ANY return A else B 🔬)

    Aicush · Jan 27, 2025 · 2 reactions

    @Dalleno I tried two things, I am not sure which one fixed it, but you can try the below - I deleted the custom node ComfyUI-Logic and reinstalled using git clone in the custom_nodes folder

    git clone https://github.com/theUpsider/ComfyUI-Logic.git

    Reboot ComfyUI, and ensure you re-drag the workflow back into Comfy after, as mine didn't work until I rebooted ComfyUI and re-dragged the workflow on.

    If the above doesn't work, the only other thing I did was download -

    https://github.com/theUpsider/ComfyUI-Logic/releases/tag/v1.2.0 and dragged the contents into the folder of comfy UI logic folder and overwrote any changes.

    -I know I am repeating myself, but ensure after rebooting comfy you redrag the workflow on as mine still showed the error until I did this.

    Dalleno · Jan 27, 2025 · 2 reactions

    manual download (https://github.com/theUpsider/ComfyUI-Logic.git) is the solution :)

    Aicush · Jan 27, 2025 · 2 reactions

    @Dalleno I always type 1000 words for a two-word answer :) glad you got it working

    Dalleno · Jan 27, 2025 · 1 reaction

    we commented at the same time <3 thanks

    Dalleno · Jan 27, 2025 · 1 reaction

    this one for clip vit ?

    this is my first time running this model

    https://huggingface.co/openai/clip-vit-large-patch14/blob/main/model.safetensors

    Aicush · Jan 27, 2025 · 1 reaction

    @Dalleno Apologies, I am not sure. I would have assumed it is this one https://huggingface.co/openai/clip-vit-large-patch14 as it states openAI after it. @LatentDream should be able to help :-)

    ---Edit--

    I was correct, if you read the OP's post - it links to my link above

    -ClipVitLargePatch14
    download model.safetensors

    rename it as clip-vit-large-patch14_OPENAI.safetensors"

    paste it in \models\clip_vision\

    LatentDream
    Author
    Jan 28, 2025 · 1 reaction

    I answered 12 hours ago but for some reason Civit didn't take my answer, then it went down for a bit.
    LOL, I know... I had 3 hernias just trying to make this work as I wanted. 🤣
    Good, you solved it.

    LatentDream
    Author
    Jan 28, 2025

    Sorry to hear some of you still have issues with "if ANY". I tried to see if there's any similar open issue that matches your problem, but I'm not really sure; try checking here: https://github.com/theUpsider/ComfyUI-Logic/issues

    LatentDream
    Author
    Jan 28, 2025

    I've changed the upscaling button menu a little to be more understandable and loaded an image that explains it better: https://civitai.com/images/54366246
    I uploaded the updated workflow on the same post since there's no real difference except a few icons changed.
    It is the identical workflow, just with a different color scheme to help understanding...
    hopefully... 😪

    ARCFX · Jan 28, 2025 · 4 reactions

    Hey.. for some reason I get errors when trying to install missing nodes from the ComfyUI Manager... Am I doing something wrong?

    LatentDream
    Author
    Jan 28, 2025

    The TEA cache node is not available through the Manager; it's written in the description

    dirtysem · Jan 28, 2025 · 2 reactions

    How do I get rid of this annoying window that appears when the workflow is interrupted? This is really the most efficient workflow I've ever seen. However, this window constantly appears in different processes. This is too much.

    LatentDream
    Author
    Jan 28, 2025 · 1 reaction

    I completely agree.
    These TeaCache nodes are the only ones that allow further acceleration of the process during upscaling; no other node enables this at the moment. As I already mentioned on the developer's GitHub page, I've reported this error message that appears when the user interrupts the workflow, but I haven't received any response from them yet. I wrote to them some time ago; still no answer.
    So for now we must live with that popup that appears when the workflow is interrupted; if we want the benefits and speed, there's no other way at the moment.
    The only thing I can tell you, if it helps, is that you just need to click anywhere, not necessarily on the X, and that message will disappear.

    LatentDream
    Author
    Jan 28, 2025

    I really encourage you to help me with this; if you can, leave a message on my post here
    https://github.com/TTPlanetPig/Comfyui_TTP_Toolset/issues/22

    dirtysemJan 28, 2025

    @LatentDream

    Now that we're talking, can I ask you one more question?:)

    How do I make a standard wildcard? I don't really like a system like yours; I want to write prompts in one window using wildcard characters. Thank you in advance.

    LatentDream
    Author
    Jan 28, 2025

    @dirtysem thank you.
    Yes, I thought about implementing wildcards in the "ultra" but I totally forgot about it.
    Definitely something I need to add in the next update

    LatentDream
    Author
    Jan 28, 2025

    here you go. ULTRA 1.2 with wildcards. Check the image to understand how to switch between the old system to classic prompting+wildcards. Enjoy

    dirtysem · Jan 28, 2025 · 1 reaction

    @LatentDream

    thank you very much :)

    LatentDream
    Author
    Jan 28, 2025

    @dirtysem let me know if there's anything else you would change for the better.
    I love feedback

    dirtysemJan 28, 2025

    If you insist :) how do I remove the output noise? It's too big.

    LatentDream
    Author
    Jan 28, 2025

    @dirtysem what exactly do you mean? Are you getting noisy results? That usually happens if you upscale too much in a single stage; I do a 1.5x upscale, not more than that (for videos; T2I is another thing).
    If I need more, I activate the second upscaler.
    Upload an example somewhere so I can understand

    dirtysem · Jan 28, 2025 · 1 reaction

    Please take a look. All images are displayed in order. Everything is done on your standard settings. I like the second image best. If this was the output.

    https://drive.google.com/drive/folders/1k4zpKRYx9wqxycPu2EJgxlwmkSnc_YBh?usp=drive_link

    P.s. Indeed, this is the best workspace. I've never seen such results.

    LatentDream
    Author
    Jan 28, 2025 · 1 reaction

    @dirtysem Ok, I saw 😊, now I understand what you meant.
    No, that's normal behavior for this model.
    The "noise" tends to fade the higher you go with the resolution, as shown in my example images here:
    https://latentdream.pixieset.com/hun-teaultra10/
    The dog-in-the-water example is the most evident; watch it in full screen here: https://i.vgy.me/CWRvjI.jpg
    That said, in this workflow there's some added sharpness.
    So if you're working at lower resolutions, that noise tends to get amplified, but it still exists as a baseline.
    If you want to adjust the sharpness settings, let me know which workflow you're using.
    In the case of the latest workflow I published (ULTRA), the settings for sharpness are all at the top in the
    back-end section and are marked as red nodes.
    Try bypassing those and see if it works better for you.
    Keep in mind, though, that as a result you'll get softer videos,
    which is a normal outcome with this model.
    You might also get rid of some noise by raising the steps way up to 25 or 30 and bypassing the fast LoRA, but do not expect miracles. It's mostly all about resolution

    dirtysem · Jan 28, 2025 · 1 reaction

    Thank you very much. I'll try again. I'm just wondering how far your workflow can go? Implement inpaint in this model, then in general it would be cool xD

    LatentDream
    Author
    Jan 28, 2025 · 1 reaction

    @dirtysem Believe me, I tried, even though I knew inpainting wasn't possible. What else can I add? I don't think there's much more to say for now (I could add sections for color correction, filmic grain, and other tweaks, but I prefer to keep a more professional approach focused solely on video production, leaving everything else for post-production).

    I believe I’ve hit a limit in terms of optimization as well, given that I had to deal with some pretty inconvenient limitations of ComfyUI itself, wrestling with nodes and constraints that I managed to work around.

    The next update will probably come when the REAL Img2Vid model is released.

    TheKnightsWhoSayNI Jan 28, 2025 · 3 reactions
    CivitAI

    First of all, your workflow seems to be amazing, good work! Could you help me?

    I'm using the basic one with Tea. When I use T2V, I'm having this issue with the Hunyuan upscale: it gives me worse quality than the raw output. Is that right? It should be better quality since we are upscaling and sampling again with a reference, no?

    Also, the other steps seem to give me worse quality than the raw (refined; interpolated).

    edit: I was able to get better results by increasing the denoise in the basic scheduler for upscale/refined/interpolated. Is that the right way to fix it? Did I miss something?
    I'll try a little I2V and V2V right now on default settings. Thanks again for this great workflow =D

    LatentDream
    Author
    Jan 28, 2025 · 1 reaction

    You're welcome.
    Any settings that work for you are fine, I guess, but the defaults I set there are proven to work.
    Maybe it's a particular case or a tricky prompt, I don't know.
    Enjoy

    TheKnightsWhoSayNI Jan 29, 2025

    I just downloaded the Ultra version and I'm testing it now... Seems the upscaler/refiner is better than basic Tea somehow :)
    But it's still giving me some issues. Is it possible to keep the RAW image and only use the upscale in the BASIC TEA workflow? I saw the Ultra keeps the raw image without needing to generate again.

    Either Ultra or Basic, the upscaler/refiner sometimes gives me blurry images/artifacts/distortion noise/something like that...
    In the Ultra version, when I queue an image it gives me this warning:
    "got prompt

    WARNING: DreamBigLatentSwitch.IS_CHANGED() got an unexpected keyword argument 'select'

    WARNING: DreamBigLatentSwitch.IS_CHANGED() got an unexpected keyword argument 'select'

    WARNING: DreamBigLatentSwitch.IS_CHANGED() got an unexpected keyword argument 'select'

    WARNING: SystemNotification.IS_CHANGED() missing 1 required positional argument: 'self'

    WARNING: DreamBigLatentSwitch.IS_CHANGED() got an unexpected keyword argument 'select'

    WARNING: DreamBigLatentSwitch.IS_CHANGED() got an unexpected keyword argument 'select'

    WARNING: DreamBigIntSwitch.IS_CHANGED() got an unexpected keyword argument 'select'

    WARNING: DreamBigIntSwitch.IS_CHANGED() got an unexpected keyword argument 'select'

    WARNING: SystemNotification.IS_CHANGED() missing 1 required positional argument: 'self'"

    TheKnightsWhoSayNI Jan 29, 2025

    Let me update you, my mate: V2V and I2V are producing amazing, jaw-dropping videos. I'm in love with it (on default settings).

    But my problem is T2V. I managed to solve the issue by increasing the denoise on the upscaler/refiner. I tried a lot of other things; I thought the problem could be the prompt or a bad seed, but I tested the prompt in other workflows and it was fine. I also tried a lot of seeds in your workflow, but as I said, I only managed to fix the issue by increasing the denoise.

    Also, about the weird warnings: should I do something?

    Anyway, I do believe my hardware is causing the upscale issue, not your workflow. I wish you all the best. Soon I'll upload the best videos I've made here so everyone can confirm your workflow is the best right now.

    RamblingJoe Jan 31, 2025

    @TheKnightsWhoSayNI I'm getting these errors as well, is it something we can ignore?

    LatentDream
    Author
    Jan 31, 2025 · 2 reactions

    @TheKnightsWhoSayNI sorry for the late answer. Those errors in the console are totally fine.
    It's a list of switches that must work that way to allow auto-switching to the correct active latent.
    About the artifacts, those are probably related to the fast model.
    I recently swapped the model for the standard one and inverted the fast LoRA strength, in ALL WORKFLOWS ON THIS PAGE. You may try redownloading and see if you get more luck.
    I also suggest you give a read to my article, where I constantly write up all my notes:
    https://civitai.com/articles/9584?highlight=693834#comments
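    For context on those console lines: ComfyUI forwards a node's inputs to its IS_CHANGED method as keyword arguments, and when the method's signature doesn't accept them, ComfyUI logs the warning and simply treats the node as always changed. A minimal illustrative sketch (the class and input names here are hypothetical, not the actual DreamBigLatentSwitch source):

```python
# Hypothetical ComfyUI-style custom node showing why the
# "IS_CHANGED() got an unexpected keyword argument" warning appears.
# ComfyUI calls IS_CHANGED with the node's inputs as kwargs; a
# signature that rejects them raises, ComfyUI logs a WARNING, and
# the node is re-executed every run (which is harmless here).

class ExampleLatentSwitch:
    @classmethod
    def INPUT_TYPES(cls):
        return {"required": {"select": ("INT", {"default": 1})}}

    RETURN_TYPES = ("LATENT",)
    FUNCTION = "switch"

    # A too-strict signature like `def IS_CHANGED(cls):` would trigger
    # the warning when ComfyUI calls IS_CHANGED(select=...).
    # Accepting **kwargs makes the node tolerant of any inputs:
    @classmethod
    def IS_CHANGED(cls, **kwargs):
        # NaN never compares equal to a cached value, so the node is
        # treated as "always changed" -- the intended switch behavior.
        return float("nan")

    def switch(self, select):
        # Placeholder body; a real switch node would route latents.
        return (select,)
```

    So the warnings just mean those switch nodes fall back to "always re-execute", which is the behavior the workflow relies on anyway.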

    TheKnightsWhoSayNI Jan 28, 2025 · 1 reaction
    CivitAI

    Can I swap the Load Diffusion Model node for a GGUF one? =v

    LatentDream
    Author
    Jan 28, 2025 · 1 reaction

    Yes, just swap the loader.

    TheKnightsWhoSayNI Jan 28, 2025

    @LatentDream How much VRAM and RAM do you use? How much do you recommend for Advanced Tea 1.4?

    LatentDream
    Author
    Jan 29, 2025 · 2 reactions

    @TheKnightsWhoSayNI here, 24GB VRAM / 64GB RAM.
    Someone said the TeaCache nodes may eat more VRAM in exchange for speed, but I haven't noticed this on my metering here.
    You should be able to run any of the workflows with 16GB VRAM as long as you don't go too far with frame count and resolution.

    TheKnightsWhoSayNI Jan 29, 2025

    @LatentDream Thank you again, you're great and quick at answering... I'm using a 4080 Super but I'm short on RAM sticks (2x8GB hahah). I'll try to upgrade as soon as possible. Thanks again, great work.

    LatentDream
    Author
    Jan 29, 2025 · 1 reaction

    @TheKnightsWhoSayNI Ah damn, yeah, I can assure you I saw benefits when I jumped from 32 to 64GB.
    Good luck man

    AnandB Jan 28, 2025 · 3 reactions
    CivitAI

    Oh man. A video guide on how to start using these workflows and generating video will be really helpful. I am pretty much lost here.

    LatentDream
    Author
    Jan 28, 2025

    I feel you... start with the basic workflows.
    I'm sorry, I don't have time to make a detailed video at the moment.

    AnandB Jan 28, 2025

    @LatentDream Thanks for the reply. For some reason the basic workflow wasn't working correctly. I tried the Ultra workflow and it seems to be working fine. I'm getting the hang of it now.

    What would be the best way to generate 1080p videos? Produce raw 720p with T2V and then upscale with V2V?

    fd4r34twefeee873 Jan 28, 2025 · 1 reaction
    CivitAI

    Even though I've activated some Add LORA nodes and set them to double blocks, with this workflow they don't seem to be taking effect. Is there an additional setting I'm missing?

    Thanks

    fd4r34twefeee873 Jan 28, 2025

    I just turned off the fast LoRA and it helped with this. How can we reduce the sharpness level?

    LatentDream
    Author
    Jan 28, 2025

    @fd4r34twefeee873 which workflow are you using?

    thevrvarren650 Jan 28, 2025 · 3 reactions
    CivitAI

    It's like you read my mind and put absolutely everything I wanted in a workflow, better than I could ever have. Trying this ASAP first thing tomorrow.

    LatentDream
    Author
    Jan 28, 2025

    😁 at your service 🤣

    thevrvarren650 Jan 29, 2025

    @LatentDream took me a bit, but I managed to get it to run on my 16GB 4070 Ti Super.
    The first try was 15+ minutes with the basic workflow, as it ate into shared memory.
    - I switched to directly using the fast model rather than model + LoRA
    - Disabled loading CLIP vision, as it's not used.
    That got me just a hair under 16GB, and now the thing runs in 108 seconds.

    Thanks again!

    LatentDream
    Author
    Jan 29, 2025 · 1 reaction

    @thevrvarren650 yes, previously all my workflows had the FAST model instead of regular + LoRA, but after a crazy amount of tests we found that using regular + fast LoRA is a better option to avoid glitches, especially when no other LoRAs are loaded in. So I had to modify all the workflows I shared and swap the model 😬
    I don't know if you get a real VRAM benefit from that particular switch; you may run into more artifacts for sure.

    The CLIP vision you disabled is the one used for caption generation, right?

    Good to hear that this workflow runs on 16GB! That really gives me some breathing room.

    JCD007 Jan 29, 2025 · 4 reactions
    CivitAI

    This is the only workflow that works! It's the 30th workflow I have tried by now. No errors, and amazing quality and speed. Thank you for your hard work!

    ULTRA1.2 on RTX4070 12GB

    LatentDream
    Author
    Jan 29, 2025

    I'm glad, but it feels strange that this is the only one that works for you; there are much simpler workflows out there than mine that should work 100% 😁. I haven't actually tried any of them (except Kijai's / Zer0's), I just took a quick look here and there to see if anyone had implemented something worth adding to mine for improvements. It really seems strange to me that this is the only one that works for you.

    Anyway... thanks. Enjoy it :)

    JCD007 Jan 30, 2025 · 1 reaction

    @LatentDream I think it's down to the fact that you explain in detail what to use and how to use it. Everyone else expects the user to have a degree in programming... Nothing is explained. The only way to know what to download is when you get an error. Uhggg. Again, thank you.

    dirtysem Jan 29, 2025 · 1 reaction
    CivitAI

    I would like to be able to adjust the duration of the video using a slider. I'm not very good at this, so it's hard for me to count frames and understand how long the video will be.

    LatentDream
    Author
    Jan 29, 2025

    The workflow is set up by frame count, and the video output is set to 24 frames per second.
    By doing easy math, 24 frames equal 1 second 😊.
    So if you want a 4-second video, do 24x4 and round the result to a compatible frame count (97 in this case).
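    The "compatible frames" rounding can be sketched as a tiny helper. It assumes (as an assumption on my part, consistent with 97 being the example above) HunyuanVideo's usual constraint that valid frame counts have the form 4*n + 1; the function name `compatible_frames` is mine:

```python
def compatible_frames(seconds: float, fps: int = 24) -> int:
    """Round a target duration to the nearest frame count of the
    form 4*n + 1 (1, 5, 9, ..., 97, ...), the counts HunyuanVideo
    accepts because the VAE packs every 4 frames into one latent
    frame, plus the initial frame."""
    target = seconds * fps
    n = round((target - 1) / 4)
    return max(1, 4 * n + 1)

# 4 seconds at 24 fps -> 96 raw frames, rounded to 97
print(compatible_frames(4))   # 97
print(compatible_frames(2))   # 49
```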

    dirtysem Jan 29, 2025

    I see, thanks)

    dirtysem Jan 29, 2025

    I tried it; my system doesn't handle more than 5 seconds, then OOM. And by the way, I don't have CLIP vision; it doesn't make suggestions for videos and text. The process runs, seemingly without errors, but the prompt window does not change. And how do I manually crop an image or video, e.g. the top of the frame or the center, and so on?

    LatentDream
    Author
    Jan 29, 2025

    @dirtysem depends on your VRAM; OOM can be normal.
    Files and requirements are written in the description.
    I could implement a cropping feature, but honestly I don't want to make this workflow extremely heavy; at this point it's already huge.

    xG00N3Rx Jan 29, 2025 · 1 reaction
    CivitAI

    Love the workflow!

    Does anyone know if there's a way to use two character loras in one scene?

    LatentDream
    Author
    Jan 30, 2025

    There are 2 LoRA sections in the latest workflows:
    one dedicated to random LoRA loading, the other that can be activated and stays fixed.
    Activate 2 of them in this last section, which is at the top left; select one and press CTRL+B.

    In all the other workflows there is only the fixed LoRAs section; same story there.

    2790319 Jan 29, 2025 · 3 reactions
    CivitAI

    Dankest workflow on site. Killer results the first queue!

    Verole Jan 30, 2025 · 3 reactions
    CivitAI

    awesome, your basic one is nice to me. fast and good ! THANKS

    mailvar88 Jan 30, 2025 · 2 reactions
    CivitAI

    This is my first comment on Civitai. This workflow cost me a fresh installation of Comfy before it was working, but once it worked, it was amazing.

    Generation times on a 4070 TI Super with 16GB:

    w/o upscale: 51 sec

    Latent upscale: 121 sec

    LatentDream
    Author
    Jan 30, 2025 · 2 reactions

    lol, I'm sorry, but I guess the only answer is: WELCOME TO the COMFYUI routine 🤣

    psybertech Jan 30, 2025 · 2 reactions
    CivitAI

    Quick question: using the Advanced 1.1 workflow, after generating a video it is creating a "scene#####.png" image file in my user Pictures directory. My output is saved on a completely different drive, and I can't find where or why these files are being created. I do not want them, nor need them. The normal output is working great; it is just these weird 'scene' png files that I want to prevent, or at least redirect to the normal output folder. Anyone? Thanks!

    LatentDream
    Author
    Jan 30, 2025

    What? Where is this file saved, precisely? If you are using I2V and the image is created in comfy/input, then it may be normal.

    psybertech Jan 30, 2025

    @LatentDream the files are being saved in my Windows logged-in user's Pictures directory. It doesn't seem to create them all the time (which is weird; I am creating a new video right now and will check whether one appears once it's complete), but I keep finding them there. It is weird that they are named 'scene####.png'; that naming is nothing like my file-name pattern for the videos (which all go into subdirectories in Comfy's output folder correctly). I have changed my Pictures folder location to point to D:\Photos (I always change the doc locations for my folders), but that shouldn't matter. I have searched for specific text inside all files in the custom-nodes folder and my active workspace .json file for any reference to 'scene' or 'pictures' or even the absolute path D:\Photos, but haven't found anything at all. Weird.

    LatentDream
    Author
    Jan 30, 2025

    @psybertech I still do not understand; something doesn't look right in your setup to me.
    Can you please tell me exactly which Windows logged-in user's Pictures directory this is?
    Give me an example,
    like c:\users\something\...

    screamlouder Jan 30, 2025 · 2 reactions
    CivitAI

    Using Basic Tea workflow, I'm getting the following error when trying to use the upscaler:

    TeaCacheHunyuanVideoSampler

    Sampling failed: shape '[1, 17, 67, 44, 16, 1, 2, 2]' is invalid for input of size 3280320

    I kept all other settings the same. I just updated the prompt and the loras being loaded. What could be the issue/fix?

    LatentDream
    Author
    Jan 30, 2025

    uhm... can you paste the console errors?
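    For reference, reshape errors like the one above ("shape ... is invalid for input of size ...") usually mean the upscaled latent's dimensions don't line up with what the sampler's patchify step expects. A rough sanity check you can run on your target resolution; the divisibility rules are assumptions based on HunyuanVideo's usual VAE stride of 8 and 2x2 patchify, not taken from this workflow itself:

```python
def check_dims(width: int, height: int, frames: int) -> list[str]:
    """Flag dimensions that commonly cause reshape errors in
    HunyuanVideo samplers. Assumptions: spatial dims should be
    divisible by 16 (VAE stride 8 * patch size 2), and the frame
    count should have the form 4*n + 1."""
    problems = []
    if width % 16:
        problems.append(f"width {width} not divisible by 16")
    if height % 16:
        problems.append(f"height {height} not divisible by 16")
    if (frames - 1) % 4:
        problems.append(f"frames {frames} is not of the form 4*n + 1")
    return problems

print(check_dims(1280, 720, 97))   # [] -- dimensions look fine
print(check_dims(700, 475, 96))    # flags all three dimensions
```

    Since the upscale stage multiplies the base resolution, a base size that passes this check can still produce an upscaled size that fails it, which would match the error appearing only at the upscale step.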

    JPEGEnjoyer Jan 31, 2025 · 1 reaction
    CivitAI

    I am getting very slow generation speed (like 50x slower than other workflows). I am on comfyui-Zluda 7900XTX. TeaCacheHunyuanVideo node is what is taking forever. Any idea why?

    EDIT: After letting it finish over night I returned to find a video of total static. So it is slow and does not work.

    LatentDream
    Author
    Jan 31, 2025

    Try the basic workflow (not Tea) and let us know if it works.
    https://civitai.com/models/1007385?modelVersionId=1261498
    If it works, that means the Tea nodes are not compatible with your hardware/setup somehow.

    Kung_fu_Pron Jan 31, 2025 · 2 reactions
    CivitAI

    When loading the graph, the following node types were not found

    TeaCacheHunyuanVideoSampler

    how do i fix this?

    LatentDream
    Author
    Jan 31, 2025

    Check the requirements in the description.

    fierytear464 Jan 31, 2025 · 3 reactions
    CivitAI

    Wow, this is simply amazing, getting started with some simple workflows!

    Amazing effort!!

    DIhan Jan 31, 2025 · 11 reactions
    CivitAI

    I created a one click template for Runpod that has everything you need to run this workflow. Search for
    Hunyuan Video - ComfyUI VScode AllInOne

    I made a guide to set it up
    https://civitai.com/articles/11303

    After it starts, drag in the workflow.

    RTX A6000 works well

    Filters to select (optional):

    - Community Cloud
    - NVME
    - High 600 MB/s

    https://runpod.io/console/deploy?template=unkcsqjb74&ref=0eayrc3z

    EDIT: Tried to update it for the Ultra 1.2 workflow, thinking Advanced 1.4 was the latest, but Ultra is the latest. Both should work anyway.

    After the build is completed you should see:
    -------
    Starting VS Code server...

    15:20:00 VS Code server started with PID: 92
    15:20:02 Starting ComfyUI...
    15:20:02
    CUDA Environment Check:
    ------
    NVIDIA stuff
    ------
    ---

    Wait 5-7 minutes after the ports are ready before using VSCode or ComfyUI.


    In ComfyUI, change:
    'clip-vit-large-patch14_OPENAI.safetensors' >>
    'clip-vit-larg-patch14.safetensors'

    'hunyuan_video_720_cfgdistill_fp8_e4m3fn.safetensors' >>
    'hunyuan_video_t2v_720p_bf16.safetensors'

    - Note: there's a bug where VSCode might not set up correctly. You may need to build again or use an SSH terminal to transfer files.

    - Another bug, where it cannot find the .sh file, happens on some GPU builds. Community Cloud GPUs seem to work fine.


    Have fun!

    ryangina Feb 1, 2025 · 1 reaction

    You the man. TY.

    LatentDream
    Author
    Feb 1, 2025 · 2 reactions

    Very cool.
    I was just saying, "I need a cloud service to do more tests before releasing the next update, which may speed things up even more."

    DIhan Feb 1, 2025

    I just realised that I've been using the 1.2 workflow; however, I'm updating the Docker image to match 1.4 right now.

    Noticed a few errors; so far I just needed to change
    'clip-vit-large-patch14_OPENAI.safetensors' to 'clip-vit-larg-patch14.safetensors'

    'hunyuan_video_720_cfgdistill_fp8_e4m3fn.safetensors' to 'hunyuan_video_t2v_720p_bf16.safetensors'

    and disable fast video (will add that next update)


    DIhan Feb 1, 2025

    @LatentDream This is an awesome workflow! I still can't get my head around all the knobs and dials yet. Hopefully in time, after reading your posts :D

    ryangina Feb 1, 2025 · 1 reaction

    @DIhan We appreciate it dude!

    NOOBDA Feb 3, 2025

    Runpod says "NO RESULTS FOUND"
    Edit: Correct name of the template = Hunyuan Video - ComfyUI Manager VScode - AllInOne

    NOOBDA Feb 3, 2025

    Okay, update. This is not working on SECURE CLOUD.

    If you get Multiple error messages: [FATAL tini (19)] exec /workspace/start.sh failed: No such file or directory

    Then switch to COMMUNITY CLOUD, use A6000 and VOILA!

    NOOBDA Feb 3, 2025

    New Error: VAELoader

    Error while deserializing header: HeaderTooSmall

    NOOBDA Feb 3, 2025

    I am familiar with using Jupyter Notebook, where I could place my models in their respective folders. How do I do the same here?

    DIhan Feb 3, 2025

    @NOOBDA Looking into it! Very odd that it breaks on some GPUs. I'm attempting a fix for all of them. Will update when done.

    DIhan Feb 4, 2025

    @NOOBDA open the folder /workspace/comfyui and you can drag files in there; right click > download to download files.

    DIhan Feb 4, 2025

    @NOOBDA So there were 2 VAEs with the same name in the wild. Both work, but the one from Kijai seems to work better. I've updated the new image with Kijai's one.

    DIhan Feb 4, 2025

    OK New image is working! I'll write up a post on how to use it soon

    DIhan Feb 4, 2025

    I made a guide to help anyone else

    https://civitai.com/articles/11303

    NOOBDA Feb 4, 2025 · 1 reaction

    @DIhan Thank you! I will test them over the weekend and let you know if I come across any issues.

    m0n3t Feb 1, 2025 · 3 reactions
    CivitAI

    Thank you! Getting decent results on 1070ti 8GB VRAM/ 32GB RAM

    NOOBDA Feb 1, 2025

    WHAT! FAKE? SARCASTIC?

    LatentDream
    Author
    Feb 1, 2025

    @NOOBDA lol, I thought that too. How is that even possible?

    m0n3t Feb 1, 2025

    @NOOBDA For real! 1070ti with 8GB VRAM and 32GB of DDR4 RAM :D

    m0n3t Feb 1, 2025 · 1 reaction

    @LatentDream Right? When I started seeing 8GB Hunyuan claims, I had to try. It took a lot of trial and error, but this workflow and a specific configuration of installations is working, producing 65 frames at 320x480 using 4 additional LoRAs.

    NOOBDA Feb 2, 2025

    @m0n3t Please show a result of your trial.

    Stillborn Feb 2, 2025

    This workflow works with my RTX 4050 6GB, wtf?! 32GB RAM also.

    m0n3t Feb 2, 2025

    @NOOBDA Just posted one!

    NOOBDA Feb 3, 2025

    @m0n3t where? not visible

    m0n3t Feb 3, 2025

    @NOOBDA Posted in the (Basic - Tea) gallery for this workflow https://civitai.com/images/55448601

    PepitoPalotes Feb 1, 2025 · 2 reactions
    CivitAI

    Works well, but this workflow has corrupt linking data, according to the rgthree extension. You might want to check it.

    LatentDream
    Author
    Feb 1, 2025

    Can you explain in detail? We are discussing all the ways to solve all possible issues on my Discord; this would be very helpful for everyone. Thanks.

    PepitoPalotes Feb 1, 2025

    @LatentDream I mean that if you use rgthree, on the top bar there is a button with the rgthree icon, and clicking on it you can see a button for settings. In those settings there is an option to detect corrupt workflows. If you enable that option and then load this workflow, you'll see a message saying that it has corrupt linking data. The message gives an option to try to fix it. I didn't try, though. If you do, better to make a backup first, just in case the fix actually breaks the workflow. 😅

    LatentDream
    Author
    Feb 3, 2025 · 1 reaction

    @PepitoPalotes ah, I see that now. It doesn't make sense to me; that error message also appears in some very basic workflows with native nodes. I would just ignore it.

    I get the same error; nothing happens when I run this workflow, it skips and does not output any results.

    PepitoPalotes Feb 5, 2025

    @SeaAdministrative684122 for me it works perfectly. I reported this just in case, because I saw the message, but the workflow does work and the results are good. Check that you have all the nodes and that all the models are set correctly. Also, I'm using the Ultra version; I didn't try the other versions.

    J1B Feb 1, 2025 · 1 reaction
    CivitAI

    What is the default output location? I cannot find the generated videos anywhere.

    LatentDream
    Author
    Feb 1, 2025 · 2 reactions

    You can toggle saving on the video output module;
    it is not saving anything by default.
    Personally, I prefer to right-click on the video and save it only when I want to.

    J1B Feb 1, 2025 · 1 reaction

    @LatentDream I knew you were going to say that, after I queued up 30 videos and walked away for a few hours :(

    Thanks for answering quickly anyway.

    m0n3t Feb 1, 2025 · 4 reactions

    @J1B If you haven't restarted, they should be in the \temp\ folder

    makiaeveli Feb 4, 2025 · 1 reaction

    @m0n3t holy fuck

    Mikerhinos Feb 2, 2025 · 1 reaction
    CivitAI

    I'm trying the latest ULTRA 1.2; is it normal that the image-to-video workflow's output video has nothing to do with the input image? (Yes, I enabled I2V lol)

    MR_Zz Feb 2, 2025 · 1 reaction

    I have the same issue.

    LatentDream
    Author
    Feb 3, 2025

    Yes, totally normal. A true image-to-video model does not exist yet; that's a fake image-to-video, or as I wrote, "SORT OF" 🤣. Check my other posts for better image-to-video workflows based on other nodes.

    dowefa5467894 Feb 2, 2025 · 1 reaction
    CivitAI

    When generating 320x480 in T2V mode on a 4070 Ti Super (16GB) / 64GB RAM, GPU usage is always at 98-100% and OOM does not occur, but the task is extremely slow and in some cases it hangs indefinitely. Is this normal? Judging from the comments, it seems like other people are using it without any problems... Am I missing something?

    LatentDream
    Author
    Feb 3, 2025

    Check the VRAM overload section of my article.

    MR_Zz Feb 2, 2025 · 1 reaction
    CivitAI

    I'm getting this issue: Sharpen.sharpen() missing 1 required positional argument: 'image'

    using T2V :((

    LatentDream
    Author
    Feb 3, 2025

    Which workflow?

    _AIwaifu_ Feb 3, 2025 · 2 reactions
    CivitAI

    Nodes are conflicting on tea ultra 1.2

    Comfyui_TTP_Toolset

    ComfyUI-Logic

    SpazAI Feb 3, 2025

    I have the same issue!

    LatentDream
    Author
    Feb 3, 2025 · 1 reaction

    1.3 is available on Discord in alpha; hopefully no conflict hassles.
    Join and test it before release 😋

    darkview Feb 4, 2025

    @LatentDream where is your discord ?

    Itsnameless Feb 4, 2025

    @darkview https://discord.gg/VcCKy9mJKq
    You can find it in the description of the Workflow too.

    Workflows
    Hunyuan Video

    Details

    Downloads
    3,076
    Platform
    CivitAI
    Platform Status
    Available
    Created
    1/27/2025
    Updated
    5/13/2026
    Deleted
    -