HUNYUAN | AllInOne
No need to buzz me; feedback is much appreciated. | last update: 06/03/2025
⬇️ OFFICIAL Image To Video V2 Model is out! ⬇️ A COMFYUI UPDATE IS REQUIRED!
Get files here:
Link 1 → paste in: \models\clip_vision
Link 2 or Link 3 → paste in: \models\diffusion_models (pick the one that works best for you)
⚠️ The I2V model got an update on 07/03/2025 ⚠️
These workflows have evolved over time through various tests and refinements,
thanks also to the huge contributions of this community.
Requirements, special thanks, and credits are above.
Before commenting, please keep in mind:
The Advanced and Ultra workflows are intended for more experienced ComfyUI users.
If you choose to install unfamiliar nodes, you take full responsibility. I make these workflows for fun, randomly in my free time.
Most issues you might encounter have probably already been widely discussed and solved on Discord, Reddit, and GitHub, and addressed in the description of the workflow you're using, so please... read carefully...
and consider doing some searching before commenting. I started this alone, but now there's a small group of people contributing their passion, experiments, and cool findings. Credits below.
Thanks to their contributions, this small project continues to grow and improve
for everyone's benefit.
- Fast LoRA may work best when combined with other LoRAs, allowing you to reduce the number of steps.
- WaveSpeed can significantly reduce inference time but may introduce artifacts.
- Achieving good results requires testing different settings. Default configurations may not always work, especially when using LoRAs, so experiment to find the settings that fit best. THERE ARE NO UNIVERSAL SETTINGS THAT WORK FOR EVERY CASE.
- You can also try switching to a different sampler/scheduler and see which works best for your case: try UniPC simple, LCM simple, DDPM, DPM++ 2M beta, Euler normal/simple/beta, or the new "gradient_estimation".
(Samplers/schedulers need to be set for each stage and mode; they are not settings found in the console.)
Legend to help you choose the right workflow:
✔️ Green check = UP TO DATE version for its category.
Includes the latest settings, tricks, updated nodes and samplers; works on the latest ComfyUI.
🟩🟧🟪 Colors = Basic / Advanced / Ultra
❌ = Based on deprecated nodes; you'll have to fix it yourself if you really want to use it
Quick Tips:
Low VRAM? Try this:
and/or try using the GGUF models available here.
RTX 4000? Use this:
Want more tips?
Check my article: https://civarchive.com/articles/9584
All workflows available on this page are designed to prioritize efficiency, delivering high-quality results as quickly as possible.
However, users can easily customize settings through intuitive, fast-access controls.
For those seeking ultra-high-quality videos and the best output this model can achieve, adjustments may be necessary, like increasing steps, modifying resolutions, reducing TeaCache / WaveSpeed influence, or disabling Fast LoRA entirely to enhance results.
Personally, I aim for an optimal balance between quality and speed. All example videos I share follow this approach, utilizing the default settings provided in these workflows. While I may make minor adjustments to aspect ratio, resolution, or step count depending on the scene, these settings generally offer the best all-around performance.
WORKFLOWS DESCRIPTION:
š©"I2V OFFICIAL"
require:
llava_llama3_vision: ā”ļøLink paste in: \models\clip_vision
Model: ā”ļøLink or ā”ļøLink (pick the one that works best for you)
paste in: \models\diffusion_modelshttps://github.com/pollockjj/ComfyUI-MultiGPU
The following node is for SAGE ATTENTION, if you don't have it installed just bypass it:

š©"BASIC All In One"
use native comfy nodes, it has 3 method to operate:
T2V
I2V (sort of, an image is multiplied *x frames and sent to latent, with a denoising level balanced to preserve the structure, composition, and colors of the original image. I find this approach highly useful as it saves both inference time and allows for better guidance toward the desired result). Obviously this comes at the expense of general motion, as lowering the denoise level too much causes the final result to become static and have minimal movement. The denoise threshold is up to you to decide based on your needs.
There are other methods to achieve a more accurate image-to-video process, but they are slow. I didnāt even included a negative prompt in the workflow because it doubles the waiting times.
V2V same concept as I2V above
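For the curious, here is a minimal sketch of the idea behind this pseudo-I2V, in plain PyTorch with illustrative names (not the actual node code): the encoded image latent is simply repeated along the frame axis, and the sampler then runs at a reduced denoise so the composition survives.

import torch

def make_pseudo_i2v_latent(image_latent: torch.Tensor, num_frames: int) -> torch.Tensor:
    # image_latent: [batch, channels, 1, height, width], as produced by a video VAE encode.
    # Repeat the single frame across the time axis; sampling then runs at a
    # reduced denoise (e.g. 0.6-0.8) so structure and colors are preserved.
    return image_latent.repeat(1, 1, num_frames, 1, 1)

The lower the denoise, the closer the result stays to the source image, and the less motion you get.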
requires:
https://github.com/chengzeyi/Comfy-WaveSpeed
https://github.com/pollockjj/ComfyUI-MultiGPU
š§ "ADVANCED All In One TEA ā"
an improved version of the BASIC All In One TEA ā, with additional methods to upscale faster, plus a lightweight captioning system for I2V and V2V, that consume only additional 100mb vram.
Upscaling can be done in three ways:
Upscaling using the model. Best Quality. Slower (Refine is optional)
Upscale Classic + Refine. It uses a special video upscaling model that I selected from a crazy amount of multiple video upscaling models and tests, it is one of the fastest and allows for results with good contrast and well-defined lines. While itās certainly not the optimal choice when used alone but when combined with the REFINE step, it produces well-defined videos. This option is a middle ground in terms of timing between the first and third method.
Latent upscale + Refine. This is my favorite. fastest. decent.
This method is nothing more than the same as the first, wich is basically V2V, but at slightly lower steps and denoise.
Three different methods, more choices based on preferences.
Requirements:
-ClipVitLargePatch14
download model.safetensors
rename it as clip-vit-large-patch14_OPENAI.safetensors
paste it in \models\clip
paste it in \models\ESRGAN\
-LongCLIP-SAE-ViT-L-14
-https://github.com/pollockjj/ComfyUI-MultiGPU
-https://github.com/chengzeyi/Comfy-WaveSpeed
Update Changelogs:
|1.1|
Faster upscaling
Better settings
|1.2|
removed redundancies, better logic
some errors fixed
added an extra box to load a video and directly upscale it
|1.3|
New prompting system.
Now you can copy and paste any prompt you find online, and this will automatically replace the words you don't like and/or add extra random words (a rough sketch of the idea follows this entry).
Fixed some latent auto-switch bugs (these gave me serious headaches)
Fixed a seed issue: locking the seed will now lock sampling
Some UI cleaning
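To illustrate the idea, here is a rough sketch of that kind of prompt randomization (illustrative Python, not the actual node code; the names are made up):

import random

def randomize_prompt(prompt: str, swaps: dict[str, str], extras: list[str], n_extras: int = 1) -> str:
    # Swap out words you don't like for preferred alternatives...
    for bad, good in swaps.items():
        prompt = prompt.replace(bad, good)
    # ...then append a few random extra words.
    return prompt + ", " + ", ".join(random.sample(extras, n_extras))

print(randomize_prompt("a photo of a cat", {"photo": "video"}, ["cinematic", "4k"]))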
|1.4|
Batch Video Processing: huge time saver!
You can now generate videos at the bare minimum quality and later queue them all for upscaling, refining, or interpolating in a single step.
Just point it to the folder where the videos are saved, and the process runs automatically.
Added Seed Picker for Each Stage (Upscale/Refine)
You can now, for example, lock the seed during the initial generation, then randomize the seed for the upscale or refine stage.
More Room for Video Previews
No more overlapping nodes when generating tall videos (don't exaggerate with the ratio, obviously)
Expanded Space for Sampler Previews
Enable preview methods in the manager to watch the generation progress in real time.
This allows you to interrupt the process if you don't like where it's going.
(I usually keep previews off, as enabling them takes slightly longer, but they can be helpful in some cases.)
Improved UI
Cleaned up some connections (noodles), removed redundancies, and enhanced overall efficiency.
All essential nodes are highlighted in blue and placed right below each corresponding video node, while everything else (the back-end: switches, logic, math, and things you shouldn't touch) has been moved further down. You can now change settings or swap in the nodes you prefer much more easily.
Notifications
All nodes related to the browser notifications sent when each step completes, which some people find annoying, have been moved to the very bottom and highlighted in gray. So, if they bother you, you can quickly find them, select them, and delete them.
|1.5|
general improvements, some bug fixes
NB:
These two errors in the console are completely fine. Just ignore them:
WARNING: DreamBigImageSwitch.IS_CHANGED() got an unexpected keyword argument 'select'
WARNING: SystemNotification.IS_CHANGED() missing 1 required positional argument: 'self'
šŖ "AIO | ULTRA "
Embrace This Beast of Mass Video Production!
This version is for the truly brave professionals and unlocks a lot of possibilities.
Plus, it includes settings for higher quality, sharper videos, and even faster speed, all while being nearly glitch-free.
All older workflows have also been updated to minimize glitches, as explained in my previous article.
From Concept to Creation in Record Time!
We are achieving world-record speeds here, but at the cost of some complexity. These workflows are becoming increasingly intimidating, despite efforts to keep them clean and hide all the automation in the back-end as much as possible.
That's why I call this workflow ULTRA: a powerhouse for tenacious Hunyuan users who want to achieve the best results in the shortest time possible, with all tools at their fingertips.
Key Features and Improvements:
Handy Console: includes buttons to activate stages, with no need to connect cables or navigate elsewhere. Everything is centralized in one place (the Control Room), and functions can be accessed with ease.
T2V, I2V*, V2V, T2I, I2I Support: seamless transitions between different workflows.
*I2V: an image is repeated ×N frames and sent to latent. The official I2V model is not out yet; there's a temporary trick to do I2V here, which requires Kijai's nodes.
Wildcards + Custom Prompting Options: Switch between Classic prompting with wildcards or add random words in a dedicated box, with automatic customizable word swapping or censoring.
Video Loading: Load videos directly into upscalers/refiners and skip the initial inference stage.
Batch Video Processing: Upscale or Refine multiple videos in sequence by loading them from a custom folder.
Interpolation: Smooth frame transitions for enhanced video quality.
Random Character LoRA Picker: Includes 9 LoRA nodes in addition to fixed LoRA loaders.
Upscaling Options: Supports upscaling, double upscaling, and downscaling processes.
Notifications: Receive notifications for each completed stage, organized in a separate section for easy removal if necessary.
Lightweight Captioning: Enables captioning for I2V and V2V with minimal additional VRAM usage (only 100MB).
Virtual VRAM support.
Use the GGUF model with Virtual VRAM to create longer videos or increase resolution.
Hunyuan/SkyReels (T2V) quick-merge slider.
Switch from the regular model to Virtual VRAM / GGUF with a slider.
Latent preview to cut down the upscaling process.
A dedicated LoRA line exclusively for upscalers, toggled via a dedicated button.
RF-Edit (Loom).
Upscale using a multiplier or a "set to longest side" target.
A button to toggle WaveSpeed and Fast LoRA as needed, for upscaling only.
UI improvements based on user feedback.
- Sequential Upscale Under 1x / Double Upscaling
You can now downscale using the upscale process and then re-upscale with the refiner, or customize upscaler multipliers to upscale 2 times.
New Functionality:
The upscale value range now includes values as low as 0.5.
Two sliders are available: one for the initial upscale and another for the refiner (essentially another sampler, always V2V).
Applications:
Upscale, Refine, or combine the two.
Upscale fast (latent resize + sampler) or accurate (resize + sampler).
Refine (works the same as upscale, can be used alone or as an auxiliary upscaler)
Double upscaling: Start small and upscale significantly in the final stage.
Downscale and re-upscale: Deconstruct at lower resolution and reconstruct at higher quality.
Combos: Upscale & Refine / Downscale & Upscale

- Skip Decoders/Encoders Option
Save significant time by skipping raw decoding for each desired stage and going directly to the final result.

How it works: if your prompt is likely to produce a good output and the preview method ("latent2RGB") is active in the Manager, you can monitor the process in real time, and skip encoding/decoding by working exclusively in latent space, generating and sending latent data directly to the upscaler until the process completes.
Example:
A typical medium/high-quality generation might involve:
Resolution: ~432x320
Frames: 65
One Upscale: 1.5x (to 640x480)
Total Time: 162 seconds
In this example, by activating the preview in the Manager and skipping the first decoder (the preview before upscaling), you can save ~30 seconds: the process takes 133 seconds instead of 162.
Bypassing additional decoders (e.g., for further upscaling or refinement) can save even more time.
- Image Generation (T2I and I2I)
Explore Hunyuan's latent space with these image generation capabilities.
When the number of frames is set to 1, the image node activates automatically, allowing the image to be saved as a PNG.
Use the settings shown here for the best results:
T2I Example Gallery: Hunyuan Showcase
- Structural Changes / Additional Features
Motion Guider for I2V
This feature enhances motion in image-to-video workflows, lowering the chance of getting a static video as a result.
9 Random Character LoRAs Loader: previously limited to 5, now expanded to 9.
Random Character LoRA Lock On/Off:
By default, each seed corresponds to a random LoRA
(e.g., seed n° 667 = LoRA n° 7). Now you can unlock this "character LoRA locked to seed" behavior and regenerate the same video with a different random LoRA while maintaining the main seed. A rough sketch of the idea is below.
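Conceptually, the lock behaves roughly like this (a hypothetical helper, not the actual node logic):

import random

def pick_character_lora(seed: int, lora_paths: list[str], locked: bool = True) -> str:
    # Locked: a given seed always maps to the same LoRA slot.
    # Unlocked: keep the main seed for sampling, but reroll the LoRA choice.
    if locked:
        return lora_paths[seed % len(lora_paths)]
    return random.choice(lora_paths)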
Clarifications:
Let's call things by their real names: "Refine" and "Upscale" are both samplers here, each optimized for a specific stage:
Upscale: higher steps/denoise, fast results, balanced quality.
Refine: lower steps/denoise, focused on fixing issues and enhancing details.
Refine can work alone, without upscaling, to address small issues or improve fine details.
UI Simplification:
The "classic upscale" is now replaced by a faster and better-performing resize + sharpness operation and hidden in back-end to save space.Frame Limit Issue (101+ Frames):
Generating more than 101 frames with latent upscale can cause problems. To address this, I added an option to upscale videos before switching to latent processing.
- Bug Fixes
Latent Upscale Change:
Latent upscaling now uses bicubic interpolation instead of nearest-exact, which performs better based on testing (see the sketch after this list).
"Cliption" Bug Fixed
201-Frame Fix:
Generating 201-frame perfect loops caused artifacts with latent upscale. Switching to "resize" via the pink console buttons now resolves this issue.
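For reference, here is a minimal sketch of what a bicubic latent upscale amounts to, assuming a [batch, channels, frames, height, width] video latent (illustrative code, not the actual node):

import torch
import torch.nn.functional as F

def upscale_latent(latent: torch.Tensor, scale: float = 1.5) -> torch.Tensor:
    # Fold the time axis into channels so we can use 2D bicubic interpolation,
    # which is not available for 5D tensors.
    b, c, t, h, w = latent.shape
    x = latent.reshape(b, c * t, h, w)
    x = F.interpolate(x, scale_factor=scale, mode="bicubic", align_corners=False)
    return x.reshape(b, c, t, x.shape[-2], x.shape[-1])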
- Performance and other info:
Once you master it, you won't want to go back. This workflow is designed to meet every need and handle every case, minimizing the need to move around the board too much. Everything is controlled from a central "Control Room."
Traditionally, managing these functions would require connecting/disconnecting cables or loading various workflows. Here, however, everything is automated and executed with just a few button presses.
Default settings (e.g., denoise, steps, resolution) are optimized for simplicity, but advanced users can easily adjust them to suit their needs.
-Limitations:
No Audio Integration:
While I have an audio-capable workflow, it doesn't make sense here; audio should be processed separately for professional results.
No Post-Production Effects:
Effects like color correction, filmic grain, and other post-production enhancements are left to dedicated editing software or workflows. This workflow focuses on delivering a pure video product.
Interpolation Considerations:
Interpolation is included here. I set up the fastest method I could find, not necessarily the best one. For the best results, I typically use Topaz for both extra upscaling and interpolation after processing, but it's up to the user to choose their favorite interpolation method or final upscale if needed.
Requirements:
ULTRA 1.2:
-TeaCache
ULTRA 1.3:
-UPDATE TO LATEST COMFY IS NEEDED!
-Wave Speed
-ClipVitLargePatch14
ULTRA 1.4 / 1.5:
-UPDATE TO LATEST COMFY IS NEEDED!
https://github.com/pollockjj/ComfyUI-MultiGPU
https://github.com/chengzeyi/Comfy-WaveSpeed
https://github.com/city96/ComfyUI-GGUF
https://github.com/logtd/ComfyUI-HunyuanLoom
https://github.com/kijai/ComfyUI-VideoNoiseWarp
NB:
The following warnings in the console are completely fine. Just ignore them:
WARNING: DreamBigImageSwitch.IS_CHANGED() got an unexpected keyword argument 'select'
WARNING: SystemNotification.IS_CHANGED() missing 1 required positional argument: 'self'
Update Changelogs:
|1.1|
Better color scheme, to make it easier to understand how the upscaling stages work.
Check the images to understand.
|1.2|
Wildcards.
You can now switch from the classic prompting system (with wildcards allowed)
to the fancy one previously available.
|1.3|
An extra WaveSpeed boost kicks in for upscalers.
Changed samplers to native Comfy: no more TTP, no more interrupt error messages.
TeaCache is now a separate node.
Fixed a notification timing error and text, again.
Replaced a node that was causing errors for some users: "if any" is now swapped with "eden_comfy_pipelines".
Added SPICE, an extra fast-LoRA toggle that activates only in upscalers to speed up inference at lower steps and reduce noise.
Added Block Cache and Sage Attention to the setup; users who have them working can enable them.
Changed the default sampler from Euler Beta to the new "gradient_estimation" sampler introduced in the latest Comfy update.
Added a video info box for each stage (size, duration).
Removed "random lines."
Adjusted default values for general use.
Upscale 1 can now function as a refiner as well.
When pressing "Latent Resize" or "Resize," it will automatically activate the correct sampler.
A single-frame image is now displayed in other stages as well (when active).
Thanks to all the users who contributed on Discord to this workflow's improvements!
|1.4|
Virtual VRAM support
Hunyuan/SkyReels quick-merge slider
Toggle to switch from the regular model to Virtual VRAM / GGUF
Longer videos / higher resolutions / extreme upscaling now possible
Default resolution changed to 480x320, which looks like a balanced middle ground for low-res quick videos; most users should be fine with that.
Latent preview for skip-preview mode
Switch toggle to enable/disable Exclusive LoRA for upscalers
RF-Edit (Loom)
V2V loading time improved
Upscale to a longest-side target
Fixed slider upscale mismatch
info node moved
clean up and fixes
better settings for general use
Upscale 1 can now use the optional "resize to longest side" slider
added an extra WaveSpeed toggle for upscalers
added an exclusive LoRA line for upscalers
general fixes
UI improvements based on user feedback
fixed a Fast LoRA string issue on bypass in upscalers
more cleaning
changed the exclusive LoRAs for upscalers again; the main Fast LoRA is NOT going to pass through that line, since it already has a separate toggle (upscale with extra Fast LoRA, previously called SPICE FOR UPSCALING).
fixed output node size for videos
moved the resize-by-longest-side toggle into the extra menu
added an extra WaveSpeed toggle
the Control Room is finished... for now. I don't want to stress AIDoctor further; he already did a great job
lowered the Fast LoRA default value to 0.4
fixed VIDEO BATCH LOADING
|1.5|
general improvements, UI improvements, some bug fixes
Leap Fusion support
Go With The Flow support
Bonus TIPS:
Here's an article with all the tips and tricks I'm writing up as I test this model:
https://civarchive.com/articles/9584
If you struggle to use my workflows for any reason, you can at least refer to the article above. You will get a lot of precious quality-of-life tips for building and improving your Hunyuan experience.
All the workflows labeled with a ❌ are OLD and highly experimental; they rely on Kijai nodes that were released at a very early stage of development.
If you want to explore those, you'll need to fix them yourself, which should be pretty easy.

CREDITS
Everything I do, I do in my free time for personal enjoyment.
But if you want to contribute,
there are people who deserve WAY more support than I do,
like Kijai.
I'll leave his link;
if you're feeling generous, go support him.
Thanks!
Last but not least:
Thanks to this community, especially those who have given me advice and experimented with my workflows, helping improve them for everyone.
Special thanks to:
https://civarchive.com/user/galaxytimemachine
for their peculiar and precise method of operation in finding the best settings, and for all the tests conducted.
https://civarchive.com/user/TheAIDoctor
for his brilliance and for dedicating his time to creating and modifying special nodes for this workflow madness! Such an incredible person.
and
https://github.com/pollockjj/ComfyUI-MultiGPU
Also special thanks to:
Tr1dae
for creating HunyClip, a handy tool for quick video trimming. If you work with heavy editing software like DaVinci Resolve or Premiere, you'll find this tool incredibly useful for fast operations without the need to open resource-intensive programs.
Check it out here: [link]
Have fun
COMMENTS
It'd be nice if there were any real instructions to set this up.
I get it...
I should probably quit sharing this madness, because it's becoming too intimidating.
It would need a user manual or some tutorials.
Try playing with it a bit;
I pasted some images that can help with understanding the functions.
As always, I got a hernia just from looking at this workflow X-D. Looking forward to trying this out. Slight problem: I've struggled with this for the last couple of versions; the "If any return a else b" node is causing me issues. My only solution was manually adding them myself and replacing the ones you put in. Anyone have any idea about this?
Never mind, figured it out X-D. Seems like it was updated, or I missed something; all is working now. I'll see you in a week when I've had a play :-) As always, thanks for all you do to keep improving the Hunyuan video space.
Can you share the workaround? I'm facing the same issue with (If ANY return A else B 😬)
@Dalleno I tried two things. I am not sure which one fixed it, but you can try the following: I deleted the custom node ComfyUI-Logic and reinstalled it using git clone in the custom_nodes folder:
git clone https://github.com/theUpsider/ComfyUI-Logic.git
Reboot ComfyUI, and make sure you re-drag the workflow into Comfy afterwards, as mine didn't work until I rebooted ComfyUI and re-dragged the workflow in.
If the above doesn't work, the only other thing I did was download
https://github.com/theUpsider/ComfyUI-Logic/releases/tag/v1.2.0 and drag the contents into the ComfyUI-Logic folder, overwriting any changes.
I know I am repeating myself, but make sure that after rebooting Comfy you re-drag the workflow in, as mine still showed the error until I did this.
Manual download (https://github.com/theUpsider/ComfyUI-Logic.git) is the solution :)
@Dalleno I always type 1000 words for a two-word answer :) glad you got it working
We commented at the same time <3 thanks
Is this the one for CLIP ViT?
This is my first time running this model.
https://huggingface.co/openai/clip-vit-large-patch14/blob/main/model.safetensors
@Dalleno Apologies, I am not sure; I would assume it is this one, https://huggingface.co/openai/clip-vit-large-patch14, as it states OpenAI after it. @LatentDream should be able to help :-)
---Edit---
I was correct; if you read the OP's post, it links to my link above:
-ClipVitLargePatch14
download model.safetensors
rename it as clip-vit-large-patch14_OPENAI.safetensors
paste it in \models\clip_vision\
I answered 12 hours ago, but for some reason Civit didn't take my answer; then it went down for a bit.
LOL, I know... I had 3 hernias just trying to make this work the way I wanted. 🤣
Good, you solved it.
Sorry to hear some of you still have issues with "if ANY". I tried to see if there's a similar open issue that matches your problem, but I'm not really sure; try checking here https://github.com/theUpsider/ComfyUI-Logic/issues
I've changed the upscaling button menu a little to be more understandable, and uploaded an image that explains it better https://civitai.com/images/54366246
Uploaded the updated workflow on the same post, since there's no real difference except a few icons changed.
It is the identical workflow; I just added a different color scheme to help understanding...
hopefully... 💪
Hey... for some reason I get errors when trying to install missing nodes from the ComfyUI Manager... Am I doing something wrong?
The TeaCache node is not available through the Manager; this is written in the description.
How do I get rid of this annoying window that appears when the workflow is interrupted? This is really the most efficient workflow I've ever seen. However, this window constantly appears in different processes. This is too much.
I completely agree.
These TeaCache nodes are the only ones that allow further acceleration during upscaling; no other node enables this at the moment. As I already mentioned on the developer's GitHub page, I've reported this error message that appears when the user interrupts the workflow, but I haven't received any response from them yet. I wrote to them some time ago; still no answer.
So for now, we must live with that popup that appears when the workflow is interrupted; if we want the benefits and speed, there's no other way at the moment.
The only thing I can tell you, if it helps, is that you just need to click anywhere (not necessarily on the X) and that message will disappear.
I really encourage you to help me with this, if you can, by leaving a message on my post here:
https://github.com/TTPlanetPig/Comfyui_TTP_Toolset/issues/22
@LatentDream
Now that we're talking, can I ask you one more question? :)
How do I make a standard wildcard? I don't really like a system like yours; I want to write the prompt in one window using wildcard characters. Thank you in advance.
@dirtysem thank you.
Yes, I thought about implementing wildcards in the ULTRA, but I totally forgot about it.
Definitely something I need to add in the next update.
Here you go: ULTRA 1.2 with wildcards. Check the image to understand how to switch from the old system to classic prompting + wildcards. Enjoy.
@LatentDream
thank you very much :)
@dirtysem Let me know if there's anything else you would change for the better.
I love feedback.
If you insist :) how do I remove the output noise? It's too heavy.
@dirtysem What exactly do you mean? Are you getting noisy results? That usually happens if you upscale too much in a single stage; I do a 1.5x upscale, not more than that (for videos; T2I is another matter).
If I need more, I activate the second upscaler.
Upload an example somewhere so I can understand.
Please take a look. All images are displayed in order. Everything is done with your standard settings. I like the second image best; I wish that were the output.
https://drive.google.com/drive/folders/1k4zpKRYx9wqxycPu2EJgxlwmkSnc_YBh?usp=drive_link
P.S. Indeed, this is the best workflow. I've never seen such results.
@dirtysem OK, I saw it 👀; now I understand what you meant.
No, that's normal behavior for this model.
The "noise" tends to fade the higher you go with the resolution, as shown in my example images here:
https://latentdream.pixieset.com/hun-teaultra10/
The dog-in-the-water example is the most evident; watch it in full screen here: https://i.vgy.me/CWRvjI.jpg
That said, in this workflow there's some added sharpness.
So, if you're working at lower resolutions, that noise tends to get amplified, but it exists in the base output regardless.
If you want to adjust the sharpness settings, let me know which workflow you're using.
In the case of the latest workflow I published (ULTRA), the settings for sharpness are all at the top, in the
back-end section, and are marked as red nodes.
Try bypassing those and see if it works better for you.
Keep in mind, though, that as a result you'll get softer videos,
which is a normal outcome with this model.
You might also get rid of some noise by raising the steps way up to 25 or 30 and bypassing the Fast LoRA, but do not expect miracles. It's mostly about resolution.
Thank you very much. I'll try again. I'm just wondering how far your workflow can go. Implement inpainting in this model and it would be really cool xD
@dirtysem Believe me, I tried, even though I knew inpainting wasn't possible. What else can I add? I don't think there's much more to add for now (I could add sections for color correction, filmic grain, and other tweaks, but I prefer to keep a more professional approach focused solely on video production, leaving everything else for post-production).
I believe I've hit a limit in terms of optimization as well, given that I had to deal with some pretty inconvenient limitations of ComfyUI itself, wrestling with nodes and constraints that I managed to work around.
The next update will probably come when the REAL Img2Vid model is released.
First of all, your workflow seems amazing, good work! Could you help me?
I'm using the basic one with TEA. When I use T2V, I'm having an issue with the Hunyuan upscale: it gives me worse quality than the raw output. Is that right? It should be better quality, since we are upscaling and sampling again with a reference, no?
Also, the other steps seem to give me worse quality than the raw (refined; interpolated).
Edit: I was able to get better results by increasing the denoise in the basic scheduler for upscale/refine/interpolate. Is that the right way to fix it? Did I miss something?
I'll try a little I2V and V2V right now with the default settings. Thanks again for this great workflow =D
You're welcome.
Any settings that work for you are OK, I guess, but generally the defaults I set there are proven to work.
Maybe it's a particular case or a tricky prompt, I don't know.
Enjoy
I just downloaded the ultra version and I'm testing it now... The upscaler/refine seems better than basic TEA somehow :)
But it's still giving me some issues. Is it possible to keep the RAW output and only use the upscale in the BASIC TEA workflow? I saw that Ultra keeps the raw output without needing to generate again.
With either Ultra or Basic, the upscaler/refiner sometimes gives me blurry images/artifacts/distortion noise/something like that...
In the Ultra version, when I press queue on an image, it gives me this warning:
"got prompt
WARNING: DreamBigLatentSwitch.IS_CHANGED() got an unexpected keyword argument 'select'
WARNING: DreamBigLatentSwitch.IS_CHANGED() got an unexpected keyword argument 'select'
WARNING: DreamBigLatentSwitch.IS_CHANGED() got an unexpected keyword argument 'select'
WARNING: SystemNotification.IS_CHANGED() missing 1 required positional argument: 'self'
WARNING: DreamBigLatentSwitch.IS_CHANGED() got an unexpected keyword argument 'select'
WARNING: DreamBigLatentSwitch.IS_CHANGED() got an unexpected keyword argument 'select'
WARNING: DreamBigIntSwitch.IS_CHANGED() got an unexpected keyword argument 'select'
WARNING: DreamBigIntSwitch.IS_CHANGED() got an unexpected keyword argument 'select'
WARNING: SystemNotification.IS_CHANGED() missing 1 required positional argument: 'self'"
Let me give you an update, my friend: V2V and I2V are producing amazing, jaw-dropping videos; I'm in love with it (on default settings).
But my problem is T2V. I managed to solve the issue by increasing the denoise on the upscaler/refiner. I tried a lot of other things; I thought the problem could be the prompt or a bad seed, but I tested the prompt in other workflows and it went fine. I also tried a lot of seeds in your workflow, but only managed to fix the issue, as I said, by increasing the denoise.
Also, the weird warnings: should I do something?
Anyway, I do believe my hardware is causing the upscale issue, not your workflow. I wish you all the best; soon I'll upload the best videos I've made here, so everyone can confirm your workflow is the best out there right now.
@TheKnightsWhoSayNI I'm getting these errors as well; is it something we can ignore?
@TheKnightsWhoSayNI Sorry for the late answer: those errors in the console are totally fine.
It's a list of switches that must work that way to allow auto-switching to the correct active latent.
About the artifacts: those are probably related to the fast model.
I recently swapped the model for the standard one and inverted the Fast LoRA strength, in ALL WORKFLOWS ON THIS PAGE. You may try re-downloading and see if you get more luck.
I also suggest you give my article a read, where I constantly write up all my notes:
https://civitai.com/articles/9584?highlight=693834#comments
Can I swap the Load Diffusion Model node for a GGUF one? =v
Yes, just swap the loader.
@LatentDream How much VRAM and RAM do you use? How much do you recommend for Advanced TEA 1.4?
@TheKnightsWhoSayNI Here: 24 GB VRAM / 64 GB RAM.
Someone said the TeaCache nodes may eat more VRAM in exchange for the speed, but I haven't noticed this on my metering here.
You should be able to run any of the workflows with 16 GB of VRAM, as long as you don't go too far with the frame count and resolution.
@LatentDream Thank you again; you're great and quick at answering... I'm using a 4080 Super, but I'm short on RAM sticks (2x8 GB hahah). I'll try to upgrade as soon as possible. Thanks again, great work.
@TheKnightsWhoSayNI Ah damn, yeah, I can assure you I saw benefits when I jumped from 32 to 64.
Good luck, man.
Oh man. A video guide on how to start using these workflows and generating video would be really helpful. I am pretty much lost here.
I feel you... start with the basic workflows.
I'm sorry, I don't have time to make a detailed video at the moment. Sorry.
@LatentDream Thanks for the reply. For some reason the basic workflow wasn't working correctly. I tried the Ultra workflow and it seems to be working fine. I am getting the hang of it now.
What would be the best way to generate 1080p videos? Produce raw 720p with T2V and then upscale with V2V?
Even though I've activated some Add LORA nodes and set them to double blocks, with this workflow they don't seem to be taking effect. Is there an additional setting I'm missing?
Thanks
I just turned off the Fast LoRA and that helped with this. How can we reduce the sharpness level?
@fd4r34twefeee873 Which workflow are you using?
It's like you read my mind and put absolutely everything I wanted in a workflow, better than I could ever have. Trying this ASAP first thing tomorrow.
😁 at your service 🤣
@LatentDream Took me a bit, but I managed to get it to run on my 16GB 4070 Ti Super.
First try was 15+ minutes with the basic workflow, as it ate into shared memory.
- I switched to using the fast model directly, rather than model + LoRA.
- Disabled loading CLIP Vision, as it's not used.
That got me just a hair under 16GB, and now the thing runs in 108 seconds.
Thanks again!
@thevrvarren650 Yes, previously all my workflows had the FAST model instead of regular + LoRA, but after a crazy amount of tests we found that using regular + Fast LoRA is a better option to avoid glitches, especially when no other LoRAs are loaded in. So I had to modify all the workflows I shared and swap the model 😬
I don't know if you get a real VRAM benefit from that particular switch; you may encounter more artifacts, for sure.
The CLIP Vision you disabled is the one used for Cliption captioning, right?
Good to hear that this workflow runs on 16GB! That really gives me some breathing room.
This is the only workflow that works! It's the 30th workflow I have tried by now. No errors, and amazing quality and speed. Thank you for your hard work!
ULTRA 1.2 on an RTX 4070 12GB
I'm glad, but it feels strange that this is the only one that works for you; there are much simpler workflows out there than mine that should work 100% 😅. I haven't actually tried any of them (except Kijai's / Zer0's); I just took a quick look here and there to see if anyone had implemented something worth adding to mine as an improvement. It really seems strange to me that this is the only one that works for you.
Anyway... thanks. Enjoy it :)
@LatentDream I think it is down to the fact that you explain in detail what to use and how to use it. Everyone else expects the user to have a degree in programming... Nothing explained. The only way to know what to download is when you get an error. Ughhh. Again, thank you.
I would like to be able to adjust the duration of the video using a slider. I'm not very good at this, so it's hard for me to count frames and understand how long the video will be.
The workflow is set up by frame count, and the video output is set to 24 frames per second.
By doing the easy math, 24 frames equal 1 second 🙂
So if you want a 4-second video, do 24x4 and round the result to a compatible frame count (97 in this case).
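If you want a quick helper for that math (a small illustrative snippet; Hunyuan-style workflows want frame counts of the form 4n+1, which is why 96 rounds to 97):

def frames_for_seconds(seconds: float, fps: int = 24) -> int:
    # Round to the nearest compatible frame count (4n + 1).
    raw = round(seconds * fps)
    return 4 * round((raw - 1) / 4) + 1

print(frames_for_seconds(4))  # 97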
I see, thanks)
I tried it; my system doesn't handle more than 5 seconds, then OOM. And by the way, I don't have CLIP Vision; it doesn't make suggestions for videos and text. The process goes on, seemingly without errors, but the prompt window does not change. And how do I manually crop an image or video, such as the top of the frame or the center, and so on?
@dirtysem It depends on your VRAM; OOM can be normal.
The files and requirements are written in the description.
I could implement a cropping feature, but honestly I don't want to make this workflow extremely heavy; at this point it's already huge.
Love the workflow!
Does anyone know if there's a way to use two character LoRAs in one scene?
There are 2 LoRA sections in the latest workflows:
one dedicated to random LoRA loading, and one that can be activated and stays fixed.
Activate 2 of them in this last section, which is at the top left; select one and press CTRL+B.
In all the other workflows there is only the fixed LoRAs section; same story there.
Dankest workflow on the site. Killer results on the first queue!
Awesome; your basic one works nicely for me. Fast and good! THANKS
This is my first comment on Civitai. This workflow cost me a fresh installation of Comfy until it was working, but once it worked, it was amazing.
Generation times on a 4070 TI Super with 16GB:
w/o upscale: 51 sec
Latent upscale: 121 sec
lol, I'm sorry, but I guess the only answer is: WELCOME TO THE COMFYUI routine 🤣
Quick question: using the Advanced 1.1 workflow, after generating a video it creates a "scene#####.png" image file in my user's Pictures directory. My output is saved on a completely different drive, and I can't find where or why these files are being created. I do not want them, nor need them. The normal output is working great; it is just these weird 'scene' PNG files that I want to prevent, or at least redirect to the normal output folder. Anyone? Thanks!
What? Where is this file saved, precisely? If you are using I2V and the image is created in comfy/input, that may be normal.
@LatentDream The files are being saved in my Windows logged-in user's Pictures directory. It doesn't seem to create them all the time (which is weird; I am creating a new video right now and will check whether one appears once it is complete), but I keep finding them there. It is weird that they are named 'scene####.png'; that naming is nothing like my file-name saves for the videos (which all go into subdirectories in Comfy's output folder correctly). I have changed my Pictures folder default location to point to D:\Photos (I always change doc locations for my folders), but that shouldn't matter. I have searched for specific text inside all the files in the custom_nodes folder and in my active workspace .json file for any reference to 'scene' or 'pictures', or even the absolute path D:\Photos, but haven't found anything at all. Weird.
@psybertech I still do not understand; something doesn't look right in your setup to me.
Can you please tell me exactly which directory this Windows logged-in user's Pictures directory is?
Give me an example,
like c:\users\something\...
Using the Basic TEA workflow, I'm getting the following error when trying to use the upscaler:
TeaCacheHunyuanVideoSampler
Sampling failed: shape '[1, 17, 67, 44, 16, 1, 2, 2]' is invalid for input of size 3280320
I kept all other settings the same; I just updated the prompt and the LoRAs being loaded. What could be the issue/fix?
Uhm... can you paste the console errors?
I am getting very slow generation speed (like 50x slower than other workflows). I am on ComfyUI-Zluda with a 7900 XTX. The TeaCacheHunyuanVideo node is what is taking forever. Any idea why?
EDIT: After letting it finish overnight, I returned to find a video of total static. So it is slow and does not work.
Try the basic workflow (not TEA) and let us know if it works.
https://civitai.com/models/1007385?modelVersionId=1261498
If that works, it means these TEA nodes are somehow not compatible with your hardware/setup.
When loading the graph, the following node types were not found
TeaCacheHunyuanVideoSampler
How do I fix this?
Check the requirements in the description.
Wow, this is simply amazing, getting started with some simple workflows!
Amazing effort!!
I created a one-click template for RunPod that has everything you need to run this workflow. Search for:
Hunyuan Video - ComfyUI VScode AllInOne
I made a guide to set it up:
https://civitai.com/articles/11303
After it starts, drag in the workflow.
An RTX A6000 works well.
Filters to select (optional):
- Community Cloud
- NVMe
- High, 600 MB/s
https://runpod.io/console/deploy?template=unkcsqjb74&ref=0eayrc3z
EDIT: I tried to update it for the Ultra 1.2 workflow, thinking Advanced 1.4 was the latest, but Ultra is the latest; both should work anyway.
After the build is completed, you should see:
-------
Starting VS Code server...
15:20:00 VS Code server started with PID: 92
15:20:02 Starting ComfyUI...
15:20:02 CUDA Environment Check:
------
NVIDIA stuff
------
---
Wait 5-7 minutes after the ports are ready to use VSCode or ComfyUI.
In ComfyUI, change:
'clip-vit-large-patch14_OPENAI.safetensors' >>
'clip-vit-larg-patch14.safetensors'
'hunyuan_video_720_cfgdistill_fp8_e4m3fn.safetensors' >>
'hunyuan_video_t2v_720p_bf16.safetensors'
- I noticed a bug where VSCode might not set up correctly; you may need to build again or use an SSH terminal to transfer files.
- Another bug, where it cannot find the .sh file, happens on some GPU builds; Community Cloud GPUs seem to work fine.
Have fun!
You're the man. TY.
Very cool.
I was just saying, "I need a cloud service to do more tests before releasing the next update, which may speed things up even more."
I just realised that I've been using the 1.2 workflow; however, I'm updating the Docker image to match 1.4 right now.
I noticed a few errors; so far you just need to change
'clip-vit-large-patch14_OPENAI.safetensors' to 'clip-vit-larg-patch14.safetensors'
'hunyuan_video_720_cfgdistill_fp8_e4m3fn.safetensors' to 'hunyuan_video_t2v_720p_bf16.safetensors'
and disable fast video (will add that next update).
@LatentDream This is an awesome workflow! I still can't get my head around all the knobs and dials yet. Hopefully in time, after reading your posts :D
@DIhan We appreciate it, dude!
RunPod says "NO RESULTS FOUND"
Edit: Correct name of the template = Hunyuan Video - ComfyUI Manager VScode - AllInOne
Okay, update: this is not working on SECURE CLOUD.
If you get multiple error messages: [FATAL tini (19)] exec /workspace/start.sh failed: No such file or directory
then switch to COMMUNITY CLOUD, use an A6000, and VOILA!
New error: VAELoader
Error while deserializing header: HeaderTooSmall
I am familiar with using Jupyter notebooks, so I could place my models in their respective folders. How do I do the same here?
@NOOBDA Looking into it! Very odd that it breaks on some GPUs. I'm attempting a fix for all of them. Will update when done.
@NOOBDA Open the folder /workspace/comfyui and you can drag files in there; right click > download to download files.
@NOOBDA So there were two VAEs with the same name in the wild. Both work, but the one from Kijai seems to work better. I've updated the new image with Kijai's.
OK, the new image is working! I'll write up a post on how to use it soon.
I made a guide to help anyone else
https://civitai.com/articles/11303
@DIhan Thank you! I will test them over the weekend and let you know if I come across any issues.
Thank you! Getting decent results on a 1070 Ti with 8GB VRAM / 32GB RAM.
WHAT! FAKE? SARCASTIC?
@NOOBDA lol, I thought that too; how is that even possible?
@NOOBDA For real! A 1070 Ti with 8GB VRAM and 32GB of DDR4 RAM :D
@LatentDream Right? When I started seeing 8GB Hunyuan claims, I had to try. It took a lot of trial and error, but this workflow and a specific configuration of installations is working, producing 65 frames at 320x480 with 4 additional LoRAs.
@m0n3t Please show a result of your trial.
This workflow works with my RTX 4050 6GB, wtf?! 32GB RAM too.
@NOOBDA Just posted one!
@m0n3t Where? Not visible.
@NOOBDA Posted in the (Basic - Tea) gallery for this workflow https://civitai.com/images/55448601
Works well, but this workflow has corrupt linking data, according to the rgthree extension. You might want to check it.
Can you explain in detail? We are discussing all the ways to solve all possible issues on my Discord; this would be very helpful for everyone. Thanks.
@PepitoPalotes I mean that if you use rgthree, on the top bar there is a button with the rgthree icon, and clicking on it shows a settings button. In those settings there is an option to detect corrupt workflows. If you enable that option and then load this workflow, you'll see a message saying that it has corrupt linking data. The message gives you an option to try to fix it. I didn't try it, though. If you do, it's better to make a backup first, just in case the fix actually breaks the workflow. 🙂
@PepitoPalotes Ah... I see that now. It doesn't make sense to me; that error message also appears in some very basic workflows with native nodes. I would just ignore it.
It prompts the same error; nothing happens when I run this workflow, it just skips and does not output any results.
@SeaAdministrative684122 For me it works perfectly. I reported this just in case, because I saw the message, but the workflow does work and the results are good. Check that you have all the nodes and that all the models are set correctly. Also, I'm using the ultra version; I didn't try the other versions.
What is the default output location? I cannot find the generated videos anywhere.
You can toggle saving on the video output module.
It does not save anything by default.
Personally, I prefer to right-click on the video and save it only when I want to.
@LatentDream I knew you were going to say that, after I queued up 30 videos and walked away for a few hours :(
Thanks for answering quickly anyway.
@J1B If you haven't restarted, they should be in the \temp\ folder.
@m0n3t holy fuck
I'm trying the latest ULTRA 1.2. Is it normal that the image-to-video workflow's output video has nothing to do with the input image? (Yes, I enabled I2V lol)
I have the same issue.
Yes, totally normal. An image-to-video model does not exist yet; that's a fake image-to-video or, as I wrote, "SORT OF" 🤣. Check my other posts for better image-to-video workflows based on other nodes.
When generating at 320x480 in T2V mode on a 4070 Ti Super (16GB) / 64GB RAM, GPU usage is always at 98-100% and no OOM occurs, but the task is extremely slow and in some cases hangs indefinitely. Is this normal? Judging from the comments, it seems like other people are using it without any problems... Am I missing something?
Check my article, the VRAM overload section.
I'm getting this issue: Sharpen.sharpen() missing 1 required positional argument: 'image'
using T2V :((
Which workflow?
Nodes are conflicting on TEA ULTRA 1.2:
Comfyui_TTP_Toolset
ComfyUI-Logic
I have the same issue!
1.3 is available on Discord in alpha; no conflict hassle, hopefully.
Join and test it before release 😁
@LatentDream Where is your Discord?
@darkview https://discord.gg/VcCKy9mJKq
You can find it in the description of the workflow too.