Workflow for generating morph-style looping videos.
v3: Hyper-SD implementation - lets us use the AnimateDiff v3 motion model with DPM and other samplers. Seems to result in improved quality and better overall color and animation coherence.
Uses QRCode ControlNet to guide the animation flow; morphing between the reference images is done via IPAdapter attention masks.
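For intuition, here is a minimal sketch of the kind of per-frame attention masks that drive the morphing - not the workflow's actual code, just an illustration in PyTorch; the real workflow holds each image steady for a stretch (via CreateFadeMaskAdvanced keyframes) rather than crossfading continuously:

import torch

def fade_masks(num_frames: int, num_images: int) -> torch.Tensor:
    # masks[i, f] is how strongly reference image i drives frame f.
    # A hat function per image gives a smooth crossfade whose weights
    # sum to 1.0 on every frame; the image order wraps so the video loops.
    t = torch.arange(num_frames) * num_images / num_frames  # continuous position
    half = num_images / 2
    return torch.stack([
        (1.0 - (torch.remainder(t - i + half, num_images) - half).abs()).clamp(min=0.0)
        for i in range(num_images)
    ])

m = fade_masks(num_frames=96, num_images=4)
print(m.shape)       # torch.Size([4, 96])
print(m.sum(dim=0))  # all ones - each frame is a blend of at most two images

Each of the four schedules becomes the attn_mask for its own IPAdapter, so at any frame the sampler attends mostly to one reference image, blending into the next.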
Here are some more motion masks to use with QRCode - kindly provided by @Xenodimensional: https://civarchive.com/posts/2011230
❗If you are getting the error message "CLIP Vision model not found": download the two models below, rename them as shown, and place them in the /ComfyUI/models/clip_vision folder:
CLIP-ViT-H-14-laion2B-s32B-b79K.safetensors (download and rename)
CLIP-ViT-bigG-14-laion2B-39B-b160k.safetensors (download and rename)
❗If you are getting the error message "IPAdapter model not found":
You are likely missing the IPAdapter model. In the ComfyUI Manager menu, click Install Models, search for ip-adapter_sd15_vit-G.safetensors, and click Install.
If installing through the Manager doesn't work for some reason, you can download the model from Huggingface and drop it into the \ComfyUI\models\ipadapter folder.
The ViT-G model is what I used in the workflow, but I suggest you try out other IPAdapter models as well.
Description
Now supports 4 reference images to morph between.
FAQ
Comments
Hyped to try version 2, had a lot of fun with version 1, good job.
Ty @Dever 💙 Don't get your hopes up too high, it's a pretty minor update - more about showcasing how to include more reference images to morph through. One day I'll dive in and do the math to make it more modular. Soon™
More reference images were a much-requested addition to the workflow
@ipiv still hyped cause I couldn't be bothered to do it so in the meantime I learned how to transition between 4-5 separate renders in DaVinci haha
Wow but can't solve this error!
Error occurred when executing IPAdapterUnifiedLoader: IPAdapter model not found.
File "D:\Tools\ComfyUI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_IPAdapter_plus\IPAdapterPlus.py", line 393, in load_models raise Exception("IPAdapter model not found.")
No worries at all, happy to help!
You are missing the IPAdapter model. In the ComfyUI Manager menu, click Install Models, search for ip-adapter_sd15_vit-G.safetensors, and click Install.
If installing through the Manager doesn't work for some reason, you can download the model from Huggingface and drop it into the \ComfyUI\models\ipadapter folder.
The ViT-G model is what I used in the workflow, but I suggest you try out other IPAdapter models as well.
@ipiv Thanks but I have it already...
@ipiv sorry I mean, I have the ip-adapter_sd15_vit-G.safetensors installed but still that error...
@freddypeters381 did you get it working? I had the same issue - but I used the links in the workflow to (re)download and rename the clip_vision models, and then it worked beautifully for me.
This is Amazing
Can't seem to get anthropomorphic Doberman dog type images to adhere to their original design as this workflow wants to change them to human like characters. I changed the ipadapter to high strength... it helps but not great. IPadapter weight can't go above 1. Any suggestions to get animation to look closer to original images? (example images) https://drive.google.com/file/d/1ExDh3rj_eAXu0q-1-AQNnkVbV14tM4kS/view?usp=drive_link
https://drive.google.com/file/d/1EnN_F6YGL9AwHKvyP8Ez2DO1sUU9eurn/view?usp=drive_link
Thank you again for this great work. It's my favorite workflow for animations❤️
Amazing work, but I have a question: every time I try it, the video seems too fast and too short. Is there a way to make it slower?
Increase the multiplier on the RIFE VFI node - it interpolates extra frames between the generated ones, so a higher multiplier gives a longer, smoother, slower-feeling video.
When I input four reference images, I get an error: Error(s) in loading state_dict for ImageProjModel: size mismatch for proj.weight: copying a param with shape torch.Size([3072, 1280]) from checkpoint, the shape in current model is torch.Size([3072, 1024]). Does this workflow have requirements for the input reference images?
512²
There's no requirement - IPAdapter resizes the images automatically: it crops them in the center to a square resolution. (If the main focus of the picture is not in the middle, the result might not be what you are expecting.)
But the error often happens when you are mixing SD1.5 and SDXL models. It might indicate you are loading an incorrect model, such as a wrong CLIP Vision model, in your Comfy folders. Re-download and rename according to the description/Note node in the workflow.
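To illustrate what that center crop means in practice, here is a rough sketch (an approximation of the behavior described above, not IPAdapter's actual code), using Pillow:

from PIL import Image

def center_square_crop(img: Image.Image, size: int = 224) -> Image.Image:
    # Crop the largest centered square, then resize - anything far from
    # the center of a non-square image is simply thrown away.
    w, h = img.size
    side = min(w, h)
    left, top = (w - side) // 2, (h - side) // 2
    return img.crop((left, top, left + side, top + side)).resize((size, size))

# A 1920x1080 image loses 420 px on each side, so an off-center subject gets cut off.
print(center_square_crop(Image.new("RGB", (1920, 1080))).size)  # (224, 224)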
I need a video tutorial, thanks
Hi, thank you very much for workflow.
I have a question about those two clip models:
CLIP-ViT-H-14-laion2B-s32B-b79K.safetensors
CLIP-ViT-bigG-14-laion2B-39B-b160k.safetensors "download and rename"
Which node/nodes exactly use them?
In the /clip_vision/ folder I have these 3 models:
1. clip_vision_g.safetensors
2. CLIP-ViT-bigG-14-laion2B-39B-b160k.safetensors
3. CLIP-ViT-H-14-laion2B-s32B-b79K.safetensors
and the IP Adapter Unified Loader uses only "clip_vision_g.safetensors"
Hey,
IPAdapter uses only those 2 CLIP Vision models I have linked in the description and also in the Note node.
Source: https://github.com/cubiq/ComfyUI_IPAdapter_plus?tab=readme-ov-file#installation
You should be able to safely delete that "clip_vision_g.safetensors" file - unless some other Custom Node requires that naming.
hey, i tried that and it still gives me "IPAdapter model not found"?
@ipiv rename these 2 files to what, sir?
I also keep getting this error, despite having downloaded, renamed and placed both clipvision models correctly :(
Awesome, can't tell you how much I appreciate it. Very helpful, thanks!
hi! i just tried the workflow, i can tell i can get amazing results, but i'm getting this error in KSampler:
Error occurred when executing KSampler: integer division or modulo by zero
File "C:\Ai\ComfyUI_windows_portable\ComfyUI\comfy\controlnet.py", line 26, in broadcast_image_to tensor = torch.cat([tensor] * (per_batch // tensor.shape[0]) + [tensor[:(per_batch % tensor.shape[0])]], dim=0)
it has something to do with the init video - if i run 30 frames it works, but if i want to run the whole video it crashes :c.
i hope you can help me or tell me if there is a tutorial for this workflow :D
Hmm.. if it runs fine for a 30 frame animation - it means you should have all the correct models loaded.
Make sure you haven't accidentally changed a value somewhere.
Also, you might be running out of VRAM? Check your VRAM usage during execution. The starting resolution is quite low already, and if it runs out of VRAM during the preview step, your options are unfortunately limited.
Hello! Tell me please how to deal with the error. Yesterday it was not there and workflow was working. Today, an error appears when trying to generate:
Error occurred when executing KSampler: module 'comfy.sample' has no attribute 'prepare_mask'
sorry to bother you - the reason for the error was the AnimateDiff update
@Trivika Thanks for the heads up - I did a full update on Comfy and all the custom nodes used in this workflow and it went smoothly. None of the nodes, including AnimateDiff, broke for me, so it seems an update to this workflow is not needed.
@ipiv I received the same error as @Trivika. It seems to have stemmed from the fact that I had installed the AnimateLCM custom node before it was implemented into the AnimateDiff package, so when the nodes were read, the old node was used rather than the updated one in the ad-evolved package. Just deleting the old stand-alone node solved the issue for me. Also, great workflow!
I have the same problem :(
Please can you elaborate on how you solved it?
@lahalahaounijiang thank you! It works )
Here are some different motion clips if you're tired of the expanding donut look: https://civitai.com/posts/2011230
Note: they are 1024² and you'll want to set force_size to 512² if you are rolling with the default workflow.
Legend ❤ Ty, I'm sure others will find these useful!
Very nice workflow. I was having some problems with IPAdapter, but I worked that out. For people having a problem with IPAdapter: check that the file name is correct - sometimes you download a file twice and it gets a "(1)" in its name, so make sure to rename it back to the real name in that case; I had a problem with that. For the author of this nice workflow, or for somebody who knows: I have one question - I would like to try this workflow with a turbo SDXL model, but it gets stuck on the KSampler, so I probably must use another one. Does somebody have some advice on that? Thanks!
Is it possible for the output initial and end frames to be the exact same as the reference image? I want to use this as a transition between two frames.
I have the same needs as you.
Can someone help me with this error
Error occurred when executing VHS_LoadVideoPath: "B:\AI\ComfyUI_windows_portable\ComfyUI\input\Motion design.mp4" could not be loaded with cv.
File "B:\AI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-VideoHelperSuite\videohelpersuite\load_video_nodes.py", line 50, in cv_frame_generator raise ValueError(f"{video} could not be loaded with cv.")
+1
I'm not at my PC atm, but it seems like u are trying to load a local video instead of the one provided.
You could try the Load Video (Upload) node instead of the Path one and see if that works for u.
Error occurred when executing IPAdapterBatch:
'ModelPatcherAndInjector' object has no attribute 'get_model_object'
What does this error mean? Can you help me solve it? Thank you very much.
I have the same problem.
indeed it seems to be related to some change in ComfyUI; after updating ComfyUI it is working again
Very cool workflow, but I cannot run it, can anyone help me fix this issue?
Error occurred when executing IPAdapterBatch: Error(s) in loading state_dict for ImageProjModel: size mismatch for proj.weight: copying a param with shape torch.Size([3072, 1280]) from checkpoint, the shape in current model is torch.Size([3072, 1024]).
Update Comfy and all custom nodes. After that Reload the original workflow and double check all the green nodes with model names and make sure the names match.
hello, how can I make it so it syncs with audio? i don't see any audio node
It's a bit more advanced than simply Load audio and hit the button. You have many ways to extend the workflow by using some existing Custom Nodes, like AudioScheduler for example.
What I have done in some of my videos is use After Effects to create a "motion mask" video, which is then loaded into QRCode instead of the looping circles video.
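As a sketch of that idea in code rather than After Effects (an illustration only - it assumes OpenCV, NumPy, and librosa are installed, and "track.mp3" is a hypothetical stand-in for whatever audio you want to sync to), a mask video whose circle grows with the music's loudness:

import cv2
import librosa
import numpy as np

audio, sr = librosa.load("track.mp3")  # hypothetical input file
fps, size, frames = 12, 512, 96

# RMS loudness envelope, resampled to one value per video frame, normalized 0..1
rms = librosa.feature.rms(y=audio)[0]
env = np.interp(np.linspace(0, len(rms) - 1, frames), np.arange(len(rms)), rms)
env = (env - env.min()) / (np.ptp(env) + 1e-8)

out = cv2.VideoWriter("audio_mask.mp4", cv2.VideoWriter_fourcc(*"mp4v"),
                      fps, (size, size), isColor=False)
for v in env:
    frame = np.zeros((size, size), np.uint8)
    radius = int(20 + v * (size // 2 - 20))  # circle grows with loudness
    cv2.circle(frame, (size // 2, size // 2), radius, 255, thickness=-1)
    out.write(frame)
out.release()  # load the result via Load Video (Path) in place of the circles video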
@ipiv Thank you! Is there any way to re-create this workflow for SDXL? SD1.5 is so 2022 :D
@joroenevinen Currently, the AnimateDiff 1.5 motion model outperforms the SDXL beta version released by the authors
Great workflow. Having tons of fun with it. Thanks for sharing.
💙
That's insanely good - how does the zoom-in work? Is it because of the IPAdapters?
Check the Load Video Path node under the QRCode ControlNet group, then you will understand how it works.
Thanks! The QRCode ControlNet is driving the animation flow; the IPAdapters do the fading from one reference image to another
How to solve text with blurred faces?
Not sure what it is you are asking
Hello, this seems to be a great workflow, but after struggling with many errors, I can't get through this one:
AttributeError: module 'comfy.sample' has no attribute 'prepare_mask'
Googled it, but no answer... any idea? Thanks for the help.
Try updating Comfy and all your custom nodes - if the error persists after doing that, you might have some custom node installed that is not included in the workflow but is overriding default nodes and generating errors due to a recent Comfy update.
@ipiv Thanks so much for the answer IPIV, actually reinstalled a brand new Comfy, since updating didn't work either, and your workflow works perfectly, now! I still don't know where this error comes from, but it might be some conflict with another node I guess.
@mironmeow Glad you figured it out!
I have the same problem. I'll try to find what is the custom node causing this.
@AugmentedRealityCat I hope you succeed. I spent a bit of time building a workflow, and I just got to this post from Google literally right now LOL
Oh btw this is what I got in the log
[AnimateDiffEvo] - ERROR - Encountered AttributeError while attempting to restore functions - likely, an error occured while trying to save original functions before injection, and a more specific error was thrown by ComfyUI.
!!! Exception during processing !!!
Any luck solving this error? I've been stuck on it for the last 24 hours.
Okay, so I am using the portable version.
For me, this is what solved it:
You have to go into your comfyui folder
open terminal from there
Just install the appropriate version:
This will uninstall the current version
python_embeded\python.exe -m pip uninstall -y onnxruntime onnxruntime-gpu
And this will revert to the one that works
python_embeded\python.exe -m pip install onnxruntime==1.15.1
Stuff I also did, but did not work:
Apparently, idiot that I am, I clicked on the Update ComfyUI button in the Manager... don't do that. Not even if all those smarty-pants YouTube ComfyUI teachers tell you to do it, ffs. I honestly don't know why I clicked on it LOL. Anyways, it busted up my onnxruntime, because AnimateDiff and ReActor were running perfectly fine with version 1.15.1, and somehow I got a new or different version while updating, and then it broke all my workflows that involved AnimateDiff or ReActor. I don't really know the depth of these things, I just wanna make stupid things with AI, like putting my face on a dolphin LOL.
Anyways, other stuff I tried:
Given that mine is a portable version, it has its own Python, so I figured I would remove any other versions that were on my PC, because I downloaded some versions a long time ago and someone on a forum said removing them fixed this same problem for them, due to some path issues.
I also had a problem for the same reason a long time ago when I clicked on Update in the Manager (honestly, someone should just remove that button, I swear), and back then insightface broke for me. That was pretty easy to fix, I just had to reinstall it sort of the same way you see in the commands above.
I also tried to just reinstall onnxruntime, but I think that installs the updated version and not 1.15.1, so that didn't work.
There were also some instructions in my error message when starting up ComfyUI that I should upgrade something - check your log if you have that message. It literally gives you the command, so it's pretty easy to do: you just start a terminal from the ComfyUI folder, paste the command in, and it does everything automatically.
I also tried updating all my out-of-date custom nodes, especially uninstalling and reinstalling ReActor face swap and anything related to it, plus AnimateDiff and anything related to it. I also updated ComfyUI Manager.
NONE OF THESE WORKED... the only thing that fixed it was the command above.
@thavikingninja418 if you run the command "activate.bat" in your comfyui/venv/scripts folder before doing any commandline stuff, anything downloaded, installed, or uninstalled, will only affect that virtual environment that comfyUI is using (instead of your entire system).
Not sure if this is applicable to the portable version but using venvs (virtual environments) is essential for running multiple projects from Github etc.
i fixed this by updating the animatediff-evolved custom nodes
@dangerweenie Honestly, I'll take your word for it; for me it just did not do anything when I updated AnimateDiff. But I think it depends on so many things - we all have different systems, and the tiniest thing can trip us up. I know one thing: this works now LOL. And thanks for the advice, I will remember it for next time when I bust up my whole setup by clicking the wrong button LOL XD
Are there more motion masks available?
And which SD1.5 checkpoint gives better results?
Here are some more motion masks to use with QRCode - kindly provided by @Xenodimensional: https://civitai.com/posts/2011230
If you have some After Effects experience, you can create your own black-and-white video masks for your exact needs.
@ipiv How can I create my own ... I'm not good at this stuff 🥺
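For anyone who would rather script it than learn After Effects, here is a minimal sketch (assuming OpenCV and NumPy; this is an illustration, not how the bundled circles video was made) of a looping black-and-white rings mask:

import cv2
import numpy as np

fps, size, seconds = 12, 512, 8
frames = fps * seconds
yy, xx = np.mgrid[0:size, 0:size]
dist = np.sqrt((xx - size / 2) ** 2 + (yy - size / 2) ** 2)  # distance from center

out = cv2.VideoWriter("rings_mask.mp4", cv2.VideoWriter_fourcc(*"mp4v"),
                      fps, (size, size), isColor=False)
for f in range(frames):
    phase = f / frames  # 0..1 over the clip, so the last frame meets the first
    # concentric rings drifting outward, thresholded to pure black/white
    wave = np.sin((dist / 48.0 - phase) * 2 * np.pi)
    out.write(((wave > 0) * 255).astype(np.uint8))
out.release()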
@ipiv about your note "Not the case for my 3060 unfortunately..."
you can do this:
first, rebatch before the step that needs a lot of VRAM,
then second, convert the image list back to a batch again.
i do that when i upscale more than my vram can handle 😃
sorry about my english, i hope you will understand... good luck 🤞
@NoobFromEgypt @ipiv here is a screenshot of a procedural motion mask using the reference image depth map and creating batch images.
https://civitai.com/posts/2071548
@EpochEclat Thanks for sharing
Thanks for the great workflow. I'm currently testing the Upscaling part and the Upscale Image (using Model) is taking many many hours. Is this expected?
Current test is 96 frames, upscaled once with Upscale Image By x2.0, then going through Upscale Image (using Model) using 4x_NMKD-Siax_200k
I've got a Nvidia 4060 Ti with 8GB
For it to take many hours is not expected.
You must be running out of VRAM or something, try lowering the x2.0 to x1.5 or even lower and possibly bypass the Upscaling /w Model step.
If you are using any non-default launch parameters in the .bat file then try with just the default parameters ComfyUI comes with and see if it speeds up the process.
There are other possible issues but that's what comes to mind right away.
with an RTX 4090, 2x upscale is the key to awesome quality
How to solve:
Failed to validate prompt for output 219:
* VRAM_Debug 580:
- Required input is missing: input
Output will be ignored
???
Did you disconnect some node at the far top right of the workflow?
Things you can try:
Install Missing Custom Nodes via Comfy Manager
Reloading the original workflow
Update All via Comfy Manager
I had the same issue, but it's not causing any problems. Started all fresh for the first time today. If I figure it out, I'll comment!
How to fix this?😭 AttributeError: module 'comfy.model_base' has no attribute 'SDXL_instructpix2pix'
Hey, try reloading the original workflow and before hitting queue prompt - double check all the model names match. There are links to all the models used in Note nodes.
This is a SD1.5 workflow so models must be SD1.5 and not SDXL.
updating comfyUI fixed this particular error for me
This is so cool! Thanks IPIV for sharing this workflow. I was just wondering, is there a way to force the result to look closer to the 4 IPA pics? Even when I use the same model checkpoint and the same seed as the generation of the 4 pics, the animated video looks a lot different. I tried IPA PLUS, PLUS FACE... and tweaked things, but can't really get AnimateDiff to respect the look of the 4 pics. Thanks for any tips on this.
Was wondering the same; following the notes and increasing the weight on the IPAdapters (tried up to 5x) didn't seem to have the desired impact.
Using a model with high strength was not getting close either.
The model seems less willing to constrain its look on non-human images: buildings look completely different, whereas with humans we get quite a bit of resemblance to the reference image.
I have tried giving the IPAdapter an attention mask based on the depth of the reference image, which didn't result in a closer look either.
@ipiv thanks for the workflow and any ideas would be great.
This is amazing!!! Is there any tutorial on how to get started? I would love love love to play around with this!!
looking forward to trying this
do you know where i can get the four reference images you used in the workflow?
That's where you come in 😊 Get creative - try all kinds of images to morph between
So awesome!
Error occurred when executing KSampler:
'NoneType' object has no attribute 'shape'
What does this error mean? It has been bothering me for a long time. Can you help me with it?
Error occurred when executing IPAdapterBatch: 'ModelPatcherAndInjector' object has no attribute 'get_model_object'
I've tried several different images and get absolutely no where even close to what I've seen others do with this workflow. Although I really appreciate it being shared, never seeing what images others used in their workflows makes me question whether their end results are legitimate.
it would be harder to fake it than to do it with ai lol
@userno99 That's what people say but no one is willing to show an example workflow that includes what images and masks they used.
@LearningCreator because lazy, it's cutting edge rn, we're all busy fkn with it + when someone finds something that works they don't necessarily want to share it due to feelings of intellectual ownership
Is there a way to slow down or speed up the QR mask video in the workflow, or add a node to do that? ... to show that change in the animation
if you're using the circles video mask included with the workflow, you can do this to slow down the flow:
in the Load Video (Path) node, change force_rate to 24 (the mask video gets resampled at the higher rate, so the same motion is spread across more animation frames and appears slower)
@ipiv so it says 12, and if i change it to 24 it will be slow... then if i want to make it faster, would that be 6? and do those numbers need to be exactly 12 and 24, or can they be 18, 20, 21?
Please tell me your favorite checkpoints
this error always shows up, i can't fix it :(
module 'comfy.model_base' has no attribute 'SDXL_instructpix2pix'
you can't use XL models for animation AFAIK
@userno99 sure, i didn't use any SDXL model, but it still reports the error.
Hey, update your ComfyUI by going to ComfyUI Manager and clicking Update All.
For safety, you could create a snapshot before updating in case something breaks: Manager -> Snapshot Manager -> Save Snapshot.
Later, load that snapshot if the update breaks your Comfy for some reason.
@ipiv thx a lot, it works after pasting the CLIP Vision models in again, but the KSampler reports: object has no attribute 'repeat'
Amazing workflow, thanks for sharing! Is there an alternative to the RIFE VFI node? It doesn't support non-CUDA devices (like Apple). The recommended workaround with taichi doesn't seem to work. Thanks!
Thank u!
Sorry not sure about non-cuda alternatives but I know there's an executable u can use if you're comfortable with running a program through terminal: https://github.com/nihui/rife-ncnn-vulkan
In that repo you'll find Windows/Linux/MacOS executables for Intel/AMD/Nvidia gpus and instructions to run it.
@ipiv Thanks a lot. I'll check it out!
Adherence to original images varies a lot. A house with a stream and sunset works great, airplanes are terrible. People are good, artistic humanized dog is terrible. Any suggestions for getting better adherence to original images? I changed to high strength from default medium, but not much better. If users are getting animations that are close to your images, can you share your settings?
Hi! Your workflow works amazingly! Thanks for sharing! I want to change the IPAdapter model to the PLUS version. I put ip-adapter-plus_sd15.safetensors in the directory \ComfyUI\models\ipadapter, but I get the error "IPAdapter model not found". Should I rename this file similarly to how you renamed the others in your tutorial? If so, what name should I choose for renaming ip-adapter-plus_sd15.safetensors? Or am I doing something wrong?
Hi, thanks for the buzz btw!
Your file naming and folder path seem to be correct according to the documentation:
https://github.com/cubiq/ComfyUI_IPAdapter_plus?tab=readme-ov-file#installation
Not 100% sure what the cause of your issue is but to help debugging:
Have you changed any of the default model folder paths in some of the config files or are you using some custom node to do that?
In ComfyUI root directory, is there a file named "extra_model_paths.yaml" - without the ".example" at the end?
Can you double-check and re-download the model from the Manager or from the links in the GitHub documentation (ip-adapter-plus_sd15.safetensors)?
Make sure to not mistake FaceID models with regular models.
@mironovata Please help me! I have also encountered this problem. How did you solve it?
@qian11111 You need to download the file Plus version ip-adapter-plus_sd15.safetensors from https://github.com/cubiq/ComfyUI_IPAdapter_plus?tab=readme-ov-file#installation. And make sure that you definitely put it in ComfyUI\models\ipadapter . And double-check that you followed the instructions exactly.
Thank you, brother. I also want to ask why the videos I produce are very blurry
Amazing plug-and-play workflow. Thank you for sharing. How would you increase the output video duration? I tried changing the frames but that doesn't seem to work
Hey, thanks!
It's a bit more advanced, but what I can suggest is:
Look in the IPAdapter and Attention Mask Generation group - you have to change the mask frame numbers accordingly. If you want to add more images, you've got to duplicate the IPAdapter node with its mask generation and adjust the frame numbers according to your batch_count and desired fade durations (the sketch below shows the arithmetic involved).
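To make that concrete, a small sketch of the bookkeeping involved (the numbers here are examples, not the workflow's defaults):

def mask_keyframes(batch_count: int, num_images: int, fade: int) -> None:
    # For each reference image: the frames where its attention mask
    # should sit at 1.0, and where the fade to the next image happens.
    seg = batch_count // num_images
    for i in range(num_images):
        hold_start, hold_end = i * seg, (i + 1) * seg - fade
        print(f"image {i}: full from frame {hold_start} to {hold_end}, "
              f"fading out over frames {hold_end}-{hold_end + fade}")

mask_keyframes(batch_count=96, num_images=4, fade=12)
# Doubling the video length means doubling every keyframe:
mask_keyframes(batch_count=192, num_images=4, fade=24)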
Error occurred when executing VRAM_Debug: VRAM_Debug.VRAMdebug() got an unexpected keyword argument 'image_passthrough' XD
You can simply remove that node or bypass it. It's only needed for low VRAM cards to help them free up some VRAM before frame interpolation if they run out on the last step.
@ipiv Thank you very much. The problem has been resolved
@ipiv May I ask again why the video I generated is very blurry and cannot be seen clearly?
got prompt
[rgthree] Using rgthree's optimized recursive execution.
!!! Exception during processing !!!
Traceback (most recent call last):
File "D:\ComfyUI\execution.py", line 151, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\ComfyUI\execution.py", line 81, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\ComfyUI\execution.py", line 74, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: VRAM_Debug.VRAMdebug() got an unexpected keyword argument 'image_passthrough'
Prompt executed in 0.37 seconds
Hi, your workflow is amazing! However, I have a problem when I execute: "Error occurred when executing KSampler: module 'comfy.sample' has no attribute 'prepare_mask'". Could you help me figure it out?
other information:
File "D:\comfyui\ComfyUI-aki-v1.3\execution.py", line 151, in recursive_execute output_data, output_ui = get_output_data(obj, input_data_all) File "D:\comfyui\ComfyUI-aki-v1.3\execution.py", line 81, in get_output_data return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True) File "D:\comfyui\ComfyUI-aki-v1.3\execution.py", line 74, in map_node_over_list results.append(getattr(obj, func)(**slice_dict(input_data_all, i))) File "D:\comfyui\ComfyUI-aki-v1.3\nodes.py", line 1369, in sample return common_ksampler(model, seed, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, denoise=denoise) File "D:\comfyui\ComfyUI-aki-v1.3\nodes.py", line 1339, in common_ksampler samples = comfy.sample.sample(model, noise, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, File "D:\comfyui\ComfyUI-aki-v1.3\custom_nodes\ComfyUI-Impact-Pack\modules\impact\sample_error_enhancer.py", line 9, in informative_sample return original_sample(*args, **kwargs) # This code helps interpret error messages that occur within exceptions but does not have any impact on other operations. File "D:\comfyui\ComfyUI-aki-v1.3\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\sampling.py", line 273, in motion_sample function_injections.inject_functions(model, params) File "D:\comfyui\ComfyUI-aki-v1.3\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\sampling.py", line 205, in inject_functions self.orig_prepare_mask = comfy.sample.prepare_mask
I am a designer; I am not very familiar with coding and ComfyUI, sorry for asking you questions.
I have the same problem - is there no solution?
just update AnimateDiff through ComfyUI Manager
I met the same problem and solved it by downloading the animatediff-evolved node from GitHub and replacing the old one. Hope this can help you.
@1905928452513 Thanks a lot! I will try it!
@gptytpromo106 thank you! I will try it!
That's just amazing! Can anyone point me to a good tutorial for noobs on how I can install and run this? Is it possible to run it on a Hugging Face space or something? I got a shitty AMD card and a dream.
Here's a tutorial i made: https://youtu.be/mecA9feCihs ; You can try running it on the cloud on openart for free (link below) or pay for a cloud service like comfy.icu or runcomfy
https://openart.ai/workflows/abeatech/tutorial-morpheus---morphing-videos-using-text-or-images-txt2img2vid/fOrrmsUtKEcBfopPrMXi
@abeatech Help! I ran into this error while running it:
Error occurred when executing IPAdapterUnifiedLoader:
module 'comfy.model_base' has no attribute 'SDXL_instructpix2pix'
File "X:\ComfyUI-AI\ComfyUI-aki-v1.3\execution.py", line 151, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
File "X:\ComfyUI-AI\ComfyUI-aki-v1.3\execution.py", line 81, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "X:\ComfyUI-AI\ComfyUI-aki-v1.3\execution.py", line 74, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "X:\ComfyUI-AI\ComfyUI-aki-v1.3\custom_nodes\ComfyUI_IPAdapter_plus\IPAdapterPlus.py", line 449, in load_models
is_sdxl = isinstance(model.model, (comfy.model_base.SDXL, comfy.model_base.SDXLRefiner, comfy.model_base.SDXL_instructpix2pix))
Error occurred when executing VRAM_Debug: VRAM_Debug.VRAMdebug() got an unexpected keyword argument 'image_passthrough'
File "I:\comfyui\execution.py", line 74, in map_node_over_list results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
How to solve this error
Bypass or remove the VRAM Debug node.
i created a new VRAM Debug node and noticed the input names were different, so i removed the existing one, put the new one in its place, connected it the same way, made the settings match, and voila - it works great!
Hi, I've been playing around quite a bit with this workflow, thank you. But I'm having an issue: all my outputs seem blurry and I can't seem to find the cause. Could anyone recommend what I should modify to get a better output animation? Thank you.
Reload the original workflow and before hitting queue make sure to double check that you are loading the correct models. Names and links are next to the loader nodes.
@ipiv tried that too, but still blurry videos :/
When I use this workflow, the following error occurs when inputting a 512x512 image: Error occurred when executing IPAdapterBatch: Error(s) in loading state_dict for ImageProjModel: size mismatch for proj.weight: copying a param with shape torch.Size([3072, 1280]) from checkpoint, the shape in current model is torch.Size([3072, 1024]). I don't know where the problem lies. How did you solve similar problems when you encountered them?
Noob here, I got it to work, but what settings would be good for quick testing? I don't mind getting small resolution video.
First of all, thanks for sharing this piece of work!
I've downloaded and placed all the models, but I get this message:
"Error occurred when executing IPAdapterBatch: 'ModelPatcherAndInjector' object has no attribute 'get_model_object' "
What am I doing wrong?
Thanks again.
Nevermind... seems that I had something outdated... I did an "Update All" in ComfyUI Manager and now it's working.
Not deleting the post, just in case it's useful for someone ;)
Hmm, I get: pickle.UnpicklingError: Weights only load failed. Re-running torch.load with weights_only set to False will likely succeed, but it can result in arbitrary code execution. Do it only if you get the file from a trusted source. WeightsUnpickler error: Unsupported operand 64
Edit: I solved this. It turned out I got the wrong file and renamed it to AnimateLCM_sd15_t2v.ckpt, so whatever file that was could not be processed properly and led to this pretty random error.
replace your checkpoint and use SD 1.5
Hello, I think this is a super cool workflow. When I opened it following the tutorial, I got the prompts below. Some of the plugins I managed to install via Google, but some I couldn't even find through Google. I don't know if you have any tips for me. Thank you~
When loading the graph, the following node types were not found:
BatchCount+
VHS_SplitImages
SimpleMath+
VHS_LoadVideoPath
CreateFadeMaskAdvanced
VHS_VideoCombine
VRAM_Debug
RIFE VFI
FILM VFI
ComfyUI Manager -> Install Missing Custom Nodes -> Restart.
Might need Update All afterwards to make sure your Comfy and all other nodes are up to date.
@ipiv thanks for your reply! let me try
Great! Is there a way to keep the camera still (non-moving) during transitions? I don't know why the camera tries to rotate or move in/out - I need it still. I changed the QR motion mask, but the camera moves again.
Thanks bro! how to use SDXL with this workflow?
Switch out all models to SDXL versions (checkpoint, QRCode, IPAdapter, Vae, Motion model etc), change base resolution, adjust AnimateDiff beta_schedule to sdxl and adjust Ksampler settings according to your sdxl model.
Output probably won't be as good since currently SD1.5 Animatediff is better than the SDXL Beta version in my opinion.
Thanks for sharing this workflow. It's super easy to use, and really appreciate all the notes you've added throughout the flow!
Has someone successfully managed to make the reference images persistent? @ipiv
I installed everything exactly as shown in the workflow, but I'm getting this error:
Error occurred when executing IPAdapterBatch: Error(s) in loading state_dict for ImageProjModel: size mismatch for proj.weight: copying a param with shape torch.Size([3072, 1280]) from checkpoint, the shape in current model is torch.Size([3072, 1024]).
File "D:\Beeldbewerking\ComfyUI\ComfyUI\custom_nodes\ComfyUI_IPAdapter_plus\IPAdapterPlus.py", line 69, in __init__ self.image_proj_model.load_state_dict(ipadapter_model["image_proj"])
are you using an SDXL checkpoint instead of SD 1.5?
Excuse me, have you ever produced a video tutorial for this workflow?
Amazing work, thank you so much! Question... if I decide to skip the 3 videos/upscaling process, what does it mean exactly to "fix" the seed? What do we need to do after we're happy with the motion/look and want to upscale?
The KSampler node has the parameter "control_after_generate" - Randomize or Fixed
1. Keep seed Randomized and generate until u are happy with the preview
2. Fix the seed in KSampler
3. Unbypass the 3 video outputs
4. Queue it again to run the upscaling process without running the preview step again
I'm getting this error after running the prompt, any help or ideas would be highly appreciated, or let me know if you need more info (Using ComfyUI and RTX 4070)
Error occurred when executing VRAM_Debug: VRAM_Debug.VRAMdebug() got an unexpected keyword argument 'image_passthrough'
Hey, thanks for the great workflow. I'm running the workflow but only seeing the motion mask (the black and white circles), not any of the reference images. I've tried adjusting the weight and strength of the ControlNet/IPAdapter without really getting the images to show correctly. Other times, when I increase the IPAdapter weight, I just get a flat image without any recognizable shape. Do you have any recommendations?
Great workflow, but I get the below error when executing. Any idea how to fix it?
Error occurred when executing UpscaleModelLoader: 'NoneType' object has no attribute 'lower'
File "C:\ComfyUI\ComfyUI\comfy\utils.py", line 13, in load_torch_file if ckpt.lower().endswith(".safetensors"):
did you forget to load in an upscale model?
Error occurred when executing IPAdapterUnifiedLoader: module 'comfy.model_base' has no attribute 'SDXL_instructpix2pix' - Help! Updating ComfyUI through the Manager doesn't fix this error, it still fails!
try updating all via the comfyui manager
https://github.com/comfyanonymous/ComfyUI/issues/3236
any fix for the following error? :
Error occurred when executing VHS_VideoCombine: [Errno 22] Invalid argument
File "D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-VideoHelperSuite\videohelpersuite\nodes.py", line 119, in ffmpeg_process proc.stdin.write(frame_data)
Excuse me, where do I download the two plugins shown in red below? I can't find them in the Manager or through search engines. Could someone post the download links?
IPAdapterBatch
IPAdapterUnifiedLoader
ComfyUI Manager -> Install Missing Custom Nodes -> ComfyUI_IPAdapter_plus -> Install
Or on Github: cubiq/ComfyUI_IPAdapter_plus
I have already installed the ComfyUI_IPAdapter_plus plugin, but every time I open the workflow it prompts that these two are missing @ipiv
I ran into the same problem - even with the plugin installed it still throws the error.
Can anybody change this to work with SDXL please?
Yes, please!
Additionally, if someone is going to implement this, I recommend using turbo/lightning models, such as: https://civitai.com/models/112902?modelVersionId=351306
From my testing, this will result in degraded animation and overall motion quality.
Currently, you'll get way better/coherent output using SD1.5 motion models (v3, v2, lcm) rather than the SDXL beta version of AnimateDiff.
@ipiv Thank you. Any tips on whether it's possible to create a good high-res animation with this 1.5 approach? (for example, how to connect NNLatent upscaling)
@tomas647 You could change starting resolution to 512x910 for example and increase High-res fix multiplier even further (2nd KSampler). But keep in mind almost all examples I posted in the gallery started with 288x512 base resolution, upscaled to 1080x1920.
VAE Decode -> VAE Encode upscaling (the process currently used in the workflow) preserves quality and detail better than any latent upscaling for high-res fix. Source: NNLatentUpscale GitHub
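For the curious, the pixel-space round trip that this high-res fix performs looks roughly like the following sketch - written with diffusers' AutoencoderKL purely for illustration; the workflow itself uses ComfyUI's VAE Decode/Encode nodes, not this code:

import torch
import torch.nn.functional as F
from diffusers import AutoencoderKL

vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse")

@torch.no_grad()
def hires_fix_latents(latents: torch.Tensor, scale: float = 2.0) -> torch.Tensor:
    # 1. VAE Decode: latents -> pixels (8x spatial)
    pixels = vae.decode(latents / vae.config.scaling_factor).sample
    # 2. Upscale in pixel space, where detail survives better than in latent space
    pixels = F.interpolate(pixels, scale_factor=scale, mode="bilinear")
    # 3. VAE Encode: back to latents for the second, low-denoise KSampler pass
    return vae.encode(pixels).latent_dist.sample() * vae.config.scaling_factor

big = hires_fix_latents(torch.randn(1, 4, 36, 64))  # 288x512 -> 576x1024 pixels
print(big.shape)  # torch.Size([1, 4, 72, 128])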
Error occurred when executing UpscaleModelLoader: 'NoneType' object has no attribute 'lower'
how can i resolve this?
Have you installed "4x_NMKD-Siax_200k" via ComfyUI Manager?
Manager -> Install Models -> Search for "4x_NMKD-Siax_200k" -> Install
After installed click Refresh and make sure you load it in Load Upscale Model node
how can i switch to 1920x1080 16:9 video?
Flip the width and height in Settings group (512x288) and Upscale /w Model group to get a 1920x1080 video output
Hi, sorry, I'm a noob at making videos, I'm only used to making pictures. Can you please recommend which program you use to create videos like this? Thank you
It is done by using this workflow for ComfyUI. It can be installed in one click via Pinokio https://pinokio.computer/ , or just see the ComfyUI GitHub https://github.com/comfyanonymous/ComfyUI and follow the installation guides, then load this workflow. Also, there's a decent tutorial for this workflow: https://youtu.be/mecA9feCihs . All in all, there are two main UIs for graphic synthesis, Automatic1111 and Comfy, where you can install extensions for video/animation such as Deforum/AnimateDiff/MultiFrameRender/EbSynth, etc. Hope it will help!
@Zodiak Thank you so much, you are very kind. I'm going to practice 😃😃
I got this error... what can I do?
VRAM_Debug.VRAMdebug() got an unexpected keyword argument 'image_passthrough'
I'm getting
if sub_idxs is not None and self.orig_img_latents.size(0) >= full_length: AttributeError: 'NoneType' object has no attribute 'size'
On the Ksampler.
Tried messing around with the latent a bit to no avail.
I can't fix this error on the Apply Advanced ControlNet node - trying to figure out if it's about model locations, but everything seems ok
Error occurred when executing ACN_AdvancedControlNetApply: 'NoneType' object has no attribute 'copy'
how can i extend the duration and slow down the video?
Everything works like magic. However, despite using LCM models, the results I get are too burnt, dark, as if there's a high cfg value. Additionally, when I increase the Motion scale value, I receive even darker/burnt results. How can I overcome this? Thank you very much.
I had the same problem; then I experimented with the KSamplers on both sides. As soon as I set them to euler or euler a and adjusted cfg and steps up a bit, it went away. I haven't tried others yet, but I bet they can be good too, so just experiment with the KSamplers a little.
help me please!!
Error occurred when executing VHS_LoadVideoPath: https://i.imgur.com/FZojh3v.mp4 could not be loaded with cv.
Open that link, save the video in it to the ComfyUI root directory, then change the video path to FZojh3v - Imgur.mp4
Make sure you right-click the video and copy the video path; it should end in .mp4
Hi! I switched to Comfy from Auto1111 recently, and maybe it's an obvious question, but how do I use prompts instead of image references? IPAdapter does not always do well, especially when I use a LoRA
module 'comfy.sample' has no attribute 'prepare_mask'
And I'm stuck
update comfyui
Hey, hello. How do I solve the following problem? Thank you.
Error occurred when executing VRAM_Debug: VRAM_Debug.VRAMdebug() got an unexpected keyword argument 'image_passthrough'
Hey, you can remove the VRAM Debug node altogether, or if you're on low VRAM you can right-click the node -> Fix Node (Recreate) and reconnect the input and output.
Error occurred when executing VRAM_Debug: VRAM_Debug.VRAMdebug() got an unexpected keyword argument 'image_passthrough'
I put the node in "Bypass" ...
I'm confused on why the model is only 8kb. I'm trying to use this with Automatic1111 but I can't seem to find the model file anywhere
It's not a model but a workflow for ComfyUI
IPAdapterBatch
IPAdapterUnifiedLoader
Has anyone successfully generated an animation? Can you tell me where to download the two plugins above
Hi guys, can anybody help me? In Load LoRA this error appears:
Error occurred when executing LoraLoaderModelOnly: LoraLoaderModelOnly.load_lora_model_only() missing 1 required positional argument: 'lora_name'
'AnimateLCM_sd15_t2v.ckpt'
Have you downloaded this file from the comments?
i just downloaded it, but how do i use it?
Hello, I have been trying to generate a nice animation, but somehow mine doesn't have vibrant colors like the ones people share here. Can anyone share a workflow?
maybe because you don't use a VAE, or because your VAE doesn't fit well with your model
@UnconvAI I think I need to add image sharpen and color correct
@UnconvAI that solved the problem for me. @EllieMia I had the same problem; choosing vae-ft-MSE-840000-ema-pruned fixed it and the colors are vibrant.
@omphteliba my problem was solved after updating ipvp, which did the color correction
Hi,
I am getting below error at the sampler node.
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu!
Issue gets resolved after I change denoise to zero but the image is blank.
Thanks in advance.
if your VRAM is 6GB, it won't work - a minimum of 8GB VRAM is required for AnimateDiff
Thanks for the reply. Appreciate it.
I'm wondering if this error is because my images might be different resolutions? It's a KSampler error:
Error occurred when executing KSampler: mat1 and mat2 shapes cannot be multiplied (1232x2048 and 768x320)
File "C:\LocalAI\COMFYUI\ComfyUI_windows_portable\ComfyUI\comfy\cldm\cldm.py", line 305, in forward h = module(h, emb, context)
File "C:\LocalAI\COMFYUI\ComfyUI_windows_portable\ComfyUI\comfy\ldm\modules\attention.py", line 404, in forward k = self.to_k(context)
"C:\LocalAI\COMFYUI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1527, in callimpl return forward_call(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\LocalAI\COMFYUI\ComfyUI_windows_portable\ComfyUI\comfy\ops.py", line 52, in forward return super().forward(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\LocalAI\COMFYUI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\linear.py", line 114, in forward return F.linear(input, self.weight, self.bias) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError raised in KSampler (node 80).
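The multiply that fails here is inside the ControlNet's cross-attention: the text conditioning coming from the checkpoint is 2048 features wide, but the ControlNet's to_k projection expects 768. A 768-wide conditioning is characteristic of SD1.5 models, while 2048 is SDXL's (CLIP-L's 768 plus CLIP-bigG's 1280, concatenated), so an error with these numbers usually means an SDXL checkpoint was loaded into this SD1.5 workflow, or an SDXL ControlNet was paired with an SD1.5 checkpoint. As a rough illustration (not the workflow's actual code - the layer sizes are simply taken from the error message above), a minimal PyTorch sketch reproduces the same failure:

import torch
import torch.nn as nn

# Stand-in for an SD1.5-sized cross-attention key projection:
# it maps 768-dim text conditioning to 320 attention channels.
to_k = nn.Linear(768, 320, bias=False)

# SDXL-style conditioning is 2048-dim, so pushing it through the
# SD1.5-sized layer triggers the same shape mismatch as above:
context = torch.randn(1232, 2048)
k = to_k(context)
# RuntimeError: mat1 and mat2 shapes cannot be multiplied (1232x2048 and 768x320)

If your error shows these dimensions, check that the checkpoint, the QRCode ControlNet, and the IPAdapter are all SD1.5 models - the workflow is built around SD1.5 and the AnimateDiff v3 motion model, so mixing in SDXL components will fail exactly like this.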