CivArchive
    ipiv's Morph - img2vid AnimateDiff LCM / Hyper-SD - v2.0 (4 Reference images)
    NSFW

    Workflow for generating morph style looping videos.

    v3: Hyper-SD implementation - allows us to use the AnimateDiff v3 motion model with DPM and other samplers. It seems to result in improved quality, overall color, and animation coherence.


    Uses QRCode Controlnet to guide the animation flow, morphing between the reference images is done via IPAdapter attention masks.

    Here are some more motion masks to use with QRCode - kindly provided by @Xenodimensional: https://civarchive.com/posts/2011230

    ❗If you are getting an error message CLIP Vision model not found: you are likely missing the CLIP Vision models. Download the two models linked in the Note node, rename them as instructed, and place them in the /ComfyUI/models/clip_vision folder.

    ❗If you are getting an error message IPAdapter model not found:

    You are likely missing the IPAdapter model. In ComfyUI Manager Menu click Install Models - Search for ip-adapter_sd15_vit-G.safetensors and click Install.

    If installing through Manager doesn't work for some reason you can download the model from Huggingface and drop it into \ComfyUI\models\ipadapter folder.

    ViT-G model is what I used in the workflow but I suggest you try out other IPAdapter models as well.

    Description

    Now supports 4 Reference images to morph between.

    FAQ

    Comments (228)

    DeverApr 3, 2024· 5 reactions
    CivitAI

    Hyped to try version 2, had a lot of fun with version 1, good job.

    ipiv
    Author
    Apr 3, 2024· 4 reactions

    Ty @Dever 💙 Don't get your hopes up too high, it's a pretty minor update. More about showcasing how to include more reference images to morph through. One day I'll dive in and do the math to make it more modular. Soon™

    More reference images were a much-requested addition to the workflow.

    DeverApr 3, 2024· 1 reaction

    @ipiv still hyped cause I couldn't be bothered to do it so in the meantime I learned how to transition between 4-5 separate renders in DaVinci haha

    freddypeters381Apr 4, 2024
    CivitAI

    Wow, but I can't solve this error!

    Error occurred when executing IPAdapterUnifiedLoader: IPAdapter model not found.

    File "D:\Tools\ComfyUI\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute
        output_data, output_ui = get_output_data(obj, input_data_all)
    File "D:\Tools\ComfyUI\ComfyUI_windows_portable\ComfyUI\execution.py", line 81, in get_output_data
        return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
    File "D:\Tools\ComfyUI\ComfyUI_windows_portable\ComfyUI\execution.py", line 74, in map_node_over_list
        results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
    File "D:\Tools\ComfyUI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_IPAdapter_plus\IPAdapterPlus.py", line 393, in load_models
        raise Exception("IPAdapter model not found.")

    ipiv
    Author
    Apr 4, 2024

    No worries at all, happy to help!

    You are missing the IPAdapter model. In ComfyUI Manager Menu click Install Models - Search for ip-adapter_sd15_vit-G.safetensors and click Install.

    If installing through Manager doesn't work for some reason you can download the model from Huggingface and drop it into \ComfyUI\models\ipadapter folder.

    vit-G model is what I used in the workflow but I suggest you try out other IPAdapter models as well.

    freddypeters381Apr 4, 2024

    @ipiv Thanks but I have it already...

    freddypeters381Apr 4, 2024

    @ipiv sorry I mean, I have the ip-adapter_sd15_vit-G.safetensors installed but still that error...

    @freddypeters381 did you get it working? I had same issue - but used the links in the workflow to (re) download and rename the clip_vision models and then it worked beautifully for me.

    toteprty811Apr 4, 2024
    CivitAI
    Spectacular work, really thank you for this. What should I do to get a longer video?
    SmillJokeApr 4, 2024· 1 reaction
    CivitAI

    This is Amazing

    Blade3dApr 4, 2024
    CivitAI

    Can't seem to get anthropomorphic Doberman-dog-type images to adhere to their original design, as this workflow wants to change them into human-like characters. I changed the IPAdapter to high strength... it helps, but not great. IPAdapter weight can't go above 1. Any suggestions to get the animation to look closer to the original images? (example images) https://drive.google.com/file/d/1ExDh3rj_eAXu0q-1-AQNnkVbV14tM4kS/view?usp=drive_link

    https://drive.google.com/file/d/1EnN_F6YGL9AwHKvyP8Ez2DO1sUU9eurn/view?usp=drive_link

    WhatDaFAIApr 4, 2024· 1 reaction
    CivitAI

    Thank you again for this great work. It's my favorite workflow for animations❤️

    FaniloApr 5, 2024
    CivitAI

    Amazing work, but I have a question: every time I try it, the video seems too fast and too short. Is there a way to make it slower?

    XenodimensionalApr 6, 2024

    Increase the multiplier on the RIFE VFI node

    jakeca520327Apr 5, 2024
    CivitAI

    When I input four reference images, I get an error: Error(s) in loading state_dict for ImageProjModel: size mismatch for proj.weight: copying a param with shape torch.Size([3072, 1280]) from checkpoint, the shape in current model is torch.Size([3072, 1024]). Does this workflow have requirements for the input reference images?

    XenodimensionalApr 6, 2024

    512²

    ipiv
    Author
    Apr 6, 2024

    There's no requirement; IPAdapter resizes the images automatically. It crops them in the center to a square resolution. (If the main focus of the picture is not in the middle, the result might not be what you are expecting.)

    But the error often happens when you are mixing SD1.5 and SDXL models. It might indicate you are loading an incorrect model, such as a wrong CLIP Vision model, in your Comfy folders. Re-download and rename according to the description/Note node in the workflow.
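    The center crop described above behaves roughly like this sketch. This is a plain-Python illustration of the idea, not IPAdapter's actual implementation:

```python
def center_crop_square(frame):
    """Center-crop a frame (a list of pixel rows) to a square.

    The shorter side sets the crop size and the crop is centered,
    so off-center subjects can get cut out - as noted above.
    """
    h = len(frame)
    w = len(frame[0])
    side = min(h, w)
    top = (h - side) // 2
    left = (w - side) // 2
    return [row[left:left + side] for row in frame[top:top + side]]
```

    For a 4x6 frame this keeps the middle 4x4 region, discarding one column on each side - which is why a subject near the edge may disappear from the reference.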

    52570005713Apr 5, 2024· 16 reactions
    CivitAI

    I need a video tutorial, thanks

    addrainApr 5, 2024
    CivitAI

    Hi, thank you very much for workflow.

    I have a question about those two clip models:

    CLIP-ViT-H-14-laion2B-s32B-b79K.safetensors

    CLIP-ViT-bigG-14-laion2B-39B-b160k.safetensors "download and rename"

    What exact node/nodes uses them?

    in /clip_vision/ folder I have those 3 models:
    1. clip_vision_g.safetensors
    2. CLIP-ViT-bigG-14-laion2B-39B-b160k.safetensors
    3. CLIP-ViT-H-14-laion2B-s32B-b79K.safetensors

    and the IP Adapter Unified Loader uses only "clip_vision_g.safetensors"

    ipiv
    Author
    Apr 5, 2024

    Hey,
    IPAdapter uses only those 2 Clip Vision models I have linked in description and also in the Note node.

    Source: https://github.com/cubiq/ComfyUI_IPAdapter_plus?tab=readme-ov-file#installation

    You should be able to safely delete that "clip_vision_g.safetensors" file - unless some other Custom Node requires that naming.

    vidokkApr 6, 2024

    hey i tried that and it still brings me to ipadapter not found?

    mingweixueApr 7, 2024

    @ipiv rename these 2 files to what?sir

    missrobot99May 2, 2024

    I also keep getting this error, despite having downloaded, renamed and placed both clipvision models correctly :(

    billbApr 5, 2024· 1 reaction
    CivitAI

    Awesome, can't tell you how much I appreciate it. Very helpful, thanks!

    imcybearpunkApr 6, 2024
    CivitAI

    hi! i just tried the workflow, i can tell i can get amazing results but, im getting this error in Ksampler:

    Error occurred when executing KSampler: integer division or modulo by zero

    File "C:\Ai\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute
        output_data, output_ui = get_output_data(obj, input_data_all)
    File "C:\Ai\ComfyUI_windows_portable\ComfyUI\execution.py", line 81, in get_output_data
        return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
    File "C:\Ai\ComfyUI_windows_portable\ComfyUI\execution.py", line 74, in map_node_over_list
        results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
    File "C:\Ai\ComfyUI_windows_portable\ComfyUI\nodes.py", line 1369, in sample
        return common_ksampler(model, seed, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, denoise=denoise)
    File "C:\Ai\ComfyUI_windows_portable\ComfyUI\nodes.py", line 1339, in common_ksampler
        samples = comfy.sample.sample(model, noise, steps, cfg, sampler_name, scheduler, positive, negative, latent_image,
    File "C:\Ai\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Impact-Pack\modules\impact\sample_error_enhancer.py", line 9, in informative_sample
        return original_sample(*args, **kwargs)
    File "C:\Ai\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\sampling.py", line 376, in motion_sample
        latents = wrap_function_to_inject_xformers_bug_info(orig_comfy_sample)(model, noise, *args, **kwargs)
    File "C:\Ai\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\utils_model.py", line 377, in wrapped_function
        return function_to_wrap(*args, **kwargs)
    File "C:\Ai\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Advanced-ControlNet\adv_control\control_reference.py", line 47, in refcn_sample
        return orig_comfy_sample(model, *args, **kwargs)
    File "C:\Ai\ComfyUI_windows_portable\ComfyUI\comfy\sample.py", line 37, in sample
        samples = sampler.sample(noise, positive, negative, cfg=cfg, latent_image=latent_image, start_step=start_step, last_step=last_step, force_full_denoise=force_full_denoise, denoise_mask=noise_mask, sigmas=sigmas, callback=callback, disable_pbar=disable_pbar, seed=seed)
    File "C:\Ai\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 755, in sample
        return sample(self.model, noise, positive, negative, cfg, self.device, sampler, sigmas, self.model_options, latent_image=latent_image, denoise_mask=denoise_mask, callback=callback, disable_pbar=disable_pbar, seed=seed)
    File "C:\Ai\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 657, in sample
        return cfg_guider.sample(noise, latent_image, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)
    File "C:\Ai\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 644, in sample
        output = self.inner_sample(noise, latent_image, device, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)
    File "C:\Ai\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 623, in inner_sample
        samples = sampler.sample(self, sigmas, extra_args, callback, noise, latent_image, denoise_mask, disable_pbar)
    File "C:\Ai\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 534, in sample
        samples = self.sampler_function(model_k, noise, sigmas, extra_args=extra_args, callback=k_callback, disable=disable_pbar, **self.extra_options)
    File "C:\Ai\ComfyUI_windows_portable\python_embeded\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
        return func(*args, **kwargs)
    File "C:\Ai\ComfyUI_windows_portable\ComfyUI\comfy\k_diffusion\sampling.py", line 745, in sample_lcm
        denoised = model(x, sigmas[i] * s_in, **extra_args)
    File "C:\Ai\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 272, in __call__
        out = self.inner_model(x, sigma, model_options=model_options, seed=seed)
    File "C:\Ai\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 610, in __call__
        return self.predict_noise(*args, **kwargs)
    File "C:\Ai\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 613, in predict_noise
        return sampling_function(self.inner_model, x, timestep, self.conds.get("negative", None), self.conds.get("positive", None), self.cfg, model_options=model_options, seed=seed)
    File "C:\Ai\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\sampling.py", line 421, in evolved_sampling_function
        cond_pred, uncond_pred = sliding_calc_cond_uncond_batch(model, cond, uncond_, x, timestep, model_options)
    File "C:\Ai\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\sampling.py", line 537, in sliding_calc_cond_uncond_batch
        sub_cond_out, sub_uncond_out = comfy.samplers.calc_cond_batch(model, [sub_cond, sub_uncond], sub_x, sub_timestep, model_options)
    File "C:\Ai\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 192, in calc_cond_batch
        c['control'] = control.get_control(input_x, timestep_, c, len(cond_or_uncond))
    File "C:\Ai\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Advanced-ControlNet\adv_control\utils.py", line 536, in get_control_inject
        return self.get_control_advanced(x_noisy, t, cond, batched_number)
    File "C:\Ai\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Advanced-ControlNet\adv_control\control.py", line 32, in get_control_advanced
        return self.sliding_get_control(x_noisy, t, cond, batched_number)
    File "C:\Ai\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Advanced-ControlNet\adv_control\control.py", line 63, in sliding_get_control
        self.cond_hint = broadcast_image_to(self.cond_hint, x_noisy.shape[0], batched_number)
    File "C:\Ai\ComfyUI_windows_portable\ComfyUI\comfy\controlnet.py", line 26, in broadcast_image_to
        tensor = torch.cat([tensor] * (per_batch // tensor.shape[0]) + [tensor[:(per_batch % tensor.shape[0])]], dim=0)

    It has something to do with the init video: if I run 30 frames it works, but if I want to run the whole video it crashes :c.

    I hope you can help me, or tell me if there is a tutorial for this workflow :D

    ipiv
    Author
    Apr 6, 2024

    Hmm... if it runs fine for a 30-frame animation, it means you should have all the correct models loaded.
    Make sure you haven't accidentally changed a value somewhere.

    Also, you might be running out of VRAM. Check your VRAM usage during execution. The starting resolution is quite low already, and if it runs out of VRAM during the preview step your options are unfortunately limited.

    TrivikaApr 6, 2024· 2 reactions
    CivitAI

    Hello! Please tell me how to deal with this error. Yesterday it was not there and the workflow was working. Today, an error appears when trying to generate:

    Error occurred when executing KSampler: module 'comfy.sample' has no attribute 'prepare_mask'

    TrivikaApr 6, 2024

    Sorry to bother you, the reason for the error was an AnimateDiff update.

    ipiv
    Author
    Apr 6, 2024

    @Trivika Thanks for the heads up - I did a full update of Comfy and all the custom nodes used in this workflow and it went smoothly. None of the nodes, including AnimateDiff, broke for me, so it seems an update to this workflow is not needed.

    UpasundishApr 7, 2024

    @ipiv I received the same error as @Trivika. It seemed to stem from the fact that I had installed the AnimateLCM custom node before it was implemented into the AnimateDiff package. So when the nodes were read, it used the old node rather than the updated one in the AD-Evolved package. Just deleting the old stand-alone node solved the issue for me. Also, great workflow!

    str_bboy903Apr 7, 2024

    I have the same problem :(

    Please can you elaborate on how you solved it?

    str_bboy903Apr 8, 2024

    @lahalahaounijiang thank you! It works )

    XenodimensionalApr 6, 2024· 6 reactions
    CivitAI

    Here are some different motion clips if you're tired of the expanding donut look: https://civitai.com/posts/2011230

    Note: they are 1024² and you'll want to set force_size to 512² if you are rolling with the default workflow.

    ipiv
    Author
    Apr 6, 2024

    Legend ❤ Ty, I'm sure others will find these useful!

    vidokkApr 6, 2024
    CivitAI

    Very nice workflow. I was having some problems with IPAdapter, but I worked it out. For people having a problem with IPAdapter: check that the file name is correct. Sometimes you download a file twice and it gets a "(1)" suffix, so rename it to the real name in that case; I was having a problem with that. For the author of this nice workflow, or for anyone who knows: I would like to try this workflow with a Turbo SDXL model, but it gets stuck on the KSampler, so I probably must use another one. Does anybody have some advice on that? Thanks!

    erik8lrl938Apr 7, 2024· 1 reaction
    CivitAI

    Is it possible for the output initial and end frames to be the exact same as the reference image? I want to use this as a transition between two frames.

    lanhanyue1996663Apr 9, 2024

    I have the same needs as you.

    r00vyApr 8, 2024
    CivitAI

    Can someone help me with this error?

    Error occurred when executing VHS_LoadVideoPath: "B:\AI\ComfyUI_windows_portable\ComfyUI\input\Motion design.mp4" could not be loaded with cv.

    File "B:\AI\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute
        output_data, output_ui = get_output_data(obj, input_data_all)
    File "B:\AI\ComfyUI_windows_portable\ComfyUI\execution.py", line 81, in get_output_data
        return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
    File "B:\AI\ComfyUI_windows_portable\ComfyUI\execution.py", line 74, in map_node_over_list
        results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
    File "B:\AI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-VideoHelperSuite\videohelpersuite\load_video_nodes.py", line 252, in load_video
        return load_video_cv(**kwargs)
    File "B:\AI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-VideoHelperSuite\videohelpersuite\load_video_nodes.py", line 126, in load_video_cv
        (width, height, fps, duration, total_frames, target_frame_time) = next(gen)
    File "B:\AI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-VideoHelperSuite\videohelpersuite\load_video_nodes.py", line 50, in cv_frame_generator
        raise ValueError(f"{video} could not be loaded with cv.")

    CedricFerrisApr 8, 2024

    +1

    ipiv
    Author
    Apr 8, 2024

    I'm not at my PC atm, but it seems like you are trying to load a local video instead of the one provided.

    You could try the Load Video (Upload) node instead of the Path one and see if that works for you.

    huizhiyuan288205Apr 8, 2024· 3 reactions
    CivitAI

    Error occurred when executing IPAdapterBatch:

    'ModelPatcherAndInjector' object has no attribute 'get_model_object'

    What does this error mean? Can you help me solve it? Thank you very much.

    dxshouxi986Apr 8, 2024

    I have the same problem.

    dxshouxi986Apr 9, 2024

    Indeed, it seems to be related to some change in ComfyUI; after updating ComfyUI it is working again.

    xushuai2018820Apr 8, 2024
    CivitAI

    Very cool workflow, but I cannot run it. Can anyone help me fix this issue?

    Error occurred when executing IPAdapterBatch: Error(s) in loading state_dict for ImageProjModel: size mismatch for proj.weight: copying a param with shape torch.Size([3072, 1280]) from checkpoint, the shape in current model is torch.Size([3072, 1024]).

    ipiv
    Author
    Apr 9, 2024· 1 reaction

    Update Comfy and all custom nodes. After that, reload the original workflow and double-check all the green nodes with model names to make sure the names match.

    joroenevinenApr 8, 2024
    CivitAI

    hello, how can I make it so it syncs with audio? i don't see any audio node

    ipiv
    Author
    Apr 9, 2024

    It's a bit more advanced than simply loading audio and hitting the button. You have many ways to extend the workflow using existing custom nodes, like AudioScheduler for example.
    What I have done in some of my videos is use After Effects to create a "motion mask" video, which is then loaded into QRCode instead of the looping circles video.

    joroenevinenApr 11, 2024

    @ipiv Thank you! Is there any way to re-create this workflow for SDXL? SD1.5 is so 2022 :D

    ipiv
    Author
    Apr 11, 2024

    @joroenevinen Currently, the AnimateDiff 1.5 motion model outperforms the SDXL beta version released by the authors.

    jefharrisApr 8, 2024· 1 reaction
    CivitAI

    Great workflow. Having tons of fun with it. Thanks for sharing.

    ipiv
    Author
    Apr 9, 2024

    💙

    _Dream_Making_Apr 8, 2024
    CivitAI

    That's insanely good. How does the zoom in work? Is it because of the IPAdapters?

    ertingbirdApr 8, 2024

    Check the Load Video Path node under the QRCode ControlNet group and you will understand how it works.

    ipiv
    Author
    Apr 9, 2024

    Thanks. The QRCode ControlNet is driving the animation flow; the IPAdapters are doing the fading from one reference image to another.

    52570005713Apr 8, 2024
    CivitAI

    How to solve text with blurred faces?

    ipiv
    Author
    Apr 9, 2024

    Not sure what it is you are asking.

    mironmeowApr 9, 2024· 3 reactions
    CivitAI

    Hello, this seems to be a great workflow, but after struggling with many errors, I can't get through this one:

    AttributeError: module 'comfy.sample' has no attribute 'prepare_mask'

    Googled it, but no answer... any idea? Thanks for the help.

    ipiv
    Author
    Apr 9, 2024

    Try updating Comfy and all your custom nodes. If the error persists after doing that, you might have some custom node installed that is not included in the workflow but is overriding default nodes and generating errors due to a recent Comfy update.

    mironmeowApr 9, 2024· 1 reaction

    @ipiv Thanks so much for the answer, ipiv! I actually reinstalled a brand-new Comfy, since updating didn't work either, and your workflow works perfectly now! I still don't know where this error comes from, but it might be some conflict with another node, I guess.

    ipiv
    Author
    Apr 9, 2024

    @mironmeow Glad you figured it out!

    I have the same problem. I'll try to find which custom node is causing this.

    @AugmentedRealityCat I hope you succeed. I spent a bit of time building a workflow, and I just got to this post from Google, literally right now LOL

    Oh btw this is what I got in the log

    [AnimateDiffEvo] - ERROR - Encountered AttributeError while attempting to restore functions - likely, an error occured while trying to save original functions before injection, and a more specific error was thrown by ComfyUI.

    !!! Exception during processing !!!

    raptilerecords342Apr 10, 2024

    Any luck solving this error? I've been stuck on it for the last 24 hours.

    thavikingninja418Apr 10, 2024

    Okay, so I am using the portable version.

    For me, this is what solved it:

    Go into your ComfyUI folder, open a terminal from there, and install the appropriate version.

    This will uninstall the current version:
    python_embeded\python.exe -m pip uninstall -y onnxruntime onnxruntime-gpu

    And this will revert to the one that works:
    python_embeded\python.exe -m pip install onnxruntime==1.15.1

    Stuff I also did, but did not work:

    Apparently, idiot that I am, I clicked on Update ComfyUI in the Manager... don't do that. Not even if all those smarty-pants YouTube ComfyUI teachers tell you to do it, ffs. I honestly don't know why I clicked on it LOL. Anyway, it busted up my onnxruntime, because AnimateDiff and Reactor were running perfectly fine with version 1.15.1, and somehow I got a new or different version while updating, and then it broke all my workflows that involved AnimateDiff or Reactor. I don't really know the depths of these things, I just wanna make stupid things with AI, like putting my face on a dolphin LOL.

    Anyways other stuff I tried.

    Given that mine is a portable version, it has its own Python, so I figured I would remove any other versions that were on my PC, because I downloaded some versions a long time ago, and someone on a forum said removing them fixed this same problem for them because of some path issues.

    I also had a problem for the same reason a long time ago when I clicked on the update in Manager (honestly, someone should just remove that button, I swear), and back then insightface broke for me. That was pretty easy to fix; I just had to reinstall it, sort of the same way as the commands above.

    I also tried to just reinstall onnxruntime, but I think it installs the updated version and not 1.15.1, so that didn't work.

    There were some instructions in my error message when starting up ComfyUI that I should upgrade something. Check your console if you have that message; it literally gives you the command, so it's pretty easy to do. You just start a terminal from the ComfyUI folder, paste the command in, and it does everything automatically.

    I also tried to update all my out-of-date custom nodes, especially uninstalling and reinstalling ReactorFaceSwap and anything related to it, and AnimateDiff and anything related to it. I also updated ComfyUI Manager.

    NONE OF THESE WORKED... the only thing that fixed it is the command above.

    dangerweenieApr 10, 2024

    @thavikingninja418 If you run the command "activate.bat" in your comfyui/venv/scripts folder before doing any command-line stuff, anything downloaded, installed, or uninstalled will only affect the virtual environment that ComfyUI is using (instead of your entire system).

    Not sure if this is applicable to the portable version but using venvs (virtual environments) is essential for running multiple projects from Github etc.

    dangerweenieApr 10, 2024

    I fixed this by updating the AnimateDiff-Evolved custom nodes.

    thavikingninja418Apr 10, 2024

    @dangerweenie Honestly, I take your word for it; for me it just did not do anything when I updated AnimateDiff. But I think it depends on so many things, like all of us have different systems, and the tiniest thing can trip us up. I know one thing: this works now LOL. And thanks for the advice, I will remember it for next time when I bust up my whole thing by clicking the wrong button LOL XD

    NoobFromEgyptApr 9, 2024
    CivitAI

    Are there more motion masks available?

    And which SD1.5 model gives better results?

    ipiv
    Author
    Apr 9, 2024· 1 reaction

    Here are some more motion masks to use with QRCode - kindly provided by @Xenodimensional: https://civitai.com/posts/2011230

    If you have some After Effects experience, you can create your own black-and-white video masks for your exact needs.
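    A black-and-white mask like the one described above can also be generated procedurally. Here is a toy sketch of one frame of an expanding-circle mask (pure Python, pixel values 0/255; a real mask would be rendered frame by frame to video at the workflow's resolution - this is just an illustration of the idea, not part of the workflow):

```python
import math

def circle_mask_frame(size, radius):
    """One frame of an expanding-circle mask: white (255) inside the
    circle, black (0) outside. Animate by growing `radius` each frame."""
    cx = cy = (size - 1) / 2  # center of the frame
    return [
        [255 if math.hypot(x - cx, y - cy) <= radius else 0
         for x in range(size)]
        for y in range(size)
    ]
```

    Stepping `radius` from 0 up to the frame size over N frames gives the "expanding donut"-style flow that the bundled circles video provides.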

    NoobFromEgyptApr 9, 2024

    @ipiv How can I create my own? ... I'm not good at this stuff 🥺

    NoobFromEgyptApr 10, 2024

    @ipiv About your note "Not the case for my 3060 unfortunately..." you can do this:

    First, rebatch before the step that needs a lot of VRAM.

    Second, convert the image list back to a batch again.

    I do that when I upscale more than my VRAM can handle 😃

    Sorry about my English, I hope you will understand... good luck 🤞

    EpochEclatApr 10, 2024· 1 reaction

    @NoobFromEgypt @ipiv Here is a screenshot of a procedural motion mask that uses the reference image's depth map and creates batch images.
    https://civitai.com/posts/2071548

    NoobFromEgyptApr 10, 2024

    @EpochEclat Thanks for sharing

    jffaustApr 9, 2024
    CivitAI

    Thanks for the great workflow. I'm currently testing the upscaling part, and the Upscale Image (using Model) step is taking many, many hours. Is this expected?

    My current test is 96 frames, upscaled once with Upscale Image By x2.0, then run through Upscale Image (using Model) using 4x_NMKD-Siax_200k.

    I've got an Nvidia 4060 Ti with 8 GB.

    ipiv
    Author
    Apr 9, 2024

    For it to take many hours is not expected.
    You must be running out of VRAM or something; try lowering the x2.0 to x1.5 or even lower, and possibly bypass the upscaling-with-model step.

    If you are using any non-default launch parameters in the .bat file, then try with just the default parameters ComfyUI comes with and see if it speeds up the process.

    There are other possible issues, but that's what comes to mind right away.
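    As a rough sketch of why lowering the pre-upscale factor helps: the pixel workload grows with the square of the scale factor, so dropping from x2.0 to x1.5 cuts the work to about 56%. The 96-frame count is from the question above; the 512x512 starting resolution below is a hypothetical example:

```python
def pixel_workload(frames, width, height, scale):
    """Total pixels the upscaler must produce for an animation,
    as a crude proxy for time and VRAM cost."""
    return frames * round(width * scale) * round(height * scale)

# Example: 96 frames at a hypothetical 512x512 base resolution.
at_2x = pixel_workload(96, 512, 512, 2.0)
at_15x = pixel_workload(96, 512, 512, 1.5)
# at_15x / at_2x == (1.5 / 2.0) ** 2 == 0.5625
```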

    With an RTX 4090, upscaling by 2x is the key to awesome quality.

    rodzukicApr 9, 2024
    CivitAI

    How to solve:

    Failed to validate prompt for output 219:

    * VRAM_Debug 580:

    - Required input is missing: input

    Output will be ignored

    ???

    ipiv
    Author
    Apr 9, 2024

    Did you disconnect some node at the far top right of the workflow?
    Things you can try:

    Install Missing Custom Nodes via Comfy Manager
    Reload the original workflow
    Update All via Comfy Manager

    claygraffixApr 9, 2024· 1 reaction

    I had the same issue, but it's not causing any problems. Started all fresh for the first time today. If I figure it out, I'll comment!

    zhy798Apr 10, 2024· 3 reactions
    CivitAI

    How can I fix this? 😭 AttributeError: module 'comfy.model_base' has no attribute 'SDXL_instructpix2pix'

    ipiv
    Author
    Apr 10, 2024

    Hey, try reloading the original workflow, and before hitting Queue Prompt, double-check that all the model names match. There are links to all the models used in the Note nodes.

    This is an SD1.5 workflow, so the models must be SD1.5, not SDXL.

    dangerweenieApr 10, 2024

    Updating ComfyUI fixed this particular error for me.

    byarloooarlooo279Apr 10, 2024
    CivitAI

    This is so cool! Thanks ipiv for sharing this workflow. I was just wondering, is there a way to force the result to look closer to the 4 IPA pics? Even when I use the same model checkpoint and try the same seed as the generations of the 4 pics, the animated video looks a lot different. I tried IPA PLUS and PLUS FACE... tweaked them, but I can't really get AnimateDiff to respect the "look" of the 4 pics. Thanks for any tips on this.

    EpochEclatApr 10, 2024· 1 reaction

    I was wondering the same. Following the notes and increasing the weight on the IPAdapters (tried up to 5x) didn't seem to have the desired impact.
    Using a model at high strength was not getting close either.
    The model seems less willing to constrain its look for non-human images: buildings look completely different, versus humans, where we get quite a bit of resemblance to the reference image.

    I have tried giving the IPAdapter an attention mask based on the depth of the reference image, which didn't result in a closer look.

    @ipiv Thanks for the workflow, and any ideas would be great.

    antoiaIreneSApr 10, 2024
    CivitAI

    This is amazing!!! Is there any tutorial on how to get started? I would love, love, love to play around with this!!

    dangerweenieApr 10, 2024
    CivitAI

    Looking forward to trying this.

    Do you know where I can get the four reference images you used in the workflow?

    ipiv
    Author
    Apr 10, 2024

    That's where you come in 😊 Get creative - try all kinds of images to morph between

    BlackPanther2024Apr 10, 2024· 1 reaction
    CivitAI

    So awesome!

    869445754543Apr 11, 2024· 1 reaction
    CivitAI

    Error occurred when executing KSampler:

    'NoneType' object has no attribute 'shape'

    mlq600Apr 12, 2024

    +1

    mlq600Apr 12, 2024

    Edit: you need to download the exact ControlNet model the workflow specifies for 1.5, control_v1p_sd15_qrcode_monster.safetensors

    https://huggingface.co/monster-labs/control_v1p_sd15_qrcode_monster/resolve/main/control_v1p_sd15_qrcode_monster.safetensors?download=true

    huizhiyuan288205Apr 11, 2024
    CivitAI

    What does this error mean? It has been bothering me for a long time. Can you help me with it?

    Error occurred when executing IPAdapterBatch: 'ModelPatcherAndInjector' object has no attribute 'get_model_object'

    2964388258750Apr 11, 2024
    CivitAI

    Is there a step-by-step video tutorial? I really want to learn this way of making videos.

    perileeApr 12, 2024

    Same request here.

    LearningCreatorApr 11, 2024· 3 reactions
    CivitAI

    I've tried several different images and get absolutely nowhere even close to what I've seen others do with this workflow. Although I really appreciate it being shared, never seeing what images others used in their workflows makes me question whether their end results are legitimate.

    userno99Apr 12, 2024

    it would be harder to fake it than to do it with ai lol

    LearningCreatorApr 12, 2024· 2 reactions

    @userno99 That's what people say but no one is willing to show an example workflow that includes what images and masks they used.

    userno99Apr 15, 2024

    @LearningCreator because lazy, it's cutting edge rn, we're all busy fkn with it + when someone finds something that works they don't necessarily want to share it due to feelings of intellectual ownership

    NoobFromEgyptApr 11, 2024
    CivitAI

    Is there a way to slow down or speed up the QR mask video in the workflow, or add a node to do that? ... to show that change in the animation

    ipiv
    Author
    Apr 11, 2024· 1 reaction

    if you're using the circles video mask included with workflow you can do this to slow down the flow:
    in Load Video (Path) node change force_rate to 24
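To illustrate the relation behind this tip: the output frame count is fixed, so resampling the mask video at a higher force_rate makes the same number of frames cover less source time, which reads as slower motion. A rough sketch, illustrative only (`mask_speed_factor` is not a workflow node, and 12 is assumed as the workflow's default rate):

```python
def mask_speed_factor(force_rate: float, default_rate: float = 12.0) -> float:
    """Relative apparent speed of the mask motion (1.0 = default).

    Doubling force_rate samples the source video twice as densely, so the
    same number of output frames spans half the source time and the
    motion appears half as fast.
    """
    return default_rate / force_rate


print(mask_speed_factor(24))  # 0.5 -> half speed (slower)
print(mask_speed_factor(6))   # 2.0 -> double speed (faster)
```

By this reasoning, intermediate values like 18 or 20 should simply give intermediate speeds rather than break anything.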

    NoobFromEgyptApr 11, 2024

    @ipiv So it says 12, and if I change it to 24 it will be slower... then if I want to make it faster, should that be 6? And do those numbers have to be exactly 12 or 24, or can they be 18, 20, 21?

    nomoreplayApr 11, 2024
    CivitAI

    If you still have problems with "IPAdapter model not found": find and open "folder_paths.py" and add the line folder_names_and_paths["ipadapter"] = ([os.path.join(models_dir, "ipadapter")], supported_pt_extensions) to it.

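For context, here is a standalone sketch of what that added line does (this is not ComfyUI's actual folder_paths.py; `models_dir`, `supported_pt_extensions`, and `folder_names_and_paths` stand in for names that file already defines):

```python
import os

# Stand-ins for names that already exist in ComfyUI's folder_paths.py:
models_dir = os.path.join("ComfyUI", "models")
supported_pt_extensions = {".ckpt", ".pt", ".bin", ".pth", ".safetensors"}
folder_names_and_paths = {}

# The suggested addition: register an "ipadapter" model folder so loader
# nodes can resolve files under ComfyUI/models/ipadapter.
folder_names_and_paths["ipadapter"] = (
    [os.path.join(models_dir, "ipadapter")],
    supported_pt_extensions,
)
```

In other words, the fix just teaches the model-path registry about the ipadapter folder; recent IPAdapter_plus versions should do this on their own.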
    AIDigitalMediaAgencyApr 12, 2024
    CivitAI

    Please tell me your favorite checkpoints

    lym8763402Apr 12, 2024
    CivitAI

    This error always shows up and I can't fix it :(

    module 'comfy.model_base' has no attribute 'SDXL_instructpix2pix'

    userno99Apr 12, 2024

    you can't use XL models for animation AFAIK

    lym8763402Apr 12, 2024

    @userno99 Sure, I didn't use any SDXL model, but it still reports this error.

    ipiv
    Author
    Apr 12, 2024

    Hey, update your ComfyUI by going to ComfyUI Manager and clicking Update All.

    For safety, you could create a snapshot before updating in case something breaks: Manager -> Snapshot Manager -> Save Snapshot.

    Later load that Snapshot if update breaks your Comfy for some reason.

    lym8763402Apr 13, 2024

    @ipiv thx a lot, it works after I re-added the clip vision, but KSampler reports: object has no attribute 'repeat'

    abuedts522Apr 12, 2024
    CivitAI

    Amazing workflow, thanks for sharing! Is there an alternative for the RIFE VFI node? It doesn't support Non-Cuda devices (like apple). The recommended workaround with taichi doesn't seem to work. Thanks!

    ipiv
    Author
    Apr 12, 2024· 1 reaction

    Thank u!
    Sorry not sure about non-cuda alternatives but I know there's an executable u can use if you're comfortable with running a program through terminal: https://github.com/nihui/rife-ncnn-vulkan

    In that repo you'll find Windows/Linux/MacOS executables for Intel/AMD/Nvidia gpus and instructions to run it.

    abuedts522Apr 15, 2024

    @ipiv Thanks a lot. I'll check it out!

    Blade3dApr 12, 2024
    CivitAI

    Adherence to original images varies a lot. A house with a stream and sunset works great, airplanes are terrible. People are good, artistic humanized dog is terrible. Any suggestions for getting better adherence to original images? I changed to high strength from default medium, but not much better. If users are getting animations that are close to your images, can you share your settings?

    mironovataApr 13, 2024· 2 reactions
    CivitAI

    Hi! Your workflow works amazingly! Thanks for sharing! I want to change the IPAdapter model to the PLUS version. I uploaded ip-adapter-plus_sd15.safetensors into the directory \ComfyUI\models\ipadapter, but I get the error "IPAdapter model not found". Should I rename this file similarly to how you renamed it in your tutorial? If so, what name should I choose for ip-adapter-plus_sd15.safetensors? Or am I doing something wrong?

    ipiv
    Author
    Apr 13, 2024· 1 reaction

    Hi, thanks for the buzz btw!
    Your file naming and folder path seem to be correct according to the documentation:
    https://github.com/cubiq/ComfyUI_IPAdapter_plus?tab=readme-ov-file#installation

    Not 100% sure what the cause of your issue is but to help debugging:
    Have you changed any of the default model folder paths in some of the config files or are you using some custom node to do that?

    In ComfyUI root directory, is there a file named "extra_model_paths.yaml" - without the ".example" at the end?

    Can you double-check and re-download the model from the Manager or from the links in the GitHub documentation (ip-adapter-plus_sd15.safetensors)?
    Make sure not to mistake FaceID models for the regular models.

    mironovataApr 13, 2024

    Thanks for the feedback! Everything worked out!

    qian11111Apr 18, 2024

    @mironovata Please help me! I have also encountered this problem. How did you solve it?

    mironovataApr 18, 2024

    @qian11111 You need to download the file Plus version ip-adapter-plus_sd15.safetensors from https://github.com/cubiq/ComfyUI_IPAdapter_plus?tab=readme-ov-file#installation. And make sure that you definitely put it in ComfyUI\models\ipadapter . And double-check that you followed the instructions exactly.

    qian11111Apr 19, 2024

    Thank you, brother. I also want to ask why the videos I produce are very blurry

    deadsec99Apr 13, 2024
    CivitAI

    Amazing plug-and-play workflow. Thank you for sharing. How would you increase the output video duration? I tried changing the frames but that doesn't seem to work

    ipiv
    Author
    Apr 13, 2024· 2 reactions

    Hey, thanks!
    It's a bit more advanced, but what I can suggest is:
    Look in the IPAdapter and Attention mask generation group; you have to change the mask frame numbers accordingly. If you want to add more images, you have to duplicate the IPAdapter node with its mask generation and also adjust the frame numbers according to your batch_count and desired fade durations.
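As a rough sketch of the frame arithmetic described here (a hypothetical helper, not a node in the workflow; the resulting numbers would be entered by hand into the fade mask keyframes):

```python
def fade_keyframes(num_images: int, total_frames: int, fade_frames: int):
    """Split total_frames evenly between reference images and return, per
    image, the (segment_start, fade_start, segment_end) frame numbers:
    the mask for that image holds fully on from segment_start, starts
    fading at fade_start, and is fully off by segment_end."""
    segment = total_frames // num_images
    keys = []
    for i in range(num_images):
        start = i * segment
        end = start + segment
        keys.append((start, max(start, end - fade_frames), end))
    return keys


# 4 reference images over a 96-frame batch with 12-frame crossfades:
print(fade_keyframes(4, 96, 12))
# [(0, 12, 24), (24, 36, 48), (48, 60, 72), (72, 84, 96)]
```

The even split and fixed fade length are assumptions; longer fades or uneven segments would just shift these numbers.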

    442153546Apr 13, 2024· 3 reactions
    CivitAI

    Error occurred when executing VRAM_Debug: VRAM_Debug.VRAMdebug() got an unexpected keyword argument 'image_passthrough' XD

    ipiv
    Author
    Apr 13, 2024

    You can simply remove that node or bypass it. It's only needed for low VRAM cards to help them free up some VRAM before frame interpolation if they run out on the last step.

    442153546Apr 13, 2024· 1 reaction

    @ipiv Thank you very much. The problem has been resolved

    442153546Apr 13, 2024· 1 reaction

    @ipiv May I ask again why the video I generated is very blurry and cannot be seen clearly.

    LucasYaoApr 14, 2024
    CivitAI

    got prompt

    [rgthree] Using rgthree's optimized recursive execution.

    !!! Exception during processing !!!

    Traceback (most recent call last):

    File "D:\ComfyUI\execution.py", line 151, in recursive_execute

    output_data, output_ui = get_output_data(obj, input_data_all)

    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

    File "D:\ComfyUI\execution.py", line 81, in get_output_data

    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)

    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

    File "D:\ComfyUI\execution.py", line 74, in map_node_over_list

    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))

    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

    TypeError: VRAM_Debug.VRAMdebug() got an unexpected keyword argument 'image_passthrough'

    Prompt executed in 0.37 seconds

    shandoumingApr 14, 2024
    CivitAI

    Hi, your workflow is amazing! However, I have a problem when I execute:'Error occurred when executing KSampler: module 'comfy.sample' has no attribute 'prepare_mask'' Could you help me to figure it out?

    shandoumingApr 14, 2024

    other information:

    File "D:\comfyui\ComfyUI-aki-v1.3\execution.py", line 151, in recursive_execute: output_data, output_ui = get_output_data(obj, input_data_all)
    File "D:\comfyui\ComfyUI-aki-v1.3\execution.py", line 81, in get_output_data: return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
    File "D:\comfyui\ComfyUI-aki-v1.3\execution.py", line 74, in map_node_over_list: results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
    File "D:\comfyui\ComfyUI-aki-v1.3\nodes.py", line 1369, in sample: return common_ksampler(model, seed, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, denoise=denoise)
    File "D:\comfyui\ComfyUI-aki-v1.3\nodes.py", line 1339, in common_ksampler: samples = comfy.sample.sample(model, noise, steps, cfg, sampler_name, scheduler, positive, negative, latent_image,
    File "D:\comfyui\ComfyUI-aki-v1.3\custom_nodes\ComfyUI-Impact-Pack\modules\impact\sample_error_enhancer.py", line 9, in informative_sample: return original_sample(*args, **kwargs) # This code helps interpret error messages that occur within exceptions but does not have any impact on other operations.
    File "D:\comfyui\ComfyUI-aki-v1.3\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\sampling.py", line 273, in motion_sample: function_injections.inject_functions(model, params)
    File "D:\comfyui\ComfyUI-aki-v1.3\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\sampling.py", line 205, in inject_functions: self.orig_prepare_mask = comfy.sample.prepare_mask

    shandoumingApr 14, 2024

    I am a designer and not very familiar with coding or ComfyUI; sorry for asking you questions.

    I have the same problem as well. Is there no solution?

    gptytpromo106Apr 14, 2024

    just update animatediff model through comfyui manager

    1905928452513Apr 14, 2024

    I met with the same problem and solved it by downloading the animatediff-evolved node from github and replace the old one. Hope this can help you.

    shandoumingApr 15, 2024

    @1905928452513 Thanks a lot! I will try it!

    shandoumingApr 15, 2024

    @gptytpromo106 thank you! I will try it! 

    alexdeparioApr 14, 2024
    CivitAI

    That's just amazing! Can anyone point me to a good tutorial for noobs on how I can install and run this? Is it possible to run it on a Hugging Face space or something? I got a shitty AMD card and a dream.

    asadtinkersApr 18, 2024

    Here's a tutorial i made: https://youtu.be/mecA9feCihs ; You can try running it on the cloud on openart for free (link below) or pay for a cloud service like comfy.icu or runcomfy

    https://openart.ai/workflows/abeatech/tutorial-morpheus---morphing-videos-using-text-or-images-txt2img2vid/fOrrmsUtKEcBfopPrMXi

    qian11111Apr 18, 2024

    @abeatech Help! I ran into this error while running it:

    Error occurred when executing IPAdapterUnifiedLoader:

    module 'comfy.model_base' has no attribute 'SDXL_instructpix2pix'

    File "X:\ComfyUI-AI\ComfyUI-aki-v1.3\execution.py", line 151, in recursive_execute

    output_data, output_ui = get_output_data(obj, input_data_all)

    File "X:\ComfyUI-AI\ComfyUI-aki-v1.3\execution.py", line 81, in get_output_data

    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)

    File "X:\ComfyUI-AI\ComfyUI-aki-v1.3\execution.py", line 74, in map_node_over_list

    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))

    File "X:\ComfyUI-AI\ComfyUI-aki-v1.3\custom_nodes\ComfyUI_IPAdapter_plus\IPAdapterPlus.py", line 449, in load_models

    is_sdxl = isinstance(model.model, (comfy.model_base.SDXL, comfy.model_base.SDXLRefiner, comfy.model_base.SDXL_instructpix2pix))

    cmcm0046Apr 15, 2024
    CivitAI

    Error occurred when executing VRAM_Debug: VRAM_Debug.VRAMdebug() got an unexpected keyword argument 'image_passthrough'
    File "I:\comfyui\execution.py", line 151, in recursive_execute: output_data, output_ui = get_output_data(obj, input_data_all)
    File "I:\comfyui\execution.py", line 81, in get_output_data: return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
    File "I:\comfyui\execution.py", line 74, in map_node_over_list: results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))

    How do I solve this error?

    ipiv
    Author
    Apr 15, 2024

    Bypass or remove the VRAM Debug node.

    loudhippo840689Apr 17, 2024

    I created a new VRAM Debug node and noticed the input names were different, so I removed the existing one, put the new one in its place, connected it the same way, and made the settings match. Voila, it works great!

    jatrantApr 15, 2024
    CivitAI

    Hi, I've been playing around quite a bit with this workflow. Thank you. But I am having issues with the fact that all my outputs seem blurry and I can't seem to find the cause. Could anyone recommend what I should be modifying to get a better output animation? thank you.

    ipiv
    Author
    Apr 15, 2024

    Reload the original workflow and before hitting queue make sure to double check that you are loading the correct models. Names and links are next to the loader nodes.

    Thorium32434235Apr 16, 2024

    @ipiv tried that too, but still blurry videos :/

    jakeca520327Apr 15, 2024· 2 reactions
    CivitAI

    When I use this workflow, the following error occurs when inputting a 512x512 image: Error occurred when executing IPAdapterBatch: Error(s) in loading state_dict for ImageProjModel: size mismatch for proj.weight: copying a param with shape torch.Size([3072, 1280]) from checkpoint, the shape in current model is torch.Size([3072, 1024]). I don't know where the problem lies. How did you solve similar problems when you encountered them?

    megamakerApr 15, 2024

    same

    ipiv
    Author
    Apr 15, 2024

    You're loading the wrong models, or the CLIP Vision model was renamed incorrectly.

    Canna75Apr 15, 2024

    I had a similar problem... solved it by updating all through the Manager.

    ja81498Apr 15, 2024
    CivitAI

    Noob here, I got it to work, but what settings would be good for quick testing? I don't mind getting small resolution video.

    Canna75Apr 15, 2024
    CivitAI

    First of all, thanks for sharing this piece of work!
    I've downloaded and placed all the models, but I get this message:
    "Error occurred when executing IPAdapterBatch: 'ModelPatcherAndInjector' object has no attribute 'get_model_object' "
    What am I doing wrong?
    Thanks again.

    Canna75Apr 15, 2024

    Nevermind... it seems I had something outdated. I did an "Update All" in ComfyUI Manager and now it's working.
    Not deleting the post, just in case it's useful for someone ;)

    cartossin343Apr 16, 2024· 2 reactions
    CivitAI

    Hmm, I get pickle.UnpicklingError: Weights only load failed. Re-running torch.load with weights_only set to False will likely succeed, but it can result in arbitrary code execution. Do it only if you get the file from a trusted source. WeightsUnpickler error: Unsupported operand 64

    Edit: I solved this. It turned out I got the wrong file and renamed it to AnimateLCM_sd15_t2v.ckpt, so whatever file that was could not be processed properly and led to this pretty random error.

    Draco93Apr 22, 2024

    replace your checkpoint and use SD 1.5

    leyoaitoolsApr 16, 2024· 2 reactions
    CivitAI

    Hello, I think this is a super cool workflow. When I opened it following the tutorial, I got the prompts below. Some of the plugins I managed to install via Google, but some I couldn't even find through Google. Do you have any tips for me? Thank you~

    When loading the graph, the following node types were not found:

    BatchCount+

    VHS_SplitImages

    SimpleMath+

    VHS_LoadVideoPath

    CreateFadeMaskAdvanced

    VHS_VideoCombine

    VRAM_Debug

    RIFE VFI

    FILM VFI

    ipiv
    Author
    Apr 16, 2024· 1 reaction

    ComfyUI Manager -> Install Missing Custom Nodes -> Restart.
    Might need Update All afterwards to make sure your Comfy and all other nodes are up to date.

    leyoaitoolsApr 17, 2024

    @ipiv thanks for your reply! Let me try

    rix81Apr 16, 2024
    CivitAI

    Great! Is there a way to keep the camera still (non-moving) during transitions? I don't know why the camera tries to rotate or move in/out. I need it still. I changed the QR motion mask, but the camera still moves.

    bro7dudeman336Apr 16, 2024
    CivitAI

    Thanks bro! How do I use SDXL with this workflow?

    ipiv
    Author
    Apr 16, 2024· 1 reaction

    Switch out all models to SDXL versions (checkpoint, QRCode, IPAdapter, Vae, Motion model etc), change base resolution, adjust AnimateDiff beta_schedule to sdxl and adjust Ksampler settings according to your sdxl model.
    Output probably won't be as good since currently SD1.5 Animatediff is better than the SDXL Beta version in my opinion.

    asadtinkersApr 17, 2024
    CivitAI

    Thanks for sharing this workflow. It's super easy to use, and really appreciate all the notes you've added throughout the flow!

    yy2006629Apr 17, 2024
    CivitAI

    Did someone successfully try to make the reference images persistent? @ipiv

    ReinBijlsmaApr 17, 2024
    CivitAI

    I installed everything exactly as shown in the workflow, but I'm getting this error:

    Error occurred when executing IPAdapterBatch: Error(s) in loading state_dict for ImageProjModel: size mismatch for proj.weight: copying a param with shape torch.Size([3072, 1280]) from checkpoint, the shape in current model is torch.Size([3072, 1024]).
    File "D:\Beeldbewerking\ComfyUI\ComfyUI\execution.py", line 151, in recursive_execute: output_data, output_ui = get_output_data(obj, input_data_all)
    File "D:\Beeldbewerking\ComfyUI\ComfyUI\execution.py", line 81, in get_output_data: return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
    File "D:\Beeldbewerking\ComfyUI\ComfyUI\execution.py", line 74, in map_node_over_list: results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
    File "D:\Beeldbewerking\ComfyUI\ComfyUI\custom_nodes\ComfyUI_IPAdapter_plus\IPAdapterPlus.py", line 679, in apply_ipadapter: return (ipadapter_execute(model.clone(), ipadapter_model, clip_vision, **ipa_args), )
    File "D:\Beeldbewerking\ComfyUI\ComfyUI\custom_nodes\ComfyUI_IPAdapter_plus\IPAdapterPlus.py", line 329, in ipadapter_execute: ipa = IPAdapter(
    File "D:\Beeldbewerking\ComfyUI\ComfyUI\custom_nodes\ComfyUI_IPAdapter_plus\IPAdapterPlus.py", line 69, in init: self.image_proj_model.load_state_dict(ipadapter_model["image_proj"])
    File "D:\Beeldbewerking\ComfyUI\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 2153, in load_state_dict: raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(

    peam3178Apr 18, 2024

    are you using an SDXL checkpoint instead of SD 1.5?

    qian11111Apr 17, 2024· 7 reactions
    CivitAI

    Excuse me, big shot. Have you ever produced a video tutorial on this workflow?

    sourcesauce101Apr 17, 2024
    CivitAI

    Amazing work, thank you so much! Question... if I decide to skip the 3 videos/upscaling process, what does it mean exactly to "fix" the seed? What do we need to do after we're happy with the motion/look and want to upscale?

    ipiv
    Author
    Apr 17, 2024· 2 reactions

    The KSampler node has the parameter "control_after_generate" - Randomize or Fixed
    1. Keep seed Randomized and generate until u are happy with the preview
    2. Fix the seed in KSampler
    3. Unbypass the 3 video outputs
    4. Queue it again to run the upscaling process without running the preview step again

    RobbykbApr 17, 2024
    CivitAI

    I'm getting this error after running the prompt, any help or ideas would be highly appreciated, or let me know if you need more info (Using ComfyUI and RTX 4070)

    Error occurred when executing VRAM_Debug: VRAM_Debug.VRAMdebug() got an unexpected keyword argument 'image_passthrough'

    ipiv
    Author
    Apr 17, 2024· 3 reactions

    VRAM_Debug node got updated and it seems to be broken now. Just remove it or bypass it for now.

    RobbykbApr 18, 2024

    @ipiv I appreciate the reply!

    SirNeuralApr 17, 2024· 2 reactions
    CivitAI

    Hey, thanks for the great workflow, I'm running the workflows but only seeing the motion mask (the black and white circles), not any of the reference images, I've tried to adjust the weight and strength of controlnet/ipadapter, without really getting the image to show correctly. Other times when I increase the ipadapter weight I just get a flat image without any recognizable shape. Do you have any recommendations?

    T12DSApr 25, 2024

    Same issue here, the output images from the ksamplers are all pitch black blank with nothing in them.

    mercan1Apr 26, 2024

    Same issue for me as well, until I changed the lora model as in the notes (AnimateLCM_sd15_t2v_lora.safetensors) and used another checkpoint.

    leonmyhero407Apr 18, 2024
    CivitAI

    Great workflow, but I get the error below when executing. Any idea how to fix it?
    Error occurred when executing UpscaleModelLoader: 'NoneType' object has no attribute 'lower'
    File "C:\ComfyUI\ComfyUI\execution.py", line 151, in recursive_execute: output_data, output_ui = get_output_data(obj, input_data_all)
    File "C:\ComfyUI\ComfyUI\execution.py", line 81, in get_output_data: return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
    File "C:\ComfyUI\ComfyUI\execution.py", line 74, in map_node_over_list: results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
    File "C:\ComfyUI\ComfyUI\comfy_extras\nodes_upscale_model.py", line 20, in load_model: sd = comfy.utils.load_torch_file(model_path, safe_load=True)
    File "C:\ComfyUI\ComfyUI\comfy\utils.py", line 13, in load_torch_file: if ckpt.lower().endswith(".safetensors"):

    peam3178Apr 18, 2024

    did you forget to load in an upscale model?

    ZealousGApr 18, 2024
    CivitAI

    Error occurred when executing IPAdapterUnifiedLoader: module 'comfy.model_base' has no attribute 'SDXL_instructpix2pix'. Help! I updated ComfyUI through the Manager but it still throws this error!

    peam3178Apr 18, 2024

    try updating all via the comfyui manager
    https://github.com/comfyanonymous/ComfyUI/issues/3236

    oneshotoneApr 18, 2024
    CivitAI

    any fix for the following error? :
    Error occurred when executing VHS_VideoCombine: [Errno 22] Invalid argument
    File "D:\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute: output_data, output_ui = get_output_data(obj, input_data_all)
    File "D:\ComfyUI_windows_portable\ComfyUI\execution.py", line 81, in get_output_data: return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
    File "D:\ComfyUI_windows_portable\ComfyUI\execution.py", line 74, in map_node_over_list: results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
    File "D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-VideoHelperSuite\videohelpersuite\nodes.py", line 365, in combine_video: output_process.send(image.tobytes())
    File "D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-VideoHelperSuite\videohelpersuite\nodes.py", line 119, in ffmpeg_process: proc.stdin.write(frame_data)

    kaka990Apr 19, 2024
    CivitAI

    Excuse me, where do I download the two red nodes below? I can't find them in the Manager or via search engines. Could the download links be posted?

    IPAdapterBatch

    IPAdapterUnifiedLoader

    ipiv
    Author
    Apr 19, 2024

    ComfyUI Manager -> Install Missing Custom Nodes -> ComfyUI_IPAdapter_plus -> Install

    Or on Github: cubiq/ComfyUI_IPAdapter_plus

    kaka990Apr 22, 2024

    I have already installed the ComfyUI_IPAdapter_plus plugin, but every time I open the workflow it prompts that these two are missing @ipiv

    hewenhaoApr 22, 2024

    I ran into the same problem; even with the plugin installed it still reports the error.

    tomas647Apr 19, 2024· 2 reactions
    CivitAI

    Can anybody change this to work with SDXL please?

    ZodiakApr 20, 2024· 1 reaction

    Yes, please!

    ZodiakApr 20, 2024

    Additionaly, if someone is going to implement this, I recommend to use turbo/lightning models, such as: https://civitai.com/models/112902?modelVersionId=351306

    ipiv
    Author
    Apr 20, 2024· 3 reactions

    From my testing, this will result in degraded animation and overall motion quality.
    Currently, you'll get way better/coherent output using SD1.5 motion models (v3, v2, lcm) rather than the SDXL beta version of AnimateDiff.

    tomas647Apr 20, 2024

    @ipiv Thank you. Any tips on whether it is possible to create a good high-res animation with this 1.5 approach? (for example, how to connect NNLatent upscaling)

    ipiv
    Author
    Apr 20, 2024· 1 reaction

    @tomas647 You could change starting resolution to 512x910 for example and increase High-res fix multiplier even further (2nd KSampler). But keep in mind almost all examples I posted in the gallery started with 288x512 base resolution, upscaled to 1080x1920.

    Vae Decode -> Vae Encode upscaling (The process currently used in workflow) preserves quality and detail better than any latent upscaling for High-res fix. Source: NNLatentUpscale Github
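For reference, the resolution arithmetic in the reply above can be sketched as follows (illustrative only; `highres_target` is not a workflow node, and the 3.75 multiplier is inferred from the 288x512 -> 1080x1920 example):

```python
def highres_target(base_w: int, base_h: int, multiplier: float):
    """Final resolution after the High-res fix multiplier is applied
    to the base generation resolution."""
    return round(base_w * multiplier), round(base_h * multiplier)


# The gallery examples: 288x512 base, upscaled 3.75x to 1080x1920.
print(highres_target(288, 512, 3.75))  # (1080, 1920)
```

The same arithmetic applies if you raise the base resolution and lower the multiplier; the trade-off is VRAM and coherence at the base pass versus detail added by the upscale pass.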

    YuzackApr 20, 2024
    CivitAI

    Error occurred when executing UpscaleModelLoader: 'NoneType' object has no attribute 'lower'

    How can I resolve this?

    ipiv
    Author
    Apr 20, 2024

    Have you installed "4x_NMKD-Siax_200k" via ComfyUI Manager?
    Manager -> Install Models -> Search for "4x_NMKD-Siax_200k" -> Install
    Once installed, click Refresh and make sure you load it in the Load Upscale Model node

    YuzackApr 20, 2024
    CivitAI

    How can I switch to 1920x1080 16:9 video?

    ipiv
    Author
    Apr 20, 2024· 1 reaction

    Flip the width and height in Settings group (512x288) and Upscale /w Model group to get a 1920x1080 video output

    1303352Apr 20, 2024· 1 reaction
    CivitAI

    Hi, sorry, I'm a noob at making videos; I'm only used to making pictures. Can you please recommend which program you use to create videos like this? Thank you

    ZodiakApr 21, 2024· 2 reactions

    It is done by using this workflow for ComfyUI. It can be installed in one click via Pinokio https://pinokio.computer/ , or just see the ComfyUI GitHub https://github.com/comfyanonymous/ComfyUI and follow the installation guides. Then load this workflow. Also, there's a decent tutorial for this workflow: https://youtu.be/mecA9feCihs . All in all, there are two main UIs for graphic synthesis: Automatic1111 and Comfy, where you can install extensions for video/animation, such as Deforum/AnimateDiff/MultiFrameRender/Ebsynth, etc. Hope it will help!

    1303352Apr 28, 2024· 1 reaction

    @Zodiak Thank you so much, you are very kind. I'm going to practice 😃😃

    YuzackApr 20, 2024
    CivitAI

    I got this error... what can i do?

    VRAM_Debug() got an unexpected keyword argument 'image_passthrough'

    ipiv
    Author
    Apr 20, 2024· 2 reactions

    You can remove the VRAM Debug node altogether, or if you're on low VRAM you can right-click the node -> Fix Node (Recreate) and reconnect the input and output.

    jtk1014Apr 21, 2024

    @ipiv Thank you!!

    BunnyByteApr 20, 2024
    CivitAI

    I'm getting

    if sub_idxs is not None and self.orig_img_latents.size(0) >= full_length: AttributeError: 'NoneType' object has no attribute 'size'

    On the Ksampler.

    Tried messing around with the latent a bit to no avail.

    nopaaApr 21, 2024
    CivitAI

    I can't fix this error on the Apply Advanced ControlNet node. I'm trying to figure out if it's about the model locations, but everything seems OK

    Error occurred when executing ACN_AdvancedControlNetApply: 'NoneType' object has no attribute 'copy'

    YuzackApr 21, 2024· 5 reactions
    CivitAI

    how can i extend the duration and slow the video?

    skipintrobabyApr 21, 2024· 2 reactions
    CivitAI

    Everything works like magic. However, despite using LCM models, the results I get are too burnt, dark, as if there's a high cfg value. Additionally, when I increase the Motion scale value, I receive even darker/burnt results. How can I overcome this? Thank you very much.

    elmaestro100Apr 24, 2024

    I had the same problem. Then I experimented with the KSamplers on both sides. As soon as I set them to euler or euler a and adjusted cfg and steps up a bit, it went right away. I haven't tried others yet, but I bet they can be good too, so just experiment with the KSamplers a little.

    ray19950508Apr 22, 2024· 2 reactions
    CivitAI

    help me please!!
    Error occurred when executing VHS_LoadVideoPath: https://i.imgur.com/FZojh3v.mp4 could not be loaded with cv.

    lmz11702132Apr 22, 2024

    Open that link, save the video in it to the ComfyUI root directory, and then change the video path to FZojh3v - Imgur.mp4

    CeelopezApr 24, 2024

    Make sure you right-click the video and copy the video path; it should end in .mp4

    ZodiakApr 23, 2024· 1 reaction
    CivitAI

    Hi! I switched to Comfy from Auto1111 recently and maybe it's an obvious question, but how do I use prompts instead of image references? IPAdapter does not always do well, especially when I used a Lora

    NEURO_NOVA_LIZAApr 23, 2024
    CivitAI

    module 'comfy.sample' has no attribute 'prepare_mask'
    And I'm stuck

    3kyApr 26, 2024

    update comfyui

    carcarti831Apr 23, 2024
    CivitAI

    Hey, hello. How do I solve the following problem? Thank you.
    Error occurred when executing VRAM_Debug: VRAM_Debug.VRAMdebug() got an unexpected keyword argument 'image_passthrough'
    File "D:\ComfyUI-aki-v1.1\execution.py", line 151, in recursive_execute: output_data, output_ui = get_output_data(obj, input_data_all)
    File "D:\ComfyUI-aki-v1.1\execution.py", line 81, in get_output_data: return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
    File "D:\ComfyUI-aki-v1.1\execution.py", line 74, in map_node_over_list: results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))

    ipiv
    Author
    Apr 23, 2024· 1 reaction

    Hey, you can remove the VRAM Debug node altogether, or if you're on low VRAM you can right-click the node -> Fix Node (Recreate) and reconnect the input and output.

    3kyApr 26, 2024

    Error occurred when executing VRAM_Debug: VRAM_Debug.VRAMdebug() got an unexpected keyword argument 'image_passthrough'
    File "/workspace/ComfyUI/execution.py", line 151, in recursive_execute: output_data, output_ui = get_output_data(obj, input_data_all)
    File "/workspace/ComfyUI/execution.py", line 81, in get_output_data: return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
    File "/workspace/ComfyUI/execution.py", line 74, in map_node_over_list: results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))

    3kyApr 26, 2024

    I put the node in "Bypass" ...

    TalkashieApr 23, 2024
    CivitAI

    I'm confused about why the model is only 8 KB. I'm trying to use this with Automatic1111, but I can't seem to find the model file anywhere

    moelleApr 24, 2024

    It's not a model but a workflow for ComfyUI

    kaka990Apr 24, 2024· 1 reaction
    CivitAI

    IPAdapterBatch

    IPAdapterUnifiedLoader

    Has anyone successfully generated an animation? Can you tell me where to download the two nodes above?

    gafernandes435209Apr 25, 2024· 1 reaction
    CivitAI

    Hi guys. Can anybody help me? In Load Lora this error appears:
    Error occurred when executing LoraLoaderModelOnly: LoraLoaderModelOnly.load_lora_model_only() missing 1 required positional argument: 'lora_name'
    File "\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute: output_data, output_ui = get_output_data(obj, input_data_all)
    File "\ComfyUI_windows_portable\ComfyUI\execution.py", line 81, in get_output_data: return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
    File "\ComfyUI_windows_portable\ComfyUI\execution.py", line 74, in map_node_over_list: results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))

    XuYuHangApr 25, 2024

    "AnimateLCM_sd15_t2v.ckpt"

    Have you downloaded this file mentioned in the comments?

    SkyeverseApr 25, 2024
    CivitAI

    I just downloaded it, but how do I use it?

    EllieMiaApr 25, 2024
    CivitAI

    Hello, I have been trying to generate a nice animation, but somehow mine doesn't have vibrant colors like the ones people share here. Can anyone share a workflow?

    ZikinXApr 25, 2024· 1 reaction

    Maybe because you don't use a VAE, or because your VAE doesn't fit well with your model.

    EllieMiaApr 26, 2024

    @UnconvAI I think I need to add image sharpening and color correction.

    omphtelibaApr 26, 2024

    @UnconvAI that solved the problem for me. @EllieMia I had the same problem; choosing vae-ft-MSE-840000-ema-pruned fixed it and the colors are vibrant.

    EllieMiaApr 26, 2024

    @omphteliba my problem was solved after updating ipvp, which corrected the colors.

    djnas_74055Apr 25, 2024· 1 reaction
    CivitAI

    Hi,

    I am getting the error below at the sampler node.

    RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu!

    The issue goes away if I change denoise to zero, but then the image is blank.

    Thanks in advance.
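    This RuntimeError means one model or tensor in the chain stayed on the CPU while the rest moved to the GPU; in this workflow the usual suspects are a ControlNet, CLIP Vision, or IPAdapter model loaded on a different device than the checkpoint. A toy illustration of the check PyTorch performs (plain Python, no torch, class name hypothetical):

```python
class FakeTensor:
    """Toy stand-in illustrating PyTorch's same-device requirement."""
    def __init__(self, device):
        self.device = device

    def __add__(self, other):
        # PyTorch refuses elementwise ops across devices rather than
        # silently copying data between them.
        if self.device != other.device:
            raise RuntimeError(
                "Expected all tensors to be on the same device, but found "
                f"at least two devices, {self.device} and {other.device}!")
        return FakeTensor(self.device)

FakeTensor("cuda:0") + FakeTensor("cuda:0")  # fine: same device
# FakeTensor("cuda:0") + FakeTensor("cpu")   # raises the RuntimeError above
```

    The real fix is getting every model onto the same device (for example, disabling any force-CPU or low-VRAM offload options), not setting denoise to zero, which simply skips sampling and produces a blank image.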

    labibkhan01Apr 25, 2024

    Your VRAM is probably 6 GB; that won't work. A minimum of 8 GB of VRAM is required for AnimateDiff.

    djnas_74055Apr 26, 2024

    Thanks for the reply. Appreciate it.

    RobopsychoApr 26, 2024
    CivitAI

    I'm wondering if this error is because my images might be different resolutions? It's a KSampler error:

    Error occurred when executing KSampler:

    mat1 and mat2 shapes cannot be multiplied (1232x2048 and 768x320)

    File "C:\LocalAI\COMFYUI\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute
        output_data, output_ui = get_output_data(obj, input_data_all)
    File "C:\LocalAI\COMFYUI\ComfyUI_windows_portable\ComfyUI\execution.py", line 81, in get_output_data
        return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
    File "C:\LocalAI\COMFYUI\ComfyUI_windows_portable\ComfyUI\execution.py", line 74, in map_node_over_list
        results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
    File "C:\LocalAI\COMFYUI\ComfyUI_windows_portable\ComfyUI\nodes.py", line 1369, in sample
        return common_ksampler(model, seed, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, denoise=denoise)
    File "C:\LocalAI\COMFYUI\ComfyUI_windows_portable\ComfyUI\nodes.py", line 1339, in common_ksampler
        samples = comfy.sample.sample(model, noise, steps, cfg, sampler_name, scheduler, positive, negative, latent_image,
    File "C:\LocalAI\COMFYUI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Impact-Pack\modules\impact\sample_error_enhancer.py", line 22, in informative_sample
        raise e
    File "C:\LocalAI\COMFYUI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Impact-Pack\modules\impact\sample_error_enhancer.py", line 9, in informative_sample
        return original_sample(*args, **kwargs)  # This code helps interpret error messages that occur within exceptions but does not have any impact on other operations.
    File "C:\LocalAI\COMFYUI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\sampling.py", line 376, in motion_sample
        latents = wrap_function_to_inject_xformers_bug_info(orig_comfy_sample)(model, noise, *args, **kwargs)
    File "C:\LocalAI\COMFYUI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\utils_model.py", line 377, in wrapped_function
        return function_to_wrap(*args, **kwargs)
    File "C:\LocalAI\COMFYUI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Advanced-ControlNet\adv_control\control_reference.py", line 47, in refcn_sample
        return orig_comfy_sample(model, *args, **kwargs)
    File "C:\LocalAI\COMFYUI\ComfyUI_windows_portable\ComfyUI\comfy\sample.py", line 37, in sample
        samples = sampler.sample(noise, positive, negative, cfg=cfg, latent_image=latent_image, start_step=start_step, last_step=last_step, force_full_denoise=force_full_denoise, denoise_mask=noise_mask, sigmas=sigmas, callback=callback, disable_pbar=disable_pbar, seed=seed)
    File "C:\LocalAI\COMFYUI\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 755, in sample
        return sample(self.model, noise, positive, negative, cfg, self.device, sampler, sigmas, self.model_options, latent_image=latent_image, denoise_mask=denoise_mask, callback=callback, disable_pbar=disable_pbar, seed=seed)
    File "C:\LocalAI\COMFYUI\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 657, in sample
        return cfg_guider.sample(noise, latent_image, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)
    File "C:\LocalAI\COMFYUI\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 644, in sample
        output = self.inner_sample(noise, latent_image, device, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)
    File "C:\LocalAI\COMFYUI\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 623, in inner_sample
        samples = sampler.sample(self, sigmas, extra_args, callback, noise, latent_image, denoise_mask, disable_pbar)
    File "C:\LocalAI\COMFYUI\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 534, in sample
        samples = self.sampler_function(model_k, noise, sigmas, extra_args=extra_args, callback=k_callback, disable=disable_pbar, **self.extra_options)
    File "C:\LocalAI\COMFYUI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
        return func(*args, **kwargs)
    File "C:\LocalAI\COMFYUI\ComfyUI_windows_portable\ComfyUI\comfy\k_diffusion\sampling.py", line 745, in sample_lcm
        denoised = model(x, sigmas[i] * s_in, **extra_args)
    File "C:\LocalAI\COMFYUI\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 272, in __call__
        out = self.inner_model(x, sigma, model_options=model_options, seed=seed)
    File "C:\LocalAI\COMFYUI\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 610, in __call__
        return self.predict_noise(*args, **kwargs)
    File "C:\LocalAI\COMFYUI\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 613, in predict_noise
        return sampling_function(self.inner_model, x, timestep, self.conds.get("negative", None), self.conds.get("positive", None), self.cfg, model_options=model_options, seed=seed)
    File "C:\LocalAI\COMFYUI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\sampling.py", line 421, in evolved_sampling_function
        cond_pred, uncond_pred = sliding_calc_cond_uncond_batch(model, cond, uncond_, x, timestep, model_options)
    File "C:\LocalAI\COMFYUI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\sampling.py", line 537, in sliding_calc_cond_uncond_batch
        sub_cond_out, sub_uncond_out = comfy.samplers.calc_cond_batch(model, [sub_cond, sub_uncond], sub_x, sub_timestep, model_options)
    File "C:\LocalAI\COMFYUI\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 192, in calc_cond_batch
        c['control'] = control.get_control(input_x, timestep_, c, len(cond_or_uncond))
    File "C:\LocalAI\COMFYUI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Advanced-ControlNet\adv_control\utils.py", line 536, in get_control_inject
        return self.get_control_advanced(x_noisy, t, cond, batched_number)
    File "C:\LocalAI\COMFYUI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Advanced-ControlNet\adv_control\control.py", line 32, in get_control_advanced
        return self.sliding_get_control(x_noisy, t, cond, batched_number)
    File "C:\LocalAI\COMFYUI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Advanced-ControlNet\adv_control\control.py", line 78, in sliding_get_control
        control = self.control_model(x=x_noisy.to(dtype), hint=self.cond_hint, timesteps=timestep.float(), context=context.to(dtype), y=y)
    File "C:\LocalAI\COMFYUI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
    File "C:\LocalAI\COMFYUI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
        return forward_call(*args, **kwargs)
    File "C:\LocalAI\COMFYUI\ComfyUI_windows_portable\ComfyUI\comfy\cldm\cldm.py", line 305, in forward
        h = module(h, emb, context)
    File "C:\LocalAI\COMFYUI\ComfyUI_windows_portable\ComfyUI\comfy\ldm\modules\diffusionmodules\openaimodel.py", line 60, in forward
        return forward_timestep_embed(self, *args, **kwargs)
    File "C:\LocalAI\COMFYUI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\sampling.py", line 109, in forward_timestep_embed
        x = layer(x, context, transformer_options)
    File "C:\LocalAI\COMFYUI\ComfyUI_windows_portable\ComfyUI\comfy\ldm\modules\attention.py", line 633, in forward
        x = block(x, context=context[i], transformer_options=transformer_options)
    File "C:\LocalAI\COMFYUI\ComfyUI_windows_portable\ComfyUI\comfy\ldm\modules\attention.py", line 460, in forward
        return checkpoint(self._forward, (x, context, transformer_options), self.parameters(), self.checkpoint)
    File "C:\LocalAI\COMFYUI\ComfyUI_windows_portable\ComfyUI\comfy\ldm\modules\diffusionmodules\util.py", line 191, in checkpoint
        return func(*inputs)
    File "C:\LocalAI\COMFYUI\ComfyUI_windows_portable\ComfyUI\comfy\ldm\modules\attention.py", line 560, in _forward
        n = self.attn2(n, context=context_attn2, value=value_attn2)
    File "C:\LocalAI\COMFYUI\ComfyUI_windows_portable\ComfyUI\comfy\ldm\modules\attention.py", line 404, in forward
        k = self.to_k(context)
    File "C:\LocalAI\COMFYUI\ComfyUI_windows_portable\ComfyUI\comfy\ops.py", line 52, in forward
        return super().forward(*args, **kwargs)
    File "C:\LocalAI\COMFYUI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\linear.py", line 114, in forward
        return F.linear(input, self.weight, self.bias)
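    This is probably not an image-resolution problem: 1232x2048 vs 768x320 is an embedding-width mismatch. 2048 is the SDXL text-conditioning width and 768 is SD 1.5's, so this usually means an SDXL checkpoint (or CLIP) is being fed into the SD 1.5 QRCode ControlNet this workflow uses; that reading is inferred from the shapes, not confirmed by the author. The rule PyTorch is enforcing is just inner-dimension agreement in matrix multiplication:

```python
# Matrix multiply of (m x k) by (k x n) requires the inner dims to match.
def matmul_shape(a, b):
    (m, k1), (k2, n) = a, b
    if k1 != k2:
        raise RuntimeError(
            f"mat1 and mat2 shapes cannot be multiplied ({m}x{k1} and {k2}x{n})")
    return (m, n)

print(matmul_shape((2, 768), (768, 320)))  # (2, 320) -- SD 1.5 widths agree
# matmul_shape((1232, 2048), (768, 320))   # raises: 2048 != 768 (SDXL vs SD 1.5)
```

    Switching back to an SD 1.5 checkpoint, so every model in the chain expects 768-wide conditioning, is the likely fix.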


    Workflows
    SD 1.5
    by ipiv

    Details

    Downloads
    10,100
    Platform
    CivitAI
    Platform Status
    Available
    Created
    4/3/2024
    Updated
    4/30/2026
    Deleted
    -

    Files

    ipivsMorphImg2vid_v204ReferenceImages.zip