Workflow for generating morph-style looping videos.
v3: Hyper-SD implementation, which allows the AnimateDiff v3 motion model to be used with DPM and other samplers. This seems to improve overall quality, color, and animation coherence.
Uses the QRCode ControlNet to guide the animation flow; morphing between the reference images is done via IPAdapter attention masks.
Here are some more motion masks to use with QRCode - kindly provided by @Xenodimensional: https://civarchive.com/posts/2011230
❗If you are getting a "CLIP Vision model not found" error, download the following models, rename them, and place them in the /ComfyUI/models/clip_vision folder:
CLIP-ViT-H-14-laion2B-s32B-b79K.safetensors (download and rename)
CLIP-ViT-bigG-14-laion2B-39B-b160k.safetensors (download and rename)
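As a concrete illustration, here is a minimal Python sketch of that rename step. The download locations and source file names are hypothetical placeholders; only the two target file names come from the note above.

```python
import os
import shutil

# Hypothetical target folder: adjust to your own ComfyUI install location.
CLIP_VISION_DIR = os.path.join("ComfyUI", "models", "clip_vision")

# Map from the file you downloaded (Hugging Face often names it just
# "model.safetensors") to the name the workflow expects. Source paths
# here are made up for illustration.
RENAMES = {
    os.path.join("downloads", "clip_h_model.safetensors"): "CLIP-ViT-H-14-laion2B-s32B-b79K.safetensors",
    os.path.join("downloads", "clip_bigG_model.safetensors"): "CLIP-ViT-bigG-14-laion2B-39B-b160k.safetensors",
}

def install_clip_vision_models(renames=RENAMES, target_dir=CLIP_VISION_DIR):
    """Move each downloaded file into target_dir under its expected name."""
    os.makedirs(target_dir, exist_ok=True)
    for src, expected_name in renames.items():
        dst = os.path.join(target_dir, expected_name)
        if os.path.exists(dst):
            continue  # already installed, skip
        shutil.move(src, dst)
        print(f"installed {dst}")
```

Running it once moves the downloaded files into place; re-running is a no-op for files that already exist.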
❗If you are getting an "IPAdapter model not found" error:
You are likely missing the IPAdapter model. In the ComfyUI Manager menu, click Install Models, search for ip-adapter_sd15_vit-G.safetensors, and click Install.
If installing through the Manager doesn't work for some reason, you can download the model from Hugging Face and drop it into the \ComfyUI\models\ipadapter folder.
The ViT-G model is what I used in the workflow, but I suggest you try out other IPAdapter models as well.
Comments (312)
Excellent.
Excellent.
I just wrote a tutorial on ipiv's workflow.
Error occurred when executing VHS_LoadVideoPath: module 'cv2' has no attribute 'VideoCapture'
Error occurred when executing IPAdapterUnifiedLoader:
module 'comfy.model_base' has no attribute 'SDXL_instructpix2pix'
same error
@xushuai2018820 Solved it: delete that node and add an identical one again.
Error occurred when executing VHS_LoadVideoPath: module 'cv2' has no attribute 'VideoCapture'
FYI: if you use your A1111 directory for your models, the CLIP Vision and ControlNet models should not be placed there, but in the original ComfyUI models folders.
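For context, ComfyUI maps an A1111 install through extra_model_paths.yaml. Below is a hypothetical sketch of that file; the keys follow the stock example template, so verify against your own copy before relying on it.

```yaml
# Hypothetical extra_model_paths.yaml sketch, based on the stock example file.
# clip_vision and controlnet are deliberately not mapped here, matching the
# advice above to keep those in the original ComfyUI models folders.
a111:
    base_path: C:/path/to/stable-diffusion-webui/
    checkpoints: models/Stable-diffusion
    vae: models/VAE
    loras: models/Lora
```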
SyntaxError: Unexpected non-whitespace character after JSON at position 4 (line 1 column 5)
you probably have something that's unconnected? just remove that and should be fine
i have that error too and cant figure it out
same error someone help
What is your GPU configuration? Can an A10 run it?
I have been trying all day and I get this error:
Error occurred when executing CheckpointLoaderSimple: 'model.diffusion_model.input_blocks.0.0.weight'
File "C:\AI\Comfy UI\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute
File "C:\AI\Comfy UI\ComfyUI_windows_portable\ComfyUI\execution.py", line 81, in get_output_data
File "C:\AI\Comfy UI\ComfyUI_windows_portable\ComfyUI\custom_nodes\deforum-comfy-nodes\deforum_nodes\exec_hijack.py", line 55, in map_node_over_list
File "C:\AI\Comfy UI\ComfyUI_windows_portable\ComfyUI\execution.py", line 74, in map_node_over_list
File "C:\AI\Comfy UI\ComfyUI_windows_portable\ComfyUI\nodes.py", line 516, in load_checkpoint
File "C:\AI\Comfy UI\ComfyUI_windows_portable\ComfyUI\comfy\sd.py", line 458, in load_checkpoint_guess_config
File "C:\AI\Comfy UI\ComfyUI_windows_portable\ComfyUI\comfy\model_detection.py", line 194, in model_config_from_unet
File "C:\AI\Comfy UI\ComfyUI_windows_portable\ComfyUI\comfy\model_detection.py", line 78, in detect_unet_config
model_channels = state_dict['{}input_blocks.0.0.weight'.format(key_prefix)].shape[0]
Any clues how to fix this? Thank you!
Is there way to use sdxl with it?
The QRCode ControlNet is not doing its job. What's wrong? The workflow runs, but when morphing to another image I don't see its effect.
I ran into this problem as well. I got it working by loading a workflow from the previous day. Try updating everything and loading the default workflow.
@ZacharyTiger i will try again, thanks
How do some of you get such crisp picture quality? Is it a higher resolution of the reference images, or is it the Image Sharpen node?
I had a CUDA error when trying to enable Image Sharpen; not sure if I need more VRAM. If so, cry in 12 GB VRAM.
Might be your settings. Check the CRF setting under Video Combine. The default is something crazy like 19. If you set the CRF to 1, the output won't be so compressed and the visuals will be better. The files will be about 10x larger.
Do you refer to resolution, details, color or consistency?
@ZacharyTiger I'll try that out! I indeed didn't touch the CRF of the default.
@efastcurex My results felt washed out and blurry. Lost a lot of details from the reference images. Your results are extremely well done!
@Catz model = color, scale multiple = sharpness, motion scale / KSampler steps = details.
12 GB VRAM too; I also cannot open the Sharpen node.
@ZacharyTiger I didn't see much difference with the CRF. After an hour of messing around, I finally figured it out. The IPAdapter Unified Loader uses VIT-G medium strength by default, which doesn't give consistent results from the reference images. It does generate interesting alternatives, but it's blurry.
The PLUS version did use the reference images, but I had to cycle through different checkpoints to reduce artifacts. Not sure why PLUS isn't the default.
@Catz Which models did you find worked best? I'm getting more consistent results with VIT-G PLUS, but they still end up kind of blurry and crunchy looking.
Can this be done in WEBUI FORGE? All the tutorials I see are in ComfyUI :(
I hope so too. Bump.
No, this is a ComfyUI workflow.
@AIDigitalMediaAgency Is there or will there be a version available for Forge? Would be epic.
@efastcurex Thankyou <3 Legend.
@jaffaparty420 welcome, by the way, it's not that convenient to use in auto1111
@efastcurex True :P
Anyone have a simple tutorial on how to do this?
1. Install ComfyUI.
2. Install ComfyUI Manager.
3. Download the workflow JSON file and load it.
4. For any missing nodes (red-highlighted ones), use ComfyUI Manager - Install Missing Custom Nodes.
5. In the workflow there are notes and web addresses for the model files you will need to download and put in the correct models folders.
6. You will need to download the motion files as well.
7. Run.
8. As long as you use high-quality images the outputs are great; if you don't have self-generated images, Pinterest is a good source.
A 12 GB GPU works fine.
Enjoy!
@ramboe thank you so much!!
Here is a guide on how to use it. This is part 3 of a guide I made on making my music videos; this part focuses only on this ipiv morphing tool.
Great Job! Thanks a lot for sharing.
Using portraits results in completely different people in the video. Any suggestions to increase the similarity between the initial image and the video?
Under the IP Adapter Unified Loader section try switching the preset from medium strength to the PLUS high strength.
@ZacharyTiger Thanks! It's definitely better now
Hi there, could you please show me how to run this on my computer? I am relatively new to coding
@violathief Download GenVista Motion from App Store and use it. easier, cheaper and faster lol
Error occurred when executing KSampler: 'NoneType' object has no attribute 'size'
How do I solve this?
Is it possible to increase the video size to 16 seconds?
Use the 8-pics version, but it runs slowly; using it feels like handling a machine gun, very difficult to control. There's a link in the Discussion section.
I can't find the link, can you reply please? @efastcurex
@reisalison366 https://fastupload.io/5be2235dd4be396c here it is, but you need to double the mask length to use it.
@efastcurex This workflow is insane. I'm a noob in ComfyUI but I love this. Really a game changer!!! Thank you man.
How can I fix this:
Error occurred when executing CheckpointLoaderSimple: 'model.diffusion_model.input_blocks.0.0.weight'
Any idea why this happens? I have made sure all the adapters are installed; not sure what's up here.
Thanks if you know.
Requested to load SD1ClipModel
Loading 1 new model
!!! Exception during processing!!! invalid syntax (<unknown>, line 0)
Traceback (most recent call last):
File "F:\ComfyUI\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
I WISH TO R T F M
Hi I've got this error. not sure where to fix it
Error occurred when executing IPAdapterAdvanced: Error(s) in loading state_dict for Resampler: size mismatch for proj_in.weight: copying a param with shape torch.Size([768, 1280]) from checkpoint, the shape in current model is torch.Size([768, 1664]).
File "D:\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute
File "D:\ComfyUI_windows_portable\ComfyUI\execution.py", line 81, in get_output_data
File "D:\ComfyUI_windows_portable\ComfyUI\execution.py", line 74, in map_node_over_list
File "D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_IPAdapter_plus\IPAdapterPlus.py", line 758, in apply_ipadapter
File "D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_IPAdapter_plus\IPAdapterPlus.py", line 386, in ipadapter_execute
File "D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_IPAdapter_plus\IPAdapterPlus.py", line 70, in init
self.image_proj_model.load_state_dict(ipadapter_model["image_proj"])
File "D:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 2189, in load_state_dict
raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
thank you in advance
I fixed it. I named the file in clipvision wrong
Make sure you properly renamed the 2 downloaded CLIP Vision files...
We need a tutorial :)
It already exists :D https://www.youtube.com/watch?v=Ivn65-63hPU
@vulp_art Thank you so much!
The output image is totally different from the input image. When I use SVD, the output image remains unchanged but with animation.
Is there something I can do to keep the output the same as the input images, but with motion?
I'm experiencing the IPAdapter model not found issue, but the provided solution is still not working for me. Any suggestions?
I had this issue with the PLUS model and was able to resolve by uninstalling and reinstalling IPAdapter.
having the same issue!
Hi, I don't know if you have already solved it or not, but I had the same problem. I watched the video below and solved it: I simply created the folder "ipadapter" and downloaded all the IPAdapters from the video; the link to download them is in the description. After that it worked! Hope it helps.
Link video: https://www.youtube.com/watch?v=n6tYqqV0q7I
@alespadadc Great, thank you!
Unfortunately, this workflow breaks ComfyUI's Save as API. When re-importing the JSON, the nodes' connections are completely broken.
Any tips to get SDXL to work, or to reduce the characteristic dirty black/blue/green SD1.5 look? No matter what, everything becomes grimy cyberpunk.
I got the same issue. I imported SDXL images, but the output animation looks nothing like the original images. Would love it if the final output matched the style of the original input.
Any way to get SDXL working? I find the output animation looks nothing like the SDXL images I started with in regards to look and style.
Great workflow!
I can't ever get the "Sharpen (mtb)" node to work. I'm running a 3070 and will run out of memory unless I bypass that node.
Is there a similar sharpening technique or workaround I can do to make it work?
Super!
please someone help me fix this :(((
Error occurred when executing ADE_LoadAnimateDiffModel:
LoadAnimateDiffModelNode.load_motion_model() missing 1 required positional argument: 'model_name'
File "C:\comfu UI\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\comfu UI\ComfyUI_windows_portable\ComfyUI\execution.py", line 81, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\comfu UI\ComfyUI_windows_portable\ComfyUI\execution.py", line 69, in map_node_over_list
results.append(getattr(obj, func)())
^^^^^^^^^^^^^^^^^^^^
+1
Open the launcher and change the kernel version to the old version.
@hoskenerisln50735 thank you
@hoskenerisln50735 how?
@hoskenerisln50735 How can I do this? Help please :(
Awesome workflow!
Those last two sections, upscale and interpolate, are ROUGH though on any computer. Sucks for cloud services, where it can crash the box.
I use Topaz AI to upscale it
this is FANTASTIC.
That said, I'm wondering how to smooth out the "seam" where things loop, as the first guidance image just pops in abruptly (compared to the other three images, which crossfade more).
Wondering if there is a way to frame-blend just on the looping part?
Great workflow.
Really appreciate the notes with basically everything to get us started.
I have one problem tho.
Is it normal that the "Color Correct (mtb)" node maxed out my RAM with my VRAM being at about 40-45%?
It makes the workflow take twice as long and feels the same as when I tried to run SDXL with refiner on my laptop with 8GB of VRAM a while ago.
I have 4080S and 32GB of RAM.
I have an RTX 4080 16GB and it collapses on the color-correct part.
@johndoeshit Mine collapses on a 3050 with 8 GB VRAM.
Who can help me?
When loading the graph, the following node types were not found:
IPAdapterUnifiedLoader
IPAdapterAdvanced
Nodes that have failed to load will show as red on the graph.
Error occurred when executing CheckpointLoaderSimple: 'model.diffusion_model.input_blocks.0.0.weight'
File "D:\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\execution.py", line 152, in recursive_execute
File "D:\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\execution.py", line 82, in get_output_data
File "D:\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\execution.py", line 75, in map_node_over_list
File "D:\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\nodes.py", line 552, in load_checkpoint
File "D:\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\comfy\sd.py", line 448, in load_checkpoint_guess_config
File "D:\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\comfy\model_detection.py", line 163, in model_config_from_unet
File "D:\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\comfy\model_detection.py", line 49, in detect_unet_config
model_channels = state_dict['{}input_blocks.0.0.weight'.format(key_prefix)].shape[0]
Update your ComfyUI_IPAdapter_plus via the Manager.
May I ask what I have to do when I get this error: Error occurred when executing VRAM_Debug: VRAM_Debug.VRAMdebug() got an unexpected keyword argument 'image_passthrough'
You have to delete the VRAM_Debug node and add it again, reconnect the image pass-through etc., and then it will work.
[tcp @ 000001d4f8e81280] Connection to tcp://i.imgur.com:443 failed: Error number -138 occurred
!!! Exception during processing!!! https://i.imgur.com/FZojh3v.mp4 could not be loaded with cv.
Traceback (most recent call last):
File "D:\ComfyUI\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\ComfyUI\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\execution.py", line 81, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\ComfyUI\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\execution.py", line 74, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\ComfyUI\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-VideoHelperSuite\videohelpersuite\load_video_nodes.py", line 281, in load_video
return load_video_cv(**kwargs)
^^^^^^^^^^^^^^^^^^^^^^^
File "D:\ComfyUI\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-VideoHelperSuite\videohelpersuite\load_video_nodes.py", line 131, in load_video_cv
(width, height, fps, duration, total_frames, target_frame_time) = next(gen)
^^^^^^^^^
File "D:\ComfyUI\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-VideoHelperSuite\videohelpersuite\load_video_nodes.py", line 51, in cv_frame_generator
raise ValueError(f"{video} could not be loaded with cv.")
ValueError: https://i.imgur.com/FZojh3v.mp4 could not be loaded with cv.
I have this error! Can someone help me? Thx
I have the same problem. Did you fix it? Thx
@Ronon_Chen You need to download the video to a local directory.
How can we add ADetailer to this workflow? Sometimes it produces a weird look in the eyes.
Use the node named "Detailer for AnimateDiff"; it's the same feature as ADetailer in the WebUI.
I finished a full video using this; still one of my favorite video tools!!
I also used Hedra quite a bit for the lipsynch
The Raven:Edgar Allan Poe AI Song with Suno HEDRA lip synch and Other Amazing AI Video tools! (youtube.com)
***** Help! *****
I had a Black image error after upscale.
How can I fix it?
Wait, this is the new version? I just discovered the SDXL txt2img2vid version and thought that was the new one until I saw no reviews for a long time! 😆
Please someone help me!! Getting the following error and can't fix it:
Error occurred when executing IPAdapterAdvanced: 'NoneType' object is not subscriptable
File "D:\ConfyUI\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute
File "D:\ConfyUI\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\execution.py", line 81, in get_output_data
File "D:\ConfyUI\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\execution.py", line 74, in map_node_over_list
File "D:\ConfyUI\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_IPAdapter_plus\IPAdapterPlus.py", line 769, in apply_ipadapter
File "D:\ConfyUI\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_IPAdapter_plus\IPAdapterPlus.py", line 228, in ipadapter_execute
is_faceid = is_portrait or "0.to_q_lora.down.weight" in ipadapter["ip_adapter"] or is_portrait_unnorm
HELP!
I still cannot resolve this issue. Both models are downloaded and ComfyUI sees them, but this is what I get:
---------------------------------------------------------
Error occurred when executing IPAdapterUnifiedLoader: IPAdapter model not found.
File "C:\Ai\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute
File "C:\Ai\ComfyUI_windows_portable\ComfyUI\execution.py", line 81, in get_output_data
File "C:\Ai\ComfyUI_windows_portable\ComfyUI\execution.py", line 74, in map_node_over_list
File "C:\Ai\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_IPAdapter_plus\IPAdapterPlus.py", line 525, in load_models
raise Exception("IPAdapter model not found.")
Did you solve this error? If you solved it, could you help me please? :(
What worked for me is adding the line below to the ComfyUI/folder_paths.py file where the "folder_names_and_paths" dict is created:
folder_names_and_paths["ipadapter"] = ([os.path.join(models_dir, "ipadapter")], supported_pt_extensions)
I think the issue is that the ipadapter path is not getting picked up correctly from the extra_model_paths.yaml file.
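To show where that line sits, here is a standalone sketch mimicking the relevant part of ComfyUI's folder_paths.py. The surrounding names are simplified stand-ins, not the real file's full contents.

```python
import os

# Simplified stand-ins for the names defined near the top of ComfyUI/folder_paths.py.
models_dir = os.path.join("ComfyUI", "models")
supported_pt_extensions = {".ckpt", ".pt", ".bin", ".pth", ".safetensors"}

folder_names_and_paths = {
    # ... existing entries such as "checkpoints", "loras", "clip_vision" ...
    "clip_vision": ([os.path.join(models_dir, "clip_vision")], supported_pt_extensions),
}

# The fix from the comment: register an "ipadapter" folder so the IPAdapter
# loader nodes can find models placed in ComfyUI/models/ipadapter.
folder_names_and_paths["ipadapter"] = ([os.path.join(models_dir, "ipadapter")], supported_pt_extensions)
```

After adding the real line and restarting ComfyUI, the loader should scan ComfyUI/models/ipadapter for .safetensors files.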
It wasn't easy, but I managed to make it work, though only by skipping the final upscale and frame-interpolation part. Excellent results for making psychedelic videos, though! I'm just wondering if there is a way to reduce the denoise, in order to produce images more faithful to those provided.
How can I upload this to models lab models?
I have added the path to : i.imgur.com/EHe7cAU.mp4 but it says:
Error occurred when executing VHS_LoadVideoPath: No frames generated
what the hell ?????
You forgot the "https://".
In this workflow, I encounter an error when executing the KSampler node:
!!! Exception during processing!!! Expected query, key, and value to have the same dtype, but got query.dtype: float, key.dtype: struct c10::Half, and value.dtype: struct c10::Half instead.
Please help me resolve this issue.
I also got an error at the sampler. Did you fix it? What was the problem?
Awesome workflow - very smooth results! One question, any advice or guidance on how to produce wide aspect ratio (16:9) videos instead of vertical videos?
Is there any way to increase the duration of each image, or use more than 4 images?
Any idea why the image would be all blown out? Can't figure out what parameter might be off. https://imgur.com/a/NGSIPvu
It seems you are using the wrong sampling method; try sgm_uniform with dpmpp_2m_sde.
You are not using the LCM LoRA and the LCM sampler.
I recently made a playlist on making music videos; part 3 is entirely a guide for the ipiv morph tool here, with a link to this page. Here is the guide.
It focuses on just the basics and how to change resolution, etc.
2024 Summer Sale_Dosa Video_30sec_LAO_cut_ESRB (youtube.com)
Hi, if I increase the number of input images from the default 4 to 8 or 12, is there anything in the flow that I need to change? Right now, after the 4 images it just goes blurry.
Anyone know what to do about this?
AttributeError: 'NoneType' object has no attribute 'lower'
Prompt executed in 7.71 seconds
got prompt
!!! Exception during processing!!! 'NoneType' object has no attribute 'lower'
Traceback (most recent call last):
File "H:\ComfyUI_windows_portable\ComfyUI\execution.py", line 152, in recursive_execute
Traceback (most recent call last):
File "asyncio\events.py", line 84, in _run
File "asyncio\proactor_events.py", line 165, in callconnection_lost
ConnectionResetError: [WinError 10054] An existing connection was forcibly closed by the remote host
!!! Exception during processing!!! VRAM_Debug.VRAMdebug() got an unexpected keyword argument 'image_passthrough'
Traceback (most recent call last):
File "D:\ComfyUI_windows_portable\ComfyUI\execution.py", line 152, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\ComfyUI_windows_portable\ComfyUI\execution.py", line 82, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\ComfyUI_windows_portable\ComfyUI\execution.py", line 75, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: VRAM_Debug.VRAMdebug() got an unexpected keyword argument 'image_passthrough'
Running ComfyUI in a conda env. I've installed libx264 and ffmpeg in that venv, as well as system-wide of course, but I'm getting the error "Encoder not found"; WebM works. I was getting grainy output until I realized I didn't need an LCM checkpoint if the LCM weights are in the workflow already. Genius stuff though, thanks!
Running this in ComfyUI and getting:
RuntimeError: Given groups=1, weight of size [512, 16, 3, 3], expected input[2, 4, 64, 36] to have 16 channels, but got 4 channels instead
Anyone know how to solve?
I am getting these; what do I need to do?
When loading the graph, the following node types were not found: ADE_ApplyAnimateDiffModelSimple, VHS_SplitImages, SimpleMath+, ControlNetLoaderAdvanced, ADE_MultivalDynamic, VHS_VideoCombine, BatchCount+, ADE_UseEvolvedSampling, FILM VFI, RIFE VFI, Color Correct (mtb), VHS_LoadVideoPath, IPAdapterUnifiedLoader, ACN_AdvancedControlNetApply, ADE_LoadAnimateDiffModel, ADE_LoopedUniformContextOptions, IPAdapterAdvanced, CreateFadeMaskAdvanced
Hello, how do I rename the clip_vision files in ThinkDiffusion?
I did everything in the notes but still get the IPAdapter model not found message. I have ip-adapter_sd15_vit-G installed so I don't understand the problem, does anyone know what is causing this?
Hi, go to Manager -> Install Models -> search "ipadapter" -> install the first 5-6 ComfyUI IPAdapters that show in the results; it will fix it. Also install one of the models like the ViT one you mentioned and select it as your model.
@magic_hand Hi, thank you for the tip, I tried it and I still get the same error of:
“Error occurred when executing IPAdapterUnifiedLoader:
IPAdapter model not found”
Try selecting PLUS (high strength), which works for me.
@magic_hand Despite trying all of the options and having all of them downloaded, it gives me an error for each of them. What checkpoint are you using? Maybe this is part of my problem.
@rumiwal control_v1p_sd15_qrcode_monster.safetensors
For those who want the solution, it's very simple: if you use Stability Matrix, just copy the models to the default Matrix models folder, ".\Data\Models\IpAdapter". That's it, everything will work correctly. \o
I tried this workflow but I can't get anything good at all...
Hi, this workflow is awesome, but could you tell me which parameters to adjust to keep the animation's visual style as close as possible to the 4 original reference images? Thanks.
@darajan this is not possible.
My final output looks absolutely nothing like any of my 4 images...what do I adjust to make them identical?
It is going to morph from one image to the next, so each frame will look like a mix of 2 images and not exactly like either of them. You can play with the seed and get different results, but it is always going to be a mix of 2 images, the one before and the one after.
@prophetofthesingularity5 Thanks for the response. I understand how the morphing and the transition work; I am talking about the keyframed images. I generated images in auto1111 and I want to use those images as the keyframes. I use the same checkpoints, LoRAs etc., and the output is a whole different style, person, etc. I guess I am just looking for some way to take MY images and morph them, not generate a whole different set of images. Does that make more sense?
@BoomerMcBlast Ok, yes. I do not think this is a good workflow for that, but maybe it could work if you tweaked settings like strength and just used a solid white or black movie. I do not know how; maybe someone else will.
@BoomerMcBlast set that morph style to ease in-out and decrease strength
@BoomerMcBlast You can't. I was facing this when importing external SDXL images. Unfortunately, this workflow will always generate a completely different-looking export. I think if we could somehow get an SDXL video export workflow, we could change the final video, but until then we're locked to this fake AI computery look, unfortunately.
PS. To save you some time, I posted on multiple forums here and on reddit, and reached out to multiple users. None of them were able to figure out how to fix this. It's just not possible.
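The "mix of two images" behaviour described in this thread can be pictured with a toy crossfade schedule. This is an illustrative sketch, not the workflow's actual mask logic: each frame gets a linear weight between its two neighbouring reference images, wrapping around so the loop is seamless.

```python
def crossfade_weights(num_images: int, frames_per_image: int):
    """Toy schedule: for each frame, return (index_a, index_b, weight_b).

    Frame f sits between reference image a and the next image b; weight_b
    ramps linearly from 0 to 1, so every frame is a blend of exactly two
    neighbouring images (wrapping around for a seamless loop).
    """
    total = num_images * frames_per_image
    schedule = []
    for f in range(total):
        a = f // frames_per_image          # current reference image
        b = (a + 1) % num_images           # next image, wrapping for the loop
        w = (f % frames_per_image) / frames_per_image
        schedule.append((a, b, w))
    return schedule
```

With 4 images, no frame is ever 100% a single reference, which is why the output never exactly matches any one input.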
For some strange reason I'm getting an error no one else does, with the SimpleMath+ node:
Failed to validate prompt for output 53:
* SimpleMath+ 461:
- Return type mismatch between linked nodes: a, INT != INT,FLOAT
- Return type mismatch between linked nodes: b, INT != INT,FLOAT
I don't suppose anyone will notice and maybe give me a hint?
I get the same after updating ComfyUI to the newest version. Can't find the solution though.
After the update, the value field in the "Simple Math" node is empty. Just put in a/b as the value to get it working again :-)
@Aquanoid hey man, thanks, but this shouldn't be the problem, I was following a tutorial and I put the value a/b in there, so it must be something else...
@Aquanoid However, today I did Update All in comfy and suddenly SimpleMath no longer throws error, I now get IPAdapter model not found! That's a progress!
After the latest ComfyUI update, the value field in the "Simple Math" node is empty. Just put in a/b as the value to get it working again :-)
Even with the right value "a/b", this node doesn't work anymore with the new comfyui update.
@EXO - This was my only error after I updated ComfyUI + all the nodes in the ComfyUI manager menu
@Aquanoid I did put just a/b and it still shows the same error, any other fix or way around it?
@habibigonemad For now the fix that worked for me is to replace it with another node named "Math Expression" from the custom node pack ComfyUI-Custom-Scripts
@EXO @Aquanoid Neither node did it for me; the issue is with the new Comfy update.
To get through it I had to replace SimpleMath with the "Int Math" node
and update all the AnimateDiff Evolved nodes from Kosinkadink.
Hope this helps anyone who's struggling :)
@EXO It worked for me! Thank you very much
thanks for the solve @afterlifenirvana939
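For context on these type-mismatch errors: all of the nodes mentioned (SimpleMath+, Math Expression, Int Math) do the same basic job of evaluating a small formula string such as "a/b" against the values wired into their inputs, so a blank value field after an update leaves nothing to evaluate. A minimal sketch of the idea, with a hypothetical `eval_simple_math` helper that is not the actual node code:

```python
import ast
import operator

# Supported binary operators for a tiny, safe expression evaluator
# (no eval(), only arithmetic on named inputs and numeric constants).
_OPS = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
    ast.Div: operator.truediv,
    ast.FloorDiv: operator.floordiv,
    ast.Mod: operator.mod,
}

def eval_simple_math(expr, **inputs):
    """Evaluate a formula like "a/b" over named inputs, accepting INT or FLOAT."""
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.Name) and node.id in inputs:
            return inputs[node.id]
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        raise ValueError("unsupported expression element: %s" % ast.dump(node))
    return walk(ast.parse(expr, mode="eval"))

# e.g. total frames divided by batch size, as the workflow's a/b node computes:
print(eval_simple_math("a/b", a=96, b=12))  # 8.0
```

Since both inputs may arrive as INT or FLOAT, the division returns a float; that is exactly the INT/FLOAT flexibility the updated SimpleMath+ node's type validation was tripping over.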
After running "git checkout v0.0.8" to make ComfyUI compatible with FLUX models, I'm having issues with the upscale models. The preview looks alright, but the output of the second KSampler comes out extremely noisy and flickery. Also, suddenly pushing the QR_monster strength breaks the preview. Has anyone found the same issues?
Does this workflow require at least a 4090? I have a 4070 Ti Super and it's not working. Can someone help?
I have the same card and it is working, what error are you seeing?
🚨🚨🚨🚨🚨Workflow not working after Comfy update🚨🚨🚨🚨🚨🚨
I tried everything to get it working again but nothing.. I hope they fix it soon
I use 2 different ComfyUI installs. The "old" one uses Python 3.11 and PyTorch 2.3; the new ComfyUI uses PyTorch 2.4!
@PunkDali @Cavernust @AlbertoSono Wrote my fix in a comment
Does anyone have any other motion videos like https://i.imgur.com/FZojh3v.mp4? Please share, and let me know where I can find more.
I did all the steps as shown in the video, but when I click "Queue Prompt" it starts running in the terminal (I am using a Mac M1), and at the end the message I attached here appears and it just stays at 0%, even though I left the upscale nodes deactivated as instructed in the video. Can someone help me solve this issue? In the terminal it only shows 0%, as in the image. Thank you in advance!
KSampler Error - module 'torch' has no attribute 'float8_e5m2'
I've gone through and checked the workflow to pair up the missing models I had. Those errors were fixed, but I keep getting the error listed above. I tried swapping the KSampler for KSamplerAdvanced and still had the same error. I also ran a few pip install commands against that directory that were suggested online; none of those fixed the error.
If any of you are perhaps more familiar with ComfyUI and can suggest a solution to try for this specific error, please let me know. Thank you.
Excited to get this working so I can try it out.
# ComfyUI Error Report
## Error Details
- Node Type: KSampler
- Exception Type: AttributeError
- Exception Message: module 'torch' has no attribute 'float8_e5m2'
## Stack Trace
```
File "X:\AI\comfyUI\ComfyUI_windows_portable\ComfyUI\execution.py", line 317, in execute
output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
File "X:\AI\comfyUI\ComfyUI_windows_portable\ComfyUI\execution.py", line 192, in get_output_data
return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
File "X:\AI\comfyUI\ComfyUI_windows_portable\ComfyUI\execution.py", line 169, in _map_node_over_list
process_inputs(input_dict, i)
File "X:\AI\comfyUI\ComfyUI_windows_portable\ComfyUI\execution.py", line 158, in process_inputs
results.append(getattr(obj, func)(**inputs))
File "X:\AI\comfyUI\ComfyUI_windows_portable\ComfyUI\nodes.py", line 1429, in sample
return common_ksampler(model, seed, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, denoise=denoise)
File "X:\AI\comfyUI\ComfyUI_windows_portable\ComfyUI\nodes.py", line 1396, in common_ksampler
samples = comfy.sample.sample(model, noise, steps, cfg, sampler_name, scheduler, positive, negative, latent_image,
File "X:\AI\comfyUI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Impact-Pack\modules\impact\sample_error_enhancer.py", line 9, in informative_sample
return original_sample(*args, **kwargs) # This code helps interpret error messages that occur within exceptions but does not have any impact on other operations.
File "X:\AI\comfyUI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\sampling.py", line 526, in motion_sample
latents = orig_comfy_sample(model, noise, *args, **kwargs)
File "X:\AI\comfyUI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Advanced-ControlNet\adv_control\sampling.py", line 116, in acn_sample
return orig_comfy_sample(model, *args, **kwargs)
File "X:\AI\comfyUI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Advanced-ControlNet\adv_control\utils.py", line 117, in uncond_multiplier_check_cn_sample
return orig_comfy_sample(model, *args, **kwargs)
File "X:\AI\comfyUI\ComfyUI_windows_portable\ComfyUI\comfy\sample.py", line 43, in sample
samples = sampler.sample(noise, positive, negative, cfg=cfg, latent_image=latent_image, start_step=start_step, last_step=last_step, force_full_denoise=force_full_denoise, denoise_mask=noise_mask, sigmas=sigmas, callback=callback, disable_pbar=disable_pbar, seed=seed)
File "X:\AI\comfyUI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_smZNodes\smZNodes.py", line 103, in KSampler_sample
return orig_fn(*args, **kwargs)
File "X:\AI\comfyUI\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 829, in sample
return sample(self.model, noise, positive, negative, cfg, self.device, sampler, sigmas, self.model_options, latent_image=latent_image, denoise_mask=denoise_mask, callback=callback, disable_pbar=disable_pbar, seed=seed)
File "X:\AI\comfyUI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_smZNodes\smZNodes.py", line 121, in sample
return orig_fn(*args, **kwargs)
File "X:\AI\comfyUI\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 729, in sample
return cfg_guider.sample(noise, latent_image, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)
File "X:\AI\comfyUI\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 706, in sample
self.inner_model, self.conds, self.loaded_models = comfy.sampler_helpers.prepare_sampling(self.model_patcher, noise.shape, self.conds)
File "X:\AI\comfyUI\ComfyUI_windows_portable\ComfyUI\comfy\sampler_helpers.py", line 66, in prepare_sampling
comfy.model_management.load_models_gpu([model] + models, memory_required=memory_required, minimum_memory_required=minimum_memory_required)
File "X:\AI\comfyUI\ComfyUI_windows_portable\ComfyUI\comfy\model_management.py", line 545, in load_models_gpu
cur_loaded_model = loaded_model.model_load(lowvram_model_memory, force_patch_weights=force_patch_weights)
File "X:\AI\comfyUI\ComfyUI_windows_portable\ComfyUI\comfy\model_management.py", line 327, in model_load
raise e
File "X:\AI\comfyUI\ComfyUI_windows_portable\ComfyUI\comfy\model_management.py", line 323, in model_load
self.real_model = self.model.patch_model(device_to=patch_model_to, lowvram_model_memory=lowvram_model_memory, load_weights=load_weights, force_patch_weights=force_patch_weights)
File "X:\AI\comfyUI\ComfyUI_windows_portable\ComfyUI\comfy\model_patcher.py", line 427, in patch_model
self.load(device_to, lowvram_model_memory=lowvram_model_memory, force_patch_weights=force_patch_weights, full_load=full_load)
File "X:\AI\comfyUI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\model_injection.py", line 818, in load
self._handle_float8_pe_tensors()
File "X:\AI\comfyUI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\model_injection.py", line 845, in _handle_float8_pe_tensors
if comfy.utils.get_attr(self.model, key).dtype not in [torch.float8_e5m2, torch.float8_e4m3fn]:
```
## System Information
- ComfyUI Version: v0.2.1-5-g5cbaa9e
- Arguments: ComfyUI\main.py --windows-standalone-build
- OS: nt
- Python Version: 3.10.9 (tags/v3.10.9:1dd9be6, Dec 6 2022, 20:01:21) [MSC v.1934 64 bit (AMD64)]
- Embedded Python: true
- PyTorch Version: 2.0.1+cu118
## Devices
- Name: cuda:0 NVIDIA GeForce RTX 3060 : cudaMallocAsync
- Type: cuda
- VRAM Total: 12884377600
- VRAM Free: 10690410496
- Torch VRAM Total: 1006632960
- Torch VRAM Free: 7518208
## Logs
```
2024-09-05 20:44:56,080 - root - INFO - Total VRAM 12288 MB, total RAM 31894 MB
2024-09-05 20:44:56,080 - root - INFO - pytorch version: 2.0.1+cu118
2024-09-05 20:44:56,107 - root - INFO - xformers version: 0.0.20
2024-09-05 20:44:56,108 - root - INFO - Set vram state to: NORMAL_VRAM
2024-09-05 20:44:56,108 - root - INFO - Device: cuda:0 NVIDIA GeForce RTX 3060 : cudaMallocAsync
2024-09-05 20:44:56,336 - root - INFO - Using xformers cross attention
2024-09-05 20:44:57,253 - root - INFO - [Prompt Server] web root: X:\AI\comfyUI\ComfyUI_windows_portable\ComfyUI\web
2024-09-05 20:44:57,254 - root - INFO - Adding extra search path checkpoints X:\AI\stable-diffusion-webui\models/Stable-diffusion
2024-09-05 20:44:57,254 - root - INFO - Adding extra search path configs X:\AI\stable-diffusion-webui\models/Stable-diffusion
2024-09-05 20:44:57,254 - root - INFO - Adding extra search path vae X:\AI\stable-diffusion-webui\models/VAE
2024-09-05 20:44:57,254 - root - INFO - Adding extra search path loras X:\AI\stable-diffusion-webui\models/Lora
2024-09-05 20:44:57,254 - root - INFO - Adding extra search path loras X:\AI\stable-diffusion-webui\models/LyCORIS
2024-09-05 20:44:57,254 - root - INFO - Adding extra search path upscale_models X:\AI\stable-diffusion-webui\models/ESRGAN
2024-09-05 20:44:57,254 - root - INFO - Adding extra search path upscale_models X:\AI\stable-diffusion-webui\models/RealESRGAN
2024-09-05 20:44:57,254 - root - INFO - Adding extra search path upscale_models X:\AI\stable-diffusion-webui\models/SwinIR
2024-09-05 20:44:57,255 - root - INFO - Adding extra search path embeddings X:\AI\stable-diffusion-webui\embeddings
2024-09-05 20:44:57,255 - root - INFO - Adding extra search path hypernetworks X:\AI\stable-diffusion-webui\models/hypernetworks
2024-09-05 20:44:57,255 - root - INFO - Adding extra search path controlnet X:\AI\stable-diffusion-webui\extensions\sd-webui-controlnet\models
2024-09-05 20:44:58,761 - root - INFO - Total VRAM 12288 MB, total RAM 31894 MB
2024-09-05 20:44:58,761 - root - INFO - pytorch version: 2.0.1+cu118
2024-09-05 20:44:58,761 - root - INFO - xformers version: 0.0.20
2024-09-05 20:44:58,762 - root - INFO - Set vram state to: NORMAL_VRAM
2024-09-05 20:44:58,762 - root - INFO - Device: cuda:0 NVIDIA GeForce RTX 3060 : cudaMallocAsync
2024-09-05 20:45:02,142 - root - INFO -
Import times for custom nodes:
2024-09-05 20:45:02,142 - root - INFO - 0.0 seconds: X:\AI\comfyUI\ComfyUI_windows_portable\ComfyUI\custom_nodes\websocket_image_save.py
2024-09-05 20:45:02,142 - root - INFO - 0.0 seconds: X:\AI\comfyUI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Yolo-Cropper
2024-09-05 20:45:02,142 - root - INFO - 0.0 seconds: X:\AI\comfyUI\ComfyUI_windows_portable\ComfyUI\custom_nodes\stability-ComfyUI-nodes
2024-09-05 20:45:02,142 - root - INFO - 0.0 seconds: X:\AI\comfyUI\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui-inpaint-nodes
2024-09-05 20:45:02,142 - root - INFO - 0.0 seconds: X:\AI\comfyUI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_IPAdapter_plus
2024-09-05 20:45:02,142 - root - INFO - 0.0 seconds: X:\AI\comfyUI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-IPAnimate
2024-09-05 20:45:02,142 - root - INFO - 0.0 seconds: X:\AI\comfyUI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_UltimateSDUpscale
2024-09-05 20:45:02,142 - root - INFO - 0.0 seconds: X:\AI\comfyUI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Custom-Scripts
2024-09-05 20:45:02,143 - root - INFO - 0.0 seconds: X:\AI\comfyUI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Frame-Interpolation
2024-09-05 20:45:02,143 - root - INFO - 0.0 seconds: X:\AI\comfyUI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Advanced-ControlNet
2024-09-05 20:45:02,143 - root - INFO - 0.0 seconds: X:\AI\comfyUI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-LivePortraitKJ
2024-09-05 20:45:02,143 - root - INFO - 0.0 seconds: X:\AI\comfyUI\ComfyUI_windows_portable\ComfyUI\custom_nodes\rgthree-comfy
2024-09-05 20:45:02,143 - root - INFO - 0.0 seconds: X:\AI\comfyUI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-KJNodes
2024-09-05 20:45:02,143 - root - INFO - 0.0 seconds: X:\AI\comfyUI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_essentials
2024-09-05 20:45:02,143 - root - INFO - 0.0 seconds: X:\AI\comfyUI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-AnimateDiff-Evolved
2024-09-05 20:45:02,143 - root - INFO - 0.0 seconds: X:\AI\comfyUI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Inspire-Pack
2024-09-05 20:45:02,143 - root - INFO - 0.1 seconds: X:\AI\comfyUI\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui_controlnet_aux
2024-09-05 20:45:02,143 - root - INFO - 0.1 seconds: X:\AI\comfyUI\ComfyUI_windows_portable\ComfyUI\custom_nodes\facerestore_cf
2024-09-05 20:45:02,143 - root - INFO - 0.1 seconds: X:\AI\comfyUI\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui-tooling-nodes
2024-09-05 20:45:02,144 - root - INFO - 0.1 seconds: X:\AI\comfyUI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_smZNodes
2024-09-05 20:45:02,144 - root - INFO - 0.1 seconds: X:\AI\comfyUI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-VideoHelperSuite
2024-09-05 20:45:02,144 - root - INFO - 0.2 seconds: X:\AI\comfyUI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Manager
2024-09-05 20:45:02,144 - root - INFO - 0.5 seconds: X:\AI\comfyUI\ComfyUI_windows_portable\ComfyUI\custom_nodes\SeargeSDXL
2024-09-05 20:45:02,144 - root - INFO - 0.5 seconds: X:\AI\comfyUI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Impact-Pack
2024-09-05 20:45:02,144 - root - INFO - 2.2 seconds: X:\AI\comfyUI\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfy_mtb
2024-09-05 20:45:02,144 - root - INFO -
2024-09-05 20:45:02,153 - root - INFO - Starting server
2024-09-05 20:45:02,153 - root - INFO - To see the GUI go to: http://127.0.0.1:8188
2024-09-05 20:45:11,780 - root - INFO - got prompt
2024-09-05 20:45:15,956 - root - INFO - Using xformers attention in VAE
2024-09-05 20:45:15,958 - root - INFO - Using xformers attention in VAE
2024-09-05 20:45:17,945 - root - INFO - model weight dtype torch.float16, manual cast: None
2024-09-05 20:45:17,947 - root - INFO - model_type EPS
2024-09-05 20:45:25,011 - root - INFO - Using xformers attention in VAE
2024-09-05 20:45:25,012 - root - INFO - Using xformers attention in VAE
2024-09-05 20:45:46,132 - root - INFO - Requested to load CLIPVisionModelProjection
2024-09-05 20:45:46,133 - root - INFO - Loading 1 new model
2024-09-05 20:45:46,948 - root - INFO - loaded completely 0.0 3522.953369140625 True
2024-09-05 20:45:48,622 - root - INFO - Requested to load SD1ClipModel
2024-09-05 20:45:48,622 - root - INFO - Loading 1 new model
2024-09-05 20:45:48,676 - root - INFO - loaded completely 0.0 235.84423828125 True
2024-09-05 20:45:52,295 - root - INFO - Requested to load AnimateDiffModel
2024-09-05 20:45:52,295 - root - INFO - Requested to load ControlNet
2024-09-05 20:45:52,295 - root - INFO - Requested to load BaseModel
2024-09-05 20:45:52,295 - root - INFO - Loading 3 new models
2024-09-05 20:45:53,246 - root - INFO - loaded completely 0.0 795.6268310546875 True
2024-09-05 20:45:53,425 - root - ERROR - !!! Exception during processing !!! module 'torch' has no attribute 'float8_e5m2'
2024-09-05 20:45:53,439 - root - ERROR - Traceback (most recent call last):
File "X:\AI\comfyUI\ComfyUI_windows_portable\ComfyUI\execution.py", line 317, in execute
output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
File "X:\AI\comfyUI\ComfyUI_windows_portable\ComfyUI\execution.py", line 192, in get_output_data
return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
File "X:\AI\comfyUI\ComfyUI_windows_portable\ComfyUI\execution.py", line 169, in _map_node_over_list
process_inputs(input_dict, i)
File "X:\AI\comfyUI\ComfyUI_windows_portable\ComfyUI\execution.py", line 158, in process_inputs
results.append(getattr(obj, func)(**inputs))
File "X:\AI\comfyUI\ComfyUI_windows_portable\ComfyUI\nodes.py", line 1429, in sample
return common_ksampler(model, seed, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, denoise=denoise)
File "X:\AI\comfyUI\ComfyUI_windows_portable\ComfyUI\nodes.py", line 1396, in common_ksampler
samples = comfy.sample.sample(model, noise, steps, cfg, sampler_name, scheduler, positive, negative, latent_image,
File "X:\AI\comfyUI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Impact-Pack\modules\impact\sample_error_enhancer.py", line 9, in informative_sample
return original_sample(*args, **kwargs) # This code helps interpret error messages that occur within exceptions but does not have any impact on other operations.
File "X:\AI\comfyUI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\sampling.py", line 526, in motion_sample
latents = orig_comfy_sample(model, noise, *args, **kwargs)
File "X:\AI\comfyUI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Advanced-ControlNet\adv_control\sampling.py", line 116, in acn_sample
return orig_comfy_sample(model, *args, **kwargs)
File "X:\AI\comfyUI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Advanced-ControlNet\adv_control\utils.py", line 117, in uncond_multiplier_check_cn_sample
return orig_comfy_sample(model, *args, **kwargs)
File "X:\AI\comfyUI\ComfyUI_windows_portable\ComfyUI\comfy\sample.py", line 43, in sample
samples = sampler.sample(noise, positive, negative, cfg=cfg, latent_image=latent_image, start_step=start_step, last_step=last_step, force_full_denoise=force_full_denoise, denoise_mask=noise_mask, sigmas=sigmas, callback=callback, disable_pbar=disable_pbar, seed=seed)
File "X:\AI\comfyUI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_smZNodes\smZNodes.py", line 103, in KSampler_sample
return orig_fn(*args, **kwargs)
File "X:\AI\comfyUI\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 829, in sample
return sample(self.model, noise, positive, negative, cfg, self.device, sampler, sigmas, self.model_options, latent_image=latent_image, denoise_mask=denoise_mask, callback=callback, disable_pbar=disable_pbar, seed=seed)
File "X:\AI\comfyUI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_smZNodes\smZNodes.py", line 121, in sample
return orig_fn(*args, **kwargs)
File "X:\AI\comfyUI\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 729, in sample
return cfg_guider.sample(noise, latent_image, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)
File "X:\AI\comfyUI\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 706, in sample
self.inner_model, self.conds, self.loaded_models = comfy.sampler_helpers.prepare_sampling(self.model_patcher, noise.shape, self.conds)
File "X:\AI\comfyUI\ComfyUI_windows_portable\ComfyUI\comfy\sampler_helpers.py", line 66, in prepare_sampling
comfy.model_management.load_models_gpu([model] + models, memory_required=memory_required, minimum_memory_required=minimum_memory_required)
File "X:\AI\comfyUI\ComfyUI_windows_portable\ComfyUI\comfy\model_management.py", line 545, in load_models_gpu
cur_loaded_model = loaded_model.model_load(lowvram_model_memory, force_patch_weights=force_patch_weights)
File "X:\AI\comfyUI\ComfyUI_windows_portable\ComfyUI\comfy\model_management.py", line 327, in model_load
raise e
File "X:\AI\comfyUI\ComfyUI_windows_portable\ComfyUI\comfy\model_management.py", line 323, in model_load
self.real_model = self.model.patch_model(device_to=patch_model_to, lowvram_model_memory=lowvram_model_memory, load_weights=load_weights, force_patch_weights=force_patch_weights)
File "X:\AI\comfyUI\ComfyUI_windows_portable\ComfyUI\comfy\model_patcher.py", line 427, in patch_model
self.load(device_to, lowvram_model_memory=lowvram_model_memory, force_patch_weights=force_patch_weights, full_load=full_load)
File "X:\AI\comfyUI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\model_injection.py", line 818, in load
self._handle_float8_pe_tensors()
File "X:\AI\comfyUI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\model_injection.py", line 845, in _handle_float8_pe_tensors
if comfy.utils.get_attr(self.model, key).dtype not in [torch.float8_e5m2, torch.float8_e4m3fn]:
AttributeError: module 'torch' has no attribute 'float8_e5m2'
2024-09-05 20:45:53,440 - root - INFO - Prompt executed in 41.51 seconds
2024-09-05 21:02:32,803 - root - INFO - got prompt
2024-09-05 21:02:33,037 - root - INFO - Requested to load AnimateDiffModel
2024-09-05 21:02:33,037 - root - INFO - Requested to load ControlNet
2024-09-05 21:02:33,037 - root - INFO - Requested to load BaseModel
2024-09-05 21:02:33,037 - root - INFO - Loading 3 new models
2024-09-05 21:02:33,227 - root - INFO - loaded completely 0.0 795.6268310546875 True
2024-09-05 21:02:33,370 - root - ERROR - !!! Exception during processing !!! module 'torch' has no attribute 'float8_e5m2'
2024-09-05 21:02:33,370 - root - ERROR - Traceback (most recent call last):
File "X:\AI\comfyUI\ComfyUI_windows_portable\ComfyUI\execution.py", line 317, in execute
output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
File "X:\AI\comfyUI\ComfyUI_windows_portable\ComfyUI\execution.py", line 192, in get_output_data
return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
File "X:\AI\comfyUI\ComfyUI_windows_portable\ComfyUI\execution.py", line 169, in _map_node_over_list
process_inputs(input_dict, i)
File "X:\AI\comfyUI\ComfyUI_windows_portable\ComfyUI\execution.py", line 158, in process_inputs
results.append(getattr(obj, func)(**inputs))
File "X:\AI\comfyUI\ComfyUI_windows_portable\ComfyUI\nodes.py", line 1429, in sample
return common_ksampler(model, seed, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, denoise=denoise)
File "X:\AI\comfyUI\ComfyUI_windows_portable\ComfyUI\nodes.py", line 1396, in common_ksampler
samples = comfy.sample.sample(model, noise, steps, cfg, sampler_name, scheduler, positive, negative, latent_image,
File "X:\AI\comfyUI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Impact-Pack\modules\impact\sample_error_enhancer.py", line 9, in informative_sample
return original_sample(*args, **kwargs) # This code helps interpret error messages that occur within exceptions but does not have any impact on other operations.
File "X:\AI\comfyUI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\sampling.py", line 526, in motion_sample
latents = orig_comfy_sample(model, noise, *args, **kwargs)
File "X:\AI\comfyUI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Advanced-ControlNet\adv_control\sampling.py", line 116, in acn_sample
return orig_comfy_sample(model, *args, **kwargs)
File "X:\AI\comfyUI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Advanced-ControlNet\adv_control\utils.py", line 117, in uncond_multiplier_check_cn_sample
return orig_comfy_sample(model, *args, **kwargs)
File "X:\AI\comfyUI\ComfyUI_windows_portable\ComfyUI\comfy\sample.py", line 43, in sample
samples = sampler.sample(noise, positive, negative, cfg=cfg, latent_image=latent_image, start_step=start_step, last_step=last_step, force_full_denoise=force_full_denoise, denoise_mask=noise_mask, sigmas=sigmas, callback=callback, disable_pbar=disable_pbar, seed=seed)
File "X:\AI\comfyUI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_smZNodes\smZNodes.py", line 103, in KSampler_sample
return orig_fn(*args, **kwargs)
File "X:\AI\comfyUI\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 829, in sample
return sample(self.model, noise, positive, negative, cfg, self.device, sampler, sigmas, self.model_options, latent_image=latent_image, denoise_mask=denoise_mask, callback=callback, disable_pbar=disable_pbar, seed=seed)
File "X:\AI\comfyUI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_smZNodes\smZNodes.py", line 121, in sample
return orig_fn(*args, **kwargs)
File "X:\AI\comfyUI\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 729, in sample
return cfg_guider.sample(noise, latent_image, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)
File "X:\AI\comfyUI\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 706, in sample
self.inner_model, self.conds, self.loaded_models = comfy.sampler_helpers.prepare_sampling(self.model_patcher, noise.shape, self.conds)
File "X:\AI\comfyUI\ComfyUI_windows_portable\ComfyUI\comfy\sampler_helpers.py", line 66, in prepare_sampling
comfy.model_management.load_models_gpu([model] + models, memory_required=memory_required, minimum_memory_required=minimum_memory_required)
File "X:\AI\comfyUI\ComfyUI_windows_portable\ComfyUI\comfy\model_management.py", line 545, in load_models_gpu
cur_loaded_model = loaded_model.model_load(lowvram_model_memory, force_patch_weights=force_patch_weights)
File "X:\AI\comfyUI\ComfyUI_windows_portable\ComfyUI\comfy\model_management.py", line 327, in model_load
raise e
File "X:\AI\comfyUI\ComfyUI_windows_portable\ComfyUI\comfy\model_management.py", line 323, in model_load
self.real_model = self.model.patch_model(device_to=patch_model_to, lowvram_model_memory=lowvram_model_memory, load_weights=load_weights, force_patch_weights=force_patch_weights)
File "X:\AI\comfyUI\ComfyUI_windows_portable\ComfyUI\comfy\model_patcher.py", line 427, in patch_model
self.load(device_to, lowvram_model_memory=lowvram_model_memory, force_patch_weights=force_patch_weights, full_load=full_load)
File "X:\AI\comfyUI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\model_injection.py", line 818, in load
self._handle_float8_pe_tensors()
File "X:\AI\comfyUI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\model_injection.py", line 845, in _handle_float8_pe_tensors
if comfy.utils.get_attr(self.model, key).dtype not in [torch.float8_e5m2, torch.float8_e4m3fn]:
AttributeError: module 'torch' has no attribute 'float8_e5m2'
2024-09-05 21:02:33,371 - root - INFO - Prompt executed in 0.41 seconds
```
## Attached Workflow
Please make sure that workflow does not contain any sensitive information such as API keys or passwords.
```
Workflow too large. Please manually upload the workflow from local file system.
```
## Additional Context
(Please add any additional context or steps to reproduce the error here)
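One pointer on the report above: the root cause is visible in the System Information, which lists PyTorch 2.0.1+cu118. The `torch.float8_e5m2` and `torch.float8_e4m3fn` dtypes only exist from PyTorch 2.1 onward, so any code that references them unconditionally raises exactly this AttributeError on a 2.0.x build; the fix is to update the portable build's PyTorch (or update ComfyUI-AnimateDiff-Evolved to a version that guards the check). A version-tolerant guard would look like this sketch, with a hypothetical `available_float8_names` helper, demonstrated here with stand-in modules rather than a real torch install:

```python
import types

def available_float8_names(mod):
    """Return the float8 dtype names that actually exist on the given module."""
    return [n for n in ("float8_e5m2", "float8_e4m3fn") if hasattr(mod, n)]

# Stand-ins: a PyTorch 2.0.x-like module exposes neither float8 dtype,
# while a 2.1+-like module exposes both.
old_torch = types.SimpleNamespace()
new_torch = types.SimpleNamespace(float8_e5m2=object(), float8_e4m3fn=object())

print(available_float8_names(old_torch))  # []
print(available_float8_names(new_torch))  # ['float8_e5m2', 'float8_e4m3fn']

# On a real install you would pass torch itself and build the dtype list from
# whatever names are present, instead of touching torch.float8_e5m2 directly:
#   float8_dtypes = [getattr(torch, n) for n in available_float8_names(torch)]
```

With a guard like this, the dtype comparison in `_handle_float8_pe_tensors` would simply see an empty list on old PyTorch instead of crashing; upgrading PyTorch remains the cleaner fix.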
@Rasali I wrote a fix in a comment, try that out
@Catz I believe this is a different problem.
@Rasali I get this:
# ComfyUI Error Report
## Error Details
- Node Type: KSampler
- Exception Type: AttributeError
- Exception Message: module 'torch' has no attribute 'float8_e5m2'
## Stack Trace
```
File "E:\ComfyUI_windows_portable\ComfyUI\execution.py", line 323, in execute
output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
File "E:\ComfyUI_windows_portable\ComfyUI\execution.py", line 198, in get_output_data
return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
File "E:\ComfyUI_windows_portable\ComfyUI\execution.py", line 169, in _map_node_over_list
process_inputs(input_dict, i)
File "E:\ComfyUI_windows_portable\ComfyUI\execution.py", line 158, in process_inputs
results.append(getattr(obj, func)(**inputs))
File "E:\ComfyUI_windows_portable\ComfyUI\nodes.py", line 1434, in sample
return common_ksampler(model, seed, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, denoise=denoise)
File "E:\ComfyUI_windows_portable\ComfyUI\nodes.py", line 1401, in common_ksampler
samples = comfy.sample.sample(model, noise, steps, cfg, sampler_name, scheduler, positive, negative, latent_image,
File "E:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Impact-Pack\modules\impact\sample_error_enhancer.py", line 9, in informative_sample
return original_sample(*args, **kwargs) # This code helps interpret error messages that occur within exceptions but does not have any impact on other operations.
File "E:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\sampling.py", line 526, in motion_sample
latents = orig_comfy_sample(model, noise, *args, **kwargs)
File "E:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Advanced-ControlNet\adv_control\sampling.py", line 116, in acn_sample
return orig_comfy_sample(model, *args, **kwargs)
File "E:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Advanced-ControlNet\adv_control\utils.py", line 117, in uncond_multiplier_check_cn_sample
return orig_comfy_sample(model, *args, **kwargs)
File "E:\ComfyUI_windows_portable\ComfyUI\comfy\sample.py", line 43, in sample
samples = sampler.sample(noise, positive, negative, cfg=cfg, latent_image=latent_image, start_step=start_step, last_step=last_step, force_full_denoise=force_full_denoise, denoise_mask=noise_mask, sigmas=sigmas, callback=callback, disable_pbar=disable_pbar, seed=seed)
File "E:\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 829, in sample
return sample(self.model, noise, positive, negative, cfg, self.device, sampler, sigmas, self.model_options, latent_image=latent_image, denoise_mask=denoise_mask, callback=callback, disable_pbar=disable_pbar, seed=seed)
File "E:\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 729, in sample
return cfg_guider.sample(noise, latent_image, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)
File "E:\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 706, in sample
self.inner_model, self.conds, self.loaded_models = comfy.sampler_helpers.prepare_sampling(self.model_patcher, noise.shape, self.conds)
File "E:\ComfyUI_windows_portable\ComfyUI\comfy\sampler_helpers.py", line 66, in prepare_sampling
comfy.model_management.load_models_gpu([model] + models, memory_required=memory_required, minimum_memory_required=minimum_memory_required)
File "E:\ComfyUI_windows_portable\ComfyUI\comfy\model_management.py", line 545, in load_models_gpu
cur_loaded_model = loaded_model.model_load(lowvram_model_memory, force_patch_weights=force_patch_weights)
File "E:\ComfyUI_windows_portable\ComfyUI\comfy\model_management.py", line 327, in model_load
raise e
File "E:\ComfyUI_windows_portable\ComfyUI\comfy\model_management.py", line 323, in model_load
self.real_model = self.model.patch_model(device_to=patch_model_to, lowvram_model_memory=lowvram_model_memory, load_weights=load_weights, force_patch_weights=force_patch_weights)
File "E:\ComfyUI_windows_portable\ComfyUI\comfy\model_patcher.py", line 431, in patch_model
self.load(device_to, lowvram_model_memory=lowvram_model_memory, force_patch_weights=force_patch_weights, full_load=full_load)
File "E:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\model_injection.py", line 818, in load
self._handle_float8_pe_tensors()
File "E:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\model_injection.py", line 845, in _handle_float8_pe_tensors
if comfy.utils.get_attr(self.model, key).dtype not in [torch.float8_e5m2, torch.float8_e4m3fn]:
```
## System Information
- ComfyUI Version: v0.2.2-76-gbdd4a22
- Arguments: ComfyUI\main.py --windows-standalone-build
- OS: nt
- Python Version: 3.10.9 (tags/v3.10.9:1dd9be6, Dec 6 2022, 20:01:21) [MSC v.1934 64 bit (AMD64)]
- Embedded Python: true
- PyTorch Version: 2.0.1+cu118
## Devices
- Name: cuda:0 NVIDIA GeForce RTX 3060 : cudaMallocAsync
- Type: cuda
- VRAM Total: 12884377600
- VRAM Free: 10710333440
- Torch VRAM Total: 1073741824
- Torch VRAM Free: 74627072
## Logs
```
2024-09-25 12:16:18,910 - root - ERROR - Failed to validate prompt for output 219:
2024-09-25 12:16:18,910 - root - ERROR - Output will be ignored
2024-09-25 12:16:18,928 - root - ERROR - Failed to validate prompt for output 272:
2024-09-25 12:16:18,928 - root - ERROR - Output will be ignored
2024-09-25 12:16:18,986 - root - INFO - Prompt executed in 0.05 seconds
2024-09-25 12:17:32,905 - root - INFO - got prompt
2024-09-25 12:17:32,952 - root - ERROR - Failed to validate prompt for output 53:
2024-09-25 12:17:32,952 - root - ERROR - * SimpleMath+ 461:
2024-09-25 12:17:32,952 - root - ERROR - - Return type mismatch between linked nodes: a, INT != INT,FLOAT
2024-09-25 12:17:32,952 - root - ERROR - - Return type mismatch between linked nodes: b, INT != INT,FLOAT
2024-09-25 12:17:32,952 - root - ERROR - Output will be ignored
2024-09-25 12:17:32,968 - root - ERROR - Failed to validate prompt for output 205:
2024-09-25 12:17:32,968 - root - ERROR - Output will be ignored
2024-09-25 12:17:32,999 - root - ERROR - Failed to validate prompt for output 219:
2024-09-25 12:17:32,999 - root - ERROR - Output will be ignored
2024-09-25 12:17:33,015 - root - ERROR - Failed to validate prompt for output 272:
2024-09-25 12:17:33,015 - root - ERROR - Output will be ignored
2024-09-25 12:17:33,078 - root - INFO - Prompt executed in 0.05 seconds
[... the same validation failures (outputs 53, 205, 219, 272, and later 798/799 via SimpleMath+ 800) repeat for every prompt submitted between 12:17 and 13:21; identical entries omitted ...]
2024-09-25 13:21:33,180 - root - INFO - got prompt
2024-09-25 13:21:33,243 - root - ERROR - Failed to validate prompt for output 799:
2024-09-25 13:21:33,243 - root - ERROR - * SimpleMath+ 800:
2024-09-25 13:21:33,243 - root - ERROR - - Return type mismatch between linked nodes: a, INT != INT,FLOAT
2024-09-25 13:21:33,243 - root - ERROR - - Return type mismatch between linked nodes: b, INT != INT,FLOAT
2024-09-25 13:21:33,243 - root - ERROR - Output will be ignored
2024-09-25 13:21:33,447 - root - INFO - Using xformers attention in VAE
2024-09-25 13:21:33,447 - root - INFO - Using xformers attention in VAE
2024-09-25 13:21:45,350 - root - INFO - model weight dtype torch.float16, manual cast: None
2024-09-25 13:21:45,350 - root - INFO - model_type EPS
2024-09-25 13:22:02,200 - root - INFO - Using xformers attention in VAE
2024-09-25 13:22:02,200 - root - INFO - Using xformers attention in VAE
2024-09-25 13:22:06,057 - root - INFO - Requested to load SD1ClipModel
2024-09-25 13:22:06,057 - root - INFO - Loading 1 new model
2024-09-25 13:22:06,251 - root - INFO - loaded completely 0.0 235.84423828125 True
2024-09-25 13:22:10,851 - root - ERROR - !!! Exception during processing !!! ClipVision model not found.
2024-09-25 13:22:10,867 - root - ERROR - Traceback (most recent call last):
File "E:\ComfyUI_windows_portable\ComfyUI\execution.py", line 323, in execute
output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
File "E:\ComfyUI_windows_portable\ComfyUI\execution.py", line 198, in get_output_data
return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
File "E:\ComfyUI_windows_portable\ComfyUI\execution.py", line 169, in _map_node_over_list
process_inputs(input_dict, i)
File "E:\ComfyUI_windows_portable\ComfyUI\execution.py", line 158, in process_inputs
results.append(getattr(obj, func)(**inputs))
File "E:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_IPAdapter_plus\IPAdapterPlus.py", line 559, in load_models
raise Exception("ClipVision model not found.")
Exception: ClipVision model not found.
2024-09-25 13:22:10,867 - root - INFO - Prompt executed in 37.56 seconds
2024-09-25 14:02:06,100 - root - INFO - got prompt
2024-09-25 14:02:07,713 - root - INFO - Requested to load CLIPVisionModelProjection
2024-09-25 14:02:07,713 - root - INFO - Loading 1 new model
2024-09-25 14:02:08,668 - root - INFO - loaded completely 0.0 3522.953369140625 True
2024-09-25 14:02:10,483 - root - INFO - Requested to load AnimateDiffModel
2024-09-25 14:02:10,484 - root - INFO - Requested to load BaseModel
2024-09-25 14:02:10,484 - root - INFO - Requested to load ControlNet
2024-09-25 14:02:10,484 - root - INFO - Loading 3 new models
2024-09-25 14:02:11,724 - root - INFO - loaded completely 0.0 795.6268310546875 True
2024-09-25 14:02:11,959 - root - ERROR - !!! Exception during processing !!! module 'torch' has no attribute 'float8_e5m2'
2024-09-25 14:02:12,034 - root - ERROR - Traceback (most recent call last):
File "E:\ComfyUI_windows_portable\ComfyUI\execution.py", line 323, in execute
output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
File "E:\ComfyUI_windows_portable\ComfyUI\execution.py", line 198, in get_output_data
return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
File "E:\ComfyUI_windows_portable\ComfyUI\execution.py", line 169, in _map_node_over_list
process_inputs(input_dict, i)
File "E:\ComfyUI_windows_portable\ComfyUI\execution.py", line 158, in process_inputs
results.append(getattr(obj, func)(**inputs))
File "E:\ComfyUI_windows_portable\ComfyUI\nodes.py", line 1434, in sample
return common_ksampler(model, seed, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, denoise=denoise)
File "E:\ComfyUI_windows_portable\ComfyUI\nodes.py", line 1401, in common_ksampler
samples = comfy.sample.sample(model, noise, steps, cfg, sampler_name, scheduler, positive, negative, latent_image,
File "E:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Impact-Pack\modules\impact\sample_error_enhancer.py", line 9, in informative_sample
return original_sample(*args, **kwargs) # This code helps interpret error messages that occur within exceptions but does not have any impact on other operations.
File "E:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\sampling.py", line 526, in motion_sample
latents = orig_comfy_sample(model, noise, *args, **kwargs)
File "E:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Advanced-ControlNet\adv_control\sampling.py", line 116, in acn_sample
return orig_comfy_sample(model, *args, **kwargs)
File "E:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Advanced-ControlNet\adv_control\utils.py", line 117, in uncond_multiplier_check_cn_sample
return orig_comfy_sample(model, *args, **kwargs)
File "E:\ComfyUI_windows_portable\ComfyUI\comfy\sample.py", line 43, in sample
samples = sampler.sample(noise, positive, negative, cfg=cfg, latent_image=latent_image, start_step=start_step, last_step=last_step, force_full_denoise=force_full_denoise, denoise_mask=noise_mask, sigmas=sigmas, callback=callback, disable_pbar=disable_pbar, seed=seed)
File "E:\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 829, in sample
return sample(self.model, noise, positive, negative, cfg, self.device, sampler, sigmas, self.model_options, latent_image=latent_image, denoise_mask=denoise_mask, callback=callback, disable_pbar=disable_pbar, seed=seed)
File "E:\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 729, in sample
return cfg_guider.sample(noise, latent_image, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)
File "E:\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 706, in sample
self.inner_model, self.conds, self.loaded_models = comfy.sampler_helpers.prepare_sampling(self.model_patcher, noise.shape, self.conds)
File "E:\ComfyUI_windows_portable\ComfyUI\comfy\sampler_helpers.py", line 66, in prepare_sampling
comfy.model_management.load_models_gpu([model] + models, memory_required=memory_required, minimum_memory_required=minimum_memory_required)
File "E:\ComfyUI_windows_portable\ComfyUI\comfy\model_management.py", line 545, in load_models_gpu
cur_loaded_model = loaded_model.model_load(lowvram_model_memory, force_patch_weights=force_patch_weights)
File "E:\ComfyUI_windows_portable\ComfyUI\comfy\model_management.py", line 327, in model_load
raise e
File "E:\ComfyUI_windows_portable\ComfyUI\comfy\model_management.py", line 323, in model_load
self.real_model = self.model.patch_model(device_to=patch_model_to, lowvram_model_memory=lowvram_model_memory, load_weights=load_weights, force_patch_weights=force_patch_weights)
File "E:\ComfyUI_windows_portable\ComfyUI\comfy\model_patcher.py", line 431, in patch_model
self.load(device_to, lowvram_model_memory=lowvram_model_memory, force_patch_weights=force_patch_weights, full_load=full_load)
File "E:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\model_injection.py", line 818, in load
self._handle_float8_pe_tensors()
File "E:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\model_injection.py", line 845, in _handle_float8_pe_tensors
if comfy.utils.get_attr(self.model, key).dtype not in [torch.float8_e5m2, torch.float8_e4m3fn]:
AttributeError: module 'torch' has no attribute 'float8_e5m2'
2024-09-25 14:02:12,036 - root - INFO - Prompt executed in 5.82 seconds
2024-09-25 14:03:06,012 - root - INFO - got prompt
2024-09-25 14:03:06,216 - root - INFO - Requested to load AnimateDiffModel
2024-09-25 14:03:06,216 - root - INFO - Requested to load BaseModel
2024-09-25 14:03:06,216 - root - INFO - Requested to load ControlNet
2024-09-25 14:03:06,216 - root - INFO - Loading 3 new models
2024-09-25 14:03:06,577 - root - INFO - loaded completely 0.0 795.6268310546875 True
2024-09-25 14:03:06,782 - root - ERROR - !!! Exception during processing !!! module 'torch' has no attribute 'float8_e5m2'
2024-09-25 14:03:06,782 - root - ERROR - Traceback (most recent call last):
File "E:\ComfyUI_windows_portable\ComfyUI\execution.py", line 323, in execute
output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
File "E:\ComfyUI_windows_portable\ComfyUI\execution.py", line 198, in get_output_data
return_values = mapnode_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
File "E:\ComfyUI_windows_portable\ComfyUI\execution.py", line 169, in mapnode_over_list
process_inputs(input_dict, i)
File "E:\ComfyUI_windows_portable\ComfyUI\execution.py", line 158, in process_inputs
results.append(getattr(obj, func)(**inputs))
File "E:\ComfyUI_windows_portable\ComfyUI\nodes.py", line 1434, in sample
return common_ksampler(model, seed, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, denoise=denoise)
File "E:\ComfyUI_windows_portable\ComfyUI\nodes.py", line 1401, in common_ksampler
samples = comfy.sample.sample(model, noise, steps, cfg, sampler_name, scheduler, positive, negative, latent_image,
File "E:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Impact-Pack\modules\impact\sample_error_enhancer.py", line 9, in informative_sample
return original_sample(*args, **kwargs) # This code helps interpret error messages that occur within exceptions but does not have any impact on other operations.
File "E:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\sampling.py", line 526, in motion_sample
latents = orig_comfy_sample(model, noise, args, *kwargs)
File "E:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Advanced-ControlNet\adv_control\sampling.py", line 116, in acn_sample
return orig_comfy_sample(model, args, *kwargs)
File "E:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Advanced-ControlNet\adv_control\utils.py", line 117, in uncond_multiplier_check_cn_sample
return orig_comfy_sample(model, args, *kwargs)
File "E:\ComfyUI_windows_portable\ComfyUI\comfy\sample.py", line 43, in sample
samples = sampler.sample(noise, positive, negative, cfg=cfg, latent_image=latent_image, start_step=start_step, last_step=last_step, force_full_denoise=force_full_denoise, denoise_mask=noise_mask, sigmas=sigmas, callback=callback, disable_pbar=disable_pbar, seed=seed)
File "E:\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 829, in sample
return sample(self.model, noise, positive, negative, cfg, self.device, sampler, sigmas, self.model_options, latent_image=latent_image, denoise_mask=denoise_mask, callback=callback, disable_pbar=disable_pbar, seed=seed)
File "E:\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 729, in sample
return cfg_guider.sample(noise, latent_image, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)
File "E:\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 706, in sample
self.inner_model, self.conds, self.loaded_models = comfy.sampler_helpers.prepare_sampling(self.model_patcher, noise.shape, self.conds)
File "E:\ComfyUI_windows_portable\ComfyUI\comfy\sampler_helpers.py", line 66, in prepare_sampling
comfy.model_management.load_models_gpu([model] + models, memory_required=memory_required, minimum_memory_required=minimum_memory_required)
File "E:\ComfyUI_windows_portable\ComfyUI\comfy\model_management.py", line 545, in load_models_gpu
cur_loaded_model = loaded_model.model_load(lowvram_model_memory, force_patch_weights=force_patch_weights)
File "E:\ComfyUI_windows_portable\ComfyUI\comfy\model_management.py", line 327, in model_load
raise e
File "E:\ComfyUI_windows_portable\ComfyUI\comfy\model_management.py", line 323, in model_load
self.real_model = self.model.patch_model(device_to=patch_model_to, lowvram_model_memory=lowvram_model_memory, load_weights=load_weights, force_patch_weights=force_patch_weights)
File "E:\ComfyUI_windows_portable\ComfyUI\comfy\model_patcher.py", line 431, in patch_model
self.load(device_to, lowvram_model_memory=lowvram_model_memory, force_patch_weights=force_patch_weights, full_load=full_load)
File "E:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\model_injection.py", line 818, in load
self._handle_float8_pe_tensors()
File "E:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\model_injection.py", line 845, in _handle_float8_pe_tensors
if comfy.utils.get_attr(self.model, key).dtype not in [torch.float8_e5m2, torch.float8_e4m3fn]:
AttributeError: module 'torch' has no attribute 'float8_e5m2'
2024-09-25 14:03:06,782 - root - INFO - Prompt executed in 0.64 seconds
```
## Attached Workflow
Please make sure that workflow does not contain any sensitive information such as API keys or passwords.
```
Workflow too large. Please manually upload the workflow from local file system.
```
## Additional Context
(Please add any additional context or steps to reproduce the error here)
@Rasali We definitely have similar both HW and SW configurations:
## System Information
- ComfyUI Version: v0.2.2-76-gbdd4a22
- Arguments: ComfyUI\main.py --windows-standalone-build
- OS: nt
- Python Version: 3.10.9 (tags/v3.10.9:1dd9be6, Dec 6 2022, 20:01:21) [MSC v.1934 64 bit (AMD64)]
- Embedded Python: true
- PyTorch Version: 2.0.1+cu118
## Devices
- Name: cuda:0 NVIDIA GeForce RTX 3060 : cudaMallocAsync
- Type: cuda
- VRAM Total: 12884377600
- VRAM Free: 10710333440
- Torch VRAM Total: 1073741824
- Torch VRAM Free: 74627072
## System Information
- ComfyUI Version: v0.2.1-5-g5cbaa9e
- Arguments: ComfyUI\main.py --windows-standalone-build
- OS: nt
- Python Version: 3.10.9 (tags/v3.10.9:1dd9be6, Dec 6 2022, 20:01:21) [MSC v.1934 64 bit (AMD64)]
- Embedded Python: true
- PyTorch Version: 2.0.1+cu118
## Devices
- Name: cuda:0 NVIDIA GeForce RTX 3060 : cudaMallocAsync
- Type: cuda
- VRAM Total: 12884377600
- VRAM Free: 10690410496
- Torch VRAM Total: 1006632960
- Torch VRAM Free: 7518208
It seems the error is related to AnimateDiff Evolved - especially AnimateDiff Model.
Updating Torch helped:
python_embeded\python.exe -s -m pip install torch==2.1.2 torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118
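The upgrade works because `torch.float8_e5m2` and `torch.float8_e4m3fn` were only introduced in PyTorch 2.1, so the check in AnimateDiff Evolved crashes on 2.0.x. A minimal sketch of the failure mode and a guarded lookup that avoids it (the `FakeTorch20` stand-in module is hypothetical, used here instead of a real old torch build):

```python
# torch.float8_e5m2 / torch.float8_e4m3fn only exist in torch >= 2.1,
# so touching them on torch 2.0.x raises AttributeError.
# FakeTorch20 stands in for an old build that lacks the float8 dtypes.
class FakeTorch20:
    float32 = "float32"  # no float8 attributes, like torch 2.0.x

def available_float8(mod):
    """Return whichever float8 dtypes the module actually exposes."""
    names = ("float8_e5m2", "float8_e4m3fn")
    return [getattr(mod, n) for n in names if hasattr(mod, n)]

print(available_float8(FakeTorch20()))  # [] on builds without float8
```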
Very good Workflow !! simple and comprehensive
I just got a red, blurry video when I wrote more than 250 characters in the conditioning (???)
Thank you very much
Hi, somehow my morphing is not working, it's turning into a GIF kind of thing. Can someone help?
Is the format in the Video Combine set to video/h264-mp4? Could be set to GIF by default
🚨My fix for the new ComfyUI update:🚨
A) Inside the QRCode ControlNet group, the Simple Math node is problematic. This node basically divides the 96 batch size from the Empty Latent Image by the frame count of the White on Black alpha from the imgur link, which is 24. Basically 96/24 = 4.
1. Delete the Simple Math Node
2. Add a Int Literal node and insert 4
3. Attach the "INT" output from the Int Literal node to the "amount" input of the RepeatImageBatch node.
4. Modify that number if you're rendering more than 96 frames, but that's the default.
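The arithmetic the deleted Simple Math node was doing can be sketched in a few lines; the function name here is hypothetical, but the logic matches the steps above (total frames divided by the motion mask's frame count gives the RepeatImageBatch "amount"):

```python
# Sketch of what the removed Simple Math node computed: the
# RepeatImageBatch "amount" is the total latent frame count divided
# by the frame count of the QRCode motion mask clip.
def repeat_amount(total_frames: int, mask_frames: int) -> int:
    if total_frames % mask_frames != 0:
        raise ValueError("total_frames should be a multiple of mask_frames")
    return total_frames // mask_frames

print(repeat_amount(96, 24))  # workflow default: 96 / 24 = 4
```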
B) If you're using the v2.0 version,
1. Update the AnimateDiff Evolved custom node from the Node Manager.
2. At the right of the workflow, locate the VRAM Debug node and Reload it (with right click).
3. Re-attach the "IMAGE" output from the Upscale Image to the "image_passthrough" input of the VRAM Debug node.
4. Attach the "image_passthrough" output from the VRAM Debug node to the "frames" input of the RIFE VFI node.
5. If the workflow still stops at the VRAM Debug node, delete the AnimateDiff Evolved custom node and re-install it or re-add it manually.
Hope it helps
Simple Math node crashed for me too so I replaced it by Math Expression and it seems to work. (However, I get another and probably unrelated error in KSampler.)
When loading the graph, the following node types were not found:
ACN_AdvancedControlNetApply
ControlNet載入器(進階) (ControlNet Loader (Advanced))
Nodes that have failed to load will show as red on the graph.
(IMPORT FAILED) ComfyUI-Advanced-ControlNet
How do we change the length of the output video?
hey great question idk but let me know if you find a way aswell
@letapyerhs @PJA You'll need to modify the total frame numbers. Right now everything is set to 96, so you need to multiply all the numbers by 2 for 2x more frames. An easy way to find all the numbers is simply to take a screenshot of the ControlNet area, paste it into Copilot/ChatGPT, and ask it to convert it to the length you want. It'll give you the right numbers :D
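Scaling the keyframe numbers can also be done with a tiny script instead of a chatbot. This is a hedged sketch assuming the schedule uses the common `frame:(weight)` text format of IPAdapter/batch-schedule fields; the helper name is made up for illustration:

```python
import re

def scale_schedule(schedule: str, factor: float) -> str:
    """Multiply every frame number in a "0:(1.0), 24:(0.0), ..." style
    schedule string by the given factor, leaving the weights untouched."""
    return re.sub(
        r"(\d+):",
        lambda m: f"{int(int(m.group(1)) * factor)}:",
        schedule,
    )

print(scale_schedule("0:(1.0), 24:(0.0), 48:(0.0)", 2))
# "0:(1.0), 48:(0.0), 96:(0.0)"
```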
@Catz thanks
I created a workflow that automatically does the calculations and replaces the numbers in the IPAdapter frames text field. I'll upload it real quick so that you can make a video of any length. Just keep in mind that longer videos will be harder on your system; also, bypassing the color corrector helps if you run into RAM allocation issues. I think you can click on my name to get the workflow.
@operation12studio300 What do I need to adjust to increase the time in my new workflow?
@operation12studio300 Hey, what do I change in your script to increase the length? Mine is still coming out at 0.2 seconds.
Hi, I've been trying to run a video animator on Flux these last few days, but I keep finding broken workflows with the recent Flux 1 dev environment/nodes, and the documentation is poor. Is there any Discord/stream or community forum that helps more technically?
You can join the Civitai Discord. Invitation link can be found on the Discord icon, on the home page, at the bottom of the screen.
Join Banodoco on Discord, that’s where a lot of stuff comes out from.
Great workflow, thank you.
Question: is it possible to improve quality by disabling the Hyper LoRA and setting the cfg and steps back to the normal values for a particular model? As far as I understand your notes, that would also allow using a negative prompt.
Or is this workflow hardwired to a hyper lora?
How to break through the frame limit of 255?
I am unable to install the MTB node, it always shows "Try Fix"
can anyone help me with
Node Type - VAEDecode
Exception Message -'VAE' object has no attribute 'vae_dtype'
I had the same error - but I found that the vae it was using wasn't a match for the checkpoint I selected.
If you look at the bottom left of the workflow - you'll see the Load Checkpoint box, right under that one is Load VAE. Make sure you select the vae according to the checkpoint you're using. I just used the default vae-ft-MSE-840000-ema-pruned and it's working for me.
Does anyone know how to make AnimateDiff stay loyal to the input images? My checkpoint is generating a woman instead of reptiles, what do I need to change? I'm using Realistic Vision, thank you
It says I need to run NumPy 2.0 or lower and I have 2.1?
I can't get NumPy 2.0 or lower installed, any help or advice would be great.
Hi! This looks so dope but I cannot seem to make it work. The generation is a blank white image. Can someone help? Thanks
I had the same thing. I manually loaded all the missing components. And don't forget the images: without them you get a white frame, since the workflow is empty without the base images to morph.
@Vera_oka But even with the images it outputs a white image, something is wrong
I fixed it here by adding a prompt. The model comes with nothing in the node.
I have the same problem. I installed everything and restarted; since I hadn't added the images, I was getting black-and-white, grayscale images. Then I added the images and now only a white image comes out. Is there something else to configure?
Has anyone been in a similar situation? Why does the video I generate have nothing to do with the loaded images? https://i.imgur.com/FZojh3v.mp4 could not be loaded with cv.
I find cropping and composition of imgs is very important, in this instance make sure what you want included is dead center and tightly cropped, otherwise I find that for example if you have a character in a street scene you end up with a street scene and not necessarily the character. I'd suggest downloading the motion masks from the link in the description and loading them locally. Good luck!
Could it be a problem with your network?
ERRORS and not working. I get all the errors and malfunctions everyone is getting plus some others. Followed several tutorials, installed and uninstalled it several times. And there seems to be no clear path or any kind of community support :(
awesome
What settings do I need to make the animation fluid? Mine is not a perfect loop and is all over the place. Thank you!
Every node works with no errors, but why is my video not exactly like the picture I loaded?
And it's super blurry.
Can anyone help???
Thank u
That is not how it works, it still interprets it. This is an old way of doing things.
denoise probably
This is normal. It is Animatediff and IP Adapter. It is throwing a dice to have a result that comes close ...
Can anyone help? Everything is ok. But I can never make videos like the ones above. I tried all the settings. I tried everything. I can never make transitions like this. Can anyone share another workflow? Or where can I find support?
same here
Your error message is a bit too vague. Chances are that you changed some vital parameters. Try the original workflow again. You need different useful source images; they will morph into each other then. But note that AnimateDiff and IPAdapter have their own will.
I made a live stream on how to use it and also a guide. The live one is at the top and newer, but the guide is more precise and explains things better, I think. I should make another one.
Making Looping Videos with iPiv image morpher and ComfyUI
https://youtu.be/3amkmEP7JvQ?si=QuDlRdAdiwblH6Yh
Hey there, does somebody have advice on how to change the length of the video (in seconds) in the workflow? Can it be longer at all? Thx!!
The first generation everything was OK, but the second time I got an error:
KSampler
index is out of bounds for dimension with size 0
and I can't understand why; no settings were changed.
I found the fix for this!
Just replace the Split Images VHS node with a Image Batch Splitter node.
How come I have a female in the morph when I've only uploaded 4 images of males portraits?
Hello friends,
I would love to use this workflow but with Wan 2.1 to make it more realistic. Could you please tell me which models I need to download and where exactly I should load them? Sorry for the newbie questions, I’m just starting out with this. Thank you so much for your help!
To change the base model would require a lot of re-working for the whole workflow, you would pretty much have to start from scratch.
@prophetofthesingularity5 Thanks a lot for getting back to me! I’ll start with something a bit simpler — you know, sometimes we dream big without realizing the kind of adventure we’re signing up for.
I really want to make these small looping videos again, it's been many months since I did one. Now I'm not able to find the same IPAdapter as the one used in the workflow. Do you have a new and updated workflow?? Could be so nice :)
Can someone please help me? I keep getting this error: UnboundLocalError: cannot access local variable 'file' where it is not associated with a value. I've tried everything and can't get rid of it! PLEASE
Thanks a lot for uploading this workflow! I just have one question: which nodes should I replace if I have your video mask files (.mp4)?
THanks!
Works perfectly thank you
Want to get this workflow working but keep running into an error when the node graph gets to the first ksampler: !!! Exception during processing !!! expected scalar type Half but found Float. Let me know if you have found a solution to this error. Thank you!!
RuntimeError: expected scalar type Half but found Float
File "C:\Users\liavb\OneDrive\מסמכים\הורדות 2\תיקיה חדשה\ComfyUI\resources\ComfyUI\execution.py", line 525, in execute
output_data, output_ui, has_subgraph, has_pending_tasks = await get_output_data(prompt_id, unique_id, obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb, v3_data=v3_data)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\liavb\OneDrive\מסמכים\הורדות 2\תיקיה חדשה\ComfyUI\resources\ComfyUI\execution.py", line 334, in get_output_data
return_values = await async_map_node_over_list(prompt_id, unique_id, obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb, v3_data=v3_data)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\liavb\OneDrive\מסמכים\הורדות 2\תיקיה חדשה\ComfyUI\resources\ComfyUI\execution.py", line 308, in async_map_node_over_list
await process_inputs(input_dict, i)
File "C:\Users\liavb\OneDrive\מסמכים\הורדות 2\תיקיה חדשה\ComfyUI\resources\ComfyUI\execution.py", line 296, in process_inputs
result = f(**inputs)
^^^^^^^^^^^
File "C:\Users\liavb\OneDrive\מסמכים\הורדות 2\תיקיה חדשה\ComfyUI\resources\ComfyUI\nodes.py", line 1591, in sample
return common_ksampler(model, seed, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, denoise=denoise)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\liavb\OneDrive\מסמכים\הורדות 2\תיקיה חדשה\ComfyUI\resources\ComfyUI\nodes.py", line 1556, in common_ksampler
samples = comfy.sample.sample(model, noise, steps, cfg, sampler_name, scheduler, positive, negative, latent_image,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\liavb\OneDrive\מסמכים\הורדות 2\תיקיה חדשה\ComfyUI\resources\ComfyUI\comfy\sample.py", line 66, in sample
samples = sampler.sample(noise, positive, negative, cfg=cfg, latent_image=latent_image, start_step=start_step, last_step=last_step, force_full_denoise=force_full_denoise, denoise_mask=noise_mask, sigmas=sigmas, callback=callback, disable_pbar=disable_pbar, seed=seed)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\liavb\OneDrive\מסמכים\הורדות 2\תיקיה חדשה\ComfyUI\resources\ComfyUI\comfy\samplers.py", line 1180, in sample
return sample(self.model, noise, positive, negative, cfg, self.device, sampler, sigmas, self.model_options, latent_image=latent_image, denoise_mask=denoise_mask, callback=callback, disable_pbar=disable_pbar, seed=seed)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\liavb\OneDrive\מסמכים\הורדות 2\תיקיה חדשה\ComfyUI\resources\ComfyUI\comfy\samplers.py", line 1070, in sample
return cfg_guider.sample(noise, latent_image, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\liavb\OneDrive\מסמכים\הורדות 2\תיקיה חדשה\ComfyUI\resources\ComfyUI\comfy\samplers.py", line 1052, in sample
output = executor.execute(noise, latent_image, sampler, sigmas, denoise_mask, callback, disable_pbar, seed, latent_shapes=latent_shapes)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\liavb\OneDrive\מסמכים\הורדות 2\תיקיה חדשה\ComfyUI\resources\ComfyUI\comfy\patcher_extension.py", line 113, in execute
return self.wrappers[self.idx](self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\COMFYUI\custom_nodes\comfyui-animatediff-evolved\animatediff\sampling.py", line 561, in outer_sample_wrapper
latents = executor(*tuple(args), **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\liavb\OneDrive\מסמכים\הורדות 2\תיקיה חדשה\ComfyUI\resources\ComfyUI\comfy\patcher_extension.py", line 105, in __call__
return new_executor.execute(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\liavb\OneDrive\מסמכים\הורדות 2\תיקיה חדשה\ComfyUI\resources\ComfyUI\comfy\patcher_extension.py", line 113, in execute
return self.wrappers[self.idx](self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\COMFYUI\custom_nodes\comfyui-advanced-controlnet\adv_control\sampling.py", line 124, in acn_outer_sample_wrapper
return executor(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\liavb\OneDrive\מסמכים\הורדות 2\תיקיה חדשה\ComfyUI\resources\ComfyUI\comfy\patcher_extension.py", line 105, in __call__
return new_executor.execute(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\liavb\OneDrive\מסמכים\הורדות 2\תיקיה חדשה\ComfyUI\resources\ComfyUI\comfy\patcher_extension.py", line 112, in execute
return self.original(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\liavb\OneDrive\מסמכים\הורדות 2\תיקיה חדשה\ComfyUI\resources\ComfyUI\comfy\samplers.py", line 995, in outer_sample
output = self.inner_sample(noise, latent_image, device, sampler, sigmas, denoise_mask, callback, disable_pbar, seed, latent_shapes=latent_shapes)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\liavb\OneDrive\מסמכים\הורדות 2\תיקיה חדשה\ComfyUI\resources\ComfyUI\comfy\samplers.py", line 981, in inner_sample
samples = executor.execute(self, sigmas, extra_args, callback, noise, latent_image, denoise_mask, disable_pbar)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\liavb\OneDrive\מסמכים\הורדות 2\תיקיה חדשה\ComfyUI\resources\ComfyUI\comfy\patcher_extension.py", line 112, in execute
return self.original(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\liavb\OneDrive\מסמכים\הורדות 2\תיקיה חדשה\ComfyUI\resources\ComfyUI\comfy\samplers.py", line 751, in sample
samples = self.sampler_function(model_k, noise, sigmas, extra_args=extra_args, callback=k_callback, disable=disable_pbar, **self.extra_options)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\COMFYUI\.venv\Lib\site-packages\torch\utils\_contextlib.py", line 124, in decorate_context
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\liavb\OneDrive\מסמכים\הורדות 2\תיקיה חדשה\ComfyUI\resources\ComfyUI\comfy\k_diffusion\sampling.py", line 205, in sample_euler
denoised = model(x, sigma_hat * s_in, **extra_args)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\liavb\OneDrive\מסמכים\הורדות 2\תיקיה חדשה\ComfyUI\resources\ComfyUI\comfy\samplers.py", line 400, in __call__
out = self.inner_model(x, sigma, model_options=model_options, seed=seed)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\liavb\OneDrive\מסמכים\הורדות 2\תיקיה חדשה\ComfyUI\resources\ComfyUI\comfy\samplers.py", line 954, in __call__
return self.outer_predict_noise(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\liavb\OneDrive\מסמכים\הורדות 2\תיקיה חדשה\ComfyUI\resources\ComfyUI\comfy\samplers.py", line 961, in outer_predict_noise
).execute(x, timestep, model_options, seed)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\liavb\OneDrive\מסמכים\הורדות 2\תיקיה חדשה\ComfyUI\resources\ComfyUI\comfy\patcher_extension.py", line 112, in execute
return self.original(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\liavb\OneDrive\מסמכים\הורדות 2\תיקיה חדשה\ComfyUI\resources\ComfyUI\comfy\samplers.py", line 964, in predict_noise
return sampling_function(self.inner_model, x, timestep, self.conds.get("negative", None), self.conds.get("positive", None), self.cfg, model_options=model_options, seed=seed)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\COMFYUI\custom_nodes\comfyui-animatediff-evolved\animatediff\sampling.py", line 631, in evolved_sampling_function
cond_pred, uncond_pred = comfy.samplers.calc_cond_batch(model, [cond, uncond_], x, timestep, model_options)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\liavb\OneDrive\מסמכים\הורדות 2\תיקיה חדשה\ComfyUI\resources\ComfyUI\comfy\samplers.py", line 205, in calc_cond_batch
return _calc_cond_batch_outer(model, conds, x_in, timestep, model_options)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\liavb\OneDrive\מסמכים\הורדות 2\תיקיה חדשה\ComfyUI\resources\ComfyUI\comfy\samplers.py", line 213, in _calc_cond_batch_outer
return executor.execute(model, conds, x_in, timestep, model_options)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\liavb\OneDrive\מסמכים\הורדות 2\תיקיה חדשה\ComfyUI\resources\ComfyUI\comfy\patcher_extension.py", line 113, in execute
return self.wrappers[self.idx](self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\COMFYUI\custom_nodes\comfyui-animatediff-evolved\animatediff\sampling.py", line 841, in sliding_calc_cond_batch
results = evaluate_context_windows(executor, model, x_in, conds, timestep, [enum_window], model_options, CREF, ADGS)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\COMFYUI\custom_nodes\comfyui-animatediff-evolved\animatediff\sampling.py", line 909, in evaluate_context_windows
sub_conds_out = executor(model, sub_conds, sub_x, sub_timestep, model_options)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\liavb\OneDrive\מסמכים\הורדות 2\תיקיה חדשה\ComfyUI\resources\ComfyUI\comfy\patcher_extension.py", line 105, in __call__
return new_executor.execute(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\liavb\OneDrive\מסמכים\הורדות 2\תיקיה חדשה\ComfyUI\resources\ComfyUI\comfy\patcher_extension.py", line 112, in execute
return self.original(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\liavb\OneDrive\מסמכים\הורדות 2\תיקיה חדשה\ComfyUI\resources\ComfyUI\comfy\samplers.py", line 325, in _calc_cond_batch
output = model.apply_model(input_x, timestep_, **c).chunk(batch_chunks)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\liavb\OneDrive\מסמכים\הורדות 2\תיקיה חדשה\ComfyUI\resources\ComfyUI\comfy\model_base.py", line 171, in apply_model
return comfy.patcher_extension.WrapperExecutor.new_class_executor(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\liavb\OneDrive\מסמכים\הורדות 2\תיקיה חדשה\ComfyUI\resources\ComfyUI\comfy\patcher_extension.py", line 112, in execute
return self.original(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\liavb\OneDrive\מסמכים\הורדות 2\תיקיה חדשה\ComfyUI\resources\ComfyUI\comfy\model_base.py", line 210, in apply_model
model_output = self.diffusion_model(xc, t, context=context, control=control, transformer_options=transformer_options, **extra_conds)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\COMFYUI\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1776, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\COMFYUI\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1787, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\liavb\OneDrive\מסמכים\הורדות 2\תיקיה חדשה\ComfyUI\resources\ComfyUI\comfy\ldm\modules\diffusionmodules\openaimodel.py", line 837, in forward
return comfy.patcher_extension.WrapperExecutor.new_class_executor(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\liavb\OneDrive\מסמכים\הורדות 2\תיקיה חדשה\ComfyUI\resources\ComfyUI\comfy\patcher_extension.py", line 112, in execute
return self.original(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\liavb\OneDrive\מסמכים\הורדות 2\תיקיה חדשה\ComfyUI\resources\ComfyUI\comfy\ldm\modules\diffusionmodules\openaimodel.py", line 879, in _forward
h = forward_timestep_embed(module, h, emb, context, transformer_options, time_context=time_context, num_video_frames=num_video_frames, image_only_indicator=image_only_indicator)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\liavb\OneDrive\מסמכים\הורדות 2\תיקיה חדשה\ComfyUI\resources\ComfyUI\comfy\ldm\modules\diffusionmodules\openaimodel.py", line 56, in forward_timestep_embed
x = handler(layer, x, emb, context, transformer_options, output_shape, time_context, num_video_frames, image_only_indicator)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\COMFYUI\custom_nodes\comfyui-animatediff-evolved\animatediff\model_injection.py", line 197, in forward_timestep_embed_patch_ade
return layer(x, context, transformer_options=transformer_options)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\COMFYUI\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1776, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\COMFYUI\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1787, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\COMFYUI\custom_nodes\comfyui-animatediff-evolved\animatediff\motion_module_ad.py", line 950, in forward
return self.temporal_transformer(input_tensor, encoder_hidden_states, attention_mask, self.view_options, mm_kwargs, transformer_options)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\COMFYUI\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1776, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\COMFYUI\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1787, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\COMFYUI\custom_nodes\comfyui-animatediff-evolved\animatediff\motion_module_ad.py", line 1205, in forward
hidden_states = self.proj_in(hidden_states).to(hidden_states.dtype)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\COMFYUI\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1776, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\COMFYUI\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1787, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\liavb\OneDrive\מסמכים\הורדות 2\תיקיה חדשה\ComfyUI\resources\ComfyUI\comfy\ops.py", line 394, in forward
return super().forward(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\COMFYUI\.venv\Lib\site-packages\torch\nn\modules\linear.py", line 134, in forward
return F.linear(input, self.weight, self.bias)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
pls help