Showing an example of how to use music to drive AI animations. This should make it easier to create AI-animated music videos.
Under the hood we're using the amplitude of certain frequencies to change the prompt.
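To illustrate the idea (this is a rough sketch, not the actual node code), the per-frame driving signal can be computed by taking an FFT of the audio and averaging the magnitude inside a frequency band. The band edges, frame rate, and function name below are placeholder assumptions:

```python
import numpy as np
import librosa

def band_amplitude_per_frame(audio_path, fps=12, low_hz=20, high_hz=150):
    """Return one normalized amplitude value per animation frame for a frequency band."""
    y, sr = librosa.load(audio_path, sr=None, mono=True)
    hop = int(sr / fps)                       # one STFT column per animation frame
    stft = np.abs(librosa.stft(y, hop_length=hop))
    freqs = librosa.fft_frequencies(sr=sr)
    band = (freqs >= low_hz) & (freqs <= high_hz)
    amp = stft[band].mean(axis=0)             # average magnitude inside the band
    return amp / (amp.max() + 1e-8)           # normalize to 0..1
```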
String Scheduling
This version shows an example of shuffling a set of prompts whenever the amplitude crosses a certain threshold.
Prompts can be looped and/or shuffled to create more dynamic content.
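Conceptually, the string scheduling behaves something like the sketch below: on each upward threshold crossing the prompt list advances, and when it runs out it either loops or reshuffles. The threshold value and prompt handling here are illustrative assumptions, not the workflow's exact logic:

```python
import random

def schedule_prompts(amplitudes, prompts, threshold=0.6, shuffle=True, loop=True):
    """Map each frame's amplitude to a prompt, switching on upward threshold crossings."""
    pool = list(prompts)
    idx, schedule, was_above = 0, [], False
    for amp in amplitudes:
        above = amp >= threshold
        if above and not was_above:           # rising edge: move to the next prompt
            idx += 1
            if idx >= len(pool):
                idx = 0 if loop else len(pool) - 1
                if shuffle:
                    random.shuffle(pool)
        schedule.append(pool[idx])
        was_above = above
    return schedule
```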
Description
Showing how to use the audio nodes for value scheduling
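For value scheduling, the idea is to turn the per-frame amplitude into numbers another node can consume, for example a keyframe string in the "frame:(value)" style used by schedule nodes. The output format and scaling below are assumptions for illustration, not necessarily what the audio nodes emit:

```python
def amplitude_to_value_schedule(amplitudes, min_val=0.0, max_val=1.0):
    """Build a 'frame:(value)' keyframe string from normalized amplitudes."""
    entries = []
    for frame, amp in enumerate(amplitudes):
        value = min_val + amp * (max_val - min_val)
        entries.append(f"{frame}:({value:.3f})")
    return ", ".join(entries)
```

The resulting string would look like "0:(0.000), 1:(0.412), 2:(0.875), ...", which can then drive whichever parameter you want to animate.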
FAQ
Comments (9)
How should I use prompts? Prepend a text node? A bit more information about where we enter our own data and prompts would be appreciated. Comfy went through a LONG list of embeddings I don't have, but I don't see them in the workflow. Plus I got this error:

Error occurred when executing ADE_AnimateDiffLoaderWithContext: 'NoneType' object has no attribute 'lower'
  File "C:\ComfyUI_windows_portable\ComfyUI\execution.py", line 153, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
  File "C:\ComfyUI_windows_portable\ComfyUI\execution.py", line 83, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
  File "C:\ComfyUI_windows_portable\ComfyUI\execution.py", line 76, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
  File "C:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\nodes.py", line 115, in load_mm_and_inject_params
    motion_model = load_motion_module(model_name, model, motion_lora=motion_lora, motion_model_settings=motion_model_settings)
  File "C:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\model_injection.py", line 184, in load_motion_module
    mm_state_dict = comfy.utils.load_torch_file(model_path, safe_load=True)
  File "C:\ComfyUI_windows_portable\ComfyUI\comfy\utils.py", line 12, in load_torch_file
    if ckpt.lower().endswith(".safetensors"):
Hi, I got the same error. How do I fix this and get the workflow working? Did you manage to make it work?
This is great! Is it possible to add an initial image?
Hi,
I am getting this error at the Video Combine node.
ERROR:root:Failed to validate prompt for output 271:
ERROR:root:* (prompt):
ERROR:root: - Return type mismatch between linked nodes: frame_rate, INT != FLOAT
ERROR:root:* VHS_VideoCombine 271:
ERROR:root: - Return type mismatch between linked nodes: frame_rate, INT != FLOAT
ERROR:root:Output will be ignored
Please assist.
Thanks.
Looks like "frame_rate" is now a float, so instead of passing in an int you need to give it a float. Probably the easiest way is to turn "frame_rate" back into a widget on the node and enter the value you want manually.
Unsure what I'm doing wrong, but all my generations come out mosaic-like and abstract. Got any idea why this could be?
Error occurred when executing ADE_AnimateDiffLoaderWithContext:
'ModelPatcher' object has no attribute 'model_keys'
How should I solve this?
Currently broken?
Would love to play with this, but I get an error.
"Prompt outputs failed validation AudioToFFTs: - Return type mismatch between linked nodes: audio, AUDIO != AUDIO_DATA"
