TUTORIAL BELOW
I have released version 2 of my node suite with the following notable changes
-Everything can react to everything: ALL parameters of ALL nodes are now ALL simultaneously schedulable
-Hundreds of new features, dozens of new nodes
-Performance improvements
-A focus on user experience: fewer noodles, tooltips for everything.
-Direct integration with @kosinkadink's Advanced-Controlnet and AnimateDiff-Evolved. I plan to grow this integration over time.
-More feature-rich modulation system
-All feature inputs are now optional, making these nodes double as a powerful suite for manipulation of images, masks, videos, etc., even without reactivity.
There are something like 160 nodes now, all modular, super fun. There's more than I can cover here, but I plan to introduce more examples of new features going forward. For now, please see the example below. Check out the GitHub, gimme stars, blah blah blah
https://github.com/ryanontheinside/ComfyUI_RyanOnTheInside
Comments (12)
Lovely! This update is magic!
hell yea glad you like
thank u master
go forth and warp
Pretty dope that it does everything! I'm tweaking the workflow to automate a bit more from the start.
What would you suggest for isolating a synthesizer in soundtracks? I'm not getting great results with some of the current options.
Open-Unmix is one of the best open-source source-separation models. There are some paid ones that are better. If you can get your hands on the MIDI, there's a MIDIFeatureExtractor node ;)
May I ask what you mean by automating? As far as I remember, everything is automated aside from the DrawableFeature, which is optional.
@ryanontheinside Alright I'll have to check for those nodes, thanks!
You can drag one of my videos into Comfy to see the new nodes:
Currently, I've mainly automated the LoRA prompting. The LoraLoaderAdvanced nodes automatically add the trigger words, and I feed them into a string merge to wrap the words between the particle prompts.
I realized the Drawable Feature node's frame_count had to be entered manually, so I turned it into an input to automatically calculate frame rate x duration. (I was calculating it with some math nodes before I realized your Audio Info node was doing just that, sweet!)
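The frame-count arithmetic described above can be sketched in a few lines (the node and parameter names come from the thread; the helper itself and its names are illustrative, not actual node code):

```python
import math

# Hypothetical helper: derive the frame_count input for the Drawable Feature
# node from the audio duration (as reported by an Audio Info node) and the
# workflow frame rate, instead of typing it in by hand.
def frame_count(duration_seconds: float, fps: float) -> int:
    """Total frames needed to cover the audio at the given frame rate."""
    # Round up so the animation never ends before the audio does.
    return math.ceil(duration_seconds * fps)

# e.g. a 12.5 s clip rendered at 8 fps needs 100 frames
print(frame_count(12.5, 8))  # 100
```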
I save all videos into a specific folder, and I'll re-sort them into mask and final-upscale folders later.
I kind of wish the Particle Emitter and Particle Emission Mask nodes gave you more diverse options depending on the audio. Maybe a different particle shape and a random size for each, or sizes driven by the audio volume. Messing with the gravity or strengths makes it super long to render. Currently it can feel a bit samey and the particles need tweaking for every new video, but there's lots of potential here!
FeatureToLatentKeyFrame not working. pls help
so sorry for the late reply - I did see this issue on GitHub and hopefully resolved it, don't know if that was you. If you're still having trouble, let me know!
I don't understand developers like this: they do a huge amount of work, then don't include the 5 links to the models.
apologies for overlooking that, a lot of moving parts on my end. Check the YouTube description, the links are there. I'll try to remember to put them in more places.