Workflow for generating morph style looping videos.
v3: Hyper-SD implementation - allows us to use the AnimateDiff v3 motion model with DPM and other samplers. This seems to result in improved quality and better overall color and animation coherence.
Uses QRCode Controlnet to guide the animation flow, morphing between the reference images is done via IPAdapter attention masks.
Here are some more motion masks to use with QRCode - kindly provided by @Xenodimensional: https://civarchive.com/posts/2011230
❗If you are getting a "CLIP Vision model not found" error, download the following models, rename them as shown, and place them in the /ComfyUI/models/clip_vision folder:
CLIP-ViT-H-14-laion2B-s32B-b79K.safetensors (download and rename)
CLIP-ViT-bigG-14-laion2B-39B-b160k.safetensors (download and rename)
❗If you are getting an error message IPAdapter model not found:
You are likely missing the IPAdapter model. In the ComfyUI Manager menu, click Install Models, search for ip-adapter_sd15_vit-G.safetensors, and click Install.
If installing through the Manager doesn't work for some reason, you can download the model from Huggingface and drop it into the \ComfyUI\models\ipadapter folder.
The ViT-G model is what I used in the workflow, but I suggest you try out other IPAdapter models as well.
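If you prefer fetching everything from a script, here is a minimal sketch using huggingface_hub; the repo IDs and in-repo filenames below are my assumptions, so verify them on Hugging Face before running:

# Minimal sketch: fetch the models via huggingface_hub and copy them under the
# names ComfyUI expects. Repo IDs and in-repo filenames are assumptions -
# verify them on Hugging Face before running.
import shutil
from pathlib import Path

from huggingface_hub import hf_hub_download  # pip install huggingface_hub

COMFY_MODELS = Path("ComfyUI/models")

# (repo id, filename inside the repo, target subfolder, name ComfyUI looks for)
DOWNLOADS = [
    ("laion/CLIP-ViT-H-14-laion2B-s32B-b79K", "open_clip_pytorch_model.safetensors",
     "clip_vision", "CLIP-ViT-H-14-laion2B-s32B-b79K.safetensors"),
    ("laion/CLIP-ViT-bigG-14-laion2B-39B-b160k", "open_clip_pytorch_model.safetensors",
     "clip_vision", "CLIP-ViT-bigG-14-laion2B-39B-b160k.safetensors"),
    ("h94/IP-Adapter", "models/ip-adapter_sd15_vit-G.safetensors",
     "ipadapter", "ip-adapter_sd15_vit-G.safetensors"),
]

for repo_id, filename, subfolder, target_name in DOWNLOADS:
    target_dir = COMFY_MODELS / subfolder
    target_dir.mkdir(parents=True, exist_ok=True)
    cached_path = hf_hub_download(repo_id=repo_id, filename=filename)
    shutil.copy(cached_path, target_dir / target_name)  # the copy is the "rename" step
    print(f"-> {target_dir / target_name}")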
Comments (38)
It's just crazy, the possibilities of this workflow are phenomenal! Fascinating, a game changer
Ty so much for the kind words, @patz65!
Doesn't work for me right now.
"Error occurred when executing KSampler: 'ModuleList' object has no attribute '1'"
In the Load Advanced Controlnet node make sure you are loading SD1.5 QRCode and not the SDXL version.
Download: control_v1p_sd15_qrcode_monster.safetensors
I was wondering how you were generating the growing circles loop without a video - I thought it would be some clever use of moving rectangles with some rectangular-to-polar nodes I had not heard about before, but the actual solution is just as clever! Linking to online content like that might be a good practice to adopt for anyone sharing workflows. It was so easy to use that I did not even expect it to be there!
Big thanks for sharing your workflow. Mine is almost exactly the same - it's stunning how structurally similar they are, yet how different they look aesthetically. Yours is much cleaner, so much so that I'm going to use it to explain my process in the future rather than showing my own spaghetti.
There are also a few unique tricks I am going to borrow, that's for sure - like that video loader based on a URL, for sharing my workflows with others.
Glad you found it helpful!
At first I tried to use existing shape and mask generation nodes to achieve this in Comfy, and even started writing my own custom node for creating these kinds of "motion masks", but halfway through I realized that, at the current stage, it is much easier to generate the masks with existing tools like After Effects and import them instead.
Although I'm sure very soon we'll be able to get the whole pipeline in Comfy.
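For anyone who does want to stay in code, here is a minimal sketch of how an expanding-circles motion mask could be generated as a looping black-and-white PNG sequence; the resolution, frame count, and ring spacing are illustrative assumptions, not the values behind the showcased clip:

# Minimal sketch: generate an expanding-circles motion mask as a looping
# black-and-white PNG sequence. Resolution, frame count, and ring spacing
# are illustrative assumptions.
import numpy as np
import imageio.v3 as iio  # pip install imageio

W, H, FRAMES, RING_SPACING = 512, 512, 96, 96.0

yy, xx = np.mgrid[0:H, 0:W]
dist = np.hypot(xx - W / 2, yy - H / 2)  # each pixel's distance from the center

for f in range(FRAMES):
    # Shifting the radial phase by f/FRAMES pushes the rings outward; after
    # FRAMES steps the pattern returns to its start, so the sequence loops.
    phase = (dist / RING_SPACING - f / FRAMES) % 1.0
    mask = (phase < 0.5).astype(np.uint8) * 255  # alternating white/black rings
    iio.imwrite(f"mask_{f:04d}.png", mask)

The resulting PNG sequence can then be loaded like any other mask video in the workflow.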
@ipiv yeah a masking pipeline would be great in Comfy. Also thank you so much, I'm working on an MV, and this is the EXACT effect I was trying to achieve for a shot. Bless :)
I always have a problem with this model and I don't know why https://drive.google.com/file/d/1i1vvb1Zzu5on-BoyhwZ9WSW9t0-j5Jr5/view?usp=sharing
control_v1p_sd15_qrcode_monster
Download this model into ComfyUI/models/controlnet. Afterwards, choose it in the Load Advanced Controlnet node after clicking Refresh in the side menu.
@ipiv You are amazing, man. Thank you for this beautiful creativity ♥
Wow, I am impressed. I call that next level. Thank you for sharing.
Hi, nice work. Can you tell me why you use FILM VFI and not RIFE VFI? I compared them a long time ago in SD but never really noticed a difference, only that FILM VFI takes longer to run. Maybe I'm missing something.
Hii, thank you!
There isn't really a right or wrong here - I use both and switch between them. For longer animations I tend to use RIFE, since it's faster and less VRAM-heavy on my card, but it sometimes introduces artifacts.
It really depends on the animation and subjects, so I suggest you compare both and see the difference by switching the node out. When the frames are cached and the seed is fixed, it will only run the interpolation process without regenerating.
If you need help hit me up, always happy to help!
By the way, you can make longer interpolations with RIFE VFI - just change the model from rife47 to rife49.
@ipiv Interesting, thanks for the reply)
Hi, thank you for your work.
I get this error:
Error occurred when executing KSampler:
integer division or modulo by zero
Hmm, this error message doesn't say much to me unfortunately.
Is everything updated, and are the correct models loaded?
Reload the workflow and check the model names before hitting Queue, as Comfy can autofill incorrect models if the ones with the original names weren't found.
Do default and other AnimateDiff workflows generate any errors?
@ipiv Okay, I got it.
The problem was with the AnimateLCM model.
Can't find the ViT-G model even though it's in the right folder and renamed as per the instructions.
Hey,
Do you have ip-adapter_sd15_vit-G.safetensors in ComfyUI\models\ipadapter folder?
It can be downloaded through Comfy Manager or from Huggingface
Naming is important for Comfy to find the correct IPAdapter and CLIP Vision models, so make sure to double-check it if the error persists.
The IPAdapter Plus variant works as well, by the way, so I suggest giving it a try and seeing whether switching to it gives an error too.
@ipiv Thanks, works well with the standard medium too. Is there a way to add more images to morph through, taking into account that I'd need to change the mask frames too? i.e. 0:(0.0), 28:(1.0), 48:(0.0), 68:(1.0), 88:(0.0), 108:(1.0)
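For what it's worth, here is a minimal sketch of how that alternating 0/1 schedule could be extended programmatically for more morph targets; the start frame and step size are assumptions taken from the example above, and the output uses the same frame:(value) syntax:

# Minimal sketch: build an alternating 0/1 keyframe schedule string for more
# morph targets. Start frame and step are assumptions matching the example.
def alternating_schedule(num_peaks: int, start: int = 28, step: int = 20) -> str:
    keys = ["0:(0.0)"]
    frame = start
    for i in range(num_peaks):
        value = 1.0 if i % 2 == 0 else 0.0  # alternate 1.0 / 0.0 after frame 0
        keys.append(f"{frame}:({value})")
        frame += step
    return ",\n".join(keys)

print(alternating_schedule(5))  # reproduces the 0/28/48/68/88/108 schedule above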
Would you mind telling me what the value for the Simple Math node at the very top of the workflow should be? For some reason it loads empty. Thanks.
The value for that node is "a/2". It divides the batch in two and does a hard switch from one reference image to the other rather than fading in the middle. This group is actually optional and isn't connected in the base workflow.
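To make that concrete, here is a minimal sketch (my own illustration, not the workflow's nodes) of the binary attention masks the "a/2" split amounts to; the batch size and resolution are placeholder values:

# Minimal sketch: the binary attention masks behind the "a/2" hard switch.
# Batch size and resolution are placeholder values.
import torch

batch, height, width = 96, 512, 512
half = batch // 2  # the Simple Math node's "a/2"

mask_a = torch.zeros(batch, height, width)
mask_b = torch.zeros(batch, height, width)
mask_a[:half] = 1.0  # image A attends to the first half of the frames
mask_b[half:] = 1.0  # image B takes over for the second half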
@ipiv many thanks for your response.
Is it possible to load more images by cloning the 'IPAdapter Batch' and 'Load Image' nodes?
It is definitely possible, but unfortunately it isn't implemented in the current version - it requires changing the nodes a bit to create a mask for each reference image and to adapt the current implementation of fading between the masks (see the sketch below for one way to build such masks).
I thought about adding the possibility, but I decided to keep the workflow more "plug and play", without the need to change nodes and values.
I will revisit it when I have time and see if I can make it modular, so that the mask and fading generation and the total batch count adjust automatically to the number of reference images the user puts in.
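To sketch what that could look like (my own illustration, assuming ComfyUI-style [batch, height, width] mask tensors, not the current nodes): per-image attention masks that fade linearly between N reference images, with the weights summing to 1 at every frame:

# Minimal sketch: per-image IPAdapter attention masks that fade linearly
# between N reference images across the frame batch.
import torch

def fading_masks(num_images: int, batch: int, height: int, width: int):
    """Return one [batch, height, width] mask per reference image."""
    anchors = torch.linspace(0, batch - 1, num_images)  # evenly spaced anchor frames
    span = (batch - 1) / (num_images - 1)               # distance between anchors
    t = torch.arange(batch, dtype=torch.float32)
    masks = []
    for anchor in anchors:
        # Triangular weight: 1 at the image's own anchor, 0 at its neighbours,
        # so at every frame the per-image weights sum to 1.
        weight = (1.0 - (t - anchor).abs() / span).clamp(0.0, 1.0)
        masks.append(weight.view(-1, 1, 1).expand(batch, height, width))
    return masks

masks = fading_masks(4, 96, 512, 512)  # e.g. four reference images, 96 frames

For a seamless loop you would additionally wrap the fade around the ends of the batch; that part is left out here for brevity.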
very nice flow! Same question about more than 2 sources!
Can it morph between more than 3 images, in an animation that totals less than 5 seconds?
How do I avoid large black circles?
Hey, lower the QRCode Controlnet strength or bypass it entirely.
You're a legend! Keep up the EPIC WORK
Good work!!
Question: what is the use of the "Hard switch between reference images" group at the very top of the workflow? It doesn't seem to output any kind of data anywhere. Is it an alternative to the softer transition used in the group below it, which is actually connected to the rest of the workflow?
Everything works well, but I'm wondering if I'm missing something there!
Thanks again for this very inspiring share - I keep coming back to it when things go haywire with my own frankenstein spaghetti of nodes.
Ty so much for the kind words! 💙
Indeed, those "Hard switch" nodes aren't used in the base workflow, but they can be swapped in for the fading mask generation process to make a hard switch between reference images instead of fading from one to the other.
I simply tried to showcase different ways to create the attention masks for IPAdapter - it might inspire some people to expand on the base workflow for their specific use case.
@ipiv Thanks for the information about that alternative attention mask source. I've had some very interesting results by feeding the QRmonster source video itself, directly or composited, as an attention mask. I'm not sure if this would have potential with your expanding circles clip, though, since I made my tests using my own black-and-white mask animations, but it might be worth a try.
Now I get to test the new version with 4 source images - thanks for sharing it ! Your stuff is by far the most inspiring there is here on Civitai.
@AugmentedRealityCat No worries and thanks for the kind words!
At first I started out with exactly that - just masking the IPAdapter attention. Any b&w single image or batch can be fed to the attention mask, and you can definitely get some interesting and artistic results by doing that!
Keep at it, experimenting is the key!
What is the renaming rule for the CLIP Vision models?