AnimateLCM
LoRAs and Motion weights for fast video generation.
We support high-quality video generation in 4 steps.
Important Notes:
Use LCMScheduler for sampling.
CFG should be kept between 1 and 2. Use CFG 1 to save memory and inference time; use CFG > 1.5 with proper negative prompts for better visual quality.
The LoRA weight is set to 0.8 by default.
16 frames is preferred; videos that are much longer or shorter tend to cause generation failure.
For more details, please refer to https://github.com/G-U-N/AnimateLCM
Please tag our model with #AnimateLCM if you use it.
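The usage notes above can be sketched with the diffusers AnimateDiff integration. This is a minimal, untested sketch: the LoRA weight filename and the SD 1.5 base model "emilianJR/epiCRealism" are assumptions for illustration; check the wangfuyun/AnimateLCM repository for the actual files.

```python
import torch
from diffusers import AnimateDiffPipeline, LCMScheduler, MotionAdapter
from diffusers.utils import export_to_gif

# Load the AnimateLCM motion weights (repo name taken from the comments below).
adapter = MotionAdapter.from_pretrained("wangfuyun/AnimateLCM", torch_dtype=torch.float16)

# Any SD 1.5 base model should work; this one is an illustrative choice.
pipe = AnimateDiffPipeline.from_pretrained(
    "emilianJR/epiCRealism", motion_adapter=adapter, torch_dtype=torch.float16
)

# Use LCMScheduler for sampling, as the notes recommend.
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config, beta_schedule="linear")

# LoRA weight 0.8 by default (weight_name here is an assumption; verify in the repo).
pipe.load_lora_weights(
    "wangfuyun/AnimateLCM",
    weight_name="AnimateLCM_sd15_t2v_lora.safetensors",
    adapter_name="lcm-lora",
)
pipe.set_adapters(["lcm-lora"], [0.8])
pipe.enable_model_cpu_offload()

# 16 frames, CFG between 1 and 2, and 4 sampling steps, per the notes above.
output = pipe(
    prompt="a rocket launching from the desert, high resolution",
    negative_prompt="bad quality, worst quality, low resolution",
    num_frames=16,
    guidance_scale=2.0,
    num_inference_steps=4,
    generator=torch.Generator("cpu").manual_seed(0),
)
export_to_gif(output.frames[0], "animatelcm.gif")
```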
Contact: [email protected]
Comments (21)
wow! really? would love to see a demo of this in action (on various hardware)
That's a good point. We will try to do so.
@G_U_N if what you're describing is true, this is a real game changer.
@AstralNemesis Hi AstralNemesis, thank you for your comment. In our work, we did not focus on hardware optimization. Instead, we reduced the sampling steps required for video generation from 25 to just 4, and we did this without needing classifier-free guidance. Under similar conditions, our method can therefore generate approximately 12.5 times faster (calculated as 25 * 2 / 4). If you're familiar with LCM-LoRA, you can think of our approach as its video equivalent.
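The 12.5× figure in the reply above follows from counting UNet forward passes: with classifier-free guidance each sampling step runs the UNet twice (conditional + unconditional), while AnimateLCM skips CFG entirely. A back-of-the-envelope check:

```python
# Each CFG sampling step runs the UNet twice (conditional + unconditional pass).
baseline_unet_calls = 25 * 2    # 25 steps with classifier-free guidance
animatelcm_unet_calls = 4 * 1   # 4 steps, no CFG needed

speedup = baseline_unet_calls / animatelcm_unet_calls
print(speedup)  # 12.5
```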
@G_U_N I'm assuming this was trained on high-end hardware, a 3090 or 4090, etc. I personally have a lowly 3060 Ti and have avoided video because of the long render times. Something like this could really level the playing field, so to speak, and open up creativity and development.
How do I use this in ComfyUi? Thanks - it looks awesome!
Yeah, same here: the LoRA works but not the checkpoint.
sd15_t2v_beta.ckpt is the motion model.
@guidenglong thanks for the hint. For everybody else wondering where to get this:
https://huggingface.co/wangfuyun/AnimateLCM/tree/main
That's the package I found; gonna try it out now.
@macro I faced the same issue.
For everyone, here is an interesting workflow for AnimateLCM: https://civitai.com/models/313594/jboogx-and-machine-learner-animatelcm-workflow-vid2vid-controlnet-ipadapter-highresfix-reactor-face-swap
Same as @Macro & @RunningFountain: the LoRA works great, but the checkpoint does not load. I've tried everything; it throws the following error: AttributeError: module 'torch' has no attribute 'float8_e5m2'. Researching it turns up nothing useful. I verified I'm using the latest of everything; the few search hits said certain things might be out of date, but they're not. Is the checkpoint expecting the newest GPU spec or something? I'm on an older GPU.
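For context on the error above: torch.float8_e5m2 was only added in PyTorch 2.1, so the AttributeError usually means the checkpoint was stored in a float8 dtype that an older torch build cannot deserialize, regardless of the GPU. A hypothetical helper (not part of any library) that sketches the version check:

```python
# Hypothetical guard: torch.float8_e5m2 ships with PyTorch 2.1 and later, so a
# checkpoint stored in that dtype fails to load on older builds with exactly
# the AttributeError reported above.
def supports_float8(torch_version: str) -> bool:
    """Return True if this torch version should ship torch.float8_e5m2 (>= 2.1)."""
    major, minor = (int(part) for part in torch_version.split(".")[:2])
    return (major, minor) >= (2, 1)

print(supports_float8("2.0.1"))  # False: upgrade torch before loading the checkpoint
print(supports_float8("2.1.0"))  # True
```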
Hello, is there an explanation somewhere on how to use the LoRA and how it is useful? With it, the pictures have some heavy grain. Also, do you have a suggestion for a generative model to use with this motion module?
Any chance your team will be training an XL version of this? :D
Does it support prompt travel? It doesn't seem to change prompts over frames when using it in comfyui
May I know what prompt travel is?
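For readers with the same question: "prompt travel" assigns different prompts to different frame ranges, so the content changes over the course of the video. A minimal sketch of the idea (hypothetical helper, not part of AnimateLCM or ComfyUI):

```python
# "Prompt travel" as a keyframe->prompt schedule resolved per frame: each frame
# uses the prompt of the latest keyframe at or before it.
def prompt_for_frame(schedule: dict[int, str], frame: int) -> str:
    """Return the prompt of the most recent keyframe at or before `frame`."""
    keys = sorted(k for k in schedule if k <= frame)
    return schedule[keys[-1]] if keys else schedule[min(schedule)]

schedule = {0: "a cat walking", 8: "a dog running"}
print(prompt_for_frame(schedule, 3))   # a cat walking
print(prompt_for_frame(schedule, 12))  # a dog running
```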
Do you actually know why this model is much better, at least with LCM generation?
The best AnimateDiff LCM model so far. Any plan to train with more than 16 frames?
Hi. Where is the workflow?
Is this different from the original AnimateDiff LCM motion module, or is this just an old beta version? I see it says FAST video generation, so is this a newly trained LCM model?
It's the same, the original beta_t2v diff model.