Character-precise motion control (test). Improved on the basis of Animatediff SparseCtrl img2img2vid 4/8frames2VideoPrediction LLM SD15, this workflow adds keyframe motion control and an attention mask, and combines SparseCtrl with the latest version of Ip-adapter-plus together with ip_plus_composition_sd15. It can generate a 16/24/32-frame video from a single image. The lower the frame count, the more stable the result; the higher the frame count, the greater the range of motion.
Frontal or side views with a clear posture, and images where the face takes up a larger proportion of the frame, give better results.
PS: Video upscaling still has problems: it consumes a lot of VRAM and runs slowly, so it is temporarily unavailable.
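As a rough illustration of why higher frame counts cost more memory (the numbers below are illustrative assumptions, not measurements from this workflow): SD1.5 latents are 1/8 the pixel resolution with 4 channels, so latent size grows linearly with frame count, while AnimateDiff-style temporal attention builds a frames × frames map at each spatial latent position and so grows quadratically with it.

```python
def latent_bytes(frames, width=512, height=512, channels=4, dtype_bytes=2):
    """SD1.5 latent: 1/8 spatial resolution, 4 channels, fp16 assumed."""
    return frames * channels * (height // 8) * (width // 8) * dtype_bytes

def temporal_attn_elems(frames, width=512, height=512):
    """Temporal attention: a frames x frames score map at every spatial
    latent position (single head, illustrative simplification)."""
    return (height // 8) * (width // 8) * frames * frames

for f in (16, 24, 32):
    print(f"{f} frames: latents ~{latent_bytes(f) / 1024**2:.2f} MB, "
          f"attn maps x{temporal_attn_elems(f) // temporal_attn_elems(16)} vs 16f")
```

Doubling the frame count from 16 to 32 only doubles the latent storage, but quadruples the temporal attention maps, which matches the observation that longer clips are both heavier and less stable.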
Description
I2V can now generate videos with higher frame rates and better character consistency.
The problem of excessive VRAM consumption during video upscaling remains unsolved.
FAQ
Comments (5)
Thanks for your workflow, but I can't get it running. Error on Empty Latent Image (batch size not connected?) and error on BatchPromptSchedule (max frames).
I replaced the frame nodes with primitive nodes and improved some of the node download notes. You can directly delete the original node, add a primitive node, and connect it to the corresponding reroute, or you can re-download the workflow.
When the "cpu and cuda:0!" error occurs, 90% of the time it is due to insufficient VRAM. Reduce the resolution.
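Activation and latent memory scale roughly with pixel count, so reducing the edge length cuts VRAM quadratically. A minimal sketch of that rule of thumb (the 12 GB baseline below is a made-up example, not a measured figure from this workflow):

```python
def scaled_vram(base_vram_gb, base_edge, new_edge):
    """Memory ~ pixel count ~ edge length squared (rough rule of thumb)."""
    return base_vram_gb * (new_edge / base_edge) ** 2

# If a hypothetical 768x768 run needs ~12 GB, a 512x512 run needs roughly:
print(round(scaled_vram(12.0, 768, 512), 1))  # ~5.3 GB
```

So dropping from 768 to 512 on each side cuts memory to about 4/9 of the original, which is often enough to get past device/VRAM errors.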
I am new to this. How do I keep the person in the generated video the same as in the image? I used my own photo but got a different person in the video. Thanks. (Mac M2 Pro)
The underlying technology is SD1.5, which cannot keep the person in the video completely consistent with the image; it can only imitate the image's features as closely as possible. This workflow's technology is outdated; for the latest image-to-video technology, see Luma AI. In the future I may try to develop an image-to-video workflow based on FLUX, but unfortunately the corresponding nodes are not yet complete.
