A basic workflow that uses an input video to drive the subject's motion in the generated video.
2 x IP Adapters: one image input for the background, one image input for the subject
ControlNet: uses the COCO segmentation preprocessor to isolate the subject from the input video, plus the Depth Anything preprocessor
This workflow uses the IP Adapter V1 nodes. If you have updated to IP Adapter V2, please refer to my updated workflow here: https://civarchive.com/models/382012/tik-tok-dance-workflow
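The conditioning flow above can be sketched in plain Python. This is only an illustration of how the pieces relate, assuming a per-frame pipeline; all function names are hypothetical stand-ins for the ComfyUI nodes, not a real API.

```python
# Hypothetical sketch of the per-frame conditioning described above.
# Function names are illustrative placeholders, not ComfyUI's actual API.

def segment_subject(frame):
    # Stand-in for the COCO segmentation preprocessor:
    # produces a mask selecting the subject pixels from the input video frame.
    return {"mask": f"mask({frame})"}

def depth_map(frame):
    # Stand-in for the Depth Anything preprocessor.
    return {"depth": f"depth({frame})"}

def condition(frame, subject_ref, background_ref):
    # Two IP Adapters: one reference image conditions the subject,
    # one conditions the background. ControlNet receives the
    # segmentation mask and the depth map for this frame.
    return {
        "ip_adapter_subject": subject_ref,
        "ip_adapter_background": background_ref,
        "controlnet": [segment_subject(frame), depth_map(frame)],
    }

frames = ["frame0", "frame1"]
conditioned = [condition(f, "subject.png", "background.png") for f in frames]
```

In the actual workflow these steps are wired as nodes in the ComfyUI graph rather than called as functions, and AnimateDiff consumes the conditioned frames to produce temporally coherent video.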
Details
Downloads: 360
Platform: CivitAI
Platform Status: Available
Created: 3/25/2024
Updated: 9/26/2025
Deleted: -
Files
videoGenerationAnimateDiff_v10.zip
Mirrors
CivitAI (1 mirror)