Trained for 2000 steps with the phrase:
> "~action shot:0.2|^action shot:0.2"
using https://github.com/ntc-ai/conceptmod
All the animations use the trigger words "action shot" with a LoRA value swept from -4 to 4.
To create an animation:
python3 lora_anim.py -s -3.0 -e 3.0 -l "compvis-word_actionshot0.2^actionshot0.2-metho" -lp ", action shot" -np "nipples, child, weird image." -n 35 -sd 7 -m 8.0
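As a rough illustration of what the swept LoRA strength and the `-lp` trigger phrase do, here is a minimal sketch of how A1111 typically combines them in a prompt. The name `actionshot` is a placeholder, not the exact LoRA filename from the command above:

```python
# Hedged sketch: how a LoRA strength is usually injected into an A1111 prompt.
# "actionshot" and the defaults below are illustrative placeholders.
def build_prompt(strength: float, lora_name: str = "actionshot",
                 lora_prompt: str = ", action shot") -> str:
    # A1111 loads a LoRA via the <lora:name:weight> prompt tag;
    # the trigger words from -lp are appended after it.
    return f"<lora:{lora_name}:{strength}>{lora_prompt}"

print(build_prompt(-4.0))  # <lora:actionshot:-4.0>, action shot
```

Sweeping the weight in that tag from -4 to 4 across successive generations is what produces the individual animation frames.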
Comments (19)
Just for the bike-surfing Arielle alone, this deserves an award.
cool
On creating the animations: where do I put those commands? That is totally confusing. Can you explain like I'm 5?
Hi, it's a Python file you have to modify and then call from a command line. Sorry, I don't have easy instructions yet, as I'm still working on the hard instructions :)
The animation creation could eventually be an A1111 plugin if someone would like to build it. I can't right now because I'm busy with other projects.
@ntc Yeah I'm using A1111 so a plugin would be wicked awesome. If nothing else, I can wait for the 'hard instructions' once you've compiled them. No worries - thanks for the reply!
@ntc I still don't know how to use lora_anim.py to create an animation. Could you please give brief instructions on how to use it? For example, how many frames do I have to generate before I can create an animation like your example? Where do I put those pictures before they are turned into an animation? And most important of all, how do I create the sequence frames using lora_anim.py?
@kuangzuanzhao
To try it easily, start up the animation RunPod here.
https://civitai.com/models/57334/angry-trained-without-data-new-runpod-easy-animations-for-any-lora
The lora_anim.py script queries A1111 repeatedly with different LoRA values, then assembles the results into a video with moviepy. It renders at most 120 frames (configurable).
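A minimal sketch of that loop, assuming A1111 is running with its `--api` flag on the default port. The payload fields mirror the flags from the command in the description; everything else (names, fps) is illustrative:

```python
import json
import urllib.request

MAX_FRAMES = 120  # lora_anim caps the frame count at 120 (configurable)

def frame_strengths(start: float, end: float, n: int) -> list[float]:
    """Evenly spaced LoRA strengths from start to end, capped at MAX_FRAMES."""
    n = min(n, MAX_FRAMES)
    if n <= 1:
        return [start]
    step = (end - start) / (n - 1)
    return [start + i * step for i in range(n)]

def render_frames(start: float, end: float, n: int, lora: str,
                  url: str = "http://127.0.0.1:7860"):
    # Assumed endpoint: A1111's /sdapi/v1/txt2img REST API (needs --api).
    for strength in frame_strengths(start, end, n):
        payload = {
            "prompt": f"<lora:{lora}:{strength:.3f}>, action shot",
            "negative_prompt": "nipples, child, weird image.",
        }
        req = urllib.request.Request(
            f"{url}/sdapi/v1/txt2img",
            data=json.dumps(payload).encode(),
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            json.load(resp)  # base64-encoded images under the "images" key;
            # decode and save each one as a numbered PNG frame here.
    # The saved frames are then joined into a video, e.g. with
    # moviepy.editor.ImageSequenceClip(frame_paths, fps=12).write_videofile(...)
```

This is a sketch of the mechanism described above, not the script's actual code; the real lora_anim.py also handles things like ImageReward-based frame selection.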
@ntc Thanks for the instructions. I looked at RunPod yesterday, but I'm currently using another cloud GPU service. Is it possible to use lora_anim.py on another cloud GPU to create animations, and if so, how?
@kuangzuanzhao Here's the Dockerfile that sets up the RunPod: https://github.com/ntc-ai/conceptmod/blob/main/docker/Dockerfile_lora_animation
@ntc Thanks for the invitation, but my models are all on another cloud service provider, so I have to stick with it. I also tried to rewrite lora_anim.py to make it work on my cloud service, but as of now I still encounter various problems. For example, when running, lora_anim.py automatically downloads a model called "pytorch_model.bin" which is about 10 GB in size. Is that actually GPT-3? Does that mean that in order for lora_anim.py to run successfully, I have to download the whole GPT-3 model? That's totally insane... Anyway, I will give it another try tomorrow; hopefully I can get it running and produce an animation at least once.
@kuangzuanzhao Understood. It shouldn't download GPT-3. It does download ImageReward, which theoretically helps select aesthetic starting/ending points. That should be a couple of gigabytes.
There is a section in my new tutorial that might be of interest to you (section 7): https://civitai.com/models/58873/conceptmod-tutorial-fire-train-any-lora-with-just-text-no-data-required
I want to make this more accessible to people so please share any insight you have in that regard.
Cheers
@ntc I am still working on rewriting and testing your lora_anim.py script, and after another half a day of struggling I've overcome many bugs, but I'm still stuck... My current problem is that I need the ViT-B/32.pt model, but I can neither git clone it on Linux nor download it directly on Windows. Do you have this model, or could you be so kind as to tell me where I can find it? Thanks again for your time. Cheers
@kuangzuanzhao Sorry it's proving to be such a pain to set up. Have you tried
pip3 install git+https://github.com/openai/CLIP.git --no-deps
? I copied that from the Dockerfile dependencies (line 25) https://github.com/ntc-ai/conceptmod/blob/main/docker/Dockerfile_lora_animation
@ntc Still working on rewriting and testing the script. It's been the fourth day and I still encounter tons of problems; I don't know why using a different cloud service causes so much trouble with the code. Anyway, I've decided to keep trying until I can successfully run the script. Cheers
After five days of struggling, I finally give up. It seems impossible to run lora_anim.py on another cloud service, or maybe it's just me. Such a shame. Anyway, thanks @ntc for the help. I hope you can come up with more detailed instructions on how to run the script.
I've just seen your GitHub repository. How much VRAM/CPU does it require? It looks incredible.
It takes about 21 GB of VRAM for the two default terms. The easiest way to run it is probably through RunPod, where a phrase costs ~$5 to train. I just released some Docker images here:
https://civitai.com/models/57334/angry-trained-without-data-new-runpod-easy-animations-for-any-lora
Thanks! Be sure to tag with conceptmod if you release something.
Would love to see an SDXL version of this; it really helps with adding dynamism to the images.