Trained for 2000 steps with: "#:0.41|~maniacal laughter:0.2|^maniacal laughter:0.05" using https://github.com/ntc-ai/conceptmod
Animations sweep the LoRA weight from 0.0 to 1.8. All examples use the trigger phrase: "maniacal laughter".
Words are pale shadows of forgotten names. As names have power, words have power. Words can light fires in the minds of ~~men~~ stable diffusion models.
Tutorial:
This tutorial is technical. For a much easier path, follow https://civarchive.com/models/58873/conceptmod-tutorial-fire-train-any-lora-with-just-text-no-data-required and run it on RunPod. It's cheap: about $5 to train a model and under $1 to create animations.
Local installation (technical)
Requires 20 GB of VRAM.
0) Install AUTOMATIC1111's stable-diffusion-webui (A1111)
1) git clone https://github.com/ntc-ai/conceptmod.git
2) cd conceptmod
3) edit train_sequential.sh and add your training phrase (see above)
4) install dependencies (see https://github.com/ntc-ai/conceptmod#installation-guide ) and ImageReward https://github.com/THUDM/ImageReward
5) train the model with `bash train_sequential.sh` (takes a while)
6) once training is running, new checkpoints are saved into `models`; if enabled, samples appear in the `samples` directory during training
7) move the checkpoints to an A1111 models path:
`mv -v models/*/*.ckpt ../stable-diffusion-webui2/models/Stable-diffusion/0new`
8) extract the LoRA with sd-scripts https://github.com/kohya-ss/sd-scripts (requires a new conda env)
Here is the script I use to create LoRAs from everything in a directory. Replace the directory and base model with yours, and run it from the sd-scripts project root.
https://github.com/ntc-ai/conceptmod/blob/main/extract_lora.sh
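If you want to adapt this yourself, here is a minimal sketch of such a batch script, assuming sd-scripts' `networks/extract_lora_from_models.py` interface; `BASE_MODEL`, `DIR`, and the `--dim 64` rank are placeholders you should adjust:

```shell
#!/usr/bin/env bash
# Sketch only: batch-extract a LoRA from every checkpoint in a directory.
# BASE_MODEL and DIR are placeholder paths -- point them at your base
# checkpoint and the directory of trained checkpoints. Run this from the
# sd-scripts project root so networks/extract_lora_from_models.py resolves.
BASE_MODEL="${BASE_MODEL:-/path/to/base_model.ckpt}"
DIR="${DIR:-/path/to/trained_checkpoints}"
OUT_DIR="${OUT_DIR:-$DIR/loras}"
DRY_RUN="${DRY_RUN:-1}"   # set DRY_RUN=0 to actually run the extraction

extract_all() {
  mkdir -p "$OUT_DIR" 2>/dev/null || true
  for ckpt in "$DIR"/*.ckpt; do
    [ -e "$ckpt" ] || continue           # skip if the glob matched nothing
    name="$(basename "$ckpt" .ckpt)"
    cmd=(python networks/extract_lora_from_models.py
         --model_org "$BASE_MODEL"
         --model_tuned "$ckpt"
         --save_to "$OUT_DIR/$name.safetensors"
         --dim 64)
    if [ "$DRY_RUN" = 1 ]; then
      echo "${cmd[@]}"                   # print the command instead of running it
    else
      "${cmd[@]}"
    fi
  done
}

extract_all
```

The dry-run default lets you sanity-check the generated commands before committing to the (slow) extraction pass.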
9) Start A1111's webui.py with `--api`
10) Create the animations
I used:
> python3 lora_anim.py -s -0.0 -e 1.8 -l "mania" -lp ", maniacal laughter" -np "nipples, weird image." -n 32 -sd 7 -m 2.0
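The author's `lora_anim.py` handles this for you. As a rough sketch of the underlying idea, each frame is a txt2img call with the LoRA weight interpolated across the sweep range; this assumes a stock A1111 instance started with `--api` at the default port, and reuses the "mania" LoRA name and prompts from the command above:

```python
# Sketch: sweep a LoRA weight across animation frames via the A1111 web UI
# API (/sdapi/v1/txt2img, available when the UI is started with --api).
import json
import urllib.request

def build_frames(start=0.0, end=1.8, n=32,
                 lora="mania", suffix=", maniacal laughter",
                 negative="nipples, weird image."):
    """One txt2img payload per frame, with the LoRA weight interpolated."""
    frames = []
    for i in range(n):
        w = start + (end - start) * i / (n - 1)
        frames.append({
            "prompt": f"<lora:{lora}:{w:.3f}>{suffix}",
            "negative_prompt": negative,
            "seed": 7,    # fixed seed so only the LoRA weight changes per frame
            "steps": 20,
        })
    return frames

def render(frames, url="http://127.0.0.1:7860/sdapi/v1/txt2img"):
    """POST each payload to a running web UI; returns base64-encoded PNGs."""
    images = []
    for payload in frames:
        req = urllib.request.Request(
            url, data=json.dumps(payload).encode(),
            headers={"Content-Type": "application/json"})
        with urllib.request.urlopen(req) as resp:
            images.append(json.load(resp)["images"][0])
    return images

if __name__ == "__main__":
    # Preview the prompt schedule without hitting the API.
    for frame in build_frames():
        print(frame["prompt"])
```

Keeping the seed fixed while only the weight moves is what makes the frames cohere into an animation rather than 32 unrelated images.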
If you create something with this, please tag it 'conceptmod'.
-
React!: https://civarchive.com/user/ntc/images?sort=Most+Reactions
Comments (10)
loled @ pizza trump almost keeping the same maniacal laughter from start to finish of the animation
These previews are awesome!! Well done!!
Can you please make a tutorial on how to do this on Runpod.io?
I started the runpod images. https://civitai.com/models/57334/angry-trained-without-data-new-runpod-easy-training-and-animations
The animation works, but I'm still getting training going.
just fyi, good suggestion
training should work too now
@ntc Thanks! right now it's loading. I will let you know about my experience.
@ntc The loading took a long time (still not finished). I'm worried I didn't set the storage parameters right. I set it to 5 GB Disk and 50 GB Pod Volume; that's what usually works for me for Dreambooth training with A1111 on Runpod.
@ntc I saw the recent tutorial you posted here on CivitAI, and it seems 5+50 GB was the default for your pod. Since I was always using that, I thought it was just my previous settings and that maybe it wouldn't work for this pod 😂
@kleind Yeah it takes a while to start due to installing dependencies. Thanks for trying it, let me know if you run into issues.
pip install taming-transformers-rom1504 'git+https://github.com/openai/CLIP.git@main#egg=clip' image-reward safetensors datasets matplotlib diffusers kornia
At this point, while installing dependencies, it throws this error:
ERROR: Invalid requirement: "'git+https://github.com/openai/CLIP.git@main#egg=clip'"
Hint: = is not a valid operator. Did you mean == ?