Trained for 2000 steps on:
#|fire%{random_prompt}, fire:-0.1|fire++:guidance=2
Animations use lora strength 0.0 to 1.1 with the trigger word "fire".
Here's how:
Use the runpod at https://runpod.io/gsc?template=8y3jhbola2&ref=xf9c949d
1. Create a pod
I chose a 3090; it needs > 20 GB of VRAM.
Don't encrypt your volume. The container disk defaults to 5 GB and the volume disk to 50 GB; those defaults are fine.
Click Continue.
It will take a few minutes to download conceptmod; once Connect becomes enabled, you are ready to continue.
2. Login to the web console
Click "Connect", then use SSH or "Start web console" to connect.
Once you log in, it will install dependencies (takes a minute) and then print a welcome message.
3. Send over the base model checkpoint that you want to train on:
Note: be sure to use a safetensors checkpoint.
Using https://github.com/runpod/runpodctl
on local:
runpodctl send mycheckpoint.safetensors
on pod:
cd /workspace/stable-diffusion-webui/models/Stable-diffusion/
runpodctl receive <code from send>
Note: You can also use scp, wget or a cloud storage attachment to transfer your model
4. Train on your phrase (takes 3 hours for 1000 steps)
cd /workspace/sd-scripts
python3 train-scripts/train-esd.py --prompt "#|fire%{random_prompt}, fire:-0.1|fire++:guidance=2" --train_method selfattn --ckpt_path /workspace/stable-diffusion-webui/models/Stable-diffusion/mycheckpoint.safetensors
It saves a checkpoint every 300 steps, which takes about an hour.
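As a rough illustration of how the multi-part --prompt string above breaks down: the "|" separator and the ":" weight/option suffix are inferred from the examples in this guide, not from the conceptmod source, so check the repo for the authoritative syntax.

```python
# Inferred, not authoritative: split the training prompt into its terms.
# Each "|"-separated part appears to be one modification, e.g. "fire:-0.1"
# pairs the phrase "fire" with a weight of -0.1, and "fire++:guidance=2"
# applies the "++" command with a guidance option.
prompt = "#|fire%{random_prompt}, fire:-0.1|fire++:guidance=2"
terms = prompt.split("|")
for term in terms:
    print(term)
```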
Selecting a phrase
Look at the models here and find one to modify: https://civarchive.com/tag/conceptmod?model=58873&sort=Newest
5. Extract the lora
bash /workspace/conceptmod/docker/extract_lora.sh /workspace/stable-diffusion-webui/models/Stable-diffusion/<mycheckpoint>.safetensors
The argument is your base checkpoint from (3).
6. (optional) Test the lora in webui
Your model (and the training intermediates) will now be available in webui as a lora. Select your base model from (3) and apply the lora to figure out what the strength should be.
Freeze the seed to manually see how lora strength changes the model.
For ease of use:
cd /workspace/stable-diffusion-webui/models/Lora
mv compvis-word_firefire%\{random_prompt\}-0.1-metho.safetensors fire.safetensors
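With the file renamed to fire.safetensors, applying it in the webui prompt box uses the standard a1111 lora syntax; the 0.8 strength and the prompt text here are just example values to sweep while the seed is frozen:

```
<lora:fire:0.8> a campfire at night, fire
```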
7. (optional) Create animations to show how your lora changes the images
choice a) To create an animation on your prompt:
python3 lora_anim.py -s 0.0 -e 0.7 -l "fire" -p "fire prompt"
-s is starting lora strength
-e is ending lora strength
-l "fire" is your lora
-p "fire prompt" is your prompt
choice b) To create one animation using a top 80k prompt appended with your trigger (like these previews):
python3 lora_anim.py -s 0.0 -e 0.7 -l "fire" -lp ", fire"
prompt defaults to https://huggingface.co/datasets/Gustavosta/Stable-Diffusion-Prompts
-lp ", fire" is your trigger.
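For reference, the documented flags map onto an argparse setup roughly like the sketch below. This is not the actual lora_anim.py source, which may define the options differently; treat the names and defaults as assumptions.

```python
# Sketch only: mirrors the flags documented above; the real lora_anim.py
# parser may differ in names, defaults, and required arguments.
import argparse

def build_parser():
    p = argparse.ArgumentParser(description="animate a lora strength sweep")
    p.add_argument("-s", type=float, default=0.0, help="starting lora strength")
    p.add_argument("-e", type=float, default=0.7, help="ending lora strength")
    p.add_argument("-l", help="lora name")
    p.add_argument("-p", help="explicit prompt (choice a)")
    p.add_argument("-lp", help="trigger appended to a sampled top-80k prompt (choice b)")
    return p

# The choice b invocation from the guide:
args = build_parser().parse_args(["-s", "0.0", "-e", "0.7", "-l", "fire", "-lp", ", fire"])
print(args.l, args.e, args.lp)
```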
Run continuously for many videos
while true; do python3 lora_anim.py -s 0.0 -e 0.7 -l "fire" -lp ", fire"; done
Break with ctrl-c. The animations are saved in the v4 directory as mp4 files.
Transfer your videos:
on pod:
runpodctl send v4
on local:
runpodctl receive <code from send>
8. Download the lora
on pod:
runpodctl send /workspace/stable-diffusion-webui/models/Lora/
on local:
runpodctl receive <code from send>
9. Stop and terminate the pod to stop paying money
on https://runpod.io/console.pods , stop and terminate the running pod
10. Post the lora to civitai with the tag conceptmod.
Include your training phrase for a 5 star review.
https://civarchive.com/tag/conceptmod?model=58873&sort=Newest
Comments
OK, I ran the following command (hoping to get a Lora to make things side view) and I need to wait another 4 hours I believe.
python3 train-scripts/train-esd.py --prompt "~side_view" --train_method selfattn --ckpt_path /workspace/stable-diffusion-webui/models/Stable-diffusion/deliberate_v11.safetensors
nice!
Where are all of these loras?
I almost reached the 300-step mark, and I want to try the created checkpoint. This raises the following question:
while the training is in progress can I use A1111 to generate images or will that interrupt the training? I don't want to find out the hard way 😃
From my testing, there isn't enough memory on a 3090 to do both. With enough memory it is possible.
Also, I don't know if 300, 1000, 2000 or 100000 is the right number of steps. Erasing (the repo this is based on) defaults to 1000, so that's what I set the runpod to.
@ntc I am using an A5000, but it has the same amount of VRAM. I don't know where the checkpoint is actually saved, and since JupyterLab wasn't available as a connection option for the pod and, for some reason, the ls and ll commands are not found, I don't know how to look for it either.
I'm so confused what I created...
I used the command "python3 train-scripts/train-esd.py --prompt "~side_view" --train_method selfattn --ckpt_path /workspace/stable-diffusion-webui/models/Stable-diffusion/deliberate_v11.safetensors"
and the resulting Lora makes everything not side view... but if I give it a negative weight it makes things side view, but everything except for people will disappear.
did I do something wrong? and what is the "~" doing? I just kept it there from the example "~laugh" that was shown in the ssh console.
@ntc it sort of works, but only with negative weight. I'll do more testing whether it's something useful and publish it if it is
@kleind Hey I think I found a bug with a recent refactoring that might have caused the '++' command to not work. I'm testing it now, but this may explain what happened here.
@kleind I updated the ++ command and this example shows how to use it. Seems like it works better.
@ntc I will give it another try (in a few days). thanks 👍
@kleind I found a faster method to train, check out my latest models. one term, trains in 4 hours or less.
In the command you shared for continuous creation of videos, there's an extra semicolon after the "do", I believe. It worked for me after I removed it.
noticed another small error in the guide
where it says
runpodctl send /workspace/stable-diffusion-webui/Lora/<thetrainedlora>.safetensors
it is missing the "models" dir before the "Lora" folder
Also, I think it's better to download the entire Lora folder. much easier than adding the Lora names one by one. I mean like so:
runpodctl send /workspace/stable-diffusion-webui/models/Lora/
Thanks, I've made these changes. Really appreciate the feedback!
Hi! Me again with a quick question. Is the generation of samples integral to the training process? It seems to spend a lot of time doing that, and I was wondering if it would disrupt anything to generate one every 10 iterations or so, rather than every iteration.
hey, sampling does not affect training, so feel free to lessen or disable it.
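A minimal sketch of what "sample every 10 iterations" could look like in a training loop. The train_step and generate_sample helpers are hypothetical stand-ins; the real functions in train-esd.py are structured and named differently.

```python
# Hypothetical sketch: only shows the gating pattern for sampling every
# Nth step instead of every step; not the actual train-esd.py loop.
SAMPLE_EVERY = 10

def run(steps, train_step, generate_sample):
    samples = []
    for i in range(1, steps + 1):
        train_step(i)
        if i % SAMPLE_EVERY == 0:  # sample only on every 10th iteration
            samples.append(generate_sample(i))
    return samples

# Stub run: 30 "steps" with trivial stand-ins yields 3 samples.
result = run(30, lambda i: None, lambda i: i)
print(result)
```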