This embedding was trained on ~160 high-quality close-up photos of Praying Mantises in various poses.
You might enjoy the rest of my embugging series:
FAQ
What Base model was used, please?
v1-5-pruned (hash e1441589a6). Full training settings:
{
  "datetime": "2023-01-28 23:14:22",
  "model_name": "v1-5-pruned",
  "model_hash": "e1441589a6",
  "num_of_dataset_images": 167,
  "num_vectors_per_token": 6,
  "embedding_name": "pmantis",
  "learn_rate": "0.005:100, 1e-3:1000, 1e-5",
  "batch_size": 1,
  "gradient_step": 1,
  "data_root": "/path/to/preyingmantis/512x640",
  "log_directory": "textual_inversion/2023-01-28/pmantis",
  "training_width": 512,
  "training_height": 640,
  "steps": 5000,
  "clip_grad_mode": "disabled",
  "clip_grad_value": "0.1",
  "latent_sampling_method": "once",
  "create_image_every": 50,
  "save_embedding_every": 300,
  "save_image_with_stored_embedding": false,
  "template_file": "textual_inversion_templates/embedsubject_filewords.txt",
  "initial_step": 622
}
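For anyone curious how the `learn_rate` string above behaves: it's the stepped-schedule format used by the A1111 webui, where each `rate:step` pair applies until that step and a trailing bare rate runs to the end. Here's a minimal sketch of a parser for it (the function names `parse_lr_schedule` and `lr_at` are my own, not part of the webui):

```python
def parse_lr_schedule(schedule: str):
    """Parse a stepped learning-rate schedule string such as
    "0.005:100, 1e-3:1000, 1e-5" into (rate, until_step) pairs.
    A trailing entry without a step applies until training ends."""
    pairs = []
    for part in schedule.split(","):
        part = part.strip()
        if ":" in part:
            rate, step = part.split(":")
            pairs.append((float(rate), int(step)))
        else:
            pairs.append((float(part), None))  # runs to the final step
    return pairs


def lr_at(pairs, step: int) -> float:
    """Return the learning rate in effect at a given training step."""
    for rate, until in pairs:
        if until is None or step <= until:
            return rate
    return pairs[-1][0]
```

So with the schedule above, steps 1-100 train at 0.005, steps 101-1000 at 1e-3, and everything after at 1e-5.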
Though there were a few iterations where I forgot to change back from Deliberate, so there's just a taste of that one in there.