While the built-in embedding for Emma Stone is not terrible, I was curious whether I could improve on it.
I used 443 sample images, all cropped and tagged manually, mostly chosen from the top 1,000 posts in her subreddit.
Description
Trained on a 3080Ti with 443 sample images, 2 Vectors, Batch Size 2, 70,000 steps, with a custom learning rate schedule:
5e-4:70, 1e-3:210, 2e-3:350, 3e-3:560, 4e-3:770, 5e-3:1050, 6e-3:1400, 7e-3:2030, 8e-3:3640, 7e-3:4130, 6e-3:4620, 5e-3:5320, 4e-3:6160, 3e-3:7350, 2e-3:9240, 1e-3:11200, 9e-4:11620, 8e-4:12180, 7e-4:12880, 6e-4:13650, 5e-4:14700, 4e-4:16030, 3e-4:17920, 2e-4:21210, 1e-4:24640, 9e-5:25550, 8e-5:26670, 7e-5:27930, 6e-5:29540, 5e-5:31570, 4e-5:34300, 3e-5:38150, 2e-5:43960, 1e-5:48580, 9e-6:49630, 8e-6:50680, 7e-6:51940, 6e-6:53200, 5e-6:54740, 4e-6:56490, 3e-6:58590, 2e-6:61460, 1e-6:63700, 9e-7:64190, 8e-7:64750, 7e-7:65380, 6e-7:66150, 5e-7:66920, 4e-7:67970, 3e-7:69230, 2e-7
The schedule is a short warm-up followed by a slightly tweaked exponential decay, which spends the bulk of the steps at low rates to refine details.
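For anyone unfamiliar with the schedule syntax above: each "rate:step" pair means "use this learning rate until that step", and a trailing rate with no step applies for the remaining steps. Here is a minimal sketch of how such a string can be interpreted, assuming that AUTOMATIC1111-style semantics (the helper names `parse_schedule` and `lr_at` are mine, not part of any tool):

```python
# Sketch: interpret an A1111-style learning-rate schedule string.
# Assumed semantics: each "rate:step" pair holds until that step;
# a bare final rate applies until training ends.

def parse_schedule(schedule: str):
    """Parse 'rate:step, rate:step, ..., rate' into (rate, until_step) pairs."""
    pairs = []
    for part in schedule.split(","):
        part = part.strip()
        if ":" in part:
            rate, step = part.split(":")
            pairs.append((float(rate), int(step)))
        else:
            # No step given: this rate applies for the rest of training.
            pairs.append((float(part), None))
    return pairs

def lr_at(pairs, step: int) -> float:
    """Return the learning rate in effect at a given training step."""
    for rate, until in pairs:
        if until is None or step < until:
            return rate
    return pairs[-1][0]

# Abbreviated version of the schedule above, for illustration:
pairs = parse_schedule("5e-4:70, 1e-3:210, 2e-3:350, 2e-7")
```

So with the full schedule, steps 0-69 run at 5e-4, steps 70-209 at 1e-3, and so on, with the final 2e-7 covering everything after step 69,230 up to the 70,000-step limit.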