Test model AYEISHA.
Showcasing a new training method I developed that I call "multires training". The idea is to train the LoRA on the same images at several different resolutions. As you can see, this model produces acceptable faces and details even when the subject is small in the overall picture; even small squinting eyes are rendered better.
This is possible because I trained the model at 512x512, 384x384, 256x256, and 128x128. In pictures where the subject is smaller, this lets SD draw on the data learned at the lower resolutions, which more closely match the subject's size in the frame.
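One way such a multi-resolution dataset could be laid out is as several fixed-resolution dataset entries in an sd-scripts `dataset_config.toml`, one per resolution tier. Note this is only a hedged sketch using standard sd-scripts config keys; the dev-branch multires feature may wire things up differently, and the directory names here are placeholders.

```toml
# Hypothetical layout: one [[datasets]] entry per resolution tier,
# all pointing at copies of the same images resized to that tier.
[general]
enable_bucket = false
caption_extension = ".txt"

[[datasets]]
resolution = 512
batch_size = 4

  [[datasets.subsets]]
  image_dir = "train/512x512"   # placeholder path
  num_repeats = 1

[[datasets]]
resolution = 256
batch_size = 4

  [[datasets.subsets]]
  image_dir = "train/256x256"   # placeholder path
  num_repeats = 1
```

The same pattern would repeat for the 384x384 and 128x128 tiers.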
At the moment the code required to train with multiple resolutions is in the dev branch: bmaltais/kohya_ss at dev (github.com)
You can use tools/resize_images_to_resolutions.py to create the multi-resolution dataset for training.
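Conceptually, building the dataset amounts to saving each source image at every target resolution. A minimal sketch of that step (an illustration only, not the actual tools/resize_images_to_resolutions.py script; function and directory names here are made up):

```python
from pathlib import Path

from PIL import Image

# Square resolutions used for the multires training tiers.
RESOLUTIONS = [512, 384, 256, 128]


def build_multires_dataset(src_dir: str, dst_dir: str) -> None:
    """Save every PNG in src_dir at each tier resolution under dst_dir.

    Output layout: dst_dir/512x512/img.png, dst_dir/384x384/img.png, ...
    """
    src = Path(src_dir)
    for res in RESOLUTIONS:
        out = Path(dst_dir) / f"{res}x{res}"
        out.mkdir(parents=True, exist_ok=True)
        for img_path in src.glob("*.png"):
            with Image.open(img_path) as img:
                # Lanczos resampling preserves fine detail (faces, eyes)
                # better than bilinear when downscaling.
                resized = img.resize((res, res), Image.LANCZOS)
                resized.save(out / img_path.name)
```

With the tiers on disk, each resolution folder can then be fed to the trainer as its own dataset.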
Let me know what you think.
Description
1st test model