can you imagine this lora was trained on just 3 images? yeah, me neither, but it works!
how did this happen?
i generated the initial images with perchance: a front view, a side view, and a rear view, coherent enough to resemble the same character. after using comfyui locally to make variations and upscale them, i thought "why not try to make a lora?"
after training my last lora, i had been thinking for a while that "in theory" it should be possible to make a character lora with just 3 images and minimal tagging for trigger words.
this way the lora learns only the character, with no random clutter bloating training steps and generations. the fact that this is only gen 4 of the training and it is already slightly overbaked suggests that lora training with a tiny dataset and minimal tags is possible, and fast and efficient too.
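as a rough sketch of what "3 images, minimal tagging" means in practice, the snippet below builds a tiny kohya-style dataset folder: three view images, each paired with a caption file containing little more than a trigger word. the trigger word `mychar`, the repeat count, and the file names are all hypothetical placeholders, not the actual dataset used here.

```python
from pathlib import Path

# hypothetical trigger word and kohya-style layout:
# a "<repeats>_<name>" folder with one .txt caption per image
TRIGGER = "mychar"
views = ["front", "side", "rear"]

dataset = Path("dataset/10_mychar")
dataset.mkdir(parents=True, exist_ok=True)

for view in views:
    # each image (e.g. front.png) gets a matching caption file;
    # minimal tagging: just the trigger word plus the view
    caption = dataset / f"{view}.txt"
    caption.write_text(f"{TRIGGER}, {view} view\n")

print(sorted(p.name for p in dataset.glob("*.txt")))
# → ['front.txt', 'rear.txt', 'side.txt']
```

with captions this sparse, everything that is constant across the three images gets absorbed into the trigger word, which is exactly what keeps the lora free of clutter.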
in hindsight, one might also check out the hyper lora paper and the experimental model that can generate lora weights on the fly during generation.