Disclaimer: This artificial intelligence model ("the Model") is developed solely for artistic expression and technological demonstration, not for creating or distributing illicit or inappropriate content. The creator expressly disapproves of such use. By downloading or using the Model, you ("the User") agree not to misuse it in any manner contrary to these intentions and accept all responsibility for compliance with these terms.
Comments (4)
Raz, I would like to challenge your viewpoint regarding quality. I hope you'll take my points into consideration.
- "Forever" is a deceptive notion when it comes to ML models. At most it'll be a few months before someone invents a new training method, collects a better dataset, or a new base checkpoint comes out and the same model gets retrained, making even a "high quality" model obsolete. Meanwhile, unless one actively cleans up superseded files, models on the hard drive actually are forever.
- The assertion that smaller models are "subpar" is inaccurate. Whether you acknowledge it or not, size IS a factor for the user (limited disk space, loading time, initial download time, perhaps other reasons), as is convenience (e.g. do I have to clear up space to hold the model, or resize it myself?). Meanwhile, having test-converted several person models at minimal settings (to ~20 MB), the post-resize Frobenius norm retention is consistently >99% and the results are pretty much visually indistinguishable to my eye. It's like saying audio should only be stored as uncompressed WAV because MP3 is lossy; only an audiophile could (pretend to) tell the difference. Besides, with model training the extra size might not even contribute to quality the way it does in a recording, since AFAIK a larger model requires a larger dataset to train adequately; I'm not sure where LoRA dims stand on that scale.
- "Resize them in kohya with just a few clicks" is very wrong. One needs to install kohya and get it actually working, learn how to use it, understand how the conversion parameters work and figure out which settings resize to an acceptable standard (because practically nobody explains what sv_fro and the like actually mean), have the hardware to convert the model, and spend GPU time converting each such model. Knowledge, hardware (= money), time: these are all precious resources, and quite frankly few people have all three in abundance.
- If you were to spend a little time resizing your models and publish the compressed file, even as a secondary version or an extra file, it would hardly be detrimental to anyone's perception of your models' quality (since you make it explicit that it is a "lower quality" version and the user made their choice by selecting it), and it would both save plenty of cycles and space on everyone else's machines and earn you additional downloads and likes amidst the "plebeians". Note how there's always someone asking for a pruned version of models.
- And a side note: not all models can currently be resized; LyCORIS models seem to throw an error, as noted on the repo. Yours are all LoRAs, but that's one more space hog across the site.
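To make the >99% Frobenius-retention claim concrete, here is a minimal sketch of what SVD-based LoRA resizing does, in the spirit of kohya's sv_fro criterion. All dimensions, the rank, and the synthetic matrix are illustrative assumptions, not taken from any particular model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a LoRA delta-weight matrix: a low effective rank component
# plus small noise (dimensions are illustrative, not from a real model).
W = rng.normal(size=(320, 8)) @ rng.normal(size=(8, 320)) \
    + 0.01 * rng.normal(size=(320, 320))

# Factor via SVD and truncate to a smaller rank, as a resize would.
U, s, Vt = np.linalg.svd(W, full_matrices=False)
new_rank = 8  # hypothetical resize target
W_small = (U[:, :new_rank] * s[:new_rank]) @ Vt[:new_rank, :]

# Frobenius-norm retention: fraction of the matrix "energy" kept,
# i.e. sqrt(sum of kept squared singular values / total).
retention = np.sqrt(np.sum(s[:new_rank] ** 2) / np.sum(s ** 2))
print(f"Frobenius norm retained: {retention:.4%}")
```

When the underlying weight update really is low-rank (as LoRA deltas are by construction), the truncated factors reproduce it almost exactly, which is why aggressive resizes can still score >99% on this metric.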
You need to fix your Buy me a beer link :-)
Your work is seriously impressive. Thank you and followed
It's perfect. Can we get a recent version as well?