CivArchive

    Greetings! YiffAI (YAI) is finetuned from the base SD 2.x 768-v model using the finetune variant of the kohya repo, and houses all sorts of vectors for creating a myriad of (anthro) furs! This model has been tuned for a total of around 400+ hours on a lone 3090 to produce quite dazzling results in a wide range of styles, at least those that SD 2.x can manage itself.

    1/26/23: Training has now shifted to SD 2.0 768-v as the base model, starting with YAI 2.3.22. This model does not explicitly require xformers like the SD 2.1 768-v-based models do, but everything else still applies. If you wish to use the SD 2.1-based models, make sure you read up on the xformers requirement further down in this description!

    Generations are best made at 768x768, or a few steps up or down. 512x512 and below does not give results as good, as with most models based on the 768-v series.

    Extra information, prompts, guides, and more available on the Discord server where this model originates from: Furry Diffusion Discord.


    Importantly, this model does not use artist tags beyond those naturally available in SD 2.x itself. -No artists have been added in.- Instead, use actual, bona-fide style terms to great effect, such as watercolor, countershade, rim lighting, or even kemono! Of course, it also knows a fair amount of more... particularly furry topics for image ideas. It wouldn't have a mark otherwise.

    Assorted Notes:
    You will require either the latest version of Automatic1111's WebUI or a similarly capable interface to use this model. If you have run Auto1111's UI before, you will almost assuredly need to delete the /venv folder in the installation directory. It will be remade on your next launch; this is needed so that all your dependencies are updated to load an SD 2.x model.
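    The venv reset described above can be sketched as follows (the install path is an assumption; point WEBUI_DIR at your own Auto1111 folder):

    ```shell
    # WEBUI_DIR is an assumption: adjust it to your Auto1111 install location.
    WEBUI_DIR="${WEBUI_DIR:-$HOME/stable-diffusion-webui}"

    # Remove the old virtual environment; rm -rf is safe even if it is absent.
    rm -rf "$WEBUI_DIR/venv"

    # On the next launch, webui.sh (or webui-user.bat on Windows) recreates
    # venv and pulls in the updated dependencies needed for SD 2.x models.
    ```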

    You will require xformers to run the 2.1-based models, or you must use the --no-half command-line argument (or a similar argument) in your batch file to run at full precision instead. If you use neither xformers nor --no-half, your images will come out all black! Heed this warning. If you are willing to reinstall or need to install an interface, https://www.reddit.com/r/StableDiffusion/comments/zpansd/automatic1111s_stable_diffusion_webui_easy/ makes this very easy to do.
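    For Auto1111, those launch arguments usually go in webui-user.sh (or webui-user.bat on Windows). A minimal sketch, assuming the standard Auto1111 layout:

    ```shell
    # Option A: use xformers (required for the 2.1-based models):
    export COMMANDLINE_ARGS="--xformers"

    # Option B: no xformers available, so run at full precision instead
    # (uncomment this line and comment out the one above):
    # export COMMANDLINE_ARGS="--no-half"
    ```

    With neither flag set, SD 2.1 768-v generations come out as solid black images, as the warning above describes.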

    You must have the YAML file(s) in the same folder that you extract the model to, or wherever the model resides. For an Auto1111 install, this is usually the "/models/stable-diffusion" folder. The YAML must also have the same filename as the model!

    This model, like SD 2.x itself, has been trained with clip skip! Your results can vary greatly between clip skip and no clip skip. If you do not wish to use clip skip, or want a taste of the crazier side, rename your model's filename to end in "noskip" so that it matches the other YAML file. Otherwise, do not touch this file's naming scheme!* Addendum: multiple YAMLs cannot be uploaded at present, so the only available config is the clip-skip one.
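    The file-pairing rules above can be sketched like this (the folder is the standard Auto1111 location, and the "yai-example" filenames are hypothetical, used only to show the matching-basename requirement):

    ```shell
    # Standard Auto1111 model folder (assumption; adjust to your install):
    MODEL_DIR="models/Stable-diffusion"
    mkdir -p "$MODEL_DIR"

    # Checkpoint and config must share a basename (names are hypothetical):
    touch "$MODEL_DIR/yai-example.ckpt" "$MODEL_DIR/yai-example.yaml"

    # To pair the checkpoint with a hypothetical "noskip" YAML instead,
    # rename the checkpoint so the basenames match again:
    mv "$MODEL_DIR/yai-example.ckpt" "$MODEL_DIR/yai-example-noskip.ckpt"
    ```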

    Description

    Uploaded 1//23

    The model's training was continued and finished to the current target after another six epochs from 2.2.14. This is the model's original target epoch, but the earlier epoch came out well too. The generated images are nearly the same, seed- and workflow-wise, as the images for 2.2.14, with the exception of the lynx image, as the workflow used to create that one was lost.

    A pruned model option is included for ease of use.

    FAQ

    Comments (5)

    robot1me · Jan 3, 2023
    CivitAI

    Very promising model! :) The biggest catch so far is just getting it to work. Locally I was able to get this running with a fresh Stable Diffusion web UI install (as described by you), but on Google Colab and Huggingface it's really struggling. If you can test that on a Colab like https://colab.research.google.com/github/acheong08/Diffusion-ColabUI/blob/main/Diffusion_WebUI.ipynb that would be awesome

    lyco
    Author
    Jan 3, 2023

    A lot of colabs have issues due to the very paltry amount of RAM that Colab offers on its free machines, which makes SD 2.x models hard to use. In the Discord server where this model was first hosted, those that used Colab ended up having to use TheLastBen's colab: https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb in order to get it to work. The main difference appears to be that the colab you linked does not set --lowram for large models and instead tries to use --lowvram, while TheLastBen's colab uses --lowram to allow the larger models to load. At least, it seems that way at a cursory glance.

    lyco
    Author
    Jan 5, 2023

    A follow-up: https://colab.research.google.com/drive/1wDE62GQxpARm6VKZdu1ex_UW4lFxoi7m#scrollTo=gfKvWAVnz8OB is a mod I made to the colab notebook offered in the server this model originates from. It loads up SD 2.x models fine now.

    SlugCat · Jan 3, 2023 · 3 reactions
    CivitAI

    The premise of "use actual, bona-fide style terms" is flawed. You can specify a medium like watercolor, sure, but for "styles" to be actually useful would require that those terms be tagged to images in the dataset. Compared to artist names, that's basically unheard of.

    lyco
    Author
    Jan 3, 2023

    As this model was trained with EMA active, a lot of SD 2.1 768-v's medium/style terms were preserved for the most part, it seems, though I think the realistic stuff isn't really there anymore? A couple of people on the Discord where this is also hosted managed to get somewhat close, at a cursory glance.
    Regardless, the point of that statement is more that this model needs a different style of prompting, i.e. not just putting in a furry artist, to get good results out of it, compared to the only other popular local model so far that I know of, yiffy-e18, or the countless merges of it. Impressionism, tenebrism, stuff like that has an effect on the output image too, insofar as I know.

    Checkpoint
    SD 2.1 768
    by lyco

    Details

    Downloads
    665
    Platform
    CivitAI
    Platform Status
    Available
    Created
    1/2/2023
    Updated
    5/11/2026
    Deleted
    -

    Files

    yiffai2322_yai2220.ckpt

    Mirrors

    HuggingFace (1 mirror)
    CivitAI (1 mirror)

    yiffai2322_yai2220.ckpt

    Mirrors

    CivitAI (1 mirror)

    yiffai2322_yai2220.yaml

    Mirrors

    CivitAI (83 mirrors)