CivitAIArchive
    Checkpoint
    SD 1.5
    anime
    porn
    base model
    nudity
    hentai

    Hassaku aims to be a model with a bright, clear anime style. The model's focus is NSFW images, but with a high emphasis on good-looking SFW images as well. Join my Discord for everything related to anime models and art. You can support me on my Patreon if you are interested.

    My models: sudachi (flat 2D), koji (2D), yuzu (light semi-realistic), grapefruit

    Supporters:

    Thanks to my supporters on Patreon: Riyu, SETI, Jelly, Alessandro, and Kodokuna!

    You can support me on my Patreon, where you can get my other models and early access to hassaku versions.

    _____________________________________________________

    Using the model:

    Use mostly Danbooru tags. No extra VAE is needed. For better prompting, use this LINK or LINK, but instead of {} emphasis, use () — stable-diffusion-webui uses (). Use "masterpiece" and "best quality" in the positive prompt, "worst quality" and "low quality" in the negative.

    My negatives are: (low quality, worst quality:1.4), with monochrome, signature, text, or logo added when needed.

    Use a clip skip of 1 or 2. Clip skip 2 is better for private parts, img2img, and prompt following. Clip skip 1 is visually better because, I assume, the model has more time and freedom there. I use clip skip 2.

    Don't use face restore, and don't use underscores (_): type red eyes, not red_eyes.

    Don't go to really high resolutions. Every model, hassaku included, gets lost in the vastness of big images and has a much higher chance to generate, for example, a second anus.
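The prompt conventions above can be sketched in a few lines of Python. This is a minimal illustration (the helper names are my own, not part of any tool): underscores become spaces, {}-style emphasis becomes the ()-style that stable-diffusion-webui expects, and the recommended quality tags are prepended.

```python
def normalize_tag(tag: str) -> str:
    """Danbooru tag -> prompt-ready tag: red_eyes -> red eyes, {smile} -> (smile)."""
    return tag.replace("_", " ").replace("{", "(").replace("}", ")")

def build_prompts(tags):
    """Assemble positive/negative prompts with the recommended quality tags."""
    positive = ", ".join(["masterpiece", "best quality"] + [normalize_tag(t) for t in tags])
    negative = "(low quality, worst quality:1.4)"
    return positive, negative

pos, neg = build_prompts(["1girl", "red_eyes", "{smile}"])
print(pos)  # masterpiece, best quality, 1girl, red eyes, (smile)
```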

    _____________________________________________________

    Loras:

    Every LoRA that is built to work on anyV3 or the OrangeMixes works on hassaku too. Some can be found here, here, or on Civitai from lottalewds, Trauter, Your_computer, ekune, or lykon.

    _____________________________________________________

    Black result fix (VAE bug in the web UI): use --no-half-vae in your command-line arguments.

    I use an Eta noise seed delta of 31337 or 0, with a clip skip of 2, for the example images. Model quality was mostly tested with the DDIM and DPM++ SDE Karras samplers. I love DDIM the most (because it is the fastest).
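If you drive the web UI over its API, the settings above can be collected in one txt2img payload. This is a sketch under the assumption that stable-diffusion-webui is running with --api; the setting keys (CLIP_stop_at_last_layers, eta_noise_seed_delta) follow the webui's override_settings naming, and the helper function name is my own.

```python
def hassaku_payload(prompt: str) -> dict:
    """Build a /sdapi/v1/txt2img request body using the recommended settings."""
    return {
        "prompt": "masterpiece, best quality, " + prompt,
        "negative_prompt": "(low quality, worst quality:1.4)",
        "sampler_name": "DDIM",            # fastest; DPM++ SDE Karras also tested
        "steps": 20,
        "width": 512, "height": 768,       # stay near the base resolution
        "override_settings": {
            "CLIP_stop_at_last_layers": 2,   # clip skip 2
            "eta_noise_seed_delta": 31337,   # or 0
        },
    }

# usage (hypothetical local webui):
# requests.post("http://127.0.0.1:7860/sdapi/v1/txt2img", json=hassaku_payload("1girl"))
```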

    Version: V1
    SD 1.5 Standard

    Determining what is better or worse compared to my other model grapefruit is difficult, because a model can be used in various ways and training has a broad impact overall; I can't test or predict everything. Some feedback would be nice.

    What is worse:

    - On some occasions, hands again

    - Images sometimes have a darker, muddier look (see what is better)

    - Faces sometimes lack the detail of the body and look a bit detached

    What is better/worse:

    - Less motion lines

    - Its style tends to be more semi-realistic on some images

    What is better:

    - Better image quality before highres (not on all images, but on the majority)

    - Better contrast -> stronger lights and shadows (but it can create dark, muddy images)

    - Better skin textures

    - Some backgrounds, like mountains, look better

    - On some images, backgrounds are better structured

    Details:

    Trained on a small dataset (800 images).

    VAE included in the model.


    15,219 Downloads
    Archive status: Available
    CivitAI status: Deleted
    Created: 3/26/2023

    Files