CivArchive
    DJZ AssassinKahb [Cascade Test Lora] - v1

    V1 is an evaluation LoRA used to create a baseline.
    Training settings were refined for the next versions; V2 through V4 were all trained with ADAFACTOR using revised settings:
    V2 - Rank 16, Alpha 1
    V3 - Rank 32, Alpha 1
    V3b - Rank 32, Alpha 8
    V3c - Rank 32, Alpha 4
    V4 - Rank 64, Alpha 1


    Round 2

    V5 - Trained with ADAFACTOR with final settings: Rank 32, Alpha 1, reduced learning rate (LR 2e-3)

    Switched to PRODIGY

    V6-early = ADAFACTOR, Rank 32, Alpha 1
    V6 = ADAFACTOR, Rank 32, Alpha 1, offset noise 0.06

    V7-early = PRODIGY, Rank 64, Alpha 1
    V7 = PRODIGY, Rank 64, Alpha 1, initial D 1e-06

    V8-early = PRODIGY, Rank 64, Alpha 1
    V8-mid = PRODIGY, Rank 64, Alpha 1
    V8 = PRODIGY, Rank 64, Alpha 1
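    For context on what the Rank setting changes: a LoRA adds two low-rank matrices per adapted layer, so the number of trainable parameters scales linearly with rank. A minimal sketch (the 1024-wide layer is illustrative only, not a Cascade-specific figure):

```python
def lora_params(rank, d_in, d_out):
    # LoRA learns a low-rank update W + B @ A, with
    # A of shape (rank, d_in) and B of shape (d_out, rank),
    # so the adapter holds rank * (d_in + d_out) parameters.
    return rank * (d_in + d_out)

# doubling rank doubles the trainable parameters of the adapter
for rank in (16, 32, 64):
    print(rank, lora_params(rank, 1024, 1024))
```

    This is why the Rank 64 versions produce noticeably larger .safetensors files than the Rank 16/32 ones.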

    As shown in the video, the early epochs seem to perform better beyond txt2img.

    Study presentation: https://www.figma.com/file/NM9dSIwcKoyZpOSYqi4pKs/OneTrainer-Stable-Cascade-(LR-adjustment)

    Models: https://civarchive.com/models/320332
    Dataset: https://github.com/MushroomFleet/assassinKahb-1024

    Article: Training Stable Cascade with OneTrainer

    OneTrainer: https://github.com/Nerogar/OneTrainer

    my preset for OneTrainer: https://pastebin.com/tLri1HdU

    This is a .json file; place it inside OneTrainer\training_presets.
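    Installing the preset can be scripted; a minimal sketch (the file and folder paths are assumptions, adjust to your own download location and OneTrainer checkout):

```python
import shutil
from pathlib import Path

def install_preset(preset_path, onetrainer_root):
    # copy the downloaded .json preset into OneTrainer's preset folder
    dest = Path(onetrainer_root) / "training_presets"
    dest.mkdir(parents=True, exist_ok=True)
    return shutil.copy(preset_path, dest / Path(preset_path).name)
```

    After copying, the preset appears in OneTrainer's preset dropdown on the next launch.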

    These versions were trained to compare the Rank setting.

    test prompt:

    • AssassinKahb style a demonic looking skeleton holding a sword with red hair

    txt2img-lora: workflow

    img2img-lora: workflow

    txt2img-vision-lora: workflow

    txt2img-remix-lora: workflow

    At the time of posting, the LoRA implementation is incomplete in most web UIs. You will be able to use this if you have updated ComfyUI to the latest version.

    Description

    This LoRA only makes this character, or blends the character with your prompts; use the full test prompt to get closer to the character.


    Comments (4)

    halr9000
    Feb 25, 2024 · 4 reactions

    How long and how many resources did it require?

    driftjohnson
    Author
    Feb 25, 2024· 2 reactions

    Training requirements vary with your project. How big are your images? How many are there? Are you using an optimizer to reduce load (there are 15 different ones)? What batch size, what learning rate, how many epochs? All of these factors influence the answer, so a single number would be meaningless.
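    The epoch/batch arithmetic behind that point can be sketched as follows (the 100-epoch figure is hypothetical, chosen only to illustrate the calculation):

```python
import math

def total_steps(num_images, batch_size, epochs):
    # one optimizer step per batch; ceil covers a partial final batch
    return math.ceil(num_images / batch_size) * epochs

# hypothetical run: an 11-image dataset at batch size 1 for 100 epochs
print(total_steps(11, 1, 100))  # 1100 steps
```

    Total wall-clock time is then roughly steps multiplied by seconds-per-step, which in turn depends on resolution, optimizer, and GPU.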

    "This piece of string here is 9 inches long."

    I want to give you a useful answer; however, a direct answer to that question will not help you.

    Instead, I'll give you some basic information on the evaluation dataset that was used to train this LoRA.

    There were 11 images of 1024x1024. Each image features the same character in a different pose, drawn in the same style with a similar aesthetic; however, some inconsistencies in the character art do exist, providing room for stylistic approximation.

    Each image was paired with a matching filename text file, containing the tag for training:
    "AssassinKahb style a demonic looking skeleton woman holding a sword with red hair"
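    Pairing every image with a same-stem caption file can be scripted; a minimal sketch (the folder path and .png extension are assumptions about how the dataset is stored):

```python
from pathlib import Path

CAPTION = ("AssassinKahb style a demonic looking skeleton woman "
           "holding a sword with red hair")

def write_captions(dataset_dir, pattern="*.png"):
    # next to each image, write a .txt with the same stem
    # (img1.png -> img1.txt) containing the training caption
    for img in sorted(Path(dataset_dir).glob(pattern)):
        img.with_suffix(".txt").write_text(CAPTION, encoding="utf-8")
```

    OneTrainer (like most trainers) picks up these sidecar .txt files automatically when captions are enabled.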

    This caption is historic; it was used to train baseline checkpoints and LoRAs in SD1.5, SD2.1 and SDXL.
    I used an RTX 4090 24GB GPU at batch size 1. Training took around 40 minutes and used at least 12GB of VRAM.


    I used ADAFACTOR with default settings to train.


    That's the best I can do, but it only gives you hints, because your mileage may vary with your own project ;)

    tscott65
    Mar 24, 2024

    @driftjohnson since (if I'm understanding correctly) you trained SD1.5 and SDXL with the same data, can you tell us the relative training times compared to these?

    driftjohnson
    Author
    Mar 25, 2024

    @tscott65 I spent more time training SD2.1 and SDXL, so I rarely worked with 1.5. It depends on a lot of factors; however, considering that you get 1024x with SDXL, I have to say SDXL takes less time. It's hard to fairly compare them owing to SD1.5's smaller base dimensions. As an example, I used to train 1.5 at 768x768 and 2.1 at 1024x1024. In SDXL you can train 1280x images, although aspect bucketing means they are likely resized automatically. It's really tough to call a fair comparison.

    This particular study compared SDXL and Stable Cascade, using the same dataset and settings.

    LORA
    Stable Cascade

    Details

    Downloads
    66
    Platform
    CivitAI
    Platform Status
    Available
    Created
    2/24/2024
    Updated
    5/12/2026
    Deleted
    -
    Trigger Words:
    AssassinKahb style

    Files

    Zenkai_V01_E8_R30_LR3e-3.safetensors
