CivArchive
    OpenNiji-V2 - OpenNiji-V2-Release

    OpenNiji-V2

    The NEW Stable Diffusion model trained on 180k Nijijourney images!

    Acknowledgements

    Results

    1girl, eyes closed, slight smile, underwater, water bubbles, reflection, long light brown hair, bloom, depth of field, bokeh

    masterpiece, best quality, 1girl, aqua eyes, baseball cap, blonde hair, closed mouth, earrings, green background, hat, hoop earrings, jewellery, looking at viewer, shirt, short hair, simple background, solo, upper body, yellow shirt

    1girl, looking at viewer, (highly detailed), (realistic), reflections (transparent) iridescent opaque jacket, long transparent iridescent hair, bloom, depth of field, bokeh, cinematic lighting, dynamic pose, (full body), ((ultra realistic perfect face))

    Description

    OpenNiji V2 (left as a separate model to help differentiate statistics)

    FAQ

    Comments (29)

    DiaryOfSta · Mar 1, 2023 · 1 reaction
    CivitAI

    Great!

    alexds9 · Mar 2, 2023 · 3 reactions
    CivitAI

    Hi, can you please clarify:
    What's the base model that you used for training: NAI, AnythingV3, or OrangeMix?
    How many images did you use for the training?

    AshtakaOOf · Mar 2, 2023 · 2 reactions

    Everything is in the description...

    180k images, and the base model is Silicon29 from Xynon, available on Hugging Face.

    sdf34gdssdg4 · Mar 12, 2023

    @AshtakaOOf How did you manage to generate 180k images from Niji?!

    alexds9 · Mar 12, 2023

    @sdf34gdssdg4 
    Magic...

    alexds9 · Mar 12, 2023

    @sdf34gdssdg4 
    I asked him about the base model, and he pointed me to an empty description.
    180k is an absurd number.
    I'm not sure what's happening here, but something is definitely wrong...

    AshtakaOOf · Mar 12, 2023

    @alexds9 All of these images come from the sharing channel in the Nijijourney discord server.

    The base model is Silicon29 from Xynon, and nothing is wrong here.

    alexds9 · Mar 12, 2023

    @AshtakaOOf 
    For how many epochs was it trained?

    alexds9 · Mar 12, 2023

    @AshtakaOOf 
    There is no info on Xynon/SD-Silicon about the base models, only a vague claim that it is "based off the experimental automerger, autoMBW". Do you know what the base model for Silicon28/29 is: NAI or AnythingV3?
    The number of steps specified in the description is smaller than the number of training images that you claim it was trained on; how is that possible?
    You claim it was trained on 180K images; for comparison, SD 1.4 was trained for 225k steps. How long did it take you to train it, and what kind of hardware did you use?

    sdf34gdssdg4 · Mar 12, 2023

    @alexds9 I doubt it represents any risk since it's a safetensors file. I tried the model; it's decent but not great. I doubt 180k images were used to get a result like this.

    alexds9 · Mar 12, 2023

    @sdf34gdssdg4 
    The information specified and what was claimed later are not adding up at all.
    I'm not sure what motives people have to lie about such things, but it is very suspicious to me...

    AshtakaOOf · Mar 13, 2023

    @alexds9 Hey, so Silicon29 is a merge done with the autoMBW (Automatic Merge Block Weighted) method: basically, Xynon and Xerxemi made a program that tries different MBW combinations until it finds the one that looks closest to an aesthetic classification done before starting the process.

    More info on this at https://medium.com/@media_97267/the-automated-stable-diffusion-checkpoint-merger-autombw-44f8dfd38871 (blog post of Xerxemi)

    So the base model was made through this merging process; it basically made the ideal model semi-automatically using AbyssOrangeMix2 and AnythingV4.5.
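
    For readers unfamiliar with block-weighted merging, the idea above can be sketched in a few lines: instead of one global merge ratio, each UNet block gets its own interpolation weight, and autoMBW searches over those weights automatically. A toy pure-Python sketch (hypothetical block names and weights, not the actual autoMBW code):

```python
def mbw_merge(model_a, model_b, block_alphas):
    """Merge two checkpoints block by block: each block gets its own
    interpolation weight alpha instead of one global merge ratio."""
    merged = {}
    for block, alpha in block_alphas.items():
        merged[block] = [alpha * wa + (1.0 - alpha) * wb
                         for wa, wb in zip(model_a[block], model_b[block])]
    return merged

# Toy "checkpoints": two blocks, each a flat list of weights
a = {"input_block_0": [1.0, 1.0], "output_block_0": [0.0, 0.0]}
b = {"input_block_0": [0.0, 0.0], "output_block_0": [1.0, 1.0]}

merged = mbw_merge(a, b, {"input_block_0": 0.25, "output_block_0": 0.8})
```

    In the real autoMBW the per-block alphas are proposed and scored against an aesthetic classifier in a loop; here they are simply given.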

    And the training of OpenNiji-V2 was done using the LoRA finetune method, on 180k images from the Nijijourney discord server.

    Using this technique it wasn't trained very long (sadly) because the anatomy started getting worse and worse during training, so Korakoe chose to stop the training process early.

    Also, NO! Absolutely no one lied here; please don't start creating misconceptions in your mind.
    To put it simply: no, this is not a malicious project in any way. It was done because Korakoe wanted to make an even better OpenNiji model.

    alexds9 · Mar 13, 2023

    @AshtakaOOf 
    Have you participated in the training process, or have you just been told this information by @Korakoe?

    AshtakaOOf · Mar 13, 2023

    @alexds9 I am a friend of Korakoe.

    alexds9 · Mar 13, 2023

    @AshtakaOOf 
    But were you participating in the training or not?

    AshtakaOOf · Mar 14, 2023

    @alexds9 I did not take part in the making of this model.

    I just shared everything you can learn about the making of this model.

    If you really think Korakoe or I are not trustworthy enough, just don't use this model.

    alexds9 · Mar 14, 2023

    @AshtakaOOf 
    I'm not familiar with Korakoe, so I don't know if he is trustworthy or not.
    But there are a few things about the information on this model that don't look right.
    You claim that it is based on Silicon29 from Xynon and that Silicon29 is a mix of AbyssOrangeMix2 and AnythingV4.5, but that information isn't available anywhere.
    180K images were supposedly used for training, but only for 112,949 steps, so the training hasn't even covered all the images once. What kind of training is that?
    180K images doesn't look like a realistic number at all; how long did it take to train?

    AshtakaOOf · Mar 14, 2023

    @alexds9 It trained for about a day and a half (LoRA training is very fast). And yes, Silicon29 is an MBW merge based on AnythingV4.5 and AbyssOrangeMix2. If you really don't believe me, ask Xynon.

    The LoRA finetune technique is very different from a classic finetune; it is known to be very fast to train, and it shows.

    And yes, 180k is unrealistic and stupid, but why not try nonetheless?
    Korakoe was fetching images for about 4 days on the Nijijourney server.

    alexds9 · Mar 14, 2023

    @AshtakaOOf 
    So it was a LoRA training that got converted to ckpt?
    You don't want to address the issue of training steps?

    AshtakaOOf · Mar 14, 2023

    @alexds9 LoRA can be used to finetune a model and merge the result back in, instead of being kept as a portable file.
    And can you tell me what exactly you find wrong with those steps?
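
    The "instead of being portable" point can be sketched: LoRA learns a low-rank delta B·A on top of a frozen weight W, and that delta can be folded back into the checkpoint so no separate LoRA file is shipped. A toy pure-Python sketch (hypothetical shapes and scale, not the actual training code):

```python
def matmul(a, b):
    """Plain-list matrix multiply: (rows of a) x (cols of b)."""
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def merge_lora(w, lora_a, lora_b, scale=1.0):
    """Fold the low-rank update into the base weight: W' = W + scale * (B @ A).
    After this, the checkpoint behaves like the finetuned model and no
    separate LoRA file is needed (the non-portable merge)."""
    delta = matmul(lora_b, lora_a)          # (out x r) @ (r x in) -> (out x in)
    return [[w[i][j] + scale * delta[i][j]
             for j in range(len(w[0]))] for i in range(len(w))]

# Toy 2x2 weight with a rank-1 LoRA update
W = [[1.0, 0.0], [0.0, 1.0]]
B = [[1.0], [2.0]]   # out x r
A = [[0.5, 0.5]]     # r x in
W_merged = merge_lora(W, A, B, scale=0.1)
```

    The rank r (here 1) is what makes LoRA cheap: only the small A and B matrices are trained while W stays frozen.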

    AshtakaOOf · Mar 18, 2023

    @alexds9 Are you still here?

    Korakoe
    Author
    Mar 18, 2023 · 4 reactions

    @alexds9 Hey! Sorry for replying late. What Ashtaka says is true, other than the fact that this isn't a LoRA fine-tune; it's a fully fine-tuned model.

    Also, this was trained on 180k images (about 196GB of data). However, some of the dataset's images were missing captions and had to be auto-labelled. The original dataset took 4 days to scrape, as I hadn't implemented multi-threading, but the new dataset took about 1-2 nights with 7 simultaneous downloads.

    Unfortunately, I cannot provide much information on Silicon either; I don't know much about it myself, other than that it's a pretty aesthetic model, hence why I deemed it a good base model.

    You are partly correct: the training wouldn't have covered all images with a batch size of 1, but the 112,949 steps were done with a batch size of 8. It was trained for a day and a half; I decided to stop early because anatomy was getting worse and it had already achieved the aesthetic. This was either really early overfitting or the result of using AI-generated data to train another AI.
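
    The step arithmetic here checks out: at batch size 8, 112,949 optimizer steps correspond to about 904k samples seen, i.e. roughly five passes over a ~180k-image dataset, which matches the 5-epoch cap in the training command. A quick sanity check:

```python
images = 180_000        # claimed dataset size
batch_size = 8          # --train_batch_size=8
steps = 112_949         # reported optimizer steps

images_seen = steps * batch_size   # total samples processed
epochs = images_seen / images      # passes over the dataset

# ~5 epochs, consistent with --max_train_epochs=5
print(round(epochs, 2))
```

    The slight excess over exactly 5.0 suggests the effective dataset was a bit over 180k images.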

    Due to computing resources, this model is trained and saved in fp16; unfortunately, I don't have the money to train it on better hardware. For reference, here's the training command:

    accelerate launch --num_cpu_threads_per_process 8 fine_tune.py \
        --pretrained_model_name_or_path="C:/OpenNiji/Silicon29.safetensors" \
        --train_data_dir=C:/OpenNiji/latents \
        --in_json=C:/OpenNiji/meta_out.json \
        --output_dir=C:/OpenNiji/checkpoints \
        --output_name=OpenNijiV2 \
        --mixed_precision=fp16 --save_precision=fp16 \
        --save_every_n_epochs=1 --save_last_n_epochs=10 --save_model_as=ckpt \
        --train_batch_size=8 --max_token_length=225 \
        --train_text_encoder --use_8bit_adam --learning_rate=2e-6 \
        --dataset_repeats=1 --clip_skip=1 --save_state --shuffle_caption \
        --xformers --enable_bucket --vae "C:/OpenNiji/orangemix.vae.pt" \
        --max_train_epochs=5 --gradient_checkpointing --gradient_accumulation_steps=1

    Korakoe
    Author
    Mar 18, 2023

    @sdf34gdssdg4 this was done through a custom scraper I wrote, specifically to scrape prompts and generations

    alexds9 · Mar 18, 2023 · 2 reactions

    @Korakoe 
    Thank you very much for your extensive response.
    I apologize if any of my remarks were in any way impolite.

    Korakoe
    Author
    Mar 19, 2023

    @alexds9 No np at all!

    tnginako · Mar 23, 2023 · 1 reaction
    CivitAI

    Hi, do you have an FP32 pruned version of this model? I would like to try training LoRAs off this model since I love the aesthetic, or do you think this is already good?

    Korakoe
    Author
    Mar 23, 2023 · 2 reactions

    Sorry, I don’t… this was trained in fp16 due to hardware limitations. You should, however, still be able to train a good-looking LoRA!

    277553 · May 18, 2023
    CivitAI

    I read the description posted on Hugging Face. If I understand right, the prompt style is Nijijourney's, since you used the original Niji channels' outputs (images with prompts).

    Korakoe
    Author
    May 18, 2023

    Yes, that’s correct; however, tags still do in fact work.

    Checkpoint
    Other

    Details

    Downloads
    1,396
    Platform
    CivitAI
    Platform Status
    Available
    Created
    3/1/2023
    Updated
    5/14/2026
    Deleted
    -

    Available On (1 platform)

    Same model published on other platforms. May have additional downloads or version variants.