All models are exclusive to Civitai! Anyone who publishes my models without my consent will be reported!
Hi all!
SDXL_Niji_Seven is available!
I'm getting incredible results with my LoRA model SDXL_Niji_V6_DLC_LoRa_v2 at strength 0.5!
https://civarchive.com/models/541460?modelVersionId=601994
I haven't tested v4 yet.
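For anyone working in diffusers rather than a UI, stacking that LoRA at strength 0.5 could look roughly like the sketch below; this is an illustration only, with placeholder filenames, not the author's actual setup:

```python
# Sketch: loading the checkpoint and fusing the companion LoRA at strength 0.5.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_single_file(
    "SDXL_Niji_Seven.safetensors",  # placeholder checkpoint path
    torch_dtype=torch.float16,
).to("cuda")
pipe.load_lora_weights("SDXL_Niji_V6_DLC_LoRa_v2.safetensors")  # placeholder LoRA path
pipe.fuse_lora(lora_scale=0.5)  # the 0.5 strength reported above
```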
What's new?
-This version is very different because it is based on NijiV6. To be more precise, it was trained on around 1000 images from NijiV5 and 1600 from V6 (compared to 1300 images total in version 6).
-I chose not to use the "expressive" style of NijiV5 this time, but I may add it in a 7.5 version in the future.
-There is no longer a trigger word.
-Version 6 is very different, so I still recommend it.
Some tips for use:
-Don't hesitate to play with prompt terms like: realistic, hyper realistic, anime studio, digital artwork, illustration...
-Clip skip -2 or -3 is recommended (-3 is better overall)
-Use a minimum of 26 steps
-CFG scale: between 3.5 and 6
-Sampling method: DPMPP_SDE Karras (recommended, best quality), Euler_Ancestral simple (2nd best), DPMPP_2M_SDE Karras or DDIM simple (average quality for both). See the sketch just below for one way these settings fit together.
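As a minimal sketch only (not the author's workflow), here is roughly how those tips might translate to a diffusers script; the checkpoint filename and prompt are placeholders, and `clip_skip` support in the SDXL pipeline call depends on your diffusers version:

```python
# Illustrative mapping of the tips above onto diffusers (placeholder filenames).
import torch
from diffusers import StableDiffusionXLPipeline, DPMSolverSDEScheduler

pipe = StableDiffusionXLPipeline.from_single_file(
    "SDXL_Niji_Seven.safetensors",  # placeholder path to the downloaded checkpoint
    torch_dtype=torch.float16,
).to("cuda")

# "DPMPP_SDE Karras" -> DPM++ SDE with Karras sigmas (requires the torchsde package)
pipe.scheduler = DPMSolverSDEScheduler.from_config(
    pipe.scheduler.config, use_karras_sigmas=True
)

image = pipe(
    prompt="digital artwork, illustration, 1girl, cherry blossoms",  # example prompt
    num_inference_steps=26,  # the recommended minimum
    guidance_scale=5.0,      # inside the suggested 3.5-6 CFG range
    clip_skip=2,             # clip-skip conventions differ by one between UIs; check your version
).images[0]
image.save("out.png")
```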
Have fun! 😊
Description
I redid the training with 453 images.
Comments
How can I train an XL model?
Do some research on YouTube, something like "SDXL Kohya training".
@Stan_Katayama On that same topic, from your experience, is SDXL harder to train/harder to get good results from training than 1.5?
@Maxx_ My opinion may not be the best. But yes, I find SDXL much harder to master than 1.5 or even 2.1.
Do you mind sharing your training configurations/settings? I am a first-time finetuner and all my finetuned model outputs are either warped or have rainbowy effects.
I would gladly share them, but I use Google Colab because my GPU is not powerful enough for SDXL training, so it's a little complicated to transcribe the settings from Colab to the local version of Kohya.
Maybe you could Google it or find a tutorial on YouTube?
@Stan_Katayama can you at least tell us how many images were trained for how many steps? plz :)
(edit: 453 images, got it!)
@Le_Fourbe 16000 steps and batch 2
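Back-of-the-envelope, and assuming the 453-image dataset with repeat 1, those numbers work out to roughly 70 passes over the data:

```python
# Rough arithmetic for the reported run: 453 images, batch 2, 16000 steps.
images, batch_size, total_steps = 453, 2, 16000
steps_per_epoch = -(-images // batch_size)  # ceil(453 / 2) = 227 steps per epoch
epochs = total_steps / steps_per_epoch      # ~70.5 passes over the dataset
print(f"{steps_per_epoch} steps/epoch -> about {epochs:.1f} epochs")
```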
@Stan_Katayama Oh that's okay. What is your image captioning tool, AI (BLIP, DeepBooru, etc.) or manual captioning? What are your epoch count and learning rate? Do you think a large image dataset matters (>50 images), or is there a point of diminishing returns? I'm sorry for asking this many questions, but I need to learn from one of the best :) I understand if you can't provide all of the answers.
@hgloow It's a bit special, since I'm using Google Colab with Kohya as the base, and not everything works like Kohya locally. I use automated captioning tools and correct manually if necessary. I'm using the base learning rate for SDXL (so 4e-7). I think it takes a lot of images, minimum 100, but I think the sweet spot is between 150 and 600. I can't say the exact number of epochs because on Google Colab it is automatically adjusted according to the number of steps and the batch size. Thanks for the compliment, but I don't think I'm the best, far from it. I'm not sure I understand everything myself, to be honest. I think the best way to learn is to try things and make mistakes 😅
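For anyone trying to reproduce this locally, here is a guess at how the settings reported in this thread (base LR 4e-7, batch 2, 16000 steps) might map onto kohya sd-scripts flags; the Colab wrapper hides the real values, so treat every name and path below as an assumption:

```python
# Assumed translation of the reported settings into kohya sd-scripts arguments;
# this is NOT the author's actual Colab config, and all paths are placeholders.
train_args = {
    "pretrained_model_name_or_path": "sd_xl_base_1.0.safetensors",  # placeholder
    "train_data_dir": "dataset/",   # 150-600 images, per the advice above
    "resolution": "1024,1024",
    "learning_rate": 4e-7,          # the SDXL base LR mentioned above
    "train_batch_size": 2,
    "max_train_steps": 16000,       # epochs fall out of this rather than being set
    "mixed_precision": "fp16",
}
cli = " ".join(f"--{k}={v}" for k, v in train_args.items())
print(f"accelerate launch sdxl_train.py {cli}")
```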
@Stan_Katayama Wow thanks a lot for the detailed explanation :) I'm gonna try again with your suggestions.
@Stan_Katayama you could literally press "print training command" and copy and paste that here... it's right there in the Kohya GUI, just below "start training"
@CaptnSeraph I can't. I do my training with Google Colab, so I don't have the classic Kohya interface.
@Stan_Katayama anyone else smell that smelly smell?
So you have a colab you don't understand, or what? C'mon... either you're using a colab someone else made (share the colab then) or you're flat out lying to keep this "super secret" knowledge you think is important.
@CaptnSeraph The colab is very easy to find: search "SDXL kohya colab" on Google. And clearly, if I did have a "super secret", your behavior would definitely make me want to give it to you!
@Stan_Katayama I don't need it, I have a script for LoRA. And I'm happy to share mine with anyone who asks.
@CaptnSeraph I'm using Colab and leaving the default options. I don't understand everything myself, so how do you expect me to explain it correctly? Maybe English is your native language or you're perfectly bilingual, but that's not my case. I explain as best as I can. If that doesn't suit you, go ask somewhere else.
@CaptnSeraph For Kohya I watched a YouTube tutorial by "AI something", but he only shares his info with patrons. I wanted to make my own LoRA, so I'll take your advice.
@Stan_Katayama this is cool, I just downloaded it. Cool work!
@The_one_and_only7723 Thanks 😊
@The_one_and_only7723 what hardware are you running on?
@CaptnSeraph For training on Colab it's an Nvidia A100, and I generate my images with a 1080 Ti.
@Stan_Katayama may I know the number of images you used to train this model, please?
@nash990 457 images, 16000 steps
There is a good tutorial from the YouTube user Aitrepreneur. It is for XL LoRAs, but maybe you can learn more from it; he also spoke with Stability AI devs, so he gives good advice. It is his latest video, about an hour long, so he goes very in depth.
The Asuka one looks like a mix between Evangelion and Breath of the Wild, very cool
Very cool!
Thanks a lot ☺️
Very good work! I have some training questions that have been bothering me a lot, and since your model achieves good results, I could really use some of your experience.
I have a randomly generated training dataset of about 200 images, but the model I trained cannot produce many poses and camera angles for the characters. Even though the dataset already contains a lot of different pose combinations, the style is sometimes not fully fitted, and I struggle to get better dynamic poses or dynamic shots; characters often end up in a few fixed half-body or full-body poses.
Here are some of my setup parameters, hoping to get your suggestions for better pose/camera-angle compatibility:
220 images with random poses and backgrounds, repeat 1 per epoch, 10 epochs, network dim 128 / alpha 64, Lion8bit, lr 1e-6
It's gonna be hard to explain because I'm using Google Colab and not everything works exactly the same. I think you may find some answers in what I wrote to hgloow in the comments. Merges also help.
@Stan_Katayama Well, thank you very much for your reply. I have read what you wrote, and it seems that I need to try more.
@Stan_Katayama I have another question about image tagging. Are you using WD14, DeepDanbooru, or BLIP? I only used WD14 before; maybe that has some impact?
@CyberDickLang WD14 in .txt + BLIP in .caption.
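Concretely, that layout puts two sidecar files next to every image; a quick sanity check over such a folder (the folder name is a placeholder) might look like:

```python
# Verify each image has both caption files: WD14 tags (.txt) and BLIP (.caption).
from pathlib import Path

dataset = Path("dataset")  # placeholder folder
for img in sorted(dataset.glob("*.png")):
    for sidecar in (img.with_suffix(".txt"), img.with_suffix(".caption")):
        if not sidecar.exists():
            print(f"missing {sidecar.name}")
```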
@Stan_Katayama thx a lot