All models are exclusive to Civitai! Anyone who publishes my models without my consent will be reported!
Hi all!
SDXL_Niji_Seven is available!
I get incredible results with my LoRA model SDXL_Niji_V6_DLC_LoRa_v2 at strength 0.5!
https://civarchive.com/models/541460?modelVersionId=601994
I haven't tested v4 yet
What's new?
-This version is very different because it is based on NijiV6. To be more precise, it was trained on around 1000 images from NijiV5 and 1600 from NijiV6 (compared to a total of 1300 images in v6).
-I chose not to use the "expressive" style of NijiV5 this time, but I may add it in a 7.5 version in the future.
-There is no longer a trigger word.
-Version 6 is very different, so I still recommend it.
Some tips for use:
-Don't hesitate to play with things like: realistic, hyper realistic, anime studio, digital artwork, illustration...
-Clip skip -2 or -3 is recommended (-3 better overall)
-A minimum of 26 steps
-cfg scale: between 3.5 and 6
-Sampling method: DPMPP_SDE Karras (recommended, best quality), Euler_Ancestral simple (2nd best), DPMPP_2M_SDE Karras or ddim simple (average quality for both).
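For anyone scripting their generations, the tips above can be collected into a small settings helper. This is a sketch of my own (the names `RECOMMENDED`, `SAMPLER_RANKING`, and `check_settings` are illustrative, not part of any UI or library):

```python
# Recommended generation settings for SDXL_Niji_Seven, collected from
# the tips above. The helper is illustrative, not an official API.

RECOMMENDED = {
    "clip_skip": -3,         # -2 also works; -3 is better overall
    "steps": 26,             # minimum recommended step count
    "cfg_scale": 4.5,        # keep between 3.5 and 6
    "sampler": "DPMPP_SDE",  # with the Karras scheduler: best quality
    "scheduler": "karras",
}

# Samplers in the order the author ranks them.
SAMPLER_RANKING = [
    ("DPMPP_SDE", "karras"),        # recommended, best quality
    ("Euler_Ancestral", "simple"),  # second best
    ("DPMPP_2M_SDE", "karras"),     # average
    ("ddim", "simple"),             # average
]

def check_settings(steps: int, cfg_scale: float) -> list[str]:
    """Return warnings for settings outside the recommended ranges."""
    warnings = []
    if steps < 26:
        warnings.append(f"steps={steps}: use at least 26")
    if not 3.5 <= cfg_scale <= 6:
        warnings.append(f"cfg_scale={cfg_scale}: keep between 3.5 and 6")
    return warnings

print(check_settings(steps=14, cfg_scale=2.5))
# → ['steps=14: use at least 26', 'cfg_scale=2.5: keep between 3.5 and 6']
```

The checked ranges match the list above; everything else about your workflow (resolution, negative prompt, hires fix) is left to taste.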
Have fun! 😊
Description
No new training here. It's a merge of version 3 and version 5 of my training. Warning: you could not obtain these results simply by merging the published models as they are.
Comments (91)
the Special edition is really special, works amazing with any prompt
Yes. If only I could have done it during the contest 😅
new version is amazing
Thanks a lot 🙏
It says "Failed while downloading" and the download stops.
Sometimes the download finishes and I can put the file in the usual Stable Diffusion folder, but only SDXL won't start.
Why is this?
I don't know. It's probably coming from Civitai's servers. You may have to wait a bit and try downloading it again. In any case, if the download doesn't complete, it's normal that the model doesn't work.
Can you put it on Hugging Face?
You can download it here. I don't see the difference!?
Why was it possible to use your model before? Recently, I have been unable to get any of your models to load. I have tried many versions of SD, but none of them have worked. I hope it can be resolved.
They all work! There is no such thing as a model that works one day and not the next day.
A model is just a set of numbers. If the model is corrupted, it can generate garbage, but it should always load.
Try loading a different model; more likely than not, that model will not work either.
Most likely something is wrong with your setup, try re-installing your software (auto1111, comfyUI, etc).
I also had the same problem
@aiping Have you tried the test I've suggested? i.e., does the problem only happen with SDXL_Niji or does it happen with other SDXL (such as JuggernautXL or CrystalClearXL) models too?
Hi there. This is a bit of an odd one, but I wanted to say that I was experimenting with NIJI SE last night using the model stack nodes in ComfyUi. During my experiments I somehow made a version of your model which seems to have many of the remaining issues ironed out and a lot of minor elements improved.
I would like to share this model, because I'd feel guilty holding onto it by myself. But I decided it would be best to approach you first and ask how you would like to proceed. Rather than uploading it to my own page, where it would never see the light of day, I'd rather the creator himself be able to take advantage of the improvements and release this version, if you'd like.
Basically, if you have other plans and don't mind me releasing it I will do so, but otherwise it is up to you what you would like to do with it from this point onwards.
To explain a little further on the model stack node experiments: they are experimental beta nodes, and due to an error in the way they are coded, they have a weird impact when "merging" models, instead producing an improved version of one of the existing models when the balance is right.
Let me know!
Hi! I work on my model every day and am about to release a new version.
I've been working on it several hours a day for over a month now. I know my work isn't perfect, but that's also how I learn.
It is important to me that my model is 100% my work. With its qualities and its faults. It's part of the creative process.
I understand that your intentions are not bad. But I can also see that you are creative. Why not create your own Niji model, in that case?
Don't take this the wrong way, but I'm not going to comment under your models to tell you how to do things. Either I like it and download it as is, or I don't like it and move on.
That's it, it's nothing against you. I would like this model to remain 100% my work. That said, I'm not closed to collaborating on another project.
I hope you will understand.
Greetings
I am quite curious about your claims. Why don't you make 3 or 4 pairs of images using your new model and SDXL_Niji_SE so that we can see how well your model works?
@NobodyButMeow Personally, I'm not that curious. I know I make mistakes. But then, no SDXL model is flawless... As with SD 1.5, it took months for models to start being truly impressive.
Personally, this is how I progress. I watch a tutorial to get a vague idea of the process and then I tinker around until I have something that suits me. It's my way of having some semblance of artistic vision in my creations: make mistakes and try to correct them, rather than following the tutorials you see here and there to the letter.
This is my vision, if it doesn't suit, there are many other Niji models. 😅
@Stan_Katayama I don't mean to offend. Your model is great work. Honestly I can see you've put a lot of effort into it, as someone who has also trained on SDXL I understand how difficult it is to get things on point.
If anything, seeing as this is your response I would give you some advice on how to use the tools yourself to test and see whether you can achieve something you are happy with.
I shall also do as you say and post a few comparison images. Upon further testing with some of your prompts here, though I couldn't achieve the same results being on ComfyUI instead of A1111, I found that my version generally got anatomy better, produced fewer extra limbs, and seemed to include more variance in the characters drawn, rather than drawing clones of the same characters. Though neither model was consistent at drawing Monkey D. Luffy, I would say yours retained a more anime-esque style while my iteration had more of a manga-esque look.
Anyway, there is no point in waffling on about it here. I've been doing some more experimentation with the old version of the ComfyUI model stack (it was just updated today to add fixed methods, though I requested they leave this seeming "model tuning" version as one of the available methods, and they have done so) because I wasn't quite satisfied with some of my further testing.
Maybe it's just because of the difference in prompting/seed/noise generation between ComfyUI and A1111, but the results I get are nowhere near what you get with either version of the models. All I can say is that the output of my version tends to draw more coherent images, with slightly more definition, when used in ComfyUI. Not that my results reflect what you might get on your own system.
I will finish my experimentation and shall get back to you shortly with some comparisons, either between the current version or the experimental version I am working on currently.
I do agree that working on my own stuff is a much better idea in general, but as I am currently in the process of testing these beta nodes for the developers, I was running tests of the models I have and by sheer coincidence an error caused me to stumble upon this interaction and I wanted to share it somehow with those who might benefit.
I had been attempting to merge 3 models, Niji SE, Spectrum Blend and Yamers merge heaven. However instead of merging them properly it simply changed how many artifacts your model produced in the overcooked image I was purposefully generating.
Also if I sound overly critical of you or your work at any point, it is meant with the best intentions. I absolutely love the style you produce and want nothing more than to be able to use it as variably as possible. As I'm sure was the intention when you produced the model(s)!
The first attempt at training a model that I did ended up outputting nothing but grey blotches, so I don't really have a leg to stand on when it comes to criticism xD.
@Triple_Headed_Monkey Perhaps we didn't understand each other well. If what you want to do is a merge, as long as it is not a clone of my model, that poses no problem for me.
I want to make a model that is as versatile as possible, which greatly complicates my task. But I don't despair of finding the perfect solution. I've been working on version 6 for over a week now, and I'm always aiming a little higher. I'll release it when I'm really happy with the result.
@Stan_Katayama I'm looking forward to that. I can see that the variety of data you're pumping in there is quite substantial, and balancing all that must honestly be a nightmare.
But I can tell from your dedication that, given a little time, this will be one of the most capable models on Civitai. I would dedicate more time to training models myself, but I am only endowed with a 3080 10GB GPU, so I have done all of my SDXL practice on Runpod alone.
I have no interest in creating a clone per se, but if I could assist in balancing some of the issues that many of the models on the site have, or at least point people in a direction where they can attempt similar methods themselves and decide whether it works for them as I believe it might, then that would be the best.
I'm more of a share everything kind of guy.
@NobodyButMeow
Here is a post I just made with some comparisons using the same seeds and prompts per image between each model.
https://civitai.com/posts/595572
@Stan_Katayama I mostly agree with what you said. A model will be strong in some areas and weak on others (it only has 2.6 billion weights that it can store 😁). As you said, a model creator is like any other artist. He/she must have an artistic vision as to where to take it.
Personally, I love what you've done so far. Each version shows definite improvement over the previous one. I thought v5.1 was great until I started to play with SE! 😂
@Triple_Headed_Monkey Thank you for making the comparison images, much appreciated.
For anyone checking out the images, please read the explanation at the bottom of the post so that you know what you are looking at 😅
But here is the TL;DR version: other than the first 6 images, the rest are Niji SE, followed by a comparison to the "custom Niji unbaked" version.
@Triple_Headed_Monkey I don't understand how you manage to get such catastrophic results!!!
CFG scale at 2.5??? I saw 14 steps... We don't know which model does what. I don't recognize my model anywhere. We also never make 512x512 images with SDXL models!!! Sorry!
I don't want you to touch my model, at least not for publication! I have a 1080 Ti, which is not ideal. However, I do 100 times better.
Sorry, eh... But from the start, the problem has not been my model, but rather what you do with it.
Yes it's a clone of my model and no I don't want you to publish it!!!!!!
@Stan_Katayama My understanding is that Triple_Headed_Monkey is using the "wrong parameters" of 512x512, CFG 2.5 and 20 steps to "break" the model on purpose, i.e., to show defects, and then "fine-tune" with his software until the defects go away. He is not trying to show that SDXL_Niji_SE is broken. That is how his method works on any SDXL model that he wants to "unbake".
I can be totally wrong here, but that's my reading of his comments 😅
@NobodyButMeow Never mind! Yes, my model has flaws! But that's also part of what gives it its qualities! Yes, it can be improved, and that's why I work hard every day to improve it!
I understand the process. But imagine if everyone did that on everyone else's models. It's through my mistakes that I progress!
Maybe you can't understand. But when you have spent 100s of hours there, you do your best to provide something of quality. Believe me it’s insulting!!!
@Stan_Katayama Yes, I can understand why you felt insulted. If I have offended you in some way, I apologize; it was never intentional. I have no doubt that you have put a lot of effort into the model, and it shows. SDXL_Niji_SE is one of the best SDXL models. Every model has its strengths and quirks; that's why we have our favorite models.
Anyway, I think you've clearly stated your case, and we should just leave the matter to rest 😅. And I look forward to version 6 of SDXL_Niji 👍😁.
@NobodyButMeow Do not worry. There is no problem. The problem I have is with how he goes about it. Maybe he's explaining it badly, but there you go, it's this way of imposing things.
He's been making comments to me for a while. It's more the manner of saying things that bothers me. I am open to any criticism, but here I feel like he's telling me "move aside, I'm going to do better than you!"
It's very discouraging...
@Stan_Katayama You're wrong. And you misunderstand EVERYTHING that I did.
@Stan_Katayama To explain: I was purposefully testing bad settings in order to see whether or not there were improvements. Any model can produce a decent image at high step counts. Low-step-count comparisons are the absolute best 1-to-1 exhibition of what the real improvements are. And making a model more usable across a wider variety of settings is something all creators should strive for.
Not to mention that all of the images in the post had the settings below.
And I explained to you that I was unable to get the same results that you did USING THE SAME EXACT SETTINGS AS IN YOUR PREVIEW IMAGES ON CIVITAI. I USED YOUR PROMPT, YOUR SEED, YOUR CFG, YOUR STEP COUNT ETC. AND I GOT SOME OF THE WORST RESULTS I HAVE EVER SEEN FROM ANY MODEL, EVER. PERIOD. But as I am on ComfyUI and do not use Automatic1111 - the way that it deals with things is different.
FURTHERMORE I am using a custom variant of the KSAMPLER node which comes with added enhancement features that almost halve the number of steps needed to produce a decent image.
Also this custom variant of the KSAMPLER is capable of running any model at 100CFG. Yes, even your model.
So the settings I used for the Custom KSAMPLER (Extreme Detail KSAMPLER) do not translate the same. However WHEN I COMPLETED MY TESTING OF YOUR PROMPTS - I USED ONLY YOUR SETTINGS WITH THE STANDARD SAMPLERS TO MATCH AS CLOSE AS I COULD.
If you can't see the process I went through either, I respectfully apologize for showing you something so advanced without you clearly having any comfyUI experience. Because now you're just confused.
The first image of the 5 is a purposefully bad generation at 512x512. Generating at that resolution produces more artifacts and issues on ALL models. I did this on purpose, while merging, to see how much of an impact the other models were making. Through the process it went from that first ugly, flat image that I would never ever share, to one that actually looked decent, DESPITE THE AWFUL SETTINGS. Then these improvements translated across the board.
If you can take a step back and interact with me in a way that doesn't take offense, perhaps we can communicate civilly and you could potentially learn something from the exchange. Otherwise, I'll be on my way, sharing these benefits with others. You were coincidentally the first I approached because I happened to be playing with your model.
I know others will be more receptive to learning new techniques that they can use to enhance their model training methods. Most creators are here to learn. At least imo.
The comparison images are just images generated using the default settings of the extreme KSAMPLER. I wanted to generate them quickly, and I did not want to cherry-pick results. So I just spammed a bunch of quick generations from both models to show people the difference in quality.
I don't know about you, but so far not a single person has picked your model's output over mine in the Discord server where I've shared them during my testing. But anger and other emotions could be clouding your judgement, unlike those without personal investment in the project, I suppose.
@Triple_Headed_Monkey Maybe I'm wrong; maybe I'm investing too much in working on this model and fatigue isn't letting me think very clearly. Like I said, I understand your approach. But I'm working hard on this model, so try to understand that I might take all this badly... I don't do things in a conventional way, but my mistakes are part of my creative process...
@Stan_Katayama I don't want to take away from your work. If anything I was just excited by having created something that fixed anatomical issues etc. without having impacted the overall aesthetic too highly.
I would have thought that you might like to try such techniques yourself, to achieve a result you can be satisfied with at a wider range of CFG/steps, without having to faff about as much with the training side.
I'm going to test in A1111 now to see if I can reproduce your images. I tried EVERYTHING in ComfyUI to no success.
@Stan_Katayama I am glad there is no problem/misunderstanding between us.
As for Triple_Headed_Monkey, there are probably some cultural issues here. Western cultures (especially American) tend to be more direct and blunt, while Asian cultures call for more subtlety and politeness. What is considered plainspoken in one culture can be considered rude in the other.
I am Asian, but I've lived in Canada all my life, so I kind of understand both cultures.
@NobodyButMeow Well, I don't want to step on anyone's toes; otherwise I would have just uploaded without saying anything. I apologize if I come across as insulting or rude.
@Stan_Katayama terrible take... the model you make isn't based on even your own work, see how that works? Thinking you're actually doing something, when in reality it's all based on others' work, then you make a comment like this and look silly af. Clown shit. When the AI field is where it's at right now thanks to being open source, talking about a "creative process"... man, shut up.
@Triple_Headed_Monkey just release it, why are you going back and forth? Fuck this guy, he's in a field of AI that's prevailing thanks to open source... Stable Diffusion is open source for a reason; everything Emad stands for, this idiot is the opposite of. On top of that, he uses others' artwork to create the model and acts like he's doing some "creative process". Foolish, when it's a collaborative experience. This fool is a dumbass.
@beatsbyghost824 No need to go over the top. It is a gray area when talking about open-source licensing etc. of models, but you are right in pointing out that the copyrighted material trained on is not the work of anybody involved.
Also, I have since released the models "AnimeGodXL" and "LoveXL", the first of which is a more conventional 80s/90s/2000s hand-drawn anime/manga model, and the second of which is an all-round 2D art style model capable of almost any style.
Check them out, I hope you like!
@beatsbyghost824 You know nothing! What are you doing taking advantage of a system that clearly poses ethical problems for you? It's a shame to talk like that while you yourself benefit from the creations of others without contributing...
I have no problem with my model being used to create something different like Triple_Headed_Monkey did. I just didn't want him to make a stupid clone of my model.
The problem I had was mainly that he found problems in my model that it didn't really have.
You just have to look at what others have done with my model to see that it is not catastrophic!
@Stan_Katayama It's a good model. My take on it, after further experience, is that it just does a bit too much all at once. That makes it harder to use complex prompts, especially in UIs which make use of both text encoders.
SDXL has two separate CLIP text encoders, but Automatic1111 currently only acts on one of them, so the results of feeding both encoders are much different from what you get on Automatic1111, whose prompt inputs are comparatively "more diluted" in their results than in other UIs.
Also, I agree with you that this guy shouldn't be insulting. We had a light misunderstanding at best, and though I did use some all-caps writing, I don't believe either of us escalated the situation to where any kind of real negativity was necessary. It's a good thing to have a healthy debate, but there is no need for anger over the internet :D
@Stan_Katayama When I made my initial assessments of your model I was not aware that automatic1111 prompting only used one of the 2 text encoders. So some of my analysis was exaggerated based on the extra input and caused some crazy outputs.
For example, I can write an entire story, multiple paragraphs long for many models and receive an interesting output, but doing so with NijiSE when using both text encoders causes the output to be insanely bad.
That isn't your fault; you just haven't tested the impact of multiple text-encoder inputs, I believe, nor have you trained the model to accept long natural-language prompts like some of the other models.
Both of which contribute to some odd results.
But I can see why that is the case now, and it is not so much your training as it is the testing methodology of using Automatic1111 only; most other model creators have had the same issues, for the same reasons.
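For readers following the dual-text-encoder discussion: some UIs (and diffusers, via its separate `prompt` and `prompt_2` arguments on the SDXL pipeline) can send different text to each of SDXL's two encoders, which is one plausible reason results diverge from A1111. A minimal sketch of the idea; the `split_prompt` helper and its `BREAK` separator are my own illustration, not from any library:

```python
# SDXL conditions on two CLIP text encoders. UIs that expose both can
# send different text to each; A1111 at the time of this thread sent the
# same text to both. A common convention is subject text for one encoder
# and style text for the other.

def split_prompt(prompt: str, separator: str = " BREAK ") -> tuple[str, str]:
    """Hypothetical helper: split a prompt into (subject, style) parts.

    If no separator is present, both encoders receive the full prompt,
    mimicking single-text behaviour.
    """
    if separator in prompt:
        subject, style = prompt.split(separator, 1)
        return subject.strip(), style.strip()
    return prompt, prompt

# With diffusers you would then pass the parts separately, e.g.:
#   pipe(prompt=subject, prompt_2=style, ...)
subject, style = split_prompt("1girl, school uniform BREAK niji style, vivid colors")
print(subject)  # 1girl, school uniform
print(style)    # niji style, vivid colors
```

This is only a way to experiment with the asymmetry described above; whether a given checkpoint responds well to split prompts has to be tested per model.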
@Triple_Headed_Monkey When we communicated about this, I had been working on the model for a month non-stop. I worked 10 to 16 hours a day on it, I had spent more than $100 in Google colab because my GPU does not allow training with SDXL.
I was exhausted and had done my best to give the best I could. I was not fair to you and I apologize!
I was just so involved that, at that moment, your approach didn't go over very well.
@Stan_Katayama Honestly I know the feeling. For my model "Simply Definition XL" I spent $50 and ended up having to compromise heavily on what I would have done if I had my own GPU.
In hindsight I really hate myself for buying a GPU for games and not workloads xD, but how could I have known this would be how it is, back in 2021?
I myself at the time was lacking a lot of sleep when I first contacted you and could have approached the subject better myself xD So it's all good. I'm sorry for not being able to explain myself calmly either.
Why are so many characters making a Joker laugh? Seriously asking.
Because if you include "laughing" as one of the prompt words, SDXL-based models, including SDXL_Niji_SE, tend to produce that sort of rather exaggerated laugh. I guess one can tone it down by using "smile" instead, but I just like that exaggerated way of laughing in my own images, like these: https://civitai.com/posts/597839 🤣
does this model require the sdxl 1.0 refiner?
I really don't recommend it. But if you want to try, you don't need many steps with the refiner: between 5 and 10. Personally, I use hires fix.
@Stan_Katayama so no refiner at all with this model? will give it a try thank you!
@Dawsintron In general, refiner is used only for photo style images for ANY SDXL based model. Most of my images are illustration/anime style, so I almost never use the refiner with SDXL_Niji_SE
Source: SDXL: Improving Latent Diffusion Models for High-Resolution Image Synthesis: "We note that this step is optional, but improves sample quality for detailed backgrounds and human faces, as demonstrated in Fig. 6 and Fig. 13."
I have a collection of pictures of distinctive drawings by artist gaston18
I would like someone to make a professional lora style model for artist gaston18
I have prepared the images in a zip file
Some of these pictures were purchased by me
So this file contains rare and distinctive designs
I was keen to get as many pictures as possible
So if you know someone who can do this
Maybe you can put out a bounty for it: https://civitai.com/bounties
Warning: lots of NSFW bounties in there 😂
I have just created an account to say your model is amazing. Congrats, I know it must have been a lot of trouble to build. Amazing work.
Thank you so much! It's a pleasure to read. I've been working on a new version for over a month. It is true that it's complicated; I always try to do a little better 😅
@Stan_Katayama Can I ask one thing about the training? Do you use auto-captioning for the images with something like BLIP, or do you write specific prompts? Do you tend to add specific things like "a Dragon Ball character" or "a picture of Monkey D. Luffy"? I'm asking to better understand how to prompt for the model. I've seen people use a generalized keyword like TOK to invoke the style the model is trained on, or specific prompts to make it generalize better.
Will there be a new version?
the most magic model on CIVITAI
This model is insane ♥♥♥
Can you make Niji SE runnable on civitai? This is one of the few top SDXL models that is not runnable (the other is Copax).
Thank you for your consideration.
Good morning. I think I activated the option
It's working now 👍😁. Your help is much appreciated 🙏
This model is my love at first sight
Hi people, I'm having a small 'eye issue' with this model: eyes come out crossed or poorly illustrated. How do I fix this? Less or more CFG, or something else?
Hi! I tried to make a versatile model, and with some prompts there can be small problems. You can perhaps try to counterbalance with negative prompts or negative embeddings. I hope this helps.
Try using ADetailer if the face is too far away to be rendered properly. Otherwise post your prompt here and see if somebody can help you.
What do I write in the prompt to get a character to turn around using this model? My prompts usually work with every other model I've tried for character turnarounds, but they're not working with this one.
Post your prompt and someone may be able to help you.
This model, along with Paradox 2, are two of the most distinct models, because they are not fine-tuned using photos of people. So it is not surprising that what works on many other models (which are all very similar) won't work with them.
Hi, could this model be made smaller? 6.4 Gb is too much
No, because all models based on Stable Diffusion XL are that big. The reason is that the base is simply that massive. That's why it's called XL. If you think they are too big, you can filter out XL models, then you will only see smaller SD 1.5 and SD 2 models.
Just get an HDD, they're much cheaper nowadays.
Can you also make niji v6 style?
I had a hard time making this checkpoint produce good results, unfortunately. Seeing all the good reviews makes me wonder if I did something wrong. The anime results seem pretty good, though they can be a bit overly dramatic sometimes, and I'm not sure why the girls tend to default to messy, curly hair all the time. For photo-realistic prompts I got creepy expressions and weird lighting almost every time, and the color also seems oversaturated. I tried the suggested negative embedding and was not able to make it work.
Example:
Prompt: "A teenage Japanese girl in school uniform, standing in front of the school gate. gentle, light smile, dslr, photo realistic"
Steps: 40, Sampler: DPM++ 2M Karras, CFG scale: 7, Seed: 3047603816, Size: 1024x1536,
Clip skip: 2
Another example:
Prompt: A teenage Japanese girl in school uniform, standing in front of the school gate. gentle, light smile, dslr, photo realistic,
Steps: 40, Sampler: DPM++ SDE Karras, CFG scale: 7, Seed: 3047603816, Size: 1024x1536, Model hash: fa892e66c0, Model: sdxlNijiSpecial_sdxlNijiSE, Version: v1.6.1
I switched CLIP skip to 1 and it produced a pretty bad face.
@phageoussurgery439 You are using an incorrect resolution. 1024x1536 is not a "native" SDXL resolution. Use 832x1216 or 896x1152 instead.
Also, SDXL Niji SE is a "Niji" anime model, not suitable for "photo style" images. The name of the model, "Niji", already tells you that it is based on Midjourney's "Niji" anime style. You can see that most of the images on this page are either anime or anime style illustrations.
So instead of "A teenage Japanese girl in school uniform, standing in front of the school gate, gentle, light smile, dslr, photo realistic", just leave out "dslr, photo realistic"; simply using "A teenage Japanese girl in school uniform, standing in front of the school gate, gentle, light smile" will give you what the model is designed to do: a "Niji" style illustration. See these sample images: https://civitai.com/posts/1526471
It is not fair to give a model a low rating for something it is not designed to do.
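For reference, SDXL was trained on aspect-ratio buckets that all total roughly one megapixel, which is why 832x1216 works where 1024x1536 does not. A small sketch with a hypothetical helper (`closest_native_size` is my own, not from any library) that maps a requested size to the nearest native bucket:

```python
# Common "native" SDXL resolutions (all roughly 1 megapixel, the
# training budget), widest-known buckets from the SDXL training setup.
NATIVE_SDXL_SIZES = [
    (1024, 1024),
    (1152, 896), (896, 1152),
    (1216, 832), (832, 1216),
    (1344, 768), (768, 1344),
    (1536, 640), (640, 1536),
]

def closest_native_size(width: int, height: int) -> tuple[int, int]:
    """Hypothetical helper: map a requested size (e.g. 1024x1536)
    to the native SDXL size with the nearest aspect ratio."""
    target = width / height
    return min(NATIVE_SDXL_SIZES, key=lambda wh: abs(wh[0] / wh[1] - target))

print(closest_native_size(1024, 1536))  # → (832, 1216)
```

For the 2:3 portrait requested above, this picks 832x1216, matching the advice in the reply; upscale afterwards (e.g. hires fix) if you need more pixels.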
I see, thanks for the explanation. I didn't know Niji was a thing. Looks like an interesting model but just not for me. I will remove my review.
@phageoussurgery439 Thank you for your understanding 🙏👍
The model is great, but while running img2img it throws: modules.devices.NansException: A tensor with all NaNs was produced in Unet. This could be either because there's not enough precision to represent the picture, or because your video card does not support half type. Try setting the "Upcast cross attention layer to float32" option in Settings > Stable Diffusion or using the --no-half commandline argument to fix this. Use --disable-nan-check commandline argument to disable this check.
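The error message itself names the workarounds. A minimal sketch of the A1111 launch flags it suggests (where exactly you set `COMMANDLINE_ARGS` depends on your install, e.g. webui-user.sh or webui-user.bat):

```shell
# Flags suggested by the NansException message itself:
# --no-half runs the Unet in full precision (uses more VRAM, avoids
#   half-precision NaNs); --disable-nan-check merely hides the check,
#   it does not fix the underlying issue.
export COMMANDLINE_ARGS="--no-half --disable-nan-check"
echo "$COMMANDLINE_ARGS"
```

The less drastic first option from the message, "Upcast cross attention layer to float32" in Settings > Stable Diffusion, is worth trying before `--no-half`.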
I can confirm, after trying your parameters and a bunch of others, that this model is awful. Almost always the character ends up with a physical or mental disability; I bet people get good results thanks to character LoRAs that carry the generation, but the raw results are almost useless compared to other models.
@SeikKV Why are you making a comment here that has nothing to do with what the OP is asking? ALLLLLL is having a problem with img2img, which has nothing to do with the quality of the model itself. The problem will probably go away if he re-installs his Auto1111 or even just reboots the computer.
Making statements like "always the character ends up with a physical or mental disability" just makes you look like a troll.
But if you are not trolling, then please show your prompt and generation parameters so that we can see what you are talking about. Users can get excellent results from this model by making appropriate tweaks to the prompt or the negative prompt, or even by just changing the CFG/Sampler/Steps.
Most of the images you see on this model page do not use character LoRAs.
I really love your work! can you make a Turbo version please! =) =)
Love the model. Having lots of fun with it...
Thank you so much ! 🙏
Would really like it to work with highres fix enabled.
There's no reason why it shouldn't work.
As much as we like this one, any change to the size, or using it in img2img inpainting, just generates black.
Is the refiner necessary for this model?
No, no need.
In general, the refiner is only needed for SDXL base 1.0. Even with the base model, it is only needed for photo style realistic image of people, and some other special cases: https://www.reddit.com/r/StableDiffusion/comments/15ah7uj/can_someone_explain_what_the_sdxl_refiner_does/
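If you do try the refiner anyway, the "between 5 and 10 steps" advice earlier in this thread maps onto the base/refiner split (diffusers exposes it as `denoising_end`/`denoising_start`). A sketch with a hypothetical `split_steps` helper of my own, not the model author's workflow:

```python
def split_steps(total_steps: int, refiner_fraction: float = 0.2) -> tuple[int, int]:
    """Hypothetical helper: split a step budget between base and refiner.

    With total_steps=40 and refiner_fraction=0.2, the refiner gets
    8 steps, inside the 5-10 range suggested in this thread.
    """
    refiner_steps = round(total_steps * refiner_fraction)
    return total_steps - refiner_steps, refiner_steps

base_steps, refiner_steps = split_steps(40, 0.2)
print(base_steps, refiner_steps)  # → 32 8

# With diffusers, the same fraction would be passed as, e.g.:
#   base(prompt, num_inference_steps=40, denoising_end=0.8,
#        output_type="latent")
#   refiner(prompt, num_inference_steps=40, denoising_start=0.8,
#           image=latents)
```

As the replies above note, for an anime-style model like this one, hires fix is usually the better second stage than the SDXL refiner.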
Details
Files
sdxlNijiSeven_sdxlNijiSE.safetensors
Mirrors
pcia_sdxl_niji_se.safetensors
sdxlNijiSpecial_sdxlNijiSE.safetensors