👇👀 Read me! 👇👀
This is a slider for SDXL PonyXL. Trained on AutismMix Confetti. The purpose of this slider is to enhance the overall detail in an image, and the goal is to have it be robust and versatile enough so that you can put it in pretty much all your prompts. At the moment, it leaves the subject of an image unchanged, but enhances the background details considerably.
This model is also available to use and download via SeaArt.
No activation tag. This is a unipolar slider LoRA. Recommended weighting is anywhere from 2 to 4. For version 1.4, if you combine it with a style LoRA you can go up to 5 or maybe even 6. I like the style LoRAs here, and there are many to choose from. This LoRA is also useful for Img2Img / Tiled upscaling.
Version 1.4 is geared towards upping detail in the background and scenery rather than adding detail to the image subject. In fact, it should do very little to the subject of the image in order to allow for great prompt adherence.
Example prompt:
score 9, score 8 up, score 7 up, source anime, 1girl, solo, wolf girl, black hair, long hair, facial mark, wolf ears, black bodysuit, long sleeves, cleavage cutout, gold trim, navel, black pants, black gloves, fur trim, tail, jumping, claw pose, attacking, open mouth, fangs, serious, looking at viewer, from below, night, outdoors, starry sky, forest, forced perspective, <lora:StS_PonyXL_Detail_Slider_v1.4_iteration_3:4>
No special negative prompting is needed, although the scenery tag is great to add for any scene that takes place outdoors. A nice side effect is that it largely replaces the need to put anything like "masterpiece" or "high quality" in the positive prompt.
If you generate images with this, please share! I love seeing what people make with my LoRAs and it helps keep me motivated to make more. If you have PonyXL LoRA requests, feel free to leave a comment and I'll look into it 👍
Known Issues:
High positive values may introduce a slight realism style or over-sharpen the image
~~Positive values may also scale the age of the subject a bit~~ Fixed in v1.3
~~Version 1.2 may not add enough detail to the image~~ Fixed in v1.3
Very high weight values (>4) may introduce multiple subjects in an image when there is only supposed to be one.
~~The LoRA may raise the contrast of the image unnecessarily~~ Fixed in v1.4
Changelog:
April 15, 2024 - Initial release
June 26, 2024 - Version 1.2 Release & Added "Known Issues" section
June 28, 2024 - Updated "Known Issues" with item #2
August 7, 2024 - Updated "Known Issues" with item #3
August 8, 2024 - Version 1.3 Beta release, updated Known Issues
August 9, 2024 - Removed v 1.3 Early Access disclaimer from description. A copy of it can be found here: https://pastebin.com/Qwacm6Af
August 9, 2024 - Updated CivitAI Version name to remove Early Access now that version is publicly available
August 10, 2024 - Released v1.4, updated Known Issues and model description
January 20, 2025 - Added SeaArt link in description
Description
v1.2 Release
Added the ability to add detail to the foreground and subject matter
Improved prompt adherence
Improved flexibility
Comments (22)
V1.2 is interesting.
it is much better, and makes the image follow the prompt more, but it also makes the subject older more than v1.
now i need to pair it with an age slider, what a cunning move 😅.
The age shift is not intentional, I promise. I will look into fixing this for v1.3, thank you for letting me know. I'm glad you find v1.2 to be better, as it was quite a bit of work to get it to improve on v1.0 :)
v1.2 seems more like an age slider than a detail slider. there's a big age difference between -5 and +5.
v1 works much better; there the age is not affected much
@OrangeJuiceAlien Known issue. I plan to fix this for next version release.
5 thousand downloads for what is essentially a schizo lora.
How can anyone look at these comparisons and say it honestly has the desired effect, or even any effect at all?
EDIT: I was WRONG, read the comment chain below for full context!
@pogo With all due respect, are you visually impaired? I do not wish to denigrate the physically disabled, but there are labelled XY graphs in the example images for quite literally the exact purpose of showing the differences. If those do not satisfy you, I am amenable to posting some more XY comparison graphs.
This detail slider LoRA, as the name suggests, tweaks and enhances/lowers the detail of an image. It's not supposed to transform the entire image - quite the opposite. By handling level of detail *without* huge composition changes, it adheres to prompts better and is more flexible when combined with other models/LoRAs.
I could have trained this slider LoRA for longer, making its effect more and more drastic, but I did not want to compromise the compatibility and prompt adherence that make this slider so flexible. In my eyes, slider LoRAs are for fine-tuning control in a convenient manner, making the aforementioned qualities important, and I believe I more or less achieve that with this LoRA.
@Shed_The_Skin Damn I feel bad for making you upset
but I made a few dozen A vs B examples of no LoRA vs your LoRA at 3.0 strength
I'd love to see you take your best guess at which is which
@pogo Apologies if I came across as overly combative, that was not my intention. I'd be happy to take a look at the images. I believe I'll likely be able to tell as long as there aren't other LoRAs being used as well. For reference, a weight of 3 is usually the lowest I would use for this LoRA. I normally use 4 or sometimes 5.
Yup, I have to agree. It changes the output but doesn't really add or subtract details in any way, shape, or form for me either.
Edit: Ok, I have finally made it work. It's a little bit unreliable, but it DOES indeed sometimes work as intended.
After further evaluation, I must sheepishly admit that I was wrong and that pogo's and ToasterLord's points stand. This LoRA can add detail, but the weights required to do so often add unnecessary characteristics to the image such as an age shift.
With this in mind, I am working on a new version. Thanks for bringing this to my attention, and I apologize for my overly defensive earlier remarks.
@Shed_The_Skin All good brother. I also changed my opinion but forgot to say anything. It definitely has an effect, but like you said it adds unnecessary characteristics. When I was playing with it, it would add random stuff like horns or jewelry that I never asked for. It can be fun almost like wildcards.
@pogo I have version 1.3 in early access beta testing now. It should do an overall better job, albeit with some slight stylization change. Thanks again for pointing out some of the issues 🫡
It's funny how this comment chain ended with the lora behind a paywall.
@K18V Thanks to the generator compensation design, most loras won't be earning much (though this one's a bit of an exception). So you can expect to see paywalls become quite prevalent in the near future to make up the difference, even though civ takes a 30% cut for doing very little wrt the early access feature. I reckon however that after a short while few will have the luxury to pay for those, and it might settle down (or steeper prices will be charged).
Regardless, I think sts put a relatively fair price here on an upgrade request, and all of 3 days of early access (which may well be removed ere the day is over), compared to the 5x goals and durations I'm seeing all over.
@firemanbrakeneck Perhaps A1111, Comfy, Kohya, and all the numerous tools and extensions should become paywalled as well.
@K18V I'd like to make it clear that the Early Access release was not due to Pogo or ToasterLord. CivitAI rolled out Early Access releases right before I finished testing v1.3, which is why v1.3 was released with Early Access enabled.
I have no inclination to perma-paywall any of my public LoRAs - past, present, or future. In fact, I have always actively stayed away from soliciting donations via things like Ko-fi and Patreon because of moral concerns. However, I am becoming increasingly busy, and new LoRA construction is more time consuming now since I always aim for new version releases to be measurably and appreciably better than prior ones. Also, I am gearing up to make LoRAs for Flux next, which is largely a new architecture and is taking up the vast majority of my free time, as Flux is a rather complicated beast to tackle.
At the end of the day, I can either majorly cut back on the time I dedicate towards SD generation tools, or I can find a way to bring in a little bit of income from it to justify the time spent. And when I say "a little bit" I do actually mean "a little bit". I'm not looking to make a livable income or anything like that, but a little pocket change goes a long ways in maintaining motivation and justifying time spent.
I wish CivitAI allowed me to set the buzz cost of using the LoRAs on a per-version basis. I'd be cool with having the latest and greatest version cost 1 - 3 buzz to use in generation while the older versions are free to use. I feel like that would be a good compromise. As for Early Access, the feature is brand new and I'm still exploring ways to make it equitable. For Version 1.3, I kept the Early Access period as short as I could and set what I thought was a reasonable donation-to-unlock threshold.
I am interested in gathering the community's feedback regarding this so that I can keep equitable access to my LoRAs without getting absolutely nothing in return. Feel free to throw in your two cents 🫡
@Shed_The_Skin hey bro do you do your loras through civit or do you train them locally or what?
@pogo I train all of them locally. I've never touched CivitAI's LoRA's trainer so I can't speak for its quality.
@K18V In essence they are, just behind the scenes, not labelled "early access" and someone else foots the bill, so users don't normally complain about it (I see it as comparable to dev / release branches where one needs either tech knowledge to build from scratch & handle the glitches, or to wait for release; and the price is much steeper than $10-50).
Stability raised something like $250m to pay their staff. Comfyanonymous started comfyui as a pet project, to learn how diffusers work from the ground up, and would have probably abandoned it as soon as that goal was reached, but then it blew up and stability hired him to continue development. Kohya & illyasev, I haven't heard much of them, but I'm sure they get by - individuals tend to undervalue programmers & good code, but companies pay a pretty penny for them. I suspect vlad might be an automaton / android.
From my observations, open source projects which lack sufficient financial backing will either die out (90-99% of the time), or suffer extremely long development time as they await someone who is both competent & determined enough to maintain them (as are all the owners whom I've met in some of the extensions) - such individuals tend to be in high demand and as such their spare time is in short supply.
What I don't respect in the slightest is companies like closedai, which take their capital and lock down their models permanently behind pay per use. No open modifications, no community, just money for the sake of making more money.
The reality is, no major company will touch civ nor its creators with a ten foot pole in this age. If adding a funding goal keeps the quality creators satisfied, a short wait for those who prefer not to invest the currency nor effort is fair trade (the famous "fast, cheap, good" triangle).
In short, my recommendation to you, as long as civ's financial model remains early access at worst, be patient. Should it ever escalate to exclusive access, we'll take up arms together.
@Shed_The_Skin How fares your progress with Flux? I read in the comments that due to the distillation it's undergone, it's extremely difficult to train the publicly available dev & schnell models. I assume you've been following bghira's SimpleTuner guide?
@firemanbrakeneck I'm making progress, albeit slowly. Recent updates to SimpleTuner have allowed me to run it on my setup after a considerable amount of configuration and troubleshooting. The next thing I need to do is properly set up a test dataset and try to train some test LoRAs for Flux. I'm planning on doing that once I have v1.4 of this detail slider done. The thing I foresee taking the most time is compiling a good dataset. Early feedback strongly suggests that Flux LoRAs need far more images in their training datasets, which will be quite time consuming to compile since I always curate my datasets with a "garbage in, garbage out" mindset.
glad to see this conversation turned into something constructive. i was going to comment on how much of a hit or miss this LoRA is before but didn't want to; i am glad to see i was not the only one. I will try the new one and give my opinion this time


