You can get decent results with simple prompts such as:

- a house by william eggleston, sunrays, beautiful
- closeup portrait of a woman in a kitchen by william eggleston, beautiful
- a beautiful view through a kitchen window, car, by william eggleston, sunlight

Trained using the https://github.com/TheLastBen/fast-stable-diffusion SDXL trainer.
ComfyUI seems to give better results than A1111, but that's just my experience.
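If you'd rather load the LoRA programmatically than through a UI, a minimal diffusers sketch would look roughly like this. The file path is a placeholder, and SDXL base is assumed since the model was trained with an SDXL trainer; this is an illustration, not instructions from the author:

```python
import torch
from diffusers import StableDiffusionXLPipeline

# Load the SDXL base model, then attach the LoRA weights on top of it.
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("path/to/eggleston_lora.safetensors")  # placeholder path

# One of the example prompts from the description above.
image = pipe("a house by william eggleston, sunrays, beautiful").images[0]
image.save("house.png")
```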
Comments (19)
What network rank (dim / dimension) did you use on this training? The quality of your selects looks amazing.
dim 256, but the file size shoots up to 2.5 GB; it helps with the quality
@TheLastBen Thanks. Great quality out of such a high dim, but then such a high file size. But for something like Eggleston, well warranted to capture all the details.
I'm still not sure 256 would be much different from 128, or even 64/32... Time will tell, but for now the size is huge, haha
@LDWorksDavid 256 dim, 20,000 alpha
@TheLastBen 20 or 20k?
@LDWorksDavid 20k
@TheLastBen Huge alpha there, I assume because there's an aesthetic and a style
@countlippe I always use alpha 20k
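For context on the dim/alpha numbers in this exchange: LoRA adds `(alpha / dim) * (B @ A)` to each adapted weight, so the alpha/dim ratio is a fixed multiplier on the learned update, and alpha 20k at dim 256 scales it roughly 78x relative to the common alpha-equals-dim default. A minimal sketch of that arithmetic (the function name is mine, not from any trainer):

```python
# LoRA replaces a weight matrix W with W + (alpha / dim) * (B @ A),
# where A and B are the low-rank factors learned during training.
# The alpha / dim ratio is a constant multiplier on the update, which
# is why a very large alpha behaves like a learning-rate boost.

def lora_scale(alpha: float, dim: int) -> float:
    """Scaling factor applied to the low-rank update B @ A."""
    return alpha / dim

print(lora_scale(20_000, 256))  # 78.125 -- the settings discussed above
print(lora_scale(128, 128))     # 1.0 -- the common alpha == dim default
```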
This is insane, awesome work!
Thanks
Absolutely nuts, really great work!
thanks!
Hey Ben! Nice to see you on CivitAI, I'm a big fan of your Fast Dreambooth. I have a question: someone here tried to uncensor SD2.1 (it looks like it actually worked), but when I tried to use your Fast Dreambooth on their model, it seemed your repo could not unpack the text encoder. He said something about using a different encoder from the censored one provided with SD2.1, and said it worked nicely even without a negative prompt (which sounds impossible with the native SD2.x text encoder). Do you think you could do something about it sometime?
hey,
the trainer uses the diffusers library to load the text encoder; if you get the text encoder in diffusers format from the model maker, then you can train it with the notebook
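As a rough sketch of what "diffusers format" means here: a model repo saved in that layout keeps each component in its own subfolder, with the text encoder under `text_encoder/`, which transformers can load directly. The repo id below is a placeholder, not the actual model being discussed:

```python
from transformers import CLIPTextModel, CLIPTokenizer

# A model in diffusers format stores components in named subfolders;
# the text encoder lives under "text_encoder" and the tokenizer
# under "tokenizer", so each can be loaded independently.
repo_id = "someuser/uncensored-sd21-diffusers"  # placeholder repo id

text_encoder = CLIPTextModel.from_pretrained(repo_id, subfolder="text_encoder")
tokenizer = CLIPTokenizer.from_pretrained(repo_id, subfolder="tokenizer")
```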
@TheLastBen Oh that's interesting, thank you! I guess I can do that with the Kohya scripts in a local environment.
@TheLastBen Hi again, sorry to bother you. It looks like something was updated in your notebook's dependencies, and now it can't download the weights of this particular uncensored model. I managed to upload the converted diffusers weights to Hugging Face and trained the model once, but now it's not possible anymore: the model downloader says "HTTPError: HTTP Error 404: Not Found". This happens only with the weights of this particular model (a standard 2.1 model downloads perfectly). I checked your GitHub and no one else seems to have this problem... I'm sad about this, because your repo is the only one I know of that can easily train the text encoder by itself.
@gsgsdg if you're using Colab, update to the latest notebook; Hugging Face changed the way they handle their access tokens, and I updated the notebooks accordingly
Thank you. I love the old-style 35mm slide look that this can produce. Like looking through an old photo album found in a thrift shop, wondering who the people were and what they are doing now.