Training takes a lot of time, so sharing your best generations makes it much easier for me!
Comments (14)
Wtf is this style XDD, it's too specific
Stevencarson. Worked out pretty well.
@klikkeri1 Noted with thanks, hope the suits can be isolated and transposed into other styles
3.0 should have better positions.
I used dim 1 and alpha 0.1
Wow, great to know that you can use a super low dim and get those results, f*k 300 MB LoRAs
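(For anyone who wants to reproduce that setting: a minimal sketch, assuming kohya-ss sd-scripts (the commenter doesn't say which trainer they used), where dim and alpha map to the --network_dim and --network_alpha flags of train_network.py. All paths are placeholders:)

    accelerate launch train_network.py \
      --pretrained_model_name_or_path=/path/to/base_model.safetensors \
      --train_data_dir=/path/to/dataset \
      --output_dir=/path/to/output \
      --network_module=networks.lora \
      --network_dim=1 \
      --network_alpha=0.1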
@_1_ Most people are too lazy to do some basic image normalization and remove useless tags. lol
I don't blame them tho. When I started creating hypernetworks and LoRAs a year ago, I didn't care about manipulating tags or images, just about removing batches of bad images from large datasets.
Recently I got interested in data manipulation because of probabilistic programming.
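(A sketch of the "removing useless tags" step mentioned above, assuming a kohya-style dataset where each image has a comma-separated .txt caption file next to it; the blacklist and folder name here are hypothetical:)

    import pathlib

    # Hypothetical blacklist; adjust to whatever counts as "useless" in your dataset.
    USELESS_TAGS = {"watermark", "text", "signature", "jpeg artifacts"}

    dataset_dir = pathlib.Path("dataset")  # placeholder path

    for caption_file in dataset_dir.glob("*.txt"):
        tags = [t.strip() for t in caption_file.read_text(encoding="utf-8").split(",")]
        kept = [t for t in tags if t and t.lower() not in USELESS_TAGS]
        caption_file.write_text(", ".join(kept), encoding="utf-8")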
@klikkeri1 what's normalization in this case?
@_1_ I use winsorization/winsorizing. I assume there is no important information in extreme values/colors.
@klikkeri1 Ok, very interesting - and did you find it improves outputs? Need to try it for my XL LoRAs
@_1_ Not sure, because if a picture looks weird after normalization, I just delete it from the dataset.
@klikkeri1 what tool are you using for normalization? thx
@_1_ I created a script in the Julia language using Images.jl
@klikkeri1 Hi, would it be possible to know (in a general way) how you do the normalization (winsorization/winsorizing) process? I would like to try to recreate it with Python.
@Konoko There are so many different ways. Use ChatGPT to assist you. First try opening an image with Python, then try saving a copy of that image, then try winsorization.
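(For anyone in @Konoko's position: klikkeri1's Julia/Images.jl script isn't shown, but percentile winsorization of pixel values is straightforward in Python with Pillow and NumPy. A minimal sketch; the 1%/99% cutoffs and the filenames are placeholders:)

    import numpy as np
    from PIL import Image

    def winsorize_image(path_in, path_out, low_pct=1.0, high_pct=99.0):
        # Load as float so clipping can't wrap around uint8 values.
        img = np.asarray(Image.open(path_in).convert("RGB")).astype(np.float32)
        # Clip each channel's extreme values to that channel's own percentiles.
        for c in range(img.shape[2]):
            lo, hi = np.percentile(img[..., c], [low_pct, high_pct])
            img[..., c] = np.clip(img[..., c], lo, hi)
        Image.fromarray(img.astype(np.uint8)).save(path_out)

    winsorize_image("input.png", "winsorized.png")

Note that this only clips the tails; if you additionally rescale the result back to the full 0-255 range, that is contrast stretching layered on top of the winsorization.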