Just a model I created for fun for testing SDXL LoRAs :D It's a bit jank without the negative embedding, and even more jank without any at strength 1 (hit or miss at 1, but somewhat good most of the time).
Edit: apparently it's really good with Anime Art Diffusion XL using the negative embeddings, and really good in blue pencil as well. Also, the jank output was due to the resolution being too high, oops, my bad :/
Resolutions that I find best: 915x1144 and 886x1182. It can do 1024x1536 (768x1344 works better), but I would not recommend it (it gives the best results at the expense of roughly a 2/5 chance of jank, unless you use negative embeddings, which reduce the chance).
Trained on 99 images.
Strength: try 0.8-1; it works well on the blue pencil SDXL lineup (works well at 0.9).
Please give feedback, as I am playing around with a lot of settings and want to know what the optimal settings are :DDD
Any comments, suggestions, and tips are appreciated :D Hope you enjoy it :DDD
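If you're using diffusers instead of a UI, here's a minimal sketch of the settings above (the base checkpoint, file name, and prompt below are just placeholders, and 915 gets rounded to 912 because diffusers wants dimensions divisible by 8):

```python
# Minimal sketch, not my exact setup: load the LoRA at strength 0.9
# and render near one of the recommended resolutions.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",  # stand-in; swap in blue pencil XL
    torch_dtype=torch.float16,
).to("cuda")

pipe.load_lora_weights("sailorboy.safetensors")  # assumed local file name

image = pipe(
    "1boy, sailor uniform, upper body",          # example prompt
    width=912,                                   # ~915, rounded to a multiple of 8
    height=1144,
    cross_attention_kwargs={"scale": 0.9},       # LoRA strength
).images[0]
image.save("sample.png")
```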
Comments (18)
Dumb me, I forgot the settings:

```json
{
"LoRA_type": "Standard",
"adaptive_noise_scale": 0,
"additional_parameters": "",
"block_alphas": "",
"block_dims": "",
"block_lr_zero_threshold": "",
"bucket_no_upscale": true,
"bucket_reso_steps": 64,
"cache_latents": true,
"cache_latents_to_disk": true,
"caption_dropout_every_n_epochs": 0.0,
"caption_dropout_rate": 0,
"caption_extension": "",
"clip_skip": 2,
"color_aug": false,
"conv_alpha": 1,
"conv_block_alphas": "",
"conv_block_dims": "",
"conv_dim": 1,
"decompose_both": false,
"dim_from_weights": false,
"down_lr_weight": "",
"enable_bucket": true,
"epoch": 10,
"factor": -1,
"flip_aug": false,
"full_bf16": false,
"full_fp16": false,
"gradient_accumulation_steps": 1.0,
"gradient_checkpointing": true,
"keep_tokens": "0",
"learning_rate": 0.0004,
"logging_dir": "/workspace/data/log",
"lora_network_weights": "",
"lr_scheduler": "constant",
"lr_scheduler_num_cycles": "",
"lr_scheduler_power": "",
"lr_warmup": 0,
"max_bucket_reso": 2048,
"max_data_loader_n_workers": "0",
"max_resolution": "1024,1536",
"max_timestep": 1000,
"max_token_length": "225",
"max_train_epochs": "",
"mem_eff_attn": false,
"mid_lr_weight": "",
"min_bucket_reso": 256,
"min_snr_gamma": 0,
"min_timestep": 0,
"mixed_precision": "bf16",
"model_list": "custom",
"module_dropout": 0,
"multires_noise_discount": 0,
"multires_noise_iterations": 0,
"network_alpha": 1,
"network_dim": 256,
"network_dropout": 0,
"no_token_padding": false,
"noise_offset": 0,
"noise_offset_type": "Original",
"num_cpu_threads_per_process": 2,
"optimizer": "Adafactor",
"optimizer_args": "scale_parameter=False relative_step=False warmup_init=False",
"output_dir": "/workspace/data/output",
"output_name": "sailorboy",
"persistent_data_loader_workers": false,
"pretrained_model_name_or_path": "/workspace/bluePencilXL_v006.safetensors",
"prior_loss_weight": 1.0,
"random_crop": false,
"rank_dropout": 0,
"reg_data_dir": "/workspace/data/regg",
"resume": "",
"sample_every_n_epochs": 0,
"sample_every_n_steps": 0,
"sample_prompts": "",
"sample_sampler": "euler_a",
"save_every_n_epochs": 1,
"save_every_n_steps": 0,
"save_last_n_steps": 0,
"save_last_n_steps_state": 0,
"save_model_as": "safetensors",
"save_precision": "bf16",
"save_state": false,
"scale_v_pred_loss_like_noise_pred": false,
"scale_weight_norms": 0,
"sdxl": true,
"sdxl_cache_text_encoder_outputs": false,
"sdxl_no_half_vae": true,
"seed": "",
"shuffle_caption": true,
"stop_text_encoder_training_pct": 0,
"text_encoder_lr": 4e-05,
"train_batch_size": 4,
"train_data_dir": "/workspace/data/traindata",
"train_on_input": true,
"training_comment": "",
"unet_lr": 0.0004,
"unit": 1,
"up_lr_weight": "",
"use_cp": false,
"use_wandb": false,
"v2": false,
"v_parameterization": false,
"vae_batch_size": 0,
"wandb_api_key": "",
"weighted_captions": false,
"xformers": true
Forgot to mention: I pruned the model from 1.8 GB to 17 MB using the kohya script. SDXL LoRAs are unreasonably huge. Pruning will also improve some models' output as well. Also, I noticed that 1024x1536 will give some jank results; to prevent that, you can use 915x1144.
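For anyone who wants to shrink theirs too, this is roughly how (a sketch; I'm assuming the resize utility in kohya-ss/sd-scripts, and the file names and target rank here are made up):

```python
# Rough sketch of shrinking an SDXL LoRA with kohya-ss/sd-scripts'
# resize utility. Script path, file names, and rank are assumptions.
import subprocess

subprocess.run([
    "python", "networks/resize_lora.py",          # run from an sd-scripts checkout
    "--model", "sailorboy.safetensors",           # the unpruned ~1.8 GB LoRA
    "--save_to", "sailorboy_small.safetensors",   # resized output
    "--new_rank", "8",                            # lower rank = much smaller file
    "--save_precision", "fp16",
], check=True)
```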
You've made me very happy. 😁 Now I'll have to start playing with XL. (I hadn't tried it out yet.) Do you know if it will also work with sailor shorts, for the full sailor boy look? That would be awesome.
Yes, I think it does. The problem with SDXL right now is that it's not as good at generating male characters compared to SD 1.5; for example, you can't generate a sailor boy without a LoRA / training it yourself. I suggest ComfyUI, as A1111 takes ~20 GB of RAM... Comfy can run on Colab just fine.
Imagine that: Stable Diffusion being lousy at generating male characters! I'm shocked. 😆
I've got my a1111 system up and running locally, and have been using it that way for many months. I've also got Comfy and Invoke and even a few other tools (Mochi Diffusion, Diffusion Bee, and a CoreML Python-only implementation), but I use a1111 for most all of my stuff. I can't feel comfortable in ComfyUI for one thing. 😜
Unfortunately, there seems to be a bug with Torch 2.0 on macOS 13.3, and it likes to throw out-of-memory errors when generating even a single 256x256 image. (That affects all three: a1111, Comfy, and Invoke.) The only workaround I've found is to go back to using Torch 1, which is slower and doesn't have as great support for MPS (the interface to the GPU on Mac, in place of CUDA). I don't really want to run on a shared system like Colab for various reasons. But it's good to know that Comfy does still work on Colab, should I need to use it.
In terms of space, I've got a 2TB SSD devoted entirely to SD, and about 4TB free if I need it on my main data hard drive, plus a couple more 4TB drives I'm not using, so space isn't really a big concern for me. (A huge change from when I first started using computers... a 60MB hard drive was huge, and 10-20MB was standard. My first PC actually didn't even have a hard drive. Just two floppies.)
@dita I remember when I was younger, 20 gigs was a lot of storage. I store my SD files on a 2TB HDD and mirror them on HF for fast downloads when training on RunPod. I run SD locally, but I use RunPod to run SDXL and train SD LoRAs. This model alone took 6 hours to train with 4 data-loader workers and a 3090; imagine if I had only used 1, it would be 19k steps and would probably take a full day to train. But I don't mind, it's really fun to train and experiment. LoRA sizes are absurd on SDXL; even if the size doesn't bother you, the loading time will. The unpruned version is 1.8 gigs and took 7 seconds to load in SD due to the size, and it also eats your RAM. Look at us now, downloading SD checkpoints left and right, not caring about the size 😆😆. Humanity has come a long way, I guess :D
@shiowoneko Hmm, yes, that's a good point. At 1.8GB, I definitely wouldn't be downloading 4000+ of them like I have for SD1.5 lol. But that happens with increased size. People think, "1024x1024 is twice the size of 512x512." But it's not. It's 4x the size. So it makes sense if the LoRAs are around 4x the size too.
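To put actual numbers on it (a trivial check, just illustrating the area math):

```python
# Image "size" scales with area, i.e. the square of the edge length.
small = 512 * 512       # 262,144 pixels
large = 1024 * 1024     # 1,048,576 pixels
print(large // small)   # 4 -> four times the pixels, not two
```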
It's possible I can't even do XL on my system, since I haven't even tried it yet. If so, I'll probably have to do like you and either start using RunPod... Or just not do XL.
It's crazy just how fast things like storage space evolve. In 10 years, my 12TB data HD will be puny in comparison to what will be on the market. No doubt they'll have drives in the 100–200TB range, if not 500TB. That's if they keep making HD and don't switch entirely to SSD. In terms of SSDs, probably 50TB would be pushing it. But in 15 years? Maybe they'll be breaking the PB (petabyte) range. Putting that in terms of the HDs of my early days, 1PB ≅ 10.5 million 100MB drives or 105 million 10MB drives. Boggles the mind. My first MP3 player, which I got in 1999, stored 16MB, enough for 4-6 songs. 😆 (No, I wasn't stupid enough to buy it; it was included free with a CD writer I bought.)
If we had a fast, easy, cheap way of storing things in DNA, our storage space would explode. DNA has a storage density of approximately 1TB per mg. The data on my 12TB HD would fit in 12mg of DNA. The entire Library of Congress is estimated at 40PB, so it would take 40g (1.4oz if you're American like me) of DNA to store it. But there are also newer forms of NVRAM on the horizon, like RRAM, that store data at a higher density than current SSD (NAND NVRAM) technology. So perhaps that technology will cause data storage to grow exponentially.
@dita You can already buy SSDs with 100TB of storage; yes, it would cost a lot, but time will only make it cheaper. It's crazy how much information our DNA can store; maybe this will be the future of storage, who knows. DNA storage is not too far-fetched, seeing what humans can do given time... only time can tell, I guess 😆😆😆😆
@shiowoneko $40,000 US. Wow. Insane. But I had no idea that 100TB was on the market already as an SSD. Or even as an HDD, actually. Guess I'm pretty short-sighted when it comes to speculating on future technology. In terms of consumer/desktop equipment, it looks like 16TB is the largest SSD and 20TB is the largest HDD as of June. That doesn't sound quite right, though, because they had 18TB HDDs in 2020.
@dita I know, right? It's insane... although 20TB is a bit too big for me, but hey, can't hurt to have more, right? 😁😁
@shiowoneko No matter how much storage space you have, you can always find some way to fill it up. —Dita's Law
@dita Can't agree more... I just bought a new phone; it has 128 GB of storage plus a 64 GB SD card for photos and videos. It only took 3 weeks for the SD card to fill to 80%. Crazy :D
@shiowoneko That's how it goes, yes. I had to delete a whole lot of anime a couple of years ago, and I'd rather not do that again. I've found in the last few months that I wanted to go back and watch some of it again. So now I try to buy more storage ahead of time, before I run out of what I've already got.
@dita Yeah, been there. I had downloaded 07-Ghost and had to delete it because I needed the space for editing a video. It sucked, but I had to do it back then.
@shiowoneko Ah, that's a title I haven't seen in... 10 years? Maybe 12. I actually only started watching anime in 2009, at the age of 33. But I've made up for it. My MAL stats show I've seen 21,060 episodes (1110 series, 121 OVA, 93 movies, 101 specials, 340.6 days). Around 4 episodes a day on average.
@dita Wow, that's a lot. I wish I had the time 😆😆
@shiowoneko I don't watch nearly as much as I used to, so my list is growing a lot more slowly than it was the first 5-6 years. At that point, there was so much I hadn't seen, and I was focused primarily on older series, so I was always marathoning them. I watched all of Naruto and the first 200-some episodes of Shippuuden in just over 2 weeks. Then when I started watching seasonals, I would watch just about everything. Until a few years ago, it was pretty common for me to be watching 30-40 series per season. Now I try to keep it in the 20-25 range (ideally 21), because I mostly only watch while I'm eating. So 21 episodes in 7 days. Which is still a lot. But 20 episodes a week compared to, like, 150 is a big difference: 8 hours vs 60 hours.
@dita Yeah, I used to watch anime from morning until night, but nowadays it's been really busy with school.