Description
# Overview
Compared to V1, I have reduced the network dimensionality from 256 to 64 (a typical level). Which setting is more appropriate may depend on the backend used.
I trained a lightweight model on a similar dataset; the overall quality is nearly the same, and even in the fine details I can't discern any differences.
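As a back-of-the-envelope check on what the dim reduction means for file size, here is a minimal sketch. The function and the 768x768 layer shape are illustrative assumptions, not taken from this model: a LoRA adds a down-projection (dim x in) and an up-projection (out x dim) per adapted layer, so the saved checkpoint scales roughly linearly with `network_dim`.

```python
def lora_bytes(in_features, out_features, dim, bytes_per_param=2):
    """Approximate on-disk size of one LoRA-adapted linear layer.

    LoRA stores two matrices per layer:
      down: (dim, in_features), up: (out_features, dim)
    bytes_per_param=2 assumes fp16 weights.
    """
    return (dim * in_features + out_features * dim) * bytes_per_param

# Hypothetical 768x768 attention projection from an SD1.x-style UNet:
size_64 = lora_bytes(768, 768, 64)    # 196,608 bytes (~0.19 MB per layer)
size_256 = lora_bytes(768, 768, 256)  # exactly 4x the dim-64 size
```

Summed over all adapted layers, this linear scaling is why dropping dim from 256 to 64 cuts the file to roughly a quarter of its former size.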
```shell
train_network.py --pretrained_model_name_or_path="JosephusCheung/ACertainty" --prior_loss_weight=1.0 --enable_bucket --min_bucket_reso=384 --max_bucket_reso=1280 --train_batch_size=8 --learning_rate=1e-6 --text_encoder_lr 5e-8 --xformers --save_model_as=safetensors --clip_skip=2 --seed=42 --color_aug --flip_aug --network_module=networks.lora --resolution=768,512 --network_dim 64 --save_every_n_epochs 1 --max_train_epochs 2
```

# FAQ
Comments (3)
Man, half a GB is too much for a LoRA. There are LoRAs as small as 9 MB.
600 MB?
Thanks. My ever-growing SD folder is now that much bigger: 244 gigs. I cry.