Mecha Girl - Mechabare | メカ娘 ・メカバレ
Trained on depictions of exposed mechanical, robotic, or cybernetic body parts, systems, and innards of cyborg, android, and other robotic characters.
ℹ️ LoRAs work best when applied to the base models on which they were trained. Please read the About This Version notes for the appropriate base model for workflow and training information.
Trigger:
mechabare
Recommended Tags:
android
cyberpunk
cyborg
damaged
mecha musume
mechanical parts
mechanical arms
mechanical legs
mechanical eyes
mechanical ears
mechanical hands
mechanical horns
mechanical tail
mechanical wings
robot joints
ribs
science fiction
skeleton
spine
Training images are rated for sensitivity: NSFW or explicit.
Description
[WAN 1.3B] LoRA
Trained with diffusion-pipe on Wan2.1-T2V-1.3B.
Text-to-video previews were generated with the ComfyUI workflow at ComfyUI_examples/wan/#text-to-video, loading the LoRA with a LoraLoaderModelOnly node and using the fp16 1.3B checkpoint wan2.1_t2v_1.3B_fp16.safetensors.
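Conceptually, a LoRA like this one stores a low-rank update for each targeted weight matrix, and the loader adds that update to the frozen base weight. A minimal numpy sketch of the idea (the shapes and scaling here are illustrative, not the exact WAN implementation; the rank of 32 matches the [adapter] section of the training config below):

```python
import numpy as np

rng = np.random.default_rng(0)

d_out, d_in, rank, alpha = 64, 48, 32, 32  # rank 32 matches the [adapter] config

W = rng.standard_normal((d_out, d_in))        # frozen base weight
A = rng.standard_normal((rank, d_in)) * 0.01  # trained down-projection
B = np.zeros((d_out, rank))                   # trained up-projection (zero-initialized)

# Merged weight: base plus scaled low-rank update.
W_merged = W + (alpha / rank) * (B @ A)

# With B still at its zero init, the merged weight equals the base weight,
# which is why a freshly initialized LoRA leaves the model unchanged.
assert np.allclose(W_merged, W)
```

The low-rank factorization is what keeps the adapter file small: two thin matrices per layer instead of a full-size weight delta.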
dataset.toml
# Resolution settings.
resolutions = [512]
# Aspect ratio bucketing settings
enable_ar_bucket = true
min_ar = 0.5
max_ar = 2.0
num_ar_buckets = 7
# Frame buckets (1 is for images)
frame_buckets = [1]
[[directory]] # IMAGES
# Path to the directory containing images and their corresponding caption files.
path = '/mnt/d/huanvideo/training_data/images'
num_repeats = 5
resolutions = [720]
frame_buckets = [1] # Use 1 frame for images.
[[directory]] # VIDEOS
# Path to the directory containing videos and their corresponding caption files.
path = '/mnt/d/huanvideo/training_data/videos'
num_repeats = 5
resolutions = [512]
frame_buckets = [30, 33, 38, 50, 56, 80]

config.toml
# Training config file.
output_dir = '/mnt/d/wan/training_output'
# Dataset config file.
dataset = 'dataset.toml'
# Training settings
epochs = 50
micro_batch_size_per_gpu = 1
pipeline_stages = 1
gradient_accumulation_steps = 4
gradient_clipping = 1.0
warmup_steps = 100
# eval settings
eval_every_n_epochs = 5
eval_before_first_step = true
eval_micro_batch_size_per_gpu = 1
eval_gradient_accumulation_steps = 1
# misc settings
save_every_n_epochs = 5
checkpoint_every_n_minutes = 30
activation_checkpointing = true
partition_method = 'parameters'
save_dtype = 'bfloat16'
caching_batch_size = 1
steps_per_print = 1
video_clip_mode = 'single_middle'
[model]
type = 'wan'
ckpt_path = '../Wan2.1-T2V-1.3B'
dtype = 'bfloat16'
# You can use fp8 for the transformer when training LoRA.
transformer_dtype = 'float8'
timestep_sample_method = 'logit_normal'
[adapter]
type = 'lora'
rank = 32
dtype = 'bfloat16'
[optimizer]
type = 'adamw_optimi'
lr = 5e-5
betas = [0.9, 0.99]
weight_decay = 0.02
eps = 1e-8
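With micro_batch_size_per_gpu = 1 and gradient_accumulation_steps = 4, each optimizer step sees an effective batch of 4 samples, and num_repeats = 5 multiplies the dataset size per epoch. A quick sanity check of that arithmetic (the single-GPU count and the dataset size of 100 are hypothetical placeholders, not values from this training run):

```python
micro_batch_size_per_gpu = 1
gradient_accumulation_steps = 4
num_gpus = 1          # assumption: single-GPU run
num_repeats = 5
dataset_size = 100    # hypothetical image/video count

effective_batch = micro_batch_size_per_gpu * gradient_accumulation_steps * num_gpus
samples_per_epoch = dataset_size * num_repeats
steps_per_epoch = samples_per_epoch // effective_batch

print(effective_batch)   # 4
print(steps_per_epoch)   # 125
```

This is useful for reading the config: warmup_steps = 100 would cover most of the first epoch at this hypothetical dataset size, and eval/save intervals of 5 epochs translate to multiples of steps_per_epoch.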