Training data was provided by others.
The training data resolution is fairly low, so consider reducing the LoRA's weight when using it (see the example below these notes).
The teeth in generations come out quite poorly, but there's nothing that can be done about that now.
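A quick illustration of what lowering the weight looks like, assuming A1111/ComfyUI-style prompt tags; the name and the 0.6 value are placeholders, not values from this page:

<lora:character_name:0.6>

Weights below the default 1.0 reduce the LoRA's influence on the output; raise or lower to taste.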
Description
{
  "engine": "kohya",
  "unetLR": 1,
  "clipSkip": 2,
  "loraType": "lora",
  "keepTokens": 3,
  "networkDim": 14,
  "numRepeats": 2,
  "resolution": 2048,
  "lrScheduler": "cosine",
  "minSnrGamma": 5,
  "noiseOffset": 0.1,
  "targetSteps": 1210,
  "enableBucket": true,
  "networkAlpha": 8,
  "optimizerType": "Prodigy",
  "textEncoderLR": 1,
  "maxTrainEpochs": 20,
  "shuffleCaption": true,
  "trainBatchSize": 4,
  "flipAugmentation": false,
  "lrSchedulerNumCycles": 3
}
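For anyone wanting to reproduce a setup close to this, the sketch below shows roughly how these fields map onto kohya sd-scripts CLI flags. This is my own reconstruction, not the uploader's actual command: the script name (sdxl_train_network.py, assuming an SDXL base as the comments below suggest) and all paths are placeholders, and numRepeats is set through the dataset folder naming (e.g. a folder called 2_charactername) rather than a flag.

import json
import shlex

# Training parameters as listed on this page (booleans handled as flags below).
cfg = json.loads("""
{
  "unetLR": 1, "clipSkip": 2, "keepTokens": 3, "networkDim": 14,
  "resolution": 2048, "lrScheduler": "cosine", "minSnrGamma": 5,
  "noiseOffset": 0.1, "networkAlpha": 8, "optimizerType": "Prodigy",
  "textEncoderLR": 1, "maxTrainEpochs": 20, "trainBatchSize": 4,
  "lrSchedulerNumCycles": 3
}
""")

args = [
    "accelerate", "launch", "sdxl_train_network.py",  # script name assumed (SDXL)
    "--network_module", "networks.lora",
    "--network_dim", str(cfg["networkDim"]),
    "--network_alpha", str(cfg["networkAlpha"]),
    # An LR of 1 is the usual Prodigy convention: the optimizer adapts step size itself.
    "--optimizer_type", cfg["optimizerType"],
    "--unet_lr", str(cfg["unetLR"]),
    "--text_encoder_lr", str(cfg["textEncoderLR"]),
    "--lr_scheduler", cfg["lrScheduler"],
    # num_cycles only takes effect with cosine_with_restarts, not plain cosine.
    "--lr_scheduler_num_cycles", str(cfg["lrSchedulerNumCycles"]),
    "--min_snr_gamma", str(cfg["minSnrGamma"]),
    "--noise_offset", str(cfg["noiseOffset"]),
    "--keep_tokens", str(cfg["keepTokens"]),
    "--clip_skip", str(cfg["clipSkip"]),
    "--resolution", f"{cfg['resolution']},{cfg['resolution']}",
    "--train_batch_size", str(cfg["trainBatchSize"]),
    "--max_train_epochs", str(cfg["maxTrainEpochs"]),
    "--enable_bucket",     # enableBucket: true
    "--shuffle_caption",   # shuffleCaption: true
    # flipAugmentation is false on this page, so --flip_aug is omitted.
    "--train_data_dir", "/path/to/dataset",  # placeholder
    "--output_dir", "/path/to/output",       # placeholder
]
print(shlex.join(args))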
Comments (5)

Nice to see a DAZ character. I had been wondering for a while whether renders could make for good training data for LoRAs.
It can work, but DAZ renders often have very fixed posing/composition patterns, so the resulting LoRA can end up looking a bit stiff or repetitive if the dataset isn’t varied enough.
@sxus_Sw I had thought so. If you're interested in doing more LoRAs trained on renders, feel free to message me.
Does the 2048 training resolution improve the LoRA more than doubling the dim to 64 would?
2048 training resolution usually doesn't help much for SDXL LoRAs, since the base model itself wasn't trained at that resolution. Also, dim 64 is already pretty large; I usually use 16~32 depending on the LoRA.
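To put the dim comparison in concrete terms (my own back-of-envelope, not from the thread): a LoRA adds two low-rank matrices per adapted weight, so the adapter's size grows linearly with dim, while the training resolution leaves it unchanged. The 1280-wide projection below is a hypothetical, roughly SDXL-sized layer.

# LoRA factorizes a weight update as B @ A, with A of shape (dim, in) and
# B of shape (out, dim), so each adapted layer adds dim * (in + out) parameters.
def lora_params_per_layer(dim: int, in_features: int, out_features: int) -> int:
    return dim * (in_features + out_features)

# Hypothetical 1280x1280 attention projection:
for dim in (14, 32, 64):
    n = lora_params_per_layer(dim, 1280, 1280)
    print(f"dim={dim:>2}: {n:,} params for one 1280x1280 layer")

# Doubling dim doubles adapter size and capacity; raising the training
# resolution changes compute and what the network sees during training,
# but not the number of trainable parameters.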