Trained with a keep-tokens separator and with caption shuffling enabled.
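The sketch below is a minimal Python illustration of how a keep-tokens separator interacts with caption shuffling during training, assuming the keep_tokens_separator / shuffle_caption behaviour of kohya-ss sd-scripts; the "|||" separator string and all tags are placeholders, not this model's actual training settings.

```python
import random

# Minimal sketch of caption shuffling with a keep-tokens separator, as done
# by trainers such as kohya-ss sd-scripts (shuffle_caption together with
# keep_tokens_separator). The "|||" separator and the tags are placeholders.
KEEP_TOKENS_SEPARATOR = "|||"

def shuffle_caption(caption: str, rng: random.Random) -> str:
    """Keep tags before the separator fixed; shuffle everything after it."""
    if KEEP_TOKENS_SEPARATOR in caption:
        kept, rest = caption.split(KEEP_TOKENS_SEPARATOR, 1)
    else:
        kept, rest = "", caption
    kept_tags = [t.strip() for t in kept.split(",") if t.strip()]
    rest_tags = [t.strip() for t in rest.split(",") if t.strip()]
    rng.shuffle(rest_tags)  # only the trailing tags get a new order each step
    return ", ".join(kept_tags + rest_tags)

if __name__ == "__main__":
    caption = "artist_name, 1girl ||| solo, smile, white hair, looking at viewer"
    print(shuffle_caption(caption, random.Random(0)))
```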
Recommended tag order for both training and inference: first the art style (artist name), style imitations such as jojo (style), and secondary styles such as greyscale; then the resolution tag (absurdres); then the series/copyright name and official-art tags (official art or official style); then the character name, camera angle, clothing, pose, and appearance/clothing details; then the time period (year xxxx), recency (newest), quality tag (masterpiece), and rating (explicit); and finally the LoRA model at the very end. (In short: tags that cover a larger share of the image and have a narrower, more precise scope are weighted toward the front; tags that cover a smaller share or are vaguer go later. Image coverage takes priority, and tag precision is considered second.)
The model may be overfitted, so remember to lower its weight when using it; see the example prompt below.
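As a concrete illustration, here is a small sketch that assembles a prompt in the recommended order and appends the LoRA at a reduced weight. Every tag and the LoRA file name are placeholder assumptions for illustration, not tags shipped with this model.

```python
# Placeholder tags arranged in the recommended order: artist -> style
# imitation / secondary style -> resolution -> copyright / official art ->
# character -> view / clothes / pose / details -> year -> recency ->
# quality -> rating, with the LoRA tag (at reduced weight) at the very end.
prompt_parts = [
    "artist_name",                                      # art style
    "jojo (style), greyscale",                          # style imitation + secondary style
    "absurdres",                                        # resolution
    "some_copyright, official art",                     # series / official-art tags
    "1girl, character_name",                            # character
    "from above, school uniform, sitting, long hair",   # view / clothes / pose / details
    "year 2023, newest",                                # time period and recency
    "masterpiece",                                      # quality
    "explicit",                                         # rating
    "<lora:example_lora:0.7>",                          # LoRA last, weight lowered to 0.7
]
print(", ".join(prompt_parts))
```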
LoRA parameter viewer: https://xypher7.github.io/lora-metadata-viewer/
For upscaling, the method described at https://civitai.com/articles/4560/upscaling-images-using-multidiffusion is recommended, combined with the ADetailer extension to fix faces.
Alternatively, use hires fix together with the ADetailer extension to fix faces; otherwise the image will come out blurry. A sketch of the hires-fix route is shown below.
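For reference, a minimal sketch of the hires-fix route through the AUTOMATIC1111 WebUI txt2img API, using the stock /sdapi/v1/txt2img fields; the prompt, upscaler choice, local URL, and sampling settings are placeholder assumptions, and ADetailer is left to be enabled separately.

```python
import base64
import requests

# Minimal sketch of the hires-fix route through the AUTOMATIC1111 WebUI API.
payload = {
    "prompt": "artist_name, 1girl, masterpiece, <lora:example_lora:0.7>",
    "negative_prompt": "worst quality, lowres",
    "steps": 28,
    "width": 832,
    "height": 1216,
    "enable_hr": True,          # hires fix on, otherwise the output stays soft
    "hr_scale": 1.5,
    "hr_upscaler": "Latent",    # pick any installed upscaler
    "denoising_strength": 0.4,
    # ADetailer is enabled separately (UI checkbox or its own alwayson_scripts
    # entry); its argument format depends on the installed ADetailer version,
    # so it is not hard-coded here.
}

resp = requests.post("http://127.0.0.1:7860/sdapi/v1/txt2img", json=payload)
resp.raise_for_status()
with open("output.png", "wb") as f:
    f.write(base64.b64decode(resp.json()["images"][0]))
```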
Artist's Fanbox: https://jima.fanbox.cc/?utm_campaign=www_profile&utm_medium=site_flow&utm_source=pixiv
Artist's Pixiv: https://www.pixiv.net/users/4359745
(Please support the artist, thank you meow)
Description
The LoKr v1.8 version was trained with max_token_length=675, which may cause chaotic or misplaced elements in generated images. This comes from a mismatch with the noobai-XL base model, which uses a max token length of 77. (It is analogous to training a LoRA at 1536x1536 on a base model originally trained at 1024x1024: with a small training set, such a mismatch can scramble elements.)
If you find the results unsatisfactory, revert to the v1.7 version.
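To make the mismatch concrete, here is a rough Python sketch that counts how many 75-token windows a prompt occupies, assuming the standard SD CLIP tokenizer (openai/clip-vit-large-patch14, used here only for counting and not part of this LoRA): max_token_length=675 allows nine such windows, while the base model was built around a single 77-token window.

```python
from transformers import CLIPTokenizer

# Rough illustration of why the setting matters: CLIP-based text encoders
# read the prompt in windows of 75 tokens (plus 2 special tokens = 77).
# max_token_length=675 corresponds to 9 such windows, far more context than
# a base model built around a single 77-token window was trained on.
tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")

def count_windows(prompt: str, window: int = 75) -> int:
    ids = tokenizer(prompt, add_special_tokens=False)["input_ids"]
    return max(1, -(-len(ids) // window))  # ceiling division

print(count_windows("artist_name, 1girl, masterpiece"))  # short prompt -> 1 window
print(675 // 75)                                         # 9 windows at max_token_length=675
```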