Description
Recommended Settings:
Sampler: DPM++ SDE / Euler a
Sampling Steps: 20–50
CFG Scale: 5–8
Positive prompt: score_9, score_8_up, score_7_up,
Negative prompt (for anime): ((score_6, score_5, score_4)), source_pony, 3d, cartoon, bad anatomy, loli, child, (worst quality, low quality, normal quality), lowres, text, simple background, poor detail, displeasing, blurry, lacking depth, extra limbs, missing fingers

TIPS: This model has a strict token limit. Exceeding it makes the model worse at following your prompt, so aim for about 100 tokens or fewer. To check your prompt's token count, use the token visualizer on the Run.app website. Note that symbols like "_", "-", "()", "[]", and even commas count as separate tokens.

To maximize efficiency:
- Use commas only when necessary to separate categories.
- Group similar descriptors together without commas. For example: long hair black hair ponytail hime cut, blue eyes

This approach reduces the token count and helps the AI process related concepts together. It's more effective because the AI tries to make sense of the words within each token group.

But hey, it's just a theory. I hope this helps, though =-=
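If you just want a quick sanity check before pasting into a visualizer, here's a rough sketch of a counter that follows the rule described above (each word and each symbol like "_", "-", "(", ")", "[", "]", or "," counts as one token). This is only an approximation, not the model's real CLIP tokenizer, which can split rare words further:

```python
import re

def approx_token_count(prompt: str) -> int:
    # Rough estimate: every run of letters/digits is one token,
    # and each of _ - ( ) [ ] , is its own token, per the tip above.
    # NOT the real CLIP tokenizer -- just a quick ballpark.
    tokens = re.findall(r"[A-Za-z0-9]+|[_\-()\[\],]", prompt)
    return len(tokens)

# "score_9" splits into "score", "_", "9" -> 3 tokens
print(approx_token_count("score_9"))  # -> 3
print(approx_token_count("long hair black hair ponytail hime cut, blue eyes"))  # -> 10
```

If the estimate is already well past 100, trimming before you even open the visualizer will save you a round trip.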






