1990s Anime style LoRA
Making models can be expensive. Do you like what I do? Consider supporting me on Patreon 🅿️ or feel free to buy me a coffee ☕
This is pretty straightforward. Works on most anime models, but on some of them it lowers the success rate of hands, so be careful. I've noticed Kenshi is a good model that still manages to get hands right even with this LoRA, while Midnight Mixer Melt has a lower success rate with hands but gets the style more consistently. Triggers with 1990s \(style\), retro artstyle, weight around 0.5-0.7.
Shout-out to Nerfgun3 for making most of the example images! Some are also using his unreleased Raphtalia embedding. I'll link it as soon as it's available.
Also, as you can see, it works with all anime LoRAs and embeddings that do not add too much style (but Nerfgun3 manages to get many good pics with an "Elden Ring Background" LoRA).
How to use LoRAs in auto1111:

- Update the webui (use `git pull` like here or redownload it)
- Copy the file to `stable-diffusion-webui/models/lora`
- Select your LoRA like in this video
- Make sure to change the weight (by default it's `:1`, which is usually too high)
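Putting the steps above together, a prompt using this LoRA might look like the sketch below. The LoRA filename and exact weight are illustrative assumptions; use the filename you downloaded and a weight in the 0.5-0.7 range the description recommends.

```
masterpiece, 1girl, city street, 1990s \(style\), retro artstyle <lora:1990sAnimeStyle:0.6>
```

The `<lora:name:weight>` syntax is how auto1111 applies a LoRA from the prompt; lowering the number after the second colon reduces the style's strength.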
Comments
Thank you again for the wonderful custom LoRA model !
I apologize for asking so many questions, but to learn a color style, is it enough to train on images with similar colors and shapes (in this case, anime pictures from the 1990s)?
Or do I have to change the tags attached to the training images to something special?
For LoRAs and models you also need the tags.
I have a question. I'm so sorry to bother you with it.
What tools are you using for LoRA training? Is it cloneofsimo's?
I'm using Kohya.
Thank you very much for your response. I'm struggling with the network_alpha parameter recently added to Kohya. I want to train with dim (= lora_rank) at 256 like you, but if I leave network_alpha at the default value of 1, training takes too long. How are you solving this problem? Could you give me a hint?
@APPLAE sorry but I never had this issue :\
Anyway, training at 256 dim doesn't really do much. I only use it when I'm afraid that some concept is too complex; 90% of my LoRAs are 128 dim.
If I set network_alpha equal to dim, the training result comes out as a monochromatic image. I guess I'll have to lower the LR, but I can't find a suitable value. If these questions are too sensitive for you, I'd still be very grateful for even just a hint.
Sorry for seeing your answer late. The style LoRA you created is really fantastic, and I just saw it and wanted to say so.
Akemi Takada (1980s) Style LoRA - this is the LoRA I saw, so I wanted to know how to train at 256 dim. Are the defaults enough without any special settings?
@APPLAE yes. Data is all that matters.
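For anyone following this thread: a minimal Kohya sd-scripts invocation pairing network_dim with a matching network_alpha might look like the sketch below. The model path, data paths, and learning rate are illustrative assumptions, not the author's actual settings.

```bash
accelerate launch train_network.py \
  --pretrained_model_name_or_path="anything-v4.safetensors" \
  --train_data_dir="./train" \
  --output_dir="./output" \
  --network_module=networks.lora \
  --network_dim=128 \
  --network_alpha=128 \
  --learning_rate=1e-4
```

LoRA weight updates are effectively scaled by network_alpha / network_dim, so leaving alpha at 1 with dim 256 scales them by 1/256, which is why training seems to crawl; setting alpha equal to dim removes the scaling, but you then typically need a lower learning rate to avoid overcooked results.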
@lykon Thank you very much :) You are an angel
Something's wrong on my side, I don't get the same result as the example images at all.
These are based on the same prompt and parameters as the first image: (sd 1.5)
https://i.imgur.com/QKzztQF.png
https://i.imgur.com/mqyINDW.png
Edit: looks like the base model is more important than I thought. Tested with Counterfeit; the result is better, but it still doesn't look like the examples (it doesn't feel 90s at all). Anything V4 works much better.
Don't use SD 1.5. Please use a better base model. The description indicates which one I used, as well as the generation data.
Cool, but how hard would it be to build a larger model, possibly a checkpoint, specifically for the 1995 Evangelion series? I searched all of Civitai and there is nothing like it.
Some software is incompatible with this because of the punctuation in the trigger word; please remove it in a future update. See how it is escaped in the Civitai tag as well?
It's just 2 separate trigger words. No need to use the comma.