Both models were trained on the Starsector portraits: one is geared toward reproducing the original style exactly, while the other is more upscaled but still true to the compositional style of the original portraits.
From my own tinkering, both work best at about 0.6 strength, but you may need to experiment further if you want to mix in other LoRAs or embeddings.
Neither is perfect, but I'm happy with the overall results for the moment; I'll probably further train or retrain them later on.
Description
This version is trained solely on the original 128x128 images from the game, so the 512x512 images it generates look best when scaled back down.
From personal experience, a weight of 0.6 is best for retaining the game's style while still looking decent when scaled down.
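As a quick sketch of the scale-down step above: the generated 512x512 portraits can be resized back to the game's native 128x128 with Pillow. The function name and file paths here are just placeholders for illustration.

```python
from PIL import Image  # Pillow

def downscale_portrait(src: str, dst: str, size: int = 128) -> None:
    """Downscale a generated 512x512 portrait to the game's native 128x128."""
    img = Image.open(src).convert("RGB")
    # LANCZOS resampling keeps edges reasonably crisp at small sizes
    img.resize((size, size), Image.LANCZOS).save(dst)
```

Lanczos (or area/box) resampling tends to preserve detail better than nearest-neighbor when shrinking by a factor of four.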
Comments
If you trained on portraits, how did you prevent the faces from looking too similar to each other? There is a little similarity in these faces but it's not too bad.
Never mind; on second look they are very similar.
What you can try is getting as close as possible to the style with prompts, using a wildcard for race and other facial features, and then running batch face swap with that prompt and wildcard. Take all those swapped images and train the model again. That should increase the variety in the faces it makes, making them more distinct and easier to change and edit.
It's a cool style.
Also, for a style you can use a network dim of 1/2/4 (very low). It really doesn't need to be 144 MB.
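To illustrate the low-dim suggestion above: if the LoRA is trained with kohya's sd-scripts (assumed here; the source doesn't say which trainer was used), the rank is set with `--network_dim`. All paths and the base-model name below are placeholders.

```shell
# Hypothetical kohya sd-scripts invocation; paths and base model are placeholders.
# A low rank (network_dim 4) is usually plenty for a style LoRA and produces a
# file of a few MB, versus ~144 MB at dim 128.
accelerate launch train_network.py \
  --pretrained_model_name_or_path="runwayml/stable-diffusion-v1-5" \
  --train_data_dir="./portraits" \
  --output_dir="./output" \
  --network_module=networks.lora \
  --network_dim=4 \
  --network_alpha=4
```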