This model was trained for 1000 epochs on images of landscapes surrounded by paper trees.
Since the training images were purely landscapes, it takes a bit of prompt magic to get it to produce portraits and objects. Look at the example prompts on the uploaded images.
Trigger word is:
(paperai)
I really like this style and the depth the paper gives. Let me know what you think of it in the comments below.
Description
Trained SD1.5 for 1000 epochs.
FAQ
Comments (14)
Hello friend! I have a gorgeous pack of logos created with Midjourney. Can you make a model using a set of my paintings? If you are interested, write to me in Discord - JomaanGa#6268
Why can't you create a logo model? I tried out some of your models, and they are amazing.
@Dewon My GTX 980 graphics card doesn't allow me to train models myself. I created logotypes in a unique style for training a new SD 1.5 model; I just need help training it.
@JIM_POISON I have a GTX 980 4GB laptop and an RTX 3080 PC. If you teach me how to make models, I can help you out!
@Dewon Write to me in the discord Jim_Poison#6268
Have you considered taking your best non-landscape generations and feeding them back into the training to make the model more versatile? You can also get some results by delaying the token: writing [paperai:2] lets the model settle the composition of an image first and only then turn it into paper.
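(For anyone unfamiliar with the [token:N] syntax: it's AUTOMATIC1111-style prompt editing, where the token is left out of the prompt for the first N sampling steps. Here is a rough, simplified sketch of the idea in Python; it is not the actual sampler code, just an illustration of the switch.)

```python
def active_prompt(step: int, base: str, token: str, switch_step: int) -> str:
    """Simplified illustration of A1111's [token:N] prompt editing:
    the token is absent for the first `switch_step` steps, then appended,
    so the composition is laid down before the style kicks in."""
    if step < switch_step:
        return base
    return f"{base}, {token}"

# With [paperai:2], steps 0-1 see only the base prompt,
# and the paperai token is active from step 2 onward.
```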
That's a good idea, I might try that out later! Thanks for the idea!
I'm quite new to this AI thing. So I take the best generations, and just resume training on those images?
@KoningWouter There are many different forms of training. With the one I used, you would add these generations to your training data and train the model again from scratch. To avoid overtraining on their composition, these additional images could be made with ControlNet or img2img.
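(As a side note, the "add them to your training data" step is just assembling a folder of images with matching .txt captions, which is the layout most SD trainers expect. A minimal sketch, with hypothetical paths and a fallback caption of just the trigger word:)

```python
import shutil
from pathlib import Path

def add_to_training_data(selected: list[Path], captions: dict[str, str],
                         train_dir: Path) -> None:
    """Copy hand-picked generations into the training folder and write a
    caption .txt next to each image, as most SD training scripts expect."""
    train_dir.mkdir(parents=True, exist_ok=True)
    for img in selected:
        shutil.copy2(img, train_dir / img.name)
        # Fall back to the trigger word if no caption was written for this image.
        caption = captions.get(img.name, "paperai")
        (train_dir / img.name).with_suffix(".txt").write_text(caption)
```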
@A3tr Ok thank you for your reply! I will try this out.
@A3tr Thanks again, I released a new version of the model with your help.
What is the easiest way to add generated images to the training data? The prompt is embedded in the pictures, but do you have to extract it to a .txt file? The workflow for LoRA and DreamBooth may also be quite different.
@klikkeri1 I used the CLIP Interrogator to extract basic image properties, then edited the captions by hand. https://colab.research.google.com/github/pharmapsychotic/clip-interrogator/blob/main/clip_interrogator.ipynb
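(On the first part of the question: UIs like AUTOMATIC1111 store the prompt in a PNG text chunk, usually under the key "parameters", and Pillow can read it back so you can dump it to a sibling .txt caption file. A quick sketch; the key name may differ depending on the tool that generated the image:)

```python
from pathlib import Path
from PIL import Image

def extract_caption(png_path: Path) -> str:
    """Read the prompt embedded in a PNG's text chunks (A1111 uses the
    'parameters' key); returns an empty string if none is found."""
    with Image.open(png_path) as im:
        # .text exposes the PNG tEXt/iTXt chunks as a dict
        return im.text.get("parameters", "")

def dump_caption(png_path: Path) -> Path:
    """Write the embedded prompt to a .txt caption file next to the image."""
    txt = png_path.with_suffix(".txt")
    txt.write_text(extract_caption(png_path))
    return txt
```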
You could also look at MJ's paper generations; they're very good.