For Flux1 Schnell users: you do not need the LORA, just the .txt training data.
Click here if you are looking for the SDXL version (Beta3)
BASE MODEL: Johnson Diffusion Zenkai
Zenkai System Explained: https://civarchive.com/articles/199
Update 24th May '23 - Added "DesuZenkaiV21beta2-lycoris" (beta2)
Update 21st May '23 - Added "djzJohnsonDesuZenkaiV15-lycoris" (beta1)
AnimateDiff V2 video:
all new version built on the Johnson Diffusion "Zenkai" Dataset
Note:
Beta1 contained 230 style models
Beta2 contains 350 style models
Full instructions, prompt examples, caption lists, and wildcard files have been added to the Training Data section for both the CKPT and LORA versions.
The Zenkai System allows for easy interpolation of 230 of my original styles, multiplied by 5000 original prompts and 4200 generated variation prompts, which can be mixed, doubled, or tripled and used with txt2img or img2img with ease.
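To illustrate the idea only (the file names below are placeholders; the real prompt lists and wildcard files ship in the Training Data download), the style/prompt mixing could be scripted roughly like this in Python:

```python
# Rough sketch of Zenkai-style prompt mixing: pick a random base prompt and
# blend in one or more style tokens. File names are placeholders.
import random
from pathlib import Path

styles = Path("zenkai_styles.txt").read_text().splitlines()    # one style token per line
prompts = Path("zenkai_prompts.txt").read_text().splitlines()  # one base prompt per line

def zenkai_prompt(n_styles: int = 2) -> str:
    """Combine a random base prompt with n random style tokens."""
    chosen = random.sample(styles, k=n_styles)
    return ", ".join([random.choice(prompts), *chosen])

print(zenkai_prompt(2))   # feed the result to txt2img or img2img
```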
Art by Pure chance or by exact design. Your Choice.
-------------------------------------------
EVERYTHING BELOW IS THE OLD 108-MODEL LORA VERSION
-------------------------------------------
Update: 24th Feb '23 - Added "djzJohnsonDesuV21-320"
same as below, version 1, built on SD 2.1-768 with LORA 320.
Update: 23rd Feb '23 - Added "djzJohnsonDesuV15-320"
same as below, version 1, built on SD 1.5 with LORA 320.
Update: 22nd Feb '23 - Added "djzJohnsonDesuV21-256"
same as below, version 1, built on SD 2.1-768 with LORA 256.
Update: 18th Feb '23 - Added "djzJohnsonDesuV15"
same as below, version 1, built on SD1.5 with LORA 128.
This is "djzJohnsonDesuV2"
It is the result of retraining the entire djz Diffusion collection into a single LORA 128. It still contains the original 108 tokens, but you do not need to use them and can prompt normally. Around 250 GB of models are packed into a 200 MB file, and it will produce different images than the other LORAs or the original models. Be sure to scroll down and grab the links for the negative embeddings used to create the demo pics ;)
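If you are running it outside a web UI, here is a minimal sketch using the diffusers library; the base model ID, LORA strength, and file name are assumptions, so swap in whichever epoch/version you actually downloaded:

```python
# Minimal diffusers sketch: load the SD 2.1-768 base, apply the LORA, and
# prompt normally (no trigger word needed). File name is a placeholder.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1",
    torch_dtype=torch.float16,
).to("cuda")

pipe.load_lora_weights("djzJohnsonDesuV2.safetensors")

image = pipe(
    "portrait of an astronaut in a rain-soaked neon city",
    negative_prompt="blurry, low quality",
    cross_attention_kwargs={"scale": 0.8},   # LORA strength
    num_inference_steps=30,
    height=768, width=768,
).images[0]
image.save("demo.png")
```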
Version 1:
108 Concepts, 4277 images.
Epoch 1 - 42,770 Steps
Epoch 2 - 85,540 Steps
Links to recommended LORA/Negative Embeds below
Epoch 1 & Epoch 2 both gave nice and unique images, so we are releasing both versions.
LORA 128 Demo Epoch 1 vs Epoch 2 vs Base only (same prompt/seed)
Resources we use with our SD2.1-768 models:
nfixer, nartfixer, nfixernext - from Illuminati Diffusion
https://huggingface.co/IlluminatiAI/Illuminati_Diffusion_v1.0/tree/main/embeds/negative
Neg_Facelift768 by SoCalGuitarist
https://civarchive.com/models/2385/socalguitarists-magic-facelift-negative-embedding-for-model-2x-fix-yo-ugly-faces
DrD_PNTE768 by Dr.Diffusion
https://civarchive.com/models/4044/doctor-diffusions-point-e-negative-embedding
DangerGoose by Drift Johnson
https://civarchive.com/models/8107/djz-johnsons-image-helper-dangerhawk-and-dangergoose
Contrast Fix & High Key by theOvercomer8 (style LORA)
https://civarchive.com/models/8765/theovercomer8s-contrast-fix-sd21-768
https://civarchive.com/models/9205/to8s-high-key-lora-sd21-768
Resources we use with our SD1.5 models:
DangerHawk by Drift Johnson
https://civarchive.com/models/8107/djz-johnsons-image-helper-dangerhawk-and-dangergoose
Description
108 separate datasets were used to train this LORA 320 fine-tune.
We recommend using it with negative embeddings.
You do not need to use a Trigger Word.
Links in the main description!
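For diffusers users, a rough sketch of wiring the negative embeddings in as textual inversions (file names and token strings are placeholders for whatever you download from the links):

```python
# Sketch: register the negative embeddings as textual inversions so their
# tokens can be used in the negative prompt. All names are placeholders.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("djzJohnsonDesuV21-320.safetensors")

pipe.load_textual_inversion("nfixer.pt", token="nfixer")
pipe.load_textual_inversion("Neg_Facelift768.pt", token="neg_facelift768")

image = pipe(
    "cinematic portrait, dramatic lighting",   # no trigger word required
    negative_prompt="nfixer, neg_facelift768, blurry, low quality",
    num_inference_steps=30, height=768, width=768,
).images[0]
image.save("demo.png")
```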
Comments (7)
you know i had to do it to 'em, watched the whole thing!
What's the difference between Epoch 1 and 2? Epoch 2 is more overfitted?
There are 4277 images in the dataset, split into 108 separate folders; these are all of the DJZ checkpoints. The LORA training uses bucketing, which creates 10 versions of each image in different dimensions, so the full training set increases to 42,770 images. One epoch means each image has been trained on once; two epochs means each image has been trained on twice.
The owner of the datasets tested Epochs 1 through 4 to find the sweet spot prior to release. Although Epoch 1 is faithful to the original dataset, Epoch 2 was giving unique images, so we decided to release them both. Because of the linear training approach and the text encoder settings, we managed to avoid overfitting; however, at the smaller network sizes (128 and below) we found you cannot really push the strength of the network in the prompting.
As a result, we released both Epoch 1 and Epoch 2 to serve a wider range of use cases. The author of the training tools we use recommends a LORA strength of 0.6 - 1.0, although the sampler and step count will have a greater impact. If you are stacking more than two LORAs, you might need to play with the weights a little (see the sketch just below this thread). If you need more control, consider using CKPT merges, as these are very robust for those situations, with this LORA or other LORAs on top. Hope that helps :)
2.1-768 will be getting the 320 network release next in both Epoch 1 and 2.
so watch this space!
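For anyone applying that strength/stacking advice via diffusers, a rough sketch (adapter and file names are placeholders, and the multi-adapter API assumes the peft backend is installed):

```python
# Sketch: load two LORAs under named adapters and weight them individually.
# File/adapter names are placeholders; requires diffusers' peft integration.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16
).to("cuda")

pipe.load_lora_weights("djzJohnsonDesuV2.safetensors", adapter_name="desu")
pipe.load_lora_weights("another_style_lora.safetensors", adapter_name="other")

# Keep this LORA around 0.6-1.0; pull the stacked one down and adjust to taste.
pipe.set_adapters(["desu", "other"], adapter_weights=[0.8, 0.5])
```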
i like how you show all the garbage imagery with the 1.5 version of the lora. kek, nice touch 🤦♂️
no cherry picking
@axsthxticroot sure pal, whatever you say. 😏👍🏻