zImage Base - V1.0
After I fucked up the other training run for the photo stuff (I'm still not used to AI-Toolkit beyond sliders), I decided to test it with the smaller dataset from this model. Very different from Chroma; I need to change quite a few things and do more testing. But so far it's very easy to train and doesn't need as many resources... I might even be able to train the text encoder (or LLM) for the first time. No idea if that's a good idea or if it even works, but that won't stop me from trying / brute-forcing it anyway.
A bit too clean for my taste, but oh well.
V3.0
Should be enough; I can't push it any further except with more pictures, which has the potential to fuck everything up. I didn't really think it would do this realistic-illustration blend at the end, but I like it.
V2.0
Trained another model and merged both v1 and v2 into the checkpoint, then extracted the LoRA and got this... no idea why it even has such an effect, but it should be interesting to mix with other LoRAs.
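The merge-then-extract step can be sketched roughly like this. This is a hedged illustration only: a truncated SVD on the difference between the merged and base weights of one layer, with made-up names and toy shapes, not the exact tool or settings used for this model.

```python
# Illustrative sketch: recover a low-rank LoRA (down/up pair) from the
# difference between a merged fine-tuned weight and the base weight.
# Shapes, names, and the rank are assumptions for the example.
import numpy as np

def extract_lora(w_base: np.ndarray, w_merged: np.ndarray, rank: int):
    """Return (down, up) so that up @ down approximates w_merged - w_base."""
    delta = w_merged - w_base                    # what training actually changed
    u, s, vt = np.linalg.svd(delta, full_matrices=False)
    up = u[:, :rank] * s[:rank]                  # (out_dim, rank)
    down = vt[:rank]                             # (rank, in_dim)
    return down, up

rng = np.random.default_rng(0)
base = rng.standard_normal((64, 64))
# pretend fine-tuning added a genuinely rank-4 update to this layer
merged = base + rng.standard_normal((64, 4)) @ rng.standard_normal((4, 64))

down, up = extract_lora(base, merged, rank=4)
err = np.linalg.norm((base + up @ down) - merged)
print(f"reconstruction error: {err:.2e}")
```

For a real checkpoint this runs per layer, and the chosen rank trades file size against how faithfully the extracted LoRA reproduces the merge.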
Got bored while training and preparing more demanding models. I went through all my training images, trying to match pictures with the same theme across realistic, illustration, anime, and general colorful high-resolution styles.
Went to work and forgot to switch to another model that takes longer to train, so this one is pretty overtrained now. I've set it so it overrides the previous LoRA.
In some cases I just used the last part of the prompts, or interchanged them with each other, to let Chroma take the wheel, fill the gaps, and create a bit of randomness.
Pretty sure I'll find some use for it here and there.
Comments (6)
yeeeee
Hey, what happened to AI-Toolkit? It's more or less the same and you can customize it. Were you using Musubi?
Not Musubi, some Chinese fork of kohya with a custom GUI. It was the only GUI that had Chroma support, even though it wasn't visible in the GUI; I just had to add things to the JSON to make it work. Later the owner of the fork added Chroma after I helped someone in the comment section.
I didn't know that you have to set the repeats manually for each dataset, unlike with kohya where you add it to the folder name. Also, what I haven't figured out yet is that it complains about "the embedding needs to have the same size as the latent cache" or whatever when I set the batch size to more than 1. I haven't found a list of custom strings, or what is possible, for the JSON file either... and I miss Prodigy; Adafactor is more for big datasets and fine-tuning. Just a minor thing though, Adam8bit works too.
@TijuanaSlumlord Musubi is kohya in a nutshell, just with more capabilities. For the repeats, you can set them in the folder prefix as XX_name. You should try Musubi; it works great even for video. Anyway, great to see you on the ZI loop.
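The "XX_name" folder convention mentioned here is the kohya-style way of encoding dataset repeats: a folder named like "10_mystyle" means that dataset is repeated 10 times per epoch. A minimal sketch of how such a prefix can be parsed (the parsing code itself is illustrative, not taken from any trainer):

```python
# Hedged sketch: parse a kohya-style "repeats_name" dataset folder name.
# A folder "10_mystyle" -> 10 repeats of the concept "mystyle".
import re

def parse_repeats(folder_name: str) -> tuple[int, str]:
    """Split 'XX_name' into (repeats, concept name); default to 1 repeat."""
    m = re.match(r"^(\d+)_(.+)$", folder_name)
    if m:
        return int(m.group(1)), m.group(2)
    return 1, folder_name  # no numeric prefix: treat as a single pass

print(parse_repeats("10_mystyle"))   # (10, 'mystyle')
print(parse_repeats("portraits"))    # (1, 'portraits')
```

Trainers that drop this convention (like the JSON-configured fork discussed above) instead expect the repeat count as an explicit per-dataset setting.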
@LDWorksDavid I'll look into it, as much as time allows. I might have found something that works, but I'm not sure yet. This thing is much more "base" than Chroma: mistakes in training, dataset, and captioning show up immediately, and it doesn't compensate as much as Chroma / Flux Schnell... I really like it, a lot of room for experiments.
For ZIT, strength should be about 3.