A finetune aligning NoobAI v-pred 1.0 to Anzhc's EQ-VAE-B7, for basic usage and for further training with LoRAs or finetunes. Generations are less noisy thanks to cleaner latents at training time. The usual v-pred shenanigans apply, but the model should be more resilient to colors exploding, and colors should be a bit cleaner overall compared to base v-pred. Does this behave like a slight aesthetic tuning? Yes, due to the limited data. Get me a 5090 and I can do bigger finetunes instead of choking my 4090; this was trained for 370k steps on 1x4090 and I slammed my head hard against the dataset and compute wall. Go donate to Anzhc for his efforts training the VAE.
If you like it and see some future for it, donate to me on Ko-fi, as this was trained at my own expense and took several days of attempts on a single 4090.
Generation settings? Default v-pred; just use the bundled VAE, or B7 from Anzhc's repo. Yes, he is banned; yes, I'll claim the NoobAI1.1EQ he did and his VAEs here on Civitai, as he has given me permission to claim those.
Does this still leak random styles with some tokens due to Natural Language? Yes it fucking does, fuck Natural Language with CLIP L and G.
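(Not the exact workflow behind the previews, just a rough sketch of those generation settings as a diffusers call; file names, prompt, and sampler values below are placeholders.)

```python
import torch
from diffusers import EulerDiscreteScheduler, StableDiffusionXLPipeline

# Placeholder filename; the EQ-VAE is already baked into the checkpoint.
pipe = StableDiffusionXLPipeline.from_single_file(
    "noobai-vpred-1.0-eq-vae.safetensors",
    torch_dtype=torch.float16,
).to("cuda")

# Optional: swap in Anzhc's EQ-VAE-B7 instead of the baked-in copy.
# from diffusers import AutoencoderKL
# pipe.vae = AutoencoderKL.from_single_file("eq-vae-b7.safetensors", torch_dtype=torch.float16).to("cuda")

# "Default v-pred": the scheduler must predict velocity; zero-terminal-SNR
# rescaling is the usual companion setting for v-prediction checkpoints.
pipe.scheduler = EulerDiscreteScheduler.from_config(
    pipe.scheduler.config,
    prediction_type="v_prediction",
    rescale_betas_zero_snr=True,
)

image = pipe(
    "1girl, solo, looking at viewer, outdoors",
    negative_prompt="worst aesthetic, old, early, 3d",
    num_inference_steps=28,
    guidance_scale=5.0,
).images[0]
image.save("test.png")
```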
Some shitty comparisons
Raw gen

Levels changed to expose noise in the background, which remains very consistent.
Same image in base v-pred

Levels changed to expose noise in the background, which remains very bad.
Comments
biggus
Great Work!ミ(・・)ミ
Thank you!
Works without CFG rescale and other workarounds, while base noob vpred obviously doesn't. Backgrounds are better too; they were especially bad in the vpred version of noobai. Is this the result of the new VAE or aesthetic tuning?
The usage without CFG Rescale is a combination of the VAE and timestep sampling; the backgrounds themselves are a result of the dataset used.
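(For context, "CFG rescale" is the guidance-rescaling trick from the paper "Common Diffusion Noise Schedules and Sample Steps are Flawed" that v-pred models usually lean on to keep high CFG from blowing out contrast. A minimal sketch of what it does, roughly as implemented via a guidance-rescale factor in diffusers-style code:)

```python
import torch

def rescale_noise_cfg(noise_cfg: torch.Tensor, noise_pred_text: torch.Tensor,
                      guidance_rescale: float = 0.7) -> torch.Tensor:
    """Pull the guided prediction's per-image std back toward the
    text-conditioned prediction's std, then blend the two."""
    dims = list(range(1, noise_pred_text.ndim))
    std_text = noise_pred_text.std(dim=dims, keepdim=True)
    std_cfg = noise_cfg.std(dim=dims, keepdim=True)
    rescaled = noise_cfg * (std_text / std_cfg)
    return guidance_rescale * rescaled + (1.0 - guidance_rescale) * noise_cfg
```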
@bluvoll Thanks for the reply! Now I am looking forward to the eps version. I like it the most for the seed variety. Hoping your finetune will keep this feature.
@somedoby wdym by seed variety? Is the math of vpred more stubborn? I didn't know that.
@deitychaser I mean how much image composition is changed with different seeds. I don't know if it vpred thing in general but with noob model eps version definitely has better variety than vpred.
@somedoby Yeah, after testing around a bit with vpred (usually I prefer eps too for convenience), I came to the conclusion that with vpred it is better to work with low steps and then add additional steps with a refiner model or hires fix. If you blow out 25+ steps with your main model you basically end up in a very narrow field of results during denoising, so stopping the process after, say, 10 steps gives more room for variety when the refiner or upscaler picks up where it left off.
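(A rough sketch of that two-pass idea with diffusers, assuming a `pipe` set up as in the snippet near the top; step counts, strength, and prompt are only illustrative.)

```python
# Assumes `pipe` is the v-pred StableDiffusionXLPipeline from the earlier snippet.
from diffusers import AutoPipelineForImage2Image

prompt = "1girl, solo, night, city lights"

# First pass: stop early so the composition stays loose.
base = pipe(prompt, num_inference_steps=10, guidance_scale=5.0).images[0]

# Second pass: upscale, then let img2img finish the detail;
# `strength` plays the same role as hires-fix denoise.
img2img = AutoPipelineForImage2Image.from_pipe(pipe)
upscaled = base.resize((base.width * 3 // 2, base.height * 3 // 2))
final = img2img(
    prompt,
    image=upscaled,
    strength=0.4,
    num_inference_steps=28,
    guidance_scale=5.0,
).images[0]
final.save("refined.png")
```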
Any news about eps v1.1 finetune? Can't wait to try it out.
Thanks for sharing
Really nice, i wish you luck with further training.
For some reason, my results come out totally different and of way worse quality on this model using the same settings as the preview images. With other models everything looks fine. What could be the problem? Tried different reforge forks.
I have the same problem.
eagerly waiting for the eps release. Amazing stuff dude! Also sorry for the stupid question but when you write "use the bundled VAE" do you mean the VAE is already baked into the model?
@deitychaser Yes, it's baked!
@bluvoll thanks mate
this has a lot of potential, way better than other v pred models i've tried
Will this be updated to B7?
Trying this model for the first time and idk what I'm doing wrong, but even using the settings of the preview images I get really bad results: lots of deformed characters, and the style with some artists looks weird. Happens both with comfy and reforge.
@GoonetteAI_ this model is experimental so it has issues; it's more of a showcase of EQ-VAE working and giving better generations, but yeah, it's rough.
I've checked the resolution of the preview images: they are upscaled, so keep that in mind. Perhaps the issue is that you didn't upscale?
I want to know how a LoRA trained on this checkpoint will perform on other checkpoints. Should I use the VAE that comes with the other checkpoint, or use EQ-VAE?
@MTT0731 BADLY, this is experimental and will be superseded by a proper version.
@bluvoll and how about LoRAs that have been trained on other vpred checkpoints with the normal VAE? Will they work on this checkpoint?
@deitychaser Loras trained on v-pred 1.0 will work no issues.
I've tested this, take a look at a lora I trained on this model vs when used on noob-ai vpred https://civitai.com/models/535401?modelVersionId=2186956
@SquidPuffer Was training also faster? From what I understand, the main benefit of EQ-VAE during training is that, because there is less noise, it can pick up image data more efficiently, with a drastic reduction in loss, and thus you'd need fewer repetitions.
@deitychaser Not much difference in my testing on speed, maybe 5-ish %? But I can't exactly recall the numbers.
Hi, I still need to try this. I mainly downloaded it because on base Noob V-Pred the checkpoint sometimes goes crazy and gives me weird output: sometimes the poses are all weird and exaggerated, and sometimes the styles are different. This seems to happen only with some prompts, and only if they are copied (I tried deleting everything and rewriting them with the same seed and settings, and the result was different). Do you know why this happens, and do you think I can fix it with your model?
read the description
"Does this still leak random styles with some tokens due to Natural Language? yes it fucking does, fuck Natural Language with Clip L and G"
this is REALLY good
I uploaded an image with some basic settings: euler a, 28 steps, cfg 5.
Most eps models work well with dpm++2m karras, but not the v pred ones it seems.
You actually DON'T want to add highres or absurdres to the prompt, or lowres to the negative, and putting "worst aesthetic, old, early" in the negative helps here, unlike with the eps models.
Putting 3d in the negative is highly recommended unless you want that look, 3d tends to insert itself into everything if you don't tell it to go away.
I always recommend using the spo lora with everything.
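(Those recommendations as a rough diffusers sketch, assuming the `pipe` from the snippet near the top; the SPO LoRA filename and the prompt are placeholders, not the commenter's exact setup.)

```python
# Assumes `pipe` is the v-pred pipeline from the earlier snippet.
from diffusers import EulerAncestralDiscreteScheduler

# "euler a", 28 steps, CFG 5, as in the uploaded example image.
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(
    pipe.scheduler.config,
    prediction_type="v_prediction",
    rescale_betas_zero_snr=True,
)

# Placeholder filename for whichever SPO-SDXL LoRA you use.
pipe.load_lora_weights(".", weight_name="spo-sdxl-lora.safetensors")

image = pipe(
    # no highres/absurdres quality tags in the positive prompt
    "1girl, solo, by some_artist, night, city lights",
    # "worst aesthetic, old, early" helps; "3d" keeps that look out; no "lowres"
    negative_prompt="worst aesthetic, old, early, 3d",
    num_inference_steps=28,
    guidance_scale=5.0,
).images[0]
```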
So I take it from that that you did use the v-pred calculation but no CFG rescaling? Looking at the metadata of your test image, you used a rather high denoising for upscaling (0.6). Was this an important factor in the quality?
@deitychaser I've always used 0.6 as my default for upscaling with every model since the original SD 1.4; I find that anything lower than 0.5 isn't transformative enough to be worth the compute time. I don't even know what CFG rescaling is.
I did try some of the first vpred models when they came out and got nothing but garbage out of them, but this works perfectly.
@spunkymcgoo I see; I feel 0.6 makes you lose a lot of background detail and highlights with this model. It can be good in some scenes, but in others it looks less appealing in direct comparison to a lower-denoise upscale like 0.3.