If you want to use more of my checkpoints for online generation, please visit:
https://tensor.art/u/762555264535746522
V-pred-04
Data balancing and adjustments have been made, and some background logic has been optimized.
Recommended settings:
Steps: 30
CFG scale: 5-7
Sampler: Euler a
Positive Prompt
Positive Prompt:
masterpiece, best quality, newest, absurdres, highres, very awa
Negative Prompt:
low quality, worst quality, normal quality, text, jpeg artifacts, bad anatomy, old, early, copyright name, watermark, artist name, signature
V-pred-02
Improved limb stability, with the overall color leaning towards a warmer tone.
v-prediction versions are experimental models.
You need to use a WebUI that supports v-prediction:
ComfyUI
reForge
Forge
AUTOMATIC1111 (dev branch)
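For readers unfamiliar with the term, here is a minimal numpy sketch of what "v-prediction" means and why the WebUI has to support it explicitly. The function names and the cosine-schedule values are illustrative assumptions, not taken from any particular implementation: an eps model predicts the added noise, while a v-pred model predicts a mix of noise and image, so a sampler that assumes eps decodes the output incorrectly.

```python
# Illustrative sketch (not this model's code): the v-prediction target.
# eps models predict the noise; v-pred models predict
# v = alpha_t * eps - sigma_t * x0, which the sampler must decode differently.
import numpy as np

def v_from_eps(x0, eps, alpha_t, sigma_t):
    """v-prediction target for clean sample x0 and noise eps at timestep t."""
    return alpha_t * eps - sigma_t * x0

def x0_from_v(x_t, v, alpha_t, sigma_t):
    """Recover the denoised sample from v (assumes alpha^2 + sigma^2 = 1)."""
    return alpha_t * x_t - sigma_t * v

# Round trip: noise a sample, form v, and recover x0 exactly.
rng = np.random.default_rng(0)
x0 = rng.standard_normal(4)
eps = rng.standard_normal(4)
alpha_t, sigma_t = np.cos(0.3), np.sin(0.3)   # alpha^2 + sigma^2 = 1
x_t = alpha_t * x0 + sigma_t * eps
v = v_from_eps(x0, eps, alpha_t, sigma_t)
assert np.allclose(x0_from_v(x_t, v, alpha_t, sigma_t), x0)
```

A UI without v-pred support feeds v into the eps decoding path, which is why unsupported frontends produce washed-out or broken images with these checkpoints.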
Use Hires. fix (ADetailer is rarely needed).
For style and character tags, refer to NOOB.
All example images are generated at 1024x1360, with Hires upscale: 1.5, Hires steps: 20, Hires upscaler: R-ESRGAN 4x+ Anime6B, Denoising strength: 0.5.
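As a quick sanity check on those settings, a tiny sketch of the final output size Hires. fix produces (values taken from the line above; the variable names are mine):

```python
# Hires. fix renders at the base resolution, then upscales by the factor.
base_w, base_h = 1024, 1360      # base generation size from the settings above
upscale = 1.5                    # Hires upscale factor
final_w, final_h = int(base_w * upscale), int(base_h * upscale)
print(f"{final_w}x{final_h}")    # 1536x2040
```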
Recommended settings:
Steps: 30
CFG scale: 5-7
Sampler: Euler a
Positive Prompt
Positive Prompt:
masterpiece, best quality, newest, absurdres, highres, very awa
Negative Prompt:
low quality, worst quality, normal quality, text, jpeg artifacts, bad anatomy, old, early, copyright name, watermark, artist name, signature
V-pred-01
BASE:NOOB V-pred 1.0
This is a test model, attempting to use some unconventional methods to achieve merging.
The merge ingredients are not purely v-prediction models, so there may be issues I'm unaware of. If you find any, please leave me a message.
v-prediction versions are experimental models.
You need to use a WebUI that supports v-prediction:
ComfyUI
reForge
Forge
AUTOMATIC1111 (dev branch)
Use Hires. fix (ADetailer is rarely needed).
For style and character tags, refer to NOOB.
All example images are generated at 1024x1360, with Hires upscale: 1.5, Hires steps: 20, Hires upscaler: R-ESRGAN 4x+ Anime6B, Denoising strength: 0.5.
Recommended settings:
Steps: 30
CFG scale: 7
Sampler: Euler a
Positive Prompt
Positive Prompt:
masterpiece, best quality, newest, absurdres, highres, very awa
Negative Prompt:
low quality, worst quality, normal quality, text, jpeg artifacts, bad anatomy, old, early, copyright name, watermark, artist name, signature
V2
BASE:NOOB eps 1.1
This version tries to ensure image quality without quality keywords, to create a model that beginners can use easily.
Positive and negative quality prompts are not really necessary.
Use Hires. fix (ADetailer is rarely needed).
For style and character tags, refer to NOOB.
All example images are generated at 1024x1360, with Hires upscale: 1.5, Hires steps: 20, Hires upscaler: R-ESRGAN 4x+ Anime6B, Denoising strength: 0.5.
No positive or negative quality tags.
Recommended settings:
Steps: 30
CFG scale: 5.5
Sampler: Euler a
v1
Recommended settings:
Steps: 28-35
CFG scale: 5-7
Sampler: Euler a
Positive Prompt:
masterpiece, best quality, newest
Negative Prompt:
low quality, worst quality, normal quality, text, jpeg artifacts, bad anatomy, old, early, copyright name, watermark, artist name, signature
Comments
The v-pred 2.0 version seems to have pretty serious problems with toes (I couldn't get a single normal one, but the E-pred version can). Also, when using ADetailer it inexplicably repaints a whole new image onto the face (the E-pred version repaints the face normally). I'm using Forge UI.
I'm not sure if it's a Forge issue, but lately all the strange problem reports about my models have come from Forge.
@WAI0731 Whoa, WAI replied instantly. Too bad I don't know how to use ComfyUI, or I'd go verify it there, haha. Luckily the E-pred version is good enough.
@mikuhatsune I happened to be browsing LoRAs and saw this, so I replied.
@WAI0731 I tried ComfyUI, and it does seem to be a Forge problem. I rolled a few images in ComfyUI and the limbs came out fine (at least the toes are OK now).
A bit late, but I'd guess you picked the wrong sampler. v-pred must use Euler a; the default DPM doesn't work. Right now both Forge UI and ComfyUI work fine for me.
@ShiroNekoAlpha No, I chose Euler a back then and never changed it.
@ShiroNekoAlpha v-pred runs well with uni_pc + beta; you don't have to use Euler a.
vpred 2.0 is my favorite model now. The versatility is off the charts.
I'm loving V-pred-02 🤩
verified the model does work in SwarmUI as well :)
v-pred 2.0 is the first model to beat ntr 4.0 in my opinion, very nice looking, very stable, very responsive to lots of difficult prompting, doesn't have most of the drawbacks of other v-pred versions. Only complaint is the artist tags don't work as well, still decent but the style gets lost more easily.
I've been testing V-Pred 2.0 tonight, and so far, it's the first model to give NTR some competition. I keep trying new Illustrious/NoobAI models, but none of them impress me enough to pull me away from NTR.
I'm loving it so far and will continue testing it!
Please fix the description: A1111 supports v-pred SDXL only on the dev branch, and please add Forge UI to the list.
You're one of the GOATs of checkpoint merges, Noob has a bright future ahead of it as long as guys like you are around. Keep up the good work!
2.0 is a really good one, but the dynamic range feels lower than in the original model, closer to eps models.
https://civitai.com/posts/14122609
wai-illu-nsfw is a bit washed out here due to parameters.
Nice comparison
I have used this model and the images it generates are very impressive. I used more than one LoRA at high weight with only a little bleeding. I hope you keep this model updated; from my testing with local reForge generation, it produces better images than the current "WAI-NSFW-illustrious-SDXL".
Why do you think the NAI version is better than the ILXL?
I hope E-pred gets an update.
It works well with the Lora that I make.
Is the Civitai generator having an error or something? It can't generate any images, not even one, smh.
I'm using the V-pred-02 model. It's a very powerful model; character fidelity and color are both excellent! It's just that when some characters need ADetailer to fix the eyes, lots of dense little light dots appear inside the pupils, making the eyes look abnormal.
What are the fundamental differences from the ILXL model? Can't make up my mind.
It's based on NoobAI, which means it's better finetuned and supports e621 tags, and the v-pred version actually uses v-prediction.
When a character is lying down, the navel easily turns into two.
So basically, you can't use this on the default Stable Diffusion WebUI? No video out there clearly explains how to get this so-called dev branch working. Really a shame; guess I'll never get to try this checkpoint out.
1. (If you haven't installed WebUI) Install WebUI by following the instructions in the repository.
2. Switch to the dev branch: git switch dev
3. Pull the latest updates: git pull
4. Launch WebUI and use the model as usual.
@WAI0731 thanks! I did have webUI. However YT tutorials were outdated and github didn't seem to explain it either. Your method allowed me to access dev branch in an instant. Thanks WAI! Great model btw, just had a test run with it, works flawlessly!
Really impressive, it feels like a good compromise between WAI-NSFW and Obsession (NoobAI models in general)
I see v-pred stuff still has massive LoRA compatibility issues. Generations look amazing unless you want to apply LoRAs.
Yeah, anatomy and hands are hit or miss when using LoRAs.
I hope this will get more updates in the future!
Have been running some comparisons between v-pred 2.0 and WAI-NSFW-v.12 and I must say, v-pred feels like a consistent upgrade. Level of detail, sharpness and background consistency are miles better. v-pred can sometimes overbrighten the image or add too much constrast, but it doesn't happen all the time. I am really looking forward to updates of this model.
Until now I've still been using E-pred v2 for generation, because the v-pred model is somehow not compatible with the LoRAs I'm using, and CFG 7 causes too many JPEG artifacts. I'm hoping the v-pred model gets fixed for LoRA compatibility.
It is not recommended to exceed CFG 4 with v-pred. If a high-CFG effect is required, use it together with the Dynamic CFG or Rescale CFG plugin; that can greatly reduce the quality problems caused by high CFG.
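For the curious, a minimal numpy sketch of the rescale idea behind such plugins: after classifier-free guidance is applied, the guided prediction's standard deviation is pulled back toward the conditional prediction's, which counteracts the over-contrast and overexposure that high CFG causes. The function name and the phi value are my illustrative assumptions, not the plugin's actual API.

```python
import numpy as np

def cfg_with_rescale(cond, uncond, guidance_scale, phi=0.7):
    """Classifier-free guidance followed by std rescaling;
    phi blends the rescaled result with the plain CFG output."""
    x_cfg = uncond + guidance_scale * (cond - uncond)
    # Match the guided output's spread back to the cond prediction's.
    x_rescaled = x_cfg * (cond.std() / x_cfg.std())
    return phi * x_rescaled + (1.0 - phi) * x_cfg

rng = np.random.default_rng(1)
cond = rng.standard_normal(1000)
uncond = rng.standard_normal(1000)
out = cfg_with_rescale(cond, uncond, guidance_scale=7.0)
# The rescaled output's std stays far below plain CFG's,
# which would otherwise blow up roughly with the guidance scale.
assert out.std() < (uncond + 7.0 * (cond - uncond)).std()
```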
There is likely no room for improvement in the v-pred model's compatibility with LoRAs not trained on v-pred. The v-pred LoRAs I trained myself work well on both eps and v-pred models (a LoRA trained on v-pred even performs better on eps than one trained on eps), but a LoRA trained on eps is a disaster on a v-pred model.
@NTR_BLACK The base NoobAI model is not visible in the Civitai LoRA custom model trainer. Any recommended custom model to train on?
good
Why can I still not generate images properly with the 秋叶 sd-webui-forge all-in-one package?
Is your version already the latest?
I'd still recommend ComfyUI; Forge has a lot of strange problems with v-pred and needs extra settings too.
@NTR_BLACK I refreshed and updated in the launcher, and also updated with git pull as described in the NoobAI user manual, but the generated images have many errors.
@D00M5799 Try lowering CFG below 4.
@NTR_BLACK Tried both 4 and 3; still doesn't work.
Use reForge; it generates images normally.
@zonde306 Is there a way to get reForge into the 秋叶 package? I installed it locally from git, but I don't know how to use the clean version, and I couldn't find any reForge tutorials on Bilibili either.
Is your sampler set to Euler a? The default DPM can't generate images with v-pred models.
@ShiroNekoAlpha It's Euler a; I've always generated with Euler a and Karras.
@D00M5799 That's odd. I'm using 秋叶's Forge package right now, and WAI's v-pred works fine. Try other models: if all eps models work but no v-pred model does, I suspect you downloaded the wrong version… The Forge version of the 秋叶 launcher shows "SD-WebUI-FORGE version: xxxxxxxxxxxx" at the bottom.
@ShiroNekoAlpha I'm quite sure I didn't download the wrong version; it's Forge. But every time I launch the WebUI I have to refresh the page with F5 before it works normally. Eps models all work fine, but v-pred models produce images with many errors, especially in the colors.
Newbie here: why can't I use it anymore? It turned into a bidding button.
Because Civitai's servers have limited bandwidth and can't support generation with that many checkpoints for everyone, only 200 checkpoints are enabled for on-site generation each week. Popular models are automatically counted into those 200, but WAI's v-pred model isn't as popular as the eps version, so it can only enter the weekly candidates by bidding in the auction house.
@ShiroNekoAlpha So that's why 😭 This one is really beginner-friendly; you can use it without much skill. What a pity.
@ShiroNekoAlpha I see, thanks for the explanation.
I love this model, but I've been having a problem where a lot of my images come out overexposed. Is there a way to fix this?
Are you using the correct sampler? For v-pred models, the sampler must be Euler a or Euler; the default DPM doesn't work.
Yes, besides using Euler A as a sampler, here is what I found to be effective:
At the very end of your positive prompt put this: "soft lighting, dim lighting" or (stronger) "dark, dim lighting". This fixes the overexposure and contrast issues with most artist tags for me.
@ShiroNekoAlpha Yup, I've been using Euler A.
@Noise_Connoisseur I'll give it a shot, thanks!
Maybe try the CFG rescale feature with a weight of 0.3-0.7. Overexposure is a common problem with v-pred models. Note that if you use it, don't use Dynamic CFG at the same time.
@Noise_Connoisseur Thank you so much for your advice — your lighting method really helped me as well.
That said, I still don't quite understand why oversaturation happens. Most prompts work fine, but for some reason only a few end up oversaturated.
At first, I thought specific tags might be the cause (similar to how, in PonyXL, the sweat tag would somehow break the whole image), but after running some tests, that theory didn't hold up…
Have you tried using RescaleCFG?
Will there be a new E-pred version, dalao? ComfyUI has this newbie completely overwhelmed.
I saw that the other two models got a V14 update. Is this NoobAI v-pred branch also planning an update soon? :D
I really like this model. Will it still be updated?
I like that one version of this model has been elected for image generation on this site, but I must leave a reminder: @WAI0731 created a NoobAI-based model with the explicit intention that it be used in Civitai's image generator while spending little Buzz in the process. That model is WAI-CheapFast-ANI-NoobAI.
Top Notch
What is special about this model compared to WAI-NSFW-illustrious? If someone knows, please explain; that would be helpful.
This is based on NoobAI, while that one is based on Illustrious 1.0. NoobAI was going to be based on Pony when Illustrious 0.1/0.2 was "unofficially" made public and open-source by the dev. Illustrious 1.0 is a later iteration of the same 0.1 variant Noob is based on.
If you're asking because you're not seeing much of a difference: Noob and Illustrious share a fairly recent common ancestor, as it were. But they are different models and are diverging with each generation, as can be seen in WAI himself declining to "upgrade" to the latest open-weights Illustrious 2.0 for his checkpoint, while this NoobAI model is based on the latest Noob checkpoints.
NO
To give you an actual response in practical terms, since this is based on NoobAI:
1. Larger, more recent dataset: Noob incorporated a more recent Danbooru dataset as well as e621.
2. Vibrancy: because of the way v-prediction models create the image, they can produce deeper blacks and more vibrant colors overall.
Hi, are you planning on updating this model at some point? Asking because I think this is the best model, and it's not particularly close.
Leagues above WAI-NSFW-Illustrious
Was curious to know if this will get another update?
What does bidding mean?
It's like voting with Buzz for models to be included in the on-site image generator.
Looking forward to updates QAQ
wai-shuffle-noob v-pred-02 vs. WAI-NSFW-illustrious-SDXL-v14: which do you prefer more?
WAI-NSFW-illustrious-SDX
Because I always run into problems when using Noob, while Illustrious is more stable.
very good
I saw that you updated Wai-Illustrious today. Do you plan to update this version as well?
I'm having doubts this version will ever see an update sadly, which is a real bummer because it is amazing.
@sansmia well. . . surprise i guess
@monicalucci Lol.... Amazing timing.
I don't understand one thing: what is the difference between this shuffle-noob and Branch Rouwei? I'm new to NoobAI and still don't know what this Rouwei branch thing is.
If someone can kindly explain...
Different model ancestry. Both models have a very detailed anime style, but this model here is based off NoobAIXL while BranchRouwei is based off Rouwei.
This model is better than the Nova Furry models so I hope you can update it someday.