The design_pixar base model generates Pixar, cartoon, semi-realistic, realistic, and photographic style images. It is fine-tunable and draws on a large amount of data from other popular models well known to the community.
The VAE is already included in the model!
I recommend using the ADetailer extension.
Use this extension to fix hand errors:
https://github.com/licyk/advanced_euler_sampler_extension
It is strictly prohibited to distribute this model, on or off Civitai, or to post it anywhere, including hosting services and third-party image-generation services. Personal use is allowed. If you know who distributed it without my consent, report it immediately.
Resolution settings:
Standard resolutions: 512x768, 768x512, 540x960, and 1024x1024.
540x540 with the Hires.fix checkbox enabled produces a 1080x1080-pixel result.
540x960 with the Hires.fix checkbox enabled produces a 1080x1920-pixel result.
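The Hires.fix results above are just a 2x upscale of the base resolution. A tiny helper (hypothetical, not part of any extension) makes the arithmetic explicit:

```python
def hires_output(width: int, height: int, upscale: float = 2.0) -> tuple[int, int]:
    """Final resolution produced by Hires.fix for a given base
    resolution and upscale factor (default 2x, as used above)."""
    return int(width * upscale), int(height * upscale)

# The presets listed above:
print(hires_output(540, 540))  # (1080, 1080)
print(hires_output(540, 960))  # (1080, 1920)
```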
Use these recommended settings for generation:
All samplers: CFG Scale 2.0 - 7.0, Clip skip 1-2.
Euler_Max, Euler_Smea_Dy, Euler a: 30-50 sampling steps.
LCM, DDPM Karras, DPM++ 2M Karras, DPM++ SDE Karras, DPM++ 2M SDE Karras, DPM++ 2M SDE Exponential, DPM++ 3M SDE, DPM++ 3M SDE Karras, DPM++ 3M SDE Exponential: 20-30 sampling steps.
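The per-sampler recommendations above can be captured in a small lookup; this is just a convenience sketch (sampler names written as they appear in the list above):

```python
# Recommended generation settings, grouped by sampler family.
# CFG scale 2.0-7.0 and clip skip 1-2 apply to every sampler;
# only the step count differs between the two groups.
EULER_SAMPLERS = {"Euler_Max", "Euler_Smea_Dy", "Euler a"}

def recommended_settings(sampler: str) -> dict:
    """Return the recommended (min, max) ranges for a given sampler."""
    steps = (30, 50) if sampler in EULER_SAMPLERS else (20, 30)
    return {"steps": steps, "cfg_scale": (2.0, 7.0), "clip_skip": (1, 2)}

print(recommended_settings("Euler a"))          # steps (30, 50)
print(recommended_settings("DPM++ 2M Karras"))  # steps (20, 30)
```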
I recommend adding these words to the positive prompt: medium eyes, detailed eyes, expressive eyes
Recommended negative prompts:
(CyberRealistic_Negative-neg), cartoon, painting, illustration, (grayscale:1.4), (worst quality:2), (low quality:2), (normal quality:2), ugly, fat, 3D rendering, asian
EasyNegative, fat, paintings, sketches, (worst quality:2), (low quality:2), (normal quality:2), lowres, ((monochrome)), ((grayscale)), bad anatomy
Feel free to use a negative prompt of your choice and customize it.
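As an illustration only, here is how the recommended settings and prompts could be wired up with the diffusers library. The checkpoint path is a placeholder, and the scheduler choice assumes "Euler a" maps to diffusers' Euler ancestral scheduler; adjust to your setup:

```python
# Sketch: applying the recommended settings with diffusers (assumed workflow,
# not the author's own). Point `checkpoint` at your local copy of the model.
POSITIVE_EXTRAS = "medium eyes, detailed eyes, expressive eyes"
NEGATIVE = ("EasyNegative, fat, paintings, sketches, (worst quality:2), "
            "(low quality:2), (normal quality:2), lowres, ((monochrome)), "
            "((grayscale)), bad anatomy")

def generate(prompt: str, checkpoint: str = "design_pixar.safetensors"):
    import torch
    from diffusers import StableDiffusionPipeline, EulerAncestralDiscreteScheduler

    pipe = StableDiffusionPipeline.from_single_file(
        checkpoint, torch_dtype=torch.float16
    )
    # "Euler a" in the WebUI corresponds to the Euler ancestral scheduler.
    pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)
    pipe = pipe.to("cuda")
    return pipe(
        prompt=f"{prompt}, {POSITIVE_EXTRAS}",
        negative_prompt=NEGATIVE,
        num_inference_steps=40,   # within the 30-50 range recommended above
        guidance_scale=5.0,       # within the 2.0-7.0 CFG range above
        clip_skip=1,              # clip skip 1-2 as recommended
        width=512, height=768,    # one of the standard resolutions
    ).images[0]
```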
Description
V2: the base design_pixar model merged with the flux-dev-fp8 model at weight 7.
This model requires the following files:
ae.safetensors, clip_l.safetensors, and t5xxl_fp8_e4m3fn.safetensors.
Put these files in the corresponding Stable Diffusion Forge models folders.
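For reference, a typical Forge layout for these support files looks like the sketch below. The folder names are taken from common Forge installs, not from this model's documentation, so check them against your own installation:

```
stable-diffusion-webui-forge/
└── models/
    ├── Stable-diffusion/        # the design_pixar V2 checkpoint goes here
    ├── VAE/
    │   └── ae.safetensors
    └── text_encoder/
        ├── clip_l.safetensors
        └── t5xxl_fp8_e4m3fn.safetensors
```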
Resolution settings:
Standard resolutions: 512x768, 768x512, 540x960, and 1024x1024.
540x540 with the Hires.fix checkbox enabled produces a 1080x1080-pixel result.
540x960 with the Hires.fix checkbox enabled produces a 1080x1920-pixel result.
Use these recommended settings for generation:
Sampling method: Euler (Simple scheduler)
Sampling steps: 20
CFG Scale: 2.0 - 7.0
It is recommended to add these words to the prompt:
medium eyes, detailed eyes, expressive eyes, realistic, disney pixar style, 3d, 3d rendering
Comments (22)
I can't get this, or its previous version, to work. I use a standard diffusion loader, including the dual CLIP loader and VAE loader as noted in your file description, but it produces a ton of errors, starting with:
Error occurred when executing UNETLoader: Error(s) in loading state_dict for Flux:
size mismatch for img_in.weight: copying a param with shape torch.Size([98304, 1]) from checkpoint, the shape in current model is torch.Size([3072, 64]).
size mismatch for time_in.in_layer.weight: copying a param with shape torch.Size([393216, 1]) from checkpoint, the shape in current model is torch.Size([3072, 256]).
size mismatch for time_in.out_layer.weight: copying a param with shape torch.Size([4718592, 1]) from checkpoint, the shape in current model is torch.Size([3072, 3072]).
size mismatch for vector_in.in_layer.weight: copying a param with shape torch.Size([1179648, 1]) from checkpoint, the shape in current model is torch.Size([3072, 768]).
I recommend that you use the latest version of Stable Diffusion Forge.
Where can I get the workflow?
Did you ever find a solution in ComfyUI?
For anyone having issues loading the v2.0 6.24GB model with the regular Diffusion Model loader node in ComfyUI, this model is an NF4 UNet model (no text encoder/vae), not an FP8 model, so you need the ComfyUI_UNet_bitsandbytes_NF4 custom node package from here:
https://github.com/DenkingOfficial/ComfyUI_UNet_bitsandbytes_NF4
The above custom node is based on this NF4 checkpoint loader linked below, which you can also install to load full NF4 checkpoints with the text encoder/vae included:
https://github.com/comfyanonymous/ComfyUI_bitsandbytes_NF4
Good luck!
Hi, thank you very much for your help. I tried to make a more compact version of this model; I only labeled it fp8 because of its size, but it is actually a UNet bnb-NF4 model.
@Dxdesignia no worries! I'm happy to contribute any way I can. Hopefully the CivitAI team will add support for designating files as NF4 safetensors and GGUF quantized models soon, so there's less confusion in the community.
And thank you too for making the model, I'm liking the style of it. I can only imagine the kind of amazing models we'll have in a year from now.
Oh, thank you! I was wondering why it wasn't working. Maybe take that into account in the version naming.
Hi. So this means it does not work with LoRAs, does it?
@SafetyAction Hello, the model works with LoRAs OK!
I prefer V1.
Can I use this for my NSFW games / comics?
Good afternoon, yes, you can use it.
Does it need a trigger word?
Hello, does it perform OK on general images, not just portrait characters?
You know, it would be so much easier to try your model if you posted a workflow to let us use it. So far I've tried installing the suggested nodes and it still doesn't work.
Can you drop a workflow for us to use, please?
I keep getting errors in SwarmUI/ComfyUI.
https://drive.google.com/file/d/1vMGmJvd-eSUmkqJrbLWUkQm6Y3FLy-4I/view?usp=sharing
It does work in WebUI Forge, though.
Good evening, it works on Stable Diffusion Forge Neo.