SDXL-Lightning is a lightning-fast text-to-image generation model. It can generate high-quality 1024px images in a few steps. For more information, please refer to our research paper: SDXL-Lightning: Progressive Adversarial Diffusion Distillation. We open-source the model as part of the research.
Our models are distilled from stabilityai/stable-diffusion-xl-base-1.0. This repository contains checkpoints for 1-step, 2-step, 4-step, and 8-step distilled models. The generation quality of our 2-step, 4-step, and 8-step models is amazing. Our 1-step model is more experimental.
Comments (14)
Doesn't work well. I don't get a decent picture with 8 steps. Yes, I use the 8-step model.
Try DPM++ SDE Karras (8 steps) with CFG 2.
Use reference ComfyUI workflows from ByteDance: https://huggingface.co/ByteDance/SDXL-Lightning/tree/main/comfyui as a starting point.
In short: sgm_uniform scheduler, Euler sampler (eta 0), CFG 1.0, and the number of steps matching the model type (1/2/4/8). It works like a charm if you stick to those settings.
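For anyone using diffusers instead of ComfyUI, here is a minimal sketch of the same settings, following the loading recipe from the ByteDance/SDXL-Lightning repo. It assumes the 4-step UNet checkpoint, a CUDA GPU, and diffusers installed; `timestep_spacing="trailing"` is the diffusers counterpart of the sgm_uniform scheduler, and `guidance_scale=0` disables CFG (the equivalent of CFG 1.0 in ComfyUI).

```python
def lightning_settings(steps: int) -> dict:
    """Recommended sampler settings for an N-step SDXL-Lightning checkpoint."""
    assert steps in (1, 2, 4, 8)  # only these distilled variants exist
    return {
        "num_inference_steps": steps,
        "guidance_scale": 0.0,           # CFG off; the models are distilled without guidance
        "timestep_spacing": "trailing",  # diffusers equivalent of ComfyUI's sgm_uniform
    }

if __name__ == "__main__":
    # Heavy part (model download + GPU inference) kept under the main guard.
    import torch
    from diffusers import StableDiffusionXLPipeline, UNet2DConditionModel, EulerDiscreteScheduler
    from huggingface_hub import hf_hub_download
    from safetensors.torch import load_file

    base = "stabilityai/stable-diffusion-xl-base-1.0"
    repo = "ByteDance/SDXL-Lightning"
    ckpt = "sdxl_lightning_4step_unet.safetensors"  # pick the variant matching your step count

    cfg = lightning_settings(4)

    # Load the distilled Lightning UNet into the SDXL base pipeline.
    unet = UNet2DConditionModel.from_config(base, subfolder="unet").to("cuda", torch.float16)
    unet.load_state_dict(load_file(hf_hub_download(repo, ckpt), device="cuda"))
    pipe = StableDiffusionXLPipeline.from_pretrained(
        base, unet=unet, torch_dtype=torch.float16, variant="fp16"
    ).to("cuda")

    # Euler sampler with trailing timestep spacing, as the repo recommends.
    pipe.scheduler = EulerDiscreteScheduler.from_config(
        pipe.scheduler.config, timestep_spacing=cfg["timestep_spacing"]
    )

    image = pipe(
        "a photo of a cat",
        num_inference_steps=cfg["num_inference_steps"],
        guidance_scale=cfg["guidance_scale"],
    ).images[0]
    image.save("output.png")
```

If you see burned or washed-out images, the usual culprit is raising `guidance_scale` above 1; keep it at 0 and change only the prompt and seed.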
I've been using the 4-step version with DPM++ SDE Karras, 4 steps (duh), and CFG 1-2. Consistently good results, although still none as good as the ones shown here. If I set the CFG any higher or use any of the newer DPM++ versions, the image is trash (not a criticism, just pointing it out in case it happens to others).
Also had decent results with Euler Auto or Euler a Auto, with the same CFG settings.
@aisorcerer1337 Yup, Euler A can work as well, but it might require increasing the number of steps and/or CFG a bit (it also depends on the input parameters and the particular workflow).
That's because this is just the 1-step model.
Eight steps are acceptable, but four steps are amazing. It’s an artistic explosion. Just look at this variety; it’s not a selection. I generated them one after another without any fixes. Some of them resemble real art rather than AI-generated art. Unfortunately, DDIM cannot handle one and two steps.
Bullshit. This comment was created by the account holder.
Can you make it run through Civitai?
Does it require clip skip 2?
After a lot of testing: this model throws an error when used with ControlNet and cannot be controlled by it. I have found no solution.
I keep getting burned images. I set the steps to 4 and the CFG to around 3, and I still get burned images.
just use no CFG
