CivArchive
    PAseer-SDXL/PONY-LCM and Turbo Accelerator - v1.0

Based on the Turbo and LCM LORAs

I made this version to accelerate image generation with any checkpoint.

    Only 24 MB

Generation time: 3 s for a high-quality 1K SDXL/PONY image and 10 s for a 4K SDXL image.

This version removes the LORA's influence on CLIP:

Now you can link the node wherever you want. No more putting it at the end of your LORA stack and hesitating over whether to link the CLIP, which made your node graphs more complex.
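As a rough illustration of why a CLIP-free LORA can sit anywhere in the stack: a LORA file that contains only UNet keys never patches the text encoder, so its load order relative to CLIP-affecting LORAs does not matter. A minimal sketch, assuming kohya-style key naming (the helper function and example key lists are hypothetical, not taken from this file):

```python
# Sketch: a LORA with no text-encoder keys never touches CLIP,
# so it can be loaded at any position in a LORA stack.
def touches_clip(state_dict_keys):
    """Return True if any key targets the text encoder (CLIP)."""
    return any(k.startswith(("lora_te", "text_encoder")) for k in state_dict_keys)

# Hypothetical key lists following the common kohya naming convention:
unet_only = ["lora_unet_down_blocks_0_attentions_0.lora_down.weight"]
with_clip = ["lora_te1_text_model_encoder_layers_0.lora_down.weight"]

print(touches_clip(unet_only))  # False: safe to place anywhere
print(touches_clip(with_clip))  # True: CLIP linking matters
```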

[Usage: 1. Use it like a normal LORA. 2. Use the LCM sampler. 3. Set CFG to 1-2. 4. Set steps to 2-8.]
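The bracketed settings above can be written down as a small sanity check. This is purely illustrative: the function name and return shape are invented for this sketch, not part of any real API.

```python
# Illustrative check encoding the recommended settings:
# LCM sampler, CFG between 1 and 2, steps between 2 and 8.
def validate_lcm_settings(sampler: str, cfg: float, steps: int) -> dict:
    if sampler.lower() != "lcm":
        raise ValueError("use the LCM sampler")
    if not 1 <= cfg <= 2:
        raise ValueError("CFG should be between 1 and 2")
    if not 2 <= steps <= 8:
        raise ValueError("steps should be between 2 and 8")
    return {"sampler": sampler, "cfg": cfg, "steps": steps}

# Typical fast-generation settings from the usage notes:
print(validate_lcm_settings("lcm", 1.5, 4))
```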

Based on the latest ADD training approach, the release of the Turbo and LCM LORAs has greatly accelerated image generation.

Building on those two newest official LORA models, I made this miniature LORA version.

It is only 24 MB in size.

Generation time: 3 seconds for a 1K SDXL or PONY image, and 10 seconds for an ultra-high-resolution 4K SDXL image.

This version specifically optimizes away the influence of the CLIP layers on the image:

Now, with my LORA, you can place this 24 MB LORA anywhere in your LORA stack. You no longer need to puzzle over how to avoid the CLIP layers, painstakingly place it last, or wire up more complex nodes. Put it anywhere you like.

[Usage: Step 1: use it like any other LORA. Step 2: choose LCM as the sampling method (if you don't see it, update your local platform). Step 3: set CFG between 1 and 2. Step 4: set the sampling steps between 2 and 8. Then click generate.]

    Description

    LCM+Turbo

    FAQ

    Comments (18)

Shio_N · Dec 1, 2023 · 3 reactions
    CivitAI

I have better results with only LCM. Also, LCM is only 2x faster if you run it without a negative prompt.

aashi · Dec 7, 2023
    CivitAI

    How do I get LCM sampler in A1111?

    Aseer
    Author
    Dec 7, 2023

Update to the newest version, at least one released after 20 Nov 2023.

EliteLensCraft · Dec 7, 2023

    Or use animatediff extension to inject it temporarily...

autrui · Dec 21, 2023 · 3 reactions
    CivitAI

    This works perfectly on my nVidia GPU, but produces only colorful noise on my AMD GPU. I might be doing something wrong on the AMD computer; I'll update if I figure it out.

real_ratibor387 · Feb 12, 2024

    Is this due to the fact that AMD cards do not support CUDA? Or due to lack of AMD support for this particular piece of technology?

autrui · Feb 13, 2024

    @real_ratibor387 I was unable to figure it out lol -- I use the Shiny one on my AMD GPU and this one on my Nvidia. From what I understand, bad luck with AMD should be expected; so much of this stuff is actually being worked on at NVidia, so AMD is just "implementing" -- and I'm sure there are a fair few secrets NVidia keeps about the directions their research is going.

Lazman · Oct 1, 2024 · 1 reaction

    @autrui Damn.. Well, I'm not sure how much of this stuff is actually being worked on AT Nvidia. I'm just as sceptical of corporate nonsense as it sounds you are, if not more, but apart from the ties to CUDA, and general advertisements boasting about their superior AI capabilities, I haven't actually seen anything linking Nvidia to SD or Flux development, or SDnext/comfyUI(the two primary webuis used by most people for AI art generation).

    BTW, have your results improved on AMD significantly within the past 9 months? I was highly considering an AMD card, due to being more natively compatible with Linux than Nvidia, but I've been on the fence about it recently, cuz I know that Nvidia is technically still superior to AMD for AI related tasks.

    Aseer
    Author
    Oct 1, 2024

@Lazman I agree with you basically, cause CUDA is the only benefit N cards have in AI generation.

DucaAI · Oct 15, 2024 · 1 reaction

@Lazman I agree, you know what you are talking about

samwyse · Oct 17, 2024 · 1 reaction

    Hi, I use an amd card and this works fine.

Lazman · Oct 23, 2024

    @Aseer Cuda and tensor cores. But yea.. And the card is more supported by software universally speaking. Sadly, I did end up going with Nvidia. The 4060ti. Given my budget, 16 gigs was my limit for Vram regardless of which card I went with, so I figured I'd have more of an edge with Nvidia.

Lazman · Oct 23, 2024

    @DucaAI Thx. I do try.

Lazman · Oct 23, 2024

    @samwyse Bit late to the party, lol. I bought my card 2-3 weeks ago now. Decided to go Nvidia, cuz I didn't have 1000$ to blow on a single card to get the 24 gig AMD card, so in being practical, it only made sense to go with nvidia for AI performance and compatibility. My friend uses an AMD card, but he has the 24 gig card, so the comparison would be off in contrast to the 16gig amd card I would have been able to afford.

    Also, he really lacks imagination and software diversity compared to me, so his perspective, unfortunately, didn't offer the full scope of knowledge for how AMD card use would be in contrast to Nvidia in terms of software and application. especially as I plan to eventually begin training my own loras and maybe even models if I deem it practical to do so, as well as a chat AI.

Shu_ShengXia · Jan 16, 2024
    CivitAI

I used LCM to accelerate SDXL 1.1 and found that the style prompt words did not work. Is there a solution?

wyxzddsjj919 · Feb 27, 2024

Yes, I tested several TURBO and LIGHT acceleration models. Only the realistic-photography style holds up; a lot of art-style prompts are broken (only a few styles work), while the original non-accelerated models are fine. If you want to draw other art styles with them, you have to rely on a LORA.

EbenezerDanglewood · Apr 16, 2025

    I think that's just the cost/tradeoff of using fast samplers. They only work on low CFG, so you're sacrificing control for speed.

    An easy workaround is to just use the fast sampler to get a decent initial image, then switch to a normal slow sampler, remove the Lora, and img2img/upscale it in whatever style you want.

Corbe · Mar 14, 2024
    CivitAI

    I always use your LCM on 1.5, but testing the SDXL version it actually takes longer to generate than without the LCM LORA. Any idea why?

    LORA
    SDXL 1.0

    Details

    Downloads
    3,006
    Platform
    CivitAI
    Platform Status
    Available
    Created
    11/30/2023
    Updated
    5/12/2026
    Deleted
    -

    Files