CivArchive

    Velvet Chroma v2.0

    Recommended settings:

    Sampler: Euler / Deis2m

    Scheduler: Beta / Beta57 / Sigmoid_offset

    Steps: 10

    CFG: 4

    Resolution: 1024x1024 / 832x1216 / 896x1152

    Clip: Flant5-xxl_Q8_0 works fine

    VAE: ae.safetensors

    Recommended Prompting:

    Trigger words: none

    Negative prompt: none (or only worst quality, low quality)

    ComfyUI workflow: On showcase pictures or here



    Velvet Chroma v1.0

    Recommended settings:
    Sampler: Euler · Steps: around 30 · CFG: 3–4
    Hires Fix: recommended
    Resolution: 896×1152 · VAE: ae.safetensors

    Other quantizations can be added on request.

    Description

    Sampler: Euler / Scheduler: Sigmoid_offset / Steps: 10 / CFG: 4


    Comments (5)

    MrFlex · Nov 18, 2025

    does this have accelerator lora built in?

    yuto303 · Nov 19, 2025

    The workflow link you posted for 2.0, does it work with the version 1.1 full fp16 model? Does that workflow help reduce VRAM usage when doing hires fix?

    DeViLDoNia (Author) · Nov 19, 2025 · 1 reaction

    Yes, it works with the 1.1 full fp16 model.

    You just need to swap the GGUF loader node for a UNet loader node and it’ll run fine.

    About VRAM: it’s going to use almost the same. The fp16 model is ~17GB by itself and that can’t really be reduced. The only thing the workflow does is clean VRAM after sampling and after the face detailer, so usage will drop a bit at those points — but with full fp16 you’ll still be around 26–28GB VRAM in my case.

    (I also have a lot of tabs and stuff open, so you might be able to get it a bit lower than my numbers.)
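    The "clean VRAM after sampling and after the face detailer" step the author describes can be sketched as a small PyTorch helper (a hedged sketch, not the workflow's actual node; `free_vram` is a hypothetical name):

```python
import gc

import torch


def free_vram():
    """Drop cached GPU allocations between pipeline stages, e.g. after
    sampling and after a face-detailer pass. No-op without CUDA."""
    gc.collect()  # release dead Python-side tensor references first
    if torch.cuda.is_available():
        torch.cuda.empty_cache()  # return cached blocks to the driver
        torch.cuda.ipc_collect()  # reclaim CUDA IPC memory, if any
```

    Note this only frees memory no longer referenced; loaded model weights stay resident, which is why the fp16 checkpoint's ~17GB footprint cannot be reduced this way.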

    yuto303 · Nov 19, 2025

    @DeViLDoNia Yes, I swapped the GGUF UNet loader for Load Diffusion Model and the GGUF CLIP loader for the normal CLIP loader, which lets me use the fp16 text encoder, then connected them to the same nodes as the original workflow. When I click Run, I get this error:

    UltralyticsDetectorProvider

    Weights only load failed. This file can still be loaded, to do so you have two options, do those steps only if you trust the source of the checkpoint. (1) In PyTorch 2.6, we changed the default value of the weights_only argument in torch.load from False to True. Re-running torch.load with weights_only set to False will likely succeed, but it can result in arbitrary code execution. Do it only if you got the file from a trusted source. (2) Alternatively, to load with weights_only=True please check the recommended steps in the following error message. WeightsUnpickler error: Unsupported global: GLOBAL ultralytics.nn.tasks.DetectionModel was not an allowed global by default. Please use torch.serialization.add_safe_globals([ultralytics.nn.tasks.DetectionModel]) or the torch.serialization.safe_globals([ultralytics.nn.tasks.DetectionModel]) context manager to allowlist this global if you trust this class/function. Check the documentation of torch.load to learn more about types accepted by default with weights_only https://pytorch.org/docs/stable/generated/torch.load.html.

    yuto303 · Nov 19, 2025 · 1 reaction

    @DeViLDoNia nvm, problem solved. I reverted to the previous version of the ComfyUI subpack and the face detailer is working now.

    Checkpoint
    Chroma

    Details

    Downloads
    103
    Platform
    CivitAI
    Platform Status
    Available
    Created
    11/18/2025
    Updated
    5/12/2026
    Deleted
    -

    Files

    velvetChroma_v20Q80.gguf

    Mirrors

    CivitAI (1 mirror)

    Available On (1 platform)

    Same model published on other platforms. May have additional downloads or version variants.