CivArchive
    Anima - Preview
    NSFW

    Update 2026-03-11:

    The official owners have uploaded Preview 2. Go use that version!


    This is not my model. Don't ask me questions. I don't know the answers.

    Go to the official Anima Huggingface repo for answers.

    You need to use qwen_3_06b_base.safetensors for text encoder, and qwen_image_vae.safetensors for VAE.

    I've only uploaded the model here as a way to tag the images I generate with it. It will most likely be taken down once the official owners upload their own version, so be warned.

    The following is the original README as of the time of publishing.


    Anima is a 2 billion parameter text-to-image model created via a collaboration between CircleStone Labs and Comfy Org. It is focused mainly on anime concepts, characters, and styles, but is also capable of generating a wide variety of other non-photorealistic content. The model is designed for making illustrations and artistic images, and will not work well for realism.

    It is trained on several million anime images and about 800k non-anime artistic images. No synthetic data was used for training. The knowledge cut-off date for the anime training data is September 2025.

    This preview version is an intermediate model checkpoint. The model is still training and the final version will improve, especially for fine details and overall aesthetics.

    Installing and running

    The model is natively supported in ComfyUI. The above image contains an embedded workflow; open it in ComfyUI or drag-and-drop it onto the canvas to load the workflow. The model files go in their respective folders inside your model directory:

    • anima-preview.safetensors goes in ComfyUI/models/diffusion_models

    • qwen_3_06b_base.safetensors goes in ComfyUI/models/text_encoders

    • qwen_image_vae.safetensors goes in ComfyUI/models/vae (this is the Qwen-Image VAE, you might already have it)
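    If you manage model files by hand, a quick sanity check of the layout can save a failed load. A minimal sketch (the mapping follows the list above; the ComfyUI root path is whatever your install uses):

```python
import os

# Expected layout, relative to the ComfyUI install root (per the README above)
MODEL_FILES = {
    "anima-preview.safetensors": "models/diffusion_models",
    "qwen_3_06b_base.safetensors": "models/text_encoders",
    "qwen_image_vae.safetensors": "models/vae",
}

def missing_files(comfy_root):
    """Return the model files not yet present under the ComfyUI root."""
    return [
        name for name, subdir in MODEL_FILES.items()
        if not os.path.isfile(os.path.join(comfy_root, subdir, name))
    ]
```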

    Generation settings

    • The preview version should be used at about 1MP resolution. E.g. 1024x1024, 896x1152, 1152x896, etc.

    • 30-50 steps, CFG 4-5.

    • A variety of samplers work. Some of my favorites:

      • er_sde: neutral style, flat colors, sharp lines. I use this as a reasonable default.

      • euler_a: Softer, thinner lines. Can sometimes tend towards a 2.5D look. CFG can be pushed a bit higher than other samplers without burning the image.

      • dpmpp_2m_sde_gpu: similar in style to er_sde but can produce more variety and be more "creative". Depending on the prompt it can get too wild sometimes.
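    To stay near the recommended ~1MP total, width and height can be derived from an aspect ratio. A small sketch; the snap-to-64 step is my assumption (a common latent-size constraint), not something the README specifies:

```python
def resolution_for(aspect, total_pixels=1024 * 1024, multiple=64):
    """Pick a (width, height) near `total_pixels` for a given aspect
    ratio (width / height), snapped to a multiple of `multiple`."""
    def snap(v):
        return max(multiple, round(v / multiple) * multiple)

    height = (total_pixels / aspect) ** 0.5
    return snap(height * aspect), snap(height)
```

    For example, an aspect ratio of 896/1152 lands on the 896x1152 bucket mentioned above.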

    Prompting

    The model is trained on Danbooru-style tags, natural language captions, and combinations of tags and captions.

    Tag order

    [quality/meta/year/safety tags] [1girl/1boy/1other etc] [character] [series] [artist] [general tags]

    Within each tag section, the tags can be in arbitrary order.
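    The ordering rule can be captured in a tiny helper (a sketch; the section parameter names are mine, and the "@" prefix for artists follows the Artist tags section of this README):

```python
def build_prompt(quality=(), subject=(), character=(), series=(),
                 artist=(), general=()):
    """Join tag sections in the README's order: quality/meta/year/safety,
    subject count (1girl/1boy/...), character, series, artist, general.
    Artist tags get the '@' prefix the model expects."""
    artists = tuple("@" + a.lstrip("@") for a in artist)
    sections = (quality, subject, character, series, artists, general)
    return ", ".join(tag for section in sections for tag in section)
```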

    Quality tags

    Human score based: masterpiece, best quality, good quality, normal quality, low quality, worst quality

    PonyV7 aesthetic model based: score_9, score_8, ..., score_1

    You can use either the human score quality tags, the aesthetic model tags, both together, or neither. All combinations work.

    Time period tags

    Specific year: year 2025, year 2024, ...

    Period: newest, recent, mid, early, old

    Meta tags

    highres, absurdres, anime screenshot, jpeg artifacts, official art, etc

    Safety tags

    safe, sensitive, nsfw, explicit

    Artist tags

    Prefix artist tags with @, e.g. "@big chungus". The @ is required; without it the artist's effect will be very weak.

    Full tag example

    year 2025, newest, normal quality, score_5, highres, safe, 1girl, oomuro sakurako, yuru yuri, @nnn yryr, smile, brown hair, hat, solo, fur-trimmed gloves, open mouth, long hair, gift box, fang, skirt, red gloves, blunt bangs, gloves, one eye closed, shirt, brown eyes, santa costume, red hat, skin fang, twitter username, white background, holding bag, fur trim, simple background, brown skirt, bag, gift bag, looking at viewer, santa hat, ;d, red shirt, box, gift, fur-trimmed headwear, holding, red capelet, holding box, capelet

    Tag dropout

    The model was trained with random tag dropout. You don't need to include every single relevant tag for the image.
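    For intuition, training-time tag dropout looks roughly like this (a sketch; the actual dropout rate used for Anima is not published, so the 0.8 keep probability is an illustrative placeholder):

```python
import random

def drop_tags(tags, keep_prob=0.8, rng=None):
    """Randomly drop tags from a training caption. Because the model saw
    incomplete tag lists during training, prompts don't need every tag."""
    rng = rng or random.Random()
    kept = [t for t in tags if rng.random() < keep_prob]
    return kept if kept else list(tags[:1])  # keep at least one tag
```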

    Dataset tags

    To improve style and content diversity, the model was additionally trained on two non-anime datasets: LAION-POP (specifically the ye-pop version) and DeviantArt. Both were filtered to exclude photos. Because these datasets are qualitatively different from anime datasets, captions from them have been labeled with a "dataset tag". The dataset tag goes at the very beginning of the prompt, followed by a newline. Optionally, the second line can contain either the image alt-text (ye-pop) or the title of the work (DeviantArt). Examples:

    ye-pop

    For Sale: Others by Arun Prem

    Abstract, oil painting of three faceless, blue-skinned figures. Left: white, draped figure; center: yellow-shirted, dark-haired figure; right: red-veiled, dark-haired figure carrying another. Bold, textured colors, minimalist style.

    deviantart

    Flame

    Digital painting of a fiery dragon with glowing yellow eyes, black horns, and a long, sinuous tail, perched on a glowing, molten rock formation. The background is a gradient of dark purple to orange.
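    Putting the dataset-tag format together (a sketch; the function name is mine):

```python
def dataset_prompt(dataset, caption, title=None):
    """Format a non-anime dataset prompt: the dataset tag ('ye-pop' or
    'deviantart') on the first line, an optional title/alt-text on the
    second line, then the caption."""
    lines = [dataset]
    if title:
        lines.append(title)
    lines.append(caption)
    return "\n".join(lines)
```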

    Natural language prompting tips

    • If using pure natural language, more descriptive is better. Aim for at least 2 sentences. Extremely short prompts can give unexpected results (this will be better in the final version).

    • You can mix tags and natural language in arbitrary order.

    • You can put quality / artist tags at the beginning of a natural language prompt.

      • "masterpiece, best quality, @big chungus. An anime girl with medium-length blonde hair is..."

    • Name a character, then describe their basic appearance.

      • "Digital artwork of Fern from Sousou no Frieren, with long purple hair and purple eyes, wearing a black coat over a white dress with puffy sleeves..."

      • This is extra important when prompting for multiple characters. If you just list off character names with no description of appearance, the model can get confused.

    Model comparison

    You may be interested in comparing Anima's outputs with other models. A ComfyUI workflow, anima_comparison.json, is provided. This workflow generates a grid of images where each model is a column and the rows are different seeds. It can be configured to compare any number of models you select by changing a few output nodes. Supported model architectures: Anima, SDXL, Lumina, Chroma, Newbie-Image. The default configuration compares Anima, NetaYume, and Newbie-Image.

    Limitations

    • The model doesn't do realism well. This is intended. It is an anime / illustration / art focused model.

    • The model may generate undesired content, especially if the prompt is short or lacking details.

      • Avoid this by using the appropriate safety tags in the positive and negative prompts, and by writing sufficiently detailed prompts.

    • The model isn't great at text rendering. It can generally do single words and sometimes short phrases, but lengthy text rendering won't work well.

    • The preview model isn't that good at higher resolutions yet.

      • It is a medium-resolution intermediate checkpoint, trained on a small amount of high-res images.

      • The final version will include a dedicated high-res training phase. Details and overall image composition will improve.

    • The preview model is a true base model. It hasn't been aesthetic tuned on a curated dataset. The default style is very plain and neutral, which is especially apparent if you don't use artist or quality tags.

    License

    This model is licensed under the CircleStone Labs Non-Commercial License. The model and derivatives are only usable for non-commercial purposes. Additionally, this model constitutes a "Derivative Model" of Cosmos-Predict2-2B-Text2Image, and therefore is subject to the NVIDIA Open Model License Agreement insofar as it applies to Derivative Models.

    The details of the commercial licensing process are still being worked out. For now, you can express your interest in acquiring a commercial license by emailing [email protected]

    Built on NVIDIA Cosmos.

    Description

    Version uploaded on 2026-01-28

    FAQ

    Comments (158)

    alter2611Feb 2, 2026· 4 reactions
    CivitAI

    sha256 mismatch. (fixed)

    original: 41fa7b78613dfe0d888b3647f70c7fb8cdfda2ff177e78d2e16e06dc810d9dcc
    this: 719452166cce4283392a7dc9b6db0560c03adeaf6cc323acde8e3d47809c2134
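    For reference, checking a download against a published digest can be done like this (a generic sketch, not specific to CivitAI; the file is streamed so a multi-GB checkpoint doesn't need to fit in memory):

```python
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    """Stream a file through sha256 and return the hex digest, suitable
    for comparing against a published checksum."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()
```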

    clueless_engineer
    Author
    Feb 2, 2026· 1 reaction

    I downloaded it and uploaded it as-is, with no change. I'm unsure what else I should do.

    alter2611Feb 2, 2026· 1 reaction

    @clueless_engineer weird.
    FYI I've tested some gens: same prompt, "different" models (this vs original); results came out identical, at least in my case
    Dunno, maybe CivitAI/WebUI strip/add some extra metadata or smth

    clueless_engineer
    Author
    Feb 2, 2026· 1 reaction

    @alter2611 Re-uploaded, made sure to not touch the file. Hopefully this fixes the issue.

    alter2611Feb 2, 2026

    @clueless_engineer well done! now they are the same :)

    alter2611Feb 2, 2026

    @clueless_engineer I also believe the model is bf16, not fp16. Though I don't know whether the badge info matters at all, since apps and libs rely on the actual headers.

    clueless_engineer
    Author
    Feb 2, 2026· 2 reactions

    @alter2611 It sure is bf16, thanks for the heads-up!

    nicccFeb 2, 2026· 2 reactions

    clip does not work, ton of errors

    PawelSFeb 3, 2026· 3 reactions
    CivitAI

    For me it works best with the lcm sampler.

    killerjaybird305Feb 3, 2026· 1 reaction

    The DDIM Uniform scheduler with LCM or ER-SDE is surprisingly stable

    monakoisFeb 3, 2026
    CivitAI

    Can someone please explain why I'm getting a mismatch with the suggested text encoder? Despite using Qwen_3_06B_base, I still get "Error(s) in loading state_dict for Qwen3_4B: size mismatch for model.embed_tokens.weight: copying a param with shape torch.Size([151936, 1024]) from checkpoint, the shape in the current model is torch.Size([151936, 2560])."

    I've tried both anima_preview and animaFP8_preview, as well as changing the various load clip types (Stable Diffusion, Cosmos, Qwen Image). Thanks in advance for any response.

    clueless_engineer
    Author
    Feb 3, 2026

    You can just drag & drop the sample image in the description; it has everything you need. You'll probably need to change the model name, but that's it.

    monakoisFeb 3, 2026

    @clueless_engineer Thanks for the reply.

    I already did that, but it kept giving me that message.

    I updated comfyui, and the message disappeared (but now it prevents me from using fp8). I didn't think an update would be necessary to make it work, since practically nothing has changed in the clip node I was using before (they just added ovis as the architecture).

    Now I just need a compatible TAESD VAE (when they make one) to reduce the maximum VRAM peak.

    "using Qwen_3_06B_base", "for Qwen3_4B". They are different models. The correct model is Qwen 0.6B; place it in "models/text_encoders". Here is the link: https://huggingface.co/circlestone-labs/Anima/tree/main/split_files/text_encoders

    monakoisFeb 3, 2026· 1 reaction

    @Monfor_Salentaiel Thanks.

    I used Qwen_3_06B_base, but that message strangely thought I was using Qwen 3_4B.

    After updating comfyui, everything worked as it should 🤷.

    Big_SodaFeb 3, 2026
    CivitAI

    Nice, very good prompt adherence, decent speed. I'm on a 2080 Super, 32 GB DDR4, I9-9900k, Windows 10. In SwarmUI at 1024x1024, 4 CFG, 25 steps, Euler A for the sampler and normal for the scheduler (Although it seems any scheduler and sampler makes no difference regarding time.) I get about 2 minutes per generation. I have GPUSpecificOptimizations turned on and torch.compile set to inductor. Figured I'd post my specs and experience if anybody else with anything similar decides to try this.

    It's much easier to get this model to do what I want compared to Illustrious, NoobAI, Pony, any SDXL model really. I do get much faster speeds in SDXL however, about 20-40 seconds per generation depending on settings, usually the same as the settings above but I use Forge for that stuff. I hope the guy who's making it decides to add some E621 to the database although I could see that adding much more time to any curation since E621 has much more garbage on it than Danbooru does.

    nanalayaFeb 3, 2026· 1 reaction

    My gen time is 16 seconds on 5070 TI with the default workflow.

    ibara0608Feb 3, 2026· 3 reactions

    For RTX 20-series GPUs or the V100, you can refer to the following FP16 patch to greatly improve speed. After using it, the V100 takes about 24 s for 1024x1024 @ 30 steps, while the RTX 5090 still takes 9 s:

    https://huggingface.co/circlestone-labs/Anima/discussions/15

    nanalayaFeb 3, 2026

    @ibara0608 Absolutely right, I forgot the limitation of pre-3000 cards. Didn't think someone had made the patch already.

    Big_SodaFeb 3, 2026

    @nanalaya I'm jealous! I got an RTX 4060 TI 8GB laying around that someone gave to me for helping upgrade their PC, I might try that out.

    Big_SodaFeb 3, 2026

    @ibara0608 Thank you! I'm not quite sure how the patch works in SwarmUI, I put it in the custom_nodes folder but I'm impatient and not so intelligent so I dislike using nodes and spaghetting everything together. I'm assuming I just select Preferred Dtype under Advanced Sampling and set that to Default (FP16)?

    Edit: I just tested it, I get generations in 30 - 35 seconds now. Thank you, again!

    ibara0608Feb 3, 2026· 2 reactions
    CivitAI

    Will the final version replace the text encoder and VAE, or will it continue to be trained in this preview version?

    Daru_22Feb 7, 2026

    We hope strongly for a flux2 vae change, but it's very unlikely at this point.

    nanalayaFeb 3, 2026
    CivitAI

    Great model: good prompt adherence, good composition, understanding of styles and artist names. Can adjust camera angles. I personally prefer Euler A with the Normal scheduler. Great nsfw gen as well.

    For those who have cards before the 3000 version, use the fp8 checkpoint or with fp16 patch.

    steps 30, cfg 4, 1024x1024.
    45 seconds with 2080 8GB.
    15 seconds with 5070 TI 16GB.

    https://civitai.com/models/2356447?modelVersionId=2650296

    clueless_engineer
    Author
    Feb 3, 2026· 2 reactions

    I'm ok with shutting this page down and pointing folks to that page, but for some reason the author has banned me.

    nanalayaFeb 3, 2026

    @clueless_engineer I think there is no reason to delete this one; it works great as-is.

    CyberoFeb 3, 2026· 34 reactions
    CivitAI

    Finally, a model that actually listens! Anima is easily outperforming Illustrious in prompt adherence and multi-character consistency. The style variety is top-tier, and the full NSFW support ;) It’s impressive that this is only a "base" preview model. The potential here is massive!

    zeuss194Feb 4, 2026· 13 reactions
    CivitAI

    Great start for this model, and congrats for getting sponsored by comfy Org "AI grant" ( https://discord.com/channels/1218270712402415686/1237205240294539325/1468647254448668703 )

    Only4u_ArtFeb 5, 2026· 8 reactions
    CivitAI

    Looks promising. After Illustrious SDXL I wasn't sure if anyone would be brave enough to push anime art forward; thankfully you guys are willing to push this ungrateful community further. I'm not sure what your endgame is, but I'd appreciate it if you gave us full access to all models in the near future!

    Daru_22Feb 7, 2026

    you big meme

    OnlyWinningMoveFeb 5, 2026· 3 reactions
    CivitAI

    Can't wait to train LoRAs on it!

    OnlyWinningMoveFeb 6, 2026

    Already trained a LoRA for it, it's easier to train than pony in my experience.

    ReignShad0Feb 7, 2026

    @OnlyWinningMove How did you train a LoRA for it, if you don't mind me asking? Did you use LoRA trainer software like OneTrainer or Kohya_SS?

    Daru_22Feb 7, 2026· 3 reactions

    @ReignShad0 They used their esper power, psychic communication with the ghost in the machine!
    Of course they used software; check diffusion-pipe, but it's Linux-only. sd-scripts had an implementation too.

    OnlyWinningMoveFeb 7, 2026

    @ReignShad0 I used diffusion-pipe.

    openmn793Feb 7, 2026· 15 reactions
    CivitAI

    The preview version showed the model's huge potential, and hopefully the official version won't be too far away.

    openmn793Feb 7, 2026
    CivitAI

    Should the weight of the artist tag be @(wlop:0.7) or (@wlop:0.7)? Or is prompt weighting not supported at all? It seems to be (@wlop:0.7).

    ZootAllures9111Feb 13, 2026

    it's definitely inside the bracket

    CptCharnameFeb 7, 2026· 2 reactions
    CivitAI

    Good work. It works very well for a 512 model, but it still has issues with artist mixing, fingers, and blurry backgrounds. Hope it improves in the final version!

    properstyleFeb 11, 2026

    Excuse me, but what is this issue of artist mixing? I'm guessing it's artstyle being incoherent when you use more than one artist?

    CptCharnameFeb 11, 2026· 1 reaction

    @properstyle correct. It is pretty unstable if I'm using natural language with artist mixing. For tags only, it works fine for me.

    properstyleFeb 11, 2026· 1 reaction

    @CptCharname I gave it a few tries and while it gets artists better than even noobai, since you can use artists that have like 50~60 posts and the dataset is newer, it's true that mixing them yields poor results. Can't wait for the final version, particularly hoping that they fix this and at least improve anatomy since everything else you can deal with finetunes.

    springmushroom_86Feb 17, 2026· 2 reactions

    Yeah, the behaviour of mixing artists style is different than Illustrious/NoobAI, especially if you use natural language prompt.

    springmushroom_86Feb 8, 2026· 10 reactions
    CivitAI

    So far it's a very small DiT anime model (smaller than SDXL) that can be run in Google Colab. Even though this is just a preview model, it's pretty solid for a small DiT model.

    You can write prompts in booru tags or natural language. For me it works best to mix them when I want to create a more complex scenario/composition.

    Artist tags work great, but mixing them is kinda so-so (they can sometimes overpower character tags), and weighting doesn't work since it uses an LLM as the text encoder.

    Prompt adherence behaves like a typical DiT model: it can do simple text, which is a plus, and has better multi-character adherence (multiple characters' interactions are still a gacha depending on the prompt).

    Overall, in my opinion it's not going to replace Illustrious/NoobAI any time soon, but I can't wait for the next training run; it has a lot of potential.

    PugtatersFeb 9, 2026· 1 reaction
    CivitAI

    Hopefully it comes to Forge Neo soon :-D

    AbzaloffFeb 10, 2026· 1 reaction

    There is support for the model in Forge Neo today

    TANLI2023Feb 11, 2026

    Already supported

    spunkymcgooFeb 12, 2026· 1 reaction
    CivitAI

    Any AMD GPU+Linux users out there? I'm an SD boomer that's been on auto1111 with sdxl for the last 30 years, I'm assuming i'll have to upgrade to a new UI for this, if it even works at all on this hardware. Is it worth bothering to try it?

    clueless_engineer
    Author
    Feb 12, 2026

    Just give up A1111 and go with SwarmUI.

    Monfor_SalentaielFeb 12, 2026

    ComfyUI has native support for the model. The model is awesome even in the current preview version. It runs a bit slower than Illustrious, but has superior prompt adherence and the best artist-style following. It is a development of the core ideas of NoobAI, making it currently the most artistic model out there.

    There are some weaker parts: it is less stable and has worse character recognition.

    For creating workflow in ComfyUI I recommend using sampling at ~1 MP -> native upscale 1.3-1.5x -> sampling with 0.6-0.75 denoise while applying this LoRA: https://civitai.com/models/929497/aesthetic-quality-modifiers-masterpiece?modelVersionId=2670417

    It solves 90% of anatomy problems, while preserving artist knowledge. Also artists should be used with @ before their names

    upscaleanon537Feb 12, 2026· 1 reaction

    @Monfor_Salentaiel Solves 90% of the anatomy problems? Compared to what? Because with naiXLVpred102d_final that is just not true. This model fumbles fingers constantly where NAIXL does not. And saying it runs "a bit slower" is just not true. This model basically needs to be run at 30 steps and takes like 25 seconds for a single 1024x1024 image whereas I can gen something just as good in NAIXL in 7-8 seconds with my RTX 3090.

    Maybe if you compare anatomy against the base illustrious, sure, but nobody in their right mind should be using that.

    Only time NAIXL struggles with fingers is when they're kinda far away/small, but this model even struggles when they're right in your face.

    Heck, female genitals look absolutely horrendous most of the time.

    It's still a fun model to experiment with, but definitely not a NAIXL killer.

    Monfor_SalentaielFeb 12, 2026· 1 reaction

    @upscaleanon537 It is literally in the same state of development as Illustrious. I know the model struggles with anatomy; I meant that the improvised high-res fix makes anatomy better than generating from latent. NoobAI, especially with modern fine-tunes, is better than Anima, no doubt about it. I'm just happy to finally be able to make a model create a somewhat complex scene with a nice, non-generic style. It is a great foundation model that is already usable. For a jump in quality like the one from Illustrious to NoobAI, we'll have to wait.

    IntelGoreFeb 13, 2026

    As an ex-SD Boomer using a lot of Reforge (Automatic1111), I advise finally switching to ComfyUI. I'm still oftentimes confused, but just get one of the auto installers from here for a portable version, get the next best workflow for the model, and go. You don't have to know every single node.

    StraitjacketFeb 16, 2026

    @upscaleanon537 Assuming the developers follow through with their promise the fine tunes will likely be a lot better than NAI based on what it's doing now with just this incomplete base model. The prompt adherence is far superior.

    its_not_realFeb 16, 2026

    I get you. Using comfy is such a bad experience when you "just want to generate images".

    Risking completely breaking everything if you update it, for example. Terrible user experience in this regard. Great for devs who know what they are doing, terrible for "normal users" who don't want to spend time learning how everything works in the backend, understanding how dependencies work with different workflows, etc.

    I still use forge, I see no huge reason to switch, but I have my eyes on "forge neo" that seems to be doing really great work implementing support for new models and such.

    Check it out: https://github.com/Haoming02/sd-webui-forge-classic/tree/neo

    New Features:

    • Support Anima

    • Support Flux.2-Klein: 4B, 9B; txt2img, img2img, inpaint

    • Support Z-Image: z-image, z-image-turbo

    • Support Wan 2.2: txt2img, img2img, txt2vid, img2vid; use Refiner to achieve High Noise / Low Noise switching (enable Refiner in Settings/Refiner)

    spunkymcgooMar 4, 2026

    I ended up using Comfy

    its_not_realMar 5, 2026

    @spunkymcgoo Just be aware that you are putting yourself at risk, and you said "you are a boomer", so I assume you are not really reading up on threats or don't know how to analyze whether something might be a threat.

    Here is just a tiny collection of stuff spreading malware via Comfy over the last year or so, and it's only going to get worse and worse because of AI slop and slop-squatting workflow providers...

    https://comfyui-wiki.com/en/news/2024-12-05-comfyui-impact-pack-virus-alert
    https://blog.comfy.org/p/upscaler-4k-malicious-node-pack-post
    https://cyberpress.org/hackers-compromise-700-comfyui-ai-image-generation-servers/
    https://www.reddit.com/r/comfyui/comments/1qn4w1j/i_think_my_comfyui_has_been_compromised_check_in/
    https://labs.snyk.io/resources/hacking-comfyui-through-custom-nodes/

    People tend to fail to mention the big risk of getting infected by using Comfy unless you are decently knowledgeable about this stuff, or rather, of getting compromised via workflows in Comfy...

    To try to clarify more closely what I mean:
    Let's say you happened to run one of those infected nodes in a workflow. It might not be intentional that you got infected via the workflow (do an internet search on slop-squatting), but nevertheless you are now infected.
    Would you even be aware? Do you read up on those kinds of news? The Comfy interface would not inform you that you are running stuff with malware and/or viruses, so unless you actively know about it, you would stay compromised...

    Just be REALLY careful... You are executing code on your computer that has NOT been vetted through any kind of security validation. And this will not change, comfy is NOT a platform for "noobs", it's a platform for devs even though they REALLY want to claim they are for "anybody".

    spunkymcgooMar 6, 2026· 1 reaction

    @its_not_real not interested into downloading bloated workflows from randos anyway, i want the minimum that's necessary and everything i've seen on civit looks like a mess. forge neo explicitly states that it does not support AMD gpus or linux so it's not an option for me, sorry. i'm not using comfy because i like it, but because it's literally the only option

    its_not_realMar 6, 2026

    @spunkymcgoo So why are you on civitai if you are not going to "use something from randoms"?!? xD

    Besides, it's not the workflow, it's what's IN the workflow. You have to put all your trust in the maintainer of the workflow doing proper testing before sharing, but you never know...

    Yeah, nvidia is king when it comes to AI, sucks, but it is what it is...

    Just be careful: if you read the links I posted, you should now be aware that there are close to zero protections against malware/viruses in nodes.

    spunkymcgooMar 6, 2026· 1 reaction

    @its_not_real .safetensors files are not remotely equivalent.........

    its_not_realMar 7, 2026

    @spunkymcgoo WOW! You did a Google search and found that pt/pth files should not be used due to insecurity, an issue known for multiple years. Ok...? (This has nothing to do with any of the hundreds of security flaws with RCE (remote code execution) and privilege escalation over the past year mentioned in the links I posted from security experts.)

    Look, if you refuse to learn and prefer to BE AN IDIOT, then go ahead, do whatever you want in a very insecure state, become infested by malware etc if you want.

    BUT DO NOT try to deflect or diminish the MASSIVE RISK you intentionally put yourself in with nonsense arguments, therefore trying to convince OTHER non-informed users that "it's fine", it REALLY IS NOT.


    Ofc WORKFLOWS should not include NODES that pull pt or pth files; that's been a thing for YEARS, and it's EXACTLY one of the most common things done by workflows on this site. So you KNOW which workflows utilize nodes that contain pt or pth files, huh?!? (ofc you don't)

    If you want to be an idiot, go ahead, ignore warnings from people who actually understand if you want. And by doing so, you show you are quite frankly an idiot.
    The same kind of idiot that would say: "I don't need a helmet when riding a motorcycle, I don't crash..."

    spunkymcgooMar 8, 2026· 1 reaction

    @its_not_real oh my god fuck offfffff

    upscaleanon537Feb 12, 2026· 5 reactions
    CivitAI

    It seems the LM it uses has a problem caching stuff?

    I genned some Miku, removed it from the prompt, and still got Miku long afterwards unless I specified some other character. I've noticed this with other models that use an LM like Qwen. And this seems to screw over styles too. I tried a simple prompt of "masterpiece, best quality, score_9, score_8, @miclot" and the first 2 images actually had the style, but the next 14 images didn't. Probably because of this cache bullshit or whatever it's doing.

    It's like it's losing the understanding of the style the more you gen, it makes no sense. And then randomly after genning some more, I get the artstyle back if I'm really lucky then it goes back to some shitty style.

    I've tried several different samplers and scheduler, doesn't help at all. Orientation of the gen doesn't seem to matter either. Increasing tag strength doesn't work either.

    The only thing that seems to fix it is if you actually specify things like a character, but why does the model try this hard to default to a specific style as well as defaulting to a character? And it seems that even if you specify, there are a few times it "forgets" the artstyle and just decides to gen in some other style than what you specified, which seems to happen maybe 8% of the time based on me genning 100 images. Maybe it would be even less if I specified even more parts of the image than the earlier prompt + "cirno, touhou", but this is just bad.

    A really disappointing experience so far, as it has these issues as well as several things just not being possible, but it does show a LOT of potential and I'm looking forward to seeing the full release and eventually finetunes of that.

    REgenerationSDFeb 19, 2026

    Maybe because it's a DEMO! XD

    popim48846589Feb 12, 2026· 4 reactions
    CivitAI

    After exclusively using a noob mix for a year, this model is a huge leap in quality for me.

    yLoraFeb 13, 2026· 33 reactions
    CivitAI

    Great, now all we need is a furry finetune.

    Or, maybe, a furry equivalent of Anima trained on solely furry images

    Monfor_SalentaielFeb 13, 2026· 7 reactions

    Considering that first ever fine-tunes of Stable Diffusion were done by weebs and furries, you most likely won't need to wait for long

    yLoraFeb 16, 2026· 6 reactions

    all these dislikes are from fake degenerates

    StraitjacketFeb 16, 2026

    Either is fine by me but I'll happily wait and anticipate the finetune of the full release from some big name on here, since thats inevitable. I just hope we don't get another illustrious situation where we're left out in the cold- not that that was completely terrible, and this dev doesn't seem the type so we'll see.

    CharbelFeb 13, 2026· 38 reactions
    CivitAI

    It looks like Illustrious finally has a worthy rival. ⚔️✨

    Anima is hands down an excellent model—it's what Pony V7 should have been, what Z-Image is expected to be, and what Nano Banana will never be. 🚫

    A free model, with SFW/NSFW capabilities, and a great balance between quality and generation speed. Bravo! 👏

    its_not_realFeb 16, 2026· 2 reactions

    "This model is licensed under the CircleStone Labs Non-Commercial License."

    Does not sound like "free" to me. Probably copyleft or smthn (I didn't read the license), since you have to pay to use it commercially, which is completely fine, but it's NOT free...

    This model is created to make money, please don't try to "open-source wash" it.

    Besides, why are you thanking the uploader on civitai that just downloaded it from huggingface and reuploaded it on civitai to "be able to tag the images they are generating"...?

    If you like this model, go support the actual creators of it instead... Maybe even paying for a license to thank them monetarily.

    CharbelFeb 18, 2026· 1 reaction

    Non-Commercial License?, I couldn't care less about that, as long as I can use it for free on my PC hahahaha 🤣. Thanking?, I'm just dropping a straight fact as the user I am, sharing my honest take on this model. Facts are facts, and the truth is this model eats Pony V7, Z-Image, and others that are just more of the same for breakfast 🍟. Anima has the perfect balance between quality, generation speed, and broad NSFW/SFW content. Without a doubt, a masterpiece. 🔥

    its_not_realFeb 18, 2026· 1 reaction

    @Charbel Agree, facts ARE facts. And the FACT is this is NOT free.
    Free=ANYBODY can use it without paying.

    Let's say someone wants to use this model to create a simple Ren'Py game, and then sets up a Patreon where they get 2 supporters paying $2 each/month, making the creator receive $4/month.
    That someone is now breaking the license and can be sued. 👍
    A simple removal request to Civitai by the creators (or anybody, for that matter, since the model is illegal to redistribute in this way) and you lose it all here and have to turn to Huggingface to download.
    FFS, the license is not even redistributed with this download!! Civitai (by allowing this model to exist) is breaking the license!
    So let's go one step further and say someone generates using the Civitai online generator (which is very likely to have already happened) and paid for that; Civitai is doubly liable because they earn money by using a model with a non-commercial license. THEY HAVE TO BUY A LICENSE! (assuming the creators actually accept a sale to them)
    So maybe be a bit careful before screaming and shouting about how much you appreciate using a model in breach of its license agreement.

    Using "facts" does not mean ignoring the actual people that created this model by thanking somebody that is redistributing something SOMEBODY ELSE CREATED with a non-commercial license on a 3rd-party web page...
    Your "facts" are similar to someone claiming "racism does not exist, because it does not affect me".

    But it's OK not to know these things; if you are not involved in any development (FOSS or non-FOSS alike), it's not strange at all that you don't know/understand them.
    But you REALLY should sit down and LEARN instead of arguing with nonsense.

    If you ACTUALLY want to support the creators of this model, register on Huggingface, request a download (and provide your email), and you are then a legal user of the model as long as you don't earn a single dime off it.

    CharbelFeb 18, 2026· 3 reactions

    Ugh, what a drag... I got lost in the first two words hahahaha 🤣. Like I said, Anima is and will always be a masterpiece, and free for me. See ya! 👋

    BingusChungusFeb 14, 2026

    When changing the artist style strength, how do you do it? Like "(@artist:0.5)" or "@(artist:0.5)"?

    CrypheFeb 14, 2026· 1 reaction

    the former

    Monfor_SalentaielFeb 15, 2026· 2 reactions

    It doesn't have weights, if I am not mistaken. The text encoder is not CLIP, it's Qwen, an LLM.

    ComradeAnanasFeb 16, 2026

    As Monfor said, you can't change the weight. Precise weighting can only be done with CLIP; when you attempt this on an LLM, you effectively hope that it will be understood as "make it weaker" or "make it stronger". And there is absolutely no guarantee that this will do what you intended in terms of style.

    lemonbreadtunaFeb 27, 2026

    This is either incorrect or misleading. In ComfyUI you can use weights and they work as intended: "(@artist:0.5), (@artist:1.0)". The artists blend together, and increasing or decreasing the strength works as you'd expect in Illu/Noob; if you turn it up extremely high or low it'll just give you noise, which is proof it works.

    ZuiXunZhenLiFeb 15, 2026

    Can this be used with Illustrious LoRAs?

    StraitjacketFeb 17, 2026

    Definitely not, they're completely different architectures afaik, this is not a Stable Diffusion model.

    ComradeAnanasFeb 16, 2026· 19 reactions

    About prompting, advice for potential users: do not expect it to work like XL. There is no CLIP, there is an LLM. If you had XxX_UberStyleCombination3000_XxX on XL that you carefully weighted and used, it will most likely work far differently, since:
    1. The LLM doesn't have the precise weighting of CLIP; your (quasarcake:1.5) might make the style more prominent, but not exactly.
    2. Styles behave differently from Illustrious/Noob. Even simple combinations will give you a more different result than on Noob. You can use this site to check how latent styles look and make adjustments: https://thetacursed.github.io/Anima-Style-Explorer/index.html
    I would advise trying simple style usage, like 3-4 styles max, and testing how adding or subtracting them affects the result.

    jefharrisFeb 16, 2026

    Matched all models yet getting a "KSampler

    mat1 and mat2 shapes cannot be multiplied (1024x12288 and 1024x2048)" error

    Monfor_SalentaielFeb 16, 2026

    "1024x12288"? Seems like you've made a typo in setting latent size

    TScitaiFeb 17, 2026· 1 reaction

    The mat1 and mat2 shapes suggest you may be missing the VAE or using an incompatible one.
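    For anyone curious why a wrong VAE surfaces as this exact message: matrix multiplication A @ B requires A's column count to equal B's row count, and an incompatible VAE or text encoder hands the sampler tensors whose feature dimensions don't line up. A minimal pure-Python sketch (illustrative only, not ComfyUI's actual internals):

    ```python
    # Illustrative only: why "mat1 and mat2 shapes cannot be multiplied
    # (1024x12288 and 1024x2048)" is reported. For A @ B, matrix A is
    # (m, k) and B must be (k, n): the inner dimensions have to match.
    def can_matmul(a_shape, b_shape):
        """Return True if two 2-D shapes are compatible for A @ B."""
        return a_shape[1] == b_shape[0]

    # The shapes from the error: 12288 != 1024, so the multiply fails.
    print(can_matmul((1024, 12288), (1024, 2048)))  # False
    ```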

    jefharrisFeb 19, 2026

    @TScitai ah yes this was it! totally forgot to change that.

    StraitjacketFeb 17, 2026· 5 reactions

    I'm getting poor results when upscaling with hires fix in Forgeneo, which upscaler do you guys use and what settings?

    ComradeAnanasFeb 17, 2026· 1 reaction

    Depends on what you consider poor results. I had issues with regular upscaling that would produce noise on the image, most likely caused by my sampler choice, since I ran the er_sde sampler and SDE samplers sometimes get finicky on inpainting or denoise.
    I would advise running upscaling with the euler/euler a sampler, 1.5x scale, 10 steps, 0.2-0.3 denoise. As for the upscaler, I didn't notice a strong influence from them; my choice is 2x-AnimeSharpV4.
    You could also try the SD Ultimate Upscale extension; tiled upscale also works fine, but slower than regular upscaling.
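    To make the second-pass numbers concrete, here is a tiny helper (my own illustration, not part of any ComfyUI node) that computes a 1.5x hires target size while snapping both sides to a multiple of 8, on the assumption that VAE-based models want dimensions divisible by 8:

    ```python
    def hires_dims(width, height, scale=1.5, multiple=8):
        """Scale an image size and snap each side to the nearest multiple.

        Assumption: dimensions divisible by 8 keep the VAE happy; raise
        `multiple` (e.g. to 16 or 64) if your setup needs a coarser grid.
        """
        def snap(v):
            return int(round(v * scale / multiple)) * multiple
        return snap(width), snap(height)

    print(hires_dims(832, 1216))  # (1248, 1824)
    ```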

    StraitjacketFeb 17, 2026

    @ComradeAnanas Thank you! I'll try switching samplers on the upscale. It's been a long time since I played with samplers other than euler as far as image generation goes so it's good to know more about the others and their quirks.

    ThreadZenFeb 18, 2026· 1 reaction

    Currently, Anima's hires feature still has issues. You can refer to this issue: https://huggingface.co/circlestone-labs/Anima/discussions/42

    If you use hires, please do not set the resolution higher than 2000px.

    cuenta62018485Feb 22, 2026

    How did you make it run in Forge? Mine isn't identifying the model.

    RisingVFeb 24, 2026

    @cuenta62018485 You have to use Forge Neo. Normal forge does not support it.

    grade201178715Feb 17, 2026· 12 reactions

    Multi-character interaction is an epic-level upgrade. Don't just compare aesthetics and art style; look at the huge progress in this area. As long as the ecosystem takes off, it will be Illustrious Plus, and art style and the like will naturally catch up. I hope the authors don't give up.

    hipapi7444709Feb 17, 2026

    Does anyone know of any good captioners local or in huggingface?

    ikekph5Feb 20, 2026· 2 reactions

    OneTrainer supports Windows, has a GUI, and can make tag-based captions in one click. It can also make masks.

    Although it doesn't support training Anima yet, the captioner is a standalone GUI.

    degurshaftFeb 24, 2026

    @ikekph5 Why don't you just use comfy

    coldturkeyFeb 19, 2026

    I keep running into this error when trying to run the workflow embedded in the author's image:

    Error(s) in loading state_dict for Llama2: size mismatch for model.embed_tokens.weight: copying a param with shape torch.Size([151936, 1024]) from checkpoint, the shape in current model is torch.Size([128256, 4096]).

    clueless_engineer
    Author
    Feb 19, 2026

    Check your VAE.

    coldturkeyFeb 19, 2026

    @clueless_engineer qwen_image_vae.safetensors is what I am using. The issue is still happening

    ReGeneratedFeb 21, 2026

    Update comfyui

    IsekaiAIFeb 26, 2026

    I am running into an issue as well, I have updated ComfyUI and get: KSampler

    module 'tensorflow' has no attribute 'Tensor'

    luckmiriwindsFeb 19, 2026· 11 reactions

    The quality is an insane bump from SDXL checkpoints (NoobAI and Illustrious); just the fact that, thanks to the new VAE, you can have detailed full-body images is proof enough. Of all the projects with new VAEs (Newbie, Klein Image, Z-Image and Lumina 2), I gotta say I'm most excited for Anima, and so far it looks great.

    I'm getting awful hands and arms though. Gotta fix those with inpainting unless anyone has some advice for me

    Monfor_SalentaielFeb 20, 2026

    After generation, use the built-in algorithmic 1.2-1.4x upscale and run a second sampling pass with 0.6-0.8 denoise. It obviously extends generation time, but corrects ~80% of problems with anatomy and artifacts. Also, this LoRA doesn't break styles and makes the model more stable: https://civitai.com/models/929497/aesthetic-quality-modifiers-masterpiece?modelVersionId=2670417

    ikekph5Feb 20, 2026· 6 reactions

    There is a CFG-distilled LoRA for the Anima preview:

    https://civitai.com/models/2364703

    Very useful for quickly testing concepts/ideas, with CFG 1 and as few as 12 steps, or for getting a more stable image with the normal 30 steps.

    The model isn't mine; I shared it here simply because I think it's a unique tool and I should draw attention to it. It can help the community, especially those who don't have a powerful GPU. And this reuploaded model post is now the top post on Civitai.

    DevilSShadoWFeb 24, 2026· 18 reactions

    As someone who only does Anime/2.5d, this is single-handedly the biggest development in the self-hosted imagen space since illustrious/NAI. Maybe since pony. This is currently the only model, even in its early stages where I am more than happy to trade IT/s for the multitude of options one has of expressing themselves for generation. Style prompting and separation are impeccable, no loras required.

    On my 5070TI, at my usual settings, SDXL-based models usually spit out an image in 4~ seconds at 24 steps. This does it in about double, at around 10 seconds and I find this a fair tradeoff for the capability of switching to natural language at any point during my prompt construction and seeing the model follow my prompts to a T, as long as they are well-constructed, with an awareness of the underlying LLM's limitations.

    Regarding problems with upscaling, I have easily bypassed this by using a different model as a refiner, with little to no alteration of the OG image.

    Finally, the last problem I've run into is inpainting, another issue I'm currently bypassing via using a different model for the inpaint process. Not because Anima can't do it, but because the parameters I'm used to tweaking for getting inpaints just right do not apply here, although I was able to inpaint with anima via trial and error.

    I really hope the community at large gives Anima a try because for me this is the only model that's revitalized my interest in the medium/scene in almost a year. I know there are some discussions going on regarding licensing and I hope for everyone's sake that they can be amicably resolved, as I'd love nothing more than to see the community rally around Anima in order to come out with loras and other tools for building upon what's already an excellent base, even in this early preview form.

    PS: I've seen reports of issues with hands which almost made me skip giving Anima a go, but I am happy to report I get good hands/feet even when generating two characters in the same scene more often than not, at the very least the success rate seems to be on par with Illustrious/NAI, even at low step counts (15).

    4377781Feb 27, 2026· 15 reactions

    I'm not super impressed yet. The LLM is too small to really use natural language effectively, though it's better than Illustrious for sure, and being able to use my Illustrious presets without changes (outside of embeds) is nice. Quality is on par with or just below modern Illustrious models: it's okay, but at the recommended 40 steps it is significantly slower than 20-step Illustrious (5s vs ~25s), and at the same 20 steps about 3x slower (5s vs 17s).

    It does text marginally better than Illustrious, but it's still mostly useless.

    If it can achieve Z-image level images, I'll be excited, but right now it just feels like a slow Illustrious.

    Ronaf99Feb 28, 2026· 9 reactions

    Don't worry, give it a few months and it will surprise you. People said the same thing about Illustrious when it first came out, and it went on to become a solid model for anime-style content today.

    4377781Feb 28, 2026· 8 reactions

    @Ronaf99 To be totally fair, it is an alpha preview at like 60% the file size of Illustrious with better natural language. If it's already matching Illustrious, I'm definitely excited for what it can do in the future and would happily trade the speed for the control.

    StraitjacketFeb 28, 2026· 5 reactions

    It already feels a lot more prompt-adherent with both tags and natural language (I find a combination works best).

    Once the full weights drop and finetunes start rolling out, it's gonna be a massive step forward.

    spunkymcgooMar 4, 2026

    it seems to work fine at 20 steps for me

    4377781Mar 4, 2026

    @spunkymcgoo Yeah, it seems to be just as good in terms of image quality, it just takes a long time. I don't mind the extra time if it means my life is easier.

    FloatsYourStoatMar 5, 2026

    @Whilpin Something else to keep in mind: not only is it a preview, it's not even trained on the full-resolution images like the final will be. If you take a peek at some of the LoRAs people have made so far that were trained at 1024 res, you will note that even just that ends up making it look a bit better. I'm sure it could evolve into something quite lovely after it's fully cooked.

    _Jarvis_Feb 28, 2026· 4 reactions

    Hey guys, does anyone know if it's possible to run Lora training on this model in Google Colab? Local training is too tedious to set up, so I'm not considering it.

    KeMiliUsFeb 28, 2026

    You can use "accelerate launch anime_train_network.py" from https://github.com/gazingstars123/Anima-Standalone-Trainer

    RisingVMar 1, 2026· 6 reactions

    Actually @CitronLegacy put out a One-click setup anima lora trainer for google colab:
    https://github.com/citronlegacy/citron-colab-anima-lora-trainer
    I did not use it, but I guess it's the easiest way to run lora training in google colab?
    This lora was trained using it.

    CitronLegacyMar 1, 2026· 2 reactions

    @RisingV Thanks for the shout out!

    @_Jarvis_
    My colab trainer isn't as strong as local training, but it's good if you want to make a simple LoRA. If you do use my colab trainer and want to see changes, just let me know! I tried to make it as simple as possible to see what we can make with the least amount of effort, but I can upgrade it as needed.

    _Jarvis_Mar 2, 2026· 3 reactions

    @CitronLegacy Wow, bro, your Trainer is amazing. It's super easy and quick!

    _Jarvis_Mar 3, 2026

    @KeMiliUs No, thanks. It's some kind of traceback simulator))

    CitronLegacyMar 4, 2026· 1 reaction

    @_Jarvis_ Thank you! Glad you like it. Let me know if you ever have ideas for improving it. Also, your Clementine Anima LoRA looks great!

    degurshaftMar 2, 2026· 7 reactions

    SDXL Illustrious has no future while Anima exists. If the model is this good now, I'm scared to imagine what it'll be like at release.

    ponzomut146Mar 2, 2026· 1 reaction

    It's interesting, but is it just me or is it very bad at image variation?

    DazrockMar 3, 2026· 1 reaction

    I've noticed that with some prompts it will output similar images with different seeds.
    But most of the time its variety is very good.
    Though this is likely to improve with the finished base model.

    happtander866Mar 4, 2026

    Is it possible to use Ultimate SD Upscale with this model?

    degurshaftMar 7, 2026

    It's highly recommended to use exactly that, as the standard upscaling method produces artifacts.

    purpleladyMar 6, 2026· 2 reactions

    HELP PLSSS....

    When I use artist tags with @, I always get their logo or Patreon things etc. in the image. I can't seem to be able to use the standard negative prompts to get rid of them...

    vanillahMar 6, 2026· 2 reactions

    You can't really get rid of them; they are REALLY integrated into the training data. Wait and see if future versions will be better.

    purpleladyMar 6, 2026· 1 reaction

    @vanillah I did kinda figure out a trick: if you give it longer prompts, it's less likely to show up.

    degurshaftMar 7, 2026· 1 reaction

    The same was true for Illustrious. You cannot get rid of such signatures with negative prompts. Try integrating Lama Cleaner into your workflow so it triggers whenever signatures are detected.

    bleepMar 7, 2026· 17 reactions

    Illustrious is so cooked

    lepef41586464Mar 8, 2026

    Nice model

    sneedingonmyligma420Mar 9, 2026· 10 reactions

    This model is incredibly powerful, and absolutely trivial to train LoRAs for. I'm 100% confident this is the next logical step up from Illustrious/Noob.
    I used this to train a Stocking Anarchy-accurate art style LoRA; it took about 10 hours on my 5060 Ti 16GB for 4000 steps.

    https://www.reddit.com/r/StableDiffusion/comments/1r706a4/standalone_anima_lora_trainer_gui/

    _Jarvis_Mar 10, 2026· 5 reactions

    4000? But 1500-2000 is enough :О

    sneedingonmyligma420Mar 10, 2026· 1 reaction

    @_Jarvis_ I was just starting off and didn't realize I had more than enough headroom for a batch size of 4, and that got me down to 1500 steps, 2 hours lol.

    Oh, and it helped to drop the max bucket size down from 1536 to 1024. Quite a number of rookie mistakes.
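    The batch-size arithmetic generalizes: for a fixed budget of training samples, the optimizer step count shrinks linearly as batch size grows, since each step consumes batch_size samples. A quick illustration (hypothetical numbers, not the exact dataset above):

    ```python
    def optimizer_steps(total_samples, batch_size):
        """Steps needed to consume a fixed sample budget (ceiling division)."""
        return -(-total_samples // batch_size)

    # Hypothetical 6000-sample budget:
    print(optimizer_steps(6000, 1))  # 6000 steps at batch size 1
    print(optimizer_steps(6000, 4))  # 1500 steps at batch size 4
    ```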

    wretcheduniverseMar 11, 2026

    interesting!

    sylveriateMar 9, 2026· 16 reactions

    The preview images do not do this model justice, it is by far the most impressive anime model to date, even as a preview. Very much looking forward to future checkpoints with more time in the oven.

    DarkEngine2024Mar 9, 2026· 33 reactions

    Finally, Pony v7

    Shio_NMar 10, 2026· 14 reactions

    Text descriptions work better than tags for this model. I think it's essential for finetunes to keep using natural language together with tags so as not to ruin what we have right now. A lot of models trained on this base show signs of significant degradation on natural-language prompts.

    PixelCrafterMar 11, 2026

    "Text descriptions work better than tags"
    I've come to the conclusion that it's exactly the opposite; the only advantage is that there are no token limits on such tags.
    Actually, this is true for all anime-style models that use Qwen encoders. I'm using Kandinsky myself and have noticed that writing with tags gives significantly better results.

    B80_8888Mar 11, 2026· 14 reactions

    sdxl prime

    Its_JoeverMar 12, 2026

    Can anyone teach me how to set it up in ComfyUI?

    LyloGummyMar 12, 2026

    just download this image and drop it into comfyui:
    https://huggingface.co/circlestone-labs/Anima/resolve/main/example.png

    Make sure to update comfyui to latest version.

    The model is natively supported in ComfyUI. The above image contains a workflow; you can open it in ComfyUI or drag-and-drop to get the workflow. The model files go in their respective folders inside your model directory:

    anima-preview.safetensors goes in ComfyUI/models/diffusion_models

    qwen_3_06b_base.safetensors goes in ComfyUI/models/text_encoders

    qwen_image_vae.safetensors goes in ComfyUI/models/vae (this is the Qwen-Image VAE, you might already have it)

    info is all here: https://huggingface.co/circlestone-labs/Anima
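    Since the three files go into three different folders, a quick sanity check before launching can save a failed run. This is just a convenience sketch (the folder names and filenames are the ones listed above; point models_dir at your own install):

    ```python
    import os

    # Expected layout under ComfyUI/models/, per the instructions above.
    EXPECTED = {
        "diffusion_models": "anima-preview.safetensors",
        "text_encoders": "qwen_3_06b_base.safetensors",
        "vae": "qwen_image_vae.safetensors",
    }

    def missing_files(models_dir):
        """Return the expected model files not present under models_dir."""
        return [
            os.path.join(folder, name)
            for folder, name in EXPECTED.items()
            if not os.path.isfile(os.path.join(models_dir, folder, name))
        ]
    ```

    An empty return value means all three files are where ComfyUI will look for them.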

    Its_JoeverMar 12, 2026

    @LyloGummy thanks

    itachiiiMar 15, 2026

    @LyloGummy how to setup for reforge?

    YisusJMar 18, 2026· 1 reaction

    @itachiii Forge Neo is already implementing it, if you're interested.

    katedi5015293Mar 17, 2026

    I found a website that has this model and the new one too, and runs it online for free. They give you 50 points every day, but no porn. The website is Tensor Art.

    hboxgames132Mar 22, 2026· 1 reaction

    Guys, any thoughts on Illustrious vs. Anima?

    RenSirMar 22, 2026· 1 reaction

    Anima is stronger than Illustrious in its support for natural language and backgrounds, but it still needs improvement in NSFW content and fingers. Also, Anima is easier to fine-tune than Illustrious because it has fewer parameters. And this is just the preview version. Looking forward to the final release of Anima.

    hboxgames132Mar 22, 2026

    @RenSir Ohhh, I see, it's a preview. Thanks for your thoughts. Will stand by for the release!

    Rangiku209090Mar 23, 2026· 3 reactions

    What's anima-preview 2?

    deitychaserMar 24, 2026

    A continuation with further training, as this is still just a preview and not the final version of the model.

    iocaste394Mar 25, 2026

    From the Huggingface page:

    "The preview2 version is a small upgrade to the first preview.

    A significant part of the training is redone with different hyperparameters and techniques, designed to help make the model more robust to finetuning. It is trained for much longer at medium resolutions in order to acquire more character knowledge. A regularization dataset is introduced to improve natural language comprehension and help preserve non-anime knowledge. It has the same resolution limitations as the first preview. It is trained only briefly at 1024 resolution. Going much beyond this will cause the model to break down. This is a base model with no aesthetic tuning. It is designed to be wild and creative, with the maximum possible breadth of knowledge. It is not optimized to produce aesthetic or consistent images."

    raidou88Apr 3, 2026

    I have the GGUF version of this, but when I try preview 2, it doesn't work. Do I have to download the VAE and CLIP again to test it? Man, I'm out of space already.

    ottoportmanMar 24, 2026· 2 reactions

    Is it possible to use multiple LoRAs and mix styles? My own workflow for Illustrious doesn't even work with this model. I would be grateful if someone would share theirs with support for that.

    aurau673Mar 29, 2026· 4 reactions

    Does Qwen3-0.6B-abliterated work?

    VeerGeerApr 26, 2026

    It "works", but it's also unnecessary and mostly just introduces artifacts

    AltairTheArcApr 1, 2026· 3 reactions

    How do you do hires fix with this model? Is it possible?

    Natsu24Apr 14, 2026

    This is not recommended

    VeerGeerApr 26, 2026· 1 reaction

    I would like to post a silly comment here, as the topic is important, about this post:
    https://huggingface.co/circlestone-labs/Anima/discussions/135#69eba22f3dba94d545c02bcf

    Yes guys, you can do the good old song and dance, "1girl, (gigantic breasts:1.5)", except this model requires much higher weights for most things (some things are much more sensitive).

    So in this particular example, it would be "1girl, (gigantic breasts:8.5)" in Anima instead.

    have fun!