Workflow for Anima (Preview)
Any feedback would be appreciated.
Downloads
official HF repo: https://huggingface.co/circlestone-labs/Anima
civitai model page: https://civarchive.com/models/2458426/anima-official
Detectors (YOLO/SEG):
Hands (hand_yolov9c): https://huggingface.co/Bingsu/adetailer/blob/main/hand_yolov9c.pt
Face (face_yolov9c): https://huggingface.co/Bingsu/adetailer/blob/main/face_yolov9c.pt
Eye (Eyeful_v2-Paired or Eyeful_v2-Individual): https://civarchive.com/models/178518/eyeful-or-robust-eye-detection-for-adetailer-comfyui
NSFW (ntd11_anime_nsfw_segm_v5-variant1): https://civarchive.com/models/1313556/anime-nsfw-detectionadetailer-all-in-one
Body (yolo11m-seg): https://docs.ultralytics.com/models/yolo11/#segmentation-coco
Tag order
[quality/meta/year/safety tags] [1girl/1boy/1other etc] [character] [series] [artist] [general tags]
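For instance, a prompt following the order above might look like this (the quality tags, character, series, and artist names here are illustrative placeholders, not tags confirmed for Anima):

```text
masterpiece, best quality, safe, 1girl, character name, series name, artist name, smile, long hair, outdoors
```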
Generation Settings
30-50 Steps
4-5 CFG
1MP Resolution e.g. 1024x1024 or 896x1152
er_sde, euler_a or dpmpp_2m_sde_gpu
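The recommended settings above can be summarized as a small sketch. The dictionary keys below are illustrative labels, not actual ComfyUI node or API identifiers:

```python
# Hypothetical summary of the recommended Anima generation settings.
# Key names are illustrative, not real ComfyUI API fields.
anima_settings = {
    "steps": 40,               # anywhere in the 30-50 range
    "cfg": 4.5,                # 4-5 recommended
    "width": 896,
    "height": 1152,            # ~1MP total; 1024x1024 also works
    "sampler_name": "er_sde",  # or "euler_ancestral" / "dpmpp_2m_sde_gpu"
}

# Sanity checks against the recommended ranges
assert 30 <= anima_settings["steps"] <= 50
assert 4 <= anima_settings["cfg"] <= 5
assert anima_settings["width"] * anima_settings["height"] >= 1_000_000
```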
Key Features:
LoRA
Detailers
HiresFix
Wildcards
I2I
Workflows included:
🟥Anima: the standard workflow version
🟨AnimaStandard: a simpler version of the workflow
🟩AnimaBasic: the simplest version of the standard workflow, lacking some of the advanced features such as metadata handling
Custom Nodes
ComfyUI-Manager (by Comfy-Org)
https://github.com/Comfy-Org/ComfyUI-Manager
ComfyUI-Impact-Pack (by ltdrdata)
https://github.com/ltdrdata/ComfyUI-Impact-Pack
ComfyUI-Impact-Subpack (by ltdrdata)
https://github.com/ltdrdata/ComfyUI-Impact-Subpack
rgthree-comfy (by rgthree)
https://github.com/rgthree/rgthree-comfy
ComfyUI-Image-Saver (by alexopus)
https://github.com/alexopus/ComfyUI-Image-Saver
ComfyUI-KJNodes (by kijai)
https://github.com/kijai/ComfyUI-KJNodes
ComfyUI-Lora-Manager (by willmiao)
https://github.com/willmiao/ComfyUI-Lora-Manager
ComfyUI-Easy-Use (by yolain)
https://github.com/yolain/ComfyUI-Easy-Use
ComfyUI_UltimateSDUpscale (by ssitu)
https://github.com/ssitu/ComfyUI_UltimateSDUpscale
Experiment and enjoy!

Description
Upload from V1→V2
(This is not an exhaustive changelog)
General:
Added AnimaV1 workflow
Added AnimaBasicV1 workflow
FAQ
Comments (11)
Amazing, but I got an error for this node when running:
WidgetToString: "Node not found in prompt. Tried keys: 'None:1' and '1'".
I bypassed that node and it works fine, but may I ask what this node is for?
It's from ComfyUI-KJNodes and is used to automatically fetch the diffusion model name in this workflow so it can be saved in the metadata later.
good workflow, thank you
Does it make it faster?
Great workflow, thanks for sharing! Worked quite well out of the box.
If you allow me to be that "☝️🤓" guy for a second, here are a few things I changed in the workflow for my own personal use:
- Depending on the resolution, the first detailer will leave a weird vertical line in your images; to fix that, change the empty mask from 1024x1024 to the same resolution as your starting empty latent or image (just pull it from the nodes on the left).
- I'd recommend using a different seed for the second pass, if you want to use two samplers. I feel like Anima doesn't benefit much from a second pass though.
- I also added mask previews for all the other detailers, and an image comparer near the end. Those can help you see what's going on if something goes wrong.
You wanna share?
@gfreeman101979846 I believe he already implemented those in his latest update.
I generally don't use Comfy for image gen, but this is a great workflow.
Is there a way to use this workflow with img2img or is that not possible with the current model? I would like to use specific characters I've generated elsewhere with this model if possible. Or is that more of a Z-Image and Qwen thing?
Anima doesn't have image editing capabilities like Qwen or Z-Image do. The workflow only allows for basic i2i processing by loading the image as a latent input to guide the denoising process, so using reference images of a character isn't possible with Anima at the moment.
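To illustrate what latent-input i2i means here: the input image is encoded to a latent, and a denoise strength below 1.0 skips the earliest (noisiest) sampling steps, so the image's coarse structure is preserved. The arithmetic below is a simplified sketch of that idea, not the exact ComfyUI implementation:

```python
def i2i_start_step(total_steps: int, denoise: float) -> int:
    """Return the index of the first sampling step actually run.

    With denoise < 1.0 only the last `total_steps * denoise` steps
    are executed; the skipped early steps are where coarse structure
    would otherwise be formed. (Illustrative, not ComfyUI's code.)
    """
    steps_run = round(total_steps * denoise)
    return total_steps - steps_run

# e.g. 30 steps at denoise 0.5 -> sampling starts at step 15
print(i2i_start_step(30, 0.5))  # 15
```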
What's the use-case for Hires PreDetailer vs PostDetailer?