CivArchive
    Photo2Anime | ComfyUI workflow - v1.0
    NSFW

    (Note: in ComfyUI you can disable nodes with Ctrl+M, and you can quickly disable whole groups via the right-click menu. Useful for turning off parts you don't want.)

    This workflow takes a realistic image and generates 5 different anime versions of it. Works with real photos!

    Reverse workflow: Anime2Photo

    Comparison of results

    full size image here

    Installation and dependencies

    1. Install WAS Node Suite custom nodes;

    2. (optional) Install WD 1.4 Tagger custom node;

    3. (optional) Install SD Prompt Reader custom node;

    4. Download and open this workflow;

    5. (kind of important) Check "Resources and substitutions" section below for recommended extras.

    You only need the WD 1.4 Tagger if you want to interrogate the image for a prompt automatically.

    You only need the SD Prompt Reader to get prompts from the metadata of realistic generations.

    With neither, you'd have to write the prompt for the picture manually.

    Resources and substitutions

    The workflow was designed to be tuned with your favorite models. There is a node called "Quality prefix" near every model loader: just put the most suitable universal keywords for the model into the positive (1st string) and negative (2nd string). Note that the KSampler parameters are thoroughly tested and fine-tuned for the selected models.

    Semi-realistic model (intermediate): XenoGASM. How to substitute: with a semi-realistic model of your choice, like CT, EighthThing or Lyriel. Add anime-guiding keywords to the "Quality prefix" node.

    3D Anime model: OnlyAnime. How to substitute: with an anime model in the style of your choice. Examples of 3D anime models: Kotosmix, xRikishi's Unstable Diffusion, AniVerse. Add the universal quality tags recommended by the model author to the "Quality prefix" node.

    2.5D Anime model: Aux. How to substitute: with an anime model in the style of your choice. Examples of 2.5D anime models: Hassaku, AniWave, DivineEleganceMix. Add the universal quality tags recommended by the model author to the "Quality prefix" node.

    2D Anime model: ObsidianV3-Flat. How to substitute: hard to say, really. Put in your favorite model with a unique style, I guess. Add the universal quality tags recommended by the model author to the "Quality prefix" node.

    VAE to encode the realistic image: vae-ft-mse-840000-ema-pruned. Standard, recommended.

    VAE to decode anime: kl-f8-anime2. How to substitute: vae-ft-mse-840000-ema is fine.

    Embeddings: easynegative (recommended), bad-hands-5 (optional).

    How to use

    1. Choose where to get the image and prompts from, and connect the nodes for image, positive and negative into the ">> Route your inputs here <<" group:

      • By default, manually written prompts are used

      • Turn on (Ctrl+M) and connect the "WD 1.4 Tagger" node to generate a prompt from the picture

      • Turn on (Ctrl+M) and connect the "SD Prompt Reader" node to get prompts from metadata

    2. Run it!
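Besides pressing "Queue Prompt" in the UI, ComfyUI can also be driven headlessly over its HTTP API. A minimal sketch, assuming a default local server at 127.0.0.1:8188 and this workflow exported via "Save (API Format)" to a file named photo2anime_api.json (both the port and the filename are placeholders for your setup):

```python
import json
import urllib.request

COMFY_URL = "http://127.0.0.1:8188"  # default ComfyUI address; adjust if needed

def build_queue_payload(workflow: dict, client_id: str = "photo2anime") -> bytes:
    """Wrap an API-format workflow in the JSON body ComfyUI's /prompt endpoint expects."""
    return json.dumps({"prompt": workflow, "client_id": client_id}).encode("utf-8")

def queue_workflow(workflow: dict) -> dict:
    """POST the workflow to the server; the response contains the queued prompt_id."""
    req = urllib.request.Request(
        f"{COMFY_URL}/prompt",
        data=build_queue_payload(workflow),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Usage (with ComfyUI running):
#   with open("photo2anime_api.json") as f:
#       queue_workflow(json.load(f))
```

Note that the API format is not the same JSON you get from the regular "Save" button; export it from the ComfyUI settings menu with dev mode enabled.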

    Check out my other workflows

    Description

    Initial version of 28.01.2024.

    FAQ

    Comments (10)

    jandk
    Jan 29, 2024
    CivitAI

    There is a reverse demand to generate realistic photos from anime/CG images

    Postpos
    Author
    Jan 30, 2024 · 2 reactions

    I already have a workflow for it, but haven't published it yet. It needs some renewal with fresh models, some testing and some serious rewiring. Coming soon, in a few days. You can press the "Follow" button near my nickname on the model card to get a notification.

    Postpos
    Author
    May 20, 2024

    Oh, if anyone missed it: Anime2Photo

    knobvorn
    Mar 2, 2024
    CivitAI
    It's such a cool workflow, thank you!

    If I use an IP-Adapter or ControlNet, which part should I connect it to?

    Postpos
    Author
    Mar 2, 2024

    Positive conditioning. Between "Positive" and the "Bus Node".
    There are 4 models used though, so you'll need 4 "Apply ControlNet" nodes
    (and 1 "Load ControlNet Model", 1 "Load Image", and maybe 1 node for the ControlNet preprocessor).
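The wiring described above can be sketched in ComfyUI's API (JSON) format, here built as a Python dict for one of the four chains. The node IDs ("10", "11", "100", "101") and the ControlNet checkpoint name are placeholders; the real IDs come from your exported workflow:

```python
# Hypothetical fragment of an API-format workflow: one loader shared by all
# four chains, plus one "Apply ControlNet" spliced into one model's positive
# conditioning. Values like ["10", 0] mean "output slot 0 of node 10".
controlnet_nodes = {
    "100": {  # Load ControlNet Model (shared)
        "class_type": "ControlNetLoader",
        "inputs": {"control_net_name": "control_v11p_sd15_openpose.pth"},
    },
    "101": {  # Apply ControlNet between "Positive" and the "Bus Node"
        "class_type": "ControlNetApply",
        "inputs": {
            "conditioning": ["10", 0],  # output of the "Positive" node
            "control_net": ["100", 0],
            "image": ["11", 0],         # output of a "Load Image" node
            "strength": 1.0,
        },
    },
}
```

Each of the other three chains would get its own "ControlNetApply" node with the same "control_net" and "image" inputs but its own conditioning source.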

    knobvorn
    Mar 4, 2024

    @Postpos Can't it be used as a multi-ControlNet by connecting to the 'Manual Input' load image?

    Sorry, but could you please show me an example image?

    Postpos
    Author
    Mar 4, 2024

    @knobvorn Typical ControlNet setup: https://ibb.co/16XCcNG

    SyamsQ
    Jun 28, 2024
    CivitAI

    How can I fix this? I already put "vae-ft-mse-840000-ema-pruned.safetensors" in \ComfyUI\models\vae, but it won't run.

    Prompt outputs failed validation VAEEncode: - Required input is missing: vae

    Postpos
    Author
    Jun 30, 2024

    1. Click the VAE name in the VAE node;

    2. Select any VAE you have.

    ssr765
    Oct 19, 2024
    CivitAI

    I tried it with a very close-up frontal shot of a girl and the results felt like a nightmare.

    Workflows
    SD 1.5

    Details

    Downloads
    3,023
    Platform
    CivitAI
    Platform Status
    Available
    Created
    1/28/2024
    Updated
    5/12/2026
    Deleted
    -

    Files

    photo2animeComfyui_v10.zip

    Mirrors

    CivitAI (1 mirror)