Note: the latest version of the workflow has been greatly simplified to use the ComfyUI sub-workflow system. As a consequence, it has fewer features than previous versions, but is more stable. I will try to re-include these features over time.
Introduction
Here's my Scene Composer workflow for ComfyUI.
The main goal is to create short five-panel stories in a single queue. To do that, it randomly chooses the parts of the prompt used for generation, such as:
Character (e.g. hair, eyes, attitude)
Clothes & Underwear
Sexual position and action
To keep consistency, it also preserves certain parts of the prompt and injects them across all scenes (like the environment, the main character and their clothes). You can read my Overview & Usecases article for more explanations!
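The idea above can be sketched in a few lines of Python. This is a hypothetical illustration of the approach, not the actual workflow code: persistent parts (character, environment) are rolled once and injected into every panel, while per-scene parts are re-rolled for each panel. All tag values here are made up for the example.

```python
import random

# Hypothetical prompt parts (illustrative only, not the real tag lists).
PERSISTENT_PARTS = {
    "character": ["long blue hair", "short green hair"],
    "environment": ["beach at sunset", "city street at night"],
}
SCENE_PARTS = {
    "action": ["waving", "reading a book", "running"],
}

def compose_story(num_scenes=5, seed=0):
    rng = random.Random(seed)
    # Persistent parts are chosen once, so they stay identical in all panels
    persistent = {k: rng.choice(v) for k, v in PERSISTENT_PARTS.items()}
    prompts = []
    for _ in range(num_scenes):
        # Per-scene parts are re-rolled for every panel
        scene = {k: rng.choice(v) for k, v in SCENE_PARTS.items()}
        prompts.append(", ".join([*persistent.values(), *scene.values()]))
    return prompts

for p in compose_story():
    print(p)
```

Because everything is driven by one seeded RNG, re-running with the same seed reproduces the whole five-panel story.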
If you're looking for a simpler workflow, check my Main ComfyUI Workflow. I also suggest having a look at my Prompt Notebook to better understand how I structure tags.
If you have any comment or request, please feel free to share!
Features
Note: features in gray are in progress and need to migrate to v1.x
Random procedural generation of prompts
Main character (e.g. body, hair, eyes, tattoos, piercings, horns, tail, …)
Attire (e.g. clothes, swimsuit, underwear, uniform, accessories, …)
Environment (e.g. place, daytime, nighttime, weather,…)
Action in scene (e.g. starting scene, sexual encounter, ending scene,…)
Predefined personas (demon, goblin, furry, slime, etc)
One place to control all scene parameters
Seed, steps, CFG, image size, etc.
HighRes-Fix (2nd Pass)
LoRA stacker
Keep control over scenes
Re-generate one or many elements by changing their seed
Re-generate one or many scenes by changing their seed
Overwrite and compose the final prompt with variables
Scene consistency
Tags update dynamically according to the scene (e.g. "wet" is added if there's rain)
Attire state persists across scenes (if the character loses clothes, they stay lost)
If clothes are torn, they stay torn
Bondage ropes stay on the character, along with clothes
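A minimal sketch of how this kind of state persistence could work (this is an assumed model, not the actual comfyui-scene-composer code): a state dict is carried from panel to panel, and attire changes are one-way flags, so once clothes are torn or removed, later scenes keep that tag.

```python
# Hypothetical scene-state model: events set one-way flags that are
# carried forward, so lost or torn clothes stay that way.
def apply_scene(state, event):
    state = dict(state)  # copy so earlier scenes are unchanged
    if event == "rain":
        state["wet"] = True           # dynamic tag from the environment
    elif event == "clothes torn":
        state["torn clothes"] = True  # persists in all later scenes
    elif event == "clothes removed":
        state["nude"] = True
    return state

def story_tags(events):
    state = {}
    per_scene = []
    for event in events:
        state = apply_scene(state, event)
        per_scene.append(sorted(t for t, on in state.items() if on))
    return per_scene

print(story_tags(["rain", "clothes torn", "clothes removed"]))
```

The key design point is that state only accumulates: nothing resets between panels unless a scene explicitly clears it.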
Output
Upscaling, pre-processing
Scene images merged into one
CivitAI metadata & workflow embedded
Setup
Simply import the workflow.json file attached to this article into ComfyUI. You can also drag-and-drop the workflow image directly into the interface.
I personally don't have a very powerful computer. If you're in the same situation, check the environment section of my Main ComfyUI Workflow article, where I explain how I rent and set up remote machines.
Models
In theory, the workflow can work with any model that uses Danbooru-like tags. I personally use Illustrious/NoobAI/Pony-based models mixed with some anime-oriented LoRAs. Have a look at the metadata of my latest images if you're curious!
Custom nodes
To build this workflow, I developed the comfyui-scene-composer extension. You can also use it standalone in your own workflows. If you have trouble setting things up, check the repository.
Description
Added features:
LoRAs per scene
Upscaling
Scene images merged into one
CivitAI metadata in images
Workflow embedded in images
FAQ
Comments (9)
This is truly a rigorous and clear masterpiece! I have studied it for a while but still don't understand some details of this workflow. May I ask: the first image is always black and white, but I'd like them all to be in color; how should I adjust it?
(The first picture is consistently black and white while the other pictures are consistently in color, so it feels like this is due to some setting rather than a coincidence?)
Hey! Thank you for your kind comment. In the "<styles>" variable group, there is a 50% chance that the "doujinshi" variable applies, which contains the "monochrome" tag. It doesn't take effect every time (especially if there are colors in the prompt). You can simply remove it and it should work; let me know if it doesn't!
I think it's a sign that I should remove this part. Monochrome should maybe apply in the final post-processing step. I'm even considering adding a "choose style" dropdown, where the user picks between manga, anime, realistic, etc.
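The variable-group behavior described above can be illustrated with a tiny sketch (the names "doujinshi" and "monochrome" come from the discussion; the exact mechanics inside the workflow are assumed):

```python
import random

# Illustrative only: a variable that applies with 50% probability
# and, when it does, adds its tags to the prompt.
def roll_styles(rng):
    tags = []
    if rng.random() < 0.5:  # 50% chance the "doujinshi" variable applies
        tags += ["doujinshi", "monochrome"]
    return tags

rng = random.Random(42)
print([roll_styles(rng) for _ in range(4)])
```

Removing the variable (or its "monochrome" tag) makes every roll come back empty, which is why the fix above works.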
@taches My problem has been perfectly solved, thank you for the prompt response! Just finished playing a round, and now I have a reason to continue exploring lol~
Being able to choose from more styles is undoubtedly good news. I really look forward to the day when this beautiful work is even more perfected!
Today is my first day trying AI image generation; I was focused on generating high-quality text stories and used to think that learning image generation would take too much effort, but today I had a sudden inspiration to give it a try. I am very fortunate to have quickly learned excellent design ideas through your sharing! (I read a lot of posts today, and this is truly an outstanding piece of work.)
Additionally, as a story enthusiast, I resonate with your concept and look forward to experiencing life in another world through open-ended plot comics someday in the future!
Thank you for sharing - just noticed the nocflatstyle lora link doesn't appear to be correct. Will stop back once I've tried this out...
Personally, I try to find alternatives if a node has no stars or is no longer updated. Sometimes I still download them to try in a secure environment.
Thanks for reporting the broken link, it should be fixed now!
@taches no longer an issue - the manager within ComfyUI was showing no stars but the git repo actually had good social proof, so I removed my comment about that concern - thanks for sharing your hard work!
This is really awesome. How hard would it be to adjust this to, for example, generate ~5 <action-starter> images, ~10 <action-pre> images, ~20 <action-main> images and ~5 <action-finisher> images? The numbers aren't exact, just an example.
Hey! That's an interesting case. Duplicating everything is definitely not an option. The easiest way I see would be to let you specify a batch count at the beginning of each scene. I think I'm gonna try to work on it soon. Thank you for this idea!
@taches Awesome, I'd love to see that implemented. Thank you. :D