Note: the latest version of the workflow has been greatly simplified to use the ComfyUI sub-workflow system. As a result, it has fewer features than the previous ones but is more stable. I will try to re-include these features over time.
Introduction
Here's my Scene Composer workflow for ComfyUI.
The main goal is to create short five-panel stories in just one queue. To do so, it randomly chooses the parts of the prompt that are used for generation, like:
Character (e.g. hair, eyes, attitude)
Clothes & Underwear
Sexual position and action
To keep consistency, it also injects certain parts of the prompt across all scenes (like the environment, the main character and their clothes). You can read my Overview & Usecases article for more explanations!
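The idea can be sketched in plain Python (a minimal illustration, not the actual extension code; the pool names and tag lists here are made up):

```python
import random

# Hypothetical tag pools; the real workflow draws these from the
# comfyui-scene-composer node's configuration files.
POOLS = {
    "character": ["long hair, blue eyes", "short hair, red eyes"],
    "attire": ["school uniform", "swimsuit", "casual clothes"],
    "action": ["standing, smiling", "sitting, waving"],
}

def build_story(seed, panels=5):
    rng = random.Random(seed)  # fixed seed => reproducible story
    # Parts picked once and injected into every scene for consistency
    persistent = {
        "character": rng.choice(POOLS["character"]),
        "attire": rng.choice(POOLS["attire"]),
        "environment": "beach, sunset",
    }
    prompts = []
    for _ in range(panels):
        # Only the per-scene action is re-rolled for each panel
        action = rng.choice(POOLS["action"])
        prompts.append(", ".join([*persistent.values(), action]))
    return prompts

for prompt in build_story(seed=42):
    print(prompt)
```

Because a single seeded RNG drives every choice, changing the seed re-rolls the whole story, while keeping it fixed reproduces the same five prompts.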
If you're looking for a simpler workflow, check my Main ComfyUI Workflow. I also suggest having a look at my Prompt Notebook to better understand how I structure tags.
If you have any comment or request, please feel free to share!
Features
Note: features in gray are in progress and need to migrate to v1.x
Random procedural generation of prompts
Main character (e.g. body, hair, eyes, tattoos, piercings, horns, tail, …)
Attire (e.g. clothes, swimsuit, underwear, uniform, accessories, …)
Environment (e.g. place, daytime, nighttime, weather, …)
Action in scene (e.g. starting scene, sexual encounter, ending scene, …)
Predefined personas (demon, goblin, furry, slime, etc.)
One place to control all scene parameters
Seed, steps, CFG, image size, etc.
HighRes-Fix (2nd Pass)
LoRA stacker
Keep control over scenes
Re-generate one or many elements by changing their seed
Re-generate one or many scenes by changing their seed
Overwrite and compose the final prompt with variables
Scenes consistency
Tags update dynamically according to the scene (e.g. "wet" is added if there's rain)
Attire state persists across scenes (if the character loses clothes, they stay lost)
If clothes are torn, they stay torn
Bondage ropes stay on the character, along with clothes
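These consistency rules amount to carrying a small state dictionary from one scene to the next. A hypothetical sketch (not the extension's actual code; the flag and tag names are invented):

```python
def apply_scene(state, scene_tags):
    """Update persistent attire state, then emit the tags for this scene.
    `state` carries flags that must survive across scenes."""
    if "rain" in scene_tags:
        scene_tags.append("wet")          # dynamic tag from the environment
    if "clothes removed" in scene_tags:
        state["dressed"] = False          # lost clothes stay lost
    if "torn clothes" in scene_tags:
        state["torn"] = True              # torn clothes stay torn
    if not state.get("dressed", True):
        scene_tags.append("nude")
    elif state.get("torn"):
        scene_tags.append("torn clothes")
    return scene_tags

state = {}
s1 = apply_scene(state, ["beach", "rain"])
s2 = apply_scene(state, ["bedroom", "clothes removed"])
s3 = apply_scene(state, ["bedroom"])  # clothes lost in s2 stay lost here
```

The same `state` object is threaded through every scene, which is why a change in one panel (torn or removed clothes, ropes) keeps showing up in the following ones.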
Output
Upscaling, pre-processing
Scene images merged into one
CivitAI metadata & workflow embedded
Setup
Simply import the workflow.json file attached to this article into ComfyUI. You can also drag-and-drop the workflow image directly into the interface.
I personally don't have a very powerful computer. If you're in the same situation, check the environment section of my Main ComfyUI Workflow article, where I explain how I rent and set up remote machines.
Models
In theory, the workflow can work with any model that uses Danbooru-like tags. I personally use Illustrious/NoobAI/Pony-based models mixed with some anime-oriented LoRAs. Have a look at the metadata of my latest images if you're curious!
Custom nodes
To achieve this workflow, I developed the comfyui-scene-composer extension. You can use it standalone in your own workflows. If you have trouble setting things up, check the repository.
Description
In this new version, the workflow has been greatly simplified using the new sub-graph system. This allowed removing a large portion of custom nodes that were clunky or broken, making the workflow more stable. There are also more clothing options for your character.
Some bugs are already known and need to be worked on, including:
Images don't generate in order (but they do in Output group)
No CivitAI metadata auto-parsing
Output/upscale group has been simplified
FAQ
Comments (29)
The goat is back. Awesome work~
Thank you so much!
GOAT! LOVE YOUR WORK
Thank you, glad you like it!
How the heck do you get the backgrounds to be that consistent? Any explanation for that?
I don't focus on the backgrounds. Normally, it's just 2-3 tags repeated for all scenes
@taches I see, so there's nothing specific about them, hmm, maybe the lower denoise (and not 1.0) or the same fixed seeds have any connection with this. I think I'll install comfyui just to try this workflow to see for myself if the backgrounds really are that consistent.
Keep up the good job anyways!
An ADetailer node for face/eyes would work wonders in this workflow, if you know of ADetailer from Automatic1111. I'm sure something like that has to exist for ComfyUI too, but I'm too noob in Comfy :(
@TekeshiX curious to hear feedback on your experiments! I experimented a bit with ADetailer in the past. Maybe in a future version!
@TekeshiX consistency is derived from fixing the prompt seed and the sampler seed.
Yes, FaceDetailer also exists for ComfyUI.
In fact, there are several options for adding detail to the face.
Bro, I have a problem with sampler nodes. I don't know what's wrong.
MathExpression|pysssss
Complex types (LATENT/IMAGE) need to reference their width/height, e.g. a.width
Me too. Have you solved this problem now? If so, please let me know as well, thanks.
Hi. What version are you using? v1.2 should not use this node.
@taches i use v1.2 bro
@dvd3vgg115 Alright, I'll check when I have time. In the meantime, try to find the problematic node and disable/replace it with another similar one. Check especially the size of your image (width/height).
@taches Looking forward to your reply, thanks.
I get the same problem, did you fix that?
@zlikk602596259999 no
@dvd3vgg115 @3461280230948 @zlikk602596259999
In Scene 1, behind the sampler/variables seed, in the seed merge, you will find pipe and merge seeds.
You can delete merge seeds, then link seed to pipe (random seed).
Same on Scenes 2/3.
Good luck!
@wopen996 Thank you very much for the debugging. This merge seed component helps with random diversity, so removing it shouldn't break anything too much. I'll see how to make it more stable in a future version.
@wopen996 I will try it, thanks bro
Bro, your workflow is really useful. Thank you very much. It would be even better if it could edit pictures.
Thank you for your comment! Since it generates multiple images at once, how do you picture the image editing?
Hi bro, did you fix: MathExpression|pysssss
Complex types (LATENT/IMAGE) need to reference their width/height, e.g. a.width
Thanks for this updated WF.
It looks even better than the previous one.
I will try it soon.
When I use v1.2, it shows me an error: MathExpression|pysssss
Complex types (LATENT/IMAGE) need to reference their width/height, e.g. a.width. I downloaded the same scene-composer version, cg-use-everywhere version, and comfyui-custom-scripts version. Still getting the error.
In Scene 1, behind the sampler/variables seed, in the seed merge, you will find pipe and merge seeds.
You can delete merge seeds, then link seed to pipe (random seed).
Same on Scenes 2/3.
Good luck!
The workflow is excellent. If I may offer a suggestion, it would be useful to introduce a node like 'comfyui-standard-trigger-words'.
This would simplify tag insertion and add more dynamism to the workflow, which can feel a bit static at times.
Inspired by your work, I attempted to create a local prompt generator leveraging your Python logic. However, it proved too complex for my current programming skills.
While I managed to achieve a decent result in just a few days with Gemini's help, I noticed that the AI tends to hallucinate once complex variables and constants are introduced.