A lot of people ask what workflow I use, so here it is. Generating images with it takes a huge amount of time, but IMO the quality is worth it.
May I ask what specific upscale model (.pth) you are using for this workflow? I tried to use 4x_NMKD-Superscale-SP_178000_G.pth but I'm getting an error. Thank you.
Hi. I downloaded that model maybe 2 years ago from a web page that stores all kinds of custom VAEs and upscale models. But what exactly is wrong? I mean, what does the error say?
I just realized that I uploaded the workflow and didn't write where to get all the models. Genius 😑
https://civitai.com/models/141491/4x-nmkd-superscale is working for me :3
Works perfectly. Thank you for sharing, you're a legend!
Sorry, I'm new to this. Is this a workflow for ComfyUI or RunPod?
For ComfyUI. RunPod is a cloud GPU platform.
If you meant ComfyUI on RunPod, then yes, I guess.
Hi! Thanks a lot for your amazing workflow – it's working great.
I have a question: how can I add face tracking or face alignment to the workflow?
I'd like to generate images that include my own face, and I want it to follow facial expressions or head orientation properly.
Is there a recommended way to integrate face tracking, maybe with a specific node or tool (like InsightFace, IPAdapter, or ControlNet)?
Would really appreciate any tips or examples!
Thanks again for your great work!
I have a workflow I got from one of your generated images on the Ultra Realistic LoRA project, where you use the Ultimate SD Upscale node. Is that upscaler better, or is the one in this workflow better?
I'm having an issue while using your workflow. I can't use the ultrarealFineTune_v4 model directly except with fp8, whether I use the diffusion or GGUF version. I'm getting the following error message:
"SamplerCustomAdvanced: HIP out of memory. Tried to allocate 108.00 MiB. GPU 0 has a total capacity of 15.92 GiB of which 0 bytes is free. Of the allocated memory 28.35 GiB is allocated by PyTorch, and 149.79 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_HIP_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management. This error means you ran out of memory on your GPU. TIPS: If the workflow worked before you might have accidentally set the batch_size to a large number."
When I try using fp8, the output is a completely noisy image. What is the solution?
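For reference, the error message's own suggestion can be applied before launching ComfyUI. A minimal sketch, assuming a Linux shell and an AMD (ROCm/HIP) GPU; `--lowvram` is ComfyUI's built-in flag for offloading model weights to save VRAM:

```shell
# Ask PyTorch's ROCm allocator to use expandable segments, which reduces
# fragmentation (the error suggests this when "reserved but unallocated" is large).
export PYTORCH_HIP_ALLOC_CONF=expandable_segments:True

# Then launch ComfyUI as usual from its folder, e.g.:
# python main.py --lowvram
```

If the OOM persists, lowering the batch size or the upscale tile resolution in the workflow reduces peak VRAM use further.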
