Demonstrates how to do a face swap and face restore in ComfyUI
Did you find this helpful? Feel free to leave a donation: https://www.buymeacoffee.com/bjew
Comments (11)
Hi @a1lazydog! I really like your workflow. I want to know if it's possible to add a LoRA or an ADetailer AFTER the ReActor node to improve the skin details. I really have no idea how to do this. Thanks!
Check out the example flow in https://civitai.com/models/161184/three-step-face-swap-workflow-in-comfyui. In particular, look at the VAE Encode node. It turns the image output back into a latent, which you can then use to further apply a LoRA or ADetailer. That should give you an idea of how to continue on after the ReActor node.
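To make the VAE Encode step concrete, here is a minimal sketch of that single node in ComfyUI's API ("prompt") JSON format. The node IDs ("8" for the ReActor node, "4" for the checkpoint loader) are assumptions for illustration; `VAEEncode` itself is a standard ComfyUI node that takes an image and a VAE and outputs a latent.

```python
# One node from a ComfyUI API-format prompt dict. A link is written as
# [source_node_id, output_slot_index]. Node IDs here are placeholders.
vae_encode = {
    "class_type": "VAEEncode",
    "inputs": {
        "pixels": ["8", 0],  # IMAGE output of the ReActor node (assumed id "8")
        "vae": ["4", 2],     # VAE output of the checkpoint loader (assumed id "4";
                             # CheckpointLoaderSimple outputs MODEL=0, CLIP=1, VAE=2)
    },
}
```

The LATENT output of this node is what you feed into a downstream KSampler instead of an Empty Latent Image.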
Great, I'm checking it now. I've found the latent output on VAE Encode. I guess I'm supposed to use the LoRA node before the KSampler, right? Another question: the LoRA node only accepts model and clip inputs. Is there a way to convert a latent into those types of inputs? Thank you!
@Primaveri Yes, use the LoRA node before the KSampler.
From the checkpoint node, feed both the model and clip outputs into the LoRA node.
From the LoRA node's outputs, run the clip output through both a positive and a negative text conditioning node, then plug those into your KSampler. Use your LoRA's trigger words in the positive prompt.
The latent plugs straight into the KSampler's latent input; it never goes through the LoRA node.
The KSampler then uses the LoRA-patched model to generate the new latent image.
@a1lazydog Thank you so much! I just added three LoRAs: perfect eyes, skin slider, and add detail. It really gave life to those generations. Thank you! Please consider doing a new workflow that adds detail to the face, it's really a game changer.
@Primaveri Share your workflow, come on!
@a1lazydog Yeah, this legitimately does not work on my setup: no errors, just no face swap.
@Primaveri Hi, could you share your workflow, please?
Thank you for uploading this workflow! I'm wondering, do you think that this workflow still provides benefits over ReActor's built-in face restoration using GFPGAN?