Version 4.0
Welcome to version 4 of my workflow, which further improves the base output and uses a different upscaling method for the refiner. I also want to give a shoutout to Bunline v0.5 and its creator yayaman for his great work finetuning PixArt!
Version 3.0
With a few changes I managed to get much better outputs from the base PixArt model, so you no longer need to do as much refining. Sometimes the unrefined output even looks better. I think this is a huge step up from the previous versions, and I would love to hear some feedback from you! I also want to thank you for all the downloads and ratings so far ❤️
Version 2.0
Updated the workflow with an optional unsampling and refining step as well as an optional upscaling step. I have also changed some default settings for the samplers.
Version 1.0
The first release of my ComfyUI workflow for PixArt with an SDXL refiner. Custom nodes can be found via ComfyUI Manager. If you run into issues, read the description of each node on how to use it! I made this for PixArt Sigma, but I think the Alpha version should work as well.
Follow the install instructions at https://github.com/city96/ComfyUI_ExtraModels carefully to get a proper install of PixArt and T5!
Description
Improved the base output even further with a few changes to the sampling method
Switched from latent to pixel upscaling in the refiner for fewer artifacts in the refined output
Experimented with Bunline v0.5 a lot in this, but the normal PixArt Sigma model also works really well
FAQ
Comments (2)
ERROR V4
Error occurred when executing T5TextEncode:

'str' object has no attribute 'nelement'

File "X:\ComfyUI\execution.py", line 151, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
File "X:\ComfyUI\execution.py", line 81, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "X:\ComfyUI\execution.py", line 74, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "X:\ComfyUI\custom_nodes\ComfyUI_ExtraModels\T5\nodes.py", line 89, in encode
    cond = T5.encode_from_tokens(tokens)
File "X:\ComfyUI\custom_nodes\ComfyUI_ExtraModels\T5\loader.py", line 76, in encode_from_tokens
    self.load_model()
File "X:\ComfyUI\custom_nodes\ComfyUI_ExtraModels\T5\loader.py", line 91, in load_model
    model_management.load_model_gpu(self.patcher)
File "X:\ComfyUI\comfy\model_management.py", line 453, in load_model_gpu
    return load_models_gpu([model])
File "X:\ComfyUI\comfy\model_management.py", line 447, in load_models_gpu
    cur_loaded_model = loaded_model.model_load(lowvram_model_memory)
File "X:\ComfyUI\comfy\model_management.py", line 304, in model_load
    raise e
File "X:\ComfyUI\comfy\model_management.py", line 298, in model_load
    self.real_model = self.model.patch_model_lowvram(device_to=patch_model_to, lowvram_model_memory=lowvram_model_memory)
File "X:\ComfyUI\comfy\model_patcher.py", line 310, in patch_model_lowvram
    mem_counter += comfy.model_management.module_size(m)
File "X:\ComfyUI\comfy\model_management.py", line 269, in module_size
    module_mem += t.nelement() * t.element_size()
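For context on what this traceback means: the crash happens in the byte-counting loop at the bottom of the trace, which sums tensor sizes and assumes every item is a tensor. Below is a minimal pure-Python sketch of that failure mode, assuming a hypothetical FakeTensor stand-in (no torch required); it illustrates the error, it is not the fix.

```python
class FakeTensor:
    """Hypothetical stand-in for a torch tensor: exposes the two
    methods the size-counting loop calls."""
    def __init__(self, numel, elem_size):
        self._numel = numel
        self._elem_size = elem_size

    def nelement(self):
        return self._numel

    def element_size(self):
        return self._elem_size


def module_size(tensors):
    """Mirror of the summing loop from the last frame of the traceback."""
    module_mem = 0
    for t in tensors:
        # Fails with AttributeError if t is a str instead of a tensor
        module_mem += t.nelement() * t.element_size()
    return module_mem


# Works when everything is a tensor: 1000*2 + 500*4 bytes
print(module_size([FakeTensor(1000, 2), FakeTensor(500, 4)]))  # 4000

# Reproduces the reported error when a string sneaks in:
try:
    module_size([FakeTensor(10, 4), "fp16"])
except AttributeError as e:
    print(e)  # 'str' object has no attribute 'nelement'
```

So the error means a string ended up where the T5 loader's memory accounting expected tensors; checking the ComfyUI_ExtraModels install and the T5 model/dtype settings per the GitHub instructions is the place to start.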