HSZDai_IF-Ollama+PIXART_2
__________________________________
## NODES
==========
You need to have 'ComfyUI_ExtraModels' installed. To do this, use:
> cd <your-comfyui-directory>
> git clone https://github.com/city96/ComfyUI_ExtraModels custom_nodes/ComfyUI_ExtraModels
> pip install -r custom_nodes/ComfyUI_ExtraModels/requirements.txt
Or try the installation methods suggested on the project page:
https://github.com/city96/ComfyUI_ExtraModels
## REQUIRED FILES
====================
You also need to have the following 5 files locally:
From https://github.com/PixArt-alpha/PixArt-sigma?tab=readme-ov-file#-available-models
> PixArt-Sigma-XL-2-1024-MS.pth
From https://huggingface.co/PixArt-alpha/pixart_sigma_sdxlvae_T5_diffusers/tree/main/text_encoder
> config.json
> model-00001-of-00002.safetensors
> model-00002-of-00002.safetensors
From https://huggingface.co/PixArt-alpha/pixart_sigma_sdxlvae_T5_diffusers/tree/main/vae
> diffusion_pytorch_model.safetensors
NOTE: This file must be renamed to 'pixart_sigma_vae.safetensors'
Place them inside the ComfyUI directory as follows:
comfyui/models/
  /checkpoints
    > PixArt-Sigma-XL-2-1024-MS.pth
    > Photon_v1.safetensors (or any other SD1.5 model)
  /t5
    > config.json
    > model-00001-of-00002.safetensors
    > model-00002-of-00002.safetensors
  /vae
    > pixart_sigma_vae.safetensors
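If you would rather script the downloads, here is a minimal Python sketch using huggingface_hub that fetches the five files and copies them into the folders above. The T5 and VAE repo paths come from the links above; the checkpoint repo ID 'PixArt-alpha/PixArt-Sigma' is an assumption based on the project page, so verify it there if that download fails, and adjust MODELS_DIR to your own install.
```python
# Sketch: fetch the required files with huggingface_hub and copy them into
# the ComfyUI folders listed above. Assumes `pip install huggingface_hub`.
import shutil
from pathlib import Path
from huggingface_hub import hf_hub_download

MODELS_DIR = Path("comfyui/models")  # adjust to your ComfyUI install

def fetch(repo_id, filename, target_dir, subfolder=None, rename=None):
    """Download one file into the Hub cache and copy it to target_dir."""
    src = hf_hub_download(repo_id, filename, subfolder=subfolder)
    target_dir.mkdir(parents=True, exist_ok=True)
    dst = target_dir / (rename or filename)
    shutil.copy(src, dst)
    return dst

# PixArt-Sigma checkpoint (repo ID "PixArt-alpha/PixArt-Sigma" is assumed
# from the project page linked above; verify it before relying on it)
fetch("PixArt-alpha/PixArt-Sigma", "PixArt-Sigma-XL-2-1024-MS.pth",
      MODELS_DIR / "checkpoints")

# T5 text encoder files
for name in ("config.json",
             "model-00001-of-00002.safetensors",
             "model-00002-of-00002.safetensors"):
    fetch("PixArt-alpha/pixart_sigma_sdxlvae_T5_diffusers", name,
          MODELS_DIR / "t5", subfolder="text_encoder")

# VAE, copied under the name the workflow expects
fetch("PixArt-alpha/pixart_sigma_sdxlvae_T5_diffusers",
      "diffusion_pytorch_model.safetensors",
      MODELS_DIR / "vae", subfolder="vae",
      rename="pixart_sigma_vae.safetensors")
```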
## COMFYUI OLLAMA
====================
Custom ComfyUI Nodes for interacting with Ollama using the ollama python client.
Integrate the power of LLMs into ComfyUI workflows easily or just experiment with GPT.
To use this properly, you need a running Ollama server that is reachable from the host running ComfyUI.
Installation
1. Install ComfyUI.
2. git clone https://github.com/stavsap/comfyui-ollama into the 'custom_nodes' folder inside your ComfyUI installation, or download the repo as a zip and unzip its contents to custom_nodes/comfyui-ollama.
3. Start/restart ComfyUI.
Or try the installation methods suggested on the project page:
https://github.com/stavsap/comfyui-ollama
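Before loading the workflow, it can help to confirm that the Ollama server is actually reachable from the ComfyUI host. Below is a minimal sketch using the ollama python client; the host URL and the model name 'llama3' are placeholders for whatever you are running locally, not something this workflow requires.
```python
# Sketch: verify an Ollama server is reachable from the ComfyUI host and
# can answer a prompt. Assumes `pip install ollama` and a server on the
# default port; host URL and model name are placeholders.
import ollama

client = ollama.Client(host="http://127.0.0.1:11434")

# Show which models the server has pulled.
print(client.list())

# Ask for a short image prompt, the same kind of text the Ollama nodes
# pass on to the PixArt side of the workflow.
response = client.generate(
    model="llama3",  # replace with a model you have pulled
    prompt="Write a one-sentence prompt for a photo of a lighthouse at dusk.",
)
print(response["response"])
```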
________________________________
This workflow relies heavily on the work of these two projects.
Thanks, guys!