Source (gguf): https://huggingface.co/city96/HiDream-I1-Full-gguf/tree/main from city96
Source (fp8): https://huggingface.co/calcuis/hidream-gguf/tree/main from calcuis
The VAE and text encoders can be downloaded from Comfy-Org here: https://huggingface.co/Comfy-Org/HiDream-I1_ComfyUI/tree/main/split_files/text_encoders
This model can be used with the https://github.com/city96/ComfyUI-GGUF node!
💪Train your own model: https://runpod.io?ref=gased9mt
🍺 Join my discord: https://discord.com/invite/pAz4Bt3rqb
Comments (33)
What VRAM is required for Q2_K?
If you have a Hugging Face account, you can set your hardware there and it shows you what you can run with your specs. I hope that feature will also come here to Civitai.
@fjall that would be a cool feature for the Civitai team to add
An NVIDIA GeForce RTX 4070 can run the 4-bit model. That's the best I can do for you.
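For a rough sense of which quant fits which card, here is a back-of-the-envelope estimate (a sketch, not a measurement: it assumes HiDream-I1 is ~17B parameters, llama.cpp-style average bits-per-weight for each quant, and a fixed overhead budget for activations and text encoders):

```python
# Rough VRAM estimate for a quantized model: weights + a flat overhead budget.
# Assumptions: ~17B params for HiDream-I1; bits-per-weight are approximate
# llama.cpp-style averages (Q2_K ~2.6, Q4_K ~4.5, Q8_0 ~8.5).
def est_vram_gb(params_billions, bits_per_weight, overhead_gb=2.0):
    """Weight size in GB plus a rough activation/overhead budget."""
    return params_billions * bits_per_weight / 8 + overhead_gb

for name, bits in [("Q2_K", 2.6), ("Q4_K", 4.5), ("Q8_0", 8.5)]:
    print(f"{name}: ~{est_vram_gb(17, bits):.1f} GB")
```

Real usage varies with resolution, offloading, and whether the text encoders stay in VRAM, so treat these numbers as a starting point only.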
@twistedmind696969334 AI is so expensive to run that even the 30 series is obsolete, that's crazy
@SLACK69 I don't think he is saying a 3090 can't run it (we won't know until there is a way TO run it)
@NorfolkDave https://github.com/hykilpikonna/HiDream-I1-nf4
This one will run. It's a compressed model that works, I'm still in the process of testing the ones on here.
@twistedmind696969334 https://github.com/city96/ComfyUI-GGUF/issues/248#issuecomment-2809864849
Those are my stats running the Q8 full GGUF and the non-GGUF versions of all 4 text encoders.
Yep, working well. Using nearly all my VRAM and it takes a couple of minutes to gen, but pretty decent results. Haven't had a chance to play around much yet though.
@NorfolkDave and most people don't have a 24GB gpu (including me), what about the q2k model?
@SLACK69 You would need to try that; I was merely pointing out that the 30 series is not obsolete
@NorfolkDave I have a 6GB GPU
@SLACK69 That does not render the whole 30 series obsolete though. You do however have my sympathy; my laptop is a 6GB one... it's not really good for AI stuff. I usually remote into my PC.
@NorfolkDave right... I don't have a laptop, so I can do more with AI than if I did have a 6GB mobile GPU. And I was mostly joking about the 30 series being obsolete.
Can this model render people lying down? ;)
Yeah, we tried that with the NF4 model; it works 😁
Thank god! That other version was a nightmare to try and use. Will keep an eye on this over the week.
Shout out to all you trailblazers that figure shit out before everyone else 🙏
Anybody on the Comfy team working on loaders yet?
Yes, it looks like ComfyUI-GGUF got an update and they are uploading the compatible GGUF files right now. I tried using the ones associated with this post but they do not seem to work. https://huggingface.co/city96/HiDream-I1-Dev-gguf
@rusty2930 I reuploaded the .gguf files this morning, they should work just fine now
ComfyUI support has been released for this. There is a bug with the GGUF files, but the fp8 safetensor works. Currently you need to use all 4 CLIP files in this repo to get it to work with KSampler: https://huggingface.co/Comfy-Org/HiDream-I1_ComfyUI/tree/main/split_files/text_encoders
Thanks for the headsup, replacing the files with the city96 quants now!
@RalFinger Cool! city96 also added a f16 gguf. It is pretty awesome!
Hello, do you by any chance have a workflow to spare, or just parameters for the ModelSamplingSD3 and KSampler nodes? I can generate but can't find the best setup!
How do you use 4 CLIPs? What node?
@azeli the updated ComfyUI-GGUF ships a QuadrupleCLIPLoader node; don't forget to update ComfyUI too!
@Piquemine thanks, but what about the loader? I'm getting KeyError: 'conv_in.weight'
@Piquemine I use a basic setup with Empty Latent Image, QuadrupleCLIPLoader, Prompt (Positive/Negative), GGUF/Diffusion Model loader, KSampler, and VAE Decode. You need to set the steps manually: 50 for the Full model and 28 for Dev. If you want to do image2image you can use Load Image -> Resize Image -> VAE Encode instead of Empty Latent. I'm still playing around with the values; for img2img it looks like setting denoise to 0.8 gives the best results.
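The graph described above can be sketched as a ComfyUI "API format" workflow. This is a hedged sketch, not a verified workflow: the node class names (UnetLoaderGGUF from ComfyUI-GGUF, QuadrupleCLIPLoader from recent ComfyUI core) and the model/encoder filenames are assumptions to be checked against your own install; steps and denoise follow the comment above.

```python
import json

# Sketch of the node graph from the comment above, in ComfyUI API format.
# Filenames and node class names are assumptions -- verify in your install.
workflow = {
    "1": {"class_type": "UnetLoaderGGUF",           # from ComfyUI-GGUF
          "inputs": {"unet_name": "hidream-i1-full-Q8_0.gguf"}},
    "2": {"class_type": "QuadrupleCLIPLoader",      # loads all 4 text encoders
          "inputs": {"clip_name1": "clip_l.safetensors",
                     "clip_name2": "clip_g.safetensors",
                     "clip_name3": "t5xxl_fp8.safetensors",
                     "clip_name4": "llama_8b_fp8.safetensors"}},
    "3": {"class_type": "CLIPTextEncode",
          "inputs": {"clip": ["2", 0], "text": "a cat on a windowsill"}},
    "4": {"class_type": "CLIPTextEncode",
          "inputs": {"clip": ["2", 0], "text": ""}},
    "5": {"class_type": "EmptyLatentImage",
          "inputs": {"width": 1024, "height": 1024, "batch_size": 1}},
    "6": {"class_type": "KSampler",
          "inputs": {"model": ["1", 0], "positive": ["3", 0],
                     "negative": ["4", 0], "latent_image": ["5", 0],
                     "steps": 50,                   # 50 for Full, 28 for Dev
                     "cfg": 5.0, "sampler_name": "euler",
                     "scheduler": "normal", "seed": 0,
                     "denoise": 1.0}},              # ~0.8 for img2img
    "7": {"class_type": "VAELoader", "inputs": {"vae_name": "ae.safetensors"}},
    "8": {"class_type": "VAEDecode",
          "inputs": {"samples": ["6", 0], "vae": ["7", 0]}},
    "9": {"class_type": "SaveImage", "inputs": {"images": ["8", 0]}},
}
print(json.dumps(workflow, indent=2))
```

This JSON could be POSTed to a running ComfyUI instance's /prompt endpoint, or used as a map when wiring the same nodes in the UI.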
@rusty2930 What model loader and model are you using? I've tried all loaders and models and get this error on all of them:
KeyError: 'conv_in.weight'
@azeli I am using the GGUF loader from the "gguf" package (ComfyUI-GGUF). Install the latest version and restart ComfyUI. Make sure you are using the city96 GGUF files. Also make sure the image input is the correct size; there is a list of supported sizes, but start with 1024x1024.
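For img2img inputs of arbitrary size, a small helper can snap the resolution to something the model handles well. This is an assumption, not from the thread: it targets roughly one megapixel and rounds each side to a multiple of 64, a commonly recommended granularity for diffusion-model latents.

```python
# Hypothetical helper: snap an arbitrary input size to ~1 megapixel with both
# sides a multiple of 64, keeping the aspect ratio approximately intact.
def snap_resolution(w, h, target_pixels=1024 * 1024, multiple=64):
    """Return (width, height) near target_pixels, rounded to `multiple`."""
    scale = (target_pixels / (w * h)) ** 0.5
    nw = max(multiple, round(w * scale / multiple) * multiple)
    nh = max(multiple, round(h * scale / multiple) * multiple)
    return nw, nh

print(snap_resolution(1920, 1080))  # landscape input -> ~1MP, 64-aligned
```

The result can be fed to a Resize Image node before VAE Encode.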
@rusty2930 What's the difference between the city96 files and the file on Civitai? The file on Civitai doesn't work: "whose dimensions in the model are torch.Size([4, 2560]) and whose dimensions in the checkpoint are torch.Size([4, 1440])".
@kiryanton930 There was a bug in the original GGUF files posted yesterday, like the one you are seeing. If they were from calcuis's Hugging Face repo then they are bad (note: I believe fixed GGUF files were uploaded this morning). city96 reuploaded working GGUF files yesterday.
@kiryanton930 Actually, looking at that error more, I don't think you have loaded all the CLIP files. You need to use the Quad CLIP Loader and there should be 4 different files. I linked all 4 at the top of this post.
