Full checkpoint with improved TE: do not load an additional CLIP/TE.
SD3.5 Large with FLAN improved TE
The full BF16 model runs at an impressive speed even on my 8GB card. It was built with a triple-CLIP setup: the 42GB Google FLAN T5xxl 12B-parameter model (converted to BF16), CLIP-G, and an improved CLIP-L.
The full FP16 model runs at half the speed of the BF16 version on my card, but may have better accuracy.
Do not apply the negative prompt above a timestep of 0.2. If this line is unclear, load any of the sample images as a workflow to see the node setup. (These are the same instructions as for base SD 3.5.)
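To illustrate the timestep rule: with the conditioning range limited to 0.2, the negative prompt only influences roughly the first 20% of sampling steps (in ComfyUI this is done with ConditioningSetTimestepRange nodes). A minimal sketch, assuming steps map linearly onto the normalized 0-1 schedule (the function name is mine, not part of any workflow):

```python
def steps_with_negative(total_steps, cutoff=0.2):
    """Return the 0-based step indices whose normalized position falls
    within the first `cutoff` fraction of the schedule, i.e. the only
    steps where the negative prompt should still be applied."""
    return [i for i in range(total_steps) if i / total_steps < cutoff]

# For a 28-step run, only the first six steps see the negative prompt.
print(steps_with_negative(28))  # → [0, 1, 2, 3, 4, 5]
```

For the remaining steps, the usual SD 3.5 workflow swaps in a zeroed-out conditioning instead of the negative prompt.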
The FP16 Hybrid model and the Large FP8 model use the standard T5xxl. I consider the full BF16/FP16 models to surpass them in every way, but I am leaving them up for now.
If you have an 8GB card, I suggest the Medium model with FLAN; it is still several times faster than the BF16 FLAN model on my RTX 3050 (1.5 seconds per iteration vs. 5-6 seconds per iteration for the 26GB model).
Works in ComfyUI without any modification: just load the checkpoint and go.
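Since the text encoders are baked in, installation is just dropping the single file into ComfyUI's checkpoint folder; no separate CLIP/T5 files are needed. A sketch of the assumed layout (the checkpoint filename below is a placeholder, not the actual release name):

```shell
# ComfyUI scans models/checkpoints for single-file checkpoints.
CKPT_DIR="ComfyUI/models/checkpoints"
mkdir -p "$CKPT_DIR"
# Placeholder filename -- use whichever variant you downloaded.
echo "Place sd3.5_large_flan.safetensors in $CKPT_DIR, then select it in the Load Checkpoint node."
```

After a restart (or a refresh of the node's file list), the model appears in the Load Checkpoint dropdown.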
Per the Apache 2.0 license, FLAN is attributed to Google.
My benchmark times (seconds per iteration) on an old RTX 3050 8GB:
SD 3.5 Large (Triple CLIP FP8), 13.5GB = 6-8 seconds per iteration
22GB Hybrid = 6-8 seconds per iteration
26GB (BF16 full) = 5-6 seconds per iteration (BF16 trades one bit of precision for a wider exponent range; it seems faster, and I think the trade is worth it)
26GB (FP16 full) = 8-16 seconds per iteration (FP16 iteration times seem erratic compared to BF16)
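Since the figures above are seconds per iteration, total render time scales linearly with step count. A quick helper (my own, purely illustrative) for turning a benchmark figure into an estimated wall time:

```python
def estimate_render_time(seconds_per_it, steps):
    """Estimated sampling time in seconds for `steps` iterations,
    given a benchmarked seconds-per-iteration figure."""
    return seconds_per_it * steps

# A 28-step render on the 26GB BF16 model at 5-6 s/it:
low = estimate_render_time(5, 28)   # 140 s
high = estimate_render_time(6, 28)  # 168 s
print(f"{low}-{high} seconds")  # → 140-168 seconds
```

So on this card, the full BF16 model lands at roughly 2.5-3 minutes for a typical 28-step image.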
Comments (4)

- The description is still confusing. The BF16 version is T5xxl, and CLIP-L is changed, right?
- Where did you get the BF16 T5 FLAN model from? Would be interested in testing stuff out with it.
- Wait, so FLAN embeddings were compatible with old T5 embeddings all along?
- This model with the baked FLAN CLIP (not Hybrid) is generating lots of extra limbs, or in some cases missing limbs. When I run the model without the standard CLIPs, I don't have that problem, so it appears to be the FLAN CLIP.