Happy St. Patrick’s Day! 🍀
Please check out my other World Morphs: https://civarchive.com/user/CitronLegacy/models?tag=world%20morph
🍀May the Luck of the Irish be with you every time you visit this page!🍀
Comments (3)
I used all three together in Comfy. I tried it because when I started out I didn't know SD and SDXL were different, and I used to grab whatever and use it, so I figured it worked before, let's try again : ) I set the SD one to a max of 0.19, just a little sprinkle of SD in an SDXL environment.
This was a fun group of LoRAs to use : )
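The "sprinkle" trick above comes down to how LoRA strength works: the loader reconstructs a low-rank delta from the LoRA's two matrices and mixes it into the base weight, scaled by the strength slider. A toy numpy sketch (not ComfyUI's actual loader code; shapes and names here are made up for illustration):

```python
import numpy as np

def apply_lora(base_weight, lora_down, lora_up, strength):
    """Add a LoRA delta to a base weight, scaled by strength.

    The low-rank product up @ down reconstructs a full-size delta;
    a strength of 0.19 mixes in only 19% of that delta -- the
    "little sprinkle" described above.
    """
    delta = lora_up @ lora_down  # (out, rank) @ (rank, in) -> (out, in)
    return base_weight + strength * delta

# Toy shapes: an 8x8 weight and a rank-2 LoRA
# (real layers are e.g. 768x768 with rank 16 or higher)
rng = np.random.default_rng(0)
base = rng.normal(size=(8, 8))
down = rng.normal(size=(2, 8))
up = rng.normal(size=(8, 2))

sprinkle = apply_lora(base, down, up, strength=0.19)
full = apply_lora(base, down, up, strength=1.0)
```

Because the delta enters linearly, a 0.19 strength really is exactly 19% of the full-strength change to the weights; what happens to the generated image, of course, is not linear at all.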
At least if you look at the CLIP model loading code in Comfy, you will see that the SDXL CLIP starts by importing SD1_CLIP. I am not 100% sure about the architectures, but even FLUX is using the SD3 CLIP, as this line from ../comfy/supported_models.py seems to indicate:
t5_detect = comfy.text_encoders.sd3_clip.t5_xxl_detect(state_dict, "{}t5xxl.transformer.".format(pref))
IIRC (50:50) FLUX and SD3 do something unusual with inverting the endianness of tensors (meaning most-significant or least-significant byte first). I think I saw somewhere that Comfy detects this and inverts accordingly, so all tensors may be compatible, but that memory is rather vague as to where I saw the code. If that is true, I think the dimensions will still be hit or miss, because even if the smaller dimension of an earlier LoRA is compatible with the larger one, I don't know how that will affect the neuron weights for the additional dimensions.

Assuming my edge of understanding is correct so far, the real question is probably whether each generation of Stable Diffusion builds additional dimensions while retaining the old dimensions of the earlier, smaller model, or whether the whole thing is new. A lot of what I see in the code, like the calls into CLIP, indicates that the generational differences are mostly in the complexity of how the model is processed with CLIP.

One of my biggest curiosities has been that SD1.x knows about homelessness and poverty while SD2, XL, and SD3 do not. I think there is a deeper reason why SD1.x was so bad about errors, especially in the past: alignment is based upon ethics, and the lack of ethics in the acceptance of these cultural norms negates the foundations of alignment, especially when the worst offenders of this lack of ethics are part of the same government and culture as most AI researchers. The output of an SD1.x base model changes significantly if plain text is used to address these conflicts... in my amateur experience.
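One concrete place the generational differences show up is tensor width: SD1.x's CLIP-L text encoder is 768-wide, SDXL adds a 1280-wide CLIP-G alongside it, and SD3/FLUX add a 4096-wide T5-XXL. So you can often guess which family a LoRA targets just from its shapes, without loading the weights. A simplified sketch (the widths are real, but the key names below are shortened stand-ins for the much longer keys in actual LoRA files):

```python
# Map text-encoder hidden widths to the model family they imply.
CLIP_WIDTHS = {
    768: "SD1.x (CLIP-L)",
    1280: "SDXL (CLIP-G)",
    4096: "T5-XXL (SD3/FLUX)",
}

def guess_text_encoder(shapes):
    """Guess which text encoder(s) a LoRA targets from tensor shapes.

    shapes: dict of tensor name -> shape tuple, as you could get from
    safetensors metadata without reading the weights themselves.
    """
    found = set()
    for name, shape in shapes.items():
        # Text-encoder LoRA matrices are 2-D; the last dim is the
        # encoder's hidden width.
        if "text" in name and len(shape) == 2 and shape[-1] in CLIP_WIDTHS:
            found.add(CLIP_WIDTHS[shape[-1]])
    return sorted(found)

# Simplified example shapes for a rank-16 LoRA
sd15_lora = {"text_model.q_proj.lora_down": (16, 768)}
sdxl_lora = {
    "text_model_1.q_proj.lora_down": (16, 768),
    "text_model_2.q_proj.lora_down": (16, 1280),
}

print(guess_text_encoder(sd15_lora))  # only the 768-wide CLIP-L
print(guess_text_encoder(sdxl_lora))  # both widths -> SDXL-style pair
```

Note that SDXL reports both widths because it ships the 768-wide CLIP-L too, which is also why a mismatched LoRA can partially apply: the 768-wide tensors line up while the 1280-wide ones have no counterpart.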
@DudeWTF So I just woke up, hahahaha, so cool to jump into this. How are things made by corporations or businesses with money as the focus? I am an artist forgotten centuries ago. What you gave is the usual, and it's why there is a possibility for even more magic in the future: people who are FORCED to survive, and consuming energy to move, will take the easiest path. Ironically, most entertainment that IS A BOX HIT is the opposite path, created by those forced to walk another. The quality of work by those forced to survive (working on deadlines) cuts corners, may lack passion, may hold back, may come from someone looking for a new job.
Point is: always go 360 degrees around something, turn 180 degrees in both directions, and keep doing this until you've seen it all, read it all, and made something new : )