Example prompt: "A woman's face. Whitish translucent cum is splattered across her face and chest."
Flexible prompting: see example images.
Note: "whitish, translucent cum is splattered..." keeps character likeness. If you don't care about likeness, you can try prompts like "her face is covered in milk" or "milky cum", which may give you thicker goo.
Important descriptors: whitish, translucent, splattered
Other descriptors: gooey, stringy, thick, wet, milky
Settings:
LoRA strength: 1.0
Steps: 20 - 30
Goals:
Should not impact: ethnicity, age, style, image quality, camera angle, etc.
Small file size (quicker renders)
Decent goop
History:
V1 (first attempt): complete failure, deleted
V2: unrealistic
V3: better goop, but poor quality and too much of the training subject's likeness
V4 (after 17 attempts): better goop, still too much effect on age/ethnicity/likeness
V5 (after 90 attempts): least influence over the subject, but the goop can be a little crazy
V6 (hundreds of attempts): slight influence over the subject, but better goop than V5
⚠️ beta versions will be removed ⚠️
Comments (8)
I use every new version of your excellent COF that comes out, and somewhere around the middle of season 4, episode whatever, the faces started being 'standardized'. I don't understand what happened, but I just noticed that the goop gets better while the faces it lands on seem more 'ordinary and samey' than before. I may be wrong; all this math is gibberish to me and my small brain.
I use Schnell, as I am not very clever and never do as I am told, and while 'white goop' has always worked in Schnell, I enjoy the variety your LoRA offers.
I only cum here for the spunked-on women... I am a man of but simple needs.
Keep up the good work... if that's what it indeed is.
Thanks, yes, I know what you mean. Training is hard, and I'm trying to get good goop without overtraining other aspects (age, hair, body, etc.).
@mawedesign I don't run FLUX within a GUI, only on raw CPU (no CUDA!), so I often get raw error codes, and I got this from my Python script on the console:
Adding LoRA COF-f1-v5-beta17_alldim6_000001300.safetensors referred to as co with weight 0.9 to pipeline...
Loading adapter weights from state_dict led to missing keys in the model: transformer_blocks.0.norm1_context.linear.lora_A.ns.weight, transformer_blocks.0.norm1_context.linear.lora_B.ns.weight, transformer_blocks.0.ff_context.net.0.proj.lora_A.ns.weight, transformer_blocks.0.ff_context.net.0.proj.lora_B.ns.weight, transformer_blocks.0.ff_context.net.2.lora_A.ns.weight, transformer_blocks.0.ff_context.net.2.lora_B.ns.weight,
...etc (big block of these)
transformer_blocks.5.ff_context.net.2.lora_A.ns.weight,
...etc (big block of these, up to)
transformer_blocks.18.norm1_context.linear.lora_A.ns.weight, transformer_blocks.18.norm1_context.linear.lora_B.ns.weight, transformer_blocks.18.ff_context.net.0.proj.lora_A.ns.weight, transformer_blocks.18.ff_context.net.0.proj.lora_B.ns.weight, transformer_blocks.18.ff_context.net.2.lora_A.ns.weight, transformer_blocks.18.ff_context.net.2.lora_B.ns.weight.
and it's possible it was doing this before with an earlier version. Sorry to be the bearer of bad news. I don't know what's causing it, but a quick search suggests a 'corrupted dictionary' or something. Hope this helps. I knew something had changed, as it 'felt' slower. It still works but throws these errors.
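For anyone digging into a dump like the one above, the missing keys can be summarized mechanically. This is a hypothetical helper (the function `missing_adapter_blocks` is my own, not part of diffusers) that parses key names like `transformer_blocks.18.ff_context.net.2.lora_A.ns.weight` and reports which transformer blocks the loader found no adapter weights for:

```python
import re

def missing_adapter_blocks(keys):
    """Group missing LoRA key names by transformer block index.

    Keys look like:
      transformer_blocks.<N>.<module path>.lora_A.<adapter>.weight
    Returns a sorted list of the block indices that appear.
    """
    pattern = re.compile(r"^transformer_blocks\.(\d+)\.")
    blocks = set()
    for key in keys:
        match = pattern.match(key)
        if match:
            blocks.add(int(match.group(1)))
    return sorted(blocks)

# Keys taken from the error message above:
missing = missing_adapter_blocks([
    "transformer_blocks.0.norm1_context.linear.lora_A.ns.weight",
    "transformer_blocks.5.ff_context.net.2.lora_A.ns.weight",
    "transformer_blocks.18.ff_context.net.2.lora_B.ns.weight",
])
print(missing)  # → [0, 5, 18]
```

A short gap in the resulting list (e.g. some blocks present, others absent) would suggest a LoRA trained on specific layers only, rather than a corrupted file.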
Ignore that your LoRA got renamed; I always rename LoRAs because so many get posted with names like:
LORAnonsense_gibberishLORALORAsomethingLORAFLUXFLUXFLUX.safetensors
or somesuch, and a week after downloading one I can't figure out what it is or what it's for, so I standardize on:
{LoRA NAME}-f{FLUX VERSION, currently 1}-v{RELEASE VERSION, or 1 if none included}.safetensors
so I can figure out what they are, and it might be machine-parsable at some stage.
Hopefully it might be an easy fix with your training settings or a little Python script might be needed...
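That naming scheme is indeed already machine-parsable. A minimal sketch, assuming the `{name}-f{flux version}-v{release version}.safetensors` pattern described above (the `parse_lora_name` helper is hypothetical, not an existing tool):

```python
import re

# name, then -f<digits>, then -v<anything>, then the extension
LORA_NAME_RE = re.compile(
    r"^(?P<name>.+)-f(?P<flux>\d+)-v(?P<version>.+)\.safetensors$"
)

def parse_lora_name(filename):
    """Split a standardized LoRA filename into (name, flux version, release version).

    Returns None if the filename does not follow the scheme.
    """
    match = LORA_NAME_RE.match(filename)
    if match is None:
        return None
    return match.group("name"), match.group("flux"), match.group("version")

# Using the filename from the error log above:
print(parse_lora_name("COF-f1-v5-beta17_alldim6_000001300.safetensors"))
# → ('COF', '1', '5-beta17_alldim6_000001300')
print(parse_lora_name("LORAnonsense_gibberish.safetensors"))
# → None
```

Trailing training-run suffixes (like the `-beta17_alldim6_000001300` above) end up folded into the version field, which seems consistent with how the scheme is used in practice.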
@Davros666 I switched to training elsewhere (instead of on Civit), so this has probably been happening since then. I tried a few runs where I only trained specific layers, and I could expect the error from those, but the LoRA mentioned in your comment was trained with all layers.
@Davros666 How quickly after publishing are you downloading the LoRAs? If it's automated, that might cause a problem, as Civit does some auto-correction thing a little while after upload.
@mawedesign If it shows up in my notification list, I download it.
@Davros666 What's the last LoRA that worked properly for you?
@mawedesign Sorry, not sure. I took your advice and deleted old betas and versions lower than the latest.