Note: I now largely recommend my newer model over this one: https://civarchive.com/models/1972981/qwen-sex-nudes-other-fun-stuff-snofs
It contains the dataset I used for this model along with a bunch of other images, and since it's a LoKr it's simply more flexible and better. You could also possibly combine the two somehow - I haven't tried it.
This can be used for Snapchat-style selfies with text captions, or for regular selfies and low-quality phone-camera images. The selfies can be taken in a mirror or directly - mention a mirror in the prompt if you want one in the generation. For a caption, simply prompt 'a caption that reads "text"', 'the text on the selfie says "text"', or similar variations. It isn't picky.
The Qwen version includes other concepts, such as cum, sex, and blowjobs - though sex and blowjobs are fairly hit-or-miss and work better with another relevant LoRA at a lower strength.
Description
A much-expanded dataset for Qwen.
FAQ
Comments (8)
Damn, really does what it says on the tin. Can turn off all other loras and this one still shines.
Thank you! I put entirely too much training time into this just to see how best to train things on Qwen.
Very good!
Personally, I found it works best with the strength set between 0.75 and 0.85.
This is pure entertainment GOLD.
Thank you for making this.
Would you mind sharing your training settings?
Sure - it was trained with diffusion-pipe. The formatting got mangled when I pasted it, sorry:
# Change these paths
output_dir = '/mnt/q/AI/Models/Trained/Loras/DiffusionPipe/SnapPipeBeta'
dataset = './trainingConfigs/snapchatDataset.toml'

# training settings
epochs = 20
micro_batch_size_per_gpu = 1
pipeline_stages = 1
gradient_accumulation_steps = 4
warmup_steps = 100

# eval settings
#eval_every_n_epochs = 1
#eval_every_n_steps = 100
#eval_before_first_step = true
#eval_micro_batch_size_per_gpu = 1
#eval_gradient_accumulation_steps = 1

# misc settings
#save_every_n_epochs = 250
save_every_n_steps = 250
#checkpoint_every_n_epochs = 1
checkpoint_every_n_minutes = 120
activation_checkpointing = true
partition_method = 'parameters'
save_dtype = 'bfloat16'
caching_batch_size = 2
steps_per_print = 1
blocks_to_swap = 8

[model]
type = 'qwen_image'
transformer_path = '/mnt/q/AI/Models/DiffusionModels/qwen_image_bf16.safetensors'
text_encoder_path = '/mnt/q/AI/Models/CLIP/qwen_2.5_vl_7b.safetensors'
vae_path = '/home/models/qwen_diffusion_pytorch_model.safetensors'
dtype = 'bfloat16'
transformer_dtype = 'float8'
timestep_sample_method = 'logit_normal'

[adapter]
type = 'lora'
rank = 32
dtype = 'bfloat16'
init_from_existing = '/mnt/q/AI/Models/Trained/Loras/DiffusionPipe/SnapPipeBeta/20250901_06-16-39/step3250/'

[optimizer]
type = 'adamw_optimi'
lr = 1e-4
betas = [0.9, 0.99]
weight_decay = 0.01
eps = 1e-8
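For anyone unfamiliar with diffusion-pipe: the main config above points at a separate dataset TOML (here './trainingConfigs/snapchatDataset.toml'). That file wasn't shared, but a typical diffusion-pipe dataset config looks roughly like the sketch below - the path, resolution, and bucket values here are illustrative placeholders, not the actual settings used for this model:

```toml
# Global dataset settings (values are examples, not the author's)
resolutions = [1024]          # train at ~1024px total area
enable_ar_bucket = true       # bucket images by aspect ratio
min_ar = 0.5
max_ar = 2.0
num_ar_buckets = 7

# One [[directory]] block per image folder; captions are .txt files
# sitting next to each image with the same filename.
[[directory]]
path = '/path/to/your/images'
num_repeats = 1
```

Point `dataset` in the main config at this file and diffusion-pipe will cache latents and text embeddings from it before training starts.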
love this style but can we get an updated version... it doesn't seem to work on Qwen edit 2511.
Whoa. Would love to see an updated version for Klein!