Candid film photography
Simulating analog photography with an emphasis on film grain and flattened highlights/shadows. I now have two versions: one that leans toward retro 1960s-1980s coloring, and a second with much less bias in subject matter and coloring. Both lend your outputs a very pleasant analog-film feel.
My preference is for use with base SDXL. Some fine-tunes overwhelm the LoRA's subtle texture and color influence. But if you find one you can recommend, please share your experience with me! I don't do extensive testing across many fine-tuned models.
Keywords are not usually necessary, but to strengthen the LoRA's influence, try any of the following:
found footage
analog film noise
grainy photograph
light distortions
lens flare
grunge frame (does not work reliably with the unbiased LoRA, and only sometimes with the retro LoRA)
Description
With a tendency toward warmer tones, but much less biased in subject matter: better sci-fi and fantasy subjects without looking like a 1970s TV production.
FAQ
Comments (14)
Absolutely insane what this can pull out of the base model. Congrats, really great LoRA!
thanks! Frankly, this thing has been surprising me too.
After some tests, it works very well with LeoSam's HelloWorld XL, RealVisXL, and Realism Engine XL.
You're the best - thanks!
@eldritchadam a pleasure to help 🫡
Are there some keywords to activate this version?
Nothing is strictly necessary, but to strengthen the effect I usually start my prompts with "grainy photograph" and throw in any of the following:
found footage
analog film noise
grainy photograph
light distortions
lens flare
@eldritchadam thank you! The one I always avoid is "lens flare," because some models make it so big, like a huge ball or a sun lol
You have nailed it! It is amazing, a wallpaper-worthy LoRA in my usage. It works best with RealVisXL V4.0 and the base SDXL version, which works perfectly with the on-site generator.
thanks so much! I started this on a lark, to get something quite different (as the name "candid" indicates, it was more about low-quality snapshots than a focus on analog film), but it turned into maybe the most interesting LoRA I've trained. The outputs I'm getting almost don't make sense to me. It has me generating more images than I have since I was working on my main oil painting style.
But now ... on to SD3 😊
edit - huh. Maybe I'll just stick with SDXL. First tests with SD3 are not too promising.
I've been trying to train LoRAs on my own photography with mixed results. Would you be willing to share your settings for training this one? This LoRA and the checkpoint you made are incredible.
Gladly! I'm on the road a lot today and tomorrow but I'll share as much as you like when I can. I'm using Kohya, so if you're using the same I can just export all settings for an easy import.
@eldritchadam unfortunately I am not using Kohya. I use Draw Things on macOS, which has a LoRA trainer built in, or I use the on-site trainer here. I believe they both use the same process as Kohya.
But take your time! No rush at all, and thank you!
@Fenn I'm not really a technician or computer scientist. As a graphic designer by occupation I do work all day with computers, but my primary training is in the fine arts. And I only minimally understand the tools I work with. I tend to think in either anthropomorphizing ways ("the machine likes being trained on ... ") or in terms analogous to painting - blending or layering paints etc.
So, I don't necessarily know what all the crucial settings are in LoRA training. I'll share anything you want to know! But you may need to ask for specifics that I don't know are crucial.
That said, here's a bunch of numbers and settings from my Kohya setup for one of these LoRA models (the final model is a mix of a few different datasets, though they probably overlap a lot)
save precision: bf16
LoRA type: standard
batch size: 1
epochs: 100-140
Dataset size: 30-60 images per training run (often merging multiple trained LoRAs into a final version)
LR Scheduler: constant_with_warmup
Max grad norm: 1
Optimizer: Prodigy
Optimizer extra arguments: weight_decay=0.1 decouple=True use_bias_correction=True safeguard_warmup=True betas=0.9,0.99
Learning Rate: 1 (the Prodigy optimizer adapts the learning rate itself - so much easier than other optimizer types that have you fiddling with the LR over multiple runs!)
LR warmup (% of total steps): 5
Unet learning rate: 1
Network Rank (Dimension): 36 (most of my LoRAs are lower at 18, but doubling it for my photo model seemed impactful)
Network Alpha: 24
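For anyone training from the command line rather than the Kohya GUI, the settings above roughly translate into an sd-scripts invocation like this. This is a sketch, not the author's exact command: the model and dataset paths are placeholders, and flags like the warmup step count must be computed from your own total steps (the author uses 5% of total steps).

```shell
# Sketch of the listed settings as a kohya sd-scripts (sdxl_train_network.py) run.
# Paths below are placeholders; adjust epochs (100-140) and warmup to your dataset.
accelerate launch sdxl_train_network.py \
  --pretrained_model_name_or_path "/path/to/sd_xl_base_1.0.safetensors" \
  --train_data_dir "/path/to/dataset" \
  --output_dir "/path/to/output" \
  --network_module networks.lora \
  --network_dim 36 \
  --network_alpha 24 \
  --train_batch_size 1 \
  --max_train_epochs 120 \
  --save_precision bf16 \
  --max_grad_norm 1.0 \
  --optimizer_type Prodigy \
  --optimizer_args weight_decay=0.1 decouple=True use_bias_correction=True safeguard_warmup=True betas=0.9,.99 \
  --learning_rate 1.0 \
  --unet_lr 1.0 \
  --lr_scheduler constant_with_warmup \
  --lr_warmup_steps 150
```

With Prodigy, leaving both learning rates at 1.0 is intentional: the optimizer estimates the step size adaptively, which is why the author can reuse the same LR across runs.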
Details
Files
Available On (1 platform)
Same model published on other platforms. May have additional downloads or version variants.