CivArchive
    LoRA-3D - Anaglyph Image Generator - LoRA-3D Strong
    NSFW

    What is this? The preview images look terrible!

    This is a LoRA, conceived and trained by Elldreth and TheAlly, which produces consistent, usable 3D anaglyph images without any plugins, extensions, or expensive headsets*. Simply enable the LoRA, set the strength to 1, and prompt as you normally would.

    If you don't have 3D Anaglyph glasses, yes, the images will look terrible!

    What's an Anaglyph Image?

    An anaglyph image is a type of 3D image that uses color to create the illusion of depth. The image is made up of two slightly different versions of the same picture, one in red and one in blue. To see the full 3D effect, you need to wear a pair of anaglyph glasses, which have one red lens and one blue lens.

    The glasses allow you to see the two versions of the image as separate, and your brain blends them together to create a sense of depth. The use of red and blue for the two images is based on the principle that red and blue are the colors furthest apart on the color spectrum, and therefore provide the greatest contrast, which results in a more pronounced 3D effect.

    *anaglyph glasses are inexpensive and can be found on Amazon and other retailers.
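    The red/cyan trick described above is easy to see in code. As a minimal sketch (purely an illustration, not part of this LoRA, assuming two hypothetical H x W x 3 RGB arrays for the left-eye and right-eye views), an anaglyph is composed by taking the red channel from the left view and the green/blue channels from the right view:

```python
import numpy as np

def make_anaglyph(left: np.ndarray, right: np.ndarray) -> np.ndarray:
    """Compose a red/cyan anaglyph from a stereo pair.

    left, right: H x W x 3 uint8 RGB arrays (left-eye and right-eye views).
    The red channel comes from the left view; green and blue from the right.
    """
    anaglyph = right.copy()
    anaglyph[..., 0] = left[..., 0]  # swap in the left-eye red channel
    return anaglyph

# Tiny synthetic pair: a solid red left view and a solid cyan right view
# combine into pure white, since all three channels end up at 255.
left = np.zeros((2, 2, 3), dtype=np.uint8)
left[..., 0] = 255
right = np.zeros((2, 2, 3), dtype=np.uint8)
right[..., 1:] = 255
result = make_anaglyph(left, right)
```

    Viewed through red/cyan glasses, each eye then sees only "its" view, which is what creates the depth illusion.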

    Can I view these images without the Glasses?

    There's a rudimentary anaglyph viewer available here (https://3dthis.com/stereo.htm). This will allow you to preview the image with some sense of depth.

    Things to Note!

    • While this LoRA can produce excellent results consistently, it will sometimes output images where the 3D effect doesn't appear to be working, and occasionally produces incoherent results. It's experimental! And as with all Stable Diffusion generations, there's an element of luck with the seeds - please just re-roll.

    • Some models may work better than others! We have tested against some of the major models, and our own Lucid and Churned Mixes, receiving positive results. It seems to work less well against NAI and Anything v3, but performs well with Berry's Mix.

    • Hires fix seems to either diminish the 3D effect, or produce a loss in image quality - beware.

    • Also note that the sample images for this model are intended to give you an idea of what the model is capable of and demonstrate the possibilities of the depth effect; they are not necessarily 100% reproducible, due to the settings of my Web UI.

    • You can create anaglyphs with the depth map script plugin for Automatic1111's SD Web UI, which does a much better job! But I think a LoRA plus prompting is much more accessible for many people than complex extensions.

    Please consider joining my Patreon! Advanced SD tutorials, settings explanations, adult-art, and expeditions into the experimental, from a female content creator (me!) patreon.com/theally

    Description

    Strong Version

    This version produces much more perfectly formed subjects, but also overrides the prompt somewhat, and can be difficult to control.

    FAQ

    Comments (38)

    drakmour · Feb 2, 2023
    CivitAI

    I wonder if they work in VR or only with goggles.

    theally
    Author
    Feb 2, 2023 · 3 reactions

    Yes, so side-by-side 3D images can be viewed on Oculus Quest with Pigasus viewer, but as far as I'm aware, the only Anaglyph viewer for Quest is Virtual Desktop - which isn't great.

    drakmour · Feb 2, 2023 · 1 reaction

    @theally I have some players on my Quest 2, I'll try it there! Very cool idea! :-)

    Evade6559 · Feb 5, 2023 · 1 reaction

    If you have VR and use Automatic1111, you might use https://github.com/thygate/stable-diffusion-webui-depthmap-script to generate left/right stereo images. The beauty of theally's solution here is that it's just a LoRA, which is nuts. Depthmap takes 10-20x as long to generate and has its own technical complexity. Depthmap can make anaglyphs, but a LoRA that does the same thing is awfully cool.

    theally
    Author
    Feb 5, 2023

    @Evade6559 I used that extension to generate my data set for this - the thinking was that it's more likely LoRA will be implemented in InvokeAI, Easy Diffusion, DiffusionBee, etc. than the Depth Map extension - so more people will be able to experience the wonder of anaglyphs!

    aineomenos · Feb 2, 2023 · 3 reactions
    CivitAI

    Preview pictures work - a little fuzzy on the edges, at least with the glasses I'm using, but other than that it seems to be working. The forest background one didn't do great in the back, but the foreground was nice; again, could be my glasses (cheap crappy ones). But other than that, cool.

    13433 · Feb 3, 2023 · 6 reactions
    CivitAI

    If you use Auto1111, the depth map script makes anaglyphs that actually work well.

    theally
    Author
    Feb 3, 2023

    We used that to create our input dataset :)

    Evade6559 · Feb 5, 2023 · 2 reactions

    A LoRA version is pretty brilliant though. 1/100th the effort.

    RenaissanceAffair · Feb 3, 2023 · 3 reactions
    CivitAI

    You should do "cross view" next. It's already working in sd1.5 without fine tuning, but struggles to position slides correctly side by side.

    theally
    Author
    Feb 3, 2023 · 1 reaction

    That's something I'm looking into! I was really surprised this one worked so well, so I'm keen to try some other types of 3D images!

    xpgx1 · Feb 4, 2023 · 2 reactions
    CivitAI

    What a fantastic idea! Now I have to wonder - is it possible to expand this with SBS support? Because then - Rift / Headset-Owners can enjoy their creations in 3D as well. But I suspect this would be asking too much, the color shift is already a nice way to extract something like a depth-impression from the images. In any case - great idea, can't wait to fully test this once I'm back! Thx for this!

    theally
    Author
    Feb 4, 2023 · 1 reaction

    Thanks! I've actually trained an equirectangular proof of concept and had moderately good results, but it needs a little more time. Stereo SBS, possibly too, but saving that for last - seems hardest!

    Hope you get good results!

    Caulino · Feb 5, 2023

    @theally Depth extension already does this and the result is superior to this lora.

    theally
    Author
    Feb 5, 2023 · 1 reaction

    @Caulino Correct - that's how I created the data set for this LoRA. The point is that LoRA will be implemented in other UIs, like Easy Diffusion or InvokeAI, but it's unlikely the depth extension will be ported to those. Anyway, just a bit of fun - experimental, and it's amazing (to me) that it actually works!

    lenML · Feb 4, 2023
    CivitAI

    Interesting try!

    Is it possible to train the model to output two images:

    image + depth map

    In that case, it would be more practical than red-and-blue 3D.

    theally
    Author
    Feb 4, 2023

    It's something I'm looking into. I have a successful proof-of-concept output for equirectangular images for viewing in 360 VR (like the Oculus Quest 2), which will be awesome if I can refine it a little more :)

    fr34ky · Feb 14, 2023

    @theally you will be my new god when you achieve that

    WAS · Feb 4, 2023
    CivitAI

    The depth is unfortunately all wrong between foreground and background, and in a lot of cases, it just creates random shapes for left and right channel offsets.

    theally
    Author
    Feb 4, 2023

    Show some examples you've made?

    In most cases it works really well - depends on the LoRA strength and the depth in your prompt/image.

    WAS · Feb 5, 2023

    @theally They're your examples. Have you tried them with glasses? Channel offsets get warpy, and have shape noise (shapes become... stuff, inaccurate to subject). No reason for me to try. I'm very familiar with this with my 3D environment work for film.

    This does give me an idea, though, to just use MiDaS and force left and right channels at a fixed offset (like the 3D books when we were kids). Haven't tried doing a Web UI plugin though; I'm used to the diffusers API.
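    (The fixed-offset idea mentioned here can be sketched in a few lines of NumPy - purely an illustration with an arbitrary offset value, not code from any actual extension: shifting only the red channel sideways fakes a single, uniform depth plane.)

```python
import numpy as np

def fixed_offset_anaglyph(image: np.ndarray, offset: int = 5) -> np.ndarray:
    """Fake a uniform-depth red/cyan anaglyph from a single RGB image
    by shifting only the red channel horizontally by `offset` pixels."""
    out = image.copy()
    out[..., 0] = np.roll(image[..., 0], offset, axis=1)
    return out

# A 1 x 8 strip with a red ramp: after the call, the red channel is
# rotated right by 2 pixels while green and blue are untouched.
img = np.zeros((1, 8, 3), dtype=np.uint8)
img[..., 0] = np.arange(8, dtype=np.uint8)
shifted = fixed_offset_anaglyph(img, offset=2)
```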

    WAS · Feb 5, 2023

    @theally PS: it could also be sensitive to any anomalies. I tend not to like MiDaS results as-is because they're not based on real z-depth, and a lot of the work I've done lately is basically QA - making sure textures aren't popping, boundaries aren't clipping, etc.

    theally
    Author
    Feb 5, 2023

    @WAS I have definitely tried - got my 50 pairs of anaglyph glasses from Amazon :)

    For images generated from a diffusion model, it's (in my opinion) extremely impressive that it's able to do as well as it does. Of course, if you really wanted to generate anaglyph images, you'd use a proper tool (like the Depth map script extension for Web UI - which is how I made my data set for this project).

    ds32 · Feb 4, 2023 · 1 reaction
    CivitAI

    is there a high level writeup for how you'd produce a sophisticated LORA like this?

    theally
    Author
    Feb 4, 2023 · 3 reactions

    I haven't actually written one, but I was thinking of re-training this for SD 2.x - and I might do a tutorial/writeup of that. It was fairly straightforward though; generated a bunch of 512x images, created anaglyphs of them, trained them :)

    ds32 · Feb 9, 2023 · 1 reaction

    @theally please ping me if you do. Very interested in the effort, cool stuff.

    diplodocus · Mar 19, 2023 · 9 reactions
    CivitAI

    Brilliant idea!

    Would it be possible to create an alternative version that produces a pair of standard images that can be viewed cross-eyed, or used in non-anaglyph-based 3D viewing systems?

    JustAiGuy · Apr 29, 2023
    CivitAI

    Nice work! Do i need cyan/red glasses or a blue/red is ok too?

    theally
    Author
    Apr 29, 2023

    I'm not sure, to be honest! I didn't know you could get blue/red - are you sure they're not the same thing?

    JustAiGuy · May 14, 2023

    @theally Suddenly, blue/red is the most common where I live. Wikipedia says there's a difference in 3D effect and color perception due to strong color separation.

    JustAiGuy · May 18, 2023 · 2 reactions
    CivitAI

    I eventually bought blue/red glasses especially for this LoRA. It does not work - images are still blurry, without any depth. Perhaps cyan/red would work, but don't waste your time with blue/red.

    theally
    Author
    May 19, 2023 · 1 reaction

    You can buy a pack of 50 cyan/red paper glasses on Amazon for less than $10 (depending on where you are). I bought some just for making this. Now I have 49 unused pairs, 😂

    phili13251 · Jun 2, 2023

    @theally I got a better pair for €10 - now, after looking at two pics, it's gathering dust.

    ydoomenaud · Jan 9, 2024
    CivitAI

    All three versions, as I go higher in slider values, also progressively undress the subject of the image.

    While that would be a great model of its own, I'm not looking for that effect here. I tested this in Epicrealism and Dreamshaper, both with and without character LoRAs, and the only thing I've noticed is if the prompt specifically names items of clothing the effect is slower but still occurs. Negative prompts do not stop this.

    jtk1996606 · Jan 16, 2024

    Just do as the instructions say: Simply enable the LoRA, set the strength to 1, and prompt as you normally would.

    miraishounen · Mar 13, 2024
    CivitAI

    very very bad

    theally
    Author
    Mar 13, 2024

    You do realize this was made in February of 2023 when Stable Diffusion and LoRA training was very new? This model actually works surprisingly well. I presume you have tested it with anaglyph 3D glasses?

    futuristic · Sep 9, 2024 · 1 reaction
    CivitAI

    The sample images / video work great for me! Any chance you'll be making a Flux version of this?

    LORA
    SD 1.5

    Details

    Downloads
    1,027
    Platform
    CivitAI
    Platform Status
    Available
    Created
    2/2/2023
    Updated
    4/24/2026
    Deleted
    -

    Files

    LoRA-3D-Strong.safetensors

    Mirrors

    TensorFiles (1 mirror)