CivArchive
    Trapped in Wall - Behind - V1
    NSFW

    Very few source images. I don't know what you'll be able to do with this.

    Weight of 0.8 is good.

    For the other side of the wall... https://civarchive.com/models/8764/trapped-in-wall


    Comments (16)

    SnowSultan · Feb 14, 2023 · 3 reactions
    CivitAI

    Can you explain a little about how something this specific is made? Was it trained on one image? (edit, I just read your other comment, it was two images. Did you draw them yourself?) I haven't tried training anything yet because I work in 3D and am not sure if training using 3D models as reference would tend to result in 3D-looking generations. Thanks for any information (plus this is hilarious, I always like seeing bizarre additions here). :)

    Shurik
    Author
    Feb 14, 2023 · 4 reactions

    This was trained on 4 images with mirrored copies, so 8 images total. I did not draw them. I found them online.

    In my experience, training on 3D does create a 3D-looking generation. Something I might try is training on a model that is designed for 3D-looking images; I don't know if one exists or not. It might help to have terms like "render, 3d" in a caption file for each image. That tells the AI that the image being trained on is associated with those captions. Basically, if a word is in the caption file, that makes it easier to change that thing in prompts; in other words, if there are any parts of the images you want to keep, leave them out of the captions. If you don't include style terms like those, the model will likely bake the 3D style in strongly. It's sort of hard to explain, and I only had this revelation a few days ago myself.

    I've only tried training on 3D a couple of times so far, so a lot of this is speculation on my part. If you can't find a 3D-style model, then use a real-world model based on V1.5. Training on that actually translates to anime models to varying degrees. I accidentally figured that out last night: I was trying to train a human face with a LoRA, accidentally tested it on anime, and it got the hair and clothes remarkably well.

    If any of that is confusing, I can attempt to clarify. I do not have very organized thoughts.
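    The captioning idea described above can be sketched in code. Below is a minimal, hypothetical helper (the tag names, file layout, and function are my own illustration, not part of any specific training tool) that writes one sidecar caption .txt per training image, putting the style terms you want to be able to prompt away into every caption:

```python
import os

# Hypothetical sketch of the captioning strategy from the comment above:
# words placed IN the caption (e.g. "3d render") stay easy to change via
# prompts later; traits LEFT OUT of the caption get baked into the LoRA.

STYLE_TAGS = "3d render, cgi"      # assumed style tags you want promptable/removable
CONCEPT_TAG = "trapped_in_wall"    # assumed trigger word the LoRA should learn

def write_captions(image_dir: str) -> list[str]:
    """Create a sidecar caption .txt file for every image in image_dir."""
    written = []
    for name in sorted(os.listdir(image_dir)):
        stem, ext = os.path.splitext(name)
        if ext.lower() not in {".png", ".jpg", ".jpeg", ".webp"}:
            continue  # skip non-image files
        caption_path = os.path.join(image_dir, stem + ".txt")
        with open(caption_path, "w") as f:
            f.write(f"{CONCEPT_TAG}, {STYLE_TAGS}")
        written.append(caption_path)
    return written
```

    This mirrors the common one-caption-file-per-image convention used by several LoRA training scripts, but the exact tags to include depend on what you want the model to keep versus let you override.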

    RalFinger · Feb 14, 2023 · 2 reactions

    Maybe you can convert your 3d images to 2d with img2img and then try to train a model?

    Shurik
    Author
    Feb 14, 2023 · 2 reactions

    @RalFinger I've thought about adding a posterize effect to images to "flatten" them. I'm guessing that would introduce a lot of other problems... like posterized images...

    I haven't tried it, but it might be difficult to get consistently good conversions out of img2img. I'll be experimenting with 3D model datasets down the road and will test a lot of this then. I always try to find the solution with the least amount of effort, which takes a lot of effort to do 😝

    SnowSultan · Feb 14, 2023 · 1 reaction

    @Shurik @RalFinger Thank you for the replies and explanation. I actually don't want 3D-looking generations, so Ral's idea of using img2img on them to make 2D versions would be a very good one if, like you said, we could get consistent results. I've done countless experiments with trying to get posed 3D figure renders to translate into anime or illustrated 2D generations, but unless I'm missing something, there just isn't a way to use both a low denoising strength to preserve the 3D render's composition and details, and one high enough to change the overall style. Based on your latest experiment though, it does sound like it's possible to get anime results from realistic training data.

    If you need 3D renders for any testing in the future, let me know and I'll see what I can do. Thanks again for the information.

    SteveWarner · Feb 14, 2023 · 1 reaction

    @SnowSultan I'm totally with you here. If you get a great pose or form and use I2I with a low strength, it won't change the style. But if you increase the strength enough to change the style, you're likely to end up with a different pose. Very frustrating. However, I just saw this video, and it may be of help/interest with this particular issue. https://www.youtube.com/watch?v=YJebdQ30UZQ
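    The strength trade-off described in these two comments can be made concrete with a small sketch. In typical img2img implementations, strength controls how far into the noise schedule the init image is pushed, so only roughly strength × total steps of denoising actually run — which is why low strength preserves both composition and style, and high strength changes both. This is an illustration of the common behavior, not any specific tool's code:

```python
# Toy model of the img2img pose-vs-style trade-off: strength decides how
# many of the scheduler's denoising steps actually run on the init image.
# Few steps -> the render's composition AND style survive; many steps ->
# the style can change, but the pose is free to drift too.

def img2img_steps(strength: float, num_inference_steps: int = 50) -> int:
    """Number of denoising steps that actually run at a given strength."""
    if not 0.0 <= strength <= 1.0:
        raise ValueError("strength must be in [0, 1]")
    return min(int(strength * num_inference_steps), num_inference_steps)
```

    At strength 0.3 with 50 steps, only 15 steps run over the init image, so it barely changes; at 0.75, 37 steps run and the model has enough freedom to restyle — and to move the pose.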

    SnowSultan · Feb 14, 2023 · 1 reaction

    @SteveWarner Thank you, I thought I might be missing something obvious. I will watch that video shortly, thanks for that as well. I wonder if small but specific custom LoRAs or textual inversions for specific poses (like this silly example) or even body-part positions could help either retain poses or give more control in text-to-image generation. More precise control over posing is the one thing I want most from AI, and it would change the entire industry (for both good and bad) if it can ever be done well.

    edit: watched most of the video - oh man, that skeleton 'rig' version is what I've been imagining (and wishing for) since I first used AI. Can't wait until these get an Automatic1111 release of some sort.

    SteveWarner · Feb 14, 2023 · 1 reaction

    @SnowSultan Glad you liked the video. I'll be downloading that tool this week and seeing if I can run it through the paces. Being able to work through the stages of art production (from a studio manager's standpoint) is important, as that's what clients expect. No one wants you to show them final art in your first meeting. You start with sketches, then go to intermediates, then final art. And in that, you lock down the framing, pose, etc. Being able to lock each of these elements in so they don't change between renders is really critical. I'm hoping these tools move us closer to that reality.

    VegaProxy · Feb 16, 2023 · 1 reaction

    @Shurik Well, speaking from some experience: as long as the 3D model is toon-shaded/flat-shaded or unshaded, and not soft-shaded, it won't have as drastic a 3D effect. For instance, I worked with a vTuber (a very small one) who gave me a toon-shaded model and a bunch of high-quality photos from VRChat in many angles, poses, lighting conditions, and backgrounds, and the final result of the LoRA looked very much 2D in nature. So you can easily introduce 3D models if any ethical concerns arise, or for a niche character, concept, etc.

    RalFinger · Feb 16, 2023 · 1 reaction

    @Shurik did you see the new repo called "ControlNet", which has the option to give your images outlines? I guess that could work perfectly to render out 2D from 3D images. Give it a try :)

    Shurik
    Author
    Feb 16, 2023

    @RalFinger I installed it last night. Haven't tried it yet though.

    SnowSultan · Feb 16, 2023 · 1 reaction

    @RalFinger I've been doing almost nothing but testing it since yesterday. It's awesome for applying poses from a second image, but I haven't really had luck transferring only a style to a 3D render yet. Will keep experimenting. :)
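    For intuition about the "outlines" idea mentioned above: ControlNet's edge preprocessor (commonly Canny) reduces the input image to lines, and generation is then conditioned to respect those lines while everything else, including the 3D shading, is free to be restyled. Here is a toy, pure-Python stand-in for such a preprocessor — not the real Canny algorithm, just a simple gradient-magnitude threshold on a grayscale image given as nested lists:

```python
# Toy "outline" extractor: marks a pixel as an edge when the intensity
# jump to its right or lower neighbor exceeds a threshold. Real ControlNet
# preprocessing uses Canny (smoothing, non-maximum suppression, hysteresis);
# this only illustrates the concept of reducing an image to outlines.

def outline(image: list[list[int]], threshold: int = 50) -> list[list[int]]:
    """Binary edge map from horizontal/vertical intensity differences."""
    h, w = len(image), len(image[0])
    edges = [[0] * w for _ in range(h)]
    for y in range(h - 1):
        for x in range(w - 1):
            gx = image[y][x + 1] - image[y][x]  # horizontal gradient
            gy = image[y + 1][x] - image[y][x]  # vertical gradient
            if abs(gx) + abs(gy) > threshold:
                edges[y][x] = 1
    return edges
```

    Feeding only this kind of edge map to the generator is what lets it keep a 3D render's silhouette and pose while discarding its shading.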

    biggerthanbig · Feb 28, 2023 · 2 reactions
    CivitAI

    Could this be used to make trapped-in-the-ceiling / hitting-the-ceiling images? Like a head stuck in, or bumping against, the ceiling. So far I've had no luck achieving that.

    Shurik
    Author
    Feb 28, 2023 · 2 reactions

    I doubt it. I don't know where to begin to make that work. This Lora is pretty restricted. I might be able to make a new one that sort of does what you want. I need some time though. Sounds like an interesting idea!

    biggerthanbig · Mar 1, 2023

    @Shurik Oh, that would be wonderful. Take all the time you need.

    jorgeholiver · Oct 20, 2024

    any news?

    LORA
    SD 1.5

    Details

    Downloads
    5,465
    Platform
    CivitAI
    Platform Status
    Available
    Created
    2/14/2023
    Updated
    5/14/2026
    Deleted
    -
    Trigger Words:
    trapped_in_wall, stuck in wall

    Available On (2 platforms)

    Same model published on other platforms. May have additional downloads or version variants.