CivArchive
    Fluffy - single word concept, no data used - v1.0

    Please react to my videos: https://civarchive.com/user/ntc/images?sort=Most+Reactions

    Trained for 2000 steps on the phrase:
    #:0.04|~fluffy:0.2

    https://github.com/ntc-ai/conceptmod

Movies were created with the lora applied at strengths from 0 to 3.5.


    animation with:

    https://github.com/ntc-ai/conceptmod/blob/main/lora_anim.py

    Comments (32)

7727 · Apr 18, 2023 · 2 reactions
    CivitAI

Ok, so, it seems like you put a lot of effort into these examples, and yet I still have no idea what you've uploaded.

    ntc
    Author
    Apr 18, 2023

    it makes everything fluffy

    ntc
    Author
    Apr 18, 2023

I updated the description; hopefully it provides more clarity, but I'm not sure.

jrittvo · Apr 18, 2023 · 3 reactions

For their demo, I think they run the image generation over and over, increasing the lora's strength value each pass. The starting image grows fluff as the lora kicks in stronger. Then the images are combined as frames of a video, which has the appearance of animated motion. It's a very cool technique that would probably work with a bunch of different lora concepts.
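[Editor's note] The strength-sweep loop described above could be sketched roughly like this. The `<lora:name:weight>` prompt syntax is the A1111 convention; the helper name, base prompt, and frame count are made up for illustration, and the actual txt2img call is omitted because it depends on a running A1111 server:

```python
# Sketch of the strength-sweep idea: render the same seed repeatedly while
# the LoRA weight ramps up, then join the frames into a video.
# <lora:name:weight> is the A1111 prompt convention; the txt2img call itself
# is left as a comment because it depends on a running A1111 server.

def sweep_prompts(base_prompt, lora_name, start=0.0, end=3.5, frames=8):
    """Return one prompt per frame with the LoRA weight linearly interpolated."""
    step = (end - start) / (frames - 1)
    prompts = []
    for i in range(frames):
        weight = start + i * step
        prompts.append(f"{base_prompt} <lora:{lora_name}:{weight:.2f}>")
    return prompts

for prompt in sweep_prompts("a cat", "fluffy", frames=4):
    # each prompt would be sent to txt2img with a fixed seed here,
    # and the resulting images stitched into an animation
    print(prompt)
```

Each generated prompt would be rendered with the same seed, so the only thing changing between frames is the LoRA weight.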

    ntc
    Author
Apr 18, 2023 · 1 reaction

    @jrittvo Good explanation. One point is that it's automated in a for loop so I just go look at what it found after a while. It also uses optical flow / aesthetic scoring to find good seeds.

jrittvo · Apr 18, 2023 · 1 reaction

@ntc Yeah. I kinda saw that in the code, but it's way ahead of my knowledge. The concept is very, very clever. Really takes advantage of the capabilities of SD. Neatest thing I've seen here in a while. It's as empowering a technique as ControlNet, in my book.

jrittvo · Apr 18, 2023
    CivitAI

Does the code in your repo run on a "normal" machine, or does it require a massive Nvidia card or something like Colab? Does it depend on xformers? I'd love to try it on my M1 Mac, but that limits me to a basic pipeline with torch and diffusers using MPS.

    ntc
    Author
    Apr 18, 2023

@jrittvo it does backprop, so massive card only, unfortunately. I am using an A6000. It takes 20GB of VRAM.

jrittvo · Apr 18, 2023

@ntc I also noticed cudatoolkit in one of the imports. So it goes. I've been looking at the original paper. Is the training step integral to your pipeline? Does it first modify the model in some way based on the add/remove prompts? Or does it modify the model weights on the fly for each generation step? Or is training and/or modifying weights about something completely different from your pipeline, like their example of permanently removing NSFW pieces from a model? [ My last questions, I promise ;) ]

    ntc
    Author
    Apr 18, 2023

    @jrittvo You can use the lora_anim script for any lora to create videos, no training needed. I haven't tried it outside of mine.
Training the lora modifies the weights. The phrase '#|chicken++' gets split into two loss terms, '#' and 'chicken++', which are used to backprop. This lora uses the selfattn option, but you could modify the full checkpoint too. It takes 12 hours to train. I sample every step, so it could be faster.
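[Editor's note] The phrase splitting described above could look something like this rough sketch. The real parsing lives in the conceptmod repo; the shape here only mirrors the `#:0.04|~fluffy:0.2` phrase shown in the description, and the default weight for unweighted terms is a guess:

```python
# Hypothetical sketch of splitting a conceptmod phrase such as
# '#:0.04|~fluffy:0.2' into weighted loss terms. The actual parsing is in the
# conceptmod repo; this only illustrates the term:weight|term:weight shape.

def split_phrase(phrase):
    """Split 'term:weight|term:weight' into (term, weight) pairs."""
    terms = []
    for part in phrase.split("|"):
        if ":" in part:
            term, _, weight = part.rpartition(":")
            terms.append((term, float(weight)))
        else:
            terms.append((part, 1.0))  # assumed default weight when none is given
    return terms

print(split_phrase("#:0.04|~fluffy:0.2"))  # → [('#', 0.04), ('~fluffy', 0.2)]
print(split_phrase("#|chicken++"))         # → [('#', 1.0), ('chicken++', 1.0)]
```

Each (term, weight) pair would then contribute one weighted loss term to the backprop step.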

azretion188 · Apr 18, 2023 · 3 reactions
    CivitAI

    So, you used conceptmod to modify sd1.5, then extracted a Lora from the difference with the modified checkpoint?

    This is an amazing idea! It opens up so many possibilities!

ampp · Apr 19, 2023
    CivitAI

    Having trouble using your lora_anim script. Could you help me find the image-reward?

    ntc
    Author
    Apr 19, 2023

    This is the command I use to run it
    ```
    python3 lora_anim.py -s -2.5 -e 5 -l fluffy -lp ", fluffy" -n 10 -sd 5 -m 5.0
    ```
It has some hard-coded values; for example, it expects the A1111 API at 192.168...:7777

eurotaku · Apr 22, 2023
    CivitAI

So by no data you mean no external dataset? Could you please elaborate a bit on the advantages? Less unwanted bias/style change often introduced by classic loras? No legal implications of using external training data? Better consistency of generated images? More flexibility with the used checkpoints? File size, VRAM usage? Sorry for all the questions; I'm still not sure what I'm looking at exactly, but I get the feeling it could be something awesome.

    ntc
    Author
Apr 22, 2023 · 1 reaction

1) yes, no external dataset
2) It's very experimental, I don't really know
3) not sure, it just started working
4) yeah, no legal implications I think. I am not distributing artist or character loras with this, though, out of caution.
5) no idea :)
6) I'm not sure
7) it's 5MB, 1/4 the size of one of the GIFs.

    Thanks, it may just be interesting as model inference gets better.

eurotaku · Apr 22, 2023

    @ntc ok, thx for your reply

mangoLassAi · Apr 22, 2023 · 1 reaction
    CivitAI

    This is so cool!

_Envy_ · May 4, 2023
    CivitAI

    What OS are you running? I'm trying to get conceptmod working on Windows 10 and every model I train seems to output nothing but random noise. Any idea what I'm doing wrong?

    ntc
    Author
    May 4, 2023

Hey, I'm using Linux. Sorry about the issues; it's very beta.

    Here's a way to do it with runpod https://civitai.com/models/57334/angry-trained-without-data-new-runpod-easy-animations-for-any-lora

    Maybe try with the runpod to see if it's a windows issue.

_Envy_ · May 4, 2023

    @ntc Looks like it has to do with Windows somehow. I did have to change a line of code to get it to run, and maybe that's the issue:

```
# before
from pytorch_lightning.utilities.distributed import rank_zero_only
# after
from pytorch_lightning.utilities.rank_zero import rank_zero_only
```

    It was apparently moved to a different part of pytorch_lightning, so I doubt that's the issue, but maybe? Any idea what else I could do to diagnose the issue?

_Envy_ · May 4, 2023

    Also, could you put a list of the required python modules somewhere?

    ntc
    Author
    May 4, 2023

    @_Envy_ This is probably the best list of required modules https://github.com/ntc-ai/conceptmod/blob/main/docker/Dockerfile_train (line 43)

    I will add it to the docs as well.

    Thanks for working through this!

    ntc
    Author
    May 4, 2023

    @_Envy_ BTW, the pytorch lightning issue might be a versioning issue. I am using pytorch-lightning 1.7.7

    ntc
    Author
    May 4, 2023

@_Envy_ Some of the parameter defaults were wrong: invoking train-esd.py directly used negative parameters that should have been positive. If you used train_sequential.sh, the params were set correctly.
Just FYI.

    I have deployed a new Dockerfile to fix this on runpod. It requires stopping and starting the pod.

_Envy_ · May 4, 2023

    @ntc I figured out what the problem was. It assumes the file format is safetensors, but it happily loads a ckpt and you end up with a mess. When I gave it a safetensors file, it worked, so I can run it locally now. :)

    Here's a minimal set of commands that will set up the environment correctly on Windows:

    https://pastebin.com/cGFaWNZ4

    This assumes you've got a working anaconda environment set up.

    ntc
    Author
    May 4, 2023

    @_Envy_ Thanks! I added this to the readme and a link to your civit profile :)

_Envy_ · May 5, 2023
    CivitAI

    So what would be the difference between what you're doing here, versus using the phrase "fluffy="?

    ntc
    Author
    May 5, 2023

    I'm not really sure. There are lots of combinations of phrases and things that I haven't tried. Some don't work well together and some do unexpected things.

_Envy_ · May 5, 2023

    @ntc What sort of things would you try if, for instance, you wanted to make a model that makes everything iridescent, without needing a trigger word?

    ntc
    Author
    May 5, 2023

    @_Envy_
This is what I would do: start by querying the model to find the right prompt. 'Iridescent' on its own may not change the input image as desired. I would try 'Brilliant, lustrous, colorful'.

Then I would find a working phrase from the past and modify it. Copying anger:
"Brilliant, lustrous, colorful++:0.4|Brilliant, lustrous, colorful%{random_prompt}:-0.1"

    Note that other models have different phrases and combinations they trained on.

    The prompting seems to work better with trigger words. I'm not sure the best way to remove the need for triggers.

    Hope this helps

_Envy_ · May 10, 2023 · 1 reaction

    @ntc Thanks!

One other thing: the recent change to safe file names broke saving on Windows. I got it working again by replacing both : and | with _.
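[Editor's note] The Windows filename fix described above might look something like this minimal sketch (the function name is made up; `:` and `|` are among the characters Windows forbids in filenames, which is why the replacement is needed):

```python
# Sketch of the Windows-safe filename fix: ':' and '|' cannot appear in
# Windows filenames, so a phrase like '#:0.04|~fluffy:0.2' fails as a save path.

def safe_filename(name):
    """Replace characters Windows forbids in filenames with underscores."""
    for ch in ":|":
        name = name.replace(ch, "_")
    return name

print(safe_filename("#:0.04|~fluffy:0.2"))  # → #_0.04_~fluffy_0.2
```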

    LORA
    SD 1.5
    by ntc

    Details

    Downloads
    527
    Platform
    CivitAI
    Platform Status
    Available
    Created
    4/17/2023
    Updated
    5/13/2026
    Deleted
    -
    Trigger Words:
    fluffy

    Available On (1 platform)

    Same model published on other platforms. May have additional downloads or version variants.