CivArchive

    Custom LoRAs and commissions here: https://ko-fi.com/suppressor

    Updating for Wan! All Wan models are compatible with both Wan 2.1 and Wan 2.2.

    Hey all, I'm currently updating this model by training individual Disney Princesses for WAN T2V 14B. I hope you enjoy it. I will release each princess as its training finishes.

    This is a Hunyuan Video LoRA designed to create the likeness of Dis*ey Princesses from "Ralph Breaks the Internet," with some stylistic differences and choices mixed in from freely available OG Dis*ey cartoons and standardization images from the darkest, dankest corners of the internet. Depending on strength you can get more or less CGI: the higher the strength, the more D*sney; the lower, the more realism.


    Example Prompt Trigger: CGI Disney Princess Anna, with her auburn hair and twin braids, is wearing her signature green top and shorts. Anna has large expressive eyes and blushed cheeks. Anna is smiling at the camera and performing a comedy routine.

    Supported Characters that have been trained:

    Anna

    Ariel

    Aurora

    Belle

    Cinderella

    Elsa

    Jasmine

    Merida

    Moana

    Namaari

    Pocahontas

    Snow White

    Tiana

    Other D*sney Girls in the Mix (trained with minimal images and can be hit or miss)

    Alice

    Asha (sometimes)

    Chel

    Dolores

    Esmeralda

    Isabela

    Jane

    Kida (sometimes)

    Megara (sometimes)

    Mirabel

    Raya (sometimes)

    Shank (sometimes)

    Vanellope (sometimes)

    This ALL DP version does a few things:


    1) It allows one LoRA for most, if not all, D*sney Princesses and supporting cast.

    I've had success prompting for multiple DPs in several styles, but you need to clearly delineate in your prompt what you are trying to accomplish. Crucial items like outfit, hairstyle, skin tone, and ethnicity must be prompted for.

    2) It allows you to create your own Disney Princess if you can prompt well.

    I've come up with some total babes that I will share soon. You can just create your ideal princess, crank up the strength, and let it ride. It works surprisingly well.

    3) The strength directly correlates with the CGI and Cartoon effects.

    4) It directly supports nudity and NSFW

    Single Models:

    RAPUNZEL

    Sample prompt: Rapunzel, with her insanely long blonde hair, is sitting on a chair at a bar wearing her purple dress. She is obviously drunk and seems a bit disoriented; her expression is both surprised and curious.

    You can get more realism by decreasing the strength; keep it high for CGI.

    BELLE - BEAUTY AND THE BEAST

    Pretty evenly balanced, just use the trigger and prompt for styles and expressions.

    Sample prompt:

    In a luxurious medieval ballroom, full body shot of white girl cartoon CGI Belle, dancing, alluring dance moves. The video is a computer-generated 3D render and Disney stylized. She is depicted in a cartoonish, anime-inspired style with exaggerated features, big expressive green eyes, and vibrant colors. The character has a slender build; her brown hair is in an updo. She is wearing a sleeveless, light yellow dress. She is flirting with the camera, being sexy, looking at the camera as if she is confused or curious, starting to smile!

    Add detailed texture work, dynamic shadows and godrays, bokeh, and lifelike animations to showcase the highest quality of CGI rendering, chiaroscuro lighting.

    AURORA - SLEEPING BEAUTY

    Pretty evenly balanced. Always prompt "Aurora Sleeping Beauty" for CGI style, and just "Sleeping Beauty" for a more cartoon style. Lower LoRA values give a more realistic style. Her blonde hair comes out more in her signature style when prompted.

    Sample prompt: Video of beautiful young CGI Aurora Sleeping Beauty, gorgeous, blonde hair, tiara, graceful movements, hanging out at a bar, getting drunk. Natural motion, realistic CGI, high budget cinema, blockbuster, perfect low light warm lighting with bloom from light sources, she seems to be having a really good time.

    SNOW WHITE

    Pretty evenly balanced. Helps if you prompt for her hair, dress and bow if you want them.

    Sample prompt: Video of beautiful young CGI Snow White, raven black hair, pale white skin, red bow in her hair, graceful movements, hanging out at a bar, getting drunk. Natural motion, realistic CGI, high budget cinema, blockbuster, perfect low light warm lighting with bloom from light sources, she seems to be having a really good time.

    ELSA

    Use low LoRA strength and increase as you feel comfortable, or add more LoRAs. I overcooked this one a bit; my generations were done around 0.45 LoRA strength and a low embedded guidance scale.

    Sample prompt: Video of beautiful young CGI Elsa, pretty white girl, graceful movements, hanging out at a bar, getting drunk. Natural motion, realistic CGI, high budget cinema, blockbuster, perfect low light warm lighting with bloom from light sources, she seems to be having a really good time.

    Description

    Should be pretty balanced and easy to use. "Cartoon CGI Belle" is the best trigger. I found it very helpful to add "detailed texture work, dynamic shadows and godrays, bokeh, and lifelike animations to showcase the highest quality of CGI rendering" to the prompts.

    FAQ

    Comments (28)

    azeli · Jan 24, 2025 · 2 reactions

    is it possible to merge them into a single lora? :O

    2770379 · Jan 24, 2025 · 3 reactions

    It may be possible. For me not yet. I've tried doing the training with multiple caption styles, and there is a ton of style and likeness bleed over. Basically every woman generated looks like a mish-mash of all of them. I won't quit trying, but at this point I haven't had any luck.

    ItsThatTimeAgain · Feb 9, 2025 · 2 reactions

    @Suppressor Well then, I guess you succeeded in mashing them all into one! Surprisingly, it's the same file size. Are the single ones higher quality output, or similar?

    2770379 · Feb 9, 2025 · 4 reactions

    @ItsThatTimeAgain It seems about the same to me. I was shocked, this thing trained for like a week, I expected it to be huge.

    twoldogs · Feb 9, 2025 · 3 reactions

    @Suppressor Same here. I tried many caption styles but it won't work. Amazing work!!! What's the trick to merging them into one?

    2770379 · Feb 9, 2025 · 6 reactions

    @twoldogs I think it has to do with dataset size and training time. I increased the size of the image set to 1.8 GB of images and 3.1 GB of video clips. I ran them through Joy Caption 2, then let it train for 8 days.

    I think the mistake I was making before was assuming I could train multiple concepts in the same amount of time as I trained single concepts. I lost track of the steps and epochs, but it was in six digits on steps the last time I remember looking.

    twoldogs · Feb 9, 2025 · 2 reactions

    @Suppressor I see. Do I need to use a separate folder for each concept, or should I use one large folder to store all the images and videos? Thanks!!!

    2770379 · Feb 9, 2025 · 3 reactions

    @twoldogs I dumped it all in one giant folder.

    I forgot to mention that I heavily captioned with trigger words, like "CGI Disney Princess Belle, Belle has long, wavy brown hair cascading over her shoulders, styled in a loose, elegant manner. Princess Belle's eyebrows are well-defined, and Disney Belle has a delicate, slightly upturned nose. CGI Belle is wearing a soft, golden-yellow dress with a V-neckline, which adds a warm, glowing effect to the image. CGI Princess Belle's lips are painted a soft pink, and Belle has a subtle, gentle smile."

    I did string searches for "her" and "she," and replaced them with the character's name trigger. Hope this helps.
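    The pronoun-to-trigger replacement described above can be sketched in a few lines of Python. This is a minimal sketch, not the author's actual script; the trigger phrase, the `.txt` caption file layout, and the folder path are assumptions.

    ```python
    import re
    from pathlib import Path

    # Hypothetical trigger phrase; substitute your own character's triggers.
    TRIGGER = "CGI Disney Princess Belle"

    def retrigger(text: str, trigger: str) -> str:
        # \b word boundaries keep "she" from matching inside words like
        # "shed"; IGNORECASE also catches "She"/"Her" at sentence starts.
        # Naively swapping "her" reads oddly for possessives ("her dress"
        # becomes "Belle dress"), so review the output captions by hand.
        text = re.sub(r"\bshe\b", trigger, text, flags=re.IGNORECASE)
        text = re.sub(r"\bher\b", trigger, text, flags=re.IGNORECASE)
        return text

    def retrigger_folder(caption_dir: str, trigger: str) -> None:
        # Rewrite every caption .txt file in the dataset folder in place.
        for path in Path(caption_dir).glob("*.txt"):
            path.write_text(retrigger(path.read_text(), trigger))
    ```
    
    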

    twoldogs · Feb 9, 2025 · 4 reactions

    @Suppressor Thanks a lot. I just used the name as the trigger word; as a result, I feel the trigger word did almost nothing, even after 50,000 steps.

    2988 · Feb 10, 2025 · 3 reactions

    @Suppressor Very good job, I'd say this is one of the best LoRAs so far imo.

    As for bleeding, I've been thinking about the concept of training LoRAs with two or more characters together. For example, if you want less bleeding, you would make sure to train Cloud and Tifa in many scenes together, interacting, not just alone. Maybe something worth trying if you haven't already.

    2770379 · Feb 10, 2025 · 2 reactions

    @Redbird That's a really good point. From some movie screen grabs, I did have a tiny bit of training material doing just that, but I didn't focus on it. I am sure that would beef it up and eliminate some of the bleeding. The weirdest thing I have found is if more than one princess is in a scene, it works way better if they are different ethnicities. Don't know why... Same ethnicity? You have to prompt extensively for the differences between the two. For your idea to bear fruit, I think I would have to actually create and generate scenes of multiple princesses, since except for "Ralph Breaks the Internet" and some promo videos, there is very little overlap between them all. Maybe as people start to generate art from it, some of that could contribute to training material.

    2988 · Feb 11, 2025 · 3 reactions

    @Suppressor Yeah, exactly. I've noticed the same: they have to be very different in ethnicity or looks to help with the bleeding.

    And yeah, I was thinking it would be hard to collect images with more than one of these characters together.

    It would work better with FF7, Friends, or something, because all those characters are always together, so it's easier to find images of them. But that won't help much with new ideas of new characters being together for the first time, haha.

    K3NK · Feb 12, 2025 · 1 reaction

    I'm trying something similar. I was training to make a single LoRA of the two I have published (not merging; I wanted to train it), and the results are not convincing me. With 150 3-second clips and num repeats 3, at step 4000 the loss started to climb. I think something is not working. Do you have the TensorBoard graph available for the all-in-one princess training? I was asking GPT about it, and it told me that combining drastically different scenes and POVs can affect the training process, making the LoRA hallucinate when generating instead of generating something specific.

    2770379 · Feb 12, 2025 · 2 reactions

    @K3NK Sorry, I did not save the TensorBoard logs for this. I would have if I'd known it was helpful; what did you hope to gain from looking at it? I can only share my experience, and I strongly suspect ChatGPT has no idea how to do this well. Though I did not use radically different POVs, I did make sure to get images of all sides of every character in my LoRA. What I do know is, I was at least at 100k steps before things started to shape up. Insane patience required.

    2770379 · Feb 12, 2025 · 2 reactions

    @K3NK I was thinking further on your models, I checked them out and installed them on my system so I could check... I strongly suspect you are dealing with captioning issues. Hunyuan inherently understands a lot of POV from my experience, though it tends to screw up with "from behind" and "low angle shot." Did you use standardization images or videos in your training?

    K3NK · Feb 12, 2025 · 1 reaction

    I'm using videos, and yes, I was captioning manually a bit wrong at first. The later ones used JoyCaption, and now I'm training one where I captioned 118 clips with LLaVA-Video-7B-Qwen2-captioner. Although people complain that POV is hard to get, it actually wasn't for me. POV wasn't working in your tests? I guess for action LoRAs you need to keep the dataset more focused on a single action, or maybe train for 8 days.. 😅

    When was the first time you tested? I mean, I guess you tested after 24h of training; were the results bad? In the one I told you about, the loss started to rise in TensorBoard so I cancelled it, and the test I did came out horrible. I guess I will stick to simpler single-action LoRAs... I wanted to create an all-in-one. Your model is really amazing, but maybe mine needs better captions and a longer training? The truth is that I have seen worse results in the tests than with the ones I did with sideview and 69pov.

    2770379 · Feb 13, 2025 · 3 reactions

    @K3NK So yeah, to directly address the training, here is exactly what happened. I thought I had something around the 8th epoch. After I put it through rigorous prompting, I realized it was shite and I couldn't release it. Around 12 epochs, I was like, hmmmm, this is a possibility, but I was getting ghosting, artifacts, and lack of likeness. I let it lie for 4 more days, just training all day. Around the 56th epoch, it started being REALLY f**king solid. Like unbreakable, even with LoRAs. I let it go for 2 more days, and finally after testing I decided it was baked enough. In all honesty, with some of the feedback I got, there is going to be a final train; I may just let it go for 2 weeks and see what I get.
    IMO, it's not like Flux or SD3 where you can overbake it; it feels more like one of those models where the more you are willing to put into it, the more you will get.

    K3NK · Feb 13, 2025 · 1 reaction

    @Suppressor How low did you get with the loss??
    If you still have the output folder, you could just check TensorBoard; I really wanted to see the loss graph at the end of the training.

    azeli · Feb 13, 2025 · 2 reactions

    @Suppressor That's so interesting, as the image-based trainings I've tried just freeze after like 1500 steps; no movement, just images at that point. Maybe I need to throw some clips into the mix or something.

    K3NK · Feb 13, 2025 · 1 reaction

    @Suppressor How big was your dataset? Videos and pictures? I'm using like 58 clips of 3 sec, 24 fps, 256x256, and after 7 hours the loss stabilizes and doesn't go down. I'm getting so sick of this... xD This one doesn't even have the mixture; it's just POV BJ clips... but the graph looks almost the same as the others... Were you using a learning rate of 2?

    2770379 · Feb 13, 2025 · 2 reactions

    @azeli Yeah, you will get freezing on static images after a point in my experience; gotta have some clips in there, IMO.

    2770379 · Feb 13, 2025 · 2 reactions

    @K3NK My dataset was 1.8 GB of images and 3.1 GB of video clips. Learning rate of 2, 100 warmup steps, video_clip_mode = 'single_middle'. I had many different sizes and qualities of images and video.
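    For reference, the settings quoted above might look something like this in a TOML training config. This is a hedged sketch only: the key names follow diffusion-pipe-style trainer conventions and are assumptions, the optimizer type and dataset path are not stated anywhere in this thread, and "learning rate of 2" is reproduced as written (check what scale your trainer expects).

    ```toml
    # Hypothetical sketch; key names and section layout may differ in your trainer.
    warmup_steps = 100

    [optimizer]
    type = "adamw_optimi"   # assumption: the optimizer is never stated above
    lr = 2                  # "learning rate of 2" as quoted; verify the units your trainer expects

    [[directory]]
    path = "/data/princesses"          # hypothetical path: everything dumped in one folder
    video_clip_mode = "single_middle"  # take a single clip from the middle of each video
    ```
    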

    logenninefingers888 · Feb 14, 2025 · 2 reactions

    Wow that's a lot of data! What rank was this? 32? Just to add some numbers on loss, it seems getting down to ~0.06-0.07 is good, and 0.05 seems overtrained.

    K3NK · Feb 19, 2025 · 2 reactions

    @Suppressor If you train again, please try to set up TensorBoard; I'm very curious to see how the graph looks on such a big dataset.

    2770379 · Feb 19, 2025 · 1 reaction

    @K3NK I'm trying to remember: is it just --log_with tensorboard in my train script to get it to write data to the directory?

    2770379 · Feb 19, 2025 · 2 reactions

    @logenninefingers888 Yep, 32.

    K3NK · Feb 19, 2025 · 1 reaction

    @Suppressor For me it's going to /output/folder and running the command:

    tensorboard --logdir=.

    2770379 · Feb 20, 2025 · 1 reaction

    @K3NK Hmm, I don't run locally; I'm using a VS Code tunnel to a training cluster. I think I can do it with the script. I will try this evening.

    LORA
    Hunyuan Video

    Details

    Downloads
    970
    Platform
    CivitAI
    Platform Status
    Available
    Created
    1/23/2025
    Updated
    5/7/2026
    Deleted
    -
    Trigger Words:
    CGI Belle