CivArchive
    ECLIPSE XL - v1.0
    NSFW

    Changelog at the bottom of the description. TL;DR: v1 is better than v1.3 (test).

    Eclipse XL v1.0 is a fine-tuned model trained on 63K images, aimed at creating a higher-fidelity base Anime XL model. This was a collaborative effort by Wasabi and Hecatonchirea. We took a token-based approach, using a dataset consisting primarily of Booru-based tags, along with a few additional Rule34 tags and original tags. The tags were pruned and cleaned with our tag-editing application, HW Tagger (link to GitHub and tutorial), with semi-manual inspection.

    Technically, the base model is Pony v6, so natural language prompting and Pony v6-based Loras will likely work with this model (although we don't recommend using Pony's quality tags, for reasons explained in the technical section). We introduced new tags for lighting (composition tags), new quality tags, and various other features to achieve better control over our generations. Our focus was not on characters or styles, as people will create Loras anyway, and using Loras will produce better images.

    We had many subgoals for the project, such as improving lighting, enhancing sensitivity to tags, overwriting Pony's knowledge, separating the styles tied to tags, achieving a consistent and flexible style, and preventing the model from being style-hungry like Pony. We will provide more in-depth details in the technical article.

    We operate without any funding or sponsors, so if you appreciate the model, any amount of tips would be highly appreciated. You can also support us through our Patreon.

    This versatile model is capable of generating both SFW and NSFW images. Please use it responsibly. If you are unable to run XL models and haven't heard of SD Forge, I highly recommend looking it up, as it may help you run XL models more efficiently. We also recommend checking out the related article, as it contains the CSV of the tags used in this model, which you can drop into your webui's tag-autocomplete extension.

    How to use:

    Recommended starter prompt:

    Positive prompt (prompt like any other tag-based model):

    masterpiece, best, great, ...

    Negative prompt (NO NEED for a long negative):

    worst, worse, average, signature, watermark
    Recommended resolutions (width, height):

    (768, 1280), 3:5 ratio
    (768, 1344), 4:7 ratio
    (832, 1216), 13:19 ratio
    (896, 1152), 7:9 ratio
    (960, 1088), 15:17 ratio
    (1024, 1024), square ratio
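    As an aside, the resolution list above can be snapped to from an arbitrary target aspect ratio with a small helper. This is a sketch of ours, not part of the model card; the function and constant names are hypothetical:

```python
# Hypothetical helper: pick the recommended (width, height) pair whose
# aspect ratio is closest to a desired width/height ratio.
RECOMMENDED = [
    (768, 1280),   # 3:5
    (768, 1344),   # 4:7
    (832, 1216),   # 13:19
    (896, 1152),   # 7:9
    (960, 1088),   # 15:17
    (1024, 1024),  # square
]

def snap_resolution(target_ratio):
    """Return the recommended (w, h) whose w/h is closest to target_ratio.

    The list is portrait-oriented; swap the pair for landscape output.
    """
    return min(RECOMMENDED, key=lambda wh: abs(wh[0] / wh[1] - target_ratio))
```

    For example, snap_resolution(3 / 5) returns (768, 1280).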

    We recommend CFG between 5 and 8, sampling steps above 20 (we use 36 steps), and Clip Skip 1 (Pony says to use 2, but Clip Skip is disabled for XL training in kohya, so using 2 doesn't make sense; people are probably gaslighting themselves based on their SD 1.5 experience).

    Special tag info:

    We introduced new tags based on how well (or poorly) their meaning is understood by the text encoders (ViT-L and BigG) in the XL model. Most new tags are 1-2 tokens long, so the information is better absorbed during training.

    quality tags:

    masterpiece, best, great, good, average, worse, worst

    The quality tags were assigned using the aesthetic scorer from imgutils. We're aware of its many biases, so we manually corrected them. Although flawed, it's better than other options, so we ran with it; more info in the technical details article.

    additional detail tags:

    dense, intricate

    These tags were added to images with many details or parts; some images had both:

    • intricate : the details on objects/subjects are tightly packed and not a simple design (ex: lingerie, complex dresses, designed armor trims, multiple accessories, etc.)

    • dense : an image with multiple objects/subjects that make it more densely packed

    lighting tags:

    dim composition, ambient composition, dun composition, dark composition, contrast composition, bright composition, vibrant composition, dark background

    These are not necessary for basic lighting; they were added to images with very extreme lighting or darkness. We followed the definitions below when tagging images for specific lighting scenarios so our tagging would be consistent during training, but you can mix them in generation to get interesting effects:

    • dim composition : Dark but fully visible image with multiple sources of light

    • ambient composition : Dark but fully visible image with a single source of light

    • dun composition : Dark but fully visible image with diffused light and no apparent light source

    • dark composition : Fully dark, no light source, close to a pitch-black image

    • contrast composition : Contains both dark and bright parts; the dark and bright don't necessarily need to work together (ex: heaven & hell, day & night, or just a big shadow)

    • bright composition : Very bright image with strong highlights (close to white highlights)

    • vibrant composition : Image with high-intensity (saturated) colors across the majority of the image, independent of light source
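    Since the tags can be mixed in generation, a combined prompt could look like this (an illustrative prompt of our own, not one provided by the authors):

    masterpiece, best, 1girl, night, city street, dim composition, vibrant composition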

    style tags:

    illustration style, western style, anime coloring, realistic, photorealistic, bold lines, 3d, 3d blender, 3d koikatsu, 3d mmd, 3d filmmaker

    These tags were introduced/utilized to absorb styles that differ from what we wanted in the base model. We also tagged specific styles present in our dataset to properly separate them from the main style (list in the technical details). If a generation deviates from the base style, you can include tags like "3d" and "western style" in the negative prompt. Sometimes the base Pony's knowledge leaks out, but we plan to document and fix these issues as we identify more untrained tags.

    • illustration style : anime style image with basic shading (little to no gradients used for shade)

    • western style : any western-styled images that don't match the base style

    • anime coloring : images with anime coloring

    • realistic/photorealistic : this model is not intended for realism, but we used these tags for hyperreal illustrations or photo-looking images, following danbooru's definitions

    • bold lines : common with western style but also in some illustrations; used for images with thick lines

    • 3d : Images tagged with just "3d" are 3d images that didn't fall under the categories below.

      • 3d blender : 3d images made using blender

      • 3d mmd : 3d images made using mmd

      • 3d koikatsu : 3d images made using koikatsu

      • 3d filmmaker : 3d images using 3d filmmaker

    Translated tags:

    Since we don't live in a perfect world where all tokens are properly learned without concept bleeding, we made adjustments to a few tags to train them accurately. The reasoning and a full list of translated tags can be found in the technical details article, but here are a few examples:

    • "torii" -> split into "red torii" and "stone torii"

    • "clothed <gender> nude <gender>" -> split into "nude <gender>" and "clothed <gender>" to better absorb them as separate concepts

    Known problems:

    Some tags were left untouched or are not sufficiently trained by our current model, causing it to invoke the base Pony's knowledge (e.g., "bimbo" and other minor tags not in the dataset). We plan to add images with these concepts in future versions to overwrite the base knowledge. The pseudo-signature problem from the base Pony model is weaker in our model but still present. We know it can be resolved with a combination of brute force and clever strategies to prevent bleeding, so we expect to solve it in the next version.

    Version history, and what to expect:

    version history:

    • Eclipse XL v2: TBA

    • Eclipse XL 1.3 (published 6/29/2024): changed the LR for TE2 to be wayyyyy lower; this retained some Pony, but the model seems a bit undercooked compared to v1

    • Eclipse XL 1.1 and 1.2: tested new config values and ways to prevent deterioration of concepts (Edit: we stopped training halfway for these because their quality was worse)

    • Eclipse XL v1 (published 5/31/2024): this includes phase 0 ~ phase 2 of our project, using finalized config

    • Beta version (unnamed): phase 0 and phase 1 dataset, testing configs

    Eclipse XL v2 will include our phase 3 dataset which will incorporate the following:

    • Weapons (swords, gun, etc)

    • More fantasy races, furry & non furry, robots (gundam and stuff)

    • More angles

    • We currently have a long list of concepts to add; we will take feedback on things that don't work in the current version and add to the list based on priority.

    What not to expect:

    • We want to make a good "base" model, so anything that's entirely closed within a small circle/fandom is probably not on the list.

    • We don't care about supporting some random character from a minor show. Just imagine training every single character listed on Mudae: ~50 imgs per char x 110,000 characters = ~5.5 million imgs. That's a task for Lora creators, or look towards Pony v7.

    • The same goes for random movie references and the like; the required dataset size stacks up quickly, and they're a better task for Loras.

    Version log:

    Current best model: Eclipse V1

    Current test version: V1.3, we're testing LR and scheduler and stuff

    Acknowledgements:

    Authors: Wasabiya, Hecatonchirea

    Testers: Nebuchadnezzar, and other anonymous people

    We thank Anzhc and Shippy for helping us get started. We thank the people at deepGHS for their Python libraries and models, which helped a lot. And many thanks to all the anonymous people who were involved in or helped shape this project.

    Licenses:

    This model is licensed under a modified Fair AI Public License 1.0-SD (https://freedevproject.org/faipl-1.0-sd/) license. Since we used Pony v6 as the starting point, the following modification to the license holds true: You are not permitted to run inference of this model on websites or applications allowing any form of monetization (paid inference, faster tiers, etc.). This applies to any derivative models or model merges.

    This is a first draft of the description. We will update the details if we find any errors or need to add more clarification.

    last edited: 6/29/2024

    Comments (21)

    judas2991Jun 1, 2024
    CivitAI

    Civitai generator gives pretty bad results compared to examples provided here. I wonder if webui will be different?

    Wasabiya
    Author
    Jun 1, 2024· 2 reactions

    maybe because u didn't use the recommended negative? I didn't see any in your post. We recommend adding "worst, worse, average" in the neg

    hamedsheygh205Jun 1, 2024· 3 reactions

    well just use web ui and read manual will fix it!

    ktiseos_nyxJun 1, 2024· 1 reaction

    WebUI will consistently produce better results; WebUI has hires and other things, whereas at the moment, if the Generator is in a bad mood, it will not work right :)

    dims2Jun 1, 2024· 4 reactions
    CivitAI

    There is definitely a learning curve, but it seems worth it, especially for NSFW situations (still messing around, and not tested SFW things yet).

    And wow, I just love dark composition!

    ahllokamkhao104Jun 2, 2024· 7 reactions
    CivitAI

    This model is a very nice one, and very good that you guys took initiative on not only puting effort towards making a base model, but also made a wonderful guide on how to prompt and use it, with even recommended settings for making loras with it!!

    We need more creators like you guys in community!

    One thing I would like to ask is if you guys could expand more on the yaoi content of the model, I notice on the tags frequency that there is very little of it, I think its less niche than furry or random fandoms, meaning that it would be good to augment the dataset with it!

    HecatonchireartJun 2, 2024· 1 reaction

    Hello thanks for the feedback, yaoi is a problem yes, we found it hard to find yaoi that wasn't bara or femboy, and since we don't want to flood the meaning of 1boy/2boys with specific body types, we need to find balance between femboy and bara content. Neutral body types in yaoi is not easy to find.

    thebrownsauce184Jun 3, 2024· 1 reaction

    As a point of order for full inclusion and representation, I support this request.

    thebrownsauce184Jun 3, 2024

    @Hecatonchireart Oh, that makes sense. I don't deal with yaoi, so I wouldn't even know where to point you. Perhaps @ahllokamkhao104 has a source that you could use?

    augmentedidolJun 4, 2024· 2 reactions
    CivitAI

    This model is awesome! Lots of flexibility with different styles.

    JustTooLazyJun 7, 2024· 2 reactions
    CivitAI

    I have tried out different artist style preset, and personally think, (artistdd) work the best. I would recommend to have a try, me personally use 0.7 weigthing.

    HITTRAKKZJun 7, 2024· 3 reactions
    CivitAI

    fine tuned on 60k images but didn't bother tagging the artist or characters (excluding the original characters) seems like a waste to me, sure you can just create lora's for styles and characters but if the style and character are already known from the model then you wouldn't have to use any lora's which would speed up generation time or allow you to use other concept/style lora's keeping the lora's you use down to 2-3 max and i know theres taggers that will tag the character and artist name thats listed on gelbooru if it has one

    HecatonchireartJun 7, 2024· 1 reaction

    Hello, We did tag artists and characters, you just need to read a bit the explanation but artists tags are there, also we do tags characters it's just that we won't SPECIFICALLY support any character, we prefer supporting concepts and styles in general rather than characters, but characters are tagged when they are present in our images (you can see it in the csv file that stores most of the tags that were using during training, look at the explanation article to have more info)

    HITTRAKKZJun 8, 2024· 3 reactions

    @Hecatonchireart oh thanks for clarifying that, when I read the information about the training I first thought there wasn't any tagging of any artists or characters

    NybaJun 7, 2024· 3 reactions
    CivitAI

    Pretty good model, especially composition. Dark composition and outline, my favorite. But sometimes it lacks agility or how to say it. It can't generate everything as pony does.

    HecatonchireartJun 7, 2024

    Hello, thanks, we are testing several things relating to encoders to see if we can alleviate this problem.

    YoshistriderJun 10, 2024· 7 reactions
    CivitAI

    Model seems to do things that other pony models don't, especially when it comes to lighting versatility.

    Might not work with every Pony style, but the ones I tested worked fine without issue, including some Pony clothing loras.

    I don't recommend using this model if you want something that will work out the gate- it takes a few minutes of reading to get started on handling this ckpt correctly. Fortunately, Wasabi and friends have given us a wealth of resources to learn from for this model.

    Probably my favourite pony CKPT i've run into so far. Strongly recommend. I'll be doing lots more stuff with this model i'm sure.

    EBIXJun 10, 2024· 3 reactions
    CivitAI

    i ignored this model and came back to see what it is after i tried it on onsite generator , and DAMN its some pony retrain mode. also its not stable with stuff (i tried on onsite generator so issue might be from my side) . waiting for 1.1 or 1.2 .

    Wasabiya
    Author
    Jun 11, 2024

    Thank you for testing out our model. I saw other comments about the onsite generator result being whack. If you can share what settings/prompts u were using I would appreciate it. I want to get some info on what worked and what didn't

    LizzyRascalJun 11, 2024· 5 reactions
    CivitAI

    This is an odd one, but it's very much worth trying.

    Beautifully versatile checkpoint... It can do a wide range of styles, and makes it easy to control things like lighting and composition.

    It's a Pony Diffusion checkpoint, but it doesn't rely on the "score_9, score_8" type of tagging, and seems to still work with most pony LoRAs.

    It's much more specialized in anime-style illustrations, but still has the capability to generate MLP and furry art, so I would consider this a compliment to Pony rather than a replacement.

    I find this model to be a good combination of PonyDiffusion and other anime checkpoints like animagineXL.

    stable_diffusion_espanolJun 27, 2024· 11 reactions
    CivitAI

    Your model is great and it deserves more attention!! I created this review (in spanish) on my channel: https://youtu.be/f6ESjdImcWc Thanks for your model!!

    Checkpoint
    Pony

    Details

    Downloads
    1,709
    Platform
    CivitAI
    Platform Status
    Available
    Created
    5/31/2024
    Updated
    5/13/2026
    Deleted
    -
