📣 FFUSION AI SD2.1 - 768 BaSE Public 1.0.0 Release is Here!
Diffusers available at https://huggingface.co/FFusion

STABLE DIFFUSION 2.1 768+ MODEL
Before complaining about usage: if you haven't used 2.1 before, stick to 1.5 models.

Introducing FFusion.AI-beta-Playground on Hugging Face Spaces!
https://huggingface.co/spaces/FFusion/FFusion.AI-beta-Playground
https://ffusion.ai/
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
We're thrilled to announce the launch of our new application, FFusion.AI-beta-Playground, now live on Hugging Face Spaces! This cutting-edge tool harnesses the power of AI to generate stunning images based on your prompts. 

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
With FFusion.AI-beta-Playground, you can:
Generate images from a variety of pre-trained models including FFUSION.ai-768-BaSE, FFUSION.ai-v2.1-768-BaSE-alpha-preview, and FFusion.ai.Beta-512.
Experiment with different schedulers to fine-tune the image generation process.
View the generated images right in your browser and save them for later use.
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Our application is built on top of the diffusers library and uses StableDiffusionPipeline for image generation. It's powered by Gradio for a user-friendly interface. And here's the exciting part: very soon, it will run on a CUDA-enabled environment for optimal performance, thanks to our partners at RUNPOD! 

Stay tuned for this upcoming enhancement that will take your image generation experience to the next level. We're thrilled to be partnering with RUNPOD.io to bring you this cutting-edge technology.
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
To get started, simply enter your prompt, select the models you want to use, choose a scheduler, and let our application do the rest.
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Check out FFusion.AI-beta-Playground now at FFusion/FFusion.AI-beta-Playground and start creating your own unique images today! 

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
We're excited to see what you'll create with FFusion.AI-beta-Playground. Your feedback is invaluable to us, so please don't hesitate to share your thoughts and suggestions. Enjoy exploring the possibilities of AI-powered image generation! 

🎭 We are thrilled to launch the public beta release of FFUSION AI, though we want to clarify that it is currently limited in breadth. Having been trained on just a fraction of our full image collection (20%), the model's capabilities are not yet fully realized. This early version is primarily intended for experimenting with various prompt combinations and initial testing.
💡 While we're committed to delivering the highest level of excellence, we want to highlight that our model, notably the UNet component, is still developing its proficiency with certain objects and faces. But fear not: we're actively fine-tuning these areas as we progress towards the final release.
A huge shout out to our Reddit community for their support in alpha testing and for helping the text encoder respond to some exciting fuse ideas. We couldn't have come this far without you!
💡 Your contribution during this beta testing phase is extremely important to us. We invite you to explore the model extensively, experiment with it, and not hesitate to report any prompts that don't meet your expectations. Your feedback is our guiding light in refining the performance and overall quality of FFUSION AI.
⚠️ Attention: This model is based on Stable Diffusion 2.1 - 512 and is designed for optimal performance up to a resolution of approximately 600-700 pixels. For larger image sizes, we recommend upscaling independently or waiting for our final release, which is just around the corner and will improve performance and support for higher resolutions.
🔥 Thank you for being part of the FFUSION AI beta testing community. Your support, feedback, and passion inspire us to keep developing a pioneering tool set to revolutionize creativity and visualization. Together, we can shape the future of storytelling and creativity.
🎮 Why not add some effects to your favorite prompts or fuse them together for a surreal twist? (Please note: Pen Pineapple Apple Pen effects and FUSIONS are excluded in this beta version)
With over 730.9449 hours of dedicated training sessions, our Fusion AI model offers a wealth of data subsets and robust datasets developed in collaboration with two enterprise corporate accounts for Midjourney. We also pride ourselves on efficient GPU utilization, making the most of our partnership with Idle Stoev, Source Code Bulgaria, Praesidium CX & BlackSwan Technologies.
Full transparency on our extensive 700,000-image dataset, training methodologies, classifications, and successful experiments is on its way. This information will be released shortly after the final version, further establishing FFUSION AI as a trusted tool in the world of AI-powered creativity. Let's continue to imagine, create, and explore together!
Model Overview: Unleashing the Power of Imagination!
FFUSION AI is a state-of-the-art image generation and transformation tool, developed around the leading Latent Diffusion Model. Leveraging Stable Diffusion 2.1, FFUSION AI converts your prompts into captivating artworks. Discover an imaginative landscape where ideas come to life in vibrant, surreal visuals.
Developed by: Idle Stoev, Source Code Bulgaria, Praesidium CX & BlackSwan Technologies
Shared by: FFusion AI
Model type: Diffusion-based text-to-image generation model
Language(s) (NLP): English
License: CreativeML Open RAIL++-M License
Model Use: Enabling Creativity and Exploring AI Frontiers
Designed for research and artistic exploration, FFUSION AI serves as a versatile tool in a variety of scenarios:
Investigating biases and constraints in generative models
Unleashing creativity in artistic endeavors
Infusing AI-driven innovation into educational or creative tools
Furthering research in the exciting field of generative models
Repository: https://github.com/1e-2
Demo: https://huggingface.co/spaces/FFusion/FFusion.AI-beta-Playground
Out-of-Scope Use and Prohibited Misuse:
Generating factually inaccurate representations of people or events
Inflicting harm or spreading malicious content such as demeaning, dehumanizing, or offensive imagery
Creating harmful stereotypes or spreading discrimination
Impersonating individuals without their consent
Disseminating non-consensual explicit content or misinformation
Violating copyrights or usage terms of licensed material
Model Limitations and Bias
While our model brings us closer to the future of AI-driven creativity, there are several limitations:
Achieving perfect photorealism or surrealism is still an ongoing challenge.
Rendering legible text can be difficult without an additional ~30 minutes of fine-tuning on your brand.
Accurate generation of human faces, especially far away faces, is not guaranteed (yet).
Model Releases
We are thrilled to announce:
Version 512 Beta: Featuring LiTE and MiD BFG model variations
Version 768 Alpha: BaSE, FUSION, and FFUSION models with enhanced training capabilities, including LoRa, LyCORIS, Dylora & Kohya-ss/sd-scripts.
Version 768 BaSE: A BaSE-ready model, making it easy to apply the more than 200 LoRA models built and trained along the way.
Environmental Impact
In line with our commitment to sustainability, FFUSION AI has been designed with carbon efficiency in mind:
Hardware Type: A100 PCIe 40GB
Hours used: 1190
Cloud Provider: CoreWeave & Runpod (official partner)
Compute Region: US Cyxtera Chicago Data Center - ORD1 / EU - CZ & EU - RO
Carbon Emitted: 124.95 kg of CO2 (calculated via Machine Learning Impact calculator)
Note that all LoRA and subsequent models are based on this initial training.
Model Card Authors
This model card was authored by Idle Stoev and is based on the Stability AI - Stable Diffusion 2.1 model card.
Model Card Contact
Download the FFUSION AI diffusers - 768 BaSE Release here.
🔬 Intended Use: From Research to Artistry 🎨

Description
📣 FFUSION AI - 768 BaSE Public alpha Release is Here!
Download the di.FFUSION.ai-v2.1-768-BaSE-alpha
🧨 DIFFUSERS
at https://huggingface.co/FFusion/
FAQ
Comments (13)
Do you need the config file if you've downloaded from hugging face?
If you have downloaded the model checkpoint file (.ckpt) from Hugging Face, then yes, you would need the corresponding config file to properly load and use the model.
However, if you are referring to training with the diffusers available from Hugging Face, you can either perform a complete Git pull to obtain all the necessary files, including the config file, or you can directly specify the model name as FFusion/di.FFUSION.ai-v2.1-768-BaSE-alpha when working with the diffuser model. This way, the model will be fetched from the Hugging Face model repository, and you won't need to provide a separate config file.
A lot of waffle but nothing behind it. I get better results with RMADA with dramatically fewer prompts.
I'm really struggling to understand why anyone would want to explore or make any content for SD 2.1 models. It's hot garbage. I've never gotten images that come close to what we can do in SD 1.5; I'd rather keep expanding on that and building out that universe. Is SD 2.1 dying? Just what I've noticed in general.
Well... probably the reason you never get images that come close to what we can do in SD 1.5 is precisely because nobody wants to explore or make any content for SD 2.1 :P
2.1 is 10 times better than 1.5 IMO. No one makes any content on it... idk why. Maybe because it requires more resources.
@Geekyzilla yes, 2.1 is way more amazing for attention and prompt accuracy, e.g. food/scenery.
It's scared to draw a torso with limbs or a full body; you usually get uncanny limbs or a distorted abdomen, etc. It needs more anatomy lessons, sadly.
@Memeater yeah, I watched a vid on YouTube where someone explained that because they chose to censor SD2, it really messed up anatomy in general, from the face to the body and limbs, etc. The way the models learn is from full nude anatomy, and then they build up from there. Then with a few LoRAs in SD 1.5 you can easily get super amazing food or scenery. There are just so many add-ons available.
@DukeNukem47 It's really impressive to see how people try to sell something nowadays. Instead of selling what the customer or the majority wants, they simply want to sell and choose what they want us to buy; they want to sell what they want, not what we want. They want us to swallow all this shit with a smile on our face like it's the most delicious thing in the world and, if possible, stand up and applaud.
It is precisely the challenge of fixing what's broken and improving it that drives people to do that. And it's great.
What's the point of fine-tuning the already perfect and almighty SDXL is a better question.
@seedmanc unfortunately that comment was from 4 months ago; there was no XL then. It's the infamous obsolescence argument: what's the point of tuning XL if, say, XXL or XL2 appears in the future and makes the current one obsolete, etc.
Details
Files
ffusionAISD21_768BaseAlpha.yaml
Mirrors
Tilt Shift LoHa.yaml
Speedtail-000002.yaml
c5x7-21-v01.yaml
Countach lp800.yaml
CitroenC6-21-v1.yaml
providence_2112Anchor.yaml
allworkforkrowk_v02GammaFasterlr512.yaml
AnyTayJoy-woman_Vparam_Reged.yaml
idle_Potion_Generator.yaml
evo_car_D-adaptation-LoRA_SD21-768_v044.yaml
ffusionAISD21_768BaseAlpha.yaml
ffusionAISD21_baseV100.yaml
evo_car-dm_plaza_D-adaptation-LoRA_SD21-768_v045.yaml
Orc_reg_128.yaml
providence_2111Anchor.yaml
F1 LM-000004.yaml
Speedtail-000004.yaml
Amg One.yaml
evo-car-lora-sd21-768_v02.yaml
coloringBook_coloringBook.yaml
shurima-1-000010.yaml
providence_2110Prelude.yaml
gdsingld.yaml
Aston Martin DBX Interior-000002.yaml
LBWK360MODENA-000001.yaml
m14af1car70s.yaml
drccitycar_v0.3.yaml
Rx7FD-000002.yaml
Speedtail-000001.yaml
Aston Martin DBX-000002.yaml
anime21_v15.yaml
Aston Martin DBX-000001.yaml
Rx7FD-000001.yaml
pixhell_v20.yaml
mcaf1car70s.yaml
DaveScare-v2.yaml