🧬 Live Wallpaper Fast Fusion – 8 to 10 Step Edition
Live Wallpaper Fast Fusion is a high-performance merged model that brings together the strengths of:
🖼️ Live Wallpaper LoRAs – two custom LoRAs trained to produce fluid motion, parallax depth, and anime/game-style aesthetics.
⚡ CausVid LoRA – enables ultra-fast video generation in just 8 to 10 steps while preserving high visual quality (https://github.com/tianweiy/CausVid; Wan21_CausVid_14B_T2V_lora_rank32_v2.safetensors · Kijai/WanVideo_comfy at main)
🎬 AccVid LoRA – improves motion accuracy and dynamics for expressive sequences (aejion/AccVideo: Official code for AccVideo: Accelerating Video Diffusion Model with Synthetic Dataset; Wan21_AccVid_T2V_14B_lora_rank32_fp16.safetensors · Kijai/WanVideo_comfy at main)
🎥 MoviiGen LoRA – adds cinematic depth and flow to the animation, enhancing visual storytelling (ZulutionAI/MoviiGen1.1: MoviiGen 1.1: Towards Cinematic-Quality Video Generative Models; Wan21_T2V_14B_MoviiGen_lora_rank32_fp16.safetensors · Kijai/WanVideo_comfy at main)
🧠 Wan I2V 720p (14B) base model – provides strong temporal consistency and high-resolution output for expressive video scenes.
There are four files available for download: an fp8 version and three GGUF quantizations (Q8, Q6, Q4). Civitai offers no way to label GGUF quantization levels on file uploads, so the GGUF files are mapped to the closest available labels: the file marked fp32 is Q8, the file marked fp16 is Q6, and the file marked nf4 is Q4.
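Because the labels on the download page are misleading, the mapping can be written out literally (a plain reference table in Python; the dictionary name is my own):

```python
# Civitai file label -> actual GGUF quantization of the download.
# Civitai has no GGUF quant labels, so the closest precision tag was reused.
LABEL_TO_QUANT = {
    "fp32": "Q8",  # the file marked fp32 is really the Q8 GGUF
    "fp16": "Q6",  # the file marked fp16 is really the Q6 GGUF
    "nf4": "Q4",   # the file marked nf4 is really the Q4 GGUF
}
```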
This fusion results in a versatile and powerful video generation model, capable of producing short cinematic clips (2 to 5 seconds) with smooth, natural motion and rich visual detail. While inspired by live wallpaper aesthetics, the model is designed for short, expressive animations ideal for storytelling, dynamic backgrounds, and ambient scenes.
❌ Do not reapply the CausVid, AccVid, or MoviiGen LoRAs – they are already baked into the model, and reapplying them may degrade results.
Recommended CFG: 1
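As a minimal illustrative sketch (the dictionary keys and helper below are my own naming, not any tool's API), the recommended settings can be encoded as a sanity check for whatever pipeline or UI you drive programmatically:

```python
# Recommended sampler settings for this merge, taken from the notes above.
# Key names are illustrative; map them onto your own pipeline's fields.
RECOMMENDED = {
    "cfg_scale": 1.0,  # the baked-in CausVid/AccVid distillation expects CFG 1
    "steps_min": 8,    # the merge is tuned for 8 to 10 sampling steps
    "steps_max": 10,
}

def check_settings(cfg_scale: float, steps: int) -> list:
    """Return warnings for settings that deviate from the recommendations."""
    warnings = []
    if cfg_scale != RECOMMENDED["cfg_scale"]:
        warnings.append(
            f"CFG {cfg_scale} differs from the recommended {RECOMMENDED['cfg_scale']}"
        )
    if not RECOMMENDED["steps_min"] <= steps <= RECOMMENDED["steps_max"]:
        warnings.append(f"{steps} steps is outside the tuned 8-10 range")
    return warnings
```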
🎨 You can safely use additional LoRAs for extra style or effects – feel free to experiment.
🛠️ Suggested Caption Workflow (LLM + Template)
To maximize output quality, you can use any LLM (ChatGPT, Gemini, Claude, etc.) with the following prompt template to generate motion-aware captions for your images:
You are an expert in motion design for seamless animated loops.
Given a single image as input, generate a richly detailed description of how it could be turned into a smooth, seamless animation.
Your response must include:
✅ What elements **should move**:
– Hair (e.g., swaying, fluttering)
– Eyes (e.g., blinking, subtle gaze shifts)
– Clothing or fabric elements (e.g., ribbons, loose parts reacting to wind or motion)
– Ambient particles (e.g., dust, sparks, petals)
– Light effects (e.g., holograms, glows, energy fields)
– Floating objects (e.g., drones, magical orbs) if they are clearly not rigid or fixed
– Background **ambient** motion (e.g., fog, drifting light, slow parallax)
🚫 And **explicitly specify what should remain static**:
– Rigid structures (e.g., chairs, weapons, metallic armor)
– Body parts not involved in subtle motion (e.g., torso, limbs, unless there's idle shifting)
– Background elements that do not visually suggest movement
⚠️ Guidelines:
– The animation must be **fluid, consistent, and seamless**, suitable for a loop
– Do NOT include sudden movements, teleportation, scene transitions, or pose changes
– Do NOT invent objects or effects not present in the image
– Do NOT describe static features like colors, names, or environment themes
– Return only the description (no lists, no markdown, no instructions)
Use the output from the LLM directly as your video prompt to ensure motion relevance and temporal coherence.
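For example, the template can be dropped into a chat-style request as a system prompt. The `build_caption_request` helper and message shape below are a hypothetical illustration of a typical multimodal chat API, not any specific vendor's SDK; only the template wording comes from this card:

```python
# Abbreviated copy of the caption template above; paste the full text in practice.
CAPTION_TEMPLATE = (
    "You are an expert in motion design for seamless animated loops. "
    "Given a single image as input, generate a richly detailed description "
    "of how it could be turned into a smooth, seamless animation. "
    "Return only the description (no lists, no markdown, no instructions)."
)

def build_caption_request(image_url: str) -> list:
    """Assemble chat messages: the template as system prompt, the image as user input."""
    return [
        {"role": "system", "content": CAPTION_TEMPLATE},
        {
            "role": "user",
            "content": [{"type": "image_url", "image_url": {"url": image_url}}],
        },
    ]
```

The LLM's reply to such a request is then used verbatim as the video prompt.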
🎯 Best for:
Short video generation (2β5 seconds)
Anime/game-inspired motion scenes
Ambient motion with parallax, particles, soft light, and floating elements
Fast generation workflows (8 to 10 steps)
🔁 Want to generate true seamless loops?
Check out this community workflow based on Wan 2.1:
🔗 WAN 2.1 Seamless Loop Workflow (I2V) on Civitai
⚠️ Disclaimer:
Videos generated using this model are intended for personal, educational, or experimental use only, unless you've completed your own legal due diligence.
This model is a merge of multiple research-grade sources, and is not guaranteed to be free of copyrighted or proprietary data.
You are solely responsible for any content you generate and how it is used.
If you choose to use outputs commercially, you assume all legal liability for copyright infringement, misuse, or violation of third-party rights.
When in doubt, consult a qualified legal advisor before monetizing or distributing any generated content.