A fully automated RunPod template for WAN 2.2 Image-to-Video generation
with Ollama AI prompt generation built in. Deploy, wait for first boot,
and start generating — no manual setup required.
## Template Link
https://console.runpod.io/deploy?template=08e485o9k7&ref=oz31dyrx
## What's Included
- ComfyUI latest version with WAN 2.2 I2V support
- SageAttention for faster video generation
- Ollama for automatic AI prompt generation from images or text
- Modifiable Ollama system prompt — customize the AI prompt style directly in the workflow node
- JupyterLab for file management and monitoring
- ComfyUI Manager for easy node management
- Civicomfy — download models from CivitAI directly inside ComfyUI
- LoRA Manager — browse, manage and swap LoRAs without leaving ComfyUI
- Pre-installed custom nodes — rgthree, VideoHelperSuite, KJNodes, Easy Use, DaSiWa Nodes, WhiteRabbit, comfyui-ollama
- Auto-downloads all required models on first boot via CivitAI
- Nova Anime XL included for image generation
- Pre-loaded workflow ready to use on first boot
## The Workflow
The included oneclick_workflow is designed to be as simple as possible:
- Drop in a reference image and hit generate
- Ollama automatically generates a motion prompt from your image
- Built-in text-to-image section using Nova Anime XL to generate a reference image if you don't have one
- Swap LoRAs easily using the Power LoRA Loader node
- Bypass the Ollama node and type your own prompt anytime
- Everything wired up and ready — no node connecting required
## Requirements
- RunPod account — runpod.io
- CivitAI account and API token — civitai.com → Account Settings → API Keys
## Setup
1. Click the RunPod template link above
2. Select your GPU
3. Attach a network volume at /workspace (250GB+ recommended)
4. Add your CIVITAI_TOKEN environment variable
5. Deploy the pod
6. Wait 10-20 minutes on first boot for models to download
7. Open ComfyUI on port 8188
8. Load oneclick_workflow from the workflows menu and start generating
## Environment Variables
- CIVITAI_TOKEN (required) — your CivitAI API token
- VRAM_MODE (optional) — high, normal, or low — default normal
- OLLAMA_GPU (optional) — gpu or cpu — default gpu
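The accepted values and defaults above can be illustrated with a short sketch of how a startup script might read these variables. This is a hypothetical example, not the template's actual startup logic:

```python
import os

# Hypothetical sketch of reading the template's environment variables;
# the template's real startup script may differ.
civitai_token = os.environ.get("CIVITAI_TOKEN", "")
vram_mode = os.environ.get("VRAM_MODE", "normal")   # high | normal | low
ollama_gpu = os.environ.get("OLLAMA_GPU", "gpu")    # gpu | cpu

if vram_mode not in ("high", "normal", "low"):
    vram_mode = "normal"  # fall back to the documented default
if not civitai_token:
    print("Warning: CIVITAI_TOKEN is not set; CivitAI model downloads will fail")
```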
## Recommended GPUs
- L40S 48GB — best experience, VRAM_MODE=high OLLAMA_GPU=gpu
- RTX 5090 32GB — VRAM_MODE=normal OLLAMA_GPU=gpu
- RTX 4090 24GB — VRAM_MODE=low OLLAMA_GPU=cpu
## Ports
- 8188 — ComfyUI
- 8888 — JupyterLab
- 11434 — Ollama API
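Once the pod is up, you can verify each service from a terminal inside the pod (for example via JupyterLab). A minimal health-check sketch, assuming the default ports above and Ollama's standard `/api/tags` endpoint:

```python
import urllib.request
import urllib.error

# Service roots to probe: ComfyUI, JupyterLab, and Ollama's /api/tags
# (a standard Ollama endpoint that lists installed models).
PORTS = {8188: "/", 8888: "/", 11434: "/api/tags"}

def check_services(host="localhost", timeout=3):
    """Return {port: True/False} indicating which services responded."""
    status = {}
    for port, path in PORTS.items():
        try:
            with urllib.request.urlopen(f"http://{host}:{port}{path}",
                                        timeout=timeout) as resp:
                status[port] = resp.status == 200
        except (urllib.error.URLError, OSError):
            status[port] = False  # connection refused or timed out
    return status
```

If a port reports False after first boot completes, check the logs in JupyterLab before restarting the pod.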
## First Boot
First boot takes 10-20 minutes while models download. Monitor progress
in JupyterLab on port 8888 by opening a terminal and running:

```shell
tail -f /workspace/logs/bar.log
```
Subsequent boots are fast since all models are cached on your network volume.
## Ollama Prompt Generation
The workflow uses Ollama for automatic motion prompt generation. The included
model runs in text-only mode rather than vision mode — this is intentional.
Text-only inference is significantly faster and in practice a brief manual
description of your reference image tends to produce more accurate and
consistent motion prompts than vision-based analysis. Simply type a short
description of your image into the prompt field and let Ollama expand it into
a full motion prompt. The system prompt is fully editable directly in the
Ollama node within ComfyUI, so you can tailor the prompt style to your
specific workflow without any technical knowledge required.
If the generated prompts aren't to your liking, try resetting the session via the node in the ComfyUI workflow; if that doesn't help, try a different system prompt.
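Outside of ComfyUI, the same prompt expansion can be reproduced against Ollama's standard `/api/generate` endpoint on port 11434. The model name and system prompt below are placeholders for illustration, not the template's actual defaults:

```python
import json
import urllib.request

# Placeholder system prompt; the template's editable default may differ.
SYSTEM_PROMPT = (
    "You write concise motion prompts for image-to-video generation. "
    "Expand the user's short scene description into a single detailed "
    "prompt describing camera movement and subject motion."
)

def build_request(description, model="llama3.2", host="localhost", port=11434):
    """Build a POST request for Ollama's /api/generate endpoint."""
    payload = {
        "model": model,          # placeholder model name
        "system": SYSTEM_PROMPT,
        "prompt": description,
        "stream": False,         # ask for one complete JSON response
    }
    return urllib.request.Request(
        f"http://{host}:{port}/api/generate",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )

def generate_motion_prompt(description):
    """Send a short image description and return the expanded motion prompt."""
    with urllib.request.urlopen(build_request(description), timeout=120) as resp:
        return json.loads(resp.read())["response"]
```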
If you find this useful please follow and leave a review!
Feel free to post your generations in the comments.