# 360° Equirectangular Outpainting — LTX-2.3 IC-LoRA · v0.1
**Proof-of-concept IC-LoRA** for [Lightricks LTX-2.3-22B](https://huggingface.co/Lightricks) that turns standard
**widescreen footage into a full 360° equirectangular** video you can view in a VR/360 player.
> Early v0.1 release. Expect rough edges outside the sweet spot below — a bigger, more diverse next version is planned.
## What it does
- **Input** — a flat 2.39:1 (cinemascope) clip, plus an equirectangular reference (your clip projected into the
equirect canvas with the unknown regions masked black).
- **Output** — the model fills the masked regions, giving you a plausible 360° equirect video viewable in a VR/360
player.
Designed for repurposing existing live-action or cinematic footage as immersive content.
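Producing that masked reference is a standard inverse mapping: for every pixel of the equirect canvas, compute its view direction from (longitude, latitude) and project it back into the source frame; directions that fall outside the source FOV stay black. A minimal NumPy/OpenCV sketch of the idea (illustrative only; the companion nodes linked below do this for you, and the defaults here are assumptions):

```python
import numpy as np
import cv2

def flat_to_equirect(frame, h_fov_deg=100.0, out_w=1920, out_h=960):
    """Project a flat (rectilinear) frame onto an equirect canvas.
    Pixels outside the source FOV stay black, as the model expects."""
    src_h, src_w = frame.shape[:2]
    f = (src_w / 2) / np.tan(np.radians(h_fov_deg) / 2)  # pinhole focal length, px

    # Longitude/latitude of every output pixel; camera looks along +z at (0, 0)
    lon = (np.arange(out_w) / out_w - 0.5) * 2 * np.pi
    lat = (0.5 - np.arange(out_h) / out_h) * np.pi
    lon, lat = np.meshgrid(lon, lat)

    # Unit view directions, then perspective projection back into the source frame
    x = np.cos(lat) * np.sin(lon)
    y = np.sin(lat)
    z = np.cos(lat) * np.cos(lon)
    with np.errstate(divide="ignore", invalid="ignore"):
        u = f * x / z + src_w / 2
        v = -f * y / z + src_h / 2  # image y axis points down
        valid = (z > 0) & (u >= 0) & (u < src_w) & (v >= 0) & (v < src_h)

    map_x = np.where(valid, u, -1).astype(np.float32)  # -1 lands on the border
    map_y = np.where(valid, v, -1).astype(np.float32)
    return cv2.remap(frame, map_x, map_y, cv2.INTER_LINEAR,
                     borderMode=cv2.BORDER_CONSTANT, borderValue=0)
```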
## Sweet spot (v0.1)
The v0.1 model was tuned on a deliberately narrow domain to validate the approach:
- Semi-static establishing **city / urban** scenes (no heavy camera motion)
- **~100° horizontal FOV** on the source clip
- **2.39:1 source aspect** (standard cinemascope)
It will generalize poorly outside these conditions — fast action, extreme close-ups, heavily stylised imagery, or very
different FOVs are not reliably handled yet.
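For a sense of scale, here is a back-of-the-envelope check of how much of the canvas that sweet spot actually covers (a rectilinear 100° lens at 2.39:1 subtends roughly 53° vertically):

```python
import math

h_fov = math.radians(100)
aspect = 2.39
# Vertical FOV of a rectilinear lens follows from the aspect ratio:
v_fov = 2 * math.atan(math.tan(h_fov / 2) / aspect)

print(100 / 360)                  # ~0.28 of the equirect width is known
print(math.degrees(v_fov) / 180)  # ~0.29 of the height (v_fov ~53 deg)
```

In other words, the model is inventing roughly 70% of the frame horizontally, which is why staying close to the training domain matters.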
## Usage
Tested only with the **ComfyUI + LTX-2 video_to_video** pipeline. Load it on top of `ltx-2.3-22b-dev.safetensors` and set:
- **Trigger word**: equirectangular — optional. It works without a prompt, but a descriptive prompt lets you steer
the content of the outpainted region.
- **Reference video**: your source clip projected into the equirect canvas with unknown regions masked.
- **Resolution**: 1920×960 (2:1, the standard equirect ratio), 121 frames, 24 fps (≈5 s).
A ready-to-run workflow (`Equirect-Outpaint.json`) ships alongside this LoRA on the Hugging Face mirror:
<https://huggingface.co/TheBurgstall/VR-360-Outpaint-LTX2.3-IC-LoRA>. Note that the workflow's padding node crops your
input footage to 2.39:1 (center / top / bottom selectable). Other aspect ratios work poorly in this early version.
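If you want to pre-crop your footage outside ComfyUI, the padding node's crop is easy to approximate. A rough NumPy stand-in (the function and its anchor options are illustrative, not the node's actual parameters):

```python
import numpy as np

def crop_to_scope(frame: np.ndarray, anchor: str = "center") -> np.ndarray:
    """Crop a frame to 2.39:1, keeping the top, center, or bottom band."""
    h, w = frame.shape[:2]
    target_h = round(w / 2.39)
    if target_h >= h:  # already 2.39:1 or wider: trim width instead (centered)
        target_w = round(h * 2.39)
        x0 = (w - target_w) // 2
        return frame[:, x0:x0 + target_w]
    y0 = {"top": 0, "center": (h - target_h) // 2, "bottom": h - target_h}[anchor]
    return frame[y0:y0 + target_h]
```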
### Companion ComfyUI nodes
A small ComfyUI helper pack —
**[ComfyUI-EquirectProjector](https://github.com/Burgstall-labs/ComfyUI-EquirectProjector)** — was written alongside
this LoRA to produce the masked equirect reference from a flat clip. The included workflow shows the exact wiring.
## Training
| Setting | Value |
|--|--|
| Base model | LTX-2.3-22B (dev) |
| Strategy | IC-LoRA (video_to_video) |
| Rank / alpha | 128 / 128 |
| Target modules | video self + cross attention + FFN |
| Resolution | 1024×512, 41 frames @ 24 fps |
| Optimizer | Prodigy (D-Adaptation), lr=1.0, constant |
| Precision | bf16, gradient checkpointing |
| Steps | 3500 |
| Hardware | 1× NVIDIA H100 80GB |
| Dataset | Small curated POC set (not released) — semi-static city establishing clips |
The final **step 3500** checkpoint is what's uploaded here.
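For anyone reproducing a similar run, the table translates roughly into the sketch below. Treat it as an assumption-laden illustration rather than the actual trainer config; the target-module names in particular are guesses at the transformer's layer naming:

```python
# Hypothetical reconstruction of the setup in the table above; NOT the exact
# config used for this release.  pip install torch peft prodigyopt
import torch
from peft import LoraConfig
from prodigyopt import Prodigy

lora_cfg = LoraConfig(
    r=128,           # rank, per the table
    lora_alpha=128,  # alpha = rank
    target_modules=[  # video self/cross attention + FFN (layer names are guesses)
        "to_q", "to_k", "to_v", "to_out.0",
        "ff.net.0.proj", "ff.net.2",
    ],
)

model = torch.nn.Linear(8, 8)  # stand-in for the LoRA-wrapped video transformer
# Prodigy is a D-Adaptation-style optimizer: lr stays at 1.0 ("constant") and
# the effective step size is adapted internally.
optimizer = Prodigy(model.parameters(), lr=1.0)
```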
## What's next
The next version will be trained on a significantly larger and more diverse dataset:
- Broader subject matter (interiors, landscapes, crowds, vehicles, …)
- Varied input FOVs and focal lengths
- A wider range of camera motion — not just static establishing shots
- Better handling of the polar regions (top / bottom caps of the equirect canvas)
## Limitations
- Does not model the top/bottom caps of the sphere well — expect stretching or repetition at the poles. (Equirect heavily oversamples the poles: the entire top and bottom rows each collapse to a single point on the sphere, so any inconsistency there shows up as pinching or swirling when you look straight up or down.)
- Struggles with busy motion and fast cuts.
- Prompt adherence is weak; conditioning is dominated by the reference video.
- Outputs are a creative re-projection, not a reconstruction — not a substitute for natively captured 360° footage.
## Links
- **Hugging Face mirror**: <https://huggingface.co/TheBurgstall/VR-360-Outpaint-LTX2.3-IC-LoRA>
- **ComfyUI helper nodes**: <https://github.com/Burgstall-labs/ComfyUI-EquirectProjector>
## License
Apache-2.0. Inherits any base-model conditions from LTX-2.3-22B.
---
## Comments
Any plans for 180°?

None for the time being, unfortunately; the focus is on an improved version of this one.
This is a huge deal to me, because you need SUCH high resolution for 360 video that it tends to look kinda bad even at huge resolutions. 180 can push it twice as far with the same resolution, and neither video generators nor even high-end consumer VRAM are equipped to go as high as 360 demands. Also, 180 doesn't have to worry about the seam between the ends of the video like 360 does. It just seems like the best idea to start with 180, then go to 360 in 2066 when our grandchildren can finally get more than 32GB of VRAM.
@Jellai +1 to this, 180 would be great as the video requirements would be smaller.
I'm also giving a +1 to the 180° idea. 360 is super cool, but still just a toy to play with; 180 is where it becomes useful and where workflows can be built around it to create appealing content. That's why 99% of consumed VR videos are 180 and not 360.
The previews look pretty impressive. Did you gen them locally? If so, how long did that take?

Thank you. Genned locally on my 5090; it takes a couple of minutes at 1920×960. With various optimizations you could go much faster on this card, or hit the same time on older GPUs. LTX 2.3 is really quite flexible.
Seriously cool! Great work! <3
What is the bypassed LoRA with the step00015000 in the workflow? And where am I supposed to use the "text encoder" that's part of this upload?