Workflow for converting flat videos to VR: here
THE SEAM ISSUE HAS BEEN FIXED: https://civarchive.com/articles/25291
When I realized LTX-2 can generate 4K video, my first thought was: holy shit, we can finally start pumping out VR videos. So I immediately rushed to make this LoRA without really thinking it through, just to see whether LTX-2 could already do this by default.
Short answer: kind of.
Much like the Hardcut LoRA for Wan 2.2, LTX-2 understands the concept of 360° video, but struggles to execute it properly. This LoRA gives it the extra push it needs to reliably generate true 360-degree content without turning into a mangled mess.
That said, for whatever reason, it does not seamlessly close the seam, so there’s a noticeable vertical line in the 360 sphere when you turn around. I’m not sure if a node exists that can fix this yet, but please let me know if you find a solution.
Note: the two ends of the video do match; they just cut off at slightly different points. So technically you could crop the video horizontally so that it ends on one side exactly where it starts on the other.
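To make that crop concrete, here is a minimal NumPy sketch that trims columns off one edge of an equirectangular frame. `trim_px` is a placeholder value you would find by eye in a video editor; it is not something the model outputs.

```python
import numpy as np

def trim_seam(frame: np.ndarray, trim_px: int) -> np.ndarray:
    # frame is an H x W x C equirectangular frame; dropping `trim_px`
    # columns from the right edge lets the left and right edges line up
    # once you have found the offset where the content repeats.
    return frame[:, : frame.shape[1] - trim_px]
```

You would apply this to every frame before re-encoding, then re-check the seam in a 360 player.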
Anyway, as always, if you like what I do and want to support the work, feel free to buy me a coffee ☕
https://ko-fi.com/aidepository35
or support me on Patreon
https://www.patreon.com/c/aitrepreneur/posts
Recommended Settings
Weight: 0.6–1 works well
I’ve even gotten away with 0.2, so feel free to experiment
Aspect Ratio: 2:1
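The 2:1 ratio comes from equirectangular projection covering 360° horizontally and 180° vertically, so width is always twice the height. A tiny helper to derive valid dimensions (the example heights are illustrative, not tested LTX-2 settings):

```python
def equirect_dims(height: int) -> tuple[int, int]:
    # Equirectangular 360 frames span 360° x 180°, hence width = 2 * height.
    return 2 * height, height
```

For example, `equirect_dims(2048)` gives a 4096x2048 frame.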
Post-Processing (Optional)
The raw video can be played in most 360 media players or VR players as-is. However, if you want actual depth in VR, you'll need to apply stereoscopic depth to the video.
This node can do that:
https://github.com/SamSeenX/ComfyUI_SSStereoscope?tab=readme-ov-file
⚠️ Warning: It appears to have a size limit.
For example, one of my videos ended up around 500 MB, which exceeds what the node (and even ComfyUI itself) will accept for upload.
If you find a workaround, please let me know. Otherwise, you’ll need to use an external depth tool or handle the depth manually.
VR Metadata Injection (Highly Recommended)
It’s also a good idea to inject VR metadata so headsets and players automatically recognize the video as VR content.
You can use Google’s Spatial Media tool for this:
https://github.com/google/spatial-media/releases
It’s free and very easy to use.
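If you prefer scripting the injection step, the Spatial Media tool also works from the command line; per its README the invocation is `python spatialmedia -i input.mp4 output.mp4` (run from a checkout of the repo). The sketch below just builds that command; the filenames are placeholders.

```python
import subprocess
import sys

def build_inject_cmd(src: str, dst: str) -> list[str]:
    # -i injects spherical (360) metadata so headsets auto-detect the video.
    return [sys.executable, "spatialmedia", "-i", src, dst]

# To actually run it from inside the spatial-media checkout:
# subprocess.run(build_inject_cmd("flat360.mp4", "flat360_vr.mp4"), check=True)
```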
TL;DR
Yes, it works
Use 2:1 aspect ratio
You can make VR videos
You can make them better by adding depth and VR metadata
Extra Banter
Honestly, I’m kind of glad LTX-2 can’t do this out of the box. I was already a full day into training this LoRA before I realized I probably should’ve checked that first.
More importantly, I now fully understand why there are so few LoRAs like this floating around. Even a 5090 didn’t have enough VRAM to train it. I had to use one of the 48 GB Ada cards. On top of that, finding usable flat panoramic 360° video datasets was a nightmare, so I couldn’t build a massive dataset. Thankfully, I didn’t need one.
I’ll be real though: if this had failed after the two days of training, I would’ve just said fuck it.
Comments (26)
Wow great work! The next step is Lora for VR video in stereoscopic side-by-side format!
The custom node I linked in the description can do that, even for normal videos. It has a specific node for VR panoramas too.
Really, a 180-degree version would be ideal.
How about 180 degrees?
At first I was like, why? Then I thought about it: the resolution required to make 360 degrees actually look good in VR would be insane. So yeah, 180 degrees.
I was originally going to make a 180-degree LoRA, but I couldn't find any flat 180-degree panoramas to train from; even for the 360 it was hard to find them. Now that I think about it, you could probably try prompting for 180 degrees and look up the aspect ratio for it. It would probably work with the LoRA.
@Ragamuffin20 Couldn't you crop the 360 training data? Cut 25% off each side and you get 180 degrees...
@MrReclusive666 Yes, each side is identical; they just originally cut off at the wrong points. Honestly you might only need to crop one side slightly to get rid of the seam.
@Ragamuffin20 I was more referring to the 180-degree panoramas for training: if you can't find 180-degree panoramas to train on, just edit the 360. The 180 FOV is inside the 360.
@MrReclusive666 That's a good idea, except that the flat image is pre-warped so it wraps around a 3D sphere seamlessly. I think 180-degree videos use a different type of warping, if I'm making sense.
We need 180 degrees fish eye! Side by side upgrade may be generated with another workflow.
So I don't know why no one is reading my description, but there's literally a ComfyUI node in there that does that for you with panorama images and videos. It also does it for regular images as well.
There is a trick for seamlessness where you redefine the convolution to use circular padding mode. I'll investigate if this can be done with this model (depends on underlying architecture).
It's nifty because it requires no additional training and the compute is not noticeable (if any at all).
https://github.com/Comfy-Org/ComfyUI/issues/646
The above discussion covers the gist of the theory... toward the bottom jn-jairo mentions implementing a node for this in his suite:
https://github.com/jn-jairo/jn_node_suite_comfyui
@aztecmanmusic837 It's a WIP and it hasn't been touched for 2 years. Honestly you're better off just cropping it in a video editor.
@Ragamuffin20 I've used this technique myself with SD 1.5, I just have not tried it with video models.
The code I linked is for using a standard single image diffusion model not a video model... so it really doesn't matter that it is years old. I'm pointing to it because I know it works. You need to get every convolution layer in the NN and redefine the function, essentially just making a trivial change to how the edges get treated via the mechanism of 'padding'.
Sooner or later, the elegant solution will become implemented for one or more video models.
I've used it on music generation as well to create loops... it's a very versatile technique.
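To make the padding idea above concrete, here is a minimal NumPy sketch of circular padding along the width axis — the same effect you get in PyTorch by setting `padding_mode='circular'` on a `Conv2d`. This demonstrates the mechanism only; it is not a drop-in patch for LTX-2.

```python
import numpy as np

def pad_circular_width(x: np.ndarray, pad: int) -> np.ndarray:
    # Treat the image as a horizontal loop: the left border is padded with
    # the rightmost columns and vice versa, so a convolution sliding past
    # either edge sees the pixels it would meet when wrapping around the
    # 360 sphere, instead of zeros (which is what creates the seam).
    return np.concatenate([x[:, -pad:], x, x[:, :pad]], axis=1)
```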
Maybe you can swap the left and right side in video editor then in-paint with the center seam in vace etc, then swap the side back to original and it should work.
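The swap-the-sides trick above is just a half-width horizontal roll. A sketch, with the frame as an H x W x C array and your inpainting step assumed to happen in between the two calls:

```python
import numpy as np

def roll_seam_to_center(frame: np.ndarray) -> np.ndarray:
    # Shift the frame by half its width: the wrap-around seam moves to the
    # middle of the image, where an inpainting model can repair it. Calling
    # this again rolls the fixed frame back to its original layout
    # (exactly, when the width is even).
    return np.roll(frame, frame.shape[1] // 2, axis=1)
```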
180 degrees version would be great
Sigh... I know... it's just a pain getting samples.
@Ragamuffin20 YouTube is full of 180-degree videos; it even has a tag for them when filtering. And there is https://github.com/yt-dlp/yt-dlp to download them (I guess if you want to save time and don't want to do it one by one, you could even download a playlist full of 180 videos). But don't ask me if there are any rules against doing it like this; just my idea, not sure if there is anything speaking against it.
@herpderpmerp I have a Chrome extension that can download them, but they don't always download as flat panoramas. I'd like to try the repo, but to be honest I've never used a Git tool without a UI before, and for lack of better words it's kind of a lot; I'm not even sure how to use it. But who knows, maybe I'll give it a go at some point.
@Ragamuffin20 There is also a GUI version; you just need to download and run https://github.com/jely2002/youtube-dl-gui/releases/tag/app-v3.1.1
any plan to retrain for LTX 2.3?
Bru, it hasn't even been 24 hours lmao. But I think I will. Also, please try the 180 version and let me know what you think.
@Ragamuffin20 lol no rush. Tried both loras with 2.3 and it kind of worked but not sure if my setup was wrong.
@markdalias Be sure you're using this workflow; there are better ones to come, but this one works pretty well so far. https://civitai.com/models/2443867/ltx-23-22b-gguf-workflows-12gb-vram?modelVersionId=274778