Request a Wan2.1 LoRA on our Discord and we will train and open-source it for free.
Join our Discord to generate videos with the 360 Degree Rotation LoRA for free.
Wan2.1 14B I2V 480p v1.0:
Trained on 30 seconds of video comprising 12 short clips (each clip captioned separately) of things being rotated 360 degrees. This was trained on the Wan2.1 14B I2V 480p model.
The trigger word is: 'r0t4tion 360 degrees rotation'
See below for some prompt examples that worked well for me. You can also check the videos I've posted here for the captions that were used to generate them. For each video the input image is just the first frame.
Recommended Settings:
LoRA strength = 1.0
Embedded guidance scale = 6.0
Flow shift = 5.0
Here's a link to the Wan2.1 I2V LoRA inference workflow I used to generate these videos: https://huggingface.co/Remade/Squish/blob/main/workflow/wan_img2video_lora_workflow.json
This is a slight modification of Kijai's version, with the main difference being the addition of a WanVideo Lora Select node, connected to the 'lora' input of the WanVideo Model Loader node. Find Kijai's original workflow here:
https://github.com/kijai/ComfyUI-WanVideoWrapper/blob/main/example_workflows/wanvideo_480p_I2V_example_02.json
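If you want to apply the recommended settings outside ComfyUI, here is a minimal sketch using the diffusers Wan pipelines. To be clear, this is my own sketch, not the workflow above: the Diffusers-format checkpoint id and local LoRA path are assumptions, I'm mapping the 'embedded guidance scale' onto guidance_scale, and whether this particular .safetensors loads directly depends on your diffusers version.

```python
# Hedged sketch: applying the recommended settings via diffusers (assumed
# checkpoint id and LoRA path; verify against your diffusers version).
import torch
from diffusers import WanImageToVideoPipeline, UniPCMultistepScheduler
from diffusers.utils import export_to_video, load_image

pipe = WanImageToVideoPipeline.from_pretrained(
    "Wan-AI/Wan2.1-I2V-14B-480P-Diffusers",  # assumed Diffusers-format repo id
    torch_dtype=torch.bfloat16,
)
# Flow shift = 5.0 (recommended above); UniPC is the usual Wan scheduler
pipe.scheduler = UniPCMultistepScheduler.from_config(
    pipe.scheduler.config, flow_shift=5.0
)
# LoRA strength = 1.0 (recommended above)
pipe.load_lora_weights(
    "path/to/lora_dir",                      # hypothetical local folder
    weight_name="360_epoch20.safetensors",
    adapter_name="rotate360",
)
pipe.set_adapters(["rotate360"], adapter_weights=[1.0])
pipe.enable_model_cpu_offload()

image = load_image("input.png")              # becomes the video's first frame
prompt = (
    "The video features a Pomeranian puppy sitting on a gravel surface, "
    "and the puppy undergoes a r0t4tion 360 degrees rotation."
)
frames = pipe(
    image=image,
    prompt=prompt,
    height=480, width=832,   # 480p
    num_frames=81,           # ~5 s at Wan's native 16 fps
    guidance_scale=6.0,      # assuming this maps to 'embedded guidance scale'
).frames[0]
export_to_video(frames, "rotation_360.mp4", fps=16)
```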
Prompt Examples:
The video shows a man seated on a chair. The man and the chair perform a r0t4tion 360 degrees rotation.
The video features a Pomeranian puppy sitting on a gravel surface, and the puppy undergoes a r0t4tion 360 degrees rotation.
Let me know if there are any questions, I'll be happy to help!
Comments (64)
Thanks for making this, camera control in general is super useful
How many steps did you train? Did you use diffusion-pipe or musubi-tuner?
God bless you
I notice one video is 5 seconds and another is 3 seconds. Why?
I think there are videos with 24fps
@deep_synth
@knall08143 hi
You should now make a slime effect where something turns into slime and gets pulled apart.
Would it be possible to make a variation of this where the background also rotates?
can you make a lora where the subject is moving or walking and the camera rotates 360 around it
It might be useful for image-to-3d. Just need to use photogrammetry with it.
yes, if you use Sora Loop function
i just grabbed it for that!
I noticed a lot of the time I don't get the full 360 degree rotation (usually it's about 270 degrees). Does anyone have any tips on how to fix this?
try setting the length to 121 and 24 fps
@intlex what do you mean by "try to set length to 121 and 24fps"?
121 frames is a loop-animation length for Hunyuan, trained by Tencent. It has absolutely nothing to do with Wan2.1, trained by Alibaba, which is meant to run at 16 fps and has no reason to share Hunyuan's "121" trick.
@SD_AI_2025 but it works.
it performs best when you actually describe what is happening in the image. otherwise the subject spins and twists but not like a "360 microwave" :D
WanVideoSampler
'NoneType' object is not callable
how can I fix this? It's been 2 days already and I can't find a fix!
Doesn't work with native ComfyUI workflow.
Did the videos used for training have the same frame count? Because as it is, a true 360 only happens by pure luck.
Any training code we can look at? I'd love to fine tune this thing further
So that did work for me, but the character in the image is moving, making some movements. I couldn't make a static 360 of the character. Any hints?
Works decently. I'd like to see different versions, like one for rotating vertically, and one for a full 360 camera movement where the subject is stationary. If you look up the term "stereograph" or "wigglegram", that would be a decent style too (basically it's just 2 still images used to quickly tilt the view to give a perception of depth). That would pretty much fill in the general possibilities. I think some training could be added that also considers out-of-frame elements, because currently it doesn't really know what to do with subjects that are partially clipped from the image.
I need help with this... I get an error on "Load WanVideo T5 TextEncoder" Node, my settings are:
model_name : umt5-xxl-enc-bf16.safetensors
precision : fp16
load_device : main_device
quantization : disabled
My best wan lora so far.
A 720p i2v variant would be nice. (Upscaled 480p doesn't look very realistic.)
Hey everyone 👋
It does a really cool job generating 360° rotation videos from a single image — but I’ve hit a wall and hoping someone here might have some insight.
The Issue:
Since it only uses one input image (usually the front view), the AI doesn’t actually know what the back of the statue looks like. So, it just “hallucinates” the backside, often in a way that looks totally different from the actual statue. In my case, the result is quite far off from the real design.
What I’ve Tried:
• I provided a clean front image of the statue.
• The generated rotation video looks great from the front and sides, but once it gets to the back, it’s a different design entirely.
What I’m Hoping to Achieve:
1. Is there a way to provide more than one image (like front + back) so the model understands what the full 3D object looks like?
2. Would training a custom LoRA of this statue help? For example, feeding it multiple angles (front, side, back) so that it learns the actual geometry and doesn’t make stuff up?
I'm interested in this. Any workaround?
Sounds like you might want https://huanngzh.github.io/MV-Adapter-Page/
https://hyper3d.ai/
With this model, you can feed multiple images, point-clouds, voxels, bounding boxes
has anyone thought of using the frames of these outputs to train loras? actually seems incredibly useful
Yes, I DID think of that. It's on my todo list.
However, I would like to train one myself, since it's a very small 30-second dataset this guy used.
Pixart AI has such a tool and it is good
... But censored and paid
@Le_Fourbe Haven't been able to get Pixart AI to work so far.
Could you please provide this example prompt for testing?
Amazing LoRA dude, and you did this on 30 seconds of video? You mind sharing some more info? # of images, # of repeats, resolution, # of frames to process (chunk 17/33/49, etc.)? Would love to get some training advice since there is so little out there for Wan.
Edit: before downvoting this, read my most recent comments near the bottom. PS: stop downvoting people just because you're too numb in the head to take the time to see where they're coming from.
For as basic as video loras are at this point, how is this one of the top downloaded? No offence intended, but concepts don't really get much more basic than turning on the spot. It's not a hit on the model, or the uploader, just curious: there are DBZ action loras, and this is the most downloaded. Are people (in general) just really boring, or what?
Edit: Ok, I just got my first 3D printer, and now my eyes are fully open. This probably is the most practical one on here. The key is that it's not necessarily for making amazing videos, but for its practical use in contrast to the others. Since most consumer hardware can't even make videos with any practical use as videos, this one does stand out for what it does.
I'll give it a thumbs up, and probably try it out myself sometime shortly.
Because this Lora is actually useful.
Sometimes it's not just about stuff looking good
your comment shows you know nothing. it's about having different positions for lora making. this is a brilliant model
@Le_Fourbe It's useful to see people turn on the spot? I mean, isn't it kinda about it looking cool if it's literally just gonna be used to make a 3-6 second video clip? why would someone watch it if it didn't look cool? lol..
@riiahworld "positions in lora" but the person doesn't move/go anywhere. It's better than the basic img2vid (with the basic wobbly character zoom-in effect), but I've seen more action outta txt2vid. Maybe it's good if it can be used with img2vid?
It's perfect for creating Loras of characters from a ton of different angles with one image particularly if you want to maintain character consistency.
@Lazman as the others said:
sometimes you wanna recreate a character you made with AI, and one thing that would allow you to do that is a LoRA.
to make a lora you have to feed it multiple consistent images from different angles and positions.
this 360 tool is powerful for LoRA TRAINING, as it will provide the model with every side of your single picture for whatever design you have, making more out of your original image.
@getswoll1986 That's funny, I actually thought about this very thing just a couple days ago when thinking about this. Yea, in that context, it could certainly be useful.
@Le_Fourbe Yep, that could work. Question though: I wonder if it would work on characters with tails? Or unique characters in general. I mean, characters with less predictable alternate angles due to uniqueness of style. Or can you feed more than 1 image into it to give it a better idea (assuming the images had the character at the same stance/proportions)?
@Lazman eventually you will have to add that specific aspect manually (with the help of image AI).
the back side you get will be random but will follow the basic logic of the front side, which is still a good shortcut compared to generating a character sheet.
anyone having vertigo after seeing the examples? my head was spinning
My model rotated, but only 120 degrees instead of the 360 degrees I set. How can I solve this problem?
Maybe you need to say 720 degree? :D
Increase the animation time. Give it like 65 or 81 frames. Offload models to RAM.
@GardaX Doesn't matter if you give it 16 or 81 frames - the rotation stays the same.
Rad. Can you point me in the direction of a lora training workflow using a video dataset?
Thank you for the lora. What about the license? Can it be used for commercial generations?
can it rotate to the right? It almost always outputs a left rotation
Reverse the video
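For example, a minimal sketch of reversing the clip with ffmpeg's reverse filter, driven from Python (assumes ffmpeg is on PATH; filenames are placeholders):

```python
import subprocess

# Play the generated clip backwards; ffmpeg's "reverse" filter buffers the
# whole video in memory, which is fine for these few-second outputs.
# Add "-af", "areverse" if your clip has an audio track.
subprocess.run(
    ["ffmpeg", "-i", "rotation_left.mp4", "-vf", "reverse", "rotation_right.mp4"],
    check=True,
)
```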
1.3B t2v wan 2.1 vace?
can we have a wan 2.2 version?
Only ever get about 270 degrees with the settings you've recommended, never the full 360. Adding frames or rephrasing it as, e.g., 720 degrees makes zero difference. Could you provide some sort of solution to this? I see a number of other people having the same problem.
can you do one for wan 2.2
Hello, I am interested in learning more about LoRA. Could you please share insights on how you trained the model, the amount of data used for training, and the duration of the training process? Thank you!
Files
360_epoch20.safetensors
Mirrors
360_epoch20.safetensors
B64_MzYwX2Vwb2NoMjA.safetensors
rotate_20_epochs.safetensors
109_360_epoch20.safetensors
103_360_epoch20.safetensors
18_360_epoch20.safetensors
wan_360_rotation_low.safetensors
Available On (2 platforms)
Same model published on other platforms. May have additional downloads or version variants.