Maybe this isn't relevant; I've barely searched for anything like that.
Well, it's just an FLF2V workflow, based on lightx2v (6 steps) and the 14B models, which can take three images and consistently turn them into a short clip.
It works well with a bunch of similar-looking pictures.
I'm thinking about doing more than 3 pics, but... I tried merging clips together with Avidemux and got a nice result (1->2->3, 3->4->5 and so on, same seed; for a loop, 3->4->1 or 3->2->1 depending on the pics). I got a 40-second clip with visible but not critical transitions. But the last step came out not very good, so I remade it with a different seed, and then it hit me: if I had put 12 pics in one workflow, I'd have wasted all that time thanks to one bad last clip. So I think it's better to make 3-pic clips and merge them elsewhere.
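The chaining scheme above can be sketched in code: each 3-image segment shares its last image with the next segment's first, so when merging you drop the duplicated boundary frame. This is just an illustration of the ordering, not part of the workflow; all function names are hypothetical:

```python
def plan_segments(images, per_clip=3):
    """Split an ordered image list into overlapping segments:
    [1,2,3], [3,4,5], [5,6,7], ... Each segment reuses the previous
    segment's last image as its first, so the clips line up."""
    step = per_clip - 1
    return [images[i:i + per_clip] for i in range(0, len(images) - step, step)]

def concat_frames(clips):
    """Join rendered clips (lists of frames), dropping each later clip's
    first frame, which duplicates the previous clip's last frame."""
    out = list(clips[0])
    for clip in clips[1:]:
        out.extend(clip[1:])
    return out

print(plan_segments([1, 2, 3, 4, 5, 6, 7]))
# segments: [[1, 2, 3], [3, 4, 5], [5, 6, 7]]
```

The upside of this scheme, as described above, is that a bad segment can be re-rendered with a new seed without redoing the whole sequence.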
Description
Same as v1.0, but with separate loras for each step and separate positive prompts added.
Also included an FLF2V workflow for 2 pics, which I also use in some cases.
Comments
Where do you download the clip vision you are using? I have clip vision g and h, and both seem to give wonky results. I also had to remove the triton stuff because I can't get it working on my Windows machine.
Hi. https://civitai.com/models/1802070/wan-21-nsfw-clip-vision-h (3.67 GB)
Or https://huggingface.co/qpqpqpqpqpqp/basedbase_clip_h_wan_fp8/tree/main (986 MB)
Also, I've tried various clip-visions and they look the same to me.
Sometimes wan2.2 itself does wonky stuff.
For triton, you can check this guide (it helped me): https://www.reddit.com/r/StableDiffusion/comments/1k23rwv/quick_guide_for_fixinginstalling_python_pytorch/
@forfreelsd368 Thanks, I think I was missing that version; I'll see if it works.
Edit: it still doesn't work, which means the sampler probably does require triton to get reasonable output. Or maybe I'm prompting it wrong.
@burnera679889 Hm. If you think that's it, try using two default KSamplers as in most workflows (just replace the MoE KSampler; look at the default wan2.2 workflow). They worked without triton, as I remember (and so should the MoE sampler, even as edited by me).
I changed the original MoE KSampler from https://github.com/stduhpf/ComfyUI-WanMoeKSampler because I believe a separate sigma shift for each of the two models is important, and in MoE it was fixed for both models, so I prefer separate shifts via "ModelSamplingSD3" nodes. But if you install the original MoE KSampler with the fixed shift (not the one from my archive), the two "ModelSamplingSD3" nodes might cause trouble (or simply do nothing if they don't).
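For reference, the shift being discussed is the flow-matching timestep shift that ModelSamplingSD3-style nodes apply to the sampling sigmas. A rough Python sketch of why a per-model shift matters (the shift values here are purely illustrative, not recommendations):

```python
def shift_sigma(sigma, shift):
    """Flow-matching timestep shift as used by ModelSamplingSD3-style
    nodes: remaps a sigma in (0, 1] so that larger shift values keep
    the schedule at high noise for longer."""
    return shift * sigma / (1 + (shift - 1) * sigma)

# With separate shifts, the high-noise and low-noise models each get
# their own remapped schedule (values illustrative):
sigmas = [1.0, 0.75, 0.5, 0.25]
high_noise = [shift_sigma(s, 8.0) for s in sigmas]
low_noise = [shift_sigma(s, 3.0) for s in sigmas]
```

A single fixed shift, as in the unmodified MoE KSampler, would apply the same remapping to both models.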
Triton is for speed-up, not for quality.
Make sure you use the right high and low lightx2v loras for i2v: https://huggingface.co/lightx2v/Wan2.2-Lightning/tree/main/Wan2.2-I2V-A14B-4steps-lora-rank64-Seko-V1.
You can download the video of that broken creepy black cat, which contains the metadata (workflow), paste it into your ComfyUI, take the sample pictures from the workflow archive, and try to recreate it.
I think that should reveal the trouble.
In my case, I rarely use prompts in flf2v (almost never, only to test how they affect the result).
P.S. I don't see message edits in notifications, so it's better to write another comment. A whole bunch of them, if necessary. ^_^
@forfreelsd368 Yeah, trying two k-samplers seems to have worked.
Quick question: is it like 'translating' from image 1 to 2 and then from image 2 to 3, filling in the gaps between them? Thank you
Hi!
Yep, that's it.
@forfreelsd368 Ok this is officially amazing, thank you for this!
"translating from", "filling the gaps"... = Interpolation