This model has not been updated. Do not use.
Note: If you have ANY issues with nodes not downloading, read the notes or reach out. There is nothing special about any of the non-core nodes.
⛔⚠️🛑✋ Read the notes completely before using. The most common install and node problems are covered in the directions.
Instagram: https://www.instagram.com/synth.studio.models/
Buy me a ☕ https://ko-fi.com/lonecatone
This represents many hours of work. If you enjoy it, please 👍 like, 💬 comment, and feel free to ⚡ tip 😉
Instructions are also available in Chinese.
This workflow requires 12 GB of VRAM or more.
Two separate versions: one with a latent upscaler and one without.
I prefer the one without, but the jury is still out. I'd love feedback on which settings work best.
Features:
Does both T2V and I2V
Allows you to use your own audio track.
Note: Lip sync is really hit or miss on this one. I'm still working on adjustments.
Easy setup and use
Motion adjustment for more dynamic videos
Multimodal guider for fine-tuning audio-to-video dynamics
Prompt generation from an image
Prompt enhancement
Handles both Normal and NSFW generation

Comments (19)
I don't think the LoRA loader is working with your workflow. I tested with a different workflow and the LoRA worked there but not here. I recommend using the Daddy Lora loader. It loads very well and fixes the annoying noise in the video.
What annoying noise? I've had zero issues. Be more specific.
@lonecatone23 Some LoRAs have this annoying static noise in the background. I just recommended using the Daddy Lora loader to remove it, since I don't think the LoRA loader in your workflow is working.
@Agino A LoRA loader is the same either way. It has nothing to do with it, especially since there are no LTX 2.3 LoRAs out yet.
I really have no idea what you are talking about.
This LoRA loader - https://github.com/seanhan19911990-source/LTX2-Master-Loader/tree/main - and LoRAs made for LTX-2 also work on LTX-2.3. I tested it and it works.
@Agino Are you crazy? Some random node with that stupid name? It's not in the ComfyUI registry.
Also, the LoRA loader works absolutely fine.
No thank you.
I'm getting this error and can't get it fixed.
It's missing something. Without seeing your log, I have no idea. Did you try feeding the log through Grok or Claude?
@lonecatone23 I figured it out: the Get_vae node somehow isn't connected to the video VAE when Checkpoint Model is enabled; it gets disabled when the GGUF model is turned off. BTW, excellent workflow. The prompt enhancer uses so much RAM though.
@m1ndth13v3 oh shit. Thanks for that. I'll revise
@m1ndth13v3 Hold on. To clarify: you do not need the video VAE when you use a checkpoint model, since it's baked in. However, you do need it for the GGUF. If you use a diffusion model, then that's a different story.
Please do me a favor and check exactly which model you loaded. I can't make it error out.
@lonecatone23 Okay, now I understand. The checkpoint model I'm using is dev_transformer_only_bf16. Should I be using something else?
@m1ndth13v3 Yeah, the transformer-only file is a diffusion model. You can use it, but load it with a diffusion model loader, pull the video VAE out of the GGUF group so it doesn't turn off with the switch, then eliminate the AnySwitch and hook the VAE directly up to the SetNode.
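If you're unsure whether a file is a full checkpoint (VAE baked in) or a transformer-only export, you can peek at its tensor key names. A minimal sketch below, with the caveat that the prefixes checked ("vae.", "first_stage_model.") are common conventions from Stable Diffusion-style checkpoints and are an assumption here, not a guaranteed match for LTX's layout — verify against your own file:

```python
def has_bundled_vae(keys):
    """Return True if the key list looks like a full checkpoint
    (VAE weights included) rather than a transformer-only export.
    Prefixes are assumed conventions; adjust for your model family."""
    vae_prefixes = ("vae.", "first_stage_model.")
    return any(k.startswith(vae_prefixes) for k in keys)


def inspect_checkpoint(path):
    """Open a .safetensors file and report whether a VAE appears baked in."""
    from safetensors import safe_open  # pip install safetensors
    with safe_open(path, framework="pt") as f:
        keys = list(f.keys())
    if has_bundled_vae(keys):
        return "full checkpoint (VAE baked in)"
    return "transformer-only (load a separate video VAE)"
```

If `inspect_checkpoint` reports transformer-only (as a `*_transformer_only_*` filename suggests), that's the case where you need the external video VAE wired in.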
Hi everyone. Question... I have everything updated and all the models your WF requires. I can't get my image to look detailed. Any advice on how to avoid it looking plasticky and distorted? Thanks a lot.
I haven't figured that out yet. It's LTX, not the workflow. It's also why I made two separate workflows.
I'm having trouble with the Qwen Prompt Enhancer. I'm not sure which settings to use since the workflow was pre-set to a setting that does not exist. I have a 4070 Ti Super 16GB. What should those settings be?
Hmmm, I just opened it and looked. It's set for low VRAM. It should be Qwen 4B Instruct and 4B RAM-friendly. It's slooooow regardless. I like it and hate it at the same time.
You don't really need it. Honestly, better results come from feeding your prompt directly to an LLM.
@lonecatone23 Thanks! One other thing: the no-upscaler version has a section for upscaling. Is it just disabled? Or am I just dumb? (highly possible) Thanks for your work on this!
@ai_machine_learner Lol, no. My bad, I should have clarified: no LATENT upscaler.
