This model has not been updated. Do not use.
Note: If you have ANY issues with nodes not downloading, read the notes or reach out. None of the non-core nodes is anything special.
⛔⚠️🛑✋ Read the notes completely before using. The most common install and node problems are covered in the directions.
Instagram: https://www.instagram.com/synth.studio.models/
Buy me a ☕ https://ko-fi.com/lonecatone
This represents many hours of work. If you enjoy it, please 👍 like, 💬 comment, and feel free to ⚡ tip 😉
Chinese instructions are also available.
This workflow requires 12 GB of VRAM or more.
Two separate versions: one with a latent upscaler and one without.
I prefer the one without, but the jury is still out. I'd love feedback on which settings work.
Features:
Does both T2V and I2V
Allows you to use your own audio track.
Note: Lip sync is really hit or miss on this one; I'm still working on adjustments.
Easy setup and use
Motion adjustment for more dynamic videos
Multimodal guider for fine tuning audio to video dynamics
Prompt generation from an image
Prompt enhancement
Handles both Normal and NSFW generation

Description
Upgraded flow and samplers
Better controls
Fixed latent upscaler issues
Added 🔉🎶 volume control
Added an audio equalizer 🎶🎶
Added color match
Added sharpen
Added film grain
Comments (27)
In T2V mode, I couldn't figure out how to change the image dimensions and video duration.
I just checked the version I posted. All of the controls for width and length are hooked up to the appropriate sliders, so I'm confused as to why you're having problems. Use the second slider to adjust length and the image MX slider to adjust size. It works automatically.
@lonecatone23 In T2V mode, I couldn't find any sliders or parameter controls; by default, the duration is set to 7 seconds, with a height of 1600 and a width of 1408. I've thoroughly searched the entire workspace and couldn't find any settings for dimensions or duration.
@AI_Creator_John That's super weird. I didn't modify it. I'll check tonight.
My video is blurry no matter what I do or which mode I use, whether it's text-to-video or picture-to-video. Upscalers are disabled. LoRAs are disabled. What could be the problem? I'm using the GGUF version.
Try this: shorten the video length and, in the brains section, change it to the LTXV scheduler. Use 12 steps. Also, make sure you are using Euler as the sampler.
@lonecatone23 Yes, I just started tinkering with the settings, and now it seems to be working more or less. You also have an error connecting the LoRA: the LoRA that was next to the upscaler loader didn't come through with the model in the sampler, so I had to load the distilled LoRA in a different loader. Maybe that's what was causing the problem? You need to select the distilled LoRA in both loaders.
@Ashaf The LoRA needs to be downloaded separately, but you only need the distilled LoRA for the upscale model. You might need the lightning LoRA to drop the step count.
@lonecatone23 I solved the blur problem, so everything is great. Thank you for your work.
@Ashaf what was it?
@lonecatone23 I think it's because of the two LoRA loaders; you need to load the distilled LoRA in both of them and the problem goes away. But I also swapped the loaders themselves for the KJ ones.
@Ashaf Weird. That shouldn't matter, as they are in serial across the model. That sounds like a weight issue. The weight should not be over 0.7.
This is incredible. Thank you for all this hard work. People like you are the ones who make the world a better place for humanity. You are the definition of human greatness!!
Lol, I try. I have a broken node, so re-download it tomorrow
Hi. Thanks for the workflow. LTX 2.3 works great, but not for NSFW content. I tried the 10eros model, LoRAs, etc., but nothing. Suggestions?
Works fine for me. It's all I use now
Use LoRAs when applicable, especially for male genitalia. Penis Praxis is a good one. Same thing with fluids and actions.
Oh, super important: don't use Prompt Enhance. Even with explicit instructions, it screws things up. Less is more.
Also, describe actions: instead of "blowjob", say "she bobs her head up and down" or "he slides his cock in and out". For sex, say something similar, e.g. "she vigorously bounces up and down on his cock" for cowgirl.
@lonecatone23 Ok, thanks. I'll do some more tests. What strength value should the LoRA have? Do you have any suggestions on names for the LoRA? Thank you.
@yemajar767531 Not really. I always default to 0.8.
Just filter for LTX 2.3 LoRAs here. You'll find them.
@lonecatone23 Ok...Thanks...
Hey, how do I fix this error?
Exception: An error occured in the ffmpeg subprocess:
[aac @ 000001623337ba40] Input contains (near) NaN/+-Inf
[aost#0:1/aac @ 000001623337a600] [enc:aac @ 00000162333c7440] Error submitting audio frame to the encoder
[aost#0:1/aac @ 000001623337a600] [enc:aac @ 00000162333c7440] Error encoding a frame: Invalid argument
[aost#0:1/aac @ 000001623337a600] Task finished with error code: -22 (Invalid argument)
[aost#0:1/aac @ 000001623337a600] Terminating thread with return code -22 (Invalid argument)
File "C:\Users\Shadow\ComfyUI\execution.py", line 525, in execute
output_data, output_ui, has_subgraph, has_pending_tasks = await get_output_data(prompt_id, unique_id, obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb, v3_data=v3_data)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Shadow\ComfyUI\execution.py", line 334, in get_output_data
return_values = await asyncmap_node_over_list(prompt_id, unique_id, obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb, v3_data=v3_data)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Shadow\ComfyUI\execution.py", line 308, in asyncmap_node_over_list
await process_inputs(input_dict, i)
File "C:\Users\Shadow\ComfyUI\execution.py", line 296, in process_inputs
result = f(**inputs)
^^^^^^^^^^^
File "C:\Users\Shadow\ComfyUI\custom_nodes\comfyui-videohelpersuite\videohelpersuite\nodes.py", line 600, in combine_video
raise Exception("An error occured in the ffmpeg subprocess:\n" \
That popped up on me last night after updating. I have no clue. I've run it plenty of times without issue.
Feed the log into Grok. I'm guessing FFmpeg did an update or something.
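For what it's worth, that "Input contains (near) NaN/+-Inf" message comes from the AAC encoder and usually means the audio waveform handed to FFmpeg contains NaN or infinite samples. A minimal, hypothetical workaround (not part of this workflow; `sanitize_audio` is my own name) is to scrub the samples before they reach the video-combine node:

```python
import numpy as np

def sanitize_audio(samples: np.ndarray, limit: float = 1.0) -> np.ndarray:
    """Replace NaN/Inf audio samples and clamp to a valid range.

    The AAC encoder rejects waveforms containing NaN or +/-Inf
    ("Input contains (near) NaN/+-Inf"), so zero out NaNs, map
    infinities to the clip limit, and clamp everything else.
    """
    cleaned = np.nan_to_num(samples, nan=0.0, posinf=limit, neginf=-limit)
    return np.clip(cleaned, -limit, limit)

# Example: a waveform with bad samples becomes safe to encode.
bad = np.array([0.5, np.nan, np.inf, -np.inf, 2.0], dtype=np.float32)
good = sanitize_audio(bad)  # all finite, within [-1.0, 1.0]
```

This is just a sketch of the general fix; where exactly to apply it depends on which node produces the audio tensor in your graph.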
"Two seperate versions. One with a latent upscaler and one without"
Where can I find version without upscaler?
For some reason, on my RTX 5070 Ti (16 GB VRAM), the image on my monitors disappears and the generation process does not continue. I think this is due to the x2 upscaler.
To be specific, I'm talking about ltx-2.3-spatial-upscaler-x2-1.0.safetensors.
I have two workflows in the folder, do I not?
Sorry, my bad
ltx-2.3-spatial-upscaler-x2-1.0.safetensors — there is an updated version from LTX, version 1.1. Look at their Hugging Face page; you can find it there.
link: https://huggingface.co/Lightricks/LTX-2.3/tree/main
@schlemihl I actually tried 2.1.1. It's okay for longer videos, but it made the same shorter test runs worse.
In general, I'm over this model. It's frigging impossible to wire and keep stable. I won't be updating these workflows anytime soon.
