KEY COMFY TOPICS
Sequenced LTX Video
STG Video Generation
Optimized Workflow Generation
NEW INSTALL
New LTX Model - https://huggingface.co/jbilcke-hf/LTX-Video-0.9.1-HFIE/blob/3841f91e9efe0c53b82d269e545ead2a184b901e/ltx-video-2b-v0.9.1.safetensors
Comfy Math - https://github.com/evanspearman/ComfyMath
Derfuu Math Nodes - https://github.com/Derfuu/Derfuu_ComfyUI_ModdedNodes
Mikey Nodes - https://github.com/bash-j/mikey_nodes
ComfyUI Ollama - https://github.com/stavsap/comfyui-ollama
Ollama Server - https://ollama.com/download/windows
LTX Video nodes 1 - https://github.com/logtd/ComfyUI-LTXTricks
LTX Video nodes 2 - https://github.com/Lightricks/ComfyUI-LTXVideo
Video Helper Suite - https://github.com/Kosinkadink/ComfyUI-VideoHelperSuite
Image Selector - https://github.com/SLAPaper/ComfyUI-Image-Selector
Comments (23)
Any examples of results?
This looks promising, can't wait to test it. Best of luck with the progress.
Thank you!
What hardware does this need?
My hardware is a 4090 (24GB VRAM), but others with 16GB were able to run it too (I haven't tested below that, but it's possible)
@Grockster Will test in the next 2-3 days, I have a project anyway
This is great thank you. Do you have a Vid2Vid version of this?
Not yet, but slowly making our way :)
To save precious VRAM, set keep_alive to 0 in the Ollama Advance Generate nodes so Ollama unloads the model after the prompt is generated
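The tip above works because the node's keep_alive field maps to the keep_alive parameter of Ollama's REST generate endpoint. A minimal sketch of what that request body looks like, assuming the default local endpoint (http://localhost:11434/api/generate) and the dolphin-mistral model mentioned elsewhere in this thread:

```python
import json

# Sketch of the JSON body sent to Ollama's /api/generate endpoint.
# The model name and prompt here are illustrative placeholders.
payload = {
    "model": "dolphin-mistral",                    # any model you've pulled
    "prompt": "Describe the scene for a video clip.",
    "stream": False,
    "keep_alive": 0,  # 0 = unload the model immediately after responding,
                      # freeing VRAM for the video-generation steps
}
body = json.dumps(payload)
```

With keep_alive at its default, Ollama keeps the model resident for several minutes after a request, which competes with ComfyUI for VRAM during rendering.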
Thanks!
How much VRAM do I need for this?
@orange8745164 I've had people with 16GB VRAM who could run it; I haven't tested with less than that yet
Whenever I try this workflow I get the following error.
OllamaGenerateAdvance
1 validation error for GenerateRequest model String should have at least 1 character [type=string_too_short, input_value='', input_type=str]
I have ollama installed and the server address in the workflow is correct. Any ideas what could be wrong?
The model input is empty, probably because Ollama is running but no models are downloaded or running (likely if you just installed it). Use "ollama run <model_name>" to download and run a model, then refresh node definitions or restart ComfyUI (can't remember which I did) so the OllamaGenerateAdvance node can populate the model list. Not sure which is the best model for prompt generation, but I used 'ollama run dolphin-mistral' and it has worked well so far.
Yup agreed with @DirkBenedict - you have to add at least one model
The node "Seconds per sequence" is not installed. Which node is that? I have installed ComfyUI-Logic too
It's just an Int node that's part of the ComfyUI-Logic set
Nice workflow, glad I'm not the only one having issues with extending blurring the faces slightly each iteration. I've tried everything; it may just be an LTX limitation
Yup, once the model can figure out how to produce perfect end frames, starting the next iteration will be MUCH cleaner... Here's to continued improvements :)
Can't find these nodes (flagged as missing / "Install Required"):
LTXVModelConfigurator
LTXVShiftSigmas
Int-🔬
LTXVLoader
This is for the previous version of LTX; I would look at using the newer LTX2 (and start with the Comfy templates, as they're really good and easy to use)
