    Img2Vid 𝑽𝟐 ▪ Hunyuan ▪ LeapFusion Lora V2 - Hun I2V | lora v2 | 1.0

    HUNYUAN | Img 2 Vid LeapFusion



    Requirements: LeapFusion Lora v2 (544p) or v1 (320p)

    In short: it uses a special LoRA to do the trick.
    It works combined with the LoRAs available around. Prompting helps a lot, but it works even without.
    Raise the resolution for more consistency and similarity with the input image.
    *You may want to adjust the steps to your needs; I used only a few steps for testing. (See the sketch below for how resolution and frame count map to latent size.)
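
    For context, here is a minimal sketch (mine, not from the model page) of why resolution and frame count matter: HunyuanVideo's VAE compresses 8x spatially and 4x temporally into 16 latent channels, so higher resolution gives the sampler more latent detail to keep consistent, and frame counts follow the 4k+1 rule. The helper name is hypothetical; the (1, 16, 1, 60, 40) shape matches the "encoded latents shape" line in the logs further down for a single 320x480 input image.

    ```
    # Hedged sketch, not the wrapper's actual code.
    def hunyuan_latent_shape(width, height, frames, batch=1, channels=16):
        # HunyuanVideo: 8x spatial and 4x temporal compression, 16 latent channels.
        assert (frames - 1) % 4 == 0, "frame count should be 4*k + 1"
        return (batch, channels, (frames - 1) // 4 + 1, height // 8, width // 8)

    print(hunyuan_latent_shape(320, 480, 1))   # (1, 16, 1, 60, 40) -- one input image
    print(hunyuan_latent_shape(320, 480, 45))  # (1, 16, 12, 60, 40) -- a 45-frame video
    ```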

    Bonus TIPS:


    Here's an article with all the tips and tricks I've been writing as I test this model since December:

    https://civarchive.com/articles/9584
    You'll get a lot of precious quality-of-life tips for building and improving your Hunyuan experience.


    No need to buzz me, ty 💗 ..feedback is much more appreciated.



    Comments (60)

    guy33 · Jan 25, 2025 · 3 reactions

    Hi, since you experiment a lot with HunyuanVideo I thought you might enjoy this read where I trained a LoRa that can do up to three scenes inside one video: https://civitai.com/models/1177810/porn-movie-director-three-scenes-intro-sucking-fucking?modelVersionId=1325348

    dominic1336756 · Jan 25, 2025

    not working for me

    LatentDream
    Author
    Jan 25, 2025

    are your kijai nodes updated?
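
    (If you're not sure how: updating the wrapper is just a git pull inside its folder, or the Update button in ComfyUI-Manager. A sketch, with the install path taken from the logs below:)

    ```
    import subprocess

    # Hedged example: same effect as ComfyUI-Manager's "Update" for this node pack.
    wrapper = r"D:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-HunyuanVideoWrapper"
    subprocess.run(["git", "pull"], cwd=wrapper, check=True)
    ```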

    midiaplaay · Jan 25, 2025

    I will try soon.

    desktop8 · Jan 25, 2025 · 1 reaction

    Thanks

    funscripter627 · Jan 25, 2025 · 1 reaction

    Edit2: Had to update the Kijai wrapper first.

    Getting the error below. I only changed the lora references and the picture. Changing the frames slider changes the number at the end. Setting it to 1 makes the error disappear. Any ideas? Thank you

    HyVideoSampler

    shape '[1, 600, 1, 128]' is invalid for input of size 844800

    Edit: I think it was because I had quantization on for the text encoder. I can't test without it, though, because I don't have enough RAM.

    funscripter627 · Jan 25, 2025

    The original workflow also does this though, so it's probably not an issue with yours.

    half_real · Jan 25, 2025 · 1 reaction

    You need to update the ComfyUI-HunyuanVideoWrapper, otherwise it'll think you're trying to do vid2vid and fail because the video length doesn't match the input length (1 single image = 1 frame). That's why it doesn't fail if you set the length to 1.
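
    (Side note: the numbers in that error line up with this explanation. 600 * 11 * 128 = 844800, so the tensor was sized for 11 latent frames, i.e. a 45-frame vid2vid input, while the single image provides only one. A hypothetical reproduction:)

    ```
    import torch

    # Buffer sized for 11 latent frames, reshaped as if there were only 1:
    x = torch.randn(1, 600, 11, 128)
    x.reshape(1, 600, 1, 128)  # RuntimeError: shape '[1, 600, 1, 128]' is invalid for input of size 844800
    ```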

    LatentDream
    Author
    Jan 25, 2025 · 1 reaction

    update kijai nodes

    funscripter627 · Jan 25, 2025

    @half_real Yeah, I realized that after posting this, thanks.

    midiaplaay · Jan 25, 2025

    What weight for the LoRA?

    half_real · Jan 25, 2025 · 1 reaction

    1, although you can try setting it to something close to 1, like 0.95, to experiment.
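
    (Background, in case the weight is unclear: in standard LoRA math, generic and not this file's internals, the weight scales the low-rank delta added to each base matrix, so 1.0 applies the trained effect in full.)

    ```
    import torch

    # Generic LoRA strength sketch (standard LoRA, not specific to LeapFusion):
    def apply_lora(W, A, B, strength=1.0):
        # W: (out, in); A: (rank, in); B: (out, rank)
        return W + strength * (B @ A)

    W = torch.randn(128, 64)
    A = torch.randn(4, 64)
    B = torch.randn(128, 4)
    W_095 = apply_lora(W, A, B, strength=0.95)  # slightly softened effect
    ```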

    LatentDream
    Author
    Jan 25, 2025

    @half_real I already tried that, but meh

    midiaplaay · Jan 25, 2025

    Does it only work if I update kijai's nodes?

    half_real · Jan 25, 2025 · 1 reaction

    @midiaplaay yes

    XT_404 · Jan 25, 2025

    Error: HyVideoTextEncode

    list index out of range

    LatentDream
    Author
    Jan 25, 2025

    Paste more of the console output.

    XT_404 · Jan 25, 2025

    @LatentDream:

    # ComfyUI Error Report

    ## Error Details

    - Node ID: 30

    - Node Type: HyVideoTextEncode

    - Exception Type: IndexError

    - Exception Message: list index out of range

    ## Stack Trace

    ```

    File "D:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\execution.py", line 327, in execute

    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)

    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

    File "D:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\execution.py", line 202, in get_output_data

    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)

    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

    File "D:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\execution.py", line 174, in _map_node_over_list

    process_inputs(input_dict, i)

    File "D:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\execution.py", line 163, in process_inputs

    results.append(getattr(obj, func)(**inputs))

    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^

    File "D:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-HunyuanVideoWrapper\nodes.py", line 884, in process

    prompt_embeds, negative_prompt_embeds, attention_mask, negative_attention_mask = encode_prompt(self,

    ^^^^^^^^^^^^^^^^^^^

    File "D:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-HunyuanVideoWrapper\nodes.py", line 809, in encode_prompt

    text_inputs = text_encoder.text2tokens(prompt,

    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

    File "D:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-HunyuanVideoWrapper\hyvideo\text_encoder\__init__.py", line 253, in text2tokens

    text_tokens = self.processor(

    ^^^^^^^^^^^^^^^

    File "D:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\python_embeded\Lib\site-packages\transformers\models\llava\processing_llava.py", line 145, in __call__

    image_inputs = self.image_processor(images, **output_kwargs["images_kwargs"])

    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

    File "D:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\python_embeded\Lib\site-packages\transformers\image_processing_utils.py", line 41, in __call__

    return self.preprocess(images, **kwargs)

    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

    File "D:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\python_embeded\Lib\site-packages\transformers\models\clip\image_processing_clip.py", line 286, in preprocess

    images = make_list_of_images(images)

    ^^^^^^^^^^^^^^^^^^^^^^^^^^^

    File "D:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\python_embeded\Lib\site-packages\transformers\image_utils.py", line 185, in make_list_of_images

    if is_batched(images):

    ^^^^^^^^^^^^^^^^^^

    File "D:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\python_embeded\Lib\site-packages\transformers\image_utils.py", line 158, in is_batched

    return is_valid_image(img[0])

    ~~~^^^

    ```

    ## System Information

    - ComfyUI Version: 0.3.12

    - Arguments: ComfyUI\main.py

    - OS: nt

    - Python Version: 3.12.7 (tags/v3.12.7:0b05ead, Oct 1 2024, 03:06:41) [MSC v.1941 64 bit (AMD64)]

    - Embedded Python: true

    - PyTorch Version: 2.5.1+cu124

    ## Devices

    - Name: cuda:0 NVIDIA GeForce RTX 4090 : cudaMallocAsync

    - Type: cuda

    - VRAM Total: 25756696576

    - VRAM Free: 6986435724

    - Torch VRAM Total: 17012097024

    - Torch VRAM Free: 21793932

    ## Logs

    ```

    2025-01-25T18:34:01.701305 - To see the GUI go to: http://127.0.0.1:8188

    2025-01-25T18:34:04.631935 - FETCH ComfyRegistry Data: 5/312025-01-25T18:34:04.631935 -

    2025-01-25T18:34:10.398169 - got prompt

    2025-01-25T18:34:11.839896 - FETCH ComfyRegistry Data: 10/312025-01-25T18:34:11.839896 -

    2025-01-25T18:34:12.148021 - encoded latents shape2025-01-25T18:34:12.148021 - 2025-01-25T18:34:12.148021 - torch.Size([1, 16, 1, 60, 40])2025-01-25T18:34:12.148021 -

    2025-01-25T18:34:12.149021 - Loading text encoder model (clipL) from: D:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\models\clip\clip-vit-large-patch14

    2025-01-25T18:34:12.385548 - Text encoder to dtype: torch.float16

    2025-01-25T18:34:12.413549 - Loading tokenizer (clipL) from: D:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\models\clip\clip-vit-large-patch14

    2025-01-25T18:34:12.466549 - Using a slow image processor as use_fast is unset and a slow processor was saved with this model. use_fast=True will be the default behavior in v4.48, even if the model was saved with a slow processor. This will result in minor differences in outputs. You'll still be able to use a slow processor with use_fast=False.

    2025-01-25T18:34:12.469549 - You are using the default legacy behaviour of the <class 'transformers.models.llama.tokenization_llama_fast.LlamaTokenizerFast'>. This is expected, and simply means that the legacy (previous) behavior will be used so nothing changes for you. If you want to use the new behaviour, set legacy=False. This should only be set if you understand what it means, and thoroughly read the reason why this was added as explained in https://github.com/huggingface/transformers/pull/24565 - if you loaded a llama tokenizer from a GGUF file you can ignore this message.

    2025-01-25T18:34:13.117297 - Loading text encoder model (vlm) from: D:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\models\LLM\llava-llama-3-8b-v1_1-transformers

    2025-01-25T18:34:17.019510 -

    Loading checkpoint shards:  25%|██████████████▎                                          | 1/4 [00:03<00:11,  3.72s/it]2025-01-25T18:34:19.382231 - FETCH ComfyRegistry Data: 15/312025-01-25T18:34:19.383229 -

    2025-01-25T18:34:26.295366 - FETCH ComfyRegistry Data: 20/312025-01-25T18:34:26.295366 -

    2025-01-25T18:34:28.945992 -

    Loading checkpoint shards:  50%|████████████████████████████▌                            | 2/4 [00:15<00:17,  8.55s/it]2025-01-25T18:34:32.952776 - FETCH ComfyRegistry Data: 25/312025-01-25T18:34:32.952776 -

    2025-01-25T18:34:40.382987 - FETCH ComfyRegistry Data: 30/312025-01-25T18:34:40.382987 -

    2025-01-25T18:34:42.028020 - FETCH ComfyRegistry Data [DONE]2025-01-25T18:34:42.028020 -

    2025-01-25T18:34:42.064671 - [ComfyUI-Manager] default cache updated: https://api.comfy.org/nodes

    2025-01-25T18:34:42.109674 - nightly_channel: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/remote

    2025-01-25T18:34:42.109674 - FETCH DATA from: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/custom-node-list.json2025-01-25T18:34:42.109674 - 2025-01-25T18:34:42.302671 - [DONE]2025-01-25T18:34:42.302671 -

    2025-01-25T18:34:49.535974 -

    Loading checkpoint shards: 100%|█████████████████████████████████████████████████████████| 4/4 [00:36<00:00,  9.06s/it]2025-01-25T18:34:49.536973 -

    Loading checkpoint shards: 100%|█████████████████████████████████████████████████████████| 4/4 [00:36<00:00,  9.06s/it]2025-01-25T18:34:49.536973 -

    2025-01-25T18:35:02.702413 - Text encoder to dtype: torch.float16

    2025-01-25T18:35:05.675424 - Loading tokenizer (vlm) from: D:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\models\LLM\llava-llama-3-8b-v1_1-transformers

    2025-01-25T18:35:05.912915 - !!! Exception during processing !!! list index out of range


    2025-01-25T18:35:05.916816 - Prompt executed in 55.51 seconds

    2025-01-25T18:40:54.194053 - got prompt

    2025-01-25T18:40:54.201565 - Failed to validate prompt for output 81:

    2025-01-25T18:40:54.201565 - * HyVideoTextEncode 82:

    2025-01-25T18:40:54.201565 - - Required input is missing: text_encoders

    2025-01-25T18:40:54.201565 - Output will be ignored

    2025-01-25T18:40:54.203566 - Failed to validate prompt for output 62:

    2025-01-25T18:40:54.203566 - Output will be ignored

    2025-01-25T18:40:54.205567 - Failed to validate prompt for output 34:

    2025-01-25T18:40:54.205567 - Output will be ignored

    2025-01-25T18:40:54.205567 - invalid prompt: {'type': 'prompt_outputs_failed_validation', 'message': 'Prompt outputs failed validation', 'details': '', 'extra_info': {}}

    2025-01-25T18:41:11.989066 - got prompt

    2025-01-25T18:41:12.275693 - CLIP/text encoder model load device: cuda:0, offload device: cpu, current: cpu, dtype: torch.float16

    2025-01-25T18:41:12.276693 - clip missing: ['text_model.embeddings.token_embedding.weight', 'text_model.embeddings.position_embedding.weight', 'text_model.encoder.layers.0.layer_norm1.weight', 'text_model.encoder.layers.0.layer_norm1.bias', 'text_model.encoder.layers.0.self_attn.q_proj.weight', 'text_model.encoder.layers.0.self_attn.q_proj.bias', 'text_model.encoder.layers.0.self_attn.k_proj.weight', 'text_model.encoder.layers.0.self_attn.k_proj.bias', 'text_model.encoder.layers.0.self_attn.v_proj.weight', 'text_model.encoder.layers.0.self_attn.v_proj.bias', 'text_model.encoder.layers.0.self_attn.out_proj.weight', 'text_model.encoder.layers.0.self_attn.out_proj.bias', 'text_model.encoder.layers.0.layer_norm2.weight', 'text_model.encoder.layers.0.layer_norm2.bias', 'text_model.encoder.layers.0.mlp.fc1.weight', 'text_model.encoder.layers.0.mlp.fc1.bias', 'text_model.encoder.layers.0.mlp.fc2.weight', 'text_model.encoder.layers.0.mlp.fc2.bias', 'text_model.encoder.layers.1.layer_norm1.weight', 'text_model.encoder.layers.1.layer_norm1.bias', 'text_model.encoder.layers.1.self_attn.q_proj.weight', 'text_model.encoder.layers.1.self_attn.q_proj.bias', 'text_model.encoder.layers.1.self_attn.k_proj.weight', 'text_model.encoder.layers.1.self_attn.k_proj.bias', 'text_model.encoder.layers.1.self_attn.v_proj.weight', 'text_model.encoder.layers.1.self_attn.v_proj.bias', 'text_model.encoder.layers.1.self_attn.out_proj.weight', 'text_model.encoder.layers.1.self_attn.out_proj.bias', 'text_model.encoder.layers.1.layer_norm2.weight', 'text_model.encoder.layers.1.layer_norm2.bias', 'text_model.encoder.layers.1.mlp.fc1.weight', 'text_model.encoder.layers.1.mlp.fc1.bias', 'text_model.encoder.layers.1.mlp.fc2.weight', 'text_model.encoder.layers.1.mlp.fc2.bias', 'text_model.encoder.layers.2.layer_norm1.weight', 'text_model.encoder.layers.2.layer_norm1.bias', 'text_model.encoder.layers.2.self_attn.q_proj.weight', 'text_model.encoder.layers.2.self_attn.q_proj.bias', 'text_model.encoder.layers.2.self_attn.k_proj.weight', 'text_model.encoder.layers.2.self_attn.k_proj.bias', 'text_model.encoder.layers.2.self_attn.v_proj.weight', 'text_model.encoder.layers.2.self_attn.v_proj.bias', 'text_model.encoder.layers.2.self_attn.out_proj.weight', 'text_model.encoder.layers.2.self_attn.out_proj.bias', 'text_model.encoder.layers.2.layer_norm2.weight', 'text_model.encoder.layers.2.layer_norm2.bias', 'text_model.encoder.layers.2.mlp.fc1.weight', 'text_model.encoder.layers.2.mlp.fc1.bias', 'text_model.encoder.layers.2.mlp.fc2.weight', 'text_model.encoder.layers.2.mlp.fc2.bias', 'text_model.encoder.layers.3.layer_norm1.weight', 'text_model.encoder.layers.3.layer_norm1.bias', 'text_model.encoder.layers.3.self_attn.q_proj.weight', 'text_model.encoder.layers.3.self_attn.q_proj.bias', 'text_model.encoder.layers.3.self_attn.k_proj.weight', 'text_model.encoder.layers.3.self_attn.k_proj.bias', 'text_model.encoder.layers.3.self_attn.v_proj.weight', 'text_model.encoder.layers.3.self_attn.v_proj.bias', 'text_model.encoder.layers.3.self_attn.out_proj.weight', 'text_model.encoder.layers.3.self_attn.out_proj.bias', 'text_model.encoder.layers.3.layer_norm2.weight', 'text_model.encoder.layers.3.layer_norm2.bias', 'text_model.encoder.layers.3.mlp.fc1.weight', 'text_model.encoder.layers.3.mlp.fc1.bias', 'text_model.encoder.layers.3.mlp.fc2.weight', 'text_model.encoder.layers.3.mlp.fc2.bias', 'text_model.encoder.layers.4.layer_norm1.weight', 'text_model.encoder.layers.4.layer_norm1.bias', 'text_model.encoder.layers.4.self_attn.q_proj.weight', 
'text_model.encoder.layers.4.self_attn.q_proj.bias', 'text_model.encoder.layers.4.self_attn.k_proj.weight', 'text_model.encoder.layers.4.self_attn.k_proj.bias', 'text_model.encoder.layers.4.self_attn.v_proj.weight', 'text_model.encoder.layers.4.self_attn.v_proj.bias', 'text_model.encoder.layers.4.self_attn.out_proj.weight', 'text_model.encoder.layers.4.self_attn.out_proj.bias', 'text_model.encoder.layers.4.layer_norm2.weight', 'text_model.encoder.layers.4.layer_norm2.bias', 'text_model.encoder.layers.4.mlp.fc1.weight', 'text_model.encoder.layers.4.mlp.fc1.bias', 'text_model.encoder.layers.4.mlp.fc2.weight', 'text_model.encoder.layers.4.mlp.fc2.bias', 'text_model.encoder.layers.5.layer_norm1.weight', 'text_model.encoder.layers.5.layer_norm1.bias', 'text_model.encoder.layers.5.self_attn.q_proj.weight', 'text_model.encoder.layers.5.self_attn.q_proj.bias', 'text_model.encoder.layers.5.self_attn.k_proj.weight', 'text_model.encoder.layers.5.self_attn.k_proj.bias', 'text_model.encoder.layers.5.self_attn.v_proj.weight', 'text_model.encoder.layers.5.self_attn.v_proj.bias', 'text_model.encoder.layers.5.self_attn.out_proj.weight', 'text_model.encoder.layers.5.self_attn.out_proj.bias', 'text_model.encoder.layers.5.layer_norm2.weight', 'text_model.encoder.layers.5.layer_norm2.bias', 'text_model.encoder.layers.5.mlp.fc1.weight', 'text_model.encoder.layers.5.mlp.fc1.bias', 'text_model.encoder.layers.5.mlp.fc2.weight', 'text_model.encoder.layers.5.mlp.fc2.bias', 'text_model.encoder.layers.6.layer_norm1.weight', 'text_model.encoder.layers.6.layer_norm1.bias', 'text_model.encoder.layers.6.self_attn.q_proj.weight', 'text_model.encoder.layers.6.self_attn.q_proj.bias', 'text_model.encoder.layers.6.self_attn.k_proj.weight', 'text_model.encoder.layers.6.self_attn.k_proj.bias', 'text_model.encoder.layers.6.self_attn.v_proj.weight', 'text_model.encoder.layers.6.self_attn.v_proj.bias', 'text_model.encoder.layers.6.self_attn.out_proj.weight', 'text_model.encoder.layers.6.self_attn.out_proj.bias', 'text_model.encoder.layers.6.layer_norm2.weight', 'text_model.encoder.layers.6.layer_norm2.bias', 'text_model.encoder.layers.6.mlp.fc1.weight', 'text_model.encoder.layers.6.mlp.fc1.bias', 'text_model.encoder.layers.6.mlp.fc2.weight', 'text_model.encoder.layers.6.mlp.fc2.bias', 'text_model.encoder.layers.7.layer_norm1.weight', 'text_model.encoder.layers.7.layer_norm1.bias', 'text_model.encoder.layers.7.self_attn.q_proj.weight', 'text_model.encoder.layers.7.self_attn.q_proj.bias', 'text_model.encoder.layers.7.self_attn.k_proj.weight', 'text_model.encoder.layers.7.self_attn.k_proj.bias', 'text_model.encoder.layers.7.self_attn.v_proj.weight', 'text_model.encoder.layers.7.self_attn.v_proj.bias', 'text_model.encoder.layers.7.self_attn.out_proj.weight', 'text_model.encoder.layers.7.self_attn.out_proj.bias', 'text_model.encoder.layers.7.layer_norm2.weight', 'text_model.encoder.layers.7.layer_norm2.bias', 'text_model.encoder.layers.7.mlp.fc1.weight', 'text_model.encoder.layers.7.mlp.fc1.bias', 'text_model.encoder.layers.7.mlp.fc2.weight', 'text_model.encoder.layers.7.mlp.fc2.bias', 'text_model.encoder.layers.8.layer_norm1.weight', 'text_model.encoder.layers.8.layer_norm1.bias', 'text_model.encoder.layers.8.self_attn.q_proj.weight', 'text_model.encoder.layers.8.self_attn.q_proj.bias', 'text_model.encoder.layers.8.self_attn.k_proj.weight', 'text_model.encoder.layers.8.self_attn.k_proj.bias', 'text_model.encoder.layers.8.self_attn.v_proj.weight', 'text_model.encoder.layers.8.self_attn.v_proj.bias', 
'text_model.encoder.layers.8.self_attn.out_proj.weight', 'text_model.encoder.layers.8.self_attn.out_proj.bias', 'text_model.encoder.layers.8.layer_norm2.weight', 'text_model.encoder.layers.8.layer_norm2.bias', 'text_model.encoder.layers.8.mlp.fc1.weight', 'text_model.encoder.layers.8.mlp.fc1.bias', 'text_model.encoder.layers.8.mlp.fc2.weight', 'text_model.encoder.layers.8.mlp.fc2.bias', 'text_model.encoder.layers.9.layer_norm1.weight', 'text_model.encoder.layers.9.layer_norm1.bias', 'text_model.encoder.layers.9.self_attn.q_proj.weight', 'text_model.encoder.layers.9.self_attn.q_proj.bias', 'text_model.encoder.layers.9.self_attn.k_proj.weight', 'text_model.encoder.layers.9.self_attn.k_proj.bias', 'text_model.encoder.layers.9.self_attn.v_proj.weight', 'text_model.encoder.layers.9.self_attn.v_proj.bias', 'text_model.encoder.layers.9.self_attn.out_proj.weight', 'text_model.encoder.layers.9.self_attn.out_proj.bias', 'text_model.encoder.layers.9.layer_norm2.weight', 'text_model.encoder.layers.9.layer_norm2.bias', 'text_model.encoder.layers.9.mlp.fc1.weight', 'text_model.encoder.layers.9.mlp.fc1.bias', 'text_model.encoder.layers.9.mlp.fc2.weight', 'text_model.encoder.layers.9.mlp.fc2.bias', 'text_model.encoder.layers.10.layer_norm1.weight', 'text_model.encoder.layers.10.layer_norm1.bias', 'text_model.encoder.layers.10.self_attn.q_proj.weight', 'text_model.encoder.layers.10.self_attn.q_proj.bias', 'text_model.encoder.layers.10.self_attn.k_proj.weight', 'text_model.encoder.layers.10.self_attn.k_proj.bias', 'text_model.encoder.layers.10.self_attn.v_proj.weight', 'text_model.encoder.layers.10.self_attn.v_proj.bias', 'text_model.encoder.layers.10.self_attn.out_proj.weight', 'text_model.encoder.layers.10.self_attn.out_proj.bias', 'text_model.encoder.layers.10.layer_norm2.weight', 'text_model.encoder.layers.10.layer_norm2.bias', 'text_model.encoder.layers.10.mlp.fc1.weight', 'text_model.encoder.layers.10.mlp.fc1.bias', 'text_model.encoder.layers.10.mlp.fc2.weight', 'text_model.encoder.layers.10.mlp.fc2.bias', 'text_model.encoder.layers.11.layer_norm1.weight', 'text_model.encoder.layers.11.layer_norm1.bias', 'text_model.encoder.layers.11.self_attn.q_proj.weight', 'text_model.encoder.layers.11.self_attn.q_proj.bias', 'text_model.encoder.layers.11.self_attn.k_proj.weight', 'text_model.encoder.layers.11.self_attn.k_proj.bias', 'text_model.encoder.layers.11.self_attn.v_proj.weight', 'text_model.encoder.layers.11.self_attn.v_proj.bias', 'text_model.encoder.layers.11.self_attn.out_proj.weight', 'text_model.encoder.layers.11.self_attn.out_proj.bias', 'text_model.encoder.layers.11.layer_norm2.weight', 'text_model.encoder.layers.11.layer_norm2.bias', 'text_model.encoder.layers.11.mlp.fc1.weight', 'text_model.encoder.layers.11.mlp.fc1.bias', 'text_model.encoder.layers.11.mlp.fc2.weight', 'text_model.encoder.layers.11.mlp.fc2.bias', 'text_model.final_layer_norm.weight', 'text_model.final_layer_norm.bias', 'text_projection.weight']

    2025-01-25T18:41:12.407692 - clip missing: ['text_projection.weight']

    2025-01-25T18:41:12.442692 - SELECTED: input12025-01-25T18:41:12.442692 -

    2025-01-25T18:41:12.442692 - ImpactSwitch: invalid select index (ignored)2025-01-25T18:41:12.442692 -

    2025-01-25T18:41:12.443692 - !!! Exception during processing !!! 'NoneType' object is not subscriptable

    2025-01-25T18:41:12.444693 - Traceback (most recent call last):

    File "D:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\execution.py", line 327, in execute

    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)

    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

    File "D:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\execution.py", line 202, in get_output_data

    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)

    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

    File "D:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\execution.py", line 174, in _map_node_over_list

    process_inputs(input_dict, i)

    File "D:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\execution.py", line 163, in process_inputs

    results.append(getattr(obj, func)(**inputs))

    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^

    File "D:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-HunyuanVideoWrapper\nodes.py", line 772, in process

    text_encoder_1 = text_encoders["text_encoder"]

    ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^

    TypeError: 'NoneType' object is not subscriptable

    2025-01-25T18:41:12.444693 - Prompt executed in 0.38 seconds

    2025-01-25T18:41:25.575186 - got prompt

    2025-01-25T18:41:25.730184 - CLIP/text encoder model load device: cuda:0, offload device: cpu, current: cpu, dtype: torch.float16

    2025-01-25T18:41:25.940183 - !!! Exception during processing !!! Error(s) in loading state_dict for CLIPTextModel:

    size mismatch for text_model.embeddings.position_embedding.weight: copying a param with shape torch.Size([77, 768]) from checkpoint, the shape in current model is torch.Size([248, 768]).

    2025-01-25T18:41:25.950183 - Traceback (most recent call last):

    File "D:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\execution.py", line 327, in execute

    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)

    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

    File "D:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\execution.py", line 202, in get_output_data

    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)

    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

    File "D:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\execution.py", line 174, in _map_node_over_list

    process_inputs(input_dict, i)

    File "D:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\execution.py", line 163, in process_inputs

    results.append(getattr(obj, func)(**inputs))

    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^

    File "D:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\nodes.py", line 986, in load_clip

    clip = comfy.sd.load_clip(ckpt_paths=[clip_path1, clip_path2], embedding_directory=folder_paths.get_folder_paths("embeddings"), clip_type=clip_type, model_options=model_options)

    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

    File "D:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\comfy\sd.py", line 666, in load_clip

    return load_text_encoder_state_dicts(clip_data, embedding_directory=embedding_directory, clip_type=clip_type, model_options=model_options)

    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

    File "D:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\comfy\sd.py", line 808, in load_text_encoder_state_dicts

    m, u = clip.load_sd(c)

    ^^^^^^^^^^^^^^^

    File "D:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\comfy\sd.py", line 232, in load_sd

    return self.cond_stage_model.load_sd(sd)

    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

    File "D:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\comfy\sdxl_clip.py", line 69, in load_sd

    return self.clip_l.load_sd(sd)

    ^^^^^^^^^^^^^^^^^^^^^^^

    File "D:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\comfy\sd1_clip.py", line 255, in load_sd

    return self.transformer.load_state_dict(sd, strict=False)

    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

    File "D:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 2584, in load_state_dict

    raise RuntimeError(

    RuntimeError: Error(s) in loading state_dict for CLIPTextModel:

    size mismatch for text_model.embeddings.position_embedding.weight: copying a param with shape torch.Size([77, 768]) from checkpoint, the shape in current model is torch.Size([248, 768]).

    2025-01-25T18:41:25.951184 - Prompt executed in 0.34 seconds

    2025-01-25T18:41:34.605313 - got prompt

    2025-01-25T18:41:34.792313 - CLIP/text encoder model load device: cuda:0, offload device: cpu, current: cpu, dtype: torch.float16

    2025-01-25T18:41:35.042511 - !!! Exception during processing !!! Error(s) in loading state_dict for CLIPTextModel:

    size mismatch for text_model.embeddings.position_embedding.weight: copying a param with shape torch.Size([77, 768]) from checkpoint, the shape in current model is torch.Size([248, 768]).


    2025-01-25T18:41:35.044512 - Prompt executed in 0.38 seconds

    2025-01-25T18:41:41.825192 - got prompt

    2025-01-25T18:41:42.025193 - CLIP/text encoder model load device: cuda:0, offload device: cpu, current: cpu, dtype: torch.float16

    2025-01-25T18:41:42.468192 - !!! Exception during processing !!! 'NoneType' object is not subscriptable


    2025-01-25T18:41:42.470193 - Prompt executed in 0.58 seconds

    2025-01-25T18:41:52.510662 - got prompt

    2025-01-25T18:41:52.571661 - !!! Exception during processing !!! 'NoneType' object is not subscriptable


    2025-01-25T18:41:52.572662 - Prompt executed in 0.01 seconds

    2025-01-25T18:42:00.162119 - got prompt

    2025-01-25T18:42:00.245117 - !!! Exception during processing !!! 'NoneType' object is not subscriptable


    2025-01-25T18:42:00.246118 - Prompt executed in 0.02 seconds

    2025-01-25T18:42:21.879853 - got prompt

    2025-01-25T18:42:21.963853 - !!! Exception during processing !!! 'NoneType' object is not subscriptable


    2025-01-25T18:42:21.965853 - Prompt executed in 0.04 seconds

    2025-01-25T18:44:57.676971 - got prompt

    2025-01-25T18:44:57.938770 - D:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui-impact-subpack\modules\subcore.py:150: FutureWarning: You are using torch.load with weights_only=False (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for weights_only will be flipped to True. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via torch.serialization.add_safe_globals. We recommend you start setting weights_only=True for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.

    return orig_torch_load(*args, **kwargs) # NOTE: This code simply delegates the call to torch.load, and any errors that occur here are not the responsibility of Subpack.

    2025-01-25T18:44:58.805538 - Loads SAM model: D:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\models\sams\sam_vit_b_01ec64.pth (device:AUTO)2025-01-25T18:44:58.805538 -

    2025-01-25T18:44:59.431492 - model weight dtype torch.float16, manual cast: None

    2025-01-25T18:44:59.433491 - model_type EPS

    2025-01-25T18:45:17.539870 - Using pytorch attention in VAE

    2025-01-25T18:45:17.540869 - Using pytorch attention in VAE

    2025-01-25T18:45:18.263869 - VAE load device: cuda:0, offload device: cpu, dtype: torch.bfloat16

    2025-01-25T18:45:18.402869 - CLIP/text encoder model load device: cuda:0, offload device: cpu, current: cpu, dtype: torch.float16

    2025-01-25T18:45:27.818869 - Requested to load SDXLClipModel

    2025-01-25T18:45:29.854869 - loaded completely 9.5367431640625e+25 1560.802734375 True

    2025-01-25T18:45:29.963870 - Token indices sequence length is longer than the specified maximum sequence length for this model (134 > 77). Running this sequence through the model will result in indexing errors

    2025-01-25T18:45:29.966870 - Token indices sequence length is longer than the specified maximum sequence length for this model (134 > 77). Running this sequence through the model will result in indexing errors

    2025-01-25T18:45:30.013870 - Requested to load SDXL

    2025-01-25T18:45:35.282869 - loaded completely 9.5367431640625e+25 4897.0483474731445 True

    2025-01-25T18:45:57.049527 -

    100%|██████████████████████████████████████████████████████████████████████████████████| 30/30 [00:21<00:00,  1.73it/s]2025-01-25T18:45:57.049527 -

    100%|██████████████████████████████████████████████████████████████████████████████████| 30/30 [00:21<00:00,  1.38it/s]2025-01-25T18:45:57.049527 -

    2025-01-25T18:45:57.050529 - Requested to load AutoencoderKL

    2025-01-25T18:45:57.077528 - loaded completely 9.5367431640625e+25 159.55708122253418 True

    2025-01-25T18:46:00.209889 - [Impact Pack] WARN: FaceDetailer is not a node designed for video detailing. If you intend to perform video detailing, please use Detailer For AnimateDiff.2025-01-25T18:46:00.209889 -

    2025-01-25T18:46:00.881889 -

    2025-01-25T18:46:01.113889 - 0: 640x512 1 face, 39.0ms

    2025-01-25T18:46:01.114890 - Speed: 3.0ms preprocess, 39.0ms inference, 13.0ms postprocess per image at shape (1, 3, 640, 512)

    2025-01-25T18:46:07.191319 - Detailer: segment skip (enough big)2025-01-25T18:46:07.191319 -

    2025-01-25T18:46:07.619067 -

    2025-01-25T18:46:07.722067 - 0: 640x512 1 face, 12.0ms

    2025-01-25T18:46:07.722067 - Speed: 3.0ms preprocess, 12.0ms inference, 3.0ms postprocess per image at shape (1, 3, 640, 512)

    2025-01-25T18:46:13.477369 - Detailer: segment skip (enough big)2025-01-25T18:46:13.478368 -

    2025-01-25T18:46:17.420695 - Prompt executed in 79.74 seconds

    2025-01-25T18:53:11.707762 - QualityOfLifeSuit_Omar92::NSP ready2025-01-25T18:53:11.707762 -

    2025-01-25T18:53:11.708762 - Error. No naistyles.csv found. Put your naistyles.csv in the custom_nodes/ComfyUI_NAI-mod/CSV directory of ComfyUI. Then press "Refresh".

    Your current root directory is: D:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI


    2025-01-25T18:53:59.798759 - got prompt

    2025-01-25T18:53:59.857812 - Requested to load SDXL

    2025-01-25T18:54:01.722751 - loaded completely 9.5367431640625e+25 4897.0483474731445 True

    2025-01-25T18:54:23.054398 -

    100%|██████████████████████████████████████████████████████████████████████████████████| 30/30 [00:21<00:00,  1.71it/s]2025-01-25T18:54:23.054398 -

    100%|██████████████████████████████████████████████████████████████████████████████████| 30/30 [00:21<00:00,  1.41it/s]2025-01-25T18:54:23.054398 -

    2025-01-25T18:54:23.055389 - Requested to load AutoencoderKL

    2025-01-25T18:54:23.082389 - loaded completely 9.5367431640625e+25 159.55708122253418 True

    2025-01-25T18:54:25.899578 - [Impact Pack] WARN: FaceDetailer is not a node designed for video detailing. If you intend to perform video detailing, please use Detailer For AnimateDiff.2025-01-25T18:54:25.900578 -

    2025-01-25T18:54:26.308578 -

    2025-01-25T18:54:26.416606 - 0: 640x512 1 face, 6.0ms

    2025-01-25T18:54:26.416606 - Speed: 3.0ms preprocess, 6.0ms inference, 2.0ms postprocess per image at shape (1, 3, 640, 512)

    2025-01-25T18:54:31.821300 - Detailer: segment skip (enough big)2025-01-25T18:54:31.821300 -

    2025-01-25T18:54:32.257835 -

    2025-01-25T18:54:32.355827 - 0: 640x512 1 face, 12.0ms

    2025-01-25T18:54:32.355827 - Speed: 3.0ms preprocess, 12.0ms inference, 3.0ms postprocess per image at shape (1, 3, 640, 512)

    2025-01-25T18:54:38.777957 - Detailer: segment skip (enough big)2025-01-25T18:54:38.778957 -

    2025-01-25T18:54:41.712759 - Prompt executed in 41.91 seconds

    2025-01-25T18:55:17.230228 - got prompt


    2025-01-25T18:55:17.865228 - Requested to load SDXL

    2025-01-25T18:55:19.617228 - loaded completely 9.5367431640625e+25 4897.0483474731445 True

    2025-01-25T18:55:41.400761 -

    100%|██████████████████████████████████████████████████████████████████████████████████| 30/30 [00:21<00:00,  1.71it/s]2025-01-25T18:55:41.400761 -

    100%|██████████████████████████████████████████████████████████████████████████████████| 30/30 [00:21<00:00,  1.38it/s]2025-01-25T18:55:41.400761 -

    2025-01-25T18:55:41.401762 - Requested to load AutoencoderKL

    2025-01-25T18:55:41.426761 - loaded completely 9.5367431640625e+25 159.55708122253418 True

    2025-01-25T18:55:43.512761 - [Impact Pack] WARN: FaceDetailer is not a node designed for video detailing. If you intend to perform video detailing, please use Detailer For AnimateDiff.2025-01-25T18:55:43.512761 -

    2025-01-25T18:55:43.941761 -

    2025-01-25T18:55:44.068761 - 0: 640x512 1 face, 7.0ms

    2025-01-25T18:55:44.068761 - Speed: 2.0ms preprocess, 7.0ms inference, 4.0ms postprocess per image at shape (1, 3, 640, 512)

    2025-01-25T18:55:52.481139 - Detailer: segment skip (enough big)2025-01-25T18:55:52.482139 -

    2025-01-25T18:55:52.951036 -

    2025-01-25T18:55:53.079036 - 0: 640x512 1 face, 35.0ms

    2025-01-25T18:55:53.079036 - Speed: 5.0ms preprocess, 35.0ms inference, 4.0ms postprocess per image at shape (1, 3, 640, 512)

    2025-01-25T18:56:00.344663 - Detailer: segment skip (enough big)2025-01-25T18:56:00.344663 -

    2025-01-25T18:56:02.590730 - Prompt executed in 45.36 seconds

    2025-01-25T19:00:45.220629 - got prompt


    2025-01-25T19:00:45.254629 - Requested to load SDXL

    2025-01-25T19:00:46.888821 - loaded completely 9.5367431640625e+25 4897.0483474731445 True

    2025-01-25T19:01:07.844479 -

    100%|██████████████████████████████████████████████████████████████████████████████████| 30/30 [00:20<00:00,  1.76it/s]2025-01-25T19:01:07.844479 -

    100%|██████████████████████████████████████████████████████████████████████████████████| 30/30 [00:20<00:00,  1.43it/s]2025-01-25T19:01:07.844479 -

    2025-01-25T19:01:07.845480 - Requested to load AutoencoderKL

    2025-01-25T19:01:07.869479 - loaded completely 9.5367431640625e+25 159.55708122253418 True

    2025-01-25T19:01:09.907526 - [Impact Pack] WARN: FaceDetailer is not a node designed for video detailing. If you intend to perform video detailing, please use Detailer For AnimateDiff.2025-01-25T19:01:09.907526 -

    2025-01-25T19:01:10.292541 -

    2025-01-25T19:01:10.388521 - 0: 640x512 1 face, 5.0ms

    2025-01-25T19:01:10.388521 - Speed: 3.0ms preprocess, 5.0ms inference, 3.0ms postprocess per image at shape (1, 3, 640, 512)

    2025-01-25T19:01:17.113429 - Detailer: segment skip (enough big)2025-01-25T19:01:17.114428 -

    2025-01-25T19:01:17.556428 -

    2025-01-25T19:01:17.687428 - 0: 640x512 1 face, 34.0ms

    2025-01-25T19:01:17.687428 - Speed: 5.0ms preprocess, 34.0ms inference, 4.0ms postprocess per image at shape (1, 3, 640, 512)

    2025-01-25T19:01:23.804062 - Detailer: segment skip (enough big)2025-01-25T19:01:23.804062 -

    2025-01-25T19:01:26.037471 - Prompt executed in 40.81 seconds

    2025-01-25T19:03:06.877927 - got prompt

    2025-01-25T19:03:08.848329 - encoded latents shape2025-01-25T19:03:08.849329 - 2025-01-25T19:03:08.849329 - torch.Size([1, 16, 1, 60, 40])2025-01-25T19:03:08.849329 -

    2025-01-25T19:03:08.850329 - Loading text encoder model (clipL) from: D:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\models\clip\clip-vit-large-patch14

    2025-01-25T19:03:09.122335 - Text encoder to dtype: torch.float16

    2025-01-25T19:03:09.151330 - Loading tokenizer (clipL) from: D:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\models\clip\clip-vit-large-patch14

    2025-01-25T19:03:09.201329 - Loading text encoder model (llm) from: D:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\models\LLM\llava-llama-3-8b-text-encoder-tokenizer

    2025-01-25T19:03:09.230336 -

    Loading checkpoint shards: 0%| | 0/4 [00:00<?, ?it/s]2025-01-25T19:03:09.230336 -

    Loading checkpoint shards: 0%| | 0/4 [00:00<?, ?it/s]2025-01-25T19:03:09.230336 -

    2025-01-25T19:03:09.230336 - !!! Exception during processing !!! No such file or directory: "D:\\ComfyUI_windows_portable_nvidia\\ComfyUI_windows_portable\\ComfyUI\\models\\LLM\\llava-llama-3-8b-text-encoder-tokenizer\\model-00001-of-00004.safetensors"

    2025-01-25T19:03:09.232336 - Traceback (most recent call last):

    File "D:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\execution.py", line 327, in execute

    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)

    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

    File "D:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\execution.py", line 202, in get_output_data

    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)

    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

    File "D:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\execution.py", line 174, in mapnode_over_list

    process_inputs(input_dict, i)

    File "D:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\execution.py", line 163, in process_inputs

    results.append(getattr(obj, func)(**inputs))

    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^

    File "D:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-HunyuanVideoWrapper\nodes.py", line 684, in loadmodel

    text_encoder = TextEncoder(

    ^^^^^^^^^^^^

    File "D:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-HunyuanVideoWrapper\hyvideo\text_encoder\__init__.py", line 167, in init

    self.model, self.model_path = load_text_encoder(

    ^^^^^^^^^^^^^^^^^^

    File "D:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-HunyuanVideoWrapper\hyvideo\text_encoder\__init__.py", line 39, in load_text_encoder

    text_encoder = AutoModel.from_pretrained(

    ^^^^^^^^^^^^^^^^^^^^^^^^^^

    File "D:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\python_embeded\Lib\site-packages\transformers\models\auto\auto_factory.py", line 564, in from_pretrained

    return model_class.from_pretrained(

    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^

    File "D:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\python_embeded\Lib\site-packages\transformers\modeling_utils.py", line 4224, in from_pretrained

    ) = cls._load_pretrained_model(

    ^^^^^^^^^^^^^^^^^^^^^^^^^^^

    File "D:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\python_embeded\Lib\site-packages\transformers\modeling_utils.py", line 4770, in loadpretrained_model

    state_dict = load_state_dict(

    ^^^^^^^^^^^^^^^^

    File "D:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\python_embeded\Lib\site-packages\transformers\modeling_utils.py", line 504, in load_state_dict

    with safe_open(checkpoint_file, framework="pt") as f:

    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

    FileNotFoundError: No such file or directory: "D:\\ComfyUI_windows_portable_nvidia\\ComfyUI_windows_portable\\ComfyUI\\models\\LLM\\llava-llama-3-8b-text-encoder-tokenizer\\model-00001-of-00004.safetensors"

    2025-01-25T19:03:09.234336 - Prompt executed in 2.35 seconds

    2025-01-25T19:03:56.297911 - Error. No naistyles.csv found. Put your naistyles.csv in the custom_nodes/ComfyUI_NAI-mod/CSV directory of ComfyUI. Then press "Refresh".

    Your current root directory is: D:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI

    2025-01-25T19:05:10.119189 - got prompt

    2025-01-25T19:05:10.149183 - Loading text encoder model (clipL) from: D:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\models\clip\clip-vit-large-patch14

    2025-01-25T19:05:10.382084 - Text encoder to dtype: torch.float16

    2025-01-25T19:05:10.485083 - Loading tokenizer (clipL) from: D:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\models\clip\clip-vit-large-patch14

    2025-01-25T19:05:11.197818 - Loading text encoder model (vlm) from: D:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\models\LLM\llava-llama-3-8b-v1_1-transformers

    2025-01-25T19:05:26.055480 -

    Loading checkpoint shards: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 4/4 [00:14<00:00, 3.42s/it]2025-01-25T19:05:26.055480 -

    Loading checkpoint shards: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 4/4 [00:14<00:00, 3.70s/it]2025-01-25T19:05:26.055480 -

    2025-01-25T19:05:34.647669 - Text encoder to dtype: torch.float16

    2025-01-25T19:05:37.668186 - Loading tokenizer (vlm) from: D:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\models\LLM\llava-llama-3-8b-v1_1-transformers

    2025-01-25T19:05:37.897179 - !!! Exception during processing !!! list index out of range

    2025-01-25T19:05:37.898178 - Traceback (most recent call last):

    File "D:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\execution.py", line 327, in execute

    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)

    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

    File "D:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\execution.py", line 202, in get_output_data

    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)

    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

    File "D:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\execution.py", line 174, in mapnode_over_list

    process_inputs(input_dict, i)

    File "D:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\execution.py", line 163, in process_inputs

    results.append(getattr(obj, func)(**inputs))

    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^

    File "D:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-HunyuanVideoWrapper\nodes.py", line 884, in process

    prompt_embeds, negative_prompt_embeds, attention_mask, negative_attention_mask = encode_prompt(self,

    ^^^^^^^^^^^^^^^^^^^

    File "D:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-HunyuanVideoWrapper\nodes.py", line 809, in encode_prompt

    text_inputs = text_encoder.text2tokens(prompt,

    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

    File "D:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-HunyuanVideoWrapper\hyvideo\text_encoder\__init__.py", line 253, in text2tokens

    text_tokens = self.processor(

    ^^^^^^^^^^^^^^^

    File "D:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\python_embeded\Lib\site-packages\transformers\models\llava\processing_llava.py", line 145, in call

    image_inputs = self.image_processor(images, **output_kwargs["images_kwargs"])

    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

    File "D:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\python_embeded\Lib\site-packages\transformers\image_processing_utils.py", line 41, in call

    return self.preprocess(images, **kwargs)

    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

    File "D:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\python_embeded\Lib\site-packages\transformers\models\clip\image_processing_clip.py", line 286, in preprocess

    images = make_list_of_images(images)

    ^^^^^^^^^^^^^^^^^^^^^^^^^^^

    File "D:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\python_embeded\Lib\site-packages\transformers\image_utils.py", line 185, in make_list_of_images

    if is_batched(images):

    ^^^^^^^^^^^^^^^^^^

    File "D:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\python_embeded\Lib\site-packages\transformers\image_utils.py", line 158, in is_batched

    return is_valid_image(img[0])

    ~~~^^^

    IndexError: list index out of range

    2025-01-25T19:05:37.899179 - Prompt executed in 27.77 seconds

    ```

    ## Attached Workflow

    Please make sure that workflow does not contain any sensitive information such as API keys or passwords.

    ```

    {"last_node_id":81,"last_link_id":109,"nodes":[{"id":43,"type":"HyVideoEncode","pos":[-280,430],"size":[420,150],"flags":{},"order":11,"mode":0,"inputs":[{"name":"vae","type":"VAE","link":54},{"name":"image","type":"IMAGE","link":63}],"outputs":[{"name":"samples","type":"LATENT","links":[64],"slot_index":0}],"properties":{"Node name for S&R":"HyVideoEncode"},"widgets_values":[false,64,128,true]},{"id":61,"type":"mxSlider","pos":[-280,-180],"size":[800,30],"flags":{"pinned":false},"order":0,"mode":0,"inputs":[],"outputs":[{"name":"","type":"INT","links":[86,89],"slot_index":0}],"title":"FRAMES","properties":{"Node name for S&R":"mxSlider","value":97,"min":1,"max":301,"step":4,"decimals":0,"snap":true},"widgets_values":[97,97,0],"color":"#000000","bgcolor":"#b30057","shape":1},{"id":5,"type":"HyVideoDecode","pos":[560,-270],"size":[345.4285888671875,150],"flags":{"collapsed":false},"order":15,"mode":0,"inputs":[{"name":"vae","type":"VAE","link":6},{"name":"samples","type":"LATENT","link":4}],"outputs":[{"name":"images","type":"IMAGE","links":[83,87],"slot_index":0}],"properties":{"Node name for S&R":"HyVideoDecode"},"widgets_values":[true,64,192,false]},{"id":68,"type":"HyVideoLoraSelect","pos":[-1010,-380],"size":[380,102],"flags":{},"order":7,"mode":4,"inputs":[{"name":"prev_lora","type":"HYVIDLORA","link":92,"shape":7},{"name":"blocks","type":"SELECTEDBLOCKS","link":null,"shape":7}],"outputs":[{"name":"lora","type":"HYVIDLORA","links":[91],"slot_index":0}],"properties":{"Node name for S&R":"HyVideoLoraSelect"},"widgets_values":["lora Hun\\closeupface-v1.1.safetensors",0.7000000000000001]},{"id":69,"type":"HyVideoLoraSelect","pos":[-1010,-540],"size":[380,102],"flags":{},"order":1,"mode":4,"inputs":[{"name":"prev_lora","type":"HYVIDLORA","link":null,"shape":7},{"name":"blocks","type":"SELECTEDBLOCKS","link":null,"shape":7}],"outputs":[{"name":"lora","type":"HYVIDLORA","links":[92],"slot_index":0}],"properties":{"Node name for S&R":"HyVideoLoraSelect"},"widgets_values":["lora Hun\\closeupface-v1.1.safetensors",0.7000000000000001]},{"id":7,"type":"HyVideoVAELoader","pos":[-280,-310],"size":[379.166748046875,82],"flags":{},"order":2,"mode":0,"inputs":[{"name":"compile_args","type":"COMPILEARGS","link":null,"shape":7}],"outputs":[{"name":"vae","type":"VAE","links":[6,54],"slot_index":0}],"title":"VAE","properties":{"Node name for S&R":"HyVideoVAELoader"},"widgets_values":["hunyuan_video_vae_bf16.safetensors","bf16"]},{"id":63,"type":"Label (rgthree)","pos":[1170,-270],"size":[336.1572265625,25],"flags":{"allow_interaction":true},"order":3,"mode":0,"inputs":[],"outputs":[],"title":"SKIP FIRST JUNKY FRAMES","properties":{"fontSize":25,"fontFamily":"Arial","fontColor":"#ffffff","textAlign":"left","backgroundColor":"transparent","padding":0,"borderRadius":0},"color":"#fff0","bgcolor":"#fff0"},{"id":50,"type":"Note","pos":[-1010,80],"size":[557.0466918945312,58],"flags":{},"order":4,"mode":0,"inputs":[],"outputs":[],"title":"GET THE LORA 
HERE:","properties":{},"widgets_values":["https://huggingface.co/leapfusion-image2vid-test/image2vid-512x320/blob/main/img2vid.safetensors"],"color":"#432","bgcolor":"#653"},{"id":45,"type":"ImageResizeKJ","pos":[-620,190],"size":[315,266],"flags":{},"order":8,"mode":0,"inputs":[{"name":"image","type":"IMAGE","link":81},{"name":"get_image_size","type":"IMAGE","link":null,"shape":7},{"name":"width_input","type":"INT","link":null,"widget":{"name":"width_input"},"shape":7},{"name":"height_input","type":"INT","link":null,"widget":{"name":"height_input"},"shape":7}],"outputs":[{"name":"IMAGE","type":"IMAGE","links":[63,108],"slot_index":0},{"name":"width","type":"INT","links":[69],"slot_index":1},{"name":"height","type":"INT","links":[70],"slot_index":2}],"properties":{"Node name for S&R":"ImageResizeKJ"},"widgets_values":[320,480,"lanczos",false,2,0,0,"center"]},{"id":3,"type":"HyVideoSampler","pos":[210,-90],"size":[310,670],"flags":{},"order":14,"mode":0,"inputs":[{"name":"model","type":"HYVIDEOMODEL","link":2},{"name":"hyvid_embeds","type":"HYVIDEMBEDS","link":74},{"name":"samples","type":"LATENT","link":64,"shape":7},{"name":"stg_args","type":"STGARGS","link":null,"shape":7},{"name":"context_options","type":"HYVIDCONTEXT","link":null,"shape":7},{"name":"feta_args","type":"FETAARGS","link":null,"shape":7},{"name":"teacache_args","type":"TEACACHEARGS","link":null,"shape":7},{"name":"width","type":"INT","link":69,"widget":{"name":"width"}},{"name":"height","type":"INT","link":70,"widget":{"name":"height"}},{"name":"num_frames","type":"INT","link":89,"widget":{"name":"num_frames"}}],"outputs":[{"name":"samples","type":"LATENT","links":[4],"slot_index":0}],"properties":{"Node name for S&R":"HyVideoSampler"},"widgets_values":[512,320,97,9,6,6,108,"increment",1,1,"FlowMatchDiscreteScheduler"]},{"id":80,"type":"ImageConcatMulti","pos":[1790,-70],"size":[210,150],"flags":{},"order":19,"mode":0,"inputs":[{"name":"image_1","type":"IMAGE","link":108},{"name":"image_2","type":"IMAGE","link":109}],"outputs":[{"name":"images","type":"IMAGE","links":[105],"slot_index":0}],"properties":{},"widgets_values":[2,"right",false,null]},{"id":60,"type":"ImageFromBatch+","pos":[1170,-200],"size":[315,82],"flags":{},"order":17,"mode":0,"inputs":[{"name":"image","type":"IMAGE","link":87},{"name":"length","type":"INT","link":86,"widget":{"name":"length"}}],"outputs":[{"name":"IMAGE","type":"IMAGE","links":[88,109],"slot_index":0}],"properties":{"Node name for S&R":"ImageFromBatch+"},"widgets_values":[6,-1],"color":"#c7146b","bgcolor":"#b30057"},{"id":30,"type":"HyVideoTextEncode","pos":[-280,630],"size":[810,310],"flags":{},"order":9,"mode":0,"inputs":[{"name":"text_encoders","type":"HYVIDTEXTENCODER","link":35},{"name":"custom_prompt_template","type":"PROMPT_TEMPLATE","link":null,"shape":7},{"name":"clip_l","type":"CLIP","link":null,"shape":7},{"name":"hyvid_cfg","type":"HYVID_CFG","link":null,"shape":7}],"outputs":[{"name":"hyvid_embeds","type":"HYVIDEMBEDS","links":[74],"slot_index":0}],"properties":{"Node name for S&R":"HyVideoTextEncode"},"widgets_values":["a woman surrounded by blue butterflies","bad quality 
video","video"],"color":"#30352b","bgcolor":"#1c2117"},{"id":81,"type":"VHS_VideoCombine","pos":[2020,-70],"size":[580.7774658203125,334],"flags":{},"order":20,"mode":0,"inputs":[{"name":"images","type":"IMAGE","link":105},{"name":"audio","type":"AUDIO","link":null,"shape":7},{"name":"meta_batch","type":"VHS_BatchManager","link":null,"shape":7},{"name":"vae","type":"VAE","link":null,"shape":7}],"outputs":[{"name":"Filenames","type":"VHS_FILENAMES","links":null}],"title":"combined","properties":{"Node name for S&R":"VHS_VideoCombine"},"widgets_values":{"frame_rate":24,"loop_count":0,"filename_prefix":"HunyuanVideo_leapfusion_I2V","format":"video/h264-mp4","pix_fmt":"yuv420p","crf":19,"save_metadata":true,"trim_to_audio":false,"pingpong":false,"save_output":true,"videopreview":{"hidden":false,"paused":false,"params":{"filename":"HunyuanVideo_leapfusion_I2V_00005.mp4","subfolder":"","type":"output","format":"video/h264-mp4","frame_rate":24,"workflow":"HunyuanVideo_leapfusion_I2V_00005.png","fullpath":"F:\\Comfy2\\ComfyUI\\output\\HunyuanVideo_leapfusion_I2V_00005.mp4"},"muted":false}}},{"id":62,"type":"VHS_VideoCombine","pos":[1170,-70],"size":[580.7774658203125,334],"flags":{},"order":18,"mode":0,"inputs":[{"name":"images","type":"IMAGE","link":88},{"name":"audio","type":"AUDIO","link":null,"shape":7},{"name":"meta_batch","type":"VHS_BatchManager","link":null,"shape":7},{"name":"vae","type":"VAE","link":null,"shape":7}],"outputs":[{"name":"Filenames","type":"VHS_FILENAMES","links":null}],"title":"WITH SKIPPED FRAMES","properties":{"Node name for S&R":"VHS_VideoCombine"},"widgets_values":{"frame_rate":24,"loop_count":0,"filename_prefix":"HUN-K-I2V-LeapFusion","format":"video/h264-mp4","pix_fmt":"yuv420p","crf":13,"save_metadata":true,"trim_to_audio":false,"pingpong":false,"save_output":false,"videopreview":{"hidden":false,"paused":false,"params":{"filename":"HUN-I2V_00010.mp4","subfolder":"","type":"temp","format":"video/h264-mp4","frame_rate":24,"workflow":"HUN-I2V_00010.png","fullpath":"F:\\Comfy2\\ComfyUI\\temp\\HUN-I2V_00010.mp4"},"muted":false}}},{"id":34,"type":"VHS_VideoCombine","pos":[560,-70],"size":[580.7774658203125,334],"flags":{},"order":16,"mode":0,"inputs":[{"name":"images","type":"IMAGE","link":83},{"name":"audio","type":"AUDIO","link":null,"shape":7},{"name":"meta_batch","type":"VHS_BatchManager","link":null,"shape":7},{"name":"vae","type":"VAE","link":null,"shape":7}],"outputs":[{"name":"Filenames","type":"VHS_FILENAMES","links":null}],"title":"RAW","properties":{"Node name for S&R":"VHS_VideoCombine"},"widgets_values":{"frame_rate":24,"loop_count":0,"filename_prefix":"HUN-K-I2V-LeapFusion","format":"video/h264-mp4","pix_fmt":"yuv420p","crf":13,"save_metadata":true,"trim_to_audio":false,"pingpong":false,"save_output":false,"videopreview":{"hidden":false,"paused":false,"params":{"filename":"HUN-I2V_00009.mp4","subfolder":"","type":"temp","format":"video/h264-mp4","frame_rate":24,"workflow":"HUN-I2V_00009.png","fullpath":"F:\\Comfy2\\ComfyUI\\temp\\HUN-I2V_00009.mp4"},"muted":false}}},{"id":56,"type":"HyVideoLoraSelect","pos":[-1010,-220],"size":[380,102],"flags":{},"order":10,"mode":0,"inputs":[{"name":"prev_lora","type":"HYVIDLORA","link":91,"shape":7},{"name":"blocks","type":"SELECTEDBLOCKS","link":null,"shape":7}],"outputs":[{"name":"lora","type":"HYVIDLORA","links":[82],"slot_index":0}],"properties":{"Node name for S&R":"HyVideoLoraSelect"},"widgets_values":["LORA Hunyuan 
video\\Img2vid\\hyvideo_FastVideo_LoRA-fp8.safetensors",0.7000000000000001]},{"id":41,"type":"HyVideoLoraSelect","pos":[-1010,-70],"size":[550,102],"flags":{},"order":12,"mode":0,"inputs":[{"name":"prev_lora","type":"HYVIDLORA","link":82,"shape":7},{"name":"blocks","type":"SELECTEDBLOCKS","link":null,"shape":7}],"outputs":[{"name":"lora","type":"HYVIDLORA","links":[51]}],"properties":{"Node name for S&R":"HyVideoLoraSelect"},"widgets_values":["LORA Hunyuan video\\Img2vid\\img2vid.safetensors",1],"color":"#432","bgcolor":"#653"},{"id":44,"type":"LoadImage","pos":[-1010,190],"size":[360,540],"flags":{},"order":5,"mode":0,"inputs":[],"outputs":[{"name":"IMAGE","type":"IMAGE","links":[81],"slot_index":0},{"name":"MASK","type":"MASK","links":null}],"properties":{"Node name for S&R":"LoadImage"},"widgets_values":["Hirunerunerune_1.png","image"],"color":"#223","bgcolor":"#335"},{"id":1,"type":"HyVideoModelLoader","pos":[-280,-90],"size":[420,242],"flags":{},"order":13,"mode":0,"inputs":[{"name":"compile_args","type":"COMPILEARGS","link":null,"shape":7},{"name":"block_swap_args","type":"BLOCKSWAPARGS","link":null,"shape":7},{"name":"lora","type":"HYVIDLORA","link":51,"shape":7}],"outputs":[{"name":"model","type":"HYVIDEOMODEL","links":[2],"slot_index":0}],"properties":{"Node name for S&R":"HyVideoModelLoader"},"widgets_values":["hunyuan_video_720_cfgdistill_fp8_e4m3fn.safetensors","bf16","fp8_e4m3fn","offload_device","sageattn_varlen",false,true],"color":"#223","bgcolor":"#335"},{"id":16,"type":"DownloadAndLoadHyVideoTextEncoder","pos":[-280,190],"size":[420,180],"flags":{},"order":6,"mode":0,"inputs":[],"outputs":[{"name":"hyvid_text_encoder","type":"HYVIDTEXTENCODER","links":[35]}],"properties":{"Node name for S&R":"DownloadAndLoadHyVideoTextEncoder"},"widgets_values":["xtuner/llava-llama-3-8b-v1_1-transformers","openai/clip-vit-large-patch14","fp16",false,2,"disabled"]}],"links":[[2,1,0,3,0,"HYVIDEOMODEL"],[4,3,0,5,1,"LATENT"],[6,7,0,5,0,"VAE"],[35,16,0,30,0,"HYVIDTEXTENCODER"],[51,41,0,1,2,"HYVIDLORA"],[54,7,0,43,0,"VAE"],[63,45,0,43,1,"IMAGE"],[64,43,0,3,2,"LATENT"],[69,45,1,3,7,"INT"],[70,45,2,3,8,"INT"],[74,30,0,3,1,"HYVIDEMBEDS"],[81,44,0,45,0,"IMAGE"],[82,56,0,41,0,"HYVIDLORA"],[83,5,0,34,0,"IMAGE"],[86,61,0,60,1,"INT"],[87,5,0,60,0,"IMAGE"],[88,60,0,62,0,"IMAGE"],[89,61,0,3,9,"INT"],[91,68,0,56,0,"HYVIDLORA"],[92,69,0,68,0,"HYVIDLORA"],[105,80,0,81,0,"IMAGE"],[108,45,0,80,0,"IMAGE"],[109,60,0,80,1,"IMAGE"]],"groups":[],"config":{},"extra":{"ds":{"scale":1.4122927695244514,"offset":[1406.8247228474982,283.0361351855706]},"node_versions":{"ComfyUI-HunyuanVideoWrapper":"88823b79b7e41377e4dccf0790719578e139bbf3","ComfyUI-mxToolkit":"3659749ab6b19ab4bc7b2ed144e3bcf92813fbf7","comfyui-kjnodes":"1.0.5","comfyui_essentials":"1.1.0","ComfyUI-VideoHelperSuite":"f24f4e10f448913eb8c0d8ce5ff6190a8be84454","comfy-core":"0.3.12"},"VHS_latentpreview":false,"VHS_latentpreviewrate":0,"controller_panel":{"controllers":{},"hidden":true,"highlight":true,"version":2,"default_order":[]},"ue_links":[]},"version":0.4}

    ```

    ## Additional Context

    (Please add any additional context or steps to reproduce the error here)

    choose llm_model from Kijai
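
    On the FileNotFoundError in the report above: the folder models\LLM\llava-llama-3-8b-text-encoder-tokenizer exists but the shard model-00001-of-00004.safetensors does not, which usually means an interrupted download. Deleting the half-downloaded folder and re-running the DownloadAndLoadHyVideoTextEncoder node lets it fetch everything again; below is a minimal sketch of doing the same by hand with huggingface_hub (the repo id is an assumption inferred from the folder name in the log, not confirmed here).

    ```
    # Sketch only: re-fetch the text encoder whose shard is missing above.
    # repo_id is assumed from the folder name in the log.
    from huggingface_hub import snapshot_download

    snapshot_download(
        repo_id="Kijai/llava-llama-3-8b-text-encoder-tokenizer",
        local_dir=r"D:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable"
                  r"\ComfyUI\models\LLM\llava-llama-3-8b-text-encoder-tokenizer",
    )
    ```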

    XT_404Jan 25, 2025

    @myprivacy27091991221 when I choose it I get exactly the same error message, whether with this LLaMA or the other

    XT_404Jan 25, 2025

    anomaly rectified

    half_realJan 25, 2025
    CivitAI

    OP, can you add a reminder to update the ComfyUI-HunyuanVideoWrapper extension to the description? I think it's the cause of most of the issues people are having.

    LatentDream
    Author
    Jan 25, 2025

    yes, I realized that and wrote it in 😁

    greentheoryJan 25, 2025

    I'm a smooth brain. Beyond "install missing custom nodes" and "Update all" in the ComfyUI Manager, what is required? Do we have to manually get stuff from GitHub or something? I also searched for Kijai and then hit "try update". So "Update all" doesn't actually update all?

    half_realJan 25, 2025

    @greentheory I think that should be fine? Can you run the workflow now?

    greentheoryJan 25, 2025

    @half_real No I can't, it's getting stuck on the VAE ComfyUI-HunyuanVideoWrapper node. I updated to 1.0.1

    Edit - I swapped some models out for the Kijai-specific ones, that seems to have fixed it

    0101ARTJan 25, 2025 · 1 reaction
    CivitAI

    Great! It's working!
    Thx bro

    greentheoryJan 25, 2025
    CivitAI

    Do I need to install SageAttention separately? On Windows it looks like an 8-step process for Triton?

    https://github.com/woct0rdho/triton-windows?tab=readme-ov-file

    Can't import SageAttention: No module named 'sageattention'

    mrbosstvJan 25, 2025

    I am also stuck on SageAttention, I'm not sure what to do here. I don't have sageattn_varlen

    greentheoryJan 25, 2025

    @mrbosstv I got it to work by switching the attention mode to comfy and using the exact cfgdistill model preset in the workflow instead of the one I had.

    marviskealan254Jan 25, 2025

    In your ComfyUI venv, run "pip install sageattention"

    futuristicFeb 11, 2025

    @marviskealan254 could you let me know which folder I need to run this command in please?
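
    With the portable build used in the logs above there is no venv to activate; the install has to target the embedded Python instead. A hedged sketch of marviskealan254's command for that layout, run from inside the ComfyUI_windows_portable folder:

    ```
    python_embeded\python.exe -m pip install sageattention
    ```

    Note that SageAttention also needs Triton, which is what the triton-windows guide linked above walks through.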

    midiaplaayJan 25, 2025 · 1 reaction
    CivitAI

    I can tell it's more consistent than LTX. A good advance.

    3427221Jan 25, 2025 · 1 reaction
    CivitAI

    As always on my side, with the Kijai Hunyuan wrapper I get this error (not the full log, but imagine 3 pages of this):

    HyVideoVAELoader

    Error(s) in loading state_dict for AutoencoderKLCausal3D: Missing key(s) in state_dict: "encoder.down_blocks.0.resnets.0.norm1.weight", "encoder.down_blocks.0.resnets.0.norm1.bias", "encoder.down_blocks.0.resnets.0.conv1.conv.weight", "encoder.down_blocks.0.resnets.0.conv1.conv.bias", "encoder.down_blocks.0.resnets.0.norm2.weight", "encoder.down_blocks.0.resnets.0.norm2.bias", "encoder.down_blocks.0.resnets.0.conv2.conv.weight", "encoder.down_blocks.0.resnets.0.conv2.conv.bias", "encoder.down_blocks.0.resnets.1.norm1.weight", "encoder.down_blocks.0.resnets.1.norm1.bias", "encoder.down_blocks.0.resnets.1.conv1.conv.weight", "encoder.down_blocks.0.resnets.1.conv1.conv.bias", "encoder.down_blocks.0.resnets.1.norm2.weight", "encoder.down_blocks.0.resnets.1.norm2.bias"

    MatloJan 26, 2025 · 3 reactions

    Hi, I had the same issue and I found out it's because of the VAE. Download the VAE from Kijai's repo: https://huggingface.co/Kijai/HunyuanVideo_comfy/tree/main (and make sure you are replacing the VAE in the comfyui VAE model folder)
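
    A hedged sketch of fetching that VAE programmatically, assuming the filename from the pasted workflow (hunyuan_video_vae_bf16.safetensors) and the default portable models folder:

    ```
    # Sketch only: download Kijai's repacked VAE into the ComfyUI vae folder,
    # overwriting the same-named file that triggers the missing-keys error.
    from huggingface_hub import hf_hub_download

    hf_hub_download(
        repo_id="Kijai/HunyuanVideo_comfy",
        filename="hunyuan_video_vae_bf16.safetensors",
        local_dir=r"D:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\models\vae",
    )
    ```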

    3427221Jan 26, 2025 · 1 reaction

    @Matlo I can't believe it! Worked :) It had the same name and size as the one I already had, so I didn't even imagine it was different.. thanks for the tip man, this should be on every workflow that uses the kijai wrapper as a warning in the description.

    TurboCoomerJan 26, 2025 · 2 reactions

    @Matlo thanks bro, I'd never have figured it out myself since I had this file already

    MatloJan 26, 2025 · 2 reactions

    @NoArtifact @TurboCoomer 💕🙏

    CyphermodxJan 27, 2025 · 1 reaction

    Oh my goodness, I was getting frustrated because I was SO sure I was using the right vae. But just to put my ego aside I downloaded the same sized one from the link and it worked immediately. DO NOT BE LIKE ME, even if you are pretty sure, just download the linked vae and try again.

    3427221Jan 27, 2025 · 1 reaction

    @Cyphermodx yeah, and rename it with kijai_ in front so you don't confuse them, in case the other one has its own particular use as well

    greentheoryJan 25, 2025
    CivitAI

    It's good at bringing a still image to life accurately for 1 second, but when movement kicks in it goes crazy. Any way to extend this minimal movement for longer?

    LatentDream
    Author
    Jan 26, 2025

    raise resolution, it will be more consistent

    magnusfolk314971Jan 25, 2025
    CivitAI

    great, thank you

    what parameters should I tweak to increase quality? I tried increasing steps, for example; it's better

    LatentDream
    Author
    Jan 26, 2025

    yeah, steps and resolution. higher res also gives more consistency

    iodrg244Jan 25, 2025
    CivitAI

    Thanks! It's working great! Question about the seed. I know the seed must be set to "increment after generate" for the video, but is there a way to randomize the seed pre-generate? Otherwise the seed just changes by +1 each time you generate a new video unless manually changed in the HunyuanVideoSampler. A follow-up question: does it matter that much? Does changing the starting seed by just one digit make the same difference as a completely different random seed?

    LatentDream
    Author
    Jan 26, 2025 · 1 reaction

    +1 and random are the same thing.
    the advantage is that if you like the seed you can roll back by -1 and use it again.

    jm112368767Jan 28, 2025

    LatentDream is correct... and also, per OP's question: yes, to a computer a seed of 1234 vs 1235 is just as different as 1234 vs 23452345234523452345324

    iodrg244Jan 25, 2025
    CivitAI

    How do you adjust the frames with your workflow? Exactly 201 frames will always produce a looping (seamless) video.

    LatentDream
    Author
    Jan 26, 2025

    yeah, that's what I discovered a month ago and wrote in my article (https://civitai.com/articles/9584). There's a giant pink slider called FRAMES right on top of this little workflow.

    iodrg244Jan 26, 2025 · 1 reaction

    @LatentDream hehe, I saw the node. : ) I probably don't have the custom node that shows the actual slider. You don't get a "missing nodes" type message. It just shows as a blank red space to me. I was able to edit the properties of the frames node to enter the frames I wanted. Wow, I didn't know you were the original discoverer of the 201 frames. Thanks! For these short videos, seamless loops make a big difference to me.
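
    As an aside on frame counts: the FRAMES slider in the pasted workflow snaps in steps of 4 from 1 to 301 because HunyuanVideo only accepts lengths of the form 4n+1 (its VAE compresses time 4x). A quick sanity check, as a sketch:

    ```
    # Frame counts matching the workflow's slider (min 1, max 301, step 4).
    valid = [f for f in range(1, 302) if (f - 1) % 4 == 0]
    assert 97 in valid and 201 in valid  # 97 = workflow default, 201 = seamless loop
    print(len(valid), valid[-3:])        # 76 [293, 297, 301]
    ```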

    iodrg244Jan 25, 2025 · 3 reactions
    CivitAI

    There's something going on with LORAS and Kijai nodes right now. https://github.com/kijai/ComfyUI-HunyuanVideoWrapper/issues/303 I have to restart comfyui to fix it. I've gotten a couple amazing results from this but it's a mess right now with this lora caching thing going on. One fix I've found is adding a second lora, any hunyuan video lora, and it works again... basically keep switching a 2nd lora at low strength and it works without having to restart. Switch to a different lora (or maybe back and forth) each time you generate and it might fix your issue (if you are getting like 1 second of good video and then it goes off the rails on you).

    iodrg244Jan 26, 2025 · 2 reactions

    An easier fix for this, without adding a LORA, is to just adjust the img2vid LORA back and forth between 1.0000 and .9999 each time you do a new generation. It seems that as long as it has to recalculate the LORA strength, it works. This is a possible fix if you have a video that shows 1 second of your image and then goes off the rails completely (as if there were no LORA).

    LatentDream
    Author
    Jan 26, 2025

    @iodrg244 interesting.. I need to try this

    TurboCoomerJan 26, 2025

    I have the same issue but changing lora weights does not fix it :(

    iodrg244Jan 26, 2025 · 1 reaction

    @TurboCoomer So strange. It's not 100%, but I just tried 5 generations adjusting the img2vid weight in this order: .9999 success, 1.0 success, 1.0001 failed (1 sec then off the rails), 1.0 success, .9999 success. 4 out of five (I was using incremental seeds each generation). It's either really good or as if the LORA is unloaded after 1 second. If you are getting 1 good result then all bad, you might try the "unload models" and "free model and nodes cache" buttons in ComfyUI (the icons look like a wired mouse controller on the menu bar to me). That might work but it's a lot of loading/unloading time each generation. It appears they are aware of the issue and a commit for a ComfyUI update is in the works according to the linked thread. I'm using a 4090 in case that factors into anything.

    radiantResistorJan 26, 2025 · 5 reactions
    CivitAI

    I'm getting the following error on the VAE node:

    "HyVideoVAELoader

    Error(s) in loading state_dict for AutoencoderKLCausal3D: Missing key(s) in state_dict: "encoder.down_blocks.0.resnets.0.norm1.weight",

    ..."

    ...followed by several hundred more entries. I'm using the same VAE as in the downloaded workflow. Maybe I need a different VAE version?

    LatentDream
    Author
    Jan 26, 2025

    are wrapper nodes updated?

    LatentDream
    Author
    Jan 26, 2025 · 8 reactions

    try downloading the VAE from Kijai's repo: https://huggingface.co/Kijai/HunyuanVideo_comfy/tree/main (and make sure you replace the VAE in the comfyui VAE model folder)

    uTestJan 26, 2025
    CivitAI

    Getting the following error: DownloadAndLoadHyVideoTextEncoder

    Trying to set a tensor of shape torch.Size([128256, 4096]) in "weight" (which has shape torch.Size([128320, 4096])), this looks incorrect.

    LatentDream
    Author
    Jan 29, 2025 · 1 reaction
    Workflows
    Hunyuan Video

    Details

    Downloads: 9,797
    Platform: CivitAI
    Platform Status: Available
    Created: 1/25/2025
    Updated: 5/12/2026
    Deleted: -