
"LayerNormKernelImpl" not implemented for 'Half' #54

Open
zz1625 opened this issue Jan 25, 2024 · 1 comment

Comments


zz1625 commented Jan 25, 2024

Error occurred when executing NEW_PhotoMaker_Generation:

"LayerNormKernelImpl" not implemented for 'Half'

File "D:\Ai\SD-N\ComfyUI\execution.py", line 155, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Ai\SD-N\ComfyUI\execution.py", line 85, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Ai\SD-N\ComfyUI\execution.py", line 78, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Ai\SD-N\ComfyUI\custom_nodes\ComfyUI-PhotoMaker\PhotoMakerNode.py", line 396, in generate_image
output = pipe(
^^^^^
File "D:\Ai\SD-N\ComfyUI.ext\Lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "D:\Ai\SD-N\ComfyUI\custom_nodes\ComfyUI-PhotoMaker\pipeline.py", line 336, in call
) = self.encode_prompt_with_trigger_word(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Ai\SD-N\ComfyUI\custom_nodes\ComfyUI-PhotoMaker\pipeline.py", line 202, in encode_prompt_with_trigger_word
prompt_embeds = text_encoder(
^^^^^^^^^^^^^
File "D:\Ai\SD-N\ComfyUI.ext\Lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Ai\SD-N\ComfyUI.ext\Lib\site-packages\transformers\models\clip\modeling_clip.py", line 798, in forward
return self.text_model(
^^^^^^^^^^^^^^^^
File "D:\Ai\SD-N\ComfyUI.ext\Lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Ai\SD-N\ComfyUI.ext\Lib\site-packages\transformers\models\clip\modeling_clip.py", line 703, in forward
encoder_outputs = self.encoder(
^^^^^^^^^^^^^
File "D:\Ai\SD-N\ComfyUI.ext\Lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Ai\SD-N\ComfyUI.ext\Lib\site-packages\transformers\models\clip\modeling_clip.py", line 630, in forward
layer_outputs = encoder_layer(
^^^^^^^^^^^^^^
File "D:\Ai\SD-N\ComfyUI.ext\Lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Ai\SD-N\ComfyUI.ext\Lib\site-packages\transformers\models\clip\modeling_clip.py", line 371, in forward
hidden_states = self.layer_norm1(hidden_states)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Ai\SD-N\ComfyUI.ext\Lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Ai\SD-N\ComfyUI.ext\Lib\site-packages\torch\nn\modules\normalization.py", line 190, in forward
return F.layer_norm(
^^^^^^^^^^^^^
File "D:\Ai\SD-N\ComfyUI.ext\Lib\site-packages\torch\nn\functional.py", line 2515, in layer_norm
return torch.layer_norm(input, normalized_shape, weight, bias, eps, torch.backends.cudnn.enabled)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
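For context, the failure reproduces with a few lines of plain PyTorch: in builds without a CPU float16 LayerNorm kernel (the `module.py` line numbers in this traceback match the 2.0.x series), any half-precision LayerNorm call on a CPU tensor raises exactly this error. A minimal sketch:

```python
import torch

# Minimal reproduction (assumes a PyTorch build without a CPU Half kernel
# for LayerNorm, as in the traceback above).
ln = torch.nn.LayerNorm(4).half()            # fp16 weights on CPU
x = torch.randn(1, 4, dtype=torch.float16)   # fp16 input on CPU
ln(x)  # RuntimeError: "LayerNormKernelImpl" not implemented for 'Half'

# The same call succeeds on a CUDA device, where a Half kernel exists:
# ln.cuda()(x.cuda())
```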


eliganim commented Mar 2, 2024

I believe this happens when you're running on CPU instead of GPU.
Try adding --no-half to the parameters. If that doesn't work, edit PhotoMakerNode.py and change all occurrences of fp16 to fp32 and all float16 to float32.
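(Note that --no-half is an Automatic1111 launch flag; ComfyUI's equivalent is --force-fp32.) The idea behind the fp16 → fp32 edit is simply to pick the dtype based on the device instead of hard-coding half precision. A rough sketch of that guard, with illustrative names rather than the node's actual code:

```python
import torch

# Illustrative dtype guard (not the exact PhotoMakerNode.py code): use fp16
# only when CUDA is available; fall back to fp32 on CPU, where LayerNorm
# has no Half kernel in this PyTorch build.
if torch.cuda.is_available():
    device, dtype = torch.device("cuda"), torch.float16
else:
    device, dtype = torch.device("cpu"), torch.float32

# pipe = ...                             # however the node builds its pipeline
# pipe.to(device=device, dtype=dtype)    # move weights to the chosen precision
```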
