This repository has been archived by the owner on Feb 7, 2023. It is now read-only.

Tensor initializers aren't getting converted to constants #578

Open
s1ddok opened this issue Jun 23, 2020 · 3 comments
Labels
bug Unexpected behaviour that should be corrected (type)

Comments

@s1ddok

s1ddok commented Jun 23, 2020

Even though the ONNX model contains the needed input as a tensor initializer:

[Screenshot: ONNX graph showing the input present as a tensor initializer]

The model yields an error while converting:

Underlying exception message was: Error compiling model: "Error reading protobuf spec. validator error: Layer 'Pad_0' consumes an input named 'initial_block' which is not present in this network.".
  RuntimeWarning)

Is there a way to support constant inputs to layers?

s1ddok added the bug label on Jun 23, 2020
@bhushan23 (Collaborator)

@s1ddok you can call load_input_constants before converting the node! See here.
Which node is this? We might not be calling load_input_constants for this operator.
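
For illustration, a minimal sketch of that workaround, assuming the per-op converter signature `(builder, node, graph, err)` used elsewhere in onnx-coreml and that `load_input_constants` is importable from `_operators_nd.py`; the Pad converter shown here is named after the `Pad_0` layer in the error above and is not the actual source:

```python
# Minimal sketch, not the real onnx-coreml implementation: assumes the
# per-op converter signature (builder, node, graph, err) and that
# load_input_constants can be imported from _operators_nd.
from onnx_coreml._operators_nd import load_input_constants

def _convert_pad(builder, node, graph, err):
    # Emit Core ML load-constant layers for any tensor initializers consumed
    # by this node, so inputs like 'initial_block' exist in the network
    # before the Pad layer that references them is added.
    load_input_constants(builder, node, graph, err)
    # ... the existing Pad conversion logic would follow here ...
```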

@bhushan23 (Collaborator)

Also, you can give the new PyTorch converter a try: https://coremltools.readme.io/docs/pytorch-conversion
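
For example, a rough sketch of that path, where the toy model and input shape are placeholders rather than the actual model from this issue:

```python
# Rough sketch of the unified coremltools converter (coremltools >= 4.0);
# the toy model and input shape below are placeholders, not from this issue.
import torch
import coremltools as ct

# Toy model: constant padding followed by a convolution, mirroring the
# Pad -> Conv pattern mentioned in this thread.
model = torch.nn.Sequential(
    torch.nn.ConstantPad2d(1, 0.0),
    torch.nn.Conv2d(3, 16, kernel_size=3),
).eval()

example_input = torch.rand(1, 3, 224, 224)
traced = torch.jit.trace(model, example_input)

# Convert the traced TorchScript model directly to Core ML.
mlmodel = ct.convert(
    traced,
    inputs=[ct.TensorType(name="input", shape=(1, 3, 224, 224))],
)
mlmodel.save("padded_conv.mlmodel")
```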

@s1ddok (Author)

s1ddok commented Jul 6, 2020

@bhushan23 it happens with a constant input to a padding op followed by a conv. I manually patched the converter to load that constant.
