ERROR: Didn't find op for builtin opcode 'QUANTIZE' version '3' #879
Labels
comp:compiler (Compiler related issues)
comp:model (Model related issues)
comp:thirdparty (Third-party related issues)
Hardware:USB Accelerator (Coral USB Accelerator issues)
subtype:ubuntu/linux (Ubuntu/Linux build/installation issues)
type:support (Support question or issue)
Description
I am trying to convert a model that uses VGG-16 as its base, with GlobalAveragePooling2D, Dropout, and a classifier head on top, to TensorFlow Lite after doing quantization-aware training, but I am getting the error shown in the title.
I currently train the model and save it as .h5, then load its checkpoint weights into a new model for quantization. However, when trying to apply quantization-aware training I get the error mentioned above. The QAT step is set up roughly as follows:
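A minimal sketch of that setup, assuming the standard tensorflow_model_optimization API; the input shape, class count, and checkpoint path are placeholders, and the VGG-16 graph is built flat because quantize_model does not accept a Keras model nested inside another model:

```python
import tensorflow as tf
import tensorflow_model_optimization as tfmot

NUM_CLASSES = 10             # placeholder
INPUT_SHAPE = (224, 224, 3)  # placeholder

# Build the graph flat (VGG-16 layers inlined via input_tensor) so tfmot can
# annotate every layer individually.
inputs = tf.keras.Input(shape=INPUT_SHAPE)
base = tf.keras.applications.VGG16(include_top=False, weights=None,
                                   input_tensor=inputs)
x = tf.keras.layers.GlobalAveragePooling2D()(base.output)
x = tf.keras.layers.Dropout(0.5)(x)
outputs = tf.keras.layers.Dense(NUM_CLASSES, activation="softmax")(x)
model = tf.keras.Model(inputs, outputs)

# Restore the weights saved during the earlier .h5 training run (placeholder path).
model.load_weights("checkpoints/model.h5")

# Insert fake-quantization nodes for quantization-aware training.
qat_model = tfmot.quantization.keras.quantize_model(model)
qat_model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])

# Fine-tune briefly so the quantization ranges settle, e.g.:
# qat_model.fit(train_ds, validation_data=val_ds, epochs=3)
```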
The Edge TPU Compiler version is 16.0.384591198. The TensorFlow version is 2.14 and the TensorFlow Lite version is 2.14.0.
I am running TensorFlow on WSL2 and invoking edgetpu_compiler /mnt/e/models/quantized_model.tflite to compile the quantized TFLite model for the Edge TPU; however, the compilation fails with the error shown in the title.
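For context, the .tflite handed to edgetpu_compiler has to be fully integer-quantized; a conversion step along the lines of the sketch below (using the qat_model from above) is assumed to have produced quantized_model.tflite, with the calibration generator and paths as placeholders:

```python
import numpy as np
import tensorflow as tf

def representative_dataset():
    # Placeholder calibration samples; real preprocessed training images
    # should be yielded here (optional for a QAT model, but harmless).
    for _ in range(100):
        yield [np.random.rand(1, 224, 224, 3).astype(np.float32)]

# qat_model: the quantization-aware-trained Keras model from the sketch above.
converter = tf.lite.TFLiteConverter.from_keras_model(qat_model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset

# The Edge TPU compiler only accepts models in which every op is integer-quantized,
# so restrict the converter to the int8 builtin op set.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

tflite_model = converter.convert()
with open("quantized_model.tflite", "wb") as f:
    f.write(tflite_model)
```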
Why am I encountering this error during Edge TPU compilation? How can I ensure compatibility between the TFLite model and the Edge TPU Compiler? Any insights or suggestions would be greatly appreciated. Thank you!
Issue Type
Support
Operating System
Linux
Coral Device
USB Accelerator
Other Devices
Raspberry Pi 4
Programming Language
Python 3.9
Relevant Log Output