
When we try to quantize the model, things go awry #6

Open
WenWeiZhao opened this issue Oct 20, 2022 · 1 comment
Comments

@WenWeiZhao

No description provided.

@rohansaw

@WenWeiZhao did you find any insights on this, or even fix it?

In the original paper the authors report the inference speed on an Edge TPU (quantized tflite), so I would expect that there should not be strong degradation of the results; however, no information is provided on this.
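For reference, running on an Edge TPU requires full-integer post-training quantization with a representative dataset for calibration. Below is a minimal sketch of that conversion path using the standard `tf.lite.TFLiteConverter` API; the tiny Keras model and the random calibration data are stand-ins for this repository's actual model and training samples, which I don't have details on.

```python
import numpy as np
import tensorflow as tf

# Stand-in model; the repository's trained model would be loaded here instead.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(8,)),
    tf.keras.layers.Dense(4, activation="relu"),
])

def representative_data():
    # Calibration samples; in practice, draw these from real training inputs
    # so the estimated activation ranges match deployment conditions.
    for _ in range(100):
        yield [np.random.rand(1, 8).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data
# Restrict to int8 ops and int8 I/O, as the Edge TPU compiler expects.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

tflite_model = converter.convert()  # serialized flatbuffer bytes
```

If accuracy degrades noticeably after this step, a common first check is whether the representative dataset actually covers the input distribution; a poor calibration set is a frequent cause of quantization error.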
