
ImportError: No module named 'quantized_ops' #1

Open
MrLinNing opened this issue Mar 19, 2018 · 5 comments

@MrLinNing

Sorry, I ran your code, but it fails with this:

(my_env) luhang@intelligence:~/meqnn/QuantizedNeuralNetworks-Keras-Tensorflow$ python train.py config_CIFAR-10 -o lr=0.01 wbits=4 abits=4 network_type='full-qnn'
/home/luhang/my_env/lib/python3.5/site-packages/h5py/__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
  from ._conv import register_converters as _register_converters
Using TensorFlow backend.
Traceback (most recent call last):
  File "train.py", line 8, in <module>
    from models.model_factory import build_model
  File "/home/luhang/meqnn/QuantizedNeuralNetworks-Keras-Tensorflow/models/model_factory.py", line 8, in <module>
    from layers.quantized_layers import QuantizedConv2D,QuantizedDense
  File "/home/luhang/meqnn/QuantizedNeuralNetworks-Keras-Tensorflow/layers/quantized_layers.py", line 10, in <module>
    from quantized_ops import quantize, clip_through
ImportError: No module named 'quantized_ops'
@BertMoons (Owner)

Hi,

  • Make sure the quantized_ops.py file exists under ./layers/
  • Your command is not quite right, which may be the cause. Run either:
    python train.py -c config_CIFAR-10 -o lr=0.01 wbits=4 abits=4 network_type='full-qnn', or:
    ./train.sh config_CIFAR-10 -o lr=0.01 wbits=4 abits=4 network_type='full-qnn'

Hopefully this works, good luck!

Bert

@MrLinNing (Author)

MrLinNing commented Mar 19, 2018

But I am sure the quantized_ops.py file exists under ./layers/:
[screenshot of the directory listing]
Most importantly, I am using Python 3.5. How can I run the code?

@xmicroshell

Hi LinNing,
This is a common error, and I don't think it has anything to do with the Python version.
You should replace every occurrence of:
from quantized_ops import quantize
with:
from layers.quantized_ops import quantize
I ran into other similar import errors; the same change resolves them.
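The fix above comes down to Python 3 removing implicit relative imports: a module inside the layers package can no longer import a sibling with a bare `import quantized_ops`. A minimal, self-contained reproduction, using a temporary stand-in for the repo layout (the quantize body here is just a placeholder, not the repo's real code):

```python
# Reproduces the ImportError, assuming a layout like ./layers/quantized_ops.py.
# Python 3 removed implicit relative imports, so code run from the repo root
# must use the package-qualified path layers.quantized_ops.
import os
import sys
import tempfile

root = tempfile.mkdtemp()
layers_dir = os.path.join(root, "layers")
os.makedirs(layers_dir)
open(os.path.join(layers_dir, "__init__.py"), "w").close()
with open(os.path.join(layers_dir, "quantized_ops.py"), "w") as f:
    f.write("def quantize(x):\n    return round(x)\n")  # placeholder body

sys.path.insert(0, root)  # emulate running train.py from the repo root

try:
    import quantized_ops  # Python 2 style: fails under Python 3
    top_level_ok = True
except ImportError:
    top_level_ok = False

from layers.quantized_ops import quantize  # package-qualified: works

print("top-level import worked:", top_level_ok)  # False
print(quantize(0.7))                             # 1
```

The same pattern applies to every failing import in the repo: prefix the module name with the package it lives in.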

@MrLinNing (Author)

MrLinNing commented Mar 26, 2018

Thank you, BertMoons and xmicroshell!
I want to deploy the trained weights and network to my embedded device. How can I see the trained weight values?

@xmicroshell

If you want to use the model on your device, I think model.save_weights(filepath) could help.
If you want to inspect the weights yourself, maybe TensorBoard can help you (though I haven't used it for that).
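model.save_weights(filepath) writes an HDF5 file, which you can open directly with h5py (already installed in your environment, per the traceback) to look at the raw arrays. A hedged sketch, using a hand-built stand-in file in place of a real Keras model so it runs on its own; the group and dataset names here are illustrative, not the exact names Keras writes:

```python
# Inspect weight arrays stored in an HDF5 file, as produced by
# model.save_weights("weights.h5"). We build a stand-in file with a
# similar layout (layer group -> weight datasets) so the sketch is
# self-contained; with a real file, only dump_weights() is needed.
import numpy as np
import h5py

with h5py.File("weights_demo.h5", "w") as f:
    g = f.create_group("conv1")                              # illustrative layer name
    g.create_dataset("kernel:0", data=np.random.randn(3, 3, 1, 8))
    g.create_dataset("bias:0", data=np.zeros(8))

def dump_weights(path):
    """Walk the HDF5 file and collect every dataset (weight array)."""
    arrays = {}
    with h5py.File(path, "r") as f:
        def visit(name, obj):
            if isinstance(obj, h5py.Dataset):
                arrays[name] = obj[()]                       # read as numpy array
                print(name, obj.shape)
        f.visititems(visit)
    return arrays

w = dump_weights("weights_demo.h5")
```

Once loaded this way, the numpy arrays can be exported to whatever format your embedded toolchain expects.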
