Installation of the torch package to isolated python environment #503
As you noted, passing in a `requirements.lock` file causes dependencies to be fetched when the model is loaded, but there are a few other things worth covering here.
If you're running multiple python models without OPE, things can get really tricky (even if you're not using `requirements`). It's possible that this can cause conflicts on its own (e.g. one model changing global state that another depends on), but it gets much more complex when requirements are introduced. If one model depends on one version of a package and another model depends on a different version, they can't both be loaded into the same process. Additionally, conflicts between transitive native dependencies of models (e.g. different versions of a shared native library) can cause failures that are hard to debug.

Running with OPE solves these issues because every model runs in its own isolated environment, independent of your application and independent of every other model.
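To make the version-conflict point concrete, here's a small illustration (plain python, not neuropod-specific; it assumes torch happens to be installed) of why a single process can only ever hold one version of a given package:

```python
import sys

# Python caches imported modules by name in sys.modules, so one
# process can only ever hold a single version of a given package.
import torch  # suppose "model A" pulled this in first

print(sys.modules["torch"].__version__)  # whichever version loaded first

# A later import (e.g. on behalf of "model B") is a no-op that
# returns the cached module, not the version model B was built with.
import torch as torch_for_model_b
assert torch_for_model_b is sys.modules["torch"]
```

OPE sidesteps this entirely because each model gets its own process and therefore its own `sys.modules`.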
Dependencies are only downloaded and unpacked the first time they're used, and are then immediately available on subsequent model requests for the same dependency. One thing to note is that the packages aren't actually installed; the unpacked packages are added to the python path (`sys.path`) when a model is loaded.

If you control the runtime environment, you can preload dependencies during your environment build process by running a placeholder model that depends on everything you want to preload:

```python
create_python_neuropod(
    ...
    requirements="""
    torch==1.8.0
    """
    ...
)
```
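As a sketch of that build step: loading the placeholder model once during the environment build is enough to populate the cache. The path below is hypothetical, and the loader import shown is the one exposed by the neuropod python package:

```python
# A minimal sketch: run once at environment-build time so the first
# real model request doesn't pay the download cost.
from neuropod.loader import load_neuropod

# Loading the placeholder triggers the download/unpack of
# torch==1.8.0 into the dependency cache; no inference is needed.
# "/models/preload_placeholder" is a hypothetical path.
model = load_neuropod("/models/preload_placeholder")
```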
And as long as you persist the cache folder, this download only needs to happen once per environment build.

**Final notes**

If you can ensure that all the models you run won't have conflicting dependencies (or transitive dependencies), you might be able to get away with not using OPE, but that doesn't seem particularly robust.

Also, if you want specific packages to be available to all of your models (without the models specifying them in their requirements), see:

`neuropod/source/neuropod/backends/python_bridge/_neuropod_native_bootstrap/pip_utils.py` (lines 82 to 88 at commit fa32117)
This triggers a download/unpack at runtime on the first run of any python model, but the difference here is that those packages are available to all models regardless of whether they're specified in a model's requirements.

Let me know if you have any other questions!
Hi Vivek, thanks for your response. This answered all my questions. However, when I checked the `requirements.lock` file in my model, I could still see `--index-url` and `--trusted-host` in it, which would cause a `ValueError` when loading deps. I remember that I fixed this in the commit here. How can I get the neuropod package that has my fix?
We are trying to enable the neuropod python backend with the Neuropod JNI now. After enabling python isolation, since the packaged python environment doesn't have torch pre-installed, we hit `No module named 'torch'` when loading the torch model.

Including a `requirements.lock` file could resolve the issue, but it would trigger installation when the model is loaded. This might be a problem when loading models on a large number of machines simultaneously. You also mentioned in the code that this is problematic when running multiple python models in a single process and that it's only intended to work when using OPE. So I am wondering: is it possible to pre-install the necessary packages (like torch) into the isolated python environment before loading the model, while keeping the size of the python backend small at the same time?