Replies: 4 comments
-
If you try
-
I think this is likely a limitation of the specific host service. Multiprocessing can turn into a nightmare pretty fast; it might help if you can disable that in the host service. Otherwise, dynamic command execution should get around it.

Ruff has apparently resolved the referenced issue, but this does not appear to be the same error. "No such file or directory" suggests a file system error, and even if Ruff fails, the static assets will still have been generated and built.

The error message "This portal is not running" implies that async event loops are not being handled somewhere in the pipeline. This can be caused by $PATH issues that introduce system packages into the environment instead of isolating the environment completely and only calling packages from within it. This is often a problem with an incorrectly configured Anaconda Navigator installation. See this for a comparable issue with Databricks. You may need to add

Potential Solution

In situations like this, it will be better to run functions using
For Streamlit Cloud apps, you have to do it this way because they do not provide access to the file-system site-packages where the static assets are stored. This is async, and you need to manage it carefully throughout the entire pipeline. You might want to add this to the code, after the import blocks:

import nest_asyncio
nest_asyncio.apply()
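
As a usage note (not part of the original reply): a minimal sketch of where the patch goes, assuming the standard from openbb import obb entry point of the OpenBB Platform; the commented-out endpoint call is purely illustrative.

# Patch the event loop before anything that touches the OpenBB Platform,
# so re-entering an already-running loop (Databricks, Jupyter, Streamlit)
# is less likely to surface errors like "This portal is not running".
import nest_asyncio
nest_asyncio.apply()

# Import OpenBB only after the patch is applied.
from openbb import obb

# Illustrative only; substitute the provider/endpoint you actually use:
# data = obb.equity.price.historical("SPY")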
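Separately, to rule out the $PATH / mixed-environment problem described above, a quick check (nothing OpenBB-specific is assumed here) is to confirm that both the interpreter and the package directory resolve inside the isolated environment:

import sys
import sysconfig

# The interpreter actually running the code; it should live inside the
# project's virtual environment, not a system or Anaconda base install.
print("executable:   ", sys.executable)

# Where third-party packages (and the generated static assets) are installed.
print("site-packages:", sysconfig.get_paths()["purelib"])
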
-
Unfortunately this doesn't work; it results in the same error and
-
This works, thanks a lot! I am able to use
-
When installing the multpl openbb extension and building the Python interface as recommended by the docs, I encounter an error when executing the build command because the temporary cache file cannot be renamed, and subsequently I encounter an error pulling data from the multpl provider.

I only encounter this issue when using a Spark cluster (specifically in Databricks); when I run the same code on my local desktop there are no issues, the build succeeds, and I can pull data as usual from the multpl provider. I believe the failure on a Spark cluster is due to multiple processes running simultaneously with multiple ruff invocations, as described in this issue.

I am wondering whether renaming the cache file is critical to the build step and can be ignored if it fails, so the build can still succeed and I can continue using obb to pull data from the given provider. (One possible workaround is sketched after this post.)

To Reproduce
Screenshots
Desktop (see more details here):
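
For readers hitting the same rename race on a cluster, a minimal sketch of running the build once on the driver before any Spark-side work, treating a cache-rename failure as non-fatal. It assumes the openbb.build() entry point from the OpenBB Platform docs and relies on the observation in the reply above that the static assets are generated even when the Ruff step fails; verify both against your installed version.

import logging

def build_openbb_assets_once():
    # Assumed entry point: openbb.build() rebuilds the static assets
    # (see the OpenBB Platform installation docs). Run this on the driver,
    # before any Spark tasks import openbb, so multiple workers are not
    # racing on the same temporary cache file.
    import openbb
    try:
        openbb.build()
    except OSError as exc:
        # e.g. "No such file or directory" on the cache rename; the assets
        # may still have been generated, so log and continue.
        logging.warning("OpenBB build raised %s; continuing with existing assets", exc)

build_openbb_assets_once()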