[FEATURE] Better Thread Pool integration #61
Comments
opensearch-project/OpenSearch#10248 - none of this stuff needs to run in transport threads?
We're calling the workflow steps via the MLClient, so the eventual calls will run on their appropriate threads. All of our calls have timeouts (unless the user explicitly overrides them). @joshpalis is working on our own thread pool in #63, see https://github.com/opensearch-project/opensearch-ai-flow-framework/pull/63/files#diff-9d60d6086ce87e2240d7d6bcf8d2a8b79c73400fc6ec6865f74fc571e03bf1af
PR #63 has been merged, which addresses this issue. There is an open question that I would like to bring up for discussion: what should our thread pool size/queue size be for a provision workflow (which may become an expensive operation depending on the given use case)?
Currently the provision thread pool is implemented as a
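The sizing question above (pool size vs. queue size for potentially long-running provision tasks) can be explored with a plain `java.util.concurrent` sketch. This is not the plugin's actual configuration; the sizes and the pool itself are illustrative. It shows the key tradeoff: a bounded queue surfaces overload as an explicit rejection instead of unbounded memory growth.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class ProvisionPoolSketch {
    public static void main(String[] args) throws Exception {
        // Hypothetical sizing: a small fixed pool with a bounded queue,
        // since provision tasks may be expensive and long-running.
        int poolSize = 2;   // example value, not the plugin's real setting
        int queueSize = 4;  // example value
        ThreadPoolExecutor provisionPool = new ThreadPoolExecutor(
            poolSize, poolSize, 0L, TimeUnit.MILLISECONDS,
            new ArrayBlockingQueue<>(queueSize),
            new ThreadPoolExecutor.AbortPolicy()); // reject rather than block

        CountDownLatch release = new CountDownLatch(1);
        // Saturate all worker threads and fill the queue.
        for (int i = 0; i < poolSize + queueSize; i++) {
            provisionPool.execute(() -> {
                try { release.await(); } catch (InterruptedException ignored) { }
            });
        }

        boolean rejected = false;
        try {
            provisionPool.execute(() -> { });
        } catch (RejectedExecutionException e) {
            rejected = true; // a bounded queue turns overload into back-pressure
        }
        System.out.println("rejected=" + rejected);

        release.countDown();
        provisionPool.shutdown();
        provisionPool.awaitTermination(5, TimeUnit.SECONDS);
    }
}
```

The rejection behavior is why the queue size matters for provisioning: too small and bursts of workflow requests fail fast; unbounded and expensive tasks pile up in memory.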
Is your feature request related to a problem?
#47 implemented basic use of an ExecutorService to give better control of the async process execution.
#66 improved on #47 by using the actual thread pool, but it is still just using the generic thread pool name.
We can implement more custom thread pool behavior/prioritization similar to how other plugins have done so, moving beyond the generic thread pool.
What solution would you like?
At minimum, carefully choose the correct existing thread pool for the threads being created.
Possibly a custom thread pool integration that properly prioritizes this plugin's threads. Custom thread pools override `getExecutorBuilders()`; see examples here and here.
Do you have any additional context?
This is an admittedly vague feature because it's an optional prioritization, entered primarily to note that the initial implementation took the lowest/default priority in the interest of unblocking rapid development. This issue is intended to prompt a more careful implementation.
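Plugins that override `getExecutorBuilders()` generally register one of two pool shapes: a fixed pool (constant thread count, bounded queue) or a scaling pool (grows from a core size up to a max and idles back down). The sketch below mirrors those two shapes using only `java.util.concurrent`, so it runs without OpenSearch on the classpath; the names, sizes, and the use of a `SynchronousQueue` for scaling are simplifications of what the real executor builders configure, not the plugin's implementation.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.SynchronousQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class PoolShapes {
    // Fixed shape: constant thread count with a bounded queue,
    // analogous to what a fixed executor builder configures.
    static ThreadPoolExecutor fixedPool(int size, int queueSize) {
        return new ThreadPoolExecutor(size, size,
            0L, TimeUnit.MILLISECONDS,
            new ArrayBlockingQueue<>(queueSize));
    }

    // Scaling shape: grows from core to max threads and idles back down.
    // A SynchronousQueue forces thread creation up to max instead of
    // queueing; the real scaling executor uses a custom queue to avoid
    // rejecting work once max is reached.
    static ThreadPoolExecutor scalingPool(int core, int max, long keepAliveSec) {
        return new ThreadPoolExecutor(core, max,
            keepAliveSec, TimeUnit.SECONDS,
            new SynchronousQueue<>());
    }

    public static void main(String[] args) {
        ThreadPoolExecutor fixed = fixedPool(4, 100);   // illustrative sizes
        ThreadPoolExecutor scaling = scalingPool(1, 8, 30);
        System.out.println("fixed max=" + fixed.getMaximumPoolSize()
            + " scaling max=" + scaling.getMaximumPoolSize());
        fixed.shutdown();
        scaling.shutdown();
    }
}
```

Choosing between the two shapes is the crux of the prioritization question: fixed pools give predictable resource use for steady workloads, while scaling pools suit bursty, short-lived work.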