The problem
The library works sub-optimally when the resource allocation time is higher than the resource use time.
In OLTP workloads over a TLS-protected connection, transactions may take tens of milliseconds, while connection initiation takes up to a few hundred milliseconds, and up to seconds on an oversaturated system. When the pool is not saturated and a new request comes in, the requester waits until a new resource is created rather than until the next one is freed. This behaviour may put additional stress on the other subsystems and cause longer pauses for the system that uses the pool.
We have faced this situation in our system a number of times. Technically, configuring a minimum number of pre-allocated resources covers this problem, but does not cure it.
Proposed solution
As a solution I propose extending the tryAllocateResource and allocateResource functions so that they allocate the resource in a background thread, while the user thread waits for either the new resource to be created or an existing resource to be freed. In either case a resource is still obtained, but the wait is shorter whenever a resource is freed first. In the worst case the wait is the same as now, so the change is more complex only by a constant factor.
This solution should play well with #4 as we will have to track threads.
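A minimal sketch of the race, assuming hypothetical createFresh/waitFreed arguments in place of the library's real internals (the actual Pool code and function names differ):

```haskell
import Control.Concurrent.Async (async, cancel, waitSTM)
import Control.Concurrent.STM   (STM, atomically, orElse)
import Control.Exception        (onException)

-- Sketch only: 'createFresh' stands for the current blocking allocation path,
-- 'waitFreed' for an STM wait on a resource another thread has just returned.
raceAllocate
  :: IO a   -- allocate a brand-new resource (runs in a background thread)
  -> STM a  -- take a resource that has just been freed
  -> IO a
raceAllocate createFresh waitFreed = do
  alloc <- async createFresh
  let takeEither = (Left <$> waitSTM alloc) `orElse` (Right <$> waitFreed)
  result <- atomically takeEither `onException` cancel alloc
  case result of
    Left fresh  -> pure fresh   -- background allocation finished first
    Right freed -> do
      cancel alloc              -- sketch: drop the in-flight allocation; a real
      pure freed                -- version would hand it back to the pool instead
```

With orElse the user thread takes whichever source becomes available first, so at worst it waits exactly as long as it does today.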
Discussion of possible problems
Fairness property. We would lose the guarantee that, in the absence of deadlocks, a thread eventually gets a resource under high load. However, I don't think it matters much: we already use STM for resource control, so technically we don't have a fairness property now either.
Exception delivery. Currently, when we allocate a resource in the user thread, an exception is delivered in that thread and the resource is not registered in the pool; everything is simple. In the proposed scenario it is not obvious what to do with an exception. Should we rethrow it in one of the threads waiting for a resource? And what should happen when the exception fires but no thread is listening?
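One possible shape of an answer, sketched under the assumption of a hypothetical per-allocation completion slot (none of these names exist in the library): the background thread records the outcome as Either SomeException a, a waiting user thread rethrows the failure as if it had happened locally, and if no thread is waiting the slot still holds the outcome for the pool's maintenance or teardown code to inspect.

```haskell
import Control.Concurrent.STM (TMVar, atomically, putTMVar, takeTMVar)
import Control.Exception      (SomeException, throwIO, try)

-- Hypothetical: one completion slot per in-flight background allocation.
type Completion a = TMVar (Either SomeException a)

-- Background thread: never let the exception vanish silently,
-- always record the outcome in the slot.
runAllocation :: IO a -> Completion a -> IO ()
runAllocation create slot = do
  outcome <- try create
  atomically (putTMVar slot outcome)

-- Waiting user thread: rethrow the failure locally, which matches
-- today's behaviour of allocating in the user thread.
awaitAllocation :: Completion a -> IO a
awaitAllocation slot = atomically (takeTMVar slot) >>= either throwIO pure
```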
Background thread control. Since we will spawn background threads, we will need to track them, which makes the LocalPool or Pool structures more complex, and the teardown procedure will have to take those threads into account. The change looks straightforward, but are there any additional complexities we may be missing?
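For illustration only, the bookkeeping could be as small as an extra field holding the spawned Asyncs (an assumed field, not the library's actual Pool or LocalPool definition); a real implementation would also prune entries once an allocation completes:

```haskell
import Control.Concurrent.Async (Async, cancel)
import Control.Concurrent.STM   (TVar, atomically, modifyTVar', readTVarIO)

-- Assumed extra field on the pool: every spawned allocator thread is registered here.
newtype InFlight a = InFlight (TVar [Async a])

trackAllocator :: InFlight a -> Async a -> IO ()
trackAllocator (InFlight tv) a = atomically (modifyTVar' tv (a :))

-- Teardown cancels every tracked allocator before destroying the rest of
-- the pool, otherwise a late allocation could leak a resource.
cancelAllocators :: InFlight a -> IO ()
cancelAllocators (InFlight tv) = readTVarIO tv >>= mapM_ cancel
```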