Add watch request timeout to prevent watch request hang #5732
base: master
Conversation
[APPROVALNOTIFIER] This PR is NOT APPROVED
This pull-request has been approved by:
Needs approval from an approver in each of these files. Approvers can indicate their approval by writing `/approve` in a comment. The full list of commands accepted by this bot can be found here.
Codecov Report
Attention: Patch coverage is
❗ Your organization needs to install the Codecov GitHub app to enable full functionality.
Additional details and impacted files:
@@            Coverage Diff             @@
## master #5732 +/- ##
==========================================
+ Coverage 40.90% 41.58% +0.67%
==========================================
Files 650 655 +5
Lines 55182 55773 +591
==========================================
+ Hits 22573 23191 +618
+ Misses 31171 31076 -95
- Partials 1438 1506 +68
Flags with carried forward coverage won't be shown. Click here to find out more.
☔ View full report in Codecov by Sentry.
cc @XiShanYongYe-Chang @RainbowMango @ikaven1024 PTAL.
	return nil, err
case <-time.After(30 * time.Second):
	// If the watch request times out, return an error, and the client will retry.
	return nil, fmt.Errorf("timeout waiting for watch for resource %v in cluster %q", gvr.String(), cluster)
@xigang Hi, if a watch request hangs and causes a timeout, will the hanging watch request continue to exist in the background?
@zhzhuang-zju Yes, there is this issue. When a watch request times out, the goroutine needs to be terminated.
Good point! In that case we have to cancel the context passed to cache.Watch().
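A sketch of that idea, reusing `startWatch` from the snippet above (add it to the same file): derive a cancellable context and cancel it on timeout, so the goroutine blocked in the watch call is released instead of leaked. Names are illustrative, not the PR's exact code:

```go
// watchWithTimeout cancels the derived context on timeout so the goroutine
// blocked in startWatch is unblocked rather than leaked.
func watchWithTimeout(parent context.Context, timeout time.Duration) error {
	ctx, cancel := context.WithCancel(parent)

	errChan := make(chan error, 1)
	go func() {
		errChan <- startWatch(ctx)
	}()

	select {
	case err := <-errChan:
		cancel() // the watch returned on its own; release the derived context
		return err
	case <-time.After(timeout):
		cancel() // unblocks the goroutine; it sends into the buffered channel and exits
		return fmt.Errorf("timeout waiting for watch")
	}
}
```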
So this patch intends to terminate the hanging request by returning an error after a period of time. Is that the idea?
Another question: before starting the Watch, we try to get the cache of that cluster. I'm curious why this cache still exists even after the cluster is gone. Do we have a chance to clean the cache?
karmada/pkg/search/proxy/store/multi_cluster_cache.go (lines 333 to 336 in e7b6513):
cache := c.cacheForClusterResource(cluster, gvr)
if cache == nil {
	continue
}
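For illustration only, a hypothetical cleanup hook; the type and method names below are invented for this sketch and are not karmada's actual multi_cluster_cache API. It shows the kind of bookkeeping that would let the proxy drop a removed cluster's caches so the lookup above returns nil afterwards:

```go
import "sync"

// Hypothetical types, not karmada's real ones.
type clusterResourceCache struct{ /* reflector, store, ... */ }

func (c *clusterResourceCache) stop() { /* stop reflectors and watchers */ }

type multiClusterCache struct {
	mu     sync.Mutex
	caches map[string]map[string]*clusterResourceCache // cluster -> GVR -> cache
}

// removeCluster would run when the Cluster object (or its ResourceRegistry
// entry) is deleted, so stale caches cannot be watched afterwards.
func (c *multiClusterCache) removeCluster(cluster string) {
	c.mu.Lock()
	defer c.mu.Unlock()
	for _, cache := range c.caches[cluster] {
		cache.stop()
	}
	delete(c.caches, cluster)
}
```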
> Another question: before starting the Watch, we try to get the cache of that cluster. I'm curious why this cache still exists even after the cluster is gone. Do we have a chance to clean the cache?
@RainbowMango When a member cluster goes offline but its Cluster resource in the control plane is not deleted, the offline cluster cannot be removed from the ResourceRegistry, so the resource cache is retained for a short time.
> @xigang Hi, if a watch request hangs and causes a timeout, will the hanging watch request continue to exist in the background?
@RainbowMango @zhzhuang-zju Fixed, please take a look.
Force-pushed from 21c10d3 to 5a55ca2.
/retest
	return nil, err
case <-time.After(30 * time.Second):
It seems we wait 30s for each cluster. Should we wait for all clusters in parallel?
> It seems we wait 30s for each cluster. Should we wait for all clusters in parallel?
@ikaven1024 There's no issue here; as long as a single cache.Watch times out, the Watch request will return with an error and end. 😄
> There's no issue here; as long as a single cache.Watch times out, the Watch request will return with an error and end. 😄
But if creating the watch for each cluster takes 20s without timing out, the total time spent is 20s * N.
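A sketch of the parallel variant being suggested here, using golang.org/x/sync/errgroup so per-cluster setup latency overlaps instead of adding up; `startClusterWatch` is a hypothetical stand-in for the per-cluster `cache.Watch` call:

```go
package proxyutil // hypothetical package for this sketch

import (
	"context"

	"golang.org/x/sync/errgroup"
)

// watchAllClusters starts the per-cluster watches concurrently, so total
// setup time is bounded by the slowest cluster rather than the sum over N.
func watchAllClusters(ctx context.Context, clusters []string,
	startClusterWatch func(ctx context.Context, cluster string) error) error {
	g, gctx := errgroup.WithContext(ctx)
	for _, cluster := range clusters {
		cluster := cluster // capture the loop variable (needed before Go 1.22)
		g.Go(func() error {
			return startClusterWatch(gctx, cluster)
		})
	}
	return g.Wait() // the first error cancels gctx for the remaining clusters
}
```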
errChan := make(chan error, 1)

go func(cluster string) {
	w, err := cache.Watch(ctx, options)
If this watcher is created after the 30s timeout, there seems to be no way to stop it. Is that a leak?
If the watcher times out after 30 seconds during creation, it will trigger a time.After timeout, return an error, and call cancel to stop the watcher goroutine.
defer func() { cancel() }()
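Putting the pieces of this thread together, the shape under discussion looks roughly like this. It is a simplified sketch, not the merged code: in particular, the real code must keep a successful watcher alive beyond this function, which the unconditional cancel here glosses over. `startClusterWatch` is the same hypothetical stand-in as above:

```go
func watchCluster(parent context.Context, cluster string,
	startClusterWatch func(ctx context.Context, cluster string) error) error {
	ctx, cancel := context.WithCancel(parent)
	defer func() { cancel() }() // as in the hunk above: every return path cancels

	errChan := make(chan error, 1) // buffered so a late goroutine never blocks
	go func(cluster string) {
		errChan <- startClusterWatch(ctx, cluster) // stand-in for cache.Watch(ctx, options)
	}(cluster)

	select {
	case err := <-errChan:
		return err
	case <-time.After(30 * time.Second):
		// cancel (via the defer) unblocks the goroutine, so nothing leaks.
		return fmt.Errorf("timeout waiting for watch for cluster %q", cluster)
	}
}
```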
ok
Commit: Add watch request timeout to prevent watch request hang. Signed-off-by: xigang <[email protected]>
What type of PR is this?
/kind bug
What this PR does / why we need it:
When the federate-apiserver's watch request to the member cluster gets stuck, it will cause the watch request from the federated client to get stuck as well.
Which issue(s) this PR fixes:
Fixes #5672
Special notes for your reviewer:
Does this PR introduce a user-facing change?: