
enablePeriodicRefresh has problematic behavior when the service is shut down (close) #3089

Open
Kvicii opened this issue Dec 18, 2024 · 6 comments
Labels
status: waiting-for-feedback (We need additional information before we can continue)

Comments

@Kvicii

Kvicii commented Dec 18, 2024

Bug Report

Current Behavior

For example: I have two classes, A and B, whose close methods are shown below. A iterates over multiple RedisSender instances (B), closes each one, and finally releases the shared ClientResources. At the same time, ClusterTopologyRefreshOptions is configured as follows:

ClusterTopologyRefreshOptions refreshOptions = ClusterTopologyRefreshOptions.builder()
        .enablePeriodicRefresh(Duration.ofSeconds(10))
        .enableAllAdaptiveRefreshTriggers()
        .closeStaleConnections(true)
        .dynamicRefreshSources(true)
        .build();
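
For context, a minimal sketch of how these refresh options are wired into the RedisClusterClient (the clusterClient and clientResources names and the URI below are placeholders for illustration, not my real configuration):

// Illustration only: wiring the refresh options into a cluster client.
ClientResources clientResources = DefaultClientResources.create();

RedisClusterClient clusterClient = RedisClusterClient.create(clientResources,
        RedisURI.create("redis://localhost:6379")); // placeholder URI

// Apply the topology refresh options built above.
clusterClient.setOptions(ClusterClientOptions.builder()
        .topologyRefreshOptions(refreshOptions)
        .build());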

I suspect this is because the Netty threads were not shut down in time; the following error occurred:

[Two screenshots (2024-12-18) showing the error are attached]
// A
public void close() {
    for (RedisSender sender : redisSenderList) {
        sender.close();
    }

    if (clientResources != null) {
        clientResources.shutdown();
    }
}

// B (RedisSender)
public void close() {
    // close this sender's StatefulRedisClusterConnection<String, byte[]>,
    // then shut down its RedisClusterClient
    connection.close();
    clusterClient.shutdown();
}
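
One possible mitigation (not verified, sketch only, reusing the field names from the snippet above) would be to give the shared ClientResources a quiet period and wait on its shutdown future, so the event loops are not torn down while refresh tasks are still in flight:

// Hypothetical variant of A.close(); requires java.util.concurrent.TimeUnit.
public void close() throws Exception {
    for (RedisSender sender : redisSenderList) {
        sender.close(); // closes the connection and shuts down that sender's cluster client
    }

    if (clientResources != null) {
        // shutdown(quietPeriod, timeout, unit) returns a Future; waiting on it ensures
        // the shared event loops have fully stopped before the application moves on.
        clientResources.shutdown(100, 2000, TimeUnit.MILLISECONDS).get();
    }
}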

Expected behavior/code

After the ClientResources have been shut down, tasks that are still being processed should be discarded rather than producing errors.

Environment

  • Lettuce version(s): 6.3.2.RELEASE
@tishun
Collaborator

tishun commented Dec 21, 2024

This seems to be the same issue as the one described in #2904

@Kvicii
Author

Kvicii commented Dec 23, 2024

@tishun Yes, but I'm not sure whether it can be fixed soon, and as for what #2985 mentions, I'm not sure whether it works.

@tishun
Collaborator

tishun commented Dec 23, 2024

Closing in favor of #2904. Let's continue the discussion there.

If this is a different problem, please reopen and describe how it differs so we can try to address it as well.

@tishun
Collaborator

tishun commented Dec 23, 2024

> @tishun Yes, but I'm not sure whether it can be fixed soon, and as for what #2985 mentions, I'm not sure whether it works.

Not fixed yet.

How big of a problem is it for you? It seems to me that it should not have a big effect on the application beyond the log messages.

@tishun
Collaborator

tishun commented Dec 23, 2024

> > @tishun Yes, but I'm not sure whether it can be fixed soon, and as for what #2985 mentions, I'm not sure whether it works.
>
> Not fixed yet.
>
> How big of a problem is it for you? It seems to me that it should not have a big effect on the application beyond the log messages.

To elaborate more on my question:

  • if there is a functional regression caused by this problem, then we would increase its priority
  • if not, I would have to postpone it for now, as there are actual functional bugs with higher priority

As always, if you believe you can contribute a solution, please send out a PR with your idea.

@tishun added the status: waiting-for-feedback label on Dec 23, 2024
@tishun closed this as completed on Dec 23, 2024
@tishun reopened this on Dec 23, 2024
@Kvicii
Author

Kvicii commented Dec 24, 2024

@tishun I have not seen any significant impact so far, but for a stable service, error logs like these should be avoided as much as possible.
