Prefetch syncs to redis #1446
base: main
Conversation
Codecov Report
Attention: Patch coverage is …

Additional details and impacted files

    @@            Coverage Diff            @@
    ##             main    #1446      +/-  ##
    =========================================
    - Coverage   93.88%   92.92%   -0.97%
    =========================================
      Files          78       79       +1
      Lines        6361     5030    -1331
    =========================================
    - Hits         5972     4674    -1298
    + Misses        300      251      -49
    - Partials       89      105      +16

☔ View full report in Codecov by Sentry.
I have a couple of comments, but it's mostly stuff I noticed about the existing code when comparing it with the new one!
resolver/caching_resolver.go (Outdated)

    } else {
        util.LogOnError(ctx, fmt.Sprintf("can't prefetch '%s' ", domainName), err)

    if r.redisClient != nil {
The behavior here is slightly different from putInCache: if .Pack() failed, we're not sending the response to Redis. I think not sending it is better, so maybe update putInCache to match?
You're correct, sending it when Pack() fails is an error in putInCache.
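For illustration, a minimal sketch of the fix being agreed on here, assuming putInCache roughly matches the snippets quoted in this review (respCopy, resultCache, adjustTTLs and PublishCache are taken from them; the log message is a placeholder): when Pack() fails, bail out before both the local cache write and the Redis publish, just like the prefetch path does.

```go
// Inside putInCache (sketch): skip the cache and the Redis publish on pack failure.
packed, err := respCopy.Pack()
if err != nil {
	logger.Warn("can't pack response for caching: ", err)

	return // neither cache locally nor publish an entry we couldn't pack
}

r.resultCache.Put(cacheKey, &packed, r.adjustTTLs(respCopy.Answer))

if r.redisClient != nil {
	r.redisClient.PublishCache(cacheKey, respCopy)
}
```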
resolver/caching_resolver.go (Outdated)

        r.redisClient.PublishCache(cacheKey, &res)
    }

    return &packed, r.adjustTTLs(response.Res.Answer)
This is also a slight difference compared to the putInCache flow: here Redis sees the unadjusted TTLs, whereas when we add values using putInCache, Redis gets the adjusted TTLs.
My initial reaction is that this is correct, and the other code should change, since other blocky instances might have a different TTL configuration.
Sending the unadjusted TTL to Redis isn't a problem, since received entries are TTL-adjusted on receipt, before they are put into the local cache.
This way, instances with different configs can share the same sync channel without breaking each other's settings.
That was the basic idea when I implemented it. I get that this is counterintuitive and should be changed.
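To illustrate the receive side described above, a hypothetical sketch (the handler name and wiring are made up; adjustTTLs and resultCache come from the snippets in this review): whatever TTLs arrive over the Redis sync channel, the receiving instance re-applies its own TTL config before storing the entry locally, so the publisher's settings never leak across instances.

```go
// Sketch of a receive-side handler for cache messages from the Redis sync channel.
func (r *CachingResolver) handleRedisCacheMessage(key string, res *dns.Msg) {
	// re-apply *this* instance's min/max TTL settings to the received answer
	ttl := r.adjustTTLs(res.Answer)

	packed, err := res.Pack()
	if err != nil {
		return // don't cache an entry we can't re-serialize
	}

	r.resultCache.Put(key, &packed, ttl)
}
```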
resolver/caching_resolver.go (Outdated)

    // don't cache any EDNS OPT records
    util.RemoveEdns0Record(respCopy)
Nice catch!
Based on my other comments, and this being missing before, I think we should try to factor the code into a single func we can call from both here and putInCache. Maybe something that adds the response to Redis and returns the local cache TTL plus the entry with adjusted TTL.
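A rough sketch of such a shared func, under the assumptions of this thread: the name createAndPublishCacheEntry is hypothetical, while respCopy, RemoveEdns0Record, adjustTTLs and PublishCache come from the diffs quoted in this review.

```go
// Single code path used by both the prefetch flow and putInCache (sketch).
func (r *CachingResolver) createAndPublishCacheEntry(cacheKey string, response *model.Response) (*[]byte, time.Duration) {
	respCopy := response.Res.Copy()

	// don't cache any EDNS OPT records
	util.RemoveEdns0Record(respCopy)

	// NOTE: whether Redis should see adjusted or unadjusted TTLs is still an
	// open point in this thread; this sketch adjusts before publishing.
	ttl := r.adjustTTLs(respCopy.Answer)

	packed, err := respCopy.Pack()
	if err != nil {
		return nil, 0 // callers skip both caching and publishing on pack failure
	}

	if r.redisClient != nil {
		r.redisClient.PublishCache(cacheKey, respCopy)
	}

	return &packed, ttl
}
```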
@ThinkChaos I started refactoring the whole caching_resolver. Thanks for the input. 👍 I'll let you know when I'm done.
@ThinkChaos It still needs some tests, but may I get your opinion on the current changes?
resolver/caching_resolver.go (Outdated)

        r.redisClient.PublishCache(cacheKey, cacheCopy)
    }

    return &packed, util.ToTTLDuration(ttl)
I think the biggest cause of dupe code is that here we need to return the new cache entry, but in r.Resolve we need to call r.resultCache.Put.

So we could change the prefetch API to be more like the normal caching flow: PrefetchingOptions.ReloadFn func(ctx context.Context, key string) (*T, time.Duration) could become something like OnAboutToExpire func(ctx context.Context, key string), where the callback is expected to call Put if it wants to update the value. That way reloadCacheEntry could become a tiny wrapper around r.Resolve (or a new resolve method they both call).

From a quick check, ReloadFn is only used by this code (and the cache tests), and Put allows replacing a value, so I think that should be doable!
From there I think most things will fall into place :)
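A sketch of the proposed API change: PrefetchingOptions, ReloadFn, OnAboutToExpire and reloadCacheEntry are the names from the comment above, while the cache-key parsing and request construction (parseCacheKey, newRequest) are hypothetical placeholders rather than existing helpers.

```go
// Proposed shape of the prefetch options (sketch).
type PrefetchingOptions[T any] struct {
	// Before: ReloadFn func(ctx context.Context, key string) (*T, time.Duration)
	// After: the cache only notifies; the callback calls Put itself if it
	// wants to refresh the entry.
	OnAboutToExpire func(ctx context.Context, key string)
}

// reloadCacheEntry shrinks to a thin wrapper around the normal resolve path,
// which already handles putInCache (and, with it, the Redis publish).
func (r *CachingResolver) reloadCacheEntry(ctx context.Context, cacheKey string) {
	domainName, qType := parseCacheKey(cacheKey) // hypothetical inverse of the cache-key builder

	if _, err := r.Resolve(ctx, newRequest(domainName, qType)); err != nil {
		util.LogOnError(ctx, fmt.Sprintf("can't prefetch '%s' ", domainName), err)
	}
}
```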
Thanks for the input, I think that was the part that caused me the most headaches. 😅
I'll also try to refactor the caches to streamline the resolver behavior. 👍
resolver/caching_resolver.go (Outdated)

    ttl := r.modifyResponseTTL(response.Res)
    if ttl > noCacheTTL {
        cacheCopy := r.putInCache(logger, cacheKey, response)
        if cacheCopy != nil && r.redisClient != nil {
            r.redisClient.PublishCache(cacheKey, cacheCopy)
        }
    }
This still adjusts the TTL twice, but if we can get everything down to a single code path, it means putInCache can also publish to Redis, which will make this whole part simpler and avoid needing to return a copy that is only used a couple of lines later.
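A sketch of how the call site could collapse once putInCache publishes to Redis itself; the names follow the diff above, and the extra ttl parameter is an assumption for avoiding the second adjustment, not the final signature.

```go
ttl := r.modifyResponseTTL(response.Res)
if ttl > noCacheTTL {
	// putInCache packs, stores locally, and publishes to Redis itself; passing
	// the already-computed ttl avoids adjusting it a second time inside.
	r.putInCache(logger, cacheKey, response, ttl)
}
```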
Maybe wrap it in another function? 🤔
I tried removing the publish Boolean parameter since it seemed misplaced.
On the other hand, a method to publish without put is also needed (prefetching)... 🥴
I'm having a little trouble fitting these three requirements into one logical block.
> On the other hand, a method to publish without put is also needed (prefetching)... 🥴

Not sure if you saw my other comment, but I think we can make this go away without too much trouble.
Already commented on it. 👍
Sorry, I read and replied in reverse order. 🫣
I had to check out the branch locally to browse it in my editor, because there are indeed a lot of functions calling each other, combined in a couple of different ways, so it's definitely hard to follow!
Yeah, I know, that's one of the reasons why I was a bit unsure about the direction. 🫣
Good thing I looked deeper into the caches, since I found a remaining …
Changes:
Closes #1422
Related #1420