fix(p2p): cache responses to serve without roundtrip to db #2352
base: master
Conversation
Force-pushed from dcdaaf7 to 8079739
crates/services/p2p/src/service.rs (outdated)
impl CachedView {
    fn new(metrics: bool) -> Self {
        Self {
            sealed_block_headers: DashMap::new(),
            transactions_on_blocks: DashMap::new(),
            metrics,
        }
    }
}
We probably want to also support sub-ranges or even partial ranges here, but that can come in the future :)
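For reference, a minimal sketch of what partial-range support could look like once the cache is keyed per height: serve the cached prefix of the range and report the first missing height, so only the remainder needs a db roundtrip. Names and types here are illustrative stand-ins, not the PR's code.

use std::ops::Range;
use dashmap::DashMap;

// Collect the cached prefix of `range`; `missing_start` marks where
// the db would need to take over. `V` stands in for the cached value type.
fn cached_prefix<V: Clone>(
    cache: &DashMap<u32, V>,
    range: Range<u32>,
) -> (Vec<V>, Option<u32>) {
    let mut items = Vec::new();
    let mut missing_start = None;
    for height in range {
        match cache.get(&height) {
            Some(entry) => items.push(entry.value().clone()),
            None => {
                missing_start = Some(height);
                break;
            }
        }
    }
    (items, missing_start)
}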
Force-pushed from 009dc14 to 5b03fa0
Thanks for implementing this. I hate to be annoying here, but to approve this I need to:
- See the Changelog updated.
- Understand the reasoning behind the current caching strategy and the benefits/drawbacks over an LRU cache.
- Be certain that we don't open the door to OOM attacks by allowing our cache to be overloaded.
Let me know your thoughts on 2 and 3. I'm happy to jump on a call to discuss this and figure out a good path forward.
pub struct CachedView {
    sealed_block_headers: DashMap<Range<u32>, Vec<SealedBlockHeader>>,
    transactions_on_blocks: DashMap<Range<u32>, Vec<Transactions>>,
    metrics: bool,
}
I'm a bit hesitant to the current approach of storing everything and clearing on a regular interval. Right now, there is no memory limit of the cache, and we use ranges as keys. So if someone queries the ranges (1..=4, 1..=2, 3..=4), we'd store all blocks in the 1..=4 range twice - and this could theoretically grow quadratically for larger ranges.
I would assume that the most popular queries at a given time are quite similar. Why not use a normal LRU cache with a fixed memory size? Alternatively, just maintain a cache over the last …
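For comparison, a minimal sketch of the suggested LRU alternative, assuming the `lru` crate and a plain u32 height key with a stand-in `Header` type (neither is the PR's actual code):

use std::num::NonZeroUsize;
use lru::LruCache;

// Bounded cache keyed by height: overlapping range queries share
// entries instead of duplicating whole ranges, and the fixed capacity
// addresses the OOM concern above.
struct HeightLru<Header> {
    headers: LruCache<u32, Header>,
}

impl<Header: Clone> HeightLru<Header> {
    fn new(capacity: NonZeroUsize) -> Self {
        Self { headers: LruCache::new(capacity) }
    }

    fn insert(&mut self, height: u32, header: Header) {
        self.headers.put(height, header);
    }

    fn get(&mut self, height: u32) -> Option<Header> {
        self.headers.get(&height).cloned()
    }
}

A capacity like HeightLru::new(NonZeroUsize::new(100_000).unwrap()) would bound memory by entry count; sizing by bytes would need a weigher on top, which the plain `lru` crate does not provide.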
Yup, it's still WIP.
Ah right, I see this PR is still a draft :)
We now use block height as the key in 6422210; we will retain the time-based eviction strategy for now.
… instead of a per-range basis
Co-authored-by: Mårten Blankfors <[email protected]>
let mut items = Vec::new();
let mut missing_start = None;

for height in range.clone() {
I would have expected Range to be Copy, but it seems it's not: rust-lang/rust#21846 (comment)
I think you can get around it by copying the range bounds before consuming the range itself, since those are Copy. E.g.

let range_copy = range.start..range.end;
for height in range { /* ... */ } // no need to clone anymore

In this case you only need range.end, so it might be better to copy just that value.
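To make the point concrete, a self-contained example of the workaround (illustrative only, not the PR's code):

use std::ops::Range;

// Range<u32> is not Copy, so iterating it moves it; the u32 bounds
// themselves are Copy, so grab what you need before the loop.
fn sum_heights(range: Range<u32>) -> (u32, u64) {
    let end = range.end; // copied bound, still usable after the move
    let total: u64 = range.map(u64::from).sum(); // consumes `range`
    (end, total)
}

fn main() {
    assert_eq!(sum_heights(0..5), (5, 10));
}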
LGTM 👍
Co-authored-by: Rafał Chabowski <[email protected]>
    }
}

pub(super) fn clear(&self) {
Maybe we could use an LRU cache instead of wiping the whole cache every 10 seconds? This is just a thought, since the current approach should also work.
Yeah, in the future we can use an LRU :) We discussed it somewhere above too.
cache_reset_interval: Duration,
next_cache_reset_time: Instant,
Looks like this could be internal logic of the CachedView, and we could clean it on each insert/get.
It could be, but I didn't want get_from_cache_or_db to require a mutable reference to Self, because it's just a getter. No strong opinion here, so if you want it that way, I can move it around.
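For illustration, roughly what the external timer amounts to (a sketch reusing the field names quoted above, with assumed method names): keeping the deadline outside the CachedView is what lets its getters stay &self.

use std::time::{Duration, Instant};

// Deadline tracker owned by the service loop rather than the cache,
// so the cache's getters can keep taking &self.
struct ResetTimer {
    cache_reset_interval: Duration,
    next_cache_reset_time: Instant,
}

impl ResetTimer {
    fn new(cache_reset_interval: Duration) -> Self {
        Self {
            cache_reset_interval,
            next_cache_reset_time: Instant::now() + cache_reset_interval,
        }
    }

    // Returns true when the cache should be wiped, and re-arms the deadline.
    fn should_reset(&mut self, now: Instant) -> bool {
        if now >= self.next_cache_reset_time {
            self.next_cache_reset_time = now + self.cache_reset_interval;
            true
        } else {
            false
        }
    }
}

The service loop would call should_reset(Instant::now()) on each tick and invoke the cache's clear() when it returns true.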
for height in range.clone() {
    if let Some(item) = cache.get(&height) {
        items.push(item.clone());
It can be a follow-up PR, but it would be nice if we avoided the heavy clone here and used Arc instead.
Added a comment here - d897cba
associated issue: #2436
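A minimal sketch of the Arc idea from that follow-up (types are stand-ins, not the PR's definitions): store Arc<V> in the map so a cache hit clones a pointer rather than the whole value.

use std::sync::Arc;
use dashmap::DashMap;

struct ArcCache<V> {
    entries: DashMap<u32, Arc<V>>,
}

impl<V> ArcCache<V> {
    fn new() -> Self {
        Self { entries: DashMap::new() }
    }

    fn insert(&self, height: u32, value: V) {
        self.entries.insert(height, Arc::new(value));
    }

    // Cloning the Arc is a cheap refcount bump, no matter how large
    // V (e.g. a block's transactions) is.
    fn get(&self, height: u32) -> Option<Arc<V>> {
        self.entries.get(&height).map(|entry| entry.value().clone())
    }
}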
let block_height_range = 0..100;
let sealed_headers = default_sealed_headers(block_height_range.clone());
let result = cached_view
    .get_sealed_headers(&db, block_height_range.clone())
I would expect the cache to be linked to the DB at the time it is created, rather than having to specify the DB when invoking get_sealed_headers or get_transactions. Just curious to know the reason behind this choice?
You will notice that the view of the current tip of the db (LatestView) is passed into the CachedView when making calls.
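In other words (a rough sketch with assumed signatures): the cache holds no db handle; callers hand it the current view on every call, so lookups never read through a stale tip.

use dashmap::DashMap;

// `Header` stands in for SealedBlockHeader; the closure stands in for
// the LatestView lookup the caller provides on each call.
struct CachedView<Header> {
    sealed_block_headers: DashMap<u32, Header>,
}

impl<Header: Clone> CachedView<Header> {
    fn get_sealed_header(
        &self,
        view: impl Fn(u32) -> Option<Header>,
        height: u32,
    ) -> Option<Header> {
        if let Some(hit) = self.sealed_block_headers.get(&height) {
            return Some(hit.value().clone());
        }
        let header = view(height)?; // fall back to the caller's current view
        self.sealed_block_headers.insert(height, header.clone());
        Some(header)
    }
}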
LGTM. I have a side question: should the cache be cleared in case of a DB rollback, to avoid inconsistencies?
That's a good question! I wonder if we have a hook from the db to be notified when it gets rolled back.
Linked Issues/PRs
Description
When we request transactions for a given block range, we shouldn't keep using the same peer and putting pressure on it; we should pick a random peer at the same height and try to get the transactions from it instead. This PR caches p2p responses (TTL 10 seconds by default) and serves requests from the cache, falling back to the db for the rest.
Checklist
Before requesting review
After merging, notify other teams