perf(consensus): Reuse an internal buffer for block building (cometbft#3162)


Adds an internal buffer that we reuse when building blocks. For Osmosis blocks (which ranged between 1-2 block parts in the relevant time range), this saves roughly 75 microseconds per block on average, and the saving should scale roughly linearly with block size.

This removes one allocation per complete block received: we only need to reallocate when a block larger than any previously seen arrives.
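The grow-only reuse strategy described above can be sketched in isolation as follows. This is a minimal illustration, not the PR's code: `reuseBuf` and `take` are hypothetical names standing in for the `serializedBlockBuffer` field and the resizing logic in the diff below.

```go
package main

import "fmt"

// reuseBuf holds a grow-only scratch buffer. The stored slice only ever
// grows; smaller requests reslice it without allocating.
type reuseBuf struct {
	buf []byte
}

// take returns a length-n slice for the caller to fill. It reallocates
// only when n exceeds the largest size seen so far.
func (r *reuseBuf) take(n int) []byte {
	if len(r.buf) < n {
		r.buf = make([]byte, n)
	}
	return r.buf[:n]
}

func main() {
	var r reuseBuf
	for _, size := range []int{64, 32, 128, 100} {
		b := r.take(size)
		fmt.Printf("requested=%d returned=%d stored=%d\n", size, len(b), len(r.buf))
	}
}
```

Note that the stored buffer keeps its full length even when a smaller slice is handed out, so a later large block can reuse it; only a new "biggest ever seen" block triggers an allocation.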

---

- [x] Tests written/updated
- [x] Changelog entry added in `.changelog` (we use
[unclog](https://github.com/informalsystems/unclog) to manage our
changelog)
- [x] Updated relevant documentation (`docs/` or `spec/`) and code
comments
- [x] Title follows the [Conventional
Commits](https://www.conventionalcommits.org/en/v1.0.0/) spec

(cherry picked from commit 16839d8)
ValarDragon authored and mergify[bot] committed Jun 4, 2024
1 parent ba57909 commit dc24598
Showing 3 changed files with 60 additions and 1 deletion.
@@ -0,0 +1,2 @@
- [`consensus`] Reuse an internal buffer for block building to reduce memory allocation overhead.
([\#3162](https://github.com/cometbft/cometbft/issues/3162))
30 changes: 29 additions & 1 deletion consensus/state.go
@@ -140,6 +140,13 @@ type State struct {

	// for reporting metrics
	metrics *Metrics

	// offline state sync height indicating to which height the node synced offline
	offlineStateSyncHeight int64

	// a buffer to store the concatenated proposal block parts (serialization format)
	// should only be accessed under the cs.mtx lock
	serializedBlockBuffer []byte
}

// StateOption sets an optional parameter on the State.
@@ -1900,6 +1907,27 @@ func (cs *State) defaultSetProposal(proposal *types.Proposal) error {
	return nil
}

func (cs *State) readSerializedBlockFromBlockParts() ([]byte, error) {
	// reuse a serialized block buffer from cs
	var serializedBlockBuffer []byte
	if len(cs.serializedBlockBuffer) < int(cs.ProposalBlockParts.ByteSize()) {
		serializedBlockBuffer = make([]byte, cs.ProposalBlockParts.ByteSize())
		cs.serializedBlockBuffer = serializedBlockBuffer
	} else {
		serializedBlockBuffer = cs.serializedBlockBuffer[:cs.ProposalBlockParts.ByteSize()]
	}

	n, err := io.ReadFull(cs.ProposalBlockParts.GetReader(), serializedBlockBuffer)
	if err != nil {
		return nil, err
	}
	// Consistency check, should be impossible to fail.
	if n != len(serializedBlockBuffer) {
		return nil, fmt.Errorf("unexpected error in reading block parts, expected to read %d bytes, read %d", len(serializedBlockBuffer), n)
	}
	return serializedBlockBuffer, nil
}

// NOTE: block is not necessarily valid.
// Asynchronously triggers either enterPrevote (before we timeout of propose) or tryFinalizeCommit,
// once we have the full block.
@@ -1944,7 +1972,7 @@ func (cs *State) addProposalBlockPart(msg *BlockPartMessage, peerID p2p.ID) (add
		)
	}
	if added && cs.ProposalBlockParts.IsComplete() {
		bz, err := io.ReadAll(cs.ProposalBlockParts.GetReader())
		bz, err := cs.readSerializedBlockFromBlockParts()
		if err != nil {
			return added, err
		}
29 changes: 29 additions & 0 deletions consensus/state_test.go
@@ -2067,3 +2067,32 @@ func findBlockSizeLimit(t *testing.T, height, maxBytes int64, cs *State, partSiz
	require.Fail(t, "We shouldn't hit the end of the loop")
	return nil, nil
}

// TestReadSerializedBlockFromBlockParts tests that the readSerializedBlockFromBlockParts function
// reads the block correctly from the block parts.
func TestReadSerializedBlockFromBlockParts(t *testing.T) {
	sizes := []int{0, 5, 64, 70, 128, 200}

	// iterate through many initial buffer sizes and new block sizes.
	// (Skip new block size = 0, as that is not valid construction)
	// Ensure that we read back the correct block size, and the buffer is resized correctly.
	for i := 0; i < len(sizes); i++ {
		for j := 1; j < len(sizes); j++ {
			initialSize, newBlockSize := sizes[i], sizes[j]
			testName := fmt.Sprintf("initialSize=%d,newBlockSize=%d", initialSize, newBlockSize)
			t.Run(testName, func(t *testing.T) {
				blockData := cmtrand.Bytes(newBlockSize)
				ps := types.NewPartSetFromData(blockData, 64)
				cs := &State{
					serializedBlockBuffer: make([]byte, initialSize),
				}
				cs.ProposalBlockParts = ps

				serializedBlock, err := cs.readSerializedBlockFromBlockParts()
				require.NoError(t, err)
				require.Equal(t, blockData, serializedBlock)
				require.Equal(t, len(cs.serializedBlockBuffer), max(initialSize, newBlockSize))
			})
		}
	}
}
