[Proposal] Support for Autobatching #361

Open
pfreixes opened this issue Apr 17, 2021 · 0 comments
Hi, first of all, thanks for the work done here; this is a great Memcached client.

I would like to contribute a new feature called Autobatching to the repository, but before starting work on it I would like to gather some feedback from you, to understand how feasible it would be to have this feature in place if the reviewers eventually consider the PR acceptable.

Rationale

Autobatching is based on the idea of autopipelining for Redis [1], which minimizes TCP overhead by transparently sending multiple keys in a single command, without the user having to do anything.

Following the same idea, Autobatching for a Memcached client implemented on a reactor loop paradigm leverages the get_many support provided by Memcached to pile up a set of individual get (or gets) operations and send all of them together in the next loop iteration.

This has already been implemented in the Emcache [2] library (I am the author), which is built on asyncio, a framework that implements a reactor pattern similar to the one provided by Node.js. Laboratory benchmarks show that autobatching can perform ~2x better than the traditional usage of the get and gets methods.
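To make the mechanism concrete, here is a minimal sketch of the idea (not Emcache's actual code): get() calls issued during one event loop iteration are piled up and resolved by a single multi-key fetch on the next iteration. The `fetch_many` coroutine is a hypothetical stand-in for the client's multi-key get.

```python
import asyncio


class AutoBatcher:
    """Piles up get() calls issued during one event loop iteration and
    flushes them as a single multi-key fetch on the next iteration."""

    def __init__(self, fetch_many):
        # fetch_many(keys) is assumed to be a coroutine returning a dict
        # of key -> value, e.g. a thin wrapper around get_many.
        self._fetch_many = fetch_many
        self._pending = {}            # key -> [futures waiting for that key]
        self._flush_scheduled = False

    def get(self, key):
        loop = asyncio.get_running_loop()
        fut = loop.create_future()
        self._pending.setdefault(key, []).append(fut)
        if not self._flush_scheduled:
            # call_soon fires on the next loop iteration, so every get()
            # issued during the current iteration joins the same batch.
            self._flush_scheduled = True
            loop.call_soon(lambda: asyncio.ensure_future(self._flush()))
        return fut

    async def _flush(self):
        batch, self._pending = self._pending, {}
        self._flush_scheduled = False
        try:
            results = await self._fetch_many(list(batch))
        except Exception as exc:
            for futures in batch.values():
                for fut in futures:
                    fut.set_exception(exc)
            return
        for key, futures in batch.items():
            for fut in futures:
                fut.set_result(results.get(key))
```

With something like this in place, ten concurrent `await batcher.get(key)` calls issued in the same iteration translate into one multi-key round trip instead of ten.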

What would be implemented

The idea is to follow the current Emcache implementation, which is based on the following characteristics:

  • Get and gets operations are piled up during the same event loop iteration.
  • On the next event loop iteration, the accumulated batches are sent to the right nodes.
  • Batches are also split into multiple smaller batches to cap the size of any single batch.
  • Autobatching is enabled at client instantiation level.
  • The API remains the same, but once autobatching is enabled, get and gets are routed to the autobatching code path (see the sketch after this list).
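As a rough sketch of how the last three points could fit together, assuming the AutoBatcher from the earlier sketch; the `autobatching` constructor flag, the batch size cap, and the `_get_single`/`_get_many` helpers are all illustrative, not part of this client's existing API:

```python
class Client:
    """Sketch: the public get() API is unchanged; when autobatching is
    enabled, calls route through the batcher, otherwise through the
    existing per-key path."""

    MAX_BATCH_SIZE = 32  # illustrative cap on a single batch

    def __init__(self, autobatching=False):
        # Autobatching is decided once, at client instantiation.
        self._batcher = AutoBatcher(self._fetch_many) if autobatching else None

    async def get(self, key):
        if self._batcher is not None:
            return await self._batcher.get(key)   # autobatching code path
        return await self._get_single(key)        # traditional code path

    async def _fetch_many(self, keys):
        # Split an oversized batch into capped sub-batches before sending,
        # then merge the per-chunk results. Routing each chunk to the right
        # node would happen inside _get_many.
        results = {}
        for i in range(0, len(keys), self.MAX_BATCH_SIZE):
            chunk = keys[i:i + self.MAX_BATCH_SIZE]
            results.update(await self._get_many(chunk))
        return results

    async def _get_single(self, key):
        ...  # stands in for the client's existing single-key implementation

    async def _get_many(self, keys):
        ...  # stands in for the client's existing multi-key implementation
```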

WDYT?

[1] https://github.com/mcollina/ioredis-auto-pipeline
[2] https://github.com/pfreixes/emcache
