Peer to chaincode optimization proposal RFC #58
base: main
Conversation
- Added a new RFC proposing a batching mechanism to reduce communication overhead.
- Defined a new `ChangeStateBatch` message type to encapsulate multiple state changes.
- Outlined the process flow, including negotiation and batch handling logic.
- Included sample chaincode illustrating the issue with the current communication protocol.
- Provided benchmarks comparing performance before and after applying batching.
- Listed required repository changes in the correct order:
  - github.com/hyperledger/fabric-protos
  - github.com/hyperledger/fabric-chaincode-go
  - github.com/hyperledger/fabric
- Ensured backward compatibility by using a negotiation mechanism in `Ready` messages (see the sketch below).
- Added example handler code for processing batched state changes.

Signed-off-by: Artem Barger <[email protected]>
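The description above mentions a negotiation mechanism in `Ready` messages for backward compatibility. A minimal, non-authoritative sketch of that idea follows; the `ReadyCapabilities` type and its fields are assumptions made for illustration, not the protobuf payload the RFC actually defines:

```go
// A sketch of the READY-time negotiation, assuming the peer advertises
// batching support when it sends READY. All names here are illustrative
// stand-ins, not fabric-protos definitions.
package shim

// ReadyCapabilities is a hypothetical payload attached to the READY message;
// the field names mirror the proposed core.yaml options.
type ReadyCapabilities struct {
	UsePutStateBatch     bool
	MaxSizePutStateBatch uint32
}

type handlerState struct {
	batchingEnabled bool
	maxBatchSize    uint32
}

// handleReady enables batching only when the peer explicitly advertised it.
// Older peers omit the payload, so the shim keeps the existing
// one-message-per-write behaviour and nothing breaks.
func (h *handlerState) handleReady(caps *ReadyCapabilities) {
	if caps == nil || !caps.UsePutStateBatch {
		h.batchingEnabled = false
		return
	}
	h.batchingEnabled = true
	h.maxBatchSize = caps.MaxSizePutStateBatch
}
```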
I think the reason a series of writes is slow is the round trip incurred for each message. Have you investigated the performance of a shim that simply defers all writes until the end of execution and sends them then?
Yes, this will achieve better results compared to the current implementation, while doing it with a single message is still better, given you avoid sending multiple messages.
After all, this RFC is not very hard to implement, and it could bring significant gains without breaking current functionality, transaction semantics, or even the programming model.
There was an old Jira item to implement SetStateMultipleKeys() and GetStateMultipleKeys() to set and get keys in batch, for precisely the reason you mention. It was even implemented at the ledger level, just not at the chaincode programming model level. I don't think there was any reason it didn't get implemented; it simply wasn't prioritized for development. But let's also check with @manish-sethi to see if there were any other considerations that delayed the implementation...
The main reason for this is to maintain the sanity of a transaction as it is intended at the application layer, since a transaction is defined as a sequence of reads and writes. For instance, if a transaction sets different values for the same key, the result could differ across peers if parallelism were allowed. Even if the result on different peers ends up the same, unless we prohibit setting the value of the same key more than once in a transaction, in the worst case parallelism at the shim could create a different result than the chaincode intended.
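To make the ordering concern concrete, here is a minimal chaincode fragment against the standard `ChaincodeStubInterface` (the function itself is hypothetical) where reordering or parallelising the buffered writes would change the committed result:

```go
package main

import "github.com/hyperledger/fabric-chaincode-go/shim"

// If the shim were free to reorder or parallelise buffered writes, the
// committed value of "balance" would no longer be guaranteed to be the last
// one the chaincode set. Batching therefore has to preserve the original
// write order (or keep the last write per key deterministically on every peer).
func updateBalance(stub shim.ChaincodeStubInterface) error {
	if err := stub.PutState("balance", []byte("100")); err != nil {
		return err
	}
	// ...intermediate business logic...
	return stub.PutState("balance", []byte("150")) // this value must win
}
```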
Yes, that was the intention. It was always a matter of priority. If we expose SetStateMultipleKeys() and GetStateMultipleKeys() to the chaincode, the simple cases should benefit easily.
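For reference, a rough sketch of how such batch methods might look at the programming-model level; the interface and signatures below are assumptions modelled on the existing single-key `GetState`/`PutState` methods, not an existing or agreed `fabric-chaincode-go` API:

```go
package shim

// KV pairs a state key with its value; a slice keeps the write order explicit.
type KV struct {
	Key   string
	Value []byte
}

// BatchStateStub sketches how the old GetStateMultipleKeys()/SetStateMultipleKeys()
// idea might surface in the chaincode programming model. The ledger already
// supports multi-key reads internally; these signatures are illustrative only.
type BatchStateStub interface {
	// GetStateMultipleKeys fetches all keys in one round trip; a missing key
	// yields a nil entry at the corresponding index.
	GetStateMultipleKeys(keys []string) ([][]byte, error)

	// SetStateMultipleKeys writes all pairs in one round trip, in order.
	SetStateMultipleKeys(kvs []KV) error
}
```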
```proto
message ChangeStateBatch {
    repeated StateKV kvs = 1;
}

message StateKV {
    enum Type {
        UNDEFINED = 0;
        PUT_STATE = 9;
        DEL_STATE = 10;
        PUT_STATE_METADATA = 21;
        PURGE_PRIVATE_DATA = 23;
    }

    string key = 1;
    bytes value = 2;
    string collection = 3;
    StateMetadata metadata = 4;
    Type type = 5;
}
```
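Assuming these messages are generated into Go as usual, the shim-side change amounts to buffering writes and flushing them as one message at the end of execution. The sketch below is illustrative only; the mirrored types and the `send` callback stand in for the generated protobuf code and the real handler transport, which are not defined by this snippet:

```go
package shim

// Go mirrors of the proposed ChangeStateBatch/StateKV messages. In a real
// implementation these would be generated from fabric-protos; they are
// re-declared here only so the sketch is self-contained.
type StateKVType int32

const (
	StateKV_UNDEFINED          StateKVType = 0
	StateKV_PUT_STATE          StateKVType = 9
	StateKV_DEL_STATE          StateKVType = 10
	StateKV_PUT_STATE_METADATA StateKVType = 21
	StateKV_PURGE_PRIVATE_DATA StateKVType = 23
)

type StateKV struct {
	Key        string
	Value      []byte
	Collection string
	Type       StateKVType
}

type ChangeStateBatch struct {
	Kvs []*StateKV
}

// writeBuffer accumulates state changes instead of sending one message
// per PutState/DelState call.
type writeBuffer struct {
	kvs []*StateKV
}

func (b *writeBuffer) putState(collection, key string, value []byte) {
	b.kvs = append(b.kvs, &StateKV{Key: key, Value: value, Collection: collection, Type: StateKV_PUT_STATE})
}

func (b *writeBuffer) delState(collection, key string) {
	b.kvs = append(b.kvs, &StateKV{Key: key, Collection: collection, Type: StateKV_DEL_STATE})
}

// flush hands the entire buffer to the transport as a single message,
// preserving the order in which the chaincode issued the writes.
func (b *writeBuffer) flush(send func(*ChangeStateBatch) error) error {
	if len(b.kvs) == 0 {
		return nil
	}
	batch := &ChangeStateBatch{Kvs: b.kvs}
	b.kvs = nil
	return send(batch)
}
```

The intent, per the thread above, is that the peer applies the entries in the order received, so the resulting write set matches what per-message writes would have produced.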
Please note that this RFC suggests sending anything that changes state in batches.
I think it's cool.
Well, we can revive this line of work; this RFC is precisely about that. Once we reach agreement, we can just do it. I do think there is general consensus to proceed, given that in the past there were even plans to do it, not to mention that the ledger already supports it.
@denyeart, what should the next steps be? Would you like this RFC to be presented at the community call? Would you like us to vote? Something else? Note that this is a straightforward and non-intrusive change that breaks no APIs while still delivering a real impact.
I think the original vision for GetStateMultipleKeys() and SetStateMultipleKeys() was for the chaincode developer to be able to control getting and setting multiple keys at once. Whereas I believe your proposal would be transparent to the chaincode developer, that is, the updates would still be one at a time from the chaincode developer perspective, but at runtime they would be cached in the shim until the end of chaincode execution, at which point they would all be 'flushed' to the peer all at once right before the peer builds the final read/write set. Is my understanding correct?

I do agree the transparent batching makes more sense for ledger updates. For gets I still think the original intent makes sense, allow the chaincode developer to get N keys at once, so that they can use the values in the subsequent chaincode logic. However I think batch get is lower priority and not the intent of this RFC, so I'm ok with just doing the transparent batching for the updates as part of this RFC.

Question - do we need to consider splitting the batch into smaller pieces? For example if chaincode attempts to update 100000 keys should we send 10 batches of 10000 each? This may help to stay under limits such as grpc max message size (default 100MB), but on the other hand such large transactions would likely cause problems regardless due to the ultimate large write set. I'd prefer to keep things as simple as possible, just checking your thought.

In terms of next steps, RFCs require approval from 3 maintainers. For relatively straightforward RFCs like this one, we can just utilize github. If you think there are more considerations to discuss we could add an agenda item to the November 20th contributor meeting.
Yeah, that's right.
This behavior will be configurable in core.yaml via the new parameters UsePutStateBatch and MaxSizePutStateBatch. The thing is, I know this will be an improvement in my case, but I don't know all the Fabric use cases, so the UsePutStateBatch parameter makes it possible to disable the new behavior. Alternatively, MaxSizePutStateBatch can be set to a very large value (e.g. 1000000000) so that the writes are never split into multiple batches. A benchmark test will also be added, which you can use as an example to research which parameters work best in your particular case.
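A minimal sketch of how `MaxSizePutStateBatch` could drive the splitting, reusing the `StateKV` stand-in from the earlier sketch and assuming the parameter counts entries per batch (if it is a byte limit instead, only the size check changes, not the structure):

```go
package shim

// splitBatch cuts the ordered write buffer into consecutive chunks of at most
// maxSize entries each, so a single ChangeStateBatch never grows past the
// configured limit. Setting maxSize to a huge value (e.g. 1000000000)
// effectively disables splitting, as described above.
func splitBatch(kvs []*StateKV, maxSize int) [][]*StateKV {
	if len(kvs) == 0 {
		return nil
	}
	if maxSize <= 0 || len(kvs) <= maxSize {
		return [][]*StateKV{kvs}
	}
	var chunks [][]*StateKV
	for len(kvs) > maxSize {
		chunks = append(chunks, kvs[:maxSize])
		kvs = kvs[maxSize:]
	}
	if len(kvs) > 0 {
		chunks = append(chunks, kvs)
	}
	return chunks
}
```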
I agree with the overall proposal. However, if the goal is to keep the behavior consistent with the current implementation, I recommend an exercise of reviewing the transaction simulator code carefully, as some write operations involve reading the current state as well (for example, …). In summary, we should assess whether it is feasible to maintain the current behavior in all cases, or discuss alternatives. A simple alternative could be to limit this optimization to straightforward cases, or to allow modifying the behavior so that any error is propagated at the end rather than at the specific step where it occurs. Also, in some cases we would have to limit ourselves to returning a more general error, as we would have lost the actual sequence of reads and writes.
That's an excellent point. |
Yes, and hence it changes the error handling behavior from the current implementation. For example, if you make changes only on the shim side of the code, a user will get this error even though in their chaincode they may have performed the write before the query. So I suggested going through the lower-level code and capturing all behavior changes, along with any changes to error handling at the lower level that may be needed. IMO, this should be part of the RFC.
Doesn't it work both ways? It seems to me that operations that cannot be performed in one transaction cause an error without necessarily following the order.
Yes, it does, and in the current implementation the appropriate error is returned. If you make changes on the shim side only, the wrong error will be returned. In that case, it would be more appropriate to return a more general error than a wrong one.