When you make many random queries to a single Merkle tree, you can compute most of the nodes near the top on your own. (e.g. with two queries, one on the left half and one on the right half, you don't need to provide any internal nodes on the 'first' layer of the MT proof)
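To make the pruning concrete, here is a small self-contained sketch (illustrative code, not the library's actual implementation) that counts, per level, how many sibling digests the prover must actually supply for a set of queried leaf positions. A sibling is pruned whenever it is itself derivable from the queried positions at that level:

```cpp
#include <cstddef>
#include <set>
#include <vector>

// For each level (leaves first), count the sibling digests the proof must
// contain: a sibling is needed only if it cannot be recomputed from the
// known positions at that level.
std::vector<size_t> aux_digests_per_level(std::set<size_t> positions, size_t depth)
{
    std::vector<size_t> counts;
    for (size_t d = 0; d < depth; ++d)
    {
        size_t needed = 0;
        std::set<size_t> parents;
        for (size_t p : positions)
        {
            if (positions.count(p ^ 1) == 0)
                ++needed; // sibling unknown, must be shipped in the proof
            parents.insert(p >> 1);
        }
        counts.push_back(needed);
        positions = parents;
    }
    return counts;
}
```

For queries {0, 2} in a 4-leaf tree this yields {2, 0}: two leaf-level siblings, and zero internal nodes on the layer just below the root, matching the two-query example above.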
At the moment we already prune such nodes to save on proof size. However, we can further save on computation by executing only a single higher-arity hash at the top of the tree. So we want to support a 'cap hash' that computes a single hash of 2^k inputs for the top layers of the tree.
This should be done by adding a new type called "cap hash" to hashing.hpp, which has a single template parameter for the input & output type (hash_digest_type); it takes a vector of hash_digest_type and outputs a single hash_digest_type. Then we should update merkle_tree.hpp to use this, and update the proof format accordingly. The cap hash size should be a parameter input to the Merkle tree, and should also be updated in BCS.
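A minimal sketch of what such a type could look like; the names here are illustrative, not the actual hashing.hpp interface:

```cpp
#include <cstdint>
#include <functional>
#include <vector>

// Hypothetical "cap hash" type: a single template parameter for the shared
// input/output digest type. It maps a vector of 2^k digests to one digest.
template<typename hash_digest_type>
using cap_hash_function =
    std::function<hash_digest_type(const std::vector<hash_digest_type>&)>;

// Toy instantiation over uint64_t digests, for illustration only
// (a real instantiation would wrap a cryptographic hash over the
// concatenated digests):
uint64_t toy_cap_hash(const std::vector<uint64_t> &inputs)
{
    uint64_t acc = 0xcbf29ce484222325ULL; // FNV-style mixing, not secure
    for (uint64_t x : inputs)
        acc = (acc ^ x) * 0x100000001b3ULL;
    return acc;
}
```

The single template parameter keeps the type drop-in compatible with the existing two-to-one hash, since the cap layer's inputs and the output are all digests of the same type.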
For now, I suggest we first add this to the Merkle tree standalone and test it, with the BCS transform defaulting to a cap hash size of 1. Then, once that works, we add this cap hash parameter to the BCS parameters.
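As a sanity check on that default, here is a hedged sketch (again with illustrative names) of root computation with a cap: the two-to-one layers fold upward until the layer width reaches the cap size, and one cap hash is applied to that layer. A cap size equal to 1 leaves just the ordinary root under the cap, so the tree folds exactly as a standard Merkle tree:

```cpp
#include <cstdint>
#include <functional>
#include <vector>

// Illustrative root computation with a cap layer. cap_size must be a power
// of two no larger than the (power-of-two) leaf count; cap_size = 1 folds
// all the way down and caps over the single root node.
template<typename hash_digest_type>
hash_digest_type root_with_cap(
    std::vector<hash_digest_type> layer, // leaf digests
    const std::function<hash_digest_type(hash_digest_type, hash_digest_type)> &two_to_one,
    const std::function<hash_digest_type(const std::vector<hash_digest_type>&)> &cap,
    size_t cap_size)
{
    while (layer.size() > cap_size)
    {
        std::vector<hash_digest_type> next;
        for (size_t i = 0; i < layer.size(); i += 2)
            next.push_back(two_to_one(layer[i], layer[i + 1]));
        layer = next;
    }
    return cap(layer); // the single higher-arity hash at the top of the tree
}

// Toy hashes for illustration (not cryptographic):
uint64_t toy_two_to_one(uint64_t a, uint64_t b) { return a * 1000003ULL + b; }
uint64_t toy_cap(const std::vector<uint64_t> &v)
{
    uint64_t acc = 7;
    for (uint64_t x : v)
        acc = acc * 1000003ULL + x;
    return acc;
}
```

With cap_size equal to the leaf count, the cap hash consumes the leaves directly and no two-to-one hashes run at all, which is the fully-capped extreme; with cap_size = 1 every internal layer is hashed as today.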
This notably helps save on circuit costs when we go to recurse a SNARK (e.g. Fractal)