
Implement Verifier's Deposit Finalize handler #423

Open · wants to merge 5 commits into base: dev
Conversation

@mmtftr mmtftr commented Jan 17, 2025

Description

- Created a new `deposit_signatures` table with a primary key of `(outpt, op_idx)`
- Implemented the deposit finalize handler, which:
  - verifies all incoming signatures,
  - saves them to the DB,
  - signs the `Move_TX` and returns a `partial_sig`.
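The primary-key semantics of the new table can be sketched with an in-memory map. This is only an illustration, not the PR's actual schema: the real table lives in the database, and `Outpoint`, `Signature`, and the method names here are stand-ins.

```rust
use std::collections::HashMap;

// Hypothetical stand-ins for the real bitcoin/secp types.
#[derive(Clone, Debug, PartialEq, Eq, Hash)]
struct Outpoint {
    txid: String,
    vout: u32,
}
type Signature = [u8; 64];

/// In-memory analogue of the `deposit_signatures` table: the
/// composite key mirrors the (outpt, op_idx) primary key, so a
/// second save for the same pair replaces the row instead of
/// duplicating it.
#[derive(Default)]
struct DepositSignatures {
    rows: HashMap<(Outpoint, u32), Vec<Signature>>,
}

impl DepositSignatures {
    fn save(&mut self, outpoint: Outpoint, operator_idx: u32, sigs: Vec<Signature>) {
        self.rows.insert((outpoint, operator_idx), sigs);
    }

    fn get(&self, outpoint: &Outpoint, operator_idx: u32) -> Option<&[Signature]> {
        self.rows
            .get(&(outpoint.clone(), operator_idx))
            .map(Vec::as_slice)
    }
}
```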

Linked Issues

Testing

There are currently no tests, since I don't have a complete understanding of the business logic here.

Warning

Need to add tests before merge

@mmtftr mmtftr requested review from ekrembal and ozankaymak January 17, 2025 08:18
@mmtftr mmtftr self-assigned this Jan 17, 2025
@mmtftr mmtftr added this to the Deposit Signature Collection milestone Jan 17, 2025
Some code cleaning and documentation improvements:
- Macro for impl TryFrom<Vec> for byte array-backed structs
- Rustdoc on new_deposit and inner comments
- Naming improvements

Features:
- Split verified sigs into per-operator arrays and save with operator_idx
- Collect +1 nonce for movetx but only sign at the final stage

Fixes:
- Some panics were replaced with errors

mmtftr commented Jan 20, 2025

There are some issues with the backend causing tests to fail. Other than that, I think it's ready for review.

@mmtftr mmtftr marked this pull request as ready for review January 20, 2025 12:58
ozankaymak (Contributor) commented:
Please resolve the conflicts before review.

@mmtftr mmtftr left a comment
Questions are in the open threads below.

async fn nonce_aggregator(
    mut nonce_streams: Vec<
        impl Stream<Item = Result<MusigPubNonce, BridgeError>> + Unpin + Send + 'static,
    >,
    mut sighash_stream: impl Stream<Item = Result<TapSighash, BridgeError>> + Unpin + Send + 'static,
    agg_nonce_sender: Sender<AggNonceQueueItem>,
) -> Result<(), BridgeError> {
    while let Ok(sighash) = sighash_stream.next().await.transpose() {
mmtftr (Author) commented:

I believe the previous behavior was incorrect. Ok(None) means the stream has ended here, but it would have caused a panic due to the ok_or below. Also, an error in the stream would have just broken out of the loop, which is incorrect behavior.

I fixed this along with adding better logging.
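The `next()`/`transpose()` behavior being discussed can be shown with a plain iterator as a stand-in for the async stream, since `Option<Result<T, E>>` has the same shape in both cases. This is a sketch of the pattern, not the PR's actual loop:

```rust
// Stand-in for `sighash_stream`: `next()` yields Some(Ok(v)) for an
// item, Some(Err(e)) for a stream error, and None when exhausted.
fn drain(items: Vec<Result<u64, String>>) -> Result<Vec<u64>, String> {
    let mut iter = items.into_iter();
    let mut out = Vec::new();
    loop {
        // transpose(): Option<Result<T, E>> -> Result<Option<T>, E>,
        // so `?` propagates a stream error instead of silently
        // breaking, and Ok(None) cleanly signals end-of-stream.
        match iter.next().transpose()? {
            Some(v) => out.push(v),
            None => break, // stream ended without error
        }
    }
    Ok(out)
}
```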

    let nonce_dist_handle = tokio::spawn(nonce_distributor(
        agg_nonce_receiver,
        partial_sig_streams,
        partial_sig_sender,
    ));

    // Start the signature aggregation pipe.
    let sig_agg_handle = thread::spawn(|| {
mmtftr (Author) commented:

thread::spawn and tokio::spawn are mixed here; was there a reason why? I think we can just use tokio::spawn everywhere, since it's a multi-threaded runtime anyway.

A maintainer (Member) commented:

It's because aggregation tasks are CPU-bound while distributor tasks are IO-bound. IO-bound tasks are given a Tokio "thread" to reduce memory usage and improve startup time. I tried using Tokio's spawn and spawn_blocking, but they failed to run the 2 aggregation tasks in parallel, as far as I remember.

Also, we need all these tasks to run at the "same time" to achieve a pipeline. That is the reason we spawn other Tokio runtimes inside this async function.
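The split described above can be sketched with std threads alone (the tokio side of the pipeline is omitted; `aggregate` and the function names are stand-ins for the real aggregation stages):

```rust
use std::thread;

// CPU-bound stand-in for nonce/signature aggregation.
fn aggregate(chunk: &[u64]) -> u64 {
    chunk.iter().copied().sum()
}

// Dedicated OS threads guarantee the two aggregation stages run in
// parallel regardless of how the async runtime schedules its tasks,
// which is the rationale given for thread::spawn in this discussion.
fn run_aggregators(nonces: Vec<u64>, sigs: Vec<u64>) -> (u64, u64) {
    let nonce_agg = thread::spawn(move || aggregate(&nonces));
    let sig_agg = thread::spawn(move || aggregate(&sigs));
    (nonce_agg.join().unwrap(), sig_agg.join().unwrap())
}
```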

mmtftr (Author) commented:

I'm not sure that's related; there really shouldn't be such a big difference unless there is a bug in Tokio itself. We can write a test for this to make sure it works properly.

On a related note, I saw some problems with a mutex lock being held for a long time (i.e., for most of the duration of the function) in some functions. I believe I fixed one of these in this PR, but we may benefit from testing whether locks are being held so long that they reduce parallelism.

In this function's case, holding the sessions lock would block any other aggregator from running for any other session.
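The lock-scope concern can be illustrated with a std Mutex. The `Sessions` type and its contents here are hypothetical; the point is only that cloning what is needed and dropping the guard early keeps other sessions unblocked:

```rust
use std::collections::HashMap;
use std::sync::Mutex;

type SessionId = u32;

struct Sessions {
    inner: Mutex<HashMap<SessionId, Vec<u8>>>,
}

impl Sessions {
    // Anti-pattern: holding the sessions lock for the whole function
    // would block every other aggregator. Instead, copy the needed
    // data out and release the guard before the long-running work.
    fn process(&self, id: SessionId) -> usize {
        let data = {
            let guard = self.inner.lock().unwrap();
            guard.get(&id).cloned().unwrap_or_default()
        }; // guard dropped here; other sessions can proceed
        // ...the long-running aggregation would happen here, lock-free...
        data.len()
    }
}
```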

    ));

    nonce_agg_handle.join().unwrap().unwrap();
    // Wait for all pipeline tasks to complete
    try_join_all(vec![nonce_dist_handle]).await.unwrap();
    sig_agg_handle.join().unwrap().unwrap();
    try_join_all(vec![sig_dist_handle]).await.unwrap();
mmtftr (Author) commented:

I don't think these try_join_all calls are necessary. We can simply await; everything is spawned anyway.

A maintainer (Member) commented:

Yes, it looks like the try_join_all calls are unnecessary. They can be replaced with something like nonce_dist_handle.await.unwrap().unwrap().
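With std threads, the suggested shape looks like this. It is a sketch of why the double unwrap appears: one unwrap for the join itself (panic propagation), one for the task's own Result. The tokio version would be `handle.await.unwrap().unwrap()`:

```rust
use std::thread;

// Stand-in for the crate's error type.
#[derive(Debug)]
struct BridgeError(String);

// A fallible task, like nonce_distributor: the handle wraps a Result.
fn spawn_distributor() -> thread::JoinHandle<Result<u32, BridgeError>> {
    thread::spawn(|| Ok(42))
}

fn collect() -> u32 {
    // std analogue of nonce_dist_handle.await.unwrap().unwrap():
    // outer unwrap = join/panic check, inner unwrap = the task's
    // Result<_, BridgeError>.
    spawn_distributor().join().unwrap().unwrap()
}
```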

u32,
),
Status,
> {
mmtftr (Author) commented:

Maybe even TryFrom<(DepositSignSession, usize)> for (...result tuple...).

But with a bare usize it wouldn't be clear that it's actually a verifier_idx.

I was thinking of doing some impls on the Prost-generated types for ease of use and code quality; I think we can attempt that after the gRPC API stabilizes further.
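A hedged sketch of the suggested impl. The `DepositSignSession` fields and the `SessionParams` result type are invented for illustration (the orphan rule also forbids implementing TryFrom for a bare foreign tuple, so a local result struct is used), and a newtype avoids the bare-usize ambiguity mentioned above:

```rust
// Stand-in for the prost-generated message; the real fields differ.
#[derive(Clone)]
struct DepositSignSession {
    nonce_gen_ids: Vec<u32>,
}

// A newtype instead of a bare usize, so call sites can't confuse
// the verifier index with any other integer.
#[derive(Clone, Copy)]
struct VerifierIdx(pub usize);

struct SessionParams {
    nonce_gen_ids: Vec<u32>,
    verifier_nonce_id: u32,
}

impl TryFrom<(DepositSignSession, VerifierIdx)> for SessionParams {
    type Error = String;

    fn try_from(
        (session, VerifierIdx(idx)): (DepositSignSession, VerifierIdx),
    ) -> Result<Self, Self::Error> {
        // Fail with an error instead of panicking on a bad index.
        let verifier_nonce_id = session
            .nonce_gen_ids
            .get(idx)
            .copied()
            .ok_or_else(|| format!("no nonce id for verifier {idx}"))?;
        Ok(SessionParams {
            nonce_gen_ids: session.nonce_gen_ids,
            verifier_nonce_id,
        })
    }
}
```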

A maintainer (Member) commented:

Sure, maybe we can create an issue for that to not forget?


mmtftr commented Jan 20, 2025

> Please resolve the conflicts before review.

Conflicts have been resolved; waiting for review.

@@ -142,12 +144,11 @@ message NonceGenRequest {
}

message NonceGenFirstResponse {
  reserved 3; // 3 = sig, previously used for signing new keypairs
  // Nonce ID
  uint32 id = 1;
  // New public key for the deposit
  bytes public_key = 2;
A maintainer (Member) commented:

We don't need the public key here.

@@ -142,12 +144,11 @@ message NonceGenRequest {
}

message NonceGenFirstResponse {
  reserved 3; // 3 = sig, previously used for signing new keypairs
A maintainer (Member) commented:

We don't need to be backward compatible now


@@ -393,6 +398,7 @@ impl ClementineVerifier for Verifier {
    &self,
    req: Request<Streaming<VerifierDepositFinalizeParams>>,
) -> Result<Response<PartialSig>, Status> {
    use clementine::verifier_deposit_finalize_params::Params;
A maintainer (Member) commented:

Why the `use` here?

mmtftr (Author) commented:

It's easier to read and parse for the eyes below. These params are only relevant for this function, so we can put them in scope to make the code below easier to read.
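A minimal std-only illustration of the function-scoped `use`. The enum and its variants are invented stand-ins for the prost-generated `clementine::verifier_deposit_finalize_params::Params`:

```rust
// Stand-in for the prost-generated oneof enum; the real one is
// nested under clementine::verifier_deposit_finalize_params.
enum Params {
    SchnorrSig(Vec<u8>),
    MoveTxAggNonce(Vec<u8>),
}

fn describe(param: Params) -> &'static str {
    // Function-scoped `use`: the short variant names are only in
    // scope inside the one handler that actually matches on them,
    // keeping the rest of the module uncluttered.
    use Params::{MoveTxAggNonce, SchnorrSig};
    match param {
        SchnorrSig(_) => "schnorr sig",
        MoveTxAggNonce(_) => "move tx agg nonce",
    }
}
```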
