Marking connext swap as pending when connext transfer failed #1847
To state the obvious, this is solved by #1850
So what is left to do: prevent the worst case of 8 out of 9 swaps failing. Could you explain again why the connext client shouldn't deal with concurrent transfers by itself (queue them, execute sequentially)? Also, do you think failing the transfer is better than queuing it? @sangaman @erkarl
If we queue them, it'll cause a swap timeout on the xud side if there's additional collateralization required. An easier solution would probably be to just fail them if there's already a pending transfer in progress.
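A minimal sketch of that fail-fast idea, assuming a hypothetical in-memory guard around the transfer call (the class and method names are illustrative, not xud's actual API):

```typescript
// Hypothetical guard (illustrative only): reject a new connext transfer while
// another one is in flight, instead of queuing it and risking a swap timeout.
class TransferGuard {
  private transferInFlight = false;

  public async send(transfer: () => Promise<void>): Promise<void> {
    if (this.transferInFlight) {
      // Fail fast so the swap can be marked as failed right away.
      throw new Error('another connext transfer is already in progress');
    }
    this.transferInFlight = true;
    try {
      await transfer();
    } finally {
      this.transferInFlight = false;
    }
  }
}
```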
So the UX in @raladev's example from above (#1847 (comment)) will be: 1 swap succeeding, 8 immediately failing. That's not exactly great. Any ideas how we could make this issue's scenario (send 9 concurrent connext transfers within one global timeout) work? @rhlsthrm
The only way this would work is if we don't need to do in-flight collateralizations for the transfers. In that case all 9 swaps would succeed. #1845 addresses that.
I am confused. Assuming we have enough collateral in the channel for all 9 transfers, I thought the connext client can't handle multiple concurrent transfers. If it can, and all 9 succeed with enough collateral in the channel, we don't have a problem. What is this issue about then?
Sorry for the confusion here guys. The client does already queue concurrent transfers. The issue, as mentioned above, is with the collateralization. The concurrent transfers will each request collateral, but only one collateralization will succeed due to our protection against collateral request "spamming". At the time of the transfer, some of them might fail because the "earlier" transfers used up the collateral (sorry for the slightly complicated explanation). If you request enough collateral upfront for the 9 transfers, it would work as you mentioned @kilrau.
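A toy model of that failure mode (purely illustrative, not the actual Connext client code): nine queued transfers each request collateral, the duplicate-request protection lets only the first request top up the channel, and later transfers run out of collateral and fail.

```typescript
// Toy model of the described behavior: collateral requests are deduplicated,
// so only the first one tops up the channel, and later transfers draw from
// the same collateral until it runs out.
let channelCollateral = 1;          // enough for roughly one transfer
let collateralRequested = false;    // duplicate requests are dropped

const requestCollateral = (amount: number) => {
  if (collateralRequested) return;  // "spam" protection
  collateralRequested = true;
  channelCollateral += amount;      // the single successful top-up
};

const makeTransfer = (amount: number): boolean => {
  requestCollateral(amount);
  if (channelCollateral < amount) return false; // later transfers fail here
  channelCollateral -= amount;
  return true;
};

const results = Array.from({ length: 9 }, () => makeTransfer(1));
console.log(results); // the first transfer(s) succeed, the rest fail
```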
Not at all, the issue is on our side here. Thanks for explaining @rhlsthrm! @erkarl @sangaman @raladev: now that connext/rest-api-client#89 is done, #1885 is up, and we fail trades if there is not enough collateral/remote balance, can it even happen that a connext transfer fails because of multiple live collateralizations?
Yep. That would be a nice UX. We'd also have to track the "locked" inbound balance that's already being used for existing swaps.
There can only be 1 concurrent collateralization.
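Hypothetical bookkeeping for that "locked" inbound balance (a sketch under assumed names, not xud's actual implementation): reserve inbound capacity per swap and only accept a new swap if the remaining free capacity covers it.

```typescript
// Tracks inbound capacity already reserved by in-flight swaps, keyed by the
// swap's payment hash (names are illustrative).
class InboundBalanceTracker {
  private lockedByRHash = new Map<string, number>();

  constructor(private totalInbound: number) {}

  public get locked(): number {
    let sum = 0;
    this.lockedByRHash.forEach((amount) => (sum += amount));
    return sum;
  }

  public get available(): number {
    return this.totalInbound - this.locked;
  }

  /** Reserve inbound capacity for a swap; returns false if not enough is free. */
  public lock(rHash: string, amount: number): boolean {
    if (amount > this.available) return false;
    this.lockedByRHash.set(rHash, amount);
    return true;
  }

  /** Release the reservation once the swap completes or fails. */
  public release(rHash: string): void {
    this.lockedByRHash.delete(rHash);
  }
}
```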
Iterating back to "what needs to be done" in this PR: I am not really sure what to change, since #1885 prevents most of the live collateralization scenarios from happening and Rahul confirmed the connext client already queues transfers. So what concretely would you want to do? @erkarl @raladev @sangaman
I see only 2 options there after/with #1885:

1. Fail the swap upfront when there isn't enough collateral/remote balance for it (what #1885 does).
2. Forbid multi-matching, so a single trade can't trigger several concurrent connext transfers.
I'd definitely go for 1., that's indeed what #1885 should do. Forbidding multi-matching is not really an option :/ |
Steps:
Actual result:
Failed swaps are marked as pending because the /hashlock endpoint returns a 200 response code, but the ETH for these swaps is not locked and remains available for new swaps and withdrawal.
Expected result:
The swap is marked as failed and the counterparty's pending HTLCs are released.
Note:
The swap is marked as pending because the /hashlock response has a 200 status code without a status field (see the sketch below).
extend_connext_maker.log
log_maker_extend_xud.log
log_taker_extend_xud.log
I also checked the /hashlock responses manually: 200 for the maker and 404 for the taker.
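A hedged sketch of how such a response could be interpreted so the swap is failed rather than left pending (the response shape and status values here are assumptions, not the documented rest-api-client contract):

```typescript
// Assumed response shape for the /hashlock status check (illustrative only).
type HashLockStatusResponse = { status?: 'PENDING' | 'COMPLETED' | 'FAILED' };

type SwapOutcome = 'pending' | 'completed' | 'failed';

const interpretHashLockStatus = (
  httpStatus: number,
  body: HashLockStatusResponse,
): SwapOutcome => {
  if (httpStatus !== 200) return 'failed';       // e.g. the taker's 404
  switch (body.status) {
    case 'PENDING':
      return 'pending';
    case 'COMPLETED':
      return 'completed';
    default:
      // A 200 without a status field previously fell through to "pending";
      // treating it as failed lets the counterparty's pending HTLCs be released.
      return 'failed';
  }
};
```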