
Marking connext swap as pending when connext transfer failed #1847

Open
raladev opened this issue Aug 28, 2020 · 14 comments
Assignees
Labels
bug (Something isn't working) · connext · P2 (mid priority) · question/tbd (Further information or discussion needed)

Comments

@raladev
Contributor

raladev commented Aug 28, 2020

Steps:

  1. Place 9 sell orders ETH/BTC as a maker
  2. Fill them using 1 order as a taker
  3. Wait

Actual result:
Failed swaps are marked as pending because the /hashlock endpoint returns a 200 response code, but the ETH for these swaps is not locked and remains available for new swaps and withdrawals.

Expected result:
The swap is marked as failed and the counterparty's pending HTLCs are released.

Note:
The swap is marked as pending because the /hashlock response has a 200 status code but no status field.
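
To illustrate the behaviour, here is a minimal sketch of that status check; the names (HashlockStatusResponse, SwapState, interpretHashlockResponse) are made up for illustration and are not the actual xud/connext-client code:

```ts
// Hypothetical sketch only: a 200 response whose body carries no status field
// currently falls through to "pending", even though the transfer already failed.

interface HashlockStatusResponse {
  status?: 'PENDING' | 'COMPLETED' | 'FAILED';
}

enum SwapState {
  Pending,
  Completed,
  Failed,
}

function interpretHashlockResponse(httpCode: number, body: HashlockStatusResponse): SwapState {
  if (httpCode === 404) {
    return SwapState.Failed; // what the taker side sees
  }
  if (httpCode === 200) {
    switch (body.status) {
      case 'COMPLETED':
        return SwapState.Completed;
      case 'FAILED':
        return SwapState.Failed;
      default:
        // a 200 with no status field ends up here, so the swap stays "pending"
        // even though the ETH is not actually locked anymore
        return SwapState.Pending;
    }
  }
  return SwapState.Failed;
}
```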

extend_connext_maker.log
log_maker_extend_xud.log
log_taker_extend_xud.log

I also checked manually: the maker gets a 200 and the taker gets a 404.

Screenshot from 2020-08-27 18-51-43

@raladev raladev added bug Something isn't working P2 mid priority labels Aug 28, 2020
@raladev raladev assigned ghost, sangaman and kilrau Aug 28, 2020
@kilrau kilrau added P1 top priority and removed P2 mid priority labels Aug 28, 2020
@kilrau kilrau added the connext label Aug 28, 2020
@raladev
Contributor Author

raladev commented Aug 31, 2020

#1850

@ghost

ghost commented Sep 8, 2020

Now that #1850 is implemented, we can also add an additional check on the xud side to not create multiple concurrent hashlock transfers to connext, to prevent this scenario from happening. That way the transfer would fail immediately, improving the UX for swaps.
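
A rough sketch of what such a guard could look like (pendingTransfers and the sendTransfer callback are placeholders, not the real xud internals):

```ts
// Illustrative only: track in-flight hashlock transfers per peer/currency and
// fail fast instead of creating a second concurrent transfer.

const pendingTransfers = new Set<string>();

async function sendHashlockTransferOnce(
  key: string, // e.g. `${peerPubKey}:${currency}`
  sendTransfer: () => Promise<void>,
): Promise<void> {
  if (pendingTransfers.has(key)) {
    // the swap fails immediately rather than lingering as "pending"
    throw new Error(`hashlock transfer already in progress for ${key}`);
  }
  pendingTransfers.add(key);
  try {
    await sendTransfer();
  } finally {
    pendingTransfers.delete(key);
  }
}
```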

Thoughts @sangaman @kilrau @raladev?

@kilrau
Contributor

kilrau commented Sep 8, 2020

To state the obvious, this is solved by #1850

pending marking is 200 status code without status field

So what is left to do: prevent worst case 8 out of 9 swaps failing. Could you explain again why the connext client shouldn't deal with concurrent transfers by itself (queue them, execute sequentially)? Also, do you think failing the transfer is better than queuing it? @sangaman @erkarl

@ghost

ghost commented Sep 9, 2020

To state the obvious, this is solved by #1850

pending marking is 200 status code without status field

So what is left to do: prevent worst case 8 out of 9 swaps failing. Could you explain again why the connext client shouldn't deal with concurrent transfers by itself (queue them, execute sequentially)? Also, do you think failing the transfer is better than queuing it? @sangaman @erkarl

If we queue them, it'll cause a swap timeout on the xud side whenever additional collateralization is required. An easier solution would probably be to just fail them if there's already a pending transfer in progress.

@kilrau
Contributor

kilrau commented Sep 9, 2020

So the UX in @raladev's example from above (#1847 (comment)) will be: 1 swap succeeding, 8 immediately failing. That's not exactly great. Any ideas how we could make this issue's scenario (sending 9 concurrent connext transfers within one global timeout) work? @rhlsthrm

@ghost

ghost commented Sep 9, 2020

The only way this would work is if we don't need to do in-flight collateralizations for the transfers. In that case all 9 swaps would succeed. #1845 addresses that.

@kilrau
Contributor

kilrau commented Sep 9, 2020

I am confused. Assuming we have enough collateral in the channel for all 9 transfers, I thought the connext client can't handle multiple concurrent transfers. If it can and all 9 succeed with enough collateral in the channel, we don't have a problem. What is this issue about then?

@rhlsthrm

Sorry for the confusion here, guys. The client does already queue concurrent transfers. The issue, as mentioned above, is with collateralization. The concurrent transfers will each request collateral, but only one collateralization will succeed due to our protection against collateral request "spamming". At the time of the transfer, some of them might fail because the "earlier" transfers used up the collateral (sorry for the slightly complicated explanation).

If you request enough collateral upfront for the 9 transfers, it would work as you mentioned @kilrau.
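
In code, the "request enough collateral upfront" idea amounts to something like this (requestCollateral and sendTransfer stand in for the actual rest-api-client calls; this is a sketch, not the client's implementation):

```ts
// Illustrative sketch: collateralize once for the sum of all planned transfers,
// then send them, instead of letting each transfer trigger its own collateral request.

async function sendBatch(
  amounts: number[],
  requestCollateral: (total: number) => Promise<void>,
  sendTransfer: (amount: number) => Promise<void>,
): Promise<void> {
  const total = amounts.reduce((sum, amount) => sum + amount, 0);
  await requestCollateral(total); // one request covers the whole batch
  for (const amount of amounts) {
    await sendTransfer(amount);
  }
}
```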

@kilrau
Contributor

kilrau commented Sep 10, 2020

Not at all, the issue is on our side here. Thanks for explaining, @rhlsthrm!

@erkarl @sangaman @raladev so now that connext/rest-api-client#89 is done and #1885 is up and we fail trades if there is not enough collateral/remote balance: can it even happen that a connext transfer fails because of multiple live collateralizations?

@ghost

ghost commented Sep 14, 2020

Not at all, the issue is on our side here. Thanks for explaining, @rhlsthrm!

@erkarl @sangaman @raladev so now that connext/rest-api-client#89 is done and #1885 is up and we fail trades if there is not enough collateral/remote balance

Yep. That would be a nice UX. We'd also have to track the "locked" inbound balance that's already being used for existing swaps.

can it even happen that a connext transfer fails because of multiple live collateralizations?

There can only be 1 concurrent collateralization.
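
As for tracking the "locked" inbound balance mentioned above, a minimal sketch could look like this (all names are made up for illustration, not existing xud fields):

```ts
// Illustrative only: track inbound balance already reserved by in-flight swaps
// per currency, and only accept a new swap if the free capacity covers it.

const lockedInbound = new Map<string, number>();

function canAcceptSwap(currency: string, amount: number, inboundCapacity: number): boolean {
  const locked = lockedInbound.get(currency) ?? 0;
  return inboundCapacity - locked >= amount;
}

function lockInbound(currency: string, amount: number): void {
  lockedInbound.set(currency, (lockedInbound.get(currency) ?? 0) + amount);
}

function releaseInbound(currency: string, amount: number): void {
  lockedInbound.set(currency, Math.max(0, (lockedInbound.get(currency) ?? 0) - amount));
}
```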

@kilrau
Contributor

kilrau commented Sep 14, 2020

xud to not create multiple concurrent hashlock transfers to connext to prevent this scenario from happening

Iterating back to "what needs to be done" in this issue: I am not really sure what to change, since #1885 prevents most of the live collateralization scenarios from happening and Rahul confirmed the connext client already queues transfers. So what, concretely, would you want to do? @erkarl @raladev @sangaman

@raladev
Contributor Author

raladev commented Sep 14, 2020

I see only 2 options after/with #1885:

  1. be able to aggregate collateral: when a user places 10 small ETH orders, the sum of the value of these orders should be requested as collateral. That should fix this problem;
  2. forbid multi-matching.

@kilrau
Contributor

kilrau commented Sep 14, 2020

I'd definitely go for 1., that's indeed what #1885 should do. Forbidding multi-matching is not really an option :/

@kilrau kilrau added question/tbd Further information or discussion needed P2 mid priority and removed P1 top priority labels Oct 21, 2020
@raladev
Contributor Author

raladev commented Oct 22, 2020

Current state of this issue:

  1. Live and lazy collateralization solve the problem of multiple swaps failing
    Screenshot from 2020-10-22 21-06-57

  2. Without these features (nobalancechecks = true) I still get pending HTLCs after SwapTimeout (I waited 30 mins; they were not cancelled)

xud_maker_log_pending_ht.log
xud_taker_log_pending_ht.log

Screenshot from 2020-10-22 20-54-49
Screenshot from 2020-10-22 20-54-17
