
V2 migration #655

Open · wants to merge 16 commits into base: develop

Conversation

emmanueldenloye (Contributor):

No description provided.

If we find old noderefs (e.g. us1.testnet.chainweb.com), we remove them
and put the correct uri (api.testnet.chainweb.com)
@jmininger (Contributor):

You should make sure that the tests in frontend/tests/Frontend/VersionedStore* are changed over from V1 to V2 (or that V2 is added?)

newNetworks = (\nets -> StoreFrontend_Network_Networks :=> Identity (convertNodeRefs $ V0.unNetworkMap $ runIdentity nets))
<$> DMap.lookup V0.StoreNetwork_Networks v0

-- This will regenerate the missing key. Desktop will recover the key with
@jmininger (Contributor), Jun 4, 2021:

This is something we really need to look out for with the upcoming Chainweaver changes. It might be OK in the immediate future, but we can't copy the same code over for a V2 --> V3 migration if this ever happens again in the future, because then we would risk wiping out users' keys.


deriving instance Show (StoreFrontend key a)

upgradeFromV0 :: (Monad m, HasCrypto key m) => DMap (V0.StoreFrontend key) Identity -> m (DMap (StoreFrontend key) Identity)
Contributor:

Might it make more sense to just compose V1.upgradeFromV0 and V2.upgradeFromV1, so that we aren't stuck copying code over for each migration and having to make sure that we include all of the "previous stuff" from the intermediate migration as well as all of the new migration stuff?

Contributor:

Hmmmm, interesting point. I think we probably should compose the migrations; otherwise this just isn't going to scale. That's typically the way I've done it with DB migrations.
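
For illustration, a minimal sketch of the composed version, assuming the V1 module exports its upgradeFromV0 and that this module defines an upgradeFromV1 with the matching types (the signature below mirrors the one quoted above; everything else is an assumption, not the PR's code):

import Control.Monad ((>=>))

-- Chain the existing migrations instead of re-implementing V0 -> V2 by hand.
-- Assumes V1.upgradeFromV0 produces a V1 store and upgradeFromV1 takes a V1
-- store to the V2 store defined in this module.
upgradeFromV0
  :: (Monad m, HasCrypto key m)
  => DMap (V0.StoreFrontend key) Identity
  -> m (DMap (StoreFrontend key) Identity)
upgradeFromV0 = V1.upgradeFromV0 >=> upgradeFromV1

Written this way, a future V2 -> V3 migration would only need its own upgradeFromV2 plus the same composition.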

@@ -161,3 +168,4 @@ versionedStorage = VersionedStorage
liftIO $ Api._transactionLogger_rotateLogFile txLogger
pure ()
Just (StoreFrontendVersion_1 :=> _) -> pure ()
Just (StoreFrontendVersion_2 :=> _) -> pure ()
Contributor:

Reminder to fix this

just replace the entries for the network names "Testnet" and "Mainnet"
also fixed upgrade procedure
these new test cases will investigate how chainweaver
recovers from a migration that fails at some point before
completion.
@@ -169,6 +169,7 @@ restoreLocalStorage
:: forall storeKeys m
. ( HasStorage m
, Monad m
, MonadIO m
Contributor:

What was the reason for these new MonadIO constraints? Are they left over from debugging?

Contributor (Author):

Yeah, I'm pretty sure that's the reason. I'll take them out.


@@ -292,7 +374,169 @@ test_v0ToV1Upgrade = testCaseSteps "V0 to V1 Upgrade" $ \step -> do
where
path = "tests" </> "Frontend" </> "VersionedStoreSpec.files" </> "V0"

fail_test_v0ToV2Upgrade :: FailStorageState -> TestTree
Contributor:

If we use compositional migrations as mentioned above, we might be able to get this test to become more like a property, i.e.

migrate a (a+2) == (migrate a (a+1) >> migrate (a+1) (a+2))

Not sure if that will be too invasive of a change though.

Contributor (Author):

I actually really like this idea. It would also make adding tests for new versions a lot easier.
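
A rough sketch of that property with QuickCheck, using hypothetical stand-in store types; StoreV0/StoreV1/StoreV2 and the migrate* functions below are placeholders, not the PR's real DMap-based stores:

import Test.QuickCheck

-- Placeholder store types, just to make the shape of the property concrete.
newtype StoreV0 = StoreV0 [Int] deriving (Eq, Show)
newtype StoreV1 = StoreV1 [Int] deriving (Eq, Show)
newtype StoreV2 = StoreV2 [Int] deriving (Eq, Show)

migrateV0toV1 :: StoreV0 -> StoreV1
migrateV0toV1 (StoreV0 xs) = StoreV1 (map (+ 1) xs)

migrateV1toV2 :: StoreV1 -> StoreV2
migrateV1toV2 (StoreV1 xs) = StoreV2 (filter even xs)

-- A hand-written "direct" V0 -> V2 migration, standing in for the copied-over
-- code in the PR; the property checks it agrees with the composed migrations.
migrateV0toV2 :: StoreV0 -> StoreV2
migrateV0toV2 (StoreV0 xs) = StoreV2 (filter even (map (+ 1) xs))

prop_migrationsCompose :: [Int] -> Bool
prop_migrationsCompose xs =
  migrateV0toV2 s == migrateV1toV2 (migrateV0toV1 s)
  where
    s = StoreV0 xs

-- Run with: quickCheck prop_migrationsCompose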

convertNodeRefs :: Map NetworkName [NodeRef] -> Map NetworkName [NodeRef]
convertNodeRefs = Map.mapWithKey migrate
where
migrate = \case
Contributor:

Hmmmm... I'm contemplating doing the replacement no matter what the network name is. I think I've seen instances where it's lowercase "mainnet" rather than "Mainnet". We would have to keep track of which of the node refs were actually found in order to know which new ref to add, but that would handle those other cases as well.
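
A small sketch of that alternative: replace a known legacy ref wherever it appears, ignoring the network name entirely (replaceEverywhere and its argument names are illustrative, not identifiers from the PR):

import qualified Data.Map as Map

-- Swap oldRef for newRef in every network's node list, regardless of the
-- network name; networks that never contained oldRef are left untouched.
replaceEverywhere :: Eq ref => ref -> ref -> Map.Map name [ref] -> Map.Map name [ref]
replaceEverywhere oldRef newRef =
  fmap (map (\r -> if r == oldRef then newRef else r))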


upgradeFromV0 :: (Monad m, HasCrypto key m) => DMap (V0.StoreFrontend key) Identity -> m (DMap (StoreFrontend key) Identity)
upgradeFromV0 v0 = do
(newKeysList, newAccountStorage) <- foldMapM splitOldKey oldKeysList
Contributor:

splitOldKey really concerns me for the web version. It's relying on the keys' index in a list, and then using that index to "regenerate" them instead of saving/copying them somewhere. This code is so easy to accidentally copy over for, say, the next migration. We should either rewrite it or at the very least use CPP to make it a no-op on GHCJS. Not quite sure it's in the scope of this task, but it could so easily go really bad really quickly if it isn't taken care of, imo.
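
For reference, one possible shape for the suggested CPP guard; the types, the Int index argument, and the desktop branch below are placeholders rather than the PR's actual splitOldKey:

{-# LANGUAGE CPP #-}

-- Placeholder types standing in for the real key/account types.
type OldKey = String
type NewKey = String

splitOldKey :: Monad m => Int -> OldKey -> m ([NewKey], [String])
#if defined(ghcjs_HOST_OS)
-- Web build: never regenerate keys from their list index; migrate nothing
-- here, so a copy/paste of this code into a later migration can't wipe keys.
splitOldKey _ _ = pure ([], [])
#else
-- Desktop build: index-based regeneration (stubbed out in this sketch).
splitOldKey idx oldKey = pure ([oldKey ++ "@" ++ show idx], [])
#endif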

In essence, we don't care whether the user has created, modified, or left a network alone; we filter out undesired noderefs under that network name.
It is equivalent to "These" except that we take out the
"This" constructor. With it gone, we still get the same applicative
operations from "These" (sans "This") but we don't have to account for
the "This" constructor in situations where we know it is impossible to
produce it.
@@ -152,16 +155,19 @@ fromMultiSet = ($ []) . Map.foldrWithKey (\k i -> (.) (dlrep k i)) id
| otherwise = (v:) . dlrep v (n - 1)

convertNodeRefs :: Map NetworkName [NodeRef] -> Map NetworkName [NodeRef]
-convertNodeRefs = Map.mapWithKey migrate
+convertNodeRefs = fmap migrate
where
Contributor:

Is the nested where necessary?

The datatype we really want is something equivalent to Either () a but
with a different Applicative instance.
-- the "This" constructor in situations where we know it is impossible to
-- produce it. This is also morally equivalent to Either () b with a
-- different applicative instance
data Deez b = Dis b | Dat b
@emilypi (Member), Jun 11, 2021:

slightly misleading comment.

These a b 
~ a + b + a*b

But

These () a 
~ 1 + a + 1*a 
~ 1 + a + a 
~ Maybe (Either a a)

and

Either () b 
~ 1 + b
~ Maybe b

You want a newtype for Either a a here to get the modified Applicative.
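
One possible shape for that newtype (the name Tweaked and the unchanged/changed helpers are illustrative; which side corresponds to the PR's Dis and which to Dat depends on its intended semantics):

-- Either a a wrapped so we can give it a non-standard Applicative:
-- the "changed" side wins whenever it appears on either argument,
-- mirroring These () a with the This constructor removed.
newtype Tweaked a = Tweaked { unTweaked :: Either a a }

unchanged, changed :: a -> Tweaked a
unchanged = Tweaked . Left
changed = Tweaked . Right

instance Functor Tweaked where
  fmap f (Tweaked e) = Tweaked (either (Left . f) (Right . f) e)

instance Applicative Tweaked where
  pure = unchanged
  Tweaked ef <*> Tweaked ea =
    let f = either id id ef
        a = either id id ea
    in case (ef, ea) of
         (Left _, Left _) -> unchanged (f a)
         _                -> changed (f a)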
