Add add_storage() method to UnitSentry #140

Open · wants to merge 1 commit into master
Conversation

tvansteenburgh
Contributor

Fixes #112.

Adds an add_storage() method to UnitSentry, e.g.:

unit = d.sentry['wordpress'][0]
unit.add_storage('mystorage', pool='ebs', count=3, size=1024)

There are no 'remove storage' commands in Juju.
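For context, here is a minimal sketch of the kind of CLI wrapper involved. This is an illustration only, not the actual patch; the helper name, the pool,count,size ordering of the juju add-storage directive, and the MiB size unit are assumptions:

import subprocess


def add_storage(unit_name, storage_name, pool=None, count=1, size=None):
    # Sketch only, not the actual implementation: build a storage directive
    # such as 'mystorage=ebs,3,1024M' and pass it to the Juju 2.x CLI.
    # The pool,count,size ordering and the MiB unit are assumptions; check
    # `juju help add-storage` for the exact format.
    parts = []
    if pool:
        parts.append(pool)
    parts.append(str(count))
    if size:
        parts.append('{}M'.format(size))
    directive = '{}={}'.format(storage_name, ','.join(parts))
    subprocess.check_call(['juju', 'add-storage', unit_name, directive])

With arguments like the example above, this would produce something along the lines of juju add-storage wordpress/0 mystorage=ebs,3,1024M.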

@AdamIsrael
Contributor

Would it make sense to add methods for list-storage as well?

Otherwise, this LGTM.
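For illustration, a list-storage method could be a thin shim over the juju CLI, along these lines (a sketch only, not part of this PR; the function name and JSON handling are assumptions):

import json
import subprocess


def list_storage():
    # Sketch of a possible list-storage helper (not part of this PR): shell
    # out to the Juju 2.x CLI and return the parsed JSON. The output
    # structure differs between Juju versions, so treat this as an assumption.
    out = subprocess.check_output(['juju', 'list-storage', '--format=json'])
    return json.loads(out.decode('utf-8'))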

@marcoceppi
Contributor

@Yrrsinn Will this be sufficient?

@Yrrsinn

Yrrsinn commented Jun 14, 2016

I am not sure. This patch covers the case of adding storage to a deployed charm, that is true. But I see two points here (both related to storage pools, but different aspects).

First, the approach forces the charm-test author to explicitly name a storage provider, e.g. ebs. Shouldn't a charm test be agnostic about the execution environment? More explicitly: would a test with pool='ebs' hard-wired pass on a Juju/MAAS test environment? Is this a use case?

Second, a charm can express that it needs at least one volume when deployed (non-optional storage). It should also be possible to express constraints about which storage pool to use. Compare https://lists.ubuntu.com/archives/juju/2016-April/006987.html.

The PR wraps parts of the command-line interface. IMHO a better approach would be to use the support for storage in bundle deploys (added in 2.0, AFAIK), since that is the general approach of amulet.

@tvansteenburgh
Contributor Author

@Yrrsinn, good feedback, thanks.

Regarding your first point: For add_storage(), if pool=None then the default pool for the provider will be used, which may be sufficient for test cases. If you need to create non-default pools for your tests, we'll need to add a

Deployment.create_storage_pool(name='mypool', provider_type='ebs', **config)

method, which will need to know the provider type at run time. One way to achieve this is to support AMULET_STORAGE_PROVIDER and AMULET_STORAGE_CONFIG environment variables that could be passed in at runtime. Thoughts?
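As a rough illustration of that idea (the behaviour, the fallback semantics, and the JSON format of AMULET_STORAGE_CONFIG are assumptions for discussion, not an agreed or implemented design), the method could fall back to those environment variables when no provider type is given:

import json
import os
import subprocess


def create_storage_pool(name, provider_type=None, **config):
    # Sketch of the proposed Deployment.create_storage_pool(), not
    # implemented behaviour: fall back to AMULET_STORAGE_PROVIDER /
    # AMULET_STORAGE_CONFIG (assumed to be a JSON mapping) when the provider
    # type or config is not passed in explicitly.
    provider_type = provider_type or os.environ.get('AMULET_STORAGE_PROVIDER')
    if not provider_type:
        raise ValueError('no storage provider type given and '
                         'AMULET_STORAGE_PROVIDER is not set')
    pool_config = json.loads(os.environ.get('AMULET_STORAGE_CONFIG', '{}'))
    pool_config.update(config)
    cmd = ['juju', 'create-storage-pool', name, provider_type]
    cmd.extend('{}={}'.format(k, v) for k, v in pool_config.items())
    subprocess.check_call(cmd)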

Regarding your second point: the reason we used CLI commands is that there is no native bundle support in juju-1.25, juju-deployer may never support storage, and we want amulet to work for both juju1 and juju2. Are there use cases that are impossible with the current implementation?

@Yrrsinn

Yrrsinn commented Jun 15, 2016

@tvansteenburgh Actually I needed the functionality you sketched. Parts of the Quobyte system are designed to start only when sufficient resources are present. In the Juju context, the requirement for a disk of at least 10GB caused a problem.

When I expressed this requirement in Juju, the machines did not boot. During set-up, Juju Storage creates a loop device backed by a 10GB image (not a sparse file, which is correct). The size of the machine's root partition is only 8GB, so creating the loop-device image fails and the machine does not boot, since the root partition is used up.
Working around this by setting constraints on the machines did not yield a positive result. The current workaround is to turn off these checks in Quobyte and change the configuration to start with smaller disks (1GB).

I think I do not have all the insight into Juju 1.25 / 2.0 that I would need to give good advice on design decisions for Juju, Amulet, or Juju Storage.

@AdamIsrael
Contributor

@Yrrsinn, what kind of problems did you run into when setting the root-disk machine constraint large enough for the 10GB loop device? I'm running bundletester against the quobyte bundle now, with the increased constraints, and I'd like to see if I encounter the same result you did.

@Yrrsinn

Yrrsinn commented Jun 15, 2016

@AdamIsrael It simply did not work. Honestly, I tried this in January 2016 and filed issue #112 a few days later. Since it is now mid-June, I no longer know what the precise error message was back then.

Maybe the problem has since been fixed by one of the Juju 1.25.x bug-fix releases, or it was related to the problems with the Juju Vagrant box that Canonical advertises on jujucharms.com/docs/stable.

Actually, I tried to upgrade from the trusty to the wily Vagrant box in January to fix 'strange behaviour' (a.k.a. Juju did not work as expected). Unfortunately the wily Vagrant box was broken, and the bug I filed for that on February 2nd is still in the 'new' state as of June 15th: Juju vagrant box failed to start (wily). In the meantime I use a Docker-based set-up.

Just this week, we learned that running tests with bundletester on charms with storage can cause bundletester to crash.

tl;dr
I do not have the time to debug all of your software; I am supposed to be introducing 'bugs' into our own software.

@AdamIsrael
Contributor

@Yrrsinn I'll follow up on those bugs you listed. Thanks for filing them, and sorry they haven't been addressed sooner.

Right now, I'm testing with juju 2.0 beta7/8, using a vagrant box running Xenial. We still recommend the Trusty vagrant juju box because the others are not as well-tested. That's something we're working to address in the near future.

The test errors I'm seeing, running bundletester against the bundle, are timeouts waiting for the environment to stand up. I'd really like to see those timeouts bumped up per our email discussion, because I think they'll resolve many of the problems I've seen, at least as far as Juju 2 is concerned. I'll verify that today, and if everything does work as expected, I'll raise the issue of backporting the fixes in Juju 2 to the 1.25 series.
