Thank you for helping us to improve Augur! Use the GitHub markdown preview sidebar to navigate the sections in this document.
To be an effective, productive contributor, please start by reading the Nextstrain contributing guide for useful information about how to pick an issue, submit your contributions, and so on.
Please see the "Source" tab in Augur's installation instructions for details on how to install Augur for development.
This project strictly adheres to the Contributor Covenant Code of Conduct.
Please see the open issues list for currently available issues.
We currently target compatibility with Python 3.8 and higher. As Python releases new versions, the minimum target compatibility may be increased in the future.
While you are making code changes, you will want to run augur to see its behavior with those changes. To test your local changes (without installing them to your system), run the following convenience script from the root of your cloned git repository:
./bin/augur
Note that the `./bin/augur` convenience script does not install augur system-wide with pip.
Writing good tests and running tests helps maintain code quality and eases future refactoring. This section describes the different types of tests, how to write them, how to run them, and when they are run automatically.
We encourage keeping tests up to date and adding test coverage for any code contribution. Please add a note in your PR if you need help with adding tests.
Tests consist of:
- Unit tests
- Doctests
- Functional tests
The combined test coverage is not 100%, so the automated CI GitHub Actions workflow is augmented with external testing that runs pathogen repo workflows using CI-specific input data.
Unit tests are written using pytest. Augur's unit tests are located in the `tests` directory and prefixed with `test_`. There is generally one test file for each code file.
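As a sketch of what such a test looks like (the helper function and assertions below are hypothetical, not part of Augur's API; real tests import the function under test from the augur package), a unit test is just a `test_`-prefixed function containing assertions that pytest collects and runs:

```python
# Hypothetical contents of a tests/test_example.py-style file.
def drop_sites(sequence, sites):
    """Remove the given zero-based sites from a sequence string."""
    excluded = set(sites)
    return "".join(base for i, base in enumerate(sequence) if i not in excluded)


def test_drop_sites():
    # Sites 1 and 3 ("C" and "T") are removed, leaving "AG".
    assert drop_sites("ACGT", [1, 3]) == "AG"


def test_drop_sites_with_no_sites():
    # An empty site list leaves the sequence unchanged.
    assert drop_sites("ACGT", []) == "ACGT"
```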
Doctests are tests written within a module's docstrings. They can be helpful for testing a real-world example and determining whether a regression has been introduced in a particular module. They are run via `pytest`.
Augur's command line interface is tested by functional tests implemented with the Cram framework. These tests complement existing unit tests of individual augur Python functions by running augur commands in the shell and confirming that these commands:
- execute without any errors
- produce exactly the expected outputs for the given inputs
These tests can reveal bugs resulting from untested internal functions or untested combinations of internal functions.
Over time, we have changed the way we design and organize Augur's Cram tests. You might find older practices in existing tests that haven't been updated yet, but these are the latest guidelines that we've discovered to be helpful.
- Keep cram files modular. This makes it easier to see which command is failing.
- Create files in the initial working directory (e.g. `./file.txt` or simply `file.txt`), as it is a temporary working directory unique to the test. Note that the name of the `$TMP` directory is misleading: although it is temporary, it is shared across all tests, so you'll have to explicitly remove files at the end of each test to avoid affecting other tests. The initial directory of each test is a unique directory within `$TMP`.
- Each directory containing cram tests should have a setup script named `_setup.sh`. Keep all shared setup commands in this file.
Compare deterministic outputs of augur commands in a functional test with a `diff` between the expected and observed output files.
For extremely simple deterministic outputs, use the expected text written to standard output instead of creating a separate expected output file.
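Putting these pieces together, a minimal cram test might look like the following sketch. The command, data files, and expected output file are hypothetical placeholders; `$TESTDIR` is cram's built-in variable for the directory containing the `.t` file:

```
Setup.

  $ source "$TESTDIR"/_setup.sh

Build an index in the test's unique working directory and compare it
against an expected output file stored alongside the test.

  $ augur index \
  >   --sequences "$TESTDIR"/data/sequences.fasta \
  >   --output index.tsv
  $ diff -u "$TESTDIR"/data/expected_index.tsv index.tsv
```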
To compare trees with stochastic branch lengths:
- provide a fixed random seed to the tree builder executable (e.g., `--tree-builder-args "-seed 314159"` for the "iqtree" method of `augur tree`)
- use `scripts/diff_trees.py` instead of `diff` and optionally provide a specific number to `--significant-digits` to limit the precision that should be considered in the diff
To compare JSON outputs with stochastic numerical values, use `scripts/diff_jsons.py` with the appropriate `--significant-digits` argument.
Both tree and JSON comparison scripts rely on deepdiff for underlying comparisons.
- Unit tests should be used for the public API.
- Unit tests should be used if you want to test specific behavior of a class or function.
- Doctests or functional tests should be used if you believe they would improve the documentation with the execution of a real-world example.
You've written tests and now you want to run them to see if they are passing. First, install the complete Nextstrain environment and augur dev dependencies as described above. Next, run all augur tests with the following command from the top level of the augur repository:
./run_tests.sh
For rapid execution of a subset of unit tests (as during test-driven development), the `-k` argument disables code coverage and functional tests and is passed directly to pytest to limit which tests run. For example, the following command only runs unit tests related to `augur mask`:
./run_tests.sh -k test_mask
To run a specific functional test with cram, you can use the following command:
cram tests/functional/clades.t
To run cram tests locally and capture test coverage data, you can use this invocation:
AUGUR="coverage run --data-file="$PWD/.coverage" $PWD/bin/augur" cram
You can provide one or more cram test file names to get coverage for just those tests, or omit file names to run the entire cram test suite.
Troubleshooting tip: As tests run on the development code in the augur repository, your environment should not have an existing augur installation that could cause a conflict in pytest.
We use continuous integration with GitHub Actions to run tests on every pull request submitted to the project. We use codecov to automatically produce test coverage for new contributions and the project as a whole.
Our goal is to gradually add [type annotations][] to our code so that we can catch errors earlier and be explicit about the interfaces expected and provided. Annotation pairs well with the functional approach taken by the package.
During development you can run static type checks using [mypy][]:
$ mypy
# No output is good!
and [pyright][]:
$ npx pyright
0 errors, 0 warnings, 0 informations
There are also many editor integrations for mypy, and Pyright can be configured for VS Code.
Instead of removing a feature from one release to the next, consider first deprecating the feature by adding a warning output and a deprecation entry. This allows a transition period during which both the deprecated feature and a suggested alternative are available.
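A sketch of this pattern in Python (the option names and entry point below are hypothetical, not Augur's actual flags): keep honoring the old option while emitting a `DeprecationWarning` that points users at its replacement:

```python
import warnings


def run(args):
    """Hypothetical subcommand entry point during a deprecation period."""
    if args.get("output") is not None:
        # The old flag still works, but warn users to migrate before removal.
        warnings.warn(
            "--output is deprecated and will be removed in a future release; "
            "use --output-node-data instead.",
            DeprecationWarning,
            stacklevel=2,
        )
        args.setdefault("output_node_data", args["output"])
    return args.get("output_node_data")
```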
The changelog should be updated in every pull request that makes a functional change to the behavior of a command or improves documentation. Changelog entries are separated into three categories to define the upcoming release number:
- Major Changes
- Features
- Bug Fixes
Documentation changes can be listed under "Bug Fixes" since they do not impact functionality but are still good to note for users.
Versions for this project, Augur, from 3.0.0 onwards aim to follow the Semantic Versioning rules.
- Compare changes to find PRs and direct commits since the previous tag (replacing `X.X.X` with the previous tag):
  - For all changes: open https://github.com/nextstrain/augur/compare/X.X.X...HEAD
  - For PRs: run the commands below and paste the output URL in your browser:

    ```
    previous_tag="X.X.X"
    previous_tag_date=$(git log -1 $previous_tag --format=%cd --date=format:'%Y-%m-%dT%H:%M:%SZ')
    echo "https://github.com/nextstrain/augur/pulls?q=is:pr%20is:closed%20merged:>$previous_tag_date"
    ```
- Define a new version number `X.X.X` based on changes and Semantic Versioning rules.

  > [!NOTE]
  > If releasing a major version, consider removing a deprecated feature.
- Go through each PR and note the PRs that didn't provide a changelog update.
- For the PRs missing a changelog update, add an entry summarizing the changes in the PR.
- Keep headers and formatting consistent with the rest of the file.
- Open a PR with these changes. If changes are clear and you feel confident in the release notes, merge without PR approval. Otherwise, or if unsure, add nextstrain/core as a reviewer and wait for approval before proceeding with the release.
- Go to this GitHub Actions workflow.
- Select Run workflow. In the new menu:
  - Ensure the `master` branch is selected.
  - In New version X.X.X, provide the new version number.
  - Select Run workflow.
- Ensure the workflow runs successfully.
- Ensure the docker-base CI action triggered by nextstrain-bot runs successfully.
First, check whether the Python version or dependency list in `setup.py` has changed since the previous Augur version.
If there are no such changes:
- Wait for an auto-bump PR in bioconda-recipes.
- Add a comment `@BiocondaBot please add label`.
- Wait for a bioconda maintainer to approve and merge.
If there are changes to the Python version or dependency list:
- Create a PR in bioconda-recipes translating the changes in `setup.py` to Conda package match specifications. Follow instructions at nextstrain/bioconda-recipes/README.md (example).
- Add a comment `@BiocondaBot please add label`.
- Wait for a bioconda maintainer to approve and merge.
- Wait for an auto-bump PR in bioconda-recipes.
- Add a comment in the auto-bump PR: `Please close this in favor of #<your PR number>`.
- Wait for the bioconda-recipe PR to be merged.
- Wait for the new version of Augur to be available on bioconda.
- Manually run the conda-base CI workflow on the `main` branch.
- Ensure the workflow runs successfully and that the summary outputs for Ubuntu and MacOS builds include `augur` with the appropriate version in the list of changed packages.
New releases are tagged in git using an "annotated" tag. If the git option `user.signingKey` is set, the tag will also be signed. Signed tags are preferred, but it can be hard to set up GPG correctly. Source and wheel (binary) distributions are uploaded to the nextstrain-augur project on PyPI.
There is a `./devel/release` script which will prepare a new release from your local repository. It ends with instructions on how to push the release commit/tag/branch and how to upload the built distributions to PyPI. You'll need a PyPI account and twine installed to do the latter.
After you create a new release and before you push it to GitHub, run all tests again as described above to confirm that nothing broke with the new release.
If any tests fail, run the `./devel/rewind-release` script to undo the release, then fix the tests before trying again.
New releases trigger a new docker-base build to keep the Docker image up-to-date. This trigger is implemented as the rebuild-docker-image job in the release workflow, which is explicitly conditioned on the previous run job's successful completion. To trigger a Docker image rebuild without making a release, see this section of the docker-base README.
Bioconda hosts augur’s conda package and defines augur’s dependencies in a conda recipe YAML file. New releases on GitHub automatically trigger a new Bioconda release.
To modify augur’s dependencies or other aspects of its conda environment, follow Bioconda’s contributing guide.
You will need to update the existing recipe YAML locally and create a pull request on GitHub for testing and review.
Add your GitHub username to the `recipe_maintainers` list if this is your first time modifying the augur recipe.
After a successful pull request review, Bioconda will automatically update the augur package that users download.
Branches and PRs are tested by GitHub Actions workflows configured in `.github/workflows`.
Our CI GitHub Actions workflow consists of a test job that runs tests and uploads the coverage report to Codecov.
Currently, only `pytest` results are included in the report.
Documentation is built using Sphinx and hosted on Read The Docs. Versions of the documentation for each augur release and git branch are available and preserved. Read The Docs is updated automatically from commits and releases on GitHub.
Documentation is mostly written as reStructuredText (.rst) files, but Markdown (.md) files can also be used. There are advantages to both formats:
- reStructuredText enables python-generated text to fill your documentation, as in the auto-importing of modules or usage of plugins like `sphinx-argparse` (see below).
- Markdown is more intuitive to write and is widely used outside of python development.
- If you don't need autogeneration of help documentation, then you may want to stick with writing Markdown.
Sphinx, coupled with reStructuredText, can be tricky to learn. Here's a subset of reStructuredText worth committing to memory to help you get started writing these files.
Many Sphinx reStructuredText files contain a directive that defines relations between individual files in the documentation, known as a Table of Contents Tree (TOC Tree).
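For reference, a toctree directive in a reStructuredText file looks like the following sketch (the listed documents are placeholders, not Augur's actual page names):

```rst
.. toctree::
   :maxdepth: 2

   installation/installation
   tutorials/getting-started
```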
Human-readable augur and augur subcommand documentation is written using a Sphinx extension called sphinx-argparse.
The documentation source files are located in `./docs`, with `./docs/index.rst` being the main entry point. Each subsection of the documentation is a subdirectory inside `./docs`. For instance, the tutorials are all found in `./docs/tutorials` and are included in the documentation website via the directive in `./docs/index.rst`.
Docs should be updated any time a new Python file is added or updated. Docs are largely generated from the Python file contents such as docstrings, but each Python file must be accompanied by at least one corresponding reStructuredText file in order to render the pages.
- If a new Python file is added, a new reStructuredText file should be added under `docs/api/developer`. This can be done using a script.
- If the new Python file represents a subcommand of `augur`, a new reStructuredText file should be added under `docs/usage/cli/` in addition to the new file under `docs/api/developer`.
To regenerate the developer API documentation after adding, renaming, or removing an augur submodule, autogenerate a new API file as follows.
./devel/regenerate-developer-api-docs
Building the documentation locally is useful to test changes. First, make sure you have the development dependencies of augur installed:
pip install -e '.[dev]'
This installs packages listed in the `dev` section of `extras_require` in `setup.py`, as well as augur's dependencies as necessary.
Sphinx and make are used when building documentation. Here are some examples that you may find useful:
Build the HTML output format by running:
make -C docs html
To monitor the source files for changes and automatically rebuild as necessary, run:
make -C docs livehtml
Sphinx can build other formats, such as epub. To see other available formats, run:
make -C docs help
To make doc rebuilds faster, Sphinx caches built documentation by default, which is generally great, but can cause the sidebar of pages to be stale. You can clean out the cache with:
make -C docs clean
To view the generated documentation in your browser, Mac users should run:
open docs/_build/html/index.html
Linux users can view the docs by running:
xdg-open docs/_build/html/index.html
This will open your browser for you to see and read your work.