Fix ASV CI Workflow (#792)
* update workflow

* update env file

* update env file (2)

* update asv-benchmarking.yml

* fetch branches

* update asv-benchmarking.yml

* try different version scheme

* revert to old version scheme

* drop 'v' from version

* try to remove shell spec

* add v

* add bash

* Update .github/workflows/asv-benchmarking.yml

Co-authored-by: Anissa Zacharias <[email protected]>

* update mpas_ocean benchmark to use current path

* fix dataset download (1)

* fix dataset download (2)

* fix dataset download (3)

* fix dataset download (4)

* add compressed mesh files using zarr

* use zarr files in benchmark

* delete meshfiles and update benchmark to version working on fork

* attempt to fix env cache (1)

* attempt to fix env cache (2)

* attempt to fix env cache (3)

* attempt to fix env cache (4)

* attempt to fix env cache (5)

* attempt to fix env cache (5)

* attempt to fix env cache (6)

---------

Co-authored-by: Anissa Zacharias <[email protected]>
philipc2 and anissa111 authored May 17, 2024
1 parent 318f3f6 commit 2a4bcee
Showing 2 changed files with 41 additions and 23 deletions.
54 changes: 36 additions & 18 deletions .github/workflows/asv-benchmarking.yml
@@ -9,6 +9,9 @@ on:
 jobs:
   benchmark:
     runs-on: ubuntu-latest
+    defaults:
+      run:
+        shell: bash -el {0}
     env:
       CONDA_ENV_FILE: ./ci/asv.yml
       ASV_DIR: ./benchmarks
@@ -18,36 +21,51 @@ jobs:
         uses: actions/checkout@v4
         with:
           repository: UXARRAY/uxarray
+          ref: main
+          fetch-depth: 0

+      - name: Fetch all tags and branches for uxarray
+        run: |
+          git fetch --all --tags
       - name: Checkout uxarray-asv
         uses: actions/checkout@v4
         with:
-          repository: uxarray/uxarray-asv
+          repository: UXARRAY/uxarray-asv
           persist-credentials: false
           fetch-depth: 0
           ref: main
           path: uxarray-asv
-      - name: Set environment variables
-        run: |
-          echo "TODAY=$(date +'%Y-%m-%d')" >> $GITHUB_ENV

-      - name: Set up conda environment
-        id: env-setup
-        continue-on-error: true
-        uses: mamba-org/setup-micromamba@v1
+      - name: Setup Miniforge
+        uses: conda-incubator/setup-miniconda@v3
         with:
-          environment-file: ./ci/asv.yml
-          cache-environment: true
-          cache-environment-key: "benchmark-${{runner.os}}-${{runner.arch}}-${{env.TODAY}}"
+          miniforge-version: "24.1.2-0"
+          activate-environment: asv

+      - name: Get date
+        id: get-date
+        run: echo "today=$(/bin/date -u '+%Y%m%d')" >> $GITHUB_OUTPUT
+        shell: bash

-      - name: retry environment set up if failed
-        if: steps.env-setup.outcome == 'failure'
-        uses: mamba-org/setup-micromamba@v1
+      - name: Cache environment
+        uses: actions/cache@v4
         with:
-          download-micromamba: false
-          environment-file: ./ci/asv.yml
-          cache-environment: true
-          cache-environment-key: "benchmark-${{runner.os}}-${{runner.arch}}-${{env.TODAY}}"
+          path: ${{ env.CONDA }}/envs
+          key: conda-${{ runner.os }}--${{ runner.arch }}--${{ steps.get-date.outputs.today }}-${{ hashFiles('./ci/asv.yml') }}-${{ env.CACHE_NUMBER }}
+        env:
+          CACHE_NUMBER: 0
+        id: cache

+      - name: Update environment
+        run:
+          conda env update -n asv -f ./ci/asv.yml
+        if: steps.cache.outputs.cache-hit != 'true'

+      - name: Conda list
+        run: |
+          conda info
+          conda list
       - name: Copy existing results
         run: |
10 changes: 5 additions & 5 deletions benchmarks/mpas_ocean.py
@@ -4,7 +4,7 @@

 import uxarray as ux

-current_path = Path(os.path.dirname(os.path.realpath(__file__))).parents[0]
+current_path = Path(os.path.dirname(os.path.realpath(__file__)))

 data_var = 'bottomDepth'

@@ -17,14 +17,14 @@
 filenames = [grid_filename_480, data_filename_480, grid_filename_120, data_filename_120]

 for filename in filenames:
-    if not os.path.isfile(filename):
+    if not os.path.isfile(current_path / filename):
         # downloads the files from Cookbook repo, if they haven't been downloaded locally yet
         url = f"https://github.com/ProjectPythia/unstructured-grid-viz-cookbook/raw/main/meshfiles/{filename}"
-        _, headers = urllib.request.urlretrieve(url, filename=filename)
+        _, headers = urllib.request.urlretrieve(url, filename=current_path / filename)


-file_path_dict = {"480km": [grid_filename_480, data_filename_480],
-                  "120km": [grid_filename_120, data_filename_120]}
+file_path_dict = {"480km": [current_path / grid_filename_480, current_path / data_filename_480],
+                  "120km": [current_path / grid_filename_120, current_path / data_filename_120]}


 class Gradient:
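For context, a minimal sketch of the path handling this change gives benchmarks/mpas_ocean.py: every file reference is resolved against the directory containing the benchmark module itself (rather than its parent), so downloaded mesh files land next to the benchmark code regardless of where ASV invokes it. The filename below is a hypothetical placeholder; the actual benchmark defines the 480km and 120km grid/data filenames elsewhere in the file.

import os
import urllib.request
from pathlib import Path

# Resolve paths relative to the directory containing this benchmark module,
# mirroring the current_path fix above.
current_path = Path(os.path.dirname(os.path.realpath(__file__)))

# Hypothetical placeholder name, for illustration only.
filename = "example_mesh.nc"

if not os.path.isfile(current_path / filename):
    # Download from the Cookbook repo if the file is not present locally.
    url = f"https://github.com/ProjectPythia/unstructured-grid-viz-cookbook/raw/main/meshfiles/{filename}"
    urllib.request.urlretrieve(url, filename=current_path / filename)

The same current_path prefix is applied when building file_path_dict, so the benchmark classes open the files from the same location they were downloaded to.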
