From 4e832e21ac5a3e5fbb930a91f76fa20d81146b3c Mon Sep 17 00:00:00 2001
From: Ray Bell
Date: Thu, 29 Feb 2024 13:47:05 -0500
Subject: [PATCH] typos (#56)

Co-authored-by: Ray Bell
---
 notebooks/foundations/02_kerchunk_multi_file.ipynb | 4 ++--
 notebooks/foundations/03_kerchunk_dask.ipynb       | 4 ++--
 notebooks/generating_references/GRIB2.ipynb        | 4 ++--
 3 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/notebooks/foundations/02_kerchunk_multi_file.ipynb b/notebooks/foundations/02_kerchunk_multi_file.ipynb
index ab7ea44..5942aa7 100644
--- a/notebooks/foundations/02_kerchunk_multi_file.ipynb
+++ b/notebooks/foundations/02_kerchunk_multi_file.ipynb
@@ -216,7 +216,7 @@
     "- Uses an `fsspec` `s3` filesystem to read in a `NetCDF` from a given url.\n",
     "- Generates a `Kerchunk` index using the `SingleHdf5ToZarr` `Kerchunk` method.\n",
     "- Creates a simplified filename using some string slicing.\n",
-    "- Uses the local filesytem created with `fsspec` to write the `Kerchunk` index to a `.json` reference file.\n",
+    "- Uses the local filesystem created with `fsspec` to write the `Kerchunk` index to a `.json` reference file.\n",
     "\n",
     "Below the `generate_json_reference` function we created, we have a simple `for` loop that iterates through our list of `NetCDF` file urls and passes them to our `generate_json_reference` function, which appends the name of each `.json` reference file to a list named **output_files**.\n"
    ]
@@ -329,7 +329,7 @@
     "Now that we have built a virtual dataset using `Kerchunk`, we can read all of those original `NetCDF` files as if they were a single `Zarr` dataset. \n",
     "\n",
     "\n",
-    "**Since we saved the combined reference `.json` file, this work doesn't have to be repeated for anyone else to use this dataset. All they need is to pass the combined refernece file to `Xarray` and it is as if they had a `Zarr` dataset! The cells below here no longer need kerchunk.** "
+    "**Since we saved the combined reference `.json` file, this work doesn't have to be repeated for anyone else to use this dataset. All they need is to pass the combined reference file to `Xarray` and it is as if they had a `Zarr` dataset! The cells below here no longer need kerchunk.** "
    ]
   },
   {
diff --git a/notebooks/foundations/03_kerchunk_dask.ipynb b/notebooks/foundations/03_kerchunk_dask.ipynb
index cbe11f2..2302eec 100644
--- a/notebooks/foundations/03_kerchunk_dask.ipynb
+++ b/notebooks/foundations/03_kerchunk_dask.ipynb
@@ -118,7 +118,7 @@
    "metadata": {},
    "source": [
     "## Building off of our Previous Work\n",
-    "In the next section, we will re-use some of the code from [Multiple Files and Kerchunk](../foundations/kerchunk_multi_file) notebook. However, we will modify it slightly to make it compatable with `Dask`.\n",
+    "In the next section, we will re-use some of the code from [Multiple Files and Kerchunk](../foundations/kerchunk_multi_file) notebook. However, we will modify it slightly to make it compatible with `Dask`.\n",
     "\n",
     "The following two cells should look the same as before. As a reminder we are importing the required libraries, using `fsspec` to create a list of our input files and setting up some kwargs for `fsspec` to use. "
    ]
@@ -311,7 +311,7 @@
     "\n",
     "\n",
     "\n",
-    "Running our `Dask` version on a subset of 40 files took only ~39 seconds. In comparison, computing the `Kerchunk` indicies one-by-one in took about 3 minutes and 41 seconds.\n",
+    "Running our `Dask` version on a subset of 40 files took only ~39 seconds. In comparison, computing the `Kerchunk` indices one-by-one in took about 3 minutes and 41 seconds.\n",
     "\n",
     "\n",
     "Just by changing a few lines of code and using `Dask`, we got our code to run almost **6x faster**. One other detail to note is that there is usually a bit of a delay as `Dask` builds its task graph before any of the tasks are started. All that to say, you may see even better performance when using `Dask` and `Kerchunk` on larger datasets.\n",
diff --git a/notebooks/generating_references/GRIB2.ipynb b/notebooks/generating_references/GRIB2.ipynb
index d168a93..899f3dc 100644
--- a/notebooks/generating_references/GRIB2.ipynb
+++ b/notebooks/generating_references/GRIB2.ipynb
@@ -178,9 +178,9 @@
    "id": "5ec53627",
    "metadata": {},
    "source": [
-    "## Iterate through list of files and create `Kerchunk` indicies as `.json` reference files\n",
+    "## Iterate through list of files and create `Kerchunk` indices as `.json` reference files\n",
     "\n",
-    "Each input GRIB2 file contains mutiple \"messages\", each a measure of some variable on a grid, but with grid dimensions not necessarily compatible with one-another. The filter we create in the first line selects only certain types of messages, and indicated that heightAboveGround will be a coordinate of interest.\n",
+    "Each input GRIB2 file contains multiple \"messages\", each a measure of some variable on a grid, but with grid dimensions not necessarily compatible with one-another. The filter we create in the first line selects only certain types of messages, and indicated that heightAboveGround will be a coordinate of interest.\n",
     "\n",
     "We also write a separate JSON for each of the selected message, since these are the basic component data sets (see the loop over `out`).\n",
     "\n",