
Commit: typos (#56)
Co-authored-by: Ray Bell <[email protected]>
raybellwaves and Ray Bell authored Feb 29, 2024
1 parent c13f187 commit 4e832e2
Showing 3 changed files with 6 additions and 6 deletions.
4 changes: 2 additions & 2 deletions notebooks/foundations/02_kerchunk_multi_file.ipynb
@@ -216,7 +216,7 @@
 "- Uses an `fsspec` `s3` filesystem to read in a `NetCDF` from a given url.\n",
 "- Generates a `Kerchunk` index using the `SingleHdf5ToZarr` `Kerchunk` method.\n",
 "- Creates a simplified filename using some string slicing.\n",
-"- Uses the local filesytem created with `fsspec` to write the `Kerchunk` index to a `.json` reference file.\n",
+"- Uses the local filesystem created with `fsspec` to write the `Kerchunk` index to a `.json` reference file.\n",
 "\n",
 "Below the `generate_json_reference` function we created, we have a simple `for` loop that iterates through our list of `NetCDF` file urls and passes them to our `generate_json_reference` function, which appends the name of each `.json` reference file to a list named **output_files**.\n"
 ]
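The bullet list in the hunk above outlines what `generate_json_reference` does. A minimal sketch of just the filename-slicing and list-building steps, using only the standard library (the `make_reference_filename` helper and the URLs are hypothetical illustrations; the real function would also run Kerchunk's `SingleHdf5ToZarr` on the file before writing the `.json` reference):

```python
import os

def make_reference_filename(url: str, output_dir: str = ".") -> str:
    """Derive a simplified `.json` reference filename from a NetCDF URL.

    Hypothetical helper: in the notebook, the Kerchunk index itself is
    generated with SingleHdf5ToZarr and then dumped to this path.
    """
    base = os.path.basename(url)      # keep only the file's base name
    stem = base.rsplit(".", 1)[0]     # strip the ".nc" extension
    return os.path.join(output_dir, f"{stem}.json")

# The surrounding for loop collects one reference path per input file:
urls = [
    "s3://bucket/data/file_2020_01.nc",
    "s3://bucket/data/file_2020_02.nc",
]
output_files = [make_reference_filename(u) for u in urls]
print(output_files)  # -> ['./file_2020_01.json', './file_2020_02.json']
```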
@@ -329,7 +329,7 @@
 "Now that we have built a virtual dataset using `Kerchunk`, we can read all of those original `NetCDF` files as if they were a single `Zarr` dataset. \n",
 "\n",
 "\n",
-"**Since we saved the combined reference `.json` file, this work doesn't have to be repeated for anyone else to use this dataset. All they need is to pass the combined refernece file to `Xarray` and it is as if they had a `Zarr` dataset! The cells below here no longer need kerchunk.** "
+"**Since we saved the combined reference `.json` file, this work doesn't have to be repeated for anyone else to use this dataset. All they need is to pass the combined reference file to `Xarray` and it is as if they had a `Zarr` dataset! The cells below here no longer need kerchunk.** "
 ]
 },
 {
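For context on the hunk above: a combined Kerchunk reference file is ordinary JSON, so sharing the dataset really is just sharing one file. A hand-written toy example of its shape (the variable name, byte offsets, and the commented-out `Xarray` call are illustrative assumptions, not values from this repository):

```python
import json
import os
import tempfile

# Toy combined reference: "version" plus a "refs" map from Zarr keys to
# either inline metadata or [url, offset, length] byte-range pointers.
combined_reference = {
    "version": 1,
    "refs": {
        ".zgroup": "{\"zarr_format\": 2}",
        "air/0.0.0": ["s3://bucket/data/file_2020_01.nc", 30062, 18654],
    },
}

path = os.path.join(tempfile.mkdtemp(), "combined.json")
with open(path, "w") as f:
    json.dump(combined_reference, f)

# With fsspec + xarray installed, the shared file opens as a Zarr store,
# roughly like this (sketch; exact kwargs depend on your versions):
# ds = xr.open_dataset(
#     "reference://", engine="zarr", chunks={},
#     backend_kwargs={"consolidated": False,
#                     "storage_options": {"fo": path,
#                                         "remote_protocol": "s3"}},
# )

print(json.load(open(path))["version"])  # -> 1
```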
4 changes: 2 additions & 2 deletions notebooks/foundations/03_kerchunk_dask.ipynb
@@ -118,7 +118,7 @@
 "metadata": {},
 "source": [
 "## Building off of our Previous Work\n",
-"In the next section, we will re-use some of the code from [Multiple Files and Kerchunk](../foundations/kerchunk_multi_file) notebook. However, we will modify it slightly to make it compatable with `Dask`.\n",
+"In the next section, we will re-use some of the code from [Multiple Files and Kerchunk](../foundations/kerchunk_multi_file) notebook. However, we will modify it slightly to make it compatible with `Dask`.\n",
 "\n",
 "The following two cells should look the same as before. As a reminder we are importing the required libraries, using `fsspec` to create a list of our input files and setting up some kwargs for `fsspec` to use. "
 ]
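The `Dask` modification this hunk refers to boils down to replacing the serial `for` loop with a parallel map. A rough sketch of that pattern, with the stdlib's `concurrent.futures` standing in for Dask's `bag`/`delayed` API so the example stays self-contained (the worker body and URLs are placeholders, not the notebook's real code):

```python
from concurrent.futures import ThreadPoolExecutor

def generate_json_reference(url: str) -> str:
    # Placeholder worker: the real one runs Kerchunk's SingleHdf5ToZarr
    # on the remote file and writes the index to a .json reference.
    return url.rsplit("/", 1)[-1].replace(".nc", ".json")

urls = [f"s3://bucket/data/file_{i:02d}.nc" for i in range(4)]

# Serial version:   [generate_json_reference(u) for u in urls]
# Parallel version, same shape as
# dask.bag.from_sequence(urls).map(generate_json_reference).compute():
with ThreadPoolExecutor(max_workers=4) as pool:
    output_files = list(pool.map(generate_json_reference, urls))

print(output_files)
# -> ['file_00.json', 'file_01.json', 'file_02.json', 'file_03.json']
```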
@@ -311,7 +311,7 @@
 "\n",
 "\n",
 "\n",
-"Running our `Dask` version on a subset of 40 files took only ~39 seconds. In comparison, computing the `Kerchunk` indicies one-by-one in took about 3 minutes and 41 seconds.\n",
+"Running our `Dask` version on a subset of 40 files took only ~39 seconds. In comparison, computing the `Kerchunk` indices one-by-one in took about 3 minutes and 41 seconds.\n",
 "\n",
 "\n",
 "Just by changing a few lines of code and using `Dask`, we got our code to run almost **6x faster**. One other detail to note is that there is usually a bit of a delay as `Dask` builds its task graph before any of the tasks are started. All that to say, you may see even better performance when using `Dask` and `Kerchunk` on larger datasets.\n",
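A quick sanity check of the "almost 6x faster" figure quoted in the hunk above, from the two timings it reports:

```python
serial_seconds = 3 * 60 + 41   # 3 minutes 41 seconds for the serial run
dask_seconds = 39              # ~39 seconds for the Dask run

speedup = serial_seconds / dask_seconds
print(round(speedup, 1))  # -> 5.7, i.e. almost 6x faster
```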
4 changes: 2 additions & 2 deletions notebooks/generating_references/GRIB2.ipynb
@@ -178,9 +178,9 @@
 "id": "5ec53627",
 "metadata": {},
 "source": [
-"## Iterate through list of files and create `Kerchunk` indicies as `.json` reference files\n",
+"## Iterate through list of files and create `Kerchunk` indices as `.json` reference files\n",
 "\n",
-"Each input GRIB2 file contains mutiple \"messages\", each a measure of some variable on a grid, but with grid dimensions not necessarily compatible with one-another. The filter we create in the first line selects only certain types of messages, and indicated that heightAboveGround will be a coordinate of interest.\n",
+"Each input GRIB2 file contains multiple \"messages\", each a measure of some variable on a grid, but with grid dimensions not necessarily compatible with one-another. The filter we create in the first line selects only certain types of messages, and indicated that heightAboveGround will be a coordinate of interest.\n",
 "\n",
 "We also write a separate JSON for each of the selected message, since these are the basic component data sets (see the loop over `out`).\n",
 "\n",
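The filter mentioned in the hunk above is a small dict of GRIB message selectors. An illustrative version (the 2 m / 10 m levels follow the common HRRR example, and the commented-out `scan_grib` loop, `url`, `so`, and `name` are assumptions, not necessarily this notebook's exact code):

```python
import json

# Keep only messages on the heightAboveGround level type, which also
# becomes a coordinate of interest; levels here are assumed values.
afilter = {"typeOfLevel": "heightAboveGround", "level": [2, 10]}

# With kerchunk installed, scan_grib returns one reference dict per
# selected message, and the notebook's loop over `out` writes each to
# its own JSON file, roughly like:
#
#   from kerchunk.grib2 import scan_grib
#   out = scan_grib(url, filter=afilter, storage_options=so)
#   for i, msg in enumerate(out):
#       with open(f"{name}_message{i}.json", "w") as f:
#           json.dump(msg, f)

print(sorted(afilter))  # -> ['level', 'typeOfLevel']
```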
