ENH: Add a reader for nexrad level2 files #147

Status: Closed (50 commits)

Commits
c9d88ec
ENH: Add first cut at nexrad reader
mgrover1 Jan 5, 2024
f7e42d2
FIX: Fix incorrect module registration
mgrover1 Jan 8, 2024
34b52b0
FIX: Fix the requirements for the package
mgrover1 Jan 8, 2024
9330b17
ADD: Add new xarray dataset builder
mgrover1 Jan 18, 2024
fbd942c
FIX: Fix last few steps
mgrover1 Jan 19, 2024
3f8f8cf
ADD: Add testing suite
mgrover1 Jan 19, 2024
e042f1f
FIX: Fix the manifest
mgrover1 Jan 19, 2024
0c2d0dc
FIX: Fix local import nexrad_interpolate
mgrover1 Jan 19, 2024
4f1d06c
FIX: Fix import of interpolation
mgrover1 Jan 19, 2024
1af4649
ADD: Add missing cython depend
mgrover1 Jan 19, 2024
2e37130
ADD: Add updated manifest
mgrover1 Jan 19, 2024
7ae79bd
ADD: Ensure cython is packaged
mgrover1 Jan 19, 2024
ec2261c
ADD: Add specific submodules
mgrover1 Jan 19, 2024
568e65b
force reinstall
mgrover1 Jan 19, 2024
0a56163
make sure more files are included
mgrover1 Jan 19, 2024
a904c03
FIX: Fix build of cython extensions
mgrover1 Jan 25, 2024
14aa01c
FIX: Fix lowercase letter
mgrover1 Jan 25, 2024
888fb1b
revert couple of settings
mgrover1 Jan 25, 2024
44647ac
fix installation line
mgrover1 Jan 25, 2024
2b8d9b9
ADD: Add proper import
mgrover1 Jan 25, 2024
0550e2c
move interpolation to its own submodule
mgrover1 Jan 25, 2024
78fada3
ADD: Update setup
mgrover1 Jan 25, 2024
cb94350
Move to hidden module
mgrover1 Jan 25, 2024
c3824d6
FIX: Fix all imports in backends
mgrover1 Jan 25, 2024
5e0da73
fix manifest
mgrover1 Jan 25, 2024
d16e2e3
include all cython
mgrover1 Jan 25, 2024
3a0a76f
Add other types of cython files
mgrover1 Jan 25, 2024
62db259
debug submodule
mgrover1 Jan 25, 2024
2edb91c
ADD: add this back in
mgrover1 Jan 25, 2024
f0ec0e7
Be explicit about submodules
mgrover1 Jan 25, 2024
f1af9cd
ADD: Add clear submodule
mgrover1 Jan 25, 2024
6c7b824
ADD: Add imports
mgrover1 Jan 25, 2024
f162928
remove name in setup
mgrover1 Jan 25, 2024
9592148
remove check on version
mgrover1 Jan 25, 2024
045ba99
make sure submodule is blank
mgrover1 Jan 25, 2024
2f26565
ADD: Add extra line
mgrover1 Jan 25, 2024
d7d65b2
be more explicit
mgrover1 Jan 25, 2024
3b49bc3
add setup.py run
mgrover1 Jan 26, 2024
9c11f0b
ADD: Add proper installation to other parts
mgrover1 Jan 26, 2024
e1a59df
ADD: add test for lazy dict
mgrover1 Jan 26, 2024
98090ad
DEL: Remove unused sections of common
mgrover1 Jan 26, 2024
528a16d
DEL: Remove the interpolation step
mgrover1 Jan 30, 2024
1b0d97b
FIX: Fix linting
mgrover1 Jan 30, 2024
bae3d19
Only use ruff for linting
mgrover1 Jan 30, 2024
aee82ed
DOC: Add addition to history file
mgrover1 Jan 30, 2024
fc03120
ADD: add original ci back in
mgrover1 Jan 30, 2024
c33d521
DEL: Delete extra init
mgrover1 Jan 30, 2024
5c7bd7e
Update xradar/io/backends/common.py
mgrover1 Jan 31, 2024
091a7c0
Merge latest updates on main
mgrover1 Feb 28, 2024
2a1c46e
Merge branch 'add-nexrad-reader' of https://github.com/mgrover1/xrada…
mgrover1 Feb 28, 2024
3 changes: 0 additions & 3 deletions .github/workflows/ci.yml
@@ -22,9 +22,6 @@ jobs:
run: |
python -m pip install --upgrade pip
pip install black black[jupyter] ruff

Review comment (Collaborator):
AFAIK we would need to enable jupyter notebook linting/formatting for ruff in pyproject.toml

- name: Black style check
run: |
black --check .
- name: Lint with ruff
run: |
ruff .
1 change: 1 addition & 0 deletions .github/workflows/upload_pypi.yml
@@ -30,4 +30,5 @@ jobs:
TWINE_PASSWORD: ${{ secrets.PYPI_API_TOKEN }}
run: |
python -m build
python setup.py build_ext --inplace

Review comment (Collaborator):
This won't be needed anymore.

twine upload dist/*
1 change: 1 addition & 0 deletions MANIFEST.in
@@ -8,4 +8,5 @@ recursive-include tests *
recursive-exclude * __pycache__
recursive-exclude * *.py[co]

global-include *.pyx *pxd

Review comment (Collaborator):
This can be removed too.

recursive-include docs *.rst conf.py Makefile make.bat *.jpg *.png *.gif
1 change: 1 addition & 0 deletions docs/history.md
@@ -2,6 +2,7 @@

## Development Version (unreleased)

* ENH: Add a reader for nexrad level2 files ({pull}`147`) by [@mgrover1](https://github.com/mgrover1)
* MNT: Update GitHub actions, address DeprecationWarnings ({pull}`153`) by [@kmuehlbauer](https://github.com/kmuehlbauer).
* MNT: restructure odim.py/gamic.py, add test_odim.py/test_gamic.py ({pull}`154`) by [@kmuehlbauer](https://github.com/kmuehlbauer).
* MNT: use CODECOV token ({pull}`155`) by [@kmuehlbauer](https://github.com/kmuehlbauer).
2 changes: 2 additions & 0 deletions pyproject.toml
@@ -39,12 +39,14 @@ gamic = "xradar.io.backends:GamicBackendEntrypoint"
iris = "xradar.io.backends:IrisBackendEntrypoint"
odim = "xradar.io.backends:OdimBackendEntrypoint"
rainbow = "xradar.io.backends:RainbowBackendEntrypoint"
nexradlevel2 = "xradar.io.backends:NexradLevel2BackendEntrypoint"

[build-system]
requires = [
"setuptools>=45",
"wheel",
"setuptools_scm[toml]>=7.0",
"numpy"

Review comment (Collaborator):
numpy can be removed?

]
build-backend = "setuptools.build_meta"

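As a quick illustration (not part of the diff), the entry point registered above is what lets xarray discover the new reader by name. The call below mirrors the backend test later in this PR and uses the sample file fetched in conftest.py:

```python
# Usage sketch only: assumes xradar (with this PR) and the sample volume are available.
import xarray as xr

ds = xr.open_dataset("KATX20130717_195021_V06", engine="nexradlevel2")
print(ds["DBZH"].dims)  # ("azimuth", "range") per the test assertions below
```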
5 changes: 5 additions & 0 deletions tests/conftest.py
@@ -58,3 +58,8 @@ def iris0_file():
@pytest.fixture(scope="session")
def iris1_file():
return DATASETS.fetch("SUR210819000227.RAWKPJV")


@pytest.fixture(scope="session")
def nexradlevel2_file():
return DATASETS.fetch("KATX20130717_195021_V06")
9 changes: 9 additions & 0 deletions tests/io/backends/test_common.py
@@ -0,0 +1,9 @@
from xradar.io.backends import common


def test_lazy_dict():
d = common.LazyLoadDict({"key1": "value1", "key2": "value2"})
assert d["key1"] == "value1"
lazy_func = lambda: 999
d.set_lazy("lazykey1", lazy_func)
assert d["lazykey1"] == 999
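For context, a small sketch (assuming the LazyLoadDict added to xradar/io/backends/common.py further down in this diff) showing that the lazy callable runs only on first access and that the result is then cached:

```python
# Sketch only; relies on the LazyLoadDict implementation added in this PR.
from xradar.io.backends.common import LazyLoadDict

calls = []

def expensive():
    calls.append(1)          # record every evaluation
    return 999

d = LazyLoadDict({"key1": "value1"})
d.set_lazy("lazykey1", expensive)

assert d["lazykey1"] == 999  # first access evaluates the callable
assert d["lazykey1"] == 999  # second access returns the cached value
assert len(calls) == 1       # the callable ran exactly once
```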
30 changes: 30 additions & 0 deletions tests/io/test_nexrad_level2.py
@@ -0,0 +1,30 @@
#!/usr/bin/env python
# Copyright (c) 2024, openradar developers.
# Distributed under the MIT License. See LICENSE for more info.

"""Tests for `xradar.io.nexrad_archive` module."""

import xarray as xr

from xradar.io.backends import open_nexradlevel2_datatree


def test_open_nexradlevel2_datatree(nexradlevel2_file):
dtree = open_nexradlevel2_datatree(nexradlevel2_file)
ds = dtree["sweep_0"]
assert ds.attrs["instrument_name"] == "KATX"
assert ds.attrs["nsweeps"] == 16
assert ds.attrs["Conventions"] == "CF/Radial instrument_parameters"
assert ds["DBZH"].shape == (719, 1832)
assert ds["DBZH"].dims == ("azimuth", "range")
assert int(ds.sweep_number.values) == 0


def test_open_nexrad_level2_backend(nexradlevel2_file):
ds = xr.open_dataset(nexradlevel2_file, engine="nexradlevel2")
assert ds.attrs["instrument_name"] == "KATX"
assert ds.attrs["nsweeps"] == 16
assert ds.attrs["Conventions"] == "CF/Radial instrument_parameters"
assert ds["DBZH"].shape == (719, 1832)
assert ds["DBZH"].dims == ("azimuth", "range")
assert int(ds.sweep_number.values) == 0
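Beyond these assertions, a hedged sketch of how the datatree opener might be used downstream; the sweep_<n> group naming and the per-sweep variables follow the assertions above, and the loop itself is illustrative rather than part of the PR:

```python
# Illustrative only: iterate the sweep groups of the returned datatree.
from xradar.io.backends import open_nexradlevel2_datatree

dtree = open_nexradlevel2_datatree("KATX20130717_195021_V06")
for name, node in dtree.children.items():
    if name.startswith("sweep_"):
        ds = node.ds
        print(name, int(ds.sweep_number.values), dict(ds.sizes))
```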
19 changes: 18 additions & 1 deletion xradar/io/__init__.py
@@ -13,7 +13,24 @@
.. automodule:: xradar.io.export

"""
from .backends import * # noqa
from .backends import (

Review comment (Collaborator):
We can do this, but we should be sure that we do not need anything else exported from the subpackages. At least for wradlib, I need some of the sigmet/iris defines. Sure, I can import those directly using the deep path to the definition.

CfRadial1BackendEntrypoint, # noqa
FurunoBackendEntrypoint, # noqa
GamicBackendEntrypoint, # noqa
IrisBackendEntrypoint, # noqa
NexradLevel2BackendEntrypoint, # noqa
OdimBackendEntrypoint, # noqa
RainbowBackendEntrypoint, # noqa
open_cfradial1_datatree, # noqa
open_furuno_datatree, # noqa
open_gamic_datatree, # noqa
open_iris_datatree, # noqa
open_nexradlevel2_datatree, # noqa
open_odim_datatree, # noqa
open_rainbow_datatree, # noqa
)

# noqa
from .export import * # noqa

__all__ = [s for s in dir() if not s.startswith("_")]
5 changes: 5 additions & 0 deletions xradar/io/backends/__init__.py
@@ -15,6 +15,7 @@
.. automodule:: xradar.io.backends.furuno
.. automodule:: xradar.io.backends.rainbow
.. automodule:: xradar.io.backends.iris
.. automodule:: xradar.io.backends.nexrad_level2

"""

@@ -24,5 +25,9 @@
from .iris import * # noqa
from .odim import * # noqa
from .rainbow import * # noqa
from .nexrad_level2 import (
NexradLevel2BackendEntrypoint, # noqa
open_nexradlevel2_datatree, # noqa
)

__all__ = [s for s in dir() if not s.startswith("_")]

Review comment (Collaborator):
Why remove the __all__ here?

Author reply (Collaborator):
Not sure... it is back in. thanks!!

169 changes: 169 additions & 0 deletions xradar/io/backends/common.py
@@ -12,10 +12,15 @@

"""

import bz2
import gzip
import io
import itertools
import struct
from collections import OrderedDict
from collections.abc import MutableMapping

import fsspec
import h5netcdf
import numpy as np
import xarray as xr
@@ -229,3 +234,167 @@ def _unpack_dictionary(buffer, dictionary, rawdata=False):
UINT1 = {"fmt": "B", "dtype": "uint8"}
UINT2 = {"fmt": "H", "dtype": "uint16"}
UINT4 = {"fmt": "I", "dtype": "uint32"}


def prepare_for_read(filename, storage_options={"anon": True}):

Review comment (Collaborator):
That's a very nice add-on. 👍 We should advertise this more prominently (e.g., in the docs). We can do this in a follow-up PR.

"""
Return a file-like object ready for reading.

Open a file for reading in binary mode with transparent decompression of
Gzip and BZip2 files. The resulting file-like object should be closed.

Parameters
----------
filename : str or file-like object
Filename or file-like object which will be opened. File-like objects
will not be examined for compressed data.

storage_options : dict, optional
Parameters passed to the backend file-system such as Google Cloud Storage,
Amazon Web Service S3.

Returns
-------
file_like : file-like object
File like object from which data can be read.

"""
# if a file-like object was provided, return
if hasattr(filename, "read"): # file-like object
return filename

# look for compressed data by examining the first few bytes
fh = fsspec.open(filename, mode="rb", compression="infer", **storage_options).open()
magic = fh.read(3)
fh.close()

# If the data is still compressed, use gunzip/bz2 to uncompress the data
if magic.startswith(b"\x1f\x8b"):
return gzip.GzipFile(filename, "rb")

if magic.startswith(b"BZh"):
return bz2.BZ2File(filename, "rb")

return fsspec.open(
filename, mode="rb", compression="infer", **storage_options
).open()


def make_time_unit_str(dtobj):
"""Return a time unit string from a datetime object."""
return "seconds since " + dtobj.strftime("%Y-%m-%dT%H:%M:%SZ")


class LazyLoadDict(MutableMapping):

Review comment (Collaborator):
This is also a neat feature. We should check whether it could be used by other readers. We should open an issue once this is merged.

"""
A dictionary-like class supporting lazy loading of specified keys.

Keys which are lazy loaded are specified using the set_lazy method.
The callable object which produces the specified key is provided as the
second argument to this method. This object gets called when the value
of the key is loaded. After this initial call the result is cached
in the traditional dictionary which is used for supplemental access to
this key.

Testing for keys in this dictionary using the "key in d" syntax will
result in the loading of a lazy key; use "key in d.keys()" to prevent
this evaluation.

Neither the comparison methods (__cmp__, __ge__, __gt__, __le__, __lt__,
__ne__) nor the view methods (viewitems, viewkeys, viewvalues) are
implemented, and neither is the fromkeys method.

This is from Py-ART.

Parameters
----------
dic : dict
Dictionary containing key, value pairs which will be stored and
evaluated traditionally. This dictionary is referenced, not copied, into
the LazyLoadDict, and hence changes to this dictionary may change
the original. If this behavior is not desired, copy dic in the
initialization.

Examples
--------
>>> d = LazyLoadDict({'key1': 'value1', 'key2': 'value2'})
>>> d.keys()
['key2', 'key1']
>>> lazy_func = lambda : 999
>>> d.set_lazy('lazykey1', lazy_func)
>>> d.keys()
['key2', 'key1', 'lazykey1']
>>> d['lazykey1']
999

"""

def __init__(self, dic):
"""initalize."""
self._dic = dic
self._lazyload = {}

# abstract methods
def __setitem__(self, key, value):
"""Set a key which will not be stored and evaluated traditionally."""
self._dic[key] = value
if key in self._lazyload:
del self._lazyload[key]

def __getitem__(self, key):
"""Get the value of a key, evaluating a lazy key if needed."""
if key in self._lazyload:
value = self._lazyload[key]()
self._dic[key] = value
del self._lazyload[key]
return self._dic[key]

def __delitem__(self, key):
"""Remove a lazy or traditional key from the dictionary."""
if key in self._lazyload:
del self._lazyload[key]
else:
del self._dic[key]

def __iter__(self):
"""Iterate over all lazy and traditional keys."""
return itertools.chain(self._dic.copy(), self._lazyload.copy())

def __len__(self):
"""Return the number of traditional and lazy keys."""
return len(self._dic) + len(self._lazyload)

# additional class to mimic dict behavior
def __str__(self):
"""Return a string representation of the object."""
if len(self._dic) == 0 or len(self._lazyload) == 0:
separator = ""
else:
separator = ", "
lazy_reprs = [(repr(k), repr(v)) for k, v in self._lazyload.items()]
lazy_strs = ["{}: LazyLoad({})".format(*r) for r in lazy_reprs]
lazy_str = ", ".join(lazy_strs) + "}"
return str(self._dic)[:-1] + separator + lazy_str

def has_key(self, key):
"""True if dictionary has key, else False."""
return key in self

def copy(self):
"""
Return a copy of the dictionary.

Lazy keys are not evaluated in the original or copied dictionary.
"""
dic = self.__class__(self._dic.copy())
# copy the lazy keys over without evaluating them
for key, value_callable in self._lazyload.items():
dic.set_lazy(key, value_callable)
return dic

# lazy dictionary specific methods
def set_lazy(self, key, value_callable):
"""Set a lazy key to load from a callable object."""
if key in self._dic:
del self._dic[key]
self._lazyload[key] = value_callable
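Finally, a usage sketch for prepare_for_read above (not part of the diff); the local and S3 paths are illustrative placeholders, and anonymous S3 access mirrors the default storage_options={"anon": True}:

```python
# Sketch only; the paths below are illustrative placeholders.
from xradar.io.backends.common import prepare_for_read

# Local file: gzip/bz2 compression is handled transparently, either through
# fsspec's extension-based inference or the magic-byte check in the function.
fh = prepare_for_read("KATX20130717_195021_V06")
print(fh.read(4))
fh.close()

# Remote object: fsspec resolves the protocol; the bucket/key here is a
# placeholder, not a verified path.
fh = prepare_for_read(
    "s3://noaa-nexrad-level2/2013/07/17/KATX/KATX20130717_195021_V06",
    storage_options={"anon": True},
)
print(fh.read(4))
fh.close()
```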