forked from pool/python-xarray
Accepting request 1171905 from devel:languages:python:numeric
OBS-URL: https://build.opensuse.org/request/show/1171905
OBS-URL: https://build.opensuse.org/package/show/openSUSE:Factory/python-xarray?expand=0&rev=48
commit ae5c6a4f3a
python-xarray.changes (modified)
@@ -1,3 +1,63 @@
+-------------------------------------------------------------------
+Fri May 3 13:02:26 UTC 2024 - Ben Greiner <code@bnavigator.de>
+
+- Update to 2024.3.0
+  ## New Features
+  * Partial writes to existing chunks with region or append_dim
+    will now raise an error (unless safe_chunks=False); previously
+    an error would only be raised on new variables. (PR8459,
+    GH8371, GH8882) By Maximilian Roos.
+  * Grouped and resampling quantile calculations now use the
+    vectorized algorithm in flox>=0.9.4 if present. By Deepak
+    Cherian.
+  * Do not broadcast in arithmetic operations when global option
+    arithmetic_broadcast=False (GH6806, PR8784). By Etienne Schalk
+    and Deepak Cherian.
+  * Add the .oindex property to Explicitly Indexed Arrays for
+    orthogonal indexing functionality. (GH8238, PR8750) By Anderson
+    Banihirwe.
+  * Add the .vindex property to Explicitly Indexed Arrays for
+    vectorized indexing functionality. (GH8238, PR8780) By Anderson
+    Banihirwe.
+  * Expand use of .oindex and .vindex properties. (PR8790) By
+    Anderson Banihirwe and Deepak Cherian.
+  * Allow creating xr.Coordinates objects with no indexes (PR8711)
+    By Benoit Bovy and Tom Nicholas.
+  * Enable plotting of datetime.dates. (GH8866, PR8873) By Sascha
+    Hofmann.
+  ## Breaking changes
+  * Don’t allow overwriting index variables with to_zarr region
+    writes. (GH8589, PR8876). By Deepak Cherian.
+  ## Bug fixes
+  * The default freq parameter in xr.date_range() and
+    xr.cftime_range() is set to 'D' only if periods, start, or end
+    are None (GH8770, PR8774). By Roberto Chang.
+  * Ensure that non-nanosecond precision numpy.datetime64 and
+    numpy.timedelta64 values are cast to nanosecond precision
+    values when used in DataArray.expand_dims() and
+    Dataset.expand_dims() (PR8781). By Spencer Clark.
+  * CF conform handling of _FillValue/missing_value and dtype in
+    CFMaskCoder/CFScaleOffsetCoder (GH2304, GH5597, GH7691, PR8713,
+    see also discussion in PR7654). By Kai Mühlbauer.
+  * Do not cast _FillValue/missing_value in CFMaskCoder if
+    _Unsigned is provided (GH8844, PR8852).
+  * Adapt handling of copy keyword argument for numpy >= 2.0dev
+    (GH8844, PR8851, PR8865). By Kai Mühlbauer.
+  * Import trapz/trapezoid depending on numpy version (GH8844,
+    PR8865). By Kai Mühlbauer.
+  * Warn and return bytes undecoded in case of UnicodeDecodeError
+    in h5netcdf-backend (GH5563, PR8874). By Kai Mühlbauer.
+  * Fix bug incorrectly disallowing creation of a dataset with a
+    multidimensional coordinate variable with the same name as one
+    of its dims. (GH8884, PR8886) By Tom Nicholas.
+  ## Internal Changes
+  * Migrates treenode functionality into xarray/core (PR8757) By
+    Matt Savoie and Tom Nicholas.
+  * Migrates datatree functionality into xarray/core. (PR8789)
+    By Owen Littlejohns, Matt Savoie and Tom Nicholas.
+- Drop xarray-pr8797-tokenize.patch
+- Add xarray-pr8953-nodatatreeprune.patch gh#pydata/xarray#8953
+
 -------------------------------------------------------------------
 Mon Mar 18 19:47:16 UTC 2024 - Ben Greiner <code@bnavigator.de>
 
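Two of the 2024.3.0 changes recorded above lend themselves to a quick illustration. The following is a minimal sketch, not part of the package sources, assuming the APIs named in the changelog entries (the arithmetic_broadcast option from PR8784 and the relaxed freq default in xr.date_range() from PR8774); the exact exception type raised when broadcasting is refused is an assumption.

    import xarray as xr

    # PR8774: freq only defaults to "D" when it is actually needed, so giving
    # start, end and periods together no longer over-specifies the range.
    times = xr.date_range("2000-01-01", "2000-01-05", periods=5)

    da = xr.DataArray([1, 2, 3], dims="x")
    db = xr.DataArray([10, 20], dims="y")

    # PR8784: implicit broadcasting in arithmetic can now be switched off.
    with xr.set_options(arithmetic_broadcast=False):
        try:
            da + db  # would normally broadcast to a 3x2 result
        except ValueError as err:  # assumed error type
            print("broadcasting disabled:", err)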
python-xarray.spec (modified)
@@ -25,20 +25,21 @@
 %define psuffix %{nil}
 %endif
 
-%define skip_python39 1
+%define ghversion 2024.03.0
 
 %{?sle15_python_module_pythons}
 Name:           python-xarray%{psuffix}
-Version:        2024.2.0
+Version:        2024.3.0
 Release:        0
 Summary:        N-D labeled arrays and datasets in Python
 License:        Apache-2.0
 URL:            https://github.com/pydata/xarray
-Source:         https://files.pythonhosted.org/packages/source/x/xarray/xarray-%{version}.tar.gz
+Source:         https://github.com/pydata/xarray/archive/refs/tags/v%{ghversion}.tar.gz#/xarray-%{ghversion}-gh.tar.gz
 # PATCH-FEATURE-UPSTREAM local_dataset.patch gh#pydata/xarray#5377 mcepl@suse.com
 # fix xr.tutorial.open_dataset to work with the preloaded cache.
 Patch0:         local_dataset.patch
-# PATCH-FIX-UPSTREAM xarray-pr8797-tokenize.patch gh#pydata/xarray#8797 fixes gh#pydata/xarray#8788
-Patch1:         https://github.com/pydata/xarray/pull/8797.patch#/xarray-pr8797-tokenize.patch
+# PATCH-FIX-UPSTREAM xarray-pr8953-nodatatreeprune.patch gh#pydata/xarray#8953
+Patch1:         xarray-pr8953-nodatatreeprune.patch
 BuildRequires:  %{python_module base >= 3.9}
 BuildRequires:  %{python_module pip}
 BuildRequires:  %{python_module setuptools_scm}
@@ -149,15 +150,7 @@ Except nc-time-axis, because it's not packaged yet.
 Use `pip-%{python_bin_suffix} --user install nc-time-axis` to install from PyPI, if needed.
 
 %prep
-%autosetup -p1 -n xarray-%{version}
-%if "%{version}" == "2024.2.0"
-# gh#pydata/xarray#8768, remove this after the next update!
-rm -r xarray/tests/datatree
-%else
-echo "You failed to update the specfile"
-exit 1
-%endif
+%autosetup -p1 -n xarray-%{ghversion}
 
 chmod -x xarray/util/print_versions.py
 
 %build
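A note on the Source switch above (an inference from the macros shown, not taken from the spec itself): with ghversion set to 2024.03.0, the new Source line should expand to https://github.com/pydata/xarray/archive/refs/tags/v2024.03.0.tar.gz, stored locally as xarray-2024.03.0-gh.tar.gz (the LFS-tracked tarball added below), and %autosetup -p1 -n xarray-%{ghversion} then unpacks the GitHub archive's xarray-2024.03.0/ top-level directory. The separate ghversion macro is needed because the GitHub tag zero-pads the month (v2024.03.0) while the Python/RPM version string does not (2024.3.0).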
xarray-2024.03.0-gh.tar.gz (new file, 3 lines)
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c4cc63dd850fe5a0b62d6805147b64947f6df81a876de31e563558be0543d3a6
+size 3722922
xarray-2024.2.0.tar.gz (deleted, previous source tarball)
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:a105f02791082c888ebe2622090beaff2e7b68571488d62fe6afdab35b4b717f
-size 3634288
xarray-pr8797-tokenize.patch (deleted, 195 lines)
@@ -1,195 +0,0 @@
From 4eb05f0f73c535455f457e650036c86cdfaf4aa2 Mon Sep 17 00:00:00 2001
From: crusaderky <crusaderky@gmail.com>
Date: Thu, 29 Feb 2024 12:21:18 +0000
Subject: [PATCH] tokenize() should ignore difference between None and {} attrs

---
 xarray/core/dataarray.py    |  2 +-
 xarray/core/dataset.py      |  8 ++++----
 xarray/core/variable.py     |  6 ++++--
 xarray/namedarray/core.py   |  7 +++----
 xarray/namedarray/utils.py  |  4 ++--
 xarray/tests/test_dask.py   | 35 ++++++++++++++++++++++++-----------
 xarray/tests/test_sparse.py |  4 ----
 7 files changed, 38 insertions(+), 28 deletions(-)

diff --git a/xarray/core/dataarray.py b/xarray/core/dataarray.py
index c00fe1a9e6..aeb6b2217c 100644
--- a/xarray/core/dataarray.py
+++ b/xarray/core/dataarray.py
@@ -1070,7 +1070,7 @@ def reset_coords(
             dataset[self.name] = self.variable
         return dataset
 
-    def __dask_tokenize__(self):
+    def __dask_tokenize__(self) -> object:
         from dask.base import normalize_token
 
         return normalize_token((type(self), self._variable, self._coords, self._name))
diff --git a/xarray/core/dataset.py b/xarray/core/dataset.py
index 884e302b8b..e1fd9e025f 100644
--- a/xarray/core/dataset.py
+++ b/xarray/core/dataset.py
@@ -694,7 +694,7 @@ def __init__(
             data_vars, coords
         )
 
-        self._attrs = dict(attrs) if attrs is not None else None
+        self._attrs = dict(attrs) if attrs else None
         self._close = None
         self._encoding = None
         self._variables = variables
@@ -739,7 +739,7 @@ def attrs(self) -> dict[Any, Any]:
 
     @attrs.setter
     def attrs(self, value: Mapping[Any, Any]) -> None:
-        self._attrs = dict(value)
+        self._attrs = dict(value) if value else None
 
     @property
     def encoding(self) -> dict[Any, Any]:
@@ -856,11 +856,11 @@ def load(self, **kwargs) -> Self:
 
         return self
 
-    def __dask_tokenize__(self):
+    def __dask_tokenize__(self) -> object:
         from dask.base import normalize_token
 
         return normalize_token(
-            (type(self), self._variables, self._coord_names, self._attrs)
+            (type(self), self._variables, self._coord_names, self._attrs or None)
         )
 
     def __dask_graph__(self):
diff --git a/xarray/core/variable.py b/xarray/core/variable.py
index cd0c022d70..315c46369b 100644
--- a/xarray/core/variable.py
+++ b/xarray/core/variable.py
@@ -2592,11 +2592,13 @@ def __init__(self, dims, data, attrs=None, encoding=None, fastpath=False):
         if not isinstance(self._data, PandasIndexingAdapter):
             self._data = PandasIndexingAdapter(self._data)
 
-    def __dask_tokenize__(self):
+    def __dask_tokenize__(self) -> object:
         from dask.base import normalize_token
 
         # Don't waste time converting pd.Index to np.ndarray
-        return normalize_token((type(self), self._dims, self._data.array, self._attrs))
+        return normalize_token(
+            (type(self), self._dims, self._data.array, self._attrs or None)
+        )
 
     def load(self):
         # data is already loaded into memory for IndexVariable
diff --git a/xarray/namedarray/core.py b/xarray/namedarray/core.py
index 2972269043..fd209bc273 100644
--- a/xarray/namedarray/core.py
+++ b/xarray/namedarray/core.py
@@ -511,7 +511,7 @@ def attrs(self) -> dict[Any, Any]:
 
     @attrs.setter
     def attrs(self, value: Mapping[Any, Any]) -> None:
-        self._attrs = dict(value)
+        self._attrs = dict(value) if value else None
 
     def _check_shape(self, new_data: duckarray[Any, _DType_co]) -> None:
         if new_data.shape != self.shape:
@@ -570,13 +570,12 @@ def real(
             return real(self)
         return self._new(data=self._data.real)
 
-    def __dask_tokenize__(self) -> Hashable:
+    def __dask_tokenize__(self) -> object:
         # Use v.data, instead of v._data, in order to cope with the wrappers
         # around NetCDF and the like
         from dask.base import normalize_token
 
-        s, d, a, attrs = type(self), self._dims, self.data, self.attrs
-        return normalize_token((s, d, a, attrs))  # type: ignore[no-any-return]
+        return normalize_token((type(self), self._dims, self.data, self._attrs or None))
 
     def __dask_graph__(self) -> Graph | None:
         if is_duck_dask_array(self._data):
diff --git a/xarray/namedarray/utils.py b/xarray/namedarray/utils.py
index 0326a6173c..b82a80b546 100644
--- a/xarray/namedarray/utils.py
+++ b/xarray/namedarray/utils.py
@@ -218,7 +218,7 @@ def __eq__(self, other: ReprObject | Any) -> bool:
     def __hash__(self) -> int:
         return hash((type(self), self._value))
 
-    def __dask_tokenize__(self) -> Hashable:
+    def __dask_tokenize__(self) -> object:
         from dask.base import normalize_token
 
-        return normalize_token((type(self), self._value))  # type: ignore[no-any-return]
+        return normalize_token((type(self), self._value))
diff --git a/xarray/tests/test_dask.py b/xarray/tests/test_dask.py
index 07bf773cc8..517fc0c2d6 100644
--- a/xarray/tests/test_dask.py
+++ b/xarray/tests/test_dask.py
@@ -299,17 +299,6 @@ def test_persist(self):
         self.assertLazyAndAllClose(u + 1, v)
         self.assertLazyAndAllClose(u + 1, v2)
 
-    def test_tokenize_empty_attrs(self) -> None:
-        # Issue #6970
-        assert self.eager_var._attrs is None
-        expected = dask.base.tokenize(self.eager_var)
-        assert self.eager_var.attrs == self.eager_var._attrs == {}
-        assert (
-            expected
-            == dask.base.tokenize(self.eager_var)
-            == dask.base.tokenize(self.lazy_var.compute())
-        )
-
     @requires_pint
     def test_tokenize_duck_dask_array(self):
         import pint
@@ -1573,6 +1562,30 @@ def test_token_identical(obj, transform):
     )
 
 
+@pytest.mark.parametrize(
+    "obj",
+    [
+        make_ds(),  # Dataset
+        make_ds().variables["c2"],  # Variable
+        make_ds().variables["x"],  # IndexVariable
+    ],
+)
+def test_tokenize_empty_attrs(obj):
+    """Issues #6970 and #8788"""
+    obj.attrs = {}
+    assert obj._attrs is None
+    a = dask.base.tokenize(obj)
+
+    assert obj.attrs == {}
+    assert obj._attrs == {}  # attrs getter changed None to dict
+    b = dask.base.tokenize(obj)
+    assert a == b
+
+    obj2 = obj.copy()
+    c = dask.base.tokenize(obj2)
+    assert a == c
+
+
 def test_recursive_token():
     """Test that tokenization is invoked recursively, and doesn't just rely on the
     output of str()
diff --git a/xarray/tests/test_sparse.py b/xarray/tests/test_sparse.py
index 289149bdd6..09c1281875 100644
--- a/xarray/tests/test_sparse.py
+++ b/xarray/tests/test_sparse.py
@@ -878,10 +878,6 @@ def test_dask_token():
     import dask
 
     s = sparse.COO.from_numpy(np.array([0, 0, 1, 2]))
-
-    # https://github.com/pydata/sparse/issues/300
-    s.__dask_tokenize__ = lambda: dask.base.normalize_token(s.__dict__)
-
     a = DataArray(s)
     t1 = dask.base.tokenize(a)
     t2 = dask.base.tokenize(a)
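For context on the dropped patch above: it makes dask's tokenize() treat unset attrs (internally None) and an explicitly empty attrs dict the same, so both yield one token (gh#pydata/xarray#8788). A minimal sketch of that behaviour, assuming an xarray that contains the fix (the patch applied or a release that includes upstream PR8797) and dask installed; not part of the package sources:

    import dask.base
    import xarray as xr

    da = xr.DataArray([1, 2, 3], dims="x")  # attrs never touched
    db = da.copy()
    db.attrs = {}                           # attrs explicitly set to an empty dict

    # With the fix, both objects tokenize identically, so dask caching and
    # graph deduplication treat them as the same input.
    assert dask.base.tokenize(da) == dask.base.tokenize(db)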
xarray-pr8953-nodatatreeprune.patch (new file, 130 lines)
@@ -0,0 +1,130 @@
From 84d23be58bb39be4eb896f5f0dbe0a8f956431fb Mon Sep 17 00:00:00 2001
From: Matt Savoie <matthew.savoie@colorado.edu>
Date: Wed, 17 Apr 2024 09:57:53 -0600
Subject: [PATCH 1/4] DAS-2108: stop pruning datatree_ directory

Quick fixup of some typing.
Removes mypy exclusions for datatree_
---
 MANIFEST.in                                        |  1 -
 pyproject.toml                                     | 10 +---------
 xarray/datatree_/datatree/io.py                    | 10 +++++-----
 xarray/datatree_/datatree/tests/test_extensions.py | 11 +++++------
 xarray/datatree_/docs/source/conf.py               |  6 +++---
 5 files changed, 14 insertions(+), 24 deletions(-)
 delete mode 100644 MANIFEST.in

Index: xarray-2024.03.0/MANIFEST.in
===================================================================
--- xarray-2024.03.0.orig/MANIFEST.in
+++ xarray-2024.03.0/MANIFEST.in
@@ -1 +1,2 @@
 prune xarray/datatree_*
+recursive-include xarray/datatree_/datatree *.py
Index: xarray-2024.03.0/pyproject.toml
===================================================================
--- xarray-2024.03.0.orig/pyproject.toml
+++ xarray-2024.03.0/pyproject.toml
@@ -96,11 +96,6 @@ warn_redundant_casts = true
 warn_unused_configs = true
 warn_unused_ignores = true
 
-# Ignore mypy errors for modules imported from datatree_.
-[[tool.mypy.overrides]]
-module = "xarray.datatree_.*"
-ignore_errors = true
-
 # Much of the numerical computing stack doesn't have type annotations yet.
 [[tool.mypy.overrides]]
 ignore_missing_imports = true
Index: xarray-2024.03.0/xarray/datatree_/datatree/io.py
===================================================================
--- xarray-2024.03.0.orig/xarray/datatree_/datatree/io.py
+++ xarray-2024.03.0/xarray/datatree_/datatree/io.py
@@ -3,14 +3,14 @@ from xarray.core.datatree import DataTre
 
 def _get_nc_dataset_class(engine):
     if engine == "netcdf4":
-        from netCDF4 import Dataset  # type: ignore
+        from netCDF4 import Dataset
     elif engine == "h5netcdf":
-        from h5netcdf.legacyapi import Dataset  # type: ignore
+        from h5netcdf.legacyapi import Dataset
     elif engine is None:
         try:
             from netCDF4 import Dataset
         except ImportError:
-            from h5netcdf.legacyapi import Dataset  # type: ignore
+            from h5netcdf.legacyapi import Dataset
     else:
         raise ValueError(f"unsupported engine: {engine}")
     return Dataset
@@ -78,7 +78,7 @@ def _datatree_to_netcdf(
 
 
 def _create_empty_zarr_group(store, group, mode):
-    import zarr  # type: ignore
+    import zarr
 
     root = zarr.open_group(store, mode=mode)
     root.create_group(group, overwrite=True)
@@ -92,7 +92,7 @@ def _datatree_to_zarr(
     consolidated: bool = True,
     **kwargs,
 ):
-    from zarr.convenience import consolidate_metadata  # type: ignore
+    from zarr.convenience import consolidate_metadata
 
     if kwargs.get("group", None) is not None:
         raise NotImplementedError(
Index: xarray-2024.03.0/xarray/datatree_/datatree/tests/test_extensions.py
===================================================================
--- xarray-2024.03.0.orig/xarray/datatree_/datatree/tests/test_extensions.py
+++ xarray-2024.03.0/xarray/datatree_/datatree/tests/test_extensions.py
@@ -18,16 +18,15 @@ class TestAccessor:
                 return "bar"
 
         dt: DataTree = DataTree()
-        assert dt.demo.foo == "bar"  # type: ignore
+        assert dt.demo.foo == "bar"
 
         # accessor is cached
-        assert dt.demo is dt.demo  # type: ignore
+        assert dt.demo is dt.demo
 
         # check descriptor
-        assert dt.demo.__doc__ == "Demo accessor."  # type: ignore
-        # TODO: typing doesn't seem to work with accessors
-        assert DataTree.demo.__doc__ == "Demo accessor."  # type: ignore
-        assert isinstance(dt.demo, DemoAccessor)  # type: ignore
+        assert dt.demo.__doc__ == "Demo accessor."
+        assert DataTree.demo.__doc__ == "Demo accessor."  # type: ignore
+        assert isinstance(dt.demo, DemoAccessor)
         assert DataTree.demo is DemoAccessor  # type: ignore
 
         with pytest.warns(Warning, match="overriding a preexisting attribute"):
Index: xarray-2024.03.0/xarray/datatree_/docs/source/conf.py
===================================================================
--- xarray-2024.03.0.orig/xarray/datatree_/docs/source/conf.py
+++ xarray-2024.03.0/xarray/datatree_/docs/source/conf.py
@@ -17,9 +17,9 @@ import inspect
 import os
 import sys
 
-import sphinx_autosummary_accessors
+import sphinx_autosummary_accessors  # type: ignore
 
-import datatree
+import datatree  # type: ignore
 
 # If extensions (or modules to document with autodoc) are in another directory,
 # add these directories to sys.path here. If the directory is relative to the
@@ -286,7 +286,7 @@ htmlhelp_basename = "datatree_doc"
 
 # -- Options for LaTeX output --------------------------------------------------
 
-latex_elements = {
+latex_elements: dict = {
     # The paper size ('letterpaper' or 'a4paper').
     # 'papersize': 'letterpaper',
     # The font size ('10pt', '11pt' or '12pt').