forked from pool/python-xarray
* Add test for rechunking to a size string by @dcherian in #9117
* Update docstring in api.py for open_mfdataset(), clarifying "chunks" argument by @arthur-e in #9121
* Grouper refactor by @dcherian in #9122
* adjust repr tests to account for different platforms (#9127) by @mgorny in #9128
* Support duplicate dimensions in .chunk by @mraspaud in #9099
* Update zendoo badge link by @max-sixty in #9133
* Split out distributed writes in zarr docs by @max-sixty in #9132
* Improve to_zarr docs by @max-sixty in #9139
* groupby: remove some internal use of IndexVariable by @dcherian in #9123
* Improve zarr chunks docs by @max-sixty in #9140
* Include numbagg in type checks by @max-sixty in #9159
* Remove mypy exclusions for a couple more libraries by @max-sixty in #9160
* Add test for #9155 by @max-sixty in #9161
* switch to datetime unit "D" by @keewis in #9170
* Slightly improve DataTree repr by @shoyer in #9064
* Fix example code formatting for CachingFileManager by @djhoese in #9178
* Change np.core.defchararray to np.char (#9165) by @pont-us in #9166
* temporarily remove pydap from CI by @keewis in #9183
* also pin numpy in the all-but-dask CI by @keewis in #9184
* promote floating-point numeric datetimes to 64-bit before decoding by @keewis in #9182
* "source" encoding for datasets opened from fsspec objects by

OBS-URL: https://build.opensuse.org/package/show/devel:languages:python:numeric/python-xarray?expand=0&rev=99
---
 xarray/tutorial.py | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

Index: xarray-2024.05.0/xarray/tutorial.py
===================================================================
--- xarray-2024.05.0.orig/xarray/tutorial.py
+++ xarray-2024.05.0/xarray/tutorial.py
@@ -158,7 +158,10 @@ def open_dataset(
     url = f"{base_url}/raw/{version}/{path.name}"
 
     # retrieve the file
-    filepath = pooch.retrieve(url=url, known_hash=None, path=cache_dir)
+    fname = pathlib.Path(cache_dir, path).expanduser()
+    if not fname.exists():
+        fname = None
+    filepath = pooch.retrieve(url=url, fname=fname, known_hash=None, path=cache_dir)
     ds = _open_dataset(filepath, engine=engine, **kws)
     if not cache:
         ds = ds.load()
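The intent of the patch is to reuse a file already present in the cache directory: if the expected filename exists locally, it is passed as `fname` to `pooch.retrieve` so pooch uses that file instead of downloading under a hash-derived name; if it does not exist, `fname=None` lets pooch choose the name itself. A minimal standalone sketch of that lookup logic (the helper name `resolve_cached_fname` is hypothetical, not part of xarray):

```python
import pathlib


def resolve_cached_fname(cache_dir, path):
    # Hypothetical helper mirroring the patch: return the cached file's
    # path if it already exists (so pooch.retrieve reuses it via fname=),
    # otherwise return None so pooch picks its own filename.
    fname = pathlib.Path(cache_dir, path).expanduser()
    if not fname.exists():
        return None
    return fname
```

With this in place, the call in the patch becomes `pooch.retrieve(url=url, fname=resolve_cached_fname(cache_dir, path), known_hash=None, path=cache_dir)`, which matches the patched behavior under the assumption that `cache_dir` and `path` combine into the cached file's location.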