-------------------------------------------------------------------
Thu Jan 18 13:25:55 UTC 2024 - Daniel Garcia <daniel.garcia@suse.com>
- Disable broken tests for s390x, gh#Unidata/netcdf4-python#1124,
bsc#1218606
-------------------------------------------------------------------
Thu Dec 7 22:10:52 UTC 2023 - Dirk Müller <dmueller@suse.com>
- update to 1.6.5:
  * fix for issue #1271 (mask ignored if bool MA assigned to
    uint8 var)
* include information on specific object when reporting errors
from netcdf-c
-------------------------------------------------------------------
Sun Sep 10 17:08:14 UTC 2023 - Ben Greiner <code@bnavigator.de>
- Update to 1.6.4
* set path to SSL certificates internally, so https DAP URLs work
with wheels (issue #1246, requires nc_rc_set function available
starting with netcdf-c 4.9.1, plus bugfix in netcdf-c PR #2690).
* Added certifi as a dependency.
* Added `isopen` method to `MFDataset` object to check if
underlying files are open.
- Version 1.6.3
* Use ``nc_put_vars`` for strided writes for netcdf-c >= 4.6.2
(issue #1222).
  * _Unsigned="false" should be same as not having _Unsigned set
    (issue #1232). _Unsigned now must be set to "true" or "True" for a
    variable to be interpreted as unsigned, instead of just having
    _Unsigned be set (to anything).
-------------------------------------------------------------------
Sat Jan 7 12:25:27 UTC 2023 - Dirk Müller <dmueller@suse.com>
- update to 1.6.2:
  * Added ``netCDF4.__has_set_alignment__`` property to help identify if the
    underlying netcdf-c library supports setting the HDF5 alignment.
* Slicing multi-dimensional variables with an all False boolean index array
now returns an empty numpy array (instead of raising an exception - issue #1197).
Behavior now consistent with numpy slicing.
* fix problem with compiling using netcdf-c < 4.9.0 (issue #1209)
-------------------------------------------------------------------
Fri Sep 16 19:56:27 UTC 2022 - Arun Persaud <arun@gmx.de>
- update to version 1.6.1:
* add Dataset methods has_<name>_filter (where <name>=zstd,blosc,bzip2,szip)
to check for availability of extra compression filters.
* release GIL for all C-lib calls (issue #1180).
* Add support for nc_set_alignment and nc_get_alignment to control alignment
of data within HDF5 files.
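  A minimal sketch (not part of the upstream notes) of the filter checks and
  alignment functions added in this release; the file name and values are
  illustrative, and it assumes a netcdf-c build that supports nc_set_alignment:
      import netCDF4

      # check which optional compression filters this build provides
      nc = netCDF4.Dataset("filter_check.nc", "w")   # hypothetical scratch file
      print(nc.has_zstd_filter(), nc.has_bzip2_filter(),
            nc.has_blosc_filter(), nc.has_szip_filter())
      nc.close()

      # tune HDF5 data alignment before creating files (illustrative values)
      netCDF4.set_alignment(4096, 4096)   # threshold, alignment (bytes)
      print(netCDF4.get_alignment())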
-------------------------------------------------------------------
Mon Jul 11 03:01:26 UTC 2022 - Arun Persaud <arun@gmx.de>
- specfile:
  * require python >= 3.7 (remove the python 2 and 3.6 skips and replace with base >= 3.7)
- update to version 1.6.0:
  * add support for new quantization functionality in netcdf-c 4.9.0 via "significant_digits"
    and "quantize_mode" kwargs in Dataset.createVariable. Default quantize_mode is "BitGroom",
    but alternate methods "BitRound" and "GranularBitRound" are also supported (see the
    sketch at the end of this entry).
* opening a Dataset in append mode (mode = 'a' or 'r+') creates a Dataset
if one does not already exist (similar to python open builtin). Issue #1144.
Added a mode='x' option (as in python open) which is the same as mode='w' with
clobber=False.
* allow createVariable to accept either Dimension instances or Dimension
names in "dimensions" tuple kwarg (issue #1145).
* remove all vestiges of python 2 in _netCDF4.pyx and set cython language_level
directive to 3 in setup.py.
* add 'compression' kwarg to createVariable to enable new compression
    functionality in netcdf-c 4.9.0. 'None', 'zlib', 'szip', 'zstd', 'bzip2',
    'blosc_lz', 'blosc_lz4', 'blosc_lz4hc', 'blosc_zlib' and 'blosc_zstd'
    are currently supported. 'blosc_shuffle', 'szip_coding' and
    'szip_pixels_per_block' kwargs also added.
compression='zlib' is equivalent to (the now deprecated) zlib=True.
If the environment variable NETCDF_PLUGIN_DIR is set to point to the
directory with the compression plugin lib__nc* files, then the compression plugins will
be installed within the package and be automatically available (the binary
wheels have this). Otherwise, the environment variable HDF5_PLUGIN_PATH
needs to be set at runtime to point to plugins in order to use the new compression
options.
* MFDataset did not aggregate 'name' variable attribute (issue #1153).
* issue warning instead of raising an exception if missing_value or
_FillValue can't be cast to the variable type when creating a
masked array (issue #1152).
* Define MPI_Session for compatibility with current mpi4py (PR #1156).
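  A minimal sketch (not part of the upstream notes) of the compression and
  quantization kwargs described above; file and variable names are illustrative,
  and it assumes netCDF4 >= 1.6.0 built against netcdf-c >= 4.9.0:
      from netCDF4 import Dataset
      import numpy as np

      nc = Dataset("quantize_demo.nc", "w")          # hypothetical file name
      nc.createDimension("x", 1000)
      # 'zlib' needs no external plugin; zstd/blosc need HDF5_PLUGIN_PATH as noted above
      v = nc.createVariable("data", "f4", ("x",),
                            compression="zlib", complevel=4,
                            significant_digits=3, quantize_mode="BitGroom")
      v[:] = np.random.random(1000).astype("f4")
      nc.close()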
-------------------------------------------------------------------
Thu Feb 10 04:21:12 UTC 2022 - Arun Persaud <arun@gmx.de>
- specfile:
* update copyright year
- update to version 1.5.8:
* Fix Enum bug (issue #1128): the enum_dict member of an EnumType
read from a file contains invalid values when the enum is large
enough (more than 127 or 255 members).
* Binary wheels for aarch64 and python 3.10.
- changes from version 1.5.7:
* don't try to mask vlens with default _FillValue, since vlens don't
have a default _FillValue. This gets rid of numpy
DeprecationWarning (issue #1099).
* update docs to reflect the fact that a variable must be in
collective mode before writing compressed data to it in
parallel. Added a test for this
(examples/mpi_example_compressed.py). Issue #1108.
* Fix OverflowError when dimension sizes become greater than 2**32-1
elements on Windows (Issue #1112).
* Don't return masked arrays for vlens (only for primitive and enum
types - issue #1115).
-------------------------------------------------------------------
Sun Feb 21 21:06:58 UTC 2021 - Ben Greiner <code@bnavigator.de>
- Update to 1.5.6
* change numpy.bool to numpy.bool_ and numpy.float to
numpy.float_ (float and bool are deprecated in numpy 1.20,
issue #1065)
* clean up docstrings so that they work with latest pdoc.
* update cython numpy API to remove deprecation warnings.
  * Add "fromcdl" and "tocdl" Dataset methods for import/export of
    CDL via ncdump/ncgen called externally via the subprocess module
    (issue #1078); see the sketch at the end of this entry.
* remove python 2.7 support.
* broadcast data (if possible) to conform to variable shape when
writing to a slice (issue #1083).
- Release 1.5.5
  * have setup.py always try to use nc-config first to find paths to
netcdf and hdf5 libraries and headers. Don't use pkg-config to
find HDF5 if HDF5 env vars are set (or read from setup.cfg).
* Change MIT license text to standard OSI wording (PR #1046).
- Skip python36 build: With NumPy 1.20, python36-numpy is no
longer available in Tumbleweed (NEP 29)
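  A minimal sketch (not part of the upstream notes) of the fromcdl/tocdl round
  trip mentioned above; file names are illustrative, and ncdump/ncgen must be
  on PATH:
      from netCDF4 import Dataset

      # dump the structure of an existing file to CDL text (calls ncdump)
      with Dataset("input.nc") as nc:                 # hypothetical existing file
          nc.tocdl(data=False, outfile="input.cdl")

      # build a new netCDF file from the CDL description (calls ncgen)
      nc2 = Dataset.fromcdl("input.cdl", ncfilename="rebuilt.nc")
      nc2.close()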
-------------------------------------------------------------------
Tue Dec 8 13:05:52 UTC 2020 - Dirk Stoecker <opensuse@dstoecker.de>
- Drop wrong hdf4 requirements
-------------------------------------------------------------------
Sat Jul 25 15:33:53 UTC 2020 - Arun Persaud <arun@gmx.de>
- update to version 1.5.4:
* fix printing of variable objects for variables that end with the
letter 'u' (issue #983).
* make sure root group has 'name' attribute (issue #988).
* add the ability to pack vlen floats to integers using
scale_factor/add_offset (issue #1003)
* use len instead of deprecated numpy.alen (issue #1008)
* check size on valid_range instead of using len (issue #1013).
* add `set_chunk_cache/get_chunk_cache` module functions to reset
the default chunk cache sizes before opening a Dataset (issue
#1018).
* replace use of numpy's deprecated tostring() method with tobytes()
(issue #1023).
* bump minimal numpy version to 1.9 (first version to have
tobytes()).
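  A minimal sketch (not part of the upstream notes) of the module-level chunk
  cache functions mentioned above; the values and file name are illustrative:
      import netCDF4

      size, nelems, preemption = netCDF4.get_chunk_cache()   # current defaults
      # enlarge the HDF5 chunk cache before opening a heavily chunked file
      netCDF4.set_chunk_cache(size=64 * 1024 * 1024, nelems=2003, preemption=0.75)
      nc = netCDF4.Dataset("big_chunked.nc")                  # hypothetical file
      nc.close()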
-------------------------------------------------------------------
Wed Jul 22 15:07:22 UTC 2020 - Michel Normand <normand@linux.vnet.ibm.com>
- Add _constraints with default 5G min disk space
-------------------------------------------------------------------
Thu May 21 10:56:35 UTC 2020 - Petr Gajdos <pgajdos@suse.com>
- %python3_only -> %python_alternative
-------------------------------------------------------------------
Tue Nov 19 20:48:10 UTC 2019 - Todd R <toddrme2178@gmail.com>
- Update to 1.5.3:
  * make sure arrays that are not filled are masked when auto_fill is off
* python 3.8 binary wheels.
-------------------------------------------------------------------
Tue Oct 8 09:09:31 UTC 2019 - Tomáš Chvátal <tchvatal@suse.com>
- Update to 1.5.2:
* fix for scaling bug when _Unsigned attribute is set and byteorder of data
does not match native byteorder (issue #930).
* revise documentation for Python 3 (issue #946).
* establish support for Python 2.7, 3.5, 3.6 and 3.7 (issue #948).
* use dict built-in instead of OrderedDict for Python 3.7+
(pull request #955).
* remove underline ANSI in Dataset string representation (pull request #956).
* remove newlines from string representation (pull request #960).
* fix for issue #957 (size of scalar var is a float since numpy.prod(())=1.0).
* make sure Variable.setncattr fails to set _FillValue (issue #959).
* fix detection of parallel HDF5 support with netcdf-c 4.6.1 (issue #964).
- Remove not needed netcdf-disable-broken-test.patch as the bug was
fixed in netcdf itself
-------------------------------------------------------------------
Tue Jun 4 11:44:56 UTC 2019 - Marketa Calabkova <mcalabkova@suse.com>
- Update to 1.5.1.2
* fix for issue #919 (assigning 2d array to 3d variable with singleton
first dimension with v[:] = a).
* minimum numpy changed from 1.9.0 to 1.10.0.
* fix issue #908 by adding workaround for incorrect value returned
by nc_inq_var_fill for netcdf-c < 4.5.1.
* fix bug writing slice to unlimited dimension that is not the first
(leftmost). Issue #906.
* make sure data gets converted to type of scale_factor when add_offset=0
and scale_factor=1 (issue #913).
* fix for reading empty (NIL) string attributes (issue #915).
* add read-shared mode (mode='rs'). Significantly speeds up reads of NETCDF3
files (pull request #902).
* added support for parallel IO in the classic netcdf-3 formats through the
pnetcdf library (pull request #897).
-------------------------------------------------------------------
Wed Mar 20 16:36:46 CET 2019 - Matej Cepl <mcepl@suse.com>
- Update to 1.4.3.2
* include missing membuf.pyx file in release source tarball.
* fix bug in implementation of NETCDF4_CLASSIC support for parallel IO
in v1.4.3 release.
* make set_always_mask work in MFDataset.
* fix saving diskless files to disk with netcdf-c >= 4.6.2.
  * support writing to an in-memory Dataset; a memoryview buffer is returned
    by Dataset.close() (issue #865, requires netcdf-c >= 4.6.2)
* fix performance regression when using large sequences of consecutive
integers for indexing with netcdf-c >= 4.6.2 (issue #870).
* improved error messages for ncinfo and other utilities (issue #873).
* fix for int64 attributes not being created for NETCDF3_64BIT_DATA (CDF5)
files (issue #878).
* fix for MPI parallel error ("NetCDF: Attempt to use feature that was not
turned on when netCDF was built") using netcdf-c 4.6.2 (issue #883).
  * Added methods `set_ncstring_attrs()` to Dataset, Group and Variable that
    force all text attributes to be written as variable length strings (netCDF
    type NC_STRING - issue #882).
* Allow parallel mode with NETCDF4_CLASSIC files (issue #890).
* add get_dims Variable method (issue #824)
* make sure format keyword not ignored when mode is 'ws' (issue #827)
* fix numpy FutureWarning (non-tuple sequence for
multidimensional indexing is deprecated), issue #833.
* add 'master_file' kwarg to MFDataset.__init__ (issue #835).
* always use nc_get_vars for strided access over OpenDAP (issue #838).
* raise FutureWarning when trying to set multi-dimensional array attribute
while still silently flattening the array (issue #841). Will change
to ValueError in next release (1.4.3).
* fix parallel writes when both nc4 parallel and pnetcdf parallel options
enabled in the netcdf-c library (issue #820).
* fix for writing masked scalar character variable (issue #850).
* disable workaround for slow nc_get_vars for __netcdflibversion__ >= 4.6.2,
since a fix was added to speed up nc_get_vars in the C library. Issue 680.
* new Dataset and Variable methods (set_always_mask) to optionally
re-enable old behaviour (return masked arrays only if selected
slice contains missing values) (issue #809).
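  A minimal sketch (not part of the upstream notes) of the set_always_mask and
  get_dims additions listed above; file and variable names are illustrative:
      from netCDF4 import Dataset

      nc = Dataset("existing.nc")          # hypothetical existing file
      nc.set_always_mask(False)            # only return masked arrays when the
                                           # selected slice actually has missing data
      v = nc.variables["temperature"]      # hypothetical variable name
      print(v.get_dims())                  # tuple of Dimension instances
      nc.close()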
-------------------------------------------------------------------
Thu May 17 10:38:13 UTC 2018 - tchvatal@suse.com
- Add patch netcdf-disable-broken-test.patch
* This test got "broken" with the new netcdf that fixed another
problem https://github.com/Unidata/netcdf4-python/issues/752
-------------------------------------------------------------------
Thu May 17 09:57:50 UTC 2018 - tchvatal@suse.com
- Version update to 1.4.0:
* fixed bug in detection of CDF5 library support in setup.py (pull request
#736, issue #713).
* fixed reading of variables with zero-length dimensions in NETCDF3_CLASSIC
files (issue #743).
* allow integer-like objects in VLEN slices (not just python ints, issue
#526, pull request #757).
  * treating _FillValue as a valid_min/valid_max was too surprising, despite
    the fact that the netcdf docs 'attribute best practices' suggest that
    clients should do this. Revert this change from issue #576 (issue #761).
* remove netcdftime, since it is now a separate package. date2num, num2date
and date2index still importable from netCDF4.
* fix 'Unreachable code' cython warning (issue #767).
* Change behavior of string attributes so that nc.stringatt = ['foo','bar']
    produces a vlen string array attribute in NETCDF4, instead of concatenating
into a single string ('foobar'). In NETCDF3/NETCDF4_CLASSIC, an IOError
is now raised, instead of writing 'foobar'. Issue #770.
* fix loading of enum type names (issue #775).
* make sure missing_value applies only to scaled short integers if
auto-scaling is on (issue #777).
* automatically create views of compound types with character arrays as
numpy strings (issue #773). Can be disabled using
'set_auto_chartostring(False)'. Numpy structured
array dtypes with 'SN' string subtypes can now be used to
define netcdf compound types (they get converted to ('S1',N)
character array types automatically).
* always return masked array by default, even if there are no
masked values (too surprising to get ndarray or MaskedArray depending
on slice, issue #785).
* treat valid_min/valid_max/_FillValue/missing_value as unsigned
integers if _Unsigned is set (to mimic behaviour of netcdf-java).
Conversion to unsigned type now occurs before masking and scale/offset
operation. Issue #794.
-------------------------------------------------------------------
Thu Mar 1 22:03:20 UTC 2018 - sebix+novell.com@sebix.at
- update to version 1.3.1:
* add parallel IO capabilities. netcdf-c and hdf5 must be compiled with MPI
support, and mpi4py must be installed. To open a file for parallel access,
use `parallel=True` in `Dataset.__init__` and optionally pass the mpi4py Comm instance
using the `comm` kwarg and the mpi4py Info instance using the `info` kwarg.
IO can be toggled between collective and independent using `Variable.set_collective`.
    See `examples/mpi_example.py` and the sketch at the end of this entry.
    Issue #717, pull request #716.
    Minimum cython dependency bumped from 0.19 to 0.21.
* Add optional `MFTime` calendar overload to use across all files, for example,
`'standard'` or `'gregorian'`. If `None` (the default), check that the calendar
attribute is present on each variable and values are unique across files raising
a `ValueError` otherwise.
* Allow _FillValue to be set for vlen string variables (issue #730).
- update to version 1.3.0:
* always search for HDF5 headers when building, even when nc-config is used
(since nc-config does not always include the path to the HDF5 headers).
Also use H5get_libversion to obtain HDF5 version info instead of
H5public.h. Fixes issue #677.
* encoding kwarg added to Dataset.__init__ and Dataset.filepath (default
is to use sys.getfilesystemencoding()) so that oddball
encodings (such as cp1252 on windows) can be handled in Dataset
filepaths (issue #686).
* Calls to nc_get_vars are avoided, since nc_get_vars is very slow (issue
#680). Strided slices are now converted to multiple calls to
nc_get_vara. This speeds up strided slice reads by a factor of 10-100
(especially for NETCDF4/HDF5 files) in most cases. In some cases, strided reads
using nc_get_vars are faster (e.g. strided reads over many dimensions
    such as var[:,::2,::2,::2]), so a variable method use_nc_get_vars was added.
var.use_nc_get_vars(True) will tell the library to use nc_get_vars instead
of multiple calls to nc_get_vara, which was the default behaviour previous
to this change.
* fix utc offset time zone conversion in netcdftime - it was being done
exactly backwards (issue #685 - thanks to @pgamez and @mdecker).
* Fix error message for illegal ellipsis slicing, add test (issue #701).
* Improve timezone format parsing in netcdftime
  * make sure numpy datatypes used to define CompoundTypes have
    isalignedstruct flag set to True (issue #705), otherwise
    segfaults can occur. This fix required raising the minimum numpy
    requirement from 1.7.0 to 1.9.0.
* ignore missing_value, _FillValue, valid_range, valid_min and valid_max
when creating masked arrays if attribute cannot be safely
cast to variable data type (and issue a warning). When setting
these attributes don't cast to variable dtype unless it can
be done safely and issue a warning. Issue #707.
- update to version 1.2.9:
* Fix for auto scaling and masking when _Unsigned attribute set (create
view as unsigned type after scaling and masking). Issue #671.
* Always mask values outside valid_min, valid_max (not just when
    missing_value attribute present). Issue #672.
- remove check boundary condition
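  A minimal sketch (not part of the upstream notes) of the parallel IO usage
  described in the first item of this entry, modelled on examples/mpi_example.py;
  the file name is illustrative, and it assumes MPI-enabled netcdf-c/hdf5 plus
  mpi4py:
      # run with e.g.: mpirun -np 4 python this_script.py
      from mpi4py import MPI
      from netCDF4 import Dataset

      rank = MPI.COMM_WORLD.rank
      nc = Dataset("parallel_demo.nc", "w", parallel=True,
                   comm=MPI.COMM_WORLD, info=MPI.Info())
      nc.createDimension("x", MPI.COMM_WORLD.size)
      v = nc.createVariable("data", "i4", ("x",))
      v.set_collective(True)   # toggle collective vs. independent IO
      v[rank] = rank           # each rank writes its own element
      nc.close()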
-------------------------------------------------------------------
Sun Jun 11 06:11:32 UTC 2017 - toddrme2178@gmail.com
- Implement single-spec version
- Fix source URL.
- Update to version 1.2.8
* recognize _Unsigned attribute used by netcdf-java to designate unsigned
integer data stored with a signed integer type in netcdf-3 (issue #656).
* add Dataset init memory parameter to allow loading a file from memory
(pull request #652, issues #406 and #295).
* fix for negative times in num2date (issue #659).
* fix for failing tests in numpy 1.13 due to changes in numpy.ma (issue #662).
* Checking for _Encoding attribute for NC_STRING variables, otherwise use
'utf-8'. 'utf-8' is used everywhere else, 'default_encoding' global module
variable is no longer used. getncattr method now takes optional kwarg
'encoding' (default 'utf-8') so encoding of attributes can be specified
if desired. If _Encoding is specified for an NC_CHAR ('S1') variable,
the chartostring utility function is used to convert the array of
characters to an array of strings with one less dimension (the last
dimension is interpreted as the length of each string) when reading the
data. When writing the data, stringtochar is used to convert a numpy
array of fixed length strings to an array of characters with one more
dimension. chartostring and stringtochar now also have an 'encoding' kwarg.
Automatic conversion to/from character to string arrays can be turned off
via a new set_auto_chartostring Dataset and Variable method (default
is True). Addresses issue #654.
* Cython >= 0.19 now required, _netCDF4.c and _netcdftime.c removed from
repository.
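  A minimal sketch (not part of the upstream notes) of the character/string
  conversion utilities described above; the data are illustrative:
      import numpy as np
      from netCDF4 import stringtochar, chartostring

      strs = np.array(["foo", "bar"], dtype="S3")   # fixed-width byte strings
      chars = stringtochar(strs)                    # shape (2, 3), dtype 'S1'
      back = chartostring(chars)                    # recovers the original strings
      print(chars.shape, back)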
- Update to version 1.2.7
* fix for issue #624 (error in conversion to masked array when variable slice
returns a scalar). This is a regression introduced in 1.2.5 associated
with support for vector missing_values. Test (tst_masked5.py) added for
vector missing_values.
* fix for python 3.6 compatibility (error retrieving character _FillValue attribute,
issue #626). Test with python 3.6 using travis CI.
- Update to version 1.2.6
* fix some test failures on big endian PPC64 that were due to
errors in byte-swapping logic. Also fixed bug in enum
code exposed on PPC64 (issue #608).
* remove support for python 2.6 (it probably still will work for a while
though).
* Sometimes checking that data being assigned to a variable has
an 'ndim' attribute is not sufficient, instead check to see that
the object supports the buffer interface (issue #613).
* make get_variables_by_attributes work in MFDataset (issue #610)
The hack is also applied for set_auto_maskandscale, set_auto_scale,
    set_auto_mask, so these don't have to be duplicated in MFDataset (pull
request #571).
- Update to version 1.2.5
* Add MFDataset.set_auto_maskandscale (plus set_auto_scale, set_auto_mask).
Fixes issue #570.
* Use valid_min/valid_max/valid_range attributes when defining mask
(issue #576). Values outside the valid range are considered to
be missing when defining the mask.
* Fix for issue #584 (add support for dates before -4712-1-1 in 360_day
and 365_day calendars to netcdftime.utime).
* Fix for issue #593: add support for datetime.timedelta operations
(adding and subtracting timedelta, subtracting two datetime
instances to compute time duration between them), implement
datetime.replace() and datetime.__str__(). datetime.__repr__()
includes the full state of an instance. Add datetime.calendar.
datetime comparison operators have full accuracy now.
* Fix for issue #585 by increasing the size of the buffer used to store the
filepath.
* Fix for issue #592: Add support for string array attributes. (When
reading, a vlen string array attribute is returned as a list of
strings. To write, use var.setncattr_string("name", ["two", "strings"]).)
  * Fix for issue #596 - julian day calculations wrong for negative years,
    caused incorrect num2date(date2num(date)) roundtrips for dates with year
    < 0.
* Make sure negative years work in utime.num2date (issue #596).
* raise NotImplementedError when trying to pickle Dataset, Variable,
CompoundType, VLType, EnumType and MFDataset (issue #602).
* Fix for issue #527: initialize vldata[i].p in Variable._get(...).
- Update to version 1.2.4
* Fix for issue #554. It is now ensured that data is in native endian
byte order before passing to netcdf-c library. Data read from variable
with non-native byte order is also byte-swapped, so that dtype remains
consistent with netcdf variable. Behavior now consistent with h5py.
* raise warning for HDF5 1.10.x (issue #549), since backwards
incompatible files may be created.
* raise AttributeError instead of RuntimeError when attribute operation
fails. raise IOError instead of RuntimeError when nc_create or
nc_open fails (issue #546).
* Use NamedTemporaryFile instead of deprecated mktemp in tests
(pull request #543).
* add AppVeyor automated windows tests (pull request #540).
- Update to version 1.2.3.1
* fix bug in setup.py (pull request #539, introduced in issue #518).
- Update to version 1.2.3
* try to avoid writing NC_STRING attributes if possible, by
trying to convert unicode strings to ascii and write as NC_CHAR (issue
#529). This preserves compatibility with clients (like Matlab) that
can't deal with NC_STRING attributes. A 'setncattr_string' method
    was added for Dataset and Variable so that users can force attributes
to be written as NC_STRING if necessary.
* fix failing tests with numpy 1.11 (issues #521 and #522).
* fix indentation bug in nc4tonc3 utility (issue #519).
* add the capability in setup.py to use pkg-config instead of
nc-config (pull request #518).
* make sure slices which return scalar masked arrays
are consistent with numpy.ma (issue #515).
* add test/tst_cdf5.py and test/tst_filepath.py (to test new
NETCDF3_64BIT_DATA format and filepath Dataset method).
* expose netcdftime.__version__ (issue #504).
* fix potential memory leak in Dataset.filepath in attempt to fix
mysterious segfaults on CentOS6 (issue #506). Segfaults
can apparently still occur on systems like CentOS6 with old versions of glibc.
- Update to version 1.2.2
* fix failing tests on python 2.6 (issue #497). Change minimum required
python from 2.5 to 2.6.
* Potential memory leaks fixed by freeing string pointers internally allocated
in netcdf-c using nc_free_string. Also use nc_free_vlens to free space allocated
for vlens inside netcdf-c (issue #495).
* invoke str on filename argument to Dataset constructor, so pathlib
instances can be used (issue #489).
* don't use hardwired NC_MAX_DIMS or NC_MAX_VARS internally to allocate space
for dimension or variable ids. Instead, find out the number of dims
and vars and use malloc. NC_MAX_NAME is still used to allocate space
for attribute and variable names, since there is no obvious way to
determine the length of these names.
* if trying to write a unicode attribute, check to see if it exists
first and is NC_CHAR, and if so, delete it and recreate it. Workaround for C
lib bug discovered in issue #485.
* support for NETCDF3_64BIT_DATA format supported in netcdf-c 4.4.0.
Similar to NETCDF3_64BIT (now NETCDF3_64BIT_OFFSET), but includes
64 bit dimensions and sizes, plus unsigned and 64 bit integer
data types.
* make sure chunksize does not exceed dimension size
(for non-unlimited dimensions) on variable creation (issue #480).
* add 'size' attribute to Dimension (same as len(d), where
d is a Dimension instance, issue #477).
* fix bug in nc3tonc4 with --unpackshort=1 (issue #474).
* dates do not have to be contiguous, i.e. can be before and after the
missing dates in Gregorian calendar (pull request #476).
-------------------------------------------------------------------
Fri Oct 16 12:37:25 UTC 2015 - toddrme2178@gmail.com
- Update to version 1.2.1
* add the capability to slice variables with unsorted integer sequences,
or integer sequences with duplicates (issue #467). This was done
by converting boolean array slices to integer array slices internally,
instead of the other way around.
* raise TypeError if masked array assigned to a VLEN str variable slice
(issue #464).
* Ellipsis now can be used with scalar VLEN str variables (issue #458).
Slicing of scalar VLEN (non-str) variables now works.
* Allow non-positive reference years in non-real-world calendars
(issue #442).
- Update to version 1.2.0
* Fixes to setup.py for building on windows (issue #460).
* warnings now issued if file being read contains unsupported
variables or data types (they were previously being silently
skipped).
* added 'get_variables_by_attributes' method (issue #454).
* check for 'units' attribute in date2index (issue #453).
* added support for enum types (issue #452).
* added 'isopen' Dataset method (issue #450).
* raise ValueError if year 0 or negative year used in time units string.
The year 0 does not exist in the Julian and Gregorian
calendars (issue #442).
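  A minimal sketch (not part of the upstream notes) of the
  get_variables_by_attributes method added above; the file and attribute values
  are illustrative:
      from netCDF4 import Dataset

      nc = Dataset("existing.nc")                    # hypothetical existing file
      # exact attribute match
      lat_vars = nc.get_variables_by_attributes(units="degrees_north")
      # a callable expresses looser matches, e.g. "has a standard_name at all"
      named = nc.get_variables_by_attributes(standard_name=lambda v: v is not None)
      print([v.name for v in lat_vars], [v.name for v in named])
      nc.close()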
-------------------------------------------------------------------
Tue Jul 14 16:30:08 UTC 2015 - toddrme2178@gmail.com
- Initial version