#
# spec file for package python-fastparquet
#
# Copyright (c) 2024 SUSE LLC
#
# All modifications and additions to the file contributed by third parties
# remain the property of their copyright owners, unless otherwise agreed
# upon. The license for this file, and modifications and additions to the
# file, is the same license as for the pristine package itself (unless the
# license for the pristine package is not an Open Source License, in which
# case the license is the MIT License). An "Open Source License" is a
# license that conforms to the Open Source Definition (Version 1.9)
# published by the Open Source Initiative.
# Please submit bugfixes or comments via https://bugs.opensuse.org/
#
%{?sle15_python_module_pythons}
Name: python-fastparquet
Version: 2024.5.0
Release: 0
Summary: Python support for Parquet file format
License: Apache-2.0
URL: https://github.com/dask/fastparquet/
# Use the GitHub archive, because it contains the test modules and data; this requires setting the version manually for setuptools_scm
Source: https://github.com/dask/fastparquet/archive/%{version}.tar.gz#/fastparquet-%{version}.tar.gz
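# Note: the trailing #/fastparquet-%{version}.tar.gz fragment only gives the downloaded archive a predictable local file name.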
BuildRequires: %{python_module Cython >= 0.29.23}
BuildRequires: %{python_module base >= 3.9}
BuildRequires: %{python_module cramjam >= 2.3.0}
# The version requirement is not declared for runtime, but it is necessary for the tests.
BuildRequires: %{python_module fsspec >= 2021.6.0}
BuildRequires: %{python_module numpy-devel}
BuildRequires: %{python_module packaging}
BuildRequires: %{python_module pandas >= 1.5.0}
BuildRequires: %{python_module pip}
BuildRequires: %{python_module pytest-asyncio}
BuildRequires: %{python_module pytest-xdist}
BuildRequires: %{python_module pytest}
BuildRequires: %{python_module python-lzo}
BuildRequires: %{python_module setuptools_scm}
BuildRequires: %{python_module setuptools}
BuildRequires: %{python_module wheel}
BuildRequires: fdupes
BuildRequires: git-core
BuildRequires: python-rpm-macros
Requires: python-cramjam >= 2.3.0
Requires: python-fsspec
Requires: python-numpy
Requires: python-packaging
Requires: python-pandas >= 1.5.0
Recommends: python-python-lzo
%python_subpackages
%description
This is a Python implementation of the Parquet file format, for integrating it
into Python-based Big Data workflows.
%prep
%autosetup -p1 -n fastparquet-%{version}
# The tests import the fastparquet.test module and it must be importable from sitearch, so install it.
sed -i -e "s/^\s*packages=\[/&'fastparquet.test', /" -e "/exclude_package_data/ d" setup.py
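# For illustration only (assuming upstream's setup.py declares its packages in a
# literal list), the substitution above turns a line like
#   packages=['fastparquet'],
# into
#   packages=['fastparquet.test', 'fastparquet'],
# and drops the exclude_package_data line so the test data files get installed as well.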
# remove empty module
[ ! -s fastparquet/evolve.py ] && rm fastparquet/evolve.py
%build
export CFLAGS="%{optflags}"
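# The GitHub archive carries no git metadata, so setuptools_scm cannot derive the
# version on its own; pin it explicitly.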
export SETUPTOOLS_SCM_PRETEND_VERSION=%{version}
%pyproject_wheel
%install
%pyproject_install
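# The Cython-generated C sources are only build artifacts; drop them from the
# installed tree and keep just the compiled extension modules.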
%python_expand rm -v %{buildroot}%{$python_sitearch}/fastparquet/{speedups,cencoding}.c
%python_expand %fdupes %{buildroot}%{$python_sitearch}
%check
%ifarch s390x
# The test suite does not work correctly on s390x, so skip it there.
echo "Not running tests for s390x"
%else
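# --pyargs fastparquet collects the tests from the installed package rather than the
# source tree; --import-mode append puts the test rootdir at the end of sys.path so
# the installed copy wins on import; -n auto parallelizes via pytest-xdist.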
%pytest_arch --pyargs fastparquet --import-mode append -n auto
%endif
%files %{python_files}
%doc README.rst
%license LICENSE
%{python_sitearch}/fastparquet
%{python_sitearch}/fastparquet-%{version}.dist-info
%changelog