Accepting request 1130498 from devel:languages:python:numeric
- update to 2023.8.0:
  * More general timestamp units (#874)
  * ReadTheDocs V2 (#871)
  * Better roundtrip dtypes (#861, 859)
  * No convert when computing bytes-per-item for str (#858)

OBS-URL: https://build.opensuse.org/request/show/1130498
OBS-URL: https://build.opensuse.org/package/show/openSUSE:Factory/python-fastparquet?expand=0&rev=29
This commit is contained in commit 981ea802f6.
@@ -1,3 +0,0 @@ (LFS pointer deleted)
-version https://git-lfs.github.com/spec/v1
-oid sha256:3347318ce53194498e81b0203e0a3e0b2ab5dec946d274756bb44dbc5610cc0e
-size 28907973

fastparquet-2023.8.0.tar.gz (new file, LFS pointer)
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:67cf29707c47003d33609a3c9a973714ab3646fc87c30a5b1eefc81d0c4048e1
+size 28904480
@@ -1,3 +1,12 @@
+-------------------------------------------------------------------
+Mon Sep 11 21:29:16 UTC 2023 - Dirk Müller <dmueller@suse.com>
+
+- update to 2023.8.0:
+  * More general timestamp units (#874)
+  * ReadTheDocs V2 (#871)
+  * Better roundtrip dtypes (#861, 859)
+  * No convert when computing bytes-per-item for str (#858)
+
-------------------------------------------------------------------
Sat Jul 1 20:05:36 UTC 2023 - Arun Persaud <arun@gmx.de>

@@ -57,7 +66,7 @@ Mon Jan 2 20:38:49 UTC 2023 - Ben Greiner <code@bnavigator.de>
-------------------------------------------------------------------
Fri Dec 23 09:18:39 UTC 2022 - Guillaume GARDET <guillaume.gardet@opensuse.org>

- Add patch to fix the test test_delta_from_def_2 on
  aarch64, armv7 and ppc64le:
  * fastparquet-pr835.patch

@@ -138,56 +147,56 @@ Sun Aug 8 15:13:55 UTC 2021 - Ben Greiner <code@bnavigator.de>
    metadata areas of the files concurrently, if the storage
    backend supports it, and not directly instantiating
    intermediate ParquetFile instances
  * row-level filtering of the data. Whereas previously, only full
    row-groups could be excluded on the basis of their parquet
    metadata statistics (if present), filtering can now be done
    within row-groups too. The syntax is the same as before,
    allowing for multiple column expressions to be combined with
    AND|OR, depending on the list structure. This mechanism
    requires two passes: one to load the columns needed to create
    the boolean mask, and another to load the columns actually
    needed in the output. This will not be faster, and may be
    slower, but in some cases it can save a significant memory
    footprint, if a small fraction of rows is considered good and
    the columns for the filter expression are not in the output.
    Not currently supported for reading with DataPageV2.
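The AND|OR list structure mentioned above is disjunctive normal form: an outer list of OR-ed clauses, each clause a list of AND-ed `(column, op, value)` predicates. Below is a minimal pure-Python sketch of just those combination semantics, not fastparquet's actual filtering code; the helper name `row_matches` and the toy rows are invented for illustration:

```python
import operator

# Outer list = OR of clauses; each inner list = AND of (column, op, value).
filters = [
    [("x", ">", 5), ("y", "==", "a")],  # clause 1: x > 5 AND y == "a"
    [("x", "<", 0)],                    # clause 2: x < 0
]

OPS = {">": operator.gt, "<": operator.lt, "==": operator.eq}

def row_matches(row, filters):
    """Evaluate the OR-of-ANDs filter structure against one row (a dict)."""
    return any(
        all(OPS[op](row[col], val) for col, op, val in clause)
        for clause in filters
    )

print(row_matches({"x": 7, "y": "a"}, filters))  # True
print(row_matches({"x": 3, "y": "b"}, filters))  # False
```

fastparquet evaluates equivalent predicates against row-group statistics, and with this release can also apply them to individual rows within a row-group.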
  * DELTA integer encoding (read-only): experimentally working,
    but we only have one test file to verify against, since it is
    not trivial to persuade Spark to produce files encoded this
    way. DELTA can be an extremely compact representation for
    slowly varying and/or monotonically increasing integers.
  * nanosecond resolution times: the new extended "logical" types
    system supports nanoseconds alongside the previous millis and
    micros. We now emit these for the default pandas time type,
    and produce a full parquet schema including both "converted"
    and "logical" type information. Note that all output has
    isAdjustedToUTC=True, i.e., these are timestamps rather than
    local time. The time-zone is stored in the metadata, as
    before, and will be successfully recreated only in fastparquet
    and (py)arrow. Otherwise, the times will appear to be UTC. For
    compatibility with Spark, you may still want to use
    times="int96" when writing.
  * DataPageV2 writing: now we support both reading and writing.
    For writing, it can be enabled with the environment variable
    FASTPARQUET_DATAPAGE_V2 or the module global
    fastparquet.writer.DATAPAGE_VERSION, and is off by default.
    It will become on by default in the future. In many cases, V2
    will result in better read performance, because the data and
    page headers are encoded separately, so data can be directly
    read into the output without additional allocation/copies.
    This feature is considered experimental, but we believe it
    works well for most use cases (i.e., our test suite) and
    should be readable by all modern parquet frameworks including
    arrow and spark.
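The two opt-in switches named in the entry above can be sketched as follows. The value `2` for `DATAPAGE_VERSION` and the moment at which the environment variable is read are assumptions, which is why the fastparquet lines are left commented:

```python
import os

# Opt in to DataPageV2 writing via the environment variable (off by default).
# Assumption: fastparquet reads this when the writer module is imported, so
# set it before importing fastparquet.
os.environ["FASTPARQUET_DATAPAGE_V2"] = "1"

# Alternatively, via the module global named in the changelog (value assumed):
# import fastparquet.writer
# fastparquet.writer.DATAPAGE_VERSION = 2

print(os.environ["FASTPARQUET_DATAPAGE_V2"])  # 1
```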
  * pandas nullable types: pandas supports "masked" extension
    arrays for types that previously could not support NULL at
    all: ints and bools. Fastparquet used to cast such columns to
    float, so that we could represent NULLs as NaN; now we use the
    new(er) masked types by default. This means faster reading of
    such columns, as there is no conversion. If the metadata
    guarantees that there are no nulls, we still use the
    non-nullable variant unless the data was written with
    fastparquet/pyarrow and the metadata indicates that the
    original datatype was nullable. We already handled writing of
    nullable columns.

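The masked layout described above can be pictured as a raw value buffer plus a validity mask, which is why integer NULLs no longer force a float/NaN cast. A hypothetical plain-Python model of that idea (not pandas' or fastparquet's actual extension-array code):

```python
# Hypothetical model of a "masked" nullable integer column: raw int storage
# plus a boolean validity mask, instead of casting to float and using NaN.
values = [1, 2, 0, 4]                # slot 2 holds a placeholder, not data
valid = [True, True, False, True]    # False marks a NULL

# Materialize with explicit missing values; the ints stay ints throughout.
column = [v if ok else None for v, ok in zip(values, valid)]
print(column)  # [1, 2, None, 4]
```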
-------------------------------------------------------------------
@@ -202,7 +211,7 @@ Tue May 18 14:41:46 UTC 2021 - Ben Greiner <code@bnavigator.de>
-------------------------------------------------------------------
Fri Feb 12 14:50:18 UTC 2021 - Dirk Müller <dmueller@suse.com>

- skip python 36 build

-------------------------------------------------------------------
Thu Feb 4 17:50:32 UTC 2021 - Jan Engelhardt <jengelh@inai.de>
@@ -291,7 +300,7 @@ Mon May 20 15:12:11 CEST 2019 - Matej Cepl <mcepl@suse.com>
Tue Apr 30 14:28:46 UTC 2019 - Todd R <toddrme2178@gmail.com>

- update to 0.3.1
  * Add schema == (__eq__) and != (__ne__) methods and tests.
  * Fix item iteration for decimals
  * List missing columns in error message
  * Fix tz being None case
@@ -17,7 +17,7 @@

Name:           python-fastparquet
-Version:        2023.7.0
+Version:        2023.8.0
Release:        0
Summary:        Python support for Parquet file format
License:        Apache-2.0