-------------------------------------------------------------------
Sat Oct 30 20:34:14 UTC 2021 - Dirk Müller <dmueller@suse.com>

- update to 1.7.0:
  * Add error_cb to confluent_cloud.py example
  * Clarify that doc output varies based on method
  * Docs say Schema when they mean SchemaReference
  * Add documentation for NewTopic and NewPartitions

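The error_cb addition above can be illustrated with a minimal, hypothetical sketch. The config dict mirrors what the confluent_cloud.py example passes to confluent_kafka.Producer; here the dispatch is simulated by hand so no broker (and no installed client) is needed, and the broker address and error string are placeholders.

```python
# Hedged sketch: wiring an error_cb into a client configuration, in the
# style the confluent_cloud.py example now demonstrates. With
# confluent-kafka, this same dict would be passed to
# confluent_kafka.Producer(conf).
errors = []

def on_error(err):
    # Connection-level errors (e.g. all brokers down) arrive via this
    # callback instead of being raised from produce()/poll().
    errors.append(str(err))

conf = {
    "bootstrap.servers": "localhost:9092",  # placeholder address
    "error_cb": on_error,
}

# Simulate the client dispatching a connection-level error:
conf["error_cb"]("_ALL_BROKERS_DOWN: all broker connections are down")
```
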
-------------------------------------------------------------------
Mon Apr 26 09:12:33 UTC 2021 - Dirk Müller <dmueller@suse.com>

- update to 1.6.1:
  * KIP-429 - Incremental consumer rebalancing support.
  * OAUTHBEARER support.
  * Add return_record_name=True to AvroDeserializer
  * Fix deprecated schema.Parse call
  * Make reader schema optional in AvroDeserializer
  * Add **kwargs to legacy AvroProducer and AvroConsumer constructors
    to support all Consumer and Producer base class constructor
    arguments, such as logger
  * Add bool for permanent schema delete
  * The avro package is no longer required for Schema-Registry support
  * Only write to schema cache once, improving performance
  * Improve Schema-Registry error reporting
  * producer.flush() could return a non-zero value without hitting the
    specified timeout.
  * Bundles librdkafka v1.6.0 which adds support for incremental
    rebalancing, sticky producer partitioning, transactional producer
    scalability improvements, and much more. See link to release notes
    below.
  * Rename asyncio.py example to avoid circular import
  * The Linux wheels are now built with manylinux2010 (rather than
    manylinux1) since OpenSSL v1.1.1 no longer builds on CentOS 5.
    Older Linux distros, such as CentOS 5, may thus no longer be
    supported.
  * The in-wheel OpenSSL version has been updated to 1.1.1i.
  * Added Message.latency() to retrieve the per-message produce latency.
  * Added trove classifiers.
  * Consumer destructor will no longer trigger consumer_close();
    consumer.close() must now be explicitly called if the application
    wants to leave the consumer group properly and commit final
    offsets.
  * Fix PY_SSIZE_T_CLEAN warning
  * Move confluent_kafka/ to src/ to avoid pytest/tox picking up the
    local dir
  * Added producer.purge() to purge messages in-queue/flight
  * Added AdminClient.list_groups() API
  * Rename asyncio.py example to avoid circular import

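The "only write to schema cache once" item above can be sketched with a minimal, hypothetical write-once cache keyed by schema id; this is an illustration of the idea, not the library's actual SchemaRegistryClient implementation, and SchemaCache/fetch_schema are invented names.

```python
# Hypothetical write-once cache: a schema is fetched and stored only on
# the first lookup, so repeated lookups never rewrite an existing entry.
class SchemaCache:
    def __init__(self):
        self._by_id = {}

    def get_or_fetch(self, schema_id, fetch):
        if schema_id not in self._by_id:
            # Single cache write per schema id.
            self._by_id[schema_id] = fetch()
        return self._by_id[schema_id]

calls = []

def fetch_schema():
    # Stand-in for a round trip to the Schema Registry.
    calls.append(1)
    return '{"type": "string"}'

cache = SchemaCache()
a = cache.get_or_fetch(42, fetch_schema)
b = cache.get_or_fetch(42, fetch_schema)
# fetch_schema ran once; both lookups returned the cached schema.
```
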
-------------------------------------------------------------------
Tue Oct 13 07:18:50 UTC 2020 - Dirk Mueller <dmueller@suse.com>

- update to 1.5.0:
  * Bundles librdkafka v1.5.0 - see release notes for all enhancements
    and fixes.
  * Dockerfile examples
  * List offsets example
  * confluent-kafka-python is based on librdkafka v1.5.0, see the
    librdkafka release notes for a complete list of changes,
    enhancements, fixes and upgrade considerations.
- no-license-as-datafile.patch (obsolete, replaced by rm in %install)

|
-------------------------------------------------------------------
Thu Oct 31 09:17:20 UTC 2019 - Dirk Mueller <dmueller@suse.com>

- update to 1.1.0:
  * confluent-kafka-python is based on librdkafka v1.1.0, see the
    librdkafka v1.1.0 release notes for a complete list of changes,
    enhancements, fixes and upgrade considerations.
  * ssl.endpoint.identification.algorithm=https (off by default) to
    validate that the broker hostname matches the certificate.
    Requires OpenSSL >= 1.0.2 (included with wheel installations).
  * Improved GSSAPI/Kerberos ticket refresh
  * Confluent monitoring interceptor package bumped to v0.11.1 (#634)
  New configuration properties:
  * ssl.key.pem - client's private key as a string in PEM format
  * ssl.certificate.pem - client's public key as a string in PEM format
  * enable.ssl.certificate.verification - enable (default) / disable
    OpenSSL's builtin broker certificate verification.
  * enable.ssl.endpoint.identification.algorithm - verify the broker's
    hostname against its certificate (disabled by default).
  * Add new rd_kafka_conf_set_ssl_cert() to pass PKCS#12, DER or PEM
    certs in (binary) memory form to the configuration object.
  * The private key data is now securely cleared from memory after
    last use.
  * SASL GSSAPI/Kerberos: Don't run kinit refresh for each broker,
    just per client instance.
  * SASL GSSAPI/Kerberos: Changed sasl.kerberos.kinit.cmd to first
    attempt ticket refresh, then acquire.
  * SASL: Proper locking on broker name acquisition.
  * Consumer: max.poll.interval.ms now correctly handles blocking poll
    calls, allowing a longer poll timeout than the max poll interval.
  * configure: Fix libzstd static lib detection

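The new PEM-related properties above can be sketched as a client config dict. This is a hypothetical illustration: the broker address is a placeholder and the PEM bodies are elided stand-ins; with confluent-kafka the dict would be handed to Producer or Consumer.

```python
# Sketch of the 1.1.0 SSL settings: key material supplied inline as PEM
# strings instead of file paths. Values are placeholders.
conf = {
    "bootstrap.servers": "broker.example.com:9093",  # placeholder
    "security.protocol": "ssl",
    # Client private key and public certificate as PEM strings:
    "ssl.key.pem": (
        "-----BEGIN PRIVATE KEY-----\n...\n-----END PRIVATE KEY-----"
    ),
    "ssl.certificate.pem": (
        "-----BEGIN CERTIFICATE-----\n...\n-----END CERTIFICATE-----"
    ),
    # Broker certificate verification is on by default; shown
    # explicitly here:
    "enable.ssl.certificate.verification": True,
}
```
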
|
-------------------------------------------------------------------
Thu Nov 29 09:10:08 UTC 2018 - Thomas Bechtold <tbechtold@suse.com>

- Initial packaging (version 0.11.6)