- Update to 2.1.0
  * Fix typo that prevented error message
  * Remove ``dask-mpi``
  * Updates to use ``update_graph`` in task journey docs
  * Fix Client repr with ``memory_info=None``
  * Fix case where key, rather than ``TaskState``, could end up in ``ts.waiting_on``
  * Use keyword-only arguments
  * Relax check for worker references in cluster context manager
  * Add HTTPS support for the dashboard
  * Use ``dask.utils.format_bytes``
- Update to 2.0.1
  * Add python_requires entry to setup.py
  * Correctly manage tasks beyond deque limit in TaskStream plot
  * Fix diagnostics page for memory_limit=None
- Update to 2.0.0
  * **Drop support for Python 2**
  * Relax warnings before release
  * Deprecate --bokeh/--no-bokeh CLI
  * Fix typo in bokeh service_kwargs for dask-worker
  * Update command line CLI options docs
  * Remove "experimental" from TLS docs
  * Add warnings around ncores= keywords
  * Add --version option to scheduler and worker CLI
  * Raise when worker initialization times out
  * Replace ncores with nthreads throughout codebase
  * Add unknown pytest markers
  * Delay lookup of allowed failures
  * Change address -> worker in ColumnDataSource for nbytes plot
  * Remove module state in Prometheus Handlers
  * Add stress test for UCX
  * Add nanny logs
  * Move some of the adaptive logic into the scheduler
  * Add SpecCluster.new_worker_spec method
  * Worker dashboard fixes
  * Add async context managers to scheduler/worker classes
  * Fix the resource key representation before sending graphs
  * Allow user to configure whether workers are daemon
  * Pin pytest >= 4 with pip in AppVeyor and Python 3.5
  * Add Experimental UCX Comm
  * Close nannies gracefully
  * Add kwargs to progress bars
  * Add back LocalCluster.__repr__
  * Move bokeh module to dashboard
  * Close clusters at exit
  * Add SchedulerPlugin TaskState example
  * Add SpecificationCluster
  * Replace register_worker_callbacks with worker plugins
  * Proxy worker dashboards from scheduler dashboard
  * Add docstring to Scheduler.check_idle_saturated
  * Refer to LocalCluster in Client docstring
  * Remove special casing of Scikit-Learn BaseEstimator serialization
  * Fix two typos in Pub class docstring
  * Support uploading files with multiple modules
  * Change the main workers bokeh page to /status
  * Cleanly stop periodic callbacks in Client
  * Disable pan tool for the Progress, Byte Stored and Tasks Processing plot
  * Except errors in Nanny's memory monitor if process no longer exists
  * Handle heartbeat when worker has just left
  * Modify styling of histograms for many-worker dashboard plots
  * Add method to wait for n workers before continuing
  * Support computation on delayed(None)
  * Clean up LocalCluster
  * Use 'temporary-directory' from dask.config for Worker's directory
  * Remove support for Iterators and Queues
- Update to 1.28.1
  * Use config accessor method for "scheduler-address"
- Update to 1.28.0
  * Add Type Attribute to TaskState
  * Add waiting task count to progress title bar
  * DOC: Clean up reference to cluster object
  * Allow scheduler to politely close workers as part of shutdown
  * Check direct_to_workers before using get_worker in Client
  * Fix comment regarding keeping existing level if less verbose
  * Add idle timeout to scheduler
  * Avoid deprecation warnings
  * Use an LRU cache for deserialized functions
  * Rename Worker._close to Worker.close
  * Add Comm closed bookkeeping
  * Explain LocalCluster behavior in Client docstring
  * Add last worker into KilledWorker exception to help debug
  * Set working worker class for dask-ssh
  * Add as_completed methods to docs
  * Add timeout to Client._reconnect
  * Limit test_spill_by_default memory, reenable it
  * Use proper address in worker -> nanny comms
  * Fix deserialization of bytes chunks larger than 64MB
- Update to 1.27.1
  * Adaptive: recommend closing workers when any are idle
  * Increase GC thresholds
  * Add interface= keyword to LocalCluster
  * Add worker_class argument to LocalCluster
  * Remove Python 2.7 from testing matrix
  * Add number of trials to diskutils test
  * Fix parameter name in LocalCluster docstring
  * Integrate stacktrace for low-level profiling
  * Apply Black to standardize code styling
  * Add missing whitespace to start_worker cmd
  * Update logging module doc links from docs.python.org/2 to docs.python.org/3
- Update to 1.27.0
  * Add basic health endpoints to scheduler and worker bokeh
  * Improve description accuracy of --memory-limit option
  * Check self.dependencies when looking at dependent tasks in memory
  * Add RabbitMQ SchedulerPlugin example
  * Add resources to scheduler update_graph plugin
  * Use ensure_bytes in serialize_error
  * Specify data storage explicitly from Worker constructor
  * Change bokeh port keywords to dashboard_address
  * Call ``.detach_()`` on PyTorch tensors to serialize data as NumPy arrays
  * Add warning if creating scratch directories takes a long time
  * Fix typo in pub-sub doc
  * Allow return_when='FIRST_COMPLETED' in wait
  * Forward kwargs through Nanny to Worker
  * Use ensure_dict instead of dict
  * Specify protocol in LocalCluster
- Update to 1.26.1
  * Fix LocalCluster to not overallocate memory when overcommitting threads per worker
  * Make closing resilient to lacking an address
  * Fix typo in comment
  * Fix double init of prometheus metrics
  * Skip test_duplicate_clients without bokeh
  * Add blocked_handlers to servers
  * Always yield Server.handle_comm coroutine
  * Use yaml.safe_load
  * Fetch executables from build root
  * Fix Tornado 6 test failures
  * Fix test_sync_closed_loop
- Update to 1.26.0
  * Update style to fix recent flake8 update
  * Fix typo in gen_cluster log message
  * Allow KeyError when closing event loop
  * Avoid thread testing for TCP ThreadPoolExecutor
  * Find Futures inside SubgraphCallable
  * Avoid AttributeError when closing and sending a message
  * Add deprecation warning to dask_mpi.py
  * Relax statistical profiling test
  * Support alternative --remote-dask-worker SSHCluster() and dask-ssh CLI
  * Iterate over full list of plugins in transition
  * Create Prometheus Endpoint
  * Use pytest.importorskip for prometheus test
  * MAINT: skip prometheus test when not installed
  * Fix intermittent testing failures
  * Avoid using nprocs keyword in dask-ssh if set to one
  * Bump minimum Tornado version to 5.0

OBS-URL: https://build.opensuse.org/request/show/717982
OBS-URL: https://build.opensuse.org/package/show/devel:languages:python:numeric/python-distributed?expand=0&rev=25
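Several of the 2.x entries above touch user-facing API: ``ncores`` becomes ``nthreads``, the old bokeh port keywords become ``dashboard_address``, and the client gains a method to wait for n workers before continuing. The sketch below is only an illustration of that client-side API against a throwaway ``LocalCluster``; it is not part of this package's build or test setup, and keyword defaults may differ between distributed releases.

  # Illustrative only -- not used anywhere in this spec file or its build.
  from dask.distributed import Client, LocalCluster

  if __name__ == "__main__":
      # Two single-threaded workers; the diagnostic dashboard is served on port 8787.
      cluster = LocalCluster(n_workers=2, threads_per_worker=1,
                             dashboard_address=":8787")
      client = Client(cluster)

      # Added in 2.0.0: block until at least two workers have connected.
      client.wait_for_workers(2)

      # concurrent.futures-style usage: submit() returns a Future, result() blocks.
      future = client.submit(sum, [1, 2, 3])
      print(future.result())  # 6

      client.close()
      cluster.close()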
#
# spec file for package python-distributed
#
# Copyright (c) 2019 SUSE LINUX GmbH, Nuernberg, Germany.
#
# All modifications and additions to the file contributed by third parties
# remain the property of their copyright owners, unless otherwise agreed
# upon. The license for this file, and modifications and additions to the
# file, is the same license as for the pristine package itself (unless the
# license for the pristine package is not an Open Source License, in which
# case the license is the MIT License). An "Open Source License" is a
# license that conforms to the Open Source Definition (Version 1.9)
# published by the Open Source Initiative.

# Please submit bugfixes or comments via https://bugs.opensuse.org/
#


%{?!python_module:%define python_module() python-%{**} python3-%{**}}
%define skip_python2 1
# Test requires network connection
%bcond_with test
Name:           python-distributed
Version:        2.1.0
Release:        0
Summary:        Library for distributed computing with Python
License:        BSD-3-Clause
Group:          Development/Languages/Python
URL:            https://distributed.readthedocs.io/en/latest/
Source:         https://files.pythonhosted.org/packages/source/d/distributed/distributed-%{version}.tar.gz
Source99:       python-distributed-rpmlintrc
BuildRequires:  %{python_module joblib >= 0.10.2}
BuildRequires:  %{python_module scikit-learn >= 0.17.1}
BuildRequires:  %{python_module setuptools}
BuildRequires:  fdupes
BuildRequires:  python-rpm-macros
Requires:       python-PyYAML
Requires:       python-certifi
Requires:       python-click >= 6.6
Requires:       python-cloudpickle >= 0.2.2
Requires:       python-dask >= 0.18.0
Requires:       python-joblib >= 0.10.2
Requires:       python-msgpack
Requires:       python-psutil
Requires:       python-scikit-learn >= 0.17.1
Requires:       python-six
Requires:       python-sortedcontainers
Requires:       python-tblib
Requires:       python-toolz >= 0.7.4
Requires:       python-tornado >= 4.5.1
Requires:       python-zict >= 0.1.3
BuildArch:      noarch
%if %{with test}
BuildRequires:  %{python_module PyYAML}
BuildRequires:  %{python_module certifi}
BuildRequires:  %{python_module click >= 6.6}
BuildRequires:  %{python_module cloudpickle >= 0.2.2}
BuildRequires:  %{python_module dask >= 0.18.0}
BuildRequires:  %{python_module msgpack}
BuildRequires:  %{python_module psutil}
BuildRequires:  %{python_module pytest}
BuildRequires:  %{python_module six}
BuildRequires:  %{python_module sortedcontainers}
BuildRequires:  %{python_module tblib}
BuildRequires:  %{python_module toolz >= 0.7.4}
BuildRequires:  %{python_module tornado >= 4.5.1}
BuildRequires:  %{python_module zict >= 0.1.3}
%endif
%python_subpackages

%description
Dask.distributed is a library for distributed computing in Python. It
extends both the concurrent.futures and dask APIs to moderate sized
clusters.

%prep
%setup -q -n distributed-%{version}

%build
%python_build

%install
%python_install
%{python_expand rm -rf %{buildroot}%{$python_sitelib}/distributed/tests/
# Deduplicating files can generate a RPMLINT warning for pyc mtime
%fdupes %{buildroot}%{$python_sitelib}
}

%if %{with test}
%check
%python_expand PYTHONPATH=%{buildroot}%{$python_sitelib} py.test-%{$python_bin_suffix} distributed/tests/
%endif

%files %{python_files}
%doc README.rst
%license LICENSE.txt
%python3_only %{_bindir}/dask-ssh
%python3_only %{_bindir}/dask-submit
%python3_only %{_bindir}/dask-remote
%python3_only %{_bindir}/dask-scheduler
%python3_only %{_bindir}/dask-worker
%{python_sitelib}/distributed*

%changelog