* Python 3.12 support
* Minimum supported Python version raised to 3.9
* Add support for ufunc attributes and reduce
* Add a config variable to enable / disable the llvmlite memory
manager
* see https://numba.readthedocs.io/en/stable/release/0.59.0-notes.html#highlights
* Fix regressions introduced in 0.57.0
- Clean up leftover Python 3.8 gubbins, look forward to Python 3.11 support.
OBS-URL: https://build.opensuse.org/package/show/devel:languages:python:numeric/python-numba?expand=0&rev=77
* Support for Python 3.11 (minimum is moved to 3.8)
* Support for NumPy 1.24 (minimum is moved to 1.21)
* Python language support enhancements:
+ Exception classes now support arguments that are not compile time
constant.
+ The built-in functions hasattr and getattr are supported for compile
time constant attributes.
+ The built-in functions str and repr are now implemented similarly to
their Python implementations. Custom __str__ and __repr__ functions
can be associated with types and work as expected.
+ Numba’s unicode functionality in str.startswith now supports kwargs
start and end.
+ min and max now support boolean types.
+ Support is added for the dict(iterable) constructor.
- Dropped patches:
* numba-pr8620-np1.24.patch
* update-tbb-backend-calls-2021.6.patch
- Rebased existing patch.
OBS-URL: https://build.opensuse.org/package/show/devel:languages:python:numeric/python-numba?expand=0&rev=67
This release continues to add new features, bug fixes and stability
improvements to Numba. Please note that this will be the last release that
has support for Python 3.7 as the next release series (Numba 0.57) will
support Python 3.11! Also note that this will be the last release to support
linux-32 packages produced by the Numba team.
- Remove fix-max-name-size.patch, it's included in the new version.
- Add update-tbb-backend-calls-2021.6.patch to make it compatible with the
latest tbb-devel version.
- Add fix-cli-test.patch to disable one test that fails with OBS.
OBS-URL: https://build.opensuse.org/package/show/devel:languages:python:numeric/python-numba?expand=0&rev=55
- Update to 0.55.1
* This is a bugfix release that closes all the remaining issues
from the accelerated release of 0.55.0 and also any release
critical regressions discovered since then.
* CUDA target deprecation notices:
- Support for CUDA toolkits < 10.2 is deprecated and will be
removed in Numba 0.56.
- Support for devices with Compute Capability < 5.3 is
deprecated and will be removed in Numba 0.56.
- Drop numba-pr7748-random32bitwidth.patch
- Explicitly declare supported platforms (avoid failing tests on
ppc64)
OBS-URL: https://build.opensuse.org/request/show/949971
OBS-URL: https://build.opensuse.org/package/show/devel:languages:python:numeric/python-numba?expand=0&rev=52
- Update to 0.54.1
* This is a bugfix release for 0.54.0. It fixes a regression in
structured array type handling, a potential leak on
initialization failure in the CUDA target, a regression caused
by Numba’s vendored cloudpickle module resetting dynamic
classes and a few minor testing/infrastructure related
problems.
- Release summary for 0.54.0
* This release includes a significant number of new features,
important refactoring, critical bug fixes and a number of
dependency upgrades.
* Python language support enhancements:
- Basic support for f-strings.
- dict comprehensions are now supported.
- The sum built-in function is implemented.
* NumPy features/enhancements, The following functions are now
supported:
- np.clip
- np.iscomplex
- np.iscomplexobj
- np.isneginf
- np.isposinf
- np.isreal
- np.isrealobj
- np.isscalar
- np.random.dirichlet
- np.rot90
- np.swapaxes
* Also np.argmax has gained support for the axis keyword argument
and it’s now possible to use 0d NumPy arrays as scalars in
OBS-URL: https://build.opensuse.org/request/show/932318
OBS-URL: https://build.opensuse.org/package/show/devel:languages:python:numeric/python-numba?expand=0&rev=49
- Update to 0.53.0
* Support for Python 3.9
* Function sub-typing
* Initial support for dynamic gufuncs (i.e. from @guvectorize)
* Parallel Accelerator (@njit(parallel=True)) now supports
Fortran ordered arrays
* Full release notes at
https://numba.readthedocs.io/en/0.53.0/release-notes.html
- Don't apply unpin-llvmlite.patch. llvmlite really needs to be the
correct version.
- Refresh skip-failing-tests.patch
- Add packaging-ignore-setuptools-deprecation.patch
gh#numba/numba#6837
- Add numba-pr6851-llvm-timings.patch gh#numba/numba#6851 in order
to fix 32-bit issues gh#numba/numba#6832
- Update to 0.52.0
https://numba.readthedocs.io/en/stable/release-notes.html
This release focuses on performance improvements, but also adds
some new features and contains numerous bug fixes and stability
improvements.
Highlights of core performance improvements include:
* Intel kindly sponsored research and development into producing
a new reference count pruning pass. This pass operates at the
LLVM level and can prune a number of common reference counting
patterns. This will improve performance for two primary
reasons:
- There will be less pressure on the atomic locks used to do
the reference counting.
- Removal of reference counting operations permits more
inlining and the optimisation passes can in general do more
with what is present.
(Siu Kwan Lam).
* Intel also sponsored work to improve the performance of the
numba.typed.List container, particularly in the case of
__getitem__ and iteration (Stuart Archibald).
* Superword-level parallelism vectorization is now switched on
and the optimisation pipeline has been lightly analysed and
tuned so as to be able to vectorize more code, more often
(Stuart Archibald).
Highlights of core feature changes include:
* The inspect_cfg method on the JIT dispatcher object has been
significantly enhanced and now includes highlighted output and
interleaved line markers and Python source (Stuart Archibald).
* The BSD operating system is now unofficially supported (Stuart
Archibald).
* Numerous features/functionality improvements to NumPy support,
including support for:
- np.asfarray (Guilherme Leobas)
- “subtyping” in record arrays (Lucio Fernandez-Arjona)
- np.split and np.array_split (Isaac Virshup)
- operator.contains with ndarray (@mugoh).
- np.asarray_chkfinite (Rishabh Varshney).
- NumPy 1.19 (Stuart Archibald).
- the ndarray allocators, empty, ones and zeros, accepting a
dtype specified as a string literal (Stuart Archibald).
* Booleans are now supported as literal types (Alexey Kozlov).
* On the CUDA target:
* CUDA 9.0 is now the minimum supported version (Graham Markall).
* Support for Unified Memory has been added (Max Katz).
* Kernel launch overhead is reduced (Graham Markall).
* Cudasim support for mapped array, memcopies and memset has
been added (Mike Williams).
* Access has been wired in to all libdevice functions (Graham
Markall).
* Additional CUDA atomic operations have been added (Michael
Collison).
* Additional math library functions (frexp, ldexp, isfinite)
(Zhihao Yuan).
* Support for power on complex numbers (Graham Markall).
Deprecations to note:
* There are no new deprecations. However, note that
“compatibility” mode, which was added some 40 releases ago to
help transition from 0.11 to 0.12+, has been removed! Also,
the shim to permit the import of jitclass from Numba’s top
level namespace has now been removed as per the deprecation
schedule.
- NEP 29: Skip python36 build. Python 3.6 is dropped by NumPy 1.20
OBS-URL: https://build.opensuse.org/request/show/880602
OBS-URL: https://build.opensuse.org/package/show/devel:languages:python:numeric/python-numba?expand=0&rev=47
- Update to 0.51.2
* The compilation chain is now based on LLVM 10 (Valentin Haenel).
* Numba has internally switched to prefer non-literal types over literal ones so
as to reduce function over-specialisation, this with view of speeding up
compile times (Siu Kwan Lam).
* On the CUDA target: Support for CUDA Toolkit 11, Ampere, and Compute
Capability 8.0; Printing of ``SASS`` code for kernels; Callbacks to Python
functions can be inserted into CUDA streams, and streams are async awaitable;
Atomic ``nanmin`` and ``nanmax`` functions are added; Fixes for various
miscompilations and segfaults. (mostly Graham Markall; call backs on
streams by Peter Würtz).
* Support for heterogeneous immutable lists and heterogeneous immutable string
key dictionaries. Also optional initial/construction value capturing for all
lists and dictionaries containing literal values (Stuart Archibald).
* A new pass-by-reference mutable structure extension type ``StructRef`` (Siu
Kwan Lam).
* Object mode blocks are now cacheable, with the side effect of numerous bug
fixes and performance improvements in caching. This also permits caching of
functions defined in closures (Siu Kwan Lam).
* The error handling and reporting system has been improved to reduce the size
of error messages, and also improve quality and specificity.
* The CUDA target has more stream constructors available and a new function for
compiling to PTX without linking and loading the code to a device. Further,
the macro-based system for describing CUDA threads and blocks has been
replaced with standard typing and lowering implementations, for improved
debugging and extensibility.
- Better unpin llvmlite with unpin-llvmlite.patch to avoid breakages
OBS-URL: https://build.opensuse.org/request/show/845659
OBS-URL: https://build.opensuse.org/package/show/devel:languages:python:numeric/python-numba?expand=0&rev=45
- version update to 0.49.1
* PR #5587: Fixed #5586 Threading Implementation Typos
* PR #5592: Fixes #5583 Remove references to cffi_support from docs and examples
* PR #5614: Fix invalid type in resolve for comparison expr in parfors.
* PR #5624: Fix erroneous rewrite of predicate to bit const on prune.
* PR #5627: Fixes #5623, SSA local def scan based on invalid equality
assumption.
* PR #5629: Fixes naming error in array_exprs
* PR #5630: Fix #5570. Incorrect race variable detection due to SSA naming.
* PR #5638: Make literal_unroll function work as a freevar.
* PR #5648: Unset the memory manager after EMM Plugin tests
* PR #5651: Fix some SSA issues
* PR #5652: Pin to sphinx=2.4.4 to avoid problem with C declaration
* PR #5658: Fix unifying undefined first class function types issue
* PR #5669: Update example in 5m guide WRT SSA type stability.
* PR #5676: Restore ``numba.types`` as public API
OBS-URL: https://build.opensuse.org/request/show/809217
OBS-URL: https://build.opensuse.org/package/show/devel:languages:python:numeric/python-numba?expand=0&rev=43
- Update to 0.49.0
* Removal of all Python 2 related code and also updating the minimum supported
Python version to 3.6, the minimum supported NumPy version to 1.15 and the
minimum supported SciPy version to 1.0. (Stuart Archibald).
* Refactoring of the Numba code base. The code is now organised into submodules
by functionality. This cleans up Numba's top level namespace.
(Stuart Archibald).
* Introduction of an ``ir.Del`` free static single assignment form for Numba's
intermediate representation (Siu Kwan Lam and Stuart Archibald).
* An OpenMP-like thread masking API has been added for use with code using the
parallel CPU backends (Aaron Meurer and Stuart Archibald).
* For the CUDA target, all kernel launches now require a configuration,
preventing accidental launches of kernels with the old default of a single
thread in a single block. The hard-coded autotuner is also now removed, such
tuning is deferred to CUDA API calls that provide the same functionality
(Graham Markall).
* The CUDA target also gained an External Memory Management plugin interface to
allow Numba to use another CUDA-aware library for all memory allocations and
deallocations (Graham Markall).
* The Numba Typed List container gained support for construction from iterables
(Valentin Haenel).
* Experimental support was added for first-class function types
(Pearu Peterson).
- Refreshed patch skip-failing-tests.patch
* the troublesome tests are skipped upstream on 32-bit
- Unpin llvmlite
OBS-URL: https://build.opensuse.org/request/show/798175
OBS-URL: https://build.opensuse.org/package/show/devel:languages:python:numeric/python-numba?expand=0&rev=41
- Update to Version 0.40.1
* PR #3338: Accidentally left Anton off contributor list for 0.40.0
* PR #3374: Disable OpenMP in wheel building
* PR #3376: Update 0.40.1 changelog and docs on OpenMP backend
- Update to Version 0.40.0
+ This release adds a number of major features:
* A new GPU backend: kernels for AMD GPUs can now be compiled using the ROCm
driver on Linux.
* The thread pool implementation used by Numba for automatic multithreading
is configurable to use TBB, OpenMP, or the old "workqueue" implementation.
(TBB is likely to become the preferred default in a future release.)
* New documentation on thread and fork-safety with Numba, along with overall
improvements in thread-safety.
* Experimental support for executing a block of code inside a nopython mode
function in object mode.
* Parallel loops now allow arrays as reduction variables
* CUDA improvements: FMA, faster float64 atomics on supporting hardware,
records in const memory, and improved datetime dtype support
* More NumPy functions: vander, tri, triu, tril, fill_diagonal
+ General Enhancements:
* PR #3017: Add facility to support with-contexts
* PR #3033: Add support for multidimensional CFFI arrays
* PR #3122: Add inliner to object mode pipeline
* PR #3127: Support for reductions on arrays.
* PR #3145: Support for np.fill_diagonal
* PR #3151: Keep a queue of references to last N deserialized functions. Fixes #3026
* PR #3154: Support use of list() if typeable.
* PR #3166: Objmode with-block
* PR #3179: Updates for llvmlite 0.25
* PR #3181: Support function extension in alias analysis
* PR #3189: Support literal constants in typing of object methods
* PR #3190: Support passing closures as literal values in typing
* PR #3199: Support inferring stencil index as constant in simple unary expressions
* PR #3202: Threading layer backend refactor/rewrite/reinvention!
* PR #3209: Support for np.tri, np.tril and np.triu
* PR #3211: Handle unpacking in building tuple (BUILD_TUPLE_UNPACK opcode)
* PR #3212: Support for np.vander
* PR #3227: Add NumPy 1.15 support
* PR #3272: Add MemInfo_data to runtime._nrt_python.c_helpers
* PR #3273: Refactor. Removing thread-local-storage based context nesting.
* PR #3278: compiler threadsafety lockdown
* PR #3291: Add CPU count and CFS restrictions info to numba -s.
+ CUDA Enhancements:
* PR #3152: Use cuda driver api to get best blocksize for best occupancy
* PR #3165: Add FMA intrinsic support
* PR #3172: Use float64 add Atomics, Where Available
* PR #3186: Support Records in CUDA Const Memory
* PR #3191: CUDA: fix log size
* PR #3198: Fix GPU datetime timedelta types usage
* PR #3221: Support datetime/timedelta scalar argument to a CUDA kernel.
* PR #3259: Add DeviceNDArray.view method to reinterpret data as a different type.
* PR #3310: Fix IPC handling of sliced cuda array.
+ ROCm Enhancements:
* PR #3023: Support for AMDGCN/ROCm.
* PR #3108: Add ROC info to `numba -s` output.
* PR #3176: Move ROC vectorize init to npyufunc
* PR #3177: Add auto_synchronize support to ROC stream
* PR #3178: Update ROC target documentation.
* PR #3294: Add compiler lock to ROC compilation path.
* PR #3280: Add wavebits property to the HSA Agent.
* PR #3281: Fix ds_permute types and add tests
+ Continuous Integration / Testing:
* PR #3091: Remove old recipes, switch to test config based on env var.
* PR #3094: Add higher ULP tolerance for products in complex space.
* PR #3096: Set exit on error in incremental scripts
* PR #3109: Add skip to test needing jinja2 if no jinja2.
* PR #3125: Skip cudasim only tests
* PR #3126: add slack, drop flowdock
* PR #3147: Improve error message for arg type unsupported during typing.
* PR #3128: Fix recipe/build for jetson tx2/ARM
* PR #3167: In build script activate env before installing.
* PR #3180: Add skip to broken test.
* PR #3216: Fix libcuda.so loading in some container setup
* PR #3224: Switch to new Gitter notification webhook URL and encrypt it
* PR #3235: Add 32bit Travis CI jobs
* PR #3257: This adds scipy/ipython back into windows conda test phase.
+ Fixes:
* PR #3038: Fix random integer generation to match results from NumPy.
* PR #3045: Fix #3027 - Numba reassigns sys.stdout
* PR #3059: Handler for known LoweringErrors.
* PR #3060: Adjust attribute error for NumPy functions.
* PR #3067: Abort simulator threads on exception in thread block.
* PR #3079: Implement +/-(types.boolean) Fix #2624
* PR #3080: Compute np.var and np.std correctly for complex types.
* PR #3088: Fix #3066 (array.dtype.type in prange)
* PR #3089: Fix invalid ParallelAccelerator hoisting issue.
* PR #3136: Fix #3135 (lowering error)
* PR #3137: Fix for issue #3103 (race condition detection)
* PR #3142: Fix Issue #3139 (parfors reuse of reduction variable across prange blocks)
* PR #3148: Remove dead array equal @infer code
* PR #3153: Fix canonicalize_array_math typing for calls with kw args
* PR #3156: Fixes issue with missing pygments in testing and adds guards.
* PR #3168: Py37 bytes output fix.
* PR #3171: Fix #3146. Fix CFUNCTYPE void* return-type handling
* PR #3193: Fix setitem/getitem resolvers
* PR #3222: Fix #3214. Mishandling of POP_BLOCK in while True loop.
* PR #3230: Fixes liveness analysis issue in looplifting
* PR #3233: Fix return type difference for 32bit ctypes.c_void_p
* PR #3234: Fix types and layout for `np.where`.
* PR #3237: Fix DeprecationWarning about imp module
* PR #3241: Fix #3225. Normalize 0d array to scalar in typing of indexing code.
* PR #3256: Fix #3251: Move imports of ABCs to collections.abc for Python >= 3.3
* PR #3292: Fix issue #3279.
* PR #3302: Fix error due to mismatching dtype
+ Documentation Updates:
* PR #3104: Workaround for #3098 (test_optional_unpack Heisenbug)
* PR #3132: Adds an ~5 minute guide to Numba.
* PR #3194: Fix docs RE: np.random generator fork/thread safety
* PR #3242: Page with Numba talks and tutorial links
* PR #3258: Allow users to choose the type of issue they are reporting.
* PR #3260: Fixed broken link
* PR #3266: Fix cuda pointer ownership problem with user/externally allocated pointer
* PR #3269: Tweak typography with CSS
* PR #3270: Update FAQ for functions passed as arguments
* PR #3274: Update installation instructions
* PR #3275: Note pyobject and voidptr are types in docs
* PR #3288: Do not need to call parallel optimizations "experimental" anymore
* PR #3318: Tweak spacing to avoid search box wrapping onto second line
- Remove upstream-included numba-0.39.0-fix-3135.patch
OBS-URL: https://build.opensuse.org/request/show/644953
OBS-URL: https://build.opensuse.org/package/show/devel:languages:python:numeric/python-numba?expand=0&rev=5